Dataset fields: id, title, abstract, authors, published_date, link, markdown
2304.04411
Quantum Cyber-Attack on Blockchain-based VANET
Blockchain-based Vehicular Ad-hoc Network (VANET) is widely considered a secure communication architecture for a connected transportation system. With the advent of quantum computing, there are concerns regarding the vulnerability of this architecture against cyber-attacks. In this study, a potential threat is investigated in a blockchain-based VANET, and a corresponding quantum cyber-attack is developed. Specifically, a quantum impersonation attack using Shor's algorithm is developed to break the Rivest-Shamir-Adleman (RSA) encrypted digital signatures of VANET and thus create a threat for the trust-based blockchain scheme of VANET. A blockchain-based VANET, vehicle-to-everything (V2X) communication, and vehicular mobility are simulated using OMNET++, the extended INET library, and vehicles-in-network simulation (VEINS) along with simulation of urban mobility (SUMO), respectively. A small-key RSA-based message encryption is implemented using IBM Qiskit, which is an open-source quantum software development kit. The findings reveal that the quantum cyber-attack, i.e., an impersonation attack, is able to successfully break the trust chain of a blockchain-based VANET. This highlights the need for a quantum-secured blockchain.
Kazi Hassan Shakib, Mizanur Rahman, Mhafuzul Islam
2023-04-10T06:46:33Z
http://arxiv.org/abs/2304.04411v1
# Quantum Cyber-Attack on Blockchain-based VANET ###### Abstract Blockchain-based Vehicular Ad-hoc Network (VANET) is widely considered as secure communication architecture for a connected transportation system. With the advent of quantum computing, there are concerns regarding the vulnerability of this architecture against cyber-attacks. In this study, a potential threat is investigated in a blockchain-based VANET, and a corresponding quantum cyber-attack is developed. Specifically, a quantum impersonation attack using Shor's algorithm is developed to break the Rivest-Shamir-Adleman (RSA) encrypted digital signatures of VANET and thus create a threat for the trust-based blockchain scheme of VANET. A blockchain-based VANET, vehicle-to-everything (V2X) communication, and vehicular mobility are simulated using OMNET++, the extended INET library, and vehicles-in-network simulation (VEINS) along with simulation of urban mobility (SUMO), respectively. A small key RSA based message encryption is implemented using IBM Qiskit, which is an open-source quantum software development kit. The findings reveal that the quantum cyber-attack--i.e., impersonation attack--is able to successfully break the trust chain of a blockchain-based VANET. This highlights the need for a quantum secured blockchain. Quantum Computing, Blockchain, VANET, Cybersecurity, Cyber-attack. ## I Introduction Recently, blockchain-based Vehicular Ad-hoc Network (VANET) architecture has been gaining popularity due to its distributed and decentralized architecture [1], efficient data transmission capability, and secure data generation and broadcasting ability over VANET networks [2]. Rating-based or trust-value-based blockchain networks can efficiently play a trusted role by setting up the proof-of-work (PoW) or proof-of-stake (PoS) consensus mechanisms [2]. Such a trust management system could ensure privacy-protected [1] and secured vehicle-to-everything (V2X) communication because of its ability to ensure the veracity of the exchanged messages via a digital signature of a message sender (e.g., vehicle). For example, RSA (Rivest-Shamir-Adleman) is a public-private asymmetric key encryption, which is widely used as an encryption technique in blockchain-based VANET architecture to encrypt messages. Due to the high mobility of vehicles, small key based encryption is popular in VANET [3]. Moreover, a small key based encryption needs less complex computational operations and storage and hence is highly recommended for blockchain security [4][5]. Existing studies prove that a non-quantum computing-based or classical attack [6] cannot generate such a blockchain-based VANET attack because blockchain can identify the attacker through consensus-based or rating-based mechanisms, hashing, encryption, and its distributed nature with transparency in the public ledger-based approach [2]. However, recently, several survey studies pointed out that quantum computing-based cyber-attacks could open the vulnerabilities of a blockchain-based VANET [7][8][4][9]. A classical attack model, such as a false message by any attacker, will cause a lack of trust in vehicle-to-vehicle (V2V) communication, and a blockchain-based VANET can identify such malicious vehicles (or vehicle) using a trust-based blockchain framework. However, there are cases where the trust-based framework of blockchain could become vulnerable. 
For example, a blockchain-based VANET cannot identify a malicious vehicle and corresponding false messages in an impersonation attack, as shown in Figure 1. In this scenario, a group of vehicles, with each vehicle acting as a block, creates a blockchain under a Roadside Unit (RSU), and the RSU runs a trust management unit. All RSUs between each other form (as shown between the #(N+1) to #(N+2) RSU) a trust evaluation chain, which is a chained framework to store and transmit trust information. As presented in Figure 1, a malicious vehicle (orange in color) could impersonate a non-malicious vehicle (green in color) by forging its digital signature using a quantum attack. The attacker could generate an impersonation message of a false traffic crash and broadcast the false message to surrounding vehicles, as shown in Figure 1. All vehicles within its communication range, upon receiving the message, send the false message to RSU#(N+1) as an acknowledgement (Ack) of the transaction. RSU#(N+1) then disseminates the transaction message to other vehicles for their consensus regarding the transaction. After that, if it is an authenticated and trusted message, RSU sends the false message to all other vehicles and RSUs about the crash within its communication range. In this scenario, RSU#(N+1) cannot find the original sender (orange in color) of the false traffic crash message, as the digital signature of the green vehicle has been compromised by the malicious vehicle (orange in color). So, the trust mechanism fails to detect the original attacker. Furthermore, if the attacker vehicle receives the "Miner" (block creation responsibility) capability as a trusted vehicle, it also adds another malicious vehicle to support the false message in the blockchain. This scenario shows a vulnerability of a blockchain based VANET using a quantum computing based attack model. The blockchain-based architecture relies on two cryptographic mechanisms to provide security and trust [6]: (i) check the integrity of the data itself using hash functions, and (ii) check the ownership of the data with asymmetric cryptography. If a quantum algorithm can break the cryptographic algorithm, it can create security concerns for any secure communication architectures, such as blockchain, as it uses an encryption technique, i.e., RSA [7], elliptic curve digital signature algorithm (ECDSA)[3][8]. In this paper, we developed a quantum cyber-attack framework exploiting the vulnerability of the cryptographic ownership mechanism. The objectives of this study are to: 1. identify the existing vulnerabilities of secured blockchain-based VANET, 2. develop a quantum impersonation attack model to break the trust-based architecture of a blockchain-based VANET, and 3. develop a proof-of-concept for quantum advantage (using quantum Shor's algorithm) against RSA. This proof-of-concept will demonstrate a potential threat of quantum attacks on the public-private key based cryptography in VANET architecture. The quantum attack model in this study is developed based on Shor's algorithm [8], which has an exponential speedup in solving prime number factorization. Findings from this study will help to identify existing vulnerabilities of blockchain based VANET architecture and provide insights into potential threats that could arise due to quantum computing advancement. ## II Related Work For non-quantum-based attacks, prime factorization, which is a major part of cryptography used in ownership mechanism, cannot be broken in polynomial time [9]. 
As modular exponentiation (one of the necessary steps in prime factorization) cannot be executed in polynomial time by classical computing algorithms, it takes exponential time. However, such an approach is possible in a quantum-based attack scenario. In quantum computing, using the superposition property of qubits, complex calculations can be solved. For example, it is possible to do factorization in polynomial time with quantum computing, where classical computer needs exponential time to perform prime factorization [9]. Due to the nature of finding non-trivial factors in every iteration, Shor's algorithm guesses and is more likely to find factors in polynomial time [9]. Quantum attacks can be formulated using two different algorithms: (i) Shor's algorithm [9] because of its ability of factorization; and (ii) Grover's algorithm due to its searching capability [10]. Shor's algorithm provides threat on asymmetric cryptography through prime factorization and discrete logarithm problem solving capability, which covers the basis for a wide range of cryptographic techniques. On the other hand, Grover's algorithm is applicable for symmetric cryptography through inversing the hash functions [11]. Grover's algorithm is more appropriate for PoW-based scheme. Authors in [6] presented two primary applications of Grover's algorithm. Firstly, the algorithm can enable quadratic speedup for solving hash functions and creating attacks [12]. The attacker, who controls more than half of computing power of the network as miner, can monopolize the data and even rewrite the blocks by creating forks as well. Secondly, the attacker can search for hash collisions [12][13] to replace certain parts of blocks without breaking the chain. Moreover, the attacker can create a more reliable chain by keeping the hash of the block the same, as the pointer of the previous block will point to the same block. Fig 1: An example attack scenario for a Blockchain-based VANET. In the study[14], authors describe a transaction hijacking scenario in cryptocurrency platforms using the public key, which is published to the network or revealed from transactions sitting in the memory pools. They present an attack model that uses the Shor's algorithm on a known public key to decrypt the private key. Then, using that decrypted private key creates a conflicting transaction spending the same value. They create a commit-delay-reveal architecture to safeguard the reveal of public key encryption. Though such a scheme can fail in a broadcasting scenario, such a delay will cause lack of trust. Another limitation of the study was the failure to provide a proof-of-concept. A few studies have been conducted on improving the ownership mechanism of blockchain and making it quantum-safe through post-quantum cryptography [15] and quantum key distribution (QKD) [16]. However, post-quantum solutions lack standardization, suffer from periodicity and symmetry [17], and use large-size keys, which increase the complexity of the decryption of the key, such as a lattice-based architecture [6]. In addition, the QKD-based technique requires dedicated quantum communication channels [18], and physically setting up such channel is not feasible in a dynamic network, such as VANET. In addition, such long-distance QKD transfer in quantum secure direct communication channels, which is known as quantum relay, could have increased quantum error and complexity. 
Moreover, quantum relays [19] are prone to quantum noise and errors as well as concurrent eavesdropping (compromised RSU in our case), which will be almost impossible to make a fault-tolerant and randomized (random qubit measurement) QKD [4] to address security issues. Therefore, the quantum-based blockchain solution, which could ensure security against quantum-based attacks, is in its early days. To the best of our knowledge, there is no quantum attack model on blockchain-based VANET in the literature that demonstrates its vulnerability. This indicates the importance of this research and the gap in the existing literature. ## III Blockchain-based VANET and Threat Model This section presents an architecture of blockchain-based VANET and identifies its vulnerabilities against a quantum-attack. ### _Blockchain-based VANET Architecture_ Blockchain-based VANET architecture has not been standardized yet. However, two different mechanisms, i.e., Proof-of-Work (PoW) and Proof-of-Stake (PoS) based consensus [1][2], are available in literature for implementing VANET architecture. Unlike PoW, which requires the computational power for solving cryptographic puzzles (i.e., solving SHA-X) [17], PoS considers all the vehicles as stakeholders of the system, and trustworthy vehicles/randomly selected vehicles to be considered as the 'Miner' criteria. Here, we briefly explain the concept of PoW based VANET and PoS based VANET as follows: 1. _PoW based VANET_: In PoW, the consensus mechanism works as when a vehicle sends a traffic crash warning message, and it goes to the "Mempool" of the RSU/miner. The mempool is where the unauthenticated messages or transactions stay as long as there are no similar reports from other vehicles regarding the message [20]. 2. _PoS based VANET_: The consensus mechanism of PoS works as it checks the traffic crash message with other vehicles'(stakeholders') consensus about the message. It has the ability to find out the malicious transaction through horizontal trust (HT) in V2V communication and vertical trust (VT) in V2I communication [21]. Algorithm-1 (see below) presents the pseudo code of a blockchain based VANET architecture. In this paper, we identified the vulnerabilities of a trust based PoS mechanism as well as indicated corresponding vulnerabilities of PoW mechanism. Note that this study focused on proving the vulnerability of PoS mechanism instead of PoW; the corresponding reasonings are presented in subsection III.B. The RSU is defined as a static node that keeps a copy of the ledger and works as an administrator of the system. RSU validates the integrity of a transaction and the corresponding messages by a consensus mechanism. Trust-chain is a trust-based blockchain architecture with asymmetric encryption to identify ownership using the digital signatures of the vehicles and their trust values, which increase with every authenticated transaction. A trust chain-based blockchain architecture verifies authenticity in two steps. Firstly, it verifies the ownership of the digital signature that is encrypted with RSA/ECDSA. Secondly, it checks the authenticity of the message. ``` Result: Blockchain secured system Initialization. 
Step 1: Build horizontal trust (HT) between vehicles
Step 2: Build vertical trust (VT) between vehicle and infrastructure
Step 3: Store transactions on the public ledger at the RSU
Step 4: Execute a consensus-based voting system for every transaction
Step 5: Generate a voting result to identify malicious nodes; follow Step 2
Step 6: If a malicious node is found: increase the trust value, track two more transactions, and block the node; else: continue checking
Step 7: Assign miner responsibility, add a new block, and increase the trust value
Step 8: Pass the blocklist to other RSUs
```
**Algorithm 1** Blockchain-based VANET architecture. To generate a block, multiple entities called miners with high trust values (which validate transactions and introduce new vehicles) verify the block transaction against a defined rule. As a final step, verified transactions are stored in the blockchain and disseminated to every block to maintain transparency. For example, a vehicle with false trajectories (speed difference and distance), a false digital signature, or false information about a traffic crash can be identified using the consensus mechanism. The voting is done through the horizontal trust result (trusted \(=1\), not trusted \(=0\)) created by every transaction, which is sent to the RSU (the VT step) for voting. The RSU calculates the majority vote (greater than the threshold trust value) for every transaction and classifies malicious transactions. This threshold changes with the increasing trust of the vehicles. There is a forward dissemination of packets from the RSU to all vehicles within its wireless communication coverage to let every other vehicle know about any transactions and corresponding trust values and thus maintain the ledger. A vehicle with the highest trust value (the highest stake value) or a randomly selected vehicle (if many vehicles have the same trust value) serves as a validator/miner. It will create a new block with a new vehicle and an increase in the overall trust values. This trust value offset has been calculated as described in reference [2]. The trust value can be increased with every vehicle authenticity check and message validation (adds 5 per transaction in this paper), with identifying false transactions (adds 10 per transaction), and with adding new validators/miners (20 for mining) upon writing onto the block. If a malicious vehicle tries to double spend with the same gas value (i.e., the compensation value for a transaction to be stored in a block), it can be identified too. The system blocks a malicious vehicle after two transactions. Trust values can be carried to other RSUs, and if a vehicle is blocked by any RSU, it cannot enter a new block of other RSUs. ### _Threats and Vulnerabilities of a Blockchain-based VANET_ The threats to a blockchain-based VANET system can be considered low if the trustworthiness of the system is maintained. Tampering with data and sniffing of ongoing VANET conversations (V2V and/or V2I) are considered unlikely because of trusted encryption, such as RSA, hashing of the data, and the distributed, transparent ledger. However, since quantum computing can recover the digital signature of a vehicle, which is part of each disseminated message, it can be used to create an attack that makes a trust-based system vulnerable and can have a catastrophic impact on VANET, even creating the possibility of multi-vehicle collisions and changing the routes of vehicles. Here, we explain the vulnerabilities of PoS and PoW of a blockchain-based VANET because of an impersonation attack. 1. 
_Vulnerabilities of PoS-VANET due to an impersonation attack:_ As shown in Figure 1, a general attack scenario on the PoS mechanism is created in which a malicious vehicle creates a false message, and with the consensus mechanism of blockchain, gets detected, tracked, and eventually gets blocked. Then, a digital signature from disseminated packets of RSU is used and tries to forge the digital signature of a trusted vehicle. A malicious vehicle creates an impersonation attack by broadcasting a false message of a traffic crash with forged digital signature. The system tries to block malicious vehicle identification through the digital signature with majority votes of the stakeholders. In this way, a blockchain-based VANET blocks a vehicle as a malicious vehicle. So, with quantum computers [20], if we can figure out and break the encryption, such an attack would be possible [22]. An attacker gains the trust of the system, gets miner responsibilities, and introduces new malicious vehicles. Thus, the consensus of majority voting fails to remove the malicious vehicle as it has more stakes in the system by adding more malicious vehicles and using their votes to influence the system. The attacker starts double spending the initial gas amount at the beginning and spams the VANET with more false messages. 2. _Vulnerabilities of PoW-VANET due to an impersonation attack:_ The PoW mechanism also acts on the consensus mechanism using the consensus mechanism in an unauthenticated mempool [20]. A malicious vehicle can report the same traffic crash with other vehicles' digital signatures and authenticate the message. An attacker could have 10 minutes [17] to either figure out how to crack the puzzle to act as a miner (which can be done using Quantum Grover's algorithm) [22] or the digital signature (using Quantum Shor's algorithm) of other vehicles to spread the traffic crash notification from the mempool. ## IV Quantum-based Impersonation Attack Model This section presents an architecture of Quantum Shor's algorithm that will be used to launch a quantum cyber-attack on a blockchain-based VANET. ### _Quantum Shor's Algorithm_ Quantum Shor's algorithm uses period finding (phase) in superposition states to perform modular exponentiation in polynomial time. In Algorithm-2, we have explained the quantum Shor's algorithm to break RSA. The key insight behind Shor's algorithm is to use quantum Fourier transform to efficiently find the period of a modular multiplication function. The modular multiplication function takes two integers, say \(m\) and \(x\), and computes the remainder of the product of \(m\) and \(x\) divided by some modulus, \(N\). In other words, it computes \(m^{x}\ mod\ N\). To factor a composite number \(N\) using Shor's algorithm, we first choose a random prime number \(m\) between \(1\) and \(N\)-\(1\). Then we apply the modular multiplication function repeatedly to compute \(m^{x}mod\ N\) for a series of values of \(x\) until we find two values of \(x\), say \(x_{1}\) and \(x_{2}\), such that \(\text{m}^{x}\mathbf{x}_{1}\ mod\ N=m^{x}\mathbf{x}_{2}\mod N\). The period of the modular multiplication function is then given by the difference between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), i.e., \(p=\mathbf{x}_{1}\)-\(\mathbf{x}_{2}\). Finding the period \(p\) is the key step in Shor's algorithm, and it can be done efficiently using Quantum Fourier Transform (QFT). The QFT is a quantum version of the classical Fourier transform, which is a mathematical tool for analyzing periodic functions. 
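Before the quantum implementation is described, the period structure exploited here can be illustrated classically for the small example used later in this section (\(m=7\), \(N=15\)). The sketch below is illustrative only: it finds the period by direct enumeration rather than a quantum subroutine, and it includes the GCD step that is described in the following paragraph.

```python
from math import gcd

def classical_period(m: int, N: int) -> int:
    """Return the smallest p > 0 with m**p mod N == 1 (brute force).

    This is the quantity Shor's algorithm extracts with the quantum
    Fourier transform; enumeration only works for tiny moduli like N = 15.
    """
    value = 1
    for p in range(1, N):
        value = (value * m) % N
        if value == 1:
            return p
    raise ValueError("no period found; m and N must be coprime")

if __name__ == "__main__":
    m, N = 7, 15
    assert gcd(m, N) == 1
    p = classical_period(m, N)        # 7^4 mod 15 == 1, so p = 4
    print(f"period of {m}^x mod {N}: {p}")
    # Classical post-processing step used by Shor's algorithm:
    if p % 2 == 0 and pow(m, p // 2, N) != N - 1:
        print(gcd(pow(m, p // 2) - 1, N), gcd(pow(m, p // 2) + 1, N))  # 3 5
```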
By applying the QFT on the superposition states corresponding to the outputs of the modular multiplication function, the algorithm can find the period [23]. If the period \(r\) (obtained as the denominator of the measured phase fraction) is odd, or if \(m^{r/2}\) is congruent to \(-1\ (\mathrm{mod}\ N)\), where \(N\) is the public modulus, the procedure is repeated with a different \(m\); otherwise, \(\mathrm{GCD}(m^{r/2}-1,N)\) and \(\mathrm{GCD}(m^{r/2}+1,N)\) are computed to obtain the prime factors of \(N\). Figure 2 depicts the quantum circuit implementation with 8 qubits (\(q\)) and 4 classical bits (\(c\)) for the prime factorization of 15, where \(m=7\) and \(N=15\). The four counting qubits (\(q_{0}\), \(q_{1}\), \(q_{2}\), and \(q_{3}\)) are put into an equal superposition of \(|0\rangle\) and \(|1\rangle\) using the Hadamard gate (see "H" in Figure 2). Then, the \(X\)-gate (Pauli-\(X\) gate, see "X" in Figure 2) is used to flip an auxiliary qubit from state \(|0\rangle\) to state \(|1\rangle\). After that, \(m\) is raised to the power \(2^{q}\) _modulo_ \(N\) using controlled operations to implement the modular exponentiation. These gates are appended for every counting qubit \(q\), one per iteration of the modulo operation, to form the circuit. After that, an inverse QFT is performed on the counting qubits to determine the period-finding value (state \(|s\rangle\)) from the frequency domain (superposition state \(|t\rangle\)) and appended (acting on every counting qubit \(q\)) to the circuit. Then, the counting qubits are measured, and the results are stored in the classical bits. ### _Quantum Impersonation Attack_ In this scheme, the trust chain's reliance on digital-signature and message verification to detect discrepancies is exploited. As shown below, the attack scheme in Algorithm-3 exposes the trust-based blockchain vulnerabilities using quantum Shor's algorithm. Algorithm-2 runs as a sub-process of Algorithm-3 to forge the digital signature, which is then used to broadcast a false message (e.g., a traffic crash) with the forged digital signature. The scheme also checks whether the system is PoW-based or not, as it must decide whether there is a time constraint for the attack to be successful. An attacker also attempts double spending by sending multiple transactions with the same gas value to save up its transaction limit. It also gains trust by pinning the blame on other vehicles using their forged digital signatures. As the malicious vehicle takes on the miner's responsibilities, it can introduce new malicious vehicles and help obtain a majority in voting. Our attack model depends on breaking asymmetric (public-key) cryptography. With the forged private key or digital signature, a sender (i.e., a malicious vehicle) broadcasts false messages or transactions to other trusted vehicles and the corresponding RSU within its communication range in the blockchain. These private-public key pairs are formed using prime factorization in RSA. By using quantum Shor's algorithm, a malicious vehicle can guess the prime factors and figure out the private key, thus forming the attack scheme. Our proposed attack scheme does well in finding small private keys with the existing capabilities of quantum computing. 
```
Result: Compromised digital signatures and adding new malicious nodes
Initialization.
Step 1: Take the public key from broadcast messages
Step 2: Use quantum Shor's algorithm [run Algorithm 2] to get the private key
Step 3: Forge digital signatures
Step 4: Broadcast a false message
Step 5: If time-duration-based Proof of Work: follow Step 1; else: continue
Step 6: Do double spending
Step 7: Gain trust and block another node with consensus voting
Step 8: Gain miner responsibility
Step 9: Add false nodes/vehicles to the system
```
**Algorithm 3** Attack scheme based on Quantum Computing (QC)
Fig 2: Quantum circuit implementation of modular exponentiation using IBM Qiskit.
## V Experimental Setup and Implementation In order to validate the effectiveness of the quantum attack model, performance evaluations of a blockchain-based VANET are conducted using a simulation platform, i.e., Objective Modular Network Testbed in C++ (OMNET++) [21][25]. For implementing this simulation platform, the attack model is divided into three environments: mobility, VANET, and quantum computing, as shown in Figure 3. To set up the mobility environment, we used Simulation of Urban Mobility (SUMO) [26], which interacts with VEINS [27] and contains the road network and vehicle trajectory information. Overall, we simulated the blockchain-based VANET using OMNET++, the extended INET [28] library for communication, and VEINS together with SUMO for vehicular mobility. In our simulation, we use a small-key RSA-based message encryption, implemented in Qiskit (the IBM quantum environment) [24]. Prime factorization is performed using Shor's algorithm to forge the digital signature of neighboring vehicles. Finally, we generate a false message using the forged digital signature to create an impersonation attack within the VANET. ### _Mobility and VANET Environment Setup_ The blockchain-based VANET has been tested and implemented using OMNET++ (version 5.6.2) with the help of SUMO. The VEINS (version 5.21)-SUMO integration has been done to set up the simulation of a traffic crash scenario. INET (version 4) is used to implement V2RSU (V2I) and V2V communication. We implement the blockchain using C++ in OMNET++ with five nodes, where four moving nodes act as vehicles and one static node acts as the RSU. In the omnetpp.ini file, we declare the simulation conditions, initial money, data rate, connectivity protocols, delays, number of malicious vehicles, and number of malicious transactions (which acts as the fault-tolerance limit, set to two in our simulation). In the network (.ned) file, we declare all the V2V and V2I communication protocols. In the vehicular (vehicle and RSU) files, we define the message relay, packet generation (packet length, packet type, digital signature, timestamp, hashed message), reception, acknowledgment, block creation request, verification, and routing information. The success of the trust chain depends on successfully decrypting, validating, and verifying authenticated messages and finding malicious transactions and suspected vehicles at any point in time. We generate log messages (timestamp and trust value) for every transaction within the blockchain-based VANET. The RSU aggregates the votes of every vehicle to make malicious-vehicle identification decisions as per the consensus mechanism. As illustrated in Figure 4, every vehicle's onboard unit (OBU) has three modules: mobility, VANET, and blockchain ledger. The mobility module consists of the roadway network and the vehicle's trajectory information. The VANET module enables V2V and V2I communication. The blockchain ledger module consists of records of transactions that contain a digital signature, a message, a timestamp, and transaction value details.
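To make this data flow concrete, the following minimal sketch shows how such a ledger record and its RSA-signed payload could be represented. The field names, the toy key size, and the textbook sign/verify scheme are illustrative assumptions, not the paper's C++/OMNET++ implementation.

```python
from dataclasses import dataclass
import time

# Toy small-key RSA parameters (illustrative only; the direct
# "sign = m^d mod n" textbook scheme is an assumption).
P, Q = 3, 11
N_MOD = P * Q                 # public modulus
E_PUB, D_PRIV = 3, 7          # 3 * 7 = 21 = 1 (mod phi = 20)

def sign(value: int, d: int = D_PRIV, n: int = N_MOD) -> int:
    """Textbook RSA signature over a small integer payload."""
    return pow(value, d, n)

def verify(value: int, signature: int, e: int = E_PUB, n: int = N_MOD) -> bool:
    return pow(signature, e, n) == value % n

@dataclass
class LedgerRecord:
    """One transaction as described in the text: signature, message,
    timestamp, and gas (transaction value)."""
    sender_id: str
    message: int          # e.g., an encoded crash-warning symbol 0..n-1
    timestamp: float
    gas: int
    signature: int

def make_record(sender_id: str, message: int, gas: int = 100) -> LedgerRecord:
    return LedgerRecord(sender_id, message, time.time(), gas, sign(message))

if __name__ == "__main__":
    rec = make_record("vehicle_3", message=5)
    print(verify(rec.message, rec.signature))   # True -> accepted into the ledger
```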
Each vehicle uses its blockchain ledger module for each transaction to vote on authenticity. The RSU has the VANET module, as it receives messages and disseminates transaction messages. It also contains a ledger module, which checks authenticity using majority voting and maintains a list of suspected vehicles and their corresponding block lists. We use VEINS and SUMO to generate vehicle trajectory information. A dedicated ledger module is coded in OMNET++, and INET is used for VANET communication. For each message, a sender (a vehicle or an RSU) generates a packet, encrypts it with RSA using an asymmetric key, and broadcasts it to a receiver (a vehicle or an RSU). This communication between a sender and a receiver is stored in their respective ledgers. This ledger information consists of their transaction value, which is a gas value as defined above, their digital signature, the message, and the timestamp. If both sender and receiver are vehicles, these transaction details are also sent to the RSU by the receiver, and the RSU disseminates this transaction to other vehicles to maintain consistency in the ledgers of other vehicles. Each vehicle assesses the credibility of this transaction and uploads its trust vote as either "1" or "0." Here, "1" represents a trusted transaction, and "0" represents a non-trusted transaction. The RSU then checks the transaction's digital signature and corresponding gas value for double-spending scenarios. In addition, the RSU checks the overall authenticity of the transaction and the corresponding message based on majority voting. The trustworthiness of each transaction is decided based on a trust threshold value. If trusted, it increases the trust value of the overall system; otherwise, it puts the vehicle on a suspected list. At least two transactions are needed to identify a malicious vehicle (the fault-tolerance limit), block it, and pass its identity to other RSUs. The transaction value (or gas spent in every transaction) and the trust value are calculated as presented in reference [2]. The key parameters related to the quantum attack model are provided in Table I. \begin{table} \begin{tabular}{l l} \hline **Parameters** & **Values** \\ \hline Number of Vehicles & 4 \\ Number of RSUs & 1 \\ Packet Size & 100 bytes \\ Encryption Type & RSA \\ Hash Algorithm & SHA-256 \\ Gas & 100 \\ \hline \end{tabular} \end{table} TABLE I: KEY PARAMETERS Fig. 3: Block diagram of system interaction architecture. ### _Quantum Computing Environment Setup_ An attacker receives the disseminated packets of other transactions from the RSU and uses quantum Shor's algorithm to decipher the digital signature of the sender. In this simulation experiment, the attack is created using IBM Quantum Lab [24]. As we can identify the message and the public key from RSU-disseminated packets, we guess the private key of the sender and verify the private-public key pair formulation for a pair of vehicles using Shor's algorithm. Using the Qiskit framework, we are able to determine the prime factors of a small modulus (15), which can encrypt any number between 0 and 9 as a digital signature and even any of 14 sequential letters (out of the 26 letters of the English alphabet) that can be used as messages in V2V communications. Thus, we break the small-key RSA algorithm using Shor's algorithm. Then, we generate messages with forged digital signatures from a vehicle, which is selected as the malicious vehicle in our simulation. 
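The key-recovery step just described can be sketched as follows. The quantum period-finding subroutine is emulated classically here as a stand-in for the Qiskit run, and the modulus 33 (matching the toy signing sketch above) is used instead of the paper's 15 only so that the recovered private exponent differs from the public one; everything else follows the same GCD post-processing as Algorithm 2.

```python
from math import gcd

def shor_factor_emulated(n, m=2):
    """Classical emulation of Shor's post-processing for tiny moduli.

    The period of m^x mod n is found by enumeration; in the paper it is
    obtained from the Qiskit circuit.  The GCD step is the same.
    """
    while True:
        if gcd(m, n) > 1:                 # lucky base already shares a factor
            return gcd(m, n), n // gcd(m, n)
        r, value = None, 1                # period finding (quantum stand-in)
        for k in range(1, n):
            value = (value * m) % n
            if value == 1:
                r = k
                break
        if r is not None and r % 2 == 0 and pow(m, r // 2, n) != n - 1:
            p = gcd(pow(m, r // 2) - 1, n)
            return p, n // p
        m += 1                            # retry with a different base

def recover_private_exponent(e, n):
    p, q = shor_factor_emulated(n)
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)                # modular inverse (Python >= 3.8)

if __name__ == "__main__":
    e, n = 3, 33                          # public key read from a broadcast packet
    d = recover_private_exponent(e, n)    # 7 -> attacker can now forge signatures
    forged = pow(5, d, n)                 # forged signature over message "5"
    print(d, forged, pow(forged, e, n) == 5)
```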
With such an attack, the whole trust mechanism of the blockchain becomes vulnerable, as it identifies another vehicle as malicious but cannot identify the original malicious vehicle. The trust value of the malicious vehicle increases because the blockchain-based VANET cannot identify it. After this attack, the malicious vehicle gains more trust, gets selected as the miner, and introduces more malicious vehicles into the new block. ## VI Evaluation Outcomes This section is structured into three subsections. The first subsection focuses on the evaluation of the failure of a trust-based PoS-VANET system caused by a quantum impersonation attack. The accumulated trust value was calculated by adding offsets for every triggered event and by generating event logs. In the second subsection, we evaluate the failure of the trust mechanism due to an impersonation attack by examining each vehicle's profile. Finally, the third subsection assesses the computational time required for a quantum-powered attack to meet the time limit of the consensus-based blockchain architecture of VANET. ### _Evaluation of Trust-based System Failure_ We present a trust-based VANET architecture using the simulation parameters shown in Table I that fails to detect a malicious vehicle (i.e., Vehicle #2) and instead increases its trust value. Note that Vehicle #2 impersonates another vehicle, i.e., Vehicle #3, to complete a malicious transaction. As shown in Figure 5, we observe that the accumulated trust value in the RSU increases even when a malicious transaction or an impersonation attack with false messages occurs within the VANET. The trust value increases as long as the blockchain can find some vehicle to hold responsible for the false message transaction. However, this undermines the trust-based VANET system, as it cannot decrease the trust until it detects that a transaction has been impersonated and compromised by the malicious vehicle itself. This proves the vulnerability of the trust-based system. We also observe that a new block is introduced even though a malicious vehicle that conducts malicious transactions is present in the VANET.
Fig 4: Component diagram for the implementation of quantum-based attack model.
Fig 5: Variation of accumulated trust value in RSU.
We observed (see Figure 5) that the accumulated trust value increases for an event with an increment of 10 even if a malicious transaction occurs (indicated as red circles). On the other hand, the accumulated trust value increases for an event with an increment of 20 due to mining a new block (see the green circle), whereas the trust value increases with an increment of 5 for an authenticated transaction. This shows how a trust-based architecture fails to resist an impersonation attack and assigns mining responsibilities even if a malicious transaction occurs. ### _Evaluation of Vehicular Profile of Trust Value_ In this subsection, we evaluate the trust-chain value of each vehicle at the vehicular level without considering trust offsets. Figure 6 illustrates the trust value profiles for Vehicle #2 and Vehicle #3. As Vehicle #2 executes a quantum impersonation attack on Vehicle #3, it forges the digital signature of Vehicle #3, causing the trust value of Vehicle #3 to decline for each malicious transaction. However, the trust value of Vehicle #2 increases, making it appear like a trusted node in the VANET. 
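Both the accumulated RSU value in Figure 5 and the per-vehicle profiles in Figure 6 are running sums of event offsets. A minimal sketch of that bookkeeping is shown below; the offsets are those stated in Section III, while the event stream and function names are invented purely for illustration.

```python
# Offsets as described in the text: +5 for an authenticated transaction,
# +10 when a (supposedly) malicious transaction is identified,
# +20 when a new block is mined.
OFFSETS = {"authenticated": 5, "malicious_identified": 10, "block_mined": 20}

def accumulate_trust(events, start=0):
    """Return the accumulated RSU trust value after each event."""
    total, profile = start, []
    for event in events:
        total += OFFSETS[event]
        profile.append(total)
    return profile

if __name__ == "__main__":
    events = ["authenticated", "authenticated", "malicious_identified",
              "authenticated", "block_mined"]
    print(accumulate_trust(events))   # [5, 10, 20, 25, 45]
```

The running sum never decreases, which is exactly the behavior the impersonation attack exploits.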
We observe that Vehicle #3 gets blocked and its trust value reaches zero when the number of malicious transactions attributed to it exceeds the threshold. Figure 6 shows that Vehicle #3 gets suspected of a malicious transaction by the blockchain consensus mechanism at event #17, and its trust value reduces to -1. Vehicle #2 performs this impersonation attack, but it does not get suspected because it forges the digital signature, and its trust value increases to +1. The same happens again at events #88 and #137, where Vehicle #2 pins the blame on Vehicle #3, and the trust value of Vehicle #3 declines and eventually reaches 0 as it gets blocked. Vehicle #2, despite being the malicious node, becomes the miner at event #93 through random selection. This depicts the vehicle-level evaluation of the trust mechanism and the failure of the PoS-based trust-chain architecture (as the stakeholders' consensus fails to identify the malicious vehicle). ### _Computational Time Requirement_ We assess the time required to generate the private key with a quantum computer in order to satisfy the computational time constraint for small-key RSA [15]. To model a quantum attack, it is first necessary to break RSA, decrypt the received message, and forge digital signatures. Note that the private keys change with a predefined frequency of 10 minutes for a new block as per a PoW-based blockchain [6]. Our approach has only polynomial time complexity and can identify the factors of a small key efficiently (which acts as a proof-of-concept). In our case, factoring a 4-bit number like 15 takes between 5 and 17 seconds (the experiment was repeated 30 times). We developed this quantum-based attack model with a small-key RSA (4 bits in our experiment) as a proof-of-concept due to the existing limited computational capacity to break 2048-bit RSA. Figure 7 shows that the mean factorization time is 10.4 seconds, modeled with a Poisson probability distribution over the 30 trials. In addition to this factorization time, additional computational time is required to decrypt a message and create a false message packet with a forged digital signature. OMNET++ stores the total computational time (factorization time, message decryption, and forged message creation) in a log file, as shown below. \[\text{Total Computational Time}=\text{Factorization Time}+\text{Message Decryption}+\text{Forged Message Creation}\] Overall, it takes a mean of 115 seconds, which is significantly lower than the 10 minutes available to hack into the current block. This ensures that the false message in the "mempool" gets sufficient votes within the time constraint. Thus, our quantum cyber-attack generation technique takes advantage of the current limitations of a blockchain-based VANET. ## VII Conclusion and Recommendation The presented attack model, with a quantum advantage over classical computers, has given the necessary proof-of-concept to expose the vulnerabilities of a blockchain-based VANET. Even though existing quantum computers do not have the necessary computational efficiency to break 2048-bit RSA, we conclude that the presented quantum attack on a small-key RSA (4 bits in our experiment) could act as a proof-of-concept for such an attack. In this study, a quantum impersonation attack model is developed against a state-of-the-art PoS and a time-constrained PoW mechanism, and its effectiveness is evaluated. 
Specifically, the vulnerability of the vertical trust (the vehicle-to-infrastructure trust chain) is evaluated using the aggregated trust profile at the RSU, and the vulnerability of the horizontal trust (the vehicle-to-vehicle trust chain) is evaluated using each vehicle's trust profile.
Fig 6: Vehicular trust value profiles.
Fig 7: Probability distribution of required time (in seconds) for factorization.
The evaluation reveals that the quantum cyber-attack--i.e., the impersonation attack--is able to successfully break the trust chain of a blockchain-based VANET. Thus, a blockchain-based VANET requires post-quantum cryptography solutions that do not suffer from periodicity, symmetry, or large-key problems. In future studies, quantum-based short-key solutions such as quantum digital signatures (QDS), quantum key distribution (QKD) over quantum satellites, or 1000 km long quantum communication channels could be explored. Therefore, a quantum-secured blockchain is a promising direction for attaining a secure blockchain-based VANET system. ## Acknowledgment This material is based on a study supported by the Alabama Transportation Institute (ATI). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Alabama Transportation Institute (ATI).
2304.07118
Bosonization, effective action, and R-operation in a generalized Nambu-Jona-Lasinio model
Bosonization in a theory with four-fermion interaction of Nambu-Jona-Lasinio type with additional U(N) symmetry is studied. It is demonstrated that bosonization is not uniquely determined by the interaction terms due to Fierz identities. Effective action including both fermions and composite fields is constructed. R-operation renormalization scheme is developed for the effective action. A generalization of bosonization transformation called composite fields formalism is proposed and demonstrated to be applicable to any field theory. Nambu-Jona-Lasinio model with three-fermion composite fields is studied as an example. Fierz identities for sixth-order combinations of fermions are derived.
Sergii Kutnii
2023-04-14T13:19:00Z
http://arxiv.org/abs/2304.07118v2
# Bosonization, effective action, and R-operation in a generalized Nambu-Jona-Lasinio model ###### Abstract Bosonization in a theory with four-fermion interaction of Nambu-Jona-Lasinio type with additional U(N) symmetry is studied. It is demonstrated that bosonization is not uniquely determined by the interaction terms due to Fierz identities. Effective action including both fermions and composite fields is constructed. R-operation renormalization scheme is developed for the effective action. A generalization of bosonization transformation called composite fields formalism is proposed and demonstrated to be applicable to any field theory. Nambu-Jona-Lasinio model with three-fermion composite fields is studied as an example. Fierz identities for sixth-order combinations of fermions are derived.
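For orientation, the bosonization referred to in this abstract is, in its simplest textbook form, a Hubbard-Stratonovich transformation that trades a four-fermion term for Yukawa couplings to auxiliary composite fields. The illustration below uses the minimal two-channel NJL interaction and is not the paper's U(N) construction: \[\mathcal{L}_{int}=\frac{G}{2}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}i\gamma_{5}\psi)^{2}\right]\quad\longrightarrow\quad\mathcal{L}_{bos}=-\bar{\psi}(\sigma+i\gamma_{5}\pi)\psi-\frac{\sigma^{2}+\pi^{2}}{2G}.\] Eliminating \(\sigma\) and \(\pi\) through their equations of motion, \(\sigma=-G\bar{\psi}\psi\) and \(\pi=-G\bar{\psi}i\gamma_{5}\psi\), reproduces \(\mathcal{L}_{int}\); Fierz rearrangements of the four-fermion term lead to different auxiliary-field choices for the same interaction, which is the non-uniqueness the abstract refers to.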
2305.08102
A machine learning-based viscoelastic-viscoplastic model for epoxy nanocomposites with moisture content
In this work, we propose a deep learning (DL)-based constitutive model for investigating the cyclic viscoelastic-viscoplastic-damage behavior of nanoparticle/epoxy nanocomposites with moisture content. For this, a long short-term memory network is trained using a combined framework of a sampling technique and a perturbation method. The training framework, along with the training data generated by an experimentally validated viscoelastic-viscoplastic model, enables the DL model to accurately capture the rate-dependent stress-strain relationship and consistent tangent moduli. In addition, the DL-based constitutive model is implemented into finite element analysis. Finite element simulations are performed to study the effect of load rate and moisture content on the force-displacement response of nanoparticle/ epoxy samples. Numerical examples show that the computational efficiency of the DL model depends on the loading condition and is significantly higher than the conventional constitutive model. Furthermore, comparing numerical results and experimental data demonstrates good agreement with different nanoparticle and moisture contents.
Betim Bahtiri, Behrouz Arash, Sven Scheffler, Maximilian Jux, Raimund Rolfes
2023-05-14T08:33:11Z
http://arxiv.org/abs/2305.08102v1
A machine learning-based viscoelastic-viscoplastic model for epoxy nanocomposites with moisture content ###### Abstract In this work, we propose a deep learning (DL)-based constitutive model for investigating the cyclic viscoelastic-viscoplastic-damage behavior of nanoparticle/epoxy nanocomposites with moisture content. For this, a long short-term memory network is trained using a combined framework of a sampling technique and a perturbation method. The training framework, along with the training data generated by an experimentally validated viscoelastic-viscoplastic model, enables the DL model to accurately capture the rate-dependent stress-strain relationship and consistent tangent moduli. In addition, the DL-based constitutive model is implemented into finite element analysis. Finite element simulations are performed to study the effect of load rate and moisture content on the force-displacement response of nanoparticle/epoxy samples. Numerical examples show that the computational efficiency of the DL model depends on the loading condition and is significantly higher than the conventional constitutive model. Furthermore, comparing numerical results and experimental data demonstrates good agreement with different nanoparticle and moisture contents. keywords: Nanocomposite, Deep-Learning, Recurrent Neural Network, Viscoelasticity-Viscoplasticity, Finite element + Footnote †: journal: Computer methods in applied mechanics and engineering ## 1 Introduction Innovative designs of polymer nanocomposites have led to the development of advanced materials with complex material response [1]. To accurately predict the history-dependent behavior of the materials under hygothermal conditions, more complex constitutive models with additional parameters have been developed in the literature [2; 3]. Although the proposed constitutive models are able to describe the highly nonlinear behavior of the materials, a major challenge is to reduce the error between the numerical predictions and the experimental data under complex loading conditions. In addition, since the numerical modeling of the nonlinear material behavior involves time-consuming iterative solutions, the ability of deep learning (DL)-based constitutive models to improve the computational efficiency of the numerical methods has attracted attention [4; 5; 6; 7; 8]. One of the first works in this direction is done by Ghaboussi et al. [9] presenting a neural network approach to unify the mechanical behavior of plain concrete directly from experimental data. Since the material behavior is stated to be path-dependent, they trained the neural network to predict the strain increments given the current state of stress, strain and stress increment. Due to the requirement of a comprehensive set of experiments, the model was able to predict only biaxial and uniaxial loading. This work is extended to an auto-progressive training and therefore reduction of needed experimental data [10]. Stoffel et al. [11] utilized the nonlinear stress-strain behavior of an aluminium tube under pressure to train a neural network and substitute a viscoplastic model within a finite element (FE) framework. Since the neural network applied is not able to intrinsically learn the path-dependent behavior of the viscoplastic model they included the plastic strain tensor and backstress tensor as additional inputs to predict the path-dependency. They showed that the computational time using the neural network was up to 50% lower than in the case of the FE simulation. 
Similar efforts to replace the constitutive model were presented in [12], where a gated recurrent unit structure together with an attention mechanism is incorporated to learn the history-dependent elastoplastic mechanical behavior of low-yield-point steels under cyclic loading conditions. They were able to depict the nonlinear behavior of the constitutive model and underlined the importance of the loss function used in the learning process of neural networks. Recently, long short-term memory (LSTM) networks, which belong to the class of recurrent neural networks (RNNs), have been incorporated with an FE\({}^{2}\) [13] computational homogenization technique to develop an anisotropic plasticity model for heterogeneous materials under arbitrary loading paths [14]. The authors map the strain tensor, the material parameters, and an averaged strain to the stress tensor by creating 14,000 sets of data to train the LSTM. The proposed LSTM network proves to be very effective in capturing arbitrary loading paths for the two-dimensional case. A similar approach is provided by Ghavamian et al. [15], who proposed a strategy to collect stress-strain data from micromechanical models of academic examples using a viscoplastic constitutive model and implement it into the FE\({}^{2}\) homogenization scheme. The authors utilize the automatic differentiation of RNNs to compute the consistent tangent tensor. The above-mentioned RNNs are developed by introducing recursions into the data flow and therefore learning from the previous time step [16; 17]. This allows the neural networks to learn long-term dependencies between time steps of sequence data and has been largely used in DL for classification or regression tasks [18; 19]. However, Hochreiter et al. [20] reported that RNNs suffer from fading memory and vanishing or exploding gradients, which motivated the development of LSTMs by introducing gating of the architecture [21]. Through the gating approach, the network is able to bridge minimal time lags by enforcing a constant error flow through an error carousel within the units of the LSTM. Therefore, LSTM networks control the data flow and address the fading memory and vanishing or exploding gradient problems associated with regular RNNs [22]. This makes the LSTM a powerful network to depict history-dependent and nonlinear mechanical behavior [23; 24]. Besides the increase in computational efficiency, DL models are also constructed to enhance computational mechanics in a different manner. Sadeghi and Lotfan [25] proposed an approach to implement neural networks for system identification and parameter estimation for a cantilever beam model in the presence of noise. Since a closed form for estimating the nonlinear parameters is not available, the authors introduce a neural network approach. The results are validated using cross-validation, and the corresponding outputs confirm the predictability of the approach. Koeppe et al. [26] presented "meta elements," which maintain the main idea of FE but lead to a significant model order reduction. The model predicts field variables and forces using neural networks while reducing the number of degrees of freedom. A similar approach is followed by Tandale et al. [27], who incorporate LSTM networks within the finite element framework to predict the equivalent plastic strain of the elasto-viscoplastic constitutive model. 
The authors apply a self-learning approach to make the neural network adaptable to different materials by introducing a modified loss function. Other works extend the usage of the DL models within computational mechanics by implementing physics-informed neural networks, which learn the solution of the partial differential equations by utilizing a loss function compromised of the residual error of the linear equilibrium and its boundary conditions [28; 29; 30; 31]. Although DL models have been developed for traditional materials, the development of material models for new high-performance materials, such as polymer nanocomposites, is of great significance. Recently, nanoparticle-reinforced thermosets have shown to be a promising composite material, where the low weight of epoxy resins is combined with the nanoparticles' features [1]. Among different particles, boehmite nanoparticles (BNPs) have been considered to improve the material properties of the composites, including shear strength, compressive strength, and fracture toughness [2; 32]. Furthermore, intensive research activities on constitutive models for composites have led to various models trying to reproduce the nonlinear rate-dependent mechanical behavior of the material [33; 34; 35; 36]. Among others, Boyce et al. [37] developed a finite deformation-based model for thermoplastic polyurethanes exhibiting strong hysteresis and softening. The material is composed of hard and soft segments to take into account in the constitutive model derivation. Melro et al. [38] introduced a constitutive model for continuous-fiber reinforced composites with a fixed strain rate and therefore ignoring the viscoelasticity. The authors presented a model based on a paraboloidal yield criterion to separate yield strengths under tension or compression and pressure sensitivity. Poulain et al. [39] developed a finite deformation constitutive model for epoxy resins and implemented a modified argon model [40] to predict the viscoelastic behavior depending on temperature. They calibrated the model over a wide range of temperatures and strain rates for uniaxial loading conditions. Arash et al. [2] developed a physically informed model at finite deformations by utilizing molecular dynamics simulations [41; 42] to predict material parameters and validated the model against uniaxial loading conditions. The study shows that introducing atomistic simulations can reduce the number of experiments required to calibrate the material model. They also successfully validated the model for uniaxial loading. Recently, Arash et al. [3] developed a phase-field fracture model for polymer nanocomposites and incorporated the moisture dependency in the constitutive model. Their results show good agreement with experimental data at different nanoparticle contents. Regarding the modeling of cyclic behavior, Rocha et al. [43] recently developed a model based on small strains and included the moisture dependency for an epoxy system. They were able to predict several phenomena, such as nonlinear reloading branches. Still, they could not accurately estimate the amount of plastic strain. Silberstein et al. [36] presented a viscoelastic-viscoplastic model for electrolyte membrane Nafion at ambient conditions by introducing a back stress approach leading to accurate results for the mentioned material system. Based on these investigations from the literature, some unaddressed issues still need to be addressed. 
Firstly, the models are usually calibrated against uniaxial loading conditions, which is insufficient to capture the viscoelastic behavior of polymer-based materials fully. Secondly, the constitutive models are mainly developed for neat polymeric material or composites at dry conditions by neglecting ambient conditions and the influence of moisture content in combination with particle contents. Thirdly, the models mostly fail to capture the time-dependent irreversible response fully, or the predictions must be more accurate. To address the open questions, we propose several additions to improve the performance of the constitutive model for a nanoparticle reinforced and highly crosslinked thermosetting polymer with moisture content. First, to capture the highly nonlinear hysteresis behavior, we propose a nonlinear and physically motivated viscoelastic model, which is modified to capture the strong hysteresis of the nanocomposite. Secondly, a stress softening damage approach is implemented to the constitutive model to model the quasi-irreversible sliding of the molecular chains. Finally, an additional viscoplastic dashpot, including strain-rate dependency, is added to the constitutive model to depict the time-dependent irreversible viscoplastic response. The model is derived for investigating the effect of moisture content on the stress-strain behavior of BNP/epoxy nanocomposite at finite deformation. Due to the complexity of the constitutive model, a DL model is developed to increase the computational efficiency of the numerical simulations. The present work is organized as follows. First, we present the nonlinear viscoelastic-viscoplastic damage model in Section 2, including moisture and nanoparticle volume fraction dependency at finite deformations. The integration and explanation of the multilayered LSTM network are then provided in Section 3. Here, insights are delivered into the training technique to predict the constitutive model and the tangent modulus tensor needed in the finite element framework. Next, a finite element formulation is presented in Section 4 to integrate our constitutive- and DL model. In Section 5, the proposed constitutive model is validated using numerical simulations, and the effect of moisture, BNPs volume fraction on the cyclic loading-unloading behavior of the nanocomposites is investigated. Beyond this, we underline the benefits of the DL model compared with the conventional constitutive model by highlighting the efficiency increase. The study may broadly impact the usage of DL models in computational mechanics in the sense that the proposed model can replace a complex and highly time-consuming time integration of the constitutive model yet keep the advanced predictability of the material model. ## 2 Constitutive model for nanoparticle/epoxy In this section, a viscoelastic-viscoplastic damage model for BNP/epoxy nanocomposites is proposed. The stress response is decomposed into an equilibrium part and two viscous parts to capture the nonlinear rate-dependent behavior of the materials. The effect of BNPs and moisture on the stress-strain relationship is taken into account by defining an amplification factor as a function of the nanofiller and moisture contents. Here, we also take into account the material swelling through moisture. The proposed model is an extension of previous work by Bergstrom and Hilbert [44], aimed at predicting the experimentally observed response of BNP/epoxy nanocomposites under cyclic loading. 
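Before the constitutive equations are detailed, the DL surrogate outlined above (a strain, moisture, and particle-content history mapped to a stress sequence by a stacked LSTM) can be sketched in broad strokes. The framework (Keras), layer sizes, and feature layout below are assumptions made for illustration, not the authors' architecture or training pipeline:

```python
import numpy as np
import tensorflow as tf

# Assumed feature layout per time step: 6 strain components, moisture
# content w_w, nanoparticle volume fraction v_np -> 8 inputs.
# Target per time step: 6 stress components.  All sizes are illustrative.
N_STEPS, N_IN, N_OUT = 100, 8, 6

def build_surrogate(units: int = 64) -> tf.keras.Model:
    """Stacked-LSTM constitutive surrogate returning a stress sequence."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(N_STEPS, N_IN)),
        tf.keras.layers.LSTM(units, return_sequences=True),
        tf.keras.layers.LSTM(units, return_sequences=True),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_OUT)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Random placeholder arrays standing in for the training data generated
    # by the validated viscoelastic-viscoplastic reference model.
    x = np.random.rand(32, N_STEPS, N_IN).astype("float32")
    y = np.random.rand(32, N_STEPS, N_OUT).astype("float32")
    model = build_surrogate()
    model.fit(x, y, epochs=1, batch_size=8, verbose=0)
    print(model.predict(x[:1]).shape)   # (1, 100, 6)
```

In the paper, the training set comes from the experimentally validated constitutive model and the consistent tangent moduli are obtained with a perturbation method; the random arrays above are placeholders only.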
### Kinematics
The total deformation gradient, containing the mechanical deformation, is multiplicatively split into a volumetric and deviatoric part as \[\mathbf{F}=\ J^{1/3}\ \mathbf{F}_{iso}, \tag{1}\] where \(J=\text{det}[\mathbf{F}]\) and \(\mathbf{F}_{iso}\) are the volumetric deformation and the isochoric deformation gradient, respectively. The volume deformation is further decomposed into two terms: the mechanical compressibility \(J_{m}\) and the moisture-induced swelling \(J_{w}\), leading to an overall volumetric deformation as \[J=\ J_{m}\ J_{w}, \tag{2}\] where \[J_{w}=\ 1\ +\ \alpha_{w}\ w_{w}. \tag{3}\] In the equation above, \(\alpha_{w}\) is the moisture swelling coefficient and \(w_{w}\) is the moisture content [3; 45]. Our model incorporates experimental characteristics by decomposing the material behavior into a viscoelastic and a viscoplastic part, corresponding to the time-dependent reversible and time-dependent irreversible response, respectively. We further decompose the viscoelastic stress response into a hyperelastic network and a viscous network. The hyperelastic spring, associated with the entropy change due to deformations, captures the equilibrium response, while the viscous network, composed of an elastic spring and a viscoelastic dashpot, describes the non-equilibrium behavior of the nanocomposites. Additionally, the quasi-irreversible sliding of the molecular chains, resulting in stress softening, also known as the Mullins effect [46; 47], is implemented within the constitutive model. A schematic structure of the model is presented in Fig. 1.
Figure 1: One-dimensional schematic of the viscoelastic-viscoplastic constitutive model.
The proposed constitutive model is able to capture the following main features of the material behavior: 1. nonlinear elasticity at finite deformation; 2. nonlinear viscoelastic behavior; 3. viscoplastic flow because of stress-driven chain sliding; 4. stress softening during deformation; and 5. the effect of moisture content on the stress-strain relationship. The deviatoric part of the deformation gradient is decomposed into a viscoplastic and a viscoelastic component [48]: \[\mathbf{F}_{iso}=\ \mathbf{F}_{iso}^{ve}\mathbf{F}_{iso}^{vp}. \tag{4}\] Also, the viscoelastic deformation gradient is split into an elastic and an inelastic part as \[\mathbf{F}_{iso}^{ve}=\ \mathbf{F}_{iso}^{e}\mathbf{F}_{iso}^{v}. \tag{5}\] Accordingly, similar decompositions are obtained for the left Cauchy-Green deformation tensors: \[\mathbf{B}_{iso} =\ \mathbf{F}_{iso}\ \mathbf{F}_{iso}^{T}, \tag{6}\] \[\mathbf{B}_{iso}^{v} =\ \mathbf{F}_{iso}^{v}\ \mathbf{F}_{iso}^{vT},\] (7) \[\mathbf{B}_{iso}^{e} =\ \mathbf{F}_{iso}^{e}\ \mathbf{F}_{iso}^{eT}. \tag{8}\]
### Viscoelastic-viscoplastic damage model at finite deformation
The Cauchy stress acting on the viscoelastic network is decomposed into equilibrium \(\boldsymbol{\sigma}_{eq}\), non-equilibrium \(\boldsymbol{\sigma}_{neq}\) and volumetric \(\boldsymbol{\sigma}_{vol}\) terms. 
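Before turning to the stress measures, the kinematic relations of Eqs. (1)-(8) can be summarized in the following minimal Python/NumPy sketch. The function name and argument layout are illustrative rather than the authors' implementation; the default swelling coefficient corresponds to the calibrated value reported later in Table 3.

```python
import numpy as np

def kinematic_split(F, F_vp, F_v, alpha_w=0.039, w_w=0.0):
    """Illustrative split of the deformation gradient following Eqs. (1)-(8).

    F    : total deformation gradient (3x3)
    F_vp : isochoric viscoplastic part (3x3), a state variable
    F_v  : isochoric viscous part (3x3), a state variable
    """
    J = np.linalg.det(F)                 # total volume change
    F_iso = J ** (-1.0 / 3.0) * F        # isochoric part, Eq. (1)
    J_w = 1.0 + alpha_w * w_w            # moisture-induced swelling, Eq. (3)
    J_m = J / J_w                        # mechanical compressibility, Eq. (2)
    F_ve = F_iso @ np.linalg.inv(F_vp)   # viscoelastic part, Eq. (4)
    F_e = F_ve @ np.linalg.inv(F_v)      # elastic part, Eq. (5)
    B_iso = F_iso @ F_iso.T              # Eq. (6)
    B_ve = F_ve @ F_ve.T                 # enters the equilibrium stress in Eq. (9)
    B_e = F_e @ F_e.T                    # Eq. (8)
    return J, J_m, B_iso, B_ve, B_e
```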
The equilibrium and non-equilibrium stress are given by a generalized neo-Hookean model \[\boldsymbol{\sigma}=\ J^{-1}\ \left(\mu_{eq}\ \mathbf{B}_{iso}^{ve}\ +\ \mu_{neq}\ \mathbf{B}_{iso}^{e}\ \right), \tag{9}\] and the volumetric part is defined by \[\boldsymbol{\sigma}_{vol}=\ \ \frac{1}{2}k_{v}\left(J_{m}\ -\ \frac{1}{J_{m}}\right)\ 1, \tag{10}\] resulting in an overall stress \[\boldsymbol{\sigma}_{tot}=(\boldsymbol{\sigma}\ +\ \boldsymbol{\sigma}_{vol}), \tag{11}\] which is formulated in the damaged state as follows: \[\boldsymbol{\sigma}_{tot}^{d}=(1-\mathrm{d})(\boldsymbol{\sigma}\ +\ \boldsymbol{\sigma}_{vol}), \tag{12}\] where \(\mathrm{d}\ \in\ [0,1)\) is a scalar damage variable and its evolution obeys the following rule: \[\dot{\mathrm{d}}=\mathrm{A}(1-\mathrm{d})\dot{\Lambda}_{chain}^{ max}. \tag{13}\] A is a material parameter calibrated using experimental data, \(\Lambda_{chain}\ =\ \sqrt{\mathrm{tr}[\mathbf{B}_{iso}]/3}\), \(\dot{\Lambda}_{chain}^{max}\) takes the following form: \[\dot{\Lambda}_{chain}^{max}=\begin{cases}0&\Lambda_{chain}\ <\ \Lambda_{chain}^{max}\\ \dot{\Lambda}_{chain}&\Lambda_{chain}\ \geq\ \Lambda_{chain}^{max}\end{cases} \tag{14}\] The shear moduli of the neo-Hookean stress contribution depend on the BNPs volume fraction \(v_{np}\) and moisture content \(w_{w}\) as follows: \[\mu_{eq}(v_{np},w_{w}) =\ \mathrm{X}(v_{np},w_{w})\ \mu_{eq}^{0}, \tag{15}\] \[\mu_{neq}(v_{np},w_{w}) =\ \mathrm{X}(v_{np},w_{w})\ \mu_{neq}^{0}, \tag{16}\] while the volumetric bulk modulus \(k_{v}\) is a constant. Nanoparticles play an important role in the mechanical behavior of the epoxy system. The BNPs are assumed to be rigid particles, which occupy a significant volume and serve as effective stiff fillers in the material. We apply a slightly modified Guth-Gold model [49] to obtain the effective stiffness of the nanoparticle-modified epoxy system. Although the quadratic form of the amplification factor X is adopted, we increase the gradient of the quadratic function, since the BNP reinforced epoxy is assumed to have a higher stiffness than the thermoplastic material used in [37]. Also, we add a moisture dependency on the effective stiffness of the material, resulting to the following amplification factor [3] \[\mathrm{X}=\ \left(1\ +\ 5v_{np}\ +\ 18v_{np}^{2}\right)\left(1+\ 0.057w_{w}^{2}\ -\ 9.5w_{w}\right), \tag{17}\] where \(v_{np}\) is the volume fraction of BNPs and \(w_{w}\) represents the moisture content. The total velocity gradient of the viscoelastic network, \(\mathbf{L}^{ve}=\ \mathbf{\dot{F}}^{ve}\left(\mathbf{F}^{ve}\right)^{-1}\), can be decomposed into an elastic and a viscous component analogously to Eq. (5) \[\mathbf{L}^{ve}=\ \mathbf{L}^{e}\ +\ \mathbf{F}^{e}\mathbf{L}^{e}\mathbf{F}^{e -1}\ =\ \mathbf{L}^{e}\ +\ \tilde{\mathbf{L}}^{v}, \tag{18}\] and \[\tilde{\mathbf{L}}^{v}=\ \mathbf{\dot{F}}^{v}\mathbf{F}^{v-1}\ =\ \tilde{\mathbf{D}}^{v}\ +\ \tilde{\mathbf{W}}^{v}. \tag{19}\] Here, \(\tilde{\mathbf{D}}^{v}\) represents the rate of the viscous deformation and \(\mathbf{W}^{v}\) is a skew-symmetric tensor representing the rate of stretching and spin, respectively. We make the intermediate state unique by prescribing \(\tilde{\mathbf{W}}^{v}=0\). 
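Reading Eqs. (9)-(17) literally, the damaged total Cauchy stress can be sketched as follows in Python/NumPy. The defaults are the calibrated moduli listed later in Table 3; the routine is an illustration under these assumptions, not the authors' code.

```python
import numpy as np

def amplification_factor(v_np, w_w):
    """Modified Guth-Gold amplification factor, Eq. (17)."""
    return (1.0 + 5.0 * v_np + 18.0 * v_np**2) * (1.0 + 0.057 * w_w**2 - 9.5 * w_w)

def total_cauchy_stress(B_ve, B_e, J, J_m, d, v_np, w_w,
                        mu_eq0=760.0, mu_neq0=790.0, k_v=1154.0):
    """Damaged total Cauchy stress following Eqs. (9)-(12) and (15)-(17).

    Moduli are in MPa; d is the scalar damage variable of Eq. (13).
    """
    X = amplification_factor(v_np, w_w)
    mu_eq, mu_neq = X * mu_eq0, X * mu_neq0                 # Eqs. (15)-(16)
    sigma = (mu_eq * B_ve + mu_neq * B_e) / J               # Eq. (9)
    sigma_vol = 0.5 * k_v * (J_m - 1.0 / J_m) * np.eye(3)   # Eq. (10)
    return (1.0 - d) * (sigma + sigma_vol)                  # Eqs. (11)-(12)
```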
The rate of the viscoelastic flow is constitutively described by \[\tilde{\mathbf{D}}^{v}=\ \frac{\dot{\varepsilon}^{v}}{\tau_{neq}}\ \mathrm{dev}\left[\mathbf{\sigma}^{{}^{\prime}}_{neq}\right] \tag{20}\] where \(\tau_{neq}\ =\ \parallel\mathrm{dev}[\mathbf{\sigma}_{neq}]\ \parallel_{F}\) represents the Frobenius norm of the driving stress, \(\dot{\varepsilon}^{v}\) is the viscous flow and \(\mathbf{\sigma}^{{}^{\prime}}_{neq}\ =\ \mathbf{R}^{T}_{e}\mathbf{\sigma}_{neq} \mathbf{R}_{e}\) represents the stress acting on the viscous component in its relaxed configuration. The viscous flow is defined by the Argon model: \[\dot{\varepsilon}_{v}=\ \dot{\varepsilon}_{0}\ \mathrm{exp}\ \left[\frac{\Delta\mathrm{H}}{k_{b} \mathrm{T}}\ \left(\left(\frac{\tau_{neq}}{\tau_{0}}\right)^{m}-1\right)\right], \tag{21}\] where \(k_{b}\), \(\dot{\varepsilon}_{0}\), \(\Delta\mathrm{H}\) and \(\tau_{0}\) are the Boltzmann constant, a pre-exponential factor, the activation energy and the athermal yield stress, respectively. Recent models [39, 50] have shown a better agreement with experimental data by using the exponential factor \(m\) as a material parameter, thereby reconciling the moisture or temperature dependency of the stiffness with the softening of the viscoelastic flow. We also modify the athermal yield stress of the Argon model and propose a nonlinear behavior of the athermal yield stress driven by the local chain stretch \(\Lambda_{chain}\), in contrast to the linear modification proposed by [37]. In our case, we use a sigmoid function for the athermal yield stress modification as follows: \[\tau_{0}=\ y_{0}\ +\ \frac{\mathrm{a}_{s}}{1+\ \exp\left(-\frac{\left(\dot{ \Lambda}_{chain}^{max}-\ x_{0}\right)}{\mathrm{b}_{s}}\right)}. \tag{22}\] The presented modification leads to an increase of the viscoelasticity with increasing stretch and damage. This leads to an increasing hysteresis after each cycle, which is also observed in different experimental data [43; 51]. Especially in the first few cycles, an increasing hysteresis is observed, which converges to a constant value after a certain point. Our model captures the initial exponential increase before reaching a plateau after a certain stretch. In this way, we ensure a rapid increase of the viscoelastic contribution at the beginning of the stretch while keeping the athermal yield stress at the deformed configuration upon removal of the applied stretch, i.e., the change of the athermal yield stress is assumed to be permanent. The parameters \(\mathrm{y}_{0}\), \(\mathrm{x}_{0}\), \(\mathrm{a}_{s}\) and \(\mathrm{b}_{s}\) are material parameters which are calibrated using experimental data. The implemented sigmoid function is presented in Fig. 2.
Figure 2: Proposed sigmoid function for the behavior of the athermal yield stress \(\tau_{0}\) driven by the local chain stretch \(\Lambda_{chain}\).
In summary, the time derivative \(\dot{\mathbf{F}}^{v}\) can be derived from Eq. (20) and Eq. (19) as follows: \[\dot{\mathbf{F}}^{v}=\mathbf{F}^{e-1}\dot{\varepsilon}^{v}\frac{\mathrm{dev} \left[\boldsymbol{\sigma}_{neq}^{{}^{\prime}}\right]}{\tau_{neq}}\mathbf{F}^ {ve}. \tag{23}\] Similarly, the total velocity gradient of the overall network, \(\mathbf{L}=\ \dot{\mathbf{F}}(\mathbf{F})^{-1}\), can be expanded to the following: \[\mathbf{L}=\ \mathbf{L}^{ve}\ +\ \mathbf{F}^{ve}\mathbf{L}^{vp}\mathbf{F}^{ve-1}\ =\ \mathbf{L}^{ve}\ +\ \tilde{\mathbf{L}}^{vp}. \tag{24}\] 
Again, we consider the viscoplastic velocity gradient to be additively decomposed into the symmetric rate of stretching and the skew-symmetric rate of spinning: \[\tilde{\mathbf{L}}^{vp}=\ \dot{\mathbf{F}}^{vp}\mathbf{F}^{vp-1}\ =\ \tilde{\mathbf{D}}^{vp}\ +\ \tilde{\mathbf{W}}^{vp}, \tag{25}\] and we take \(\tilde{\mathbf{W}}^{vp}=0\) again, leading to: \[\tilde{\mathbf{D}}^{vp}=\ \frac{\dot{\varepsilon}^{vp}}{\tau_{tot}}\ \mathrm{dev}\left[\mathbf{\sigma}_{tot}^{{}^{\prime}} \right], \tag{26}\] where \(\mathrm{dev}\left[\mathbf{\sigma}_{tot}^{{}^{\prime}}\right]\) is the total deviatoric stress in its relaxed configuration and \(\tau_{tot}\) is the Frobenius norm of the total stress. To characterize the viscoplastic flow \(\dot{\varepsilon}^{vp}\), we implement a simple phenomenological representation, similar to [52], as follows: \[\dot{\varepsilon}^{vp}=\begin{cases}0&\tau_{tot}<\ \sigma_{0}\\ \mathrm{a}(\epsilon\ -\ \epsilon_{0})^{\mathrm{b}}\dot{\epsilon}&\tau_{tot}\ \geq\ \sigma_{0}\end{cases}, \tag{27}\] where a, b and \(\sigma_{0}\) are material parameters, \(\epsilon\) is the effective strain represented by the Frobenius norm \(\parallel\mathbf{E}\parallel_{F}\) of the Green strain tensor, and \(\epsilon_{0}\) is the strain at which the viscoplastic flow is activated. The Green strain tensor is derived from the deformation gradient: \[\mathbf{E}=\ \frac{1}{2}\ (\mathbf{F}^{T}\mathbf{F}-\mathbf{I}), \tag{28}\] and \(\dot{\epsilon}\) is the strain rate of the effective strain \(\parallel\mathbf{E}\parallel_{F}\), thus introducing a simple strain-rate dependency of the viscoplastic flow. Analogous to Eq. (23), the time derivative of the viscoplastic deformation gradient is given by \[\dot{\mathbf{F}}^{vp}=\mathbf{F}^{ve-1}\dot{\varepsilon}^{vp}\frac{\mathrm{ dev}\left[\mathbf{\sigma}_{tot}^{{}^{\prime}}\right]}{\tau_{tot}}\mathbf{F}_{iso}, \tag{29}\] characterizing the rate kinematics of the viscoplastic flow. We obtain the viscous and viscoplastic deformation gradients at the end of a time increment using the Euler backward time integration. The step-by-step procedure is presented in Section 4.
## 3 Deep learning-based constitutive model
To replace the complex viscoelastic-viscoplastic model with a deep learning-based model, the following framework is implemented: 1. Calibrate and validate the nonlinear viscoelastic-viscoplastic damage model using experimental data. 2. Use the following nonlinear mapping of the input sequence to the corresponding output sequence: \[\begin{bmatrix}\mathbf{\vec{B}}\\ \Delta t\\ w_{w}\\ v_{np}\end{bmatrix}\ \rightarrow\ \left[\vec{\boldsymbol{\sigma}}_{tot}\right],\] (30) where \(\mathbf{\vec{B}}\) represents the upper triangular components of the overall left Cauchy-Green deformation tensor, \(\Delta t\) is the timestep, and \(\vec{\boldsymbol{\sigma}}_{tot}\) is the stress vector of the undamaged material, again containing the upper triangular components. This ensures fewer inputs and outputs while capturing two physics-informed principles: local balance of angular momentum and preservation of the stress-free undeformed configuration [53]. The proposed mapping scheme leads to 9 input values mapped to 6 output values. We include \(\Delta t\), \(w_{w}\), and \(v_{np}\) to ensure that the effect of rate, moisture, and BNP content on the nanocomposites' behavior is captured. Here, the stress tensor of the undamaged material is calculated after the mapping scheme to obtain more flexibility regarding the damage model. 3. 
Generate specific data by adapting a perturbation method to capture any complex loading condition and accurately calculate an approximation of the tangent modulus tensor \(\hat{\mathbb{C}}\). 4. Train and validate an LSTM-enhanced deep network to learn the constitutive model. 5. Implement the proposed DL model in the FE analysis. Figure 3: Proposed scheme for the finite element implementation of the intelligent constitutive model. Starting from the constitutive model, a DL model is developed and integrated into the finite element analysis. In the following subsections, the architecture of the DL model and the training framework are presented. ### Long-short term memory network In deep learning, dense feed-forward neural networks are the basic building block of deep networks and can represent the nonlinear mapping of Eq. (30) from inputs \(\mathbf{x}\) to predictions \(\mathbf{t}\) with a number of consecutive layers L and trainable parameters \(\mathbf{\omega}\) as \[\mathcal{F}_{nn}(\mathbf{\omega}):\ \mathbf{x}\ \rightarrow\ \mathbf{t}, \tag{31}\] where the trainable parameters \(\mathbf{\omega}\) include weights \(\mathbf{W}\) and biases \(\mathbf{b}\) of each Layer. For each layer, a linear transformation of the inputs \(\mathbf{x}\) is applied before a non-linear activation is enforced to obtain the activation of each layer \(\mathbf{a}^{l}\). This can be expressed as \[\mathbf{a}^{l}=\ \Phi^{l}\left(\mathbf{W}^{l}\ \cdot\ \mathbf{a}^{l\ -\ 1}\ +\ \mathbf{b}^{l}\right),\ l\ =\ 1,2\...\ L. \tag{32}\] Here, \(\Phi\) is the nonlinear activation function and the first activation \(\mathbf{a}^{0}\) corresponds to the input vector while the last activation \(\mathbf{a}^{L}\) represents the output of the deep network. While feed-forward deep networks have been proven to yield good results for the non-linear behavior of constitutive models [54; 55; 11; 56], they are not able to predict sequential data \(\mathbf{x}^{t}\) that may be sorted according to real-time and collected into a larger sequence \(\mathbf{x}=[\mathbf{x}^{1}...\mathbf{x}^{T}]\). Hence, we implement a multilayered LSTM deep network consisting of several memory cells and gates for remembering and forgetting information in each sequence by minimizing a loss between the target \(\mathbf{t}\) and the prediction \(\mathbf{t}^{*}\). The input sequence in our case yields \(\mathbf{x}\ =\ ([\mathbf{\vec{B}},\Delta\mathrm{t},w_{w},v_{np}]^{1}\,\...\,[ \mathbf{\vec{B}},\Delta\mathrm{t},w_{w},v_{np}]^{T})\), which is mapped to a stress vector \(\mathbf{t}\ =\ ([\mathbf{\vec{\sigma}}_{tot}]^{1}\,\...\,[\mathbf{\vec{\sigma}}_{ tot}]^{T})\). The architecture of a single LSTM cell is presented in Fig. 4. In the illustration, \(\mathbf{h}\) and \(\mathbf{c}\) denote the hidden and cell states at timestep \(i-1\) and \(i\), respectively. The hidden state is an encoding of the most recent timestep and can be processed at any point to obtain meaningful data. The cell state acts as a global memory of the LSTM network over all timesteps, allowing the LSTM cell to have information on the history of each sequence. The learnable parameters \(\mathbf{\omega}\) of each component presented in Fig. 4 of the LSTM cell are the input weights \(\mathbf{W}\), the recurrent weights \(\mathbf{R}\), and the bias \(\mathbf{b}\). 
These matrices are concatenations of the input weights, the recurrent weights, and the bias of each gate as follows: \[\mathbf{W}=\begin{bmatrix}\mathbf{W}_{i}\\ \mathbf{W}_{f}\\ \mathbf{W}_{g}\\ \mathbf{W}_{o}\end{bmatrix},\ \mathbf{R}\ =\begin{bmatrix}\mathbf{R}_{i}\\ \mathbf{R}_{f}\\ \mathbf{R}_{g}\\ \mathbf{R}_{o}\end{bmatrix},\ \mathbf{b}\ =\begin{bmatrix}\mathbf{b}_{i}\\ \mathbf{b}_{f}\\ \mathbf{b}_{g}\\ \mathbf{b}_{o}\end{bmatrix}, \tag{33}\] where \(i,f,g\) and \(o\) denote the input gate, forget gate, cell candidate, and output gate, respectively, as presented in Fig. 4. The cell state \(\mathbf{c}^{i}\) at timestep \(i\) is given by \[\mathbf{c}^{i}=\ \mathbf{f}^{i}\odot\mathbf{c}^{i-1}\ +\ \mathbf{i}^{i}\odot \mathbf{g}^{i}. \tag{34}\] The operator \(\odot\) denotes the Hadamard (element-wise) product. The hidden state \(\mathbf{h}^{i}\) can be evaluated as follows: \[\mathbf{h}^{i}=\mathbf{o}^{i}\ \odot\ \tanh\left(\mathbf{c}^{i}\right), \tag{35}\] where the components of each gate are described by the following equations: \[\mathbf{i}^{i}= \ \sigma\left(\mathbf{W}_{i}\mathbf{x}^{i}\ +\ \mathbf{R}_{i} \mathbf{h}^{i-1}\ +\ \mathbf{b}_{i}\right), \tag{36}\] \[\mathbf{f}^{i}= \ \sigma\left(\mathbf{W}_{f}\mathbf{x}^{i}\ +\ \mathbf{R}_{f} \mathbf{h}^{i-1}\ +\ \mathbf{b}_{f}\right),\] (37) \[\mathbf{g}^{i}\ =\ \tanh\left(\mathbf{W}_{g}\mathbf{x}^{i}\ +\ \mathbf{R}_{g} \mathbf{h}^{i-1}\ +\ \mathbf{b}_{g}\right),\] (38) \[\mathbf{o}^{i}= \ \sigma\left(\mathbf{W}_{o}\mathbf{x}^{i}\ +\ \mathbf{R}_{o} \mathbf{h}^{i-1}\ +\ \mathbf{b}_{o}\right). \tag{39}\] In the equations above, \(\mathbf{i}^{i}\), \(\mathbf{f}^{i}\), \(\mathbf{g}^{i}\) and \(\mathbf{o}^{i}\) are respectively the components of the input gate, forget gate, cell candidate and output gate, and \(\mathbf{\sigma}\) represents the sigmoid function.
Figure 4: A single LSTM cell consisting of multiple connected layers. \(\mathbf{\sigma}\) represents the sigmoid function, \(tanh\) is the hyperbolic tangent function. The forget gate, input gate and output gate control the data flow into the cell. The candidate gate presents a possible candidate for the cell state.
The element-wise product allows each gate to control the data flow into the cell state \(\mathbf{c}^{i}\) by considering the history of the sequence. As we can see in Eq. (34), we use the previous cell state and the current cell candidate to obtain the current cell state \(\mathbf{c}^{i}\), which is then used as an input to obtain the final output vector \(\mathbf{t}^{i}\) at each timestep as follows: \[\mathbf{t}^{i}=\ \mathbf{o}^{i}\odot\tanh\left(\mathbf{c}^{i}\right). \tag{40}\] To predict the stress tensor using the described deep network, we employ two LSTM layers and connect them to a dense forward layer. Therefore, we need to save two hidden states and cell states for each LSTM cell as depicted in Fig. 5. The stacked LSTM units are responsible for processing the input vector \(\mathbf{x}^{i}\) and the state variables through the gates, thereby updating the state of the LSTM units and providing an encoded representation of the current state. This ensures that the required history path dependency is included. To transform the encoded representation of the current state from the LSTM units, we apply a dense forward layer that approximates the stress vector \(\mathbf{\sigma}^{i}_{tot}\) from the LSTM output \(\mathbf{t}^{i}\). We follow a supervised approach to train the parameters \(\mathbf{\omega}\) of our deep network. 
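A single LSTM time step following Eqs. (33)-(40) can be sketched as below; the concatenated weight layout mirrors Eq. (33), while the function and variable names are illustrative. In the actual model, two such cells are stacked and followed by a dense layer mapping the hidden state to the stress vector.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, W, R, b):
    """One LSTM time step following Eqs. (34)-(39).

    W, R, b are the concatenated input weights, recurrent weights and biases
    of the input (i), forget (f), cell-candidate (g) and output (o) gates,
    stacked row-wise as in Eq. (33); n is the number of hidden units.
    """
    n = h_prev.size
    z = W @ x + R @ h_prev + b              # joint pre-activation of all four gates
    i = sigmoid(z[0 * n:1 * n])             # input gate,      Eq. (36)
    f = sigmoid(z[1 * n:2 * n])             # forget gate,     Eq. (37)
    g = np.tanh(z[2 * n:3 * n])             # cell candidate,  Eq. (38)
    o = sigmoid(z[3 * n:4 * n])             # output gate,     Eq. (39)
    c = f * c_prev + i * g                  # cell state,      Eq. (34)
    h = o * np.tanh(c)                      # hidden state / output, Eqs. (35) and (40)
    return h, c
```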
Figure 5: Overall structure of the LSTM-enhanced intelligent constitutive model including two LSTM layers. The input layer includes our input vector \(\mathbf{x}_{i}\) at step \(i\) and the state variables \(\mathbf{h}_{1}^{i-1}\), \(\mathbf{c}_{1}^{i-1}\), \(\mathbf{h}_{2}^{i-1}\) and \(\mathbf{c}_{2}^{i-1}\) used in both LSTM layers. The output vector \(\mathbf{\sigma}_{tot}\) is predicted by the dense forward layer.
The subsection below presents a space-filling sampling approach to generate training data.
### Data generation
To capture the path-dependent behavior of our constitutive model, we presented a mapping scheme including moisture, strain rate, and BNPs volume fraction dependency. Consequently, generating the database for the supervised learning of the LSTM-enhanced deep network as presented in Fig. 5 is essential for the learning process. Different approaches are presented in the literature to generate training data [23; 57; 58; 59; 60; 53; 27]. Here, we implement a space-filling procedure to make sure our DL model can be trained sufficiently to predict the stress \(\boldsymbol{\sigma}_{tot}\) of the viscoelastic-viscoplastic constitutive model at any possible three-dimensional state. The driving force for the generation of loading paths in our finite deformation model is the deformation gradient \(\mathbf{F}\). Each component \(F_{ij}\ (i,j=1,...,3)\) leads to a different loading scenario. Therefore, we generate data using the deformation gradient in a spatiotemporal space. Firstly, we constrain the components of the deformation gradient; for the diagonal components, \[\mathrm{F}_{ij}\in[0.9,\ 1.1]\quad\text{when }i=j. \tag{41}\] In this study, the sampling process starts from an undeformed configuration in which the diagonal elements of the deformation gradient are set to 1.0, and the upper/lower triangular components are set to 0.0. This ensures a bounded spatial space within a realistic viscoelastic-viscoplastic regime for the nanocomposites. Accordingly, the DL model is expected to learn and predict the correct output accurately inside this domain. To cover the nine-dimensional spatial space with sufficient random points, quasi-random numbers are produced using the Halton sequence generation algorithm [61], leading to uniform samples within the space and a better sampling of the region. As presented in Fig. 6, the resulting points in the spatial space span the bounded domain more adequately than those of the standard pseudorandom (MATLAB) algorithm. To generate loading paths for training the DL model, we first produce the uniform data points for each of the nine components of the total deformation gradient. This allows capturing loading scenarios related to uniaxial tension or compression, triaxial loading, biaxial loading, and pure or simple shear loading. We utilize the uniform data points for each component of the deformation gradient to generate 5% of the training data. Secondly, we implement an algorithm to randomly visit the quasi-random data points in the nine-dimensional spatial space of the deformation gradient components. Therefore, loading paths to capture complex loading scenarios are created as shown, for example, in Fig. 7 for the diagonal components of the deformation gradient. We utilize the quasi-random data points in the nine-dimensional space to generate 95% of the training data. This process ensures that both simple and complex loading scenarios are included in the training data. 
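The space-filling sampling and the piecewise-linear construction of loading paths can be illustrated with the Python sketch below. The radical-inverse Halton generator and the helper names are illustrative stand-ins for the authors' MATLAB implementation, and the scaling of the samples to the bounds of Eq. (41) is left to the caller.

```python
import numpy as np

def halton(n, dim):
    """Quasi-random Halton points in [0, 1]^dim via the radical inverse."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23][:dim]
    pts = np.zeros((n, dim))
    for d, p in enumerate(primes):
        for k in range(1, n + 1):
            f, idx, val = 1.0, k, 0.0
            while idx > 0:
                f /= p
                val += f * (idx % p)
                idx //= p
            pts[k - 1, d] = val
    return pts

def loading_path(points, n_steps=200):
    """Piecewise-linear path through quasi-random deformation-gradient samples.

    `points` holds samples of the nine F-components, already scaled to the
    bounds discussed around Eq. (41); the path starts from F = I.
    """
    F0 = np.eye(3).reshape(1, 9)
    waypoints = np.vstack([F0, points])
    segments = [np.linspace(waypoints[i], waypoints[i + 1], n_steps)
                for i in range(len(waypoints) - 1)]
    return np.vstack(segments).reshape(-1, 3, 3)
```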
The created sequence then serves as an input to integrate the constitutive model using the Euler backward algorithm and create the training sequence as presented in Eq. (30). Figure 6: Example of the generated data points in a two dimensional space using the Halton sequence algorithm (a) and the pseudorandom MATLAB algorithm (b). Figure 7: Example of three generated loading paths in the uniformly distributed space for the diagonal components \(\mathrm{F}_{ii}\) of the deformation gradient. All paths start from the undeformed configuration (i.e., \(\mathbf{F}=\mathbf{I}\)). The loading paths are created using different time and deformation increments within \(\Delta\text{F}\in\left[10^{-6},10^{-4}\right]\) and \(\Delta\text{t}\in\ \left[0.05,\ 5\right]\)s. This ensures a realistic time and deformation step within FE simulations and allows us to constrain the strain rate within \(\dot{\varepsilon}\in\ \left[10^{-5},10^{-3}\right]1/\text{s}\). Table 1 summarizes the step-by-step algorithm for generating training data in the nine-dimensional spatiotemporal space. This process is repeated for three different points to be visited P: \(\text{P}=1,\text{P}=3\) and \(\text{P}=6\). P represents the number of points visited within a loading path in the space, thus resulting in different loading-unloading scenarios in tension and compression and capturing complex deformation paths. The overall generated data for the supervised learning contains \(\text{T}=52000\) sequences, including 10% for validation. For better performance of the DL model, we normalize our input data by calculating the per-feature mean and standard deviation of all the sequences. Then, we subtract the mean value and divide each training observation by the standard deviation. The training is done on two Tesla V100 GPUs, with each 20 CPU cores. For FE analysis, we need to accurately predict the stress tensor \(\boldsymbol{\sigma}_{tot}\) for the constitutive model and the perturbation method, as presented in the next section. For this, each generated loading path is modified by adding a random perturbation at the end of each sequence as presented in algorithm 1 and illustrated exemplarily in Fig. 9. Accordingly to the algorithm, the components to be perturbed are chosen randomly. Therefore, in this specific case, no perturbation associated with the component \(\text{F}_{22}\) is employed. This allows the deep network \begin{table} \begin{tabular}{l} \hline \hline 1. Generate uniform data for the nine components. \\ of the deformation gradient in a range defined by Eq.(41). \\ 2. Define P as the number of points to be visited. \\ 3. Generate loading path within the 9-dimensional space \\ as exemplary presented in Fig. 8 (for the 3-dimensional space) \\ for moisture content \(w_{w}\) and BNPs volume fraction \(v_{np}\). \\ 4. Calculate a representative strain rate as: \(\dot{\epsilon}\ =\ \parallel\mathbf{E}\parallel_{F}\ /\ \Delta\text{t}\), \\ where \(\mathbf{E}\) is presented in Eq. (28). \\ 5. If \(\dot{\epsilon}\ >\ 1\cdot 10^{-5}\ and\ \dot{\epsilon}\ <\ 1\cdot 10^{-3}\) GOTO step 6 else GOTO step 3. \\ 6. Add a perturbation step to the generated loading path according to algorithm 1. \\ 7. Integrate the constitutive model and obtain \(\boldsymbol{\sigma}_{tot}\). \\ 8. Create the input sequence \(\mathbf{x}\) and the output sequence \(\boldsymbol{\sigma}_{tot}\) for training. \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the step-by-step algorithm for generation of a single training sequence. 
to randomly learn a sudden change of each deformation gradient component by a specific perturbation step, which is also the case for the perturbation method. It is noteworthy that the tangent modulus \(\hat{\mathbb{C}}\) can also be computed from the automatic differentiation of the DL model [15; 62; 63]. However, due to the scarcity of automatic differentiation libraries in finite element software packages and the higher computational efficiency [23], we propose a data generation scheme to predict the tangent modulus accurately using the perturbation method.
```
1:Input: Total loading path: F with length n
2:Output: Perturbed last step included in F
3: Perturbation parameter: \(\alpha=1\cdot 10^{-4}\)
4:GOTO last step of loading path: F\({}_{ij}\)(n)
5:Choose random components \(i_{p}\) and \(j_{p}\) of F\({}_{ij}\)(n) to be perturbed
6:for i = 1:3 do
7:for j = 1:3 do
8:if \(i==i_{p}\) and \(j==j_{p}\) then
9: Perturb component: \(\tilde{\text{F}}_{ij}=\text{F}_{ij}\)(n) + \(\alpha\)
10: Add component to F: F\({}_{ij}\)(n + 1) = \(\tilde{\text{F}}_{ij}\)
11:else \(\triangleright\) Needed to keep sequence length constant
12: Add unchanged component to F: F\({}_{ij}\)(n + 1) = F\({}_{ij}\)(n)
13:endif
14:endfor
15:endfor
```
**Algorithm 1** Perturbation of the loading path in order to learn the approximation of \(\hat{\mathbb{C}}\)
With the technique used, a training framework to obtain data-driven surrogate models at finite deformation can be developed. The training data is able to inherently capture four basic concepts of our constitutive model: * Viscoelasticity and viscoplasticity in a three-dimensional space. * Strain-rate dependency within the range of \(1\cdot 10^{-5}\) and \(1\cdot 10^{-3}\) 1/s. * Moisture and BNPs volume fraction dependency. * Approximation of the tangent modulus tensor using the perturbation method.
Figure 8: The created loading paths for the diagonal terms of the deformation gradient generated from the visited points as displayed in Fig. 7. Each one represents a unique sequence considered in the training data.
Figure 9: Exemplary illustration of algorithm 1 applied to the loading path presented in Fig. 7(a).
### Hyperparameters
The hyperparameters of the deep network include the number of LSTM cell layers, the activation function for the output layer, the batch size, the number of units in an LSTM layer, and the number of epochs for training. The architecture of the deep network consists of two layers of LSTM units connected to a dense forward layer. We employ the sigmoid activation function for the output layer and train for 300 epochs to reach a final training state. In the next section, we present a finite element formulation implemented to solve the linear equilibrium equation.
## 4 Finite-element formulation
The Euler-Lagrange equations for the strong form of the boundary value problem in referential form can be written as \[\nabla_{x}\cdot\mathbf{P}\ +\ \mathrm{B} =\ 0\ \mathrm{in}\ \Omega, \tag{42}\] \[\mathbf{P}\ \cdot\ \mathrm{N} =\ \overline{\mathrm{T}}\ \mathrm{on}\ \Gamma_{T},\] (43) \[\mathrm{u} =\mathrm{u}_{d}\ \mathrm{on}\ \Gamma_{d}, \tag{44}\] where \(\mathbf{P}\) is the first Piola-Kirchhoff stress, \(\mathrm{B}\) is the vector of body forces in referential form on the body \(\Omega\), \(\mathrm{N}\) is the outward unit normal vector on the boundary \(\Gamma_{\mathrm{T}}\), \(\overline{\mathrm{T}}\) is the traction force and \(\mathrm{u}_{d}\) represents the prescribed displacements at the boundary \(\Gamma_{d}\). To obtain the weak forms of Eq. 
(42), a multiplication of the residual by a weighting function \(\eta_{u}\) and by integrating the residual over the whole domain is fulfilled. Using the Gauss divergence theorem leads to the following equations: \[\int_{\Omega_{0}}\ \mathbf{P}\ \cdot\ \nabla_{x}\eta_{u}\ d\Omega_{0}\ -\int_{\Omega_{0}}\rho_{0}\mathrm{B}\cdot\eta_{u}\ d\Omega_{0}\ -\ \int_{\Gamma_{0}}\overline{\mathrm{T}}\cdot\eta_{u}\ d \Gamma_{0}\ =0. \tag{45}\] The equation can then be expressed in terms of external and internal nodal forces as: \[\mathrm{r}^{u}=\ f^{u}_{int}\ -\ f^{u}_{ext}\ =\ 0, \tag{46}\] where \[f^{u}_{int}=\int_{\Omega_{0}}\mathbf{P}\cdot\nabla_{x}\eta_{u}\ d \Omega_{0}, \tag{47}\] and \[f^{u}_{ext}=\int_{\Omega_{0}}\rho_{0}\mathrm{B}\cdot\eta_{u}\ d \Omega_{0}\ +\int_{\Gamma_{0}}\overline{\mathrm{T}}\cdot\eta_{u}\ d\Gamma_{0}. \tag{48}\] By linearizing Eq. (46) at iteration \(i+1\) with respect to the previous iteration \(i\) and assuming dead loads: \[\mathrm{r}^{u}_{i+1}=\mathrm{r}^{u}_{i}\ +\ \Delta\mathrm{r}^{u}\ =\ 0, \tag{49}\] where \[\Delta{\rm r}^{u}={\rm D}_{u}{\rm r}_{i}^{u}\cdot\Delta{\rm u}, \tag{50}\] and \[{\rm D}_{u}{\rm r}_{i}^{u}\cdot\Delta{\rm u}=\int_{\Omega_{0}}{\rm D }_{u}{\bf P}\cdot\Delta{\rm u}\cdot\nabla_{x}\eta_{u}\ {\rm d}\Omega_{0}, \tag{51}\] which represents the directional derivative of the operator \({\rm r}^{u}\) in the direction of \(\Delta{\rm u}\). The linearization of the first Piola-Kirchhoff stress tensor leads to \({\bf P}={\bf F}{\bf S}\) and the linearization of the second Piola-Kirchhoff stress tensor can be based on the following equation: \[{\rm D}{\bf S}\cdot\Delta{\rm u}=\mathbb{C}\Delta{\bf E}, \tag{52}\] where the last term is the linearization of the Green-Lagrange strain tensor \({\bf E}\) and \(\mathbb{C}\) is the elasticity tensor which is referred to the initial configuration. The linearization of the weak form is completed as follows: \[\begin{split}{\rm D}_{u}{\rm r}_{i}^{u}\cdot\Delta{\rm u}& =\int_{\Omega_{0}}(\nabla_{x}\Delta{\rm u}\ {\bf S}+{\bf F}{\rm D}_{u}{\bf S}\Delta{\rm u})\cdot\nabla_{x}\eta_{u}\ \ {\rm d}\Omega_{0}\\ &=\int_{\Omega_{0}}(\nabla_{x}\Delta{\rm u}\ {\bf S}+{\bf F}\mathbb{C}\Delta{\bf E})\cdot\nabla_{x}\eta_{u}\ \ {\rm d}\Omega_{0}.\end{split} \tag{53}\] The linearization of the weak form in the spatial configuration can be obtained now by a \(push\ forward\) in \(\nabla_{x}^{s}\Delta{\rm u}\) of the linearization in Eq. (53) to the known current configuration, which leads to the updated Lagrange formulation: \[{\rm D}_{u}{\rm r}_{i}^{u}\cdot\Delta{\rm u}=\int_{\Omega_{t}} \left(\nabla_{x}\Delta{\rm u}\ \boldsymbol{\sigma}\cdot\nabla_{x}\eta_{u}+\nabla_{x}^{s}\eta_{u}\cdot\hat{ \mathbb{C}}\nabla_{x}^{s}\eta_{u}\Delta{\rm u}\right)\ {\rm d}\Omega_{t}, \tag{54}\] where \(\boldsymbol{\sigma}\) is now the Cauchy stress. The Eq. (54) is referred to the current configuration, which leads to the following definition of the constitutive tensor: \[\hat{\mathbb{C}}\ =\ \frac{1}{J}\mathbb{C}. \tag{55}\] For a more specific and detailed discussion, the reader is referred to [64]. Employing the Bubnov-Galerkin method, the displacement and the corresponding weight functions are discretized in each element by \[{\bf u}^{h}={\bf N}{\bf u},\ \boldsymbol{\eta}_{u}^{h}\ =\ {\bf N} \boldsymbol{\eta}_{u},\ \nabla{\bf u}^{h}\ ={\bf B}{\bf u}, \tag{56}\] where the shape function matrix \({\bf N}\) interpolates the nodal values \({\bf u}\), and \({\bf B}\) is the gradient operator for the displacements. 
Substituting the relations into the weak formulation of the governing equations yields \[\mathbf{K}_{i}\Delta\mathbf{U}_{i+1}=\mathbf{f}_{ext}-\mathbf{f}_{ int,i}, \tag{57}\] where \[\mathbf{K}_{i}=\int_{\Omega_{t}}\mathbf{B}^{T}\mathbb{\hat{C}} \mathbf{B}\ \mathrm{d}\Omega_{t}\ +\ \int_{\Omega_{t}}\mathbf{B}^{T}\boldsymbol{\sigma}\mathbf{B}\ \mathrm{d}\Omega_{t}, \tag{58}\] represents the linear and nonlinear stiffness matrices, respectively. We solve the presented linearization in Eq. (57) by using the Newton-Raphson iteration until the relative L\({}_{2}\)-norm of the residual is less than a tolerance of \(10^{-4}\). The step-by-step procedure is summarized in Table 2. As mentioned above, we need the tangent modulus tensor \(\mathbb{\hat{C}}\ =\ \dfrac{\partial\sigma}{\partial\boldsymbol{\varepsilon}}\) to integrate our material model into a finite element framework. Since deriving a closed form is not a straightforward task, we adapt the approach proposed by Sun et al. [65] to estimate the tangent moduli for the Jaumann rate of the Kirchhoff stress, \(\boldsymbol{\tau}\ =\ \mathrm{J}\boldsymbol{\sigma}\). We obtain the tangent modulus by perturbing the \((i,j)\) components of the deformation gradient \(\mathbf{F}_{ij}\). The perturbed components \((i,j)\) are chosen to be (11), (22), (33), (12), (13), and (23), leading to the following equation: \[\mathbb{C}\approx\ \dfrac{1}{\alpha}\ \left(\boldsymbol{\tau}\left( \hat{\mathbf{F}}^{ij}\right)\ -\ \boldsymbol{\tau}(\mathbf{F})\right), \tag{59}\] where \(\hat{\mathbf{F}}^{ij}=\mathbf{F}+\Delta\mathbf{F}_{ij}\) is the perturbed deformation gradient and \(\alpha\) is the perturbation parameter. The final tangent modulus tensor is then obtained using Eq. (55). The reader is referred to [65] for a detailed discussion on the numerical approximation of the tangent modulus tensor. As described above, we implement the Euler backward method to integrate the state variables of the constitutive model. The Euler backward method is an iterative scheme. Accordingly, the computational cost increases with the highly nonlinear behavior captured by our constitutive model. Also, besides integrating the state variables described in steps 5 and 6, we need to integrate our constitutive model another six times by perturbing the deformation gradient to obtain the tangent modulus tensor as shown in Eq. (59).
1. Known values at time \(t\): * Deformation gradient: \({}^{t}\mathbf{F}\), \({}^{t}\mathbf{F}_{iso}\), \({}^{t}\mathbf{F}_{iso}^{e}\) and \({}^{t}\mathbf{F}_{iso}^{ve}\). * State variables: \({}^{t}\mathbf{F}_{iso}^{v}\), \({}^{t}\mathbf{F}_{iso}^{vp}\).
2. Known values at time \(t+\Delta t\): * Deformation gradient: \({}^{t+\Delta t}\mathbf{F}\) and \({}^{t+\Delta t}\mathbf{F}_{iso}\).
3. Calculate the trial isochoric viscoelastic and elastic deformation gradients using Eq. (4) and Eq. (5).
4. Calculate \(\boldsymbol{\sigma}_{neq}\) using Eq. (9) at \(t+\Delta t\).
5. Update \({}^{t}\mathbf{F}_{iso}^{v}\) using the Euler backward method and Eq. (23) for \(\mathbf{F}_{iso}^{v}\), obtaining \(\mathbf{F}_{trial/iso}^{v}\) as follows: * Calculate the trial viscous flow rate \(\dot{\varepsilon}_{v}\) using Eq. (21). * Calculate the trial viscous stretching \(\tilde{\mathbf{D}}^{v}\) using Eq. (20). * Calculate the trial viscous state variable: \({}^{t+\Delta t}\mathbf{F}_{trial/iso}^{v}\ =\left({}^{t+\Delta t}\mathbf{F}_{trial/iso}^{e}\right)^{-1}\ {}^{t+\Delta t}\mathbf{F}_{iso}^{ve}\). * Update the state variable: \({}^{t+\Delta t}\mathbf{F}_{iso}^{v}\ =\ {}^{t}\mathbf{F}_{iso}^{v}\ +\ \Delta t\,\tilde{\mathbf{D}}^{v}\ {}^{t+\Delta t}\mathbf{F}_{trial/iso}^{v}\).
6. Update \({}^{t}\mathbf{F}_{iso}^{vp}\) using the Euler backward method and Eq. (29) for \(\mathbf{F}_{iso}^{vp}\), obtaining \(\mathbf{F}_{trial/iso}^{vp}\) as follows: * Calculate the trial viscoplastic flow rate \(\dot{\varepsilon}_{vp}\) using Eq. (27). * Calculate the trial viscoplastic stretching \(\tilde{\mathbf{D}}^{vp}\) using Eq. (26). * Calculate the trial viscoplastic state variable: \({}^{t+\Delta t}\mathbf{F}_{trial/iso}^{vp}\ =\left({}^{t+\Delta t}\mathbf{F}_{trial/iso}^{ve}\right)^{-1}\ {}^{t+\Delta t}\mathbf{F}_{iso}\). * Update the state variable: \({}^{t+\Delta t}\mathbf{F}_{iso}^{vp}\ =\ {}^{t}\mathbf{F}_{iso}^{vp}\ +\ \Delta t\,\tilde{\mathbf{D}}^{vp}\ {}^{t+\Delta t}\mathbf{F}_{trial/iso}^{vp}\).
7. If \(\parallel\ {}^{t+\Delta t}\mathbf{F}_{iso}^{v}\ -{}^{t+\Delta t}\mathbf{F}_{trial/iso}^{v}\ \parallel\) and \(\parallel\ {}^{t+\Delta t}\mathbf{F}_{iso}^{vp}\ -\ {}^{t+\Delta t}\mathbf{F}_{trial/iso}^{vp}\ \parallel\) \(<\) tolerance, then GOTO step 8, else GOTO step 3.
8. Calculate \(\boldsymbol{\sigma}_{eq}\) using Eq. (9) at \(t+\Delta t\).
9. Update the damage variable at \(t+\Delta t\) according to Eq. (13).
10. Obtain the total Cauchy stress using Eq. (9) at \(t+\Delta t\).
11. Store the state variables: \({}^{t+\Delta t}\mathbf{F}_{iso}^{v}\), \({}^{t+\Delta t}\mathbf{F}_{iso}^{vp}\).
12. Compute the tangent modulus \(\mathbb{C}\).
13. Solve the system of equations in Eq. (57) using the Newton-Raphson method.
Table 2: Summary of the step-by-step algorithm for the integration of state variables. 
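A hedged sketch of the perturbation-based tangent approximation of Eq. (59) is given below; `kirchhoff_stress` is a placeholder for a routine that performs the material update of Table 2 (steps 3-11), and the single-component perturbation follows algorithm 1 rather than the symmetrized form used by Sun et al. [65].

```python
import numpy as np

def approximate_tangent(kirchhoff_stress, F, alpha=1e-4):
    """Finite-difference approximation of the tangent moduli, cf. Eq. (59).

    `kirchhoff_stress(F)` is assumed to integrate the state variables and
    return tau = J * sigma; the six perturbed components match the text.
    """
    components = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
    tau_ref = kirchhoff_stress(F)
    J = np.linalg.det(F)
    C_hat = np.zeros((6, 3, 3))            # one 3x3 stress derivative per perturbed component
    for k, (i, j) in enumerate(components):
        F_pert = F.copy()
        F_pert[i, j] += alpha              # perturb a single component, as in algorithm 1
        # Eq. (59) for the increment, divided by J as in Eq. (55)
        C_hat[k] = (kirchhoff_stress(F_pert) - tau_ref) / (alpha * J)
    return C_hat
```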
## 5 Results and discussion
In the following section, the constitutive model is first calibrated using experimental data. Due to the chosen geometry of the specimen used within our experiments, we utilize FE analysis to obtain realistic material parameters. Next, we present the training setup, and the simulation results of the DL model are compared with those of the constitutive model. Finally, the DL model's capability in predicting the force-displacement behavior within the finite element framework is evaluated, and its computational efficiency is compared with the constitutive model.
### Experiments
The specimens for the conditioning and mechanical tests are cut from the panels as presented in Fig. 10. Works by Poulain et al. [39] and Rocha et al. [43] suggest that necking in the tension direction of epoxy systems is a structural instability rather than a material property; consequently, a notch is inserted to reduce the influence of material imperfections and necking on the yield behavior. The specimens are conditioned at 60 \({}^{\circ}\)C and 85% relative humidity until the saturated state, at a moisture concentration of 1.0% for the neat epoxy system and 1.2% for the BNP-reinforced epoxy, is reached. The overall conditioning time was 115 days. Finally, mechanical loading-unloading tests are performed according to the testing standard DIN EN ISO 527-2, using an extensometer to measure the elongation of the specimens and a load rate of 1 mm/min. We apply a total of six cycles by loading to a certain amplitude and unloading until the loading force reaches zero. 
### Calibration of the viscoelastic-viscoplastic damage model As presented above, the specimen used in our experiments includes stress concentration at the center of the sample to more accurately calibrate the damage parameter. Accordingly, the parameter identification of the constitutive model is conducted in two steps as follows: Figure 10: Planar dimensions of the specimen for conditioning and mechanical loading-unloading tests with a thickness of 2.3 mm. All dimensions are in millimeters. 1. The material parameters are initially pre-calibrated using the rheological model and experimental data. For this, the objective function, defined by the root mean square deviation between the experimentally measured and numerically predicted stress values, is minimized using a genetic algorithm. The population size is set at 200, and a maximum number of generations of 500 is used. To ensure that the optimum solution is obtained, the number of steps to determine whether the genetic algorithm is progressing is set to 500. The pre-calibration allows obtaining a precise upper and lower bound for the next parameter identification step. 2. Since the presented geometry of the specimens used in our experimental data cannot be considered within our one-dimensional rheological model, the final calibration of the material parameters is fulfilled using FE analysis. Also, the damage is concentrated at the center of the sample and can only be accurately calibrated using FE analysis. Therefore, this step re-optimizes the shear and volumetric bulk modulus of the equilibrium, the shear modulus of the non-equilibrium stress contribution, and the damage parameter using FE simulation results and experimental data. Here, an objective function is defined by the root mean square deviation between the experimentally measured and numerically predicted force values. It is worth noting that although the viscous dashpot parameters can be identified using the experimental data, Unger et al. [42] performed a set of atomistic simulations and predicted the parameters of the Argon viscoelastic model. Accordingly, we adopt the parameter \(\dot{\varepsilon}_{0}\) and the equivalent activation energy \(\Delta\)H. Furthermore, since FE simulations are computationally expensive, we calibrate the viscoplastic parameters manually by doing multiple sensitivity simulations to reduce the calibration time. Considering the double symmetry of the specimen at mid-length, symmetric boundary conditions are applied in our finite element analysis to reduce the computational cost while keeping the full solution of the model. A three-dimensional simulation is performed to cover the transverse strain effect and loading conditions accordingly to the experiments are applied as presented in Fig. 11. The presented model is discretized with 16104 eight-noded hexahedral (Q8) elements. The mesh is refined at the right part of the model, where a stress concentration area is observed and 8 elements are implemented to discretize the thickness direction. We multiply the horizontal displacement at the loading point by two to measure the displacement. The final identified material parameters are listed on Table 3 and the corresponding force displacement behavior is presented in Fig. 12. The constitutive material model is validated using experimental data for the epoxy system with 0% BNPs at the saturated condition and the epoxy system including 10% BNPs at dry and saturated conditions. 
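The pre-calibration in step 1 above can be sketched as follows. SciPy's differential evolution is used here only as a stand-in for the genetic algorithm mentioned in the text (SciPy's `popsize` is a per-parameter multiplier rather than the absolute population of 200), and `model_stress`, `bounds`, and the experimental arrays are user-supplied placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rmsd(params, model_stress, t_exp, stress_exp):
    """Root-mean-square deviation between measured and predicted stress."""
    stress_num = model_stress(params, t_exp)   # placeholder: rheological model response
    return np.sqrt(np.mean((stress_num - stress_exp) ** 2))

def precalibrate(model_stress, t_exp, stress_exp, bounds):
    """Evolutionary minimization of the RMSD objective (pre-calibration, step 1)."""
    result = differential_evolution(
        rmsd, bounds, args=(model_stress, t_exp, stress_exp),
        popsize=200, maxiter=500, tol=1e-8, polish=True, seed=0)
    return result.x, result.fun
```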
The prediction of the calibrated constitutive model for the epoxy system with 0% BNPs at saturated condition is presented in Fig. 13. The agreement between experimental data and constitutive model prediction in the figure confirms the predictive capability of the implemented viscoelastic-viscoplastic damage model for the mechanical behavior of the nanocomposite. Accurate results are also observed for the epoxy system containing 10% BNPs under saturated and dry conditions, as presented in Fig. 14. Although the proposed constitutive model can reasonably predict the highly nonlinear viscoelastic-viscoplastic behavior of the material, it has some shortcomings compared with the experimental data. The deviation from the experimental results may result from the formulation of the constitutive model and the unique set of material parameters. Especially in the first cycles at 0% BNPs volume fraction, the predicted stiffness and plastic strain at zero force are overestimated. Nevertheless, the implemented amplification approach to include the influence of moisture and BNPs volume fraction leads to realistic results. This is visible from the numerical response of the 10% BNP/epoxy model at the saturated condition in Fig. 14, where both linear and nonlinear rate-dependent behaviors are appropriately captured.
Figure 11: Loading and boundary conditions imposed on the model (top) and the 3D model as used in our finite element analysis (bottom).
\begin{table} \begin{tabular}{l l l l l} \hline \hline & Parameter & Value & Equation & References \\ \hline Equilibrium shear modulus & \(\mu_{eq}^{0}\) (MPa) & 760 & 15 & \\ Non-equilibrium shear modulus & \(\mu_{neq}^{0}\) (MPa) & 790 & 16 & \\ Volumetric bulk modulus & \(\kappa_{v}\) (MPa) & 1154 & 10 & \\ Viscoelastic dashpot & \(\dot{\varepsilon}_{0}\) (s\({}^{-1}\)) & 1.0447 x 10\({}^{12}\) & 21 & [42] \\ & \(\Delta H\) (J) & 1.977 x 10\({}^{-19}\) & 21 & [42] \\ & \(m\) & 0.657 & 21 & \\ & y\({}_{0}\) & 75 & 22 & \\ & x\({}_{0}\) & 0.2369 & 22 & \\ & b\({}_{s}\) & 0.06786 & 22 & \\ & a\({}_{s}\) & -48.23 & 22 & \\ Viscoplastic dashpot & a & 0.179 & 27 & \\ & b & 0.910 & 27 & \\ & \(\sigma_{0}\) (MPa) & 5.5 & 27 & \\ Damage & A & 320 & 13 & \\ Moisture swelling coefficient & \(\alpha_{w}\) & 0.039 & 3 & [45] \\ \hline \hline \end{tabular} \end{table} Table 3: Material parameters of the viscoelastic-viscoplastic damage model.
Figure 12: Force-displacement curve of the dry epoxy system without BNPs at a load rate of 1 mm/min and room temperature under cyclic loading-unloading conditions.
### Training and validation of the deep learning model
The trainable parameters \(\omega\) for the LSTM units and the dense forward layer are trained using the mean absolute error (MAE) as the loss function for each batch size M and the number of features. The MAE has proven to be less sensitive to outliers than the mean squared error [12]. The error \(\epsilon\) is computed using the loss function, and the contribution of each trainable parameter is backpropagated to calculate the gradient and train the deep network. The optimization algorithm implemented is the Adam optimizer [66] with a learning rate of \(\eta=0.001\), which is dropped to \(\eta=0.0001\) after 200 epochs. The loss function adopts the L1 loss, yielding smooth results according to cross-validation. Firstly, we tune the number of hidden units for both layers at a fixed batch size of M = 64. 
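Before discussing the tuning results, the training setup (two stacked LSTM layers with a dense output layer, Adam with a learning-rate drop after 200 epochs, and the L1/MAE loss) can be sketched as below. PyTorch is used purely for illustration of the described setup; the sigmoid output activation reported above is omitted for simplicity, and the data loader is assumed to yield normalized input and target sequences.

```python
import torch
import torch.nn as nn

class StressLSTM(nn.Module):
    """Two stacked LSTM layers followed by a dense layer, cf. Fig. 5."""
    def __init__(self, n_in=9, n_hidden=150, n_out=6):
        super().__init__()
        self.lstm = nn.LSTM(n_in, n_hidden, num_layers=2, batch_first=True)
        self.dense = nn.Linear(n_hidden, n_out)

    def forward(self, x):                 # x: (batch, time, 9)
        h, _ = self.lstm(x)               # encoded state at every time step
        return self.dense(h)              # stress sequence: (batch, time, 6)

def train(model, loader, epochs=300):
    """L1 (MAE) loss with Adam; learning rate dropped from 1e-3 to 1e-4 after 200 epochs."""
    loss_fn = nn.L1Loss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(epochs):
        if epoch == 200:
            for group in opt.param_groups:
                group["lr"] = 1e-4
        for x, y in loader:               # normalized input/target sequences
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```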
Figure 13: Experimental force-displacement response of the epoxy system without BNPs at saturated condition and finite element response obtained by the calibrated constitutive model at a load rate of 1 mm/min.
Figure 14: Comparison of experimental and numerical force-displacement response of the epoxy/BNPs system for loading-unloading tests with increasing amplitude at dry/saturated conditions and a load rate of 1 mm/min.
The presented results of the training are shown in Fig. 15, and the mean absolute error is calculated as follows: \[\text{MAE}=\ \frac{1}{\text{N}}\sum_{\text{i = 1}}^{\text{N}}\left|\mathbf{\sigma}_{tot}^{ pred}\ -\ \mathbf{\sigma}_{tot}^{targ}\right|, \tag{60}\] where \(\mathbf{\sigma}_{tot}^{pred}\) is the value predicted by the DL model at the \(i\)th timestep of a sequence in the training data, \(\mathbf{\sigma}_{tot}^{targ}\) is the corresponding target value and N is the number of data points. The mean(MAE) is calculated by taking the mean value of the last 50 epochs. It can be observed that the most accurate results are obtained by using 150 LSTM units. Consequently, we continue the training using 150 units for each layer, which is also preferable due to fewer trainable parameters and presumably leads to a better performance regarding computational efficiency compared with a larger number of units. Considering the choice of the batch size, representing the number of training examples in one backward pass, the DL model is trained on varying batch sizes of [32, 64, 128]. The results are presented in Table 4 and show that using a batch size of 64 leads to the best results on the training data. The final results using 150 LSTM units and a batch size of 64 are presented in Fig. 16. As can be seen, lowering the learning rate after 200 epochs helped the DL model to achieve an accurate result, leading to a low plateau at a loss of \(5\cdot 10^{-2}\). As mentioned above, the validation of the model is done using 10% of the generated data, leading to a loss of \(8\cdot 10^{-2}\) MPa. Hence, the following results are obtained using validation data.
Figure 15: Performance of the two-layer LSTM architecture as a function of the number of LSTM units per layer.
Fig. 17(a) and Fig. 17(b) show the stress-strain behavior of BNP/epoxy nanocomposites under uniaxial cyclic loading predicted by the constitutive and DL model, respectively. A comparison between the simulation results reveals that the DL model is able to learn the nonlinear rate-dependent material behavior at both dry and saturated conditions. Fig. 18 also presents the behavior of the epoxy under a complex cyclic path of uniaxial and shear deformations. In this case, we visit a total of P = 5 points for each deformation gradient component in the 9-dimensional spatial space as presented in Table 1, resulting in a unique loading path for each direction and thus making the deformation scenario as complex as possible. As can be seen, the DL model can predict the stress tensor accurately under complex loading scenarios and can, therefore, replace the constitutive model. Next, a set of uniaxial cyclic loading-unloading results is presented in Fig. 19 to evaluate the DL model's capability to predict the epoxy's rate-dependent behavior. As can be seen, the DL model is able to accurately predict the rate-dependent behavior in excellent agreement with the constitutive model for a wide range of strain rates. 
\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Batch size} \\ \cline{2-4} & 32 & 64 & 128 \\ \hline mean(MAE) & \(8\cdot 10^{-2}\) & \(5\cdot 10^{-2}\) & \(7\cdot 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of the training data results for three different batch sizes.
Figure 16: Training loss during the training of the DL model. The loss converges towards a plateau after 200 epochs.
Furthermore, the imposed deformation path includes compression, confirming the predictive ability regarding the transition of the stress state from tension to compression.
Figure 17: Uniaxial loading results of the constitutive model and DL model for different amounts of BNPs volume fraction at dry and moisture-saturated state. \(\mathrm{E}_{11}\) represents the Green strain element and is obtained using Eq. (28). The strain rate is \(\dot{\varepsilon}~{}=~{}5~{}\times~{}10^{-4}~{}s^{-1}\).
Figure 18: The neat and dry epoxy under a complex three-dimensional combination of shear and axial cyclic loading. The diagonal terms of the stress tensor are presented in (a) and the shear terms are shown in (b). The effective strain rate derived from the Frobenius norm of the Green strain tensor is \(\dot{\varepsilon}_{F}~{}=~{}3~{}\times~{}10^{-4}~{}s^{-1}\).
Figure 19: Comparison of the constitutive model against the DL model for the uniaxial case of the neat and saturated epoxy system under cyclic loading-unloading conditions at three different strain rates.
### Computational efficiency
In the following subsection, the computational efficiency of the DL model is compared with the constitutive model at the rheological level. As stated earlier, the Euler backward time integration scheme is used to formulate the constitutive model. Fig. 20 presents a deformation path with a deformation step of \(\Delta F=8.7\cdot 10^{-5}\) and a timestep of \(\Delta t=1.28\) s. It should be noted that the chosen timestep is not applicable for an explicit time integration since no convergence within the specified accuracy can be reached. The Euler backward integration scheme and the forward propagation of the DL model are implemented in an in-house MATLAB code. The simulation results show that the computational efficiency strongly depends on the deformation path's complexity. While simple deformation paths, such as uniaxial loading, lead to no or low acceleration of the model, a reduction of the computation time of the model with complex loading paths by a factor of 3.5 is detected. The path includes a combination of all deformation gradient components. It can be regarded as one of the most complex paths since it contains all deformation directions in the 3D stress-strain space. Table 5 shows the CPU time needed for two combinations of BNPs volume fraction at dry and saturated state. We note a reduction of the computation time by a factor of 3.5 for 10% BNPs volume fraction at dry state, while other combinations indicate no or low accelerations. Another noteworthy observation is the constant CPU time of the DL model, which is expected as the forward propagation includes no iterative scheme, and only matrix multiplications are repeated in each loading step. This behavior is elucidated by observing the numerical time passed at each timestep within the Euler backward algorithm compared with the DL model. The results are presented in Fig. 21. 
In each combination, the Euler backward algorithm needs a low number of iterations at the beginning of each run, possibly due to the inactivity of the viscous or viscoplastic dashpot presented in Fig. 1. After approximately 300 steps, the integration time per step increases up to a peak of 0.28 s, which is an indication that the viscous and viscoplastic dashpots are activated, and the Euler backward algorithm requires multiple iterations to fully integrate the model within a tolerance of \(1\cdot 10^{-5}\). It also implies that an explicit time integration is not a reasonable choice for the chosen timestep. Also, a proportional increase in integration time per step is observed by increasing the model's stiffness. In contrast, the simulation time per timestep for the DL model is constant at \(7.7\cdot 10^{-4}\) s.
\begin{table} \begin{tabular}{|c c|c c|} \hline \hline & & \multicolumn{2}{c|}{BNPs volume fraction} \\ \cline{3-4} & & 0\% & 10\% \\ \hline \multirow{2}{*}{Dry} & Constitutive model & 0.57 & 1.24 \\ \cline{2-4} & DL model & 0.35 & 0.35 \\ \hline \multirow{2}{*}{Saturated} & Constitutive model & 0.31 & 0.44 \\ \cline{2-4} & DL model & 0.35 & 0.35 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of the CPU time for the DL and constitutive model in seconds.
Figure 20: Exemplary loading path including all nine deformation gradient components, presented as Green strain components using Eq. (28).
Figure 21: Time needed for each loading step within the Euler backward algorithm to integrate the constitutive model compared with the DL model.
### Accuracy of the proposed DL model
The accuracy of the proposed DL model has already been verified in Section 5.3. Here, the accuracy of the trained and validated DL model within the FE framework is evaluated by comparing the results of the force-displacement curves for the 3D model presented in Fig. 11. FE simulations are performed by implementing the trained deep network within an FE code in C++ and utilizing the DealII libraries [67] combined with the Eigen library for the matrix multiplication [68]. The obtained force-displacement curves based on the DL and constitutive models are presented in Fig. 22. For a better comparison, the MAE of the force-displacement curves is detailed in Table 6, suggesting a good agreement between the simulation results obtained by the two material models.
\begin{table} \begin{tabular}{c c c} \hline \hline & \multicolumn{2}{c}{BNPs volume fraction} \\ \hline & 0\% & 10\% \\ \hline Dry & 20.9 & 5.9 \\ Saturated & 17.8 & 14.6 \\ \hline \hline \end{tabular} \end{table} Table 6: MAE of the DL model compared to the constitutive model.
The FE simulation results confirm the predictive capability of the DL model in capturing the nonlinear behavior of BNP/epoxy nanocomposites. It should be mentioned that a converged solution within the Newton-Raphson method can only be reached with the proposed perturbation algorithm within the data generation described in algorithm 1. This implies that the stress-strain behavior itself can be characterized by data generated without algorithm 1. Nevertheless, an approximation of the tangent modulus \(\hat{\mathbb{C}}\) without algorithm 1 is only possible for single-element case studies and fails for the specific three-dimensional model presented above. In conclusion, the agreement between the experimental data and the FE simulations facilitated by the DL model assures a significant increase in computational efficiency, while only a marginal decrease in accuracy is observed. 
Also, it is observed that the CPU time of FE simulations using the DL model is decreased by a factor of 1.5 for the model with 10% BNP volume fraction at the dry condition. In addition, a reduction by a factor of 1.3 is noticed for the model without nanoparticles at the saturated condition. The simulations are run on 30 CPUs of Intel Cascade Lake Xeon Gold 6230N (2.3GHz, 30MB Cache, 125W) using multithreading. Although the simulation involves only uniaxial loading, the stress concentration at the middle of the specimen increases the computational costs for the constitutive model. In contrast, the DL model runs at constant CPU time, reducing the wall time by factors of 1.6 and 1.4, respectively.

Figure 22: Force-displacement response for the DL and constitutive models associated with specimens made of BNP/epoxy at dry and saturated conditions.

## 6 Summary and conclusions

A nonlinear viscoelastic-viscoplastic damage model has been proposed to investigate the mechanical behavior of BNP/epoxy nanocomposites with moisture content at finite deformation. We implemented the Guth-Gold model to predict the impact of nanoparticles and moisture content on the material behavior. Also, the athermal yield stress related to the viscoelastic Argon model was modified by proposing a nonlinear sigmoid function and the chain stretch as the driving force. The results show that the proposed constitutive model is able to accurately predict the nonlinear cyclic behavior of the nanocomposite. To accelerate finite element simulations, an LSTM-enhanced network was trained to predict the highly nonlinear material behavior of the nanocomposites. The traditional viscoelastic-viscoplastic model utilizing the Euler backward algorithm for integration of the constitutive equations was replaced with the trained DL model enhanced by two layers of LSTM units. We proposed a data generation framework using a space-filling approach and perturbing loading paths to obtain a reasonable set of data for the supervised learning of the DL model to predict the stress-strain behavior and to compute the consistent tangent modulus. Benchmark examples show that the elimination of the iterative integration algorithm with the trained DL model leads to a significant increase in computational efficiency, and the DL model is capable of predicting complex cyclic loading paths for a wide range of strain-rates. Also, the implemented approach to perturb the loading path within the data generation algorithm leads to an accurate computation of the consistent tangent modulus as needed in the finite element simulations. While the proposed data generation approach can be used for most constitutive models to fully capture the three-dimensional stress-strain behavior, a reasonable question is whether the generation of data and the training is worth the time. It should be noted here that the speed-up is especially useful if we are interested in complex constitutive models as presented here, where the numerical integration is a significant part of the computational cost. Thus, employing a DL model for simple constitutive models, e.g. linear elasticity, can lead to an increase of the computational cost. Also, the proposed DL model can be further extended to adapt to different materials by using additional inputs, e.g. material parameters.
This can be easily achieved, since the generated deformation paths are saved and we can rerun the algorithm in Table 1 starting from STEP 6 to generate the output for a different material. Another feature is that the model can be also extended to adapt to different ambient conditions like temperature dependency, making the model as universal as possible. Since the constitutive model does already include the temperature dependency within the viscoelastic Argon model as presented in Eq. (21), we can generate additional training data to capture this dependency as well without the need to change the constitutive model at hand. Since the constitutive model is implemented in FE analysis in a modular fashion, integrating the DL model was straightforward. It led to a remarkable increase in the computational efficiency of FE analysis, while only a marginal decrease in accuracy is observed. In summary, the proposed DL model can be further developed to integrate other ambient conditions, such as temperature dependency, and should be compared with experimental data. Furthermore, in future studies, the effect of non-uniform dispersion of nanoparticles and moisture content at different temperatures can also be introduced within the developed model to increase its flexibility. _Data availability_ The source codes of the finite element analysis in this work are available at [https://github.com/BBahtiri/viscoelastic_viscoplastic_model](https://github.com/BBahtiri/viscoelastic_viscoplastic_model). _Acknowledgements_ This work originates from the following research project: "Challenges of industrial application of nanomodified and hybrid material systems in lightweight rotor blade construction" ("HANNAH - Herausforderungen der industriellen Anwendung von nanomodifizierten und hybriden Werkstoffsystemen im Rotorblattleichtbau"), funded by the Federal Ministry for Economic Affairs and Energy, Germany. The authors wish to express their gratitude for the financial support. The authors acknowledge the support of the LUIS scientific computing cluster, Germany, which is funded by Leibniz Universitat Hannover, Germany, the Lower Saxony Ministry of Science and Culture (MWK), Germany and the German Research Council (DFG).
2306.11103
Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation
In prediction of forest parameters with data from remote sensing (RS), regression models have traditionally been trained on a small sample of ground reference data. This paper proposes to impute this sample of true prediction targets with data from an existing RS-based prediction map that we consider as pseudo-targets. This substantially increases the amount of target training data and leverages the use of deep learning (DL) for semi-supervised regression modelling. We use prediction maps constructed from airborne laser scanning (ALS) data to provide accurate pseudo-targets and free data from Sentinel-1's C-band synthetic aperture radar (SAR) as regressors. A modified U-Net architecture is adapted with a selection of different training objectives. We demonstrate that when a judicious combination of loss functions is used, the semi-supervised imputation strategy produces results that surpass traditional ALS-based regression models, even though Sentinel-1 data are considered as inferior for forest monitoring. These results are consistent for experiments on above-ground biomass prediction in Tanzania and stem volume prediction in Norway, representing a diversity in parameters and forest types that emphasises the robustness of the approach.
Sara Björk, Stian N. Anfinsen, Michael Kampffmeyer, Erik Næsset, Terje Gobakken, Lennart Noordermeer
2023-06-19T18:10:47Z
http://arxiv.org/abs/2306.11103v1
Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation ###### Abstract In prediction of forest parameters with data from remote sensing (RS), regression models have traditionally been trained on a small sample of ground reference data. This paper proposes to impute this sample of true prediction targets with data from an existing RS-based prediction map that we consider as pseudo-targets. This substantially increases the amount of target training data and leverages the use of deep learning (DL) for semi-supervised regression modelling. We use prediction maps constructed from airborne laser scanning (ALS) data to provide accurate pseudo-targets and free data from Sentinel-1's C-band synthetic aperture radar (SAR) as regressors. A modified U-Net architecture is adapted with a selection of different training objectives. We demonstrate that when a judicious combination of loss functions is used, the semi-supervised imputation strategy produces results that surpass traditional ALS-based regression models, even though Sentinel-1 data are considered as inferior for forest monitoring. These results are consistent for experiments on above-ground biomass prediction in Tanzania and stem volume prediction in Norway, representing a diversity in parameters and forest types that emphasises the robustness of the approach. Forest remote sensing, above-ground biomass (AGB), stem volume, synthetic aperture radar (SAR), Sentinel-1, airborne laser scanning (ALS), deep neural networks, regression modelling, U-Net, composite loss function, semi-supervised learning, pseudo-targets, imputation. ## I Introduction Accurate monitoring of forest above-ground biomass (AGB) is essential to better understand the carbon cycle. Vegetation biomass is, for example, a larger global storage of carbon than the atmosphere [1, 2]. Additionally, to monitor, measure and predict the amount of available AGB correctly is important for economic aspects, e.g. to estimate available raw materials or the potential for bioenergy [3, 4]. As the stem volume (SV) accounts for the highest proportion of biomass in each tree, typically 65-80% [5, 6, 7], AGB monitoring often focuses on the available SV. In other applications, the total amount of available biomass is of interest, which comprises stems, stumps, branches, bark, seeds and foliage [2, 3, 8]. Today, remote sensing (RS) data from radar, optical or airborne laser scanning systems (ALS) are commonly used together with a sparse sample of collected ground reference forest measurements to develop prediction models over larger areas and regions [9, 10, 11]. Satellite and airborne RS have become an important source of information about these forest parameters and others. Traditionally, AGB and SV prediction models use relatively simple statistical regression algorithms, such as multiple linear regression, or machine learning regression models like random forests or multilayer perceptrons (MLPs) [12]. These models are usually noncontextual, as they restrict the regressor information to the pixel that is being predicted and do not combine regressor and regressand information from neighbouring pixels, known as spatial context. Remote sensing is commonly used to infer forest parameters on spatial scales that are coarser than the pixel size, for instance on stand level. Hence, there is no formal reason to avoid the use of contextual information and one should select the method that provides the highest accuracy on the desired scale. 
This motivates the use of deep learning (DL) and convolutional neural networks (CNNs), whose popularity hinges on their efficient use of spatial context and the inference accuracy obtained by these highly flexible function approximators. The ability of CNNs to exploit spatial patterns was also pointed out in a recent review [13] as an explanation as to why CNN are particularly suitable for RS of vegetation. A recent review [14] of DL methods applied to forestry concludes that these are in an early phase, although some work has emerged. We build our proposed method on Bjork _et al_'s sequential approach to forest biomass prediction [12], which uses a conditional generative adversarial network (cGAN) to generate AGB prediction maps by using synthetic aperture radar (SAR) as regressors and AGB predictions from ALS as the regressand. Their regression approach consists of two models that operate in sequence to provide more target data for training the model that regresses on SAR data. This implies that the first regression model learns the mapping between a small set of ground reference data and RS data from a sensor known to provide a high correlation with the response variable. ALS data are suitable for this purpose [8, 15], but are expensive to acquire. Hence, the second model in the sequence establishes a relationship between the ALS-derived prediction map, as a surrogate for the ground reference data, and RS data from a sensor that offers large data amounts at low cost, namely the Sentinel-1 SAR sensors. This paper preserves some of the principal ideas from [12]: The first is to train the regression model on an ALS-derived prediction map of the target forest parameter to increase the amount of training data. The motivation is that the small amount of ground reference data used to train conventional models limits their ability to capture the dynamics of the response variable, as demonstrated in [12]. The second is to carry forward the use of CNNs to leverage their exploitation of contextual information, their flexibility as regression functions, and their demonstrated performance in other applications. At the same time, we make several new design choices to improve on the previous approach and remedy its weaknesses: Firstly, the sequential model is replaced by an approach where ground reference data are imputed with data from the ALS-derived prediction map. In practice, this is done by inserting the sparse set of true targets into the dense map of pseudo-targets. By letting these data sources together form the prediction target, the SAR-based prediction model can be trained simultaneously on ground reference data and the ALS-derived prediction map in a problem setting that we frame as semi-supervised learning; A second improvement is that we replace or combine the generative adversarial network (GAN) loss used in [12] with a pixel-wise error loss and a frequency-aware spectral loss. This modification is motivated by an emerging awareness that the GAN loss used by the Pix2Pix [16] model employed in [12] may be well suited to preserve perceptual quality and photo-realism, which is required in many image-to-image translation tasks, but is less appropriate for the regression task that we address. 
This paper has a stronger technical and methodological focus than [12] and emphasises the method's ability to handle different tasks and cases: It demonstrates the proposed regression framework both on AGB prediction in dry tropical forests in Tanzania and on SV prediction in boreal forests in Norway, representing different parameters and very different forest types. Another difference is that the ALS-derived SV predictions used as pseudo-targets in the Norwegian dataset cover spatially non-contiguous forest stands, and is not a wall-to-wall prediction map. We have adapted the CNN-based regression algorithm for use with such data by implementing masked computation of the loss functions. In summary, we make the following contributions: 1. We develop a method that enables us to train contextual deep learning models to predict forest parameters from C-band SAR data from the Sentinel-1 satellite. 2. We enable the CNN-based regression model to use target data that consist of spatially disjoint polygons, thereby showing that it can be trained on complex datasets that arise in operational forest inventories. 3. By testing the method on AGB prediction in Tanzania and SV prediction in Norway, we demonstrate that it can handle different forest parameters and forest types. 4. We investigate an established consensus from the image super-resolution (SR) field about the trade-off between reconstruction accuracy and perceptual quality. For this purpose, we perform an ablation study of composite cost functions, including the GAN loss, a pixel-wise loss, and a recently proposed frequency loss. 5. We demonstrate state-of-the-art prediction performance on datasets from Tanzania and Norway. Notably, we show that a deep learning model with C-band SAR data as input supercedes a conventional ALS-based prediction model after it has been trained on ground reference data imputed with ALS-derived predictions of the forest parameters. The remainder of this paper is organised as follows: Section II reviews published research on related topics in deep learning applied to forest parameter prediction and other topics relevant to the proposed method. Section III presents the datasets used in this work. Section IV details the proposed approach and describes how we facilitate the imputation of pseudo-targets for regression modelling, enabling the CNN model to learn from continuous and discontinuous target data using a variety of loss functions. Experimental results are provided in Section V and discussed in Section VI. Finally, Section VII concludes the paper. ## II Related work Bjork _et al._ showed in a precursor of this paper [12] that the popular cGAN architecture _Pix2pix_[16] can be used in the forestry sector to predict AGB from Sentinel-1 data by training it on ALS-derived prediction maps. Their work inspired [17] to also exploit ALS-derived AGB prediction maps and cGANs to predict AGB from multispectral and radar imagery and to quantify aleatoric and epistemic uncertainty. Despite apparent similarities, the current paper distinguishes itself from both [12] and [17] in many ways. The differences from [12] are discussed in Section I when listing the contributions of the paper. Just like [12], Leonhardt _et al._[17] train their regression network with adversarial learning through a cGAN architecture, but pretrain the generator with a mean square error (MSE) loss to find a proper initialisation. 
Notably, their final goal is not point prediction in the MSE sense or according to similar metrics, but to develop probabilistic methods for AGB prediction that quantify uncertainty. Another example of deep learning applied to AGB prediction is Pascarella _et al._[18], who show that a traditional U-Net [19] trained with a pixel-wise error loss can be used as a regression model to predict AGB from image patches of optical Sentinel-2 data. Compared to [18], we focus on utilising data from the Sentinel-1 radar sensor that, as opposed to the optical Sentinel-2 sensor, can acquire data both at night and under cloudy conditions and is therefore a more reliable source of data. Besides these examples, the literature on deep learning for regression modelling of forest parameters is sparse. This is also pointed out in the review of the use of CNNs in vegetation RS conducted by Kattenborn _et al._[13]. It found that only 9% of the studies surveyed focused on regression modelling and only 8% were specifically related to forestry and forest parameter retrieval, such as biomass prediction. A recently published review by Hamedianfar _et al._[14] attributes this literature gap to the challenge of acquiring the large amounts of target data needed to train accurate contextual CNN models for forest. This has been a main motivation for using pseudo-targets from existing prediction maps to train our SAR-based prediction models. For further inspiration, we have had to look to alternative topics in the literature. Another image processing task that has inspired us to consider alternative loss functions and combinations of these is image super-resolution (SR). Single-image SR techniques are trained in a similar fashion as regression models: A full-resolution image is often used as the prediction target and a reduced resolution version of it as predictor data (see e.g. [20]), which renders the problem a prediction task that resembles the one in regression. Both the regression and the single-image SR task can be solved with generative models, but it is noteworthy that the literature identifies the SR task as an attempt to achieve two conflicting goals: It should produce images with high perceptual quality, meaning that they should appear natural and realistic. At the same time, it should reconstruct the underlying truth, that is, the high-resolution version of the input image, as closely as possible [20, 21, 22, 23, 24]. The SR literature associates GAN losses and adversarial training with the perceptual quality criterion, as these enforce realistic fidelity and crispness in the generated image. This is achieved at the expense of accurate reconstruction in the MSE sense, since the generator module of the GAN effectively learns to hallucinate the kind of spatial pixel configurations that fools the discriminator module, but does not consider pixel-wise reconstruction. On the other hand, pixel-wise losses such as error measures based on the \(L_{1}\) and \(L_{2}\) norm naturally reduce the reconstruction error, but lead to a blurry appearance of the generated image that is not realistic [20, 21]. This has made us realise that although the Pix2Pix model has established itself as a preferred standard model in image-to-image translation, its GAN loss and adversarial learning approach may be better suited for generative tasks where the result must be visually credible. 
This is not a concern in the regression of biophysical parameters, where regression performance in terms of root mean square error (RMSE), mean absolute error (MAE) or similar metrics is used to evaluate and rank methods. When training such models, one should therefore consider other loss functions or composite loss functions that support the relevant aspects of the regression task. The SR literature exemplifies ways of combining different loss functions, both regarding which losses to select and how they should interact [20, 21]. For instance, different losses can be used sequentially in pretraining and fine-tuning, or they can be used simultaneously as a composite loss. Although perceptual quality is not of the essence for prediction maps of forest parameters, it may still be worth including loss functions that promote sharpness and visual information fidelity as part of a composite loss. One particular class of loss functions we find interesting to investigate is frequency-aware losses. Their aim is to preserve the high-frequency content of the image, which can e.g. be related to forest boundaries, structure and texture. These have not previously been utilised in forest applications, and to a limited extent in SR, but relevant work is found in the more general computer vision literature, where issues referred to as Fourier spectrum discrepancy, spectral inconsistency, frequency bias or spectral bias have gained a lot of attention [25, 26, 27, 28, 29, 30, 31]. These terms relate to CNN-based generative models' lack of ability to capture the image distribution's high-frequency components, leading to blurriness and low perceptual quality. Some claim that the spectral bias is caused by the up-sampling method, e.g. transposed convolutions, used by the generator network [26, 31, 27]. Thus, changing the up-sampling method in the last layer of the generator network has been suggested [27]. However, Bjork _et al._[29] claim that changing the up-sampling procedure in the last layer from transposed convolution to e.g. nearest-neighbour interpolation followed by standard convolution gives ambiguous results. Chen _et al._[25] argue that the down-sampling modules in the discriminator network of the GAN are the issue, resulting in a generator network that lacks an incentive from the discriminator to learn high-frequency information of the data. However, more recent work [28] proves that the frequency bias must be rooted in the GAN's generator and not the discriminator. Hence, there has been a focus on modifying the generative training objective by incorporating a spectral or frequency-aware loss with the traditional spatial loss during training [26, 29, 30]. The observations and lessons from the precursor paper [12] and from the literature on SR and generic generative models prompts us to investigate if model accuracy improves when we combine loss functions and whether pretraining of the model is enough or if we can increase model performance with a fine-tuning phase. Among the loss functions we combine is a newly proposed frequency-aware loss: the simple but promising FFT loss [29]. It has been shown to perform better than other more complex frequency-aware losses [26, 30] on experiments where it was used to train a generative variational autoencoder (VAE) [32]. As the FFT loss has previously only been evaluated on VAEs with images from common benchmark datasets [29], we contribute with new insight into its behaviour when employed for other models and tasks. 
## III Study areas and datasets This section introduces the datasets used throughout this work, i.e. the ground reference target data, the ALS-derived Fig. 1: The location of the Tanzanian dataset, represented by Sentinel-1A image data covering the AOI overlaid with ground reference data shown as red L-shaped clusters of ground plots. Figure from [12]. prediction maps of AGB and SV, and the SAR data from the Sentinel-1 sensors. The ALS-derived prediction maps will interchangeably be referred to as the pseudo-target datasets, while the ground reference data are also referred to as field data, data from the field plots, or true prediction targets. The AGB dataset comes from the Liwake district in Tanzania. The SV datasets are from three regions in the southeast of Norway: Nordre Land, Tyristrand and Hole. For Tanzania, both the field data and the ALS data were acquired in 2014, as described in [9] and Section III-B1. The Sentinel-1A satellite was launched in April 2014 and only one single Sentinel-1A scene acquired in September 2015 was found to comply with our requirements, meaning that it covers one of Liwake's two yearly dry seasons and is close enough in time to the field inventory and the ALS campaigns in Tanzania. For Norway, the acquisition of the ALS data in 2016 and the field inventory in 2017 (see [10] and Section III-B3) implies that more Sentinel-1 data are available. Thus, the models we develop for the Norwegian test sites utilise a temporal stack of Sentinel-1A and Sentinel-1B scenes from July 2017. ### _Study area and dataset description_ #### Iii-BB1 Study area and dataset description This section briefly describes the Tanzanian and Norwegian study areas, including the ground reference data and related ALS-derived prediction maps. The interested reader is referred to [9] and [10], respectively, for in-depth descriptions of the ground reference data and the ALS-derived prediction maps. #### Iii-B2 Tanzanian study area This work focuses on the same study area as [12], i.e. the Liwake district in the southeast of Tanzania (\(9^{\circ}52\)'\(9^{\circ}58\)', \(38^{\circ}19\)'\(38^{\circ}36\)'E). The area of interest (AOI) is a rectangular region with a size of \(11.25\times 32.50~{}\mathrm{km}\) (WGS 84/UTM zone 36S). Fig. 1 shows the location of the AOI in Tanzania and the distribution of the 88 associated field plots. These field plots were collected within 11 L-shaped clusters, each containing eight plots, as seen in Fig. 1. The field work was performed in January-February 2014, and a circular area of size 707 \(\mathrm{m}^{2}\) represents each sample plot on the ground, i.e. they have a radius of 15 m. We refer to [33] for a description of the national level sample design in Tanzania, while [34, 9, 35] explain how data from the field work are used to develop large-scale AGB models. Generally, the miombo woodlands of the AOI are characterised by a large diversity of tree species. Measured AGB from the field work ranged from 0 to 213.4 \(\mathrm{Mg\,ha}^{-1}\)[9] with a mean and standard deviation of \(\mu\!=\!51.3\) and \(\sigma=45.6\mathrm{Mg\,ha}^{-1}\). #### Iii-B2 Tanzanian ALS-predicted AGB data We follow [12] and use the same ALS data from the Liwake AOI, which was acquired in 2014. We refer to [9] for details on the ALS flight campaign, ALS data processing, and the match-up of ALS data with ground reference AGB data from the field plots. After model fitting, the ALS-based AGB model was in [9] used to infer a wall-to-wall prediction map for the whole AOI in Liwake. 
The wall-to-wall map is represented as a grid with square pixels of size 707 \(\mathrm{m}^{2}\). We have gained access to this prediction map and will use it as pseudo-targets to train contextual CNN models for AGB predictions based on Sentinel-1 SAR data. #### Iii-B3 Norwegian study area The Norwegian study area consists of three regions shown in Fig. 2 and referred to as Nordre Land (A), Tyristrand (B) and Hole (C). All field work was performed during the summer and fall of 2017, initially resulting in 386 circular field plots of shape 250 \(\mathrm{m}^{2}\) distributed over the three regions. We refer to [10] for a description of the sampling design and related data properties. Of the original 386 field plots used for modelling stem volume, a total of 122 plots were not located within polygons of forest stands delineated in the inventories, and thus fell outside the spatial extent of the ALS-predicted SV datasets. We therefore excluded these plots from the analysis. In Table I, the column _No. of plots (after filtering)_ indicates the number of field plots included in the current study. The remaining entities of Table I, such as geographical coordinates, inventory size, field inventory information and distribution of the dominant tree species in each region, are sourced from [10]. In Nordre Land, ground reference values of SV ranged from 33.7 to 659.2 \(\mathrm{m}^{3}\mathrm{ha}^{-1}\) with a mean and standard deviation of \(\mu\!=\!252.7\) and \(\sigma\!=\!145.5\mathrm{m}^{3}\mathrm{ha}^{-1}\). Tyristrand it ranged from 56.1 to 513.3 \(\mathrm{m}^{3}\mathrm{ha}^{-1}\) with \(\mu=212.6\) and \(\sigma=96.9\mathrm{m}^{3}\mathrm{ha}^{-1}\), while in Hole it ranged from 29.5 to 563.9 \(\mathrm{m}^{3}\mathrm{ha}^{-1}\) with \(\mu=253.4\) and \(\sigma=125.8\mathrm{m}^{3}\mathrm{ha}^{-1}\). #### Iii-B4 Norwegian ALS-predicted SV data The ALS flight campaigns were performed in 2016 for all three regions of Norway. We refer to [10] for a description of how the ALS data were processed, the formulation of the nonlinear local prediction models and the match-up of ALS-derived predictions with ground reference data. After model fitting, maps of SV predictions were generated for all three regions, limited to areas where the forest height exceeded 8-9 meters. We refer to these as the ALS-derived SV prediction maps. In all regions, predictions were made for square pixels of size 250 \(\mathrm{m}^{2}\), i.e. \(15.8~{}\mathrm{m}\times 15.8~{}\mathrm{m}\) on the ground. The ALS-derived SV is given in units of \(\mathrm{m}^{3}\mathrm{ha}^{-1}\). Fig. 2: Location of the regions Nordre Land (A), Tyristrand (B) and Hole (C) in the Norwegian dataset. ### _Postprocessing of the ALS-derived prediction maps_ The ALS-derived prediction maps have been obtained as vector data in polygon format stored as shapefiles. These must be converted to raster data in order to be used as training data for CNN models. This conversion is straightforward for the Tanzania datasets, where all polygons are square and have the same areal coverage. Hence, we map project and sample the SAR data such that the SAR pixels coincide with the polygons of the AGB prediction map. The process for the Norway dataset is more complicated. Fig. 3 shows a section of the ALS-derived SV prediction map retrieved in the Nordre Land municipality. Brown areas show where SV predictions are available, whereas the background (other colours) is retrieved from OpenStreetMap [36]. An overlaid lattice of square grid cells can be seen at all zoom levels of Fig. 3. 
This lattice represents two things: Firstly, it contributes to the delineation of the polygons in the SV prediction map. In this dataset, SV has been predicted for polygons of varying size and shape, that are delimited by: 1) the grid cells of the lattice, as mentioned above; 2) the commercial forest boundaries that enclose the brown areas; and 3) curves within the brown areas that mark internal forest boundaries and subdivide different forest areas. These are seen at all zoom levels of the figure. Secondly, the lattice coincides with the map grid of the SAR data, since we have map projected and resampled the SAR images to align their map grid with the lattice of the SV polygons. Hence, the lattice grid is identical to the pixel grid we want for our training dataset. In summary, SV is only predicted in brown areas. Each prediction is associated with a polygon, which can be square if it is only delimited by the lattice and coincides with a lattice grid cell. It can also be of irregular shape and size, if a forest perimeter or an internal forest boundary delimits it. Each polygon is assigned a stem volume, \(V\), and an areal coverage, \(A\). Some of the square lattice cells are fully covered by one or more polygons, while others are only partly covered. Some lattice cells contain one polygon, while others contain two or more. We refer to this as a multipolygon format, as every lattice grid cell potentially contains multiple polygons. The multipolygon dataset must be rasterised into a target dataset with the same pixel grid as the SAR predictor data. This means that all polygons within a lattice grid cell must be merged, and the grid cell must be assigned a single SV value and the associated areal coverage. The predicted SV contributed by all intersecting multipolygons is computed as \[V_{merged}=\sum_{i=1}^{n}V_{mp(i)}, \tag{1}\] where _mp(i)_ indicates multipolygon number \(i\) and \(n\) is the number of multipolygons in a grid cell. Simultaneously, the total areal coverage is computed as: \[A_{merged}=\sum_{i=1}^{n}A_{mp(i)}. \tag{2}\] The described merging process guarantees that each grid cell is assigned a unique SV, but this value does not necessarily represent a full grid cell of 250 m\({}^{2}\). To quality assure the SV dataset, we remove all SV predictions with less than 40% areal grid cell coverage. This threshold is chosen heuristically to accommodate all three regions, as this removes less than 12% of the Nordre Land and Tyristrand dataset and less than 10% of the Hole dataset. The remaining SV prediction dataset is deemed suitable for the training of CNN regression models. All postprocessing steps are applied using QGIS [37]. ### _SAR data_ Low data cost can sometimes be crucial for developing forest parameter monitoring systems suitable for commercial Fig. 3: Small section of the ALS-derived SV prediction map from Nordre Land. SV has been predicted in the brown areas. The lattice represents the common pixel grid of the SAR predictor data and the rasterised SV prediction map. The original prediction map is obtained in vector format, with one SV prediction per polygon and multiple polygons per grid cell. The rasterisation process with merging of polygons is described in detail in the text. or operational use. This paper utilises SAR data from the freely available Sentinel-1 sensors, which also offer short revisit time and good coverage for the areas of interest. 
The SAR images are dual-polarisation (VV and VH) C-band scenes acquired in a high-resolution Level-1 ground range detected (GRD) format with a 10 \(\mathrm{m}\) pixel size. The SAR data were downloaded from Copernicus Sentinel Scientific Data Hub1. Footnote 1: See [https://scihub.copernicus.eu/dhus/#home](https://scihub.copernicus.eu/dhus/#home) For the AOI in Tanzania, we use a single scene acquired on 15 September 2015, as this is the only available Sentinel-1 product that covers the AOI at a time close to the acquisition of the ALS data and during one of the district's two yearly dry seasons. The latter criterion implies that the radar signal achieves sufficient sensitivity to dynamic AGB levels. We utilise data from the Sentinel-1A and -1B satellites for the three Norwegian regions. Since the field work for the three Norwegian regions was performed during the summer and fall of 2017, we decided to create temporal stacks of Sentinel-1 scenes from July 2017 for each of the three regions.

### _SAR data processing and preparation of datasets_

The Sentinel-1 GRD product in the Tanzanian dataset was processed with the ESA SNAP toolbox [38] following the workflow described in [12]. The Sentinel-1 GRD products in the Norway dataset have been processed with the GDAR SAR processing software at NORCE Norwegian Research Institute. They are geocoded with a \(10~{}\mathrm{m}\times 10~{}\mathrm{m}\) digital elevation model to the same map projection as the ALS-derived SV prediction map and resampled to a pixel resolution of \(15.8~{}\mathrm{m}\) to match the \(250~{}\mathrm{m}^{2}\) grid cells of the prediction map. Since [12] showed that it is more advantageous to train CNN-based prediction models with Sentinel-1 intensity data on decibel (dB) scale, the stacks of Sentinel-1 scenes for the Norwegian regions are converted to dB format. The final Sentinel-1 products for the Norwegian regions contain nine features that were extracted from the Sentinel-1 time series: NDI, mean(VV), mean(VH), min(VV), min(VH), max(VV), max(VH), median(VV), median(VH). NDI denotes the normalised difference index feature, a normalised measure of how much the measured backscatter differs in VV and VH. It is computed as \[NDI=(VV-VH)/(VV+VH). \tag{3}\]

## IV Methodology

This section describes the proposed methodology to train contextual CNN models for forest parameter prediction. We describe the semi-supervised approach and how training, test and validation datasets are created for each region. In general terms, we introduce the CNN models we use in our work and describe the changes proposed to improve on the performance obtained in [12]. This section focuses on a semi-supervised learning strategy where we impute the sparse reference data with data from ALS-derived prediction maps to increase the amount of training data and to create a dataset that allows us to train CNN models. It also explains the multiobjective training approach, which exploits composite loss functions with varying objectives in the pretraining and fine-tuning stages. #### Iv-E1 Overview The framework of the proposed method is illustrated in Fig. 4. Initially, the ground reference data, also known as the true prediction targets, are imputed with the ALS-derived prediction map, also called the pseudo-target dataset. Then two binary masks are created, one indicating the pixel positions of the true targets and the other indicating pixels where pseudo-target data are available. The two masks are referred to as ancillary training data.
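In array terms, the imputation and the two ancillary masks amount to a few lines of bookkeeping. The sketch below is purely illustrative: the variable names are ours rather than taken from the released code, and it assumes the pseudo-target map is stored as a raster with NaN wherever no ALS-derived prediction is available.

```python
import numpy as np

def impute_targets(pseudo_map, plot_rows, plot_cols, plot_values):
    """Insert field-plot (true target) values into the ALS-derived pseudo-target
    map and build the two binary masks used for masked loss computation.

    pseudo_map            : 2-D array of ALS-derived predictions, NaN where unavailable.
    plot_rows/cols/values : pixel positions and values of the ground plots.
    """
    target = pseudo_map.copy()
    target[plot_rows, plot_cols] = plot_values            # imputation step

    m_pt = np.isfinite(pseudo_map).astype(np.float32)     # pseudo-target mask
    m_gr = np.zeros_like(m_pt)
    m_gr[plot_rows, plot_cols] = 1.0                      # ground reference mask

    target = np.nan_to_num(target, nan=0.0)               # CNN-friendly raster
    return target, m_gr, m_pt

# toy example: a 4x4 block of pseudo-targets and one field plot inside it
pseudo = np.full((8, 8), np.nan)
pseudo[2:6, 2:6] = 100.0
target, m_gr, m_pt = impute_targets(pseudo, plot_rows=[3], plot_cols=[3], plot_values=[87.5])
print(target[3, 3], m_gr.sum(), m_pt.sum())               # 87.5 1.0 16.0
```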
They enable the CNN to learn from discontinuous pseudo-target data and boost learning in regions where ground reference data are available. When the pseudo-target data are spatially continuous and have the same extent as the predictor data, the pseudo-target mask will have a constant value of one. The imputed target dataset and the two masks are combined with regressor data from the Sentinel-1 sensor. See Section IV-B and Fig. 5 for details. Fig. 4 shows that baseline models are pretrained as an initial training step. Following the pretraining stage, fine-tuning may be applied to the baseline CNN models with a composition of different losses. Inference, i.e. production of SAR-derived prediction maps, is done with the resulting models2. Footnote 2: Code will be available from [https://github.com/sbj028/DeepConvolutionalForestParameterRegression](https://github.com/sbj028/DeepConvolutionalForestParameterRegression) ### _Imputing ground reference data with pseudo-targets_ The cGAN-based models developed in [12] for SAR-based regression trained on ALS-derived prediction maps could not compete with the conventional ALS-based regression model in terms of prediction accuracy. We argue that this is because the cGAN model is not trained on the true prediction targets and Fig. 4: Overall workflow for dataset generation, model training and inference to create prediction maps, displayed with image data from the Tyrstrand dataset. True targets (white circles) have been magnified for illustrative purposes. therefore inherits too much of the uncertainty in the ALS-based prediction maps. By contrast, the conventional ALS model was fitted directly to all the true prediction targets. To address this shortcoming and improve the performance of CNN models, we propose to impute pseudo-targets from the ALS-derived prediction maps into the dataset of true prediction targets, so that the CNN model is trained on the complete set of available targets. Since the ground reference dataset is much smaller than the prediction maps, this is in practice done by inserting true targets into the pseudo-target prediction maps. Following the imputation process, the Tanzanian dataset comprises less than 0.08% of target values originating from the ground reference data. For the Norwegian datasets, the ground reference data represents less than 0.04%, 0.11%, and 0.13% of the pixels in the respective Nordre Land, Tyristrand, and Hole datasets after the imputation process. We would generally use all available ground reference data for model training and hyperparameter tuning. However, for model evaluation, we report the performance after cross-validation (CV), where we have trained models on a target dataset that only contains 80% of the true target labels. The remaining 20% are reserved for validation. Results obtained with CV are referred to as CV-RMSE in the result section. ### _Preparing the datasets for contextual learning_ To create training, test and validation datasets for the Norwegian regions, all true target labels from the field inventory were first inserted into the ALS prediction maps of pseudo-targets. Two binary masks were additionally created; the pseudo-target mask, denoted \(\boldsymbol{\mathcal{M}}_{pt}\) indicates the positions of available ALS-derived predictions. It is needed for masked computation of the loss functions, which are restricted to pixels where prediction targets are available. The ground reference mask, denoted \(\boldsymbol{\mathcal{M}}_{gr}\), holds the positions of the true prediction targets. 
It is also used in the loss computation, where we weight the loss for the true prediction targets higher than the pseudo-targets. After having produced the imputed target dataset and the two masks, we follow the workflow shown in Fig. 5 to create datasets with training, test and validation image patches. The figure illustrates the process for Tyristrand, but it is identical for all three Norwegian regions. Firstly, all available data are combined into a stack, including the Sentinel-1 mosaic of nine feature bands, the imputed target map, and the two masks. Then the entire scene is divided into superpatches by splitting it into blocks with no overlap. A superpatch is defined as a block of pixels that is larger than the image patches we use for training, testing and validation. See Table II for an overview of the total number of pixels in each region, the corresponding size of each superpatch and the number of possible superpatches that can be extracted for that region. \(\boldsymbol{\mathcal{M}}_{pt}\) was used to remove superpatches with no overlap with pseudo-targets. Among all available superpatches, those with at least 10% overlap with \(\boldsymbol{\mathcal{M}}_{pt}\) were identified as candidates for the test dataset. Fulfilling this criterion, approximately 15% of all available superpatches were randomly selected as test superpatches. These were further split into test patches of \(64\times 64\) pixels without overlap. Test patches having no overlap with \(\boldsymbol{\mathcal{M}}_{pt}\) were discarded. Table II shows each region's final number of test patches. The remaining superpatches were initially used for hyper-parameter tuning. See Appendix A for details. After this, all patches not used for testing were combined into training sets for the Norwegian models by splitting superpatches into training image patches of \(64\times 64\) pixels using 50% overlap and data augmentation with flipping and rotation. Patches with no overlap with \(\boldsymbol{\mathcal{M}}_{pt}\) were discarded. Table II lists the number of training image patches per region after hyperparameter tuning. The training, test and validation datasets for Tanzania were created by similar use of superpatches. Since the Tanzanian ALS-derived prediction map covers the whole AOI without any discontinuities, there is no need to check if image patches overlap with pseudo-targets. See Table II for details on region sizes in pixels and the number of test and training patches. To evaluate the models, we also created CV target datasets where we used only 80% of the true target labels from the field inventory. We compute the model's performance both when it is trained with all true target labels and also a CV performance for the case when 20% of the true target labels are held out and used for testing. When comparing these results, one must recall that in the former case, the model has seen the test data during training. Moreover, the models are in the CV case trained with less true prediction targets. ### _Backbone U-Net Implementation_ The CNN we use for SAR-based prediction of forest parameters is a modified version of the U-Net architecture in [19], a fully convolutional encoder-decoder network originally developed for biomedical image segmentation. The U-Net consists of a contraction part and a symmetric extraction path, with skip connections between each encoder block and the associated decoder block. 
The skip connections imply that low-level feature maps from the contraction part are concatenated with high-level feature maps from the extraction part to improve the learning in each level of the network. Fig. 6 illustrates the U-Net generator network we use with an encoder-decoder depth of 4. This is the depth used by the Norwegian models, determined by hyperparameter tuning, while the Tanzanian models use a depth of 5. In both cases, we use ResNet34 [39] as backbone for the convolutional encoder network and refer to the whole model as a regression U-Net. The regression U-Net is trained to perform image-to-image translation. I.e., given Sentinel-1 image patches from the input domain, the model translates these into prediction maps of AGB or SV maps for the same area, guided by the imputed target data. For the Norwegian datasets, we have modified the first layer of the encoder to enable nine-channel inputs, i.e. input tensors of dimension \(9\times 64\times 64\). The Tanzanian models take three-channel inputs with a shape of \(3\times 64\times 64\). Additionally, the segmentation head was removed from the original U-Net architecture, as our work concerns a regression task and not a segmentation task. Finally, the softmax activation function in the final layer was replaced with a ReLU activation function to ensure non-negative AGB and SV predictions. The initial layer of the encoder network uses a \(7\times 7\) convolution kernel with a stride of 2, followed by a normalisation layer, ReLU activation and a max-pooling operation. This implies that the number of feature channels is increased to 64, while the image dimension is decreased to \(16\times 16\) pixels. The following layer combines residual basic blocks, each using a \(3\times 3\) convolution, followed by a normalisation layer, ReLU activation, \(3\times 3\) convolution and a normalisation layer. The following encoder layers' residual blocks additionally employ down-sampling layers, which double the feature channels and half the spatial resolution of the image. In addition to the skip connections previously mentioned, each residual block uses common short connections [39]. Each block in the extraction part uses upsampling through nearest-neighbour interpolation and combines feature maps from the skip connection. It further processes the feature maps through two identical transformations, each including \(3\times 3\) convolutional filtering followed by a normalisation layer, ReLU activation and identity mapping. The upsampling procedure halves the number of feature maps while doubling the spatial resolution. We use the Pytorch implementation of the U-Net model from [41] for our regression U-Net, with the above-mentioned modifications. ### _Pretraining Stage_ We follow the training procedure proposed for ESRGAN, a super-resolution model trained with multiple objectives in [20], and divide the training of the U-Net architecture into two stages: pretraining and fine-tuning. In the pretraining stage, we train two baseline CNN models: an \(\mathcal{L}_{1}\)-based regression U-Net and a cGAN-based generative U-Net. In the fine-tuning stage, described in Section IV-E, we continue to train the baseline models with additional losses. #### Iv-D1 Pixel-aware Regression U-Net We refer to a regression-type U-Net model optimised on a pixel-wise loss computed between model-inferred predictions and target predictions as a pixel-aware regression U-Net (PAR U-Net). 
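As a concrete illustration of the modifications described above, such a regression U-Net could be instantiated as in the following sketch. It uses the segmentation_models_pytorch package as one convenient backbone implementation and is not necessarily the implementation referenced as [41]; the encoder, depth, number of input channels and ReLU output follow the description above, while the decoder widths are simply the package defaults.

```python
import torch
import segmentation_models_pytorch as smp

class RegressionUNet(torch.nn.Module):
    """U-Net with a ResNet34 encoder, adapted for pixel-wise regression:
    multi-channel SAR input, no segmentation head, ReLU output to keep the
    predicted AGB/SV non-negative."""

    def __init__(self, in_channels=9, depth=4):
        super().__init__()
        self.unet = smp.Unet(
            encoder_name="resnet34",
            encoder_depth=depth,
            decoder_channels=(256, 128, 64, 32, 16)[:depth],
            in_channels=in_channels,
            classes=1,                 # one regression target per pixel
            activation=None,           # no segmentation activation
        )

    def forward(self, x):
        return torch.relu(self.unet(x))    # enforce non-negative predictions

model = RegressionUNet(in_channels=9, depth=4)   # Norwegian configuration
out = model(torch.randn(2, 9, 64, 64))
print(out.shape)                                 # torch.Size([2, 1, 64, 64])
```

For the Tanzanian models, in_channels=3 and depth=5 would be used instead, as stated above.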
In this work, the PAR U-Net is optimised on the \(\mathcal{L}_{1}\) loss similar to [20], i.e. \[\mathcal{L}_{1}=\sum_{k}||\mathbf{Y}-F(\mathbf{X})||=\sum_{k}||\mathbf{Y}-\mathbf{\hat{Y}}||, \tag{4}\] where \(\mathbf{X}\) and \(\mathbf{Y}\) represent a corresponding pair of input and target image patches from the training dataset, \(\mathbf{\hat{Y}}=F(\mathbf{X})\) is the image patch predicted by a CNN model \(F(\cdot)\), and \(k\) is the total number of image patches. #### Iv-D2 cGAN U-Net In addition to training the modified U-Net with a \(\mathcal{L}_{1}\) loss, we also train it as a cGAN, like in [12]. Formally conditioned on image patches from the Sentinel-1 input domain, the cGANs generator (\(G\)) network is trained to learn the optimal mapping \(G:\mathcal{X}\rightarrow\mathcal{Y}\) to generate realistic-looking image patches from the target domain. The \(G\) network also uses the regression U-Net architecture in Fig. 6. Simultaneously as the \(G\) network aims to improve the generation task, the adversarially trained discriminator network (\(D\)) is trained to distinguish between real or false pairs of image patches successfully. A real pair of image patches corresponds to one Sentinel-1 image patch and the corresponding target ALS-derived prediction map. On the other hand, a false pair corresponds to one Sentinel-1 image patch and the corresponding prediction map generated by \(G\). Adversarial training of \(G\) and \(D\) results from optimising the minimax loss function of the so-called Vanilla GAN (VGAN) [42]: \[\begin{split}\min_{G}\max_{D}\mathcal{L}_{VGAN}(D,G)=\mathbb{E} _{\mathbf{X},\mathbf{Y}}[\text{log }D(\mathbf{X},\mathbf{Y})]\\ +\mathbb{E}_{\mathbf{X}}[\text{log}(1-D(\mathbf{X},G(\mathbf{X})))].\end{split} \tag{5}\] Fig. 5: Illustration of how training and test image patches are extracted from the stack of Sentinel-1 dataset, imputed target dataset, pseudo-target mask and ground reference mask. The datasets shown are retrieved from the AOI in Tyristrand. However, the process is identical for all Norwegian regions and representative of how the Tanzanian dataset is prepared. See Section IV-B for details. However, to address stability issues during training of the VGAN, the least squares GAN (LSGAN) was introduced [43]. In a conditional setting, it optimises the objective functions \[\begin{split}\min_{D}\mathcal{L}_{LSGAN}(D)=&\frac{1}{ 2}\mathbb{E}_{\mathbf{X},\mathbf{Y}}[(D(\mathbf{X},\mathbf{Y})-b)^{2}]+\\ &\frac{1}{2}\mathbb{E}_{\mathbf{X}}[(D(\mathbf{X},G(\mathbf{X}))-a)^{2}]\,, \\ \min_{G}\mathcal{L}_{LSGAN}(G)&=\frac{1}{2}\mathbb{E }_{\mathbf{X}}[(D(\mathbf{X},G(\mathbf{X}))-c)^{2}],\end{split} \tag{6}\] where \(\mathbf{X}\) and \(\mathbf{Y}\) are image patches from the input and the target domain, \(a\) and \(b\) are labels for false and real data, while \(c\) denotes a value that \(G\) tricks \(D\) to believe for false data [43]. Isola _et al._[16] suggest to combine a GAN loss with an \(\mathcal{L}_{1}\) loss to reduce visual artefacts in the generated images. The contribution of the \(\mathcal{L}_{1}\) loss to the overall objective function is weighted by a regularisation parameter \(\alpha\), which is determined by hyperparameter tuning. In [12], the LSGAN model was found to outperform a VGAN and a Wasserstein GAN [44]. We therefore replace (7) with the following objective function for the generator of the baseline cGAN U-Net: \[\mathcal{L}_{cGAN}(G)=\mathcal{L}_{L1}+\alpha\mathcal{L}_{LSGAN}(G),\quad \alpha\in[0,1]. 
\tag{8}\] We find an optimal value of \(\alpha=0.01\), as in [16]. Similar to [16], we do not change the objective function of \(D\) for the baseline cGAN U-Net. In [16], different architectures were evaluated for the discriminator \(D\) by altering the patch size \(N\) of the receptive fields, ranging from a \(1\times 1\) PixelGAN to an \(N\times N\) PatchGAN. The \(D\) network applies convolutional processing to the pair of image patches to produce several classification responses, which are then averaged to determine whether the pair of image patches is real or false. In a PixelGAN, the discriminator attempts to classify each \(1\times 1\) pixel within the image patch as either real or false. In contrast, for the two PatchGAN networks, the discriminator tries to differentiate each \(N\times N\) patch of pixels in the image patch as real or false. ### _Fine-tuning Stage_ This section describes the loss functions used to fine-tune the PAR U-Net model and the cGAN U-Net model. #### Iv-E1 Pixel- and frequency-aware regression U-Net To enforce the regression U-Net to focus on the alignment of the image frequency components during training, we propose to add a frequency-aware loss to the pixel-aware regression model or the adversarial cGAN model. We choose to employ the FFT loss from [29], which has shown promising results and is complementary to existing spatial losses. It is formulated as \[\begin{split}\mathcal{L}_{FFT}=&\frac{1}{k}\sum \left(imag[\mathcal{F}(\mathbf{Y})]-imag[\mathcal{F}(\mathbf{\hat{Y}})]\right)^{2}+ \\ &\frac{1}{k}\sum\left(real[\mathcal{F}(\mathbf{Y})]-real[\mathcal{F}( \mathbf{\hat{Y}})]\right)^{2},\end{split} \tag{9}\] where \(\mathcal{F}\) denotes the fast Fourier transform. The \(\mathcal{L}_{FFT}\) uses MSE to enforce alignment of the _real_ and _imaginary_ parts of target and generated image patches in the frequency domain. The total composite loss function becomes: \[\mathcal{L}_{tot}=\mathcal{L}_{1}+\alpha\mathcal{L}_{LSGAN}+\gamma\mathcal{L} _{FFT},\quad\alpha,\gamma\in[0,1]. \tag{10}\] A regularisation parameter \(\gamma\) is associated with \(\mathcal{L}_{FFT}\) to adjust its influence on \(\mathcal{L}_{tot}\). The \(\alpha\) is still associated with \(\mathcal{L}_{cGAN}\). All objective functions we use in the fine-tuning can be formulated with \(\mathcal{L}_{tot}\), as we can ablate it by setting \(\alpha=0\) or \(\gamma=0\). The baseline PAR U-Net model is either fine-tuned on the \(\mathcal{L}_{cGAN}\) loss (\(\mathcal{L}_{tot}\) with \(\gamma=0\)), the combined \(\mathcal{L}_{1}\) and \(\mathcal{L}_{FFT}\) loss (\(\mathcal{L}_{tot}\) with \(\alpha=0\)), or the \(\mathcal{L}_{tot}\) loss. The baseline cGAN Fig. 6: Above: Regression U-Net architecture used for image-to-image translation. Below: modules of the regression U-Net. Modification of figure in [40]. U-Net model is fine-tuned on \(\mathcal{L}_{tot}\). We refer to Appendix A for an extensive evaluation of model settings and hyperparameters used in the pretraining and fine-tuning phase. ### _Masked Loss Computation on Discontinuous Data_ Due to the discontinuity of the ALS-derived SV prediction maps from Norway, there is not a target pixel for every pixel of the continuous SAR predictor dataset. To remedy this, we introduce masked loss computation. In this way, the convolutional processing of the predictor data creates a wall-to-wall map of model predictions, but in the comparison with the target dataset, pixels without prediction targets are masked out and excluded from the learning process. 
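A compact sketch of the fine-tuning objectives in Eqs. (9) and (10) is given below, written with mean rather than sum normalisation; the masking just introduced enters simply by multiplying targets and predictions with the relevant binary mask before the losses are evaluated. The value of gamma and the discriminator output shape are placeholders, and the discriminator network itself is omitted.

```python
import torch
import torch.nn.functional as F

def fft_loss(y, y_hat):
    """Frequency-aware loss of Eq. (9): squared error between the real and
    imaginary parts of the 2-D FFTs of target and predicted patches."""
    Y, Y_hat = torch.fft.fft2(y), torch.fft.fft2(y_hat)
    return F.mse_loss(Y_hat.real, Y.real) + F.mse_loss(Y_hat.imag, Y.imag)

def generator_objective(y, y_hat, d_fake=None, alpha=0.01, gamma=0.1,
                        mask=None, c=1.0):
    """Composite generator loss of Eq. (10): L1 + alpha*LSGAN + gamma*FFT.

    d_fake is the discriminator response to the (input, generated) pair and
    c the 'real' label of the LSGAN generator term; gamma = 0.1 is a
    placeholder (both alpha and gamma are tuned in the paper). Setting
    alpha or gamma to zero recovers the ablated objectives in the text.
    """
    if mask is not None:                       # masked loss computation
        y, y_hat = mask * y, mask * y_hat
    loss = F.l1_loss(y_hat, y) + gamma * fft_loss(y, y_hat)
    if d_fake is not None and alpha > 0:
        loss = loss + alpha * 0.5 * F.mse_loss(d_fake, torch.full_like(d_fake, c))
    return loss

y, y_hat = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
m_pt = (torch.rand(4, 1, 64, 64) > 0.3).float()
print(generator_objective(y, y_hat, d_fake=torch.rand(4, 1, 6, 6), mask=m_pt).item())
```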
In addition to masking the pseudo-targets, we want to boost learning for pixels and patches with true prediction targets, hence reducing the impact of pseudo-targets relative to true targets. As shown in Fig. 4, the training dataset contains two binary masks of the same size as the input and target data patches: the ground reference mask, \(\boldsymbol{\mathcal{M}}_{gr}\), and the pseudo-target mask, \(\boldsymbol{\mathcal{M}}_{pt}\), which for the Tanzanian AOI contains only ones. Masked losses are computed through simple Hadamard products, i.e. element-wise multiplication, denoted \(\odot\). For instance, the masked \(\mathcal{L}_{1}\) loss becomes: \[\begin{split}\mathcal{L}_{1}^{\mathcal{M}}&=\mathcal{L}_{1}(\boldsymbol{\mathcal{M}}\odot\boldsymbol{Y},\boldsymbol{\mathcal{M}}\odot\hat{\boldsymbol{Y}})\\ &=\frac{1}{N\times N}\sum_{i,j}m_{i,j}\,|y_{i,j}-\hat{y}_{i,j}|,\end{split} \tag{11}\] where \(\boldsymbol{\mathcal{M}}\) can be \(\boldsymbol{\mathcal{M}}_{pt}\) or \(\boldsymbol{\mathcal{M}}_{gr}\) with entries \(m_{i,j}\), and \(y_{i,j}\) and \(\hat{y}_{i,j}\) are pixels of the target patch \(\boldsymbol{Y}\) and the predicted target patch \(\hat{\boldsymbol{Y}}\), whose size is \(N\times N\). Similarly, \(\mathcal{L}_{FFT}\) can be computed on \(\mathcal{F}(\boldsymbol{\mathcal{M}}\odot\boldsymbol{Y})\) and \(\mathcal{F}(\boldsymbol{\mathcal{M}}\odot\hat{\boldsymbol{Y}})\). Also, the discriminator \(D\) can be fed with masked patches, either the real pair \((\boldsymbol{\mathcal{M}}\odot\boldsymbol{X},\boldsymbol{\mathcal{M}}\odot\boldsymbol{Y})\) or the fake pair \((\boldsymbol{\mathcal{M}}\odot\boldsymbol{X},\boldsymbol{\mathcal{M}}\odot G(\boldsymbol{X}))\). With this input to \(D\), the \(\mathcal{L}_{LSGAN}\) losses in (6) and (7) generalise to the masked case. Let loss functions masked with \(\boldsymbol{\mathcal{M}}_{pt}\) and \(\boldsymbol{\mathcal{M}}_{gr}\) be denoted \(\mathcal{L}^{\mathcal{M}_{pt}}\) and \(\mathcal{L}^{\mathcal{M}_{gr}}\), respectively. To weight the true targets and the pseudo-targets differently, the total loss is decomposed as: \[\begin{split}\mathcal{L}_{tot}&=\delta\mathcal{L}_{tot}^{\mathcal{M}_{gr}}+\mathcal{L}_{tot}^{\mathcal{M}_{pt}}\\ &=\delta\mathcal{L}_{1}^{\mathcal{M}_{gr}}+\mathcal{L}_{1}^{\mathcal{M}_{pt}}+\gamma\left(\delta\mathcal{L}_{FFT}^{\mathcal{M}_{gr}}+\mathcal{L}_{FFT}^{\mathcal{M}_{pt}}\right)\\ &+\alpha\left(\delta\mathcal{L}_{LSGAN}^{\mathcal{M}_{gr}}+\mathcal{L}_{LSGAN}^{\mathcal{M}_{pt}}\right),\end{split} \tag{12}\] with \(\alpha,\gamma\in[0,1]\) and true target weighting parameter \(\delta\gg 1\), found from hyperparameter tuning (see Appendix A). A masked loss decreases when the mask has many zeros, which is as intended, since the amount of true or pseudo-targets contained in a patch should determine its impact. This is inspired by pseudo-labelling [45], a related semi-supervised learning algorithm for categorical prediction. It recommends balancing the losses computed over pseudo-labels (the categorical equivalent to the pseudo-targets in the regression task) and true labels, as there are generally many more pseudo-labels than true labels. In our training paradigm, this translates to boosting the masked loss computed over the true targets.

## V Experimental Results

This section presents experimental results of the prediction models trained on the Tanzanian and Norwegian datasets. We provide results on both regional and pan-regional models for the Norwegian datasets.
The pan-regional models have been trained on all available training datasets from Nordre Land, Tyristrand and Hole. The regional models were trained on datasets from either Nordre Land, Tyristrand or Hole, and evaluated on the test data from the same region they were trained on. Appendix A provides details on hyperparameter tuning and settings used during model training. Results are given both for the pretraining stage, i.e. the baseline models, and the fine-tuning stage as root mean square error (RMSE) and mean absolute error (MAE). Models with a low RMSE and MAE are preferred, as indicated by the symbol \(\downarrow\) in the tables. Models have been trained in two ways: (i) using all true targets imputed with pseudo-targets; (ii) in cross-validation (CV) mode by rotationally imputing 80% of the target labels with the available pseudo-targets. For the latter case, a CV-RMSE is reported as \(\mu\) (mean) \(\pm\sigma\) (standard deviation). In the evaluation, we report model performance on the true targets and the unseen test dataset. Since the CNN models work on image patches, model predictions are inferred by processing the AOI as \(64\times 64\) Sentinel-1 image patches with 50% overlap. A wall-to-wall prediction map is created by mosaicking patches through linear image blending, using the p-norm with a heuristic value of \(p\!=\!5\), as proposed in [12].

### _Results: Tanzania models_

The Tanzanian test set consists of 14 patches of pseudo-target AGB predictions and true targets from the 88 field plots. Quantitative results in terms of model performance on both the pseudo-target dataset and on the true targets are given in Table III. Metrics for the original ALS-derived AGB model, see [9, 12], and the best sequential cGAN model from [12] are also provided. Note that the best cGAN model from [12] was trained only on pseudo-targets, without access to true targets. We do not report the performance of the original ALS-derived AGB model and the sequential cGAN model on the test dataset, or the \(\mu\!\pm\!\sigma\) CV-RMSE on the true targets, as these metrics were not provided in [9, 12]. All units in Table III are \(\mathrm{Mg\,ha^{-1}}\). Numbers in boldface indicate the best-performing model per column, while (\(\bullet\)) indicates that a model performs better than the baseline ALS model.

### _Results: Norwegian models_

The Norwegian models have all been trained to translate Sentinel-1 data into ALS-derived SV predictions for commercial forests. Table II shows that data from Nordre Land is over-represented in the Norwegian dataset. That is, approximately 80% of the training image patches are from Nordre Land, while only 6% are from the Hole region. Four types of Norwegian models were developed: one pan-regional model that represents all three regions and separate regional models for Nordre Land, Tyristrand and Hole. The pan-regional models were trained on pooled training data from all regions, but evaluated separately on each region's pseudo-target data and true target data. The three regional models were both trained and evaluated on data from each separate region. Since Nordre Land is over-represented in the dataset, we wish to investigate if the pan-regional models evaluated on Nordre Land perform similarly to the corresponding regional models developed for Nordre Land. On the other hand, as the available data from both Hole and Tyristrand are limited, we wish to compare the respective regional models to the pan-regional model.
The aim is to identify and quantify any difference in performance and, if possible, to draw conclusions about transferability and impacts of dataset size. As for the Tanzanian dataset, different CNN models were evaluated against each other by comparing their performance on unseen test patches of pseudo-target data and on true targets of field-measured SV. The number of field plots, i.e. true targets, in each region can be found in Table I. The Hole test set consists of 14 patches of pseudo-target data, Tyristrand of 25 and Nordre Land of 87 test patches, each of \(64\times 64\) pixels. Quantitative results from the evaluation of the pan-regional Norwegian models are listed in Table IV, while Table V lists results for the regional models. For the regional models, only results for the baseline PAR U-Net model and the model pretrained on \(\mathcal{L}_{1}\) and fine-tuned with the \(\mathcal{L}_{cGAN}\) loss are given, as these have proven to be robust on both the Tanzania data and the pan-regional Norwegian dataset. Metrics obtained with the original ALS-derived SV model have been computed for each region by extracting the area-weighted mean of ALS-derived SV predictions at the location of each field plot. The CV-RMSE for the original ALS-derived SV models were not provided to us for this work and are therefore not given in Table IV or Table V. All metrics in both tables are in units of \(\mathrm{m}^{3}\,\mathrm{ha}^{-1}\). Boldface numbers in a column indicate the model that performs best. A (\(\bullet\)) symbol indicates that a model performs better than the baseline ALS model.

## VI Discussion

Six new CNN-based regression models (two baseline and four fine-tuned ones) have been developed to improve earlier work on the Tanzanian dataset using the semi-supervised imputation strategy proposed herein. Above all, Table III shows that the model pretrained on the \(\mathcal{L}_{1}\) loss and fine-tuned on the \(\mathcal{L}_{cGAN}\) loss performs better than the conventional statistical ALS-based AGB model proposed in [9], and all other Tanzanian models on the field data. The CNN model that most accurately recreates the AGB pseudo-target data is pretrained on the \(\mathcal{L}_{1}\) loss and fine-tuned on the combined \(\mathcal{L}_{1}\) and \(\mathcal{L}_{FFT}\) loss, see Table III. The results on the Tanzanian dataset show the potential of a two-stage training paradigm and of frequency-aware training to reduce the impact of spectral bias. Furthermore, the results in Table III show that the baseline PAR U-Net model performs better than the baseline cGAN U-Net model on both the pseudo-target and the true target data. These findings align with existing knowledge in the field of image super-resolution: it is disadvantageous to adopt a purely adversarial training strategy on tasks that require high reconstruction accuracy in terms of RMSE. In this case, employing a simpler pixel-wise regression U-Net is better. The proposed baseline cGAN U-Net model is most similar to the sequential cGAN model proposed in [12]. Table III shows that the proposed semi-supervised imputation strategy improves the CNN models' performance in AGB prediction.

Several new CNN models are also proposed for SV prediction on the Norwegian datasets. Our approach is to train pan-regional models by combining data from all three Norwegian regions, Nordre Land, Tyristrand and Hole, followed by evaluation of test and field data from each individual region.
The purpose of the pan-regional models is to develop models that generalise well to more than one region, which is particularly advantageous for regions with little training data. As a result, these models hold the potential for substantial cost savings if field work can be reduced during operational inventories. According to Table IV, the baseline PAR U-Net model outperforms the other models in accurately recreating the pseudo-target SV data. We advise avoiding the baseline cGAN U-Net model or the pan-regional model that was pretrained on the \(\mathcal{L}_{cGAN}\) loss, followed by fine-tuning on the combined \(\mathcal{L}_{cGAN}\) and \(\mathcal{L}_{FFT}\) losses, when training CNN models for SV prediction. As the models are evaluated on RMSE and not perceptual quality, the results suggest that adversarial training should be avoided in the initial training phase. As demonstrated in Table IV, fine-tuning and the composition of losses generally improve model performance with respect to field data, with few exceptions. Moreover, all pan-regional fine-tuned models perform better than the conventional statistical ALS-based models derived for either Nordre Land, Tyristrand, or Hole. Based on the CV-RMSE, we recommend using fine-tuned models that are pretrained on \(\mathcal{L}_{1}\) and fine-tuned on either the combined \(\mathcal{L}_{1}\) and \(\mathcal{L}_{FFT}\) loss or on \(\mathcal{L}_{cGAN}\). For instance, the model fine-tuned on the combined \(\mathcal{L}_{1}\) and \(\mathcal{L}_{FFT}\) loss performs best on the Hole field data, whereas the model fine-tuned on \(\mathcal{L}_{cGAN}\) performs best on Tyristrand field data.

In addition to the pan-regional models trained on the whole Norwegian dataset, regional models were developed for this work. Unlike the pan-regional models, these were only trained and evaluated on a specific region. Table II shows a significant difference in the amount of available training data among the regions. The Hole region has the least data, followed by Tyristrand, while Nordre Land has the most data. Consequently, the regional Nordre Land model has been trained on almost the same training data as the pan-regional model. For Hole (and Tyristrand), the regional models are trained on only a fraction of the training data available for the pan-regional model, which could impact their relative performance. Based on the discussion above, we train the following two models: a regional PAR U-Net model and a regional model pretrained on \(\mathcal{L}_{1}\) and fine-tuned on the \(\mathcal{L}_{cGAN}\) loss. The fine-tuned model was chosen among the other three, as it has proven to be robust on both the Tanzanian data and the pan-regional Norwegian dataset.

In general, comparing the results of the pan-regional models in Table IV to the regional models in Table V, we observe that the pan-regional Norwegian models perform better than all regional models, with one exception: the regional Nordre Land model pretrained on the \(\mathcal{L}_{1}\) objective and fine-tuned on the \(\mathcal{L}_{cGAN}\) objective performs better than the pan-regional model on the corresponding regional field data. These results show the potential of training models that utilise all available data from nearby regions. To our knowledge, it is the first time that the \(\mathcal{L}_{FFT}\) loss has been evaluated outside the natural image domain, e.g. on remote sensing images.
Our results from both the Tanzanian and Norwegian models show that the simple \(\mathcal{L}_{FFT}\) objective function efficiently reduces the impact of spectral bias and thereby improves the performance of the CNN model.

## VII Conclusion

Through the use of a semi-supervised imputation strategy, we demonstrate the ability of contextual generative CNN models to accurately map Sentinel-1 C-band data to target data consisting of spatially disjoint polygons of ALS-derived prediction maps. The generalisation ability of our modelling approach was evaluated for AGB prediction in the Tanzanian miombo woodlands and for SV prediction in three managed boreal forests in Norway. Our results show that the models developed using the imputation strategy achieve state-of-the-art performance compared to previous studies, suggesting that the contextual C-band SAR-based models outperform conventional statistical ALS-based models in accurately predicting the target labels of ground reference data. Furthermore, we demonstrate that a two-phased learning strategy, which includes pretraining with a pure pixel-wise regression U-Net followed by either a regression cGAN model or a pixel- and frequency-aware regression U-Net in the fine-tuning phase, improves model performance. We argue that pixel-aware pretraining enforces the model to focus on pixel-to-pixel relationships before learning general relationships.

## Acknowledgements

We gratefully acknowledge the Norwegian University of Life Sciences, the Tanzania Forest Services Agency, Prof. Eliakimu Zaababu and coworkers at Sokoine University of Agriculture, Viken Skog and the Swedish University of Agricultural Sciences for participation in field work and provision of in situ measurements, ALS-derived AGB and SV products. Special thanks to Prof. Hakan Olsson for providing ALS data acquired by SLU and to Mr. Svein Dypsund at Viken Skog for providing in situ measurements in Norway. Many thanks to Assoc. Prof. Benjamin Ricaud for valuable input on relevant experience from the field of single-image super-resolution.

Sara Bjork received the M.Sc. degree in Applied Physics and Mathematics from UIT The Arctic University of Norway, in 2016, where she is currently pursuing the Ph.D. degree in physics. Since 2022, she has been working as a system developer in the Earth Observation Team at KSAT Kongsberg Satellite Services. Her research interests include computer vision, image processing, and deep learning, with a particular focus on developing methodologies that leverage deep learning techniques and remote sensing data for forest parameter retrieval.

Stian Normann Anfinsen received the M.Sc. degree in communications, control and digital signal processing from the University of Strathclyde, Glasgow, UK (1998) and the Cand.scient. (2000) and Ph.D. degrees (2010) in physics from UIT The Arctic University of Norway (UIT), Tromsø, Norway. He has been a faculty member at the Dept. of Physics and Technology at UIT since 2014, currently as adjunct professor in machine learning. Since 2021 he has been a senior researcher with NORCE Norwegian Research Centre in Tromsø. His research interests are in statistical modelling and machine learning for image and time series analysis.

Michael Kampffmeyer is an associate professor and head of the Machine Learning Group at UIT The Arctic University of Norway.
He is also an adjunct senior research scientist at the Norwegian Computing Center in Oslo. His research interests include explainable AI and learning from limited labels (e.g. clustering, few/zero-shot learning, domain adaptation and self-supervised learning). Kampffmeyer received the Ph.D. degree from UIT in 2018. He has had long-term research stays in the Machine Learning Department at Carnegie Mellon University and Berlin Center for Machine Learning at the Technical University of Berlin. He is general chair of the annual Northern Lights Deep Learning Conference.

Erik Nesset received the M.Sc. degree in forestry and the Ph.D. degree in forest inventory from the Agricultural University of Norway, Ås, Norway, in 1983 and 1992, respectively. His major field of research is forest inventory and remote sensing, with particular focus on operational management inventories, sample surveys, photogrammetry, and airborne LiDAR. He has played a major role in developing and implementing airborne LiDAR in operational forest inventory. He has been the leader and coordinator of more than 60 research programs funded by the Research Council of Norway, the European Union, and private forest industry. He has published around 250 papers in international peer-reviewed journals. His teaching includes lectures and courses in forest inventory, remote sensing, forest planning, and sampling techniques.

Terje Gobakken is professor in forest planning and has published more than 190 peer-reviewed scientific articles related to forest inventory and planning in international journals. He has been working at the Norwegian National Forest Inventory and participated in compiling reports of emissions and removals of greenhouse gases from land use, land-use change and forestry in Norway. He has coordinated and participated in a number of externally funded projects, including international projects funded by for example NASA and EU, and has broad practical and research-based experience with development of big data and information infrastructures for forest inventory, planning and decision support.

Lennart Noordermeer received the M.Sc. degree in forestry and the Ph.D. degree in forest inventory from the Norwegian University of Life Sciences (NMBU) in 2017 and 2020, respectively. He currently has a researcher position in the Forest Inventory Group at the Faculty of Environmental Sciences and Natural Resource Management, NMBU. His research focuses on operational forest inventory, with emphasis on the use of data from forest harvesters as well as the use of multitemporal remotely sensed data for forest productivity estimation.
2309.01595
Cosmological complexity of the modified dispersion relation
Complexity is becoming more and more essential in high-energy physics, and it naturally extends to the very early universe. Considering the universe as a quantum chaotic system, the curvature perturbation of the scalar field is identified with a two-mode squeezed state. By solving the Schrödinger equation, one can obtain numerical solutions for the angle parameter and the squeezing parameter. The solution of the squeezing parameter mainly determines the evolution of complexity. Our numerical results indicate that the complexity of the modified dispersion relation follows a non-linear pattern after the horizon exit. Meanwhile, its corresponding Lyapunov index is also larger compared with the standard case. During the inflationary period, the complexity oscillates irregularly and its scrambling time is also shorter compared with the standard case. Since the modified dispersion relation can be regarded as a consequence of various frameworks of quantum gravity, our analysis is applicable to these frameworks. Finally, one can expect that such frameworks of quantum gravity will lead to a richer evolution of complexity, which can guide us in distinguishing various inflationary models.
Tao Li, Lei-Hua Liu
2023-09-04T13:26:20Z
http://arxiv.org/abs/2309.01595v3
# Cosmological complexity of the modified dispersion relation

###### Abstract

Complexity is becoming more and more essential in high-energy physics, and it naturally extends to the very early universe. Considering the universe as a quantum chaotic system, the curvature perturbation of the scalar field is identified with a two-mode squeezed state. By solving the Schrödinger equation, one can obtain numerical solutions for the angle parameter and the squeezing parameter. The solution of the squeezing parameter mainly determines the evolution of complexity. Our numerical results indicate that the complexity of the modified dispersion relation follows a non-linear pattern after the horizon exit. Meanwhile, its corresponding Lyapunov index is also larger compared with the standard case. During the inflationary period, the complexity oscillates irregularly and its scrambling time is also shorter compared with the standard case. Since the modified dispersion relation can be regarded as a consequence of various frameworks of quantum gravity, our analysis is applicable to these frameworks. Finally, one can expect that such frameworks of quantum gravity will lead to a richer evolution of complexity, which can guide us in distinguishing various inflationary models.

## I Introduction

The discovery of the anti-de Sitter/conformal field theory (AdS/CFT) correspondence has opened a totally new window and direction for exploring the nature of gravity and strongly coupled systems in condensed matter [1]. In light of this, spacetime can naturally emerge from quantum entanglement [2]. Moreover, one can even suspect that ER (Einstein-Rosen bridge) = EPR (Einstein-Podolsky-Rosen pair), which relates two distinct physical concepts [3]. Further developments can be found in Refs. [4; 5; 6; 7; 8]. Thus, quantum entanglement will be more and more essential in high-energy physics. However, researchers have discovered that the boundary quantum field theory (QFT) reaches thermal equilibrium within a short temporal interval, while the evolution of the wormhole takes a much longer time compared with the QFT [9]. In order to solve this issue, the so-called quantum complexity was introduced to describe the evolution of the corresponding wormhole [10; 11]. In order to relate this concept to the holographic principle, the CA (complexity equals action) conjecture was proposed in [12], in which the complexity is equated to the action evaluated on a bulk subregion called the Wheeler-DeWitt patch. Afterward, many profound results were produced [13; 14; 15; 16; 17; 18; 19; 20]. With the development of complexity, there are two main methods for investigating the so-called circuit complexity: (a) Nielsen et al. utilize a geometrical method within the phase space of quantum gates [21; 22; 23]; (b) one can also use the "Fubini-Study" distance to investigate the circuit complexity [24]. In light of these methods, the complexity can be explicitly obtained by using the wave function [25; 26; 27] or the covariance matrix [28; 29; 30; 31; 32; 33]. Even though the definition of complexity is still preliminary, Ref. [25] already attempts to define it in quantum field theory. Based on it, Hawking radiation can be interpreted via circuit complexity [34; 35]. Furthermore, spacetime can be explained by this complexity [36]. In the geometric method [21], the most convenient quantum system is the inverted harmonic oscillator.
For single-field inflationary theory, the Hamiltonian of the curvature perturbation is similar to that of the inverted harmonic oscillator in momentum space, since the signs of the kinetic term and the potential are opposite. Thus, one can implement the technology of quantum circuit complexity to investigate various cosmological periods. Ref. [37] investigated the complexity, which is most essential after the bounce period, and also estimated the scrambling time. More generic cosmological models were studied in [38], showing that the complexity grows fastest in the matter-dominated era. By considering the backreaction of the background expansion [39], it was found that inflation has the simplest pattern of complexity, which grows linearly after inflation. Ref. [40] investigated the cosmological complexity of the thermal state in the expanding space. In our previous work [41], we have shown that the formation of primordial black holes leads to a different evolution of complexity during inflation, but the evolution is similar after inflation. The complexity is also an integral part of the web of diagnostics for quantum chaos [42; 43; 44; 45; 46; 47], which provides the Lyapunov index that could test the Kerr black hole in various modified gravitational models [48]. Ref. [41] has shown various patterns of complexity during inflation compared with [37; 38; 39], in which the complexity of simple inflationary models is shown. Another point is that the complexity in our study [41] differs only during inflation; for the late universe, the trend of the evolution is similar. In order to capture more information about the complexity in different eras, we will utilize the modified dispersion relation of [49], since it impacts the power spectrum in various periods. From another perspective, the modified dispersion relation can be regarded as a consequence of quantum gravity [50; 51; 52; 53; 54; 55; 56; 57; 58]. It has many phenomenological implications [59; 60; 61; 62; 63; 64; 65; 66; 67], e.g., string cosmology, DBI inflation, the cosmology of loop gravity, _etc._ Thus, the investigation of the cosmological complexity of the modified dispersion relation is applicable to many theoretical frameworks.

The structure of this paper is organized as follows. In Section II, we will review the modified dispersion relation according to [49]. In Section III, we will treat the cosmological perturbation as a two-mode squeezed state to evaluate the evolution of the angle parameter \(\phi_{k}\) and the squeezing parameter \(r_{k}\). Section IV will investigate the evolution of complexity for the modified dispersion relation. In Section V, we will give our conclusions and outlook.

## II The modified dispersion relation

In this section, we will work in the Friedmann-Lemaître-Robertson-Walker background metric, \[ds^{2}=a(\eta)^{2}(-d\eta^{2}+d\vec{x}^{2}), \tag{1}\] where \(\vec{x}=(x,y,z)\) is the spatial vector (denoting the three-dimensional spatial part) and \(a(\eta)\) is the scale factor in conformal time. During the inflationary period, the curvature perturbation starts inside the Hubble radius; as the universe keeps expanding, its wavelength becomes larger and the mode exits the Hubble radius, and this inhomogeneity will re-enter the Hubble radius again at a later stage. This physical process can be characterized by comparing the physical wavelength \(a/k\) with the Hubble radius \(1/H\). Thus, conformal time is more convenient compared with physical time.
Under metric (1), one can define the perturbation of the scalar field as \(\phi(x_{\mu})=\phi_{0}(\eta)+\delta\phi(x_{\mu})\), and the corresponding perturbed metric reads \[ds^{2}=a(\eta)^{2}\left(-(1+\psi(\eta,x))d\eta^{2}+(1-\psi(\eta,x))d\vec{x}^{2}\right), \tag{2}\] where \(\psi(\eta,x)\) is the metric perturbation. With the metrics (1), (2) and the curvature perturbation of the scalar field, the perturbed action can be written as \[S=\frac{1}{2}\int dt\,d^{3}x\,a^{3}\frac{\dot{\phi}^{2}}{H^{2}}\left[\dot{\mathcal{R}}^{2}-\frac{1}{a^{2}}(\partial_{i}\mathcal{R})^{2}\right], \tag{3}\] where \(H=\frac{\dot{a}}{a}\), \(\mathcal{R}=\psi+\frac{H}{\dot{\phi}_{0}}\delta\phi\), \(z=\sqrt{2\epsilon}a\), and \(\epsilon=-\frac{\dot{H}}{H^{2}}\). Action (3) can be transformed into a canonical form in terms of the Mukhanov variable \(v=z\mathcal{R}\), \[S=\frac{1}{2}\int d\eta d^{3}x\left[v^{\prime 2}-(\partial_{i}v)^{2}+\frac{z^{\prime}}{z}v^{2}-2\frac{z^{\prime}}{z}v^{\prime}v\right], \tag{4}\] where \(\prime\) denotes the derivative with respect to conformal time. If we approximate \(\epsilon\) to be a constant, then one can get \(\frac{z^{\prime}}{z}=\frac{a^{\prime}}{a}\), and \[S=\frac{1}{2}\int d\eta d^{3}x\left[v^{\prime 2}-(f(k_{\rm ph})\partial_{i}v)^{2}+\frac{a^{\prime}}{a}v^{2}-2\frac{a^{\prime}}{a}v^{\prime}v\right], \tag{5}\] where \(f(k_{\rm ph})\) follows the definition of Ref. [49], \[f(k_{\rm ph})=\begin{cases}\left(\frac{k_{\rm ph}}{M}\right)^{\alpha},&\text{if }k_{\rm ph}>M,\\ 1,&\text{if }k_{\rm ph}\leq M.\end{cases} \tag{6}\] One can see that \(f=1\) nicely recovers the standard dispersion relation, which is also equivalent to a sound speed of unity, as shown in Ref. [41]. Furthermore, one can see that \(f(k_{\rm ph})\) plays the role of a non-trivial sound speed \(c_{s}\), but its origin is different. As discussed in the introduction, a high energy scale (denoted by \(M\)) will lead to the modified dispersion relation, such as trans-Planckian physics and Lorentz-violating effects, _etc._ However, the formula differs among various theoretical frameworks. As discussed in Refs. [66; 68], the modified dispersion relation is proportional to \(k_{\rm ph}^{\alpha}\) (although the notation for describing the modified dispersion differs), where \(\alpha\) is non-zero in many gravitational models. That is the reason we adopt the formula \(\nu=kf(k_{\rm ph})\) for the dispersion relation. We should emphasize that this formula is applicable in many gravitational models since the modified dispersion relation is of the structure \(k_{\rm ph}^{\alpha}\). Action (5) is our starting point for investigating the evolution of circuit complexity. Varying the action with respect to \(v\) and transforming into momentum space, the equation of motion (EOM) of \(v_{k}\) (where \(k\) denotes the comoving momentum) can be derived as \[v_{k}{}^{\prime\prime}+(\nu^{2}-\frac{a^{\prime\prime}}{a})v_{k}=0. \tag{7}\] Here, \(f=1\) corresponds to the EOM of the curvature perturbation in the single-field inflationary model. According to the condition (6), one can clearly see that the dispersion relation can be classified into two regions: one is the high-energy regime, dubbed the ultraviolet (UV) regime, when \(k_{\rm ph}>M\); on the contrary, there is also an infrared (IR) regime when \(k_{\rm ph}<M\). It is natural to consider \(M\) as the criterion for assessing the energy scale.
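As a quick numerical illustration of the piecewise definition (6), the Python sketch below evaluates \(f(k_{\rm ph})\) and the effective frequency \(\nu=kf(k_{\rm ph})\) over a range of scale factors. Apart from \(M=20\), which matches the value quoted for Fig. 1, the comoving momentum and the range of \(a\) are assumptions made purely for illustration.

```python
import numpy as np

def f_modified(k_ph, M, alpha):
    """Piecewise dispersion factor of Eq. (6): (k_ph/M)^alpha above M, otherwise 1."""
    k_ph = np.asarray(k_ph, dtype=float)
    return np.where(k_ph > M, (k_ph / M) ** alpha, 1.0)

# Illustrative parameters (M = 20 as quoted for Fig. 1; k and the range of a are assumed).
M, k = 20.0, 100.0
a = np.logspace(-1, 2, 200)     # assumed range of scale factors
k_ph = k / a                    # physical momentum

for alpha in (0.0, 0.05, 1.5, 2.0, 3.0):
    nu = k * f_modified(k_ph, M, alpha)   # effective frequency nu = k * f(k_ph)
    print(f"alpha = {alpha}: nu ranges from {nu.min():.3g} to {nu.max():.3g}")
```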
Being armed with this logic, the main feature of the dispersion relation can described by the value of \(\alpha\), which corresponds to the UV regime or the IR regime. Here, we summarized the values in light of [49]. In this kind of relation of the modified dispersion relation, its main feature is characterized by the value of \(\alpha\). \((a)\). The standard dispersion relation corresponds to \(\alpha=0\) is also the standard inflationary model [69], which belongs to the IR regime \((b)\). As \(0<\alpha<2\), it also belongs to the UV regime. \((c)\). As \(\alpha=2\), it is a special kind of Horv\(\check{a}\)-Lifshitz cosmology [70; 71]. \((d)\). As \(\alpha>2\), it also belongs to the UV regime. According to the above understanding, it explicitly shows that theory will be in the UV regime as \(\alpha>0\), which is consistent with our previous discussion since the modified dispersion relation comes via the quantum gravitational effects corresponding to \(\alpha\) is nonzero. From another aspect, we only assume that \(\alpha>0\), since the power spectrum will be conflicted with observation according to \(\hat{P}_{\delta\phi}\propto k^{\alpha}\) as \(\alpha<0\). Our discussion only focuses on the power spectrum in light of Ref. [49]. Therefore, the value of \(\alpha\) is independent of specific cosmological models. Furthermore, we will adopt some specific values of \(\alpha\) for the following investigations. For intuitively understanding its various cases of \(f(k_{\rm ph})\), we give the plot of \(f^{2}\) since it plays the role of non-trivial sound speed. Fig. 1 clearly indicates the various cases of \(f^{2}\) varying with comoving momentum. Since it is a simple power-law function in terms of \(\alpha\), in which we give five cases: \(\alpha=0\), \(\alpha=0.05\), \(\alpha=1.5\), \(\alpha=2\) and \(\alpha=3\) that corresponds to our previous illustrations. Here, there is only one thing needs to be noticed that \(\alpha_{2}=1.5\), which we expect that its corresponding complexity could show the deviation compared with the case of the standard dispersion relation. ## III The squeezed quantum states for cosmological perturbations The complexity describes the unstable and chaotic features of a statistical system. The most simple system is the inverted harmonic oscillator, where its corresponding Hamilton is denoted by \(H=\frac{1}{2}p^{2}-\frac{1}{2}kx^{2}\) with \(k\) is the frequency and \(p\) is the momentum. Thereafter, one can follow the standard procedure of canonical quantization to investigate its corresponding complexity. Noticing that the sign between the kinetic term and the potential term is opposite, in which the Hamilton of curvature perturbation (can be canonical quantization) has the same situation in momentum space. Thus, it is naturally implemented the technology of quantum information to investigate the complexity of curvature perturbation. Ref. [72] has shown that the curvature perturbation can be transited via the squeezed state, and the wave function of the quantum inverted harmonic oscillator is the Gaussian distribution. Thus, the squeezed state will be implemented into the investigation of complexity for the curvature perturbation of inflation. According to EOM (7), one can construct its corresponding action as follows \[S=\int d\eta L=\frac{1}{2}\int d\eta d^{3}x\left[v^{\prime 2}-f^{2}(\partial_{ i}v)^{2}+\frac{a^{\prime}}{a}v^{2}-2\frac{a^{\prime}}{a}v^{\prime}v\right], \tag{8}\] where we do not transform it into the momentum space and \(f\) is (6). 
Once obtaining the action, one can define its canonical momentum, \[\pi(\eta,\vec{x})=\frac{\delta L}{\delta v^{\prime}(\eta,\vec{x})}=v^{\prime} -\frac{a^{\prime}}{a}v. \tag{9}\] And the Hamiltonian \(H=\int d^{3}x(\pi v^{\prime}-\mathcal{L})\), thus we could obtain \[H=\frac{1}{2}\int d^{3}x\left[\pi^{2}+f^{2}(\partial_{ i}v)^{2}+\frac{a^{\prime}}{a}(\pi v+v\pi)\right]. \tag{10}\] Following the standard procedure of quantum field theory, one promote the variable \(v\) and \(\pi\) as Fourier modes, \[\hat{v}(\eta,\vec{x})=\int\frac{d^{3}k}{(2\pi)^{3/2}}\sqrt{\frac{ 1}{2k}}(\hat{c}^{\dagger}_{-\vec{k}}v^{*}_{\vec{k}}(\eta)e^{-i\vec{k}\cdot\vec {x}}+c_{\vec{k}}v_{\vec{k}}e^{i\vec{k}\cdot\vec{x}}), \tag{11}\] \[\hat{\pi}(\eta,\vec{x})=i\int\frac{d^{3}k}{(2\pi)^{3/2}}\sqrt{\frac {\overline{p}}{2}}(\hat{c}^{\dagger}_{-\vec{k}}u^{*}_{\vec{k}}(\eta)e^{-i\vec{ k}\cdot\vec{x}}-\hat{c}_{\vec{k}}u_{\vec{k}}e^{i\vec{k}\cdot\vec{x}}), \tag{12}\] where \(\hat{c}^{\dagger}_{-\vec{k}}\) and \(\hat{c}_{\vec{k}}\) represent the creation and annihilation operators, respectively. And then we choose an appropriate normalization condition for mode functions \(u_{k}(\eta)\), \(v_{k}(\eta)\), and we can get the following Hamiltonian \[\hat{H}=\int d^{3}k\hat{H}_{k}= \int d^{3}k[\frac{k}{2}(f^{2}_{s}+1)\hat{c}^{\dagger}_{-\vec{k}} \hat{c}_{-\vec{k}}+\frac{k}{2}(f^{2}_{s}+1)\hat{c}_{\vec{k}}\hat{c}^{\dagger}_ {\vec{k}} \tag{13}\] \[+(\frac{k}{2}(f^{2}_{s}-1)+i\frac{a^{\prime}}{a})\hat{c}^{ \dagger}_{\vec{k}}\hat{c}^{\dagger}_{-\vec{k}}\] \[+(\frac{k}{2}(f^{2}_{s}-1)-i\frac{a^{\prime}}{a})\hat{c}_{\vec{k} }\hat{c}_{-\vec{k}}].\] If \(f^{2}=1\), it will nicely recover the Hamilton of the standard dispersion relation, which is the same as sound speed is one as shown in [41]. Observing that Fourier mode contains \(\hat{c}_{-\vec{k}}\) and \(\hat{c}_{\vec{k}}\), which the unitary operator should be of two modes. The wave function is Gaussian distribution, one usually implements the unitary evolution operator acting on this Gaussian-type wave function whose form can be parameterized in the factorized form [72; 73] as follows, \[\hat{\mathcal{U}}_{\vec{k}}(\eta,\eta_{0})=\hat{\mathcal{S}}_{ \vec{k}}(r_{k},\phi_{k})\hat{\mathcal{R}}_{\vec{k}}(\theta_{k}). \tag{14}\] In the above equation, \(\hat{\mathcal{R}}_{\vec{k}}\) is the two-mode rotation operator, which could be written in terms of the rotation angle \(\theta_{k}(\eta)\) \[\hat{\mathcal{R}}_{\vec{k}}(\theta_{k})=\exp[-i\theta_{k(\eta)}( \hat{c}_{\vec{k}}\hat{c}^{\dagger}_{\vec{k}}+\hat{c}^{\dagger}_{-\vec{k}}\hat{ c}_{-\vec{k}})]. \tag{15}\] Figure 1: It shows that the modified dispersion relation varies with \(k/a\). We have set \(M=20\) and set \(\alpha=0\) (corresponding to the standard dispersion relation), \(\alpha=0.05\), \(\alpha=1.5\), \(\alpha=2\) (Horv\(\mathring{a}\)-Lifshitz gravity) and \(\alpha=3\). Meanwhile, \(\hat{\mathcal{S}}_{\vec{k}}\) is the two-mode squeeze operator written in terms of the squeezing parameter \(r_{k}(\eta)\) and the squeezing angle \(\phi_{k}(\eta)\), respectively. \[\hat{\mathcal{S}}_{\vec{k}}(r_{k},\phi_{k})=\exp[r_{k}(\eta)(e^{-2i\phi_{k}(\eta )}\hat{c}_{\vec{k}}\hat{\mathcal{E}}_{-\vec{k}}-e^{2i\phi_{k}(\eta)}\hat{c}_{- \vec{k}}^{\dagger}\hat{c}_{\vec{k}}^{\dagger})]. \tag{16}\] when (15) acts on the initial value of vacuum, it only generates the irrelevant phase factor which will not impact the evolution of wave functions, thus it can be neglected. 
When using squeezed operator (16) to act on the vacuum \(\ket{0;0}_{\vec{k},-\vec{k}}\), one can obtain the two-mode squeezed state as follows, \[\ket{\psi}_{\mathrm{sq}}=\sum_{n=0}^{\infty}(-1)^{n}e^{2in\phi_{k}}\tanh^{n}r_ {k}\ket{n;n}_{\vec{k},-\vec{k}}, \tag{17}\] where \(\ket{n;n}_{\vec{k},-\vec{k}}\) represents the two-mode excited state, which has the following relationship with the two-mode vacuum state \(\ket{0;0}_{\vec{k},-\vec{k}}\) \[\ket{n;n}_{\vec{k},-\vec{k}}=\frac{1}{n!}(\hat{c}_{\vec{k}}^{\dagger})^{n}( \hat{c}_{-\vec{k}}^{\dagger})^{n}\ket{0;0}_{\vec{k},-\vec{k}}. \tag{18}\] Combine Eqs. (13) (17) with Schrodinger equation, \[i\frac{d}{d\eta}\ket{\psi}_{\mathrm{sq}}=\hat{H}_{k}\ket{\psi}_{\mathrm{sq}}, \tag{19}\] when we use the Hamilton operator and \(i\frac{d}{d\eta}\) acting on the wave function of squeezed state, the real part and imaginary part would generate the following equations for \(\phi_{k}(\eta)\) and \(r_{k}(\eta)\), \[-\frac{dr_{k}}{d\eta} =-\frac{k}{2}(f^{2}-1)\sin(2\phi_{k})+\frac{a^{\prime}}{a}\cos(2 \phi_{k}), \tag{20}\] \[\frac{d\phi_{k}}{d\eta} =-\frac{k}{2}(f^{2}+1)+\frac{k}{2}(f^{2}-1)\cos(2\phi_{k})\coth (2r_{k})\] \[+\frac{a^{\prime}}{a}\sin(2\phi_{k})\coth(2r_{k}),\] where Eq. (20) comes via the imaginary part of Schr\(\ddot{o}\)dinger equation (19) and Eq. (21) is the real part of Schr\(\ddot{o}\)dinger equation (19). To be honest, these two equations are very difficult to solve even for the numerical simulations, thus we change the variable \(\eta\) as \(\log_{10}a\), and then Eqs. (20), (21) will become as follows, \[\frac{10^{y}H_{0}}{\ln 10}\frac{dr}{dy} =\frac{k}{2}[(\frac{k_{\mathrm{ph}}}{M})^{2\alpha}-1]\sin(2\phi_{ k})-aH_{0}\cos(2\phi_{k}), \tag{22}\] \[\frac{10^{y}H_{0}}{\ln 10}\frac{d\phi_{k}}{dy}= -\frac{k}{2}[(\frac{k_{\mathrm{ph}}}{M})^{2\alpha}+1]+aH_{0}\sin( 2\phi_{k})\coth(2r_{k})\] \[+\frac{k}{2}[(\frac{k_{\mathrm{ph}}}{M})^{2\alpha}-1]\cos(2\phi_ {k})\coth(2r_{k}), \tag{23}\] where we have defined \(y=\log_{10}a\). With these two equations, one numerically simulate \(\phi_{k}\) and \(r_{k}\) in Figs. (2,3). Fig. 2, it shows that the evolution of \(r_{k}\). As \(\alpha=0\), it will recover into the standard dispersion relation, also equal to the sound speed \(c_{s}=1\) as shown in [41; 37]. In [41], it shows that the non-trivial sound speed will lead, where the deviation cannot be large corresponding to \(\alpha<1\), to the fast oscillation and lagging the linear growth. In Fig. 2, it clearly indicates the same trend [41] when \(\alpha<1\) which the \(f\) plays a role of \(c_{s}\). The difference comes via the case of \(\alpha>1\), in which it shows that there is the damping of the oscillation and the damping will be more significant as enhancing the value of \(\alpha\). Another difference is that the time scale for reaching the linear growth after the horizon exit will be shortened as \(\alpha>1\), which these cases all belong to the UV regime. Physically speaking, the string theory, trans-Planckian physics, and Lorentz-violating physics will lead to this damping behavior of \(r_{k}\), where it also includes the Horv\(\check{a}\)-Lifshitz cosmology [70; 71]. In the later investigation of complexity, we will show that the complexity will be mainly influenced by \(r_{k}\). As for \(\phi_{k}\) in Fig. 3, the trend of various cases is the same where it will grow until reaching a constant. The only difference is that the constant will be larger as enhancing the value of \(\alpha\) which will not significantly impact the complexity. 
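A minimal numerical sketch of how Eqs. (22) and (23) could be integrated with SciPy is given below. It is not the authors' code: the comoving momentum \(k\), the exponent \(\alpha\), the range in \(y=\log_{10}a\) and the initial values of \(r_{k}\) and \(\phi_{k}\) are illustrative assumptions (a small non-zero \(r_{k}\) avoids the \(\coth(2r_{k})\) singularity at \(r_{k}=0\)), while \(H_{0}=1\) and \(M=20\) follow the choices quoted for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

H0, M, k, alpha = 1.0, 20.0, 100.0, 1.5   # H0, M as in the figures; k, alpha assumed

def f_sq(k_ph):
    # Piecewise f^2 from Eq. (6): (k_ph/M)^(2*alpha) above M, otherwise 1.
    return np.where(k_ph > M, (k_ph / M) ** (2.0 * alpha), 1.0)

def rhs(y, state):
    # y = log10(a) is the independent variable; state = [r_k, phi_k].
    r, phi = state
    a = 10.0 ** y
    k_ph = k / a
    pref = np.log(10.0) / (a * H0)   # converts the conformal-time equations to d/dy
    coth = 1.0 / np.tanh(2.0 * r)
    drdy = pref * (0.5 * k * (f_sq(k_ph) - 1.0) * np.sin(2.0 * phi)
                   - a * H0 * np.cos(2.0 * phi))
    dphidy = pref * (-0.5 * k * (f_sq(k_ph) + 1.0)
                     + a * H0 * np.sin(2.0 * phi) * coth
                     + 0.5 * k * (f_sq(k_ph) - 1.0) * np.cos(2.0 * phi) * coth)
    return [drdy, dphidy]

# Illustrative initial data deep inside the horizon (assumed values).
sol = solve_ivp(rhs, (0.0, 7.0), [1.0e-4, -np.pi / 4.0],
                method="LSODA", rtol=1e-8, atol=1e-10)
print("squeezing parameter r_k at the end of the run:", sol.y[0, -1])
```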
Being armed with these two parameters, one can investigate the evolution of the modified dispersion relation according to Figs. (2, 3). Here, we only consider the tree level of the quantum perturbations where we have ignored the contribution from the backreaction of perturbations. However, it may play a significant role from the inflationary scale to the energy scale below self-reproduction [74]. ## IV The complexity of modified dispersion relation Nielson's geometric method will be implemented for investigating the evolution of complexity [21; 22; 23]. For a given reference state, one can define it as \(\left|\psi^{R}\right\rangle\) at \(\tau=0\). At \(\tau=1\), the corresponding target state will be connected with the reference state via the unitary operator as follows, \[\left|\psi^{T}\right\rangle_{\tau=1}=U(\tau=1)\left|\psi^{R}\right\rangle_{ \tau=0}, \tag{24}\] where \(\tau\) could parameterize the Hilbert space in quantum mechanics. Following QFT, this unitary operator will be constructed from a path-ordered exponential of a Hamiltonian operator \[U(\tau)=\overleftarrow{\mathcal{P}}exp\left(-i\int_{0}^{\tau}dsH(s)\right), \tag{25}\] where \(\overleftarrow{\mathcal{P}}\) is the path ordering from right to left. In this framework, Hamiltonian operator will be formed in terms of a basis of Hermitian operators \(M_{I}\) base, which are the generators for elementary logic gates \[H(s)=Y(s)^{I}M_{I}, \tag{26}\] where \(Y(s)^{I}\) is identified with the control function which determines which gate will be switched on of switched off. Meanwhile, it also satisfies with the Schr\(\ddot{o}\)dinger equation \[\frac{dU}{ds}=-iY(s)^{I}M_{I}U(s), \tag{27}\] In order to define the complexity, one should introduce the function to associated with circuit complexity \[C(U)=\int_{0}^{1}\mathcal{F}(U,\dot{U})d\tau. \tag{28}\] Then, one can obtain the complexity by minimizing the cost function (28). Afterword, one can find shortest (geodesic) line between the reference state and target state. First, we will pay attention to the quadratic cost function, \[\mathcal{F}(U,Y)=\sqrt{\sum_{l}(Y^{I})^{2}}. \tag{29}\] The target wave function is the two-mode squeezed state (17), then one can transform it into the momentum space and we could obtain this, \[\Psi_{\text{sq}}(q_{\vec{k}},q_{-\vec{k}}) =\sum_{n=0}^{\infty}(-1)^{n}\frac{\tanh^{n}r_{k}}{\cosh^{n}r_{k} }\left\langle q_{\vec{k}};q_{-\vec{k}}|n;n\right\rangle_{\vec{k},-\vec{k}} \tag{30}\] \[=\frac{\exp[A(r_{k},\phi_{k})\cdot(q_{k}^{2}+q_{-k}^{2})-B(r_{k},\phi_{k})q_{\vec{k}}q_{-\vec{k}}]}{\cosh r_{k}\sqrt{\pi}\sqrt{1-e^{-4i\phi_{ k}}}\tanh^{2}r_{k}},\] where \(A(r_{k},\phi_{k})\) and \(B(r_{k},\phi_{k})\) are \[A(r_{k},\phi_{k}) =\frac{k}{2}\left(\frac{e^{-4i\phi_{k}}\tanh^{2}r_{k}+1}{e^{-4i \phi_{k}}\tanh^{2}r_{k}-1}\right), \tag{31}\] \[B(r_{k},\phi_{k}) =\frac{k}{2}\left(\frac{e^{-2i\phi_{k}}\tanh^{2}r_{k}}{e^{-4i \phi_{k}}\tanh^{2}r_{k}-1}\right). \tag{32}\] In vector spaces of \((q_{\vec{k}},q_{-\vec{k}})\), Eq. (30) can be written in terms of diagonal matrix form after some rotation, \[\Psi_{\text{sq}}(q_{\vec{k}},q_{-\vec{k}})=\frac{\exp[-\frac{1}{2}\tilde{M}^{ ab}q_{a}q_{b}]}{\cosh r_{k}\sqrt{\pi}\sqrt{1-e^{-4i\phi_{k}}}\tanh^{2}r_{k}}, \tag{33}\] \[\tilde{M}=\begin{pmatrix}\Omega_{\vec{k^{\prime}}}&0\\ 0&\Omega_{-\vec{k^{\prime}}}\end{pmatrix}=\begin{pmatrix}-2A+B&0\\ 0&-2A-B\end{pmatrix},\] Naturally, one can consider the squeezed vacuum state as the reference state. 
Then using the same standard procedure to denote it as \[\Psi_{00}(q_{\vec{k}},q_{-\vec{k}}) =\left\langle q_{\vec{k}};q_{-\vec{k}}|0;0\right\rangle_{\vec{k},- \vec{k}} \tag{34}\] \[=\frac{\exp[-\frac{1}{2}(\omega_{\vec{k}}q_{\vec{k}}^{2}+\omega_{ -\vec{k}}q_{-\vec{k}}^{2})]}{\pi^{1/2}}\] \[=\frac{\exp[-\frac{1}{2}\tilde{M}^{ab}q_{a}q_{b}]}{\pi^{1/2}}\] where \[\tilde{M}=\begin{pmatrix}\Omega_{\vec{k^{\prime}}}&0\\ 0&\Omega_{-\vec{k^{\prime}}}\end{pmatrix}. \tag{35}\] Note that Eq. (34) is the Gaussian distribution that agreed with our previous discussion, and meanwhile the wave function (33) is also of the Gaussian distribution. Thus, the unitary operator (25) will not change the structure of wave functions. According to (24), one can relate the reference states (34) (33) to their corresponding target state via (25), \[\Psi_{\tau}(q_{\vec{k}},q_{-\vec{k}})=\tilde{U}(\tau)\Psi_{00}(q_{\vec{k}},q_{ -\vec{k}})\tilde{U}^{\dagger}(\tau), \tag{36}\] \[\Psi_{\tau=0}(q_{\vec{k}},q_{-\vec{k}})=\Psi_{00}(q_{\vec{k}},q_{-\vec{k}}), \tag{37}\] \[\Psi_{\tau=1}(q_{\vec{k}},q_{-\vec{k}})=\Psi_{\rm sq}(q_{\vec{k}},q_{-\vec{k}}), \tag{38}\] where \(U(\tau)\) is a \(GL(2,C)\) unitary matrix that gives the geodesic line of parameter space between the reference state and target state. Following [25], \(U(\tau)\) will take the form as follows \[\tilde{U}(\tau)=\exp[\sum_{k=1}^{2}Y^{k}(\tau)M_{k}^{\rm diag}], \tag{39}\] where \(M_{k}^{\rm diag}\) denotes two generator of \(GL(2,C)\) that defines as \[M_{1}^{\rm diag}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix},M_{2}^{\rm diag}=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}. \tag{40}\] As we discussed, \(U(\tau)\) could parametrize the geodesic line in this group manifold. Thus, the off-diagonal elements are zero since it will generate the nontrivial curvature of this group manifold. As for \(Y_{I}(\tau)\), it could be constructed from [75] as follows, \[Y_{I}(\tau)=Y_{I}(\tau=1)\cdot\tau+Y_{I}(\tau=0). \tag{41}\] From the boundary conditions (37) and (38), it could obtain that \[{\rm Im}({\rm Y}^{1,2})|_{\tau=0}={\rm Re}({\rm Y}^{\rm I})|_{\tau=0}=0, \tag{42}\] \[{\rm Im}({\rm Y}^{1,2})|_{\tau=1}=\frac{1}{2}\ln\frac{|\Omega_{\vec{k},-\vec{ k}}|}{\omega_{\vec{k},-\vec{k}}}, \tag{43}\] \[{\rm Re}({\rm Y}^{1,2})|_{\tau=1}=\frac{1}{2}\arctan\frac{{\rm Im}(\Omega_{ \vec{k},-\vec{k}})}{{\rm Re}(\omega_{\vec{k},-\vec{k}})}. \tag{44}\] Being armed with these conditions, we can write the complexity as the geodesic line in the parametric manifold as follows, \[C(\tilde{U})=\int_{0}^{1}d\tau\sqrt{G_{IJ}\dot{Y}^{I}(\tau)\dot{Y}^{I}(\tau)^ {*}}, \tag{45}\] where \(G_{ij}\) is the induced metric of the group manifold not for the spacetime. In Ref. [25], it shows that \(G_{IJ}\) could have an arbitrary structure corresponding to various structures of the group manifold. As we mentioned, our group structure is \(GL(2,C)\) whose induced metric is flat. Therefore, one can substitute Eq. (41) into Eq. (45) for obtaining its corresponding complexity, \[\begin{split} C(k)&=\frac{1}{2}[(\ln\frac{|\Omega_ {\vec{k}}|}{\omega_{\vec{k}}})^{2}+(\arctan\frac{{\rm Im}(\Omega_{\vec{k}})}{{ \rm Re}(\omega_{\vec{k}})})^{2}\\ &+(\ln\frac{|\Omega_{-\vec{k}}|}{\omega_{-\vec{k}}})^{2}+(\arctan \frac{{\rm Im}(\Omega_{-\vec{k}})}{{\rm Re}(\omega_{-\vec{k}})})^{2}],\end{split} \tag{46}\] where the information of \(\phi_{k}\) and \(r_{k}\) are including in the coefficients (31) and (32). Then, one can implement the numeric as shown in Figs. (2,3) to investigate the evolution of complexity. Fig. 
4 shows the evolution of complexity varying with the scale \(\log_{10}a\). The most essential physical quantities are the scrambling time and the Lyapunov index. In light of Fig. 4, the scrambling time can be defined as the time when the complexity begins to increase, and the Lyapunov index can be identified with the slope of the complexity when it grows linearly, as shown in [44]. In Fig. 4, we can see that the Lyapunov index for \(\alpha>1\) is larger compared with \(\alpha<1\), which physically means that the frameworks of string theory, trans-Planckian physics and Lorentz-violating physics will lead to more chaos compared with the standard dispersion relation. Our new finding is that the complexity manifests non-linear growth for \(\alpha>1\) compared with \(\alpha<1\), which is not included in [37; 38; 39; 41]. Thus, we conjecture that if inflation arises from string theory, trans-Planckian physics, or Lorentz-violating physics, the complexity will show a non-linear evolution after the horizon exit. As for the scrambling time, the cases of \(\alpha>1\) will be shorter compared with \(\alpha<1\), including the standard dispersion relation. Finally, the oscillation of complexity occurs around the inflationary period, where one can see that there is no exact set of rules for the damping behaviors for \(\alpha>1\), which may be influenced by the evolution of \(r_{k}\) as shown in Fig. 2. To sum up, the complexity for \(\alpha>1\) shows more chaotic features compared with \(\alpha<1\).

Figure 4: The numerical solutions of \(\phi_{k}(\eta)\) in terms of \(\log_{10}a\) with \(\alpha=0\), \(\alpha=0.05\), \(\alpha=1.5\), \(\alpha=2\), and \(\alpha=3\). Our plots adopt \(H_{0}=1\). The scrambling time is identified with the point where the complexity begins to increase, and the Lyapunov index can be read off where the complexity grows linearly [44].

## V Summary and discussion

The modified dispersion relation (6) can be considered as a consequence of quantum gravity, including string cosmology, DBI inflation, the cosmology of loop gravity, _etc._ As we discussed in Sec. II, the modified dispersion relation is applicable in many frameworks. From the perspective of quantum information, the dispersion relation could show various patterns of the evolution of complexity. Based on the above two points, we implement the two-mode squeezed state to investigate the complexity. First and foremost, we write down the Hamiltonian in terms of creation and annihilation operators (13) in light of the inverted harmonic oscillator. Afterward, we numerically solve (19) so as to obtain the numerical solutions of \(\phi_{k}\) and \(r_{k}\) as shown in Figs. (2,3). Then, we use them to show how the complexity varies with the scale, as shown in Fig. 4. The main results of this paper can be summarized as follows: \((a)\). Fig. 2 clearly indicates that there will be damping behavior for \(\alpha>1\), which manifests the effects of the modified dispersion relation, and the oscillation is shorter compared with \(\alpha<1\). All of these cases will grow linearly after the horizon exit. As for \(\alpha<1\), including the standard dispersion relation, it only shows the oscillation before the horizon exit. \((b)\). Fig. 3 indicates that the varying trend of \(\phi_{k}\) is the same, where it grows first and then approaches a constant value. The only difference is that the maximal value of \(\phi_{k}\) will be enhanced by increasing \(\alpha\). \((c)\). Our new findings are mainly included in Fig. 4.
First, the complexity presents a non-linear evolution after the horizon exit for \(\alpha>1\); meanwhile, this case also shows irregular damping oscillations compared with \(\alpha<1\). The slope of the complexity when it grows linearly can be identified with the Lyapunov index, which describes the chaotic features of a statistical system. Thus, Fig. 4 clearly indicates that the Lyapunov index is larger compared with the standard dispersion relation. Another important quantity describing a chaotic system is the scrambling time, which can be dubbed as the time when the complexity begins to increase. Obviously, the scrambling time for \(\alpha>1\) is shorter compared with \(\alpha<1\). Thus, we conjecture that various frameworks of quantum gravity will lead to a richer evolution of the complexity.

Our work is based on single-field inflation, and the gravitational part only contains the Einstein-Hilbert action, namely \(R\) (the Ricci scalar). Thus, our methods naturally extend to \(f(R)\) gravitational models [76; 77] and multi-field inflationary models [78; 79; 80; 81]. The key point is that we need to develop the technology for obtaining their quantum Hamiltonians in terms of creation and annihilation operators. K-essence and D-brane models naturally contain the modified dispersion relation (equivalent to a non-trivial sound speed in some sense) [82; 83], thus our analysis could also apply to them. Further, we can apply the modified dispersion relation to the Krylov complexity [84; 85].

## Acknowledgements

LH and TL are funded by NSFC grant NO. 12165009 and Hunan Natural Science Foundation NO. 2023JJ30487.
2304.10464
Learning to Plan with Natural Language
Large Language Models (LLMs) have shown remarkable performance in various basic natural language tasks. For completing the complex task, we still need a plan for the task to guide LLMs to generate the specific solutions step by step. LLMs can directly generate task plans, but these plans may still contain factual errors or are incomplete. A high-quality task plan contains correct step-by-step solutions for solving all situations and behavioral instructions for avoiding mistakes. To obtain it, we propose the Learning to Plan method, which involves two phases: (1) In the first learning task plan phase, it iteratively updates the task plan with new step-by-step solutions and behavioral instructions, which are obtained by prompting LLMs to derive from training error feedback. (2) In the subsequent test phase, the LLM uses the learned task plan to guide the inference of LLM on the test set. We demonstrate the effectiveness of our method on the five different reasoning type tasks (8 datasets). Further, our analysis experiment shows that the task plan learned by one LLM can directly guide another LLM to improve its performance, which reveals a new transfer learning paradigm. We release the code at \url{https://github.com/Eureka6174/LearnNLPlan}
Yiduo Guo, Yaobo Liang, Chenfei Wu, Wenshan Wu, Dongyan Zhao, Nan Duan
2023-04-20T17:09:12Z
http://arxiv.org/abs/2304.10464v4
# Learning to Program with Natural Language ###### Abstract Large Language Models (LLMs) have shown remarkable performance in various basic natural language tasks, which raises hope for achieving Artificial General Intelligence. For completing the complex task, we still need a program for the task first and then ask LLMs to follow the program to generate the specific solution. We propose using natural language as a new programming language to describe task procedures, making them easily understandable to both humans and LLMs. The LLM is capable of directly generating natural language programs, but these programs may still contain factual errors or incomplete steps. Therefore, we further propose the Learning to Program (LP) method to ask LLMs themselves to learn the natural language program based on the training dataset of the complex task first and then use the learned program to guide the inference. Our experiments on the reasoning tasks of five different reasoning types (8 datasets) demonstrate the effectiveness of our approach. Further, our analysis experiment shows that the learned program can be directly used to guide another LLM to improve its performance, which reveals a new transfer learning paradigm1. Footnote 1: The code is at [https://github.com/microsoft/NaturalLanguageProgram](https://github.com/microsoft/NaturalLanguageProgram). ## 1 Introduction Large Language Models (LLMs), such as ChatGPT and GPT-4 [21], have recently achieved strong zero-shot/few-shot performance on various natural language tasks, such as generating passages [2], generating code [18], and solving grade school math problems [23]. LLMs can further learn new basic abilities by connecting them with millions of APIs like TaskMatrix.AI [17; 31] or new tools like ToolFormer [25]. However, LLMs still struggle to complete complex tasks, such as writing a long novel [32], coding for a large project [22], and solving complex math problems [6]. This indicates that knowing every basic capability is insufficient to complete complex tasks - we also require a procedure for how to combine them. Like programmers who use programming languages to teach computers how to complete complex tasks, we propose leveraging natural language as the new programming language to teach LLMs how to complete complex tasks. To enable natural language programming for LLMs, we propose to make the inference process into two steps explicitly: In the first step, we find a natural language program \(p\) for the test sample. In the second step, we use the test sample, prompt, and natural language program \(p\) as the input to LLMs to generate the specific solution. Natural Language Programs can guide LLMs by first analyzing all possible cases and then decomposing the complex task into sub-tasks, sequentially employing functions to complete the task. These programs provide general solutions that explain the generic steps to solve the task, including which basic capability to use and the necessary background knowledge. For example, if the task is to calculate the sine value of an angle in a right triangle, given the lengths of its legs, the natural language program for this task would be as follows: First, use the Pythagorean theorem to calculate the length of the leg whose length is not given. Then, calculate the sine value of the target angle based on its definition. 
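As a minimal sketch of the two-step inference described above, the snippet below separates program generation from program-guided answering; `call_llm` is a hypothetical placeholder for any chat-completion client, and the prompt wording is illustrative rather than the authors' exact prompts.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; plug in a real client here."""
    raise NotImplementedError

def generate_program(test_sample: str) -> str:
    # Step 1: ask the LLM for a natural language program, i.e. a general
    # step-by-step solution covering all questions similar to the sample.
    return call_llm(
        "Write a natural language program (a general, step-by-step solution) "
        f"for solving all questions similar to the following one:\n{test_sample}"
    )

def solve_with_program(test_sample: str, program: str) -> str:
    # Step 2: condition the LLM on the program when answering the sample.
    return call_llm(
        f"Program:\n{program}\n\nQuestion: {test_sample}\n"
        "Follow the program step by step and give the final answer."
    )
```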
Natural language programs offer two key advantages: (1) generalizability, as they can guide LLMs to solve all questions similar to the test sample, and (2) human understandability, as humans can read and edit them. Humans can edit the program to better guide LLMs, while LLM-created programs can teach humans new knowledge about the task. In complex tasks, programs are essential for LLMs to systematically organize basic components (e.g., APIs) and solve complex tasks as they usually involve high variance cases, multi-step reasoning, and professional knowledge. The simplest way to find a natural language program is to ask LLMs to generate a program consisting of general solutions. We call it the **Self-Program** (SP) method. However, this approach may result in programs with factual errors or programs that are incomplete. One way to fix the poor-quality generated program is to continually update the parameter of LLMs based on the training corpus collected from human feedback. However, we usually only have API access to strong LLMs, such as ChatGPT, GPT-4 [21], and PALM [5]. We propose another method to improve the quality of the natural language program by learning the natural language program in text form based on the training dataset. **The learned natural language program \(p\) can then guide LLMs in solving the complex task. In the future, it can be introduced with knowledge written by humans into the updating process of LLMs to enhance LLMs' capabilities.** Our **Learning to Program** (LP) method, depicted in Figure 1, is inspired by the traditional deep learning process. LP follows a similar process, with one key difference: instead of learning the vector weight, LP learns program \(p\) as text. In the first **making predictions with program** step, LLMs make predictions for the training data from the batch based on the guidance of the natural language program. And then we collect the wrong samples by comparing the predictions with the labels (**computing loss**). Next, in the **computing revision** (computing gradient) step, LP employs a strategy similar to the back-forward propagation algorithm but instead generates program revision \(\nabla p\) (analogous to the gradient) that improves the learned natural language program \(p\). Specifically, LP uses a revision strategy to generate program revision candidates from the errors. Next, it respectively compresses the program revision candidates to make them shorter. Then it verifies the candidates and finds the best candidate as the program revision \(\nabla p\). In the final **updating program** step, we directly add \(\nabla p\) into the program to update it. When the program is too long, we also consider compressing it to make it shorter. We verify the performance of our LP method on reasoning tasks of five different reasoning types: mathematical reasoning, causal reasoning, logical reasoning, symbolic reasoning task, and combinatorial reasoning. Our method improves the performance of these tasks significantly. For example, our LP method's average performance of 10 tasks from the AMPS mathematical dataset [10] outperforms the performance directly measured in the zero-shot/few-shot chain of thought setting by 18.3\(\%\)/7\(\%\) respectively (Table 1). Furthermore, the learned program from one LLM, such as the GPT-3.5 Turbo model, can directly guide the inference of another LLM, like GPT-4, improving its performance. 
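The learning procedure of Fig. 1(b) can be summarised in a compact sketch. The code below is a simplified illustration: it appends a single revision per batch and omits the candidate generation, compression and verification steps described above, and `call_llm` together with the prompt texts are hypothetical placeholders rather than the authors' implementation.

```python
def learn_program(train_set, initial_program, call_llm, epochs=1, batch_size=8):
    """Text-space analogue of gradient descent: the learned 'parameter' is the program text."""
    program = initial_program
    for _ in range(epochs):
        for start in range(0, len(train_set), batch_size):
            batch = train_set[start:start + batch_size]
            # 1) Make predictions with the current program and collect the errors ("loss").
            errors = [(x, y) for x, y in batch
                      if call_llm(f"Program:\n{program}\n\nQuestion: {x}") != y]
            if not errors:
                continue
            # 2) Derive a program revision (the textual analogue of a gradient)
            #    by asking the LLM what the program is missing, given the errors.
            report = "\n".join(f"Question: {x} | expected: {y}" for x, y in errors)
            revision = call_llm(
                "The following questions were answered incorrectly under the current "
                f"program.\nErrors:\n{report}\n\nCurrent program:\n{program}\n\n"
                "Write a short addition (general solutions or behavioural instructions) "
                "that would prevent these mistakes."
            )
            # 3) Update the program by appending the revision.
            program = program + "\n" + revision
    return program
```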
This means that task knowledge written in natural language programs can transfer not only between humans and LLMs but also directly between LLMs. Figure 1: We illustrate the traditional deep learning pipeline in subfigure (a); BP is the back-forward propagation algorithm. Our Learning to Program (LP) pipeline is shown in subfigure (b); the revision strategy is the strategy used to generate the revision of the program. ## 2 Related Work Large Language Models (e.g., GPT-3 [3], ChatGPT, OPT [33], and LLaMA [28]) have recently made huge progress. They can generate simple code [4], solve simple math problems [2], and write drafts of mathematical proofs [12]. Further, as a text understanding and generation module, they can be connected to other APIs [17], models [31], and tools [25] to expand their basic abilities for solving more complex tasks. However, it is still challenging for LLMs to solve complex tasks such as writing a novel. Our paper focuses on the learning problem for deployed large language models when solving complex tasks. Prompt engineering [19; 30; 14] further improves the performance of LLMs by inserting a prompt into the input. Discrete prompts consist of natural language phrases. [13] uses mining-based methods to extract middle words and words in the dependency path as potential discrete prompts. Prompt-paraphrasing methods [13; 9] create potential prompts by paraphrasing the original prompt into a set of other prompts. [8] uses a pre-trained T5 to generate discrete prompts. However, discrete-prompt methods cannot update a previous prompt to make it better and do not incorporate task information. Continuous prompts [16; 35; 20] perform prompting directly in the embedding space of the model. Their templates have their own parameters that can be tuned on the training dataset, but humans cannot directly understand them. Some prompting methods can be viewed as heuristics for LLMs to search for a better solution in the question space. Chain of Thought (CoT) prompting [30; 14] improves performance by asking LLMs to produce intermediate reasoning steps. Least-to-most prompting [36] decouples a complex problem into a list of subproblems and solves them sequentially. Iterative Prompting [29] focuses on multi-step reasoning; it trains a model to process the query and previous evidence and to compose a new prompt at each inference step. Recently, TaskMatrix.AI [17] and HuggingGPT [26] have proposed to first plan for the given task and then execute the plan to generate the specific solution; their plans are based on human-designed rules and human-written prompts. The self-critiquing method [24] and AutoGPT aim to autonomously reflect on and improve LLMs' behaviors, but self-reflection cannot completely fix the factual errors in LLMs or learn new knowledge/patterns from human feedback. Different from these methods, our LP method focuses on the learning problem of LLMs and aims to learn high-quality natural language programs from human feedback. APE [37] first asks large language models (LLMs) to generate instruction candidates and then selects the best one, but neither it nor Instruction Induction [11] updates the selected/verified instruction by inducing from errors. ## 3 Problem Formalization We assume that the core of the intelligent agent \(I\) is its understanding and generation module \(U\), since \(U\) can understand the input and generate the specific solution for the test sample. When the agent enrolls in a new environment \(E\), it encounters multiple samples drawn from the environment variable \(X\).
A sample \(x\) is associated with a label \(y\). Given the sample \(x\), the agent needs to predict the label. For instance, if the task involves calculating the square root of a positive number, a sample \(x\) may be 'What is the square root of 10?' and its label is \(\sqrt{10}\). To predict the label of \(x\), the agent implicitly generates a program for calculating the square root of one number and then generates the specific prediction \(H(x)\) with the guidance of the program. To better control and understand the inference process, we propose to decompose the inference process into two explicit subtasks. In the first task, which we call the **programming task**, we search for the natural language program of this task. In the second **generation task**, we concatenate the sample \(x\) with the CoT prompt and the natural language program \(p\) to ask the understanding and generation module \(U\) to output the specific solution \(H(x,p)\). When the agent \(I\) obtains the label of \(x\) from human feedback, how can it update the program to improve the prediction performance? The understanding and generation module \(U\) is typically composed of LLMs. But updating LLMs'parameter to improve its task performance requires multiple advanced GPUs and access to the model parameter, which are usually only available to the big company, and not to users who typically only have API access for inference. Additionally, companies tend to focus on improving abilities that interest them, which may ignore users' specific interests. So can the agent \(I\) still learn from the training corpus to improve the natural language program without updating its model parameters? The paper focuses on this problem. Method ### Natural Language Program Natural Language Program is a type of computer program that is written in natural language to instruct an agent for performing complex tasks. For example, a natural language program may be used to automatically extract knowledge from a large dataset, solve complex math problems, or answer questions based on natural language input. The natural language program has the main benefit of being human-understandable, which allows users to read and understand the program. Then humans can edit, delete, and add text to improve the quality of the program, as well as find new knowledge or solutions from the program. Unlike specific solutions, solutions in the natural language program are general, which makes the program clear and short. Additionally, the natural language program's solutions must consider all possible cases and provide complete instructions to the agent to ensure that the agent can solve the same task in different environments or domains. There are three approaches for obtaining the natural language program: (1) _Self-program_ approach that generates the natural language program by the agent itself. Specifically, we ask \(U\) to generate a natural language program that includes general solutions to solve all questions similar to the test sample \(x\) (see Figure 2). Next, we ask \(U\) to follow the natural language program to generate the specific solution of \(x\). The knowledge source of this approach is the corpus in the pre-training and instruction fine-tuning processes. So the generated natural language program may have many factual errors. The advantage of this approach is that it can deploy into the open domain setting without training. (2) _Human annotation_ approach that writes natural language programs by the human expert. 
Human experts can provide rich knowledge and complete steps in the program. But the labor cost is also high when we meet a huge amount of tasks. (3) _Learning to program_ approach that obtains natural language programs by learning the program from the training dataset of the task. The learned program by LLMs themselves has correct steps and richer knowledge than the generated program by the self-program approach. Also, its labor cost is less than that of the human annotation approach. We can design training algorithms based on the text to help the agent to learn the natural language program. ### Learning to Program The Learning to Program (LP) method aims to learn the natural language program \(p\) for the task based on the training dataset. \(p\) can take the form of natural language sentences/paragraphs, equations, codes, or any other thing that both \(U\) and humans can understand. The goal of introducing \(p\) here is to provide \(U\) with task-related information, thus enabling \(U\) to achieve better prediction performance. #### 4.2.1 Making Predictions with Program Similar to the traditional deep learning setting, we begin by collecting a subset of samples, with their labels to form the training set \(D\). We assume that the label of each sample represents the correct feedback for that sample. Next, we divide the training set \(D\) into different data batches, with the batch size set to \(d\). The agent then incrementally processes each data batch as it arrives. Specifically, when a data batch \(D_{t}=\{x_{i},y_{i}\}_{i=1}^{d}\) is received, the agent makes prediction \(H(x_{i},p)\) for each data point in the batch. Figure 2: Prompt for the self-program method. #### 4.2.2 Computing Loss At the \(t\)-th training iteration, for each wrong prediction where \(H(x,p_{t-1})\neq y\), we collect sample \(x\), its correct label \(y\), and the wrong prediction \(H(x,p_{t-1})\) to construct the wrong example. Then we collect all such wrong examples to construct the wrong example set \(\{x_{i},H(x_{i},p_{t-1}),y_{i}\}_{i=1}^{d^{{}^{\prime}}}\) where \(d^{{}^{\prime}}\leq d\). #### 4.2.3 Computing Revision We set the initialization of the natural language program \(p_{0}\) as the empty string and it needs to be iteratively updated by adding the program revision \(\nabla p\). To generate \(\nabla p\), we propose the learning-from-errors strategy. The learning-from-errors strategy asks \(U\) to find general solutions that can correctly solve questions similar to the wrong example and avoid these errors. it then uses these general solutions as \(\nabla p_{t}\). Specifically: at the \(t\)-th iteration, (1) it randomly chooses \(m\) wrong examples from the wrong example set. Then it uses the \(m\) wrong examples, previous natural language program \(p_{t}\), and an error-solution prompt (see Figure 3) as input to ask \(U\) to generate the revision candidate (e.g., some new general solutions to solve the task and avoid the errors) that differ from previous solutions in \(p_{t-1}\). (2) it compresses the generated program revision candidate to avoid repeated solutions and trivial information. (3) it repeats (1) and (2) \(K\) time to collect \(K\) program revision candidates, then it verifies these candidates on the validation set and selects the best one as \(\nabla p_{t}\). 
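As a concrete illustration of the revision step just described, the sketch below (ours, not the authors' implementation) generates \(K\) candidate revisions from sampled errors, compresses each, and keeps the candidate that scores best on the validation set. The `llm(prompt)` and `score(program)` helpers are assumptions standing in for a chat-model API call and for validation-set accuracy under a given program; the authors' actual prompts are the ones shown in Figures 3 and 4.

```python
import random

def compute_revision(llm, score, program, wrong_examples, m=3, K=5):
    """Hedged sketch of the 'computing revision' step (Sec. 4.2.3).
    llm(prompt) -> str and score(program) -> float are assumed helpers."""
    candidates = []
    for _ in range(K):
        # (1) Sample m wrong examples and ask for new general solutions.
        errors = random.sample(wrong_examples, min(m, len(wrong_examples)))
        error_text = "\n".join(
            f"Q: {x}\nWrong: {pred}\nCorrect: {y}" for x, pred, y in errors
        )
        draft = llm(
            f"Current program:\n{program}\n\nErrors:\n{error_text}\n"
            "Give new general solutions, not already in the program, "
            "that avoid these errors."
        )
        # (2) Compress the candidate to remove repeated or trivial solutions.
        candidates.append(
            llm(f"Summarize these solutions, keeping only the essentials:\n{draft}")
        )
    # (3) Verify candidates on the validation set and keep the best one.
    return max(candidates, key=lambda c: score(program + "\n" + c))
```

In the full method, the returned candidate is accepted as \(\nabla p_{t}\) only if its validation performance beats the recent average recorded performance by a threshold, as detailed next.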
**Compressing information:** Directly using a generated revision candidate that consists of multiple new solutions may cause the following problems: (1) there may be repeated solutions among the generated solutions, which should be deleted; (2) the meaning of the information may be correct, but its form may be unsuitable for \(U\) to utilize; (3) the length of the whole input increases as program revisions \(\nabla p_{t}\) are added, so the total number of tokens in the input may exceed the token limit of the understanding and generation module \(U\). The last problem has no analogue in the traditional deep learning setting, where both the gradient and the weight are vectors and vector addition/subtraction does not increase the size of the weight. To avoid these problems, we propose to use summarization to compress the information while maintaining its essential content. Specifically, for each natural language program revision candidate, we use a summarization prompt (Figure 4) to ask \(U\) to generate a summarized revision candidate that deletes redundant and trivial information and keeps the important information. After compressing a revision candidate, we need to determine whether adding it results in fewer errors and better performance (i.e., an effective \(\nabla p_{t}\)). To do so, we use the pseudo-updated performance on the validation set. This involves the following steps: (1) we first collect a subset of the samples with their labels as the validation set \(D_{valid}\); (2) we pseudo-update the natural language program as \(p_{t}=p_{t-1}+\nabla p_{t}\) and test the agent's performance on the validation set; (3) if the updated performance is better than the recent average recorded performance by a threshold, we keep this candidate and its validation performance, otherwise we delete it; (4) we choose the candidate with the best validation performance as \(\nabla p_{t}\) and record its validation performance. Note that \(\nabla p_{t}\) is the best among all revision candidates and its performance must be better than the recent average recorded performance. Figure 3: Prompt for the revision strategy in the self-program method. Figure 4: Compression prompt for the self-program method. #### 4.2.4 Updating the Program \(p_{t-1}\) We update the program as \(p_{t}=p_{t-1}+\nabla p_{t}\). The other candidates and their performances are deleted. If all candidates fail the verification step, we set \(p_{t}=p_{t-1}\). A program \(p\) that is too long also has drawbacks: (1) the number of input tokens may exceed the limit; (2) even if it does not, a long program turns short-text inference into long-text inference, which increases the difficulty of solving the task; (3) it may make the learning process get stuck in a local optimum and hinder the agent from finding a better \(p\). To avoid these issues, we use \(p\) and the summarization prompt to ask \(U\) to generate a summarized \(p\) that is shorter and more general. Similarly, we follow the verification method above to decide whether the summarized \(p\) can replace the original \(p\): the validation performance of the summarized \(p\) should be better than that of the original \(p\), or only slightly worse. We repeat the compression and verification steps until we obtain a compressed \(p\) or exceed the time limit. Stop criterion: The agent can run through the training set for multiple epochs.
If the agent does not find an effective \(\nabla p_{t}\) during recent batches, we let it stops the training process. ## 5 Experiment **Datasets** For **mathematical reasoning**, (1) we select 10 challenging mathematical tasks from the AMPS sub-dataset that belongs to the AMPS pre-training dataset [10] (see Table 1 in Appendix A). Each task corresponds to a specific mathematical concept or problem type. We have at least one task for each of the three fundamental areas of mathematics: geometry, calculus, and algebra. (2) We divide the Math dataset [10] into seven tasks, based on the original type annotation (see Table 2 in Appendix A). These tasks include pre-algebra, Intermediate Algebra (IA), Algebra, Counting and Probability (CP), Geometry, Number theory (NT), and Precalculus. **Causal Reasoning**: (1) We consider the Causal Judgment task from BIG-bench [27]. Given a short story where multiple cause-effect events are introduced, this task asks LLM to answer causal questions such as "Did X cause Y?". (2) We consider the counterfactual reasoning assessment (CRASS) dataset [7]. Given a base premise and a questionized counterfactual conditional, this task asks LLMs to choose a correct consequence from a set of potential effects. **Logical Reasoning**: (1) We consider the logical reasoning part of the Law school admission test (LSAT) [34]. Given a passage and a question based on the passage, this task measures LLMs' ability to analyze complex information. (2) We consider the date understanding task from BIG-bench [27], in which LLMs are asked to infer the date from the date-related information. For **symbolic reasoning**, we consider the last letter concatenation task [30], in which we ask LLMs to extract the last letters of five random sampled words from Wiktionary2 and concatenate the last letters as the response. For **combinatorial reasoning**, we consider the SayCan dataset [1]. Given an instruction, LLMs need to generate the action sequence for the robot to complete the instruction. More details and dataset statistics information table (e.g., Table 3 for non-mathematical reasoning tasks) are in Appendix A. Footnote 2: [https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists/PG/2006/04/1-10000](https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists/PG/2006/04/1-10000) **Model and settings**: We use the ChatGPT model (GPT-3.5-turbo) as the large language model3. We consider the zero-shot CoT and few-shot CoT settings. In the zero-shot CoT test setting, for all experiments, we add the chain of thought (CoT) prompt 'Let's think step by step' after the test sample to encourage LLMs to think intermediate steps. But no demonstration examples are used in the prompt. In the few-shot CoT test setting, for all experiments, we add the chain of thought prompt and fixed demonstration examples into the input. We follow [15] and set four examples for tasks of mathematical reasoning, logical reasoning, and causal reasoning. We follow [30] and set six examples for the combinatorial reasoning task and four examples for the symbolic reasoning task. The examples are randomly chosen from the original training set. Each example consists of the problem, the explanation (solution) for this problem, and the correct label. For the LP method, we use the same demonstration samples in both training and test settings. **Hyper-parameters for the LP method** We run each task with 10 epochs. 
If the agent goes through 10 consecutive training batches without updating the natural language program \(p\), we stop the training process. The training batch size is 32 for all tasks. We set the size of the validation set to be 5 times the training batch size for the tasks4. We set the number of selected wrong samples \(m\) in Section 4.2.3 as 3. We set the number of generated revision candidates \(K\) in Section 4.2.3 as 5. We use a temperature of 0 for stable output, except during the compression step, where we set the temperature to 0.6 to encourage diverse summarization. For simple tasks (e.g., the AMPS tasks), we ask LLMs to summarize the program into no more than five main general solutions due to their simplicity. For more complex tasks (e.g., tasks from the Math dataset), we summarize the program into no more than ten main general solutions. The recent average recorded performance in Section 4.2.3 is the average of the most recent three recorded validation performances. We set the threshold as 1.0 here, as too low a threshold leads to trivial updates. If the program has been updated three times without being compressed, we activate the compression step for the natural language program. Footnote 4: Although a larger validation set provides more precise validation performance, it also leads to huge computational consumption and needs more data points. Tasks from the Math dataset have hundreds of training data points, so we set a larger validation size for them. ### Experiment Results Zero-shot CoT setting: For mathematical reasoning, we observe that our LP method markedly improves the average performance of the 10 AMPS tasks in the zero-shot CoT setting, by 18.3 percent (Table 1). Our LP method also achieves the highest performance (29.5) on the more complex tasks from the Math dataset (Table 2). The LP method also improves the performance of four non-mathematical reasoning tasks markedly (Table 3), especially the last letter concatenation task (from 44.6 percent to 75.2 percent). The large improvement across reasoning tasks of five different types arises because LLMs learn a high-quality natural language program from the training set to solve these complex tasks. Our self-program method also improves performance on almost all mathematical tasks in the zero-shot CoT setting. But in some mathematical tasks (e.g., task 9 of the AMPS dataset) and the date understanding task, its performance is even worse than the performance measured directly in the zero-shot CoT setting. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline Task & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & Avg \\ \hline ChatGPT (zero-shot CoT) & 73.5 & 41.0 & 65.2 & 30.4 & 38.8 & 53.1 & 29.2 & 77.0 & 60.0 & 12.2 & 48.3 \\ \hline ChatGPT + Self-Program (zero-shot CoT) & 83.6 & 29.6 & 80.5 & 30.4 & 38.8 & 67.3 & 45.3 & 85.5 & 42.0 & 15.5 & 51.6 \\ \hline ChatGPT + Learning-to-Program (zero-shot CoT) & **89.8** & **55.1** & **87.9** & **60.9** & **56.1** & **81.0** & **54.1** & **90.0** & **67.0** & **18.9** & **66.6** \\ \hline \hline ChatGPT (few-shot CoT) & 79.8 & 19.6 & 94.5 & 43.8 & 56.1 & 17.3 & 75.0 & 77.1 & 66.0 & 56.0 & 64.0 \\ \hline ChatGPT + Self-Program (few-shot CoT) & 35.7 & 38.8 & 70.3 & 21.4 & 25.1 & 59.2 & 37.5 & 97.5 & 57.0 & 8.8 & 52.8 \\ \hline ChatGPT + Learning-to-Program (few-shot CoT) & **90.0** & **29.5** & **97.8** & **43.8** & **56.1** & **81.6** & **87.5** & **100.0** & **79.0** & **56.7** & **73.0** \\ \hline \end{tabular} \end{table} Table 1: Performance of the 10 tasks on the AMPS dataset. ‘Avg’ is the average performance on the 10 tasks.
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Task & \multicolumn{1}{c|}{Prealgebra} & \multicolumn{1}{c|}{IA} & \multicolumn{1}{c|}{Algebra} & \multicolumn{1}{c|}{NT} & \multicolumn{1}{c|}{Geometry} & \multicolumn{1}{c|}{Precalculus} & \multicolumn{1}{c|}{CP} & \multicolumn{1}{c}{Avg} \\ \hline ChatGPT (zero-shot CoT) & 64.2 & 13.9 & **48.0** & 25.9 & 17.1 & 15.4 & 22.3 & 27.4 \\ \hline ChatGPT + Self-Program (zero-shot CoT) & 40.2 & 11.5 & 34.2 & 22.8 & **18.8** & 15.5 & 20.1 & 23.3 \\ \hline ChatGPT + Learning-to-Program (zero-shot CoT) & **50.6** & **15.3** & 48.2 & **28.2** & 18.4 & **16.6** & **29.3** & **29.5** \\ \hline \hline ChatGPT (few-shot CoT) & **52.4** & 15.8 & 49.6 & 28.3 & 21.3 & 16.8 & 30.2 & 30.6 \\ \hline ChatGPT + Self-Program (few-shot CoT) & 48.9 & 15.5 & 48.4 & 29.3 & 19.8 & **16.9** & **29.2** & 29.3 \\ \hline ChatGPT + Learning-to-Program (few-shot CoT) & 52.3 & **16.9** & **49.6** & **29.8** & **22.5** & 16.3 & **30.2** & **31.1** \\ \hline \end{tabular} \end{table} Table 2: Performance of the 7 tasks on the Math dataset. ‘IA’ is the Intermediate Algebra task, ‘NT’ is the Number Theory task, ‘CP’ is the Counting \(\&\) Probability task, and ‘Avg’ is the average performance on the 7 tasks. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline Task & \multicolumn{1}{c|}{Causal judgment} & \multicolumn{1}{c|}{CRASS} & \multicolumn{1}{c|}{Logical Reasoning (LSAT)} & \multicolumn{1}{c|}{DU} & \multicolumn{1}{c|}{LLC} & \multicolumn{1}{c}{SayCan} \\ \hline ChatGPT (zero-shot CoT) & 64.5 & 79.1 & 52.5 & 47.3 & 44.6 & 36.0 \\ \hline ChatGPT + Self-Program (zero-shot CoT) & 68.2 & 90.4 & 32.2 & 37.3 & 46.4 & 36.0 \\ \hline ChatGPT + Learning-to-Program (zero-shot CoT) & **70.8** & **94.2** & **53.5** & **48.0** & **75.2** & **52.0** \\ \hline \hline ChatGPT (few-shot CoT) & 62.5 & 88.4 & 59.2 & 50.5 & 72.6 & 48.0 \\ \hline ChatGPT + Self-Program (few-shot CoT) & 64.5 & 89.9 & 43.5 & 43.0 & 40.0 & 44.0 \\ \hline ChatGPT + Learning-to-Program (few-shot CoT) & **64.8** & **92.8** & **60.1** & **51.6** & **81.2** & **52.0** \\ \hline \end{tabular} \end{table} Table 3: Performance of the non-mathematical reasoning tasks. ‘CRASS’ is the counterfactual reasoning assessment task, ‘DU’ is the Date Understanding task, and ‘LLC’ is the Last Letter Concatenation task. Moreover, its average performance on the 10 AMPS tasks is lower than that of the LP method. The reason is that the automatically generated program from LLMs has some factual errors and is incomplete. The self-program method's performance becomes worse as the hallucination problem becomes more severe in complex tasks. That verifies the necessity of learning knowledge from human feedback rather than simply relying on the knowledge already in LLMs. Few-shot CoT setting: Experiments in Tables 1, 2, and 3 show that adding demonstration examples usually improves task performance. Our LP method can further improve the performance by learning a natural language program that addresses the errors not resolved by the guidance of demonstration examples. Our LP method also improves the few-shot performance on the non-mathematical reasoning tasks (see Table 3). To deploy the self-program method in the few-shot setting, we first randomly select 4 training samples and ask the model to generate the natural language program for each sample.
Next, we concatenate the natural language programs and make the concatenation shorter and more general by compressing it. Finally, we use the compressed natural language program to guide the inference of all test samples. We observe that the few-shot performance of the self-program method is better than its zero-shot performance. That is because the generated program in the few-shot setting takes into account more cases and is more comprehensive in its solutions. ### Analysis **Transfer from one LLM to another LLM** Transferring the learned program from LLMs to humans is easy as it is written in clear natural language. Surprisingly, when the learned program (knowledge) from the GPT-3.5 Turbo model is used to guide the inference process of GPT-4, its performance improves across all tasks (see Table 4). This means that if both LLMs understand the natural language well, the learned natural language program (knowledge) can be directly transferred without changing the LLMs' parameters. Additionally, this approach introduces a new method for deploying LLMs: using a more affordable and faster LLM for training to learn the natural language program and then employing a slower but stronger LLM for testing with the guidance of the learned program. **Quality analysis of the learned natural language program** We present the learned natural language program for reasoning tasks of five types in Appendix B. We find that the learned solutions are highly relevant general solutions to solve questions in the task. The natural language program usually consists of multiple general solutions to solve different cases. The program utilizes the model's internal knowledge and functions, such as the Pythagorean Theorem or counterfactual inference principle. sometimes, it provides an example to explain its solution. That helps the understanding of both humans and LLMs themselves. However, sometimes LLMs accumulate the same solution multiple times, and deleting a repeated solution can result in a drop in performance, highlighting potential differences between how LLMs and humans solve problems as humans do not need to repeat one solution. **Case Study of the comparison of the learned natural language program and the generated program in solving task** We compare the solution guided by the learned natural language program with the solution guided by the generated natural language program from LLMs to demonstrate the former's influence. Figure 17 in Appendix C depicts the difference between the programs and specific solutions obtained from the two methods. The generated natural language program has a factual error about the Law of Cosines and that leads to a wrong specific solution. The learned natural language program provides a general solution and detailed steps based on the Pythagorean Theorem. The model then finds the correct specific solution by following the steps. The advantage of the LP method is that it relies on inducing experience on the training dataset to acquire knowledge. **Case study of the continual updation of the program** Figure 18 in Appendix C reveals that the program generated by the self-program method (before the training) has a factual error and leads to the wrong answer. During the training, the LP method learns how to solve the task using one or two general solutions in the program. However, the program is not yet complete. 
After the training process, the trained program has a variety of general solutions for calculating angles and succeeds in solving the test sample. That shows our method's ability to reduce errors in the program and to improve the program quality by updating it. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Task & Algebra & Causal Judgment & Logical Reasoning (LSAT) & LLC & SayCan \\ \hline GPT-4 (zero-shot, CoT) & 57.8 & 64.6 & 81.3 & 85.4 & 72.0 \\ \hline GPT-4 + Learned-Program \(p\) from Turbo (zero-shot, CoT) & **58.9** & **68.8** & **82.0** & **88.6** & **80.0** \\ \hline \end{tabular} \end{table} Table 4: Zero-shot transfer performance on the five reasoning tasks of different types. ‘Algebra’ is the algebra task from the Math dataset. ‘LLC’ is the last letter concatenation task. ### Ablation Study For the main experiments above, we use the **CoT prompt** to boost the performance. The performance of experiments without the CoT prompt drops slightly (see Table 5), and our method can still improve the performance without the CoT prompt (see the 'w/o+LP' experiment). That means our method can improve the performance with or without a prompt, by learning from the errors made by LLMs. We believe our method can also improve other prompts by learning from errors. For **the number of epochs**, we record the performance at the 1st, 5th, and 10th epochs. From Table 5, we observe that the performance rises quickly in the first epochs and then rises comparatively slowly as the LP method updates the program \(p\). So we set the number of epochs as 10. For **the number of revision candidates** \(K\), we find a positive correlation between \(K\) and the performance. LLMs select the best program revision from the candidates to update the program, and more candidates mean a larger search space and a better final revision. But more candidates also mean increased computational cost to verify them. So we set \(K\) as 5, as it achieves strong performance with less computation. **Validation size** is a crucial hyper-parameter, as we need to find an effective update/revision for the whole task rather than for a subset of the task. A larger validation size results in better performance (see Table 6). However, a larger validation size also implies a greater inference cost. Therefore, we set the validation size as 5 times the training batch size, as it achieves strong performance with less computation. When the LP method generates one revision candidate, it needs to provide LLMs with \(m\) **wrong samples** and ask LLMs to generate general solutions that solve the errors. In Table 6, we observe that performance first improves with larger \(m\), as LLMs can refer to more wrong samples, and then drops as the input length becomes too long and the quality of long-text solution generation degrades. So we set \(m\) as 5. To determine whether a generated revision should be maintained, we compare its validation performance with the recent average recorded performance. If it is better than the recent performance by a **threshold**, we maintain it. We set the threshold as 1.0 here, as too high a threshold would filter out good revisions, as shown in Table 6. ## 6 Conclusion The impressive performance of Large Language Models (LLMs) in various natural language tasks without task-specific training is due to their ability to implicitly generate a natural language program for the test sample and then follow the program to perform inference for it.
Based on this, we propose a two-step approach that involves searching for natural language program outlines for the test sample and generating the inference by following the program. Then we introduce the Learning to Program method, which learns the natural language program from the training dataset for the first step. Our proposed method's effectiveness has been verified through multiple reasoning task datasets, and we conclude that it is a promising approach for improving the performance of LLMs in complex tasks. We discuss the _limitations_ in Appendix D. \begin{table} \begin{tabular}{c|c c c c c|c c c} \hline \hline Hyper-parameter & \multicolumn{3}{c|}{Validation size} & \multicolumn{3}{c|}{Wrong samples \(m\)} & \multicolumn{3}{c}{Threshold} \\ \hline Value & 1 & 5 & 10 & 1 & 5 & 10 & 1 & 5 & 10 \\ \hline Composite function & 41.8 & 55.1 & 57.0 & 36.7 & 55.1 & 47.1 & 55.1 & 44.0 & 44.0 \\ \hline Causal Judgment & 54.2 & 70.5 & 71.0 & 58.3 & 70.5 & 64.8 & 70.5 & 66.7 & 64.3 \\ \hline LLC & 62.4 & 75.2 & 75.2 & 70.6 & 75.2 & 54.2 & 75.2 & 63.2 & 60.6 \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study. ‘Composite function’ is the second AMPS task. ‘LLC’ is the Last Letter Concatenation task. The validation size is equal to the time value multiplied training batch size. \begin{table} \begin{tabular}{c|c c c c|c c c|c c c} \hline \hline Hyper-parameter & \multicolumn{3}{c|}{CoT} & \multicolumn{3}{c|}{Epoch} & \multicolumn{3}{c}{Confidence number \(K\)} \\ \hline Value & w/o & w/o+LP & CoT & CoT+LP & 1 & 5 & 10 & 1 & 5 & 10 \\ \hline Composite function & 42.8 & 51.0 & 44.0 & 55.1 & 47.1 & 50.1 & 35.1 & 50.0 & 55.1 & 35.1 \\ \hline Causal Judgment & 56.3 & 60.4 & 64.5 & 70.3 & 64.8 & 68.7 & 70.3 & 69.0 & 70.5 & 70.5 \\ \hline LLC & 40.2 & 60.2 & 44.6 & 75.2 & 38.7 & 70.0 & 75.2 & 52.8 & 75.2 & 77.8 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study. ‘Composite function’ is the second AMPS task. ‘LLC’ is the Last Letter Concatenation task. The validation size is equal to the time value multiplied training batch size.
2308.10969
Quantum tasks assisted by quantum noise
We introduce a notion of expected utility for quantum tasks and discuss some general conditions under which this is increased by the presence of quantum noise in the underlying resource states. We apply the resulting formalism to the specific problem of playing the parity game with ground states of the random transverse-field Ising model. This demonstrates a separation in the ground-state phase diagram between regions where rational players will be ``risk-seeking'' or ``risk-averse'', depending on whether they win the game more or less often in the presence of disorder. The boundary between these regions depends non-universally on the correlation length of the disorder. Strikingly, we find that adding zero-mean, uncorrelated disorder to the transverse fields can generate a weak quantum advantage that would not exist in the absence of noise.
Chuqiao Lin, Vir B. Bulchandani, Shivaji L. Sondhi
2023-08-21T18:29:23Z
http://arxiv.org/abs/2308.10969v1
# Quantum tasks assisted by quantum noise ###### Abstract We introduce a notion of expected utility for quantum tasks and discuss some general conditions under which this is increased by the presence of quantum noise in the underlying resource states. We apply the resulting formalism to the specific problem of playing the parity game with ground states of the random transverse-field Ising model. This demonstrates a separation in the ground-state phase diagram between regions where rational players will be "risk-seeking" or "risk-averse", depending on whether they win the game more or less often in the presence of disorder. The boundary between these regions depends non-universally on the correlation length of the disorder. Strikingly, we find that adding zero-mean, uncorrelated disorder to the transverse fields can generate a weak quantum advantage that would not exist in the absence of noise. ## I Introduction Entangled many-body quantum states are common in the natural world but are generically not useful for universal quantum computation. This raises the question of whether physically realistic quantum states might be used to accomplish more modest quantum tasks, that are still classically impossible and probe nontrivial features of quantum mechanics, but are easier to analyze theoretically than a universal quantum computer. This question has recently been investigated for quantum nonlocal games [1; 2; 3; 4], i.e. cooperative games for which a set of players who share an entangled quantum state before playing will win with strictly higher probability than the best possible classical players. Many of the nonlocal games studied in the latter papers further have the unusual [5] property of being "scalable": versions of these games exist for any number of players \(N\geq 3\), who can attain quantum advantage by sharing \(\mathcal{O}(N)\) qubit states before playing the game. Such games are of theoretical interest because they are simple enough to be analyzed in some detail while also probing fundamental properties of entangled many-body quantum states such as contextuality [6; 7], which is believed to enable the measurement-based model of universal quantum computation [8; 9]. Thus a precise quantification of which states are "good" resources for winning quantum nonlocal games can be seen as a first step towards the much more ambitious goal of characterizing which states are useful for universal quantum computing. It is worth emphasizing that the most popular measure of quantum entanglement, namely the entanglement entropy, is wholly inadequate for this purpose [10; 11]. The specific question that we consider in this paper is how the "expected utility" of a quantum task is modified by randomness of the underlying quantum states. The expected utility for a quantum task will be defined carefully below in Section II; for now it can be thought of as a real number that quantifies the rate of success at the quantum task in question, with larger values corresponding to a higher rate of success. The mathematics needed to analyze this problem is standard within economics [12; 13] but less commonly applied within physics. For simplicity, our presentation will focus on random ensembles of pure states (there are no serious difficulties in extending the discussion to mixed states). In the context of non-local games, our analysis can be viewed as an extension of earlier results [1; 2; 3; 4] to allow for quantum noise. 
We note that while previous studies have looked at the effects of specific forms of non-unitary [5] and unitary [14] noise on non-local games, the general features of this problem that we identify below do not seem to have been discussed in the literature. We will illustrate the resulting theory using the example of the parity game. The original perfect quantum strategy for the parity game is due to Brassard-Broadbent-Tapp (BBT) [15], building on earlier results by Mermin [16] on the Greenberger-Horne-Zeilinger (GHZ) state. Their proposal involves applying a specific sequence of gates and measurements (that we call the "BBT protocol") to the GHZ state. A recent study involving two of the present authors systematically examined how well a set of quantum players could perform at the parity game by applying the BBT protocol to a general pure state \(|\psi\rangle\), instead of the GHZ state [3]. That work focused on the case that \(|\psi\rangle\) was the ground state of the transverse-field Ising model, and found that the resulting "quantum advantage" (measured by the difference \(\Delta p\) between the quantum probability of the players winning the game and the best possible classical probability of winning) could range from an order one positive number, to a positive number exponentially small in the number of players, to a negative number, with the specific location of these regimes within the Ising phase diagram depending on the version of the parity game being considered. In what follows, we will refer to these cases as "strong quantum advantage", "weak quantum advantage", and "no quantum advantage" for the parity game respectively. We distinguish strong and weak quantum advantage in this way for the reason that resolving an exponentially small value of \(\Delta p>0\) will in general require a number of experimental trials that is exponentially large, which departs from the colloquial notion of quantum advantage based on polynomial-time quantum algorithms. (Our distinction between strong and weak quantum advantage is analogous to the distinction between the classical probabilistic computational complexity classes BPP and PP [17].) These results illustrate how, upon playing the parity game with states that are somewhat more physically natural (from the viewpoint of condensed matter physics) than the GHZ state, multiple qualitatively different regimes of quantum advantage become possible. An immediate question is how robust these different kinds of quantum advantage are to the presence of quantum noise, beyond the quantum fluctuations inherent in the state \(|\psi\rangle\). The particular type of quantum noise of interest depends on how the state \(|\psi\rangle\) is realized physically. For example, if \(|\psi\rangle\) arises experimentally as the ground state of a many-body Hamiltonian \(\hat{H}\), a potentially significant source of quantum noise is disorder in the couplings of \(\hat{H}\). From this viewpoint, a minimal extension of our previous analysis for the parity game to allow for quantum noise consists of studying the quantum winning probability when the parity game is played with ground states of the random transverse-field Ising model (RTFIM). This will provide the central example for our study below. The paper is structured as follows. We first introduce a notion of expected utility for quantum tasks and discuss its response to quantum noise in the underlying resource states. 
We observe that the sign of this response is perturbatively determined by the Hessian of the utility function and note an analogy with the theory of risk aversion in economics [13; 18]. We then propose a notion of expected utility for the parity game, and illustrate how this behaves when the parity game is played with ground states of the RTFIM. We find in general that the effect of disorder on the players' probability of winning the game depends non-universally on the correlation length of the disorder, with possible singularities at the quantum critical point of the underlying TFIM. We further exhibit examples for which adding zero-mean disorder to the transverse fields generates a weak quantum advantage, despite there being no quantum advantage for the clean system. We conclude with some natural open questions. ## II Expected utility for quantum tasks ### General theory Our operational definition of a quantum task \(Q\) will be any sequence of unitary gates and projective measurements applied to some (pure) quantum state \(|\psi\rangle\in\mathcal{H}\), where \(\mathcal{H}\) denotes the set of possible resource states. We will further assume that the effectiveness of the state \(|\psi\rangle\) for performing the task \(Q\) is quantified by a real-valued function \(u:\mathcal{H}\rightarrow\mathbb{R}\), which we call the "utility function" for the task \(Q\). We emphasize that \(u\) can be any measurable function of the state \(|\psi\rangle\) whatsoever. (Thus \(u\) is less constrained than the analogous notion of "resource measure" in quantum resource theories [19].) Let us now suppose that \(|\psi\rangle\) exhibits quantum noise; to be precise, suppose that \(|\psi\rangle\) is drawn from some random ensemble of pure states \(\mathcal{E}\). Then, letting \(\mathbb{E}\) denote expectation values over the ensemble \(\mathcal{E}\), we define the "expected utility" for the ensemble \(\mathcal{E}\) to be the number \[U=\mathbb{E}[u(|\psi\rangle)]. \tag{1}\] We note that according to this formulation of expected utility, \(u\) will generally already encode a Born-rule average over possible outcomes of projective measurements on the state \(|\psi\rangle\). The average \(\mathbb{E}\) thus represents an additional average over noisy quantum states \(|\psi\rangle\). The resulting definition of \(U\) in Eq. (1) seems to us the most simple-minded way of combining these two averages. However, just as for the notion of expected utility in economic theory [12], this should be regarded as a convenient choice rather than a canonical definition, and it is conceivable that for more elaborate quantum tasks involving post-selection or feedback based on measurement outcomes, an alternative definition of \(U\) will be more useful. As a more structured example that is germane to our considerations below, suppose that \(|\psi\rangle\) depends smoothly on \(M\) real parameters \(g_{1},g_{2},\ldots,g_{M}\), and that the ensemble \(\mathcal{E}\) is defined by randomness in these couplings \(\mathbf{g}\in\mathbb{R}^{M}\). (For example, \(|\psi\rangle\) might be the ground state of a disordered Hamiltonian with \(M\) random couplings.) Then we can treat \(u\) as a function \(u:\mathbb{R}^{M}\rightarrow\mathbb{R}\) and by Jensen's inequality [20] it follows that \[\begin{cases}u(\mathbb{E}[\mathbf{g}])\leq\mathbb{E}[u(\mathbf{g})],&\text{if $u$ is convex}\\ u(\mathbb{E}[\mathbf{g}])\geq\mathbb{E}[u(\mathbf{g})],&\text{if $u$ is concave}\end{cases} \tag{2}\] as a function of \(\mathbf{g}\). 
Thus if \(u(\mathbf{g})\) is a convex function, disorder in \(\mathbf{g}\) will never decrease the expected utility of the quantum task \(Q\). In this way, quantum noise can _enhance_ the expected utility of a quantum task. At the same time, the "global" condition that \(u({\bf g})\) is convex is rather restrictive and too strong to be satisfied by realistic examples, including the main example of interest below. Let us therefore, following Pratt[13], consider the case of perturbatively weak randomness, with \({\bf g}=\bar{\bf g}+\delta{\bf g}\), where the \(\delta g_{i}\) are possibly correlated random variables, with zero mean \(\mathbb{E}[\delta{\bf g}]=0\), covariance \(C_{ij}=\mathbb{E}[\delta g_{i}\delta g_{j}]=\mathcal{O}(\sigma^{2})\) where \(\sigma\ll 1\), and suppose that all higher moments of \(\delta{\bf g}\) are \(\mathcal{O}(\sigma^{3})\) as \(\sigma\to 0\). We additionally assume that \(u({\bf g})\) is thrice differentiable. Then by Taylor's theorem it is immediate that \[\mathbb{E}[u({\bf g})]=u(\bar{\bf g})+\frac{1}{2}\sum_{i,j=1}^{M}C_{ij}H_{ij} (\bar{\bf g})+\mathcal{O}(\sigma^{3}) \tag{3}\] as \(\sigma\to 0\), where we introduced the Hessian \[H_{ij}({\bf g})=\frac{\partial^{2}u}{\partial g_{i}\partial g_{j}}({\bf g}). \tag{4}\] It is clear that the effect of small random perturbations \(\delta{\bf g}\) is controlled by the spectrum of the Hessian \(H(\bar{\bf g})\); if the latter is positive (resp. negative) definite, such perturbations will always increase (resp. decrease) the expected utility of \(Q\). For a general covariance matrix \(C\), this is the most general "local" condition that will guarantee a definite sign for the second variation \[\delta u^{(2)}=\frac{1}{2}\sum_{i,j=1}^{M}C_{ij}H_{ij}(\bar{\bf g}). \tag{5}\] However, if the covariance matrix has additional structure, then weaker conditions will suffice. For example, in the specific case of i.i.d. zero-mean noise, which we can write as \(C_{ij}=\sigma^{2}\delta_{ij}\), Eq. (3) simplifies to \[\mathbb{E}[u({\bf g})]=u(\bar{\bf g})+\frac{1}{2}\sigma^{2}\nabla^{2}u(\bar{ \bf g})+\mathcal{O}(\sigma^{3}), \tag{6}\] so that a sufficiently small and nonzero amount of uncorrelated noise will always be helpful for accomplishing the quantum task \(Q\) provided that \(\nabla^{2}u(\bar{\bf g})>0\). Thus we have identified three increasingly weak conditions under which disorder will always increase (resp. decrease) the expected utility of \(Q\). Finally, let us suppose that the agents executing the quantum task \(Q\) in question are free to choose the quantum noise level \(\sigma\ll 1\) for their experimental system. Then, borrowing standard economic terminology[13], economically rational agents seeking to maximize their joint expected utility \(u\) will be either "risk-seeking" or "risk-averse" depending on whether the sign of \(\delta u^{(2)}\) is positive or negative. ### Example: the parity game We now discuss a specific realization of the theory presented above in the context of the parity game[15; 16]. We first briefly recall some key properties of this game[3; 5], so as to be self-contained. Each round of the parity game can be played by \(N\geq 3\) players and a referee. At the beginning of the game, the referee gives the \(j\)th player a classical bit \(a_{j}\in\{0,1\}\). The bit string \(\vec{a}=(a_{1},a_{2},\ldots,a_{N})\) is drawn uniformly randomly from the set of \(2^{N-1}\) bit strings satisfying the promise \(\sum_{j=1}^{N}a_{j}\mod 2=0\). 
The players may not communicate classically with one another during the course of the game, and to win the game, they must each return a classical bit \(b_{j}\in\{0,1\}\) to the referee. The players collectively win the game if \[\sum_{j=1}^{N}b_{j}\bmod 2=\frac{\sum_{j=1}^{N}a_{j}}{2}\bmod 2 \tag{7}\] and lose the game otherwise. It can be shown that a set of purely classical players win the game with a probability at most \[p\leq p_{\rm cl}^{*}=\frac{1}{2}+\frac{1}{2^{|N/2|}} \tag{8}\] and that there exist classical strategies saturating this bound[15]. On the other hand, a set of quantum players who share the \(N\)-qubit GHZ state \(|\psi\rangle=|{\rm GHZ}^{+}\rangle\) before playing the game, where \[|{\rm GHZ}^{\pm}\rangle=\frac{1}{\sqrt{2}}\left(|00\ldots 0\rangle\pm|11\ldots 1 \rangle\right), \tag{9}\] can win the game with probability one. In order to achieve this, the players apply the BBT protocol of gates and measurement \(\mathcal{P}_{\rm BBT}\) to their shared state \(|\psi\rangle\), defined as follows: First, each player acts on their qubit with an input-dependent phase gate, \[\hat{Z}^{a_{j}/2}=\left(\begin{array}{cc}1&0\\ 0&i^{a_{j}}\end{array}\right). \tag{10}\] Second, each player rotates their basis to the \(\hat{X}\) or Hadamard basis by applying the unitary \[\hat{U}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right). \tag{11}\] Finally, each player measures their qubit in the \(\hat{Z}\) basis, yielding a measurement outcome \((-1)^{b_{j}}\), and returns the bit \(b_{j}\). We refer to Mermin[16] and BBT[15] for an explanation of why this quantum strategy always wins the game. In previous work [3], we generalized the BBT protocol to arbitary states \(|\psi\rangle\) by considering the quantum strategy \(\mathcal{S}=(|\psi\rangle,\mathcal{P}_{\text{BBT}})\) for playing the parity game, which consisted of applying the input-dependent protocol \(\mathcal{P}_{\text{BBT}}\) to an arbitrary pure state \(|\psi\rangle\). There we found the explicit formula \[p_{\text{qu}}(|\psi\rangle)=\frac{1}{2}\left(1+\left|\left\langle\text{GHZ}^{+} |\psi\right\rangle\right|^{2}-\left|\left\langle\text{GHZ}^{-}|\psi\right\rangle \right|^{2}\right) \tag{12}\] for the quantum probability of winning the game, which attains its maximum for the GHZ state \(|\psi\rangle=|\text{GHZ}^{+}\rangle\). This result can be seen as a simplified but practically useful form of the full "rigidity" statement for the parity game [21; 22; 23; 24]. For example, in previous work we computed Eq. (12) exactly for the \(g>0\) ground state of the transverse-field Ising model \[\hat{H}_{\text{TFIM}}=-\sum_{j=1}^{N}\hat{Z}_{j}\hat{Z}_{j+1}-g\sum_{j=1}^{N} \hat{X}_{j} \tag{13}\] on a ring with \(j\equiv j+N\), finding the expressions [332] \[\begin{split} p_{\text{qu}}(g)&=\frac{1}{2}+\frac{ 1}{2}\left|\left\langle\text{GHZ}^{+}\mid\psi(g)\right\rangle\right|^{2}\\ &=\frac{1}{2}+\frac{1}{2}\prod_{k>0}\cos^{2}\left(\frac{\theta_{k} (g)-\theta_{k}(0^{+})}{2}\right)\\ &=\frac{1}{2}+\frac{1}{2^{\left\langle N/2\right\rangle+1}}\prod _{k>0}\left(1+\frac{1-g\cos k}{\sqrt{1+g^{2}-2g\cos k}}\right)\end{split} \tag{14}\] with the allowed wavenumbers \(k\) defined by \[k=\begin{cases}\pm\frac{\pi}{N},\pm\frac{3\pi}{N},\ldots\pm\frac{(N-1)\pi}{N},&N\text{ even},\\ \pm\frac{\pi}{N},\pm\frac{3\pi}{N},\ldots\pm\frac{(N-2)\pi}{N},&N\text{ odd},\end{cases} \tag{15}\] where \(\tan\theta_{k}(g)=\sin k/(g-\cos k)\) and it will be useful to define \(\epsilon_{k}(g)=\sqrt{1+g^{2}-2g\cos k}\) (see discussion around Eq. 
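Because the protocol above is fully specified, it can be checked directly with a small state-vector simulation. The following NumPy sketch (ours, for illustration) applies the phase gates and Hadamards of Eqs. (10)-(11) to an arbitrary N-qubit state, performs the final Z-basis measurement, and returns the probability of satisfying the winning condition Eq. (7); for the GHZ state of Eq. (9) it returns 1 on every promise input, as claimed.

```python
import numpy as np
from itertools import product

def bbt_win_probability(psi, a):
    """Probability that the BBT protocol wins the parity game on input bits `a`
    when the players share the N-qubit pure state `psi` (length 2**N vector)."""
    N = len(a)
    state = np.asarray(psi, dtype=complex).copy()
    # Each player applies the phase gate diag(1, i**a_j) to their own qubit,
    # so the basis state |x_1...x_N> picks up the phase i**(sum_j a_j x_j).
    for idx in range(2 ** N):
        bits = [(idx >> (N - 1 - j)) & 1 for j in range(N)]
        state[idx] *= 1j ** sum(aj * xj for aj, xj in zip(a, bits))
    # Each player then applies a Hadamard (rotation to the X basis).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    U = H
    for _ in range(N - 1):
        U = np.kron(U, H)
    state = U @ state
    # Z-basis measurement: the players win iff the parity of the outcome bits
    # equals (sum_j a_j / 2) mod 2.
    target = (sum(a) // 2) % 2
    return sum(abs(state[idx]) ** 2
               for idx in range(2 ** N)
               if bin(idx).count("1") % 2 == target)

N = 4
ghz = np.zeros(2 ** N, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # the state |GHZ+> of Eq. (9)
promise = [a for a in product([0, 1], repeat=N) if sum(a) % 2 == 0]
print(min(bbt_win_probability(ghz, a) for a in promise))   # 1.0 up to rounding
```

Replacing `ghz` with any other even-parity state reproduces, on average over the promise inputs, the suboptimal winning probability discussed below.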
(33) for further details). We found that for all \(0<g<1.506\ldots\), this quantum strategy exhibited a weak quantum advantage relative to the optimal classical probability of winning \(p_{\text{cl}}^{*}\), i.e. \(\Delta p=p_{\text{qu}}(g)-p_{\text{cl}}^{*}\) was positive but exponentially small in \(N\), with both \(p_{\text{qu}}(g)\) and \(p_{\text{cl}}^{*}\) themselves exponentially small corrections to the winning probability for random guessing, \(p_{\text{r}}=1/2\). We previously referred to this competition as a "battle of exponentials" between the exponentially small differences \(p_{\text{qu}}(g)-p_{\text{r}}\) and \(p_{\text{cl}}^{*}-p_{\text{r}}\). The question now arises of how best to define the collective utility of a set of players of a nonlocal game. We emphasize from the beginning that this choice is arbitrary; as a simple example, if the players collectively receive a payoff 1 whenever they win the game and a payoff 0 whenever they lose the game, their expected payoff (the standard notion of utility for games [12]) recovers the quantum probability of winning. Thus for the parity game and the strategy \(\mathcal{S}=(|\psi\rangle,\mathcal{P}_{\text{BBT}})\), this definition yields the utility function \[u(|\psi\rangle)=p_{\text{qu}}(|\psi\rangle). \tag{16}\] An undesirable property of this utility function is that it does not usefully capture the quantum advantage of the players relative to a set of purely classical players. A naive modification of this quantity is \(u(|\psi\rangle)=\Delta p=p_{\text{qu}}-p_{\text{cl}}^{*}\), which has the convenient property that it is only positive if the strategy \(\mathcal{S}\) yields a quantum advantage for the parity game. However, the exponential smallness of this quantity makes it unsuitable for directly probing weak quantum advantage as arises quite generically for physically realistic even-parity states [3]. From this viewpoint, a "useful" measure of weak quantum advantage is one that quantifies the rate of decay of this exponential with the number of players, compared to the classical probability of winning, leading us to the definition \[u(|\psi\rangle)=\log\bigg{(}\frac{p_{\text{qu}}(|\psi\rangle)-p_{\text{r}}}{p_ {\text{cl}}^{*}-p_{\text{r}}}\bigg{)}, \tag{17}\] which we shall adopt throughout the remainder of this work. We emphasize that this formula can be applied to any nonlocal game and quantum strategy with \(p_{\text{qu}}>p_{\text{r}}\) (there is no point in considering strategies that perform worse than random guessing). We note that when the quantum strategy \(\mathcal{S}\) is optimal, the quantities \(2\Delta p\) and \((p_{\text{qu}}-p_{\text{r}})/(p_{\text{cl}}^{*}-p_{\text{r}})\) are known as the "bias difference" and the "bias ratio" respectively [25]; in this language, our preferred notion of utility for suboptimal quantum strategies \(\mathcal{S}\) is the logarithm of a suboptimal bias ratio. For the parity game and the quantum strategy \(\mathcal{S}=(|\psi\rangle,\mathcal{P}_{\text{BBT}})\), where \(|\psi\rangle\) has even parity \(\prod_{j=1}^{N}\hat{X}_{j}|\psi\rangle=|\psi\rangle\), this formula reduces to \[u(|\psi\rangle)=\log\Big{(}2^{\left\lceil N/2\right\rceil-1}|\langle\text{GHZ} ^{+}|\psi\rangle|^{2}\Big{)}, \tag{18}\] so that \[-\infty\leq u(|\psi\rangle)\leq(\left\lceil N/2\right\rceil-1)\log 2. 
\tag{19}\] According to our terminology of strong and weak quantum advantage, the quantum strategy \(\mathcal{S}\) can only exhibit either of these properties if the limit \[b=\lim_{N\rightarrow\infty}\frac{u(|\psi\rangle)}{N} \tag{20}\] exists and is strictly positive. The special case \[b=\frac{\log 2}{2} \tag{21}\] corresponds to strong quantum advantage for the parity game, while all other cases \[0<b<\frac{\log 2}{2} \tag{22}\] correspond to weak quantum advantage. For the specific case that \(|\psi(g)\rangle\) is a ground state of the transverse-field Ising model with coupling strength \(g>0\), Eq. (14) implies that the utility Eq. (17) of the transverse-field Ising ground state for playing the parity game is given by \[u(|\psi(g)\rangle)\sim Nb(g),\quad N\rightarrow\infty, \tag{23}\] where \[b(g)=\int_{0}^{\pi}\frac{dk}{2\pi}\log\Bigg{(}1+\frac{1-g\cos k}{\sqrt{1+g^{ 2}-2g\cos k}}\Bigg{)}, \tag{24}\] which is monotonically decreasing with a unique zero \(b(g_{*})=0\) at \(g_{*}=1.506...\). This function is plotted in Fig 1. Thus the transverse-field Ising ground state can provide strong quantum advantage (as \(g\to 0^{+}\)), weak quantum advantage (for \(0<g<g_{*}\)) or no quantum advantage (for \(g\geq g_{*}\)) for the parity game, depending on the value of the coupling strength \(g>0\). ## III Playing the parity game with random Ising ground states We now illustrate the theory developed above with the example of playing the parity game with ground states of the random transverse-field quantum Ising model, \[\hat{H}=-\sum_{j=1}^{N}\hat{Z}_{j}\hat{Z}_{j+1}-\sum_{j=1}^{N}g_{j}\hat{X}_{j} \tag{25}\] on a ring with \(j\equiv j+N\), where \(g_{j}\) denotes the strength of the transverse field at site \(j\). The ground state of this system will be written as \(|\psi(\mathbf{g})\rangle\), can be solved exactly in terms of Jordan-Wigner fermions [26], and will always be even parity if \(g_{j}>0\), which we henceforth assume. The vector of couplings \(\mathbf{g}:=(g_{1},g_{2}\ldots,g_{N})\) is a random variable whose distribution will be specified on a case-by-case basis below (thus \(M=N\) in the notation of Section II.1). Our goal will be to understand the utility Eq. (18) of the state \(|\psi(\mathbf{g})\rangle\) for playing the parity game, which we denote \[u(\mathbf{g})=\log\Big{(}2^{\lceil N/2\rceil-1}|\langle\text{GHZ}^{+}|\psi( \mathbf{g})\rangle|^{2}\Big{)} \tag{26}\] It will be useful to reserve the notation \(\chi(g)\) for the specific case \(g_{1}=g_{2}=\ldots=g_{N}=g\), i.e. \[\chi(g):=u(g,g,\ldots,g). \tag{27}\] In terms of the Bogoliubov angles \(\theta_{k}(g)\), we can write this function explicitly as \[\chi(g)= (\lceil N/2\rceil-1)\log 2\] \[+ 2\sum_{k>0}\log\cos\bigg{(}\frac{\theta_{k}(g)-\theta_{k}(0^{+} )}{2}\bigg{)}. \tag{28}\] ### Perfectly correlated disorder The simplest limit to consider is perfectly correlated disorder, with \(g_{1}=g_{2}=\ldots=g_{N}=g\) and \(g>0\) drawn from some probability distribution with small variance \(\sigma^{2}\ll 1\). In this case, we have \[\mathbb{E}[u(\mathbf{g})]\approx\chi(\bar{g})+\frac{\sigma^{2}}{2}\chi^{\prime \prime}(\bar{g}). \tag{29}\] Thus to determine the perturbative effect of disorder it suffices to compute \[\chi^{\prime\prime}(g)=\sum_{i,j=1}^{N}\frac{\partial^{2}u}{\partial g_{i} \partial g_{j}}(g,g,\ldots,g). \tag{30}\] From Eq. (28), we find that \[\chi^{\prime}(g)=-\sum_{k>0}f_{k}(g) \tag{31}\] Figure 1: The quantity \(b(g)\) as defined in Eq. (24). 
This plot shows how the ground state of the transverse-field Ising model provides a quantum advantage for the parity game that interpolates continuously between a strong quantum advantage as \(g\to 0^{+}\) to no quantum advantage for \(g\geq 1.506\ldots\), via an extended regime of weak quantum advantage. The change in slope at the critical point \(g=g_{c}=1\) indicates a lack of smoothness at this point, which is responsible for the singular behaviour depicted in Fig. 2. and \[\chi^{\prime\prime}(g)=\sum_{k>0}\left[\frac{2f_{k}(g)\cos\theta_{k}(g)}{\epsilon _{k}(g)}-\frac{f_{k}^{2}(g)}{2}-\frac{\sin^{2}\theta_{k}(g)}{2\epsilon_{k}^{2}(g )}\right], \tag{32}\] where we defined \[f_{k}(g):=-\frac{1}{\epsilon_{k}(g)}\tan\left(\frac{\theta_{k}(g)-\theta_{k}( 0^{+})}{2}\right)\sin\theta_{k}(g). \tag{33}\] The rescaled second variation \(\delta u^{(2)}\) in this case is given by \[\frac{\delta u^{(2)}}{N\sigma^{2}}=\frac{1}{2N}\chi^{\prime\prime}(g). \tag{34}\] Eq. (34) is plotted in Fig. 2. A striking conclusion from this plot is that rational players will _prefer_ weak disorder in the paramagnetic phase, because it enhances their expected probability of winning. Meanwhile, they will avoid weak disorder in the ferromagnetic phase, because this diminishes their expected probability of winning. In the large-system limit [33] as \(N\rightarrow\infty\), these two regimes are cleanly separated by a divergence in the rescaled second variation that lies precisely at the Ising critical point \(g=g_{c}=1\). A more detailed asymptotic analysis reveals that this divergence is independent of the order of limits: the rescaled second variation diverges whether one lets \(g\to g_{c}\) having already taken the large-system limit as \(N\rightarrow\infty\), or lets \(N\rightarrow\infty\) while holding \(g=g_{c}\) fixed (see Appendix C). ### Uncorrelated (i.i.d.) disorder We next consider the case of i.i.d. couplings \(g_{i}\) drawn from a distribution with \(\sigma\ll\bar{g}\), so that \(\delta g_{i}=g_{i}-\bar{g}\) satisfies \[\mathbb{E}[\delta g_{i}]=0,\quad\mathbb{E}[\delta g_{i}\delta g_{j}]=\sigma^{ 2}\delta_{ij}. \tag{35}\] Then by Eq. (6) we have \[\mathbb{E}[u(\mathbf{g})]\approx\chi(\bar{g})+\frac{1}{2}\sigma^{2}\nabla^{2} u(\bar{\mathbf{g}}) \tag{36}\] and \[\begin{split}&\nabla^{2}u(\bar{\mathbf{g}})=\frac{2}{N}\sum_{p_{1}>0,\,p_{2}>0}\frac{1}{(\epsilon_{p_{1}}+\epsilon_{p_{2}})^{2}}\left[-\epsilon_{ p_{1}}\epsilon_{p_{2}}f_{p_{1}}f_{p_{2}}+\right.\\ &\left.(\epsilon_{p_{1}}+\epsilon_{p_{2}})\left(f_{p_{2}}\cos \theta_{p_{1}}+f_{p_{1}}\cos\theta_{p_{2}}\right)+(\cos\theta_{p_{1}}\cos \theta_{p_{2}}-1)\right]\end{split} \tag{37}\] (see Appendix B for details). This formula is verified against numerical simulations in Fig. 3. The behaviour of the rescaled second variation \[\frac{\delta u^{(2)}}{N\sigma^{2}}=\frac{1}{2N}\nabla^{2}u(\bar{\mathbf{g}}) \tag{38}\] as \(N\rightarrow\infty\) is depicted for a wider range of values of the mean transverse-field strength \(\bar{g}\) in Fig. 4. As for the case of perfectly correlated disorder, we find that rational players will eschew disorder in most (>99%) of the ferromagnetic phase and prefer disorder in the paramagnetic phase, since the Laplacian is negative (resp. positive) in these two regimes. Similarly, the Laplacian diverges at the Ising critical point \(g=g_{c}\). 
However, in contrast to the perfectly correlated case, the Laplacian changes sign at a value of \(g\) that differs from the Ising critical point, \(g\approx 0.9902<g_{c}\) in the large-system limit. We obtain this value by approximating the sum Eq. (37) by an integral, which we evaluate numerically; this integral yields the blue dashed line in Fig. 4. To clarify the meaning and significance of these results, it is helpful to study the histograms of \(u(\mathbf{g})-u(\bar{\mathbf{g}})\) while keeping \(\sigma\) fixed and varying the number of qubits \(N\). Such histograms are depicted in Fig. 5. Quite strikingly, we see that as \(N\) is varied, there is an extensive shift in the expected utility corresponding to extensivity of \(\nabla^{2}u(\bar{\mathbf{g}})\), which can be deduced from Eq. (37). According to Mermin's formulation of the parity game [16], this can be interpreted as an exponentially large change in the average number of satisfied GHZ stabilizers for the state \(|\psi(\mathbf{g})\rangle\). Following our discussion in the previous section, this change can be negative (deep in the ferromagnetic phase) or positive (deep in the paramagnetic phase). In the paramagnetic phase, the lower panel of Fig. 5 demonstrates an even more dramatic effect that can occur for certain parameter values, whereby the introduction of disorder tips the system from a regime of no quantum advantage to a regime of weak quantum advantage for the parity game. We have chosen to illustrate this effect away from the immediate vicinity of \(g_{*}\) where the change in \(b\) is relatively weak; by moving \(\bar{g}\) closer to \(g_{*}\), this effect can be amplified. For example, at \(\bar{g}=1.55\), with the same system size and disorder distribution, we find that \(b(\bar{\mathbf{g}})=-0.010...<0\) while \(\mathbb{E}[b(\mathbf{g})]=0.015...>0\), so that the disorder-averaged value of \(b\) is roughly an order of magnitude larger than for the example depicted in Fig. 5. Finally, we note that the shape of the distributions of \(u(\mathbf{g})\) becomes qualitatively normal as \(N\) increases, which is consistent with the intuition that the quantum winning probability for the RTFIM ground state effectively computes the determinant of a large random matrix (see Eq. (100)), and should therefore be asymptotically log-normally distributed as \(N\rightarrow\infty\).

Figure 2: The second variation of utility Eq. (34) for ground states of the random transverse-field Ising model with perfectly correlated disorder, as a function of the mean transverse-field strength \(\bar{g}_{i}=g\) and the system size \(N\). This exhibits a clear divergence at the Ising critical point \(g=g_{c}=1\), which can be seen as a phase transition between regimes where rational players will be risk-averse (in the ferromagnetic phase) and risk-seeking (in the paramagnetic phase) respectively.

### Partially correlated disorder

In Sections III.1 and III.2 above, we considered the limits of perfectly correlated and perfectly uncorrelated disorder in the random transverse-field Ising model. We finally turn to the intermediate and physically relevant case of disorder with a finite correlation length \(\xi\). Thus we suppose that the small random perturbations \(\{\delta g_{j}\}\) satisfy \[\mathbb{E}[\delta g_{j}]=0,\quad\mathbb{E}[\delta g_{j}\delta g_{l}]=\sigma^{2}e^{-\frac{|j-l|}{\xi}} \tag{39}\] with \(\sigma\ll 1\).
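For readers who wish to reproduce disorder ensembles of this form, the covariance structure of Eq. (39) can be sampled directly via a Cholesky factorization. The following is a minimal numerical sketch, not the authors' code: the system size, disorder strength, and correlation length are illustrative choices, and the distance convention on the ring is noted as an assumption in the comments.

```python
import numpy as np

def sample_correlated_disorder(N, sigma, xi, n_samples=1, rng=None):
    """Draw delta_g ~ N(0, C) with C_{jl} = sigma^2 * exp(-|j-l|/xi), cf. Eq. (39)."""
    rng = np.random.default_rng() if rng is None else rng
    j = np.arange(N)
    # Distance on an open chain; for the ring geometry of Eq. (25) one could
    # instead use min(|j-l|, N-|j-l|).
    dist = np.abs(j[:, None] - j[None, :])
    cov = sigma**2 * np.exp(-dist / xi)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(N))   # small jitter for numerical stability
    return (L @ rng.standard_normal((N, n_samples))).T  # shape (n_samples, N)

# Example: weak disorder about a uniform field g_bar, as in the perturbative regime.
g_bar, sigma, xi, N = 1.2, 0.05, 4.0, 40
delta_g = sample_correlated_disorder(N, sigma, xi, n_samples=5)
g = g_bar + delta_g   # realizations of the coupling vector g = (g_1, ..., g_N)
print(g.shape, g.mean(), g.std())
```

In the limits \(\xi\to 0^{+}\) and \(\xi\gg N\) this construction reduces to the i.i.d. and perfectly correlated ensembles of Sections III.2 and III.1, respectively.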
In order to probe the effect of the correlation length on the expected utility, we again consider the rescaled second variation of the expected utility \(\frac{\delta u^{(2)}(\mathbf{g})}{N\sigma^{2}}\), with \(\delta u^{(2)}\) obtained exactly from Eq. (110). We find that for a fixed system size \(N\), this interpolates smoothly between the behaviours identified above in the limits of perfectly correlated and perfectly uncorrelated disorder, as shown in Figure 6. We can gain some insight into this behaviour through the following intuitive argument: if \(\xi=\mathcal{O}(N^{0})\) (the most physical case), then the system will resemble the limit of perfectly uncorrelated disorder on length scales \(\gg\xi\) as \(N\to\infty\), and should thus recover the qualitative behaviour observed in Sec. III.2. Conversely, if \(\xi\) grows faster than \(N\), e.g. as \(\xi=N^{2}\), then the separation of scales \(N\ll\xi\) implies that the disorder will appear perfectly correlated at the system length scale \(N\), recovering the qualitative behaviour observed in Sec. III.1. These intuitions are consistent with the behaviour observed in Fig. 6 and with numerical simulations for multiple values of \(N\) (not shown), and can be justified more systematically from Eq. (5).

Figure 3: Leading behaviour of the expected utility Eq. (36) for the RTFIM in the limit of weak and uncorrelated Gaussian disorder. The precise distributions that \(\mathbf{g}\) is sampled from are defined in the figure titles. Each blue datapoint is calculated from \(50000\) samples at the appropriate disorder strength \(\sigma\), with error bars representing the standard error of the sample mean of \(u(\mathbf{g})-u(\bar{\mathbf{g}})\). The magenta line depicts the analytical prediction \(\delta u^{(2)}=\frac{\sigma^{2}}{2}\nabla^{2}u(\bar{\mathbf{g}})\) for the second variation of the expected utility predicted by Eq. (37).

Figure 4: Eq. (37) plotted as a function of the mean transverse-field strength \(\bar{g}_{i}=g\) and the system size \(N\). For finite values of \(N\), there is a crossover from a regime where the players will be risk-averse, which includes nearly all of the ferromagnetic phase, to a regime where the players will be risk-seeking, which includes the entire paramagnetic phase. This crossover occurs at a transverse-field strength \(g\approx 0.9902...<1\) in the large-system limit as \(N\rightarrow\infty\), and is accompanied by a divergence at the Ising critical point \(g=g_{c}=1\).

Figure 6: The second variation of the expected utility for partially correlated disorder. The colors quantify the correlation length in units of the system size \(N=40\), as measured by the quantity \(\log(\xi/N)\). Thus the blue curves correspond to the limit of uncorrelated disorder \(\xi\ll N\), which recovers the results of Section III.2, while the red curves correspond to the limit of highly correlated disorder \(\xi\gg N\) that is discussed in Section III.1. This plot further demonstrates a smooth interpolation between the qualitative behaviours in the perfectly correlated limit (blue dotted line) and the perfectly uncorrelated limit (red dotted line) as the correlation length \(\xi\) is varied.

## IV Conclusion

We have developed a theoretical framework to quantify the effect of quantum noise on the successful execution of quantum tasks, and pointed out the similarities between this formalism and the theory of risk aversion in economics [13; 18]. We illustrated this formalism with the specific example of playing the parity game with ground states of the random transverse-field Ising model, with the players free to specify the variance \(\sigma^{2}\) of the disorder in the transverse fields. We found that in the regime of perturbatively weak disorder strength \(\sigma^{2}\ll 1\), the players were risk-averse over a region approximating the ferromagnetic phase, and risk-seeking over a region approximating the paramagnetic phase, with the boundary between these regions depending non-universally on the correlation length of the disorder. A nontrivial prediction from our analysis is that the sensitivity of the quantum winning probability to disorder will diverge at the quantum critical point for disorder that is either perfectly correlated (\(\xi/N\to\infty\)) or perfectly uncorrelated (\(\xi/N\to 0\)) in the large-system limit as \(N\to\infty\). This is a much more direct diagnostic of quantum criticality than the quantum winning probability itself [3], which is continuous at the quantum critical point \(g=1\) and only loses quantum advantage at \(g\approx 1.5\), raising the question of how far such divergences are a universal feature of playing nonlocal games with ground states of local Hamiltonians. For example, the effects that we observe might be robust to improving [24] the Brassard-Broadbent-Tapp protocol with single-qubit unitaries, by virtue of the divergent correlation length at criticality. Another nontrivial effect that we have identified above is the possibility for disorder with zero mean and non-zero variance \(\sigma^{2}>0\) to _increase_ the expected degree of weak quantum advantage for the parity game, to the point of generating a quantum advantage that would be entirely absent in the clean limit \(\sigma^{2}=0\). It seems surprising that there are circumstances in which noise can engender a quantum advantage, given that noise is usually regarded as the bane of fault-tolerant quantum computation [27] and that depolarizing noise can make quantum systems easier to simulate classically [28]. A few recent works have nevertheless argued that quantum noise can be beneficial within specific contexts [29; 30]. In future work, it would be desirable to understand more systematically the circumstances in which quantum noise is helpful for accomplishing a given quantum task. It would similarly be interesting to find additional examples of quantum tasks that are enhanced by physically natural realizations of quantum noise, and to understand whether in such physical cases, noise can ever generate a strong quantum advantage of the type that could be discerned experimentally in a large system.

## V Acknowledgments

We thank F.J. Burnell for collaborations on related topics. V.B.B. is supported by a fellowship at the Princeton Center for Theoretical Science and thanks D.S. Borgania, S.J. Garratt and A. Natarajan for helpful discussions, R.A. Bulchandani for introducing him to expected utility theory, and the Simons Institute for the Theory of Computing for their hospitality during the completion of this work. S.L.S. was supported by a Leverhulme Trust International Professorship, Grant Number LIP-202-014. For the purpose of Open Access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
2309.00853
Correlated and Multi-frequency Diffusion Modeling for Highly Under-sampled MRI Reconstruction
Most existing MRI reconstruction methods perform targeted reconstruction of the entire MR image without taking specific tissue regions into consideration. This may fail to emphasize the reconstruction accuracy on important tissues for diagnosis. In this study, leveraging a combination of the properties of k-space data and the diffusion process, our novel scheme focuses on mining the multi-frequency prior with different strategies to preserve fine texture details in the reconstructed image. In addition, a diffusion process can converge more quickly if its target distribution closely resembles the noise distribution in the process. This can be accomplished through various high-frequency prior extractors. The finding further solidifies the effectiveness of the score-based generative model. On top of all the advantages, our method improves the accuracy of MRI reconstruction and accelerates the sampling process. Experimental results verify that the proposed method successfully obtains more accurate reconstruction and outperforms state-of-the-art methods.
Yu Guan, Chuanming Yu, Shiyu Lu, Zhuoxu Cui, Dong Liang, Qiegen Liu
2023-09-02T07:51:27Z
http://arxiv.org/abs/2309.00853v1
# Correlated and Multi-frequency Diffusion Modeling for ###### Abstract Most existing MRI reconstruction methods perform targeted reconstruction of the entire MR image without taking specific tissue regions into consideration. This may fail to emphasize the reconstruction accuracy on important tissues for diagnosis. In this study, leveraging a combination of the properties of k-space data and the diffusion process, our novel scheme focuses on mining the multi-frequency prior with different strategies to preserve fine texture details in the reconstructed image. In addition, a diffusion process can converge more quickly if its target distribution closely resembles the noise distribution in the process. This can be accomplished through various high-frequency prior extractors. The finding further solidifies the effectiveness of the score-based generative model. On top of all the advantages, our method improves the accuracy of MRI reconstruction and accelerates sampling process. Experimental results verify that the proposed method successfully obtains more accurate reconstruction and outperforms state-of-the-art methods. MRI reconstruction, score-based generative model, diffusion process, multi-frequency prior. ## I Introduction Magnetic Resonance Imaging (MRI) is an essential technology that employs the nuclear magnetization of the material for imaging without ionizing radiation [1], [2]. Despite its numerous advantages, the limited imaging speed of MRI remains a significant bottleneck. To address this issue, compressed sensing [3]-[6] and parallel imaging [7]-[9] have been developed. While advancements in their reconstruction algorithms have led to significant progress in clinical settings, challenges regarding reconstruction accuracy persist. Therefore, optimizing reconstruction processes to capture as many intricate details as possible has become a primary area of focus in MRI research. In recent years, diffusion models have gained wide interest as a new class of generative model since they can provide a more accurate representation of the data distribution and surprisingly high sample quality [10]-[17]. Hereafter, leveraging the learned score function as a prior, the diffusion model with Langevin dynamic appeared as an emerging technology to reconstruct MRI. Among many works, Quan _et al._[16] presented a diffusion framework to exploit homotopic gradients of generative density priors by taking advantage of the denoising score matching for MRI reconstruction. A similar approach has been also applied to the accelerated MRI by Chung _et al._[17], which trained a continuous time-dependent score function with denoising score matching for the high accuracy of reconstruction. Thanks to the development of diffusion model in MR reconstruction has shown impressive results, the hotspot of research has transitioned to explore the possibility of extracting prior information in the high-frequency space to improve the performance of accurate reconstruction [18]-[20]. Generally, there are two prominent strategies have garnered attention for the extraction of high-frequency information, denoted as "Mask-K-Space" and "Weight-K-Space". From a visual perspective, "Mask-K-Space" directly segregates the high-frequency and low-frequency components of k-space data through the application of artificial masks. Mathematically, this approach exhibits a resemblance to the concept of hard-thresholding. 
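To make the distinction concrete, the "Mask-K-Space" idea can be sketched in a few lines of NumPy: a centered binary window keeps the low-frequency region of k-space, and its complement isolates the high-frequency residue. This is a hedged illustration only; the window size and test image are arbitrary choices rather than the settings used in the cited works.

```python
import numpy as np

def split_kspace(image, window=32):
    """Hard-threshold split of k-space into low- and high-frequency parts."""
    k = np.fft.fftshift(np.fft.fft2(image))           # centered k-space
    H, W = k.shape
    m = np.zeros_like(k, dtype=float)
    m[H//2 - window//2:H//2 + window//2,
      W//2 - window//2:W//2 + window//2] = 1.0        # central low-frequency window
    k_low, k_high = m * k, (1.0 - m) * k              # hard-threshold separation
    img_low = np.fft.ifft2(np.fft.ifftshift(k_low))
    img_high = np.fft.ifft2(np.fft.ifftshift(k_high))
    return img_low, img_high

# Example with a synthetic image: the two parts sum back to the original.
x = np.random.rand(128, 128)
lo, hi = split_kspace(x, window=32)
assert np.allclose(lo + hi, x)
```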
One of the most promising approaches was proposed by Xie _et al._[21], they applied diffusion process in k-space domain with conditioned under-sampling mask and obtained the high-frequency information of acquired data through the imaging operator. Diverging from the hard-threshold technique employed by "Mask-K-Space", "Weight-K-Space" employs a weight-based technology to modulate the entirety of k-space data, which is similar to the underlying principle of soft-thresholding for data manipulation. Following this idea to improve the reconstruction accuracy of MRI has been successfully demonstrated in our earlier work [22]. In detail, we effectively applied k-space weight-based techniques in score-based generative model to capture high-frequency priors. Another common approach depending on the idea of "Weight-K-Space" was presented by Cao et al. [23], where the goal was to acquire high-frequency noise with soft-thresholding and combine it with image data to construct a new diffusion model for robust MRI reconstruction. Nevertheless, there are still many open questions and challenges in this field. For example, whilst our initial weight-based strategy was found to be successful, it demonstrated a high level of dependence on specific parameter selection. Furthermore, the ability to improve the reconstruction accuracy by weight-based technology itself has proved to be limited, even with optimal parameters. Therefore, the utilization of supplementary manners to construct an array of varied and extensive priors may serve as an effective solution. Other shortcomings of the above methods were that they were not universal due to the measurement-conditioned model [21] and suffered from long computation time due to the transformation of different domains [23]. Based on the above analysis, we introduce a novel reconstruction scheme that inherits the advantages of previous methods while eliminating some of their shortcomings. First of all, the aim of the study is to focus on the underlying signal properties of k-space data in high-frequency space through the combination of "Weight-K-Space" and "Mask-K-Space". Afterward, a unique method named Correlated and Multi-frequency **D**iffusion **M**odeling **(CM-DM)** is put forward to preserve high-frequency content as well as fine textural details in the reconstructed image. On one hand, integrating two distinct strategies for extracting high-frequency information may optimize the utilization of complementary information characteristics, which is superior to the usage of any one strategy in isolation. On the other hand, it is noteworthy that the target distribution of the high-frequency diffusion process bears a closer resemblance to the noise distribution than that of the diffusion process over the entire k-space. The main contributions and observations of this work are summarized as follows: \(\bullet\) _Diversity of High-frequency Priors for Diffusion Modeling._ Aiming at accurate reconstruction, multi-profile high-frequency components in k-space domain are combined to directly train diffusion model. Experimental results demonstrate that when subjected to a high acceleration rate of 15-fold, the preservation of image details is enhanced due to the diffusion process primarily focuses on high-frequency. \(\bullet\) _A New Look to the Interpretation of Optimal Diffusion Time._ Constraining the diffusion process in the frequency domain and exploiting the structural distributional priors in high-frequency is an underlying contributor for convergence. 
**Theorem 1** illustrates how the feature can promote fast convergence of the diffusion process. The remainder of this paper is exhibited as follows: Section II briefly introduces some relevant works in this study. Section III contains the key idea of the proposed method. The experimental setting and results are shown in Section IV. Section V conducts a concise discussion and Section VI draws a conclusion for this work. ## II Related Work ### _Forward Imaging Model_ The forward model of MRI can be represented by the following formula: \[f=Ak+\eta \tag{1}\] where \(f\in\mathbb{R}^{*}\) is the under-sampled measurement in the k-space domain, \(k\in\mathbb{R}^{*}\) is the medical k-space data to be reconstructed, \(A\in\mathbb{R}^{*\alpha}\) is a measuring matrix according to \(k\), and \(\eta\) is the Gaussian noise. More specifically, \(A=PS\) for the sake of multi-coil acquisition, where \(P\) is the under-sampling operator and \(S\) is the coil sensitivity. Due to insufficient information of \(k\), reconstructing accurate k-space data \(k\) is known as an ill-posed issue. This means that we cannot obtain a solution \(k\) by directly inverting Eq. (1). Thereby, it is important to impose constraints to achieve regularization and Eq. (1) can be formulated as an optimization problem: \[\underset{k}{Min}\left\|Ak-f\right\|_{2}^{2}+\lambda R(k) \tag{2}\] where \(\left\|Ak-y\right\|_{2}^{2}\) term is the data fidelity. \(R(k)\) is the regularization term, which plays a significant role in pursuing high-quality results. \(\lambda\) is the weight coefficient that balances the data fidelity term and the regularization term for optimization solution. ### _Diffusion Models in MRI Reconstruction_ Diffusion models are a cutting-edge class of generative models that have demonstrated to be highly effective in learning complex data distributions, such as MRI reconstruction [24]-[26]. Specially, Jalal _et al._[27] proposed the first study which trains the diffusion model on MR images as prior information for the inversion pathway in reconstructing realistic MR images. Furthermore, it inspired Song _et al._[28] to improve the basic theory of the original diffusion model and then derive it to the application of medical image reconstruction. As another class of relevant diffusion model, Gungor _et al._[29] leveraged an efficient adaptive diffusion prior trained via adversarial mapping over large reverse diffusion steps for accelerated MRI reconstruction. Overall, diffusion model has proven to be one of the highly flexible and tractable generative models that can accurately generate complex data distributions from random noise in the image domain. Recently, the transformation of the diffusion process from the image domain to the k-space domain to directly process k-space data has become a new research hotspot. Tu _et al._[22] trained an unsupervised diffusion model on weighted and high-dimensional k-space data, where weight-based technology and high-dimensional space augmentation design are applied to the initial k-space data for better capturing the prior distribution. Similar to this work, Xie _et al._[23] presented a measurement-conditioned diffusion model for MRI reconstruction and achieved results that outperformed other methods. They defined the model in the k-space domain with conditioned under-sampling mask to provide an estimate of uncertainty as output. 
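Before moving on, a small sketch may help fix the notation of Eq. (1). The realization below uses a common image-domain form of a multi-coil acquisition (coil-weighted image, Fourier transform, random undersampling), written in terms of the underlying image \(x=F^{-1}k\); the coil maps and mask are synthetic stand-ins, and this is an illustration of the general forward model rather than the exact operator used in any of the cited works.

```python
import numpy as np

def forward(x, sens, mask):
    """Multi-coil forward model: per-coil k-space f_c = P * F(S_c * x)."""
    coil_images = sens * x[None, ...]                    # S_c * x, shape (C, H, W)
    kspace = np.fft.fft2(coil_images, norm="ortho")      # Fourier transform F
    return mask[None, ...] * kspace                      # undersampling operator P

H = W = 64
C = 4
x = np.random.rand(H, W) + 1j * np.random.rand(H, W)            # image to recover
sens = np.random.rand(C, H, W) + 1j * np.random.rand(C, H, W)   # synthetic coil sensitivities
mask = (np.random.rand(H, W) < 0.25).astype(float)              # random undersampling mask (~4x)
noise = 0.01 * (np.random.randn(C, H, W) + 1j * np.random.randn(C, H, W))
f = forward(x, sens, mask) + noise                              # f = A k + eta, cf. Eq. (1)
```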
Therefore, constraining the diffusion process in k-space domain not only enables the direct utilization of inherent k-space data for enhanced efficiency but also achieves substantial results. With the remarkable advancements in above-mentioned methods, achieving more refined MRI reconstruction by extracting a set of high-frequency components has also observed a growing interest [30]. For instance, He _et al._[31] extracted high-frequency part of images and designed a multi-profile denoising autoencoder for attaining deep frequency-recurrent prior. Besides, Yang _et al._[32] developed a two-stage reconstruction procedure to treat the low-frequency and high-frequency parts progressively for a more accurate reconstruction. Influenced by the diffusion model, Xie _et al._[21] invented an effective algorithm measurement-conditioned denoising diffusion probabilistic model, which exploited high-frequency information in k-space domain with a specific mask. In the follow-up development, a score-based diffusion method is developed by Cao _et al._[23] that they added high-frequency noise to the data in image domain for stable and accelerated MR reconstruction. ## III Method ### _Motivation_ K-space data contains high-frequency and low-frequency components. The texture details of the image are associated with high-frequency components while the low-frequency components can effectively represent image profiles. Exploiting high-frequency component is significant for realizing accurate reconstruction. Considering that more attention is paid to handling high-frequency information for accurate MRI reconstruction. In this section, we first focus on different strategies of extracting high-frequency information in the k-space domain. Critically, it is shown that the convergence can be sped up dramatically when we constraint the diffusion process in the high-frequency domain, which is further briefly discussed in **Section III.B**. Then we exploit the strengths of each strategy and combine them in different ways to avoid the unique pitfalls of each (**Section III.C**). **Fig. 1** shows the overall procedure of the proposed method. ### _Diffusion Process Meets Multi-frequency Space_ _Diffusion Process in K-Space:_ Suppose a certain dataset \(x_{0}\in\mathbb{R}^{d}\) contains i.i.d. sampled data from an unknown distribution \(x_{0}-p_{x}(X)\) and the score function \(s_{\theta}(x)\) with parameter \(\theta\) is an approximation of the gradient of its log probability density, i.e., \(\nabla_{x}\log p_{x}(X)\). Then, diffusion process sampling is performed according to the score function to obtain samples that obey the distribution \(p_{x}(X)\), i.e., \[x^{\prime-1}=x^{\prime}+\frac{\varepsilon^{\prime}}{2}\nabla_{x}\log p_{x}(x^{ \prime})+\sqrt{\varepsilon^{\prime}}z^{\prime},\ t=T,\cdots,1 \tag{3}\] where \(\varepsilon^{\prime}\) specifies the step size. The initial distribution \(x^{\prime}\) is sampled from a given prior distribution and the noise \(z^{\prime}\) is samples of the standard \(d\) -dimensional Gaussian distribution. Typically, the step size \(\varepsilon^{\prime}\) for sampling must be small enough for ensuring stability in the diffusion procedure. Otherwise, the model will fail to converge to the target distribution [33]. As a result, the diffusion process suffers from a lengthy sampling process as many iterations are needed. Based on the theory in [34] that this limitation can be alleviated by increasing the similarity between the target data distribution and diffusion related distributions. 
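For orientation, the update rule of Eq. (3) (and, after a Fourier transform, Eq. (4)) can be written as a short loop. The score function below is a toy placeholder standing in for a trained network \(s_{\theta}\), and the step-size schedule is an illustrative assumption.

```python
import numpy as np

def annealed_langevin(score_fn, shape, step_sizes, rng=None):
    """Langevin sampling x^{t-1} = x^t + (eps_t/2) * score(x^t) + sqrt(eps_t) * z, cf. Eq. (3)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(shape)                 # initial sample x^T from the prior
    for eps in step_sizes:                         # t = T, ..., 1
        z = rng.standard_normal(shape)
        x = x + 0.5 * eps * score_fn(x) + np.sqrt(eps) * z
    return x

# Placeholder score: gradient of log N(0, I), i.e. score(x) = -x.
toy_score = lambda x: -x
steps = np.geomspace(1e-1, 1e-4, num=200)          # decreasing step-size schedule
sample = annealed_langevin(toy_score, shape=(64, 64), step_sizes=steps)
```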
This is intuitive as both the starting point and added noise is constrained to be relevant w.r.t. the target distribution, hence minimizing the inefficient fluctuations over the diffusion process. We first show that the sampling process can be equivalently transformed to the frequency domain as follows: \[k^{\prime-1}=k^{\prime}+\frac{\varepsilon^{\prime}}{2}\nabla_{x}\log p_{x}(k^ {\prime})+\sqrt{\varepsilon^{\prime}}F[z^{\prime}] \tag{4}\] where \(F\) is Fourier transform and \(k=Fx\). Given the same sampling process as **Eq. (1)**, the above formula aims to redesign the diffusion process for model inference in frequency domain that it is still capable of producing samples of excellent quality under high acceleration. Details are given in **Appendix A.1** for a full definition. Nevertheless, there exists a general observation that the amplitude of low-frequency information of images is typically much higher than that of high-frequency ones [35]. This means the distribution of images exhibits huge gaps in quantity between different coordinates in the frequency domain, causing a severe ill-conditioned issue. Therefore, it explains the necessity to regulate the frequency distribution of a diffusion process, which is implemented by extracting only high-frequency information. With the above theoretical findings, as an instantiation we formulate different high-frequency operations to regulate the frequency distribution of the samples and promote the behavior of accelerated diffusion process. Concretely, the key idea is that mathematically matrix operator is effective in excavating high-frequency information in a way that the amplitude in frequency domains become more consistent along all the directions. _Weight-K-Space:_ To restrict the magnitude difference of data in k-space, the weight-based matrix strategy is presented to solve the problem that the value range of k-space data varies drastically. Specifically, it is incorporated to handle k-space data with high-frequency and low-frequency information during the training stage. The weight-based matrix operator can be expressed as follows: \[K_{x}(k)=w\odot k;\ w=(r\cdot x_{x}^{2}+r\cdot y_{x}^{2})^{p} \tag{5}\] where \(k\) denotes the initial data in the k-space domain, and \(w\) is the specific weight-based matrix. The \(\odot\) means element-wise multiplication and \(r\) is introduced for setting the cutoff value. \(p\) decides the smoothness of the weight boundary while \(x_{x}\) and \(y_{x}\) are the count of frequency encoding lines and phase encoding lines. Formally, we enrich the above Langevin dynamics (**Eq. (4)**) by imposing a weighting operation into the diffusion process as: Fig. 1: Overview of the CM-DM method. Different high-frequency prior extractors (“Weight-K-Space” or “Mask-K-Space”) are employed to restrict the diffusion process in the training process. The second row shows that the input goes through predictor and corrector iteratively to obtain the final reconstructed image. \[K_{\omega}^{i-1}=K_{\omega}^{i}+\frac{\varepsilon^{i^{\prime}}}{2}\nabla_{K_{ \omega}}\log p_{K_{\omega}}(K_{\omega}^{i})+\sqrt{\varepsilon^{i^{\prime}}}F[z^{ \prime}] \tag{6}\] In most cases, it is weighted on the overall k-space data which has the advantage of strong convergence and theoretical analysis. However, the training of the method for MRI reconstruction problem is not easy in the presence of large number of parameters, which leads to the difficulty in fine-tuning parameters. 
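A minimal sketch of the weighting operator in Eq. (5) is given below. Since the exact coordinate convention and the constants \(r\) and \(p\) are implementation-specific, the normalized grids and parameter values here are illustrative assumptions rather than the settings used by the authors.

```python
import numpy as np

def weight_matrix(H, W, r=0.5, p=0.5):
    """Radial k-space weight w = (r*kx^2 + r*ky^2)^p, cf. Eq. (5)."""
    ky = np.linspace(-1.0, 1.0, H)[:, None]   # normalized phase-encoding coordinate (assumption)
    kx = np.linspace(-1.0, 1.0, W)[None, :]   # normalized frequency-encoding coordinate (assumption)
    return (r * kx**2 + r * ky**2) ** p

k = np.fft.fftshift(np.fft.fft2(np.random.rand(128, 128)))   # centered k-space data
w = weight_matrix(*k.shape)
k_weighted = w * k   # K_w(k) = w ⊙ k: the low-frequency center is suppressed,
                     # flattening the dynamic range across frequencies
```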
For more fine-grained regulation, we can design multiple manners each with a dedicated advantage. _Mask-K-Space:_ Focusing on the extraction of high-frequency information while keeping the steady-state distribution simultaneously, another scheme based on k-space segmentation is proposed to obtain high-frequency information by adjusting window size. In detail, for the sake of effectively narrowing the range of values in the k-space data, the "Mask-K-Space" with an adjustable central window is designed to eliminate low-frequency information. Algorithmically, the specific "Mask-K-Space" operator is defined as follows: \[K_{m}(k)=(I-m)\odot k \tag{7}\] where \(m\) is the specific "Mask-K-Space" kernel to separate low-frequency and high-frequency information, with value 1 in k-space center (assume the window size \(n\times n\) is adjustable) and 0 for the rest of the region which denotes the high-frequency part. \(I\) is an all-ones matrix and \((I-m)\) is a mask which could remove the low-frequency information in the central area. Combining the operation above, we redefine another diffusion process as: \[K_{m}^{i-1}=K_{m}^{i}+\frac{\varepsilon^{i^{\prime}}}{2}\nabla_{K_{m}}\log p _{K_{\omega}}(K_{m}^{i})+\sqrt{\varepsilon^{i^{\prime}}}F[z^{\prime}] \tag{8}\] _Deviation Analysis:_ For theory completeness, we further briefly discuss the possibility to construct preconditioning operators using the matrix \(w\) and \(m\) (**Eq. 5 and Eq. 7**) as accelerators of the diffusion process. Note, this is a theoretical extension as formulated above to clarify the effect of high-frequency information on convergence (see **Fig. 2**). **Theorem 1.** Suppose the diffusion processes (**Eq. (6)** and **Eq. (8)**) converge to a distribution \(K_{\omega}^{*}\)-\(p_{z^{\prime}}(K_{\omega}^{*})\). Denoting \(\mathcal{F}_{-i^{\prime}}\) as the \(\sigma\) -algebra generated by \(\{K_{\omega}^{T},z^{\prime},s=T,\cdots,t+1\}\). If the additive noise \(\{z^{*}\}_{z^{\prime+1}}^{T}\) depend on \(K_{\omega}^{i}\) and satisfy the condition \(\mathbb{E}[z^{i^{\prime}}\left|\mathcal{F}_{-i}\right|=0,\forall t\in\{T, \cdots,1\}]\), then the deviation of \(K_{\omega}^{i}\) from \(K_{\omega}^{i}\) can be written as \[\mathbb{E}[\parallel K_{\omega}^{*})-K_{\omega}^{i-1}\parallel^{2}]=C_{1}+ \varepsilon^{i^{\prime}}\mathbb{E}[\parallel z^{i}]-2\sqrt{\varepsilon^{i^{ \prime}}}\mathbb{E}[K_{\omega}^{*})\cdot z^{\prime}] \tag{9}\] where the term \(C_{1}\) is a constant independent of \(z^{\prime}\). \(K_{\omega}^{*}\) and \(K_{\omega}^{i^{\prime}}\) are an instantiation of \(K_{\omega}^{i}\). 
_Proof._ For the conditional deviation, we have \[\mathbb{E}[\parallel K_{\omega}^{*})-K_{\omega}^{i-1}\parallel^{2} \left|\mathcal{F}_{-i}\right| \tag{10}\] \[= \mathbb{E}[\parallel K_{\omega}^{*})-K_{\omega}^{i-1}\frac{ \varepsilon^{i^{\prime}}}{2}s_{K_{\omega}^{i}}(K_{\omega}^{i})-\sqrt{ \varepsilon^{i^{\prime}}}z^{i}\parallel^{2}\left|\mathcal{F}_{-i}\right|\] \[= \mathbb{E}[\parallel K_{\omega}^{*})-K_{\omega}^{i^{\prime}}\frac{ \varepsilon^{i^{\prime}}}{2}s_{K_{\omega}^{i}}(K_{\omega}^{i^{\prime}}) \parallel^{2}\left|\mathcal{F}_{-i}\right|+\varepsilon^{i}\mathbb{E}[ \parallel z^{i}\parallel^{2}\left|\mathcal{F}_{-i}\right|]\] \[- 2\sqrt{\varepsilon^{i}}\mathbb{E}[K_{\omega}^{*})\cdot z^{\prime} \left|\mathcal{F}_{-i}\right|\] The last equation is due to \[\mathbb{E}[K_{\omega}^{i})+\frac{\varepsilon^{i}}{2}s_{K_{\omega}^{i }}(K_{\omega}^{i})\cdot z^{\prime}\left|\mathcal{F}_{-i}\right| \tag{11}\] \[= (K_{\omega}^{i})+\frac{\varepsilon^{i}}{2}s_{K_{\omega}^{i}}(K_{ \omega}^{i})\cdot\mathbb{E}[z^{\prime}\left|\mathcal{F}_{-i}\right|=0\] Then we take the expectation of both sides and the theorem is proved: \[\mathbb{E}[\parallel K_{\omega}^{*})-K_{\omega}^{i-1}\parallel^{2}]=\mathbb{E }[\parallel K_{\omega}^{*})-K_{\omega}^{i})-\frac{\varepsilon^{i^{\prime}}}{2}s _{K_{\omega}^{i}}(K_{\omega}^{i})\parallel^{2}] \tag{12}\] \[+\varepsilon^{i^{\prime}}\mathbb{E}[\parallel z^{i}\parallel^{2}]-2 \sqrt{\varepsilon^{i^{\prime}}}\mathbb{E}[K_{\omega}^{*})\cdot z^{\prime}]\] Suppose we keep the noise variance \(\mathbb{E}[\parallel z^{i}\parallel^{2}]\) unchanged, \(z^{i}\) impacts the deviation only through the correlation term \(-2\sqrt{\varepsilon^{i^{\prime}}}\mathbb{E}[K_{\omega}^{*})\cdot z^{\prime}\). Remarkably, since we use operators "Weight-K-Space" or "Mask-K-Space" to restrict the target data distribution and diffusion process in the high-frequency domain to explore the prior knowledge, coupled with theoretical analysis, both of them in the high-frequency diffusion process are more positive relevant. This suggests that if we have the elements of \(z^{i}\) positively correlate with the corresponding elements of \(K_{\omega}^{*}\), this deviation will decrease, encouraging the diffusion convergence. ### Underlying Correlations between High-frequency Operators As previously discussed, "Weight-K-Space" and "Mask-K-Space" extract corresponding high-frequency prior information in k-space, so this subsection will undertake a more comprehensive exploration into the underlying elements of these two operators. Fig. 3 exhibits the Fourier-transformed images of the original k-space data after different preprocessing operators. Due to the essence of operators is to extract high-frequency information, the obtained images have the characteristic of structural similarity. Specifically, it means that the structural information garnered through the two strategies exhibits a degree of redundancy and correlation. Nevertheless, changing the kernel of "Mask-K-Space" can confirm the effect of high-frequency component injection so that the corresponding feature maps are distinct, the blue indicated line in Fig. 3 gives a comprehensive visualization. Consequently, in order to explore the correlation between the two strategies, we further employ the correlation coefficient as an evaluative measure to gauge the similarity between the feature maps associated with the "Weight-K-Space" and those relevant to the "Mask-K-Space" operators of varying sizes (depicted by the red indi Fig. 
2: Illustration of the diffusion process in high-frequency space. The first row shows the diffusion process of operator “Mask-K-Space” and the second row shows the diffusion process of operator “Weight-K-Space”. Note that the amplitude of the k-space data becomes more uniform through different frequency domain operators. actor line in Fig. 3), i.e., \(\rho(x,y)=\frac{\text{cov}(x,y)}{\sigma(x)\cdot\sigma(y)}\). where cov(\(\star\)) represents the covariance between maps and \(\sigma(\star)\) represents the corresponding standard deviation of the maps. Note that the closer the correlation coefficient is to 1, the higher the correlation between maps. Evidently, it is apparent that the feature map denoted as \(x_{z}\) exhibits the highest degree of correlation with the feature map \(y\) originating from the "Weight-K-Space", when the kernel of operator "Mask-K-Space" is set to \(50\times 50\). It is because more high-frequency components are retained by "Mask-K-Space" with \(50\times 50\) than the small-window operator, the more perfectly the texture structure can be reconstructed. As the kernel expands to \(70\times 70\), the correlations between "Weight-K-Space" and "Mask-K-Space" exhibit a gradual reduction. One possible reason is that more high-frequency information obtained by "Mask-K-Space" is lost due to excessive pursuit of narrowing the difference. The potential phenomenon further provides additional evidence supporting the idea that the structural information extracted by the two operators shares a correlation, yet each operator also possesses its own specific focus. Ablation experiments will further confirm this phenomenon at the level of evaluation metrics. On this basis, we combine different schemes jointly to excavate high-frequency information according to the aggregation idea to form the multi-frequency prior. By combining the advantages of distinct operators, they can be integrated with each other to obtain good interpretability of network topology and generate realistic results. Afterwards, two collaborative methodologies are introduced to capture multi-frequency prior knowledge and accelerate the diffusion process. The upper column of Fig. 1 depicts the procedure of combinational manners. One kind of combinational mode is in view of serial combination or two-stage learning, that is, "Weight-K-Space" and "Mask-K-Space" are combined in a **serial-manner**. First of all, the under-sampled data in k-space domain multiplied by the weight-based matrix is identified as the input of "Weight-K-Space". Subsequently, the output obtained by the "Weight-K-Space" operator is divided by the corresponding weight-based matrix. Meanwhile, the new input of "Mask-K-Space" emerges after being handled by the artificial mask. Another kind of combinational mode is **parallel-manner**. Note that the input of "Weight-K-Space" is multiplied by the weight-based matrix while the input of "Mask-K-Space" is the k-space data handled by the artificial mask. Finally, we conduct the mean operation on the outputs of two models. ### _Diffusion Reconstruction via Multi-frequency Prior_ For the purpose of utilizing the strong generation ability of the generated model and generating fixed data, the data consistency (DC) operation is imposed after every generation step. Thus, we can sample from the \(p(k\mid y)\) in a reasonable way. 
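The correlation coefficient \(\rho(x,y)\) used above to compare the operators' feature maps is the standard Pearson coefficient; a minimal sketch with synthetic stand-in maps (not the actual feature maps) is given below.

```python
import numpy as np

def correlation(x, y):
    """Pearson correlation rho(x, y) = cov(x, y) / (sigma(x) * sigma(y)) between two maps."""
    x, y = np.ravel(np.abs(x)), np.ravel(np.abs(y))
    return np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Stand-ins for the "Weight-K-Space" and "Mask-K-Space" feature maps.
feat_weight = np.random.rand(128, 128)
feat_mask = 0.8 * feat_weight + 0.2 * np.random.rand(128, 128)   # partially correlated map
print(correlation(feat_weight, feat_mask))   # close to 1 for highly similar maps
```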
The subproblem with regard to DC can be described as: \[\underset{k}{\text{\emph{Min}}}\{\parallel Ak-f\parallel_{z}^{2}+\lambda \parallel k-K\parallel_{z}^{2}\} \tag{15}\] where \(k\) is the entry in k-space generated by network and \(K\) stands for the under-sampled measurement in k-space domain. Due to the collaborative strategy that optimizes \(K_{w}\) and \(K_{w}\) recurrently by different diffusion process employed in this work, the final output needs to be weighted before data consistency, which facilitates the subsequent hybrid prior information to be fed into the network again. The \(K\) can be described as the solution of the two collaborative manners: \[K=\begin{cases}\mu_{k}K_{w}\{(\mu_{k}(K_{w}))\};&\text{\emph{Serial}}\\ \lambda_{i}(K_{w})+\lambda_{i}(K_{w});&\text{\emph{Parallel}}\end{cases} \tag{16}\] where \(\mu_{w}\) and \(\lambda_{w}\) control the level of linear combination between serial and parallel values. Based on this formula, the corresponding DC solution could be solved through mathematical reasoning: \[k(u)=\begin{cases}k(u),&\text{\emph{if}}\ u\not\in\Omega\\ \big{[}k(u)+\lambda K(u)\big{]}/(1+\lambda);&\text{\emph{if}}\ u\in\Omega\end{cases} \tag{17}\] where \(\Omega\) denotes an index set of the acquired k-space samples. \(k(u)\) is the entry at index \(u\) in k-space domain generated by network. When setting noiseless (i.e., \(\lambda\rightarrow\infty\)), the predicted coefficient at \(u\) step is substitute by the initial coefficient if it has been sampled. Traditionally, the reconstruction problem has been conceptualized as a low-rank matrix completion problem. To achieve high-quality reconstruction results, we have incorporated traditional operator following network iteration, which serves to further restore the low-rank matrix. The object of traditional operator is a data matrix which is generated by the network. As the k-space data matrix transforms into Hankel matrix formulation, we can analyze the hard-threshold singular values of the data matrix. According to the low-rank property in Hankel matrix formulation, solving the low-rank constraint term turns to an optimization problem: \[\underset{k}{\text{\emph{Min}}}\parallel Ak-y\parallel_{z}^{2}\text{ \emph{s.t.} }\text{\emph{rank}}(L)=l,k=H^{*}(L) \tag{18}\] where \(H^{*}(\star)\) is the Hankel pseudo-inverse operator, \(L\) is a data matrix with low-rank property after conducting hard-threshold singular values operation, and \(l\) is the rank of the data matrix. In addition, after hard thresholding, DC operation is implemented subsequently to fix data. **Algorithm 1** explains the reconstruction algorithm in detail. ``` Required:\(S_{\rho}(K_{w})\) ; \(S_{\rho}(K_{w})\) ``` **Algorithm 1**CM-DM Fig. 3: Visualization of the underlying features of high-frequency operators. Yellow line represents underlying features in Weight-K-Space and the blue line exhibits features corresponding to different kernels of Max-K-Space. Meanwhile, the red line shows the correlation of different feature maps. ### _Reconstruction Experiments_ _Comparisons with State-of-the-arts._ To verify the advantages of the proposed method, we conduct a comparative experiment with the following DL-based methods (HGGDP [37] and EBMRec [38]) and traditional methods (SAKE [39] and P-LORAKS [40]). Meanwhile, it is mentioned that algorithm CM-DM has opted for the serial-manner in subsequent experiments. 
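Before turning to the quantitative comparison, a concrete illustration of the data-consistency update in Eq. (17) may be useful. This is a hedged sketch with synthetic inputs: in the noiseless limit \(\lambda\rightarrow\infty\) the measured samples are simply re-inserted, otherwise the sampled entries are blended.

```python
import numpy as np

def data_consistency(k_pred, k_meas, sampled, lam=None):
    """Eq. (17): keep k_pred off the sampling set; blend (or replace) on sampled entries."""
    out = k_pred.copy()
    if lam is None:                       # noiseless limit, lambda -> infinity
        out[sampled] = k_meas[sampled]
    else:
        out[sampled] = (k_pred[sampled] + lam * k_meas[sampled]) / (1.0 + lam)
    return out

H = W = 64
k_pred = np.random.randn(H, W) + 1j * np.random.randn(H, W)   # network/diffusion output
k_meas = np.random.randn(H, W) + 1j * np.random.randn(H, W)   # acquired k-space samples
sampled = np.random.rand(H, W) < 0.3                          # index set Omega
k_dc = data_consistency(k_pred, k_meas, sampled, lam=10.0)
```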
The quantitative results of the above methods evaluated under 8\(\times\), 10\(\times\), 12\(\times\), and 15\(\times\) acceleration factors for Poisson, random, and uniform sampling patterns are shown in Table I. As can be seen from Table I, CM-DM performs best compared to the others for all acceleration factors and all sampling patterns. More particularly, as the acceleration factor increases, the performance of CM-DM remains considerable. Overall, CM-DM achieves the best performance in terms of the quantitative metrics and preserves the most realistic high-frequency details. To further prove the superiority of CM-DM over other algorithms, the visualization results of different methods under different acceleration factors are shown in Figs. 4-5. Overall, as for processing high-frequency data, CM-DM achieves more accurate reconstruction with clear texture and boundaries. For example, compared to the reference image, SAKE shows significant noise-like residuals associated with its vulnerability to noise at high acceleration. More obviously, P-LORAKS shows blurred reconstruction details, which is effectively alleviated by CM-DM. Theoretically, the inferiority of P-LORAKS and SAKE indicates that the traditional models are not robust enough when faced with challenging data containing large amounts of high-resolution features and details. As one of the generative models, EBMRec suffers from heavy noise and artifacts under high acceleration factors. Although HGGDP makes some progress compared with EBMRec, it is not devoted to high-frequency prior information and obtains unsatisfactory results under high acceleration factors. The above experimental results validate that CM-DM can accurately reconstruct images and has good generalization ability. This further indicates that the combination of multiple operators for extracting high-frequency priors can promote the generation of more details in the reconstruction process. Furthermore, Fig. 6 visually highlights the strengths of CM-DM. CM-DM yields lower residual errors and higher sensitivity when describing detailed tissue structures. This means that CM-DM retrieves more high-frequency information. In view of the fact that finer details of the image are associated with the high-frequency components, our proposed method demonstrates the capability to retain well-defined boundaries and textural intricacies, resulting in a precise representation of the reconstructed image. Besides, distinct from the existing methods WKGM and HFS-SDE that employ a solitary high-frequency prior, CM-DM utilizes different strategies to combine multi-frequency priors for achieving superior accuracy in MRI reconstruction.

### _Performance of Preserving Pathological Regions_

Experiments are conducted with E2E-Varnet [42] via the _Test 2_ to evaluate the clinical feasibility of CM-DM. It can be seen that the proposed method is able to successfully reconstruct the structural content of the image, including many fine details. This is also indicated by the quantitative results in Table III, where CM-DM outperforms E2E-Varnet by a large margin in all cases. As the acceleration factor is increased, we see that the uncertainty increases correspondingly. Therefore, although the pathological area is enlarged to clarify the details, some noise still remains in the results of CM-DM. By automatically mapping the intrinsic features of the MR images, E2E-Varnet can also provide comparative results for MRI reconstruction, which is consistent with the observations in Fig. 7.
Nevertheless, the difference from the fully-sampled image indicates that it is too smooth to fully preserve the structural content. Although the SSIM values of E2E-Varnet and CM-DM are similar, over-smoothing and distortion are easily recognizable in the former, whereas CM-DM reconstructs accurate texture details with little noise. Obviously, CM-DM is successful in preserving the pathological regions of the image.

### _Convergence Analysis_

In this subsection, the relation between the convergence of WKGM and CM-DM and the number of iterations is investigated using the quantitative indices PSNR and SSIM. We randomly select an example of reconstructing brain images using the random sampling pattern with an acceleration factor \(R\)=8. As can be seen in Fig. 8, both the SSIM and PSNR curves of the WKGM and CM-DM models first rise rapidly as the number of iterations increases and then gradually stabilize. This means that a reconstruction method operating in the high-frequency domain is prone to converge, owing to the small value range of the high-frequency information, which is also proved mathematically in **Theorem 1**. Therefore, it can be concluded that a reconstruction method that directly constrains the model in k-space requires fewer iterations. However, the PSNR and SSIM curves of CM-DM reach convergence at a quicker pace in Fig. 8. This suggests that combining two different, correlated high-frequency distributions can increase the similarity between the sample initialization and the noise sampling. Consequently, the scheme of model combination is beneficial for capturing sufficient high-frequency information and accelerating convergence.

Fig. 6: Complex-valued reconstruction results at \(R\)=15 using Cartesian sampling with 12 coils. From left to right: Full-sampled, Under-sampled, reconstruction by HFS-SDE, WKGM, and CM-DM (ours). The second row shows the enlarged view of the ROI region (indicated by the yellow box in the first row), and the third row shows the error map of the reconstruction.

Fig. 7: Reconstruction results using E2E-Varnet and CM-DM at \(R\)=10 of the equispaced mask. From top to bottom: Reconstruction images, magnified views corresponding to pathological regions, residual maps.

## V Discussion

We have demonstrated that spatially constraining the diffusion process in the high-frequency domain effectively improves reconstruction stability and convergence speed. Moreover, CM-DM combines different high-frequency distributions to increase the correlation of the data, thus reducing the time required for sampling while maintaining the reconstruction quality. However, some areas still need further discussion or improvement for our proposed model. On the one hand, we extract multi-frequency priors with various combination manners and compare them under the same conditions. The collaborative strategies include the serial-manner and the parallel-manner; here, we choose the serial-manner as an example. Detailed experimental results are presented in Table IV, where "W" corresponds to "Weight-K-Space". It is evident that the combination of two models always outperforms a single model, while combining two models of the same type only slightly improves the quantitative results. The combination of "Weight-K-Space" and "Mask-K-Space" makes a big difference: when the weighting scheme is combined with a "Mask-K-Space" window of size \(50\times 50\), every index scores the best. As a result, this configuration is adopted as CM-DM.
On the other hand, exploration of diverse combination manners to attain multi-frequency prior is imperative owing to the variety of high-frequency operators. In light of the previous discussion, we executed a set of ablation experiments. Given that the amalgamation of two preconditioning operators enables the extraction of substantial prior information within the multi-frequency domain, both parallel and serial combinations manners achieve performance gains. Notably, quantitative analysis of the results in Table V also indicates that the gains obtained by different combination manners are slightly different from the practical point of view. Generally speaking, the reconstruction effect of the series-manner is more dominant and it has been selected in this study. Due to the large iteration numbers and long reconstruction time of the diffusion model, improving the sampling speed and the generating speed of the diffusion model remains an ongoing study. Although CM-DM achieves accurate reconstruction while in need of relatively short time, there is still large room for shortening the reconstruction time. Therefore, how to minimize the generation time when reconstructing accurate images is the future goal. ## VI Conclusion In conclusion, we present a novel mathematical framework with correlated and multi-frequency prior to preserve the structural details in the reconstructed image. Specifically, different preprocessing operators are designed to solve the intractable problem of large magnitude contrast between low-frequency and high-frequency k-space data. Meanwhile, we reformulate the diffusion process with corelated and multi-frequency prior information whilst preserving its steady-state distribution, so that the noise and target distributions are much closer to dramatically speed up the convergence of sampling process. Further theoretical analyses and thorough experimental validations fully validate that CM-DM outperforms existing MRI reconstruction methods and achieves comparable performances with the conventional score-based diffusion methods. Other than that, there still remain unanswered questions. Hence, we expect that many interesting questions and answers will be actively discussed in the near future. ## Appendix ### _Diffusion Process in Frequency Domain_ In this section, we will prove theoretically why we can directly regulate the frequency distribution of a diffusion process through the preconditioning strategy, and why it is necessary to do so. We first show that the sampling process can be equivalently transformed to the frequency domain via an orthogonal transform. We start with the following lemma. **Lemma A. 1**.: If two \(d\) -dimensional random vectors \(x\), \(y\in\mathbb{R}^{d}\) have differentiable density functions, and satisfy \(y=Gx\), where the matrix \(G\in\mathbb{R}^{d\times d}\) is invertible, we have \[\nabla,\log p_{y}(y)=\nabla,\log p_{x}(G^{\scalebox{0.5}{$\sim$}}y)=\nabla_{y }\log p_{x}(x)\] (A-1) Proof.: Note for any invertible differentiable transformation \(g\in\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), if \(y=g(x)\), we have: \[p_{y}(y)=p_{x}(g^{\scalebox{0.5}{$\sim$}}(y))\bigg{|}\det\bigg{[}\frac{dg^{ \scalebox{0.5}{$\sim$}}(y)}{dy}\bigg{]}\] (A-2) In particular, \(p_{y}(y)=p_{x}(G^{\scalebox{0.5}{$\sim$}}y)\big{|}G^{\scalebox{0.5}{$\sim$}}\). We verify the lemma by taking the logarithm and calculating the gradients at both sides of the equation. **Theorem A. 
1**.: Suppose \(F\in\mathbb{R}^{d\times d}\) is an orthogonal matrix, then the diffusion process (Eq. (1)) can be rewritten as Fig. 8: Convergence curves of WKGM and CM-DM in terms of PSNR and SSIM versus the iteration number when reconstructing the brain image from 1/8 sampled data under random sampling pattern. \[k^{t-1}=k^{t}+\frac{\xi^{\prime}}{2}\nabla_{k}\log p_{k}(k^{t})+\sqrt{\xi^{\prime}} \,F[z^{\prime}\,]\] (A-3) where \(k=Fx\), and \(\{z^{\prime}\}\) do not need to follow isotropic Gaussian distributions. _Proof._ Multiplying \(F\) at both sides of the original diffusion process, we have \[k^{t-1}=k^{t}+\frac{\xi^{\prime}}{2}\,F\nabla_{x}\log p_{x}(x^{\prime})+\sqrt{ \xi^{\prime}}\,F[z^{\prime}\,]\] (A-4) By Lemma A. 1, we have \(\nabla_{k}\log p_{k}(k)=\nabla_{k}\log p_{x}(x)\). We also have \(\nabla_{x}=F^{\top}\nabla_{k}\) by the chain rule. Putting all the things together, we have \[k^{t-1}=k^{t}+\frac{\xi^{\prime}}{2}\,FF^{\top}\nabla_{k}\log p_{k}(k^{t})+ \sqrt{\xi^{\prime}}\,F[z^{\prime}\,]\] (A-5) As \(F\) is orthogonal, there exists \(FF^{\top}=I_{d}\). We thus finish the proof.
2306.01310
EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost
Data augmentation plays a critical role in improving model performance across various domains, but it becomes challenging with graph data due to their complex and irregular structure. To address this issue, we propose EPIC (Edit Path Interpolation via learnable Cost), a novel interpolation-based method for augmenting graph datasets. To interpolate between two graphs lying in an irregular domain, EPIC leverages the concept of graph edit distance, constructing an edit path that represents the transformation process between two graphs via edit operations. Moreover, our method introduces a context-sensitive cost model that accounts for the importance of specific edit operations formulated through a learning framework. This allows for a more nuanced transformation process, where the edit distance is not merely count-based but reflects meaningful graph attributes. With randomly sampled graphs from the edit path, we enrich the training set to enhance the generalization capability of classification models. Experimental evaluations across several benchmark datasets demonstrate that our approach outperforms existing augmentation techniques in many tasks.
Jaeseung Heo, Seungbeom Lee, Sungsoo Ahn, Dongwoo Kim
2023-06-02T07:19:07Z
http://arxiv.org/abs/2306.01310v2
# EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost ###### Abstract Graph-based models have become increasingly important in various domains, but the limited size and diversity of existing graph datasets often limit their performance. To address this issue, we propose EPIC (**E**dit **P**ath **I**nterpolation via learnable **C**ost), a novel interpolation-based method for augmenting graph datasets. Our approach leverages graph edit distance to generate new graphs that are similar to the original ones but exhibit some variation in their structures. To achieve this, we learn the graph edit distance through a comparison of labeled graphs and utilize this knowledge to create graph edit paths between pairs of original graphs. With randomly sampled graphs from a graph edit path, we enrich the training set to enhance the generalization capability of classification models. We demonstrate the effectiveness of our approach on several benchmark datasets and show that it outperforms existing augmentation methods in graph classification tasks. ## 1 Introduction Graph data has become increasingly important in various domains, such as social networks, bioinformatics, and recommendation systems [1; 2; 3]. However, the limited size and diversity of existing graph datasets often limit the performance of graph-based models. One way to overcome this limitation is to augment the existing dataset by generating new graphs with similar properties. [4; 5; 6; 7; 8; 9; 10; 11; 12] The augmentation can improve the generalization ability of graph-based models and make them more robust to different real-world scenarios [13; 14]. The mixup method has proven successful as a data augmentation technique, particularly for Euclidean data such as images [7; 13; 15]. It involves generating new samples by performing linear interpolation between the features and labels of two randomly selected samples from the original dataset. This method has demonstrated its effectiveness in enhancing the generalization and robustness of deep learning models for image classification tasks [16]. However, applying the mixup method directly to graph data poses challenges. Unlike the Euclidean domain, graph structures lack a direct spatial arrangement of their elements, making it difficult to define meaningful linear interpolations for graphs. This highlights the necessity of exploring alternative augmentation methods specifically designed for graph data. In this paper, we propose a novel approach to graph dataset augmentation based on the concept of graph edit distance [17]. The graph edit distance is a widely-used metric that quantifies the similarity between two graphs by counting the minimum number of edit operations required to transform one graph into another, such as node and edge insertions, deletions, or substitutions. By computing the graph edit distance between two graphs, we construct a graph edit path representing the transformation process from one graph to the other through edit operations. The intermediate graph states along this path can be seen as an interpolation between two graphs and are utilized to augment the training set. A simple count-based approach for edit distance computation, however, may not adequately capture the importance of individual operations. For instance, operations that modify functional groups in molecular graphs may hold greater significance than others [18; 19]. Hence, we introduce a problem-specific cost model to account for the context-dependent cost. 
We formulate an edit distance learning framework that leverages the insight that the distances between graphs within the same class should be relatively shorter than those between different classes. This enables us to learn a cost model that better reflects the underlying graph data characteristics. By combining the graph edit distance metric with our learned cost model, we present a novel graph dataset augmentation method named Edit Path Interpolation via learnable Cost (EPIC). Experimental evaluations on various graph classification tasks demonstrate the effectiveness and improved performance of our approach compared to existing methods. Additional experiments under the presence of noisy labels show the robustness of our approach against the others. ## 2 Related Work ### Graph edit distance The graph edit distance is a metric that quantifies the dissimilarity between two graphs [17]. It measures the minimum number of edit operations required to transform one graph into another. These edit operations include node and edge insertion, deletion, and substitution. The computation of the graph edit distance is known to be NP-complete [20]. A number of works addressing the problem of the high computational cost have been proposed. Cross et al. [21] cast the optimization process into a Bayesian framework with a genetic search algorithm. Myers et al. [22] adopt the Levenshtein distance to model the probability distribution for structural errors in graph matching. In [23], a binary linear programming formulation of the graph edit distance for unweighted, undirected graphs is proposed. Fischer et al. [24] propose an approximated graph edit distance based on Hausdorff matching. Recent approaches adopt deep learning methods to efficiently prune the search tree in the computation [25]. Recently, deep neural network-based graph edit distance learning methods have been proposed. In contrast to the traditional approximation methods, where the cost of an edit operation is fixed, these methods learn the cost of individual edit operations. One standard approach to learning costs using neural networks involves obtaining embeddings from node and edge attributes, which are then used to compute the edit distance in a supervised manner [26; 27]. However, these approaches require ground truth or expected correspondences between two graphs, which are inapplicable in various situations. Riba et al. [28] learn the cost using node features extracted by graph neural networks and optimize the cost by approximating the graph edit distance with the Hausdorff distance. The Hausdorff distance is an effective method for approximating the graph edit distance in quadratic time, but it is not suitable for edit path construction since it allows one-to-many substitutions between nodes. ### Graph augmentation for graph neural networks With the success of recent Graph Neural Networks (GNNs) for graph-level classification tasks, augmentation methods for graph-structured data aim to improve the generalization ability of GNNs by creating diverse training samples. The most commonly used augmentation methods are based on random modifications of the original data [5; 4; 6]. For example, Dropnode [5] and DropEdge [4] uniformly drop nodes and edges, respectively. Subgraph sampling [6; 29] and motif swapping [30] perturb subgraphs of the original graph via subgraph matching. These methods assign the same labels before and after the perturbation. 
To overcome the simplicity of basic approaches, mixup-based methods are proposed for graph augmentation [8; 10; 9; 12]. The mixup methods are an augmentation technique that generates new training data by taking convex combinations of pairs of inputs and their corresponding labels. Mixup methods can be naturally used for regular data, such as images. However, applying mixup for graph-structure data is challenging due to the irregular structure. Manifold mixup [8] interpolates embeddings from the last layer for two graphs and uses it as a graph representation of the augmented graph. Submix [10] proposes a node split and merge algorithm to perturb original graphs and then mix random subgraphs of multiple graphs. ifMixup [9] adds dummy nodes to match the size of two graphs. Then it interpolates between node feature matrices and adjacency matrices to generate mixed graphs. S-mixup [12] computes an alignment matrix between two graphs with graph matching network [31] first and then mixes up node features. G-mixup [11] mixes graphons [32] of different classes and augments training set by generating the graphs from the mixed graphon. ## 3 EPIC: Edit Path Interpolation via learnable Cost In this section, we first describe a graph data augmentation method with a graph edit path. We then propose a method to learn a graph edit distance by learning the cost of individual edit operations. ### Augmentation with graph edit path Construction of graph edit pathWe consider a graph \(G=(\mathcal{V},\mathcal{E})\) associated with node and edge attributes. The graph edit distance is a metric that quantifies the dissimilarity between two graphs. It measures the minimum number of edit operations required to transform one graph into another or computes the total cost of edit operations if the cost of individual operations varies. These edit operations involve node and edge insertion, deletion, and attribute substitution. Once the graph edit distance is computed, a graph edit path can be obtained by applying a series of edit operations from a source graph to reach a target graph. It represents the step-by-step transformation from one graph to another while minimizing the edit distance. The graph edit distance is generally invariant to the order of edit operations. However, there are certain dependencies between node and edge operations. The node deletion operation can only be performed after all the connected edges are deleted. The edge insertion operation can only be performed when the two target nodes are presented. We only consider the node operations in order to simplify the graph edit path construction. Specifically, when a new node is inserted into a graph, we perform edge insertion operations for all edges whose adjacent nodes are given after insertion. When a node is deleted, we perform edge deletion operations for all edges connected to the deleted node. When a node is substituted, we perform edge insertion, deletion, and substitution operations accordingly. The detailed algorithm is provided in Algorithm 1. By doing so, we can construct the graph edit path whose length is equal to the number of node edit operations in the computation of the graph edit distance. Figure 1 illustrates an example of a graph edit path between two graphs and possible augmentation. Graph augmentation and label assignmentWe use the graph edit path to construct an augmented graph. We randomly sample two graphs in the training set. 
The graph edit distance between the two graphs is computed, then the graph edit path is constructed with node operations applied in random order. The samples obtained from a graph edit path are used as augmented graphs. Examples of augmented graphs are provided in Figure 1. Figure 1: Illustration of graph edit path and their corresponding labels for augmentation. To assign a label to the augmented graph, we use the cost of edit operations from the augmented graph to the source and the target graphs. Let \((o_{1},...,o_{n})\) be a sequence of edit operations applied to transform the source graph \(G_{S}\) into the target graph \(G_{T}\), and \(c(o)\) be the real-valued cost function of operation \(o\), which is precisely defined in Section 3.2. With the corresponding one-hot classification labels of the source graph \(\mathbf{y}_{S}\) and the target graph \(\mathbf{y}_{T}\), the label of the augmented graph \(\bar{\mathbf{y}}\) obtained by applying the first \(m\) operations \((o_{1},...,o_{m})\) is computed as \[\bar{\mathbf{y}}=\frac{\sum_{i=m+1}^{n}c\left(o_{i}\right)}{\sum_{i=1}^{n}c\left(o_{i}\right)}\mathbf{y}_{S}+\frac{\sum_{i=1}^{m}c\left(o_{i}\right)}{\sum_{i=1}^{n}c\left(o_{i}\right)}\mathbf{y}_{T}, \tag{1}\] where \(c(\cdot)\) measures the cost of an operation. The assigned label is inversely proportional to the operation cost required to reach the source or target from the augmented graph. ### Learning costs of edit operations The standard unit cost model [17; 24; 33] of the graph edit distance is incapable of measuring the importance of each operation, as all operations are assigned the same cost regardless of their significance or impact. However, the importance of edit operations differs depending on the context of the dataset. For example, in a property prediction task, changes in a functional group of a molecular graph can lead to a larger semantic perturbation than changes in other parts [18; 19]. Therefore, an edit operation leading to a large semantic modification should cost more than the others. To measure the importance of each operation in the computation of the graph edit distance, we propose a learning algorithm for the operation cost based on a neural network model. **Triplet loss for learning distance.** A good cost function should be problem dependent. We use the triplet loss with known labels [34] to learn the graph edit distance and the operation costs therein. We assume that a pair of graphs within the same class has a relatively shorter distance than a pair from different classes. Let \(\mathrm{GED}(G,G^{\prime})\) be the distance between graphs \(G\) and \(G^{\prime}\). We propose a triplet loss-based objective function to encode our intuition: \[\mathcal{L}(G,G^{+},G^{-})=\max\left(\mathrm{GED}(G,G^{+})-\mathrm{GED}(G,G^{-})+\gamma,0\right), \tag{2}\] where \(G^{+}\) is a positive example whose label is the same as that of \(G\), \(G^{-}\) is a negative example whose label is different from that of \(G\), and \(\gamma\) is a margin hyperparameter. **Graph edit distance as constrained optimization.** The computational complexity of graph edit distance computation is NP-complete [20]. Learning the graph edit distance often requires relaxation to make the algorithm tractable [24, 33]. To simplify the learning process, we assume that the cost of a node operation _subsumes_ the cost of dependent edit operations, similar to the construction of the edit path. 
With the simplified assumption, we only need to consider the following three cases when computing the edit distance between source graph \(G_{S}=(\mathcal{V}_{S},\mathcal{E}_{S})\) and target graph \(G_{T}=(\mathcal{V}_{T},\mathcal{E}_{T})\) with operation cost function \(c:\mathcal{V}_{S}\cup\varnothing\times\mathcal{V}_{T}\cup\varnothing\to \mathbb{R}\), where \(\varnothing\) represents an empty node: * Node \(u\) in \(\mathcal{V}_{S}\) is substituted by node \(v\) in \(\mathcal{V}_{T}\) with substitution cost \(c(u,v)\). * Node \(u\) in \(\mathcal{V}_{S}\) is deleted with deletion cost \(c(u,\varnothing)\). * Node \(v\) in \(\mathcal{V}_{T}\) is inserted with insertion cost \(c(\varnothing,v)\). We construct a cost matrix that encapsulates all required costs to compute the edit distance between two graphs. The cost matrix is constructed as \[C=\left[\begin{array}{ccccc}c\left(u_{1},v_{1}\right)&\cdots&c\left(u_{1},v _{m}\right)&c\left(u_{1},\varnothing\right)&\cdots&\infty\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ c\left(u_{n},v_{1}\right)&\cdots&c\left(u_{n},v_{m}\right)&\infty&\cdots&c \left(u_{n},\varnothing\right)\\ c\left(\varnothing,v_{1}\right)&\cdots&\infty&\infty&\cdots&\infty\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \infty&\cdots&c\left(\varnothing,v_{m}\right)&\infty&\cdots&\infty\end{array} \right], \tag{3}\] where \(n\) and \(m\) are the number of nodes in \(G_{S}\) and \(G_{T}\), respectively. With the cost matrix, the problem of computing the graph edit distance can be reduced to solving the assignment problem. Since the node in the source graph can only be substituted or deleted, only one operation can be performed in each of the first \(n\) rows. Similarly, since the node in the target graph can only be substituted or inserted, only one operation in each of the first \(m\) columns can be performed in edit distance computation. A binary assignment matrix \(X\), whose size is the same as the cost matrix, is introduced to indicate which operation is performed in edit distance computation. With the assignment matrix, the computation of graph edit distance can be formulated as a constrained optimization problem \[\mathrm{GED}(G_{S},G_{T})=\min_{X}\sum_{i=1}^{n+m}\sum_{j=1}^{n+m} C_{ij}X_{ij}\] \[\text{s.t.}\ \sum_{j=1}^{n+m}X_{ij}=1,\ 1\leq i\leq n,\quad\sum_{i=1}^{n+m} X_{ij}=1,\ 1\leq j\leq m,\quad\ X_{ij}\in\{0,1\}. \tag{4}\] Design cost function with neural networksWe introduce the graph neural network framework to parameterize the cost function \(c\). Specifically, we use the embedding distances between two nodes as the substitution cost. Let \(h_{u}\) and \(h_{v}\) be the output embedding of node \(u\in G_{S}\) and \(v\in G_{T}\) from a graph neural network. We use the distance between two embeddings as a substitution cost, i.e., \(c_{\theta}(u,v)=||h_{u}-h_{v}||_{2}\), where \(\theta\) is the parameter of the graph neural network. The embeddings of the graph neural network encode the neighborhood structure of the target node. If the embeddings of two nodes are similar, then the two nodes are likely to play a similar role in the graph. Hence, the substitution cost measures the structural similarity between two nodes. For the insertion and deletion operations, we additionally introduce a multi-layer perceptron, i.e., \(c_{\theta,\phi}(u,\epsilon)=\mathrm{MLP}_{\phi}(h_{u})\) and \(c_{\theta,\phi}(\epsilon,v)=\mathrm{MLP}_{\phi}(h_{v})\), where \(\phi\) denotes the parameters of \(\mathrm{MLP}\). 
The \(\mathrm{MLP}\) computes the cost of insertion and deletion using the embedding of a node. We use the same network for both insertion and deletion. The graph neural networks encode the local structure of nodes into node embeddings. Consequently, by considering the costs of node operations, we effectively encapsulate the information regarding the neighborhood edges as well. Model optimization with a differentiable assignment matrixTo learn the graph edit distance, we need to minimize the loss in Equation 2 w.r.t \(\theta\) and \(\phi\). However, this optimization involves a non-differentiable optimization problem w.r.t the assignment matrix \(X\) in Equation 4. The Hungarian algorithm [35] can be used to find the optimal assignment \(X\) for each iteration of the stochastic gradient descent step. However, the Hungarian algorithm is non-differentiable and has a computational complexity of \(O(n^{3})\), making it difficult to employ during gradient-based optimization. We instead employ the Sinkhorn-Knopp algorithm [36] to address this issue to obtain a differentiable assignment matrix. The Sinkhorn-Knopp algorithm transforms a non-negative matrix into a doubly stochastic matrix to approximate the Hungarian algorithm. Specifically, Sinkhorn-Knopp iteratively updates a soft assignment matrix \(\tilde{X}\) via two intermediate variables \(\mathbf{u}\) and \(\mathbf{v}\). Once \(\mathbf{u}\) and \(\mathbf{v}\) are initialized as a vector of ones, i.e., \(\mathbf{u}^{(0)}=[1,\cdots,1]^{\top}\), at \(k\)-th iteration of Sinkhorn-Knopp approximates the assignment matrix via \[\mathbf{u}^{(k)}=\frac{X^{(k-1)}\mathbf{1}}{K\mathbf{v}^{(k-1)}},\quad\mathbf{v}^{(k)}=\frac{X^ {(k-1)\top}\mathbf{1}}{K^{\top}\mathbf{u}^{(k-1)}},\quad\tilde{X}^{(k)}=\operatorname {diag}(\mathbf{u}^{(k)})K\operatorname{diag}(\mathbf{v}^{(k)}), \tag{5}\] where each entry of matrix \(K\) is parameterized by the cost matrix and a regularizer parameter \(\delta\) as \(K_{ij}=\exp(-C_{ij}/\delta)\), and \(\mathbf{1}\) is a vector of ones. \(\delta\) is a regularization term controlling the sharpness of the assignment matrix. Note that the back-propagation algorithm needs to optimize the entire iterative process of the Sinkhorn-Knopp approximation. In experiments, we set the number of maximum iterations to 10 to reduce the computational cost. After learning the cost function, to augment a pair of randomly selected graphs, we first compute the cost matrix and then create a graph edit path using the optimal assignment from the Hungarian algorithm. Figure 2 shows the overall illustration of our proposed approach. ## 4 Experiments In this section, we first show the effect of EPIC in graph classification tasks over 11 datasets. We further evaluate the robustness of GNNs with our method against corrupted labels. We provide additional analysis of our model selection process. ### Effect of augmentation for graph classification DatasetsWe used eight classification datasets: NCI1, BZR, COX2, Mutagenicity, IMDB-BINARY, IMDB-MULTI, PROTEINS, ENZYMES from TUDataset [2] and three classification dataset: BBBP, BACE, HIV from MoleculeNet [37]. The datasets cover a wide range of tasks, including social networks, bioinformatics, and molecules. The detailed statistics of each dataset are shown in Appendix A. Figure 2: Overall illustration of graph edit distance learning. The assignment matrix can be obtained by either Hungarian or Sinkhorn-Knopp. For learning, we use Sinkhorn-Knopp, and for augmentation, we use Hungarian. 
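To make the pieces above concrete, the following minimal sketch (ours, not from the paper; the random embeddings and helper names are illustrative assumptions) builds the cost matrix of Eq. (3) from node embeddings, relaxes the assignment with a standard Sinkhorn-Knopp iteration with uniform marginals in the spirit of Eq. (5), recovers a hard assignment with the Hungarian algorithm for Eq. (4), and mixes labels as in Eq. (1):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e6  # finite stand-in for the "infinite" entries of the cost matrix in Eq. (3)

def cost_matrix(h_s, h_t, ins_del_cost):
    """Build the (n+m) x (n+m) cost matrix of Eq. (3).
    h_s, h_t: node embeddings of the source/target graphs (n x d, m x d).
    ins_del_cost: callable giving the insertion/deletion cost of one embedding."""
    n, m = len(h_s), len(h_t)
    C = np.full((n + m, n + m), BIG)
    C[:n, :m] = np.linalg.norm(h_s[:, None, :] - h_t[None, :, :], axis=-1)  # substitution
    C[np.arange(n), m + np.arange(n)] = [ins_del_cost(h) for h in h_s]      # deletion
    C[n + np.arange(m), np.arange(m)] = [ins_del_cost(h) for h in h_t]      # insertion
    C[n:, m:] = 0.0  # empty-to-empty pairs cost nothing (standard convention), keeps feasibility
    return C

def sinkhorn(C, delta=1.0, iters=10):
    """Differentiable soft assignment (NumPy version for illustration only)."""
    K = np.exp(-C / delta)
    u = np.ones(C.shape[0]); v = np.ones(C.shape[1])
    for _ in range(iters):
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

rng = np.random.default_rng(0)
h_s, h_t = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))   # toy node embeddings
C = cost_matrix(h_s, h_t, ins_del_cost=lambda h: np.linalg.norm(h))

X_soft = sinkhorn(C)                          # soft assignment used during cost learning
rows, cols = linear_sum_assignment(C)         # hard assignment used for augmentation
ged = C[rows, cols].sum()                     # graph edit distance under these costs

# Label mixing of Eq. (1): apply the first m_ops of the selected edit operations.
mask = ~((rows >= len(h_s)) & (cols >= len(h_t)))   # drop empty-to-empty pairs
op_costs = C[rows, cols][mask]
m_ops = 2
y_s, y_t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
y_aug = op_costs[m_ops:].sum() / op_costs.sum() * y_s + op_costs[:m_ops].sum() / op_costs.sum() * y_t
print(ged, y_aug)
```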
**Baselines.** For baseline augmentation models, we employ two classical graph augmentation methods: DropEdge [4] and DropNode [5], and three Mixup-based augmentations: SubMix [10], Manifold-Mixup (M-Mixup) [8], and G-Mixup [11]. We also report the performance of a vanilla model without augmentation. **Implementation details.** We first learn the cost of edit operations for each dataset. We use the Adam optimizer [38] with a learning rate decay of 0.1 every 25 epochs. We train the cost function for 100 epochs on TUDataset. While we use the Sinkhorn-Knopp approximation with \(k=10\) in Equation 5 for training, the Hungarian algorithm is used for inference to obtain an optimal assignment given costs. We perform graph classification tasks with GIN [39] and GCN [40] as backbone models for augmentation. When we train each backbone model, we use the same hyperparameters and architecture for all baselines and our method for a fair comparison. We follow the Open Graph Benchmark setting [1] for the MoleculeNet datasets. When training classification models, we compute the edit path between randomly paired graphs in each batch and use randomly chosen graphs from the edit path as augmentation. We use the validation set to choose the portion of augmented data points. The additional details of hyperparameters and model configurations can be found in Appendix B. **Classification results.** Table 1 shows the overall results of the TUDataset for graph classification tasks. Our augmentation method outperforms the other baselines on seven and six datasets with GIN and GCN backbones, respectively, and achieves the second-best performance on one dataset with GIN and GCN backbones. Table 2 shows the classification AUC-ROC with MoleculeNet datasets. The results show that EPIC consistently improves classification accuracy over the vanilla model, whereas the other augmentations occasionally degrade the classification performance. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & Method & NCI & BZR & COX2 & Mutgen. 
& IMDB-B & IMDB-M & PROTEINS & ENZYMES & Rank \\ \hline \multirow{7}{*}{\begin{tabular}{c} Sinkhorn \\ \end{tabular} } & Vanilla & 81.686(0.01) & 87.07(2.7) & 83.40(2.7) & 81.57(0.7) & 72.90(0.6) & 48.40(1.2) & 67.80(2.9) & 46.33(2.3) & 3.5 \\ & Dropedge [4] & 74.61(0.7) & 85.85(0.0) & 80.43(2.7) & 79.13(1.1) & 71.20(0.2) & 68.97(1.0) & 39.00(0.1) & 5.4 \\ & Dropnode [5] & 73.05(1.0) & 85.61(0.8) & 78.51(1.6) & 77.91(1.0) & 72.90(0.4) & 46.60(2.0) & 67.53(2.3) & 38.00(0.0) & 6.1 \\ & SubMix [10] & 81.31(0.0) & 85.61(0.8) & 83.62(2.6) & 65.88(1.7) & 71.70(1.0) & 48.60(1.0) & 70.66(2.4) & 44.33(2.4) & 4.0 \\ & M-Mix [8] & 81.00(0.0) & 85.37(0.6) & 84.47(1.0) & 81.68(0.9) & 73.20(0.6) & 47.93(1.0) & 46.67(0.2) & 3.5 \\ & G-Mix [11] & 80.92(1.9) & **88.05(1.0)** & 82.98(0.0) & 81.66(0.0) & 72.20(1.0) & 48.13(1.2) & 68.16(0.2) & 43.67(0.0) & 4.1 \\ \cline{2-10} & EPIC & **82.31(1.5)** & 87.32(2.0) & **84.89(2.7)** & **81.86(1.0)** & **73.40(0.5)** & **48.93(0.4)** & **70.85(0.9)** & **47.83(3.5)** & **1.1** \\ \hline \multirow{7}{*}{ \begin{tabular}{c} Sinkhorn \\ \end{tabular} } & Vanilla & 81.07(0.71) & 85.85(2.8) & 84.26(2.2) & 81.70(1.1) & 70.60(1.2) & **48.27(1.5)** & 64.13(1.0) & 44.67(5.7) & 3.5 \\ & Dropedge [4] & 74.24(1.0) & 82.93(4.3) & 83.40(1.2) & 80.37(0.6) & 70.50(1.0) & 45.67(1.5) & 64.75(0.4) & 39.33(2.8) & 5.4 \\ & Dropnode [5] & 73.78(1.2) & 80.73(2.4) & 79.15(1.0) & 78.32(1.6) & 69.40(3.0) & 39.80(3.7) & **68.43(1.8)** & 36.17(0.6) & 6.0 \\ & SubMix [10] & 81.99(0.0) & 86.34(2.0) & 83.62(2.4) & 79.77(0.9) & 68.50(0.1) & 46.47(2.5) & 66.82(2.1) & 43.67(4.0) & 4.5 \\ & M-Mix [8] & 81.41(0.8) & 84.15(2.3) & 83.83(2.1) & 81.96(0.6) & 69.40(1.3) & 46.40(2.7) & 66.28(1.9) & 44.00(0.3) & 4.1 \\ & G-Mix [11] & 82.04(1.2) & 87.32(2.4) & 84.89(1.4) & 80.32(0.7) & 69.90(1.3) & 45.87(0.6) & 68.07(1.2) & 46.17(4.2) & 3.0 \\ \cline{2-10} & EPIC & **82.31(1.1)** & **87.80(1.2)** & **85.74(0.6)** & **82.19(0.5)** & **70.80(1.7)** & 47.07(1.0) & 67.44(1.2) & **47.17(0.6)** & **1.4** \\ \hline \hline \end{tabular} \end{table} Table 1: Classification accuracy of TUDataset [2]. We report the average and standard deviation (in brackets) over five seeds. We mark the best and the second-best performances in **bold** and underline, respectively. The rank column shows the average rank of a model performance across all datasets. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline & Method & BBBP & BACE & HIV & Rank \\ \hline \multirow{7}{*}{\begin{tabular}{c} Sinkhorn-Knopp \\ \end{tabular} } & Vanilla & 65.90(1.2) & 77.01(2.7) & 75.10(2.7) & 4.0 \\ & Dropedge [4] & 65.32(2.8) & 74.50(0.8) & 75.57(0.6) & 4.7 \\ & Dropnode [5] & 64.32(2.8) & 76.37(0.0) & 75.36(1.3) & 5.3 \\ & SubMix [10] & 65.58(0.8) & 75.26(0.8) & 76.36(1.3) & 3.7 \\ & M-Mix [8] & 64.48(1.5) & 75.30(2.6) & 75.55(2.0) & 4.7 \\ & G-Mix [11] & 64.33(2.2) & **78.54(1.0)** & 75.29(0.6) & 4.3 \\ \cline{2-5} & EPIC & **68.33(3.5)** & 77.17(3.2) & **76.85(1.4)** & **1.3** \\ \hline \multirow{7}{*}{ \begin{tabular}{c} Sinkhorn \\ \end{tabular} } & Vanilla & 66.08(0.4) & 76.35(4.2) & 75.45(0.7) & 4.7 \\ & Dropedge [4] & 65.71(2.5) & 72.79(0.7) & 75.90(1.4) & 5.0 \\ \cline{1-1} & Dropnode [5] & 68.33(0.0) & 71.37(0.0) & 74.41(0.9) & 5.3 \\ \cline{1-1} & SubMix [10] & 67.68(1.2) & 75.19(0.6) & 75.61(2.1) & 4.0 \\ \cline{1-1} & M-Mix [8] & 67.38(2.0) & **79.67(0.8)** & 75.23(1.2) & 3.7 \\ \cline{1-1} & G-Mix [11] & 65.10(2.3) & 77.54(3.3) & 75.99(0.6) & 4.0 \\ \cline{1-1} \cline{2-5} & EPIC & **68.68(2.3)** & 77.67(2.7) & **76.81(1.3)** & **1.3** \\ \hline \hline \end{tabular} \end{table} Table 2: Classification AUC-ROC of MoleculeNet [37]. ### Robustness analysis We conduct a study that shows the robustness of the augmentation method with the presence of noisy labels. A similar study is also conducted in Hendrycks et al. [13], Pang et al. [14]. We randomly corrupt labels in the training set of IMDB-BINARY, IMDB-MULTI, and Mutagenicity datasets and test the model performance with the uncorrupted test set. We run the experiments with three different proportions of noise: \(\{0.2,0.4,0.6\}\) based on GIN. Except for the noise, we use the same setting used in Section 4.1. Table 3 shows the classification accuracy with different proportions of noisy labels. EPIC outperforms the other baseline models, except for one case, showing the robustness of our augmentation under the noisy environment. ### Ablation studies In this subsection, we show the result of ablation studies on each component in EPIC. Further ablation studies are shown in Appendix C. Cost function variationsWe test the effectiveness of the learnable cost function against other variations of the cost function. We use two variations of the cost function: unit cost, which measures the number of edit operations, and feature-distance cost, which measures the distance between two input node features, adopted from Ling et al. [12]. The result in Table 3(a) shows that the learnable cost outperforms the other cost functions across all datasets. The results empirically verify our claim that the good cost function should be problem dependent and can be learned from the dataset. We also classify the graphs in the test dataset based on their distance to the closest graph in each class. If the cost is learned properly, the distance from a test graph to the graph in the same class should be close to each other. Table 3(b) shows the result of distance-based classification. In most cases, EPIC outperforms the other fixed-cost methods. For some datasets, such as IMDB-BINARY, the distance-based classification performs similarly to the classification model with augmentation. \begin{table} \end{table} Table 4: Comparisons between three different cost functions. Three different cost functions: unit, feature-distance, and EPIC, are tested on TUDataset. GIN is used as a backbone. 
\begin{table} \end{table} Table 3: Robustness analysis on IMDB-BINARY, IMDB-MULTI, and Mutagenicity datasets. We randomly corrupt the label of the training set and measure the test performance under random label noise. **Hungarian vs Sinkhorn-Knopp.** We investigate the impact of a differentiable assignment matrix with the Sinkhorn-Knopp algorithm against a fixed assignment matrix from the Hungarian algorithm in the process of training. In Figure 3, we present the training and validation curves on the BBBP and HIV datasets. In general, the Sinkhorn-Knopp algorithm shows a more stable learning process than the Hungarian algorithm. The training loss with Hungarian has not stabilized after 40 epochs on BBBP and 20 epochs on HIV. Moreover, the validation loss of Sinkhorn-Knopp is consistently lower than that of Hungarian after a certain number of iterations. We conjecture that the non-smooth loss surface of Hungarian makes gradient descent difficult, and eventually the model fails to reach a good local minimum, whereas the smooth loss surface of Sinkhorn-Knopp results in better performance despite being an approximation of Hungarian. Figure 3: Training and validation loss curves on BBBP and HIV datasets with Hungarian and Sinkhorn-Knopp algorithms in the training process. ### Qualitative analysis To examine how the results of the learned distance are reflected in the augmented graph, we conduct an experiment with a lollipop dataset whose structural properties are easily visualized. The \((m,n)\)-_lollipop_ graph consists of a head, a complete graph with \(m\) nodes, and a tail, a chain of \(n\) nodes. The lollipop dataset consists of graphs with varying \(m\) and \(n\). The label of a graph is the size of the head, i.e., \(m\). Figure 4 displays examples of the trained graph edit distances and the corresponding edit paths for positive and negative pairs. With a positive pair, the learned graph edit distance substitutes the head nodes of the source graph for those of the target graph. Eventually, it maintains the complete subgraph of the head along the edit path. With a negative pair, in contrast, the head nodes of the source graph are deleted first, and those of the target graph are inserted later. We also observe that the insertion and deletion costs of head nodes are more expensive than those of tail nodes, making the distance between the negative pair large. Figure 4: Examples of learned edit distance from a lollipop dataset. The dashed lines in (a) and (c) represent the nodes with substitution operations. The head nodes are substituted to their counterpart in the positive pair case, whereas no substitution is performed between the heads of the negative pair. (b) and (d) show the sampled edit path from the source to target graphs. The lollipop head remains the same during the transition with a positive pair. Additionally, we present the assigned substitution cost for each node in Appendix D, which consistently shows the effectiveness of the cost function for EPIC. ## 5 Conclusion In this paper, we have presented a novel approach for graph dataset augmentation based on the graph edit distance. Our method overcomes the limitations of linear interpolation techniques in the non-Euclidean domain and provides a tailored augmentation solution for graph data. Through extensive experiments on benchmark datasets, we have demonstrated the effectiveness of our approach in improving the performance and robustness of graph-based models. 
In this work, we only consider the node operation cost for computational simplicity, but incorporating edge operation cost into the framework would be a solid and important future direction.
2304.00677
DNN-based Denial of Quality of Service Attack on Software-defined Hybrid Edge-Cloud Systems
In order to satisfy diverse quality-of-service (QoS) requirements of complex real-time video applications, civilian and tactical use cases are employing software-defined hybrid edge-cloud systems. One of the primary QoS requirements of such applications is ultra-low end-to-end latency for video applications that necessitates rapid frame transfer between end-devices and edge servers using software-defined networking (SDN). Failing to guarantee such strict requirements leads to quality degradation of video applications and subsequently mission failure. In this paper, we show how a collaborative group of attackers can exploit SDN's control communications to launch Denial of Quality of Service (DQoS) attack that artificially increases end-to-end latency of video frames and yet evades detection. In particular, we show how Deep Neural Network (DNN) model training on all or partial network state information can help predict network packet drop rates with reasonable accuracy. We also show how such predictions can help design an attack model that can inflict just the right amount of added latency to the end-to-end video processing that is enough to cause considerable QoS degradation but not too much to raise suspicion. We use a realistic edge-cloud testbed on GENI platform for training data collection and demonstration of high model accuracy and attack success rate.
Minh Nguyen, Jacob Gately, Swati Kar, Soumyabrata Dey, Saptarshi Debroy
2023-04-03T01:38:51Z
http://arxiv.org/abs/2304.00677v1
# DNN-based Denial of Quality of Service Attack on Software-defined Hybrid Edge-Cloud Systems ###### Abstract In order to satisfy diverse quality-of-service (QoS) requirements of complex real-time video applications, civilian and tactical use cases are employing software-defined hybrid edge-cloud systems. One of the primary QoS requirements of such applications is ultra-low end-to-end latency for video applications that necessitates rapid frame transfer between end-devices and edge servers using software-defined networking (SDN). Failing to guarantee such strict requirements leads to quality degradation of video applications and subsequently mission failure. In this paper, we show how a collaborative group of attackers can exploit SDN's control communications to launch Denial of Quality of Service (DQoS) attack that artificially increases end-to-end latency of video frames and yet evades detection. In particular, we show how Deep Neural Network (DNN) model training on all or partial network state information can help predict network packet drop rates with reasonable accuracy. We also show how such predictions can help design an attack model that can inflict just the right amount of added latency to the end-to-end video processing that is enough to cause considerable QoS degradation but not too much to raise suspicion. We use a realistic edge-cloud testbed on GENI platform for training data collection and demonstration of high model accuracy and attack success rate. Denial of service, quality of service, edge-cloud systems, deep neural networks, software-defined networking. ## I Introduction In order to satisfy diverse quality-of-service (QoS) requirements of complex real-time applications (e.g., video processing, 3D reconstruction, AR/VR), civilian and tactical use cases are starting to employ software-defined hybrid edge-cloud systems (a.k.a. Software-defined Wide Area Network) [1, 2]. The edge sites are primarily responsible for processing real-time jobs (at the edge servers) in order to satisfy the end-to-end latency requirements of such applications. Connectivity to the cloud is essential for: i) processing significantly intensive computation jobs offloaded by edge sites and ii) running the centralized Software-defined (SDN) [3] controller that manages edge-cloud resources through OpenFlow [4] based control communication with the edge sites. Unlike traditional distributed networking, SDN uses a centralized approach where a programmable control plane (hosted by a SDN controller) dictates routing through fast, simple, and commodity routers/switches implementing generalized data-plane forwarding in hardware. In this approach, OpenFlow API is used by the switches to request forwarding rules that are computed by the central controller in order to route video packets from camera-enabled end-devices to edge servers for processing [4]. In hybrid edge cloud systems, SDN controllers are often hosted at remote cloud data-centers (instead of one of the edge sites) in order to give the controller global visibility of the entire environment [1]. However, this adds Internet scale delays (\(>\)100 ms) to the end-to-end latency of video processing for control communication between the switch and the controller. If persistent, Such delays can severely impact the quality of service (QoS) of real time video applications resulting in failure of involved missions [5]. 
In this paper, we propose a stealthy attack that can cause such persistent Denial of QoS (DQoS) for edge-cloud supported real-time video applications. Unlike traditional Denial of Service (DoS) attacks that aim to cripple normal operations of a system by flooding the target at high intensity, our proposed stealthy DQoS aims to increase the end-to-end video packet delivery delay just enough to violate the application QoS requirement, yet stay undetected. Our proposed DQoS attack uses deep neural networks (DNN) to artificially cause frequent control communication exchange between the routers (at the edge) and the cloud-native SDN controller, thereby increasing the end-to-end delivery latency of video packets from end-devices to the edge servers. We show that a small group of collaborative attackers infiltrating the system and monitoring all or some of the key system/network information (with/without noise) can effectively (i.e., with very low error rate) train a DNN model to predict key network metrics (e.g., average packet drops at strategic routers) for different attack intensities. We show how such prediction can help the attackers generate an ideal attack intensity that does not raise any system alarms (i.e., remains stealthy) and yet can significantly increase the video packet latency. It is to be noted that the focus of this paper is not to investigate how the attackers can monitor such network parameters, but rather how such parameters can be exploited (if compromised) to significantly impact the application QoS. We evaluate the performance of the proposed attack model on a softwarized edge-cloud testbed implemented on the GENI framework [6]. The results demonstrate that the proposed attack model can increase the end-to-end latency (from end-devices to edge servers) by \(\sim\)3x without increasing the average packet drop rate beyond the acceptable range. ## II System Model and Problem Evidence Analysis As shown in Fig. 1, software-defined hybrid edge-cloud systems that support real-time video processing applications involve single or multiple edge site(s) consisting of end-devices that capture videos and send the video frames to an edge server (in the same or a different edge site) for processing through OpenFlow switches. The SDN-enabled OpenFlow switches route packets containing video frames using their flow tables that are populated by the cloud-hosted SDN controller. Typically, for any incoming packet, the OpenFlow switch consults its flow table to check if any existing flow rule applies for that packet (based on packet contents, e.g., source IP, destination IP, etc.) in order to forward it. However, unlike traditional networking, if no such flow rule exists in the table (an event called a _Table miss_), the switch asks the SDN controller for new rules (in the form of Packet-In messages), in response to which the controller computes and pushes new flow rules (in the form of Packet-Out and Flow-Mod messages) to the requesting switch. At the switch, these rules are then added to the flow table for forwarding all packets that match those flow rules [7]. The size of flow tables in OpenFlow switches can vary within a wide range. However, for a particular network, the size typically depends on the switch's hardware capacity and the system administrator's implementation needs. Depending on the size of the flow table, as new flows are added, at some point the flow table becomes fully occupied, beyond which no new flow can be added. 
To work around this, the OpenFlow protocol uses two data structures, viz., Hard Timeout and Idle Timeout, that define the periodicity of erasing older flow rules from the table in order to make room for new flows. Again, the values of such timeouts depend on the system implementation. System implementation also dictates the parameters and metrics that are used by the SDN controller to monitor suspicious behavior and anomalies. There is a range of such metrics and parameters used by different systems based on the system realities and objectives, e.g., packet drop rate, bandwidth utilization, and switch/router buffer overflow status, to name a few. In this work, we assume that the controller monitors the packet drop rate at strategic switches as an indicator of network anomalies, and sustained drop rates beyond the statistical long-term average are considered an indicator of suspicious behavior. In traditional data center implementations of SDN, any such table miss only causes a negligible amount of delay for round-trip communication between the switch and controller. However, for software-defined edge-cloud implementations, any table miss adds Internet-scale delay to the end-to-end latency of video frames and, if recurrent, can cause Denial of QoS. We use the CloudLab [8] platform to perform evidential experiments to demonstrate the feasibility of table-miss-initiated end-to-end latency increases for video frames. As shown in Fig. 1, we use two different topologies where the end-devices and edge servers are in the same and different edge sites. Fig. 1(a) shows an edge-cloud system where the video frame source and destination within the same edge site (located in the Clemson aggregate within the CloudLab platform) are connected through 3 OpenFlow switches that are in turn interconnected through high-bandwidth dedicated Layer 2 connections. Fig. 1(b) shows a multi-site scenario where the edge sites (located in the Clemson, UWisconsin, and Utah aggregates) are interconnected via Layer 3 Internet. In both scenarios, the edge sites are connected via the Internet to the data center hosting the SDN controller (located in the UMass aggregate). Table I compares the average latency for table misses at different switches along the route from the end-device to the edge server for the single and multi-site edge-cloud scenarios. The results show that, for both cases, each additional table miss along the route adds up to \(\sim\)70 ms of roundtrip (from switch to controller and back) delay to the end-to-end latency from end-device to edge server. It is important to keep in mind that, unlike this controller setup, in a real production edge-cloud network the roundtrip delay for each table miss will be much higher (\(>\)100 ms) due to the much higher traffic load at the switch and controller. The overall results clearly demonstrate the attacker objective, i.e., _making sure that frequent table misses occur at as many switches as possible along the route without increasing the mean packet drop rate at strategic switches._ ## III Attack Model Design and Evaluation ### _Attack model approach_ In order to achieve the above objective, we take a data-driven approach where we assume that a group of collaborative attackers have infiltrated the edge sites by connecting, through address spoofing, to the end switches/base stations where other end-devices would typically connect. Typically in edge-cloud systems, such end-connections would be wireless, which opens the door for many snooping and infiltration vulnerabilities. 
We assume that such vulnerabilities aid the attackers to gain partial or complete view of the overall system and its parameters. Based on such partial or complete view, the attackers can monitor system behavior by injecting unsuspecting traffic into the network as part of reconnaissance. Later we will show the attack success is correlated with gaining such partial or complete view and their ability to observe different parameters. For the reconnaissance, we assume that the attackers have a wide degree of freedom (e.g., spoofed IP address, MAC address, port number) in terms of parameters for false packet injection. Based on the observed network parameters for a certain false packet injection intensity, the attackers (enabled with enough processing capability) train a DNN that can predict mean packet drop rates at strategic switches for a certain packet injection intensity. This prediction is then used to design a greedy algorithm to generate ideal false packet injection rate that can significantly increase end-to-end latency of regular video traffic by triggering frequent table misses at multiple switches along the route [9] without increasing mean packet drop rates in important switches. ### _Testbed design for training data collection_ In order to collect the dataset required for DNN training, we generate synthetic data using a realistic edge-cloud testbed implemented on GENI platform [6] (as shown in Fig. 2) as real data for such next generation systems is not readily available. NSF-supported GENI platform is an educational cloud environment that allows registered users to create distributed testbeds as collection of virtual machines (VMs) in federated sites across the continental US and abroad for running futuristic experiments. As can be seen in Fig. 2, our softwarized hybrid edge-cloud testbed implements three edge sites: one at NYU InstaGENI that hosts all the edge servers for video data processing and two at Wisconsin InstaGENI and Ohio Metro Data Center InstaGENI that primarily host end-devices that generate video traffic for processing. The host edge sites together have 6 hosts (as end-devices) that send serialized video frames to the 4 servers for processing. Fig. 1: Single vs. multi-site software-defined hybrid edge-cloud models \begin{table} \begin{tabular}{|c||c|c|} \hline **Scenarios** & \begin{tabular}{c} **Same** \\ **site** \\ \end{tabular} & \begin{tabular}{c} **Different** \\ **sites** \\ \end{tabular} \\ \hline \hline Avg. latency for no table miss & 10 ms & 66 ms \\ \hline Avg. latency for table miss at 51 only & 78 ms & 144 ms \\ \hline Avg. latency for table miss at 51 and S2 & 144 ms & 213 ms \\ \hline Avg. latency for table miss at all 3 switches & 212 ms & 280 ms \\ \hline \end{tabular} \end{table} TABLE I: Avg. latency for table misses at different switches along the route Each host's video transmission follows an ON-OFF model where during the ON phase the host sends frames and stays idle during the OFF phase. Length of both ON and OFF phases are chosen randomly within the ranges of \(10-15\) and \(0-5\) minutes respectively. The ranges correspond to typical operating times of off-the-shelf camera enabled drones/robots between recharges. The host to server mapping follows a fair allocation method where servers are selected for a particular session in order to keep balanced traffic loads among servers. 
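As an illustration of the workload model just described (a hypothetical sketch, not the authors' traffic generator), the snippet below draws ON/OFF phase lengths from the stated ranges and assigns each new session to the currently least-loaded server to keep the load balanced:

```python
import random

random.seed(1)
servers = {f"server{i}": 0 for i in range(1, 5)}   # active sessions per server (4 servers)

def next_session():
    """One host ON/OFF cycle: ON for 10-15 min, then OFF for 0-5 min (in seconds)."""
    on = random.uniform(10, 15) * 60
    off = random.uniform(0, 5) * 60
    return on, off

def assign_server():
    """Fair allocation: pick the server with the fewest active sessions."""
    target = min(servers, key=servers.get)
    servers[target] += 1
    return target

for host in [f"host{i}" for i in range(1, 7)]:      # 6 hosts send serialized video frames
    on, off = next_session()
    print(host, "->", assign_server(), f"ON {on:.0f}s, OFF {off:.0f}s")
```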
The host edge sites also consist of collaborative attackers, as well as dummy VMs that help generate artificial traffic that mimic the traffic load in realistic edge-cloud environments. Specifically, dummy VMs generate artificial traffic to keep the mean packet drop rate around 0% with \(+3\)% maximum allowance on _switch34_, which is the most important switch in the system for being the entry point to the server site and a point of potential traffic bottlenecks. The host edge sites are connected to the server edge site through a collection of Open Virtual Switches that act as programmable switches and create the softwarized network fabric. The cloud-hosted SDN controller (not shown in this figure) connects to the switches via Internet and implements Floodlight for control plane. For the switches' configuration, the flow table sizes are set to 1000 flow entries, Idle Timeout is set to 5 seconds, and Hard Timeout to \(\infty\), commensurate with a system of this proportion. The bandwidth for the connections between edge switches (e.g., between _switch34_ and _switch13_ or _switch34_ and _switch23_) is set to 100 Mbps while for all other connections it is set to 50 Mbps. ### _Data collection and preparation_ We collect historical network state versus packet drop rates data for different attack (i.e., false packet generation) intensities using a Python based tool that is able to forge packets [10]. The network state is defined as a vector of comprehensive network parameters, viz., packet drop rates and flow table sizes at different switches, end-to-end latency of video frames, and bandwidth utilization at different links etc. In our experiments towards data collection, each attacker \(a_{i}\) can generate packets at rate \(\alpha_{i}^{t}\times R\) at any time instance \(t\), where \(\alpha_{i}^{t}\in[0,1]\) represents the fraction of maximum attack rate \(R\). In this experiment, \(R\) for each attacker is set to 100 packets per second with each packet payload is designed to cause table-miss (as explained in Section III-A). Given the testbed architecture and experimental setup, the network state parameters change with attack state, i.e., attackers' packet generation rates. At every \(t=10\) seconds, we randomly select an attack vector \(\{\alpha_{i}^{t}\}_{i=1}^{K}\) (\(K\) is the number of attackers) for all attackers and observe the corresponding network state changes. A measurement snapshot consists of the network state at current time \(t\) (current state), attack rates \(\{\alpha_{i}^{t}\}_{i=1}^{K}\), and the changed network state (changed state) from \(t+5\) seconds to \(t+10\) seconds. Here the \(+5\) seconds delay between generating a new attack state and capturing the corresponding network state ensures strong causality. The network state information collection between \(t+5\) to \(t+10\) takes place at a granularity of \(100\) ms. The total experiment is run for over 3 hours generating around \(1200\) snapshots. ### _DNN model design and training_ Using the collected data, we train a DNN model that can predict the packet drop rates at strategic locations, e.g., _switch34_ from Fig. 2. Later, we will show how this aids the attackers to adjust the attack parameters that is ideal to trigger enough table misses at different switches to cause maximum latency increase but limit the drop rates within normal network bounds. To this end, we design a DNN that is simple in architecture but highly efficient in predicting the packet drop rates. The network architecture is defined in Table II. 
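For concreteness, a minimal sketch of a model with the layer layout of Table II is shown below, assuming a TensorFlow/Keras implementation; the input dimension (here a placeholder of 64) and the use of a validation split for early stopping are our assumptions, not details given in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, callbacks

def build_drop_rate_model(input_dim):
    """DNN of Table II: three Dense(20, relu) + Dropout(0.15) blocks, Dense(10) output."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),        # current network state + attack rates
        layers.Dense(20, activation="relu"),
        layers.Dropout(0.15),
        layers.Dense(20, activation="relu"),
        layers.Dropout(0.15),
        layers.Dense(20, activation="relu"),
        layers.Dropout(0.15),
        layers.Dense(10),                          # predicted drop rates (linear output)
    ])
    model.compile(optimizer="adam", loss="mse")    # adam + mean squared error, as in the text
    return model

model = build_drop_rate_model(input_dim=64)        # 64 is a hypothetical state-vector size
model.summary()

# Training setup mirroring the description: batches of 5 snapshots, up to 50 epochs,
# early stopping with patience 5 (validation_split is our assumption).
# model.fit(X_train, y_train, batch_size=5, epochs=50, validation_split=0.1,
#           callbacks=[callbacks.EarlyStopping(patience=5, restore_best_weights=True)])
```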
The DNN model is optimized on the training dataset to be able to accurately predict the drop rates. Each iteration of DNN training is performed on a batch of \(5\) snapshots, where each snapshot consists of the current network state, \(\{\alpha_{i}^{t}\}_{i=1}^{K}\), and the corresponding changed network state. The input to the DNN is a pair of the current state and \(\{\alpha_{i}^{t}\}_{i=1}^{K}\) that is fed forward through the DNN layers to predict the changed drop rates. A batch of such inputs generates the predicted drop rate sets that are compared with the corresponding actual drop rates of the changed states. A mean squared error loss function is used to compute the average prediction error per batch, which is then back-propagated from the output to the input layers, resulting in appropriate changes to the network parameters toward loss minimization. The _adam_ optimizer is used for network training. The network is trained for a maximum of 50 epochs, but an early stopping mechanism with patience 5 is used to prevent overfitting and to help minimize loss. Depending on the extent of system infiltration, the attackers may have access to the complete or partial network state. At the same time, the attackers' visibility of the network state may or may not be accurate. With this in mind, we trained 10 variations of the DNN model. We divide the network state parameters into \(5\) categories: (1) time difference between successive frames for each host-to-server connection, (2) packet drop rates at each switch, (3) flow table size at each switch, (4) bandwidth utilization between each pair of connected switches, and (5) number of waiting frames at each switch. All the models use \(\{\alpha_{i}^{t}\}_{i=1}^{K}\) and a subset of network state categories. Model 1 uses all categories. Model 2 uses categories 1, 3, 4, and 5. Model 3 uses categories 1 and 3. Model 4 uses categories 1 and 4. Model 5 uses categories 1 and 5. Finally, Models 6 through 10 use inputs similar to Models 1 through 5, but the network state values are polluted with Gaussian noise with a standard deviation of 0.3. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Layer & Type & Nodes & Act. Function & Rate \\ \hline \hline 1 & Dense & 20 & relu & - \\ \hline 2 & Dropout & - & - & 0.15 \\ \hline 3 & Dense & 20 & relu & - \\ \hline 4 & Dropout & - & - & 0.15 \\ \hline 5 & Dense & 20 & relu & - \\ \hline 6 & Dropout & - & - & 0.15 \\ \hline 7 & Dense & 10 & - & - \\ \hline \end{tabular} \end{table} TABLE II: Summary of DNN model architecture. Fig. 2: An experimental edge-cloud testbed implementation on GENI platform ### _Performance evaluation of DNN training_ Fig. 3 shows the accuracy of the different prediction models. In order to produce these results, we run a new set of experiments for \(1000+\) seconds where the \(\alpha_{i}^{t}\) are changed every 10 seconds. The attack rates are randomized; however, their range is chosen in order to keep the packet drop rates between 3% and 10%. During an attack interval, the \(+5\) seconds of delay between the new attack state and the corresponding network state is maintained, similar to the data collection method described earlier. For the performance evaluation, we predict the current packet drop rates of all switches using the current attack state and the previous network state and compare them with the actual current packet drop rates. We repeat the same experiment for all the models. Fig. 3(a) and 3(b) illustrate the actual and predicted packet drop rates of _switch34_ with Model 1 and Model 7, respectively. Meanwhile, Fig. 
3(c) compares the actual and predicted drop rates for _switch21_ with Model 4. Finally, Fig. 3(d) compares the drop rates of _switch34_ and _switch21_ with Model 10. Overall, all the results demonstrate the high accuracy of the proposed DNN-based packet drop predictions. ### _DQoS attack algorithm design and performance_ We use a simple greedy algorithmic approach (as shown in Algo. 1) for the attackers to compute ideal attack rates in order to maintain the average packet drop rate of _switch34_ (the most important switch) within its statistical band of (\(m\pm k\))%, as it is assumed that such a packet drop rate does not trigger any system alarms. Due to the high accuracy of the DNN-based prediction of packet drop rates, it is used by the attackers to adjust the ideal attack rates \(\alpha_{i}\). As the network conditions are dynamic, the algorithm is run periodically in order to adjust the attack rates based on current network conditions. Table III shows the end-to-end latency degradation of video packets under the ideal attack intensity computed by Algo. 1. We observe that for all possible maximum allowed drop rate scenarios for _switch34_, the mean latency under attack is \(\sim\)3x worse than the latency without attack. Upon investigation, we find that the attack is able to successfully cause frequent table misses at all the switches along the route. Such table misses are caused by the false packets rapidly: (1) populating the flow tables with useless flow rules and (2) replacing old flow rules that are being vacated by the Hard Timeout and Idle Timeout stipulations. It is to be noted that the \(\sim\)3x latency increase is due to table misses at only 4 switches along the path. In real edge-cloud implementations, the number of such switches will be higher, resulting in greater degradation.
```
Input: Attack rates of K attackers {alpha_i}_{i=1}^{K}; predicted packet drop rate d_j^p of switch34 (switch j) from the trained DNN model; statistical mean packet drop rate band (m +/- k)% of switch j; attack rate increment q
Output: Adjusted attack rates that keep d_j^p within (m +/- k)%
1  while d_j^p < (m + k)% do
2      forall i <= K do
3          alpha_i = alpha_i + q;
4      Compute d_j^p;
5  while d_j^p > (m + k)% do
6      forall i <= K do
7          alpha_i = alpha_i - q;
8      Compute d_j^p;
9  return {alpha_i}_{i=1}^{K};
```
**Algorithm 1** DQoS attack rate adjustment algorithm ## IV Conclusions In this paper, we showed a proof-of-concept of a DNN-enabled stealthy collaborative attack that can significantly degrade the QoS of edge-cloud hosted real-time video processing applications. As part of future work, we plan to explore the means by which such attackers can monitor the network parameters required for successful DNN training. At the same time, we will explore how systems can exploit AI-based techniques to thwart such stealthy attacks.
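Returning to Algorithm 1 above, a compact sketch of the greedy rate adjustment is given below; it is illustrative only: `predict_drop_rate` stands in for the trained DNN model, and the increment/back-off loop bodies are our reconstruction of the description in the text.

```python
def adjust_attack_rates(alphas, predict_drop_rate, m, k, q, max_alpha=1.0):
    """Greedily push all attackers' rates up until the predicted drop rate of switch34
    reaches the upper edge of the 'normal' band (m + k)%, then back off if it overshoots."""
    d = predict_drop_rate(alphas)
    # Phase 1: increase every attacker's rate by q while the predicted drop rate stays low.
    while d < m + k and all(a + q <= max_alpha for a in alphas):
        alphas = [a + q for a in alphas]
        d = predict_drop_rate(alphas)
    # Phase 2: if the prediction overshoots the band, step every rate back down by q.
    while d > m + k and all(a - q >= 0.0 for a in alphas):
        alphas = [a - q for a in alphas]
        d = predict_drop_rate(alphas)
    return alphas

# Toy usage with a made-up monotone predictor standing in for the DNN:
fake_dnn = lambda alphas: 12.0 * sum(alphas) / len(alphas)   # % drop rate vs. mean attack rate
rates = adjust_attack_rates([0.1, 0.1, 0.1], fake_dnn, m=0.0, k=3.0, q=0.05)
print(rates, fake_dnn(rates))
```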
2306.09994
Quantum Mycielski Graphs
The classical Mycielski transformation allows one to construct from a given graph a new one, with an arbitrarily large chromatic number but preserving the size of the largest clique contained in it. This particular construction and its specific generalizations were widely discussed in the graph theory literature. Here we propose an analog of these transformations for quantum graphs and study how they affect the (quantum) chromatic number as well as clique numbers associated with them.
Arkadiusz Bochniak, Paweł Kasprzak
2023-06-16T17:47:20Z
http://arxiv.org/abs/2306.09994v2
# Quantum Mycielski graphs ###### Abstract. The classical Mycielski transformation allows one to construct from a given graph a new one, with an arbitrarily large chromatic number but preserving the size of the largest clique contained in it. This particular construction and its specific generalizations were widely discussed in the graph theory literature. Here we propose an analog of these transformations for quantum graphs and study how they affect the (quantum) chromatic number as well as clique numbers associated with them. Key words and phrases:Quantum graphs, Chromatic numbers, Mycielski transformation 2020 Mathematics Subject Classification: Primary: 46L05, Secondary: 81P45 ## 1. Introduction Properties and characteristics of classical graph colouring were widely studied from different perspectives, ranging from their fundamental aspects, through applications in several branches of science and engineering, to multiple daily life problems [10]. The typical parameters one uses to characterize a given graph \(G\) are related to the number of colors that can be used to label either its vertices or its edges according to a certain set of rules. The most common characteristic is called the chromatic number \(\chi(G)\) and is defined as the minimal number of colors that can be used to label the vertices of \(G\) in such a way that no edge has the same color assigned to both of its endpoints. The problem of determining this quantity for a generic graph is known to be NP-hard [9]. Yet another NP-hard parameter containing information about the structure of a given graph \(G\) is its clique number \(\omega(G)\), which is the size of the largest complete subgraph \(K_{n}\) of \(G\), i.e. the largest subgraph in which every vertex shares an edge with every other vertex. In particular, if \(\omega(G)=2\), then the graph is triangle-free (i.e. there is no closed loop formed out of three vertices). One could ask whether, starting from a triangle-free connected graph, we can arbitrarily enlarge the chromatic number by adding a certain number of vertices and edges in such a way that no triangle is generated as a subgraph and the resulting graph remains connected. The affirmative answer to this question was given by Mycielski [15]. The resulting transformation that from a given graph \(G\) produces a new graph \(\mu(G)\) such that \(G\subseteq\mu(G)\), \(\omega(G)=\omega(\mu(G))\) and \(\chi(\mu(G))=\chi(G)+1\) is referred to as the Mycielski transformation. This construction was later generalized in [17, 18], and for it, the notion of generalized Mycielski transformation, Stiebitz transformation, or (higher) cones over a graph is used. One area of intriguing applications of graph theory is information theory [6, 16], where with a given noisy classical channel \(\Phi_{\mathrm{cl}}:A\to\mathrm{Prob}(B)\) between two persons, Alice and Bob, one can associate the so-called confusability graph, defined as the complement of the distinguishability graph, whose vertices are the elements of the set \(A\) and in which there exists an edge between two vertices \(a_{1},a_{2}\in A\) if and only if for all \(b\in B\) we have \(\Phi_{\mathrm{cl}}(b|a_{1})\Phi_{\mathrm{cl}}(b|a_{2})=0\), where \(\Phi_{\mathrm{cl}}(b|a)\) stands for the conditional probability that Bob will receive \(b\in B\) provided that Alice sent \(a\in A\). 
The fact that there is a connection between classical channels and graphs led to an intriguing possibility of associating with a quantum channel \(\Phi_{\mathrm{q}}\) an object that will mimic the behavior of classical graphs - the quantum graph. This can be done either by directly using the associated Kraus operators defining the completely positive trace-preserving map \(\Phi_{\mathrm{q}}\), or by using them to construct certain operator systems (spaces) and study their properties. The concept of quantum graphs appeared earlier in [7]; however, the relation to quantum information, in particular the problem of zero-error correction, significantly accelerated the interest in quantum graphs. Several formulations, which under some conditions turned out to be equivalent, were proposed. For a neat survey, we refer the reader to [4]. The two most widely used are: 1. the approach based on quantum relations and their formulations in terms of operator systems (spaces) [19, 20], 2. the formulation obtained by mimicking in the quantum world the properties of adjacency matrices [14, 1]. Let us again consider the aforementioned noisy classical channel \(\Phi_{\mathrm{cl}}\), and now promote Alice and Bob to be players in the following game [16]. Introducing an external referee, say, Charlie, one can consider the scenario in which Charlie chooses a value \(c\) from its own set \(C\) and, with a conditional probability \(P(x,u|c)\), sends a message \(x\in X\) to Alice, and a value \(u\in B\) to Bob. If Alice is equipped with a map \(f:X\to A\), called a strategy, she can use it to produce \(a=f(x)\) and send this value to Bob using the channel \(\Phi_{\mathrm{cl}}\), so that he will receive a value \(b\in B\) with probability \(\Phi_{\mathrm{cl}}(b|a)\). The goal of the players is to decode the initial value \(c\in C\). With the set \(X\) one can associate the so-called characteristic graph defined in terms of \(P\), and it turns out [16] that the existence of this type of scenario with a winning strategy is equivalent to the existence of a homomorphism between such a characteristic graph and the distinguishability graph associated with the channel \(\Phi_{\mathrm{cl}}\). This observation suggests that certain characteristics of graphs could potentially be defined in terms of the existence of a winning strategy for certain types of games. In particular, this is true for the chromatic number. One can consider a two-player game in which Charlie chooses two vertices, say \(v\) and \(w\), of a given graph \(G\) and sends one of them to Alice and the second one to Bob. They have to respond with numbers \(\alpha,\beta\) from a fixed set \(\{1,\ldots,c\}\) according to the following rules: * \(v=w\Rightarrow\alpha=\beta\), * \(vw\) is an edge \(\Rightarrow\alpha\neq\beta\). No communication between Alice and Bob is allowed during the game, but they can agree on a strategy before the game starts. It turns out that such a winning strategy exists if and only if \(c\geq\chi(G)\). This motivated the concept of the quantum chromatic number introduced in [3], where the players are no longer assumed to be classical but are allowed to share some entangled quantum state \(\Psi\) and, instead of providing answers, they perform quantum measurements. 
More precisely, the players' strategy is mathematically modeled by positive operator-valued measures (POVMs) \((E_{v\alpha})_{\alpha=1,\ldots,c}\) and \((F_{v\alpha})_{\alpha=1,\ldots,c}\), respectively, and it is a winning strategy if * \(\forall v\in V(G)\:\forall\alpha\neq\beta\left\langle\Psi|E_{v\alpha}\otimes F_{v\beta}|\Psi\right\rangle=0\), * \(\forall vw\in E(G)\:\:\forall\alpha\left\langle\Psi|E_{v\alpha}\otimes F_{w\alpha}|\Psi\right\rangle=0\). The quantum chromatic number is defined, by analogy to its classical counterpart, as the minimal number \(c\) for which there exists a winning strategy for this non-local game. Both classical and quantum chromatic numbers, together with the notion of (quantum) coloring, can also be defined in the framework of quantum graphs [2, 8]. It then makes sense to ask about analogs of the Mycielski theorem in the quantum setting. In this paper, we propose such a generalization of the Mycielski transformation for quantum graphs and study how it affects parameters like (quantum) chromatic numbers as well as (several versions of) clique numbers. In Section 2 we describe in detail the Mycielski transformation for classical graphs and discuss its generalization into cones over a graph. In Section 3 we briefly recall the notion of a quantum graph and establish the notation widely used in the rest of the paper. We introduce the (generalized) Mycielski transformation for quantum graphs in Section 4 and show that indeed the resulting object is a well-defined quantum graph. Next, we study in Section 5 how this transformation affects the (quantum) chromatic number. The impact on the (different versions of) clique numbers is discussed in Section 6. Finally, in Section 7 we briefly comment on other graph parameters and collect a series of open questions as well as potential future research directions. ## 2. Mycielski transformation for classical graphs Let \(G\) be an undirected graph with a given set of vertices \(V(G)\) and the set of edges \(E(G)\). The Mycielski transformation [15] of \(G\), denoted by \(\mu(G)\), is the graph with the set of vertices \(V(\mu(G))=\{\bullet\}\sqcup V(G)\sqcup V(G)\) such that every vertex from the second copy of \(V(G)\) is connected with the distinguished vertex \(\bullet\), and if \(\{v_{i},v_{j}\}\in E(G)\) and \(u_{i}\) denotes the copy of \(v_{i}\) in the second summand, then both \(\{v_{i},u_{j}\}\) and \(\{v_{j},u_{i}\}\) are edges in \(\mu(G)\). In other words, the adjacency matrix \(A_{\mu(G)}\) is of the form \[A_{\mu(G)}=\begin{pmatrix}0&\vec{0}^{T}&\vec{1}^{T}\\ \vec{0}&A_{G}&A_{G}\\ \vec{1}&A_{G}&0\end{pmatrix}. \tag{1}\] The Mycielski transformation \(\mu(G)\) is a special case of the so-called generalized Mycielski transformation \(\mu_{r}(G)\) (or \(r\)-Mycielskian) for \(r=1\), also known as cones over a graph \(G\) [17, 18]. If the vertex set of \(G\) is \(V^{0}=\{v_{1}^{0},v_{2}^{0},\ldots,v_{n}^{0}\}\) and the edge set of \(G\) is denoted by \(E_{0}\), then the vertex set of the \(r\)-Mycielskian is \[V^{0}\sqcup V^{1}\sqcup\ldots\sqcup V^{r}\sqcup\{\bullet\}\] where \(V^{i}=\{v^{i}_{1},v^{i}_{2},\ldots,v^{i}_{n}\}\) is a distinct copy of \(V^{0}\), and the edge set is given by \[E=E_{0}\cup\bigcup_{i=0}^{r-1}\{v^{i}_{j}v^{i+1}_{j^{\prime}}:v^{0}_{j}v^{0}_{j^{\prime}}\in E_{0}\}\cup\{v^{r}_{1}\bullet,v^{r}_{2}\bullet,\ldots,v^{r}_{n}\bullet\}.\] We illustrate this construction in a particular example. Let \(G=K_{2}\) be the complete graph on two vertices, i.e. a single edge. The Mycielski transformation \(\mu_{1}(G)=\mu(G)\) of this graph is the \(5\)-cycle \(C_{5}\), while \(\mu_{2}(G)\) is the \(7\)-cycle \(C_{7}\). We also notice that \(\mu_{1}(\mu_{1}(G))\neq\mu_{2}(G)\); indeed, the former has \(11\) vertices, while the latter has only \(7\). 
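As a concrete illustration of the construction above, the following short Python sketch (the helper names are ours, and the brute-force checks are only meant for very small graphs) builds the adjacency matrix of \(\mu_{r}(G)\) from \(A_{G}\) and verifies, for \(G=K_{2}\), that the chromatic number increases while the clique number stays equal to \(2\).
```
import itertools
import numpy as np

def mycielski(A, r=1):
    """Adjacency matrix of the generalized Mycielskian mu_r(G) (r=1 is the classical case):
    level 0 keeps the original edges, consecutive levels are joined according to E_0,
    and the top level is joined to the apex vertex."""
    n = A.shape[0]
    N = (r + 1) * n + 1                       # levels V^0, ..., V^r plus the apex
    M = np.zeros((N, N))
    M[:n, :n] = A                             # E_0 on level 0
    for i in range(r):                        # v^i_j ~ v^{i+1}_{j'}  iff  v^0_j ~ v^0_{j'}
        M[i * n:(i + 1) * n, (i + 1) * n:(i + 2) * n] = A
        M[(i + 1) * n:(i + 2) * n, i * n:(i + 1) * n] = A
    M[r * n:(r + 1) * n, N - 1] = 1           # top level joined to the apex
    M[N - 1, r * n:(r + 1) * n] = 1
    return M

def chromatic_number(A):
    """Brute-force chromatic number (exponential; tiny graphs only)."""
    n = A.shape[0]
    for c in range(1, n + 1):
        for col in itertools.product(range(c), repeat=n):
            if all(col[i] != col[j] for i in range(n) for j in range(i + 1, n) if A[i, j]):
                return c

def clique_number(A):
    """Brute-force clique number (tiny graphs only)."""
    n, best = A.shape[0], 1
    for k in range(2, n + 1):
        for S in itertools.combinations(range(n), k):
            if all(A[i, j] for i, j in itertools.combinations(S, 2)):
                best = k
    return best

K2 = np.array([[0., 1.], [1., 0.]])
M1 = mycielski(K2, r=1)                       # the 5-cycle C_5
assert chromatic_number(K2) == 2 and chromatic_number(M1) == 3
assert clique_number(K2) == clique_number(M1) == 2
```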
## 3. Quantum graphs Let us recall here the definition of a quantum graph [4, Definition 2.4]. **Definition 3.1**.: A triple \(\mathcal{G}=(\mathbb{G},\psi,A)\) is a quantum graph, if * \(\mathbb{G}\) is a finite quantum space; the corresponding (finite-dimensional) \(\mathrm{C}^{*}\)-algebra will be denoted by \(\mathrm{C}(\mathcal{G})\); * \(\psi:\mathrm{C}(\mathcal{G})\to\mathbb{C}\) is a faithful state; the GNS space will be denoted by \(L^{2}(\mathcal{G})\); the multiplication map when viewed as a linear map \(L^{2}(\mathcal{G})\otimes L^{2}(\mathcal{G})\to L^{2}(\mathcal{G})\) will be denoted by \(m\); since \(\mathbb{G}\) is finite and \(\psi\) is faithful, \(\mathrm{C}(\mathcal{G})\) and \(L^{2}(\mathcal{G})\) can be identified as vector spaces; the map \(\mathbb{C}\ni\lambda\longmapsto\lambda\mathds{1}_{\mathrm{C}(\mathcal{G})}\in L^{2}(\mathcal{G})\) will be denoted by \(\eta\); note incidentally that \(\eta^{*}(x)=\psi(x)\) for all \(x\in L^{2}(\mathcal{G})\); * \(\psi\) is a \(\delta\)-form, i.e. \(mm^{*}=\delta^{2}\mathrm{id}_{L^{2}(\mathcal{G})}\); * \(A\), called the quantum adjacency matrix, is a self-adjoint map \(A:L^{2}(\mathcal{G})\to L^{2}(\mathcal{G})\) s.t. \[A =\delta^{-2}m(A\otimes A)m^{*}\] (2) \[A =(\mathrm{id}\otimes\eta^{*}m)(\mathds{1}\otimes A\otimes \mathds{1})(m^{*}\eta\otimes\mathrm{id})\] (3) If, moreover, \(m(A\otimes\mathds{1})m^{*}=\delta^{2}\mathds{1}\), then the quantum graph \(\mathcal{G}\) is called reflexive. On the other hand, if \(m(A\otimes\mathds{1})m^{*}=0\) then the quantum graph is called irreflexive. We will often deal with several quantum graphs at once and, to avoid confusion, subscripts indicating the corresponding quantum graph will be added to the state, adjacency matrix, multiplication map, etc., and e.g. we shall write \(\mathcal{G}=(\mathbb{G}_{\mathcal{G}},\psi_{\mathcal{G}},A_{\mathcal{G}})\); \(\dim\mathrm{C}(\mathcal{G})\) will be denoted by \(|\mathcal{G}|\). Notice that **Lemma 3.1**.: _For a quantum graph \(\mathcal{G}=(\mathbb{G},\psi,A)\) we have \((\mathrm{id}\otimes\eta^{*})m^{*}=\mathrm{id}=(\eta^{*}\otimes\mathrm{id})m^{*}\)._ Proof.: The lemma is proved by taking adjoints of the identities \[m(\eta\otimes\mathrm{id})=\mathrm{id}=m(\mathrm{id}\otimes\eta),\] where the latter expresses the identity \(a\mathds{1}=a=\mathds{1}a\) for all \(a\in\mathrm{C}(\mathcal{G})\). In what follows we shall use the Sweedler notation for \(m^{*}:L^{2}(\mathcal{G})\to L^{2}(\mathcal{G})\otimes L^{2}(\mathcal{G})\) and write \[m^{*}(a)=a_{(1)}\otimes a_{(2)}.\] The \(\mathrm{C}^{*}\)-algebra \(\mathrm{C}(\mathcal{G})\) will be viewed as a \(\mathrm{C}^{*}\)-algebra of operators acting on \(L^{2}(\mathcal{G})\) and \(\mathrm{C}(\mathcal{G})^{\prime}\) will denote its commutant \[\mathrm{C}(\mathcal{G})^{\prime}=\{T\in B(L^{2}(\mathcal{G})):Ta=aT\text{ for all }a\in\mathrm{C}(\mathcal{G})\}.\] The map \[P:B(L^{2}(\mathcal{G}))\ni X\longmapsto\delta^{-2}m(A\otimes X)m^{*}\in B(L^{2}(\mathcal{G})) \tag{4}\] is a projection satisfying \[P(XT)=P(X)T,\quad P(TX)=TP(X),\quad P(X^{*})=P(X)^{*}\] for all \(X\in B(L^{2}(\mathcal{G}))\) and \(T\in\mathrm{C}(\mathcal{G})^{\prime}\). In particular the image \(S\) of \(P\), \[S=P(B(L^{2}(\mathcal{G}))), \tag{5}\] is a selfadjoint operator subspace in \(B(L^{2}(\mathcal{G}))\) which is also a bimodule over \(\mathrm{C}(\mathcal{G})^{\prime}\). 
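For a classical graph on \(n\) vertices, viewed as a quantum graph over the commutative algebra \(\mathbb{C}^{n}\) equipped with the uniform state (so that \(\delta^{2}=n\)), the conditions of Definition 3.1 can be checked numerically. The following sketch is a toy illustration with our own helper names: it writes \(m\), \(m^{*}\) and \(\eta\) as matrices in the orthonormal basis \(e_{x}=\sqrt{n}\,\delta_{x}\) of \(L^{2}(\mathcal{G})\) and tests the \(\delta\)-form property, self-adjointness, and Eqs. (2) and (3).
```
import numpy as np

def gns_mult(n):
    """Matrices of m and m* for C(X) = C^n with the uniform state psi(f) = (1/n) * sum f(x),
    written in the orthonormal basis e_x = sqrt(n) * delta_x of L^2; here delta^2 = n."""
    m = np.zeros((n, n * n))
    for x in range(n):
        m[x, x * n + x] = np.sqrt(n)           # m(e_x (x) e_x) = sqrt(n) e_x; mixed products vanish
    return m, m.T                              # m* is the transpose in this real basis

def is_quantum_adjacency(A, atol=1e-9):
    """Check Definition 3.1 for a candidate quantum adjacency matrix of a classical graph."""
    n = A.shape[0]
    m, mstar = gns_mult(n)
    d2 = float(n)                              # delta^2 for the uniform state
    eta = np.full((n, 1), 1 / np.sqrt(n))      # eta(1) = unit of C(X) as a vector of L^2
    I = np.eye(n)
    delta_form = np.allclose(m @ mstar, d2 * I, atol=atol)
    self_adjoint = np.allclose(A, A.T, atol=atol)
    eq2 = np.allclose(m @ np.kron(A, A) @ mstar / d2, A, atol=atol)            # Eq. (2)
    eq3 = np.allclose(np.kron(I, eta.T @ m) @ np.kron(I, np.kron(A, I))
                      @ np.kron(mstar @ eta, I), A, atol=atol)                 # Eq. (3)
    return delta_form and self_adjoint and eq2 and eq3

# Any symmetric 0/1 matrix with vanishing diagonal (an irreflexive classical graph) passes, e.g.
# is_quantum_adjacency(mycielski(np.array([[0., 1.], [1., 0.]]), r=1))   # the 5-cycle from above
```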
Since \(\mathrm{C}(\mathcal{G})\) is finite-dimensional, it is also a von Neumann algebra thus the triple \((S,\mathrm{C}(\mathcal{G}),B(L^{2}(\mathcal{G})))\) is a quantum graph in the sense of Weaver [19, 20]. We shall usually write \(S_{\mathcal{G}}\) for the selfadjoint subspace \(S\subset B(L^{2}(\mathcal{G}))\) described above. Note that if \(\mathcal{G}\) is reflexive then \(S_{\mathcal{G}}\) is an operator space. The operator space \(S_{G}\) for a classical graph \(G\) is spanned by the matrix units \(e_{ij}\in\mathtt{Mat}_{n}\) corresponding to edges \((ij)\in E(G)\), see Section 6. ## 4. Mycielski transformation for quantum graphs Prior to the definition of the (generalized) Mycielski transformation we need to fix a notation: given linear maps \(T_{i}:V_{i}\to W\) for \(i=1,2,\ldots,r\) the unique linear map \(T:\bigoplus\limits_{i=1}^{r}V_{i}\to W\) such that \(T|_{V_{i}}=T_{i}\) for \(i=1,2,\ldots,r\), will be denoted by \(\bigoplus\limits_{i=1}^{r}T_{i}\). **Definition 4.1**.: The Mycielski transformation \(\mu(\mathcal{G})\) of a quantum graph \(\mathcal{G}=(\mathbb{G}_{\mathcal{G}},\psi_{\mathcal{G}},A_{\mathcal{G}})\) is a triple \((\mathbb{G}_{\mu(\mathcal{G})},\psi_{\mu(\mathcal{G})},A_{\mu(\mathcal{G})})\) where * \(\mathbb{G}_{\mu(\mathcal{G})}=\bullet\sqcup\mathbb{G}_{\mathcal{G}}\sqcup \mathbb{G}_{\mathcal{G}}\) (i.e. the corresponding \(\mathrm{C}^{*}\)-algebra is \(\mathbb{C}\oplus\mathrm{C}(\mathcal{G})\oplus\mathrm{C}(\mathcal{G})\)), * \(\psi_{\mu(\mathcal{G})}=\frac{1}{1+2\delta^{2}}\big{(}\mathrm{id}\oplus\delta ^{2}\psi_{\mathcal{G}}\oplus\delta^{2}\psi_{\mathcal{G}}\big{)}\), * \(A_{\mu(\mathcal{G})}:L^{2}(\mu(\mathcal{G}))\to L^{2}(\mu(\mathcal{G}))\) is defined by \[A_{\mu(\mathcal{G})}\begin{pmatrix}\lambda\\ x\\ y\end{pmatrix}=\begin{pmatrix}\delta^{2}\psi_{\mathcal{G}}(y)\\ A_{\mathcal{G}}(x+y)\\ \lambda\mathds{1}_{\mathbb{C}(\mathcal{G})}+A_{\mathcal{G}}(x)\end{pmatrix}\] (6) for all \(\lambda\in\mathbb{C}\) and \(x,y\in L^{2}(\mathcal{G})\). Note that we identify \(L^{2}(\mu(\mathcal{G}))\) with \(\mathbb{C}\oplus L^{2}(\mathcal{G})\oplus L^{2}(\mathcal{G})\), with the scalar product on the latter given by \[\left\langle\begin{pmatrix}\lambda\\ x\\ y\end{pmatrix}\right|\begin{pmatrix}\mu\\ u\\ v\end{pmatrix}\right\rangle=\frac{1}{2\delta^{2}+1}\left(\overline{\lambda}\mu +\delta^{2}\langle x\!\left|u\right\rangle+\delta^{2}\langle y\!\left|v \right\rangle\right).\] Instead of showing that this construction indeed defines a quantum graph, we first introduce the quantum analog of the generalized Mycielski graphs. We will show that the generalized Mycielski transformation for quantum graphs produces new quantum graphs. In particular, it will follow that the Mycielski transformation as defined in Definition 4.1 also has this property. **Definition 4.2**.: Let \(\mathcal{G}=(\mathbb{G}_{\mathcal{G}},\psi_{\mathcal{G}},A_{\mathcal{G}})\) be a quantum graph and let \(r\geq 1\). The \(r-1\)-Mycielski transformation \(\mu_{r-1}(\mathcal{G})\) of \(\mathcal{G}\) is a triple \((\mathbb{G}_{\mu_{r-1}(\mathcal{G})},\psi_{\mu_{r-1}(\mathcal{G})},A_{\mu_{r-1 }(\mathcal{G})})\) where * \(\mathbb{G}_{\mu_{r-1}(\mathcal{G})}=\bullet\sqcup\underbrace{\mathbb{G}_{ \mathcal{G}}\sqcup\ldots\sqcup\mathbb{G}_{\mathcal{G}}}_{r\text{ times}}\) (i.e. 
the C\({}^{*}\)-algebra of \(\mu_{r-1}(\mathcal{G})\) is \(\mathbb{C}\oplus\mathrm{C}(\mathcal{G})^{\oplus r}\)), * \(\psi_{\mu_{r-1}(\mathcal{G})}=\frac{1}{1+r\delta^{2}}\left(\operatorname{id}\oplus\delta^{2}\psi_{\mathcal{G}}^{\oplus r}\right)\), * \(A_{\mu_{r-1}(\mathcal{G})}:L^{2}(\mu_{r-1}(\mathcal{G}))\to L^{2}(\mu_{r-1}(\mathcal{G}))\) is defined by \[A_{\mu_{r-1}(\mathcal{G})}\begin{pmatrix}\lambda\\ x_{1}\\ \vdots\\ x_{r}\end{pmatrix}=\begin{pmatrix}\delta^{2}\psi_{\mathcal{G}}(x_{r})\\ A_{\mathcal{G}}(x_{1}+x_{2})\\ A_{\mathcal{G}}(x_{1}+x_{3})\\ \vdots\\ A_{\mathcal{G}}(x_{k}+x_{k+2})\\ \vdots\\ A_{\mathcal{G}}(x_{r-2}+x_{r})\\ \lambda\mathds{1}_{\mathrm{C}(\mathcal{G})}+A_{\mathcal{G}}(x_{r-1})\end{pmatrix} \tag{7}\] for all \(\lambda\in\mathbb{C}\) and \(x_{1},\ldots,x_{r}\in L^{2}(\mathcal{G})\). Here \(L^{2}(\mu_{r-1}(\mathcal{G}))\) is identified with \(\mathbb{C}\oplus\bigoplus\limits_{k=1}^{r}L^{2}(\mathcal{G})\). Let us right away observe that \(\mu_{1}(\mathcal{G})\) as described in Definition 4.2 coincides with the Mycielskian \(\mu(\mathcal{G})\) of \(\mathcal{G}\) as defined in Definition 4.1. Note also that for every \(r\geq 1\), the scalar product on \(L^{2}(\mu_{r-1}(\mathcal{G}))\) is given by \[\left\langle\begin{pmatrix}\lambda\\ x_{1}\\ \vdots\\ x_{r}\end{pmatrix}\right|\left.\begin{pmatrix}\mu\\ y_{1}\\ \vdots\\ y_{r}\end{pmatrix}\right\rangle=\frac{1}{1+r\delta^{2}}\left(\overline{\lambda}\mu+\delta^{2}\sum\limits_{k=1}^{r}\langle x_{k}|y_{k}\rangle\right).\] For \(k=1,\ldots,r\), let \(\iota_{k}\) be the isometric embedding of \(L^{2}(\mathcal{G})\) into \(L^{2}(\mu_{r-1}(\mathcal{G}))\), \[\iota_{k}:L^{2}(\mathcal{G})\ni x\longmapsto\sqrt{\frac{1+r\delta^{2}}{\delta^{2}}}\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ x\\ 0\\ \vdots\\ 0\end{pmatrix}\in L^{2}(\mu_{r-1}(\mathcal{G})),\] where the element \(x\) is embedded into the \(k\)th summand of \(\bigoplus\limits_{k=1}^{r}L^{2}(\mathcal{G})\). Note that \[\iota_{k}^{*}\begin{pmatrix}x_{0}\\ \vdots\\ x_{k}\\ \vdots\\ x_{r}\end{pmatrix}=\sqrt{\frac{\delta^{2}}{1+r\delta^{2}}}x_{k}.\] We also define the isometric embedding \(\iota_{0}\) of \(\mathbb{C}\) into \(L^{2}(\mu_{r-1}(\mathcal{G}))\), \[\iota_{0}:\mathbb{C}\ni\lambda\mapsto\sqrt{1+r\delta^{2}}\begin{pmatrix}\lambda\\ 0\\ \vdots\\ 0\end{pmatrix}\in L^{2}(\mu_{r-1}(\mathcal{G}))\] and we have \[\iota_{0}^{*}\begin{pmatrix}x_{0}\\ \vdots\\ x_{r}\end{pmatrix}=\sqrt{\frac{1}{1+r\delta^{2}}}x_{0}.\] For every \(k,l=1,\ldots,r\) we have \(\iota_{l}^{*}\iota_{k}=\delta_{kl}\mathrm{id}_{L^{2}(\mathcal{G})}\), and \(\iota_{0}^{*}\iota_{0}=\mathrm{id}_{\mathbb{C}}\). Furthermore, \[\sum_{j=0}^{r}\iota_{j}\iota_{j}^{*}=\mathrm{id}_{L^{2}(\mu_{r-1}(\mathcal{G}))}.\] Denoting \(m_{0}=m_{\bullet}\) and \(m_{j}=\delta^{-1}m_{\mathcal{G}}\) for \(j=1,\ldots,r\) we have \[m_{\mu_{r-1}(\mathcal{G})}=\sqrt{1+r\delta^{2}}\sum_{k=0}^{r}\iota_{k}m_{k}(\iota_{k}^{*}\otimes\iota_{k}^{*}), \tag{8}\] and \[m_{\mu_{r-1}(\mathcal{G})}^{*}=\sqrt{1+r\delta^{2}}\sum_{k=0}^{r}(\iota_{k}\otimes\iota_{k})m_{k}^{*}\iota_{k}^{*}.\] The quantum adjacency matrix can be written as \[A_{\mu_{r-1}(\mathcal{G})}=\delta\iota_{r}\eta_{\mathcal{G}}\iota_{0}^{*}+\delta\iota_{0}\eta_{\mathcal{G}}^{*}\iota_{r}^{*}+\iota_{1}A_{\mathcal{G}}\iota_{1}^{*}+\sum_{k=1}^{r-1}\left(\iota_{k}A_{\mathcal{G}}\iota_{k+1}^{*}+\iota_{k+1}A_{\mathcal{G}}\iota_{k}^{*}\right). \tag{9}\] From this formula we deduce that \(A_{\mu_{r-1}(\mathcal{G})}\) is a selfadjoint operator on \(L^{2}(\mu_{r-1}(\mathcal{G}))\). 
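In the commutative case the block formula above reduces to the classical picture: for a classical graph \(G\) with the uniform state, \(A_{\mu_{r-1}(\mathcal{G})}\) acts in the basis of \(\delta\)-functions as the adjacency matrix of the classical generalized Mycielskian, and \(\psi_{\mu_{r-1}(\mathcal{G})}\) is again the uniform state, now on \(1+r|V(G)|\) points. A quick numerical cross-check of this observation, reusing the purely illustrative helpers `mycielski` and `is_quantum_adjacency` from the sketches in Sections 2 and 3:
```
import numpy as np

K2 = np.array([[0., 1.], [1., 0.]])
for s in (1, 2, 3):                  # s corresponds to r - 1 in Definition 4.2
    M = mycielski(K2, r=s)           # classical mu_s(K_2): the cycles C_5, C_7, C_9
    assert is_quantum_adjacency(M)   # the Mycielskian is again a (commutative) quantum graph
```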
Let us prove that \(\psi_{\mu_{r-1}(\mathcal{G})}\) is a \(\sqrt{1+r\delta^{2}}\)-form. **Proposition 4.1**.: _The equation_ \[m_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}(\mathcal{G})}^{*}=(1+r\delta^{2}) \mathrm{id}_{L^{2}(\mathcal{G})}\] _holds, and in particular \(\psi_{\mu_{r-1}(\mathcal{G})}\) is a \(\sqrt{1+r\delta^{2}}\)-form._ Proof.: We compute \[m_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}(\mathcal{G})}^{*} =(1+r\delta^{2})\iota_{0}m_{\bullet}m_{\bullet}^{*}\iota_{0}^{*} +\frac{1+r\delta^{2}}{\delta^{2}}\sum_{k=1}^{r}\iota_{k}m_{\mathcal{G}}m_{ \mathcal{G}}^{*}\iota_{k}^{*}\] \[=(1+r\delta^{2})\iota_{0}\iota_{0}^{*}+\frac{1+r\delta^{2}}{ \delta^{2}}\delta^{2}\sum_{k=1}^{r}\iota_{k}\iota_{k}^{*}=(1+r\delta^{2}) \mathrm{id}_{L^{2}(\mu_{r-1}(\mathcal{G}))}.\] **Proposition 4.2**.: _For any quantum graph \(\mathcal{G}\) and any \(r\geq 1\), the generalized Mycielski transformation \(\mu_{r-1}(\mathcal{G})\) is a quantum graph. Moreover, if \(\mathcal{G}\) was irreflexive (resp. reflexive), then \(\mu_{r-1}(\mathcal{G})\) is so._ Proof.: To prove the claim, we have to verify that \(\mu_{r-1}(\mathcal{G})\) fulfills all the axioms from Definition 3.1. First, by Proposition 4.1 we know that \(\psi_{\mu_{r-1}(\mathcal{G})}\) is a \(\sqrt{1+r\delta^{2}}\)-form. Moreover, \(A_{\mu_{r-1}(\mathcal{G})}\in B(L^{2}(\mu_{r-1}(\mathcal{G})))\) is a selfadjoint operator (c.f. Eq. (9)). Next, we compute \[m_{\mu_{r-1}(\mathcal{G})}\left(A_{\mu_{r-1}(\mathcal{G})}\otimes A _{\mu_{r-1}(\mathcal{G})}\right)m^{*}_{\mu_{r-1}(\mathcal{G})}= (r\delta^{2}+1)\left(\iota_{0}m_{\bullet}(\iota_{0}^{*}\otimes \iota_{0}^{*})+\frac{1}{\delta}\sum\limits_{k=1}^{r}\iota_{k}m_{\mathcal{G}}( \iota_{k}^{*}\otimes\iota_{k}^{*})\right)\] \[\times\left(A_{\mu_{r-1}(\mathcal{G})}\otimes A_{\mu_{r-1}( \mathcal{G})}\right)\left((\iota_{0}\otimes\iota_{0})m^{*}_{\bullet}\iota_{0} ^{*}+\frac{1}{\delta}\sum\limits_{k=1}^{r}(\iota_{k}\otimes\iota_{k})m^{*}_{ \mathcal{G}}\iota_{k}^{*}\right).\] Since \(m_{\mu_{r-1}(\mathcal{G})}\) and \(m^{*}_{\mu_{r-1}(\mathcal{G})}\) contain only tensors of the form \(\iota_{j}^{*}\otimes\iota_{j}^{*}\) and \(\iota_{j}\otimes\iota_{j}\)\((j=0,\ldots,r)\), respectively (i.e. 
diagonal in the subscripts), the only non-zero contribution from \(A_{\mu_{r-1}(\mathcal{G})}\otimes A_{\mu_{r-1}(\mathcal{G})}\) is \[\delta^{2}(\iota_{r}\otimes\iota_{r})(\eta_{\mathcal{G}}\otimes \eta_{\mathcal{G}})(\iota_{0}^{*}\otimes\iota_{0}^{*})+\delta^{2}(\iota_{0} \otimes\iota_{0})(\eta^{*}_{\mathcal{G}}\otimes\eta^{*}_{\mathcal{G}})(\iota _{r}^{*}\otimes\iota_{r}^{*})+(\iota_{1}\otimes\iota_{1})(A_{\mathcal{G}} \otimes A_{\mathcal{G}})(\iota_{1}^{*}\otimes\iota_{1}^{*})\] \[+ \sum\limits_{k=1}^{r-1}\left[(\iota_{k}\otimes\iota_{k})(A_{ \mathcal{G}}\otimes A_{\mathcal{G}})(\iota_{k+1}^{*}\otimes\iota_{k+1}^{*})+( \iota_{k+1}\otimes\iota_{k+1})(A_{\mathcal{G}}\otimes A_{\mathcal{G}})(\iota _{k}^{*}\otimes\iota_{k}^{*})\right].\] Therefore, \[m_{\mu_{r-1}(\mathcal{G})}\left(A_{\mu_{r-1}(\mathcal{G})} \otimes A_{\mu_{r-1}(\mathcal{G})}\right)m^{*}_{\mu_{r-1}(\mathcal{G})}= (1+r\delta^{2})\left\{\delta\iota_{0}m_{\bullet}(\eta^{*}_{ \mathcal{G}}\otimes\eta^{*}_{\mathcal{G}})m^{*}_{\mathcal{G}}\iota_{r}^{*}+ \delta\iota_{r}m_{\mathcal{G}}(\eta_{\mathcal{G}}\otimes\eta_{\mathcal{G}})m^ {*}_{\bullet}\iota_{0}^{*}\right.\] \[+\left.\delta^{-2}\iota_{1}m_{\mathcal{G}}(A_{\mathcal{G}} \otimes A_{\mathcal{G}})m^{*}_{\mathcal{G}}\iota_{1}^{*}\right.\] \[+\left.\delta^{-2}\sum\limits_{k,p=1}^{r}\sum\limits_{l=1}^{r-1} \iota_{k}m_{\mathcal{G}}(\iota_{k}^{*}\otimes\iota_{k}^{*})\left[(\iota_{l} \otimes\iota_{l})(A_{\mathcal{G}}\otimes A_{\mathcal{G}})(\iota_{l+1}^{*} \otimes\iota_{l+1}^{*})\right.\] \[\left.\qquad\qquad\left.+(\iota_{l+1}\otimes\iota_{l+1})(A_{ \mathcal{G}}\otimes A_{\mathcal{G}})(\iota_{l}^{*}\otimes\iota_{l}^{*}) \right](\iota_{p}\otimes\iota_{p})m^{*}_{\mathcal{G}}\iota_{p}\right\}.\] This can be further simplified into \[m_{\mu_{r-1}(\mathcal{G})}\left(A_{\mu_{r-1}(\mathcal{G})}\otimes A _{\mu_{r-1}(\mathcal{G})}\right)m^{*}_{\mu_{r-1}(\mathcal{G})}= (1+r\delta^{2})\Big{\{}\delta\iota_{0}m_{\bullet}(\eta^{*}_{ \mathcal{G}}\otimes\eta^{*}_{\mathcal{G}})m^{*}_{\mathcal{G}}\iota_{r}^{*}+ \delta\iota_{r}m_{\mathcal{G}}(\eta_{\mathcal{G}}\otimes\eta_{\mathcal{G}})m^ {*}_{\bullet}\iota_{0}^{*}\] \[+\delta^{-2}\iota_{1}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes A_{ \mathcal{G}})m^{*}_{\mathcal{G}}\iota_{1}^{*}\] \[+\delta^{-2}\sum\limits_{k=1}^{r-1}\left(\iota_{k}m_{\mathcal{G}}(A _{\mathcal{G}}\otimes A_{\mathcal{G}})m^{*}_{\mathcal{G}}\iota_{k+1}^{*}+ \iota_{k+1}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes A_{\mathcal{G}})m^{*}_{ \mathcal{G}}\iota_{k}^{*}\right)\Big{\}}.\] Since \(m_{\mathcal{G}}(A_{\mathcal{G}}\otimes A_{\mathcal{G}})m^{*}_{\mathcal{G}}= \delta^{2}A_{\mathcal{G}}\) and \(m_{\mathcal{G}}(\eta_{\mathcal{G}}\otimes\eta_{\mathcal{G}})m^{*}_{\bullet}= \eta_{\mathcal{G}}\), we get \[m_{\mu_{r-1}(\mathcal{G})}\left(A_{\mu_{r-1}(\mathcal{G})}\otimes A_{\mu_{r-1}( \mathcal{G})}\right)m^{*}_{\mu_{r-1}(\mathcal{G})}=(r\delta^{2}+1)A_{\mu_{r-1}( \mathcal{G})}.\] In order to show that \[(\mathrm{id}\otimes\eta^{*}_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}(\mathcal{G})} )(\mathrm{id}\otimes A_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})(m^{*}_{\mu _{r-1}(\mathcal{G})}\eta_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})=A_{\mu_{r -1}(\mathcal{G})}\] notice that since \(\eta_{\mu_{r-1}(\mathcal{G})}=\frac{1}{\sqrt{1+r\delta^{2}}}\left(\iota_{0}\eta_{ \bullet}+\delta\sum\limits_{k=1}^{r}\iota_{k}\eta_{\mathcal{G}}\right)\), and Eq. 
(8) holds, it follows that \[\eta^{*}_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}(\mathcal{G})}=\eta^{*}_{\bullet}m_{ \bullet}(\iota_{0}^{*}\otimes\iota_{0}^{*})+\sum\limits_{k=1}^{r}\eta^{*}_{ \mathcal{G}}m_{\mathcal{G}}(\iota_{k}^{*}\otimes\iota_{k}^{*}),\] and \[m^{*}_{\mu_{r-1}(\mathcal{G})}\eta_{\mu_{r-1}(\mathcal{G})}=(\iota_{0}\otimes \iota_{0})m^{*}_{\bullet}\eta_{\bullet}+\sum\limits_{k=1}^{r}(\iota_{k} \otimes\iota_{k})m^{*}_{\mathcal{G}}\eta_{\mathcal{G}}.\] Therefore, \[(\mathrm{id}\otimes\eta^{*}_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}( \mathcal{G})}) (\mathrm{id}\otimes A_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})(m^{*}_{ \mu_{r-1}(\mathcal{G})}\eta_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})\] \[=\left[\mathrm{id}\otimes\eta^{*}_{\bullet}m_{\bullet}(\iota^{*}_ {0}\otimes\iota^{*}_{0})+\mathrm{id}\otimes\sum_{k=1}^{r}\eta^{*}_{\mathcal{G }}m_{\mathcal{G}}(\iota^{*}_{k}\otimes\iota^{*}_{k})\right]\] \[\quad\times\left[\delta\,\mathrm{id}\otimes\iota_{r}\eta_{ \mathcal{G}}\iota^{*}_{0}\otimes\mathrm{id}+\delta\,\mathrm{id}\otimes\iota_ {0}\eta^{*}_{\mathcal{G}}\iota^{*}_{r}\otimes\mathrm{id}+\mathrm{id}\otimes \iota_{1}A_{\mathcal{G}}\iota^{*}_{1}\otimes\mathrm{id}\right.\] \[\quad\left.+\sum_{k=1}^{r-1}\left(\mathrm{id}\otimes\iota_{k}A_ {\mathcal{G}}\iota^{*}_{k+1}\otimes\mathrm{id}+\mathrm{id}\otimes\iota_{k+1} A_{\mathcal{G}}\iota^{*}_{k}\otimes\mathrm{id}\right)\right]\] \[\quad\times\left[(\iota_{0}\otimes\iota_{0})m^{*}_{\bullet}\eta_ {\bullet}\otimes\mathrm{id}+\sum_{k=1}^{r}(\iota_{k}\otimes\iota_{k})m^{*}_{ \mathcal{G}}\eta_{\mathcal{G}}\otimes\mathrm{id}\right].\] The only non-zero contributions to the above expression are: \[(\mathrm{id}\otimes\eta^{*}_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}( \mathcal{G})})(\mathrm{id}\otimes A_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id })(m^{*}_{\mu_{r-1}(\mathcal{G})}\eta_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})\] \[=\delta\left[\mathrm{id}\otimes\eta^{*}_{\bullet}m_{\bullet}( \iota^{*}_{0}\otimes\iota^{*}_{0})\right]\left[\mathrm{id}\otimes\iota_{0} \eta^{*}_{\mathcal{G}}\iota^{*}_{r}\otimes\mathrm{id}\right]\left[(\iota_{r} \otimes\iota_{r})m^{*}_{\mathcal{G}}\eta_{\mathcal{G}}\otimes\mathrm{id}\right]\] \[+\delta\left[\mathrm{id}\otimes\eta^{*}_{\mathcal{G}}m_{\mathcal{ G}}(\iota^{*}_{r}\otimes\iota^{*}_{r})\right]\left[\mathrm{id}\otimes\iota_{r} \eta_{\mathcal{G}}\iota^{*}_{0}\otimes\mathrm{id}\right]\left[(\iota_{0} \otimes\iota_{0})m^{*}_{\bullet}\eta_{\bullet}\otimes\mathrm{id}\right]\] \[+\left[\mathrm{id}\otimes\eta^{*}_{\mathcal{G}}m_{\mathcal{G}}( \iota^{*}_{0}\otimes\iota^{*}_{1})\right]\left[\mathrm{id}\otimes\iota_{1}A_ {\mathcal{G}}\iota^{*}_{1}\otimes\mathrm{id}\right]\left[(\iota_{1}\otimes \iota_{1})m^{*}_{\mathcal{G}}\eta_{\mathcal{G}}\otimes\mathrm{id}\right]\] \[+\sum_{l,p=1}^{r}\sum_{k=1}^{r-1}\left[\mathrm{id}\otimes\eta^{*} _{\mathcal{G}}m_{\mathcal{G}}(\iota^{*}_{l}\otimes\iota^{*}_{l})\right]\left[ \mathrm{id}\otimes\iota_{k}A_{\mathcal{G}}\iota^{*}_{k+1}\otimes\mathrm{id}+ \mathrm{id}\otimes\iota_{k+1}A_{\mathcal{G}}\iota^{*}_{k}\otimes\mathrm{id} \right]\left[(\iota_{p}\otimes\iota_{p})m^{*}_{\mathcal{G}}\eta_{\mathcal{G}} \otimes\mathrm{id}\right].\] This can be further reduced into \[(\mathrm{id}\otimes\eta^{*}_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}( \mathcal{G})})(\mathrm{id}\otimes A_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id })(m^{*}_{\mu_{r-1}(\mathcal{G})}\eta_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})\] \[=\delta(\mathrm{id}\otimes\eta^{*}_{\bullet}m_{\bullet})(\iota_{r 
}\otimes\mathrm{id}\otimes\iota^{*}_{0})(\mathrm{id}\otimes\eta^{*}_{\mathcal{G }}\otimes\mathrm{id})(m^{*}_{\mathcal{G}}\otimes\mathrm{id})(\eta_{\mathcal{G }}\otimes\mathrm{id})\] \[+\delta(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}})(\mathrm{id} \otimes m_{\mathcal{G}})(\mathrm{id}\otimes\eta_{\mathcal{G}}\otimes\mathrm{id })(\iota_{0}\otimes\mathrm{id}\otimes\iota^{*}_{r})(m^{*}_{\bullet}\eta_{ \bullet}\otimes\mathrm{id})\] \[+(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}}m_{\mathcal{G}})(\iota_{ 1}\otimes A_{\mathcal{G}}\otimes\iota^{*}_{1})(m^{*}_{\mathcal{G}}\eta_{ \mathcal{G}}\otimes\mathrm{id})\] \[+\sum_{k=1}^{r-1}\left[(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}}m_ {\mathcal{G}})(\iota_{k}\otimes A_{\mathcal{G}}\otimes\iota^{*}_{k+1})(m^{*}_{ \mathcal{G}}\eta_{\mathcal{G}}\otimes\mathrm{id})+(\mathrm{id}\otimes\eta^{*} _{\mathcal{G}}m_{\mathcal{G}})(\iota_{k+1}\otimes A_{\mathcal{G}}\otimes\iota^{* }_{k})(m^{*}_{\mathcal{G}}\eta_{\mathcal{G}}\otimes\mathrm{id})\right].\] Notice that, \[(\mathrm{id}\otimes\eta^{*}_{\bullet}m_{\bullet})(\iota_{r}\otimes \mathrm{id}\otimes\iota^{*}_{0})(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}} \otimes\mathrm{id})(m^{*}_{\mathcal{G}}\otimes\mathrm{id})(\eta_{\mathcal{G}} \otimes\mathrm{id})=\] \[=(\iota_{r}\otimes\mathrm{id}\otimes\iota^{*}_{0})(\mathrm{id} \otimes\eta^{*}_{\mathcal{G}}\otimes\mathrm{id})(m^{*}_{\mathcal{G}}\otimes \mathrm{id})(\eta_{\mathcal{G}}\otimes\mathrm{id})\] \[=\iota_{r}((\mathrm{id}\otimes\eta^{*}_{\mathcal{G}})m^{*}_{ \mathcal{G}}\eta_{\mathcal{G}})\iota^{*}_{0}=\iota_{r}\eta_{\mathcal{G}}\iota^{*}_{0},\] where to justify the first step we observe that the map \(\eta^{*}_{\bullet}m_{\bullet}:\mathbb{C}\otimes\mathbb{C}\to\mathbb{C}\) coincides with the canonical identification of \(\mathbb{C}\otimes\mathbb{C}\) and \(\mathbb{C}\) and in the second step we used Lemma 3.1. Similarly, \[(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}})(\mathrm{id}\otimes m_{\mathcal{G}})( \mathrm{id}\otimes\eta_{\mathcal{G}}\otimes\mathrm{id})(\iota_{0}\otimes \mathrm{id}\otimes\iota^{*}_{r})(m^{*}_{\bullet}\eta_{\bullet}\otimes\mathrm{id })=\iota_{0}\eta^{*}_{\mathcal{G}}\iota^{*}_{r}.\] Moreover, \[(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}}m_{\mathcal{G}})(\iota_{k}\otimes A_{ \mathcal{G}}\otimes\iota^{*}_{l})(m^{*}_{\mathcal{G}}\eta_{\mathcal{G}} \otimes\mathrm{id})=\iota_{k}(\mathrm{id}\otimes\eta^{*}_{\mathcal{G}}m_{ \mathcal{G}})(\mathrm{id}\otimes A_{\mathcal{G}}\otimes\mathrm{id})(m^{*}_{ \mathcal{G}}\eta_{\mathcal{G}}\otimes\mathrm{id})\iota^{*}_{l}=\iota_{k}A_{ \mathcal{G}}\iota^{*}_{l},\] for every \(k\) and \(l\) which follows from Eq. (3) that is satisfied by \(A_{\mathcal{G}}\). 
All this demonstrates that \[(\mathrm{id}\otimes\eta^{*}_{\mu_{r-1}(\mathcal{G})}m_{\mu_{r-1}(\mathcal{G})})( \mathrm{id}\otimes A_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})(m^{*}_{ \mu_{r-1}(\mathcal{G})}\eta_{\mu_{r-1}(\mathcal{G})}\otimes\mathrm{id})=A_{\mu _{r-1}(\mathcal{G})}.\] In order to check the (ir)reflexivity of \(\mu_{r-1}(\mathcal{G})\) we compute \[m_{\mu_{r-1}(\mathcal{G})}\left(A_{\mu_{r-1}(\mathcal{G})}\otimes \operatorname{id}\right)m^{*}_{\mu_{r-1}(\mathcal{G})} =(1+r\delta^{2})\left[\iota_{0}m_{\bullet}(\iota^{*}_{0}\otimes \iota^{*}_{0})+\frac{1}{\delta}\sum\limits_{k=1}^{r}\iota_{k}m_{\mathcal{G}}( \iota^{*}_{k}\otimes\iota^{*}_{k})\right]\] \[\times\left[\delta\iota_{r}\eta_{\sharp}\iota^{*}_{0}\otimes \operatorname{id}+\delta\iota_{0}\eta^{*}_{\sharp}\iota^{*}_{r}\otimes \operatorname{id}+\iota_{1}A_{\mathcal{G}}\iota^{*}_{1}\otimes\operatorname{id}\right.\] \[\qquad+\left.\sum\limits_{k=1}^{r}\left(\iota_{k}A_{\mathcal{G}} \iota^{*}_{k+1}\otimes\operatorname{id}+\iota_{k+1}A_{\mathcal{G}}\iota^{*} _{k}\otimes\operatorname{id}\right)\right]\] \[\times\left[(\iota_{0}\otimes\iota_{0})m^{*}_{\bullet}\iota^{*} _{0}+\frac{1}{\delta}\sum\limits_{k=1}^{r}(\iota_{k}\otimes\iota_{k})m^{*}_{ \mathcal{G}}\iota^{*}_{k}\right]\] \[=\frac{1+r\delta^{2}}{\delta^{2}}\iota_{1}m_{\mathcal{G}}(A_{ \mathcal{G}}\otimes\operatorname{id})m^{*}_{\mathcal{G}}\iota^{*}_{1}=\begin{cases} 0,\quad\mathcal{G}-\text{irreflexive},\\ (1+r\delta^{2})\operatorname{id},\quad\mathcal{G}-\text{reflexive}.\end{cases}\] ## 5. Chromatic numbers under Mycielski transformation **Definition 5.1**.: _[_8_, Definition 2.16]_ _Let \(\mathcal{G}\) be an irreflexive quantum graph and \(S_{\mathcal{G}}\) the associated operator space as described in Eq. (5). We say that \(\mathcal{G}\) possesses a quantum \(c\)-coloring if there exists a finite von Neumann algebra \(\mathcal{N}\) and a partition of unity \(\{P_{a}\}_{a=1}^{c}\in\operatorname{C}(\mathcal{G})\otimes\mathcal{N}\) such that_ \[\forall_{1\leq a\leq c}\forall_{X\in S_{\mathcal{G}}}:\,P_{a}(X\otimes \mathds{1}_{\mathcal{N}})P_{a}=0.\] _The quantum chromatic number is defined as (c.f. [2])_ \[\chi_{q}(\mathcal{G})=\min\{c\in\mathbb{N}|\,\exists\,c\text{- colouring with }\dim(\mathcal{N})<\infty\}, \tag{10}\] _while the classical chromatic number for a quantum graph is_ \[\chi_{\operatorname{loc}}(\mathcal{G})=\min\{c\in\mathbb{N}|\,\exists\,c \text{-colouring with }\dim(\mathcal{N})=1\}. \tag{11}\] For a classical graph \(G\), it is known that \(\chi_{\operatorname{loc}}(\mu(G))=\chi_{\operatorname{loc}}(G)+1\). We now study this type of relation in the quantum framework. **Proposition 5.1**.: _For every irreflexive quantum graph \(\mathcal{G}\), \(r\geq 1\) and \(t\in\{\operatorname{\mathit{loc}},q\}\), \(\chi_{t}(\mu_{r-1}(\mathcal{G}))\leq\chi_{t}(\mathcal{G})+1\)._ Proof.: The proof for \(t=q\) is an amplification of the proof for \(t=\operatorname{loc}\) which we give below. We show that if \(\{P_{k}\}_{k=1}^{c}\subset\operatorname{C}(\mathcal{G})\) is a \(c\)-colouring of \(\mathcal{G}\), then there exists a \(c+1\)-colouring \(\{Q_{k}\}_{k=0}^{c}\subset\operatorname{C}(\mu_{r-1}(\mathcal{G}))\) of \(\mu_{r-1}(\mathcal{G})\). 
Viewing \(\operatorname{C}(\mu_{r-1}(\mathcal{G}))\) as an algebra acting on \(L^{2}(\mu_{r-1}(\mathcal{G}))\) we define self-adjoint projections \(Q_{0}=\iota_{0}\iota^{*}_{0}\) and \(Q_{i}=\sum\limits_{j=1}^{r}\iota_{j}P_{i}\iota^{*}_{j}\) for \(i=1,2,\ldots,c\) which incidentally form a partition of unity of \(\operatorname{C}(\mu_{r-1}(\mathcal{G}))\): \[\sum\limits_{i=0}^{r}Q_{0}=\iota_{0}\iota^{*}_{0}+\sum\limits_{i,j=1}^{r}\iota_{ j}P_{i}\iota^{*}_{j}=\iota_{0}\iota^{*}_{0}+\sum\limits_{j=1}^{r}\iota_{j}\iota^{*}_{j}= \mathds{1}.\] In order to conclude that \(\{Q_{k}\}_{k=0}^{c}\) is \(c+1\)-colouring of \(\mu_{r-1}(\mathcal{G})\), take \(X\in S_{\mu_{r-1}(\mathcal{G})}\), i.e. \[X=m_{\mu_{r-1}(\mathcal{G})}(A_{\mu_{r-1}(\mathcal{G})}\otimes Y)m^{*}_{\mu_{r-1 }(\mathcal{G})}\] for some \(Y\in B(L^{2}(\mu_{r-1}(\mathcal{G})))\). Remembering that \(L^{2}(\mu_{r-1}(\mathcal{G}))\) can be identified with the direct sum of \(\mathbb{C}\) and \(r\) copies of \(L^{2}(\mathcal{G})\) (c.f. Definition 4.2) we shall use matrix notation \((Y_{ij})_{i,j=0,\ldots,r}\), where \(Y_{00}\in\mathbb{C}\), \(Y_{j0},Y^{*}_{0j}\in L^{2}(\mathcal{G})\) for \(j=1,\ldots,r\), and \(Y_{ij}\in B(L^{2}(\mathcal{G}))\) for \(i,j=1,\ldots,r\). Let us first show that \(Q_{0}XQ_{0}=0\): \[Q_{0}m_{\mu_{r-1}(\mathcal{G})}(A_{\mu_{r-1}(\mathcal{G})}\otimes Y)m^{*}_{\mu _{r-1}(\mathcal{G})}Q_{0} =\iota_{0}\iota^{*}_{0}m_{\mu_{r-1}(\mathcal{G})}(A_{\mu_{r-1}( \mathcal{G})}\otimes Y)\sqrt{1+r\delta^{2}}\sum\limits_{k=0}^{r}(\iota_{k} \otimes\iota_{k})m^{*}_{k}\iota^{*}_{k}\iota_{0}\iota^{*}_{0}\] \[=\sqrt{1+r\delta^{2}}\iota_{0}\iota^{*}_{0}m_{\mu_{r-1}(\mathcal{ G})}(A_{\mu_{r-1}(\mathcal{G})}\otimes Y)(\iota_{0}\otimes\iota_{0})m^{*}_{0}\iota^{*}_{0}\] \[=\sqrt{1+r\delta^{2}}\iota_{0}\iota^{*}_{0}m_{\mu_{r-1}(\mathcal{ G})}(\delta\iota_{r}\eta_{\mathcal{G}}\otimes Y\iota_{0})m^{*}_{0}\eta_{0}\] \[=\delta(1+r\delta^{2})\iota_{0}\iota^{*}_{0}\iota_{r}m_{r}(\eta_{ \mathcal{G}}\otimes Y\iota_{0})m^{*}_{0}\eta_{0}=0,\] since \({\iota}_{0}^{*}{\iota}_{r}=0\). 
Similarly we shall show that \(Q_{i}XQ_{i}=0\) for all \(i=1,\ldots,c\): \[Q_{i}m_{{\mu}_{r-1}(\mathcal{G})}(A_{{\mu}_{r-1}(\mathcal{G})} \otimes Y)m_{{\mu}_{r-1}(\mathcal{G})}^{*}Q_{i}\] \[=\sum_{j,k=1}^{r}{\iota}_{k}P_{i}{\iota}_{k}^{*}m_{{\mu}_{r-1}( \mathcal{G})}(A_{{\mu}_{r-1}(\mathcal{G})}\otimes Y)\sqrt{1+r\delta^{2}}\sum_{ p=0}^{r}({\iota}_{p}\otimes{\iota}_{p})m_{p}^{*}{\iota}_{p}^{*}{\iota}_{j}P_{i}{ \iota}_{j}^{*}\] \[=\sqrt{1+r\delta^{2}}\sum_{j=1}^{r}Q_{i}m_{{\mu}_{r-1}(\mathcal{ G})}(A_{{\mu}_{r-1}(\mathcal{G})}\otimes Y)({\iota}_{j}\otimes{\iota}_{j})m_{j}^{*} P_{i}{\iota}_{j}^{*}\] \[=\sqrt{1+r\delta^{2}}Q_{i}m_{{\mu}_{r-1}(\mathcal{G})}\Bigg{\{}( \delta{\iota}_{0}\eta_{\mathcal{G}}^{*}\otimes Y{\iota}_{r})m_{r}^{*}P_{i}{ \iota}_{r}^{*}+\sum_{j=1}^{r}({\iota}_{1}A_{\mathcal{G}}{\iota}_{1}^{*} \otimes Y)({\iota}_{j}\otimes{\iota}_{j})m_{j}^{*}P_{i}{\iota}_{j}^{*}\] \[\qquad\qquad\qquad+\sum_{j=1}^{r}\sum_{p=1}^{r-1}\left[({\iota}_{ p}A_{\mathcal{G}}{\iota}_{p+1}^{*}+{\iota}_{p+1}A_{\mathcal{G}}{\iota}_{p}^{*}) \otimes Y\right]({\iota}_{j}\otimes{\iota}_{j})m_{j}^{*}P_{i}{\iota}_{j}^{*}\Bigg{\}}\] \[=\sum_{l=0}^{r}Q_{i}{\iota}_{l}m_{l}({\iota}_{l}^{*}\otimes{\iota }_{l}^{*})\Bigg{\{}(\delta{\iota}_{0}\eta_{\mathcal{G}}^{*}\otimes Y{\iota}_{r })m_{r}^{*}P_{i}{\iota}_{r}^{*}+({\iota}_{1}A_{\mathcal{G}}{\iota}_{1}^{*} \otimes Y{\iota}_{1})m_{1}^{*}P_{i}{\iota}_{1}^{*}\] \[\qquad\qquad\qquad+\sum_{j=1}^{r-1}\left[({\iota}_{j}m_{j}(A_{ \mathcal{G}}\otimes Y{\iota}_{j+1}))m_{j+1}^{*}P_{i}{\iota}_{j}^{*}+({\iota}_ {j+1}A_{\mathcal{G}}\otimes Y{\iota}_{j})m_{j}^{*}P_{i}{\iota}_{j+1}^{*}\right]\] \[=Q_{i}{\iota}_{0}m_{0}(\delta{\iota}_{0}\eta_{\mathcal{G}}^{*} \otimes Y_{0,r})m_{r}^{*}P_{i}{\iota}_{r}^{*}+Q_{i}{\iota}_{1}m_{1}(A_{ \mathcal{G}}\otimes Y_{11})m_{1}^{*}P_{i}{\iota}_{1}^{*}\] \[\quad+\sum_{l=1}^{r}\sum_{j=1}^{r-1}Q_{i}{\iota}_{l}m_{l}({\iota} _{l}^{*}\otimes{\iota}_{l}^{*})\left[({\iota}_{j}m_{j}(A_{\mathcal{G}}\otimes Y {\iota}_{j+1}))m_{j+1}^{*}P_{i}{\iota}_{j}^{*}+{\iota}_{j}^{*}+({\iota}_{j+1}A _{\mathcal{G}}\otimes Y{\iota}_{j})m_{j}^{*}P_{i}{\iota}_{j+1}^{*}\right]\] \[={\iota}_{1}m_{1}(A_{\mathcal{G}}\otimes Y_{11})m_{1}^{*}P_{i}{ \iota}_{1}^{*}+Q_{i}\sum_{j=1}^{r-1}\left[{\iota}_{j}m_{j}(A_{\mathcal{G}} \otimes Y_{j+1})m_{j+1}^{*}P_{i}{\iota}_{j}^{*}+{\iota}_{j+1}m_{j+1}(A_{ \mathcal{G}}\otimes Y_{j+1}{\iota}_{j})m_{j}^{*}P_{i}{\iota}_{j+1}^{*}\right]\] \[={\iota}_{1}P_{i}m_{1}(A_{\mathcal{G}}\otimes Y_{11})m_{1}^{*}P_{ i}{\iota}_{1}^{*}+\sum_{j=1}^{r-1}\left[{\iota}_{j}P_{i}m_{j}(A_{\mathcal{G}} \otimes Y_{j+1})m_{j+1}^{*}P_{i}{\iota}_{j}^{*}+{\iota}_{j+1}P_{i}m_{j+1}(A_{ \mathcal{G}}\otimes Y_{j+1}{\iota}_{j})m_{j}^{*}P_{i}{\iota}_{j+1}^{*}\right]\] \[={\iota}_{1}P_{i}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes Y_{11})m_{ \mathcal{G}}^{*}P_{i}{\iota}_{1}^{*}+\sum_{j=1}^{r-1}\left[{\iota}_{j}P_{i}m_{ \mathcal{G}}(A_{\mathcal{G}}\otimes Y_{j+1})m_{\mathcal{G}}^{*}P_{i}{\iota}_{j }^{*}+{\iota}_{j+1}P_{i}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes Y_{j+1}{\iota} _{j})m_{\mathcal{G}}^{*}P_{i}{\iota}_{j+1}^{*}\right]=0,\] where in the sixth equality we used \(Q_{i}{\iota}_{0}=0\) for all \(i\geq 1\) and in the last equality we refer to the fact that \(\{P_{k}\}_{k=1}^{c}\subset\mathrm{C}(\mathcal{G})\) is a \(c\)-colouring of \(\mathcal{G}\). It is known that in contrast to the case \(r=2\), the generalized Mycielski construction \({\mu}_{r-1}\) for \(r>2\) may leave classical chromatic number unchanged. 
Let us restrict our attention to the \(r=2\)-case for a quantum graph \(\mathcal{G}\) and suppose that \(\{P_{k}\}_{k=0}^{c^{\prime}}\) is a \(c^{\prime}+1\)-colouring of \(\mu(\mathcal{G})\). Since \(\mathrm{C}(\mu(\mathcal{G}))=\mathbb{C}\oplus\mathrm{C}(\mathcal{G})\oplus\mathrm{C}(\mathcal{G})\) we can, without loss of generality, assume that \(P_{0}=(1,P_{01},P_{02})\) and \(P_{k}=(0,P_{k1},P_{k2})\) for \(k=1,2,\ldots,c^{\prime}\). Our goal is to prove that under a mild commutativity condition \(\chi_{\mathrm{loc}}(\mu(\mathcal{G}))=\chi_{\mathrm{loc}}(\mathcal{G})+1\). First, we observe: **Lemma 5.2**.: _With the notation above, \(P_{02}=0\)._ Proof.: Consider \(Y\in B(L^{2}(\mu(\mathcal{G})))\). Remembering that \(Y=(Y_{ij})_{i,j=0,1,2}\) where \(Y_{00}\in\mathbb{C}\), \(Y_{10},Y_{20}\in L^{2}(\mathcal{G})\), we compute \[P_{0}m_{\mu(\mathcal{G})}(A_{\mu(\mathcal{G})}\otimes Y)m_{\mu(\mathcal{G})}^{*}P_{0}\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}=(1+2\delta^{2})P_{0}m_{\mu(\mathcal{G})}(A_{\mu(\mathcal{G})}\otimes Y)\left(\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}\otimes\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}\right)\] \[=(1+2\delta^{2})P_{0}m_{\mu(\mathcal{G})}\left(\begin{pmatrix}0\\ 0\\ \mathds{1}_{\mathcal{G}}\end{pmatrix}\otimes Y\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}\right)\] \[=(1+2\delta^{2})P_{0}m_{\mu(\mathcal{G})}\left(\begin{pmatrix}0\\ 0\\ \mathds{1}_{\mathcal{G}}\end{pmatrix}\otimes\begin{pmatrix}Y_{00}\\ Y_{10}\\ Y_{20}\end{pmatrix}\right)\] \[=(1+2\delta^{2})P_{0}\begin{pmatrix}0\\ 0\\ Y_{20}\end{pmatrix}=(1+2\delta^{2})\begin{pmatrix}0\\ 0\\ P_{02}Y_{20}\end{pmatrix}\equiv 0\] and since this holds for all \(Y_{20}\in L^{2}(\mathcal{G})\), we conclude that \(P_{02}=0\). **Proposition 5.3**.: _Let \(\mathcal{G}\) be a quantum graph with \(c^{\prime}+1=\chi_{loc}(\mu(\mathcal{G}))\) and suppose that there exists a \(c^{\prime}+1\)-colouring \(\{P_{k}\}_{k=0}^{c^{\prime}}\) of \(\mu(\mathcal{G})\) with the property that \(P_{01}P_{l2}=P_{l2}P_{01}\), for all \(l\in\{1,\ldots,c^{\prime}\}\). Then \(\chi_{loc}(\mathcal{G})=\chi_{loc}(\mu(\mathcal{G}))-1\)._ Proof.: By Proposition 5.1 it is enough to construct a \(c^{\prime}\)-colouring \(\{Q_{l}\}_{l=1}^{c^{\prime}}\) of \(\mathcal{G}\). By Lemma 5.2 we can assume that \(P_{0}=(1,P_{01},0)\) and \(P_{k}=(0,P_{k1},P_{k2})\) for \(k=1,\ldots,c^{\prime}\). Note that \(P_{12}+\ldots+P_{c^{\prime}2}=\mathds{1}\). We define \(Q_{l}=P_{l1}+P_{01}P_{l2}\) and note that \[Q_{1}+\ldots+Q_{c^{\prime}}=\mathds{1}.\] Moreover, under the assumed condition, we have \[Q_{l}^{2}=Q_{l}=Q_{l}^{*}.\] Next, assuming that \(l>0\), we compute \[0=P_{l}A_{\mu(\mathcal{G})}P_{l}\begin{pmatrix}\lambda\\ x\\ y\end{pmatrix} =P_{l}A_{\mu(\mathcal{G})}\begin{pmatrix}0\\ P_{l1}x\\ P_{l2}y\end{pmatrix}\] \[=\begin{pmatrix}0\\ P_{l1}A_{\mathcal{G}}(P_{l1}x+P_{l2}y)\\ P_{l2}A_{\mathcal{G}}P_{l1}x\end{pmatrix}\] for all \(x,y\in L^{2}(\mathcal{G})\). Hence we have \(P_{l2}A_{\mathcal{G}}P_{l1}=0=P_{l1}A_{\mathcal{G}}P_{l1}\) for all \(l>0\). Next, \[0=P_{0}A_{\mu(\mathcal{G})}P_{0}\begin{pmatrix}1\\ x\\ y\end{pmatrix} =P_{0}A_{\mu(\mathcal{G})}\begin{pmatrix}1\\ P_{01}x\\ 0\end{pmatrix}\] \[=\begin{pmatrix}0\\ P_{01}A_{\mathcal{G}}P_{01}x\\ 0\end{pmatrix}\] for all \(x\in L^{2}(\mathcal{G})\). 
Hence \(P_{l2}A_{\mathcal{G}}P_{l1}=0=P_{l1}A_{\mathcal{G}}P_{l1}\) for all \(l\geq 0\), which, used in the next computation (under the assumption \(P_{01}P_{l2}=P_{l2}P_{01}\)), yields \[Q_{l}A_{\mathcal{G}}Q_{l} =(P_{l1}+P_{01}P_{l2})A_{\mathcal{G}}(P_{l1}+P_{l2}P_{01})\] \[=P_{l1}A_{\mathcal{G}}P_{l1}+P_{l1}A_{\mathcal{G}}P_{l2}P_{01}+P_{01}P_{l2}A_{\mathcal{G}}P_{l1}+P_{l2}P_{01}A_{\mathcal{G}}P_{01}P_{l2}\] \[=0.\] Thus if \(P_{01}P_{l2}=P_{l2}P_{01}\) for all \(l\in\{0,1,\ldots,c^{\prime}\}\), then \(\{Q_{l}\}_{l=1}^{c^{\prime}}\) is a \(c^{\prime}\)-colouring. **Question 1.** Is there a quantum graph \(\mathcal{G}\) for which \(\chi_{\text{loc}}(\mu(\mathcal{G}))=\chi_{\text{loc}}(\mathcal{G})\)? The concept of quantum graph homomorphisms will be useful in this and the next section. **Definition 5.2**.: ([16, Definition 7]) We say that there exists a homomorphism between quantum graphs \(\mathcal{G}\) and \(\mathcal{F}\), and write \(\mathcal{G}\to\mathcal{F}\), if there exists a Hilbert space \(\mathsf{H}\) and an isometry \(\mathcal{J}:L^{2}(\mathcal{G})\to L^{2}(\mathcal{F})\otimes\mathsf{H}\) s.t. \(\mathcal{J}S_{\mathcal{G}}\mathcal{J}^{*}\subseteq S_{\mathcal{F}}\otimes B(\mathsf{H})\). By [16, Theorem 8] every homomorphism in the above sense between classical graphs corresponds to a homomorphism between these graphs. **Definition 5.3**.: ([16, Section III, p. 6]) We say that \(\mathcal{G}\) is a quantum subgraph of \(\mathcal{F}\) if there exists an isometry \(\mathcal{J}:L^{2}(\mathcal{G})\to L^{2}(\mathcal{F})\) s.t. \(\mathcal{J}S_{\mathcal{G}}\mathcal{J}^{*}\subseteq\mathcal{S}_{\mathcal{F}}\). **Remark 5.4**.: Let us note that \(\mathcal{G}\) is a quantum subgraph of the Mycielskian \(\mu_{r-1}(\mathcal{G})\), for any \(r\geq 1\), where the isometry \(J:L^{2}(\mathcal{G})\to L^{2}(\mu_{r-1}(\mathcal{G}))\) is given by \(J(x)=\iota_{1}(x)\). Note that if \(\mathcal{G}\) is a quantum subgraph of \(\mathcal{F}\), then in particular \(\mathcal{G}\to\mathcal{F}\). As a direct consequence of the monotonicity property of quantum chromatic numbers [2, Proposition 6.4] and Remark 5.4 we have the following **Proposition 5.5**.: _For any irreflexive quantum graph \(\mathcal{G}\), any \(r\geq 1\) and any \(t\in\{\text{loc},q\}\),_ \[\chi_{t}(\mathcal{G})\leq\chi_{t}(\mu_{r-1}(\mathcal{G})).\] We thank David E. Roberson for pointing out to us the following fact. **Remark 5.6**.: There exists a classical graph \(G\) such that \(\chi_{q}(\mu(G))=\chi_{q}(G)\). Indeed, let \(G_{13}\) be the graph introduced in [12] and defined as follows. Consider a three-dimensional cube centered at the origin of \(\mathbb{R}^{3}\) and identify each vector \(v\) with \(-v\). The set of vertices \(V\) of \(G_{13}\) consists of vectors (after the aforementioned identification) representing midpoints of the faces (three of them), midpoints of the edges (six of them), and vertices of the cube (four of them). The pair \(\{u,v\}\) forms an edge in \(G_{13}\) if and only if \(v\) and \(u\) are orthogonal as vectors in \(\mathbb{R}^{3}\). Adding a single vertex and connecting it with all the vertices of \(G_{13}\) one gets a new graph, called \(G_{14}\). By [12, Lemma 6], we have \(\chi_{q}(G_{13})=\chi_{q}(G_{14})\). On the other hand, by construction there is a morphism \(\mu(G_{13})\to G_{14}\), so that \(\chi_{q}(\mu(G_{13}))\leq\chi_{q}(G_{14})\) by [2, Proposition 6.4]. This shows that \(\chi_{q}(\mu(G_{13}))=\chi_{q}(G_{13})\). ## 6. 
Clique numbers Let us recall that with an irreflexive graph \(G\) with \(|V(G)|=n\) we associate an operator space of the form \[S_{G}=\operatorname{span}\{|e_{i}\rangle\langle e_{j}|:\,e_{i}\sim e_{j}\} \subseteq B(\mathbb{C}^{n})\] with \(\{e_{i}\}\) being the standard basis of \(\mathbb{C}^{n}\). The clique number \(\omega(G)\) of \(G\), which is the size of the largest complete graph contained in \(G\), can be equivalently defined [16, 6] as the maximal cardinality of a set \(K\) for which we can find a collection of non-zero vectors \(\{\psi_{k}\in\ell_{n}^{2}:k\in K\}\) such that \(|\psi_{i}\rangle\langle\psi_{j}|\in S_{G}\) for \(i,j,k\in K\) and \(i\neq j\), i.e. \[\omega(G)=\max\left\{|\{\psi_{k}\in\ell_{n}^{2}:k\in K\}|\,:\,\psi_{k}\neq 0 \,,|\psi_{i}\rangle\langle\psi_{j}|\in S_{G}\text{ for }i,j,k\in K\text{ and }i\neq j\right\}. \tag{12}\] This definition can be adapted to the context of a finite quantum graph \(\mathcal{G}\): the clique number for a quantum graph \(\mathcal{G}=(\mathbb{G},\psi,A)\) is given by \[\omega(\mathcal{G})=\max\left\{|\{\psi_{k}\in L^{2}(\mathcal{G}):k\in K\}|\,: \,\psi_{k}\neq 0\,,|\psi_{i}\rangle\langle\psi_{j}|\in S_{\mathcal{G}}\text{ for }i,j,k\in K\text{ and }i\neq j\right\}. \tag{13}\] The clique number can be also defined using the notion of graph homomorphism from a complete graph, i.e., by \(\omega(\mathcal{G})=\max\{|K_{n}|:\,K_{n}\to\mathcal{G}\}\)[16]. **Remark 6.1**.: It is easy to check that the above definition agrees with (13). Moreover, if there is a morphism \(\mathcal{G}\to\mathcal{F}\) then \(\omega(\mathcal{G})\leq\omega(\mathcal{F})\). In particular \(\omega(\mathcal{G})\leq\omega(\mu_{r-1}(\mathcal{G}))\) for every \(r\geq 1\). **Proposition 6.2**.: _Let \(\mathcal{G}\) be a quantum graph. Then \(\omega(\mu(\mathcal{G}))=\omega(\mathcal{G})\)._ Proof.: By the Remark 6.1 we have \(\omega(\mathcal{G})\leq\omega(\mu(\mathcal{G}))\). In the proof of the converse inequality let us use the following notation: remembering that as vector spaces we have \(L^{2}(\mu(\mathcal{G}))=\mathbb{C}\oplus L^{2}(\mathcal{G})\oplus L^{2}( \mathcal{G})\), the components of \(\varphi\in L^{2}(\mu(\mathcal{G}))\) will be denoted by \(\varphi^{0},\varphi^{1},\varphi^{2}\) respectively. Similarly every \(X\in B(L^{2}(\mu(\mathcal{G})))\) can be written in a matrix form \[X=\begin{pmatrix}X_{00}&X_{01}&X_{02}\\ X_{10}&X_{11}&X_{12}\\ X_{20}&X_{21}&X_{22}\end{pmatrix}\] where \(X_{00}\in\mathbb{C}\), \(X_{ij}\in B(L^{2}(\mathcal{G}))\) and \(X_{i0},X_{0i}^{*}\in L^{2}(\mathcal{G})\) for \(i,j\in\{1,2\}\). Suppose that the clique number \(\omega(\mu(\mathcal{G}))=|K|\) where \(K\) is a finite set such that there is a set of non-zero vectors \(\{\psi_{k}\in L^{2}(\mu(\mathcal{G})):k\in K\}\) satisfying \(|\psi_{i}\rangle\langle\psi_{j}|\in S_{\mu(\mathcal{G})}\) for \(i,j\in K\) and \(i\neq j\). Let \(i\neq j\). 
Since \(|\psi_{i}\rangle\langle\psi_{j}|\in S_{\mu(\mathcal{G})}\), there exists \(X^{(ij)}\in B(L^{2}(\mu(\mathcal{G})))\) such that \[|\psi_{i}\rangle\langle\psi_{j}|=\frac{2\delta^{2}+1}{\delta^{2}} \begin{pmatrix}0&0&m_{\bullet}(\eta_{\mathcal{G}}^{*}\otimes X_{02}^{(ij)})m_{ \mathcal{G}}^{*}\\ 0&\frac{1}{\delta^{2}}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes X_{11}^{(ij)})m_{ \mathcal{G}}^{*}&\frac{1}{\delta^{2}}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes X_ {12}^{(ij)})m_{\mathcal{G}}^{*}\\ m_{\mathcal{G}}(\eta_{\mathcal{G}}\otimes X_{20}^{(ij)})m_{\bullet}^{*}&\frac{1 }{\delta^{2}}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes X_{21}^{(ij)})m_{\mathcal{ G}}^{*}&0\end{pmatrix} \tag{14}\] where the matrix structure on the right of Equation (14) is a direct consequence of Definition 4.1 of \(A_{\mu(\mathcal{G})}\). On the other hand, we have \[|\psi_{i}\rangle\langle\psi_{j}|=\begin{pmatrix}|\psi_{1}^{0}\rangle\langle \psi_{j}^{0}|&|\psi_{1}^{0}\rangle\langle\psi_{j}^{1}|&|\psi_{1}^{0}\rangle \langle\psi_{j}^{2}|\\ |\psi_{1}^{1}\rangle\langle\psi_{j}^{0}|&|\psi_{1}^{1}\rangle\langle\psi_{j}^ {1}|&|\psi_{1}^{1}\rangle\langle\psi_{j}^{2}|\\ |\psi_{1}^{2}\rangle\langle\psi_{j}^{0}|&|\psi_{1}^{2}\rangle\langle\psi_{j}^ {1}|&|\psi_{1}^{2}\rangle\langle\psi_{j}^{2}|\end{pmatrix}=\begin{pmatrix}0&0& |\psi_{1}^{0}\rangle\langle\psi_{j}^{2}|\\ 0&|\psi_{1}^{1}\rangle\langle\psi_{j}^{1}|&|\psi_{1}^{1}\rangle\langle\psi_{j }^{2}|\\ |\psi_{1}^{2}\rangle\langle\psi_{j}^{0}|&|\psi_{1}^{2}\rangle\langle\psi_{j}^ {2}|\end{pmatrix} \tag{15}\] and looking at the right lower corner of the right-hand side of (15) we conclude that for every \(i\neq j\) either \(\psi_{i}^{2}=0\) or \(\psi_{j}^{2}=0\). Suppose that there is \(i_{0}\) such that \(\psi_{i_{0}}^{2}\neq 0\) (say without loss of generality we can take \(i_{0}=1\)). Then \(\psi_{j}^{2}=0\) for every \(j\neq 1\). Consider now \(i\neq j\) and \(i,j\neq 1\). In this case \[|\psi_{i}\rangle\langle\psi_{j}|=\begin{pmatrix}0&0&0\\ 0&|\psi_{1}^{1}\rangle\langle\psi_{j}^{1}|&0\\ 0&0&0\end{pmatrix} \tag{16}\] and therefore \(|\psi_{i}^{1}\rangle\langle\psi_{j}^{1}|\neq 0\) for all \(i\neq j\) and \(i,j\neq 1\) and hence \(\{\psi_{2}^{1},\psi_{2}^{1},\psi_{3}^{1},\ldots,\psi_{|K|}^{1}\}\) is a witness of a clique of size \(|K|\) in \(\mathcal{G}\). If \(\psi_{i}^{2}=0\) for all \(i\) then Equation (16) holds for all \(i,j\). Hence \(\psi_{i}^{1}\neq 0\) for all \(i\) and therefore \(\{\psi_{1}^{1},\psi_{2}^{1},\psi_{3}^{1},\ldots,\psi_{|K|}^{1}\}\) is a witness of a clique of size \(|K|\) in \(\mathcal{G}\) and this ends the proof. The techniques used in the proof of Proposition 6.2 lead to the following generalization. **Proposition 6.3**.: _Let \(\mathcal{G}\) be a quantum graph and \(r\geq 1\). Then \(\omega(\mu_{r-1}(\mathcal{G}))=\omega(\mathcal{G})\)._ Proof.: Suppose that the clique number \(\omega(\mu_{r-1}(\mathcal{G}))=|K|\) where \(K\) is a finite set analogous to the one in the proof of Proposition 6.2. Let \(i\neq j\). 
In a completely similar manner as before, there exists \(X^{(ij)}\in B(L^{2}(\mu_{r-1}(\mathcal{G})))\) such that \[|\psi_{i}\rangle\langle\psi_{j}|=\frac{1+r\delta^{2}}{\delta^{2}} \Bigg{\{}\iota_{0}m_{\bullet}\left(\eta_{\mathcal{G}}^{*}\otimes X_{0r}^{( ij)}\right)m_{\mathcal{G}}^{*}l_{r}^{*}+\iota_{r}m_{\mathcal{G}}\left(\eta_{ \mathcal{G}}\otimes X_{r0}^{(ij)}\right)m_{\bullet}^{*}l_{0}^{*}+\frac{1}{ \delta^{2}}\iota_{1}m_{\mathcal{G}}(A_{\mathcal{G}}\otimes X_{11}^{(ij)})m_{ \mathcal{G}}^{*}l_{1}^{*} \tag{17}\] Comparing the above expression with \(|\psi_{i}\rangle\langle\psi_{j}|=\sum\limits_{l,k=0}^{r}\iota_{l}|\psi_{i}^{l} \rangle\langle\psi_{j}^{k}|l_{k}^{*}\), in particular by looking at the entry \(\iota_{r}|\psi_{i}^{r}\rangle\langle\psi_{j}^{r}|l_{r}^{*}\), we conclude that for every \(i\neq j\) either \(\psi_{i}^{r}=0\) or \(\psi_{j}^{r}=0\). If there exists \(i_{0}\) such that \(\psi_{i_{0}}^{r}\neq 0\), then \(\psi_{j}^{r}=0\) for every \(j\neq i_{0}\). For \(i\neq j\) such that \(i,j\neq i_{0}\) we then have \[|\psi_{i}\rangle\langle\psi_{j}|=\iota_{1}|\psi_{i}^{1}\rangle\langle\psi_{j}^{ 1}|l_{1}^{*}+\sum\limits_{k=1}^{r-1}\left(\iota_{k}|\psi_{i}^{k}\rangle\langle \psi_{j}^{k+1}|l_{k+1}^{*}+\iota_{k+1}|\psi_{i}^{k+1}\rangle\langle\psi_{j}^{k}| l_{k}^{*}\right). \tag{18}\] For \(r>2\), the second term does not vanish automatically. However, looking now at the entry \(\iota_{r-1}|\psi_{i}^{r}\rangle\langle\psi_{j}^{r}|l_{r-1}^{*}\) we conclude that for all \(i\neq j\) either \(\psi_{i}^{r-1}=0\) or \(\psi_{j}^{r-1}=0\). Recursively, we show that for all \(q=2,\ldots,r\) either \(\psi_{i}^{q}=0\) or \(\psi_{j}^{q}=0\). In any of these choices, we can construct a witness of a clique of size \(|K|\) in \(\mathcal{G}\). The concept of a complete graph admits a quantum version with a quantum space playing a role of a space of vertices and thus the clique number of a given (quantum graph) admits a quantum version that measures the maximal size of a complete quantum subgraph. **Definition 6.1**.: Given a finite quantum space \(\mathbb{G}\) and a \(\delta\)-form \(\psi:\mathrm{C}(\mathbb{G})\to\mathbb{C}\) we define a quantum graph \((\mathbb{G},\psi,A)\) where \(A:L^{2}(\mathbb{G},\psi)\to L^{2}(\mathbb{G},\psi)\) is given by \[Ax=\delta^{2}\mathbb{I}\psi(x)-x\] where \(\mathbb{I}\) denotes the matrix with all entries equal to \(1\). This quantum graph is denoted \(\mathcal{K}_{\mathbb{G},\psi}\) and referred to as a complete quantum \((\mathbb{G},\psi)\)-graph. **Remark 6.4**.: We shall often skip \(\psi\) and instead writing \(\mathcal{K}_{\mathbb{G},\psi}\) we use \(\mathcal{K}_{\mathbb{G}}\) referring to it as a complete quantum \(\mathbb{G}\)-graph. **Definition 6.2**.: The quantum clique number for a quantum graph \(\mathcal{G}\) is given by \[\omega_{q}(\mathcal{G})=\max\{|\mathcal{K}_{\mathbb{F}}|:\,\mathcal{K}_{ \mathbb{F}}\to\mathcal{G}\}. \tag{19}\] **Remark 6.5**.: Analogously to the Remark 6.1 we see that if there exists a morphism \(\mathcal{G}\to\mathcal{F}\) then \(\omega_{q}(\mathcal{G})\leq\omega_{q}(\mathcal{F})\). In particular, if \(\mathcal{G}\) is a quantum subgraph of \(\mathcal{F}\), then \(\omega_{q}(\mathcal{G})\leq\omega_{q}(\mathcal{F})\) and hence \(\omega_{q}(\mathcal{G})\leq\omega_{q}(\mu_{r-1}(\mathcal{G}))\) for every \(r\geq 1\). **Question 2**.: Is the opposite inequality also true? If not, what is the minimal counterexample? 
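For classical graphs, the invariance of the clique number under the (generalized) Mycielski transformation, as in Propositions 6.2 and 6.3, can be verified directly on small examples with the brute-force helpers from the Section 2 sketch (again purely illustrative code of our own):
```
import numpy as np

K2 = np.array([[0., 1.], [1., 0.]])
K3 = np.ones((3, 3)) - np.eye(3)                     # the triangle K_3
for A in (K2, K3):
    for s in (1, 2):
        M = mycielski(A, r=s)                        # the generalized Mycielskian mu_s
        assert clique_number(M) == clique_number(A)  # omega is preserved
```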
Next, for a quantum graph \(\mathcal{G}\) and a positive semidefinite operator \(\Lambda\in B(\mathsf{H})\) we define the operator space \(S_{\mathcal{G}}\otimes\Lambda\subseteq B(L^{2}(\mathcal{G}))\otimes B( \mathsf{H})\subseteq B(L^{2}(\mathcal{G})\otimes\mathsf{H})\). Since \((S_{\mathcal{G}},\mathrm{C}(\mathcal{G}),B(L^{2}(\mathcal{G})))\) is a quantum graph, we know in particular that \(\mathrm{C}(\mathcal{G})^{\prime}S_{\mathcal{G}}\,\mathrm{C}(\mathcal{G})^{ \prime}\subseteq S_{\mathcal{G}}\). Defining \(\mathsf{A}=\mathrm{C}(\mathcal{G})\otimes B(\mathsf{H})\) and noticing that \(\mathsf{A}^{\prime}(S_{\mathcal{G}}\otimes\Lambda)\mathsf{A}^{\prime}\subseteq S _{\mathcal{G}}\otimes\Lambda\), (where we use \(B(\mathsf{H})^{\prime}=\mathbb{C}\)) we conclude that \((S_{\mathcal{G}}\otimes\Lambda,\mathsf{A},B(L^{2}(\mathcal{G})\otimes \mathsf{H}))\) forms a quantum graph which we denote by \(\mathcal{G}\otimes\Lambda\). **Definition 6.3**.: ([16, Definition 15]) We say that there exists a quantum homomorphism between quantum graphs \(\mathcal{G}\) and \(\mathcal{F}\), denoted \(\mathcal{G}\xrightarrow{*}\mathcal{F}\), if there exists a positive semidefinite operator \(\Lambda\in B(\mathsf{H})\) s.t. \(\mathcal{G}\otimes\Lambda\to\mathcal{F}\). **Remark 6.6**.: For quantum homomorphisms, one can define the corresponding clique number and the quantum clique number by simply replacing \(\mathcal{G}\to\mathcal{F}\) by \(\mathcal{G}\xrightarrow{*}\mathcal{F}\) in their definitions. We denote these parameters by \(\omega_{*}(\mathcal{G})\) and \(\omega_{q*}(\mathcal{G})\) respectively. In a complete analogy to clique numbers \(\omega\) and \(\omega_{q}\) we see that if there exists a quantum homomorphism \(\mathcal{G}\xrightarrow{*}\mathcal{F}\) then both \(\omega_{*}(\mathcal{G})\leq\omega_{*}(\mathcal{F})\) and \(\omega_{q*}(\mathcal{G})\leq\omega_{q*}(\mathcal{F})\). In particular, if \(\mathcal{G}\) is a quantum subgraph of \(\mathcal{F}\), then \(\omega_{*}(\mathcal{G})\leq\omega_{*}(\mathcal{F})\) and \(\omega_{q*}(\mathcal{G})\leq\omega_{q*}(\mathcal{F})\) and hence \(\omega_{*}(\mathcal{G})\leq\omega_{*}(\mu_{r-1}(\mathcal{G}))\) and \(\omega_{q*}(\mathcal{G})\leq\omega_{q*}(\mu_{r-1}(\mathcal{G}))\) for every \(r\geq 1\). ## 7. Outlook and open questions We have defined the Mycielski transformation (and its generalized versions) for quantum graphs and demonstrated how it affects their certain parameters, in particular (quantum) clique numbers as well as (quantum) chromatic numbers. In contrast to the classical counterpart, we were not able to prove that this transformation automatically enlarges the chromatic number by one. Though we were not able to explicitly construct an example that violates the aforementioned equality, we believe that in general, this equality is not true. A similar lack of equality is expected for quantum chromatic numbers. To a classical graph \(G\), one can also associate the Lovasz number \(\overline{\vartheta}(G)\)[11], which satisfies the monotonicity condition, i.e. the existence of a graph homomorphism \(G\to F\) implies \(\overline{\vartheta}(G)\leq\overline{\vartheta}(F)\)[5, Section 4]. Moreover, \(\omega(G)\leq\overline{\vartheta}(G)\leq\chi_{\mathrm{loc}}(G)\). The generalization of the Lovasz number into the framework of quantum graphs was proposed in [6] (see also [16, Definition 6]). 
By [16, Theorem 19], if for two irreflexive quantum graphs \(\mathcal{G}\) and \(\mathcal{F}\) we have either \(\mathcal{G}\to\mathcal{F}\) or \(\mathcal{G}\xrightarrow{*}\mathcal{F}\), then \(\overline{\vartheta}(\mathcal{G})\leq\overline{\vartheta}(\mathcal{F})\), so that \(\overline{\vartheta}(\mathcal{G})\leq\overline{\vartheta}(\mu_{r-1}(\mathcal{G }))\). Yet another aspect that we aim to investigate in the forthcoming publication is to study how the proposed Mycielski transformation for quantum graphs affects their (quantum) groups of symmetries. In addition to the aforementioned open questions, we formulate below further potentially intriguing problems motivated by the known results in classical graph theory. ### Quantum versions of Motzkin-Straus clique number The clique number of a classical graph \(G=(V,E)\) can be also computed using the Motzkin-Straus characterization [13], \[1-\frac{1}{\omega(G)}=\max\Bigl{\{}\langle v,A_{G}v\rangle:\ v\in\mathbb{R}_{ +}^{|V|},\quad\sum_{i=1}^{|V|}v_{i}=1\Bigr{\}}. \tag{20}\] We now mimic this characterization in the quantum setting. For a given convex closed cone in \(\mathcal{S}\subseteq L^{2}(\mathcal{G})\) we can define the following clique numbers. **Definition 7.1**.: The Motzkin-Straus clique number \(\omega_{\mathcal{S}}(\mathcal{G})\) for quantum graph \(\mathcal{G}\) and convex closed cone \(\mathcal{S}\subseteq L^{2}(\mathcal{G})\) is defined through \[1-\frac{1}{\omega_{\mathcal{S}}(\mathcal{G})}=\max_{v\in\mathcal{S}}\langle v,A_ {\mathcal{G}}v\rangle. \tag{21}\] **Question 3**.: Characterize (if exist) cones that correspond to the clique numbers for quantum graphs defined in Section 6. Which of the Motzkin-Straus clique numbers are preserved by the Mycielski transformation? ### Quantum version of Stiebitz theorem For a given classical graph \(G\), \(n\geq 1\) and \(r_{j}\geq 1\) for \(j=1,\ldots,n\), we define \[\mu_{\{r_{1},\ldots,r_{n}\}}(G)=\mu_{r_{n}-1}\left(\ldots\mu_{r_{2}-1}\left( \mu_{r_{1}-1}(G)\right)\ldots\right). \tag{22}\] For \(n=0\), however, we identify \(\{r_{1},\ldots,r_{n}\}\) with \(\emptyset\) and put \(\mu_{\emptyset}(G)=G\). For \(k\geq 2\) we then define \[\mathcal{M}_{k}=\left\{\mu_{\{r_{1},\ldots,r_{k-2}\}}(K_{2})\,|\,r_{j}\geq 1,\,j=1,\ldots,k-2\right\}, \tag{23}\] i.e. it is the set of all generalized Mycielski transformations of \(K_{2}\) obtained from \(k-2\) consecutive applications of \(\mu_{r-1}(\cdot)\) with possibly different \(r\)s in every iteration, and the following holds: **Theorem 7.1**.: _([17]) For every \(G\in\mathcal{M}_{k}\) we have \(\chi_{loc}(G)\geq k\)._ Let \(\mathcal{K}_{n}\) be the quantum complete graph on \(\operatorname{Mat}_{n}\) equipped with the tracial \(\delta\)-form \(\psi_{n}\), and define \[\mathbb{M}_{k}=\left\{\mu_{\{r_{1},\ldots,r_{k-2}\}}(\mathcal{K}_{2})\,|\,r_{j }\geq 1,\,j=1,\ldots,k-2\right\}. \tag{24}\] **Question 4**.: For which type of (quantum) chromatic numbers do we have \(\chi_{\bullet}(\mathcal{G})\geq k\) for all \(\mathcal{G}\in\mathbb{M}_{k}\). ## Acknowledgments The work of AB was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868. We thank David E. Roberson for his helpful comments.
2310.05690
Abstractive Summarization of Large Document Collections Using GPT
This paper proposes a method of abstractive summarization designed to scale to document collections instead of individual documents. Our approach applies a combination of semantic clustering, document size reduction within topic clusters, semantic chunking of a cluster's documents, GPT-based summarization and concatenation, and a combined sentiment and text visualization of each topic to support exploratory data analysis. Statistical comparison of our results to existing state-of-the-art systems BART, BRIO, PEGASUS, and MoCa using ROUGE summary scores showed statistically equivalent performance with BART and PEGASUS on the CNN/Daily Mail test dataset, and with BART on the Gigaword test dataset. This finding is promising since we view document collection summarization as more challenging than individual document summarization. We conclude with a discussion of how issues of scale are being addressed in the GPT large language model, then suggest potential areas for future work.
Sengjie Liu, Christopher G. Healey
2023-10-09T13:06:21Z
http://arxiv.org/abs/2310.05690v1
# Abstractive Summarization of Large Document Collections Using GPT

Shengjie Liu
Operations Research Department, North Carolina State University, Raleigh, NC 27695-8206
[email protected]

Christopher G. Healey
Department of Computer Science & Institute for Advanced Analytics, North Carolina State University, Raleigh, NC 27695-8206
[email protected]

###### Abstract

This paper proposes a method of abstractive summarization designed to scale to document collections instead of individual documents. Our approach applies a combination of semantic clustering, document size reduction within topic clusters, semantic chunking of a cluster's documents, GPT-based summarization and concatenation, and a combined sentiment and text visualization of each topic to support exploratory data analysis. Statistical comparison of our results to existing state-of-the-art systems BART, BRIO, PEGASUS, and MoCa using ROUGE summary scores showed statistically equivalent performance with BART and PEGASUS on the CNN/Daily Mail test dataset, and with BART on the Gigaword test dataset. This finding is promising since we view document collection summarization as more challenging than individual document summarization. We conclude with a discussion of how issues of scale are being addressed in the GPT large language model, then suggest potential areas for future work.

Abstractive summarization, large language models, NLP, transformer-attention models, visualization

_ACM-Class_
* **Computing methodologies \(\sim\) Artificial intelligence \(\sim\) Natural language processing \(\sim\) Natural Language Generation**
* **Information systems \(\sim\) Information retrieval \(\sim\) Retrieval tasks and goals \(\sim\) Summarization**
* **Theory of computation \(\sim\) Design and analysis of algorithms \(\sim\) Online algorithms**
* **Human-centered computing \(\sim\) Visualization \(\sim\) Visualization application domains \(\sim\) Visual analytics**

## 1 Introduction

Research on transformer-attention mechanisms and large language models (LLMs) has produced impressive results, particularly in natural language processing (NLP) and text analytics. LLMs like BERT Kenton et al. (2019), BART Lewis et al. (2020), GPT Radford et al. (2018), Bard Manyaka (2023), and LLaMA Meta AI (2023) have produced significant research and general public impact. Despite their state-of-the-art performance, the goal of broad, general-purpose use leaves certain tasks only partially solved. This paper focuses on the abstractive summarization of multi-document collections. Systems like GPT can perform abstractive summarization but are currently limited to a small maximum input of 512 to 4,096 terms. Document collections can easily consist of hundreds of documents containing thousands of terms. An intelligent method is needed to manage scale to leverage an LLM's abstractive summarization capabilities. We also propose applying sentiment analysis and visualization to augment the summaries with additional properties presented in an interactive and simple-to-understand visual format. Our approach performs the following steps to extend GPT's abstractive summarization method to large document collections.

1. Apply the Facebook AI Similarity Search Johnson et al. (2021) (FAISS) to estimate document similarity based on the semantic similarity of pairs of documents.
2. Perform Hierarchical Density-Based Spatial Clustering of Applications with Noise Malzer and Baum (2020) (HDBSCAN) using FAISS results to generate semantic topic clusters.
3.
Identify topic-representative terms and build a collection of _representative term sets_ for each cluster, where each set contains a representative term and all semantically similar terms Nagwani (2015) in the parent cluster. 4. Use the representative term sets to further reduce topic cluster size by combining sentences in a cluster containing representative terms into _semantic chunks_ based on change points in their semantic content. 5. Use GPT's summarization API to summarize each semantic chunk, followed by its concatenation API to combine the semantic chunk summaries into an abstractive summarization of the original document collection. 6. Perform term-based sentiment analysis on each semantic chunk to generate valence (pleasure) and arousal scores. 7. Visualize the semantic chunks in a dashboard that allows interactive exploration of both the summary's sentiment and its text at different levels of detail. Comparison of our summaries with current state-of-the-art approaches using ROGUE metrics shows we achieve comparable performance for a multi-document collection versus single document-summary pairs. This suggests our approach can effectively scale to summarize larger documents or document collections beyond the scope of existing systems. ## 2 Related Work Text summarization is generally divided into two broad categories: _extractive_ and _abstractive_. Several recent survey papers cover this topic in detail Cao (2022), Gupta and Gupta (2019), Lin and Ng (2019), Noratanch and Chitrakala (2016), Zhang et al. (2022). ### Summarization Extractive summarization extracts representative text from the original documents verbatim and organizes it into a coherent summary. Common approaches include keyword extraction using a weighting scheme to rank keywords based on how well they capture a document's content (_e.g._, term frequency-inverse document frequency) or sentence extraction using weights to rank sentences and measures of sentence-sentence similarity to avoid including redundant text. Abstractive summarization attempts to construct a unique text summary without using text from a document verbatim. This is similar to how a human reader would construct a summary. Lin and Ng describe this as a three-step process of information extraction, content selection, and surface realization Lin and Ng (2019). * **Information extraction.** Extract meaningful information from a document, for example, noun and verb phrases together with context, information items that form subject-verb-object triples, or verb-object abstraction schemas for different topics. * **Content selection.** Select a subset of candidate phrases to include in the summary. Heuristic approaches and integer linear programming (ILP) have been proposed to structure the summarization task as a constrained optimization problem. The advantage of ILP is that phrase selection is performed _jointly_ across all candidates and not _sequentially_. * **Surface realization.** Combine candidate phrases using grammar and syntactic rules to generate a coherent summary. Gupta and Gupta Gupta (2019) propose dividing abstractive summarization into _structural_, _semantic_, and _neural network_-based. Structural approaches identify relevant information in the text to create a predefined structure for generating abstractive summaries. Semantic methods convert text into a semantic representation used as input to a natural language generation (NLG) system to create abstractive summaries. 
Neural network techniques use deep neural networks to train a model using text-summary pairs. The model is then applied to generate abstractive summaries for unlabeled text. Structure-based methods use trees, templates, ontologies, graphs, and rules to convert text to abstractive summaries. Barzilay and McKeown Barzilay and McKeown (2005) apply a content theme approach to convert common phrases into dependency trees. The dependency trees are augmented with additional relevant information to create a set of subtrees used as input to an NLG system to organize them into novel sentences and produce an abstractive summary. Alternatively, _word graphs_ are built to represent term relationships in a document, then analyzed to construct abstractive summarizations Lloret and Palomar (2011). Mehdad et al. locate the shortest path in a word graph to identify relevant sentences and remove redundant information, then fuse the remaining sentences to produce an abstractive summary Mehdad et al. (2013). Semantic approaches build a semantic representation of the document text, for example, predicate-argument structures (verbs, subjects, and objects of a sentence) or semantic graphs. These are used as input to an NLG system that converts the semantic representation into an abstractive summary. Moawad and Aref built rich semantic graphs where nouns and verbs represent nodes, and semantic relationships represent edges Moawad and Aref (2012). The graph is reduced using heuristic rules, then fed to an NLG system. Munot and Govilkar (2015). Similar approaches have been used with abstract meaning representation graphs: directed acyclic graphs of sentences and their semantic relationships Liu et al. (2015). ### Neural Network Abstractive Summarization The most recent state-of-the-art abstractive summarizers use deep neural networks to train models based on text-summary pairs. Initial work used recurrent neural networks (RNNs) to generate summaries Buys and Blunsom (2017). More recent approaches use transformed-based attention models. Both methods are built on encoder-decoder strategies. An encoder converts terms into vector representations, usually word embeddings. A decoder attempts to determine the next word in the output based on the previous words to date. Training an encoder-decoder is often called a _seq2seq_ learning problem. Attention allows information to be preferentially extracted from the encoder based on where to focus to gain the largest benefit at the current stage in the training process. Examples of RNN abstractive summarizers are numerous. Rush et al. construct a feed-forward neural network comprised of an attention-based encoder and a beam-search decoder for sentence-level summarization Chopra et al. (2016); Rush et al. (2015). A beam search expands on the traditional greedy search by selecting the \(k\) best candidates at each decoding step and pruning them if an end-of-sequence token is seen. Nallapati et al. use an RNN that processes word embeddings and generates summaries to address issues like keyword modeling and rare word inclusion Nallapati et al. (2016). Recent work involves large language models (LLMs) like GPT, Bard Manyaka (2023), and LLaMA Meta AI (2023). GPT-3 (Generative Pre-trained Transformer 3) was built on a training set with 175 billion parameters Brown et al. (2020). 
GPT-3 adopts a _meta-learning_ approach: constructing a model with a broad set of general skills and pattern recognition abilities during training, then inferring results with at most a few examples during task completion without adjusting any of the model's internal weights. GPT-3 demonstrated performance with zero-shot (no task examples), one-shot (one task example), and few-shot (10-100 task examples) contexts that rivaled fine-tuned models. Tasks included sentence completion, story-ending selection, question answering, language translation, pronoun reference, common sense reasoning, mathematical reasoning, and reading comprehension. Meta's LLaMA (Large Language Model Meta AI) uses an alternative approach, arguing that performance depends not only on parameter size but also on the amount of training performed Touvron et al. (2023). Touvron et al. propose that although a larger model may be less expensive to train, a smaller model trained for more time will be more efficient at _inference_ when it is used to perform NLP tasks. LLaMA was evaluated by training it on different-sized inputs, performing a short fine-tuning step, then testing it on question answering, common sense reasoning, mathematical reasoning, code generation, and reading comprehension using 0, 1, 5, and 64-shot contexts. The models contained 7, 13, 33, and 65 billion parameters, trained on 1 trillion tokens for the two smaller models and 1.4 trillion tokens for the two larger models. Task performance showed LLaMA outperformed GPT-3 on most benchmarks.

#### 2.2.1 Training and Test Datasets.

A number of datasets containing text and a corresponding summary exist for training and testing. Among the most common are the Document Understanding Conferences (DUC) datasets managed by the National Institute of Standards and Technology (NIST) National Institute of Standards and Technology (2011). Analysis of DUC datasets is a track in NIST's annual Text Analytics Conference. Each DUC entry contains news documents and three ground-truth summaries: (1) manually generated, (2) automatically generated as baselines, and (3) automatically generated by existing systems. The CNN/Daily Mail dataset contains newspaper articles paired with corresponding summaries: 286,817 training pairs, 13,368 validation pairs, and 11,487 test pairs See (2023). The Gigaword dataset contains approximately 3.8 million English news article training pairs, 189,000 validation pairs, and 2,000 testing pairs Microsoft (2023). Although sufficient to train a neural network model, the training pairs use the first sentence of a document as its summarization ground truth. Finally, the NYT dataset contains preprocessed articles from the New York Times: approximately 650,000 manually generated article-summary pairs with articles limited to 800 tokens and summaries limited to 100 tokens Sandhaus (2008).

#### 2.2.2 Evaluation.

Once a _candidate_ abstractive summary has been constructed, it must be compared to a _reference_ "ground truth" summary to evaluate its quality. Although several evaluation metrics exist, including BLEU, METEOR, and ROUGE, variations of ROUGE (Recall-Oriented Understudy for Gisting Evaluation) are the most common evaluation methods: ROUGE-1 for unigrams, ROUGE-2 for bigrams, and ROUGE-L for longest common subsequence Lin (2004). ROUGE's evaluations are based on lexical overlap, not on semantic content.
ROUGE returns recall: the proportion of words in the reference summary captured by the candidate summary; and precision: the proportion of words in the candidate summary that appear in the reference summary.
\[\text{ROUGE-1}_{\text{R}}=\frac{n_{\text{o}}}{n_{\text{r}}}\qquad\qquad\text{ROUGE-1}_{\text{P}}=\frac{n_{\text{o}}}{n_{\text{c}}} \tag{1}\]
where \(n_{\text{c}}\) is the number of candidate tokens, \(n_{\text{r}}\) is the number of reference tokens, and \(n_{\text{o}}\) is the number of candidate tokens included in (_i.e._, overlapping) the reference summary. Consider the reference summary "John really loves data science" and the candidate summary "John loves data science." Here \(n_{\text{r}}=5\), \(n_{\text{c}}=4\), and \(n_{\text{o}}=4\), so recall is ROUGE-\(\text{1}_{\text{R}}=\frac{4}{5}=0.8\) and precision is ROUGE-\(\text{1}_{\text{P}}=\frac{4}{4}=1.0\). ROUGE-2 uses the same formulas for recall and precision but works with bigrams rather than unigrams. For the same candidate and reference sentences \(n_{\text{r}}=4\): {(John, really), (really, loves), (loves, data), (data, science)}; \(n_{\text{c}}=3\): {(John, loves), (loves, data), (data, science)}; and \(n_{\text{o}}=2\): {(loves, data), (data, science)}, producing recall and precision of ROUGE-\(\text{2}_{\text{R}}=\frac{2}{4}=0.5\) and ROUGE-\(\text{2}_{\text{P}}=\frac{2}{3}=0.67\), respectively. ROUGE-L identifies the longest common subsequence \(n_{\text{L}}\) of tokens in the same order but not necessarily consecutive. For example, a candidate sentence "John really loves data science and studies it extensively" and a reference sentence "John very much _loves data science and enjoys it_ a lot" produces \(n_{\text{L}}=6\), generating recall and precision of ROUGE-\(\text{L}_{\text{R}}=\frac{6}{11}=0.55\) and ROUGE-\(\text{L}_{\text{P}}=\frac{6}{9}=0.67\), respectively.

### State of the Art Abstractive Summarizers

Papers With Code maintains a list of state-of-the-art abstractive summarizers tested on the CNN/Daily Mail dataset1. Currently, the top four systems are versions of BART, BRIO, PEGASUS, and MoCa. Since we test our approach against these methods, we provide brief overviews of each system. BART is a "denoising" transformer model trained to convert noisy or corrupted text into denoised, uncorrupted text using various permutations of a target sentence (_e.g._, term masking, deletion, or permutation) Lewis et al. (2020). BART's initial language understanding model is then fine-tuned to perform NLP tasks like abstractive summarization. BRIO splits the summary generation process into two stages: generation using cross-entropy loss and evaluation using contrastive loss Liu et al. (2022). Combining these metrics balances probabilities across the summary-to-date during training, producing high-quality summaries even when their maximum-likelihood estimation, the standard scoring method during summary generation, would not have recommended them. An extension of PEGASUS, known as _sequence likelihood calibration_ (SLiC), calibrates model-generated sequences with reference sequences in the model's latent space. Calibration refers to the ability to compare the quality of different potential summaries. Rather than apply heuristics to perform this, the authors propose to align candidate sequence likelihoods to the target sequence using a model's latent states during the decode stage of the seq2seq process.
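As a concrete companion to the ROUGE-1 and ROUGE-2 calculations worked through above, the following minimal Python sketch reproduces the "John loves data science" example; it is an illustration only, not the evaluation code used by any of the systems in this section.

```python
# Minimal illustration of ROUGE-N recall and precision; not the evaluation code
# used by any of the systems discussed in this section.
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams for a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N recall and precision from clipped n-gram overlap counts."""
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    return recall, precision

reference = "John really loves data science"
candidate = "John loves data science"
print(rouge_n(candidate, reference, n=1))   # (0.8, 1.0)
print(rouge_n(candidate, reference, n=2))   # (0.5, 0.666...)
```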
MoCa addresses the issue of _exposure bias_ during inference (_i.e._, the trainer only has access to previously predicted tokens rather than ground-truth tokens during search) Zhang et al. (2022). To correct this, MoCa uses a combination of a generator model and an online model to slowly evolve samples that align generator model scores with online model scores using ranking loss. This is done by modifying the generator's parameters using a _momentum coefficient_ based on ranking loss during back-propagation. Footnote 1: [https://paperswithcode.com/sota/abstractive-text-summarization-on-cnn-daily](https://paperswithcode.com/sota/abstractive-text-summarization-on-cnn-daily) ### Sentiment Analysis Sentiment analysis is an active research area in NLP, information retrieval (IR), and machine learning (ML). Two standard analysis methods are: (1) supervised, using a training set to build emotion estimation models, and (2) unsupervised, where raw text is converted directly into scores along emotional dimensions Liu and Zhang (2012); Mohammad (2015); Pang and Lee (2008); Zhang et al. (2018). Analysis is often built on psychological models of emotion that use orthogonal dimensions to describe emotional affect. Russell defined three dimensions pleasure (or valence), arousal, and dominance--the PAD model--to represent emotion Russell (1980); Russell and Feldman Barrett (1999) (Figure 1a). Plutchik's four-dimensional model of joy-sadness, anger-fear, trust-disgust, and anticipation-surprise uses a color wheel to represent basic emotions: hue for dimension endpoints (eight hues) and saturation for emotional intensity (weak saturation for low intensity to strong for high, Figure 1b) Plutchik (1980). ### Sentiment Estimation In supervised NLP approaches, preprocessing has been applied prior to sentiment analysis. Using ML, Pang and Lee calculated subjectivity weights for sentences, producing a graph of sentence nodes and subjectivity-weighted edges Pang and Lee (2004). A minimum graph cut is used to separate objective and subjective sentences. Pang et al. then compared Naive Bayes, maximum entropy, and support vector machines (SVMs) for classifying movie reviews as positive or negative Pang et al. (2002). Unigrams performed best using SVMs. Augmenting the training set with intuitive extensions like bigrams, term frequencies, part of speech tagging, and document position information did not improve performance. Turney rated online reviews as positive or negative using pointwise mutual information to generate statistical dependence between review phrases and the anchor words "excellent" and "poor" Turney (2002). Several pre-built sentiment analysis libraries are available Bonata and Janardhan (2019); DiBattista (2021). In Python, the Natural Language Toolkit's Valence Aware Dictionary and Sentiment Reasoner (NLTK VADER) scores text blocks for polarity (negative, neutral, and positive) and overall sentiment (compound). Textblob includes a sentiment analysis engine, among other common NLP algorithms, returning a sentiment polarity score in the range \([-1,\ldots,1]\) and a subjectivity score in the range \([0,1]\). Flair uses a pre-trained word embedding model to perform sentiment analysis. Although slower than VADER or Textblob, tests suggest that Flair produces more accurate sentiment scores when compared to star ratings for product reviews Podiotis (2020). More recently, deep learning has been applied to sentiment analysis with great success. 
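As a small illustration of the pre-built libraries described above, the sketch below scores an arbitrary example sentence with NLTK VADER and TextBlob; the sentence and the exact scores are illustrative only.

```python
# Requires: pip install nltk textblob, plus the VADER lexicon download below.
# The example sentence is arbitrary; scores shown in comments are indicative only.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from textblob import TextBlob

nltk.download("vader_lexicon", quiet=True)

text = "The new summarization dashboard is surprisingly helpful, although setup was painful."

# NLTK VADER: negative / neutral / positive proportions plus an overall compound score.
vader_scores = SentimentIntensityAnalyzer().polarity_scores(text)
print(vader_scores)   # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# TextBlob: polarity in [-1, 1] and subjectivity in [0, 1].
blob = TextBlob(text)
print(blob.sentiment.polarity, blob.sentiment.subjectivity)
```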
Initial work focused on recurrent neural networks, often augmented with long-short term memory (RNNs and LSTMs). Recently, RNN and LSTM models have been superseded by deep learning transformers. Bidirectional encoder representations from transformers (BERT) has been fine-tuned for sentiment analysis Kenton et al. (2019). A common approach is to use TensorFlow and a review database with star ratings like IMDB or Amazon to predict sentiment polarity. Once extended, BERT can be applied to unlabeled text to estimate sentiment. GPT-3's pre-trained model can also be extended to estimate sentiment, although this can often be done with fewer training samples based on its few-shot context abilities. Masked sequence to sequence (MASS) and BART combine the encode-decode step to produce generalizations of BERT and GPT Lewis et al. (2020); Song et al. (2019). MASS masks out \(k\) consecutive tokens in the input sequence, then attempts to predict those tokens in the output sequence. BART introduces noise into the input sequence to generate "noisy" input for the encoder, then applies an autoregressive decoder to remove the noise and reconstruct the original input. Since MASS and BART are extensions of BERT and GPT-3, they can also be extended to estimate sentiment. Figure 1: Emotional models: (a) Russell’s emotional circumplex, pleasure (valence) on the horizontal axis, arousal on the vertical axis; (b) Plutchik’s emotional model, anger–fear on the horizontal axis, joy–sadness on the vertical axis, trust–disgust on the right-diagonal axis, and anticipation–surprise on the left-diagonal axis #### 2.5.1 Sentiment Dictionaries. A common unsupervised approach employs sentiment dictionaries. Terms appear as keys, each associated with one or more emotional dimension scores. POMS-ex (Profile of Mood States) is a 793-term dictionary designed to measure emotion on six dimensions: tension-anxiety, depression-dejection, anger-hostility, fatigue-inertia, vigor-activity, and confusion-bewilderment Pepe and Bollen (2008). ANEW (Affective Norms for English Words) used the PAD model to score 1,033 emotion-carrying terms along each dimension using a nine-point scale Mislove et al. (2010). Mohammad and Turney created EmoLex from 14,182 nouns, verbs, adjectives, and adverts using Plutchik's four emotional dimensions Mohammad and Turney (2013). Other dictionaries also exist: SentiStrength, built from MySpace comments Thehall et al. (2010); LWIC (Linguistic Inquiry and Word Count), a dictionary that classifies terms as positive, negative, or neutral Tausczik and Pennebaker (2010); and SentiWordNet, built from the well know WordNet synset dictionary Baccianella et al. (2010). More recently, researchers have applied Amazon Mechanical Turk to assign scores for emotional dimensions to large dictionaries. Warriner extended the original ANEW dictionary to approximately 13,000 terms Warriner et al. (2013) using MTurk to obtain PAD scores and compare results to the original ANEW scores for validation. #### 2.5.2 Sentiment Visualization. Visualizing sentiment has received significant attention as part of the general text visualization area. Kucher et al. provide an overview of recent sentiment visualization techniques Kucher et al. (2017). Cao et al. developed Whisper to monitor the spatiotemporal diffusion of social media information. Sentiment polarity was visualized using a sunflower metaphor Cao et al. (2012). 
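Returning briefly to the dictionary-based scoring of Section 2.5.1, term lookup and averaging is all that is required to estimate a sentence's valence and arousal. The sketch below uses a tiny, made-up lexicon purely for illustration; the values are not taken from ANEW, Warriner et al., or EmoLex.

```python
# Toy valence/arousal scoring by dictionary lookup. The lexicon values are invented
# for illustration and are NOT actual ANEW, Warriner et al., or EmoLex scores.
TOY_LEXICON = {
    "love":    {"valence": 8.7, "arousal": 6.4},
    "threat":  {"valence": 2.3, "arousal": 6.8},
    "storm":   {"valence": 3.6, "arousal": 6.1},
    "helpful": {"valence": 7.4, "arousal": 4.2},
}

def score_sentence(sentence):
    """Average valence/arousal over the emotion-carrying terms found in the sentence."""
    hits = [TOY_LEXICON[t] for t in sentence.lower().split() if t in TOY_LEXICON]
    if not hits:   # no emotion-carrying terms: treat the sentence as neutral, low arousal
        return {"valence": 5.0, "arousal": 1.0}
    return {
        "valence": sum(h["valence"] for h in hits) / len(hits),
        "arousal": sum(h["arousal"] for h in hits) / len(hits),
    }

print(score_sentence("ChatGPT seems to have taken the world by storm"))
```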
SocialHelix followed, visualizing and tracking social media topics as they form and their sentiment diverges using a DNA-like representation Cao et al. (2014). Wu et al. presented opinion propagation in Twitter using a combination of streamgraphs and Sankey graphs Wu et al. (2014). Liu et al. linked primary and secondary text using semantic lexical matching. Results are presented in a dashboard containing topic keywords, concept clusters, and a causality timeline Liu et al. (2017). El-Assadi et al. visualized multi-party conversation behavior at the topic level with ConToVi El-Assady et al. (2016). They also extracted conversation threads from large online conversation spaces using a combination of supervised and unsupervised machine learning algorithms El-Assady et al. (2018). Hoque and Carenini implemented ConVis and MultiConVis, an ML, NLP, and visual analytic system to explore blog conversations Hoque and Carenini (2014, 2016). Mohammad et al. extracted stance and sentiment in tweets using a labeled database, with results visualized using treemaps, bar graphs, and heatmaps Mohammad et al. (2017). Kucher et al. identified stance and sentiment polarity in social media text, then used similarity over these properties to visualize analysis of collections of topic-data source streams Kucher et al. (2020). Wei et al. proposed TIARA, a system to extract topics that are visualized in an annotated streamgraph Wei et al. (2010). Dork et al. used Topic Streams, a streamgraph approach to monitoring topics in a large online conversation environment over time Dork et al. (2010). Despite this significant progress, numerous challenges in sentiment estimation continue to exist: more subtle text cues (_e.g._, sarcasm, irony, humor, or metaphors), a writer's emotion versus what they write (_e.g._, an author evoking a particular emotional affect), emotion towards different aspects of an entity, stance (_i.e._, the opinion on a topic), and cross-cultural and domain differences (_e.g._, "alcohol" can be evaluated differently depending on the underlying culture) Mohammad (2015, 2016), Pang and Lee (2008). ## 3 Methodology Although LLMs like GPT can perform few-shot abstractive summarization, they are limited in the input size they support. GPT currently has a 4,096 token maximum, which is too small for even a moderately sized document or document collection. The issue, therefore, becomes: how can we scale an LLM to perform abstractive summarization? An obvious approach is to compress the collection to the LLMs's size limits intelligently. This shifts our goal from abstractive summarization to intelligent document compression prior to summarization. Ideally, we would like to identify, extract, and compress the most semantically meaningful information from the collection. This explains the standard recommendation of extractive summarization followed by abstractive summarization for large document collections. Although existing extractive summarization techniques can produce acceptable results, this reduces the abstractive summarizer to a language rewriter. The choppy and often discontinuous extractive sentences are converted into a grammatically and syntactically correct summary, much more like what a human writer would produce. The main point, however, is that if a concept is not included in the extractive summary, it can never occur in the abstractive summary. Therefore, it is critical to produce the best extractive component or components to summarize. 
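To make the scale problem concrete, the sketch below counts the tokens in a document collection against a 4,096-token prompt budget, using OpenAI's tiktoken tokenizer as a stand-in for GPT's own tokenization; the encoding name is an assumption and depends on the model being called.

```python
# Requires: pip install tiktoken. The cl100k_base encoding is a stand-in; the exact
# tokenizer depends on the GPT model being called.
import tiktoken

GPT_TOKEN_LIMIT = 4096
enc = tiktoken.get_encoding("cl100k_base")

def total_tokens(documents):
    """Count tokens across an entire document collection."""
    return sum(len(enc.encode(doc)) for doc in documents)

sample_article = "word " * 500          # stand-in for a ~500-term news article
collection = [sample_article] * 100     # a 100-document collection
needed = total_tokens(collection)
print(f"{needed} tokens versus a {GPT_TOKEN_LIMIT}-token prompt limit")
```

Even this modest collection exceeds a single prompt by an order of magnitude, which is why the pipeline compresses the collection before any abstractive summarization is attempted.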
Overall, the key objectives of our proposed models are: (1) to overcome the term limitation of GPT for the abstractive summarization task and (2) to create an end-to-end abstractive summary generation and visualization pipeline for large document collections. Unlike traditional extractive summary approaches that produce a single set of sentences, we propose to subdivide the document collection into semantic topic clusters, extract representative terms from each cluster that subdivide sentences into semantic chunks, abstractly summarize each semantic chunk, then concatenate the chunks to generate a final abstractive summary. This approach has a number of novel advantages: (1) improving scaling by subdividing a document collection into semantically similar clusters; (2) operating at a semantic chunk level rather than a sentence level; and (3) attempting to identify the most semantically meaningful information in a collection. Once the abstractive summary is produced, sentiment analysis is used to augment it with estimated emotional affect, presented using visualization. Sentiment analysis presents unique challenges, particularly how to aggregate sentiment over a multi-sentence text block and visually represent sentiment and its related properties optimally. An overview of our approach is as follows. Given a document collection, we first perform text clustering and topic modeling on the collection. We then identify topic-representative terms and construct representative term sets for each cluster, containing a representative term and all semantically similar terms in the parent cluster. To further reduce the topic cluster size, we divide sentences containing representative terms in each cluster into semantic chunks based on change points in their content. We leverage GPT's summarization API to summarize each semantic chunk and use its concatenation API to combine the semantic chunk summaries into an abstractive summarization of the original document collection. Finally, we estimate sentiment and present the summary and its sentiment using an interactive visualization dashboard. We demonstrate our framework using five 100-document collections collected from the CNN/Daily Mail and Gigaword datasets. These collections cover the topics Barack Obama, university research, wildlife protection, the stock market, and basketball. We compare our results to current state-of-the-art abstractive summarizers to test our framework. ### Query Support If required, query support can be performed prior to our abstractive summary pipeline. Here, we retrieve documents \(d_{i}\) in document collection \(D,\ 1\leq i\leq n_{D}\) with the highest similarity to a user query \(q\). We employ a text encoder \(E\) that maps its input to a final hidden layer in an LLM. \(E\) is applied to all documents in \(D\) and \(q\), producing \(E_{d_{i}},\ 1\leq i\leq n_{D}\) and \(E_{q}\). We use Facebook's FAISS library to index all \(d_{i}\). This allows us to query the \(u\)-nearest matches to \(q\) in a computationally efficient manner, generating a subset \(D^{\prime}\) of the original \(D\), \(D^{\prime}=\{d^{\prime}_{1},d^{\prime}_{2},\ldots d^{\prime}_{u}\}\). \(D^{\prime}\) replaces \(D\) as input to the document clustering stage. ### Document Clustering We apply document clustering to \(D\) (the original document collection or the result of a user query) to subdivide \(D\) into more granular topic sets. 
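A minimal sketch of the query-support step just described, assuming a SentenceTransformer model as the encoder \(E\) and an exact FAISS index; the model name and index type are illustrative choices, not the configuration used in our experiments.

```python
# Requires: pip install sentence-transformers faiss-cpu. The encoder model and the
# exact-search index are illustrative choices only.
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")    # assumed stand-in for the encoder E

documents = [
    "ChatGPT has gained over a million users since its launch.",
    "The stock market rallied after strong quarterly earnings reports.",
    "Researchers tracked snowy owls across northern Canada this winter.",
]

# Encode and index the collection D.
doc_vecs = encoder.encode(documents, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(doc_vecs.shape[1])         # inner product ~ cosine on unit vectors
index.add(doc_vecs)

# Retrieve the u nearest documents to a user query q, forming the subset D'.
u = 2
query_vec = encoder.encode(["wildlife protection"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query_vec, u)
d_prime = [documents[i] for i in ids[0]]
print(d_prime)
```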
We use the output of a text encoder \(E\) together with UMAP (Uniform Manifold Approximation and Projection) to perform projection, allowing us to cluster in a lower dimension like a plane (2D) or volume (3D) McInnes et al. (2018). Various dimensional reduction approaches are available, including PCA (principal component analysis) and t-SNE (t-distributed Stochastic Neighbor Embedding) Herve and Williams (2010), van der Maaten and Hinton (2008). We selected UMAP based on its nonlinear and unsupervised nature and ability to efficiently manage large text datasets, preserving their local and global structure. Following projection, we apply HDBSCAN to generate topic clusters Campello et al. (2013). HDBSCAN transforms points into low-density and high-density spatial regions based on local neighbor distances, builds a minimum spanning tree on the resulting distance-weighted graph, constructs and condenses a cluster hierarchy based on a minimum cluster size, then extracts stable clusters from the condensed tree. The result is a set \(\mathcal{C}\) of \(n_{\mathcal{C}}\) clusters \(C_{j}\in\mathcal{C},\ 1\leq j\leq n_{\mathcal{C}}\) that are not constrained to specific shapes or sizes, as well as "outlier" or noise documents that do not belong to any cluster.

### Topic Sentence Extraction

To identify concept-based topic keywords, latent Dirichlet allocation (LDA) is applied to the set of documents in each cluster Blei et al. (2003). LDA converts bag-of-words-based term-document matrices into concept-document matrices, where each document \(d_{i}\in C_{j}\) is transformed from a weighted term frequency vector to a concept vector representing the amount of each _latent_ concept in cluster \(C_{j}\) that \(d_{i}\) contains. The first step in LDA is identifying \(n_{C_{j}}\) concepts as weighted combinations of the unique terms contained in \(C_{j}\). \(n_{C_{j}}\) must be defined before running LDA and is itself an open problem. Given \(n_{C_{j}}\), a Dirichlet distribution is assumed to form conditional probabilities of document-concept mixtures and term-topic assignments. This is used to generate a document-concept matrix and a concept-term matrix. The concept-term matrix defines the weights of all unique terms in \(C_{j}\) for each of the \(n_{C_{j}}\) concepts. This term list is usually truncated to contain the top \(t_{C_{j}}\) terms or terms whose weights exceed a predefined threshold \(\varepsilon_{C_{j}}\). Now, every \(d_{i}\in C_{j}\) is represented by the amount of each \(n_{C_{j}}\) concept \(d_{i}\) contains. This allows similarity to be defined by concept overlap, a more semantic approach than weighted term overlap. We use the concept-term matrix to construct a collection of _topic terms_ \(T_{C_{j}}\). \(T_{C_{j}}\) is the set of terms for \(C_{j}\)'s concepts that occur above a threshold frequency across all concepts. \(T_{C_{j}}\) is extended using WordNet to add synonyms for each topic term. We extract all sentences from each \(d_{i}\in C_{j}\) that contain one or more topic terms, generating a list of topic sentences \(\mathit{Sen}_{C_{j}}=\{\mathit{sen}_{1,j},\mathit{sen}_{2,j},\ldots\}\).

\begin{table}
\begin{tabular}{p{56.9055pt} p{284.5pt}}
\hline
**Documents** & 1. Since it launched in late 2022, ChatGPT seems to have taken the world by storm. \(\cdots\) So while it's impossible to predict what a future filled with A.I. lobbyists will look like, it will probably make the already influential and powerful even more so. \\
 & \(n\). Since its launch in November 2022, ChatGPT ('GPT' stands for Generative Pre-trained Transformer), a type of artificial intelligence model, has gained over a million users. \(\cdots\) I recommend we do all we can as educators to cultivate the powers of the human mind in the face of this novel threat to our intelligence. \\
**LDA Concepts** & 1. dunn (\(w=0.0009\)), said (\(w=0.0009\)), \(\ldots\), lobbi (\(w=0.0009\)) \\
 & \(n\). lobbi (\(w=0.0033\)), comment (\(w=0.0030\)), \(\ldots\), strategi (\(w=0.0019\)) \\
\hline
\end{tabular}
\end{table} Table 1: Example documents from a ChatGPT topic cluster and the top weighted terms for two of the cluster's LDA concepts

### Semantic Chunking

Since the number of sentences in \(\textit{Sen}_{C_{j}}\) may still be large, we split sentences \(\textit{sen}_{i,j}\in\textit{Sen}_{C_{j}}\) into _semantic chunks_ \(K_{C_{j}}=\{k_{1,j},k_{2,j},\ldots\}\). To do this, we obtain the SentenceBERT Reimers and Gurevych (2019) embeddings for each \(\textit{sen}_{i,j}\) and use them to construct a similarity matrix \(\overset{\sim}{\textit{Sen}_{C_{j}}}\).
To automatically identify chunking points between sentences, we take the two adjacent diagonals in \(\overset{\sim}{\textit{Sen}_{C_{j}}}\) immediately to the right of the main diagonal and build a two-column matrix Polovinkin (2022). The columns represent similarity scores between all pairs of adjacent sentences. To better capture differences in similarity, we amplify the similarity scores for certain sentences and suppress the scores for others using an activation weight \(w\) based on the reverse sigmoid function.
\[w(x)=\frac{1}{1+\exp(0.5x)} \tag{2}\]
where \(x\) is the similarity score in each matrix cell. The weighted similarities in each row are summed to compute the similarity between pairs of adjacent sentences. Next, _relative minima_ are identified: locations in the weighted sum list where the similarity score decreases, then increases. The relative minima represent the splits between semantic chunks.

### GPT Zero Shot Summarization

Given each chunk \(k_{i,j}\in K_{C_{j}}\), we run the summarization pipeline through GPT's completion API
\[\textit{sum}_{i,j}=\text{GPT3Summarization}(k_{i,j}+\text{Tl;dr:}) \tag{3}\]
This generates summaries for each chunk. Although we did not encounter this during our analysis and testing, it is _possible_ that \(k_{i,j}\) exceeds GPT's maximum token count. If this happens, \(k_{i,j}\) is further subdivided into two parts. This is done at the internal sentence in \(k_{i,j}\) that produces the largest absolute difference with its neighbors based on weighted semantic sentence similarity \(\text{sim}_{i,j}\). Assuming \(k_{i,j}\) spans sentences \(k_{i,j}=(\textit{sen}_{i,u},\textit{sen}_{i,u+1},\ldots\textit{sen}_{i,v})\), the split point \(sp\) occurs as follows.
\[sp=\operatorname*{arg\,max}_{u+1\leq s\leq v-1}\{\,||(\text{sim}_{i,s-1}-\text{sim}_{i,s})||+||(\text{sim}_{i,s}-\text{sim}_{i,s+1})||\,\} \tag{4}\]
producing two semantic chunks \(k^{\prime}_{i,sp}=(\textit{sen}_{i,u},\ldots\textit{sen}_{i,sp})\) and \(k^{\prime}_{i,sp+1}=(\textit{sen}_{i,sp+1},\ldots\textit{sen}_{i,v})\) that span the original \(k_{i,j}\). Identical subdivision of \(k^{\prime}_{i,sp}\) and \(k^{\prime}_{i,sp+1}\) can occur as needed. Once each semantic chunk meets the maximum term limit constraint, the \(\textit{sum}_{i,j}\) are combined and summarized.
\[\textit{Sum}_{j}=\text{GPT3Summarization}(\textit{sum}_{1,j}+\textit{sum}_{2,j}+\cdots) \tag{5}\]
This produces a final abstractive summary for cluster \(C_{j}\). We repeat the same procedure for all clusters in \(D\) to construct a summary for every cluster in the document collection. Finally, we combine cluster summaries, again using GPT.
\[\textit{Sum}=\text{GPT3Summarization}(\textit{Sum}_{1}+\textit{Sum}_{2}+\cdots) \tag{6}\]

## 4 Sentiment Visualization

To further highlight patterns and insights in the abstractive summaries and the components used to construct them, we designed a sentiment visualization dashboard to allow viewers to explore different aspects of the final summaries. Fig. 2 shows an example of the visualization for our ChatGPT topic cluster, including the base visualization (Fig. 2a) and interactive exploration of the raw sentences (Fig. 2b) and semantic chunks (Fig. 2c).
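Before turning to the visualization itself, the chunk-splitting and zero-shot summarization steps of Sections 3.4 and 3.5 can be sketched as follows. The library choices, the exact application of the Equation (2) weights, and the prompt format are our reading of the description above rather than verbatim implementation details; the temperature and top\(_p\) values follow the settings reported in Section 5.1, and the call uses the legacy OpenAI completion endpoint that served text-davinci-003.

```python
# Illustrative sketch only: the libraries, the weighting of Equation (2), and the
# prompt format are assumptions based on the text, not the exact experimental code.
# Assumes openai.api_key has been set and the legacy openai<1.0 client is installed.
import math

import openai
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for SentenceBERT

def weighted_row_sums(sentences):
    """Sum reverse-sigmoid-weighted similarities over the two superdiagonals (Eq. 2)."""
    emb = encoder.encode(sentences, convert_to_tensor=True)
    sim = util.cos_sim(emb, emb)
    w = lambda x: 1.0 / (1.0 + math.exp(0.5 * x))   # Equation (2)
    rows = []
    for i in range(len(sentences) - 1):
        cells = [float(sim[i][i + 1])]
        if i + 2 < len(sentences):
            cells.append(float(sim[i][i + 2]))
        rows.append(sum(w(x) * x for x in cells))   # weighted similarities, summed per row
    return rows

def split_into_chunks(sentences):
    """Split the sentence list at relative minima of the weighted row sums."""
    rows = weighted_row_sums(sentences)
    cuts = [i + 1 for i in range(1, len(rows) - 1)
            if rows[i] < rows[i - 1] and rows[i] < rows[i + 1]]
    chunks, start = [], 0
    for cut in cuts + [len(sentences)]:
        chunks.append(" ".join(sentences[start:cut]))
        start = cut
    return chunks

def gpt_summarize(text):
    """Zero-shot GPT summarization with a Tl;dr prompt (Equation (3))."""
    response = openai.Completion.create(
        model="text-davinci-003", prompt=text + "\n\nTl;dr:",
        temperature=0.3, top_p=0.9, frequency_penalty=0, presence_penalty=0,
        max_tokens=120)
    return response["choices"][0]["text"].strip()

# Example use for one topic cluster's sentences:
# chunk_summaries = [gpt_summarize(k) for k in split_into_chunks(topic_sentences)]
# cluster_summary = gpt_summarize(" ".join(chunk_summaries))   # Equation (5)
```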
Our approach breaks the abstractive summary into three parts: semantic chunks represented as lines colored by valence on the top; individual sentences shown as bars within each semantic chunk together with their valence (represented by hue) and arousal (represented by height); and the final abstractive summary for the entire topic shown in text colored by valence on the bottom. We apply our knowledge of cognitive vision in visualization to choose perceptually effective representations for our data. For example, we and others have studied extensively the best use of geometric shape, properties of color (luminance, hue, and saturation), layout, and their combinations to highlight data properties that we believe are most relevant to our viewers Callaghan (1989); Healey and Enns (1999, 2012). These guidelines were used to select the double-ended saturation scales, rectangles, and lines to present sentiment, arousal, sentences, and semantic chunks.

A common issue with sentiment analysis is aggregation. Normally, text should be divided into blocks that contain a single sentiment. Aggregating multiple sentiment values often leads to neutral results since positive and negative sentiments cancel under most aggregation operations, resulting in overall neutral sentiment. Outlier situations, either in the number of positive or negative sentiment scores or in their absolute values, are required to "push" the aggregated sentiment toward a positive or negative overall result. Different approaches can address this, for example, setting specific thresholds to discretize valence into negative, neutral, and positive in ways that better distinguish negative and positive scores. We use a more straightforward method, subdividing semantic chunks into sentences. It is usually assumed that a sentence contains a single sentiment, so aggregating the valence of terms in the sentence should not cause conflicting valence values to cancel. This is why we present individual sentence valence and arousal bars.

Several insights can be drawn from this level of detail. First, very few bars are grey, suggesting that few sentences have a neutral valence. At the next level of detail, semantic chunk valence, only two lines are negative (orange-red), even though orange and red sentence bars occur throughout the topic's text. This shows how negative valence is canceled by positive valence in a semantic chunk unless large negative sentence valence occurs (seventh semantic chunk) or the number of sentences is small (fourteenth semantic chunk). At the highest level of detail, the topic's abstractive summary text is displayed and colored based on its valence. The dashboard lets users hover over sentence bars and semantic chunk lines interactively to reveal their underlying text, valence, and arousal scores (Fig. 2b,c).

Figure 2: Summary visualizations: (a) chunk summary valence (top, represented by hue) and individual sentence valence and arousal (bottom, represented by hue and height) for the ChatGPT topic cluster, the final abstractive summary of the cluster's text is shown at the bottom of the figure; (b) highlighting a sentence to see its text, valence, and arousal; (c) highlighting a semantic chunk to see its text, valence, and arousal

## 5 Experiments

We evaluated our proposed framework for abstractive summarization by comparing it to existing state-of-the-art techniques: BART, BRIO, PEGASUS, and MoCa.
We aimed to investigate whether our proposed framework can maintain zero-shot performance comparable to competing systems while simultaneously handling large multi-document collections. We used the ROGUE-1, ROGUE-2, and ROGUE-L scores to measure summary quality. ### Datasets We performed experiments using the CNN/Daily Mail and the Gigaword datasets from the Hugging Face library. Each entry in the CNN/Daily Mail dataset contains a document ID, a newspaper article's text, and a corresponding summary (or highlights, as it is referred to within the dataset). The Gigaword dataset includes an English news article's text and a corresponding summary. The CNN/Daily Mail dataset contains 286,817 training pairs, 13,368 validation pairs, and 11,487 test pairs. The Gigaword dataset contains approximately 3.8 million training pairs, 189,000 validation pairs, and 2,000 text pairs. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline **Query** & **Topics** & **Query Summary** \\ & 1. { joke, jest, dinner, attend,... poll, survey, Bush, Walker } & Scott Walker recently signed a right-to-work law in Wisconsin, and Obama has criticized it as part of a “sustained, coordinated assault on unions.” \\ Barack Obama & 4. { indic, Indic, Indo-Aryan, zeid,... speaker, plan, design, month } & \\ & 5. { obama, cotton, cotton\_fiber, cotton\_wool,... rock, sway, site, user } & \\ & 6. { line, deduct, republican, budget,... lexu, said, say, tell } & Paul has criticized George W. Bush for not being conservative enough. \\ &... & \\ & 1. { duck, dog, water, shoot,... monkey, meat, lamb, dean } & Jiri Michal was trying to take a picture of a great grey ow in Vysoka when it started playing peek-a-boo with him. Snowy owls are diurnal and hunt silently, but animal rights campaigners have accused the Harry Potter studio tour of mistreating owls by keeping them in cages and allowing fans to touch them. PETA has called for Warner Brothers to stop using live animals in their tour. \\ wildlife protection & 4. { plant, carbon, tree, reel,... prey, behaviour, behavior, attack } & \\ & 5. { canal, duct, epithelial\_duct, channel,... alternate, author, boat, hunt } & \\ & 6. { tour, circuit, go, spell,... flash, photograph, approach, cover } & Warner Brothers to stop using live animals in their tour. \\ & 7. { heron, Hero, Heron, octopu,... brown, flight, green, know } & \\ \hline \end{tabular} \end{table} Table 2: Example queries, keywords (in order of importance) for the LDA topics built from the documents returned for each query, and the final abstractive summary of the query’s topic summaries We assess the zero-shot performance of our framework for abstractive summarization by evaluating its ability to perform the summarization task on the test set without any prior training. Given the short length of each article, we increase the size of the document collection \(D\) to \(n_{D}=100\) documents. For our system, we had to predefine the number of topic clusters \(n_{C}\) created by HDBSCAN. We chose \(n_{C}=10\) to ensure semantic chunks that met GPT-3's maximum token limit. 
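The projection and clustering configuration from Section 3.2 can be sketched as follows; the embedding model and the UMAP and HDBSCAN parameter values are illustrative assumptions, with cluster granularity in practice tuned (for example, through the minimum cluster size) until roughly ten topic clusters emerge per collection.

```python
# Illustrative sketch: the encoder and all parameter values below are assumptions,
# not the exact experimental configuration.
# Requires: pip install umap-learn hdbscan sentence-transformers
import hdbscan
import umap
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def cluster_documents(documents, min_cluster_size=5):
    """UMAP projection to 2D followed by HDBSCAN density clustering; label -1 marks noise."""
    vectors = encoder.encode(documents)
    projected = umap.UMAP(n_components=2, n_neighbors=15, metric="cosine",
                          random_state=42).fit_transform(vectors)
    return hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(projected)

# labels = cluster_documents(collection)   # e.g., the 100 documents returned for a query
```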
We also selected GPT hyperparameters temperature \(=0.3\) to set randomness in the output to favor terms with a higher probability of occurrence, \(\text{top}_{p}=0.9\) to select the smallest collection of terms whose cumulative probability exceeds \(0.9\), frequency and presence penalties of \(0\) to reduce the likelihood of repetitive text, and use of the davinici-003 model to "produce higher quality writing with better long-form generation versus davinici-002"." ### Performance We compared our results against four state-of-the-art abstract summarizers: BART, BRIO, PEGASUS, and MoCa. All four systems are designed to summarize individual documents that meet the limitations of GPT-3's token maximum. None of the systems are specifically built to summarize document collections. Because of this, we ensured that documents with lengths at or below GPT-3's limits were selected. These documents averaged approximately 500 terms. Since we are generating multi-document summaries for our system, we first need to define a ground-truth summary to compare to. This is done by taking the individual ground-truth summaries from the test dataset for each document in a cluster and concatenating them. The abstractive summary for the topic cluster is compared to the concatenated ground-truth summary. ROGUE scores for each topic cluster are averaged to generate an overall ROGUE score for our approach. We made five queries (Barack Obama, university research, wildlife protection, stock market, and basketball) on the CNN/Daily Mail and Gigaword datasets to extract 100 documents per query. We applied our system to each query's 100 documents: HDBSCAN was used to generate topic clusters, LDA identified ten concepts per cluster, topic sets were built to identify sentences containing topic terms, adjacent sentence similarities were used to locate relative minima separating semantic chunks, each semantic chunk was summarized using GPT-3, the chunk summaries were concatenated with GPT-3 to produce a topic summary, and the topic summaries were themselves concatenated to generate a final abstractive summary of the original 100 documents. Examples of the queries, the LDA topics generated from the documents returned by the query, and the final abstractive summary we built from individual topic summaries are shown in Table 2. Topic summaries were compared to the ground-truth summaries we constructed to calculate ROGUE scores. The topic cluster ROGUE scores were then averaged to produce a final ROGUE score for our abstractive summarization approach. We compared the ROUGE-1, ROUGE-2, and ROUGE-L metrics for the five abstractive summaries generated by our system to the competing approaches' ROGUE results. Scores in Table 3 for the competing systems were taken directly from their papers' reported results. Our system produced the highest ROGUE scores for the CNN/Daily Mail dataset, and MoCa reported the highest ROGUE scores for the Gigaword dataset. Our Gigaword scores, however, are comparable to all four systems. This is promising since we are generating multi-document summaries versus the other systems that generate single-document summaries. We also wanted to compare to GPT since it is capable of multi-document abstractive summarization. Unfortunately, we could not locate reported ROGUE scores for abstractive summarization. We considered testing GPT directly, but GPT-3 and GPT-4 have a limit of 4,096 and 8,192 input terms, respectively, smaller than our smallest cluster's size. 
A beta version of GPT-4 claims to increase this limit to 32,767 terms, but as of our testing, neither the GPT-4 nor the GPT-4 beta APIs are available outside of special case use, which we could not secure. \begin{table} \begin{tabular}{l|c c c|c c c} \hline & \multicolumn{3}{c|}{**CNN/Daily Mail**} & \multicolumn{3}{c}{**Gigaword**} \\ **System** & **ROGUE-1** & **ROGUE-2** & **ROGUE-L** & **ROGUE-1** & **ROGUE-2** & **ROGUE-L** \\ \hline BART & 44.2 & 21.3 & 40.9 & 39.1 & 20.1 & 36.4 \\ BRIO & 48.0 & 23.8 & 44.7 & — & — & — \\ PEGASUS & 44.2 & 21.5 & 41.1 & 39.1 & 19.9 & 36.2 \\ MoCa & 48.9 & 24.9 & 45.8 & **39.6** & **20.6** & **36.8** \\ Ours & **58.7** & **25.6** & **56.0** & 38.7 & 19.7 & 35.8 \\ \hline \end{tabular} \end{table} Table 3: ROGUE scores for BART, BRIO, PEGASUS, MoCa, and our proposed system To search for statistical performance differences, we ran the competing systems and our system on the sets of documents generated by our five queries, then performed analysis of variance (ANOVA) on the individual ROGUE scores for each ROGUE type and summarization system. The following steps were performed for each system to calculate an overall ROGUE score. 1. Ask the given system, for example, BART, to provide an abstractive summary \(Sum_{\text{BART}}\) for an input document. 2. Calculate the ROGUE-1, ROGUE-2, and ROGUE-L scores for \(Sum_{\text{BART}}\) versus the ground-truth summary provided in the test dataset. 3. Average the ROGUE-1, ROGUE-2, and ROGUE-L scores to generate an overall ROGUE score for the given system. Because variance across summarization systems was not equivalent, we applied the non-parametric Kruskal-Wallis ANOVA. All systems except MoCa offer implementations in the Hugging Face transformer model repository3. Because of this, MoCa was excluded from our statistical analysis. In addition, BRIO's implementation is not tuned for the short documents in the Gigaword dataset, so it was not included in any ANOVAs involving Gigaword comparisons. For an \(\alpha=95\)% significance rate, the \(F\)-results for the CNN/Daily Mail and Gigaword ROGUE-1, ROGUE-2, and ROGUE-L scores are shown in Table 4. Footnote 3: [https://huggingface.co/models](https://huggingface.co/models) ANOVA results confirmed a significant difference for all ROGUE scores across dataset and technique. Dunn's post hoc analysis was performed to search for pairwise differences in performance. The following pairs were identified as _not_ significantly different. * CNN/Daily Mail, ROGUE-1, ours-PEGASUS, \(p=0.12\) * CNN/Daily Mail, ROGUE-2, ours-BART, \(p=0.31\) * CNN/Daily Mail, ROGUE-2, ours-PEGASUS, \(p=0.14\) * CNN/Daily Mail, ROGUE-L, ours-BART, \(p=0.11\) * CNN/Daily Mail, ROGUE-L, ours-PEGASUS, \(p=0.11\) * Gigaword, ROGUE-2, ours-BART, \(p=0.10\) Dunn's pairwise results show that our method performs statistically equivalently to PEGASUS and BART for all but one of the ROGUE scores on the CNN/Daily Mail dataset and equivalently to BART for the ROGUE-2 scores on the Gigaword dataset. This is a positive indication of the usefulness and generalizability of our technique to scale to multi-document collections that existing systems either struggle with or cannot handle due to maximum input term limits. Finally, we note that the results for the Gigaword dataset in Tables 3 and 4 are mainly due to the types of summaries Gigaword provides, coupled with our use of GPT. GPT is designed to return summaries that contain complete, grammatically correct sentences. 
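Returning to the statistical comparison above, the procedure maps directly onto standard Python tooling. The sketch below (our illustration) applies SciPy's Kruskal-Wallis test and Dunn's post hoc test from the `scikit-posthocs` package to per-document ROUGE-1 scores; the score lists and the Bonferroni adjustment are placeholder choices, not values or settings reported in the paper.

```python
# Hypothetical sketch of the Kruskal-Wallis ANOVA and Dunn's post hoc analysis.
# `ours`, `bart`, `pegasus` hold one ROUGE-1 score per test document (placeholders).
from scipy.stats import kruskal
import scikit_posthocs as sp

ours = [0.61, 0.55, 0.58, 0.60]
bart = [0.45, 0.41, 0.44, 0.46]
pegasus = [0.44, 0.43, 0.42, 0.45]

h_stat, p_value = kruskal(ours, bart, pegasus)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.4f}")

# Pairwise Dunn test; the p-value adjustment method here is our choice.
dunn = sp.posthoc_dunn([ours, bart, pegasus], p_adjust="bonferroni")
print(dunn)
```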
Gigaword's summaries are often text snippets and therefore do not correspond as well to GPT's summaries as those included in the CNN/Daily Mail dataset. For example, Gigaword's test dataset contains text-summary entries like "UNK-russian liberal party wins resignation" or "the rand gained ground against the dollar at the opening here wednesday, to #.###### to the greenback from #.###### at the close tuesday.-rand gains ground". The first example contains no original text, reporting it as UNK or unknown. The second example uses placeholders for numeric values and a short text snippet rather than a complete sentence for the ground-truth summary. In both cases, GPT, and by extension, our system, struggles to produce a comparable summary. We avoided entries with UNK in either the text or the summary. We did not remove pairs with incomplete or grammatically incorrect summaries since we wanted to honestly represent how our system performs on these types of datasets. \begin{table} \begin{tabular}{|l|l l l|} \hline **Dataset** & \multicolumn{1}{c|}{**ROGUE-1**} & \multicolumn{1}{c|}{**ROGUE-2**} & \multicolumn{1}{c|}{**ROGUE-L**} \\ \hline CNN & \(F(3,1542)=86.93\), \(p<0.01\) & \(F(3,1542)=27.43\), \(p<0.01\) & \(F(3,1542)=67.46\), \(p<0.01\) \\ Gigaword & \(F(2,1045)=72.73\), \(p<0.01\) & \(F(2,1045)=62.62\), \(p<0.01\) & \(F(2,1045)=72.75\), \(p<0.01\) \\ \hline \end{tabular} \end{table} Table 4: Analysis of variance on ROGUE scores for our proposed system versus BART, BRIO, and PEGASUS on the CNN/Daily Mail dataset, and BART and PEGASUS on the Gigaword dataset ## 6 Conclusions and Future Work Our goal in this paper is a technique that can scale to perform abstractive summarization on a multi-document collection. We divide documents into semantic topic clusters using FAISS and HDBSCAN. Representative term sets are generated for each cluster, then used to reduce the cluster size into semantic chunks. GPT is applied to summarize each chunk, then concatenate the summaries into an abstractive summarization of each topic. The same concatenation operation combines topic summaries into an overall document collection summary. The sentence, semantic chunk, and topic summaries are analyzed for sentiment, then visualized using an interactive dashboard that allows users to explore the valence, arousal, and raw and summarized text at multiple levels of detail. Statistical analysis of ROGUE scores for our system and competing approaches, including BART, BRIO, and PEGASUS, confirmed comparable performance for our multi-document summaries versus existing approaches' individual document results. We offer the following advantages over existing systems. 1. The ability to scale to multi-document collections versus individual documents. 2. Identification of topics using semantic clustering to provide multiple levels of detail on the content of a document collection. 3. Perceptually-based, interactive visualization dashboards designed to present text sentiment, text summaries, and raw text at different levels of detail. 4. Harnessing and extending the capabilities of large language models for abstractive summarization. 5. Future integration of new techniques, for example, new large language models, abstractive summarization algorithms, or evaluation methods, as they become available since our approach can quickly generalize to any of these changes. In terms of future work, we are currently investigating three potential improvements to our system. 1. 
**Streaming.** Extend our system to support real-time streaming, allowing it to dynamically add or remove documents in the document collection. This would impact topic clustering since we assume topics will shift over time. Real-time clustering algorithms exist, for example, Real Time Exponential Filter Clustering (RTFEC) and Real Time Moving Average Clustering (RTMAC). A potentially more relevant approach is density-based clustering for real-time stream data Chen and Tu (2007). Another possibility is to track estimated error in the current cluster results and perform an updated clustering when a threshold error is crossed, similar to how we maintained accurate TF-IDF scores in a streaming document environment Venkatesh (2010). Once clusters are defined, follow-on semantic chunking, chunk, topic, and document collection summarization, and visualization would be performed as in the current system. 2. **Visualization.** Improve the visualization dashboard to support more sophisticated visual analytics. Extending the visualization dashboard to support additional exploratory analysis, particularly at different levels of detail, is an area of potential interest. Our current focus is a system similar to one we built for exploring topics and their associated sentiment patterns during customer chat sessions Healey et al. (2021). This system was specifically designed to present relevant information at multiple levels of detail. 3. **Alternative LLMs.** Explore the strengths and limitations of additional LLMs like Bard, BLOOM Le Scao et al. (2023), and LLaMA that offer abstractive summarization and text concatenation capabilities. We plan to investigate these and similar LLMs to determine whether they have any particular strengths or limitations for our summarization pipeline.
2308.01864
Modeling Apsidal Motion in Eclipsing Binaries using ELC
Apsidal motion is the precession of the line of apsides in the orbit of a binary star due to perturbations from General Relativity (GR), tides, or third-body interactions. The rate of precession due to tidal effects depends on the interior structures of the stars, and as a result, binaries in which this precession occurs are of great interest. Apsidal motion is observed through the analysis of eclipse times, which reveal small changes in the average interval between successive primary and secondary eclipses, taking all available observed times of eclipse and yielding an estimate of the apsidal rate. Given that this is a single observed quantity, various degeneracies are unavoidably present. Ideally, one would have a model that predicts eclipse times given the orbital and stellar parameters. These parameters for a given binary could then be computed using least squares, provided a suitably large number of eclipse times. Here we use the eclipsing light curve (ELC) program as such a model. The Newtonian equations of motion with additional force terms accounting for GR contributions and tidal distortions are integrated, yielding precise sky positions as a function of time. Times of mid-eclipse and instantaneous orbital elements are computed as a function of time. In this paper, we outline the method and compare numerically computed apsidal rates with standard formulae using a set of 15 binaries based on real systems. For our simulated systems, the derived apsidal rates agree with the standard formula.
Alexander J. Dimoff, Jerome A. Orosz
2023-08-03T16:39:34Z
http://arxiv.org/abs/2308.01864v2
# Modeling Apsidal Motion in Eclipsing Binaries Using ELC ###### Abstract Apsidal motion is the precession of the line of apsides in the orbit of a binary star due to perturbations from General Relativity (GR), tides, or third-body interactions. The rate of precession due to tidal effects depends on the interior structures of the stars, and as a result, binaries in which this precession occurs are of great interest. Apsidal motion is observed through the analysis of eclipse times, which reveal small changes in the average interval between successive primary and secondary eclipses, taking all available observed times of eclipse and yielding an estimate of the apsidal rate. Given that this is a single observed quantity, various degeneracies are unavoidably present. Ideally, one would have a model that predicts eclipse times given the orbital and stellar parameters. These parameters for a given binary could then be computed using least squares, provided a suitably large number of eclipse times. Here we use the eclipsing light curve (ELC) program as such a model. The Newtonian equations of motion with additional force terms accounting for GR contributions and tidal distortions are integrated, yielding precise sky positions as a function of time. Times of mid-eclipse and instantaneous orbital elements are computed as a function of time. In this paper, we outline the method and compare numerically computed apsidal rates with standard formulae using a set of 15 binaries based on real systems. For our simulated systems, the derived apsidal rates agree with the standard formula.
We test the ELC model by comparing model-computed values of \(\dot{\omega}\) to the standard formula. We conclude with a short summary. ## 2 Analytic Formulae for Contributions to Apsidal Motion ### General Relativity Contribution The rate of apsidal advance in an orbit in GR is given by Equation (10) in Barker & O'Connell (1975a), \[\dot{\omega}=\frac{3G^{2/3}n^{5/3}(M_{\rm 1}+M_{\rm 2})^{2/3}}{c^{2}(1-e^{2})}, \tag{2}\] where \(n=2\pi/P\) is the mean daily motion. Substituting, we obtain \[\dot{\omega}=\left[\frac{3G^{2/3}(M_{\rm 1}+M_{\rm 2})^{2/3}}{c^{2}(1-e^{2})}\right]\left(\frac{2\pi}{P}\right)^{5/3}, \tag{3}\] where the units of \(\dot{\omega}\) are radians per second. We multiply by the orbital period \(P\) to obtain radians per cycle, then by \(180/\pi\) to obtain degrees per cycle, \[\dot{\omega}=\left[\frac{540(2\pi)^{5/3}G^{2/3}}{\pi c^{2}}\right]\left(\frac{1}{1-e^{2}}\right)\left(\frac{M_{\rm 1}+M_{\rm 2}}{P}\right)^{2/3}. \tag{4}\] In the above equation, the units of the masses are kilograms and the units of the period are seconds. To convert into solar masses and days, respectively, we let \(M_{\odot}\) be the solar mass in kg and \(s=86{,}400\) be the number of seconds in a day. We then have \[\dot{\omega}=\left[\frac{540(2\pi)^{5/3}}{\pi c^{2}}\!\left(\frac{GM_{\odot}}{s}\right)^{2/3}\right]\left(\frac{1}{1-e^{2}}\right)\!\left(\frac{m_{\rm 1}+m_{\rm 2}}{P_{d}}\right)^{2/3} \tag{5}\] in units of degrees per cycle, where \(m_{\rm 1}\) and \(m_{\rm 2}\) are the masses of the binary components in solar masses, and \(P_{d}\) is the period in days. The combination \(GM_{\odot}\) is known more accurately than either \(G\) or \(M_{\odot}\) is alone, \[GM_{\odot}=\frac{k^{2}A^{3}}{s^{2}}, \tag{6}\] where \(A=149597870700.0\) is the number of meters in an Astronomical Unit (an exact number by definition), and \(k=0.01720209895\) radians per day is the Gaussian gravitational constant (Clemence, 1965). We finally arrive at the well-known formula for \(\dot{\omega}_{\rm GR}\) (e.g., Gimenez, 1985), where the units are in degrees per cycle, \[\dot{\omega}= \left[\frac{540(2\pi)^{5/3}k^{4/3}A^{2}}{\pi c^{2}s^{2}}\right]\left(\frac{1}{1-e^{2}}\right)\!\left(\frac{m_{\rm 1}+m_{\rm 2}}{P_{d}}\right)^{2/3}\] \[= (5.447127276\,\times\,10^{-4})\!\left(\frac{1}{1-e^{2}}\right)\!
\left(\frac{m_{\rm 1}+m_{\rm 2}}{P_{d}}\right)^{2/3}. \tag{7}\] The coefficient in this approximation is an exact expression from fundamental constants. From the formula, we see that systems with higher stellar masses and shorter periods will have faster rates of apsidal motion due to GR effects. ### Tidal (Newtonian) Contribution In Newtonian gravity, the orbit of two bound point masses is given by the well-known Kepler equations. The orbit is closed, and the orientation of the semimajor axis (the line of apsides) remains fixed. Real stars are not point masses, and departures from spherical symmetry due to rotation and/or tides give rise to small nonradial forces that cause the orbital elements to change with time (see the Lagrange planetary equations). In most cases, the rate of change of the line of apsides (characterized by the so-called argument of periastron \(\omega\)) is highest and therefore most readily observable. Figure 1: CPOC diagram for simulated binary systems resembling Y Cyg, showing apsidal motion due to tides. The solid lines are the times of the primary eclipse, and the dashed lines are the times of the secondary eclipse. The black curves represent the model with the nominal value of the apsidal constants. The red curves are for a system with doubled apsidal constants compared to the black curves, and the frequency of the apsidal precession is therefore doubled. The black line(s) at zero represent a model with apsidal constants of effectively 0, indicating no precession and adherence to a linear ephemeris. From Roy (1978), a form of the Lagrange planetary equations is \[\frac{da}{dt}=\frac{2}{na}\frac{\partial S}{\partial\chi} \tag{8}\] \[\frac{de}{dt}=\frac{1}{na^{2}e}\Bigg{[}(1-e)^{2})\frac{\partial S}{\partial\chi }-(1-e^{2})^{2}/\frac{\partial S}{\partial\omega}\Bigg{]} \tag{9}\] \[\frac{d\chi}{dt}=-\frac{(1-e^{2})}{na^{2}e}\frac{\partial S}{\partial e}- \frac{2}{na}\frac{\partial S}{\partial a} \tag{10}\] \[\frac{d\Omega_{N}}{dt}=\frac{1}{na^{2}(1-e^{2})^{1/2}\sin i}\frac{\partial S }{\partial i} \tag{11}\] \[\frac{d\omega}{dt}=\frac{(1-e^{2})^{1/2}}{na^{2}e}\frac{\partial S}{\partial e }-\frac{\cot i}{na^{2}(1-e^{2})^{1/2}}\frac{\partial S}{\partial i} \tag{12}\] \[\frac{di}{dt}=\frac{1}{na^{2}(1-e^{2})^{1/2}}\Bigg{[}\cot i\frac{\partial S}{ \partial\omega}-\csc i\frac{\partial S}{\partial\Omega_{N}}\Bigg{]}, \tag{13}\] where \(n^{2}a^{3}=G(m_{1}+m_{2})\) and \(\chi=-nT\), with \(n=2\pi/P\) being the mean daily motion. The eccentricity, nodal angle, and inclination are defined by \(e\), \(\Omega_{N}\), and \(i\), respectively. These expressions and the functional form of \(S\) therein determine which of the orbital elements will change over time, and how quickly they change. Expressions for \(\omega_{N}\) due to tidal distortions were first derived by Cowling (1938) and Sterne (1939). Following the revised formulation by Kopal (1978), the expression for the apsidal period \(U\) from tides is given by \[\frac{P}{U}=\frac{\omega_{N}}{2\pi}=c_{1}k_{21}+c_{2}k_{22}, \tag{14}\] where the \(k_{2i}\) factors are the second-order (quadrupolar) internal structure constants. These internal structure constants are related to the density distribution within the star, and they can be computed from stellar evolution models. 
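As a quick numerical check of the GR term, the short sketch below (our illustration, not ELC code) evaluates Equation (7) directly; the inputs are similar to the Y Cyg-like system adopted later in Table 1.

```python
# Sketch: GR contribution to the apsidal rate from Equation (7).
def omega_dot_gr(m1, m2, e, p_days):
    """GR apsidal advance in degrees per cycle (masses in M_sun, period in days)."""
    return 5.447127276e-4 * ((m1 + m2) / p_days) ** (2.0 / 3.0) / (1.0 - e**2)

# Y Cyg-like inputs (cf. Table 1): ~36 M_sun total, P ~ 3 d, e ~ 0.146.
rate = omega_dot_gr(17.790, 18.296, 0.1462, 2.9968537)
u_gr_years = 360.0 / rate * 2.9968537 / 365.25   # GR-only apsidal period
print(f"omega_dot_GR = {rate:.3e} deg/cycle, U_GR ~ {u_gr_years:.0f} yr")
```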
The weighting coefficients \(c_{i}\) are functions of the mass ratio, eccentricity, and relative radii with respect to the orbital separation, and they are given by \[c_{i}=r_{i}^{5}\Bigg{(}\frac{m_{3-i}}{m_{i}}[15g(e)+\gamma_{i}^{2}f(e)]+\gamma _{i}^{2}f(e)\Bigg{)}, \tag{15}\] where \(r_{i}\) is the fractional radius scaled to the orbital separation (\(R_{i}/a\)), the parameters \(\gamma_{i}\) are the ratio of the angular velocity of the axial rotation to that of orbital motion, and \(g(e)\) and \(f(e)\) are functions of the eccentricity, originally estimated as a power series in \(e\) by Sterne (1939), and provided by Bulut et al. (2017) as \[f(e)=\frac{1}{(1-e^{2})^{2}} \tag{16}\] \[g(e)=\frac{(8+12e^{2}+e^{4})f(e)^{2.5}}{8}. \tag{17}\] When we adopt pseudo-synchronous rotation in eccentric orbits, as done in Hut (1981), the maximum angular velocity at periastron can be well approximated as \[\gamma_{i}^{2}=\frac{(1+e)}{(1-e)^{3}}=\frac{\omega_{P}^{2}}{\Omega_{\rm k}^{ 2}}. \tag{18}\] A similar recent approach by Bulut et al. (2017) determines the \(c_{i}\) coefficients through a combination of the contributions of rotational distortion and tidal effects, \[c_{i}=r_{i}^{5}\Bigg{[}\Bigg{(}\frac{\Omega_{r,i}}{\Omega_{\rm k}}\Bigg{)}^ {2}\Bigg{(}1+\frac{m_{3-i}}{m_{i}}\Bigg{)}f(e)+\frac{15m_{3-i}}{m_{i}}g(e) \Bigg{]}. \tag{19}\] In the expression for \(c_{i}\) (Equation (19)), \(r_{i}\) is the fractional radius, \(m_{i}\) is the mass, \(\Omega_{r}\) is the angular velocity of the axial rotation for each component \(i\), \(\Omega_{\rm k}\) is the Keplerian angular velocity, and \(e\) is the orbital eccentricity. Thus, when the Keplerian parameters are known, then the stellar parameters including the rotation rates and the apsidal period can be calculated. The mean value of the internal structure constants can be derived from the observed value of \(\dot{\omega}\) using the expression \[\bar{k}_{2,\rm obs}=\frac{1}{c_{21}+c_{22}}\frac{P}{U}=\frac{1}{c_{21}+c_{22}} \frac{\dot{\omega}}{2\pi}, \tag{20}\] where the \(c_{2i}\) coefficients are the same functions of the eccentricity, mass, radius, and separation as in Equation (19). It is known that the observed mean value of the internal structure constants contains both contributions from the Newtonian and the relativistic effects of apsidal motion. When the constants are combined through the equation \[\bar{k}_{2,\rm theoo}=\frac{c_{21}k_{21,\rm theo}+c_{22}k_{22,\rm theo}}{c_{21}+c_{22}}, \tag{21}\] the weighted average coefficient \(\bar{k}_{2,\rm theo}\) can be determined. This weighted average is directly comparable with the observed value. However, this historical method fails to constrain the individual constants \(k_{21}\) and \(k_{22}\) as only the average value is returned, and this method does not work well with binaries with mass ratios \(q\!\approx\!1\). #### 2.2.1 Stellar Spin Axes We can break down the Newtonian component of apsidal motion into the contributions from tides and rotation: \[\dot{\omega}_{\rm total}=\dot{\omega}_{\rm GR}+\dot{\omega}_{\rm tidal,1}+\dot{\omega}_{\rm tidal,2}\] \[\phantom{\dot{\omega}_{\rm total}}+\dot{\omega}_{\rm rot,1}\dot{ \phi}_{1}+\dot{\omega}_{\rm rot,2}\dot{\phi}_{2}+\dot{\omega}_{\rm LTT}.\] If the spin axes are misaligned, there is an additional contribution to the tidal term in the expression for apsidal motion. The pointing direction of the angular momentum vector of the stars will affect the apsidal motion of the system. 
This is parameterized in two values, where \(\angle\alpha_{i}\) is the deflection in the plane of the orbit, and \(\angle\beta_{i}\) is the deflection in the plane of the sky. The combination of the two of these can result in any pointing direction for either the primary or secondary star. A summary of the theory behind misaligned spin axes and the subsequent affect on the secular motion of the apse is given in Shakura (1985). In particular, his Equation (4) gives the rate of change of \(\omega\), \[\frac{d_{\omega}}{dt}=\left(\frac{d\omega}{dt}\right)_{E}+\dot{ \omega}15g(\epsilon)\left[k_{1}r_{1}^{s}\frac{m_{2}}{m_{1}}+k_{2}r_{2}^{s}\frac{ m_{1}}{m_{2}}\right]\] \[-\frac{\dot{\omega}}{\sin^{2}f}(\epsilon)\left\{k_{1}r_{1}^{s} \left(\frac{\omega}{\dot{\omega}}\right)^{2}\right\}^{2}\left(1+\frac{m_{2}}{ m_{1}}\right)\] \[\times\left[\cos\alpha_{1}(\cos\alpha_{1}-\cos\beta_{1}\cos i)+ \frac{1}{2}\sin^{2}i(1-5\cos^{2}\alpha_{1})\right]\] \[+k_{2}r_{2}^{s}\frac{\omega_{2}}{\dot{\omega}}\left(\frac{\omega_{ 2}}{\dot{\omega}}\right)^{2}\left(1+\frac{m_{1}}{m_{2}}\right)\] \[\times\left[\cos\alpha_{2}(\cos\alpha_{2}-\cos\beta_{2}\cos i)+ \frac{1}{2}\sin^{2}i(1-5\cos^{2}\alpha_{2})\right]\right\}, \tag{22}\] where the \((d\omega/dt)_{E}\) is the GR contribution. If the spin axes of the stars are aligned, then \(\alpha_{1}=\alpha_{2}=0\) and \(\beta_{1}=\beta_{2}=i\), where \(i\) is the inclination of the orbital plane of the binary. ### Third-body Contribution Although we focus our attention in this paper on binaries with apsidal motion due to tidal or GR effects, for completeness, we mention the possible contribution from a third body, which can lead to observable changes in the CPOC diagram. The addition caused by the third body has to be accounted for before the apsidal motion can be analyzed. Because the binary orbits the barycenter of the triple system, the observed times of the primary and secondary eclipses can either be early or late because the distance between the observer and the binary changes periodically. The signal in the \(O-C\) diagram then superficially resembles a radial velocity curve. Irwin (1952) gives a formula to model this light travel time effect (LTTE) signal. In the massive binary DR Vul, the CPOC signals have been modeled by a combination of an LTTE orbit with an orbital period of \(\approx\)63 yr and apsidal motion of the inner binary with an apsidal period \(\approx\)36 yr (Wolf et al., 2019; Dimoff, 2021). If the third body is sufficiently close to the binary, the gravitational perturbations can lead to changes in the orbital period as well as in the precession of the orbit, and these changes can even dwarf the LTTE. Several binaries with large eclipse-timing variations (ETVs) due to a third body have been discovered using data from the Kepler and Transiting Exoplanet Survey Satellite (TESS) missions. Generally speaking, the ETV signal seen in the \(O-C\) diagram from dynamical interactions is complex. Borkovits et al. (2011) and Baycroft et al. (2023; their Appendix A) give expressions for \(\dot{\omega}\) including effects from a third body. Although these dynamical interactions can produce measurable effects in the \(O-C\) on both short and long timescales, the short-period low-amplitude variations are less important for apsidal motion studies. The long-period perturbations in the apsidal motion, however, can substantially alter the tidal and relativistic effects (see Naoz, 2016, or Borkovits et al., 2020 for a recent example). 
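To make the tidal bookkeeping concrete, the sketch below (ours, not ELC code) evaluates Equations (16)-(19) and the apsidal rate implied by Equation (14) for Y Cyg-like inputs drawn from Table 1. The semimajor axis comes from Kepler's third law, the conversion of 215.032 solar radii per au is our own, and we read the \(\Omega_{i}\) column of Table 1 as the rotation ratio \(\Omega_{r}/\Omega_{\rm k}\) appearing in Equation (19).

```python
# Sketch: tidal (Newtonian) apsidal rate from Equations (14) and (16)-(19).
def f_ecc(e):
    return 1.0 / (1.0 - e**2) ** 2                       # Eq. (16)

def g_ecc(e):
    return (8 + 12 * e**2 + e**4) * f_ecc(e) ** 2.5 / 8  # Eq. (17)

def c_coeff(r_frac, m_self, m_comp, omega_ratio, e):
    """Weighting coefficient c_i of Eq. (19); r_frac = R_i/a, omega_ratio = Omega_r/Omega_k."""
    return r_frac**5 * (omega_ratio**2 * (1 + m_comp / m_self) * f_ecc(e)
                        + 15 * (m_comp / m_self) * g_ecc(e))

# Y Cyg-like inputs from Table 1 (solar units, period in days).
m1, m2, e, p_d = 17.790, 18.296, 0.1462, 2.9968537
r1_sun, r2_sun, om_ratio, k21, k22 = 5.525, 5.784, 1.358, 0.0095, 0.0157

a_sun = 215.032 * ((m1 + m2) * (p_d / 365.25) ** 2) ** (1 / 3)  # Kepler III, in R_sun
c1 = c_coeff(r1_sun / a_sun, m1, m2, om_ratio, e)
c2 = c_coeff(r2_sun / a_sun, m2, m1, om_ratio, e)

omega_dot_tidal = 360.0 * (c1 * k21 + c2 * k22)   # deg/cycle, from P/U = c1*k21 + c2*k22
u_years = 360.0 / omega_dot_tidal * p_d / 365.25  # roughly 50 yr for these inputs
print(f"omega_dot_tidal = {omega_dot_tidal:.3e} deg/cycle, U ~ {u_years:.1f} yr")
```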
## 3 Modeling Eclipse Times Using ELC We now discuss our forward model, which can produce the times of primary and secondary eclipses given the stellar parameters and initial orbital parameters. It is based on the ELC code (Orosz & Hauschildt, 2000; Orosz et al., 2019). The code is general, and the light and velocity curves of a variety of binary and three-body systems can be modeled directly. In the mode that we describe here, the observed times of eclipse can be fitted, which is useful in situations without access to the light curves. The basic outline of a photodynamical code like ELC is relatively straightforward. Given the masses of two (or more) bodies and initial conditions (positions and velocities) for these bodies, the equations of motion are integrated, yielding the sky positions as a function of time. From the time series of the positions, it is easy to find the times when any two bodies are at conjunction. It can easily be checked if an eclipse occurs at or near conjunction when given the radius of each body. For convenience, ELC has a Keplerian-to-Cartesian converter (based on the algorithms given in Murray & Dermott, 1999), where six orbital elements (the period \(P\), the time of periastron passage \(T\), the eccentricity \(e\), the argument of periastron \(\omega\), the inclination \(i\), and the nodal angle \(\Omega\)) uniquely determine six phase-space coordinates for each star (e.g., \(x\), \(y\), \(z\), \(v_{x}\), \(v_{y}\), and \(y_{z}\)). The \(x\), \(y\) plane is the sky plane, and the \(z\)-axis points toward the observer. The inverse transformation (the Cartesian-to-Keplerian converter) is available, where six phase-space coordinates give a unique set of orbital elements. The equations of motion are the usual force equations for the two-body problems, plus additional force terms that arise from tidal distortions and force terms from the full GR treatment (Mardling & Lin, 2002), \[\ddot{r}=-\frac{G(m_{1}+m_{2})}{r^{3}}r+\mathbf{f}_{\text{QD},1}+\mathbf{f}_{\text{QD},2}+\mathbf{f}_{\text{rel}}\ +\mathbf{f}_{3}. \tag{23}\] The acceleration due to the quadrupole moment of body 1 is a combination of its spin distortion and tidal distortion produced by the presence of the companion, \[\dot{f}_{\text{QD},1} =\frac{R_{1}^{s}(1+\,m_{2}/m_{1})k_{21}}{r^{4}}\] \[\times\left(\left[5(\Omega_{1}\cdot\dot{\mathbf{r}})^{2}-\,\Omega_{1} ^{2}-\,\frac{6Gm_{2}}{r^{3}}\right]\dot{\mathbf{r}}\right.\] \[-\,2(\mathbf{\Omega}_{1}\cdot\dot{\mathbf{r}})\mathbf{\Omega}_{1}\Bigg{)}, \tag{24}\] where \(k_{21}\) is the apsidal motion constant of body 1, \(\Omega_{1}\) is the ratio of the rate of stellar rotation and the orbital rotation, and \(\dot{\mathbf{r}}\) is a unit vector in the direction of \(r\). A similar expression exists for the tidal distortion of body 2 due to body 1. The orbital acceleration due to the relativistic potential of the binary (Kidder, 1995) is given by \[\mathbf{f}_{\text{rel}} =-\frac{Gm_{12}}{r^{2}c^{2}}\] \[\times\left(\left[(1+3\eta)\dot{\mathbf{r}}\cdot\dot{\mathbf{r}}-\,2(2+\, \eta)\frac{Gm_{12}}{r}-\frac{3}{2}\eta r^{2}\right]\dot{\mathbf{r}}\right.\] \[-\,2(2-\,\eta)\dot{r}\dot{\mathbf{r}}\Bigg{)}, \tag{25}\] where \(m_{12}=m_{1}+m_{2}\), \(c\) is the speed of light, and \(\eta=m_{1}m_{2}/(m_{12})^{2}\). The force due to a third body (where applicable) is given by \[\mathbf{f}_{3}=Gm_{3}(\mathbf{\beta}_{23}-\mathbf{\beta}_{13}), \tag{26}\] where \(\mathbf{\beta}_{ij}=\eta_{i}/\eta_{ij}^{3}\) is a ratio of the relative positions of the bodies in the system. 
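The following is a minimal sketch, not the ELC implementation: it keeps only the point-mass term and the relativistic correction of Equations (23) and (25), written in SI units and in the standard Kidder (1995) form, drops the quadrupole/tidal terms \(\mathbf{f}_{\rm QD}\) and any third body, and uses a classical fourth-order Runge-Kutta step in place of the twelfth-order Gaussian Runge-Kutta integrator used in ELC.

```python
# Sketch of the relative-orbit acceleration with point-mass + 1PN (GR) terms only.
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.99792458e8     # m s^-1

def acceleration(r_vec, v_vec, m1, m2):
    m12 = m1 + m2
    eta = m1 * m2 / m12**2
    r = np.linalg.norm(r_vec)
    n_hat = r_vec / r
    rdot = np.dot(r_vec, v_vec) / r
    v2 = np.dot(v_vec, v_vec)
    a_newton = -G * m12 / r**2 * n_hat
    # 1PN correction for the relative orbit, cf. Eq. (25).
    a_1pn = -(G * m12 / (C**2 * r**2)) * (
        ((1 + 3 * eta) * v2 - 2 * (2 + eta) * G * m12 / r - 1.5 * eta * rdot**2) * n_hat
        - 2 * (2 - eta) * rdot * v_vec
    )
    return a_newton + a_1pn

def rk4_step(state, dt, m1, m2):
    """One classical RK4 step for state = (r_vec, v_vec) of the relative orbit."""
    def deriv(s):
        r_vec, v_vec = s
        return np.array([v_vec, acceleration(r_vec, v_vec, m1, m2)])
    state = np.asarray(state, dtype=float)
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```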
To find the positions of all the bodies at any given time, the equations of motion are integrated using a symplectic twelfth-order Gaussian Runge-Kutta (GRK) integration routine based on the methods from Hairer et al. (2006). In our implementation, one can solve only the purely Newtonian equations (the forces are only from point masses), the Newtonian equations plus the tidal forces, the Newtonian equations plus the GR forces, or the equations with all three contributions. As noted previously, if the additional force terms due to tides or due to GR effects are included, the orbit will not be closed, and the orbital elements will change with time. The Cartesian-to-Keplerian converter can be used to compute time series of the orbital elements. Of interest here is the time series for the argument of periastron \(\omega\)(\(t\)). ## 4 Comparing Derived Apsidal Rates with Analytic Formula The analytic formula discussed in Section 2 gives the apsidal rate, namely \(\dot{\omega}\). For a model that can produce a time series for \(\omega\)(\(t\)), the rate of change of \(\omega\) (e.g., \(\dot{\omega}\)) should be easy to compute and can be compared to the analytic formula. However, in this case, there are some subtle issues. Figure 2 displays the argument of periastron (\(\omega\)) over time for a system resembling Y Cyg considering tides and GR independently. For this calculation, \(\omega_{0}\!=\!1^{\circ}\). The system is precessing, and over the course of 1000 days, \(\omega\) changed by about \(20^{\circ}\) due to tidal effects. However, when the graph is examined more closely, small small-scale oscillations become visible with a period equal to the binary orbital period. The amplitude of these oscillations depends on the apsidal period, and in the limit where there is no apsidal motion (e.g., when \(\dot{\omega}=0\)), the oscillations must vanish. At each time step, six phase-space coordinates give a unique set of orbital parameters (including \(\omega\)). However, because the underlying motion is not Keplerian, higher-order oscillations are noticeable. The equations of motion with the corrections provide an orbit-averaged force. The effect of that force depends on the instantaneous separation of the bodies, so the actual speed of the star changes in a slightly different manner relative to the average Keplerian. Given these oscillations, the value of \(\dot{\omega}\) from a given model is computed by using a linear fit to the \(\omega\)(\(t\)) curve. More reliable values of omega dot are obtained from averaging over a longer time-span that contains many orbital periods. From a collection of eclipsing binaries exhibiting rapid apsidal motion (Claret & Willems 2002), we select 15 prototypical binary systems to use as a test set for our model. We simulate the orbital motion of these systems each for a set number of days relative to an apsidal cycle, taking into account their physical parameters including the mass, radius, apsidal constants, and orbital spin-axis orientations of both components. A summary of the input parameters for each system is presented in Table 1 (for assumed rotationally aligned systems). We do not claim that these are the actual physical parameters of the selected systems, but they are similar enough to their physical values to represent realistic binaries. For each of the 15 binaries, we ran 200,000 forward models for three situations: (i) GR perturbations only, (ii) tidal Figure 2: Advance of the argument of periastron over time for a Y Cyg-like system. 
The top panels show the linear progression of \(\omega\) with time considering tides and GR independently (left and right, respectively). In the bottom panels, the red line is the linear regression. The observed oscillations from the linear trend in the bottom panels occur on an orbital timescale (for Y Cyg \(\sim\) 3 days). The shapes of the curves are sinusoidal to first order, and the amplitudes are related to the length of the apsidal cycle. perturbations with aligned spin axes, and (iii) tidal perturbations with misaligned spin axes. The initial value of \(\omega\) was always \(1^{\circ}\), and the values of the other orbital elements and the stellar masses and radii were drawn from normal distributions that represent typical measurement uncertainties on these quantities. The model values of \(\dot{\omega}\) for each binary system are determined from a linear regression of \(\omega(t)\), and are compiled in Table 2. We compare these values to results from the respective analytic formula as a fractional percent difference, and discuss each of these three cases in turn. ### Accuracy of the General Relativity Contribution A histogram of the error distributions for the GR case for each of our selected 15 systems is shown in Figure 3, limited to the range \(-0.002\%\,\mathrm{to}\,0.002\%\). While the combined distributions are not centered exactly on zero, each individual distribution is approximately symmetric. Systems with lower eccentricity (e.g., CW Cep, EM Car, V478 Cyg, or U Oph) exhibit wider histograms, as well as peaks or horns close to the ends of the distributions. This is the result of the near-ambiguity in the value of the argument of periastron \(\omega\) in a nearly circular orbit. When only GR is taken into account, the model apsidal rates agree quite well with the formula given by Equation (7). The percent errors are \(\sim\!\!5\times 10^{-4}\) on average. We note at this level, there are not enough digits in the coefficients given in Equation (7) from Gimenez (1985) for a meaningful comparison with our models. 
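For reference, extracting \(\dot{\omega}\) from a model \(\omega(t)\) series by linear regression takes only a few lines of NumPy; the sketch below is ours, with placeholder variable names, and simply unwraps the angle before fitting so that crossings of \(360^{\circ}\) do not bias the slope.

```python
# Sketch: estimate omega-dot (deg/cycle) by a linear fit to a model omega(t) series.
import numpy as np

def apsidal_rate_deg_per_cycle(t_days, omega_deg, p_orb_days):
    omega_rad = np.unwrap(np.radians(omega_deg))      # remove 360-degree wraps
    slope_rad_per_day = np.polyfit(t_days, omega_rad, 1)[0]
    return np.degrees(slope_rad_per_day) * p_orb_days

# The short-period oscillations seen in Figure 2 average out when the fitted span
# covers many orbital cycles, so longer baselines give more reliable rates.
```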
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline System & \(P\) (days) & \(e\) & \(i\) & \(m_{1}\), \(m_{2}\) (\(M_{\odot}\)) & \(r_{1}\), \(r_{2}\) (\(R_{\odot}\)) & \(\Omega_{1}\), \(\Omega_{2}\) & \(\kappa_{21}\), \(\kappa_{22}\) & References \\ \hline AG Per & 2.0288769 & 0.0709 & 81.40 & 5.35 & 2.995 & 1.156 & 0.0072 & 0,1 \\ & & & 4.489 & 1.1497 & 1.156 & 0.0072 & & \\ \hline DI Her & 10.550170 & 0.4895 & 89.30 & 5.17 & 2.681 & 1.000 & 0.0077 & 27,28,29,30,32 \\ & & & 4.524 & 2.478 & 1.000 & 0.0077 & & \\ \hline EM Car & 3.4149023 & 0.0119 & 81.50 & 21.394 & 8.371 & 0.999 & 0.0080 & 0,2,22,35 \\ & & & 22.883 & 9.347 & 1.048 & 0.0029 & & \\ \hline IQ Per & 1.7436292 & 0.0763 & 89.30 & 3.504 & 2.445 & 1.169 & 0.0044 & 0,3,4,5,6 \\ & & & 1.730 & 1.499 & 1.169 & 0.0044 & & \\ \hline QX Car & 4.4781279 & 0.279 & 85.70 & 6.240 & 4.291 & 1.000 & 0.0071 & 0,7,8 \\ & & & 8.460 & 4.053 & 1.000 & 0.0071 & & \\ \hline V1647 Sgr & 3.2828505 & 0.4142 & 90.00 & 2.184 & 1.832 & 1.000 & 0.0042 & 0,9,10,11 \\ & & & 1.967 & 1.667 & 1.000 & 0.0042 & & \\ \hline V526 Sgr & 1.9194849 & 0.2199 & 89.10 & 2.206 & 1.880 & 1.712 & 0.0038 & 22,23,24,35 \\ & & & 1.680 & 1.820 & 1.269 & 0.0020 & & \\ \hline Y Cyg & 2.9968537 & 0.1462 & 86.47 & 17.790 & 5.525 & 1.358 & 0.0095 & 0,12,13,14,35 \\ & & & 18.296 & 5.784 & 1.358 & 0.0157 & & \\ \hline CW Cep & 2.7294954 & 0.0292 & 81.80 & 12.932 & 5.521 & 1.029 & 0.0053 & 0,15,16,17,35 \\ & & & 11.898 & 5.095 & 1.019 & 0.0125 & & \\ \hline DR Vul & 2.2512153 & 0.0945 & 88.30 & 13.203 & 4.801 & 0.796 & 0.0063 & 25,26,35 \\ & & & 12.189 & 4.336 & 0.992 & 0.0150 & & \\ \hline GG Lup & 1.8496919 & 0.1546 & 78.00 & 4.106 & 2.380 & 1.382 & 0.0070 & 0,18 \\ & & & 2.504 & 1.726 & 1.382 & 0.0070 & & \\ \hline U Oph & 1.6773459 & 0.00 & 87.86 & 5.090 & 3.440 & 1.000 & 0.0053 & 0,5,17,19 \\ & & & 4.580 & 3.050 & 1.000 & 0.0053 & & \\ \hline V478 Cyg & 2.881 & 0.0158 & 78.00 & 16.60 & 7.430 & 1.032 & 0.0056 & 29,31,33,34 \\ & & & 16.30 & 7.430 & 1.032 & 0.0056 & & \\ \hline V760 Sco & 1.6697772 & 0.0113 & 85.00 & 3.921 & 2.852 & 1.023 & 0.0044 & 0,20,11 \\ & & & 2.545 & 1.854 & 1.023 & 0.0044 & & \\ \hline \(\zeta\) Phe & 1.66977 & 0.0116 & 89.30 & 3.908 & 2.835 & 3.346 & 0.0077 & 0,21,15,7 \\ & & & 2.536 & 1.885 & 3.436 & 0.0077 & & \\ \hline \end{tabular} Note.: Input data were collected from (0) Claret & Willems (2002), (1) Gimenez & Clausen (1994), (2) Andersen & Clausen (1989), (3) Burns et al. (1996), (4) Caton & Burns (1993), (5) Andersen (1991), (6) Lacy & Frueh (1985), (7) Andersen et al. (1983), (8) Gimenez et al. (1986), (9) Clausen et al. (1977), (10) Andersen & Gimenez (1985), (11) Wolf (2000), (12) Hill & Holmgren (1995), (13) Simon et al. (1994), (14) Holmgren et al. (1995), (15) Claret & Gimenez (1993), (16) Claret & Gimenez (1991), (17) Popper & Hill (1991), (18) Andersen et al. (1993), (19) Baxter (1986), (20) Andersen et al. (1985), (21) Clausen (1996), (22) Wolf & Zejda (2005), (23) Lacy (1997), (24) Lacy (1993), (25) Bozkurt & Deglimeen (2007), (26) Wolf et al. (2019), (27) Albrecht et al. (2009), (28) Popper (1982), (29) Marcussen & Abrecht (2022), 30) Anderson & Winn (2022), (31) Claret et al. (2021), (32) Claret & Giménez (2010), (34) Pavlovski et al. (2018), and (35) Dimoff (2021). 
\end{table} Table 1: Adopted Input Orbital and Physical Parameters for the Selected Binaries Exhibiting Apsidal Motion ### Accuracy of the Tidal Contribution Histograms of the error distributions for each of our selected 15 systems considering the tidal contribution are shown in Figure 4 for the aligned case and in Figure 5 for the misaligned case. The analytic approach to modeling tides taken by Sterne (1939; e.g., Equation (14)) works well. In this case, we find that the relative errors are acceptably small, within 0.4%. For the misaligned case (Equation (22)), the percent errors are larger, \(\sim\)2%. the Schwarzschild metric, and agrees with Barker & O'Connell (1975b) when appropriate substitutions are made. The agreement between Equation (5) and our model indicates that the force equations account quite well for the GR perturbations. While not as good as GR, our modeled apsidal rates for the tidal contribution with an aligned stellar spin axes are accurate compared to the analytic formula (Equation (14)). Again, the differences are systematically positive, with a median percent difference value of \(\approx\)0.013. Unlike the GR case, there is an approximation in the formula for \(\dot{\omega}\). Sterne (1939) uses the \(n\!=\!2\) (second-order) term in the tidal potential, which is proportional to \((r^{2}/a^{3})\). The additional force terms in our model are presumably derived from a treatment that uses the \(n\!=\!2\) term in the disturbing potential. Apparently, in this case, the solutions to the differential equations do not reproduce the analytic results as faithfully as they do in the GR case. According to Sterne (1939), when the third-order term is included in the tidal potential, \(\dot{\omega}\) scales with \((r/a)^{7}\). For the binaries considered here, \((r/a)^{7}\) is only a few percent of \((r/a)^{5}\) in Equation (19). When the axial misalignment of the stars is taken into account, the accuracy is not as good, although the histograms of the percent differences are closer to symmetric about zero. For each star, we establish an axis deflection \(\alpha_{i}\) as the spin-orbit inclination, and axis \(\beta_{i}\) as the inclination with respect to the plane of the sky (Equation (22)). These axial inclination parameters for each star can define any spin-axis orientation for a binary. For a system where the spin axes of the stars are aligned with the angular momentum axis of the orbit to within about 15\({}^{\circ}\), Equation (22) is accurate to within a few percent. As seen in Figure 6, the largest relative errors occur at mutual inclinations of \(\sim\)45\({}^{\circ}\), representing the maximum allowed asymmetry as the tidal bulge of the companion changes hemispheres. Furthermore, the relative errors are minimized at mutual inclinations of 0\({}^{\circ}\) and 90\({}^{\circ}\), where in the former case, the spin axis is perpendicular to the plane of the orbit, and in the latter case, the spin axis is within the plane of the orbit. One simulated system each of these regimes is plotted in Figure 7. We fit a linear regression to the line of \(\omega\) versus \(t\) and compute the residuals. In systems close to the aligned case (mutual inclination \(\approx\)0), a linear trend is recovered, visible in the top left panel of Figure 7. For extreme cases, the \(\omega(t)\) is highly nonlinear. 
In these extreme cases, where the mutual inclination \(\approx\!0\) (where the computed relative error compared to the analytic formula is large), we find a second-order trend with the \(\omega(t)\) curve. This can be seen in the top right panel of Figure 7. Mardling & Lin (2002) point out that if the spin axis of one of the stars is not aligned, there will be a torque on that star. If there is a torque on that star, then the direction of the spin axis will change (\(\dot{\Omega}_{r}\propto r\times\dot{f}_{\rm QD}\)). The expression for the force given in Equation (24) contains \(\Omega_{r}\), so that in principle, if that vector changes with time, it could add complications to the system. However, our numerical experiments show that \(\dot{\Omega}_{r}\) is very small in most practical cases. Furthermore, Shakura (1985) shows that the inclination \(i\) and nodal angle \(\Omega_{N}\) also change Figure 7: Residuals from a linear fit to the advance of the argument of periastron in two cases of axial misalignment for a Y Cyg-like system. The left panels represents a system close to alignment; the axial deflections result in a slightly different pattern of the oscillations, but maintain a linear trend. The right panels represent a system far from axial alignment; the oscillations have more structure, and the overall trend is nonlinear. with time, and gives corresponding formulae for \(di/dt\) and \(d\Omega_{N}/dt\). Given the changes in the other orbital elements, it is not surprising that the \(\omega\) versus \(t\) curve becomes nonlinear on a shorter timescale. Indeed, Equation (22) for \(d\omega/dt\) includes a term for the inclination. If \(di/dt\) is nonzero, there are additional contributions to \(d\omega/dt\). ## 6 Conclusion The precession of the line of apsides in a binary orbit is driven by perturbations from GR, tidal, and third-body interactions. These perturbations depend on the physical and orbital properties of the systems, including the orbital period, eccentricity, stellar mass, internal structure constants, and relative rotation speeds of each star, and it is affected by the angular deflection of the stellar rotation axes. We investigate the effectiveness of the ELC software in modeling the apsidal motion of 15 realistic binary systems. Modifying Newton's equations of orbital motion to include dynamical perturbations from tides, GR, axial misalignment, and possible third-body effects, we run a set of forward models and compute the rates of apsidal motion, and compare them to the corresponding analytical formulae. Our results indicate an extremely good agreement between for the GR contribution, and a good agreement between our modeling and theoretical predictions for the tidal contribution to the apsidal motion. Despite this agreement, inconsistencies remain in the case considering tides with misaligned rotation axes. These functional degeneracies between derived orbital and physical parameters will be addressed in a follow-up study. This approach to numerically modeling the tidal and GR forces is a fast and precise way to model apsidal motion in binary (or higher-order multiple) stellar systems. We plan to apply this technique to real eclipse-timing data and compare our results in a future work, and we aim to compute refined orbital parameters, internal structure constants, and stellar rotation axis alignments. ## Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 
101008324 (ChETEC-INFRA). Funding support also comes from the State of Hessen within the Research Cluster ELEMENTS (Project ID 500/10.006). This work made use of the EMMY supercomputer, provided by the North German Supercomputing Alliance (Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen, HLRN), hosted at Georg-August-Universität Göttingen. _Software:_ ELC (Orosz & Hauschildt, 2000; Orosz et al., 2019), NumPy (Harris et al., 2020), Matplotlib (Hunter, 2007). ## ORCID iDs Alexander J. Dimoff: https://orcid.org/0009-0007-3458-0401 Jerome A. Orosz: https://orcid.org/0000-0001-9647-2886
2307.07448
Depth-bounded Epistemic Logic
Epistemic logics model how agents reason about their beliefs and the beliefs of other agents. Existing logics typically assume the ability of agents to reason perfectly about propositions of unbounded modal depth. We present DBEL, an extension of S5 that models agents that can reason about epistemic formulas only up to a specific modal depth. To support explicit reasoning about agent depths, DBEL includes depth atoms Ead (agent a has depth exactly d) and Pad (agent a has depth at least d). We provide a sound and complete axiomatization of DBEL. We extend DBEL to support public announcements for bounded depth agents and show how the resulting DPAL logic generalizes standard axioms from public announcement logic. We present two alternate extensions and identify two undesirable properties, amnesia and knowledge leakage, that these extensions have but DPAL does not. We provide axiomatizations of these logics as well as complexity results for satisfiability and model checking. Finally, we use these logics to illustrate how agents with bounded modal depth reason in the classical muddy children problem, including upper and lower bounds on the depth knowledge necessary for agents to successfully solve the problem.
Farid Arthaud, Martin Rinard
2023-07-11T07:01:35Z
http://arxiv.org/abs/2307.07448v1
# Depth-bounded Epistemic Logic ###### Abstract Epistemic logics model how agents reason about their beliefs and the beliefs of other agents. Existing logics typically assume the ability of agents to reason perfectly about propositions of unbounded modal depth. We present **DBEL**, an extension of **S5** that models agents that can reason about epistemic formulas only up to a specific modal depth. To support explicit reasoning about agent depths, **DBEL** includes depth atoms \(E_{a}^{d}\) (agent \(a\) has depth exactly \(d\)) and \(P_{a}^{d}\) (agent \(a\) has depth at least \(d\)). We provide a sound and complete axiomatization of **DBEL**. We extend **DBEL** to support public announcements for bounded depth agents and show how the resulting **DPAL** logic generalizes standard axioms from public announcement logic. We present two alternate extensions and identify two undesirable properties, amnesia and knowledge leakage, that these extensions have but **DPAL** does not. We provide axiomatizations of these logics as well as complexity results for satisfiability and model checking. Finally, we use these logics to illustrate how agents with bounded modal depth reason in the classical muddy children problem, including upper and lower bounds on the depth knowledge necessary for agents to successfully solve the problem. ## 1 Introduction Epistemic logics model how agents reason about their beliefs and the beliefs of other agents. These logics generally assume the ability of agents to perfectly reason about propositions of unbounded modal depth, which can be seen as unrealistic in some contexts [8, 19]. To model agents with the ability to reason only to certain preset modal depths, we extend the syntax of epistemic logic **S5**[9] to depth-bounded epistemic logic (DBEL). The **DBEL** semantics assigns each agent a depth in each state. For an agent to know a formula \(\psi\) in a given state of a model, the assigned depth of the agent must be at least the modal depth of \(\psi\), i.e. \(d\left(\psi\right)\). To enable agents to reason about their own and other agents' depths, **DBEL** includes **depth atoms**\(E_{a}^{d}\) (agent \(a\) has depth exactly \(d\)) and \(P_{a}^{d}\) (agent \(a\) has depth at least \(d\)). For example, the formula \(K_{a}(P_{b}^{5}\to K_{b}p)\) expresses the fact that, "agent \(a\) knows that whenever agent \(b\) is depth at least 5, agent \(b\) knows the fact \(p\)." Depth atoms enable agents to reason about agent depths and their consequences in contexts in which each agent may have complete, partial, or even no information about agent depths (including its own depth). We provide a sound and complete axiomatization of **DBEL** (Section 2), requiring a stronger version of the Lindenbaum lemma which ensures each agent can be assigned a depth (proven in Appendix B). Its satisfiability problem for two or more agents is immediately \(\mathsf{PSPACE}\)-hard (because **DBEL** includes **S5** as a syntactic fragment). We provide a depth satisfaction algorithm for **DBEL** in \(\mathsf{PSPACE}\) (Section 5), establishing that the **DBEL** satisfiability problem is \(\mathsf{PSPACE}\)-complete for two or more agents. Public announcement logic (PAL) [10] extends epistemic logic with public announcements. 
PAL includes the following public announcement and knowledge axiom (PAK), which characterizes agents' knowledge after public announcements, \[[\varphi]K_{a}\psi\leftrightarrow(\varphi\to K_{a}[\varphi]\psi).\] (PAK) We extend **DBEL** to include public announcements (Section 3). The resulting depth-bounded public announcement logic (DPAL) provides a semantics for public announcements in depth-bounded epistemic logic, including a characterization of how agents reason when public announcements exceed their epistemic depth. We prove the soundness of several axioms that generalize (PAK) to **DPAL**, first in a setting where each agent has exact knowledge of its own depth, then in the general setting where each agent may have partial or even no knowledge of its own depth. We provide a sound axiom set for **DPAL** as well as an upper bound on the complexity of its model checking problem 1 Footnote 1: Arthaud and Rinard [4] present a lower bound for this problem, as well as additional results, proofs and content. We also present two alternate semantics that extend **DBEL** with public announcements (Section 3.3). The resulting logics verify simpler generalizations of (PAK) in the context of depth-bounded agents, but each has one of two undesirable properties that we call _annesia_ and _knowledge leakage_. Amnesia causes agents to completely forget about all facts they knew after announcements, whereas knowledge leakage means shallow agents can infer information from what deeper agents have learned from a public announcement. **DPAL** suffers from neither of these two undesirable properties. We provide a sound and complete axiomatization of the first of the two alternate semantics (Section 4). We also prove the -completeness of its satisfiability problem and show that its model checking problem remains -complete (Section 5). Finally, we use these logics to illustrate how agents with bounded depths reason in the muddy children reasoning problem [9]. We prove a lower bound and an upper bound on the structure of knowledge of depths required for agents to solve this problem (Section 6). Related workLogical omniscience, wherein agents are capable of deducing any fact deducible from their knowledge, is a well-known property of most epistemic logics. The ability of agents to reason about facts to unbounded modal depth is a manifestation of logical omniscience. Logical omniscience has been viewed as undesirable or unrealistic in many contexts [9] and many attempts have been made to mitigate or eliminate it [9, 16, 18]. To the best of our knowledge, only Kaneko and Suzuki [12] below have involved modal depth in the treatment of logical omniscience in epistemic logic. Kaneko and Suzuki [12] define the logic of shallow depths, which relies on a set of chains of agents for which chains of modal operators can appear. A subset restricts chains of modal operators along which agents can perform deductions about other agents' knowledge. An effect of bounding agents' depths in **DPAL** is creating a set of allowable chains of modal operators. Unlike, the bound on an agent's depth is not global in **DPAL**, it can also be a function of the worlds in the Kripke possible-worlds semantics [9]. In particular,, unlike, enables agents to reason about their own depth, the depth of other agents, and (recursively) how other agents reason about agent depths. **DPAL** also includes public announcements, which to the best of our knowledge has not been implemented in. 
Kline [13] uses to investigate the 3-agent muddy children problem, specifically by deriving minimal epistemic structures that solve the problem. The proof relies on a series of belief sets with atomic updates called "resolutions," with the nested length of the chains in providing epistemic bounds on the required reasoning. **DPAL**, in contrast, includes depth atoms and public announcements as first-class features. We leverage these features to directly prove theorems expressing that for muddy children, (Theorem 6.2) if the problem is solvable by an agent, that agent must have depth at least and know that it has depth at least (this theorem provides a lower bound on the agent depths required to solve the problem) and (Theorem 6.1) if an agent has depth at least, knows it, knows another agent is depth at least, knows that the other agent knows of another agent of depth,, then it can solve the problem (this theorem provides an upper bound on the agent depths necessary to solve the problem). Our depth bounds match the depth bounds of Kline [13] for 3 agents (Theorems 3.1 and 3.3 in [13]), though our bounds also provide conditions on recursive knowledge of depths for the agents as described above. Dynamic epistemic logic (DEL) [7, 19] introduces more general announcements. Private announcements are conceptually similar to public announcements in **DPAL** in that they may be perceived by only some of the agents. In DEL, model updates depend only on the relation between states in the initial model and the relations in the action model. But in **DPAL**, model updates must also take into account the agent depths in the entire connected components of each state (see Definition 3). Resource-bounded agents in epistemic logics have been explored by Balbiani et. al [6] (limiting perceptive and inferential steps), Artemov and Kuznets [3] (limiting the computational complexity of inferences), and Alechina et. al [2] (bounding the size of the set of formulas an agent may believe at the same time and introducing communication bounds). Alechina et. al [2] also bound the modal depth of formulas agents may believe, but all agents share the same depth bound and they leave open the question of whether inferences about agent depth or memory size could be implemented, which **DPAL** does. ## 2 Depth-bounded epistemic logic The modal depth \(d\left(\varphi\right)\) of a formula \(\varphi\), defined as the largest number of modal operators on a branch of its syntactic tree, is the determining factor of the complexity of a formula in depth-bounded epistemic logic (DBEL). Modal operators are the main contributing factor to the complexity of model checking a formula; the recursion depth when checking satisfiability of a formula is equivalent to its modal depth [15]; and bounding modal depth often greatly simplifies the complexity of the satisfiability problem in epistemic logics [17]. Humans are believed to reason within limited modal depth [8, 20]. We extend the syntax of classical epistemic logic by assigning to each agent \(a\) in a set of agents \(\mathcal{A}\) a depth \(d(a,s)\) in each possible world \(s\). The language also includes **depth atoms**\(E_{a}^{d}\) and \(P_{a}^{d}\) to respectively express that agent \(a\) has depth exactly \(d\) and agent \(a\) has depth at least \(d\). To know a formula \(\varphi\), agents are required to be at least as deep as \(d\left(\varphi\right)\) and also know that the formula \(\varphi\) is true in the usual possible-worlds semantics sense [9]. 
We translate the classical modal operator \(K_{a}\) from multi-agent epistemic logic into the operator \(K_{a}^{\infty}\) with the same properties; therefore, \(K_{a}^{\infty}\varphi\) can be interpreted as "agent \(a\) would know \(\varphi\) if \(a\) were of infinite depth". The operator \(K_{a}\varphi\) will now take the meaning described above, i.e. \(P_{a}^{d\left(\varphi\right)}\wedge K_{a}^{\infty}\varphi\). **Definition 1**.: The language of **DPAL** is inductively defined as, for all agents \(a\in\mathcal{A}\) and depths \(d\in\mathbb{N}\), \[\mathcal{L}^{\infty}:\quad\varphi::=p\mid E_{a}^{d}\mid P_{a}^{d}\mid\neg\varphi\mid\varphi\wedge\varphi\mid K_{a}\varphi\mid K_{a}^{\infty}\varphi\mid[\varphi]\varphi.\] The \(K_{a}^{\infty}\) operator is used mainly as a tool in axiomatization proofs; we call \(\mathcal{L}\) the fragment of our logic consisting of formulas without any \(K_{a}^{\infty}\) operators, which will be used in most of our theorems. We further define \(\mathcal{H}^{\infty}\) and \(\mathcal{H}\) to respectively be the syntactic fragments of \(\mathcal{L}^{\infty}\) and \(\mathcal{L}\) without public announcements \([\varphi]\psi\). The modal depth \(d\) of a formula in \(\mathcal{L}^{\infty}\) is inductively defined as, \[d\left(p\right)=d\left(E_{a}^{d}\right)=d\left(P_{a}^{d}\right)=0\qquad d\left(\neg\varphi\right)=d\left(\varphi\right)\qquad d\left([\varphi]\psi\right)=d\left(\varphi\right)+d\left(\psi\right)\] \[d\left(\varphi\wedge\psi\right)=\max\left(d\left(\varphi\right),d\left(\psi\right)\right)\qquad d\left(K_{a}\varphi\right)=1+d\left(\varphi\right)\] We defer treatment of public announcements \([\varphi]\psi\) to Section 3. We work in the framework of **S5** [9], assuming each agent's knowledge relation to be an equivalence relation, unless otherwise specified--however, our work could be adapted to weaker epistemic logics [9] by removing the appropriate axioms. **Definition 2**.: A model in **DBEL** is defined as a tuple \(M=(\mathcal{S},\sim,V,d)\) where \(\mathcal{S}\) is a set of states, \(V:\mathcal{S}\to 2^{\mathcal{P}}\) is the valuation function for atoms and \(d:\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{N}\) is a depth assignment function. For each agent \(a\), \(\sim_{a}\) is an equivalence relation on \(\mathcal{S}\) modeling which states are seen as equivalent in the eyes of \(a\). The semantics are inductively defined over \(\mathcal{H}^{\infty}\) by, \[(M,s)\models p\iff p\in V(s)\qquad\quad(M,s)\models E_{a}^{d}\iff d(a,s)=d\qquad\quad(M,s)\models P_{a}^{d}\iff d(a,s)\geq d\] \[\qquad(M,s)\models\neg\varphi\iff(M,s)\not\models\varphi\qquad\quad(M,s)\models\varphi\land\psi\iff(M,s)\models\varphi\text{ and }(M,s)\models\psi\] \[\qquad(M,s)\models K_{a}^{\infty}\varphi\iff(\forall s^{\prime},\,s\sim_{a}s^{\prime}\implies(M,s^{\prime})\models\varphi)\qquad(M,s)\models K_{a}\varphi\iff(M,s)\models P_{a}^{d(\varphi)}\wedge K_{a}^{\infty}\varphi.\] Note that this definition does not require agents to have any (exact or approximate) knowledge of their own depth. On the other hand, it does not prohibit agents from having exact knowledge of their own depths; for instance, we could model each agent carrying out some 'meta-reasoning' about its own depth (Footnote 2), leading each agent to know its own depth exactly. These models are a subset of the class of the models we consider, which we study in more detail in Section 3.1. Footnote 2: For instance deducing \(P_{a}^{d(\varphi)}\) from the fact that it knows \(\varphi\), or deducing \(\neg P_{a}^{\mu}\) from the fact that it does not know \(K_{a}^{\mu}\top\). 
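To make Definition 2 concrete, the following is a minimal Python sketch (our own illustrative class and function names, not the authors' published library) of the syntax of \(\mathcal{H}^{\infty}\), the modal depth function, and the satisfaction relation, including the clause \((M,s)\models K_{a}\varphi\iff(M,s)\models P_{a}^{d(\varphi)}\wedge K_{a}^{\infty}\varphi\). The depth of \(K_{a}^{\infty}\varphi\) is assumed here to be \(1+d(\varphi)\), mirroring \(K_{a}\).

```python
# Minimal sketch of DBEL formulas and the Definition 2 semantics.
from dataclasses import dataclass
from typing import Dict, Set, Tuple, Union

@dataclass(frozen=True)
class Atom: name: str                      # propositional atom p
@dataclass(frozen=True)
class ExactDepth: agent: str; d: int       # E_a^d
@dataclass(frozen=True)
class AtLeastDepth: agent: str; d: int     # P_a^d
@dataclass(frozen=True)
class Not: sub: "Formula"
@dataclass(frozen=True)
class And: left: "Formula"; right: "Formula"
@dataclass(frozen=True)
class K: agent: str; sub: "Formula"        # depth-bounded knowledge K_a
@dataclass(frozen=True)
class KInf: agent: str; sub: "Formula"     # K_a^infinity

Formula = Union[Atom, ExactDepth, AtLeastDepth, Not, And, K, KInf]

def modal_depth(f: Formula) -> int:
    if isinstance(f, (Atom, ExactDepth, AtLeastDepth)): return 0
    if isinstance(f, Not): return modal_depth(f.sub)
    if isinstance(f, And): return max(modal_depth(f.left), modal_depth(f.right))
    if isinstance(f, (K, KInf)): return 1 + modal_depth(f.sub)
    raise TypeError(f)

@dataclass
class Model:
    states: Set                              # set of states
    rel: Dict[str, Set[Tuple]]               # per-agent accessibility relation
    val: Dict                                # state -> set of true atom names
    depth: Dict[Tuple, int]                  # (agent, state) -> depth d(a, s)

def sat(M: Model, s, f: Formula) -> bool:
    """(M, s) |= f, following Definition 2."""
    if isinstance(f, Atom): return f.name in M.val[s]
    if isinstance(f, ExactDepth): return M.depth[(f.agent, s)] == f.d
    if isinstance(f, AtLeastDepth): return M.depth[(f.agent, s)] >= f.d
    if isinstance(f, Not): return not sat(M, s, f.sub)
    if isinstance(f, And): return sat(M, s, f.left) and sat(M, s, f.right)
    if isinstance(f, KInf):   # truth in all accessible states
        return all(sat(M, t, f.sub) for (u, t) in M.rel[f.agent] if u == s)
    if isinstance(f, K):      # K_a phi  <=>  P_a^{d(phi)}  and  K_a^inf phi
        return (M.depth[(f.agent, s)] >= modal_depth(f.sub)
                and sat(M, s, KInf(f.agent, f.sub)))
    raise TypeError(f)
```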
As **DBEL** is an extension of **S5** up to renaming of the modal operators, one can expect it to have a similar axiomatization: one new axiom is needed to axiomatize \(K_{a}\) and three others for depth atoms. \begin{table} \begin{tabular}{||r|l||} \hline All propositional tautologies & \(p\to p\), _etc._ \\ Deduction & \((K_{a}\varphi\wedge K_{a}(\varphi\rightarrow\psi))\to K_{a}\psi\) \\ \hline Truth & \(K_{a}\varphi\rightarrow\varphi\) \\ Positive introspection & \((K_{a}\varphi\wedge P_{a}^{d(\varphi)+1})\to K_{a}(P_{a}^{d(\varphi)}\to K_{a}\varphi)\) \\ Negative introspection & \((\neg K_{a}\varphi\wedge P_{a}^{d(\varphi)+1})\to K_{a}\neg K_{a}\varphi\) \\ \hline Depth monotonicity & \(P_{a}^{d}\to P_{a}^{d-1}\) \\ Exact depths & \(P_{a}^{d}\leftrightarrow\neg(E_{a}^{0}\lor\cdots\lor E_{a}^{d-1})\) \\ Unique depth & \(\neg(E_{a}^{d_{1}}\wedge E_{a}^{d_{2}})\) for \(d_{1}\neq d_{2}\) \\ \hline Depth deduction & \(K_{a}\varphi\to P_{a}^{d(\varphi)}\) \\ \hline \hline _Modus ponens_ & From \(\varphi\) and \(\varphi\rightarrow\psi\), deduce \(\psi\) \\ Necessitation & From \(\varphi\) deduce \(P_{a}^{d(\varphi)}\to K_{a}\varphi\) \\ \hline \end{tabular} \end{table} Table 1: Sound and complete axiomatization for **DBEL** over \(\mathcal{H}\). **Theorem 2.1**.: _The axiomatization in Table 1 is sound and complete with respect to **DBEL** over \(\mathcal{H}\)._ Proof.: Rather than directly showing soundness and completeness, we show it is equivalent to the axiomatization of Table 3 in Appendix A on the fragment \(\mathcal{H}\), which is shown to be sound and complete over \(\mathcal{H}^{\infty}\) in Theorem A.1. We begin by proving any proposition in \(\mathcal{H}\) that can be shown using Table 1 can be shown using Table 3 and then that any proof of a formula in \(\mathcal{H}\) using the axioms in Table 3 can be shown using those in Table 1. For the first direction, we prove that the axioms in Table 1 can be proven using those from Table 3. Most of them are immediate applications of bounded knowledge within the axioms of Table 3, along with tautologies when necessary. For positive and negative introspection, see equation (6) below in the proof of the opposite direction of the equivalence. We prove the least evident axiom, the deduction axiom, here as an example: \[\text{Deduction}\quad(K_{a}^{\infty}\varphi\wedge K_{a}^{\infty}(\varphi\rightarrow\psi))\to K_{a}^{\infty}\psi \tag{1}\] \[\text{Bounded knowledge in (1)}\quad(K_{a}^{\infty}\varphi\wedge K_{a}^{\infty}(\varphi\to\psi))\to P_{a}^{d(\psi)}\to K_{a}\psi \tag{2}\] \[\text{Tautology in (2)}\quad P_{a}^{\max(d(\varphi),d(\psi))}\to K_{a}^{\infty}\varphi\to K_{a}^{\infty}(\varphi\to\psi)\to P_{a}^{d(\psi)}\to K_{a}\psi \tag{3}\] \[\text{Repeated depth consistency}\quad P_{a}^{\max(d(\varphi),d(\psi))}\to(P_{a}^{d(\varphi)}\wedge P_{a}^{d(\psi)}) \tag{4}\] \[\text{Bounded knowledge and (3) and (4)}\quad P_{a}^{\max(d(\varphi),d(\psi))}\to K_{a}\varphi\to K_{a}(\varphi\to\psi)\to K_{a}\psi \tag{5}\] \[\text{Bounded knowledge in (5)}\quad K_{a}\varphi\to K_{a}(\varphi\to\psi)\to K_{a}\psi.\] In the other direction, we will show by induction over a proof of a valid formula in \(\mathcal{H}\) using Table 3 that it can be transformed into a proof with the same conclusion, using only axioms from Table 1. 
The transformation of a proof in the first axiomatization is as follows, * If an item of the proof is a propositional tautology, replace all \(K_{a}^{\infty}\varphi\) subformulas by \(P_{a}^{d(\varphi)}\to K_{a}\varphi\), clearly the tautology still holds and it is in Table 1. * If an item is an instance of the bounded knowledge axiom, replace it with the formula \(K_{a}\varphi\leftrightarrow(P_{a}^{d(\varphi)}\wedge P_{a}^{d(\varphi)}\to K _{a}\varphi)\) which is a consequence of depth deduction and a tautology (and therefore can be added to the proof with two extra steps). * If it uses any of the other axioms, replace it with the corresponding axiom (with the same name) from Table 1. We now have a sequence that has the same conclusion (since the conclusion is in \(\mathcal{H}\)) and only uses axioms from Table 1. The last thing to show for this to be a proof in this axiomatization is that all applications of _modus ponens_ and necessitation are still correct within this sequence. To this end, we show by induction that each step of the sequence is the same as the original proof where every \(K_{a}^{\infty}\varphi\) subformula in each step has been replaced by \(P_{a}^{d(\varphi)}\to K_{a}\varphi\). First, note that this is the case for the two first bullet points of our transformation rules above. This is also true of each axiom in the table after our transformation: a proof similar to the one in equation (1) will yield the equivalence for deduction, the only remaining non-trivial cases are positive and negative introspection. For positive introspection, performing the substitution yields, \[(P_{a}^{d(\varphi)}\to K_{a}\varphi)\to P_{a}^{d(\varphi)+1}\to K_{a}(P_{a}^{d (\varphi)}\to K_{a}\varphi). \tag{6}\] Through application of a tautology and the depth monotonicity axiom we find it to be equivalent to, \(P_{a}^{d(\varphi)+1}\to K_{a}\varphi\to K_{a}(P_{a}^{d(\varphi)}\to K_{a}\varphi)\). Therefore, up to adding steps to the proof and using tautologies, we can prove the axiom from Table 1 from the axiom in Table 3 after the substitution. The same can be said of negative introspection through a similar transformation. Finally, since _modus ponens_ and necessitation also maintain the property of replacing \(K_{a}^{\infty}\varphi\) subformulas in each step by \(P_{a}^{d(\varphi)}\to K_{a}\varphi\), it is true that the transformed proof is indeed a proof of the same conclusion in Table 1's axiomatization. ## 3 Depth-bounded public announcement logic We next present how to incorporate depth announcements in **DBEL**, which are a key challenge in defining depth-bounded public announcement logic (DPAL). Recall the axiom (PAK) of public announcement logic, \([\varphi]K_{a}\psi\leftrightarrow(\varphi\to K_{a}[\varphi]\psi)\). For the right-hand side to be true, agent \(a\) must be of depth \(d([\varphi]\psi)=d(\varphi)+d(\psi)\) according to **DBEL**. This suggests that an agent must "consume" \(d\left(\varphi\right)\) of its depth every time an announcement \(\varphi\) is made, meaning that an agent's depth behaves like a depth budget with respect to public announcements. Moreover, to model that some agents might be too shallow for the announcement \(\varphi\), each possible world is duplicated in a _negative_ version where the announcement has not taken place and a _positive_ version where the announcement takes place in the same way as in PAL. Agents who are not deep enough to perceive the announcement see the negative and positive version of the world as equivalent. 
**Definition 3**.: Models in **depth-bounded public announcement logic** (DPAL) are defined the same way as in **DBEL** and the semantics is extended to \(\mathcal{L}^{\infty}\) by \((M,s)\models[\varphi]\psi\iff((M,s)\models\varphi\implies(M\mid\varphi,(1,s)) \models\psi)\), where we define \(M\mid\varphi\) to be the model \((\mathcal{S}^{\prime},\sim^{\prime},V^{\prime},d^{\prime})\), where, \[\mathcal{S}^{\prime} =(\{0\}\times\mathcal{S})\cup\{(1,s),\;s\in\mathcal{S},\;(M,s) \models\varphi\}\] \[\sim^{\prime}_{a}\text{ is the transitive symmetric closure of }R_{a}\text{ such that},\] \[(i,s)\,R_{a}\,(i,s^{\prime}) \iff s\sim_{a}s^{\prime}\qquad\text{ for }i=0,1\] \[(1,s)\,R_{a}\,(0,s) \iff(M,s)\not\models P^{d(\varphi)}_{a}\] \[V^{\prime}((i,s)) =V(s)\qquad\qquad\text{ for }i=0,1\] \[d^{\prime}(a,(0,s)) =d(a,s)\] \[d^{\prime}(a,(1,s)) =\begin{cases}d(a,s)\quad\text{if }d(a,s)<d\,(\varphi)\\ d(a,s)-d\,(\varphi)\quad\text{otherwise}.\end{cases} \tag{7}\] Since public announcements are no longer unconditionally and universally heard by all agents, we revisit the axiom (PAK) in **DPAL**. The determining factor is **depth ambiguity**: agents that are unsure about their own depth introduce uncertainty about which agents have perceived the announcement. ### Unambiguous depths setting A model verifies the **unambiguous depths** setting whenever each agent knows its own depth exactly: \[\forall a,s,s^{\prime},\quad s\sim_{a}s^{\prime}\implies d(a,s)=d(a,s^{\prime}). \tag{8}\] The proof of the following theorem is given as Proposition C.1 in Appendix C. **Theorem 3.1**.: _For all \(\varphi\in\mathcal{L}^{\infty}\), the following two properties, respectively called **knowledge preservation** and **traditional announcements**, are valid in **DPAL** in the unambiguous depths setting,_ \[\forall\psi\in\mathcal{L}^{\infty}_{a},\;\neg P^{d(\varphi)}_{a} \rightarrow([\varphi]K_{a}\psi\leftrightarrow(\varphi\to K_{a} \psi))\] (KP) \[\forall\psi\in\mathcal{L}^{\infty},\;P^{d(\varphi)}_{a} \rightarrow([\varphi]K_{a}\psi\leftrightarrow(\varphi\to K_{a} [\varphi]\psi))\,,\] (TA) _where \(\mathcal{L}^{\infty}_{a}\) is the fragment of \(\mathcal{L}^{\infty}\) without depth atoms or modal operators for agents other than \(a\)._ DiscussionKnowledge preservation (KP) means that an agent who is not deep enough to perceive an announcement \(\varphi\) must not change its knowledge of a formula \(\psi\). However, such a property could not be true of all formulas \(\psi\), for instance if \(\psi=K_{a}K_{b}p\) but \(b\) is deep enough to perceive \(\varphi\), then the depth adjustment formula (7) could mean that \(b\)'s depth is now 0, making \(\psi\) no longer hold. Even when \(a\) is certain about \(b\)'s depth, its uncertainty about what the announcement entails could also mean that formulas such as \(\neg K_{b}p\) could no longer be true if \(P^{d(\varphi)}_{b}\) and \(\varphi\to p\) in the model. This demonstrates that in depth-bounded logics public announcements must introduce uncertainty: if \(a\) is unsure what \(b\) has perceived, it can no longer hold any certainties about what \(b\) does not know. This is not the case in PAL since all agents perceive all announcements. Our treatment of the depth-ambiguous case in Section 3.2 generalizes (KP) to obtain a property (KP') that holds on all formulas in \(\mathcal{L}^{\infty}\). Traditional announcements (TA) ensures that announcements behave the same as in PAL when the agent is deep enough for the announcement. 
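As a concrete illustration of the world-duplication update in Definition 3, here is a sketch of \(M\mid\varphi\), continuing the hypothetical `Model`/`sat`/`modal_depth` helpers from the sketch in Section 2 (all names are our own; the announcement operator is handled by a separate `sat_announce` helper rather than as an AST node, to keep the sketch short).

```python
# Sketch of the Definition 3 update M | phi: duplicate worlds into a negative
# layer 0 (no announcement) and a positive layer 1 (announcement made), link
# the two copies of a world for agents too shallow to perceive phi, and
# subtract d(phi) from the depth budget of agents who did perceive it.
def transitive_symmetric_closure(pairs, states):
    closure = {(x, x) for x in states} | pairs | {(y, x) for (x, y) in pairs}
    changed = True
    while changed:                     # simple fixpoint iteration
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z)); changed = True
    return closure

def dpal_update(M: Model, phi: Formula) -> Model:
    dphi = modal_depth(phi)
    pos = {s for s in M.states if sat(M, s, phi)}           # states where phi holds
    states = {(0, s) for s in M.states} | {(1, s) for s in pos}
    rel = {}
    for a in M.rel:
        base = set()
        for (s, t) in M.rel[a]:                             # copy ~_a on each layer
            for i in (0, 1):
                if (i, s) in states and (i, t) in states:
                    base.add(((i, s), (i, t)))
        for s in pos:                                       # too shallow: layers look alike
            if M.depth[(a, s)] < dphi:
                base.add(((1, s), (0, s)))
        rel[a] = transitive_symmetric_closure(base, states)
    val = {ns: M.val[ns[1]] for ns in states}               # valuation unchanged
    depth = {}
    for a in M.rel:
        for (i, s) in states:
            d = M.depth[(a, s)]
            depth[(a, (i, s))] = d if (i == 0 or d < dphi) else d - dphi
    return Model(states, rel, val, depth)

def sat_announce(M: Model, s, phi: Formula, psi: Formula) -> bool:
    """(M, s) |= [phi]psi, per Definition 3."""
    return (not sat(M, s, phi)) or sat(dpal_update(M, phi), (1, s), psi)
```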
The caveats from the discussion of (KP) no longer apply here, as any \(K_{b}\) operator that appears in \(\psi\) will still appear after the same public announcement operator, meaning that depth variations or knowledge variations are accounted for. ### Ambiguous depths setting We now abandon the depth unambiguity assumption from equation (8), and explore how properties (KP) and (TA) generalize to settings without depth unambiguity. We find a condition that ensures that sufficient knowledge about other agents' depths is given to \(a\) in order to maintain its recursive knowledge about other agents. The proof to the following theorem is given as Proposition C.2 in Appendix C. **Theorem 3.2**.: _For any \(\varphi\in\mathcal{L}^{\infty}\), let \(\mathcal{F}_{\varphi}:\mathcal{L}^{\infty}\to\mathcal{L}^{\infty}\) be inductively defined as,_ \[\mathcal{F}_{\varphi}(p)=\mathcal{F}_{\varphi}(E_{a}^{d})= \mathcal{F}_{\varphi}(P_{a}^{d})=\top\qquad\qquad\mathcal{F}_{\varphi}(\neg \psi)=\mathcal{F}_{\varphi}(\psi)\qquad\qquad\mathcal{F}_{\varphi}(\psi\wedge \chi)=\mathcal{F}_{\varphi}(\psi)\wedge\mathcal{F}_{\varphi}(\chi)\] \[\qquad\qquad\qquad\mathcal{F}_{\varphi}(K_{a}\psi)=\neg K^{ \infty}_{a}(\varphi\to P_{a}^{d(\varphi)})\wedge K^{\infty}_{a}(\varphi\to \neg P_{a}^{d(\varphi)}\lor P_{a}^{d(\varphi)+d(\psi)})\wedge K^{\infty}_{a} \mathcal{F}_{\varphi}(\psi)\] \[\mathcal{F}_{\varphi}(K^{\infty}_{a}\psi)=\neg K^{\infty}_{a}( \varphi\to P_{a}^{d(\varphi)})\wedge K^{\infty}_{a}\mathcal{F}_{\varphi}(\psi) \qquad\qquad\qquad\qquad\qquad\mathcal{F}_{\varphi}([\psi_{1}]\psi_{2})= \mathcal{F}_{\varphi}(\psi_{1})\wedge\mathcal{F}_{\varphi}(\psi_{2}).\] _For all \(\varphi\in\mathcal{L}^{\infty}\), the following two properties are valid in **DPAL**,_ \[\forall\psi\in\mathcal{L}^{\infty},\ \mathcal{F}_{\varphi}(K_{a} \psi)\qquad\quad\to\left([\varphi]K_{a}\psi\leftrightarrow(\varphi\to K_{a} \psi)\right)\] (KP') \[\forall\psi\in\mathcal{L}^{\infty},\ K^{\infty}_{a}(\varphi\to P _{a}^{d(\varphi)})\to\left([\varphi]K_{a}\psi\leftrightarrow(\varphi\to K_{a} [\varphi]\psi)\right).\] (TA') ### Alternate treatments of model updates for public announcements One question is whether using a definition of public announcements closer to PAL would produce a version of the above axioms closer to (PAK). Eager depth-bounded public announcement logic (EDPAL) below unconditionally decrements the depth value of all agents after public announcements. **Definition 4** (**Edpal**).: **EDPAL** extends the **DBEL** semantics to include public announcements by defining \((M,s)\models[\varphi]\psi\iff((M,s)\models\varphi\implies(M\mid\varphi,s) \models\psi)\), where \(M\mid\varphi\) is the model \((\mathcal{S}^{\prime},\sim^{\prime},V,d^{\prime})\) in which \(\mathcal{S}^{\prime}=\{s\in\mathcal{S},\ (M,s)\models\varphi\}\), \(\sim^{\prime}_{a}\) is the restriction of \(\sim_{a}\) to \(\mathcal{S}^{\prime}\), \(d^{\prime}(a,s)=d(a,s)-d(\varphi)\), and \(d\) may take values in \(\mathbb{Z}\). **EDPAL** has a sound and complete axiomatization based on the axiomatization of **DBEL** (Theorem 4.1), which also allows us to prove the complexity result of Theorem 5.1. However, another consequence of its definition is that excessive public announcements in **EDPAL** can lead an agent to a state in which it cannot reason anymore, as it has consumed its entire depth budget. 
**Proposition 3.3** (Amnesia).: _In **EDPAL**, the formula \(\neg P_{a}^{d(\varphi)}\to[\varphi]\neg K_{a}\psi\) is valid for all \(\varphi\) and \(\psi\)._ Proof.: If \((M,s)\not\models\varphi\) then the implication is vacuously true. If \((M,s)\models\varphi\wedge\neg P_{a}^{d(\varphi)}\) then the depth of \(a\) in \((M\mid\varphi,s)\) will be at most \(-1\), meaning that \((M\mid\varphi,s)\not\models K_{a}\psi\) for all \(\psi\). In particular, for \(\psi=\top\) one notices that standard intuitions about knowledge fail in **EDPAL**. This property is undesirable: _(i)_ one may expect agents to maintain some knowledge even after public announcements that they are not deep enough to understand and _(ii)_ deeper agents should be able to continue to benefit from the state of knowledge of shallower agents even after the shallower agents have exceeded their depth. One way to try to remedy this property is to change model updates in **EDPAL** to make agents perceive announcements only when they are deep enough to understand them. The resulting asymmetric depth-bounded public announcement logic (ADPAL) removes depth from an agent's budget only when it is deep enough for an announcement, and only updates its equivalence relation in states where it is deep enough for the announcement. **Definition 5** (**Adpal**).: **ADPAL** extends the **DBEL** semantics to include public announcements by defining \((M,s)\models[\varphi]\psi\iff((M,s)\models\varphi\implies(M\mid\varphi,s) \models\psi)\), where \(M\mid\varphi\) is the model \((\mathcal{S},\sim^{\prime},V,d^{\prime})\), \[s\;\not\sim^{\prime}_{a}\;s^{\prime} \iff s\;\not\sim_{a}\;s^{\prime}\;\text{or}\;\begin{cases}(M,s)\models P_{a}^{d(\varphi)}\\ (M,s)\models\varphi\iff(M,s^{\prime})\not\models\varphi,\end{cases}\] \[d^{\prime}(a,s)= \begin{cases}d(a,s)\quad\text{if }d(a,s)<d\,(\varphi)\\ d(a,s)-d\,(\varphi)\quad\text{otherwise.}\end{cases}\] The relations \(\sim_{a}\) are only assumed to be reflexive (as opposed to equivalence relations earlier). Unfortunately, in **ADPAL** an agent that is too shallow for an announcement could still learn positive information that was learned by another agent who is deep enough to perceive the announcement. We call this property _knowledge leakage_ as reflected in the following proposition. **Proposition 3.4** (Knowledge leakage).: _ADPAL does not verify the \(\rightarrow\) direction of (KP')._ Proof.: Consider three worlds, \(\{0,1,2\}\), and three agents \(a,b,c\). The relations for \(a\) and \(c\) are identity, and the relation for \(b\) is the symmetric reflexive closure of \(0\sim_{b}1\sim_{b}2\). The depth of \(a\) is 1 everywhere, \(b\)'s depth is \(0,2,0\) in each respective state and the depth of \(c\) is 2 everywhere. The atom \(p_{0}\) is true only in 0 and 1. Consider \(\varphi=K_{c}K_{c}p_{0}\), which is true in 0 and 1 only, and consider \(\psi=K_{b}p_{0}\). \(K_{a}\psi\) is not true in state 1, however \([\varphi]K_{a}\psi\) is. Moreover, one can easily check that \(\mathcal{F}_{\varphi}(K_{a}\psi)\) is true in that state. The proof provides a practical example of such leakage in **ADPAL** and we further demonstrate knowledge leakage in Proposition 6.4 in the muddy children reasoning problem (see Section 6). Note how each direction of the equivalence in (KP') expresses \((\rightarrow)\) that no knowledge leakage occurs and \((\leftarrow)\) no amnesia occurs. As shown in Theorem 3.2, **DPAL** verifies both directions and thus has neither amnesia nor knowledge leakage. 
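The three-world counterexample in the proof of Proposition 3.4 is small enough to check mechanically. The sketch below (reusing the hypothetical helpers from the earlier sketches, plus an `adpal_update` written from Definition 5; all names are our own) reproduces it: before the announcement agent \(a\) does not know \(K_{b}p_{0}\) at world 1, but after an ADPAL-style update it does.

```python
# ADPAL-style update (Definition 5): cut an agent's links between states that
# disagree on phi only where the agent is deep enough, then deduct depth.
def adpal_update(M: Model, phi: Formula) -> Model:
    dphi = modal_depth(phi)
    truth = {s: sat(M, s, phi) for s in M.states}
    rel = {a: {(s, t) for (s, t) in M.rel[a]
               if not (M.depth[(a, s)] >= dphi and truth[s] != truth[t])}
           for a in M.rel}
    depth = {(a, s): (d if d < dphi else d - dphi)
             for (a, s), d in M.depth.items()}
    return Model(M.states, rel, M.val, depth)

# Worlds 0, 1, 2; agents a and c have the identity relation, b connects 0-1-2.
S = {"0", "1", "2"}
ident = {(s, s) for s in S}
b_rel = ident | {("0", "1"), ("1", "0"), ("1", "2"), ("2", "1")}
M = Model(
    states=S,
    rel={"a": ident, "b": b_rel, "c": ident},
    val={"0": {"p0"}, "1": {"p0"}, "2": set()},             # p0 true in 0 and 1
    depth={("a", s): 1 for s in S}
          | {("b", "0"): 0, ("b", "1"): 2, ("b", "2"): 0}
          | {("c", s): 2 for s in S},
)
phi = K("c", K("c", Atom("p0")))      # true exactly in worlds 0 and 1
psi = K("b", Atom("p0"))
print(sat(M, "1", K("a", psi)))                         # False before the announcement
print(sat(adpal_update(M, phi), "1", K("a", psi)))      # True: knowledge leaks to a
```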
As reflected in the following proposition, although EDPAL has amnesia, it does not have knowledge leakage and verifies (TA). **Proposition 3.5** ([4]).: _EDPAL verifies (TA) and the \(\rightarrow\) direction in (KP) over \(\psi\in\mathcal{L}^{\infty}\), but not the converse._ ## 4 Axiomatizations **Theorem 4.1**.: _The axiomatization in Table 2 is sound and complete with respect to **EDPAL** (Definition 4) over the fragment \(\mathcal{L}\)._ Proof.: Similarly to the proof of Theorem 2.1, rather than directly showing soundness and completeness we show it is equivalent to the axiomatization of Table 4, which is shown to be sound and complete for **EDPAL** in Theorem A.2 in Appendix A. In the first direction, all axioms in Table 2 can be shown using those in Table 4 immediately, either from the proof of Theorem 2.1 or because they are the same. The only difficulty lies in knowledge announcement, but a proof similar to equation (1) shows it is sound. The other direction also follows the exact same proof as in Theorem 2.1: the public announcement axioms are direct translations of the same axioms in Table 4 by replacing the \(K_{a}^{\infty}\varphi\) subformulas with \(P_{a}^{d(\varphi)}\to K_{a}\varphi\). The proof transformation from Theorem 2.1 therefore still yields a proof of the same formula in this axiomatization, which proves completeness. We now present a sound set of axioms for **DPAL**. The main missing axioms for a sound and complete axiomatization are knowledge and public announcements, which we explored in the previous section, and announcement composition. In fact, announcement composition cannot exist in **DPAL**, since making a single announcement of depth \(d_{1}+d_{2}\) can behave very differently from making an announcement of depth \(d_{1}\) followed by another of depth \(d_{2}\), for instance when an agent's depth is between \(d_{1}\) and \(d_{1}+d_{2}\). **Theorem 4.2**.: _Replacing knowledge announcement by (KP') and (TA') and depth adjustment by,_ \[\forall d\in\mathbb{N},\quad[\varphi]E_{a}^{d}\leftrightarrow\Big{(}\varphi\rightarrow\Big{(}(P_{a}^{d(\varphi)}\wedge E_{a}^{d+d(\varphi)})\vee(\neg P_{a}^{d(\varphi)}\wedge E_{a}^{d})\Big{)}\Big{)}\] _in Table 2 produces a set of sound axioms with respect to **DPAL** (Footnote 3)._ Footnote 3: One could also easily add axioms for \(K_{a}^{\infty}\) modal operators, for instance using those from Table 4 in Appendix A. Proof.: Theorem 3.2 verifies the two axioms (KP') and (TA'). The proofs for most axioms follow from Theorem 4.1 and from the fact that knowledge is defined the same way in both semantics. In particular, the atomic permanence and conjunction announcement axioms are proven in Theorem 3.1's induction for (KP). 
We are left to show depth adjustment, \[(M,s)\models[\varphi]E_{a}^{d} \iff(M,s)\models\varphi\implies(M\mid\varphi,(1,s))\models E_{a}^{d}\] \[\iff(M,s)\models\varphi\implies\begin{cases}d(a,s)=d+d\left(\varphi\right)&\text{if }d(a,s)\geq d\left(\varphi\right)\\ d(a,s)=d&\text{if }d(a,s)<d\left(\varphi\right)\end{cases}\] \[\iff(M,s)\models\varphi\rightarrow\Big{(}(P_{a}^{d(\varphi)}\wedge E_{a}^{d+d(\varphi)})\vee(\neg P_{a}^{d(\varphi)}\wedge E_{a}^{d})\Big{)}\,.\qed\] \begin{table} \begin{tabular}{||r|l||} \hline All axioms from Table 1 & \\ \hline Atomic permanence & \([\varphi]p\leftrightarrow(\varphi\to p)\) \\ Depth adjustment & \(\forall d\in\mathbb{Z},\ [\varphi]E_{a}^{d}\leftrightarrow\Big{(}\varphi\to E_{a}^{d(\varphi)+d}\Big{)}\) \\ Negation announcement & \([\varphi]\neg\psi\leftrightarrow(\varphi\rightarrow\neg[\varphi]\psi)\) \\ Conjunction announcement & \([\varphi](\psi\wedge\chi)\leftrightarrow([\varphi]\psi\wedge[\varphi]\chi)\) \\ Knowledge announcement & \([\varphi](P_{a}^{d(\psi)}\to K_{a}\psi)\leftrightarrow(\varphi\to P_{a}^{d(\varphi)+d(\psi)}\to K_{a}[\varphi]\psi)\) \\ Announcement composition & \([\varphi][\psi]\chi\leftrightarrow[\varphi\wedge[\varphi]\psi]\chi\) \\ \hline \hline _Modus ponens_ & From \(\varphi\) and \(\varphi\rightarrow\psi\), deduce \(\psi\) \\ Necessitation & From \(\varphi\) deduce \(P_{a}^{d(\varphi)}\to K_{a}\varphi\) \\ \hline \end{tabular} \end{table} Table 2: Sound and complete axiomatization of **EDPAL** over \(\mathcal{L}\). ## 5 Complexity We first state that adding depth bounds does not change the complexity of **S5** and PAL respectively. **Theorem 5.1**.: _The satisfiability problems for \(\mathbf{DBEL}\) with \(n\geq 2\) agents and for \(\mathbf{EDPAL}\) are \(\mathsf{PSPACE}\)-complete._ Proof.: The lower bound results from \(\mathsf{PSPACE}\)-completeness of \(\mathbf{S5}_{n}\) for \(n\geq 2\) [11] and PAL [15], respective syntactic fragments of \(\mathbf{DBEL}\) and \(\mathbf{EDPAL}\). For both logics, we begin by translating \(K_{a}\varphi\) subformulas into \(P_{a}^{d(\varphi)}\wedge K_{a}^{\infty}\varphi\), which only increases formula size at most linearly. Then, in the case of \(\mathbf{EDPAL}\), using the same translation as Lemma 9 of [15], we translate formulas with public announcement \(\varphi\) into equivalent formulas \(t(\varphi)\) without public announcement such that \(|t(\varphi)|\) is at most polynomial in \(|\varphi|\) (this is possible because the axiomatization of \(K_{a}^{\infty}\) with relation to public announcements is the same). We have therefore transformed our formula \(\varphi\) into an equivalent formula in the syntactic fragment without \(K_{a}\) operators or public announcements of polynomial size relative to the initial formula \(\varphi\)'s size. We can then use the ELE-World procedure from Figure 6 of [15] by re-defining types to accommodate for depth atoms. As a reminder, we define \(\mathbf{cl}(\Gamma)\) for any set of formulas \(\Gamma\) to be the smallest set of formulas containing \(\Gamma\) and closed by single negation and sub-formulas. We then say that \(\gamma\subseteq\mathbf{cl}(\Gamma)\) is a type if all of the following are true, 1. \(\neg\psi\in\gamma\) if and only if \(\psi\not\in\gamma\) when \(\psi\) is not a negation 2. if \(\psi\wedge\chi\in\mathbf{cl}(\Gamma)\) then \(\psi\wedge\chi\in\gamma\) if and only if \(\psi\in\gamma\) and \(\chi\in\gamma\) 3. if \(K_{a}^{\infty}\psi\in\gamma\) then \(\psi\in\gamma\) 4. 
if \(P_{a}^{d}\in\gamma\) then \(\neg P_{a}^{d^{\prime}}\not\in\gamma\) and \(E_{a}^{d^{\prime}}\not\in\gamma\) for all \(d^{\prime}<d\) 5. if \(E_{a}^{d}\in\gamma\) then \(E_{a}^{d^{\prime}}\not\in\gamma\) for all \(d^{\prime}\neq d\) and \(\neg P_{a}^{d^{\prime}}\not\in\gamma\) for \(d^{\prime}<d\) 6. if \(\neg P_{a}^{d}\in\gamma\) then there exists \(d^{\prime}<d\) such that \(\neg E_{a}^{d^{\prime}}\not\in\gamma\) 7. \(\neg P_{a}^{0}\not\in\gamma\) Clearly, checking that a subset of \(\mathbf{cl}(\Gamma)\) is not a type does not increase the space complexity of the algorithm. Lemma 18 from [15] remains true here, i.e. the procedure ELE-World returns true if and only if the formula is satisfiable. It is sufficient for this to show that any type has a consistent depth assignment for all agents, as it is clear that if any of the new rules introduced for depths are violated the formula is not satisfiable. If the type contains \(E_{a}^{d}\) then it contains only one such depth atom per rule 5, the only \(P_{a}^{d^{\prime}}\) it contains are for \(d^{\prime}\leq d\) per rule 4, and it does not contain \(\neg P_{a}^{d^{\prime}}\) for \(d^{\prime}\leq d\) per rule 5, therefore \(d(a)=d\) is a consistent setting. If it does not contain any \(E_{a}^{d}\), it may contain a number of inequalities polynomial in \(|\varphi|\), that admit a solution in \(\mathbb{N}\) by rule 7. Therefore a possible algorithm is \(d_{0}=\max\{d^{\prime},\,P_{a}^{d^{\prime}}\in\gamma\}\) and then \(d(a)=\min\{d^{\prime},\,d^{\prime}\geq d_{0},\,\neg E_{a}^{d^{\prime}}\not\in\gamma\}\). If no \(P_{a}^{d}\) are in the type, then \(d_{0}=\min\{d^{\prime},\,\neg P_{a}^{d^{\prime}}\in\gamma\}\) and \(d(a)=\max\{d^{\prime},\,d^{\prime}\leq d_{0},\neg E_{a}^{d^{\prime}}\not\in\gamma\}\) are a possible choice (this choice will always be greater or equal to 0 because of rules 7 and 6 above). Finally, if there are no depth atoms in the type, the formula is clearly satisfiable for any choice of \(d(a)\). The model checking problem remains \(\mathsf{P}\)-complete in \(\mathbf{DBEL}\), using the same algorithm as for \(\mathbf{S5}\)[9]. For \(\mathbf{EDPAL}\) and \(\mathbf{ADPAL}\), the model checking problem is \(\mathsf{P}\)-complete, as the same algorithm as PAL can be used, relying on the fact that model size can only decrease after announcements [14] (the lower bounds results from the fact that PAL is a fragment of both). This is however not the case of **DPAL**, where model size grows after announcements, potentially exponentially, in fact model checking in **DPAL** is -hard [4]. **Theorem 5.2**.: _The complexity of model checking for finite models in **DPAL** is in. An upper bound in time complexity for checking in is, where is the sum of the number of states and number of pairs in each relation of._ Proof.: The model-checking algorithm is the same as the one for public announcement logic [14]: a tree is built from subformulas, with splits introduced only for subformulas of the form, with to the left and to the right. Treating a node labeled means labeling each state in with either or. The tree is treated from bottom-left to the top, always going up first except when a node of the type is found. In that case, since the nodes in the left sub-tree have been treated, we can build easily in time from the truth value of and the depth functions of. Moreover, the size of is at most. To see this, consider an equivalence class for in of size, it has exactly connections within it. 
The number of states it creates in is at most, and the number of connections it creates is at most. Each connection being in exactly one connected component means the bound holds. Therefore we can recurse in the right sub-tree with to check in time. Writing the time necessary to build, we find that checking takes time at most. ## 6 Muddy children Consider the well-known muddy children reasoning problem, where \(n\) children convene after playing outside with mud. \(k\) of them have mud on their foreheads, but have no way of knowing it. The father, an external agent, announces that at least one child has mud on their forehead. Then, he repeatedly asks if any child would like to go wash themselves. After exactly \(k\) repetitions of the father's question, all muddy children understand they are muddy and go wash themselves. Readers unfamiliar with the reasoning problem and its solution are directed to Van Ditmarsch et al. [19]'s treatment using PAL. Consider the set of states \(\{0,1\}^{n}\), where each tuple contains \(n\) entries indicating for each child if they are muddy (1) or not (0). For the sake of simplicity and since it is of depth 0, we assume the father's announcement has taken place and therefore define the Kripke structure \(\hat{M}_{n}\) with the usual definition of the agents' knowledge relations [9]. We define the **DPAL** class of muddy children models to be models extending \(\hat{M}_{n}\) with any depth function. We name \(m_{i}\) the atom expressing that child \(i\) is muddy. We number the agents in \(\{0,\ldots,n-1\}\), where the first \(k\) are muddy, and focus on the reasoning of one agent (without loss of generality agent 0) to understand that it is muddy. Recall the definition of the dual of public announcements, \(\langle\varphi\rangle\psi:=\neg[\varphi]\neg\psi\), and define the following series of formulas for \(2\leq i\leq k\), \[\varphi_{i}:=\langle\neg K_{i-1}m_{i-1}\rangle\langle\neg K_{i-2}m_{i-2}\rangle\cdots\langle\neg K_{1}m_{1}\rangle K_{0}m_{0}.\] Here \(\varphi_{i}\) states that if each of the children from \(i-1\) down to \(1\) announce one after the other that they don't know they are muddy, then child 0 knows that they (child 0) are muddy. It is well known this formula is true for unbounded agents in \(\hat{M}_{n}\) in PAL (it is also a consequence of Theorem 6.1 below). The following two theorems define a sufficient structure of knowledge of depths for the formula to be true and a necessary condition on the structure of knowledge of depths for it to be true. **Theorem 6.1** (Upper bound).: _For all three semantics, \(K_{0}\left(P_{0}^{k-1}\wedge K_{1}(P_{1}^{k-2}\wedge\cdots K_{k-1}(P_{k-1}^{0})\cdots)\right)\to\varphi_{k}\) is true in all muddy children models \(\hat{M}_{n}\) in the initial state._ Note that this formula directly provides an upper bound on the structure of depths and knowledge about depths: it shows a sufficient condition on the knowledge of depths for the problem to be solvable by agent 0. Moreover, the upper bound for one child readily generalizes to a sufficient condition for all children to understand they are muddy: each muddy child must know they are of depth at least \(k-1\), know at least some other muddy child knows they are of depth at least \(k-2\), and know that that other child knows some other muddy child knows they are of depth at least \(k-3\), _etc_. Proof.: For the sake of simplicity and since it does not change the treatment of the problem, we assume \(n=k\). We show the result for **DPAL**, as the treatments for **EDPAL** and **ADPAL** are similar. We will show the result by induction over \(k\). Denote \(s_{k}=(1,\ldots,1)\) the true state of the world where all the children are muddy. For \(k=2\), we assume \(K_{0}P_{0}^{1}\) and want to show \(\neg K_{1}m_{1}\wedge[\neg K_{1}m_{1}]K_{0}m_{0}\). 
First notice that \((\hat{M}_{2},s_{2})\models\neg K_{1}m_{1}\), simply because it considers the state \((1,0)\) to also be possible. In the state \((0,1)\), child 1 knows it is muddy. Therefore, the set of states for the successful part of the model update will be \((1,(1,1))\) and \((1,(1,0))\). Moreover, since \(K_{0}P_{0}^{1}\), it is deep enough in \(s_{2}\) to not have any links to the unsuccessful part of the model update, therefore it knows \(m_{0}\). Consider some \(k>2\), we denote \(S_{i}\) the set of states that are "active" when considering \(\varphi_{i}\). More precisely, we set \(S_{i}=\left\{0,1\right\}^{i}\times\left\{1\right\}^{k-i}\setminus\left\{0 \right\}^{k}\). We will show that after \(k-i\) announcements, the remainder of the problem is equivalent to checking \(\varphi_{i}\) on the subgraph induced by the states \(S_{i}\). This is evident for \(i=k\) by definition, we now show by descending induction that it is equivalent to checking \(\varphi_{2}\) on \(S_{2}\), which we have just verified to be true. Firstly, it is true that \((\hat{M}_{n},s_{k})\models\neg K_{k-1}m_{k-1}\) since child \(k-1\) considers possible the state \((1,\ldots,1,0)\). The set of states in which \(K_{k-1}m_{k-1}\) holds is exactly \((0,\ldots,0,1)\). Therefore, the model update will create a copy of all other states. We then notice that the set of states whose last component is 0 can be ignored in the rest of the problem: they are not reachable from \(s_{k}\) by any sequence of \(\sim_{i}\) that does not contain \(\sim_{k-1}\) and the rest of the formula \(\varphi_{k-1}\) to be checked does not use any modal operators for agent \(k-1\) any more. These states will never be reached and can therefore be removed without altering the result of the rest of the execution. We are therefore restricting ourselves, after the model update, to the set of states \(S_{k-1}\) in the positive part of the model. Note however there are still possibly links between the negative part of the model and \(S_{k-1}\) in the positive part of the model. We will show that these links have no effect on the checking of the rest of the formula, by showing that links for child \(i\) find themselves in \(S_{k-1}\setminus S_{i}\): therefore, by the time we query modal operator \(i\), the set of ignored states will contain all states with a link for child \(i\). For child \(i<k-1\), the information we have about its depth is \(K_{0}K_{1}\cdots K_{i}P_{i}^{k-1-i}\) before the model update. Therefore, we in particular know it is deep enough for the announcement (which is of depth \(1\leq k-1-i\)) in the set of states in which the \(i\) first components might have changed compared to \(s_{k}\) but the last \(k-1-i\) are all fixed to 1: this is exactly \(S_{i}\). We have shown that the recursive check in \(M\mid\neg K_{k-1}m_{k-1}\) will take place on a set of states for which the execution is equivalent to \(S_{k-1}\) and on which we will have to check the formula \(\varphi_{k-1}\). Finally, since the depths of each agent other than \(k-1\) was at least 1 on \(S_{k-2}\), they are reduced by 1 and the induction hypothesis on depths for \(k-2\) is also verified. **Theorem 6.2** (Lower bound).: _For **DPAL**, the formula \(\varphi_{k}\to K_{0}P_{0}^{k-1}\) is true in all models \(\hat{M}_{n}\)._ Proof.: We use the notations from the proof of Theorem 6.1 above. 
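To make the \(k=2\) base case of Theorem 6.1 concrete, the sketch below builds the two-child muddy children model (with the all-clean state already removed by the father's announcement) and checks the announcement \(\neg K_{1}m_{1}\) followed by \(K_{0}m_{0}\), reusing the hypothetical `Model`/`sat`/`dpal_update` helpers from the earlier sketches. Giving both children depth 1 is our simplifying assumption; it trivially satisfies the hypothesis \(K_{0}P_{0}^{1}\) of the base case since depths are unambiguous.

```python
# Two muddy children: states are bit strings (child0, child1), '1' = muddy.
states = {"11", "10", "01"}                 # "00" excluded by the father's announcement

def same_except(s, t, i):                   # child i cannot see its own forehead
    return all(s[j] == t[j] for j in range(len(s)) if j != i)

rel = {str(i): {(s, t) for s in states for t in states if same_except(s, t, i)}
       for i in range(2)}
val = {s: {f"m{i}" for i in range(2) if s[i] == "1"} for s in states}
depth = {(str(i), s): 1 for i in range(2) for s in states}   # both children of depth 1

M2 = Model(states, rel, val, depth)
phi = Not(K("1", Atom("m1")))               # "child 1 does not know it is muddy"
print(sat(M2, "11", phi))                                   # True: child 1 cannot yet tell
print(sat(dpal_update(M2, phi), (1, "11"), K("0", Atom("m0"))))  # True: child 0 now knows m0
```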
Notice first that all of the announcements remain true when they are performed, because \(\neg K_{k-1}^{\infty}m_{k-1}\to\neg K_{k-1}m_{k-1}\) and the implicant is true by the usual lower bound for muddy children (it takes \(k\) announcements for any child to know they are muddy). Assume by contraposition that \(d(0,s_{k})=i<k-1\) or \(d(0,\tilde{s}_{k})=i<k-1\) initially, where \(\tilde{s}_{k}\) is the state \((0,1,\ldots,1)\) of \(\hat{M}_{n}\). After \(i\) public announcements, it will be true that \(\neg K_{0}m_{0}\) still, as well as \(\neg K_{0}\neg E_{0}^{0}\) since each public announcement is of depth 1. The former is a consequence of the usual lower bound for muddy children, and can be derived from the proof in Theorem 6.1 using symmetry between 0 and \(k-1-i\) after the \(i\) announcements and monotonicity of knowledge of atoms: if the depths are lower than they were in the previous proof, there are more states and more links in the updated model and therefore \(\neg K_{k-1-i}m_{k-1-i}\) remains true. Therefore in this model after \(i\) announcements, either \(s_{k}\) or \(\tilde{s}_{k}\) sees agent 0 of depth 0 and both states are still connected by \(\sim_{0}\). This means that for the next announcement, since \(\neg K_{0}m_{0}\) after each announcement except potentially the last using the same argument as above, we will have the chain of connections \((1,s_{k})\sim_{0}^{\prime}(0,s_{k})\sim_{0}^{\prime}(0,s_{k}^{\prime})\) or \((1,s_{k})\sim_{0}^{\prime}(1,\tilde{s}_{k})\sim_{0}^{\prime}(0,\tilde{s}_{k})\). This means that by an immediate induction, after the \(k-i\) announcements it is still true that \(\neg K_{0}m_{0}\): this is a contradiction with \(\varphi_{k}\). A stronger lower bound for each child is available [4], with recursive conditions on the depth of all agents similar to Theorem 6.1. This formula provides a lower bound on the knowledge of depths of agent 0 to be able to solve the problem: it must be of depth at least \(k-1\) and know so. By symmetry, this generalizes to any child or any set of children solving the problem. Finally, we present propositions that illustrate how _amnesia_ in **EDPAL** (Proposition 3.3) and _knowledge leakage_ in **ADPAL** (Proposition 3.4) manifest in the muddy children problem. These propositions are easily verified by computing explicitly the models after updates. **Proposition 6.3** (Amnesia in **Edpal**).: _Consider the instance of muddy children \(\hat{M}_{3}\), where child \(i\) is unambiguously of depth \(2-i\), i.e. \(d(i,\cdot)=2-i\). The formula \(\langle\neg K_{2}m_{2}\rangle\langle\neg K_{1}m_{1}\rangle\neg K_{2}\top\) is true in **EDPAL** but not in **DPAL** or **ADPAL**. This means that in **EDPAL**, after the first two announcements, agent \(2\) does not know anything anymore._ **Proposition 6.4** (Knowledge leakage in **Adpal**).: _The formula \(\langle K_{1}\neg K_{2}m_{2}\rangle K_{1}K_{0}m_{0}\) is true in **ADPAL** but not in **DPAL** or **EDPAL**. In **ADPAL**, agent \(1\) has deduced the conclusion of agent \(0\)'s reasoning, despite not being deep enough to perceive the announcement. Moreover, if agent \(0\) were of depth \(1\) it would not be true that \(\langle K_{1}\neg K_{2}m_{2}\rangle K_{0}m_{0}\): agent \(0\) would not be able to deduce what agent \(1\) has deduced._ **Library.** Alongside this paper, we publish code for a library for multi-agent epistemic logic model checking and visualization in Python. It implements depth-unbounded PAL models as well as **DPAL**, **EDPAL** and **ADPAL**. 
The code is available in an online repository [5]. The code can also be used to generate illustrations of model updates in the muddy children reasoning problem [4] under the assumptions of Theorem 6.1 above. **Conclusion.** We have shown how **S5** and public announcement logic (PAL) can be extended to incorporate bounded-depth agents. We have shown completeness results for several of the resulting logics and explored the relationship between public announcements and knowledge in **DPAL**, as well as complexity bounds for these logics. We finally illustrated the behavior of depth-bounded agents in the muddy children reasoning problem, where we showed upper and lower bounds on depths (and recursive knowledge of depths) necessary and sufficient to solve the problem. These results extend epistemic logics to support formal reasoning about agents with limited modal depth.
2310.15483
LDPC Decoding with Degree-Specific Neural Message Weights and RCQ Decoding
Recently, neural networks have improved MinSum message-passing decoders for low-density parity-check (LDPC) codes by multiplying or adding weights to the messages, where the weights are determined by a neural network. The neural network complexity to determine distinct weights for each edge is high, often limiting the application to relatively short LDPC codes. Furthermore, storing separate weights for every edge and every iteration can be a burden for hardware implementations. To reduce neural network complexity and storage requirements, this paper proposes a family of weight-sharing schemes that use the same weight for edges that have the same check node degree and/or variable node degree. Our simulation results show that node-degree-based weight-sharing can deliver the same performance requiring distinct weights for each node. This paper also combines these degree-specific neural weights with a reconstruction-computation-quantization (RCQ) decoder to produce a weighted RCQ (W-RCQ) decoder. The W-RCQ decoder with node-degree-based weight sharing has a reduced hardware requirement compared with the original RCQ decoder. As an additional contribution, this paper identifies and resolves a gradient explosion issue that can arise when training neural LDPC decoders.
Linfang Wang, Caleb Terrill, Richard Wesel, Dariush Divsalar
2023-10-24T03:20:53Z
http://arxiv.org/abs/2310.15483v2
# LDPC Decoding with Degree-Specific Neural Message Weights and RCQ Decoding ###### Abstract Recently, neural networks have improved MinSum message-passing decoders for low-density parity-check (LDPC) codes by multiplying or adding weights to the messages, where the weights are determined by a neural network. The neural network complexity to determine distinct weights for each edge is high, often limiting the application to relatively short LDPC codes. Furthermore, storing separate weights for every edge and every iteration can be a burden for hardware implementations. To reduce neural network complexity and storage requirements, this paper proposes a family of weight-sharing schemes that use the same weight for edges that have the same check node degree and/or variable node degree. Our simulation results show that node-degree-based weight-sharing can deliver the same performance requiring distinct weights for each node. This paper also combines these degree-specific neural weights with a reconstruction-computation-quantization (RCQ) decoder to produce a weighted RCQ (W-RCQ) decoder. The W-RCQ decoder with node-degree-based weight sharing has a reduced hardware requirement compared with the original RCQ decoder. As an additional contribution, this paper identifies and resolves a gradient explosion issue that can arise when training neural LDPC decoders. LDPC decoder, neural decoder, low-bitwidth decoding, hardware efficiency, layered decoding, FPGA. ## I Introduction Low-Density Parity-Check (LDPC) codes [2] have been implemented broadly, including in NAND flash systems and wireless communication systems. Message-passing decoders are often used to decode LDPC codes. Typical message-passing decoders utilize belief propagation (BP), MinSum, and its variations. However, message-passing decoders are sub-optimal because of the existence of cycles in the corresponding Tanner graph. Recently, numerous works have been focused on enhancing the performance of message-passing decoders with the help of neural networks (NNs) [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], such as neural belief propagation (N-BP) in [3], normalized MinSum (NMS) and neural OMS decoders in [3, 4, 6]. The neural network is created by unfolding the message passing operations of each decoding iteration [3]. Each decoding iteration is unfolded into two hidden layers, representing a check node processing layer and a variable node processing layer, and each neuron represents a variable-to-check message (V2C) or a check-to-variable (C2V) message. These neural decoders normally assign each C2V message and/or each V2C message a distinct weight in each iteration and hence are impractical for long-blocklength LDPC codes because the number of required parameters is proportional to the number of edges in the Tanner graph of the parity check matrix. One solution is to share the weights across iterations or edges in the Tanner graph, like in [5, 13, 15, 16]. However, these simple weight-sharing methods sacrifice decoding performance in different ways. Besides, the precursor works of literature mainly focus on the short-blocklength codes (\(n<2000\)), which may have resulted from the fact that the required memory for training neural decoders with long block lengths by using popular deep learning research platforms, such as PyTorch, exceeds the computation resources that researchers can access. 
However, as shown in [13, 1], it is possible to train neural decoders by only using CPUs on personal computers for very long-blocklength codes if resources are handled more efficiently. On the other perspective, decoders for LDPC codes with low message bit widths are desired when considering the limited hardware resources. Recently, the non-uniformly quantized decoders [18, 19, 20, 21, 22, 23, 24, 25, 26, 27] have shown to deliver excellent performance with very low message precision. One promising decoding paradigm is called reconstruction-computation-quantization (RCQ) decoder [25, 26, 27]. The node operation in an RCQ decoder involves a reconstruction function that allows high-precision message computation and a quantization function that allows low-precision message passing between nodes. Specifically, the reconstruction function, equivalent to a dequantizer, maps the low-bitwidth messages received by a node to high-bitwidth messages for computation. The quantization function quantizes the calculated high-bitwidth messages to low-bitwidth messages that will be sent to its neighbor nodes. The excellent decoding performance of RCQ decoder comes from its dynamic quantizers and dequantizers that are updated in each layer and each iteration. However, such dynamic quantizers/dequantizers are also overheads of the RCQ decoder in hardware implementation, which may even offset the benefit brought by the low bit-width messages [27]. ### _Contribution_ This paper proposes a family of weight-sharing schemes for the neural MinSum decoder based on the check node degree and variable node degree. Our simulation results show that the decoders with the node-degree-based weight-sharing schemes can deliver the same performance as the neural MinSum decoder that doesn't share the weights. This paper also combines neural decoding with the RCQ decoding paradigm and proposes a weighted RCQ (W-RCQ) decoder. The W-RCQ decoder with node-degree-based weight sharing has a reduced hardware requirement compared with the RCQ decoder. The contributions of this paper are summarized below: * _Posterior Joint Training Method._ This paper identifies the gradient explosion issue when training neural LDPC decoders. A posterior joint training method is proposed in this paper to address the gradient explosion problem. Simulation results show posterior joint training delivers better decoding performance than the simple gradient clipping method. * _Node-Degree-Based Weight Sharing._ This paper illustrates that the weight values of the N-NMS decoder are strongly related to check node degree, variable node degree, and iteration index. As a result, this paper proposes node-degree-based weight-sharing schemes that assign the same weight to the edges with the same check node degree and/or variable node degree. * _Neural-2D-MinSum decoder._ By employing the node-degree-based weight-sharing schemes on the N-NMS and N-OMS decoders, this paper proposes the N-2D-NMS decoder and N-2D-OMS decoder. _2D_ means 2-dimensional and implies that the weights in each iteration are shared across two dimensions, i.e., check node degree and variable node degree. * _W-RCQ Decoder._ This paper applies N-2D-NMS and N-2D-OMS to the RCQ decoding paradigm to introduce a weighted-RCQ (W-RCQ) decoding paradigm. 
Simulation results for a (9472,8192) LDPC code on a field-programmable gate array (FPGA) device show that compared with the 4-bit RCQ decoder and the 5-bit OMS decoder, the 4-bit W-RCQ decoder delivers comparable FER performance but with reduced hardware requirements. ### _Organization_ The remainder of this paper is organized as follows: Section II derives the gradients for a flooding-scheduled N-NMS decoder and shows that the memory to calculate the gradients can be reduced by storing the forward messages compactly. This section also describes the posterior joint training method that addresses the gradient explosion issue. Section III proposes node-degree-based weight-sharing schemes for neural MinSum decoder, which leads to a family of neural-2D-MinSum decoders. Section IV gives the W-RCQ decoding structure and describes how to train W-RCQ parameters via a quantized neural network (QNN). The simulation results are presented in Section V, and Section VI concludes our work. ## II Training Neural MinSum Decoders for Long Blocklength Codes For the neural network corresponding to a neural LDPC decoder, the number of neurons in each hidden layer equals the number of edges in the Tanner graph corresponding to the parity check matrix [3]. For the popular NN platforms, such as PyTorch, each neuron requires a data structure that stores the value of the neuron, the gradient of the neuron, the connection of this neuron with other neurons, and so on. Therefore, training neural decoders for long-blocklength LDPC codes in PyTorch demands significant memory, which poses a challenge for researchers with limited resources. However, the data structure used in PyTorch is redundant to the neural LDPC decoders. One reason is that the neuron connections between hidden layers are repetitive and can be interpreted by the parity check matrix. This immediately reduces the required memory. This section uses N-NMS decoder to show that the memory required to calculate gradients of the neural MinSum decoders can be further reduced by compactly storing the messages in forward propagation. ### _Forward Propagation of N-NMS Decoder_ Let \(H\in\mathbb{F}_{2}^{(n-k)\times n}\) be the parity check matrix of an \((n,k)\) binary LDPC code, where \(n\) is the codeword length and \(k\) is dataword length. Denote \(i^{th}\) variable node and \(j^{th}\) check node by \(v_{i}\) and \(c_{j}\), respectively. Let \(\mathrm{sgn}(\cdot)\) be the sign function, i.e., \(\mathrm{sgn}(x)=1\) for \(x\geq 0\) and \(\mathrm{sgn}(x)=-1\) otherwise. For the flooding-scheduled decoder, in the \(t^{th}\) decoding iteration, N-NMS decoder updates the C2V message, \(u_{c_{j}\to v_{i}}^{(t)}\), by: \[\begin{split} u_{c_{i}\to v_{j}}^{(t)}&=\beta_{(c_{i}, v_{j})}^{(t)}\times\prod_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j}\}} \text{sgn}\left(l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}\right)\\ &\times\min_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j} \}}\left|l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}\right|,\end{split} \tag{1}\] \(\mathcal{N}(c_{i})\) is the set of variable nodes that connect \(c_{i}\) and \(\left\{\beta_{(c_{i},v_{j})}^{(t)}|i\in\{1,\ldots(n-k)\},j\in\{1,\ldots n\},H(i,j)=1,t\in\{1,\ldots,I_{T}\}\right\}\) is the set of trainable parameters. \(I_{T}\) represents the maximum iterations. 
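As a concrete illustration of the weighted check node update in (1), here is a minimal plain-Python sketch (illustrative function and variable names, not the authors' implementation) that computes the C2V messages leaving a single check node in one iteration, using the min1/min2 shortcut that the backward-propagation discussion in Section II-B relies on.

```python
# N-NMS check node update, eq. (1): each outgoing C2V message is the product of
# the signs and the minimum magnitude of the *other* incoming V2C messages,
# scaled by a trainable weight beta for that edge (and iteration).
import math

def check_node_update(v2c, beta):
    """v2c: incoming V2C messages l_{v'->c}; beta: per-edge weights for this node."""
    signs = [1.0 if x >= 0 else -1.0 for x in v2c]
    mags = [abs(x) for x in v2c]
    sign_prod = math.prod(signs)
    # min1/min2 trick: only the two smallest magnitudes are ever needed.
    order = sorted(range(len(mags)), key=lambda j: mags[j])
    pos1, pos2 = order[0], order[1]
    min1, min2 = mags[pos1], mags[pos2]
    c2v = []
    for j in range(len(v2c)):
        m = min2 if j == pos1 else min1      # exclude edge j itself from the min
        s = sign_prod * signs[j]             # product of the other edges' signs
        c2v.append(beta[j] * s * m)
    return c2v

# Example: a degree-4 check node with unit weights.
print(check_node_update([0.8, -2.5, 1.2, -0.3], [1.0, 1.0, 1.0, 1.0]))
```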
The V2C message, \(l_{v_{j}\to c_{i}}^{(t)}\), and posterior of each variable node, \(l_{v_{j}}^{(t)}\), of N-NMS decoder in iteration \(t\) are calculated by: \[l_{v_{j}\to c_{i}}^{(t)}=l_{v_{j}}^{ch}+\sum_{c_{i^{\prime}}\in\mathcal{N}(v_{j})\backslash\{c_{i}\}}u_{c_{i^{\prime}}\to v_{j}}^{(t)}, \tag{2}\] \[l_{v_{j}}^{(t)}=l_{v_{j}}^{ch}+\sum_{c_{i^{\prime}}\in\mathcal{N}(v_{j})}u_{c_{i^{\prime}}\to v_{j}}^{(t)}. \tag{3}\] \(\mathcal{N}(v_{j})\) represents the set of the check nodes connected to \(v_{j}\). \(l_{v_{j}}^{ch}\) is the log-likelihood ratio (LLR) of the channel observation of \(v_{j}\). The decoding stops when all parity check nodes are satisfied or \(I_{T}\) is reached. ### _Backward Propagation of N-NMS_ Let \(J\) be some loss function for the N-NMS neural network, for example, the multi-loss cross entropy in [3]. Denote the gradients of loss \(J\) with respect to (w.r.t.) the trainable weights, the C2V message and V2C message by \(\frac{\partial J}{\partial\beta_{(c_{i},v_{j})}^{(t)}}\), \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\), and \(\frac{\partial J}{\partial l_{v_{j}\to c_{i}}^{(t)}}\) respectively. Fig. 1 shows the calculation of the V2C messages for a degree-3 variable node \(v\) that connects check nodes \(c_{1}\), \(c_{2}\), and \(c_{3}\). The gradients of these V2C messages are components of the gradients of the C2V messages. As shown by the red dashed paths, \(u_{c_{1}\to v}^{(t)}\) is used to calculate \(l_{v\to c_{2}}^{(t)}\), \(l_{v\to c_{3}}^{(t)}\), and \(l_{v}^{(t)}\), therefore the gradient \(\frac{\partial J}{\partial u_{c_{1}\to v}^{(t)}}\) is accumulated by these three terms. Generally, in iteration \(t\), \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) is updated by: \[\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}=\frac{\partial J}{\partial l_{v_{j}}^{(t)}}+\sum_{c_{i^{\prime}}\in\mathcal{N}(v_{j})\backslash\{c_{i}\}}\frac{\partial J}{\partial l_{v_{j}\to c_{i^{\prime}}}^{(t)}}. \tag{4}\] In iteration \(t\), when calculating C2V messages, check node \(c_{i}\) receives minimum and second minimum magnitude values, denoted by \(\texttt{min1}_{c_{i}}^{t}\) and \(\texttt{min2}_{c_{i}}^{t}\), from variable nodes \(\texttt{pos1}_{c_{i}}^{t}\) and \(\texttt{pos2}_{c_{i}}^{t}\), respectively, where \[\begin{split}\texttt{min1}_{c_{i}}^{t}&=\min_{v_{j^{\prime}}\in\mathcal{N}(c_{i})}|l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}|,\\ \texttt{pos1}_{c_{i}}^{t}&=\operatorname*{argmin}_{v_{j^{\prime}}\in\mathcal{N}(c_{i})}|l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}|,\\ \texttt{min2}_{c_{i}}^{t}&=\min_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{\texttt{pos1}_{c_{i}}^{t}\}}|l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}|,\\ \texttt{pos2}_{c_{i}}^{t}&=\operatorname*{argmin}_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{\texttt{pos1}_{c_{i}}^{t}\}}|l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}|.\end{split} \tag{5}\] Only \(\texttt{min1}_{c_{i}}^{t}\) and \(\texttt{min2}_{c_{i}}^{t}\) are used for C2V messages calculation. Fig. 2 illustrates an example of computing the C2V messages of a degree-3 check node \(c\) that connects variable nodes \(v_{1}\), \(v_{2}\), and \(v_{3}\). Fig. 2 assumes \(\texttt{pos1}_{c}^{(t)}=v_{3}\) and \(\texttt{pos2}_{c}^{(t)}=v_{1}\). It can be seen from Fig. 
2 that \(\frac{\partial J}{\partial\beta_{(c_{i},v_{j})}^{(t)}}\) is calculated using \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) by: \[\frac{\partial J}{\partial\beta_{(c_{i},v_{j})}^{(t)}}=u_{c_{i}\to v_{j}}^{(t)*}\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}, \tag{7}\] where \(u_{c_{i}\to v_{j}}^{(t)*}=\frac{u_{c_{i}\to v_{j}}^{(t)}}{\beta_{(c_{i},v_{j})}^{(t)}}\). \(u_{c_{i}\to v_{j}}^{(t)*}\) is the output of the check node Min operation and hence can be calculated efficiently from \(\operatorname{sgn}\left(l_{v_{j}\to c_{i}}^{(t-1)}\right)\), \(\texttt{min1}_{c_{i}}^{t}\), \(\texttt{min2}_{c_{i}}^{t}\), and \(\texttt{pos1}_{c_{i}}^{t}\). In Fig. 2, \(l_{v_{3}\to c}^{(t-1)}\) is used to compute \(u_{c\to v_{1}}^{(t)}\) and \(u_{c\to v_{2}}^{(t)}\); \(l_{v_{1}\to c}^{(t-1)}\) is used to compute \(u_{c\to v_{3}}^{(t)}\); and \(l_{v_{2}\to c}^{(t-1)}\) is not involved in computing any C2V messages. As a result, only \(l_{v_{1}\to c}^{(t-1)}\) and \(l_{v_{3}\to c}^{(t-1)}\) accumulate the gradients, as shown in the blue dashed and red dash-dotted paths in Fig. 2, respectively. Generally, for all variable nodes connected to the check node \(c_{i}\), only \(l_{\texttt{pos1}_{c_{i}}^{t}\to c_{i}}^{(t-1)}\) and \(l_{\texttt{pos2}_{c_{i}}^{t}\to c_{i}}^{(t-1)}\) receive backward information. Note that the sign operation makes the gradient 0, and the \(\min\) operation passes the gradient to the neuron that provides the minimum value. Fig. 2: Computation of V2C messages of a degree-3 check node \(c\). The example assumes that \(v_{3}\) and \(v_{1}\) provide the first and second minimum values, respectively. As a result, only \(l_{v_{1}\to c}^{(t-1)}\) and \(l_{v_{3}\to c}^{(t-1)}\) accumulate the gradients, which are illustrated as the blue dashed and red dash-dotted paths, respectively. Hence, \(\frac{\partial J}{\partial l_{v_{j}\to c_{i}}^{(t-1)}}\) is computed as follows: \[\frac{\partial J}{\partial l_{v_{j}\to c_{i}}^{(t-1)}}=\left\{\begin{array}{ll}\text{sgn}\left(l_{v_{j}\to c_{i}}^{(t-1)}\right)\sum_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j}\}}\frac{\partial J}{\partial|u_{c_{i}\to v_{j^{\prime}}}^{(t)}|}&,v_{j}=\texttt{pos1}_{c_{i}}^{t}\\ \text{sgn}\left(l_{v_{j}\to c_{i}}^{(t-1)}\right)\frac{\partial J}{\partial|u_{c_{i}\to v_{j}}^{(t)}|}&,v_{j}=\texttt{pos2}_{c_{i}}^{t}\\ 0&,\text{otherwise}.\end{array}\right. \tag{8}\] The term \(\frac{\partial J}{\partial|u_{c_{i}\to v_{j}}^{(t)}|}\) is calculated by: \[\frac{\partial J}{\partial|u_{c_{i}\to v_{j}}^{(t)*}|}=\text{sgn}(u_{c_{i}\to v_{j}}^{(t)*})\beta_{(c_{i},v_{j})}^{(t)}\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)*}}. \tag{9}\] (4)-(9) indicate that the neuron values in each hidden layer can be stored compactly with \(\operatorname{sgn}\left(l_{v_{j}\to c_{i}}^{(t)}\right)\), \(\texttt{min1}_{c_{i}}^{t}\), \(\texttt{min2}_{c_{i}}^{t}\), \(\texttt{pos1}_{c_{i}}^{t}\), and \(\texttt{pos2}_{c_{i}}^{t}\). The compactly stored neural values in the hidden layers significantly reduce memory. ### _Posterior Joint Training_ Eq. (8) implies that in iteration \(t\), for all variable nodes that connect check node \(c\), only \(\texttt{pos1}_{c}^{t}\) and \(\texttt{pos2}_{c}^{t}\) receive gradients from \(c\). Besides, \(|\mathcal{N}(c)|-1\) gradient terms flow to \(\texttt{pos1}_{c}^{t}\). Hence, if check node \(c\) has a large degree, the gradient of \(J\) w.r.t. 
\(\texttt{pos1}_{c}^{t}\) can have a large magnitude, and this large-magnitude gradient will be propagated to the neurons in the preceding layer corresponding to the C2V messages whose check nodes (other than \(c\)) connect to \(\texttt{pos1}_{c}^{t}\). As a result, the large-magnitude gradients are accumulated and propagated as back propagation proceeds, which results in gradient explosion. Fig. 3a illustrates the gradient explosion phenomenon when training a flooding-scheduled N-NMS decoder for a (3096,1032) LDPC code. Define \(\mu^{(t)}\) as the average magnitude of the gradients of \(J\) w.r.t. all C2V messages in iteration \(t\). The gradients are calculated by feeding the NN corresponding to the N-NMS decoder with a random input sample and then performing backward propagation. Fig. 3a plots \(\mu^{(t)}\) in each decoding iteration. The maximum check node degree and variable node degree of the code are 19 and 27, respectively. The maximum number of decoding iterations of the decoder is 50. It can be seen that \(\mu^{(t)}\) increases exponentially with the decrease of decoding iteration \(t\). Fig. 3: (a) The average magnitude of gradients of loss \(J\) w.r.t. C2V messages in each decoding iteration. (b) FER curves of the flooding-scheduled N-NMS decoders for a (3096,1032) LDPC code. Gradient clipping, greedy training, and posterior joint training are used to address the gradient explosion issue. Eq. (7) indicates that a large magnitude of \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) leads to a large magnitude of \(\frac{\partial J}{\partial\beta_{(c_{i},v_{j})}^{(t)}}\) and hence prevents the neural network from optimizing weights effectively. To our knowledge, this paper is the first to report the gradient explosion issue for neural LDPC decoder training. However, there have been several techniques that solve the gradient explosion problem: 1. _Gradient Clipping_. Gradient explosion is a common problem in the deep learning field, such as recurrent neural networks, and one way to solve this problem is gradient clipping [28]. The gradient clipping in this paper means to limit the maximum gradient magnitude to some threshold \(l\). 2. _Greedy Training_. Dai _et al._ in [29] proposed greedy training. Greedy training trains the parameters in the \(t^{th}\) decoding iteration by fixing the pre-trained parameters in the first \(t-1\) iterations. Greedy training solves the gradient explosion problem because the large-magnitude gradients won't be accumulated and propagated to the preceding hidden layers. However, greedy training requires a time complexity that is proportional to \(I_{T}^{2}\), because one must have trained the first \((t-1)\) iterations in order to train the \(t^{th}\) decoding iteration. Eq. (4) indicates that the gradient of \(J\) w.r.t. \(u_{c_{i}\to v_{j}}^{(t)}\) comes from two parts: the first part is from the posterior \(l_{v_{j}}^{(t)}\), and the second part is from the V2C messages \(l_{v_{j}\to c_{i^{\prime}}}^{(t)}\), \(c_{i^{\prime}}\in\mathcal{N}(v_{j})\setminus\{c_{i}\}\). Based on the previous analysis, if any \(l_{v_{j}\to c_{i^{\prime}}}^{(t)}\), \(c_{i^{\prime}}\in\mathcal{N}(v_{j})\setminus\{c_{i}\}\), provides a large-magnitude gradient, the neuron \(u_{c_{i}\to v_{j}}^{(t)}\) can also have a large-magnitude gradient. This will result in a large magnitude of the gradient of \(J\) w.r.t. \(\beta_{(c_{i},v_{j})}^{(t)}\), as indicated by (7).
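For reference, remedy 1) above (gradient clipping with threshold \(l\)) amounts to a one-line operation in a typical training loop. The sketch below is a generic illustration with placeholder names, not the exact training code used in this paper.

```python
import torch

def clip_decoder_gradients(model: torch.nn.Module, l: float = 1e-3) -> None:
    """Limit every gradient entry of the trainable weights to magnitude l,
    i.e., the element-wise clipping described above."""
    for p in model.parameters():
        if p.grad is not None:
            p.grad.data.clamp_(-l, l)

# Typical use inside one training step (sketch):
#   loss.backward()
#   clip_decoder_gradients(decoder_net, l=1e-3)
#   optimizer.step()
```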
In this paper, we propose posterior joint training, which calculates the gradient of \(J\) w.r.t. \(u_{c_{i}\to v_{j}}^{(t)}\) only using the posterior \(l_{v_{j}}^{(t)}\). More explicitly, for the flooding-scheduled N-NMS neural network, \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) is calculated by: \[\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}=\frac{\partial J}{ \partial l_{v_{j}}^{(t)}}. \tag{10}\] Hence, the gradient of \(J\) w.r.t. \(\beta_{(c_{i},v_{j})}^{(t)}\) is calculated as: \[\frac{\partial J}{\partial\beta_{(c_{i},v_{j})}^{(t)}}=\frac{u_{c_{i}\to v_{j}}^{(t)}}{\beta_{(c_{i},v_{j})}^{(t)}}\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}=\frac{u_{c_{i}\to v_{j}}^{(t)}}{\beta_{(c_{i},v_{j})}^{(t)}}\frac{\partial J}{\partial l_{v_{j}}^{(t)}}. \tag{11}\] By calculating the gradients of neurons in the \(t^{th}\) decoding iteration only using the posterior \(l_{v_{j}}^{(t)}\), (10) and (11) prevent the large-magnitude gradients from being propagated to the preceding hidden layers. The posterior joint training equivalently treats each decoding iteration as the last iteration, because \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) in the last iteration is calculated by (10). Besides, (10) is also used in the greedy training [29], because the greedy training trains the parameters of iteration \(t\) by assuming the decoder has a maximum iteration of \(t\) and fixing the pre-trained parameters of the previous \(t-1\) iterations. However, the posterior joint training optimizes the parameters of all decoding iterations jointly and hence requires a time complexity proportional to \(I_{T}\). Unlike the flooding-scheduled decoder, which calculates the V2C messages using the C2V messages all from the previous iteration, the layered-scheduled decoder uses the most recently updated C2V messages to calculate V2C messages. As a result, \(u_{c_{i}\to v_{j}}^{(t)}\) is used to calculate the following terms: 1) the soft decision in iteration \(t\), \(l_{v_{j}}^{(t)}\); 2) the V2C messages in iteration \(t\), \(l_{v_{j}\to c_{i^{\prime}}}^{(t)}\), where \(i^{\prime}\in\{i^{*}\,|\,c_{i^{*}}\in\mathcal{N}(v_{j}),i^{*}>i\}\); and 3) the V2C messages in iteration \(t+1\), \(l_{v_{j}\to c_{i^{\prime}}}^{(t+1)}\), where \(i^{\prime}\in\{i^{*}\,|\,c_{i^{*}}\in\mathcal{N}(v_{j}),i^{*}<i\}\). Hence, \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) is calculated by: \[\begin{split}\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}} &=\frac{\partial J}{\partial l_{v_{j}}^{(t)}}+\sum_{i^{\prime}\in \{i^{*}\,|\,c_{i^{*}}\in\mathcal{N}(v_{j}),i^{*}>i\}}\frac{\partial J}{ \partial l_{v_{j}\to c_{i^{\prime}}}^{(t)}}\\ &+\sum_{i^{\prime}\in\{i^{*}\,|\,c_{i^{*}}\in\mathcal{N}(v_{j}), i^{*}<i\}}\frac{\partial J}{\partial l_{v_{j}\to c_{i^{\prime}}}^{(t+1)}}.\end{split} \tag{12}\] Posterior joint training abandons the last term in (12) and calculates \(\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}\) as follows: \[\frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}=\frac{\partial J}{ \partial l_{v_{j}}^{(t)}}+\sum_{i^{\prime}\in\{i^{*}\,|\,c_{i^{*}}\in \mathcal{N}(v_{j}),i^{*}>i\}}\frac{\partial J}{\partial l_{v_{j}\to c_{i^{ \prime}}}^{(t)}}. \tag{13}\] Fig. 3b shows the frame error rate (FER) of flooding-scheduled N-NMS decoders for a (3096,1032) LDPC code. The maximum number of decoding iterations is 50. All three methods to prevent gradient explosion are implemented. The gradient clipping uses a threshold of \(l=10^{-3}\).
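The posterior joint training rule (10) has a simple realization in an automatic-differentiation framework: detach the C2V messages where they enter the V2C update (2), but leave them attached in the posterior (3), so that back propagation reproduces (10). The following PyTorch-style sketch is an assumption about one possible implementation (a regular or zero-padded variable-node degree is assumed so that the messages fit in one tensor), not the authors' code.

```python
import torch

def variable_node_update(l_ch, u_c2v, posterior_only: bool = True):
    """l_ch: channel LLRs, shape (n,); u_c2v: incoming C2V messages, shape (n, d).
    With posterior_only=True, gradients w.r.t. u_c2v flow only through the
    posterior, which reproduces (10) during back propagation."""
    u_for_v2c = u_c2v.detach() if posterior_only else u_c2v
    # (2): extrinsic V2C messages (total sum minus the message on the own edge)
    l_v2c = l_ch.unsqueeze(-1) + u_for_v2c.sum(dim=-1, keepdim=True) - u_for_v2c
    # (3): the posterior keeps the full gradient path
    l_post = l_ch + u_c2v.sum(dim=-1)
    return l_v2c, l_post
```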
The performance of BP and NMS decoders with the same decoding schedule and maximum decoding iteration is also compared. The NMS decoder uses a factor of 0.7. The simulation results show that greedy training and posterior joint training have similar FER curves and perform better than gradient clipping. However, posterior joint training has a lower time complexity than greedy training. ## III Node-Degree-Based Weight Sharing N-NMS and N-OMS decoders for long-blocklength LDPC codes are impractical, because the number of parameters of these decoders is proportional to the number of edges in the corresponding Tanner graph. Weight sharing [30] solves this problem by assigning one weight to different neurons in the NN. Different weight-sharing schemes have been proposed to reduce the number of neural weights in N-NMS and N-OMS decoders. However, simple weight-sharing schemes, such as across iterations or edges in [15, 16], degrade the decoding performance to different degrees. This section proposes node-degree-based weight-sharing schemes that assign the same weights to the edges with the same check node degree and/or variable node degree. We call the N-NMS and N-OMS decoders with node-degree-based weight-sharing schemes the neural 2-dimensional NMS (N-2D-NMS) decoder and the neural 2-dimensional OMS (N-2D-OMS) decoder, respectively, because they are similar to the 2D-MS decoders in [31, 32]. Simulation results in Section V show that the N-2D-NMS decoder can deliver the same decoding performance as the N-NMS decoder. Fig. 4: Mean values of messages of a flooding-scheduled N-NMS decoder for a (3096,1032) LDPC code in each iteration show strong correlations to check node degree and variable node degree. ### _Motivation_ In this subsection, we investigate the relationship between the neural weights of a flooding-scheduled N-NMS decoder and node degrees. The N-NMS decoder is trained for a (3096, 1032) LDPC code, the same one used in Section II-C. The maximum decoding iteration is 10. Define the set of neural weights of the N-NMS decoder that are associated with check node degree \(d_{c}\) in the \(t^{th}\) decoding iteration by \(\mathcal{B}^{(t,d_{c})}\), and \(\mathcal{B}^{(t,d_{c})}=\{\beta^{(t)}_{(c_{i},v_{j})}|\text{deg}(c_{i})=d_{c}\}\). Let \(\widetilde{\beta}^{(t,d_{c})}\) be the mean value of \(\mathcal{B}^{(t,d_{c})}\). Fig. 4a shows \(\widetilde{\beta}^{(t,d_{c})}\) versus decoding iteration \(t\) with all possible check node degrees. The simulation result shows a clear relationship between check node degree and \(\widetilde{\beta}^{(t,d_{c})}\), i.e., a larger check node degree corresponds to a smaller \(\widetilde{\beta}^{(t,d_{c})}\). This difference is significant in the first few iterations. Additionally, \(\widetilde{\beta}^{(t,d_{c})}\) changes drastically in the first few iterations for all check node degrees. In order to explore the relationship between the weights and variable node degrees given a check node degree \(d_{c}\) and decoding iteration index \(t\), we further define \(\mathcal{B}^{(t,d_{c},d_{v})}=\{\beta^{(t)}_{(c_{i},v_{j})}\,|\,\text{deg}(c_{i})=d_{c},\text{deg}(v_{j})=d_{v}\}\). We denote the average value of \(\mathcal{B}^{(t,d_{c},d_{v})}\) by \(\widetilde{\beta}^{(t,d_{c},d_{v})}\). Fig. 4b gives the average weights corresponding to various check node degrees and variable node degrees at iteration 4. Statistical results show that, given a specific iteration \(t\) and check node degree \(d_{c}\), a larger \(d_{v}\) indicates a smaller \(\widetilde{\beta}^{(t,d_{c},d_{v})}\).
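The degree-conditioned averages \(\widetilde{\beta}^{(t,d_{c})}\) and \(\widetilde{\beta}^{(t,d_{c},d_{v})}\) used in this analysis can be reproduced with a few lines of NumPy. The data layout assumed below (a per-iteration dictionary of trained weights keyed by edge) is only an illustrative convention, not the format used by the authors.

```python
import numpy as np
from collections import defaultdict

def mean_weights_by_degree(beta, deg_c, deg_v):
    """beta[t][(i, j)]: trained weight of edge (c_i, v_j) in iteration t;
    deg_c[i], deg_v[j]: check / variable node degrees.
    Returns {(t, dc): mean} and {(t, dc, dv): mean}."""
    by_dc, by_dc_dv = defaultdict(list), defaultdict(list)
    for t, edges in beta.items():
        for (i, j), w in edges.items():
            by_dc[(t, deg_c[i])].append(w)
            by_dc_dv[(t, deg_c[i], deg_v[j])].append(w)
    return ({k: float(np.mean(v)) for k, v in by_dc.items()},
            {k: float(np.mean(v)) for k, v in by_dc_dv.items()})
```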
In conclusion, the weights of the N-NMS decoder are correlated with the check node degree, the variable node degree, and the decoding iteration index. Thus, node degrees should affect the weighting of messages on their incident edges when decoding LDPC codes. This observation motivates us to propose a family of N-2D-MS decoders. ### _Neural 2D Normalized MinSum Decoders_ Based on the previous discussion, it is intuitive to consider assigning the same weights to messages with the same check node degree and/or variable node degree. In this subsection, we propose a family of node-degree-based weight-sharing schemes. These weight-sharing schemes can be used on the N-NMS decoder, which gives an N-2D-NMS decoder. In the \(t^{th}\) iteration, a flooding-scheduled N-2D-NMS decoder updates \(u^{(t)}_{c_{i}\to v_{j}}\) as follows: \[\begin{split} u^{(t)}_{c_{i}\to v_{j}}&=\beta^{(t)}_{*}\times\prod_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j}\}}\text{sgn}\left(l^{(t-1)}_{v_{j^{\prime}}\to c_{i}}\right)\\ &\times\min_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j}\}}\left|l^{(t-1)}_{v_{j^{\prime}}\to c_{i}}\right|.\end{split} \tag{14}\] \[l^{(t)}_{v_{j}\to c_{i}}=l^{ch}_{v_{j}}+\alpha^{(t)}_{*}\sum_{c_{i^{\prime}}\in\mathcal{N}(v_{j})\backslash\{c_{i}\}}u^{(t)}_{c_{i^{\prime}}\to v_{j}}, \tag{15}\] \[l^{(t)}_{v_{j}}=l^{ch}_{v_{j}}+\alpha^{(t)}_{*}\sum_{c_{i^{\prime}}\in\mathcal{N}(v_{j})}u^{(t)}_{c_{i^{\prime}}\to v_{j}}. \tag{16}\] \(\beta^{(t)}_{*}\) and \(\alpha^{(t)}_{*}\) are the learnable weights. The subscript * is replaced in Table I with the information needed to identify the specific weight depending on the weight-sharing methodology. Table I lists different weight-sharing types, each identified in the first column by a type number. As a special case, we denote by type 0 the assignment of distinct weights to each edge, i.e., the N-NMS decoder. Columns 2 and 3 describe how each type assigns \(\beta^{(t)}_{*}\) and \(\alpha^{(t)}_{*}\), respectively. In this paper, we refer to a decoder that uses a type-\(x\) weight-sharing scheme as a type-\(x\) decoder. Types 1-4 assign the same weights based on the node degree. In particular, Type 1 assigns the same weight to the edges that have the same check node _and_ variable node degree. Type 2 considers the check node degree and variable node degree separately. As a simplification, type 3 and type 4 only consider the check node degree and the variable node degree, respectively. Dai _et al._ in [29] studied weight sharing based on the edge type of multi-edge-type (MET)-LDPC codes, or protograph-based codes. We also consider this metric for types 5, 6, and 7. Type 5 assigns the same weight to the edges with the same edge type, i.e., the edges that belong to the same position in the protomatrix. In Table I, \(f\) is the lifting factor. Types 6 and 7 assign parameters based only on the horizontal layers (protomatrix rows) and vertical layers (protomatrix columns), respectively. Finally, type 8 assigns a single weight to all edges in each decoding iteration, as in [13, 16]. The gradients \(\frac{\partial J}{\partial\beta^{(t)}_{*}}\) and \(\frac{\partial J}{\partial\alpha^{(t)}_{*}}\) in the N-2D-NMS decoder are accumulated from the gradients of the C2V messages that use \(\beta^{(t)}_{*}\) and \(\alpha^{(t)}_{*}\) in the decoding process, respectively. For example, the type-2 N-2D-NMS decoder assigns \(\beta^{(t)}_{d_{c}}\) to all C2V messages with check node degree \(d_{c}\) and assigns \(\alpha_{d_{v}}^{(t)}\) to all C2V messages with variable node degree \(d_{v}\).
As a result, \[\frac{\partial J}{\partial\beta_{d_{c}}^{(t)}} =\sum_{(c_{i},v_{j})\in\mathcal{E}(d_{c})}u_{c_{i}\to v_{j}}^{(t)*} \frac{\partial J}{\partial u_{c_{i}\to v_{j}}^{(t)}}, \tag{17}\] \[\frac{\partial J}{\partial\alpha_{d_{v}}^{(t)}} =\sum_{(c_{i},v_{j})\in\mathcal{E}(d_{v})}u_{c_{i}\to v_{j}}^{(t)} \frac{\partial J}{\partial\bar{u}_{c_{i}\to v_{j}}^{(t)}}, \tag{18}\] where \(\mathcal{E}(d_{c})\) and \(\mathcal{E}(d_{v})\) are the sets of edges whose check node degree and variable node degree are \(d_{c}\) and \(d_{v}\), respectively, \(u_{c_{i}\to v_{j}}^{(t)*}=\frac{u_{c_{i}\to v_{j}}^{(t)}}{\beta_{(c_{i},v_{j})}^{(t)}}\), and \(\bar{u}_{c_{i}\to v_{j}}^{(t)}=\alpha_{\text{deg}(v_{j})}^{(t)}u_{c_{i}\to v_{j}}^{(t)}\). Fig. 5 gives the relationship between \(u_{c_{i}\to v_{j}}^{(t)*}\), \(u_{c_{i}\to v_{j}}^{(t)}\), and \(\bar{u}_{c_{i}\to v_{j}}^{(t)}\) in the type-2 N-2D-NMS decoder. A (3096,1032) LDPC code and the (16200,7200) DVBS-2 [33] standard LDPC code are considered in this section, and the numbers of parameters per iteration required for various weight-sharing schemes of these two codes are listed in columns 4 and 5 in Table I, respectively. It is shown that the number of parameters required by the node-degree-based weight sharing is less than that required by the protomatrix-based weight sharing. ### _Neural 2D Offset MinSum Decoder_ The node-degree-based weight-sharing schemes can be applied to the N-OMS decoder similarly and lead to a neural 2D OMS (N-2D-OMS) decoder. Specifically, a flooding N-2D-OMS decoder updates \(u_{c_{i}\to v_{j}}^{(t)}\) by: \[\begin{split} u_{c_{i}\to v_{j}}^{(t)}&=\prod_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j}\}}\text{sgn}\left(l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}\right)\\ &\times\text{ReLU}\left(\min_{v_{j^{\prime}}\in\mathcal{N}(c_{i})\backslash\{v_{j}\}}\left|l_{v_{j^{\prime}}\to c_{i}}^{(t-1)}\right|-\beta_{*}^{(t)}-\alpha_{*}^{(t)}\right).\end{split} \tag{19}\] \(\mathrm{ReLU}(x)=\max(0,x)\). The \(l_{v_{j}\to c_{i}}^{(t)}\) and \(l_{v_{j}}^{(t)}\) are updated using (2) and (3). For the N-2D-OMS decoders, the constant value 1 in Table I should be replaced by 0. ### _Hybrid Neural Decoder_ We consider a hybrid training structure that utilizes a neural network combining feed-forward and recurrent modules to reduce the number of parameters further. The hybrid neural decoder uses distinct neural weights for each of the first \(I^{\prime}\) decoding iterations and uses the same weights for the remaining \(I_{T}-I^{\prime}\) iterations. For example, for the hybrid neural NMS decoder, the C2V messages are updated by: \[u_{c_{i}\to v_{j}}^{(t)}=\left\{\begin{array}{ll}\beta_{(c_{i},v_{j})}^{(t)}u_{c_{i}\to v_{j}}^{(t)*},&t<I^{\prime}\\ \beta_{(c_{i},v_{j})}^{(I^{\prime})}u_{c_{i}\to v_{j}}^{(t)*},&I^{\prime}\leq t\leq I_{T},\end{array}\right. \tag{20}\] and the hybrid version of N-2D-NMS decoders can be constructed similarly. The motivation for the hybrid decoder is from the observation that the neural weights of the N-NMS decoder change drastically in the first few iterations but negligibly during the last few iterations, as illustrated in Fig. 4. Therefore, using the same parameters for the last few iterations doesn't cause large performance degradation. ## IV Weighted RCQ Decoder This section combines the N-2D-NMS or N-2D-OMS decoder with the RCQ decoding paradigm and proposes a weighted RCQ (W-RCQ) decoder.
Unlike the RCQ decoder, whose quantizers and dequantizers are updated in each iteration (and each layer, if layer-scheduled decoding is considered), the W-RCQ decoder only uses a small number of quantizers and dequantizers during the decoding process. However, the C2V messages of the W-RCQ decoder will be weighted by dynamic node-degree-based parameters that are trained by a QNN. ### _Layered Decoding and RCQ decoder_ The flooding schedule and layered schedule are two decoding schedules widely used in LDPC decoders. As in (1) to (3), the flooding schedule first updates all C2V messages and then all V2C messages in one iteration. The layered schedule, on the other hand, partitions all the check nodes (or variable nodes) into several layers and updates the C2V messages and V2C messages layer by layer. As an example, in the \(t^{th}\) iteration, a layered MinSum decoder calculates the messages \(u_{c_{i}\to v_{j^{\prime}}}^{(t)}\) and updates the posteriors \(l_{v_{j^{\prime}}}\) as follows: \[l_{v_{j^{\prime}}}=l_{v_{j^{\prime}}}-u_{c_{i}\to v_{j^{\prime}}}^{(t-1)}\ \ \ \forall v_{j^{\prime}}\in\mathcal{N}(c_{i}), \tag{21}\] \[\begin{split} u_{c_{i}\to v_{j^{\prime}}}^{(t)}&=\left(\prod_{v_{j}\in\mathcal{N}(c_{i})\backslash\{v_{j^{\prime}}\}}\text{sgn}(l_{v_{j}})\right)\\ &\times\min_{v_{j}\in\mathcal{N}(c_{i})\backslash\{v_{j^{\prime}}\}}|l_{v_{j}}|,\ \ \forall v_{j^{\prime}}\in\mathcal{N}(c_{i}),\end{split} \tag{22}\] \[l_{v_{j^{\prime}}}=l_{v_{j^{\prime}}}+u_{c_{i}\to v_{j^{\prime}}}^{(t)}\ \ \forall v_{j^{\prime}}\in\mathcal{N}(c_{i}). \tag{23}\] Low-bit-width decoders with uniform quantizers typically suffer large degradation in decoding performance. The authors in [27] propose the reconstruction-computation-quantization (RCQ) paradigm that facilitates dynamic non-uniform quantization to achieve good decoding performance with low message precision. Fig. 6a gives a layered MinSum RCQ (msRCQ) decoding diagram. The layered msRCQ decoder follows equations (21)-(23) but with the non-uniform quantizers \(Q^{(t,r)}\) and reconstruction functions \(R^{(t,r)}\) designed distinctly for each layer \(r=1,\dots,M\) in iteration \(t=1,\dots,I_{T}\). \(M\) is the total number of layers. The \(Q^{(t,r)}\) functions quantize \(b_{v}\)-bit messages to \(b_{c}\)-bit messages, where \(b_{v}>b_{c}\), and the \(R^{(t,r)}\) functions map \(b_{c}\)-bit messages to \(b_{v}\)-bit messages. The dynamic non-uniform \(Q^{(t,r)}\) and \(R^{(t,r)}\) deliver good decoding performance with a small \(b_{c}\) but also bring extra overhead to store those parameters in hardware. As shown in Fig. 6a, extra memory is required to store the \(Q^{(t,r)}\) and \(R^{(t,r)}\) parameters, and extra logic and wires are required to distribute the quantizers and dequantizers to each variable node (VN) unit. As shown in [26, 27], the overhead to store a larger number of \(Q^{(t,r)}\) and \(R^{(t,r)}\) functions may offset the benefit brought by decoding with messages of low bit width. ### _Weighted-RCQ Decoder_ In this section, we combine the RCQ decoder with the neural decoder to propose a weighted RCQ decoding structure. Fig. 6b gives the decoding paradigm of a layer-scheduled weighted OMS-RCQ decoder (W-OMS-RCQ). One goal of W-RCQ is to reduce memory requirements by reducing the number of quantizer/dequantizer pairs \(Q()/R()\) and instead relying on the scalar parameters \(\beta_{(\deg(c_{i}),\deg(v_{j}))}^{(t)}\) determined by the neural network to capture most or all of the dynamic adjustments needed in each iteration.
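For reference, the layered MinSum check-node update (21)-(23) of Section IV-A, on which both the msRCQ and W-RCQ decoders build, can be written compactly as below. This is a plain floating-point sketch without quantization or neural weighting, and it ignores the zero-LLR corner case of the sign operation.

```python
import numpy as np

def layered_minsum_check_update(l_post, u_old, nbr_idx):
    """l_post: posteriors of all variable nodes (modified in place);
    u_old: previous C2V messages of this check node, one per neighbor;
    nbr_idx: indices of the neighboring variable nodes."""
    l_loc = l_post[nbr_idx] - u_old                    # (21): remove old C2V terms
    sgn, mag = np.sign(l_loc), np.abs(l_loc)
    u_new = np.empty_like(u_old)
    for k in range(len(nbr_idx)):                      # (22): extrinsic sign x min
        others = np.delete(np.arange(len(nbr_idx)), k)
        u_new[k] = np.prod(sgn[others]) * np.min(mag[others])
    l_post[nbr_idx] = l_loc + u_new                    # (23): add new C2V terms
    return u_new
```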
The differences between the W-OMS-RCQ decoder and the msRCQ decoder are summarized as follows: * _Reconstruction and Quantization_. Unlike the msRCQ decoding diagram in Fig. 6a, the W-OMS-RCQ decoder only uses very few \(R(\cdot)\) and \(Q(\cdot)\) functions in the decoding, for example, three or fewer, and each \(R(\cdot)\) and \(Q(\cdot)\) is used for several iterations. This reduces required memory and wire complexity. * _Message adjustment_. The W-OMS-RCQ decoder weights the reconstructed C2V messages with additive parameters. As shown in Fig. 6b, extra memory is required to store the weights. Besides, the weights in Fig. 6b can be multiplicative, leading to a W-NMS-RCQ decoder. ### _Non-Uniform Quantizer_ An important design choice for a W-RCQ decoder is the selection of the quantization and reconstruction (dequantization) functions. The authors in [27] use discrete density evolution to design dynamic quantizers and dequantizers for the RCQ decoder. For the W-RCQ decoder, this paper considers quantizers and dequantizers parameterized by power functions. Let \(Q(x)\) be a symmetric \(b_{c}\)-bit quantizer that features sign information and a magnitude quantizer \(Q^{*}(|x|)\). The magnitude quantizer selects one of \(2^{b_{c}-1}\) possible indices using the threshold values \(\{\tau_{0},\tau_{1},...,\tau_{\text{max}}\}\), where \(\tau_{j}=C\left(\frac{j}{2^{b_{c}-1}}\right)^{\gamma}\) for \(j\in\{0,...,2^{b_{c}-1}-1\}\) and \(\tau_{\text{max}}\) is \(\tau_{j_{\text{max}}}\) for \(j_{\text{max}}=2^{b_{c}-1}-1\). Given an input \(x\), which can be decomposed into the sign part \(\operatorname{sgn}(x)\) and the magnitude part \(|x|\), \(Q^{*}(|x|)\in\mathbb{F}_{2}^{b_{c}-1}\) is defined by: \[Q^{*}(|x|)=\begin{cases}j,&\tau_{j}\leq|x|<\tau_{j+1}\\ 2^{b_{c}-1}-1,&|x|\geq\tau_{\text{max}}\end{cases}, \tag{24}\] where \(0\leq j\leq j_{\text{max}}-1\). Let \(s(x)\) be the sign bit of \(x\), which is computed by \(s(x)=\mathbb{1}\left(x<0\right)\), where \(\mathbb{1}\left(\cdot\right)\) is the indicator function. Then, \(Q(x)=[s(x)\;Q^{*}(|x|)]\). The thresholds of \(Q^{*}(|x|)\) have a power-function form and are controlled by two parameters. The parameter \(C\) defines the maximum magnitude the quantizer can take, and \(\gamma\) manipulates the non-uniformity of the quantizer. Specifically, if \(\gamma=1\), \(Q(x)\) becomes a uniform quantizer. Let \(d\in\mathbb{F}_{2}^{b_{c}}\) be a \(b_{c}\)-bit message. \(d\) can be represented as \([d^{\text{MSB}}\tilde{d}]\), where \(d^{\text{MSB}}\in\{0,1\}\) indicates the sign and \(\tilde{d}\in\mathbb{F}_{2}^{b_{c}-1}\) corresponds to the magnitude. The magnitude reconstruction function is \(R^{*}(\tilde{d})=\tau_{\tilde{d}}=C\left(\frac{\tilde{d}}{2^{b_{c}-1}}\right)^{\gamma}\), and \(R(d)=(-2d^{\text{MSB}}+1)R^{*}(\tilde{d})\). Note that both the magnitude quantization function and the magnitude reconstruction function use \(\{\tau_{0},...,\tau_{\text{max}}\}\) as their parameters. The choice of quantizers is heuristic. We start with one quantizer and tune its \(C\) and \(\gamma\). The one-quantizer scheme will be used if the resultant decoder has no error floor above a target FER such as \(10^{-6}\); otherwise, we increase the number of quantizers by one each time and tune their parameters until a set of quantizers that doesn't show an error floor is found. Besides, for the case of multiple quantizers, the general rule of tuning parameters is that the quantizer for earlier iterations takes a smaller magnitude.
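The power-function quantizer and reconstruction pair defined above, with thresholds \(\tau_{j}=C\left(j/2^{b_{c}-1}\right)^{\gamma}\), can be sketched as follows. This is a simplified floating-point reference rather than the fixed-point hardware mapping, and the function and variable names are placeholders.

```python
import numpy as np

def make_qr(C: float, gamma: float, bc: int):
    """Return (Q, R) for a symmetric bc-bit power-function quantizer."""
    levels = 2 ** (bc - 1)
    tau = C * (np.arange(levels) / levels) ** gamma      # tau_0, ..., tau_max

    def Q(x: float):
        s = 1 if x < 0 else 0                            # sign bit s(x)
        j = int(np.searchsorted(tau, abs(x), side="right")) - 1
        return s, min(max(j, 0), levels - 1)             # (sign, magnitude index)

    def R(d):
        s, j = d
        return (1 - 2 * s) * tau[j]                      # R(d) = (-2*s + 1) * tau_j
    return Q, R

# e.g., the single 4-bit pair used later for the (9472,8192) code:
# Q, R = make_qr(C=10, gamma=1.7, bc=4)
```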
The other way to optimize the quantizers is to treat the quantizer parameters as the learnable parameters of the neural network and optimize them in the training process, as in [19]. This method will be studied in our future research. Fig. 6: Decoding diagrams for two distinct versions of layered MinSum decoding: (a) standard RCQ decoder of [27] and (b) proposed weighted (offset) RCQ decoder. ### _Training W-RCQ decoder via a Quantized Neural Network_ Like the neural NMS decoder in [3], the W-RCQ decoder can be unfolded to an NN. The neural network unfolded by the W-RCQ decoder is a QNN because of its quantization and reconstruction functions. The QNN then trains the weights of the W-RCQ decoder. The training was conducted using stochastic gradient descent with mini-batches. Each sample in the mini-batch is a BPSK-modulated all-zero codeword corrupted by additive white Gaussian noise whose variance is in the range where a conventional NMS decoder with a factor of 0.7 reaches an FER between \(10^{-3}\) and \(10^{-2}\). In our training experiments, we assign each sample an \(\frac{E_{b}}{N_{0}}\) for noise generation. The \(\frac{E_{b}}{N_{0}}\) of each sample is chosen such that all the \(\frac{E_{b}}{N_{0}}\) values with a step of 0.1 dB in the specified range are evenly distributed to each mini-batch. One problem of the QNN is that quantization functions result in zero derivatives almost everywhere. In this work, we use a straight-through estimator (STE) [19, 34] in the backward propagation to solve this issue. The STE uses artificial gradients in QNN training to replace the zero derivative of a quantization function in the chain rule. STE is found to be the most efficient training method for QNNs in [34]. ### _Fixed-Point W-RCQ decoder_ This paper uses the pair (\(b_{c}\),\(b_{v}\)) to denote the bit width for the fixed-point decoders, where \(b_{c}\) is the bit width of C2V messages, and \(b_{v}\) is the bit width of V2C messages and the posteriors of variable nodes. For the W-RCQ decoders, the learnable parameters are first trained under a floating-point message representation and then quantized to \(b_{v}\) bits. ## V Simulation Result and Discussion This section evaluates the performance of the N-2D-NMS decoders and the W-RCQ decoders for LDPC codes with different block lengths and code rates. The LDPC codes used in this section are listed in Table II. All the encoded bits are modulated by binary phase-shift keying (BPSK) and transmitted through an Additive White Gaussian Noise (AWGN) channel. ### _(16200,7200) DVBS-2 LDPC code_ Fig. 7a shows the FER performance of N-2D-NMS decoders with various weight sharing types for the (16200, 7200) DVBS-2 LDPC code. The FER performance of BP and NMS decoders is given for comparison. The single multiplicative weight of the NMS decoder is 0.88. All decoders are flooding-scheduled, and the maximum decoding iteration is 50. It is shown that the N-NMS decoder (i.e., type-0 decoder) outperforms BP at \(1.3\) dB with a lower error floor but requires \(4.8\times 10^{4}\) parameters in each iteration. The type-1 and type-2 decoders, which share weights based on the check node degree and variable node degree, deliver a slightly better decoding performance than the N-NMS decoder, with only 13 and 8 parameters per iteration, respectively. Fig. 7a also shows that the FER performance degrades if only considering sharing weights w.r.t. the check node degree (type-3) or the variable node degree (type-4).
Type-4 decoder delivers a similar performance to the N-NMS decoder, whereas the Type-3 decoder is inferior to the N-NMS decoder by 0.04 dB. However, both types 3 and 4 require only 4 parameters in each iteration. Fig. 7: (a) The FER performance of the N-2D-NMS decoders with various weight-sharing types for the (16200,7200) DVBS-2 LDPC code. (b) The FER performance of the hybrid type-2 N-2D-NMS decoder that uses distinct weights in the first 20 iterations and the same weights in the remaining 30 iterations. Fig. 8a and 8b give the \(\beta^{(t)}_{(\text{deg}(c_{i}))}\) and \(\alpha^{(t)}_{(\text{deg}(v_{j}))}\) of the type-2 N-2D-NMS decoder, which align with our observation in the previous section; i.e., in each decoding iteration, a larger degree node corresponds to a smaller value. Besides, as shown in Fig. 8a and 8b, the weights change negligibly after the \(20^{th}\) iteration. Thus, the hybrid type-2 N-2D-NMS decoder with \(I^{\prime}=20\) delivers similar performance to the full feed-forward decoding structure, as shown in Fig. 7b. ### _(9472,8192) Quasi-Cyclic LDPC code_ This subsection designs 3-bit and 4-bit W-OMS-RCQ decoders for a (9472,8192) QC LDPC code and compares them with the fixed-point OMS decoder and RCQ decoders. All decoders in this subsection are layer-scheduled with a maximum iteration of 10. The 4-bit W-OMS-RCQ decoder uses one quantizer/dequantizer pair with \(C=10\), \(\gamma=1.7\) in decoding. The 3-bit W-OMS-RCQ decoder, on the other hand, uses three quantizer/dequantizer pairs. The decoder uses the quantizer/dequantizer with \(C=3\) and \(\gamma=1.3\) in the first six iterations. In iterations 7 to 8, the quantizer/dequantizer has \(C=5\) and \(\gamma=1.3\). The quantizer/dequantizer uses \(C=7\) and \(\gamma=1.3\) in the last two decoding iterations. The pair \((b_{c},b_{v})\) in the legend gives the bit width of each decoder's check node message and variable node message. Fig. 9a compares the FER performance of W-OMS-RCQ decoders with msRCQ decoders and a 5-bit OMS decoder. The decoders in Fig. 9a are also implemented using an FPGA device (Xilinx Zynq UltraScale+ MPSoC) for the study of resource usage. Table III lists the usage of lookup tables (LUTs), registers, block RAM (BRAM), and routed nets of various decoders. For the details of FPGA implementations of the decoders, we refer the readers to [26]. The simulation result shows that the 4-bit msRCQ decoder has the best FER performance. The 4-bit W-OMS-RCQ decoder and 5-bit OMS decoder have similar FER performance, which is inferior to the 4-bit msRCQ decoder by 0.01 dB. However, as shown in Table III, the 4-bit W-OMS-RCQ decoder requires much fewer resources than the 4-bit msRCQ decoder and the 5-bit OMS decoder. Compared to the 5-bit OMS decoder, the 3-bit W-OMS-RCQ and 3-bit msRCQ decoders have a 0.025 and 0.05 dB gap, respectively. Specifically, the 3-bit msRCQ decoder has similar LUT, BRAM, and routed net usage to the 4-bit W-OMS-RCQ decoder. On the other hand, the 3-bit W-OMS-RCQ uses much fewer resources than the 4-bit W-OMS-RCQ decoder. Fig. 8: The change of weights of the type-2 N-2D-NMS decoder for the (16200, 7200) DVBS-2 LDPC code w.r.t. check node degree, variable node degree, and iteration index. Specifically, (a) gives \(\beta^{(t)}_{(\text{deg}(c_{i}))}\) for all possible check node degrees in each decoding iteration \(t\), and (b) gives \(\alpha^{(t)}_{(\text{deg}(v_{j}))}\) for all possible variable node degrees in each decoding iteration \(t\). Fig. 9: (a) FER performance of W-OMS-RCQ decoders, RCQ decoders, and the 5-bit OMS decoder for a (9472, 8192) QC LDPC code. (b) FER performance of 3-bit W-OMS-RCQ decoders with two and three quantizer/dequantizer pairs.
The 3-bit W-OMS-RCQ decoder in Fig. 9a uses three quantizers for three decoding phases. In the first three iterations, most messages have low magnitudes. Hence a quantizer with small \(C\) is required for a finer resolution of the low-magnitude values. However, the message magnitudes increase with the increase of decoding iteration. As a result, quantizers with larger \(C\) should be used correspondingly. Fewer quantizers may not accommodate the message magnitude growth in the decoding process and will result in performance degradation. For example, Fig. 9b considers a 3-bit W-OMS-RCQ decoder that uses two quantizer/dequantizer pairs: the first pair has \(C_{1}=3\), \(\gamma_{1}=1.3\) and is used for iterations \(1\sim 7\); the second pair has \(C_{2}=5\), \(\gamma_{2}=1.3\) and is used for iterations \(8\sim 10\). The simulation result shows that the 3-bit W-OMS-RCQ decoder that uses 2 quantizer/dequantizer pairs has an early error floor at an FER of \(10^{-7}\). ### \(k=1032\) _Protograph-Based Raptor-Like code_ 5G LDPC codes have the protograph-based raptor-like (PBRL) [36] structure, which offers inherent rate-compatibility and excellent decoding performance. In this subsection, we examine the performance of N-2D-NMS decoders and W-RCQ decoders for a \(k=1032\) PBRL LDPC code, whose supported rates are listed in Table I. The edge distribution of the lowest-rate code, which corresponds to the full parity check matrix, is also given in Table I. All the decoders in this subsection are layer-scheduled with a maximum of 10 decoding iterations. Fig. 10 shows the FER performance of the N-2D-NMS decoders with various weight sharing types for the PBRL code with the lowest code rate \(\frac{1}{3}\). As a comparison, the decoding performance of the N-NMS (type 0) decoder and the NMS decoder is also given. All of the decoders use floating-point message representation. The simulation results show that the N-NMS decoder has a more than 0.5 dB improvement over the NMS decoder but requires \(1.6\times 10^{4}\) parameters per iteration, as given in Table I. On the other hand, the N-2D-NMS decoders with types 1, 2, and 5 have the same decoding performance as the N-NMS decoder but only use 41, 15, and 101 parameters in each iteration, respectively. By only considering sharing weights based on check node degrees, N-2D-NMS decoders of types 3 and 6 have a degradation of around 0.05 dB compared with the N-NMS decoder, with 8 and 17 parameters in each iteration, respectively. On the other hand, when only considering sharing the weights based on the variable node degrees, N-2D-NMS decoders of types 4 and 7 have a degradation of around 0.2 dB compared with the N-NMS decoder, with 7 and 25 parameters in each iteration, respectively. Thus, for this (3096,1032) PBRL LDPC code, assigning weights based only on check nodes can benefit more than assigning weights based on variable nodes. Fig. 11 gives the FER performance of fixed-point W-NMS-RCQ decoders for the \(k=1032\) PBRL code with rates \(\frac{1}{3}\), \(\frac{1}{2}\), \(\frac{2}{3}\) and \(\frac{8}{9}\). The W-NMS-RCQ decoder assigns 4 bits to C2V messages and 10 bits to V2C messages. Two quantizer/dequantizer pairs are used for the W-NMS-RCQ decoder across all investigated rates.
The first quantizer has \(C_{1}=7\), \(\gamma_{1}=1.7\) and is used for the first 7 iterations. The second quantizer has \(C_{2}=10\), \(\gamma_{2}=2.3\) and is used for the last three iterations. We use a 6-bit OMS decoder as the benchmark because it delivers better decoding performance than the NMS decoder with the same bit width. We first consider the 4-bit W-NMS-RCQ decoder with type-1 weight sharing that assigns the same weight to the edges with the same check node degree and variable node degree. The decoder is rate-specific; i.e., a W-NMS-RCQ decoder is trained separately for each considered code rate. The simulation results show that, targeting an FER of \(10^{-6}\), the 4-bit rate-specific W-NMS-RCQ decoder outperforms the 6-bit OMS decoder by 0.1\(\sim\)0.15 dB for all considered code rates. Fig. 11 also gives the FER curves of the 4-bit type-1 rate-specific W-OMS-RCQ decoder at various code rates. The simulation indicates that W-OMS-RCQ doesn't perform as well as the W-NMS-RCQ decoder. Fig. 10: FER performance of N-2D-NMS decoders with various weight sharing types for a (3096,1032) PBRL LDPC code compared with N-NMS (type 0) and NMS. For the PBRL code, the protomatrix of each possible rate is a sub-matrix of a base protomatrix [36]. As shown in Table I, the type-5 weight sharing assigns the same weight to the edges corresponding to the same element in the protomatrix. Hence, it is possible to use _one_ trained type-5 neural decoder to match different code rates. We refer to such a decoder as a rate-compatible decoder. In [29], the authors propose training the rate-compatible decoder using samples from different code rates. Fig. 11 shows the decoding performance of the rate-compatible type-5 W-NMS-RCQ decoder. The simulation result shows that for the higher rates, such as \(\frac{2}{3}\) and \(\frac{8}{9}\), the rate-compatible type-5 W-NMS-RCQ decoder has a similar decoding performance to the rate-specific type-1 W-NMS-RCQ decoder. However, for the lower rates such as \(\frac{1}{3}\) and \(\frac{1}{2}\), the rate-compatible type-5 W-NMS-RCQ decoder doesn't deliver decoding performance as good as the rate-specific type-1 W-NMS-RCQ decoder. Besides, considering the four rates in Fig. 11, the numbers of neural weights for the rate-specific type-1 and rate-compatible type-5 W-NMS-RCQ decoders are 96 and 101, respectively. ## VI Conclusion Neural networks have improved MinSum message-passing decoders for low-density parity-check (LDPC) codes by multiplying or adding weights to the messages, where a neural network determines the weights. However, the neural network complexity to determine distinct weights for each edge is high, often limiting the application to relatively short LDPC codes. In particular, when training the neural network using PyTorch or TensorFlow, memory constraints prevent designing weights for long-blocklength codes. This paper solves this memory issue by compactly storing feed-forward messages, which allows us to design weights for blocklengths of 16,000 bits. As an additional contribution, this paper identifies a gradient explosion problem in the neural decoder training and provides a posterior joint training method that addresses this problem. For neural decoders such as the N-NMS and N-OMS decoders, assigning distinct weights to each edge in each decoding iteration is impractical for long-blocklength codes because of the storage burden associated with the huge number of neural weights.
This paper proposes node-degree-based weight-sharing schemes that drastically reduce the number of required weights with often negligible increase in frame error rate. Finally, this paper combines the idea of weights designed by a neural network with the nonlinear quantization paradigm of RCQ, producing the W-RCQ decoder, a non-uniformly quantized decoder that delivers excellent decoding performance in the low-bitwidth regime. Unlike the RCQ decoder, which designs quantizer/dequantizer pairs for each layer and iteration, the W-RCQ decoder only uses a small number of quantizer/dequantizer pairs.
2305.11453
Exact conditions for antiUnruh effect in (1+1)-dimensional spacetime
Exact conditions for antiUnruh effect in (1+1)-dimensional spacetime are obtained. For detectors with Gaussian switching functions, the analytic results are similar to previous ones, indicating that antiUnruh effect occurs when the energy gap matches the characteristic time scale. However, this conclusion does not hold for detectors with square wave switching functions, in which case the condition turns out to depend on both the energy gap and the characteristic time scale in some nontrivial way. We also show analytically that there is no antiUnruh effect for detectors with Gaussian switching functions in (3+1)-dimensional spacetime.
Dawei Wu, Ji-chong Yang, Yu Shi
2023-05-19T06:18:45Z
http://arxiv.org/abs/2305.11453v1
# Exact conditions for antiUnruh effect in (1+1)-dimensional spacetime ###### Abstract Exact conditions for antiUnruh effect in (1+1)-dimensional spacetime are obtained. For detectors with Gaussian switching functions, the analytic results are similar to previous ones, indicating that antiUnruh effect occurs when the energy gap matches the characteristic time scale. However, this conclusion does not hold for detectors with square wave switching functions, in which case the condition turns out to depend on both the energy gap and the characteristic time scale in some nontrivial way. We also show analytically that there is no antiUnruh effect for detectors with Gaussian switching functions in (3+1)-dimensional spacetime. ## 1 Introduction It is well known [1; 2; 3] that a uniformly accelerated observer views the Minkowski vacuum as a thermal state with the temperature proportional to the observer's acceleration \(T=a/2\pi\), usually called the Unruh effect. To give a coordinate-invariant characterization of Unruh effect, people often employ the so-called Unruh-DeWitt detector [1; 4] and study how it "tinkles" when accelerated. The simplest Unruh-DeWitt detector is a two-level system, and it is expected that when an accelerating detector interacts with some quantum field, there is a probability of transition from the initial ground state to the excited state and the probability increases with the increase of the acceleration [5]. However, in recent years it was found [6] that under some circumstances the transition probability decreases as the acceleration increases, which seems to imply that the detector gets cooler when the acceleration increases. This effect is called the antiUnruh effect. Since the antiUnruh effect is defined according to the behavior of detectors, it is, unlike the Unruh effect, highly dependent on the types of detectors. The antiUnruh effect may lead to the enhancement of the entanglement between Unruh-DeWitt detectors [7; 8; 9; 10]. Moreover, the results can be applied to black holes [11; 12; 13; 14; 15] and other thermal systems [16; 17]. However, despite some discussions on the mechanism of antiUnruh phenomena [6; 18], the physical reason for it remains unclear. In this paper, we derive the exact conditions of the antiUnruh effect for detectors with Gaussian and square wave switching functions. In (1+1)-dimensional spacetime, for Gaussian switching functions, the antiUnruh effect appears when \(\Omega\sigma<1/\sqrt{2}\) while for square wave switching functions, the antiUnruh effect appears when \((2(\Omega\sigma)^{2}-3)\cos(2\Omega\sigma)-4\Omega\sigma\sin(2\Omega\sigma)+3<0\), where \(\Omega\) and \(\sigma\) are the energy gap and the characteristic switching time respectively. We also find that no antiUnruh effect exists in (3+1)-dimensional spacetime, at least for Gaussian switching functions. We expect our analytic calculations and results be useful in revealing the physical reason of the antiUnruh effect. This paper is organized as the following. In Section II we review the basic model for the antiUnruh effect in (1+1)-dimensional and (3+1)-dimensional spacetimes. We present and analyze our main results in Section III. Section IV is the summary and conclusion. ## 2 Model In this section, we review the simplest model for this effect [6]. First, we consider a uniformly accelerated two-level Unruh-DeWitt detector with the energy gap \(\Omega\) in (1+1)-dimensional Minkowski spacetime. 
The detector interacts with a massive scalar field \(\varphi\), with the interaction Hamiltonian \[H_{I}=\lambda\chi(\tau,\sigma)\mu(\tau)\varphi\left(x\left(\tau\right),t \left(\tau\right)\right), \tag{1}\] where \(\lambda\) is the strength of the coupling and \(\tau\) is the proper time along the detector's worldline, \(\mu(\tau)=\exp\left(i\Omega\tau\right)\sigma^{+}+\exp\left(-i\Omega\tau\right) \sigma^{-}\) is the monopole operator, \(\chi\) is the switching function, which we can, for example, choose as the Gaussian type \[\chi(\tau,\sigma)=e^{-\frac{\tau^{2}}{2\sigma^{2}}}, \tag{2}\] with \(\sigma\) being the characteristic time. Suppose the initial state is \(\left|g\right\rangle\left|0\right\rangle\), where \(\left|g\right\rangle\) refers to the ground state of the detector and \(\left|0\right\rangle\) refers to the vacuum state of the scalar field in the Minkowski spacetime. The evolution of the system is \[U\left|g\right\rangle\left|0\right\rangle=\left(1-i\int d\tau H(\tau)+\cdots \right)\left|g\right\rangle\left|0\right\rangle \tag{3}\] where we have used the perturbation expansion. Given the monopole operator \(\mu(\tau)=\exp\left(i\Omega\tau\right)\sigma^{+}+\exp\left(-i\Omega\tau \right)\sigma^{-}\) and the mode expansion of the massive scalar field in (1+1)-dimensional Minkowski spacetime \(\varphi(x,t)=\int\frac{dk}{\sqrt{4\pi\omega}}\left[a(k)e^{-i(\omega t-kx)}+a^ {\dagger}(k)e^{i(\omega t-kx)}\right]\) where \(\omega=\sqrt{k^{2}+m^{2}}\), \(m\) is the mass of the scalar field, we obtain the final state of the system, \[\left|g\right\rangle\left|0\right\rangle-i\lambda\int d\tau\chi(\tau,\sigma)e^{i \Omega\tau}\int\frac{dk}{\sqrt{4\pi\omega}}e^{i\left(\omega t-kx\right)}\left|e \right\rangle\left|1\right\rangle_{k} \tag{4}\] where \(\left|e\right\rangle\) is the excited state of the detector and \(\left|1\right\rangle_{k}\) is the one-particle state of the field in mode \(k\). The typical trajectory of a uniformly accelerated detector can be given as \(x(\tau)=a^{-1}\left(\cosh\left(a\tau\right)-1\right)\) and \(t(\tau)=a^{-1}\sinh\left(a\tau\right)\), where \(a\) is the acceleration. Therefore the transition probability is \[P(\Omega,a,\sigma,m)=\int_{-\infty}^{+\infty}dk|I_{k}|^{2}, \tag{5}\] with \[\begin{array}{c}I_{k}=\frac{\lambda}{\sqrt{4\pi\omega}}\int_{- \infty}^{+\infty}d\tau\chi(\tau,\sigma)\exp\left(i\Omega\tau\right.\\ \left.+i\frac{\omega}{a}\sinh a\tau-i\frac{k}{a}\left(\cosh a\tau-1\right) \right).\end{array} \tag{6}\] Similar results hold for antiUnruh effect in (3+1)-dimensional spacetime. In such a case, the scalar field mode expansion is \[\varphi(\vec{x},t)=\int\frac{d^{3}\vec{k}}{\sqrt{(2\pi)^{3}2\omega}}\left[a( \vec{k})e^{-i(\omega t-\vec{k}\cdot\vec{x})}+a^{\dagger}(\vec{k})e^{i(\omega t -\vec{k}\cdot\vec{x})}\right]. \tag{7}\] Suppose the detector accelerates along x axis (\(y=z=0\)). 
Then following similar calculations, we obtain the final state \[\left|g\right\rangle\left|0\right\rangle-i\lambda\int d\tau\chi(\tau,\sigma)e^ {i\Omega\tau}\int\frac{d^{3}\vec{k}}{\sqrt{(2\pi)^{3}2\omega}}e^{i(\omega t- \vec{k}\cdot\vec{x})}\left|e\right\rangle\left|1\right\rangle_{\vec{k}}, \tag{8}\] and the transition probability \[P(\Omega,a,\sigma,m)=\int_{-\infty}^{+\infty}d^{3}\vec{k}|I_{\vec{k}}|^{2}, \tag{9}\] with \[\begin{array}{c}I_{\vec{k}}=\frac{\lambda}{\sqrt{(2\pi)^{3}2\omega}}\int_{ -\infty}^{+\infty}d\tau\chi(\tau,\sigma)\exp\left(i\Omega\tau\right.\\ \left.+i\frac{\omega}{a}\sinh a\tau-i\frac{k_{x}}{a}\left(\cosh a\tau-1\right) \right).\end{array} \tag{10}\] ## 3 Results In this section, we present the analytic conditions for antiUnruh effects in (1+1)-dimensional and (3+1)-dimensional spacetimes. The details of the calculation are given in the Appendix. ### (1+1)-dimensional spacetime In the case of \(D=1+1\), we focus on Unruh-DeWitt detectors with Gaussian or square wave switching functions, which can be written as \[\chi^{(G)}(\tau,\sigma) =\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{r^{2}}{2\sigma^{2}}}, \tag{10}\] \[\chi^{(S)}(\tau,\sigma) =\frac{1}{2\sigma}H(\sigma-\tau)H(\sigma+\tau),\] where \(H\) is the Heaviside step function. Note that the Fourier transformations of the switching functions are \[\tilde{\chi}^{(G)}(\omega,\sigma) =\frac{e^{-\frac{\omega^{2}\sigma^{2}}{2}}}{\sqrt{2\pi}}, \tag{11}\] \[\tilde{\chi}^{(S)}(\omega,\sigma) =\frac{\sin(\sigma\omega)}{\sqrt{2\pi}\sigma\omega},\] and they are both square integrable, \[\int_{-\infty}^{\infty}d\omega\left|\tilde{\chi}^{(G)}(\omega, \sigma)\right|^{2} =\frac{1}{2\sqrt{\pi}\sigma}, \tag{12}\] \[\int_{-\infty}^{\infty}d\omega\left|\tilde{\chi}^{(S)}(\omega, \sigma)\right|^{2} =\frac{1}{2\sigma}.\] #### 3.1.1 Gaussian switching function We start with Gaussian switching functions. As shown in Eqs. (17) and (22), we obtain the analytic expression for the transition probability in the small mass limit, \[P_{\pm}^{(G)}=\frac{1}{2\pi\sigma^{2}}\left(P_{LO}^{(G)}+P_{ NLO}^{(G)}\right)+\mathcal{O}(m^{3}), \tag{13}\] \[P_{LO}^{(G)}=\sigma^{2}e^{-\Omega^{2}\sigma^{2}}\left\{\left( \Omega^{2}\sigma^{2}\,_{2}F_{2}\left(\begin{array}{c}1,1\\ \frac{3}{2},2\end{array}\right|\Omega^{2}\sigma^{2}\right)-\log\frac{m\sigma}{2 }-\frac{\pi}{2}\mathrm{erfi}(\Omega\sigma)-\frac{\gamma_{E}}{2}\right)\] \[-\frac{a^{2}\sigma^{2}}{12}\left(1-2\Omega^{2}\sigma^{2}-\frac{a^ {2}\sigma^{2}}{60}\left(4\Omega^{4}\sigma^{4}-12\Omega^{2}\sigma^{2}+3\right) \right)\right\}+\mathcal{O}(a^{6}\sigma^{8}),\] \[P_{NLO}^{(G)}=\frac{2m^{2}\sigma^{4}}{\sqrt{\pi}}\left(2\Omega \sigma\left.\left(\frac{d}{dx}\,_{1}F_{1}\left(\begin{array}{c}x\\ \frac{3}{2}\end{array}\right|-\Omega^{2}\sigma^{2}\right)\right)\right|_{x=2}\] \[-\left(\left(2\Omega^{2}\sigma^{2}-1\right)F(\Omega\sigma)- \Omega\sigma\right)\left(2\log(m\sigma)+\pi+\gamma_{E}-1\right)\right),\] where \({}_{p}F_{q}\) are generalized hypergeometric function, as defined in Eq. (16). Note that although infrared divergence is encountered in \(\log(m\sigma)\), the result is still valid for small mass. At the leading order of \(a^{2}\), the coefficient of \(a^{2}\) is \(1-2\Omega^{2}\sigma^{2}\). Therefore the antiUnruh effect can be found at \[\Omega\sigma<\frac{1}{\sqrt{2}}. \tag{10}\] This is in agreement with the original statement of antiUnruh effect which claims the interaction time interval to be finite \(\sigma\sim\Omega^{-1}\)[6]. 
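As a quick numerical illustration of this condition (and of the square-wave condition quoted in the Introduction), one can simply evaluate the sign of the leading-order \(a^{2}\) coefficient. The short script below only restates the formulas given in this paper; the function names are placeholders.

```python
import numpy as np

def gaussian_antiunruh(omega_sigma: float) -> bool:
    """Gaussian switching: the leading a^2 coefficient of P is proportional to
    -(1 - 2*(Omega*sigma)^2), so antiUnruh occurs iff Omega*sigma < 1/sqrt(2)."""
    return -(1.0 - 2.0 * omega_sigma**2) < 0.0

def square_wave_antiunruh(omega_sigma: float) -> bool:
    """Square-wave switching: antiUnruh iff
    (2x^2 - 3)cos(2x) - 4x sin(2x) + 3 < 0, with x = Omega*sigma."""
    x = omega_sigma
    return (2 * x**2 - 3) * np.cos(2 * x) - 4 * x * np.sin(2 * x) + 3 < 0

print(gaussian_antiunruh(0.5), gaussian_antiunruh(1.0))   # True False
print(square_wave_antiunruh(20.5 * np.pi))                # True, the example discussed below
```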
The comparison of the analytic and numerical results is shown in Figs. 1 and 2. #### 3.1.2 Square wave switching function For detectors with square wave switching functions, as shown in Eq. (14), the transition probability is given as \[P_{\pm}^{(S)}=\frac{1}{2\sigma}\left(P_{LO}^{(S)}+P_{NLO}^{(S)} \right)+\mathcal{O}(m^{3}),\] \[P_{LO}^{(S)}=\frac{1}{2\pi\Omega^{2}}\left\{-2\text{Ci}(2\Omega \sigma)-2\log\left(\frac{1}{m\sigma}\right)\cos(2\Omega\sigma)+2\log\left( \frac{2\Omega}{m}\right)+4\Omega s\text{Si}(2\Omega\sigma)-2\pi\Omega\sigma-2\right.\] \[\left.+\pi\sin(2\Omega\sigma)+2\gamma\cos(2\Omega\sigma)+2\cos( 2\Omega\sigma)+a^{2}\frac{2\Omega^{2}\sigma^{2}\cos(2\Omega\sigma)-4\Omega \sigma\sin(2\Omega\sigma)-3\cos(2\Omega\sigma)+3}{6\Omega^{2}}\right.\] \[\left.+\frac{a^{4}}{180\Omega^{4}}\left(-2\Omega^{4}\sigma^{4} \cos(2\Omega\sigma)+8\Omega^{3}\sigma^{3}\sin(2\Omega\sigma)+18\Omega^{2} \sigma^{2}\cos(2\Omega\sigma)-24\Omega\sigma\sin(2\Omega\sigma)\right.\right.\] \[\left.\left.-15\cos(2\Omega\sigma)+15\right)+\mathcal{O}\left( \frac{\sigma^{8}a^{6}}{\Omega^{2}}\right)\right\},\] \[P_{NLO}^{(S)}=\frac{m^{2}}{4\Omega^{4}\pi}\left\{\log\left(\frac {64\Omega^{6}}{m^{6}}\right)-6\text{Ci}(2\Omega\sigma)+\sin(2\Omega\sigma) \left(8\Omega\sigma\log(m\sigma)-2\pi\Omega^{2}\sigma^{2}+4(2\gamma_{E}-1) \Omega\sigma+3\pi\right)\right.\] \[\left.+\cos(2\Omega\sigma)\left(\left(6-4\Omega^{2}\sigma^{2} \right)\log(m\sigma)-4(\gamma_{E}-1)\Omega^{2}\sigma^{2}-4\pi\Omega\sigma+6 \gamma_{E}+5\right)+4\Omega\sigma\text{Si}(2\Omega\sigma)-2\pi\Omega\sigma-5\right\} \tag{23}\] where Ci and Si are cosine and sine integral functions, as defined in Eq. (A.26). Likewise, we assume the mass of the scalar field to be small though nonzero. At the leading order of \(a^{2}\), the coefficient of \(a^{2}\) is \((2(\Omega\sigma)^{2}-3)\cos(2\Omega\sigma)-4\Omega\sigma\sin(2\Omega\sigma)+3\). Therefore the condition for antiUnruh effect can be written in "closed form" as \[\left(2(\Omega\sigma)^{2}-3\right)\cos(2\Omega\sigma)-4\Omega\sigma\sin(2 \Omega\sigma)+3<0. \tag{10}\] The comparison of the analytic and numerical results is shown in Fig. 3. It can be checked easily that the condition of antiUnruh effect for detectors with square-wave switching functions is quite different from that for detectors with Gaussian switching functions. The antiUnruh effect can be found not as \(\Omega\sigma\to 0\) but at, for example, \(\Omega\sigma=20.5\pi\). This means that antiUnruh effects occur even when the interaction time is long (with the energy gap fixed). Therefore our results support the argument that antiUnruh effect are not due to non-equilibrium transient effects [6], since the KMS condition [19; 20] is satisfied [6; 21]. Furthermore, the condition Eq. (10) depends on \(\Omega\sigma\) in the form of sine and cosine function, which can be naturally expected from the Fourier transformation of the square wave function Eq. (11). In particular, this means that for some given energy gap, antiUnruh effect can be found from time to time with the increase of \(\sigma\), which is a surprising result. ### (3+1)-dimensional spacetime We conclude this section by displaying expressions for the transition probability of Unruh-DeWitt detectors with Gaussian switching functions in (3+1)-dimensional spacetime. As shown in Eq. 
(A.31), the result can be obtained as \[\begin{split}& P_{\pm}^{D=3+1}=\frac{1}{2\pi\sigma^{2}}\left(P_{a }^{(0)}+P_{a}^{(2)}\right)+\mathcal{O}(a^{4}),\\ & P_{a}^{(2)}=\frac{a^{2}\sigma^{2}}{24\pi}e^{-\Omega^{2}\sigma^{ 2}}+\mathcal{O}(m).\end{split} \tag{11}\] Note that \(P_{a}^{(0)}\) is UV divergent; however \(P_{a}^{(0)}\sim\mathcal{O}(a^{0})\) and is therefore of little concern to us. The dependence of \(P_{a}^{(2)}\) on \(a\) shows that when \(a\) is small there is no antiUnruh effect in the small mass limit. The same result for massless scalar field can be obtained by using a somewhat different method in [22]. ## 4 Conclusion We obtain the analytic conditions for antiUnruh effect in (1+1)-dimensional spacetime. The product of the detector's energy gap \(\Omega\) and the interaction time \(\sigma\) is the characteristic quantity in the conditions. We show that for detectors with Gaussian switching functions, the condition is \(\Omega\sigma<\frac{1}{\sqrt{2}}\). However, for detectors with square wave switching functions, antiUnruh effect could happen when \(\Omega\sigma\) is large. Furthermore, for a fixed energy gap, whether antiUnruh effect occur or not depends on the interaction time non-monotonically. Our results support the argument that antiUnruh effect is in accordance with the KMS condition and is therefore not a transient effect. We hope that our calculations would provide some insight on the physical nature of antiUnruh effect. Finally we show that for detectors with Gaussian switching functions there is no antiUnruh effect in (3+1)-dimensional spacetime. The analytic results with small mass In general, the integral to be calculated can be written as \[P_{\pm}=\int d^{d}k\left|\int_{-\infty}^{\infty}d\tau\frac{1}{\sqrt{4\pi\omega}} \chi(\tau,\sigma)\exp\left(i\Omega\tau+i\frac{\omega}{a}\sinh(a\tau)-i\frac{k_{ x}}{a}\left(\cosh(a\tau)-1\right)\right)\right|^{2}. \tag{10}\] where \(\omega\equiv\sqrt{k^{2}+m^{2}}\), \(\chi(\tau,\sigma)\) is the switching function, and \(\Omega\) is defined as \(\pm\Omega_{0}\) for \(P_{\pm}\). ### The case of D=1+1 It is convenient to integrate over \(k\) first. The integral can be written in a somewhat symmetric form as \[P_{\pm}=\int_{-\infty}^{\infty}d\tau_{1}\int_{-\infty}^{\infty} d\tau_{2}\chi(\tau_{1},\sigma)\chi(\tau_{2},\sigma)e^{i\Omega(\tau_{2}- \tau_{1})}\left(P_{k}(A,B)+P_{k}(A,-B)\right),\] \[P_{k}(A,B)=\int_{0}^{\infty}dk\frac{1}{4\pi\sqrt{m^{2}+k^{2}}} \exp\left(i\left(A\sqrt{m^{2}+k^{2}}-Bk\right)\right), \tag{11}\] \[A=\frac{\sinh(a\tau_{2})-\sinh(a\tau_{1})}{a},\ \ B=\frac{\cosh(a\tau_{2})- \cosh(a\tau_{1})}{a}.\] In the case of small mass, one have \[P_{k}(A,B)=\int_{0}^{\infty}dk\frac{1}{4\pi\sqrt{m^{2}+k^{2}}} \left(e^{i(A-B)k}+\frac{iAm^{2}}{2k}e^{i(A-B)k}+\mathcal{O}(\frac{Am^{4}}{k^{3 }})\right). \tag{12}\] The leading-order term of Eq. (12) can be integrated out as \[\int_{0}^{\infty}dk\frac{1}{\sqrt{m^{2}+k^{2}}}\exp(iCk))=-\hat{F} \left(1,\frac{C^{2}m^{2}}{4}\right)-\frac{1}{2}i\pi\mathbf{L}_{0}(Cm)+\log\left(- \frac{1}{2}iCm\right)(-I_{0}(Cm))\] \[=-\log\left(-\frac{1}{2}iCm\right)-\gamma_{E}-iCm-\frac{C^{2}m^{ 2}}{4}\left(\log(-\frac{1}{2}iCm)-\gamma_{E}+1\right)+\mathcal{O}(m^{3}), \tag{13}\] where \(\gamma_{E}\approx 0.57721\) is the Euler constant and \(I_{0}\) is modified Bessel function of the first kind. 
\(\mathbf{L}_{0}\) is modified Struve function, and \(\hat{F}\) is defined as \[\hat{F}(a,z)\equiv\left.\frac{d}{da^{\prime}}\left(\tfrac{{}_{0}F_{1}(a^{ \prime}|z)}{\Gamma(a^{\prime})}\right)\right|_{a^{\prime}=a}, \tag{14}\] where \({}_{p}F_{q}\) is the generalized hypergeometric function defined as \[{}_{p}F_{q}\left(\left.\begin{array}{c}a_{1},a_{2},...,a_{p}\\ b_{1},b_{2},...,b_{q}\end{array}\right|x\right)=\sum_{n=0}^{\infty}\frac{\prod _{i=1}^{p}(a_{i})_{n}}{\prod_{j=1}^{q}(b_{j})_{n}}\frac{x^{n}}{n!}. \tag{15}\] Verified by numerical results, we conclude when mass is small, \[\int_{0}^{\infty}dk\frac{1}{\sqrt{m^{2}+k^{2}}}\exp\left(i(A\sqrt{k^{ 2}+m^{2}}-Bk)\right)=-\log\left(-\frac{1}{2}i(A-B)m\right)-\gamma_{E}+\mathcal{ O}(m). \tag{10}\] Next we can calculate the next-to-leading order term. Note that the integral can be written as \[P_{k}(A,B)=P_{k}(0,B)+\int_{0}^{A}dA^{\prime}\frac{\partial P_{ k}(A^{\prime},B)}{\partial A^{\prime}}, \tag{11}\] where the first term \(P_{k}(0,B)\) is already known in Eq. (10) as \[P_{k}(0,B)=-\log\left(-\frac{1}{2}iBm\right)-\gamma_{E}-iBm- \frac{B^{2}m^{2}}{4}\left(\log(-\frac{1}{2}iBm)-\gamma_{E}+1\right)+\mathcal{ O}(m^{3}). \tag{12}\] We define the integrand of the second term as \(p(m)\) \[p(m)\equiv\frac{\partial P_{k}(A^{\prime},B)}{\partial A^{\prime}}=\frac{i}{ 4\pi}\int_{0}^{\infty}dke^{i(A^{\prime}\sqrt{m^{2}+k^{2}}-Bk)}, \tag{13}\] and similarly, \[p(m)=p(0)+\frac{i}{4\pi}\int_{0}^{m}dm^{\prime}\int_{0}^{\infty}dk\frac{ \partial e^{i(A^{\prime}\sqrt{m^{\prime 2}+k^{2}}-Bk)}}{\partial m^{\prime}}. \tag{14}\] The first term can be integrated out, while the second term is \[\frac{i}{4\pi}\int_{0}^{\infty}dk\frac{\partial e^{i(A^{\prime} \sqrt{m^{\prime 2}+k^{2}}-Bk)}}{\partial m^{\prime}}=-\frac{A^{\prime}m^{ \prime}}{4\pi}\int_{0}^{\infty}dk\frac{1}{\sqrt{m^{\prime 2}+k^{2}}}e^{i(A^{ \prime}\sqrt{m^{\prime 2}+k^{2}}-Bk)}, \tag{15}\] with the leading-order term also already known in Eq. (10). Therefore we have \[p(m)=-\frac{1}{A^{\prime}-B}-\frac{A^{\prime}m^{2}}{2}\left(-\log\left(-\frac {i}{2}(A^{\prime}-B)m\right)-\gamma_{E}+\frac{1}{2}\right)+\mathcal{O}(m^{3}) \tag{16}\] and using Eqs. (11), (11 - 13) and (16), \[\begin{split}& P_{\pm}=\frac{1}{4\pi}\int d \tau_{1}d\tau_{2}\chi(\tau_{1},\sigma)\chi(\tau_{2},\sigma)\exp(i\Omega(\tau_{ 2}-\tau_{1}))\left(2\log\frac{2a}{m}-2\log\left(2i\sinh\left(\frac{a(\tau_{1} -\tau_{2})}{2}\right)\right)-2\gamma_{E}\\ &+\frac{1}{4}m^{2}(\tau_{1}-\tau_{2})^{2}(2\log(im(\tau_{1}-\tau _{2}))-2+2\gamma_{E}-\log(4))\right)+\mathcal{O}(m^{3}).\end{split} \tag{17}\] #### a.1.1 The Gaussian switching function The Gaussian switching function can be written as \[\chi^{(G)}(\tau,\sigma)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{r^{2}}{2 \sigma^{2}}}. 
\tag{101}\] Using \[T=\frac{\tau_{1}+\tau_{2}}{2},\,\,\,t=\tau_{1}-\tau_{2}, \tag{102}\] and integrating over \(T\) first, we get \[\begin{split}& P_{\pm}^{(G)}=\frac{1}{2\pi\sigma^{2}}\left(P_{LO}^{(G) }+P_{NLO}^{(G)}\right)+\mathcal{O}(m^{3}),\\ & P_{LO}^{(G)}=\sigma^{2}e^{-\Omega^{2}\sigma^{2}}\left\{\Omega^{ 2}\sigma^{2}\,\,_{2}F_{2}\left(\,\frac{1}{3},2\,\right|\Omega^{2}\sigma^{2} \right)-\log\frac{m\sigma}{2}-\frac{\pi}{2}\text{erfi}(\Omega\sigma)-\frac{ \gamma_{E}}{2}\right\}+I_{t},\\ & P_{NLO}^{(G)}=\frac{2m^{2}\sigma^{4}}{\sqrt{\pi}}\left(2\Omega \sigma\,\left(\frac{d}{dx}\,\,_{1}F_{1}\left(\,\frac{x}{3}\,\right|-\Omega^{2 }\sigma^{2}\right)\right)\right|_{x=2}\\ &\quad-\left(\left(2\Omega^{2}\sigma^{2}-1\right)F(\Omega\sigma) -\Omega\sigma\right)\left(2\log(m\sigma)+\pi+\gamma_{E}-1\right)\right), \end{split} \tag{103}\] where \(I_{t}\) is defined as \[I_{t}=-\frac{\sqrt{\pi}\sigma}{\pi}\int_{0}^{\infty}dt\exp\left(- \frac{t^{2}}{4\sigma^{2}}\right)\cos\left(\Omega t\right)\log\left(\frac{2 \sinh\frac{at}{2}}{at}\right). \tag{104}\] Considering only the case in which \(a<1\), we have \[\log\left(\frac{2\sinh\frac{at}{2}}{at}\right)=\frac{1}{24}a^{2}t^{2}-\frac{1 }{2880}a^{4}t^{4}+\frac{1}{181440}a^{6}t^{6}+\mathcal{O}(a^{8}), \tag{105}\] therefore \[\begin{split}& I_{t}=\frac{1}{24}I_{t}^{1}-\frac{1}{2880}I_{t}^{2 }+\frac{1}{181440}I_{t}^{3}+\mathcal{O}(a^{8}\sigma^{10}),\\ & I_{t}^{n}=-\frac{\sqrt{\pi}\sigma}{\pi}\int_{0}^{\infty}dt \exp\left(-\frac{t^{2}}{4\sigma^{2}}\right)\cos\left(\Omega t\right)a^{2n}t^{2 n}\\ &=-\pi^{\frac{1}{4}}2^{n}\sigma^{2}e^{-\frac{1}{2}\Omega^{2} \sigma^{2}}\sqrt{\frac{(2n)!}{\Omega}}(ia\sigma)^{2n}\phi_{2n}(\Omega,\sigma) \sim\mathcal{O}(a^{2n}\sigma^{2n+2}),\end{split} \tag{106}\] where \(\phi_{2n}(\Omega\sigma)\) is the wave function of harmonic oscillator defined as \[\phi_{n}(\Omega,\sigma)\equiv\frac{\left(\frac{\Omega^{2}}{\pi} \right)^{\frac{1}{4}}}{\sqrt{2^{n}n!}}e^{-\frac{\Omega^{2}\sigma^{2}}{2}}H_{n} (\Omega\sigma), \tag{107}\] and \(H_{n}(x)\) is the Hermit polynomial. We keep the result to order \(\mathcal{O}(a^{4}\sigma^{6})\) and obtain \[\begin{split}& P_{LO}^{(G)}=\sigma^{2}e^{-\Omega^{2}\sigma^{2}} \left\{\left(\Omega^{2}\sigma^{2}\ _{2}F_{2}\left(\left.\begin{array}{c}1,1\\ \frac{3}{2},2\end{array}\right|\Omega^{2}\sigma^{2}\right)-\log\frac{m\sigma}{2 }-\frac{\pi}{2}\text{erfi}(\Omega\sigma)-\frac{\gamma_{E}}{2}\right)\\ &\left.-\frac{a^{2}\sigma^{2}}{12}\left(1-2\Omega^{2}\sigma^{2}- \frac{a^{2}\sigma^{2}}{60}\left(4\Omega^{4}\sigma^{4}-12\Omega^{2}\sigma^{2}+3 \right)\right)\right\}+\mathcal{O}(a^{6}\sigma^{8}).\end{split} \tag{10}\] #### a.1.2 The square wave switching function The square wave switching function can be written as \[\chi^{(S)}(\tau,\sigma)=\frac{1}{2\sigma}H(\sigma-\tau)H(\sigma+\tau). \tag{11}\] where \(H(x)\) is the Heaviside step function. Also using Eq. (10) and the variable substitution in Eq. (11), we can easily integrate \(T\) out and obtain \[\begin{split}& P_{\pm}^{(S)}=\frac{1}{\sigma}\text{Re}\left[ \frac{1}{4\pi}\int_{0}^{2\sigma}dt(2\sigma-t)\exp(-i\Omega t)\left(2\log\frac{2 a}{m}-2\log\left(2i\sinh\left(\frac{at}{2}\right)\right)\right.\right.\\ &\left.\left.-2\gamma_{E}+\frac{1}{4}m^{2}t^{2}(2\log(imt)-2+2 \gamma_{E}-\log(4))\right)\right]+\mathcal{O}(m^{3}).\end{split} \tag{12}\] Similarly, we use the expansion in Eq. 
(10) and find \[\begin{split}& P_{\pm}^{(S)}=\frac{1}{2\sigma}\left(P_{LO}^{(S)} +P_{NLO}^{(S)}\right)+\mathcal{O}(m^{3}),\\ & P_{LO}^{(S)}=\frac{1}{2\pi\Omega^{2}}\left\{-2\text{Ci}(2 \Omega\sigma)-2\log\left(\frac{1}{m\sigma}\right)\cos(2\Omega\sigma)+2\log \left(\frac{2\Omega}{m}\right)+4\Omega s\text{Si}(2\Omega\sigma)-2\pi\Omega \sigma-2\right.\\ &\left.+\pi\sin(2\Omega\sigma)+2\gamma\cos(2\Omega\sigma)+2\cos (2\Omega\sigma)+a^{2}\frac{2\Omega^{2}\sigma^{2}\cos(2\Omega\sigma)-4\Omega \sigma\sin(2\Omega\sigma)-3\cos(2\Omega\sigma)+3}{6\Omega^{2}}\right.\\ &\left.+\frac{a^{4}}{180\Omega^{4}}\left(-2\Omega^{4}\sigma^{4} \cos(2\Omega\sigma)+8\Omega^{3}\sigma^{3}\sin(2\Omega\sigma)+18\Omega^{2} \sigma^{2}\cos(2\Omega\sigma)-24\Omega\sigma\sin(2\Omega\sigma)\right.\\ &\left.-15\cos(2\Omega\sigma)+15\right)+\mathcal{O}\left(\frac{ \sigma^{8}a^{6}}{\Omega^{2}}\right)\right\},\\ & P_{NLO}^{(S)}=\frac{m^{2}}{4\Omega^{4}\pi}\left\{\log\left( \frac{64\Omega^{6}}{m^{6}}\right)-6\text{Ci}(2\Omega\sigma)+\sin(2\Omega \sigma)\left(8\Omega\sigma\log(m\sigma)-2\pi\Omega^{2}\sigma^{2}+4(2\gamma_{E }-1)\Omega\sigma+3\pi\right)\right.\\ &\left.+\cos(2\Omega\sigma)\left(\left(6-4\Omega^{2}\sigma^{2} \right)\log(m\sigma)-4(\gamma_{E}-1)\Omega^{2}\sigma^{2}-4\pi\Omega\sigma+6 \gamma_{E}+5\right)+4\Omega\sigma\text{Si}(2\Omega\sigma)-2\pi\Omega\sigma-5 \right\}\end{split} \tag{13}\] where Ci and Si are cosine and sine integral functions defined as \[\text{Ci}(z)\equiv-\int_{z}^{\infty}dt\frac{\cos(t)}{t},\ \ \text{Si}(z)\equiv\int_{0}^{z}dt \frac{\sin(t)}{t}. \tag{14}\] ### The case of D=3+1 In the case of \(D=3+1\), the integral is UV divergent. However, we can still extract how \(P_{\pm}\) depends on \(a\) with small mass and small \(a\). Expanding the integrand over \(a\), we obtain \[\begin{split}&\exp\left(i\frac{i\omega}{a}\sinh\left(a\tau\right)- \frac{ik_{x}}{a}\left(\cosh\left(a\tau\right)-1\right)\right)\\ &=e^{i\tau\omega}-\frac{1}{2}ie^{i\tau\omega}k_{x}t^{2}a-\frac{ 1}{24}a^{2}\left(\tau^{3}e^{i\tau\omega}\left(3k_{x}^{2}\tau-4i\omega\right) \right)+\mathcal{O}(a^{3}).\end{split}\] (A.27) We integrate over \(\tau\) using Gaussian switching function, and find \[\begin{split}& I\equiv\left|\int_{-\infty}^{\infty}dte^{-\frac{t^{2 }}{2\sigma^{2}}}e^{i\Omega t}\exp\left(i\frac{i\omega}{a}\sinh\left(at\right)- \frac{ik_{x}}{a}\left(\cosh\left(at\right)-1\right)\right)\right|^{2}=I_{a}^{ (0)}+I_{a}^{(2)}+\mathcal{O}(a^{4}),\\ & I_{a}^{(0)}=2\pi\sigma^{2}\exp\left(-(\omega+\Omega)^{2}\sigma ^{2}\right),\\ & I_{a}^{(2)}=\frac{1}{3}\pi a^{2}\sigma^{6}e^{-(\omega+\Omega)^{2 }\sigma^{2}}\left(6k_{x}^{2}\Omega^{2}\sigma^{2}+12k_{x}^{2}\Omega\sigma^{2} \omega+6k_{x}^{2}\sigma^{2}\omega^{2}-3k_{x}^{2}+2\Omega^{3}\sigma^{2}\omega \right.\\ &\left.+6\Omega^{2}\sigma^{2}\omega^{2}+6\Omega\sigma^{2}\omega^{3 }-6\Omega\omega+2\sigma^{2}\omega^{4}-6\omega^{2}\right).\end{split}\] (A.28) Using \[\int d^{d}kf(|k|)k_{i}^{2}=\int d^{d}kf(|k|)\frac{k^{2}}{d},\] (A.29) we obtain \[\begin{split}&\frac{1}{16\pi^{3}}\int d^{d}k\frac{1}{\omega}I_{a}^{ (2)}=\frac{1}{4\pi^{2}}\int_{0}^{\infty}dk\frac{k^{2}}{\omega}\frac{1}{3}\pi a ^{2}\sigma^{6}e^{-(\omega+\Omega)^{2}\sigma^{2}}\left(2k^{2}\Omega^{2}\sigma^ {2}+4k^{2}\Omega\sigma^{2}\omega+2k^{2}\sigma^{2}\omega^{2}-k^{2}+2\Omega^{3} \sigma^{2}\omega\right.\\ &\left.+6\Omega^{2}\sigma^{2}\omega^{2}+6\Omega\sigma^{2}\omega^{ 3}-6\Omega\omega+2\sigma^{2}\omega^{4}-6\omega^{2}\right).\end{split}\] (A.30) It is possible to obtain analytic results when \(m\to 0\), that is, 
\[\begin{split}& P_{\pm}^{D=3+1}=\frac{1}{2\pi\sigma^{2}}\left(P_{a}^ {(0)}+P_{a}^{(2)}\right)+\mathcal{O}(a^{4})\\ & P_{a}^{(2)}=\frac{1}{4\pi^{2}}\frac{1}{3}\pi a^{2}\sigma^{6} \int_{0}^{\infty}dkk^{2}e^{-(k+\Omega)^{2}\sigma^{2}}\left(8k\Omega^{2}\sigma^ {2}+10k^{2}\Omega\sigma^{2}+4k^{3}\sigma^{2}-7k+2\Omega^{3}\sigma^{2}-6\Omega \right)+\mathcal{O}(m)\\ &=\frac{a^{2}\sigma^{2}}{24\pi}e^{-\Omega^{2}\sigma^{2}}+ \mathcal{O}(m).\end{split}\] (A.31)
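As a numerical cross-check of the massless-limit expression in Eq. (A.31), the short script below evaluates the \(k\)-integral directly and compares it with the closed form \(a^{2}\sigma^{2}e^{-\Omega^{2}\sigma^{2}}/(24\pi)\); it is only an illustrative consistency check, and the values of \(\Omega\), \(\sigma\), and \(a\) are arbitrary.

```python
# Consistency check of Eq. (A.31): compare the numerically evaluated k-integral
# (massless limit, m -> 0) with the closed form
#   P_a^(2) = a^2 sigma^2 / (24 pi) * exp(-Omega^2 sigma^2).
# Omega, sigma, a are arbitrary test values, not values taken from the paper.
import numpy as np
from scipy.integrate import quad

def P_a2_numeric(Omega, sigma, a):
    def integrand(k):
        poly = (8 * k * Omega**2 * sigma**2 + 10 * k**2 * Omega * sigma**2
                + 4 * k**3 * sigma**2 - 7 * k + 2 * Omega**3 * sigma**2 - 6 * Omega)
        return k**2 * np.exp(-(k + Omega)**2 * sigma**2) * poly
    val, _ = quad(integrand, 0.0, np.inf)
    return (1.0 / (4 * np.pi**2)) * (np.pi / 3) * a**2 * sigma**6 * val

def P_a2_closed(Omega, sigma, a):
    return a**2 * sigma**2 / (24 * np.pi) * np.exp(-Omega**2 * sigma**2)

for Omega, sigma, a in [(0.5, 1.0, 0.1), (1.0, 2.0, 0.2), (2.0, 0.7, 0.05)]:
    # The two printed values should agree up to the neglected O(m) terms
    # and numerical integration error.
    print(Omega, sigma, a, P_a2_numeric(Omega, sigma, a), P_a2_closed(Omega, sigma, a))
```

Because \(P_{a}^{(2)}\) is positive, the transition probability grows with \(a\) for small \(a\), consistent with the absence of the antiUnruh effect stated in the conclusion.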
2302.04947
Gaussian Process-Gated Hierarchical Mixtures of Experts
In this paper, we propose novel Gaussian process-gated hierarchical mixtures of experts (GPHMEs). Unlike other mixtures of experts with gating models linear in the input, our model employs gating functions built with Gaussian processes (GPs). These processes are based on random features that are non-linear functions of the inputs. Furthermore, the experts in our model are also constructed with GPs. The optimization of the GPHMEs is performed by variational inference. The proposed GPHMEs have several advantages. They outperform tree-based HME benchmarks that partition the data in the input space, and they achieve good performance with reduced complexity. Another advantage is the interpretability they provide for deep GPs, and more generally, for deep Bayesian neural networks. Our GPHMEs demonstrate excellent performance for large-scale data sets, even with quite modest sizes.
Yuhao Liu, Marzieh Ajirak, Petar Djuric
2023-02-09T21:39:20Z
http://arxiv.org/abs/2302.04947v2
# Gaussian Process-Gated Hierarchical Mixtures of Experts ###### Abstract In this paper, we propose novel Gaussian process-gated hierarchical mixtures of experts (GPHMEs) that are used for building gates and experts. Unlike in other mixtures of experts where the gating models are linear to the input, the gating functions of our model are inner nodes built with Gaussian processes based on random features that are non-linear and non-parametric. Further, the experts are also built with Gaussian processes and provide predictions that depend on test data. The optimization of the GPHMEs is carried out by variational inference. There are several advantages of the proposed GPHMEs. One is that they outperform tree-based HME benchmarks that partition the data in the input space. Another advantage is that they achieve good performance with reduced complexity. A third advantage of the GPHMEs is that they provide interpretability of deep Gaussian processes and more generally of deep Bayesian neural networks. Our GPHMEs demonstrate excellent performance for large-scale data sets even with quite modest sizes. Gaussian processes, mixtures of experts, soft decision trees, random features. ## 1 Introduction In this paper, we build a hierarchical mixture of experts by way of Gaussian processes (GPs). Models based on hierarchical mixtures of experts (HMEs) have been used in numerous regression, classification, and fusion applications in healthcare, finance, and pattern recognition, [17]. These models can be viewed as conditional mixture models where distributions of target variables are represented by mixtures of experts, with the experts and the mixing coefficients being conditioned on the input variables. The model parameters are usually estimated by maximizing the likelihood, but this results in severe overfitting. In combating this issue, [14], formulated a fully Bayesian treatment of the model based on variational inference. The authors in [1] also introduced an end-to-end differentiable amortized variational inference algorithm for HMEs and used a recurrent neural network to approximate the posterior distribution over tree node routing decisions. We observe that the hierarchical mixtures of experts share the same framework of soft decision trees with fixed tree structure. We recall that a decision tree is a hierarchical structure composed of internal decision nodes and terminal leaves [15], [16]. A canonical decision tree is composed of internal nodes that represent tests on attributes. Based on the results of the test, the tree assigns samples to one of the children. The leaves on the other hand hold the labels for the classification tasks or are constants for the regression tasks. As a result, a sample traverses a single path from the root to one of the leaves. The nodes can be univariate, which entails that they use only one feature of the input and compare it against a threshold value [16]. If they are multivariate nodes, they define a linear discriminant in the input space that is used for comparisons [18]Murthy, Kasif, and Salzberg], [21]. This discriminant can be generalized to be nonlinear [10]. Trees can have any kind of nodes chosen by a statistical model selection procedure [21]. Unlike in hard deterministic trees, in soft probabilistic decision trees, all the children are selected with a certain probability, [14]Irsoy, Yildiz, and Alpaydin]. Namely, all the possible paths to all the leaves are traversed and the final decision is contributed by all the leaves but with different probabilities. 
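To make the soft-routing mechanism concrete, the following minimal sketch (purely illustrative, with random placeholder weights and linear sigmoid gates as in classical HMEs) traverses all paths of a depth-two tree and forms the prediction as a path-probability-weighted mixture of the leaf distributions.

```python
# Minimal soft decision tree of depth 2 (4 leaves): linear sigmoid gates,
# path probabilities = products of branch probabilities along each path,
# prediction = path-probability-weighted mixture of leaf class distributions.
# All weights are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3                       # input dimension, number of classes
gate_w = rng.normal(size=(3, D))  # gates: root, left child, right child
leaf_logits = rng.normal(size=(4, K))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    p_root = sigmoid(gate_w[0] @ x)          # P(go left at root)
    p_l = sigmoid(gate_w[1] @ x)             # P(go left at left child)
    p_r = sigmoid(gate_w[2] @ x)             # P(go left at right child)
    path_probs = np.array([p_root * p_l, p_root * (1 - p_l),
                           (1 - p_root) * p_r, (1 - p_root) * (1 - p_r)])
    leaf_dists = np.exp(leaf_logits)
    leaf_dists /= leaf_dists.sum(axis=1, keepdims=True)   # softmax per leaf
    return path_probs @ leaf_dists            # mixture over all leaves

x = rng.normal(size=D)
print(predict(x))                              # sums to 1 over the K classes
```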
Gaussian soft decision trees for interpretable feature-based classification were studied in [15]. In [14], a deep neural network (DNN) was used to train a soft decision tree that mimics the input-output function discovered by a neural network (NN). The authors use soft decision trees in order to provide interpretability for DNNs and explainability of the representations of individual hidden layers. They are equivalent to a hierarchical mixture of experts with parametric probabilistic tree-based aggregation [17]. The experts are the leaves, while the coefficients are obtained by the gating nodes. GP extensions to HMEs have also been studied, [15]Shi, Murray-Smith, and Titterington], [16]. Further, [15], investigated the unsupervised case in hierarchical GP latent variable models. We observe that the GP-based HME models rely mostly on function spaces rather than feature spaces. Sparse GPs reduce the complexity of standard GPs from cubic to quadratic. There are two main approaches for reducing the complexity: the inducing point-based approach and the random feature (RF)-based approach. Unlike inducing point-based methods, the random feature framework does not require any matrix decomposition, and instead, it only needs matrix products, which in turn boosts its speed significantly. Random feature-based GPs transform nonlinear input spaces into linear kernel feature spaces (hereinafter called feature spaces to distinguish from the input spaces). Several papers have pointed out the connection between GPs and NNs, [18]Lee, Bahri, Novak, Schoenholz, Pennington, and Sohl-Dickstein], [Wilson and Izmailov(2020)], [Dutordoir et al.(2021)]Dutordoir, Hensman, van der Wilk, Ek, Ghahramani, and Durrande], [Pleiss and Cunningham(2021)]. Given the relationship between the GPs and single-layered NNs with an infinite number of hidden units [Neal(2012)], GPs alleviate the issue of specifying the number of units in hidden layers by implicitly working with infinite representations. In view of feature spaces, GP models yield Bayesian neural networks (BNNs) that quantify uncertainties. One approximation of GPs with feature spaces takes advantage of random Fourier features. Random Fourier features for large-scale kernel machines were proposed in [Rahimi and Recht(2007)] and applications of random features to GPs were studied in [Lazaro-Gredilla et al.(2010)Lazaro-Gredilla, Quinonero-Candela, Rasmussen, and Figueiras-Vidal]. Variational learning of the posterior over the frequencies when the squared exponential kernel is used was proposed in [Gal and Turner(2015)]. The RF expansion for different kernels of GPs results in different activation functions of NNs, for instance, trigonometric for the Radial Basis Function (RBF) kernel, and Rectified Linear Unit (ReLU) functions for the ARC-COSINE kernel. The above models, however, partition the data only in the input space, either in the form of HMEs or soft decision trees. In other words, the gating models or the inner nodes are linear functions of the inputs. In this paper, we propose GP-gated HMEs (GPHMEs) with structures of fixed complete binary trees where both, the gating and expert models, rely on RF expansions of GPs. This allows for making decisions by the GPs in linear feature spaces and for a considerable reduction of complexity of the trees. We optimize the GPHMEs by using variational inference and by exploiting the reparameterization trick. 
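The random-feature view referred to above can be illustrated with a short sketch of the Rahimi-Recht construction: spectral frequencies drawn for an RBF kernel yield sin/cos features whose inner product approximates the exact kernel. The lengthscale and the number of features below are arbitrary illustrative choices.

```python
# Random Fourier features for an RBF kernel: phi(x)^T phi(x') approximates
# the exact kernel; accuracy improves with the number J of spectral samples.
import numpy as np

rng = np.random.default_rng(1)
D, J, lengthscale = 4, 2000, 1.5

Omega = rng.normal(scale=1.0 / lengthscale, size=(D, J))   # spectral frequencies

def phi(x):
    proj = x @ Omega
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(J)

def rbf(x, y):
    return np.exp(-np.sum((x - y) ** 2) / (2 * lengthscale ** 2))

x, y = rng.normal(size=D), rng.normal(size=D)
print(phi(x) @ phi(y), rbf(x, y))   # the two numbers should be close
```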
In practice, the results show that the optimal height of a tree is no greater than four even for large-scale data sets with more than millions of samples. The smaller size of our tree structure is a result of the nonlinear transformation of the input space and the oblique decision mechanisms that are created. Further, the GPHME outperforms the tree-based benchmarks that partition the input data. Our model can also be used in providing interpretability of deep GPs (DGPs). DGPs are deep belief networks based on a stack of GP mappings, where each hidden layer is composed of a multivariate GP. In practice, DGPs require less depth and width compared to DNNs. However, the DGPs encounter the problem of interpreting the hidden layers. Similar to the interpretability of DNNs distilled by soft decision trees [Frosst and Hinton(2017)], this paper provides ways of getting insights into understanding how a DGP makes its decisions. In summary, the contributions of this paper are as follows: (i) we propose a novel HME, GPHME, that relies on RF-based GPs ; (ii) we demonstrate the ability of our work to outperform related state-of-the-art methods, including HMEs, decision trees, and GP-based models. Further, the proposed GPHME has reduced complexity and is readily applicable to large-scale problems; (iii) we quantify the uncertainty of probability distributions offered by GPs compared to trees; (iv) we provide an explainable way for interpreting the behaviors of DGPs. ## 2 Background A GP is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of these random variables has a multivariate normal distribution. More specifically, for a finite set of inputs \(\mathbf{X}\in\mathbb{R}^{N\times D_{x}}\), where \(N\) is the number of inputs and \(D_{x}\) is the dimension of the input, the corresponding outputs \(\mathbf{y}\in\mathbb{R}^{N}\) follow a Gaussian process \(f\), where \(f(\mathbf{X})\sim\mathcal{N}(\mathbf{0},\kappa(\mathbf{X},\mathbf{X}))\), with \(\kappa(\cdot,\cdot)\) being a kernel or covariance function. A popular example of a stationary kernel is the Radial Basis Function (RBF) defined by \[\kappa(\mathbf{x},\mathbf{x}^{\prime})=\sigma_{\lambda}^{2}\overline{\kappa} (\mathbf{x},\mathbf{x}^{\prime})=\sigma_{\lambda}^{2}\exp\left[-\frac{1}{2} \sum_{d=1}^{D_{x}}\frac{(x_{d}-x_{d}^{\prime})^{2}}{\lambda_{d}^{2}}\right], \tag{1}\] where \(\sigma_{\lambda}^{2}\) is the kernel variance, \(\lambda_{d}\) is the kernel lengthscale, and \(\overline{\kappa}\) is the standardized kernel with norm \(||\overline{\kappa}(\cdot,\cdot)||\leq 1\). Another commonly used kernel is the ARC-COSINE kernel with a degree \(n\) \[\widetilde{\kappa}(\mathbf{x},\mathbf{x}^{\prime})=\frac{||\mathbf{x}||\times ||\mathbf{x}^{\prime}||}{\pi}J_{n}\left(\alpha\right), \tag{2}\] where \[J_{n}(\alpha)=\left(-1\right)^{n}\left(\sin\alpha\right)^{2n+1}\left(\frac{1}{ \sin\alpha}\frac{\partial}{\partial\alpha}\right)^{n}\left(\frac{\pi-\alpha} {\sin\alpha}\right), \tag{3}\] and \[\alpha=\cos^{-1}\left(\frac{\mathbf{x}^{\top}\mathbf{x}^{\prime}}{||\mathbf{ x}||||\mathbf{x}^{\prime}||}\right). \tag{4}\] This kernel is referred to as an arc-cosine kernel because of its dependence on the angle \(\alpha\) and the arc-cosine function. In the paper, we use \(n=1\) because it produces ReLu random features, and \[J_{1}(\alpha)=\sin\alpha+(\pi-\alpha)\cos\alpha. 
\tag{5}\] This kernel can also implement automated relevance determination (ARD) by dividing \(x_{d}\) with \(\lambda_{d}\). ### _Random Feature Expansions for Gaussian processes_ Bochner's theorem states that if \(\overline{\kappa}\left(\mathbf{x}_{i},\mathbf{x}_{j}\right)=\overline{\kappa }\left(\mathbf{x}_{i}-\mathbf{x}_{j}\right)\) is a continuous shift-invariant normalized covariance function, it can be rewritten as the Fourier transform of a non-negative measure \(p(\boldsymbol{\omega})\)[Rahimi and Recht(2007)]. Let \(\boldsymbol{\omega}\) represent a vector of spectral frequencies, \(i=\sqrt{-1}\), and \(\boldsymbol{\Delta}=\mathbf{x}_{i}-\mathbf{x}_{j}\). Then we can write \[\overline{\kappa}(\boldsymbol{\Delta})=\int p(\boldsymbol{\omega})\exp\left(i \boldsymbol{\Delta}^{\top}\boldsymbol{\omega}\right)d\boldsymbol{\omega}, \tag{6}\] where \(p(\boldsymbol{\omega})\) is the Fourier transform of \(\overline{\kappa}\left(\mathbf{x}_{i},\mathbf{x}_{j}\right)\). We drop the complex part of the argument of the expectation because the covariance function and the nonnegative measures are real, and we keep \(\cos\left(\boldsymbol{\Delta}^{\top}\boldsymbol{\omega}\right)=\cos\left(( \mathbf{x}-\mathbf{x}^{\prime})^{\top}\boldsymbol{\omega}\right)\), which can also be expressed as \(\cos\left(\mathbf{x}^{\top}\boldsymbol{\omega}\right)\cos\left(\mathbf{x}^{ \prime\top}\boldsymbol{\omega}\right)+\sin\left(\mathbf{x}^{\top}\boldsymbol{ \omega}\right)\sin\left(\mathbf{x}^{\top}\boldsymbol{\omega}\right)\). Next, we note that with the above expansion we can estimate \(\overline{\kappa}\left(\mathbf{x},\mathbf{x}^{\prime}\right)\) using Monte Carlo sampling. If \(\mathbf{z}(\mathbf{x},\boldsymbol{\omega})=\left[\cos\left(\mathbf{x}^{\top} \boldsymbol{\omega}\right),\sin\left(\mathbf{x}^{\top}\boldsymbol{\omega} \right)\right]\), an unbiased estimate of \(\overline{\kappa}\left(\mathbf{x},\mathbf{x}^{\prime}\right)\) can be obtained by \[\overline{\kappa}\left(\mathbf{x},\mathbf{x}^{\prime}\right)\approx\frac{1}{J }\sum_{j=1}^{J}\mathbf{z}\left(\mathbf{x},\boldsymbol{\omega}_{j}\right) \mathbf{z}\left(\mathbf{x}^{\prime},\boldsymbol{\omega}_{j}\right)^{\top}, \tag{7}\] where the \(\mathbf{\omega}_{j}\)s are samples from \(p(\mathbf{\omega})\), and \(J\) is the number of random samples of spectral frequencies. Using \(\mathbf{x}\) and the random samples \(\mathbf{\omega}_{j}\), which are columns of \(\mathbf{\Omega}\), and where \(\mathbf{\Omega}\in\mathbb{R}^{D_{x}\times J}\) (or, \(\mathbf{\Omega}_{\cdot j}=\mathbf{\omega}_{j}\)), we define the random features for the RBF kernel by \[\mathbf{\phi}(\mathbf{x})\triangleq\frac{\sigma_{\lambda}}{\sqrt{J}}[\sin( \mathbf{x}^{\top}\mathbf{\Omega}),\cos(\mathbf{x}^{\top}\mathbf{\Omega})]^{\top},\] where \[\sin(\mathbf{x}^{\top}\mathbf{\Omega})=\left[\sin\left(\mathbf{x}^{ \top}\mathbf{\omega}_{1}\right)\ \sin\left(\mathbf{x}^{\top}\mathbf{\omega}_{2}\right)\ \ldots\sin\left(\mathbf{x}^{\top}\mathbf{\omega}_{J}\right)\ \right]. \tag{8}\] The definition of \(\cos(\mathbf{x}^{\top}\mathbf{\Omega})\) is analogous. Following [11], an integral representation of the ARC-COSINE kernel is given by \[\widetilde{\kappa}(\mathbf{x},\mathbf{x}^{\prime})\] \[\qquad=2\int\mathbf{x}^{\top}\mathbf{\omega}\ \mathbf{\omega}^{\top}\mathbf{x}^{\prime}\,H(\mathbf{\omega}^{\top}\mathbf{x})H(\mathbf{ \omega}^{\top}\mathbf{x}^{\prime})\mathcal{N}(\mathbf{\omega}|\mathbf{0},\mathbf{ I})\mathrm{d}\mathbf{\omega}, \tag{9}\] where \(H(\cdot)\) is the Heaviside function. 
Similarly to the RBF kernel, the random feature based on this covariance leads to \[\mathbf{\phi}(\mathbf{x})\triangleq\frac{\sqrt{2}\sigma_{\lambda}}{\sqrt{J}}\max \left(\mathbf{0},\mathbf{\Omega}^{\top}\mathbf{x}\right). \tag{10}\] We note that the identity feature is defined by \[\mathbf{\phi}(\mathbf{x})\triangleq[1\ \mathbf{x}^{\top}]^{\top}. \tag{11}\] Our model reduces to an ordinary Bayesian HME when we take the feature space from (11). Therefore, we express the different approximation of the kernels by \(k(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{\phi}(\mathbf{x})^{\top}\mathbf{\phi}( \mathbf{x}^{\prime})\), and we define the GP approximation of \(f\) by \[\widehat{f}(\mathbf{x})=\mathbf{\phi}(\mathbf{x})^{\top}\mathbf{w}, \tag{12}\] where the weight \(\mathbf{w}\) is a parameter vector with a Gaussian prior, \(\mathcal{N}(\mathbf{0},\mathbf{I})\). ## 3 Gaussian Process-Gated Hierarchical Mixtures of Experts Here we propose GP-gated HMEs (GPHMEs) trained with mini-batch gradient descent to build mixtures of experts. An example of a GPHME is shown in Fig. 1. Each inner node \(\nu\) of the tree has a learned filter \(\mathbf{w}_{\nu}\) and a random vector \(\mathbf{\Omega}_{\nu}\), whereas each leaf node \(l\) has a learned distribution \(Q_{l}\). At each inner node, the probability of taking the leftmost branch is \(\sigma(z_{\nu})\). The variable \(z_{\nu}\) is defined by \[z_{\nu}(\mathbf{x})=\mathbf{\phi}(\mathbf{x}^{\top}\mathbf{\Omega}_{\nu})\mathbf{w}_{ \nu}, \tag{13}\] where \(\mathbf{x}\) is the input to the model, and \(\mathbf{\phi}\) is the random feature. The equation (13) gives a nonlinear partition of the input space. The model described by Fig. 1 is a hierarchical mixture of experts and learns a hierarchy of filters that are used to assign each sample to the GP experts with respective path probabilities. Unlike in [10], where each expert is actually a "bigot" because it does not use the test data to compute the experts' probability distributions, the experts in our model learn their distributions over the possible output classes given a test input \(\mathbf{x}\) according to \[Q_{l}^{k}=\frac{\exp(z_{l}^{k})}{\sum_{k^{\prime}}\exp(z_{l}^{k^{\prime}})}, \tag{14}\] where each \(z_{l}^{k}=\mathbf{\phi}(\mathbf{x}^{\top}\mathbf{\Omega}_{l})\mathbf{w}_{l}^{k}\) is a learned distribution for class \(k\) at the \(l\)th leaf, and \(Q_{l}^{k}\) denotes the distribution of probability at that leaf. The objective function, the mixture log-likelihood of \(\mathbf{\Theta}\) for an input-output pair \((\mathbf{x},y)\), is defined by \[\log p(y|\mathbf{x},\mathbf{\Theta})=\sum_{l}P(l|\mathbf{x},\mathbf{\Theta})\log p(y| \mathbf{x},l,\mathbf{\Theta}), \tag{15}\] where \(P(l|\mathbf{x},\mathbf{\Theta})\) is the probability of arriving at leaf node \(l\) given the input \(\mathbf{x}\) and \(\mathbf{\Theta}\), and \(p(y|\mathbf{x},l,\mathbf{\Theta})=Q_{l}^{y}\) is the likelihood of \(\mathbf{\Theta}\) of the \(l\)th expert. We collectively denote the hidden variables by \(\mathbf{\Theta}=\{\mathbf{w}_{\nu},\mathbf{w}_{l}^{k},\mathbf{\Omega}_{\nu},\mathbf{ \Omega}_{l}\}\), where the index \(\nu\) refers to the inner nodes and \(l\) to the leaf nodes. For multivariate regression tasks, the leaves provide regressors simply by \(Q_{l}^{k}=z_{l}^{k}\), where \(k\) refers to the dimension of the multivariate \(\mathbf{y}\). 
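Combining the random features of Eq. (8) with the gating and expert models of Eqs. (13)-(15), a single forward pass of a depth-one GPHME can be sketched as follows. The draws of \(\mathbf{\Omega}\) and \(\mathbf{w}\) below are illustrative placeholders; in the actual model they are governed by the priors and the variational posterior introduced in Section 4.

```python
# One forward pass of a depth-1 GPHME (1 gating node, 2 leaf experts)
# for a single input x, following Eqs. (8) and (12)-(15).
import numpy as np

rng = np.random.default_rng(2)
D, J, K = 4, 50, 3            # input dim, number of random features, classes
sigma_lam = 1.0

def rff(x, Omega):            # RBF random features, Eq. (8)
    proj = x @ Omega
    return sigma_lam / np.sqrt(J) * np.concatenate([np.sin(proj), np.cos(proj)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative draws of Omega and w (in the model they carry N(0, I) priors).
Omega_gate, w_gate = rng.normal(size=(D, J)), rng.normal(size=2 * J)
Omega_leaf = [rng.normal(size=(D, J)) for _ in range(2)]
W_leaf = [rng.normal(size=(2 * J, K)) for _ in range(2)]

def log_likelihood(x, y):
    p_left = sigmoid(rff(x, Omega_gate) @ w_gate)        # Eq. (13) + sigmoid gate
    path_probs = np.array([p_left, 1.0 - p_left])
    loglik = 0.0
    for l in range(2):
        z = rff(x, Omega_leaf[l]) @ W_leaf[l]            # leaf logits
        Q = np.exp(z - z.max()); Q /= Q.sum()            # Eq. (14), softmax
        loglik += path_probs[l] * np.log(Q[y])           # Eq. (15)
    return loglik

print(log_likelihood(rng.normal(size=D), y=1))
```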
We reiterate that the model is by nature a hierarchical mixture of GP experts, where we project the input \(\mathbf{x}\) into different feature spaces to assign paths from the root to the leaves and predict the target distribution at the leaves. The objective function in (15), however, would encourage each leaf to minimize its log-likelihood, which in turn would lead each leaf to be an expert on all the classes and would result in no preference for any of the classes. In other words, the experts would become jacks of all trades. Although this objective function can help us obtain more accurate results, we may wish that each leaf prefers a specific class to make the tree more interpretable. Therefore, another option of the objective function is the normalized likelihood. Let \[\log Q^{k}=\sum_{l}P(l|\mathbf{x},\mathbf{\Theta})\log p(k|\mathbf{x},l,\mathbf{\Theta}) \tag{16}\] be proportional to the log-probability distribution of class \(k\) given \(\mathbf{x}\). Specifically, (15) is equivalent to \(\log Q^{y}\), and the normalized likelihood is defined by \[\text{Normalized Likelihood}=\frac{Q^{y}}{\sum_{k}Q^{k}}. \tag{17}\] In other words, we not only maximize the likelihood of a class \(y\) but also consider the likelihood of \(y\) relative to other classes. Consequently, each expert would prefer one class after training, i.e., recognize one specific class with a higher probability on average with respect to other classes. We refer to the objective function in (15) as objective function one (OF1) and to the objective function in (17) as objective function two (OF2). We note that multiple experts might be experts for one class and that they are trained automatically. Figure 1: A GPHME with a fixed tree structure, comprising expert leaves (shown as shaded circles) and inner nodes (shown as circles). Therefore, the GPHMEs are tree-structured models. The edges represent RF-based decision rules associated with the inner nodes, and the \(Q\)s denote the conditional distributions over the target variable \(y\). ## 4 Variational Inference Our goal is to find a variational distribution \(q(\mathbf{\Theta})\) that approximates the true posterior distribution \(p(\mathbf{\Theta}|\mathbf{X},\mathbf{Y})\). Defining the marginalized log-likelihood \(L=\sum_{n=1}^{N}\log p(y|\mathbf{x})\) and \(L^{\prime}=\sum_{n=1}^{N}\mathbb{E}_{q(\mathbf{\Theta})}\log p(y|\mathbf{x},\mathbf{\Theta})\), we have \[L\geq L^{\prime}-\text{KL}\left[q(\mathbf{\Theta})||p(\mathbf{\Theta})\right], \tag{18}\] where KL stands for the Kullback-Leibler divergence, and \(p(\mathbf{\Theta})\) is the prior distribution of the hidden variables \(\mathbf{\Theta}\). More specifically, \[p(\mathbf{\Theta})=\prod_{\nu}\left[p(\mathbf{w}_{\nu})\prod_{j}p(\mathbf{\omega}_{\nu j})\right]. \tag{19}\] Note that the KL divergence regularizes \(\mathbf{\Theta}\) automatically, which avoids overfitting when \(||\mathbf{w}_{\nu}||\) is too large. We assume a Gaussian approximating distribution that factorizes across nodes. Then we have \[q(w_{\nu j}) \sim\mathcal{N}(m_{\nu j},(s_{\nu j})^{2}), \tag{20}\] \[q(\Omega_{\nu ij}) \sim\mathcal{N}(\mu_{\nu ij},(\sigma_{\nu ij})^{2}), \tag{21}\] where \(w_{\nu j}\) represents the \(j\)th element of \(\mathbf{w}_{\nu}\), and \(\Omega_{\nu ij}\) is the element of \(\mathbf{\Omega}_{\nu}\) in the \(i\)th row and the \(j\)th column. 
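Because the prior in (19) and the approximating factors in (20)-(21) are all Gaussian, the KL term in (18) decomposes into closed-form contributions of the form \(\mathrm{KL}[\mathcal{N}(m,s^{2})\,\|\,\mathcal{N}(0,1)]=\tfrac{1}{2}(s^{2}+m^{2}-1-\log s^{2})\), as in the small illustrative sketch below; the variational parameters shown are placeholders.

```python
# Closed-form KL divergence between a factorized Gaussian posterior
# q = N(m, s^2), as in Eqs. (20)-(21), and the standard-normal prior N(0, 1):
#   KL[q || p] = 0.5 * (s^2 + m^2 - 1 - log s^2), summed over all factors.
import numpy as np

def kl_gaussian_to_std_normal(m, s):
    return 0.5 * np.sum(s**2 + m**2 - 1.0 - np.log(s**2))

# Placeholder variational parameters for one node's w and Omega.
rng = np.random.default_rng(3)
m_w, s_w = rng.normal(size=100), np.full(100, 0.1)
m_Om, s_Om = rng.normal(size=(4, 50)), np.full((4, 50), 0.1)

kl_total = kl_gaussian_to_std_normal(m_w, s_w) + kl_gaussian_to_std_normal(m_Om, s_Om)
print(kl_total)   # this term is subtracted from L' in the bound of Eq. (18)
```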
The variational parameters are the mean and the variance of each of the approximating factors \(m_{\cdot j}\), \(s_{\cdot j}\), \(\mu_{\cdot ij}\), and \(\sigma_{\cdot ij}\), and we aim to optimize the lower bound with respect to these parameters. The settings of the inner nodes \(\mathbf{w}_{\nu}\) and \(\mathbf{\Omega}_{\nu}\) are analogous to the leaves, \(\mathbf{w}_{l}^{k}\) and \(\mathbf{\Omega}_{l}\). Because of the independent observations, we use a doubly-stochastic approximation of \(L^{\prime}\). If we randomly select \(M\) points indexed by \(\mathcal{I}_{M}\), \(L^{\prime}\) can be estimated in an unbiased way using mini-batches by \[L^{\prime}=\frac{N}{M}\sum_{m\in\mathcal{I}_{M}}\mathbb{E}_{q(\mathbf{\Theta} )}L(y_{m}|\mathbf{x}_{m},\mathbf{\Theta}). \tag{22}\] To deal with the expectation term, we resort to Monte Carlo sampling, which yields \[L^{\prime}=\frac{N}{M}\sum_{m\in\mathcal{I}_{M}}\frac{1}{N_{MC}}\sum_{r=1}^{N_ {MC}}L(y_{m}|\mathbf{x}_{m},\mathbf{\Theta}_{r}), \tag{23}\] where \(\mathbf{\Theta}_{r}\) is sampled from \(q(\mathbf{\Theta})\), and \(N_{MC}\) is the number of drawn samples. In order to utilize the backward propagation, we apply the reparameterization trick so that the weights are reparameterized as follows: \[w_{\nu jr} =m_{\nu jr}+s_{\nu jr}\epsilon_{\nu jr}, \tag{24}\] \[\Omega_{\nu ijr} =\mu_{\nu ijr}+\sigma_{\nu ijr}\epsilon_{\nu ijr}, \tag{25}\] where the \(\epsilon_{\nu jr}\)s and \(\epsilon_{\nu ijr}\)s are independent samples from the standard normal distribution. One can also assign prior distributions to \(\sigma_{\lambda}\) and \(\lambda_{d}\) and then use variational inference. In the implementation of gradient descent, the probabilities \(\sigma(z_{\nu})\) of the inner nodes are close to \(0\) or \(1\) so that the tree tends to assign almost all the probability to one of its branches. To overcome this, we adopt a regularizer defined by the average probability that \(\mathbf{x}\) at node \(\nu\) goes to the left child, and it is given by [11], \[\alpha_{\nu}=\frac{\sum_{\mathbf{x}}P_{\nu}(\mathbf{x})p_{\nu}(\mathbf{x})}{ \sum_{\mathbf{x}}P_{\nu}(\mathbf{x})}, \tag{26}\] where \(P_{\nu}(\mathbf{x})=P(\nu|\mathbf{x},\mathbf{\Theta})\) is the probability that \(\mathbf{x}\) arrives at node \(\nu\) and \(p_{\nu}(\mathbf{x})\) is the probability that \(\mathbf{x}\) goes to the left child of node \(\nu\). Therefore, the penalty term becomes \[C=\lambda N\sum_{\nu}\left[0.5\log(\alpha_{\nu})+0.5\log(1-\alpha_{\nu})\right], \tag{27}\] where \(\lambda\) is a hyper-parameter and set to \(2^{-d}\) in our experiments. The penalized evidence lower bound (PELBO) is given by \[PELBO=L^{\prime}+C-\text{KL}\left[q(\mathbf{\Theta})||p(\mathbf{\Theta})\right]. \tag{28}\] So far, we set \(\mathbf{\Omega}_{\nu}\) to be different at the different nodes, i.e., the random feature spaces are unique for each node (NIS-N), and therefore, they are non-isotropic. However, this might not be necessary and could incur huge costs in space complexity. We could expect that there exists a specific random feature space that separates data linearly, which means that all the nodes share a common distribution of \(\mathbf{\Omega}\). We refer to this option as the isotropic space of all the nodes (ISO-N). Another option is to restrict the nodes at each level and have them share the same distribution of \(\mathbf{\Omega}\) and thereby mitigating the computation burden. We refer to this as the isotropy of the spaces across the nodes on the same levels (ISO-L). 
However, this arrangement does not seem reasonable because it is difficult to justify pooling nodes from the same level that are far away from each other. In the section on numerical experiments, the results did show that this option is worse than the other options most of the time. In practice, the ISO-N option is slightly worse than NIS-N. However, despite the slightly worse performance, the ISO-N option provides better interpretability than the NIS-N option on account of the single projected feature space. The two main operations of inference come from Eqs. (13) and (14). For a mini batch with size \(M\), the computational complexity of (13) is \(\mathcal{O}(MJD_{x}N_{MC})+\mathcal{O}(MJN_{MC})\), and of (14), \(\mathcal{O}(MJD_{x}N_{MC})+\mathcal{O}(MJD_{y}N_{MC})\). Combined with the height of the tree \(h\), the final training complexity is \(\mathcal{O}(2^{h}MJN_{MC}(D_{x}+D_{y}))\). The test complexity is \(\mathcal{O}(2^{h}N_{test}JN_{MC}(D_{x}+D_{y}))\), where \(N_{test}\) is the size of the test set. The code for obtaining the results in this paper is available at Github.1 Footnote 1: [https://github.com/yuhaoliu94/GPHME](https://github.com/yuhaoliu94/GPHME). ## 5 Numerical Experiments In this section, we first show how our GPHME explains the DGPs by mimicking their behaviors. Then we discuss the different settings of \(\mathbf{\Omega}\) and choices of the height of trees. Next, we compare our methods with the Bayesian HMEs (BHMEs) [1] as well as with soft and hard trees. All the results suggest that there is a need for projection in feature spaces. Finally, we show results of implementation of our methods on large-scale data. All the experiments ran on a single machine of NVIDIA TITAN RTX GPU having 24GB RAM, but can also be directly launched on CPUs. ### _Explaining DGPs and DNNs with GPHMEs_ Each hidden layer of the random feature-based DGP models is in fact a special case of a two-layer BNN, and thus, our GPHMEs explain how the DGPs or more general Bayesian DNNs work. Figure 2 illustrates how our method makes decisions on the MNIST data. If we take, for example, the left-most four leaves, we can see that the most likely classifications are 4 and 9, and therefore, their parent node is simply learning to distinguish between these two digits. This makes sense for groups 4 and 9 because they both have closed regions in their digits. Further, 2 and 5 are the most challenging for classification because none of the experts seem to reach high enough probabilities for confident decisions. The decisions of the inner nodes are made similarly and can readily be understood. Given the good interpretability in hand, during training, our model achieves the highest accuracy of 97.79% with a tree of height four and with an ISO-N setting. If we use the loss function in (15), we obtain 98.49%, still under an ISO-N setting. With more parameters, the DGP model only peaks at 98.04% when the number of hidden layers is one and decreases with the number of layers increasing. Our GPHME model is also comparable with a one-hidden-layer DNN, which achieves 98.4% accuracy [21][21]. The GPHME has this accuracy with only two-thirds of the number of parameters of the DNNs. Further, the GPHME attains 98.67% accuracy at most with a height of only two under the NIS-N setting, which is comparable to 98.6% reported by SVMs [14]. It is also better than other kernel-based methods including GPs and their variants [15][16], Matthews, and Ghahramani, [17][15], Bonilla, Cutajar, and Filippone. 
### _Discussion on hyperparameters_ Our tree-based model, due to the information embedded in the feature spaces, does not need to grow the tree too deep. Therefore, in this section, the heights of the trees are chosen up to two. We took one regression data set \(Protein\) and one multi-output classification data set \(Optical\)\(Digits\) (OPT) from the UCI repository.2 The dimension of the random features was set to 100 for the RBF and the ARC-COSINE kernels. Figure 4 shows the performance under different settings, including different heights of trees, structures of \(\mathbf{\Omega}\), and kernel types. Without loss of generality and making the figures readable, we only present the results of different structures of \(\mathbf{\Omega}\) when the height of trees equals two. Footnote 2: [https://archive.ics.uci.edu](https://archive.ics.uci.edu) Figure 3 demonstrates how the models work with different heights of trees and \(\mathbf{\Omega}\) options. The figure also provides results under different \(\mathbf{\Omega}\) options when the height of the tree is one. As we expected, the performance does not always improve when the height of trees grows because of overfitting. For the setting of \(\mathbf{\Omega}\), the NIS-N option has the most number of parameters but does not improve the results significantly compared with the ISO-L and the ISO-N options. The ISO-N option beats the NIS-N option in some cases. This is one reason why we prefer the ISO-N option in practice. The ISO-L option is not stable and might obtain the worst results among all of the \(\mathbf{\Omega}\) settings. This is reasonable because this option forces the nodes at the same level to share the common feature spaces. As for the kernel choice, the RBF kernel has better performance in general, while the ARC-COSINE kernel may work better in binary classification tasks. The identity features are discussed in the next section as a benchmark model. ### _UCI Data Sets_ To compare the generalization error and model complexity of tree-based models, including linear discriminant trees (LDT), hard tree (C4.5), soft tree, and BHMEs, we used the same data sets as in [17], which reported results for LDT, C4.5, and soft-decision trees. However, some data sets cannot be found at this time. Therefore, we only selected those that still exist, including four regression data sets (ABAlone, ADD10, BOSton, CONcrete), eight binary classification data sets (BREast, GERman, MAGic, MUSk2, PIMa, RINgnorm, SPAphase, TWOnorm), and ten multi-class classification data sets (BALance, CMC, DERmatology, ECOli, GLAss, OPTdigits, PAGeblock, PENdigits, SEGment, YEAst) from the UCI repository. The benchmark methods use five folds of data and then average the results where one-third of the data are test data and the other two-thirds are training data. We adopted the same setting here and still let the height of the trees be at most two for our models. To compare with the BHMEs fairly, we let the height of trees for BHMEs to be large enough so that the number of parameters of BHMEs is just larger than that of our models. In this case, the highest height of trees for BHMEs was usually 7 or 8 on average, although the BHMEs achieve the best results almost always with heights two or three. Besides, we provided the standard deviation across the five shuffled folds while the benchmarks of decision trees did not report this in their source. The results are selected from the best model among different heights and kernel types. 
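The evaluation protocol described above (five shuffled folds, two-thirds of the data for training and one-third for testing, results reported as a mean with a standard deviation) can be reproduced with a small harness such as the following sketch; the classifier used here is only a placeholder and not the GPHME itself.

```python
# Five shuffled 2/3-train / 1/3-test splits, reporting mean accuracy and the
# standard deviation across folds. The classifier is a placeholder; the point
# of the sketch is the evaluation protocol itself.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit

X, y = load_digits(return_X_y=True)
splitter = ShuffleSplit(n_splits=5, test_size=1 / 3, random_state=0)

scores = []
for train_idx, test_idx in splitter.split(X):
    clf = LogisticRegression(max_iter=2000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```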
Tables 1, 2, and 3 show the MSEs of regression, the accuracy of binary classification, and the accuracy of multi-class classification. The tables also include the number of samples \(N\), the dimension of inputs \(D_{x}\), and the dimension of outputs \(D_{y}\). The averaged training times over five different folds for all data sets are also listed. The time duration for the training process was set not to be larger than 60 minutes. The sizes of the applied trees were specified by the numbers of all the inner nodes and leaves. The results of other tree-based benchmarks include C4.5, which corresponds to the univariate hard tree, as well as LDT, which is an oblique multivariate hard tree. From the results, it is clear that our model outperforms all the other candidate tree-based models in terms of loss. For example, for the RIN data set, the accuracy of the BHME candidates is only around \(77\%\), while our model improves it to over \(98\%\) for the same size of trees. Further, our tree is smaller in size than the benchmark trees in about 80% of the cases. We applied the unequal variance (Welch) t-test between GPHME and BHME. We did not include the ordinary decision tree models [Irsoy et al. (2014)] in the Welch t-tests because the variances of their estimates are not provided. The results shown in bold in Tables 1, 2, and 3 are the instances where our method was significantly better. The benchmark results are of trees reported in [21][22][23]. However, the authors of these papers did not provide variances of the estimates. Figure 3: Evolution of RMSE in the regression case, error rate in the classification case, and mean negative log-likelihood (MNLL) over time. Figure 2: A visualization of a GPHME of depth four trained on the MNIST data set. The final most likely classifications are shown at each leaf with its average probability over samples. The classes annotated at each inner node are traced backward from the leaves to the root. A leaf does not only predict one class but predicts all the classes. If, for example, there are 100 samples and a certain leaf predicts 80 of them as digit 0 while 20 of them as any digit from 1-9, then the leaf in the figure is annotated as digit 0 with p=0.8. The classes written in the inner nodes are sourced from the leaves backward layer by layer because the predictions occur at the leaves. Our model is a “soft” decision tree, and the paths have probabilities, which entails that a digit could “go” both left and right. ### _Large-Scale Data Sets_ As we mentioned, the GPHMEs can be treated as weak learners and can provide interpretability for DGPs or, more generally, Bayesian DNNs. We did not expect our model to perform better than DGPs but were only interested in using them to explain DGPs. However, we were surprised that our model outperformed (at least slightly) the DGPs most of the time. Moreover, one of the defining characteristics of our model is the ability to scale up to large data sets. We evaluated our model on two large-scale problems which go beyond the scale of data sets to which GPs and especially DGPs are typically applied. We first considered MNIST8M, which artificially extends the original MNIST data set to 8+ million observations. We trained this model using the same configuration described for standard MNIST with the height of trees as two but with the NIS-N setting. We obtained \(99.30\%\) accuracy and 0.0372 MNLL on the test set. These results beat the DGP counterpart provided by [17][22]. 
We would like to point out that the number of parameters in our model is less than that of the DGPs. Note that Krauth, Bonilla, Cutajar, and Filippone recently obtained 99.11% accuracy (MNLL 0.033) with the AutoGP framework, while our model achieves the lowest MNLL of 0.0328 with an accuracy of 99.24%. A common large-scale data set in the GP field is the AIRLINE data set, which contains flight information for 5+ million US flights in 2008. Although this data set is not public, we found a substitute public data set that contains more than 6 million records of flight information.3 We used this 8-dimensional data set for classification to determine whether a flight has been delayed or not. We constructed the test set using the scripts provided in [25], where 100,000 data points were held out for testing. We constructed our models using 100 RBF-based random features and set the height of trees to one so that it matched the number of parameters in one-hidden-layer DGPs [17][26]. As shown in Table IV, the results by our model are directly comparable to those obtained by DGPs, which means that the decision tree could almost perform as well as a DGP. Further, when we grow the size of a tree to a height of four, our model achieves 72% accuracy while the counterpart DGP with 10 hidden layers could not converge. Footnote 3: [https://www.kaggle.com/vikalpdongre/us-flights-data-2008](https://www.kaggle.com/vikalpdongre/us-flights-data-2008) The training time for the MNIST data was 40 minutes, and the training time for the AIRLINE data was 19 minutes. Figure 4: Evolution of RMSEs in the regression case, error rates in the classification case, and mean negative log-likelihood (MNLL) over time. The x-axes of the MNLL panels are different from those of the RMSE and ER panels because the MNLLs converge later than the RMSEs and ERs. ## 6 Summary In this paper, we proposed a novel hierarchical mixture of experts whose inner nodes and experts are both represented by GPs. We chose to work with random features as a way of expanding the GPs. Our GPHMEs outperform all the benchmarks, including tree-structured HMEs and decision trees. They have reduced complexity with respect to other GP-based hierarchical mixtures of experts and offer interpretations of DGPs and deep BNNs. The HMEs have a limitation in pre-selecting the size, and the number of their parameters increases exponentially. However, it turns out that in practice, we do not need trees with large heights. Our results on various data sets clearly show excellent performance even for large-scale data sets with small trees. Future work includes the following: 1. Pruning of trees. Once the training is completed, one can proceed with pruning the trees. For example, Fig. 2 shows that we might combine the sub-trees which predict the same class. It is important to investigate principled ways of pruning. 2. Extensions to ensembles of GPHMEs. Suppose that we have a number of GPHMEs, each defined with its own set of feature spaces. Further, let these GPHMEs have prior information that their trees have structures of ISO-N, i.e., \(\mathbf{\Omega}\) is the same across all the nodes, and they have known feature spaces \(\mathbf{\Phi}\). In that case, the input spaces are projected into different fixed feature spaces rather than random feature spaces. The fixed feature space is easier to interpret than random feature spaces. Therefore, the ensemble of decision trees is naturally an ensemble that exploits known fixed feature spaces. 
We would like to explore ways of identifying good ensembles, e.g., by allowing the trees to be time-varying, or by spawning new trees from the GPHMEs with good performance using distribution functions of spectral frequencies that are more informative than their respective priors. 3. Extension to boosting. This is a rather straightforward task because we only have to replicate the standard boosting routine in constructing a series of trees. 4. Modeling experts with DGPs. It might be possible to improve the performance of the GPHMEs by replacing the experts of the tree (the leaves), which are now GPs with DGPs. By doing this we are not losing the nice property of interpretability of the GPHMEs but may gain in performance by using DGPs as experts. 5. Feature Selection. A traditional tree can provide information about the most important features of \(\mathbf{X}\). Do our trees also provide information about important features? We plan to examine if information about the importance of features can be extracted from \(\mathbf{\Omega}\) and w. 6. Extension to inducing points approximations. With inducing points, we have an alternative approach to implementing scalable GPs. It will be interesting to examine how trees based on such GPs compare to the ones from this paper. 7. Feed forward option. In the paper, we only used feature spaces \(\boldsymbol{\phi}(\mathbf{x})\) to make decisions and predictions. However, an interesting direction is to explore the use of the feed-forward option for building the tree, that is, to use both \(\boldsymbol{\phi}(\mathbf{x})\) and \(\mathbf{x}\). The objective is not only to improve the accuracy of the tree in its tasks but also to understand how adding the feed-forward option affects the structure of the tree and its parameters. ## Acknowledgments The authors would like to thank the support of NSF under Award 2212506.
2304.06560
Real-Time Wheel Detection and Rim Classification in Automotive Production
This paper proposes a novel approach to real-time automatic rim detection, classification, and inspection by combining traditional computer vision and deep learning techniques. At the end of every automotive assembly line, a quality control process is carried out to identify any potential defects in the produced cars. Common yet hazardous defects are related, for example, to incorrectly mounted rims. Routine inspections are mostly conducted by human workers that are negatively affected by factors such as fatigue or distraction. We have designed a new prototype to validate whether all four wheels on a single car match in size and type. Additionally, we present three comprehensive open-source databases, CWD1500, WHEEL22, and RB600, for wheel, rim, and bolt detection, as well as rim classification, which are free-to-use for scientific purposes.
Roman Stanek, Tomas Kerepecky, Adam Novozamsky, Filip Sroubek, Barbara Zitova, Jan Flusser
2023-04-13T14:12:57Z
http://arxiv.org/abs/2304.06560v1
# Real-Time Wheel Detection and Rim Classification ###### Abstract This paper proposes a novel approach to real-time automatic rim detection, classification, and inspection by combining traditional computer vision and deep learning techniques. At the end of every automotive assembly line, a quality control process is carried out to identify any potential defects in the produced cars. Common yet hazardous defects are related, for example, to incorrectly mounted rims. Routine inspections are mostly conducted by human workers that are negatively affected by factors such as fatigue or distraction. We have designed a new prototype to validate whether all four wheels on a single car match in size and type. Additionally, we present three comprehensive open-source databases, CWD1500, WHEEL22, and RB600, for wheel, rim, and bolt detection, as well as rim classification, which are free-to-use for scientific purposes. Roman Stanek\({}^{1}\), Tomas Kerepecky\({}^{2,3}\), Adam Novozamsky\({}^{3}\), Filip Sroubek\({}^{3}\), Barbara Zitov\({}^{3}\), Jan Flusser\({}^{3}\)+\({}^{1}\)Charles University, Czechia, \({}^{2}\)Czech Technical University in Prague, Czechia \({}^{3}\)Institute of Information Theory and Automation, The Czech Academy of Sciences, Czechia Detection, Classification, Automotive Footnote †: This work was partially supported by the Czech Science Foundation, grant no. GA21-03921S, and by the _Pruemium Academiae_ awarded by the Czech Academy of Sciences. ## 1 Introduction In 2021, global motor vehicle production was estimated to be over 80 million vehicles [1]. Most quality check tasks are performed by trained workers, who can be affected by many negative factors, which reduces the reliability of the inspection. This tedious work provides a significant opportunity for automation through computer vision, which has the potential to lower the cost of the overall process and, at the same time, achieve superior accuracy. Despite the prevalence of automated computer vision tasks, the quality control of rim mounting inaccuracies is still done manually. This paper aims to design a real-time system to ensure that all four rims on a car are of the same size and type, which is a crucial factor for maintaining car stability and passenger safety. There are limited studies focused on wheel detection and, to the best of our knowledge, a lack of literature regarding rim classification. In this regard, we present a novel approach that constitutes the first comprehensive pipeline for joint wheel detection, classification, and size estimation. **Contributions. (1)** we propose a novel real-time rim detection, classification, and inspection approach and design a prototype ready-to-use in automotive production; **(2)** we contribute three new publicly available datasets, namely **CWD1500** for car and wheel detection, **WHEEL22** for rim classification and **RB600** for bolt detection. ## 2 Related Work We present an overview only in the area of wheel detection, as there is no public research on rim classification. These studies predominantly employ the _Hough transform (HT)_[2] and use machine learning to determine the presence of a wheel. A comprehensive introduction to car and wheel detection is in [2], which presents a three-stage approach for detecting car contours from side view and identifying wheels using HT and _SURF descriptors_[3]. They use heuristics and a _Snake algorithm_[4] to improve results, but the results are inconclusive due to a small dataset (100 images) and white background. 
The Master thesis [5] detects wheels utilizing real-world recordings and using _Local binary patterns_[6] and _Random forest classifier_[7]. Another work [8] identifies 14 regions of interest in vehicles from a side view, including wheels, using a classifier trained on Haar-like features [9] and HT. The paper [10] uses _Fast HT_[11] to detect wheels in a deployed industrial vehicle classification system in Russia, filtering the Hough space to avoid false positives. ## 3 Method Our method consists of several building blocks in which standard computer vision methods are complemented with deep Figure 1: Visualization of the main wheel parts. The tyre is marked in blue, the rim in red, the wheel bolt centers in yellow, and the diameter of the pitch circle in green. learning methods. We start with the creation of three datasets and then describe the car and wheel detection, rim classification, and finally, rim-size estimation. The terms _wheel_, _rim_, and _tyre_ can be ambiguous in the common language. In this paper, we refer to a wheel as the combination of rim and tyre. A rim is a rigid core of the wheel usually made from metal. A tyre is mounted on the rim and ensures good contact with the surface under the car. It is usually made of rubber-like material. For illustration, see Figure 1. The primary purpose of the rim is to provide rigid support for the tyre and to transmit forces that affect the movement of the vehicle, such as the rotational force from the engine to the tyre. [12]. ### Datasets All data were collected at the Skoda Auto factory in Mlada Boleslav, where quality control is carried out. This company permitted us to install the monitoring equipment to gather car data and use them for research purposes. Cars are stationary on a slowly moving conveyor belt that is well-lit by multiple sources. The cameras were arranged on both sides of the conveyor belt; Camera A recorded cars from the right side, and Camera B from the left. A top-down schema of the setup is shown in Figure 2. Images of the scene taken by individual cameras simultaneously are shown in Figure 3. We employed standard Logitech BRIO 4K Stream Edition cameras with the same configuration as in [13]. The data collection script ran continuously for several days at a rate of one frame per second in FullHD quality, recording data in ten-minute bursts. Every frame was captured as Motion JPEG, and the whole time-lapse video was encoded in HEVC H.265. The data was gathered during standard work shifts; people and objects can move in the scene; see Figure 4. Thirty-four hours of collected data from both cameras were obtained after removing unusable video segments. It is necessary to provide training data with accurately labeled objects to train a neural network for object detection or tuning parameters for standard algorithms such as HT. We manually annotated the cars and wheels by drawing bounding boxes in the training images using the _Computer Vision Annotation Tool(CVAT)_[14]. The first dataset, **CWD1500**, is designed for car and wheel detection and includes 1000 training frames, 250 validation frames, and 250 testing frames. The training set also includes 91 unlabelled images to prevent false positives during training. We employed the YOLO [15] detection network, which had been trained using the CWD1500 dataset, to identify the location of all rims in the collected videos. Subsequently, each frame was processed by cropping it to a square shape centered around the detected bounding box and resized to 256 x 256 pixels. 
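The cropping step used to build WHEEL22, a square patch centered on the detected wheel bounding box and resized to 256 x 256 pixels, can be sketched as follows. The bounding-box format, the clipping at image borders, and the interpolation mode are illustrative choices rather than the exact implementation.

```python
# Crop a square patch centered on a detected wheel bounding box and resize it
# to 256 x 256, as done when building WHEEL22. Bounding box format and border
# handling are illustrative choices, not the authors' exact implementation.
import cv2
import numpy as np

def crop_wheel(frame, box, out_size=256):
    x1, y1, x2, y2 = box                        # bounding box in pixel coords
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    half = max(x2 - x1, y2 - y1) // 2           # half side of the square patch
    h, w = frame.shape[:2]
    # Clip to the image; partially visible rims simply yield smaller crops.
    xa, xb = max(cx - half, 0), min(cx + half, w)
    ya, yb = max(cy - half, 0), min(cy + half, h)
    patch = frame[ya:yb, xa:xb]
    return cv2.resize(patch, (out_size, out_size), interpolation=cv2.INTER_AREA)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)      # placeholder FullHD frame
print(crop_wheel(frame, (800, 500, 1100, 820)).shape)  # -> (256, 256, 3)
```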
It is important to note that some of the identified rims were not entirely captured in the pictures, which assisted in generalizing the learning process of classification, since it functioned as cropping in standard data augmentation. We manually identified 31 classes of rims based on differences in shape and color of detected wheels. For ten classes the number of representative samples was too low (less than 300). Therefore, we used only 21 classes for further analysis. We randomly selected 100 training samples for each class, 25 validation samples, and 25 test samples. It should be noted that images of one car will only appear in one set (training, validation, or test). We added one special class to include cases when the detector returns a candidate that cannot be classified even by a human. One example per class is shown in Figure 5. We refer to this comprehensive labeled dataset containing 3300 rim images as **WHEEL22**. The last dataset was created for detecting five wheel bolts. It contains 400 images for training, 100 for testing, and 100 for validation. The rims and bolts were also manually annotated using CVAT. This dataset is called **RB600**. Figure 4: Examples of problematic objects in the scene: a scooter wheel is not the object of interest, and a staff standing in front of the car completely occluding the wheel. Figure 3: Sequences of 3 subsequent frames from Camera A (top) and Camera B (bottom) illustrate the conveyor belt movement speed - approximately 11.5 cm/s. The red circles in the left column show the opposite camera. Figure 2: Top-Down Camera Setup: The cameras were placed on either side of the conveyor belt, with Camera A capturing cars from the right and Camera B capturing them from the left. ### Car and wheel detection This section is split into two subsections. The first part deals with research utilizing traditional computer vision techniques, while the latter focuses on convolutional neural networks. A comparison of the two approaches is presented in Table 1. There are _precision_ (P), _recall_ (R), and _mean average precision_ (mAP) to evaluate object detection performance. All of them are calculated using the _Intersection-over-Union_ (IoU) threshold of 50%. If a range is specified, such as [email protected]:.95, it indicates the average mAP over an IoU range from 0.5 to 0.95 with step size 0.05. From the results, it is evident that the deep learning method outperforms traditional methods. _a)Traditional Computer Vision:_ We chose HT as the most appropriate method based on the related work presented in Section 2. HT is a widely accepted technique for detecting geometric shapes that a limited number of parameters can describe. We utilized an implementation provided by [16] that was optimized for minimizing memory usage. To enhance its performance, we carried out several preprocessing steps. The image was first downscaled, converted to grayscale, and then blurred using a Gaussian filter to reduce noise and eliminate high-frequency components. The results obtained by measuring performance on the validation dataset of CWD1500 are summarized in Table 1 (first line) and are satisfactory for a baseline method. The true positives do not always match the rim contours, as the car wheels are not always perpendicular to the cameras' optical axes. The false negatives in the detection can be attributed to three reasons: dim rims, partially occluded rims, and rims partially out of the camera field of view. 
In the third group, the wheels are missing more than half of their area, which is hard for HT to detect. _b)Deep Learning Approach:_ Standard deep-learning detection methods include approaches such as U-net [17], Mask-RCNN [18], and YOLO [15]. We chose YOLOv5s specification [19] with 7M parameters and pre-trained on the COCO val2017 dataset. The model was re-trained for 50 epochs using a 2:1:1 split of the CWD1500 dataset. The estimated inference time on a single frame with YOLOv5s is around 26 ms, which is still sufficiently fast for our purposes (HT takes only 8ms). ### Rim classification We compared traditional techniques with deep learning approaches as in the previous section on wheel detection. _a)Traditional Computer Vision:_ As the traditional computer vision technique, we combined a _Histogram of Oriented Gradients_ (HOG) [20] with a _Support Vector Machines_ (SVM) [21] classifier. HOG is a feature descriptor used in computer vision for object detection. It represents the orientation of intensity gradients in an image, dividing the image into small cells and counting the gradient directions in each cell. The results of this analysis are then compiled into a histogram, which serves as a descriptor of the object's appearance. The 'orientation' parameter defines how many bins the histogram has in each cell. The parameter 'pixels per cell' describes the size of one cell in pixels. The results for three algorithm settings on WHEEL22 dataset are summarized in Table 2. In this case, the maximum achieved accuracy below \(0.75\) is insufficient. _b)Deep Learning Approach:_ Here we will concentrate on transfer learning and EfficientNet [22], a high-performing \begin{table} \begin{tabular}{l l l l l l l l l} \multicolumn{1}{l}{} & \multicolumn{1}{l}{**Method**} & \multicolumn{1}{l}{**Class**} & \multicolumn{1}{l}{**Instances**} & \multicolumn{1}{l}{**P**} & \multicolumn{1}{l}{**R**} & \multicolumn{1}{l}{**[email protected]**} & \multicolumn{1}{l}{**[email protected]:95**} \\ \hline \hline HT & & wheel & 182 & 1.000 & 0.703 & — & — \\ \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & Sec. 3.2 & \begin{tabular}{l} wheel \\ car \\ \end{tabular} & 182 & 0.983 & 0.970 & 0.993 & 0.962 \\ & \begin{tabular}{l} car \\ \end{tabular} & 300 & 0.983 & 0.980 & 0.993 & 0.928 \\ \cline{1-1} & Sec. 3.4 & \begin{tabular}{l} bolt \\ rim \\ \end{tabular} & 475 & 1.000 & 0.998 & 0.995 & 0.651 \\ \cline{1-1} & Sec. 3.4 & \begin{tabular}{l} rim \\ \end{tabular} & 95 & 0.984 & 1.000 & 0.995 & 0.993 \\ \hline \end{tabular} \end{table} Table 1: The performance of the detection method was evaluated on test data, comparing two models: HT and YOLOv5s, using the CWD1500 dataset. Additionally, the bottom section of the results presents the outcomes of the same YOLO architecture trained and tested on the RB600 dataset. Figure 5: WHEEL22 dataset: 1+21 rim categories. C00 category is used for handling occlusions. Figure 6: Data flow in the proposed prototype. classification network. Transfer learning is a technique where a model pre-trained on a large dataset, such as ImageNet [23], is then used for training on a smaller dataset. This process accelerates and improves training by leveraging the learned representations from the pre-trained model. The layers of the pre-trained model are divided into two categories: frozen and trainable. The frozen layers remain unchanged, while the trainable layers are modified during the training. The number of trainable layers varies depending on the model, dataset size, and complexity. 
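The frozen/trainable split can be illustrated with a short sketch. The paper does not state the training framework, so the following PyTorch/torchvision example is only an assumption of how the freezing strategy might be implemented; the 22 classes come from the WHEEL22 dataset, while the notion of "layers" is approximated here by parameter tensors, so the counts will not exactly match those reported below.

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

def build_rim_classifier(num_classes: int = 22, trainable_layers: int = 25) -> nn.Module:
    """EfficientNet-B0 pre-trained on ImageNet; only the last `trainable_layers`
    parameter tensors (which include the new classification head) are updated."""
    model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
    # Replace the ImageNet head with a rim classifier.
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

    params = list(model.parameters())
    for p in params[:-trainable_layers]:   # freeze everything except the tail
        p.requires_grad = False
    return model

model = build_rim_classifier()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable / 1e6:.2f}M")
```

In practice the optimal number of unfrozen layers is found empirically, as reported next.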
Individual runs with the number of unfrozen layers and achieved validation accuracy are in Table 3. The findings show that it is sufficient to train 25 layers, for which we reach a close-to-optimal model while training time is still moderate. The inference time of 60 ms per image is also favorable for our intended application. The confusion matrix for the test data shows similarity to the identity matrix, except for three classes. Specifically, C08 had two misrecognized representatives, C09 had four, and C14 had one, which resulted in an overall accuracy of 98.72%. ### Rim size estimation Since real rim dimensions are not known, we can either compare only relative sizes within a single car or use objects of known size to estimate the real rim diameter. All cars in our scenario have a pitch circle diameter of 112 mm, so the object of known size, which we have to detect, is the circle on which five bolts lie. The cameras were not calibrated precisely and had slight discrepancies in tilt and mounting positions. The car position on the conveyor belt is not fixed, and wheels may not be perfectly perpendicular to the camera. Therefore, the rim circumference may not be a perfect circle but an ellipse, as shown in Figure 7. The second detection network was trained on the RB600 set to detect the rim and bolts. The same YOLOv5s architecture as described in Section 3.2b was used. The input to the second YOLO network was the bounding box of the wheel detected by the first YOLO network. The performance of the bolt and rim detection, including the overall performance, is summarized in Table 1. Then we calculate the ellipse [24] on which the centers of the bolts lie. To extract the outline of the rim, we applied Otsu's method [25] for thresholding. To find stable sample points, we cast rays from the center of each edge, spaced at 10% of the edge length. When a ray hits a white pixel, that location is added as a sample point. We use these points to fit the second ellipse that matches the contour of the rim. Visualization of the entire procedure using rays is presented in Figure 7. From these two ellipses we are able to estimate the real size of the rim. ### Tracking The tracking algorithm for car parts is of paramount significance as it improves accuracy and enhances detection speed. We use only Camera A and apply the same tracking information to Camera B due to the minor differences in horizontal coordinates. The IoU of bounding boxes conducts the data association for tracking in consecutive frames. The wheels are tracked similarly to the car and then assigned to the currently tracked car. Using the tracking information, calculating the median class for a particular rim achieves 100% accuracy on tested videos of the total length of 10 hours containing 500 cars. The prototype design follows the order described in Section 3. A visual representation of the primary modules and their inter-module data flow can be seen in Figure 6. The procedure begins by acquiring frames from both cameras. Next, vehicle and wheel detection is performed (3.2b). Subsequently, the rim class is predicted (3.3b), and the diameter of the rim is estimated from the bounding box surrounding the wheel (3.4). The final step involves comparing all four wheels. The average processing time for a single input consisting of two frames, one from each camera, is 0.4 seconds for the entire pipeline. All experiments were performed on GeForce RTX 2070. 
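Returning to the rim-size estimation of Section 3.4, the pixel-to-millimetre conversion can be summarized as follows. This is an illustrative sketch, not the deployed prototype: it assumes the five bolt centers and the rim-outline sample points are already extracted (cv2.fitEllipse needs at least five points, which the five bolts exactly provide), and it relates the two ellipses through the ratio of their major axes under the assumption that both are foreshortened in the same way.

```python
import numpy as np
import cv2

PITCH_CIRCLE_MM = 112.0  # all cars in this scenario share a 112 mm bolt circle

def estimate_rim_diameter_mm(bolt_centers: np.ndarray, rim_points: np.ndarray) -> float:
    """Estimate the real rim diameter from two fitted ellipses.

    bolt_centers: (5, 2) array of detected bolt centers in pixels.
    rim_points:   (N, 2) array of points sampled on the rim outline in pixels.
    """
    bolt_ellipse = cv2.fitEllipse(bolt_centers.astype(np.float32))
    rim_ellipse = cv2.fitEllipse(rim_points.astype(np.float32))
    # fitEllipse returns (center, (axis1, axis2), angle); take the major axis.
    bolt_major_px = max(bolt_ellipse[1])
    rim_major_px = max(rim_ellipse[1])
    mm_per_px = PITCH_CIRCLE_MM / bolt_major_px
    return rim_major_px * mm_per_px
```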
## 4 Conclusion We proposed a real-time, fully automated system for rim size inspection of cars moving on the assembly line. The system consists of three steps: car and wheel detection, rim classification, and estimation of real rim dimensions. Traditional computer vision methods, such as Hough Transform and SVM with HOG features, were compared with deep learning techniques. When deep learning techniques are selected, the success rate in each intermediate step is approximately 99 percent. For the purpose of learning and testing, three datasets were prepared, which are publicly available for scientific purposes on Kaggle: [https://www.kaggle.com/datasets/adamnovozmsk/cawdec](https://www.kaggle.com/datasets/adamnovozmsk/cawdec) \begin{table} \begin{tabular}{c c c c} **Orientation** & **Pixels per cell** & **Accuracy** & **Features** \\ \hline 9 & 8x8 & 0.643 & 73K \\ 13 & 24x24 & **0.744** & 7.5K \\ 16 & 24x24 & 0.718 & 9.2K \\ \end{tabular} \end{table} Table 2: Results of HOG with multiple variations of parameters. The column _Features_ describes the number of features generated per image. \begin{table} \begin{tabular}{c c c c} **Unfrozen layers** & **Trainable param.** & **Accuracy** & **Train time [s]** \\ \hline 2 & 2.5K & 0.956 & 574 \\ 10 & 893.2K & 0.989 & 659 \\ 25 & 1.5M & **0.995** & 1103 \\ 237 & 4M & 0.989 & 1848 \\ \end{tabular} \end{table} Table 3: Transfer learning results using EfficientNet with various unfrozen layer configurations. Figure 7: Detection of rim and bolt ellipses.
2310.02699
Continual Contrastive Spoken Language Understanding
Recently, neural networks have shown impressive progress across diverse fields, with speech processing being no exception. However, recent breakthroughs in this area require extensive offline training using large datasets and tremendous computing resources. Unfortunately, these models struggle to retain their previously acquired knowledge when learning new tasks continually, and retraining from scratch is almost always impractical. In this paper, we investigate the problem of learning sequence-to-sequence models for spoken language understanding in a class-incremental learning (CIL) setting and we propose COCONUT, a CIL method that relies on the combination of experience replay and contrastive learning. Through a modified version of the standard supervised contrastive loss applied only to the rehearsal samples, COCONUT preserves the learned representations by pulling closer samples from the same class and pushing away the others. Moreover, we leverage a multimodal contrastive loss that helps the model learn more discriminative representations of the new data by aligning audio and text features. We also investigate different contrastive designs to combine the strengths of the contrastive loss with teacher-student architectures used for distillation. Experiments on two established SLU datasets reveal the effectiveness of our proposed approach and significant improvements over the baselines. We also show that COCONUT can be combined with methods that operate on the decoder side of the model, resulting in further metrics improvements.
Umberto Cappellazzo, Enrico Fini, Muqiao Yang, Daniele Falavigna, Alessio Brutti, Bhiksha Raj
2023-10-04T10:09:12Z
http://arxiv.org/abs/2310.02699v3
# Continual Contrastive Spoken Language Understanding ###### Abstract Recently, neural networks have shown impressive progress across diverse fields, with speech processing being no exception. However, recent breakthroughs in this area require extensive offline training using large datasets and tremendous computing resources. Unfortunately, these models struggle to retain their previously acquired knowledge when learning new tasks continually, and retraining from scratch is almost always impractical. In this paper, we investigate the problem of learning sequence-to-sequence models for spoken language understanding in a class-incremental learning (CIL) setting and we propose COCONUT, a CIL method that relies on the combination of experience replay and contrastive learning. Through a modified version of the standard supervised contrastive loss applied only to the rehearsal samples, COCONUT preserves the learned representations by pulling closer samples from the same class and pushing away the others. Moreover, we leverage a multimodal contrastive loss that helps the model learn more discriminative representations of the new data by aligning audio and text features. We also investigate different contrastive designs to combine the strengths of the contrastive loss with teacher-student architectures used for distillation. Experiments on two established SLU datasets reveal the effectiveness of our proposed approach and significant improvements over the baselines. We also show that COCONUT can be combined with methods that operate on the decoder side of the model, resulting in further metrics improvements. ## 1 Introduction With the rapid progress of intelligent voice-enabled personal assistants, the significance of Spoken Language Understanding (SLU) has gained substantial recognition in recent years (Arora et al., 2022; Qin et al., 2021). Conventional SLU models deploy a cascaded pipeline of an automatic speech recognition (ASR) system followed by a natural language understanding (NLU) module (Mesnil et al., 2014; Horlock and King, 2003). ASR maps the input speech into text representations, and NLU extracts the target intent labels from the intermediate text. Even though these approaches can leverage a vast abundance of ASR and NLU data, they suffer from ASR error propagation. Conversely, end-to-end (E2E) SLU (Agrawal et al., 2022; Lugosch et al., 2019; Saxon et al., 2021) has received more attention in recent research because it uses a single trainable model to map the speech audio directly to the intent labels, bypassing the need to explicitly generate a text transcript. This approach leads to reduced latency and error propagation. The assumption that the data distribution the model will face after deployment aligns with what it encountered during the training phase is brittle and unrealistic. In fact, real-world scenarios entail evolving streams of data where novel categories (e.g., new vocabulary or intents) emerge sequentially, known as continual learning (CL). Unfortunately, while neural networks thrive in a stationary environment, the situation is reversed in CL, resulting in the "catastrophic forgetting" (CF) of the existing knowledge in favor of the fresh new information (McCloskey and Cohen, 1989). 
Although the majority of CL works have focused on computer vision tasks like image classification (Buzzega et al., 2020; Wang et al., 2022c) and semantic segmentation (Maracani et al., 2021; Yang et al., 2022a), a few works have recently turned their attention towards text (Wang et al., 2023; Ke et al., 2023) and speech-related (Cappellazzo et al., 2023; Diwan et al., 2023) problems, as well as vision-language (Ni et al., 2023; Zhu et al., 2023) and vision-audio (Mo et al., 2023; Pian et al., 2023). While most SLU works have considered an offline setting, a thorough study of SLU under a class-incremental learning (CIL) setup still lacks. In CIL, one single model is adapted on a sequence of different tasks as incremental intent labels emerge sequentially. Recently, Cappellazzo et al. (2023b) studied the problem of CIL in ASR-SLU, where SLU is carried out in a sequence-to-sequence (seq2seq) fashion, thus computing the intent labels in an auto-regressive way together with the ASR transcriptions. By doing this, the model comprises three blocks: a text encoder, an audio encoder, and an ASR decoder. While Cappellazzo et al. (2023b) proposed to overcome CF by using knowledge distillation techniques applied to the ASR decoder, in this paper, we exploit the multi-modal audio-text setting and propose COCONUT: COtinual Contrastive spOken laNguage UndersTanding. COCONUT combines experience replay (ER) and contrastive learning principles. Whereas ER is a well-established approach in CL (Rolnick et al., 2019), only recently has contrastive learning been harnessed to learn representations continually. Both supervised (Cha et al., 2021; Yang et al., 2022a) and self-supervised (Fini et al., 2022; Wang et al., 2022c) contrastive learning have proven useful to lessen the CF issue. Specifically, COCONUT relies on two contrastive learning-based losses that operate on a shared embedding space where the audio and text features are projected. The first contrastive loss, coined _Negative-Student Positive-Teacher_ (NSPT), is a modified version of the supervised contrastive learning loss that aims to consolidate what the model has learned in the previous tasks. It also exploits the knowledge distillation (KD) principle (Hinton et al., 2015; Li & Hoiem, 2017) to guide the current model (student) to produce representations that follow the ones obtained with the model from the previous task (teacher). For this reason, this loss is computed only for the rehearsal data (i.e., the anchors). A key difference between our loss and the standard contrastive one is that the positive samples are computed using the teacher model (the positives only come from the rehearsal data), whereas the negatives are computed with the student. In this way, we avoid stale and scattered representations for the new data. The second loss is inspired by the recent progress in multi-modal representation learning. Considering that for audio-text paired data, audio and text represent the same information but in different ways, it has been shown that aligning their representations results in better performance for various speech-related problems (Zhu et al., 2022; Ye et al., 2022; Manco et al., 2022). Therefore, we propose a multi-modal (MM) supervised contrastive loss that, exclusively applied to the current task's data, brings audio and text representations belonging to the same class into closer proximity in the Figure 1: Overview of COCONUT: our proposed CL method. 
Aside from the standard ASR loss, COCONUT implements two contrastive learning-based losses. The NSPT (negative-student positive-teacher) loss is a supervised contrastive distillation loss that preserves the feature representations of the past classes for both audio and text samples. The positive and negative samples are computed with the teacher and student model, respectively. The MM (multi-modal) loss aims to align audio and text representations belonging to the same class. The combination of these two losses produces features that are more transferable and resilient to catastrophic forgetting. shared feature space, resulting in features that are more transferable and resilient to CF. An overview of COCONUT is illustrated in Figure 1. In summary, our contributions are the following: * We introduce COCONUT, a CL method that makes use of two supervised contrastive learning objectives to mitigate catastrophic forgetting for seq2seq SLU models. * We conduct extensive experiments on two popular SLU benchmarks and we show COCONUT achieves consistent improvements over the baselines. We also demonstrate that it can be combined with a KD technique applied to the ASR decoder, leading to further improvements. * We finally ablate the contribution of each loss and its components, as well as the role of the temperature parameter in the contrastive continual learning process. ## 2 Related Work A vast array of CL strategies exist in the literature (Wang et al., 2023; Zhou et al., 2023), which can be categorized into some macro groups: _regularization_-based, _experience replay_, and _architecture_-based. _Regularization_ methods contrast forgetting either by introducing some ad-hoc regularization terms that penalize changes to model weights (Ebrahimi et al., 2019; Kirkpatrick et al., 2017) or to model predictions (Hou et al., 2018; Li and Hoiem, 2017; Fini et al., 2020). _Experience replay_ approaches interleave the new data with cherry-picked samples from the prior tasks (Chaudhry et al., 2018; Bang et al., 2021; Buzzega et al., 2020), or they incorporate regularization terms with this additional data to steer the optimization process and prevent catastrophic forgetting (Chaudhry et al., 2018; Wang et al., 2021; Yang et al., 2022). Finally, _architecture_ methods involve creating task-specific/adaptive parameters, such as dedicated parameters to each task (Xue et al., 2022; Wang et al., 2022) or task-adaptive sub-modules or subnetworks (Aljundi et al., 2017; Ostapenko et al., 2021). Contrastive learning (Oord et al., 2018; Chen et al., 2020) is a popular approach in self-supervised learning, but it can also be used in supervised learning (Gui et al., 2023) and multimodal learning (Radford et al., 2021). Its objective is to learn discriminative feature representations by pushing apart different samples (negatives) and bringing closer similar ones (positives). In the case of supervised CIL, it has been shown that endowing the model with contrastive learning objectives results in more robust representations against CF. For incremental semantic segmentation, Yang et al. (2022) and Zhao et al. (2023) propose to exploit contrastive learning in conjunction with knowledge distillation. For image classification, Wang et al. (2022) advance a contrastive learning strategy based on the vision transformer architecture for online CL. 
## 3 Problem Formulation ### ASR-SLU Multi-task Learning SLU is considered a more difficult task than ASR and NLU since it involves concurrent acoustic and semantic interpretation (Tur and De Mori, 2011). For this reason, it is common practice in the literature to include an additional ASR objective such that the intent and the transcript are generated in an auto-regressive fashion, resulting in a multi-task learning setting (Arora et al., 2022; Peng et al., 2023). By doing this, the text transcript input to the model includes a class intent token that is specific to the actual task. Let \(\theta\) be the parameters of a seq2seq ASR model, constituted by an audio encoder, a text encoder (i.e., embedding layer), and an ASR decoder. Let \(\textbf{x}=[x_{0},\dots,x_{U-1}]\) be an audio input sequence of length \(U\), and \(\textbf{y}=[y_{cls},y_{sep},y_{0},\dots,y_{J-3}]\) be the corresponding "extended" input transcript of length \(J\), where with the term "extended" we refer to the original transcript \([y_{0},\dots,y_{J-3}]\) augmented with the intent class token \(y_{cls}\) and a special separation token \(y_{sep}\). The goal of the ASR model is to find the most likely extended transcript given the input sequence **x**: \[\hat{\textbf{y}}=\operatorname*{arg\,max}_{\textbf{y}\in\mathcal{Y}^{*}}p( \textbf{y}|\textbf{x};\theta), \tag{1}\] where \(\mathcal{Y}^{*}\) is the set of all token sequences. The predicted intent is obtained extracting \(y_{cls}\) from \(\hat{\textbf{y}}\). ### Class-Incremental Learning For our experiments, we consider a CIL setting where we adapt a single model to learn sequentially \(N\) tasks corresponding to non-overlapping subsets of classes (in our case _intents_). Put formally, the training dataset is divided into \(N\) distinct tasks, \(\mathcal{D}=\{\mathcal{D}_{0},\ldots,\mathcal{D}_{N-1}\}\), based on the intent token \(y_{cls}\), so that one intent is included in one and only one task. The dataset \(\mathcal{D}_{n}\) of task \(n\) comprises audio signals \(\mathcal{X}_{n}\) with associated transcriptions \(\mathcal{Y}_{n}\), i.e. \(\mathcal{D}_{n}=(\mathcal{X}_{n},\mathcal{Y}_{n})\). The CIL setting is challenging in that the model must be able to distinguish all classes until task \(n\), thus at inference time the task labels are not available (unlike in task-incremental learning) (Hsu et al., 2018). ## 4 Proposed Approach ### Standard Rehearsal-based Approach We assume the availability of a rehearsal buffer, \(\mathcal{M}\), in which we can store a few samples for each class encountered in the previous tasks. During the training phase of task \(n\), \(\mathcal{D}_{n}\), we refer to \(\mathcal{B}\) as a mini-batch of samples \((\mathbf{x},\mathbf{y})\), some of which come from the current task and some from the rehearsal memory. To increase the variance of the audio data, we apply SpecAug (Park et al., 2019) to the audio waveform \(\mathbf{x}\) as a data augmentation transformation. Regarding the transcript \(\mathbf{y}\), we do not implement any augmentation technique. Then, we encode each modality separately through a dedicated feature encoder. An audio encoder maps each audio input into a feature vector \(\mathbf{h}_{\mathrm{A}}\in\mathbb{R}^{U\times d_{\mathrm{A}}}\), where \(d_{\mathrm{A}}\) is the audio hidden size. Similarly, a text encoder converts each text input into a feature vector \(\mathbf{h}_{\mathrm{T}}\in\mathbb{R}^{J\times d_{\mathrm{T}}}\), where \(d_{\mathrm{T}}\) is the text hidden size. 
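The task construction described above amounts to a simple partition of the training set by intent. Below is a minimal sketch with illustrative names (the actual SLURP/FSC splits follow the scenario-based criterion described later in the experimental setup).

```python
from collections import defaultdict

def build_cil_tasks(dataset, intents_per_task):
    """Partition (audio, transcript, intent) examples into N disjoint tasks,
    so that every intent appears in exactly one task.

    intents_per_task: list of lists of intent labels, one list per task.
    """
    by_intent = defaultdict(list)
    for example in dataset:               # example assumed to be a dict
        by_intent[example["intent"]].append(example)

    tasks = []
    for task_intents in intents_per_task:
        task_data = [ex for intent in task_intents for ex in by_intent[intent]]
        tasks.append(task_data)
    return tasks

# e.g., a 6-task split where each task groups the intents of three scenarios:
# tasks = build_cil_tasks(train_set, [["alarm", "audio", "iot"], ...])
```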
At this point, if no specific CL losses are introduced, the ASR decoder generates the output sequence in an auto-regressive fashion, cross-attending on the audio encoder's feature representations \(\mathbf{h}_{\mathrm{A}}\). Therefore, at task \(n\), we minimize the conventional cross-entropy loss over the current mini-batch \(\mathcal{B}\): \[\mathcal{L}_{\text{ASR}}=-\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x},\mathbf{y} )\in\mathcal{B}}\log(p(\mathbf{y}|\mathbf{x};\theta)). \tag{2}\] ### Coconut **Preliminaries**. We introduce here some notations for our proposed approach COCONUT. Since we work with audio and text sequences, we need to aggregate the features we obtain with the encoders before computing the contrastive loss. For the audio component \(\mathbf{h}_{\mathrm{A}}\) we apply a mean operation over its sequence length, whereas for text we only select the feature related to the intent token. Then, as is common practice in contrastive learning (Radford et al., 2021; Chen et al., 2020), the resulting embeddings go through two separate linear projection layers that map them into a shared embedding space. At inference time, the projection layers are discarded. Therefore, we get the projected embeddings \(\mathbf{a}\) and \(\mathbf{t}\) in the following way: \[\mathbf{a}=g_{\mathrm{A}}(avg(\mathbf{h}_{\mathrm{A}})),\quad\mathbf{t}=g_{ \mathrm{T}}(cls(\mathbf{h}_{\mathrm{T}})), \tag{3}\] where \(cls(\cdot)\) is a function that extracts the feature associated with the class token, \(g_{\mathrm{A}}(\cdot)\) and \(g_{\mathrm{T}}(\cdot)\) are the projection layers, \(\mathbf{a}\in\mathbb{R}^{d_{\mathrm{S}}}\) and \(\mathbf{t}\in\mathbb{R}^{d_{\mathrm{S}}}\), where \(d_{\mathrm{S}}\) is the dimension of the shared space. Furthermore, we introduce some notations for the indices of samples coming from the current mini-batch \(\mathcal{B}\). Let \(\mathcal{I}_{\mathrm{c}}\) and \(\mathcal{I}_{\mathrm{r}}\) represent the set of indices of the new task samples and the indices of the samples from the rehearsal memory (old task samples) in \(\mathcal{B}\), respectively. Also, let \(\mathcal{I}=\mathcal{I}_{\mathrm{c}}\cup\mathcal{I}_{\mathrm{r}}\), and we define \(\mathcal{P}(k)\) as the set of indices of positive samples (i.e., samples with the same intent token). The objective of a standard supervised contrastive loss (SCL) (Khosla et al., 2020) is to push the representations of samples with different classes (negative pairs) farther apart while clustering representation of samples with the same class (positive pairs) closely together. Suppose that we get from the projection layers a generic representation \(\mathbf{z}_{i}^{D}\) for the \(i\)-th element in the batch, where \(\mathbf{z}=\{\mathbf{a},\mathbf{t}\}\) and the superscript \(D\) denotes whether the representation is computed with the teacher or student model. A generic formulation of the SCL loss takes the following form: \[\mathcal{L}_{\text{SCL}}=\sum_{k\in\mathcal{I}}\frac{-1}{|\mathcal{P}(k)|}\sum _{p\in\mathcal{P}(k)}\log\frac{\exp(\mathbf{z}_{k}^{D}\cdot\mathbf{z}_{p}^{D}/ \tau)}{\sum_{i\in\mathcal{I}}\exp(\mathbf{z}_{k}^{D}\cdot\mathbf{z}_{i}^{D}/ \tau)}, \tag{4}\] where \(\tau\in\mathbb{R}^{+}\) is a fixed temperature scaling parameter. **Supervised Contrastive Distillation Loss (NSPT)**. This loss combines the benefits of knowledge distillation with those of contrastive learning (Tian et al., 2019; Sun et al., 2020). 
First of all, since the teacher model conveys information about the previous classes, we would like to use it as a guide for the student through a knowledge distillation objective. In this way, the loss encourages the student model to produce audio and text embeddings consistent with those obtained by the teacher. Therefore, only the rehearsal samples are involved in this process as the teacher had no chance to see the current data. Additionally, we want to pull closer embeddings sharing the same intent class (i.e. the positives), while we push away the others (i.e. the negatives, whose class is different). This is obtained via a modified version of the standard supervised contrastive loss tailored for our setting. In fact, a standard one would use the teacher model to compute both the positives and the negatives (Khosla et al., 2020). However, since the teacher model is frozen and it is pointless to compute the representations of the samples from the current task using the teacher, we propose to use the student model for computing the representations of the negatives. Therefore, our contrastive distillation loss computes the embeddings of the anchor and its corresponding negatives using the student model, while the positives come from the teacher (we call this loss _Negative-Student Positive-Teacher_, NSPT). On the contrary, for the standard contrastive loss both the positives and negatives are computed with the teacher (we call it _Negative-Teacher Positive-Teacher_, NTPT). Figure 2 illustrates visually how the NTPT and NSPT work in the shared embedding space. The NSPT loss is computed for both audio and text embeddings, leading to two components, one for each modality, as follows: \[\mathcal{L}_{\text{NSPT}}=\sum_{k\in\mathcal{I}_{\tau}}\frac{-1}{|\mathcal{P }(k)|}\sum_{p\in\mathcal{P}(k)}\bigg{[}\underbrace{\log\frac{\exp(\mathbf{a}_ {k}^{n}\cdot\mathbf{a}_{n}^{n-1}/\tau)}{\sum_{i\in\mathcal{I}}\exp(\mathbf{a}_ {k}^{n}\cdot\mathbf{a}_{n}^{n}/\tau)}}_{\mathcal{L}_{\text{A}}}+\underbrace{ \log\frac{\exp(\mathbf{t}_{k}^{n}\cdot\mathbf{t}_{n}^{n-1}/\tau)}{\sum_{i\in \mathcal{I}}\exp(\mathbf{t}_{k}^{n}\cdot\mathbf{t}_{i}^{n}/\tau)}}_{\mathcal{L} _{\text{T}}}\bigg{]}, \tag{5}\] where \(n\) and \(n-1\) denote whether the representation is obtained with the student or teacher, and \(\mathcal{L}_{\text{A}}\) and \(\mathcal{L}_{\text{T}}\) represent the audio and text contributions, respectively. We empirically validate that the intuition of using negative samples from the student is beneficial in practice in section 5.3. **Supervised Multi-Modal Contrastive Loss**. This loss is introduced for two reasons. First of all, since during the first task (no CL) the NSPT loss is not computed, this means that the projector layers of the model are not trained. This is a problem during the second task when the student distills the knowledge from the teacher with randomly initialized projectors. Second, we want to exploit the multi-modal nature of our SLU CIL setting. Consequently, we introduce a multi-modal (MM) loss that aims to align audio and text representations belonging to the same class, and thus training the projectors of the model. This alignment is achieved via a supervised multi-modal (i.e., audio-text) Figure 2: Illustration of the NTPT loss and our proposed NSPT loss. Given an anchor sample from the current mini-batch, the NTPT loss computes the negatives and positives using the teacher model (dashed circles). 
Instead, the NSPT loss, since the negative samples mainly come from the new classes and the teacher model has not been trained using those classes, computes the positives with the teacher while the negatives are computed with the student model (solid circles). If the features obtained with the teacher are scattered and static (the teacher is frozen), those obtained with the student are more clustered and can be learned during the current task. Best viewed in color. contrastive learning objective where feature representations of samples sharing the same intent token are attracted while the others are pushed away. Similar to (Kwon et al., 2022), we use the [CLS] text token (\(y_{cls}\)) for performing the multi-modal alignment. Furthermore, following (Cha et al., 2021), we always treat the rehearsal samples as negatives, preventing them from being anchors during the learning process. This design choice is buttressed by two motivations: 1) rehearsal data have been learned by the previous model already and are preserved via the NSPT loss, and 2) we encourage the model to produce clusters for the new data that are separated from those of the rehearsal data. Formally, the MM loss takes the following form: \[\mathcal{L}_{\text{MM}}=\sum_{k\in\mathcal{I}_{e}}\frac{-1}{|\mathcal{P}(k)|} \sum_{p\in\mathcal{P}(k)}\Bigg{[}\text{log}\,\frac{\exp(\mathbf{a}_{k}\cdot \mathbf{t}_{p}/\tau)}{\sum_{i\in\mathcal{I}}\exp(\mathbf{a}_{k}\cdot\mathbf{t }_{i}/\tau)}+\text{log}\,\frac{\exp(\mathbf{t}_{k}\cdot\mathbf{a}_{p}/\tau)}{ \sum_{i\in\mathcal{I}}\exp(\mathbf{t}_{k}\cdot\mathbf{a}_{i}/\tau)}\Bigg{]}. \tag{6}\] The first term of the internal loss is the audio-to-text component, whereas the second is the text-to-audio component (Zhang et al., 2022). The presence of both directions (\(A\to T\) and \(T\to A\)) makes the MM loss symmetric. All in all, COCONUT minimizes the following loss: \[\mathcal{L}=\mathcal{L}_{\text{ASR}}+\lambda_{\text{MM}}\mathcal{L}_{\text{ MM}}+\lambda_{\text{NSPT}}\mathcal{L}_{\text{NSPT}}, \tag{7}\] where lambdas are loss-specific weights. An overview of COCONUT is illustrated in Figure 1. ## 5 Experiments ### Experimental Setup and Implementation Details **Datasets and CIL setting**. We evaluate COCONUT on two SLU datasets: the Fluent Speech Commands (FSC) (Lugosch et al., 2019) and the Spoken Language Understanding Resource Package (SLURP) (Bastianelli et al., 2020). FSC includes 30,043 English utterances, recorded at 16 kHz. It includes 31 intent classes in total. The SLURP dataset comprises around 56 hours of audio of people interacting with a home assistant (_slurp_real_), with the addition of 43.5 hours of synthetic data (_slurp_synth_). It is considered the most challenging SLU dataset due to its lexical complexity. Each utterance is annotated with 3 semantics: scenario, action, and entity. The pair (scenario, action) defines an intent. Overall, there are 18 scenarios and 69 intents. For our experiments, we only perform intent classification. Following (Cappellazzo et al., 2023b), we use the scenario labels as splitting criterion to define the CIL setting. We experiment on two configurations: 1) the datasets are partitioned into 3 tasks, each task comprising 6 scenarios for SLURP (denoted as SLURP-3), and 10 intents for FSC (FSC-3); 2) a more challenging configuration with 6 tasks, each task including 3 scenarios for SLURP (SLURP-6), and 5 intents for FSC (FSC-6). **Implementation Details**. For both datasets, the text encoder is a standard text embedding layer with size \(768\). 
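Returning briefly to Eq. 6, the MM objective can be sketched in PyTorch as below. This is an illustrative re-implementation rather than the authors' code: it assumes the projected audio/text embeddings, intent labels, and a rehearsal mask are given, treats a sample's own paired modality as one of its positives, and averages over anchors for convenience. The default temperature of 0.1 matches the value used in the experiments.

```python
import torch
import torch.nn.functional as F

def mm_contrastive_loss(a, t, labels, is_rehearsal, tau: float = 0.1):
    """Supervised multimodal contrastive loss (a sketch of Eq. 6).

    a, t:          (B, d) audio / text projections (L2-normalized here).
    labels:        (B,) intent ids.
    is_rehearsal:  (B,) bool mask; rehearsal samples are never anchors or
                   positives, only negatives.
    """
    a, t = F.normalize(a, dim=-1), F.normalize(t, dim=-1)
    logits_at = a @ t.T / tau                      # audio-to-text similarities
    logits_ta = t @ a.T / tau                      # text-to-audio similarities

    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)       # (B, B)
    pos_mask = same_class & ~is_rehearsal.unsqueeze(0) & ~is_rehearsal.unsqueeze(1)

    loss, num_anchors = a.new_zeros(()), 0
    for k in torch.where(~is_rehearsal)[0]:        # anchors: current-task samples
        pos = torch.where(pos_mask[k])[0]
        if len(pos) == 0:
            continue
        log_prob_at = logits_at[k] - torch.logsumexp(logits_at[k], dim=0)
        log_prob_ta = logits_ta[k] - torch.logsumexp(logits_ta[k], dim=0)
        loss = loss - (log_prob_at[pos].mean() + log_prob_ta[pos].mean())
        num_anchors += 1
    return loss / max(num_anchors, 1)
```

The NSPT loss in Eq. 5 follows the same pattern, except that the anchors are rehearsal samples and the positives are taken from the frozen teacher model.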
For the audio encoder, we use a Wav2vec 2.0 base model (Baevski et al., 2020) pre-trained and fine-tuned on 960 hours of Librispeech for SLURP (\(\sim 94.3\)M parameters), while we use DistilHuBERT base (Chang et al., 2022) for FSC (\(\sim 23.5\)M parameters). Since FSC is a less challenging dataset than SLURP, we found that a smaller pre-trained encoder is sufficient to achieve state-of-the-art results. Both encoders have hidden sizes of \(768\) and their feature extractor is kept frozen during training. As in (Radford et al., 2021), we employ linear projection layers to map from each encoder's representation to the audio-text embedding space, whose dimension is \(512\). The ASR decoder is transformer-based with 6 layers, hidden size equal to \(768\), \(8\) attention heads, and the dimension of the feedforward layers is \(2048\). For the tokenization we apply Byte-Pair Encoding (BPE) (Sennrich et al., 2016) for SLURP, with a vocabulary size of \(1000\) and BPE dropout equal to \(0.1\), whereas for FSC, given the limited number of unique words, we use word tokenization, resulting in 139 tokens. BPE automatically assigns to each intent a dedicated token, whereas for FSC we manually add the intent tokens. We refer the reader to the appendix for an exhaustive description of the hyperparameters. Regarding the weight coefficients, we set \(\lambda_{\text{MM}}\) to \(0.1\), and similar to (Douillard et al., 2022; Wu et al., 2019) we set \(\lambda_{\text{NSPT}}\) to \(\frac{L_{p}}{L_{p}+L_{n}}\), where \(L_{p}\) and \(L_{n}\) count the number of past and new classes. **Baselines**. Apart from the standard **offline** (1 task, no continual) and **fine-tuning** (no CL strategies) baselines, we compare COCONUT against standard **experience replay** (ER) methods with _random_ and _iCaRL_(Rebuffi et al., 2017) sampling strategies. We note that ER is already a strong baseline for FSC and SLURP. Additionally, we report two methods proposed in (Cappellazzo et al., 2023b): audio-KD (**A-KD**) that applies the KD on the audio features of the rehearsal samples, and seq-KD (**S-KD**) that at the end of the current task stores the text transcriptions computed with beam search for the rehearsal samples and use them as pseudo-transcriptions for the next task. This method operates on the ASR decoder. We also report text-KD (**T-KD**), the text counterpart of the A-KD. **Metrics**. Following (Douillard et al., 2022), we report the results in terms of the _Avg Acc_, which is the average of the intent accuracies after each training task, and the _Last Acc_, which is the intent accuracy after the last task. We also report the _Avg WER_, defined as the average of the Word Error Rate (WER) of the extended transcription after each task. ### Main Results In the first two rows of Table 1, we include the upper and lower bounds represented by the offline learning (which is in line with the state-of-the-art) and fine-tuning approaches. For the fine-tuning approach, we can notice how CF deteriorates the knowledge of the prior classes. We then include ER baselines with buffer capacity equal to 1 or 2% of the dataset size. While all methods use 1%, we also include one with 2% to show how COCONUT and the other methods perform with respect to this one, but using half memory. From these results we can see that ER-based methods achieve good results for all metrics and configurations, confirming themselves as solid baselines. 
For FSC, COCONUT outperforms the other baselines by a significant margin, in terms of both accuracy and WER. Its combination with the S-KD leads to additional improvements (last row). \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline **Setting \(\rightarrow\)** & & \multicolumn{3}{c}{**FSC-3**} & \multicolumn{3}{c}{**FSC-6**} & \multicolumn{3}{c}{**SLURP-3**} & \multicolumn{3}{c}{**SLURP-6**} \\ \cline{2-13} **Metric \(\rightarrow\)** & **ER size/** & Avg Acc Acc & Last & Avg Acc Acc & Last & Avg Acc Acc & Last & Avg Acc Acc & WER & Acc Acc & Acc & WER \\ \hline Offline & - & 99.28 & - & 0.48 & 99.28 & - & 0.48 & 84.41 & - & 17.65 & 84.41 & - & 17.65 \\ Fine-tuning & - & 49.13 & 17.61 & 36.37 & 29.92 & - & 7.59 & 54.66 & 46.65 & 18.42 & 28.32 & 31.90 & 10.57 & 34.79 \\ \hline \hline FC & 20\(\times\)/rand & 85.02 & 84.20 & 5.19 & 7.91 & 7.97 & 13.75 & 7.562 & 68.08 & 19.55 & 27.75 & 6.50 & 22.98 \\ ER & 1\%/rand & 79.17 & 69.81 & 15.87 & 68.61 & 63.71 & 24.04 & 71.44 & 61.88 & 21.25 & 66.57 & 58.22 & 24.50 \\ \hline ER & 1\%/i\_CRRL & 82.04 & 74.00 & 13.45 & 69.76 & 64.12 & 23.22 & 71.94 & 63.22 & 21.06 & 68.08 & 62.29 & 26.05 \\ T-KD & 1\%/i\_CRRL & 82.11 & 75.43 & 12.95 & 69.08 & 64.73 & 23.82 & 72.44 & 62.43 & 21.19 & 66.95 & 60.47 & **24.26** \\ A-KD & 1\%/i\_CRRL & 84.79 & 18.28 & 11.54 & 73.44 & 67.05 & 20.36 & 72.10 & 63.84 & **20.67** & 68.52 & 62.51 & 24.29 \\ S-KD & 1\%/i\_CRRL & 84.29 & 75.31 & 12.39 & 73.65 & 67.71 & 21.27 & **74.28** & **65.95** & 21.26 & 69.91 & 63.22 & **24.26** \\ \hline COCONUT & 1\%/i\_CRRL & 86.39 & 80.21 & **11.08** & **77.09** & **73.80** & **19.05** & 72.75 & 64.62 & 21.25 & **70.17** & **63.86** & **24.29** \\ \hline COCONUT & 1\%/i\_CRRL & 87.64 & 83.65 & 10.87 & 7.59 & 7.60 & 18.38 & 7.58 & 63.38 & 21.61 & 8.19 & 63.41 & 24.16 \\ \hline \hline \end{tabular} \end{table} Table 1: Results in terms of Average Accuracy (\(\uparrow\)), Last Accuracy (\(\uparrow\)), and Average WER (\(\downarrow\)) for different strategies on FSC and SLURP datasets. The second column represents the experience replay (ER) buffer size and the selection strategy (Selec.) used to populate the buffer. **Bold** and _underscore_ numbers denote the best and second best method for a specific setting and metric, respectively. We show in the last row that COCONUT and S-KD can be used together, leading to the best results. For simplicity, the values of the last row are not in bold even though attain the best results. Figure 3: _Left_: the trend of the intent accuracy on the observed tasks for the FSC-6 setting. _Right_: the trend of the intent accuracy on the observed tasks for SLURP-6. If we turn our focus to SLURP we see that, for the setting with 3 tasks, S-KD turns out to be the best approach in terms of intent accuracy, followed by COCONUT. For the WER, all the methods achieve similar performance and do not provide significant enhancements. We speculate that, as only some words are task-specific while the others are spread across multiple tasks, the text modality is less affected by CF. It is also compelling to note that the A-KD always achieves better performance than T-KD, a trend that will also be observed for the NSPT loss in the ablation study. For SLURP-6, COCONUT slightly surpasses S-KD in terms of accuracy, and performs on par with the others for the WER metric. This indicates that COCONUT scales properly with the number of tasks, and thus the setting becomes more challenging. 
Additionally, we point out that for SLURP COCONUT provides less noticeable improvements than FSC. This can be attributable to the higher complexity of the dataset due to its larger dictionary and to the larger number of intents with respect to FSC (69 vs. 31). Finally, similar to FSC, the combination of COCONUT with S-KD attains the best results, confirming that fighting CF both at the encoders and ASR decoder is an effective solution. In Fig. 3 we illustrate the trend of the intent accuracy after each task for FSC-6 and SLURP-6, respectively. For FSC-6, COCONUT outperforms the other baselines by a large margin after each task. For SLURP-6, COCONUT has a similar trend as S-KD, and their combination leads to a noteworthy boost to such an extent that after task 3 it even beats the baseline that uses twice as much memory. On the left part of Fig. 4 we show the trend of the WER task by task. If it is evident that COCONUT and its combination with S-KD outstrip the other baselines, we can also observe that the gap between COCONUT and the baseline with 2% of rehearsal samples is more prominent for the WER than it was for the accuracy. On the right of Fig. 4, we study the trend of COCONUT for different values of rehearsal samples per class. Note that 8 samples per class is tantamount to a buffer of capacity 1% with respect to the entire training dataset. The maximum gain provided by COCONUT with respect to the ER baseline is reached for 4 and 8 samples per class (\(9.27\) and \(6.69\), respectively), while for the extreme cases of 2 and 16 samples, the gap is reduced. This is explained by the fact that when few samples are stored for each class, the effect of the NSPT loss is highly reduced given its reliance on the rehearsal data, whilst in the opposite case the abundance of rehearsal data makes the ER baseline already strong, thereby improving it becomes more challenging. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Dataset \(\rightarrow\)** & \multicolumn{3}{c}{**FSC-6**} & \multicolumn{3}{c}{**SLURP-6**} \\ \cline{2-7} **Metric \(\rightarrow\)** & Avg & Last & Avg & Avg & Last & Avg \\ **Method \(\downarrow\)** & Acc & Acc & WER & Acc & Acc & WER \\ \hline ER 1\%/iCaRL & 69.76 & 64.12 & 23.22 & 68.08 & 62.29 & 26.05 \\ **MM** & 71.12 & 67.76 & 22.88 & 68.78 & 62.94 & 24.81 \\ MM + NTPT & 74.05 & 67.61 & 21.22 & 68.91 & 62.57 & 24.69 \\ **MM + NSPT** & **77.09** & **73.80** & **19.05** & **70.17** & **63.66** & **24.29** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation on the use of NSPT and NTPT losses. Figure 4: _Left_: the trend of the WER on the observed tasks for the FSC-6 setting. _Right_: the accuracy of COCONUT and other methods as a function of the memory size. ### Ablation Study In this section, we ablate some design properties of COCONUT. In Tab. 2 we evaluate the difference in performance between the standard NTPT loss and our proposed NSPT. For FSC-6, the use of our proposed NSPT loss gives a considerable improvement over the NTPT loss in terms of all three considered metrics. For SLURP-6, the trend is maintained, and now the NTPT even brings a small deterioration over the MM baseline in terms of Last Acc. Also, the MM loss alone contributes positively over the ER baseline for both settings. We recall that we do not study the individual contribution of the NSPT loss due to the issue of the randomly-initialized projectors of the teacher during the second task (see section 4.2). 
In Table 3 we study the design properties of the MM loss on FSC-6, and with its best configuration, we determine the individual contribution of the audio and text components to the NSPT loss. As was evident for the A-KD and T-KD, with the former giving more valuable results, here we also discover that the audio component is predominant. Plus, the concurrent use of both components brings a moderate increase in accuracy, and this is due to the alignment between audio and text obtained via the MM loss. **On the impact of the temperature parameter**. Finally, in this section we analyze the role of the temperature parameter in the CIL process for the MM loss (see Eq. 6) on the FSC-6 setting. We first try to set the value beforehand (\(0.07\), \(0.1\), \(0.2\)), and then we make the temperature a learnable hyperparameter (initial value is \(0.07\)). Results are reported in Table 4. We can observe that \(\tau=0.1\) is the best configuration for the accuracy metric. Note that, however, the model does not seem very sensible to the temperature for the Avg Acc, whereas the Last Acc is more influenced. Since the Avg Acc does not change much across the three configurations, yet the Last Acc swaps much more, this means that for \(\tau=0.1\) the model struggles more during the initial tasks, but it performs better towards the end of the learning process. On the other hand, learning \(\tau\) task by task does not seem to be the right choice as the Avg Acc and WER metrics deteriorate with respect to the other three configurations where it is fixed. In fact, we observed that during the first tasks, the model is learning the optimal value for \(\tau\) until it finds it (this value approximately lies in the range \(0.134-0.142\)). This initial transitional phase penalizes the accuracy of the first tasks, which in turn leads to a deterioration in the Avg Acc metric. ## 6 Conclusion In this work, we study the problem of E2E SLU using a seq-2-seq model for class-incremental learning. In order to mitigate catastrophic forgetting we propose COCONUT, a CL approach that exploits experience replay and contrastive learning. On the one hand, it preserves the previously learned feature representations via an ad-hoc supervised contrastive distillation loss, on the other it contributes to aligning audio and text representations, thus resulting in more transferable and robust to catastrophic forgetting representations. We also show that COCONUT outperforms the other baselines and that synergizes with other knowledge distillation techniques operating on the decoder \begin{table} \begin{tabular}{l c c c} \hline \hline **Metric \(\rightarrow\)** & Avg & Last & Avg \\ **Temp. (\(\tau\))**\(\downarrow\) & Acc & Acc & WER \\ \hline 0.07 & 71.06 & 64.75 & **22.07** \\ \hline 0.1 & **71.12** & **67.76** & 22.88 \\ 0.2 & 71.01 & 62.35 & 22.78 \\ \hline Learnable & 69.05 & 66.33 & 24.57 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of the temperature \(\tau\) for the MM loss. We experiment on FSC-6 by setting \(\tau\) beforehand and making it a learnable hyperparameter as is common practice in offline settings (Radford et al., 2021). The light-blue row corresponds to the value we used for our experiments. 
\begin{table} \begin{tabular}{c c c c} \hline \hline **CLS** & **Anchor** & \(\mathcal{L}_{\text{A}}\) & \(\mathcal{L}_{\text{T}}\) & **Acc** \\ \hline & & & 70.10 \\ ✓ & & & 70.49 \\ & ✓ & & 71.09 \\ ✓ & ✓ & & **71.12** \\ ✓ & ✓ & ✓ & 76.84 \\ ✓ & ✓ & & ✓ & 73.11 \\ ✓ & ✓ & ✓ & **77.09** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of the MM (upper part) and NSPT (bottom part) components. **CLS**: whether only the intent class token is used; **Anchor**: whether ER data are excluded from the anchors. \(\mathcal{L}_{\text{A}}/\mathcal{L}_{\text{T}}\): whether the audio/text component of NSPT loss is used. side. We finally dissect the design choices of COCONUT through specific ablation studies, as well as the influence of the temperature parameter throughout the continual learning process.
2303.13874
Query-Dependent Video Representation for Moment Retrieval and Highlight Detection
Recently, video moment retrieval and highlight detection (MR/HD) are being spotlighted as the demand for video understanding is drastically increased. The key objective of MR/HD is to localize the moment and estimate clip-wise accordance level, i.e., saliency score, to the given text query. Although the recent transformer-based models brought some advances, we found that these methods do not fully exploit the information of a given query. For example, the relevance between text query and video contents is sometimes neglected when predicting the moment and its saliency. To tackle this issue, we introduce Query-Dependent DETR (QD-DETR), a detection transformer tailored for MR/HD. As we observe the insignificant role of a given query in transformer architectures, our encoding module starts with cross-attention layers to explicitly inject the context of text query into video representation. Then, to enhance the model's capability of exploiting the query information, we manipulate the video-query pairs to produce irrelevant pairs. Such negative (irrelevant) video-query pairs are trained to yield low saliency scores, which in turn, encourages the model to estimate precise accordance between query-video pairs. Lastly, we present an input-adaptive saliency predictor which adaptively defines the criterion of saliency scores for the given video-query pairs. Our extensive studies verify the importance of building the query-dependent representation for MR/HD. Specifically, QD-DETR outperforms state-of-the-art methods on QVHighlights, TVSum, and Charades-STA datasets. Codes are available at github.com/wjun0830/QD-DETR.
WonJun Moon, Sangeek Hyun, SangUk Park, Dongchan Park, Jae-Pil Heo
2023-03-24T09:32:50Z
http://arxiv.org/abs/2303.13874v1
# Query-Dependent Video Representation for Moment Retrieval and Highlight Detection

###### Abstract

Recently, video moment retrieval and highlight detection (MR/HD) have been spotlighted as the demand for video understanding has drastically increased. The key objective of MR/HD is to localize the moment and estimate the clip-wise level of accordance, i.e., saliency score, with the given text query. Although recent transformer-based models brought some advances, we found that these methods do not fully exploit the information of a given query. For example, the relevance between the text query and video contents is sometimes neglected when predicting the moment and its saliency. To tackle this issue, we introduce Query-Dependent DETR (QD-DETR), a detection transformer tailored for MR/HD. As we observe the insignificant role of a given query in transformer architectures, our encoding module starts with cross-attention layers to explicitly inject the context of the text query into the video representation. Then, to enhance the model's capability of exploiting the query information, we manipulate the video-query pairs to produce irrelevant pairs. Such negative (irrelevant) video-query pairs are trained to yield low saliency scores, which, in turn, encourages the model to estimate precise accordance between query-video pairs. Lastly, we present an input-adaptive saliency predictor which adaptively defines the criterion of saliency scores for the given video-query pairs. Our extensive studies verify the importance of building the query-dependent representation for MR/HD. Specifically, QD-DETR outperforms state-of-the-art methods on the QVHighlights, TVSum, and Charades-STA datasets. Codes are available at github.com/wjun0830/QD-DETR.

## 1 Introduction

Along with the advance of digital devices and platforms, video is now one of the most desired data types for consumers [2, 57]. Although the large information capacity of videos might be beneficial in many aspects, e.g., being informative and entertaining, inspecting the videos is time-consuming, so it is hard to capture the desired moments [1, 2]. Indeed, the need to retrieve user-requested or highlight moments within videos has greatly increased. Numerous research efforts were put into the search for requested moments in the video [1, 13, 15, 35] and into summarizing the video highlights [4, 38, 56, 66]. Recently, Moment-DETR [28] further spotlighted the topic by proposing the QVHighlights dataset, which enables the model to perform both tasks, retrieving the moments with their highlight-ness, simultaneously.

Figure 1: Comparison of highlight-ness (saliency score) when relevant and non-relevant queries are given. We found that in the existing work the query plays only an insignificant role, so the model may not be capable of detecting negative queries or video-query relevance; saliency scores for clips in ground-truth (GT) moments are low and equivalent for positive and negative queries. On the other hand, the query-dependent representations of QD-DETR result in saliency scores that correspond to the video-query relevance and in precisely localized moments.

When describing the moment, one of the most favored types of query is the natural language sentence (text) [1]. While early methods utilized convolution networks [16, 53, 68], recent approaches have shown that deploying the attention mechanism of the transformer architecture is more effective to fuse the text query into the video representation. For example, Moment-DETR [28] introduced the transformer
architecture which processes both text and video tokens as input by modifying the detection transformer (DETR), and UMT [36] proposed transformer architectures to take multi-modal sources, e.g., video and audio. Also, they utilized the text queries in the transformer decoder. Although they brought breakthroughs in the field of MR/HD with seminal architectures, they overlooked the role of the text query. To validate our claim, we investigate the Moment-DETR [28] in terms of the impact of text query in MR/HD (Fig.1). Given the video clips with a relevant positive query and an irrelevant negative query, we observe that the baseline often neglects the given text query when estimating the query-relevance scores, i.e., saliency scores, for each video clip. To this end, we propose Query-Dependent DETR (QD-DETR) that produces query-dependent video representation. Our key focus is to ensure that the model's prediction for each clip is highly dependent on the query. First, to fully utilize the contextual information in the query, we revise the transformer encoder to be equipped with cross-attention layers at the very first layers. By inserting a video as the query and a text as the key and value of the cross-attention layers, our encoder enforces the engagement of the text query in extracting video representation. Then, in order to not only inject a lot of textual information into the video feature but also make it fully exploited, we leverage the negative video-query pairs generated by mixing the original pairs. Specifically, the model is learned to suppress the saliency scores of such negative (irrelevant) pairs. Our expectation is the increased contribution of the text query in prediction since the videos will be sometimes required to yield high saliency scores and sometimes low ones depending on whether the text query is relevant or not. Lastly, to apply the dynamic criterion to mark highlights for each instance, we deploy a saliency token to represent the entire video and utilize it as an input-adaptive saliency criterion. With all components combined, our QD-DETR produces query-dependent video representation by integrating source and query modalities. This further allows the use of positional queries [34] in the transformer decoder. Overall, our superior performances over the existing approaches validate the significance of the role of text query for MR/HD. ## 2 Related Work ### Moment Retrieval and Highlight Detection MR is the task of localizing the moment relevant to the given text description. Popular approaches are modeling the cross-modal interaction between text query-video pair [63, 65, 37] or understanding the context of the temporal relation among video clips [1, 68]. On the other hand, TVT [29] exploited the additional data, i.e., subtitle, to capture the moment, and FVMR [16] enhanced the model in terms of inference speed for efficient MR. Different from the MR, HD aims to measure the clip-wise importance level of the given video [49, 61]. Due to its popularity and applicability, HD can be divided into several branches. From the perspective of annotation, we can categorize HD into supervised, weakly supervised, and unsupervised HD. Supervised HD [49, 59, 18] utilizes fine-grained highlight scores, which are very expensive to collect and annotate [58]. On the other hand, weakly supervised HD [5, 41, 58] learns to detect segments as highlights with video event labels, and finally, unsupervised HD [4, 26, 38, 45] does not require any annotations. 
Also, while the task is often implemented only with the video, there are works to employ the extra data modalities. Generally, multi-modality was taken into account by using the natural language query to find the desired thumbnail [35] and using additional sources, i.e., audio, to predict highlights [3, 62]. Although the MR/HD share a common objective to localize or discover the desired part of the given video, they have been studied separately. To handle these tasks at once, Moment-DETR [28] proposed the QVHighlights dataset, which contains a human-written text query and its corresponding moment with clip-level saliency labels. They also introduced the modified version of detection transformer (DETR [6]) to localize the query-relevant moments and their saliency scores. Following them, UMT [36] focused on processing multi-modal data by utilizing both video and audio features. Different from recent works deploying transformer architectures, here we concentrate on producing a query-dependent representation with the transformer. ### Detection Transformers DETR [6], an end-to-end object detector based on vision transformers, is one of the very recent works that utilize the transformer architectures for computer vision [50, 12]. Although DETR suffered from slow convergence, it simplifies the prediction process by eliminating the need for anchor generation and non-maximum suppression. Since then, along with the advance in DETR [11, 30, 69], DETR-like architectures have been popular in downstream tasks in both the image [67, 10, 23, 9] and video domains [28, 60]. Some of these works focused on analyzing the role of the decoder query and discovered that using the positional information speeds up the training and also enhances the detection performance [34, 39]. On the other hand, there are trials to extend the application of DETR on multi-modal data [28, 24], especially dealing with the query from different modalities, i.e., text, for detection (or retrieval). They generally handle the multi-modal data by simply forwarding them together to the transformer. In this paper, we also focus on handling the multi-modal data based on DETR-like architecture. However, different from the aforementioned techniques, we concentrate on the query-dependency of the prediction results. ## 3 Query-Dependent DETR Moment retrieval and highlight detection have the common objective to find preferred moments with the text query. Given a video of \(L\) clips and a text query with \(N\) words, we denote their representations as \(\{v_{1},v_{2},...,v_{L}\}\) and \(\{t_{1},t_{2},...,t_{N}\}\) extracted by frozen video and text encoders, respectively. With these representations, the main objective is to localize the center coordinate \(m_{c}\) and width \(m_{\sigma}\) within the video and rank the highlight score (saliency score) \(\{s_{1},s_{2},...,s_{L}\}\) for each clip. A straightforward approach to utilize transformer [12, 52] for the MR is to make a moment-wise prediction as a set of clips [28], or generating the moment according to the clip-wise predictions [36]. To exploit the multi-modal information, e.g., video and text query, they either simply concatenated the features across the modalities or inserted the texts to form the moment query to the transformer decoder. However, we claim that the relationship between the video and text query should be carefully considered rather than a simple concatenation since MR/HD requires every video clip to be conditionally assessed with the text queries. 
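To make this notation concrete before the architecture is described, the following is a minimal, PyTorch-style sketch of the MR/HD input-output interface. The tensor shapes follow the definitions above; the class and function names, the number of candidate moments, and the random placeholder outputs are illustrative assumptions rather than the authors' code.

```python
# A hedged sketch of the MR/HD interface, assuming frozen video and text encoders
# have already produced the clip and word features defined above.
from dataclasses import dataclass
import torch


@dataclass
class MRHDPrediction:
    center: torch.Tensor    # (num_moments,) normalized moment centers m_c
    width: torch.Tensor     # (num_moments,) normalized moment widths m_sigma
    saliency: torch.Tensor  # (L,) one saliency score s_i per video clip


def dummy_mrhd_model(video_feats: torch.Tensor, text_feats: torch.Tensor) -> MRHDPrediction:
    """video_feats: (L, d) clip features; text_feats: (N, d) word features."""
    num_clips = video_feats.shape[0]
    # Placeholder outputs only; a real model would be the QD-DETR architecture described next.
    return MRHDPrediction(center=torch.rand(10), width=torch.rand(10), saliency=torch.rand(num_clips))


# Example call with L=75 clips and an N=12-word query, both 256-dimensional.
pred = dummy_mrhd_model(torch.randn(75, 256), torch.randn(12, 256))
print(pred.saliency.shape)  # torch.Size([75])
```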
Our overall architecture is described in Fig. 2, following the design of our concrete baseline, Moment-DETR [28]. Given video and query representations extracted from fixed backbones, QD-DETR first transforms the video representation to be query-dependent using cross-attention layers. To further enhance the query-awareness of video representations, we incorporate irrelevant video-query pairs with a low saliency into the learning objective. Then, along with the transformer encoder-decoder architecture, a saliency token is defined that turns into an adaptive saliency predictor when attended by the specific video instance. Figure 2: Overview of the proposed QD-DETR architecture. Given a video and text query, we first extract video and text features from the frozen backbones. These video and text features are forwarded into the cross-attention transformer (Sec. 3.1). This process ensures the consistent contribution of text queries to the video tokens and, together with negative pair learning (Sec. 3.2), builds query-dependent video representation. Then, accompanied by the saliency token (Sec. 3.3), video tokens are given to the transformer encoder. In this procedure, the saliency token is transformed into the adaptive saliency prediction criteria. The outputs of the encoder are then processed to compute losses for both HD and MR. Specifically, the encoder’s output tokens are directly projected to saliency scores and optimized for HD, and also provided to the transformer decoder with the learnable moment queries to estimate the query-described moments. Finally, losses for MR are computed by the discrepancy between predicted and their corresponding GT moments. ### Cross-Attentive Transformer Encoder In this subsection, we use italic letters to represent _query_, _key_, and _value_ of the cross-attention layers. The key objective of the encoder for MR/HD is to produce clip-wise representations equipped with information regarding the degree of query-relevance since these features are directly used for retrieving the query-matched moments and predicting clip-wise saliency scores. However, the encoding process of existing works may not ensure the query conditioning on every clip. For example, Moment-DETR [28] naively concatenated the video with the query as input to the self-attention layers, which may result in an insignificant role of the query if the high similarities among the video clips overwhelm the contribution of the text query. On the other hand, UMT [36] utilizes the text query only for the synthesis of a moment query in the transformer decoder, so the resulting video representations are not associated with the text query. To take the textual contexts into every video clip representation, we deploy cross-attention layers between the source and the query modalities at the very first layers of the encoder. This ensures the consistent contribution of the query, thereby extracting query-dependent video representation. In detail, whereas the _query_ for cross-attention layers is prepared by projecting the video clips as \(Q_{v}=[\,p_{q}(v_{1}),...,p_{q}(v_{L})\,]\), the _key_ and _value_ are computed with the query text features as \(K_{t}=[\ p_{k}(t_{1}),...,p_{k}(t_{N})\ ]\) and \(V_{t}=[\ p_{v}(t_{1}),...,p_{v}(t_{N})\ ]\). \(p_{q}(\cdot)\), \(p_{k}(\cdot)\), and \(p_{v}(\cdot)\) are projection layers for _query_, _key_, and _value_.
Then, the cross-attention layer operates as follows: \[\text{Attention}(Q_{v},K_{t},V_{t})=\text{softmax}(\frac{Q_{v}K_{t}^{T}}{\sqrt{d}})V_{t}, \tag{1}\] where \(d\) is the dimension of the projected _key_, _value_, and _query_. Since the softmax scores are distributed only over the text tokens, each video clip is expressed as a weighted sum of the text features in proportion to its similarity to them. The attended features are then projected through an MLP and integrated into the original video representations, as in typical transformer layers. For the rest of the paper, we define the query-dependent video tokens, i.e., the output of the cross-attention layers, as \(X=\{x_{v}^{1},x_{v}^{2},...,x_{v}^{L}\}\). ### Learning from Negative Relationship While the cross-attention layers explicitly fuse the video and query features for intermediate video clip representations to engage the query information in an architectural way, we argue that the given video-text pairs lack the diversity needed to learn the general relationship. For instance, many consecutive clips in a single video often share similar appearances, and the similarity to a specific query will not be highly distinguishable, so the text query may not affect the prediction much. Thus, we consider the relationships between irrelevant pairs of videos and queries, inspired by many recognition practices [20, 31, 32, 40] that learn discriminative features across different categories. To implement such relationships, we define the given training video-query pairs as positive pairs and mix the video and query from different pairs to construct negative pairs. Fig. 3 illustrates the ways to augment such negative pairs and utilize them with positive pairs in training. While the video clips in positive pairs are trained to yield segmented saliency scores according to the query-relevance, irrelevant negative video-query pairs are enforced to have the lowest saliency scores. Formally, the loss function for suppressing the saliency of negative pairs \(x_{v}^{\text{neg}}\) is expressed as follows: \[L_{\text{neg}}=-\log(1-S(x_{v}^{\text{neg}})), \tag{2}\] where \(S(\cdot)\) is the saliency score predictor. This training scheme can also prevent the model from predicting the moments and highlights solely based on the inter-relationship among video clips without consideration of the query-relevance, since the same video instance should be predicted differently depending on whether the positive or negative query is given. ### Input-Adaptive Saliency Predictor A naive implementation of the saliency predictor \(S(\cdot)\) would be to stack one or more fully-connected layers. However, such a general head provides identical criteria for the saliency prediction of every video-query pair, neglecting the diverse nature of video and natural language query pairs. This violates our key idea to extract query-dependent video representation. Thus, we define the saliency token \(x_{s}\) to be utilized as an input-adaptive saliency predictor. Briefly, the saliency token is a randomly initialized learnable vector that becomes an input-adaptive predictor when added to the sequence of encoded video tokens and projected through the transformer encoder. To illustrate, as shown in Fig. 2, we first concatenate the saliency token with the query-dependent video tokens \(X\). We then process these tokens with the transformer encoder, which re-organizes the saliency token with input-dependent contexts.
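As a concrete illustration of Eq. (1) and Eq. (2), below is a minimal PyTorch-style sketch of a single text-to-video cross-attention layer and the negative-pair saliency suppression loss. The single-head formulation, the residual MLP integration, the sigmoid used to keep \(S(\cdot)\) in \((0,1)\), and all names and sizes are assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextToVideoCrossAttention(nn.Module):
    """Single-head sketch of Eq. (1): video clips attend to text tokens."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.p_q = nn.Linear(dim, dim)  # projects video clips to queries
        self.p_k = nn.Linear(dim, dim)  # projects text tokens to keys
        self.p_v = nn.Linear(dim, dim)  # projects text tokens to values
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, video: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # video: (L, d), text: (N, d)
        q, k, v = self.p_q(video), self.p_k(text), self.p_v(text)
        attn = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # (L, N), softmax over text tokens
        fused = attn @ v                                          # weighted sum of text features
        # Residual integration into the original clip features, as in a typical transformer layer.
        return video + self.mlp(fused)                            # query-dependent tokens x_v^1 ... x_v^L


def negative_pair_loss(neg_saliency_logits: torch.Tensor) -> torch.Tensor:
    """Eq. (2): push the saliency of mismatched (negative) video-query pairs toward zero.
    A sigmoid keeps S in (0, 1) so that log(1 - S) stays finite; this is an assumption of the sketch."""
    s = torch.sigmoid(neg_saliency_logits)
    return -torch.log(1.0 - s + 1e-6).mean()


# Usage sketch: a 75-clip video paired with a 12-word (negative) query.
layer = TextToVideoCrossAttention(dim=256)
x_neg = layer(torch.randn(75, 256), torch.randn(12, 256))
loss_neg = negative_pair_loss(torch.randn(75))
```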
Consequently, the saliency and video tokens are each projected by a single fully-connected layer with weights \(w_{s}\) and \(w_{v}\), respectively, and their scaled dot product becomes the saliency score. Formally, the saliency score \(S(x_{v}^{i})\) is computed as follows: \[S(x_{v}^{i})=\frac{w_{s}^{T}x_{s}\cdot w_{v}^{T}x_{v}^{i}}{\sqrt{d}}, \tag{3}\] where \(d\) is the channel dimension of the projected tokens. ### Decoder and Objectives Transformer Decoder. Recently, understanding the role of the query in the detection transformer is being spotlighted [34, 39]. It has been verified that designing the query with positional information helps not only to accelerate training but also to enhance accuracy. Yet, it is hard to directly employ these studies in tasks handling multi-modal data, e.g., MR/HD, since multi-modal data often have different definitions of position; the position can be understood as time in the video and word order in the text. Figure 3: Illustration of negative pair learning. Typical HL loss is defined only with a positive video–query pair, which is insufficient to learn various degrees of query-relevance. On the other hand, our negative pair learning enforces the model to yield different scores for a video depending on the query and to learn to suppress saliency scores for the negative query. On the contrary, our architectural design eliminates the need to feed the text query to the decoder since the query information is already taken into the video representations. To this end, we modify the 2D dynamic anchor boxes [34] to represent 1D moments in the video. Specifically, we utilize the center coordinate \(m_{c}\) and the duration \(m_{\sigma}\) of the moments to design the queries. Similarly to the previous way in the image domain, we pool the features around the center coordinate and modulate the cross-attention map with the moment duration. Then, the coordinates and durations are layer-wisely revised. Loss Functions. Training objectives for QD-DETR include loss functions for both MR and HD. First, the objective functions for MR, in which the key focus is to locate the desired moments, are adopted from the baseline [28]. The moment retrieval loss \(L_{\text{mr}}\) measures the discrepancy between the GT moment and the predicted counterpart. It consists of an \(L1\) loss and a generalized IoU loss \(L_{\text{gIoU}}(\cdot)\) from previous work [44] with minor modification to localize temporal moments. Additionally, a cross-entropy loss is used to classify the predicted moments \(\hat{y}\) as either foreground or background by \(L_{\text{CE}}=-\sum_{y\in Y}y\log(\hat{y})\) where \(\{\text{fg},\text{bg}\}\subset Y\). Thus, \(L_{\text{mr}}\) is defined as follows: \[L_{\text{mr}}=\lambda_{L1}||m-\hat{m}||+\lambda_{\text{gIoU}}L_{\text{gIoU}}(m,\hat{m})+\lambda_{\text{CE}}L_{\text{CE}} \tag{4}\] where \(m\) and \(\hat{m}\) are the ground-truth moment and its corresponding prediction, each containing a center coordinate \(m_{c}\) and duration \(m_{\sigma}\). Also, \(\lambda_{*}\) are hyperparameters for balancing the losses. The loss function for HD is to estimate the saliency score. It comprises two components: a margin ranking loss \(L_{\text{margin}}\) and a rank-aware contrastive loss \(L_{\text{cont}}\). Following [28], the margin ranking loss operates with two pairs of high-rank and low-rank clips. To be specific, the high-rank clips are ensured to retain higher saliency scores than both the low-rank clips within the GT moment and the negative clips outside the GT moment.
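A compact sketch of the input-adaptive saliency scoring in Eq. (3) is shown below; the margin-ranking constraint just described is formalized in Eq. (5) immediately after. Treating the saliency token as the first element of the encoder output and using bias-free linear projections are assumptions of this illustration.

```python
import torch
import torch.nn as nn


class InputAdaptiveSaliencyHead(nn.Module):
    """Eq. (3): saliency as a scaled dot product between the projected saliency
    token and each projected video token."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.w_s = nn.Linear(dim, dim, bias=False)  # projection for the saliency token
        self.w_v = nn.Linear(dim, dim, bias=False)  # projection for video tokens

    def forward(self, saliency_token: torch.Tensor, video_tokens: torch.Tensor) -> torch.Tensor:
        # saliency_token: (d,), video_tokens: (L, d) -> scores: (L,)
        d = video_tokens.shape[-1]
        return (self.w_v(video_tokens) @ self.w_s(saliency_token)) / d ** 0.5


# Usage: encoder output in which index 0 is assumed to be the saliency token.
encoded = torch.randn(1 + 75, 256)
head = InputAdaptiveSaliencyHead(256)
scores = head(encoded[0], encoded[1:])  # one saliency score per clip
```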
In short, \(L_{\text{margin}}\) is defined as: \[L_{\text{margin}}=\text{max}(0,\Delta+S(x^{\text{low}})-S(x^{\text{high}})) \tag{5}\] where \(\Delta\) is the margin, \(S(\cdot)\) is the saliency score estimator, and \(x^{\text{high}}\) and \(x^{\text{low}}\) are video tokens from the two pairs of high- and low-rank clips, respectively. In addition to the margin loss, which only indirectly guides the saliency predictor, we employ a rank-aware contrastive loss [21] to learn precisely segmented saliency levels. Given the maximum rank value \(R\), each clip in the mini-batch has a saliency score lower than \(R\). Then, we iterate over the batch \(R\) times, each time utilizing the samples with higher saliency scores than the iteration index (\(r\in\{0,1,...,R-1\}\)) to build the positive set \(X_{r}^{\text{pos}}\). Samples with a lower rank than the iteration index are included in the negative set \(X_{r}^{\text{neg}}\). Then, the rank-aware contrastive loss \(L_{\text{cont}}\) is defined as: \[L_{\text{cont}}=-\sum_{r=1}^{R}\text{log}\frac{\sum_{x\in X_{r}^{\text{pos}}}\text{exp}(S(x)/\tau)}{\sum_{x\in(X_{r}^{\text{pos}}\cup X_{r}^{\text{neg}})}\text{exp}(S(x)/\tau)} \tag{6}\] where \(\tau\) is a temperature scaling parameter. Note that \(X_{r}^{\text{neg}}\) also includes all clips in the negative pairs \(x_{v}^{\text{neg}}\) defined in Sec. 3.2. Finally, with the margin loss and the rank-aware contrastive loss, \(L_{\text{hl}}\) and the total loss function \(L_{\text{total}}\) are defined as follows: \[L_{\text{hl}}=\lambda_{\text{margin}}L_{\text{margin}}+\lambda_{\text{cont}}L_{\text{cont}}, \tag{7}\] \[L_{\text{total}}=L_{\text{hl}}+L_{\text{mr}}+\lambda_{\text{neg}}L_{\text{neg}}. \tag{8}\] ## 4 Evaluation ### Experimental Settings Dataset and Evaluation Metrics. For the evaluation, we validate the effectiveness of query-dependent source representation on QVHighlights [28], TVSum [48], and Charades-STA [15]. **QVHighlights** is the most recently released dataset for both moment retrieval and highlight detection. It is also the only dataset that has annotations for both tasks. In detail, QVHighlights consists of over 10,000 videos annotated with human-written text queries. It provides a fair benchmark as the evaluation for the test split can only be performed by submitting predictions to the QVHighlights server1. **Charades-STA** and **TVSum** are datasets for moment retrieval and video summarization, respectively. They contain 9,848 videos of indoor activities and 50 videos of various genres, e.g., news, documentary, and vlog, respectively. For all datasets, we follow the data splits from the existing works [28, 36]. Footnote 1: [https://codalab.lisin.upsaclay.fr/competitions/6937](https://codalab.lisin.upsaclay.fr/competitions/6937) To measure performance, we use the same evaluation metrics as the baselines. Specifically, for MR we report Recall@1 with IoU thresholds 0.5 and 0.7, and mean average precision (mAP) at different IoU thresholds. Similarly, we use mAP and HIT@1 for evaluating highlight detection, where HIT@1 is computed as the hit ratio of the highest-scored clip. ### Experimental Results We compare QD-DETR against baselines in MR and HD throughout Tab. 1, Tab. 3, and Tab. 2. Our experiments with multi-modal sources, i.e., video with audio, are implemented by simply concatenating the video and audio features along the channel axis. Throughout the tables, we use bold to denote the best scores. In Tab. 1, the task is to jointly learn and predict MR/HD.
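Before walking through the numbers, the MR/HD metrics described above can be computed roughly as follows. This is an illustrative sketch rather than the official QVHighlights evaluation code, and the "Very Good" rating threshold used for HIT@1 is an assumption.

```python
import numpy as np


def temporal_iou(pred, gt):
    """IoU between two 1D moments given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0


def recall_at_1(top1_preds, gt_moments, iou_thresh=0.5):
    """R1@thresh: fraction of queries whose top-1 predicted moment reaches the IoU threshold."""
    hits = [temporal_iou(p, g) >= iou_thresh for p, g in zip(top1_preds, gt_moments)]
    return float(np.mean(hits))


def hit_at_1(pred_saliency, gt_ratings, very_good=4):
    """HIT@1: whether the highest-scored clip is rated 'Very Good'
    (assumed here to be the top value 4 on QVHighlights' 0-4 rating scale)."""
    return float(gt_ratings[int(np.argmax(pred_saliency))] >= very_good)


# Toy usage: IoU(10-24s vs. 12-26s) = 0.75, so the prediction counts at the 0.5 threshold.
print(recall_at_1([(10.0, 24.0)], [(12.0, 26.0)], iou_thresh=0.5))  # 1.0
print(hit_at_1(np.array([0.1, 0.9, 0.3]), np.array([1, 4, 2])))     # 1.0
```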
As observed, our QD-DETR outperforms state-of-the-art (SOTA) approaches on all evaluation metrics. Among methods utilizing the video source, QD-DETR shows a dramatic improvement on the stricter metrics with high IoU; it outperforms the previous SOTA by large margins of up to 36% in [email protected] and [email protected]. On the other hand, QD-DETR with video and audio sources improves the MR metrics by 11.84% on average compared to the SOTA method employing multi-modal source data. These results verify the importance of emphasizing the source (video-only or video+audio) descriptive contexts in the text queries. Results in Tab. 3 also compare MR performances against the models using VGG [8, 16, 36, 68, 55, 64, 8], C3D [15, 19, 33, 64, 65], and Slowfast (SF) and CLIP features [28] on the Charades dataset. For a fair comparison, we enumerate each method with its backbone and compare within it. For each feature from VGG, C3D, and SF+C, we follow the data preparation settings from UMT [36], VSLNet [65], and Moment-DETR [28]. As reported, we validate that our model surpasses the existing SOTA methods with every type of feature. For video highlight detection in Tab. 2, we follow the protocols from the previous work [36]. Specifically, we train the model for each category and average the mAP scores. Out of 10 categories, QD-DETR outperforms baselines on 9 categories when only the video source is available, and on 8 categories when both video and audio are available. Overall, compared to methods with video-only and multi-modal sources, QD-DETR establishes new SOTA performances, improving by 4.2% on average over the previous SOTA model. ### Ablation study To investigate the effectiveness of each component in our work, we conduct an extensive ablation study in Tab. 4. Note that CATE and DAM denote the cross-attentive transformer encoder and dynamic anchor moments, respectively. Rows (b) to (e) show the effectiveness of each component compared to the baseline (a). \begin{table} \begin{tabular}{l|c|c c c c c c c} \hline \hline \multirow{3}{*}{Method} & \multirow{3}{*}{Src} & \multicolumn{4}{c}{MR} & \multicolumn{4}{c}{HD} \\ \cline{3-8} & & \multicolumn{3}{c}{R1} & \multicolumn{3}{c}{mAP} & \multicolumn{3}{c}{\(>\)= Very Good} \\ \cline{3-8} & & @0.5 & @0.7 & @0.5 & @0.75 & Avg.
& mAP & HIT@1 \\ \hline BeautyThumb [47] & V & - & - & - & - & - & 14.36 & 20.88 \\ DVSE [35] & V & - & - & - & - & - & 18.75 & 21.79 \\ MCN [1] & V & 11.41 & 2.72 & 24.94 & 8.22 & 10.67 & - & - \\ CAL [13] & V & 25.49 & 11.54 & 23.40 & 7.65 & 9.89 & - & - \\ XML [29] & V & 41.83 & 30.35 & 44.63 & 31.73 & 32.14 & 34.49 & 55.25 \\ XML+ [29] & V & 46.69 & 33.46 & 47.89 & 34.67 & 34.90 & 35.38 & 55.06 \\ \hline Moment-DETR [28] & V & \(52.89_{\pm 2.3}\) & \(33.02_{\pm 1.7}\) & \(54.82_{\pm 1.7}\) & \(29.40_{\pm 1.7}\) & \(30.73_{\pm 1.4}\) & \(35.69_{\pm 0.5}\) & \(55.60_{\pm 1.6}\) \\ QD-DETR (**Ours**) & V & \(\textbf{62.40}_{\pm_{1.1}}\) & \(\textbf{44.98}_{\pm_{0.8}}\) & \(\textbf{62.52}_{\pm_{0.6}}\) & \(\textbf{39.88}_{\pm_{0.7}}\) & \(\textbf{39.86}_{\pm_{0.6}}\) & \(\textbf{38.94}_{\pm_{0.4}}\) & \(\textbf{62.40}_{\pm_{1.4}}\) \\ \hline UMT [36] & V+A & 56.23 & 41.18 & 53.38 & 37.01 & 36.12 & 38.18 & 59.99 \\ QD-DETR (**Ours**) & V+A & \(\textbf{63.06}_{\pm_{1.0}}\) & \(\textbf{45.10}_{\pm_{0.7}}\) & \(\textbf{63.04}_{\pm_{0.9}}\) & \(\textbf{40.10}_{\pm_{1.0}}\) & \(\textbf{40.19}_{\pm_{0.6}}\) & \(\textbf{39.04}_{\pm_{0.3}}\) & \(\textbf{62.87}_{\pm_{0.6}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison on QVHighlights _test_ split. V and A in the Src column denote video and audio, respectively, representing the modalities of the source data. Our experiments are averaged over five runs and ‘\(\pm\)’ denotes the standard deviation. \begin{table} \begin{tabular}{l|c|c c c c c c c c c|c} \hline \hline Method & Src & VT & VU & GA & MS & PK & PR & FM & BK & BT & DS & Avg. \\ \hline SLSTM [66] & V & 41.1 & 46.2 & 46.3 & 47.7 & 44.8 & 46.1 & 45.2 & 40.6 & 47.1 & 45.5 & 45.1 \\ SG [38] & V & 42.3 & 47.2 & 47.5 & 48.9 & 45.6 & 47.3 & 46.4 & 41.7 & 48.3 & 46.6 & 46.2 \\ LIM-S [58] & V & 55.9 & 42.9 & 61.2 & 54.0 & 60.3 & 47.5 & 43.2 & 66.3 & 69.1 & 62.6 & 56.3 \\ Trailer [54] & V & 61.3 & 54.6 & 65.7 & 60.8 & 59.1 & 70.1 & 58.2 & 64.7 & 65.6 & 68.1 & 62.8 \\ SL-Module [59] & V & 86.5 & 68.7 & 74.9 & **86.2** & 79.0 & 63.2 & 58.9 & 72.6 & 78.9 & 64.0 & 73.3 \\ QD-DETR (**Ours**) & V & **88.2** & **87.4** & **85.6** & 85.0 & **85.8** & **86.9** & **76.4** & **91.3** & **89.2** & **73.7** & **85.0** \\ \hline MINI-Net [22] & V+A & 80.6 & 68.3 & 78.2 & 81.8 & 78.1 & 65.8 & 57.8 & 75.0 & 80.2 & 65.5 & 73.2 \\ TCG [62] & V+A & 85.0 & 71.4 & 81.9 & 78.6 & 80.2 & 75.5 & 71.6 & 77.3 & 78.6 & 68.1 & 76.8 \\ Joint-VA [3] & V+A & 83.7 & 57.3 & 78.5 & 86.1 & 80.1 & 69.2 & 70.0 & 73.0 & **97.4** & 67.5 & 76.3 \\ UMT [36] & V+A & 87.5 & 81.5 & 88.2 & 78.8 & 81.4 & 87.0 & 76.0 & 86.9 & 84.4 & **79.6** & 83.1 \\ \hline QD-DETR (**Ours**) & V+A & **87.6** & **91.7** & **90.2** & **88.3** & **84.1** & **88.3** & **78.7** & **91.2** & 87.8 & 77.7 & **86.6** \\ \hline \hline \end{tabular} \end{table} Table 2: Highlight detection performance comparison on TVsum dataset. 
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Method & feat & [email protected] & [email protected] & Method & feat & [email protected] & [email protected] \\ \hline SAP & VGG & 27.42 & 13.36 & CTRL & C3D & 23.63 & 8.89 \\ TripNet & VGG & 36.61 & 14.50 & ACL & C3D & 30.48 & 12.20 \\ SM-RL & VGG & 24.36 & 11.17 & RWM-RL & C3D & 36.70 & - \\ MAN & VGG & 41.24 & 20.54 & MAN & C3D & 46.53 & 22.72 \\ 2D-TAN & VGG & 40.94 & 22.85 & DEBUG & C3D & 37.39 & 17.69 \\ FVMR & VGG & 42.36 & 24.14 & VSLNet & C3D & 47.31 & 30.19 \\ UMT\(\dagger\) & VGG & 48.31 & 29.25 & **Ours** & C3D & **50.67** & **31.02** \\ \hline **Ours** & VGG & 52.77 & 31.13 & M-DETR & SF+C & 53.63 & 31.37 \\ **Ours\(\dagger\)** & VGG & **55.51** & **34.17** & **Ours** & **SF+C** & **57.31** & **32.55** \\ \hline \hline \end{tabular} \end{table} Table 3: Charades dataset. \(\dagger\) denotes the method using the video and audio as the source. SF+C stands for Slowfast and CLIP features. To explain, whereas (e) only boosts the MR performances since it only affects the transformer decoder, (b), (c), and (d) are especially beneficial for both MR/HD tasks since they are focused on query-dependent video representations; (b) ensures the contribution of the text query in the video representation, (c) fully exploits the contexts of the text query, and (d) provides an input-adaptive saliency predictor instead of an MLP. Moreover, while our components are verified to be complementary to each other, DAM’s effectiveness is especially dependent on the usage of CATE (compare between {(a, e)} and {(b, f), (i, j)}). We claim that this is because DAM exploits the position information of the input tokens to capture the corresponding moments. However, without CATE, input tokens are a mixture of multi-modal tokens, thereby providing confusing position information. To provide in-depth examinations of each component, we inspect the difference between the positive and the negative saliency scores in Fig. 4. Since the role of the text query is trivial in our baseline, each distribution lies largely on top of the other. Then, as we add CATE and negative pair learning, we observe a consistent decrease in overlapped areas and a larger gap between the average saliency scores of the positive and negative histograms. \begin{table} \begin{tabular}{c|c|c|c|c c c c c c c} \hline \hline & \multirow{2}{*}{CATE} & \multirow{2}{*}{Neg. Pair} & \multirow{2}{*}{Saliency Token} & \multirow{2}{*}{DAM} & \multicolumn{4}{c}{MR} & \multicolumn{4}{c}{HD} \\ \cline{5-12} \cline{6-12} & & & & \multicolumn{2}{c}{R1} & \multicolumn{2}{c}{mAP} & \multicolumn{2}{c}{\(>\)= Very Good} \\ \cline{5-12} \cline{6-12} & & & & & @0.5 & @0.7 & @0.5 & @0.75 & Avg.
& mAP & HIT@1 \\ \hline (a) & & & & & 52.89 & 33.02 & 54.82 & 29.40 & 30.73 & 35.69 & 55.60 \\ \hline (b) & ✓ & & & & 56.16 & 38.71 & 56.48 & 33.42 & 34.07 & 37.14 & 58.34 \\ (c) & & ✓ & & & 58.69 & 39.83 & 58.39 & 34.84 & 35.40 & 39.02 & 62.81 \\ (d) & & & ✓ & & 55.48 & 37.00 & 55.81 & 26.75 & 32.84 & 37.48 & 58.59 \\ (e) & & & & ✓ & 53.19 & 35.91 & 55.58 & 32.55 & 33.33 & 35.68 & 55.56 \\ \hline (f) & ✓ & & & & ✓ & 57.72 & 42.35 & 59.10 & 38.16 & 38.03 & 36.56 & 57.44 \\ (g) & ✓ & ✓ & & & & 59.57 & 42.12 & 59.19 & 36.63 & 36.76 & 38.64 & 61.62 \\ (h) & & ✓ & ✓ & & & 60.00 & 40.97 & 59.21 & 35.41 & 35.89 & 39.06 & 62.88 \\ (i) & ✓ & ✓ & ✓ & & & 60.32 & 42.39 & 59.47 & 36.79 & 36.93 & **39.21** & 62.76 \\ \hline (j) & ✓ & ✓ & ✓ & ✓ & **62.68** & **46.66** & **62.23** & **41.82** & **41.22** & 39.13 & **63.03** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on QVHighlights _val_ split. CATE and DAM stands for cross-attentive transformer encoder and using dynamic anchor moments as the decoder query, respectively. All the quantities are averaged over 5 runs. Figure 4: Ablation study in terms of saliency scores. We plot the histograms of the average value of saliency scores in each video when the positive and negative text queries are given. For positive scores, we only account the scores within GT moments. The average value of each histogram are visualized by the dotted line. The decrease in the overlap between histograms and the increase in the gap between average values confirms the gradual improvements of significance of the query in extracting video representation. \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline & \multicolumn{4}{c}{MR} & \multicolumn{4}{c}{HD} \\ \cline{2-9} T2V & \multicolumn{2}{c}{R1} & \multicolumn{2}{c}{mAP} & \multicolumn{2}{c}{\(>\)= Very Good} \\ \cline{2-9} & @0.5 & @0.7 & @0.5 & @0.75 & Avg. & mAP & HIT@1 \\ \hline Moment-DETR (SATE 2) & 52.89\(\pm_{2.3}\) & 33.02\(\pm_{1.7}\) & 54.82\(\pm_{1.7}\) & 29.40\(\pm_{1.7}\) & 30.73\(\pm_{1.4}\) & 35.69\(\pm_{0.5}\) & 55.60\(\pm_{1.6}\) \\ Moment-DETR (SATE 4) & 53.60\(\pm_{1.2}\) & 35.81\(\pm_{0.9}\) & 54.55\(\pm_{0.8}\) & 30.64\(\pm_{0.7}\) & 31.74\(\pm_{0.4}\) & 35.96\(\pm_{0.2}\) & 56.56\(\pm_{0.9}\) \\ Moment-DETR (CATE 4) & 55.10\(\pm_{0.7}\) & 37.02\(\pm_{0.9}\) & 56.21\(\pm_{0.3}\) & 32.00\(\pm_{0.9}\) & 33.19\(\pm_{0.6}\) & 36.43\(\pm_{0.3}\) & 56.98\(\pm_{0.6}\) \\ Moment-DETR (CATE 4) & 56.16\(\pm_{1.2}\) & 38.71\(\pm_{1.1}\) & 56.48\(\pm_{0.8}\) & 33.42\(\pm_{0.7}\) & 34.07\(\pm_{0.6}\) & 37.14\(\pm_{0.4}\) & 58.34\(\pm_{0.4}\) \\ \hline QD-DETR (SATE 4)\(\dagger\) & 60.48\(\pm_{0.7}\) & 45.21\(\pm_{1.0}\) & 60.84\(\pm_{0.5}\) & 40.45\(\pm_{0.7}\) & 40.12\(\pm_{0.6}\) & 38.66\(\pm_{0.2}\) & 61.29\(\pm_{1.0}\) \\ QD-DETR (CATE 4)\(\dagger\) & **62.68\(\pm_{1.1}\)** & **46.66\(\pm_{0.6}\)** & **62.23\(\pm_{1.0}\)** & **41.82\(\pm_{0.9}\)** & **41.22\(\pm_{0.4}\)** & **39.13\(\pm_{0.3}\)** & **63.03\(\pm_{0.5}\)** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study on cross-attention transformer encoder. We compare ours against the deepened transformer encoder with self-attention layers to validate that the performance gain does not come from additional parameters. SATE and CATE each indicate the transformer encoder only with self-attention layers and our transformer encoder. The numbers in the parenthesis denote the number of layers. 
For the experiment with \(\dagger\), we only use the query features as the condition in the encoder and only the video representations are processed by the decoder. Also, we believe that the widely-distributed histogram of saliency scores for the ‘CATE+Neg.pair’ setting is due to using an identical criterion for saliency prediction for diverse video-query representations. By employing an input-adaptive saliency predictor, we notice that the scores for positive queries are in an almost optimal shape. In addition, some might ask whether CATE benefits the training simply because of additional encoder layers. To answer this, we conduct another ablation study in Tab. 5. Briefly, since our transformer encoder utilizes 2 cross-attention layers and 2 self-attention layers, we conduct comparisons against a transformer encoder composed of 4 self-attention layers (SATE). First, we compare CATE and SATE on Moment-DETR; by comparing the results in the \(2^{\text{nd}}\) and \(3^{\text{rd}}\) rows, we find that CATE is much more beneficial than SATE even with the same number of layers. Furthermore, the last two rows show comparisons within the QD-DETR architecture, which exhibit the same tendency. These results clearly demonstrate that the improvements from CATE come mainly from emphasizing the role of the text query rather than from additional layers. ### Qualitative Results In this subsection, we study how sensitively the query-dependent video representation reacts to changes in the context of the text query. In Fig. 5, the measured saliency scores according to the video-query relevance are visualized. We found that the more relevant the query is to the video clips, the higher the saliency scores retained for the query. For instance, whereas the negative query that is totally irrelevant to the video instance has the lowest scores, the scores for the semi-positive query reside between the positive and the negative ones. Also, we find that QD-DETR sometimes provides a more precise moment prediction than the given ground-truth moment, as can be seen with the temporal box bounded by the dotted lines. We believe that the tendency of slightly higher saliency scores at non-relevant clips for a positive query is due to information mixing in the self-attention layers. ## 5 Limitation and Conclusion Limitation. As elaborated in the paper, we aim to highlight the role of the text query in retrieving the relevant moments and estimating their accordance level with the given text query. Likewise, the proposed components expect a given query to maintain a meaningful context. If noisy text queries are provided instead, i.e., mismatched or irrelevant ground-truth texts, the training may not be as effective as reported. Conclusion. Although the advent of the transformer architecture has been powerful for MR/HD, investigation of the role of the text query has been lacking in such architectures. Therefore, we focused on studying the role of the text query. As we found that the textual information is not fully exploited in expressing the video representations, we designed the cross-attentive transformer encoder and proposed a negative-pair training scheme. The cross-attentive encoder ensures the query’s contribution while extracting video representations, and negative-pair training enforces the model to learn the relationship between query and video by preventing it from solving the task without considering the query.
Finally, to preserve the diversity of query-dependent video representation, we defined the saliency token to be an input-adaptive saliency predictor. Extensive experiments validated the strength of QD-DETR with superior performances. **Acknowledgements.** This work was supported in part by MSIT/IITP (No. 2022-0-00680, 2019-0-00421, 2020-0-01821, 2021-0-02068), and MSIT&KNPA/KIPoT (Police Lab 2.0, No. 210121M06). Figure 5: Visualization of results predicted by QD-DETR. Predicted and ground-truth moments are bounded by the lines. Blue, green, and red lines indicate the saliency scores for positive, semi-positive, and negative queries. The positive saliency scores are consistently higher than the others, while the scores for the semi-positive query are higher than the ones for the negative. ## 6 Training Details In this section, we elaborate on the implementation details and hyperparameters used for the experiments in the main manuscript. To unify configurations across all experiments, our encoder is composed of 4 transformer layers (2 cross-attention layers and 2 self-attention layers), whereas there are only 2 layers in the decoder (for the HD-only dataset, i.e., TVSum, we only use encoding layers). We set the hidden dimension of the transformers to 256, and use the Adam optimizer with a weight decay of 1e-4. Besides, we set the temperature scaling parameter \(\tau\) for the contrastive loss to 0.5 for all experiments. Loss balancing parameters are \(\lambda_{\text{margin}}=1\), \(\lambda_{\text{cont}}=1\), \(\lambda_{L1}=10\), \(\lambda_{\text{gIoU}}=1\), \(\lambda_{\text{CE}}=4\), and \(\lambda_{\text{neg}}=1\), unless otherwise mentioned. Additionally, we use the PANN [27] model trained on AudioSet [17] to extract audio features1 for experiments with the audio modality. Footnote 1: [https://github.com/TencentARC/UMT](https://github.com/TencentARC/UMT) Other configurations are described as follows: **QVHighlights.** We use video features extracted from both the pretrained SlowFast [14] (SF) and CLIP encoder [43], and text embeddings from CLIP, following Moment-DETR. We train QD-DETR for 200 epochs with a batch size of 32 and a learning rate of 1e-4. **Charades-STA.** We utilize the official VGG [46] features with GloVe [42] text embeddings. To compare with additional baselines, we also test our model on pretrained C3D [51], SlowFast, and CLIP video features with CLIP text embeddings. Specifically, we utilize pre-extracted features provided by other baselines’ repositories: UMT2, VSLNet3 and Moment-DETR4. We train ours for 100 epochs with a batch size of 8 and a learning rate of 1e-4. Footnote 3: [https://github.com/IsaacChangau/VSLNet](https://github.com/IsaacChangau/VSLNet) Footnote 4: [https://github.com/jaylecin/moment_detr](https://github.com/jaylecin/moment_detr) **TVSum.** I3D [7] features pretrained on Kinetics-400 [25] are utilized as visual features, and CLIP features are used for the text embedding. Following the most recent work [36], we train our model for 2000 epochs with a learning rate of 1e-3. The batch size is set to 4. ## 7 Further study on model performance on varying lengths of the query. As discussed in the limitation, the performance of QD-DETR may depend on the quality of the provided ground-truth text descriptions. Yet, this does not imply that QD-DETR is vulnerable to commonly used meaningless words in text descriptions.
As we think that queries with longer lengths may have a higher chance of including noisy texts, we divide the validation set into 3 groups of short-, medium-, and long-length queries, and report the query-length-wise performance of QD-DETR in Tab. 6. As shown, QD-DETR works well regardless of the query length, showing [36.7, 28.0, 26.3]% and [7.3, 11.8, 11.1]% improvements in mAP for MR and HD, respectively, with [Short, Medium, Long] queries. This study implies that while irrelevant (wrong) text descriptions for the video contexts can degrade the effectiveness of QD-DETR, QD-DETR is robust to meaningless words that are commonly present in text queries.
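The query-length breakdown described above could be reproduced along the following lines; splitting into word-count terciles and the per-sample field names are assumptions of this sketch, not the authors' exact protocol.

```python
import numpy as np


def split_by_query_length(samples, bounds=None):
    """Partition validation samples into short/medium/long query-length groups.
    Each sample is assumed to be a dict with a 'query' string and a per-query 'mAP' value."""
    lengths = np.array([len(s["query"].split()) for s in samples])
    if bounds is None:
        bounds = np.quantile(lengths, [1 / 3, 2 / 3])  # tercile boundaries on word counts
    groups = {"short": [], "medium": [], "long": []}
    for s, n in zip(samples, lengths):
        key = "short" if n <= bounds[0] else ("medium" if n <= bounds[1] else "long")
        groups[key].append(s)
    return groups


def mean_map(group):
    return float(np.mean([s["mAP"] for s in group])) if group else float("nan")


# Toy usage with made-up per-query results.
toy = [
    {"query": "a man cooks dinner", "mAP": 0.42},
    {"query": "people dance at a crowded wedding reception hall", "mAP": 0.35},
    {"query": "dog", "mAP": 0.50},
]
buckets = split_by_query_length(toy)
print({k: mean_map(v) for k, v in buckets.items()})
```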
2307.10652
Exploring the Landscape of Natural Language Processing Research
As an efficient approach to understand, generate, and process natural language texts, research in natural language processing (NLP) has exhibited a rapid spread and wide adoption in recent years. Given the increasing research work in this area, several NLP-related approaches have been surveyed in the research community. However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research remains absent. Contributing to closing this gap, we have systematically classified and analyzed research papers in the ACL Anthology. As a result, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work.
Tim Schopf, Karim Arabi, Florian Matthes
2023-07-20T07:33:30Z
http://arxiv.org/abs/2307.10652v5
# Exploring the Landscape of Natural Language Processing Research ###### Abstract As an efficient approach to understand, generate, and process natural language texts, research in natural language processing (NLP) has exhibited a rapid spread and wide adoption in recent years. Given the increasing research work in this area, several NLP-related approaches have been surveyed in the research community. However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research remains absent. Contributing to closing this gap, we have systematically classified and analyzed research papers in the ACL Anthology. As a result, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work. 1 Footnote 1: Code available: [https://github.com/sebischaft/Exploring-NLP-Research](https://github.com/sebischaft/Exploring-NLP-Research) ## 1 Introduction Natural language is a fundamental aspect of human communication and inherent to human utterances and information sharing. Accordingly, most human-generated digital data are composed in natural language. Given the ever-increasing amount and importance of digital data, it is not surprising that computational linguists have started developing ideas on enabling machines to understand, generate, and process natural language since the 1950s Hutchins (1999). More recently, the introduction of the transformer model Vaswani et al. (2017) and pretrained language models Radford and Narasimhan (2018); Devlin et al. (2019) have sparked increasing interest in natural language processing (NLP). Submissions on various NLP topics and applications are being published in a growing number of journals and conferences, such as TACL, ACL, and EMNLP, as well as in several smaller workshops that focus on specific areas. Thereby, the ACL Anthology2 as a repository for publications from many major NLP journals, conferences, and workshops emerges as an important tool for researchers. As of January 2023, it provides access to over 80,000 articles published since 1952. Figure 1 shows the distribution of publications in the ACL Anthology over the 50-year observation period. Figure 1: Distribution of the number of papers per year in the ACL Anthology from 1952 to 2022. Footnote 2: [https://aclanthology.org](https://aclanthology.org) Accompanying the increase in publications, there has also been a growth in the number of different fields of study (FoS) that have been researched within the NLP domain. FoS are academic disciplines and concepts that usually consist of (but are not limited to) tasks or techniques Shen et al. (2018). Given the rapid developments in NLP research, obtaining an overview of the domain and maintaining it is difficult. As such, collecting insights, consolidating existing results, and presenting a structured overview of the field is important. However, to the best of our knowledge, no studies exist yet that offer an overview of the entire landscape of NLP research. To bridge this gap, we performed a comprehensive study to analyze all research performed in this area by classifying established topics, identifying trends, and outlining areas for future research. Our three main contributions are as follows: * We provide an extensive taxonomy of FoS in NLP research shown in Figure 2.
* We systematically classify research papers included in the ACL Anthology and report findings on the development of FoS in NLP. * We identify trends in NLP research and highlight directions for future work. Our study highlights the development and current state of NLP research. Although we cannot fully cover all relevant work on this topic, we aim to provide a representative overview that can serve as a starting point for both NLP scholars and practitioners. In addition, our analysis can assist the research community in bridging existing gaps and exploring various FoS in NLP. ## 2 Related Work Related literature that considers various different FoS in NLP is relatively scarce. Most studies focus only on a particular FoS or sub-field of NLP research. For example, related studies focus on knowledge graphs in NLP Schneider et al. (2022), explainability in NLP Danilevsky et al. (2020), ethics and biases in NLP Suster et al. (2017); Blodgett et al. (2020), question answering Liu et al. (2022), or knowledge representations in language models Safavi and Koutra (2021). Studies that analyze NLP research based on the entire ACL Anthology focus on citation analyses Mohammad (2020); Rungta et al. (2022) or visualizations of venues, authors, and n-grams and keywords extracted from publications Mohammad (2020); Parmar et al. (2020). Anderson et al. (2012) apply topic modeling to identify different epochs in the ACL's history. Various books categorize different FoS in NLP, focusing on detailed explanations for each of these categories Allen (1995); Manning and Schutze (1999); Jurafsky and Martin (2009); Eisenstein (2019); Tunstall et al. (2022). ## 3 Research Questions The goal of our study is an extensive analysis of research performed in NLP by classifying established topics, identifying trends, and outlining areas for future research. These objectives are reflected in our research questions (RQs) presented as follows: Rq1:_What are the different FoS investigated in NLP research?_ Although most FoS in NLP are well-known and defined, there currently exists no commonly used taxonomy or categorization scheme that attempts to collect and structure these FoS in a consistent and understandable format. Therefore, getting an overview of the entire field of NLP research is difficult, especially for students and early career researchers. While there are lists of NLP topics in conferences and textbooks, they tend to vary considerably and are often either too broad or too specialized. To classify and analyze developments in NLP, we need a taxonomy that encompasses a wide range of different FoS in NLP. Although this taxonomy may not include all possible NLP concepts, it needs to cover a wide range of the most popular FoS, whereby missing FoS may be considered as subtopics of the included FoS. This taxonomy serves as an overarching classification scheme in which NLP publications can be classified according to at least one of the included FoS, even if they do not directly address one of the FoS, but only subtopics thereof. Figure 2: Taxonomy of fields of study in NLP. Appendix A.1 includes more detailed descriptions of the fields of study. Rq2:_How to classify research publications according to the identified FoS in NLP?_ Classifying publications according to the identified FoS in NLP is very tedious and time-consuming. Especially with a large number of FoS and publications, a manual approach is very costly.
Therefore, we need an approach that can automatically classify publications according to the different FoS in NLP. Rq3:_What are the characteristics and developments over time of the research literature in NLP?_ To understand past developments in NLP research, we examine the evolution of popular FoS over time. This will allow a better understanding of current developments and help contextualize them. Rq4:_What are the current trends and directions of future work in NLP research?_ Analyzing the classified research publications allows us to identify current research trends and gaps and predict possible future developments in NLP research. ## 4 Classification & Analysis In this section, we report the approaches and results of the data classification and analysis. It is structured according to the formulated RQs. ### Taxonomy of FoS in NLP research (RQ1) To develop the taxonomy of FoS in NLP shown in Figure 2, we first examined the submission topics of recent years as listed on the websites of major NLP conferences such as ACL, EMNLP, COLING, or IJCNLP. In addition, we reviewed the topics of workshops included in the ACL Anthology to derive further FoS. In order to include smaller topics that are not necessarily mentioned on conference or workshop websites, we manually reviewed all papers from the recently published EMNLP 2022 Proceedings, extracted their FoS, and annotated all 828 papers accordingly. This provided us with an initial set of FoS, which we used to create the first version of the NLP taxonomy. Based on our initial taxonomy, we conducted semi-structured expert interviews with NLP researchers to evaluate and adjust the taxonomy. In the interviews, we placed particular emphasis on the evaluation of the mapping of lower-level FoS to their higher-level FoS and the correctness and completeness of FoS in the NLP domain. In total, we conducted more than 20 one-on-one interviews with different domain experts. After conducting the interviews, we noticed that experts demonstrated a high degree of agreement on certain aspects of evaluation, while opinions were highly divergent on other aspects. While we easily implemented changes resulting from high expert agreement, we acted as the final authority in deciding whether to implement a particular change for aspects with low expert agreement. For example, one of the aspects with the highest agreement was that certain lower-level FoS must be assigned not only to one but also to multiple higher-level FoS. Based on the interview results, we subsequently adjusted the annotations of the 828 EMNLP 2022 papers and developed the final NLP-taxonomy, as shown in Figure 2. ### Field of Study Classification (RQ2) We trained a weakly supervised classifier to classify ACL Anthology papers according to the NLP taxonomy. To obtain a training dataset, we first defined keywords for each FoS included in the final taxonomy to perform a database search for relevant articles. Based on the keywords, we created search strings to query the Scopus and arXiv databases. The search string was applied to titles and author keywords, if available. While we limited the Scopus search results to the NLP domain with additional restrictive keywords such as "NLP", "natural language processing", or "computational linguistics", we limited the search in arXiv to the cs.CL domain. We subsequently merged duplicate articles to create a multi-label dataset and removed articles included in the EMNLP 2022 proceedings, as this dataset is used as test set. 
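As an illustration of this keyword-based retrieval step, a boolean search string for a single field of study might be assembled as follows. The keyword lists and the exact query syntax accepted by Scopus or the arXiv API are simplified assumptions of this sketch.

```python
# Hedged sketch: assemble a boolean search string for one field of study (FoS),
# restricted to the NLP domain as described above. Exact database syntax will differ.
NLP_SCOPE_TERMS = ["NLP", "natural language processing", "computational linguistics"]


def build_search_string(fos_keywords, scope_terms=NLP_SCOPE_TERMS):
    fos_part = " OR ".join(f'"{kw}"' for kw in fos_keywords)
    scope_part = " OR ".join(f'"{t}"' for t in scope_terms)
    # Titles/author keywords must match the FoS, and the record must fall in the NLP domain.
    return f"({fos_part}) AND ({scope_part})"


# Example for a hypothetical 'question answering' keyword list.
print(build_search_string(["question answering", "reading comprehension", "open-domain QA"]))
# For arXiv, the domain restriction would instead be the cs.CL category rather than scope keywords.
```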
Finally, we applied a fuzzy string matching heuristic and added missing classes based on the previously defined FoS keywords that appear twice or more in the article titles or abstracts. The final training dataset consists of 178,521 articles annotated on average with 3.65 different FoS. On average, each class includes 7936.50 articles, while the most frequent class is represented by 63728 articles and the least frequent class by 141 articles. We split this unevenly distributed dataset into three different random 90/10 training/validation sets and used the human-annotated EMNLP 2022 articles as the test dataset. For multi-label classification, we fine-tuned and evaluated different base models. Training and evaluation details are shown in Appendix A.2. We found that SPECTER 2.0 performed best on validation and test data, with average \(F_{1}\) scores of 96.06 and 93.21, respectively, on multiple training runs. Therefore, we selected SPECTER 2.0 as our final classification model, which we subsequently trained on the combined training, validation, and test data. Using the final model, we classified all papers included in the ACL Anthology from 1952 to 2022. To obtain our final dataset for analysis, we removed the articles that were not truly research articles, such as prefaces; articles that were not written in English; and articles where the classifier was uncertain and simply predicted every class possible. This final classified dataset includes a total of 74,279 research papers. Table 1 shows the final classification results with respect to the number of publications for each of the most popular FoS. ### Characteristics and Developments of the Research Landscape (RQ3) Considering the literature on NLP, we start our analysis with the number of studies as an indicator of research interest. The distribution of publications over the 50-year observation period is shown in Figure 1. While the first publications appeared in 1952, the number of annual publications grew slowly until 2000. Accordingly, between 2000 and 2017, the number of publications roughly quadrupled, whereas in the subsequent five years, it doubled again. We therefore observe a near-exponential growth in the number of NLP studies, indicating increasing attention from the research community. Examining Table 1 and Figure 3, the most popular FoS in the NLP literature and their recent development over time are revealed. While the majority of studies in NLP are related to machine translation or language models, the developments of both FoS are different. Machine translation is a thoroughly researched field that has been established for a long time and has experienced a modest growth rate over the last 20 years. Language models have also been researched for a long time. However, the number of publications on this topic has only experienced significant growth since 2018. Similar differences can be observed when looking at the other popular FoS. Representation learning and text classification, while generally widely researched, are partially stagnant in their growth. In contrast, dialogue systems & conversational agents and particularly low-resource NLP continue to exhibit high growth rates in the number of studies. Based on the development of the average number of studies on the remaining FoS in Figure 3, we observe a slightly positive growth overall. However, the majority of FoS are significantly less researched than the most popular FoS.
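A hedged sketch of the multi-label fine-tuning step for the field-of-study classifier described above (RQ2) is given below. The SPECTER 2.0 checkpoint identifier, the number of labels, the title-plus-abstract input format, and the training loop are assumptions rather than the authors' released code.

```python
# Hedged sketch of multi-label fine-tuning for the FoS classifier (RQ2).
# The checkpoint name, label count, input format, and loop are assumptions, not the released code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_FOS = 82  # assumed size of the FoS taxonomy
MODEL_NAME = "allenai/specter2_base"  # assumed Hugging Face identifier for SPECTER 2.0

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_FOS,
    problem_type="multi_label_classification",  # switches the loss to BCE-with-logits
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)


def encode(batch):
    # Title and abstract are concatenated as the paper-level input (an assumption of this sketch).
    texts = [t + tokenizer.sep_token + a for t, a in zip(batch["title"], batch["abstract"])]
    enc = tokenizer(texts, truncation=True, padding=True, max_length=512, return_tensors="pt")
    enc["labels"] = torch.tensor(batch["labels"], dtype=torch.float)  # multi-hot FoS vectors
    return enc


def train_step(batch):
    model.train()
    out = model(**encode(batch))  # the loss is computed internally from the multi-hot labels
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```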
We conclude that the distribution of research across FoS is extremely unbalanced and that the development of NLP research is largely shaped by advances in a few highly popular FoS. ### Research Trends and Directions for Future Work (RQ4) Figure 4 shows the growth-share matrix of FoS in NLP research inspired by Henderson (1970). We use it to examine current research trends and possible future research directions by analyzing the growth rates and total number of papers related to the various FoS in NLP between 2018 and 2022. The upper right section of the matrix consists of FoS that exhibit a high growth rate and simultaneously a large number of papers overall. Given the growing popularity of FoS in this section, we categorize them as _trending stars_. The lower right section contains FoS that are very popular but exhibit a low growth rate. Usually, these are FoS that are essential for NLP research but already relatively mature. Hence, we categorize them as _foundational FoS_. The upper left section of the matrix contains FoS that exhibit a high growth rate but only very few papers overall. Since the progress of these FoS is rather promising, but the small number of overall papers renders it difficult to predict their further developments, we categorize them as _rising question marks_. The FoS in the lower left of the matrix are categorized as _niche FoS_ owing to their low total number of papers and their low growth rates. Figure 4 shows that language models are currently receiving the most attention, which is also consistent with the observations from Table 1 and Figure 3. Based on the latest developments in this area, this trend is likely to continue and accelerate in the near future. Text classification, machine translation, and representation learning rank among the most popular FoS but only show marginal growth. In the long term, they may be replaced by faster-growing fields as the most popular FoS. In general, FoS related to syntactic text processing exhibit negligible growth and low popularity overall. Conversely, FoS concerned with responsible & trustworthy NLP, such as green & sustainable NLP, low-resource NLP, and ethical NLP tend to exhibit a high growth rate and also high popularity overall. This trend can also be observed in the case of structured data in NLP, visual data in NLP, and speech & audio in NLP, all of which are concerned with multimodality. In addition, natural language interfaces involving dialogue systems & conversational agents, and question answering are becoming increasingly important in the research community. We conclude that in addition to language models, responsible & trustworthy NLP, multimodality, and natural language interfaces are likely to characterize the NLP research landscape in the near future. Further notable developments can be observed in the area of reasoning, specifically with respect to knowledge graph reasoning and numerical reasoning and in various FoS related to text generation. Although these FoS are currently still relatively small, they apparently attract more and more interest from the research community and show a clear positive tendency toward growth. Figure 5 shows the innovation life cycle of the most popular FoS in NLP adapted from the _diffusion of innovations_ theory Rogers (1962) and inspired by Huber (2005). The central assumption Figure 3: Distribution of number of papers by most popular FoS from 2002 to 2022. 
of the innovation life cycle theory is that for each innovation (or in this case FoS), the number of published research per year is normally distributed over time, while the total number of published research reaches saturation according to a sigmoid curve. Appendix A.3 shows how the positions of FoS on the innovation life cycle curve are determined. From Figure 5, we observe that FoS related to syntactic text processing are already relatively mature and approaching the end of the innovation life cycle. Particularly, syntactic parsing is getting near the end of its life cycle, with only late modifications being researched. While Table 1 shows that machine translation, representation learning, and text classification are very popular overall, Figure 5 reveals that they have passed the inflection point of the innovation life cycle curve and their development is currently slowing down. They are adopted by most researchers but show stagnant or negative growth, as also indicated in Figure 4. However, most FoS have not yet reached the inflection point and are still experiencing increasing growth rates, while research on these FoS is accelerating. Especially FoS related to responsible & trustworthy NLP, multimodality, and natural language interfaces are just beginning their innovation life cycle, suggesting that research in these areas will likely accelerate in the following years. This is also in line with the observations from Figure 4, where most of the FoS related to these areas are categorized as _trending stars_. Further, we observe that language models have passed the first two stages of innovation and are currently in their prime unfolding phase. They are adopted by a large number of researchers and research on them is still accelerating. Comparing this to Figure 4, where language models are among the most trending FoS, we conclude that this trend is likely to continue in the near future and is unlikely to slow down anytime soon. ## 5 Discussion The observations of our comprehensive study reveal several insights that we can situate in related work. Since the first publications in 1952, researchers have paid increasing attention to the field of NLP, particularly after the introduction of Word2Vec (Mikolov et al., 2013) and accelerated by BERT (Devlin et al., 2019). This observed growth in research interest is in line with the study of Mohammad (2020). Historically, machine translation was one of the first research fields in NLP (Jones, 1994), which continues to be popular and steadily growing nowadays. However, recent advances in language model training have sparked increasing research efforts in this field, as shown in Figure 3. Since scaling up language models significantly enhances performance on downstream tasks (Brown et al., 2020; Kaplan et al., 2020; Wei et al., 2022a; Hoffmann et al., 2022), researchers continue to introduce increasingly larger language models (Han et al., 2021). However, training and using these large language models involves significant challenges, including computational costs (Narayanan et al., 2021), environmental issues (Strubell et al., 2019), and ethical considerations (Perez et al., 2022). Figure 4: Growth-share matrix of FoS in NLP. The growth rates and total number of works for each FoS are calculated from the start of 2018 to the end of 2022. To obtain a more uniform distribution of the data, we apply the Yeo-Johnson transformation (Yeo and Johnson, 2000).
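The quadrant assignment behind Figure 4 can be sketched as follows; the use of SciPy's Yeo-Johnson implementation and the median-based quadrant boundaries are assumptions of this illustration.

```python
# Hedged sketch of the growth-share categorization used for Figure 4.
# Median-based quadrant boundaries and the SciPy call are assumptions of this illustration.
import numpy as np
from scipy import stats


def categorize_fos(fos_names, growth_rates, paper_counts):
    """growth_rates and paper_counts cover 2018-2022, one entry per field of study."""
    # The Yeo-Johnson transformation makes the skewed distributions more uniform.
    counts_t, _ = stats.yeojohnson(np.asarray(paper_counts, dtype=float))
    growth_t, _ = stats.yeojohnson(np.asarray(growth_rates, dtype=float))
    count_mid, growth_mid = np.median(counts_t), np.median(growth_t)

    categories = {}
    for name, c, g in zip(fos_names, counts_t, growth_t):
        if g >= growth_mid and c >= count_mid:
            categories[name] = "trending star"
        elif g < growth_mid and c >= count_mid:
            categories[name] = "foundational FoS"
        elif g >= growth_mid and c < count_mid:
            categories[name] = "rising question mark"
        else:
            categories[name] = "niche FoS"
    return categories


# Toy usage with made-up numbers.
print(categorize_fos(
    ["language models", "machine translation", "numerical reasoning", "syntactic parsing"],
    growth_rates=[2.5, 0.1, 1.8, -0.2],
    paper_counts=[9000, 10000, 300, 1200],
))
```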
As a result, a recent increase in research efforts has been noted to render language models and NLP more responsible & trustworthy in general, as shown in Figure 4 and Figure 5. Additionally, recent advances aim to train large-scale multimodal language models capable of understanding and generating natural language text and performing all types of downstream tasks while interacting with humans through natural language input prompts (OpenAI, 2023). From our observations in Figure 4 and Figure 5, we again find support for this trend in NLP literature for multimodality, text generation, and natural language interfaces. Although language models have achieved remarkable success on various NLP tasks, their inability to reason is often seen as a limitation that cannot be overcome by increasing the model size alone (Rae et al., 2022; Wei et al., 2022b; Wang et al., 2023). Although reasoning capabilities are a crucial prerequisite for the reliability of language models, this field is still relatively less researched and receives negligible attention. While Figure 4 exhibits high growth rates for knowledge graph reasoning and numerical reasoning in particular, research related to reasoning is still rather under-represented compared to the more popular FoS.
Figure 5: Innovation life cycle of the most popular FoS in NLP. FoS on the left side of the curve are at the beginning of their life cycle. They have just been invented or are in an early phase, where innovation on FoS accelerates by a rising number of studies. After passing the inflection point, the FoS move towards the end of their innovation life cycle, where research on FoS is retained or declines and only late modifications are added to the FoS.
## 6 Conclusion Recent years have witnessed an increasing prominence of NLP research. To summarize recent developments and provide an overview of this research area, we defined a taxonomy of FoS in NLP and analyzed recent research developments. Our findings show that a large number of FoS have been studied, including trending fields such as multimodality, responsible & trustworthy NLP, and natural language interfaces. While recent developments are largely a result of recent advances in language models, we have noted a lack of research pertaining to teaching these language models to reason and thereby afford more reliable predictions. ## 7 Limitations Constructing the taxonomy highly depends on the personal decisions of the authors, which can bias the final result. The taxonomy may not cover all possible FoS and offers potential for discussions, as domain experts have inherently different opinions. As a countermeasure, we aligned the opinions of multiple domain experts and designed the taxonomy at a higher level, allowing non-included FoS to be considered as possible subtopics of existing ones. For this study, we limited our analysis to papers published in the ACL Anthology, which typically feature research presented at major international conferences and are written in English. However, research communities that publish their work in regional venues exist, often in languages other than English. In addition, NLP research is also presented at other prominent global conferences such as AAAI, NeurIPS, ICLR, or ICML. Therefore, the findings we report in this study pertain specifically to NLP research presented at major international conferences and journals in English. Furthermore, the accuracy of the classification results poses another threat to the validity of our study.
Data extraction bias and classification model errors may negatively affect the results. To mitigate this risk, the authors regularly discussed the used classification schemes and conducted a thorough evaluation of the performance of the classification model. ## Acknowledgments We would like to thank Phillip Schneider, Stephen Meisenbacher, Mahdi Dhaini, Juraj Vladika, Oliver Wardas, Anum Afzal, Wessel Poelman, and Alexander Blatzheim of sebis for helpful discussions and valuable feedback.
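For readers who wish to reproduce the trend analysis, the following is a minimal, self-contained sketch (not the authors' released code) of the growth-share categorization described in Section 4: yearly paper counts per FoS are aggregated over 2018-2022, a growth rate is computed, both axes are passed through the Yeo-Johnson transformation, and each FoS is assigned to one of the four quadrants. The toy counts, the exact growth-rate formula, and the median-based quadrant cut-offs are assumptions made purely for illustration.

```python
# Minimal sketch of the growth-share categorization (toy data, illustrative only).
import numpy as np
from scipy.stats import yeojohnson

counts = {  # FoS -> {year: number of papers}; values below are invented
    "language models":         {2018: 40, 2019: 90, 2020: 180, 2021: 300, 2022: 450},
    "machine translation":     {2018: 200, 2019: 210, 2020: 215, 2021: 220, 2022: 225},
    "green & sustainable nlp": {2018: 2, 2019: 5, 2020: 12, 2021: 25, 2022: 45},
    "syntactic parsing":       {2018: 80, 2019: 75, 2020: 70, 2021: 60, 2022: 55},
}

fos = list(counts)
total = np.array([sum(c.values()) for c in counts.values()], dtype=float)
# Growth rate over the 2018-2022 window (relative change of yearly output; an assumption).
growth = np.array([(c[2022] - c[2018]) / c[2018] for c in counts.values()])

# Yeo-Johnson transform to obtain a more uniform distribution of both axes.
total_t, _ = yeojohnson(total)
growth_t, _ = yeojohnson(growth)

# Quadrants split at the medians (the paper does not state the exact cut-off).
labels = {(True, True): "trending star", (True, False): "rising question mark",
          (False, True): "foundational FoS", (False, False): "niche FoS"}
for name, g, t in zip(fos, growth_t, total_t):
    quadrant = labels[(bool(g >= np.median(growth_t)), bool(t >= np.median(total_t)))]
    print(f"{name:28s} -> {quadrant}")
```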
2305.01126
Spectral gap bounds on H-type groups
In this note we provide bounds on the spectral gap for the Dirichlet sub-Laplacians on $H$-type groups. We use probabilistic techniques and in particular small deviations of the corresponding hypoelliptic Brownian motion.
Marco Carfagnini, Maria Gordina
2023-05-01T23:56:34Z
http://arxiv.org/abs/2305.01126v2
# Spectral gap bounds on H-type groups ###### Abstract. In this note we provide bounds on the spectral gap for the Dirichlet sub-Laplacians on \(H\)-type groups. We use probabilistic techniques and in particular small deviations of the corresponding hypoelliptic Brownian motion. Key words and phrases:Sub-Laplacian; H-type group; spectral gap; small ball problem 2020 Mathematics Subject Classification: Primary 35P05, 35H10; Secondary 35K08 ###### Contents * 1 Introduction and main result * 2 Geometric background * 2.1 Carnot groups * 2.2 \(H\)-type groups * 3 Proof of the main result * 3.1 A probabilistic lemma * 3.2 Spectral bounds ## 1. Introduction and main result Let \((E,d)\) be a metric space and \(\{X_{t}\}_{0\leq t\leq T}\) be an \(E\)-valued stochastic process with continuous paths such that \(X_{0}=x_{0}\) a.s. for some \(x_{0}\in E\) and \(T>0\) is fixed. Denote by \(W_{x_{0}}(E)\) the space of \(E\)-valued continuous functions on \([0,T]\) starting at \(x_{0}\). The process \(X_{t}\) is said to satisfy a _small deviation principle_ with rates \(\alpha\) and \(\beta\) if there exists a constant \(c>0\) such that \[\lim_{\varepsilon\to 0}-\varepsilon^{\alpha}|\log\varepsilon|^{\beta}\log \mathbb{P}\left(\max_{0\leq t\leq T}d(X_{t},x_{0})<\varepsilon\right)=c. \tag{1.1}\] The values of \(\alpha\), \(\beta\), and \(c\) depend on the process \(X_{t}\) and on the chosen norm on \(W_{x_{0}}(E)\). Small deviations have many applications including metric entropy estimates and limit laws such as Chung's laws of the iterated logarithm. We are interested in such a problem and its analytic consequences when \(E\) is a Heisenberg-type (H-type) group. Our main result, stated in full as Theorem 3.2 below, provides the bounds \[\lambda_{1}^{(m)}\leqslant\lambda_{1}\leqslant c\left(\lambda_{1}^{(m)},\lambda_{1}^{(n)}\right),\] where \(\lambda_{1}=\lambda_{1}(m,n)\) is the spectral gap of \(-\frac{1}{2}\Delta_{\mathbb{G}}\) restricted to the homogeneous ball \(\{x\in\mathbb{G}:|x|<1\}\), and \[c\left(\lambda_{1}^{(m)},\lambda_{1}^{(n)}\right):=f(x^{*})=\inf_{0<x<1}f(x),\] \[f(x)=\frac{\lambda_{1}^{(m)}}{\sqrt{1-x}}+\frac{\lambda_{1}^{(n)}\sqrt{1-x}}{4x},\] \[x^{*}=\frac{\sqrt{\left(\lambda_{1}^{(n)}\right)^{2}+32\lambda_{1}^{(n)}\lambda_{1}^{(m)}}-3\lambda_{1}^{(n)}}{2\left(4\lambda_{1}^{(m)}-\lambda_{1}^{(n)}\right)},\] where \(\lambda_{1}^{(n)}\) is the lowest Dirichlet eigenvalue in the unit ball in \(\mathbb{R}^{n}\). The proof of Theorem 3.2 is of a probabilistic nature and it relies on Lemma 3.1. If \(B_{t}\) is a two-dimensional Brownian motion and \[A_{t}:=\frac{1}{2}\int_{0}^{t}B_{1}(s)dB_{2}(s)-B_{2}(s)dB_{1}(s)\] the corresponding Levy area, then a classical result in stochastic analysis [6, Ch.VI Example 6.1] states that there exists a one-dimensional Brownian motion \(W_{t}\) independent of \(|B_{t}|\) such that \[A_{t}=W_{\tau(t)}, \tag{1.4}\] where \(\tau(t):=\frac{1}{4}\int_{0}^{t}|B_{s}|^{2}ds\). In Lemma 3.1 we prove a version of (1.4) where \(B_{t}\) is replaced by an \(m\)-dimensional Brownian motion, and \(A_{t}\) by an \(n\)-dimensional martingale consisting of stochastic integrals depending on \(B_{t}\). ## 2. Geometric background ### Carnot groups We begin by recalling basic facts about Carnot groups.
**Definition 2.1** (Carnot groups).: We say that \(\mathbb{G}\) is a Carnot group of step \(r\) if \(\mathbb{G}\) is a connected and simply connected Lie group whose Lie algebra \(\mathfrak{g}\) is _stratified_, that is, it can be written as \[\mathfrak{g}=V_{1}\oplus\cdots\oplus V_{r},\] where \[[V_{1},V_{i-1}]=V_{i},\ \ 2\leqslant i\leqslant r,\] \[[V_{1},V_{r}]=\{0\}\,.\] To avoid degenerate cases we assume that the dimension of the Lie algebra \(\mathfrak{g}\) is at least \(3\). In addition we will use a stratification such that the center of \(\mathfrak{g}\) is contained in \(V_{r}\). We generally assume that \(r\geqslant 2\) to exclude the case when the corresponding Laplacian is elliptic. In particular, Carnot groups are nilpotent. We will use \(\mathcal{H}:=V_{1}\) to denote the space of _horizontal_ vectors that generate the rest of the Lie algebra, noting that \(V_{2}=[\mathcal{H},\mathcal{H}],...,V_{r}=\mathcal{H}^{(r)}\). Finally, by [1, Proposition 2.2.17, Proposition 2.2.18] we can assume without loss of generality that a Carnot group can be identified with a _homogeneous Carnot group_. For \(i=1,...,r\), let \(d_{i}=\dim V_{i}\) and \(d_{0}=0\). The Euclidean space underlying \(G\) has dimension \[N:=\sum_{i=1}^{r}d_{i},\] that is, \(\mathbb{G}\cong\mathbb{R}^{N}\), and the homogeneous dimension of \(\mathbb{G}\) is given by \[Q:=\sum_{i=1}^{r}i\cdot d_{i}.\] A homogeneous Carnot group is equipped with a natural family of _dilations_ defined for any \(a>0\) by \[D_{a}\left(x_{1},\ldots,x_{N}\right):=\left(a^{\sigma_{1}}x_{1},\ldots,a^{ \sigma_{N}}x_{N}\right),\] where \(\sigma_{j}\in\mathbb{N}\) is called the _homogeneity_ of \(x_{j}\), with \[\sigma_{j}=i\quad\text{ for }\sum_{k=0}^{i-1}d_{k}+1\leqslant j\leqslant\sum_{ k=1}^{i}d_{k},\] with \(i=1,\ldots,r\) and recalling that \(d_{0}=0\). That is, \(\sigma_{1}=\cdots=\sigma_{d_{1}}=1,\sigma_{d_{1}+1}=\cdots=\sigma_{d_{1}+d_{2 }}=2\), and so on. We assume that \(\mathcal{H}\) is equipped with an inner product \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\), in which case the Carnot group has a natural sub-Riemannian structure. Namely, one may use left translation to define a _horizontal distribution_\(\mathcal{D}\) as a sub-bundle of the tangent bundle \(T\mathbb{G}\), and a metric on \(\mathcal{D}\). First, we identify the space \(\mathcal{H}\subset\mathfrak{g}\) with \(\mathcal{D}_{e}\subset T_{e}\mathbb{G}\). Then for \(g\in\mathbb{G}\) let \(L_{g}\) denote left translation \(L_{g}h=gh\), and define \(\mathcal{D}_{g}:=(L_{g})_{*}\mathcal{D}_{e}\) for any \(g\in\mathbb{G}\). A metric on \(\mathcal{D}\) may then be defined by \[\langle u,v\rangle_{\mathcal{D}_{g}} :=\langle(L_{g^{-1}})_{*}u,(L_{g^{-1}})_{*}v\rangle_{\mathcal{D}_ {e}}\] \[=\langle(L_{g^{-1}})_{*}u,(L_{g^{-1}})_{*}v\rangle_{\mathcal{H}} \quad\text{ for all }u,v\in\mathcal{D}_{g}.\] We will sometimes identify the horizontal distribution \(\mathcal{D}\) and \(\mathcal{H}\). Vectors in \(\mathcal{D}\) are called _horizontal_. Let \(\{X_{1},\ldots,X_{d_{1}}\}\) be an orthonormal basis for \(\mathcal{H}\) of left-invariant vector fields, then the sum of squares operator \[\Delta_{\mathbb{G}}:=\sum_{j=1}^{d_{1}}X_{j}^{2}\] is called the canonical sub-Laplacian on \(\mathbb{G}\). By [5, Section 3] th operator \(\Delta_{\mathbb{G}}\) depends only on the inner product on \(\mathcal{H}\), and not on the choice of the basis. 
**Definition 2.2**.: Suppose \(\mathbb{G}=\left(\mathbb{R}^{N},\star,\delta_{\lambda}\right)\) is a homogeneous Carnot group, and \(\rho:\mathbb{G}\rightarrow\left[0,\infty\right)\) is a continuous function with respect to the Euclidean topology. Then \(\rho\) is a _homogeneous norm_ if it satisfies the following properties \[\rho\left(\delta_{\lambda}(x)\right)=\lambda\rho(x)\text{ for every }\lambda>0\text{ and }x\in\mathbb{G},\] \[\rho(x)>0\text{ if and only if }x\neq 0.\] The norm \(\rho\) is called _symmetric_ if it satisfies \(\rho\left(x^{-1}\right)=\rho\left(x\right)\) for every \(x\in\mathbb{G}\). **Definition 2.3**.: A \(\mathbb{G}\)-valued Markov process \(g_{t}\) is called a hypoelliptic (or horizontal) Brownian motion if its infinitesimal generator is \(\frac{1}{2}\Delta_{\mathbb{G}}\). For \(g\in\mathbb{G}\), let \(L_{g}:\mathbb{G}\rightarrow\mathbb{G}\) denote the left translation given by \(L_{g}(h)=g^{-1}h\) for \(h\in\mathbb{G}\). Let us denote by \(\theta_{g}^{\ell}\) the _left Maurer-Cartan form_ on \(\mathbb{G}\), that is, a \(\mathfrak{g}\)-valued \(1\)-form on \(\mathbb{G}\) defined by \(\theta_{g}^{\ell}(v):=dL_{g}(v)\), \(v\in T_{g}\mathbb{G}\). The horizontal Brownian motion \(g_{t}\) is then the solution to the \(\mathfrak{g}\)-valued stochastic differential equation \[\theta_{g}^{\ell}(dg_{t})=\left(dB_{t},0\right),\] \[g_{0}=e,\] where \(B_{t}\) is an \(\mathcal{H}\)-valued Brownian motion and \(e\) is the identity in \(\mathbb{G}\). ### \(H\)-type groups H-type or Heisenberg-type groups are examples of Carnot groups. They were first introduced by Kaplan in [7] and basic facts about these groups can be found in [1, Chapter 18]. **Definition 2.4**.: A finite dimensional real Lie algebra \(\mathfrak{g}\) is said to be an \(H\)_-type Lie algebra_ if there exists an inner product \(\langle\cdot,\cdot\rangle\) on \(\mathfrak{g}\) such that \[[\mathfrak{z}^{\perp},\mathfrak{z}^{\perp}]=\mathfrak{z},\] where \(\mathfrak{z}\) is the center of \(\mathfrak{g}\), and for every fixed \(Z\in\mathfrak{z}\), the map \(J_{Z}:\mathfrak{z}^{\perp}\rightarrow\mathfrak{z}^{\perp}\) defined by \[\langle J_{Z}X,Y\rangle=\langle Z,[X,Y]\rangle\] is an orthogonal map whenever \(\langle Z,Z\rangle=1\). An \(H\)_-type group_\(\mathbb{G}\) is a connected and simply connected Lie group whose Lie algebra \(\mathfrak{g}\) is an \(H\)-type algebra. **Remark 2.5** (Corollary 1 in [7]).: _Let \(n\) and \(m\) be two integers. Then there exists an \(H\)-type Lie algebra of dimension \(m+n\) whose center has dimension \(n\) if and only if \(n<\rho(m)\), where \(\rho\) is the Hurwitz-Radon function_ \[\rho:\mathbb{N}\rightarrow\mathbb{N},\ \rho(n)=8p+q,\text{ where }n=\text{( odd)}\cdot 2^{4p+q},\ 0\leqslant q\leqslant 3.\] **Example 2.6**.: The Heisenberg group \(\mathbb{H}\) is the Lie group identified with \(\mathbb{R}^{3}\) endowed with the following group multiplication law \[\left(\mathbf{v}_{1},z_{1}\right)\cdot\left(\mathbf{v}_{2},z_{2} \right):=\left(x_{1}+x_{2},y_{1}+y_{2},z_{1}+z_{2}+\frac{1}{2}\omega\left( \mathbf{v}_{1},\mathbf{v}_{2}\right)\right),\] \[\text{where }\mathbf{v}_{1}=\left(x_{1},y_{1}\right),\mathbf{v}_{2}= \left(x_{2},y_{2}\right)\in\mathbb{R}^{2},\] \[\omega:\mathbb{R}^{2}\times\mathbb{R}^{2}\longrightarrow\mathbb{R}, \ \ \omega\left(\mathbf{v}_{1},\mathbf{v}_{2}\right):=x_{1}y_{2}-x_{2}y_{1}\] is the standard symplectic form on \(\mathbb{R}^{2}\). The identity in \(\mathbb{H}\) is \(e=(0,0,0)\) and the inverse is given by \((\mathbf{v},z)^{-1}=(-\mathbf{v},-z)\). 
The Heisenberg Lie algebra \(\mathfrak{h}\) is spanned by the vector fields \[X=\frac{\partial}{\partial x}-\frac{y}{2}\frac{\partial}{\partial z},\ \ Y=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial z},\ \ Z=\frac{\partial}{\partial z}.\] Note that \([X,Y]=Z\) is the only non-zero Lie bracket, and hence \(\mathfrak{h}\) is an \(H\)-type Lie algebra under any inner product such that \(\{X,Y,Z\}\) is an orthonormal basis. In particular, \(\mathfrak{z}=\operatorname{span}\left\{Z\right\}\), and \(J_{Z}X=Y,J_{Z}Y=-X\). **Example 2.7**.: For \(n\geqslant 1\), the Heisenberg-Weyl group \(\mathbb{H}_{n}\) is the Lie group identified with \(\mathbb{R}^{2n+1}\) with the following group multiplication law \[\left(\mathbf{v},z\right)\cdot\left(\mathbf{v}^{\prime},z^{\prime}\right):=\left(\mathbf{v}+\mathbf{v}^{\prime},z+z^{\prime}+\frac{1}{2}\omega\left(\mathbf{v},\mathbf{v}^{\prime}\right)\right),\] \[\text{where }\mathbf{v}=\left(x_{1},\ldots,x_{2n}\right),\ \mathbf{v}^{\prime}=\left(x_{1}^{\prime},\ldots,x_{2n}^{\prime}\right)\in\mathbb{R}^{2n},\ z,z^{\prime}\in\mathbb{R},\] \[\omega:\mathbb{R}^{2n}\times\mathbb{R}^{2n}\longrightarrow\mathbb{R},\ \ \omega\left(\mathbf{v},\mathbf{v}^{\prime}\right):=\sum_{j=1}^{n}x_{2j-1}x_{2j}^{\prime}-x_{2j-1}^{\prime}x_{2j}.\] Its Lie algebra \(\mathfrak{h}_{n}\) is generated by \[X_{2j-1}=\frac{\partial}{\partial x_{2j-1}}-\frac{1}{2}x_{2j}\frac{\partial}{\partial z},\ \ X_{2j}=\frac{\partial}{\partial x_{2j}}+\frac{1}{2}x_{2j-1}\frac{\partial}{\partial z},\ \ Z=\frac{\partial}{\partial z},\] where \(j=1,\ldots,n\). Note that the only non-zero brackets are \([X_{2j-1},X_{2j}]=Z\), and \(\mathfrak{z}=\operatorname{span}\left\{Z\right\}\), and hence \(\mathfrak{h}_{n}\) is an \(H\)-type algebra under an inner product that makes \(\{X_{1},\ldots,X_{2n},Z\}\) orthonormal. Moreover, one has that \(J_{Z}X_{2j-1}=X_{2j}\), \(J_{Z}X_{2j}=-X_{2j-1}\) for all \(j=1,\ldots,n\). **Remark 2.8**.: _Let \(\mathbb{G}\) be an \(H\)-type group, then \(\mathbb{G}\) is a Carnot group of step \(2\). Indeed, if \(\mathfrak{z}\) denotes the center of its Lie algebra \(\mathfrak{g}\), then one can consider the stratification \(V_{1}:=\mathfrak{z}^{\perp}\), \(V_{2}:=\mathfrak{z}\)._ The following characterization of \(H\)-type groups can be found in [1, Theorem 18.2.1]. **Theorem 2.9**.: _Let \(\mathbb{G}\) be an \(H\)-type group. Then there exist integers \(m,n\) such that \(\mathbb{G}\) is isomorphic to \(\mathbb{R}^{m+n}\) with the group law given by_ \[x\cdot y=\left(\overline{x}+\overline{y},\widehat{x}_{1}+\widehat{y}_{1}+\frac{1}{2}\langle U^{(1)}\overline{x},\overline{y}\rangle,\ldots,\widehat{x}_{n}+\widehat{y}_{n}+\frac{1}{2}\langle U^{(n)}\overline{x},\overline{y}\rangle\right), \tag{2.1}\] _for any \(x:=(\overline{x},\widehat{x}),\,y=(\overline{y},\widehat{y})\in\mathbb{R}^{m+n}\), where the matrices \(U^{(1)},\ldots,U^{(n)}\) satisfy the following properties_ _1. \(U^{(i)}\) is an \(m\times m\) skew-symmetric and orthogonal matrix for \(i=1,\ldots,n\)._ _2. \(U^{(i)}U^{(j)}+U^{(j)}U^{(i)}=0\) for every \(i\neq j\)._ **Remark 2.10**.: _The integers \(m\) and \(n\) in Theorem 2.9 only depend on the structure of \(\mathbb{G}\). More precisely, \(m=\dim\mathfrak{z}^{\perp}\) and \(n=\dim\mathfrak{z}\)._ By [1, Proposition 5.1.4, p. 230], one has that all homogeneous norms on a Carnot group are equivalent.
\(H\)-type groups are Carnot groups of step 2 and hence, without loss of generality, we can focus our attention on the homogeneous norm \[|x|:=\left(|\overline{x}|_{\mathbb{R}^{m}}^{4}+|\widehat{x}|_{\mathbb{R}^{n}} ^{2}\right)^{\frac{1}{4}} \tag{2.2}\] for any \(x=(\overline{x},\widehat{x})\in\mathbb{R}^{m+n}\). **Remark 2.11**.: _The matrices \(U^{(i)}\) in Theorem 2.9 satisfies_ \[\langle U^{(i)}\overline{x},U^{(j)}\overline{x}\rangle=0, \tag{2.3}\] _for any \(\overline{x}\in\mathbb{R}^{m}\) and any \(i\neq j\) in \(\{1,\ldots,n\}\). Indeed, \(U^{(i)}U^{(i)\,T}=I\) and \(U^{(i)}=-U^{(i)\,T}\), and hence, for any \(i\neq j\)_ \[\langle U^{(i)}\overline{x},U^{(j)}\overline{x}\rangle=\langle U^{(j)}U^{(i) }\overline{x},-\overline{x}\rangle=\langle U^{(i)}U^{(j)}\overline{x}, \overline{x}\rangle.\] _On the other hand_ \[\langle U^{(i)}\overline{x},U^{(j)}\overline{x}\rangle=\langle-\overline{x}, U^{(i)}U^{(j)}\overline{x}\rangle=-\langle U^{(i)}U^{(j)}\overline{x}, \overline{x}\rangle.\] We now describe the Maurer-Cartan form, the hypoelliptic Brownian motion on \(\mathbb{G}\), and its infinitesimal generator \(\frac{1}{2}\Delta_{\mathbb{G}}\). **Proposition 2.12**.: _Let \(\mathbb{G}\) be an \(H\)-type group. Then the Maurer-Cartan form is given by_ \[\theta_{k}^{\ell}(v)=\left(\overline{v},\widehat{v}_{1}-\frac{1}{2}\langle U ^{(1)}\overline{k},\overline{v}\rangle,\ldots,\widehat{v}_{n}-\frac{1}{2} \langle U^{(n)}\overline{k},\overline{v}\rangle\right)\] _for any \(k=(\overline{k},\widehat{k})\in\mathbb{G}\) and \(v=(\overline{v},\widehat{v})\in T_{k}\mathbb{G}\)._ _Left-invariant vector fields at \(x=(\overline{x},\widehat{x})\) can be written as_ \[X_{j} =\frac{\partial}{\partial\overline{x}_{j}}-\frac{1}{2}\sum_{s=1} ^{n}\left(\sum_{i=1}^{m}U^{(s)}_{ji}\overline{x}_{i}\right)\frac{\partial}{ \partial\widehat{x}_{s}}, j=1,\ldots,m,\] \[Z_{i} =\frac{\partial}{\partial\widehat{x}_{i}}, i=1,\ldots,n.\] _A hypoelliptic Brownian motion on \(\mathbb{G}\) starting at the identity can be written as_ \[g_{t}=\left(B_{t},A_{t}\right),\] _where \(B_{t}\) is a standard Brownian motion on \(\mathbb{R}^{m}\) and \(A_{t}=(A_{1}(t),\ldots,A_{n}(t))\), with_ \[A_{i}(t)=\frac{1}{2}\int_{0}^{t}\langle U^{(i)}B_{s},dB_{s}\rangle,\;i=1,\ldots,n. \tag{2.4}\] Proof.: Let \(\gamma(t)=(\overline{\gamma}(t),\widehat{\gamma}(t))\), \(0\leqslant t\leqslant 1\), be a curve in \(\mathbb{G}\) with \(\gamma(0)=k\in\mathbb{G}\) and \(\gamma^{\prime}(0)=v\in T_{k}\mathbb{G}\). Then \[\theta_{k}^{\ell}(v):=dL_{k}(v)=\left.\frac{d}{dt}\right|_{t=0}k^ {-1}\gamma(t)\] \[=\left.\frac{d}{dt}\right|_{t=0}\left(-\overline{k}+\overline{ \gamma}(t),-\widehat{k}_{1}+\widehat{\gamma}_{1}(t)-\frac{1}{2}\langle U^{(1) }\overline{k},\overline{\gamma}(t)\rangle,\ldots,\right.\] \[\left.-\widehat{k}_{n}+\widehat{\gamma}_{n}(t)-\frac{1}{2} \langle U^{(n)}\overline{k},\overline{\gamma}(t)\rangle\right)\] \[=\left(\overline{v},\widehat{v}_{1}-\frac{1}{2}\langle U^{(1)} \overline{k},\overline{v}\rangle,\ldots,\widehat{v}_{n}-\frac{1}{2}\langle U ^{(n)}\overline{k},\overline{v}\rangle\right),\] from which the expression for left-invariant vector fields follows. 
Let \(g_{t}\) be a hypoelliptic Brownian motion on \(\mathbb{G}\), then \[(dB_{t},0)=\theta_{g_{t}}^{\ell}(dg_{t})\] \[=\left(d\overline{g}_{t},d\widehat{g}_{1}(t)-\frac{1}{2}\langle U ^{(1)}\overline{g}_{t},d\overline{g}_{t}\rangle,\ldots,d\widehat{g}_{n}(t)- \frac{1}{2}\langle U^{(n)}\overline{g}_{t},d\overline{g}_{t}\rangle\right),\] that is, \[g_{t}=\left(B_{t},\frac{1}{2}\int_{0}^{t}\langle U^{(1)}B_{s},dB_{s}\rangle, \ldots,\frac{1}{2}\int_{0}^{t}\langle U^{(n)}B_{s},dB_{s}\rangle\right),\] where \(B_{t}\) is a standard Brownian motion on \(\mathbb{R}^{m}\). **Corollary 2.13**.: _The sub-Laplacian on an \(H\)-type group \(\mathbb{G}\) is given by_ \[\Delta_{\mathbb{G}}=\sum_{j=1}^{m}X_{j}^{2}=\Delta_{\overline{x}}-\sum_{i=1}^ {n}\langle U^{(i)}\overline{x},\nabla_{\overline{x}}\rangle\frac{\partial}{ \partial\widehat{x}_{i}}+\frac{1}{4}|\overline{x}|^{2}\Delta_{\widehat{x}},\] _where \(x=(\overline{x},\widehat{x})\),_ \[\Delta_{\overline{x}} =\sum_{j=1}^{m}\frac{\partial^{2}}{\partial\overline{x}_{j}^{2}},\] \[\Delta_{\widehat{x}} =\sum_{i=1}^{n}\frac{\partial^{2}}{\partial\widehat{x}_{i}^{2}},\] \[\text{and}\ \ \nabla_{\overline{x}}=\left(\frac{\partial}{ \partial\overline{x}_{1}},\ldots,\frac{\partial}{\partial\overline{x}_{m}} \right).\] _is the horizontal gradient._ Proof.: One has that \[\sum_{j=1}^{m}X_{j}^{2}=\sum_{j=1}^{m}\left(\partial_{\overline{x}_{j }}-\frac{1}{2}\sum_{s=1}^{n}\sum_{i=1}^{m}\left(U_{ji}^{(s)}\overline{x}_{i} \right)\partial_{\widehat{x}_{s}}\right)^{2}\] \[=\Delta_{\overline{x}}-\frac{1}{2}\sum_{j,i=1}^{m}\sum_{s=1}^{n} \left(\partial_{\overline{x}_{j}}\left(U_{ji}^{(s)}\overline{x}_{i}\partial_{ \widehat{x}_{s}}\right)+U_{ji}^{(s)}\overline{x}_{i}\partial_{\widehat{x}_{s} \overline{x}_{j}}^{2}\right)\] \[+\frac{1}{4}\sum_{p,s=1}^{n}\sum_{j,i=1}^{m}U_{ji}^{(s)} \overline{x}_{i}U_{jl}^{(p)}\overline{x}_{l}\partial_{\widehat{x}_{s} \widehat{x}_{p}}^{2}\] \[=\Delta_{\overline{x}}-\frac{1}{2}\sum_{j,i=1}^{m}\sum_{s=1}^{n} \left(U_{ji}^{(s)}\delta_{ji}\partial_{\widehat{x}_{s}}+2U_{ji}^{(s)}\overline {x}_{i}\partial_{\widehat{x}_{s}\overline{x}_{j}}^{2}\right)+\frac{1}{4}\sum_ {s,p=1}^{n}\langle U^{(s)}\overline{x},U^{(p)}\overline{x}\rangle\partial_{ \widehat{x}_{s}\widehat{x}_{p}}^{2},\] and by (2.3) \[\Delta_{\overline{x}}-\frac{1}{2}\sum_{s=1}^{n}\left(\sum_{i=1}^{ m}U_{ii}^{(s)}\partial_{\widehat{x}_{s}}+2\sum_{j,i=1}^{m}U_{ji}^{(s)} \overline{x}_{i}\partial_{\widehat{x}_{s}\overline{x}_{j}}^{2}\right)+\frac{1} {4}\sum_{s,p=1}^{n}\langle U^{(s)}\overline{x},U^{(p)}\overline{x}\rangle \partial_{\widehat{x}_{s}\widehat{x}_{p}}^{2}\] \[=\Delta_{\overline{x}}-\sum_{s=1}^{n}\sum_{j,i=1}^{m}U_{ji}^{(s) }\overline{x}_{i}\partial_{\widehat{x}_{s}\overline{x}_{j}}^{2}+\frac{1}{4} \sum_{s=1}^{n}|U^{(s)}\overline{x}|^{2}\partial_{\widehat{x}_{s}^{2}}^{2}\] \[=\Delta_{\overline{x}}-\sum_{s=1}^{n}\langle U^{(s)}\overline{x},\nabla\overline{x}\rangle\frac{\partial}{\partial\widehat{x}_{s}}+\frac{1}{4} |\overline{x}|^{2}\Delta_{\widehat{x}},\] where in the second to last line we used that \(U_{ii}^{(s)}=0\) for all \(i=1,\ldots,m\) and all \(s=1,\ldots,n\). ## 3. Proof of the main result ### A probabilistic lemma Let us recall that if \(B_{t}\) is a two-dimensional standard Brownian motion and \(A_{t}:=\frac{1}{2}\int_{0}^{t}B_{1}(s)dB_{2}(s)-B_{2}(s)dB_{1}(s)\) the corresponding Levy area, then \(A_{t}=W_{\tau(t)}\), where \(W_{t}\) is a one-dimensional standard Brownian motion independent of \(|B_{t}|\), [6, p.470]. 
This idea has been used in [4] to study small deviations for a hypoelliptic Brownian motion \(g_{t}\) on the Heisenberg group \(\mathbb{H}\). The next lemma extends the result in [6] to the vector-valued case. **Lemma 3.1**.: _Let \(g_{t}=(B_{t},A_{t})\) be a hypoelliptic Brownian motion on an \(H\)-type group \(\mathbb{G}\cong\mathbb{R}^{m+n}\), where \(B_{t}\) is a standard Brownian motion in \(\mathbb{R}^{m}\) and \(A_{t}=(A_{1}(t),\ldots,A_{n}(t))\), with_ \[A_{i}(t)=\frac{1}{2}\int_{0}^{t}\langle U^{(i)}B_{s},dB_{s}\rangle,\ i=1,\ldots,n.\] Then there exists an \(n\)-dimensional Brownian motion \(W_{t}\) such that 1. \(|W_{t}|\) is independent of \(|B_{t}|\). 2. \(A_{t}=W_{\tau(t)}\), where \(\tau(t):=\frac{1}{4}\int_{0}^{t}|B_{s}|^{2}ds\). Proof.: Let \(X_{t}\) be the one-dimensional Brownian motion given by \[X_{t}=\sum_{k=1}^{m}\int_{0}^{t}\frac{B_{k}(s)}{|B_{s}|}dB_{k}(s),\] and let us consider the \((n+1)\)-dimensional martingale \((X_{t},A_{1}(t),\ldots,A_{n}(t))\). Note that the quadratic variation of \(A_{i}(t)\) is independent of \(i\) since the matrices \(U^{(i)}\) are orthogonal for \(i=1,\ldots,n\). Indeed, \[dA_{i}(t)=\frac{1}{2}\langle U^{(i)}B_{t},dB_{t}\rangle,\] from which it follows that \[d\langle A_{i}\rangle_{t}=\frac{1}{4}\sum_{k=1}^{m}\left(U^{(i)}B_{t}\right)_{k}^{2}dt=\frac{1}{4}|U^{(i)}B_{t}|^{2}dt=\frac{1}{4}|B_{t}|^{2}dt.\] We claim that the following covariations are zero \[\langle A_{i},A_{j}\rangle_{t}=0,\text{ for all }i\neq j\text{ in }\{1,\ldots,n\}, \tag{3.1}\] \[\langle A_{i},X\rangle_{t}=0,\text{ for all }i=1,\ldots,n. \tag{3.2}\] Indeed, by (2.4) \[d(A_{i}(t)+A_{j}(t))=\frac{1}{2}\langle U^{(i)}B_{t}+U^{(j)}B_{t},dB_{t}\rangle,\] and hence, by (2.3) \[d\langle A_{i}+A_{j}\rangle_{t}=\frac{1}{4}\sum_{k=1}^{m}\left(U^{(i)}B_{t}+U^{(j)}B_{t}\right)_{k}^{2}dt\] \[=\frac{1}{4}\sum_{k=1}^{m}\left(U^{(i)}B_{t}\right)_{k}^{2}dt+\frac{1}{4}\sum_{k=1}^{m}\left(U^{(j)}B_{t}\right)_{k}^{2}dt+\frac{1}{2}\langle U^{(i)}B_{t},U^{(j)}B_{t}\rangle dt\] \[=d\langle A_{i}\rangle_{t}+d\langle A_{j}\rangle_{t},\] for all \(i\neq j\) in \(1,\ldots,n\), which proves (3.1). Similarly, \[d(X_{t}+A_{i}(t))=\sum_{k=1}^{m}\left(\frac{1}{|B_{t}|}B_{k}(t)+\frac{1}{2}\left(U^{(i)}B_{t}\right)_{k}\right)dB_{k}(t),\] and hence \[d\langle X_{t}+A_{i}(t)\rangle=\sum_{k=1}^{m}\left(\frac{1}{|B_{t}|}B_{k}(t)+\frac{1}{2}\left(U^{(i)}B_{t}\right)_{k}\right)^{2}dt\] \[=\sum_{k=1}^{m}\frac{B_{k}(t)^{2}}{|B_{t}|^{2}}dt+\frac{1}{4}\sum_{k=1}^{m}\left(U^{(i)}B_{t}\right)_{k}^{2}dt+\frac{1}{|B_{t}|}\sum_{k=1}^{m}B_{k}(t)\left(U^{(i)}B_{t}\right)_{k}dt\] \[=d\langle X\rangle_{t}+d\langle A_{i}\rangle_{t}+\frac{1}{|B_{t}|}\langle U^{(i)}B_{t},B_{t}\rangle dt=d\langle X\rangle_{t}+d\langle A_{i}\rangle_{t},\] where in the last line we used that \(U^{(i)}\) is skew-symmetric for \(i=1,\ldots,n\). Thus, by [6, Ch.2 Sect Theorem 7.3] there exists an \((n+1)\)-dimensional Brownian motion \((W_{0}(t),W_{1}(t),\ldots,W_{n}(t))\) such that \[X_{t}=W_{0}(\langle X\rangle_{t}),\quad A_{1}(t)=W_{1}(\langle A_{1}\rangle_{t}),\quad\ldots\ldots\quad A_{n}(t)=W_{n}(\langle A_{n}\rangle_{t}),\] that is, \[X_{t}=W_{0}(t),\quad\quad A_{1}(t)=W_{1}(\tau(t)),\quad\quad\ldots\ldots\quad A_{n}(t)=W_{n}(\tau(t)),\] where \(\tau(t):=\frac{1}{4}\int_{0}^{t}|B_{s}|^{2}ds\). In particular, \(X_{t}\) is independent of \(W_{t}:=(W_{1}(t),\ldots,W_{n}(t))\). The proof is complete once we prove that \(W_{t}\) is independent of \(|B_{t}|\).
Note that \[|B_{t}|^{2}=2\int_{0}^{t}|B_{s}|dX_{s}+mt,\] and hence \(\sigma\{|B_{s}|,s\leqslant t\}\subset\sigma\{X_{s},s\leqslant t\}\), and thus \(|B_{t}|\) is independent of \(W_{t}\). ### Spectral bounds **Theorem 3.2**.: _Let \(\mathfrak{g}\) be an \(H\)-type Lie algebra with center \(\mathfrak{z}\) and set \(n:=\dim\mathfrak{z}\), \(m:=\dim\mathfrak{z}^{\perp}\). Let \(\mathbb{G}\) be the corresponding \(H\)-type group with sub-Laplacian \(\Delta_{\mathbb{G}}\). Then_ \[\lambda_{1}^{(m)}\leqslant\lambda_{1}\leqslant c\left(\lambda_{1}^{(m)},\lambda_{1}^{(n)}\right), \tag{3.3}\] _where \(\lambda_{1}=\lambda_{1}(m,n)\) is the spectral gap of \(-\frac{1}{2}\Delta_{\mathbb{G}}\) restricted to the homogeneous ball \(\{x\in\mathbb{G}:|x|<1\}\), where \(|\cdot|\) is given by (2.2), and_ \[c\left(\lambda_{1}^{(m)},\lambda_{1}^{(n)}\right):=f(x^{*})=\inf_{0<x<1}f(x),\] \[f(x)=\frac{\lambda_{1}^{(m)}}{\sqrt{1-x}}+\frac{\lambda_{1}^{(n)}\sqrt{1-x}}{4x},\] \[x^{*}=\frac{\sqrt{\left(\lambda_{1}^{(n)}\right)^{2}+32\lambda_{1}^{(n)}\lambda_{1}^{(m)}}-3\lambda_{1}^{(n)}}{2\left(4\lambda_{1}^{(m)}-\lambda_{1}^{(n)}\right)},\] _where \(\lambda_{1}^{(n)}\) is the lowest Dirichlet eigenvalue in the unit ball in \(\mathbb{R}^{n}\) defined in Notation 1.1._ **Corollary 3.3**.: _We have that_ \[\lim_{m\to\infty}\frac{\lambda_{1}(m,n)}{\lambda_{1}^{(m)}}=1, \tag{3.4}\] _that is, when the dimension \(m\) of the orthogonal complement \(\mathfrak{z}^{\perp}\) of the center of \(\mathfrak{g}\) is large, the hypoelliptic spectral gap is approximated by the spectral gap on \(\mathbb{R}^{m}\). Moreover, for any \(m>n\)_ \[\lambda_{1}^{(m)}\leqslant\lambda_{1}(m,n)\leqslant 2\lambda_{1}^{(m)},\] _which gives a bound for small values of \(m\) and \(n\)._ Proof.: The lower bound in (3.3) follows from the small deviation principle (1.2) for an \(\mathbb{R}^{m}\)-valued Brownian motion and the fact that \[\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|g_{t}|<\varepsilon\right)\leqslant \mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|<\varepsilon\right).\] Let us now prove the upper bound. By Proposition 2.12 and Lemma 3.1 it follows that a horizontal Brownian motion \(g_{t}\) on \(\mathbb{G}\) can be written as \(g_{t}=(B_{t},A_{t})=\left(B_{t},W_{\tau(t)}\right)\), where \(B_{t}\) and \(W_{t}\) are \(m\)-dimensional and \(n\)-dimensional independent Brownian motions, and \(\tau(t)=\frac{1}{4}\int_{0}^{t}|B_{s}|^{2}ds\).
For any \(x\in(0,1)\) we have \[\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|g_{t}|< \varepsilon\right)=\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}+|A_{t}|_{\mathbb{R}^{n}}^{2}<\varepsilon^{4}\right)\] \[\geqslant\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}<(1-x)\varepsilon^{4},\,\max_{0\leqslant t\leqslant 1}|A_{t}|_{ \mathbb{R}^{n}}^{2}<x\varepsilon^{4}\right)\] \[=\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}<(1-x)\varepsilon^{4},\,\max_{0\leqslant t\leqslant 1}|W_{ \tau(t)}|_{\mathbb{R}^{n}}^{2}<x\varepsilon^{4}\right)\] \[=\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}<(1-x)\varepsilon^{4},\,\max_{0\leqslant t\leqslant\tau(1)}| W_{t}|_{\mathbb{R}^{n}}^{2}<x\varepsilon^{4}\right)\] \[\geqslant\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}<(1-x)\varepsilon^{4},\,\max_{0\leqslant t\leqslant\frac{1}{4 }\sqrt{1-x}\varepsilon^{2}}|W_{t}|_{\mathbb{R}^{n}}^{2}<x\varepsilon^{4}\right)\] \[=\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}<(1-x)\varepsilon^{4},\,\max_{0\leqslant t\leqslant 1}|W_{ t}|_{\mathbb{R}^{n}}^{2}<\frac{4x\varepsilon^{2}}{\sqrt{1-x}}\right)\] \[=\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|B_{t}|_{ \mathbb{R}^{m}}^{4}<(1-x)\varepsilon^{4}\right)\mathbb{P}\left(\max_{0 \leqslant t\leqslant 1}|W_{t}|_{\mathbb{R}^{n}}^{2}<\frac{4x\varepsilon^{2}}{ \sqrt{1-x}}\right).\] Thus, \[\log\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|g_{t}|< \varepsilon\right)\geqslant\log\mathbb{P}\left(\max_{0\leqslant t\leqslant 1 }|B_{t}|_{\mathbb{R}^{m}}<(1-x)^{\frac{1}{4}}\varepsilon\right)\] \[+\log\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|W_{t}|_{ \mathbb{R}^{n}}<\frac{2\sqrt{x}\varepsilon}{(1-x)^{\frac{1}{4}}}\right),\] and hence \[-\varepsilon^{2}\log\mathbb{P}\left(\max_{0\leqslant t\leqslant 1}|g_{t} |<\varepsilon\right)\] \[\leqslant-\varepsilon^{2}\sqrt{1-x}\log\mathbb{P}\left(\max_{0 \leqslant t\leqslant 1}|B_{t}|_{\mathbb{R}^{m}}<(1-x)^{\frac{1}{4}}\varepsilon \right)\frac{1}{\sqrt{1-x}}\] \[-\varepsilon^{2}\frac{4x}{\sqrt{1-x}}\log\mathbb{P}\left(\max_{0 \leqslant t\leqslant 1}|W_{t}|_{\mathbb{R}^{n}}<\frac{2\sqrt{x}\varepsilon}{(1-x )^{\frac{1}{4}}}\right)\frac{\sqrt{1-x}}{4x}.\] From the small deviation principle (1.2) for a standard Brownian motion applied to \(B_{t}\) and \(W_{t}\) and the one for a hypoelliptic Brownian motion applied to \(g_{t}\) it follows that \[\lambda_{1}(m,n)\leqslant\frac{\lambda_{1}^{(m)}}{\sqrt{1-x}}+\frac{\lambda_{ 1}^{(n)}\sqrt{1-x}}{4x}, \tag{3.5}\] for all \(x\) in \((0,1)\). Note that \[f(x):=\frac{\lambda_{1}^{(m)}}{\sqrt{1-x}}+\frac{\lambda_{1}^{(n)}\sqrt{1-x}} {4x}=\lambda_{1}^{(m)}\left(\frac{1}{\sqrt{1-x}}+\frac{c\sqrt{1-x}}{4x}\right)>0\] for all \(x\in(0,1)\), where \(c:=\frac{\lambda_{1}^{(n)}}{\lambda_{1}^{(m)}}\). Note that \(f\) always has a local minimum over \((0,1)\) even if we do not rely on the values of the eigenvalues \(\lambda_{1}^{(m)}\) and \(\lambda_{1}^{(n)}\), and the minimum is achieved at \[x^{*}=\frac{\sqrt{c^{2}+32c}-3c}{2\left(4-c\right)}\in(0,1).\] which gives (3.3). Moreover, \[\frac{c}{4}\leqslant\frac{c\sqrt{33}-3c}{8}\leqslant\frac{\sqrt{c^{2}+32c}-3 c}{8}\leqslant x^{*}\leqslant\frac{3\sqrt{c}-c}{4-c},\] since \(c<1\) for \(m>n\). 
Thus, \[\lambda_{1}^{(m)}\leqslant\lambda_{1}(m,n)\leqslant f(x^{*})\leqslant\lambda_ {1}^{(m)}\left(\frac{\sqrt{4-c}}{\sqrt{4-3\sqrt{c}}}+\frac{\sqrt{4-c}}{2} \right)\leqslant 2\lambda_{1}^{(m)},\] for any \(m>n\). Finally, by (3.5) we have that \[1\leqslant\frac{\lambda_{1}(m,n)}{\lambda_{1}^{(m)}}\leqslant\frac{1}{\sqrt{1 -x}}+\frac{\lambda_{1}^{(n)}}{\lambda_{1}^{(m)}}\frac{\sqrt{1-x}}{4x},\] for all \(x\in(0,1)\), and the asymptotic (3.4) then follows since \[\lambda_{1}^{(d)}\sim(2\pi)^{\frac{d+1}{d}}2^{-\frac{2}{d}}(d!!)^{\frac{2}{d}}\] for any integer \(d\).
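As a quick numerical illustration of Theorem 3.2 and Corollary 3.3, the following is a minimal sketch (not part of the original paper) that evaluates the closed-form minimizer \(x^{*}\) and the upper bound \(c(\lambda_{1}^{(m)},\lambda_{1}^{(n)})=f(x^{*})\), compares them with a direct numerical minimization of \(f\) over \((0,1)\), and checks the bound \(\lambda_{1}^{(m)}\leqslant f(x^{*})\leqslant 2\lambda_{1}^{(m)}\) for \(m>n\). The eigenvalue inputs below are placeholders chosen only so that \(\lambda_{1}^{(m)}>\lambda_{1}^{(n)}\); they are not the actual Dirichlet eigenvalues.

```python
# Numerical sanity check of the bound f(x*) in Theorem 3.2 (placeholder inputs).
import numpy as np
from scipy.optimize import minimize_scalar

def f(x, lam_m, lam_n):
    # f(x) = lam_m / sqrt(1 - x) + lam_n * sqrt(1 - x) / (4 x), for 0 < x < 1
    return lam_m / np.sqrt(1.0 - x) + lam_n * np.sqrt(1.0 - x) / (4.0 * x)

def x_star(lam_m, lam_n):
    # closed-form minimizer from Theorem 3.2
    return (np.sqrt(lam_n**2 + 32.0 * lam_n * lam_m) - 3.0 * lam_n) / (2.0 * (4.0 * lam_m - lam_n))

lam_m, lam_n = 2.9, 1.3   # placeholder values with lam_m > lam_n (i.e. m > n)

xs = x_star(lam_m, lam_n)
upper = f(xs, lam_m, lam_n)
num = minimize_scalar(f, bounds=(1e-6, 1 - 1e-6), args=(lam_m, lam_n), method="bounded")

print(f"x*    = {xs:.6f}  (numerical argmin {num.x:.6f})")
print(f"f(x*) = {upper:.6f}  (numerical min    {num.fun:.6f})")
# Two-sided bound from Corollary 3.3: lam_m <= lambda_1(m, n) <= f(x*) <= 2 lam_m.
assert lam_m <= upper <= 2.0 * lam_m + 1e-9
```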
2305.00238
The FAIRy Tale of Genetic Algorithms
Genetic Algorithm (GA) is a popular meta-heuristic evolutionary algorithm that uses stochastic operators to find an optimal solution and has proved its effectiveness in solving many complex optimization problems (such as classification, optimization, and scheduling). However, despite its performance, popularity and simplicity, not much attention has been paid towards reproducibility and reusability of GA. In this paper, we have extended Findable, Accessible, Interoperable and Reusable (FAIR) data principles to enable the reproducibility and reusability of algorithms. We have chosen GA as a use case to demonstrate the applicability of the proposed principles. Also we have presented an overview of methodological developments and variants of GA that make it challenging to reproduce or even find the right source. Additionally, to enable FAIR algorithms, we propose a vocabulary (i.e. $evo$) using light weight RDF format, facilitating the reproducibility. Given the stochastic nature of GAs, this work can be extended to numerous Optimization and machine learning algorithms/methods.
Fahad Maqbool, Muhammad Saad Razzaq, Hajira Jabeen
2023-04-29T11:36:09Z
http://arxiv.org/abs/2305.00238v1
# The FAIRy Tale of Genetic Algorithms ###### Abstract Genetic Algorithm (GA) is a popular meta-heuristic evolutionary algorithm that uses stochastic operators to find an optimal solution and has proved its effectiveness in solving many complex optimization problems (such as classification, optimization, and scheduling). However, despite its performance, popularity and simplicity, not much attention has been paid towards reproducibility and reusability of GA. In this paper, we have extended Findable, Accessible, Interoperable and Reusable (FAIR) data principles to enable the reproducibility and reusability of algorithms. We have chosen GA as a use case to demonstrate the applicability of the proposed principles. Also we have presented an overview of methodological developments and variants of GA that make it challenging to reproduce or even find the right source. Additionally, to enable FAIR algorithms, we propose a vocabulary (i.e. \(evo\)) using light weight RDF format, facilitating the reproducibility. Given the stochastic nature of GAs, this work can be extended to numerous Optimization and machine learning algorithms/methods. keywords: FAIR, Genetic Algorithm, Metadata, Digital Artifact, Reproducibility, Reusability, Evolutionary Algorithm.
## 1 Introduction Traditional optimization techniques (random search, univariate method, stochastic gradient descent, quasi-Newton) are good at solving simple optimization problems [1; 2]. However, they suffer from performance bottlenecks as problem complexity increases. Also, these techniques require a well-defined deterministic path at the start. On the other hand, stochastic optimization techniques like GA perform well with non-smooth and ill-conditioned objective functions. GA is capable of finding good solutions while avoiding local optima. GA is based on the idea of "Survival of the fittest". Given a population, it has three main operators, i.e., Selection, Mutation, and Crossover. Selection chooses potentially promising solutions to proceed to the next generation, while crossover combines the traits of parent chromosomes to create offspring. Mutation changes a certain value of a gene within a chromosome, and this helps in avoiding local optima. GA has a wide application range including scheduling, planning, assignment, and prediction in various industry and business problems [3; 4]. Currently, we are between the golden jubilee and diamond jubilee of Genetic Algorithms but still far away from the standardization of GA. Even after 50+ years, we are unable to decide and agree on the name that corresponds to a particular set of hyperparameters. Genetic Algorithm was the term coined by John Holland in 1965 [1; 2]. Since then, Genetic Algorithms [1], Simple Genetic Algorithms [2], Canonical Genetic Algorithms [1], and Sequential Genetic Algorithms [2] are the several names given to GA. One might be confused if these are different GA variants, but these all are the same and refer to John Holland's GA. Similar is the case with GA software (source code of a GA research publication). One may find different code repositories of GA12, Simple Genetic Algorithm3456, Canonical Genetic Algorithm789101112, and Sequential Genetic Algorithm13 on GitHub14. These various implementations of the same approach with different naming conventions may decrease the findability, accessibility and reusability of GA. Also, one may find various implementations of the GA1516171819 with different hyperparameters but with the same naming conventions. This makes it difficult to understand and reuse them.
Footnote 3: [https://github.com/tmsquill/simple-ga](https://github.com/tmsquill/simple-ga) Footnote 4: [https://github.com/yetanotherapris/SimpleGeneticAlgorithm](https://github.com/yetanotherapris/SimpleGeneticAlgorithm) Footnote 5: [https://github.com/afiskon/simple-genetic-algorithm](https://github.com/afiskon/simple-genetic-algorithm) Footnote 6: [https://github.com/ajlopez/SimpleGA](https://github.com/ajlopez/SimpleGA) Footnote 7: [https://github.com/GMTurbo/canonical-ga](https://github.com/GMTurbo/canonical-ga) Footnote 8: [https://github.com/nanoff/Canonical-Genetic-Algorithm](https://github.com/nanoff/Canonical-Genetic-Algorithm) Footnote 9: [https://github.com/sanamadanii/Canonical-Genetic-Algorithm](https://github.com/sanamadanii/Canonical-Genetic-Algorithm) Footnote 10: [https://github.com/szajadaemmi/Canonical-Genetic-Algorithm](https://github.com/szajadaemmi/Canonical-Genetic-Algorithm) Footnote 11: [https://github.com/yareddada/Canonical-Genetic-Algorithm](https://github.com/yareddada/Canonical-Genetic-Algorithm) Footnote 12: [https://github.com/UristMcMiner/canonical_genetic_algorithm](https://github.com/UristMcMiner/canonical_genetic_algorithm) Footnote 13: [https://github.com/regicsf2010/SequentialGA](https://github.com/regicsf2010/SequentialGA) Footnote 14: [https://guides.github.com/features/pages/](https://guides.github.com/features/pages/) Footnote 15: [https://github.com/ezstoltz/genetic-algorithm](https://github.com/ezstoltz/genetic-algorithm) Footnote 16: [https://github.com/strawberry-magic-pocket/Genetic-Algorithm](https://github.com/strawberry-magic-pocket/Genetic-Algorithm) Footnote 17: [https://github.com/streameto/GeneticAlgorithm](https://github.com/streameto/GeneticAlgorithm) Footnote 18: [https://github.com/ShiSanChuan/GeneticAlgorithm](https://github.com/ShiSanChuan/GeneticAlgorithm) Footnote 19: [https://github.com/lagodiuk/genetic-algorithm](https://github.com/lagodiuk/genetic-algorithm) A motivation behind this work is that a considerable portion of scientific data and research manuscripts remain unnoticed every year due to partial findability, accessibility, reusability, and interoperability by humans or machines [5; 6; 7; 8; 9; 10; 11; 12]. Only one-fifth of the published manuscripts also publish experimental data on some data repositories [5]. In current research practices, most of the data used in research articles is not findable. Hence, it cannot easily be reused by the research community. Similarly, the act of sharing research software (i.e. code and hyperparameter settings) is not a common practice due to little to no attribution mechanisms for the research software developers [13]. In most cases, research software's details are very briefly shared in research manuscripts and hence are far from being findable and reproducible. Using the same naming convention for different digital artifacts, using different naming conventions for the same digital artifact, the common practice of not publishing datasets and software code with research articles, and sharing code in repositories without rich metadata all add to the challenge of code findability and hence compromise algorithm reusability and reproducibility. In recent years, efforts have been made in research, academia, development, and industry to make scientific data FAIR (Findable, Accessible, Interoperable, Reusable) for both humans and machines [6; 5]. Table 1 briefly covers the FAIR data principles proposed by Wilkinson et al. [6] in 2016 for making data FAIR.
\begin{table} \begin{tabular}{|p{42.7pt}|p{284.5pt}|p{284.5pt}|} \hline FAIR & Id & Description \\ \hline \multirow{4}{*}{F} & 1 & metadata are assigned a globally unique and persistent identifier. \\ \cline{2-4} & 2 & data are described with rich metadata. \\ \cline{2-4} & 3 & metadata clearly and explicitly include the identifier of the data it describes. \\ \cline{2-4} & 4 & metadata are registered or indexed in a searchable resource. \\ \hline \multirow{4}{*}{A} & 1 & metadata are retrievable by their identifier using a standardized communications protocol. \\ \cline{2-4} & 1.1 & the protocol is open, free, and universally implementable. \\ \cline{2-4} & 1.2 & the protocol allows for an authentication and authorization procedure, where necessary. \\ \cline{2-4} & 2 & metadata are accessible, even when the data are no longer available. \\ \hline \multirow{4}{*}{I} & 1 & metadata use a formal, accessible, shared, and broadly applicable language to facilitate machine readability and data exchange. \\ \cline{2-4} & 2 & metadata use vocabularies that follow FAIR principles. \\ \cline{2-4} & 3 & metadata include qualified references to other (meta)data. \\ \hline \multirow{4}{*}{R} & 1 & metadata are richly described with a plurality of accurate and relevant attributes. \\ \cline{2-4} & 1.1 & metadata are released with a clear, and accessible data usage license. \\ \cline{2-4} & 1.2 & metadata are associated with detailed provenance. \\ \cline{2-4} & 1.3 & metadata meet domain-relevant community standards. \\ \hline \end{tabular} \end{table} Table 1: FAIR data principles
These principles focus on machine actionability with minimal to no human intervention. FAIR principles revolve around three main components (i.e., the digital artifact, the metadata about the digital artifact, and the infrastructure). The FAIR guidelines emphasize automated discovery (Findability) of the digital artifact (mainly data). Once discovered, one should have a clear idea of how these artifacts can be accessed, including authentication and authorization. Metadata should be well defined to assist reusability. Data is more productive if its accessibility, interoperability, and reusability details are clearly documented in its metadata. The contributions of this article are as follows: 1. We have extended FAIR principles beyond data, so that these could be applied to methods, algorithms and software artifacts. 2. We have presented GA as a use case to demonstrate the applicability of the proposed FAIR principles for algorithms. 3. We have proposed specialized metadata for GA to ensure FAIR practice using a lightweight RDF format. 4. We demonstrate the application of the proposed principles on a Python-based GA code 20 and publish its associated metadata 21 through Zenodo. Footnote 20: [https://doi.org/10.5281/zenodo.7096663](https://doi.org/10.5281/zenodo.7096663) Footnote 21: [https://doi.org/10.5281/zenodo.7095155](https://doi.org/10.5281/zenodo.7095155) The rest of the article is organized as follows. Section 2 highlights the preliminaries of GA and the FAIR principles and points out the challenges currently being faced by the research community. In Section 3 we explore the relevant literature, summarize recent developments on FAIR, and highlight the challenges of fostering the FAIR culture. Section 4 covers the FAIR common and exclusive principles for algorithms and GA, while Section 5 presents the metadata of GA. The mapping of the FAIR Algorithms principles on GA is highlighted in Section 6. Section 7 presents the conclusion and future guidelines.
## 2 Preliminaries ### Genetic Algorithm (GA) GA is an evolutionary algorithm that has gained much importance in the last few decades due to its simplicity and effectiveness for complex optimization problems. GA is a directed randomization technique based on Charles Darwin's theory of "Natural's Selection" [1; 2]. Randomization helps GA to avoid local optima while the directed approach helps to converge to an optimal solution. GA uses stochastic operators (i.e. crossover and mutation) that helps to explore the search space and exploit the solutions respectively. GA starts by initializing a population of candidate solutions. Each candidate solution represents a string of feature/decision variables. The population is evolved by applying GA operators on the candidate solutions. The fitness of the candidate solution is evaluated using a fitness function that is mainly problem dependent. The termination criteria is based on the maximum number of generations, the maximum amount of time, or the specified convergence criteria. Different variants of GA (i.e Sequential GA, Parallel GA and Distributed GA) are briefly explained in Table 2. GA has different population initialization methods (i.e Random, Feasible Individuals, and Random and Greedy) as explained in Table 3. Moreover GA has also different population structures and how individual solutions communicate with each other as shown in Table 4. While working with GA, researchers must carefully select and specify essential parameters like population initialization, population structure, encoding scheme, selection criteria, crossover technique, crossover rate, mutation rate, mutation technique, and replacement criteria as shown in detail in Figure 1. The suggested details helps the researchers to express GA metadata more appropriately and use the existing GA techniques to reproduce the results effectively. We have also performed a limited survey to support our claim that most of the GA algorithms are not FAIR. We only selected top \begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline S.NO & GA Variant & GA Variant Detail & Reference \\ \hline 1 & Sequential GA & It starts with a single population of solutions and evolves it over time by applying GA operators. The process continues until the desired convergence, or required generations are reached. & [1, 2] \\ \hline 2 & Parallel GA & The initial population is divided into subpopulations. Multiple GA operations are performed in parallel like fitness evaluation, selection, crossover, and mutation. & [1, 2] \\ \hline 3 & Distributed GA & In this variant, dimensions of individuals or population are distributed. For dimension distribution multi-agent and coevolution methods are used. In population distribution Island, Hierarchical, Master slave, Cellular, and Pool model are used. & [18, 19, 20, 21, 22, 23, 24, 25, 26, 27] \\ \hline \end{tabular} \end{table} Table 2: Different variants of GA 50 articles against the keyword search22 "Genetic Algorithm" from Google Scholar, published in year 2021. From 50 articles, only 03 articles [39, 40, 41] mention the source code 23, 24, 25 of their proposed technique. Moreover most of the GA-based code repositories available on Github 123456789910011121314,14, do not provide the required hyper-parameter settings, configuration parameters, and metadata details. 
We reiterate here that the purpose of this survey is neither to perform a exhaustive overview of Genetic Algorithm, \begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline S.NO & Population Structure & Description & Reference \\ \hline 1 & Conventional GA & A combined population pool where each individual can interact with any other individual. & [35] \\ \hline 2 & Island Model & The initial population is divided into multiple subpopulations/islands. On each island, GA operators work independently. & [27] \\ \hline 3 & Cellular Model & Each individual can interact within a defined small neighborhood, and GA operations are applied to them. & [26] \\ \hline 4 & Terrain-Based & Parameters are available across the population. At each generation, an individual can interact with the best individual in a close neighborhood. & [36] \\ \hline 5 & Spatially-Dispersed & Once the first individual/parent is selected, the second individual is chosen based on its spatial coordinates visibility from the first individual/parent. & [37] \\ \hline 6 & Multilevel Cooperative & The population is divided into multiple groups. Offsprings in sub populations evolve and are updated by replacing weak individuals. & [38] \\ \hline \end{tabular} \end{table} Table 4: Population structures in different variants of GA \begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline S.NO & Population Initialization & Initialization details & Reference \\ \hline 1 & Random & The initial population is randomly selected without any heuristic and constraint. & [28, 29, 30] \\ \hline 2 & Feasible Individuals & The initial population contains selected/possible individuals as an initial population set. & [31, 32] \\ \hline 3 & Random and Greedy & The initial population is based on a random and greedy mixed approach. & [33, 34] \\ \hline \end{tabular} \end{table} Table 3: Population initialization methods in GA Figure 1: Detailed metadata parameters to improve the reusability and reproducibility of GA. nor to provide a thorough understanding of the algorithm. Rather it is aimed to merely highlight the main challenges in findability and reproducibility of GA research software by selecting a sample. Another challenge to the findability is multiple naming conventions of GA and its variants available in literature [40; 41]. Researchers may intermix multiple conventions leading to poor findability. ### Fair The journey of making data FAIR (Findable, Accessible, Interoperable, Reusable) data started from the guidelines initially proposed by Wilkinson et al. [6] enlisted in Table 1. Research communities were suggested to follow the guidelines and agree upon common data and metadata storage framework. FAIR not only helps the researchers to get the maximum potential (by attracting new research partnerships and increasing citations/visibility) from the data set. It also helps in to improve the reusability and reproducibility of the data by building novel resources and tools by taking maximum benefits from the existing datasets. FAIRification of data is based on four key components i.e Findable, Accessible, Interoperable, and Reusable. Findability suggests that the necessary practices that should be carried out to make your data and metadata easily findable by both man and machines. In order to ensure data availability, it should be placed in such a way that every element of data and metadata is accessible using a unique and persistent URI. This will help to avoid the ambiguity about the elements. 
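As a small, hypothetical illustration of this point, the snippet below writes a minimal schema.org-style metadata record whose persistent identifiers are the DOIs published with this article, so that the metadata unambiguously points at the artifact it describes; the field choices are ours and not part of the original guidelines.

```
import json

# Minimal, illustrative metadata record; field names follow schema.org conventions,
# and the identifiers are the DOIs published with this article.
record = {
    "@context": "https://schema.org",
    "@type": "SoftwareSourceCode",
    "name": "Simple Genetic Algorithm",
    "identifier": "https://doi.org/10.5281/zenodo.7096663",  # persistent ID of the code
    "subjectOf": "https://doi.org/10.5281/zenodo.7095155",   # persistent ID of its metadata record
    "programmingLanguage": "Python",
}

with open("ga_metadata.json", "w") as fh:
    json.dump(record, fh, indent=2)
```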
Search engines are the key source of information nowadays. In order to make data/metadata findable by the search engines, it should not only be placed on index-able sources so that search engine bots may read and index them in their SERP (Search Engine Result Pages) but also metadata should be rich enough so that it may contain the necessary information/keywords based on which you want to be found in search engines. Including data identifiers in metadata file will help increase clarification about the data for which metadata is being defined. Accessibility doesn't state that data should be freely accessible to everyone but rather there should be a clear machine readable guidelines available that states who can access the data. Data accessibility using standard communications protocols (e.g. HTTP, FTP, SMTP) not only ensures data reusability but also help to increase this if the protocols are available free of cost. Data storage carries a cost and may become unavailable overtime. Metadata should even remain available in case of broken links to dataset so that if someone interested in the dataset then he/she may track the publisher or author of the dataset using the metadata. To ensure the interoperability, a formal, shared, and broadly applicable, metadata format is required, that follows standard vocabularies and qualified references to other metadata. Interoperability helps machine to understand exchanged data format from other machines with the help of standardized ontology and vocabularies. In reusability, metadata is enriched with detailed attributes about the data along with a clear data usage license. Complete details and conditions under which data was generated through experiments/sensors/machine/protocol and citation details is mentioned in metadata. In this article we have adapted and extended FAIR data principles for algorithms(FAIR-Algorithms). Later we have presented a usecase by applying FAIR-Algorithms using GA. ## 3 Related Work Researchers usually follow different steps in their research process ranging from problem analysis, literature review, data collection, research software development/usage, experiments, and result analysis. By following FAIR guidelines, researchers can easily reuse published data, research software, and results. It also helps to increase the focus on extending the existing work and achieving their research goals at rapid pace. Recently, efforts have been made by research community to support the FAIR research culture. Recent development in academia [6], Life Science [7], FAIR ontologies [9], SmartAPI [42], Immune Epitope Database [43] and Health Care [44], have been made to make data, webAPI, ontologies and biological databases FAIR. Academia and publishing industry must play an active and vibrant role in making such efforts to publish data, research soft ware, and research manuscripts according to FAIR principles. In this regard few journals (JORS, IPOL, JOSS, eLife, and science direct) have already started to review the research software during the peer review process and publish it along with the article [45]. Also Association for Computing Machinery (ACM), has started to review research software, datasets, experiments, and other related files, along with research manuscripts [46]. ACM has introduced the policy badges to review the research articles for results replication i.e. (same results of an article) or results reproduction (results generated independently) mechanism. 
ACM also encouraged reproducibility, reusability, and replicability in which the experimental setup of software can be used or extended by the same or a different team [47]. Moreover few recommendations for executing FAIR practices including (training, education, fundraising, incentives, rewards, recognition, development, and monitoring of policies) were suggested by Hong et.al. [48]. Also FAIR has far reaching benefits for different domains. In agriculture, Basharat et.al [49] discussed role of FAIR data in agriculture industry. They have applied FAIR guidelines to ensure data findabilty and reusabilty of agriculture data, used for decision making and agriculture performance. For making a molecular plant data FAIR a check list is compiled by Reiser et.al [50], it includes placing data at a stable repository, using unique identifiers for genes and its products, by using standard file formats, reproducible computational technique by mentioning ( software versions, raw data files, citing data source, parameter settings). Different industries have started their projects of making data FAIR [51]. A project on making life science data FAIR FAIRplus 26 is in progress. Footnote 26: [https://fairplus-project.eu/](https://fairplus-project.eu/) FAIR guidelines are independent of tools, technologies, and implementation platforms [6]. There are few common and some exclusive details for FAIR data and FAIR research software's suggested by Lamprecht et al. [52]. They have adopted some of the existing FAIR principles where they fit in for research software's and modify/extend the remaining one. The list of recommendations for FAIR research software based on existing FAIR guidelines for data is pro posed by Hasselbring et al. [13]. Software development community also suggested that FAIR research software principles should be separately defined [45]. There are different challenges (i.e. software documentation, accessibility, licensing issues, software dependencies, environment, quality control, and software sustainability) in making research software findable and reusable [53]. All these efforts are made in recent years for making scientific data, software, and related objects FAIR. Different digital artifact repositories (i.e. Github27, GitLab28, Zenodo29,SourceForge30,and Bitbucket31) are used to store and publish the data and software. The findability of data and research software is enforced by the relevant conference, workshop, or journal at the time of publishing of manuscript. Similarly, the accessibility of data and research software has its own challenges. Data and software usage license and copyrights details should be clearly stated and permission of access should be granted to the research community where admissible. Reproducibility of research software is also a challenge and this is due to the lack of availability of software code, its Persistent IDs, and reproducing the complete software environment as highlighted by Alliez et al.[54]. To improve the reusability by the research community for the photovoltaic time series data, a set of recommendation's were suggested by Arafath et.al. [55]. It includes clearly defined dataset, accessibility and availability of metadata in human and machine readable format i.e JSON-LD. 
Footnote 27: [https://guides.github.com/features/pages/,Accessed](https://guides.github.com/features/pages/,Accessed) Oct 8,2021 Footnote 28: [https://gitlab.com/gitlab-org/gitlab](https://gitlab.com/gitlab-org/gitlab), Accessed Oct 8,2021 Footnote 29: [https://zenodo.org/,Accessed](https://zenodo.org/,Accessed) Oct 8,2021 Footnote 30: [https://sourceforge.net/,Accessed](https://sourceforge.net/,Accessed) Oct 8,2021 Footnote 31: [https://bitbucket.org/,Accessed](https://bitbucket.org/,Accessed) Oct 8,2021 Another challenge related to research software is not a well-defined attribution mechanism for the developers of the research community. This results in less focus on quality research software but on research manuscripts [52]. Current citation mechanisms, impact factor policies, and promotion/hiring in universities are research publication centric. Preliminary work on software citation principles was highlighted by Smith et al.[56]. They have identified, importance, credit, attribution, unique identification, persistence, accessibility, and specificity as the major software citation principles. Format of citing software, metadata of software for citation, criteria for peer review of the software, and acceptance of software as a digital product were the few challenges related to software citations that were highlighted by Niemeyer et al. [57]. The research community, journals, conferences, workshops, and research and project funding agencies have to initiate such reforms that help in making research data, software, and related research objects FAIR. All the relevant stakeholders have to develop/encourage practices, like citation incentives and scholarly attribution for research software developers/data analysts/researchers for following FAIR principles. To the best of our knowledge, we are unable to find any application of FAIR guidelines to algorithms, so in this article, we have extended FAIR data guidelines to develop FAIR-Algorithms. Moreover to justify the applicability of FAIR-Algorithms we have presented a use case (i.e FAIR-GA) and validated our \(FAIR-Algorithms\) proposed guidelines. ## 4 FAIR-Algorithms: FAIR Principles for Algorithms An algorithm is a set of instructions to complete a specific task. It is usually designed to solve a specialized problem / sub-problem and consists of inputs, tasks, outputs, and parameters settings. Inputs are the data and parameters, while the task is the main description of the work that uses inputs to generate outputs(reports, computational outcome, models). Sharing the data, research software, algorithm and related metadata is required in improving, reusing, or reproducing algorithms with a purpose to improve efficiency, efficacy, or resource utilization. Hence a recent trend of developing FAIR principles for software may be of interest to those researchers who view software as a black box or an atomic entity. This motivated us to work on FAIR guidelines for algorithms. The idea behind FAIR-Algorithms is that if a research is FAIR then not only others should be able to reproduce data and software but also should be able to reproduce, reuse, extend, or build on top of the algorithm. We have enlisted FAIR principles for algorithms in Table 5 and mentioned the action (i.e. adapted, and extended) against each principle. Adapted is used where FAIR data principle is used for algorithm with out any modification and extended is used when existing FAIR principles is modified to cover the algorithm and its related details. 
FAIR-Algorithms guidelines are presented in Table 5. ### Findability **F1:- Algorithm and its metadata is assigned a globally unique and persistent identifier.** Existing digital artifact repositories (i.e. Github32, GitLab33, Zen \begin{table} \begin{tabular}{|p{42.7pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline FAIR & ID & FAIR for Algorithms & Action \\ \hline \multirow{4}{*}{F} & 1 & Algorithm and its metadata is assigned a globally unique and persistent identifier. & extended \\ \cline{2-4} & 2 & Algorithms are described with rich metadata. & adapted \\ \cline{2-4} & 3 & Algorithm metadata clearly and explicitly include the identifier of the algorithm it describes. & extended \\ \cline{2-4} & 4 & Algorithm metadata are indexed on a searchable repository. & extended \\ \hline \multirow{4}{*}{A} & 1 & Algorithm metadata are retrievable by their identifier using a standardized communications protocol. & adapted \\ \cline{2-4} & 1.1 & The protocol is open, free, and universally implementable. & adapted \\ \cline{2-4} & 1.2 & The protocol allows free and easy access to algorithms’ metadata and details. & extended \\ \cline{2-4} & 2 & Algorithm-metadata remains available, even when the algorithms are modified. & extended \\ \hline \multirow{4}{*}{I} & 1 & Algorithm metadata uses a formal, accessible, shared, and broadly applicable language for knowledge representation. & adapted \\ \cline{2-4} & 2 & Metadata uses vocabulary that follows FAIR principles. & adapted \\ \cline{2-4} & 3 & Algorithm metadata include qualified references to other metadata. & adapted \\ \hline \multirow{4}{*}{R} & 1 & Algorithm metadata is richly described with a plurality of accurate and relevant attributes. & adapted \\ \cline{2-4} & 1.1 & Usually, algorithm metadata is freely accessible. However, if required, algorithm metadata is released with a clear and accessible usage license. & extended \\ \cline{2-4} & 1.2 & Algorithm metadata includes detailed provenance. It includes its basics, execution and performance attributes & extended \\ \cline{2-4} & 1.3 & Algorithm metadata meet domain-relevant community standards. & adapted \\ \hline \end{tabular} \end{table} Table 5: FAIR principles for Algorithms odo34, SourceForge35, Bitbucket36) do not assign a unique identifier to the algorithm, rather to a software repository that may contain one or more algorithms. Therefore it is recommended that there should be a unique metadata file related to each algorithm. Also a unique identifier should be assigned to each metadata file and algorithm. An algorithm has a unique identifier, If its sub algorithms have no unique ID (UID), and the owner wants to assign UID, he can opt to do so. On the other hand, if an algorithms is used in another algorithms it UID will be used, and the new/parent algorithms will be assigned a new UID. We do not imagine allocation of UIDs retrospectively. Moreover different implementation of same algorithms have different UIds and in case of updates in an implementation its versioning control should also be maintained. Footnote 34: [https://zenodo.org/](https://zenodo.org/) Footnote 35: [https://sourceforge.net/](https://sourceforge.net/) Footnote 36: [https://bitbucket.org/](https://bitbucket.org/) **F2:- Algorithms are described with rich metadata** The metadata for an algorithm includes its input, tasks/steps, output, implementation details, parameter settings, execution environment, and execution duration. 
We have extended the MEX vocabulary [58] to define the metadata for algorithms as given in Table 6. Algorithm's metadata file also includes the UID of the algorithm in it. **F3:- Algorithm metadata clearly and explicitly include the identifier of the algorithm it describes.** Currently there is no practice of publishing algorithm's metadata. However we have recommended that algorithm's metadata should clearly and explicitly point to the algorithm that is being described, these identifiers include the identifier, author, usage information, citation, and other related properties as suggested in metadata for algorithms in Table 6. **F4:- Algorithm metadata are indexed on a searchable repository** Publishing algorithm's metadata is not in practice and hence not indexed. Therefore, we have suggested that algorithm's metadata should be placed on digital artifact repositories (i.e Zenodo, Github, GitLab, and BitBucket) that quickly index the published resources and make them searchable. ### Accessability **A1:- Algorithm metadata are retrievable by their identifier using a standardized Communications protocol.** Existing artifact repositories (i.e Zenodo, Github, GitLab, and BitBucket) are accessible using standard communication protocols like http/https. So, metadata placed on these repositories is also accessible. Hence, we recommend to use these for algorithms. **A1.1:- The protocol is open, free, and universally implementable.** Generally, the https protocols are open, free, and universally used. The algorithms that are published on above mentioned digital artifact repositories are using these free access protocols. In case of private publishing of Algorithms or its metadata, the metadata must be made accessible through universally acceptable protocols. **A1.2:- The protocol allows free and easy access to algorithms' metadata and details.** Mostly algorithms don't have authentication and authorization issues. In case of privacy related issues in publishing a particular algorithm, the metadata should still remain accessible, even after following an authentication and authorization procedure. However, the authentication and authorization procedure may be adapted where necessary. **A2:- Algorithm-metadata remains available, even when the algorithms are modified.** New variants of algorithms are proposed over time and older variants may reduce their public visibility. Therefore, metadata for the earlier versions of an algorithm should remain accessible and available even after its new version or extension has become more prevalent. ### Interoperability **I1:- Algorithm metadata uses a formal, accessible, shared, and broadly applicable language for knowledge representation** Current artifacts repositories support XML37, JSON38, JSON-LD39, and Rest APIs40 as broadly applicable languages. Therefore we recommend to use above mentioned broadly applicable formats for algorithm's metadata. 
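As a concrete illustration of A1 and I1, the sketch below retrieves the metadata record published with this article over HTTPS. The record identifier (taken from the DOI 10.5281/zenodo.7095155) and the shape of the JSON response are assumptions about Zenodo's public REST API and may differ in practice.

```
import requests

# Assumed mapping: DOI 10.5281/zenodo.7095155 resolves to Zenodo record 7095155.
RECORD_ID = "7095155"
API_URL = f"https://zenodo.org/api/records/{RECORD_ID}"

# A1: metadata is retrievable by its identifier over a standard, open protocol (HTTPS).
response = requests.get(API_URL, timeout=30)
response.raise_for_status()

# I1: the payload is JSON, a formal and broadly applicable representation.
record = response.json()
print("Top-level fields:", sorted(record.keys()))
print("Title:", record.get("metadata", {}).get("title", "<not present>"))
```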
Footnote 37: [https://www.w3.org/XML/](https://www.w3.org/XML/) Footnote 38: [https://www.json.org/](https://www.json.org/) Footnote 39: [https://json-ld.org/](https://json-ld.org/) Footnote 40: [https://restfulapi.net/](https://restfulapi.net/) Footnote 41: [https://github.com/mexplatform/mex-vocabulary/blob/master/vocabulary/](https://github.com/mexplatform/mex-vocabulary/blob/master/vocabulary/) Footnote 42: [https://raw.githubusercontent.com/mexplatform/mex-vocabulary/master/vocabulary/mexalgo.ttl](https://raw.githubusercontent.com/mexplatform/mex-vocabulary/master/vocabulary/mexalgo.ttl) **I2:- Metadata uses vocabularies that follow FAIR principles.** Fairification of vocabularies to define algorithm's metadata have been under explored. In this regards, we have suggested to use and extend (where necessary) MEX Vocabulary (comprising \(mexcore\)41, \(mexalgo\)42, \(mexperf\)43) [58] for algorithm metadata as it maximally satisfies FAIR principles. \(mexcore:Context\), \(mexalgo:AlgorithmClass\), \(mexcore:model\), \(mexalgo:AlgorithmParameter\), and \(mexalgo:Implementation\) are few of the main classes of the vocabulary. Footnote 43: [https://github.com/mexplatform/mex-vocabulary/blob/master/vocabulary/mexperf.ttl](https://github.com/mexplatform/mex-vocabulary/blob/master/vocabulary/mexperf.ttl) **I3:- Algorithm metadata include qualified references to other metadata.** Currently there is no practice of publishing algorithms metadata. Once it is started, focus on referencing other related metadata would also be in practice. It will help in increasing algorithm reusability. ### Reusability **R1:- Algorithm metadata is richly described with a plurality of accurate and relevant attributes.** Rich metadata is helpful in understanding an algorithm. We have combined metadata specifications in detail in Table 6. It includes basic attributes from schema44. Algorithm, parameters, learning methods, tool, class from \(mexalgo\). Performance measure and user defined measures from \(mexperf\). All these details \(mexcore\), \(mexalgo\) and \(mexperf\) are taken from mex vocabulary [58]. Algorithm metadata is richly described by following these detailed set of attributes. **R1.1:- Usually, algorithm metadata is freely accessible. However, if required, algorithm metadata is released with a clear and accessible usage license.** Copyright protects the creative work or expression of ideas but not the ideas themselves. An algorithm is an abstract idea and not subject to licensing [59]. However, the code of the algorithm should have licensing information. An algorithm code is part of the research software, so the algorithm's usage license is as per the discretion of the research software team. **R1.2:- Algorithm metadata includes detailed provenance. It includes its basics, execution and performance attributes** Currently metadata specification for algorithm is not in practice. We have specified metadata specifications in detail in Table 6. These metadata specifications helps in making algorithm FAIR. **R1.3:- Algorithm metadata meet domain-relevant community standards.** To the best of our knowledge, there doesn't exist any algorithm relevant community standards. Therefore it is suggested to follow the guidelines from algorithm related vocabularies and ontologies. ## 5 Metadata for Genetic Algorithm Rich metadata of a digital artifact specified using a well known data format plays an important role in machine readability. 
Also it plays a vital role in reusability and reproducibility using well defined hyper-parameter values. Algorithm metadata based on mex-vocabulary covers all general-purpose attributes(like tools, dataset, feature etc) that are common among algorithms. Defining GA specific attributes (like population size, fitness function, crossover rate) is not workable using mex-vocabulary and demands an extension of attributes in mex-vocabulary. In this section we have have suggested specific metadata \({}^{21}\) for GA as listed in table 7. Listing 1 shows experiment metadata of a Python based GA for solving one Figure 3: Algorithm related parameter specification for GA Figure 2: GA iteration cycle starting from the problem specification till the performance measures. max search optimization problem 45. We have represented this meta-data using minimal classes and properties from prov-o46, mex-core47, mex-algo48, and mex-perf49. Moreover we also suggest to improve upon MEX vocabulary and propose \(evo\) vocabulary. A hierarchical representation of these parameters has been shown in Figure 1. Few main terms/entities of \(evo\) vocabulary are listed below. Footnote 45: [https://colab.research.google.com/drive/1t_kUu613a4F1sP6oK92CHZwqANsTMijP?usp=sharing](https://colab.research.google.com/drive/1t_kUu613a4F1sP6oK92CHZwqANsTMijP?usp=sharing) Footnote 46: [https://www.w3.org/TR/prov-o/](https://www.w3.org/TR/prov-o/) Footnote 47: [http://mex.aksw.org/mex-core#](http://mex.aksw.org/mex-core#) Footnote 48: [http://mex.aksw.org/mex-algo#](http://mex.aksw.org/mex-algo#) Footnote 49: [http://mex.aksw.org/mex-perf#](http://mex.aksw.org/mex-perf#) Figure 4: Performance related parameter specification for GA 8. \(evo:CrossoverProbability\) 9. \(evo:Mutation\) 10. \(evo:MutationProbability\) 11. \(evo:Replacement\) 12. \(evo:Elitism\) 13. \(evo:Termination\) 14. \(evo:Generations\) 15. \(evo:Time\) 16. \(evo:Fitness\) Figure 2 shows the problem specific parameters for GA. The detailed parameters includes the problem (i.e. scheduling, searching, or optimization), GA, population (initialized using dataset, random initializer, or heuristic). Algorithm related parameters for GA and performance related parameters for GA are shown in Figure 3 and 4 respectively. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Property & Description & Type \\ \hline schema:identifier & DOI of algorithm & schema:URL \\ \hline schema:url & URL of software repository & schema:URL \\ \hline schema:name & Name of the algorithm & schema:Text \\ \hline schema:description & Algorithm description that discusses Algorithm purpose and application areas. & schema:Text \\ \hline schema:author & Person / organization that has created the code and holds its intellectual copyrights. & schema:Organization schema:Person \\ \hline schema:usageInfo & Limitation of the algorithm. & schema:Text \\ \hline schema:keywords & Keywords and tags that describe the key terms to define a software. & schema:Text \\ \hline schema:citation & Source code attribution (i.e. link to the article where the particular algorithm has been discussed). & schema:Text \\ \hline schema:license & It helps to protect the intellectual property by defining the guidelines for use and distribution of software. 
& schema:URL \\ \hline mexcore:ApplicationContext & Basic information about algorithm that may provide a high-level overview (including goals, aims, objectives, and scope) of the software & mexcore:ApplicationContext \{ :trustyURI rdfs:Literal \\ \hline mexcore:Context & The problem for which algorithm has been employed e.g data clustering, neural network optimization, and protein folding. & mexcore:Context \{ prov:wasAttributedTo mexcore:ApplicationContext \} \\ \hline mexcore:Experiment & The class represents some basic information about the experiment. & mexcore:Experiment \{ \\ \hline mexcore:Execution & A single run of an algorithm-based program. Each run is based on specific parameter specification and hardware configurations. & mexcore:Execution \{ mexcore:endsAtPosition xsd:string :targetClass xsd:string prov:wasInformedBy mexcore:ExperimentConfiguration } \\ \hline \end{tabular} \end{table} Table 6: Metadata specification for Algorithm. \begin{tabular}{|p{85.4pt}|p{142.3pt}|p{142.3pt}|} \multicolumn{3}{c}{Metadata specification for Algorithm (Continued).} \\ \hline mexcore:ExperimentConfiguration & represents execution detail (on different algorithm configuration and hardware environments) of an experiment. & mexcore: ExperimentConfiguration \{ \\ \hline mexcore:HardwareConfiguration & Detail about hardware configuration & mexcore: HardwareConfiguration \{ \\ & & mexcore:cpu xsd:string mexcore:cpuCache xsd:String mexcore:hdfType xsd:string mexcore:memory xsd:string \} \\ \hline mexcore:DataSet & Initial population/ dataset for algorithm experiments & owl:Class \\ \hline mexcore:Example & An individual solution or a chromosome & mexcore:Example \{ \\ & & mexcore:datasetColumn rdfs:Literal mexcore:datasetRow rdfs:Literal \\ & & \\ \hline mexcore:ExampleCollection & ExampleCollection is a collection of chromosomes and represents a population at a particular generation. & mexcore:ExampleCollection \{ \\ & & mexcore:startsAt rdfs:Literal mexcore::nhasPhase mexcore:Phase \\ \hline mexalgo:LearningMethod & This defines the learning approach of the algorithm i.e. evolution in case of genetic algorithm. & mexalgo:LearningMethod \{ \\ \hline mexalgo:LearningProblem & GA is a metaheuristic based algorithm & mexalgo:LearningProblem \{ \\ & & :isLearningProblemOf :Algorithm \\ \hline mexalgo:AlgorithmClass & The algorithm class (e.g.:GeneticAlgorithm) & mexalgo:AlgorithmClass \{ \\ & & :isAlgorithmClassOf :Algorithm \\ \hline mexalgo:AlgorithmParameter & The representation of GA parameter with its associated values (e.g. encoding, population, crossover scheme) & mexalgo:AlgorithmParameter \{} \\ \hline mexalgo:Tool & It describes the libraries for GA (e.g.: PyGAD, GAlib, GeneAI). & mexalgo:Implementation \{ \} \\ \hline mexperf:PerformanceMeasure & It describes the evaluation measure to check the performance of GA (e.g.: Fitness function). & mexperf:PerformanceMeasure \{ \\ \hline mexperf:UserDefinedMeasure & This property is used to mention domain relevant metrics. & mexperf:UserDefinedMeasure \{ \\ & & \\ \hline \end{tabular} \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Property & Description & Type \\ \hline evo:Initialization & Their are different initilization methods (i.e random, heuristic and dataset). & schema: Text \\ \hline evo:Encoding & GA works on the encoding of the solutions rather than solutions. These may include binary encoding, value encoding, and permutation encoding. 
& schema: Text \\ \hline evo:Bound & It represents the upper and lower bound values for each dimension in the state space. & schema: Text \\ \hline evo:PopulationSize & Population size is generally dependent on the problem domain and may be decided by hit and trial method. It has a significant role to speed up the convergence. & schema: Number \\ \hline evo:FitnessMeasure & It is used to represent the final fitness value. & schema: Number \\ \hline evo:TimeMeasure & This represents the clock time used by the GA & schema: Duration \\ \hline evo:GenerationMeasure & It represents the total generation consumed during the execution. & schema: Number \\ \hline evo:Evolution & It represents the learning method used by the GA & schema: Text \\ \hline evo:Crossover & Parent chromosomes recombine to create new offsprings. Crossover property is used to specify the crossover operator (i.e. single point crossover, multi-point crossover, uniform crossover, or three parent crossover). & schema: Text \\ \hline evo:CrossoverRate & It is used to to decide the number of parents involved in the crossover process & schema: Text \\ \hline evo:Mutation & Mutation operator helps to avoid getting stuck in local optima by maintaining the population diversity. Popular mutation includes bit flip, inversion, scramble, and random resetting. & schema: Text \\ \hline evo:MutationRate & It is the frequency measure by which value of randomly selected genes would be modified & schema: Text \\ \hline evo:Selection & Selection scheme specifies the criteria through which parent chromosome will be selected from the current generation to produce offsprings using crossover and mutation. This may include rank selection, roulette wheel selection, and tournament selection. & schema: Text \\ \hline evo:PopulationUpdate & It can be steady state (i.e. off springs would be added to the population as they are created) or generational (i.e. off springs would be added to the population after the generation) & schema: Text \\ \hline evo:Replacement & Replacement is the scheme/strategy (weak parent, both parent, random parent) through which parent chromosomes will be replaced by newly created offsprings. & schema: Text \\ \hline evo:Termination & Termination specifies the criteria (i.e. generations, time, fitness, convergence) that stops the execution of genetic algorithm. & schema: Text \\ \hline evo:MaxGenerations & It specify the maximum number of iterations after which genetic algorithm will be terminated & schema: Number \\ \hline evo:FitnessFunc & It is used to mention the fitness function name (e.g. Sphere, Ackley or Griewank). Fitness function takes a solution as input and evaluates how close a given solution is to the optimal solution. & schema: Text \\ \hline evo:FitnessFuncDef & This class would be used to mention the fitness formula or fitness function definition. & schema: Text \\ \hline evo:Time & Maximum amount of clock time after which we terminate the execution of genetic algorithm. & schema: Duration \\ \hline \end{tabular} \end{table} Table 7: Proposed \(evo\) vocabulary for GA ## 6 \(Fair-Ga\): FAIR Genetic Algorithm (A usecase of FAIR-Algorithms) Genetic algorithm is a popular optimization algorithm that is very effective in solving complex optimization problems. It initially starts with a population of encoded solutions, evolve these solutions using stochastic reproduction operators (i.e. 
crossover & mutation), evaluate solutions using fitness function, eliminate less fit solutions, and proceeds with fittest solutions to next generations until termination criteria is met. Although GA and its variants have proved their significance in many optimization problems but lack of focus on reproducibility limits reusability of these techniques. In this section we have presented FAIR-GA (i.e. a use case of FAIR-Algorithms). This will not only help in supporting FAIR research culture but also helps researchers to increase their citation index and reusability of their proposed GA variants. Also, we have suggested guidelines that should be performed while mapping FAIR-Algorithms on GA (i.e. \(FAIR-GA\)). These steps relate to the application of \(FAIR-Algorithms\) on GA with some customization in F2, I2, R1, and R1.3 as discussed below. **F2:- GA are described with rich metadata** Taking care of FAIR standards, we suggest that GA metadata should contain detailed information about the input, output, algorithm name, algorithm task, parameters, parameters settings, citation detail, hardware details, and software dependencies. Algorithms metadata as listed in Table 6 are not inlined with the vocabulary requirements related to GA. GA has more specialized attributes (like population initialization, population size, solution encoding, generations, and termination criteria). Therefore proposed \(evo\) vocabulary illustrated in Table 7 has to be collectively used with metadata specifications for algorithm described in Table 6 to improve the reusability and reproducibility of GA. **I2:- GA software use vocabularies that follow FAIR principles** For Fair-GA, we have proposed \(evo\) vocabulary to support GA. The \(evo\) vocabulary comprises of twenty properties as illustrated in table 7 along with the description and type of each property. **R1:- Metadata are richly described with a plurality of accu rate and relevant attributes GA metadata must include accurate and relevant attributes. We have proposed extended metadata for GA as shown in Figure 1 and listed in \(evo\) vocabulary given in table 7. This is a set of relevant and related attributes and suggested to be the part of GA techniques. **R1.3:- Metadata meet domain-relevant community standards** Our proposed \(evo\) vocabulary is developed by looking into different state of the art GA variants. Detailed metadata has been suggested by taking care of all GA related hyperparameters configurations. However, we are unable to find any domain-relevant community standard for GA. In Listing 1 we have applied, proposed \(evo\) vocabulary along with other suggested FAIR-Algorithms guidelines for representing the experiments of GA one max search optimization problem. 
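Before the metadata itself, a minimal Python sketch of the GA configuration that Listing 1 describes is given for reference. It mirrors the recorded hyperparameters (bit-string encoding, population size 100, 20 bits, one-point crossover with rate 0.9, bit-flip mutation with rate 1/n_bits, tournament selection, generational replacement, at most 100 generations, and the one-max fitness), but it is an illustrative reconstruction rather than the published implementation.

```
import random

def onemax(x):
    # Fitness of a bit string; negated so that lower is better (minimization).
    return -sum(x)

def tournament(pop, scores, k=3):
    # Return the best of k randomly chosen individuals.
    best = random.randrange(len(pop))
    for idx in random.sample(range(len(pop)), k - 1):
        if scores[idx] < scores[best]:
            best = idx
    return pop[best]

def crossover(p1, p2, rate):
    # One-point crossover, applied with probability `rate`.
    c1, c2 = p1[:], p2[:]
    if random.random() < rate:
        point = random.randint(1, len(p1) - 2)
        c1 = p1[:point] + p2[point:]
        c2 = p2[:point] + p1[point:]
    return c1, c2

def mutate(bits, rate):
    # Bit-flip mutation: each gene is flipped independently with probability `rate`.
    return [1 - b if random.random() < rate else b for b in bits]

def genetic_algorithm(n_bits=20, pop_size=100, generations=100,
                      crossover_rate=0.9, mutation_rate=None):
    mutation_rate = mutation_rate or 1.0 / float(n_bits)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best, best_score = pop[0], onemax(pop[0])
    for _ in range(generations):
        scores = [onemax(ind) for ind in pop]
        for ind, score in zip(pop, scores):
            if score < best_score:
                best, best_score = ind, score
        # Generational update: both parents are replaced by their offspring.
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(pop, scores), tournament(pop, scores)
            for child in crossover(p1, p2, crossover_rate):
                children.append(mutate(child, mutation_rate))
        pop = children[:pop_size]
    return best, best_score

if __name__ == "__main__":
    solution, fitness = genetic_algorithm()
    print("Best fitness:", fitness)  # -20 corresponds to the all-ones string
```

The metadata in Listing 1, reproduced next, records exactly these hyperparameter choices.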
```
{
  "@context": {
    "prov": "http://www.w3.org/ns/prov#",
    "mexperf": "http://mex.aksw.org/mex-perf#",
    "mexcore": "http://mex.aksw.org/mex-core#",
    "mexalgo": "http://mex.aksw.org/mex-algo",
    "evo": "http://mex.aksw.org/evo"
  },
  "@id": "mexperf:ExecutionPerformance",
  "prov:generated": [
    {
      "@id": "evo:FitnessMeasure",
      "evo:hasFitness": "-20"
    },
    {
      "@id": "evo:TimeMeasure",
      "evo:elapsedTime": "3",
      "evo:timeUnit": "nsec"
    },
    {
      "@id": "evo:GenerationMeasure",
      "evo:generationCount": "8"
    }
  ],
  "prov:wasInformedBy": {
    "@id": "mexcore:Execution",
    "prov:wasInformedBy": {
      "@id": "mexcore:ExperimentConfiguration",
      "prov:used": {
        "@id": "mexcore:HardwareConfiguration",
        "mexcore:hardDisk": "108GB",
        "mexcore:memory": "36GB"
      },
      "prov:wasStartedBy": {
        "@id": "mexcore:Experiment",
        "prov:wasAttributedTo": {
          "@id": "mexcore:ApplicationContext"
        }
      },
      "prov:used": {
        "@id": "mexalgo:Algorithm",
        "schema:identifier": "https://doi.org/10.5281/zenodo.7096663",
        "schema:name": "Simple Genetic Algorithm",
        "schema:description": "A Python implementation of a simple genetic algorithm to optimize numerical functions.",
        "schema:author": [
          {
            "@id": "schema:Person",
            "name": "Saad Razzaq",
            "email": "[email protected]"
          },
          {
            "@id": "schema:Person",
            "name": "Fahad Maqbool",
            "email": "[email protected]"
          },
          {
            "@id": "schema:Person",
            "name": "Hajira Jabeen",
            "email": "[email protected]"
          }
        ],
        "schema:keywords": "Evolutionary Optimization; Mutation; Crossover",
        "schema:license": "https://www.gnu.org/licenses/gpl-3.0-standalone.html",
        "mexalgo:hasClass": {
          "@id": "mexalgo:GeneticAlgorithms"
        },
        "mexalgo:hasLearningProblem": {
          "@id": "mexalgo:MetaHeuristic"
        },
        "mexalgo:hasLearningMethod": {
          "@id": "evo:Evolution"
        },
        "mexalgo:hasTool": {
          "@id": "mexalgo:Python"
        },
        "mexalgo:hasHyperParameter": [
          { "@id": "evo:Initialization", "prov:value": "Random" },
          { "@id": "evo:Bound", "prov:value": ["-5", "+5"] },
          { "@id": "evo:Encoding", "prov:value": "Bit-String" },
          { "@id": "evo:PopulationSize", "prov:value": "100" },
          { "@id": "evo:Dimensions", "prov:value": "20" },
          { "@id": "evo:Crossover", "prov:value": "One-point Crossover" },
          { "@id": "evo:CrossoverRate", "prov:value": "0.9" },
          { "@id": "evo:Mutation", "prov:value": "Bit Flip" },
          { "@id": "evo:MutationRate", "prov:value": "1.0 / float(n_bits)" },
          { "@id": "evo:Selection", "prov:value": "Tournament" },
          { "@id": "evo:PopulationUpdate", "prov:value": "Generational" },
          { "@id": "evo:Replacement", "prov:value": "BothParent" },
          { "@id": "evo:Termination", "prov:value": "Generations" },
          { "@id": "evo:MaxGenerations", "prov:value": "100" },
          { "@id": "evo:FitnessFunc", "prov:value": "One-Max" },
          { "@id": "evo:FitnessFuncDef", "prov:value": "def onemax(x): return -sum(x)" }
        ]
      }
    }
  }
}
```
## 7 Conclusion
We have extended FAIR
principles beyond data so that they can be applied to methods, algorithms, and software artifacts. Our focus in this article is to ensure the reproducibility and reusability of algorithms. We have presented \(FAIR-GA\) as a use case to demonstrate the applicability of the proposed principles for algorithms. Additionally, to support \(FAIR-Algorithms\), we propose a metadata schema using a lightweight RDF format, facilitating reproducibility. Finally, we have demonstrated the application of the proposed \(FAIR-GA\) using a Python-based GA code for solving the one-max search optimization problem. Moreover, we have also proposed a specialized vocabulary (i.e., \(evo\)) for GA. In the future, this work can be extended to numerous machine learning algorithms by suggesting specialized vocabularies for making them FAIR.
2305.17744
Heterogeneous Matrix Factorization: When Features Differ by Datasets
In myriad statistical applications, data are collected from related but heterogeneous sources. These sources share some commonalities while containing idiosyncratic characteristics. One of the most fundamental challenges in such scenarios is to recover the shared and source-specific factors. Despite the existence of a few heuristic approaches, a generic algorithm with theoretical guarantees has yet to be established. In this paper, we tackle the problem by proposing a method called Heterogeneous Matrix Factorization (HMF) to separate the shared and unique factors for a class of problems. HMF maintains the orthogonality between the shared and unique factors by leveraging an invariance property in the objective. The algorithm is easy to implement and intrinsically distributed. On the theoretical side, we show that for the squared error loss, HMF converges to optimal solutions that are close to the ground truth. HMF can also be integrated with auto-encoders to learn nonlinear feature mappings. Through a variety of case studies, we showcase HMF's benefits and applicability in video segmentation, time-series feature extraction, and recommender systems.
Naichen Shi, Raed Al Kontar, Salar Fattahi
2023-05-28T14:56:17Z
http://arxiv.org/abs/2305.17744v2
# Heterogeneous Matrix Factorization: When Features Differ by Datasets ###### Abstract In myriad statistical applications, data is collected from related but heterogeneous sources. These sources share some commonalities while containing idiosyncratic characteristics. More specifically, consider the setting where observation matrices from \(N\) sources \(\{\mathbf{M}_{i}\}_{i=1}^{N}\) are generated from a few common and source-specific factors. Is it possible to recover the shared and source-specific factors? We show that under appropriate conditions on the alignment of source-specific factors, the problem is well-defined and both shared and source-specific factors are identifiable under a constrained matrix factorization objective. To solve this objective, we propose a new class of matrix factorization algorithms, called **H**eterogeneous **M**atrix **F**actorization. HMF is easy to implement, enjoys local linear convergence under suitable assumptions, and is intrinsically distributed. Through a variety of empirical studies, we showcase the advantageous properties of HMF and its potential application in feature extraction and anomaly detection. ## 1 Introduction The collection of data from a diverse range of related sources is common in various applications. For instance, in the Internet of Things (IoT), data are frequently gathered at the edge devices such as mobile phones or sensors, which often operate under different conditions such as temperature, pressure, or vibration. Despite these disparities, data from related sources often possess similar features that describe the common knowledge shared by the entire population. In this paper, we adopt a matrix factorization route to find these shared and unique features from data. Matrix factorization (MF) is a widely used technique in data analytics for identifying low-rank structures within high-dimensional matrices. Such low-rank structures can capture underlying physical processes or latent features that are highly informative for understanding patterns in high-dimensional observations (Wright and Ma, 2022). MF has been successfully applied to a diverse range of fields, including image processing (Lee and Seung, 1999), time series analysis (Yu et al., 2016), and many others, making it one of the most popular methods in data analytics. Most works on matrix factorization focus on the scenario where data come from a single source with low-rank structures, thus lacking the ability to analyze the heterogeneous structure of the data from different sources. To bridge the gap, this paper studies a class of problems where the effects of common and unique factors from different data sources are interleaved in the observations. More specifically, we consider a group of \(N\) observation matrices \(\{\mathbf{M}_{(i)}\}_{i=1}^{N}\) from related but heterogeneous sources. We model the common signals in these matrices as low-rank components whose columns span the same subspace. Accordingly, the unique signals are modeled by low-rank components with source-specific column subspaces. The question is then: Given \(N\) observation matrices \(\{\mathbf{M}_{(i)}\}_{i=1}^{N}\), can we identify their shared and unique features? At first glance, such identification appears to be challenging when no further information about the data generation process is available. 
Since the number of unknown variables, namely shared signals and unique signals, are roughly twice the number of data points in the observation matrices \(\mathbf{M}_{(i)}\)'s, finding these variables for every instance of observations seems hopeless. Motivated by the recent work on personalized PCA (Shi and Kontar, 2022), we show that there exist identifiability conditions under which the optimal solutions to a constrained optimization problem provide statistically consistent estimators of the ground truth signals. Equipped with the statistical guarantee, we explore efficient algorithms to solve the constrained optimization problem. To this end, we propose an algorithm called **H**eterogeneous **M**atrix **F**actorization (HMF). HMF exploits an invariance property of the objective to handle the constraints. It is distributed in nature. Theoretically, HMF is proved to converge linearly to an optimal solution under suitable stepsize and initial optimality gap. We will summarize our contributions in the following. ### Summary of Contributions **Formulation**. We propose a constrained optimization problem to recover the shared and unique factors from asymmetric matrices. **Efficient and distributed algorithm**. We design an efficient algorithm HMF. HMF is naturally _distributed_, meaning that we can do most of the processing at the \(N\) sources where the data are generated. More importantly, only the shared information needs to be transferred across iterations. **Convergence guarantee**. We show that our proposed algorithm comes equipped with a strong convergence guarantee. More specifically, under our identifiability conditions and with an appropriate choice of stepsize, we show that HMF is locally linear convergent to an optimal solution. **Applications**. We use a wide range of numerical experiments to demonstrate the effectiveness of the proposed methods. The case studies on video segmentation, temporal graph feature extraction, and stock market showcase the benefits of extracting shared and unique factors in datasets. ## 2 Related Work **Matrix Factorization** Numerous works analyze the theoretical and practical properties of first-order algorithms that solve the (asymmetric) matrix factorization problem \(\min_{\mathbf{U,V}}\left\|\mathbf{M}-\mathbf{U}\mathbf{V}^{T}\right\|_{F}^{2}\) or its variants (Li et al., 2018; Ye and Du, 2021; Sun and Luo, 2016; Park et al., 2017; Tu et al., 2016). Among them, Sun and Luo (2016) analyzes the local landscape of the optimization problem and establishes the local linear convergence of a series of first-order algorithms. Park et al. (2017); Ge et al. (2017) study the global geometry of the optimization problem. Tu et al. (2016) propose the Rectangular Procrustes Flow algorithm that is proved to converge linearly into the ground truth under proper initialization and a balancing regularization of the form \(\left\|\mathbf{U}^{T}\mathbf{U}-\mathbf{V}^{T}\mathbf{V}\right\|_{F}^{2}\). Recently, Ye and Du (2021) show that gradient descent with small and random initialization can converge to the ground truth. Despite the abundance of literature on standard matrix factorization and matrix completion, to our best knowledge, no work has studied the case where data come contain heterogeneous trends. **Distributed matrix factorization** Recent development of edge computation has fueled a trend to move matrix factorization to the edge. Gemulla et al. (2011) exploits the distributed gradient descent to factorize large matrices. Chai et al. 
(2021) proposes a cryptographic framework where multiple clients use their local data to collaboratively factorize a matrix without leaking private information to the server. These works use one set of feature matrices \(\mathbf{U}\) and \(\mathbf{V}\) to fit data from all clients, and hence they do not account for source-by-source feature differences. More details can be found in the survey on distributed and federated learning by Kontar et al. (2021). **Personalized modeling** Very recently, Personalized PCA proposed by Shi and Kontar (2022) attempts to find common and shared principal components when data comes from heterogeneous sources. Personalized PCA designs a distributed version of Riemannian gradient descent to find the top eigenvectors in the shared and unique parts of the covariance matrices. Though Personalized PCA achieves decent performance on several tasks, it can only handle symmetric matrices. In our work, we consider the broader setting of asymmetric matrices. ## 3 Model We consider the setting where \(N\) noisy observation matrices \(\mathbf{M}_{(1)},\mathbf{M}_{(2)},\cdots,\mathbf{M}_{(N)}\) are collected from \(N\in\mathbb{N}^{+}\) different but related sources. These matrices \(\mathbf{M}_{(i)}\in\mathbb{R}^{n_{1}\times n_{2}}\) are assumed to have the same shape 1. To model the commonality and uniqueness among them, we assume each matrix is driven by \(r_{1}\) shared factors and \(r_{2}\) unique factors and also contaminated by noise. More specifically, we consider the model where the observation matrix \(\mathbf{M}_{(i)}\) from source \(i\) is generated as Footnote 1: This assumption can be easily extended to the setting where \(\{\mathbf{M}_{(i)}\}_{i=1}^{N}\) have the same number of rows but a different number of columns. \[\mathbf{M}_{(i)}=\mathbf{U}^{\star}_{g}\mathbf{V}^{\star T}_{(i),g}+\mathbf{U }^{\star}_{(i),l}\mathbf{V}^{\star T}_{(i),l}+\mathbf{E}^{\star}_{(i)} \tag{1}\] where \(\mathbf{U}^{\star}_{g}\in\mathbb{R}^{n_{1}\times r_{1}}\), \(\mathbf{V}^{\star}_{(i),g}\in\mathbb{R}^{n_{2}\times r_{1}}\), \(\mathbf{U}^{\star}_{(i),l}\in\mathbb{R}^{n_{1}\times r_{2}}\), \(\mathbf{V}^{\star}_{(i),l}\in\mathbb{R}^{n\times r_{2}}\), \(\mathbf{E}^{\star}_{(i)}\in\mathbb{R}^{n_{1}\times n_{2}}\). We use \({}^{\star}\) to denote the ground truth. In the above model, \(r_{1}\) is the rank of the global (shared) feature matrices, while \(r_{2}\) is the rank of local (unique) feature matrices. The matrix \(\mathbf{U}^{\star}_{g}\mathbf{V}^{\star T}_{(i),g}\) models the shared low-rank part of the observation matrix, as the column space is the same across different sources. The matrix \(\mathbf{U}^{\star}_{(i),l}\mathbf{V}^{\star T}_{(i),l}\) models the unique low-rank part. For the low-rank matrix factorization, ranks are often smaller than matrix dimensions \(\max\{r_{1},r_{2}\}<\min\{n_{1},n_{2}\}\). \(\mathbf{E}^{\star}_{(i)}\) models the noise from source \(i\). In matrix factorization problems, the representations \(\mathbf{U}^{\star}_{g}\) and \(\mathbf{U}^{\star}_{(i),l}\) often correspond to latent data features. For instance, in recommender systems, \(\mathbf{U}^{\star}_{g}\) and \(\mathbf{U}^{\star}_{(i),l}\) can be interpreted as user features that reveal their preferences on different items (Koren et al., 2009). For better interpretability, it is often desirable to have the underlying features disentangled so that each feature can vary independently of others (Higgins et al., 2017). 
Under this rationale, we consider the model where shared and unique factors are orthogonal, \[\mathbf{U}^{\star T}_{g}\mathbf{U}^{\star}_{(i),l}=0,\ \forall i\in[N] \tag{2}\] We use \([N]\) to denote the set \(\{1,2,\cdots,N\}\). The orthogonality of features implies that the shared and unique features span different subspaces, thus describing different patterns in the observation. The orthogonal condition (2) models a diverse range of applications where the shared and unique features are decoupled. For instance, in anomaly detection, the anomalies are generated from unique mechanisms distinguished from normal ones (Chandola et al., 2009). Given our formulation, the task is to find \(\mathbf{U}^{\star}{{}_{g}},\{\mathbf{V}^{\star}{{}_{(i),g}},\mathbf{U}^{\star}{{} _{(i),l}},\mathbf{V}^{\star}{{}_{(i),l}}\}\) from observations \(\{\mathbf{M}_{(i)}\}\). As discussed, such a goal is challenging as the problem is under-definite. Since there are infinitely many ways in which shared and unique components can form the observation matrices, it is not apparent whether untangling them is feasible. Fortunately, we show that there exist identifiability conditions under which we can recover the shared and unique components by solving a constrained optimization problem. ## 4 Formulation and Identifiability In this section, we will propose a constrained optimization problem to find the shared and unique features. Then we will introduce the notion of misalignment and present our theorem on the statistical error of the solution to the constrained optimization problem. ### Formulation We design a constrained optimization problem that extends the formulation of the standard matrix factorization. The constrained optimization is formulated as, \[\begin{split}&\min_{\mathbf{x}}\sum_{i=1}^{N}\tilde{f}_{i}( \mathbf{U}_{g},\mathbf{V}_{(i),g},\mathbf{U}_{(i),l},\mathbf{V}_{(i),l})\\ &\text{such that }\mathbf{U}_{g}^{T}\mathbf{U}_{(i),l}=0,\ \forall i\in[N]\end{split} \tag{3}\] where \(\mathbf{x}=\left(\mathbf{U}_{g},\{\mathbf{U}_{(i),l},\mathbf{V}_{(i),g}, \mathbf{V}_{(i),l}\}_{i=1}^{N}\right)\) collects the decision variables and \(\tilde{f}_{i}\) is a regularized fitting residual consisting of two parts. \[\tilde{f}_{i}(\mathbf{U}_{g},\mathbf{V}_{(i),g},\mathbf{U}_{(i), l},\mathbf{V}_{(i),l})\] \[=\underbrace{\frac{1}{2}\left\|\mathbf{M}_{(i)}-\mathbf{U}_{g} \mathbf{V}_{(i),g}^{T}-\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}-\mathbf{S}_{( i)}\right\|_{F}^{2}}_{f_{i}}+\underbrace{\frac{\beta}{2}\left\|\mathbf{U}_{g}^{T} \mathbf{U}_{g}-\mathbf{I}\right\|_{F}^{2}+\frac{\beta}{2}\left\|\mathbf{U}_{( i),l}^{T}\mathbf{U}_{(i),l}-\mathbf{I}\right\|_{F}^{2}}_{g_{i}}\] (4a) We will explain them respectively. Term \[f_{i}\] measures the distance between the sum of shared and unique signals and the observation matrix \[\mathbf{M}_{(i)}\]. It denotes the residual of the fitting. The constraints \[\mathbf{U}_{g}^{T}\mathbf{U}_{(i),l}=0\] reflect our prior belief. Since we consider the model where the true shared and unique features are orthogonal ( 2 ), it is natural to encode the orthogonality into constraints. Term \(g_{i}\) is a regularization term that encourages the solved \(\mathbf{U}\) matrices to be orthonormal. A well-known theoretical issue in matrix factorization is that objectives like the term \(g_{i}\) do not have favorable geometrical properties around the optimal solutions. 
To see this, one can multiply an arbitrarily large number \(\gamma\) with \(\mathbf{U}_{g}\), then divide \(\mathbf{V}_{(i),g}\) by \(\gamma\), without changing \(f_{i}\). Thus the norm of \(\mathbf{U}_{g}\) and \(\mathbf{V}_{(i),g}\) can potentially approach zero or infinity, which makes the optimization challenging as the condition number can become very large. Regularization terms are added to alleviate this issue. Several works on matrix completion (e.g. Park et al. (2017); Tu et al. (2016a); Fattahi and Sojoudi (2020)) consider balancing regularization terms of the form \(\left\|\mathbf{U}_{g}^{T}\mathbf{U}_{g}-\mathbf{V}_{(i),g}^{T}\mathbf{V}_{(i),g}\right\|_{F}^{2}\) to encourage column factors \(\mathbf{U}_{g}\) and row factors \(\mathbf{V}_{(i),g}\) to have similar singular values. Therefore neither \(\mathbf{U}_{g}\) nor \(\mathbf{V}_{(i),g}\) can have too small or too large singular values. Under a similar rationale, we leverage the regularization terms in \(g_{i}\) to prevent too radical \(\mathbf{U}_{g}\) and \(\mathbf{U}_{(i),l}\)'s, as such regularizations are easier to work with in the presence of the constraints \(\mathbf{U}_{g}^{T}\mathbf{U}_{(i),l}=0\). Here, the parameter \(\beta\) controls the strength of the regularization. Since term \(f_{i}\) and \(g_{i}\) are both nonconvex, and the feasible set corresponding to the constraint \(\mathbf{U}_{g}^{T}\mathbf{U}_{(i),l}=0\) is also nonconvex, the problem (3) is nonconvex. Due to this non-convexity, it is not apparent whether the optimal solutions to the nonconvex (3) indeed capture the underlying structure from data. In the following, _we will show that the answer is a qualified yes: if the data satisfy the identifiability conditions_, then solving (3) will lead to good estimates of the ground truth. ### Misalignment Before formally presenting the identifiability conditions, we briefly review the concept of misalignment proposed in Shi and Kontar (2022). Intuitively speaking, misalignment characterizes the "minimal difference" among the subspace spanned by a series of vectors. To see why it is indispensable in identifying the shared features and the unique ones, we can consider a counterexample where all \(\mathbf{U}^{\star}{}_{(i),l}\)'s are identical, \(\mathbf{U}^{\star}{}_{(1),l}=\mathbf{U}^{\star}{}_{(2),l}=\cdots=\mathbf{U}^{ \star}{}_{(N),l}\). In this case, "unique" factors are also common. Thus identifying which features are unique is not possible. From this example, we can see that the unique factors should not perfectly align; namely, they should differ from each other. To formally introduce the notion of misalignment, we introduce the projection notation. For a matrix \(\mathbf{U}\in\mathbb{R}^{d\times n}\), we define the projection matrix \(\mathbf{P}_{\mathbf{U}}\in\mathbb{R}^{d\times d}\) as \(\mathbf{P}_{\mathbf{U}}=\mathbf{U}\left(\mathbf{U}^{T}\mathbf{U}\right)^{-1} \mathbf{U}^{T}\). **Definition 4.1**.: _(\(\theta\)-misalignment) We say \(\{\mathbf{U}^{\star}{}_{(i),l}\}_{i=1}^{N}\) are \(\theta\)-misaligned if there exists a positive constant \(\theta\in(0,1)\) such that:_ \[\lambda_{\max}\left(\frac{1}{N}\sum_{i=1}^{N}\mathbf{P}_{\mathbf{U}^{\star}{}_ {(i),l}}\right)\leq 1-\theta \tag{5}\] By the triangular inequality of \(\lambda_{\max}\left(\cdot\right)\), we know \(\lambda_{\max}\left(\frac{1}{N}\sum_{i=1}^{N}\mathbf{P}_{\mathbf{U}^{\star}{} _{(i),l}}\right)\leq\frac{1}{N}\sum_{i=1}^{N}\lambda_{\max}\left(\mathbf{P}_{ \mathbf{U}^{\star}{}_{(i),l}}\right)=1\). 
Thus the introduced \(\theta\) is always non-negative. As a special case, if all \(\mathbf{P}_{\mathbf{U}^{\star}{}_{(i),l}}\)'s have a common nonempty eigenspace with eigenvalue \(1\), it is easy to verify that \(\theta=0\). We can show that the inverse is also true. Thus \(\theta\)-misalignment condition basically requires that the subspaces spanned by all unique factors do not contain a common nontrivial subspace. On the contrary, all global features are shared, thus, the subspaces spanned by these features are also identical. This comparison shows that the misalignment condition unequivocally distinguishes unique features from shared ones. In general, \(\theta\) measures the level of misalignment among the subspaces spanned by the column vectors of \(\mathbf{U}_{(i),l}\). To see this, we can consider a simple example. **Example 1**.: _We set \(N=2\) and \(\mathbf{U}_{(1),l}=\left(\cos\vartheta,\sin\vartheta\right)^{T}\), \(\mathbf{U}_{(2),l}=\left(\cos\vartheta,-\sin\vartheta\right)^{T}\) for \(\vartheta\in[0,\frac{\pi}{4}]\). Then the angle between \(\mathbf{U}_{(1),l}\) and \(\mathbf{U}_{(2),l}\) is \(2\vartheta\). By simple algebra, we know,_ \[\frac{1}{2}\left(\mathbf{P}_{\mathbf{U}_{(1),l}}+\mathbf{P}_{\mathbf{U}_{(2), l}}\right)=\text{diag}\left(\cos^{2}\vartheta,\sin^{2}\vartheta\right)\] _Thus by definition, \(\theta=\sin^{2}\vartheta\). We can thus clearly see that when \(\vartheta\) increases, the \(\mathbf{U}_{(1),l}\) and \(\mathbf{U}_{(2),l}\) become more misaligned. As a result, \(\theta\) is also larger._ ### Identifiability With the definition of \(\theta\)-misalignment, we now formally present the identifiability results. **Theorem 2** (Statistical error).: _Consider the data generation model (1). Suppose that (i) the signal part \(\mathbf{U}^{\star}{}_{g}\mathbf{V}^{\star T}_{(i),g}+\mathbf{U}^{\star}{}_{(i), l}\mathbf{V}^{\star T}_{(i),l}\) has \(r_{1}+r_{2}\) nonzero singular values upper bounded by \(\sigma_{\max}\) and lower bounded by \(\sigma_{\min}\), (ii) unique factors \(\mathbf{U}^{\star}{}_{(i),l}\) are \(\theta\)-misaligned, (iii) \(\hat{\mathbf{U}}_{g},\{\hat{\mathbf{V}}_{(i),g},\hat{\mathbf{U}}_{(i),l},\hat {\mathbf{V}}_{(i),l}\}_{i=1}^{N}\) is one set of optimal solutions to (3), then, the following holds_ \[\sum_{i=1}^{N}\left\|\hat{\mathbf{U}}_{g}\hat{\mathbf{V}}_{(i),g} ^{T}-\mathbf{U}^{\star}{}_{g}\mathbf{V}^{\star T}_{(i),g}\right\|_{F}^{2}+ \left\|\hat{\mathbf{U}}_{(i),l}\hat{\mathbf{V}}_{(i),l}^{T}-\mathbf{U}^{\star }{}_{(i),l}\mathbf{V}^{\star T}_{(i),l}\right\|_{F}^{2} \tag{6}\] \[=O\left(\frac{\sigma_{\max}^{2}}{\theta\sigma_{\min}^{4}}\sum_{i= 1}^{N}\left(2\left\|\mathbf{E}^{\star}{}_{(i)}\right\|_{F}\sigma_{\max}+\left\| \mathbf{E}^{\star}{}_{(i)}\right\|_{F}^{2}\right)^{2}+\sum_{i=1}^{N}\left(2 \left\|\mathbf{E}^{\star}{}_{(i)}\right\|_{F}\sigma_{\max}+\left\|\mathbf{E}^ {\star}{}_{(i)}\right\|_{F}^{2}\right)\right)\] Theorem 2 is significantly different from the analysis in Shi and Kontar (2022) as it works with asymmetric matrix factorization and does not need additional structural assumptions on covariance matrices. From (6), it is clear that when the norms of the error matrices \(\mathbf{E}^{\star}{}_{(i)}\) go to zero, the statistical errors of shared signal \(\hat{\mathbf{U}}_{g}\hat{\mathbf{V}}_{(i),g}^{T}\) and unique signal \(\hat{\mathbf{U}}_{(i),l}\hat{\mathbf{V}}_{(i),l}^{T}\) also shrink to zero. Thus the optimal solutions to the problem (3) are consistent estimators of the ground truth. 
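Since the misalignment level \(\theta\) enters the bound (6) directly, it is useful to see how it can be computed. The snippet below is a small NumPy check of Definition 4.1 on Example 1 (our own illustration, not part of the paper's code).

```python
import numpy as np

def projector(U):
    """Orthogonal projector P_U = U (U^T U)^{-1} U^T."""
    return U @ np.linalg.solve(U.T @ U, U.T)

def misalignment(U_list):
    """theta = 1 - lambda_max( (1/N) * sum_i P_{U_(i),l} ), as in Definition 4.1."""
    P_avg = sum(projector(U) for U in U_list) / len(U_list)
    return 1.0 - np.linalg.eigvalsh(P_avg)[-1]   # eigvalsh returns eigenvalues in ascending order

# Example 1: two unit vectors at angle 2*vartheta; theta should equal sin(vartheta)^2.
vartheta = 0.3
U1 = np.array([[np.cos(vartheta)], [np.sin(vartheta)]])
U2 = np.array([[np.cos(vartheta)], [-np.sin(vartheta)]])
print(misalignment([U1, U2]), np.sin(vartheta) ** 2)   # both print ~0.0873
```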
An interesting observation in Theorem 2 is that heterogeneity can be a blessing rather than a blight: the right-hand side of (6) decreases as \(\theta\) increases, indicating that a more misaligned unique signal leads to a smaller statistical error. The proof of Theorem 2 is relegated to the supplementary materials. ## 5 Algorithm It remains to develop an efficient algorithm to solve the problem (3). To achieve this, we propose a heterogeneous matrix factorization (HMF) algorithm based on gradient descent. ### Heterogeneous Matrix Factorization We use \(\tilde{f}\) to denote the summation of the \(\tilde{f}_{i}\)'s over the different sources \(i\), \[\tilde{f}(\mathbf{U}_{g},\{\mathbf{V}_{(i),g},\mathbf{U}_{(i),l},\mathbf{V}_{(i),l}\})=\sum_{i=1}^{N}\tilde{f}_{i} \tag{7}\] The objective \(\tilde{f}\) is differentiable. Thus, to optimize (3), one can calculate the gradient of \(\tilde{f}_{i}\) with respect to its variables as \[\begin{cases}\nabla_{\mathbf{U}_{g}}\tilde{f}_{i}=-\left(\mathbf{M}_{(i)}-\mathbf{U}_{g}\mathbf{V}_{(i),g}^{T}-\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}\right)\mathbf{V}_{(i),g}+2\beta\mathbf{U}_{g}\left(\mathbf{U}_{g}^{T}\mathbf{U}_{g}-\mathbf{I}\right)\\ \nabla_{\mathbf{V}_{(i),g}}\tilde{f}_{i}=-\left(\mathbf{M}_{(i)}-\mathbf{U}_{g}\mathbf{V}_{(i),g}^{T}-\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}\right)^{T}\mathbf{U}_{g}\\ \nabla_{\mathbf{U}_{(i),l}}\tilde{f}_{i}=-\left(\mathbf{M}_{(i)}-\mathbf{U}_{g}\mathbf{V}_{(i),g}^{T}-\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}\right)\mathbf{V}_{(i),l}+2\beta\mathbf{U}_{(i),l}\left(\mathbf{U}_{(i),l}^{T}\mathbf{U}_{(i),l}-\mathbf{I}\right)\\ \nabla_{\mathbf{V}_{(i),l}}\tilde{f}_{i}=-\left(\mathbf{M}_{(i)}-\mathbf{U}_{g}\mathbf{V}_{(i),g}^{T}-\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}\right)^{T}\mathbf{U}_{(i),l}\end{cases} \tag{8}\] The constraint \(\mathbf{U}_{g}^{T}\mathbf{U}_{(i),l}=0\) poses challenges to the optimization. One naive idea is to use projected gradient descent to handle this constraint. However, the projection cannot be easily implemented, as the feasible region is nonconvex. A few works introduce infeasible updates when the constraint is complicated, including ADMM (Hong and Luo, 2017) or SQP (Curtis and Overton, 2012). Such approaches often introduce additional tuning parameters to balance the objective and the constraint. _We take a different route by exploiting a special invariance property_ in (3). More specifically, for any \(\mathbf{R}\in\mathbb{R}^{r_{1}\times r_{2}}\), we can apply the transform \(\varphi_{\mathbf{R}}\) to the variables, \[\varphi_{\mathbf{R}}:\big(\mathbf{U}_{g},\mathbf{V}_{(i),g},\mathbf{U}_{(i),l},\mathbf{V}_{(i),l}\big)\mapsto\big(\mathbf{U}_{g},\mathbf{V}_{(i),g}+\mathbf{V}_{(i),l}\mathbf{R}^{T},\mathbf{U}_{(i),l}-\mathbf{U}_{g}\mathbf{R},\mathbf{V}_{(i),l}\big)\] without changing \(f_{i}\): \(f_{i}(\mathbf{U}_{g},\mathbf{V}_{(i),g},\mathbf{U}_{(i),l},\mathbf{V}_{(i),l})=f_{i}\left(\varphi_{\mathbf{R}}\left(\mathbf{U}_{g},\mathbf{V}_{(i),g},\mathbf{U}_{(i),l},\mathbf{V}_{(i),l}\right)\right)\). The invariance property is a result of the special bilinear structure in (3).
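The invariance is easy to verify numerically; below is a short sketch with arbitrary dimensions (our own check, not taken from the released code).

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r1, r2 = 8, 10, 3, 2
U_g = rng.normal(size=(n1, r1))
V_g = rng.normal(size=(n2, r1))
U_l = rng.normal(size=(n1, r2))
V_l = rng.normal(size=(n2, r2))
R = rng.normal(size=(r1, r2))      # arbitrary R in R^{r1 x r2}

# phi_R: (U_g, V_g, U_l, V_l) -> (U_g, V_g + V_l R^T, U_l - U_g R, V_l)
V_g_new = V_g + V_l @ R.T
U_l_new = U_l - U_g @ R

before = U_g @ V_g.T + U_l @ V_l.T
after = U_g @ V_g_new.T + U_l_new @ V_l.T
print(np.allclose(before, after))  # True: the fitted signal, and hence f_i, is unchanged
```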
One can use such invariance to ensure the feasibility of the iterates: by choosing \(\mathbf{R}=\left(\mathbf{U}_{g}^{T}\mathbf{U}_{g}\right)^{-1}\mathbf{U}_{g}^{T}\mathbf{U}_{(i),l}\), the transform \(\varphi_{\mathbf{R}}\) automatically corrects \(\mathbf{V}_{(i),g}\) and \(\mathbf{U}_{(i),l}\) such that \(\mathbf{U}_{(i),l}\) is orthogonal to \(\mathbf{U}_{g}\), without changing \(f_{i}\). Based on this fact, we propose an iterative algorithm. In each epoch, we use \(\varphi_{\mathbf{R}}\) to correct the variables to ensure feasibility. Then we use gradient descent on \((\mathbf{U}_{g},\mathbf{V}_{(i),g},\mathbf{U}_{(i),l},\mathbf{V}_{(i),l})\) to decrease the regularized objective. Pseudo-code is shown in Algorithm 1.

```
1:  Input matrices \(\{\mathbf{M}_{(i)}\}_{i=1}^{N}\), stepsize \(\eta\)
2:  Initialize \(\mathbf{U}_{g,1},\mathbf{V}_{(i),g,\frac{1}{2}},\mathbf{U}_{(i),l,\frac{1}{2}},\mathbf{V}_{(i),l,1}\) to be small random matrices.
3:  for iteration \(\tau=1,\ldots,R\) do
4:      for \(i=1,\cdots,N\) do
5:          Correct \(\mathbf{U}_{(i),l,\tau}=\mathbf{U}_{(i),l,\tau-\frac{1}{2}}-\mathbf{U}_{g,\tau}\left(\mathbf{U}_{g,\tau}^{T}\mathbf{U}_{g,\tau}\right)^{-1}\mathbf{U}_{g,\tau}^{T}\mathbf{U}_{(i),l,\tau-\frac{1}{2}}\)
6:          Correct \(\mathbf{V}_{(i),g,\tau}=\mathbf{V}_{(i),g,\tau-\frac{1}{2}}+\mathbf{V}_{(i),l,\tau}\mathbf{U}_{(i),l,\tau-\frac{1}{2}}^{T}\mathbf{U}_{g,\tau}\left(\mathbf{U}_{g,\tau}^{T}\mathbf{U}_{g,\tau}\right)^{-1}\)
7:          Update \(\mathbf{U}_{(i),g,\tau+1}=\mathbf{U}_{g,\tau}-\eta\nabla_{\mathbf{U}_{g}}\tilde{f}_{i}\), with the gradient calculated in (8).
8:          Update \(\mathbf{V}_{(i),g,\tau+\frac{1}{2}}=\mathbf{V}_{(i),g,\tau}-\eta\nabla_{\mathbf{V}_{(i),g}}\tilde{f}_{i}\), with the gradient calculated in (8).
9:          Update \(\mathbf{U}_{(i),l,\tau+\frac{1}{2}}=\mathbf{U}_{(i),l,\tau}-\eta\nabla_{\mathbf{U}_{(i),l}}\tilde{f}_{i}\), with the gradient calculated in (8).
10:         Update \(\mathbf{V}_{(i),l,\tau+1}=\mathbf{V}_{(i),l,\tau}-\eta\nabla_{\mathbf{V}_{(i),l}}\tilde{f}_{i}\), with the gradient calculated in (8).
11:     endfor
12:     Calculate \(\mathbf{U}_{g,\tau+1}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{U}_{(i),g,\tau+1}\)
13: endfor
14: Return \(\mathbf{U}_{g,R},\{\mathbf{V}_{(i),g,R}\},\{\mathbf{U}_{(i),l,R}\},\{\mathbf{V}_{(i),l,R}\}\).
```
**Algorithm 1** HMF

In Algorithm 1, we use \(\tau\) to denote the iteration index, where a half-integer index indicates that the update of the variable is only half complete: it has been updated by gradient descent but is not yet feasible. One salient feature of Algorithm 1 is that it is distributed in nature. Suppose there are \(N\) computation nodes, each holding one observation matrix \(\mathbf{M}_{(i)}\). Then Algorithm 1 can be run on these computation nodes with the help of a central server. In such a scenario, node \(i\) carries out all computation from line 5 to line 10 in Algorithm 1 and sends the updated copy of \(\mathbf{U}_{(i),g}\) to the central server. The server then takes the average in line 12 and broadcasts the averaged \(\mathbf{U}_{g}\) to all nodes. The inversion in Algorithm 1 takes \(O(r_{1}^{3})\) operations, and all other matrix products take \(O(n_{1}n_{2}(r_{1}+r_{2}))\) operations. Therefore, if \(r_{1}\) is dominated by \(n_{1}\) and \(n_{2}\), the computational complexity of a single iteration is \(O(n_{1}n_{2}(r_{1}+r_{2}))\). ### Convergence Guarantee Recall that \(\{\hat{\mathbf{U}}_{g},\{\hat{\mathbf{V}}_{(i),g}\},\{\hat{\mathbf{U}}_{(i),l}\},\{\hat{\mathbf{V}}_{(i),l}\}\}\) denotes the optimal solution to the problem (3).
We define the residual \(\mathbf{R}_{(i)}\) for client \(i\) as \(\mathbf{R}_{(i)}=\mathbf{M}_{(i)}-\hat{\mathbf{U}}_{g}\hat{\mathbf{V}}_{(i),g} ^{T}-\hat{\mathbf{U}}_{(i),l}\hat{\mathbf{V}}_{(i),l}^{T}.\) Additionally, we use \(\phi_{(i),\tau}\) to denote the optimality gap at iteration \(\tau\) of HMF for client \(i\), \(\phi_{(i),\tau}=\hat{f}_{i}(\mathbf{U}_{g,\tau},\mathbf{V}_{(i),g,\tau}, \mathbf{U}_{(i),l,\tau},\mathbf{V}_{(i),l,\tau})-\frac{1}{2}\left\|\mathbf{R}_ {(i)}\right\|_{F}^{2}\). Accordingly, the total optimality gap is defined as \(\phi_{\tau}=\sum_{i=1}^{N}\phi_{(i),\tau}\). The following theorem establishes the linear convergence of HMF. **Theorem 3** (Convergence of HMF).: _Suppose that the following conditions are satisfied for HMF: (i) \(\mathbf{M}_{(i)}=\mathbf{U}^{*}_{g}\mathbf{V}^{*T}_{(i),g}+\mathbf{U}^{*}_{(i ),l}\mathbf{V}^{*T}_{(i),l}+\mathbf{E}^{*}_{(i)}\), where \(\left\|\mathbf{E}^{*}_{(i)}\right\|_{F}=O(\theta\sigma_{\min})\), (ii) the initial optimality gap satisfies \(\phi_{1}=O\left(\theta^{1.5}\sigma_{\min}^{2}\right)\), (iii) the stepsize satisfies \(\eta=O\left(\frac{1}{\sigma_{\max}^{2}}\right)\)._ _Then, there exists a constant \(C>0\) such that the iterations of HMF satisfy_ \[\phi_{\tau}\leq(1-C\eta)^{\tau-1}\,\phi_{1} \tag{9}\] The first condition in Theorem 3 imposes an upper bound on the noise matrix. The second condition in Theorem 3 requires an upper bound on the initial optimality gap. Roughly speaking, this condition ensures that the iterates do not become trapped at a sub-optimal local solution and lie within a basin of attraction of the global solution. We note that recent results have relaxed this condition for other variants of nonconvex matrix factorization by resorting to small, random, or spectral initialization techniques (Li et al., 2018; Stoger and Soltanolkotabi, 2021; Ma and Fattahi, 2022; Ma et al., 2022; Tu et al., 2016; Ma et al., 2018). We believe such techniques can be used in HMF to relax the aforementioned initial condition. In fact, in our simulations, we observed that HMF with a small Gaussian random initial point converges to a global solution in almost all instances. We leave the rigorous analysis of this observation as an enticing challenge for future research. Finally, the last condition in Theorem 3 imposes an upper bound on the stepsize of our algorithm to guarantee its convergence. We relegate the proof of Theorem 3 to the supplementary materials. ## 6 Experimental Results In this section, we present the results of the numerical simulations. We use a synthetic example and three real-life case studies. Code for all numerical studies is available in the Github repository. All experiments are conducted on a desktop with NVIDIA GeForce RTX 3080. Each iteration of HMF typically takes about 0.1 seconds. ### Synthetic Data We use synthetic data to examine the numerical convergence of HMF. We generate data according to the model (1), where \(\mathbf{U}^{\star}_{\,\,g}\), \(\mathbf{U}^{*}_{(i),l}\), \(\mathbf{V}^{\star}_{(i),g}\), \(\mathbf{V}^{\star}_{(i),l}\) are randomly sampled from Gaussian distributions and satisfy the requirement \(\mathbf{U}^{\star T}_{\,\,g}\mathbf{U}^{*}_{(i),l}=0\). The noise \(\mathbf{E}^{\star}_{(i)}\) is set to 0 to better examine the convergence behavior of our algorithm. We fix \(n_{2}=100\) and select \(n_{1}\in\{30,60,120\}\) to see the effect of different observation matrix sizes. 
The ranks of the global and local components are set to \(r_{1}=r_{2}=3\), and the number of clients is \(N=100\). We run HMF on the generated data with the decision variables \(\mathbf{U}_{g}\), \(\mathbf{U}_{(i),l}\), \(\mathbf{V}_{(i),g}\), \(\mathbf{V}_{(i),l}\) initialized as small Gaussian matrices. The errors \(\sum_{i=1}^{N}\left\|\mathbf{U}_{g,\tau}\mathbf{V}_{(i),g,\tau}^{T}-\mathbf{U }^{\star}{}_{g}\mathbf{V}^{\star T}_{(i),g}\right\|_{F}^{2}\) and \(\sum_{i=1}^{N}\left\|\mathbf{U}_{(i),l,\tau}\mathbf{V}_{(i),l,\tau}^{T}- \mathbf{U}^{\star}{}_{(i),l}\mathbf{V}^{\star T}_{(i),l}\right\|_{F}^{2}\) are calculated in each iteration and plotted as shared feature error and unique feature error in Figure 1. It can be observed that, after a few iterations2, the errors decrease linearly, which is consistent with our theoretical guarantee in Theorem 3. Footnote 2: The initial sublinear convergence of the algorithm is due to the fact that our initial point does not satisfy the condition of Theorem 3. However, once the iterations reach the basin of attraction of the global solution, the iterations converge linearly. ### Case Study: Video Segmentation We use an illustrative example in video segmentation to demonstrate the application of HMF. The video comes from a simulated surveillance video dataset (Cuevas et al., 2016). In the video, multiple vehicles drive through a roundabout. We divide each video frame \(i\) into multiple \(7\times 7\) patches and flatten these patches into row vectors to construct observation matrix \(\mathbf{M}_{(i)}\). Then, we apply HMF to identify the shared and unique signals from the observation matrices with \(r_{1}=20\) and \(r_{2}=100\). Naturally, the shared signals will correspond to the stationary components in the video, and unique signals correspond to the changing parts. We reconstruct the frames from the shared and unique signals and plot the results in Figure 2. One can see clearly that HMF correctly identifies the background and the foreground. ### Case Study: Feature Extraction in Temporal Graph \begin{table} \begin{tabular}{c c} \hline \hline HMF & Pooled SVD \\ \hline 23.6 \(\pm\) 0.3 & 24.7 \(\pm\) 0.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Prediction error of the communication graph on the test set. Figure 1: Shared factor error and unique factor error (log-scale) in each iteration. There are numerous applications where data are generated based on a series of temporal graphs with both common structures and distinctive patterns that are indicative of the graph topology. As an example, we analyze the email communication network of a research institution in Europe (Leskovec and Krevl, 2014; Paranjape et al., 2017). The dataset consists of multiple email records collected during 803 days. Each record is a tuple of \((u,v,t)\), which corresponds to an email sent by person \(u\) to person \(v\) at time \(t\). There are 986 users in the email network. The available data can be naturally mapped to distinct graphs for each day \(i\). Each graph \(G_{i}\) has 986 nodes representing 986 users. If a user \(u\) sends \(w\) emails to user \(v\) in day \(i\), we connect node \(u\) to node \(v\) in graph \(G_{i}\) with a directed edge with weight \(w\). We then analyze the adjacency matrices of \(\{G_{i}\}_{i=1}^{N}\) using HMF to extract the common trends among the constructed graphs and untangle them from the unique features in each graph. We set \(r_{1}=50\) and \(r_{2}=10\). 
After obtaining \(\mathbf{U}_{g}\), \(\mathbf{V}_{(i),g}\),\(\mathbf{U}_{(i),l}\), \(\mathbf{V}_{(i),l}\), we plot the graph corresponding to the adjacency matrices \(\mathbf{U}_{g}\mathbf{V}_{(i),g}^{T}\) and \(\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}\). Results are demonstrated in Figure 3, where we show the topology of 5 graphs and their corresponding shared and unique components. One can see that our algorithm successfully recovers the common components with similar topologies, as well as the unique components with varying topologies. To investigate the predictive power of the extracted features, we randomly split the graphs into 90% training set and 10% test set, and apply HMF on the training set to extract common and unique features. Then, we use these features to predict the communication graphs in the test set and calculate the fitting error. Since we do not know the unique features in the test set, we use the unique features from the neighbor graphs (the graphs corresponding to the previous and subsequent date) in the training set as an approximation of the unique features for the test graph. Such an approximation is valid due to the continuous nature of the dataset. We run eight repetitions with different random seeds, and the mean and standard deviation of fitting errors are reported in Table 1. As a comparison, we also run SVD on the pooled adjacency matrix, i.e., the combined adjacency matrix in the training set, and calculate the norm of the residual after removing top \(r_{1}+r_{2}\) singular components from the pooled adjacency matrix. The smaller prediction error of HMF suggests that the extracted features have stronger predictive power. ### Case Study: Stocks Market Data HMF can also be applied to data from the financial market. As a proof-of-concept, we analyze the daily stock prices of 214 stocks from January 2, 1990 to November 9, 2017. The goal is to understand the time-specific patterns in stock prices. These patterns are often related to abnormal market behaviors and provide insightful information for subsequent trading decisions. Similar to Fattahi and Gomez (2021), we use a time window of 30 days to group the stock prices into different batches, and analyze the common and unique features among these batches. Each batched data matrix \(\mathbf{M}_{(i)}\) has dimension \(214\times 30\). The unique features represent structural differences in the stock prices in each batch, thus signaling sudden changes in the market. To find the unique features, we apply HMF on the batched data to extract \(\mathbf{U}_{(i),l}\)'s and \(\mathbf{V}_{(i),l}\)'s. To measure the "heterogeneity index" in each batch, we calculate the column-wise \(\ell_{1}\) norm of the signal matrix \(\mathbf{U}_{(i),l}\mathbf{V}_{(i),l}^{T}\) for each \(i\). The heterogeneity indices are plotted as the blue curve in Figure 4. To provide better insight, we also plot the _SP500_ closing prices in the same figure. From Figure 4, one can observe that almost every significant historical market crash (shown as sudden drops in the orange curve) corresponds to a peak in the heterogeneity index. We also identify 4 major periods when the heterogeneity index has several large peaks. By comparing these periods with the history of the global financial market (Wikipedia, 2023), one can see that A corresponds to the "dot-com bubble", B corresponds to the "September 11 attack", C corresponds to "stock market downturn of 2002", and D corresponds to the "2007-2008 financial crisis and the aftermath". 
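For concreteness, the heterogeneity index described above could be computed along the following lines (a sketch under our reading of the text; the batching and scaling details of the actual experiments may differ).

```python
import numpy as np

def heterogeneity_index(U_l, V_l):
    """Column-wise l1 norms of the unique signal U_(i),l V_(i),l^T for one 30-day batch.

    For the stock data, the unique signal is a 214 x 30 matrix, so this returns one
    value per trading day in the batch; large values flag days whose price structure
    departs from the shared pattern.
    """
    unique_signal = U_l @ V_l.T
    return np.abs(unique_signal).sum(axis=0)
```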
Figure 4: The blue curve denotes the heterogeneity index of 214 stock returns from 1998 to 2014. The orange curve is the _SP500_ closing prices at the corresponding dates. We label 4 periods where the heterogeneity index has large peaks.

## 7 Conclusion and Discussion of Limitations This work proposes HMF, which solves a constrained matrix factorization problem to extract shared and unique features from heterogeneous data. One avenue for future research is to consider scenarios where the ranks of the feature matrices, \(r_{1}\) and \(r_{2}\), are unknown and must be over-estimated instead. Such a setting (also known as overparameterization) has been extensively studied in the classical matrix factorization literature (Ma and Fattahi, 2022; Stoger and Soltanolkotabi, 2021). Another promising direction is to improve the initialization condition for the guaranteed convergence of our proposed algorithm. Although the theoretical guarantee of our algorithm (Theorem 3) relies on the availability of a good initial point, we hypothesize that this requirement can be relaxed, as the algorithm works well in practice with a small and random initialization. ## Acknowledgements Raed Al Kontar is supported, in part, by NSF CAREER Award CMMI-2144147. Salar Fattahi is supported, in part, by NSF Award DMS-2152776 and ONR Award N00014-22-1-2127, and MICDE Catalyst Grant.
2308.09586
Explaining the Arts: Toward a Framework for Matching Creative Tasks with Appropriate Explanation Mediums
Although explainable computational creativity seeks to create and sustain computational models of creativity that foster a collaboratively creative process through explainability, there remains little to no work in supporting designers when exploring the explanation medium. While explainable artificial intelligence methods tend to support textual, visual, and numerical explanations, within the arts, interaction mediums such as auditorial, tactile, and olfactoral may offer more salient communication within the creative process itself. Through this research, I propose a framework to assist designers of explainable user interfaces in modeling the type of interaction they wish to create using explanations.
Michael Clemens
2023-08-18T14:29:23Z
http://arxiv.org/abs/2308.09586v1
Explaining the Arts: Toward a Framework for Matching Creative Tasks with Appropriate Explanation Mediums ###### Abstract. Although explainable computational creativity seeks to create and sustain computational models of creativity that foster a collaboratively creative process through explainability, there remains little to no work in supporting designers when exploring the explanation medium. While explainable artificial intelligence methods tend to support textual, visual, and numerical explanations, within the arts, interaction mediums such as auditorial, tactile, and olfactoral may offer more salient communication within the creative process itself. Through this research, I propose a framework to assist designers of explainable user interfaces in modeling the type of interaction they wish to create using explanations. **ACM Reference Format:** Michael Clemens. 2023. Explaining the Arts: Toward a Framework for Matching Creative Tasks with Appropriate Explanation Mediums. In the _1st International Workshop on Explainable AI for the Arts (XAIxArts), ACM Creativity and Cognition (C&C) 2023_. Online, 3 pages. [https://xiaxarts.github.io](https://xiaxarts.github.io) ## 1. Introduction Neural networks and deep learning are widely used for co-creative applications, but their explainability is a major challenge (Krizhevsky et al., 2016). Most explainable artificial intelligence (XAI) research focuses on explaining predictions and decisions in domains such as healthcare, finance, law, and criminal justice (Krizhevsky et al., 2016). My work, however, explores explainability within the artistic domains. XAI methods typically use visual, textual, or numerical explanations, but more than these may be needed for artistic contexts. Explainability in the arts is often related to intentionality and autonomy, which are considered essential for attributing creativity to an artificial agent (Bryan and Sukhlik, 2017; Krizhevsky et al., 2016). Recent surveys further indicate that people expect artificial systems to exhibit novelty, quality, intentionality, and autonomy to be considered creative (Krizhevsky et al., 2016). Explainable computational creativity is a branch of XAI that aims to build models that enable bi-directional communication between the user and the system. Based on HCI and creativity literature, Bryan-Kinns et al. (2016) proposed a framework to analyze AI in the interactive arts along three dimensions: the role of AI, the interaction with AI, and the common ground with AI. They claim that the explainability of a creative AI depends on these aspects. For example, more explanation may be necessary when the process is more collaborative, requiring more engagement and grounding. Also, more contact with the agent helps users to learn about and infer the knowledge and understanding of the creative AI. My research extends their work by contributing a framework that helps designers select the best explanation medium per the interaction type alongside the artistic domain. Current efforts within the field of creative AI have created a surge of interest in these autonomous systems, however it's important to note there are various ways to describe the type of interaction, as previously mentioned, between the human and the computationally creative system. These types of interactions include: autonomous systems, creativity support tools (CSTs), and co-creative systems. Autonomous systems produce creative artifacts without any interaction from the user (Bryan and Sukhlik, 2017). 
CSTs refer to tools and apps that are designed to aid in the user's creativity (Bryan and Sukhlik, 2017). Shneiderman (Krizhevsky et al., 2016) defines CSTs as tools that enhance the creative thought of users and enable them to be both productive and innovative. Co-creation is the process of collaboratively creating meaningful objects or activities, enabled by users' ability to share emotions, experiences, and ideas without needing specific guidance or predetermined strategies from a central authority (Hansen et al., 2017). I will use Karimi et al.'s definition of co-creativity and explicitly define the concept of co-creation as the "interaction between at least one AI agent and at least one human where they take action based on the response of their partner and their conceptualization of creativity during the co-creative task" (Krishnan et al., 2017). While evaluating the system of interaction supported by the creative agent, it is also imperative to assess for whom the explanation is provided (Hansen et al., 2017). The target audience determines the most effective method for describing the why behind the decisions. Two dimensions are often mentioned within XAI research concerning user characteristics: AI literacy (Krishnan et al., 2017) and mastery of the domain. Although some work mentions the concept of experts in terms of AI experts, I introduce the concepts of experts and novices to qualitatively assess a user's agency and experience with the respective artistic domain. Ehsan et al. (Ehsan et al., 2017) found that AI literacy significantly affects how numerical explanations were assessed, and each group (non-AI and AI experts) had unique perceptions of what determined a human-like explanation. In a music production context, Clemens et al. (Clemens et al., 2016) found that novice users were enthusiastic about using the creative agent in their workflow, while experts were concerned about the creative agent's impact on their creative autonomy. The drastic differences between these user groups' views of explanations necessitate their inclusion while creating this framework. The final dimension I wish to explore within the framework is the artistic outlet where creative interaction occurs. Within the artistic domain of music, there are several forms of artistic expression, including composing, performing, and producing. Although all of these fall under the domain of music, they are very different types of artistic expression in how they are experienced. Interaction modalities such as CSTs and co-creativity may better support composing and producing, while performing is more apt for autonomous systems such as Shimon (Shimon, 2017). ## 2. Research Objective My research aims to create a framework that designers of explainable user interfaces can use to explore the most salient explanation medium for their targeted interaction paradigm with the creative system. Figure 1 presents an example of the framework for visually intensive artistic expressions. Although only one type of artistic expression is listed here, the completed framework would also include artistic expressions more focused on the auditory, haptic, olfactory, and gustatory senses. Although particular art forms will include aspects of several of these senses (drawing, for example, includes both visual and haptic aspects), there is a primary focus on the visual portion rather than the haptic aspect of the art itself. The distinctions between these art forms will be based on the most intensive sense being used within the expression.
Figure 1 provides an example based on the two user dimensions, AI literacy and domain expertise, along with the appropriate mediums for each of the four resulting quadrants. Examples of the most salient medium are also displayed within each type of creative agent. Only three types of medium are shown, but other types could include auditory, olfactory, haptic, or gustatory mediums. The broader work in this area aims to develop a framework to measure an explanation's effectiveness in computational creative applications. One of the challenges is that there remains no universal standard for evaluating explanations in different creative domains or for differing user groups. This framework addresses this problem by explicitly defining the interaction between the user and the creative agent when explanations are provided. The next step in this research endeavor is to test the proposed framework on an artistic domain that uses senses not commonly used in XAI research, such as sound, and see whether explanations in those modalities are more helpful for users than visual, textual, or numerical ones.
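A purely hypothetical sketch of how the framework could be encoded for a designer-facing tool is shown below; the quadrant-to-medium assignments are placeholders for illustration and are not taken from Figure 1.

```python
# Hypothetical encoding of the framework as a lookup keyed on the two user dimensions.
# The medium lists below are placeholders, not the assignments proposed in Figure 1.
FRAMEWORK = {
    ("low AI literacy", "novice"): ["textual", "visual"],
    ("low AI literacy", "expert"): ["visual", "auditory"],
    ("high AI literacy", "novice"): ["numerical", "textual"],
    ("high AI literacy", "expert"): ["numerical", "auditory"],
}

def suggest_mediums(ai_literacy: str, domain_expertise: str) -> list[str]:
    """Return candidate explanation mediums for a given user profile."""
    return FRAMEWORK[(ai_literacy, domain_expertise)]

print(suggest_mediums("low AI literacy", "expert"))
```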
2305.06891
Hierarchical Block Low-rank Approximation of Cavity Radiation
In this paper we examine the use of low-rank approximations for the handling of radiation boundary conditions in a transient heat equation within a cavity radiation setting. The finite element discretization that arises from cavity radiation is well known to be dense, which poses difficulties for the efficiency and scalability of solvers. Here we consider a special treatment of the cavity radiation discretization using a block low-rank approximation combined with hierarchical matrices. We provide an overview of the methodology and discuss techniques that can be used to improve efficiency within the framework of hierarchical matrices, including the use of the adaptive cross approximation (ACA) method. We provide a number of numerical results that demonstrate the accuracy and efficiency of the approach in practical problems, and demonstrate significant speedup and memory reduction compared to the more conventional "dense matrix" approach.
Ivan Baburin, Jonas Ballani, John W. Peterson, David Knezevic
2023-05-11T15:28:35Z
http://arxiv.org/abs/2305.06891v1
# Hierarchical Block Low-rank Approximation ###### Abstract In this paper we examine the use of low-rank approximations for the handling of cavity radiation boundary conditions in a transient heat transfer problem. The finite element discretization that arises from cavity radiation involves far-field degree of freedom coupling and matrices characterized by a mixed dense/sparse structure, which pose difficulties for the efficiency and scalability of solvers. Here we consider a special treatment of the cavity radiation discretization using a block low-rank approximation combined with hierarchical matrices. We provide an overview of the methodology and discuss techniques that can be used to improve efficiency within the framework of hierarchical matrices, including use of the adaptive cross approximation (ACA) method. We provide a number of numerical results that demonstrate the accuracy and efficiency of the approach in practical problems, and demonstrate significant speed-up and memory reduction compared to the more conventional "dense matrix" approach. keywords: Cavity Radiation, Hierarchical Matrices, Radiation Boundary, Sparse Computing + Footnote †: journal: CMAME ## 1 Introduction Blackbody radiation is a well-studied phenomenon in classical mechanics. Though challenging to model mathematically (c.f. the "paradox" known as the "ultraviolet catastrophe" [1]), the study of blackbody radiation at the beginning of 20th century gave birth to Planck's law and the concept of quantization of energy, and led to further fundamental discoveries that laid the foundations for the field of quantum theory. In the modern era of scientific computation, there are still many challenges connected to blackbody radiation, in particular the computation of radiation-based heat transfer. In this article we consider a more general setting of "graybody" radiation, in which the emissivity coefficient may be specified on the cavity surface. This is relevant in many industrial and scientific settings, such as heat transfer in reactors [2, 3, 4], or radiative heat transfer on celestial surfaces [5, 6, 7, 8]. The primary challenge of cavity radiation from the computational point of view is that the interactions are "non-local": in general every surface facet interacts with every other surface facet, where the interactions are quantified via the so-called "reflection matrix," which in general is dense and of size \(n_{\rm facets}\times n_{\rm facets}\). This non-locality results in a loss of sparsity in the matrices arising from finite element discretizations, which in turn leads to very high computational cost when \(n_{\rm facets}\) becomes large (e.g. \(>\) 10,000). The most straightforward approach -- explicitly allocating and computing the dense matrices associated with cavity radiation -- is prohibitively expensive in terms of both memory utilization and computation time, especially when applied on large or highly detailed models. We propose an alternative method of computing cavity radiation which avoids dense matrix discretizations by exploiting the physical nature of radiation, namely its fast (inverse-square) decay relative to the distance between emitting surfaces. Due to this property, many individual entries in the view factor matrix \(F\) and reflection matrix \(C\) (for the precise definitions we refer to Section 2) underlying the computation of the cavity radiation flux interact only weakly, and thus, if grouped together, can be approximated reliably using low-rank blocks. 
Since the grouping or clustering of facets is strongly linked to the underlying geometry, it is natural to store the \(F\) and \(C\) matrices hierarchically, identifying large, weakly interacting matrix blocks through a geometric clustering criterion. In the analogous context of boundary element methods, hierarchical matrices have been successfully applied to large-scale problems, see [9, 10] for comprehensive introductions. While we were working on the implementation, a very similar approach was presented by Potter et al. [8] in the context of thermal irradiance on planetary surfaces, where the radiosity matrix shares many structural properties with the view factor matrix \(F\) from heat transfer. In their paper, Potter et al. describe an efficient construction of a hierarchical matrix with low-rank approximation of individual blocks, and provide an application of their approach to the computation of irradiance on a lunar crater. The numerical results demonstrate the potential benefits of the hierarchical approach when applied to practical problems. The novelty of our approach with regard to Potter et al. is twofold. First, due to the nature of thermal radiation, we will perform computations not only with the hierarchical view factor matrix \(F\), but also with the inverse of the reflection matrix \(C\). Due to the nonlinear (temperature-dependent materials, radiation flux terms) and time-dependent character of the heat equation, we have to perform many applications of \(C^{-1}\), but, because \(C\) is itself _independent_ of the current temperature solution, we are able to employ a _block-LU decomposition_ of \(C\). While there is a non-trivial up-front computational cost involved in the construction of the block-LU decomposition, this cost is amortized over the many time steps and nonlinear iterations which are performed during the course of the simulation. We note that the low-rank structure of the reflection matrix is preserved in its block-LU decomposition, and this fact is exploited to allow the action of \(C^{-1}\) to be computed quickly and with low memory requirements. Second, we use the _adaptive cross approximation_ (ACA) [11] to optimize the construction of the view factor matrix, since we found that it was significantly faster than the singular value decomposition when constructing a low-rank approximation of individual blocks. Due to its strong reliability guarantees, we have opted to use ACA with _full pivoting_ (running linearly in the number of entries). Further computational gains can be expected when using ACA with _partial pivoting_ (running linearly in the number of facets), at the cost of potentially less reliable low-rank approximations due to the use of partial information. A comprehensive discussion of different cross approximation techniques can be found in [12]. Note also that in the current work we do not consider _occlusion detection_ of surface facets, and hence in Section 6 we conduct numerical experiments in which occlusion detection is not required. Including occlusion detection is a natural extension of the implementation, e.g. via a ray-tracing approach [13]. We emphasize that incorporating occlusion detection would not change the block-low rank approach that we present here in any way, since it corresponds to "zeroing out" entries in the view factor matrix that correspond to occluded facets, which is equivalent to modifying only the "input" to the block low-rank algorithm. 
## 2 Problem Setting ### Transient heat equation We consider transient heat transfer in a domain \(\Omega\subset\mathbb{R}^{3}\) with radiation boundary \(\Gamma\subseteq\partial\Omega\) governed by: \[\rho c_{p}\frac{\partial T}{\partial t}=\nabla\cdot(k\nabla T)+f\quad\text{in }\Omega \tag{1}\] \[k\nabla T\cdot\vec{n}=q(T)\quad\text{on }\Gamma \tag{2}\] where \(T\) is the unknown temperature, \(q(T)\) is the radiation heat flux, \(f\) is the internal heat source, \(k\) is the thermal conductivity, \(\rho\) is the material density, and \(c_{p}\) is the specific heat. In addition to the radiation boundary condition (2) shown above, there may in general be other types of boundary conditions (Dirichlet, Neumann) present, though their details are not crucial to our discussion. Furthermore, in order to fully specify the problem, an initial condition for the temperature which applies on all of \(\Omega\) at time \(t=0\) must also be provided. The variational statement associated with (1) is then: find \(T\in\mathcal{H}^{1}(\Omega)\) such that \[\int_{\Omega}\rho c_{p}\frac{\partial T}{\partial t}v\,\mathrm{d}V+\int_{\Omega}k\nabla T\cdot\nabla v\,\mathrm{d}V-\int_{\Gamma}q(T)v\,\mathrm{d}S=\int_{\Omega}fv\,\mathrm{d}V \tag{3}\] holds for all test functions \(v\) from the Sobolev space \(\mathcal{H}^{1}(\Omega)\). Equation (3) is in general a nonlinear equation for \(T\), both because the flux \(q(T)\) depends on \(T\) to the fourth power (see (4)), and because the thermal conductivity and specific heat may depend on the temperature. The nonlinearity of (3) plays a crucial role in its numerical solution, since we need to iterate (via Newton's method) within each time step to find the converged solution at the current time, and each iteration requires the formation and solution of a large linear system of equations involving the Jacobian matrix, as we will further elaborate in Section 3. ### Radiation flux In this section, we briefly recall some of the fundamentals of thermal radiation which are relevant to the current work, as presented in [14]. The thermal energy flux (amount of energy emitted per unit time, per unit area) from a so-called "graybody" thermal radiator is given, according to the Stefan-Boltzmann law of radiation, by \[q=\epsilon\sigma T^{4} \tag{4}\] where \(\sigma\approx 5.669\times 10^{-8}\ \frac{W}{m^{2}K^{4}}\) is the Stefan-Boltzmann constant, \(0<\epsilon\leq 1\) is the surface emissivity (dimensionless), and \(T\) is the absolute surface temperature of the body, measured in Kelvin. As \(\epsilon\to 1\), we recover the case of a "blackbody" radiator. In this formulation, we also assume that the emissivity is independent of the wavelength of the radiation, that is, the surfaces are so-called "ideal" graybodies [15]. The SI units of the energy flux \(q\) are thus \(\frac{W}{m^{2}}\). In the case of two radiating graybody infinite flat planes with temperatures \(T_{1}\) and \(T_{2}\) and surface emissivities \(\epsilon_{1}\) and \(\epsilon_{2}\), respectively, it can be shown [15] that the net radiation energy flux between them is proportional to the difference between their temperatures raised to the fourth power: \[q=\frac{\sigma\epsilon_{1}\epsilon_{2}(T_{1}^{4}-T_{2}^{4})}{1-(1-\epsilon_{1})(1-\epsilon_{2})} \tag{5}\] The case of infinite parallel planes is particularly simple because one can assume that all the energy emitted by plane 1 is absorbed by plane 2, and vice versa, which then admits a closed-form solution based on Kirchhoff's law.
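As a quick numerical illustration of (4) and (5), the following sketch evaluates the graybody fluxes for representative values (our own example, not from the paper).

```python
# Graybody fluxes from (4) and (5); the temperatures and emissivities are illustrative.
SIGMA = 5.669e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def graybody_flux(T, eps):
    """Emitted flux q = eps * sigma * T^4 from a graybody surface at absolute temperature T [K]."""
    return eps * SIGMA * T**4

def parallel_plate_net_flux(T1, T2, eps1, eps2):
    """Net flux between two infinite parallel graybody plates, Eq. (5)."""
    return SIGMA * eps1 * eps2 * (T1**4 - T2**4) / (1.0 - (1.0 - eps1) * (1.0 - eps2))

print(graybody_flux(600.0, 0.8))                        # ~5.9e3 W/m^2
print(parallel_plate_net_flux(600.0, 300.0, 0.8, 0.8))  # ~4.6e3 W/m^2
```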
In real applications, one must of course consider finite, non-planar surfaces with non-uniform temperature fields. The standard approach taken in the general case is thus to discretize the surfaces comprising the radiation cavity into approximately planar finite regions or "facets," and then consider the exchange of energy between individual pairs of facets separately, the details of which are discussed in SS3.2. ## 3 Discretization details ### Finite element discretization First we discretize the three-dimensional domain \(\Omega\) into a mesh \(\mathcal{T}_{h}\) with \(n_{\mathrm{nodes}}\) nodes comprising hexahedral or tetrahedral elements with circum-spherical diameter \(\mathcal{O}(h)\). Our approach does not explicitly depend on the geometric type of the elements used, and should in theory work for all standard element types as well as "hybrid" meshes consisting of a mixture of geometric types. Then, the faces of the elements in \({\cal T}_{h}\) which lie on the radiation boundary \(\Gamma\) can be considered as a lower-dimensional "manifold" mesh \({\cal M}_{h}\) consisting of \(n_{\rm facets}:=|{\cal M}_{h}|\) facets, which will all be allowed to emit and reflect radiation. We next introduce the standard continuous Lagrange nodal basis \(\{\varphi_{N}\}_{N=1}^{n_{\rm nodes}}\) associated to \({\cal T}_{h}\) and state the semi-discrete Galerkin finite element approximation of (3), find \(T_{h}\) such that: \[\int_{\Omega}\rho c_{p}\frac{\partial T_{h}}{\partial t}\varphi_{N}\,{\rm d}V+ \int_{\Omega}k\nabla T_{h}\cdot\nabla\varphi_{N}\,{\rm d}V-\int_{\Gamma}q(T_{ h})\varphi_{N}\,{\rm d}S=\int_{\Omega}f\varphi_{N}\,{\rm d}V \tag{6}\] holds for all \(N=1,\ldots,n_{\rm nodes}\), where \[T_{h}(x):=\sum_{M=1}^{n_{\rm nodes}}T_{M}\varphi_{M}(x) \tag{7}\] and \(T_{M}\) are the unknown coefficients. The argument \(x\) in (7) indicates the spatial dependence of the basis functions (and lack thereof for the coefficients). The temperature on node \(M\) is consequently given by \(T_{M}\), and the associated "radiation power" on node \(M\) is henceforth defined as \[\eta_{M}:=T_{M}^{4} \tag{8}\] The volume integrals in (6) are computed in the usual manner by looping over the elements of \({\cal T}_{h}\) and using standard Gauss quadrature. For the radiation surface integral, we follow the approach presented in the Abaqus manual [16], and treat the radiation flux \(q_{i}\) as a constant on each facet \(i\in{\cal M}_{h}\), so that this term, which we shall denote by \(Q_{N}\), is approximated as \[Q_{N}:=\int_{\Gamma}q(T_{h})\varphi_{N}(x)\;{\rm d}S\approx\sum_{i=1}^{n_{\rm facets }}q_{i}\int_{\Gamma_{i}}\varphi_{N}(x)\;{\rm d}S \tag{9}\] In the preceding few equations, we have introduced the notational convention of using \(N\), \(M\) to index node-based quantities, and \(i\), \(j\) to index facet-based quantities, and this approach will be continued throughout the rest of this section. To complete the description of the finite element discretization, we need an expression for \(q_{i}\), the radiation flux to the \(i\)th facet, in terms of all other facets in the radiation cavity. This is discussed in SS3.2. 
### Radiation flux discretization The key ingredient in modeling the interaction between discretized facets \(i\) and \(j\) of an enclosure is the so-called view factor matrix, whose entries are given by [15] \[F_{ij}=\int_{A_{i}}\int_{A_{j}}\frac{\cos\phi_{i}\cos\phi_{j}}{\pi R^{2}}\, \mathrm{d}A_{j}\,\mathrm{d}A_{i}, \tag{10}\] where \(A_{i}\) is the area of facet \(i\), \(R\) is the distance between facets \(i\) and \(j\), and \(\phi_{i}\) is the angle between the line segment joining the facet centroids and the plane-normal vector of facet \(i\). We note in particular that the diagonal of the view factor matrix consists of all zeros, since (10) is only well-defined when \(i\neq j\). In this work we compute entries of \(F\) via numerical quadrature on the facets. The individual entries \(F_{ij}\) are non-polynomial and hence may require specialized quadrature rules. A typical approach, which we follow in our implementation, is to apply adaptive Gauss quadrature in which the order of the quadrature rule is increased for facets that are close together (hence strongly interacting), and decreased for facets that are further away from one another. In our testing this leads to accurate and computationally efficient computation of the view factor matrix. However, we note that more sophisticated integration techniques can be applied to the \(F_{ij}\) if needed, e.g. a detailed discussion for a more efficient treatment of singular integrals representing the interaction of neighboring facets may be found in [10]. A secondary quantity, which depends on the view factor matrix, is the so-called reflection matrix, which is given by \[C_{ij}=\delta_{ij}-\frac{1-\epsilon_{i}}{A_{i}}F_{ij}\;\Leftrightarrow\;C=I- \Lambda F, \tag{11}\] where \(I\) denotes the identity matrix and \(\Lambda\) is a diagonal scaling matrix with \(\Lambda_{ii}=(1-\epsilon_{i})/A_{i}\). With the preceding definitions in mind, the radiation flux to facet \(i\) can be written [16] using the previously defined quantities as \[q_{i}=\frac{\sigma\epsilon_{i}}{A_{i}}\sum_{j}\epsilon_{j}\sum_{k}F_{ik}C_{kj }^{-1}(\eta_{j}-\eta_{i}) \tag{12}\] where the radiation power \(\eta_{i}\) on facet \(i\) is treated as a constant given by the nodal averaging/projection formula \[\eta_{i}:=\sum_{M=1}^{n_{\mathrm{nodes}}}\frac{1}{A_{i}}\int_{\Gamma_{i}}\eta _{M}\varphi_{M}(x)\;\mathrm{d}S \tag{13}\] where \(A_{i}\) is the area of facet \(i\), whose domain is \(\Gamma_{i}\). We observe that the sum in (13) nominally goes over all nodes \(M\) in the finite element mesh, but in fact only a small number of nodal basis functions \(\varphi_{M}\) are non-zero on any given facet \(\Gamma_{i}\) due to the localized nature of the finite element basis functions. Therefore, many of the terms in the sum are zero. Associated with the formula in (13), it is illustrative to define the "projection" operator \[P_{iM}:=\frac{1}{A_{i}}\int_{\Gamma_{i}}\varphi_{M}(x)\;\mathrm{d}S \tag{14}\] so that \[\eta_{i}=\sum_{M=1}^{n_{\mathrm{nodes}}}P_{iM}\eta_{M} \tag{15}\] The projection operator \(P_{iM}\) therefore maps node-based quantities to facet-based quantities, and we can think of it as a sparse, rectangular \(n_{\mathrm{facets}}\times n_{\mathrm{nodes}}\) matrix for simplicity. 
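To fix ideas, the sketch below assembles \(F\) with a simple one-point (centroid) quadrature of (10) and then forms \(C\) from (11). The paper itself uses adaptive Gauss quadrature, so this is a simplified stand-in, and occlusion is ignored as in the rest of the paper.

```python
import numpy as np

def view_factor_matrix(centroids, normals, areas):
    """Assemble F via a one-point (centroid) quadrature of (10).

    centroids: (n, 3) facet centroids; normals: (n, 3) unit facet normals; areas: (n,).
    A simplified stand-in for the adaptive Gauss rule described above.
    """
    n = len(areas)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = centroids[j] - centroids[i]
            R2 = d @ d
            cos_i = (normals[i] @ d) / np.sqrt(R2)
            cos_j = -(normals[j] @ d) / np.sqrt(R2)
            if cos_i > 0 and cos_j > 0:          # only mutually facing facets contribute
                F[i, j] = cos_i * cos_j / (np.pi * R2) * areas[i] * areas[j]
    return F

def reflection_matrix(F, eps, areas):
    """C = I - Lambda F with Lambda_ii = (1 - eps_i) / A_i, Eq. (11)."""
    Lam = np.diag((1.0 - eps) / areas)
    return np.eye(F.shape[0]) - Lam @ F
```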
Combining all of the preceding ingredients, we finally get the following expression for \(Q_{N}\), originally defined in (9) (note: all sums go from 1 to \(n_{\mathrm{facets}}\) unless otherwise noted) \[Q_{N} :=\int_{\Gamma}q(T_{h})\varphi_{N}(x)\;\mathrm{d}S\] \[\approx\sum_{i}q_{i}\int_{\Gamma_{i}}\varphi_{N}(x)\;\mathrm{d}S\] \[=\sum_{i}\frac{\sigma\epsilon_{i}}{A_{i}}\sum_{j}\epsilon_{j}\sum _{k}F_{ik}C_{kj}^{-1}(\eta_{j}-\eta_{i})\int_{\Gamma_{i}}\varphi_{N}(x)\; \mathrm{d}S\] \[=\sum_{i}\left(\sum_{j}R_{ij}\eta_{j}-\sum_{k}R_{ik}\eta_{i} \right)\left(\frac{1}{A_{i}}\int_{\Gamma_{i}}\varphi_{N}(x)\;\mathrm{d}S\right)\] \[=\sum_{i}\sum_{j}\bar{R}_{ij}\eta_{j}P_{iN}\] \[=\sum_{i}\sum_{j}\bar{R}_{ij}\left(\sum_{M=1}^{n_{\mathrm{nodes} }}P_{jM}\eta_{M}\right)P_{iN}\] \[=\sum_{i}\sum_{j}\sum_{M=1}^{n_{\mathrm{nodes}}}P_{iN}\bar{R}_{ ij}P_{jM}\eta_{M} \tag{16}\] where we have used (14), (15) and, for simplicity of notation, we have defined the tensors \(R\) and \(\bar{R}\) as \[R_{ij} :=\sigma\epsilon_{i}\epsilon_{j}\sum_{k}F_{ik}C_{kj}^{-1} \tag{17}\] \[\bar{R}_{ij} :=R_{ij}-\delta_{ij}\sum_{k}R_{ik} \tag{18}\] We can write the double facet sum from (16) as \[S_{NM}:=\sum_{i}\sum_{j}P_{iN}\bar{R}_{ij}P_{jM} \tag{19}\] or, employing matrix notation \[S:=P^{\top}\bar{R}P \tag{20}\] which gives us the following final expression for \(Q_{N}\) \[Q_{N}=\sum_{M=1}^{n_{\rm nodes}}S_{NM}\eta_{M} \tag{21}\] ### Time discretization and Newton iteration In the time domain, we discretize (6) using a standard implicit Euler scheme. Let \(T_{h}^{n}\) be the approximate solution at time level \(t_{n}\), and consider a fixed time step of size \(\Delta t\) such that \(t_{n+1}=t_{n}+\Delta t\). The \(N\)th component of the discrete residual vector associated with (6) is then given by \[\mathscr{R}_{N}(T_{h}^{n+1}) :=\int_{\Omega}\rho c_{p}\left(\frac{T_{h}^{n+1}-T_{h}^{n}}{ \Delta t}\right)\varphi_{N}\,\mathrm{d}V+\int_{\Omega}k\nabla T_{h}^{n+1} \cdot\nabla\varphi_{N}\,\mathrm{d}V\] \[-Q_{N}^{n+1}-\int_{\Omega}f\varphi_{N}\,\mathrm{d}V \tag{22}\] The solution of the nonlinear system of equations \(\mathscr{R}=0\) implied by (22) is performed at each time step via Newton-Krylov [17; 18] iterations, which require the assembly of the Jacobian \(J\) associated with \(\mathscr{R}\) based on the current temperature iterate. Note that the Jacobian contribution due to the volume integral terms, which we shall denote by \(J^{\rm sparse}\), will be sparse due to the usual properties of the Galerkin finite element method. The flux term \(Q_{N}^{n+1}\), on the other hand, is both nonlinear and non-local, as discussed previously. The Jacobian contribution for this term, \(J^{\rm cav}\), is thus a dense matrix given by \[J^{\rm cav}_{NM}:=\frac{\partial Q_{N}}{\partial T_{M}}=\sum_{K=1}^{n_{\rm nodes }}S_{NK}\frac{\partial}{\partial T_{M}}\left(T_{K}^{4}\right)=4S_{NM}T_{M}^{3} \tag{23}\] where we have dropped the \(n+1\) superscript for clarity. We also emphasize that the matrix \(S\), as defined in (20), forms the core part of \(J^{\rm cav}\), and also is temperature independent. The total Jacobian \[J=J^{\rm sparse}+J^{\rm cav} \tag{24}\] therefore has both sparse and dense contributions, and requires specialized handling, which is described in Section 3.4. 
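For small cavities, the matrices derived above can be assembled densely as a reference; the sketch below follows (17), (18), (20) and (23) directly (illustrative only: the hierarchical low-rank treatment of Sections 4 and 5 is what makes this viable at scale).

```python
import numpy as np

def cavity_jacobian_dense(F, C, P, eps, T_nodes, sigma=5.669e-8):
    """Dense reference assembly of S = P^T Rbar P and J_cav = 4 S diag(T^3).

    F, C: n_facets x n_facets; P: n_facets x n_nodes projection; eps: (n_facets,);
    T_nodes: (n_nodes,) current nodal temperatures.
    """
    FCinv = np.linalg.solve(C.T, F.T).T            # F C^{-1}
    R = sigma * np.outer(eps, eps) * FCinv         # R_ij = sigma e_i e_j (F C^{-1})_ij, Eq. (17)
    Rbar = R - np.diag(R.sum(axis=1))              # Rbar = R - diag(row sums), Eq. (18)
    S = P.T @ Rbar @ P                             # Eq. (20)
    return 4.0 * S * (T_nodes**3)[None, :]         # J_cav[N, M] = 4 S[N, M] T_M^3, Eq. (23)
```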
### Iterative approach In case of a large number of facets, the matrix size of the (dense) cavity part \(J^{\rm cav}\) of the Jacobian becomes prohibitive, making direct solution methods for the solution of the linear system in each nonlinear iteration impractical. We hence resort to preconditioned iterative methods (such as GMRES), requiring the implementation of the action of the Jacobian and a suitable preconditioner onto an arbitrary vector \(x\). Thanks to the additive decomposition of the total Jacobian (24), we may consider the two terms in the product \[Jx=J^{\rm sparse}x+J^{\rm cav}x \tag{25}\] in isolation. Since it is inexpensive to compute, we assume that \(J^{\rm sparse}\) is available in fully assembled (sparse) form, and the resulting product \(J^{\rm sparse}x\) can therefore be computed efficiently by standard matrix-vector operations. In addition to being conveniently fully assembled, we have also observed that \(J^{\rm sparse}\) is suitable for use as a preconditioner. Our preconditioned iteration hence reads \[x\mapsto(J^{\rm sparse})^{-1}\left(J^{\rm sparse}x+J^{\rm cav}x\right) \tag{26}\] where \((J^{\rm sparse})^{-1}\) is computed explicitly via a sparse LU decomposition. We note that more refined choices of preconditioners are possible (in particular involving additional terms for the cavity part of the Jacobian), but we have not investigated these in more detail since, in our examples, we generally observed good performance (small numbers of GMRES iterations) with this simple choice. The approach for efficiently storing and computing matrix-vector products with \(J^{\rm cav}\) relies on a block low-rank approximation of the view factor matrix \(F\), and a subsequent approximation of the inverse of the reflection matrix \(C\); these are described in detail in Sections 4 and 5. If the material properties (thermal conductivity, specific heat, etc.) are temperature- and time-independent, then \(J^{\rm sparse}\) is itself also independent of the temperature, and thus theoretically only needs to be computed once and reused for all subsequent iterations. Our present implementation is designed to handle the general case of temperature-dependent material properties, and thus does not currently take advantage of this particular optimization. This choice is justified by the solver performance results reported in Section 6, which show that, especially in the fine grid limit, the time spent recomputing \(J^{\rm sparse}\) at each iteration is small compared to the time spent in the computation of \(J^{\rm cav}\). ## 4 Block low-rank approximation of the view factor matrix The view factor matrix \(F\) defined in (10) represents the radiation interaction between different facets from the mesh \(\mathcal{M}_{h}\). The particular form of (10) immediately gives rise to the following two observations (cf. [8]): 1. Although we do not consider occlusion detection in the current work, matrix blocks corresponding to occluded groups of facets are identically zero. This may result in major computational gains, as the corresponding matrix entries need not be computed or stored explicitly. 2. The radiation interaction decays with increasing distance \(R\). Not only does this make the entries \(F_{ij}\) smaller in absolute value, but it also leads to a weak interaction between groups of facets that are far apart. This enables data-sparse approximations of the corresponding matrix blocks of the view factor matrix. 
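Before describing the hierarchical construction, a small numerical experiment illustrates observation (ii): a block of the view factor kernel between two well-separated facet clusters has rapidly decaying singular values and hence a small numerical rank (a sketch assuming unit facet areas and the pointwise integrand of (10)).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated clusters of facet centroids on parallel planes facing each other.
m = 200
A = np.c_[rng.uniform(0, 1, (m, 2)), np.zeros(m)]       # cluster tau   (z = 0)
B = np.c_[rng.uniform(0, 1, (m, 2)), np.full(m, 5.0)]   # cluster sigma (z = 5), far away
nA, nB = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])

# Far-field block of the view factor kernel between the two clusters (unit areas).
D = B[None, :, :] - A[:, None, :]
R2 = (D**2).sum(-1)
block = (D @ nA) * (-(D @ nB)) / (np.pi * R2**2)

s = np.linalg.svd(block, compute_uv=False)
print(int((s > 1e-8 * s[0]).sum()))   # numerical rank << 200 for well-separated clusters
```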
Our focus in this paper is to explore aspect (ii) using hierarchical block low-rank approximations of the view factor matrix. The key elements of this approximation are: 1. Geometric clustering of the facets via a \(k\)-d tree to hierarchically subdivide the row and column index set \(\mathcal{I}=\{1,...,n\}\) of \(F\). 2. Using the subdivision from (a), the block index set \(\mathcal{I}\times\mathcal{I}\) is hierarchically subdivided into smaller blocks until the corresponding facet groups are sufficiently far apart. To prevent unreasonably small matrix blocks, the subdivision is also stopped if a user-defined minimal block size has been reached. 3. Matrix blocks corresponding to distant groups of facets are approximated by low-rank matrices. The rest of the blocks are stored as dense matrices. In the following, we provide a more detailed description of the techniques used for (a), (b) and (c). ### Geometric Clustering To group facets from the mesh \(\mathcal{M}_{h}\) into clusters, we use a \(k\)-d tree based on the centroids of the individual facets. We note that an alternative, yet similar approach would be to consider a clustering approach based on the bounding boxes of the facets. With the index set \(\mathcal{I}:=\{1,...,n\}\) corresponding to the centroids of all facets as a starting point, the clustering proceeds iteratively by geometrically splitting the point cloud of centroids via a hyperplane. Once the split has been defined, the iteration continues recursively for the two disjoint sets of centroids. The procedure for an arbitrary index set \(\tau\subset\mathcal{I}\) is defined in Algorithm 1. ``` if\(\#\tau\leq n_{\text{min}}\)then \(T\leftarrow\{\tau\}\)\(\triangleright\) leaf index set else \(\tau=:\tau_{\text{left}}\cup\tau_{\text{right}}\)\(\triangleright\) split using a hyperplane \(T_{\text{left}}\leftarrow\texttt{BuildIndexTree}(\tau_{\text{left}})\) \(T_{\text{right}}\leftarrow\texttt{BuildIndexTree}(\tau_{\text{right}})\) \(T\gets T_{\text{left}}\uplus T_{\text{right}}\)\(\triangleright\) left and right subtrees endif ``` **Algorithm 1**BuildIndexTree(\(\tau\subset\mathcal{I}\)) The initial call to Algorithm 1 reads \[T_{\mathcal{I}}:=\texttt{BuildIndexTree}(\mathcal{I}), \tag{27}\] where the obtained _index tree_\(T_{\mathcal{I}}\) represents the hierarchical split of all centroids. We note that in Algorithm 1 the recursion is only continued in case the cardinality of the index set \(\tau\) is larger than a user-defined minimal leaf size \(n_{\min}\geq 1\) (say \(n_{\min}\sim 100\)). This is to prevent the creation of arbitrarily small matrix blocks which would negatively impact the performance of the overall hierarchical approach. Since the splitting in Algorithm 1 is unaware of the original ordering of the facets in \(\mathcal{M}_{h}\), the nodes \(\tau\in T_{\mathcal{I}}\) generally correspond to non-contiguous index sets. To associate the obtained splitting to contiguous row and column index sets of the view factor matrix \(F\), it is hence convenient to permute the rows and columns of \(F\) accordingly. ### Block Subdivision Using the hierarchical subdivision of the index set \(\mathcal{I}\), we next introduce a block subdivision of \(\mathcal{I}\times\mathcal{I}\) defining the eventual matrix blocks of the view factor matrix \(F\). As before, we proceed by recursively subdividing the block index set \(\mathcal{I}\times\mathcal{I}\) into smaller subblocks until a stopping criterion is fulfilled. 
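A compact Python sketch of the clustering in Algorithm 1, on which the block subdivision below is built, is given next. It is a simplification in two respects: it returns only the leaf index sets (whose concatenation defines the row/column permutation of \(F\)) rather than the full tree, and it splits at the median of the widest coordinate direction. The names and defaults are illustrative only.

```python
import numpy as np

def build_index_tree(centroids, idx, n_min=100):
    # recursive hyperplane split of a set of facet-centroid indices (cf. Algorithm 1)
    if len(idx) <= n_min:
        return [idx]                                          # leaf index set
    pts = centroids[idx]
    axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))  # widest direction
    order = np.argsort(pts[:, axis])
    half = len(idx) // 2
    left, right = idx[order[:half]], idx[order[half:]]        # split at the median
    return build_index_tree(centroids, left, n_min) + \
           build_index_tree(centroids, right, n_min)

# usage: cluster 1000 random facet centroids into leaves of at most 100 indices
rng = np.random.default_rng(2)
centroids = rng.random((1000, 3))
leaves = build_index_tree(centroids, np.arange(1000), n_min=100)
print([len(leaf) for leaf in leaves])
```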
Since we expect that groups of facets which are far apart interact only weakly, we use a standard geometrically-based _admissibility condition_ (cf. [9]) as a stopping criterion. Given two index sets \(\sigma,\tau\subset\mathcal{I}\), we denote the corresponding matrix block \(\sigma\times\tau\) as _admissible_ if \[\min(\operatorname{diam}(A_{\sigma}),\operatorname{diam}(A_{\tau}))\leq c \operatorname{dist}(A_{\sigma},A_{\tau}), \tag{28}\] where \(A_{\sigma},A_{\tau}\) denote the point clouds of centroids of the facets belonging to index sets \(\sigma,\tau\), respectively. The recursive procedure for the block subdivision is defined in Algorithm 2. The initial call to Algorithm 2 reads \[T_{\mathcal{I}\times\mathcal{I}}:=\texttt{BuildBlockTree}(\mathcal{I},\mathcal{ I}), \tag{29}\] where the obtained _block tree_\(T_{\mathcal{I}\times\mathcal{I}}\) represents the hierarchical block subdivision of the view factor matrix \(F\). **Example 1**.: _Consider a simple equidistant arrangement of the centroids of \(n\) facets along a straight line. For a trivial minimal leaf size of \(n_{\min}=1\), Algorithms 1 and 2 would result in a block structure as depicted in Figure 1._ **Example 2**.: _Consider two finite parallel flat plates of size \(L\times L\), separated by a distance \(L\), and let their surfaces be successively refined so that in total there are \(40\times 40\), \(64\times 64\), and \(106\times 106\) facets, respectively, on each surface as shown in Fig. 2. Then, using a leaf size of 128, we obtain the block low-rank structure shown in Fig. 3. Note that the "on-diagonal" blocks are initially represented as dense matrices on the coarse grids, but they are eventually revealed to have some internal low-rank structure as the mesh is refined._ ### Adaptive Cross Approximation For the low-rank approximation of data sparse blocks one can use the singular value decomposition (SVD) to compute the low-rank decomposition, but it is well know that this becomes computationally expensive for large blocks. An alternative approach that is typically more computationally efficient is to apply adaptive cross approximation (ACA, cf. [11]). Using ACA we can construct a \(k\)-rank approximation of a data sparse matrix \(A\in\mathbb{R}^{m,n}\) just in \(\mathcal{O}(mnk)\) time, which is significantly faster than classical SVD. Figure 1: Visualisation of the block structure of the view factor matrix for a trivial one-dimensional arrangement of \(n\) facets. Gray blocks are represented as low-rank matrices and blue blocks as dense matrices. The main idea of ACA is to iteratively determine sets \(\mathcal{R}=\{i_{1},\ldots,i_{k}\}\), \(\mathcal{C}=\{j_{1},\ldots,j_{k}\}\) of row and column indices, respectively, to construct a low-rank approximation of admissible matrix blocks \(X\) of \(F\) of the form \[X\approx X(:,\mathcal{C})\left(X(\mathcal{R},\mathcal{C})\right)^{-1}X( \mathcal{R},:), \tag{30}\] see Figure 4 for an illustration. The representation (30) may then be further postprocessed (or truncated) to obtain low-rank factors \(X\approx\tilde{X}:=UV^{\top}\) in standard form. There are multiple approaches to how the pivoting should be performed (some of them provide accuracy guarantees related to best low-rank approximation), but exact details go beyond the scope of this paper. A more detailed discussion on the different techniques can be found in [12]. 
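As a concrete, simplified illustration of the cross approximation, the sketch below builds \(X\approx UV\) for an admissible block. For clarity it keeps the residual explicitly and pivots on its largest entry, which matches the \(\mathcal{O}(mnk)\) cost quoted above; a production ACA with partial pivoting would evaluate only the \(k\) rows and columns it actually needs. The far-field test block and the tolerance are placeholders.

```python
import numpy as np

def cross_approximation(A, eps_rel=1e-6, max_rank=None):
    # A ~ U @ V built from crosses of rows/columns of the running residual
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    R = np.array(A, dtype=float)          # explicit residual, kept for clarity
    norm_A = np.linalg.norm(A)
    U, V = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)   # pivot entry
        if abs(R[i, j]) < 1e-14:
            break
        u = R[:, j].copy()
        v = R[i, :] / R[i, j]
        U.append(u)
        V.append(v)
        R -= np.outer(u, v)               # rank-one update of the residual
        if np.linalg.norm(R) <= eps_rel * norm_A:
            break
    return np.column_stack(U), np.vstack(V)

# test on a smooth far-field block between two well-separated clusters of points
x = np.linspace(0.0, 1.0, 300)
y = np.linspace(5.0, 6.0, 300)
A = 1.0 / (np.pi * (x[:, None] - y[None, :]) ** 2)
U, V = cross_approximation(A, eps_rel=1e-6)
print("rank:", U.shape[1],
      "rel. error:", np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```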
In Section 6 we investigate different approximation tolerances \(\epsilon_{\mathrm{rel}}\) which can be set for ACA to satisfy: \[\frac{\|X-\tilde{X}\|}{\|X\|}\leq\epsilon_{\mathrm{rel}} \tag{31}\] for all blocks \(X\) as defined above. We will analyze the resulting accuracy and demonstrate the computational advantage of ACA for practical cavity radiation problems with respect to the choice of \(\epsilon_{\mathrm{rel}}\) in the solver.

Figure 2: Separated parallel flat plates with 3200 facets on the cavity surface.

Figure 3: View factor matrix block low-rank approximation for successively refined parallel flat plates. These block structures were computed numerically using the approach described in this Section.

Figure 4: Low rank approximation by Adaptive Cross Approximation (ACA) with row and column indices \(\mathcal{R}=\{2,4,7\}\) and \(\mathcal{C}=\{1,3,6\}\), respectively.

## 5 Cavity Jacobian assembly

The iterative approach introduced in Section 3.4 requires us to efficiently construct the cavity part \(J^{\mathrm{cav}}\) of the Jacobian from (23) and to implement its action on an arbitrary vector. We recall from (20) that the matrix-vector product with \(J^{\text{cav}}\) is defined through a (sparse) projection operator \(P\) and a dense coupling matrix \(\bar{R}\) which depends on the view factor matrix \(F\) and the inverse of the reflection matrix \(C\) via (17), (18). We now assume that the view factor matrix \(F\) has been constructed by the techniques introduced in Section 4 and is hence available in a hierarchical block low-rank format. It is then straightforward to obtain the reflection matrix \(C=I-\Lambda F\) from (11) within the same hierarchical format as well, since the diagonal matrix \(\Lambda\) only introduces a simple row scaling in each matrix block of \(F\), and the identity matrix \(I\) can be added to the diagonal (dense) blocks without modification of the block structure. The key computational task is hence to efficiently compute an (approximate) factorization of \(C\). For simplicity, we construct this factorization within the same hierarchical block low-rank format as \(F\), though alternative choices are possible (cf. [9]). The remaining task within our iterative framework is then to implement matrix-vector multiplication with matrices in hierarchical block low-rank format and arbitrary vectors.

### Hierarchical block LU decomposition of the reflection matrix

The factorization of the reflection matrix \(C\approx LU\) proceeds recursively using the block structure defined through Algorithm 2. The first block subdivision leads to a block system \[\begin{bmatrix}C_{11}&C_{12}\\ C_{21}&C_{22}\end{bmatrix}=\begin{bmatrix}L_{11}&0\\ L_{21}&L_{22}\end{bmatrix}\cdot\begin{bmatrix}U_{11}&U_{12}\\ 0&U_{22}\end{bmatrix}\] which can be solved via the following four steps:

1. perform the LU decomposition \(L_{11}U_{11}=C_{11}\),
2. solve for \(U_{12}=L_{11}^{-1}C_{12}\) by forward substitution,
3. solve for \(L_{21}=C_{21}U_{11}^{-1}\) by backward substitution,
4. perform the LU decomposition \(L_{22}U_{22}=C_{22}-L_{21}U_{12}\).

The LU decompositions in Steps 1 and 4 are computed recursively in case the corresponding blocks are further subdivided, whereas dense LU factorizations are employed in leaf blocks. The forward and backward substitutions in Steps 2 and 3 are again implemented by recursive algorithms involving block matrix-matrix products and additions, cf. [9] for a detailed outline.
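To make the four steps above concrete, the following sketch performs the recursive \(2\times 2\) block factorization on dense NumPy blocks; in the actual solver the off-diagonal blocks are stored in low-rank form and every product or addition is followed by the rank truncation discussed next. The unpivoted leaf factorization is our simplification and is benign for the synthetic test matrix below, which mimics the diagonal dominance of a reflection-type matrix \(C=I-\Lambda F\).

```python
import numpy as np
from scipy.linalg import solve_triangular

def lu_nopivot(A):
    # dense unpivoted LU used in the leaf blocks of this sketch
    n = A.shape[0]
    L, U = np.eye(n), np.array(A, dtype=float)
    for k in range(n - 1):
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
        U[k + 1:, k] = 0.0
    return L, U

def block_lu(C, leaf=64):
    n = C.shape[0]
    if n <= leaf:
        return lu_nopivot(C)
    h = n // 2
    C11, C12, C21, C22 = C[:h, :h], C[:h, h:], C[h:, :h], C[h:, h:]
    L11, U11 = block_lu(C11, leaf)                                  # step 1
    U12 = solve_triangular(L11, C12, lower=True)                    # step 2
    L21 = solve_triangular(U11, C21.T, trans='T', lower=False).T    # step 3
    L22, U22 = block_lu(C22 - L21 @ U12, leaf)                      # step 4 (Schur complement)
    L = np.block([[L11, np.zeros((h, n - h))], [L21, L22]])
    U = np.block([[U11, U12], [np.zeros((n - h, h)), U22]])
    return L, U

# synthetic reflection-type matrix: identity minus a scaled row-stochastic matrix
rng = np.random.default_rng(3)
n = 300
F = rng.random((n, n))
F /= F.sum(axis=1, keepdims=True)
C = np.eye(n) - 0.8 * F
L, U = block_lu(C)
print(np.linalg.norm(L @ U - C) / np.linalg.norm(C))
```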
It is important to note that matrix additions and multiplications will formally lead to a rank increase in the corresponding low-rank blocks and hence need to be coupled with a suitable rank truncation policy. In our experiments, we have chosen to fix a relative accuracy at the value \(\epsilon_{\rm rel}\) (equal to the approximation accuracy used for the view factor matrix \(F\)) and to determine the ranks adaptively based on the singular value decay within the low-rank blocks. As a consequence of the truncation, the computed LU decomposition \(C\approx LU\) is only approximate, but within a prescribed, controllable accuracy.

### Matrix-vector assembly

Once the initial factorization of the reflection matrix has been obtained, it can be re-used within all iterations of the cavity Jacobian assembly. In particular, the action \(x\mapsto C^{-1}x\) is approximated by \(x\mapsto U^{-1}(L^{-1}x)\) which is implemented as a subsequent forward and backward substitution. Similarly, the assembly of the action \(x\mapsto Fx\) is performed recursively, following the block structure of \(F\). Most remarkably, the matrix-vector assembly does not involve any further approximation but is performed "exactly" (within the numerical accuracy of floating point operations).

## 6 Numerical experiments

In this section, we consider radiative heat transfer between a collection of spheres arranged in a Fibonacci spiral pattern [19]. The spiral configuration is chosen because it leads to a distribution of both nearby and distant facets that provide a good illustration of the capabilities of the block low-rank approach presented in the preceding sections. (We also note that there is little occlusion between the various spheres when they are arranged in this manner, which justifies our omission of occlusion detection for this test case.) Since we include the entire exterior boundary of each sphere in the radiation cavity, there will be some facets (which we term "isolated" facets) that cannot "see" any other facets within the cavity, and thus produce identically zero rows in the view factor matrix.

### Problem setup and representative results

The physical setup of the problem consists of a central, heated sphere surrounded by several ambient-temperature spheres of the same size arranged in a spiral pattern, as shown in Fig. 5. Referring to the labeling system shown in Fig. 5a, sphere 1 is located at the origin, and then spheres 2, 3, 5, 7, 9, 11, and 13 are offset by \((\pm a_{n},\pm a_{n})\) from the previous sphere's location, where \(a_{n}\) is the \(n\)th Fibonacci number, and the signs are chosen so that a counter-clockwise spiral is formed. Finally, the remaining five spheres are placed at the center of the quarter-circular arcs joining the previous spheres. The spheres are meshed with four-node tetrahedral elements, and have 88, 240, 432, 720, 1248, 1992, and 2664 exterior facets at Levels 1-7, respectively, as shown in Fig. 5b. The total number of mesh nodes, elements, and cavity facets (including all 13 spheres) for each mesh level is given in Table 1.

Figure 5: Spheres (a) arranged in Fibonacci spiral configuration and (b) showing the different mesh refinement levels used.

This geometric configuration provides several different opportunities for low-rank approximation. First, because the distance between the spheres increases rapidly as one proceeds around the spiral, there will be cavity facets
in relatively close proximity (near the center of the spiral) as well as far apart, relative to the mesh spacing (facet sizes) present in the mesh. Second, because the spheres are arranged in a planar spiral, there are some facets (around 7% of the total number of facets) on the "top" and "bottom" of each sphere, as well as along the exterior of the outer arm of the spiral, which cannot "see" any other facets, i.e. are isolated. Although this configuration of spheres does not conform to the classical definition of a "cavity," i.e. a convex, closed region, the characteristics listed previously make it an interesting test problem for assessing the effectiveness of the low-rank solver's capabilities. The difference between a closed and open cavity is reflected in the treatment of the view factor matrix entries as follows. First, we define the (scaled) \(i\)th row sum \[s_{i}:=\frac{1}{A_{i}}\sum_{j}F_{ij} \tag{32}\] For the "truth" view factor matrix, we of course have \(0\leq s_{i}\leq 1\), but because of quadrature error and floating point tolerances, we sometimes obtain row sums slightly outside this range. Therefore, we define the corresponding "clamped" row sum \[c_{i}:=\left\{\begin{array}{ll}0,&s_{i}<0\\ 1,&s_{i}>1\\ s_{i},&\text{otherwise}\end{array}\right. \tag{33}\] \begin{table} \begin{tabular}{c c c c} \hline \hline Mesh Level & Num. Facets & Num. Nodes & Num. Elements \\ \hline 1 & 1144 & 793 & 2782 \\ 2 & 3120 & 3081 & 14,339 \\ 3 & 5616 & 7553 & 38,766 \\ 4 & 9360 & 13,910 & 72,956 \\ 5 & 16,224 & 32,097 & 175,422 \\ 6 & 25,896 & 59,241 & 328,211 \\ 7 & 34,632 & 90,051 & 505,258 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of facets, nodes, and elements for each mesh level in the Fibonacci spheres problem. We use a first-order Lagrange finite element, so the number of degrees of freedom (DOFs) in the temperature solve is equal to the number of nodes for each mesh level. Then, in the case of a closed cavity, the \(i\)th row of the view factor matrix is scaled by \(c_{i}\), i.e. if \(c_{i}\neq 0\), \[F_{ij}\leftarrow\frac{F_{ij}}{c_{i}}\quad\forall i,j \tag{34}\] In an open cavity, on the other hand, we do not scale the view factor matrix rows, but instead define an "ambient" temperature, \(T_{\infty}\), and include an additional nonlinear boundary flux term in the residual (6) which is given by \[B_{N}(T_{h}):=\sum_{i}\int_{\Gamma_{i}}\left(1-c_{i}\right)\left(T_{h}^{4}-T_{ \infty}^{4}\right)\varphi_{N}\,\mathrm{d}S \tag{35}\] to represent thermal energy which is "radiated to ambient." While the most "realistic" approach would be to treat this model as an open cavity problem, with almost all the thermal energy being radiated to ambient, and little heat actually being transferred between the spheres, we found that by instead treating this model as an artificially (mathematically) closed cavity using the approach described above, the solutions had a more pronounced transient behavior and hence provided a better test case for our implementation. That is, in the open cavity case, the surrounding spheres' temperature remains nearly unchanged, while in the closed cavity case, the surrounding spheres are heated non-uniformly. This choice allowed us to better classify the accuracy of the different low-rank approaches relative to the Direct solve approach. The initial temperature of all the spheres is set to 300K, except for the central sphere, which is set to 1000K. 
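The row-sum bookkeeping in (32)-(34) and a facet-averaged version of the open-cavity term (35) can be sketched in a few lines. In this illustration (not the code used for the results), `P` is assumed to be the facet-to-node projection with \(P_{iN}=\frac{1}{A_{i}}\int_{\Gamma_{i}}\varphi_{N}\,\mathrm{d}S\) that appears in the derivation of (16), `areas` holds the facet areas \(A_{i}\), and any physical constants are taken to be absorbed exactly as written in (35).

```python
import numpy as np

def row_sums(F, areas):
    # s_i from (32) and the clamped c_i from (33)
    s = F.sum(axis=1) / areas
    return s, np.clip(s, 0.0, 1.0)

def close_cavity(F, c):
    # closed-cavity treatment (34): divide every nonzero row by its clamped sum
    F_closed = F.copy()
    nonzero = c > 0.0
    F_closed[nonzero] /= c[nonzero, None]
    return F_closed

def ambient_flux(P, areas, c, T_nodes, T_inf):
    # facet-averaged sketch of the open-cavity term (35),
    # using int_{Gamma_i} phi_N dS = A_i * P_iN
    eta_facet = P @ (T_nodes ** 4)
    q = (1.0 - c) * (eta_facet - T_inf ** 4)
    return P.T @ (areas * q)
```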
There is no internal heat generation, and the radiation cavity is defined to be the union of the surfaces of all the spheres. The final solution time is set to 1000s, and we use a uniform time step of \(\Delta t=25\)s, so that a total of 40 time steps are computed. We use the same time discretization for all mesh levels, though as mentioned elsewhere, a more realistic mesh refinement study would refine in both time and space simultaneously to reduce both temporal and spatial discretization errors at approximately the same rate. We treat this model as a closed cavity, which means that all thermal energy radiated from the central sphere is eventually absorbed (and possibly re-emitted) by the other spheres. At steady state, all spheres should reach the same equilibrium temperature. A representative solution at time \(t=1000s\) on the Level 7 mesh is shown in Fig. 6, where the color bar is set to a maximum of 380K in order to better highlight the temperature variation on the non-central spheres; this causes the central sphere to appear as a single color which is outside the maximum temperature range. Also, note that we are only showing the central seven spheres in Fig. 6 since the outer spheres are both too far away to show in the same figure with a reasonable level of detail, and the temperature fields on the outer spheres are more uniform and hence less interesting than the inner spheres. More detailed images of sphere 2 (the sphere closest to the central sphere) at evenly-spaced time steps are shown in Fig. 7. This figure clearly shows that the side of sphere 2 which is closest to the central sphere is preferentially heated relative to the far side, which remains several degrees cooler. In Fig. 8, we show the internal details of the temperature field in sphere 3, by creating a clip plane whose normal is orthogonal to the line segment which joins the centers of spheres 1 and 3. This figure shows how heat is transferred internally in the sphere (via conduction) after the surface is heated via radiation. The temperature field contours within the sphere are curved rather than straight, a result of the non-uniform radiation heat flux being applied to the surface. Figure 6: Surface temperature of central spheres at time \(t=1000s\). Figure 7: Surface temperature of sphere 2 for various simulation times. Figure 8: Clip plane showing internal temperature of sphere 3 for various simulation times. ### Accuracy comparison with Direct solve method When possible, we compare the performance of the low-rank solver to the so-called "Direct solve" method in which the full view factor and reflection matrices are explicitly constructed, and the reflection matrix inverse is explicitly computed. That is, the \(J^{\text{cav}}\) contribution from (25) is a true dense matrix, and the Direct solver thus has no way of taking advantage of e.g. the zero rows arising from the isolated facets mentioned previously. In our comparisons below in which the low-rank truncation tolerance \(\epsilon_{\text{rel}}\) is varied, the Direct solution method is taken as the "truth" solution, and the relative error in the low-rank solution methods is computed via the discrete space-time norm \[e(T):=\max_{t_{n}}\frac{\|T-T_{\text{Direct}}\|}{\|T_{\text{Direct}}\|} \tag{36}\] where \(T_{\text{Direct}}\) is the solution computed by the Direct solve method, the maximum is taken over all time steps \(t_{n}\), \(n=1,\ldots,n_{\text{steps}}\), and \(\|\cdot\|\) is the discrete \(\ell^{2}\)-norm. 
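For reference, the space-time error measure (36) amounts to a short reduction over the stored solution snapshots; the array layout assumed below is ours.

```python
import numpy as np

def space_time_relative_error(T_lowrank, T_direct):
    # e(T) from (36); both arrays are assumed to have shape (n_steps, n_nodes)
    num = np.linalg.norm(T_lowrank - T_direct, axis=1)
    den = np.linalg.norm(T_direct, axis=1)
    return float(np.max(num / den))
```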
In addition to comparing the temperature fields computed by the Direct and low-rank approaches, we also provide detailed comparisons of the overall runtime and memory usage of the different approaches, in order to quantify the effectiveness of the low-rank solver approach. In our comparisons we employed the sequence of meshes from Level 1 to 5 shown in Fig. 5b. On each mesh considered, we also varied the truncation tolerance \(\epsilon_{\text{rel}}\) (cf. (31)) from a maximum value of 0.75 to a minimum value of \(10^{-6}\). The results of these calculations are shown in Fig. 9a, and provide good evidence that the accuracy of the low-rank solver depends continuously on the truncation accuracy \(\epsilon_{\text{rel}}\), but does not depend strongly on the mesh refinement level. In practice, this means that one can choose a single value of \(\epsilon_{\text{rel}}\) for an entire mesh refinement study, and expect a consistent level of agreement with the corresponding Direct solve at that mesh refinement level. The relative error comparisons stop at Level 5 since the Direct solver required more RAM than was available on the compute node in order to successfully solve the problem on the Level 6 and 7 meshes. The low-rank solver, on the other hand, had relatively modest peak memory requirements, as shown in Fig. 9b. For the low-rank solver, the peak memory usage increases quickly for decreasing \(\epsilon_{\text{rel}}\) on fine grids, but for the largest \(\epsilon_{\text{rel}}\) values (coarsest truncation accuracy), we observe that the peak RAM usage for the Level 7 mesh is only about 1.87 times larger than for the Level 1 mesh, despite the Level 7 mesh having over 30 times as many facets in the cavity. Figure 9: Plots comparing (a) relative error \(e(T)\) defined in (36) and (b) peak RAM usage vs. \(\epsilon_{\text{rel}}\) for all mesh refinement levels. ### The choice of \(\epsilon_{\rm rel}\) Due to the importance of the choice of \(\epsilon_{\rm rel}\), we make some further comments about this tolerance here. Recall that this tolerance refers to the truncation accuracy of the data-sparse blocks within the block low-rank approximation, whereas the dense blocks are represented exactly. This means that in general we expect a higher-accuracy approximation of \(F\) and \(C\) than the level specified by \(\epsilon_{\rm rel}\), given that we have a mix of (\(\epsilon_{\rm rel}\) approximate) low-rank and (exact) dense blocks. We quantify this effect in Table 2, which shows that the approximation error in \(F\) is typically an order of magnitude lower than \(\epsilon_{\rm rel}\). A second point we note is that we are ultimately only interested in the accuracy of the solution field, \(T\), and the cavity radiation terms are simply a "stepping stone" to \(T\). This is an important point because the solution fields in heat transfer are well known to be smooth, which tends to compensate for approximation introduced in the representation of \(F\) and \(C\). Taken together, these two effects explain why in general we observe \(e(T)\ll\epsilon_{\rm rel}\), and hence this provides justification for choosing relatively large values of \(\epsilon_{\rm rel}\) such as \(\epsilon_{\rm rel}=0.1\). This is welcome since these relatively large values of \(\epsilon_{\rm rel}\) are in the region in which we observe good scalability and high speed from the block low-rank approach in our numerical results. 
Nevertheless, it is important to keep in mind that in general the behavior of the block low-rank approximation with respect to truncation tolerance will be problem-dependent, and hence one should ideally justify a particular choice of \(\epsilon_{\rm rel}\) by first running some test cases targeted at the model problem of interest. \begin{table} \begin{tabular}{l l} \hline \(\epsilon_{\rm rel}\) & \(\|F-F_{\epsilon_{\rm rel}}\|/\|F\|\) \\ \hline \(1.0\times 10^{-1}\) & \(9.529\times 10^{-3}\) \\ \(1.0\times 10^{-2}\) & \(9.668\times 10^{-4}\) \\ \(1.0\times 10^{-3}\) & \(9.416\times 10^{-5}\) \\ \(1.0\times 10^{-4}\) & \(9.399\times 10^{-6}\) \\ \(1.0\times 10^{-5}\) & \(9.372\times 10^{-7}\) \\ \(1.0\times 10^{-6}\) & \(9.415\times 10^{-8}\) \\ \hline \end{tabular} \end{table} Table 2: Relative error in the low-rank approximation, \(F_{\epsilon_{\rm rel}}\), with respect to the “true” view factor matrix, \(F\), for different \(\epsilon_{\rm rel}\) values. \(\|\cdot\|\) represents the Frobenius norm. ### Performance vs. \(\epsilon_{\rm rel}\) The performance of the low-rank solver can roughly be categorized into two main parts: (i) the time spent in computing and applying the low-rank LU factorization, and (ii) everything else including finite element assembly, GMRES iterations, file I/O, and other tasks. Here we will mainly be interested in the time spent in part (i), since part (ii) represents tasks that must also be performed in the Direct solver. Within part (i), it is useful to further separate the time spent (a) building the view factor matrix, (b) computing the low-rank LU factorization, and (c) applying the low-rank LU factorization. Items (a) and (b) happen only once per simulation, and thus their costs are amortized across all time steps, while the time required for item (c) increases in direct proportion to the number of time steps computed. The time spent in part (i), i.e. building the view factor matrix, computing the low-rank factorization, and applying the LU factorization is shown, for a range of \(\epsilon_{\rm rel}\) values, in Fig. 10. For coarse meshes, we find that as the truncation tolerance \(\epsilon_{\rm rel}\) is decreased, the percentage of time spent building the view factor matrix (Fig. 10a) plateaus or even decreases, while the percentage of time spent computing the LU factorization (Fig. 10b) increases steadily. In general, decreasing the truncation tolerance \(\epsilon_{\rm rel}\) has a more pronounced effect on the LU factorization time for the finer (Level 6 and 7) grids; the LU factorization time exceeds 50% of the total runtime in the finest case. Our test problem only computes 40 time steps, so we would of course expect the percentage of time spent building the view factor matrix and LU factorization to be lower in a model that performed hundreds or even thousands of time steps. Regarding the time spent applying the LU factorization, for the coarse mesh levels tested, the percentage of time spent applying the factorization increases with decreasing \(\epsilon_{\rm rel}\), but on finer meshes it actually decreases with decreasing \(\epsilon_{\rm rel}\). This indicates the relatively larger expense of building the low-rank LU factorization vs. applying it for fine grids. In problems with more time steps, we would expect the application time of the low-rank factorization to eventually dominate the creation time. 
Figure 10: Time required to (a) build the view factor matrix, (b) build the low-rank LU factorization of the cavity Jacobian, and (c) apply the low-rank LU factorization (as a percentage of the total runtime) vs. \(\epsilon_{\text{rel}}\) for each mesh level.

### Performance vs. Direct solver

The previous plots have focused on comparing the low-rank solver's performance for different mesh refinement levels and different \(\epsilon_{\mathrm{rel}}\) values, but it is of course also necessary to compare the performance of the low-rank solver to the Direct solve method. In Fig. 11, we therefore switch the \(x\)-axis of the plots from \(\epsilon_{\mathrm{rel}}\) to the number of cavity facets, a quantity which is the same for both the Direct and low-rank solver at all mesh levels. In Fig. 11a, we compare the full simulation time (also referred to as "Alive Time") for the Direct solver and the low-rank solver with three different \(\epsilon_{\mathrm{rel}}\) values of \(10^{-1}\), \(10^{-2}\), and \(10^{-3}\). As mentioned previously, we were unable to run the Direct solver on the Level 6 and 7 meshes due to the RAM required to form the explicit cavity Jacobian inverse, and this rapid memory growth is depicted in Fig. 11b. Moreover, the dashed line in Fig. 11 indicates extrapolated Direct solve simulation times for the Level 6 and 7 meshes, based on a curve fit of the preceding data points. For the Level 7 mesh with \(\epsilon_{\mathrm{rel}}=10^{-1}\), the low-rank solver's simulation time is approximately 15.6 minutes while the estimated time for the direct solver (on a computer with \(\geq 40\) GB RAM) is approximately 4.9 hours.

Figure 11: Comparison of (a) total simulation time and (b) peak RAM usage for the Direct and low-rank solvers with \(\epsilon_{\mathrm{rel}}=10^{-1},10^{-2},10^{-3}\) vs. number of radiation cavity facets.

To conclude this discussion of results, we present in Fig. 12 a plot of the speed-up of the low-rank solver with respect to the direct solver for all meshes and \(\epsilon_{\mathrm{rel}}\) values tested. Speed-up here is defined simply as the ratio of the Direct solver time to the low-rank solver time, and in the case of the Level 6 and 7 meshes (dashed lines in the Figure) where the Direct solve time is not available, we use an estimated time based on the curve fit discussed previously. For the coarsest meshes (Levels 1 and 2), the problem size is too small to realize a large speed-up with respect to the Direct solver; the time spent in "other" parts of the code such as I/O and sparse matrix finite element assembly is comparable in both solvers in these cases. On the finest meshes (Levels 6 and 7), the (estimated) speed-up over the Direct solver is approximately \(17.3\times\) and \(21.3\times\), respectively, but we note that the speed-up decreases quickly with decreasing \(\epsilon_{\mathrm{rel}}\) (i.e. increasing accuracy), so a judicious choice of \(\epsilon_{\mathrm{rel}}\) is important for ensuring good performance.

Figure 12: Speed-up relative to Direct solve vs. \(\epsilon_{\mathrm{rel}}\) for different mesh refinement levels. Speed-up for the Level 6 and 7 cases (dashed lines) is based on extrapolated Direct Solve solution times since the compute node had insufficient memory to complete the Direct Solve in those cases.

## 7 Conclusions

In this work we presented a new approach to the finite element-based computation of nonlinear transient heat transfer with cavity radiation. Conventional finite element formulations for this problem employ dense matrix-based approaches for the cavity terms, which scale poorly as the number
of cavity facets becomes large (e.g. \(>\) 10,000). To address the poor scaling of the dense matrix formulation, we developed a novel hierarchical low-rank approximation of the cavity terms. As noted in Section 1, this method has similarities to [8], but also some important differences, including a low-rank LU-factorization and back/forward substitution framework for efficiently applying cavity terms at many nonlinear iterations and time steps, and an efficient ACA method for constructing low-rank blocks. Our numerical results demonstrated the accuracy, efficiency, and scalability of the block low-rank framework that we proposed. Using the dense matrix approach as the reference, we were able to obtain highly accurate results in all cases, and with a speed-up of more than \(20\times\) for the test cases with larger \(n_{\text{facets}}\). Perhaps most importantly, we demonstrated that for a given \(n_{\text{facets}}\) the block low-rank approach has much lower memory requirements than the dense approach. This means that, for a given hardware configuration, the block low-rank approach can be applied to much larger models (with higher \(n_{\text{facets}}\)) than is possible with the conventional "dense matrix" approach. Based on our findings, it is clear that the computational advantage of the block low-rank approach would only increase further for models with larger \(n_{\text{facets}}\) than those considered here. Hence the methodology proposed in this work is an enabler for solving large-scale cavity radiation problems that arise regularly in industrial and scientific applications, without loss of fidelity in the numerical approximation.
2305.08664
MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility
Being able to infer ground truth from the responses of multiple imperfect advisors is a problem of crucial importance in many decision-making applications, such as lending, trading, investment, and crowd-sourcing. In practice, however, gathering answers from a set of advisors has a cost. Therefore, finding an advisor selection strategy that retrieves a reliable answer and maximizes the overall utility is a challenging problem. To address this problem, we propose a novel strategy for optimally selecting a set of advisers in a sequential binary decision-making setting, where multiple decisions need to be made over time. Crucially, we assume no access to ground truth and no prior knowledge about the reliability of advisers. Specifically, our approach considers how to simultaneously (1) select advisors by balancing the advisors' costs and the value of making correct decisions, (2) learn the trustworthiness of advisers dynamically without prior information by asking multiple advisers, and (3) make optimal decisions without access to the ground truth, improving this over time. We evaluate our algorithm through several numerical experiments. The results show that our approach outperforms two other methods that combine state-of-the-art models.
Zhaori Guo, Timothy J. Norman, Enrico H. Gerding
2023-05-15T14:13:47Z
http://arxiv.org/abs/2305.08664v1
# MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility

###### Abstract.

Being able to infer ground truth from the responses of multiple imperfect advisors is a problem of crucial importance in many decision-making applications, such as lending, trading, investment, and crowd-sourcing. In practice, however, gathering answers from a set of advisors has a cost. Therefore, finding an advisor selection strategy that retrieves a reliable answer and maximizes the overall utility is a challenging problem. To address this problem, we propose a novel strategy for optimally selecting a set of advisers in a sequential binary decision-making setting, where multiple decisions need to be made over time. Crucially, we assume no access to ground truth and no prior knowledge about the reliability of advisers. Specifically, our approach considers how to simultaneously (1) select advisors by balancing the advisors' costs and the value of making correct decisions, (2) learn the trustworthiness of advisers dynamically without prior information by asking multiple advisers, and (3) make optimal decisions without access to the ground truth, improving this over time. We evaluate our algorithm through several numerical experiments. The results show that our approach outperforms two other methods that combine state-of-the-art models.
Trust and Reputation; Crowdsourcing; Truth Inference
Table 1 provides an overview of the state of the art and how it compares to the problem we are addressing. In more detail, we present a novel method, called Multi-Advisor Dynamic Decision-Making Method (MADDM), to address the limitations of existing approaches described above. MADDM (see Section 3 for details) integrates and extends several state-of-the-art methods and consists of three interdependent components: trust assessment, advisor selection, and decision-making. Trust assessment builds and maintains models of the trustworthiness of each advisor. For every sequential decision, advisor selection identifies which advisors to consult. This is similar to a multi-armed bandit problem, which requires a balance of exploration and exploitation. We use Thompson Sampling combined with the decision-making model to compute each advisor's expected marginal contribution and select advisers until the marginal contribution is negative. The third component uses the set of answers from the selected advisors to make a decision using the _Bayesian Weighted Voting Ensemble_ (BWVE) method proposed in (Bordes and Tschum, 2015). In addition, we conduct extensive experiments (Section 4) that compare MADDM to a variety of methods that combine state-of-the-art approaches, including budget-limited decision making, \(\epsilon\)-greedy selection, and expectation maximization, and we benchmark performance against the optimal utility that could be gained with perfect knowledge. The results show that MADDM outperforms the other two methods in almost all environments. Before presenting MADDM in detail, in what follows we first formalize our problem domain.

## 2. Problem Formalization

Let \(D\) be the set of decisions and \(X\) be a set of advisors. For every decision \(d\in D\), the decision-maker needs to choose a unique answer with a binary value, namely \(a_{d}\in\{-1,1\}\). For simplicity but without loss of generality, we assume that the correct value, i.e. the ground truth, denoted by \(a^{*}_{d}\), is positive, i.e. \(a^{*}_{d}=1\). Given a decision \(d\), \(v^{+}_{d}\) is the value that the decision-maker gets if the answer is correct. We denote with \(v^{-}_{d}\) the value that the decision-maker pays if the answer it infers is wrong. Therefore, the value of the decision is represented by the tuple \(v^{\pm}_{d}=(v^{+}_{d},v^{-}_{d})\). Moreover, since we rely on advisors to answer queries to inform decisions, we need to incentivize them by introducing a payment system. For each advisor \(x\in X\), \(c_{x}\) is its price.
For any given \(d\in D\), the decision-maker must select a subset of advisors \(Y_{d}\subseteq X\). The choice of advisors also depends on their trustworthiness. For each advisor, \(x\in X\), \(\tau_{x}\) is its trustworthiness, which is updated after every decision for which that advisor is consulted. Finally, we denote with \(\widehat{c}\) and \(\widehat{\tau}\) the vectors containing all the advisors' prices and trustworthiness values, respectively. We, therefore, describe any possible selection through a function \(s\) that, to every tuple \(I\coloneqq(d,\widehat{\tau},v^{\pm}_{d},\widehat{c})\in D\times[0,1]^{|X|} \times[0,+\infty]^{2}\times[0,+\infty]^{|X|}\), associates a subset of advisors \(Y_{d}\in\mathcal{P}(X)\), where \(|X|\) is the cardinality of \(X\) and \(\mathcal{P}(X)\) is the power set of \(X\); we call \(s\) the _selection function_. Table 2 gives an overview of the main variables and parameters used. For any given \(d\in D\), we denote with \(P_{d,s}\subseteq s(I)\subseteq X\) the set of advisors who give positive answers to decision \(d\). Similarly, we denote with \(N_{d,s}\subseteq s(I)\subseteq X\) the set of advisors who give a negative answer to decision \(d\). When it is clear from the context, we simplify the notation and use \(P_{d}\) and \(N_{d}\) over \(P_{d,s}\) and \(N_{d,s}\), respectively. Note that \(P_{d,s}\cap N_{d,s}=\emptyset\) and \(P_{d,s}\cup N_{d,s}=s(I)\) for every \(d\in D\). We assume that, for any given decision, \(d\), there exists a true answer \(a^{*}_{d}\), but this is never revealed to the decision maker. Therefore, we use \(a_{d}=f(P_{d},N_{d})\) to refer to the decision-making function of our inference model. This is a function of the advisors' responses in \(P_{d}\) and \(N_{d}\). Let \(u_{d}\in v^{+}_{d},v^{-}_{d}\) denote the value that the decision-maker gets from the decision \(d\), and let \(a^{*}_{d}\) denote the ground truth of the decision. If \(a_{d}=a^{*}_{d}\), we say that the answer is correct and \(v_{d}=v^{+}_{d}\). Otherwise, we say that the answer is wrong and \(v_{d}=v^{-}_{d}\). Accordingly, for every decision, \(d\), the total cost to the decision-maker to hire the advisors in \(s(I)\) is \(C_{d}(s)=\sum_{x\in s(I)}c_{x}\). Finally, we define the utility that the decision-maker gets for every decision. Given a decision, \(d\), we define its utility to the decision-maker as \(u_{d}(s)=v_{d}-C_{d}(s)\). In particular, the sum of the utilities for all the decisions is \(u(s)=\sum_{d\in D}u_{d}(s)\). Since each advisor has a different cost, the final utility depends on the advisor selection function adopted. In this framework, the goal of the decision-maker is to find the selection function, \(s\), that maximizes its payoff: \[s^{*}=\operatorname*{arg\,max}_{s\in\mathcal{S}}u(s), \tag{1}\] where \(\mathcal{S}\) denotes the set of all feasible selection functions. 
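For concreteness, the per-decision utility \(u_{d}(s)=v_{d}-C_{d}(s)\) and the total utility \(u(s)\) follow directly from the definitions above. The sketch below mirrors those definitions; the numbers in the usage example are purely illustrative, and since the text describes \(v^{-}_{d}\) as a loss we enter it as a negative value here.

```python
def decision_utility(correct, v_plus, v_minus, selected_costs):
    # u_d(s) = v_d - C_d(s), with v_d = v_d^+ if the inferred answer is correct
    # and v_d = v_d^- otherwise
    v_d = v_plus if correct else v_minus
    return v_d - sum(selected_costs)

def total_utility(decision_utilities):
    # u(s) = sum over all decisions of u_d(s)
    return sum(decision_utilities)

# illustrative numbers only
print(decision_utility(True, v_plus=10.0, v_minus=-10.0, selected_costs=[1.0, 2.5]))
print(total_utility([6.5, -12.5, 3.0]))
```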
\begin{table} \begin{tabular}{r c c c c c c c} \hline \hline Setting & \(\epsilon\)-First (Krause et al., 2017) & ZenCrowd (2018) & SBB (Bordes and Tschum, 2015) & ACT (Bordes and Tschum, 2015) & DEMV (Krause et al., 2017) & BAL (Bordes and Tschum, 2015) & MTIR (Bordes and Tschum, 2015) & MADDM \\ \hline sequential & ✓ & \(\times\) & ✓ & ✓ & \(\times\) & \(\times\) & ✓ & ✓ \\ truth inference & \(\times\) & ✓ & \(\times\) & \(\times\) & ✓ & ✓ & ✓ & ✓ \\ multi-advisor for one task & \(\times\) & ✓ & \(\times\) & \(\times\) & ✓ & ✓ & ✓ & ✓ \\ budget-limited & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & ✓ \\ different task values & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & ✓ \\ different advisor’s price & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & ✓ \\ trustworthiness assessment & ✓ & ✓ & ✓ & ✓ & \(\times\) & ✓ & ✓ & ✓ \\ insufficient samples & ✓ & \(\times\) & \(\times\) & ✓ & \(\times\) & \(\times\) & ✓ & ✓ \\ aggregation method & \(\times\) & EM & Bayesian & \(\times\) & MV & EM & BWVE & BWVE \\ advisor selection & \(\epsilon\)-first & \(\times\) & \(\times\) & TS & \(\times\) & \(\times\) & \(\times\) & TS \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison of MADDM with state of the art. TS - Thompson Sampling; MV = Majority Voting; BWVE = Bayesian Weighted Voting Ensemble; EM = Expectation-Maximization. ## 3. Multi-Advisor Dynamic Decision-Making The design of MADDM consists of three components. The first is a trust assessment model that determines an advisor's trustworthiness, which can be used as a weight in the decision model and to calculate the contributions of advisors in the advisor selection model. The second component is the advisor selection model, which assigns a set of advisors to every decision. The third is the decision model, which selects an answer after receiving the advisors' opinions. Figure 1 provides a graphical overview of the structure of MADDM. ### Trustworthiness Model Following Jsang (Jesang, 2017), we build our trustworthiness model using a Beta distribution. Recall that we do not know the ground truth. So, for each advisor, we associate two values, called _advice estimated to be correct \(\alpha_{x}\)_ and _advice estimated to be incorrect \(\beta_{x}\)_. Initially, these values are 1 (Jesang, 2017); i.e. we start with a prior that is \(\mathrm{Beta}(1,1)\), or close to uninformative. We update these values whenever the advisor responds to a query. Correct answers to all decisions are _estimated_ by our model without ground truth; we use the estimated answer to determine whether the advisor's answer is correct or not (see Section 3.3). Now, for each advisor \(x\in X\), we define its trustworthiness as \(\tau_{x}=\alpha_{x}/(\beta_{x}+\alpha_{x})\in(0,1)\). If \(\tau_{x}=1\), we say that the advisor \(x\) is completely trustworthy. If \(\tau_{x}=0\), we say that the advisor \(x\) is completely untrustworthy. This concept of trustworthiness is insufficient since it does not capture the epistemic uncertainty associated with that assessment. For this reason, each advisor's trustworthiness \(\tau_{x}\) is paired with a parameter that quantifies this epistemic uncertainty behind the computation of \(\tau_{x}\). This uncertainty will reduce as we acquire more evidence regarding an advisor \(x\). More specifically, we compute the uncertainty by using _Subjective Logic_(Jesang, 2017). 
Subjective Logic is a commonly employed method in computational models of trust in multi-agent systems and information fusion. Formally, for each advisor \(x\in X\), the uncertainty of \(x\) is \(\theta_{x}=2/(\alpha_{x}+\beta_{x})\in(0,1]\). ### Advisor Selection The overall aim of the system is to maximize utility, \(u(s)\) (see Equation 1), which requires balancing the trade-off between advisor costs and decision value. Typically, the cost of asking all advisors would exceed the decision value even if the decision is correct, so consulting all of them is rarely optimal. For example, for a decision with a value of $10, it is not worth spending $100 to hire advisors. Our method selects the set of advisors according to the value of the problem and estimates their contributions to a decision. We assume their trustworthiness is initially unknown, and all advisors have equal trustworthiness. This knowledge is updated over time but is not reliable at first. Therefore, focusing too early on seemingly good advisors can lead to sub-optimal decisions. To address this, our system solves a multi-armed bandit problem in which it has to balance the exploration of new advisors with the exploitation of the knowledge it has already gathered. Among the many possible algorithms used to solve the multi-armed bandit problem, we use _Thompson Sampling_ (Bena et al., 2017), which samples from a Beta distribution to compute the contribution of each advisor. In Algorithm 1, we sketch the pseudo-code of our selection function \(s\). Recall that \(\vec{\tau}\) denotes the trustworthiness vector that contains the trustworthiness of each advisor, and \(\vec{c}\) are their costs. Let \(\vec{\alpha}\) and \(\vec{\beta}\) denote the vectors of estimated evidence. Given a decision \(d\in D\), let \(P^{e+}_{d}\) and \(P^{e-}_{d}\) denote the probability that \(a_{d}=1\) and \(a_{d}=-1\), respectively.1 We denote with \(U_{d}\) the vector containing the advisors' utilities. Footnote 1: These values are computed by the decision model, as we will see in Section 3.3. In more detail, after initializing the answer probabilities \(P^{e+}_{d}\) and \(P^{e-}_{d}\), the answer sets \(P_{d}\) and \(N_{d}\), the utility vector \(U_{d}\), and the trustworthiness vector (Line 2), the model enters a loop for selecting advisors (Line 3). Let \(V^{x}_{d}\) and \(u^{x}_{d}\) denote the expected contribution and the marginal utility of the advisor \(x\) in decision \(d\). Recall that \(c_{x}\) is the price of advisor \(x\). Their relationship can be expressed as follows: \[u^{x}_{d}=V^{x}_{d}-c_{x}. \tag{2}\] In each round of advisor selection, we need to compute the marginal utility \(u^{x}_{d}\) of each advisor and select the advisor \(x^{*}\) with the best \(u^{x^{*}}_{d}\), which is our estimate of the advisor that maximizes the expected profit for the decision-maker (Lines 4-8). Computing the marginal utility \(u^{x}_{d}\) is achieved in two steps. First, for each advisor \(x\), we define a Beta distribution \(\mathrm{Beta}(\alpha_{x},\beta_{x})\) and sample from it to get the sampled trustworthiness \(\tau^{\prime}_{x}\). We only use it to compute the utility \(u^{x}_{d}\) of the advisor \(x\) (Line 5), whereas the model does not use \(\tau^{\prime}_{x}\) for the actual decision-making. When there is little evidence regarding an advisor, e.g. 
when \(\alpha_{x}=1\) and \(\beta_{x}=1\), the Beta distribution has a large variance. Consequently, the value \(\tau^{\prime}_{x}\) is subject to large fluctuations, which increases the decision error. Second, we need to know the contribution \(V^{x}_{d}\) of each advisor \(x\). Let us now assume that advisor \(x\) answered \(1\) to a decision \(d\); the case in which the advisor answers \(-1\) follows a similar routine. In order to compute its contribution, we first add \(x\) to the set \(P_{d}\) and proceed to calculate the probabilities \(P^{e+^{\prime}}_{d}\) and \(P^{e-^{\prime}}_{d}\). The values \(P^{e+^{\prime}}_{d}\) and \(P^{e-^{\prime}}_{d}\) describe the probabilities that \(a_{d}=1\) and \(a_{d}=-1\), respectively. Therefore, the wider the gap between \(P^{e+}_{d}\) and \(P^{e+^{\prime}}_{d}\), the larger the advisor's contribution. We therefore set: \[\Delta V^{x}_{d,+}=P_{+}|P^{e+^{\prime}}_{d}-P^{e+}_{d}|*(v^{+}_{d}+v^{-}_{d}). \tag{3}\] The values \(P_{+}:=P(a^{*}_{d}=1)\) and \(P_{-}:=P(a^{*}_{d}=-1)\) are the _a priori_ probabilities that the answer is positive or negative, respectively. Hence the value \(|P^{e+^{\prime}}_{d}-P^{e+}_{d}|\) represents the change of the answer probability if advisor \(x\) participates in the decision. Similarly, if the advisor answers \(-1\), we set: \[\Delta V^{x}_{d,-}=P_{-}|P^{e-^{\prime}}_{d}-P^{e-}_{d}|*(v^{+}_{d}+v^{-}_{d}). \tag{4}\] After we compute \(\Delta V^{x}_{d,+}\) and \(\Delta V^{x}_{d,-}\), we compute the expected contribution \(V^{x}_{d}\) as: \[V^{x}_{d}=(\tau^{\prime}_{x}-(1-\tau^{\prime}_{x}))*(\Delta V^{x}_{d,+}+\Delta V^{x}_{d,-}). \tag{5}\] Finally, the algorithm computes the utility \(u^{x}_{d}\) by Equation 2. If \(u^{x^{*}}_{d}>0\), the advisor \(x^{*}\) is selected, which means that its contribution is greater than its cost. The selected advisor \(x^{*}\) then provides the answer for decision \(d\). Depending on the answer from the advisor \(x^{*}\), it is added to \(P_{d}\) or \(N_{d}\) (Lines 9-13), which is used to update the answer probabilities \(P^{e+}_{d}\) and \(P^{e-}_{d}\) (Line 14). \begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{Variables and Parameters List} \\ \hline \(x\) & advisor index \\ \(d\) & decision index \\ \(s\) & selection function \\ \(f\) & decision function \\ \(\alpha_{x}\) & correct estimated evidence of the advisor \(x\) \\ \(\beta_{x}\) & wrong estimated evidence of the advisor \(x\) \\ \(\theta_{x}\) & uncertainty of the advisor \(x\) \\ \(\tau_{x}\) & trustworthiness of the advisor \(x\) \\ \(\tau^{\prime}_{x}\) & trustworthiness of the advisor \(x\) from Beta sampling \\ \(i_{d}\) & confidence value of decision \(d\) \\ \(c_{x}\) & price of the advisor \(x\) \\ \(P_{d}\) & set of the advisors whose answer for decision \(d\) is \(1\) \\ \(N_{d}\) & set of the advisors whose answer for decision \(d\) is \(-1\) \\ \(Y_{d}\) & \(P_{d}\cup N_{d}\) \\ \(u_{d}\) & utility of decision \(d\) \\ \(a_{d}\) & final inferred answer of decision \(d\) \\ \(a^{*}_{d}\) & ground truth of decision \(d\) \\ \(v^{+}_{d}\) & profit if \(a_{d}=a^{*}_{d}\) \\ \(v^{-}_{d}\) & loss if \(a_{d}\neq a^{*}_{d}\) \\ \(P^{e+}_{d}\) & probability that \(a_{d}=1\) from the ensemble model \\ \(P^{e-}_{d}\) & probability that \(a_{d}=-1\) from the ensemble model \\ \hline \hline \end{tabular} \end{table} Table 2. List of additional variables and parameters used in our MADDM system. 
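For concreteness, the following minimal sketch evaluates Eqs. (2)-(5) for one candidate advisor in a single round of Algorithm 1. The recomputed answer probabilities are assumed to be provided by the decision model of Section 3.3, and the function and argument names are illustrative, not the authors' implementation.

```python
import random

def advisor_marginal_utility(alpha_x, beta_x, cost_x, v_plus, v_minus,
                             p_plus_now, p_plus_if_yes, p_plus_if_no,
                             prior_pos=0.5, prior_neg=0.5, rng=random):
    """Marginal utility u^x_d = V^x_d - c_x (Eq. 2) of a candidate advisor x.

    p_plus_now            : current ensemble probability P_d^{e+}
    p_plus_if_yes / if_no : P_d^{e+'} recomputed with x added to P_d / N_d,
                            obtained from the (assumed) decision model.
    """
    tau_sampled = rng.betavariate(alpha_x, beta_x)        # Thompson sample tau'_x
    scale = v_plus + v_minus
    dv_pos = prior_pos * abs(p_plus_if_yes - p_plus_now) * scale       # Eq. (3)
    # The two answer probabilities sum to one (see Section 3.3), so
    # |P^{e-'} - P^{e-}| equals |P^{e+'} - P^{e+}| computed for the "no" case.
    dv_neg = prior_neg * abs(p_plus_if_no - p_plus_now) * scale        # Eq. (4)
    v_contribution = (tau_sampled - (1 - tau_sampled)) * (dv_pos + dv_neg)  # Eq. (5)
    return v_contribution - cost_x
```

In each round, the candidate with the largest value of this marginal utility is hired if that value is positive, mirroring the selection loop of Algorithm 1.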
After every selection, we need to recalculate the marginal utility of each advisor before selecting the next advisor, because the marginal utilities change. For example, if we select an advisor with 90% trustworthiness and it gives a positive answer to decision \(d\), \(P^{e+}_{d}\) will increase from 50% to 90%. The model repeats Lines 4-16 to select advisors one by one until \(u^{x^{*}}_{d}\leq 0\) (Line 17), and outputs the final answer set \((P_{d},N_{d})\) (Line 18). Figure 1. The advisor selection model selects a subset of advisors from all advisors to answer the decision. It needs to consider the decision value and risk, the advisors' cost, and their trustworthiness. The decision model uses the advisors' trustworthiness and the answer set to decide the aggregated answer and the estimated evidence for updating the trustworthiness. The trustworthiness model builds and updates the advisors' trustworthiness. ### Bayesian and Weighted Voting Ensemble Decision Model We use Bayesian and Weighted Voting Ensemble (BWVE) as the decision function \(f\) to make decisions (Bahdan et al., 2017). There are two reasons for choosing BWVE. First, it is a truth inference method without ground truth. It has been shown to outperform the simple weighted voting method, which considers the advisors' weights, determined by their trustworthiness, to bias majority voting (Ballall et al., 2017). Second, it returns a probability distribution over the answers, allowing us to evaluate each advisor's contribution, which aligns with our advisor selection model and retrospectively re-calibrates their trustworthiness. In the following, we detail the BWVE procedure. Essentially, it combines two decision procedures to improve the overall outcome. One is based on a Bayesian model, while the other follows a weighted voting decision method. If the real trustworthiness \(\vec{\tau}\) of all the advisors were known, the Bayesian method would obtain higher accuracy than the weighted voting method. However, in the beginning, because the uncertainty of the trustworthiness is large, the Bayesian method is unstable, so BWVE relies more on the weighted voting method for decisions. As the average uncertainty decreases, the Bayesian method performs better, so BWVE uses the average uncertainty to control the weights of the Bayesian and weighted voting components automatically. #### 3.3.1. Bayesian For every decision \(d\), the advisor selection function returns a subset \(Y_{d}\subseteq X\) that needs to answer the decision \(d\). We recall that \(P_{d}\subseteq Y_{d}\) denotes the set of advisors that answered \(1\) to the decision, while the advisors in \(N_{d}\subseteq Y_{d}\) answered \(-1\). Given the partition \((P_{d},N_{d})\) of \(Y_{d}\), from the Bayesian method, the probability that \(a_{d}^{*}=1\) is \(P_{d}^{b+}:=P_{b}(a_{d}^{*}=1|P_{d},N_{d})\), while \(P_{d}^{b-}:=P_{b}(a_{d}^{*}=-1|P_{d},N_{d})\) is the probability that \(a_{d}^{*}=-1\). From Bayes' theorem, we can then express \(P_{d}^{b+}\) and \(P_{d}^{b-}\) as follows: \[P_{d}^{b+}=\frac{P_{+}P(P_{d},N_{d}|a_{d}^{*}=1)}{P_{+}P(P_{d},N_{d}|a_{d}^{*}=1)+P_{-}P(P_{d},N_{d}|a_{d}^{*}=-1)} \tag{6}\] \[P_{d}^{b-}=\frac{P_{-}P(P_{d},N_{d}|a_{d}^{*}=-1)}{P_{-}P(P_{d},N_{d}|a_{d}^{*}=-1)+P_{+}P(P_{d},N_{d}|a_{d}^{*}=1)}. \tag{7}\] We recall that \(P_{+}:=P(a_{d}^{*}=1)\) and \(P_{-}:=P(a_{d}^{*}=-1)\) are the _a priori_ probabilities that the answer is positive or negative, respectively. 
Since we do not have any evidence about \(a_{d}^{*}\), both \(P_{+}\) and \(P_{-}\) are taken to be equally likely; therefore, we set \(P_{+}=P_{-}=0.5\). The quantities \(P(P_{d},N_{d}|a_{d}^{*}=1)\) and \(P(P_{d},N_{d}|a_{d}^{*}=-1)\) describe the probability of observing the partition \((P_{d},N_{d})\) under the assumption that \(a_{d}^{*}=1\) and \(a_{d}^{*}=-1\), respectively. Both \(P(P_{d},N_{d}|a_{d}^{*}=1)\) and \(P(P_{d},N_{d}|a_{d}^{*}=-1)\) are computed through the trustworthiness \(\tau_{x}\) as follows: \[P(P_{d},N_{d}|a_{d}^{*}=1)=\prod_{i\in P_{d}}\tau_{i}\prod_{j\in N_{d}}(1-\tau_{j}) \tag{8}\] \[P(P_{d},N_{d}|a_{d}^{*}=-1)=\prod_{j\in N_{d}}\tau_{j}\prod_{i\in P_{d}}(1-\tau_{i}) \tag{9}\] #### 3.3.2. Weighted Voting The Bayesian decision method can only work well when the advisors' trustworthiness is sufficiently high. In the initial phase of the process, the advisors' trustworthiness is unreliable, so the Bayesian method is not stable. Since there is no ground truth, it is easily misled by bad advisors when the mean advisors' accuracy is not high. BWVE deals with this problem by using the weighted voting method, which is more robust than the Bayesian method at the beginning. Thus, during the initialization, the weighted voting method has more influence on the decision than the Bayesian one. For the weighted voting method, under the answer set \((P_{d},N_{d})\), the probabilities that the ground truth is \(a_{d}^{*}=1\) and \(a_{d}^{*}=-1\) are denoted as \(P_{d}^{w+}:=P_{w\theta}(a_{d}^{*}=1|P_{d},N_{d})\) and \(P_{d}^{w-}:=P_{w\theta}(a_{d}^{*}=-1|P_{d},N_{d})\), respectively. The model then uses the sum of the advisors' trustworthiness to calculate them: \[P_{d}^{w+}=\frac{\sum_{i\in P_{d}}\tau_{i}}{\sum_{j\in P_{d}\cup N_{d}}\tau_{j}} \tag{10}\] \[P_{d}^{w-}=\frac{\sum_{j\in N_{d}}\tau_{j}}{\sum_{j\in P_{d}\cup N_{d}}\tau_{j}} \tag{11}\] #### 3.3.3. Ensemble Decision BWVE uses the average uncertainty \(\bar{\theta}_{d}\) to control the weights of the Bayesian and the weighted voting components. The higher the average uncertainty of the advisors in the answer set \(Y_{d}\), the lower the reliability of the trustworthiness estimates and the more weight is given to the weighted voting method. Let \(|Y_{d}|\) denote the cardinality of \(Y_{d}\); the average uncertainty can be expressed as: \[\bar{\theta}_{d}=\frac{\sum_{i\in Y_{d}}\theta_{i}}{|Y_{d}|} \tag{12}\] The average uncertainty \(\bar{\theta}_{d}\) gradually decreases as time goes on, and the weight of the Bayesian method correspondingly increases. For the ensemble decision, given the answer set \((P_{d},N_{d})\), the probability that \(a_{d}^{*}=1\) is \(P_{d}^{e+}:=P_{e}(a_{d}^{*}=1|P_{d},N_{d})\), while \(P_{d}^{e-}:=P_{e}(a_{d}^{*}=-1|P_{d},N_{d})\) is the probability that \(a_{d}^{*}=-1\). They can be expressed as: \[P_{d}^{e+}=(1-\bar{\theta}_{d})P_{d}^{b+}+\bar{\theta}_{d}P_{d}^{w+} \tag{13}\] \[P_{d}^{e-}=(1-\bar{\theta}_{d})P_{d}^{b-}+\bar{\theta}_{d}P_{d}^{w-} \tag{14}\] Their relationship is: \[P_{d}^{e+}+P_{d}^{e-}=1 \tag{15}\] After computing \(P_{d}^{e+}\) and \(P_{d}^{e-}\), the system compares them. If \(P_{d}^{e+}>P_{d}^{e-}\), the final answer is \(a_{d}=1\). Otherwise, \(a_{d}=-1\). #### 3.3.4. Trustworthiness Update BWVE uses the absolute difference of \(P_{d}^{e+}\) and \(P_{d}^{e-}\) as the new estimated evidence to update \(\alpha\) and \(\beta\). 
\[i_{d}=|P_{e}(a_{d}^{*}=1|P_{d},N_{d})-P_{e}(a_{d}^{*}=-1|P_{d},N_{d})| \tag{16}\] If \(a_{d}=1\), the update of \(\alpha_{x}\) and \(\beta_{x}\) can be expressed as: \[\alpha_{x}\leftarrow\alpha_{x}+i_{d}\quad\forall x\in P_{d}, \tag{17}\] \[\beta_{x}\leftarrow\beta_{x}+i_{d}\quad\forall x\in N_{d}, \tag{18}\] If \(a_{d}=-1\), the update of \(\alpha_{x}\) and \(\beta_{x}\) can be expressed as: \[\beta_{x}\leftarrow\beta_{x}+i_{d}\quad\forall x\in P_{d}, \tag{19}\] \[\alpha_{x}\leftarrow\alpha_{x}+i_{d}\quad\forall x\in N_{d}, \tag{20}\] #### 3.3.5. Review Update Recall that MADDM is an online problem without access to ground truth. Moreover, the initial trustworthiness estimates are unreliable. Therefore, the update of the trustworthiness \(\vec{\tau}\) relies on the evidence from new decisions. And the decisions, in turn, rely on the trustworthiness \(\vec{\tau}\). This dynamic loop is used, while building the model, to make the trustworthiness and the aggregated answer more accurate. Therefore, similar to the EM method, after every answer, we continuously update the trustworthiness of the advisors through the answers from past decisions. Algorithm 2 describes how the review update works. Let \(\vec{P}_{past}\) and \(\vec{N}_{past}\) denote the vectors that contain the past answer sets, and recall that \(\vec{\tau}\) denotes the trustworthiness vector that contains all advisors' trustworthiness. Let \(\vec{\tau}_{0}\) denote the old trustworthiness vector, and \(\Delta\tau\) the sum of the differences between the old trustworthiness vector \(\vec{\tau}_{0}\) and the new trustworthiness vector \(\vec{\tau}\). Furthermore, let \(V_{s}\) denote the threshold of \(\Delta\tau\) for terminating the update. \(V_{s}\) is usually set to a small value. Note that \(\Delta\tau\) is used to judge the update step size of \(\vec{\tau}\). Specifically, when \(\Delta\tau\) is smaller than \(V_{s}\), the model stops updating. 
```
1: Input: \(\vec{P}_{past},\vec{N}_{past},\vec{\tau},V_{s}\)
2: initialize \(\Delta\tau=0\), \(\vec{\tau}_{0}=\vec{\tau}\)
3: repeat
4:   for \(P_{d},N_{d}\) in \(\vec{P}_{past},\vec{N}_{past}\) do
5:     \(P_{d}^{e+},P_{d}^{e-}\gets f(P_{d},N_{d},\vec{\tau})\)
6:     \(\vec{\tau}_{0}=\vec{\tau}\)
7:     \(\vec{\tau}\gets TrustworthinessUpdate(P_{d}^{e+},P_{d}^{e-})\)
8:     \(\Delta\tau=sum(\vec{\tau}-\vec{\tau}_{0})\)
9: until \(\Delta\tau\leq V_{s}\)
10: Output: \(\vec{\tau}\)
```
**Algorithm 2** Pseudo-code of the review maximization algorithm ## 4. Experiments In this section, we present the decision-answer experiments to evaluate our method. Specifically, we compare our method with two cost-constraint-based methods. The first is the Fixed Number of Advisors based method (FNA), in which the decision-maker selects a fixed number of advisors for every decision. The second is the Budget-Constraint based method (BC), in which a budget constraint is used to stop selecting advisors. For both approaches, we combine these with different advisor-selection criteria. ### Setting To the best of our knowledge, there is no standard environment to run decision experiments. For this reason, we rely on synthetically generated ones. In more detail, the environment we generate includes 1000 decisions with binary answers and different values. The full set of advisors consists of 30 simulated agents with different answer accuracy and costs. An Extended Rectified Gaussian distribution (ERGd) samples both the profits and losses of every decision (Grover and Leskovec, 2016). 
We generate each advisor's real accuracy and cost using the same probability distribution. During the experiments, the decision-maker selects a set of advisors to enquire and infers the answers using different methods. After answering 1000 decisions, the decision-maker gets the final utility. Due to the probabilistic nature of the experiments, every experiment is repeated for 100 different runs to obtain statistically significant results. To reduce variance and bias, all methods are run using the same conditions. That is, although the conditions vary between runs, the same set of runs are used to compare the methods (i.e., using the same set of advisor qualities and prices, the same decision sequence, and the same decision profits and losses). We consider different ratios between the decision's value and the advisor's cost, which leads us to define two sets of experiments. In the first set, both the decision profits and decision losses are sampled from an ERGd whose means and standard deviation are equal to 100. In the second one, the mean and the standard deviation of the ERGd are both changed to 500. Due to the large deviation, the decision values are highly volatile. Hence, some decisions may be worth more than 1000, and some may be worthless. Furthermore, the real accuracy of advisor \(x\), i.e. \(\tau_{x}^{r}\), is sampled from an ERGd whose standard deviation is fixed at 0.3 while its mean ranges in the set \(\{0.5+0.01*k\}\) where \(k=1,2,\ldots,50\). For example, if \(\tau_{x}^{r}\) is 0.8, the advisor \(x\) has 80% probability of giving a correct answer. Hence, we consider 50 different frameworks in which the average trustworthiness increases every time. Finally, we assume that the cost of each advisor is proportional to its real trustworthiness. In practice, higher quality often comes at a cost. For example, senior advisors are more costly than junior ones. Similarly, more advanced machine learning algorithms typically require higher computational costs. However, this is only a correlation and not always the case for every instance. To achieve this correlation, the cost of each advisor is sampled from an ERGd whose average is \(\tau_{x}^{r}*20\) and whose standard deviation is 10. Note that the correlation makes the problem more challenging since the system has to make trade-offs between cost and quality. Without such correlation, there is a high likelihood of a cheap and reliable advisor which makes the problem easier to solve but also less realistic. We used three different exploration methods (Upper Confidence Bound (UCB), Thompson Sampling, \(\epsilon\)-greedy) and two rules of the advisor selection (trustworthiness, cost-effectiveness) to combine with FNA and BC, respectively. The aggregation method of FNA and BC is EM, which can maximize the sample utilization and has been verified multiple times in truth inference (Beng et al., 2017; Beng et al., 2017). In more detail, in terms of advisor selection strategies, UCB, Thompson Sampling and \(\epsilon\)-greedy are effective for solving the multi-armed bandit problem. We experimented with a range of values and found that the \(\epsilon\)-greedy method has the best performance when \(\epsilon=0.1\) (we also tested \(\epsilon=0.05\), 0.15, 0.2, 0.25) for all methods. UCB and Thompson Sampling explore more than \(\epsilon\)-greedy at the beginning. Since the lack of ground truth, not every exploration can provide correct feedback for updating trustworthiness, especially when the average advisor's accuracy is low. 
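To make the experimental protocol concrete, a rough sketch of how such a synthetic environment might be generated is shown below. The rectified-Gaussian stand-in for the ERGd, the clipping of accuracies to \([0,1]\), and all names are assumptions made for illustration rather than the authors' code.

```python
import random

rng = random.Random(0)

def rectified_gauss(mean, std):
    # Crude stand-in for an Extended Rectified Gaussian draw: a normal sample
    # truncated at zero (assumption; the exact ERGd definition is not reproduced here).
    return max(0.0, rng.gauss(mean, std))

def make_environment(n_decisions=1000, n_advisors=30,
                     value_mean=100.0, value_std=100.0,
                     acc_mean=0.7, acc_std=0.3):
    # Advisors: a real accuracy tau_x^r and a price positively correlated with it.
    advisors = []
    for _ in range(n_advisors):
        acc = min(1.0, rectified_gauss(acc_mean, acc_std))   # clipped to [0, 1]
        cost = rectified_gauss(20.0 * acc, 10.0)             # price ~ ERGd(20 * tau^r, 10)
        advisors.append({"acc": acc, "cost": cost})
    # Decisions: a hidden binary ground truth with a profit v+ and a loss v-.
    decisions = [{"truth": rng.choice((+1, -1)),
                  "v_plus": rectified_gauss(value_mean, value_std),
                  "v_minus": rectified_gauss(value_mean, value_std)}
                 for _ in range(n_decisions)]
    return advisors, decisions

def ask(advisor, truth):
    """A simulated advisor answers correctly with probability tau_x^r."""
    return truth if rng.random() < advisor["acc"] else -truth
```

Each compared method is then run over the same generated decision sequence, and the utilities of Section 2 are accumulated.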
The criteria for advisor selection are trustworthiness and cost-effectiveness. For example, if trustworthiness is the rule, the greedy strategy always selects the advisor with the highest trustworthiness. Cost-effectiveness is a criterion we adapted from prior work (Grover and Leskovec, 2016). The cost-effectiveness of the advisor \(x\) can be expressed by \(c_{x}/(\tau_{x}-0.5)\), which means how much cost is paid for the improvement of trustworthiness of advisor \(x\). It performs better than the trustworthiness criterion. \begin{table} \begin{tabular}{c c} \hline \hline setting & value \\ \hline env1: decision profits \(\vec{v}_{d}^{+}\ mean,std\) & \(100,100\) \\ env1: decision loss \(\vec{v}_{d}^{-}\ mean,std\) & \(100,100\) \\ env2: decision profits \(\vec{v}_{d}^{+}\ mean,std\) & \(500,500\) \\ env2: decision loss \(\vec{v}_{d}^{-}\ mean,std\) & \(500,500\) \\ advisor cost \(c_{x}\ mean,std\) & 0 to 20,10 \\ real trustworthiness \(mean,std\) & from 0.51 to 1,0.3 \\ \hline \hline \end{tabular} \end{table} Table 3. Experiment settings. For FNA and BC, we also test their performance under different hyper-parameters. First, we test the performance of FNA by setting the number of advisors from 1 to 10. The results show that five advisors give the best performance. Second, for BC, we try 5%, 10%, 15%, 20%, and 25% of the value (profit + loss) of every decision as the budget constraint, and 10% has the best performance. To clearly understand the performance of our method, FNA, and BC, we selected two other methods for comparison. The first is random voting (RV). It randomly selects three advisors and combines them by majority voting. Another one is the best utility (BU). It describes the maximum utility the decision-maker can get, which means all the decisions are correct and the advisor cost is 0. The methods with the trustworthiness model are easily misled by malicious advisors when the mean advisors' accuracy is low [5]. In practical applications, the ways of addressing this problem include adding some decisions with ground truth, selecting several advisors with high accuracy to participate in decision-making, or considering prior information about the advisors. In this paper, our assumptions are no ground truth and no prior information, so we design the exploration-first model to solve this problem. In the first few decisions, the model selects all advisors for answering to increase the accuracy of the answer, and then reverts to the method's standard advisor selection strategy. We use this model before rounds 1-15, respectively, and the results show that the three methods perform best when the model is used before the 10th round. Therefore, we added the exploration-first model to our method, FNA, and BC, and ran additional experiments in the two environments. Figure 2: In all four figures, the X-axis represents the mean advisors' accuracy from 0.51 to 1. The Y-axis represents the average utility over 100 experiments. The half-transparent area along with each curve is the 95% confidence interval error bar. Figure (a) shows the results of environment 1 (mean, standard deviation = 100, 100). Figure (b) shows the results of environment 2 (mean, standard deviation = 500, 500). In Figures (a) and (b), the left figures represent the standard methods, and the right figures are the exploration-first-based methods. MADDM = multi-advisor dynamic decision-making (ours); FNA = \(\epsilon\)-greedy fixed number of advisors EM; BC = \(\epsilon\)-greedy budget-limited EM; RV = random voting. 
### Results We now compare the utility obtained by the different methods we considered. Table 4 shows the mean and standard deviation of the utility in every environment. Overall, our MADDM method has the best performance in terms of the average utility in almost all environments. In all the experiments, the average utilities obtained by the exploration-first methods are significantly bigger than the others. Moreover, the standard deviation of the utilities is also reduced, which means that the results are more consistent. We ran 600 (3\(\times\)50\(\times\)4) pairs of Mann-Whitney tests between MADDM and FNA, BC, and Random Voting (RV) with 50 different average advisors' accuracies in four different environments. We observe that 527 out of the 600 results have significant differences (\(p<0.05/3\)). Figure 2 describes the utility curves of the different methods as the advisors' accuracy increases. In the vast majority of cases, MADDM gets more utility than FNA and BC for all the possible accuracies. In the right graphs in Figures 2(a) and 2(b), we compare the utilities when all the methods use the exploration-first based model. RV is better than the other three methods when the mean advisors' accuracy is low. When there is no ground truth and a significant proportion of bias, the methods with the trustworthiness model are easily misled by malicious advisors. Once the trustworthiness model is misled, malicious advisors take the initiative and sabotage future decisions. However, we observe that MADDM is less prone to be sabotaged. This is due to the fact that MADDM selects more advisors at the beginning and fewer as trustworthiness is updated, which means MADDM has stronger robustness to malicious advisors than FNA and BC. Similarly, we observe that the performance of MADDM is more robust to manipulation by bad advisors when the average cost of the advisors and the decision values are bigger. Since the decisions in environment 2 are more valuable than the ones in environment 1, MADDM chooses more advisors to make decisions together at the beginning in environment 2, which helps to increase the reliability of the answer. Therefore, based on this idea, we partially addressed this issue by using the exploration-first-based methods. The disadvantage of exploration-first is that it spends more cost on building trustworthiness. It does not perform as well as the standard methods when the mean advisors' accuracy is high. However, we do not know the real distribution of the mean advisors' accuracy and decision values before asking, so it is worth spending some cost at first to improve the method's expected utility. MADDM automatically selects the advisors by balancing the advisors' costs and the decision values without any hyper-parameters, which makes MADDM less prone to selecting an insufficient number of advisors or to wasting costs. The two methods based on cost-effectiveness need to set the number of advisors and the budget proportion to control the advisor cost. If the prior distribution is unknown, the values of these hyper-parameters are difficult to determine. Furthermore, if the advisor cost is too small, the reliability of the output answer is insufficient, while if the cost is too high, advisor costs are wasted. For example, in Figure 2(b), we observe that FNA does not select enough advisors when the mean advisors' accuracy is less than 0.8, whereas the best performance of BC has a gap with MADDM when the mean advisors' accuracy is higher than 0.65. ## 5. Conclusion
In this paper, we introduce the Multi-Advisor Dynamic Decision-Making Method (MADDM), a novel approach for making optimal decisions in sequential decision-making settings with no ground truth. The model takes into account multiple variables, including the decision profits and losses, the advisors' costs, and their trustworthiness. It selects advisors by balancing the advisors' costs and the value of making correct decisions. It also makes decisions by combining the advice from multiple advisors without access to the ground truth and dynamically learns the trustworthiness of advisors without prior information. We test our method through decision-answer experiments in a simulated environment. We also introduce two benchmark methods, one using a fixed number of advisors (FNA) and another one using a fixed budget (BC), which are combined with state-of-the-art sampling and aggregating methods. The results show that MADDM significantly outperforms the benchmark methods. An interesting direction for future work is moving from binary answers to multiple answers, making our approach applicable to more scenarios. This requires changing the calculations of the probabilities to deal with more than two outcomes. The first challenge in doing so is calculating the confidence value and deciding how to use it for updating the trustworthiness. The second challenge is adjusting the weights of the weighted voting approach and the Bayesian method for making the decision. Another interesting direction is dealing with multiple simultaneous decisions at each point, which requires us to consider the allocation of advisors to each of the decisions.
2304.10302
Optimal Activation of Halting Multi-Armed Bandit Models
We study new types of dynamic allocation problems the {\sl Halting Bandit} models. As an application, we obtain new proofs for the classic Gittins index decomposition result and recent results of the authors in `Multi-armed bandits under general depreciation and commitment.'
Wesley Cowan, Michael N. Katehakis, Sheldon M. Ross
2023-04-20T13:32:30Z
http://arxiv.org/abs/2304.10302v1
# Optimal Activation of Halting Multi-Armed Bandit Models ###### Abstract We study new types of dynamic allocation problems the Halting Bandit models. As an application, we obtain new proofs for the classic Gittins index decomposition result cf. Gittins [9], and recent results of the authors in Cowan and Katehakis [4]. **Keywords:** Machine learning, Dynamic data driven systems; Autonomous reasoning and learning; Markovian decision processes; Adaptive systems. ## 1 Introduction We investigate a class of Halting Bandit models, where at every time step a controller must choose which project out of a fixed collection to activate, and at some (stochastic) time, when sufficient time and effort has been invested in a given project or process, it will be completed or "halt". Additionally, halting may be considered a catastrophic event, such as a project breaking down. These halting events allow bandits to be'singled out' - receiving rewards from successful bandits and paying costs for unsuccessful bandits. This singling out of projects based on state status is novel; prior results focused mainly on maximizing cumulative collective payouts cf. model (CCP) of Section 5. In this paper we consider the following models for maximizing terminal rewards (or minimizing terminal costs): two versions of expected terminal solo payout, taken to be a reward dependent on the last (ultimate) or second to last (penultimate) state of the first bandit to halt successfully; the terminal collective payout reward, taken to be a reward dependent on the final states of all bandits at the first halting; the terminal non-halting costs, taken to be a cost incurred by all bandits that failed to halt; the terminal collective profit, taken to be a reward from the successfully halted bandit less the cost incurred by bandits that failed to halt. After establishing these results, we consider the same model in the framework of cumulative rewards, rather than terminal, when bandits are taken to generate rewards each time they are activated until halting. We use a standard technique to reduce these models to corresponding terminal halting models and in this way, we recover prior results in Cowan and Katehakis [4] and hence the celebrated Gittins' decomposition cf. Gittins [9]. The central results presented here, the derivation of optimal policies for the terminal solo payout and terminal collective payout models, rests on establishing a correspondence between the two payout models; essentially, the game where every bandit contributes to the total reward can be replaced by an equivalent game where only a single bandit contributes to the terminal reward. This gives further insight into why classical bandit decomposition results work cf. Chakravorty and Mahajan [2], Gittins et al. [10], Mahajan and Teneketzis [18], Ishikida and Varaiya [13], Weber [32]. For related work we first note that for the finite state Markov Chain version of the cumulative collective payouts model of Section 5, Sonin [25] introduced an equivalent formulation of the indices derived herein in order to derive an efficient algorithm for the calculation of the indices for all states of the Markov chain. The basic idea of this paper's generalized indices was to use a common Markov Decision Processes theory interpretation of of the expected discounted total reward with a discount factor \(\beta\) where the state space is complemented by an absorbing state \(x^{*}\) and new transition probabilities that are defined as follows. 
The probability of entering an absorbing state \(x^{*}\) in one step for any state \(y\neq x^{*}\) ('probability of termination') is equal to \(1-\beta\), and all other initial transition probabilities are multiplied by \(\beta\). In other words, \(\beta\) is the probability of 'survival', or not 'halting' herein. Sonin [25] considered variable probabilities of survival \(\beta(x)\) and defined a generalized index \(\alpha(x)\) taken to be the maximum ratio of the expected discounted total reward up to the time \(\tau\) of halting ('termination') per chance of termination at the time \(\tau\) of halting. He established, for non-constant discount factors, the equality of the new generalized index with the retirement index of Whittle [33] and the restart index of Katehakis and Veinott Jr [16]; thus he argued that the true meaning of the Gittins index is given by its expression as a ratio of the expected discounted total reward up to the time \(\tau\) of halting ('termination') per chance of termination at the time \(\tau\) of halting, and pointed out its relation with the work in Mitten [19]. These results can be extended along the lines of El Karoui and Karatzas [8], who established the restart representation of the Gittins index in a continuous time framework without making further use of it. Additional results connecting the Sonin indices with other problems of stochastic optimization are given in Bank and El Karoui [1] and in Sonin and Steinberg [27]. For other related work we refer to Szepesvari [29], Slivkins et al. [23], Dumitriu et al. [7], Katta and Sethuraman [17], and to Stadje [28], Pinedo and Rammouz [21], [2], Glazebrook et al. [11], Negoescu et al. [20], Villar et al. [31], Glazebrook et al. [12], Denardo et al. [5], Katehakis and Rothblum [15], Katehakis and Derman [14], and Skitsas et al. [22], Talebi et al. [30], Cowan et al. [3]. The paper presents a collection of results, organized sequentially to build off each other to the final result. It is worth outlining this explicitly at the start, with a roadmap: Section 2 gives the underlying mathematical framework of the discussion to follow, to guarantee the necessary processes and control processes are well defined. Ultimately, the key point of these results is this: the relation between the 'single payout' model and the 'collective payout' model reveals why the contributions of each bandit in the original formulation can be considered individually, by expressing the total game in terms of an equivalent one where only one bandit gives rewards. Section 3 considers a simplified or 'solo-payout' model, where only the bandit that halts (or breaks) yields a reward to the controller. These solo-payout model bandits have a simple optimal policy. In Section 4, we consider a collective-payout model (rewards from all bandits) and derive equivalent (or bounding) solo-payout models. The optimal solo-payout policy on the equivalent (or bounding) model is then shown to give an equivalent reward to a simple index policy on the collective-payout model, yielding a proof of optimality. In Section 5, a number of alternative payout models are introduced, and all are shown to be equivalent to the solved collective-payout model. The classical Gittins formulation is recovered herein. Some proofs, technical and uninstructive, are relegated to Section 6. 
## 2 Problem Formulation ### Probability Framework A controller is presented with a finite collection of \(N\geqslant 2\) probability spaces, \((\Omega^{i},\mathcal{F}^{i},\mathbb{P}^{i},\mathbb{F}^{i})\), for \(1\leqslant i\leqslant N\), representing \(N\) environments in which experiments will be performed or rewards collected - the "bandits," or "projects." To each space, we associate an \(\mathbb{F}^{i}\)-adapted _reward process_\(X^{i}=\{X^{i}_{t}\}_{t\geqslant 0}\). For \(t\in\{0,1,\ldots\}\), we take \(X^{i}_{t}(=X^{i}_{t}(\omega^{i}))\in\mathbb{R}\) to represent the reward (or state) attained from the \(i^{th}\) bandit on its \(t^{th}\) activation. We denote the collection of these processes as \(\mathbb{X}\). Additionally, to each bandit, we associate an \(\mathbb{F}^{i}\)-stopping time \(\sigma^{i}>0\), the "halting time" of the bandit, so that at the \(\sigma^{i}\)-th activation of bandit \(i\), we take the bandit to be stopped, and no longer capable of being activated. Note, \(\sigma^{i}\) represents the number of times bandit \(i\) can be activated, so the last activation of bandit \(i\) occurs at bandit-time \(\sigma^{i}-1\), and at bandit time \(\sigma^{i}\), the bandit is permanently stopped. On every activation prior to halting, we assume there is a positive probability of halting. We take the first of any bandit halting to halt the entire decision process (game). In what follows, we reserve the term "round" to differentiate global controller time (denoted with \(s\)), when the controller must decide which bandit to activate, from local bandit times (denoted by \(t\)), indicating the current total activations of a given bandit. In each round, the controller activates a bandit, advancing both its local time and the global time by one time step. All bandits begin at local time \(0\), and advance only on activation, i.e., in every round _unactivated bandits remain frozen_. As stated, the game halts upon the first halting of any bandit. The controller needs a control policy \(\pi\), that specifies, at each round \(s\) of global time, which bandit to activate. We embed these bandits in a larger product space \((\Omega,\mathcal{G},\mathbb{P})=(\otimes_{i=1}^{N}\Omega^{i},\otimes_{i=1}^{ N}\mathcal{F}^{i},\otimes_{i=1}^{N}\mathbb{P}^{i})\), a standard product-space construction, representing the environment of the controller - aware information from all bandits. This 'global' probability space is necessary for making sure processes at the controller level (e.g., the policy for bandit activation) are well defined. This construction captures the first key aspect of the model: _the bandits are mutually independent_ (e.g., \(X^{i},X^{j}\) are independent relative to \(\mathbb{P}\) for \(i\neq j\)). Expectations relative to the local space, i.e., bandit \(i\), will be denoted \(\mathbb{E}^{i}\), while expectations relative to the global space are simply \(\mathbb{E}\). **Remark 1**.: We adopt the following notational liberty, allowing a random variable \(Z\) defined on a local space \(\Omega^{i}\) to also be considered as a random variable on the global space \(\Omega\), taking \(Z(\omega)=Z(\omega^{i})\), where \(\omega=(\omega^{1},\ldots,\omega^{N})\in\Omega\). Via this extension, we may take expectations involving a process \(X^{i}\), or \(\mathbb{F}^{i}\)-stopping times, relative to \(\mathbb{P}\) or \(\mathbb{P}^{i}\), without additional notational overhead. We make the following assumptions. 
**Assumption 1:** For each bandit \(i\) \[\mathbb{E}^{i}\left[\sup_{n\geqslant 0}|X^{i}_{n}|\right]<\infty. \tag{1}\] **Assumption 2:** For each bandit \(i\) the following are true. \[a) \mathbb{P}^{i}(\sigma^{i}<\infty)=1, \tag{2}\] \[b) \mathbb{P}^{i}(\sigma^{i}=t+1|\mathcal{F}^{i}(t))>0,\text{ for all }t<\sigma^{i}, (\mathbb{P}^{i},\,\mathbb{P}\text{-a.e.}). \tag{3}\] **Remark 2**.: Note, the above assumptions, while technical in statement, have natural interpretations: 2.a) each bandit will halt after finite activations, almost surely; 2.b) at any time prior to halting, there is non-zero probability of halting on the next activation. A control policy \(\pi\), is a stochastic process on \((\Omega,\mathcal{G},\mathbb{P})\) that specifies, at each round \(s\) of global time, which bandit to activate and collect from, e.g., \(\pi(s)(=\pi(s,\omega))=i\) activates bandit \(i\) at round \(s\). We restrict attention to the set of policies \(\mathcal{P}\) defined to be non-anticipatory, i.e., the choice of which bandit to activate at round \(s\) does not depend on outcomes that have not yet occurred, or information not yet available. A policy \(\pi\) defines \(T^{i}_{\pi}(s)\) the \(\pi\)-local time of bandit \(i\) just prior to the \(s^{th}\) round under it, i.e., \(T^{i}_{\pi}(0)=0\), and for \(s>0\), \[T^{i}_{\pi}(s)=\sum_{s^{\prime}=0}^{s-1}\mathds{1}\{\pi(s^{\prime})=i\}. \tag{4}\] Note, this gives as a result that at global time \(s\), the sum of all the local times must be \(s\), i.e., \[T^{1}_{\pi}(s)+T^{2}_{\pi}(s)+\ldots+T^{N}_{\pi}(s)=\sum_{i=1}^{N}\sum_{s^{ \prime}=0}^{s-1}\mathds{1}\{\pi(s^{\prime})=i\}=\sum_{s^{\prime}=0}^{s-1}\sum _{i=1}^{N}\mathds{1}\{\pi(s^{\prime})=i\}=\sum_{s^{\prime}=0}^{s-1}1=s, \tag{5}\] where the inner sum reduces to 1 since exactly one bandit is activated each round. It is convenient to define the global time analog, \(T_{\pi}(s)=T^{\pi(s)}_{\pi}(s)\) to denote the current \(\pi\)-local time of the bandit activated at round \(s\) under policy \(\pi\). This will allow us to define concise global time analogs of several processes. An important example of such a process is the global reward process \(X_{\pi}\) on \((\Omega,\mathcal{G},\mathbb{P})\) defined as \[X_{\pi}(s)=X^{\pi(s)}_{T_{\pi}(s)},\] giving the reward available from collection \(\mathbb{X}\) under policy \(\pi\), which is to be received if the game halts at round \(s\). To be able to translate between global time and local times, when the controller operates according to a policy \(\pi\), we define the random variables \(S^{i}_{\pi}(t)\) to represent the round at which bandit \(i\) is activated for the \(t^{th}\) time, i.e., \[\begin{split} S^{i}_{\pi}(0)&=\inf\{s\geqslant 0: \pi(s)=i\},\\ S^{i}_{\pi}(t+1)&=\inf\{s>S^{i}_{\pi}(t):\pi(s)=i\}. \end{split} \tag{6}\] Utilizing this notation, we may define a global halting time \(\sigma_{\pi}\), i.e., the first round under policy \(\pi\) at which one of the bandits has halted, ending the game: \[\sigma_{\pi}=\min_{i}\{S^{i}_{\pi}(\sigma^{i}-1)\}+1. \tag{7}\] Remark 3.To clarify the above definition, note that \(S^{i}_{\pi}(0)\) is the time that a policy first activates bandit \(i\), advancing it from local time \(0\) to local time \(1\). So \(S^{i}_{\pi}(\sigma^{i}-1)\) is the global time round at which the policy \(\pi\) advances bandit \(i\) from local time \(\sigma^{i}-1\) to local time \(\sigma^{i}\), halting that bandit. 
The expression above for \(\sigma_{\pi}\) therefore identifies the first global round at which no further activations will be made, because one bandit has been halted. In what follows, for a given policy \(\pi\), we take the final reward the controller receives to be a function of the last rewards of the game, generally a linear combination of \(\{X^{i}_{T^{i}_{\pi}(\sigma_{\pi})}\}_{1\leqslant i\leqslant N}\), or in the penultimate model a function of the second to last rewards. To maximize her expected reward, in every round the controller's decision of which bandit to activate must balance not only the current rewards of each bandit, but also the probability of halting that bandit and in doing so ending the game - losing all potential future rewards. ### Global Information Versus Local Information One of the intricacies of the results to follow is in properly distinguishing and determining what information is available to the controller to act on at a given time. The following statements are somewhat technical, but necessary for the purpose of making sure all relevant processes are mathematically well-defined, and that our control processes do not depend on information they should not have access to. Ultimately, the optimal policy results of Theorems 1 and 4 (essentially stating the simplicity of the optimal policy) demonstrate that in the optimal policy, any decision to activate a given bandit depends only on information from other bandits individually, thus rendering these filtrations unnecessary under an optimal policy. However, these extended filtrations are a technical necessity for the proof of Theorems 4. For each bandit \(i\), the filtration \(\mathbb{F}^{i}=\{\mathcal{F}^{i}(t)\}_{t\geqslant 0}\) represents the progression of information available about that bandit - the \(\sigma\)-algebra \(\mathcal{F}^{i}(t)\) representing the local information available about bandit \(i\) at local time \(t\), such as (but not limited to) the process history of \(X^{i}\). Taking \(X^{i}\) as \(\mathbb{F}^{i}\)-adapted as we do, we have \(\sigma(X^{i}_{0},X^{i}_{1},\ldots,X^{i}_{t})\subset\mathcal{F}^{i}(t)\). At round \(s\), all information available to the controller is determined by the state of each bandit at that round, i.e. acting under a given policy \(\pi\) until round \(s\), the global information available at round \(s\) is given by the \(\sigma\)-algebra \(\bigotimes_{i=1}^{N}\mathcal{F}^{i}(T^{i}_{\pi}(s))\). We may therefore refine the prior definition of non-anticipatory policies to be the set of policies \(\mathcal{P}\) such that for each \(s\geqslant 0\), \(\pi(s)\) is measurable with respect to the prior \(\sigma\)-algebra, i.e., determined by the information available at round \(s\). Weaker definitions of non-anticipatory, such as dependence on random events, e.g., coin flips, are addressed in Section 6. It is convenient to define the initial global \(\sigma\)-algebra \(\mathcal{G}_{0}=\bigotimes_{i=1}^{N}\mathcal{F}^{i}(0)\), representing the initial information available from each bandit, which is independent of policy \(\pi\). Additionally, given a policy \(\pi\), it is necessary to define a set of policy-dependent filtrations in the following way: let \(\mathbb{H}_{\pi}^{i}=\{\mathcal{H}_{\pi}^{i}(t)\}_{t\geqslant 0}\), where \(\mathcal{H}_{\pi}^{i}(t)=\bigotimes_{j=1}^{N}\mathcal{F}^{j}(T_{\pi}^{j}(S_{ \pi}^{i}(t)))\) represents the total information available to the controller about all bandits, prior to the \(t^{th}\) activation of bandit \(i\) under \(\pi\). 
It is indexed by the local time of bandit \(i\), but at each time \(t\) gives the current state of information of each bandit. Note that, since \(T_{\pi}^{i}(S_{\pi}^{i}(t))=t\), \(\mathcal{H}_{\pi}^{i}(t)\) contains the information available in \(\mathcal{F}^{i}(t)\). This filtration is necessary for expressing local stopping times, i.e., concerning \(X^{i}\), from the perspective of the controller - \(\mathbb{F}^{i}\)-stopping times no longer suffice, since the controller has access to information from all the other processes as well. Note though, \(\mathbb{F}^{i}\)-stopping times may be viewed as \(\mathbb{H}_{\pi}^{i}\)-stopping times, cf. Remark 1. **Notation.** When discussing stopping times, we will utilize the following notation. For a general filtration \(\mathbb{J}\) (e.g., \(\mathbb{J}=\mathbb{F}^{i},\mathbb{H}_{\pi}^{i}\)), we denote by \(\hat{\mathbb{J}}(t)\) the set of all \(\mathbb{J}\)-stopping times strictly greater than \(t\) (\(\mathbb{P}^{i},\mathbb{P}\)-a.e.). For a \(\mathbb{J}\)-stopping time \(\tau\), \(\hat{\mathbb{J}}(\tau)\) is similarly defined. The following simple example illustrates the random variables we have defined in this section. **Example 1.** Take \(N=2\) bandits, independent geometric stopping times \(\sigma^{i}\) with \[\mathbb{P}^{i}(\sigma^{i}>t)=\beta_{i}^{t},\text{ for }t=0,1,\ldots\] for some constants \(\beta_{i}\in(0,1),\,i=1,2\), and consider a cyclic policy \(\pi^{1}(t)=1\) for \(t=0,2,\ldots,\) and \(\pi^{1}(t)=2\) for \(t=1,3,\ldots\). Under the policy \(\pi^{1}\) for any sample path for which \(\sigma^{1}>2\) and \(\sigma^{2}>2\) we will have: \begin{tabular}{|c||c|c||c||c|c|} \hline \(s\) & \(T_{\pi^{1}}^{1}\) & \(T_{\pi^{1}}^{2}\) & \(\pi^{1}(s)\) & Reward & Probability of not stopping at \(s\) \\ \hline \(0\) & \(0\) & \(0\) & \(1\) & \(X_{0}^{1}\) & \(\beta_{1}=\mathbb{P}(\sigma^{1}>1)\) \\ \hline \(1\) & \(1\) & \(0\) & \(2\) & \(X_{0}^{2}\) & \(\beta_{1}\beta_{2}=\mathbb{P}(\sigma^{1}>1,\sigma^{2}>1)\) \\ \hline \(2\) & \(1\) & \(1\) & \(1\) & \(X_{1}^{1}\) & \(\beta_{1}^{2}\beta_{2}=\mathbb{P}(\sigma^{1}>2,\sigma^{2}>1)\) \\ \hline \(3\) & \(2\) & \(1\) & \(2\) & \(X_{1}^{2}\) & \(\beta_{1}^{2}\beta_{2}^{2}=\mathbb{P}(\sigma^{1}>2,\sigma^{2}>2)\) \\ \hline \(4\) & \(2\) & \(2\) & \(1\) & \(X_{2}^{1}\) & \(\beta_{1}^{3}\beta_{2}^{2}=\mathbb{P}(\sigma^{1}>3,\sigma^{2}>2)\) \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline \end{tabular} Thus it is easy to see that under \(\pi^{1}\) the expected total reward received from the two bandits is \[V_{\pi^{1}}(\mathbb{X})=\mathbb{E}\left[X_{0}^{1}+\beta_{1}X_{0}^{2}+\beta_{1}\beta_{2}X_{1}^{1}+\beta_{1}^{2}\beta_{2}X_{1}^{2}+\beta_{1}^{2}\beta_{2}^{2}X_{2}^{1}+\cdots\right].\] Note also that: \(S_{\pi^{1}}^{1}(0)=\inf\{s\geqslant 0:\pi^{1}(s)=1\}=0\), \(S_{\pi^{1}}^{1}(1)=\inf\{s>S_{\pi^{1}}^{1}(0):\pi^{1}(s)=1\}=2\), \(S_{\pi^{1}}^{1}(2)=4\) and \(S_{\pi^{1}}^{2}(0)=\inf\{s\geqslant 0:\pi^{1}(s)=2\}=1\), \(S_{\pi^{1}}^{2}(1)=\inf\{s>S_{\pi^{1}}^{2}(0):\pi^{1}(s)=2\}=3\), etc. Finally note that under policy \(\pi^{1}\) on the event \(\{\sigma^{1}\geqslant 2,\ \sigma^{2}=1\}\) bandit 2 causes the game to end at round \(s=2\), i.e., the global halting time is \[\sigma_{\pi^{1}}=\min\{S^{1}_{\pi^{1}}(\sigma^{1}-1),\ S^{2}_{\pi^{1}}(0)\}+1=1+1=2,\] since \(S^{1}_{\pi^{1}}(\sigma^{1}-1)\geqslant S^{1}_{\pi^{1}}(1)=2\). 
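A small simulation can be used to check the series above numerically. The sketch below assumes, purely for illustration, constant rewards \(X^{1}_{t}=1\) and \(X^{2}_{t}=0.5\) together with the geometric halting times of Example 1, and compares a Monte Carlo estimate of the cumulative reward under \(\pi^{1}\) with a truncation of the series.

```python
import random

# Monte Carlo check of Example 1 under the cyclic policy pi^1, assuming
# (hypothetically) constant rewards X^1_t = 1.0, X^2_t = 0.5 and geometric
# halting: P(sigma^i > t) = beta_i ** t, i.e. each activation of bandit i
# halts it with probability 1 - beta_i.
BETA = (0.9, 0.8)       # (beta_1, beta_2), assumed values
REWARD = (1.0, 0.5)     # constant per-activation rewards, assumed values

def one_run(rng):
    """Cumulative reward of a single game under the cyclic policy."""
    total, s = 0.0, 0
    while True:
        i = s % 2                     # pi^1 activates bandits 1, 2, 1, 2, ...
        total += REWARD[i]            # reward of the activated bandit is collected
        if rng.random() > BETA[i]:    # the activated bandit halts, ending the game
            return total
        s += 1

def truncated_series(n_terms=5000):
    """Truncation of E[X^1_0 + b1 X^2_0 + b1 b2 X^1_1 + ...] from Example 1."""
    total, survive = 0.0, 1.0
    for s in range(n_terms):
        total += survive * REWARD[s % 2]
        survive *= BETA[s % 2]        # probability the game reaches round s + 1
    return total

rng = random.Random(1)
runs = 200_000
estimate = sum(one_run(rng) for _ in range(runs)) / runs
print(f"Monte Carlo: {estimate:.3f}   series: {truncated_series():.3f}")
```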
## 3 Maximizing Solo Payouts: Non-Increasing Rewards In this section, we consider the problem of maximizing the expected _penultimate_ reward from the bandit that halts and ends the game. That is, if a bandit is activated and halts, stopping the game, the controller receives the reward that bandit offered prior to its last activation. Additionally, in this section, we assume that the reward processes of each bandit are non-increasing. In fact, under this restriction, we may even maximize the reward _almost surely_. This result, while intuitive, acts as the basis of all future optimality results herein. We define the _penultimate solo payout_ value of a policy \(\pi\) as, \[\begin{split} V^{\text{PSP}}_{\pi}(\mathbb{X})&= \mathbb{E}\left[X_{\pi}(\sigma_{\pi}-1)|\mathcal{G}_{0}\right]\\ &=\sum_{i=1}^{N}\mathbb{E}\left[\mathbbm{1}\left\{i=\pi(\sigma_{ \pi}-1)\right\}X^{i}_{T_{\pi}(\sigma_{\pi}-1)}|\mathcal{G}_{0}\right].\end{split} \tag{8}\] **Theorem 1** (A Greedy, Almost-Sure Result for Non-Increasing Solo Payout Processes): _Given a collection of reward processes \(\mathbb{X}\) such that for each \(i\), \(X^{i}\) is almost surely non-increasing for \(t<\sigma^{i}\), there exists a policy \(\pi^{*}\in\mathcal{P}\) such that for any policy \(\pi\in\mathcal{P}\),_ \[X_{\pi}(\sigma_{\pi}-1)\leqslant X_{\pi^{*}}(\sigma_{\pi^{*}}-1)\ \ \text{($ \mathbb{P}$-a.e.)}. \tag{9}\] _In particular, such a \(\pi^{*}\) is given by the following greedy rule: In each round \(s\geqslant 0\), activate the bandit with the largest current value of \(X^{i}\), i.e.,_ \[\pi^{*}(s)=\ \arg\max_{i}X^{i}_{T^{i}_{\pi^{*}}(s)}.\] **Proof.** The proof proceeds by incremental improvements on an arbitrary policy. Let \(X^{i}_{0}=\max_{j}X^{j}_{0}\). Let \(\pi\in\mathcal{P}\) be arbitrary, and define \(S=S^{i}_{\pi}(0)\), the first round bandit \(i\) is activated under \(\pi\). If \(i\) is never activated, we take \(S\) to be infinite. From \(\pi\), we construct a policy \(\pi^{\prime}\in\mathcal{P}\) as follows: \(\pi^{\prime}\) activates bandits in the same order as \(\pi\), but it advances the first activation of bandit \(i\) from round \(s=S\) to round \(s=0\). That is, \[\pi^{\prime}(s)=\begin{cases}i&\text{for }s=0,\\ \pi(s-1)&\text{for }s=1,2,\ldots S,\\ \pi(s)&\text{for }s\geqslant S+1.\end{cases} \tag{10}\] That is, after the initial round policy \(\pi^{\prime}\) activates the bandit that policy \(\pi\) activated in the previous round, continuing this through the first round that \(\pi\) activates bandit \(i\), then making the same choice in each round as does \(\pi\). Policy \(\pi^{\prime}\) is well-defined and in \(\mathcal{P}\), as at every round \(s\), the information available under \(\pi^{\prime}\) about each bandit is greater than or equal to the information available under \(\pi\) at that round. We next compare the performance of these two policies by cases. In the case that \(\sigma_{\pi}>S+1\) (\(=S_{\pi}^{i}(0)+1\)), that is when the game halts under \(\pi\)_after_ the first activation of bandit \(i\), then there is no difference between the rewards returned by either policy, since both policies perform the same activations after time \(S\) (sample path-wise). Similarly, if \(\sigma_{\pi}=S+1\), that is \(\pi\) halts _due to_ the first activation of bandit \(i\), the reward returned under \(\pi\) is \(X_{0}^{i}\), and as bandit \(i\) halted on its first activation, the reward returned under \(\pi^{\prime}\) is also \(X_{0}^{i}\). 
Finally, the only situation in which \(\pi\) and \(\pi^{\prime}\) differ in their returned rewards is when \(\sigma_{\pi}\leqslant S\) and \(\sigma^{i}=1\). Therefore, it follows from the above cases that: \[\begin{split} X_{\pi^{\prime}}(\sigma_{\pi^{\prime}}-1)-X_{\pi}(\sigma_{\pi}-1)&=(X_{\pi^{\prime}}(\sigma_{\pi^{\prime}}-1)-X_{\pi}(\sigma_{\pi}-1))\mathbbm{1}_{\{\sigma_{\pi}\leqslant S\}}\mathbbm{1}_{\{\sigma^{i}=1\}}\\ &=(X_{0}^{i}-X_{\pi}(\sigma_{\pi}-1))\mathbbm{1}_{\{\sigma_{\pi}\leqslant S\}}\mathbbm{1}_{\{\sigma^{i}=1\}}\\ &\geqslant 0\ \ (\mathbb{P}\text{-a.e.}).\end{split} \tag{11}\] The last step follows from taking \(X_{0}^{i}\) to be the initially largest reward, and from the fact that all reward processes are non-increasing. It follows that advancing the activation of the initial maximal bandit improves or at least does not change the value of a policy. This same argument can be applied at every round that follows, i.e., at every round, activation of the current initial maximal bandit is an improvement over (or at least does not change the value of) any other policy. Note, collisions may occur if at a given round two bandits have equal rewards. This may be resolved at the discretion of the controller, such as by always taking the bandit with the smaller index \(i\). As each bandit halts in a finite time, almost surely, for sufficiently many greedy improvements as outlined above, the resulting improvement of any policy \(\pi\) will return the same value as the completely greedy strategy \(\pi^{*}\). Hence, \[X_{\pi^{*}}(\sigma_{\pi^{*}}-1)\geqslant X_{\pi}(\sigma_{\pi}-1)\ \ (\mathbb{P}\text{-a.e.}). \tag{12}\] **Remark 4**.: The necessity of finite \(\sigma^{i}\). Note that Assumption 2.a: \(\sigma^{i}<\infty\) almost surely, for each bandit \(i\), is employed to exclude cases such as the following, in which no optimal policy exists. Consider two bandits, Bandit A offering a potential reward of $100 in each time step, and Bandit B offering a potential reward of $50 in each time step. Further, suppose that \(\mathbb{P}^{A}(\sigma^{A}<\infty)=0.5\), and \(\sigma^{B}=1\) almost surely - that is, Bandit B halts after its first activation. This choice of \(\sigma^{B}\) implies that any policy on these bandits may be described in the following way: For any a.s. finite \(\mathbb{F}^{A}\)-stopping time \(\tau\geqslant 0\), \(\pi_{\tau}\) activates Bandit A until \(\tau\), then Bandit B, ending the game. The value of such a policy is given by \[V_{\pi_{\tau}}^{PSP}(A,B)=\$100\ \mathbb{P}^{A}(\sigma^{A}<\tau)+\$50\ \mathbb{P}^{A}(\sigma^{A}\geqslant\tau)\leqslant\$75. \tag{13}\] This upper bound may be approached arbitrarily closely by choosing a finite, sufficiently large \(\tau\) - the larger the \(\tau\), the closer to achieving the upper bound. However, taking \(\tau\) to be infinite, the $100 is only collected with probability 0.5, and Bandit B is never activated at all, yielding a total expected value of \(\$100\times 0.5=\$50<\$75\). In this case, there exist \(\epsilon\)-optimal policies, but no optimal policy. This phenomenon appears in all versions of the problems discussed herein and its investigation is an avenue of interesting additional research. ## 4 Maximizing Collective Payouts In this section, we consider a model where rewards are collective, i.e., received from all bandits at the halting of the game. Thus, the expected _collective payout_ value of a policy \(\pi\) is \[V_{\pi}^{CP}(\mathbb{X})=\sum_{i=1}^{N}\mathbb{E}\left[X_{T^{i}_{\pi}(\sigma_{\pi})}^{i}|\mathcal{G}_{0}\right]. 
\tag{14}\] In the following subsections, we develop a policy \(\pi^{*}\in\mathcal{P}\) such that for all \(\pi\in\mathcal{P}\), \[V_{\pi}^{CP}(\mathbb{X})\leqslant V_{\pi^{*}}^{CP}(\mathbb{X})\ (\text{$\mathbb{P}$-a.e.}). \tag{15}\] **Remark 5**.: For algebraic convenience in the remainder of this section we take \(X_{0}^{i}=0\) for all \(i\). For a more arbitrary reward processes \(\{\hat{X}^{i}\}\), recall that the initial \(\hat{X}_{0}^{i}\) are taken to be constant and known at the initial round by assumption. Hence, defining \(X_{t}^{i}=\hat{X}_{t}^{i}-\hat{X}_{0}^{i}\), maximizing the total expected reward from the \(\{\hat{X}^{i}\}\) processes is equivalent to maximizing the total expected reward from the \(\{X^{i}\}\) processes. ### Block Values This section introduces a way of considering the "value" of a set of activations of a bandit. The "true" value of a decision to activate a bandit is not simply the potential reward gained through that decision, but instead it must balance the immediate potential reward with the incurred risk of halting the game through that decision, and the resulting loss of potential future rewards. For each bandit \(i\), for a given policy \(\pi\) we define \(\tau_{\pi}^{i}\) to be the first activation of bandit \(i\) that does not occur under \(\pi\). That is, \[\tau_{\pi}^{i}=\min\{t\geqslant 0:S_{\pi}^{i}(t)\geqslant\sigma_{\pi}\}. \tag{16}\] Note, the above makes use in its definition of \(\pi\) 'after the halting time \(\sigma_{\pi}\)', but we simply mean to observe here that at the global halting time, we can observe what the next activation of each bandit would have been - this is \(\tau_{\pi}^{i}\). With this, we state the following definitions. **Definition 1** (Process Blocks and their Values): _Given times \(t^{\prime}<t^{\prime\prime}\) with \(t^{\prime}<\sigma^{i}\), and a policy \(\pi\in\mathcal{P}\) with \(S_{\pi}^{i}(t^{\prime})<\sigma_{\pi}\):_ 1. _The_ solo-payout value of the \([t^{\prime},t^{\prime\prime})\) - block of \(X^{i}\) _as:_ \[\rho^{i}(t^{\prime},t^{\prime\prime})=\frac{\mathbb{E}^{i}\left[X_{\sigma^{i} \wedge t^{\prime\prime}}^{i}-X_{t^{\prime}}^{i}\middle|\mathcal{F}^{i}(t^{ \prime})\right]}{\mathbb{P}^{i}\left(t^{\prime}<\sigma^{i}\leqslant t^{\prime \prime}\middle|\mathcal{F}^{i}(t^{\prime})\right)}.\] (17) 2. _The_ \(\pi\)-value of the \([t^{\prime},t^{\prime\prime})\) - block of \(X^{i}\) _as:_ \[\nu^{i}_{\pi}(t^{\prime},t^{\prime\prime})=\frac{\mathbb{E}\left[X^{i}_{\pi^{i} (\sigma_{\pi})\wedge t^{\prime\prime}}-X^{i}_{t^{\prime}}\middle|\mathcal{H}^{ i}_{\pi}(t^{\prime})\right]}{\mathbb{P}\left(t^{\prime}<\sigma^{i}\leqslant\tau^{i}_{ \pi}\wedge t^{\prime\prime}\middle|\mathcal{H}^{i}_{\pi}(t^{\prime})\right)}.\] (18) **Remark 6**.: Due to Eq. (3) c.f. Assumption 2, the denominators of both block values are non-zero. The above quantities are all measurable with respect to the indicated \(\sigma\)-fields, and finite (\(\mathbb{P}^{i},\mathbb{P}\) -a.e.), due to Eq. (1) c.f. Assumption 1. Notionally, \(\rho^{i}\) can be thought of as the value of a block under consecutive activation, while \(\nu^{i}_{\pi}\) is, correspondingly, the value of a block potentially 'diluted' or broken up by activations of other bandits under \(\pi\). The denominator of \(\nu^{i}_{\pi}\) may be interpreted as the probability that the game halts due to bandit \(i\), halting during activation of the \([t^{\prime},t^{\prime\prime})\)-block. 
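
To make Definition 1 concrete, the following minimal sketch evaluates \(\rho^{i}(t^{\prime},t^{\prime\prime})\) in the simplest possible setting: a single bandit whose reward path is deterministic and whose halting time has known hazard rates \(h_{t}=\mathbb{P}(\sigma=t+1\mid\sigma>t)\), so that the conditional expectation and probability in Eq. (17) reduce to finite sums. All numerical values are illustrative only, and the snippet is not part of the formal development.

```python
import numpy as np

# A toy bandit: deterministic reward path X_0, X_1, ... and halting hazards
# h[t] = P(sigma = t+1 | sigma > t).  All numbers below are illustrative.
X = np.array([0.0, 0.1, 1.0, 1.4, 1.6, 1.7])
h = np.array([0.5, 0.05, 0.2, 0.4, 0.6])

def rho_block(t0, t1):
    """Solo-payout value of the [t0, t1) block, Eq. (17), conditioned on sigma > t0."""
    surv, num = 1.0, 0.0          # surv = P(sigma > t | sigma > t0)
    for t in range(t0, t1):
        num += surv * h[t] * (X[t + 1] - X[t0])   # halt on activation t+1, collect X_{t+1}
        surv *= 1.0 - h[t]
    num += surv * (X[t1] - X[t0])                 # survive the whole block: X_{sigma ^ t1} = X_{t1}
    return num / (1.0 - surv)                     # denominator: P(t0 < sigma <= t1 | sigma > t0)

for t1 in range(1, len(X)):
    print(f"rho(0, {t1}) = {rho_block(0, t1):.3f}")
```

In this toy example the second block value exceeds the first, because the extra increment \(X_{2}-X_{1}\) is large while the additional halting risk it carries is small. With genuinely random rewards, the finite sums above would have to be replaced by estimates of the conditional expectations (for example, by Monte Carlo over sample paths).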
**Remark 7**.: The above might be justified as the 'value' of a block of activations in the following way: even if the incremental reward gained due to an activation block (the numerators) is small, if the probability of halting due to those activations (the denominators) is sufficiently small, there is very little risk in attempting to gain that increment through that activation. In fact, there might be more to gain in such a case than if the incremental reward were slightly larger, but the probability of halting were also larger. The above values captures this trade-off between risk of halting and reward gained. The following theorem illustrates the relationship between \(\rho^{i}\) and \(\nu^{i}_{\pi}\), essentially stating that the value of any block under some policy \(\pi\) is at most the value of some block activated consecutively. **Theorem 2** (Block Value Comparison): _For bandit \(i\) under policy \(\pi\), for any time \(t_{0}\) such that \(S^{i}_{\pi}(t_{0})<\sigma_{\pi}\), the following holds for any \(\mathbb{H}^{i}_{\pi}\)-stopping time \(\tau\) with \(t_{0}<\tau\):_ \[\nu^{i}_{\pi}(t_{0},\tau)\leqslant\operatorname*{ess\,sup}_{\hat{\tau}\in \hat{\mathbb{P}}^{i}(t_{0})}\rho^{i}(t_{0},\hat{\tau})\ (\mathbb{P}\text{-a.e.}). \tag{19}\] **Proof.** Note that it follows from Eqs. (1), (3) that the essential supremum is finite _(\(\mathbb{P}\)-a.e)_. For each bandit \(i\) and any \(\pi\in\mathcal{P}\), it can be shown by cases (whether the game does or does not halt due to an activation of \(i\)) that \(T^{i}_{\pi}(\sigma_{\pi})=\sigma^{i}\wedge\tau^{i}_{\pi}\). Therefore, for a given \(\tau\in\hat{\mathbb{H}}^{i}_{\pi}(t_{0})\), \[\nu^{i}_{\pi}(t_{0},\tau) =\frac{\mathbb{E}\left[X^{i}_{\sigma^{i}\wedge\tau^{i}_{\pi} \wedge\tau}-X^{i}_{t_{0}}\middle|\mathcal{H}^{i}_{\pi}(t_{0})\right]}{\mathbb{ P}\left(t_{0}<\sigma^{i}\leqslant\tau^{i}_{\pi}\wedge\tau\middle|\mathcal{H}^{i}_{ \pi}(t_{0})\right)} \tag{20}\] \[=\frac{\mathbb{E}\left[X^{i}_{\sigma^{i}\wedge(\tau^{i}_{\pi} \wedge\tau)}-X^{i}_{t_{0}}\middle|\mathcal{H}^{i}_{\pi}(t_{0})\right]}{\mathbb{ P}\left(t_{0}<\sigma^{i}\leqslant(\tau^{i}_{\pi}\wedge\tau)\middle|\mathcal{H}^{i}_{ \pi}(t_{0})\right)}\leqslant\operatorname*{ess\,sup}_{\hat{\tau}\in\hat{ \mathbb{H}}^{i}_{\pi}(t_{0})}\frac{\mathbb{E}\left[X^{i}_{\sigma^{i}\wedge \tau}-X^{i}_{t_{0}}\middle|\mathcal{H}^{i}_{\pi}(t_{0})\right]}{\mathbb{P} \left(t_{0}<\sigma^{i}\leqslant\tau\middle|\mathcal{H}^{i}_{\pi}(t_{0})\right) }\ (\mathbb{P}\text{-a.e.}).\] The last step above follows as, given that \(\tau^{i}_{\pi}\) and \(\tau\) are both in \(\hat{\mathbb{H}}^{i}_{\pi}(t_{0})\) by assumption, so too is \(\tau^{i}_{\pi}\wedge\tau\), and the term on the right hand side is the \(\operatorname*{ess\,sup}\) over all such stopping times. Defining a 'global' \(\pi\)-analog of \(\rho^{i}\), \[\rho^{i}_{\pi}(t^{\prime},t^{\prime\prime})=\frac{\mathbb{E}\left[X^{i}_{\sigma^{ i}\wedge t^{\prime\prime}}-X^{i}_{\tau}|\mathcal{H}^{i}_{\pi}(t^{\prime}) \right]}{\mathbb{P}\left(t^{\prime}<\sigma^{i}\leqslant t^{\prime\prime}| \mathcal{H}^{i}_{\pi}(t^{\prime})\right)}, \tag{21}\] we have the following relations: \[\nu^{i}_{\pi}(t_{0},\tau)\leqslant\operatorname*{ess\,sup}_{\hat{\tau}\in \hat{\mathbb{H}}^{i}_{\pi}(t_{0})}\rho^{i}_{\pi}(t_{0},\hat{\tau})\leqslant \operatorname*{ess\,sup}_{\hat{\tau}\in\hat{\mathbb{P}}^{i}(t_{0})}\rho^{i}(t _{0},\hat{\tau})\ (\mathbb{P}\text{-a.e.}). \tag{22}\] The first inequality above is simply a restatement of Eq. (20). 
The second inequality, the exchange from \(\mathbb{H}^{i}_{\pi}\)-stopping times to \(\mathbb{F}^{i}\)-stopping times, is intuitive: as the \(X^{i}\) process and \(\sigma^{i}\) are independent of the non-\(i\) bandits, information about those independent bandits (through the \(\mathbb{H}^{i}_{\pi}\)-stopping times) cannot assist in maximizing the quotient. Rigorously, this amounts to integrating out the independent bandits; this is done in detail as Proposition 1. \(\square\) **Proposition 1**: _For bandit \(i\) under policy \(\pi\), for any time \(t_{0}\) such that \(S^{i}_{\pi}(t_{0})<\sigma_{\pi}\), the following holds:_ \[\operatorname*{ess\,sup}_{\hat{\tau}\in\hat{\mathbb{H}}^{i}_{\pi}(t_{0})}\rho ^{i}_{\pi}(t_{0},\hat{\tau})\leqslant\operatorname*{ess\,sup}_{\hat{\tau}\in \hat{\mathbb{P}}^{i}(t_{0})}\rho^{i}(t_{0},\hat{\tau})\ (\mathbb{P}\text{-a.e.}). \tag{23}\] See Section 6 for its proof. The following proposition provides, using \(\rho^{i}\) and \(\nu^{i}_{\pi}\), expressions for the incremental reward gained through consecutive or under \(\pi\) activation of a block. **Proposition 2**: _For each bandit \(i\), the following relations hold for any \(\mathbb{F}^{i}\)-stopping times \(\tau^{\prime}<\tau^{\prime\prime}\) where the quantities are well defined. Equality also holds when conditioning with respect to the initial information, \(\mathcal{F}^{i}(0)\), \(\mathcal{G}_{0}\) respectively via the tower property._ \[\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau^{\prime\prime}}-X^{i}_{\tau^{ \prime}}\middle|\mathcal{F}^{i}(\tau^{\prime})\right] =\mathbb{E}^{i}\left[\sum_{t=\tau^{\prime}}^{\tau^{\prime\prime}- 1}\rho^{i}(\tau^{\prime},\tau^{\prime\prime})\mathbbm{1}_{\{\sigma^{i}=t+1\}} \middle|\mathcal{F}^{i}(\tau^{\prime})\right] \tag{24}\] \[\mathbb{E}\left[X^{i}_{T^{i}_{\pi}(\sigma_{\pi})\wedge\tau^{\prime\prime}}-X^{ i}_{\tau^{\prime}}\middle|\mathcal{H}^{i}_{\pi}(\tau^{\prime})\right] =\mathbb{E}\left[\sum_{t=\tau^{\prime}}^{\tau^{\prime}-1}\nu^{i}_{\pi}( \tau^{\prime},\tau^{\prime\prime})\mathbbm{1}_{\{\sigma_{\pi}=S^{i}_{\pi}(t)+1 \}}\middle|\mathcal{H}^{i}_{\pi}(\tau^{\prime})\right] \tag{25}\] **Proof.** The above equations follow directly from Eqs. (17), (18), observing the following relations: \[\mathbb{P}^{i}\left(t^{\prime}<\sigma^{i}\leqslant t^{\prime\prime }\middle|\mathcal{F}^{i}(t^{\prime})\right) =\mathbb{E}^{i}\left[\sum_{t=t^{\prime}}^{\tau^{\prime}-1}\mathbbm{1 }_{\{\sigma^{i}=t+1\}}\middle|\mathcal{F}^{i}(t^{\prime})\right], \tag{26}\] \[\mathbb{P}\left(t^{\prime}<\sigma^{i}\leqslant\tau^{i}_{\pi}\wedge t ^{\prime\prime}\middle|\mathcal{H}^{i}_{\pi}(t^{\prime})\right) =\mathbb{E}\left[\sum_{t=t^{\prime}}^{\tau^{\prime\prime}-1} \mathbbm{1}_{\{\sigma_{\pi}=S^{i}_{\pi}(t)+1\}}\middle|\mathcal{H}^{i}_{\pi}(t ^{\prime})\right].\] \(\square\) ### Solo Payout Indices and Times Theorem 2 indicates the significance of the following quantity. **Definition 2** (The Solo-Payout Index): _For any \(t<\sigma^{i}\), the incremental Solo-Payout Index at \(t\) is defined to be_ \[\rho^{i}(t)=\operatorname*{ess\,sup}_{\tau\in\widehat{\mathbb{F}}^{i}(t)}\rho^{ i}(t,\tau). \tag{27}\] This index can be interpreted as the maximal quotient of "incremental reward" over "probability of termination/halting" as in Eq. (17). Sonin [26] defined this index for the case of finite state Markov chain reward processes, in order to provide an efficient computation of the Gittins indices of all states. 
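
In the same deterministic toy setting (fixed reward path, known hazards, illustrative numbers), adapted stopping times collapse to fixed horizons, so the essential supremum in Eq. (27) can be evaluated by enumeration. The following self-contained sketch does exactly that; it is only an illustration, not a general-purpose index algorithm such as the finite-Markov-chain computation of Sonin [26].

```python
import numpy as np

X = np.array([0.0, 0.1, 1.0, 1.4, 1.6, 1.7])   # deterministic reward path (illustrative)
h = np.array([0.5, 0.05, 0.2, 0.4, 0.6])       # h[t] = P(sigma = t+1 | sigma > t)

def rho_block(t0, t1):
    """Block value rho(t0, t1) of Eq. (17) for the deterministic toy bandit."""
    surv, num = 1.0, 0.0
    for t in range(t0, t1):
        num += surv * h[t] * (X[t + 1] - X[t0])
        surv *= 1.0 - h[t]
    num += surv * (X[t1] - X[t0])
    return num / (1.0 - surv)

def solo_payout_index(t0):
    """rho(t0) of Definition 2: the best achievable block value over horizons tau > t0."""
    values = {t1: rho_block(t0, t1) for t1 in range(t0 + 1, len(X))}
    t_best = max(values, key=values.get)
    return values[t_best], t_best

for t0 in range(len(X) - 1):
    idx, t_best = solo_payout_index(t0)
    print(f"rho({t0}) = {idx:.3f},  realised by the block [{t0}, {t_best})")
```

Restricting to a deterministic path is what makes the brute-force maximisation legitimate; for random rewards the supremum runs over all adapted stopping times and no such enumeration is available.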
The following result demonstrates that \(\rho^{i}(t)\) is realized as the value of some block from time \(t\), i.e., for some \(\tau>t\), \(\rho^{i}(t)=\rho^{i}(t,\tau)\) (\(\mathbb{P}^{i}\)-a.e.). As such, \(\rho^{i}(t)\) represents the maximal block value achievable from process \(i\) from time \(t\). **Proposition 3**: _For any time \(t_{0}<\sigma^{i}\), there exists a \(\tau\in\widehat{\mathbb{F}}^{i}(t_{0})\) such that \(\rho^{i}(t_{0})=\rho^{i}(t_{0},\tau)\) (\(\mathbb{P}^{i}\)-a.e.)._ The proof is relegated to Section 6, since it specialized and not the focus of this paper. The solo-payout indices and their realizing blocks provide a natural time scale with which to view a process, in terms of a sequence of blocks. In particular, we define the following sequence: **Definition 3** (Solo-Payout Index Times): _Define a sequence of \(\mathbb{F}^{i}\)-stopping times \(\{\tau^{i}_{k}\}_{k\geqslant 0}\) in the following way, that \(\tau^{i}_{0}=0\), and for \(k>0\),_ \[\tau^{i}_{k+1}=\arg\operatorname*{ess\,sup}\{\rho^{i}(\tau^{i}_{k},\tau):\tau \in\widehat{\mathbb{F}}^{i}(\tau^{i}_{k})\}. \tag{28}\] In the case that \(\tau^{i}_{k}=\sigma^{i}\) for some \(k\), then \(\tau^{i}_{k^{\prime}}\) is taken to be infinite for all \(k^{\prime}>k\). In the case that \(\tau^{i}_{k}<\sigma^{i}\), we have that \(\rho^{i}(\tau^{i}_{k})=\rho^{i}(\tau^{i}_{k},\tau^{i}_{k+1})\). The question of whether the '\(\arg\operatorname*{ess\,sup}\)' exists is resolved in the positive by Proposition 3; if there is more than one stopping time that attains the '\(\arg\operatorname*{ess\,sup}\)', we take \(\tau^{i}_{k+1}\) to be the one demonstrated by the application of Lemma 1 in the proof of Proposition 3. Using this sequence of stopping times, we partition the local process times \(\mathbb{N}^{i}=\{0,1,2,\ldots\}\) into \[\mathbb{N}^{i}=[0,\tau^{i}_{1})\cup[\tau^{i}_{1},\tau^{i}_{2})\cup[\tau^{i}_ {2},\tau^{i}_{3})\cup\ldots.\] One important property of this partition is the following: **Proposition 4** (Solo-Payout Indices are Non-Increasing over Index Times): _For any \(k>0\) such that \(\tau^{i}_{k}<\sigma^{i}\), the following is true: \(\rho^{i}(\tau^{i}_{k-1})\geqslant\rho^{i}(\tau^{i}_{k})\) (\(\mathbb{P}^{i}\)-a.e.)._ For intuition, recall the \(\{\tau^{i}_{k}\}_{k}\) are meant to realize successively the maximal indices of the process \(\{X^{i}_{t}\}_{t}\). If \(\rho^{i}(\tau^{i}_{k-1})=\rho^{i}(\tau^{i}_{k-1},\tau^{i}_{k})<\rho^{i}(\tau^{ i}_{k})\), the index from \(\tau^{i}_{k-1}\) may be increased by taking a block that extends from \(\tau^{i}_{k-1}\)_past_\(\tau^{i}_{k}\). This contradicts the idea of the \(\{\tau^{i}_{k}\}_{k}\) as realizing the maximal indices. The proof is relegated to Section 6, as technical, and not the focus of this paper. ### Equivalent Solo Payout Processes For each bandit, we have developed a partition of local time into blocks of activations via the solo payout index stopping times. With Proposition 2 in mind, we use these blocks to define a set of reward equivalent penultimate solo payout processes, and \(\pi\)-equivalent solo payout processes. **Definition 4**: _Given the collection of reward processes \(\mathbb{X}=(X^{1},...,X^{N})\), and \(\{\tau^{i}_{k}\}_{k\geqslant 0}\) for each \(i\) as in Definition 3, we define:_ 1. _The_ reward-equivalent solo payout collection__\(\mathbb{Y}^{X}=(Y^{1},...,Y^{N})\) _by_ \[Y^{i}(t)=\rho^{i}(\tau^{i}_{k}),\ \ \text{if }\tau^{i}_{k}\leqslant t<\tau^{i}_{k+1}.\] (29) 2. 
_For_ \(\pi\in\mathcal{P}\)_, the_ \(\pi\)-equivalent solo payout collection__\(\mathbb{Y}^{X}_{\pi}=(Y^{1}_{\pi},...,Y^{N}_{\pi})\)_, by_ \[Y^{i}_{\pi}(t)=\nu^{i}_{\pi}(\tau^{i}_{k},\tau^{i}_{k+1}),\ \ \text{if }\tau^{i}_{k} \leqslant t<\tau^{i}_{k+1}.\] (30) Like \(X^{i}\), the process \(Y^{i}\) is defined on \((\Omega^{i},\mathcal{F}^{i},\mathbb{P}^{i},\mathbb{F}^{i})\) and is \(\mathbb{F}^{i}\)-adapted, as the \(\rho^{i}(\tau^{i}_{k})\) is defined by the information available locally at time \(\tau^{i}_{k}\). However, as the \(\nu^{i}_{\pi}(\tau^{i}_{k},\tau^{i}_{k+1})\) depend on the specifics of policy \(\pi\), so do the \(Y^{i}_{\pi}\) processes; the \(Y^{i}_{\pi}\) processes are \(\mathbb{H}^{i}_{\pi}\)-adapted, but not \(\mathbb{F}^{i}\)-adapted. Note, \(Y^{i}\) is only really defined for \(t<\sigma^{i}\), and \(Y^{i}_{\pi}\) is only defined for \(t\) such that \(S^{i}_{\pi}(t)<\sigma_{\pi}\). However, since no rewards are collected from bandit \(i\) after these times, this lack of definition is of no consequence. The following are simple, but important properties of the \(\mathbb{Y}^{X},\mathbb{Y}^{X}_{\pi}\) processes. **Proposition 5**: _For \(\pi\in\mathcal{P}\), for each \(i\), and any \(k\) where the following quantities are defined,_ \[\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau^{i}_{k+1}}-X^{i}_{\tau^{i}_{k }}\big{|}\mathcal{F}^{i}(\tau^{i}_{k})\right]=\mathbb{E}^{i}\left[\sum_{t= \tau^{i}_{k}}^{\tau^{i}_{k+1}-1}Y^{i}(t)\mathbb{1}_{\{\sigma^{i}=t+1\}}\big{|} \mathcal{F}^{i}(\tau^{i}_{k})\right], \tag{31}\] \[\mathbb{E}\left[X^{i}_{T^{i}_{\pi}(\sigma_{\pi})\wedge\tau^{i}_{k+1}}-X^{i}_{ \tau^{i}_{k}}\big{|}\mathcal{H}^{i}_{\pi}(\tau^{i}_{k})\right]=\mathbb{E}\left[ \sum_{t=\tau^{i}_{k}}^{\tau^{i}_{k+1}-1}Y^{i}_{\pi}(t)\mathbb{1}_{\{\sigma_{ \pi}=S^{i}_{\pi}(t)+1\}}\big{|}\mathcal{H}^{i}_{\pi}(\tau^{i}_{k})\right]. \tag{32}\] _As with Proposition 2, equality also holds when conditioning with respect to \(\mathcal{F}^{i}(0),\mathcal{G}_{0}\)._ **Proof.** _This follows as an application of Proposition 2 and the definitions of \(Y^{i}\), \(Y^{i}_{\pi}\)._ The following proposition serves as justification of the term "equivalent" in describing the \(\mathbb{Y}^{X},\mathbb{Y}^{X}_{\pi}\) collections. **Proposition 6**: _For each \(i\), for any policy \(\pi\in\mathcal{P}\),_ \[\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}}\big{|}\mathcal{F}^{i}(0)\right]= \mathbb{E}^{i}\left[Y^{i}(\sigma^{i}-1)\big{|}\mathcal{F}^{i}(0)\right], \tag{33}\] \[\mathbb{E}\left[X^{i}_{T^{i}_{\pi}(\sigma_{\pi})}\big{|}\mathcal{G}_{0}\right]= \mathbb{E}\left[\mathbbm{1}_{\{i=\pi(\sigma_{\pi}-1)\}}Y^{i}_{\pi}(T^{i}_{\pi}( \sigma_{\pi}-1))\big{|}\mathcal{G}_{0}\right]. \tag{34}\] **Proof.** _Each follows from the corresponding equation in Proposition 5, summing over \(k\) and taking expectations from the initial time, via the tower property. On the right hand sides, the \(X^{i}\) terms telescope in the sum, and \(X^{i}_{0}\) is taken to be 0. On the left hand sides, the sums over \(Y\) may be expressed as single terms, due to the indicators._ **Proposition 7**: _For each \(i\), and any time \(t>0\) such that \(Y^{i}(t)\) is well defined,_ \[Y^{i}(t-1)\geqslant Y^{i}(t)\ (\mathbb{P}^{i}\text{-a.e.}). 
\tag{35}\] **Proof.** _This follows immediately from Proposition 4, and Definition 4.1._ **Theorem 3** (Comparison of Equivalent, \(\pi\)-Equivalent Solo Payout Processes): _For any \(\pi\in\mathcal{P}\), for each \(i\) and all time \(t\) where both are defined, we have:_ \[Y^{i}_{\pi}(t)\leqslant Y^{i}(t)\ \ \ \ \ (\mathbb{P}\text{-a.e.}). \tag{36}\] **Proof.** _For such a \(t\), we have for some \(k\) that \(\tau^{i}_{k}\leqslant t<\tau^{i}_{k+1}\), and as an application of Theorem 1,_ \[Y^{i}_{\pi}(t)=\nu^{i}_{\pi}(\tau^{i}_{k},\tau^{i}_{k+1})\leqslant\ \underset{\tau^{i}\in\hat{\Pi}^{i}_{\pi}(\tau^{i}_{k})}{\text{ess}\sup}\ \nu^{i}_{\pi}(\tau^{i}_{k},\tau^{i})\leqslant\ \underset{\hat{\tau}\in\hat{\Pi}^{i}(\tau^{i}_{k})}{\text{ess}\sup}\ \rho^{i}(\tau^{i}_{k},\hat{\tau})=\rho^{i}(\tau^{i}_{k})=Y^{i}(t)\ \ \ (\mathbb{P}\text{-a.e.}). \tag{37}\] _Note in the above that the first and the last relations are just definitions, the second follows naturally by comparing one instance of the function to an \(\,\mathrm{ess}\sup\) of the same function, the third is due to Theorem 2, the fourth is due to the definition of the \(\rho^{i}\) function. \(\square\)_ ### The Optimal Policy The derivation of the optimal control policy for an arbitrary collection of reward processes \(\mathbb{X}\) under a collective reward structure is all but immediate now. **Theorem 4** (The Optimal Collective Payout Control Policy): _For a collection of reward processes \(\mathbb{X}=(X^{1},X^{2},\ldots,X^{N})\), and the associated stopping times \(\{\sigma^{i}\}_{i=1,\ldots,N}\), there exists a strategy \(\pi^{*}\in\mathcal{P}\) such that for all \(\pi\in\mathcal{P}\),_ \[V^{CP}_{\pi}(\mathbb{X})\leqslant V^{CP}_{\pi^{*}}(\mathbb{X})\ (\mathbb{P}\text{-a.e.}). \tag{38}\] _In particular, such an optimal policy \(\pi^{*}\) can be described in the following way: successively activate the bandit with the largest current solo payout index,_ \[\rho^{i}(t)=\underset{\tau\in\hat{\mathbb{P}}^{i}(t)}{\text{ess}\sup}\ \frac{\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau}-X^{i}_{t}\big{|} \mathcal{F}^{i}(t)\right]}{\mathbb{P}^{i}\left(t<\sigma^{i}\leqslant\tau\big{|} \mathcal{F}^{i}(t)\right)}, \tag{39}\] _for the duration of the corresponding index block._ Before giving the proof of this theorem, we give a corollary, which gives a useful alternative characterization of the policy \(\pi^{*}\). **Corollary 1**: _An alternative characterization of the policy \(\pi^{*}\) in Theorem 4 is the following: at every round, activate the bandit with the largest current solo payout index._ **Proof.** From Theorem 4, it follows that the optimal first activation is to activate a bandit with the largest current solo payout index. If that activation does not halt the bandit and end the game, the controller is faced with a structurally identical decision problem. It follows that again, the optimal activation is to activate a bandit with the largest current solo payout index. This argument may be iterated until halting, which will occur in finite time by assumption on the \(\{\sigma^{i}\}\). 
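
Before the formal proof of Theorem 4, the following self-contained Monte Carlo sketch illustrates Corollary 1 on three toy bandits with deterministic reward paths and known hazards, so that the index of Eq. (27) can again be computed by enumerating horizons as in the sketches above. It compares the estimated collective payout of the index policy with that of a uniformly random policy; all numbers are illustrative and the snippet is not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bandits: deterministic reward paths (X_0 = 0, cf. Remark 5) and halting hazards
# h[t] = P(sigma = t+1 | sigma > t); the final hazard is 1 so every sigma^i is finite.
BANDITS = [
    {"X": np.array([0.0, 0.2, 1.5, 2.0, 2.2]), "h": np.array([0.3, 0.1, 0.4, 1.0])},
    {"X": np.array([0.0, 1.0, 1.3, 1.4]),      "h": np.array([0.6, 0.5, 1.0])},
    {"X": np.array([0.0, 0.5, 2.5, 2.6]),      "h": np.array([0.2, 0.7, 1.0])},
]

def rho_block(X, h, t0, t1):
    surv, num = 1.0, 0.0
    for t in range(t0, t1):
        num += surv * h[t] * (X[t + 1] - X[t0])
        surv *= 1.0 - h[t]
    num += surv * (X[t1] - X[t0])
    return num / (1.0 - surv)

def solo_payout_index(X, h, t0):
    return max(rho_block(X, h, t0, t1) for t1 in range(t0 + 1, len(X)))

def play(choose):
    """Run one game; 'choose' maps current local times to the bandit to activate.
    Returns the collective payout collected when the game halts."""
    n = [0] * len(BANDITS)                            # local times T^i
    while True:
        i = choose(n)
        halts = rng.random() < BANDITS[i]["h"][n[i]]  # does this activation halt bandit i?
        n[i] += 1
        if halts:                                     # sigma^i = n[i]; the game is over
            return sum(BANDITS[j]["X"][n[j]] for j in range(len(BANDITS)))

def index_policy(n):
    return max(range(len(BANDITS)),
               key=lambda i: solo_payout_index(BANDITS[i]["X"], BANDITS[i]["h"], n[i]))

def random_policy(n):
    return int(rng.integers(len(BANDITS)))

for name, policy in [("index policy", index_policy), ("uniform random", random_policy)]:
    payouts = [play(policy) for _ in range(20000)]
    print(f"{name:15s}: estimated collective payout = {np.mean(payouts):.3f}")
```

Consistent with Theorem 4, the index policy's estimate should come out at least as large as that of the random policy, up to Monte Carlo error.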
**Proof of Theorem 4.** For an arbitrary policy \(\pi\), and \(\pi^{*}\) as indicated above, we establish the following relations: \[V_{\pi}^{CP}(\mathbb{X})=V_{\pi}^{PSP}(\mathbb{Y}_{\pi}^{X})\leqslant V_{\pi}^ {PSP}(\mathbb{Y}^{X})\leqslant V_{\pi^{*}}^{PSP}(\mathbb{Y}^{X})=V_{\pi^{*}}^{ CP}(\mathbb{X})\ (\mathbb{P}\mbox{-a.e.}), \tag{40}\] i.e., for any policy \(\pi\), we have that \(V_{\pi}^{CP}(\mathbb{X})\leqslant V_{\pi^{*}}^{CP}(\mathbb{X})\) (\(\mathbb{P}\mbox{-a.e.}\)) and therefore \(\pi^{*}\) is an optimal policy. In the following steps we prove relations (40). _Step 1:_\(V_{\pi}^{CP}(\mathbb{X})=V_{\pi}^{PSP}(\mathbb{Y}_{\pi}^{X})\), (\(\mathbb{P}\mbox{-a.e.}\)). We have, via Prop. 6, Eq. (34), \[V_{\pi}^{CP}(\mathbb{X})=\sum_{i=1}^{N}\mathbb{E}\left[X_{T_{\pi}^{i}(\sigma_{ \pi})}^{i}\big{|}\mathcal{G}_{0}\right]=\sum_{i=1}^{N}\mathbb{E}\left[\mathbb{ 1}_{\{i=\pi(\sigma_{\pi^{-1}})\}}Y_{\pi}^{i}(T_{\pi}^{i}(\sigma_{\pi}-1)) \big{|}\mathcal{G}_{0}\right]=V_{\pi}^{PSP}(\mathbb{Y}_{\pi}^{X}).\] Note, because the \(Y_{\pi}^{i}\) processes are defined in terms of \(\pi\), they are not \(\mathbb{F}^{i}\)-adapted, and cannot be utilized under any other policy. However, the value \(V_{\pi}^{PSP}(\mathbb{Y}_{\pi}^{X})\) is well defined via the above equation. _Step 2:_\(V_{\pi}^{PSP}(\mathbb{Y}_{\pi}^{X})\leqslant V_{\pi}^{PSP}(\mathbb{Y}^{X})\) (\(\mathbb{P}\mbox{-a.e.}\)). This follows from the point-wise inequality of Theorem 3, \(V_{\pi}^{i}(t)\leqslant Y^{i}(t)\) for all \(t\). Note that for any \(t\) where \(Y_{\pi}^{i}(t)\) is not defined, the \(t^{th}\) activation of \(i\) does not occur under \(\pi\), and no comparison is necessary. _Step 3:_\(V_{\pi}^{PSP}(\mathbb{Y}^{X})\leqslant V_{\pi^{*}}^{PSP}(\mathbb{Y}^{X})\) (\(\mathbb{P}\mbox{-a.e.}\)). This follows simply from Theorem 1 as, by construction, the terms of each \(Y^{i}\) process are equal to the solo payout indices of \(X^{i}\), piecewise constant over blocks, and non-increasing. _Step 4:_\(V_{\pi^{*}}^{PSP}(\mathbb{Y}^{X})=V_{\pi^{*}}^{CP}(\mathbb{X})\) (\(\mathbb{P}\mbox{-a.e.}\)). Note that \(\pi^{*}\) activates bandits consecutively over the duration of their index blocks. For a given \(i\), define \[k_{i}^{*}=\min_{k\geqslant 0}\{S_{\pi^{*}}^{i}(\tau_{k}^{i})\geqslant\sigma_{ \pi}\}, \tag{41}\] the first block of \(i\) that is not activated under \(\pi^{*}\). Note then that for each \(i\), we have the following relation \[T_{\pi^{*}}^{i}(\sigma_{\pi^{*}})=\sigma^{i}\wedge\tau_{k_{i}^{*}}^{i}. 
\tag{42}\] Expressing the value of policy \(\pi^{*}\) relative to activations over blocks, and utilizing the tower property, we have the following equivalences: \[\begin{split} V_{\pi^{*}}^{PSP}(\mathbb{Y}^{X})&=\sum_{i =1}^{N}\sum_{k=0}^{\infty}\mathbb{E}\left[\mathbbm{1}_{\{k^{\prime}_{i}>k\}} \sum_{t=\tau_{k}^{i}}^{\tau_{k+1}^{i}-1}Y^{i}(t)\mathbbm{1}_{\{\sigma^{i}=t+1 \}}|\mathcal{G}_{0}\right]\\ &=\sum_{i=1}^{N}\sum_{k=0}^{\infty}\mathbb{E}\left[\mathbbm{1}_{\{ k^{\prime}_{i}>k\}}\mathbb{E}\left[\sum_{t=\tau_{k}^{i}}^{\tau_{k+1}^{i}-1}Y^{i}(t) \mathbbm{1}_{\{\sigma^{i}=t+1\}}|\mathcal{H}_{\pi}^{i}(\tau_{k}^{i})\right]| \mathcal{G}_{0}\right]\\ &=\sum_{i=1}^{N}\sum_{k=0}^{\infty}\mathbb{E}\left[\mathbbm{1}_{\{ k^{\prime}_{i}>k\}}\mathbb{E}\left[X_{\sigma^{i}\wedge\tau_{k+1}^{i}}^{i}-X_{ \tau_{k}^{i}}^{i}|\mathcal{H}_{\pi}^{i}(\tau_{k}^{i})\right]|\mathcal{G}_{0}\right] \\ &=\sum_{i=1}^{N}\mathbb{E}\left[X_{\sigma^{i}\wedge\tau_{k}^{i}}^{ i}-X_{0}^{i}|\mathcal{G}_{0}\right]\\ &=\sum_{i=1}^{N}\mathbb{E}\left[X_{T_{\pi^{*}}^{i}(\sigma_{\pi^{* }})}^{i}|\mathcal{G}_{0}\right]=V_{\pi^{*}}^{CP}(\mathbb{X}).\end{split} \tag{43}\] Note the exchange over blocks of the \(Y^{i}\) rewards for the \(X^{i}\) rewards is due to Proposition 5, Eq. (31), taking the extension to \(\mathcal{H}_{\pi^{*}}^{i}(\tau_{k}^{i})\) in place of \(\mathcal{F}^{i}(\tau_{k}^{i})\). **Remark 8**.: The above theorem demonstrates a policy \(\pi^{*}\in\mathcal{P}\) that is \(\mathbb{P}\)-a.e. superior (or equivalent) to every other policy \(\pi\in\mathcal{P}\). However, the set of non-anticipatory policies \(\mathcal{P}\) was defined in a fairly restrictive sense in Sec. 2.2, so that the decision in any round was completely determined by the results of the past. This might be weakened to allow for randomized policies, so that the decision in a given round might depend on the results of independent events, e.g., coin flips. However, such a construction simply amounts to placing a distribution on \(\mathcal{P}\). Since \(\pi^{*}\) is \(\mathbb{P}\)-a.e. superior to any \(\pi\in\mathcal{P}\), \(\pi^{*}\) would be similarly superior to any policy sampled randomly from \(\mathcal{P}\). The structure of the proof of Theorem 4 above is based on deriving an optimality result for the collective payout model by reducing it to an instance of the a solo payout model. It suggests an interesting correspondence between the two. Under the collective payout model, in any period the controller wishes to achieve via bandit activation high collective rewards of all bandits on halting. Under a solo payout model, in any period the controller wishes to achieve via bandit activation high rewards of a given bandit on halting. However, since (under either model) the controller can only activate one bandit at a time, under the collective payout model the controller essentially seeks in every period to maximize the change in collective reward due to a single bandit _should that bandit halt_, or equivalently to maximize the change in reward _of that single bandit_ should that bandit halt. The collective payout model can therefore be cast as a penultimate solo payout model, where the payout on halting is based on the _change_ in reward of the activated bandit rather than the final collective rewards of all bandits. This can be further seen in the following section, where optimal index policies for the general (penultimate and ultimate) solo payout model are given. 
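
The constructions behind Theorem 4 can also be made concrete in the deterministic toy setting used above: the sketch below computes the index times of Definition 3 by repeated enumeration and builds the reward-equivalent process \(Y\) of Definition 4, checking numerically that it is non-increasing, as Propositions 4 and 7 assert. All numbers are illustrative.

```python
import numpy as np

# A deterministic toy bandit: reward path X_t and hazards h[t] = P(sigma = t+1 | sigma > t).
# Determinism means adapted stopping times reduce to fixed horizons, so the index times
# of Definition 3 can be found by enumeration.
X = np.array([0.0, 0.1, 1.0, 1.4, 1.6, 1.7])
h = np.array([0.5, 0.05, 0.2, 0.4, 0.6])

def rho_block(t0, t1):
    """Block value rho(t0, t1) of Eq. (17)."""
    surv, num = 1.0, 0.0
    for t in range(t0, t1):
        num += surv * h[t] * (X[t + 1] - X[t0])
        surv *= 1.0 - h[t]
    num += surv * (X[t1] - X[t0])
    return num / (1.0 - surv)

# Index times tau_0 = 0 < tau_1 < tau_2 < ... (Definition 3) and the equivalent
# process Y, constant and equal to rho(tau_k) on each block [tau_k, tau_{k+1}).
taus, Y = [0], np.empty(len(h))
while taus[-1] < len(h):
    t0 = taus[-1]
    t1 = max(range(t0 + 1, len(X)), key=lambda t: rho_block(t0, t))
    Y[t0:t1] = rho_block(t0, t1)
    taus.append(t1)

print("index times:", taus)
print("Y          :", np.round(Y, 3))
print("non-increasing (cf. Prop. 7):", all(Y[t - 1] >= Y[t] for t in range(1, len(Y))))
```

The resulting blocks \([\tau_{k},\tau_{k+1})\) partition local time exactly as in the displayed partition after Definition 3; ties in the maximisation, should they occur, may be broken arbitrarily.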
## 5 Additional Payout Schemes

Utilizing the results of the previous section, we next provide index policies for optimizing the rewards/costs from a number of additional payout models, by reducing them to the collective payout model cf. Eq.(14) of the previous section, and utilizing Theorem 4. We construct the models below, specified by different ways in which rewards are received and/or costs are paid. We note that analogous results can be obtained for the penultimate solo payout model without the monotonicity restriction of Section 3 on the underlying reward processes. They are omitted for brevity.

**1.** The _Ultimate Solo Payout_ model (SP). In this model the controller aims to maximize the expected final reward from the bandit that halts the game, i.e., the value of a policy \(\pi\) is defined as, \[V_{\pi}^{SP}(\mathbb{X})=\mathbb{E}\left[X_{\pi}(\sigma_{\pi})|\mathcal{G}_{0}\right]=\sum_{i=1}^{N}\mathbb{E}\left[\mathbbm{1}_{\{i=\pi(\sigma_{\pi}-1)\}}X_{T_{\pi}^{i}(\sigma_{\pi})}^{i}|\mathcal{G}_{0}\right]. \tag{44}\]

**2.** The _Non-Halting Cost_ model (NH). In this model the controller _pays a cost_ based on the bandits that did not halt the game, and wishes to minimize this expected cost. The _halting cost_ of a policy \(\pi\) is \[V_{\pi}^{NH}(\mathbb{X})=\mathbb{E}\left[\sum_{i\neq\pi(\sigma_{\pi}-1)}X_{T_{\pi}^{i}(\sigma_{\pi})}^{i}|\mathcal{G}_{0}\right]=\sum_{i=1}^{N}\mathbb{E}\left[\mathbbm{1}_{\{i\neq\pi(\sigma_{\pi}-1)\}}X_{T_{\pi}^{i}(\sigma_{\pi})}^{i}|\mathcal{G}_{0}\right]. \tag{45}\]

**3.** The _Total Profit_ model (TP). In this model to each bandit \(i\) we associate a reward process \(\{R_{t}^{i}\}_{t\geqslant 0}\) and a cost process \(\{C_{t}^{i}\}_{t\geqslant 0}\). The controller gains a reward from the bandit that halts the game, and pays a cost for each bandit that does not halt. The controller wishes to maximize her expected total profit, i.e., the value of a policy \(\pi\) is now defined as, \[V_{\pi}^{TP}(\mathbb{R},\mathbb{C})=\sum_{i=1}^{N}\mathbb{E}\left[\mathbbm{1}_{\{i=\pi(\sigma_{\pi}-1)\}}R_{T_{\pi}^{i}(\sigma_{\pi})}^{i}-\mathbbm{1}_{\{i\neq\pi(\sigma_{\pi}-1)\}}C_{T_{\pi}^{i}(\sigma_{\pi})}^{i}|\mathcal{G}_{0}\right]. \tag{46}\]

**4.** The _Cumulative Collective Payout_ model (CCP) and the Gittins Index. In this model the controller gains a bandit's current reward each time that bandit is chosen to be activated. Bandits that are never activated give no rewards. The controller wishes to maximize her expected total payout, i.e., the value of a policy \(\pi\) is now defined as, \[V_{\pi}^{CCP}(\mathbb{X})=\sum_{i=1}^{N}\mathbb{E}\left[\sum_{t=0}^{T_{\pi}^{i}(\sigma_{\pi})-1}X_{t}^{i}|\mathcal{G}_{0}\right]. \tag{47}\]

Note, in the above expressions we take empty sums to be 0. For all these models, we will provide an index policy to maximize the corresponding value function as follows.

**1.** For the _SP_ model, define a collection of reward processes \(\mathbb{Z}=\{Z^{i}\}_{1\leqslant i\leqslant N}\) by for each \(i\), each \(t\geqslant 0\), \[Z_{t}^{i}=\mathbbm{1}_{\{\sigma^{i}=t\}}X_{t}^{i}. \tag{48}\] Notice that at round \(\sigma_{\pi}\), \(Z_{t}^{i}=0\) for all bandits that did not halt the game, and \(Z_{t}^{i}=X_{\sigma^{i}}^{i}\) for the bandit that did halt the game. Hence the collective payout under \(\mathbb{Z}\) is equal to the solo payout under \(\mathbb{X}\), \(V_{\pi}^{CP}(\mathbb{Z})=V_{\pi}^{SP}(\mathbb{X})\).
Applying Theorem 4, the optimal policy for the collective payout under \(\mathbb{Z}\) yields an optimal policy for the solo payout under \(\mathbb{X}\), and it is given by a policy that always activates bandits according to the maximum _solo payout index_: \[\rho_{SP}^{i}(t)=\operatorname*{ess\,sup}_{\tau\in\mathbb{P}^{i}(t)}\frac{ \mathbb{E}^{i}\left[\mathbbm{1}_{\{\tau\geqslant\sigma^{i}\}}X_{\sigma^{i}}^{ i}|\mathcal{F}^{i}(t)\right]}{\mathbb{P}^{i}\left(t<\sigma^{i}\leqslant\tau \big{|}\mathcal{F}^{i}(t)\right)}. \tag{49}\] It is interesting to observe that the policy based on the above index has a very natural interpretation, viewing the index as the maximal conditional expected payout of a bandit on its halting, i.e., the policy always activates the bandit with the largest potential payout - should it pay out. Additionally, comparing the above index to the optimal index for the collective payout model, it is clear that the collective payout index emphasizes the "change in reward on halting" of a single bandit, while the solo payout index emphasizes only the final reward of a single bandit on halting. This again highlights the correspondence between these two models, as discussed at the end of Section 4.4. **2.** We reduce the Non-Halting Cost model, to the collective payout model in the following way. Define a collection of reward processes \(\mathbb{Z}=\{Z^{i}\}_{1\leqslant i\leqslant N}\) by for each \(i\), each \(t\geqslant 0\), \[Z^{i}_{t}=-\mathbbm{1}_{\{\sigma^{i}\neq t\}}X^{i}_{t}. \tag{50}\] Notice that at round \(\sigma_{\pi}\), if bandit \(i\) was activated to halt the game (i.e., \(\pi(\sigma_{\pi}-1)=i\)), Eq.(50) implies that \(Z^{i}_{t}=0\) and \(Z^{j}_{t}=-X^{j}_{t}\), for \(j\neq i\). Hence, the collective payout under \(\mathbb{Z}\) is equal to the negative of the halting cost under \(\mathbb{X}\): \(V_{\pi}^{CP}(\mathbb{Z})=-V_{\pi}^{NH}(\mathbb{X})\); it follows that maximizing the collective payout under \(\mathbb{Z}\) minimizes the halting cost under \(\mathbb{X}\). Applying Theorem 4, the optimal policy for the collective payout under \(\mathbb{Z}\) yields an optimal policy for the non-halting cost model under \(\mathbb{X}\), and it is given by a policy that always activates bandits according to the minimum _non-halting cost index_: \[\rho_{NH}^{i}(t)=\operatorname*{ess\,sup}_{\tau\in\mathbb{P}^{i}(t)}\frac{ \mathbb{E}^{i}\left[\mathbbm{1}_{\{\sigma^{i}>\tau\}}X^{i}_{\tau}-X^{i}_{t} \big{|}\mathcal{F}^{i}(t)\right]}{\mathbb{P}^{i}\left(t<\sigma^{i}\leqslant \tau\big{|}\mathcal{F}^{i}(t)\right)}. \tag{51}\] **3.** For the _Total Profit_ model (TP) model, in order to provide an index policy to maximize its value function, we reduce it to the collective payout model in the following way. Define a collection of reward processes \(\mathbb{Z}=\{Z^{i}\}_{1\leqslant i\leqslant N}\) by for each \(i\), each \(t\geqslant 0\), \[Z^{i}_{t}=\mathbbm{1}_{\{\sigma^{i}=t\}}R^{i}_{t}-\mathbbm{1}_{\{\sigma^{i} \neq t\}}C^{i}_{t}. \tag{52}\] Notice that at round \(\sigma_{\pi}\), \(Z^{i}_{t}=-C^{i}_{t}\) for all bandits that did not halt the game, and \(Z^{i}_{t}=R^{i}_{t}\) for the bandit that did halt the game. Hence the collective payout under \(\mathbb{Z}\) is equal to the collective profit solo payout under \((\mathbb{R},\mathbb{C})\), \(V_{\pi}^{CP}(\mathbb{Z})=V_{\pi}^{TP}(\mathbb{R},\mathbb{C})\), cf. Eq.(14). 
Thus, as before, the optimal policy for the collective payout under \(\mathbb{Z}\) yields an optimal policy for the total profit under \((\mathbb{R},\mathbb{C})\), given by a policy that always activates bandits according to the maximum _total profit index_: \[\rho_{TP}^{i}(t)=\operatorname*{ess\,sup}_{\tau\in\mathbb{P}^{i}(t)}\frac{ \mathbb{E}^{i}\left[\mathbbm{1}_{\{\sigma^{i}\leqslant\tau\}}R^{i}_{\sigma^{i }}-\mathbbm{1}_{\{\sigma^{i}>\tau\}}C^{i}_{\tau}+C^{i}_{t}\big{|}\mathcal{F}^{ i}(t)\right]}{\mathbb{P}^{i}\left(t<\sigma^{i}\leqslant\tau\big{|}\mathcal{F}^{i}(t) \right)}. \tag{53}\] **4.** For the _Cumulative Collective Payout_ model (CCP). In this model the controller gains a bandit's current reward each time that bandit is chosen to be activated. Bandits that are never activated give no rewards. To provide an index policy to maximize this value function, we reduce it to the collective payout model, in the following way. Define a collection of reward processes \(\mathbb{Z}=\{Z^{i}\}_{1\leqslant i\leqslant N}\) by \[Z^{i}_{t}=\sum_{t^{\prime}=0}^{t-1}X^{i}_{t^{\prime}},\text{ for each $i$, each $t \geqslant 0$}. \tag{54}\] It follows easily that the collective payout model value under \(\mathbb{Z}\) is equal to the collective cumulative payout under \(\mathbb{X}\), i.e., \(V^{CP}_{\pi}(\mathbb{Z})=V^{CCP}_{\pi}(\mathbb{X})\). Thus, applying Theorem 4, the optimal policy for the collective payout under \(\mathbb{Z}\) yields an optimal policy for the collective cumulative payout under \(\mathbb{X}\), given by a policy that always activates bandits according to the maximum _collective cumulative payout index_: \[\rho^{i}_{CCP}(t)=\operatorname*{ess\,sup}_{\tau\in\mathbb{P}^{i}(t)}\frac{ \mathbb{E}^{i}\left[\sum_{t^{\prime}=t}^{\sigma^{i}\wedge\tau-1}X^{i}_{t^{ \prime}}\big{|}\mathcal{F}^{i}(t)\right]}{\mathbb{P}^{i}\left(t<\sigma^{i} \leqslant\tau\big{|}\mathcal{F}^{i}(t)\right)}. \tag{55}\] This extension of the collective payout model is interesting in its own right, because it allows us to readily recover and provide new simple proofs for the classic result of Gittins [9] and the recent results in Cowan and Katehakis [4]. Indeed, consider the case in which each time the controller activates a bandit, all future expected rewards are effectively discounted by a factor equal to the probability of that decision not halting the game. In the special case that each halting time \(\sigma^{i}>0\) is a geometric random variable with a constant parameter \(0<\beta<1\), independent of the reward processes \(\mathbb{X}\), i.e., \(\mathbb{P}^{i}(\sigma^{i}=t+1|\mathcal{F}^{i}(t))=1-\beta\). This results in every activation discounting all future rewards by a factor of \(\beta\). It is easy to see that \[V^{CCP}_{\pi}(\mathbb{X})=\sum_{i=1}^{N}\mathbb{E}\left[\sum_{t=0}^{T_{\pi}^{i }(\sigma_{\pi})-1}X^{i}_{t}\big{|}\mathcal{G}_{0}\right]=\mathbb{E}\left[\sum _{s=0}^{\infty}\beta^{s}X_{\pi}(s)\big{|}\mathcal{G}_{0}\right]. \tag{56}\] It follows from Eq. (56), that maximizing the \(V^{CCP}_{\pi}(\mathbb{X})\) under this model (with \(\mathbb{P}^{i}(\sigma^{i}=t+1|\mathcal{F}^{i}(t))=1-\beta\), for all \(t\) and all \(i\)) is then equivalent precisely the framework outlined by Gittins [9], i.e., total expected discounted reward of \(\mathbb{X}\) for a constant discount factor \(\beta\). 
In this case, the collective cumulative payout index reduces to \[\rho^{i}_{CCP}(t)=\operatorname*{ess\,sup}_{\tau\in\hat{\mathbb{P}}^{i}(t)} \frac{\mathbb{E}^{i}\left[\sum_{t^{\prime}=t}^{\tau-1}\beta^{t^{\prime}-t}X^{i }_{t^{\prime}}\big{|}\mathcal{F}^{i}(t)\right]}{\mathbb{E}^{i}\left[1-\beta^{ \tau-t}\big{|}\mathcal{F}^{i}(t)\right]}=\frac{1}{1-\beta}\operatorname*{ess\, sup}_{\tau\in\hat{\mathbb{P}}^{i}(t)}\frac{\mathbb{E}^{i}\left[\sum_{t^{ \prime}=t}^{\tau-1}\beta^{t^{\prime}}X^{i}_{t^{\prime}}\big{|}\mathcal{F}^{i}( t)\right]}{\mathbb{E}^{i}\left[\sum_{t^{\prime}=t}^{\tau-1}\beta^{t^{\prime}} \big{|}\mathcal{F}^{i}(t)\right]}, \tag{57}\] where the essential sup on the right is precisely the Gittins index for bandit \(i\). As \(1/(1-\beta)\) is a constant, positive factor, activating according to the maximal collective cumulative payout index and activating according to the maximal Gittins index result in equivalent, optimal policies. We also note that in this \(\rho^{i}_{CCP}(t)\) is the restart index cf. Katehakis and Veinott Jr [16] and the generalized index of Sonin [25]. The above is a well known interpretation of the Gittins index problem in terms of halting bandits, but its treatment herein provides a new interesting implication. In its classical form, it is not intuitively clear why the decision problem decomposes into indices that treat each bandit separately. However, framing it as a collective payout halting problem, we may make use of the previously described correspondence with the solo payout model. Reducing the Gittins model to a solo payout model, where in every period the controller wishes to realize the largest change in value of a single bandit on halting, provides additional insight into why the decomposition of the decision process into treating each bandit independently holds. We additionally note that the above arguments can be extended to generalized sequences of discount factors, for which \(\mathbb{P}^{i}(\sigma^{i}=t+1|\mathcal{F}^{i}(t))=1-\beta_{t}^{i}\), and thus recover the main results of Cowan and Katehakis [4]. ## 6 Proofs of auxiliary propositions We start with the following. **Proof of Proposition 1**. Without loss of generality, we may take \(t_{0}=0\). Recall the definition of \(\rho_{\pi}^{i},\rho^{i}\): \[\begin{split}\rho_{\pi}^{i}(t^{\prime},t^{\prime })&=\frac{\mathbb{E}\left[X_{\sigma^{i}\wedge t^{\prime\prime}} ^{i}-X_{t}^{i}\big{|}\mathcal{H}_{\pi}^{i}(t^{\prime})\right]}{\mathbb{P} \left(t^{\prime}<\sigma^{i}\leqslant t^{\prime\prime}\big{|}\mathcal{H}_{\pi }^{i}(t^{\prime})\right)},\\ \rho^{i}(t^{\prime},t^{\prime\prime})&=\frac{ \mathbb{E}^{i}\left[X_{\sigma^{i}\wedge t^{\prime\prime}}^{i}-X_{t^{\prime}}^ {i}\big{|}\mathcal{F}^{i}(t^{\prime})\right]}{\mathbb{P}^{i}\left(t^{\prime}< \sigma^{i}\leqslant t^{\prime\prime}\big{|}\mathcal{F}^{i}(t^{\prime})\right) }.\end{split} \tag{58}\] Letting \(R\) denote the R.H.S. of Eq. (23), observe (by the definition of the \(\operatorname{ess\,sup}\)) that for any \(\hat{\tau}\in\hat{\mathbb{F}}^{i}(0)\), \[\mathbb{E}\left[X_{\sigma^{i}\wedge\hat{\tau}}^{i}-X_{0}^{i}-R\mathds{1}\{0< \sigma^{i}\leqslant\hat{\tau}\}\big{|}\mathcal{F}^{i}(0)\right]\leqslant 0\;( \mathbb{P}\text{-a.e.}). \tag{59}\] To prove the proposition, it suffices to show that for any \(\hat{\tau}\in\hat{\mathbb{H}}_{\pi}^{i}(0)\), \[\mathbb{E}\left[X_{\sigma^{i}\wedge\hat{\tau}}^{i}-X_{0}^{i}-R\mathds{1}\{0< \sigma^{i}\leqslant\hat{\tau}\}\big{|}\mathcal{H}_{\pi}^{i}(0)\right]\leqslant 0 \;(\mathbb{P}\text{-a.e.}). 
\tag{60}\] For compactness of argument, we take \(N=2\) and \(i=1\), though the following argument generalizes to arbitrary bandits in the obvious way. For notational compactness, we define \(W_{t}^{i}=X_{\sigma^{i}\wedge t}^{i}-X_{0}^{i}-R\mathds{1}\{0<\sigma^{i} \leqslant t\}\). Note that for any set \(A\in\mathcal{H}_{\pi}^{1}(0)\), and any \(\tau\in\hat{\mathbb{H}}_{\pi}^{1}(0)\), \[\mathbb{E}\left[\mathds{1}_{A}\mathbb{E}\left[W_{\tau}^{1}\big{|}\mathcal{H}_ {\pi}^{1}(0)\right]\right]=\mathbb{E}\left[\mathds{1}_{A}W_{\tau}^{1}\right]. \tag{61}\] Taking \(A\) as a rectangle in \(\mathcal{H}_{\pi}^{1}(0)\), \(A=A_{1}\times A_{2}\), observe that \(A_{1}\in\mathcal{F}^{1}(0)\). The indicator may be decomposed as \(\mathds{1}_{A}(\omega)=\mathds{1}_{A_{1}}(\omega^{1})\mathds{1}_{A_{2}}(\omega ^{2})\). It follows as a result of the initial integrability assumptions on the bandits, Eqs. (1), (3), that we may exchange the expectation over the product space for an iterated expectation: \[\begin{split}\mathbb{E}\left[\mathds{1}_{A}W_{\tau}^{1}\right]& =\mathbb{E}^{2}\left[\mathbb{E}^{1}\left[\mathds{1}_{A_{1}} \mathds{1}_{A_{2}}W_{\tau}^{1}\right]\right]\\ &=\mathbb{E}^{2}\left[\mathds{1}_{A_{2}}\mathbb{E}^{1}\left[ \mathds{1}_{A_{1}}W_{\tau}^{1}\right]\right]\\ &=\mathbb{E}^{2}\left[\mathds{1}_{A_{2}}\mathbb{E}^{1}\left[ \mathds{1}_{A_{1}}\mathbb{E}^{1}\left[W_{\tau}^{1}\big{|}\mathcal{F}^{1}(0) \right]\right]\right].\end{split} \tag{62}\] Observe that, while \(\tau\) (begin an \(\mathbb{H}^{1}_{\pi}\)-stopping time) may have a dependence on \(\Omega^{2}\), inside the iterated integral with the dependence on \(\Omega^{2}\) fixed, it is an \(\mathbb{F}^{i}\)-stopping time. Hence, as an application of Eq. (59), we have the bound \[\mathbb{E}\left[\mathbbm{1}_{A}W^{1}_{\tau}\right]=\mathbb{E}^{2}\left[ \mathbbm{1}_{A_{2}}\mathbb{E}^{1}\left[\mathbbm{1}_{A_{1}}\mathbb{E}^{1}\left[ W^{1}_{\tau}\big{|}\mathscr{F}^{1}(0)\right]\right]\right]\leqslant\mathbb{E}^{2} \left[\mathbbm{1}_{A_{2}}\mathbb{E}^{1}\left[\mathbbm{1}_{A_{1}}0\right]\right]=0. \tag{63}\] Hence, for all rectangles \(A\in\mathscr{H}^{1}_{\pi}(0)\), \(\mathbb{E}\left[\mathbbm{1}_{A}\mathbb{E}\left[W^{1}_{\tau}\big{|}\mathscr{H}^ {1}_{\pi}(0)\right]\right]\leqslant 0\). This extends via the usual monotone-class type argument to _all_\(A\in\mathscr{H}^{1}_{\pi}(0)\). Hence, it follows that for all \(\tau\in\hat{\mathbb{H}}^{1}_{\pi}(0)\), \[\mathbb{E}\left[W^{1}_{\tau}\big{|}\mathscr{H}^{1}_{\pi}(0)\right]\leqslant 0 \ (\mathbb{P}\text{-a.e.}). \tag{64}\] This establishes the result. \(\square\) The proof of Proposition 3 below requires the following technical lemma. Its proof follows along the lines of the proofs of Theorems 4.1 - 4.3 in Snell [24], see also the Optimal Optional Stopping Lemma in Derman and Sacks [6]. **Lemma 1**: _In an arbitrary probability space with a filtration \(\mathbb{J}=\{\mathscr{J}_{t}\}_{t\geqslant 0}\), consider an adapted discrete-time process \(\{Z_{\tau}\}_{t\geqslant 0}\) such that \(\mathbb{E}\left[\sup_{\mathbb{N}}\left|Z_{t}\right|\big{|}\mathscr{J}_{0} \right]<\infty\). 
If the \(\mathbb{J}\)-stopping time \(\tau^{*}\in\hat{\mathbb{J}}(0)\) defined by_ \[\tau^{*}=\inf\{n>0:\;\operatorname*{ess\,sup}_{\tau\in\hat{\mathbb{J}}(n)} \mathbb{E}\left[Z_{\tau}\big{|}\mathscr{J}_{n}\right]\leqslant Z_{n}\} \tag{65}\] _is almost surely finite, then_ \[\mathbb{E}\left[Z_{\tau^{*}}\big{|}\mathscr{J}_{0}\right]=\;\operatorname*{ ess\,sup}_{\tau\in\hat{\mathbb{J}}(0)}\mathbb{E}\left[Z_{\tau}\big{|}\mathscr{J}_{0}\right]\ (\mathbb{P}\text{-a.e.}). \tag{66}\] **Proof of Proposition 3**. Recall that we need to show that for any time \(t_{0}<\sigma^{i}\), there exists a \(\tau\in\hat{\mathbb{F}}^{i}(t_{0})\) such that \(\rho^{i}(t_{0})=\rho^{i}(t_{0},\tau)\ (\mathbb{P}^{i}\text{-a.e.})\). We have that for all \(\hat{\tau}\in\hat{\mathbb{F}}^{i}(t_{0})\), \(\rho^{i}(t_{0},\hat{\tau})\leqslant\rho^{i}(t_{0})\) (\(\mathbb{P}^{i}\text{-a.e.}\)). Taking \[\mathbb{P}^{i}(t_{0}<\sigma^{i}\leqslant\hat{\tau}\big{|}\mathscr{F}^{i}(t_{0 }))=\mathbb{E}^{i}\left[\mathbbm{1}_{\{t_{0}<\sigma^{i}\leqslant\hat{\tau}\}} \big{|}\mathscr{F}^{i}(t_{0})\right],\] we have in parallel with Eq. (24), \[\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\hat{\tau}}-X^{i}_{t_{0}}-\rho^{i} (t_{0})\mathbbm{1}_{\{t_{0}<\sigma^{i}\leqslant\hat{\tau}\}}\big{|}\mathscr{F}^ {i}(t_{0})\right]\leqslant 0\ (\mathbb{P}^{i}\text{-a.e.}). \tag{67}\] Defining \[\varepsilon=-\operatorname*{ess\,sup}_{\hat{\tau}\in\hat{\mathbb{P}}^{i}(t_{0 })}\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\hat{\tau}}-X^{i}_{t_{0}}-\rho^ {i}(t_{0})\mathbbm{1}_{\{t_{0}<\sigma^{i}\leqslant\hat{\tau}\}}\big{|} \mathscr{F}^{i}(t_{0})\right], \tag{68}\] we have that \(\varepsilon\geqslant 0\) (\(\mathbb{P}^{i}\)-a.e.). We may use \(-\varepsilon\) as an improved upper bound in Eq. (67). This may be rearranged to yield \[\rho^{i}(t_{0},\hat{\tau})\leqslant\rho^{i}(t_{0})-\frac{\varepsilon}{\mathbb{ E}^{i}\left[\mathbbm{1}_{\{t_{0}<\sigma^{i}\leqslant\hat{\tau}\}}\big{|} \mathscr{F}^{i}(t_{0})\right]}\leqslant\rho^{i}(t_{0})-\varepsilon\ (\mathbb{P}^{i}\text{-a.e.}). \tag{69}\] Since the above property holds for all such \(\hat{\tau}\), it extends to the essential supremum, yielding \[\rho^{i}(t_{0})\leqslant\rho^{i}(t_{0})-\varepsilon\ (\mathbb{P}^{i}\text{-a.e.}), \tag{70}\] or equivalently that \(\varepsilon\leqslant 0\) (\(\mathbb{P}^{i}\)-a.e.). In conjunction with the first observation, that \(\varepsilon\geqslant 0\) (\(\mathbb{P}^{i}\)-a.e.), we have \(\varepsilon=0\) (\(\mathbb{P}^{i}\)-a.e.), i.e., \[\operatorname*{ess\ sup}_{\hat{\tau}\in\hat{\mathbb{P}}^{i}(t_{0})}\mathbb{E}^ {i}\left[X^{i}_{\sigma^{i}\wedge\hat{\tau}}-X^{i}_{t_{0}}-\rho^{i}(t_{0}) \mathbbm{1}_{\{t_{0}<\sigma^{i}\leqslant\tau\}}\left|\mathcal{F}^{i}(t_{0}) \right.\right]=0\ (\mathbb{P}^{i}\text{-a.e.}). \tag{71}\] Define \(Z^{i}_{t}=X^{i}_{\sigma^{i}\wedge\tau}-X^{i}_{t_{0}}-\rho^{i}(t_{0})\mathbbm{1 }_{\{t_{0}<\sigma^{i}\leqslant\tau\}}\). Note that the integrability condition of Lemma 1 is satisfied due to Eq. (1). For \(t\geqslant\sigma^{i}\), \(Z^{i}_{t}\) is constant, hence \(\tau^{*}\leqslant\sigma^{i}<\infty\) almost surely. 
Hence we may apply Lemma 1 here to yield a stopping time \(\tau^{*}\in\widehat{\mathbb{F}}^{i}(t_{0})\) such that \[\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau^{*}}-X^{i}_{t_{0}}-\rho^{i}(t _{0})\mathbbm{1}_{\{t_{0}<\sigma^{i}\leqslant\tau^{*}\}}\left|\mathcal{F}^{i} (t_{0})\right.\right]=0\ (\mathbb{P}^{i}\text{-a.e.}), \tag{72}\] or \[\rho^{i}(t_{0})=\frac{\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau^{*}}-X^ {i}_{t_{0}}|\mathcal{F}^{i}(t_{0})\right]}{\mathbb{P}^{i}\left(t_{0}<\sigma^{ i}\leqslant\tau^{*}\left|\mathcal{F}^{i}(t_{0})\right.\right)}=\rho^{i}(t_{0}, \tau^{*})\ (\mathbb{P}^{i}\text{-a.e.}). \tag{73}\] Hence, the solo-payout index \(\rho^{i}(t_{0})\) is realized (\(\mathbb{P}^{i}\)-a.e.) for some \(\mathbb{P}^{i}\)-stopping time \(\tau^{*}>t_{0}\). **Proof of Proposition 4**. For \(k>0\), let \(\tau^{i}_{k}<\sigma^{i}\), and therefore \(\tau^{i}_{k-1}<\sigma^{i}\). Defining \[Z^{i}_{t}=X^{i}_{\sigma^{i}\wedge t}-X^{i}_{\tau^{i}_{k-1}}-\rho^{i}(\tau^{i}_ {k-1})\mathbbm{1}_{\{\tau^{i}_{k-1}<\sigma^{i}\leqslant\tau\}},\] note that for \(t>\tau^{i}_{k}\): \(Z^{i}_{t}-Z^{i}_{\tau^{i}_{k}}=X^{i}_{\sigma^{i}\wedge t}-X^{i}_{\tau^{i}_{k}} -\rho^{i}(\tau^{i}_{k-1})\mathbbm{1}_{\{\tau^{i}_{k}<\sigma^{i}\leqslant\tau\}}\). It follows from the proof of Proposition 3 that the solo-payout index from time \(\tau^{i}_{k-1}\) is realized by a \(\tau^{i}_{k}\) such that \[\operatorname*{ess\ sup}_{\tau^{\prime}\in\hat{\mathbb{P}}^{i}(\tau^{i}_{k})} \mathbb{E}^{i}\left[Z^{i}_{\tau^{\prime}}\middle|\mathcal{F}^{i}(\tau^{i}_{k} )\right]\leqslant Z^{i}_{\tau^{i}_{k}}\ (\mathbb{P}^{i}\text{-a.e.}), \tag{74}\] or \[\operatorname*{ess\ sup}_{\tau^{\prime}\in\hat{\mathbb{P}}^{i}(\tau^{i}_{k})} \mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau^{\prime}}-X^{i}_{\tau^{i}_{k}} -\rho^{i}(\tau^{i}_{k-1})\mathbbm{1}_{\{\tau^{i}_{k}<\sigma^{i}\leqslant\tau^{ \prime}\}}\left|\mathcal{F}^{i}(\tau^{i}_{k})\right.\right]\leqslant 0\ (\mathbb{P}^{i}\text{-a.e.}). \tag{75}\] From the above, for any \(\tau^{\prime}\in\widehat{\mathbb{F}}^{i}(\tau^{i}_{k})\), we have \[\frac{\mathbb{E}^{i}\left[X^{i}_{\sigma^{i}\wedge\tau^{\prime}}-X^{i}_{\tau^{ i}_{k}}\middle|\mathcal{F}^{i}(\tau^{i}_{k})\right]}{\mathbb{P}^{i}\left(\tau^{i}_{k} <\sigma^{i}\leqslant\tau^{\prime}\middle|\mathcal{F}^{i}(\tau^{i}_{k})\right)} \leqslant\rho^{i}(\tau^{i}_{k-1})\ (\mathbb{P}^{i}\text{-a.e.}). \tag{76}\] Taking the essential supremum over such \(\tau^{\prime}\) establishes that \(\rho^{i}(\tau^{i}_{k})\leqslant\rho^{i}(\tau^{i}_{k-1})\), \(\ (\mathbb{P}^{i}\text{-a.e.})\). **Acknowledgements.** We acknowledge support for this work from the National Science Foundation, NSF grants: CMMI-1662629 and CMMI-1662442.
2310.03587
Designer quantum reflection from a micropore
We expand the theoretical toolbox for controllable quantum reflection by departing from a simple planar reflector. We introduce a circular hole (a micropore) of variable size, for which the electrostatic image potential can be exactly calculated. We combine this with two-dimensional simulations of wavepacket propagation at arbitrary angle of incidence to show that the quantum reflection probability can be tuned over a wide range of values.
Romuald Kilianski, Robert Bennett
2023-10-05T15:11:47Z
http://arxiv.org/abs/2310.03587v1
# Designer quantum reflection from a micropore ###### Abstract We expand the theoretical toolbox for controllable quantum reflection by departing from a simple planar reflector. We introduce a circular hole (a micropore) of variable size, for which the electrostatic image potential can be exactly calculated. We combine this with two-dimensional simulations of wavepacket propagation at arbitrary angle of incidence to show that the quantum reflection probability can be tuned over a wide range of values. ## I Introduction Quantum reflection is a counter-intuitive effect in which an attractive surface-atom potential exhibits a repulsive behaviour towards an incoming matter-wave, i.e., the reflection occurs despite the absence of a classical turning point [1]. In other words, an atom that is accelerated towards a surface has a non-zero chance of reflection before coming into contact with it. This quintessentially wave behaviour is familiar from classical theories, i.e., describing wave propagation in inhomogeneous media [2], but it is its quantum realisation that has been the focus of research in recent decades. Quantum reflection has been a widely studied phenomenon since the inception of quantum mechanics and was first placed into the realm of atom-surface interactions by Jones and Devonshire [3]. In this context, the effect translates to an interference of probability waves; the unexpected outcome is that the reflection occurs at a threshold distance away from the surface. This effect was neatly demonstrated in a recent work [4], where a helium dimer was non-destructively reflected before it could reach the surface at which the potential would have been strong enough to dissociate its weak bond. A wealth of literature exists on the theoretical treatment of quantum reflection [5; 6; 7; 8; 9], some recent publications were concerned with reflection of atoms from rough surfaces [10; 11], some with antimatter (antihydrogen) reflecting off nanoporous materials [12] and liquid helium [13], while others probed ultracold molecular collisions [14], and the effect of reflection in a cold Rydberg atomic gas with the use of single photons [15], and solitons [16]. Since the seminal paper by Shimizu [17], experimental realisations of quantum reflection from a solid surface have become abundant [18; 19]. Some of the recent works include using quantum reflection to trap atoms in optical potentials [20], the reflection of Bose-Einstein condensates [21; 22], metrological applications via observation of diffraction orders [23] and tests of quantum vacuum [24]. The diverse range of phenomena that can be probed via quantum reflection, makes it an exciting and versatile tool in atomic physics. The electromagnetic forces playing the key role in many realisations of quantum reflection belong to the class of phenomena collectively known as dispersion forces [25]. They arise as a result of the field fluctuations between two objects that do not possess a permanent electric or magnetic dipole moment. Amongst them are interatomic van der Waals forces (vdW), initially proposed by London [26], and interactions between larger bodies, introduced by Casimir [27] and Lifshitz [28]. A third, mixed case describes the force between an atom or a molecule and a macroscopic object. It was first developed in the electrostatic regime by Lennard-Jones [29], and then extended to the retarded distances by Casimir and Polder (CP) [30]. 
Different naming conventions for the specific dispersion forces exist in the literature, but all are on a fundamental level expressions of the fluctuations of the vacuum, as described by quantum electrodynamics (QED). We will follow the convention adopted in [25], and refer to atom-body interactions -- that are of central importance in this work -- as CP forces1. Moreover, different distance regimes impose limits on applicability of different theoretical descriptions of dispersion forces. Due to the finite speed of light, the information exchanged on scales much larger than the transition wavelength of an atom will suffer a phase delay, thus experiencing retardation effects. In this paper, we consider the short, non-retarded regime -- this is the domain of applicability of the electrostatic potential we will use. Footnote 1: Some authors refer to any far-field dispersion interaction involving at least one atom as Casimir-Polder, while others use the convention employed here where Casimir-Polder refers to the interaction (at any distance) between an atom and macroscopic body. Control of dispersion forces therefore allows control of quantum reflection. Research so far in this direction has mainly been centred around investigating the versatility of graphene as a material, enabling the control of the atom-surface potential. The works in [31; 32; 33] explore the use of a magnetic field to alter the properties of a sheet of graphene, effectively changing the CP interaction potential. Some other investigations in the graphene-based systems include carrier doping [34], the application of mechanical strain [35], and plane stacking arrangements [36]. Moreover, various experimental applications have already been realised [37; 38; 39], paving the way for future uses in nanotechnology. Other materials in the graphene family have also been probed in the context of quantum reflection and its connection with topological phase transitions, stimulated by electric fields [40]. The application of electric fields has equally been successful in controlling quantum reflection in silicon gratings-- outside of the realm of graphene-- as experimentally shown in [41]. In this work we choose to investigate the paradigmatic case of a perfectly reflecting plate, but with a twist. We introduce a circular hole (a micropore) to our metallic plate. By changing the diameter of the hole, we are able to reduce the potential gradient in its immediate neighbourhood, and in turn, gain control of the strength of reflection of, for example, a matter-wave passing through. Despite the presence of the micropore, which one might expect to allow full transmission of a highly-collimated matter-wave, the atom continues to reverse its motion at the threshold, albeit at a reduced rate relative to no hole. This apparent "anti-tunnelling" event, where in the region of absence of a material surface one would expect the atom to propagate through, renders a curious addition to an already counter-intuitive quantum phenomenon. After incorporating the variable hole diameter, we extend the space of control parameters by including the angle of incidence and test their impact on the reflectivity. Due to the non-separability of the potential, we numerically solve the time dependent Schrodinger equation (TDSE), for \({}^{3}\)He and Na, both modelled as a Gaussian pulse propagating towards a plate with a hole. This paper is organised as follows, we first discuss the conditions for the quantum reflection, considering a single degree of freedom. 
Secondly, in section II we present the exact, two-dimensional, non-separable potential for a perfectly reflecting plate with a hole from [42]. We then modify its domain to enable numerical simulations of quantum reflection by solving the TDSE using a spectral, split-step method. We then proceed to present the results in Section III, showcasing the dependence of the reflectivity upon the hole diameter and angle of incidence. In the appendices we validate our algorithm for the case of normal incidence and examine the influence of the grid size on convergence.

## II Procedure

### Quantum reflection in 1D and 2D

Quantum reflection has been studied overwhelmingly as a one-dimensional problem. The atom-surface forces between matter and a regular macroscopic object depend on the normal distance between them, resulting in consideration of only a single degree of freedom. The conditions for quantum reflection in 1D are determined by the properties of the travelling matter-wave. If \(U(x)\) is an arbitrary potential varying in the \(x\) direction, \(m\) is the particle's mass, and \(k_{0}\) is its \(k\)-vector at \(x\to\infty\) (\(U\to 0\)), the local wave vector of the particle \(k\), \[k=\sqrt{k_{0}^{2}-2mU(x)/\hbar^{2}}, \tag{1}\] is required to change abruptly on the scale of its de Broglie wavelength \(\lambda_{dB}\) for quantum reflection to occur [17]. This significant change can be mediated by an interaction potential \(U\) that grows rapidly as the atom approaches the surface. CP forces with their \(1/r^{3}\) or \(1/r^{4}\) dependence are therefore ideal for inducing quantum reflection. The majority of theoretical investigations into quantum reflection elude a fully analytical treatment and rely on semiclassical approaches such as the WKB approximation [43; 44]; however, these are not applicable in higher dimensions for non-separable potentials [45]. Until recently [45], all efforts had been confined to the one-dimensional case, in which a time-independent Schrodinger equation (TISE) is solved for a given potential, and the reflectivity is obtained as a ratio of amplitudes of counter-propagating waves. Quite understandably, a reliable method of solving a two-dimensional quantum reflection problem is a recent occurrence due to the computationally expensive nature of such a setup. Inspired by the aforementioned work by Galiffi et al. [45], we apply a time-dependent approach to solve a pulse propagation problem in the vicinity of a perfectly reflecting plate with a hole.

Figure 1: A schematic illustration of the setup. A pulse travels towards the plate at an incidence angle \(\theta\), being influenced by an attractive potential \(U(\rho,z;d)\). The potential was originally derived in cylindrical coordinates but we discard the \(\phi\)-dependent component due to the uniaxial invariance and treat \(\rho\) and \(z\) as Cartesian coordinates.

### Potential function

The geometry we have chosen is a smooth metal plate with a hole in its centre, as shown in Fig. 1. The exact electrostatic potential for this situation was calculated by Eberlein and Zietal [42] by means of a Kelvin transform [46]. Defined in cylindrical coordinates, \(U(\rho,\phi,z)\), and for a hole diameter \(d\), their result can be written as \[U(\rho,z;d)=-\frac{1}{16\pi^{2}\varepsilon_{0}}(\Xi_{\rho}\left<\mu_{\rho}^{2}\right>+\Xi_{\phi}\left<\mu_{\phi}^{2}\right>+\Xi_{z}\left<\mu_{z}^{2}\right>).
\tag{2}\] The \(\left<\mu_{i}^{2}\right>\) are the expectation values of the \(i\)-th cylindrical component of the dipole moment operator, and the coefficients \(\Xi_{i}\) are: \[\Xi_{\rho} =\frac{d\rho^{2}}{R_{+}^{5}R_{-}^{5}}\left(P^{2}-d^{2}z^{2}\right) +\frac{d^{3}}{6R_{+}^{3}R_{-}^{3}}+\frac{1}{4z^{3}}\left[\frac{\pi}{2}+\arctan \left(\frac{P}{dz}\right)+\frac{dz}{R_{+}^{4}R_{-}^{4}}Q_{-}^{2}P\right], \tag{3}\] \[\Xi_{\phi} =\frac{d^{3}}{6R_{+}^{3}R_{-}^{3}}+\frac{1}{4z^{3}}\left[\frac{ \pi}{2}+\arctan\left(\frac{P}{dz}\right)+\frac{dz}{R_{+}^{2}R_{-}^{2}}P\right],\] (4) \[\Xi_{z} =\frac{d}{R_{+}^{5}R_{-}^{5}}\left(z^{2}Q_{+}^{2}-\frac{d^{2}}{4 }Q_{-}^{2}\right)+\frac{d^{3}}{6R_{+}^{3}R_{-}^{3}}+\frac{1}{2z^{3}}\left[ \frac{\pi}{2}+\arctan\left(\frac{P}{dz}\right)+\frac{dz}{R_{+}^{2}R_{-}^{2}}Q _{-}+\frac{2d\rho^{2}z^{3}}{R_{+}^{4}R_{-}^{4}}P\right], \tag{5}\] where we used the shorthand notations: \[P =\rho^{2}+z^{2}-\frac{d^{2}}{4} \tag{6}\] \[Q_{\pm} =\rho^{2}\pm z^{2}\pm\frac{d^{2}}{4}\] (7) \[R_{\pm} =\left[\left(\rho\pm\frac{d}{2}\right)^{2}+z^{2}\right]^{1/2}. \tag{8}\] For the hole diameter \(d\) approaching zero, \(U(\rho,z;d)\) reduces to a form \(\propto z^{-3}\)-- a potential varying only in one direction, thus reducing to the familiar 1D form. Equation (2) describes the energy shift of an atom with an arbitrarily oriented dipole; in our case, we choose the dipole to always be pointing in the direction of the atom's motion. We therefore parametrise the dipole moment as \[\mathbf{\mu_{\rho}} =(\mu_{x}\cos\phi+\mu_{y}\sin\phi)\hat{\rho} \tag{9}\] \[\mathbf{\mu_{\phi}} =(-\mu_{x}\sin\phi+\mu_{y}\cos\phi)\hat{\phi}\] (10) \[\mathbf{\mu_{z}} =\mu_{z}\hat{z}, \tag{11}\] where \(\hat{\rho},\hat{\phi},\hat{z}\) are the usual cylindrical unit vectors and \(\mathbf{\mu}=(\mu_{x},\mu_{y},\mu_{z})\) is the dipole moment vector in Cartesian coordinates. By choosing a plane of motion where \(\mu_{y}=0\) and \(\mu_{x}>0\), we notice that \(\phi=\arctan(y/x)=0\). Now, by defining an angle \(\theta=\arctan(x/z)\), we can write the remaining dipole components as \(\mu_{x}=|\mathbf{\mu}|\sin\theta\) and \(\mu_{z}=|\mathbf{\mu}|\cos\theta\). This allows us to write the energy shift \(V(\rho,z;\theta,d)\) as \[V(\rho,z;\theta,d) =-\frac{1}{16\pi^{2}\varepsilon_{0}}(\Xi_{\rho}\left<|\mathbf{\mu}|^{ 2}\right>\sin^{2}\theta+\Xi_{z}\left<|\mathbf{\mu}|^{2}\right>\cos^{2}\theta)\] \[=-\frac{C_{3}}{\pi}(\Xi_{\rho}\sin^{2}\theta+\Xi_{z}\cos^{2} \theta), \tag{12}\] where \(C_{3}=\left<|\mathbf{\mu}|^{2}\right>/16\pi\varepsilon_{0}\), with \(\left<|\mathbf{\mu}|^{2}\right>\) the square of expectation value of the dipole moment, and following [47; 45], we set \(C_{3}=4.0\times 10^{-50}\)J, which describes the interaction between \({}^{3}\)He and a Au plate. By fixing the plane of propagation, \(\rho\) and \(z\) effectively become Cartesian coordinates, but for the sake of clarity and continuity we retain the cylindrical labels. The fixed, relative relationship between \(\Xi_{z}\) and \(\Xi_{\rho}\) components, such that the atom's dipole always points in the direction of motion is described by the angle of incidence \(\theta\), and is schematically shown in Fig. 1. ### Extended potential Any experiment aiming to measure quantum reflection needs to come up with a way of isolating it from the classical reflections induced by the short-range repulsion very close to the surface (for example using the bond dissociation technique in [4]). 
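For reference, the coefficients of Eqs. (3)-(8) and the energy shift of Eq. (12) translate directly into code. The sketch below is our own transcription of those formulas (it is not part of the paper's numerics); it assumes SI units with \(\rho\), \(z\) and \(d\) in metres and \(d,z>0\), and makes no attempt to regularize the singular behaviour at the plate.

```python
import numpy as np

C3 = 4.0e-50  # J m^3, the value quoted above for 3He near a Au plate


def xi_coefficients(rho, z, d):
    """Coefficients Xi_rho, Xi_phi, Xi_z of Eqs. (3)-(5), written with the
    shorthands P, Q_plus/Q_minus and R_plus/R_minus of Eqs. (6)-(8).
    Assumes d > 0 and z > 0 (the arctan argument P/(d z) is singular otherwise)."""
    P = rho**2 + z**2 - d**2 / 4
    Q_plus = rho**2 + z**2 + d**2 / 4
    Q_minus = rho**2 - z**2 - d**2 / 4
    R_plus = np.sqrt((rho + d / 2)**2 + z**2)
    R_minus = np.sqrt((rho - d / 2)**2 + z**2)
    bracket = np.pi / 2 + np.arctan(P / (d * z))

    xi_rho = (d * rho**2 / (R_plus**5 * R_minus**5) * (P**2 - d**2 * z**2)
              + d**3 / (6 * R_plus**3 * R_minus**3)
              + (bracket + d * z / (R_plus**4 * R_minus**4) * Q_minus**2 * P) / (4 * z**3))
    xi_phi = (d**3 / (6 * R_plus**3 * R_minus**3)
              + (bracket + d * z / (R_plus**2 * R_minus**2) * P) / (4 * z**3))
    xi_z = (d / (R_plus**5 * R_minus**5) * (z**2 * Q_plus**2 - d**2 / 4 * Q_minus**2)
            + d**3 / (6 * R_plus**3 * R_minus**3)
            + (bracket + d * z / (R_plus**2 * R_minus**2) * Q_minus
               + 2 * d * rho**2 * z**3 / (R_plus**4 * R_minus**4) * P) / (2 * z**3))
    return xi_rho, xi_phi, xi_z


def energy_shift(rho, z, theta, d, c3=C3):
    """Energy shift V(rho, z; theta, d) of Eq. (12) for a dipole aligned with the
    direction of motion, parametrised by the incidence angle theta."""
    xi_rho, _, xi_z = xi_coefficients(rho, z, d)
    return -c3 / np.pi * (xi_rho * np.sin(theta)**2 + xi_z * np.cos(theta)**2)
```

Evaluating `energy_shift` on a \((\rho,z)\) grid gives the attractive potential whose regularization near \(z=0\) is described next.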
In our numerics we isolate the quantum reflection by simply not including the short-range repulsion (which would be implemented by letting \(V\rightarrow\infty\) at the plate), and considering only the effects of the potential \(V(\rho,z;\theta,d)\) as shown. This means that any reflections that occur are necessarily quantum in nature. To implement this we split our computational domain into two halves, with the plate envisaged as being in the middle at \(z=0\). On the right hand side, the potential is \(V(\rho,z;\theta,d)\), while on the left hand side we artificially continue the potential to the edge of the domain in the way we shall explain shortly. Since the potential \(V(\rho,z;\theta,d)\) experiences an unphysical singularity at \(z=0\), we choose a small enough distance \(\epsilon\) as a cut-off point (as was implemented, for example, in [48]). This length needs to be sufficiently small so that the resulting potential still reaches close enough to the surface to be relevant to electrostatic interactions; varying \(\epsilon\) impacts the reflectivity, and this is discussed in the appendix, where we test different lengths \(\epsilon\). We now proceed to define a new piecewise potential function \(V_{C}\) as \[V_{C}=\begin{cases}V(\rho,z;\theta,d)&z>\epsilon\\ -\dfrac{3V_{0}}{2\epsilon^{2}}z^{2}+\dfrac{5V_{0}}{2}&0\leq z\leq\epsilon\text{ and }|\rho|<\dfrac{d}{2}\\ \dfrac{5V_{0}}{2}&z<0\text{ and }|\rho|<\dfrac{d}{2}\\ V_{<}&\text{otherwise },\end{cases} \tag{13}\] where \(V_{0}\equiv V(0,\epsilon;\theta,0)\) and \(V_{<}\equiv V(0,\epsilon;\theta,d)\). The extended potential in the region \(0\leq z\leq\epsilon\) is essentially a function of \(z\) only. The change of the potential's landscape in the \(\rho\) direction induced by the introduction of the hole is symmetrical, and significant only at \(z\) near \(\epsilon\). We thus create a gap in the continued part of the potential in the positive and negative \(\rho\) direction, at \(z=\epsilon\), to account for the vanishing potential gradient at the hole's centre. For \(z<\epsilon\) and beyond the gap, \(V_{C}\) is invariant in \(\rho\); for a wave packet travelling in the \(\rho\) direction, such an abrupt change in the \(\rho\) direction will have an effect on its motion. However, this is inconsequential for our purposes as it occurs in the continued part of the potential and does not influence the reflection in the normal direction. We plot the regularized potential at a \(\rho=0\) slice for different diameters \(d\) in Fig. 3. Figure 3: The extended potential \(V_{C}\) at \(\rho=0\) for three different hole diameters \(d\). The red dashed line shows the cut-off point \(\epsilon\). The larger the hole diameter, the flatter the potential gradient becomes near the centre. ### Evolution of the system We aim to solve a dimensionless time-dependent Schrodinger equation \[-\frac{1}{2}\nabla^{2}\Psi(\mathbf{r},t)+V(\mathbf{r})\Psi(\mathbf{r},t)=i\partial_{t}\Psi(\mathbf{r},t). \tag{14}\] We solve Eq. (14) by taking advantage of an open source library [49], a solver utilising the split-step Fourier technique, also known as the Beam Propagation Method (BPM) [50]. Determined by the natural units and the choice of length scale \(L=1\mu\)m, the energy unit in which the system in Eq. (14) is solved is \(\hbar^{2}/mL^{2}\), where \(m\) is the actual mass of the atom in SI units. We adapt the source code of [49] to include our extended potential function \(V_{C}\), and solve the TDSE for a range of chosen angles of incidence \(\theta\) and diameters \(d\). In Table 1, we specify the simulation-specific parameters for \({}^{3}\)He and Na, to which we refer throughout the text.
Figure 2: On the left, potential \(V\) for an atom of \({}^{3}\)He, continued beyond the \(z=\epsilon\) distance through to the negative \(z\) values where it reaches a constant value. On the right, the behaviour of the original potential \(V\) from the cut-off point at \(z=\epsilon\). The unit \(\hbar^{2}/mL^{2}=0.014\) neV. At \(t=0\), we define \(\Psi(\mathbf{r}_{0},0)\) to be a Gaussian with \(\sigma_{z}=\sigma_{\rho}=1\mu\)m, situated at the location \(\mathbf{r}_{0}=\{r\cos\theta,r\sin\theta\}\), where \(\theta\) is the angle of incidence and \(r\) was chosen to be \(4\mu\)m. We impart on the pulse an initial momentum \(p_{0}=\sqrt{2mE_{0}}\), where \(m\) is the mass of \({}^{3}\)He = 3.016 amu (Na = 22.99 amu), and \(E_{0}\) is the kinetic energy. The computational domain is surrounded by an absorbing boundary, where the solver makes \(V\) imaginary. Additionally, periodic boundary conditions are enforced, and any pulse that "leaks" through the absorbing medium reappears at the starting point. We can stop that from happening if we choose a "sensible" stopping point; an appropriate duration of propagation turns out to be \(t_{f}=0.21\). The resulting \(\Psi(\mathbf{r},t_{f})\) contains the information about the spread of the pulse at time \(t_{f}\). To extract the information about the reflected part of the pulse, we simply integrate the normalised squared amplitude of the wave function along the \(\rho\) axis and over positive \(z\). This way, we find the proportion of the pulse travelling in the positive \(z\) direction at \(t_{f}\), which we call the reflectivity \(R(t_{f})\), \[R(t_{f})=\int_{-\infty}^{\infty}\mathrm{d}\rho\int_{0}^{\infty}\mathrm{d}z\ |\Psi(\mathbf{r},t_{f})|^{2}. \tag{15}\] An alternative, very similar treatment is to Fourier transform \(\Psi(\mathbf{r},t_{f})\) and integrate along the momentum in the \(\rho\) direction and the positive momentum in \(z\); the reader can follow [45; 48] for further details. We found this technique to produce almost identical results, with the exception of cases where a pulse travels at grazing angles of incidence, causing the positive momentum to be poorly defined. We notice that the addition of the hole significantly flattens the gradient of the potential in the region corresponding to its diameter across the whole domain, as seen in the right panel of Fig. 2. This serves as a basis for expecting suppressed reflectivity across those regions. ## III Results and analysis Since we have based our investigations on using an electrostatic potential, we need to confirm that the reflection is happening at distances appropriate to the short-range, non-retarded CP interaction. The electrostatic regime is usually accessible through high kinetic energies where the particle's speed \(v\approx 300\) m s\({}^{-1}\), as shown experimentally in, for example, [19]. We can also consider non-retarded distances in the case of lower energies (\(v\approx 2\) m s\({}^{-1}\)) by balancing out other parameters, i.e., choosing the \(C_{3}\) coefficient (corresponding to a different atom for a case of a perfect reflector, or a combination of an atom and a surface for a more general treatment) to be sufficiently smaller than in the high-energy case.
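For concreteness, the propagation and read-out described in Sec. II D can be sketched as follows. This is a minimal stand-alone illustration, written by us rather than taken from the adapted solver of [49], of a split-step (BPM) update for the dimensionless TDSE (14) and of the reflectivity integral (15); grid size, time step, the sampled potential and the absorbing layer are left to the caller, and the pulse momentum is \(p_{0}=\sqrt{2E_{0}}\) in the natural units of Table 1.

```python
import numpy as np


def gaussian_pulse(z, rho, r0=4.0, sigma=1.0, p0=1.0, theta=0.0):
    """Initial Gaussian of width sigma (units of L = 1 um), centred a distance r0
    from the plate along the direction set by the incidence angle theta, and
    boosted towards the plate (negative z) with momentum p0 = sqrt(2 E0)."""
    Z, RHO = np.meshgrid(z, rho, indexing="ij")
    z0, rho0 = r0 * np.cos(theta), r0 * np.sin(theta)
    envelope = np.exp(-((Z - z0)**2 + (RHO - rho0)**2) / (2 * sigma**2))
    phase = np.exp(1j * p0 * (-np.cos(theta) * Z + np.sin(theta) * RHO))
    psi = envelope * phase
    return psi / np.sqrt(np.sum(np.abs(psi)**2))  # discrete normalisation


def split_step_evolve(psi, V, dx, dt, n_steps):
    """Strang-split (BPM) evolution of the dimensionless TDSE (14): half potential
    step in real space, full kinetic step in k-space, half potential step again.
    An absorbing layer can be added by giving V a negative imaginary part near
    the domain edges."""
    N = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    kz, krho = np.meshgrid(k, k, indexing="ij")
    kinetic = np.exp(-0.5j * dt * (kz**2 + krho**2))
    half_pot = np.exp(-0.5j * dt * V)
    for _ in range(n_steps):
        psi = half_pot * psi
        psi = np.fft.ifft2(kinetic * np.fft.fft2(psi))
        psi = half_pot * psi
    return psi


def reflectivity(psi, z):
    """Eq. (15): the fraction of the (normalised) probability found at z > 0."""
    prob = np.abs(psi)**2
    prob /= prob.sum()
    return prob[z > 0, :].sum()
```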
We can confirm the suitability of our setup to an electrostatic regime by reducing our problem to a single dimension, normal incidence at \(\rho=d=0\), and examining the location of a quantum reflection. A well-known estimation of the order of magnitude of this distance (in one dimension) can be inferred from the so-called Badlands function [44; 51], as demonstrated for example in [52] and [12]. The dimensionless form of the Badlands function \(Q(z)\) can be written as \[Q(z)=\frac{4(V(z)-E)V^{\prime\prime}(z)-5(V^{\prime}(z))^{2}}{32(E-V(z))^{3}}, \tag{16}\] where \(V(z)\equiv V(0,z;0,0)\) is the one-dimensional potential function, \(E\) is the kinetic energy, and primes denote differentiation with respect to \(z\). The peaks of \(Q(z)\) coincide with regions where the WKB approximation breaks down (distances at which the wave vector experiences drastic changes), revealing the approximate position at which the quantum reflection occurs. Thus, by finding the location of a maximum of the Badlands function for a given configuration (choice of an atom and its velocity), we can check the applicability of a given regime. We have found the peaks of the Badlands function for the case of a perfect reflector for \({}^{3}\)He along with \(\mathrm{Na,K,Rb}\) and Cs, using the \(C_{3}\) coefficients for the alkali metals from [53]. The results are shown in Fig. 4 a) and b). For all elements in Fig. 4 a), we notice a rapid growth of the distance \(z_{R}\) for \(v<2\) m s\({}^{-1}\), a retarded regime corresponding to the lower energies usually associated with quantum reflection. On the opposite side of the spectrum (\(z_{R}\gg\lambda\), for a dominating transition wavelength of \({}^{3}\)He, \(\lambda=9.3\) nm [48]), the distance \(z_{R}\) falls inside the electrostatic (non-retarded) limit. We are thus considering a reflection distance which is approximately on the order of a wavelength \(\lambda\), motivating us to discard any contribution that might be arising over the scale of retarded distances (\(z\gg\lambda\)). We believe this to be a marginally justifiable assumption for \({}^{3}\)He, following the work in [45], whose use of the electrostatic potential influences our own method. Additionally, the lowest limit for the cut-off point \(\epsilon\) producing convergent results for all angles \(\theta\) was found to be \(\epsilon=10\) nm. This situates it within the approximate region where the Badlands function predicts the reflection to occur, yet it still allows for the full interaction to play out: the pulse starts reversing its motion before the cut-off point. For the case of the alkali atoms (K, Rb, Cs) along with Na, as shown in Fig. 4 b), the distance at which the quantum reflection occurs falls clearly in the non-retarded regime. In the case of Na, the distance \(z_{R}\) is approximately five times smaller than its transition wavelength, \(\lambda\approx 590\) nm. \begin{table} \begin{tabular}{|c|c|c|} \hline Atom & \({}^{3}\)He & Na \\ \hline Array dims. & \(25\times 25(25\times 25\ \mu\)m) & \(25\times 25(25\times 25\ \mu\)m) \\ Energy \(E_{0}\) & \(1.13\times 10^{5}(1.56\ \mu\)eV) & 665.70(1.21 neV) \\ Time \(t_{f}\) & 0.21(2.21 ns) & 0.21(0.115 \(\mu\)s) \\ Cut-off \(\epsilon\) & 0.001(1 nm) & 0.1(100 nm) \\ \hline \end{tabular} \end{table} Table 1: Parameters in natural units (SI units) used in the simulations.
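The peak of \(Q(z)\) can be located numerically. The sketch below is ours rather than the paper's: it uses the normal-incidence, \(d\to 0\) limit of Eq. (12), which we read as \(V(z)=-C_{3}/(2z^{3})\) (the paper only states the \(\propto z^{-3}\) form), converts quantities to the natural units of Table 1, and returns the position of the largest \(|Q|\) as an estimate of \(z_{R}\).

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
AMU = 1.66053906660e-27  # kg
C3 = 4.0e-50             # J m^3


def badlands_peak(mass_amu, velocity, c3=C3, L=1e-6, z_min=1e-4, z_max=1.0, n=200000):
    """Estimate the quantum-reflection distance z_R (in metres) as the location of
    the largest |Q(z)|, Eq. (16), for the 1D potential V(z) = -c3/(2 z^3), our
    reading of the d -> 0, normal-incidence limit of Eq. (12).  All quantities are
    converted to the paper's natural units (lengths in L = 1 um, energies in
    hbar^2 / (m L^2)); derivatives of V are taken analytically."""
    m = mass_amu * AMU
    energy_unit = HBAR**2 / (m * L**2)
    E = 0.5 * m * velocity**2 / energy_unit      # kinetic energy in natural units
    c3_nat = c3 / (energy_unit * L**3)           # C3 in natural units

    z = np.linspace(z_min, z_max, n)             # z in units of L
    V = -c3_nat / (2 * z**3)
    dV = 3 * c3_nat / (2 * z**4)                 # V'(z)
    d2V = -6 * c3_nat / z**5                     # V''(z)

    Q = (4 * (V - E) * d2V - 5 * dV**2) / (32 * (E - V)**3)
    return z[np.argmax(np.abs(Q))] * L           # back to metres


# Example: 3He (3.016 amu) approaching the plate at 2 m/s, as in Fig. 4 a).
z_R_he = badlands_peak(3.016, 2.0)
```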
Having confirmed the validity of the one-dimensional electrostatic model for several atoms, we carry out two-dimensional simulations for atoms of \({}^{3}\)He and Na; in both cases we vary the incidence angle \(\theta\) and the hole diameter \(d\). The normalised (with respect to the initial reflectivity) results for \({}^{3}\)He (Na) are plotted in Fig. 5 (Fig. 7), showcasing the relationship between the hole diameter \(d\) and reflectivity \(R\). Quite intuitively, for an atom travelling at incidence angle \(\theta\), the bigger the overlap between the hole's cross section and the arc that the angle \(\theta\) subtends, the larger the portion of the pulse that experiences the reduced strength of the potential gradient. This can be clearly seen in the diminishing influence of the hole on pulses that travel at grazing angles of incidence; the results of such simulations are shown in Figs 6 and 8. Additionally, independent of the hole diameter \(d\), in the case of each atom we observe a periodic behaviour along the \(\theta\) axis. The ratio of the reflected wave to the incoming one is modulated by the coupling between the potential's respective dependencies on \(\rho\) and \(z\). Curiously, when the diameter of the hole approaches zero, which nullifies the non-perpendicular dependence, we still observe the periodic behaviour. Since this occurs for both atoms, we have examined the animations of the respective simulations and have found them to describe the correct values of the reflectivity; we expand on this point in the appendix. We suggest that the reason behind this phenomenon lies in the self-interference of the wave packet. As it strikes the potential barrier, it disperses in all directions, ultimately affecting the reflectivity in a quasi-periodic fashion. ## IV Conclusions/summary We have presented a proof-of-principle method of controlling the magnitude of quantum reflection of a \({}^{3}\)He atom from a perfectly reflecting plate by adding a circular hole of varying diameter at its centre. The addition of the hole significantly modifies the potential experienced by the atom and directly influences the probability of quantum reflection. We extended the parameter space familiar from standard quantum reflection approaches by allowing our matter-wave to travel at arbitrary incidence angles with respect to the surface. This introduces complications, as the lack of a single chosen trajectory impacts the choice of boundary conditions in the time-independent approach, making the definition of a suitable-for-all simulation space computationally infeasible. We have thus modelled the problem as a 2D pulse propagation in the presence of an attractive potential, and solved the TDSE using a split-step method, utilising an open source solver [49]. Our results confirm the intuition insofar as an increase in the hole diameter reduces the probability of reflection; this is additionally influenced by the coupling between the direction of propagation and the strength of the potential gradient. The ability to study the reflection from the perspective of different directions of propagation reveals varied and interesting behaviours for the same atom. In the appendix we show how a finer grid density leads to convergence for the case of normal incidence, and this naturally applies to an arbitrary direction of propagation.
The length scale at which we tested quantum reflection is ideally suited to the regime of nanotechnology, opening up possibilities for designing tunable quantum reflection devices, such as velocity selectors able to filter out neutral atoms [54]. As well as the range of possibilities in technological applications, the plate with the hole offers an interesting scheme for investigating quantum nature of matter waves. In this paper, we have discussed the behaviour of a single atom incident on the perfectly reflecting surface, but the same method (perhaps at lower energies and thus considering retarded distances) can be applied to studying the quantum reflection of a BEC, with the specific emphasis on the two-dimensional profiles, which will be explored in a future work. Alternative avenues exist to extend and interpolate the plate with the hole potential to a non-electrostatic regime in the form of a heuristic argument as it is often done in dispersion force calculations [17], or numerical simulations. Both remain to be respectively tested to expand the reach of possible quantum reflection experiments. ###### Acknowledgements. It is a pleasure to thank Marc Caffrey for discussions. Financial support from UK Research and Innovation grant EPSRC/DTP 2020/21/EP/T517896/1 is gratefully acknowledged. ## Appendix: Convergence As already pointed out by Galiffi et al. [45], the convergence of a solution to the 2D pulse propagation problem depends on the density of points along the axis of the particle's propagation. They report using different grids for \(x\) and \(y\) values -- the pulse is travelling only along the \(x\) axis (normal incidence). In our case of arbitrary incidence, shortening the grid in the \(y\) direction leads to spurious results, i.e., the direction of the pulse acquiring a phase of \(-2\theta\), where \(\theta\) is the angle of incidence. As there is no preferred direction of motion, we thus use grids that have equal density across \(z\) and \(\rho\). We have inspected the animations of our simulations to establish a lower bound on the number of grid points \(N\), for which the pulse follows a correct trajectory and we have found it to be \(2^{11}\). Furthermore, we tested more dense grid configurations of the form \(n\times n\) and were limited by memory to the case of \(N=2^{13}\). Thus, we performed the numerical simulations -- results of which are shown in Figs: 5, 6, 7 and 8 -- using the \(z\) grid of \(N_{z}=2^{12}\), balancing accuracy and performance. Moreover, the algorithm of the split step numerical method converges for small values of t [49] -- in our case the time step is chosen to be \(dt=0.005\). It is worth noting that introducing the regularization of the potential in the form of a cut-off length \(\epsilon\) has an influence on overall results. With decreasing \(\epsilon\), the potential gradient a particle is experiencing becomes larger, and a denser grid is needed for more accurate sampling. We have tested this relationship using our algorithm for the case of normal incidence for different hole diameters, as shown in Fig. 9. The number of points on the \(\rho\) axis was fixed to \(N_{\rho}=2^{7}\), and we varied the density in the \(z\) direction between \(N_{z}=2^{11}\) and \(N_{z}=2^{15}\). The values of cut-off length \(\epsilon\) are bound by the reflection distance \(z_{R}\), and were chosen between 1 and 5 nm -- shown as separate panels in Fig. 9. 
In each case, we observe that the amplitude of fluctuations around a mean value (dashed line) decreases as the number of points is increased. For increasing diameter \(d\), the oscillations also decrease; the presence of the hole weakens the magnitude of the gradient in the normal direction. Thus, even a smaller resolution is able to capture the behaviour adequately. For our choice of range of \(\epsilon\), the oscillations decrease in a similar manner until \(\epsilon=10\) nm, where they become more smoothed out for \(N_{z}>2^{14}\). All numerical computations were performed on a PC with an 8-core 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz, 16GB of RAM, and a Rocky Linux operating system. Figure 4: The distance \(z_{R}\) at which a quantum reflection occurs as a function of incident velocity, for various atoms. \(z_{R}\) has been calculated as the location of the maximum of the Badlands function. The plot in a) shows the regime of applicability of the \({}^{3}\)He atom, with the dashed line marking its velocity (\(\mathrm{v}=2\) m s\({}^{-1}\)); the same in b) but for the Na atom (\(\mathrm{v}=0.1\) m s\({}^{-1}\)). Figure 5: Explicit dependence of reflectivity on the diameter of the hole \(d\) for a selection of angles \(\theta\) for the atom of \({}^{3}\)He travelling at \(\mathrm{v}=2\) m s\({}^{-1}\). Figure 6: Reflectivity as a function of the diameter of the hole \(d\) and the angle of incidence \(\theta\) for an atom of \({}^{3}\)He travelling at \(v=2\) m s\({}^{-1}\). ## Appendix: Periodic behaviour We have examined the animations produced by the simulations and found the visual representation to agree with the calculated values of reflectivity. The plots can be seen in Fig. 10; there, we have included snapshots from the simulations of the Na atom where the angles of incidence were respectively \(\theta=0.2\pi\) and \(\theta=0.3\pi\). The respective resultant reflectivities \(R(t_{f})\) were 0.221 and 0.353, which agree with the main results, leading us to assume that the reflectivity calculations are correct for all \(\theta\). The influence of \(\theta\) on the periodic behaviour seen in Figs 6 and 8 cannot be explained through the action of simple functions such as \(\sin\theta\) (\(\cos\theta\)) since they are strictly increasing (decreasing) on the interval \((0,\frac{\pi}{2})\). Thus, a more complicated response must be at play, borne out of the scattering of the wave packet across different angles of incidence. Given the strong non-separability of (14), we are unable to investigate this behaviour analytically. Figure 9: Relationship between reflectivity \(R\) and grid density in the \(z\) direction for normal incidence for the \({}^{3}\)He atom. The number of points on the \(\rho\) axis is kept constant, \(N_{\rho}=2^{7}\). Different coloured lines correspond to different diameters of the hole, which when increased reduce the reflectivity as well as the magnitude of the fluctuations. Figure 10: Different stages (\(t=0.04,0.1,0.2\)) of the wave-packet propagation and scattering for the Na atom at a plate without a hole. The top row shows low reflectivity, resulting from propagation at angle \(\theta=0.2\pi\), whereas the bottom row depicts high reflectivity, at angle \(\theta=0.3\pi\). The standard deviations \(\sigma_{\rho}\) and \(\sigma_{z}\) have been increased to 2 for ease of distinguishing the wave-packet features.
2302.06836
COMET: Neural Cost Model Explanation Framework
Cost models predict the cost of executing given assembly code basic blocks on a specific microarchitecture. Recently, neural cost models have been shown to be fairly accurate and easy to construct. They can replace heavily engineered analytical cost models used in mainstream compiler workflows. However, their black-box nature discourages their adoption. In this work, we develop the first framework, COMET, for generating faithful, generalizable, and intuitive explanations for neural cost models. We generate and compare COMET's explanations for the popular neural cost model, Ithemal against those for an accurate CPU simulation-based cost model, uiCA. Our empirical findings show an inverse correlation between the prediction errors of Ithemal and uiCA and the granularity of basic block features in COMET's explanations for them, thus indicating potential reasons for the higher error of Ithemal with respect to uiCA.
Isha Chaudhary, Alex Renda, Charith Mendis, Gagandeep Singh
2023-02-14T05:20:51Z
http://arxiv.org/abs/2302.06836v3
# CoMEt: x86 Cost Model Explanation Framework ###### Abstract ML-based program cost models have been shown to yield highly accurate predictions. They have the capability to replace heavily-engineered analytical program cost models in mainstream compilers, but their black-box nature discourages their adoption. In this work, we propose the first method for obtaining faithful and intuitive explanations for the throughput predictions made by ML-based cost models. We demonstrate our explanations for the state-of-the-art ML-based cost model, Ithemal. We compare the explanations for Ithemal with the explanations for a hand-crafted, accurate analytical model, uiCA. Our empirical findings show that high similarity between explanations for Ithemal and uiCA usually corresponds to high similarity between their predictions. An implementation of our explanation framework can be found at [https://github.com/uiuc-focal-lab/CoMEt](https://github.com/uiuc-focal-lab/CoMEt) Machine Learning, ICML ## 1 Introduction _Program cost models_ are analytical or learned models which predict the resources (memory, time, energy, etc) that the program takes while executing. They are used to guide compiler optimization (Mendis et al., 2019; Cummins et al., 2017) and superoptimization (Schkufza et al., 2012). In this paper, we focus specifically on cost models predicting runtime/throughput for _x86 basic blocks_, which are sequences of x86 assembly instructions with no jumps or loops. **Analytical models.** Traditionally, analytical cost models generate their predictions by simulating program execution. These consist of simulators that resemble the physical CPU for which the cost model is generating timing predictions. Analytical cost models are hand-engineered using released documentation from processor vendors alongside measurements of the behavior of the CPU under study. Examples of such simulation-based analytical cost models include uiCA (Abel and Reineke, 2022), LLVM-MCA (Di Biagio and Davis, 2018), IACA (Intel, 2017), and OSACA (Laukemann et al., 2018). Analytical cost models' errors range from 1% (Abel and Reineke, 2022) to 35% (Laukemann et al., 2018). Further, analytical cost models provide an explanation for their prediction, in the form of _simulation traces_: descriptions of the simulated machine state during program execution. However, these analytical models also require significant engineering effort to construct, and must be manually re-engineered to reflect changes across different CPUs. **ML-based models.** An alternative is to use machine learning techniques to learn a cost model (Mendis et al., 2018; Kaufman et al., 2020; Baghdadi et al., 2021). Development of ML-based cost models requires the collection of a dataset of representative programs, the collection of the end-to-end timings for the execution of those programs on the CPU under study, and the training of a selected type of ML model. An instance of such ML-based cost models is Ithemal (Mendis et al., 2018), which is an LSTM-based model trained on the BHive (Chen et al., 2019) dataset of x86 basic blocks. Ithemal's error on the BHive dataset is around 10%, more accurate than most analytical cost models (Chen et al., 2019). However accurate, ML-based models have the downside that they are essentially black-box in nature: there is no corresponding notion of a simulation trace or other explanatory information for the prediction of an ML-based cost model. 
**This work.** To maintain the performance and portability of ML-based models while allowing their predictions to be explained, we present CoMEt, a novel post-hoc explanation framework for x86 basic block cost models. CoMEt takes in an arbitrary x86 basic block cost model and an x86 basic block as input, and as output returns a set of instructions which are identified to be important for the timing prediction of the basic block, in that changing these instructions would result in a significant change in the predicted timing of the block. We primarily focus on cost models which predict _throughput_ (defined as the number of CPU clock cycles to execute the program when looped in steady state, in Mendis et al. (2018)). We note that CoMEt is general and can be applied to explain the predictions of cost models for other performance parameters such as instructions per cycle or \(\mu\)ops per cycle as well. **Key Challenges.** Our explanation method is based on the Anchors algorithm (Ribeiro et al., 2018), a local, model-agnostic explanation technique that can generate intuitive and faithful explanations. Specifically, it outputs a set of _anchors_ for a given input, which are predicates about the input which also hold for similar inputs with the same predictions. While CoMEt can be made compatible with other perturbation-based explanation techniques as well with minor modifications, we have selected to explain in terms of Anchors due to the high precision and coverage guarantees associated with these explanations (Ribeiro et al., 2018). There are two key challenges in generating explanations with Anchors for ML-based cost model. First, the choice of explanation predicates is not as clear as in vision or NLP where pixels and words respectively are natural candidates for this task. Second, Anchors relies on a perturbation mechanism for generating explanations generalizing to other similar inputs. Existing methods based on generative models (Devlin et al., 2018; Sanh et al., 2019) and hand-crafted heuristics (Schkufza et al., 2012) are inefficient for generating perturbations of basic blocks for cost models and often produce invalid code. **Our approach.** We present two novel primitives to use the Anchors technique in the x86 basic block throughput prediction context: an _x86 predicate set_ and an _x86 perturbation model_. The x86 predicate set is the set of predicates used as explanation; we find that using predicates corresponding to the set of instructions in the block leads to high-quality explanations. The x86 perturbation model is used to explore similar inputs to confirm whether or not the predicates lead to good explanations; we construct a novel perturbation model that leads the Anchors algorithm to find high-quality explanation predicates for x86 basic blocks. **Evaluation.** We apply CoMEt to explain Ithemal's predictions on basic blocks in the BHive dataset (Chen et al., 2019). We evaluate the explanations generated by CoMEt for their _faithfulness_ to the underlying cost model's behavior and their _utility_ in helping the human stakeholders, compiler and performance engineers in this case, to develop an understanding of the underlying cost model's behavior. **Contributions.** We make the following contributions: 1. We present CoMEt, the first explanation framework for ML-based cost models. The generated explanations identify the most important instructions in the input basic block for the throughput prediction made by a state-of-the-art ML-based cost model, Ithemal. 2. 
We present a novel evaluation scheme to gauge the faithfulness of CoMEt's explanations. 3. We conduct detailed case studies to understand the behavior of Ithemal using CoMEt's explanations for its throughput predictions. ## 2 Related Work **Explanation techniques**. Explanations for ML models is a novel research direction wherein there are two paradigms, either to build inherently interpretable ML models (Lakkaraju et al., 2016), or create post-hoc explanations for the models (Ribeiro et al., 2016, 2018; Lakkaraju et al., 2019; Martens and Provost, 2014). The former gets difficult to achieve for deep neural networks, which is why post-hoc explanations are preferred. These post-hoc explanations can either be global descriptions of the behavior of the ML model (Lundberg and Lee, 2017) or can be used to explain the behavior of the model in the local region in the input space around specific inputs (Ribeiro et al., 2016, 2018). As a first step, we have chosen to explain locally. This is because those explanations are more compatible with the explanations that can be obtained from the analytical cost models and thus our explanations can be put alongside those from the analytical models for comparison. The contemporary explanation techniques can be broadly divided into black-box or model-agnostic (Ribeiro et al., 2016, 2018; Lundberg and Lee, 2017) and white-box techniques (Simonyan et al., 2013; Seo et al., 2018). We have chosen to create model-agnostic explanations to be able to create a general explanation technique applicable to all cost models with same types of inputs and outputs and useful for proprietary cost models as well. We evaluate our explanations using the notions of faithfulness and utility described in Chen et al. (2022). **Perturbation Algorithms**. Some ML-model explanation algorithms such as (Ribeiro et al., 2018, 2016) perturb the inputs of the model to study the model's behavior on the perturbations and use them to develop explanations for it. For settings wherein the input is a sequence of discrete entities such as NLP and code, prior work (Ribeiro et al., 2018) has used generative models such as (Devlin et al., 2018; Feng et al., 2020) to obtain input perturbations. We observe that the perturbations created by generative models might be incompliant with the syntax of code. As Cito et al. (2021) point out, unnatural perturbations of programs can result in erroneous explanations. Hence, we have not used such unconstrained perturbation techniques in our explanation framework. Stoke (Schkufza et al., 2012) is a stochastic superoptimizer which perturbs input x86 assembly programs to optimize them. Although Stoke should generate perturbations which should have correct syntax, we have observed that it can also output some syntactically incorrect perturbed assembly code. As our method requires the perturbations to obey the syntax of x86 assembly, we do not use Stoke to create them. ## 3 Background: Anchors Explanations Our cost model explanation algorithm is developed on top of the Anchors explanation algorithm (Ribeiro et al., 2018), which gives local explanations with high precision and coverage for the behavior of the model that is being explained. The Anchors' algorithm is model-agnostic, which enables it to explain non-ML models which have the same types of inputs and outputs, as well. The Anchors' algorithm takes as input a set of input feature predicates, defined in Definition 3.1. 
**Definition 3.1**.: (_Feature Predicate_) A feature predicate \(\pi_{f}\) is a boolean function, \(\pi_{f}:v\to b\), that evaluates to true when its input \(v\) contains a particular feature \(f\). For example, consider \(\pi_{f}\) where \(f\) = push \(\mathtt{rax}\). \(\pi_{f}(v)=True\) for input basic blocks \(v\) which have \(f\) as an instruction and false otherwise. Input feature predicates, \(\mathcal{P}\) are the predicates corresponding to the features in the input. The Anchors algorithm generates explanation function \(\epsilon=\bigcap_{\phi\in\Phi}\phi\), \(\Phi\subseteq\mathcal{P}\), which is true for a large number of inputs including the original input (high Coverage (Definition 3.3)) and when it is true, then it is highly likely for the input to be classified to a particular class (high Precision (Definition 3.2)). **Definition 3.2**.: (_Precision of an explanation_) The Precision of an explanation function \(A\) is defined as \(Prec(A)=E_{D(z|A(z)=True)}[I_{\rho(z)=\rho(x)}]\), where \(D(z|A(z)=True)\) is an input perturbation distribution conditioned on the perturbation to satisfy \(A\) and \(\rho\) is the model that is being explained. **Definition 3.3**.: (_Coverage of an explanation_) Coverage of \(A\) is defined as \(Cov(A)=E_{D(z)}A(z)\). The Anchors' algorithm identifies \(A\) by solving the optimization problem shown in (1). \[\max_{Prec(A)\geq(1-\delta)}cov(A) \tag{1}\] As the exact evaluation of precision and coverage for an Anchor \(A\) is intractable over continuous perturbation distributions, these quantities are estimated with samples from \(D(z|A)\) and \(D(z)\) respectively. Thus, we require a mechanism, _perturbation model_, that can generate samples \(z\) from \(D(z)\) that can have \(A(z)=1\) if required. The anchors' algorithm solves the optimization problem shown in (1) iteratively. In iteration \(i\) (starting from \(1\)), the anchors' algorithm selects sets \(\mathcal{S}\) as candidate Anchors, where \(\mathcal{S}=\bigcap_{\theta\in\Theta}\theta\); \(\Theta\subseteq\mathcal{P},|\ \Theta\ |=i\). The algorithm selects \(m\) sets like \(\mathcal{S}\), \(S_{m}\) which have the highest precision, with a best-m selection algorithm called KL-LUCB. Each set in \(S_{m}\) is extended with another predicate in the next iteration. Finally, from all the candidate explanations that have precision higher than the threshold \((1-\delta)\), the one with the maximum coverage is given as the Anchor explanation. ## 4 Explanations for Cost Models Next, we describe our framework, called CoMEt, for generating intuitive explanations of cost models operating on assembly basic blocks. We build CoMEt on top of the state-of-the-art model-agnostic explanation algorithm, Anchors (Ribeiro et al., 2018), which can provide formal bounds on the generality of the generated explanations within a local neighborhood of the original input. Next, we describe our key contributions: x86 explanation predicates' set and x86 perturbation model. We illustrate our approach with the running example of the basic block \(\beta\) shown as the input basic block in Figure 1. Let \(\beta\) and cost model \(\rho\) be the inputs to CoMEt, which generates explanations for \(\rho\)'s prediction for \(\beta\), \(\rho(\beta)\). Let \(\Psi\) be the set of all instructions in the basic block \(\beta\). **x86 Explanation Predicates**. We use the set of feature predicates \(\mathcal{P}\) (Definition 3.1) as the atomic units of our explanations. Our predicates correspond to the elements of \(\Psi\). 
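As a minimal illustration of Definition 3.1 and of an explanation built as a conjunction of such predicates, instruction-level predicates can be written as below. This sketch is ours, not CoMEt's implementation; representing a basic block as a plain list of instruction strings is a simplification.

```python
from typing import Callable, List

BasicBlock = List[str]  # a basic block as a list of x86 instruction strings (our simplification)


def feature_predicate(instruction: str) -> Callable[[BasicBlock], bool]:
    """Definition 3.1: a predicate pi_f that is true for any block containing `instruction`."""
    return lambda block: instruction in block


def anchor(predicates: List[Callable[[BasicBlock], bool]]) -> Callable[[BasicBlock], bool]:
    """An explanation is the conjunction of a subset of feature predicates (Section 3)."""
    return lambda block: all(p(block) for p in predicates)


# The block of Figure 1 and a candidate explanation built from two of its instructions.
block = ["push rax",
         "mov dword ptr [rbx], eax",
         "add rax, rbx",
         "cmp rbx, rdx"]
candidate = anchor([feature_predicate("push rax"),
                    feature_predicate("mov dword ptr [rbx], eax")])
assert candidate(block)  # the original block always satisfies its own candidate anchor
```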
For the example input basic block in Figure 1, the feature predicates in \(\mathcal{P}\) are characterized by the elements of \(\Psi\), where \(\Psi=\{\text{push }\text{rax};\text{mov }\text{dword }\text{ptr }[\text{rbx}]\text{, }\text{eax};\text{add }\text{rax},\text{rbx};\text{cmp }\text{rbx},\text{rdx}\}\). \(\mathcal{E}_{\rho}(\beta)=\bigcap\limits_{\theta\in\Theta}\theta\), \(\Theta\subseteq\mathcal{P}\) is an explanation of \(\rho(\beta)\). In other words, \(\mathcal{E}_{\rho}(\beta)\) is the conjunction of all predicates \(\theta\) in a subset of \(\mathcal{P}\), \(\Theta\). We work with predicates characterized by instructions as instructions are the smallest executable, stand-alone elements of a basic block. Note that there are other possible choices for explanation predicates such as predicates characterized by the tokens of the basic block, i.e. the opcode and operands of each instruction in the block. While prior work in NLP explanations (Ribeiro et al., 2018; Wallace et al., 2018) has used words/tokens as explanation predicates, we find that in the domain of cost models, explanations in terms of basic block tokens are too fine-grained and hence uninterpretable and unactionable. For the input basic block in Figure 1, using token-level predicates, we get the tokens \(\{\text{push},\text{mov},\text{rbx}\text{ }(\text{base of first, memory operand in instruction }1)\}\) as explanations with precision = \(0.96\) for the throughput prediction of cost model Ithemal (Mendis et al., 2018). As tokens are not standalone units of a basic block, we are not aware of any way to match these explanations with our intuitive understanding of the execution and bottlenecks of the block. These are determined by the flow of instructions in the block through the CPU pipeline model (Di Biagio & Davis, 2018). We can not comprehend the token-level explanations or decide whether Ithemal bases its throughput prediction on the right set of input features. Hence, it seems reasonable to provide explanations in terms of predicates corresponding to all the instructions in the basic block. **x86 Perturbation Model**. To estimate the precision (Definition 3.2) and coverage (Definition 3.3) of any given candi date explanation in the Anchors' algorithm (Section 3), we need a perturbation model. For specifying our requirements from the perturbation model, we define valid x86 assembly code in Definition 4.1. **Definition 4.1**.: (_Valid x86 assembly code_) Valid x86 assembly code is an assembly instruction or a sequence of assembly instructions that can be compiled and executed on real hardware. Such sequences of instructions have each instruction take the number and types of operands that are supported by the instruction's opcode in the x86 Instruction Set Architecture. We have the following requirements for the perturbation model. * The perturbation model should produce valid assembly basic blocks (Definition 4.1). * The perturbation model should facilitate the retention of a subset of input features corresponding to a given set of predicates. The first requirement exists because the precision estimation needs the cost model's predictions for the perturbed basic blocks. Cost models, such as those in Mendis et al. (2018); Abel and Reineke (2022), typically require their input basic blocks to be valid x86 assembly codes (Definition 4.1). Because of this requirement, perturbation models used in prior work Ribeiro et al. 
(2018) based on generative models are unsuitable for this domain as they give no guarantee of producing valid x86 assembly code. One could potentially reject the invalid basic blocks generated by the generative models (rejection sampling), but the large number of valid perturbations that are needed to achieve high precision and sufficient coverage in the Anchors' algorithm makes the use of these generative models computationally expensive. Moreover, if we perform rejection sampling, then it will be hard to characterize the mathematical structure of the perturbation model. A formal mathematical structure of the perturbation model can permit potential tuning of the perturbation distribution \(D(z)\) and hence modulate the coverage of our explanations, as needed. The second requirement stated above is because we want to model the distribution of perturbations with candidate explanation \(A\), \(D(z|A)\) (defined in Section 3) as well, for the formulation of the precision of \(A\) (Definition 3.2). To overcome the abovementioned limitations of the perturbation models in prior work and fulfill the requirements, we design a custom perturbation model, \(\Pi\), for our explanations of cost models. A basic block perturbation model which always produces valid basic blocks can be constructed with the composition of the following primitive perturbation operations on a given basic block: _Insertion_ of a valid instruction, _Deletion_ of an existing instruction, and _Replacement_ of an existing instruction with another valid instruction. We construct \(\Pi\) such that it composes Deletion and Replacement to create perturbed basic blocks. \(\Pi\) does not insert an instruction, as this operation might create new bottlenecks in basic blocks in addition to the pre-existing ones, which can significantly change the throughput prediction without any modification in the original features of the block. For example, if, for the input basic block in Figure 1, we create a perturbation wherein the instruction div r10 is inserted as the second instruction (Listing 1), the throughput prediction made by the cost model uiCA (Abel & Reineke, 2022) increases from \(2\) cycles to \(31\) cycles and the basic block's throughput bottleneck changes from a backend bottleneck to a bottleneck due to the data dependency between instructions \(1\) and \(2\) caused by the common operand \(\mathrm{rax}\) (an implicit operand of the \(\mathrm{div}\) instruction). Our explanation algorithm might attribute the change in the throughput prediction to randomly selected input feature predicates and hence unreliably explain the behavior of \(\mathrm{uiCA}\) on the block. Figure 1: Our method for explaining the predictions of a cost model. The individual instructions in the input basic block characterize the candidate explanation's predicates. The perturbation model perturbs the original block to generate new blocks (such as blocks (a), (b), (c)) and queries the cost model for predictions on those (shown as the values in the outputs of the cost model). The perturbations and predictions are given to the Anchors algorithm, which determines the set of predicates that are important for the cost model's prediction for the basic block. In the illustrated example, instructions push rax and mov dword ptr [rbx], eax characterize the predicates in our explanations for the cost model's prediction for the basic block.
\(\Pi\) also has a restricted instruction replacement operation which replaces an instruction \(inst\) only with another valid instruction that takes the same number and type of operands as \(inst\). The advantage of such a restriction is that there will be no replacements with instructions that operate on absolutely different kinds of operands than the original instruction. For instance, it may be undesirable to have the instruction 2 of input basic block in Figure 1, \(\mathrm{mov}\)\(\mathrm{dword}\)\(\mathrm{ptr}\)[\(\mathrm{rbx}\)], \(\mathrm{eax}\) which has operands containing integers, replaced with \(\mathrm{vmovss}\)\(\mathrm{dword}\)\(\mathrm{ptr}\)[\(\mathrm{rbx}\)], \(\mathrm{xmm1}\)1, which has operands containing floating point values. Footnote 1: [https://www.felixcloutier.com/x86/movss](https://www.felixcloutier.com/x86/movss) \(\Pi\) contains both Deletion and Replacement instruction perturbation operations and not either of them for the following reason. If \(\Pi\) had just the deletion operation, then the number of unique perturbations that can be created for input basic blocks with \(n\) instructions will be \(2^{n}-1\), which will be insufficient for accurately estimating the precision and coverage of candidate explanations in our explanation algorithm for smaller basic blocks. On the other hand, having the replacement operation can create a lot of perturbations to other valid assembly instructions, but having just the replacement operation restricts the explanation algorithm from considering the behavior of the cost model in the absence of an instruction. In our experiments, we have found that having the deletion operation alongside the replacement operation leads to better explanations than those with just the replacement operation (Appendix G). Next, we describe the construction of \(\Pi\). We formally model the primitive instruction perturbation operations in \(\Pi\), instruction deletion and restricted replacement of instruction \(inst\) with perturbations within the equivalence class (Definition 4.2) to which \(inst\) belongs. **Definition 4.2**.: _(Equivalence Classes of Assembly Instructions) Equivalence Classes of assembly instructions consist of all valid assembly instructions that have the same number and type (\(\tau\) in (2)) of operands (\(\omega\)). Each equivalence class also includes an empty string \(\phi\)._ We define the type \(\tau\) of an operand \(\omega\) in x86 assembly in (2), where \(\sigma\) denotes the number of bits in \(\omega\). \(f_{R},f_{M},f_{C}\) are the predicate functions that are satisfied with register, memory, and constant/immediate inputs. \[\begin{split}\tau(\omega):=& R_{\sigma},\ \textbf{if}\ f_{R}(\omega)\\ & M_{\sigma},\ \textbf{if}\ f_{M}(\omega)\\ & C,\ \textbf{if}\ f_{C}(\omega)\\ & U,\ otherwise\end{split} \tag{2}\] Let \(\Gamma(inst)\) denote the tuple of type of operands of \(inst\). Thus, \(\Gamma(inst)=(\tau(\omega_{i}))_{i\in[m]}\), where \(m\) is the number of operands in \(inst\). The equivalence class for \(inst\), \(\Sigma_{\Gamma(inst)}\) is characterized by \(\Gamma(inst)\). \(\Pi\) will select a random element of \(\Sigma_{\Gamma(inst)}\) to replace \(inst\). The selection of \(\phi\) would denote the deletion of \(inst\), the selection of any other element, \(\overline{inst}\in\Sigma_{\Gamma(inst)};\overline{inst}\neq inst\) would denote the replacement of \(inst\) and the selection of \(inst\) will denote its retention. 
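A simplified sketch of the operand-type map \(\tau\) of Eq. (2) and the class key \(\Gamma(inst)\) of Definition 4.2 is given below. It is our own illustration rather than the paper's implementation: the register sets and regular expression cover only common operand spellings and are not a full x86 parser.

```python
import re

REG64 = {"rax", "rbx", "rcx", "rdx", "rsi", "rdi", "rbp", "rsp",
         "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15"}
REG32 = {"eax", "ebx", "ecx", "edx", "esi", "edi", "ebp", "esp"}


def operand_type(omega: str) -> str:
    """tau(omega) of Eq. (2): classify an operand as a sized register (R),
    a sized memory reference (M), a constant (C) or unknown (U)."""
    omega = omega.strip().lower()
    if omega in REG64:
        return "R64"
    if omega in REG32:
        return "R32"
    if "ptr [" in omega:  # e.g. "dword ptr [rbx]"
        bits = {"byte": 8, "word": 16, "dword": 32, "qword": 64}.get(omega.split()[0], "?")
        return f"M{bits}"
    if re.fullmatch(r"-?(0x[0-9a-f]+|\d+)", omega):
        return "C"
    return "U"


def equivalence_class_key(inst: str):
    """Gamma(inst) of Definition 4.2: the tuple of operand types.  All valid
    instructions sharing this key, together with the empty string phi,
    form the equivalence class Sigma_Gamma(inst)."""
    _, _, rest = inst.partition(" ")
    operands = [op for op in rest.split(",") if op.strip()]
    return tuple(operand_type(op) for op in operands)


# As in the text: "push rax" belongs to the class keyed by ("R64",).
assert equivalence_class_key("push rax") == ("R64",)
```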
For example, consider instruction \(1\), \(inst_{1}\) in the input basic block of Figure 1, \(\texttt{push}\ \ \ \mathrm{rax}\). \(\tau(rax)=R_{64}\). \(\Gamma(inst_{1})=(R_{64})\). Thus, \(inst_{1}\) is contained in \(\Sigma_{\Gamma(inst_{1})}=\Sigma_{(R_{64})}\). Other instructions in \(\Sigma_{(R_{64})}\) are \(\texttt{push}\ \ \mathrm{rbx}\), \(\texttt{pop}\ \ \mathrm{rdx}\), and the empty string (instruction) \(\phi\). Our perturbation algorithm, \(\Pi\) is described succinctly in Algorithm 1. \(\Pi\) facilitates the preservation of a set of instructions, \(\Phi\) of basic block \(\beta\) in its perturbed basic blocks [lines 6-7]. We independently attempt to perturb each instruction \(inst,inst\notin\Phi\), of \(\beta\). For perturbing \(inst\), we map it to its equivalence class \(\Sigma_{\Gamma(inst)}\) [line 9]. We map every element of \(\Sigma_{\Gamma(inst)}\) to probability masses with a custom probability mass function based on a tunable parameter \(p\) [line 10]. We denote the total probability of change in \(inst\) to \(\overline{inst}\in\Sigma_{\Gamma(inst)},\overline{inst}\neq inst\), by \(Pr(p)\). We have detailed our choice of \(Pr(p)\) in the Appendix D. The perturbed instruction, \(\overline{inst}\) is a randomly selected element of \(\Sigma_{\Gamma(inst)}\), weighted by the probability masses [line 11]. All \(\overline{inst}\) are then combined to make the perturbed basic block \(\overline{\beta}\) [line 15]. Examples of \(\overline{\beta}\) for input basic block are shown as outputs of the perturbation model in Figure 1. **Characterizing \(\Pi\)**. As \(\Pi\) perturbs \(\beta\) at the instruction level, we define the distance between \(\beta\) and \(\overline{\beta}\), \(\Delta(\beta,\overline{\beta})\) as the number of instructions that were modified or deleted from \(\beta\) to make \(\overline{\beta}\). Our definition of \(\Delta(\beta,\overline{\beta})\) is inspired by our observation that \(\Pi\) is an \(L_{0}\) sampler (Cormode & Firmani, 2014) defined over the support set of all assembly instructions. For perturbations created according to \(\Pi\), \(\Delta(\beta,\overline{\beta})\in[0,n]\), where \(n\) is the number of instructions in \(\beta\). While it is possible to restrict the value of \(\Delta(\beta,\overline{\beta})\) to be less than a fixed upper bound, we work with no such constraints and leave the analysis of such constraints to future work. The expected value of \(\Delta(\beta,\overline{\beta})\) is \(E_{\overline{\beta}}[\Delta(\beta,\overline{\beta})]=(n-\mid\Phi\mid)\cdot(Pr(p))\), where \(\Phi\) is the set of instructions that must be preserved in the perturbations created by \(\Pi\), which is parameterized by \(p\). The parameterization of \(\Pi\) with parameter \(p\) enables tuning of \(\Pi\) to achieve a target value for the expected amount of perturbation introduced by \(\Pi\) in \(\beta\), depending on the desired variance in the perturbation distribution around the original input basic block. **Generating explanations for model \(\rho\)**. Our explanation generation process for basic block throughput prediction is visualized in Figure 1. We use the Anchors' algorithm (Ribeiro et al., 2018) to create explanations \(\mathcal{E}(\beta)\) for the throughput prediction of a cost model for \(\beta\). We have provided the specific details of our method of adapting the Anchors' algorithm to our problem which has regression output in the Appendix E. 
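To make the above concrete, the following sketch (ours, not Algorithm 1 verbatim) draws perturbations that retain a preserved instruction set \(\Phi\) and estimates the precision of a candidate anchor by Monte-Carlo sampling, as in Definition 3.2. The uniform choice within an equivalence class and the tolerance-based notion of an unchanged prediction are simplifying assumptions standing in for the paper's probability masses (parameter \(p\)) and its regression adaptation (Appendix E).

```python
import random
from typing import Dict, List, Optional, Sequence, Set, Tuple


def perturb_block(block: Sequence[str],
                  preserve: Set[int],
                  equivalence_classes: Dict[Tuple, List[str]],
                  class_key,
                  change_prob: float = 0.5,
                  rng: Optional[random.Random] = None) -> List[str]:
    """One draw from a simplified perturbation model Pi.  Every instruction whose
    index is not in the preserved set Phi is independently modified with total
    probability `change_prob` (the role of Pr(p)); a modification either deletes
    the instruction (the empty string phi) or replaces it by another member of
    its equivalence class, chosen uniformly here."""
    rng = rng or random.Random()
    out = []
    for i, inst in enumerate(block):
        if i in preserve or rng.random() >= change_prob:
            out.append(inst)  # retain the instruction
            continue
        others = [c for c in equivalence_classes.get(class_key(inst), []) if c != inst]
        out.append(rng.choice(others + [""]))  # "" encodes deletion
    return [inst for inst in out if inst]


def estimate_precision(cost_model,
                       block: Sequence[str],
                       preserve: Set[int],
                       equivalence_classes: Dict[Tuple, List[str]],
                       class_key,
                       n_samples: int = 500,
                       tolerance: float = 0.5) -> float:
    """Monte-Carlo estimate of the precision of a candidate anchor (Definition 3.2):
    the fraction of perturbations, conditioned on keeping the anchored instructions,
    whose predicted throughput stays within `tolerance` cycles of the prediction
    for the original block."""
    reference = cost_model(list(block))
    rng = random.Random(0)
    hits = 0
    for _ in range(n_samples):
        z = perturb_block(block, preserve, equivalence_classes, class_key, rng=rng)
        hits += abs(cost_model(z) - reference) <= tolerance
    return hits / n_samples
```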
## 5 Evaluation In this section, we conduct experiments to study two properties of the explanations generated by CoMEt. * _Faithfulness_. Do the explanations generated by CoMEt correctly reflect the behavior of the cost model that is being explained? * _Utility_. Can the explanations be used to develop an understanding of the behavior of the cost model? **Experimental Setup**. All our experiments were conducted on a 12th Gen 20-core Intel i9 processor. Unless mentioned otherwise, we set the precision threshold \((1-\delta)\) in (1) as \(0.82\) and the parameter \(p\) of the probability mass function in our perturbation model as \(0.5\) for all our experiments. We set the Anchors algorithm's specific hyperparameters, mentioned in Ribeiro et al. (2018), as \(B=1,\epsilon=0.15\), and the total number of coverage samples \(=10000\). We study the sensitivity of CoMEt to the hyperparameters in Appendix H. We use basic blocks from the popular BHive dataset (Chen et al., 2019). To analyze the explanations generated by CoMEt, we randomly sample \(200\) frontend-bound basic blocks (with the throughput bottleneck at the Predecoder or Decoder of the processor) and \(200\) backend-bound basic blocks (with the throughput bottleneck at the instruction execution ports of the processor). We get the bottleneck type of each basic block from the analysis report generated by uiCA2. We combine the two random selections of basic blocks to create our _dataset for explanation_, \(\mathcal{D}\). We include only basic blocks with between 4 and 10 instructions. We have consistently worked with throughput predictions for the Haswell microarchitecture and leave the analysis for other microarchitectures to future work. Footnote 2: [https://uica.uops.info/](https://uica.uops.info/) ### Explanation Faithfulness Study For this study, we work with two throughput cost models: the ML-based model Ithemal (Mendis et al., 2018) (Appendix B) and the analytical model uiCA (Abel and Reineke, 2022) (Appendix C). Analytical models are generally considered more trustworthy than ML models as they provide additional analysis pertaining to the execution of the basic block. Thus, the main utility of CoMEt is in explaining ML-based models, such as Ithemal, as their internal workings are not known. So we create explanations for Ithemal using CoMEt and validate them using the predictions and explanations for uiCA. We have selected uiCA as it achieves the lowest mean absolute percentage error among several state-of-the-art throughput cost models on the BHive dataset, as reported in Abel and Reineke (2022). Further, the internal working mechanisms of actual processors are highly intricate and often undocumented, making it difficult to perform any analysis directly on them. Hence, we keep uiCA as our reference ground truth to study the faithfulness of our explanations and retrain Ithemal against uiCA's throughput predictions on the BHive dataset (originally, Ithemal was trained on the BHive dataset with the ground-truth throughput values of the basic blocks on an actual processor), so that Ithemal can learn to model the workings of uiCA, which is completely transparent. Our retrained version of Ithemal achieves \(7\%\) mean absolute percentage error on a \(20\%\) test set derived from BHive. We use CoMEt to generate explanations for Ithemal on the basic blocks in \(\mathcal{D}\). We first study their average precision and generality (in terms of the number of coverage samples) in the first entry of Table 1.
Our results in Table 1 indicate that CoMEt generates explanations with high average precision. This implies that the satisfaction of the predicates in our explanations by a basic block leads to a throughput prediction that is very close to the original basic block's throughput prediction with high probability. This indicates that our explanations are faithful to the behavior of the cost model that they are trying to explain. The explanations generated by CoMEt generalize to several other basic blocks as well (depicted by their number of coverage samples), which makes them useful for understanding the overall behavior of the cost model. We further analyze the explanations generated by CoMEt for Ithemal by comparing them with similar explanations created by CoMEt for uiCA on the basic blocks in \(\mathcal{D}\). Specifically, we compute the cosine similarity between the bit-masked vector representation of the explanation predicate set for the explanation for Ithemal \(\mathcal{E}_{I}(\beta)\) and uiCA \(\mathcal{E}_{U}(\beta)\) for a given basic block \(\beta\). We study the variation of the cosine similarity with the mean absolute error \(\epsilon(\alpha_{I},\alpha_{U})\) between rounded-off predictions of Ithemal \(\alpha_{I}\) and uiCA \(\alpha_{U}\). We describe our error metric further in Appendix F. We visualize the variation in the error \(\epsilon\) in Ithemal with respect to uiCA versus similarity \(\kappa\) between their explanations in Figure 2. Out of the \(400\) basic blocks in \(\mathcal{D}\), there are \(344\) basic blocks having \(\epsilon=0\). Out of these \(344\) basic blocks, \(175\) (\(>50\)%) have \(\kappa=1.0\) and \(259\) (\(65\)%) have \(\kappa\geq 0.8\). Thus, the explanations generated by CoMEt for the two models are mostly similar when the error between them is low. This observation validates the faithfulness of the explanations generated by CoMEt to the error between two models and hence their faithfulness to the models themselves. **Validating CoMEt's explanations for different types of bottlenecks**. Next, we validate the faithfulness of our explanations to the cost model's behavior for basic blocks with different types of bottlenecks. While the metrics shown in Table 1 did not indicate any major differences in the quality of CoMEt's explanations for backend and frontend bound basic blocks, we observe such differences using the cost model error versus explanation similarity analysis. As our analysis would validate explanations if we simultaneously get high similarity between explanations and low error in cost model predictions, we study the variation of the fraction of all basic blocks with \(0.0\) error versus the similarity between the explanations for the basic blocks for Ithemal and uiCA in Figure 3. We observe that a higher fraction of backend-bound basic blocks show high similarity between explanations created by CoMEt than that of frontend-bound basic blocks. This suggests that the explanations created by CoMEt are more faithful to the cost models' behaviors towards backend-bound blocks than that for frontend-bound blocks. We justify this observation by noting the fact that backend-bound blocks have bottlenecks in the execution ports and should therefore depend more on the type of instruction, while the frontend-bound blocks have bottlenecks in the Predecoder and Decoder, which depend on the type as well as the number of instructions in the basic block. 
For this reason, we might need to include more feature predicates of the basic block to obtain better explanations for frontend-bound basic blocks. We leave this to future work. \begin{table} \begin{tabular}{l l c c c} \hline \hline Number of Samples & Bottleneck & Precision & Coverage Samples & Time (s) \\ \hline 400 & All & 0.95 & 600 & 62.6 \\ 200 & Backend & 0.96 & 400 & 59.5 \\ 200 & Frontend & 0.94 & 900 & 65.8 \\ \hline \hline \end{tabular} \end{table} Table 1: Average precision, number of coverage samples, and time for Ithemal's explanations on \(\mathcal{D}\) using CoMEt. Figure 2: Variation of similarity of explanations with the absolute error between throughput predictions of Ithemal and uiCA for basic blocks in \(\mathcal{D}\). A larger point size indicates a higher number of basic blocks. ### Explanation Utility Study Next, we attempt to utilize the explanations created by CoMEt to understand Ithemal's behavior in representative case studies of basic blocks hand-picked from \(\mathcal{D}\) corresponding to the extreme points in Figure 2. **Case Study 1**. (\(\epsilon=0.0\) and \(\kappa=1.0\)) Consider the basic block in listing 2. The throughput predictions of Ithemal and uiCA are both \(2\) cycles, and the actual throughput of the basic block on a real CPU is also \(2\) cycles. Thus, both models predict the correct throughput value for the basic block. CoMEt's explanations for both Ithemal and uiCA on this example consist of instructions \(3\) and \(4\). Intuitively, the mov operations in instructions \(3\) and \(4\) should be slow as they involve writing data to memory and hence should be bottlenecks for the throughput prediction. The lea operation in instruction \(1\) should be very fast, and the execution time of the immediate-to-register mov operation in instruction \(2\) would be subsumed by the execution times of instructions \(3\) and \(4\). Thus, our explanations match our intuition and seem correct. Moreover, getting the same and correct explanations for both Ithemal and uiCA justifies getting the same and correct throughput predictions from the two cost models. **Case Study 2**. (\(\epsilon=1.0\) and \(\kappa=1.0\)) There is only one block, shown in listing 3, in \(\mathcal{D}\) where similarity and error are both high. Ithemal's prediction for this block is \(3\) cycles, while uiCA's prediction is \(4\) cycles. The actual throughput of the basic block on a real CPU is \(4\) cycles. The explanations generated by CoMEt for both Ithemal and uiCA indicate that instructions \(3,4,5,6,7\) are important for the throughput prediction made by both cost models. In uiCA's simulation model, this block is bottlenecked by the availability of resources on the CPU. Each of instructions 3, 4, 5, and 6 executes on the same resource, of which there is only one available. Thus, while given infinite resources instructions 3 and 4 and instructions 5 and 6 could execute in parallel, in practice they are limited by the availability of CPU resources; this is known as a _structural hazard_. Instruction 7 impacts the prediction by allowing the vmovaps instructions to always execute for free. We probe further into the difference between Ithemal's and uiCA's throughput values by reducing the basic block to just instructions \(3\) and \(4\), which are assigned to the same processor resource. These instructions need to be executed sequentially by the processor and should take \(1\) cycle each. But Ithemal predicts \(1\) cycle for the execution of both instructions. 
This suggests that Ithemal might not have learned to identify such structural hazards with its training dataset and learning algorithm. So while Ithemal considers instructions \(3,4,5,6,7\) as important by themselves with high precision, we hypothesize that it errs at identifying the structural hazard presented by them. Figure 3: Variation of the fraction of the total number of basic blocks with \(\kappa\) for different bottlenecks in basic blocks in \(\mathcal{D}\) with \(0\) throughput prediction error between Ithemal and uiCA. Figure 4: Case Study 3. **Case Study 3**. (\(\epsilon=0.0\) and \(\kappa=0.0\)) Consider the basic block in listing 4. Both Ithemal and uiCA make a throughput prediction of \(1.5\) cycles when rounded off to the nearest half cycle. The actual throughput of the basic block on a real CPU is also \(1.5\) cycles. But the explanation for Ithemal is instruction \(3\) only, while the explanation for uiCA is instructions \(1,2,4\). This basic block has two independent bottlenecks: that caused by the mov instruction (as in Case Study 1), which has a 1-cycle-latency bottleneck; and that caused by the three lea instructions. In isolation, a lea instruction is not a significant bottleneck; however, with three instructions, there is a sufficient dependency chain and contention for CPU resources that they form a 1.5-cycle-latency bottleneck. The 1.5-cycle-latency bottleneck thus determines the performance of the block. Ithemal is not able to identify that the lea instructions are the source of the bottleneck, and instead predicts that the mov instruction (which is in isolation the most costly instruction of the block) is the bottleneck. This observation reinforces our hypothesis made in Case Study 2 that Ithemal might not have learned to identify possible execution port contentions in basic blocks. Thus, we realize the utility of our explanations in helping develop intuition about the behavior of the black-box cost model, Ithemal. ## 6 Conclusion In this work, we presented CoMEt, the first approach for generating faithful explanations for state-of-the-art ML-based cost models. Our results show that CoMEt can generate faithful and intuitive explanations. We believe that CoMEt's explanations can be used for debugging ML-based cost models, improving trust in the workings of highly accurate ML-based cost models, and accelerating their real-world adoption.
2310.09914
Kinetic glass transition in granular gases and non-linear molecular fluids
In this paper we investigate, both analytically and numerically, the emergence of a kinetic glass transition in two different model systems: a uniformly heated granular gas and a molecular fluid with nonlinear drag. Despite the profound differences between these two physical systems, their behavior in thermal cycles share strong similarities, which stem from the relaxation time diverging algebraically at low temperatures for both systems. When the driving intensity -- for the granular gas -- or the bath temperature -- for the molecular fluid -- is decreased to sufficiently low values, the kinetic temperature of both systems becomes ``frozen" at a value that depends on the cooling rate through a power law with the same exponent. Interestingly, this frozen glassy state is universal in the following sense: for a suitable rescaling of the relevant variables, its velocity distribution function becomes independent of the cooling rate. Upon reheating, i.e., when either the driving intensity or the bath temperature is increased from this frozen state, hysteresis cycles arise and the apparent heat capacity displays a maximum. The numerical results obtained from the simulations are well described by a perturbative approach.
A. Patrón, B. Sánchez-Rey, A. Prados
2023-10-15T18:50:23Z
http://arxiv.org/abs/2310.09914v2
# Laboratory glass transition in granular gases and non-linear molecular fluids ###### Abstract In this paper we investigate the emergence of a laboratory glass transition in two different physical systems: a uniformly heated granular gas and a molecular fluid with non-linear drag. Despite the profound differences between the two systems, their behaviour in thermal cycles share strong similarities. When the driving intensity--for the granular gas--or the bath temperature--for the molecular fluid--is decreased to sufficiently low values, the kinetic temperature of both systems becomes "frozen" at a value that depends on the cooling rate through a power law with the same exponent. Interestingly, this frozen glassy state is universal in the following sense: for a suitable rescaling of the relevant variables, its velocity distribution function becomes independent of the cooling rate. Upon reheating, i.e. when either the driving intensity or the bath temperature is increased from this frozen state, hysteresis cycles arise and the apparent heat capacity displays a maximum. We develop a boundary layer perturbative theory that accurately explains the behaviour observed in the numerical simulations. ## I Introduction As is well known, most liquids can avoid crystallisation if they are cooled sufficiently fast. In that case, the liquid enters into a metastable supercooled regime in which a dramatic slowing down of the dynamics takes place. On the one hand, above the melting point \(T_{m}\), density fluctuations of the liquid relax on a time scale of the order of picoseconds. On the other hand, in the supercooled regime the relaxation times increase so fast that they become 14 orders of magnitude larger when the temperature is around \(\frac{2}{3}T_{m}\). [1] At this point, the liquid does not flow anymore and the glass transition occurs: configurational rearrangements cease, the liquid structure becomes "frozen" and the system gets trapped in a non-equilibrium disordered yet solid state, called the glassy state. [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] In spite of the great effort devoted to the investigation of glassy systems in the last decades, the glass transition continues to be an open problem. There is not yet a conclusive answer to the fundamental question of whether the glass transition is a purely dynamical phenomenon or is the consequence of an underlying phase transition as predicted in certain theoretical frameworks. [4; 5; 8; 10; 12; 13] Numerous studies have addressed the rich phenomenology that accompany the glass transition from different and complementary viewpoints. For instance, spin models put the accent on the characterisation of potential energy landscapes with a large number of energy minima connected by complex dynamics pathways, [16; 17; 18] kinetically constrained models emphasise the fact that relaxation events are cooperative because of the presence of geometric frustration, [19; 20] and so on. But the development of a successful theory to explain all the phenomenological observations in a unified and satisfactory manner is still a challenge. There are some key behaviours that are displayed by glass formers when submitted to cooling protocols followed by reheating. In the following, we exemplify the observed behaviour with the average energy \(\left\langle E\right\rangle\), but other physical quantity might be the relevant one in certain physical contexts--e.g. the average volume for polymeric glasses. 
[21; 22] When the system is cooled down to a low temperature, e.g. by lowering the bath temperature \(T\) at a constant rate \(r_{\mathrm{c}}\), the average energy \(\left\langle E\right\rangle\) departs from equilibrium and gets frozen when the system relaxation time \(\tau\) exceeds the characteristic cooling time \(r_{\mathrm{c}}^{-1}\). This is the behaviour that is termed the laboratory glass transition, which is a purely kinetic phenomenon. The temperature of the glass transition--actually a range of temperatures--at which the system departs from equilibrium and gets frozen decreases with the cooling rate and, consequently, the properties of a glass depend on the process by which is formed. When the system is reheated from the frozen state at the same rate \(r\), \(\left\langle E\right\rangle\) overshoots the equilibrium curve before returning there. This entails that the apparent [23] heat capacity \(d\left\langle E\right\rangle/dT\) displays a non-trivial behaviour with a marked peak at a certain temperature \(T_{\mathrm{g}}\), which can be employed to characterise the laboratory glass transition. [2; 3; 24; 25; 26] In this work, our aim is to analyse the emergence of the laboratory glass transition in two specific systems: a uniformly heated granular gas [27; 28; 29; 30; 31; 32] and a molecular fluid with non-linear drag. [33; 34; 35; 36; 37; 38; 39] Both systems are largely different from a fundamental point of view. In the molecular fluid with non-linear drag, collisions between particles are elastic and energy is thus conserved. Therefore, the non-linear molecular fluid tends in the long-time limit to an equilibrium state, with a Gaussian--or Maxwellian--velocity distribution function (VDF). In the granular gas, collisions between particles are inelastic and thus energy is continuously lost. Therefore, an energy injection mechanism is necessary to drive the system to a steady state. The simplest one is the so-called stochastic thermostat, in which a stochastic forcing homogeneously acts on all the particles. In this uniformly heated granular gas, the system remains spatially homogeneous and tends in the long-time limit to a non-equilibrium steady state (NESS), in which the kinetic temperature is a certain function of the driving inten sity. Moreover, the stationary VDF has a non-Gaussian shape, which is well described by the so-called first Sonine approximation. Therein, the non-Gaussianities are accounted for by the excess kurtosis, which is a smooth function of the inelasticity but independent of the driving intensity.[27; 29] Despite their apparent dissimilarities, uniformly heated granular systems and non-linear molecular fluids share some features and characteristic behaviours. The energy landscape of both kinds of systems is extremely simple, being only kinetic: the kinetic temperature \(T(t)\) determines the average energy \(\left\langle E\right\rangle(t)\), since they are simply proportional. Notwithstanding, the two systems display memory effects,[40] both the Kovacs[32; 38; 41; 42] and the Mpemba[43; 42; 38] memory effects. 
The Kovacs memory effect is especially characteristic of the complex response of glassy systems.[21; 22; 44; 45; 46; 47; 48; 49; 50; 51] It is interesting to note that the Mpemba effect has also been observed in spin glasses, but only in the spin glass phase--where it arises due to the aging dynamics of the internal energy.[52] In addition, when quenched to a very low temperature, both granular gases and non-linear molecular fluids tend to a time-dependent, non-equilibrium state, in which the kinetic temperature presents a very slowly non-exponential, algebraic, decay over a wide intermediate time window. These non-equilibrium attractors, the homogeneous cooling state (HCS) for the granular gas[53; 54; 55] and the long-lived non-equilibrium state (LLNES) for the molecular fluid,[42; 38] are characterised by non-Gaussian VDFs. Afterwards, for very long times, both systems approach their respective stationary states, NESS and equilibrium state for the granular and molecular cases, respectively. Since non-exponential relaxation and memory effects are hallmarks of glassy behaviour,[21; 22; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 56; 57; 58; 59] it is natural to pose the question as whether granular gases and non-linear molecular fluids undergo a laboratory glass transition when being subjected to a continuous cooling programme of the bath temperature. Of course, these systems are not realistic models of glass-forming liquids but one of the most interesting features of glassy behaviour is its ubiquity and universality: the glass transition is found in systems with typical length and time scales very different from molecular ones--such as colloidal suspensions and granular materials.[12] More specifically, we would also like to elucidate the possible role played by the HCS--for the granular gas--and the LLNES--for the non-linear molecular fluid--in the laboratory glass transition. The organisation of the paper is as follows. In Sec. II we introduce the uniformly heated granular gas model and write the evolution equations for the kinetic temperature and the excess kurtosis--in the first Sonine approximation that we employ in our work. In Sec. III, we investigate the laboratory glass transition in the granular gas when the driving intensity is continuously decreased--for the sake of concreteness, we consider a linear cooling programme in which the bath temperature changes linearly in time. Not only do we perform numerical simulations of the system under this cooling programme but also develop a singular perturbation theory approach--specifically, of boundary layer type--that accounts for the system evolution and even characterises very well the final glassy state. The hysteresis cycle that emerges when the system is reheated from final glassy, frozen, state is the subject of Sec. IV. The molecular fluid with non-linear drag model is introduced in Sec. V, where--similarly to the framework developed in Sec. II for the granular gas--the evolution equations of the model in the first Sonine approximation are put forward. In Sec. VI, we address the glass transition and hysteresis cycles in a molecular fluid with non-linear drag, by combining again numerical simulations and a boundary layer approach--this analysis is presented in a simplified way, because of its formal similarity with the granular gas. Finally, we present in Sec. VII the main conclusions and a brief discussion of our results. 
The appendices study more general cooling programmes and give additional details on the perturbative theory of the molecular fluid. ## II Model: Uniformly Driven granular gas First, we consider a granular gas of \(d\)-dimensional hard spheres of mass \(m\) and diameter \(\sigma\), with number density \(n\). These hard spheres undergo binary inelastic collisions, in which the tangential component of the relative velocity between two particles remains unaltered, while the normal component is reversed and shrunk by a factor \(\alpha\). This parameter \(\alpha\) is called the restitution coefficient, \(0\leq\alpha\leq 1\); elastic collisions--in which the kinetic energy is conserved--are recovered for \(\alpha=1\).[54; 55] In the uniformly heated granular gas, the system reaches a steady state in the long term because the kinetic energy lost in collisions is balanced on average by energy inputs, modelled through independent white noise forces acting on each particle.[27] For sufficiently dilute, homogeneous and isotropic gases, the description of the system may be accounted for by the one-particle VDF \(f(\mathbf{v},t)\), whose dynamical evolution is governed by the Boltzmann-Fokker-Planck equation \[\partial_{t}f(\mathbf{v},t)-\frac{\xi^{2}}{2}\frac{\partial^{2}}{\partial\mathbf{v}^{2}}f(\mathbf{v},t)=J_{\alpha}[\mathbf{v}|f,f].\] (II.1) In the above, the Boltzmann operator \(J_{\alpha}[\mathbf{v}|f,f]\) accounts for the inelastic collisions between the particles. We do not provide its full expression, as we will be working directly with the evolution equations for the cumulants of the VDF, as written below.[60] The parameter \(\xi\), on the other hand, stands for the strength of the stochastic thermostat. The kinetic (or granular) temperature \(T(t)\) is defined as usual, proportional to the average kinetic energy of the system: \[T(t)=\frac{m}{dk_{B}}\left\langle v^{2}\right\rangle=\frac{m}{dnk_{B}}\int d\mathbf{v}\ v^{2}f(\mathbf{v},t),\] (II.2) where \(k_{B}\) is the Boltzmann constant. In order to gain analytical insight into the evolution of the granular temperature, it is useful to introduce the scaled VDF \(\phi(\mathbf{c},t)\) as \[f(\mathbf{v},t)=\frac{n}{v_{T}^{d}(t)}\phi(\mathbf{c},t),\quad\mathbf{c}\equiv\frac{\mathbf{v}}{v_{T}(t)},\] (II.3) with \(v_{T}(t)\equiv\sqrt{2k_{B}T(t)/m}\) being the thermal velocity. For isotropic states, this scaled VDF may be expanded in a complete set of orthogonal polynomials as \[\phi(\mathbf{c},t)=\frac{e^{-c^{2}}}{\pi^{d/2}}\left[1+\sum_{l=2}^{\infty}a_{l}(t)L_{l}^{\frac{d-2}{2}}(c^{2})\right],\] (II.4) where \(L_{l}^{(k)}\) are the Sonine polynomials,[54; 55; 61; 62] and the \(a_{l}(t)\) coefficients are known as the Sonine cumulants. The latter account for the deviations of the scaled VDF from the Maxwellian distribution \(\phi_{\mathrm{M}}(\mathbf{c})=\pi^{-d/2}e^{-c^{2}}\). Throughout this work, here for the granular gas--and later for the molecular fluid--we work under the first Sonine approximation. 
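To make the first Sonine approximation concrete, the following minimal sketch evaluates the truncation of Eq. (II.4) at \(l=2\), i.e. a Maxwellian times a single Sonine-polynomial correction weighted by the cumulant \(a_{2}\). The numerical value of \(a_{2}\) used here is purely illustrative.

```python
import numpy as np

def sonine_L2(x, k):
    """Second generalized Laguerre (Sonine) polynomial L_2^{(k)}(x)."""
    return 0.5 * (k + 1.0) * (k + 2.0) - (k + 2.0) * x + 0.5 * x**2

def phi_first_sonine(c, a2, d=3):
    """Scaled VDF of Eq. (II.4) truncated at l = 2 (first Sonine approximation)."""
    maxwellian = np.exp(-c**2) / np.pi**(d / 2.0)
    return maxwellian * (1.0 + a2 * sonine_L2(c**2, (d - 2.0) / 2.0))

c = np.linspace(0.0, 3.0, 7)
print(phi_first_sonine(c, a2=-0.02))   # a2 value chosen only for illustration
```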
Within this approximation, we only need to monitor the kinetic temperature \(T\) and the first Sonine cumulant \(a_{2}\), given by \[a_{2}=\frac{d}{d+2}\frac{\left\langle v^{4}\right\rangle}{\left\langle v^{2}\right\rangle^{2}}-1,\] (II.5) which is also known as the excess kurtosis. For our analysis below, it is useful to introduce a characteristic length \(\lambda\) and a characteristic rate \(\nu\) as \[\lambda^{-1}= \frac{2n\sigma^{d-1}\pi^{\frac{d-1}{2}}}{d\,\Gamma(d/2)},\] (II.6a) \[\nu(T)= \left(1-\alpha^{2}\right)\lambda^{-1}\left(\frac{k_{B}T}{m}\right)^{1/2}.\] (II.6b) In the absence of the stochastic thermostat, the granular gas reaches the spatially-uniform non-steady state known as the HCS, for which the scaled VDF \(\phi\) becomes stationary and the granular temperature decays algebraically in time, \(T(t)\propto t^{-2}\), following Haff's law.[53; 54; 55; 63] Under the first Sonine approximation, the stationary value of the excess kurtosis at the HCS is given by \[a_{2}^{\rm HCS}=\frac{16(1-\alpha)(1-2\alpha^{2})}{25+2\alpha^{2}(\alpha-1)+24d+\alpha(8d-57)}.\] (II.7) When the stochastic thermostat is present, the granular gas reaches a NESS in the long time limit. The temperature \(T_{\rm s}\) at the NESS is given in terms of the stochastic strength \(\xi\) via the relation[27] \[\frac{k_{B}T_{\rm s}}{m}=\left[\frac{\lambda\,\xi^{2}}{(1-\alpha^{2})\left(1+\frac{3}{16}a_{2}^{\rm s}\right)}\right]^{2/3},\] (II.8) where \(a_{2}^{\rm s}\) is the NESS value of the excess kurtosis, \[a_{2}^{\rm s}=\frac{16(1-\alpha)(1-2\alpha^{2})}{73+56d-24d\alpha-105\alpha+30(1-\alpha)\alpha^{2}}.\] (II.9) This value has the same sign as \(a_{2}^{\rm HCS}\), thus attaining a null value at \(\alpha=1/\sqrt{2}\). From the Boltzmann-Fokker-Planck equation, the evolution equations for the temperature and the excess kurtosis are derived,[27; 29; 32; 41] \[\frac{d\theta}{dt^{*}} =\theta_{\rm s}^{3/2}\left(1+\frac{3}{16}a_{2}^{\rm s}\right)-\theta^{3/2}\left(1+\frac{3}{16}a_{2}\right),\] (II.10a) \[\frac{da_{2}}{dt^{*}} =2\theta^{1/2}\left[\left(1-\left(\frac{\theta_{\rm s}}{\theta}\right)^{3/2}\right)a_{2}+B\left(a_{2}^{\rm s}-a_{2}\right)\right],\] (II.10b) where we have defined the dimensionless temperatures and time \[\theta\equiv\frac{T}{T_{i}},\quad\theta_{\rm s}\equiv\frac{T_{\rm s}}{T_{i}},\quad t^{*}\equiv\nu(T_{i})t,\] (II.11) with \(T_{i}\) being the initial temperature, and introduced a parameter \(B\), \[B=\frac{73+8d(7-3\alpha)+15\alpha[2\alpha(1-\alpha)-7]}{16(1-\alpha)(3+2d+2\alpha^{2})}.\] (II.12) Interestingly, \(B\) can be written in terms of \(a_{2}^{\rm HCS}\) and \(a_{2}^{\rm s}\), specifically one has that \(B=a_{2}^{\rm HCS}/(a_{2}^{\rm HCS}-a_{2}^{\rm s})\)--as predicted by Eq. (II.10b) for \(\theta_{\rm s}=0\).[32; 41] From now on, we drop the asterisks when referring to the dimensionless time, so as not to clutter our formulas. Note that the initial value of the dimensionless temperature is always \(\theta_{\rm i}=1\) with our choice of units. ## III Laboratory glass transition and boundary layer approach In order to elucidate the emergence of a laboratory glass transition in the granular fluid, we consider a time-dependent driving intensity \(\xi(t)\), which continuously decreases from its initial value \(\xi_{\rm i}\) to zero. The corresponding "bath temperature" \(T_{\rm s}\), as given by Eq. 
(II.8), also becomes time-dependent and continuously decreases from \(T_{\rm s,i}\) to zero.[64] The system is initially prepared in the NESS corresponding to \(\xi_{\rm i}\), so the initial value of the dimensionless temperature is \(\theta(t=0)=\theta_{\rm s}(t=0)=1\). Therefrom, we apply a linear cooling programme with rate \(r_{\rm c}\), \[\frac{d\theta_{\rm s}}{dt}=-r_{\rm c},\quad\theta_{\rm s}(t)=1-r_{\rm c}t.\] (III.1) We consider a slow cooling, \(r_{\rm c}\ll 1\), such that we may resort to the tools of perturbation theory to study the laboratory glass transition. The choice of a linear cooling programme is made for the sake of concreteness; a more general family of protocols is considered in Appendix A. We study the behaviour of the dimensionless kinetic temperature within the \(\theta_{\rm s}-\theta\) plane during the cooling programme. For high enough bath temperatures, such that \(\theta_{\rm s}\simeq O(1)\), we expect \(\theta\simeq\theta_{\rm s}\) up to some small deviation. As the bath temperature keeps decreasing, if the glass transition takes place, for \(\theta_{\rm s}\ll 1\) the kinetic temperature \(\theta\) is no longer able to keep up with the bath temperature, and thus the system remains "frozen" at a certain limiting temperature \(\theta^{\rm Frz}\). Therefore, as there are two intrinsically different regimes throughout the cooling process, we employ the tools of boundary layer theory[65] to approach the problem. This approach is based on the existence of two clearly different regimes: the _outer layer_, for which the kinetic temperature does not deviate much from the bath temperature, and the _inner layer_, for which it freezes at \(\theta^{\text{Frz}}\). There is a continuous transition between the two layers, through the region that we shall refer to as the _matching region_, as shown by our analysis below. ### Regular perturbative expansion in \(r_c\) As we are interested in studying the dynamical behaviour of the system within the \(\theta_{\text{s}}-\theta\) plane, it is convenient to express the time derivatives in terms of derivatives with respect to the bath temperature, \[\frac{d}{dt}=\frac{d}{d\theta_{\text{s}}}\frac{d\theta_{\text{s}}}{dt}=-r_{c}\frac{d}{d\theta_{\text{s}}}.\] (III.2) Therefore, the system (II.10) is rewritten as \[-r_{c}\frac{d\theta}{d\theta_{\text{s}}}=\theta_{\text{s}}^{3/2}\left(1+\frac{3}{16}a_{2}^{\text{s}}\right)-\theta^{3/2}\left(1+\frac{3}{16}a_{2}\right),\] (III.3a) \[-r_{c}\frac{da_{2}}{d\theta_{\text{s}}}=2\theta^{1/2}\left[\left(1-\left(\frac{\theta_{\text{s}}}{\theta}\right)^{3/2}\right)a_{2}+B\left(a_{2}^{\text{s}}-a_{2}\right)\right].\] (III.3b) Since the system is cooled down from the NESS corresponding to \(\theta_{\text{s}}=1\), Eqs. (III.3) have to be solved with the boundary conditions \[\theta(\theta_{\text{s}}=1)=1,\quad a_{2}(\theta_{\text{s}}=1)=a_{2}^{\text{s}}.\] (III.4) In order to approximately solve Eqs. (III.3), we take advantage of our slow cooling hypothesis \(r_{c}\ll 1\) by introducing the regular perturbative series \[\theta=\theta^{(0)}+r_{c}\ \theta^{(1)}+O(r_{c}^{2}),\] \[a_{2}=a_{2}^{(0)}+r_{c}\ a_{2}^{(1)}+O(r_{c}^{2}).\] (III.5) Eqs. (III.5) are now inserted into the evolution equations, as written in Eq. (III.3), and we subsequently equate the terms with the same power of \(r_{c}\). At the lowest order, \(O(r_{c}^{0})\), i.e. 
for terms independent of \(r_{c}\), we have \[0=\theta_{\text{s}}^{3/2}\left(1+\frac{3}{16}a_{2}^{\text{s}}\right)-[\theta^{(0)}]^{3/2}\left(1+\frac{3}{16}a_{2}^{(0)}\right),\] (III.6a) \[0=\left[1-\left(\frac{\theta_{\text{s}}}{\theta^{(0)}}\right)^{3/2}\right]a_{2}^{(0)}+B\left(a_{2}^{\text{s}}-a_{2}^{(0)}\right),\] (III.6b) the solution of which corresponds to the NESS curve \[\theta^{(0)}=\theta_{\text{s}},\quad a_{2}^{(0)}=a_{2}^{\text{s}}.\] (III.7) At the first order, \(O(r_{c})\), i.e. for terms linear in \(r_{c}\), we have \[1=\frac{3}{2}\theta_{\text{s}}^{1/2}\ \theta^{(1)}\left(1+\frac{3}{16}a_{2}^{\text{s}}\right)+\theta_{\text{s}}^{3/2}\left(1+\frac{3}{16}a_{2}^{(1)}\right),\] (III.8a) \[0=\frac{3}{2}a_{2}^{\text{s}}\ \frac{\theta^{(1)}}{\theta_{\text{s}}}+B\ a_{2}^{(1)},\] (III.8b) where we have already substituted the \(O(r_{c}^{0})\) solutions from Eq. (III.7). The solution is \[\theta^{(1)}=\frac{2}{3}\theta_{\text{s}}^{-1/2}\left[1+\frac{3}{16}\ a_{2}^{\text{s}}\left(1+\frac{1}{B}\right)\right]^{-1},\] (III.9a) \[a_{2}^{(1)}=-\frac{a_{2}^{\text{s}}}{B}\theta_{\text{s}}^{-3/2}\left[1+\frac{3}{16}\ a_{2}^{\text{s}}\left(1+\frac{1}{B}\right)\right]^{-1}.\] (III.9b) The regular perturbation theory breaks down for low bath temperatures \(\theta_{\text{s}}\to 0^{+}\), for which both \(\theta^{(1)}\) and \(a_{2}^{(1)}\) diverge. More specifically, the regular perturbation theory ceases to be valid when the lowest order and the first order terms become comparable, which comes about when \(\theta_{\text{s}}=O\left(r_{c}/\theta_{\text{s}}^{1/2}\right)\), i.e. \(\theta_{\text{s}}=O(r_{c}^{2/3})\). The regular perturbative expansion derived above is thus limited to high enough bath temperatures, \(\theta_{\text{s}}\gg r_{c}^{2/3}\). This condition gives the range of validity of the _outer solution_ at lowest order, \[\theta_{\text{O}}\equiv\theta^{(0)}=\theta_{\text{s}},\quad a_{2,\text{O}}\equiv a_{2}^{(0)}=a_{2}^{\text{s}},\] (III.10) which is useful in the forthcoming sections. The scaling of the outer solution presented above has striking implications. As already stated, the regular expansion is not valid for low enough bath temperatures, \(\theta_{\text{s}}\ll r_{c}^{2/3}\). We expect, on a physical basis, that a laboratory glass transition should emerge such that the system becomes "frozen" as soon as \(\theta_{\text{s}}\) becomes of the order of \(r_{c}^{2/3}\). Therefore, we can estimate the value of the system variables in the frozen state by considering the situation for \(\theta_{\text{s}}=O(r_{c}^{2/3})\), in which the lowest and first order terms of the outer expansion share the same behaviour with \(r_{c}\). On the one hand, both \(\theta^{(0)}\) and \(r_{c}\theta^{(1)}\) are proportional to \(r_{c}^{2/3}\) for \(\theta_{\text{s}}=O(r_{c}^{2/3})\), so we expect that \[\theta^{\text{Frz}}\equiv\lim_{\theta_{\text{s}}\to 0}\theta\propto r_{c}^{2/3};\] (III.11) On the other hand, both \(a_{2}^{(0)}\) and \(r_{c}a_{2}^{(1)}\) are independent of \(r_{c}\) in the same region, so we expect that \[a_{2}^{\text{Frz}}\equiv\lim_{\theta_{\text{s}}\to 0}a_{2}=O(1),\] (III.12) independent of \(r_{c}\). The latter suggests that all the cumulants of the Sonine expansion become independent of the cooling rate, i.e. the frozen state of the system is unique. Let us now compare our analytical predictions with simulation results obtained from Direct Simulation Monte Carlo (DSMC) integration [66] of the kinetic equation that governs the dynamics of the granular gas. 
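Before turning to the DSMC data, the first-Sonine prediction itself can be obtained by direct numerical integration of Eqs. (II.10) under the linear cooling programme (III.1). The sketch below is a minimal illustration of this procedure, not the DSMC code used for the figures; the solver tolerances and the set of cooling rates are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, d = 0.9, 3                      # restitution coefficient and dimension
a2s = 16 * (1 - alpha) * (1 - 2 * alpha**2) / (
    73 + 56 * d - 24 * d * alpha - 105 * alpha + 30 * (1 - alpha) * alpha**2)   # Eq. (II.9)
B = (73 + 8 * d * (7 - 3 * alpha) + 15 * alpha * (2 * alpha * (1 - alpha) - 7)) / (
    16 * (1 - alpha) * (3 + 2 * d + 2 * alpha**2))                              # Eq. (II.12)

def rhs(t, y, rc):
    """First-Sonine equations (II.10) with the linear cooling programme (III.1)."""
    theta, a2 = y
    theta_s = max(1.0 - rc * t, 0.0)
    dtheta = theta_s**1.5 * (1 + 3 * a2s / 16) - theta**1.5 * (1 + 3 * a2 / 16)
    da2 = 2 * np.sqrt(theta) * ((1 - (theta_s / theta)**1.5) * a2 + B * (a2s - a2))
    return [dtheta, da2]

def frozen_state(rc):
    """Integrate from the initial NESS until theta_s reaches zero."""
    sol = solve_ivp(rhs, (0.0, 1.0 / rc), [1.0, a2s], args=(rc,),
                    rtol=1e-9, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]   # theta^Frz, a2^Frz

rates = np.array([0.05, 0.025, 0.01, 0.005, 0.0025, 0.001])
theta_frz = np.array([frozen_state(rc)[0] for rc in rates])
exponent = np.polyfit(np.log(rates), np.log(theta_frz), 1)[0]
print("theta^Frz:", np.round(theta_frz, 4))
print("fitted exponent (theory: 2/3):", round(exponent, 3))
```

The fitted exponent should come out close to the theoretical value 2/3, mirroring the DSMC fit discussed next.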
Unless otherwise specified, for all of the simulations of the granular gas performed, we have employed the system parameters \(\alpha=0.9\), \(d=3\), and a number of particles \(N=10^{5}\). On the left panel of Fig. 1, we plot the relaxation of the kinetic temperature when applying the cooling programme from Eq. (III.1). The final granular temperatures \(\theta^{\text{Frz}}\) at the frozen state are plotted on the right panel versus the cooling rate \(r_{c}\). They are very well fitted by the power law \(\theta^{\text{Frz}}=a\,r_{c}^{b}\) with \(b=0.666\), thus confirming the scaling given by Eq. (III.11). In Fig. 2, we show that the frozen state is indeed unique. On the left panel, the dimensionless VDFs at the frozen state corresponding to different cooling rates overlap on a universal curve. Note that, although our theoretical analysis has been carried out within the first Sonine approximation, the numerical results show that this remarkable property holds for the exact (numerical) VDF. To neatly visualise the non-Gaussian character of the frozen state, we present (i) the ratio of the VDF over the equilibrium Maxwellian in the inset of the left panel and (ii) the excess kurtosis at the frozen state \(a_{2}^{\rm Frz}\) as a function of the restitution coefficient \(\alpha\) on the right panel.[67] This also allows us to check that \(a_{2}^{\rm Frz}\)--and thus the VDF--is indeed independent of \(r_{c}\) for all inelasticities. The graph shows that \(a_{2}^{\rm Frz}\) is really far from the steady-state kurtosis \(a_{2}^{\rm s}\) but very close to the HCS value \(a_{2}^{\rm HCS}\), which suggests that the HCS has a key role in the frozen state--as further discussed in Appendix A. Figure 1: (Left) Dynamical evolution of the granular temperature \(\theta\) as a function of the bath temperature \(\theta_{\rm s}\). The different curves correspond to DSMC simulation data for the linear cooling protocol (III.1) with different rates \(r_{c}\), namely: \(r_{c}\)= 0.05 (red squares), 0.025 (green up triangles), 0.01 (orange circles), 0.005 (blue down triangles), 0.0025 (black rectangles), and 0.001 (purple diamonds). The black dashed line corresponds to the instantaneous NESS curve \(\theta=\theta_{\rm s}\). (Right) Limit values of the kinetic temperature at the frozen state as a function of the cooling rate. The plotted points have been extracted from the DSMC data on the left panel. The black dotted line corresponds to the best fit of those points to the function \(a\,r_{c}^{b}\), with \(a=0.741\) and \(b=0.666\), in excellent agreement with the theoretical prediction (III.11). Figure 2: Universality of the frozen state. (Left) VDF at the frozen state for different values of the cooling rate \(r_{c}\). The curves have been obtained from the same numerical data employed for Fig. 1, for which \(\alpha=0.9\) and \(d=3\), and thus the colour code and associated symbols are the same. All VDFs are superimposed over a unique, universal, curve independent of \(r_{c}\), in agreement with our theoretical prediction. In the inset, we show the VDF at the frozen state divided by the equilibrium Maxwellian, with the solid line corresponding to the polynomial in Eq. (II.4) within the first Sonine approximation. (Right) Excess kurtosis at the frozen state as a function of the restitution coefficient \(\alpha\). Here, for the sake of clarity, we show DSMC data corresponding to only two values of the cooling rate, \(r_{c}=0.01\) (squares) and \(r_{c}=0.001\) (circles). The numerical values of the excess kurtosis at the frozen state are compared with both the NESS value \(a_{2}^{\rm s}\) (blue dashed line) and the HCS value \(a_{2}^{\rm HCS}\) (red solid line), being very close to the latter. ### Boundary layer approach. Universality We are now concerned with the behaviour of the system for very low bath temperatures, when the system is close to its frozen state. We follow a boundary layer approach, in which we introduce scaled variables and look for a distinguished limit of the evolution equations (II.10). In this way, we find an inner expansion, valid for low temperatures, which is afterwards matched with the outer solution derived above: thereby, an (approximate) solution for all values of the bath temperature, known as a uniform solution,[65] is built. We define the scaled variables \[Y\equiv r_{c}^{-2/3}\theta,\quad X\equiv r_{c}^{-2/3}\theta_{\rm s},\] (III.13) as suggested by Eqs. (III.11) and (III.12). Interestingly, the evolution equations (II.10) become independent of the cooling rate when written in terms of \(X\) and \(Y\): \[-\frac{dY}{dX} =X^{3/2}\left(1+\frac{3}{16}a_{2}^{\rm s}\right)-Y^{3/2}\left(1+\frac{3}{16}a_{2}\right),\] (III.14a) \[-\frac{da_{2}}{dX} =2Y^{1/2}\left\{\left[1-\left(\frac{X}{Y}\right)^{3/2}\right]a_{2}+B\left(a_{2}^{\rm s}-a_{2}\right)\right\}.\] (III.14b) These equations provide us with the inner solution, which is expected to be valid for \((X,Y,a_{2})\) of the order of unity, i.e. close to the frozen state as discussed above. In order to find the inner solution, we must complement Eq. (III.14) with the boundary conditions (III.4), which are now written as \[Y(r_{c}^{-2/3})=r_{c}^{-2/3},\quad a_{2}(r_{c}^{-2/3})=a_{2}^{\rm s}.\] (III.15) Note that the boundary conditions have absorbed all the dependency of the inner solution on the cooling rate \(r_{c}\). Figure 3 shows the same numerical data of the left panel of Fig. 1, obtained from DSMC simulation, but in terms of the scaled variables \(X\) and \(Y\). It is neatly observed that all the curves for different values of the cooling rate \(r_{c}\) collapse onto a unique master curve, independent of \(r_{c}\). The only difference appears for large values of \(X\), for which the different curves start from different initial points, consistently with the boundary conditions (III.15). Since the plotted data correspond to the numerical solution of the kinetic equation, not to our perturbative approach, this suggests that the exact solution of the problem presents a universal behaviour in scaled variables. In order to understand such universal behaviour in scaled variables, we seek the solution of the inner problem at the lowest order, which we denote by \((Y_{I}(X),a_{2,I}(X))\). Therefore, we have to solve Eq. (III.14) with the approximate boundary conditions at infinity \[\lim_{X\to\infty}Y_{I}(X)=\infty,\quad\lim_{X\to\infty}a_{2,I}(X)=a_{2}^{\rm s},\] (III.16) which are obtained by considering the limit as \(r_{c}\to 0\) in Eq. (III.15). Although it is not possible to write \((Y_{I}(X),a_{2,I}(X))\) in a simple closed form, it is clear that \((Y_{I}(X),a_{2,I}(X))\) does not depend on \(r_{c}\), since the dependence thereof has vanished in Eq. (III.16). A dominant balance of Eq. (III.14) shows that \[Y_{I}(X)\sim X,\qquad a_{2,I}(X)\sim a_{2}^{\rm s},\quad X\gg 1,\] (III.17) which is consistent with the tendency of the DSMC data in Fig. 3 to the NESS curve (dashed line) for large \(X\). 
It can be shown that the above universality also holds for the uniform solution--valid over the whole interval of \(\theta_{\rm s}\), not only for low temperatures. The uniform solution is constructed as the sum of the outer and inner solutions, minus the common behaviour found in the intermediate matching region, where \(\theta_{\rm s}\ll 1\) but \(X\gg 1\).[65] That is, the uniform solution to the lowest order can be written as \[\theta_{\rm BL}(\theta_{\rm s}) =\theta_{O}(\theta_{\rm s})+r_{c}^{2/3}Y_{I}(X=r_{c}^{-2/3}\theta_{\rm s})-\theta_{c}(\theta_{\rm s}),\] (III.18) \[a_{2,\rm BL}(\theta_{\rm s}) =a_{2,O}(\theta_{\rm s})+a_{2,I}(X=r_{c}^{-2/3}\theta_{\rm s})-a_{2,c}(\theta_{\rm s}).\] (III.19) The common behaviours for the kinetic temperature and the excess kurtosis are \[\theta_{c}(\theta_{\rm s})=\theta_{\rm s},\quad a_{2,c}(\theta_{\rm s})=a_{2}^{\rm s},\] (III.20) bringing to bear Eqs. (III.10) and (III.17). Note that the common behaviour coincides with the outer solution (III.10), so the uniform solution coincides with the inner solution and its range of validity extends to all values of \(X\), \[\theta_{\rm BL}(\theta_{\rm s}) =r_{c}^{2/3}Y_{I}(X=r_{c}^{-2/3}\theta_{\rm s}),\] (III.21) \[a_{2,\rm BL}(\theta_{\rm s}) =a_{2,I}(X=r_{c}^{-2/3}\theta_{\rm s}).\] (III.22) The first equation tells us that \(Y_{\rm BL}\equiv r_{c}^{-2/3}\theta_{\rm BL}=Y_{I}\), i.e. we have the universal behaviour in Fig. 3 over the uniform solution--at the lowest order. From the boundary layer solution, the frozen values of the scaled variables are readily obtained, \[Y^{\rm Frz} \equiv\lim_{X\to 0}Y_{I}(X),\] (III.23a) \[a_{2}^{\rm Frz} \equiv\lim_{X\to 0}a_{2,I}(X).\] (III.23b) Our above argument about the independence of \(Y_{I}(X)\) on the cooling rate is immediately translated to \(\theta^{\rm Frz}=r_{c}^{2/3}Y^{\rm Frz}\), which means that \(\theta^{\rm Frz}\) follows the power law behaviour \(\theta^{\rm Frz}\propto r_{c}^{2/3}\) that we have already checked on the right panel of Fig. 1. Also, the independence of \(a_{2,I}(X)\) on the cooling rate, and thus the independence of \(a_{2}^{\rm Frz}\) on \(r_{c}\), has already been checked on the right panel of Fig. 2. Figure 3: Plot of the dynamical evolution of the scaled granular temperature \(Y\) as a function of \(X\) using the cooling protocol (III.1) for different cooling rates. The colour codes and symbols for the DSMC data are the same as those for the data depicted in Fig. 1. ## IV Hysteresis cycles Now we turn our attention to a reheating protocol from the frozen state with rate \(r_{h}\), \(d\theta_{\rm s}/dt=+r_{h}\). First, we consider the paradigmatic case \(r_{h}=r_{c}\). Even in this simple case, we will show that the system does not retrace the cooling curve, but crosses the NESS line \(\theta=\theta_{\rm s}\) and afterwards tends thereto from below. This is similar to the hysteresis cycle displayed by glassy systems in temperature cycles (cooling followed by reheating). Second, we consider the more general case \(r_{h}\neq r_{c}\); in particular, we are interested in analysing the reheating with a given rate \(r_{h}\) after the system has been cooled down to different frozen states corresponding to different values of \(r_{c}\). 
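The same first-Sonine equations can be driven through a full cooling-and-reheating cycle to anticipate the behaviour discussed below. The following sketch is again an illustration rather than the DSMC code: it integrates Eqs. (II.10) with the bath temperature ramped down to zero and back up at the same rate, and locates the maximum of the apparent heat capacity \(dY/dX\) on the reheating branch by finite differences. The rate and the sampling grid are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, d, r = 0.9, 3, 0.01             # restitution, dimension, rate r = r_c = r_h
a2s = 16 * (1 - alpha) * (1 - 2 * alpha**2) / (
    73 + 56 * d - 24 * d * alpha - 105 * alpha + 30 * (1 - alpha) * alpha**2)
B = (73 + 8 * d * (7 - 3 * alpha) + 15 * alpha * (2 * alpha * (1 - alpha) - 7)) / (
    16 * (1 - alpha) * (3 + 2 * d + 2 * alpha**2))

def bath(t):
    """Bath temperature: linear cooling to zero at t = 1/r, then reheating at rate r."""
    return abs(1.0 - r * t)

def rhs(t, y):
    """First-Sonine equations (II.10) driven by the time-dependent bath temperature."""
    theta, a2 = y
    ths = bath(t)
    dth = ths**1.5 * (1 + 3 * a2s / 16) - theta**1.5 * (1 + 3 * a2 / 16)
    da2 = 2 * np.sqrt(theta) * ((1 - (ths / theta)**1.5) * a2 + B * (a2s - a2))
    return [dth, da2]

t_end = 2.0 / r
sol = solve_ivp(rhs, (0.0, t_end), [1.0, a2s],
                t_eval=np.linspace(0.0, t_end, 4001), rtol=1e-9, atol=1e-12)

theta, ths = sol.y[0], bath(sol.t)
heating = sol.t > 1.0 / r
Y_frz = theta[~heating][-1] / r**(2.0 / 3.0)

# Apparent heat capacity dY/dX = dtheta/dtheta_s along the reheating branch.
dY_dX = np.gradient(theta[heating], ths[heating])
X_g = ths[heating][np.argmax(dY_dX)] / r**(2.0 / 3.0)
print("scaled frozen temperature Y^Frz ~", round(Y_frz, 3))
print("heat-capacity maximum on reheating at X_g ~", round(X_g, 2))
```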
### Universal hysteresis cycle with \(r_{c}=r_{h}\) Similarly to the cooling programme, we may introduce scaled variables as \[Y\equiv r_{h}^{-2/3}\theta,\quad X\equiv r_{h}^{-2/3}\theta_{\rm s}.\] (IV.1) In terms thereof, the evolution equations become independent of the heating rate, \[\frac{dY}{dX} =X^{3/2}\left(1+\frac{3}{16}a_{2}^{\rm s}\right)-Y^{3/2}\left(1+\frac{3}{16}a_{2}\right),\] (IV.2a) \[\frac{da_{2}}{dX} =2\sqrt{Y}\left[\left(1-\left(\frac{X}{Y}\right)^{3/2}\right)a_{2}+B\left(a_{2}^{\rm s}-a_{2}\right)\right].\] (IV.2b) The above system must be complemented with the new boundary conditions \[Y(0)=Y^{\rm Frz},\quad a_{2}(0)=a_{2}^{\rm Frz},\] (IV.3) which correspond to the frozen state reached in the previously applied cooling programme, given by Eq. (III.23) to the lowest order--recall that \(r_{c}=r_{h}\). A completely similar analysis to that carried out for the cooling programme shows that the solution to Eq. (IV.2) gives the lowest order approximation to the behaviour of the kinetic temperature and the excess kurtosis in the heating programme. Since the rate \(r_{c}=r_{h}\) does not appear in these equations, the reheating behaviour is also independent of the rate: there is a unique, universal, hysteresis cycle for all rates. The above theoretical prediction is checked in Fig. 4. On the left panel, the hysteresis cycle of the kinetic temperature is shown. DSMC simulation data (symbols) are compared with the boundary layer solution (blue lines) of Eq. (IV.2), for different values of the cooling/heating rate \(r=r_{c}=r_{h}\). The independence of the hysteresis cycle on \(r\) is clearly observed, and the boundary layer solution captures very well the numerical results throughout the whole cycle. Remarkably, the heating curve crosses the NESS line \(\theta=\theta_{\rm s}\) (black dashed line) and tends thereto from below--this is further analysed in Sec. IV.2. On the right panel, we display the apparent "heat capacity" \(d\theta/d\theta_{\rm s}=dY/dX\) over the thermal cycle. This heat capacity is non-monotonic in the heating process, with a marked maximum at a certain value of \(\theta_{\rm s}\) (or \(X\)) that can be employed to define the glass transition temperature \(\theta_{\rm s,g}\) (or \(X_{g}\)). [25; 26; 68; 3; 23] Figure 4: Hysteresis cycle upon reheating in the granular gas. (Left) Kinetic temperature as a function of the bath temperature, when the latter is first cooled down with rate \(r_{c}\) and later reheated from the frozen state with rate \(r_{h}=r_{c}=r\). Specifically, we present results for \(r=0.01\) (red squares) and \(r=0.001\) (purple diamonds). Symbols are simulation results of the kinetic equation, while the blue solid curves correspond to the numerical integration of Eqs. (III.14) and (IV.2). The black dashed straight line corresponds to the NESS curve \(Y=X\). (Right) Associated apparent heat capacity \(dY/dX\). The red (blue) line corresponds to the heating (cooling) programme, specifically to the numerical integration of Eqs. (III.14) and (IV.2). The purple vertical line marks the bath temperature \(X_{\rm g}\) at which the heat capacity reaches its maximum in the reheating programme. ### Normal heating curve In order to understand the hysteretic behaviour, a perturbative approach to the heating process can be carried out by expanding the granular temperature \(\theta\) in powers of \(r_{h}\), similarly to what we did for the cooling process. 
By simply changing \(r_{c}\leftrightarrow-r_{h}\), we obtain the perturbative expressions \[\theta =\theta_{\rm s}-\frac{2}{3}\frac{r_{h}}{\theta_{\rm s}^{1/2}} \left[1+\frac{3}{16}\;a_{2}^{\rm s}\left(1+\frac{1}{B}\right)\right]^{-1}+O(r _{h}^{2}),\] (IV.4a) \[a_{2} =a_{2}^{\rm s}+\frac{r_{h}\;a_{2}^{\rm s}}{B\;\theta_{\rm s}^{3/2} }\left[1+\frac{3}{16}\;a_{2}^{\rm s}\left(1+\frac{1}{B}\right)\right]^{-1}+O(r _{h}^{2}).\] (IV.4b) These perturbative expressions are expected to be valid for not too low temperatures, i.e. over the outer layer--employing the terminology of boundary layer theory. Note that they depend on the heating programme \(r_{h}\), but not on the previously applied cooling programme with cooling rate \(r_{c}\). In other words, if we start the heating process from different initial frozen temperatures \(\theta^{\rm{Frz}}=Y^{\rm{Frz}}r_{c}^{2/3}\) corresponding to different values of \(r_{c}\) but reheat with a common rate \(r_{h}\), we expect to approach the behaviour in Eq. (IV.4) once the system reaches the outer layer. That is the behaviour depicted in Fig. 5: despite having different cooling programmes, all the simulation results tend to reach a universal curve for high enough values of the bath temperature. The above behaviour is similar to that found in simple systems described by master equations, in which it can be analytically proved that there exists a universal _normal_ curve in the heating case that is the global attractor of the dynamics. [68; 69; 70; 71; 72] In this context, the expressions in Eq. (IV.4) may be considered as perturbative expansions of a similar normal curve in the granular gas. Equation (IV.4a) explains why the kinetic temperature overshoots the NESS curve \(\theta=\theta_{\rm s}\) in reheating, which stems from the normal curve lying below the NESS curve--whereas the cooling curves always lie above the NESS curve, as illustrated by Fig. 1. ## V Molecular fluid with non-linear drag We now focus our attention to a second relevant physical system: a molecular fluid with non-linear drag. [73; 74; 75; 37; 38; 37] The considered model arises when analysing an ensemble of Brownian particles of mass \(m\) immersed in an isotropic and uniform background fluid, [34; 35] the particles of which have mass \(m_{\rm{bf}}\). In the \(m_{\rm{bf}}/m\to 0\) limit--the so-called Rayleigh limit, the drag coefficient \(\zeta\) becomes velocity independent and thus the drag force is linear. However, in real physical scenarios we have that \(m_{\rm{bf}}/m\neq 0\), and it is thus relevant to consider the corrections to the Rayleigh limit. Specifically, by introducing the first order corrections thereto, i.e. by retaining only linear terms in \(m_{\rm{bf}}/m\), the drag coefficient is found to be quadratic on the velocities. [34; 35; 36] Interestingly, it has recently been shown that this model describes a mixture of ultracold Cs and Rb atoms. [36] Let us consider a system of \(d\)-dimensional hard spheres of mass \(m\), diameter \(\sigma\), and density \(n\) immersed in a background fluid at temperature \(T_{\rm{s}}\). 
In the regime just explained above, the Brownian particles are subjected to a non-linear drag force of the form [34; 35; 36] \[\mathbf{F}=-m\,\zeta(v)\mathbf{v},\] (V.1) where \(\mathbf{v}\) is the particle velocity, and \[\zeta(v)=\zeta_{0}\Big{(}1+\gamma\frac{mv^{2}}{k_{B}T_{\rm s}}\Big{)}\] (V.2) is a non-linear drag coefficient, with \(\gamma\) being a dimensionless parameter that measures the degree of non-linearity of the drag force and \(\zeta_{0}\) the zero-velocity limit of the drag coefficient. The latter depends on the bath temperature \(T_{\rm s}\); for hard spheres, it is found that \(\zeta_{0}\propto T_{\rm s}^{1/2}\)--see e.g. Refs. [36] and [37] for the complete expression. The dependence of \(\zeta_{0}\) on \(T_{\rm s}\) is relevant here because the bath temperature depends on time in cooling/heating processes. Figure 5: Hysteresis cycles for reheating with rate \(r_{h}\) from the frozen states corresponding to different cooling rates \(r_{c}\). Reheating curves for the kinetic temperature correspond to \(r_{h}=0.01\), and the different cooling rates employed are: (colour, \(r_{c}\)) = (red, 0.05), (orange, 0.01), (blue, 0.005) and (purple, 0.001). Symbols correspond to DSMC simulation data, whereas the blue solid curve corresponds to the perturbative expression for the normal curve in Eq. (IV.4). Similarly to the granular gas, the system may be accurately described by the one-particle VDF \(f(\mathbf{v},t)\) if sufficiently diluted. In this case, the dynamical evolution of the VDF is governed by the Fokker-Planck equation (FPE) \[\partial_{t}f(\mathbf{v},t)-\frac{\partial}{\partial\mathbf{v}}\cdot\left[\zeta(v)\mathbf{v}+\frac{\xi^{2}(v)}{2}\frac{\partial}{\partial\mathbf{v}}\right]f(\mathbf{v},t)=0,\] (V.3) where \(m^{2}\xi^{2}(v)\) is the variance of a stochastic white noise force. The coefficients \(\xi^{2}(v)\) and \(\zeta(v)\) are related by means of the fluctuation-dissipation relation \[\xi^{2}(v)=\frac{2k_{B}T_{\rm s}}{m}\zeta(v),\] (V.4) which ensures that the equilibrium Maxwellian VDF \[f_{\rm s}(\mathbf{v})=n\left(\frac{m}{2\pi k_{B}T_{\rm s}}\right)^{\frac{d}{2}}e^{-\frac{mv^{2}}{2k_{B}T_{\rm s}}},\] (V.5) constitutes the unique stationary solution of the FPE (V.3). The velocity dependence of the drag coefficient implies that we have multiplicative noise in this problem. [76; 77] By employing the Itô interpretation of stochastic integration [78; 79], which is the most convenient one for numerical simulations, the FPE is equivalent to the following Langevin equation: \[\dot{\mathbf{v}}(t)=-\zeta_{\rm eff}(v)\,\mathbf{v}(t)+\xi(v)\,\mathbf{\eta}(t),\] (V.6) where \[\zeta_{\rm eff}(v)=\zeta_{0}\Big{(}1-2\gamma+\gamma\frac{mv^{2}}{k_{B}T_{\rm s}}\Big{)}\] (V.7) constitutes an effective drag coefficient, while \(\mathbf{\eta}(t)\) is a Gaussian white noise of zero average, \(\langle\mathbf{\eta}(t)\rangle=0\), and correlations \(\langle\mathbf{\eta}_{i}(t)\,\mathbf{\eta}_{j}(t^{\prime})\rangle=\delta_{i,j}\,\delta(t-t^{\prime})\). The kinetic temperature is again defined as in Eq. (II.2) for the granular gas, but understanding \(f(\mathbf{v},t)\) as the solution of the FPE. Inserting (II.2) into (V.3) leads to the following evolution equation for the temperature, \[\dot{T}=\zeta_{0}\bigg{\{}2(T_{\rm s}-T)\left[1+\gamma(d+2)\frac{T}{T_{\rm s}}\right]-2\gamma(d+2)\frac{T^{2}}{T_{\rm s}}a_{2}\bigg{\}},\] (V.8) where \(a_{2}\) corresponds to the excess kurtosis, previously introduced in Eq. (II.5) when studying the granular gas. 
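Equations (V.6) and (V.7) are straightforward to integrate with an Euler-Maruyama scheme in the Itô convention. The sketch below is a minimal illustration in reduced units (m = k_B = 1, ζ0(T_i) = 1) of a single quench to a fixed bath temperature, checking that the kinetic temperature relaxes to T_s; the particle number, time step, and temperatures are illustrative values, not those used for the figures of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduced units: m = k_B = 1 and zeta_0(T_i) = 1; model parameters as in the text.
d, gamma = 3, 0.1
N = 10_000                       # number of independent Brownian particles
T_i, T_s = 1.0, 0.1              # initial and bath temperatures
dt, n_steps = 1e-3, 20_000

zeta0 = np.sqrt(T_s / T_i)       # zeta_0(T_s) in these units, using zeta_0 ~ T_s^{1/2}
v = rng.normal(0.0, np.sqrt(T_i), size=(N, d))   # equilibrium start at T_i

for _ in range(n_steps):
    v2 = np.sum(v**2, axis=1, keepdims=True)
    zeta_eff = zeta0 * (1.0 - 2.0 * gamma + gamma * v2 / T_s)   # Eq. (V.7)
    xi2 = 2.0 * T_s * zeta0 * (1.0 + gamma * v2 / T_s)          # Eqs. (V.2) and (V.4)
    noise = rng.normal(size=(N, d))
    v = v - zeta_eff * v * dt + np.sqrt(xi2 * dt) * noise       # Ito-Euler step of Eq. (V.6)

T_kin = np.mean(np.sum(v**2, axis=1)) / d    # Eq. (II.2) with m = k_B = 1
print("kinetic temperature after relaxation:", round(T_kin, 3), "(bath:", T_s, ")")
```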
For non-linear drag, \(\gamma\neq 0\), the evolution of the temperature is coupled to that of the excess kurtosis and, thus, we need to consider the evolution equation for the latter too. In turn, the evolution equation for the excess kurtosis involves sixth-degree moments, and in general there emerges an infinite hierarchy of equations for the moments. Under the first Sonine approximation, we have the evolution equations [37; 38] \[\dot{\theta}=\theta_{\rm s}^{1/2}\bigg{[}2(\theta_{\rm s}-\theta)+2\gamma(d+2)\,\theta-2\,\gamma(d+2)(1+a_{2})\frac{\theta^{2}}{\theta_{\rm s}}\bigg{]},\] (V.9a) \[\dot{a}_{2}=\theta_{\rm s}^{1/2}\bigg{\{}8\gamma\left(1-\frac{\theta}{\theta_{\rm s}}\right)-\left[\frac{4\theta_{\rm s}}{\theta}-8\gamma+4\gamma(d+8)\frac{\theta}{\theta_{\rm s}}\right]a_{2}\bigg{\}},\] (V.9b) where we have introduced the dimensionless variables \[\theta\equiv\frac{T}{T_{i}},\quad\theta_{\rm s}\equiv\frac{T_{\rm s}}{T_{i}},\quad t^{*}\equiv\zeta_{0}(T_{i})\,t,\] (V.10) with \(T_{i}\equiv T(t=0)\) being the initial temperature. We have also taken into account that \(\zeta_{0}(T_{\rm s})=\zeta_{0}(T_{i})\theta_{\rm s}^{1/2}\). In previous work, [38; 42] we have shown that the non-linear fluid approaches a non-equilibrium state, termed the LLNES (long-lived non-equilibrium state), over a wide intermediate timescale, when instantaneously quenched to low enough values of the bath temperature, i.e. \(T_{i}/T_{\rm s}\gg 1\). The VDF at the LLNES is given by a delta peak; in terms of the scaled variables in Eq. (II.3), it reads \[\phi_{\rm L}(\mathbf{c})=\Omega_{d}^{-1}\left(\frac{2}{d}\right)^{\frac{d-1}{2}}\delta\left(c-\sqrt{\frac{d}{2}}\right),\] (V.11) with \(\Omega_{d}\) being the \(d\)-dimensional solid angle. [42] The exact value of the excess kurtosis at the LLNES, which will be useful below, is \[a_{2}^{\rm L}=-\frac{2}{d+2}.\] (V.12) It is worth remarking that the VDF for the LLNES, and thus \(a_{2}^{\rm L}\), does not depend on the non-linearity parameter \(\gamma\). [42] The LLNES corresponds to the extreme scenario that comes about when the system is instantaneously quenched to a very low temperature. In this case, for a system relaxing from equilibrium at \(T_{i}\) to equilibrium at \(T_{\rm s}\ll T_{i}\), the system first reaches the LLNES and afterwards tends to equilibrium from it. Note the strong similarity with the HCS for granular gases, which also appears when the intensity of the stochastic thermostat is instantaneously quenched to a very low value. In such a protocol, the granular gas first approaches the HCS and afterwards tends to the stationary state imposed by the stochastic thermostat. Thus, it is worth investigating the role played by the LLNES in the possible emergence of a laboratory glass transition in fluids with non-linear drag. ## VI Glassy behaviour of the non-linear molecular fluid Now, in order to investigate a possible glass transition in the molecular fluid, we decrease the bath temperature following the same cooling programme as in Eq. (III.1) for the granular gas. In fact, as we follow the same perturbative approach in the cooling rate, we leave the mathematical details for Appendix B. 
Up to order \(O(r_{c})\), the regular perturbative solution is given by \[\theta =\theta_{\rm s}+\frac{r_{c}}{2\theta_{\rm s}^{1/2}}\frac{1+\gamma(d+6)}{\left[1+\gamma(d+4)\right]^{2}-2\gamma^{2}(d+4)},\] (VI.1a) \[a_{2} =-\frac{r_{c}}{\theta_{\rm s}^{3/2}}\frac{\gamma}{\left[1+\gamma(d+4)\right]^{2}-2\gamma^{2}(d+4)}.\] (VI.1b) Thus, we have that \(\theta-\theta_{\rm s}\propto r_{c}/\theta_{\rm s}^{1/2}\) and \(a_{2}\propto r_{c}/\theta_{\rm s}^{3/2}\). Our regular perturbative approach fails when the \(O(r_{c}^{0})\) and the \(O(r_{c}^{1})\) terms become comparable, i.e. again when \(\theta_{\rm s}=O(r_{c}^{2/3})\), which implies that \(\theta=O(r_{c}^{2/3})\) and \(a_{2}=O(1)\). Let us point out that, regardless of the intrinsic differences between the molecular fluid and granular gas systems, they both lead to the same scaling for both the kinetic temperature and the excess kurtosis. The above discussion entails the necessity of introducing again a boundary layer approach. We define scaled variables, analogous to those for the granular gas in Eq. (III.13), \(Y\equiv r_{c}^{-2/3}\theta\) and \(X\equiv r_{c}^{-2/3}\theta_{\rm s}\). In terms of the scaled variables, the evolution equations (V.9) become independent of \(r_{c}\), \[-\frac{dY}{dX}= X^{1/2}\left\{2\left(X-Y\right)\left[1+\gamma(d+2)\frac{Y}{X}\right]-2\gamma(d+2)\frac{Y^{2}}{X}a_{2}\right\},\] (VI.2a) \[-\frac{da_{2}}{dX}= X^{1/2}\left\{8\gamma\left(1-\frac{Y}{X}\right)-\left[\frac{4X}{Y}-8\gamma+4\gamma(d+8)\frac{Y}{X}\right]a_{2}\right\},\] (VI.2b) as all the dependence on \(r_{c}\) is absorbed in the boundary conditions \[Y(r_{c}^{-2/3})=r_{c}^{-2/3},\quad a_{2}(r_{c}^{-2/3})=0.\] (VI.3) The resemblance between this picture and that found in our previous study of the granular gas is clear. Therefore, in order to avoid reiteration, we focus on the main aspects of the glassy behaviour in the molecular fluid. As will be seen, the analogy with the behaviour found in the granular gas is almost complete. The lowest order solution for the cooling protocol would again be obtained by solving Eqs. (VI.2) with the boundary conditions \(\lim_{X\to\infty}Y(X)=\infty\), \(\lim_{X\to\infty}a_{2}(X)=0\), which is completely independent of \(r_{c}\). At the frozen state we thus have \[Y^{\rm Frz}\equiv\lim_{X\to 0}Y(X),\quad a_{2}^{\rm Frz}\equiv\lim_{X\to 0}a_{2}(X),\] (VI.4) which are expected to be independent of \(r_{c}\). We check this expectation in Table 1, in which we compare the values of \(Y^{\rm Frz}\) and \(a_{2}^{\rm Frz}\) obtained from numerical simulation of the Langevin equation (V.6), for different values of \(r_{c}\), with our theoretical prediction. [80] The agreement is excellent for the kinetic temperature, and fair for the excess kurtosis. This was to be expected within the first Sonine approximation, since \(a_{2}^{\rm Frz}\) is quite large for the non-linear fluid. Moreover, \(a_{2}^{\rm Frz}\) is not so close to its value at the LLNES, \(a_{2}^{\rm L}=-0.4\) for \(d=3\) as predicted by Eq. (V.12), as \(a_{2}^{\rm Frz}\) was to its HCS value in the granular gas. See Appendix A for a more detailed discussion on this point. The independence of \(a_{2}^{\rm Frz}\) on the cooling rate suggests that this property should also hold for the complete VDF of the non-linear fluid--as was the case for the granular gas. 
We check this property by plotting the scaled VDF for the non-linear fluid at the frozen state, obtained from the numerical integration of the Langevin equation (V.6), in Fig. 6. The universality of the VDF at the frozen state is clearly observed. The largeness of \(a_{2}^{\rm Fz}\) entails that the deviation from the Maxwellian equilibrium distribution is also large. For reference, the position of the delta peak corresponding to the LLNES is also plotted. From the frozen state, we may reheat the system with the same rate \(r_{h}=r_{c}\). Once more, scaled variables are introduced as \(Y\equiv r_{h}^{-2/3}\theta\), \(X\equiv r_{h}^{-2/3}\theta_{\rm s}\), and the evolution equations become independent of the heating rate \[\frac{dY}{dX}= X^{1/2}\left\{2\left(X-Y\right)\left[1+\gamma(d+2)\frac{Y}{X}\right]\right.\] \[\left.-2\gamma(d+2)\frac{Y^{2}}{X}a_{2}\right\},\] (VI.5a) \[\frac{da_{2}}{dX}= X^{1/2}\left\{8\gamma\left(1-\frac{Y}{X}\right)\right.\] \[\left.-\left[\frac{4X}{Y}-8\gamma+4\gamma(d+8)\frac{Y}{X}\right] a_{2}\right\},\] (VI.5b) which differ from the cooling evolution equations (VI.2) only in the sign of the left hand side (lhs). Again, this system has to be solved with the boundary conditions \(Y(0)=Y^{\rm Fz}\), \(a_{2}(0)=a_{2}^{\rm Fz}\), which correspond to the frozen state from the previously applied cooling programme. \begin{table} \begin{tabular}{||c|c|c|} \hline & \(Y^{\rm Fz}\) & \(a_{2}^{\rm Fz}\) \\ \hline Boundary layer & 0.397 & -0.154 \\ \hline Sim. (\(r_{c}=0.05\)) & 0.402 & -0.146 \\ \hline Sim. (\(r_{c}=0.01\)) & 0.403 & -0.147 \\ \hline Sim. (\(r_{c}=0.005\)) & 0.403 & -0.144 \\ \hline Sim. (\(r_{c}=0.001\)) & 0.404 & -0.148 \\ \hline \end{tabular} \end{table} Table 1: Comparison between the numerical (simulation) and theoretical (boundary layer) values of the scaled kinetic temperature and the excess kurtosis at the frozen state. Specifically, we have considered a three-dimensional non-linear fluid with non-linearity parameter \(\gamma=0.1\). Figure 6: Plot of the dimensionless VDF at the frozen state for the non-linear fluid. Symbols correspond to the numerical integration of the Langevin equation for \(N=10^{5}\) stochastic trajectories for different cooling rates: \(r_{c}=0.005\) (purple diamonds), 0.01 (blue triangles), 0.05 (orange circles) and 0.1 (red squares). The black dashed curve corresponds to the equilibrium Maxwellian, whereas the vertical line marks the position of the Dirac-delta peak at the LLNES, as given by Eq. (V.11). Other parameters are \(\gamma=0.1\), \(d=3\). Figure 7 is the transposition of Fig. 4 to the case of the nonlinear molecular fluid. Its left panel shows both the numerical simulations of the Langevin equation (red symbols) and the boundary layer solution (blue lines) for a full hysteresis cycle. Similarly to the granular gas case, our boundary layer solution captures very well the simulation data. On the right panel, the behaviour of the associated apparent heat capacity of the molecular fluid, \(d\theta/d\theta_{\rm s}=dY/dX\), is displayed. In the reheating curve, the typical maximum that may be used to define a glass transition temperature is neatly observed. Interestingly, in the cooling curve, an anomalous behaviour emerges: the apparent heat capacity increases instead of going to a constant. This anomalous behaviour is better discerned in the inset--which shows a zoom of the very low temperatures region.
This stems from the singular behaviour for small \(X\) of the dynamic equation (VI.2a) for \(Y\) in the cooling protocol: therefrom, one has that \[\frac{dY}{dX}\sim 2\gamma(d+2)\frac{(Y^{\rm Frz})^{2}}{X^{1/2}}(1+a_{2}^{\rm Frz}),\quad X\ll 1,\] (VI.6) which diverges as \(X^{-1/2}\). This has to be contrasted with the behaviour for the granular gas: from Eq. (III.14a), one has that in the cooling protocol \[\frac{dY}{dX}\sim(Y^{\rm Frz})^{3/2}\left(1+\frac{3}{16}a_{2}^{\rm Frz}\right),\quad X\ll 1,\] (VI.7) which goes to a constant--consistently with the behaviour reported in Fig. 4. Finally, our molecular fluid also presents a universal curve when reheated from different frozen states. A regular perturbation theory, once more analogous to that carried out before for the granular gas, gives \[\theta =\theta_{\rm s}-\frac{r_{h}}{2\theta_{\rm s}^{1/2}}\frac{1+\gamma(d+6)}{\left[1+\gamma(d+4)\right]^{2}-2\gamma^{2}(d+4)},\] (VI.8a) \[a_{2} =+\frac{r_{h}}{\theta_{\rm s}^{3/2}}\frac{\gamma}{\left[1+\gamma(d+4)\right]^{2}-2\gamma^{2}(d+4)},\] (VI.8b) neglecting \(O(r_{h}^{2})\) terms. These expressions are obtained from Eqs. (VI.1) by exchanging \(r_{c}\leftrightarrow-r_{h}\). They are valid for the return to the equilibrium curve, when the system is close enough thereto--i.e. for high enough temperatures; more specifically, when \(\theta_{\rm s}\gtrsim r_{h}^{2/3}\). Therefore, if the system is reheated from different initial frozen states with kinetic temperatures \(\theta^{\rm Frz}=Y^{\rm Frz}r_{c}^{2/3}\), obtained from previously applied cooling programmes with different rates \(r_{c}\), we expect the kinetic temperature to tend towards Eq. (VI.8a) as \(\theta_{\rm s}\) increases. This entails the behaviour shown in Fig. 8: all the heating curves, independently of the previous cooling rate \(r_{c}\), overshoot the equilibrium curve and tend towards a unique curve when being reheated with rate \(r_{h}\). Figure 7: Hysteresis cycle in the non-linear molecular fluid. Both panels are the transposition to the case of the non-linear molecular fluid of those in Fig. 4 for the granular gas, with the same values of the cooling and heating rate \(r\) in dimensionless variables. For the molecular fluid, the numerical data corresponds to the simulation of the Langevin equation (V.6), while the theoretical curves correspond to the numerical integration of Eqs. (VI.2) and (VI.5). Other parameters employed are \(\gamma=0.1\) and \(d=3\). On the right panel, there is an additional inset to show the anomalous behaviour of the apparent heat capacity in the cooling process, which is discussed in the text. Figure 8: Heating curves of the kinetic temperature with heating rate \(r_{h}=0.05\), for systems previously cooled towards their frozen values with different cooling rates: (colour, \(r_{c}\)) = (red, 0.05), (orange, 0.01), (blue, 0.005), and (purple, 0.001). Symbols are simulation results while the blue solid curve corresponds to the perturbative solution from Eq. (VI.8). Other parameters are \(\gamma=0.1\) and \(d=3\). ## VII Conclusions We have investigated the emergence of a laboratory glass transition in two basic fluid models: a granular gas of smooth hard spheres and a molecular fluid with non-linear drag force. The two systems are very different from a fundamental point of view.
On the one hand, collisions in the granular gas are inelastic, and thus its VDF is always non-Gaussian and the system is intrinsically out-of-equilibrium, tending eventually to a NESS if an energy injection mechanism is introduced. On the other hand, collisions are elastic in the molecular fluid and the system approaches equilibrium, with a Maxwellian VDF, in the long time limit. In both cases, our analysis has been carried out within the first Sonine approximation of the relevant evolution equation for the VDF: the inelastic Boltzmann equation for the granular gas, and the Fokker-Planck equation for the molecular fluid with non-linear drag. Therein, the evolution equation of the kinetic temperature--basically, the average kinetic energy--is found to be coupled with that of the excess kurtosis. The evolution equation of the excess kurtosis is in turn coupled with higher-order cumulants, but these are neglected in the first Sonine approximation, since they are assumed to be small. Despite the profound differences between granular gases and molecular fluids, both systems share some striking similarities in their dynamical behaviour. Specifically, we have studied the evolution of the kinetic temperature when the bath temperature is decreased by applying a linear cooling programme. We have approached the problem by employing a perturbation theory that assumes that the cooling rate is a small parameter. Interestingly, both for the granular gas and the molecular fluid, the boundary layer approach leads to the same scaling behaviour of the kinetic temperature, predicting that it departs from equilibrium at low bath temperatures and freezes at a value that scales as \(r_{c}^{2/3}\). This theoretical prediction has been confirmed by our numerical results: DSMC simulations of the inelastic Boltzmann equation for the granular gas and numerical integration of the non-linear Langevin equation for the molecular fluid. A key point of our approach is that the evolution equations become independent of the cooling rate when they are written in terms of scaled variables, well-suited for our boundary layer treatment of the problem. This leads to the emergence of universality of the frozen state, in the sense that the VDF--again, both for the granular gas and the non-linear molecular fluid--at the frozen state is independent of the cooling rate. Moreover, when the system is reheated from this frozen state with the same rate, this universality extends to the whole dynamical evolution. This entails that the observed hysteresis when the systems are subjected to a thermal cycle--first cooling, followed by reheating--is also universal, independent of the rate of variation of the bath temperature. Once more, this theoretical prediction is confirmed by numerical simulations of both systems, and an excellent agreement between the numerical and the theoretical curves has been found. Another interesting feature of both systems is their tendency to a unique normal curve upon reheating, independent of the previous cooling programme. This behaviour has been theoretically predicted for Markovian systems obeying master equations,[69] and observed in a variety of simple models for glasses and dense granular systems.[70; 71; 68; 72; 69] It is this tendency to approach the normal curve that explains the overshoot of the NESS--for the granular gas--or equilibrium--for the molecular fluid--curve of the kinetic temperature upon reheating, since the normal curve lies below them whereas the cooling curves lie above them.
In the granular gas, the values of the excess kurtosis at the frozen state are very close to that of the HCS: this hints at the frozen state being strongly related with the HCS. In the non-linear molecular fluid, the value of the excess kurtosis are further from that at the LLNES, so the relation between the frozen state and the LLNES is less clear. Still, it seems that both the HCS for the granular gas and the LLNES for the non-linear fluid play the role of a reference state for the cooling protocol--a first step in this direction is provided in Appendix A, although this point certainly deserves further investigation. The universality of the frozen state, in the sense of its independence of \(r_{c}\) in scaled variables, is an appealing feature of the laboratory glass transition found in this work--both for the smooth granular gas and the molecular fluid with (quadratic) non-linear drag. The possible extension of this property to other systems, for example rough granular fluids[81; 82; 83], molecular fluids with more complex non-linearities,[33; 84; 75; 76; 84] or binary mixtures[85; 86; 87; 88]is an interesting prospect for future work. ###### Acknowledgements. We acknowledge financial support from Grant PID2021-122588NB-I00 funded by MCIN/AEI/10.13039/501100011033/ and by "ERDF A way of making Europe". We also acknowledge financial support from Grant ProyExcel_00796 funded by Junta de Andalucia's PAIDI 2020 programme. A. Patron acknowledges support from the FPU programme through Grant FPU2019-4110. ## Data Availability The Fortran codes employed for generating the data that support the findings of this study, together with the Mathematica notebooks employed for producing the figures presented in the paper, are openly available in the GitHub page of University of Sevilla's FINE research group. ## Appendix A Glass transition for different cooling programmes Throughout this work, we have employed linear cooling programmes in order to study the emergence of glassy behaviour in both molecular fluids and granular gases. Let us now consider the following more general family of cooling protocols, \[\frac{d\theta_{\rm s}}{dt}=-r_{\rm c}\theta_{\rm s}^{k}, \tag{10}\] with \(k\) being a real number. Notice that the \(k=0\) case reduces to the already studied linear cooling programme. We still consider that the cooling is slow, in the sense that \(r_{\rm c}\ll 1\). Following the same regular perturbative procedure as the ones employed for both the granular gas and the non-linear fluid, we would obtain that the solution to the lowest order corresponds again to the stationary solutions \(\theta^{(0)}=\theta_{\rm s}\), \(a_{2}^{(0)}=a_{2}^{\rm s}\). The first-order \(O(r_{\rm c})\) corrections would be provided by the equations \[-\theta_{\rm s}^{k} =\theta_{\rm s}^{1/2}\left\{c_{1}\theta^{(1)}+c_{2}\theta_{\rm s} a_{2}^{(1)}\right\}, \tag{11a}\] \[0 =c_{3}\frac{\theta^{(1)}}{\theta_{\rm s}^{1/2}}+c_{4}\theta_{\rm s }^{1/2}a_{2}^{(1)}, \tag{11b}\] with \(c_{i},\ i=1,..,4\) being constants that depend on the parameters of the specific system of concern. These equations entail the scalings \[\theta^{(1)}\propto\theta_{\rm s}^{k-\frac{1}{2}},\quad a_{2}^{(1)}\propto \theta_{\rm s}^{k-\frac{3}{2}}. \tag{12}\] which imply that \(\theta^{(1)}\ll\theta^{(0)}=\theta_{\rm s}\), \(a_{2}^{(1)}\ll a_{2}^{(0)}=a_{2}^{\rm s}\) when \(k>k_{\rm crit}=3/2\). 
Therefore, for \(k>k_{\rm crit}\) we would not see a laboratory glass transition in either the granular gas or the non-linear fluid: the cooling is so slow for \(k>k_{\rm crit}\) that both systems remain basically on the stationary curve \(\{\theta=\theta_{\rm s},a_{2}=a_{2}^{\rm s}\}\) for all bath temperatures.[89] Let us now consider the case \(k\leq k_{\rm crit}\). In this case, the regular perturbative approach breaks down for low enough bath temperatures, which marks the onset of the laboratory glass transition. Our regular perturbative approach ceases to be valid when the \(O(1)\) terms become comparable with the \(O(r_{\rm c})\) ones, thus implying \[\theta_{\rm s}=O\left(r_{\rm c}^{\frac{2}{3-2k}}\right). \tag{13}\] Consistently with our discussion above, this crossover value of \(\theta_{\rm s}\) does not exist--it diverges--for \(k>k_{\rm crit}=3/2\). Equation (13) entails that we expect that the kinetic temperature at the frozen state scales as \(\theta^{\rm Frz}\propto r_{\rm c}^{\frac{2}{3-2k}}\), which generalises the power law behaviour \(r_{\rm c}^{2/3}\) found in the main text for \(k=0\). Interestingly, regardless of the choice of \(k\), the frozen state is still universal, in the sense that it is independent of the cooling rate \(r_{\rm c}\), since \[a_{2}-a_{2}^{\rm s}\sim r_{\rm c}\,a_{2}^{(1)}\propto r_{\rm c}\,\theta_{\rm s}^{k-\frac{3}{2}}=O(1). \tag{14}\] As in the main text, the above scaling relations suggest the introduction of scaled variables \[Y\equiv r_{\rm c}^{-\frac{2}{3-2k}}\theta,\quad X\equiv r_{\rm c}^{-\frac{2}{3-2k}}\theta_{\rm s}. \tag{15}\] In terms of them, the dynamic equations for the cooling protocol become \(r_{\rm c}\)-independent. The same applies for a reheating programme with \(r_{h}=r_{\rm c}\) from the frozen state. We remark that the evolution equations for the scaled variables \((Y,a_{2})\) in both systems are the same as the ones we have written in the main text, with the only change \(d/dX\leftrightarrow X^{k}d/dX\) on their lhs. Figure 9 shows the evolution of the excess kurtosis \(a_{2}\) towards its frozen state in a cooling programme with rate \(r_{\rm c}\) for different values of \(k\) in Eq. (10), for both the granular gas and the non-linear molecular fluid. In both cases, the excess kurtosis follows a similar trend: on the one hand, for \(k\rightarrow-\infty\), the time window over which \(\theta_{\rm s}\) decays towards zero becomes infinitely small, and thus the excess kurtosis does not have time to deviate from its stationary state value and is approximately constant for all \(X\). On the other hand, as the value of \(k\) is increased, the time window to relax also increases. The limiting case \(k=k_{\rm crit}\) constitutes the ultimate balance between a sufficiently wide time window to relax and a sufficiently fast cooling protocol, such that \(\theta\) deviates from the \(\theta=\theta_{\rm s}\) behaviour. It is worth noting that, for \(1/2\leq k<k_{\rm crit}\), the excess kurtosis tends to its value over the HCS--for the granular gas--and over the LLNES--for the non-linear molecular fluid. The lower bound \(k=1/2\) corresponds to the value above which the deviations from the \(\theta=\theta_{\rm s}\) line become significantly small, but still allowing for the kurtosis to evolve towards the frozen state, as Eq. (12) states. Since we are showing the numerical integration of the evolution equations in the first Sonine approximation, these limit values of the excess kurtosis correspond to their theoretical estimates in this framework.
For the granular gas, this is given by Eq. (11), which is quite accurate due to its smallness. For the non-linear fluid, the first Sonine approximation gives \(a_{2}^{\rm L}=-2/(d+8)\), which is quite different from its exact value in Eq. (V.12)--this is reasonable, since the deviations from the Gaussian are much larger in the LLNES than in the HCS. The above discussion hints at the frozen state corresponding to the HCS and the LLNES for the granular gas and the non-linear molecular fluid, respectively. This would mean that these systems, either the granular gas or the non-linear molecular fluid, reach the corresponding non-equilibrium state, either the HCS or the LLNES, over a time window of the order of \(r_{\rm c}^{-1}\) when cooling with a programme for which \(1/2\leq k<k_{\rm crit}\). The latter suggests useful applications in optimal control [90; 91; 92; 93] and also within the study of non-equilibrium effects, as previous work on both systems shows that both the HCS and the LLNES are responsible for the emergence of a plethora of non-equilibrium phenomena, such as the Mpemba and Kovacs effects.[32; 38; 40; 41; 42; 43] ## Appendix B Regular perturbation theory for the molecular fluid Following an approach similar to that in Sec. III.1, let us decrease the bath temperature by applying the linear cooling programme \[\frac{d\theta_{\mathrm{s}}}{dt}=-r_{c}\ \Rightarrow\frac{d\theta}{dt}=-r_{c}\frac{d\theta}{d\theta_{\mathrm{s}}},\quad\frac{da_{2}}{dt}=-r_{c}\frac{da_{2}}{d\theta_{\mathrm{s}}}, \tag{10}\] where \(r_{c}\ll 1\) is the cooling rate. We employ again the boundary layer theory [65] to approach the problem. For the outer layer, for which it is expected that \(\theta\) does not deviate too much from \(\theta_{\mathrm{s}}\), we insert the regular perturbation series \[\theta =\theta^{(0)}+r_{c}\theta^{(1)}+O(r_{c}^{2}), \tag{11a}\] \[a_{2} =a_{2}^{(0)}+r_{c}a_{2}^{(1)}+O(r_{c}^{2}), \tag{11b}\] into the evolution equations (V.9) and equate terms with the same power of \(r_{c}\). At the lowest order, \(O(1)\), one obtains \[0= 2(\theta_{\mathrm{s}}-\theta^{(0)})+2\gamma(d+2)\,\theta^{(0)}\] \[-2\,\gamma(d+2)(1+a_{2}^{(0)})\frac{[\theta^{(0)}]^{2}}{\theta_{\mathrm{s}}}, \tag{12a}\] \[0= 8\gamma\left(1-\frac{\theta^{(0)}}{\theta_{\mathrm{s}}}\right)\] \[-\left[\frac{4\theta_{\mathrm{s}}}{\theta^{(0)}}-8\gamma+4\gamma(d+8)\frac{\theta^{(0)}}{\theta_{\mathrm{s}}}\right]a_{2}^{(0)}, \tag{12b}\] whose solution corresponds to equilibrium: \[\theta^{(0)}=\theta_{\mathrm{s}},\quad a_{2}^{(0)}=0. \tag{13}\] The linear terms in \(r_{c}\) obey \[-1 =\theta_{\mathrm{s}}^{1/2}\left\{-2\left[1+\gamma(d+2)\right]\theta^{(1)}-2\gamma(d+2)\theta_{\mathrm{s}}a_{2}^{(1)}\right\}, \tag{14a}\] \[0 =2\gamma\frac{\theta^{(1)}}{\theta_{\mathrm{s}}^{1/2}}+\theta_{\mathrm{s}}^{1/2}\left[1+\gamma(d+6)\right]a_{2}^{(1)}. \tag{14b}\] The solution of this system is given by \[\theta^{(1)} =\frac{1}{2\theta_{\mathrm{s}}^{1/2}}\frac{1+\gamma(d+6)}{\left[1+\gamma(d+4)\right]^{2}-2\gamma^{2}(d+4)}, \tag{15}\] \[a_{2}^{(1)} =-\frac{1}{\theta_{\mathrm{s}}^{3/2}}\frac{\gamma}{\left[1+\gamma(d+4)\right]^{2}-2\gamma^{2}(d+4)}. \tag{16}\] The regular perturbation expansion (VI.1) in the main text is directly obtained by combining Eqs. (11), (13) and (15).
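As a quick consistency check of the outer solution above, one can integrate the full first-Sonine system (V.9) under the same linear cooling programme and compare \(\theta-\theta_{\mathrm{s}}\) and \(a_{2}\) with \(r_{c}\theta^{(1)}\) and \(r_{c}a_{2}^{(1)}\) at a moderate bath temperature, where the regular expansion should hold. The sketch below assumes \(d=3\), \(\gamma=0.1\), \(r_{c}=0.01\) and uses SciPy; the two columns should agree up to \(O(r_{c}^{2})\) corrections.

```python
# Sketch: check of the O(r_c) outer (regular perturbative) solution derived in
# this appendix, for d = 3, gamma = 0.1 and r_c = 0.01 (assumed values).
import numpy as np
from scipy.integrate import solve_ivp

d, gamma, r_c = 3, 0.1, 0.01

def rhs(t, y):
    theta, a2, ths = y
    p = np.sqrt(ths)
    dtheta = p * (2 * (ths - theta) + 2 * gamma * (d + 2) * theta
                  - 2 * gamma * (d + 2) * (1 + a2) * theta**2 / ths)
    da2 = p * (8 * gamma * (1 - theta / ths)
               - (4 * ths / theta - 8 * gamma
                  + 4 * gamma * (d + 8) * theta / ths) * a2)
    return [dtheta, da2, -r_c]

# cool from theta_s = 1 down to theta_s = 0.5, well inside the outer region
sol = solve_ivp(rhs, (0.0, 0.5 / r_c), [1.0, 0.0, 1.0], rtol=1e-10, atol=1e-12)
theta, a2, ths = sol.y[:, -1]

den = (1 + gamma * (d + 4))**2 - 2 * gamma**2 * (d + 4)
theta1 = (1 + gamma * (d + 6)) / (2 * np.sqrt(ths) * den)   # theta^(1), Eq. (15)
a2_1 = -gamma / (ths**1.5 * den)                            # a_2^(1),  Eq. (16)

print("theta - theta_s:", theta - ths, "  vs  r_c*theta^(1):", r_c * theta1)
print("a_2            :", a2, "  vs  r_c*a_2^(1)  :", r_c * a2_1)
```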
2306.05610
Approximation by the Bessel-Riesz Quotient
How large is the Bessel potential, $G_{\alpha,\mu}f$, compared to the Riesz potential, $I_\alpha f$, of a given function? We prove that, for certain $f$ and $p$, \[\Vert G_{1,\mu} f\Vert_p\approx \omega(I_1f,1/\mu)_p,\] where $\omega(f,t)_p$ is the $L^p$ modulus of continuity. However, for $0<\alpha<1$, \[\Vert G_{\alpha,\mu}f\Vert_p\leq C(\omega(I_\alpha f,1/\mu)_p)^\alpha\cdot\Vert I_\alpha f\Vert^{1-\alpha}_p.\] These estimates are obtained by studying the quotient of the two operators, $E_{\alpha,\mu}:=\frac{(-\Delta)^{\alpha/2}}{(\mu^2-\Delta)^{\alpha/2}}$, and exploiting its approximation theoretic properties. Additionally, $G_{\alpha,\mu}f=\mathcal{O}(\mu^{-\frac\alpha2})$ if $I_\alpha f$ vanishes near a given point. This ``localization" result is derived from kernel estimates of $E_{\alpha,\mu}$.
Ikemefuna Agbanusi
2023-06-09T01:04:43Z
http://arxiv.org/abs/2306.05610v3
# Approximation by the Bessel-Riesz Quotient ###### Abstract Consider the operator \(E_{\mu}=\frac{\sqrt{-\Delta}}{\sqrt{\mu^{2}-\Delta}}\) where \(\Delta\) is the Laplacian in \(\mathbb{R}^{d}\) and \(\mu\) a positive parameter. \(E_{\mu}\) is the formal quotient of the Riesz and Bessel potentials and usually serves to relate these classical operators. It turns out that \(E_{\mu}f=f-T_{\mu}f\) for a certain singular approximate identity, \(T_{\mu}\), and this note examines \(E_{\mu}\) from the viewpoint of approximation theory. ## 1 Introduction and Overview This paper deals with some topics at the intersection of Fourier analysis and approximation theory. Specifically, for \(\mu>0\) we study the Bessel-Riesz quotient, \(E_{\mu}\), which is the operator defined by the Fourier multiplier \[m_{\mu}(\xi):=\frac{\left|\xi\right|}{\left(\mu^{2}+\left|\xi\right|^{2} \right)^{\frac{1}{2}}}. \tag{1}\] It arises in the following context. Let \(\hat{f}(\xi)\) denote the Fourier transform. If \(\widetilde{G_{\alpha,\mu}f}(\xi):=(\mu^{2}+|\xi|^{2})^{-\frac{\alpha}{2}}\hat{ f}(\xi)\) and \(\widetilde{I_{\alpha}f}(\xi):=|\xi|^{-\alpha}\hat{f}(\xi)\) define the Bessel and Riesz potential of \(f\) respectively, it is easy to check that \(G_{1,\mu}f=E_{\mu}I_{1}f\). From this identity, properties of the Bessel or Riesz potential are gleaned easily once they are known for \(E_{\mu}\). At least two observations point to the connection of \(E_{\mu}\) to approximation theory. The first is the trivial fact that \(m_{\mu}\to 0\) pointwise as \(\mu\to\infty\). The second observation starts with a variant of a formula in Stein [9, SS5.3.2] \[\frac{\left|\xi\right|}{\left(\mu^{2}+\left|\xi\right|^{2}\right)^{\frac{1}{2} }}=\sqrt{\frac{\left|\xi\right|^{2}}{\mu^{2}+\left|\xi\right|^{2}}}=\sqrt{1- \frac{\mu^{2}}{\mu^{2}+\left|\xi\right|^{2}}}=1-\sum_{j=1}^{\infty}a_{j}(1+| \xi\mu^{-1}|^{2})^{-j},\] where the coefficients \(a_{j}\) are from the Maclaurin series for \(\sqrt{1-t}\). For now we only need to know that \(a_{j}>0\) and that \(\sum a_{j}=1\). By Fourier inversion we obtain \[E_{\mu}f(x)=f(x)-T_{\mu}f(x), \tag{2}\] where \(T_{\mu}\) has the convolution kernel \(A_{\mu}(z)\) defined as \[A_{\mu}(z):=\mu^{-d}\sum_{j=1}^{\infty}a_{j}G_{2j,1}(\mu z). \tag{3}\] \(G_{\alpha,1}(z)=G_{\alpha}(z)\) are the Bessel kernels, that is, the inverse Fourier transform of \((1+|\xi|^{2})^{-\frac{\alpha}{2}}\). Their well known properties imply that \(A_{\mu}(z)\) is a positive, radial, integrable function with \(L^{1}\) norm \(\|A_{\mu}\|_{1}=\|\sum_{j=1}^{\infty}a_{j}G_{2j}\|_{1}=1\). Note that \(A_{\mu}(z)\) has a singularity at the origin since \(\lim_{|z|\to 0}G_{2j}(z)=\infty\) when \(0<2j\leq d\). Despite this, we are still entitled to view \(T_{\mu}\) as an approximate identity and, by (2), \(E_{\mu}\) is its approximation error. With any approximation scheme, the main task is quantifying the error. It is standard to measure size in the \(L^{p}\) metric and so we consider \(\|E_{\mu}f\|_{p}\), though other choices are possible. It turns out that the error depends on the \(L^{p}\) modulus of continuity, \[\omega(f,t)_{p}:=\sup_{|h|\leq t}\|f(\cdot+h)-f(\cdot)\|_{p}. \tag{4}\] The exact dependence is summarized next. **Theorem 1.1**.: _Suppose \(f\in L^{p}(\mathbb{R}^{d})\). 
There is a constant \(k=k(d)\) such that_ \[\|E_{\mu}f\|_{{}_{1}}\leq k\omega(f,1/\mu)_{1}\ln\omega(f,1/\mu)_{1};\quad p=1, \tag{5}\] _and a constant \(c=c(d,p)\) such that_ \[c^{-1}\omega(f,1/\mu)_{p}\leq\|E_{\mu}f\|_{p}\leq c\omega(f,1/\mu)_{p};\quad 1 <p<\infty. \tag{6}\] The novelty here is the \(L^{1}\) estimate (5) since (6) is implicit in Colzani [3] and Liu-Lu [8]. The proof is presented in SS2 along with some extensions. A natural question is finding the best or worst possible order of approximation. Saturation theorems dealing with the best possible rate can be found in SS3. Finding the worst rate of approximation is, in view of Theorem 1.1, tantamount to a chracterization of \(\omega(f,t)_{p}\), which is an open problem. Theorem 1.1 immediately yields another characterization of Besov spaces (see Notation). As usual, \(X\approx Y\) means that \(C^{-1}Y\leq X\leq CY\) for some \(C>0\). **Corollary 1.2**.: _Fix \(0<\alpha<1\). For \(1<p<\infty\) and \(0<q\leq\infty\), we have_ \[|f|_{B_{p,q}}^{q}\approx\int_{1}^{\infty}(\mu^{\alpha}||E_{\mu}f||_{p})^{q} \,\frac{d\mu}{\mu}.\] We turn to issues of pointwise convergence. Let \(\mathcal{M}_{\rm HL}\) denote the Hardy-Littlewood maximal operator. As \(A_{1}(z)\) is radially decreasing, we have the maximal inequality \(\sup_{\mu>0}|(A_{\mu}\star f)(x)|\leq C\mathcal{M}_{\rm HL}f(x)\) (see Stein [10, SS2.2.1]). This implies a weak type \((1,1)\) boundedness from which convergence of \(T_{\mu}f(x)\) to \(f(x)\) for almost all \(x\) follows. However, to deal with convergence at a _specified point_, or to study the structure of the set where pointwise convergence holds, we need good kernel estimates. A direct attack on the series for \(A_{\mu}\), (3), appears unwieldy. Instead we note that if \(b(\xi)\) is either \(m_{\mu}(\xi)\) or \(1-m_{\mu}(\xi)\), then \(b\) satisfies \[|\partial_{\xi}^{\beta}b(\xi)|\leq C_{\beta}\left|\xi\right|^{-|\beta|};\quad \xi\neq 0, \tag{7}\] but the usual Littlewood-Paley argument does not give fast enough decay at infinity for the kernel, though it shows the uniform \(L^{p}\) boundedness of \(E_{\mu}\) for \(1<p<\infty\). Luckily, a more detailed analysis shows that \[|\partial_{\xi}^{\beta}b(\xi)|\leq C_{\beta}|\xi|^{1-|\beta|}(\mu^{2}+|\xi|^{2 })^{-\frac{1}{2}}. \tag{8}\] This refinement is the main ingredient in the following result. **Theorem 1.3**.: _If \(b(\xi)\in L^{\infty}\) and satisfies (8), then its kernel \(B(x)\) satisfies_ \[|\partial_{x}^{\gamma}B|\leq C_{\gamma,d}\begin{cases}|2\mu x|^{-\frac{1}{2}}|x|^ {-|\gamma|-d};&|\mu x|>1,\\ (|\mu x|^{2}+1)^{-\frac{1}{2}}|x|^{-|\gamma|-d};&|\mu x|\leq 1.\end{cases}\] A bonus is that this result applies to a wider class of multipliers. The details are in SS4. Another consequence is a quantitative localization principle a la Riemann. **Corollary 1.4**.: _If \(f\) in \(L^{p}\) vanishes in near \(x_{0}\), then \(E_{\mu}f(x_{0})=\mathcal{O}(\mu^{-\frac{1}{2}})\)._ The proof is short enough to be given here. Proof.: By translation invariance, we may assume that \(x_{0}=0\), and \(\delta>0\) is such that \(f=0\) for \(|x|<\delta\). If \(\mu\delta>1\), Theorem 1.3 applied to \(m_{\mu}(\xi)\) gives \[|E_{\mu}f(0)|=\left|\int_{|y|>\delta}K_{\mu}(-y)f(y)\,dy\right|\leq\frac{C}{ \sqrt{\mu}}\int_{|y|>\delta}\frac{|f(y)|}{|y|^{d+\frac{1}{2}}}\,dy,\] where \(K_{\mu}\) is its kernel. 
The proof is completed by applying Holder's inequality: \[|E_{\mu}f(0)|\leq C_{d,p}\mu^{-\frac{1}{2}}\delta^{-(\frac{d}{p}+\frac{1}{2}) }\|f\|_{p}\leq C_{d,\delta,p}\mu^{-\frac{1}{2}}\|f\|_{p}.\] Several questions and generalizations suggest themselves. We mention a couple of each. Where does \(T_{\mu}\) fit in the wider scheme of approximation and summability? How does it compare with approximation by other kernels, say the Gauss-Weierstrass, Abel-Laplace, Poisson, Spherical, Bochner-Riesz, etc.? Speaking of generalizations, one possibility is to treat the multiplier \(m_{\mu}^{\delta}(\xi):=\frac{|\xi|^{\delta}}{(\mu^{2}+|\xi|^{2})^{\frac{3}{2}}}\), for \(\delta>0\). A more difficult one is to replace the Laplacian by a linear second order elliptic differential operator \(L(x,D)\) and to consider \(\frac{\sqrt{L(x,D)}}{\sqrt{\mu^{2}+L(x,D)}}\). These and other issues will be treated elsewhere. ### Notation We lay out the notation used. Everything takes place in \(\mathbb{R}^{d}\) and for \(1\leq p\leq\infty\), \(L^{p}=L^{p}(\mathbb{R}^{d})\) are the usual Lebesgue spaces with norm denoted by \(\|f\|_{p}\). For a non-negative integer \(k\), the _Sobolev space_\(W^{k}_{p}\) consists of \(L^{p}\) functions having distributional derivatives up to order \(k\) in \(L^{p}\). In \(W^{k}_{p}\) we use the norm \(\|f\|_{W^{k}_{p}}=\sum_{|\gamma|\leq k}\|D^{\gamma}f\|_{p}\), and seminorm \(|f|_{W^{k}_{p}}=\sum_{|\gamma|=k}\|D^{\gamma}f\|_{p}\). The direct and inverse Fourier transform of \(f\) and \(\hat{g}\) respectively defined as \[\hat{f}(\xi)=\int e^{-ix\cdot\xi}f(x)\,dx;\qquad\check{g}(x)=(2\pi)^{-d}\int e ^{ix\cdot\xi}\hat{g}(\xi)\,d\xi.\] When convenient we also use \(\mathcal{F}_{x\to\xi}\) and \(\mathcal{F}_{\xi\to x}\) for the direct and inverse transform. For suitable functions \(a(\xi)\), we associate the operator \[a(D)f(x)=(2\pi)^{-d}\int e^{ix\cdot\xi}a(\xi)\widehat{f}(\xi)\,d\xi.\] The _Bessel potential space_\(\mathcal{L}_{p}^{\alpha}\) is defined as \(\mathcal{L}_{p}^{\alpha}:=\{G_{\alpha}\star f:f\in L^{p}\}\). For \(0<\alpha<1\), \(1\leq p<\infty\) and \(1\leq q\leq\infty\), we define the _Besov spaces_\(B_{p,q}^{\alpha}\) as those \(f\in L^{p}\) for which the seminorm \[|f|_{B_{p,q}^{\alpha}}:=\left(\int_{0}^{1}(t^{-\alpha}\omega(f,t)_{p})^{q} \frac{dt}{t}\right)^{1/q}<\infty.\] Equipped with the norm \(\|f\|_{B_{p,q}^{\alpha}}=\|f\|_{p}+|f|_{B_{p,q}^{\alpha}}\) this becomes a Banach space. \(\mathrm{Lip}(\alpha,p)\) is the set of \(L^{p}\) functions which satisfy \(\omega(f,t)_{p}=\mathcal{O}(t^{\alpha})\). For \(0<\alpha<1\), \(\mathrm{Lip}(\alpha,p)=B_{p,\infty}^{\alpha}\) while \(\mathrm{Lip}(1,p)=W_{p}^{1}\). For more on these function spaces we refer the reader to [9]. ## 2 Order of Approximation ### The \(L^{p}\) Case Recall from the Introduction that \(A_{\mu}(z)>0\) and \(\|A_{\mu}(z)\|_{1}=1\). Consequently \[E_{\mu}f(x)=f(x)-\int_{\mathbb{R}^{d}}A_{\mu}(x-y)f(y)\,dy=\int_{\mathbb{R}^{ d}}A_{\mu}(x-y)(f(x)-f(y))\,dy.\] From Minkowski's inequality and a change of variables we obtain \[\|E_{\mu}f\|_{p}\leq\int A_{1}(y)\|f(\cdot)-f(\cdot-y/\mu)\|_{p}\,dy\leq\int A _{1}(y)\omega(f,|y|/\mu)_{p}\,dy.\] Now, set \(A(z):=A_{1}(z)\). As \(A(z)=\sum_{j=1}^{\infty}a_{j}G_{2j}(z)\), to proceed we need to know properties of \(a_{j}\) and \(G_{2j}(z)\). The salient ones are summarized next. **Lemma 2.1**.: 1. \(G_{2j}(z)\) _is positive, radial and decreasing with_ \(\|G_{2j}\|_{L^{1}}=1\)_. 
Moreover,_ \(G_{2j}(z)=\frac{1}{2^{\frac{d+2j-2}{2}}\pi^{\frac{d}{2}}\Gamma\left(j\right)} K_{\frac{d-2j}{2}}(|z|)|z|^{\frac{2j-d}{2}}\)_, where_ \(K_{\nu}(t)\) _is the modified Bessel function of the third kind._ 2. _The coefficients_ \(a_{j}\) _are positive, satisfy_ \(a_{j}\leq j^{-3/2}\)_,_ \(\sum_{j=1}^{\infty}a_{j}=1\) _and_ \(\sum_{j=1}^{\infty}(-1)^{j}a_{j}=1-\sqrt{2}\) _._ 3. \(\int G_{2j}(y)|y|^{\alpha}\,dy=C_{d,\alpha}\frac{\Gamma\left(j+\frac{\alpha} {2}\right)}{\Gamma(j)}\) _for some constant_ \(C_{d,\alpha}\)_. In addition, for_ \(0\leq\alpha<1\)_,_ \(\int A(y)|y|^{\alpha}\,dy\) _converges._ Proof.: 1. These are proved in Aronszajn-Smith [1, pp 413-421]. 2. Start with the expansion \(\sqrt{1-t}=1-\sum_{j=1}^{\infty}a_{j}t^{j}\), where \(a_{j}=\frac{(2(j-1))!}{2^{2j-1}(j-1)!j!}\). Then, as is easily checked, \[a_{j}=\frac{1}{2j}\prod_{k=1}^{j-1}\left(1-\frac{1}{2k}\right)=\frac{1}{2j} \exp\left(\sum_{k=1}^{j-1}\ln\left(1-\frac{1}{2k}\right)\right).\] From this follows \[a_{j}\leq\frac{1}{2j}\exp\left(\int_{1}^{j}\ln\left(1-(2x)^{-1}\right)\,dx\right) \leq\frac{\left(1-(2j)^{-1}\right)^{j}}{j(2j-1)^{1/2}}.\] Hence \(a_{j}\leq j^{-3/2}\) and \(\sum_{j=1}^{\infty}a_{j}t^{j}\) converges absolutely for \(|t|\leq 1\). The sums are obtained by plugging \(t=\pm 1\) into \(\sqrt{1-t}\). 3. For \(|\mathrm{Re}(\nu)|<\mathrm{Re}(\beta)\), we use the formula [5, Eq. 10.43.19]: \[\int_{0}^{\infty}t^{\beta-1}K_{\nu}(t)\,dt=2^{\beta-2}\Gamma\left(\frac{\beta +\nu}{2}\right)\Gamma\left(\frac{\beta-\nu}{2}\right).\] (9) A switch to spherical coordinates combined with part (a) and (9) gives \[\int G_{2j}(y)|y|^{\alpha}\,dy=\frac{2^{2-j-\frac{d}{2}}}{\Gamma(\frac{d}{2}) \Gamma(j)}\int_{0}^{\infty}t^{j+\frac{d}{2}+\alpha-1}K_{j-\frac{d}{2}}(t)\,dt =\frac{2^{\alpha}\Gamma(\frac{d+\alpha}{2})}{\Gamma(\frac{d}{2})}.\frac{ \Gamma\left(j+\frac{\alpha}{2}\right)}{\Gamma(j)}.\] Since \(\Gamma(x+a)\sim\Gamma(x)x^{a}\) for large \(x\), and \(a_{j}\leq j^{-3/2}\) we have \[\int A(y)|y|^{\alpha}\,dy=\sum_{j=1}^{\infty}a_{j}\int G_{2j}(y)|y|^{\alpha}\, dy=C_{\alpha,d}\sum_{j=1}^{\infty}a_{j}\frac{\Gamma\left(j+\frac{\alpha}{2} \right)}{\Gamma(j)}\leq C\sum_{j=1}^{\infty}j^{-\frac{(3-\alpha)}{2}},\] which converges when \(0\leq\alpha<1\). Before using this result in earnest, we warm-up with a computation. Set \(B(\xi,\mu)=\ln(1+|\xi/\mu|^{2})\). The inequality \(1-e^{-t}\leq\min\{1,t\}\) for \(t>0\) shows \[m_{\mu}(\xi)=\sum a_{j}\left(1-(1+|\xi/\mu|^{2})^{-j}\right)=\sum a_{j}\left( 1-e^{-jB}\right)\leq\sum a_{j}\min\{1,jB\},\] and split the resulting sum to see that \[m_{\mu}(\xi)\leq\sum_{j\leq B^{-1}}a_{j}jB+\sum_{j\geq B^{-1}}a_{j}.\] The bound for \(a_{j}\) in Lemma 2.1(b) and comparison with an integral now yields \[m_{\mu}(\xi)\leq B\int_{0}^{\frac{1}{B}}\frac{1}{\sqrt{t}}\,dt+\int_{\frac{1 }{B}}^{\infty}\frac{1}{t^{\frac{3}{2}}}\,dt\leq 2\sqrt{B(\xi,\mu)}.\] This suggests a Kolmogorov-Seliverstov-Plessner type result. **Proposition 2.2**.: _If \(\|\sqrt{\ln(1+|\xi|^{2})}\widehat{f}(\xi)\|_{2}<\infty\), then \(\|E_{\mu}f\|_{{}_{2}}=\mathcal{O}((\ln\mu)^{-\frac{1}{2}})\) as \(\mu\to\infty\)._ Proof.: By Plancherel \[\|E_{\mu}f\|_{{}_{2}}^{2}\leq\int\frac{|\xi|^{2}}{(|\xi|^{2}+\mu^{2})\ln(1+| \xi|^{2})}\ln(1+|\xi|^{2})|\widehat{f}(\xi)|^{2}\,d\xi.\] \(r^{2}/((r^{2}+\mu^{2})\ln(1+r^{2}))\) attains its maximum when \(r\approx\mu\) completing the proof. Our next application of Lemma 2.1 considerably extends this exercise and contains the \(p=1\) case of Theorem 1.1. 
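Before stating it, the coefficient properties in Lemma 2.1(b) are easy to check numerically. The sketch below uses the recurrence \(a_{j+1}=a_{j}(2j-1)/(2j+2)\), which follows from the closed form for \(a_{j}\) given in the proof.

```python
# Sketch: numerical check of Lemma 2.1(b).  The recurrence
# a_{j+1} = a_j (2j-1)/(2j+2) follows from the closed form for a_j.
import math

N = 200_000
a = 0.5                       # a_1 = 1/2
partial_sum, alt_sum, bound_ok = 0.0, 0.0, True
for j in range(1, N + 1):
    bound_ok = bound_ok and (a <= j ** (-1.5))
    partial_sum += a
    alt_sum += (-1) ** j * a
    a *= (2 * j - 1) / (2 * j + 2)

print("a_j <= j^(-3/2) for j <= N :", bound_ok)
print("sum_{j<=N} a_j             :", partial_sum, "(tends to 1)")
print("sum_{j<=N} (-1)^j a_j      :", alt_sum, "(tends to 1 - sqrt(2) =",
      1 - math.sqrt(2), ")")
```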
**Theorem 2.3**.: _Assume \(1\leq p<\infty\). There is a \(C>0\) depending only on \(d\) such that_ \[\|E_{\mu}f\|_{p}\leq C\omega(f,1/\mu)_{p}\left(3+2\ln\left(\frac{||f||_{p}}{C\omega(f,1/\mu)_{p}}\right)\right).\] Proof.: Let \(R>0\) be a large number to be chosen shortly. Observe that \[\|E_{\mu}f\|_{p} \leq\sum_{j=1}^{\infty}a_{j}\int_{\mathbb{R}^{d}}\omega\left(f,|y|/\mu\right)_{p}G_{2j}(y)\,dy\] \[\leq\sum_{j\leq R}a_{j}\int_{\mathbb{R}^{d}}\omega\left(f,|y|/\mu\right)_{p}G_{2j}(y)\,dy+\sum_{j>R}a_{j}\int_{\mathbb{R}^{d}}\omega\left(f,|y|/\mu\right)_{p}G_{2j}(y)\,dy.\] For the first sum we use the inequality \(\omega(f,\gamma t)_{p}\leq(1+|\gamma|)\omega(f,t)_{p}\). In the second sum we use \(\omega(f,t)_{p}\leq 2\|f\|_{p}\). Altogether \[\|E_{\mu}f\|_{p}\leq\omega\left(f,\mu^{-1}\right)_{p}\sum_{j\leq R}a_{j}\int_{\mathbb{R}^{d}}(1+|y|)G_{2j}(y)\,dy+2\|f\|_{p}\sum_{j>R}a_{j}\int_{\mathbb{R}^{d}}G_{2j}(y)\,dy.\] By Lemma 2.1(c), \[\|E_{\mu}f\|_{p}\leq c_{d}\omega\left(f,1/\mu\right)_{p}\sum_{j\leq R}a_{j}(1+j^{\frac{1}{2}})+2\|f\|_{p}\sum_{j\geq R}a_{j}.\] We know that \(a_{j}\leq j^{-3/2}\) from Lemma 2.1(b) and can once again compare sums to integrals to deduce \[\|E_{\mu}f\|_{p}\leq c_{d}\omega\left(f,1/\mu\right)_{p}(1+\ln R)+2\|f\|_{p}R^{-\frac{1}{2}}. \tag{10}\] The choice \(R=\left(||f||_{p}/c_{d}\omega(f,1/\mu)_{p}\right)^{2}\) minimizes (10) and completes the proof. As another application of Lemma 2.1, let us quickly estimate the order of approximation for functions in Besov spaces. Assume first that \(f\in B^{\alpha}_{p,\infty}:=\operatorname{Lip}(\alpha,p)\) with \(0<\alpha<1\) and \(1\leq p<\infty\). Recall that functions in \(\operatorname{Lip}(\alpha,p)\) satisfy \(\omega(f,t)_{p}=\mathcal{O}(t^{\alpha})\). Arguing as in the above proof but using Lemma 2.1(c), \[\|E_{\mu}f\|_{p}\leq\int A(y)\|f(\cdot)-f(\cdot-y/\mu)\|_{p}\,dy \leq\mu^{-\alpha}|f|_{B^{\alpha}_{p,\infty}}\int A(y)|y|^{\alpha}\,dy\] \[\leq\mu^{-\alpha}C_{d,\alpha}|f|_{B^{\alpha}_{p,\infty}}.\] As the embedding \(B^{\alpha}_{p,q}\hookrightarrow B^{\alpha}_{p,\infty}\) holds, the same estimate applies to \(f\in B^{\alpha}_{p,q}\). To handle the case \(\alpha=1\) we insist that \(1<p<\infty\). The estimate \(\omega(f,t)_{p}=O(t)\) implies \(f\in\mathcal{L}^{1}_{p}=W^{1}_{p}\) (see [9, pp. 135-139]). Thus, for some \(g\in L^{p}\), \((1+|\xi|^{2})^{1/2}\widehat{f}(\xi)=\widehat{g}(\xi)\) and \(E_{\mu}f=\mu^{-1}E_{1}(G_{1,\mu}\star g)\). From this \[\|E_{\mu}f\|_{p}=\mu^{-1}\|E_{1}(G_{1,\mu}\star g)\|_{p}\leq 2\mu^{-1}\|g\|_{p}\leq C\mu^{-1}\|f\|_{W^{1}_{p}(\mathbb{R}^{d})}.\] This argument combined with Theorem 2.3 implies results for functions in various spaces. For simplicity, we state those that apply to functions in the Lipschitz classes \(\operatorname{Lip}(\alpha,p)\). **Corollary 2.4**.: _If \(f\in\text{Lip}(\alpha,p)\) then_ \[\|E_{\mu}f\|_{p}\leq C\begin{cases}\mu^{-\alpha},&p>1,\quad 0<\alpha\leq 1;\\ \mu^{-\alpha},&p=1,\quad 0<\alpha<1;\\ \mu^{-1}\ln(\mu),&p=1,\quad\alpha=1.\end{cases}\] Corollary 2.4 improves on Theorem 2.3, but the sharp result is Theorem 1.1 as it asserts the equivalence \(\|E_{\mu}f\|_{p}\approx\omega(f,1/\mu)_{p}\) for \(1<p<\infty\). Before turning to the proof we give a summary. The idea is to use \(K\)-functionals as an intermediary. We establish the equivalence between the order of approximation and a \(K\)-functional. Known relationships between \(K\)-functionals and the modulus of continuity allow us to complete the proof.
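Before the formal argument, the equivalence (6) can be illustrated numerically in a simple setting. The sketch below (assumptions: \(p=2\), \(d=1\), a Gaussian test function, a periodic FFT discretization) computes the ratio \(\|E_{\mu}f\|_{2}/\omega(f,1/\mu)_{2}\) for several \(\mu\); by Theorem 1.1 it should remain bounded between positive constants.

```python
# Sketch: ratio ||E_mu f||_2 / omega(f, 1/mu)_2 for a Gaussian on a periodic
# grid (d = 1).  The shift f(. + 1/mu) is computed spectrally; for a Gaussian
# the sup in the modulus of continuity is attained at |h| = 1/mu.
import numpy as np

L, N = 40.0, 2**13
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)
f = np.exp(-x**2)
fhat = np.fft.fft(f)

def l2(g):
    return np.sqrt(dx * np.sum(np.abs(g) ** 2))

for mu in (1, 2, 4, 8, 16, 32):
    E_mu_f = np.fft.ifft(np.abs(xi) / np.sqrt(mu**2 + xi**2) * fhat).real
    shifted = np.fft.ifft(np.exp(1j * xi / mu) * fhat).real   # f(x + 1/mu)
    ratio = l2(E_mu_f) / l2(shifted - f)
    print(f"mu = {mu:3d}   ||E_mu f||_2 / omega(f, 1/mu)_2 = {ratio:.3f}")
```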
Proof of Theorem 1.1.: As in Ditzian-Ivanov [4], we introduce the \(K\)-functional \[K(t,f,|D|)_{p}:=\inf_{\begin{subarray}{c}g\in L^{p}\\ |D|g\in L^{p}\end{subarray}}\left(\|f-g\|_{p}+t\||D|g\|_{p}\right). \tag{11}\] Here is the main step in proving the equivalence theorem. **Lemma 2.5**.: \(K(1/\mu,f,|D|)_{p}\approx\|E_{\mu}f\|_{p}\)_, for \(1<p<\infty\)._ We need two additional results for the proof of this Lemma. The first is an integration identity. It will also be useful when we treat saturation theorems. **Lemma 2.6**.: _If \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\), the Schwartz class, then_ \[E_{\mu}\varphi(x)=\int_{\mu}^{\infty}s^{-2}\left(G_{3,s}\star|D|\varphi\right) (x)\,ds.\] Proof.: We have the chain of equalities \[\widehat{E_{\mu}\varphi}(\xi) =|\xi|\widehat{\varphi}(\xi)\int_{\mu}^{\infty}s(s^{2}+|\xi|^{2}) ^{-3/2}\,ds\] \[=\int_{\mu}^{\infty}|\xi|\widehat{\varphi}(\xi)s^{-2}(1+(|\xi|/s) ^{2})^{-3/2}\,ds\] \[=\int_{\mu}^{\infty}s^{-2}(\widehat{G_{3,s}\star|D|\varphi})(\xi )\,ds\] \[=\mathcal{F}_{x\to\xi}\left[\int_{\mu}^{\infty}s^{-2}\left(G_{3,s }\star|D|\varphi\right)(x)\,ds\right].\] The last interchange of integrals is justified because \(|\xi|\widehat{\varphi}(\xi)\in\mathcal{S}\). Fourier inversion then proves the identity. **Lemma 2.7**.: _For \(1<p<\infty\), \(g\in W^{1}_{p}\) if and only if \(g\) and \(|D|g\) are in \(L^{p}\)._ Proof.: Using the Riesz transforms \(R_{j}\), we can write \(D_{j}g=R_{j}(|D|g)\), and the boundedness of \(R_{j}\) implies that \(D_{j}g\in L^{p}\) whenever \(|D|g\in L^{p}\) if \(1<p<\infty\). Note that \(R_{j}\) is unbounded in \(L^{1}\) and \(L^{\infty}\) and so we cannot include the case \(p=1,\infty\). For the converse, suppose that \(g\in W^{1}_{p}\). Then \(g=G_{1}\star h\) for some \(h\in L^{p}\). By definition, \(\widehat{|D|g}(\xi)=|\xi|(|\xi|^{2}+1)^{-\frac{1}{2}}\widehat{h}(\xi)\), so that \(|D|g=E_{1}h\). As \(E_{\mu}\) is uniformly \(L^{p}\) bounded, \(\||D|g\|_{p}<\infty\) Proof of Lemma 2.5.: If both \(g,|D|g\in L^{p}\), Lemma 2.6, Lemma 2.7 and the density of \(\mathcal{S}(\mathbb{R}^{d})\) in \(W_{p}^{1}(\mathbb{R}^{d})\) together imply \[\|E_{\mu}g\|_{p} =\left\|\int_{\mu}^{\infty}s^{-2}\left(G_{3,s}\star|D|g\right)(x) \,ds\right\|_{p}\] \[\leq\int_{\mu}^{\infty}s^{-2}\|\left(G_{3,s}\star|D|g\right)\|_{p }\,ds\] \[\leq\||D|g\|_{p}\int_{\mu}^{\infty}s^{-2}\,ds\leq\mu^{-1}\||D|g\|_ {p}.\] This in turn implies \[\|E_{\mu}f\|_{p}\leq\|E_{\mu}(f-g)\|_{p}+\|E_{\mu}g\|_{p}\leq\|f-g\|_{p}+\mu^{- 1}\||D|g\|_{p},\] and taking the infimum over such \(g\) gives \(\|E_{\mu}f\|_{p}\leq K(1/\mu,f,|D|)_{p}\) which is one direction of the result. We turn to the opposite inequality. Set \(g=T_{\mu}f\). We only need show that \(\mu^{-1}\||D|g\|_{p}:=\mu^{-1}\||D|T_{\mu}f\|_{p}\leq C\|f-T_{\mu}f\|_{p}\). On the Fourier transform side \[\mu^{-1}\widehat{|D|T_{\mu}f(\xi)} =\frac{|\xi|}{\mu}\left(1-\frac{|\xi|}{(\mu^{2}+|\xi|^{2})^{\frac {1}{2}}}\right)\widehat{f}(\xi)\] \[=\frac{\mu}{((\mu^{2}+|\xi|^{2})^{\frac{1}{2}}+|\xi|)}\cdot\frac{ |\xi|}{(\mu^{2}+|\xi|^{2})^{\frac{1}{2}}}\widehat{f}(\xi)\] \[=r(\xi)\widehat{E_{\mu}f}(\xi),\] and we only need show that \(r(\xi)\) defines a bounded operator on \(L^{p}\). 
A direct computation shows that for \(\xi\neq 0\) \[\left|\frac{\partial r}{\partial\xi_{k}}\right|=\left|-\mu((\mu^{2}+|\xi|^{2} )^{\frac{1}{2}}+|\xi|)^{-2}\cdot\left(\frac{\xi_{k}}{|\xi|}+\frac{\xi_{k}}{( \mu^{2}+|\xi|^{2})^{\frac{1}{2}}}\right)\right|\leq\frac{2}{|\xi|}.\] For any multi-index \(\gamma\), this can be extended to \(|\partial^{\gamma}r(\xi)|\leq C_{\gamma}|\xi|^{-|\gamma|}\). Mikhlin's well known multiplier theorem (see [10]) shows that \(r(D)\) is \(L^{p}\) bounded for \(1<p<\infty\). Hence, for \(1<p<\infty\), we obtain \(\mu^{-1}\||D|A_{\mu}f\|_{p}\leq C\|f-T_{\mu}f\|_{p}\). Thus \[K(1/\mu,f,|D|)_{p}\leq\|f-T_{\mu}f\|_{p}+\mu^{-1}\||D|T_{\mu}f\|_{p}\leq C\|E_ {\mu}f\|_{p},\] concluding the proof of Lemma 2.5. To wrap up, we apply the result of Johnen-Scherer, [7], on the equivalence of moduli of continuity and \(K\)-functionals. If we define \[K(t,f,L^{p},W_{p}^{1})=\inf_{g\in W_{p}^{1}}\left(||f-g||_{p}+t\sup_{|\gamma|= 1}||D^{\gamma}g||_{p}\right),\] their result is that \(K(t,f,L^{p},W_{p}^{1})\approx\omega(f,t)_{p}\) for \(1\leq p\leq\infty\). However, Lemma 2.7 shows that when \(g\in W_{p}^{1}\), we have \(\sup_{|\gamma|=1}\|D^{\gamma}g\|_{p}\approx\||D|g\|_{p}\) for \(1<p<\infty\). This implies \(K(t,f,|D|)\approx K(t,f,L^{p},W_{p}^{1})\), and we have shown \[\|E_{\mu}f\|_{p}\approx K(t,f,|D|)\approx K(t,f,L^{p},W_{p}^{1})\approx\omega( f,t)_{p}.\] ### The \(H^{p}\) Case We denote the real Hardy spaces by \(H^{p}(\mathbb{R}^{d})\). For \(p>1\), they coincide with the \(L^{p}\) spaces. For \(0<p\leq 1\), it is a normed space of distributions. We denote the norm by \(\|\cdot\|_{H^{p}}\) in this section. A far more thorough exposition on these spaces can be found in [10, Chap. 3] and we state the bare minimum required. The analog of Theorem 1.1 for functions in Hardy spaces is the following. **Theorem 2.8**.: _For \(0<p\leq 1\), we have \(\|E_{\mu}f\|_{H^{p}}\leq C_{p}\omega(f,1/\mu)_{H^{p}}\)._ For \(f\) in \(H^{p}(\mathbb{R}^{d})\), \(\omega(f,t)_{H^{p}}:=\sup_{|h|\leq t}\|f(\cdot+h)-f(\cdot)\|_{H^{p}}\) is the \(H^{p}\) modulus of continuity. We require a result of Colzani [3, Theorem 4.1] in the proof and to state it we need some notation. Let \(\phi\in C^{\infty}(\mathbb{R}^{d})\) be supported in \(|\xi|\leq 2\), with \(\phi(\xi)=1\) for \(|\xi|\leq 1\). and define \(\widehat{\Phi_{s}\star f}(\xi)=\phi\left(\xi/s\right)\widehat{f}(\xi)\). **Lemma 2.9**.: \(||f-\Phi_{\mu}\star f||_{H^{p}}\leq C\omega(f,1/\mu)_{H^{p}}\)_,_ We also repeatedly use the "multiplier theorem for Hardy spaces" [9, SS7.4.9]. **Lemma 2.10**.: _Let \(k\in\mathbb{N}\). Then \(m(\xi)\) is \(H^{p}\) bounded if \(m\in L^{\infty}\) and, for \(|\beta|\leq k\) and \(k>d/p\), satisfies_ \[\sup_{0<A<\infty}A^{2|\beta|-d}\int_{A<|\xi|\leq 2A}|\partial^{\beta}m(\xi)|^{2 }\,d\xi\leq B.\] Proof of Theorem 2.8.: \(E_{\mu}\) is \(H^{p}\) bounded as the estimate \(|\partial^{\beta}m_{\mu}(\xi)|\leq C_{\beta}|\xi|^{-\beta}\) and the pointwise boundedness \(m_{\mu}(\xi)\) are enough to verify the conditions of Lemma 2.10. From here we check that \[\|E_{\mu}f\|_{H^{p}}=\|E_{\mu}(f-\Phi_{\mu}\star f+\Phi_{\mu} \star f)\|_{H^{p}} \leq C\|f-\Phi_{\mu}\star f\|_{H^{p}}+\|E_{\mu}(\Phi_{\mu}\star f )\|_{H^{p}}\] \[\leq C\omega(f,1/\mu)_{H^{p}}+\|E_{\mu}(\Phi_{\mu}\star f)\|_{H^ {p}},\] where we used the \(H^{p}\) boundedness of \(E_{\mu}\) in the second line and Colzani's result, Lemma 2.9, in the third line. 
Thus the proof reduces to showing that \(\|E_{\mu}(\Phi_{\mu}\star f)\|_{H^{p}}\leq C\omega(f,1/\mu)_{H^{p}}\). To prove this, we modify another argument of Colzani [3, Theorem 5.1]. As in that paper, we can assume that \(\widehat{f}\) is supported in \((\overline{R_{+}})^{d}=\{x_{k}\geq 0;\,1\leq k\leq d\}\). Let \(\theta\in S^{d-1}\) be a fixed unit vector with \(\theta\) in a proper conic subset of \((R_{+})^{d}\). We can thus find an \(\epsilon>0\), such that \(\theta^{\perp}\), the plane through the origin orthogonal to \(\theta\), lies outside the \(\epsilon\) conic neighborhood of \((\overline{R_{+}})^{d}\). Let \(\chi(\xi)\) be smooth, homogenous of degree \(0\), identically \(1\) on \((\overline{R_{+}})^{d}\) and vanish outside of the \(\epsilon/2\) conic neighborhood of \((\overline{R_{+}})^{d}\). We use Colzani's trick and write \[\widehat{E_{\mu}(\Phi_{\mu}\star f)}(\xi)=\chi(\xi)m_{\mu}(\xi)\phi\left(\xi /\mu\right)(e^{i\frac{\xi\cdot\theta}{\mu}}-1)^{-1}\cdot(e^{i\frac{\xi\cdot \theta}{\mu}}-1)\widehat{f}(\xi)\] We exploit the homogeneity of \(\chi\) and define two auxiliary functions \[\psi(z):=\frac{|z|\chi(z)\phi(z)}{(|z|^{2}+1)^{\frac{1}{2}}(e^{iz\cdot\theta}- 1)},\quad\widetilde{\psi}(z):=\psi(z/\mu).\] If \(\widetilde{\psi}(\xi)\) were \(H^{p}\) bounded, we would have \[\|E_{\mu}(\Phi_{\mu}\star f)\|_{H^{p}}=\|\widetilde{\psi}(D)\left[f(\cdot+ \theta/\mu)-f(\cdot)\right]\|_{H^{p}}\leq C_{p}\|f(\cdot+\theta/\mu)-f(\cdot)\| _{H^{p}},\] which of course implies that \(\|E_{\mu}(\Phi_{\mu}\star f)\|_{H^{p}}\leq C_{p}\omega(f,1/\mu)_{H^{p}}\) by the definition of the \(H^{p}\) modulus of continuity. The proof thus reduces to checking that \(\widetilde{\psi}\) satisfies the conditions of Lemma 2.10. Those conditions are dilation invariant so it is enough to check them for \(\psi\). To show boundedness, we focus on the behavior near \(\theta^{\perp}\) where \(e^{iz.\theta}-1=0\). Excluding the origin, \(\chi\) vanishes on \(\theta^{\perp}\) by construction. As \(\lim_{z\to 0}\psi(z)\) exists, we see that \(\psi\) is bounded. For the derivatives, we use the product rule \[\partial^{\beta}\psi(z)=\sum_{\beta_{1}+\ldots+\beta_{4}=\beta}\binom{\beta}{ \beta_{1}\cdots\beta_{4}}\partial^{\beta_{1}}\chi\partial^{\beta_{2}}\phi \partial^{\beta_{3}}\left(\frac{1}{e^{iz.\theta}-1}\right)\partial^{\beta_{4} }\left(\frac{|z|}{(1+|z|^{2})^{\frac{1}{2}}}\right).\] Straightforward estimates using the support of \(\phi\), and the homogeneity of \(\chi\), yield \(|\partial^{\beta}\psi(z)|\leq C_{\epsilon,\beta}|z|^{-\beta}\). Applying Lemma 2.10 completes the proof. ## 3 Saturation Theorems In this section we show there is a limit to how well \(T_{\mu}\) approximates functions in \(L^{p}\). We also characterize the class of functions which achieve the optimal approximation rate. To see the main idea, note that \[m_{\mu}(\xi)\widehat{f}(\xi)=\frac{|\xi|}{(\mu^{2}+|\xi|^{2})^{\frac{1}{2}}} \widehat{f}(\xi)=\frac{|\xi|\widehat{f}(\xi)}{(|\xi|^{2}+\mu^{2})^{\frac{1}{2} }}.\] The numerator is bounded in \(L^{p}\) when \(\nabla f\) is in \(L^{p}\) while the denominator is \(\mathcal{O}(1/\mu)\). We thus expect that functions with a derivative in \(L^{p}\) should have approximation error that is \(\mathcal{O}(1/\mu)\). Though not rigorous, it is not far from the truth. The arguments to follow dress this observation in functional analytic clothing. Parts are adapted from Butzer-Nessel [2]. 
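Before the proofs, the heuristic above can be illustrated numerically. The sketch below (assumptions: \(p=2\), \(d=1\), a Gaussian \(f\), a periodic FFT discretization) shows \(\mu\|E_{\mu}f\|_{2}\) increasing towards \(\||D|f\|_{2}\), i.e. the error is \(\mathcal{O}(1/\mu)\) when \(|D|f\in L^{2}\), in line with Theorem 3.2 below.

```python
# Sketch: mu * ||E_mu f||_2 for a Gaussian f on a periodic grid (d = 1),
# compared with || |D| f ||_2; the error is O(1/mu) since |D| f is in L^2.
import numpy as np

L, N = 40.0, 2**13
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)
fhat = np.fft.fft(np.exp(-x**2))

def l2_from_hat(ghat):
    return np.sqrt(dx * np.sum(np.abs(np.fft.ifft(ghat)) ** 2))

print("|| |D| f ||_2 =", l2_from_hat(np.abs(xi) * fhat))
for mu in (1, 4, 16, 64, 256):
    val = mu * l2_from_hat(np.abs(xi) / np.sqrt(mu**2 + xi**2) * fhat)
    print(f"mu = {mu:4d}   mu * ||E_mu f||_2 = {val:.6f}")
```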
**Theorem 3.1**.: _For \(1\leq p<\infty\), \(\|E_{\mu}f\|_{p}=o(1/\mu)\) implies \(f=0\) in \(L^{p}\)._ Proof.: The proof is broken into cases. 1. When \(p=1\), both \(\widehat{f}(\xi)\) and \(\widehat{E_{\mu}f}(\xi)\) are continuous functions. By assumption, \(\mu|\widehat{E_{\mu}f}(\xi)|\leq\mu\|E_{\mu}f\|_{1}\) so \(\lim_{\mu\to\infty}\mu|\widehat{E_{\mu}f}(\xi)|=0\) holds pointwise. A calculus argument gives \(\lim_{\mu\to\infty}\mu\widehat{E_{\mu}f}(\xi)=|\xi|\widehat{f}(\xi)\). This implies \(\widehat{f}(\xi)=0\) almost everywhere finishing the proof. Incidentally, we showed that \(\lim_{\mu\to\infty}\mu E_{\mu}f=|D|f\). This remark is used below. 2. For the case \(1<p\leq 2\), we use the Hausdorff-Young inequality to arrive at \(\|\mu\widehat{E_{\mu}f}\|_{p^{\prime}}\leq\|\mu E_{\mu}f\|_{p}=o(1)\). If we define the sequence \(g_{n}(\xi):=n\widehat{E_{n}f}(\xi)\), we see that \(\lim g_{n}=0\) in \(L^{p^{\prime}}\) and some subsequence satisfies \(\lim g_{n_{k}}=0\) almost everywhere. But the earlier calculus argument and the uniqueness of limits implies \(0=\lim_{k}g_{n_{k}}=\lim_{k}n_{k}\widehat{E_{n_{k}}f}=|\xi|\widehat{f}(\xi)\), and again \(\widehat{f}(\xi)=0\) almost everywhere. 3. For the case \(p>2\) we use a duality argument. As \(1<p^{\prime}<2\), we know that \(\varphi\in W^{1}_{p^{\prime}}\) implies \(|D|\varphi\in L^{p^{\prime}}\). The functional on \(W^{1}_{p^{\prime}}\) defined by \(L_{\mu,f}(\varphi):=\int\mu E_{\mu}f(x)\varphi(x)\,dx\) is easily seen to be a bounded linear functional with norm \(o(1)\). Two application of the dominated convergence theorem shows that \[0=\lim_{\mu\to\infty}L_{\mu,f}(\varphi)=\lim_{\mu\to\infty}\int\mu E_{\mu}f \varphi=\lim_{\mu\to\infty}\int f\mu E_{\mu}\varphi=\int f\cdot|D|\varphi.\] As a linear functional, \(f\) vanishes on the dense subspace \(W^{1}_{p^{\prime}}\) so \(f=0\). We now describe the functions which attain the optimal order \(\|E_{\mu}f\|_{p}=\mathcal{O}(1/\mu)\). **Theorem 3.2**.: _For \(f\in L^{p}\), \(\|E_{\mu}f\|_{p}=\mathcal{O}(1/\mu)\) holds if and only if_ 1. \(f\in W^{1}_{p}\) _when_ \(1<p<\infty\)_._ 2. \(|\xi|\widehat{f}(\xi)=\widehat{\nu}(\xi)\)_, for some bounded measure_ \(\nu\) _when_ \(p=1\)_._ Proof.: 1. In view of Theorem 1.1, such a function satisfies \(\omega(f,t)_{p}=\mathcal{O}(t)\) for \(1<p<\infty\). This defines \(W^{1}_{p}\) when \(1<p<\infty\). 2. In proving Theorem 3.1, we showed that \(\lim\limits_{\mu\to\infty}\mu E_{\mu}f=|D|f\). If \(\|E_{\mu}f\|_{1}=\mathcal{O}(1/\mu)\), then \(d\nu_{n}=nE_{n}f(x)\,dx\) is a bounded sequence of Radon measures. This sequence satisfies \(\sup_{n}|\nu_{n}|(\mathbb{R}^{d})<\infty\). By weak compactness (see [6, pgs. 54-55]), we can extract a limit, call it \(\nu\), also satisfying \(|\nu|(\mathbb{R}^{d})<\infty\). As a result, \(|D|f=\nu\) or \(|\xi|\widehat{f}(\xi)=\widehat{\nu}(\xi)\) which is one direction of (a). If \(f\in L^{1}\) and \(|D|f=\nu\), for some bounded measure \(\nu\), then Lemma 2.6 shows \[\|E_{\mu}f\|_{1}\leq\int_{\mu}^{\infty}s^{-2}\|G_{3,s}\star\nu\|_{1}\,ds\leq\mu ^{-1}|\nu|(\mathbb{R}^{d}),\] which proves the other half of (a). ## 4 Pointwise Kernel Estimates and Localization We prove Theorem 1.3 in this section. We abandon the representation of the kernel of \(E_{\mu}\) as a sum of Bessel kernels which has served us well thus far. We instead base the proof on a Littlewood-Paley type argument as in Stein [10, pgs 241-247]. 
We modify the standard argument by replacing the Hormander-Mikhlin condition (7) with the condition (8): \(|\partial^{\beta}b_{\mu}(\xi)|\leq C_{\beta}|\xi|^{1-|\beta|}(\mu^{2}+|\xi|^{ 2})^{-\frac{1}{2}}\). Proof of Theorem 1.3.: Let \(1=\sum_{j\in\mathbb{Z}}\phi(2^{-j}\xi)\) be a Littlewood-Paley partition of unity. Put \[B_{j,\mu}(x)=\int e^{ix\cdot\xi}\phi(2^{-j}\xi)b_{\mu}(\xi)\,d\xi\] For any multi-indices \(\beta\) and \(\gamma\) we see \[\left|x^{\beta}(-i\partial_{x})^{\gamma}B_{j,\mu}\right|=\left|\int x^{\beta} e^{ix\xi}\xi^{\gamma}\phi(2^{-j}\xi)b_{\mu}(\xi)\,d\xi\right|\leq\int\left| \partial_{\xi}^{\beta}(\xi^{\gamma}\phi(2^{-j}\xi)b_{\mu}(\xi))\right|\,d\xi.\] The product rule, (8), and direct integration gives \[\left|x^{\beta}(-i\partial_{x})^{\gamma}B_{j,\mu}\right|\leq C_{\gamma,\beta, \mu}2^{j(d+|\gamma|-|\beta|)}\frac{2^{j-1}}{(2^{2(j-1)}+\mu^{2})^{1/2}},\] which can be rearranged to the derivative estimate \[|\partial_{x}^{\gamma}B_{j,\mu}|\leq C_{\gamma,M,d}2^{j(d+|\gamma|-M)}\frac{2^{j- 1}}{(2^{2(j-1)}+\mu^{2})^{1/2}}|x|^{-M}. \tag{12}\] We now split the sum as \[\partial_{x}^{\gamma}B_{\mu}=\sum_{2^{j-1}\leq|x|^{-1}}\partial_{x}^{\gamma}B_ {j,\mu}+\sum_{2^{j-1}>|x|^{-1}}\partial_{x}^{\gamma}B_{j,\mu}.\] To estimate the first sum, set \(M=0\) in (12) to find that \[\sum_{2^{j-1}\leq|x|^{-1}}|\partial_{x}^{\gamma}B_{j,\mu}|\leq C_{\gamma,d} \sum_{2^{j-1}\leq|x|^{-1}}\frac{2^{j(d+|\gamma|)}}{(1+(\mu/2^{j-1})^{2})^{\frac {1}{2}}}. \tag{13}\] When \(2^{j-1}\leq|x|^{-1}\), we see that \((1+(\mu/2^{j-1})^{2})^{-\frac{1}{2}}\leq(1+|\mu x|^{2})^{-\frac{1}{2}}\) and summing the geometric series (13) we obtain \[\sum_{2^{j-1}\leq|x|^{-1}}|\partial_{x}^{\gamma}B_{j,\mu}|\leq\frac{C_{\gamma,d}}{(1+|\mu x|^{2})^{\frac{1}{2}}}\frac{1}{|x|^{d+|\gamma|}}. \tag{14}\] For the second sum, we set \(M\) to be the smallest integer greater than \(|\gamma|+d+1/2\) and arrive at \[\sum_{2^{j-1}>|x|^{-1}}|\partial_{x}^{\gamma}B_{j,\mu}| \leq C_{\gamma,d,M}|x|^{-M}\sum_{2^{j-1}>|x|^{-1}}2^{j(d+|\gamma|- M)}\frac{2^{j-1}}{\left(\mu 2^{j-1}\left(\frac{\mu}{2^{j-1}}+\frac{2^{j-1}}{\mu} \right)\right)^{\frac{1}{2}}}\] \[\leq C_{\gamma,d,M}|x|^{-M}\mu^{-1/2}\sum_{2^{j-1}>|x|^{-1}}\frac{2 ^{j(d+|\gamma|+\frac{1}{2}-M)}}{\left(\frac{\mu}{2^{j-1}}+\frac{2^{j-1}}{\mu} \right)^{\frac{1}{2}}}. \tag{15}\] Setting \(t=\mu/2^{j-1}\) and \(L=|\mu x|\). If \(2^{j-1}>|x|^{-1}\), we see that \(0<t\leq L\). A direct calculation shows \[\sup_{0<t\leq L}(t+t^{-1})^{-\frac{1}{2}}=\begin{cases}2^{-\frac{1}{2}};&L>1, \\ (L+L^{-1})^{-\frac{1}{2}};&L\leq 1.\end{cases}\] We use this to sum the geometric series in (15) and obtain \[\sum_{2^{j-1}>|x|^{-1}}|\partial_{x}^{\gamma}B_{j,\mu}|\leq C\begin{cases}|2 \mu x|^{-\frac{1}{2}}|x|^{-|\gamma|-d};&|\mu x|>1,\\ (|\mu x|^{2}+1)^{-\frac{1}{2}}|x|^{-|\gamma|-d};&|\mu x|\leq 1.\end{cases} \tag{16}\] Combining (16) with the earlier estimate (14) completes the proof. An similar argument establishes a Hormander condition which we include here for completeness and for contrast with the usual one. **Theorem 4.1**.: _If \(b_{\mu}(\xi)\) satisfies (8), then its kernel satisfies_ \[\int\limits_{|x|\geq 2|y|}|B_{\mu}(x+y)-B_{\mu}(x)|\ dx\leq C\begin{cases}|2 \mu y|^{-1/2};&|\mu y|>1,\\ (|\mu y|^{2}+1)^{-1/2};&|\mu y|\leq 1.\end{cases} \tag{17}\] We present a refinement of Corollary 1.4 from the Introduction. **Theorem 4.2**.: _Fix \(\delta>0\) and suppose \(f\in L^{\infty}\) vanishes for \(|x|<\delta\). 
If \(\sigma<\delta\), there is a constant \(C=C(d,\sigma,\delta)\) such that the "uniform maximal" estimate holds:_ \[\sup_{|x|\leq\sigma}\sup_{\mu\geq 1}|E_{\mu}f(x)|\leq C\|f\|_{{}_{L^{\infty}}}. \tag{18}\] The behavior of the constant in (18) is of interest here. The proof below gives a logarithmic dependence on the distance \(\delta-\sigma\). Proof.: Let \(K_{\mu}(z)\) be the associated kernel. For any \(|x|<\sigma\) it is enough to estimate \[\int_{|y|\geq\delta}|K_{\mu}(x-y)|\,dy\leq\int_{|x-y|\geq\delta-\sigma}|K_{\mu} (x-y)|\,dy.\] Since we are taking a supremum in \(\mu\) we cannot assume that \(1/\mu\) is small compared to \(\delta-\sigma\) and so split the integral over two regions: \(|x-y|\geq 1/\mu\) and \(\delta-\sigma\leq|x-y|\leq 1/\mu\). By Theorem 1.3 \[\int_{|y|\geq\delta}|K_{\mu}(x-y)|\,dy\leq\int_{|x-y|\geq\frac{1}{\mu}}\, \frac{\mu^{-\frac{1}{2}}}{|x-y|^{d+\frac{1}{2}}}\,dy+\int_{\delta-\sigma\leq |x-y|\leq\frac{1}{\mu}}\frac{(1+(\mu|x-y|)^{2})^{-\frac{1}{2}}}{|x-y|^{d}}\,dy.\] After the integration dust settles, we see that \[\sup_{\mu\geq 1}\int_{|y|\geq\delta}|K_{\mu}(x-y)|\,dy\leq C\sup_{\mu\geq 1} \left(1+\frac{|\ln\mu(\delta-\sigma)|}{(1+(\mu(\delta-\sigma))^{2})^{\frac{1}{ 2}}}\right)\leq C(1+|\ln(\delta-\sigma)|)\] ### Acknowledgement I would like to thank Professor L. Colzani for clarifying some points in [3] which improved the argument in SS2.2. Andres Larrain-Hubach made helpful comments on an earlier draft. Any remaining errors are, of course, mine
2308.03915
Predicting and explaining nonlinear material response using deep Physically Guided Neural Networks with Internal Variables
Nonlinear materials are often difficult to model with classical state model theory because they have a complex and sometimes inaccurate physical and mathematical description or we simply do not know how to describe such materials in terms of relations between external and internal variables. In many disciplines, Neural Network methods have arisen as powerful tools to identify very complex and non-linear correlations. In this work, we use the very recently developed concept of Physically Guided Neural Networks with Internal Variables (PGNNIV) to discover constitutive laws using a model-free approach and training solely with measured force-displacement data. PGNNIVs make a particular use of the physics of the problem to enforce constraints on specific hidden layers and are able to make predictions without internal variable data. We demonstrate that PGNNIVs are capable of predicting both internal and external variables under unseen load scenarios, regardless of the nature of the material considered (linear, with hardening or softening behavior and hyperelastic), unravelling the constitutive law of the material hence explaining its nature altogether, placing the method in what is known as eXplainable Artificial Intelligence (XAI).
Javier Orera-Echeverria, Jacobo Ayensa-Jiménez, Manuel Doblare
2023-08-07T21:20:24Z
http://arxiv.org/abs/2308.03915v1
# Predicting and explaining nonlinear material response using deep Physically Guided Neural Networks with Internal Variables ###### Abstract Nonlinear materials are often difficult to model with classical state model theory because they have a complex and sometimes inaccurate physical and mathematical description or we simply do not know how to describe such materials in terms of relations between external and internal variables. In many disciplines, Neural Network methods have arisen as powerful tools to identify very complex and non-linear correlations. In this work, we use the very recently developed concept of Physically Guided Neural Networks with Internal Variables (PGNNIV) to discover constitutive laws using a model-free approach and training solely with measured force-displacement data. PGNNIVs make a particular use of the physics of the problem to enforce constraints on specific hidden layers and are able to make predictions without internal variable data. We demonstrate that PGNNIVs are capable of predicting both internal and external variables under unseen load scenarios, regardless of the nature of the material considered (linear, with hardening or softening behavior and hyperelastic), unravelling the constitutive law of the material hence explaining its nature altogether, placing the method in what is known as eXplainable Artificial Intelligence (XAI). Nonlinear computational solid mechanics, Deep Neural Network, Internal Variables, Explainable Artificial Intelligence, Physics-Informed Machine Learning, Physically Guided Neural Networks ## 1 Introduction It is common knowledge that our everyday life is being dramatically challenged by Big Data and Artificial Intelligence (AI). According to the International Data Corporation, the Global Datasphere (the summation of all data, whether created, captured, or replicated) will grow from 33 ZB in 2018 to 175 ZB by 2025 [1]. This is mainly due to the explosion of social networks, e-commerce and marketing, and the extension of the Internet of Things (IoT). Among the top eight companies in terms of market capitalization, five base their business on leveraging the value of data [2]. This huge amount of available information justifies the prosperity of data science and data-based decision-making in fields as diverse as sociology, economics, engineering and medicine. As a response, Machine Learning (ML) methods have today become one of the main tools not only in business, but also in science and technology. These methodologies enable the extraction of information from data that would be intractable by means of traditional methods [3]. They try to mimic the process of human knowledge acquisition and structuring, taking advantage of the aforementioned advances in data generation, management, and storage, as well as huge improvements in the performance of computers and algorithms [4]. In the special case of Scientific ML [5], the natural adaptation of many supervised as well as unsupervised ML algorithms to the vectorized representation that most physical problems exhibit makes the study of the convergence between the two fields especially important. Data-driven methods are used in many different physical disciplines such as chemical and electrical processes [6], biology [7], spoken language recognition [8], among many others. However, the link between classical physical modelling and data-driven methods has not been quite clear so far, since the physical description of most systems was built on the basis of empirical knowledge rather than large databases. 
Nonetheless, the new era of computation and Big Data has opened new perspectives where data can be incorporated in this physical description in a consistent and comprehensive way. Promising improvements can therefore be spotted, such as new forms of empiricism that declare "the end of theory" and the impending advancement of data-driven methods over knowledge-driven science [9]. One of the most prominent strategies that has proven to be especially prolific in recent years is the use of Artificial Neural Networks (ANNs). Since 1958, when Rosenblatt developed the _perceptron_ [10], many works have been devoted to ANNs, some of them demonstrating their character as universal approximators [11, 12, 13, 14, 15]. However, it has been only in the last decade of the twentieth century, thanks to the important progress of high performance computing capabilities and the combination of back-propagation [16] with stochastic gradient descent [17] algorithms, that ANNs have become a booming technology. The progress has accelerated in the last decade with the advent of Convolutional Neural Networks (CNNs) [18] and Recurrent Neural Networks (RNNs) [19], in what is known nowadays as Deep Learning (DL) [20], and culminating with the attention mechanism [21], transformer models and generative AI, which have nowadays gained great popularity thanks to Large Language Models (LLMs) [22]. Leaving aside the progress of DL as a research field, AI approaches have changed the way we conceive science. On the one hand, AI has been used to discover the hidden physical structure of the data and unravel the equations of a system [23, 24, 25]. On the other hand, the tremendous predictive power of AI has been blended with the scientific consistency of the explicit mathematical representation of physical systems through the concept of _data-driven_ models for simulation-based engineering and sciences (SBES) [26]. The latter in turn may be done by the combination of raw data and physical equations [27, 28], by enforcing a metriplectic structure on the model, related to the fulfillment of thermodynamic laws [29], or by defining the specific structure of the model [30, 31]. These novel _data-driven science_ approaches, coined as Scientific Machine Learning or Physics-Informed Machine Learning (PIML), arise therefore with the main purpose of turning apparently non-physically meaningful data-driven models, where approaches such as ANNs have excelled, into physics-aware models. However, the interplay between data and physical sciences has not been exempt from setbacks of different kinds. In fact, the use of complex DL models does not fit well with the study of physical problems. Furthermore, in many physical problems, many variables are involved, interacting in complex and non-stationary ways. This requires huge amounts of data to get accurate predictions using ANN techniques, sometimes in regions of the solution space that are difficult to access or uncommon and therefore difficult to sample. As a consequence, due to the bias-variance trade-off, poor extrapolation capacity is obtained outside the usual data range for models that aim at recreating complex physics [32]. In addition, a physically based model is not only useful for making predictions, but also for acquiring new knowledge through the interpretation of its structure, parameters, and mathematical properties. Physical interpretability is, in most cases, at least as important as predictive performance. 
However, it is known that interpretability is one of the main weaknesses of ANNs, as the acquired knowledge is encoded in the strength of multiple connections, rather than stored at specific locations. That explains the huge efforts that are currently being made towards "whitening" the "black-box" character of ANNs [33, 34] in what has been coined as eXplainable Artificial Intelligence (XAI) [35, 36]. In the context of data-driven simulation-based engineering and sciences (DDSBES) [37], two ways of proceeding can be distinguished: building specific ANN structures endowed with the problem equations [38, 39], also known as the _inductive bias_ approach, and/or regularizing the loss function using this same physical information [40, 41, 42]. The approach to be followed depends decisively upon the data availability and the way in which this data is used. If we follow a supervised approach, there are two possibilities. * The first one assumes that we know the whole physics of the problem. In that situation, supervised ML is used for the sake of computational efficiency. In that sense, ANNs act as Reduced Order Models (ROMs) that can be used as a surrogate in problems involving optimization or control. Hybridizing DL with physical information is a way of improving standard DL methods in terms of data requirements, less expensive training or noise filtering at the evaluation step, thanks to regularization [38, 40]. Therefore, we are interested in the predictive capacity of the approach. * The second one assumes that we know some of the physics of the problem. In that situation, we are interested in uncovering the physics that remains hidden, expressed in terms of some model parameters [42] or functional relations [43]. For that reason, we are rather interested in the explanatory character of the method. In other situations, we follow unsupervised approaches. This may indeed be due to two different possibilities: * If we know the whole physics of the problem, the use of ANNs is merely instrumental: they serve as an alternative way of numerically solving some system of Partial Differential Equations (PDEs) that requires an important computational effort [44]. * When some of the physics of the problem is unknown, the intrinsic variability of the data (for instance when measuring spatial or temporal fields) may be exploited for an unsupervised discovery of some hidden constitutive models [45, 46]. For that reason, we are rather interested in the explanatory character of the method. However, in the context of the IoT, where data quantity generally dominates over data quality, data availability and variability are key, and it is difficult to guarantee that, considering for instance the specific case of computational solid mechanics, "the combination of geometry and loading generates sufficiently diverse and heterogeneous strain states to train a generalizable constitutive model with just a single experiment" [47]. Therefore, there is no alternative other than introducing all the control or measurable variables into the workflow, while maintaining the desirable properties of the PIML approach, namely, its ability to get fast predictions in real time (for optimization and control issues) together with its explanatory capacity. In this work, we demonstrate how, in the context of computational solid mechanics, Physically Guided Neural Networks with Internal Variables (PGNNIVs) meet these requirements particularly well. 
PGNNIVs comprise ANNs that are able to incorporate some of the known physics of the problem, expressed in terms of some measurable variables (for instance forces and displacements) and some hidden ones (for instance stresses). Their predictive character, which improves many of the features of conventional ANNs [38], allows for fast and accurate predictions. In addition, PGNNIVs are able to unravel the constitutive equations of different materials from unstructured data, that is, uncontrolled test data obtained from system monitoring. The content of this paper is structured as follows. First, the methodology is described, including a brief overview of the state of the art, the use of PGNNIVs in computational mechanics, the computational treatment of the physical tensorial fields and operators, as well as the data-set generation and training process. Then, the main results are presented, covering both the predictive and the explanatory capacity of the method. Finally, conclusions are summarized, together with the main limitations and a brief overview of future work. ## 2 Methods ### 2.1 Brief overview of Physics Informed Machine Learning in computational solid mechanics PDEs are the standard way to describe physical systems under the continuum setting thanks to their overarching capacity to model extremely different and complex phenomena. However, analytic solutions are most times difficult or even impossible to find. That is the reason why numerical methods have become the universal tool to obtain approximate but accurate solutions to PDEs. These methods consider a given discretization in space and time, which results in an algebraic (in general non-linear) system that is then solved by means of standard matrix manipulation. In the last three decades, however, attempts to solve PDEs from a data-driven point of view have been numerous. First attempts [48, 49] generalized earlier ideas developed for Ordinary Differential Equations (ODEs). Since then, many different approaches have been proposed, ranging from collocation methods [50, 51, 52], variational/energy approaches [53, 54, 55] and loss regularization using physical or domain knowledge [56, 57, 41, 58], to the most recent approaches using automatic differentiation [42], nowadays known as Physics-Informed Neural Networks (PINNs). Other works have extensively tried to address this challenge using the stochastic representations of high-dimensional parabolic PDEs [59, 60, 61]. In order to provide the data-driven models with a meaningful physical character, remarkable efforts have been made in recent years to embed physical information into data-driven descriptions. The potential of solving inverse problems with linear and non-linear behavior in solid mechanics, for example, has been explored using DL methods [62], where the forward problem is solved first to create a database, which is then used to train the ML algorithms and determine the boundary conditions from assumed measurements. Other approaches initially build a constitutive model into the framework by enforcing constitutive constraints, and aim at calibrating the constitutive parameters [63]. In this context, clearly differentiating between external and internal variables becomes an important factor when approaching complex physical problems, but this is always disregarded from a data-driven viewpoint. 
External variables are those observable, measurable variables of the system that can be obtained directly from physical sensors, such as position, temperature or forces; internal variables are non-observable (not directly measurable) variables that locally integrate other observable magnitudes and depend on the particular internal structure of the system [64]. This is very important to consider in the ML framework since predictions associated with an internal state model explicitly require the definition of the cloud of experimental values that identifies that model [65]. This implies "measuring", or better, assuming values for non-observable variables [66, 67]. This is, for example, the case of stresses in continuum mechanics, which can be determined _a priori_ only after making strong assumptions, such as their uniform distribution in the central section of a sample under uniform tension. An alternative is the use of PGNNIVs. This new methodological approach makes it possible to predict the values of the internal variables by mathematically constraining some hidden layers of a deep ANN (whose structural topology is appropriately predefined) by means of the fundamental laws of continuum mechanics, such as conservation of energy or mechanical momenta, which relate internal non-measurable variables to external observable ones. With this, it is possible to transform a purely ML-based model into a physically based one without giving up the powerful tools of DL, including the implicit correlation between observable data and the derived predictive capacity. In this way, not only are the real internal variables of the problem predicted, but also the amount of data needed to train the network decreases, convergence is reached faster, data noise is better filtered and the extrapolation capacity outside the range of the training data-set is improved, as recently demonstrated in [38]. In line with the terminology and general framework used in PIML [68], PGNNIVs showcase an intuitive interplay between an _inductive bias_ approach and a _learning bias_ approach, where physical constraints are incorporated by means of a physics-informed likelihood, i.e., additional terms in the loss function (also known as _collocation_ or _regularization losses_). ### 2.2 Physically Guided Neural Networks with Internal Variables in computational mechanics #### 2.2.1 Revisiting PGNNIVs PGNNIVs are in essence a generalization of PINNs. In the latter, physical equations constrain the values of output variables to belong to a certain physical manifold that is built from the information provided by the data and the specific form of the PDE considered. One of the shortcomings of PINNs is that, in general, only simple PDEs that contain a few free parameters and have a closed form can be considered. In contrast, PGNNIVs apply not only to scenarios where PDEs involve many parameters and have complex forms, but also to those where a mathematical description is not available. In fact, this new paradigm embodies a unique architecture where the values of the neuron variables in some intermediate layers acquire physical meaning in an unsupervised way, providing the network with an inherent explanatory capacity. 
The main differentiating features that, to the authors' knowledge, constitute a relevant contribution to the advances of data-driven physical modelling and its particular application to nonlinear mechanics are twofold. First, physical constraints are applied in predefined internal layers (PILs), in contrast to previous works on PINNs. Second, and even more importantly, PGNNIVs are able to predict and explain the nature of the system all at once, i.e., predictability of the variables as well as explainability of the constitutive law are ensured and learned altogether. In all related works, modeling assumptions on the constitutive law of the corresponding materials are directly imposed, so that the material response complies with certain constraints. On the contrary, PGNNIVs only enforce universal laws of the system (e.g., balance equations) and no prior knowledge of the constitutive model is incorporated. Classical Deep Neural Networks (DNNs) are often represented as _black boxes_ that theoretically can compute and learn any kind of function correlating the input and output data [69]. In particular, they perform very well in areas of science and technology where complex functions convey good approximations of the governing phenomena. Although there exist some heuristic rules [70], these _black boxes_ are usually trained via trial and error. Adding a physical meaning to the hidden layers and constraining them by adding an extra term to the cost function has already proven to have significant advantages, such as lower data requirements, higher accuracy and faster convergence in real physical problems, as well as model unravelling capacities [38]. The basic principles of PGNNIVs are briefly presented in the following lines. Let us consider a set of continuous Partial Differential Equations (PDEs) of the form \[\mathcal{F}(u,v)=f,\text{ in }\Omega, \tag{1a}\] \[\mathcal{G}(u,v)=g,\text{ in }\partial\Omega, \tag{1b}\] \[\mathcal{H}(u)=v,\text{ in }\Omega, \tag{1c}\] where \(u\) and \(v\) are the unknown fields of the problem, \(\mathcal{F}\) and \(\mathcal{H}\) are functionals representing the known and unknown physical equations of the specific problem, \(\mathcal{G}\) is a functional that specifies the boundary conditions, and \(f\) and \(g\) are known fields. The continuous problem has its analogous discretized representation in finite-dimensional spaces in terms of vectorial functions \(\mathbf{F}\), \(\mathbf{G}\) and \(\mathbf{H}\) and nodal values \(\mathbf{u}\), \(\mathbf{v}\), \(\mathbf{f}\) and \(\mathbf{g}\). Particularly, \(\mathbf{u}\) are the solution field nodal values and \(\mathbf{v}\) are the unknown internal field variables at the different nodes. The discretization may be done using any discretization technique, such as the Finite Element Method (FEM). Hence, Eqs. (1) become \[\mathbf{F}(\mathbf{u},\mathbf{v})=\mathbf{f},\text{ in }\Omega, \tag{2a}\] \[\mathbf{G}(\mathbf{u},\mathbf{v})=\mathbf{g},\text{ in }\partial\Omega, \tag{2b}\] \[\mathbf{H}(\mathbf{u})=\mathbf{v},\text{ in }\Omega. \tag{2c}\] A PGNNIV may be defined for a problem of type (2) in the following terms: \[\mathbf{y}=\mathsf{Y}(\mathbf{x}),\] \[\mathbf{v}=\mathsf{H}(\mathbf{u}),\] \[\mathbf{x}=\mathbf{I}(\mathbf{u},\mathbf{f},\mathbf{g}),\] \[\mathbf{y}=\mathbf{O}(\mathbf{u},\mathbf{f},\mathbf{g}),\] \[\mathbf{R}(\mathbf{u},\mathbf{v},\mathbf{f},\mathbf{g})=0,\] where: 1. \(\mathbf{u}\), \(\mathbf{f}\) and \(\mathbf{g}\) are the measurable variables of the problem. 2. 
\(\mathbf{x}\) and \(\mathbf{y}\) are the input and output variables respectively, and will be defined depending on which relation \(\mathbf{x}\mapsto\mathbf{y}\) is to be predicted. 3. \(\mathbf{I}\) and \(\mathbf{O}\) are functions that compute the input \(\mathbf{x}\) and the output \(\mathbf{y}\) of the problem from the measurable variables. In other words, functions \(\mathbf{I}\) and \(\mathbf{O}\) define the data used as the starting point to make predictions, \(\mathbf{x}\), and the data that we want to predict, that is, \(\mathbf{y}\). 4. \(\mathbf{R}\) are the physical constraints, related to the relations given by \(\mathbf{F}\) and \(\mathbf{G}\). 5. \(\mathsf{Y}\) and \(\mathsf{H}\) are DNN models: * \(\mathsf{Y}\) is the **predictive model**, whose aim is to infer accurate values for the output variables for a certain input set, that is, to surrogate the relation \(\mathbf{x}\mapsto\mathbf{y}\). * \(\mathsf{H}\) is the **explanatory model**, whose objective is to unravel the hidden physics of the relation \(\mathbf{u}\mapsto\mathbf{v}\). #### 2.2.2 Adaptation to computational solid mechanics. Our aim now is to reframe Eqs. (1) in the context of solid mechanics. To fix ideas, although it is not difficult to adapt the methodology to other constitutive models, we restrict this analysis to hyperelastic solids with constant and known density \(\rho\). First, we have to consider equilibrium equations (momentum conservation) in the domain \(\Omega\). In spatial coordinates, equilibrium reads \[\mathrm{div}(\mathbf{\sigma})+\rho\mathbf{b}=\mathbf{0}, \tag{4}\] where \(\mathbf{\sigma}\) is the Cauchy stress tensor, \(\rho\) is the density, \(\mathbf{b}\) is the spatial volumetric body force field and \(\mathrm{div}\) is the divergence operator in spatial coordinates. In material coordinates, equilibrium reads \[\mathrm{DIV}(\mathbf{P})+\rho\mathbf{B}=\mathbf{0}, \tag{5}\] where now \(\mathbf{P}=\det(\mathbf{F})\mathbf{\sigma}\mathbf{F}^{-\intercal}\) is the first Piola-Kirchhoff stress tensor, \(\mathbf{B}=\det(\mathbf{F})\mathbf{b}\) is the reference volumetric body force field, and \(\mathrm{DIV}\) is the divergence operator in material coordinates. If \(\mathbf{\xi}=\chi(\mathbf{\Xi})\) is the motion function that relates spatial (\(\mathbf{\xi}\)) and material (\(\mathbf{\Xi}\)) coordinates, we define the deformation gradient tensor \(\mathbf{F}\) as \[\mathbf{F}=\mathrm{GRAD}(\chi)=\mathrm{GRAD}\otimes\mathbf{\xi}, \tag{6}\] where \(\mathrm{GRAD}\) is the gradient operator in material coordinates. Eq. (6) shows that \(\mathbf{F}\) is a potential tensor field, so \(\mathbf{F}\) must satisfy the following compatibility equation in each connected component of \(\Omega\): \[\mathrm{ROT}(\mathbf{F})=\mathbf{0}, \tag{7}\] where \(\mathrm{ROT}\) is the rotational (curl) operator in material coordinates. For hyperelastic materials, the Cauchy stress tensor \(\mathbf{\sigma}\) is related to the deformation gradient tensor by means of the equation \[\mathbf{\sigma}=\frac{1}{\det(\mathbf{F})}\frac{\partial\Psi}{\partial\mathbf{F}}\cdot\mathbf{F}^{\intercal}, \tag{8}\] where \(\Psi\) is the strain energy function expressed as a function of the deformation state given by \(\mathbf{F}\), \(\Psi=\mathfrak{F}(\mathbf{F})\). Obtaining this particular function is a subject of research in materials science, and many approaches are possible, ranging from phenomenological descriptions to mechanistic and statistical models. Finally, Eqs. (4) or (5), (7) or (6), and (8) must be supplemented with appropriate boundary conditions. 
We distinguish here between _essential_ boundary conditions and _natural_ boundary conditions. The former define the motion of the solid at some boundary points \(\Gamma_{E}\subset\partial\Omega\): \[\mathbf{\xi}=\bar{\mathbf{\xi}}, \tag{9}\] whereas the latter define the traction vector at some other boundary points \(\Gamma_{N}\subset\partial\Omega\): \[\mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}, \tag{10}\] where \(\mathbf{n}\) is the outward normal vector (in the spatial configuration) and \(\bar{\mathbf{\xi}}\) and \(\bar{\mathbf{t}}\) are known values of the solid motion and spatial traction forces respectively. The material analogue of Eq. (10) is \[\mathbf{P}\cdot\mathbf{N}=\bar{\mathbf{T}}, \tag{11}\] where now \(\mathbf{N}\) and \(\bar{\mathbf{T}}\) are the material analogues of \(\mathbf{n}\) and \(\bar{\mathbf{t}}\). Let us assume that it is possible to measure (for instance using Digital Image Correlation techniques [71]) the system response in terms of its motion given by the map \(\mathbf{\xi}=\chi(\mathbf{\Xi})\), that is, \(\mathbf{\xi}\) is a measurable variable. Let us also assume that we can measure the volumetric loads, \(\mathbf{b}=\mathbf{b}(\mathbf{\xi})\), as well as the prescribed traction forces \(\bar{\mathbf{t}}\) (or, in an equivalent manner, \(\mathbf{B}=\mathbf{B}(\mathbf{\Xi})\) and \(\bar{\mathbf{T}}\)). Therefore, using Eqs. (6) and (8) it is possible to express: \[\mathbf{F}=\mathcal{A}(\mathbf{\xi}), \tag{12a}\] \[\mathbf{\sigma}=\mathcal{B}(\Psi,\mathbf{\xi}), \tag{12b}\] for some appropriate differential operators \(\mathcal{A}\) and \(\mathcal{B}\). Hence, it is possible to recast hyperelastic solid mechanics as \[\mathrm{div}(\mathbf{\sigma}(\Psi,\mathbf{\xi}))=-\rho\mathbf{b}\quad\mathrm{in}\quad\Omega, \tag{13a}\] \[\mathbf{\xi}=\bar{\mathbf{\xi}}\quad\mathrm{in}\quad\Gamma_{E}, \tag{13b}\] \[\mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}\quad\mathrm{in}\quad\Gamma_{N}, \tag{13c}\] \[\Psi=\mathfrak{F}(\mathbf{F}(\mathbf{\xi}))\quad\mathrm{in}\quad\Omega, \tag{13d}\] where Eqs. (13a) and (13c) may eventually be substituted by their material analogues. Grouping Eqs. (13b) and (13c), it is clear that we can express computational solid mechanics for hyperelastic materials as \[\mathcal{F}(\mathbf{\xi},\mathbf{\sigma})=\mathbf{b},\;\mathrm{in}\;\Omega, \tag{14a}\] \[\mathcal{G}(\mathbf{\xi},\mathbf{\sigma})=(\bar{\mathbf{\xi}},\bar{\mathbf{t}})\;\mathrm{in}\;\partial\Omega, \tag{14b}\] \[\mathcal{H}(\mathbf{\xi})=\mathbf{\sigma},\;\mathrm{in}\;\Omega, \tag{14c}\] which is in the form of Eqs. (1) with \(u=\mathbf{\xi}\), \(v=\mathbf{\sigma}\), \(f=\mathbf{b}\) and \(g=(\bar{\mathbf{\xi}},\bar{\mathbf{t}})\). Particularization to small strains solid mechanics. In small strains solid mechanics, it is common to work with the displacement field \(\mathbf{U}=\mathbf{\xi}-\mathbf{\Xi}\) and to define the displacement gradient tensor \(\mathbf{J}=\mathrm{GRAD}(\mathbf{U})\). In that case, the constitutive equation is formulated as \[\mathbf{\sigma}=\mathfrak{G}(\mathbf{\varepsilon}), \tag{15}\] where \(\mathbf{\varepsilon}\) is the Cauchy small strain tensor \[\mathbf{\varepsilon}=\mathrm{symgrad}(\mathbf{U})=\frac{1}{2}(\mathbf{J}+\mathbf{J}^{\intercal}), \tag{16}\] and \(\mathfrak{G}\) is a tensor map. 
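Although the particular form of \(\Psi\) (or \(\mathfrak{G}\)) is precisely what the explanatory network is meant to discover, the constitutive relation (8) itself can be illustrated with a short automatic-differentiation sketch. The strain-energy function and material constants below are assumptions made only for this example and are not the materials used later in the paper.

```python
# Illustrative sketch of the hyperelastic relation in Eq. (8): given a
# strain-energy function Psi(F), automatic differentiation yields
# P = dPsi/dF and sigma = (1/det F) P F^T. The compressible Neo-Hookean
# energy and the constants mu, lam are assumptions for this example.
import torch

mu, lam = 1.0, 1.0                        # assumed material constants

def psi(F):
    """Compressible Neo-Hookean energy (illustrative choice of Psi)."""
    J = torch.det(F)
    return 0.5 * mu * (torch.trace(F.T @ F) - 3.0) \
           - mu * torch.log(J) + 0.5 * lam * torch.log(J) ** 2

F = (torch.eye(3) + 0.01 * torch.randn(3, 3)).requires_grad_(True)
P = torch.autograd.grad(psi(F), F)[0]     # first Piola-Kirchhoff stress
sigma = (P @ F.T) / torch.det(F)          # Cauchy stress, Eq. (8)
```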
With these considerations, the equations of the problem are now written as \[\mathcal{F}(\mathbf{U},\mathbf{\sigma})=\mathbf{b},\;\mathrm{in}\;\Omega, \tag{17a}\] \[\mathcal{G}(\mathbf{U},\mathbf{\sigma})=(\bar{\mathbf{U}};\bar{\mathbf{t}}),\;\mathrm{in}\;\partial\Omega, \tag{17b}\] \[\mathcal{H}(\mathbf{U})=\mathbf{\sigma},\;\mathrm{in}\;\Omega, \tag{17c}\] again in the form of Eqs. (1) with \(u=\mathbf{U}\), \(v=\mathbf{\sigma}\), \(f=\mathbf{b}\) and \(g=(\bar{\mathbf{U}},\bar{\mathbf{t}})\). Once we have discretized the problem, our aim is to predict a motion field \(\mathbf{\xi}\) (or a displacement field \(\mathbf{U}\)) from a particular load case, expressed in terms of the volumetric loads and the natural boundary conditions1, \(\bar{\mathbf{t}}\); therefore \(\mathbf{I}(\mathbf{u},\mathbf{f},\mathbf{g})=(\mathbf{f},\mathbf{g})=(\mathbf{b},\bar{\mathbf{t}})\). With these last remarks, the PGNNIV problem is stated for finite solid mechanics as Footnote 1: It is important to recall that, as essential boundary conditions can be measured as an output variable, it is not necessary to include them as inputs of our problem. \[\mathbf{\xi}=\mathsf{Y}(\bar{\mathbf{t}}) \tag{18a}\] \[\mathbf{\sigma}=\mathsf{H}(\mathrm{KIN}(\mathbf{\xi})) \tag{18b}\] \[\mathbf{x}=(\bar{\mathbf{t}},\mathbf{b}) \tag{18c}\] \[\mathbf{y}=\mathbf{\xi} \tag{18d}\] \[\mathbf{R}(\mathbf{\xi},\mathbf{\sigma},\bar{\mathbf{t}})=(\mathrm{div}(\mathbf{\sigma})-\rho\mathbf{b};\mathbf{\sigma}\cdot\mathbf{n}-\bar{\mathbf{t}};\mathbf{\xi}-\bar{\mathbf{\xi}}), \tag{18e}\] where \(\mathrm{KIN}(\mathbf{\xi})\) is a selected kinematic descriptor of the strain state, such as the deformation gradient \(\mathbf{F}=\mathrm{GRAD}(\mathbf{\xi})\), the right Cauchy-Green deformation tensor \(\mathbf{C}=\mathbf{F}^{\intercal}\mathbf{F}\), or the Green-Lagrange strain tensor \(\mathbf{E}=\frac{1}{2}(\mathbf{C}-\mathbf{I})\), among others. For small strains solid mechanics, the methodology simplifies to \[\mathbf{U}=\mathsf{Y}(\bar{\mathbf{t}}) \tag{19a}\] \[\mathbf{\sigma}=\mathsf{H}(\mathrm{symgrad}(\mathbf{U})) \tag{19b}\] \[\mathbf{x}=(\bar{\mathbf{t}},\mathbf{b}) \tag{19c}\] \[\mathbf{y}=\mathbf{U} \tag{19d}\] \[\mathbf{R}(\mathbf{U},\mathbf{\sigma},\bar{\mathbf{t}})=(\mathrm{div}(\mathbf{\sigma})-\rho\mathbf{b};\mathbf{\sigma}\cdot\mathbf{n}-\bar{\mathbf{t}};\mathbf{U}-\bar{\mathbf{U}}). \tag{19e}\] The appropriate structure and architecture of \(\mathsf{H}\) and \(\mathsf{Y}\) depend on the complexity of the material at hand, which is discussed later. #### 2.2.3 Case study: geometry and external forces. For illustration purposes, the case study considered in this work consists of a non-uniform biaxial test on a rectangular plate of height \(L_{1}=16\) cm and width \(L_{2}=20\) cm, under plane stress. No volumetric loads are incorporated, that is, \(\mathbf{b}=\mathbf{0}\). We impose a certain arbitrary compression load profile \(p=p(s)\) (where \(s\) is the coordinate along the right and top contour). To accelerate computations we consider a load profile that is symmetric with respect to the vertical and horizontal axes and acts perpendicularly to the plate contour, as shown in Figure 1. The symmetry of the problem therefore allows for the analysis of an equivalent problem by extracting the upper-right portion of the plate and applying the corresponding symmetry boundary conditions. 
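To make the mappings in (19) concrete, the following is a minimal sketch of how the predictive network \(\mathsf{Y}\) (boundary tractions to nodal displacements) and the explanatory network \(\mathsf{H}\) (element strains to element stresses) could be wired together; it is illustrative only, and the grid size, layer widths and class names are assumptions made for this example, not the architecture reported in Appendix A.

```python
# Minimal sketch (illustrative, not the authors' implementation) of the two
# coupled networks in Eq. (19): Y maps boundary tractions to nodal
# displacements, H maps element strains to element stresses.
# Grid size, layer widths and names are assumptions made for this example.
import torch
import torch.nn as nn

NX, NY = 9, 9          # assumed nodal grid (so (NX-1) x (NY-1) elements)
N_TRAC = 2 * NX * 2    # assumed size of the discretized traction input

class PredictiveNet(nn.Module):
    """Y in Eq. (19a): boundary tractions -> nodal displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_TRAC, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NX * NY * 2),   # two displacement components per node
        )
    def forward(self, t_bar):
        return self.net(t_bar).view(-1, NX, NY, 2)

class ExplanatoryNet(nn.Module):
    """H in Eq. (19b): strain (Voigt, 3 comps) -> stress (Voigt, 3 comps),
    applied element-wise with shared weights (the 'moving MLP' idea below)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 3),
        )
    def forward(self, eps_voigt):          # (..., 3) -> (..., 3)
        return self.net(eps_voigt)

# Usage: predicted displacements are differentiated (finite differences) to get
# strains, which feed H; the residuals of Eq. (19e) couple both networks.
Y, H = PredictiveNet(), ExplanatoryNet()
t_bar = torch.randn(8, N_TRAC)             # a batch of 8 synthetic load cases
U = Y(t_bar)                               # (8, NX, NY, 2) nodal displacements
```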
### 2.3 Data and operators representation In this section, we discuss how the different mathematical objects in our use case problem (scalar, vectorial and tensorial fields and operators) are represented. First, we describe how the different fields are encoded and related to the measured data. Then, we explain how the different operators involved are built. This includes both the known operators (equilibrium and compatibility) as well as the unknown relationships comprising the predictive and explanatory networks, \(\mathsf{Y}\) and \(\mathsf{H}\) respectively. Then, we discuss how physical constraints are hardwired into the ANN, so that the built PGNNIV is tailored towards the discovery of constitutive models that comply with the physics of the solid mechanics problem, therefore constraining the learning space and bypassing the parametrization of the constitutive law. #### 2.3.1 Data structures The data that contains the nodal and element-wise variables (that is, displacements and stresses/strains respectively) is stored in array structures. Now we introduce the notation that will be used for referring to a given tensor field, which is represented by an array \(\mathsf{I}\) containing the information of the tensorial field itself. The different dimensions of the data are represented using the indexation \(\mathsf{I}[I|J|K]\) where \(I\) is a multi-index associated with the discretization of the problem, \(J\) with the tensorial character and \(K\) with the data instance. Thus, considering that we have a data-set of size \(N\) and a discretization of size \(n_{x}\times n_{y}\), the displacement field is represented by \(\mathsf{U}[i,j|k|l]\) where \(i=1,\dots,n_{x}\), \(j=1,\dots,n_{y}\), and where \(k=1,2\) (2D problem) and \(l=1,\dots,N\), so that \[\mathsf{U}[i,j|k|l]=u_{k}(x_{i},y_{j})\] is the \(k\) component of the displacement field evaluated at \((x_{i},y_{j})\) (that is, the node \((i,j)\)) corresponding to the data \(l\). Analogously, the strain and stress fields are represented respectively by \(\mathsf{E}[i,j|k,l|m]\) and \(\mathsf{S}[i,j|k,l|m]\) where \(i=1,\dots,n_{x}-1\), \(j=1,\dots,n_{y}-1\), \(k,l=1,2\) and \(m=1,\dots,N\), such that, for instance, \[\mathsf{E}[i,j|k,l|m]=E_{kl}\left(\frac{1}{2}(x_{i}+x_{i+1}),\frac{1}{2}(y_{j}+y_{j+1})\right)\] is the \(k,l\) component of the strain tensor evaluated at the element \((i,j)\) corresponding to the data \(m\). Finally, the traction forces, \(\mathbf{\bar{t}}^{\rm top}\) and \(\mathbf{\bar{t}}^{\rm right}\), which are treated as the inputs of our problem (given that volume forces are not considered), are represented by \(\mathtt{t}^{\rm top}[i|j|k]\) where \(i=1,\ldots,n_{x}\), \(j=1,2\), and \(k=1,\ldots,N\), so that \[\mathtt{t}^{\rm top}[i|j|k]=t_{j}^{\rm top}(x_{i},L_{1}/2),\] and \(\mathtt{t}^{\rm right}[i|j|k]\) where \(i=1,\ldots,n_{y}\), \(j=1,2\), and \(k=1,\ldots,N\), so that \[\mathtt{t}^{\rm right}[i|j|k]=t_{j}^{\rm right}(L_{2}/2,y_{i}),\] both associated with the data instance \(k\). #### 2.3.2 Operator construction In this section, we specify the details needed to build the predictive and explanatory networks (Y and H), as well as the constraint operator \(\mathbf{R}\). 
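Before constructing the operators, the array conventions of Section 2.3.1 can be summarized in code as follows; this is a minimal sketch in which the grid size and data-set size are arbitrary assumptions.

```python
# Illustrative sketch of the array conventions in Section 2.3.1 (assumed sizes).
import numpy as np

nx, ny, N = 9, 9, 1000        # assumed nodal grid and data-set size

# Nodal displacements U[i, j | k | l]: k = component (2D), l = data instance
U = np.zeros((nx, ny, 2, N))

# Element-wise strain and stress E[i, j | k, l | m], S[i, j | k, l | m]
E = np.zeros((nx - 1, ny - 1, 2, 2, N))
S = np.zeros((nx - 1, ny - 1, 2, 2, N))

# Boundary tractions on the top and right contours, t[i | j | k]
t_top = np.zeros((nx, 2, N))
t_right = np.zeros((ny, 2, N))

# Example access: x-displacement at node (3, 4) of sample 17
u_x = U[3, 4, 0, 17]
```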
The complexity of a PGNNIV adapted to solid mechanics stems from the architecture of the predictive and explanatory networks, Y and H, as they must be able to learn the non-linear relations \(\mathbf{\bar{t}}\mapsto\mathbf{U}\) or \(\mathbf{\bar{t}}\mapsto\mathbf{\xi}\) and \(\mathbf{E}\mapsto\mathbf{P}\) (or \(\mathbf{\varepsilon}\mapsto\mathbf{\sigma}\)), as explained in Section 2.2.2. Predictive network. The predictive network must be able to represent the data variability, so typically it has an autoencoder-like structure. Its complexity is therefore associated with the latent dimensionality and structure of the volumetric loads and boundary conditions. Although more sophisticated approaches coming from Manifold Learning theory are possible for analyzing data dimensionality and structure [72], this is not the main interest of this work and therefore is out of the scope of this particular study. Here we follow a much simpler approach, where we build an ANN that is sufficiently accurate when predicting the output \(\mathbf{y}\) from an input \(\mathbf{x}\). Since we consider a biaxial quadratic load applied to the plate, the complexity of the network depends on the data variability. For non-uniform loads, the values of the elemental loads are the input of a DNN whose output consists of the nodal displacements. For the uniform load, we use a much simpler autoencoder-like DNN. It is important to note that a single, complex enough, autoencoder-like DNN would be able to represent the data variability even in the more complex scenario (the case with the largest latent space in the process of data generation). However, we have decided to use two different network architectures for illustrating this particular feature of PGNNIV: the predictive network is associated with data variability, rather than with the nature of the data. Therefore, we can adapt the network architecture to our problem characteristics, aiming either at a better network performance (avoiding overfitting) or at a lower computational cost. Figure 1: **Dimensions and representation of the non-uniform biaxial test on a 2D plate.** Load \(p=p(s)\) acting perpendicularly to the contour is arbitrary, provided that it is compatible with the symmetry of the problem. We locate the origin \((x,y)=(0,0)\) at the geometrical center of the plate. In Appendix A, the particular architecture of the two predictive networks is detailed, both for the non-uniform biaxial test and for the uniform biaxial test. In any case, although the Y network architecture was handcrafted, it is expected that the more (and more varied) data is available, the less relevant the hand-engineering of the network becomes. Explanatory network. As in this work we restrict ourselves to the elastic regime, the input variable is the given strain state at an arbitrary element \(\mathsf{E}[i,j|k,l|m]\) (E represents the Cauchy deformation tensor for infinitesimal theory and the Green-Lagrange deformation tensor for finite strains theory), and the output variable is the associated stress state at that same element, \(\mathsf{S}[i,j|k,l|m]\) (again, S represents the Cauchy stress tensor or the first Piola-Kirchhoff tensor depending on whether we are in the infinitesimal or finite strains theory). 
Note that under the homogeneity assumption (and postulating that the stress state depends only on the value of the deformation at the same point), the explanatory network is a nonlinear map \(\mathbb{R}^{3}\to\mathbb{R}^{3}\), due to the symmetry of both tensors, which may be expressed symbolically as \[\mathsf{S}[i,j|\cdot,\cdot|m]=\mathsf{H}\left(\mathsf{E}[i,j|\cdot,\cdot|m]\right).\] For non-local materials, given the described discretization, the explanatory network could in principle be a map \(\mathbb{R}^{3n_{x}n_{y}}\to\mathbb{R}^{3n_{x}n_{y}}\) represented as \[\mathsf{S}[\cdot,\cdot|\cdot,\cdot|m]=\mathsf{H}\left(\mathsf{E}[\cdot,\cdot|\cdot,\cdot|m]\right).\] In particular, H has to be able to capture the highly nonlinear dependencies that may exist between variables. This is in theory possible thanks to the universal approximation theorem: by adding more internal layers (also known as hidden layers) to the DNN model, we can provide the network with the learning capability and complexity that a particular nonlinear constitutive law might require. For homogeneous materials, the explanatory network is efficiently implemented using a _convolutional_ filter that moves across the domain element by element, while expanding the features in a higher dimensional space, as illustrated in Fig. 2. We call this type of architecture a moving Multilayer Perceptron (mMLP). For the very particular case of linear elasticity, we can explicitly parameterize the explanatory network, \(\mathsf{H}(\mathbf{\varepsilon})=\mathbf{H}(\mathbf{\varepsilon};\mathbf{D})\), where \(\mathbf{D}\) is the elastic tensor and \(\sigma_{ij}=D_{ijkl}\varepsilon_{kl}\). Using Voigt convention, the tensor \(\mathbf{D}\) is expressed, under plane stress conditions and in the most general case, as a \(3\times 3\) matrix: \[\mathbf{D}=\left(\begin{array}{ccc}d_{11}&d_{12}&d_{13}\\ d_{12}&d_{22}&d_{23}\\ d_{13}&d_{23}&d_{33}\end{array}\right), \tag{20}\] Figure 2: **Representation of the explanatory network for the 2D plane stress problem.** On the left, the strain field computed from the displacement field predicted by Y is represented on the plate. In a homogeneous material, the DNN is fed with the strain state on each element, and the corresponding stresses are obtained. The set of weights of this DNN moves across the elements of the plate and is updated after each iteration of the optimization (see Figure 3 for the whole picture), acting as a 2D-convolutional filter. We call this type of architecture a moving Multilayer Perceptron (mMLP). where \(d_{ij}\) are free model parameters, whereas for an isotropic elastic material we have \[\mathbf{D}=\frac{E}{1-\nu^{2}}\left(\begin{array}{ccc}1&\nu&0\\ \nu&1&0\\ 0&0&1-\nu\end{array}\right), \tag{21}\] where the elastic modulus \(E\) and the Poisson ratio \(\nu\) are the only free fitting parameters learned during the training process. An analogous reasoning holds for more complex parametric dependencies. It is possible to express the explanatory network \(\mathsf{H}\) as a parametric model relating the strain and the stress states, that is \[\mathsf{S}[i,j|\cdot,\cdot|m]=\mathbf{H}\left(\mathsf{E}[i,j|\cdot,\cdot|m];\mathbf{\Lambda}\right),\] where \(\mathbf{\Lambda}\) are some pre-defined fitting parameters that are learned during the training step. 
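As a small, hedged illustration of this parametric variant, the sketch below implements the isotropic plane-stress map of Eq. (21) with the elastic modulus and Poisson ratio as the only trainable parameters, applied element-wise with shared parameters (the "moving" application described above). The initial values and array shapes are assumptions for this example, and the tensorial shear convention of Eq. (21) is kept.

```python
# Illustrative sketch (not the authors' code) of the parametric explanatory
# network H(eps; E, nu) of Eq. (21): isotropic linear elasticity under plane
# stress, with E and nu as the only trainable parameters. The map is applied
# element-wise with shared parameters, which is the "moving MLP" idea.
import torch
import torch.nn as nn

class IsotropicPlaneStress(nn.Module):
    def __init__(self, E0=500.0, nu0=0.2):
        super().__init__()
        # Initial guesses are arbitrary assumptions; the true values are learned.
        self.E = nn.Parameter(torch.tensor(E0))
        self.nu = nn.Parameter(torch.tensor(nu0))

    def forward(self, eps):
        # eps: (..., 3) Voigt strain [eps_xx, eps_yy, eps_xy] (tensorial shear)
        E, nu = self.E, self.nu
        c = E / (1.0 - nu ** 2)
        D = c * torch.stack([
            torch.stack([torch.ones_like(E), nu, torch.zeros_like(E)]),
            torch.stack([nu, torch.ones_like(E), torch.zeros_like(E)]),
            torch.stack([torch.zeros_like(E), torch.zeros_like(E), 1.0 - nu]),
        ])                                   # (3, 3) elastic matrix of Eq. (21)
        return eps @ D.T                     # stress in Voigt form, (..., 3)

# Usage on a whole grid of elements at once (batch, nx-1, ny-1, 3):
H = IsotropicPlaneStress()
eps = torch.randn(8, 8, 8, 3) * 1e-3
sigma = H(eps)
```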
In particular, a homogeneous material is described as \[\mathsf{S}[i,j|\cdot,\cdot|m]=\mathbf{H}\left(\mathsf{E}[i,j|\cdot,\cdot|m];\mathbf{\Lambda}_{ij}\right).\] In this work, this approach is illustrated with different types of materials ranging from the simpler case of a linear elastic material under infinitesimal strain theory to a hyperelastic Ogden material under finite strains theory. Coupling the two networks using physical constraints. The definition and subsequent formulation of the PGNNIV framework imply that the loss function includes a term proportional to the quadratic error between the predictions and the true values of the output variable (maximization of the likelihood of the data given the parameters) and other penalty terms related to some (physical) equations, i.e., equilibrium constraints. Therefore the different terms involved are: 1. Loss term associated with the measurement of the displacement field: \[\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}||\bar{\mathbf{U}}^{(i)}-\mathsf{Y}(\mathbf{t}^{(i)})||^{2}, \tag{22}\] where \(\bar{\mathbf{U}}^{(i)}\) is the observed displacement corresponding to sample \(i\). 2. Constraint associated with the equilibrium equation. \[\mathbf{\nabla}\cdot\mathbf{P}=\mathbf{0},\quad\mathrm{or}\quad\mathbf{\nabla}\cdot\mathbf{\sigma}=\mathbf{0}. \tag{23}\] 3. Constraint associated with the compatibility in the domain. \[\mathbf{E}-\frac{1}{2}\left(\mathbf{F}^{\intercal}\mathbf{F}-\mathbf{I}\right)=\mathbf{0},\quad\mathrm{or}\quad\mathbf{\varepsilon}-\frac{1}{2}(\nabla\otimes\mathbf{U}+\mathbf{U}\otimes\nabla)=\mathbf{0}. \tag{24}\] 4. Constraint associated with the equilibrium of the stresses on the boundary. \[\mathbf{P}\cdot\mathbf{N}-\mathbf{T}=\mathbf{0},\quad\mathrm{or}\quad\mathbf{\sigma}\cdot\mathbf{n}-\mathbf{t}=\mathbf{0},\;\mathrm{in}\;\Gamma_{N}. \tag{25}\] 5. Constraints associated with the compatibility of the displacements on the boundary. \[U_{x}(x=0,y)=0,\quad U_{y}(x,y=0)=0. \tag{26}\] The global cost function (which turns out to be a _virtual_ physics-informed likelihood in a Bayesian formulation or, equivalently, a regularized cost function in the most common terminology) can be computed as a weighted sum of \(\mathrm{MSE}\) and \(\mathrm{PEN}\), with \(\mathrm{PEN}\) referring to the physical terms, that is, Eqs. (23), (24), (25) and (26). As Eq. (24) may be expressed as an explicit relation between \(\mathbf{E}\) (or \(\mathbf{\varepsilon}\)) and \(\mathbf{U}\), it is directly embedded in the network architecture. Therefore, the loss is expressed as: \[\mathrm{CF}=\mathrm{MSE}+\mathrm{PEN}, \tag{27}\] where \[\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}\left[p_{1}\|\bar{\mathbf{U}}^{(i)}-\mathsf{Y}\left(\bar{\mathbf{t}}^{(i)}\right)\|^{2}\right], \tag{28}\] and \[\mathrm{PEN}=\frac{1}{N}\sum_{i=1}^{N}\left[p_{2}\|\mathbf{\nabla}\cdot\mathbf{\sigma}^{(i)}\|^{2}+p_{3}\|\mathbf{\sigma}^{(i)}\cdot\mathbf{n}-\bar{\mathbf{t}}^{(i)}\|^{2}+p_{4}\|U_{x}^{(i)}(x=0,y)+U_{y}^{(i)}(x,y=0)\|^{2}\right], \tag{29}\] where the superscript \((i)\) refers to the \(i\)-th piece of data and \(p_{j}\), \(j=1,2,3,4\), are penalty coefficients that account for the relative importance of each term in the global CF (and may be seen as Lagrange multipliers that softly enforce the constraints). 
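To make the structure of the cost function (27)-(29) concrete, the following is a minimal sketch, under assumed array shapes and a simple central-difference divergence, of how the data term and the physical penalties might be assembled; it is illustrative only and omits the boundary-traction and symmetry bookkeeping of the actual implementation.

```python
# Illustrative sketch (not the authors' code) of the loss in Eqs. (27)-(29):
# CF = MSE (data term) + PEN (equilibrium, traction and symmetry penalties).
# Shapes, penalty values and the simple finite-difference divergence are
# assumptions made for this example.
import torch

def divergence(sig, hx, hy):
    """Central-difference divergence of a Voigt stress field.
    sig: (batch, nex, ney, 3) with components [s_xx, s_yy, s_xy]."""
    dsxx_dx = (sig[:, 2:, 1:-1, 0] - sig[:, :-2, 1:-1, 0]) / (2 * hx)
    dsxy_dy = (sig[:, 1:-1, 2:, 2] - sig[:, 1:-1, :-2, 2]) / (2 * hy)
    dsxy_dx = (sig[:, 2:, 1:-1, 2] - sig[:, :-2, 1:-1, 2]) / (2 * hx)
    dsyy_dy = (sig[:, 1:-1, 2:, 1] - sig[:, 1:-1, :-2, 1]) / (2 * hy)
    return torch.stack([dsxx_dx + dsxy_dy, dsxy_dx + dsyy_dy], dim=-1)

def pgnniv_loss(U_pred, U_meas, sig, sig_top, t_top,
                p=(1.0, 1.0, 1.0, 1.0), hx=0.01, hy=0.01):
    """U_pred/U_meas: (batch, nx, ny, 2); sig: element stresses;
    sig_top: traction recovered from the predicted stresses on the top edge;
    t_top: applied traction there (both precomputed by the caller)."""
    p1, p2, p3, p4 = p
    mse = p1 * ((U_pred - U_meas) ** 2).mean()               # Eq. (28)
    pen_eq = p2 * (divergence(sig, hx, hy) ** 2).mean()      # div(sigma) = 0
    pen_bc = p3 * ((sig_top - t_top) ** 2).mean()            # sigma.n = t on Gamma_N
    pen_sym = p4 * ((U_pred[:, 0, :, 0] ** 2).mean()         # U_x(x=0, y) = 0
                    + (U_pred[:, :, 0, 1] ** 2).mean())      # U_y(x, y=0) = 0
    return mse + pen_eq + pen_bc + pen_sym                   # Eq. (27)
```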
Recall that no penalty for the compatibility in the domain is included since \(\mathbf{E}-\frac{1}{2}\left(\mathbf{F}^{\intercal}\mathbf{F}-\mathbf{I}\right)\) (or \(\mathbf{\varepsilon}-\frac{1}{2}(\nabla\otimes\mathbf{U}+\mathbf{U}\otimes\nabla)\)) is identically \(\mathbf{0}\). The ANN minimization problem reads therefore: \[\min_{\mathbf{W}}\mathrm{CF}(\mathcal{E};\mathbf{W}), \tag{30}\] where \(\mathbf{W}\) are the network parameters and \(\mathcal{E}=\{\bar{\mathbf{t}}^{(i)},\bar{\mathbf{U}}^{(i)}|i=1,\cdots,N\}\) is a given training data-set. By minimizing this function (and ensuring that no overfitting is observed by examining the predictions for test data), we will obtain predictions of displacements, stresses and strains. For simplicity, Algorithm 1 details a stochastic gradient descent version of the optimization, even though in this work we always used the Adam optimizer. From the theoretical point of view, Eq. (30) presents a complex constrained optimization problem that has been widely studied in the context of applied mathematics, e.g., by means of Lagrange multipliers. However, when NNs come into play along with PDEs, the optimization becomes more involved: the complex nature of the Pareto front, extensively studied in [73] for Physics-Informed Neural Networks (PINNs), determines that the optimum is a state where an individual loss cannot be further decreased without increasing at least one of the others, and therefore the optimal set of weighting hyperparameters \(p_{i}\) cannot be inferred in advance. These weighting hyperparameters \(p_{i}\), commonly referred to as penalties, arise in a natural way if they are regarded as real numbers scaling the covariance matrix of the variables' _virtual_ maximum likelihood probability distribution. This concept was introduced in [74] and [75] in the context of state-space particle dynamics. ``` Input: PGNNIV architecture, batch size \(n_{b}\), penalties \(p_{k}\), \(k=1,2,3,4\), and number of iterations \(M\); Data: external forces \(\bar{\mathbf{t}}^{(i)}\), measured displacements \(\bar{\mathbf{U}}^{(i)}\), \(i=1,\ldots,N\); Initialization of PGNNIV parameters, \(\mathbf{w}=\mathbf{w}^{0}\), \(j=0\); repeat for \(i=1,\ldots,n_{b}\) do \(\mathtt{U}^{(i)}\leftarrow\mathsf{Y}\left(\bar{\mathbf{t}}^{(i)};\mathbf{w}\right)\); /* Predictive network */ \(\mathtt{E}^{(i)}=\mathsf{KIN}\left(\mathtt{U}^{(i)}\right)\); /* Green-Lagrange or Cauchy strain tensor */ \(\mathtt{S}^{(i)}\leftarrow\mathsf{H}\left(\mathtt{E}^{(i)};\mathbf{w}\right)\); /* Explanatory network */ end for \(\mathrm{MSE}=\frac{1}{n_{b}}\sum_{i=1}^{n_{b}}\left[p_{1}||\bar{\mathtt{U}}^{(i)}-\mathtt{U}^{(i)}||^{2}\right]\); \(\mathrm{PEN}=\frac{1}{n_{b}}\sum_{i=1}^{n_{b}}\left[p_{2}||\mathsf{DIV}(\mathtt{S}^{(i)})||^{2}+p_{3}||\mathtt{S}^{(i)}\cdot\mathbf{N}-\mathbf{\bar{T}}^{(i)}||^{2}+p_{4}\left(||\mathtt{U}^{(i)}_{x}(x=0,y)||^{2}+||\mathtt{U}^{(i)}_{y}(x,y=0)||^{2}\right)\right]\); \(\mathrm{CF}=\mathrm{MSE}+\mathrm{PEN}\); \(\mathbf{w}\leftarrow\mathbf{w}-\nabla_{w}\mathrm{CF}\); /* Stochastic gradient descent step */ \(j\gets j+1\); until \(j=M\); Output: Optimal parameters \(\mathbf{w}^{*}=\mathbf{w}\) for \(\mathsf{Y}\) and \(\mathsf{H}\); ``` **Algorithm 1** PGNNIV learning algorithm Figure 3 shows a graphical representation of the different structures involved (tensorial fields) and the links between them (known and unknown operators) for finite strains solid mechanics. #### 2.3.3 Details about the discretization. 
The discretization of both the space and time domains lies at the basis of numerical methods. In the particular case of solid mechanics under the hypotheses considered here, time discretization turns out not to be relevant for the overall computations, since loads are applied in a quasi-static way and the sole discretization of the geometry provides a very good approximation of how the continuum solid behaves. Traditional FEMs follow a matrix-based approach that leads to algebraic systems whose solution approximates that of the continuous problem. By subdividing the whole domain into small parts (_finite elements_), PDEs governing the physical phenomena occurring in the particular geometry can be approximated by means of computable functions to generate algebraic systems even for complex geometries. However, FEMs require exact knowledge of the properties of the material and are usually time-consuming. On the contrary, PGNNIVs require no information about the material properties since these are learned during the training process of the network, and the calculation time for the forward problem is reduced to seconds at prediction time in the online loop. The discrete nature of methods such as FEM, which subdivide space into small elements, very closely resembles that of PGNNIVs, which comprise a number of discrete units (neurons) to represent field variables. Moreover, the differential operators commonly used in these methods also admit a suitable description in the PGNNIV framework using convolutional filters. Figure 3: **Graphical representation of the designed 2D-planar stress PGNNIV.** All significant tensorial fields of the problem are represented: the input variables (top and right tractions and volume forces, where the latter are assumed to be null and are thus formally removed from the input), the output variables (the displacement field at each node), as well as the internal variables of the problem (stress and strain fields, represented in Voigt notation). For instance, it is possible to define the discrete gradient filter GRAD acting on the nodes for obtaining values on the elements or acting on the elements for obtaining values on the nodes. For example, if \(\mathtt{w}=\mathsf{GRAD}\otimes\mathtt{U}\), then \[\mathtt{w}[i,j|1,1|m]=\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{U}[i,j+1|1|m]+\Delta_{x}\mathtt{U}[i,j-1|1|m]\right),\] \[\mathtt{w}[i,j|1,2|m]=\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{U}[i+1,j|1|m]+\Delta_{y}\mathtt{U}[i-1,j|1|m]\right),\] \[\mathtt{w}[i,j|2,1|m]=\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{U}[i,j+1|2|m]+\Delta_{x}\mathtt{U}[i,j-1|2|m]\right),\] \[\mathtt{w}[i,j|2,2|m]=\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{U}[i+1,j|2|m]+\Delta_{y}\mathtt{U}[i-1,j|2|m]\right),\] where \(\Delta_{x}\mathtt{U}[i,\cdot|1|m]=\mathtt{U}[i+1,\cdot|1|m]-\mathtt{U}[i-1,\cdot|1|m]\) and \(\Delta_{y}\mathtt{U}[\cdot,j|1|m]=\mathtt{U}[\cdot,j+1|1|m]-\mathtt{U}[\cdot,j-1|1|m]\). Analogously, if \(\mathtt{R}=\mathsf{GRAD}\cdot\mathtt{T}\), \[\mathtt{R}[i,j|1|m]=\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{T}[i,j+1|1,1|m]+\Delta_{x}\mathtt{T}[i,j-1|1,1|m]\right)+\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{T}[i+1,j|1,2|m]+\Delta_{y}\mathtt{T}[i-1,j|1,2|m]\right),\] \[\mathtt{R}[i,j|2|m]=\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{T}[i,j+1|2,1|m]+\Delta_{x}\mathtt{T}[i,j-1|2,1|m]\right)+\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{T}[i+1,j|2,2|m]+\Delta_{y}\mathtt{T}[i-1,j|2,2|m]\right).\] Now we can define the different discretized differentials. 
For instance, the 2D-discretized symmetric gradient of a vector field \(\mathbf{V}\) is \(\mathsf{SGRAD}(\mathbf{V})=\frac{1}{2}\left(\mathsf{GRAD}\otimes\mathbf{V}+\mathbf{V}\otimes\mathsf{GRAD}\right),\) where GRAD is the discrete gradient operator. Therefore, for large strains we have \[\mathsf{E}=\frac{1}{2}\left(\mathsf{GRAD}\otimes\mathbf{U}+\mathbf{U}\otimes\mathsf{GRAD}+\left(\mathsf{GRAD}\otimes\mathbf{U}\right)\left(\mathbf{U}\otimes\mathsf{GRAD}\right)\right)\] and for small strains \[\mathsf{E}=\mathsf{SGRAD}(\mathbf{U})=\frac{1}{2}\left(\mathsf{GRAD}\otimes\mathbf{U}+\mathbf{U}\otimes\mathsf{GRAD}\right).\] Similarly, the 2D-discretized divergence of a tensor field \(\mathbf{T}\) is defined as: \[\mathsf{DIV}(\mathbf{T})=\mathsf{GRAD}\cdot\mathbf{T},\] so we can express the equilibrium equation both in infinitesimal and in finite strains theory as \(\mathsf{DIV}(\mathbf{\sigma})=\mathbf{0}\) or \(\mathsf{DIV}(\mathbf{P})=\mathbf{0}\) respectively. ### 2.4 Data generation and training process In this work, we generate synthetic data, although the methodology is the same for real experimental data. Synthetic data generation is more practical, as it allows for error validation by comparing with the reference solutions, and inexpensive, given that an accurate numerical solution is available via FEM analysis. Small strains: linear, softening and hardening elastic materials. For the creation of the linear, softening and hardening materials, we used Matlab, in co-simulation with Abaqus CAE/6.14-2. Matlab was used to automatically and iteratively generate an Abaqus input file that contained the geometry and load profiles. Once each numerical simulation is completed, the results are stored back into Matlab. For all the test cases, the geometry is the same, whereas the variability in the data-set is achieved by randomly changing the load profiles so that all the examples correspond to different experiments. The load profiles are parabolic and are generated for both the right and top contours. We consider three elastic materials with different constitutive laws. On the one hand, we choose an isotropic linear elastic material with elastic modulus \(E=1000\) Pa and Poisson's coefficient \(\nu=0.3\). On the other hand, we consider two nonlinear materials, one with softening properties and another one with hardening properties. In Abaqus, they were modeled as plastic materials with no unloading effects caused by the removal of the load, and with strain ranges confined within very small values, thus allowing the nonlinear constitutive law to comply with the infinitesimal strains hypothesis. A relation of the type \(\sigma=K\varepsilon^{n}\) was used, with values of \(K\) and \(n\) specified in Table 1. The data-set comprises \(N=10^{3}\) FEM-simulations for the linear material and \(N=10^{4}\) FEM-simulations for the hardening and softening materials. Finite strains: Ogden-like hyperelastic material. For the finite strains case, an incompressible Ogden-like hyperelastic material of order 3 is used for the data-set generation. The incompressible Ogden hyperelastic material of order \(m\) is defined in terms of its strain-energy density function [76]: \[\Psi(\mathbf{C})=\Psi(\lambda_{1},\lambda_{2},\lambda_{3})=\sum_{p=1}^{m}\frac{\mu_{p}}{\alpha_{p}}\left(\lambda_{1}^{\alpha_{p}}+\lambda_{2}^{\alpha_{p}}+\lambda_{3}^{\alpha_{p}}-3\right). \tag{31}\] In Table 2 we report the material parameters used for the data generation. 
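As a small illustration of the constitutive laws used to generate the synthetic data, the sketch below evaluates the power-law relation \(\sigma=K\varepsilon^{n}\) and the Ogden strain-energy density of Eq. (31); the parameter values are those reported in Tables 1 and 2, and the function names are choices made for this example.

```python
# Illustrative sketch of the constitutive laws used for synthetic data
# generation: the power-law relation sigma = K * eps**n (Table 1) and the
# order-3 Ogden strain-energy density of Eq. (31) (Table 2). Function names
# are choices made for this example.
import numpy as np

def power_law_stress(eps, K, n):
    """Uniaxial softening/hardening law sigma = K * eps**n."""
    return K * np.power(eps, n)

def ogden_energy(lam1, lam2, lam3, mu, alpha):
    """Incompressible Ogden strain-energy density, Eq. (31)."""
    return sum(m / a * (lam1**a + lam2**a + lam3**a - 3.0)
               for m, a in zip(mu, alpha))

# Parameter values as reported in Tables 1 and 2
K_soft, n_soft = 18.69, 0.45
K_hard, n_hard = 1.869e12, 3.5
mu = (281.0, -280.0, 0.31)
alpha = (1.66, 1.61, 38.28)

eps = 1e-3
print(power_law_stress(eps, K_soft, n_soft))     # softening material
lam1, lam2 = 1.05, 1.02                          # biaxial stretches
lam3 = 1.0 / (lam1 * lam2)                       # incompressibility
print(ogden_energy(lam1, lam2, lam3, mu, alpha))
```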
We produce \(N=10^{4}\) examples corresponding to uniform biaxial tests, where \(\lambda_{1},\lambda_{2}\in[1;1.10]\). For an incompressible membrane under biaxial deformation, assuming a plane stress state, the solution of the problem using an Ogden strain-energy function has an analytical solution [77]. The displacement fields corresponding to uniform biaxial deformations are \[U_{x}(x,y)=\lambda_{1}x,\quad U_{y}(x,y)=\lambda_{2}y,\quad U_{z}(x,y,z)=\frac{1}{\lambda_{1}\lambda_{2}}z,\] so the non-vanishing components of the Green-Lagrange deformation tensor, \(\mathbf{E}\), are \[E_{xx}=\frac{1}{2}(\lambda_{1}^{2}-1),\quad E_{yy}=\frac{1}{2}(\lambda_{2}^{2}-1),\quad E_{zz}=\frac{1}{2}((\lambda_{1}\lambda_{2})^{-2}-1).\] As we are assuming plane stress, that is \(\sigma_{zz}=\sigma_{xz}=\sigma_{yz}=0\), the non-vanishing components of the first-order Piola-Kirchhoff stress tensor are \[P_{xx}=\frac{1}{\lambda_{1}}\sum_{k=1}^{3}\mu_{k}\left(\lambda_{1}^{\alpha_{k}}-(\lambda_{1}\lambda_{2})^{-\alpha_{k}}\right),\quad P_{yy}=\frac{1}{\lambda_{2}}\sum_{k=1}^{3}\mu_{k}\left(\lambda_{2}^{\alpha_{k}}-(\lambda_{1}\lambda_{2})^{-\alpha_{k}}\right).\] Training process. For the evaluation of the methodology, we have trained four PGNNIVs corresponding to four cases: * Linear material with parametric explanatory network. * Linear, softening and hardening materials with non-parametric explanatory network. * Ogden-like material with parametric explanatory network. * Ogden-like material with non-parametric explanatory network. For all the data-sets considered, there is a number of hyperparameters that have been tuned for obtaining the results that follow with the different networks, namely the learning rate \(\beta\) and the four penalty coefficients \(p_{i}\), \(i=1,\ldots,4\). The specific values of these hyperparameters are reported together with the different network topologies in Appendix A.2. ## 3 Results When used as forward solvers, PGNNIVs can either predict measurable variables if force-displacement data is available, for example, through Digital Image Correlation (DIC) techniques, or explain the internal state of the solid if this is needed for a certain application. In this section, we validate the performance of PGNNIVs acting as a forward-solver against standard FEM solutions for the plate using the different materials described in Section 2.4, and also as a method for constitutive equation discovery. \begin{table} \begin{tabular}{|l|l|l|} \hline **Material** & \(K\) [Pa] & \(n\) [-] \\ \hline Softening & \(18.69\) & \(0.45\) \\ \hline Hardening & \(1.869\times 10^{12}\) & \(3.5\) \\ \hline \end{tabular} \end{table} Table 1: Parameter values for the softening and hardening materials. \begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline \(\mu_{1}\) & \(281\) Pa \\ \hline \(\mu_{2}\) & \(-280\) Pa \\ \hline \(\mu_{3}\) & \(0.31\) Pa \\ \hline \(\alpha_{1}\) & \(1.66\) \\ \hline \(\alpha_{2}\) & \(1.61\) \\ \hline \(\alpha_{3}\) & \(38.28\) \\ \hline \end{tabular} \end{table} Table 2: Parameter values for the Ogden hyperelastic material. ### 3.1 Predictive capacity #### 3.1.1 Infinitesimal strains case We first evaluate the prediction capacity of PGNNIVs for the different tested materials under random parabolic loads. 
For a quantitative evaluation of the predictive capacity of the PGNNIV, we define the Relative Error (\(\mathrm{RE}\)) of an array field \(\mathsf{I}\) as: \[\mathrm{RE}(\mathsf{I})=\frac{\sum_{I,J,K}\left(\hat{\mathsf{I}}[I|J|K]-\mathsf{I}[I|J|K]\right)^{2}}{\sum_{I,J,K}\mathsf{I}[I|J|K]^{2}}, \tag{32}\] where \(\hat{\mathsf{I}}\) is the predicted value and \(\mathsf{I}\) the value obtained using FEM. For instance, for the displacement field \(\mathbf{U}\) represented by the array \(\mathsf{U}\): \[\mathrm{RE}(\mathsf{U})=\frac{\sum_{I,J,K}\left(\hat{\mathsf{U}}[I|J|K]-\mathsf{U}[I|J|K]\right)^{2}}{\sum_{I,J,K}\mathsf{U}[I|J|K]^{2}}.\]

### Explanatory capacity

#### 3.2.1 Infinitesimal strains

Parameter identification. We first prescribe a parametric form for the constitutive equation (linear elasticity, in either its general anisotropic or its isotropic form) and evaluate the identification of the structural parameters in accordance with Eqs. (20) and (21) by computing the relative errors for a given parameter \(\lambda\), that are defined as: \[\epsilon_{r}(\lambda)=\frac{|\hat{\lambda}-\lambda|}{\lambda}, \tag{36}\] where \(\hat{\lambda}\) and \(\lambda\) are the predicted and real values respectively. The learned anisotropic and isotropic tensors, up to a precision of \(1\) Pa, are: \[\mathbf{D}_{\rm aniso}=\begin{pmatrix}1099&330&0\\ 330&1099&0\\ 0&0&511\end{pmatrix},\quad\mathbf{D}_{\rm iso}=\begin{pmatrix}1098&331&0\\ 331&1098&0\\ 0&0&767\end{pmatrix}.\] The errors of the elastic tensor when using the anisotropic or the isotropic elastic material model are respectively \(\epsilon_{r}(\mathbf{D}_{\rm aniso})=14.4\%\) and \(\epsilon_{r}(\mathbf{D}_{\rm iso})=0.2\%\), i.e. when the assumed hypotheses are true, the explanatory capacity of the network increases. Notwithstanding, the general anisotropic model has a certain explanatory capacity, as we may detect the supplementary structural symmetries in the resulting elastic tensor, that is, \(d_{13},d_{23}\ll d_{11},d_{12},d_{22},d_{33}\) and \(d_{11}\simeq d_{22}\). The last symmetry condition, \(d_{33}=d_{11}-d_{12}\), is not fulfilled, as the data-set is not very rich in large pure shear-stress states, so the model is not able to detect this symmetry.

Moving to specific structural parameter identification, Table 5 describes how the model is able to predict accurately the different elastic parameters. As commented before, the worst prediction is observed for the anisotropic model and the parameter that correlates shear stresses and strains. If, using the anisotropic model, we were interested in finding the value of the isotropic model parameters \(E\) and \(\nu\), it is possible to compute \(\nu\) using the relations \[\nu=\frac{d_{12}}{d_{11}}=\frac{d_{12}}{d_{22}},\] and then to compute \(E\) using \[E=d_{11}(1-\nu^{2})=d_{22}(1-\nu^{2})=d_{12}\frac{1-\nu^{2}}{\nu}=d_{33}(1+\nu).\] Of course, we obtain a different accuracy for each of these expressions. Using the anisotropic model, we obtain values of \(0.1\%\) and \(0.1\%\) for \(\epsilon_{r}(\nu)\) and values of \(0.04\%\), \(0.04\%\), \(0.4\%\) and \(34\%\) for \(\epsilon_{r}(E)\), in agreement with the previous observations.

State model discovery. While the predictive capacity of PGNNIVs does not necessarily surpass that of a classical (unconstrained) NN, a significant improvement is visible when assessing the explanatory capacity of the PGNNIV, which can be evaluated by its ability to learn the material constitutive law. We perform a virtual uniaxial test using the explanatory network, which corresponds to the functional representation \(\sigma_{xx}=\mathsf{H}_{1}(\varepsilon,0,0)\) for \(\varepsilon\in[\varepsilon_{\rm min};\varepsilon_{\rm max}]\), and we compare the PGNNIV predictions with a virtual uniaxial test produced with FEM, as described in Section 2.4. Results are shown in Figure 4.
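The error metric of Eq. (32) and the parameter-extraction relations above are straightforward to evaluate numerically. A minimal sketch follows, using the learned \(\mathbf{D}_{\rm aniso}\) entries quoted above; since those entries are rounded to 1 Pa, the recovered errors only approximately reproduce the tabulated values.

```python
import numpy as np

def relative_error(pred, ref):
    """Squared-norm relative error of Eq. (32) for arbitrary arrays."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

# Learned plane-stress elastic tensor (anisotropic ansatz), rounded to 1 Pa.
D_aniso = np.array([[1099.0, 330.0, 0.0],
                    [330.0, 1099.0, 0.0],
                    [0.0,   0.0,  511.0]])

# Recover the isotropic parameters from nu = d12/d11 and E = d11*(1 - nu**2);
# the reference values are E = 1000 Pa and nu = 0.3.
nu_hat = D_aniso[0, 1] / D_aniso[0, 0]
E_hat = D_aniso[0, 0] * (1.0 - nu_hat ** 2)

print(f"nu_hat = {nu_hat:.4f}  (rel. error {abs(nu_hat - 0.3) / 0.3:.2%})")
print(f"E_hat  = {E_hat:.1f} Pa (rel. error {abs(E_hat - 1000.0) / 1000.0:.2%})")
```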
The explanatory error is quantified as the normalized area confined between the real uniaxial test curve and the PGNNIV-predicted one in Figure 4. It is expressed as: \[\rm RE(\mathsf{H})=\sqrt{\frac{\int_{\varepsilon_{\rm min}}^{\varepsilon_{ \rm max}}(\hat{\sigma}_{xx}(\varepsilon)-\sigma_{xx}(\varepsilon))^{2}\,d \varepsilon}{\int_{\varepsilon_{\rm min}}^{\varepsilon_{\rm max}}\sigma_{xx} ^{2}(\varepsilon)\,d\varepsilon}}, \tag{37}\] \begin{table} \begin{tabular}{|c|c|} \hline **Parameter, \(\lambda\)** & **Relative error, \(\epsilon_{r}(\lambda)\) (\%)** \\ \hline Anisotropic model & \\ \hline \(d_{11}\) & \(0.02\) \\ \(d_{12}\) & \(0.08\) \\ \(d_{13}\) & \(\infty\) \\ \(d_{22}\) & \(0.02\) \\ \(d_{23}\) & \(\infty\) \\ \(d_{33}\) & \(33.59\) \\ \hline \hline Isotropic model & \\ \hline \(E\) & \(0.20\) \\ \hline \(\nu\) & \(0.52\) \\ \hline \end{tabular} \end{table} Table 5: Predicted and real values of the model parameters. where \(\hat{\sigma}_{xx}\) and \(\sigma_{xx}\) are the predicted and FEM stresses respectively, which result from strains \(\varepsilon\in[\varepsilon_{min};\varepsilon_{max}]\). #### 3.2.2 Finite strains We explore now the explanatory capacity for the finite strains case. If \(\lambda\in[\lambda_{\min};\lambda_{\max}]\) is the longitudinal stretch, \(\lambda_{1}\), the relative explanatory errors for a given transversal stretch \(\lambda_{2}\) are defined as: \[\mathrm{RE}_{xx}(\mathsf{H};\lambda_{2}) =\sqrt{\frac{\int_{\lambda_{\min}}^{\lambda_{\max}}(\hat{P}_{xx} (\lambda,\lambda_{2})-P_{xx}(\lambda,\lambda_{2}))^{2}\,\mathrm{d}\lambda}{ \int_{\lambda_{\min}}^{\lambda_{\max}}P_{xx}^{2}(\lambda,\lambda_{2})\, \mathrm{d}\lambda}}, \tag{38}\] \[\mathrm{RE}_{yy}(\mathsf{H};\lambda_{2}) =\sqrt{\frac{\int_{\lambda_{\min}}^{\lambda_{\max}}(\hat{P}_{yy} (\lambda,\lambda_{2})-P_{yy}(\lambda,\lambda_{2}))^{2}\,\mathrm{d}\lambda}{ \int_{\lambda_{\min}}^{\lambda_{\max}}P_{yy}^{2}(\lambda,\lambda_{2})\, \mathrm{d}\lambda}}. \tag{39}\] Note that, as the roles of \(x\) and \(y\) are symmetrical in the considered biaxial test (which is uniform, meaning that \(P_{xx}\) on the top contour has the same vale as \(P_{yy}\) on the right contour), the indicated errors are sufficient for illustrating the explanatory capacity of the method. For structural parameter identification, the formula used for error quantification is Eq. (36). Parameter identification.As explained previously, we first explicitly state the parametric shape of the constitutive equation, that is, we prescribe the material to be Ogden-like. Under these assumptions and for the uniform biaxial test \begin{table} \begin{tabular}{|l|l|} \hline **Type of material** & \(\mathrm{RE}(\mathsf{H})\) (\%) \\ \hline Linear & 3.02 \\ \hline Softening & 0.96 \\ \hline Hardening & 5.92 \\ \hline \end{tabular} \end{table} Table 6: Explanatory errors for the \(\mathsf{H}\) model subjected to uniform uniaxial test. Figure 4: **PGNNIV prediction versus FEM solution of the uniaxial test curve for the different data-sets. 
We observe good agreement between FEM solution (continuous line) and PGNNIV prediction (dashed line), for the softening (a), linear (b) and hardening (c) materials.** considered, the constitutive relation writes \[P_{xx} =\frac{1}{\sqrt{2E_{xx}+1}}\sum_{p=1}^{3}\mu_{k}\left[(2E_{xx}+1)^{ \alpha_{k}/2}-\left((2E_{xx}+1)(2E_{yy}+1)\right)^{-\alpha_{k}/2}\right],\] \[P_{yy} =\frac{1}{\sqrt{2E_{yy}+1}}\sum_{p=1}^{3}\mu_{k}\left[\left(2E_{yy }+1\right)^{\alpha_{k}/2}-\left((2E_{xx}+1)(2E_{yy}+1)\right)^{-\alpha_{k}/2} \right].\] Therefore, the parameters \(\alpha_{k}\), \(\mu_{k}\), \(k=1,2,3\) are, in principle, the ones that ought to be learned by the explanatory network. We obtain values for the parameters of \(\mu_{1}=276\,\mathrm{Pa}\), \(\mu_{2}=-277\,\mathrm{Pa}\), \(\mu_{3}=0.31\,\mathrm{Pa}\), \(\alpha_{1}=1.53\), \(\alpha_{2}=1.47\) and \(\alpha_{3}=38.32\). The relative errors for the different parameters are shown in Table 7. It is important to note that there are some parameters that are more accurately predicted than others, i.e. they are superfluous. This fact relies on the capacity of each of the parameters for explaining the material response, as observed in Fig. 5, where we compare the theoretical constitutive relation with the one obtained using the learned parameters. The explanatory errors are reported in Table 8, adding evidence of the explanatory power of the method despite the discrepancies in some parameters. State model discovery.We now evaluate the model discovered by the PGNNIV with a virtual biaxial test using the explanatory network. The functional representation is now \((P_{xx},P_{yy},P_{xy})=\mathsf{H}(E_{xx},E_{yy},0)\) for \(E_{xx}\in[E_{\mathrm{min}};E_{\mathrm{max}}]\) and we compare the PGNNIV predictions with the model used for the data generation. The results are shown in Fig. 6 for three different values of \(\lambda_{2}\), and the errors, computed according to Eqs.(39) are displayed in Table 8. \begin{table} \begin{tabular}{|c|c|} \hline **Parameter**, \(\lambda\) & **Relative error**, \(\epsilon_{r}(\lambda)\) (\%) \\ \hline \(\mu_{1}\) & \(1.7\) \\ \(\mu_{2}\) & \(-1.1\) \\ \(\mu_{3}\) & \(0.5\) \\ \(\alpha_{1}\) & \(7.7\) \\ \(\alpha_{2}\) & \(8.6\) \\ \(\alpha_{3}\) & \(0.1\) \\ \hline \end{tabular} \end{table} Table 7: Predicted and real values of the model parameters for the Ogden material. Figure 5: **Parametric PGNNIV prediction versus analytic solution of the uniform biaxial test curve for the Ogden material.** We observe good agreement between analytical solution (continuous line) and PGNNIV prediction (dashed line), for the different values of \(\lambda_{2}\). This indicates that the network has a good explicability capacity even though some superfluous model parameters are not accurately fitted. ## 4 Discussion, conclusions and future work Throughout this work we have presented the mathematical foundations of PGNNIVs in the field of computational solid mechanics, which demonstrates to be a particularly interesting niche for the use of such methodology. We have demonstrated that PGNNIVs have both predictive and explanatory capacity: * **Predictive capacity**: PGNNIVs are able to accurately predict the solid response to new external stimuli in real time, something fundamental for optimization, control and probabilistic problems. They are also able to predict not only the solid response in terms of displacement field, but also the deformation and stress fields, without the need of any extra post-processing. 
This has been demonstrated in Section 3.1, where we have obtained relative errors always below \(10\%\). Controlling non-primary fields is sometimes important in engineering problems, as high stresses cause damage, plasticity or structural failure. As the explanatory network, once trained, encodes all the information about the material properties, it can be used for the prediction of stresses directly from the displacement fields, if necessary. * **Explanatory capacity:** PGNNIVs are able to unveil hidden state model equations, that is, the constitutive equations of computational solid mechanics. First, for parameter identification and fitting, PGNNIV are able to identify inherent material symmetries (such as isotropy) and also to predict the value of the structural model parameters with high accuracy. In the latter, PGNNIVs are in a certain sense an alternative to conventional least-square minimization problems [78] (e.g. using standard methods such as Levenberg-Marquardt algorithm), but making the use of software and hardware tools associated with ANN technology: Graphical Processor Units (GPUs) and Tensor Processor Units (TPUs), distributed and cloud computation, scalability, transfer and federate learning strategies among others. In addition, PGNNIVs address the more challenging problem of model-free unravelling of nonlinear materials constitutive laws. In Section 3.2, we have demonstrated the explanatory capacity of PGNNIVs both for parameter identification and state model discovery with many examples (linear and nonlinear materials both in the infinitesimal and finite strains framework). The relative error when predicting structural parameters is always below \(2\%\) except if the data-set does not contain \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(Y\) stretch ratio & \multicolumn{2}{c|}{**Parametric model**} & \multicolumn{2}{c|}{**Non-parametric model**} \\ \hline & \(\mathrm{RE}_{xx}(\mathsf{H};\lambda_{2})\) (\%) & \(\mathrm{RE}_{yy}(\mathsf{H};\lambda_{2})\) (\%) & \(\mathrm{RE}_{xx}(\mathsf{H};\lambda_{2})\) (\%) & \(\mathrm{RE}_{yy}(\mathsf{H};\lambda_{2})\) (\%) \\ \hline \(\lambda_{2}=1.02\) & 0.98 & 2.65 & 0.34 & 3.57 \\ \hline \(\lambda_{2}=1.05\) & 1.15 & 1.93 & 0.33 & 1.77 \\ \hline \(\lambda_{2}=1.08\) & 1.32 & 1.10 & 0.39 & 1.52 \\ \hline \end{tabular} \end{table} Table 8: Explanatory errors for the finite strains problem subjected to uniform biaxial test. Figure 6: **Non-parametric PGNNIV prediction versus analytic solution of the biaxial test curve for the Ogden material. We observe good agreement between FEM solution (continuous line) and PGNNIV prediction (dashed line), for the different values of \(\lambda_{2}\).** information about the material response to stimuli associated with a certain parameter or the parameter itself has not a direct impact in the explanatory capacity (superfluous parameters). Besides, the explanatory relative error is below \(10\%\) for all the cases analyzed. One important characteristic of PGNNIVs related to this double capacity is the fact that it is possible to decouple two sources of variability in the data obtained from a mechanical system that can be measured using any kind of sensor, namely, the stimuli variability and the variability related to system response. * The stimuli variability is stored in the predictive network, which acts as an autoencoder. The encoder is able to map data that lives in a space of dimension \(D\), to a latent space of dimension \(d\ll D\). 
The size of the latent space is therefore informative about the data variability, and the values of the latent variables are a compressed representation of data. In addition to theoretical considerations, this fact has important practical consequences: if we know the sources of variability of our system, it is possible to design the predictive network accordingly. * The physical interpretable knowledge, that is, the constitutive equation of the material, is delocalized, spread and diluted in the weights of the predictive network, but is also encoded in the relation \(\varepsilon\mapsto\mathbf{\sigma}\) or \(\mathbf{E}\mapsto\mathbf{P}\) that is learned by the explanatory network, in a much more structured way. Indeed, if it is intended to identify and adjust some structural parameters, this is possible. Otherwise, the constitutive model as a whole may be also unravelled using the expressiveness of ANN methods. In that sense, PGNNIVs are knowledge generators from data as most of ML techniques are, with the difference that for this particular method, the physical knowledge is directly distilled in a separated component. This particularity is what makes the difference between PINNs and PGNNIVs. In PINNs, data and mathematical physics models are seamlessly integrated for solving parametric PDEs of a given problem [42, 68], but, by construction, the information cannot be extrapolated to other situations. That means that, in the context of solid mechanics, the network trained for a given problem using PINNs cannot be used for predicting the response of the system under different volume loads or boundary conditions, which greatly weakens its ability as a predictive method. PGNNIVs overcome this difficulty precisely by distilling the physical information from the intrinsic variability of the stimuli. There is another paramount characteristic of PGNNIVs for computational solid mechanics that has been largely discussed and has explained their emergence. There is no need to have access to the values of internal variables, that are, strictly speaking, non-measurable as they are mathematical constructs coming from a scientific theory. In that sense, even if the bases for thermodynamically-appropriated ANN for constitutive equations in solid mechanics have been investigated [79], it is important to recall that stress fields, such as \(\mathbf{P}\) or \(\mathbf{\sigma}\) are not accessible without the need of extra hypothesis (geometry or load specific configurations). This fundamental issue should not be overlooked and many recent works have worked for this purpose. Efficient Unsupervised Constitutive Law Identification and Discovery (EUCLID) method is one of the most acclaimed efforts in that direction, either using sparse identification [80, 45, 81], clustering [82], Bayesian methods [83] or ANN [47]. However, EUCLID paradigm relies on the fact that the geometry and loads are appropriate enough to ensure strain-stress fields variability in a single specimen. When the geometry or the data acquisition capabilities do not satisfy this requirement, the only possibility is to take action on the data-set or the network, or at least to track the different load conditions and incorporate them to the computational pipeline. Finally, among the different PIML methods, PGNNIVs are more transparent than other approaches that have demonstrated to be very performant for computing the evolution of dynamical systems by incorporating thermodynamical constraints. 
Structure Preserving Neural Networks enforce first and second laws of thermodynamics as regularization term [84]. Even if in the cited work the GENERIC structure allows for a split between reversible and non-reversible (dissipative) components, the physical information is again diluted in all the network weights, rather than in some specific components. A dimensionality reduction of the dynamic data was also explored [85]. This may be interpreted also as an information reduction to distil physical knowledge, although the interpretability is still dark. In PGNNIVs, however, interpretable physical information (that is, knowledge) is located in specific ANN components. Nevertheless, the presented methodology still has some limitations and there exists room for exploration in several directions: 1. The data requirements for the problem in hands are high. In this work, we have generated data synthetically, but in reality sensors usually collect noisy data from experimental tests and there exist also important limitations concerning the size of the data-sets. A probabilistic (Bayesian) viewpoint will enable a new interpretation of PGNNIVs in the _small data_ regime, although this methodology is rather thought for systems where intensive data mining is possible, in which data quantity prevails over data quality. 2. Defining a suitable architecture for the Y and H networks is not a simple task and requires an iterative process, including the tuning of many hyperparameters. In addition, PGNNIVs require extra hyper-parameters, i.e. penalty coefficients \(p_{i}\), making the process even more involved and time consuming. We have presented some insights into to the complexity and architecture of both predictive and explanatory networks, related with both their prediction and explanation character, but either a great intuition for network design, or time-intensive trial-and-error iterative testing are needed. 3. In this work, we have made used of relatively coarse discretizations, but finer meshes will result in more expensive training processes due to the exponentially larger number of parameters required. More powerful computational strategies (distributed computing, parallelization) as well as more advanced hardware (GPUs and TPUs) will enable the acceleration of the PGNNIVs training processes although the problem of dealing with multidimensional and unstructured meshes still remains open. Many challenges lie ahead of the development of a more general PGNNIV framework. Next lines of research will address the formulation of PGNNIVs under finite strain assumptions using a much more theoretical basis, such as the presented in some recent works [79, 86, 87], leveraging its predictive and explanatory power. Furthermore, extensions of the 2D planar stress architecture to general 3D problems with more complex geometries and load scenarios as well as to more complex constitutive laws that might depend on time (visco-elasticity), or heterogeneous conditions, still pose major challenges for the future. Finally, the usage of real data from sensors, for example, through Digital Image or Volume Correlation tests (DIC, DVC) [71] or even more advanced methods such as Finite Element Model Updating (FEMU) [88] or Virtual Fields Methods (VFM) [89] will put to the test the applicability of PGNNIVs to real scientific problems in the field of engineering. 
In conclusion, we have demonstrated that, in the context of computational solid mechanics, PGNNIVs are a family of ANNs that accurately predict measurable and non-measurable variables, such as displacement and stress fields, in real time, and that are also able to describe or unravel the constitutive model with high accuracy for different linear and nonlinear (hyper-)elastic materials. Although this work is preliminary, the ingredients it comprises correspond to a general approach, and the methodology can be applied to cases of scientific interest with the necessary adaptations of the network architectures.
2304.03418
Sensitivity of ultralight axion dark matter search with optical quantum sensors
An optical quantum sensor (OQS) based on lasers and alkali-metal atoms is a sensitive ambient-temperature magnetometer that can be used in axion dark matter search with an inductor-capacitor (LC) circuit at kHz and MHz frequencies. We have previously investigated the sensitivity of an LC circuit-OQS axion detector to ultralight axion dark matter that could be achieved using a fT-noise OQS constructed in our lab. In this paper, we investigate the sensitivity that could be potentially reached by an OQS performing close to the fundamental quantum noise levels of 10 aT/$\sqrt{\text{Hz}}$. To take advantage of the quantum-limited OQS, the LC circuit has to be made of a superconductor and cooled to low temperature of a few K. After considering the intrinsic noise of the advanced axion detector and characterizing possible background noises, we estimate that such an experiment could probe benchmark QCD axion models in an unexplored mass range near 10 neV. Reaching such a high sensitivity is a difficult task, so we have conducted some preliminary experiments with a large-bore magnet and a prototype axion detector consisting of a room-temperature LC circuit and a commercial OQS unit. This paper describes the prototype experiment and its projected sensitivity to axions in detail.
Young Jin Kim, Leanne Duffy, Igor Savukov, Ping-Han Chu
2023-04-06T23:52:28Z
http://arxiv.org/abs/2304.03418v1
# Sensitivity of ultralight axion dark matter search with optical quantum sensors ###### Abstract An optical quantum sensor (OQS) based on lasers and alkali-metal atoms is a sensitive ambient-temperature magnetometer that can be used in axion dark matter search with an inductor-capacitor (LC) circuit at kHz and MHz frequencies. We have previously investigated the sensitivity of an LC circuit-OQS axion detector to ultralight axion dark matter that could be achieved using a fT-noise OQS constructed in our lab. In this paper, we investigate the sensitivity that could be potentially reached by an OQS performing close to the fundamental quantum noise levels of \(10~{}\mathrm{aT/\sqrt{Hz}}\). To take advantage of the quantum-limited OQS, the LC circuit has to be made of a superconductor and cooled to low temperature of a few K. After considering the intrinsic noise of the advanced axion detector and characterizing possible background noises, we estimate that such an experiment could probe benchmark QCD axion models in an unexplored mass range near 10 neV. Reaching such a high sensitivity is a difficult task, so we have conducted some preliminary experiments with a large-bore magnet and a prototype axion detector consisting of a room-temperature LC circuit and a commercial OQS unit. This paper describes the prototype experiment and its projected sensitivity to axions in detail. ## I Introduction Several mysteries in particle and astrophysics suggest that there are new particles yet to be discovered. One of them is an elusive cosmic substance, six times more abundant than the ordinary matter in the Universe, known as dark matter [1]. Another, seemingly unrelated mystery, is the fact that the strong nuclear interactions, described by quantum chromodynamics (QCD), are invariant under time-reversal with \(10^{-10}\) precision or better. The QCD axion, a hypothetical particle first proposed in the 1970s [2; 3; 4; 5], is an excellent candidate for the Universe's dark matter [6]: if lighter than \(\sim\)meV in mass, it can be produced with the correct abundance and temperature in the early Universe to account for dark matter. Furthermore, it provides a dynamical mechanism to suppress time-reversal asymmetries in QCD [2; 3; 4; 5]. The target of many experimental designs for axion direct detection is the extremely small signal induced by the axion's weak coupling to electromagnetism [7; 8; 9; 10]. To date, the benchmark QCD axion models of Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) [11; 12] and Kim-Shifman-Vainshtein-Sakharov (KSVZ) [13; 14] have both been probed in masses above 2.66 \(\mu\)eV by the Axion Dark Matter eXperiment (ADMX) [7] using a resonant cavity haloscope [15; 16]. The haloscope relies on the interaction of the dark matter axion field with a strong static magnetic field generated from a superconducting magnet. The smallest axion mass of \(\sim\mu\)eV that ADMX can search is limited by the physical size of the resonant cavity (\(\sim 1\) m), and thus the magnet bore in which it must fit: the cavity size should be comparable to the axion Compton wavelength, for example, \(\sim 1\) m and \(\sim 1\) km for axion mass of \(\mu\)eV and neV, respectively. Axions with mass around 10 neV are predicted by Grand Unified Theory (GUT) models of particle physics [17]. Due to the physical lower mass bound, cavity haloscopes cannot be used to search for these ultralight axions. 
A recent proposal to extend the use of cavity searches in ADMX is the use of reentrant cavities to reach lower masses, down to around 0.4 \(\mu\)eV [18]. At axion masses below this, a different experimental approach is required. As a part of that, an axion detector using an inductor-capacitor (LC) circuit coupled to a superconducting quantum interference device (SQUID), as a sensitive magnetometer, was first proposed by Sikivie, Sullivan, and Tanner [8]. We have previously investigated using an optical quantum sensor (OQS) as a sensitive magnetometer [19] and have developed an experiment prototype based on a commercial OQS unit. During this prototype development, we realized that the detector sensitivity can be potentially improved with a quantum-limited OQS, with a field noise floor of the order of 10 \(\mathrm{aT/\sqrt{Hz}}\). In this paper, we study the sensitivity of an advanced LC circuit-OQS experiment designed to take full advantage of the noise of a quantum-limited OQS. This study is organized as follows. In Section II, we discuss the magnetic signature of dark matter axions in an LC circuit-OQS axion detector. In Section III, we discuss noise sources in axion detection experiments. We investigate the intrinsic noise of the axion detector arising from the intrinsic noise of the OQS and the thermal Johnson noise of the LC circuit. For the improved axion detector sensitivity with a quantum-limited OQS, the LC circuit must be cooled to low temperature of a few K; otherwise the thermal noise of the circuit at room temperature dominates. In this case, other background noises could become significant. Therefore, we also investigate possible background noises, including the backaction noise of the OQS on the LC circuit and the thermal noise of surrounding magnetic shield. In Section IV, we describe development of a prototype experiment with a room temperature LC circuit and a commercial fT-noise OQS. The sensitivity of this prototype setup to axions is discussed in Section V. Potential improvements to take advantage of a quantum-limited OQS and the advanced axion detector's sensitivity that can be achieved are discussed in Section VI. We conclude our discussion in Section VII. ## II Axion signal and detection scheme Dark matter axions behave like a classically field oscillating at the axion Compton frequency, \(\omega_{a}=m_{a}\) with \(m_{a}\) being the axion mass, and permeating the entire Universe [20]. Thus the axion field can be written as \(a(t)=a_{0}\sin{(m_{a}t)}\), where \(a_{0}\) is the local amplitude of the axion field. Here natural units \(c=\hbar=\mu_{0}=1\) are used. The existence of axions in the presence of a static magnetic field \(\vec{B}_{0}\) gives rise to additional terms in the Maxwell equations [8]. In particular, Ampere's Law is modified to \[\vec{\nabla}\times\vec{B}_{0}-\frac{\partial\vec{E}}{\partial t}= g\Big{(}\vec{E}\times\vec{\nabla}a-\vec{B}_{0}\frac{\partial a}{ \partial t}\Big{)}+\vec{j}_{e}, \tag{1}\] where \(g\) is the coupling constant of the axion to two photons [21] and \(\vec{j}_{e}\) is the electrical current density associated with ordinary matter. It follows that the homogeneous axion field (\(\vec{\nabla}a\approx 0\)) can induce an electrical current density along \(\vec{B}_{0}\), \[\vec{j}_{a}(t)=-g\vec{B}_{0}[da(t)/dt]=-g\vec{B}_{0}\sqrt{2\rho_{DM}}\cos{(m_ {a}t)}, \tag{2}\] with the relation of \(a_{0}=\sqrt{2\rho_{DM}}/m_{a}\), where \(\rho_{DM}\approx 0.3\) GeV/cm\({}^{3}\) is the standard local dark matter density [22]. 
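Since the search frequency maps directly onto the axion mass through the Compton relation, it is convenient to have the conversion at hand. The short sketch below uses SI constants from scipy; the specific masses chosen are simply the benchmarks mentioned in the text (the ADMX bound, the reentrant-cavity reach, and the GUT-motivated 10 neV target).

```python
from scipy.constants import e, h, c

def mass_to_frequency(m_eV):
    """Axion Compton frequency nu_a = m_a c^2 / h, for a mass given in eV."""
    return m_eV * e / h            # Hz

def mass_to_wavelength(m_eV):
    """Axion Compton wavelength lambda_a = c / nu_a."""
    return c / mass_to_frequency(m_eV)

for m in (2.66e-6, 0.4e-6, 10e-9):   # eV
    print(f"m_a = {m:.2e} eV -> nu_a = {mass_to_frequency(m)/1e6:8.2f} MHz, "
          f"lambda_a = {mass_to_wavelength(m):8.2f} m")
# 10 neV corresponds to about 2.4 MHz and a Compton wavelength of roughly
# 120 m, far too large for a practical resonant cavity.
```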
In turn, \(\vec{j}_{a}(t)\) can produce a minute perpendicular magnetic field \(\vec{B}_{a}(t)\) oscillating at an angular frequency \(m_{a}\) through \(\vec{\nabla}\times\vec{B}_{a}=\vec{j}_{a}\). This led Sikivie, Sullivan, and Tanner to propose axion detection based on the LC circuit and SQUIDs [19]. Here we propose an LC circuit design with a gradiometer coil and a quantum-limited OQS for detection of the axion-induced oscillating magnetic field \(\vec{B}_{a}\). A sketch of the axion detection concept with an OQS is shown in Fig. 1. The axion detector is comprised of two components. The first component is the first-order planar gradiometer input coil and a two-loop circular output coil, the two coils connected in series with capacitors (LC circuit) to resonantly amplify the \(\vec{B}_{a}\). The second component is an OQS to detect the amplified magnetic field \(\vec{B}_{d}\). The OQS manipulates atomic spins for sensitive magnetic sensing based on lasers, alkali-metal vapor cells, and optical components [23], and operates at ambient temperatures without the need for cryogens. Cryogen-free operation of the OQS has many advantages for various applications, and for the axion search the main advantage is convenience for OQS replacement and optimization. The basic principle for a typical implementation of OQSs is shown in Fig. 2. Typically, two laser beams are used: one circularly polarized pump beam to optically polarize the spins of unpaired electrons of alkali-metal atoms, such as rubidium (Rb) or potassium (K), and one linearly polarized probe beam to read out the state of the electron spins. The wavelengths of the laser beams are tuned to or near an atomic transition between the ground state and an excited state, most often the \(P_{1/2}\) state. The laser beams are sent to overlap in an alkali-metal vapor cell heated to elevate an alkali-metal atom density, e.g., \(\sim 10^{14}\) cm\({}^{-3}\) in case of K atoms at a \(\sim 180\)\({}^{\circ}\)C cell temperature. The action of the pump beam, referred to as optical pumping, orients nearly all of the electron spins along its propagation direction. The interaction of a weak external magnetic field to be detected with the polarized electron spins leads to a change in the orientations of the spins. The degree of change is proportional to the strength of the magnetic field. The non-zero spin projection along the probe beam results in a rotation of the light linear polarization plane of the probe beam caused by the Faraday effect. This optical rotation is precisely detected with a polarizing beam splitter and two photo-detectors as a small difference in the balanced output. The OQS frequency of the maximum sensitivity, \(\nu_{m}\), is tuned by a small bias static magnetic field \(B_{b}\) through \(\nu_{m}=\gamma B_{b}\), where \(\gamma=7\) GHz/T is the gyromagnetic coefficient of Rb-87 or K-39/41 electron spins. To detect the amplified magnetic axion signal \(\vec{B}_{d}\), the OQS vapor cell is placed at the center of the output coil. The input gradiometer coil is located inside a magnet bore at room temperature, producing a static magnetic field \(B_{0}\). The \(\vec{B}_{a}\) induces a voltage in the input gradiometer by Faraday's law, which drives a current through the output coil producing an Figure 1: Sketch of the LC circuit-OQS axion detector showing electrical connections of the LC circuit and geometrical arrangement of the OQS and magnetic shield (not to scale). 
Figure 2: OQS configuration: two laser beams overlap in a vapor cell of alkali-metal atoms. The circularly polarized pump beam orients atomic spins along its direction (the dashed green arrow). The external magnetic field tilts the spins by a small angle (the solid green arrow). The tilt leads to a rotation of the plane of linear polarization of the probe beam (dotted and solid black arrows), which is measured by a polarizing beam splitter and two photo-detectors. oscillating magnetic field \(\vec{B}_{d}\) that is our observable axion signal. As \(\vec{B}_{a}\) is azimuthally symmetric, each pickup loop of the input gradiometer covers each half-side of a horizontal plane through the magnet and central axis. This configuration doubles the \(B_{a}\) signal, but greatly reduces the background magnetic noise that generates voltages of opposite signs. As this is a resonant LC circuit axion detector, many aspects are similar to the proposal of Sikivie, Sullivan, and Tanner [8], but an OQS is uniquely used as a sensitive magnetometer. Thus our design is modified for the coupling to the OQS. Our initial experimental proposal is described in detail in a previous publication [19]. When the LC circuit resonates at an angular frequency equal to the axion mass, i.e., \(\omega=1/\sqrt{(L_{\rm in}+L_{\rm out})C}=m_{a}\) with \(L_{\rm in}\) and \(L_{\rm out}\) being the inductances of the input and output coils and \(C\) being the capacitance of the capacitor (this includes the two coils' self-capacitance and the capacitance of leads, as well as other parasitic effects), the magnitude of the current in the circuit is given by \[|I|=\frac{Q|\Phi_{a}|}{L_{\rm in}+L_{\rm out}}, \tag{3}\] where \(Q=\omega(L_{\rm in}+L_{\rm out})/R\) is the quality factor of the LC circuit, \(R\) is the total AC resistance of the circuit, and \(\Phi_{a}\) is the axion-induced magnetic flux through the input gradiometer coil. Using Eq. (2), cylindrical coordinates, \((z,\rho,\phi)\), and \(\vec{B}_{0}=B_{0}\hat{z}\), the axion-induced oscillating magnetic field \(\vec{B}_{a}\) in Fig. 1 is [19] \[\vec{B}_{a}=-\frac{g\sqrt{2\rho_{DM}}B_{0}\rho}{2}\hat{\phi}, \tag{4}\] and thus the magnitude of \(\Phi_{a}\) is \[|\Phi_{a}|=|\int 2N_{\rm in}\vec{B}_{a}\cdot d\vec{A}|=2N_{\rm in}V_{\rm in }g\sqrt{2\rho_{DM}}B_{0}, \tag{5}\] where \(N_{\rm in}\) is the number of turns of each pickup loop of the input gradiometer coil and \(V_{\rm in}=l_{\rm in}r_{\rm in}^{2}/4\) is a geometric factor for the input gradiometer coil with \(l_{\rm in}\) and \(r_{\rm in}\) as its length and width. The output coil is composed of two loops in series with the center-to-center spacing, \(2d\). The current \(I\) in Eq. (3) flowing through the output coil produces the magnetic field \(\vec{B}_{d}\) at the location of the OQS sensing volume and thus the magnitude of \(\vec{B}_{d}\) is \[|B_{d}|= \frac{N_{\rm out}|I|}{r_{\rm out}[1+(d/r_{\rm out})^{2}]^{3/2}}\] \[= \frac{N_{\rm out}Q|\Phi_{a}|}{r_{\rm out}[1+(d/r_{\rm out})^{2}] ^{3/2}(L_{\rm in}+L_{\rm out})}, \tag{6}\] where \(r_{\rm out}\) and \(N_{\rm out}\) is the radius and the number of turns of each loop of the output coil, respectively. Substituting Eq. (5) to Eq. (6), the field magnitude is \[|B_{d}|=\frac{2N_{\rm in}N_{\rm out}QV_{\rm in}g\sqrt{2\rho_{DM}}B_{0}}{r_{ \rm out}[1+(d/r_{\rm out})^{2}]^{3/2}(L_{\rm in}+L_{\rm out})}. \tag{7}\] This indicates that the large magnet bore size and strong field \(B_{0}\) are critical to improve the axion signal strength. 
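To get a feel for the geometric transfer from the pickup flux to the field seen by the OQS, Eqs. (3) and (6) can be combined into a single flux-to-field factor. The sketch below reinstates \(\mu_{0}\) (set to 1 in the natural units used above) and takes the coil geometry and quality factor of the prototype described in Sec. IV; it does not attempt the axion-side unit conversion of Eq. (7).

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]

def flux_to_field(Q, N_out, r_out, d, L_total):
    """|B_d| per unit pickup flux |Phi_a|, from Eqs. (3) and (6) in SI units.

    |I|   = Q * |Phi_a| / L_total
    |B_d| = mu0 * N_out * |I| / (r_out * (1 + (d/r_out)**2)**1.5)
    """
    geom = r_out * (1.0 + (d / r_out) ** 2) ** 1.5
    return MU0 * N_out * Q / (geom * L_total)

# Prototype values reported in Sec. IV: Q = 43, N_out = 4, r_out = 2.9 cm,
# d = 2.0 cm, L_in + L_out ~ 8.6 uH.
k = flux_to_field(Q=43, N_out=4, r_out=0.029, d=0.020, L_total=8.6e-6)
print(f"B_d / Phi_a ~ {k:.0f} T per Wb of axion-induced pickup flux")
```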
As the axion mass (frequency) is unknown, the axion detector must be tuned across a range of frequencies. This will be achieved by simultaneous tuning the OQS frequency range by changing its bias magnetic field and the LC circuit by adjusting capacitors. When the LC circuit resonates at the axion mass, the \(\vec{B}_{d}\) is enhanced by the quality factor of the LC circuit. To find the axion, the task is to detect an extremely small signal above noise sources. In the following, we investigate the noise contributions to determine the sensitivity that a quantum-limited OQS offers for axion dark matter detection. ## III Noise sources The two main sources of noise in the axion detector in Fig. 1 are the intrinsic magnetic field noise of the OQS, \(\delta B_{\rm OQS}\), and the magnetic Johnson noise (or thermal noise) of the LC circuit, \(\delta B_{J}\). Therefore, the total magnetic noise of this detector is given by \[\delta B_{d}=\sqrt{\delta B_{J}^{2}+\delta B_{\rm OQS}^{2}}. \tag{8}\] When the field noise of the OQS approaches the fundamental quantum limit, the LC circuit must be cooled or its intrinsic thermal noise will dominate the experiment. At low temperatures of the LC circuit, additional background noise sources must also be considered, which include the backaction noise of the OQS on the LC circuit and thermal noise from experimental magnetic shielding. This is illustrated in Fig. 3, and discussed in detail in the following. ### Fundamental Quantum Noise Limit of OQS As in any quantum sensor, the intrinsic magnetic field noise of OQSs is ultimately limited by quantum fluctuations. In this sensor, these are due to the finite number of alkali-metal Figure 3: Illustration of background noise contributions using an OQS as the magnetometer for axion detection with an LC circuit. In addition to the intrinsic thermal (magnetic Johnson) noise of the LC circuit and the intrinsic field noise of the OQS, additional noise contributions come from the backaction noise of the OQS on the LC circuit and the thermal noise of experimental magnetic shielding, here copper shield. atomic spins and the probe beam photons, both used for magnetic field sensing. Fundamentally, the three dominant sources of quantum field noise in OQSs are: (i) the spin projection noise caused by the finite number of alkali-metal atomic spins, (ii) the photon shot noise resulting from the finite number of probe beam photons, and (iii) the light-shift noise caused by fluctuations in the polarization of the probe beam [24]. The quadrature sum of the three individual noise sources determines the fundamental quantum noise limit (QNL) of so-called radio-frequency OQS [24], \[\delta B_{\text{QNL}}=\frac{1}{\gamma\sqrt{nV}}\sqrt{\frac{4}{T_{2}}+\frac{R_{ pr}\text{OD}}{32}+\frac{8}{R_{pr}\text{OD}T_{2}^{2}\eta}} \tag{9}\] operating at frequencies much above 10 kHz where spin-exchange interaction is suppressed by optical pumping into the stretched state \(|FF>\). Here \(\gamma\) and \(n\) are the gyromagnetic ratio and the density of alkali-metal atoms, respectively; \(V\) is the active measurement cell volume defined by the overlap of the pump and probe beams; \(T_{2}\) is the coherence time of the electron spins of alkali-metal atoms; \(R_{pr}\) is the absorption rate of photons from the probe beam; \(\eta\approx 0.8\) is the photodiode quantum efficiency in the probe beam readout [24]; and OD is the optical depth of the probe beam. 
If this expression inside the square root is optimized with respect to the product of \(R_{pr}\) and OD (the optimal condition in Fig. 4), it will depend on \(T_{2}\) as the first term, and the field noise of OQS will be limited by \(T_{2}\) and the number of spins. Increasing \(T_{2}\) can be obtained by pumping the electron spins into the stretched state to reduce the dominant spin-exchange relaxation by alkali-metal atom collisions [24]. The number of spins can be increased by using a large vapor cell. Figure 4 shows the fundamental quantum noise limit of OQS for operation using K spins, calculated using Eq. (9) with \(\gamma=7\times 10^{9}\) Hz/T, \(n=7\times 10^{13}\) cm\({}^{-3}\), \(V=100\) cm\({}^{3}\), and \(T_{2}=3.5\) ms. It indicates that the OQS field noise of 10 aT/\(\sqrt{\text{Hz}}\) can be achievable at the optimal condition. The K atoms (natural abundance mixture of two stable isotopes K-39 and K-41 both having the same nuclear spin 3/2) are selected owing to their lower spin-destruction cross section between atoms [24] (10 times lower than that of Rb atoms). The optimal value of \(R_{pr}\times\text{OD}\) has to be quite large. As the \(R_{pr}\times\text{OD}\) is proportional to the alkali-metal density, the length of the vapor cell along the probe beam direction, and the probe laser power [24], the large optimal value can be achieved by optimally increasing the three parameters. The excessive increase of the alkali-metal density will shorten \(T_{2}\); hence, the density should increase until alkali-metal spin-destruction collisions start to dominate the spin-destruction rate. Increasing the probe beam path length, which does not affect \(T_{2}\), can be implemented by using a vapor cell that has a long dimension along the probe beam direction; however, this method is not practical if a path length of greater than 20 cm is required. The probe laser power can be increased as long as it does increase much \(1/T_{2}\)[24]. ### Intrinsic Magnetic Johnson Noise of LC Circuit The magnetic Johnson noise of the circuit \(\delta B_{J}\) is the combined thermal noise from the input gradiometer coil, the output coil, and the wire leads between the input and output coils (Fig. 1). Due to current fluctuations arising from the voltage Johnson noise of the LC circuit, \(\delta I_{J}=\sqrt{4k_{B}TR}/Z=Q\sqrt{4k_{B}TR}/\omega(L_{\text{in}}+L_{\text {out}})\) with \(Z\) being the impedance of the resonant LC circuit, the \(\delta B_{J}\) at the location of the OQS vapor cell is given by \[\delta B_{J}= \Big{[}\omega(L_{\text{in}}+L_{\text{out}})r_{\text{out}}[1+(d/r_ {\text{out}})^{2}]^{3/2}\Big{]}^{-1}\] \[\times N_{\text{out}}Q\sqrt{4k_{B}T\rho\Big{[}\frac{4(l_{\text{in} }+r_{\text{in}})N_{\text{in}}}{A_{\text{in}}}+\frac{4\pi r_{\text{out}}N_{ \text{out}}}{A_{\text{out}}}+\frac{2l_{\text{lead}}}{A_{\text{lead}}}\Big{]}}, \tag{10}\] where \(k_{B}\) is the Boltzmann's constant; \(T\) is the absolute temperature of the LC circuit; \(l_{\text{lead}}\) is the length of the wire leads; \(\rho\) and \(A_{\text{in,out,lead}}\) are the the resistivity and the total cross section area of the wires of the input gradiometer coil, output coil, and wire leads, respectively. This indicates that the \(\delta B_{J}\) reduces at lower temperature. ### Backaction OQS noise We investigate backaction of the OQS, which injects the spin noise arising from fluctuating K spins in the OQS cell into the LC circuit through the output coil. 
The backaction OQS noise becomes significant when using the LC circuit cooled to a few K due to its low thermal noise. The OQS spin noise can be estimated in the assumption of the standard quantum limit at the magnetic resonance: \[\delta B_{SN}=\frac{\mu_{0}\mu_{B}\sqrt{N_{s}T_{2}}}{4\pi d^{3}}L(\nu-\nu_{0}, \Delta\nu), \tag{11}\] where \(\mu_{B}\) is the Bohr magneton, \(d\) is the distance from the center of the cell, \(N_{s}\) is the number of the K spins in the cell. Here Figure 4: Calculated fundamental quantum noise limit of OQS as a function of the product of \(R_{pr}\) and OD. we consider the K cell as a magnetic dipole with the fluctuating magnetic moment of \((1/2)\mu_{B}\sqrt{N_{s}T_{2}}\) for simplification. We assume that the spin noise scales as \(\sqrt{N_{s}}\) in the standard quantum limit and it has a Lorentzian profile \(L(\nu-\nu_{0},\Delta\nu)\) near the magnetic resonance [25]. This profile is normalized to 1 at the maximum and has the relation of \(\Delta\nu=1/(2\pi T_{2})\). For the K cell that can reach \(\delta B_{\rm QNL}=10\) aT/\(\sqrt{\rm Hz}\) at the optimal condition, discussed in Sec. III.1, with \(V=100\) cm\({}^{3}\) and \(n=7\times 10^{13}\) cm\({}^{-3}\), the spin noise is estimated to be \(\delta B_{SN}=0.009\) aT/\(\sqrt{\rm Hz}\) at \(d=8\) cm, as an example. The magnetic flux of \(\delta B_{SN}\) through the two-loop circular output coil is \[\delta\Phi_{SN}= \int 2N_{\rm out}\delta B_{SN}dA\] \[=\frac{\mu_{0}\mu_{B}\sqrt{N_{s}T_{2}}N_{\rm out}}{r_{\rm out}[1 +(d/r_{\rm out})^{2}]^{3/2}}L(\nu-\nu_{0},\delta\nu), \tag{12}\] which injects the OQS spin noise into the LC circuit through the relation \(\delta V_{SN}=\delta\Phi_{SN}\omega\). The \(\delta V_{SN}\) should be compared with the thermal noise inside the LC circuit in order to check the significance of the backaction OQS noise. ### Possible Magnetic Background Noise The main experimental challenge is to detect the extremely weak magnetic axion signal above magnetic background noises. The LC circuit needs to be shielded from external electromagnetic fields. The simplest shielding method is placing the elements of the LC circuit into a copper shield (but not any ferromagnetic shield, since the circuit is partially exposed to the strong magnetic field). We estimate that the shielding factor of \(10^{5}\) (\(10^{6}\)) can be achieved with a copper shield thickness \(x=0.5\) mm (1 mm) based on the \(e^{-x/\delta}\) shielding law with the copper's skin depth \(\delta=65\)\(\mu\)m at 1 MHz. The most sensitive part of the LC circuit to the electromagnetic interference is the large input gradiometer coil. The output coil and the OQS sensor head are not exposed to the strong field, therefore various options exist for shielding including \(\mu\)-metal shield and ferrite shield that has the lowest possible magnetic background noise due to its extremely small electrical conductivity. Since the non-ferromagnetic conductive copper shield generates magnetic background noise and can impose limitations on the axion detector sensitivity, we consider the question of copper shield thermal noise in detail in this subsection. #### iii.4.1 Copper Shield Thermal Noise: Johnson Noise and Black-Body Radiation Noise The thermal noise of the conductive copper shield can potentially have two components: the Johnson noise and the black-body radiation noise. 
Here, we show that actually at frequencies below \(\sim\)MHz, the black-body radiation noise is identical to the Johnson noise for the copper surface, and thus its contribution is included via the Johnson noise. While one might try to use Plank's law to obtain the energy density for black-body radiation \[\varepsilon(\omega)\propto\frac{\omega^{3}}{e^{\hbar\omega/(k_{B}T)}-1}, \tag{13}\] at the frequency range the assumptions which were used for the derivation are no longer true, and also this limit is not well understood [26]. Physically thermal black-body radiation arises from thermal motion of atoms [27], which leads to random current densities [28]. By using the expression for the photon energy Bose-Einstein distribution \[\Theta(\omega,T)=\frac{\hbar\omega}{e^{\hbar\omega/(k_{B}T)}-1}, \tag{14}\] that in the frequency limit of \(\hbar\omega<<k_{B}T\) can be simplified to \[\Theta(\omega,T)=k_{B}T, \tag{15}\] and substituting the dielectric constant from the Drude model (see for example Ref. [29]) \[\varepsilon(\omega)=\varepsilon_{\infty}-\frac{\sigma_{0}/\tau}{\varepsilon_{ 0}(\omega^{2}+i\omega/\tau)}, \tag{16}\] where \(\varepsilon_{\infty}\) is the \(\varepsilon(\omega)\) at very high frequency, \(\sigma_{0}\) is the electrical conductivity at zero frequency, \(\varepsilon_{0}\) is the vacuum permittivity, and \(\tau\) is electron collision time, these random current densities can be written as \[<j_{m}(r,\nu)j_{n}(r^{\prime},\nu^{\prime})>=4\sigma k_{B}T\delta_{\rm mn} \delta(r-r^{\prime})\delta(\nu-\nu^{\prime}). \tag{17}\] Here \(\sigma\) is the electrical conductivity and \(r\) (\(r^{\prime}\)) is the \(x\), \(y\), \(z\) (\(x^{\prime}\), \(y^{\prime}\), \(z^{\prime}\)) vector. Note that we replaced \(\delta(\omega-\omega^{\prime})\) in the original expression of Ref. [28] with \(\delta(\nu-\nu^{\prime})/2\pi\) and multiplied it by a factor of 2 for folding the negative spectrum on to the positive one. It can be shown that the expression in Eq. (17) leads to a correct Johnson noise voltage. The black-body radiation current in a short copper can be found by integrating the fluctuating random current densities over the cross-section area \(A\) of the copper. This gives \[I_{BB}^{2}(z,\nu)=4\sigma k_{B}T\delta(z-z^{\prime})\delta(\nu-\nu^{\prime}). \tag{18}\] The black-body radiation voltage is then \(dV_{BB}=IdR=Idz/\sigma A\), and by two-time integration of the squared voltage over the length of the conductor \(l\) along \(z\), \[V_{BB}^{2}=4k_{B}TR\delta(\nu-\nu^{\prime}). \tag{19}\] Here \(R=l/\sigma A\). The delta function means un-correlated noise in the frequency domain, therefore double integration over some small frequency interval \(\Delta\nu\) leads to the correct expression for Johnson noise \(V_{JN}\) \[V_{BB}=\sqrt{4k_{B}TR\Delta\nu}=V_{JN}. \tag{20}\] The magnetic noise of a small copper disk of radius \(a\) at a distance \(z\) much larger than \(a\) can be easily calculated analytically from Eq. (17) and the Biot-Savart law: \[B=\int\frac{\mu_{0}}{4\pi}\frac{j\times r}{r^{3}}dxdydz. \tag{21}\] Since \(B\) is the random function, we calculate \(B^{2}\), actually \(B_{z}^{2}\), the component normal to the surface of the copper disk, which is proportional to \[<(j_{x}y-j_{y}x)^{2}>=<j_{x},j_{x}>y^{2}+<j_{y},j_{y}>x^{2}=j_{0}^{2}(x^{2}+y^{2 }), \tag{22}\] where \(j_{0}^{2}=4\sigma k_{B}T\delta(r-r^{\prime})\delta(\nu-\nu^{\prime})\). 
This can be integrated over the volume of the disk with the thickness \(h\) as well as over frequency similar to such integration in case of the copper voltage noise to give \[B_{z}=\frac{\mu_{0}}{\sqrt{8\pi}}\sqrt{\sigma k_{B}Th}\frac{a^{2}}{z^{3}}, \tag{23}\] which in the limit of \(a\ll z\) is the exact expression for the Johnson noise of a small disk [30]. In case of an arbitrary copper shield, since we showed that the black-body radiation noise is identical to the Johnson noise, it is possible just to use Johnson noise calculations in order to estimate the thermal noise of the copper shield. There is still a question of noise arising from dielectric materials, for example, glass material of the OQS vapor cell. For the noise from a dielectric material, the random current density in Eq. (17) can be obtained by replacing \(\sigma\) with \(\omega\epsilon_{0}\epsilon^{\prime\prime}\), where \(\epsilon^{\prime\prime}\) is the imaginary part of the complex dielectric permittivity. The noise ratio between copper and dielectric materials for the same geometry will be \[\sqrt{\frac{\sigma\delta_{S}}{\omega\epsilon_{0}\epsilon^{\prime\prime}h}}. \tag{24}\] If we compare a copper disk with \(\sigma=5.96\times 10^{7}\) S/m and the skin depth \(\delta_{S}=7\times 10^{-5}\) m with a disk made of dielectric material with a typical \(\epsilon^{\prime\prime}=10^{-3}\) and the thickness \(h=1\) cm for \(\omega=2\pi\times 10^{6}\) Hz, the noise ratio is \(2.7\times 10^{6}\), meaning that the noise due to dielectric material losses is expected much lower and can be neglected. #### iii.2.2 Magnetic Johnson Noise of Copper Shield The magnetic field noise due to Johnson current noise in the copper shield can be found using the method in Ref. [31]. The noise for specific copper shield configuration can be obtained by scaling the results in Ref. [31]. In particular, it has been shown that the noise at considered frequency scales, where the inductance of the current path dominates over the resistance of the path, in the following way: \[\delta B=kT^{1/2}\sigma^{-1/2}\omega^{-3/4}h^{-2}, \tag{25}\] where \(k\) is a coefficient of proportionality dependent on other parameters such as temperature and shield geometry. In Ref. [31] it has been shown that at a 4.6 cm distance and room temperature, the thermal noise of copper shield with the thickness \(h=0.2\) mm for \(\omega=2\pi\times 10^{5}\) Hz, was measured to be 220 aT\(\sqrt{\rm Hz}\) and the shield Johnson noise was modeled to be 210 aT\(\sqrt{\rm Hz}\). This result confirms our claim that the thermal noise of the copper shield at frequencies below \(\sim\) MHz can be estimated by only the Johnson noise of the copper shield. The noise value can be scaled to a 50 cm distance (the approximate average distance from the copper shield to the coils in our experiment) and 1 MHz frequency, giving 0.3 aT\(\sqrt{\rm Hz}\). Further reduction in the noise can be achieved by cooling the copper material. If nitrogen cooling is used (77 K), the copper resistivity can be reduced by 10 times (slightly depending on purity of copper material). Taking into consideration of the temperature reduction factor, the noise at 77 K can be reduced by 6 times to 0.05 aT\(/\sqrt{\rm Hz}\). Cooling the copper material to 4 K can reduce the noise another factor of 40 for high-purity copper, reaching 0.001 aT\(/\sqrt{\rm Hz}\). 
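The scaling arguments above are easy to check numerically. The sketch below evaluates the copper-to-dielectric noise ratio of Eq. (24) and the temperature/frequency scaling of the shield Johnson noise implied by Eq. (25) at fixed thickness; the conductivity and permittivity values are the ones quoted in the text, and the distance scaling, which Eq. (25) does not capture, is left out.

```python
import numpy as np
from scipy.constants import epsilon_0

# Eq. (24): noise ratio between a copper disk and a dielectric disk.
sigma_cu, skin_depth = 5.96e7, 7e-5          # S/m, m (copper near 1 MHz)
eps_im, h_diel = 1e-3, 1e-2                  # typical eps'' and 1 cm thickness
omega = 2 * np.pi * 1e6
ratio = np.sqrt(sigma_cu * skin_depth / (omega * epsilon_0 * eps_im * h_diel))
print(f"copper/dielectric noise ratio ~ {ratio:.1e}")   # ~2.7e6, as quoted

# Eq. (25) at fixed thickness: delta-B scales as T^(1/2) sigma^(-1/2) omega^(-3/4).
def scale_factor(T_ratio, sigma_ratio, omega_ratio):
    return np.sqrt(T_ratio / sigma_ratio) * omega_ratio ** (-0.75)

print(scale_factor(T_ratio=77/300, sigma_ratio=10, omega_ratio=1))   # ~1/6 at 77 K
print(scale_factor(T_ratio=1, sigma_ratio=1, omega_ratio=10))        # ~0.18 for 0.1 -> 1 MHz
```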
## IV Prototype axion detector development To perform the first tests of an OQS in axion detection with an LC circuit, we have developed and constructed an optimized room temperature design for an existing solenoid magnet at Los Alamos National Laboratory (LANL), with a commercial OQS. The magnet is a superconducting solenoid with a warm bore of 1-m diameter and 3-m length in the center, which can produce a static field \(B_{0}=2\) T. We employed a commercial Twinleaf OQS containing a \(5\times 5\times 5\) mm\({}^{3}\) Rb-87 vapor cell, whose sensor head is shown in Fig. 5(a). The measured magnetic field noise of the OQS around 300 kHz (the target frequency range in this prototype experiment described below) was 10 fT\(/\sqrt{\rm Hz}\). As an example, the OQS field noise around 307.5 kHz is shown in Fig. 5(b). The peak at 307.5 kHz is a known applied calibration magnetic field to convert the OQS output voltages into magnetic fields. We also investigated a frequency response of the OQS by applying a sinusoidally varying magnetic field at different frequencies around 307.5 kHz, as indicated in Fig. 5(c) (blue dot points). The data was fit to a Lorentzian function, \(f(\nu)=a_{0}+a_{1}/[4(\nu-\nu_{0})^{2}+\Delta\nu^{2}]\) with \(\nu\) being the frequency of the applied field. The fit (solid red curve) gives the bandwidth of the OQS of \(\Delta\nu_{\rm OQS}=1.8\) kHz. To accommodate the sizes of the magnet bore and the OQS, we selected the following experimental dimensions for the axion detector: \(l_{\rm in}=1.0\) m, \(r_{\rm in}=0.3\) m, \(r_{\rm out}=2.9\) cm, and \(d=2.0\) cm. The \(l_{\rm lead}=9.0\) m was chosen in order to locate the OQS at a position where the residual magnetic field of the magnet is sufficiently suppressed to the level of the Earth's magnetic field. To reduce the AC resistance of the LC circuit closer to the DC resistance value, the circuit was made from Litz wires of multiple strands of thin copper wire with a 0.06 mm diameter, recommended at our target frequency range [32]. Because the total length of the LC circuit is dominated by the input coil and the wire leads, their wire diameter was chosen to be \(b_{\rm in}=5.2\) mm in order to sufficiently reduce their magnetic Johnson noise. On the other hand, the wire diameter of the output coil was reduced to \(b_{\rm out}=1.3\) mm because of the small output coil's diameter. We optimized \(N_{\rm in}\) and \(N_{\rm out}\) to maximize the signal-to-noise ratio (SNR) of the axion detector, \(|B_{d}|/\delta B_{d}\), at 300 kHz using Eqs. (7), (8), and (10), and \(\delta B_{\rm OQS}=10\) fT\(/\sqrt{\rm Hz}\). We esti mated the inductance of the input gradiometer coil [8], \[L_{\rm in}\approx\frac{2}{\pi}N_{\rm in}^{2}l_{\rm in}\text{ln}(r_{\rm in}/0.5b_{ \rm in})=N_{\rm in}^{2}\times 3.8\ \mu\text{H}, \tag{26}\] and the inductance of the output coil [8], \[L_{\rm out}\approx 2r_{\rm out}N_{\rm out}^{2}\left[\text{ln}\Big{(}\frac{8r_{ \rm out}}{0.5b_{\rm out}}\Big{)}-2\right]=N_{\rm out}^{2}\times 0.3\ \mu\text{H}, \tag{27}\] where we neglected the mutual inductance between the two loops for simplicity. Figure 6 shows the calculated, normalized 2D distribution of the SNR values of the axion detector as a function of \(N_{\rm in}\) and \(N_{\rm out}\). This calculation implies that various optimal values exist in the red region; however, fewer number of turns are better according to our investigations that the noise of coils approached the theoretical value in Eq. (10) as the number of turns was reduced [33]. 
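As a quick numerical check of Eqs. (26) and (27), the sketch below evaluates both inductance estimates with \(\mu_{0}\) written out explicitly (the formulas above quote the per-\(N^{2}\) coefficients directly); it reproduces the 3.8 \(\mu\)H and \(\sim\)0.3 \(\mu\)H coefficients and gives the total inductance for the turn numbers selected just below.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]

def L_input(N_in, l_in, r_in, b_in):
    """Eq. (26): inductance estimate of the planar input gradiometer coil."""
    return MU0 * (2.0 / np.pi) * N_in**2 * l_in * np.log(r_in / (0.5 * b_in))

def L_output(N_out, r_out, b_out):
    """Eq. (27): two circular loops in series, mutual inductance neglected."""
    return MU0 * 2.0 * r_out * N_out**2 * (np.log(8.0 * r_out / (0.5 * b_out)) - 2.0)

N_in, N_out = 1, 4
L_in = L_input(N_in, l_in=1.0, r_in=0.3, b_in=5.2e-3)
L_out = L_output(N_out, r_out=2.9e-2, b_out=1.3e-3)
print(f"L_in / N_in^2   = {L_in / N_in**2 * 1e6:.2f} uH   (quoted: 3.8 uH)")
print(f"L_out / N_out^2 = {L_out / N_out**2 * 1e6:.2f} uH  (quoted: 0.3 uH)")
print(f"L_in + L_out    = {(L_in + L_out) * 1e6:.1f} uH for N_in=1, N_out=4")
```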
Thus, we selected \(N_{\rm in}=1\) and \(N_{\rm out}=4\) as optimal values, leading to the input gradiometer consisting of two series-configured one-turn \(1.0\ \text{m}\times 0.3\ \text{m}\) pickup loops. Figure 7 shows the estimated total magnetic noise of the optimized prototype axion detector as a function of the frequency (black solid curve), indicating that the axion detector loses sensitivity at the frequency range below \(\sim\)100 kHz due to the thermal noise of the room temperature LC circuit (blue dotted curve). Above \(\sim\)100 kHz, our experimental sensitivity to the axion is limited by the OQS field noise (orange dotted curve). Based on Fig. 7, our experiment targeted frequencies around 300 kHz. We have built a prototype axion detector with the selected experimental dimensions and the optimized experimental parameters, as shown in Fig. 8(a)-(c). The wire leads were shielded by a copper tube and the assembly of the output coil and the OQS sensor head was shielded by a \(\mu\)-metal enclosure in order to minimize interference from ambient magnetic signals. On the other hand, the input gradiometer coil was shielded by the magnet shield composed of an open iron rectangular enclosure located outside the magnet bore and an additional copper mesh and Faraday cage to cover the openings. When the magnet is cooled to 4 K, the superconducting solenoid coil of the magnet could also add additional shielding for the input gradiometer coil. The OQS output was connected to a 24-bit data acquisition system (NI PXIe-4480) and recorded at a sampling rate of 1 MHz using a home-built LabVIEW program. Figure 5: (a) Photograph of the cm-scale commercial OQS sensor head, (b) the magnetic field noise and (c) the frequency response of the OQS. In (b), the peak at 307.5 kHz is the known applied calibration magnetic field. The magnetic field noise of the OQS is measured to be 10 fT\(/\sqrt{\rm Hz}\). In (c), the data was collected by scanning an applied sine field of constant amplitude between 306 and 309 kHz; the solid curve indicates a Lorentzian fit, giving the OQS bandwidth of 1.8 kHz. Figure 6: Optimization of the number of turns of the input gradiometer and output coils to maximize the SNR of the prototype axion detector. The SNR values are normalized. First, we tuned the prototype axion detector around 307 kHz. This was achieved by both tuning the OQS by applying its corresponding bias magnetic field of 43.9 \(\mu\)T and tuning the LC circuit by using mica capacitors of 19 nF. The quality factor of the LC circuit was measured to be \(Q=43\), indicating its bandwidth of 7.1 kHz through the relation \(\Delta\nu_{\rm LC}=\nu/Q\). Since \(\Delta\nu_{\rm LC}>\Delta\nu_{\rm OQS}\), the bandwidth of the prototype is determined by \(\Delta\nu_{\rm OQS}=1.8\) kHz. Before the magnet was cooled, the sensitivity of the prototype was measured to be 50 fT/\(\sqrt{\rm Hz}\), 5 times larger than the estimated values shown in Fig. 7. We estimate that the degraded sensitivity was due to the magnetic Johnson noise from the solenoid coil of the magnet at room temperature, but this noise decreased significantly when the solenoid coil became superconducting. With the magnet cooled to 4 K where the solenoid coil becomes superconducting, we obtained background data with the prototype (i.e., without the magnet energized), shown in Fig. 8(d). 
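The quoted bandwidths follow from a Lorentzian fit and the relation \(\Delta\nu_{\rm LC}=\nu/Q\). A hedged sketch is given below, with placeholder data standing in for the measured OQS response; only the fit function and the two quoted numbers (1.8 kHz and 7.1 kHz) come from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Lorentzian response quoted above: f(nu) = a0 + a1 / [4*(nu - nu0)^2 + dnu^2]
def lorentzian(nu, a0, a1, nu0, dnu):
    return a0 + a1 / (4.0 * (nu - nu0) ** 2 + dnu ** 2)

# Placeholder data between 306 and 309 kHz; replace with the recorded OQS
# amplitudes of the applied sine field.
nu = np.linspace(306e3, 309e3, 61)
rng = np.random.default_rng(0)
resp = lorentzian(nu, 0.0, (1.8e3) ** 2, 307.5e3, 1.8e3) + rng.normal(0, 0.005, nu.size)

popt, _ = curve_fit(lorentzian, nu, resp, p0=[0.0, 1e6, 307.5e3, 1e3])
print(f"fitted OQS bandwidth: {abs(popt[3]) / 1e3:.2f} kHz")   # ~1.8 kHz

# LC-circuit bandwidth from the measured quality factor, dnu_LC = nu / Q
nu_res, Q = 307e3, 43
print(f"LC bandwidth: {nu_res / Q / 1e3:.1f} kHz")             # ~7.1 kHz
```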
The experimental sensitivity of the prototype was measured to be around 10 fT/\(\sqrt{\rm Hz}\), demonstrating that with other ambient noises sufficiently suppressed, the prototype sensitivity to the axion dark matter is determined by the OQS noise. This means that OQS noise reduction is the key to success. ## V Sensitivity estimate of axion detection In principle, the SNR of the prototype axion detector can be increased with long data integration. The total field noise of the prototype with a data integration time, \(t_{\rm int}\), is given by \[\delta B_{d}^{\rm int}=\delta B_{d}\times(t_{c}t_{\rm int})^{-1/4}, \tag{28}\] where \(t_{c}=(0.16~{}\rm s)\times(MHz/\nu)\) is the axion signal coherence time for the isothermal halo model [8; 34]. For example, \(\delta B_{d}\) can be reduced by a factor of 11 and 8 with a 7-hour integration time at 300 kHz and 1 MHz, respectively. The axion signal coherence time limits the experimental noise reduction with long data integration. Based on Eqs. (7), (8), (10), and (28), we can estimate the sensitivity of the prototype axion detector to the axion-photon coupling \(g\), \[g =\mathrm{S}\frac{r_{\rm out}[1+(d/r_{\rm out})^{2}]^{3/2}(L_{\rm in}+L_{\rm out})\delta B_{d}^{\rm int}}{2QN_{\rm in}N_{\rm out}V_{\rm in}\sqrt{2\rho_{DM}B_{0}}}\] \[=\mathrm{S}\Big{(}\frac{\delta B_{d}^{\rm int}}{10^{-15}~{}\rm T}\Big{)}\Big{(}\frac{\mathrm{GeV/cm^{3}}}{\rho_{DM}}\Big{)}^{\frac{1}{2}}\Big{(}\frac{10^{3}}{Q}\Big{)}\Big{(}\frac{L}{\mu\rm H}\Big{)}\Big{(}\frac{\rm T}{B_{0}}\Big{)}\Big{(}\frac{1}{N_{\rm out}}\Big{)}\] \[\times\Big{(}\frac{1}{N_{\rm in}}\Big{)}\Big{(}\frac{r_{\rm out}[1+(d/r_{\rm out})^{2}]^{\frac{3}{2}}}{\rm cm}\Big{)}\Big{(}\frac{\rm m^{3}}{V_{\rm in}}\Big{)}(2\times 10^{-16}~{}\rm GeV^{-1}), \tag{29}\] where S is the SNR, taken as 2 (2\(\sigma\) or 95% confidence level; note that once the axion signal is discovered, the measurement can be repeated many times to verify the discovery with a higher confidence level), and \(L=L_{\rm in}+L_{\rm out}\) is the total inductance of the LC circuit. Fig. 9 shows our estimated sensitivity of the prototype (blue dotted curve) from the background data obtained with a 7-hour integration at each observation frequency. Figure 8: Photographs of the prototype axion detector: (a) the optimized first-order planar input gradiometer coil, (b) the combination of the optimized circular output coil and the commercial OQS sensor head, (c) the \(\mu\)-metal enclosure that houses the combination shown in (b) (the \(\mu\)-metal lid is opened for illustration purposes only), and (d) its sensitivity when the magnet is cooled to 4 K and the prototype is tuned at 307 kHz. Figure 7: Estimated total magnetic noise of the prototype axion detector with the optimized LC circuit at 300 K and the commercial OQS unit (black solid curve), and the advanced axion detector with the optimized superconducting LC circuit at 2 K and the quantum-limited OQS (red solid curve). The noise below \(\sim\)100 kHz and \(\sim\)1 MHz is dominated by the thermal noise of the LC circuit at 300 K and 2 K, respectively, limiting the axion detector sensitivities. The upper end of our search range, around 10 MHz, is limited by the combination of inductance and stray capacitance of the chosen specific configuration of the LC circuit. The size of the coil can be reduced in principle to extend the search to higher frequencies, but this would require redesigning the coil and will also reduce the axion flux and hence sensitivity. 
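A minimal sketch of the integration gain of Eq. (28), reproducing the factors of roughly 11 and 8 quoted above for a 7-hour integration; the function names are ours.

```python
import numpy as np

# Noise reduction from long data integration, Eq. (28):
# delta_B_int = delta_B * (t_c * t_int)^(-1/4), with t_c = 0.16 s * (MHz / nu).
def coherence_time(nu_hz):
    return 0.16 * (1e6 / nu_hz)            # axion coherence time, isothermal halo

def integration_gain(nu_hz, t_int_s):
    return (coherence_time(nu_hz) * t_int_s) ** 0.25

t_int = 7 * 3600.0                          # 7-hour integration
for nu in (300e3, 1e6):
    print(f"{nu/1e3:.0f} kHz: noise reduced by a factor of "
          f"{integration_gain(nu, t_int):.0f}")
# -> roughly 11 at 300 kHz and 8 at 1 MHz, as quoted in the text
```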
The sensitivity loss at masses larger than \(10^{-9}\) eV is due to the noise reduction limitation from the axion signal coherence time, described above. Our prototype sensitivity could improve the current constraints (gray solid curves) set by the CERN Axion Solar Telescope (CAST) [35], Broadband/Resonant Approach to Cosmic Axion Detection with an Amplifying B-field Ring Apparatus (ABRACADABRA) [9], and Search for Halo Axions with Ferromagnetic Toroids (SHAFT) [10] experiments on an axion mass range between \(10^{-10}\) eV and \(10^{-7}\) eV, corresponding to the frequency range between 10 kHz and 10 MHz, in particular by 1 order of magnitude at masses of around \(10^{-9}\) eV. For comparison, projected sensitivities of the DM-Radio experiment are shown with two gray dotted lines [36]. The yellow band indicates a broad range of the axion-photon coupling \(g\) for the QCD axion predicted by various axion models. As benchmark examples, the KSVZ [13; 14] and DFSZ [11; 12] axion models are included. The constraints set by the ADMX [7; 37; 38] and ADMX-SLIC (Superconducting LC Circuit Investigating Cold Axions) [39] experiments are also shown. Our experiment will be able to probe the axion dark matter on the mass range between \(10^{-11}\) eV and \(10^{-7}\) eV. ## VI Potential improvement of axion detector sensitivity Figure 7 shows the magnetic Johnson noise of optimized LC circuits at 300 K (top blue dotted curve) and 2 K (bottom blue dotted curve), and the OQS noise limit of the commercial unit (top orange dotted curve), and a quantum-limited OQS (bottom range dotted curve). It is clear that to take advantage of the quantum-limited OQS in axion detection, the detector thermal noise must be reduced by cooling the LC circuit. In this regime, the backaction noise from the OQS and the thermal noise from the surrounding copper shield must also be considered, as discussed in Section III. An optimized design of an axion detector with a quantum-limited OQS is considered in this section. ### Axion Detector with Quantum-limited OQS Improvement over the prototype design can be achieved with a quantum-limited OQS with \(\delta B_{\text{OQS}}=10\) aT/\(\sqrt{\text{Hz}}\). To take full advantage of a quantum-limited OQS, we consider the LC circuit made of a pure superconducting wire [e.g., 3-m niobium (Nb) wire] and cooled to 4 K and below, which can considerably reduce the magnetic Johnson noise of the LC circuit. We investigate the sensitivity of an improved axion detector with \(l_{\text{in}}=2.9\) m and \(r_{\text{in}}=0.45\) m, more closely matching the dimension of the bore of the magnet to maximize the axion-induced magnetic flux through the input gradiometer coil. Considering the K cell volume of \(V=100\) cm\({}^{3}\), we selected \(r_{\text{out}}=5.0\) cm and \(d=8.0\) cm to allow space between the cooled output coil and the K cell. We also selected \(l_{\text{lead}}=4\) m where the residual magnetic field of the magnet is reduced to the level of the Earth's magnetic field. The inductance of the superconducting input and output coils was estimated using a three-dimensional inductance extraction program in superconducting structures [40]: \(L_{\text{in}}=N_{\text{in}}^{2}\times 26.0\)\(\mu\)H and \(L_{\text{out}}=N_{\text{out}}^{2}\times 0.4\)\(\mu\)H. The parasitic inductance of the twisted-pair leads of 3-mil superconducting Nb wires was measured to be 2.3 nH/cm [41], giving the negligible inductance of the wire leads of 0.9 \(\mu\)H. 
While a pure superconducting wire has zero electrical resistance at low frequencies, the resistance at high frequencies is extremely small but non-zero [42]. The high-frequency resistance \(R_{Nb}\) of superconducting Nb at around 2 K is on the order of n\(\Omega\)[42]. This \(R_{Nb}=1\) n\(\Omega\) resistance can achieve \(Q\approx 10^{11}\), however we will detune the superconducting LC circuit to reach \(Q_{\text{eff}}\approx 10^{6}\) in order to have a reasonable axion detector bandwidth. Based on Eq. (10), the corresponding thermal magnetic Johnson noise at the location of the OQS vapor cell is given Figure 9: Estimated sensitivity of our prototype axion detector (blue dotted curve) to the axion-photon coupling \(g\) with \(t_{\text{int}}=7\) hours and 95% confidence level to the axion mass range from \(10^{-11}\) to \(4\times 10^{-8}\) eV. In similar range, the constraints were set by the CAST [35], ABRACADABRA [9], and SHAFT [10] experiments (gray solid curves). Estimated sensitivities of the DM-Radio experiments (gray dotted curves) are also shown [36]. The yellow band encompasses various QCD axion models, including the benchmark KSVZ [13; 14] and DFSZ [11; 12] models, predicting the QCD axion coupling. Furthermore, the constraints set by the ADMX [7; 37; 38] and ADMX-SLIC [39] experiments are shown. The red dotted curve shows the estimated sensitivity of our proposed axion detector optimized to an quantum-limited OQS with 10 aT/\(\sqrt{\text{Hz}}\) field noise and a superconducting LC circuit with \(t_{\text{int}}=15\) s. by \[\delta B_{J}=\frac{N_{\rm out}Q_{\rm eff}\sqrt{4k_{B}TR_{Nb}}}{\omega(L_{\rm in}+L _{\rm out})r_{\rm out}[1+(d/r_{\rm out})^{2}]^{3/2}}. \tag{30}\] For the axion detector with the quantum-limited OQS, we re-optimized \(N_{\rm in}\) and \(N_{\rm out}\) to maximize the SNR of the detector, \(|B_{d}|/\delta B_{d}\), at 10 MHz using Eqs. (7), (8), and (30) with \(\delta B_{\rm OQS}=10\) aT\(/\sqrt{\rm Hz}\) and \(Q_{\rm eff}=2\times 10^{6}\). Figure 10 shows the calculated, normalized 2D distribution of the SNR values of the improved axion detector as a function of \(N_{\rm in}\) and \(N_{\rm out}\). Among various optimal values in the red region, we selected \(N_{\rm in}=2\) and \(N_{\rm out}=11\) as optimal values, which do not significantly increase the coils' width because of using 3-mil Nb wire. Based on these optimal parameters, Fig. 7 shows the estimated total magnetic noise of the axion detector with the quantum-limited OQS as a function of the frequency (red solid curve), indicating that the axion detector loses sensitivity at the frequency range below \(\sim\)1 MHz due to the dominant thermal noise of the superconducting LC circuit (blue dotted curve). On the other hand, above \(\sim\)1 MHz, it is limited by the OQS fundamental quantum field noise (orange dotted curve), thus enhancing the OQS field noise is the key to improve the axion detector sensitivity in this frequency range. The projected sensitivity of this optimized axion detector with \(t_{\rm int}=15\) s integration time at each observation frequency is shown in Fig. 9 (red dotted curve). We anticipate that with sensitivity up to 7 orders of magnitude beyond the current best limit, the improved axion detector with the quantum-limited OQS can potentially access the compelling targets below the KSVZ QCD axion band and probe the QCD axion parameter space in a mass range near \(10^{-8}\) eV, corresponding to the frequency range of \(\sim\)1 MHz. 
### Mitigation of Backaction OQS Noise For the optimized superconducting output coil with \(r_{\rm out}=~{}5.0\) cm, \(d=8.0\) cm, and \(N_{\rm out}=11\), the magnetic flux of the backaction OQS noise through the output coil in Eq. (12) is \(\delta\Phi_{\rm SN}=~{}1.9\times 10^{-21}~{}{\rm T}\cdot{\rm m}^{2}/\sqrt{\rm Hz}\). For 1 MHz frequency, we compare the backaction noise \(\delta V_{SN}=\delta\Phi_{\rm SN}\omega=1.2\times 10^{-14}~{}{\rm V}/\sqrt{\rm Hz}\) with the thermal Johnson noise inside the superconducting LC circuit \(\delta V_{J}=\sqrt{4k_{B}TR_{Nb}}=3.3\times 10^{-16}~{}{\rm V}/\sqrt{\rm Hz}\), assuming \(R_{Nb}=1~{}{\rm n}\Omega\) at 2 K. This indicates that the backaction OQS noise can become significant. Hence, proper detuning of the OQS magnetic resonance can be helpful to make the two noises comparable. The OQS field noise and the backaction noise follow the Lorentzian shape, and when the detuning exceeds the resonance width they decrease as \([2\pi(\nu-\nu_{0})T_{2}]^{-1}\). For example, a detuning from the resonance of the LC circuit by 4.6 kHz can reduce the backaction noise by 100 times, resulting in \(\delta V_{\rm SN}\approx\delta V_{J}\), while the OQS field noise will increase to 1 \(\rm\,fT/\sqrt{\rm Hz}\). Reaching 10 aT field noise will give some extra sensitivity for detuning to decrease the backaction and increase the axion detector bandwidth, accelerating scanning [36]. Spin squeezing can be a promising mitigation method to considerably suppress the backaction noise without sacrificing the OQS field noise. Significant reduction of the OQS spin noise by spin squeezing has been demonstrated, e.g., 70% noise reduction in Ref. [43], a factor of 6.4 in Ref. [44], and even a factor of 100 in Ref. [45]. In fact, the OQS field noise and the backaction noise are correlated, hence conducting OQS measurements during the minimum of the oscillating spin noise can reduce both the backaction noise and the spin projection noise in Eq. (9). In continuous non-demolition measurement [25], the data can be processed to weigh signals when the spins are directed toward the output coil (i.e., parallel to its symmetry axis). This measurement will have the minimal backaction noise if at the same time the spin state is read out with the probe laser beam. During this measurement, the orthogonal spin noise component reaches the maximum; however, it is perpendicular to the output coil's symmetry axis and thus is not contributed to the backaction noise. As a result, it could be possible to suppress the backaction noise without sacrificing the OQS field noise. In contrast to the previous papers where the goal of using spin squeezing was to demonstrate the reduction in OQS field noise, here the spin squeezing will be essential for the backaction noise reduction. In addition, it is important to note that for a long continuous measurement the advantage of spin squeezing is removed, but for short measurements \(<T_{2}\), the advantage can be on the order of the spin squeezing. Because of the coherence of the axion signal over 0.25 s, if we periodically implement the protocol of spin squeezing with the period matching that of the axion signal, then the averaging of multiple measurements can lead to \(1/\sqrt{N}\) noise reduction, with \(N\) being proportional to the measurement time, with similar improvement of the SNR to long measurements of coherent signals. 
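For orientation, the short sketch below reproduces the voltage-noise comparison above and the off-resonance suppression factor. The value of \(T_{2}\) used here is a hypothetical spin coherence time, chosen only so that a 4.6 kHz detuning gives the roughly hundredfold suppression quoted in the text; it is not a measured quantity from this work.

```python
import numpy as np

k_B = 1.380649e-23

# Backaction OQS noise vs. circuit Johnson noise at 1 MHz, using the numbers
# quoted above for the optimized superconducting output coil.
omega = 2 * np.pi * 1e6
dPhi_SN = 1.9e-21                        # T*m^2/sqrt(Hz), backaction flux noise
dV_SN = dPhi_SN * omega                  # ~1.2e-14 V/sqrt(Hz)
dV_J = np.sqrt(4 * k_B * 2.0 * 1e-9)     # ~3.3e-16 V/sqrt(Hz) at T = 2 K, R = 1 nOhm
print(f"backaction/Johnson voltage-noise ratio ~ {dV_SN / dV_J:.0f}")

# Off resonance, both the OQS field noise and the backaction noise fall off as
# 1/[2*pi*(nu - nu0)*T2].  T2 below is an assumed coherence time consistent with
# the ~100x suppression at a 4.6 kHz detuning quoted in the text.
T2 = 3.5e-3
detuning = 4.6e3
print(f"suppression at 4.6 kHz detuning ~ {2 * np.pi * detuning * T2:.0f}x")
```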
Figure 10: Optimization of the number of turns of the input gradiometer and output coils made of 3-mil Nb wire at 2 K to maximize the SNR of the axion detector with a quantum-limited OQS. The SNR values are normalized. ## VII Conclusion We have built a prototype axion detector operating at room temperature, comprised of the optimized LC circuit and the commercial OQS with 10 \(\mathrm{fT/\sqrt{Hz}}\) field noise. The LC circuit contained the first-order one-turn planar gradiometer input coil located inside the large bore of the 2-T superconducting magnet and the two-loop four-turn circular output coil coupled to the commercial OQS sensor head. We tuned the prototype at about 300 kHz and obtained background data. We investigated the sensitivity of the prototype based on the background data and showed that the prototype experiment can probe the axion dark matter on the significant mass range between \(10^{-11}\) eV and \(10^{-7}\) eV. The sensitivity, limited by the OQS field noise of 10 \(\mathrm{fT/\sqrt{Hz}}\), is up to 1 order of magnitude better than the current best limit. We also investigated the potential sensitivity of an axion detector based on the superconducting LC circuit, the OQS reaching the fundamental quantum noise limit of 10 \(\mathrm{aT/\sqrt{Hz}}\), and the existing 2-T magnet at LANL. The improved axion detector can potentially enhance the experimental sensitivity up to 7 orders of magnitude beyond the current best limit, allowing us to probe the QCD axion parameter space in a mass range near \(10^{-8}\) eV. The improved experiment will be limited by the quantum noise limit of the OQS. In addition, we characterized possible background noises in the experiment including the backaction OQS noise and the thermal noise of the copper shield, showing that these noises can be reduced below the thermal noise of the superconducting LC circuit. ###### Acknowledgements. The authors gratefully acknowledge the support by the Los Alamos National Laboratory LDRD office through Grants No. 20190113ER, 20210254ER, and 20230633ER. The authors are grateful for helpful discussion and assistance with operating the magnet and constructing the prototype axion detector with Dr. Leonardo Civale, Jaren Cordova, Leonard Gonzales, Larry Burney, and Shaun Newman. The authors are also grateful to Dr. Daniele Alves for useful comments.
2307.02256
Magnetic braking during direct collapse black hole formation
Magnetic fields are expected to be efficiently amplified during the formation of the first massive black holes via the small-scale dynamo and in the presence of strong accretion shocks occurring during gravitational collapse. Here, we analyze high-resolution cosmological magneto-hydrodynamical simulations of gravitational collapse in atomic cooling halos, exploring the dynamical role of magnetic fields, particularly concerning the effect of magnetic braking and angular momentum transport. We find that after the initial amplification, magnetic fields contribute to the transport of angular momentum and reduce it compared to pure hydrodynamical simulations. However, the magnetic and Reynolds torques do not fully compensate for the inward advection of angular momentum, which still accumulates over timescales of $\sim1$~Myr. A Jeans analysis further shows that magnetic pressure strongly contributes to suppressing fragmentation on scales of $0.1-10$~pc. Overall, the presence of magnetic fields thus aids in the transport of angular momentum and favors the formation of massive objects.
Muhammad A. Latif, Dominik R. G. Schleicher
2023-07-02T15:13:51Z
http://arxiv.org/abs/2307.02256v1
# Magnetic braking during direct collapse black hole formation ###### Abstract Magnetic fields are expected to be efficiently amplified during the formation of the first massive black holes via the small-scale dynamo and in the presence of strong accretion shocks occurring during gravitational collapse. Here, we analyze high-resolution cosmological magneto-hydrodynamical simulations of gravitational collapse in atomic cooling halos, exploring the dynamical role of magnetic fields, particularly concerning the effect of magnetic braking and angular momentum transport. We find that after the initial amplification, magnetic fields contribute to the transport of angular momentum and reduce it compared to pure hydrodynamical simulations. However, the magnetic and Reynolds torques do not fully compensate for the inward advection of angular momentum, which still accumulates over timescales of \(\sim 1\) Myr. A Jeans analysis further shows that magnetic pressure strongly contributes to suppressing fragmentation on scales of \(0.1-10\) pc. Overall, the presence of magnetic fields thus aids in the transport of angular momentum and favors the formation of massive objects. methods: numerical -- early universe -- galaxies: high-redshift -- dark ages, reionization, first stars 0000-0002-4826-8088]Muhammad A. Latif ## 1 Introduction Magnetic fields are ubiquitous throughout the cosmos and are considered responsible for various astrophysical phenomena, such as the transfer of angular momentum, launch of collimated jets and outflows, suppressing fragmentation, and stabilizing accretion disks (Beck et al., 1999; Beck, 2007; Pudritz et al., 2012). They may have a primordial origin (e.g. Widrow et al., 2012) or result from the efficient amplification of weak seed fields due to astrophysical dynamos (e.g., Brandenburg & Subramanian, 2005). It has been suggested that magnetic fields may play an important role already during the formation of the first objects in the Universe, particularly the first stars and supermassive black holes (e.g. Pudritz & Silk, 1989; Sethi & Subramanian, 2005; Silk & Langer, 2006; Schleicher et al., 2009). The origin of magnetic fields is still uncertain as the standard model does not provide any constraints on their strength. Magnetic field may have generated during the cosmic inflation through electro-weak or quantum chromodynamics phase transitions or alternatively via the Biermann battery effect and the Weibel instability (see review by Widrow et al. (2012)). The current observational constraints on the strength of intergalactic magnetic fields are derived from CMB observations which were suggested to provide an upper limit of a few nano Gauss while blazer observations provide lower limit of \(10^{-16}\) G (Kahniashvili et al., 2010; Neronov & Vovk, 2010; Planck Collaboration et al., 2016). Irrespective of their origin, the seed magnetic fields are many orders of magnitude smaller than the present day fields. Over the last decade, numerous studies have suggested that magnetic fields, irrespective of their initial field strength, can be efficiently amplified by the small scale dynamo during first structure formation (Schleicher et al., 2010; Sur et al., 2010; de Souza & Opher, 2010; Federrath et al., 2011; Schober et al., 2012; Turk et al., 2012; Latif et al., 2013; Grete et al., 2019). 
Furthermore, they can get amplified via the \(\alpha-\Omega\) dynamo in the presence of rotation (Latif & Schleicher, 2016; Sharda et al., 2020) and strong accretion shocks due to the rapid infall within cosmo logical simulations (Latif et al., 2014; Hirano et al., 2021; Hirano & Machida, 2022). Such strongly amplified fields inhibit fragmentation and stabilize the central accretion disks (Latif et al., 2023) (hereafter L23), as also seen in Hirano et al. (2021). L23 performed cosmological magneto-hydrodynamical (MHD) simulations in the context of direct collapse black hole formation, evolving them for about 1.6 Myr, a timescale comparable to the lifetime of supermassive stars (Janka, 2002). This study focused on assessing the impact of magnetic fields on the degree of fragmentation and the masses of clumps by comparing their results with hydrodynamical runs. In the context of the first massive objects, the potential role of magnetic fields in suppressing fragmentation via the magnetic Jeans mass has been suggested previously (e.g. Schleicher et al., 2009; Latif et al., 2016), along with the presence of the magneto-rotational instability to drive accretion within the disk (Silk & Langer, 2006). However, as it is well-known in the context of present-day star formation, magnetic fields can also affect angular momentum transport and help to delay or suppress the formation of rotationally supported disks (Kulsrud, 1971; Galli et al., 2006; Shu et al., 2007; Hennebelle et al., 2016; Sheikhnezami & Fendt, 2022). It is not clear how magnetic torques compare to the Reynold torques and what are their relevant contributions. In this letter, simulations by L23 are analyzed with respect to the magnetic braking and its effect on distribution of the angular momentum within the collapsing halos. A short summary of the methods employed to perform the previous simulations is given in section 2. The results of the analysis are presented in section 3 and a final discussion and conclusions are provided in section 4. ## 2 Numerical Method We analyzed here the cosmological magnetohydrodynamics simulations performed with adaptive mesh refinement code ENZO published in L23. A brief summary of these simulations is presented here and for further details interested readers are referred to L23. Cosmological magnetohydrodynamical simulations were performed for three distinct halos and their results were compared with hydrodynamical runs. The simulated halos had a masses of \(3\times 10^{7}\) M\({}_{\odot}\), \(1.7\times 10^{7}\) M\({}_{\odot}\) and \(2.3\times 10^{7}\) M\({}_{\odot}\) at z= 13.2, 12.5 & 12.6, with respective spin parameters of 0.07, 0.01 & 0.02. They were seeded with an initial uniform magnetic field strength of \(10^{-14}\) G (\(4.5\times 10^{-19}\) G in comoving units) at z=150. The motivation for the choice of such initial field strength comes from theoretical works which predict B fields of strength \(10^{-17}-10^{-20}\) G at galactic scales during electroweak phase transitions (Baym et al., 1996; Grasso & Rubinstein, 2001) and \(10^{-20}\) G from the quantum chromodynamics in the very early universe (Sigl et al., 1997). We further assume uniform B field for the sake of a simplicity and coherent fields on larger scales may be generated by the \(\alpha-\Omega\) dynamo in the presence of helicity. All simulations have an effective dark matter resolution of \(\sim 67\) M\({}_{\odot}\) and a spatial resolution of \(\sim 2000\) AU. 
We further ensured a minimum resolution of 64 cells per Jeans length during adaptive mesh refinement to resolve turbulent eddies (Federrath et al., 2011; Latif et al., 2013), using 15 (additional) refinement levels. Such a resolution allows us to resolve small scale dynamo action, converting turbulent into magnetic energy (Latif et al., 2013, 2014). As also shown in previous runs (Latif et al., 2014; Hirano et al., 2021; Hirano & Machida, 2022), additional amplification occurs in the center of the halo when the accretion flow hits the central pressure-supported core, leading to the formation of strong shocks. The central part of the halo then becomes magnetized very efficiently. The simulations employed a non-equilibrium chemistry network consisting of six primordial species (H, H\({}^{+}\), He, He\({}^{+}\), He\({}^{2+}\), and e\({}^{-}\)) which self-consistently solves the rate equations along with the (magneto-)hydrodynamics. L23 studied an isothermal gas collapse, assuming that the intense Lyman-Werner flux emitted by nearby star-forming galaxies quenches the formation of H\({}_{2}\). Their chemical model further included cooling from collisional ionization and excitation of H and He, radiative recombination, inverse Compton scattering and bremsstrahlung radiation. The simulations were evolved for about 1.6 Myr beyond their initial collapse employing a pressure floor technique to study the impact of magnetic fields during the formation of supermassive stars. ## 3 Results Some of the main properties of the simulations for the three different halos are summarized in Fig. 1 via mass-weighted radial profiles. Figure 1: Mass-weighted radial profiles of the three simulated halos. Top left: Enclosed mass. Top right: Rotational velocity. Mid left: Angular momentum. Mid right: Magnetic field strength (in proper units). Bottom left: Plasma beta parameter. Bottom right: Alfvén and thermal velocity. The solid lines correspond to the MHD simulations, the dotted lines to the hydrodynamical ones. In the bottom right figure the dashed line refers to the sound speed, which we checked to be very similar in the MHD and hydrodynamical simulations. The radial profile of the enclosed mass approximately follows the form expected from an isothermal sphere on scales above 1 pc, with the exception of occasional minor bumps due to small inhomogeneities in the density distribution. Within the central region, we first note some flattening of the enclosed mass until a radius of about 0.1 pc, and subsequently a steep decline due to the density being approximately constant within the central region. On scales above 10 pc, the rotational velocity shows the same behaviour for MHD and hydrodynamical simulations. On smaller scales it differs, with the rotational velocity being larger in two of the hydrodynamical simulations compared to the MHD ones, though there is also one simulation where this appears to be the other way round. Towards the center, the rotational velocity then usually declines, again with the exception of one halo where the center is not well-defined and other clumps are present in the vicinity, leading to more complex velocity structures. For the angular momentum, the radial profiles are very similar down to a scale of 10 pc. On smaller scales, the hydrodynamical runs show a higher angular momentum, including strong peaks in the profile corresponding to clumps with significant amounts of angular momentum. 
In the radial profile of the magnetic field, we find signatures from compression on scales of \(30-300\) pc and a steep increase around scales of \(10-30\) pc, as in the inner region the magnetic field has been efficiently amplified due to turbulence and shocks as a result of strong infall. The plasma beta parameter, corresponding to the ratio of thermal over magnetic pressure, is initially high and of the order \(10^{10}\) on scales above 10 pc, then dropping significantly in the range of \(1-10\) pc where the magnetic field strength increases very significantly, leading to typical values of \(3-30\) within the central region. The sound speed in all three halos is a few times \(10^{5}\) cm s\({}^{-1}\) independent of scale, while the Alfvén velocity is initially insignificant, of the order of \(0.1\) cm s\({}^{-1}\) on scales above 30 pc, though then increasing steeply and reaching values comparable to the sound speed on scales below 10 pc. For radii of \(0.1-1\) pc, it even exceeds the sound speed by a factor of \(2-3\). This behaviour is reflected in the thermal and magnetic Jeans mass given in Fig. 2. As the gas is approximately isothermal within the simulation, the thermal Jeans mass follows an approximate power-law behaviour with values of \(\sim 10^{7}\) M\({}_{\odot}\) on scales of 100 pc and decreasing to about 30 M\({}_{\odot}\) on scales of 0.02 pc, with a moderate bump on scales of 0.5 pc where the temperature is slightly increased, as also reflected by bumps in the density structure. The magnetic Jeans mass is insignificant on scales above 30 pc, but then rises steeply and reaches a maximum of \(\sim 10^{6}\) M\({}_{\odot}\) on scales of \(\sim 0.3\) pc. It dominates over the thermal Jeans mass in the range from \(0.1-10\) pc and thus considerably contributes to suppressing fragmentation. However, in the innermost part of the central core (scales below 0.4 pc), the thermal support is more relevant than the magnetic one. In Fig. 3, we show the time evolution of angular momentum and rotational velocity profiles of halo 1, together with magnetic and Reynolds torques given as (Sheikhnezami & Fendt, 2022) \[\tau_{\rm Reyn} = \int_{S}r\left(\rho u_{\phi}\vec{u}_{p}\right)\cdot\vec{ds}, \tag{1}\] \[\tau_{\rm M} = -\int_{S}r\frac{1}{4\pi}B_{\phi}\vec{B}_{p}\cdot\vec{ds}. \tag{2}\] The angular momentum is generally found to be higher in the hydrodynamical runs compared to the MHD runs. In all simulations, the angular momentum is found to increase in the center as a function of time, but more strongly within the hydrodynamical simulations. A similar trend is found for the rotational velocity which generally increases with time. The Reynolds torque shows scatter but no significant dependence on spatial scale and a moderate increase over time, typically being in the range of \(10^{51}-10^{53}\) g cm\({}^{2}\) s\({}^{-2}\). The magnetic torque is originally negligible as initially the magnetic field is weak, but reaches similar values of order \(10^{51}\) g cm\({}^{2}\) s\({}^{-2}\) during the time evolution on scales less than 10 pc. Both the Reynolds and the magnetic torques show occasional peaks in the range of \(0.5-30\) pc due to inhomogeneities in the flow. We checked that halos 2 and 3 show very similar results. We estimate the inward transport of angular momentum due to advection as \[\dot{J}_{\rm adv}(r)=4\pi r^{2}\rho rv_{\rm rot}v_{r}, \tag{3}\] with \(\rho\) being the density, \(v_{r}\) the radial velocity, and \(v_{\rm rot}\) the rotational velocity. In Fig. 
4, the sum of the magnetic and Reynolds torque is shown and compared to the advection term. Their contributions are generally found to be very similar, though the inward transport term exceeds the magnetic and Reynolds stresses at least on some scales and thus explains the inward transport of the angular momentum. Within the innermost core on scales below 0.1 pc, though, we note that the inward transport term decreases more strongly as the innermost core is still gravitationally stable, implying lower radial velocities. Figure 2: Mass-weighted radial profiles of the thermal (dashed line) and magnetic Jeans mass (solid line) for the three simulated halos. Similarly, we note that the Reynolds and magnetic torques decrease in the central core due to lower magnetic field strengths and reduced velocities on these scales. In Fig. 5, we compare the timescales associated with the Reynolds and magnetic torques as well as the inward advection timescale with the free-fall timescale, given as \[T_{\rm ff}=\sqrt{\frac{3\pi}{16G\rho}}. \tag{4}\] The free-fall time follows an approximate power-law behaviour starting around \(10^{7}\) years on scales of \(1000\) pc and reaching about \(10^{3}\) years around \(0.02\) pc. The timescale related to the Reynolds stress is always considerably shorter than the free-fall time and only becomes comparable in the central region, where the free-fall time is short and turbulent and magnetic stresses are weak. The timescale associated with magnetic braking, on the other hand, is initially considerably larger than the free-fall time, though drops considerably between \(10-30\) pc. Particularly on scales of \(0.1-1\) pc, it is close to the Reynolds timescale and contributes significantly to the redistribution of angular momentum. The timescale of inward advection of the angular momentum is typically found to be somewhat smaller, though fluctuating, compared to the Reynolds and magnetic braking timescales, thereby explaining that the angular momentum still increases in the central region of the collapse. Overall our results thus show that Reynolds and magnetic stresses considerably contribute to the redistribution of angular momentum and reduce the total amount of angular momentum that would be present in pure hydrodynamical runs, though are not sufficient to fully compensate for the advection of angular momentum provided by infall. Figure 3: Mass-weighted radial profiles of halo 1. Top: Time evolution of angular momentum and rotational velocity in the MHD (solid line) and hydrodynamical simulations (dotted line) of halo 1. Bottom: Time evolution of Reynolds and magnetic torque. The Reynolds torque is shown for the MHD (solid line) and hydrodynamical (dotted line) simulations. We also provide a comparison with the inward advection term given in Eq. 3 (thin lines, both dotted and solid). ## 4 Discussion and Conclusions Using the cosmological high-resolution (magneto-)hydrodynamical simulations of gravitational collapse in atomic cooling halos from L23, we have analyzed here the evolution of angular momentum considering the Reynolds and magnetic torques as well as the inward transport of the angular momentum via advection. Both in hydrodynamical and magneto-hydrodynamical runs, the angular momentum on scales below 1 pc is found to increase significantly over a timescale of \(\sim 1\) Myr, but more strongly so in the purely hydrodynamical runs. 
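For readers who wish to reproduce this type of analysis from simulation outputs, the sketch below indicates one way to evaluate shell-averaged versions of the Reynolds and magnetic torques (Eqs. 1 and 2, Gaussian units), the advection term (Eq. 3), and the free-fall time (Eq. 4) from per-cell data. The array names, the CGS unit choice, and the binning scheme are ours and are not taken from the ENZO/L23 pipeline.

```python
import numpy as np

G = 6.674e-8  # cm^3 g^-1 s^-2 (CGS, to match the Gaussian-unit torques above)

def shell_torques(r, rho, v_r, v_phi, B_r, B_phi, r_edges):
    """Approximate Eqs. (1)-(3) on spherical shells from per-cell data (CGS).

    r, rho, v_r, v_phi, B_r, B_phi are 1D arrays of cell radius, density,
    and the radial/azimuthal velocity and magnetic-field components."""
    idx = np.digitize(r, r_edges) - 1
    n_shell = len(r_edges) - 1
    tau_reyn = np.zeros(n_shell)
    tau_mag = np.zeros(n_shell)
    jdot_adv = np.zeros(n_shell)
    for s in range(n_shell):
        m = idx == s
        if not m.any():
            continue
        r_s = 0.5 * (r_edges[s] + r_edges[s + 1])
        area = 4 * np.pi * r_s ** 2
        # Surface integrals approximated by shell averages times the shell area
        tau_reyn[s] = r_s * np.mean(rho[m] * v_phi[m] * v_r[m]) * area
        tau_mag[s] = -r_s * np.mean(B_phi[m] * B_r[m]) / (4 * np.pi) * area
        # Eq. (3): angular-momentum flux advected through the shell
        jdot_adv[s] = (area * np.mean(rho[m]) * r_s
                       * np.mean(v_phi[m]) * np.mean(v_r[m]))
    return tau_reyn, tau_mag, jdot_adv

def free_fall_time(rho):
    """Free-fall time of Eq. (4) for a given density (CGS)."""
    return np.sqrt(3 * np.pi / (16 * G * rho))
```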
The magnetic field strength increases significantly over time and, while the magnetic torques are initially insignificant, they provide a relevant contribution to the Reynolds torques after about 0.5 Myr. Both terms together however do not fully compensate for the inward transport of angular momentum via advection, so that the angular momentum in the center nonetheless keeps increasing. The dynamical role of the magnetic field with respect to fragmentation has been noted already by L23, as considerably more fragments have formed in the pure hydrodynamical runs, even though many of them are subsequently merging. Figure 4: The top panel shows the mass-weighted, radially averaged Reynolds torques for the HD runs and the bottom panel the total torque (Reynolds plus magnetic) for the MHD runs at the end of the simulations for all three halos. For comparison, the thin lines in both panels show the angular momentum inflow term due to advection, \(\dot{J}_{\rm adv}\). Figure 5: Mass-weighted radially averaged profile of the characteristic timescales associated with the Reynolds torque (dotted lines), magnetic torque (solid lines), and infall torques (thin solid lines) in comparison to the free-fall (dashed line) timescale. Here, we show via the comparison of the magnetic and the thermal Jeans mass that indeed on scales of \(0.1-10\) pc, the magnetic fields are sufficiently strong to considerably suppress fragmentation. This is similar to results found e.g. by Sharda et al. (2020) in the context of smaller halos. Together with the additional contribution to the angular momentum transport, these results explain how the magnetic field helps to reduce fragmentation and favors the formation of a single central object. Such objects are expected to evolve towards a supermassive protostar (e.g. Hosokawa et al., 2012, 2013; Schleicher et al., 2013; Haemmerle et al., 2019), which will subsequently contract into supermassive black holes via the General Relativistic instability. Piddington (1970) proposed that the gravitational collapse and differential rotation may account for the observed galactic fields within the Hubble timescale. Ratra et al. (1995) based on timescale arguments anticipated that magnetic braking may remove angular momentum during the formation of the first cosmic structures. Pandey et al. (2019) employed an analytical model to study the impact of magnetic braking on the angular momentum in halos of \(10^{7}-10^{9}\) M\({}_{\odot}\). They found that comoving large scale magnetic fields of strength \(\geq\) 0.1 nG are needed to remove the angular momentum from gas clouds. They assumed constant densities of 1, 10 and 100 times the background density and therefore their results do not remain valid on scales below the virial radius of the halo. Also, their toy model does not capture 3D dynamical effects such as mergers, shocks and turbulent motions which further amplify magnetic fields. Our findings show that even weak seed fields of the order of \(10^{-19}\) G (comoving, at z=150) can be efficiently amplified by the small scale dynamo and in the presence of accretion shocks and significantly contribute to braking the rotation of gas clouds at later times along with Reynolds torques. Therefore, the results of Pandey et al. (2019) should be considered as an upper limit on the strength of the B field required for magnetic braking. The case of rotating supermassive stars was investigated by Uchida et al. 
(2017), finding that angular momentum leads to the formation of a torus surrounding the rotating black hole. In case of the additional presence of magnetic fields, collapse will further lead to the launching of jets consistent with the typical duration of long gamma-ray bursts (Butler et al., 2018; Sun et al., 2018). The presence of magnetic fields during the formation of the first black holes may thus give rise to direct observational implications, and relates early black hole formation to already known observed phenomena. Our choice of the initial B field is about two order of magnitude smaller than the lower limit inferred from Blazer observations at galactic scales. If we were to employ a stronger initial B field, it would further strengthen our main findings by suppressing fragmentation and efficiently transporting angular momentum by exerting magnetic torques. All in all, it will support the formation of DCBHs. To investigate magnetic braking during the formation of DCBHs, we studied here pristine halos of a few times \(10^{7}\) M\({}_{\odot}\) which are considered as embryos of DCBHs. As the rotational velocity scales with halo mass, relatively weaker and stronger B fields will be required to induce magnetic braking in both smaller and larger halos (\(\geq 10^{8}\) M\({}_{\odot}\)), respectively, as found by Pandey et al. (2019). However, this needs to be investigated in detail in future works. The strength of magnetic field in our simulations at kpc scales is smaller than observed galactic fields as simulations are only evolved for 1.5 Myr. Studies exploring magnetic fields in Milky Way like galaxies show that e-folding time of about 100 Myr is required until saturation occurs with typical galactic field strength of a 10-50 \(\mu\)G (Pakmor et al., 2017). Previous works also show that initial magnetic field strength is irrelevant due to the rapid amplification by the small scale dynamo and shocks (Pakmor et al., 2014; Latif et al., 2014; Marinacci and Vogelsberger, 2016). Therefore, if we were to evolve our simulations for a few hundred Myrs they will reproduce observed galactic fields. MAL thanks the UAEU for funding via UPAR grants No. 31S390 and 12S111. DRGS gratefully acknowledges support by the ANID BASAL projects ACE210002 and FB21003, the Millenium Nucleus NCN19-058 (TITANs) as well as via Fondecyt Regular (project code 1201280). DRGS thanks for funding via the Alexander von Humboldt - Foundation, Bonn, Germany.
2304.14164
A comparative study of methods to estimate conversion gain in sub-electron and multi-electron read noise regimes
Of all sensor performance parameters, the conversion gain is arguably the most fundamental as it describes the conversion of photoelectrons at the sensor input into digital numbers at the output. Due in part to the emergence of deep sub-electron read noise image sensors in recent years, the literature has seen a resurgence of papers detailing methods for estimating conversion gain in both the sub-electron and multi-electron read noise regimes. Each of the proposed methods work from identical noise models but nevertheless yield diverse procedures for estimating conversion gain. Here, an overview of the proposed methods is provided along with an investigation into their assumptions, uncertainty, and measurement requirements. A sensitivity analysis is conducted using synthetic data for a variety of different sensor configurations. Specifically, the dependence of the conversion gain estimate uncertainty on the magnitude of read noise and quanta exposure is explored. Guidance into the trade-offs between the different methods is provided so that experimenters understand which method is optimal for their application. In support of the reproducible research effort, the MATLAB functions associated with this work can be found on the Mathworks file exchange.
Aaron Hendrickson, David P. Haefner
2023-04-27T13:04:41Z
http://arxiv.org/abs/2304.14164v1
A comparative study of methods to estimate conversion gain in sub-electron and multi-electron read noise regimes ###### Abstract Of all sensor performance parameters, the conversion gain is arguably the most fundamental as it describes the conversion of photoelectrons at the sensor input into digital numbers at the output. Due in part to the emergence of deep sub-electron read noise image sensors in recent years, the literature has seen a resurgence of papers detailing methods for estimating conversion gain in both the sub-electron and multi-electron read noise regimes. Each of the proposed methods work from identical noise models but nevertheless yield diverse procedures for estimating conversion gain. Here, an overview of the proposed methods is provided along with an investigation into their assumptions, uncertainty, and measurement requirements. A sensitivity analysis is conducted using synthetic data for a variety of different sensor configurations. Specifically, the dependence of the conversion gain estimate uncertainty on the magnitude of read noise and quanta exposure is explored. Guidance into the trade-offs between the different methods is provided so that experimenters understand which method is optimal for their application. In support of the reproducible research effort, the MATLAB functions associated with this work can be found on the Mathworks file exchange. conversion gain, DSERN, photon counting distribution, photon transfer, QIS, read noise, sensor characterization, sub-electron noise Further author information: (Send correspondence to A.H.) A.H.: E-mail: [email protected] D.P.H.: E-mail: [email protected] ## 1 Introduction Since the advent of the Charge-Coupled Device (CCD) in the early 1970s, methods for characterizing electro-optical image sensors have continued to adapt to emerging technologies. Of particular importance in image sensor characterization is the measurement of conversion gain, which describes an intrinsic conversion constant relating arbitrary units of Digital Numbers (DN) at the sensor output back to a physically meaningful quantity of electrons (\(e\)-) at the sensor input. Generally speaking, each pixel in an image sensor array will have a unique conversion gain and this gain nonuniformity corrupts the output imagery. For this reason, a precise estimate of each pixel's conversion gain is needed to correct the image degrading effects of gain nonuniformity and calibrate the sensor in terms of absolute units. For many decades, the Photon Transfer (PT) method has been the standard approach to conversion gain estimation [1, 2, 3, 4]. Since the arrival of photons at a sensor is accurately modeled by the Poisson distribution, the moments of the sensor input are known allowing PT to treat the sensor as a black box, only observing the statistical moments of the output, to determine the conversion gain. In 2015, the first Deep Sub-Electron Read Noise (DSERN) image sensor was reported in the literature, carrying with it promising applications in low-light imaging and quantum technologies [5]. As a result of sub-electron read noise, _DSERN_ devices could _discern_ the number of electrons generated in each pixel leading to never before seen structure in the data produced by such devices. As an example, Figure 1 shows histograms produced by a traditional scientific grade CCD pixel (left) and DSERN CMOS pixel (right) exposed to constant illumination. 
Figure 1: Histograms produced by a scientific grade CCD pixel (left) and DSERN CMOS pixel (right). While it is not possible to observe electron events in the CCD data (since the read noise \(\sigma_{R}\) is too large), the DSERN produced histogram shows distinct peaks where zero, one, two, etc. free-electrons have been detected within the pixel. The additional structure observed in DSERN sensor data has led to the development of several new methods of conversion gain estimation, which leverage the additional structure to produce lower uncertainty estimates in comparison to the traditional PT method [6, 7, 8]. What is not clear in this body of research is that all of the proposed methods are derived from the same statistical model, which is valid for sensors with sub-electron and multi-electron read noise. Furthermore, while these newly proposed methods were designed to leverage the additional structure in data produced by DSERN capable devices, some show promise in characterizing sensors outside the DSERN regime; thus, serving as a general estimation procedure to supersede the legacy PT method. In this work, a comprehensive overview of all currently available methods for conversion gain estimation will be discussed using a unified model and notation to facilitate comparison between each method. This will be accomplished by first describing the unifying model of sensor noise and then using the framework of this model to describe each method in detail. With a full description of each method at hand, Monte Carlo simulations will be carried out to determine which method is best under a variety of sensor parameters. The authors aim to implement each method as faithfully as possible, and to this end, all of the code, including the implementation of each method, is available on the Mathworks file exchange. ## 2 The Photon Counting Distribution Model The Photon Counting Distribution (PCD) model represents a single observation from a pixel (a digital gray value) as the random variable \(X\) given by [8, 9] \[\begin{split}X&=\lceil(K+R)/g+\mu\rfloor\\ K&\sim\text{Poisson}(H)\\ R&\sim\mathcal{N}(0,\sigma_{R}^{2}).\end{split} \tag{1}\] The variables used in this model are defined as follows: \(K\) represents the _electron number_, \(H\) represents the expected number of electrons generated (thermally or otherwise) per integration time and is expressed in units of (\(e\)-), \(\sigma_{R}\) represents the input referred analog read noise in units of (\(e\)-), \(g\) represents the conversion gain in units of (\(e\)-/DN), \(\mu\) represents the pixel bias or DC offset in units of (DN), and \(\lceil\cdot\rfloor\) denotes rounding to the nearest integer. In all, the random variable \(X\) captures the process of adding noise (\(R\)) to a number of electrons (\(K\)) followed by the application of gain, offset, and finally quantization. The quantization (rounding) defining the PCD model in (1) adds significant complexity to the distribution of \(X\). If, however, \(g\ll\sigma_{R}\) so that the quantization bins are sufficiently small, the quantization process can be modeled as an additive noise source so that a continuous distribution still provides an adequate model. 
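A minimal sketch of drawing synthetic gray values from the PCD model in Eq. (1); the parameter values in the example are illustrative only.

```python
import numpy as np

def sample_pcd(n, H, g, mu, sigma_R, rng=None):
    """Draw n synthetic gray values from the PCD model of Eq. (1):
    X = round((K + R)/g + mu), K ~ Poisson(H), R ~ N(0, sigma_R^2)."""
    rng = np.random.default_rng(rng)
    K = rng.poisson(H, size=n)               # electron number (e-)
    R = rng.normal(0.0, sigma_R, size=n)     # analog read noise (e-)
    return np.rint((K + R) / g + mu)         # digital output (DN)

# Example: a DSERN-like pixel (sub-electron read noise) vs. a noisier pixel
x_dsern = sample_pcd(100_000, H=2.0, g=0.5, mu=100.0, sigma_R=0.25, rng=1)
x_classic = sample_pcd(100_000, H=2.0, g=0.5, mu=100.0, sigma_R=2.0, rng=1)
```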
Under this assumption the distribution of \(X\) is modeled by the PCD \[f_{X}(x|\theta)=\sum_{k=0}^{\infty}\frac{e^{-H}H^{k}}{k!}\phi(x;\mu+k/g,\sigma^{ 2}), \tag{2}\] where \(\theta=(H,g,a,b^{2})\) denotes the PCD parameter vector, \(\phi(x;\alpha,\beta^{2})\) is the Gaussian probability density with mean \(\alpha\) and variance \(\beta^{2}\), and \(\sigma=(\sigma_{R}^{2}/g^{2}+\sigma_{Q}^{2})^{1/2}\) represents the combined read and quantization noise in units of (DN). For most applications the series representation (2) works best since only a few terms are needed to get a good approximation for \(f_{X}\); however, through the use of characteristic functions an integral representation can also be derived in the form \[f_{X}(x|\theta)=\frac{1}{\pi}\int_{0}^{\infty}\exp(H(\cos(t/g)-1)-\sigma^{2}t^{ 2}/2)\cos((\mu-x)t+H\sin(t/g))\,\mathrm{d}t. \tag{3}\] Furthermore, (2) suggests a Monte Carlo estimator of the form \[f_{X}(x|\theta)=\mathsf{E}\phi(x;\mu+K/g,\sigma^{2})\approx\frac{1}{n}\sum_{k =1}^{n}\phi(x;\mu+K_{k}/g,\sigma^{2}), \tag{4}\] where \(\{K_{1},\ldots,K_{n}\}\) are i.i.d. Poisson(\(H\)) random variables. For notational purposes the shorthand \(X\sim\mathrm{PCD}(H,g,\mu,\sigma^{2})\) will be used to denote a random variable distributed according to the PCD. Two special cases of the PCD occur as \(H\to 0\) and \(\sigma\to\infty\) giving \(\mathrm{PCD}(H,g,\mu,\sigma^{2})\to\mathcal{N}(\mu,\sigma^{2})\) and \(\mathrm{PCD}(H,g,\mu,\sigma^{2})\to\mathcal{N}(\mu+H/g,\sigma^{2}+H/g^{2})\), respectively. As far as the author's know, the first known mention of the PCD (albeit not by this name), in the context of image sensors, can be found is James Janesick's _Photon Transfer_ (pg. 26, Figure 3.7), which showed simulated data for the standardized version \(\mathrm{PCD}(1,1,0,\sigma^{2})\)[3]. Later papers by Fossum, Starkey, and Ma[10, 11, 12, 13, 6] wrote down a more complete mathematical description of \(\mathrm{PCD}(H,1,0,\sigma^{2})\), which included a parameter for the quanta exposure. Furthermore, Nakamoto and Hotaka[7] included a parameter for the gain in the form \(\mathrm{PCD}(H,g,0,\sigma^{2})\) but never in the full form accounting for the offset as seen in (2). What makes (2) a complete description is the fact that it provides all the parameters necessary to fit the PCD to raw sensor data. Depending on the specified parameters, the shape of the PCD can vary from a simple Gaussian bell-curve to a more complicated form involving many local maxima (peaks). The parameter \(\mu\) acts as a location parameter shifting the PCD on the x-axis, while \(g\) acts as a scaling factor that controls the distance between adjacent peaks. Changing either of these parameters does not drastically change the overall look of the PCD. On the other hand, the parameters \(H\) and \(\sigma^{2}\) play a significant role in the shape of the PCD. Figure 2 plots the PCD for various \(H\) and \(\sigma^{2}\) (fixing \(\mu=0\) and \(g=1\)) to show how these parameters change the appearance of the probability density. In particular, for small enough \(\sigma^{2}\), the PCD oscillates showing many local maxima. Sensors that exhibit clearly resolved peaks like this are said to belong to the DSERN (a.k.a. sub-electron noise) regime. Likewise, the parameter \(H\) changes the overall envelope of the PCD from a highly skewed form at small \(H\) to a more Gaussian profile at large \(H\). 
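The series form of Eq. (2) is straightforward to evaluate numerically by truncating the Poisson sum; a short sketch is given below, with a truncation rule of our own choosing.

```python
import numpy as np
from scipy.stats import norm, poisson

def pcd_pdf(x, H, g, mu, sigma, k_max=None):
    """Evaluate the PCD of Eq. (2) by truncating the Poisson series.

    sigma is the combined read + quantization noise in DN; the truncation
    point k_max defaults to a generous multiple of H (our own heuristic)."""
    if k_max is None:
        k_max = int(np.ceil(H + 10 * np.sqrt(H) + 10))
    k = np.arange(k_max + 1)
    weights = poisson.pmf(k, H)                       # e^-H H^k / k!
    x = np.atleast_1d(np.asarray(x, dtype=float))
    comps = norm.pdf(x[:, None], loc=mu + k / g, scale=sigma)
    return comps @ weights

# Example: a sub-electron-noise pixel shows resolved peaks spaced by 1/g DN
xs = np.linspace(95, 115, 400)
pdf = pcd_pdf(xs, H=2.0, g=0.5, mu=100.0, sigma=0.6)
```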
## 3 Methods for Estimating Conversion Gain In recent years several novel methods have emerged for estimating conversion gain in the sub-electron and multi-electron noise regimes. What is not immediately clear in the literature is that all of these newly proposed methods, as well as the traditional PT method, can all be fully described in the context of the PCD model introduced in the previous section. As such, the goal of this section is to explain each method, using a unified notation, in the PCD framework. Supporting theory for each method will be presented followed by a discussion of the associated pros and cons. ### Photon Transfer Method Photon Transfer (PT) is a classic method for measuring conversion gain that has been around for several decades in various forms[14, 3, 1]. To derive the PT estimator of the conversion gain, first let \(X\sim\mathrm{PCD}(H,g,\mu,\sigma^{2})\) and define \(v(H):=\mathsf{Var}X=\sigma^{2}+H/g^{2}\) to be the variance of \(X\) as a function of the quanta exposure \(H\). Here, the symbol \(\mathsf{E}\) is used to denote the expectation operator so that the variance is defined as \(\mathsf{Var}X=\mathsf{E}X^{2}-(\mathsf{E}X)^{2}\). It follows from the definition of the derivative that \[\frac{1/g}{\partial_{H}v(H)}=\lim_{\Delta H\to 0}\frac{\Delta H/g}{v(H+\Delta H)-v(H)}= \lim_{\Delta H\to 0}g=g. \tag{5}\] Notice that the fraction defining the derivative in (5) is independent of \(\Delta H\); thus, the limit \(\Delta H\to 0\) is not needed and nonzero values of \(\Delta H\) may be used to compute \(g\). With this information now suppose \(X\sim\mathrm{PCD}(H+\Delta H,g,\mu,\sigma^{2})\) and \(Y\sim\mathrm{PCD}(H,g,\mu,\sigma^{2})\) are independent \(\mathrm{PCD}\) random variables. Consequently, \(\Delta H/g=\mathsf{E}X-\mathsf{E}Y\) and \(v(H+\Delta H)-v(H)=\mathsf{Var}X-\mathsf{Var}Y\) leading to the classic PT relation \[g=\frac{\mathsf{E}X-\mathsf{E}Y}{\mathsf{Var}X-\mathsf{Var}Y}. \tag{6}\] The PT method for conversion gain estimation replaces the populations means and variances in (6) with their respective unbiased estimators to obtain an estimator for \(g\). Specifically, let \(\mathbf{x}=\{x_{1},\ldots,x_{n_{1}}\}\) with \(x_{k}\sim\mathrm{PCD}(H+\Delta H,g,\mu,\sigma^{2})\) denote a random sample of \(n_{1}\) observations at a quanta exposure of \(H+\Delta H\) and \(\mathbf{y}=\{y_{1},\ldots,y_{n_{2}}\}\) with \(y_{k}\sim\mathrm{PCD}(H,g,\mu,\sigma^{2})\) denote a second (independent) random sample of \(n_{2}\) observations at a quanta exposure of \(H\). Denoting \(\bar{x}=\frac{1}{n_{1}}\sum_{k=1}^{n_{1}}x_{k}\) as the sample mean and \(\hat{x}=\frac{1}{n_{1}-1}\sum_{k=1}^{n_{1}}(x_{k}-\bar{x})^{2}\) as the sample variance of the \(\mathbf{x}\)-sample (and likewise for \(\bar{y}\) and \(\hat{y}\)), the PT estimate for the conversion gain is given by [14, 15] \[\tilde{g}=\frac{\bar{x}-\bar{y}}{\hat{x}-\hat{y}}. \tag{7}\] Since this is an estimator of two independent samples, Hendrickson et. al. (2022) derived approximate optimal Figure 2: Plots of the \(\mathrm{PCD}\) for various \(H\) and \(\sigma^{2}\) with \(\mu=0\) and \(g=1\) fixed. 
sample size pairs \((n_{1}^{\rm opt},n_{2}^{\rm opt})\) of the form [15] \[\begin{split} n_{1}^{\rm opt}&\sim\frac{2(1+\zeta)}{ \mathtt{acv}_{0}^{2}(1-\zeta)^{2}}+5\\ n_{2}^{\rm opt}&\sim\frac{2\zeta(1+\zeta)}{\mathtt{ acv}_{0}^{2}(1-\zeta)^{2}}+1,\end{split} \tag{8}\] where \[\zeta=\frac{\mathtt{Var}Y}{\mathtt{Var}X}=\frac{\sigma^{2}+\frac{H}{g^{2}}}{ \sigma^{2}+\frac{H}{g^{2}}+\frac{\Delta H}{g^{2}}} \tag{9}\] and \(\mathtt{acv}_{0}\) denotes the desired relative uncertainty of the final estimate, e.g. \(\mathtt{acv}_{0}=0.05\) corresponds to 5% estimator uncertainty [15]. These approximate optimal sample size pairs allow an experimenter to achieve the desired estimate uncertainty (\(\mathtt{acv}_{0}\)) with the fewest total number of samples possible [15]. In practice, most image sensors are not perfectly linear (\(g\) is dependent on \(H\)) so that a small \(\Delta H\) (\(\zeta\approx 1\)) is needed to obtain a meaningful estimate of \(g\) at the chosen illumination level. This, however, can cause instability in the estimator due to the fact that statistical uncertainty in the quantity \(\hat{x}-\hat{y}\) can lead to division by zero type errors*. In fact, for even moderately large \(n_{1}\) and \(n_{2}\), the sampling distribution of \((\hat{x}-\hat{y})^{-1}\) is accurately modeled by the inverse gamma-difference distribution, which is known to have undefined moments in a similar manner as the Cauchy distribution [16]. This lack of well-defined moments leads to the PT estimator \(\tilde{g}\) having ill-behaved statistical properties, most of which can be mitigated by using very large sample sizes (notice that \(n_{i}^{\rm opt}\to\infty\) and \(\Delta H\to 0\)). Additionally, a disadvantage of this estimator is that it utilizes only the first two moments of the PCD. Since the PCD is not fully described by these first two moments, the PT estimator does not fully utilize all the information about \(g\) contained in the sample leading to larger estimator uncertainty compared to other techniques. Despite these disadvantages, the PT estimator is still attractive as it provides useful estimates of \(g\) in both the sub-electron and multi-electron read noise regimes and is calculated from basic sample moments; rendering it the most computationally inexpensive estimator of \(g\). Footnote *: The numerical instability of this estimator for \(g\) is similar to the numerical instability of numerical derivatives. ### Photon Counting Histogram Method In response to the emergence of DSERN capable image sensors, the Photon Counting Histogram (PCH) method, developed at Dartmouth University, was the first documented method to explicitly incorporate the PCD model into the estimation of the sensor performance parameters [5, 6]. PCH is primarily a method for estimating conversion gain and read noise by detecting the locations of local maxima and minima observed in an experimentally generated histogram. To perform PCH characterization, a sample \(\mathbf{x}=\{x_{1},\ldots,x_{n_{1}}\}\) with \(x_{k}\sim\text{PCD}(H,g,\mu,\sigma^{2})\) is captured and binned as a histogram, which is the experimental PCH. Each bin count is divided by the sample size \(n_{1}\) to normalize the histogram so that it represents an approximation of the pixel's PCD at the chosen value of \(H\). Assuming the read noise is small enough, many peaks (local maxima) in the experimental PCH should be present and an algorithm for detecting these peaks is deployed. Figure 3 shows a simulated PCH with the locations of ten detected peaks. 
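The sketch below is a purely illustrative companion to this discussion: it simulates a DSERN-regime sample (with rounding standing in for the quantizer, an assumption on our part), bins it into an experimental PCH, detects the peaks in the spirit of Figure 3, and, for reference, forms the two-sample PT gain estimate of Eq. (7) from Section 3.1.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

def simulate_pcd(n, H, g, mu, sigma_R):
    """Draw n PCD observations: Poisson electrons plus read noise, rounded to
    integer DN (a simple stand-in for the quantizer in the acquisition model)."""
    K = rng.poisson(H, n)                                  # photoelectron counts
    return np.round(mu + K / g + rng.normal(0.0, sigma_R / g, n))

H, g, mu, sigma_R = 3.0, 0.25, 500.0, 0.2                  # sigma_R in e-, g in e-/DN
x = simulate_pcd(200_000, H, g, mu, sigma_R)

# Experimental PCH: integer-centred bins, normalized to a density.
edges = np.arange(x.min() - 0.5, x.max() + 1.5)
pch, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Peak detection on the PCH (cf. Figure 3); adjacent peaks sit ~1/g DN apart.
peaks, _ = find_peaks(pch, height=0.05 * pch.max(), distance=2)
print("detected peak locations (DN):", centers[peaks])

# Two-sample PT gain estimate, Eq. (7), using a brighter second sample.
y = simulate_pcd(200_000, H + 2.0, g, mu, sigma_R)
g_pt = (y.mean() - x.mean()) / (y.var(ddof=1) - x.var(ddof=1))
print("PT estimate of g (e-/DN):", g_pt, "   true g:", g)
```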
To estimate the conversion gain, let \(\{(p_{x1},p_{y1}),\ldots,(p_{xm},p_{ym})\}\) denote a sequence of \(m\) consecutive peak locations detected in the experimental PCH. According to the PCD model, the abscissas of the peaks locations \(\{p_{xk}\}\), in units of (DN), are approximately located at equally spaced intervals of the form \(p_{xk}=\mu+k/g\) with \(k\in\mathbb{N}_{0}\). As such, fitting a line to the data \(\{(1,p_{x1}),\ldots,(m,p_{xm})\}\) and extracting the reciprocal slope of the fit yields an estimate \(\tilde{g}\) for the conversion gain. If the electron number associated with each peak location is also known, one may instead fit a line to the data \(\{(k_{1},p_{x1}),\ldots,(k_{m},p_{xm})\}\) with the reciprocal slope again yielding \(\tilde{g}\) and the \(y\)-intercept yielding an estimate for the bias \(\tilde{\mu}\). Estimation of the quanta exposure requires the two most prominent peak locations and their electron numbers denoted by \(\{(p_{xk^{*}},p_{yk^{*}}),(p_{x(k^{*}+1)},p_{y(k^{*}+1)})\}\) and \((k^{*},k^{*}+1)\), respectively (see Figure 4). For small read noise values the ordinates of these peaks are approximated by \[p_{yk}\sim\frac{1}{\sqrt{2\pi}\sigma}\frac{e^{-H}H^{k}}{k!}. \tag{10}\] Taking the ratio of the two most prominent peaks and solving for \(H\) subsequently gives the estimate (see Figure 4) \[\tilde{H}=(k^{*}+1)\frac{p_{y(k^{*}+1)}}{p_{yk^{*}}}. \tag{11}\] In a similar manner, the read noise is calculated by first locating the valley (local minima) between the two most prominent peaks, denoted \((v_{x*},v_{y*})\), and then computing the Valley Peak Modulation (VPM) \[\text{VPM}=1-\frac{v_{y*}}{\frac{1}{2}(p_{yk^{*}}+p_{y(k^{*}+1)})}. \tag{12}\] The VPM is independent of the parameters \(g\) and \(\mu\) so that a lookup table can be generated containing the VPM for various values of \(\sigma_{R}\) and \(H\). Using the estimate \(\tilde{H}\), one can then lookup the value of \(\sigma_{R}\) corresponding to the estimated VPM to obtain the PCH estimate of read noise. Figure 4: Simulated PCH showing two most prominent peaks with corresponding valley location used for quanta exposure and read noise estimation. Figure 3: Simulated PCH showing ten detected peak locations. Assuming one is able to obtain estimates for all four parameters (which requires knowing the electron numbers for the detected peaks), the PCH estimates can be refined by fitting the PCD to the experimental PCH using nonlinear least squares with the initial parameter estimates as starting points. PCH provides an intuitive graphical approach to sensor characterization and only requires a single sample of data. Because PCH incorporates the full description of the PCD model into the estimation procedure, it leverages the structure of DSERN data resulting in estimates of conversion gain with less uncertainty compared to the PT method. Furthermore, a unique feature of PCH is that the initial estimate of \(g\) obtained from peak locations does not assume a Poissonian light source so that sources of an arbitrary probabilistic nature can in theory be used [7]. The biggest challenge with this method is the need to reliably detect peak and valley locations, which requires both large sample sizes and sufficiently small read noise so that the peaks can be observed. For this reason, the applicability of the PCH method is restricted to the DSERN regime. 
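Continuing the illustration, the following self-contained sketch (ours, with assumed simulation parameters) carries out the PCH estimation steps just described: conversion gain from the reciprocal slope of a line fit to the peak abscissas, quanta exposure from the two most prominent peaks via Eq. (11), and the VPM of Eq. (12). It assumes the first detected peak is the \(k=0\) peak so that the detection index equals the electron number; in practice the read noise would then be read off a precomputed VPM lookup table.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import linregress

rng = np.random.default_rng(2)

# Simulate a DSERN-regime sample (same assumed acquisition model as before).
H, g, mu, sigma_R = 2.2, 0.25, 500.0, 0.15
K = rng.poisson(H, 500_000)
x = np.round(mu + K / g + rng.normal(0.0, sigma_R / g, K.size))

edges = np.arange(x.min() - 0.5, x.max() + 1.5)
pch, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Detect peaks; with this threshold the first detected peak is the k = 0 peak,
# so the detection index doubles as the electron number k (an assumption).
pk, props = find_peaks(pch, height=0.02 * pch.max(), distance=2)
heights = props["peak_heights"]

# Conversion gain: reciprocal slope of a line fit to (electron number, abscissa).
fit = linregress(np.arange(pk.size), centers[pk])
print("PCH gain estimate (e-/DN):", 1.0 / fit.slope, "   true:", g)

# Quanta exposure from the two most prominent (adjacent) peaks, Eq. (11).
k_star = int(np.argmax(heights[:-1] + heights[1:]))   # left peak of the tallest adjacent pair
H_est = (k_star + 1) * heights[k_star + 1] / heights[k_star]
print("PCH quanta exposure estimate:", H_est, "   true:", H)

# Valley-peak modulation between those two peaks, Eq. (12).
valley = pch[pk[k_star]:pk[k_star + 1] + 1].min()
vpm = 1.0 - valley / (0.5 * (heights[k_star] + heights[k_star + 1]))
print("VPM:", vpm)
```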
### Fourier Transform Method Initially, the Fourier-based approach for sensor characterization was not developed as a self-contained method, but rather as a means of deriving starting points for the PCH-EM algorithm discussed in Section 3.6. Nevertheless, this approach can be regarded as a characterization method in its own right [8]. The idea behind this technique, similar to the PCH method, stemmed from the fact that the PCD exhibits periodic oscillations when the read noise is sufficiently small. This approach revolves around the analytical expression for the magnitude of the PCD Fourier transform given by \[|\hat{f}_{X}(\omega)|\coloneqq|\mathsf{E}\exp(-2\pi i\omega X)|=\exp(H(\cos(2 \pi\omega/g)-1)-2\pi^{2}\sigma^{2}\omega^{2}). \tag{13}\] Recall that when the read noise is sufficiently small, the PCD has local maxima occurring with a period of approximately \(1/g\) (frequency of \(g\)). This property means the magnitude function should exhibit local maxima at frequencies approximately located at integer multiple of \(g\) as is seen in Figure 5 (blue curve). Let \(\omega^{*}\) denote the frequency corresponding to the local maxima near \(g\), which is the second most prominent peak of the magnitude function after the primary peak at \(\omega=0\). We note that in order for this secondary peak to exist we must have \(\sigma_{e^{-}}^{2}/H<|\min_{x>0}\operatorname{sinc}x|=0.217\dots\), where \(\sigma_{e^{-}}=\sigma\times g\) is the read plus quantization noise in units of electrons and \(\operatorname{sinc}x=\sin x/x\). Using Lagrange inversion and defining \(z=-\sigma_{e^{-}}^{2}/H\) we compute \[\omega^{*}=g+\sum_{n=1}^{\infty}\lim_{\omega\to g}\partial_{\omega}^{n-1} \left(\frac{\omega-g}{\operatorname{sinc}(2\pi\omega/g)}\right)^{n}\frac{z^{ n}}{n!}=g(1+z+z^{2}+\mathcal{O}(z^{3})), \tag{14}\] which shows that for small \(|z|\), \(\omega^{*}\) is very well approximated by \(g\). With this information we can approximate the magnitude function \(|\hat{f}_{X}(\omega)|\) near the secondary peak by considering the following asymptotic approximation as \(\omega\to g\): \[|\hat{f}_{X}(\omega)|\sim a\exp(-2\pi^{2}v(\omega-b)^{2}), \tag{15}\] where \(v=\mathsf{Var}X=\sigma^{2}+H/g^{2}\), \[a=\exp\left(-2\pi^{2}\left(H-\frac{(H/g)^{2}}{v}\right)\right), \tag{16}\] and \[b=\frac{H/g}{v}. \tag{17}\] Figure 5 shows the exact magnitude function (blue) compared to the asymptotic approximation (purple) along with the location of the secondary peak (\(\omega^{*},|\hat{f}_{X}(\omega^{*})|\)). As can be observed, the peak of the asymptotic approximation, \((b,a)\), provides an excellent approximation to the location of the exact peak. To understand why this is, notice that this asymptotic expression approximates \(\omega^{*}\) as \[\omega^{*}\sim b=\frac{g}{1+\sigma_{e^{-}}^{2}/H}=\frac{g}{1-z}=g(1+z+z^{2}+ \mathcal{O}(z^{3})), \tag{18}\] which shows agreement with the first three terms of the exact expansion for \(\omega^{*}\) obtained in (14). The system of three equations given by \(v\), \(a\), and \(b\) can be inverted to give the following approximations for \(H\), \(g\), and \(\sigma^{2}\): \[\begin{split} H(v,a,b)&\sim vb^{2}-\frac{\log a}{2 \pi^{2}}\\ g(v,a,b)&\sim b-\frac{\log a}{2\pi^{2}vb}\\ \sigma^{2}(v,a,b)&\sim v-\left(v-\frac{\log a}{2\pi ^{2}b^{2}}\right)^{-1}.\end{split} \tag{19}\] Equipped with these details, the Fourier based method of characterization is as follows. 
First, a sample \(\mathbf{x}=\{x_{1},\ldots,x_{n_{1}}\}\) with \(x_{k}\sim\mathrm{PCD}(H,g,\mu,\sigma^{2})\) is captured and the sample mean \(\bar{x}=\frac{1}{n_{1}}\sum_{k=1}^{n_{1}}x_{k}\) and sample variance \(\hat{x}=\frac{1}{n_{1}-1}\sum_{k=1}^{n_{1}}(x_{k}-\bar{x})^{2}\) are computed. Since the sample \(\mathbf{x}\) is integer-valued, it is binned in bins centered on the integers and the counts \(c_{k}\) for each bin location \(b_{k}\) are normalized via \(p_{k}=c_{k}/n_{1}\) so that we obtain a density normalized experimental PCH. The Discrete Fourier Transform (DFT) of the density normalized experimental PCH is then calculated and the location of the secondary peak in the DFT is detected yielding an estimate \((\tilde{b},\tilde{a})\). Initial estimates for \(H\), \(g\) and \(\sigma^{2}\) are then found from (19) giving \((H_{0},g_{0},\sigma_{0}^{2})=(H(\hat{x},\tilde{a},\tilde{b}),g(\hat{x},\tilde {a},\tilde{b}),\sigma^{2}(\hat{x},\tilde{a},\tilde{b}))\). Final estimates \(\tilde{H}\), \(\tilde{g}\), \(\tilde{\sigma}^{2}\) are then found by fitting the magnitude function (13) to the experimental PCH DFT using nonlinear least squares and the starting points \((H_{0},g_{0},\sigma_{0}^{2})\). Using the fact that \(\mathds{E}X=\mu+H/g\) we then obtain an estimate for \(\mu\) in the form \(\tilde{\mu}=\bar{x}-\tilde{H}/\tilde{g}\). Further improvements on \(\tilde{\mu}\) are possible using autocorrelation.[8] The Fourier based method echos that of the PCH method in that it requires sufficiently small read noise to work. It also requires some implementation of peak detection and refines the initial estimates with nonlinear least squares. In fact, the PCH and Fourier methods are essentially the same method with PCH operating in the original data space and Fourier operating in the frequency space. What makes the Fourier method attractive as that it obtains estimates of each PCD parameter from a single sample and requires only detecting a single peak in the experimental PCH DFT (compare this to detecting many peaks with PCH). Since only a single peak is needed and the method is based on the DFT it is also easily automated and computationally inexpensive. Additionally, this method makes full use of the PCD model so that it can usually obtain lower uncertainty estimates of the conversion gain in comparison to the PT method. The major downside of this method is that the read noise needs to be small enough to guarantee the existence of the secondary peak. This ultimately limits the applicability of this method, like the PCH method, to the DSERN regime. Figure 5: Graph of \(|\hat{f}_{X}(\omega)|\) (blue) and its asymptotic approximation near \(\omega=g\) (purple) versus \(\omega\) showing the two most dominant peaks at \(\omega=0\) and \(\omega=\omega^{*}\). ### Nakamoto's Method In response to the PCH method, Nakamoto and Hotaka introduced a characterization technique based on the principle of Maximum Likelihood Estimation (MLE) that also takes advantage of the full PCD model [7]. To understand how this method works, let \(\mathbf{y}=\{y_{1},\ldots,y_{n_{2}}\}\) with \(y_{k}\sim\text{PCD}(0,g,\mu,\sigma^{2})\stackrel{{ d}}{{=}} \mathcal{N}(\mu,\sigma^{2})\) be a sample of data captured under dark conditions with short enough integration time so that dark current is negligible (\(H=0\)). 
Since the distribution of this data is normal, unbiased estimates for \(\mu\) and \(\sigma^{2}\) may be directly measured from the \(\mathbf{y}\)-sample via \[\tilde{\mu}=\frac{1}{n_{2}}\sum_{k=1}^{n_{2}}y_{k} \tag{20}\] and \[\tilde{\sigma}^{2}=\frac{1}{n_{2}-1}\sum_{k=1}^{n_{2}}(y_{k}-\tilde{\mu})^{2}, \tag{21}\] which are the sample mean and sample variance, respectively. Now consider capturing a second sample \(\mathbf{x}=\{x_{1},\ldots,x_{n_{1}}\}\) with \(x_{k}\sim\text{PCD}(H,g,\mu,\sigma^{2})\) for some choice of \(H>0\). Since \(\mathbb{E}x_{k}=\mu+H/g\) we can estimate the sample mean \(\bar{x}=\frac{1}{n_{1}}\sum_{k=1}^{n_{1}}x_{k}\) and then construct an estimator for the quanta exposure as a function of \(g\) in the form \[\tilde{H}(g)=g(\bar{x}-\tilde{\mu}). \tag{22}\] Then, a constrained likelihood function is made from these three estimates via \[L(g|\mathbf{x})=\prod_{k=1}^{n_{1}}f_{X}(x_{k}|\tilde{H}(g),g,\tilde{\mu}, \tilde{\sigma}^{2}) \tag{23}\] and an estimate for \(g\) is computed by maximizing this constrained likelihood function (or equivalently it's logarithm) \[\tilde{g}=\operatorname*{arg\,max}_{g}L(g|\mathbf{x}). \tag{24}\] An estimate for the quanta exposure follows by evaluating \(\tilde{H}=\tilde{H}(\tilde{g})\). Since closed-form solutions for this maximization are intractable, any number of numerical methods can be employed. Like the PCH and Fourier methods, Nakamoto's method incorporates the full description of the PCD model into the estimation procedure, which generally results in conversion gain estimates with less uncertainty than can be obtained with the traditional PT method. Furthermore, the \(H=0\) sample allows for direct measurement of \(\mu\) and \(\sigma^{2}\), which generally helps stabilize the conversion gain estimates when the read noise is large. Because peak detection is not part of the estimation procedure, Nakamoto's method shows promise in being a viable method for both the sub-electron and multi-electron read noise regimes. The most significant disadvantage of this method is the requirement to obtain a sample at \(H=0\), which may not be possible depending on the available integration times of the sensor and the magnitude of dark current. This requirement is particularly challenging when trying to characterize an entire sensor array, which will inevitably contain hot pixels that cannot achieve a quanta exposure near zero. Lastly, we note that this method requires two samples but does not make full use of the information contained in both samples. This can be seen by the fact that the \(\mathbf{x}\)-sample contains information about \(\mu\) and \(\sigma\); however, these parameters are estimated only from the \(\mathbf{y}\)-sample. ### PCH Expectation Maximization Algorithm The Photon Counting Histogram Expectation Maximization (PCH-EM) algorithm (in review at the time of writing) is the latest iteration of methods for performing sensor characterization based on the PCD model [8, 9, 17]. It is the first technique devised to compute simultaneous maximum likelihood estimates for all four PCD parameters using only a single sample of data. This method was inspired from the fact that when the electron numbers associated with each observation are known, that is, we have the _complete data_\((\mathbf{x},\mathbf{k})=\{(x_{1},k_{1}),\ldots,(x_{n_{1}},k_{n_{1}})\}\), closed-form maximum likelihood estimators for each PCD parameter are easily derived [8]. 
While the electron numbers cannot be directly observed, this fact motivates a latent (hidden) variables model of estimation, which is what the general expectation maximization algorithm provides. As such, PCH-EM is a specific implementation of the general EM algorithm with the PCD being the underlying distribution to be estimated. To perform PCH-EM, a random sample \(\mathbf{x}=\{x_{1},\ldots,x_{n_{1}}\}\) with \(x_{k}\stackrel{\text{iid}}{\sim}\mathrm{PCD}(H,g,\mu,\sigma^{2})\) is captured. Given an initial estimate of the parameters \(\theta_{0}=(H_{0},g_{0},\mu_{0},\sigma_{0}^{2})\) (obtained by one of the other methods, e.g., the Fourier or PT method), the PCH-EM algorithm iteratively updates the parameter estimates via the update equations [8] \[H_{t+1}=A_{t}, \tag{25a}\] \[g_{t+1}=\frac{B_{t}-H_{t+1}^{2}}{C_{t}-\bar{x}H_{t+1}}, \tag{25b}\] \[\mu_{t+1}=\bar{x}-\frac{H_{t+1}}{g_{t+1}}, \tag{25c}\] \[\sigma_{t+1}^{2}=\hat{x}-\frac{B_{t}-H_{t+1}^{2}}{g_{t+1}^{2}}, \tag{25d}\] where \(\bar{x}=\frac{1}{n_{1}}\sum_{k=1}^{n_{1}}x_{k}\) is the sample mean, \(\hat{x}=\frac{1}{n_{1}}\sum_{k=1}^{n_{1}}(x_{k}-\bar{x})^{2}\) is the sample variance, and \[A_{t}=\frac{1}{n_{1}}\sum_{n=1}^{n_{1}}\sum_{k=0}^{\infty}\gamma_{nk}^{(t)}k, \tag{26a}\] \[B_{t}=\frac{1}{n_{1}}\sum_{n=1}^{n_{1}}\sum_{k=0}^{\infty}\gamma_{nk}^{(t)}k^{2}, \tag{26b}\] \[C_{t}=\frac{1}{n_{1}}\sum_{n=1}^{n_{1}}x_{n}\sum_{k=0}^{\infty}\gamma_{nk}^{(t)}k, \tag{26c}\] where \[\gamma_{nk}^{(t)}=\frac{\frac{e^{-H_{t}}H_{t}^{k}}{k!}\phi(x_{n};\mu_{t}+k/g_{t},\sigma_{t}^{2})}{\sum_{\ell=0}^{\infty}\frac{e^{-H_{t}}H_{t}^{\ell}}{\ell!}\phi(x_{n};\mu_{t}+\ell/g_{t},\sigma_{t}^{2})}. \tag{27}\] The \(\gamma_{nk}^{(t)}\) are called _membership probabilities_ because they represent the probability of \(x_{n}\) belonging to the \(k\)th Gaussian component of the PCD given the current parameter estimates \(\theta_{t}\). Since PCH-EM is just a specific implementation of the more general Expectation Maximization (EM) algorithm, each iteration is guaranteed not to decrease the likelihood of the sample. The algorithm halts when a specified convergence criterion is met. In practical implementation, all of the series in the update equations can be truncated to finite sums by only considering the terms \(k\in\{F^{-1}(\epsilon),\ldots,F^{-1}(1-\epsilon)\}\), where \(F^{-1}\) is the Poisson(\(H_{t}\)) quantile function and \(\epsilon>0\) is a small positive number. PCH-EM has many positive characteristics: it provides maximum likelihood estimates of all the PCD parameters using a single sample of data, incorporates the full PCD model, and does not require numerical optimization, e.g. Newton iteration, to maximize the sample likelihood. It is also easily automated and computationally inexpensive, although not as inexpensive as traditional PT. Furthermore, because the general EM algorithm is so well studied, many extensions of PCH-EM are possible to improve the robustness of the algorithm and its estimates. The major downside of this method, which holds for all other methods excluding PT, is the requirement of starting points. Poor starting points can result in slow convergence or convergence to a local maximum while missing the global maximum of the sample likelihood function. While PCH-EM can suffer from the issue of local maxima, extensions of the algorithm using annealing are possible [18, 19, 20].
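A compact NumPy sketch of one possible implementation of the update equations (25)-(27) is given below; it is our own illustration rather than the reference implementation, and the demo parameters and starting points are assumptions.

```python
import numpy as np
from scipy import stats

def pch_em(x, H0, g0, mu0, var0, iters=50):
    """Sketch of the single-sample PCH-EM updates (25a)-(25d).

    x is a 1-D array of raw pixel values (DN); the starting points would
    normally come from the PT or Fourier method."""
    H, g, mu, var = H0, g0, mu0, var0
    xbar, xvar = x.mean(), x.var()            # biased sample variance, as in the text
    for _ in range(iters):
        # Truncate the Poisson support as suggested after Eq. (27).
        lo = int(stats.poisson.ppf(1e-8, H))
        hi = int(stats.poisson.ppf(1 - 1e-8, H))
        k = np.arange(lo, hi + 1)
        log_num = stats.poisson.logpmf(k, H) + stats.norm.logpdf(
            x[:, None], loc=mu + k / g, scale=np.sqrt(var))
        gamma = np.exp(log_num - log_num.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)         # membership probs, Eq. (27)
        A = (gamma @ k).mean()                            # Eq. (26a)
        B = (gamma @ k**2).mean()                         # Eq. (26b)
        C = (x * (gamma @ k)).mean()                      # Eq. (26c)
        H = A                                             # Eq. (25a)
        g = (B - H**2) / (C - xbar * H)                   # Eq. (25b)
        mu = xbar - H / g                                 # Eq. (25c)
        var = max(xvar - (B - H**2) / g**2, 1e-6)         # Eq. (25d), guarded
    return H, g, mu, var

# Demo on simulated DSERN data (same assumed acquisition model as before).
rng = np.random.default_rng(3)
H_true, g_true, mu_true, read_e = 1.5, 0.3, 300.0, 0.25
K = rng.poisson(H_true, 100_000)
x = np.round(mu_true + K / g_true + rng.normal(0.0, read_e / g_true, K.size))
print("estimates:", pch_em(x, H0=1.2, g0=0.28, mu0=299.0, var0=1.0))
print("truth:    ", (H_true, g_true, mu_true, (read_e / g_true) ** 2 + 1 / 12))
```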
### Two-Sample PCH-EM Algorithm Currently in development, PCH-EM2 is the two-sample generalization of PCH-EM that incorporates two samples taken at different \(H\) into a single likelihood function, thus offering both advantages and disadvantages similar to the single-sample PCH-EM approach. One significant benefit of PCH-EM2 is that it enables one to obtain low-uncertainty estimates of all four parameters by combining samples taken at two different \(H\)-values, as the uncertainty of estimates for each PCD parameter varies differently with \(H\). Therefore, PCH-EM2 is expected to be more accurate than PCH-EM. Further extensions of this method to an arbitrary number of samples is also possible. Additionally, the use of an extra sample stabilizes the algorithm in the presence of high read noise, making it a good candidate for general estimation procedure in the sub-electron and multi-electron read noise regimes. Unlike Nakamoto's method, PCH-EM2 can extract information about each parameter from both samples, making it theoretically more accurate. However, PCH-EM2, like other two-sample methods, is more sensitive to nonlinearity in comparison to single-sample methods because it requires the sensor to behave linearly over both samples instead of just one. ## 4 Comparison of Methods ### Design of Experiment To compare the uncertainty of each methods' conversion gain estimates, Monte Carlo experiments were performed. However, the four-dimensional parameter space of the PCD model made it challenging to fully explore in simulations. To reduce the area of exploration, the parameter \(\mu\) was set to zero for all simulation runs since it only shifts the PCD without changing its shape. Moreover, to avoid over-quantization, the conversion gain was fixed at \(g=\sigma_{R}/6\) for all runs, leaving only two dimensions \((\sigma_{R},H)\in\mathbb{R}_{>0}\times\mathbb{R}_{\geq 0}\) to consider. As the read noise surpasses about \(0.5\,e\)-, peak detection methods struggle, so the read noise interval was limited to \(\sigma_{R}\in(0,2)\). Similarly, the Poisson distribution's dynamic changes occur mainly when \(H\) is small, so the quanta exposure was limited to \(H\in(0,10)\). Using these constraints, a grid of 64 \(\sigma_{R}\)-values on \((0.05,2)\) and 32 \(H\)-values on \((0.05,10)\) was created, which paired with \(\mu=0\) and \(g=\sigma_{R}/6\), resulted in a total of 2048 points in the PCD parameter space to simulate data on. Once the desired parameters were selected, the following stage in the experimental design involved selecting the kinds of data to simulate along with their corresponding sample sizes. Six methods were available, three of which (PCH, Fourier, PCH-EM) necessitated just one sample, while the remaining three (PT, Nakamoto, PCH-EM2) required two samples. Specifically, Nakamoto's approach required two samples, with one of them being at \(H=0\). To accommodate all methods two types of data were generated including _dark_ samples of the form \(\mathbf{y}=\{y_{1},\ldots,y_{n_{2}}\}\) with \(y_{k}\sim\mathrm{PCD}(0,g,\mu,\sigma^{2})\) and _illuminated_ samples of the form \(\mathbf{x}=\{x_{1},\ldots,x_{n_{1}}\}\) with \(x_{k}\sim\mathrm{PCD}(H,g,\mu,\sigma^{2})\). Observations in each sample were generated according to the model (1). The uncertainty in any method's conversion gain estimates will be a function of the parameters. 
For this reason, the optimal sample size pairs in (8) for \(\mathtt{acv}_{0}=0.015\) and \(\zeta=(1+H/\sigma_{R}^{2})^{-1}\) were chosen to make the uncertainty in the PT conversion gain estimates mostly independent of the parameters. In this way, PT would be a reference to compare the uncertainty of the five other methods against. The specified value of \(\mathtt{acv}_{0}=0.015\) means that the PT conversion gain estimates should have an uncertainty of approximately \(1.5\%\) across all parameters. Figure 6 shows the total sample size \(n_{1}+n_{2}\) used as a function of the parameters. Parameters where \(n_{1}>10^{5}\) (seen as the white region in Figure 6) were ignored to make sure the experiment did not take too long to complete. To ensure a fair comparison of each method, both the dark and illuminated data were made available to all six methods, even if the method naturally used only one sample. This way, the two-sample methods did not have access to more information than the one-sample methods. The two-sample methods required no changes to their approach since they inherently incorporated both samples into their estimation procedure. However, for the one-sample methods, the information in the dark sample was integrated into the estimation procedure by providing the starting points \(g_{0}=(\bar{x}-\bar{y})/(\hat{x}-\hat{y})\), \(\mu_{0}=\bar{y}\), \(\sigma_{0}^{2}=\hat{y}\), and \(H_{0}=g_{0}(\bar{x}-\mu_{0})\), where \(\bar{x}\) represents the sample mean and \(\hat{x}\) represents the sample variance for the \(x\)-data (and likewise for the \(y\)-data). With this step, the experimental design phase was concluded. ### Results The experiment was executed on MATLAB code containing two nested loops. In the outer loop, each iteration consisted of selecting the next set of PCD parameters and associated sample sizes. For each iteration of the outer loop, the inner loop was repeated 512 times, where in each of the 512 iterations a \(\mathbf{x}\) and \(\mathbf{y}\) sample were generated and then supplied to each method so that the conversion gain could be estimated. This subsequently esulted in 512 conversion gain estimates \(\tilde{g}_{k}\) for each method, which were then used to compute the normalized Root Mean Squared Error (RMSE) \[\text{RMSE}(\theta)=\left(\frac{1}{512}\sum_{k=1}^{512}(1-\tilde{g}_{k}/g)^{2} \right)^{1/2}. \tag{28}\] As a result of the Monte Carlo experiment, a \(64\times 32\) array of normalized RMSE values for each method was generated. Figure 7 shows the Monte Carlo estimated RMSE for each method as a function of \(\sigma_{R}\) and \(H\). The first row is comprised of the one-sample methods with the second containing only the two-sample methods. The black region corresponds to parameters where data was not simulated due to the sample sizes becoming too large. Several observations can be derived from the Figure 7. Initially, it should be acknowledged that the PT estimates' RMSE remained relatively stable at approximately RMSE \(\approx 0.015\). This value corresponds to the \(\texttt{acv}_{0}=0.015\) uncertainty specification used for the optimal sample sizes. Therefore, the optimal sample sizes effectively controlled the PT estimate uncertainty for the parameters considered. Additionally, all five methods that utilized the full description of the PCD in the estimation process showed a region below \(\sigma_{R}\approx 0.42,e\)- where their conversion gain estimates' uncertainty was generally lower than PT (blue strip). 
This finding was due to the PCD-inclusive techniques utilizing the data's extra structure at low read noise values, which PT ignores by only incorporating the first two moments. Regarding the one-sample methods, the PCH and Fourier techniques outperformed PT in terms of RMSE below approximately \(\sigma_{R}\approx 0.42,e\)-; however, their performance degraded above this read noise value due to their reliance on peak detection. Unlike the PCH and Fourier methods, PCH-EM did not necessitate observing peaks and could still estimate the conversion gain at higher read noise values, bridging the gap into the multi-electron read noise regime for one-sample methods. For the two-sample methods, both Nakamoto's method and PCH-EM2 produced conversion gain uncertainties comparable to PT's in the \(\sigma_{R}>0.42,e\)- regime. This finding suggests that once the PCD's structure is lost due to increasing read noise, there is only a small advantage to using the full PCD model in the estimation process. Overall, PCH-EM performed the best for one-sample methods, while PCH-EM2 outperformed PCH-EM and showed potential as a general estimation technique in the sub-electron and multi-electron read noise regimes. Figure 6: Total number of samples used for each set of PCD parameters. The black region corresponds to points where \(n_{1}>10^{5}\), which were ignored in the simulation. ## 5 Discussion and Future Work This study presented an overview of the PCD model as a universal framework for describing all currently available methods of conversion gain estimation in the sub-electron and multi-electron read noise regimes. By unifying the notation and model, it became possible to compare and contrast the differences between these methods. Monte Carlo experiments revealed that utilizing the full PCD model in the estimation procedure produced conversion gain estimates with less uncertainty than the traditional PT method, especially when the read noise is below \(\sigma_{R}\approx 0.42\,e\)-. Notably, the PCH-EM2 algorithm outperformed the time tested PT method below this threshold, while its performance merged with that of PT at higher read noise levels. This suggests that PCH-EM2 could potentially replace PT as a general estimation procedure. Future research will involve developing and implementing a multi-sample (\(\geq 2\) sample) PCH-EM method and exploring the use of annealing to enhance the algorithm's robustness to poor starting points. ###### Acknowledgements. The authors wish to express their gratitude to Nicholas Shade at Dartmouth University for his feedback on the implementation of the PCH method. The authors also would like to acknowledge and thank Katsuhiro Nakamoto from Hamamatsu Photonics for his help in implementing his method. Their contributions have been invaluable to the research, and the authors are appreciative of their assistance.
2310.16834
Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution
Despite their groundbreaking performance for many generative modeling tasks, diffusion models have fallen short on discrete data domains such as natural language. Crucially, standard diffusion models rely on the well-established theory of score matching, but efforts to generalize this to discrete structures have not yielded the same empirical gains. In this work, we bridge this gap by proposing score entropy, a novel loss that naturally extends score matching to discrete spaces, integrates seamlessly to build discrete diffusion models, and significantly boosts performance. Experimentally, we test our Score Entropy Discrete Diffusion models (SEDD) on standard language modeling tasks. For comparable model sizes, SEDD beats existing language diffusion paradigms (reducing perplexity by $25$-$75$\%) and is competitive with autoregressive models, in particular outperforming GPT-2. Furthermore, compared to autoregressive models, SEDD generates faithful text without requiring distribution annealing techniques like temperature scaling (around $6$-$8\times$ better generative perplexity than un-annealed GPT-2), can trade compute for quality (similar quality with $32\times$ fewer network evaluations), and enables controllable infilling (matching nucleus sampling quality while enabling other strategies besides left-to-right prompting).
Aaron Lou, Chenlin Meng, Stefano Ermon
2023-10-25T17:59:12Z
http://arxiv.org/abs/2310.16834v3
# Discrete Diffusion Language Modeling by Estimating the Ratios of the Data Distribution ###### Abstract Despite their groundbreaking performance for many generative modeling tasks, diffusion models have fallen short on discrete data domains such as natural language. Crucially, standard diffusion models rely on the well-established theory of score matching, but efforts to generalize this to discrete structures have not yielded the same empirical gains. In this work, we bridge this gap by proposing score entropy, a novel discrete score matching loss that is more stable than existing methods, forms an ELBO for maximum likelihood training, and can be efficiently optimized with a denoising variant. We scale our Score Entropy Discrete Diffusion models (SEDD) to the experimental setting of GPT-2, achieving highly competitive likelihoods while also introducing distinct algorithmic advantages. In particular, when comparing similarly sized SEDD and GPT-2 models, SEDD attains comparable perplexities (normally within \(+10\%\) of and sometimes outperforming the baseline). Furthermore, SEDD models learn a more faithful sequence distribution (around \(4\times\) better compared to GPT-2 models with ancestral sampling as measured by large models), can trade off compute for generation quality (needing only \(16\times\) fewer network evaluations to match GPT-2), and enables arbitrary infilling beyond the standard left to right prompting. ## 1 Introduction Many recent advances in deep learning have centered around generative modeling. In this setting, neural networks learn to generate new samples given unstructured data. Remarkably, combining the powerful generalization of neural networks with this rather straightforward objective has led to unparalleled capabilities. For example, modern "generative AI" systems are able to generate images from arcane descriptions (Ramesh et al., 2022) and answer complex queries (Brown et al., 2020). So far, the techniques used for these advances have largely been bifurcated according to the structure of the data. For computer vision data, where one can faithfully dequantize into continuous space, diffusion modeling (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020) is the core paradigm undergriding contemporary methods. Conversely, for natural language data, which is far more discrete, autoregressive modeling is indispensable for most problems (Radford et al., 2019). Despite this field-level divide, researchers have attempted to apply diffusion models to language modeling tasks (Li et al., 2022; Austin et al., 2021). This can be done by either designing a special dequantization for the discrete tokens (Dieleman et al., 2022) or by directly modeling a discrete diffusion process on said tokens (He et al., 2022; Zheng et al., 2023). However, despite considerable efforts, no such method has yet yielded a diffusion model scheme that is on par with or provides a clear benefit over standard autoregressive training. In our work, we close this gap for GPT-2 scale experiments (Radford et al., 2019), demonstrating, for the first time, a non-autoregressive modeling technique that is able to achieve similar perplexity scores as autoregressive modeling at a modern scale. Our approach has the added benefits of producing better samples from the learned distribution, allowing for a compute-quality tradeoff, and enabling prompting with arbitrary positions. 
Key to this success is score entropy, a novel training objective for discrete-space diffusion models that is analogous to score matching for continuous-space diffusion models (Hyvarinen, 2005; Song and Ermon, 2019). Our contributions can be summarized as follows: 1. We introduce score entropy, a discrete score matching loss that can be used to trained discrete diffusion models. Score entropy learns the concrete scores (analogous to the score function in standard diffusion) of the perturbed data distribution in a scalable and principled manner. 2. We use the modeled scores to develop several enhanced sampling methods. In particular, we derive a score-based ancestral sampling method and a general infilling procedure. 3. We combine our theoretical advances with architectural improvements to scale our Score Entropy Discrete Diffusion models (SEDD) to GPT-2 model sizes. As previously mentioned, SEDD is comparable to GPT-2 for perplexities but also offer several distinct advantages for high quality, fast, and controllable generation. ## 2 Preliminaries Diffusion models learn to generate data by reversing a Markov process that takes the data distribution \(x_{0}\sim p_{0}\) to a simple noise distribution \(x_{T}\sim p_{T}\)(Sohl-Dickstein et al., 2015). We want to reverse this process, allowing us to sample from \(p_{T}\) to generate samples from \(p_{0}\), but the reverse transitions \(p(x_{t-\Delta t}|x_{t})\) are difficult to approximate since they are nontrivial densities. However, as \(\Delta t\to 0\), the concept of the "score function" emerges to enable a more faithful modeling paradigm. Learning this quantity is well-established for continuous spaces but remains an open problem for discrete spaces. ### Continuous Diffusion Models When the data support is \(\mathbb{R}^{d}\), one constructs the Markov process by perturbing data points \(\mathbf{x}_{0}\sim p_{0}\) with a stochastic process defined by the stochastic differential equation (SDE) (Song et al., 2020): \[\mathrm{d}\mathbf{x}_{t}=f(\mathbf{x}_{t},t)\mathrm{d}t+g(t)d\mathbf{B}_{t} \tag{1}\] The perturbed densities \(p_{t}\) of the points \(\mathbf{x}_{t}\) evolve according to the corresponding Fokker-Planck partial differential equation and approaches a Gaussian limit distribution \(\pi\approx p_{T}\). A famous result by Anderson constructs the reverse of this stochastic differential equation (Anderson, 1982): \[\mathrm{d}\mathbf{x}_{t}=\left(f(\mathbf{x}_{t},t)-g(t)^{2}\frac{\nabla_{x}p_ {t}(\mathbf{x}_{t})}{p_{t}(\mathbf{x}_{t})}\right)\mathrm{d}t+g(t)\mathrm{d} \mathbf{B}_{t} \tag{2}\] which takes \(p_{T}\) back to \(p_{0}\). One approximates this process by learning the unknown \(\frac{\nabla_{x}p_{t}(\mathbf{x}_{t})}{p_{t}(\mathbf{x}_{t})}\) (normally written as \(\nabla_{x}\log p_{t}(\mathbf{x}_{t})\)) with a neural network \(\mathbf{s}_{\theta}(\mathbf{x}_{t},t)\). This can be done optimizing the well known score matching (shown below) jointly over all \(t\). \[\mathcal{L}_{\mathrm{SM}}=\frac{1}{2}\mathbb{E}_{\mathbf{x}\sim p_{t}}\left\| \mathbf{s}_{\theta}(\mathbf{x},t)-\frac{\nabla_{x}p_{t}(\mathbf{x})}{p_{t}( \mathbf{x})}\right\|^{2} \tag{3}\] Score matching has many equivalent forms such as the implicit (Song et al., 2019) and denoising score matching losses (Vincent, 2011) that remove the unknown term \(\frac{\nabla_{x}p_{t}(\mathbf{x})}{p_{t}(\mathbf{x})}\). 
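As a small numerical illustration (ours, not from the cited works) of why the denoising form removes the unknown term: for a Gaussian perturbation kernel, the pointwise minimizer of the denoising objective is \(\mathbb{E}[\nabla_{x}\log p(x|x_{0})\mid x]\), which coincides with the score of the perturbed marginal. The sketch below checks this identity for a two-point data distribution.

```python
import numpy as np

# Data distribution: x0 = -1 or +1 with equal probability; perturbation kernel
# p(x | x0) = N(x; x0, s^2), so the perturbed marginal is a two-component
# Gaussian mixture whose score can be written down directly.
s = 0.7
x0_vals = np.array([-1.0, 1.0])
prior = np.array([0.5, 0.5])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (np.sqrt(2 * np.pi) * s)

x = np.linspace(-2.5, 2.5, 9)

# Posterior weights p(x0 | x) over the two components.
w = prior * gauss(x[:, None], x0_vals, s)
w /= w.sum(axis=1, keepdims=True)

# Denoising-optimal value at x: E[ d/dx log p(x|x0) | x ] = E[(x0 - x)/s^2 | x].
dsm_optimum = (w @ x0_vals - x) / s**2

# Score of the perturbed marginal, computed directly from the mixture.
p = prior @ gauss(x[:, None], x0_vals, s).T
dp = prior @ (gauss(x[:, None], x0_vals, s) * (x0_vals - x[:, None]) / s**2).T
print(np.allclose(dsm_optimum, dp / p))   # True: the two expressions coincide
```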
With a learned score, the diffusion model can sample \(x_{T}\sim\pi\) and solve the parameterized reverse SDE \[\mathrm{d}\mathbf{x}_{t}=\left(f(\mathbf{x}_{t},t)-g(t)^{2}\mathbf{s}_{\theta}(\mathbf{x}_{t},t)\right)\mathrm{d}t+g(t)\mathrm{d}\mathbf{B}_{t} \tag{4}\] to approximately sample from \(p_{0}\). Importantly, when the score matching losses are jointly optimized with a relative weighting of \(g(t)^{2}\) for each \(t\), one is able to compute the ELBO for training and evaluating likelihoods (Song et al., 2021; Kingma et al., 2021; Huang et al., 2021). This so-called "score-parameterization" of diffusion models has been essential for the recent success of continuous space diffusion models. In particular, modeling the score function (up to a scaling) has consistently been shown to result in both superior generation quality and improved likelihood values (Ho et al., 2020; Karras et al., 2022). ### Discrete Diffusion Models We now consider a discrete data support \(\{1,\ldots,N\}\). Here, the probability distributions \(p_{t}:\mathcal{X}\to\mathbb{R}\) instead become probability mass vectors \(p_{t}\in\mathbb{R}^{N}\) that are positive and sum to \(1\), and the diffusion is best described by the discrete analogue of the Fokker-Planck partial differential equation acting on \(p_{t}\). In particular, we evolve the data distribution \(p_{0}\) according to the equation (Campbell et al., 2022) \[\frac{dp_{t}}{dt}=Q_{t}p_{t}\quad p_{0}=p_{\rm data} \tag{5}\] where \(Q_{t}\) are the diffusion matrices, which are required to have non-negative off-diagonal entries and columns that sum to zero (so that the rate \(\frac{dp_{t}}{dt}\) sums to \(0\), meaning \(p_{t}\) does not gain or lose total mass). This process is realized at the sample level by the transition densities that are defined by the columns of \(Q_{t}\): \[p(x_{t+\Delta t}=y|x_{t}=x)=\delta_{xy}+Q_{t}(y,x)\Delta t+O(\Delta t^{2}) \tag{6}\] which enables an Euler-Maruyama type sampling algorithm that steps according to \(\Delta t\). For certain \(Q_{t}\), \(p_{T}\) approaches a limiting distribution \(\pi\) for large \(T\). Additionally, this Markov process has a well-known reversal (Kelly, 1980; Sun et al., 2022) given by another diffusion matrix \(\overline{Q}_{t}\): \[\frac{dp_{T-t}}{dt}=\overline{Q}_{T-t}p_{T-t}\quad\overline{Q}_{t}(y,x)=\begin{cases}\frac{p_{t}(y)}{p_{t}(x)}Q_{t}(x,y)&x\neq y\\ -\sum_{z\neq x}\overline{Q}_{t}(z,x)&x=y\end{cases} \tag{7}\] Note that the reverse process again depends on \(p_{t}\), which is defined by the data distribution \(p_{0}\) and the diffusion \(Q_{t}\). This is analogous to the reverse SDE (Equation 2), with the ratio \(\frac{p_{t}(y)}{p_{t}(x)}\) generalizing the score function1. Similar to the continuous case, by approximating this score function, one can generate samples by sampling from \(\pi\) and simulating a parameterized reverse process. However, there is still no consensus on how to learn these ratios (see Section 6). Footnote 1: The gradient operator for discrete structures is (up to some scaling) defined for pairs \(x\neq y\) by \(\nabla f(xy):=f(y)-f(x)\). The score function would generalize to the normalized gradients \(\frac{\nabla p(xy)}{p(x)}=\frac{p(y)}{p(x)}-1\). **Concrete Score Matching.** Meng et al. (2022) take a score matching view and group \(\left[\frac{p_{t}(y)}{p_{t}(x)}\right]_{y\neq x}\) for each value \(x\), forming the _concrete score_.
By generalizing the standard score matching loss, they learn \(s_{\theta}(x,t)\approx\left[\frac{p_{t}(y)}{p_{t}(x)}\right]_{y\neq x}\) with a discrete generalization of the score matching loss: \[\mathcal{L}_{\rm CSM}=\frac{1}{2}\mathbb{E}_{x\sim p_{t}}\left[\sum_{y\neq x} \left(s_{\theta}(x_{t},t)_{y}-\frac{p_{t}(y)}{p_{t}(x)}\right)^{2}\right] \tag{8}\] Due to its similarities with standard score matching, this approach is rather promissing. In particular, \(s_{\theta}(x,t)\) is a general model and one recovers the true score given infinite data. However, in practice \(\mathcal{L}_{\rm CSM}\) is based on the \(\ell^{2}\) loss, which is only suitable for real value inputs. Both \(s_{\theta}(x,t)_{j}\) and \(\frac{p(j)}{p(t)}\) are nonnegative, and this mismatch leads to suboptimal gradient behavior. As an example, \(s_{\theta}(i,t)_{j}=0\) and \(0.2\) induce equal loss signals when \(\frac{p(j)}{p(i)}=0.1\), but the \(0\) value is much worse: dropping the support of the data distribution induces an infinite KL divergence. As such, concrete score matching has seen limited success even in the non diffusion modeling regime. ## 3 Score Entropy Discrete Diffusion Models In this section, we introduce score entropy, our proposed loss. Similar to concrete score matching, we model the concrete score \(s_{\theta}(x,t)\approx\left[\frac{p_{t}(y)}{p_{t}(x)}\right]_{y\neq x}\). However, we design this loss to be compatible with the modeled values and the discrete diffusion, necessitating a significantly different expression. **Definition 3.1**.: _We define the **score entropy** for a discrete distribution \(p\), weights \(w_{xy}\geq 0\) and a score network \(s_{\theta}(x)_{y}\) as_ \[\mathcal{L}_{\rm SE}=\mathbb{E}_{x\sim p}\left[\sum_{y\neq x}w_{xy}\left(s_{ \theta}(x)_{y}-\frac{p(y)}{p(x)}\log s_{\theta}(x)_{y}+K\left(\frac{p(y)}{p(x) }\right)\right)\right] \tag{9}\] _where \(K(a)=a(\log a-1)\) is a normalizing constant function._ **Remark**.: _Score entropy is a natural extension of the cross-entropy loss function to general positive values (as opposed to probabilities), inspiring the name. The weights \(w_{xy}\) are similarity weights between \(x\) and \(y\) are used primarily when combining score entropy with diffusion models._ While this expression is more complex than the standard score matching variants, we show that the score entropy satisfies several desiderata for a discrete diffusion training objective: ### Score Entropy Properties **First,** score entropy is a suitable loss function that recovers the ground truth concrete score. **Proposition 3.2** (Consistency of Score Entropy).: _Suppose \(p\) is fully supported and \(w_{xy}>0\). As the number of samples and model capacity approaches \(\infty\), the optimal \(\theta^{*}\) that minimizes Equation 9 satisfies \(s_{\theta^{*}}(x)_{y}=\frac{p(y)}{p(x)}\). Furthermore, \(\mathcal{L}_{\mathrm{SE}}\) will be \(0\) at \(\theta^{*}\)._ **Second,** score entropy directly improves upon concrete score matching by rescaling problematic gradients. For the weights \(w_{xy}=1\), \(\nabla_{s(x)_{y}}\mathcal{L}_{\mathrm{SE}}=\frac{1}{s(x)_{y}}\nabla_{s(x)_{y} }\mathcal{L}_{\mathrm{CSM}}\), so the gradient signals for each pair \((x,y)\) are scaled by a factor of \(s(x)_{y}\) as a normalization component. As such, this forms a natural log-barrier which keeps our \(s_{\theta}\) valid, as shown in Figure 1. **Third,** similar to concrete score matching, score entropy can be made computationally tractable by removing the unknown \(\frac{p(y)}{p(x)}\). 
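Before turning to those tractable forms, a toy-scale numerical check (our own sketch, with an arbitrary five-state distribution and unit weights) of Definition 3.1 and Proposition 3.2: minimizing the score entropy over an unconstrained positive score table recovers \(s_{\theta^{*}}(x)_{y}=p(y)/p(x)\) and drives the loss to zero.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy distribution over N = 5 states with unit weights w_xy = 1.
N = 5
p = rng.random(N)
p /= p.sum()

def K(a):
    return a * (np.log(a) - 1.0)

def score_entropy(s):
    """L_SE of Eq. (9) for a full score table s[x, y] (diagonal ignored)."""
    total = 0.0
    for x in range(N):
        for y in range(N):
            if y == x:
                continue
            r = p[y] / p[x]
            total += p[x] * (s[x, y] - r * np.log(s[x, y]) + K(r))
    return total

# Minimize over all entries, log-parameterized so the scores stay positive.
objective = lambda z: score_entropy(np.exp(z.reshape(N, N)))
res = minimize(objective, np.zeros(N * N), method="L-BFGS-B")
s_opt = np.exp(res.x.reshape(N, N))

true_ratios = p[None, :] / p[:, None]
off_diag = ~np.eye(N, dtype=bool)
print("max |s* - p(y)/p(x)| :", np.abs(s_opt - true_ratios)[off_diag].max())
print("L_SE at the optimum  :", res.fun)   # approximately 0, as Proposition 3.2 states
```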
There are two alternative forms, the first of which is analogous to the implicit score matching loss (Hyvarinen, 2005): **Proposition 3.3** (Implicit Score Entropy).: \(\mathcal{L}_{\mathrm{SE}}\) _is equal up to a constant independent of \(\theta\) to the implicit score entropy_ \[\mathcal{L}_{\mathrm{ISE}}=\mathbb{E}_{x\sim p}\left[\sum_{y\neq x}\left(w_{xy}s_{\theta}(x)_{y}-w_{yx}\log s_{\theta}(y)_{x}\right)\right] \tag{10}\] We need to evaluate \(s(y)\) for all \(y\), which is intractable, so one must resort to sampling \(y\) uniformly. This is analogous to the additional variance introduced by the Hutchinson trace estimator (Hutchinson, 1989) for sliced score matching (Song et al., 2019) and, in practice, renders \(\mathcal{L}_{\mathrm{ISE}}\) unsuitable for high dimensional problems. Therefore, we work with the score entropy variant of the more empirically practical denoising score matching loss (Vincent, 2011): **Theorem 3.4** (Denoising Score Entropy).: _Suppose \(p\) is a perturbation of a base density \(p_{0}\) under a transition kernel \(p(\cdot|\cdot)\), i.e., \(p(x)=\sum_{x_{0}}p(x|x_{0})p_{0}(x_{0})\). Then \(\mathcal{L}_{\mathrm{SE}}\) is equivalent (up to a constant independent of \(\theta\)) to the denoising score entropy_ \[\mathcal{L}_{\mathrm{DSE}}=\mathbb{E}_{x_{0}\sim p_{0},x\sim p(\cdot|x_{0})}\left[\sum_{y\neq x}w_{xy}\left(s_{\theta}(x)_{y}-\frac{p(y|x_{0})}{p(x|x_{0})}\log s_{\theta}(x)_{y}+K\left(\frac{p(y|x_{0})}{p(x|x_{0})}\right)\right)\right] \tag{11}\] \(\mathcal{L}_{\mathrm{DSE}}\) is scalable since it only requires the evaluation of one \(s_{\theta}\), namely \(s_{\theta}(x)\). It is also particularly suitable for discrete diffusion since the intermediate densities \(p_{t}\) are all perturbations of the base \(p_{0}\). In particular, for SEDD, we sample data points \(x_{0}\sim p_{\mathrm{data}}\), perturb with the forward diffusion transition \(p_{t|0}(\cdot|x_{0})\) to sample \(p_{t}\), and then train with \(\mathcal{L}_{\mathrm{DSE}}\) using the transition densities \(p_{t|0}(\cdot|x_{0})\). ### Likelihood Bound For Score Entropy Discrete Diffusion **Fourth,** the score entropy can be used to define an ELBO for likelihood-based training and evaluation. Figure 1: The graphs of \(\mathcal{L}_{\mathrm{CSM}}\) versus \(\mathcal{L}_{\mathrm{SE}}\) for a ground truth score of \(0.2\). The score entropy loss respects nonnegativity. **Definition 3.5**.: _For our time dependent score network \(s_{\theta}(\cdot,t)\), the parameterized reverse matrix is \(\overline{Q}_{t}^{\theta}(y,x)=\begin{cases}s_{\theta}(x,t)_{y}Q_{t}(x,y)&x\neq y\\ -\sum_{z\neq x}\overline{Q}_{t}^{\theta}(z,x)&x=y\end{cases}\) found by replacing the ground truth scores in Equation 7. Our parameterized densities \(p_{t}^{\theta}\) thus satisfy_ \[\frac{dp_{T-t}^{\theta}}{dt}=\overline{Q}_{T-t}^{\theta}p_{T-t}^{\theta}\quad p_{T}^{\theta}=\pi\approx p_{T} \tag{12}\] The log likelihood of data points can be bounded with an ELBO that depends only on the rate matrices (Campbell et al., 2022).
Interestingly, this becomes our score entropy loss: **Theorem 3.6** (Likelihood Training and Evaluation).: _For the diffusion and forward probabilities defined above, we can upper bound the log-likelihood of individual data points_ \[-\log p_{0}^{\theta}(x_{0})\leq\mathcal{L}_{\mathrm{DWDSE}}(x_{0})+D_{KL}(p_{ T|0}(\cdot|x_{0})\parallel\pi) \tag{13}\] _where \(\mathcal{L}_{\mathrm{DWDSE}}(x_{0})\) is the **diffusion weighted denoising score entropy** for data point \(x_{0}\)_ \[\int_{0}^{T}\mathbb{E}_{x_{t}\sim p_{t|0}(\cdot|x_{0})}\sum_{y\neq x_{t}}Q_{t }(x_{t},y)\left(s_{\theta}(x_{t},t)_{y}-\frac{p_{t|0}(y|x_{0})}{p_{t|0}(x_{t}| x_{0})}\log s_{\theta}(x_{t},t)_{y}+K\left(\frac{p_{t|0}(y|x_{0})}{p_{t|0}(x_{t}| x_{0})}\right)\right)dt \tag{14}\] This means that, with a particular diffusion-based weighting scheme for \(\mathcal{L}_{\mathrm{DSE}}\) in the form of \(\mathcal{L}_{\mathrm{DWDSE}}\), our original training setup becomes maximizes likelihood training. We an also report an upper bound on \(-\log p_{0}^{\theta}(x_{0})\) for evaluation purposes. ### Practical Implementation for Language Modeling **Fifth.** the score entropy can be scaled to high dimensional tasks. In practice, our set \(\{1,\ldots,N\}\) factorizes into sequences \(\{1,\ldots,n\}^{d}\) (e.g. sentences of tokens or image pixel values) \(\mathbf{x}=x^{1}\ldots x^{d}\). To work with this factorization, our transition matrix \(Q_{t}^{\mathrm{seq}}\) instead perturbs tokens independently with a matrix \(Q_{t}^{\mathrm{token}}\) acting on each component \(x^{i}\in\{1,\ldots,n\}\). The transition densities directly factorizes \[p_{t|0}(\mathbf{y}|\mathbf{x})=\prod_{i=1}^{d}p_{t|0}^{\mathrm{token}}(y^{i}| x^{i})\quad\frac{dp_{t}^{\mathrm{token}}}{dt}=Q_{t}^{\mathrm{token}}p_{t}^{ \mathrm{token}} \tag{15}\] The full sequence transition matrix \(Q_{t}^{\mathrm{seq}}\) is mostly \(0\) except when the indexing sequences differ at one position (e.g. \(x^{1}\ldots x^{i}\ldots x^{d}\) and \(x^{1}\ldots\widehat{x}^{i}\ldots x^{d}\) ). By the loss weighting of \(\mathcal{L}_{\mathrm{DWDSE}}\), we only need to model and learn the ratios between two sequences that differ at one position. This can be modeled similar to non-autoregressive language modeling tasks with our score network \(s_{\theta}(\cdot,t):\{1,\ldots,n\}^{d}\rightarrow\mathbb{R}^{d\times n}\) where \[(s_{\theta}(x^{1}\ldots x^{i}\ldots x^{d},t))_{i,y}\approx\frac{p_{t}(x^{1} \ldots\widehat{x}^{i}\ldots x^{d})}{p_{t}(x^{1}\ldots x^{i}\ldots x^{d})} \tag{16}\] To efficiently compute the other parts of \(\mathcal{L}_{\mathrm{DWDSE}}\), we need to compute the (token) forward transitions \(p_{t|0}(x_{t}^{j}|x_{0}^{i})\) We follow previous convention and define \(Q_{t}^{\mathrm{token}}=\sigma(t)Q\) for a fixed graph Laplacian-based \(Q\) and a noise level \(\sigma\). If we define \(\overline{\sigma}(t)=\int_{0}^{t}\sigma(s)ds\) as the total noise level, the forward densities thus satisfy \[p_{t}^{\mathrm{token}}=\exp\left(\overline{\sigma}(t)Q\right)p_{0}^{\mathrm{ token}}\quad p_{t|0}^{\mathrm{token}}(\cdot|x)=x\text{-th column of }\exp\left(\overline{\sigma}(t)\cdot Q\right) \tag{17}\] To scale to GPT-2 experiments (where \(d=1024\), \(n=50257\), and the batch size is \(64\) per GPU), there are some practical consequences that render most \(Q\) unusable. In particular, one is not able to store all edge weights \(Q_{t}(i,j)\) (since this takes around \(20\) GB of GPU memory and is extremely slow to access) used to compute \(\mathcal{L}_{\mathrm{DWDSE}}\). 
Furthermore, one must be able to compute the columns \(\exp(\overline{\sigma}(t)\cdot Q)\) to get the transition ratios, but again one can't directly store all of them in memory. We use two standard matrices with special structures that sidestep the above issues. They arise, respectively, from considering a fully connected graph structure and the MASK token used in models such as BERT (Devlin et al., 2019): \[Q^{\mathrm{uniform}}=\mathbb{1}-N\mathbb{I}_{N}\in\mathbb{R}^{N\times N}\quad Q^ {\mathrm{absorb}}=\begin{bmatrix}-1&0&\cdots&0&0\\ 0&-1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-1&0\\ 1&1&\cdots&1&0\end{bmatrix}\in\mathbb{R}^{(N+1)\times(N+1)} \tag{18}\] Notably, with such a structured \(Q\), one can compute all the values in \(\mathcal{L}_{\mathrm{DWDSE}}\) quickly without much memory overhead. As such, our score entropy training iterations are about as quick and use a similar amount of memory as a standard autoregressive model training iteration. ## 4 Simulating Reverse Diffusion with Concrete Scores Given our scores \(s_{\theta}\), we now derive various strategies for simulating the (factorized) reverse diffusion process \(\mathbf{x}_{t}=x_{t}^{1}x_{t}^{2}\ldots x_{t}^{d}\sim p_{t}\). Notably, the additional information that we gain from \(s_{\theta}\) being an approximate ratio of \(p_{t}\) can be used to enhance the sampling process. ### Time-Reversal Strategies To simulate the diffusion in Definition 3.5, one may be tempted to use the Euler strategy from Equation 6. However, as noted in (Campbell et al., 2022), this is inefficient because the structure of \(Q_{t}^{\mathrm{seq}}\) means we can only alter one token per step. Instead, a natural alternative has been to use \(\tau\)-leaping (Gillespie, 2001) to simultaneously step through all states at once \(p_{t-\Delta t|t}(\mathbf{x}_{t-\Delta t}|\mathbf{x}_{t})=\prod_{i=1}^{n}p^{i}( x_{t-\Delta t}^{i}|\mathbf{x}_{t}^{i})\) \[p^{i}(x_{t-\Delta t}^{i}|\mathbf{x}_{t})=\begin{cases}\Delta t\cdot Q_{t}^{ \mathrm{token}}(x_{t}^{i},x_{t-\Delta t}^{i})s_{\theta}(\mathbf{x}_{t},t)_{i, x_{t-\Delta t}^{i}}&x_{t-\Delta t}^{i}\neq x_{t}^{i}\\ 1-\Delta t\sum_{y\neq x_{t}^{i}}Q_{t}^{\mathrm{token}}(x_{t}^{i},y)s_{\theta }(\mathbf{x}_{t},t)_{i,y}&x_{t-\Delta t}^{i}=x_{t}^{i}\end{cases} \tag{19}\] where the probabilities are clipped and normalized to account for discretization error. However, this procedure is agnostic to the probabilistic information of \(s_{\theta}\). As an alternative, we introduce a discrete analogue of the famous Tweedie's theorem (Efron, 2011): **Theorem 4.1** (Discrete Tweedie's Theorem).: _Suppose that a distribution \(p_{t}\) is a perturbation of a base distribution \(p_{t-\epsilon}\) with a diffusion matrix \(\exp(\overline{\sigma}Q)\). Then the (exact) reverse transition is given by_ \[p_{t-\epsilon|t}(x_{t-\epsilon}|x_{t})=\left(\exp(-\overline{\sigma}Q)\left[ \frac{p_{t}(y)}{p_{t}(z_{t})}\right]_{i=1}^{N}\right)_{x_{0}}\exp(\overline{ \sigma}Q)_{x_{t},x_{0}} \tag{20}\] Note that this denoising scheme can not be directly applied to our language modeling task. In particular, we are not modeling the ratios between any two sequences, as otherwise this would allow us to generate from \(p_{T}\) to \(p_{0}\) in only one step. 
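To ground these pieces, the following single-token toy sketch (ours; the vocabulary size and noise schedule are assumptions) builds the uniform matrix of (18), computes forward marginals via (17), and simulates the reverse process with the \(\tau\)-leaping rule (19) using the exact ratios \(p_{t}(y)/p_{t}(x)\) in place of \(s_{\theta}\), approximately recovering the data distribution.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Single-token toy: vocabulary of N states with the uniform transition of (18).
N = 8
Q = np.ones((N, N)) - N * np.eye(N)                     # columns sum to zero
p0 = np.arange(1, N + 1) / np.arange(1, N + 1).sum()    # toy "data" distribution

sigma = lambda t: t                                     # assumed noise level sigma(t)
sigma_bar = lambda t: 0.5 * t**2                        # its integral

def p_t(t):
    """Forward marginal via Eq. (17): p_t = exp(sigma_bar(t) Q) p_0."""
    return expm(sigma_bar(t) * Q) @ p0

# Reverse simulation with tau-leaping, Eq. (19), using exact ratios for s_theta.
T, steps, n = 2.0, 200, 20_000
ts = np.linspace(T, 0.0, steps + 1)
x = rng.integers(0, N, size=n)                          # p_T is essentially uniform here
for t_hi, t_lo in zip(ts[:-1], ts[1:]):
    dt = t_hi - t_lo
    pt = p_t(t_hi)
    probs = dt * sigma(t_hi) * (pt[None, :] / pt[x][:, None])   # jump probabilities
    probs[np.arange(n), x] = 0.0
    stay = np.clip(1.0 - probs.sum(axis=1), 0.0, None)          # clip, then normalize
    probs[np.arange(n), x] = stay
    probs /= probs.sum(axis=1, keepdims=True)
    # Vectorized categorical draw per sample.
    x = np.minimum((rng.random(n)[:, None] > probs.cumsum(axis=1)).sum(axis=1), N - 1)

recovered = np.bincount(x, minlength=N) / n
print("TV(p0, recovered) =", 0.5 * np.abs(recovered - p0).sum())   # should be small
```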
However, we can use this intuition to build an Tweedie \(\tau\)-leaping update: \[p(x_{t-\Delta t}^{i})=\big{(}\exp(\overline{\sigma}_{\Delta t}(t) Q)s_{\theta}(\mathbf{x}^{t},t)_{i}\big{)}_{x_{t-\Delta t}^{i}}\exp(\overline{ \sigma}_{\Delta t}(t)Q)_{x_{t}^{i},x_{t-\Delta t}^{i}} \tag{21}\] \[\overline{\sigma}_{\Delta t}(t)=\overline{\sigma}(t)-\overline{ \sigma}(t-\Delta t) \tag{22}\] Interestingly, this is optimal when one forces the \(\tau\)-leaping discretization: **Theorem 4.2** (Mean Parameterization and Scores).: _Let \(p_{t-\Delta t|t}^{\theta^{*}}\) be the update rule defined by Equation 21 when our score function \(s_{\theta}^{*}\) is learned perfectly. Then, \(p_{t-\Delta t|t}^{\theta^{*}}\) minimizes the KL divergence \(D_{KL}\left(p_{t-\Delta t|t}(\mathbf{x}_{t-\Delta t}|\mathbf{x}_{t})\parallel p _{t-\Delta t|t}^{\theta}(\mathbf{x}_{t-\Delta t}|\mathbf{x}_{t})\right)\) when we restrict \(p_{t-\Delta t|t}^{\theta}\) to factorize by dimension._ Furthermore, this recover the analytic sampling method for other methods (Appendix C.2). ### Arbitrary Prompting and Infilling Our concrete score can be used to enable greater control over the generative process. In particular, we consider the infilling problem defined by the conditional probabilities \[p_{t}(\mathbf{x}^{\Omega}|\mathbf{x}^{\overline{\Omega}}=\mathbf{y})\quad \Omega\text{ unfilled indices}\quad\overline{\Omega}\text{ already filled indices}. \tag{23}\] for example, a standard autoregressive conditional generation would have \(\overline{\Omega}=\{1,2,\dots,c\}\) and \(\Omega=\{c+1,c+2,\dots,d\}\). By Bayes' rule, the conditional scores can be recovered exactly from the unconditional score. \[\begin{array}{l}\frac{p_{t}(\mathbf{x}^{\Omega}=\mathbf{z}^{ \prime}|\mathbf{x}^{\overline{\Omega}}=\mathbf{y})}{p_{t}(\mathbf{x}^{\Omega}= \mathbf{z}|\mathbf{x}^{\overline{\Omega}}=\mathbf{y})}=\frac{p_{t}(\mathbf{x} =\mathbf{z}^{\prime}\oplus\mathbf{y})}{p_{t}(\mathbf{x}=\mathbf{z}\oplus \mathbf{y})}\quad\oplus\text{ is concatenation}\end{array} \tag{24}\] We can therefore approximate the relevant ratios (namely those with with one changed index) with our vanilla score function \(s_{\theta}\), which justifies the following sampling procedure \[\mathbf{x}_{t-\Delta t}=\mathrm{proj}_{|\overline{\Omega}\to\mathbf{y}}( \mathrm{sample}_{\mathrm{t-\Delta t}}(\mathbf{x}_{\mathrm{t}},s_{\theta})) \tag{25}\] (i.e. projecting the known indices after each score function-based step/only changing the unknown indices). In principle, we can also bound the likelihoods using Theorem 3.6. However, one major problem is that the probabilities \(p_{t}(\mathbf{z}\oplus\mathbf{y})\) may be low, making it hard to learn \(s_{\theta}(\mathbf{z}\oplus\mathbf{y},t)\). If this is the case, we can follow previous work and approximate this with \(s_{\theta}(\mathbf{z}\oplus\mathbf{y}(t),t)\)(Song et al., 2020). ## 5 Experiments We now empirically validate that our score entropy discrete diffusion (SEDD) model can compete with existing large-scale autoregressive models, namely GPT-2 (Radford et al., 2019). ### Model and Training Setup Our model is a standard encoder-only transformer architecture (Vaswani et al., 2017) similar to standard masked language models (Devlin et al., 2019). However, our model incorporates the time conditioning method from (Peebles and Xie, 2022) and uses rotary instead of positional encodings Su et al. (2021). 
Rather than outputting \(s_{\theta}\) directly, we output \(\log s_{\theta}\) to maintain positivity without clipping the output or gradients. We report results for both the uniform and absorbing token matrices \(Q^{\mathrm{uniform}}\) and \(Q^{\mathrm{absorb}}\). For the reported perplexities with the absorbing transition, we use a log-linear noise schedule that masks out \(\tau\sim U([0,d])\) tokens. For all other experiments/generations, we used a geometric noise schedule that interpolates between \(10^{-5}\) and \(20\). Outside of this, we did not systematically explore noise schedules or alternative loss weighting, although these will most likely improve sample perplexity and generation (as is commonly seen for continuous diffusion). We train on OpenWebText (Gokaslan and Cohen, 2019), an open source recreation of the WebText dataset used for training GPT-2. We matched the architecture sizes of GPT-2, although our models have slightly more non-embedding parameters (\(\approx 5-10\%\)) due to the additional time conditioning network. Further experimental and architecture details are given in Appendix B. ### Perplexity Score Comparison We follow GPT-2 and report zero-shot perplexities on the LAMBADA, WikiText2, PTB, WikiText103, and 1 Billion Words datasets. We recompute baseline likelihoods for all datasets except 1BW, where we encountered unexpected behavior with the public implementations. Our likelihood computation differs from the original setting since we use different splits and evaluate unconditional likelihoods (i.e., without a sliding window). This results in slightly higher perplexities for GPT-2 than originally reported, although the difference is minor on most datasets. Our results are reported in Table 1. Our absorbing transition models effectively match the performance of GPT-2, as their perplexities are commonly within \(+10\%\). Prior work (Song et al., 2021) has shown that this is around the gap between exact likelihoods and the variational bound for continuous-space diffusion, although it is unknown whether this holds true for discrete space. However, the uniform transition models consistently underperform. ### Sample Quality Comparison We also test the quality of our generated samples. In particular, we generate unconditional samples of length \(1024\) and report the generative perplexity (as measured by GPT-2 Large). We use the analytic sampling method for GPT-2 and our reverse diffusion sampler for SEDD to fairly compare the modeled probability distributions. Note that other commonly used autoregressive sampling methods (e.g., beam search or nucleus sampling) don't sample from the true distribution and are adversarial against our evaluation objective since they are made to explicitly decrease the perplexity of the generated sequences (Freitag and Al-Onaizan, 2017; Holtzman et al., 2019). Our results (for small models) are shown in Figure 2. SEDD with the absorbing transition reliably outperforms GPT-2, in addition to creating a time/quality tradeoff curve that is log-log linear. We can also see the added consistency and fluency of our approach in the generated samples. More information about the generative perplexity of other models (e.g., uniform transition, Euler sampling) and additional samples are given in Appendix C.4. ### Arbitrary Infilling Finally, we showcase our ability to condition our generation with inputs at arbitrary locations. Our results are shown in Table 2.
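Concretely, each of these conditional generations follows the projection rule of Eq. (25): run an ordinary reverse step everywhere, then overwrite the clamped positions with the prompt. The following is a minimal schematic of that loop; it is our sketch, not the released implementation, and `sample_step` and `score_fn` are hypothetical stand-ins for a reverse-diffusion update such as Eq. (21) and for the learned \(s_{\theta}\).

```python
# Schematic of the projection-based conditional sampler of Eq. (25) (our sketch,
# not the released implementation). `sample_step(x, t, dt, score_fn)` stands in
# for any reverse-diffusion update, e.g. the Tweedie tau-leaping rule of Eq. (21);
# `score_fn` stands in for s_theta. Both names are hypothetical.
import torch

def infill(x_T, score_fn, sample_step, known_idx, known_tokens,
           n_steps=128, T=1.0, eps=1e-3):
    """known_idx lists the already-filled positions (Omega-bar); known_tokens is y."""
    x = x_T.clone()
    x[:, known_idx] = known_tokens                   # clamp the prompt in the noise
    ts = torch.linspace(T, eps, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        x = sample_step(x, t, t - t_next, score_fn)  # unconditional reverse step
        x[:, known_idx] = known_tokens               # proj: re-impose the prompt so
                                                     # only the unfilled indices evolve
    return x
```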
We can see that the additional flexibility offered by our framework can lead to more options to control the text generation. In addition to the standard autoregressive prompting at the start of the sentence, our model is able to generate from the end, middle, or with multiple prompts at different locations. Additional samples are given in Appendix C.4. \begin{table} \begin{tabular}{l|c c c c c} & LAMBADA & WikiText2 & PTB & WikiText103 & 1BW \\ \hline GPT-2-small & **45.04** & **42.43** & 138.43 & **41.60** & **75.20\({}^{*}\)** \\ SEDD-small Absorb & \(\leq\)52.21 & \(\leq\)44.75 & \(\leq\)**130.49** & \(\leq\)43.14 & \(<\)80.70 \\ SEDD-small Uniform & \(\leq\)66.94 & \(\leq\)55.88 & \(\leq\)144.88 & \(\leq\)53.90 & \(\leq\)100.86 \\ \hline GPT-2-medium & **35.66** & **31.80** & 123.14 & **31.39** & **55.72\({}^{*}\)** \\ SEDD-medium Absorb & \(\leq\)44.60 & \(\leq\)34.85 & \(\leq\)**93.26** & \(\leq\)32.97 & \(\leq\)67.91 \\ SEDD-medium Uniform & \(\leq\)51.14 & \(\leq\)39.79 & \(\leq\)100.58 & \(\leq\)37.69 & \(\leq\)79.26 \\ \end{tabular} \end{table} Table 1: **Zero-shot unconditional perplexity (lower is better) on a variety of datasets. For a fixed model size, the best perplexity is bolded and ELBO bounds that fall within the variational error of \(+10\%\) are underlined. Our score entropy discrete diffusion (SEDD) model with absorbing transition almost matches the performance of GPT-2, although the uniform transition lags behind.** Figure 2: **Evaluation of generated text. We compare the ancestral sampling techniques of autoregressive and diffusion models (GPT2 vs SEDD). Our SEDD model consistently outperforms GPT2, interpolating between a \(16\times\) speedup and a \(5\times\) improvement based on the chosen step size. The generated text showcases this improved generation capability. Additional samples in Appendix C.4 ## 6 Related Work **Mean Prediction Parameterization.** Many previous discrete diffusion works draw inspiration from Ho et al. (2020) and parameterize \(\overline{Q}\) with the "mean" probability distribution \(p_{0|t}(\mathbf{x}_{0}|\mathbf{x}_{t})\)(Austin et al., 2021; Hoogeboom et al., 2021; Campbell et al., 2022; He et al., 2022). However, these models typically perform anywhere from \(2\) to \(4\) times worse than autoregressive models on non-toy language modeling datasets like 1BW (Austin et al., 2021; He et al., 2022). We discuss more similarities and differences and corroborate that mean parameterization is unsuitable for our scale in Appendix C.2. **Ratio Matching Conditionals.**(Sun et al., 2022) notes that the ratios parameterize the reverse distribution but instead optimizes with the ratio matching loss (Hyvarinen, 2007), which parameterizes the conditional distributions \(p_{t}(x^{i}|(x^{j})_{j\neq i})\). This setup differs greatly because one must construct specialized architectures to make sure \(x^{i}\) does not affect \(p(x^{i})\)(Chen and Duvenaud, 2019), which is tough to scale and has seen limited success in non-language modeling tasks. **Learned Dequantizations.** Several works instead examine continuous relaxations of the discrete diffusion process (Li et al., 2022; Dieleman et al., 2022; Gulrajani and Hashimoto, 2023). By doing so, these models can leverage the existing continuous-space diffusion framework. However, such a graph embedding is hard and can result in sparse signals for the model to learn. 
As such, when compared with autoregressive models, these require \(10\times\) the parameters for similar perplexities (Gulrajani and Hashimoto, 2023) and a comparable (if not more) amount of sampling steps to generate high quality samples. ## 7 Future Work Despite our contribution, much work remains before discrete diffusion models can truly rival modern autoregressive models. For example, the effects of scaling on performance (Hoffmann et al., 2022) are unexplored, and diffusion models are limited since they generate full length outputs. Furthermore, existing systems level improvements such as the KV cache can cut into our algorithmic speedup. However, our work takes the crucial first step in showcasing competitive viability and demonstrating tangible benefits for language diffusion modeling. Therefore, we remain optimistic that future work in this direction can rival the domination of autoregressive models. By generalizing improvements in the continuous diffusion model framework, future work could tune the noise schedule to further improve generation quality (Dhariwal and Nichol, 2021), leverage score-based controllability methods Ho (2022), or further reduce the number of sampling steps (Song et al., 2023). \begin{table} \begin{tabular}{|p{227.6pt}|} \hline A bow and arrow is a traditional weapon used by penury Englishmen. The gun shoots into water, starvation and thunder centuries after short-range weapons were built. The weapon is the focus of a new exhibition Dr Tom Fellow, from Pcock, is curator of objects at the History Museum in Oxford.... \\ \hline... seems to have known skydiving is a fun sport that exists, in other words, subliminally like climbing the feeling is exhilarating. Watson is beginning to wonder, as their conversation on it continues, why not. “One thing springs to mind,” she says.... \\ \hline... with significantly lower skin infections. Also this year a Franklin study published a report that found that with more use of reliable medical data, monthly changes following a nutritional boost could have a devastating stay in school kids. \\ \hline... as if he could have been erred, (Donald Trump and Hillary Clinton started to change their position. Some, as Tom and Perez mentioned, were good specifics, such as where they have a letter the FFP agents give their way to pass to offsetting... \\ \hline... beyond a doubt.\(<\)|endofttext|\(>\)Barack Obama used to have all the ‘white house people’ in Barack Obama Plaza. Being Barack you hold every hold of Obama’s statements and you blame it for being misleading. Obama has been enough. He’s seen enough stuff to make you think twice. So, give him a break... \\ \hline \end{tabular} \end{table} Table 2: **Picked Conditionally Generated Text. Prompt tokens are given in blue. Our model is able to generate meaningful text with prompt tokens in the front, the end, the middle, or even split up. First two samples are generated by SEDD medium, while the last three are from SEDD small. Additional samples are given in Appendix C.4.** ## 8 Conclusion We have introduced score entropy discrete diffusion (SEDD) models, a new class of discrete diffusion model that is parameterized by the concrete score and can be learned efficiently with our novel score entropy loss. SEDD achieves competitive performance in a direct head-to-head with GPT-2, almost matching likelihoods while generating higher quality samples with more control options. 
We hope that future work can build off of our framework to define diffusion model alternatives to the modern autoregressive language modeling paradigm. ## 9 Acknowledgements This project was supported by NSF (#1651565), ARO (W911NF-21-1-0125), ONR (N00014-23-1-2159), CZ Biohub, a Stanford HAI GCP grant, and Pika Labs. AL is supported by a NSF Graduate Research Fellowship.
2301.00917
Anomalous circular phonon dichroism in transition metal dichalcogenides
A magnetic field can generally induce circular phonon dichroism based on the formation of Landau levels of electrons. Here, we study the magnetization-induced circular phonon dichroism in transition metal dichalcogenides, without forming Landau levels. We find that, instead of the conventional deformation potential coupling, pseudogauge-type electron-phonon coupling plays an essential role in the emergence of the phenomenon. As a concrete example, a large dichroism signal is obtained in monolayer MoTe2 on a EuO substrate, even without considering Rashba spin-orbit coupling. Due to the two-dimensional spin-valley-coupled band structure, MoTe2 shows a reciprocal and nonreciprocal absorption of circularly polarized acoustic phonons upon reversing the direction of phonon propagation and magnetization, respectively. By varying the gate voltage, a tunable circular phonon dichroism can be realized, which paves the way toward new physics and applications of two-dimensional acoustoelectronics.
Wen-Yu Shan
2023-01-03T01:22:40Z
http://arxiv.org/abs/2301.00917v1
# Anomalous circular phonon dichroism in transition metal dichalcogenides ###### Abstract Magnetic field can generally induce circular phonon dichroism based on the formation of Landau levels of electrons. Here we study the magnetization-induced circular phonon dichroism in transition metal dichalcogenides, without forming the Landau levels. We find that, instead of the conventional deformation potential coupling, the pseudogauge-type electron-phonon coupling plays an essential role in the emergence of the phenomenon. As a concrete example, a large dichroism signal is obtained in monolayer MoTe\({}_{2}\) on a EuO substrate, even without considering the Rashba spin-orbit coupling. Due to the two-dimensional spin-valley-coupled band structure, MoTe\({}_{2}\) shows a reciprocal and nonreciprocal absorption of circularly polarized acoustic phonons upon reversing the direction of phonon propagation and magnetization, respectively. By varying the gate voltage, a tunable circular phonon dichroism can be realized, which paves a way toward new physics and applications of two-dimensional acoustoelectronics. _Introduction_.--Recent years have seen a surge of interest in investigating topological properties in the nonelectronic systems, e.g., photonic, magnonic and phononic materials. For phonons, the concepts of band topology and geometry have brought into new ingredients: chiral phonons [1; 2; 3; 4], angular momentum [5; 6; 7; 8], orbital magnetic moments of phonons [9; 10; 11; 12], phonon angular momentum Hall effect [13], phonon rotoelectric effect [14] and so on. In metals, the interplay between phonons and electrons with nontrivial band topology or geometry may further induce distinctive features, such as phonon helicity [15] and phonon magnetochiral effect [16; 17]. Circular dichroism, the differential absorption between left- and right-handed circularly polarized light, has been widely used in examining topological phases of matter [18; 19; 20; 21; 22; 23; 24]. A phononic analog, namely, circular phonon dichroism (CPD), is later proposed in three-dimensional Weyl semimetals [25]. However, a direct analogy between phonons and photons is not that obvious. The reasons are twofold. First, the photon wave vector is usually much smaller than the Fermi wave vector of electrons, thus only inducing the interband transition of electrons; whereas the phonon wave vector may be comparable to that of electrons, giving rise to either interband or intraband transition (see Fig. 1 (b)). Second, light waves consist of only transverse modes, whereas acoustic waves in solids have both longitudinal and transverse modes. Particularly, when dealing with two-dimensional (2D) materials, one has to mix longitudinal and transverse in-plane modes to create circular phonons [26], in marked contrast to the case of light. This indicates that 2D circular phonon dichroism is intrinsically different from the circular dichroism of light, where the former has received far less attention. Experimentally, several works have unveiled the effect of Landau levels of electrons on the phonon dispersion or circular dichroism in graphene [26; 27; 28], such as the magnetophonon resonance. Nevertheless, the treatment of Landau levels inevitably induces topology, even into an originally trivial system. In this sense, the CPD can not resolve the real band topology or geometry of the underlying system. 
Another way of breaking time-reversal symmetry is to introduce the magnetic exchange interaction, which does not require the formation of Landau levels and could retain the basic topology or geometry of the band structure. Up to now, the intrinsic magnetization-induced CPD in 2D materials like monolayer transition metal dichalcogenides remains unknown. This generalization of magnetization bears similarities to the case of the anomalous Hall effect, hence the name \(anomalous\ circular\ phonon\ dichroism\). The distinct spin-valley-coupled band structure of transition metal dichalcogenides may further contribute to the anomalous behaviors of CPD and their nonreciprocal relations. Therefore, studying this new type of CPD would be desirable for a better understanding and manipulation of band geometry or topology in 2D materials. In this paper, we explore the magnetization-induced CPD in monolayer transition metal dichalcogenides. To allow this effect, the pseudogauge-type electron-phonon coupling is necessary instead of the conventional deformation potential coupling. We obtain a large dichroism signal in monolayer MoTe\({}_{2}\) on a EuO substrate, even in the absence of Rashba spin-orbit coupling. Due to the unique spin-valley coupling, we find that MoTe\({}_{2}\) shows a reciprocal (nonreciprocal) absorption of circularly polarized acoustic phonons upon reversing the direction of phonon propagation (magnetization). Our study refreshes our knowledge of the effect of electron-phonon coupling on phonon dynamics, and paves the way toward acoustoelectronics for 2D materials. _Model Hamiltonian_.--We take the pristine 2H-phase transition metal dichalcogenide MoTe\({}_{2}\) on a EuO substrate as a prototype (see Fig. 1 (a)). The effective electronic Hamiltonian is given by \(\mathcal{H}=\sum_{\mathbf{k}}\psi^{+}(\mathbf{k})[H_{0}+H_{soc}+H_{ex}+H_{R}]\psi(\mathbf{k})\), where [29; 30] \[\begin{split} H_{0}&=\hbar v(\tau\sigma_{x}k_{x}+\sigma_{y}k_{y})+\frac{\Delta}{2}\sigma_{z},\\ H_{soc}&=\tau s_{z}(\lambda_{c}\sigma_{+}+\lambda_{v}\sigma_{-}),\\ H_{ex}&=-\mathbf{s}\cdot\mathbf{n}(B_{c}\sigma_{+}+B_{v}\sigma_{-}),\\ H_{R}&=\lambda_{R}(\tau s_{y}\sigma_{x}-s_{x}\sigma_{y}).\end{split} \tag{1}\] \(\psi^{+}(\mathbf{k})\) and \(\psi(\mathbf{k})\) are the creation and annihilation operators of electrons. \(H_{soc}\), \(H_{ex}\) and \(H_{R}\) correspond to the Ising-type spin-orbit coupling, proximity-induced exchange and Rashba interaction, respectively. \(\mathbf{s}\) and \(\mathbf{\sigma}\) are Pauli matrices acting on the spin \(\{\uparrow,\downarrow\}\) and orbital subspace \(\{|d_{z^{2}}\rangle,\frac{1}{\sqrt{2}}(|d_{x^{2}-y^{2}}\rangle+i\tau|d_{xy}\rangle)\}\), and \(\sigma_{\pm}=\frac{1}{2}(\sigma_{0}\pm\sigma_{z})\). \(\tau=\pm 1\) labels valley \(K_{\pm}\). \(\lambda_{c/v}\) describes the spin splitting of the conduction and valence bands, respectively. \(B_{c/v}\) is the effective Zeeman field experienced by the conduction and valence bands, arising from the exchange coupling with the magnetic substrate. The out-of-plane \(z\)-direction magnetization \(\mathbf{n}=\mathbf{e}_{z}\) is considered (see Fig. 1 (a)). For the moment, we set \(\lambda_{R}=0\) in order to have analytical expressions and an intuitive physical picture. The role of \(\lambda_{R}\) will be clarified later. The electronic band structure upon magnetization is schematically shown in Fig. 1 (b), where the signature of spin-valley coupling can be seen explicitly. The Fermi level is pinned at the valence bands, where the effect of spin-valley coupling is manifest.
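For concreteness, Eq. (1) with \(\lambda_{R}=0\) and \(\mathbf{n}=\mathbf{e}_{z}\) is a \(4\times 4\) Bloch Hamiltonian on the spin \(\otimes\) orbital space, and the spin-valley-split band edges \(\pm\Delta/2+s(\tau\lambda_{c/v}-B_{c/v})\) at \(k=0\) can be checked directly. The short sketch below is ours (not part of the paper), using the parameter values quoted later with Fig. 2.

```python
# A small numerical sketch (not from the paper) of the 4x4 Bloch Hamiltonian in
# Eq. (1) with lambda_R = 0 and n = e_z, using the parameter values quoted with
# Fig. 2. It verifies the k = 0 band edges  +/- Delta/2 + s*(tau*lambda_{c/v} - B_{c/v}).
import numpy as np

s0 = np.eye(2); sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1.0, -1.0])
sp, sm = (s0 + sz) / 2, (s0 - sz) / 2          # sigma_+ , sigma_-  (orbital projectors)

hbar_v, Delta = 2.33, 1.05                      # eV*Angstrom, eV
lam_c, lam_v, B_c, B_v = 0.029, 0.11, 0.206, 0.17   # eV

def H(kx, ky, tau):
    """Spin (s) tensor orbital (sigma) Hamiltonian of Eq. (1) with lambda_R = 0."""
    H_orb = hbar_v * (tau * kx * sx + ky * sy) + 0.5 * Delta * sz
    H_mag = tau * (lam_c * sp + lam_v * sm) - (B_c * sp + B_v * sm)   # coefficient of s_z
    return np.kron(s0, H_orb) + np.kron(sz, H_mag)

for tau in (+1, -1):
    E = np.linalg.eigvalsh(H(0.0, 0.0, tau))
    edges = sorted([0.5 * Delta + s * (tau * lam_c - B_c) for s in (+1, -1)]
                   + [-0.5 * Delta + s * (tau * lam_v - B_v) for s in (+1, -1)])
    assert np.allclose(E, edges)
    print(f"valley K_{'+' if tau > 0 else '-'}: band edges (eV) =", np.round(E, 3))
```

With these numbers the valence-band edge at \(K_{-}\) indeed lies above the one at \(K_{+}\), as used below.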
The magnetization \(B_{c/v}\) shifts the opposite-spin states from different valleys in opposite directions, and thus breaks the time-reversal symmetry. For the phononic part, we consider two branches of in-plane acoustic phonon modes. Due to the low sound velocity \(c_{l/t}\) (\(l/t\) for longitudinal/transverse phonon polarization), the acoustic phonon energy \(\omega_{l/t}=\hbar c_{l/t}|\mathbf{q}|\) is much smaller than the valence band splitting of electrons, i.e., \(2(B_{v}\pm\lambda_{v})\), where \(\mathbf{q}\) is the phonon wave vector. As a result, only intraband transitions of electrons are triggered by acoustic phonons (see Fig. 1 (b)). By contrast, optical phonon modes with larger energy, may enable either intraband or interband transitions. Nonetheless, the basic physical picture should be similar. For simplicity, we further study the long-wavelength limit of phonon modes, which allows us to neglect the intervalley scattering process of electrons. Based on the theory of elasticity [31; 32; 33], the electro-acoustic-phonon coupling in MoTe\({}_{2}\) contains two terms: \(\mathcal{H}_{e-ph}=\mathcal{H}_{e-ph}^{d}+\mathcal{H}_{e-ph}^{p}\), where \(\mathcal{H}_{e-ph}^{d}\) (\(\mathcal{H}_{e-ph}^{p}\)) refers to the deformation (pseudogauge) potential coupling. \(\mathcal{H}_{e-ph}^{d/p}\) has a general form [34; 35] \[\mathcal{H}_{e-ph}^{d/p}=\sum_{\mathbf{k},\mathbf{q}}\psi^{+}(\mathbf{k}+\mathbf{q})[\mathbf{u}( \mathbf{q})\cdot\hat{\mathbf{T}}_{d/p}(\mathbf{q})]\psi(\mathbf{k}), \tag{2}\] where \(\mathbf{u}(\mathbf{q})\) is the Fourier transform of the in-plane collective displacement \(\mathbf{u}(\mathbf{r})\) for acoustic modes [32] and \(\hat{\mathbf{T}}(\mathbf{q})\) is the Fourier transform of the "effective" force operator \(\hat{\mathbf{T}}(\mathbf{r})\) acting on atoms by electrons. For the deformation potential \(\mathcal{H}_{e-ph}^{d}\), \(\hat{\mathbf{T}}_{d}(\mathbf{q})=ig_{d}\mathbf{q}\), which is independent of the valley index \(\tau\). For the pseudogauge potential \(\mathcal{H}_{e-ph}^{p}\), the force operator becomes valley-dependent, that is, \(\hat{\mathbf{T}}_{p}^{\tau=-1}(\mathbf{q})=ig_{p}[\mathbf{q}\cdot\mathbf{\sigma},(\mathbf{q}\times \mathbf{\sigma})_{z}]\) and \(\hat{\mathbf{T}}_{p}^{\tau=1}(\mathbf{q})=\mathcal{K}[\hat{\mathbf{T}}_{p}^{\tau=-1}(-\mathbf{q})]\). \(\mathcal{K}\) is the complex conjugation operator. The relation between \(\hat{\mathbf{T}}_{p}^{\tau=1}(\mathbf{q})\) and \(\hat{\mathbf{T}}_{p}^{\tau=-1}(-\mathbf{q})\) preserves the time-reversal symmetry of electron-phonon coupling in the absence of magnetization. _Phonon equation of motion._--For the phonon dynamics, we consider the phonon equation of motion in the frequency-momentum (\(\omega\), \(\mathbf{q}\)) domain [25] \[\omega^{2}u_{\alpha}(\mathbf{q})=\sum_{\beta}[\Phi_{\alpha\beta}(\mathbf{q})+\hbar \chi_{\alpha\beta}(\mathbf{q},\omega)]u_{\beta}(\mathbf{q}), \tag{3}\] where \(\alpha,\beta=x,y\) and \(\Phi(\mathbf{q})\) is the dynamical matrix. 
\(\chi_{\alpha\beta}(\mathbf{q},\omega)\) is a retarded response function arising from the electron-phonon coupling and follows at each valley [35] \[\begin{split}&\chi_{\alpha\beta}^{\tau}(\mathbf{q},\omega+i\delta)=\sum_{n,m}\int\frac{\hbar d^{2}\mathbf{k}}{\rho(2\pi)^{2}}\frac{f_{\tau,m,\mathbf{k}}-f_{\tau,n,\mathbf{k}-\mathbf{q}}}{\omega+i\delta+E_{\tau,m,\mathbf{k}}-E_{\tau,n,\mathbf{k}-\mathbf{q}}}\\ &\times\langle\tau,m,\mathbf{k}|\hat{T}_{\alpha}^{\tau}(\mathbf{q})|\tau,n,\mathbf{k}-\mathbf{q}\rangle\langle\tau,n,\mathbf{k}-\mathbf{q}|\hat{T}_{\beta}^{\tau}(-\mathbf{q})|\tau,m,\mathbf{k}\rangle.\end{split} \tag{4}\] \(E_{\tau,m,\mathbf{k}}\) and \(|\tau,m,\mathbf{k}\rangle\) are the dispersion and electronic wave function of Hamiltonian (1), respectively. \(f_{\tau,m,\mathbf{k}}\) (\(f_{\tau,n,\mathbf{k}-\mathbf{q}}\)) is the Fermi distribution function, \(\rho\) is the 2D mass density, and \(\delta\) is a positive infinitesimal. Figure 1: Schematics of (a) the setup and (b) electronic band structure of monolayer MoTe\({}_{2}\). In (a), 2H-phase monolayer MoTe\({}_{2}\) is deposited on the EuO substrate. In (b), the yellow (green) region corresponds to spin-up (-down) bands. The transition process of electrons due to acoustic phonons (photons) is indicated by the blue solid (red dashed) line. Since only intraband transitions (band indices \(m=n\)) of electrons are allowed by acoustic modes in the low-temperature limit, \(m,n\) reduce to the ones intersected by the Fermi level, i.e., the spin-split valence bands at valley \(K_{\pm}\) (see Fig. 1 (b)). _Circular phonon dichroism._--Our main interest lies in the anti-Hermitian part of \(\chi(\mathbf{q},\omega)\), that is, \(-2i\omega\gamma(\mathbf{q},\omega)\), where \(\gamma(\mathbf{q},\omega)\) is a Hermitian matrix satisfying \(\gamma^{+}(\mathbf{q},\omega)=\gamma(\mathbf{q},\omega)\). This matrix corresponds to the non-Hermitian part of the phonon self-energy, which physically originates from the phonon absorption by electrons. In the basis of \(\{\hat{x},\hat{y}\}^{T}\), the \(\gamma\) matrix has the form \[\gamma(\mathbf{q},\omega)=\left[\begin{array}{cc}D(\mathbf{q},\omega)+\bar{D}(\mathbf{q},\omega)&\bar{A}(\mathbf{q},\omega)+iA(\mathbf{q},\omega)\\ \bar{A}(\mathbf{q},\omega)-iA(\mathbf{q},\omega)&D(\mathbf{q},\omega)-\bar{D}(\mathbf{q},\omega)\end{array}\right]. \tag{5}\] Different from the Weyl semimetals [25], new terms \(\bar{D}(\mathbf{q},\omega)\) and \(\bar{A}(\mathbf{q},\omega)\) occur in monolayer MoTe\({}_{2}\) as a result of the \(D_{3h}\) point-group symmetry. For the left- and right-handed circularly polarized phonons, \(|u_{L/R}\rangle=\frac{1}{\sqrt{2}}[1,\ \pm i]^{T}\), the damping (absorption) coefficients read \(\gamma^{L/R}=D(\mathbf{q},\omega)\mp A(\mathbf{q},\omega)\). The relative difference between \(\gamma^{L}\) and \(\gamma^{R}\) defines the \(circular\ phonon\ dichroism\) (CPD). One can see that the behavior of the CPD is totally determined by \(A(\mathbf{q},\omega)/D(\mathbf{q},\omega)\). For longitudinal or transverse phonons, the polarization is linear, \(|u_{l}\rangle=[\cos\phi_{\mathbf{q}},\ \sin\phi_{\mathbf{q}}]^{T}\) and \(|u_{t}\rangle=[-\sin\phi_{\mathbf{q}},\ \cos\phi_{\mathbf{q}}]^{T}\), where the angular variable \(\phi_{\mathbf{q}}=\tan^{-1}(q_{y}/q_{x})\). The damping coefficients are given by \(\gamma^{l/t}=D(\mathbf{q},\omega)\pm\cos 2\phi_{\mathbf{q}}\bar{D}(\mathbf{q},\omega)\pm\sin 2\phi_{\mathbf{q}}\bar{A}(\mathbf{q},\omega)\), which explicitly depends on the phonon propagation direction \(\mathbf{q}\).
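For the circular channels this statement is explicit: from \(\gamma^{L/R}=D\mp A\) one has \[\gamma^{L}-\gamma^{R}=-2A(\mathbf{q},\omega),\qquad\gamma^{L}+\gamma^{R}=2D(\mathbf{q},\omega),\qquad\frac{\gamma^{L}-\gamma^{R}}{\gamma^{L}+\gamma^{R}}=-\frac{A(\mathbf{q},\omega)}{D(\mathbf{q},\omega)},\] so any measure of the relative difference between \(\gamma^{L}\) and \(\gamma^{R}\) is fixed by the single ratio \(A/D\).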
Here, different from the circular phonons, the damping coefficients \(\gamma^{l/t}\) for the linear phonons depend on the parameters \(\bar{D}(\mathbf{q},\omega)\) and \(\bar{A}(\mathbf{q},\omega)\). Specifically for the deformation potential \(\mathcal{H}^{d}_{e-ph}\), \(\gamma\) matrix is proportional to [35] \[\gamma(\mathbf{q},\omega)\propto\left[\begin{array}{cc}q_{x}^{2}&q_{x}q_{y}\\ q_{x}q_{y}&q_{y}^{2}\end{array}\right]. \tag{6}\] This immediately leads to \(A(\mathbf{q},\omega)=0\), meaning that the CPD vanishes when only the deformation potential coupling is taken in account. Meanwhile, \(\gamma^{t}=0\), suggesting that there is no absorption for the transverse phonon modes. This agrees with the fact that the deformation potential only couples electrons to the longitudinal phonon modes [34]. For the pseudogauge potential \(\mathcal{H}^{p}_{e-ph}\), the situation is more complex. When only focusing on the acoustic modes, analytical expressions for all elements of \(\gamma(\mathbf{q},\omega)\) matrix can be obtained [35]. For example, when a single valence band at valley \(K_{\tau}\) is intersected by the Fermi level, \(A(\mathbf{q},\omega)/D(\mathbf{q},\omega)\) reduces to \[\frac{A^{\tau}(\mathbf{q},\omega)}{D^{\tau}(\mathbf{q},\omega)}=\tau\omega\frac{\Delta -\tau\lambda_{c}+\tau\lambda_{v}+B_{c}-B_{v}}{2[\omega x_{F}^{\tau}-(\hbar vk_ {F}^{\tau})^{2}]}\Theta(k_{F}^{\tau}-\frac{q}{2}-k_{0}^{\tau}), \tag{7}\] where the Heaviside step function \(\Theta(\cdots)\) constrains the magnitude of phonon wave vector \(q=|\mathbf{q}|\). \(k_{F}^{\tau}\) is a valley-dependent Fermi wave vector of electrons. \(k_{0}^{\tau}=\frac{\omega}{2\hbar v}\sqrt{1+\frac{(\Delta-\tau\lambda_{c}+ \tau\lambda_{v}+B_{c}-B_{v})^{2}}{(\hbar vq)^{2}-\omega^{2}}}\) and \(x_{F}^{\tau}=\sqrt{\left(\frac{\Delta-\tau\lambda_{c}+\tau\lambda_{c}+B_{c}-B_ {v}}{2}\right)^{2}+(\hbar vk_{F}^{\tau})^{2}}\). However, such a simple relation fails when both valleys are intersected by the Fermi level, given that \(A(\mathbf{q},\omega)=\sum_{\tau}A^{\tau}(\mathbf{q},\omega)\) and \(D(\mathbf{q},\omega)=\sum_{\tau}D^{\tau}(\mathbf{q},\omega)\). On the other hand, both \(\bar{A}(\mathbf{q},\omega)\) and \(\bar{D}(\mathbf{q},\omega)\) become \(\phi_{\mathbf{q}}\)-dependent [35]: \(\bar{A}(\mathbf{q},\omega)=-F(q,\omega)\sin 4\phi_{\mathbf{q}}\) and Figure 2: (a) Angular \(\phi_{\mathbf{q}}\)-dependence of the damping coefficients \(\gamma^{l}\) (\(\gamma^{t}\)) for the longitudinal (transverse) acoustic phonon modes. (b)-(d) Relations of the circular phonon dichroism \(A/D\) versus the phonon wave vector \(q/k_{F}^{\tau-1}\), Fermi energy \(E_{F}\) and gap function \(\Delta\), respectively. In (b), the Fermi energy is fixed: \(E_{F}=-0.48\) eV. The inset shows the details of the cyan region, and the two peaks are due to the longitudinal and transverse phonon modes, respectively. The peaks in the cyan region are given by a summation of valleys \(K_{\pm}\), whereas the peaks near \(q/k_{F}^{\tau-1}=2\) are only determined by valley \(K_{-}\). \(k_{F}^{\tau-1}\) is the Fermi wave vector at valley \(K_{-}\). The red dot refers to the case of (a). In (c), different values of \(q\) are adopted. The locations of the Fermi level at the peaks are shown in the inset: both valley \(K_{\pm}\) are intersected at smaller \(E_{F}\); only valley \(K_{-}\) is intersected at larger \(E_{F}\). In (d), the black dot indicates the value of \(\Delta\) in (a)-(c): \(\Delta=1.05\) eV. 
Parameters: \(\lambda_{v}=0.11\) eV, \(\lambda_{c}=0.029\) eV, \(\hbar v=2.33\) eV\(\cdot\mathring{A}\), \(B_{c}=0.206\) eV, \(B_{v}=0.17\) eV [29], longitudinal and transverse sound velocity \(c_{l}=3.64\times 10^{3}\) m/s and \(c_{t}=2.21\times 10^{3}\) m/s [36], mass density \(\rho=9.40\times 10^{-6}\) kg/m\({}^{2}\)[37] and the electron-phonon coupling constant \(g_{p}=0.32\) eV [38]. \(\bar{D}(\mathbf{q},\omega)=F(q,\omega)\cos 4\phi_{\mathbf{q}}\), with a \(\phi_{\mathbf{q}}\)-independent factor \(F(q,\omega)\). By substituting these into \(\gamma^{l/t}\), we find for linearly polarized phonons, \[\gamma^{l/t}=D(q,\omega)\pm F(q,\omega)\cos 6\phi_{\mathbf{q}}. \tag{8}\] One can see that \(\gamma^{l/t}\) has a six-fold (\(C_{6}\)) rotational symmetry on \(\phi_{\mathbf{q}}\) (see Fig. 2 (a)), which is different from the three-fold (\(C_{3}\)) rotational symmetry of the underlying crystals. The reason for the symmetry mismatch is due to the reciprocal behaviors of \(\gamma^{l/t}\) upon reversing the direction of phonon propagation, i.e., \(\mathbf{q}\rightarrow-\mathbf{q}\), as shown in Table 1. For phonons, \(c_{l/t}\ll v\), giving rise to [35]\(D(q,\omega)\approx-F(q,\omega)\). As a result, \(\gamma^{l}\approx 2D(q,\omega_{l})\sin^{2}3\phi_{\mathbf{q}}\) and \(\gamma^{t}\approx 2D(q,\omega_{t})\sin^{2}3(\phi_{\mathbf{q}}-\frac{\pi}{6})\). This means that there is an angular shift \(\frac{\pi}{6}\) in \(\phi_{\mathbf{q}}\) between \(\gamma^{l}\) and \(\gamma^{t}\), as shown in Fig. 2 (a). For circularly polarized phonons, numerical results of \(A(\mathbf{q},\omega)/D(\mathbf{q},\omega)\) as functions of the rescaled phonon wave vector \(q/k_{F}^{\tau=1}\), Fermi energy \(E_{F}\) and \(\Delta\) are shown in Fig. 2 (b)-(d), respectively. The Fermi wave vector \(k_{F}^{\tau=-1}\) rather than \(k_{F}^{\tau=1}\) is selected since the valence band edge of valley \(K_{-}\) is higher than \(K_{+}\), as shown in Fig. 1. In Fig. 2 (b), a non-monotonic behavior of \(A/D\) as \(q\) increases can be seen explicitly. The jumps at \(q/k_{F}^{\tau=-1}\approx 0.47\) and \(1.98\) originate from the sudden vanishing of valley \(K_{+}\) and \(K_{-}\), respectively, as required by the factor \(\Theta(k_{F}^{\tau}-\frac{q}{2}-k_{0}^{\tau})\) in Eq. (7). Such a factor can be understood as a result of the energy and momentum conservation for the electron-phonon scattering process. For acoustic phonons, the electron scattering approximately occurs on the Fermi surface. In this sense, the phonon wave vector \(q\) must be smaller than the maximum value of momentum transfer of electrons, that is, \(q<2k_{F}^{\tau}\). \(k_{0}^{\tau}\) is a small offset wave vector arising from the acoustic phonon dispersion \(\omega\). As seen in the inset of Fig. 2 (b), there are actually two adjacent peaks (jumps) in the highlighted region corresponding to the \(l\) and \(t\) mode, respectively, since \(k_{0}^{\tau}\) is different for \(\omega=\omega_{l/t}\). As the sound velocity \(c_{l}>c_{t}\), \(k_{0}^{\tau}\) is larger for the longitudinal mode, leading to a smaller transition value of \(q\). In Fig. 2 (c), the locations of the Fermi level for the peaks are indicated in the inset. The peaks at the lower (higher) Fermi level are dominated by valley \(K_{+}\) (\(K_{-}\)), which exhibit opposite signs of \(A/D\). For each valley, the magnitude \(|A/D|\) increases when the Fermi level is tuned toward the band edge. Different values of \(q\) are also compared. 
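As an illustration of how Eq. (7) is evaluated in practice, the sketch below (ours, not the authors' code) plugs in the parameter values listed with Fig. 2 for the longitudinal branch \(\omega=\hbar c_{l}q\). The Fermi wave vector \(k_{F}^{\tau}\) is treated as a free, hypothetical input here (in the paper it is set by the Fermi level), and we read the gap combination entering \(x_{F}^{\tau}\) as the same \(\Delta-\tau\lambda_{c}+\tau\lambda_{v}+B_{c}-B_{v}\) that appears in the numerator of Eq. (7).

```python
# Illustrative evaluation of the single-valley ratio A^tau/D^tau in Eq. (7)
# (our sketch, not the authors' code), for the longitudinal acoustic branch.
import numpy as np

hbar = 6.582e-16            # eV*s
hbar_v = 2.33               # eV*Angstrom
Delta, lam_c, lam_v = 1.05, 0.029, 0.11   # eV
B_c, B_v = 0.206, 0.17      # eV
c_l = 3.64e3 * 1e10         # Angstrom/s  (3.64e3 m/s)

def cpd_ratio(q, k_F, tau):
    """A^tau / D^tau of Eq. (7); q and k_F in 1/Angstrom."""
    omega = hbar * c_l * q                                   # phonon energy (eV)
    gap = Delta - tau * lam_c + tau * lam_v + B_c - B_v      # valley-dependent gap (eV),
                                                             # same combination as the numerator
    x_F = np.sqrt((gap / 2) ** 2 + (hbar_v * k_F) ** 2)
    k_0 = omega / (2 * hbar_v) * np.sqrt(1 + gap ** 2 / ((hbar_v * q) ** 2 - omega ** 2))
    theta = 1.0 if k_F - q / 2 - k_0 > 0 else 0.0            # Heaviside constraint
    return tau * omega * gap / (2 * (omega * x_F - (hbar_v * k_F) ** 2)) * theta

k_F = 0.05                  # 1/Angstrom, hypothetical Fermi wave vector
for tau in (+1, -1):
    print(f"tau={tau:+d}: A/D =", cpd_ratio(q=0.2 * k_F, k_F=k_F, tau=tau))
```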
We find that by adopting a smaller \(q\), the peaks are shifted to a higher Fermi level, as \(k_{F}^{\tau}\) becomes smaller. The peaks also show a larger magnitude and become sharper, particularly for the second peaks. Therefore this provides a means of tuning the sign and magnitude of the CPD. In Fig. 2 (d), the value of \(\Delta\) adopted in Fig. 2 (a)-(c) is indicated. We can see that the magnitude \(|A/D|\) is basically enhanced when \(\Delta\) increases, expect for the discontinuous points. That is the reason why we propose monolayer transition metal dichalcogenides as candidate materials, which have large band gap and thus large CPD signals. For an order-of-magnitude estimate, we consider the parameters corresponding to the red dot in Fig. 2 (b), which also refer to the case of Fig. 2 (a). We find \(D=1.90\times 10^{7}/\text{s}\) and \(A=-7.81\times 10^{5}/\text{s}\). This yields a difference of the attenuation between the left- and right-handed circularly polarized waves, that is, \((\gamma^{L}-\gamma^{R})/\bar{c}\sim 534/\text{m}\), where \(\bar{c}=(c_{l}+c_{t})/2\) is the average sound velocity. Such difference is much larger than that of the Weyl semimetals [25], and should be observable in ultrasonic experiments. _Nonreciprocal absorption._--Given that both the space-inversion and time-reversal symmetry are broken in our system, the absorption of circularly polarized phonons is expected to be nonreciprocal. To see this, we consider in Table 1 the transformation properties of parameters \(D\), \(\bar{D}\), \(A\) and \(\bar{A}\) upon reversing the direction of phonon propagation \(\mathbf{q}\) or magnetization \(\mathbf{n}\). We find that \(D\), \(\bar{D}\) and \(\bar{A}\) are even functions of \(\mathbf{q}\) and \(\mathbf{n}\), whereas \(A\) is an even (odd) function of \(\mathbf{q}\) (\(\mathbf{n}\)). Accordingly, the absorption coefficients of circular phonons \(\gamma^{L/\bar{R}}\) remain unchanged under the transformation \(\mathbf{q}\rightarrow-\mathbf{q}\), whereas \(\gamma^{L/R}\) interchange with each other under the transformation \(\mathbf{n}\rightarrow-\mathbf{n}\). This represents a reciprocal and nonreciprocal CPD upon reversing the direction of phonon propagation and magnetization, respectively. Such result is similar to that of the Faraday rotation of light polarization [39], where the rotation angle only depends on the magnetic field direction. However, the origin is different. The absorption of circularly polarized photons is actually nonreciprocal when \(\mathbf{q}\rightarrow-\mathbf{q}\), but the chirality of circular photons also depends on the light propagation direction \(\mathbf{q}\). As a result, the reciprocal \(\mathbf{q}\)-dependence of the rotation angle is recovered. On the other hand, due to the 2D nature, the chirality of in-plane circular phonons is independent of the phonon propagation direction \(\mathbf{q}\), giving rise to the reciprocal absorption. This also indicates that there is no directional dichroism [40] or phonon magnetochiral effect [16; 17] in our system. _Roles of Rashba spin-orbit coupling._--Now we take into account the Rashba term \(H_{R}\) and treat it as a perturbation. Analytical expressions are calculated [35] and numerical results of electronic band structure and CPD are shown in Fig. 3. In Fig. 
3 (a), we find that the Rashba term shifts the conduction (valence) band edge to higher (lower) energy at valley \(K_{-}\), whereas it hardly \begin{table} \begin{tabular}{c c c c c} Transformation & \(D(\mathbf{q},\omega)\) & \(\bar{D}(\mathbf{q},\omega)\) & \(A(\mathbf{q},\omega)\) & \(\bar{A}(\mathbf{q},\omega)\) \\ \hline \(\mathbf{q}\rightarrow-\mathbf{q}\) & \(+\) & \(+\) & \(+\) & \(+\) \\ \(\mathbf{n}\rightarrow-\mathbf{n}\) & \(+\) & \(+\) & \(-\) & \(+\) \\ \end{tabular} \end{table} Table 1: Transformation properties of parameters \(D\), \(\bar{D}\), \(A\) and \(\bar{A}\). changes the band structure at valley \(K_{+}\). This explains the phenomenon of peak shift in Fig. 3 (b) since the peaks are always close to the band edge. By increasing the strength of Rashba spin-orbit coupling, the magnitude of CPD can be enhanced, which provides us a knob to tune the CPD. For a realistic strength \(\lambda_{R}=0.072\) eV [29], the behavior of \(A/D\) is similar to the case without Rashba spin-orbit coupling, thus validating our above treatment. Particularly, we find that the introduction of \(H_{R}\) does not change the reciprocal behaviors of absorption coefficients \(\gamma^{L/R}\) under \(\mathbf{q}\rightarrow-\mathbf{q}\). Therefore, to obtain the nonreciprocity, additional ingredients should be taken into account, such as the cyclotron motion of electrons [26, 27, 41, 42] or phonon-magnon coupling [43]. _Discussion and conclusion_.--We have studied the circular phonon dichroism in magnetic two-dimensional materials, i.e., monolayer MoTe\({}_{2}\) in proximity to the EuO substrate. Large dichroism signal is obtained for the pseudogauge-type electron-phonon coupling, even without introducing the Landau levels or Rashba spin-orbit coupling. Such a signal is reciprocal (nonreciprocal) upon reversing the direction of phonon propagation (magnetization). By varying the gate voltage, the CPD signal can be tuned through the role of Fermi level and Rashba spin-orbit coupling. The proposed CPD effect can also be applied to other transition-metal dichalcogenides with spin-valley-coupled band structure, or their van der waals heterostructures. The effect can be detected by the pulse-echo technique [44, 45] based on the different absorption coefficients between left- and right-handed circularly polarized phonons. An alternative detection is the Raman spectroscopy analysis [26, 27] of phonon polarization by injecting a linearly polarized acoustic waves. _Acknowledgments_. This work is supported by the National Natural Science Foundation of China (NSFC, Grant No. 11904062). We also acknowledge the support of a startup grant from Guangzhou University.
2310.11183
Real Topological Hochschild Homology of Perfectoid Rings
We refine several results of Bhatt-Morrow-Scholze on THH to THR. In particular, we compute THR of perfectoid rings. This will be useful for establishing motivic filtrations on real topological Hochschild and cyclic homology of quasisyntomic rings. We also establish a real refinement of the Hochschild-Kostant-Rosenberg theorem.
Jens Hornbostel, Doosung Park
2023-10-17T11:59:48Z
http://arxiv.org/abs/2310.11183v2
# Real topological Hochschild homology of perfectoid rings ###### Abstract. We refine several results of Bhatt-Morrow-Scholze on THH to THR. In particular, we compute THR of perfectoid rings. This will be useful for establishing motivic filtrations on real topological Hochschild and cyclic homology of quasisyntomic rings. We also establish a real refinement of the Hochschild-Kostant-Rosenberg theorem. Key words and phrases:real topological Hochschild homology, perfectoid rings, real Hochschild-Kostant-Rosenberg theorem 2020 Mathematics Subject Classification: Primary 19D55; Secondary 11E70, 16E40, 55P91 ## 1. Introduction This article may be considered as a continuation of [25]. It establishes both general properties and specific computations for real topological Hochschild homology, which as usual we abbreviate by THR. Recently, there has been a lot of progress on THR, see e.g. [1], [16], [17], [19], [39], [36], and [25]. We refer to the introduction of the latter article for further background. Besides the importance of further THR computations for their own sake, there are at least two motivations for our investigations. First, Bhatt, Morrow, and Scholze [8], [9], and [10] have presented several approaches to integral \(p\)-adic Hodge theory. The approach of [9] relies on computations of topological Hochschild (and cyclic) homology for perfectoid and more generally quasiregular semiperfectoid rings. They moreover study certain motivic filtrations on THH and TC. The latter then leads to a suitable definition of syntomic cohomology, and the generalization of Fontaine's period rings \(A_{inf}(R)\) appears as \(\pi_{0}\mathrm{TC}^{-}(R;\mathbb{Z}_{p})\) for perfectoid rings \(R\). In this article, we refine several key results of [9] to the real setting, thus making a first step towards _real integral \(p\)-adic Hodge theory_. This will be continued by the second author in [37], where THR of quasiregular semiperfectoid and quasisyntomic rings will be investigated. (There is also very interesting recent work on more classical real Hodge theory, see e.g. [6], but that's another story.) Second, some time ago, Harpaz, Nikolaus, and Shah announced the construction of a real refinement of the cyclotomic trace. This real trace should then satisfy a real refinement of the theorems of Dundas, Goodwillie, and McCarthy, and hopefully also of the recent results of Clausen-Mathew-Morrow [13] as quoted in [9, Theorem 7.15]. This work on the real trace has not yet appeared on arXiv, but it has already been used by Land [26] in his work on Gabber rigidity for hermitian \(K\)-theory. We expect that it will lead to many more interesting results about hermitian \(K\)-theory, and we hope that the results of this article will be useful for this. Some theorems of this article only hold for commutative rings with trivial involution. Future research is necessary to improve our understanding in the case with involutions. To prepare the ground for this, we establish large parts of the following in the more general setting of commutative rings with involution, and even for arbitrary Green functors. We also provide computations for two important families of commutative rings with involution, see Propositions 4.35 and 4.37 as well as Remarks 4.36 and 5.18. We now summarize some of our main results. We prove the following real refinement of the Hochschild-Kostant-Rosenberg theorem: **Theorem 1.1**.: _(see Theorems 4.30 and 4.31) Let \(R\) be a commutative ring, and let \(A\) be a simplicial commutative \(R\)-algebra. 
Then there exists a natural filtration \(\operatorname{Fil}_{\bullet}\!\operatorname{HR}(A/R)\) on the real Hochschild homology \(\operatorname{HR}(A/R)\) whose \(n\)th graded piece is_ \[\operatorname{gr}^{n}\!\operatorname{HR}(A/R)\simeq(\iota\wedge_{A}^{n} \mathbb{L}_{A/R})[n\sigma]\] _for every integer \(n\). If \(A\) is a smooth \(R\)-algebra or if \(2\) is invertible in \(R\), then this filtration is complete._ We expect that this theorem generalizes to the case when \(A\) is a commutative \(R\)-algebra with involution and \(2\) is invertible in \(R\), see Remark 4.36 for the details. The next result refines [9, Theorem 6.1]. It is a fundamental ingredient of [37], which establishes motivic filtrations on THH and TCR and discusses their applications to computations of real and hermitian \(K\)-theories assuming the results announced by Harpaz, Nikolaus, and Shah. Recall that any commutative ring \(R\) may be considered as a constant Mackey functor \(\underline{R}\) and thus leads to an equivariant Eilenberg-MacLane spectrum \(\operatorname{H}\underline{R}\). **Theorem 1.2**.: _(see Theorem 5.16) Let \(R\) be a perfectoid ring. Then there is a natural equivalence of normed \(\mathbb{Z}/2\)-spectra_ \[\operatorname{THR}(R;\mathbb{Z}_{p})\simeq T_{\operatorname{H}\underline{R}}( S^{1+\sigma}):=\bigoplus_{n=0}^{\infty}\Sigma^{n+\sigma n}\operatorname{H} \underline{R},\] _where \(T_{\operatorname{H}\underline{R}}(S^{1+\sigma})\) denotes the free associative \(\operatorname{H}\underline{R}\)-algebra on \(S^{1+\sigma}\)._ In particular, for perfectoid rings \(\operatorname{THR}(R)\) is a _very even_\(\mathbb{Z}/2\)-spectrum in the sense explained in Definition 3.9 below. In a previous version, Theorem 5.16 was only proved for \(p\neq 2\), but thanks to [37, Theorem 6.20] we now have it for all primes. We refer to [8, Definition 3.5] for perfectoid rings. By [8, Example 3.15], a perfectoid ring is a generalization of a perfect \(\mathbb{F}_{p}\)-algebra to mixed characteristic. The \(p\)-completion of \(\mathbb{Z}_{p}[\zeta_{p^{\infty}}]\) is an example of a perfectoid ring, see [8, Example 3.6]. In the proof of Theorem 5.16, one of the key properties of perfectoid rings \(R\) we are using is [9, Proposition 4.19(2)]: The \(p\)-completed cotangent complex \((\mathbb{L}_{R/\mathbb{Z}_{p}})_{p}^{\wedge}\) is equivalent to \(R[1]\). To see why inverting \(2\) makes life easier sometimes, observe that taking fixed points is not right exact, and in general produces \(2\)-torsion in cokernels, compare e.g. the computations in Remark 4.28 and in Proposition 2.18. We also point out that Theorem 1.1 above is true without assuming \(2\) invertible also in the non-smooth case when stated in the derived category of \(R\)-modules with involution. This corresponds to the derived evaluation at \((G/e)\) of the modules over the Green functor \(\underline{R}\) considered in the theorem above. This in turn corresponds to considering homotopy fixed points rather than fixed points. These two are often the same (and always after \(2\)-adic completion) for hermitian \(K\)-theory \(K\)-theory, see [7], and it seems reasonable to expect a similar pattern for THR. After this preprint was essentially finished, we discovered that Lucy Yang gave several talks with a "real Hochschild-Kostant Rosenberg theorem" in the title. Also, Angelini-Knoll mentions on his homepage an ongoing project with Hill, Kong, and Quigley on even slices and real syntomic cohomology. 
It will certainly be very interesting and helpful to compare and combine the results and arguments of the current preprint with the forthcoming preprints from all these people. Throughout this article we let \(G=C_{2}=\mathbb{Z}/2\), although some arguments and results on Mackey and Green functors obviously extend to other groups \(G\). We use the notation from [25], and occasionally allow ourselves to switch between point-set/model categorical and \(\infty\)-categorical description, cf. appendix A of loc. cit.. However, following [9] most definitions and results are stated using \(\infty\)-categorical language. We always use homological indexing, which is compatible with simplicial indexing. This is essentially compatible with [9]. Beware however that when working with \(\mathrm{D}(A)\) rather than \(\mathrm{H}\mathbb{Z}\), [9] sometimes switches to cohomological indexing. e.g. when writing "Tor amplitude in \([-1,0]\)". We also refer to [34] for a nice survey from an algebraic topology perspective. From the above discussion, it is clear that we consider THR as the central object of our studies. The real refinement of Hochschild homology, denoted HR, is introduced in Definition 4.1 as something built out of THR. We establish several new results about HR, in particular Theorem 4.30. Still, we admit we are somewhat less interested in HR for its own sake, and HR is often rather a symbol for an object that appears in all kinds of arguments involving inductions and filtrations. Indeed, results and proofs switch frequently forth and back between THR and HR in this article. Our article uses a few results from [9] about the cotangent complex, and also the classical Hochschild-Kostant-Rosenberg theorem. In combination with a computation of THR of a spherical monoid ring, this leads to Lemma 4.27, which is in some sense where the concrete computations start. The article obviously recalls and extends all kinds of technicalities. The end of section 2 makes precise the idea that Mackey functors and derived completion interact nicely. The fact that slices for \(G=C_{2}\) have a very simple form, recalled in Proposition 3.2, is absolutely crucial for us. We need to develop some theory about filtrations to make some inductive arguments running, e.g. in Proposition 5.2. Finally, we need a few results about pseudocoherence in section 5, entering via Lemma 5.7 and then in Lemma 5.12. Another crucial ingredient in [9] is that \(A\to\mathrm{THH}(A)\) is universal among maps of \(E_{\infty}\)-rings with a circle action on the target. This is originally due to McClure-Schwanzl-Vogt, and was reproved in [35, Proposition IV.2.2]. There is a real version of this result in [39, Definition 5.2 and Remark 5.4], but we have not used this in our proofs below. _Acknowledgement_: This research was conducted in the framework of the DFG-funded research training group GRK 2240: _Algebro-Geometric Methods in Algebra, Arithmetic and Topology_. ## 2. Derived categories of Mackey functors For a category \(\mathcal{C}\), let \(\operatorname{Span}(\mathcal{C})\) denote the _category of spans_, see [3, section 5]. The objects of \(\operatorname{Span}(\mathcal{C})\) are the same as the objects of \(\mathcal{C}\), and the morphisms in \(\operatorname{Span}(\mathcal{C})\) have the form of the diagram \(X\gets Y\to Z\) in \(\mathcal{C}\). Let \(\operatorname{Fin}_{\mathrm{B}G}\) denote the category of finite \(G\)-sets. A _Mackey functor_ is a presheaf on \(\operatorname{Span}(\operatorname{Fin}_{\mathrm{B}G})\). See e.g. 
[25, section A.3] for further details, and [25, section A.4] for recollections on Green functors. For an abelian group \(M\) (with or without involution), let \(\underline{M}\) denote the associated Mackey functor, as e.g. in [25, Example 2.2.2]. Similarly, for a commutative ring \(A\) with or without involution, let \(\underline{A}\) denote the associated Green functor. **Remark 2.1**.: Recall [25, Example 2.2.2] that there is a fully faithful embedding from abelian groups with involution to Mackey functors. Looking at the Burnside Mackey functor, which is not a \(\underline{\mathbb{Z}}\)-module, one sees that this embedding is not essentially surjective. This functor extends to an embedding from commutative rings with involution to Green functors, which is fully faithful as well. The two notions of module coincide via this embedding: if \(\underline{M}\) is a Mackey functor and \(\underline{A}\) is a Green functor, then \(\underline{M}\) is an \(\underline{A}\)-module if and only if \(M\) is an \(A\)-module. This follows from Lemma 2.17 in the case without involution on \(R\). Finally, taking some Mackey functor \(\underline{M}\) arising from an abelian group \(\mathrm{M}\) with involution and then adding a copy of \(\mathbb{Z}/2\) to \(M(G/G)\) produces a Mackey functor which does not come from an abelian group (\(=\mathbb{Z}\)-module) with involution, but by Lemma 2.17 still yields a \(\underline{\mathbb{Z}}\)-module, so that there are strictly more \(\underline{\mathbb{Z}}\)-modules than abelian groups with involution. Also recall [25, Lemma 2.2.3]: morphisms on the underlying classical modules uniquely extend to morphisms of modules over Green functors. By Proposition 2.18, this is essentially the only possible difference. **Recollection 2.2**.: For a Green functor \(A\), let \(\operatorname{Mod}_{A}\) be the category of \(A\)-modules, see [27, pp. 62-63] for the notion of modules over Green functors. This is an abelian category, in which cokernel and kernel can be computed pointwise. Let \(\operatorname{Ch}(A):=\operatorname{Ch}(\operatorname{Mod}_{A})\) be the category of chain complexes of \(A\)-modules. For every integer \(n\), we have the homology functor \[\underline{H}_{n}\colon\operatorname{Ch}(A)\to\operatorname{Mod}_{A}\] sending a chain complex \(\cdots\to\mathcal{F}_{n+1}\xrightarrow{d_{n+1}}\mathcal{F}_{n}\xrightarrow{d _{n}}\mathcal{F}_{n-1}\to\cdots\) to \(\ker d_{n}/\operatorname{im}d_{n+1}\). A morphism \(\mathcal{F}\to\mathcal{G}\) in \(\operatorname{Ch}(A)\) is a _quasi-isomorphism_ if \(\underline{H}_{n}(\mathcal{F})\to\underline{H}_{n}(\mathcal{G})\) is an isomorphism of \(A\)-modules for every integer \(n\). Observe that a quasi-isomorphism in \(\operatorname{Ch}(A)\) is a pointwise quasi-isomorphism. Let \(\operatorname{D}(A):=\operatorname{D}(\operatorname{Mod}_{A})\) be the derived \(\infty\)-category of \(\operatorname{Mod}_{A}\), which is obtained by inverting quasi-isomorphisms in \(\operatorname{Ch}(A)\) in the \(\infty\)-categorical sense. For every integer \(n\), we have the homology functor \[\underline{H}_{n}\colon\operatorname{D}(A)\to\operatorname{Mod}_{A}.\] For \(\mathcal{F}\in\operatorname{D}(A)\), we set \[H_{n}(\mathcal{F}):=\underline{H}_{n}(\mathcal{F})(G/G).\] Let \(\tau_{\leq n},\tau_{\geq n}\colon\operatorname{D}(A)\to\operatorname{D}(A)\) denote the truncation functors in the sense of derived \(\infty\)-categories of abelian categories, see [32, Notation 1.2.1.7]. 
Here we use the standard \(t\)-structure on the derived category of an abelian category. Hence \(\mathcal{F}\in\tau_{\leq n}\operatorname{D}(A)\) if and only if \(\underline{H}_{m}(\mathcal{F})=0\) for all \(m>n\), and similarly for \(\tau_{\geq n}\operatorname{D}(A)\). We say that \(\mathcal{F}\) is \((n-1)\)_-connected_ if the induced morphism \(\tau_{\geq n}\mathcal{F}\to\mathcal{F}\) in \(\mathrm{D}(A)\) is an equivalence. Recall that the _Burnside category_\(\mathcal{B}_{G}\) consists of the finite \(G\)-sets, with morphisms given by \[\mathrm{Hom}_{\mathcal{B}_{G}}(X,Y):=\mathrm{Hom}_{\mathrm{Sp}^{G}}(\Sigma^{ \infty}X_{+},\Sigma^{\infty}Y_{+})\] for finite \(G\)-sets \(X\) and \(Y\). We refer to [29, section V.9] for an algebraic description of this category. For a finite \(G\)-set \(X\), we set \(\underline{\mathcal{B}}^{X}:=\mathrm{Hom}_{\mathcal{B}_{G}}(-,X)\). Let \(\square\) denote the box product of Mackey functors, see [25, Definition A.4.1]. The set of objects \(A\,\square\,\underline{\mathcal{B}}^{G/H}[n]\) for all subgroups \(H\) of \(G\) and integers \(n\) generates \(\mathrm{D}(A)\) since \[\mathrm{Hom}_{\mathrm{D}(A)}(A\,\square\,\underline{\mathcal{B}}^{G/H}[n], \mathcal{F})\simeq\underline{H}_{n}(\mathcal{F})(G/H)\] for \(\mathcal{F}\in\mathrm{D}(A)\). By the non-graded versions of [28, Propositions 4.3, 4.4], the abelian category \(\mathrm{Mod}_{A}\) has enough projectives, and every projective \(A\)-module is a direct summand of a product of \(A\)-modules of the form \(A\,\square\,\underline{\mathcal{B}}^{G/H}\), where \(H\) is a subgroup of \(G\). Hence [12, Theorem 2.2] implies that \(\mathrm{Ch}(A)\) admits the projective model structure, where a morphism \(\mathcal{F}\to\mathcal{G}\) in \(\mathrm{Ch}(A)\) is a projective fibration (resp. weak equivalence) if and only if \(\underline{H}_{n}(\mathcal{F})\to\underline{H}_{n}(\mathcal{G})\) is an epimorphism of Mackey functors for every \(n\in\mathbb{Z}\) (resp. a quasi-isomorphism). In particular, every object in \(\mathrm{Ch}(A)\) is projectively fibrant. A (homologically) bounded below complex of projective \(A\)-modules is a cofibrant object in \(\mathrm{Ch}(A)\), see [12, Lemma 2.7(b)]. On the other hand, \(\mathrm{Ch}(A)\) admits the injective model structure by [5, Proposition 3.13], where a morphism \(\mathcal{F}\to\mathcal{G}\) in \(\mathrm{Ch}(A)\) is an injective cofibration if and only if \(\underline{H}_{n}(\mathcal{F})\to\underline{H}_{n}(\mathcal{G})\) is a monomorphism of Mackey functors for every \(n\in\mathbb{Z}\). In particular, every object in \(\mathrm{Ch}(A)\) is injectively cofibrant. Hence the coproduct in \(\mathrm{D}(A)\) can be computed using the coproduct in \(\mathrm{Ch}(A)\). It follows that for every set of objects \(\{\mathcal{F}_{i}\}_{i\in I}\) and integer \(n\), we have a natural isomorphism \[\underline{H}_{n}(\bigoplus_{i\in I}\mathcal{F}_{i})\cong\bigoplus_{i\in I} \underline{H}_{n}(\mathcal{F}_{i}).\] Use this to show that \(A\,\square\,\underline{\mathcal{B}}^{G/H}[n]\) is a compact object of \(\mathrm{D}(A)\) for every subgroup \(H\) of \(G\) and integer \(n\). Since \(\mathrm{Mod}_{A}\) is a symmetric monoidal abelian category, \(\mathrm{D}(A)\) has a natural symmetric monoidal product \(\square_{A}^{\mathrm{L}}\), which is the derived tensor product. Using [32, Lemma 4.1.8.8], we see that \(\square_{A}^{\mathrm{L}}\) preserves colimits in each variable. In particular, \(\square_{A}^{\mathrm{L}}\) is an exact functor. 
**Example 2.3**.: Let \(A\) be a commutative ring, and consider the constant Green functor \(\underline{A}\). Let \(M\) be an \(\underline{A}\)-module. There is a natural isomorphism \[(M\,\square\,\underline{\mathcal{B}}^{X})(Y)\cong M(X\times Y),\] see [28, p. 519]. Use this when \(M=\underline{A}\) to obtain a natural isomorphism \[\underline{A}\,\square\,\underline{\mathcal{B}}^{G/H}\cong\underline{A}^{ \oplus G/H}\] for every subgroup \(H\) of \(G\), where \(A^{\oplus G/H}\) is the \(A\)-module with the \(G\)-action that permutes the coordinates, and \(\underline{A}^{\oplus G/H}\) is the associated \(\underline{A}\)-module. **Lemma 2.4**.: _Let \(F\colon\mathcal{C}\to\mathcal{D}\) be a colimit preserving functor of stable \(\infty\)-categories that admit colimits, and let \(S\) be a set of compact objects of \(\mathcal{C}\) that generates \(\mathcal{C}\). If \(F\) sends every object of \(S\) to a compact object of \(\mathcal{D}\), then a right adjoint \(G\) of \(F\) preserves colimits._ Proof.: This is well-known. By [32, Proposition 1.4.4.1(2)], it suffices to show that \(G\) preserves sums. To finish the proof, use the property that the functor corepresented by a compact object preserves coproducts. **Construction 2.5**.: Let \(A\) be a commutative ring. There is an adjunction \[\iota:\operatorname{Mod}_{A}\rightleftarrows\operatorname{Mod}_{\underline {A}}:(-)^{G}\] such that \(\iota M:=\underline{M}\) for every \(A\)-module \(M\) and \(\underline{L}^{G}:=\underline{L}(G/G)\) for every \(\underline{A}\)-module \(\underline{L}\). We obtain the induced adjunction \[\iota:\operatorname{Ch}(A)\rightleftarrows\operatorname{Ch}(\underline{A}):( -)^{G}.\] Using the description of the model structure on \(\operatorname{Ch}(\underline{A})\) in Recollection 2.2, we see that \((-)^{G}\) is a right Quillen functor. Hence we obtain the induced adjunction \[\iota:\operatorname{D}(A)\rightleftarrows\operatorname{D}(\underline{A}):( -)^{G}.\] **Construction 2.6**.: Let \(A\) be a commutative ring. Then the change of groupoids \((i_{\sharp},i^{*})\) with respect to \(i:*\to BG\), see e.g. [25, section A.1], induces an adjunction \[i_{\sharp}:\operatorname{Mod}_{A}\rightleftarrows\operatorname{Mod}_{ \underline{A}}:i^{*}\] such that \(i_{\sharp}M:=\underline{M}^{\oplus G}\) for every \(A\)-module \(M\) and \(i^{*}\underline{L}:=\underline{L}(G/e)\) for every \(\underline{A}\)-module \(\underline{L}\). We obtain the induced adjunction \[i_{\sharp}:\operatorname{Ch}(A)\rightleftarrows\operatorname{Ch}(\underline{A} ):i^{*}.\] Using the description of the model structure on \(\operatorname{Ch}(\underline{A})\) in Recollection 2.2, we see that \(i^{*}\) is a right Quillen functor. Hence we obtain the induced adjunction \[i_{\sharp}:\operatorname{D}(A)\rightleftarrows\operatorname{D}(\underline{A} ):i^{*}.\] Use Lemma 2.4 to see that \(i^{*}\) is colimit preserving. **Construction 2.7**.: Let \((\operatorname{Fin}_{\mathbb{BZ}/2})_{*}\) denote the category of pointed finite \(\mathbb{Z}/2\)-sets, and let \(A\) be a Green functor. Consider the functor \[\alpha^{*}\colon(\operatorname{Fin}_{\mathbb{BZ}/2})_{*}\to\operatorname{D}(A)\] sending a pointed finite \(\mathbb{Z}/2\)-set \((V,p)\) to \((A\,\square\,\underline{\mathcal{B}}^{V})/(A\,\square\,\underline{\mathcal{B} }^{\{p\}})\). This functor sends finite coproducts to finite direct sums and is monoidal if the monoidal structure on \((\operatorname{Fin}_{\mathbb{BZ}/2})_{*}\) is given by the smash product. 
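To unwind the isomorphism in Example 2.3 in the simplest nontrivial case (a check of ours, using only the displayed formula and additivity of Mackey functors), take \(X=\mathbb{Z}/2\) with its free action: \[(\underline{A}\,\square\,\underline{\mathcal{B}}^{\mathbb{Z}/2})(G/G)\cong\underline{A}(\mathbb{Z}/2\times G/G)=\underline{A}(G/e)=A,\qquad(\underline{A}\,\square\,\underline{\mathcal{B}}^{\mathbb{Z}/2})(G/e)\cong\underline{A}(\mathbb{Z}/2\times G/e)=\underline{A}(G/e\sqcup G/e)=A\oplus A.\] In other words, the functors \((-)^{G}\) and \(i^{*}\) of Constructions 2.5 and 2.6 pick out a single copy of \(A\) and the permutation module \(A^{\oplus\mathbb{Z}/2}\), respectively, consistent with the stated isomorphism \(\underline{A}\,\square\,\underline{\mathcal{B}}^{\mathbb{Z}/2}\cong\underline{A}^{\oplus\mathbb{Z}/2}\).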
Let \((\operatorname{Spc}^{\mathbb{Z}/2})_{\bullet}\) denote the category of pointed \(\mathbb{Z}/2\)-spaces, which is equivalent to \(\mathcal{P}_{\Sigma}((\operatorname{Fin}_{\mathbb{BZ}/2})_{*})\), see [2, Lemma 2.2, section 9.2]. Hence we have the induced symmetric monoidal functor \[\alpha^{*}\colon(\operatorname{Spc}^{\mathbb{Z}/2})_{\bullet}\to\operatorname{ D}(A).\] Let \(S^{\sigma}\) be the \(\mathbb{Z}/2\)-space \(S^{1}\) with the \(\mathbb{Z}/2\)-action given by \((x,y)\in S^{1}\subset\mathbb{R}^{2}\mapsto(x,-y)\). Observe that \(S^{\sigma}\) is equivalent to the homotopy cofiber of the equivariant map \((\mathbb{Z}/2)_{+}\to*_{+}\), where as usual \(\mathbb{Z}/2\) is considered with the non-trivial involution. Hence \(\alpha^{*}\) sends \(S^{\sigma}\) to the complex \[A[\sigma]:=[A\,\square\,\underline{\mathcal{B}}^{\mathbb{Z}/2}\xrightarrow{f} A\to 0],\] where \(A\) sits in degree \(0\). It admits a monoidal inverse \[A[-\sigma]:=[0\to A\xrightarrow{g}A\,\square\,\underline{\mathcal{B}}^{ \mathbb{Z}/2}],\] where \(A\) sits in degree \(0\). If \(A=\underline{R}\) for some commutative ring \(R\) with the trivial involution, then \(f\) becomes the summation \(\underline{R}^{\oplus\mathbb{Z}/2}\to\underline{R}\), as without involutions the two composite morphisms \(R\rightrightarrows R\oplus R\to R\) with the two inclusions from the summands correspond to the two composition maps \(*\rightrightarrows\mathbb{Z}/2\to*\). Also, \(g\) becomes the diagonal morphism \(\underline{R}\to\underline{R}^{\oplus^{\mathbb{Z}/2}}\). Together with [40, Proposition 2.9(1)] and the \(\infty\)-categorical construction of \(\operatorname{Sp}^{\mathbb{Z}/2}\) in [2, section 9.2], we obtain a colimit preserving symmetric monoidal functor \[\alpha^{*}\colon\operatorname{Sp}^{\mathbb{Z}/2}\to\operatorname{D}(A)\] sending \(\Sigma^{\infty}(V,p)\) to \((A\,\square\,\underline{\mathcal{B}}^{V})/(A\,\square\,\underline{\mathcal{B} }^{\{p\}})\) for every pointed finite \(\mathbb{Z}/2\)-set \((V,p)\). Let \(\alpha_{*}\) be a right adjoint of \(\alpha^{*}\). **Definition 2.8**.: Let \(A\) be a Green functor. Recall that \(\square_{A}^{\mathbb{L}}\) denotes the derived tensor product in \(\operatorname{D}(A)\). For \(\mathcal{F}\in\operatorname{D}(A)\) and integers \(m\) and \(n\), we have the _equivariant shift_ \[\mathcal{F}[m+n\sigma]:=\mathcal{F}[m]\,\square_{A}^{\mathbb{L}}(A[\sigma])^{ \square_{A}^{\mathbb{L}}\,n}.\] If \(n=0\), then this is the usual shift. We set \[\underline{H}_{m+n\sigma}(\mathcal{F}):=\underline{H}_{m}(\mathcal{F}[-n\sigma ])\text{ and }H_{m+n\sigma}(\mathcal{F}):=H_{m}(\mathcal{F}[-n\sigma]).\] **Remark 2.9**.: Let \(A\) be a Green functor. Recall that the family of \(A[n]\) and \(A\,\square\,\underline{\mathcal{B}}^{\mathbb{Z}/2}[n]\) for integers \(n\) generates \(\operatorname{D}(A)\). Using the cofiber sequence \(A\,\square\,\underline{\mathcal{B}}^{\mathbb{Z}/2}\to A\to A[\sigma]\), we see that the family of \(A[n]\) and \(A[n+\sigma]\) for integers \(n\) also generates \(\operatorname{D}(A)\). Let \(\operatorname{Sp}^{G}\) denote the \(\infty\)-category of \(G\)-spectra. We have the fixed point functor \((-)^{G}\colon\operatorname{Sp}^{G}\to\operatorname{Sp}\), which admits a left adjoint \(\iota\). We have the forgetful functor \(i^{*}\colon\operatorname{Sp}^{G}\to\operatorname{Sp}\), which admits a left adjoint \(i_{\sharp}\). 
We refer to [25, appendix A] for a review of equivariant stable homotopy theory including the details about the functors \(\iota\), \((-)^{G}\), \(i_{\sharp}\), and \(i^{*}\). For the next results, recall also the functors in Constructions 2.5 and 2.6. **Proposition 2.10**.: _Let \(A\) be a commutative ring. Then there are commutative squares_ Proof.: There are equivalences \(\iota\alpha^{*}\mathbb{S}\simeq\underline{A}\simeq\alpha^{*}\iota\mathbb{S}\) and \(i_{\sharp}\alpha^{*}\mathbb{S}\simeq\underline{A}^{\oplus G}\simeq\alpha^{*}i _{\sharp}\mathbb{S}\). By [32, Corollary 1.4.4.6], we obtain the desired commutative squares. For a symmetric monoidal \(\infty\)-category and its commutative algebra object \(R\), let \(\operatorname{Mod}_{R}:=\operatorname{Mod}_{R}(\mathcal{C})\) denote the \(\infty\)-category of \(R\)-modules. For a Mackey functor \(M\), let \(\operatorname{H}\!M\) denote the equivariant Eilenberg-MacLane spectrum. For a Green functor \(A\), we can regard \(\operatorname{H}\!A\) as a commutative algebra object of \(\operatorname{Sp}^{\mathbb{Z}/2}\), compare [25, section A.4]. For an \(A\)-module \(M\), we can regard \(\operatorname{H}\!M\) as an \(\operatorname{H}\!A\)-module object of \(\operatorname{Sp}^{\mathbb{Z}/2}\). See [25, sections A.3, A.4] for a review. The following is an equivariant refinement of the stable Dold-Kan correspondence [42, Theorem 5.1.6] proved by Schwede-Shipley. In the case of the Burnside Mackey functor, the following is due to Patchkoria-Sanders-Wimmer [38, Theorem 5.10]. **Proposition 2.11**.: _Let \(A\) be a Green functor, e.g. \(A\) a commutative ring with possibly non-trivial involution. Then the functor \(\mathrm{H}\) above induces an equivalence of symmetric monoidal stable \(\infty\)-categories_ \[\mathrm{Mod}_{\mathrm{H}A}\simeq\mathrm{D}(A).\] Proof.: Let \(\mathcal{F}\) be the family of \(\Sigma^{n}\mathbb{S}\) and \(\Sigma^{n}\Sigma^{\infty}(\mathbb{Z}/2)_{+}\) for all integers \(n\). Observe that \(\mathcal{F}\) is a set of compact objects of \(\mathrm{Sp}^{\mathbb{Z}/2}\) that generates \(\mathrm{Sp}^{\mathbb{Z}/2}\), see [25, Proposition A.1.6]. As the images of all the objects in \(\mathcal{F}\) under \(\alpha^{*}\) are obviously compact, Lemma 2.4 implies that a right adjoint \(\alpha_{*}\) to \(\alpha^{*}\colon\mathrm{Sp}^{\mathbb{Z}/2}\to\mathrm{D}(A)\) preserves colimits. We claim that \(\alpha_{*}\) is conservative. For this, it suffices to show that the family of functors \(\mathrm{Map}_{\mathrm{Sp}^{\mathbb{Z}/2}}(X,\alpha_{*}(-))\) for \(X\in\mathcal{F}\) is conservative. This holds by adjunction since \(\alpha^{*}\) sends \(\mathcal{F}\) to a set of generators of \(\mathrm{D}(A)\). The canonical morphism \[X\wedge\alpha_{*}Y\to\alpha_{*}(\alpha^{*}X\,\square^{\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof.: We observed that the functor \(\alpha_{*}\colon\operatorname{D}(\underline{A})\to\operatorname{Sp}^{\mathbb{Z}/2}\) is conservative in the proof of Proposition 2.11. Hence it suffices to show that the pair of functors \(\alpha_{*}i^{*},\alpha_{*}(-)^{\mathbb{Z}/2}\colon\operatorname{D}(\underline{A })\to\operatorname{Sp}\) is conservative. Proposition 2.10 yields natural equivalences \(\alpha_{*}i^{*}\simeq i^{*}\alpha_{*}\) and \(\alpha_{*}(-)^{\mathbb{Z}/2}\simeq(-)^{\mathbb{Z}/2}\alpha_{*}\). To conclude, observe that the pair of functors \(i^{*},(-)^{\mathbb{Z}/2}\colon\operatorname{Sp}^{\mathbb{Z}/2}\to\operatorname {Sp}\) is conservative and \(\alpha_{*}\colon\operatorname{D}(A)\to\operatorname{Sp}\) is conservative. For an abelian group \(A\), the constant Mackey functor \(\underline{A}\) can be described as **Lemma 2.16**.: _Let \(M\) be a Mackey functor, and let \(A\) be an abelian group. Then the box product \(M\operatorname{\square}\underline{A}\) can be described as_ \[\begin{array}{c}M(C_{2}/e)\otimes A\xxrightleftharpoons{\operatorname{ tr}\otimes\operatorname{id}}(M(C_{2}/C_{2})\otimes A)/I,\\ \Updownarrow\\ \end{array}\] _where \(I\) is the ideal generated by \((\operatorname{tr}(\operatorname{res}(x))-2)\otimes z\) for \(x\in\underline{M}(C_{2}/C_{2})\) and \(z\in A\)._ Proof.: By [43, Lemma 2.46], \((M\operatorname{\square}\underline{A})(C_{2}/C_{2})\) is the quotient of \[(M(C_{2}/C_{2})\otimes A)\oplus(M(C_{2}/e)\otimes A)\] by the ideal \(J\) generated by \[(x\otimes\operatorname{tr}(z),0)-(0,\operatorname{res}(x)\otimes z),\] \[(\operatorname{tr}(y)\otimes z,0)-(0,y\otimes\operatorname{res}(z)),\] \[(0,w(y)\otimes z)-(0,y\otimes z)\] for \(x\in M(C_{2}/C_{2})\), \(y\in M(C_{2}/e)\), and \(z\in A\). Recall that we have \(\operatorname{res}(z)=z\) and \(\operatorname{tr}(z)=2z\). The third relation follows from the second since \[(0,w(y)\otimes z)=(\operatorname{tr}(w(y))\otimes z,0)=(\operatorname{tr}(y) \otimes z,0)=(0,y\otimes z).\] We also have \[(x\otimes 2z,0)=(0,\operatorname{res}(x)\otimes z)=(\operatorname{tr}( \operatorname{res}(x))\otimes z,0).\] Hence we obtain the isomorphism \[((M(C_{2}/C_{2})\otimes A)\oplus(M(C_{2}/e)\otimes A))/J\to(M(C_{2}/C_{2}) \otimes A)/I\] induced by \(\operatorname{id}\otimes\operatorname{id}\) on the left summand and by \(\operatorname{id}\otimes\operatorname{tr}\) on the right summand. The description of the transition, restriction, and involution for \(M\operatorname{\square}\underline{A}\) is also due to loc. cit. We considered modules over arbitrary Green functors in Recollection 2.2. In the special case of constant Green functors, these can be described as follows. The next Lemma shows in particular that a Mackey functor is a module over \(\underline{\mathbb{Z}}\) if and only if \(\operatorname{tr}\circ\operatorname{res}=2\). **Lemma 2.17**.: _Let \(R\) be a commutative ring. Then the objects of the category of \(R\)-modules are precisely those Mackey functors \(M\) for which \(M(C_{2}/e)\) and \(M(C_{2}/C_{2})\) are \(R\)-modules, \(\operatorname{res}\), \(\operatorname{tr}\), and \(w\) are \(R\)-linear, and \(\operatorname{tr}\circ\operatorname{res}=2\). The morphisms in the category of \(R\)-modules are the pointwise \(R\)-linear morphisms of Mackey functors._ Proof.: Let \(M\) be an \(\underline{R}\)-module. We have the module structure morphism \(\underline{R}\,\square\,M\to M\) of Mackey functors. 
The relation \(\operatorname{tr}\circ\operatorname{res}=2\) holds for \(\underline{R}\,\square\,M\) by Lemma 2.16. This implies that the relation \(\operatorname{tr}\circ\operatorname{res}=2\) still holds for \(M\) since the structure morphism \(\underline{R}\,\square\,M\to M\) is an epimorphism. With this in hand, we see that the box product \(\underline{R}\,\square\,M\) can be described as \[\underline{R}\,\otimes\,M(C_{2}/e)\,\xrightarrow[\operatorname{id\otimes tr} ]{\operatorname{id\otimes res}}\,\underline{R}\,\otimes\,M(C_{2}/C_{2})\] by Lemma 2.16 again. Then we can interpret the module axioms for \(\underline{R}\,\square\,M\to M\) as what is written in the statement. Recall [25, Lemma 2.2.3] that the functor \(u\) from the category of \(R\)-modules with involutions to the category of \(\underline{R}\)-modules given by \(M\mapsto\underline{M}\) is right adjoint to the evaluation functor \((C_{2}/e)\). We discuss some subtleties for this adjunction at the end of Remark 4.28. Also note that the image of \(u\) only contains objects which are fixed under the projector \(\mathcal{P}^{0}\). If we invert \(2\), everything become much easier: **Proposition 2.18**.: _Let \(R\) be a commutative ring such that \(2\) is invertible in \(R\). Then the functor \(u\) above is an equivalence of categories._ Proof.: Let \(M\) be an \(\underline{R}\)-module. We have \(\operatorname{res}\circ\operatorname{tr}=1+w\). By Lemma 2.17, we also have \(\operatorname{tr}\circ\operatorname{res}=2\). Let \(M^{\prime}\) be the submodule of \(M(C_{2}/e)\) consisting of the elements \(a\) satisfying \(w(a)=a\). When we restrict \(\operatorname{res}\) and \(\operatorname{tr}\) to \(M^{\prime}\), we have \(\operatorname{res}\circ\operatorname{tr}=2\) and \(\operatorname{tr}\circ\operatorname{res}=2\). Together with the assumption that \(2\) is invertible in \(R\), we see that \(\operatorname{res}\colon M(C_{2}/C_{2})\to M(C_{2}/e)\) is injective and its image is precisely \(M^{\prime}\). This description means \(M\simeq\underline{L}\), where \(L\) is the \(R\)-module \(M(C_{2}/e)\) with the involution \(w\). Hence \(u\) is essentially surjective. Using [25, Lemma 2.2.3], we deduce that \(u\) is fully faithful. Let \(A\) be a commutative ring. We refer to [44, section 15.91] and the references given there for various equivalent descriptions of derived completion. The _derived \(p\)-completion of \(\mathcal{F}\in\operatorname{D}(A)\)_ is \[\mathcal{F}_{p}^{\wedge}:=\mathbb{R}\lim_{n}\mathcal{F}\otimes_{A}^{\mathbb{L} }A/p^{n}A,\] where \(A/p^{n}A\) means \(\operatorname{cofib}(p^{n}\colon A\to A)\). We often omit the \(\mathbb{L}\) and \(\mathbb{R}\)-decorations in \(\mathbb{R}\lim\) and \(\otimes^{\mathbb{L}}\) when it is clear from the context that we are dealing with a derived limit and a derived tensor product. If \(\mathcal{F}\) is concentrated in degree \(0\), then the derived \(p\)-completion of \(\mathcal{F}\) is different from the classical \(p\)-completion \(\lim_{n}\mathcal{F}\otimes_{A}A/p^{n}A\) in general. We now study derived \(p\)-completion in more detail, following [33]. **Definition 2.19**.: Let \(A\) be a Green functor. The category \(\operatorname{Mod}_{A}\) of \(A\)-modules is \(\mathbb{Z}\)-linear, i.e., an additive category. Hence the stable \(\infty\)-category \(\operatorname{D}(A)\) is \(\mathbb{Z}\)-linear in the sense of [33, Definition D.1.2.1]. 
We set \[A[1/p]:=\operatorname{colim}(A\xrightarrow[]{p}A\xrightarrow[]{p}\cdots)\] in \(\operatorname{D}(A)\), which is a Green functor concentrated in degree \(0\). Following [33, Definitions 7.1.1.1, 7.2.4.1, 7.3.1.1], we say that \(\mathcal{F}\in\operatorname{D}(A)\) is 1. _derived_ \(p\)_-nilpotent_ if \(\mathcal{F}\,\square_{A}^{\mathbb{L}}\,A[1/p]\simeq 0\), 2. _derived_ \(p\)_-local_ if \(\operatorname{Map}_{\operatorname{D}(A)}(\mathcal{G},\mathcal{F})\) is contractible for every derived \(p\)-nilpotent object \(\mathcal{G}\) of \(\operatorname{D}(A)\), 3. _derived_ \(p\)_-complete_ if \(\operatorname{Map}_{\operatorname{D}(A)}(\mathcal{G},\mathcal{F})\) is contractible for every derived \(p\)-local object \(\mathcal{G}\) of \(\operatorname{D}(A)\). Let \(\operatorname{D}_{p-\operatorname{comp}}(A)\) denote the full subcategory of \(\operatorname{D}(A)\) spanned by the derived \(p\)-complete objects. We have the localization functor \[(-)_{p}^{\wedge}\colon\operatorname{D}(A)\to\operatorname{D}_{p-\operatorname {comp}}(A)\] which by definition is left adjoint to the inclusion functor \(\operatorname{D}_{p-\operatorname{comp}}(A)\to\operatorname{D}(A)\), and hence \((-)_{p}^{\wedge}\) preserves colimits. Assume that \(A=\underline{R}\) for some commutative ring \(R\). By [33, Corollary 7.3.2.2], \(\mathcal{F}\in\operatorname{D}(\underline{R})\) is derived \(p\)-complete if it is a local object in the sense of [31, Definition 5.5.4.1] with respect to the class of morphisms \(\underline{R[1/p][n]\to 0}\) and \(\underline{R[1/p]^{\oplus 2\mathbb{Z}/2}[n]\to 0}\) for all integers \(n\). **Proposition 2.20**.: _Let \(A\) be a commutative ring. Then there are natural equivalences_ \[(\iota\mathcal{F})_{p}^{\wedge}\simeq\iota(\mathcal{F}_{p}^{\wedge}),\;(i_{ \sharp}\mathcal{F})_{p}^{\wedge}\simeq i_{\sharp}(\mathcal{F}_{p}^{\wedge}), \text{ and }(i^{*}\mathcal{G})_{p}^{\wedge}\simeq i^{*}(\mathcal{G}_{p}^{ \wedge})\] _for \(\mathcal{F}\in\operatorname{D}(A)\) and \(\mathcal{G}\in\operatorname{D}(\underline{A})\)._ Proof.: Observe that \(\iota\) sends \(A[1/p]\) to \(\underline{A}[1/p]\), \(i_{\sharp}\) sends \(A[1/p]\) to \(\underline{A}^{\oplus\mathbb{Z}/2}[1/p]\), and \(i^{*}\) sends \(\underline{A}[1/p]\) to \(A[1/p]\). Using the universal property of the localization [31, Proposition 5.5.4.20], we obtain the desired equivalences. For a Green functor \(A\) and integer \(m\geq 1\), the notation \((-)\operatorname{\square}_{A}^{\mathbb{L}}A/mA\) means the cofiber of the multiplication morphism \(m\colon(-)\operatorname{\square}_{A}^{\mathbb{L}}A\to(-)\operatorname{ \square}_{A}^{\mathbb{L}}A\). **Proposition 2.21**.: _Let \(A\) be a Green functor, and let \(\mathcal{F}\) be an object of \(\operatorname{D}(A)\)._ 1. _We have equivalences_ \[\mathcal{F}_{p}^{\wedge}\simeq\operatorname{cofib}((\lim(\cdots\xrightarrow{p }\mathcal{F}\xrightarrow{p}\mathcal{F})\to\mathcal{F})\simeq\lim_{n}\mathcal{ F}\operatorname{\square}_{A}^{\mathbb{L}}A/p^{n}A\] _in_ \(\operatorname{D}(A)\)_. Hence the derived_ \(p\)_-completion_ \(\mathcal{F}_{p}^{\wedge}\) _is the pointwise derived_ \(p\)_-completion if_ \(A=\underline{R}\) _for some commutative ring_ \(R\)_._ 2. \(\operatorname{fib}(\mathcal{F}\to\mathcal{F}_{p}^{\wedge})\) _is derived_ \(p\)_-local._ 3. \(\mathcal{F}\) _is derived_ \(p\)_-local if and only if_ \(\mathcal{F}_{p}^{\wedge}\simeq 0\)_._ 4. 
\(\mathcal{F}\) _is derived_ \(p\)_-complete if and only if the induced morphism_ \(\mathcal{F}\to\mathcal{F}_{p}^{\wedge}\) _in_ \(\operatorname{D}(A)\) _is an equivalence._ 5. _If_ \(\mathcal{F}\) _is derived_ \(p\)_-local, then_ \(\mathcal{F}[m+n\sigma]\) _is derived_ \(p\)_-local for all integers_ \(m\) _and_ \(n\)_._ 6. _If_ \(\mathcal{F}\) _is derived_ \(p\)_-local and_ \(m\) _is an integer coprime to_ \(p\)_, then the multiplication morphism_ \(m\colon\mathcal{F}\to\mathcal{F}\) _is an equivalence._ 7. _If_ \(A=\underline{R}\) _for some commutative ring_ \(R\)_, then_ \(\mathcal{F}\) _of_ \(\operatorname{D}(\underline{R})\) _is derived_ \(p\)_-complete if and only if_ \(i^{*}\mathcal{F}\) _and_ \(\mathcal{F}^{\mathbb{Z}/2}\) _in_ \(\operatorname{D}(R)\) _are derived_ \(p\)_-complete._ Proof.: The first equivalence in (1) is a consequence of [33, Proposition 7.3.2.1], and the second uses that \(\lim\) and cofib commute in the stable setting. (2) and (3) are consequences of [33, Remark 7.2.0.2, Proposition 7.3.1.4]. (4) holds since \((-)_{p}^{\wedge}\) is a localization functor in the sense of [31, Definition 5.2.7.2]. (5) is a consequence of (1) and (3). (6) For every integer \(n\geq 1\), using that \(m\) and \(p^{n}\) are coprime, we see that the multiplication \(m\colon A/p^{n}A\to A/p^{n}A\) in \(\operatorname{D}(A)\) is an equivalence. Take \(\lim_{n}(\mathcal{F}\,\square^{\mathbb{L}}_{A}(-))\) and use (1) and (4) to deduce the claim. (7) Since \(i_{\sharp}(R[1/p])\simeq\underline{R}[1/p]^{\oplus\mathbb{Z}/2}\) and \(\iota(R[1/p])\simeq\underline{R}[1/p]\), we see that \(\mathcal{F}\) is derived \(p\)-complete if and only if \[\operatorname{Hom}_{\operatorname{D}(\underline{R})}(i_{\sharp}(R[1/p])[n], \mathcal{F})=\operatorname{Hom}_{\operatorname{D}(\underline{R})}(\iota(R[1/ p])[n],\mathcal{F})=0\] for every integer \(n\). This implies the claim. **Proposition 2.22**.: _Let \(A\) be a Green functor. If \(\mathcal{F}\) is a derived \(p\)-local object of \(\operatorname{D}(A)\), then \(\mathcal{F}\,\square^{\mathbb{L}}_{A}\,\mathcal{G}\) is derived \(p\)-local for \(\mathcal{G}\in\operatorname{D}(A)\)._ Proof.: The class \(\mathcal{S}\) of those objects \(\mathcal{G}\) for which \(\mathcal{F}\,\square^{\mathbb{L}}_{A}\,\mathcal{G}\) is derived \(p\)-local is closed under colimits by [33, Proposition 7.2.4.9(2)] and under equivariant shifts by Proposition 2.21(5). Furthermore, \(\mathcal{S}\) contains \(A\). Using Remark 2.9, it follows that \(\mathcal{S}\) contains every object of \(\operatorname{D}(A)\). **Proposition 2.23**.: _Let \(A\) be a Green functor, and let \(\mathcal{F}\to\mathcal{F}^{\prime}\) be a morphism in \(\operatorname{D}(A)\). If the induced morphism \(\mathcal{F}^{\wedge}_{p}\to\mathcal{F}^{\prime\wedge}_{p}\) in \(\operatorname{D}(A)\) is an equivalence, then the induced morphism_ \[(\mathcal{F}\,\square^{\mathbb{L}}_{A}\,\mathcal{G})^{\wedge}_{p}\to(\mathcal{ F}^{\prime}\,\square^{\mathbb{L}}_{A}\,\mathcal{G})^{\wedge}_{p}\] _in \(\operatorname{D}(A)\) is an equivalence for \(\mathcal{G}\in\operatorname{D}(A)\)._ Proof.: Using Proposition 2.21(2), we see that the fiber of \(\mathcal{F}\to\mathcal{F}^{\prime}\) is derived \(p\)-local. Hence the fiber of \(\mathcal{F}\,\square^{\mathbb{L}}_{A}\,\mathcal{G}\to\mathcal{F}^{\prime}\, \square^{\mathbb{L}}_{A}\,\mathcal{G}\) is derived \(p\)-local by Proposition 2.22. Proposition 21(3) finishes the proof. **Proposition 2.24**.: _Let \(A\) be a Green functor. 
Then there exists a unique symmetric monoidal structure on \(\operatorname{D}_{p-\operatorname{comp}}(A)\) such that \((-)^{\wedge}_{p}\) is monoidal._ Proof.: As in [33, Corollary 7.3.5.2], this is a consequence of [32, Proposition 2.2.1.9] and Proposition 2.23. The next result will be useful when proving Theorem 5.17. **Proposition 2.25**.: _Let \(A\) be a Green functor, let \(B\) be a commutative algebra object of \(\operatorname{D}(A)\). For \(B\)-modules \(M\) and \(N\), the induced morphism_ \[(M\,\square^{\mathbb{L}}_{B}\,N)^{\wedge}_{p}\to(M^{\wedge}_{p}\,\square^{ \mathbb{L}}_{B^{\wedge}_{p}}\,N^{\wedge}_{p})^{\wedge}_{p}\] _in \(\operatorname{Mod}_{B}\) is an equivalence._ Proof.: The description of the monoidal structure on \(\infty\)-categories of modules in [32, section 4.4.1, Theorem 4.5.2.1(2)] yields a natural equivalence \[M\,\square^{\mathbb{L}}_{B}\,N\simeq\operatorname{colim}\big{(}\cdots\overset{ \rightarrow}{\rightarrow}\!M\,\square^{\mathbb{L}}_{A}\,B\,\square^{\mathbb{L} }_{A}\,N^{\rightarrow}_{\rightarrow}\!M\,\square^{\mathbb{L}}_{A}\,N\big{)}\] in \(\operatorname{Mod}_{B}\), where the colimit is obtained by the two-sided bar construction. Take \((-)^{\wedge}_{p}\) on both sides, permute colim and \((-)^{\wedge}_{p}\), and use Proposition 2.23 to obtain a natural equivalence \[(M\,\square^{\mathbb{L}}_{B}\,N)^{\wedge}_{p}\simeq\big{(}\operatorname{colim} \big{(}\cdots\overset{\rightarrow}{\rightarrow}\!M^{\wedge}_{p}\,\square^{ \mathbb{L}}_{A}\,B^{\wedge}_{p}\,\square^{\mathbb{L}}_{A}\,N^{\wedge}_{p} \overset{\rightarrow}{\rightarrow}\!M^{\wedge}_{p}\,\square^{\mathbb{L}}_{A}\,N ^{\wedge}_{p}\big{)}\big{)}^{\wedge}_{p}\] in \(\operatorname{Mod}_{B}\). The latter one is equivalent to \((M^{\wedge}_{p}\,\square^{\mathbb{L}}_{B^{\wedge}_{p}}\,N^{\wedge}_{p})^{\wedge}_ {p}\). ## 3. Recollection on slices in stable equivariant homotopy theory Hill-Hopkins-Ravenel [21, section 4] introduces a slice filtration. Ullman [45] suggests a slightly modified version, also considered in [22], which we now briefly recall. **Remark 3.1**.: Both [21] and [22] yield the same \(\geq 0\). Comparing [21, 4.48] and [22, 11.1.18], we see that \(\geq 1\) in [21] is a weaker condition than the one in [22]. We also see that they have different \(0\)-slice by comparing [21, 4.50] with [22, 11.1.45]. Hence the \(\leq 0\) differs as well, the condition of [21] is stronger than the one in [22, 11.1.18 (iv)]. For every integer \(n\), let \(\operatorname{Sp}_{\geq n}^{\mathbb{Z}/2}\) be the localizing (in the sense of [22, Definition 6.3.12], in particular not triangulated) subcategory of \(\operatorname{Sp}^{\mathbb{Z}/2}\) generated by the set \[\mathcal{S}_{n}:=\{\Sigma^{m+m\sigma}\mathbb{S}:2m\geq n\}\cup\{\Sigma^{m}( \mathbb{S}^{\oplus\mathbb{Z}/2}):m\geq n\},\] where the involution on \(\mathbb{S}^{\oplus\mathbb{Z}/2}\) permutes the coordinates. This leads to the slice filtration \[\cdots\to P_{n}\to P_{n-1}\to\cdots\] of endofunctors on \(\operatorname{Sp}^{\mathbb{Z}/2}\) with layers \(P_{n}^{n}:=\operatorname{cofib}(P_{n+1}\to P_{n})\), see e.g. [22, section 11.1.E] for the details. We set \(P^{n}:=\operatorname{cofib}(P_{n+1}\to\operatorname{id})\). Let \(\operatorname{Sp}_{\leq n}^{\mathbb{Z}/2}\) be the full subcategory of \(\operatorname{Sp}^{\mathbb{Z}/2}\) spanned by the objects \(X\) such that \(\operatorname{Hom}_{\operatorname{Sp}^{\mathbb{Z}/2}}(Y,X)=0\) for every \(Y\in\mathcal{S}_{n}\). 
For a \(\mathbb{Z}/2\)-spectrum \(X\) and integers \(m\) and \(n\), we set \[\pi_{m+n\sigma}(X):=\pi_{0}((\Sigma^{-m-n\sigma}X)^{\mathbb{Z}/2 }),\] \[\underline{\pi}_{m+n\sigma}(X):=\underline{\pi}_{m}(\Sigma^{-n \sigma}X)\cong\underline{\pi}_{0}(\Sigma^{-m-n\sigma}X)\in\operatorname{ Mack}_{\mathbb{Z}/2},\] compare also Definition 2.8. The notation \(\underline{\pi}_{n}\) appears in [21, section 3]. The following is the key computational result for the equivariant slices. **Proposition 3.2**.: _Let \(X\) be a \(\mathbb{Z}/2\)-spectrum. Then we have equivalences_ \[P_{2n}^{2n}(X)\simeq\Sigma^{n+n\sigma}\mathrm{H}\underline{\pi} _{n+n\sigma}(X),\] \[P_{2n+1}^{2n+1}(X)\simeq\Sigma^{n+1+n\sigma}\mathrm{H}\mathcal{P} ^{0}\underline{\pi}_{n+1+n\sigma}(X)\] _for all integers \(n\), where \(\mathcal{P}^{0}\) is the endofunctor of \(\operatorname{Mack}_{\mathbb{Z}/2}\) killing the kernel of the restriction._ Proof.: We refer to [20, Theorem 17.5.25]. For any \(\mathbb{Z}/2\)-spectrum \(X\) and integer \(n\), we set \[\rho_{2n}(X):=\underline{\pi}_{n+n\sigma}(X),\] \[\rho_{2n+1}(X):=\mathcal{P}^{0}\underline{\pi}_{n+1+n\sigma}(X)\] so that we have \(P_{2n}^{2n}(X)\simeq\Sigma^{n+n\sigma}\mathrm{H}\rho_{2n}(X)\) and \(P_{2n+1}^{2n+1}(X)\simeq\Sigma^{n+1+n\sigma}\mathrm{H}\rho_{2n+1}(X)\). We now introduce a similar filtration on the derived category \(\mathrm{D}(\underline{A})\) of \(\underline{A}\)-modules for a commutative ring \(A\), which under Proposition 2.11 is easily seen to correspond to the slice filtration. **Definition 3.3**.: Let \(A\) be a Green functor, e.g. \(A=\underline{R}\) for a commutative ring \(R\) with or without involution. For every integer \(n\), let \(\mathrm{D}_{\geq n}(A)\) be the smallest full subcategory of \(\mathrm{D}_{\geq n}(A)\) closed under colimits and extensions and containing \[\mathcal{S}_{n}(A):=\{A[m+m\sigma]:2m\geq n\}\cup\{A\,\square\,\underline{B}^{ \mathbb{Z}/2}[m]:m\geq n\}.\] Let \(\mathrm{D}_{\leq n-1}(A)\) be the full subcategory of \(\mathrm{D}(A)\) spanned by the objects \(\mathcal{F}\) such that \(\mathrm{Hom}_{\mathrm{D}(A)}(\mathcal{G},\mathcal{F})=0\) for every \(\mathcal{G}\in\mathcal{S}_{n}(A)\). Observe that we have \(\mathcal{F}\in\mathrm{D}_{\leq n-1}(A)\) if and only if \(\mathrm{Hom}_{\mathrm{D}(A)}(\mathcal{G},\mathcal{F})=0\) for every \(\mathcal{G}\in\mathrm{D}_{\geq n}(A)\). The inclusion \(\mathrm{D}_{\geq n}(A)\to\mathrm{D}(A)\) admits a right adjoint. Let \(P_{n}\colon\mathrm{D}(A)\to\mathrm{D}(A)\) be the unit of this adjunction pair. Hence \(P_{n}(\mathcal{F})\in\mathrm{D}_{\geq n}(A)\). We also obtain the natural transformations \[\cdots\to P_{n}\to P_{n-1}\to\cdots,\] which we call the slice filtration. We set \(P^{n}:=\mathrm{cofib}(P_{n+1}\to\mathrm{id})\) and \(P_{n}^{n}:=\mathrm{cofib}(P_{n+1}\to P_{n})\) for every integer \(n\). Observe that there is an equivalence \(P_{n}^{n}\simeq\mathrm{fib}(P^{n}\to P^{n-1})\), and that \(P^{n}(\mathcal{F})\in\mathrm{D}_{\leq n}(A)\). **Lemma 3.4**.: _Let \(A\) be a Green functor. For every integer \(n\), there are natural equivalences of \(\infty\)-categories_ \[\mathrm{D}_{\geq n+2}(A)\simeq\mathrm{D}_{\geq n}(A)[1+\sigma]\text{ and }\mathrm{D}_{\leq n+2}(A)\simeq\mathrm{D}_{\leq n}(A)[1+\sigma].\] _This notation means the full subcategory spanned by all the objects of this shifted form._ Proof.: This is a direct consequence of \(\mathcal{S}_{n+2}(A)=\mathcal{S}_{n}(A)[1+\sigma]\). 
As for \(\mathrm{Sp}^{\mathbb{Z}/2}\) above, we define group valued functors \(H_{m+n\sigma}(-)\) and Mackey functor valued functors \(\underline{H}_{m+n\sigma}(-)\) for integers \(m\) and \(n\). **Lemma 3.5**.: _Let \(A\) be a Green functor. For \(\mathcal{F}\in\mathrm{D}(A)\), we have the following properties._ 1. \(\mathcal{F}\in\mathrm{D}_{\geq 0}(A)\) _if and only if_ \(\underline{H}_{n}(\mathcal{F})=0\) _for every integer_ \(n<0\)_._ 2. \(\mathcal{F}\in\mathrm{D}_{\geq 1}(A)\) _if and only if_ \(\underline{H}_{n}(\mathcal{F})=0\) _for every integer_ \(n<1\)_._ 3. \(\mathcal{F}\in\mathrm{D}_{\leq-1}(A)\) _if and only if_ \(\underline{H}_{n}(\mathcal{F})=0\) _for every integer_ \(n>-1\)_._ 4. \(\mathcal{F}\in\mathrm{D}_{\leq 0}(A)\) _if and only if_ \(\underline{H}_{n}(\mathcal{F})=0\) _for every integer_ \(n>0\)_._ Proof.: Argue as in [22, Proposition 11.1.18]. **Proposition 3.6**.: _Let \(A\) be a commutative ring, possibly with involution. For \(\mathcal{F}\in\mathrm{D}(\underline{A})\) and integer \(n\), there are canonical equivalences in \(\mathrm{D}(\underline{A})\)_ \[P_{2n}^{2n}(\mathcal{F})\simeq(\underline{H}_{n+n\sigma}(\mathcal{F}))[n+n \sigma],\] \[P_{2n+1}^{2n+1}(\mathcal{F})\simeq(\mathcal{P}^{0}\underline{H}_{n+1+n\sigma}( \mathcal{F}))[n+1+n\sigma],\] _where \(\mathcal{P}^{0}\) is the endofunctor of \(\mathrm{Mod}_{\underline{A}}\) killing the kernel of the restriction._ Proof.: Argue as in [20, Theorem 17.5.25]. If \(A\) has a non-trivial involution, note that the functor \(\mathcal{P}^{0}\) exists as res is \(A\)-linear. **Definition 3.7**.: Let \(A\) be a commutative ring. For \(\mathcal{F}\in\mathrm{D}(\underline{A})\) and integer \(n\), we set \[\rho_{2n}(\mathcal{F}):=\underline{H}_{n+n\sigma}(\mathcal{F}),\] \[\rho_{2n+1}(\mathcal{F}):=\mathcal{P}^{0}\underline{H}_{n+1+n\sigma}(\mathcal{ F}).\] Observe that we have a natural isomorphism of Mackey functors \(\rho_{m}(\mathcal{F})\cong\rho_{m}(\alpha_{*}\mathcal{F})\) for every integer \(m\). **Proposition 3.8**.: _Let \(A\) be a Green functor. If \(\mathcal{F}\in\mathrm{D}(A)\) satisfies \(P_{n}^{n}(\mathcal{F})\simeq 0\) for every integer \(n\), then \(\mathcal{F}\simeq 0\) in \(\mathrm{D}(A)\)._ Proof.: We have the fiber sequence \(P_{n+1}(\mathcal{F})\to P_{n}(\mathcal{F})\to P_{n}^{n}(\mathcal{F})\) in \(\mathrm{D}(A)\). Together with the assumption \(P_{n}^{n}(\mathcal{F})\simeq 0\), we have \(P_{n}(\mathcal{F})\simeq P_{n+1}(\mathcal{F})\) for every integer \(n\). In particular, \(P_{1}(\mathcal{F})\in\mathrm{D}_{\geq n}(A)\) for every integer \(n\). Lemma 3.4 implies that \(P_{1}(\mathcal{F})\in\mathrm{D}_{\geq 0}(A)[m+m\sigma]\) for every integer \(m\). Since \(\mathrm{D}_{\geq 0}(A)[m+m\sigma]\subset\mathrm{D}_{\geq 0}(A)[m]\), we obtain \(P_{1}(\mathcal{F})\simeq 0\) using Lemma 3.5(1). A similar argument shows that \(P^{0}(\mathcal{F})\simeq 0\). Use the fiber sequence \(P_{1}(\mathcal{F})\to\mathcal{F}\to P^{0}(\mathcal{F})\) to finish the proof. **Definition 3.9**.: Let \(X\) be a \(\mathbb{Z}/2\)-spectrum. We say that \(X\) is _even_ if \(\rho_{2n+1}(X)=0\) for every integer \(n\). We say that \(X\) is _very even_ if \(X\) is even and \(\rho_{2n}(X)\) is a constant Mackey functor for every integer \(n\). 
In this case, we have isomorphisms \[\rho_{2n}(X)(C_{2}/e)= \mathrm{Hom}_{\mathrm{Sp}^{2/2}}(\Sigma^{\infty}(C_{2}/e)_{+}, \Sigma^{-n-n\sigma}X)\] \[\cong \mathrm{Hom}_{\mathrm{Sp}^{2/2}}(i_{*}\mathbb{S},\Sigma^{-n-n \sigma}X)\] \[\cong \mathrm{Hom}_{\mathrm{Sp}}(\mathbb{S},i^{*}\Sigma^{-n-n\sigma}X)\] \[\cong \pi_{2n}(i^{*}X).\] Hence we have a natural isomorphism \[\rho_{2n}(X)\cong\underline{\pi_{2n}(i^{*}X)}. \tag{3.1}\] For a commutative ring \(R\). we use a similar terminology for objects of \(\mathrm{D}(\underline{R})\) **Proposition 3.10**.: _Let \(X\) be a \(\mathbb{Z}/2\)-spectrum. Then \(X\) is even if and only if \(i^{*}X\) is even in the sense that \(\pi_{2n+1}(i^{*}X)=0\) for every integer \(n\)._ Proof.: If \(\pi_{2n+1}(i^{*}X)=0\), then \(\underline{\pi}_{n+1+n\sigma}(X)(C_{2}/e)=0\). This implies that the map res is \(0\) for \(\rho_{2n+1}(X)\), so we have \(\mathcal{P}^{0}\underline{\pi}_{n+1+n\sigma}(X)=0\). On the other hand, \(\mathcal{P}^{0}\underline{\pi}_{n+1+n\sigma}(X)=0\) implies \(\pi_{2n+1}(i^{*}X)=0\). Together with the definition \(\rho_{2n+1}(X)=\mathcal{P}^{0}\underline{\pi}_{n+1+n\sigma}(X)\), we finish the proof. ## 4. Real Hochschild homology and real topological Hochschild homology Let \(\mathrm{NAlg}^{\mathbb{Z}/2}\) denote the \(\infty\)-category of normed \(\mathbb{Z}/2\)-spectra. This can be defined as the underlying \(\infty\)-category of a certain model category [21, Proposition B.129] of commutative monoids in orthogonal \(\mathbb{Z}/2\)-spectra. Alternatively, Bachmann-Hoyois provide a purely \(\infty\)-categorical formulation, see [2, Definition 9.14]. We refer to [25, Remark A.2.3] for a review. There are various equivalent definitions of THR. The following one is taken from [25, Definition 2.1.1]. **Definition 4.1**.: For a normed \(\mathbb{Z}/2\)-spectrum \(A\), the _real topological Hochschild homology of \(A\)_ is the coproduct \[\mathrm{THR}(A):=A\wedge_{N^{\mathbb{Z}/2}i^{*}A}A\] in \(\mathrm{NAlg}^{\mathbb{Z}/2}\), where both morphisms \(N^{\mathbb{Z}/2}i^{*}A\to A\) are the counit map. We may regard \(\mathrm{THR}(A)\) as an \(A\)-algebra using the morphism \(A\to A\wedge_{N^{\mathbb{Z}/2}i^{*}A}A\) to the second smash factor. On the other hand, we have the map \(\operatorname{THR}(A)\to A\) given by \(A\wedge_{N^{\mathbb{Z}/2}i^{*}A}A\to A\wedge_{A}A\simeq A\). For a map of normed \(\mathbb{Z}/2\)-spectra \(A\to B\), the _real Hochschild homology of \(B\) over \(A\)_ is the coproduct \[\operatorname{HR}(B/A):=\operatorname{THR}(B)\wedge_{\operatorname{THR}(A)}A\] in \(\operatorname{NAlg}^{\mathbb{Z}/2}\). We may also regard \(\operatorname{HR}(B/A)\) as a \(B\)-algebra. The above definition immediately implies an equivalence of normed \(\mathbb{Z}/2\)-spectra \[\operatorname{HR}(B/\mathbb{S})\simeq\operatorname{THR}(B).\] **Remark 4.2**.: For rings, forgetting the involution this is compatible with the classical \(\operatorname{HH}\) in the flat case thanks to [9, Lemma 2.5], keeping in mind the convention on pages 211/212 of loc. cit. So we force the equivariant refinement of [9, Lemma 2.5] by definition. Establishing an equivariant \(\operatorname{HKR}\) theorem in general with this definition will be difficult, but see our partial results further below. Also, note that this definition of \(\operatorname{HR}\) produces an equivariant ring spectrum, whereas the traditional \(\operatorname{HH}(A/R)\) is a simplicial ring. 
However, using the stable Dold-Kan functor \(\operatorname{H}\) from Proposition 2.11, \(\operatorname{H}(\operatorname{HH}(A/R))\) is indeed equivalent to \(i^{*}\operatorname{HR}(A/R)\), see Proposition 4.11. **Remark 4.3**.: For a homomorphism of commutative rings \(R\to A\) with involutions, we often use the abbreviated notation \[\operatorname{THR}(A):=\operatorname{THR}(\operatorname{H\underline{A}})\text{ and } \operatorname{HR}(A/R):=\operatorname{HR}(\operatorname{H\underline{A}}/ \operatorname{H\underline{R}}).\] Let \(\operatorname{THR}(A;\mathbb{Z}_{p})\) and \(\operatorname{HR}(A/R;\mathbb{Z}_{p})\) be their derived \(p\)-completions in \(\operatorname{D}(\underline{A})\). If \(R=\mathbb{Z}\), then we set \(\operatorname{HR}(A):=\operatorname{HR}(A/\mathbb{Z})\). For morphisms of Green functors \(R\to A\), we also use the abbreviated notation \(\operatorname{THR}(A):=\operatorname{THR}(\operatorname{H}A)\) and \(\operatorname{HR}(A/R):=\operatorname{HR}(\operatorname{H}A/\operatorname{H }R)\) and their \(p\)-completions \(\operatorname{THR}(A;\mathbb{Z}_{p})\) and \(\operatorname{THR}(A/R;\mathbb{Z}_{p})\). **Proposition 4.4**.: _The \(\infty\)-category \(\operatorname{NAlg}^{\mathbb{Z}/2}\) has colimits and limits, and the forgetful functor \(\operatorname{NAlg}^{\mathbb{Z}/2}\to\operatorname{Sp}^{\mathbb{Z}/2}\) is conservative and preserves limits and sifted colimits._ Proof.: An analogous result is proved in [2, Proposition 7.6(1),(2)] for the motivic setting. One can similarly argue for the topological setting too. **Construction 4.5**.: Let \(R\) be a commutative ring, and let \(\operatorname{Poly}_{R}\) be the category of polynomial \(R\)-algebras. A cohomology theory on the \(\infty\)-category \(\operatorname{sCRing}_{R}\) of simplicial commutative \(R\)-algebras is often extended from a cohomology theory on \(\operatorname{Poly}_{R}\) as observed in [9, Construction 2.1], which we review as follows. According to loc. cit, there is a natural equivalence of categories between the category \(\operatorname{CRing}_{R}\) of \(R\)-algebras and the category of functors \(\operatorname{Poly}_{R}\to\operatorname{Set}\) sending coproducts to products. By [31, Corollary 5.5.9.3], this yields an equivalence of \(\infty\)-categories \(\operatorname{sCRing}_{R}\simeq\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{ \mathbb{Z}/2})\). Hence for every \(\infty\)-category \(\mathcal{D}\) with sifted colimits, there exists an equivalence of \(\infty\)-categories \[\operatorname{Fun}_{\Sigma}(\operatorname{sCRing}_{R},\mathcal{D})\xrightarrow{ \simeq}\operatorname{Fun}(\operatorname{Poly}_{R},\mathcal{D}) \tag{4.1}\] using [31, Proposition 5.5.8.15], where \(\operatorname{Fun}_{\Sigma}(\operatorname{sCRing}_{R},\mathcal{D})\) denotes the full subcategory of \(\operatorname{Fun}(\operatorname{sCRing}_{R},\mathcal{D})\) spanned by the functors preserving sifted colimits. Hence for every functor \(f\colon\operatorname{Poly}_{R}\to\mathcal{D}\), there exists a functor \(F\colon\operatorname{sCRing}_{R}\to\mathcal{D}\) preserving sifted colimits that is unique in the \(\infty\)-categorical sense. In this case, we say that \(F\) is _a left Kan extension of \(f\)_. 
**Construction 4.6**.: The approach of [9, Construction 2.1] of extending \(\infty\)-functors from polynomial rings to simplicial rings can be adapted to the equivariant setting as follows: Let \(R\) be a commutative ring with involution, and let \(\operatorname{Poly}_{R}^{\mathbb{Z}/2}\) be the category of \(R\)-algebras of the form \(R[\mathbb{N}^{X}]\) with a finite \(\mathbb{Z}/2\)-set \(X\). Consider the \(\infty\)-category \(\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\), which is the full subcategory of the \(\infty\)-category of presheaves of spaces \(\mathcal{P}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\) spanned by the functors \((\operatorname{Poly}_{R}^{\mathbb{Z}/2})^{op}\to\operatorname{Spc}\) preserving finite products. By [31, Corollary 5.5.9.3], \(\mathcal{P}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\) is the underlying \(\infty\)-category of the simplicial model category of the simplicial presheaves on \(\operatorname{Poly}_{R}^{\mathbb{Z}/2}\) with the projective model structure. Let \(\operatorname{CRing}_{R}^{\mathbb{Z}/2}\) be the category of commutative \(R\)-algebras with involution. The inclusion functor \(\operatorname{Poly}_{R}^{\mathbb{Z}/2}\to\operatorname{CRing}_{R}^{\mathbb{Z}/2}\) preserves coproducts. This implies that for every \(A\in\operatorname{CRing}_{R}^{\mathbb{Z}/2}\), the presheaf on \(\operatorname{Poly}_{R}^{\mathbb{Z}/2}\) represented by \(A\) is in \(\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\). Hence the Yoneda embedding \[\operatorname{Poly}_{R}^{\mathbb{Z}/2}\to\mathcal{P}_{\Sigma}(\operatorname{ Poly}_{R}^{\mathbb{Z}/2})\] factors through \(\operatorname{CRing}_{R}^{\mathbb{Z}/2}\). (To proceed exactly as in [9, Construction 2.1], we would have to consider simplicial rings here and show that \(\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\) is indeed equivalent to the underlying \(\infty\)-category of the simplicial commutative rings with involutions. This would require an equivariant version of [11, Example 5.1.3].) Let \(\mathcal{D}\) be an \(\infty\)-category with sifted colimits. Then [31, Proposition 5.5.8.15] yields an induced equivalence of \(\infty\)-categories \[\operatorname{Fun}_{\Sigma}(\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{ \mathbb{Z}/2}),\mathcal{D})\xrightarrow{\simeq}\operatorname{Fun}( \operatorname{Poly}_{R}^{\mathbb{Z}/2},\mathcal{D}),\] where \(\operatorname{Fun}_{\Sigma}(\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{ \mathbb{Z}/2}),\mathcal{D})\) denotes the full subcategory of \(\operatorname{Fun}(\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z}/ 2}),\mathcal{D})\) spanned by the functors preserving sifted colimits. This implies that any functor \(f\colon\operatorname{Poly}_{R}^{\mathbb{Z}/2}\to\mathcal{D}\) admits a unique extension \(F\colon\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\to \mathcal{D}\) in the \(\infty\)-sense preserving sifted colimits. In this case, we say that \(F\) is a _left Kan extension of \(f\)_. Compose \(F\) with the above functor \(\operatorname{CRing}_{R}^{\mathbb{Z}/2}\to\mathcal{P}_{\Sigma}(\operatorname{ Poly}_{R}^{\mathbb{Z}/2})\) to obtain a functor \[\operatorname{CRing}_{R}^{\mathbb{Z}/2}\to\mathcal{D},\] which is a natural extension of \(f\). **Example 4.7**.: Let \(R\) be a commutative ring with involution. 
We have the adjunction \[F:\operatorname{Set}^{\mathbb{Z}/2}\to\operatorname{CRing}_{R}^{\mathbb{Z}/2}:U,\] where \(F\) sends a \(\mathbb{Z}/2\)-set \(X\) to \(R[\mathbb{N}^{X}]\), and \(U\) is the forgetful functor. Using \(F\) and \(U\), one can form the _standard resolution_[44, section 14.34] \[P_{\bullet}\to A \tag{4.2}\] for every \(R\)-algebra \(A\) with involution with the terms \(P_{n}:=(FU)^{(n+1)}(A)\) for all integers \(n\geq 0\). Observe that \(P_{\bullet}\) is a simplicial \(R\)-module with involution. Consider the Green functor \(\underline{A}\) associated with \(A\). Apply \((-)\) to (4.2) to obtain the standard resolution \(\underline{P_{\bullet}}\to\underline{A}\), and in this case \(\underline{P_{\bullet}}\) is a simplicial \(\underline{A}\)-module. Consider the functor \[\text{H}\colon\operatorname{CRing}_{R}^{\mathbb{Z}/2}\to\operatorname{NAlg}^{ \mathbb{Z}/2} \tag{4.3}\] sending a commutative \(R\)-algebra \(A\) with involution to the Eilenberg-MacLane spectrum \(\operatorname{H}\!\underline{A}\), see e.g. [25, Definition 2.3.2]. This restricts to the functor \(\operatorname{H}\colon\operatorname{Poly}_{R}^{\mathbb{Z}/2}\to\operatorname{ NAlg}^{\mathbb{Z}/2}\), which admits a left Kan extension \[\operatorname{H}\colon\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z} /2})\to\operatorname{NAlg}^{\mathbb{Z}/2} \tag{4.4}\] as a special case of Construction 4.6 for \(F=\operatorname{H}\) and \(\mathcal{D}=\operatorname{NAlg}^{\mathbb{Z}/2}\). If \(A\in\operatorname{CRing}_{R}^{\mathbb{Z}/2}\), then consider any standard resolution \(P_{\bullet}\to A\). The induced morphism \(\operatorname{colim}\operatorname{H}(P_{\bullet})\to\operatorname{H}(A)\) in \(\operatorname{NAlg}^{\mathbb{Z}/2}\) is an equivalence. It follows that (4.4) is an extension of (4.3). We have the functor \[\operatorname{THR}\colon\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{ Z}/2})\to\operatorname{NAlg}^{\mathbb{Z}/2} \tag{4.5}\] given by \(\operatorname{THR}(\mathcal{F}):=\operatorname{H}\!\mathcal{F}\wedge_{N^{ \mathbb{Z}/2}i^{*}\operatorname{H}\!\mathcal{F}}\operatorname{H}\!\mathcal{F}\) for \(\mathcal{F}\in\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{\mathbb{Z}/2})\). If \(\mathcal{F}\) is represented by a commutative \(R\)-algebra \(A\) with involution, then we have \(\operatorname{THR}(\mathcal{F})\simeq\operatorname{THR}(A)\) by Definition 4.1. Since (4.4) preserves sifted colimits and \(\wedge\) is a colimit, (4.5) preserves sifted colimits too. It follows that (4.5) is a left Kan extension of the functor \(\operatorname{THR}\colon\operatorname{Poly}_{R}^{\mathbb{Z}/2}\to\operatorname{ NAlg}^{\mathbb{Z}/2}\). Composing with the map \(i^{*}\), one easily sees that we have \(\operatorname{THH}(i^{*}\mathcal{F})\simeq i^{*}\operatorname{THR}(\mathcal{F})\) which generalizes the first statement of [25, Proposition 2.1.3]. We compose (4.4) with the forgetful functor \(\operatorname{NAlg}^{\mathbb{Z}/2}\to\operatorname{Sp}^{\mathbb{Z}/2}\) to obtain the functor \[\operatorname{THR}\colon\mathcal{P}_{\Sigma}(\operatorname{Poly}_{R}^{ \mathbb{Z}/2})\to\operatorname{Sp}^{\mathbb{Z}/2}. \tag{4.6}\] This preserves sifted colimits by Proposition 4.4. It follows that (4.6) is a left Kan extension of the functor \(\operatorname{THR}\colon\operatorname{Poly}_{R}^{\mathbb{Z}/2}\to\operatorname {Sp}^{\mathbb{Z}/2}\) too. **Example 4.8**.: Let \(R\) be a commutative ring. 
We have the functor \[\operatorname{H}\colon\operatorname{Poly}_{R}\to\operatorname{CAlg}( \operatorname{Sp})\] sending \(A\in\operatorname{Poly}_{R}\) to its Eilenberg-MacLane spectrum \(\operatorname{H}\!A\). We have a left Kan extension \[\operatorname{H}\colon\operatorname{sCAlg}_{R}\to\operatorname{CAlg}( \operatorname{Sp}).\] This preserves sifted colimits by Construction 4.5, \(\operatorname{H}\!A\) is equivalent to the usual Eilenberg-MacLane spectrum for any commutative \(R\)-algebra \(A\) arguing as in Example 4.7. We similarly define a functor \(\operatorname{H}\colon\operatorname{sCRing}_{R}\to\operatorname{NAlg}^{ \mathbb{Z}/2}\) that is obtained as a left Kan extension such that \(\operatorname{H}\!\underline{A}\) is the equivariant Eilenberg-MacLane spectrum for any commutative \(R\)-algebra \(A\). (As \(A\) has trivial involution here, we don't need Construction 4.6 for this.) **Example 4.9**.: Let \(R\) be a commutative ring. We have the functor \[\Omega^{1}_{/R}\colon\operatorname{Poly}_{R}\to\operatorname{Mod}( \operatorname{D}(R))\] sending \(A\in\operatorname{Poly}_{R}\) to the \(A\)-module \(\Omega^{1}_{A/R}\), where \(\operatorname{Mod}(\mathcal{C})\) denotes the underlying \(\infty\)-category of the generalized \(\infty\)-operad \(\operatorname{Mod}(\mathcal{C})^{\otimes}\) in [32, Definition 4.5.1.1] for every symmetric monoidal \(\infty\)-category \(\mathcal{C}\). Recall from [32, Remark 4.2.1.15, Corollary 4.5.1.6] that an object of the \(\infty\)-category \(\operatorname{Mod}(\operatorname{D}(R))\) consists of a pair of a commutative algebra object and its module. By Proposition A.3, \(\operatorname{Mod}(\operatorname{D}(R))\) admits sifted colimits, and the forgetful functor \(\operatorname{Mod}(\operatorname{D}(R))\to\operatorname{CAlg}(\operatorname{D}(R)) \times\operatorname{D}(R)\) is conservative and preserves sifted colimits. Let \[\mathbb{L}_{/R}\colon\operatorname{sCRing}_{R}\to\operatorname{Mod}( \operatorname{D}(R))\] be a left Kan extension of \(\Omega^{1}_{/R}\). For an \(R\)-algebra \(A\), \(\mathbb{L}_{A/R}\) is an \(\operatorname{H}\!A\)-module and hence an object of \(\operatorname{D}(A)\) due to Example 4.8. We wish to keep track of this \(\operatorname{H}\!A\)-module structure, and this is the reason why we work with \(\operatorname{Mod}(\operatorname{D}(R))\) rather than \(\operatorname{D}(R)\). Together with the equivalence of \(\infty\)-categories \(\operatorname{Mod}_{A}(\operatorname{D}(R))\simeq\operatorname{D}(A)\) (a non-equivariant version of Proposition 2.14), we may regard \(\mathbb{L}_{A/R}\) as an object of \(\operatorname{D}(A)\). We later might wish to extend the last example to commutative \(R\)-algebras with involution using Construction 4.6. This could be very useful below, see Remark 4.36 below. **Construction 4.10**.: Let \(R\) be a commutative ring. We have the functor \[\operatorname{THR}\colon\operatorname{sCRing}_{R}\to\operatorname{Mod}( \operatorname{Sp}^{\mathbb{Z}/2})\] sending \(A\in\operatorname{sCRing}_{R}\) to the \(\operatorname{H}\!\underline{A}\)-module \(\operatorname{THR}(\operatorname{H}\!\underline{A})\), where \(\operatorname{H}\) is a left Kan extension of the equivariant Eilenberg-MacLane spectrum functor in Example 4.8. 
This preserves sifted colimits since \(\operatorname{H}\) preserves sifted colimits by Example 4.8, the coproduct \(\wedge\) in \(\operatorname{NAlg}^{\mathbb{Z}/2}\) preserves colimits, the forgetful functor \(\operatorname{NAlg}^{\mathbb{Z}/2}\to\operatorname{Sp}^{\mathbb{Z}/2}\) preserves sifted colimits by Proposition 4.4, and the functors \(N^{\mathbb{Z}/2}\colon\operatorname{CAlg}(\operatorname{Sp})\to\operatorname{NAlg }^{\mathbb{Z}/2}\) and \(i^{*}\colon\operatorname{NAlg}^{\mathbb{Z}/2}\to\operatorname{CAlg}( \operatorname{Sp})\) preserve colimits. Arguing as in Construction 4.5, we see that \(\operatorname{THR}\) is a left Kan extension of its restriction \[\operatorname{THR}\colon\operatorname{Poly}_{R}\to\operatorname{Mod}( \operatorname{Sp}^{\mathbb{Z}/2}).\] Similarly, the functor \[\operatorname{HR}(-/\underline{R})\colon\operatorname{sCRing}_{R}\to \operatorname{Mod}(\operatorname{D}(\underline{R}))\] preserves sifted colimits. Hence \(\operatorname{HR}(-/\underline{R})\) is a left Kan extension of its restriction \[\operatorname{HR}(-/\underline{R})\colon\operatorname{Poly}_{R}\to \operatorname{Mod}(\operatorname{D}(\underline{R})).\] **Proposition 4.11**.: _Let \(R\to A\) be a homomorphism of commutative rings. Then there are equivalences in \(\operatorname{D}(A)\)_ \[i^{*}\operatorname{THR}(A)\simeq\operatorname{THH}(A)\text{ and }i^{*} \operatorname{HR}(A/R)\simeq\operatorname{HH}(A/R).\] Proof.: Consider the forgetful conservative functors \(\alpha_{*}\colon\operatorname{D}(A)\to\operatorname{Sp}\) and \(\alpha_{*}\colon\operatorname{D}(\underline{A})\to\operatorname{Sp}^{\mathbb{ Z}/2}\). By Proposition 2.10, we have a natural isomorphism \(\alpha_{*}i^{*}\simeq i^{*}\alpha_{*}\). Hence it suffices to provide equivalences \[i^{*}\alpha_{*}\operatorname{THR}(A)\simeq\operatorname{THH}(A)\text{ and }i^{*} \alpha_{*}\operatorname{HR}(A/R)\simeq\alpha_{*}\operatorname{HH}(A/R),\] i.e., we can forget the \(\operatorname{H}\!A\)-module and \(\operatorname{H}\!\underline{A}\)-module structures. Then the first equivalence is a special case of [25, Proposition 2.1.3], and the second equivalence holds as \(i^{*}\) commutes with push-outs in \(\operatorname{NAlg}^{\mathbb{Z}/2}\). **Proposition 4.12**.: _Let \(B\) be a normed \(\mathbb{Z}/2\)-spectrum such that its underlying \(\mathbb{Z}/2\)-spectrum is \((-1)\)-connected. Then \(\operatorname{THR}(B)\) is \((-1)\)-connected. Let \(A\) be a commutative ring with involution. Then \(\operatorname{THR}(A;\mathbb{Z}_{p})\) is \((-1)\)-connected._ Proof.: By [25, Proposition A.3.7], \(N^{\mathbb{Z}/2}i^{*}B\) is \((-1)\)-connected. Together with [4, Corollary 6.8.1], we see that \(\mathrm{THR}(B)\) is \((-1)\)-connected. We have the equivalence \(\mathrm{THR}(A;\mathbb{Z}_{p})\simeq\lim_{n}\mathrm{THR}(A)\,\square_{\underline {A}}^{\mathbb{L}}(\underline{A}/p^{n}\underline{A})\). Since the \(\lim^{1}\) terms add at most \(-1\) homological degree in the limit, we see that \(\mathrm{THR}(A;\mathbb{Z}_{p})\) is \((-2)\)-connected and (using [4, Corollary 6.8.1] again) there is an isomorphism \[\underline{\pi}_{-1}\mathrm{THR}(A;\mathbb{Z}_{p})\cong\lim_{n}^{1}\underline{ \pi}_{0}\mathrm{THR}(A)\,\square_{\underline{A}}\,\underline{\pi}_{0}( \underline{A}/p^{n}\underline{A}).\] Using the description of Lemma 5.1, we see that the tower on the right hand side consists of pointwise epimorphisms, hence satisfies the Mittag-Leffler condition in the abelian category of \(\underline{\mathbb{Z}}\)-modules. 
We deduce that \(\underline{\pi}_{-1}\mathrm{THR}(A;\mathbb{Z}_{p})\) vanishes. **Lemma 4.13**.: _For every integer \(n>0\), \(\underline{\pi}_{n}(\mathrm{THR}(\mathbb{Z}))\) is finite, i.e., \(\underline{\pi}_{n}(\mathrm{THR}(\mathbb{Z}))(C_{2}/e)\) and \(\underline{\pi}_{n}(\mathrm{THR}(\mathbb{Z}))(C_{2}/C_{2})\) are finite._ Proof.: By the finiteness result [9, Lemma 2.5], it remains to show that \(\pi_{n}(\mathrm{THR}(\mathbb{Z})^{\mathbb{Z}/2})\) is finite. Using the homotopy orbit spectral sequence, [9, Lemma 2.5] implies that \(\pi_{n}(\mathrm{THR}(\mathbb{Z})_{h\mathbb{Z}/2})\) is finite for \(n>0\). By [16, Theorem 5.20], \(\pi_{n}(\Phi^{\mathbb{Z}/2}\mathrm{THR}(\mathbb{Z}))\) is finite. The isotropy cofiber sequence \[\mathrm{THR}(\mathbb{Z})_{h\mathbb{Z}/2}\to\mathrm{THR}(\mathbb{Z})^{\mathbb{ Z}/2}\to\Phi^{\mathbb{Z}/2}\mathrm{THR}(\mathbb{Z})\] implies that \(\pi_{n}(\mathrm{THR}(\mathbb{Z})^{\mathbb{Z}/2})\) is finite too. **Remark 4.14**.: This important finiteness result refines the last part of [9, Lemma 2.5]. It is also related to [16, Theorem 5.24]. In fact, Dotto-Moi-Patchkoria-Reeh [16, Remark 5.23] conjecture that there is an equivalence of \(\mathbb{Z}/2\)-spectra \[\mathrm{THR}(\mathbb{Z})\simeq\mathrm{H}\underline{\mathbb{Z}}\oplus\bigoplus _{i=1}^{\infty}\Sigma^{i-1+i\sigma}\mathrm{H}\underline{\mathbb{Z}/i},\] which would imply Lemma 4.13. **Construction 4.15**.: Let \(M\) be an \(\underline{\mathbb{Z}}\)-module such that \(M\) is finite, i.e., \(A:=M(C_{2}/e)\) and \(B:=M(C_{2}/C_{2})\) are finite abelian groups. We have the morphism of \(\underline{\mathbb{Z}}\)-modules \(\underline{B}\to M\) given by the diagram since \(\mathrm{tr}\circ\mathrm{res}=2\) for \(M\) by Lemma 2.17 and the involution on \(B\) is trivial by definition. The kernel and cokernel of \(\underline{B}\to\underline{M}\) have the form for some \(\mathbb{Z}\)-module \(C\). The identity \(1+w=\mathrm{res}\circ\mathrm{tr}\) implies \(w=-1\). This Mackey functor is the cokernel of the morphism of \(\underline{\mathbb{Z}}\)-modules \(\underline{C}\to\underline{C^{\oplus\mathbb{Z}/2}}\) given by the diagram where \(\operatorname{res}\) is the diagonal embedding, the involution on \(C\) is trivial, and the involution on \(C\oplus C\) changes the summands. Hence there exists a finite number of finite abelian groups \(L_{1},\dots,L_{n}\) such that we can also construct \(M\) from \(\underline{L_{i}}\) and \(\underline{L_{i}^{\oplus\mathbb{Z}/2}}\) using kernels of surjections, cokernels of injections, and extensions finitely many times. It follows that we can construct \(M\) from \(\underline{\mathbb{Z}}\) and \(\underline{\mathbb{Z}^{\oplus\mathbb{Z}/2}}\) using the above operations finitely many times. This implies that we can construct \(\operatorname{H}\!M\) from \(\underline{\mathbb{Z}}\) and \(\underline{\operatorname{H}\!\mathbb{Z}}[\sigma]\) using fibers, cofibers, and extensions only finitely many times. **Lemma 4.16**.: _Let \(L\) and \(M\) be \(\underline{\mathbb{Z}}\)-modules. If \(L\) and \(M\) are finite, then the \(\underline{\mathbb{Z}}\)-module \(\underline{H}_{n}(L\,\square^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof.: Using [4, Corollary 6.8.1], we see that \(L\wedge_{A}\operatorname{Fil}_{n}M\) is \((n-1)\)-connected. It follows that \(\lim_{n}(L\wedge_{A}\operatorname{Fil}_{n}M)\) vanishes. Use the cofiber sequence \(\operatorname{Fil}_{n+1}M\to M\to\operatorname{Fil}^{n}M\) to finish the proof. **Lemma 4.22**.: _Let \(A\) be a commutative ring, and let \(\cdots\to\operatorname{Fil}_{1}\mathcal{F}\to\operatorname{Fil}_{0}\mathcal{F} :=\mathcal{F}\) be a complete filtration on \(\mathcal{F}\in\operatorname{D}(\underline{A})\). Then we have the following properties:_ 1. \(\operatorname{Fil}_{\bullet}(\mathcal{F}_{p}^{\wedge}):=(\operatorname{Fil}_{ \bullet}\mathcal{F})_{p}^{\wedge}\) _is a complete filtration on_ \(\mathcal{F}_{p}^{\wedge}\)_._ 2. _We have an equivalence_ \((\operatorname{gr}^{n}\mathcal{F})_{p}^{\wedge}\simeq\operatorname{gr}^{n}( \mathcal{F}_{p}^{\wedge})\) _in_ \(\operatorname{D}(\underline{A})\) _for every integer_ \(n\)_._ 3. _If_ \(\operatorname{gr}^{n}\mathcal{F}\) _is derived_ \(p\)_-complete for every integer_ \(n\)_, then_ \(\mathcal{F}\) _is derived_ \(p\)_-complete._ Proof.: (1) We have equivalences \[\lim_{n}\operatorname{Fil}_{n}(\mathcal{F}_{p}^{\wedge})\simeq\lim_{n} \operatorname{Fil}_{n}\mathcal{F}\,\square_{\underline{A}}^{\mathbb{L}}A/p^{i} A\simeq\lim_{i}(\lim_{n}\operatorname{Fil}_{n}\mathcal{F}\xrightarrow{p^{i}} \lim_{n}\operatorname{Fil}_{n}\mathcal{F})\] in \(\operatorname{D}(\underline{A})\), and the last one vanishes since \(\lim\operatorname{Fil}_{n}\mathcal{F}\) vanishes. Hence \(\operatorname{Fil}_{\bullet}(\mathcal{F}_{p}^{\wedge})\) gives a complete filtration on \(\mathcal{F}_{p}^{\wedge}\). (2) Use the cofiber sequences \(\operatorname{Fil}_{n+1}\mathcal{F}\to\operatorname{Fil}_{n}\mathcal{F}\to \operatorname{gr}^{n}\mathcal{F}\) and \(\operatorname{Fil}_{n+1}(\mathcal{F}_{p}^{\wedge})\to\operatorname{Fil}_{n}( \mathcal{F}_{p}^{\wedge})\to\operatorname{gr}^{n}(\mathcal{F}_{p}^{\wedge})\). (3) By induction and (2), we have an equivalence \(\operatorname{Fil}^{n}\mathcal{F}\simeq\operatorname{Fil}^{n}(\mathcal{F}_{p}^ {\wedge})\) for every integer \(n\). Take \(\lim_{n}\) on both sides, and use the completeness of \(\operatorname{Fil}_{n}\mathcal{F}\) and (1) to conclude. The following result seems well-known. **Lemma 4.23**.: _Let \(R\) be a commutative \(\mathbb{Z}_{p}\)-algebra. Then there is a natural equivalence \((\mathbb{L}_{R/\mathbb{Z}})_{p}^{\wedge}\simeq(\mathbb{L}_{R/\mathbb{Z}_{p}}) _{p}^{\wedge}\) in \(\operatorname{D}(R)\)._ Proof.: We have the transitivity triangle \(\mathbb{L}_{\mathbb{Z}_{p}/\mathbb{Z}}\otimes_{\mathbb{Z}_{p}}^{\mathbb{L}}R \to\mathbb{L}_{R/\mathbb{Z}}\to\mathbb{L}_{R/\mathbb{Z}_{p}}\). 
Since \(\mathbb{Z}_{p}/p^{n}\mathbb{Z}_{p}\) admits a finite filtration whose graded pieces are \(\mathbb{F}_{p}\), it suffices to show \(\mathbb{L}_{\mathbb{Z}_{p}/\mathbb{Z}}\otimes_{\mathbb{Z}_{p}}^{\mathbb{L}} \mathbb{F}_{p}\simeq 0\). The induced morphism \(\mathbb{L}_{\mathbb{F}_{p}/\mathbb{Z}}\to\mathbb{L}_{\mathbb{F}_{p}/\mathbb{Z} _{p}}\) in \(\operatorname{D}(\mathbb{F}_{p})\) is an equivalence, as \(\mathbb{L}_{\mathbb{F}_{p}/\mathbb{Z}}\simeq(p)/(p)^{2}[1]\) and similarly for \(\mathbb{L}_{\mathbb{F}_{p}/\mathbb{Z}_{p}}\). Use the transitivity triangle \(\mathbb{L}_{\mathbb{Z}_{p}/\mathbb{Z}}\otimes_{\mathbb{Z}_{p}}^{\mathbb{L}} \mathbb{F}_{p}\to\mathbb{L}_{\mathbb{F}_{p}/\mathbb{Z}}\to\mathbb{L}_{\mathbb{ F}_{p}/\mathbb{Z}_{p}}\) to conclude. The following result is an equivariant refinement of [9, a part of Lemma 2.5], which will be used later for relating statements about \(\operatorname{THR}\) and \(\operatorname{HR}\). **Proposition 4.24**.: _Let \(A\) be a commutative ring. Then the natural morphism_ \[\operatorname{THR}(A;\mathbb{Z}_{p})\,\square_{\operatorname{THR}(\mathbb{Z})} ^{\mathbb{L}}\,\underline{\mathbb{Z}}\to\operatorname{HR}(A;\mathbb{Z}_{p})\] _in \(\operatorname{D}(\underline{A})\) is an equivalence._ Proof.: By Proposition 2.23, it suffices to show that \(\operatorname{THR}(A;\mathbb{Z}_{p})\,\square_{\operatorname{THR}(\mathbb{Z})} ^{\mathbb{L}}\,\underline{\mathbb{Z}}\) is derived \(p\)-complete. Recall that \(\underline{H}_{i}(\operatorname{THR}(\mathbb{Z}))\) is finite for \(i>0\) by Lemma 4.13, \(\underline{H}_{0}(\operatorname{THR}(\mathbb{Z}))\simeq\underline{\mathbb{Z}}\) by [25, Proposition 2.3.5], and \(\underline{H}_{i}(\operatorname{THR}(\mathbb{Z}))=0\) for \(i<0\) by Proposition 4.12. As \(\operatorname{THR}(A;\mathbb{Z}_{p})\) is derived \(p\)-complete, it suffices to show that \[\operatorname{THR}(A;\mathbb{Z}_{p})\,\square_{\operatorname{THR}(\mathbb{Z})} ^{\mathbb{L}}\,\tau_{\geq 1}\operatorname{THR}(\mathbb{Z})\] is derived \(p\)-complete. Recall that \(\operatorname{THR}(A;\mathbb{Z}_{p})\) is \((-1)\)-connected by Proposition 4.12. The filtration \(\operatorname{THR}(A;\mathbb{Z}_{p})\,\square_{\operatorname{THR}(\mathbb{Z})}^{ \mathbb{L}}\,\tau_{\geq\bullet}\operatorname{THR}(\mathbb{Z})\) is complete by a \(\operatorname{D}(\underline{A})\)-variant of Lemma 4.21. Using Lemma 4.22(3), we reduce to showing that \[\operatorname{THR}(A;\mathbb{Z}_{p})\,\square_{\operatorname{THR}(\mathbb{Z})} ^{\mathbb{L}}\,\underline{H}_{i}\operatorname{THR}(\mathbb{Z})\] is derived \(p\)-complete for every integer \(i\geq 1\). Since \(\underline{H}_{i}\mathrm{THR}(\mathbb{Z})\) is finite for \(i\geq 1\), Construction 4.15 allows us to reduce to showing that \[\alpha(M):=\mathrm{THR}(A;\mathbb{Z}_{p})\,\square_{\mathrm{THR}(\mathbb{Z})}^{ \mathbb{L}}\,\underline{M}\text{ and }\beta(M):=\mathrm{THR}(A;\mathbb{Z}_{p})\, \square_{\mathrm{THR}(\mathbb{Z})}^{\mathbb{L}}\,\underline{M}^{\oplus\mathbb{ Z}/2}\] are derived \(p\)-complete for every finite abelian group \(M\). We further reduce to the case when \(M=\mathbb{Z}/\ell^{m}\) for some prime \(\ell\) and integer \(m\geq 1\). The cofiber of the multiplication \(\ell^{m}\colon\underline{\mathbb{Z}}\to\underline{\mathbb{Z}}\) is \(\underline{\mathbb{Z}/\ell^{m}}\). If \(\ell\neq p\), then we have \(\alpha(M)\simeq 0\) by Proposition 2.21(6). A similar argument yields \(\beta(M)\simeq 0\). Now assume \(\ell=p\). 
For every integer \(k\geq 1\), the quotient homomorphism \(\mathbb{Z}/p^{(m+1)k}\to\mathbb{Z}/p^{mk}\) induces the following commutative diagram in \(\mathrm{D}(\underline{A})\) whose rows are cofiber sequences. Apply \(\mathrm{THR}(A;\mathbb{Z}_{p})\,\square_{\mathrm{THR}(\mathbb{Z})}^{\mathbb{L}}(-)\) to this diagram, iterate this for various \(k\), and take \(\lim_{k}\). Using \(\lim(\cdots\xrightarrow{0}\mathbb{Z}/p^{m}\xrightarrow{0}\mathbb{Z}/p^{m})\simeq 0\), we obtain an equivalence \(\alpha(M)\simeq\alpha(M)_{p}^{\wedge}\). This implies that \(\alpha(M)\) is derived \(p\)-complete by Proposition 2.21(4). Again, a variant of this argument shows that \(\beta(M)\) is derived \(p\)-complete as well. **Definition 4.25**.: For a commutative ring \(R\), let \(\mathrm{D}(\underline{R})_{\sigma-\mathrm{sums}}\) be the full subcategory of the \(\infty\)-category \(\mathrm{D}(\underline{R})\) spanned by the objects of the form \[\mathcal{F}:=\bigoplus_{n=0}^{\infty}\underline{\mathcal{F}_{n}}[n\sigma]\] such that \(\mathcal{F}_{n}\) is an \(R\)-module for all integers \(n\geq 0\). In this case, we have a natural isomorphism \(\mathcal{F}_{n}\cong H_{n}(i^{*}\mathcal{F})\) of \(R\)-modules. This rather technical definition will be considered in the following lemma, which will be used in the proof of Theorem 4.30. **Lemma 4.26**.: _Let \(R\) be a commutative ring. Then there exists a natural filtration \(\mathrm{Fil}_{n}\mathcal{F}\) with \(n\geq 0\) for all \(\mathcal{F}\in\mathrm{D}(\underline{R})_{\sigma-\mathrm{sums}}\) such that the \(n\)th graded piece \(\mathrm{gr}^{n}\mathcal{F}\) is equivalent to \(\underline{H_{n}}(i^{*}\mathcal{F})[n\sigma]\) for all integers \(n\geq 0\)._ Proof.: We set \(\mathrm{Fil}_{0}\mathcal{F}:=\mathcal{F}\). For every integer \(n\geq 0\), we recursively construct \[\mathrm{Fil}_{n+1}\mathcal{F}:=(P_{2n+2}(\mathrm{Fil}_{n}\mathcal{F}[n+1]))[-n-1].\] We have the natural morphism \[\mathrm{Fil}_{n+1}\mathcal{F}\to\mathrm{Fil}_{n}\mathcal{F}\] induced by the natural morphism \(P_{2n+2}(\mathrm{Fil}_{n}\mathcal{F}[n+1])\to\mathrm{Fil}_{n}\mathcal{F}[n+1]\). We claim that there exists an equivalence \[\mathrm{Fil}_{n}\mathcal{F}\simeq\bigoplus_{j=n}^{\infty}\underline{\mathcal{F}_{j}}[j\sigma]\] for all integers \(n\geq 0\), where \(\mathcal{F}_{j}:=H_{j}(i^{*}\mathcal{F})\) for every integer \(j\geq 0\). This holds if \(n=0\). Assume that the claim holds for \(n\). We have \[\underline{\mathcal{F}_{j}}[j\sigma+n+1]\in\mathrm{D}_{\geq 2n+2}(\underline{R})\] for \(j>n\) by Lemmas 3.4 and 3.5(1) and \[P_{2n+1}^{2n+1}\underline{\mathcal{F}_{n}}[n\sigma+n+1]\simeq\underline{\mathcal{F}_{n}}[n\sigma+n+1]\] by Proposition 3.6; see Definition 3.3 for the notation \(\mathrm{D}_{\geq 2n+2}(\underline{R})\). This shows the claim for \(n+1\), which completes the induction. Then \(\mathrm{gr}^{n}\mathcal{F}\) can be identified with \(\underline{\mathcal{F}_{n}}[n\sigma]\). The following lemma is the starting point of all computations, ultimately leading to Theorem 5.16. We currently cannot generalize this lemma to \(R\) or \(\mathbb{N}\) having a non-trivial involution, but see Remark 4.36 below. **Lemma 4.27**.: _Let \(R\) be a commutative ring. Then there is an equivalence_ \[\mathrm{HR}(R[\mathbb{N}]/R)\simeq\underline{\Omega}_{R[\mathbb{N}]/R}^{0}\oplus\underline{\Omega}_{R[\mathbb{N}]/R}^{1}[\sigma]\] _in \(\mathrm{D}(\underline{R[\mathbb{N}]})\). 
Note that by definition the left-hand side is an algebra and hence a module over \(\mathrm{H}\underline{R[\mathbb{N}]}\), which we then consider as an object in \(\mathrm{D}(\underline{R[\mathbb{N}]})\) using Proposition 2.11. In particular, we have \(\mathrm{HR}(R[\mathbb{N}]/R)\in\mathrm{D}(\underline{R[\mathbb{N}]})_{\sigma-\mathrm{sums}}\)._ Proof.: We have the maps \(\mathbb{S}[\mathbb{N}]\to\mathrm{THR}(\mathbb{S}[\mathbb{N}])\to\mathbb{S}[\mathbb{N}]\) in \(\mathrm{NAlg}^{\mathbb{Z}/2}\) from Definition 4.1. Hence \(\mathrm{THR}(\mathbb{S}[\mathbb{N}])\) admits a decomposition \(\mathbb{S}[\mathbb{N}]\oplus X\) as an \(\mathbb{S}[\mathbb{N}]\)-module. By [25, Propositions 4.2.11, 4.2.13], we obtain an equivalence of \(\mathbb{Z}/2\)-spectra \[\mathrm{THR}(\mathbb{S}[\mathbb{N}])\simeq\bigoplus_{i=0}^{\infty}\mathbb{S}\oplus\bigoplus_{i=0}^{\infty}\Sigma^{\sigma}\mathbb{S}\] using that \(\mathbb{S}[S^{\sigma}]\simeq\mathbb{S}\oplus\Sigma^{\sigma}\mathbb{S}\) and an index shift. From this, we obtain an equivalence of \(\mathbb{Z}/2\)-spectra \(X\simeq\bigoplus_{i=0}^{\infty}\Sigma^{\sigma}\mathbb{S}\). We have equivalences of \(\mathrm{H}\underline{R}\)-modules \[\mathrm{HR}(R[\mathbb{N}]/R)= \mathrm{THR}(R[\mathbb{N}])\wedge_{\mathrm{THR}(R)}\mathrm{H}\underline{R}\] \[\simeq (\mathrm{THR}(\mathbb{S}[\mathbb{N}])\wedge\mathrm{THR}(R))\wedge_{\mathrm{THR}(R)}\mathrm{H}\underline{R}\] \[\simeq \mathrm{THR}(\mathbb{S}[\mathbb{N}])\wedge\mathrm{H}\underline{R}\] using [25, Proposition 4.2.15] and the equivalence \(\mathrm{THR}(\mathbb{S})\simeq\mathbb{S}\). Hence, setting \(Y:=X\wedge\mathrm{H}\underline{R}\), we obtain an equivalence \[\mathrm{HR}(R[\mathbb{N}]/R)\simeq\underline{R[\mathbb{N}]}\oplus Y\] in \(\mathrm{D}(\underline{R[\mathbb{N}]})\). Observe that we have an equivalence \(Y\simeq\bigoplus_{i=0}^{\infty}\underline{R}[\sigma]\) in \(\mathrm{D}(\underline{R})\) by the equivalence of Proposition 2.11, under which \([\sigma]\) corresponds to \(\Sigma^{\sigma}\). Recall the conservative forgetful functor \(\alpha_{*}\colon\mathrm{D}(\underline{R[\mathbb{N}]})\to\mathrm{Sp}^{\mathbb{Z}/2}\). For every \(\mathcal{F}\in\mathrm{D}(\underline{R[\mathbb{N}]})\), we have a natural isomorphism of Mackey functors \[\underline{H}_{m+n\sigma}(\mathcal{F})\cong\underline{\pi}_{m+n\sigma}(\alpha_{*}\mathcal{F})\] by adjunction since \(\alpha^{*}(\Sigma^{m+n\sigma}\mathbb{S})\simeq\underline{R[\mathbb{N}]}[m+n\sigma]\). Using this, we have an equivalence \(Y\simeq\underline{M}[\sigma]\) in \(\mathrm{D}(\underline{R[\mathbb{N}]})\) for some \(R[\mathbb{N}]\)-module \(M\). Together with the Hochschild-Kostant-Rosenberg theorem (see e.g. [30, Theorem 3.4.4]) and Proposition 4.11, we have isomorphisms of \(R[\mathbb{N}]\)-modules \[M\cong H_{1}(i^{*}\mathrm{HR}(R[\mathbb{N}]/R))\cong H_{1}(\mathrm{HH}(R[\mathbb{N}]/R))\cong\Omega_{R[\mathbb{N}]/R}^{1}.\] On the other hand, the direct summand \(\underline{R[\mathbb{N}]}\) canonically corresponds to \(\underline{\Omega}_{R[\mathbb{N}]/R}^{0}\). **Remark 4.28**.: We now assume that \(2\) is invertible in \(R\). As before, the following computation is carried out in the abelian category of modules over the Green functor \(\underline{R[\mathbb{N}]}\). 
We compute the kernel and cokernel of the morphism \(\underline{R[\mathbb{N}]^{\oplus\mathbb{Z}/2}}\to\underline{R[\mathbb{N}]}\) induced by the summation \(R[\mathbb{N}]\oplus R[\mathbb{N}]\to R[\mathbb{N}]\), where by the definition of \([\sigma]\) in Construction 2.7 the kernel is \(\underline{R[\mathbb{N}]}[\sigma-1]\), and the cokernel is \(0\) as \(2\) is invertible. Now one easily verifies that \(\underline{R[\mathbb{N}]}[\sigma-1]\) is quasi-isomorphic to the \(\underline{R[\mathbb{N}]}\)-module given by the diagram where \(w\) is the \(R[\mathbb{N}]\)-linear involution on \(R[\mathbb{N}]\) given by \(a\mapsto-a\) for \(a\in R[\mathbb{N}]\). This \(\underline{R[\mathbb{N}]}\)-module is isomorphic to \((R[\mathbb{N}],w)\) since \(w(a)=a\) implies \(a=0\). Hence using the isomorphism of \(R[\mathbb{N}]\)-modules \(R[\mathbb{N}]\cong\Omega^{1}_{R[\mathbb{N}]/R}\), we obtain an equivalence in \(\mathrm{D}(\underline{R[\mathbb{N}]})\) \[\mathrm{HR}(R[\mathbb{N}]/R)\simeq\underline{\Omega^{0}_{R[\mathbb{N}]/R}}\oplus\underline{(\Omega^{1}_{R[\mathbb{N}]/R},w)}[1].\] So in this description the involution on \(\Omega^{1}_{R[x]/R}\) is induced by \(fdx\mapsto-fdx\). Note that via the standard isomorphism \(\mathrm{HH}_{1}(R[x]/R)\cong\Omega_{R[x]/R}\) given by \(1\otimes x-x\otimes 1\mapsto dx\), this involution corresponds to changing the two tensor factors on \(\mathrm{HH}_{1}\), as expected. Let us also point out a possible source of confusion here: the cokernel above in modules over Green functors is \(0\) only if \(2\) is invertible in \(R\), but the underlying map of \(R\)-modules with involution is surjective even if \(2\) is not invertible. In what follows, we use the notion of flat modules over Green functors [25, Definition A.5.1]. **Lemma 4.29**.: _Let \(R\to A\), \(R\to B\) be morphisms of Green functors. If either \(A\) or \(B\) is flat over \(R\), then there is a natural equivalence in \(\mathrm{D}(A\,\square_{R}\,B)\)_ \[\mathrm{HR}((A\,\square_{R}\,B)/R)\simeq\mathrm{HR}(A/R)\,\square_{R}^{\mathbb{L}}\,\mathrm{HR}(B/R).\] Proof.: We set \(C:=A\,\square_{R}\,B\), which is equivalent to \(A\,\square_{R}^{\mathbb{L}}\,B\) using the assumption of flatness. We have the equivalences \[\mathrm{HR}(C/R)= \mathrm{THR}(C)\,\square_{\mathrm{THR}(R)}^{\mathbb{L}}\,R\] \[\simeq (\mathrm{THR}(A)\,\square_{\mathrm{THR}(R)}^{\mathbb{L}}\,\mathrm{THR}(B))\,\square_{\mathrm{THR}(R)}^{\mathbb{L}}\,R\] \[\simeq (\mathrm{THR}(A)\,\square_{\mathrm{THR}(R)}^{\mathbb{L}}\,R)\,\square_{R}^{\mathbb{L}}(\mathrm{THR}(B)\,\square_{\mathrm{THR}(R)}^{\mathbb{L}}\,R)\] \[\simeq \mathrm{HR}(A/R)\,\square_{R}^{\mathbb{L}}\,\mathrm{HR}(B/R)\] where the second one is due to [25, Proposition 2.1.5]. The following is an equivariant refinement of the Hochschild-Kostant-Rosenberg theorem [23], see also [9, p. 215]. We will relate this filtration to slice filtrations after derived \(p\)-completion in Theorem 5.16 below, refining [9, Theorem 6.1], in the case where \(A\) is a perfectoid ring and \(R=\mathbb{Z}_{p}\). Recall that Construction 4.10 extended the equivariant Eilenberg-MacLane spectrum \(\mathrm{H}(-)\) to simplicial commutative \(R\)-algebras. **Theorem 4.30**.: _Let \(R\) be a commutative ring, and let \(A\) be a simplicial commutative \(R\)-algebra. Then there exists a natural filtration \(\mathrm{Fil}_{\bullet}\mathrm{HR}(A/R)\) on \(\mathrm{HR}(A/R)\) whose \(n\)th graded piece is_ \[\operatorname{gr}^{n}\!\operatorname{HR}(A/R)\simeq\iota(\wedge_{A}^{n}\mathbb{L}_{A/R})[n\sigma]\] _for every integer \(n\). 
If \(2\) is invertible in \(R\), then this filtration is complete._ Proof.: If \(A=R[x_{1},\dots,x_{d}]\) with an integer \(d\geq 0\), then Lemmas 4.27 and 4.29 imply \(\operatorname{HR}(A/R)\in\operatorname{D}(\underline{A})_{\sigma-\operatorname{sums}}\), i.e., \(\operatorname{HR}(A/R)\simeq\bigoplus_{n=0}^{d}\underline{\mathcal{F}_{n}}[n\sigma]\) for some \(A\)-modules \(\mathcal{F}_{n}\). Using the Hochschild-Kostant-Rosenberg theorem and Proposition 4.11, we have isomorphisms of \(A\)-modules \[\mathcal{F}_{n}\cong H_{n}(i^{*}\!\operatorname{HR}(A/R))\cong H_{n}(\operatorname{HH}(A/R))\cong\Omega_{A/R}^{n}.\] Hence we obtain an equivalence \[\operatorname{HR}(A/R)\simeq\bigoplus_{n=0}^{d}\underline{\Omega_{A/R}^{n}}[n\sigma] \tag{4.7}\] in \(\operatorname{D}(\underline{A})\). Together with the natural filtration in Lemma 4.26, we can regard \(\operatorname{HR}(A/R)\) as an object of \(\operatorname{Fun}(\mathbb{Z}^{op},\operatorname{D}(\underline{A}))\). Proposition 2.14 allows us to regard \(\operatorname{HR}(A/R)\) as an object of \(\operatorname{Fun}(\mathbb{Z}^{op},\operatorname{Mod}(\operatorname{D}(\underline{R})))\), where \(\operatorname{Mod}(-)\) denotes the \(\infty\)-category of module objects of a symmetric monoidal category. Hence we obtain a functor \[\operatorname{HR}(-/R)\colon\operatorname{Poly}_{R}\to\operatorname{Fun}(\mathbb{Z}^{op},\operatorname{Mod}(\operatorname{D}(\underline{R})))\] (where again we work with \(\operatorname{Mod}\) to keep track of the \(A\)-module structure on \(\operatorname{HR}(A/R)\)) such that its \(n\)th graded piece \(\operatorname{gr}^{n}\!\operatorname{HR}(A/R)\) is naturally equivalent to \(\underline{\Omega_{A/R}^{n}}[n\sigma]\) as \(A\)-modules in \(\operatorname{D}(\underline{R})\) for every integer \(n\). For general \(A\in\operatorname{sCRing}_{R}\), we get the desired filtration by left Kan extension, using the description of \(\mathbb{L}_{A/R}\) from Example 4.9. To check that this filtration is complete if \(2\) is invertible in \(R\), use the forgetful functor \(\operatorname{Mod}(\operatorname{D}(\underline{R}))\to\operatorname{D}(\underline{R})\) to obtain a functor \[\operatorname{HR}(-/R)\colon\operatorname{sCRing}_{R}\to\operatorname{Fun}(\mathbb{Z}^{op},\operatorname{D}(\underline{R}))=\operatorname{DF}(\underline{R}).\] By Lemma 4.34(1) below, we see that \(\operatorname{Fil}_{n}\!\operatorname{HR}(A/R)\) is \((n-1)\)-connected for \(A\in\operatorname{sCRing}_{R}\) and every integer \(n\). Hence the filtration on \(\operatorname{HR}(A/R)\) is complete. The following result deserves to be called the _real Hochschild-Kostant-Rosenberg theorem_. **Theorem 4.31**.: _Let \(R\) be a commutative ring, and let \(A\) be a smooth \(R\)-algebra. Then the natural filtration on \(\operatorname{HR}(A/R)\) in Theorem 4.30 is complete, and its \(n\)th graded piece in \(\mathrm{D}(\underline{A})\) is given by_ \[\operatorname{gr}^{n}\!\operatorname{HR}(A/R)\simeq\underline{\Omega_{A/R}^{n}}[n\sigma]\] _for every integer \(n\)._ Proof.: The last claim follows from \(\mathbb{L}_{A/R}\simeq\Omega_{A/R}^{1}\) and Theorem 4.30. By [25, Theorem 3.4.3] (see also [24]) \(\operatorname{THR}(-)\) is an etale hypersheaf on the affine isovariant site, and hence on the subsite of affine schemes with trivial \(C_{2}\)-action over a given base ring \(R\). Definition 4.1 immediately implies that \(\operatorname{HR}(-/R)\) is also an etale sheaf. Furthermore, every limit of complete filtered complexes is complete. 
Hence it suffices to work Zariski locally on \(A\) to show that the filtration on \(\operatorname{HR}(A/R)\) is complete. Together with [14, Corollaire IV.17.11.4], we may assume that \(R\to A\) factors through \(R\to R[x_{1},\ldots,x_{d}]\to A\), where the first homomorphism is the obvious inclusion and the second homomorphism is etale. We set \(R^{\prime}:=R[x_{1},\ldots,x_{d}]\). For \(B\in\operatorname{Poly}_{R^{\prime}}\), we have \[\operatorname{HR}(R^{\prime}/R)\wedge_{\mathrm{H}\underline{R^{\prime}}}\operatorname{H}\underline{B},\ \operatorname{HR}(B/R)\in\operatorname{D}(\underline{R})_{\sigma-\text{sums}}\] by (4.7). Using Lemma 4.26, we see that the base change map \[\operatorname{HR}(R^{\prime}/R)\wedge_{\mathrm{H}\underline{R^{\prime}}}\operatorname{H}\underline{B}\to\operatorname{HR}(B/R) \tag{4.8}\] obtained from [25, (3.1)] is compatible with the filtrations for \(B\in\operatorname{Poly}_{R^{\prime}}\). By left Kan extension, we see that (4.8) is compatible with the filtrations for \(B\in\operatorname{sCRing}_{R^{\prime}}\). In particular, the base change map \[\operatorname{HR}(R^{\prime}/R)\wedge_{\mathrm{H}\underline{R^{\prime}}}\operatorname{H}\underline{A}\to\operatorname{HR}(A/R)\] is compatible with the filtrations. This map is an equivalence by [25, Theorem 3.2.3]. To conclude, observe that the filtration on \(\operatorname{HR}(R^{\prime}/R)\) is finite and hence complete as \(\Omega^{n}_{R^{\prime}/R}\) vanishes for \(n>d\). **Remark 4.32**.: Let \(R\) be a commutative ring, and let \(A\) be a smooth \(R\)-algebra. If 2 is invertible in \(R\), then \(\underline{R}[\sigma-1]\) is equivalent to \((\underline{R},w)\) by Lemma 4.34(1) below, where \(w\) is the involution \(R\to R\) sending \(x\) to \(-x\). In particular, \(\underline{R}[\sigma-1]\) is then concentrated in degree 0. This implies that \(\underline{H}_{i\sigma}\underline{\Omega^{n}_{A/R}}[n\sigma]=0\) whenever \(i\neq n\). Hence for every integer \(n\), the equivalence of Theorem 4.31 induces a natural isomorphism \[\underline{H}_{n\sigma}\operatorname{HR}(A/R)\cong\underline{\Omega^{n}_{A/R}}.\] We now discuss some partial results towards a generalization to commutative \(R\)-algebras with a non-trivial involution. **Definition 4.33**.: For any commutative ring \(R\) and for any commutative \(R\)-algebra \(A\), we consider the graded \(A\)-module \(\Omega^{*}_{A/R}\). Observe that if \(A\) comes with an \(R\)-linear involution, then for any \(n\geq 0\), this induces an involution on \(\Omega^{n}_{A/R}\). Restricting to \(\operatorname{Poly}^{\mathbb{Z}/2}_{R}\) and using Construction 4.6, we obtain the definition of the _real cotangent complex_ in degree \(n\), denoted \(\underline{\mathbb{L}}^{n}_{A/R}\), in \(\operatorname{D}(\underline{A})\), proceeding as in Example 4.9 and also using Proposition 2.14. **Lemma 4.34**.: _Let \(R\) be a commutative ring, \(M\) an \(R\)-module with involution \(\tau\)._ 1. _If 2 is invertible in \(R\), we have an equivalence \(\underline{M}[\sigma]\simeq(\underline{M},w)[1]\) in \(\operatorname{D}(\underline{R})\), where \(w(m)=-\tau(m)\) for all \(m\in M\)._ 2. _The \(R\)-module \(M\oplus M\) with involution \((a,b)\mapsto(b,a)\) is isomorphic as an \(R\)-module with involution to \(M\oplus M\) with involution \((a,b)\mapsto(\tau(b),\tau(a))\)._ Proof.: For the first part, argue as in Remark 4.28. For the second statement, one uses the isomorphism of \(R\)-modules with involution given by \((a,b)\mapsto(a,\tau(b))\). 
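To illustrate part (1) of Lemma 4.34 in the simplest case, still assuming that \(2\) is invertible in \(R\), take \(M=R\) with the trivial involution \(\tau=\mathrm{id}\), so that \(w=-\mathrm{id}\). The statement then specializes to an equivalence \[\underline{R}[\sigma]\simeq(\underline{R},-\mathrm{id})[1]\] in \(\mathrm{D}(\underline{R})\), i.e., \(\underline{R}[\sigma-1]\) is concentrated in degree \(0\). This is exactly the description of \(\underline{R}[\sigma-1]\) used in Remark 4.32, and, with \(\underline{R[\mathbb{N}]}\) in place of \(\underline{R}\), the description used in Remark 4.28.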
**Proposition 4.35**.: _Let \(R\) be a commutative ring in which 2 is invertible, and let \(A:=R[\mathbb{N}^{\oplus\mathbb{Z}/2}]\). Then we have an equivalence_ \[\operatorname{HR}(A/R)\simeq\bigoplus_{i=0}^{2}\Omega^{i}_{A/R}[i\sigma]\] in \(\mathrm{D}(\underline{A})\). For all commutative rings \(R\), with \(2\) invertible or not invertible, we have an equivalence_ \[\mathrm{HR}(R[\mathbb{N}^{\oplus\mathbb{Z}/2}]/R)\simeq\underline{R[\mathbb{N}^{ \oplus\mathbb{Z}/2}]}\oplus(\underline{R[v,w]}\oplus R[x,y])[1]\oplus\underline{ R[\mathbb{N}^{\oplus\mathbb{Z}/2}]}[1+\sigma]\] _in \(\mathrm{D}(\underline{A})\), where the middle summand has the involution changing \(v\) and \(x\) as well as \(w\) and \(y\), but no involution on the individual summands \(R[v,w]\) and \(R[x,y]\). We also have an equivalence of \(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\)-modules_ \[\mathrm{THR}(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}])\simeq\mathbb{S}[ \mathbb{N}^{\oplus\mathbb{Z}/2}]\oplus\Sigma^{1}i_{*}\mathbb{S}[\mathbb{N}^{ \oplus\mathbb{Z}/2}]\oplus\Sigma^{1+\sigma}\mathbb{S}[\mathbb{N}^{\oplus \mathbb{Z}/2}].\] Proof.: There are equivalences \[\mathrm{THR}(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}])\simeq \mathrm{THR}(N^{\mathbb{Z}/2}\mathbb{S}[\mathbb{N}])\] \[\simeq N^{\mathbb{Z}/2}\mathbb{S}[\mathbb{N}]\wedge_{N^{\mathbb{Z}/2}i_ {*}N^{\mathbb{Z}/2}\mathbb{S}[\mathbb{N}]}N^{\mathbb{Z}/2}\mathbb{S}[\mathbb{ N}]\] \[\simeq N^{\mathbb{Z}/2}(\mathbb{S}[\mathbb{N}]\wedge_{i^{*}N^{\mathbb{Z}/ 2}\mathbb{S}[\mathbb{N}]}\mathbb{S}[\mathbb{N}])\] \[\simeq N^{\mathbb{Z}/2}\mathrm{THH}(\mathbb{S}[\mathbb{N}])\] in \(\mathrm{NAlg}^{\mathbb{Z}/2}\), where we need the fact that \(N^{\mathbb{Z}/2}\) preserves colimits at the monoidal level [25, Proposition A.1.11] for the third one and [25, Proposition A.2.7(6)] for the fourth one. Recall that \(\mathrm{THR}(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}])\) is an \(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\)-algebra via the inclusion on the second smash factor. Hence the above chain of equivalences induces an \(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\)-algebra structure on \(N^{\mathbb{Z}/2}\mathrm{THH}(\mathbb{S}[\mathbb{N}])\), where the action is induced by applying \(N^{\mathbb{Z}/2}\) to the usual action (via the second smash factor) of \(\mathbb{S}[\mathbb{N}]\) on \(\mathrm{THH}(\mathbb{S}[\mathbb{N}])\). There is an equivalence of spectra \(\mathrm{THH}(\mathbb{S}[\mathbb{N}])\simeq\mathbb{S}\oplus\bigoplus_{j\geq 1 }\mathbb{S}[S^{1}]\) by [41, Proposition 3.20], which is equivalent to \(X\oplus\Sigma^{1}X\) with \(X:=\bigoplus_{j\geq 0}\mathbb{S}\in\mathrm{Sp}\). Analyzing this computation which reduces to the cyclic nerve of \(\mathbb{N}\), compare also [25, Proposition 4.2.11], we see that the \(\mathbb{S}[\mathbb{N}]\)-module structure of this normed spectrum is induced by the action of \(\mathbb{N}\) shifting the index in \(\bigoplus\). Hence we have an equivalence of \(\mathbb{S}[\mathbb{N}]\)-modules \(\mathrm{THH}(\mathbb{S}[\mathbb{N}])\simeq\mathbb{S}[\mathbb{N}]\oplus\Sigma ^{1}\mathbb{S}[\mathbb{N}]\). 
Since \(N^{\mathbb{Z}/2}\Sigma^{1}\mathbb{S}\simeq\Sigma^{1+\sigma}\mathbb{S}\) by [21, Proposition A.59], we have an equivalence \[N^{\mathbb{Z}/2}\mathrm{THH}(\mathbb{S}[\mathbb{N}])\simeq\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\oplus\Sigma^{1}i_{*}\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\oplus\Sigma^{1+\sigma}\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\] of \(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}]\)-modules using the model categorical description of the norm functor in [21, section 2.2.3] and the usual behavior of the smash product on direct sums. Now, let \(R\) be a commutative ring. By [25, Proposition 4.2.15] and the definition of \(\mathrm{HR}\), we have equivalences \[\mathrm{HR}(R[\mathbb{N}^{\oplus\mathbb{Z}/2}]/R)= \mathrm{THR}(R[\mathbb{N}^{\oplus\mathbb{Z}/2}])\wedge_{\mathrm{THR}(R)}\mathrm{H}\underline{R}\] \[\simeq (\mathrm{THR}(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}])\wedge\mathrm{THR}(R))\wedge_{\mathrm{THR}(R)}\mathrm{H}\underline{R}\] \[\simeq \mathrm{THR}(\mathbb{S}[\mathbb{N}^{\oplus\mathbb{Z}/2}])\wedge\mathrm{H}\underline{R}.\] Consequently, we obtain an equivalence in \(\mathrm{D}(\underline{R[\mathbb{N}^{\oplus\mathbb{Z}/2}]})\) \[\mathrm{HR}(R[\mathbb{N}^{\oplus\mathbb{Z}/2}]/R)\simeq\underline{R[\mathbb{N}^{\oplus\mathbb{Z}/2}]}\oplus(\underline{R}[v,w]\oplus R[x,y])[1]\oplus\underline{R}[\mathbb{N}^{\oplus\mathbb{Z}/2}][1+\sigma].\] From now on, we assume that \(2\) is invertible in \(R\). Using the above description of the middle summand via \(i_{*}(X\wedge X)\), it has the involution changing \(v\) and \(x\) as well as \(w\) and \(y\), but no involution on the individual summands. However, we can use both parts of Lemma 4.34, namely the second for \(\tau\) the sign involution, to deduce that the middle summand is equivalent to \(\Omega^{1}_{A/R}[\sigma]\). For the last summand, we use Lemma 4.34(1) and \(dxdy=-dydx\) to see that \(\Omega^{2}_{A/R}[2\sigma]\) is equivalent to \(R[\mathbb{N}^{\oplus\mathbb{Z}/2}][1+\sigma]\). Note that parts of this computation work for \(i_{*}\) and arbitrary monoids \(M\), e.g. \(\operatorname{THR}(\mathbb{S}[M^{\oplus\mathbb{Z}/2}])\simeq N^{\mathbb{Z}/2}\operatorname{THR}(\mathbb{S}[M])\). One should also observe the following: even if \(\rho_{n}\operatorname{THR}(R)\simeq 0\) for every odd \(n\) (e.g. for \(R\) a perfectoid ring in the completed variant, see Theorem 5.16), we no longer have \(\rho_{n}\operatorname{THR}(R[\mathbb{N}^{\oplus\mathbb{Z}/2}])\simeq 0\) for every odd \(n\). This follows from Proposition 3.10 and [25, Proposition 2.1.3], because \(i^{*}\operatorname{THR}(R[\mathbb{N}^{\oplus\mathbb{Z}/2}])=\operatorname{THH}(R[x,y])\) has homotopy groups in odd degrees using \[\operatorname{THH}(R[x,y])\simeq\operatorname{THH}(R)\wedge\operatorname{THH}(\mathbb{S}[\mathbb{N}])\wedge\operatorname{THH}(\mathbb{S}[\mathbb{N}])\] and the explicit description of \(\operatorname{THH}(\mathbb{S}[\mathbb{N}])\) in [41, Proposition 3.20]. However, if we proceed as for Proposition 4.37, Example 4.38, and Remark 5.18, i.e. use colimit perfection to construct a (quasiregular semi-)perfectoid ring \(S\) with involution from \(R[\mathbb{N}^{\oplus\mathbb{Z}/2}]\), then applying \(i^{*}\) and arguing as above we see that \(\operatorname{THR}(S)\) is even. **Remark 4.36**.: It would be nice to generalize Theorem 4.30 to commutative rings \(A\) and even \(R\) with involution, thus establishing the most general version of a real Hochschild-Kostant-Rosenberg theorem. 
We expect that if \(2\) is invertible in \(R\), then there exists a natural complete filtration \(\operatorname{Fil}_{\bullet}\operatorname{HR}(A/R)\) on \(\operatorname{HR}(A/R)\) whose \(n\)th graded piece is \[\operatorname{gr}^{n}\operatorname{HR}(A/R)\simeq\underline{\mathbb{L}}^{n}_{A/R}[n\sigma]\] for every integer \(n\). If \(A\) is a smooth \(R\)-algebra, then this would induce an equivalence \[\operatorname{gr}^{n}\operatorname{HR}(A/R)\simeq\underline{\Omega}^{n}_{A/R}[n\sigma]\] in \(\operatorname{D}(\underline{A})\) for every integer \(n\). Combining Lemmas 4.27 and 4.29 and Proposition 4.35 as well as a formula for \(\Omega^{1}\) of polynomial algebras, it is possible to show that this expected real Hochschild-Kostant-Rosenberg theorem for \(\operatorname{HR}(A/R)\) and individual \(A\in\operatorname{Poly}_{R}^{\mathbb{Z}/2}\) is true if \(2\) is invertible in \(R\). However, we cannot use Construction 4.6 to extend the filtration from \(\operatorname{Poly}_{R}^{\mathbb{Z}/2}\) to \(\operatorname{CRing}_{R}\) from the above results unless we show that the expected filtration is natural in \(A\in\operatorname{Poly}_{R}^{\mathbb{Z}/2}\). Without involution, the naturality was established in the proof of Theorem 4.30 using the naturality of the classical Hochschild-Kostant-Rosenberg theorem as an ingredient. In the more general case with involutions, one has to write down a map \(\underline{\Omega}^{1}_{A/R}\to\underline{H}_{\sigma}\operatorname{HR}(A/R)\) and show that it is natural on \(\operatorname{Poly}_{R}^{\mathbb{Z}/2}\). There are at least two possible strategies for this, both assuming \(2\) is invertible in \(R\). First, one could try to describe \(\operatorname{HR}(A/R)\) in terms of real simplicial sets to write down this map for general \(A\). We expect that after the Segal subdivision, \(\operatorname{HR}(A/R)\) can be written as a certain chain complex \[\cdots\to A\otimes_{R}A\otimes_{R}A\otimes_{R}A\to A\otimes_{R}A.\] Second, one could try to define this map using Lemmas 4.27 and 4.29, Proposition 4.35, and a similar formula for \(\Omega^{1}\) for tensor products of \(R\)-algebras. This second approach might be easier for defining the map \(\underline{\Omega}^{1}\to\underline{H}_{\sigma}\operatorname{HR}\), but verifying that this is natural will presumably be more complicated. Example 4.38 below discusses functoriality in a very special case. If \(2\) is not invertible in \(R\), then the situation is actually worse: Lemma 4.26 is not generalizable because \(\operatorname{HR}(A/R)\) will not be contained in \(\operatorname{D}(\underline{A})_{\sigma-\operatorname{sums}}\). Once functoriality is established, we can proceed in combination with Construction 4.6 to extend Theorem 4.30 to the case of nontrivial involutions. Of course, extending results of later chapters, e.g. Theorem 5.16, to (perfectoid) rings \(R\) with involution would require additional work. Finally, recall the evaluation functor \(ev=(-)(C_{2}/e)\) considered e.g. before Proposition 2.18. This functor is not conservative. Still, we may consider its derived functor from \(\operatorname{D}(\underline{R})\) to the derived category of the abelian category of \(R\)-modules with involution. 
This commutes with \([\sigma]\) in an obvious sense, and computations as in Proposition 4.35 suggest that the expected real Hochschild-Kostant-Rosenberg theorem for \(R\)-algebras with involution might hold in the setting of modules with involution (rather than Green modules) even if \(2\) is not invertible. Let us conclude the discussion about \(R\)-algebras with involutions by studying the other standard example. Observe that the following result is compatible with the above conjecture on HR of \(R\)-algebras with involution. **Proposition 4.37**.: _Let \(R\) be a commutative ring in which \(2\) is invertible, and consider \(A:=R[\mathbb{Z}^{\sigma}]\), where \(\mathbb{Z}^{\sigma}\) is the abelian group \(\mathbb{Z}\) with the involution \(x\mapsto-x\). Then we have an equivalence_ \[\operatorname{HR}(A/R)\simeq\underline{\Omega}^{0}_{A/R}\oplus\underline{\Omega}^{1}_{A/R}[\sigma]\] _in \(\operatorname{D}(\underline{A})\). For all commutative rings \(R\), with \(2\) invertible or not invertible, we have an equivalence_ \[\operatorname{HR}(A/R)\simeq\underline{A}\oplus\underline{A}[1]\] _in \(\operatorname{D}(\underline{A})\). We also have an equivalence of \(\mathbb{S}[\mathbb{Z}^{\sigma}]\)-modules_ \[\operatorname{THR}(\mathbb{S}[\mathbb{Z}^{\sigma}])\simeq\mathbb{S}[\mathbb{Z}^{\sigma}]\oplus\Sigma^{1}\mathbb{S}[\mathbb{Z}^{\sigma}].\] Proof.: By [16, Proposition 5.9] and [25, (4.13)], we have equivalences of \(\mathbb{Z}/2\)-spectra \[\operatorname{THR}(\mathbb{S}[\mathbb{Z}^{\sigma}])\simeq\mathbb{S}[\operatorname{B}^{\operatorname{di}}\mathbb{Z}^{\sigma}]\simeq\bigoplus_{j\in\mathbb{Z}^{\sigma}}\mathbb{S}[S^{1}],\] where the involution acts on the index \(\mathbb{Z}^{\sigma}\). This is equivalent to \(X\oplus\Sigma^{1}X\) with \(X:=\bigoplus_{j\in\mathbb{Z}^{\sigma}}\mathbb{S}\) in \(\operatorname{Sp}^{\mathbb{Z}/2}\). Analyzing this computation, which reduces to the dihedral nerve of \(\mathbb{Z}^{\sigma}\) (compare also [25, Proposition 4.2.6]), we see that the \(\mathbb{S}[\mathbb{Z}^{\sigma}]\)-module structure of this normed spectrum is induced by the action of \(\mathbb{Z}\) shifting the index in \(\bigoplus\). Hence we have an equivalence of \(\mathbb{S}[\mathbb{Z}^{\sigma}]\)-modules \(\operatorname{THR}(\mathbb{S}[\mathbb{Z}^{\sigma}])\simeq\mathbb{S}[\mathbb{Z}^{\sigma}]\oplus\Sigma^{1}\mathbb{S}[\mathbb{Z}^{\sigma}]\). We also have the equivalence of \(\mathbb{Z}/2\)-spectra \(\operatorname{THR}(R[M])\simeq\operatorname{THR}(R)\wedge\mathbb{S}[\operatorname{B}^{\operatorname{di}}M]\) for arbitrary monoids \(M\) with involution by [25, Proposition 4.2.15]. Arguing as in Proposition 4.35, we deduce an equivalence \(\operatorname{HR}(A/R)\simeq\underline{A}\oplus\underline{A}[1]\) in \(\operatorname{D}(\underline{A})\). From now on, we assume that \(2\) is invertible in \(R\). By Lemma 4.34(1), we have \(\underline{A}[1]\simeq\underline{(A,w)}[\sigma]\), where \(w\) is the involution on \(A\) given by \(f(x)\mapsto-f(1/x)\) if \(x\) denotes a generator of \(\mathbb{Z}^{\sigma}\). The involution on \(\Omega^{1}_{A/R}\) is given by \(f(x)dx\mapsto f(1/x)d(1/x)\). As \(A\)-modules with involution, we have the isomorphism \(\Omega^{1}_{A/R}\to(A,w)\) sending \(f(x)dx\) to \(xf(x)\). 
Hence we have an equivalence \(\operatorname{HR}(A/R)\simeq\underline{\Omega}^{0}_{A/R}\oplus\underline{\Omega}^{1}_{A/R}[\sigma]\) in \(\operatorname{D}(\underline{A})\). We continue to study the \(R\)-algebra with involution from Proposition 4.37 in the next example, illustrating functoriality in one of the easiest possible cases. **Example 4.38**.: Consider the real simplicial set \(\mathrm{N}^{\sigma}\mathbb{Z}^{\sigma}\) and its real geometric realization \(\mathrm{B}^{\sigma}\mathbb{Z}^{\sigma}\), see [25, Definition 4.2.1]. For an integer \(n\), the map \(n\colon\mathrm{N}^{\sigma}\mathbb{Z}^{\sigma}\to\mathrm{N}^{\sigma}\mathbb{Z}^{\sigma}\) induced by the multiplication \(n\colon\mathbb{Z}^{\sigma}\to\mathbb{Z}^{\sigma}\) sends the element \(1\in(\mathrm{N}^{\sigma}\mathbb{Z}^{\sigma})_{1}\) in simplicial degree \(1\) to the element \(n\in(\mathrm{N}^{\sigma}\mathbb{Z}^{\sigma})_{1}\) in simplicial degree \(1\). Together with the equivalence of \(\mathbb{Z}/2\)-spaces \(S^{1}\simeq\mathrm{B}^{\sigma}\mathbb{Z}^{\sigma}\) in [16, Example 5.13], we obtain a commutative square of \(\mathbb{Z}/2\)-spaces with horizontal equivalences, where \(n\colon S^{1}\to S^{1}\) denotes the multiplication by \(n\). Now, assume \(n\neq 0\). There is an isomorphism of real simplicial sets \(\mathrm{N}^{\mathrm{di}}\mathbb{Z}^{\sigma}\cong\mathbb{Z}^{\sigma}\times\mathrm{N}^{\sigma}\mathbb{Z}^{\sigma}\) which was already used implicitly in the proof of Proposition 4.37 above; see [25, proof of Proposition 4.2.6] for this description. Using this, we see that the morphism \(n\colon\mathrm{THR}(\mathbb{S}[\mathbb{Z}^{\sigma}])\to\mathrm{THR}(\mathbb{S}[\mathbb{Z}^{\sigma}])\) induced by \(n\colon\mathbb{Z}^{\sigma}\to\mathbb{Z}^{\sigma}\) can be identified with the map \[\bigoplus_{j\in\mathbb{Z}^{\sigma}}\mathbb{S}[S^{1}]\to\bigoplus_{j\in\mathbb{Z}^{\sigma}}\mathbb{S}[S^{1}]\] sending the \(j\)th factor to the \(nj\)th factor for \(j\in\mathbb{Z}^{\sigma}\), whose underlying map \(\mathbb{S}[S^{1}]\to\mathbb{S}[S^{1}]\) is induced by \(n\colon S^{1}\to S^{1}\). Using the description of \(\mathrm{HR}(R[\mathbb{Z}^{\sigma}]/R)\) in Proposition 4.37, we deduce that the morphism \(\mathrm{HR}(R[\mathbb{Z}^{\sigma}]/R)\to\mathrm{HR}(R[\mathbb{Z}^{\sigma}]/R)\) in \(\mathrm{D}(\underline{R})\) induced by \(n\colon\mathbb{Z}^{\sigma}\to\mathbb{Z}^{\sigma}\) can be identified with \[\mathrm{id}\oplus\alpha\colon\underline{R[\mathbb{Z}^{\sigma}]}\oplus\underline{R[\mathbb{Z}^{\sigma}]}[1]\to\underline{R[\mathbb{Z}^{\sigma}]}\oplus\underline{R[\mathbb{Z}^{\sigma}]}[1],\] where \(\alpha\) is \(R\)-linear and \(\alpha(x):=x^{n}\) for a generator \(x\) of \(\mathbb{Z}^{\sigma}\). If we assume that \(2\) is invertible in \(R\), then this can also be identified with the endomorphism on \(\underline{\Omega^{0}_{R[\mathbb{Z}^{\sigma}]/R}}\oplus\underline{\Omega^{1}_{R[\mathbb{Z}^{\sigma}]/R}}[\sigma]\) induced by \(f\colon R[\mathbb{Z}^{\sigma}]\to R[\mathbb{Z}^{\sigma}]\) given by \(f(x)=x^{n}\), which is a morphism in \(\mathrm{Poly}_{R}^{\mathbb{Z}/2}\). Hence the isomorphism \[\mathrm{HR}(R[\mathbb{Z}^{\sigma}]/R)\simeq\underline{\Omega^{0}_{R[\mathbb{Z}^{\sigma}]/R}}\oplus\underline{\Omega^{1}_{R[\mathbb{Z}^{\sigma}]/R}}[\sigma]\] is functorial with respect to \(f\). 
We will continue to study \(R[\mathbb{Z}^{\sigma}]\) in Remark 5.18 below, where it will be modified to obtain a quasiregular semiperfectoid (and in fact even perfectoid) \(\mathbb{F}_{p}\)-algebra with involution whose \(\mathrm{THR}\) will be shown to have the expected form. **Corollary 4.39**.: _Let \(R\) be a commutative ring, and let \(A\) be a simplicial commutative \(R\)-algebra. Then there exists a natural filtration \(\mathrm{Fil}_{\bullet}\mathrm{HR}(A/R;\mathbb{Z}_{p})\) on \(\mathrm{HR}(A/R;\mathbb{Z}_{p})\) whose \(n\)th graded piece is_ \[\mathrm{gr}^{n}\mathrm{HR}(A/R;\mathbb{Z}_{p})\simeq\iota(\wedge_{A}^{n}\mathbb{L}_{A/R})_{p}^{\wedge}[n\sigma]\] _for every integer \(n\). If \(2\) is invertible in \(R\) or if \(A\) is a smooth \(R\)-algebra, then this filtration is complete._ Proof.: Note that derived \(p\)-completion obviously commutes with \(\mathrm{gr}\) and \([n\sigma]\), and also with \(\iota\) by Proposition 2.20. Hence Lemmas 4.22(1),(2) and 4.23 together with Theorems 4.30 and 4.31 finish the proof. For a commutative ring \(R\) in which \(2\) is not invertible, the difficulty of proving that the filtration in Theorem 4.30 is complete is caused by the fact that the intersection \(\bigcap_{n\geq 0}(\operatorname{D}_{\geq 0}(\underline{R})[n\sigma])\) contains nonzero objects. Indeed, if \(\mathcal{F}\in\operatorname{D}_{\geq 0}(\underline{R})\) satisfies \(i^{*}\mathcal{F}\simeq 0\), then we have \(\mathcal{F}\,\square\,\underline{R^{\oplus\mathbb{Z}/2}}\simeq 0\), so we have \(\mathcal{F}\simeq\mathcal{F}[\sigma]\). It follows that we have \(\mathcal{F}\in\operatorname{D}_{\geq 0}(\underline{R})[n\sigma]\) for every integer \(n\). The following theorem is proved in [37, Theorem 6.20]; see also Remark 6.21 of loc. cit. **Theorem 4.40**.: _Let \(R\to A\) be a map of commutative rings. If the Frobenius \(\varphi\colon A/2\to A/2\) (i.e., the squaring map) is surjective, then the filtrations in Theorem 4.30 and Corollary 4.39 are complete._ Observe that the condition of Theorem 4.40 holds if \(A\) is perfectoid and \(R=\mathbb{Z}_{p}\) by definition. Of course, if \(2\) is invertible in \(R\) or if \(A\) is a smooth \(R\)-algebra, then the filtration of Theorem 4.30 is complete for other reasons (see Theorem 4.31), and the surjectivity of \(\varphi\) will not even hold in general in the smooth case.

## 5. THR of perfectoid rings

**Lemma 5.1**.: _Let \(R\to A\) be a homomorphism of commutative rings, and let \(M\) be an \(\underline{R}\)-module. Then the base change \(M\,\square_{\underline{R}}\,\underline{A}\) is obtained by taking \(\otimes_{R}A\) pointwise, i.e., \(M\,\square_{\underline{R}}\,\underline{A}\) has \(M(C_{2}/e)\otimes_{R}A\) and \(M(C_{2}/C_{2})\otimes_{R}A\) at the two levels, with restriction \(\operatorname{res}\otimes\operatorname{id}\), transfer \(\operatorname{tr}\otimes\operatorname{id}\), and involution \(w\otimes\operatorname{id}\)._ Proof.: By Lemma 2.17, we have \(\operatorname{tr}\circ\operatorname{res}=2\) for \(M\). Together with Lemma 2.16, we see that \(M\,\square\,\underline{A}\) is obtained by taking \(\otimes A\) pointwise to \(M\), i.e., \(M\,\square\,\underline{A}\) has \(M(C_{2}/e)\otimes A\) and \(M(C_{2}/C_{2})\otimes A\) at the two levels, with restriction \(\operatorname{res}\otimes\operatorname{id}\), transfer \(\operatorname{tr}\otimes\operatorname{id}\), and involution \(w\otimes\operatorname{id}\). We can similarly show that \(M\,\square\,\underline{R}\,\square\,\underline{A}\) is obtained by taking \(\otimes R\otimes A\) pointwise to \(M\). 
Since \(M\,\square_{\underline{R}}\,\underline{A}\) is the equalizer of the two induced morphisms \(M\,\square\,\underline{R}\,\square\,\underline{A}\rightrightarrows M\,\square\,\underline{A}\), the above descriptions for \(M\,\square\,\underline{A}\) and \(M\,\square\,\underline{R}\,\square\,\underline{A}\) imply the claim. Here is the outline of the proof of Theorem 5.16, which is an equivariant refinement of [9, Theorem 6.1]. Let \(R\) be a perfectoid ring.
1. We compute the slices of \(\operatorname{HR}(R;\mathbb{Z}_{p})\) in Proposition 5.2 using Theorem 4.30, which is an equivariant refinement of the Hochschild-Kostant-Rosenberg theorem.
2. We define pseudo-coherent objects of \(\operatorname{D}(\underline{R})\) in Definition 5.3. Lemmas 5.4-5.6 establish some useful facts about pseudo-coherent objects. Lemma 5.7 shows that \(\operatorname{HR}(R;\mathbb{Z}_{p})\) and \(\operatorname{THR}(R;\mathbb{Z}_{p})\) are pseudo-coherent.
3. In Lemmas 5.8 and 5.9, we establish a certain base change property for HR and THR. Lemma 5.13 deals with an application of the Nakayama lemma.
4. We do not establish a real refinement of [35, Proposition IV.4.2] in general. Instead, we first compare the zeroth slices of \(\operatorname{THR}(R;\mathbb{Z}_{p})\) and \(\operatorname{HR}(R;\mathbb{Z}_{p})\) in Lemma 5.11. Lemma 5.12 provides an equivariant refinement of the induction argument in [9, proof of Theorem 6.1], and we use Lemma 5.12 in Lemmas 5.14 and 5.15 to compare the first and second slices.
5. To finish the proof of Theorem 5.16, we combine the above arguments as in [9, proof of Theorem 6.1], and use Lemma 5.12 again for the induction argument.
Hence, in general, several of the following results are real (respectively Mackey) refinements of statements established or used in the proof of [9, Theorem 6.1]. **Proposition 5.2**.: _Let \(R\) be a perfectoid ring. Then using notation introduced after Proposition 3.2, there is a natural equivalence of \(\underline{R}\)-modules_ \[\rho_{n}(\mathrm{HR}(R;\mathbb{Z}_{p}))\simeq\left\{\begin{array}{ll}\underline{R}&\text{if $n$ is even and nonnegative},\\ 0&\text{otherwise}.\end{array}\right.\] Proof.: Consider the filtration \(\mathrm{Fil}_{\bullet}\mathrm{HR}(R;\mathbb{Z}_{p})\) on \(\mathrm{HR}(R;\mathbb{Z}_{p})\) in Corollary 4.39 whose \(n\)th graded piece is \(\iota(\wedge_{R}^{n}\mathbb{L}_{R/\mathbb{Z}_{p}})_{p}^{\wedge}[n\sigma]\) for every integer \(n\). By [9, Proposition 4.19(2)], we have \((\wedge_{R}^{n}\mathbb{L}_{R/\mathbb{Z}_{p}})_{p}^{\wedge}\simeq R[n]\) if \(n\geq 0\). Hence we have \(\iota(\wedge_{R}^{n}\mathbb{L}_{R/\mathbb{Z}_{p}})_{p}^{\wedge}[n\sigma]\simeq\underline{R}[n+n\sigma]\), which is an object of \(\mathrm{D}_{\geq 2n}(\underline{R})\cap\mathrm{D}_{\leq 2n}(\underline{R})\) by Proposition 3.2. Use induction to show \[\mathrm{cofib}(\mathrm{Fil}_{m}\mathrm{HR}(R;\mathbb{Z}_{p})\to\mathrm{Fil}_{n}\mathrm{HR}(R;\mathbb{Z}_{p}))\in\mathrm{D}_{\geq 2n}(\underline{R})\cap\mathrm{D}_{\leq 2m-2}(\underline{R})\] for every integer \(m\geq n\). Take \(\lim_{m}\) and use the completeness of the filtration established in Theorem 4.40 to deduce \(\mathrm{Fil}_{n}\mathrm{HR}(R;\mathbb{Z}_{p})\in\mathrm{D}_{\geq 2n}(\underline{R})\). In particular, we have \(\mathrm{Fil}_{n+1}\mathrm{HR}(R;\mathbb{Z}_{p})\in\mathrm{D}_{\geq 2n+2}(\underline{R})\). We also have \(\mathrm{cofib}(\mathrm{Fil}_{n}\mathrm{HR}(R;\mathbb{Z}_{p})\to\mathrm{HR}(R;\mathbb{Z}_{p}))\in\mathrm{D}_{\leq 2n-2}(\underline{R})\). 
It follows that we have \(P_{n}^{n}\mathrm{HR}(R;\mathbb{Z}_{p})\simeq\underline{R}[a+a\sigma]\) for even \(n=2a\geq 0\) and \(P_{n}^{n}\mathrm{HR}(R;\mathbb{Z}_{p})\simeq 0\) otherwise. If 2 is invertible in \(R\) or equivalently if the fixed prime \(p\) is different from 2, then we can use Corollary 4.39 instead of Theorem 4.40 in the above proof. A complex of \(R\)-modules is _pseudo-coherent_ if it is quasi-isomorphic to a (homologically) bounded below complex of finitely generated free \(R\)-modules. Here, a bounded below complex means one whose \(H_{n}(-)\) vanishes for \(n\ll 0\); this is what is called "bounded to the right" in [9, p. 243]. **Definition 5.3**.: Let \(R\) be a commutative ring. An \(\underline{R}\)-module is _finitely generated_ if it is pointwise finitely generated. An \(\underline{R}\)-module is _free_ if it is isomorphic to a sum of copies of \(\underline{R}\) and \(\underline{R^{\oplus\mathbb{Z}/2}}\). A complex of \(\underline{R}\)-modules is _pseudo-coherent_ if it is quasi-isomorphic to a bounded below complex of finitely generated free \(\underline{R}\)-modules. **Lemma 5.4**.: _Let \(\mathcal{E}\to\mathcal{F}\to\mathcal{G}\) be a fiber sequence in \(\mathrm{D}(\underline{R})\), where \(R\) is a commutative ring. If two of \(\mathcal{E}\), \(\mathcal{F}\), and \(\mathcal{G}\) are pseudo-coherent, then the remaining one is pseudo-coherent too._ Proof.: Without loss of generality we may assume that \(\mathcal{F}\) and \(\mathcal{G}\) are pseudo-coherent. The morphism \(\mathcal{F}\to\mathcal{G}\) admits an explicit model \(\mathcal{F}^{\prime}\to\mathcal{G}^{\prime}\) such that \(\mathcal{F}^{\prime}\) and \(\mathcal{G}^{\prime}\) are bounded below complexes of finitely generated free \(\underline{R}\)-modules. To conclude, observe that the cofiber of \(\mathcal{F}^{\prime}\to\mathcal{G}^{\prime}\) is a bounded below complex of finitely generated free \(\underline{R}\)-modules. Recall the full subcategory \(\mathrm{D}_{\geq n}(\underline{R})\) of \(\mathrm{D}(\underline{R})\) introduced in Definition 3.3. **Lemma 5.5**.: _Let \(R\) be a commutative ring, and let \(\mathcal{F}\) be a pseudo-coherent complex of \(\underline{R}\)-modules. If there exists an integer \(n\) such that \(\underline{H}_{m}(\mathcal{F})=0\) for every integer \(m<n\), then there exists a quasi-isomorphism \(\mathcal{E}\to\mathcal{F}\) such that \(\mathcal{E}\) is a bounded below complex of finitely generated free \(\underline{R}\)-modules and the entries of \(\mathcal{E}\) in degrees less than \(n\) are \(0\)._ Proof.: Argue as in [44, Tag 064U]. **Lemma 5.6**.: _Let \(R\) be a commutative ring, and let \(\mathcal{F}\) be a pseudo-coherent complex of \(\underline{R}\)-modules. If \(\mathcal{F}\in\mathrm{D}_{\geq n}(\underline{R})\) for some integer \(n\), then \(\rho_{n}(\mathcal{F})\) is a finitely generated \(\underline{R}\)-module._ Proof.: After shifting and using Lemma 3.4, we may assume \(n=0\) or \(n=1\). By Lemma 3.5, \(\underline{H}_{m}(\mathcal{F})=0\) for every integer \(m<n\). Consider the quasi-isomorphism \(\mathcal{E}\to\mathcal{F}\) in Lemma 5.5. If \(n=0\), then \(\rho_{0}(\mathcal{E})=\underline{H}_{0}(\mathcal{E})\) is finitely generated. If \(n=1\), then \(\underline{H}_{1}(\mathcal{E})\) is finitely generated, which implies that \(\rho_{1}(\mathcal{E})\) is finitely generated. Consider the homotopy \(t\)-structure on \(\mathrm{Sp}^{\mathbb{Z}/2}\) in [25, Proposition A.3.4]. This is different from the slice filtrations in [21] and [22]. 
Let \[\tau_{\leq n},\tau_{\geq n}\colon\mathrm{Sp}^{\mathbb{Z}/2}\to\mathrm{Sp}^{\mathbb{Z}/2}\] be the associated truncation functors, which are compatible with those of Recollection 2.2 under Proposition 2.11 after forgetting the module structure. **Lemma 5.7**.: _Let \(R\) be a perfectoid ring. Then \(\mathrm{HR}(R;\mathbb{Z}_{p})\) and \(\mathrm{THR}(R;\mathbb{Z}_{p})\) are pseudo-coherent complexes of \(\underline{R}\)-modules._ Proof.: By Proposition 5.2, we obtain a morphism \(\underline{R}[n+n\sigma]\to\mathrm{HR}(R;\mathbb{Z}_{p})\) in \(\mathrm{D}(\underline{R})\) corresponding to a generator of \(R\simeq H_{n+n\sigma}\mathrm{HR}(R;\mathbb{Z}_{p})\) for every integer \(n\geq 0\). Hence we obtain a morphism \[\bigoplus_{n\geq 0}\underline{R}[n+n\sigma]\to\mathrm{HR}(R;\mathbb{Z}_{p})\] in \(\mathrm{D}(\underline{R})\), which is a quasi-isomorphism. In particular, \(\mathrm{HR}(R;\mathbb{Z}_{p})\) is pseudo-coherent. This implies that \[\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(\mathbb{Z})}\mathrm{H}\underline{\mathbb{Z}}\] is pseudo-coherent by Proposition 4.24. We can view \(\underline{\pi}_{n}(\mathrm{THR}(\mathbb{Z}))\) as a \(\underline{\pi}_{0}(\mathrm{THR}(\mathbb{Z}))\)-module for every integer \(n\). Since \(\underline{\pi}_{0}(\mathrm{THR}(\mathbb{Z}))\simeq\underline{\mathbb{Z}}\) by [25, Proposition 2.3.5] and \(\underline{\pi}_{n}(\mathrm{THR}(\mathbb{Z}))\) is finite for \(n>0\) by Lemma 4.13, use Construction 4.15 to see that \[\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(\mathbb{Z})}\mathrm{H}\underline{\pi}_{n}\mathrm{THR}(\mathbb{Z})\] is pseudo-coherent for every integer \(n\), i.e., it is quasi-isomorphic to a bounded below complex \(\mathcal{F}_{n}\) of finitely generated free \(\underline{R}\)-modules. By [4, Corollary 6.8.1], Proposition 4.12, and Lemma 5.5, we may assume that \(\mathcal{F}_{n}\) vanishes in negative degrees. Let us inductively construct a quasi-isomorphism \[\mathcal{E}_{n}\xrightarrow{\simeq}\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(\mathbb{Z})}\tau_{\leq n}\mathrm{THR}(\mathbb{Z})\] with a bounded below complex \(\mathcal{E}_{n}\) of finitely generated free \(\underline{R}\)-modules. If \(n=0\), then take \(\mathcal{E}_{0}:=\mathcal{F}_{0}\). Assume that we have constructed a quasi-isomorphism for \(n\). Then there exists a morphism \(\mathcal{E}_{n}[-1]\to\mathcal{F}_{n+1}\) of chain complexes with a commutative square in \(\mathrm{D}(\underline{R})\). Let \(\mathcal{E}_{n+1}\) be the mapping cone of \(\mathcal{E}_{n}[-1]\to\mathcal{F}_{n+1}\) so that we have a quasi-isomorphism \[\mathcal{E}_{n+1}\xrightarrow{\simeq}\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(\mathbb{Z})}\tau_{\leq n+1}\mathrm{THR}(\mathbb{Z}).\] In any given degree, the construction implies that \(\mathcal{E}_{n}\) becomes stable for sufficiently large \(n\). This implies that \(\lim_{n}\mathcal{E}_{n}\) is a bounded below complex of finitely generated free \(\underline{R}\)-modules. Lemma 4.21 finishes the proof. **Lemma 5.8**.: _Let \(R\to R^{\prime}\) be a homomorphism of perfectoid rings. Then the natural morphism_ \[\mathrm{HR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}^{\mathbb{L}}\,\underline{R^{\prime}}\to\mathrm{HR}(R^{\prime};\mathbb{Z}_{p})\]
_is an equivalence._ As in the proof of Lemma 5.7, this implies that the induced map of \(\mathbb{Z}/2\)-spectra \[\underline{\pi}_{n}(\operatorname{THR}(\mathbb{Z}))\wedge_{\operatorname{THR}(\mathbb{Z})}\operatorname{THR}(R;\mathbb{Z}_{p})\wedge_{\operatorname{H}\underline{R}}\operatorname{H}\underline{R^{\prime}}\] \[\rightarrow \underline{\pi}_{n}(\operatorname{THR}(\mathbb{Z}))\wedge_{\operatorname{THR}(\mathbb{Z})}\operatorname{THR}(R^{\prime};\mathbb{Z}_{p})\] is an equivalence for every integer \(n\). By induction on \(n\), we see that the induced map of \(\mathbb{Z}/2\)-spectra \[\tau_{\leq n}(\operatorname{THR}(\mathbb{Z}))\wedge_{\operatorname{THR}(\mathbb{Z})}\operatorname{THR}(R;\mathbb{Z}_{p})\wedge_{\operatorname{H}\underline{R}}\operatorname{H}\underline{R^{\prime}}\] \[\rightarrow \tau_{\leq n}(\operatorname{THR}(\mathbb{Z}))\wedge_{\operatorname{THR}(\mathbb{Z})}\operatorname{THR}(R^{\prime};\mathbb{Z}_{p})\] is an equivalence for every integer \(n\). Take \(\lim_{n}\) on both sides, and use Lemma 4.21 to conclude. In [25, Theorem 3.4.3], we proved a descent theorem for THR with respect to the isovariant etale topology. A key ingredient for this was the etale base change Theorem 3.2.3 in loc. cit. It is an obvious question to ask if THR satisfies descent even for the isovariant flat topology. When restricted to sites with trivial \(C_{2}\)-action, the above Lemma 5.9 yields base change for the special case of perfectoid rings. In [37], fpqc- and syntomic descent for THR will be investigated. **Lemma 5.10**.: _The induced morphism_ \[\underline{\mathbb{Q}}\rightarrow\operatorname{THR}(\mathbb{Z})\operatorname{\square}_{\underline{\mathbb{Z}}}^{\mathbb{L}}\underline{\mathbb{Q}}\] _in \(\operatorname{D}(\underline{\mathbb{Q}})\) is an equivalence._ Proof.: It suffices to show that the induced map of \(\mathbb{Z}/2\)-spectra \[\operatorname{H}\underline{\mathbb{Q}}\rightarrow\operatorname{THR}(\mathbb{Z})\wedge_{\operatorname{H}\underline{\mathbb{Z}}}\operatorname{H}\underline{\mathbb{Q}}\] is an equivalence. Both sides vanish after applying \(\Phi^{\mathbb{Z}/2}\) since \(\Phi^{\mathbb{Z}/2}\operatorname{H}\underline{\mathbb{Q}}\) vanishes and \(\Phi^{\mathbb{Z}/2}\) is monoidal. Hence it suffices to show the claim after applying \(i^{*}\), i.e., it suffices to show that the induced map of spectra \[\operatorname{H}\mathbb{Q}\rightarrow\operatorname{THH}(\mathbb{Z})\wedge_{\operatorname{H}\mathbb{Z}}\operatorname{H}\mathbb{Q}\] is an equivalence. This is a consequence of the finiteness of \(\pi_{*}\operatorname{THH}(\mathbb{Z})\) in [9, Lemma 2.5]. We are still working on the real refinement of [9, Theorem 6.1], which will be achieved in Theorem 5.16 below. 
Some of the following results should also be compared with [35, Proposition IV.4.2]. For a commutative ring \(R\), we have the map to the first smash factor \(\operatorname{THR}(R)\rightarrow\operatorname{THR}(R)\wedge_{ \operatorname{THR}(\mathbb{Z})}\operatorname{H}\underline{\mathbb{Z}}= \operatorname{HR}(R).\) After taking \((-)_{p}^{\wedge}\) on both sides, we obtain the morphism \(\operatorname{THR}(R;\mathbb{Z}_{p})\rightarrow\operatorname{HR}(R;\mathbb{Z} _{p})\) in \(\operatorname{D}(\underline{R})\). **Lemma 5.11**.: _Let \(R\) be a perfectoid ring. Then the induced morphisms of \(\underline{R}\)-modules_ \[\underline{R}\rightarrow\rho_{0}\operatorname{THR}(R;\mathbb{Z}_{p}) \rightarrow\rho_{0}\operatorname{HR}(R;\mathbb{Z}_{p})\] _are isomorphisms._ Proof.: The Frobenius \(\varphi\colon R/2\to R/2\) is surjective by assumption. Now combine [25, Proposition 2.3.5] (see also [24]) and Proposition 5.2. For an \(\mathbb{F}_{p}\)-algebra \(R\), the _colimit perfection_ is \(\operatorname{colim}(R\xrightarrow{\varphi}R\xrightarrow{\varphi}\cdots)\), where \(\varphi\) is the Frobenius, see e.g. [9, Remark 8.15]. **Lemma 5.12**.: _Let \(R\) be a perfectoid ring, let \(R^{\prime}\) be the colimit perfection of \(R/p\), and let \(n\) be an integer. If \(\rho_{i}\mathrm{THR}(R;\mathbb{Z}_{p})\) is a finitely generated free \(\underline{R}\)-module and the induced morphism of \(\underline{R}\)-modules_ \[\rho_{i}\mathrm{THR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}\,\underline{R ^{\prime}}\to\rho_{i}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] _is an isomorphism for every integer \(i<n\), then \(\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\) is a finitely generated \(\underline{R}\)-module, and the induced morphism of \(\underline{R}\)-modules_ \[\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}\,\underline{R ^{\prime}}\to\rho_{n}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] _is an isomorphism. A similar result holds for \(\mathrm{HR}\) too._ Proof.: We focus on the case of \(\mathrm{THR}\) since the proofs are similar. By Lemmas 5.4, 5.6, and 5.7, \(\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\) is finitely generated. A projective \(\underline{R}\)-module is flat due to [25, Proposition A.5.3]. It follows that the induced morphism of \(\mathbb{Z}/2\)-spectra \[P_{i}^{i}\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\underline{R}}\mathrm{H} \underline{R^{\prime}}\to P_{i}^{i}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] is an equivalence for every integer \(i<n\). This implies that the induced map of \(\mathbb{Z}/2\)-spectra \[P^{n-1}\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\underline{R}}\mathrm{H} \underline{R^{\prime}}\to P^{n-1}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] is an equivalence. Together with Lemma 5.9, we see that the induced map of \(\mathbb{Z}/2\)-spectra \[P_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\underline{R}}\mathrm{H} \underline{R^{\prime}}\to P_{n}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] is an equivalence. Take \(\underline{\pi}_{m+m\sigma}\) (resp. \(\underline{\pi}_{m+1+m\sigma}\)) on both sides if \(n=2m\) (resp. \(n=2m+1\)) to obtain the desired isomorphism. **Lemma 5.13**.: _Let \(R\) be a perfectoid ring, let \(R^{\prime}\) be the colimit perfection of \(R/p\), and let \(f\colon M^{\prime}\to M\) be a morphism of finitely generated \(\underline{R}\)-modules. 
If the induced morphism \(g\colon M^{\prime}\,\square_{\underline{R}}\,\underline{R^{\prime}}\to M\,\square_{\underline{R}}\,\underline{R^{\prime}}\) is an epimorphism, then \(f\) is an epimorphism._

Proof.: Lemma 5.1 implies that the induced homomorphisms of \(R^{\prime}\)-modules \[M^{\prime}(C_{2}/e)\otimes_{R}R^{\prime}\to M(C_{2}/e)\otimes_{R}R^{\prime}\text{ and }M^{\prime}(C_{2}/C_{2})\otimes_{R}R^{\prime}\to M(C_{2}/C_{2})\otimes_{R}R^{\prime}\] are epimorphisms. Nakayama's lemma finishes the proof since the kernel of \(R\to R^{\prime}\) is contained in the Jacobson radical of \(R\).

**Lemma 5.14**.: _Let \(R\) be a perfectoid ring. Then we have \(\rho_{1}\mathrm{THR}(R;\mathbb{Z}_{p})\simeq 0\)._

Proof.: Consider the map \(\mathrm{H}\underline{R}\to\mathrm{THR}(R)\) to the second smash factor. Let \(R^{\prime}\) be the colimit perfection of \(R/p\). Observe that \(R^{\prime}\) is a perfect \(\mathbb{F}_{p}\)-algebra. By Lemma 5.9 for \(\mathbb{F}_{p}\to R^{\prime}\) and the computation of \(\mathrm{THR}(\mathbb{F}_{p})\) obtained by Dotto-Moi-Patchkoria-Reeh [16, Theorem 5.15], we have \(\rho_{1}\mathrm{THR}(R^{\prime})\simeq 0\). Together with Lemmas 5.6, 5.7, 5.11, and 5.12, we have an isomorphism \[\rho_{1}\mathrm{THR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}\,\underline{R^{\prime}}\cong 0.\] Lemma 5.13 finishes the proof.

**Lemma 5.15**.: _Let \(R\) be a perfectoid ring. Then the induced morphism of \(\underline{R}\)-modules_ \[\rho_{2}\mathrm{THR}(R;\mathbb{Z}_{p})\to\rho_{2}\mathrm{HR}(R;\mathbb{Z}_{p})\] _is an epimorphism._

Proof.: Assume first that \(R\) is a perfect \(\mathbb{F}_{p}\)-algebra. Then \(R\) is a perfectoid ring. By Lemma 5.9, we reduce to the case when \(R=\mathbb{F}_{p}\). Consider the induced commutative square (5.1) whose horizontal homomorphisms are the restriction homomorphisms. The left vertical homomorphism is an epimorphism by [16, Lemma 5.18(ii)]. The right vertical homomorphism can be identified with \(\pi_{1+\sigma}(\underline{R}[1+\sigma])\to\pi_{2}(R[2])\) by Proposition 5.2, which is an isomorphism. The lower horizontal homomorphism is an isomorphism by [35, Proposition IV.4.2]. Hence the upper horizontal homomorphism is an epimorphism. For general \(R\), let \(R^{\prime}\) be the colimit perfection of \(R/p\). Since \(R^{\prime}\) is a perfect \(\mathbb{F}_{p}\)-algebra, we know that the induced morphism of \(\underline{R}^{\prime}\)-modules \[\rho_{2}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\to\rho_{2}\mathrm{HR}(R^{\prime};\mathbb{Z}_{p})\] is an epimorphism. By Lemmas 5.11, 5.12, and 5.14, \(\rho_{2}\mathrm{THR}(R;\mathbb{Z}_{p})\) is a finitely generated \(\underline{R}\)-module, and the induced morphism of \(\underline{R}^{\prime}\)-modules \[\rho_{2}\mathrm{THR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}\,\underline{R^{\prime}}\to\rho_{2}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] is an isomorphism. On the other hand, Proposition 5.2 and Lemma 5.12 imply that \(\rho_{2}(\mathrm{HR}(R;\mathbb{Z}_{p}))\) is a finitely generated \(\underline{R}\)-module and the induced morphism of \(\underline{R}^{\prime}\)-modules \[\rho_{2}\mathrm{HR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}\,\underline{R^{\prime}}\to\rho_{2}\mathrm{HR}(R^{\prime};\mathbb{Z}_{p})\] is an isomorphism. Lemma 5.13 finishes the proof.

For \(A\in\mathrm{NAlg}^{\mathbb{Z}/2}\), let \[T_{A}(S^{1+\sigma}):=\bigoplus_{n=0}^{\infty}\Sigma^{n+\sigma n}A\] denote the free associative \(A\)-algebra on \(S^{1+\sigma}\).
Dotto-Moi-Patchkoria-Reeh [16, Theorem 5.15] show that there is an equivalence of normed \(\mathbb{Z}/2\)-spectra \[T_{\mathrm{H}\mathbb{F}_{p}}(S^{1+\sigma})\simeq\mathrm{THR}(\mathbb{F}_{p}) \tag{5.2}\] for every prime \(p\). This is a crucial ingredient for our computation of THR of perfectoid rings below, similar to Bokstedt's computation of \(\mathrm{THH}(\mathbb{F}_{p})\) being crucial for [9]. In the case that \(R\) is a perfect field of characteristic \(2\), Dotto-Moi-Patchkoria compute \(\mathrm{THR}(R;\mathbb{Z}_{p})\) in [15, Remark 5.14]. It is observed in [39, Proposition 6.26] that the proof holds for a perfect \(\mathbb{F}_{p}\)-algebra \(R\) and any prime \(p\). We now generalize this computation to the mixed characteristic case. Note that if \(2\) is not invertible, the proof of the following result relies on Theorem 4.40, whose proof appears in [37, Theorem 6.20]. **Theorem 5.16**.: _Let \(R\) be a perfectoid ring. Then there is a natural equivalence of normed \(\mathbb{Z}/2\)-spectra_ \[\mathrm{THR}(R;\mathbb{Z}_{p})\simeq T_{\mathrm{HR}}(S^{1+\sigma}),\] _In particular, we have_ \[\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\cong\left\{\begin{array}{ll}\underline{R} &\text{if $n$ is even and nonnegative},\\ 0&\text{otherwise}.\end{array}\right.\] Proof.: Lemma 5.15 gives an epimorphism of \(R\)-modules \[\pi_{1+\sigma}\mathrm{THR}(R;\mathbb{Z}_{p})\to\pi_{1+\sigma}\mathrm{HR}(R; \mathbb{Z}_{p}).\] Choose \(\tilde{u}\in\pi_{1+\sigma}\mathrm{THR}(R;\mathbb{Z}_{p})\) that maps to a generator of \(\pi_{1+\sigma}\mathrm{HR}(R;\mathbb{Z}_{p})\cong R\), where the isomorphism follows from Proposition 5.2. Consider the corresponding map of \(\mathbb{Z}/2\)-spectra \(x\colon\Sigma^{1+\sigma}\mathbb{S}\to\mathrm{THR}(R;\mathbb{Z}_{p})\). Since \(\mathrm{THR}(R;\mathbb{Z}_{p})\) is an \(\mathrm{H}\underline{R}\)-algebra, we obtain the induced map \[T_{\underline{R}\underline{R}}(S^{1+\sigma})\to\mathrm{THR}(R;\mathbb{Z}_{p}).\] We claim that the induced morphism \[\rho_{n}T_{\underline{R}\underline{R}}(S^{1+\sigma})\to\rho_{n}\mathrm{THR}( R;\mathbb{Z}_{p}).\] is an isomorphism for every integer \(n\), which implies the theorem due to Proposition 3.8. The claim holds when \(R=\mathbb{F}_{p}\) by the proof of [16, Theorem 5.15] and the commutativity of (5.1). Lemmas 5.1 and 5.9 imply that the claim holds when \(R\) has characteristic \(p\). Assume that \(R\) has mixed characteristic. Note that we have \[\rho_{n}T_{\mathrm{H}\underline{R}}(S^{1+\sigma})\cong\left\{\begin{array}{ ll}\underline{R}&\text{if $n$ is even and nonnegative},\\ 0&\text{otherwise}.\end{array}\right.\] We proceed by induction on \(n\). The claim holds for \(n<0\) since \(\mathrm{THR}(R;\mathbb{Z}_{p})\) is \((-1)\)-connected by Proposition 4.12. Assume \(n\geq 0\). Let \(R^{\prime}\) be the colimit perfection of \(R/p\). By the induction hypothesis and Lemma 5.12, the induced morphism of \(\underline{R^{\prime}}\)-modules \[\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\,\square_{\underline{R}}\,\underline{R ^{\prime}}\to\rho_{n}\mathrm{THR}(R^{\prime};\mathbb{Z}_{p})\] is an isomorphism. 
Since the claim holds for the perfect \(\mathbb{F}_{p}\)-algebra \(R^{\prime}\), the induced morphism of \(\underline{R^{\prime}}\)-modules \[\rho_{n}T_{\mathrm{H}\underline{R}}(S^{1+\sigma})\,\square_{\underline{R}}\, \underline{R^{\prime}}\to\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\,\square_{ \underline{R}}\,\underline{R^{\prime}}\] is an isomorphism using the obvious isomorphism \(\rho_{n}T_{\mathrm{H}\underline{R}}(S^{1+\sigma})\,\square_{\underline{R}}\, \underline{R^{\prime}}\simeq\rho_{n}T_{\mathrm{H}\underline{R^{\prime}}}(S^{ 1+\sigma})\). Together with Lemma 5.13, we see that the induced morphism of \(\underline{R}\)-modules \[\rho_{n}T_{\mathrm{H}\underline{R}}(S^{1+\sigma})\to\rho_{n}\mathrm{THR}(R; \mathbb{Z}_{p})\] is an epimorphism. Since \(R\) is reduced, it suffices to show that the ranks of \(\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})(C_{2}/e)\) and \(\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})(C_{2}/C_{2})\) at any the point of \(\mathrm{Spec}(R)\) are at least one. This claim holds for any point of \(\mathrm{Spec}(R^{\prime})\), which is a subset of \(\mathrm{Spec}(R)\) as a topological space. The complement of the inclusion \(\mathrm{Spec}(R^{\prime})\subset\mathrm{Spec}(R)\) consists of the characteristic zero points. We have an equivalence \[\rho_{n}\mathrm{THR}(R;\mathbb{Z}_{p})\wedge_{\mathrm{H}\underline{Z}}\, \underline{\mathrm{R}}\underline{\mathrm{Q}}\simeq\rho_{n}\mathrm{HR}(R; \mathbb{Z}_{p})\wedge_{\mathrm{H}\underline{Z}}\mathrm{H}\underline{\mathrm{Q}}\] using Lemma 5.10. Proposition 5.2 finishes the proof. In particular, we have an isomorphism of graded Green functors, \(\underline{\pi}_{(1+\sigma)*}(\mathrm{THR}(R))\cong\underline{R}[\tilde{u}]\) with \(\tilde{u}\in\pi_{1+\sigma}(\mathrm{THR}(R))\), where the Green functor structure follows e.g. from [25, section A.4]. Also note that the result shows that for any perfectoid ring \(R\), the \(\mathbb{Z}/2\)-spectrum \(\mathrm{THR}(R)\) is very even in the sense of Definition 3.9. The following result refines [9, Theorem 6.7]. **Theorem 5.17**.: _Let \(R\) be a perfectoid ring, and let \(A\) be an \(R\)-algebra. Then there is an equivalence of \(\mathrm{H}\underline{A}\)-modules_ \[\mathrm{HR}(A/R;\mathbb{Z}_{p})\simeq\mathrm{THR}(A;\mathbb{Z}_{p})\wedge_{ \mathrm{THR}(R;\mathbb{Z}_{p})}\mathrm{H}\underline{R}.\] _Furthermore, there is a cofiber sequence_ \[\mathrm{THR}(A;\mathbb{Z}_{p})[1+\sigma]\xrightarrow{\bar{u}}\mathrm{THR}(A; \mathbb{Z}_{p})\to\mathrm{HR}(A/R;\mathbb{Z}_{p}).\] Proof.: Thanks to Theorem 5.16, we have a cofiber sequence \[\mathrm{THR}(R;\mathbb{Z}_{p})[1+\sigma]\xrightarrow{\bar{\alpha}}\mathrm{THR} (R;\mathbb{Z}_{p})\to\mathrm{HR}(R/R;\mathbb{Z}_{p}).\] After taking \(\mathrm{THR}(A;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(R;\mathbb{Z}_{p})}\), we obtain a cofiber sequence \[\mathrm{THR}(A;\mathbb{Z}_{p})[1+\sigma]\xrightarrow{\bar{u}}\mathrm{THR}(A; \mathbb{Z}_{p})\to\mathrm{THR}(A;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(R;\mathbb{ Z}_{p})}\mathrm{H}\underline{R}.\] Since \(\mathrm{THR}(A;\mathbb{Z}_{p})\) and \(\mathrm{THR}(A;\mathbb{Z}_{p})[1+\sigma]\) are derived \(p\)-complete, we see that \(\mathrm{THR}(A;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(R;\mathbb{Z}_{p})}\mathrm{H} \underline{R}\) is derived \(p\)-complete. 
Together with Proposition 2.25, we obtain a natural equivalence of \(\mathrm{H}\underline{A}\)-modules \[\mathrm{THR}(A;\mathbb{Z}_{p})\wedge_{\mathrm{THR}(R;\mathbb{Z}_{p})}\mathrm{ H}\underline{R}\simeq(\mathrm{THR}(A)\wedge_{\mathrm{THR}(R)}\mathrm{H} \underline{R})_{p}^{\wedge}.\] The latter one is identified with \(\mathrm{HR}(A/R;\mathbb{Z}_{p})\), which finishes the proof. The proof shows that this cofiber sequence is \(S^{\sigma}\)-equivariant, which will be important in [37] when considering TCR. **Remark 5.18**.: We continue the discussion of Proposition 4.37 and Example 4.38. Now, let \(R\) be an \(\mathbb{F}_{p}\)-algebra. Then the map \(p\colon\alpha^{*}(\mathbb{S}[S^{1}])\to\alpha^{*}(\mathbb{S}[S^{1}])\) in \(\mathrm{D}(\underline{R})\) induced by \(p\colon S^{1}\to S^{1}\) can be identified with \(\mathrm{id}\oplus 0\colon R\oplus R[1]\to R\oplus R[1]\). On the other hand, using \(\mathrm{THR}(R[\mathbb{Z}^{\sigma}])\simeq\mathrm{THR}(R)\wedge\mathrm{THR}( \mathbb{S}[\mathbb{Z}^{\sigma}])\) from [25, Proposition 2.1.5] and the above description of \(\mathrm{THR}(\mathbb{S}[\mathbb{Z}^{\sigma}])\), we have an equivalence \[\mathrm{THR}(R[\mathbb{Z}^{\sigma}])\simeq\mathrm{THR}(R)\square_{\underline {R}}^{\mathrm{L}}(\underline{R[\mathbb{Z}^{\sigma}]}\oplus\mathcal{F}[1])\] in \(\mathrm{D}(\underline{R[\mathbb{Z}^{\sigma}]})\) for some \(\mathcal{F}\in\mathrm{D}(\underline{R[\mathbb{Z}^{\sigma}]})\) that forgets to \(\underline{R}\oplus\bigoplus_{j\geq 1}\underline{R^{\otimes\mathbb{Z}/2}} \in\mathrm{D}(\underline{R})\). The above description of \(p\) on \(\alpha^{*}(\mathbb{S}[S^{1}])\) implies that the map \(p\colon\mathrm{THR}(R[\mathbb{Z}^{\sigma}])\to\mathrm{THR}(R[\mathbb{Z}^{ \sigma}])\) sends the direct summand \(\mathrm{THR}(R)\square_{\underline{R}}^{\mathrm{L}}\mathcal{F}[1]\) to \(0\). Let \(\alpha\colon\mathrm{THR}(R[\mathbb{Z}^{\sigma}])\to\mathrm{THR}(R[\mathbb{Z}^{ \sigma}])\) be the morphism in \(\mathrm{D}(\underline{R[\mathbb{Z}^{\sigma}]})\) induced by \(p\colon\mathbb{Z}^{\sigma}\to\mathbb{Z}^{\sigma}\). We obtain an equivalence \[\mathrm{THR}(R)\,\square_{\underline{R}}^{\mathrm{L}}\,\underline{R[\mathbb{Z} _{p}]^{\sigma}]}\simeq\mathrm{colim}(\mathrm{THR}(R[\mathbb{Z}^{\sigma}]) \xrightarrow{\alpha}\mathrm{THR}(R[\mathbb{Z}^{\sigma}])\xrightarrow{\alpha}\cdots)\] in \(\mathrm{D}(\underline{R[\mathbb{Z}^{\sigma}]})\), where \(\mathbb{Z}_{p}^{\lfloor\frac{1}{p}\rfloor^{\sigma}}\) is the abelian group \(\mathbb{Z}_{p}^{\lfloor\frac{1}{p}\rfloor}\) with the involution \(x\mapsto-x\) for \(x\in\mathbb{Z}[\frac{1}{p}]\), which satisfies \[\mathbb{Z}_{p}^{\lfloor\frac{1}{p}\rfloor^{\sigma}}:=\mathrm{colim}(\mathbb{Z} ^{\sigma}\xrightarrow{p}\mathbb{Z}^{\sigma}\xrightarrow{p}\cdots).\] The functor \(\mathrm{THR}\colon\mathrm{NAlg}^{\mathbb{Z}/2}\to\mathrm{NAlg}^{\mathbb{Z}/2}\) preserves colimits, and the forgetful functor \(\mathrm{NAlg}^{\mathbb{Z}/2}\to\mathrm{Sp}^{\mathbb{Z}/2}\) preserves sifted colimits by Proposition 4.4. Hence we obtain an equivalence \[\mathrm{THR}(R[\mathbb{Z}_{p}^{\lfloor\frac{1}{p}\rfloor^{\sigma}])\simeq \mathrm{THR}(R)\,\square_{\underline{R}}^{\mathrm{L}}\,\underline{R[\mathbb{ Z}_{p}^{\lfloor\frac{1}{p}\rfloor^{\sigma}}]}\] in \(\mathrm{D}(\underline{R[\mathbb{Z}^{\sigma}]})\). 
Taking the induced maps of simplicial sets \(\mathbb{Z}[\frac{1}{p}]^{\sigma}\to\mathrm{N}^{\mathrm{di}}\mathbb{Z}[\frac{1 }{p}]^{\sigma}\to\mathbb{Z}[\frac{1}{p}]^{\sigma}\) into account, we see that the above equivalence for \(\mathrm{THR}(R[\mathbb{Z}_{p}^{\lfloor\frac{1}{p}\rfloor^{\sigma}]})\) is indeed an equivalence in \(\mathrm{D}(R[\mathbb{Z}_{p}^{\lfloor\frac{1}{p}\rfloor^{\sigma}}])\). The example considered here is the group ring of a uniquely \(p\)-divisible group over a perfect \(\mathbb{F}_{p}\)-algebra, and those are obviously perfect \(\mathbb{F}_{p}\)-algebras and thus perfectoid rings by [8, Example 3.6]. Hence if \(R\) is a perfect \(\mathbb{F}_{p}\)-algebra, and \(A:=R[\mathbb{Z}[\frac{1}{p}]^{\sigma}]\) the associated perfect \(\mathbb{F}_{p}\)-algebra with a nontrivial involution, then in particular the underlying ring \(R[\mathbb{Z}[\frac{1}{p}]]\) is perfectoid. In Example 4.38, we show an equivalence \(\operatorname{THR}(A)\simeq\operatorname{THR}(R)\,\square_{R}^{L}\,\underline {A}\) in \(\operatorname{D}(\underline{A})\). Theorem 5.16 gives an equivalence of normed \(\mathbb{Z}/2\)-spectra \(T_{\underline{H}\underline{A}}(S^{1+\sigma})\simeq\operatorname{THR}(R; \mathbb{Z}_{p}).\) Together with \(T_{\underline{H}\underline{A}}(S^{1+\sigma})\,\square_{\underline{R}}^{L}\, \underline{A}\simeq T_{\underline{H}\underline{A}}(S^{1+\sigma})\), we obtain an equivalence of normed \(\mathbb{Z}/2\)-spectra \[T_{\underline{H}\underline{A}}(S^{1+\sigma})\simeq\operatorname{THR}(A; \mathbb{Z}_{p}).\] Hence we expect that Theorem 5.16 is satisfied by a more general class of perfectoid rings \(R\) with certain (or all) nontrivial involutions. Note however that this spectrum is not necessarily very even, as \(\operatorname{H}\underline{A}\) is not very even for rings \(A\) with a non-trivial involution. We can argue similarly in the setting of Proposition 4.35 to obtain an equivalence of normed \(\mathbb{Z}/2\)-spectra \[T_{\underline{H}\underline{B}}(S^{1+\sigma})\simeq\operatorname{THR}(B; \mathbb{Z}_{p})\] with \(B:=R[\mathbb{N}^{\oplus\mathbb{Z}/2}[\frac{1}{p}]]\) if \(R\) is a perfect \(\mathbb{F}_{p}\)-algebra. Observe that Theorem 5.17 extends to \(R[\mathbb{Z}[\frac{1}{p}]^{\sigma}]\) and \(R[\mathbb{N}^{\oplus\mathbb{Z}/2}[\frac{1}{p}]]\). ## Appendix A \(\infty\)-category of modules and sifted colimits The purpose of this appendix is to show that the \(\infty\)-category of modules admits sifted colimits under reasonable hypotheses by arguing as in [32, Proposition 3.2.3.1]. **Lemma A.1**.: _Let \(p\colon\mathcal{M}\to\Delta^{1}\) be a cocartesian fibration of \(\infty\)-categories, which classifies a functor \(f\colon\mathcal{C}\to\mathcal{D}\) in the sense of [31, Definition 3.3.2.2], where \(\mathcal{C}\) and \(\mathcal{D}\) are the fibers of \(p\) at \(\{0\}\) and \(\{1\}\). Let \(\mathcal{C}^{\prime}\) be the full subcategory of \(\operatorname{Fun}_{\Delta^{1}}(\Delta^{1},\mathcal{M})\) spanned by the functors \(\Delta^{1}\to\mathcal{M}\) which are \(p\)-cocartesian edges (defined in [31, Definition 2.4.11 and before Proposition 2.4.1.8]). If \(K\) is a simplicial set and \(f\) preserves \(K\)-indexed colimits, then the inclusion functor \(\mathcal{C}^{\prime}\to\operatorname{Fun}_{\Delta^{1}}(\Delta^{1},\mathcal{M})\) preserves \(K\)-indexed colimits._ Proof.: Let \(i_{0}\colon\{0\}\to\Delta^{1}\) and \(i_{1}\colon\{1\}\to\Delta^{1}\) be the inclusion functors. 
By [31, Proof of Lemma 5.4.7.15], there exists a commutative diagram of \(\infty\)-categories for some equivalence of \(\infty\)-categories \(g\colon\mathcal{C}\to\mathcal{C}^{\prime}\), where \(a\) is the inclusion functor. The functors \(i_{0}^{*}\) and \(i_{1}^{*}\) preserve \(K\)-indexed colimits, and the pair of functors \((i_{0}^{*},i_{1}^{*})\) is conservative. Hence the functor \(ag\) preserves \(K\)-indexed colimits since \(f\) and \(\operatorname{id}\) preserve \(K\)-indexed colimits. Since \(g\) is an equivalence of \(\infty\)-categories, \(a\) preserves \(K\)-indexed colimits. For a monoidal \(\infty\)-category \(\mathcal{C}^{\otimes}\), let \(\operatorname{LMod}(\mathcal{C})\) denote the \(\infty\)-category of left module objects of \(\mathcal{C}\) in [32, Definition 4.2.1.13]. To form this, we need the \(\infty\)-operads \(\operatorname{Assoc}^{\otimes}\) and \(\mathcal{LM}^{\otimes}\) in [32, Definitions 4.1.1.3, 4.2.1.7]. **Proposition A.2**.: _Let \(\mathcal{C}^{\otimes}\) be a monoidal \(\infty\)-category such that \(\mathcal{C}\) admits sifted colimits and the monoidal product \(\otimes\) preserves sifted colimits in each variable. Then \(\operatorname{Mod}(\mathcal{C})\) admits sifted colimits, and the forgetful functor_ \[p\colon\operatorname{LMod}(\mathcal{C})\to\operatorname{Alg}(\mathcal{C}) \times\mathcal{C}\] _sending a left \(A\)-module \(M\) to \((A,M)\) is conservative and preserves sifted colimits._ Proof.: We have the cofibration of \(\infty\)-operads \(q\colon\mathcal{C}^{\otimes}\to\operatorname{Assoc}^{\otimes}\). The assumption that \(\mathcal{C}\) has sifted colimits and \(\otimes\) preserves sifted colimits in each variable implies that \(q\) is compatible with sifted colimits in the sense of [32, Definition 3.1.1.18]. Let \(u\colon\mathcal{LM}^{\otimes}\to\operatorname{Assoc}^{\otimes}\) be the functor in [32, Remark 4.2.1.9]. By [32, Lemma 3.2.3.7], the associated functor \(f_{!}\colon\mathcal{C}^{\otimes}_{X}\to\mathcal{C}^{\otimes}_{Y}\) preserves sifted colimits for every morphism \(f\colon X\to Y\) in \(\operatorname{Assoc}^{\otimes}\). Let \(K\) be a simplicial set. Apply [31, Corollary 4.3.1.11] to \(q\) and [32, Lemma 3.2.2.9] to \(q^{op}\) and \((\mathcal{LM}^{\otimes})^{op}\) to deduce the following results: 1. The induced functor \(r\colon\operatorname{Fun}(\mathcal{LM}^{\otimes},\mathcal{C}^{\otimes})\to \operatorname{Fun}(\mathcal{LM}^{\otimes},\operatorname{Assoc}^{\otimes})\) admits relative sifted colimits in the sense of [31, Definition 4.3.1.1]. 2. A map \(K^{\rhd}\to\operatorname{Fun}(\mathcal{LM}^{\otimes},\mathcal{C}^{\otimes})\) is an \(r\)-colimit diagram if and only if the induced map \(K^{\rhd}\to\operatorname{Fun}(\{X\},\mathcal{C}^{\otimes})\) is a colimit diagram for every \(X\in\mathcal{LM}^{\otimes}\). By restricting to \(u\in\operatorname{Fun}(\mathcal{LM}^{\otimes},\operatorname{Assoc}^{\otimes})\), we deduce the following results: 1. The \(\infty\)-category \(\operatorname{Fun}_{\operatorname{Assoc}^{\otimes}}(\mathcal{LM}^{\otimes}, \mathcal{C}^{\otimes})\) admits sifted colimits. 2. A map \(K^{\rhd}\to\operatorname{Fun}_{\operatorname{Assoc}^{\otimes}}(\mathcal{LM} ^{\otimes},\mathcal{C}^{\otimes})\) is a colimit diagram if and only if the induced map \(K^{\rhd}\to\mathcal{C}^{\otimes}_{u(X)}\) is a colimit diagram for every \(X\in\mathcal{LM}^{\otimes}\). 
The full subcategory \(\operatorname{LMod}(\mathcal{C})\) of \(\operatorname{Fun}_{\operatorname{Assoc}^{\otimes}}(\mathcal{LM}^{\otimes}, \mathcal{C}^{\otimes})\) is spanned by the maps of \(\infty\)-operads, i.e., those functors \(\mathcal{LM}^{\otimes}\to\mathcal{C}^{\otimes}\) sending inert morphisms in \(\mathcal{LM}^{\otimes}\) to inert morphisms in \(\mathcal{C}^{\otimes}\). We now claim the following: 1. The \(\infty\)-category \(\operatorname{LMod}(\mathcal{C})\) admits sifted colimits, and the inclusion functor \(\operatorname{LMod}(\mathcal{C})\to\operatorname{Fun}_{\operatorname{Assoc}^{ \otimes}}(\mathcal{LM}^{\otimes},\operatorname{Assoc}^{\otimes})\) preserves sifted colimits. 2. A map \(K^{\rhd}\to\operatorname{LMod}(\mathcal{C})\) is a colimit diagram if and only if the induced map \(K^{\rhd}\to\mathcal{C}^{\otimes}_{u(X)}\) is a colimit diagram for every \(X\in\mathcal{LM}^{\otimes}\). Let \(a\colon K\to\operatorname{LMod}(\mathcal{C})\) be a functor. To show (i"), it suffices to show that the colimit of \(a\) computed in \(\operatorname{Fun}_{\operatorname{Assoc}^{\otimes}}(\mathcal{LM}^{\otimes}, \mathcal{C}^{\otimes})\) is contained in \(\operatorname{LMod}(\mathcal{C})\). This amounts to show that for every inert morphism \(e\) in \(\mathcal{LM}^{\otimes}\), the colimit of the restriction \(K\to\operatorname{Fun}_{\Delta^{1}}(\Delta^{1},\mathcal{C}^{\otimes}_{u(e)})\) corresponds to a morphism that is an inert morphism in \(\mathcal{C}^{\otimes}\), i.e., a \(q\)-cartesian morphism in \(\mathcal{C}^{\otimes}\) by [32, Proposition 2.1.2.22]. Lemma A.1 finishes the proof of (i"). As a consequence of (i") and (ii'), we obtain (ii"). By [32, Corollary 4.2.3.2], \(p\colon\operatorname{LMod}(\mathcal{C})\to\operatorname{Alg}(\mathcal{C}) \times\mathcal{C}\) is conservative. For a symmetric monoidal \(\infty\)-category \(\mathcal{C}^{\otimes}\), let \(\operatorname{Mod}(\mathcal{C})\) denote the underlying \(\infty\)-category of the generalized \(\infty\)-operad \(\operatorname{Mod}(\mathcal{C})^{\otimes}\) in [32, Definition 4.5.1.1]. For a commutative algebra object \(A\) of \(\mathcal{C}\), let \(\operatorname{Mod}_{A}(\mathcal{C})\) denote the restriction of \(\operatorname{Mod}(\mathcal{C})\) to \(A\)-module objects. We often use the simpler notation \(\operatorname{Mod}_{A}\) instead of \(\operatorname{Mod}_{A}(\mathcal{C})\). By [32, Corollary 4.5.1.6], there is a natural equivalence of \(\infty\)-categories \[\operatorname{Mod}(\mathcal{C})\simeq\operatorname{LMod}(\mathcal{C})\times_{ \operatorname{Alg}(\mathcal{C})}\operatorname{CAlg}(\mathcal{C}).\] **Proposition A.3**.: _Let \(\mathcal{C}^{\otimes}\) be a symmetric monoidal \(\infty\)-category such that \(\mathcal{C}\) admits sifted colimits and the monoidal product \(\otimes\) preserves sifted colimits in each variable. Then \(\operatorname{Mod}(\mathcal{C})\) admits sifted colimits, and the forgetful functor_ \[p\colon\operatorname{Mod}(\mathcal{C})\to\operatorname{CAlg}(\mathcal{C}) \times\mathcal{C}\] _sending an \(A\)-module \(M\) to \((A,M)\) is conservative and preserves sifted colimits._ Proof.: This is an immediate consequence of Proposition A.2.
2304.02706
Spectroscopic Orbits of Subsystems in Multiple Stars. X (Summary)
Results of a large program of spectroscopic monitoring of nearby solar-type stellar hierarchical systems using the CHIRON echelle spectrograph at the 1.5 m telescope are summarized. Ten papers of this series contain 102 spectroscopic orbits and substantially contribute to the knowledge of periods and eccentricities, providing input for the study of their formation and early evolution. Radial velocities of additional 91 targets without CHIRON orbits (members of wide physical pairs) are published here. Our results are compared to the recent Gaia Non-Single Star (NSS) catalog, revealing its strengths and weaknesses. The NSS provides orbital periods for 31 objects of the CHIRON sample (about one third). Of the 22 spectroscopic NSS orbits in common, 14 are in good agreement with CHIRON, the rest have reduced velocity amplitudes or other problems. Hence ground-based monitoring gives, so far, a more accurate and complete picture of nearby hierarchies than Gaia. The distribution of inner periods in hierarchical systems is non-monotonic, showing a shallow minimum in the 30-100 days bin and a strong excess at shorter periods, compared to the smooth distribution of simple binaries in the field. The period-eccentricity diagram of inner subsystems updated by this survey, recent literature, and Gaia, displays an interesting structure.
Andrei Tokovinin
2023-04-05T19:02:55Z
http://arxiv.org/abs/2304.02706v1
# Spectroscopic Orbits of Subsystems in Multiple Stars. X (Summary)

###### Abstract

Results of a large program of spectroscopic monitoring of nearby solar-type stellar hierarchical systems using the CHIRON echelle spectrograph at the 1.5 m telescope are summarized. Ten papers of this series contain 102 spectroscopic orbits and substantially contribute to the knowledge of periods and eccentricities, providing input for the study of their formation and early evolution. Radial velocities of additional 91 targets without CHIRON orbits (members of wide physical pairs) are published here. Our results are compared to the recent Gaia Non-Single Star (NSS) catalog, revealing its strengths and weaknesses. The NSS provides orbital periods for 31 objects of the CHIRON sample (about one third). Of the 22 spectroscopic NSS orbits in common, 14 are in good agreement with CHIRON, the rest have reduced velocity amplitudes or other problems. Hence ground-based monitoring gives, so far, a more accurate and complete picture of nearby hierarchies than Gaia. The distribution of inner periods in hierarchical systems is non-monotonic, showing a shallow minimum in the 30-100 days bin and a strong excess at shorter periods, compared to the smooth distribution of simple binaries in the field. The period-eccentricity diagram of inner subsystems updated by this survey, recent literature, and Gaia, displays an interesting structure.

binaries:spectroscopic -- binaries:visual

Andrei Tokovinin

## 1 Introduction

Observations of spectroscopic subsystems in nearby solar-type stars are motivated by the desire to complement statistics of hierarchies in the solar neighborhood (Tokovinin, 2014). In most cases, discovery of such subsystems by variable radial velocity (RV) or astrometric acceleration has not been followed by determination of the orbits. Without knowledge of periods and mass ratios, statistical distributions remain poorly constrained, hence useless as input for testing models of formation and early evolution of hierarchies. Development of predictive models of stellar multiplicity remains the ultimate goal that justifies new observations.

Monitoring of RVs is a classical way to find periods, mass ratios, and orbital eccentricities. Such a long-term program has been conducted since 2015 at the 1.5 m telescope at Cerro Tololo with the CHIRON high-resolution optical echelle spectrograph (Tokovinin et al., 2013). Its main targets were solar-type stars within 67 pc belonging to hierarchical systems with three or more components. The program has been complemented by hierarchies at larger distances, also with solar-type components. Short-period orbits could be determined rapidly, while longer periods required observations for several years. The resulting orbits, accompanied by discussions of each hierarchy, were published in a series of 10 papers listed in Table 1, with a total of 102 spectroscopic orbits determined throughout this program. A summary of this effort is provided here.

The classical approach of monitoring selected objects from the ground is nowadays complemented by the large spectroscopic surveys, e.g. GALAH (Buder et al., 2021) and LAMOST (Cui et al., 2012), and by the Gaia space mission (Gaia Collaboration et al., 2021). The Gaia Data Release 3 (GDR3) includes a catalog of non-single stars (NSS) which contains about \(3\times 10^{5}\) spectroscopic and/or astrometric orbits (Gaia Collaboration et al., 2022).
Uniformity of the Gaia coverage, compared to the selective object-by-object study, is a huge advantage for the statistics. However, the current NSS suffers from incompleteness and selection and contains a non-negligible fraction of wrong orbits, as noted by its compilers (Pourbaix et al., 2022). I compare here the NSS and CHIRON orbits to highlight the advantages and caveats of both approaches. Taken together, the ground- and space-based orbits substantially complement our knowledge of stellar hierarchies and their architecture. Some statistical results are presented here based on the current version of the Multiple Star Catalog, MSC (Tokovinin, 2018) that can be accessed online.1 Footnote 1: [http://www.ctio.noirlab.edu/~atokovin/stars/](http://www.ctio.noirlab.edu/~atokovin/stars/) and [http://vizier.u-strasbg.fr/viz-bin/VizieR-47-source=J/ApJS/235/6](http://vizier.u-strasbg.fr/viz-bin/VizieR-47-source=J/ApJS/235/6) The CHIRON multiplicity project is part of a larger observational effort. Studies of stellar hierarchies on the northern sky using a correlation radial-velocity meter were conducted in the 1990s, as summarized by Tokovinin & Smekhov (2002), and continued in the following decades in a series of papers by Gorynya et al. (Tokovinin & Gorynya, 2001, 2007; Gorynya & Tokovinin, 2014, 2018). In parallel, high-resolution imaging of wider (mostly astrometric) subsystems using adaptive optics and speckle interferometry was undertaken (Tokovinin et al., 2010, 2012); it is continued at present. With all techniques (including Gaia) combined, a wide and deep coverage of the full parameter space can be achieved for nearby stars. The CHIRON sample is presented in Section 2 and the orbits are compared to the Gaia orbits in Section 3. In Section 4, RVs of other targets (mostly wide visual binaries) measured with CHIRON are published for future use. The period distribution and the \(P-e\) diagram are discussed briefly in Section 5, and the summary is given in Section 6. ## 2 Overview of the CHIRON program The main 67-pc sample of hierarchies with solar-type primary stars was based on the Hipparcos catalog (Tokovinin, 2014), so it is convenient to use here HIP numbers as primary identifiers. I extracted from the database of CHIRON observations all Hipparcos stars relevant to this project, omitting a few systems which are not in Hipparcos. Table 2 lists 120 individual targets (resolved or blended components of multiple systems) featured here. The columns contain the HIP number, MSC/WDS code based on the J2000 coordinates, component identifier, its equatorial coordinates for J2000, \(V\) magnitude, parallax \(\varpi\), its source, the orbital period \(P\) determined in this project, and the orbit reference (the paper number). In some cases the components of a multiple system have individual HIP numbers (e.g. HIP 6868 and 6873); otherwise, the secondary stars are identified here by the HIP number of the primary and a component letter (e.g. HIP 24320 A and B). Detailed information on all components (accurate coordinates, proper motions, etc.) can be found in the MSC. As indicated in the parallax reference column, most parallaxes come from the GDR3 or its NSS extension (DR3N). If parallax of the component is not measured in GDR3, parallax of other system's components is used instead (DR3*). If there are no wide components, the parallax comes from Hipparcos (HIP) or visual orbits (dyn and orb). The median parallax of the CHIRON sample is 17.3 mas, 85 targets are within 67 pc (parallaxes above 15 mas). 
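For readers unfamiliar with the parallax convention used above, the 67 pc limit is simply the 15 mas cut expressed as a distance, since \(d\,[\mathrm{pc}]=1000/\varpi\,[\mathrm{mas}]\). The minimal Python sketch below illustrates the conversion and the cut; the parallax values in it are made-up placeholders, not entries from Table 2.

```python
import numpy as np

# Hypothetical parallaxes in milliarcseconds, standing in for the Table 2 column.
parallax_mas = np.array([17.3, 15.0, 8.2, 42.5, 14.9])

distance_pc = 1000.0 / parallax_mas      # d [pc] = 1000 / parallax [mas]
within_67pc = parallax_mas >= 15.0       # same cut as distance_pc <= ~67 pc

print(np.round(distance_pc, 1))
print(int(within_67pc.sum()), "of", parallax_mas.size, "targets within 67 pc")
```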
Four targets are revealed as spectroscopic triples with inner periods of a few days and outer periods on the order of a year (HIP 11537A, 27970A, 56282A, 111598A); both spectroscopic periods are listed for those stars, which also have outer visual companions (they are rare quadruples of 3+1 hierarchy). Candidates for determination of spectroscopic orbits were mostly identified in the survey by Nordstrom et al. (2004) and in other publications as components of visual binaries with variable RVs. They are featured in the original 67-pc sample as hierarchies with unknown inner periods. Some of these stars also have astrometric accelerations (the astrometric and spectroscopic binaries overlap). Measurements of the RVs of wide (resolved) nearby binaries with the fiber echelle and CHIRON spectrographs at the CTIO 1.5 m telescope are reported in (Tokovinin, 2015). The aim was to detect new subsystems (one measurement of a substantial RV difference between the components is sufficient for a detection).

\begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{ Paper\({}^{a}\) } & \multicolumn{1}{c}{Bibcode} & \(N_{\rm sys}\) & \(N_{\rm orb}\) \\ \hline 1 & 2016AJ...152...11T & 4 & 7 \\ 2 & 2016AJ...152...10T & 4 & 7 \\ 3 & 2018AJ...156...48T & 6 & 9 \\ 4 & 2018AJ...156.194T & 9 & 10 \\ 5 & 2019AJ...157..91T & 9 & 9 \\ 6 & 2019AJ...158..222T & 11 & 12 \\ 6a & 2020AJ...159...88T & 1 & 2 \\ 7 & 2020AJ...160...69T & 8 & 12 \\ 8 & 2022AJ...163..161T & 10 & 19 \\ 9 & Submitted & 14 & 15 \\ \hline \end{tabular} \end{table} Table 1: Publications on the CHIRON Survey

This mini-survey of 96 wide pairs revealed 17 new subsystems which were added to the present program. Components of additional wide multiples were episodically observed with CHIRON in the following years as a complement to the main program; their RVs are reported here in Section 4. Typically, components of wide pairs were observed only once.

The goal of our survey was to determine all (or most) periods up to 1000 days. Naturally, some periods turned out to be longer than this arbitrary threshold. The distribution of orbital periods determined in this survey is plotted in Figure 1 in dashed line. For comparison, the distribution of periods in all known inner subsystems in hierarchies within 67 pc with primary star masses from 0.5 to 1.5 \(M_{\odot}\) is plotted. The dotted line traces the standard log-normal period distribution of field binaries with a median of \(10^{5}\) days and a logarithmic dispersion of 2.28 dex (Raghavan et al., 2010). At \(P>100\) days, both distributions are similar, showing an increase of orbits with longer periods. However, inner subsystems have a strong excess of orbits with \(P<30\) days compared to the canonical log-normal curve. In other words, short-period binaries have a strong preference to belong to hierarchical systems. This phenomenon is further discussed below in Section 5.2.
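To put the comparison with the field population on a quantitative footing, the sketch below evaluates the same log-normal law in \(\log_{10}P\) (median \(10^{5}\) days, dispersion 2.28 dex, as quoted above from Raghavan et al. 2010) and prints the fraction of field binaries expected per period bin. This is only an illustrative evaluation of the reference curve, not the code behind Figure 1.

```python
from math import erf, log10, sqrt

MU, SIGMA = 5.0, 2.28      # log10(P/days): median 1e5 days, dispersion 2.28 dex

def lognormal_cdf(period_days):
    """Fraction of field binaries with period below `period_days`."""
    z = (log10(period_days) - MU) / SIGMA
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for lo, hi in [(1, 10), (10, 30), (30, 100), (100, 1000)]:
    frac = lognormal_cdf(hi) - lognormal_cdf(lo)
    print(f"{lo:>4}-{hi:<4} d: {100 * frac:4.1f}% of field binaries")
```

Under this law only a few percent of field binaries fall below 30 days, which is why the short-period excess of inner subsystems stands out so clearly.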
Neither the MSC nor the CHIRON samples are complete, but, owing to the extended monitoring, periods shorter than \(10^{3}\) days should be represented uniformly. So, the excess of short periods and the local minimum in the 30-100 days bin are real features rather than selection effects.

Figure 1.— Histogram of inner periods in hierarchical systems of solar-type stars within 67 pc (full line) and of the subset of 90 inner periods resulting from the CHIRON program (dashed line). The dotted line is a log-normal period distribution of field solar-type binaries from Raghavan et al. (2010) with arbitrary normalization.

Table 2: (continued; columns as above: HIP, WDS, Comp., R.A., Decl., \(V\), \(\varpi\), Ref.plx\({}^{a}\), \(P\), SB\({}^{b}\), Ref.) \({}^{a}\)Parallax references: DR3 – Gaia Data Release 3; DR3N – Gaia Data Release 3, NSS catalog; DR3* – Gaia Data Release 3, other component of the system; HIP – Hipparcos; dyn – dynamical (visual orbit and mass); orb – orbital (visual-spectroscopic orbit). \({}^{b}\) SB orbit flags: 0 – no orbit; 1 – single-lined orbit; 2 – double-lined orbit.

## 3 Comparison between CHIRON and Gaia NSS Orbits

The number of spectroscopic orbits in GDR3/NSS (\(1.8\times 10^{5}\)) overwhelms all spectroscopic orbits determined to date from the ground: on March 24, 2020, the SB9 catalog2 (Pourbaix et al., 2004) contained only 4004 systems, with 2/3 of those on the northern sky. Gaia determined orbits by an impersonal automated procedure (Gaia Collaboration et al., 2022; Pourbaix et al., 2022). The duration of the GDR3 mission (34 months) and the observing cadence set by the Gaia scanning law naturally restrict the range of accessible orbits. Most Gaia periods are under 1000 days, and orbital periods close to one year and its harmonics are underrepresented. The distribution of Gaia orbits on the sky is nonuniform and clearly shows an imprint of the scanning law. Candidates for Gaia orbit determination went through a vetting procedure (visual binaries with close separations were rejected), and the orbits were checked by various filters. Documentation available on the Gaia web site (Pourbaix et al., 2022) describes the vetting and quality control. Comparison with known spectroscopic orbits in the cited document indicated a "recovery rate" (correctness) of Gaia spectroscopic orbits between 0.7 and 0.9, depending on the comparison sample and criteria used. Footnote 2: [https://sb9.astro.ulb.ac.be/](https://sb9.astro.ulb.ac.be/)

The CHIRON sample presented here was matched by coordinates to the Gaia catalogs of single- and double-lined spectroscopic orbits (SB1 and SB2, respectively) and other NSS solutions. Coordinate search reveals 31 Gaia orbits among CHIRON targets; four of those have no CHIRON orbits owing to the small number of observations, for five targets NSS contains only astrometric orbits with matching periods. Table 3 compares the CHIRON and NSS orbits for 31 common targets. The NSS solution codes are obvious (SB1 and SB2 for single- and double-lined spectroscopic orbits, ASB1 for spectro-astrometric orbits, and AORB for purely astrometric orbits). One NSS orbit (HIP 21079A) is false owing to the wrongly determined period of 2.71 days (the true period is 217 days).
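The coordinate matching described above can be reproduced with a standard nearest-neighbour sky crossmatch; a sketch using astropy is given below. The coordinate arrays and the 2 arcsec tolerance are illustrative placeholders, not the actual catalogs or matching radius used for Table 3.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# Placeholder coordinates (degrees); in practice these would be the Table 2
# positions and the Gaia NSS source positions.
chiron = SkyCoord(ra=[10.123, 250.456] * u.deg, dec=[-5.001, 30.789] * u.deg)
nss = SkyCoord(ra=[10.1232, 100.000] * u.deg, dec=[-5.0012, 20.000] * u.deg)

idx, sep2d, _ = chiron.match_to_catalog_sky(nss)   # nearest NSS source per target
matched = sep2d < 2.0 * u.arcsec                   # illustrative match radius
for i in range(len(chiron)):
    if matched[i]:
        print(f"CHIRON target {i} -> NSS source {idx[i]} ({sep2d[i].arcsec:.2f} arcsec)")
```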
Seven NSS orbits have reduced RV amplitudes, either because of blending with lines of the secondary or because of an inaccurate shape of the NSS RV curve. The remaining 14 SB orbits in common between CHIRON and NSS are in reasonably good mutual agreement. Note, however, that NSS missed some sub-systems, either inner (1.56 days in HIP 22531A) or outer with periods exceeding the GDR3 mission duration (in HIP 56282A, 64478A). For HIP 36165 with \(P=2300\) days, Gaia detected so far only the acceleration and the RV trend. Overall, only a third of the CHIRON sample has spectroscopic or astrometric orbits in the NSS. The CHIRON targets are bright (well above the Gaia RV threshold of 13 mag), and most orbital periods are under 1000 days. So, despite the large total number of orbits, the NSS orbit catalog is still very incomplete. Figure 2 compares CHIRON and Gaia orbits of six common stars where the periods match approximately. Four systems are double-lined in CHIRON, but single-lined in Gaia. When the secondary lines are much fainter than those of the primary (HIP 9148, 22531B), \begin{table} \begin{tabular}{c c c c c c} \hline \hline HIP & Comp. & SB1/2 & \(P_{\rm CHI}\) & NSSa & \(P_{\rm NSS}\) \\ & & & (days) & sol. & (days) \\ \hline 1103 & A & \(\cdots\) & \(\cdots\) & SB1 & 1343.0 \\ 6873 & B & SB2 & 115.6 & SB2 & 112.4 \\ 7852 & A & SB1 & 1177.5 & SB1 & 1184.0 \\ 9148 & A & SB2 & 1272.3 & ASB1 & 1283.5b \\ 16853 & A & \(\cdots\) & \(\cdots\) & \(\cdots\) & ASB1 & 204.1 \\ 21079 & A & SB1 & 217.0 & SB1 & 2.71c \\ 22531 & A & SB1 & 1003 & AORB & 1000.8 \\ 22534 & B & SB2 & 208.3 & ASB1 & 207.6b \\ 24320 & A & SB1 & 1430.3 & SB1 & 1042.9 \\ 24320 & B & SB1 & 7.943 & SB1 & 7.943 \\ 34212 & A & SB1 & 1246.7 & SB1 & 1185.6 \\ 35733 & A & SB2 & 4.63 & SB2 & 4.63 \\ 51578 & A & SB1 & 2.19 & SB1 & 2.19 \\ 56282 & A & SB1 & 121.1 & SB1 & 125.3 \\ 57572 & A & \(\cdots\) & \(\cdots\) & ASB1 & 169.6 \\ 59426 & A & SB1 & 211.6 & ASB1 & 212.3b \\ 64478 & A & SB2 & 4.23 & SB2 & 4.23 \\ 75663 & A & SB1 & 623.8 & ASB1 & 626.7b \\ 78163 & B & SB1 & 2082.5 & AORB & 1532.4 \\ 81395 & B & SB2 & 224.8 & ASB1 & 225.2b \\ 84789 & A & SB2 & 2.28 & SB2 & 2.28 \\ 88728 & A & SB1 & 1132.0 & SB1 & 1267.8b \\ 100420 & A & SB2 & 790.6 & AORB & 805.4 \\ 101472 & A & SB2 & 354.9 & AORB & 354.0 \\ 103814 & A & SB1 & 1089.0 & ASB1 & 1119.7 \\ 104833 & C & SB1 & 11.34 & SB1 & 11.34 \\ 105441 & A & SB1 & 4.62 & SB1 & 4.62 \\ 105441 & B & \(\cdots\) & \(\cdots\) & ASB1 & 549.5 \\ 107731 & A & SB2 & 469.9 & ASB1 & 469.3b \\ 109443 & A & SB1 & 970.9 & SB1 & 989.8 \\ 115552 & A & SB2 & 17.48 & SB2 & 17.48 \\ \hline \end{tabular} \end{table} Table 3: Comparison between CHIRON and Gaia NSSb Orbits Gaia gives a reasonable match for the primary star with a slightly reduced RV amplitude. For pairs with comparable-mass components (HIP 59426, 81394), the Gaia RV amplitudes are dramatically underestimated; moreover, the shape and phase of the Gaia RV curves are incorrect. For two single-lined binaries (HIP 24320B, 34212), the CHIRON and Gaia orbits are similar (small phase shifts are due to inaccurate Gaia periods). Interestingly, a comparison of the NSS with two large ground-based RV surveys indicated that only about a half of Gaia SB1 orbits could be validated (Bashi et al., 2022). The availability of Gaia orbits is most welcome. However, it does not make the CHIRON survey obsolete, quite to the contrary. 
Presently, Gaia provides orbits for only a small fraction of inner subsystems in nearby hierarchies, and some of those orbits are questionable even for the relatively simple binaries shown in Figure 2. More complex and more interesting systems (e.g. triple- and quadruple-lined) can be discovered and studied only by dedicated ground-based programs like this one. Systematic underestimation of RV amplitudes by Gaia due to line blending leads to the underestimated mass ratios, so the use of Gaia orbits for a statistical study of the mass ratio distribution is not recommended. ## 4 Radial velocities of wide pairs Wide (resolved) pairs were probed for the presence of subsystems by measuring RVs of each component and looking for substantial differences (Tokovinin, 2015). This work has been continued in the following years and its complementary results are reported here. Some wide pairs contain known visual subsystems with long periods, and their RVs change on a time scale of decades. \begin{table} \begin{tabular}{l l c c c c} \hline \hline \multicolumn{1}{c}{ HIP} & Comp. & JD & RV & \(a\) & \(\sigma\) \\ & & \(-2\,400\,000\) & km s\({}^{-1}\) & & km s\({}^{-1}\) \\ \hline 1103 & A & 57276.7918 & 2.785 & 0.056 & 20.455 \\ 1103 & A & 57299.7225 & 2.729 & 0.057 & 20.539 \\ 1103 & A & 57333.5715 & 2.814 & 0.057 & 20.569 \\ 1103 & A & 57983.7853 & 5.967 & 0.057 & 20.178 \\ 2713 & A & 57986.7734 & 8.987 & 0.353 & 4.345 \\ 2715 & C & 57986.7745 & 5.899 & 0.307 & 4.730 \\ 5896 & B & 57985.7965 & 8.228 & 0.457 & 4.465 \\ 5896 & A & 57985.7954 & 8.042 & 0.036 & 31.149 \\ 6712 & A & 57985.8439 & -23.365 & 0.471 & 3.844 \\ \hline \end{tabular} \end{table} Table 4: RVs of Stars in Wide Pairs (Fragment) Figure 2: Comparison between six CHIRON and Gaia spectroscopic orbits. All plots show RVs of the primary (solid green line) and squares) and secondary (dashed blue line and triangles) components measured by CHIRON vs. orbital phase. The Gaia orbits are depicted by dashed green lines. The CHIRON and Gaia periods are indicated under each plot. The RV measurements of wide pairs with CHIRON are published in Table 4, to be used in the future for determination of long-period orbits. This table also contains previously unpublished RVs of stars from the main program where insufficient number of observations does not allow for orbit calculation or the orbits are known from other sources. For example, the orbital period of HIP 1103A, 1343 days, is now determined by Gaia, and its continued monitoring with CHIRON makes little sense. Columns of the table identify stars by their HIP number and component letter. Then follow Julian date, RV, amplitude \(a\), and rms width \(\sigma\) of the cross-correlation dip. The dip parameters are helpful in evaluating the RV errors (wide and shallow dips give larger errors), while variable dips indicate blending of several components with variable RVs. Table 4 contains 162 RVs of 91 distinct targets. Five stars in Table 4 have multi-component dips and deserve individual comments. Triple lines in HIP 49442A were discovered here and indicate a spectroscopic subsystem in this 0\(\farcs\)18 visual pair; the RV of the B component, at 4\(\farcs\)4 from A, was also measured. The double dip of HIP 61465B signals a subsystem, in agreement with the astrometric signature of an unresolved binary in Gaia; its counterpart HIP 61466A at 27\(\farcs\)5 may also contain a subsystem. 
HIP 78662C (a \(V=8\) mag star at 11'' from the bright young visual binary HIP 78662AB) may have a double dip, but its large width and small amplitude render this discovery uncertain. The modest RV difference between the dip components in HIP 79588AB may be caused by motion in the 34 yr visual orbit with a large eccentricity of 0.8. HIP 111391AB is also a visual binary with a period of 198 yr and an eccentric orbit which likely causes the small and constant RV difference between the two dip components. ## 5 Statistics of Inner Subsystems ### Period-Eccentricity Diagram Patient accumulation of data on nearby solar-type hierarchies improves completeness of their sample. The multi-year CHIRON survey and the Gaia DR3 greatly reduce the historic bias in favor of short periods. Although not all inner orbits with \(P<3000\) days are known, the observational "window" is more uniform, and the number of known orbits is larger than a decade ago. So, a fresh look at the statistics in the short-period regime is warranted using the up-to-date MSC. I selected from the MSC inner subsystems with primary masses from 0.7 to 1.5 \(M_{\odot}\) (solar-type), known periods under 3000 days, and distances within 100 pc -- a total of 743 cases with spectroscopic, astrometric, or visual inner orbits (455 ground-based, 288 from Gaia). References to the orbits can be found in the MSC and in SB9 (Pourbaix et al., 2004). The median primary mass is 0.99 \(M_{\odot}\). If the distance limit is reduced to 67 pc, the balance between ground-based (325) and Gaia (53) orbits shifts further, showing improved completeness of the ground-based data for nearby stars. Figure 3 plots the period-eccentricity relation for inner subsystems within 100 pc. The upper envelope of the points outlines the tidal circularization: most orbits with \(P<10\) days are circular (Meibom & Mathieu, 2005). Several crosses at \(P<10\) days, outside the envelope, are Gaia orbits with spurious periods that should be ignored. The large number of orbits reveals an interesting structure in this diagram, such as two concentrations of eccentric orbits with periods of 10-30 days and with \(P>100\) days, while the number of orbits in the intermediate 30-100 days interval appears smaller and these orbits seem to have smaller eccentricities. Circular orbits reappear again at \(P>1000\) days. Period-eccentricity diagrams for spectroscopic binaries can be found in many papers (e.g. Raghavan et al., 2010; Triaud et al., 2017; Price-Whelan et al., 2020; Torres et al., 2021); a circularization period \(P_{\rm circ}\) between 7 and 10 days is inferred from these plots. A few eccentric orbits with periods shorter than \(P_{\rm circ}\) seen in such diagrams are explained by inefficient tides in stars with primary masses above 1.3 \(M_{\odot}\) and small secondaries (Triaud et al., 2017) or by the influence of tertiary companions (Mazeh, 1990); all close binaries in Figure 3 are inner subsystems within multiple stars. Most short-period eccentric orbits derived by Price-Whelan et al. (2020) "in the regime of sparse, noisy, and poorly sampled multi-epoch data" likely are spurious. Figure 3: Periods and eccentricities of 743 inner subsystems with solar-type components found in the MSC. Squares correspond to the ground-based orbits, blue crosses are orbits from Gaia NSS. All binaries with periods less than \(\sim\)10 yr were formed by some kind of migration, although the migration mechanisms are still under debate (Moe & Kratter, 2018). 
The \(P-e\) diagram may be instructive from this perspective. It shows that some inner pairs with \(P>1000\) days could be formed with quasi-circular orbits (many visual binaries with nearly circular orbits and periods on the order of a decade are known as well). However, further shortening of the period seems to be associated with an eccentricity growth, given that small eccentricites are rare at periods of \(\sim 10^{2}\) days. The mechanism responsible for migration in this regime should be associated with a loss of angular momentum, e.g. via interaction with a circumbinary disk or with outer companions. The growing eccentricity and shortening period eventually bring the pair into a regime of tidal circularization, where separation at periastron is a few stellar radii. Concentration of points at \(P\sim 20\) days and \(e=0.4\ldots 0.6\) in Figure 3 corresponds to inner pairs that have reached the tidal regime and apparently start slow evolution toward shorter periods and circular orbits. Interestingly, D'Orazio & Duffell (2021) found by hydrodynamical simulation that an eccentric binary in a coplanar prograde disc evolves to shorter periods, while eccentricity fluctuates around the \(e=0.4\) attractor; in contrast, a binary with \(e<0.1\) does not migrate and its orbit remains circular. ### Period Distribution of Inner Subsystems The histogram of inner periods in solar-type hierarchies is plotted in Figure 1. It shows a marked difference with the period distribution of all field binaries, namely the excess of periods shorter than 30 days in inner subsystems. Preference of close binaries to be members of hierarchical systems is firmly established by prior work. For example, Tokovinin et al. (2006) determined that \(\sim\)80% of spectroscopic binaries with \(P<7\) days have tertiary companions. The statistical model of hierarchical multiplicity developed in (Tokovinin, 2014) matches the data quite well (after accounting for the selection), but underpredicts the fraction of inner subsystems with \(P<10\) days by a factor of two, because it does not account for correlation between close binaries and higher-order multiplicity. Recently Hwang et al. (2020) found that the occurrence rate of wide physical companions to eclipsing binaries is \(\sim\)3 times higher than for typical field stars (14.1% vs. 4.5%). The origin of the observed relation between close binaries and hierarchies is still under debate. The reader is referred to Moe & Kratter (2018) for a detailed analysis and an up-to-date population synthesis. The originally proposed mechanism of Lidov-Kozai cycles in misaligned triples acting in combination with tidal friction can account only for a minor fraction of the close subsystems, and its predictions disagree with reality. Specifically, the predicted excess of inner periods just below the tidal cutoff at \(P<10\) days and the concentration of mutual inclinations near \(40^{\circ}\) are not seen. Instead, there is no discontinuity in the distribution of inner periods at \(P\sim 10\) days, but their numbers drop at \(P>30\) days (Figure 1). Moe & Kratter (2018) argue that the main agent that shrinks inner periods should be associated with the accretion of gas during mass assembly. The observed frequency of inner twins with mass ratio \(q>0.95\) formed by accretion also drops sharply at \(P>30\) days (see Figure 4 in Tokovinin, 2021). However, the key issue of relating accretion to the presence of tertiary companions is still unsettled. 
Further discussion of this topic is beyond the scope of this paper. The point here is to illustrate how new homogeneous data on hierarchies contribute to the study of their formation mechanisms. ## 6 Summary Multi-year spectroscopic monitoring of inner subsystems in solar-type hierarchies has been undertaken to elucidate distributions of periods, eccentricities, and mass ratios. The targets were derived from the sample of solar-type stars within 67 pc with unknown inner periods and enlarged by additional, more distant hierarchies. Spectroscopic (and visual) orbits based on these data are reported in 10 papers. The main results are as follows. * A total of 102 spectroscopic orbits with periods ranging from fraction of a day to several years were determined. The coverage is reasonably complete up to \(P\sim 1000\) days. * Monitoring with CHIRON revealed new, additional subsystems: several presumed triples in fact are quadruples of 2+2 or 3+1 hierarchy. * The Gaia NSS provides orbits for only a third of the CHIRON sample, and a substantial fraction of Gaia orbits in common with CHIRON have reduced RV amplitudes or other problems. * The distribution of inner periods based on this survey, literature, and Gaia (Figure 1), differs from the canonical log-normal period distribution in the field. Inner subsystems have a strong excess of periods shorter than 30 days. The logarithmic inner period distribution has a local minimum in the 30-100 days bin. * The period-eccentricity diagram of inner solar-type subsystems (Figure 3) shows an interesting structure. Statistical data on periods, eccentricities, and mass ratios in these hierarchies will help in the development and verification of their formation models. In a broader context, this study fits into the vast landscape of observational characterization of stellar hierarchies. Large photometric surveys designed for the study of transiting exoplanets have opened a new window on unusual and rare compact hierarchies like triply eclipsing planar worlds (e.g. Rappaport et al., 2022) that challenge current formation theories. Gaia has revolutionized the census of solar neighborhood by revealing wide pairs of stars with an unprecedented completeness and their connection to close binaries (e.g. Hwang et al., 2020; Hwang, 2022). New data highlight the diversity of stellar hierarchies (Tokovinin, 2021) and provide insights on their formation (Offner et al., 2022). The research was funded by the NSF's NOIRLab. This work used the SIMBAD service operated by Centre des Donnees Stellaires (Strasbourg, France), bibliographic references from the Astrophysics Data System maintained by SAO/NASA, and the Washington Double Star Catalog maintained at USNO. This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/g](https://www.cosmos.esa.int/g) Gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. CTIO:1.5m, Gaia
2310.18819
On the mapping class groups of simply-connected smooth 4-manifolds
The mapping class group $M(X)$ of a smooth manifold $X$ is the group of smooth isotopy classes of orientation preserving diffeomorphisms of $X$. We prove a number of results about the mapping class groups of compact, simply-connected, smooth $4$-manifolds. We prove that $M(X)$ is non-finitely generated for $X = 2n \mathbb{CP}^2 # 10n \overline{\mathbb{CP}^2}$, where $n \ge 3$ is odd. Let $\Gamma(X)$ denote the group of automorphisms of the intersection lattice of $X$ that can be realised by diffeomorphisms. Then $M(X)$ is an extension of $\Gamma(X)$ by $T(X)$, the Torelli group of isotopy classes of diffeomorphisms that act trivially in cohomology. We prove that this extension is split for connected sums of $\mathbb{CP}^2$, but is not split for $2\mathbb{CP}^2 # n \overline{\mathbb{CP}^2}$, where $n \ge 11$. We prove that the Nielsen realisation problem fails for certain finite subgroups of $M( p \mathbb{CP}^2 # q \overline{\mathbb{CP}^2} )$ whenever $p+q \ge 4$. Lastly we study the extension $M_1(X) \to M(X)$, where $M_1(X)$ is the group of isotopy classes of diffeomorphisms of $X$ which fix a neighbourhood of a point. When $X = K3$ or $K3 # (S^2 \times S^2)$ we prove that $M_1(X) \to M(X)$ is a non-trivial extension of $M(X)$ by $\mathbb{Z}_2$. Moreover, we completely determine the extension class of $M_1(K3) \to M(K3)$.
David Baraglia
2023-10-28T20:50:31Z
http://arxiv.org/abs/2310.18819v1
# On the mapping class groups of simply-connected smooth \(4\)-manifolds ###### Abstract. The mapping class group \(M(X)\) of a smooth manifold \(X\) is the group of smooth isotopy classes of orientation preserving diffeomorphisms of \(X\). We prove a number of results about the mapping class groups of compact, simply-connected, smooth \(4\)-manifolds. We prove that \(M(X)\) is non-finitely generated for \(X=2n\mathbb{CP}^{2}\#10n\overline{\mathbb{CP}^{2}}\), where \(n\geq 3\) is odd. Let \(\Gamma(X)\) denote the group of automorphisms of the intersection lattice of \(X\) that can be realised by diffeomorphisms. Then \(M(X)\) is an extension of \(\Gamma(X)\) by \(T(X)\), the Torelli group of isotopy classes of diffeomorphisms that act trivially in cohomology. We prove that this extension is split for connected sums of \(\mathbb{CP}^{2}\), but is not split for \(2\mathbb{CP}^{2}\#n\overline{\mathbb{CP}^{2}}\), where \(n\geq 11\). We prove that the Nielsen realisation problem fails for certain finite subgroups of \(M(p\mathbb{CP}^{2}\#q\overline{\mathbb{CP}^{2}})\) whenever \(p+q\geq 4\). Lastly we study the extension \(M_{1}(X)\to M(X)\), where \(M_{1}(X)\) is the group of isotopy classes of diffeomorphisms of \(X\) which fix a neighbourhood of a point. When \(X=K3\) or \(K3\#(S^{2}\times S^{2})\) we prove that \(M_{1}(X)\to M(X)\) is a non-trivial extension of \(M(X)\) by \(\mathbb{Z}_{2}\). Moreover, we completely determine the extension class of \(M_{1}(K3)\to M(K3)\). ## 1. Introduction Let \(X\) be a compact, oriented, smooth, simply-connected \(4\)-manifold. Define the mapping class group \(M(X)\) to be the group of smooth isotopy classes of orientation preserving diffeomorphisms of \(X\). There is considerable interest in the groups \(M(X)\), although little is known about their structure. In this paper we will prove a number of new results concerning the structure of mapping class groups of smooth \(4\)-manifolds. Recall that the second cohomology group \(L_{X}=H^{2}(X;\mathbb{Z})\) of \(X\) equipped with its intersection form is a unimodular lattice. We let \(Aut(L_{X})\) denote the automorphism group of the lattice \(L_{X}\). The group of orientation preserving diffeomorphisms of \(X\) acts on \(L_{X}\) via \(f\mapsto(f^{-1})^{*}\). This action depends only on this isotopy class and so defines a homomorphism \(M(X)\to Aut(L_{X})\). Denoting the image of this map by \(\Gamma(X)\) and the kernel by \(T(X)\), we obtain a short exact sequence \[1\to T(X)\to M(X)\to\Gamma(X)\to 1. \tag{1.1}\] We call \(T(X)\) the _Torelli group_ of \(X\). It is the group of isotopy classes of diffeomorphisms of \(X\) that act trivially in cohomology. By a result of Quinn, \(T(X)\) can also be defined as the group of isotopy classes of diffeomorphisms which are continuously isotopic to the identity [23]. The group \(\Gamma(X)\) is the group of automorphisms of \(L_{X}\) that can be realised by diffeomorphisms of \(X\). Understanding the group \(M(X)\) necessitates an understanding of the groups \(T(X)\), \(\Gamma(X)\) and the extension (1.1). The group \(\Gamma(X)\) is known for some classes of \(4\)-manifolds. In particular, a theorem of Wall implies that \(\Gamma(X)=Aut(L_{X})\) for a large class of \(4\)-manifolds [31]. In contrast, the Torelli group \(T(X)\) is poorly understood. Ruberman showed that \(T(X)\) is not finitely generated for certain \(X\)[25]. 
However this does not imply that \(M(X)\) is not finitely generated, since a finitely generated group can have subgroups which are not finitely generated. Our first main result confirms that \(M(X)\) is not finitely generated for certain simply-connected \(4\)-manifolds. **Theorem 1.1**.: _Let \(X=2n\mathbb{CP}^{2}\#10n\overline{\mathbb{CP}^{2}}\), where \(n\geq 3\) is odd. Then \(M(X)\) is not finitely generated. More precisely, the following holds:_ 1. _There is an index_ \(2\) _subgroup_ \(M_{+}(X)\) _of_ \(M(X)\) _and a surjective homomorphism_ \(\Phi:M_{+}(X)\to\mathbb{Z}^{\infty}\) _from_ \(M_{+}(X)\) _to_ \(\mathbb{Z}^{\infty}\)_, where_ \(\mathbb{Z}^{\infty}\) _denotes a free abelian group of countably infinite rank._ 2. _The mod_ \(2\) _reduction of_ \(\Phi\) _extends to a surjective homomorphism_ \(\Phi:M(X)\to\mathbb{Z}_{2}^{\infty}\)_._ As this paper was nearing completion we received a preprint by Hokuto Konno [15] which also proves that the mapping class groups of simply-connected \(4\)-manifolds can be non-finitely generated. Konno's proof uses essentially the same method as ours, however we obtained our proofs completely independently. _Remark 1.2_.: It is interesting to contrast Theorem 1.1 with finiteness results for mapping class groups in other dimensions. Let \(X\) be a compact, simply-connected smooth manifold of dimension \(d\) and \(M(X)=\pi_{0}(Diff(X))\) the mapping class group. If \(d\neq 4\), then \(M(X)\) is finitely generated. For \(d\leq 3\), finite generation holds for any compact oriented manifold (see [7] for \(d=2\) and [13] for \(d=3\)). If \(d\geq 5\) then \(M(X)\) is finitely generated [6, Theorem 2.6]. Note that Theorem 2.6 of [6] is only stated for \(d\geq 6\), but when \(X\) is simply-connected, the proof carries over to \(d=5\). In the proof of [6, Theorem 2.6], dimension \(6\) only enters in the point (i) in the proof, but Cerf's theorem says that in the simply-connected case \(\pi_{0}(C^{Diff}(X))=0\), and in [6, Proposition 2.7] where it is not necessary. This follows from specialising Triantafillou [30] to simply-connected manifolds, where none of the oversights mentioned in [6, SS2.2] cause a problem1. Footnote 1: I thank Alexander Kupers for explaining why [6, Theorem 2.6] works for simply-connected \(5\)-manifolds. One may also consider the larger group \(M^{\prime}(X)\) consisting is isotopy classes of diffeomorphisms which are not necessarily orientation preserving. Since \(M(X)\) has finite index in \(M^{\prime}(X)\), it follows from Schreier's lemma [28] that if \(M(X)\) is not finitely generated then neither is \(M^{\prime}(X)\). _Remark 1.3_.: Let \(X\) be a compact, simply-connected smooth \(4\)-manifold and let \(M^{top}(X)=\pi_{0}(Homeo(X))\) be the topological mapping class group. By work of Freedman [10] and Quinn [23], the natural map \(M^{top}(X)\to Aut(H^{2}(X;\mathbb{Z}))\) to the group of automorphisms of the intersection lattice \(H^{2}(X;\mathbb{Z})\) is an isomorphism. By a result of Siegel [29], the automorphism group of any lattice is finitely generated. Hence \(M^{top}(X)\) is finitely generated. In contrast, we do not know whether the group \(\Gamma(X)\subseteq Aut(H^{2}(X;\mathbb{Z}))\) is always finitely generated, although we conjecture that it is. Our next result concerns the question of whether or not the sequence (1.1) admits a splitting. **Theorem 1.4**.: _The following holds:_ 1. _Let_ \(X=n\mathbb{CP}^{2}\)_, where_ \(n\geq 1\)_. Then there exists a splitting_ \(\Gamma(X)\to M(X)\)_._ 2. 
_Let_ \(X=(S^{2}\times S^{2})\#X^{\prime}\)_, where_ \(b_{+}(X^{\prime})=1\)_,_ \(b_{-}(X^{\prime})\geq 10\)_. Then there does not exist a splitting_ \(\Gamma(X)\to M(X)\)_._ More precise information about the failure of a splitting in case (2) is provided by Theorem 5.1. _Remark 1.5_.: In [5] it is shown when \(X\) is a \(K3\) surface, there is a splitting \(\Gamma(X)\to M(X)\). It is also easy to see that splittings exist for \(S^{2}\times S^{2}\) and \(\mathbb{CP}^{2}\#\overline{\mathbb{CP}^{2}}\). Our next result concerns the Nielsen realisation problem. Recall that the Nielsen realisation problem for a smooth manifold \(X\) asks whether a subgroup \(G\) of the mapping class group of \(X\) can be lifted to a subgroup of \(Diff(X)\). Recent results of Baraglia-Konno [5], Farb-Looijenga [9], Konno [14] show that Nielsen realisation fails for many simply-connected spin \(4\)-manifolds. Arabadji-Baykur showed that there are many non-spin \(4\)-manifolds with finite non-trivial fundamental group for which Nielsen realisation fails [1] and Konno-Miyazawa-Taniguchi gave examples with simply-connected indefinite non-spin \(4\)-manifolds [16]. **Theorem 1.6**.: _Let \(X=X^{\prime}\#p\mathbb{CP}^{2}\#q\overline{\mathbb{CP}^{2}}\) where \(X^{\prime}\) is a compact, smooth, simply-connected \(4\)-manifold and \(p+q\geq 4\). Then \(M(X)\) contains a subgroup isomorphic to \(\mathbb{Z}_{2}^{4}\) which can not be lifted to \(Diff(X)\)._ In particular, Nielsen realisation fails for \(n\mathbb{CP}^{2}\) for \(n\geq 4\). As far as we are aware, these are the first examples of definite, simply-connected \(4\)-manifolds where Nielsen realisation fails. Our last main result concerns a certain extension of \(M(X)\). Let \(X^{(1)}\) be obtained from \(X\) by removing an open ball and let \(Diff(X^{(1)},\partial X^{(1)})\) denote the group of diffeomorphisms of \(X^{(1)}\) which are the identity in a neighbourhood of the boundary. Let \(M_{1}(X)=\pi_{0}(Diff(X^{(1)},\partial X^{(1)}))\) denote the group of components of \(Diff(X^{(1)},\partial X^{(1)})\). It is known that the map \(M_{1}(X)\to M(X)\) is surjective and that the kernel (which is either trivial or has order \(2\)) is generated by a Dehn twist on the boundary (see Section 7 for more details). In general it is difficult to determine whether the kernel of \(M_{1}(X)\to M(X)\) is trivial or non-trivial, or equivalently, whether the boundary Dehn twist is trivial or non-trivial. The extension is known to be trivial when \(X\) is a connected sum of copies of \(S^{2}\times S^{2}\). In contrast we have: **Theorem 1.7**.: _Let \(X^{\prime}\) be a compact, smooth, simply-connected \(4\)-manifold which is homeomorphic to \(K3\). Let \(X\) be \(X^{\prime}\) or \(X^{\prime}\#(S^{2}\times S^{2})\). Then the boundary Dehn twist is non-trivial. Moreover, the extension \(1\to\mathbb{Z}_{2}\to M_{1}(X)\to M(X)\to 1\) does not split._ If \(M_{1}(X)\to M(X)\) is a non-trivial extension, then it is given by an extension class \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\) and the above theorem says that \(\xi_{X}\neq 0\) when \(X\) is of the stated form. Our final result completely determines \(\xi_{X}\) in the case that \(X\) is homeomorphic to \(K3\). Let \(L_{X}\) be the intersection lattice of \(X\) and \(Aut(L_{X})\) the group of automorphisms. Over the classifying space \(BAut(L_{X})\) we have the tautological flat bundle \(H=EAut(L_{X})\times_{Aut(L_{X})}L_{X}\). Let \(H^{+}\to BAut(L_{X})\) be a maximal positive subbundle. 
This defines a characteristic class \(w_{2}(H^{+})\in H^{2}(Aut(L_{X});\mathbb{Z}_{2})\). **Theorem 1.8**.: _Let \(X\) be a smooth \(4\)-manifold which is homeomorphic to \(K3\). Then the extension class \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\) is the pullback of \(w_{2}(H^{+})\in H^{2}(Aut(L_{X});\mathbb{Z}_{2})\) under the map \(M(X)\to Aut(L_{X})\)._ ### Structure of the paper The structure of the paper is as follows. In Section 2 we review the Seiberg-Witten invariants for the Torelli group (as in [24, 26, 3]) and show how these invariants can be assembled into cohomology classes on the mapping class group. In Section 3 we use these cohomology classes to show that \(M(X)\) is not finitely generated for certain \(X\). In Section 4 we construct a splitting \(\Gamma(X)\to M(X)\) when \(X\) is a connected sum of copies of \(\mathbb{CP}^{2}\). In Section 5 we prove the non-existence of splittings \(\Gamma(X)\to M(X)\) for certain \(4\)-manifolds. The proof uses families Seiberg-Witten theory and more specifically the main result of [2]. In Section 6 we prove Theorem 1.6. Finally, in Section 7 we study boundary Dehn twists and the extension \(M_{1}(X)\to M(X)\) and we prove Theorems 1.7 and 1.8. ### Acknowledgements I thank Hokuto Konno for sending me his paper [15] and also for comments on a draft of this paper. I also thank Alexander Kupers for clarifying the finite generation of mapping class groups for simply-connected \(5\)-manifolds. ## 2. Seiberg-Witten invariants for the mapping class group In this section we define Seiberg-Witten invariants for the mapping class group, extending the Seiberg-Witten invariants on the Torelli group which have previously been considered in [24, 26, 3]. These invariants will be used to show that certain simply-connected \(4\)-manifolds have non-finitely generated mapping class group. Let \(X\) be a compact, smooth, simply connected \(4\)-manifold and let \(\mathfrak{s}\) be a spin\({}^{c}\)-structure with \(d(\mathfrak{s})=-1\), where \[d(\mathfrak{s})=\frac{c(\mathfrak{s})^{2}-\sigma(X)}{4}-b_{+}(X)-1\] is the expected dimension of the Seiberg-Witten moduli space for \(\mathfrak{s}\). Let \(\mathcal{S}(X)\) denote the set of all isomorphism classes of spin\({}^{c}\)-structures on \(X\) for which \(d(\mathfrak{s})=-1\). Since \(X\) is assumed to be simply connected, \(\mathcal{S}(X)\) can be identified with the set of characteristic elements \(c\in L=H^{2}(X;\mathbb{Z})\) for which \((c^{2}-\sigma(X))/4-b_{+}(X)=0\). Let \(\Pi\) denote the space of pairs \((g,\eta)\) where \(g\) is a Riemannian metric on \(X\) and \(\eta\) is a \(2\)-form which is self-dual with respect to \(g\). For any \(h\in\Pi\) and any \(\mathfrak{s}\in\mathcal{S}(X)\) we may consider the Seiberg-Witten equations on \(X\) with respect to the metric \(g\), spin\({}^{c}\)-structure \(\mathfrak{s}\) and \(2\)-form perturbation \(\eta\). Let \(\mathcal{M}(X,\mathfrak{s},h)\) denote the moduli space of gauge equivalence classes of solutions to the Seiberg-Witten equations for \((X,\mathfrak{s},h)\). Assume \(b_{+}(X)>2\). We will say that \(h\in\Pi\) is regular if \(\mathcal{M}(X,\mathfrak{s},h)\) is empty for all \(\mathfrak{s}\in S(X)\). Since \(b_{+}(X)>0\) and the expected dimension of \(\mathcal{M}(X,\mathfrak{s},h)\) is negative, the regular elements form a subset of \(\Pi\) of Baire second category with respect to the \(\mathcal{C}^{\infty}\) topology. Let \(\Pi^{reg}\subseteq\Pi\) denote the set of regular elements. Suppose that \(h_{0},h_{1}\in\Pi^{reg}\). 
If \(h:[0,1]\to\Pi\) is a path in \(\Pi\) from \(h_{0}\) to \(h_{1}\), we can consider the families moduli space, which is the union over \(t\in[0,1]\) of the Seiberg-Witten moduli spaces for each \(h_{t}\in\Pi\). For a sufficiently generic path \(h_{t}\), the moduli space is a compact, smooth, \(0\)-dimensional manifold. A choice of orientation on a maximal positive definite subspace of \(H^{2}(X;\mathbb{R})\) determines an orientation on the moduli space and hence we can count with sign the number of points in the moduli space. Fix a choice of such an orientation. It can be shown [24] that the number of solutions depends on the endpoints \(h_{0},h_{1}\), but not on the choice of generic path \(h_{t}\). Hence we may denote by \(SW_{\mathfrak{s}}(h_{0},h_{1})\in\mathbb{Z}\) the signed count of points in the moduli space. From the definition it is clear that this count of points satisfies the following properties: 1. \(SW_{\mathfrak{s}}(h_{0},h_{1})+SW_{\mathfrak{s}}(h_{1},h_{2})=SW_{\mathfrak{s}}(h_{0},h_{2})\), 2. \(SW_{\mathfrak{s}}(h_{0},h_{1})=sgn_{+}(f)SW_{f(\mathfrak{s})}(f(h_{0}),f(h_{1}))\) for any orientation preserving diffeomorphism \(f\). In (2), \(sgn_{+}(f)\) is defined as follows. The space of oriented, maximal positive definite subspaces of \(H^{2}(X;\mathbb{R})\) has two connected components. For an isometry \(\varphi\) of \(H^{2}(X;\mathbb{R})\) we let \(sgn_{+}(\varphi)\) equal \(1\) or \(-1\) according to whether \(\varphi\) preserves or exchanges the two components. If \(f\) is an orientation preserving diffeomorphism of \(X\), then \(sgn_{+}(f)\) denotes \(sgn_{+}(f_{*})\), where \(f_{*}=(f^{-1})^{*}\) is the isometry of \(H^{2}(X;\mathbb{R})\) induced by \(f\). Property (1) follows by concatenating a path from \(h_{0}\) to \(h_{1}\) with a path from \(h_{1}\) to \(h_{2}\). Property (2) follows from diffeomorphism invariance of the Seiberg-Witten equations. In addition, \(SW_{\mathfrak{s}}(h_{0},h_{1})\) obeys a symmetry with respect to charge conjugation: 3. \(SW_{\mathfrak{s}}(h_{0},h_{1})=(-1)^{b_{+}(X)/2+1}SW_{\bar{\mathfrak{s}}}(\bar{h}_{0},\bar{h}_{1})\), where \(\bar{\mathfrak{s}}\) denotes the charge conjugate of \(\mathfrak{s}\) and for \(h=(g,\eta)\in\Pi\), we set \(\bar{h}=(g,-\eta)\). Property (3) is an immediate consequence of the charge conjugation symmetry of the Seiberg-Witten equations. Let \(f\in T(X)\) be an element of the Torelli group. Fix a spin\({}^{c}\)-structure \(\mathfrak{s}\) with \(d(\mathfrak{s})=-1\). The mapping cylinder of \(f\) defines a smooth family \(E\to S^{1}\) over \(S^{1}\) with fibres diffeomorphic to \(X\). Since \(f\) acts trivially on cohomology, it preserves the isomorphism class of \(\mathfrak{s}\). It follows easily that there is a unique spin\({}^{c}\)-structure on the vertical tangent bundle of \(E\) which restricts to \(\mathfrak{s}\) on each fibre. Since \(b_{+}(X)>2\), there is a single chamber for the families Seiberg-Witten equations for \(E\). Furthermore, the families moduli space is oriented and so we obtain an integer-valued invariant \(SW_{\mathfrak{s}}(f)\in\mathbb{Z}\) which depends only on \((X,\mathfrak{s})\) and the isotopy class of \(f\) (see [3] for more details). From the definition of \(SW_{\mathfrak{s}}(f)\), it is easy to see that \[SW_{\mathfrak{s}}(f)=SW_{\mathfrak{s}}(h,f(h))\] for any \(h\in\Pi^{reg}\). It is instructive to see why \(SW_{\mathfrak{s}}(f)\) is independent of the choice of \(h\in\Pi^{reg}\). Let \(h^{\prime}\in\Pi^{reg}\). 
Then \[SW_{\mathfrak{s}}(h^{\prime},f(h^{\prime})) =-SW_{\mathfrak{s}}(h,h^{\prime})+SW_{\mathfrak{s}}(h,f(h))+SW_{ \mathfrak{s}}(f(h),f(h^{\prime}))\] \[=-SW_{\mathfrak{s}}(h,h^{\prime})+SW_{\mathfrak{s}}(h,f(h))+SW_{f ^{-1}(\mathfrak{s})}(h,h^{\prime})\] \[=SW_{\mathfrak{s}}(h,f(h)),\] where the last line follows from \(f^{-1}(\mathfrak{s})=\mathfrak{s}\), which holds since \(f\in T(X)\). For \(f,g\in T(X)\), we have that \(SW_{\mathfrak{s}}(f\circ g)=SW_{\mathfrak{s}}(f)+SW_{\mathfrak{s}}(g)\) (this is a special case of Proposition 2.1, proven below). Therefore \(SW_{\mathfrak{s}}\) defines a homomorphism \[SW_{\mathfrak{s}}:T(X)\to\mathbb{Z}\] or equivalently, a cohomology class \(SW_{\mathfrak{s}}\in H^{1}(T(X);\mathbb{Z})\). These cohomology classes generally do not extend to the full mapping class group \(M(X)\), because \(\Gamma(X)\) acts non-trivially on the set of spin\({}^{c}\)-structures. Recall that the compactness of the Seiberg-Witten moduli space follows from a priori bounds. These bounds depend on the pair \(h\in\Pi\), but not on the spin\({}^{c}\)-structure. This argument also works for families over a compact base space, hence for fixed \(f\in T(X)\), \(SW_{\mathfrak{s}}(f)\) is non-zero for only finitely many \(\mathfrak{s}\in S(X)\). Therefore, we can collect the homomorphisms \(SW_{\mathfrak{s}}\) into a single invariant \[SW:T(X) \to\bigoplus_{\mathfrak{s}\in S(X)}\mathbb{Z}\] \[f \mapsto\bigoplus_{\mathfrak{s}}SW_{\mathfrak{s}}(f).\] In what follows, we will see that \(SW\) can be extended from \(T(X)\) to the full mapping class group \(M(X)\) as a cohomology class valued in a certain \(\Gamma(X)\)-module. Recall that each \(\mathfrak{s}\in S(X)\) is determined by the corresponding characteristic element \(c(\mathfrak{s})\in L\). Therefore the group \(\Gamma(X)\) acts on \(\mathcal{S}(X)\) and hence on \(\mathbb{Z}[\mathcal{S}(X)]\), the free abelian group with basis \(\mathcal{S}(X)\). Let \(\widehat{\mathbb{Z}}\) denote \(\mathbb{Z}\) equipped with the action of \(\Gamma(X)\) such that \(f\in\Gamma(X)\) acts as multiplication by \(sgn_{+}(f)\). Let \(\widehat{\mathbb{Z}}[\mathcal{S}(X)]=\widehat{\mathbb{Z}}\otimes_{\mathbb{Z}} \mathbb{Z}[\mathcal{S}(X)]\). It will be convenient to regard \(\widehat{\mathbb{Z}}[\mathcal{S}(X)]\) as the group of functions \(\phi:\mathcal{S}(X)\to\mathbb{Z}\) with finite support. Then the action of \(f\in\Gamma(X)\) is given by \((f\phi)(\mathfrak{s})=sgn_{+}(f)\phi(f^{-1}(\mathfrak{s}))\). We will show that the families Seiberg-Witten invariant for \(1\)-dimensional families (where \(b_{+}(X)>2\)) can be viewed as an element of \(H^{1}(M(X);\widehat{\mathbb{Z}}[\mathcal{S}(X)])\). Recall that for a group \(G\) and a \(G\)-module \(M\), the group \(H^{1}(G;M)\) can be viewed as the set of equivalence classes of twisted homomomorphisms \(G\to M\). A twisted homomorphism is a map \(\phi:G\to M\) such that \(\phi(gh)=\phi(g)+g\phi(h)\). A trivial twisted homomorphism is a twisted homomorphism of the form \(\phi(g)=gm-m\) for some \(m\in M\). Two twisted homomorphisms are considered equivalent if they differ by a trivial twisted homomorphism. Let \(h\in\Pi^{reg}\). Define a map \(\phi_{h}:Diff_{+}(X)\to\widehat{\mathbb{Z}}[\mathcal{S}(X)]\) from the group of orientation preserving diffeomorphisms to \(\widehat{\mathbb{Z}}[\mathcal{S}(X)]\) by setting \[(\phi_{h}(f))(\mathfrak{s})=SW_{\mathfrak{s}}(h,f(h)).\] Suppose that \(f_{0},f_{1}\in Diff_{+}(X)\) are isotopic. Choose an isotopy \(f_{t}\). 
Then \[\phi_{h}(f_{1}) =SW_{\mathfrak{s}}(h,f_{1}(h))\] \[=SW_{\mathfrak{s}}(h,f_{0}(h))+SW_{\mathfrak{s}}(f_{0}(h),f_{1}(h))\] \[=\phi_{h}(f_{0})+SW_{\mathfrak{s}}(f_{0}(h),f_{1}(h)).\] Consider the path \(h_{t}=f_{t}(h)\) from \(f_{0}(h)\) to \(f_{1}(h)\). By diffeomorphism invariance of the Seiberg-Witten equations, the Seiberg-Witten moduli space for \(h_{t}\) is empty for each \(t\in[0,1]\), hence \(SW_{\mathfrak{s}}(f_{0}(h),f_{1}(h))=0\) and \(\phi_{h}(f_{1})=\phi_{h}(f_{0})\). This shows that \(\phi\) only depends on the underlying isotopy class and so we may view it as a map \(\phi_{h}:M(X)\to\widehat{\mathbb{Z}}[\mathcal{S}(X)]\). **Proposition 2.1**.: _The map \(\phi_{h}:M(X)\to\widehat{\mathbb{Z}}[\mathcal{S}(X)]\) is a twisted homomorphism. Furthermore, the underlying cohomology class \([\phi_{h}]\in H^{1}(M(X);\widehat{\mathbb{Z}}[\mathcal{S}(X)])\) does not depend on the choice of \(h\in\Pi^{reg}\)._ Proof.: Let \(f,g\in M(X)\). Then \[\phi_{h}(gf)(\mathfrak{s}) =SW_{\mathfrak{s}}(h,gf(h))\] \[=SW_{\mathfrak{s}}(h,g(h))+SW_{\mathfrak{s}}(g(h),g(f(h)))\] \[=SW_{\mathfrak{s}}(h,g(h))+sgn_{+}(g)SW_{g^{-1}\mathfrak{s}}(h,f(h))\] \[=\phi_{h}(g)(\mathfrak{s})+sgn_{+}(g)\phi_{h}(f)(g^{-1}\mathfrak{s})\] \[=(\phi_{h}(g)+(g\phi_{h})(f))(\mathfrak{s}).\] Hence \(\phi_{h}\) is a twisted homomorphism. Next we show that the underlying cohomology class of \(\phi_{h}\) does not depend on the choice of \(h\). Let \(h^{\prime}\in\Pi^{reg}\) be another generic pair. Choose a path \(h_{t}\) from \(h=h_{0}\) to \(h^{\prime}=h_{1}\). Then \[\phi_{h^{\prime}}(f)(\mathfrak{s}) =SW_{\mathfrak{s}}(h_{1},f(h_{1}))\] \[=SW_{\mathfrak{s}}(h_{0},f(h_{0}))-SW_{\mathfrak{s}}(h_{0},h_{1})+SW_{\mathfrak{s}}(f(h_{0}),f(h_{1}))\] \[=\phi_{h}(f)(\mathfrak{s})+sgn_{+}(f)SW_{f^{-1}\mathfrak{s}}(h_{0},h_{1})-SW_{\mathfrak{s}}(h_{0},h_{1})\] \[=\phi_{h}(f)(\mathfrak{s})+(fm-m)(\mathfrak{s})\] where \(m(\mathfrak{s})=SW_{\mathfrak{s}}(h_{0},h_{1})\). Hence \(\phi_{h}\) and \(\phi_{h^{\prime}}\) define the same cohomology class. **Definition 2.2**.: We denote by \[SW\in H^{1}(M(X);\widehat{\mathbb{Z}}[\mathcal{S}(X)])\] the cohomology class \(SW=[\phi_{h}]\) for any \(h\in\Pi^{reg}\). Let \(M_{+}(X)\) denote the subgroup of \(M(X)\) consisting of all \(f\in M(X)\) for which \(sgn_{+}(f)=1\). Then \(M_{+}(X)\) has index \(1\) or \(2\) in \(M(X)\). Observe that \(\widehat{\mathbb{Z}}|_{M_{+}(X)}=\mathbb{Z}\) and thus \(SW|_{M_{+}(X)}\in H^{1}(M_{+}(X);\mathbb{Z}[\mathcal{S}(X)])\). From \(SW|_{M_{+}(X)}\) we can extract \(\mathbb{Z}\)-valued cohomology classes as follows: let \(\mathcal{O}\subseteq\mathcal{S}(X)\) be a \(\Gamma(X)\)-invariant subset of \(\mathcal{S}(X)\). Then we obtain a morphism \(p_{\mathcal{O}}:\mathbb{Z}[\mathcal{S}(X)]\to\mathbb{Z}\) given by \(\phi\mapsto\sum_{\mathfrak{s}\in\mathcal{O}}\phi(\mathfrak{s})\). We define \(SW_{\mathcal{O}}\in H^{1}(M_{+}(X);\mathbb{Z})\) by setting \(SW_{\mathcal{O}}=p_{\mathcal{O}}(SW|_{M_{+}(X)})\). From this definition it follows that \[SW_{\mathcal{O}}|_{T(X)}=\sum_{\mathfrak{s}\in\mathcal{O}}SW_{\mathfrak{s}}.\] Furthermore, for any \(f\in M_{+}(X)\), we have \[SW_{\mathcal{O}}(f)=\sum_{\mathfrak{s}\in\mathcal{O}}SW_{\mathfrak{s}}(h,f(h))\] where \(h\in\Pi^{reg}\). _Remark 2.3_.: In [26], Ruberman defined an invariant \(SW_{\mathfrak{s}}^{tot}\) which is similar to the definition of \(SW_{\mathcal{O}}\) given above. 
Namely \(SW_{\mathfrak{s}}^{tot}:M_{+}(X)\to\mathbb{Z}\) is given by \(SW_{\mathfrak{s}}^{tot}(f)=\sum_{\mathfrak{s}^{\prime}}SW_{\mathfrak{s}^{ \prime}}(h,f(h))\) where the sum is over all spin\({}^{c}\)-structures \(\mathfrak{s}^{\prime}\) such that \(\mathfrak{s}^{\prime}=f^{m}(\mathfrak{s})\) for some \(m\in\mathbb{Z}\). However this invariant is not a group homomorphism and behaves in a complicated manner with respect to composition (see [26, Theorem 3.4]). For this reason, we find it more useful to work with the invariants \(SW_{\mathcal{O}}\). Let \(\mathfrak{s}\in\mathcal{S}(X)\) be a spin\({}^{c}\)-structure and let \(f\in M(X)\). Suppose that \(f\) preserves \(\mathfrak{s}\). If \(sgn_{+}(f)=1\), then the families Seiberg--Witten moduli space for the mapping cylinder of \(f\) with spin\({}^{c}\)-structure \(\mathfrak{s}\) is oriented, and we obtain an integer families Seiberg-Witten invariant \(SW_{\mathfrak{s}}(f)\in\mathbb{Z}\). It is given by \(SW_{\mathfrak{s}}(f)=SW_{\mathfrak{s}}(h,f(h))\), for any \(h\in\Pi^{reg}\). If \(sgn_{+}(f)=-1\) then there is no natural choice of orientation on the families moduli space, hence we only get a mod \(2\) invariant \(SW_{\mathfrak{s}}(f)\in\mathbb{Z}_{2}\) which is given by \(SW_{\mathfrak{s}}(f)=SW_{\mathfrak{s}}(h,f(h))\;(\text{mod }2)\), for any \(h\in\Pi^{reg}\) (the value of \(SW_{\mathfrak{s}}(h,f(h))\) depends on \(h\), but its mod \(2\) reduction does not). We will make use of the following special case of the gluing formula of [3]: **Proposition 2.4**.: _Suppose that \(X=X^{\prime}\#(S^{2}\times S^{2})\), where \(b_{+}(X^{\prime})>1\). Let \(\mathfrak{s}^{\prime}\) be a spin\({}^{c}\)-structure on \(X^{\prime}\) with \(d(\mathfrak{s}^{\prime})=0\) and let \(\mathfrak{s}_{0}\) denote the spin structure on \(S^{2}\times S^{2}\). Let \(\mathfrak{s}=\mathfrak{s}^{\prime}\#\mathfrak{s}_{0}\). Let \(f^{\prime}\) be a diffeomorphism on \(X^{\prime}\) that preserves \(\mathfrak{s}^{\prime}\) and \(\rho\) a diffeomorphism of \(S^{2}\times S^{2}\). Suppose that \(f^{\prime}\) is trivial in a neighbourhood of a point \(x_{1}\in X^{\prime}\) and that \(\rho\) is trivial in a neighbourhood of a point \(x_{2}\in S^{2}\times S^{2}\). Set \(f=f^{\prime}\#\rho\), where the connected sum is performed by removing balls around \(x_{1}\) and \(x_{2}\) and identifying boundaries. Then:_ 1. _If_ \(sgn_{+}(\rho)=1\)_, then_ \(SW_{\mathfrak{s}}(f)=0\;(\text{mod }2)\)_._ 2. _If_ \(sgn_{+}(\rho)=-1\)_, then_ \(SW_{\mathfrak{s}}(f)=SW(X^{\prime},\mathfrak{s}^{\prime})\;(\text{mod }2)\)_, where_ \(SW(X^{\prime},\mathfrak{s}^{\prime})\) _denotes the ordinary Seiberg-Witten invariant of_ \((X^{\prime},\mathfrak{s}^{\prime})\)_._ ## 3. Non-finitely generated mapping class groups In this section we prove that \(M(X)\) is not finitely generated for certain \(4\)-manifolds. **Theorem 3.1**.: _Let \(X=2n\mathbb{CP}^{2}\#10n\overline{\mathbb{CP}^{2}}\), where \(n\geq 3\) is odd. Then \(M(X)\) is not finitely generated. More precisely, the following holds:_ 1. _There is a surjective homomorphism_ \(\Phi:M_{+}(X)\to\mathbb{Z}^{\infty}\) _from_ \(M_{+}(X)\) _to_ \(\mathbb{Z}^{\infty}\)_, where_ \(\mathbb{Z}^{\infty}\) _denotes a free abelian group of countably infinite rank._ 2. _The mod_ \(2\) _reduction of_ \(\Phi\) _extends to a homomorphism_ \(\Phi:M(X)\to\mathbb{Z}_{2}^{\infty}\)_._ 3. \(M_{+}(X)\) _has index_ \(2\) _in_ \(M(X)\)_._ 4. 
_The short exact sequence_ \(1\to M_{+}(X)\to M(X)\to\mathbb{Z}_{2}\to 0\) _splits._ Proof.: First note that \(X=X^{\prime}\#(S^{2}\times S^{2})\), where \(X^{\prime}=(2n-1)\mathbb{CP}^{2}\#(10n-1)\overline{\mathbb{CP}^{2}}\). It follows from [31] that \(\Gamma(X)=Aut(H^{2}(X;\mathbb{Z}))\). Observe that \(d(\mathfrak{s})=(c(\mathfrak{s})^{2}+8n)/4-2n-1=c(\mathfrak{s})^{2}/4-1\). Hence \(d(\mathfrak{s})=-1\) if and only if \(c(\mathfrak{s})^{2}=0\). For each \(k\geq 1\), let \(\mathcal{O}_{k}\subset\mathcal{S}(X)\) denote the set of \(\operatorname{spin}^{c}\)-structures whose characteristic \(c\) satisfies, \(c\neq 0\), \(c^{2}=0\), and \(c\) is \(k\) times a primitive element. We will show that the homomorphism \[\Phi=\bigoplus_{q=1}^{\infty}SW_{\mathcal{O}_{nq-q-1}}:M_{+}(X)\to\mathbb{Z}^ {\infty}\] surjects to a subgroup of \(\mathbb{Z}^{\infty}\) of countably infinite rank. The decomposition \(X=X^{\prime}\#(S^{2}\times S^{2})\) yields an orthogonal decomposition \(L=L^{\prime}\oplus H\), where \(L,L^{\prime}\) are the intersection forms of \(X,X^{\prime}\) and \(H=H^{2}(S^{2}\times S^{2};\mathbb{Z})\) is the hyperbolic lattice. Any characteristic \(c\in L\) decomposes as \(c=(c_{1},c_{2})\), where \(c_{1}\in L^{\prime},c_{2}\in H\) are characteristic. The intersection form \(L^{\prime}\) is odd, hence \(c_{1}\neq 0\). We will partition \(\mathcal{O}_{k}\) into two types of subsets: 1. Subsets \(\{\mathfrak{s},\bar{\mathfrak{s}}\}\), where \(\mathfrak{s}=\mathfrak{s}^{\prime}\#\mathfrak{s}_{0}\) and \(c(\mathfrak{s}_{0})=0\). 2. Subsets \(\{\mathfrak{s}_{1},\mathfrak{s}_{2},\bar{\mathfrak{s}}_{1},\bar{\mathfrak{s}}_ {2}\}\), where \(\mathfrak{s}_{1}=\mathfrak{s}^{\prime}\#\mathfrak{s}^{\prime\prime}\), \(\mathfrak{s}_{2}=\mathfrak{s}^{\prime}\#\bar{\mathfrak{s}}^{\prime\prime}\) and where \(c(\mathfrak{s}^{\prime\prime})\neq 0\). Since \(b_{+}(X)=2n=2\) (mod 4), we have \(SW_{\mathfrak{s}}(t)=SW_{\bar{\mathfrak{s}}}(t)\) for all \(t\in T(X)\). Hence a subset of type of \(\mathcal{O}_{k}\) of type (1) will contribute \(2SW_{\mathfrak{s}}(t)\) to \(SW_{\mathcal{O}_{k}}(t)\) and a subset of type (2) will contribute \(2(SW_{\mathfrak{s}_{1}}(t)+SW_{\mathfrak{s}_{2}}(t))\). Let \(E(n)_{q}\) be the elliptic surface obtained from \(E(n)\) by performing a logarithmic transform of multiplicity \(q\geq 1\). Since \(n\) is odd, \(E(n)_{q}\) is not spin and its intersection form is diagonal of signature \((2n-1,10n-1)\). Hence \(E(n)_{q}\) has the same intersection lattice as \(X^{\prime}\). Furthermore, we have that \(E(n)_{q}\#(S^{2}\times S^{2})\) is diffeomorphic to \(2n\mathbb{CP}^{2}\#10n\overline{\mathbb{CP}^{2}}=X=X^{\prime}\#(S^{2}\times S^ {2})\)[12, Page 320]. So there is an orientation preserving diffeomorphism \(\psi_{q}:E(n)_{q}\#(S^{2}\times S^{2})\to X^{\prime}\#(S^{2}\times S^{2})\). We can choose \(\psi_{q}\) so that it respects the decomposition \(H^{2}(E(n)_{q};\mathbb{Z})\oplus H^{2}(S^{2}\times S^{2};\mathbb{Z})\to H^{2} (X^{\prime};\mathbb{Z})\oplus H^{2}(S^{2}\times S^{2};\mathbb{Z})\). To see this, first let \(\psi_{q}^{\prime}:E(n)_{q}\#(S^{2}\times S^{2})\to X^{\prime}\#(S^{2}\times S ^{2})\) be any diffeomorphism. Then by [31], every isometry of \(H^{2}(X^{\prime}\#(S^{2}\times S^{2});\mathbb{Z})\) can be realised by a diffeomorphism. Hence, composing \(\psi_{q}^{\prime}\) with a suitable diffeomorphism of \(X\), we obtain the desired diffeomorphism \(\psi_{q}\). 
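To make the sign used above explicit (a short check based only on property (3) of Section 2): since \(b_{+}(X)=2n\) with \(n\) odd, \[(-1)^{b_{+}(X)/2+1}=(-1)^{n+1}=+1,\] so charge conjugation preserves these counts; this is why each pair \(\{\mathfrak{s},\bar{\mathfrak{s}}\}\) of type (1) contributes \(2SW_{\mathfrak{s}}(t)\) and each set of type (2) contributes \(2(SW_{\mathfrak{s}_{1}}(t)+SW_{\mathfrak{s}_{2}}(t))\) to \(SW_{\mathcal{O}_{k}}(t)\).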
Let \(\rho\) be a diffeomorphism of \(S^{2}\times S^{2}\) which acts as \(-1\) on \(H^{2}(S^{2}\times S^{2};\mathbb{Z})\) and is trivial in a neighbourhood of some point. Such diffeomorphisms can easily be constructed, for example, take the product \(r\times r\) of two copies of a reflection on \(S^{2}\) and then isotopy it to act trivially in a neighbourhood of a point. Define a diffeomorphism \(f_{0}\in M(X)\) by setting \(f_{0}=id_{X^{\prime}}\#\rho\), where the connected sum is performed by removing a ball of \((S^{2}\times S^{2})\) on which \(\rho\) acts trivially. For each \(q\geq 1\), define a diffeomorphism \(f_{q}\in M(X)\) by setting \(f_{q}=\psi_{q}\circ(id_{E(n)_{q}}\#\rho)\circ\psi_{q}^{-1}\). Note that \(sgn_{+}(f_{0})=sgn_{+}(f_{q})=-1\). Also \(t_{q}=f_{q}\circ f_{0}\in T(X)\). We claim that \(SW_{\mathcal{O}_{nq-q-1}}(t_{q})=2\) (mod 4) and that \(SW_{\mathcal{O}_{nq^{\prime}-q^{\prime}-1}}(t_{q})=0\) (mod 4) for all \(q^{\prime}>q\). This implies that the elements \(\{\Phi(t_{q})\}_{q\geq 1}\) are linearly independent. To see this, first note that \(\Phi(t_{q})\in 2\mathbb{Z}^{\infty}\) by charge conjugation symmetry and that the image of \(\{\Phi(t_{q})/2\}_{q\geq 1}\) is a basis for \(\mathbb{Z}_{2}^{\infty}\), by the above claim. Now suppose \(n_{1}\Phi(t_{1})+n_{2}\Phi(t_{2})+\cdots+n_{r}\Phi(t_{r})=0\) for some \(n_{1},\ldots,n_{r}\in\mathbb{Z}\), not all zero. Without loss of generality we can assume that \(gcd(n_{1},\ldots,n_{r})=1\). Then \(n_{1}\Phi(t_{1})/2+\cdots+n_{r}\Phi(t_{r})/2=0\). But \(\{\Phi(t_{q})/2\}_{q_{\geq 1}}\) are linearly independent in \(\mathbb{Z}_{2}^{\infty}\), so \(n_{1},\ldots,n_{r}\) are all even, a contradiction. Now we prove the claim. Let \(q,q^{\prime}\geq 1\) and set \(k=nq^{\prime}-q^{\prime}-1\). By partitioning \(\mathcal{O}_{k}\) into subsets of type (1) and (2) as described above, we can then write \(SW_{\mathcal{O}_{k}}(t_{q})\) as a sum of contributions from sets of type (1) and (2). Consider a contribution from a subset \(\{\mathfrak{s},\bar{\mathfrak{s}}\}\) of type (1). So \(\mathfrak{s}=\mathfrak{s}^{\prime}\#\mathfrak{s}_{0}\). The contribution is \(2SW_{\mathfrak{s}}(t_{q})\). Since \(f_{q}\) and \(f_{0}\) both preserve \(\mathfrak{s}\), we have that \[SW_{\mathfrak{s}}(t_{q}) =SW_{\mathfrak{s}}(f_{q}\circ f_{0})\] \[=SW_{\mathfrak{s}}(f_{q})+SW_{\mathfrak{s}}(f_{0})\;(\mathrm{mod} \;2)\] \[=SW(E(n)_{q},\mathfrak{s}^{\prime})\;(\mathrm{mod}\;2)\] where the last equality is due to Proposition 2.4. Let \(f\in H^{2}(E(n)_{q};\mathbb{Z})\) denote the class of the multiple fibre. Then \(f\) is non-zero, primitive and \(f^{2}=0\). From the well-known calculation of the Seiberg-Witten invariants for elliptic surfaces [21, Chapter 3], we have that \(SW(E(n)_{q},\mathfrak{s}^{\prime})=0\) unless \(c(\mathfrak{s}^{\prime})\) is a multiple of \(f\). More precisely, \(SW(E(n)_{q},\mathfrak{s}^{\prime})\) is zero unless \(c(\mathfrak{s}^{\prime})=2(qk+a)f-(nq-q-1)f\), where \(0\leq k\leq n-2\) and \(0\leq a<q\). In such a case \(SW(E(n)_{q},\mathfrak{s}^{\prime})=(-1)^{k}\binom{n-2}{k}\). In particular, \(SW(E(n)_{q},\mathfrak{s}^{\prime})=\pm 1\) if \(c(\mathfrak{s}^{\prime})=(nq-q-1)f\) and \(SW(E(n)_{q},\mathfrak{s}^{\prime})=0\) if \(c(\mathfrak{s})=uf\), \(u>nq-q-1\). Now suppose that \(q^{\prime}\geq q\). We have that \(SW(E(n)_{q},\mathfrak{s}^{\prime})=0\) unless \(c(\mathfrak{s})\) is a multiple of \(f\). 
But since \(\mathfrak{s}=\mathfrak{s}^{\prime}\#\mathfrak{s}_{0}\in\mathcal{O}_{k}\), this can only happen if \(c(\mathfrak{s}^{\prime})=\pm kf\) (recall that \(\mathcal{O}_{k}\) is the set of spin\({}^{\varepsilon}\)-structures whose characteristic \(c\) satisfies, \(c\neq 0\), \(c^{2}=0\), and \(c\) is \(k\) times a primitive element). Hence if \(q^{\prime}>q\), then every pair \(\{\mathfrak{s},\bar{\mathfrak{s}}\}\) of type (1) contributes \(0\) mod \(4\) to \(SW_{\mathcal{O}_{k}}(t_{q})\). If \(q^{\prime}=q\), then there is exactly one pair \(\{\mathfrak{s},\bar{\mathfrak{s}}\}\) of type (1) which contributes \(2\) mod \(4\) to \(SW_{\mathcal{O}_{k}}(t_{q})\) and all other pairs are \(0\) mod \(4\). Now consider the contribution from a set \(\{\mathfrak{s}_{1},\mathfrak{s}_{2},\bar{\mathfrak{s}}_{1},\bar{\mathfrak{s}} _{2}\}\) of type (2), where \(\mathfrak{s}_{1}=\mathfrak{s}^{\prime}\#\mathfrak{s}^{\prime\prime}\), \(\mathfrak{s}_{2}=\mathfrak{s}^{\prime}\#\bar{\mathfrak{s}}^{\prime\prime}\) and where \(c(\mathfrak{s}^{\prime\prime})\neq 0\). As seen above, the contribution is \(2(SW_{\mathfrak{s}_{1}}(t_{q})+SW_{\mathfrak{s}_{2}}(t_{q}))\). We will show that \(SW_{\mathfrak{s}_{1}}(t_{q})+SW_{\mathfrak{s}_{2}}(t_{q})=0\;(\mathrm{mod}\;2)\), hence all subsets of type (2) contribute \(0\) mod \(4\) and this will prove the claim. Observe that \(\mathfrak{s}_{2}=f_{q}(\mathfrak{s}_{1})=f_{0}(\mathfrak{s}_{1})\). Let \(h\in\Pi^{reg}\). Then \[SW_{\mathfrak{s}_{1}}(t_{q}) =SW_{\mathfrak{s}_{1}}(h,t_{q}(h))\] \[=SW_{\mathfrak{s}_{1}}(h,f_{q}f_{0}(h))\] \[=SW_{\mathfrak{s}_{1}}(h,f_{q}(h))+SW_{\mathfrak{s}_{1}}(f_{q}(h), f_{q}f_{0}(h))\] \[=SW_{\mathfrak{s}_{1}}(h,f_{q}(h))-SW_{\mathfrak{s}_{2}}(h,f_{0}(h)).\] Similarly, \(SW_{\mathfrak{s}_{2}}(t_{q})=SW_{\mathfrak{s}_{2}}(h,f_{q}(h))-SW_{\mathfrak{s} _{1}}(h,f_{0}(h))\). Hence \[SW_{\mathfrak{s}_{1}}(t_{q})+SW_{\mathfrak{s}_{2}}(t_{q}) =(SW_{\mathfrak{s}_{1}}(h,f_{q}(h))+SW_{\mathfrak{s}_{2}}(h,f_{q}( h)))\] \[\qquad-(SW_{\mathfrak{s}_{1}}(h,f_{0}(h))+SW_{\mathfrak{s}_{2}}(h, f_{0}(h)))\] \[=\big{(}SW_{\mathfrak{s}_{1}}(h,f_{q}(h))-SW_{\mathfrak{s}_{1}}(f_{ q}(h),f_{q}^{2}(h))\big{)}\] \[\qquad-\big{(}SW_{\mathfrak{s}_{1}}(h,f_{0}(h))-SW_{\mathfrak{s} _{1}}(f_{0}(h),f_{0}^{2}(h))\big{)}\] \[=SW_{\mathfrak{s}_{1}}(h,f_{q}^{2}(h))-SW_{\mathfrak{s}_{1}}(h,f_{ 0}^{2}(h))\] \[=SW_{\mathfrak{s}_{1}}(f_{q}^{2})-SW_{\mathfrak{s}_{1}}(f_{0}^{2})\] \[=0\;(\mathrm{mod}\;2)\] where the last line follows from Proposition 2.4. This proves the claim. Hence we have proven that there exists a surjective homomorphism \(M_{+}(X)\to\mathbb{Z}^{\infty}\). The fact that the mod \(2\) reduction of \(\Phi\) extends to \(M(X)\) follows by noting that \(\frac{1}{2}SW_{\mathcal{O}_{k}}\in H^{1}(M_{+}(X);\mathbb{Z})\) extends to \(H^{1}(M(X);\widehat{\mathbb{Z}})\) and then applying the mod \(2\) reduction map \(\widehat{\mathbb{Z}}\to\mathbb{Z}_{2}\). The fact that \(M_{+}(X)\) has index \(2\) in \(M(X)\) follows immediately from \(sgn_{+}(f_{0})=-1\). Furthermore, it is easy to see that \(f_{0}^{2}\) is isotopic to a Dehn twist on the neck of the connected sum \(X^{\prime}\#(S^{2}\times S^{2})\). By using the circle action on \(S^{2}\times S^{2}\), it follows that this is isotopic to the identity. So \(f_{0}\) defines a splitting \(\mathbb{Z}_{2}\to M(X)\) of the sequence \(1\to M_{+}(X)\to M(X)\to\mathbb{Z}_{2}\to 0\). ## 4. Split extensions Let \(L=\mathbb{Z}^{n}\) denote the standard diagonal lattice of rank \(n\) with orthonormal basis \(e_{1},\dots,e_{n}\). 
The isometry group of \(L\) is the hyperoctahedral group \(H_{n}\), which is also the Coxeter group of type \(BC_{n}\). This group is easily seen to be generated by permutations of \(e_{1},\dots,e_{n}\) and the reflections \(r_{1},\dots,r_{n}\) in the hyperplanes orthogonal to \(e_{1},\dots,e_{n}\). The reflections generate a normal subgroup isomorphic to \(\mathbb{Z}_{2}^{n}\) and \(H_{n}\) is the semidirect product \(H_{n}=S_{n}\ltimes\mathbb{Z}_{2}^{n}\). Let \(X=n\mathbb{CP}^{2}\) be the connected sum of \(n\) copies of \(\mathbb{CP}^{2}\). Then \(H^{2}(X;\mathbb{Z})\) is isomorphic to \(L\) with \(e_{1},\dots,e_{n}\) corresponding to generators of \(H^{2}(\mathbb{CP}^{2};\mathbb{Z})\) for the \(n\) summands of \(X\). It is not hard to see that \(\Gamma(X)\) is equal to the full isometry group of \(L\). This will also follow from the construction given below. **Theorem 4.1**.: _For \(X=n\mathbb{CP}^{2}\), there is a splitting \(\Gamma(X)\to M(X)\)._ Proof.: We will construct a smooth fibre bundle \(\pi:E\to B\) with fibres diffeomorphic to \(X\) and such that the monodromy of the local system \(R^{2}\pi_{*}\mathbb{Z}\) yields an isomorphism \(\rho:\pi_{1}(B)\to Aut(L)\). The geometric monodromy of the family defines a lift \(\widetilde{\rho}:\pi_{1}(B)\to\pi_{0}(Diff(X))=M(X)\) of \(\rho\) to \(M(X)\). Then \(\widetilde{\rho}\circ\rho^{-1}:Aut(L)\to M(X)\) is the desired splitting (this also proves that \(\Gamma(X)\cong Aut(L)\)). Let \(C_{m}\) be the space of \(m\)-tuples of distinct points on \(S^{4}\). Clearly \(C_{1}\) is diffeomorphic to \(S^{4}\). For \(m>1\) there is a natural map \(C_{m}\to C_{m-1}\) given by forgetting the \(m\)-th point. This map gives \(C_{m}\) the structure of a fibre bundle over \(C_{m-1}\) with fibre \(F_{m-1}\) the \(4\)-sphere with \(m-1\) points removed. Since \(\pi_{1}(F_{m-1})=\pi_{0}(F_{m-1})=1\), the long exact sequence in homotopy yields an isomorphism \(\pi_{1}(C_{m})\cong\pi_{1}(C_{m-1})\). Since \(\pi_{1}(C_{1})=\pi_{1}(S^{4})=1\), it follows by induction that \(\pi_{1}(C_{n})=1\) for all \(n\). Fix an orientation on \(S^{4}\). Let \(\widetilde{C}_{n}\) denote the space consisting of an \(n\)-tuple \((x_{1},\dots,x_{n})\) of distinct points of \(S^{4}\) together with an \(n\)-tuple \((I_{1},\dots,I_{n})\), where \(I_{j}\) is a complex structure on \(T_{x_{j}}S^{4}\) which induces the given orientation. The forgetful map \(\widetilde{C}_{n}\to C_{n}\) which forgets the complex structures \(I_{1},\dots,I_{n}\) gives \(\widetilde{C}_{n}\) the structure of a fibre bundle over \(C_{n}\). Since the space of complex structures on \(\mathbb{R}^{4}\) compatible with a given orientation is isomorphic to \(SO(4)/U(2)\cong S^{2}\), it follows that the fibres of \(\widetilde{C}_{n}\to C_{n}\) are isomorphic to \((S^{2})^{n}\). The long exact sequence in homotopy implies that \(\pi_{1}(\widetilde{C}_{n})=1\). Consider the trivial family \(\widetilde{E}_{0}=\widetilde{C}_{n}\times S^{4}\to\widetilde{C}_{n}\). This family is equipped with \(n\) sections \(s_{1},\dots,s_{n}\), where \(s_{j}((x_{1},\dots,x_{n}),(I_{1},\dots,I_{n}))=x_{j}\). The normal bundle of \(s_{j}\) is \(N_{j}=T_{x_{j}}S^{4}\). The complex structure \(I_{j}\) gives \(N_{j}\) the structure of a complex rank \(2\) vector bundle. Therefore, we can form a family \(\widetilde{E}_{n}\) by blowing up \(\widetilde{E}_{0}\) along the sections \(s_{1},\dots,s_{n}\). 
More precisely, consider the fibre bundle over \(\widetilde{C}_{n}\) with fibre \(\mathbb{CP}^{2}\) given by the projective bundle \(\mathbb{P}(\mathbb{C}\oplus N_{j})\). This bundle has a natural section \(t_{j}\) corresponding to the \(1\)-dimensional subbundle \(\mathbb{C}\subset\mathbb{C}\oplus N_{j}\). The normal bundle of \(t_{j}\) is isomorphic to \(N_{j}\). Since \(s_{j}\) and \(t_{j}\) have isomorphic normal bundles, we can attach \(\mathbb{P}(\mathbb{C}\oplus N_{j})\) to \(\widetilde{E}_{0}\) by removing tubular neighbourhoods of \(s_{j}\) and \(t_{j}\) and identifying the boundaries. The hyperoctahedral group \(H_{n}=S_{n}\ltimes\mathbb{Z}_{2}^{n}\) acts on \(\widetilde{C}_{n}\) as follows. The permutation group \(S_{n}\) acts by permuting the points \(x_{1},\dots,x_{n}\) as well as the corresponding complex structures \(I_{1},\dots,I_{n}\). The group \(\mathbb{Z}_{2}^{n}\) is generated by reflections \(r_{1},\dots,r_{n}\). We let \(r_{j}\) act by fixing \(x_{1},\dots,x_{n}\), sending \(I_{j}\) to \(-I_{j}\) and fixing the remaining complex structures. The action of \(H_{n}\) is free and we let \(B=\widetilde{C}_{n}/H_{n}\) be the quotient. It follows that \(\pi_{1}(B)\cong H_{n}\). The action of \(H_{n}\) on \(\widetilde{C}_{n}\) lifts to an action on \(\widetilde{E}_{0}=\widetilde{C}_{n}\times S^{4}\) which acts trivially on the \(S^{4}\) factor. It is not hard to see that \(\widetilde{E}_{n}\) can be constructed in such a way that the action of \(H_{n}\) extends to it. Now let \(E=\widetilde{E}_{n}/H_{n}\). This is a family \(\pi:E\to B\) over \(B\) with fibres diffeomorphic to \(n\mathbb{CP}^{2}\). The monodromy of the local system \(R^{2}\pi_{*}\mathbb{Z}\) is easily seen to yield an isomorphism \(\rho:\pi_{1}(B)\to Aut(L)\). As explained above, this yields a splitting \(\Gamma(X)\to M(X)\). ## 5. Non-split extensions Let \(X\) be a compact, simply connected, smooth \(4\)-manifold with intersection form \(L=H^{2}(X;\mathbb{Z})\). Let \(\Sigma\) be a compact surface (orientable or non-orientable). Suppose that \(\rho:\pi_{1}(\Sigma)\to\Gamma(X)\) is a homomorphism. Letting \(\Gamma(X)\) act on \(L_{\mathbb{R}}=\mathbb{R}\otimes_{\mathbb{Z}}L\), we obtain a flat vector bundle \(H_{\rho}\to\Sigma\) which has a covariantly constant bilinear form of signature \((b_{+}(X),b_{-}(X))\). Let \(H_{\rho}^{+}\) denote a maximal positive definite subbundle of \(H_{\rho}\). The choice of subbundle \(H_{\rho}^{+}\) is not unique, but all such subbundles are isomorphic. In particular the Stiefel-Whitney classes \(w_{j}(H_{\rho}^{+})\in H^{j}(\Sigma;\mathbb{Z}_{2})\) depend only on \(\rho\). **Theorem 5.1**.: _Let \(X\) be a compact, simply connected, smooth \(4\)-manifold with \(b_{+}(X)=2\) and let \(\rho:\pi_{1}(\Sigma)\to\Gamma(X)\) be a homomorphism. Suppose that \(w_{2}(H_{\rho}^{+})\neq 0\) and suppose there exists a characteristic \(c\in L\) which is \(\rho\)-invariant and satisfies \(c^{2}>\sigma(X)\). Then \(\rho\) does not lift to a homomorphism \(\widetilde{\rho}:\pi_{1}(\Sigma)\to M(X)\)._ Proof.: Consider first the case that \(\Sigma\) is orientable of genus \(g\). Recall that \(\pi_{1}(\Sigma)\) admits a presentation \[\pi_{1}(\Sigma)=\langle a_{1},b_{1},\dots,a_{g},b_{g}\mid[a_{1},b_{1}]\cdots[a_{g},b_{g}]\rangle\] Suppose that \(\rho\) admits a lift \(\widetilde{\rho}:\pi_{1}(\Sigma)\to M(X)\). 
Let \(\alpha_{j}\) be a diffeomorphism of \(X\) whose isotopy class is \(\widetilde{\rho}(a_{j})\) and let \(\beta_{j}\) be a diffeomorphism of \(X\) whose isotopy class is \(\widetilde{\rho}(b_{j})\). Then \([\alpha_{1},\beta_{1}]\cdots[\alpha_{g},\beta_{g}]\) is isotopic to the identity. The surface \(\Sigma\) can be constructed from a wedge of \(2g\) circles by attaching a \(2\)-cell whose attaching map represents \([a_{1},b_{1}]\cdots[a_{g},b_{g}]\) in \(\pi_{1}(\vee_{i=1}^{2g}S^{1})\). We will construct a smooth family \(\pi:E\to\Sigma\) whose fibres are diffeomorphic to \(X\) as follows. Over the 1-skeleton \(\vee_{i=1}^{2g}S^{1}\), we take the wedge sum of mapping cylinders associated to the diffeomorphisms \(\alpha_{1},\beta_{1},\dots,\alpha_{g},\beta_{g}\). A choice of isotopy from \([\alpha_{1},\beta_{1}]\cdots[\alpha_{g},\beta_{g}]\) to the identity allows us to extend this family over the 2-cell and in this way we obtain the family \(\pi:E\to\Sigma\). By construction, the local system \(R^{2}\pi_{*}\mathbb{Z}\) has monodromy \(\rho\). Now suppose that \(w_{2}(H^{+}_{\rho})\neq 0\) and that there exists a characteristic \(c\in L\) which is \(\rho\)-invariant and satisfies \(c^{2}>\sigma(X)\). This contradicts [2, Theorem 1.1], hence \(\rho\) does not lift to \(M(X)\). The case that \(\Sigma\) is non-orientable is similar. Recall that \(\pi_{1}(\Sigma)\) admits a presentation \[\pi_{1}(\Sigma)=\langle a_{1},\dots,a_{k}\mid a_{1}^{2}\cdots a_{k}^{2}\rangle\] where \(\Sigma\) has Euler characteristic \(1-k\). If \(\rho\) lifts to a homomorphism \(\widetilde{\rho}:\pi_{1}(\Sigma)\to M(X)\), then we choose diffeomorphisms \(\alpha_{1},\dots,\alpha_{k}\) where the isotopy class of \(\alpha_{j}\) is \(\widetilde{\rho}(a_{j})\). Then \(\alpha_{1}^{2}\cdots\alpha_{k}^{2}\) is isotopic to the identity. A choice of such an isotopy allows us to construct a smooth family \(\pi:E\to\Sigma\) with fibres diffeomorphic to \(X\) and such that the monodromy of \(R^{2}\pi_{*}\mathbb{Z}\) is \(\rho\). As before, this contradicts [2, Theorem 1.1], hence \(\rho\) does not lift to \(M(X)\). _Remark 5.2_.: A similar argument was used in [17] to prove the non-triviality of \(T(X)\) for \(X=2\mathbb{CP}^{2}\#n\overline{\mathbb{CP}^{2}}\), \(n\geq 11\). **Corollary 5.3**.: _Let \(X=(S^{2}\times S^{2})\#X^{\prime}\), where \(b_{+}(X^{\prime})=1\), \(b_{-}(X^{\prime})\geq 10\). Then there does not exist a splitting \(\Gamma(X)\to M(X)\)._ Proof.: Let \(L=H^{2}(X;\mathbb{Z})\), \(L^{\prime}=H^{2}(X^{\prime};\mathbb{Z})\) denote the intersection lattices of \(X\) and \(X^{\prime}\) and let \(H=H^{2}(S^{2}\times S^{2};\mathbb{Z})\). So \(L\cong H\oplus L^{\prime}\). Since \(X=(S^{2}\times S^{2})\#X^{\prime}\), we have that \(\Gamma(X)=Aut(H^{2}(X;\mathbb{Z}))\) by [31]. Let \(x,y\in H\) be a basis with \(x^{2}=y^{2}=0\), \(\langle x,y\rangle=1\). Since \(b_{+}(X^{\prime})=1\) and \(\sigma(X^{\prime})<0\), it follows that \(X^{\prime}\) is not spin, and hence that \(L^{\prime}\cong H^{\prime}\oplus E_{8}\oplus L^{\prime\prime}\), where \(H^{\prime}\) has basis \(x^{\prime},y^{\prime}\), \((x^{\prime})^{2}=(y^{\prime})^{2}=0\), \(\langle x^{\prime},y^{\prime}\rangle=1\), \(E_{8}\) is the negative definite \(E_{8}\) lattice and \(L^{\prime\prime}\) is a diagonal lattice with basis \(e_{1},\dots,e_{m}\), where \(m=b_{-}(X^{\prime})-9\), with \(e_{i}^{2}=-1\) for all \(i\), \(\langle e_{i},e_{j}\rangle=0\) for \(i\neq j\). Let \(u=x+y\), \(v=x^{\prime}+y^{\prime}\). Then \(u^{2}=v^{2}=2\), \(\langle u,v\rangle=0\). 
Let \(r_{u}\) be the reflection \(r_{u}(x)=x-\langle x,u\rangle u\) and define \(r_{v}\) similarly. Consider the isometry \(f(x)=r_{u}r_{v}(x)\). Then \(f\in\Gamma(X)\) and \(f^{2}=1\). Hence we obtain a homomorphism \(\rho:\pi_{1}(\mathbb{RP}^{2})\to\Gamma(X)\) which sends the generator of \(\pi_{1}(\mathbb{RP}^{2})\) to \(f\). Since \(f\) acts as \(-1\) on the maximal positive definite subspace of \(H^{2}(X;\mathbb{R})\) spanned by \(u\) and \(v\), we have that \(w_{2}(H^{+}_{\rho})\neq 0\). Let \(c=e_{1}+\cdots+e_{m}\). Then \(c\) is a characteristic such that \(c^{2}>\sigma(X)\) and \(\langle c,u\rangle=\langle c,v\rangle=0\). Then \(r_{u}(c)=r_{v}(c)=c\) and hence \(f(c)=c\). Theorem 5.1 therefore implies that \(\rho\) does not lift to \(M(X)\). Hence the subgroup \(\langle f\rangle\subseteq\Gamma(X)\) does not lift to \(M(X)\); in particular, there does not exist a splitting \(\Gamma(X)\to M(X)\). _Remark 5.4_.: Note that in the above proof \(u\in L\) can be realised by an embedded 2-sphere in \(X\), namely the diagonal \(S^{2}\subset S^{2}\times S^{2}\). By a result of Seidel [27], it follows that \(r_{u}\) can be lifted to an element \(\hat{r}_{u}\in M(X)\) of order 2. Since \(\Gamma(X)=Aut(L)\), it follows that there is a diffeomorphism of \(X\) sending \(u\) to \(v\). It follows that \(v\) can also be realised by an embedded 2-sphere and hence \(r_{v}\) can be lifted to an element \(\hat{r}_{v}\in M(X)\) of order 2. Then \(\hat{r}_{u}\hat{r}_{v}\) is a lift of \(f\) to \(M(X)\). If \(u,v\) could be represented by _disjoint_ embedded 2-spheres, then \(\hat{r}_{u},\hat{r}_{v}\) commute (since \(\hat{r}_{u},\hat{r}_{v}\) can be constructed to have disjoint supports) and then \(\hat{r}_{u}\hat{r}_{v}\) would be an involutive lift of \(f\), contradicting the corollary above. We deduce that \(u,v\) can be represented by embedded spheres, but they can not be represented by disjoint embedded spheres even though \(\langle u,v\rangle=0\). ## 6. Nielsen realisation As explained in the introduction, the following result shows that the Nielsen realisation problem fails for \(X=X^{\prime}\#p\mathbb{CP}^{2}\#q\overline{\mathbb{CP}^{2}}\) whenever \(p+q\geq 4\). **Theorem 6.1**.: _Let \(X=X^{\prime}\#p\mathbb{CP}^{2}\#q\overline{\mathbb{CP}^{2}}\) where \(X^{\prime}\) is a compact, smooth, simply-connected \(4\)-manifold and \(p+q\geq 4\). Then \(M(X)\) contains a subgroup isomorphic to \(\mathbb{Z}_{2}^{4}\) which can not be lifted to \(Diff(X)\)._ Proof.: To each summand of \(\mathbb{CP}^{2}\) or \(\overline{\mathbb{CP}^{2}}\) in \(X\), there is a corresponding embedded \(2\)-sphere of self-intersection \(\pm 1\). Let \(E_{1},\ldots,E_{4}\) be any four of them. Let \(t_{1},\ldots,t_{4}\in M(X)\) be the corresponding Dehn twists around these spheres. Then \(t_{1},\ldots,t_{4}\) are involutions [27] and they commute since \(E_{1},\ldots,E_{4}\) are disjoint. Hence the group \(G\subseteq M(X)\) generated by \(t_{1},\ldots,t_{4}\) is isomorphic to \(\mathbb{Z}_{2}^{4}\). Now suppose that \(G\) can be lifted to \(Diff(X)\). Hence we can find commuting diffeomorphisms \(\sigma_{1},\ldots,\sigma_{4}\) such that the isotopy class of \(\sigma_{i}\) is \(t_{i}\). Consider the fixed point set \(F\) of \(\sigma_{1}\). Since \(\sigma_{1}\) acts on \(H^{2}(X;\mathbb{Z})\) as a reflection in a \(\pm 1\) sphere, it follows from [8, Proposition 2.4] that \(F\) consists of a single copy of \(\mathbb{RP}^{2}\), together with some isolated points and some \(2\)-spheres. 
Let \(G_{0}\) be the subgroup of \(G\) generated by \(\sigma_{2},\sigma_{3},\sigma_{4}\). Since \(\sigma_{2},\sigma_{3},\sigma_{4}\) commute with \(\sigma_{1}\), they act on \(F\) and in particular must send the copy of \(\mathbb{RP}^{2}\) to itself. Hence \(G_{0}\) acts on \(\mathbb{RP}^{2}\). We claim that the action is effective. To see this, suppose \(f\in G_{0}\) fixes \(\mathbb{RP}^{2}\) pointwise. Since \(f\) is an orientation preserving involution, it must act on the normal bundle of \(\mathbb{RP}^{2}\) in \(X\) as either the identity or multiplication by \(-1\). Hence either \(f\) or \(\sigma_{1}f\) fixes \(\mathbb{RP}^{2}\) pointwise and acts trivially on the normal bundle. For a diffeomorphism of finite order, this can only happen if the diffeomorphism is the identity. Hence \(f\) or \(\sigma_{1}f\) is the identity, but \(f\in G_{0}\), so \(f\neq\sigma_{1}\) and it must be that \(f\) is the identity. A finite group action on \(\mathbb{RP}^{2}\) by diffeomorphisms is conjugate to a subgroup of \(PO(3)\cong SO(3)\). Since \(G_{0}\) is abelian, its action on the standard representation of \(SO(3)\) can be simultaneously diagonalised, so \(G_{0}\) is isomorphic to a subgroup of \(\{diag(\epsilon_{1},\epsilon_{2},\epsilon_{3})\in SO(3)\}\cong\mathbb{Z}_{2}^{2}\), which is impossible since \(|G_{0}|=8\). So \(G\) does not lift to \(Diff(X)\). ## 7. Boundary Dehn twists Let \(X^{(n)}\) be obtained from \(X\) by removing \(n\) disjoint open balls. So \(X^{(n)}\) is a compact \(4\)-manifold with boundary consisting of \(n\) copies of \(S^{3}\). Let \(Diff(X^{(n)},\partial X^{(n)})\) denote the group of diffeomorphisms of \(X^{(n)}\) which are the identity in a neighbourhood of the boundary. Let \(M_{n}(X)=\pi_{0}(Diff(X^{(n)},\partial X^{(n)}))\) denote the group of components of \(Diff(X^{(n)},\partial X^{(n)})\). It is known that the map \(M_{n}(X)\to M(X)\) is surjective and that the kernel is generated by Dehn twists on the boundary components [11]. More precisely, if \(S^{3}\to X^{(n)}\) is a boundary component, then it has a tubular neighbourhood \([0,1]\times S^{3}\subset X^{(n)}\). The Dehn twist on this boundary component is defined by taking a non-trivial loop \(\alpha_{t}:[0,1]\to SO(4)\) and defining \(\phi:[0,1]\times S^{3}\to[0,1]\times S^{3}\) by \(\phi(t,x)=(t,\alpha_{t}(x))\), where \(SO(4)\) acts on \(S^{3}\) in the standard way. We assume that \(\alpha_{t}\) is smooth and equals the identity in a neighbourhood of \(\{0,1\}\), hence \(\phi\) can be extended to an element of \(Diff(X^{(n)},\partial X^{(n)})\) by taking it to be the identity outside of the tubular neighbourhood. Let \(K_{n}(X)\) denote the kernel of \(M_{n}(X)\to M(X)\), so we have a short exact sequence \[1\to K_{n}(X)\to M_{n}(X)\to M(X)\to 1.\] Furthermore, we have a surjection \(\mathbb{Z}_{2}^{n}\to K_{n}(X)\) given by Dehn twists on the boundary components ([11, Proposition 3.1]). **Proposition 7.1**.: _Let \(X\) be a compact, smooth, simply-connected \(4\)-manifold._ 1. _If_ \(X\) _is spin, then_ \(K_{n}(X)\) _is either_ \(\mathbb{Z}_{2}^{n}\) _or_ \(\mathbb{Z}_{2}^{n}/\Delta\mathbb{Z}_{2}\)_, for all_ \(n\)_, where_ \(\Delta\mathbb{Z}_{2}\) _is the diagonal copy of_ \(\mathbb{Z}_{2}\)_._ 2. _If_ \(X\) _is not spin, then_ \(K_{n}(X)=0\) _for all_ \(n\)_, hence_ \(M_{n}(X)\cong M(X)\)_._ Proof.: Part (1) is given by [11, Corollary 2.5] and part (2) by [22, Corollary A.5]. In light of Proposition 7.1, boundary Dehn twists are only interesting when \(X\) is spin. 
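One way to see that each boundary Dehn twist has order at most two in \(M_{n}(X)\), sketched here using only the description of \(\phi\) above, is the following: the twist is defined by a loop \(\alpha_{t}\) in \(SO(4)\), and since \[\pi_{1}(SO(4))\cong\mathbb{Z}_{2},\] the loop obtained by traversing \(\alpha_{t}\) twice is null-homotopic; a null-homotopy through loops based at the identity yields an isotopy, supported in the collar \([0,1]\times S^{3}\) and relative to the boundary, from \(\phi^{2}\) to the identity. This is consistent with the surjection \(\mathbb{Z}_{2}^{n}\to K_{n}(X)\) above; whether the twist itself is non-trivial is the subtle question, and by Proposition 7.1 it only arises when \(X\) is spin.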
In this case, we either have \(K_{n}(X)\cong\mathbb{Z}_{2}^{n}\) or \(K_{n}(X)\cong\mathbb{Z}_{2}^{n}/\Delta\mathbb{Z}_{2}\). Which of these two cases occurs is completely determined by the \(n=1\) case. We consider this case in more detail. There is a Serre fibration \[Diff(X^{(1)},\partial X^{(1)})\to Diff(X)\to Emb\left(D^{4},X\right) \tag{7.1}\] where \(Emb\left(D^{4},X\right)\) is the space of embeddings of a disc in \(X\) which can be extended to a diffeomorphism. Furthermore, there is a homotopy equivalence \(Emb\left(D^{4},X\right)\cong F(X)\), where \(F(X)\) is the oriented frame bundle of \(X\)[11]. Since \(X\) is simply-connected and spin, \(\pi_{1}(F(X))\cong\mathbb{Z}_{2}\). Then the fibration (7.1) induces an exact sequence \[\pi_{1}(Diff(X))\overset{\phi}{\longrightarrow}\mathbb{Z}_{2}\to M_{1}(X) \to M(X)\to 1.\] In the absence of a metric we can define the spin bundle of \(X\) to be the universal cover \(\widetilde{F}(X)\to F(X)\) of \(F(X)\). Since \(\pi_{1}(F(X))\cong\mathbb{Z}_{2}\), \(\widetilde{F}(X)\to F(X)\) is a double cover. Since \(Emb\left(D^{4},X\right)\cong F(X)\), it follows that \(\phi\) is the map that measures whether or not a loop of diffeomorphisms of \(X\) lifts to a loop in the spin bundle of \(X\). This leads to an alternative description of the group \(M_{1}(X)\) when \(X\) is spin. Let \(SpinDiff(X)\) be the group whose elements consist of a diffeomorphism \(f\in Diff(X)\) and a choice of lift of \(f_{*}:F(X)\to F(X)\) to \(\widetilde{F}(X)\). We have a short exact sequence \(1\to\mathbb{Z}_{2}\to SpinDiff(X)\to Diff(X)\to 1\) and the connecting homomorphism \(\pi_{1}(Diff(X))\to\mathbb{Z}_{2}\) is precisely \(\phi\). The map \(Diff(X^{(1)},\partial X^{(1)})\to Diff(X)\) admits a lift \(Diff(X^{(1)},\partial X^{(1)})\to SpinDiff(X)\) by taking the unique lift which is the identity over \(\partial X^{(1)}\). We then have a commutative diagram from which it follows that \(M_{1}(X)\to\pi_{0}(SpinDiff(X))\) is an isomorphism. If \(\phi\) is non-trivial, then \(K_{1}(X)=0\) and \(M_{1}(X)\to M(X)\) is an isomorphism. This happens for \(S^{2}\times S^{2}\), as seen by taking a loop of diffeomorphisms given by a circle action which rotates one of the spheres. Similarly, \(\phi\) is non-trivial for \(X=S^{4}\) or for a connected sum of copies of \(S^{2}\times S^{2}\). If \(\phi\) is trivial, then \(K_{1}(X)\cong\mathbb{Z}_{2}\) and \(M_{1}(X)\to M(X)\) is an extension of \(M(X)\) by \(\mathbb{Z}_{2}\), hence corresponds to a class \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\). It is natural to ask what this class is and in particular, whether or not it is trivial. First, we need some examples of spin \(4\)-manifolds where \(\phi=0\). **Theorem 7.2**.: _Let \(X\) be a compact, smooth, simply-connected \(4\)-manifold. If \(X\) is homeomorphic to \(K3\) then \(\phi=0\). Similarly, if \(X=X^{\prime}\#(S^{2}\times S^{2})\), where \(X^{\prime}\) is homeomorphic to \(K3\), then \(\phi=0\)._ Proof.: In [4], it is proven that if \(E\to S^{2}\) is a smooth family of \(K3\) surfaces over \(S^{2}\), then \(w_{2}(TE)=0\). As explained in [18], this implies that the homomorphism \(\phi\) is zero. The same argument works for any \(X\) that is homeomorphic to \(K3\), since by [20], the Seiberg-Witten invariant of the spin structure of \(X\) is odd. Next, suppose \(X=X^{\prime}\#(S^{2}\times S^{2})\), where \(X^{\prime}\) is homeomorphic to \(K3\). Suppose that \(\phi\) is non-zero. This means that the boundary Dehn twist \(\tau\in M_{1}(X)\) is trivial. 
But this would imply that the Dehn twist on the neck of \(K3\#X^{\prime}\) becomes trivial upon connected sum with \(S^{2}\times S^{2}\). However this contradicts [19] (in [19] the theorem is stated only for \(X^{\prime}=K3\), but the exact same proof works for any smooth \(4\)-manifold homeomorphic to \(K3\)). Recall that an involution \(f\) on a simply-connected spin \(4\)-manifold \(X\) is called even or odd according to whether or not \(f\) lifts to an involution on the spin bundle of \(X\). **Proposition 7.3**.: _Suppose \(X\) is spin and that \(\phi=0\), so that the extension class \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\) is defined. Suppose that \(f\) is an odd involution. Then \(\xi_{X}(f)\neq 0\). In particular, the extension \(\mathbb{Z}_{2}\to M_{1}(X)\to M(X)\) is non-trivial._ Proof.: As explained above, the extension \(1\to\mathbb{Z}_{2}\to M_{1}(X)\to M(X)\to 1\) is isomorphic to the extension \(1\to\mathbb{Z}_{2}\to\pi_{0}(SpinDiff(X))\to M(X)\to 1\). But \(f\) defines a class \([f]\in M(X)\) such that \([f]^{2}=1\), but any lift of \(f\) to the spin bundle is not an involution. So there is no splitting \(M(X)\to M_{1}(X)\) and more precisely, \(\xi_{X}(f)\neq 0\). **Corollary 7.4**.: _If \(X=K3\) or \(K3\#(S^{2}\times S^{2})\), then \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\) is non-trivial._ Proof.: This is immediate from Theorem 7.2 and Proposition 7.3, since both \(K3\) and \(K3\#(S^{2}\times S^{2})\) admit odd involutions. In what follows we will completely determine the class \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\) when \(X\) is homeomorphic to \(K3\). **Proposition 7.5**.: _Let \(\pi:E\to B\) be a smooth fibre bundle, where \(B\) is a compact surface and the fibres of \(E\) are diffeomorphic to a compact, simply-connected, smooth spin \(4\)-manifold \(X\). Then_ 1. _There exists a spin_\({}^{c}\)_-structure_ \(\mathfrak{s}_{E/B}\) _on the vertical tangent bundle_ \(TE/B=Ker(\pi_{*})\) _whose restriction to each fibre is spin._ 2. _Let_ \(ind(D)\in K^{0}(B)\) _denote the families index of the Dirac operator_ \(D\) _with respect to the spin_\({}^{c}\)_-structure_ \(\mathfrak{s}_{E/B}\)_. Then_ \[c_{1}(ind(D))=(\sigma(X)/16)w_{2}(TE/B)\;(\mathrm{mod}\;2).\] Proof.: (1) follows immediately from [2, Proposition 2.1]. The Dirac operator \(D\) for the spin\({}^{c}\)-structure \(\mathfrak{s}_{E/B}\) defines a family of elliptic operators parametrised by \(B\) and \(ind(D)\) is the families index. Then \(c_{1}(ind(D))=c_{1}(\mathcal{L})\), where \(\mathcal{L}=det(ind(D))\) is the determinant line bundle of \(D\). Suppose that the family \(E\) is determined by transition function \(\psi_{ij}\) valued in \(Diff(X)\). Let \(\widetilde{\psi}_{ij}\) be lifts of \(\psi_{ij}\) to \(SpinDiff(X)\). Then \(\widetilde{\psi}_{ij}\widetilde{\psi}_{jk}\widetilde{\psi}_{ki}=g_{ijk}\), where \(g_{ijk}\) is a \(\mathbb{Z}_{2}\)-valued cocycle, defining a class \([g_{ijk}]\in H^{2}(B;\mathbb{Z}_{2})\). Clearly \(w_{2}(TE/B)=[g_{ijk}]\). Observe that \(c(\mathfrak{s}_{E/B})\in H^{2}(B;\mathbb{Z})\) is a lift of \([g_{ijk}]\) to integer coefficients. Therefore we can represent \(c(\mathfrak{s}_{E/B})\) as an integer-valued 2-cocycle \(c_{ijk}\) such that \(c_{ijk}=g_{ijk}\;(\mathrm{mod}\;2)\). Choose real-valued smooth functions \(u_{ij}\) such that \(c_{ijk}=u_{ij}+u_{jk}+u_{ki}\). Set \(f_{ij}=e^{2\pi iu_{ij}}\). Then \(f_{ij}\) define transition functions for a complex line bundle whose first Chern class is \([c_{ijk}]\). Note that \(f_{ij}=h_{ij}^{2}\), where \(h_{ij}=e^{\pi iu_{ij}}\). 
Then \(h_{ij}h_{jk}h_{ki}=(-1)^{g_{ijk}}\). Define \(Spin^{c}Diff(X)=U(1)\times_{\mathbb{Z}_{2}}SpinDiff(X)\). Then \(\varphi_{ij}=h_{ij}\widetilde{\psi}_{ij}\) is a 2-cocycle valued in \(Spin^{c}Diff(X)\). Consider now the transition functions for the determinant line bundle \(\mathcal{L}\). Since \(\mathfrak{s}_{E/B}\) restricts to a spin structure on the fibres, the spinor bundles have a quaternionic structure on each fibre. It follows that \(\widetilde{\psi}_{ij}\) induces a trivial action on the determinant line. However, the \(U(1)\)-factor \(h_{ij}\) in \(\varphi_{ij}=h_{ij}\widetilde{\psi}_{ij}\) acts on the spinor bundles as scalar multiplication which then acts on the determinant line by \(h_{ij}^{d}\), where \(d\) is the virtual rank of \(ind(D)\), which is \(d=-\sigma(X)/8\). Hence \(\mathcal{L}\) has transition functions \(h_{ij}^{-\sigma(X)/8}=f_{ij}^{-\sigma(X)/16}\). Recalling that \(f_{ij}\) are transition functions for a line bundle with Chern class \(c(\mathfrak{s}_{E/B})\), it follows that \[c_{1}(ind(D))=c_{1}(\mathcal{L})=-\frac{\sigma(X)}{16}c(\mathfrak{s}_{E/B})= \frac{\sigma(X)}{16}w_{2}(TE/B)\;(\mathrm{mod}\;2).\] **Proposition 7.6**.: _Let \(\pi:E\to B\) be a smooth fibre bundle, where the fibres of \(E\) are homeomorphic to \(K3\). Then \(w_{2}(TE/B)=w_{2}(H^{+})\), where \(H^{+}\to B\) denote the bundle whose fibre over \(b\) is a maximal positive definite subspace of \(H^{2}(E_{b};\mathbb{R})\)._ Proof.: Since \(H^{2}(B;\mathbb{Z}_{2})\) is detected by maps of compact surfaces into \(B\), it suffices to prove the result when \(B\) is a compact surface. Then by Proposition 7.5, \(w_{2}(TE/B)=c_{1}(ind(D))\;(\mathrm{mod}\;2)\). On the other hand, since the fibres are homeomorphic to \(K3\), their Seiberg-Witten with respect to the spin structure is odd [20]. Then by [4, Corollary 1.3], \(c_{1}(ind(D))=w_{2}(H^{+})\). Let \(L\) be a lattice and \(A=Aut(L)\) the group of automorphisms. Over the classifying space \(BA\) we have the tautological flat bundle \(H=EA\times_{A}L\). Let \(H^{+}\to BA\) be a maximal positive subbundle. This defines a characteristic class \(w_{2}(H^{+})\in H^{2}(Aut(L);\mathbb{Z}_{2})\). **Theorem 7.7**.: _Let \(X\) be a smooth \(4\)-manifold which is homeomorphic to \(K3\). Let \(L_{X}\) be the intersection lattice of \(X\). Then the extension class \(\xi_{X}\in H^{2}(M(X);\mathbb{Z}_{2})\) is the pullback of \(w_{2}(H^{+})\in H^{2}(Aut(L_{X});\mathbb{Z}_{2})\) under the map \(M(X)\to Aut(L_{X})\)._ Proof.: Let \(B\) be a compact surface and consider a map \(\iota:B\to BM(X)\). This is equivalent to a homomorphism \(\rho:\pi_{1}(B)\to M(X)\). We claim that \(\rho\) is the geometric monodromy of a family \(E\to B\). We can take \(B\) to be given by attaching a \(2\)-cell to a wedge of \(k\) circles. Each circle defines a generator \(g_{i}\in\pi_{1}(B)\) and the \(2\)-cell defines a relation \(r=r(g_{1},\dots,g_{k})\), which is a word in the \(g_{i}\). Choose a lift \(f_{i}\in Diff(X)\) of \(\rho(g_{i})\in M(X)\). Then we can construct a family \(E_{1}\) over the \(1\)-skeleton on \(B\) as a wedge of mapping cylinders corresponding to the diffeomorphisms \(f_{1},\dots,f_{k}\). Since \(g_{1},\dots,g_{k}\) satisfy \(r\), it follows that \(r(f_{1},\dots,f_{k})\) is isotopic to the identity. Choosing such an isotopy, we can extend \(E_{1}\) over the \(2\)-cell, giving the desired family \(E\to B\). As explained in [4, Remark 4.20], we can assume that the family \(E\to B\) is smooth. 
Now consider the obstruction to lifting the structure group of \(E\) to \(SpinDiff(X)\). This is easily seen to coincide with the obstruction to lifting \(\rho:\pi_{1}(B)\to M(X)\) to \(M_{1}(X)\), which is \(\iota^{*}(\xi_{X})\in H^{2}(B;\mathbb{Z}_{2})\). On the other hand, the obstruction to lifting the structure group of \(E\) to \(SpinDiff(X)\) is \(w_{2}(TE/B)\), which by Proposition 7.6 equals \(w_{2}(H^{+})\). Since \(H^{2}(M(X);\mathbb{Z}_{2})\) is detected by maps of compact surfaces \(B\) into \(BM(X)\), the result is proven.
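For instance, if \(X\) is homeomorphic to \(K3\), then \(b_{2}^{+}(X)=3\), \(b_{2}^{-}(X)=19\) and \(\sigma(X)=3-19=-16\), so for a family \(E\to B\) over a compact surface Proposition 7.5 gives
\[c_{1}(ind(D))=\frac{\sigma(X)}{16}w_{2}(TE/B)=-w_{2}(TE/B)=w_{2}(TE/B)\;(\mathrm{mod}\;2).\]
Combined with \(c_{1}(ind(D))=w_{2}(H^{+})\) from [4, Corollary 1.3], this is exactly the identity \(w_{2}(TE/B)=w_{2}(H^{+})\) used in Proposition 7.6 and hence in the identification of \(\xi_{X}\) in Theorem 7.7.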
2309.02922
Effect of Backing on Neutron Spectra for Low Energy Quasi-Mono-energetic p+$^7$Li Reaction
The $\underline{\textbf{MO}}$nte-carlo $\underline{\textbf{N}}$ucleon transport $\underline{\textbf{C}}$ode (MONC) is extended to proton transport below 20 MeV using the ENDF and EXFOR data bases. It is used to simulate the p+$^7$Li reaction up to 20 MeV proton energy, and the produced neutron spectra are reported here. The simulated results are compared with calculated values from other available codes such as PINO, EPEN and SimLiT, and with experimental data. The spectra reported here can be used to obtain neutron cross-sections with this quasi-mono-energetic neutron source and will help to subtract the low-energy contribution. The neutron spectra are also useful because this reaction is used for accelerator-based Boron Neutron Capture Therapy. Backing materials are used to fully stop the proton beam, hence the contribution of neutrons from the backing materials is estimated. It is found that Tantalum is a good backing material below $\sim$8 MeV and Carbon is better at higher energies.
H. Kumawat
2023-09-06T11:32:43Z
http://arxiv.org/abs/2309.02922v2
# Effect of Backing on Neutron Spectra for Low Energy Quasi-Mono-energetic p+\({}^{7}\)Li Reaction. ###### Abstract The **MO**nte-carlo **N**ucleon transport **C**ode (MONC) is extended to proton transport below 20 MeV using the ENDF and EXFOR data bases. It is used to simulate the p+\({}^{7}\)Li reaction up to 20 MeV proton energy, and the produced neutron spectra are reported here. The simulated results are compared with calculated values from other available codes such as PINO, EPEN and SimLiT, and with experimental data. The spectra reported here can be used to obtain neutron cross-sections with this quasi-mono-energetic neutron source and will help to subtract the low-energy contribution. The neutron spectra are also useful because this reaction is used for accelerator-based Boron Neutron Capture Therapy. Backing materials are used to fully stop the proton beam, hence the contribution of neutrons from the backing materials is estimated. It is found that Tantalum is a good backing material below \(\sim\)8 MeV and Carbon is better at higher energies. Monte Carlo, Quasi-mono-energetic Neutron Source, Boron Neutron Capture Therapy ## I Introduction Measurement of neutron cross-sections is an important research activity for its applications in nuclear reactors, cancer therapy, neutron dosimetry and nuclear astrophysics [1; 2; 3; 4; 5; 6; 7; 8; 9]. The \({}^{7}\)Li(p, n)X reaction is used as a quasi-mono-energetic neutron source to measure cross-sections and is also a possible accelerator-based source of neutrons for Boron Neutron Capture Therapy (BNCT) [10; 11; 12]. The threshold for the \({}^{7}\)Li(p, n)\({}^{7}\)Be\({}_{g}\) reaction is \(\sim\)1.88 MeV and the cross-section rises rapidly near the threshold energy. For protons above 2.37 MeV, production of another group of neutrons starts due to the excited state of \({}^{7}\)Be at 429 keV. The neutron production threshold for the three-body breakup channel \({}^{7}\)Li(p, n+\({}^{3}\)He)\(\alpha\) is 3.7 MeV, which gives a broad neutron energy distribution. Thus, the \({}^{7}\)Li(p, n)X reaction can be considered quasi-mono-energetic near threshold, but it has a low-energy tail for proton energies above about 4 MeV. It can still be used as a quasi-mono-energetic source for neutron threshold reactions where the low-energy tail does not contribute. The contribution of the tail should be carefully subtracted for neutron capture reactions, where low-energy neutrons contribute more strongly when the neutron activation technique is used for cross-section measurement. Additionally, the finite thickness of the Lithium target (up to 100 \(\mu\)m in various experiments) broadens the neutron spectrum, and the corresponding neutron energy spread may be up to 500 keV. Some experimentalists use a cadmium foil to cut the very-low-energy neutron contribution, but it remains important to quote the energy spectrum for a particular measurement. Recently, several experiments have been conducted at the BARC-FOTIA [13; 14; 15; 16] and BARC-TIFR [17; 18; 19; 20; 21; 22] facilities using this reaction. The MONC code [23; 24; 25] is used for Monte Carlo simulations of a Lithium target of thickness 4 mg/cm\({}^{2}\), which is a typical thickness used at many experimental facilities in Mumbai, India. The contributions of the low-energy tail and the second peak should be considered when quoting a cross-section at a single energy and, as best practice, should be subtracted as described in Ref. [26] or by similar methods. The neutron flux monitor reaction should be sensitive in an energy range similar to that of the reaction being measured. 
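As a quick orientation to the reaction energetics quoted above, a minimal non-relativistic kinematics sketch (not part of MONC; mass numbers are used in place of exact nuclear masses and the Q-values are approximate) reproduces the quoted thresholds and the \(0^{\circ}\) neutron energy:

```python
import numpy as np

# Approximate Q-values in MeV (assumed here, taken from standard mass tables)
Q_GS = -1.644              # 7Li(p,n)7Be_g
Q_EX = Q_GS - 0.429        # 7Li(p,n)7Be* (429 keV excited state)
Q_3B = -3.23               # 7Li(p,n+3He)alpha three-body breakup
M_P, M_N, M_LI, M_BE = 1.0, 1.0, 7.0, 7.0   # mass numbers are accurate enough here

def e_threshold(q, m_proj=M_P, m_targ=M_LI):
    """Lab-frame threshold energy of an endothermic reaction (MeV)."""
    return -q * (m_proj + m_targ) / m_targ

def e_neutron_0deg(e_p, q=Q_GS, m_recoil=M_BE):
    """0-degree neutron energy (MeV) from two-body kinematics, larger root.

    Only valid above threshold (the square root becomes imaginary below it).
    """
    r = np.sqrt(M_P * M_N * e_p)
    s = M_P * M_N * e_p + (m_recoil + M_N) * (m_recoil * q + (m_recoil - M_P) * e_p)
    return ((r + np.sqrt(s)) / (M_N + m_recoil)) ** 2

print(e_threshold(Q_GS), e_threshold(Q_EX), e_threshold(Q_3B))  # ~1.88, 2.37, 3.7 MeV
print(e_neutron_0deg(1.912))  # ~0.11 MeV neutron endpoint near threshold
```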
The calculations are also performed using code PINO [27] which includes only \({}^{7}\)Li(p, n\({}_{0}\))\({}^{7}\)Be\({}_{g}\) and \({}^{7}\)Li(p, n\({}_{1}\))\({}^{7}\)Be\({}^{*}\) reactions hence valid in low energy region (\(<\) 7.0 MeV). The simulated values are also compared with available experimental data and calculated values from literature by SimLiT and EPEN [28] at 3.5 MeV. At higher energies, the proton beam has to stop in some other material otherwise the quasi-mono-energeticity does not remain valid. Usually Tantalum and Carbon are used as the backing materials. The neutrons produced by the backing material should be estimated to get the total neutron spectrum. The outline of this paper is as follows. In Sec. II we present brief description of MONC. Section III contains simulation results. Conclusions are given in Sec. IV. ## II Brief description of MONC Monte Carlo program MONC incorporates Intra-nuclear Cascade, Pre-equilibrium, Evaporation and Fission models to simulate spallation reaction mechanism for thin and thick targets. Modeling details of Intra-nuclear cascade, Pre-equilibrium particle emission are described in detail in Ref. [29; 30]. Fission barrier, level density parameter and inverse cross sections for pre-equilibrium/evaporation/fission model are given in detail in Ref. [31; 32]. Benchmark of spallation models for experimental values of neutron, charged particles, and pions double dif ferential production cross-sections, particle multiplicities, spallation residues and excitation functions was organized by IAEA and is given in Ref. [33]. We have used the predecessor of this code named CASCADE.04 to calculate these quantities in the IAEA benchmark. Heat Deposition algorithm for thick spallation targets and thin films was modified and benchmarked as mentioned in Ref. [34]. The code was further developed for the Neutron shielding and dosimetry applications and published [35]. Energy loss of the charge particle is calculated during the transport in the thick target. MONC realizes the particle transport in three stages: 1) sampling of particle (ion) mean free path in the medium taking into account the energy loss of a charged particle and a possible decay of non-stable particles (\(\pi^{0}\), \(\pi^{\pm}\)). All \(\pi^{0}\)-mesons are considered to decay into \(\gamma\)-quanta at the point of their creation. The ionization losses of \(\pi\) - mesons, protons and light ions are calculated by Sternheimer's method [36] using well established Bathe formula for the average ionization loss calculations with proper density effects. Here, it is important to mention that the density effect shows reduction in ionization loss for fast charged particles due to dielectric polarization of the medium. In the lower energy region (\(<\) 2.0 MeV) Lindhard's approach [37] is used and a semi-phenomenological procedure [38] is applied for the heavy ions. While doing the practical simulation one has to calculate the ionization and nuclear interaction ranges and then uses the formulation to deposit heat. 2) Simulation of the particle interaction with a nucleus is considered along its path. In case of inelastic interaction the MONC code considers three stages of reaction for calculation: a) intranuclear cascade originally developed at Dubna: In this part of the calculation, primary particles can be re-scattered and they may produce secondary particles several times prior to absorption or escape from the target. 
Modeling of intra-nuclear cascades [29; 30] is in general rather closer to the methods used in other transport codes. Cross-sections of the hadron-nucleus collisions are calculated based on the compilations of the experimental data [39; 40]. To calculate the nucleus-nucleus cross-sections we used analytical approximations with parameters defined in Ref. [41]. b) Pre-equilibrium stage: In this part of the reaction, relaxation of the nuclear excitation is treated according to the exciton model of the pre-equilibrium decay. The relaxation is calculated by the method based on the Blann's model [42; 43]. Proton, neutron, deuterium, tritium, \({}^{3}\)He and \({}^{4}\)He are considered as emitted particles in the pre-equilibrium and in the subsequent equilibrium stage. c) Equilibrium stage: This part considers the particle evaporation/fission of the thermally equilibrated nucleus. 3) Low energy neutron transport code is developed recently. A package has been developed for reading point-wise cross sections for neutron in ACE (A Compact ENDF) format. ENDF data file processing and generating point data at different temperatures has also been developed [45]. The delayed neutrons are treated exclusively with their energy spectra for which data are available. Spontaneous and induced fission fragment yield are read from ENDF fission yield libraries. The free gas thermal treatment of the neutron interaction for below 4eV can be used for compound and crystal material or Thermal scattering law can be used if available in ENDF file. Probability table method is used in the un-resolved energy region. Low energy (\(<\) 20 MeV) proton data are used to simulate the reaction mechanism and outgoing particles energy and angular distributions. No cross-section data are given for \({}^{7}\)Li(p, n+\({}^{3}\)He)\(\alpha\) reaction in the ENDF file so it is taken from Ref. [44] while energy and angular distributions are calculated using 3-body kinematics. The cross-sections used in the present simulations are given in Fig. 1 for \({}^{7}\)Li nucleus. There is a thick tantalum or carbon sheet placed at the end of Lithium target to stop the proton beam. The cross-section for Tantalum and Carbon are taken from Evaluated neutron data libraries and EXFOR experimental database. The energy and angular distributions are calculated using 2, 3-body kinematics. ## III Simulation and Results Monte-carlo simulations are carried out for 4mg/cm\({}^{2}\) thick \({}^{7}\)Li target which is 92.41% in the natural Lithium and contributes most for neutron production in the natural Lithium target. Proton energies considered are up to 21MeV (Namely 6, 10, 15 and 21 MeV). Experimental data near threshold energy at 1.912 MeV [46; 47; 48] are compared in Fig. 2. Angular distributions are given in the ENDF library and corresponding energy is calculated using 2-body kinematics [49]. The calculated spectrum from MONC has overall agreement with slight underestimation at peak position of the energy spectrum. In case of thick Lithium target, experimental data at Figure 1: \({}^{7}\)Li(p,x)Y cross-section from ENDF VIII.0 as used in the MONC. \({}^{7}\)Li(p, n+\({}^{3}\)He)\(\alpha\) reaction cross-section is taken from Ref. [44] 3.45 MeV and 0\({}^{\circ}\) are taken from Ref. [50] and simulations are done for \(\theta_{n}\) = 0-5\({}^{\circ}\) using MONC. A comparison is shown in Fig. 3. There is an agreement from peak energy up to highest neutron energy. MONC overestimate the neutron spectrum below the peak neutron energy. 
Simulation was performed for 38\(\mu\)m thick target where EPEN and SimLiT [28] calculated values were also available in the literature. The calculations are also performed using PINO code [27] for comparison which is available on web portal. The published values from EPEN/SimLiT [28] are in good agreement with PINO and MONC calculated values for the first group of the neutron energies (see Fig. 4) (ground state transition to \({}^{7}\)Be) but the second group of neutron corresponding to the excited state transition to \({}^{7}\)Be is underestimated by PINO. The data for second group of neutrons by EPEN is not given in this publication hence could not be compared. Neutron spectra were simulated at 6, 10, 15 and 21 MeV proton energies. The Lithium target thickness was taken as 4mg/cm\({}^{2}\). Tantalum and Natural Carbon were taken as backing material to stop the protons. The spectra for neutron angle \(\theta_{n}\) 0-10\({}^{\circ}\) are given in Fig. 6 where usually samples are kept to measure neutron cross-sections. The Lithium spectra show a third group of neutrons from 6 to 20 MeV proton energies which is coming from three body breakup reaction as mentioned above. The neutrons from natural carbon target is contributed by \({}^{13}\)C(p, n) reaction which has 1.1% abundance. The barrier for this reaction is \(\sim\)1.15 MeV and threshold is 3.24 MeV. The neutron production for Tantalum is through \({}^{181}\)Ta(p, n), \({}^{181}\)Ta(p, 2n) and \({}^{181}\)Ta(p, 3n) reactions with thresholds of 0.99 MeV, 7.70 MeV and 16.16 Figure 4: Proton (E\({}_{p}\)=3.5 MeV) induced neutron spectrum from \({}^{7}\)Li(p,n)X reaction for neutrons simulated using MONC and PINO. Values for EPEN/SimLiT are taken from Ref. [28] Figure 5: Proton (E\({}_{p}\)=3MeV) induced neutron spectrum from \({}^{7}\)Li(p,n)X reaction for 0\({}^{\circ}\)-10\({}^{\circ}\) of neutrons simulated using MONC. Figure 3: Proton (E\({}_{p}\)=3.5 MeV) induced neutron spectrum from \({}^{7}\)Li(p,n)X reaction for 0\({}^{\circ}\)-5\({}^{\circ}\) of neutrons simulated using MONC. Experimental data are taken from Ref. [50] Figure 2: Proton induced neutron spectrum from \({}^{7}\)Li(p,n)\({}^{7}\)Be reaction. Experimental data are taken from Ref. [46; 47; 48] and calculations are performed using MONC. MeV, respectively. It is clear from the spectra that large contribution of neutrons is produced from Tantalum at higher energies. Carbon target gives less contribution by factor of two orders of magnitudes compared to that profuced from \({}^{7}\)Li target. ## IV Conclusion The Monte Carlo code MONC has been developed for proton induced reactions at low energies. The \({}^{7}\)Li(p, n)X reaction which is widely used for neutron cross-section measurement and a potential candidate for Boron Neutron Capture Therapy, is investigated. The simulated neutron spectra are compared with the experimental data and calculated values from PINO and EPEN/SimLiT codes and the results are in good agreement for the first group \({}^{7}\)Li(p,n)\({}^{7}\)Be\({}_{g}\) at least. There is a disagreement for the second group of neutrons through \({}^{7}\)Li(p,n)\({}^{7}\)Be\({}^{*}\) reaction. PINO shows very small contribution from second group of neutrons coming from excited \({}^{7}\)Be state at 429 keV. Ratio of these two groups are measured at different angle [51] which shows a relative contribution of 10-15% around 0\({}^{\circ}\)-10\({}^{\circ}\) neutron emission angles and the ratio from MONC are of similar magnitudes for these group of neutrons. 
The neutron spectra at forward angles are given here because the measurements are done in the forward direction using the neutron activation technique. The simulated neutron spectra are useful for experimental measurements based on neutron activation analysis, although for our own measurements we use the exact geometry of the arrangement of samples, sample holders, monitor foils, etc. [52], for which a Monte Carlo code is better suited than deterministic codes. The contribution to the neutron spectra from the backing material is significant for Tantalum at higher energies and negligible below \(\sim\)8 MeV. The contribution from natural Carbon as backing material is smaller by about two orders of magnitude. Figure 6: Proton (E\({}_{p}\)=21, 15, 10 and 6 MeV) induced neutron spectrum from \({}^{7}\)Li(p,n)X, \({}^{181}\)Ta(p,n)X and \({}^{nat}\)C(p,n)X reactions for 0\({}^{\circ}\)-10\({}^{\circ}\) neutrons simulated using MONC. The spectra from Carbon do not interfere with the main quasi-mono-energetic peak, while Tantalum gives neutrons beyond this peak. Hence, natural Carbon is a better backing material than Tantalum at higher energies.
2302.11392
Photometric follow-up of 43 new eclipsing white dwarf plus main-sequence binaries from the ZTF survey
Wide-field time-domain photometric sky surveys are now finding hundreds of eclipsing white dwarf plus M dwarf binaries, a population encompassing a wealth of information and potential insight into white dwarf and close binary astrophysics. Precise follow-up observations are essential in order to fully constrain these systems and capitalise on the power of this sample. We present the first results from our program of high-speed, multi-band photometric follow-up. We develop a method to measure temperatures, (model-dependent) masses, and radii for both components from the eclipse photometry alone and characterize 34 white dwarf binaries, finding general agreement with independent estimates using an alternative approach while achieving around a factor of two increase in parameter precision. In addition to these parameter estimates, we discover a number of interesting systems -- finding four with sub-stellar secondaries, doubling the number of eclipsing examples, and at least six where we find the white dwarf to be strongly magnetic, making these the first eclipsing examples of such systems and key to investigating the mechanism of magnetic field generation in white dwarfs. We also discover the first two pulsating white dwarfs in detached and eclipsing post-common-envelope binaries -- one with a low-mass, likely helium core, and one with a relatively high mass, towards the upper end of the known sample of ZZ Cetis. Our results demonstrate the power of eclipse photometry, not only as a method of characterising the population, but as a way of discovering important systems that would have otherwise been missed by spectroscopic follow-up.
Alex J. Brown, Steven G. Parsons, Jan van Roestel, Alberto Rebassa-Mansergas, Elmé Breedt, Vik S. Dhillon, Martin J. Dyer, Matthew J. Green, Paul Kerry, Stuart P. Littlefair, Thomas R. Marsh, James Munday, Ingrid Pelisoli, David I. Sahman, James F. Wild
2023-02-22T14:13:25Z
http://arxiv.org/abs/2302.11392v1
Photometric follow-up of 43 new eclipsing white dwarf plus main-sequence binaries from the ZTF survey ###### Abstract Wide-field time-domain photometric sky surveys are now finding hundreds of eclipsing white dwarf plus M dwarf binaries, a population encompassing a wealth of information and potential insight into white dwarf and close binary astrophysics. Precise follow-up observations are essential in order to fully constrain these systems and capitalise on the power of this sample. We present the first results from our program of high-speed, multi-band photometric follow-up. We develop a method to measure temperatures, (model-dependent) masses, and radii for both components from the eclipse photometry alone and characterize 34 white dwarf binaries, finding general agreement with independent estimates using an alternative approach while achieving around a factor of two increase in parameter precision. In addition to these parameter estimates, we discover a number of interesting systems - finding four with sub-stellar secondaries, doubling the number of eclipsing examples, and at least six where we find the white dwarf to be strongly magnetic, making these the first eclipsing examples of such systems and key to investigating the mechanism of magnetic field generation in white dwarfs. We also discover the first two pulsating white dwarfs in detached and eclipsing post-common-envelope binaries - one with a low-mass, likely helium core, and one with a relatively high mass, towards the upper end of the known sample of ZZ Cetis. Our results demonstrate the power of eclipse photometry, not only as a method of characterising the population, but as a way of discovering important systems that would have otherwise been missed by spectroscopic follow-up. keywords: (stars:) binaries: eclipsing - (stars:) white dwarfs - stars: late-type - techniques: photometric ## 1 Introduction A significant fraction of field stars are formed as part of a binary system (Eggleton and Tokovinin, 2008; Raghavan et al., 2010). Of these binaries, around 25 per cent are formed with sufficiently small orbital separations such that at some stage in their lives the two stars will interact with each other (Willems and Kolb, 2004), transferring material between them and affecting their future evolution. For many of these interacting systems, the mass-transfer will lead to a common-envelope phase, initiated by the post-main-sequence evolution of the more massive star (the primary). This involves both stars being engulfed by the expanding outer envelope of the more massive star, with the resulting drag forces causing the hot core of the primary star and its main-sequence companion to spiral in to small orbital separations and therefore short orbital periods ranging from hours to a few days. The lost orbital energy and angular momentum from the binary is imparted into the envelope, ejecting it (Paczynski, 1976), where it may then be ionised and lit up by the hot remnant core, appearing as a planetary nebula for a period of time (Jones and Boffin, 2017) before the core cools and stratifies to become a white dwarf (WD). As well as being a key tracer of the common-envelope phase, these short period detached post-common-envelope binaries (PCEBs) made up of a WD and a main-sequence star are the progenitors to many of the most interesting and exotic astrophysical objects and phenomena in the Universe, including the cosmologically important type Ia supernovae. 
Binaries made up of a WD and a main-sequence star are typically split into two categories, one of which containing the WDs with solar-type companions (WD+FGK) and the other made up of WDs with companions of spectral type M and later (referred to as WD+dM). These categories reflect the differences in observational properties, with the WD+dM binaries being relatively easy to find due to the two stars often contributing a similar amount of flux at optical wavelengths. This has allowed a large sample to be extracted from spectroscopic surveys (Rebassa-Mansergas et al., 2007, 2010, 2012, 2016), making them the most common for many years. More recently, WD+FGK binaries have been found by using UV excesses to discern systems with WDs that would otherwise be outabone by their companion at optical wavelengths (Parsons et al., 2016; Rebassa-Mansergas et al., 2017). While both types can be found with short orbital periods (Hernandez et al., 2022) - and therefore with small orbital separations with a relatively high chance of eclipse - the advantage of the WD+dM PCEBs is that the WD contributes enough of the total flux such that the eclipses can be detected, enabling them to be found in photometric surveys (Parsons et al., 2013, 2015). Eclipsing systems are a gold standard in astrophysics, allowing for incredibly precise measurements of the stellar and binary parameters, with typical precisions at or below the percent level. The result of this is that eclipsing PCEBs are some of the best laboratories of stellar and binary physics available to us and, as such, have been used to test and study a multitude of effects including, but not limited to: precisely measuring mass-radius relations of WDs (Parsons et al., 2017), confirming the over-inflation of M dwarfs relative to theoretical models (Parsons et al., 2018), distinguishing the transition between helium and carbon-oxygen core compositions in WDs (Parsons et al., 2017), finding systems with brown dwarf companions (Beuermann et al., 2013; Parsons et al., 2017; Casewell et al., 2020; van Roestel et al., 2021), and identifying unusual systems such as merger products and extremely low metallicity systems (O'Brien et al., 2001; Rebassa-Mansergas et al., 2019). In the current era of wide-field time-domain photometric sky surveys, such as the Zwicky Transient Facility (ZTF; Masci et al., 2019; Bellm et al., 2019; Graham et al., 2019), the number of known eclipsing PCEBs is increasing drastically, with ZTF alone contributing to more than an order of magnitude increase (van Roestel et al. in prep), so far, on the previously known sample (Parsons et al., 2015). The Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) carried out by the Vera Rubin Observatory in the near future will only accelerate this increase. Follow-up of this vast quantity of systems will be an ongoing challenge, particularly as many of these will be extremely faint, but they will provide much-needed insight into the relatively uncertain physics of the common-envelope as well as uncovering rare systems that may have implications for specific areas of stellar or binary physics. These include, but are not limited to, systems containing magnetic, pulsating, or high-mass WDs, as well as those with brown dwarf companions. Previous work has shown that high-cadence multi-colour photometric observations of the primary eclipse is enough to accurately and efficiently characterize detached eclipsing PCEBs (Brown et al., 2022). 
This method makes use of the eclipse to cleanly disentangle the spectral energy distributions of the two components and constrains the effective temperatures, while using the shape of the eclipse to measure the orbital inclination and the stellar radii. These, in turn, provide information about the stellar masses through the use of mass-radius relations. A photometric method such as this is especially important as fainter systems are discovered - particularly in the LSST era - making spectroscopic follow-up even more difficult, and in many cases, impractical. With this in mind, we have undertaken a program of high-cadence photometric follow-up of eclipsing WD+dM PCEBs (first discovered by van Roestel et al. (in prep)) with the goal of characterizing a significant fraction and discovering a number of rare systems among them. Here we present the first results of this follow-up. ## 2 Observations ### Target selection Our targets for follow-up were selected from the detached eclipsing WD+dM systems discovered by van Roestel et al. (in prep) using data from ZTF. In brief, this sample was created by searching for periodic outliers in the ZTF photometry, indicative of eclipses. The primary biases are therefore related to the probability of a given system eclipsing as viewed from Earth and the ability to detect an eclipse within the ZTF data. The former is dominated by the orbital period (with a very weak dependence on the secondary radius), while the latter is dominated by the signal-to-noise ratio of the eclipse, with a heavy dependence on the depth of the eclipse (and a much weaker dependence on the duration of the eclipse). A more detailed description of the full ZTF eclipsing WD+dM sample identification method and the biases within it will be presented in van Roestel et al. (in prep). We restricted our target list to systems visible from the La Silla Observatory (Dec < +25 deg) and brighter than \(g=19.5\) mag. We typically observed systems with eclipse timings that made for the most efficient use of telescope time on a particular night however we also tried to prioritise systems with longer periods where possible since the eclipses of these systems are more difficult to observe. Systems with ZTF light curves that indicated they may be of particular interest were also prioritised. This includes systems with in-eclipse flux measurements at or below the detection threshold of ZTF (indicative of brown dwarf companions) and systems with unusual ZTF light curves, showing variability inconsistent with typical binary variability mechanisms and indicating the presence of a magnetic WD. A journal of observations is included in Table 1. ### High speed photometry Our photometric follow-up observations made use of the three-band frame-transfer camera, ULTRACAM (Dhillon et al., 2007), mounted on the 3.6 m New Technology Telescope (NTT) at the ESO La Silla Observatory in Chile, to obtain high-cadence multi-colour photometry of the primary eclipse of each system - the eclipse of the WD by its companion. For all targets observed with ULTRACAM we used the higher throughput Super-SDSS \(u_{s}\,g_{s}\,i_{s}\) filters (Dhillon et al., 2021), with the exception of one observation where the \(r_{s}\) filter was used in place of \(i_{s}\). 
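To put the period bias mentioned in the target-selection discussion above on a rough quantitative footing, the following back-of-the-envelope sketch (not part of the survey pipeline; circular orbits and representative WD and M dwarf sizes are assumed) estimates the geometric eclipse probability \((R_{1}+R_{2})/a\) as a function of orbital period:

```python
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8   # kg, m

def eclipse_probability(period_days, m_tot=0.8, r1=0.015, r2=0.2):
    """Geometric eclipse probability ~ (R1 + R2)/a for a circular orbit.

    m_tot in solar masses; r1, r2 in solar radii (assumed typical WD + dM values).
    """
    p = period_days * 86400.0
    a = (G * m_tot * M_SUN * p**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)  # Kepler III
    return (r1 + r2) * R_SUN / a

for p in (0.1, 0.2, 0.5, 1.0):
    print(f"P = {p:4.1f} d  ->  eclipse probability ~ {eclipse_probability(p):.2f}")
# longer periods give wider orbits and hence a lower chance of eclipse
```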
For a few of the systems thought to harbour magnetic WDs, we obtained high-speed photometry with the quintuple band frame-transfer camera, HiPERCAM (Dhillon et al., 2021), mounted on the 10.4 m Gran Telescopio Canarias (GTC) at the Roque de los Muchachos observatory in La Palma, again equipped with Super-SDSS \(u_{s}\,g_{s}\,r_{s}\,i_{s}\) filters. All observations were bias-subtracted and flat-field corrected (and fringe corrected in the case of the HiPERCAM \(z_{s}\) band) using the HiPERCAM pipeline1. Differential aperture photometry was then extracted using a variable aperture radius set to scale with the measured full width at half-maximum (FWHM) in each frame in order to remove effects due to seeing and transparency variations. For this we use a target aperture radius of \(1.8\times FWHM\). In observations with lower signal-to-noise ratios, optimal extraction (Naylor, 1998) was also performed, with the extraction method resulting in the highest signal-to-noise light curve being the one that was used. Flux calibration was then performed by fitting the atmospheric extinction in each band using one or more observing runs taken on the same night as the target observations (each spanning a minimum of 0.2 airmasses). The atmospheric extinction measurements were combined with an observation of an ULTRACAM flux standard star (see Brown et al., 2022, table A3), reduced using a larger target aperture radius of \(2.5\times FWHM\), in order to measure the instrumental zeropoint for the night. The calibrated flux of the comparison star was then determined using the same target aperture radius as for the flux standard star, which was then used to flux calibrate the target. When using optimally extracted photometry, the flux calibration was still performed on the data reduced using a standard aperture photometry extraction. This calibration was then applied to the optimally extracted photometry to prevent systematic absolute flux errors between the two methods. These flux calibration steps were performed using the cam_cal2 package. Footnote 2: [https://github.com/Alex-J-Brown/cam_cal](https://github.com/Alex-J-Brown/cam_cal) ## 3 Method We fit the flux calibrated eclipse photometry using the pylcurve3 package, a python wrapper for lcurve's lroche routine (Copperwheat et al., 2010). In general, we follow the method of Brown et al. (2022) which involves fitting the eclipse photometry in multiple filters simultaneously with eight free parameters. These are the effective temperatures, T\({}_{1}\) and T\({}_{2}\), which define the spectral energy distributions (SEDs) of both stars through the use of stellar atmosphere models (Claret et al., 2020; Husser et al., 2013); the stellar masses, M\({}_{1}\) and M\({}_{2}\); the binary inclination, \(i\); the parallax, \(\varpi\); the interstellar reddening, \(E(B-V)\); and the time of mid-eclipse, T\({}_{0}\). With the use of mass-radius relations and a given (fixed) orbital period, P, the radii of both stars and the orbital separation of the binary can be defined allowing model light curves to be generated for each filter. See Brown et al. (2022) for more details on this method. Footnote 3: [https://github.com/Alex-J-Brown/pylcurve](https://github.com/Alex-J-Brown/pylcurve) For this work, however, we implement two changes to the methodology mentioned above, both regarding the spectral modelling of the secondary star: 1. Previously, PHOENIX stellar atmospheres (Husser et al., 2013) were used to model the SED of the secondary star (Brown et al., 2022). 
However, these models are limited to a minimum effective temperature of 2300 K, preventing the modelling of systems with brown dwarf companions. We have therefore switched to using the BT-Settl CIFIST stellar atmosphere grid (Allard et al., 2012) which go as low as 1200 K, allowing for a seamless transition to the brown dwarf regime and keeping our modelling consistent throughout. 2. It is well known that there are significant differences in the synthetic photometry of low mass stars calculated using different spectral models for a given effective temperature and surface gravity. This is most apparent for lower effective temperatures (<3500 K), with models struggling to reproduce the transitions from M dwarfs to L dwarfs to T dwarfs (Saumon and Marley, 2008; Allard et al., 2012; Best et al., 2021). Rigidly defining the SED of the secondary from these spectral models could therefore introduce problems where the model photometry cannot reproduce the observed SED of the star in question to the precision of our observations. We counter this by allowing the secondary to have a separate effective temperature in each observed bandpass. Despite being allowed to vary, these individual filter-specific effective temperatures should be consistent with each other at a certain level. We implement this consistency requirement using priors to favour solutions where these effective temperatures are similar across the different filters. In order to inform the priors on the filter-specific secondary temperatures mentioned in item (ii), we use a sample of 15 279 well-characterised M dwarfs (Morrell and Naylor, 2019). Cross-matching this sample with SDSS DR13 returns a sample of 5 222 M dwarfs, on which we then make colour cuts informed by synthetic photometry of the BT-SETTL-CIFIST model atmospheres (\(4.0<(u^{\prime}-i^{\prime})<6.4\) and \(1.5<(g^{\prime}-i^{\prime})<3.4\)) to remove many of the extreme outliers. This leaves 4 158 M dwarfs with SDSS photometry. We then fit fifth-order polynomials to the measured effective temperature as a function of \(u^{\prime}-i^{\prime}\) and \(g^{\prime}-i^{\prime}\) colours individually, using an iterative sigma clipping procedure with a \(3\sigma\) cut to remove any outliers that remain after the initial colour cuts (Figure 1). The standard deviations of the residuals of the remaining points are 80 K for a \(u^{\prime}-i^{\prime}\) colour and 30 K for a \(g^{\prime}-i^{\prime}\) colour. We therefore implement Gaussian priors on the difference in effective temperature between the \(u^{\prime}\) and \(i^{\prime}\), and \(g^{\prime}\) and \(i^{\prime}\) bands of 80 K and 30 K respectively, both centred at zero. As with this method, there are as many temperature measurements available for the secondary as filters used, we take the \(i_{s}\)-band measurement as being representative of the true secondary temperature. We make this choice based on it being the the band where the secondary is brightest and is therefore the most strongly constrained by the photometry. As in Brown et al. (2022), we use a Markov Chain Monte Carlo (MCMC) method to fit each light curve, implemented through the python package, emcee4(Foreman-Mackey et al., 2013). We run each fit for a minimum of 10 000 steps using 100 walkers and inspect each fit manually for convergence and stability. 
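An illustrative outline of how the filter-specific temperature priors and the sampler might be wired together is sketched below; the light-curve likelihood is a placeholder (the real fits evaluate pylcurve models in each band), and the parameter vector is truncated to the three secondary temperatures for brevity:

```python
import numpy as np
import emcee

SIG_UI, SIG_GI = 80.0, 30.0   # K, Gaussian priors on T2(u)-T2(i) and T2(g)-T2(i)

def log_prior(theta):
    t2_u, t2_g, t2_i = theta
    return (-0.5 * ((t2_u - t2_i) / SIG_UI) ** 2
            - 0.5 * ((t2_g - t2_i) / SIG_GI) ** 2)

def log_likelihood(theta, data):
    # Placeholder: the real fit compares pylcurve model light curves in each
    # band with the flux-calibrated ULTRACAM photometry.
    model = np.zeros_like(data["flux"])
    return -0.5 * np.sum(((data["flux"] - model) / data["err"]) ** 2)

def log_prob(theta, data):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, data)

ndim, nwalkers = 3, 100                            # the full model has 8+ free parameters
data = {"flux": np.zeros(50), "err": np.ones(50)}  # placeholder arrays
p0 = 3000.0 + 10.0 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(data,))
sampler.run_mcmc(p0, 10000)                        # >= 10 000 steps, as in the text
flat = sampler.get_chain(discard=2000, flat=True)
```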
Each system is first fit using a carbon-oxygen (CO) core WD mass-radius relation (Bedard et al., 2020; Blouin et al., 2018; Tremblay et al., 2011) with the fit then being repeated using a helium (He) core model (Panei et al., 2007) if the best-fit CO-core WD mass is below 0.5 M\({}_{\odot}\). If this subsequent fit using the He-core model is restricted by the upper mass limit of the He-core models - 0.5 M\({}_{\odot}\) - then we consider the WD to have a CO core-composition, if not then we assume the WD to possess a He core. Footnote 4: [https://emcee.readthedocs.io/en/stable/](https://emcee.readthedocs.io/en/stable/) ## 4 Results The results of our light curve fits are presented in Table 1 and Table 2 - note that of the 43 systems that we have followed-up, 9 do not have measured parameters because they either harbour magnetic WDs or are strong candidates (see section 5.4). Our best-fit values are taken to be the median of the posterior distributions of the MCMC with lower and upper uncertainties taken as the 16th and 84th percentiles respectively. As in Brown et al. (2022), the formal uncertainties from the MCMC do not include contributions from systematic errors and so we attempt to take this into account by adding estimated systematic uncertainties in quadrature with the formal uncertainties of the MCMC. We add 1.5 per cent in quadrature with the uncertainties on the primary temperature (Gianninas et al., 2011), T\({}_{1}\), and 100 K in quadrature with the secondary temperature, T\({}_{2}\). We also add 1 per cent in quadrature with the WD mass, M\({}_{1}\), and 5 per cent in quadrature with the secondary mass, M\({}_{2}\) (for the reasons explained in Brown et al. (2022)). These contributions are included in the uncertainties shown in Table 1 and in all figures. An example ULTRACAM eclipse light curve and best-fit model is shown in Figure 2 with all best-fit light curves shown in Appendix B. ## 5 Discussion ### Comparison with previous parameters Initial parameter estimates for these systems were made by fitting the ZTF time-series photometry alongside photometric measurements from other surveys, where available, covering a wide wavelength range (van Roestel et al. in prep). Comparing our parameters determined from the three-band eclipse photometry against these initial estimates demonstrates general agreement between the two methods (Figure 3). The WD temperatures, in particular, show excellent agreement but there are some significant differences in the measured masses for certain systems. This may be due, in part, to the survey SED data used by van Roestel et al. (in prep) being taken at a range of different orbital phases and therefore suffering from increased systematics due to ellipsoidal modulation or reflection effect. As the method we have used in this work has previously been shown to retrieve accurate parameters (Brown et al., 2022), this may imply a slight underestimation in the uncertainties determined by combining SED fitting with ZTF photometry. The parameters determined using the high-speed eclipse photometry are typically more precise than those measured by van Roestel et al. (in prep). This is most apparent for the primary and secondary masses with a median uncertainty in the WD mass from the ULTRACAM photometry of 2.6 per cent, and 7.2 per cent for the secondary mass. These values are 6.0 per cent and 13.7 per cent respectively from the ZTF photometry for the same systems and so the ULTRACAM measurements are typically a factor of 2 more precise. 
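For concreteness, the way the quoted values and uncertainties can be assembled from the MCMC output is sketched below (array names are placeholders; the systematic floors are those listed above):

```python
import numpy as np

# Stand-in for a flattened, post-burn-in chain of WD masses (solar masses)
m1_samples = np.random.normal(0.6, 0.01, 100_000)

lo, med, hi = np.percentile(m1_samples, [16, 50, 84])
stat_minus, stat_plus = med - lo, hi - med

sys_m1 = 0.01 * med                       # 1 per cent systematic floor on M1
err_minus = np.hypot(stat_minus, sys_m1)  # statistical and systematic in quadrature
err_plus = np.hypot(stat_plus, sys_m1)
print(f"M1 = {med:.3f} -{err_minus:.3f} +{err_plus:.3f} Msun")

# Median fractional uncertainties quoted in Section 5.1
print(6.0 / 2.6, 13.7 / 7.2)  # ~2.3 and ~1.9, i.e. roughly a factor of two
```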
This is likely due to the high time resolution of the ULTRACAM photometry, enabling the duration of the eclipse as well as the ingress and egress to be measured very precisely. In addition to the initial parameter estimates discussed above, two of the systems fit in this work have been included in previously published analyses - ZTF J125620.57+211725.8 and ZTF J164441.18+243428.2. Comparisons with these previous works are made below. #### 5.1.1 Ztf J125620.57+211725.8 ZTF J125620.57+211725.8 was previously fitted by Rebassa-Mansergas et al. (2021), using the Virtual Observatory SED Analyser (VOSA) to fit the available survey photometry. Out of the 112 systems that they analysed, 13 systems were determined to possess a WD with a mass below 0.2 M\({}_{\odot}\). It is not known how such low mass WDs could form in PCEBs with low-mass main sequence companions - with any mass transfer initiating a common envelope phase in which the envelope would most likely not gain sufficient energy to be ejected, leading to a merger scenario (Rebassa-Mansergas et al., 2021). This system is - as far as we know - the only one of these 13 systems that eclipses, enabling a valuable check on the system parameters. Our fit to the eclipse photometry determines the WD mass to be \(0.48\pm 0.01\) M\({}_{\odot}\), discrepant with the \(0.155\pm 0.02\) M\({}_{\odot}\) obtained from VOSA by over 14\(\sigma\). We encourage spectroscopic follow-up of this system in order to determine the cause of this large discrepancy and the true WD mass. Figure 1: Effective temperatures of M dwarfs measured by Morrell & Naylor (2019) against their SDSS colours. Blue crosses show points discarded by the sigma clipping procedure and the solid black lines show the final polynomial fits to these sigma-clipped distributions. The residuals of these fits, from which we calculate the standard deviations, are shown in the panels below. The gap in the sample at an effective temperature of 4000 K is due to a discontinuity in the model grid used by Morrell & Naylor (2019). Figure 2: ULTRACAM \(u_{s}\,gs_{s}\,is_{s}\) eclipse light curve (coloured points) of ZTF J041016.82\(-\)083419.5 with the best-fit light curve model over-plotted in black and the residuals of this fit shown below. The zero-flux level is shown by the horizontal grey line. #### 5.1.2 Ztej 164441.18+243428.2 ZTF J164441.18+243428.2 was one of the four deeply eclipsing PCEBs found and fitted by Kosakowski et al. (2022). For this target in particular they did not detect the eclipse minimum and so their parameters from the light curve fit represent limits rather than specific values. As would be expected, our light curve fit to the ULTRACAM photometry is consistent with these parameter limits. As well as fitting the eclipse light curve Kosakowski et al. (2022) performed a spectroscopic fit to the WD, determining the effective temperature, surface gravity, and mass (determined from the surface gravity using CO-core composition models). From our fit to the ULTRACAM photometry we find an effective temperature of \(13270\pm 490\) K, cooler than the \(14900\pm 760\) K determined by their spectroscopic fit but still consistent to within \(2\sigma\). For the WD mass there is a little more deviation, with our fit finding a WD mass of \(0.38\pm 0.02\) M\({}_{\odot}\), \(2.3\sigma\) below the \(0.55\pm 0.07\) M\({}_{\odot}\) found from their spectroscopic fit and suggesting a He-core composition rather than a CO-core. For the companion, Kosakowski et al. 
(2022) estimate a mass of \(0.084\pm 0.004\) M\({}_{\odot}\) by fitting the Pan-STARRS SED with a composite model, placing it close to the hydrogen-burning limit. We find a higher mass of \(0.103\pm 0.009\) M\({}_{\odot}\) from our light curve fit taking it into more typically stellar territory. Again though, these two values are consistent to within \(2\sigma\). Overall, our fit to the ULTRACAM photometry is fully consistent with their light curve fit and consistent with their spectroscopic and Pan-STARRS SED fits at around the \(2\sigma\) level. \begin{table} \begin{tabular}{l c c c c c c c c} \hline Target & He/CO & T\({}_{1}\) (K) & M\({}_{1}\) (M\({}_{\odot}\)) & R\({}_{1}\) (R\({}_{\odot}\)) & log(g1) & T\({}_{2}\) (K) & M\({}_{2}\) (M\({}_{\odot}\)) & R\({}_{2}\) (R\({}_{\odot}\)) & R\({}_{2}\)/R\({}_{1}\) \\ \hline ZTF J041016.82\(-\)083419.5 & He & 14690\({}^{+580}_{-580}\) & 0.355\({}^{+0.015}_{-0.011}\) & 0.0204\({}^{+0.003}_{-0.008}\) & 7.37\({}^{+0.03}_{-0.03}\) & 2840\({}^{+110}_{-110}\) & 0.123\({}^{+0.009}_{-0.008}\) & 0.151\({}^{+0.008}_{-0.006}\) & 0.680\({}^{+0.037}_{-0.021}\) \\ ZTF J051902.06+092526.4 & He & 10750\({}^{+720}_{-580}\) & 0.391\({}^{+0.019}_{-0.029}\) & 0.0178\({}^{+0.009}_{-0.008}\) & 7.53\({}^{+0.04}_{-0.08}\) & 280\({}^{+110}_{-110}\) & 0.177\({}^{+0.014}_{-0.019}\) & 0.214\({}^{+0.012}_{-0.013}\) & 0.842\({}^{+0.017}_{-0.084}\) \\ ZTF J052848.24+215629.0 & CO & 12100\({}^{+500}_{-600}\) & 0.787\({}^{+0.025}_{-0.025}\) & 0.0105\({}^{+0.003}_{-0.003}\) & 8.29\({}^{+0.04}_{-0.04}\) & 3110\({}^{+110}_{-110}\) & 0.184\({}^{+0.014}_{-0.013}\) & 0.220\({}^{+0.011}_{-0.009}\) & 0.408\({}^{+0.014}_{-0.001}\) \\ ZTF J053708.26\(-\)245041.6 & He & 16100\({}^{+500}_{-440}\) & 0.397\({}^{+0.009}_{-0.007}\) & 0.0191\({}^{+0.002}_{-0.002}\) & 7.48\({}^{+0.02}_{-0.02}\) & 2970\({}^{+0.009}_{-100}\) & 0.204\({}^{+0.012}_{-0.011}\) & 0.241\({}^{+0.007}_{-0.003}\) & 0.333\({}^{+0.006}_{-0.004}\) \\ ZTF J061530.96+051041.8 & CO & 15220\({}^{+500}_{-510}\) & 0.560\({}^{+0.011}_{-0.011}\) & 0.0139\({}^{+0.002}_{-0.002}\) & 7.90\({}^{+0.02}_{-0.02}\) & 3380\({}^{+110}_{-110}\) & 0.533\({}^{+0.004}_{-0.009}\) & 0.547\({}^{+0.013}_{-0.011}\) & 0.531\({}^{+0.008}_{-0.008}\) \\ ZTF J063808.71+091027.4 & CO & 22500\({}^{+100}_{-100}\) & 0.604\({}^{+0.013}_{-0.011}\) & 0.0136\({}^{+0.002}_{-0.002}\) & 7.95\({}^{+0.02}_{-0.02}\) & 3320\({}^{+110}_{-110}\) & 0.410\({}^{+0.024}_{-0.022}\) & 0.432\({}^{+0.012}_{-0.008}\) & 0.295\({}^{+0.005}_{-0.004}\) \\ ZTF J063954.70+191958.0 & CO & 15980\({}^{+520}_{-520}\) & 0.701\({}^{+0.011}_{-0.009}\) & 0.0117\({}^{+0.0001}_{-0.0001}\) & 8.15\({}^{+0.01}_{-0.01}\) & 3200\({}^{+0.010}_{-100}\) & 0.210\({}^{+0.011}_{-0.011}\) & 0.246\({}^{+0.004}_{-0.002}\) & 0.398\({}^{+0.004}_{-0.002}\) \\ ZTF J064242.41+131427.6 & CO & 14560\({}^{+500}_{-500}\) & 0.633\({}^{+0.018}_{-0.008}\) & 0.0127\({}^{+0.001}_{-0.0001}\) & 8.03\({}^{+0.01}_{-0.010}\) & 3110\({}^{+0.008}_{-0.005}\) & 0.018\({}^{+0.001}_{-0.004}\) & 0.438\({}^{+0.002}_{-0.002}\) \\ ZTF J065103.70+145246.2 & CO & 13140\({}^{+600}_{-600}\) & 0.515\({}^{+0.020}_{-0.020}\) & 0.0145\({}^{+0.003}_{-0.003}\) & 7.83\({}^{+0.03}_{-0.04}\) & 3170\({}^{+110}_{-110}\) & 0.242\({}^{+0.018}_{-0.019}\) & 0.276\({}^{+0.012}_{-0.013}\) & 0.589\({}^{+0.018}_{-0.019}\) \\ ZTF J070458.08\(-\)020103.3 & CO & 9280\({}^{+50}_{-250}\) & 0.500\({}^{+0.012}_{-0.014}\) & 0.0143\({}^{+0.003}_{-0.000}\) & 7.82\({}^{+0.02}_{-0.03}\) & 3300\({}^{+0.010}_{-100}\) & 0.344\({}^{+0.018}_{-0.020}\) & 
0.370\({}^{+0.006}_{-0.009}\) & 0.915\({}^{+0.010}_{-0.013}\) \\ ZTF J071759.41+13630.2 & CO & 21110\({}^{+500}_{-500}\) & 0.528\({}^{+0.016}_{-0.017}\) & 0.0149\({}^{+0.003}_{-0.00 ### Brown dwarf companions WDs with brown dwarf companions are rare, with around 0.5 per cent of WDs expected to have substellar partners (Steele et al., 2011). Eclipsing examples are, predictably, even rarer with only four systems currently confirmed (Beuermann et al., 2013; Littlefair et al., 2014; Parsons et al., 2017; Casewell et al., 2020; van Roestel et al., 2021). These eclipsing WD-brown dwarf binaries are valuable as they are one of the few places where both the brown dwarf's radii and mass can be measured precisely and are therefore important benchmarks for brown dwarf models. Additionally, as some of the lowest mass objects thought to survive the common-envelope (Casewell et al., 2018), brown dwarfs in PCEBs occupy an important area of the parameter space when studying common-envelope evolution, with the study of the common-envelope phase in this low-mass regime having implications for systems with planetary mass companions (Vanderburg et al., 2020). In our ULTRACAM follow-up we have found four systems so far that our light curve fits suggest as having brown dwarf companions. These are ZTF J080441.95\(-\)021545.7, ZTF J103448.82+005201.9, ZTF J145819.54+131326.7, and ZTF J182848.77+230838.0. As our mass-radius relation for M dwarfs (Brown et al., 2022) is horizontal below 0.07 M\({}_{\odot}\) - and therefore uninformative in this regime - the best fit secondary masses can only be regarded as upper limits. Additionally, as none of the secondaries for these systems are detected in-eclipse, only an upper limit can be given for their effective temperatures. One of these systems, ZTF J182848.77+230838.0, has a high secondary temperature for a brown dwarf. In order to rule out problems with the photometry, we stack the in-eclipse images (Figure 4). This reveals a faint (\(G=20.88\) mag) source 2.79 arcsec away from the target which results in an erroneous slight 'detection' in eclipse and therefore a higher than expected temperature. 
The true \begin{table} \begin{tabular}{l c c c c c} \hline Target & a (R\({}_{\odot}\)) & \(E\,(B-V)\) & \(\sigma_{\rm UCAM}\) & \(\sigma_{\rm Gaia}\) & T\({}_{0}\) (BMJD(TDB)) & P (d) \\ \hline ZTF J041016.82\(-\)083419.5 & 86.6\({}^{+2.27}_{-1.7}\) & 0.616\({}^{+000}_{-0.006}\) & 0.031\({}^{+001}_{-0.017}\) & 3.863\({}^{+0097}_{-0.078}\) & 4.07 \(\pm\) 0.11 & 59646.0489782(16) & 0.0811093 \\ ZTF J051902.06+092526.4 & 76.3\({}^{+1.1}_{-0.6}\) & 0.715\({}^{+0.012}_{-0.020}\) & 0.112\({}^{+0028}_{-0.023}\) & 2.835\({}^{+0.140}_{-0.140}\) & 2.92 \(\pm\) 0.30 & 59251.0519387(57) & 0.0929131 \\ ZTF J052848.24+215629.0 & 87.7\({}^{+1.4}_{-1.1}\) & 1.540\({}^{+0.017}_{-0.016}\) & 0.090\({}^{+0.000}_{-0.021}\) & 5.666\({}^{+10.104}_{-0.111}\) & 5.59 \(\pm\) 0.13 & 59932.215321(52) & 0.2259952 \\ ZTF J053708.26+245014.6 & 88.1\({}^{+0.7}_{-0.6}\) & 1.688\({}^{+0.014}_{-0.010}\) & 0.015\({}^{+0.011}_{-0.010}\) & 4.580\({}^{+0.044}_{-0.047}\) & 4.574 \(\pm\) 0.049 & 59251.2246115(52) & 0.3277936 \\ ZTF J061530.96+051041.8 & 85.0\({}^{+0.7}_{-0.7}\) & 2.146\({}^{+0.015}_{-0.014}\) & 0.019\({}^{+0.019}_{-0.013}\) & 3.16\({}^{+0.060}_{-0.051}\) & 3.166 \(\pm\) 0.081 & 59280.12536567(83) & 0.3481742 \\ ZTF J063808.71+091027.4 & 88.2\({}^{+0.6}_{-0.6}\) & 1.397\({}^{+0.024}_{-0.019}\) & 0.021\({}^{+0.018}_{-0.015}\) & 1.709\({}^{+0.047}_{-0.047}\) & 1.65 \(\pm\) 0.14 & 59252.1564861(10) & 0.6576453 \\ ZTF J063954.70+191958.0 & 88.9\({}^{+0.7}_{-0.7}\) & 1.659\({}^{+0.008}_{-0.08}\) & 0.028\({}^{+0.013}_{-0.055}\) & 5.394\({}^{+0.070}_{-0.055}\) & 5.387 \(\pm\) 0.085 & 59251.17799186(52) & 0.25935556 \\ ZTF J064242.41+131427.6 & 89.1\({}^{+0.7}_{-0.8}\) & 1.195\({}^{+0.006}_{-0.020}\) & 0.022\({}^{+0.016}_{-0.013}\) & 3.583\({}^{+0.075}_{-0.073}\) & 3.77 \(\pm\) 0.20 & 59252.10345653(59) & 0.1710542 \\ ZTF J065103.70+145246.2 & 85.3\({}^{+5.1}_{-1.5}\) & 1.166\({}^{+0.016}_{-0.017}\) & 0.037\({}^{+0.016}_{-0.018}\) & 2.567\({}^{+0.073}_{-0.076}\) & 2.70 \(\pm\) 0.17 & 59252.2124993(16) & 0.1677075 \\ ZTF J070458.08\(-\)020103.3 & 74.3\({}^{+0.2}_{-0.2}\) & 1.079\({}^{+0.060}_{-0.010}\) & 0.052\({}^{+0.011}_{-0.013}\) & 3.715\({}^{+0.076}_{-0.073}\) & 3.643 \(\pm\) 0.088 & 59253.2216462(43) & 0.1413708 \\ ZTF J071759.04+113630.2 & 84.9\({}^{+0.4}_{-0.3}\) & 2.326\({}^{+0.027}_{-0.020}\) & 0.018\({}^{+0.017}_{-0.012}\) & 2.812\({}^{+0.065}_{-0.072}\) & 2.74 \(\pm\) 0.13 & 59251.1312794(93) & 0.4527638 \\ ZTF J071843.68\(-\)085232.1 & 84.6\({}^{+0.7}_{-0.7}\) & 1.563\({}^{+0.014}_{-0.013}\) & 0.064\({}^{+0.017}_{-0.019}\) & 2.157\({}^{+0.062}_{-0.088}\) & 2.39 \(\pm\) 0.22 & 59283.1026109(12) & 0.2158113 \\ ZTF J080441.95\(-\)021545.7 & 85.3\({}^{+0.1}_{-0.1}\) & 0.889\({}^{+0.007}_{-0.044}\) & 0.027\({}^{+0.015}_{-0.051}\) & 5.631\({}^{+0.092}_{-0.089}\) & 5.47 \(\pm\) 0.11 & 59646.0905072397(97) & 0.1209762 \\ ZTF J0805429.98\(-\)143036.3 & 81.0\({}^{+0.1}_{-0.7}\) & 1.260\({}^{+0.015}_{-0.019}\) & 0.010\({}^{+0.012}_{-0.008}\) & 1.102\({}^{+0.034}_{-0.039}\) & 1.39 \(\pm\) 0.16 & 59646.18599526(74) & 0.1981669 \\ ZTF J094826.35+253810.6 & 79.9\({}^{+0.6}_{-0.6}\) & 1.003\({}^{+0.01 upper limit for the secondary temperature will be lower than given by our fit. In addition to these four systems with sub-stellar companions, we have measured one system with a companion mass just above the hydrogen-burning limit, ZTF J140537.34+103919.0, hereafter ZTF J1405+1039. 
The best-fit parameters for this system suggest that the secondary is significantly hotter than would be expected for its mass (shown as the blue point in Figure 5). Again, we stack the in-eclipse images to rule out problems in the photometry (Figure 6), demonstrating that the source is indeed detected in-eclipse. We believe that the most likely explanation for this is that ZTF J1405+1039 is actually a triple system, with a tertiary companion contributing a significant fraction of the in-eclipse flux. ### ZZ Ceti WDs ZZ Cetis are pulsating WDs, possessing hydrogen atmospheres and pulsation periods ranging from tens of seconds to tens of minutes (Fontaine and Brassard, 2008; Winget and Kepler, 2008; Romero et al., 2022). The presence of pulsations enables asteroseismological analyses to be performed, providing insight into the internal structure of the WD which is otherwise concealed by its highly stratified nature. In PCEBs, the possibility of measuring the internal structure of the WD is especially interesting as it can reveal how the WD itself is affected by the common-envelope phase (Hermes et al., 2015). Previously, only one ZZ Ceti WD in a detached eclipsing binary was known (Parsons et al., 2020). This system is a double WD binary, however, and as such its evolutionary history is less well defined, with the number of common-envelope events it has passed through being uncertain. ZZ Cetis found in WD-main sequence PCEBs do not have this problem, with their evolutionary past known to comprise a single common-envelope phase. These systems are therefore potentially Figure 4: Stacked images of ZTF J1828+2308 taken with ULTRACAM in the \(t_{\rm s}\) filter before and during the eclipse. The red dashed aperture shows the location of ZTF J1828+2308 itself while the solid blue aperture shows the fainter background source 2.79 arcsec away (Gaia DR3 4529477702982880512) that is marginally affecting our in-eclipse photometry. Figure 5: Measured masses and effective temperatures of the M dwarf components with an inset plot zoomed in around the brown dwarfs (which are shown in red). The solid black line shows the 1 Gyr track from Baraffe et al. (2015) and the shaded blue area denotes the region where our mass-radius relation is horizontal (i.e. the radius is constant in this mass range). For the brown dwarfs we plot the masses and temperatures as upper limits centred on the 84\({}^{\rm th}\) percentile of the fit. The blue point denotes ZTF J1405+1039 which has a best-fit secondary temperature that is much hotter than expected for its mass. Figure 3: Comparison of our parameters from the NTT-ULTRACAM photometry against the initial parameters of van Roestel et al. (in prep) from ZTF photometry. very interesting systems to find. Currently there is one known ZZ Ceti WD in a detached, albeit not eclipsing, PCEB (Pyrzas et al., 2015). Although this is an important find, Hermes et al. (2015) noted that there were a lot of free parameters, limiting the precision of the asteroseismological analysis. Eclipsing examples of such systems would reduce these free parameters and enable a more precise analysis. Comparing our best-fit parameters for the WD components to the ZZ Ceti instability strip (Figure 7), we find that eight of our systems have WDs that lie within 1\(\sigma\) of the instability strip (shown in Table 3). 
Closer inspection of their light curves does not reveal any clear photometric variability indicative of pulsations in six of the systems; however, the out-of-eclipse data for many of these systems typically span less than 30 minutes and so are not enough to rule out pulsations either. In addition, the WD temperatures are not necessarily precise enough to say with certainty whether a particular WD lies within the instability strip or not. Of these eight systems with WDs that lie in the instability strip, we have found two that show clear variability due to pulsations. These represent the first two ZZ Ceti WDs found in eclipsing WD+dM PCEBs. #### 5.3.1 ZTF J1407+2115 ZTF J140702.56+211559.7, hereafter ZTF J1407+2115, was first observed with ULTRACAM in February 2021. Unusual out-of-eclipse variation was noticed but the data taken in this run was insufficient to confirm pulsations. We observed ZTF J1407+2115 again for 1 h on the 2\({}^{\rm nd}\) of March 2022, detecting 3 clear pulsations and confirming it as the first eclipsing detached PCEB containing a ZZ Ceti WD. With this confirmation, we observed ZTF J1407+2115 in two long observing runs on the 4\({}^{\rm th}\) and 26\({}^{\rm th}\) of March 2022 using the \(u_{s}\,g_{s}\,i_{s}\) and \(u_{s}\,g_{s}\,r_{s}\) filters and lasting \(\sim 2\) h and \(\sim 5\) h respectively (Lomb-Scargle periodograms of these two long runs are shown in Figure 8). It is the photometry from the long observing run on the 4\({}^{\rm th}\) of March that we use to fit the system parameters. We choose this observation primarily for consistency with the modelling performed on the other systems in this work, but also as the wider wavelength range provided by the \(i_{s}\)-band strengthens the constraints on the WD temperature. Additionally, chromospheric variability in the H\(\alpha\) feature can lead to higher scatter of M dwarf fluxes in the \(r_{s}\)-band. In order to fit the eclipse photometry of this system, the pulsations need to be included in the light curve model to prevent them from introducing large systematic errors in the best-fit parameters. We do this using a Gaussian process (GP) implemented through the Python package George (Ambikasaran et al., 2015). The GP is applied to the residuals of the pylcurve model at each MCMC walker position, with the posterior log probability calculated as the sum of the GP marginalised log likelihood, the log likelihood from comparing the model WD SED with the measured eclipse depths, and the log priors (parallax and interstellar reddening). We use the ExpSquaredKernel, defined by an amplitude, temperature, and scale-length, with the temperature scaling the pulsation amplitude between the light curves in different filters according to a blackbody law. These three GP parameters are included as free parameters in our fit. We switch the GP off between the second and third contact points where the WD is totally eclipsed by its M dwarf companion, with the contact points being calculated for every walker position. We then use emcee (Foreman-Mackey et al., 2013) to sample from the posterior probability distribution and determine the best-fit parameters. This best-fit model is shown in Figure 9. Figure 6: Stacked images of ZTF J1405+1039 taken with ULTRACAM in the \(i_{s}\) filter before and during the eclipse. The red dashed aperture shows the location of ZTF J1405+1039. It is clear that the source is still detected in-eclipse. 
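To make the fitting procedure described above concrete, the following is a minimal, single-band sketch of a pulsation-aware eclipse fit built around the George and emcee packages cited above. The box-shaped eclipse model, the synthetic data, and all parameter names and values are placeholders invented purely for illustration; the actual analysis uses the multi-band pylcurve model, scales the pulsation amplitude between filters with a blackbody law, and switches the GP off between the second and third contact points, none of which is reproduced here.

```python
# Minimal single-band sketch: eclipse model plus a Gaussian-process pulsation
# term (george ExpSquaredKernel), sampled with emcee.  Everything below is a
# placeholder illustration, not the pylcurve model used in the paper.
import numpy as np
import george
from george import kernels
import emcee

def eclipse_model(t, t0, depth, width):
    """Crude box-shaped eclipse as a stand-in for the real light-curve model."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < width / 2] -= depth
    return flux

def log_prob(p, t, flux, err):
    t0, depth, width, ln_amp, ln_tau = p
    if not (0 < depth < 1 and 0 < width < 0.05):
        return -np.inf
    resid = flux - eclipse_model(t, t0, depth, width)
    # GP on the residuals models the coherent pulsation signal
    gp = george.GP(np.exp(2 * ln_amp) * kernels.ExpSquaredKernel(np.exp(2 * ln_tau)))
    gp.compute(t, yerr=err)
    return gp.log_likelihood(resid)

# Synthetic data purely so that the sketch runs end to end
rng = np.random.default_rng(0)
t = np.linspace(-0.04, 0.04, 200)                      # time in days
flux = (eclipse_model(t, 0.0, 0.5, 0.02)
        + 0.01 * np.sin(2 * np.pi * t / 0.01)          # fake ~15-min "pulsation"
        + rng.normal(0.0, 0.005, t.size))
err = np.full(t.size, 0.005)

ndim, nwalkers = 5, 32
p0 = np.array([0.0, 0.5, 0.02, np.log(0.01), np.log(0.005)])
p0 = p0 + 1e-4 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, flux, err))
sampler.run_mcmc(p0, 300)
print(np.median(sampler.get_chain(discard=150, flat=True), axis=0))
```

In practice the GP hyperparameters are sampled jointly with the eclipse parameters, so the pulsations inflate the parameter uncertainties rather than bias them, which is the point of including the GP term in the first place.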
Figure 7: The ZZ Ceti instability strip (blue region) with known pulsating (dark grey) and non-pulsating (light grey) WDs from Gianninas et al. (2011); Steinfadt et al. (2012); Hermes et al. (2012, 2013a,c,b); Romero et al. (2022). Points in red show the measured parameters of the WD components of binaries fit in this work, with the confirmed pulsators, ZTF J1407+2115 and ZTF J0528+2156, shown by the yellow and cyan stars respectively. \begin{table} \begin{tabular}{c c c c c} \hline Target & RA & Dec & G & Pulsating \\ \hline ZTF J0519+0925 & 05:19:02.1 & +09:25:26.38 & 19.0 & Candidate \\ ZTF J0528+2156 & 05:28:48.2 & +21:56:28.94 & 17.7 & Confirmed \\ ZTF J0948+2538 & 09:48:26.4 & +25:38:10.68 & 18.7 & Candidate \\ ZTF J1034+0052 & 10:34:48.8 & +00:52:01.69 & 19.0 & Candidate \\ ZTF J1302\(-\)0032 & 13:02:28.3 & -00:32:00.11 & 16.8 & Candidate \\ ZTF J1407+2115 & 14:07:02.6 & +21:55:97.5 & 17.4 & Confirmed \\ ZTF J1634\(-\)2713 & 16:34:21.0 & -27:13:21.54 & 18.8 & Candidate \\ ZTF J1802\(-\)0054 & 18:02:56.4 & -00:54:58.47 & 18.0 & Candidate \\ \hline \end{tabular} \end{table} Table 3: eclipsing PCEBs with – either confirmed or candidate – ZZ Ceti WDs We find the WD to have an effective temperature of \(10\,900\pm 300\) K and a mass of \(0.41\pm 0.01\) M\({}_{\odot}\), suggesting a core composed primarily of helium. This mass and temperature corresponds to a surface gravity of \(7.57\pm 0.04\) dex, placing it in a relatively sparsely sampled region in the middle of the instability strip (Figure 7). We subtract our best-fit eclipse light curve model from the \(g_{s}\)-band photometry of the longer run on the 26th of March, leaving just the pulsation signal. Running a periodogram on this determines the main pulsation mode to have a frequency of 1.11 mHz (898 s) with an amplitude of around 47 parts per thousand (ppt) (Figure 8). We calculate the 3\(\sigma\) significance threshold to be 8 ppt following the method of Greiss et al. (2014), shuffling the flux values 10 000 times and taking the amplitude of the 99.7th percentile highest peak. #### 5.3.2 ZTF J0528+2156 ZTF J052848.24+215629.0, hereafter ZTF J0528+2156, was first observed with ULTRACAM in February 2021. Attempts at fitting the eclipse light curve showed some possible structure in the residuals, prompting us to observe it again to search for pulsations. We observed ZTF J0528+2156 again on the 18th of December 2022 for 1.8 h, detecting pulsations with a period of around 11 minutes and amplitude of around 5 per cent. We fit the ULTRACAM photometry in the same way as for ZTF J1407+2115 - using a Gaussian process to model the pulsations. We find the WD to have an effective temperature of \(11\,900\pm 600\) K and a mass of \(0.78\pm 0.02\) M\({}_{\odot}\), corresponding to a surface gravity of \(8.27\pm 0.044\) dex and placing it comfortably within the instability strip (Figure 7). Computing the periodogram of the residuals of the eclipse light curve model in the same way as for ZTF J1407+2115, we find the main mode to have a frequency of 1.5 mHz (670 s) and amplitude of around 19 ppt with a 3\(\sigma\) significance threshold of 7 ppt (Figure 8). ### Magnetic WDs Around 36 per cent of WDs in cataclysmic variables (CVs) are observed to be strongly magnetic (Pala et al., 2020). This is in stark contrast with their progenitor population - the detached PCEBs - of which only a handful possess WDs with strong magnetic fields. Schreiber et al. 
(2021) propose an evolutionary channel between the magnetic CVs and the detached magnetic population to explain this discrepancy. This relies on a rotation-driven dynamo in which a crystallising WD, spun up due to accretion during the CV phase, can generate the strong magnetic fields that we observe in CVs. Interactions between the newly-formed magnetic field of the WD and the magnetic field of the M dwarf then act to detach the binary, halting mass transfer and causing the binary to appear as a strongly magnetic detached PCEB for a period of time before angular momentum loss due to magnetic braking and gravitational wave radiation brings the two stars back into a mass-transferring state as a polar or intermediate polar. A test of this model was performed by Parsons et al. (2021), using spectroscopic observations of detached magnetic PCEBs to constrain their evolutionary history, attempting to assess whether or not they are consistent with having undergone a mass-transferring phase in the past. All systems studied were found to be consistent with a previous CV phase but spectroscopic observations alone were not powerful enough to draw strong conclusions. More powerful constraints can be made if such systems are found to be eclipsing, enabling more precise measurements to be made from the eclipse photometry and therefore a robust test of the model. As part of our follow-up program we have discovered 6 new eclipsing PCEBs (Table 4) that we have confirmed from our high-speed photometry as having magnetic WDs - showing clear evidence of a bright magnetic pole in the eclipse ingress/egress, with one previously known as a magnetic system but not known to be eclipsing. We have additionally found 3 candidate systems that show out-of-eclipse variation that disappears when the WD is eclipsed but for which the ingress/egress of the eclipse do not confirm a bright magnetic pole. These systems have been found by searching for unusual out-of-eclipse variation in their ZTF light curves (Figure 10), inconsistent with the ellipsoidal modulation or reflection effect that is common in PCEBs. This unusual out-of-eclipse variability was noted in the pre-intermediate polar, SDSS J0303+0054, (Parsons et al., 2013) and is due to additional emission in the form of cyclotron radiation from the magnetic poles of the WD. The effect of the cyclotron emission on the eclipse profiles - introducing steps in the ingress and egress due to the eclipse of the small, bright magnetic pole (Figure 10) - makes the light curves of the magnetic systems more complicated to fit and so the analysis of these systems will be the subject of a future paper. ## 6 Conclusions Through our dedicated program of high-speed photometric follow-up we have obtained multi-band eclipse light curves for 43 new PCEBs found using ZTF. We have characterized 34 of these systems from the eclipse light curves alone - finding four that contain sub-stellar companions, doubling the number of eclipsing examples known, and two with pulsating WDs representing the first ZZ Ceti WDs known in eclipsing WD+dM binaries. Of the remaining nine systems, we Figure 8: Lomb-Scargle periodograms (shown in parts per thousand relative to the flux of the WD) of the ULTRACAM \(g_{s}\) light curves of ZTF J1407+2115 and ZTF J0528+2156 with their respective eclipse light curve models subtracted. Horizontal dashed lines show the 3\(\sigma\) significance levels calculated using the bootstrapping method described by Greiss et al. (2014, Section 4.1). 
Figure 9: ULTRACAM \(u_{s}\)\(g_{s}\)\(i_{s}\) light curves of ZTF J1407+2115 (**a**) and ZTF J0528+2156 (**b**). The top row of each plot shows the observed light curve (coloured points) with the combined eclipse plus mean Gaussian process pulsation model (black line). The second row shows the observed light curve with the eclipse model subtracted (coloured points) as well as the same data binned up by a factor of ten (dark grey points) with the mean Gaussian process model (black line). The third row shows the observed light curve with the mean Gaussian process subtracted with the black line showing the eclipse model. The bottom row shows the residuals of the full light curve model. The filled region shows the phase range where the Gaussian process is switched off (between the second and third eclipse contact points). Figure 10: _left_: ZTF g-band (black) and r-band (grey) light curves of the 6 new confirmed, and 3 new candidate eclipsing PCEBs with magnetic WDs. All show out-of-eclipse variation inconsistent with reflection effect or ellipsoidal modulation in at least one filter. Some light curves have been binned for clarity. _right_: Normalised ULTRACAM/HIPERACAM \(g_{s}\)-band primary eclipse light curves (zoomed in on the ingress and egress) of the 6 new confirmed, and 3 new candidate eclipsing PCEBs with magnetic WDs. The solid grey line shows a flux of zero while the red dashed line shows the mean flux of the first 10 points shown. have found six to contain strongly magnetic WDs from their eclipse photometry with three further candidates. These will be invaluable to the study of magnetic field generation in binary WDs. Our results demonstrate that a photometric approach to the follow-up of eclipsing systems can effectively discern interesting sub-types of PCEBs, including those that would be otherwise missed by spectroscopic follow-up. ## Acknowledgements SGP acknowledges the support of the UK's Science and Technology Facilities Council (STFC) Ernest Rutherford Fellowship. ARM acknowledges support from Grant RYC-2016-20254 funded by MCIN/AEI/10.13039/501100011033 and by ESF Investing in your future and from MINECO under the PID2020-117252GB-I00 grant. VSD, HiPERCAM, and ULTRACAM are supported by the STFC. IP and TRM acknowledge support from the STFC, grant ST/T000406/1 and a Leverhulme Research Fellowship. JM was supported by funding from a Science and Technology Facilities Council (STFC) studentship. Based on observations collected at the European Southern Observatory under ESO programme 0106.D-0824. Based on observations made with the Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias, in the island of La Palma. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. We thank the anonymous referee for their helpful comments. 
## Data Availability The data underlying this article will be shared upon reasonable request to the corresponding author.
2306.03261
On Lagrange multipliers of the KKT system in Hilbert spaces
In this paper we develop a new decomposition framework to deal with Lagrange multipliers of the Karush-Kuhn-Tucker (KKT) system of constrained optimization problems and variational inequalities in Hilbert spaces. It is different from existing frameworks based on separation theorems. We introduce the essential Lagrange multiplier and establish the basic theory of this new multiplier. The essential Lagrange multiplier poses essentially different existence results in finite and infinite-dimensional spaces. It can also be used to give an essential characterization of the convergence of multipliers generated by the classical augmented Lagrangian method. Our analysis reveals that the essential Lagrange multiplier is at the core of both theories and applications of Lagrange multipliers.
Zhiyu Tan
2023-06-05T21:26:00Z
http://arxiv.org/abs/2306.03261v1
# On Lagrange multipliers of the KKT system in Hilbert spaces # On Lagrange multipliers of the KKT system in Hilbert spaces Zhiyu Tan 1 Footnote 1: Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803, USA. Email: [email protected] or [email protected]. **Abstract:** In this paper we develop a new decomposition framework to deal with Lagrange multipliers of the Karush-Kuhn-Tucker (KKT) system of constrained optimization problems and variational inequalities in Hilbert spaces. It is different from existing frameworks based on separation theorems. We introduce the essential Lagrange multiplier and establish the basic theory of this new multiplier. The essential Lagrange multiplier poses essentially different existence results in finite and infinite-dimensional spaces. It can also be used to give an essential characterization of the convergence of multipliers generated by the classical augmented Lagrangian method. Our analysis reveals that the essential Lagrange multiplier is at the core of both theories and applications of Lagrange multipliers. **Keywords:** Hilbert spaces, Constrained optimization, KKT system, Weak form asymptotic KKT system, Essential Lagrange multiplier, Classical augmented Lagrangian method. **Mathematics Subject Classification**: 46N10, 49J27, 90C25, 90C46, 90C48. ## 1. Introduction The KKT (Karush-Kuhn-Tucker) system and the related Lagrange multipliers are of great significance to the theories and algorithms of constrained optimization problems (cf. [4, 5, 18, 22, 23, 24, 26, 29, 32, 33]). In this paper we attempt to move away from the classical approach based on separation theorems to develop a new framework to investigate Lagrange multipliers of the KKT system of constrained optimization problems, which can also cover some cases of variational inequalities related to constrained optimization problems. In our approach we first construct a surrogate model that should share the same KKT system with the optimization problem at a given local minimizer by making use of the linearization of the problem, and then discuss the KKT system of the surrogate model at the minimizer. The approach here is based on a key observation that the KKT system only involves the linearized information of the optimization problem at a minimizer and is somehow a reformulation of the first order necessary condition with respect to the linearizing cone at the minimizer. For the surrogate model, we will prove the following existence theorem (cf. Section 2). **Theorem 1.1**.: _For a given minimizer \(u^{*}\) of the constrained optimization problem (2.1), the surrogate model exists for all \(f\in\mathcal{F}(u^{*})\) at the minimizer if and only if Guignard's condition (2.17) holds, where \(\mathcal{F}(u^{*})\) is the set of all Frechet differentiable objective functions which have a local constrained minimizer at \(u^{*}\)._ In general the surrogate model is a special case of the following model problem \[\min\theta(u)\quad\text{s.t.}\ Su\in K, \tag{1.1}\] where \(\mathcal{U}\) and \(\mathcal{X}\) are real Hilbert spaces, \(\emptyset\neq K\subseteq\mathcal{X}\) is closed and convex, and \(S\) is a bounded linear operator from \(\mathcal{U}\) to \(\mathcal{X}\). According to the Riesz representation theorem (cf. [9, Theorem 3.4, Chapter I]), we set \(\mathcal{U}^{\prime}=\mathcal{U}\) and \(\mathcal{X}^{\prime}=\mathcal{X}\). 
We assume that the feasible set \(\mathrm{R}(S)\cap K\neq\emptyset\), where R\((S)\) is the range of \(S\) and \(\theta(u)\) is continuously Frechet differentiable and strongly convex on \(\mathcal{U}\), i.e., there exists \(c_{0}>0\) such that \[\langle u-v,D_{u}\theta(u)-D_{u}\theta(v)\rangle_{\mathcal{U}}\geq c_{0}\|u-v \|_{\mathcal{U}}^{2}, \tag{1.2}\] where \(D_{u}\theta(v)\) is the first order Frechet derivative of \(\theta(\cdot)\) at \(v\), \(\langle\cdot,\cdot\rangle_{\mathcal{U}}\) is the inner product of \(\mathcal{U}\) and \(\|\cdot\|_{\mathcal{U}}\) is the induced norm. The inner product and the induced norm on \(\mathcal{X}\) are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\) respectively. It follows from the assumptions on the model problem (1.1) and the classical convex optimization theory that there exists a unique global minimizer \(u^{*}\) of the model problem (1.1). We will investigate the KKT system of the model problem at \(u^{*}\), and the results can be applied to the surrogate model. To avoid separation theorems, we use an optimization procedure regularization approach to derive the KKT system at \(u^{*}\), which will be realized by the classical augmented Lagrangian method (ALM, for short)(cf. [16, 30]) in this paper. By carrying out the convergence analysis of the classical ALM without using any information of Lagrange multipliers of the model problem (1.1) at \(u^{*}\), we will prove the following theorem (cf. Appendix A). **Theorem 1.2**.: _A feasible point \(u^{*}\) is a global minimizer of the model problem (1.1) if and only if there exists \(\{\lambda^{k}\}_{k=1}^{+\infty}\subset\mathcal{X}\) such that the following weak form asymptotic KKT system (W-AKKT, for short) holds_ \[\left\{\begin{aligned} &\langle D_{u}\theta(u^{*}),v\rangle_{ \mathcal{U}}+\lim_{k\to+\infty}\langle\lambda^{k},Sv\rangle=0\quad\forall\ v\in \mathcal{U};\\ &-\limsup_{k\to+\infty}\langle\lambda^{k},\zeta-Su^{*}\rangle \geq 0\quad\forall\ \zeta\in K.\end{aligned}\right. \tag{1.3}\] Motivated by the results in Theorem 1.2, we introduce the essential Lagrange multiplier. **Definition 1.1**.: _An element \(\lambda^{*}\in\overline{R(S)}\) is called an essential Lagrange multiplier of the model problem (1.1) at \(u^{*}\) if it satisfies_ \[\left\{\begin{aligned} &\langle D_{u}\theta(u^{*}),v\rangle_{ \mathcal{U}}+\langle\lambda^{*},Sv\rangle=0\quad\forall\ v\in\mathcal{U};\\ &-\langle\lambda^{*},\zeta-Su^{*}\rangle\geq 0\quad\forall\ \zeta\in \overline{K\cap R(S)},\end{aligned}\right. \tag{1.4}\] _where \(\overline{R(S)}\) and \(\overline{K\cap R(S)}\) are the closure of \(R(S)\) and \(K\cap R(S)\), respectively._ We also recall the definition of the proper Lagrange multiplier and the classical KKT system as follows (cf. [2, p. 160]). **Definition 1.2**.: _An element \(\bar{\lambda}\in\mathcal{X}\) is called a proper Lagrange multiplier of the model problem (1.1) at \(u^{*}\) if it satisfies the classical KKT system_ \[\left\{\begin{aligned} &\langle D_{u}\theta(u^{*}),v\rangle_{ \mathcal{U}}+\langle\bar{\lambda},Sv\rangle=0\quad\forall\ v\in\mathcal{U};\\ &-\langle\bar{\lambda},\zeta-Su^{*}\rangle\geq 0\quad\forall\ \zeta\in K.\end{aligned}\right. 
\tag{1.5}\] Note that the essential Lagrange multiplier of the model problem (1.1) is actually the proper Lagrange multiplier of the optimization problem \[\min\theta(u)\quad\text{s.t.}\ \ Su\in\overline{K\cap R(S)}\subset\overline{R(S)},\] which implies that the essential Lagrange multiplier is only related to the feasible set of the model problem (1.1). This observation inspires us to investigate the proper Lagrange multiplier with the help of the essential Lagrange multiplier. More details on the essential Lagrange multiplier will be given in Section 3. These results indicate that the essential Lagrange multiplier is a fundamental concept in the theory of Lagrange multipliers of constrained optimization. As an application of the essential Lagrange multiplier, we will consider the convergence of the multipliers generated by the classical ALM for the model problem (1.1). Our results show some equivalence between the convergence of the multipliers and the existence of the essential Lagrange multiplier (see Theorem 3.9). This indicates that the essential Lagrange multiplier is of fundamental importance in the application of Lagrange multipliers. The rest of the paper is organized as follows. In Section 2 a general optimization problem and the related variational inequalities are given, and some results of the surrogate model are also presented there, especially the proof of Theorem 1.1. A thorough discussion of the essential Lagrange multiplier and the proper Lagrange multiplier of the model problem (1.1) is included in Section 3. The results in this section also theoretically confirm the necessity of using asymptotic or approximate KKT systems (cf [1, 6, 20, 38]) to give the optimality conditions of constrained optimization problems in infinite-dimensional spaces. Section 4 is devoted to some further applications of the W-AKKT system. We give an elementary proof of the existence of the proper Lagrange multiplier under Robinson's condition that is widely used in the theories and applications of optimization problems in both finite and infinite-dimensional spaces (cf. [18, 27, 28, 31, 42]). An application of the theory to optimal control problems with pointwise constraints is given in Section 5. The paper ends up with some concluding remarks in Section 6. Details for the proof of Theorem 1.2 are provided in Appendix A. In this paper we use the standard notations from functional analysis, convex analysis and partial differential equations, see for example in [2, 3, 9, 11, 14, 33, 41]. ## 2. A general optimization problem and the surrogate model Let us consider the following general optimization problem \[\min f(u)\ \ \ \text{s.t.}\ \ G(u)\in\mathcal{K}, \tag{2.1}\] where \(f:\ \mathcal{U}\to\mathbb{R}\), \(G:\ \mathcal{U}\to\mathcal{X}\), \(\mathcal{K}\) is a closed and convex set in \(\mathcal{X}\), and \(\mathcal{U}\), \(\mathcal{X}\) are two real Hilbert spaces. Assume that \(G(u)\cap\mathcal{K}\neq\emptyset\) and \(u^{*}\) is a minimizer of the optimization problem (2.1). We will investigate the KKT system of the optimization problem (2.1) at \(u^{*}\). At present we assume that \(f\) and \(G\) are Frechet differentiable, and we will extend the results to some nonsmooth cases later. As aforementioned, we will denote by \(\mathcal{F}(u^{*})\) the set of all Frechet differentiable objective functions which have a local constrained minimizer at \(u^{*}\). ### Some preliminary results Let \(M\) be the feasible set, i.e., \[M=\{u\in\mathcal{U}:\ G(u)\in\mathcal{K}\}. 
\tag{2.2}\] We denote by \(T(M,\bar{u})\), \(T_{w}(M,\bar{u})\) and \(L(\mathcal{K},\bar{u})\) the sequential tangent cone, the weak sequential tangent cone and the linearizing cone at \(\bar{u}\in M\) respectively, which are defined by \[T(M,\bar{u})=\{v\in\mathcal{U}:\ \exists\{u_{n}\}\subset M,\ \{t_{n}\}\subset\mathbb{R}^{+},\ u_{n}\to\bar{u},\ t_{n}\to 0^{+}, \frac{1}{t_{n}}(u_{n}-\bar{u})\to v\}, \tag{2.3}\] \[T_{w}(M,\bar{u})=\{v\in\mathcal{U}:\ \exists\{u_{n}\}\subset M,\ \{t_{n}\}\subset\mathbb{R}^{+},\ u_{n}\to\bar{u},\ t_{n}\to 0^{+}, \frac{1}{t_{n}}(u_{n}-\bar{u})\rightharpoonup v\} \tag{2.4}\] and \[L(\mathcal{K},\bar{u})=\{tv\in\mathcal{U}:\ G(\bar{u})+G^{\prime}(\bar{u})v \in\mathcal{K},\ \forall\ t>0\}. \tag{2.5}\] Let \(C\subset\mathcal{U}\). The polar cone of \(C\) is defined by \[C^{\circ}=\{v\in\mathcal{U}:\ \langle v,w\rangle_{\mathcal{U}}\leq 0\quad \forall\ w\in C\}. \tag{2.6}\] The following property of the sequential tangent cone, the weak sequential tangent cone and the linearizing cone is crucial to establish our theory. **Lemma 2.1**.: _If \(G\) is a bounded linear operator, for any \(\bar{u}\in M\), it holds_ \[\overline{L(\mathcal{K},\bar{u})}=T(M,\bar{u})=T_{w}(M,\bar{u}). \tag{2.7}\] Proof.: Let \(v\in T(M,\bar{u})\), i.e., there exist \(\{u_{n}\}\subset\mathcal{U}\) and \(\{t_{n}\}\subset\mathbb{R}^{+}\) such that \[v=\lim_{n\to+\infty}\frac{1}{t_{n}}(u_{n}-\bar{u}),\ t_{n}\to 0^{+},\ u_{n}\in M,\] and set \(v_{n}=u_{n}-\bar{u}\). Since \(G\) is linear and \(u_{n}\in M\), we have \[G(\bar{u})+G^{\prime}(\bar{u})v_{n}=G(\bar{u}+v_{n})=G(u_{n})\in\mathcal{K},\] which implies \(\frac{1}{t_{n}}v_{n}\in L(\mathcal{K},\bar{u})\) and \(v\in\overline{L(\mathcal{K},\bar{u})}\) by \(v=\lim\limits_{n\to+\infty}\frac{1}{t_{n}}v_{n}\). This leads to \[T(M,\bar{u})\subset\overline{L(\mathcal{K},\bar{u})}.\] For any \(0\neq v\in L(\mathcal{K},\bar{u})\), there exist \(v_{0}\in\mathcal{U}\) and \(t_{0}>0\), such that \(v=t_{0}v_{0}\) and \[G(\bar{u}+v_{0})=G(\bar{u})+G(v_{0})=G(\bar{u})+G^{\prime}(\bar{u})v_{0}\in \mathcal{K}.\] Let \(u_{n}=\bar{u}+\frac{1}{n}v_{0}.\) Since \(\mathcal{K}\) is convex, we have \[G(u_{n})=G(\bar{u}+\frac{1}{n}v_{0})=\frac{n-1}{n}G(\bar{u})+\frac{1}{n}G( \bar{u}+v_{0})\in\mathcal{K},\] which implies \(u_{n}\in M\) for any \(n\in\mathbb{N}^{+}\). Taking \(t_{n}=1/(nt_{0})>0\), it follows \[v=\lim\limits_{n\to+\infty}\frac{1}{t_{n}}(u_{n}-\bar{u}),\] which gives \(v\in T(M,\bar{u})\) and \[L(\mathcal{K},\bar{u})\subset T(M,\bar{u}).\] Note that \(T(M,\bar{u})\) is closed (cf. [19, Theorem 4.10]). Therefore, \[\overline{L(\mathcal{K},\bar{u})}=T(M,\bar{u}).\] Since \(G\) is a bounded linear operator, \(M\) is closed and convex. This further implies \[T(M,\bar{u})=T_{w}(M,\bar{u})\] by Proposition 6.1 of [13]. This completes the proof. ### A first order necessary condition According to the classical optimization theory, at the minimizer \(u^{*}\), the following first order necessary condition holds (cf. [15, Theorem 1], [18, Proposition 1.2] or [19, Theorem 4.14]) \[\langle D_{u}f(u^{*}),v\rangle_{\mathcal{U}}\geq 0\quad\forall\ v\in T_{w}(M,u^{* }), \tag{2.8}\] which is equivalent to \[-D_{u}f(u^{*})\in T_{w}^{\circ}(M,u^{*}).\] We can also consider the variational inequality: Find \(u^{*}\in M\) such that \[\langle F(u),v\rangle_{\mathcal{U}}\geq 0\quad\forall\ v\in T_{w}(M,u), \tag{2.9}\] where \(F:\ \mathcal{U}\to\mathcal{U}^{\prime}(=\mathcal{U})\) is a given mapping. 
It is obvious that at a solution point \(u^{*}\), there holds \[\langle F(u^{*}),v\rangle_{\mathcal{U}}\geq 0\quad\forall\ v\in T_{w}(M,u^{*}),\] which is in the same form of (2.8). Therefore, we can deal with these two problems in the same framework, and we will only give the arguments for (2.8). ### The surrogate model In this part we will give the definition of the surrogate model at a minimizer of the optimization problem (2.1) and prove a fundamental theorem in our theory, i.e., Theorem 1.1. Note that the linearization problem of (2.1) at \(u^{*}\) is \[\min f(u^{*})+\langle D_{u}f(u^{*}),u-u^{*}\rangle_{\mathcal{U}}\quad\text{s.t. }\ G(u^{*})+G^{\prime}(u^{*})(u-u^{*})\in\mathcal{K},\] which is equivalent to \[\min f(u^{*})+\langle D_{u}f(u^{*}),u-u^{*}\rangle_{\mathcal{U}}\quad\text{s.t. }\ G^{\prime}(u^{*})u\in\mathcal{K}-G(u^{*})+G^{\prime}(u^{*})u^{*}.\] Let us consider the following optimization problem \[\min f(u^{*})+\langle D_{u}f(u^{*}),u-u^{*}\rangle_{\mathcal{U}}+\frac{c}{2} \|u-u^{*}\|_{\mathcal{U}}^{2}\quad\text{s.t. }\ G^{\prime}(u^{*})u\in K, \tag{2.10}\] where \(c>0\) and \(K=\mathcal{K}-G(u^{*})+G^{\prime}(u^{*})u^{*}\). Note that the optimization problem (2.10) is a special case of the model problem (1.1). The feasible set of this problem is \[\tilde{M}=\{u\in\mathcal{U}:\ G^{\prime}(u^{*})u\in K\},\] and we have some key observations \[\begin{cases}K-G^{\prime}(u^{*})u^{*}&=\ \ \mathcal{K}-G(u^{*});\\ G^{\prime}(u^{*})u^{*}\in K\Longleftrightarrow G(u^{*})\in\mathcal{K}\end{cases} \tag{2.11}\] and \[L(K,u^{*}) =\{tv\in\mathcal{U}:\ G^{\prime}(u^{*})u^{*}+G^{\prime}(u^{*})v \in K,\ \forall\ t>0\}\] \[=\{tv\in\mathcal{U}:\ G^{\prime}(u^{*})u^{*}+G^{\prime}(u^{*})v \in\mathcal{K}-G(u^{*})+G^{\prime}(u^{*})u^{*},\ \forall\ t>0\}\] \[=\{tv\in\mathcal{U}:\ G(u^{*})+G^{\prime}(u^{*})v\in\mathcal{K}, \ \forall\ t>0\}\] \[=L(\mathcal{K},u^{*}). \tag{2.12}\] **Lemma 2.2**.: _The classical KKT system (1.5) of the optimization problem (2.10) holds at \(u^{*}\) with a proper Lagrange multiplier \(\bar{\lambda}\in\mathcal{X}\), i.e.,_ \[\begin{cases}\langle D_{u}f(u^{*}),v\rangle_{\mathcal{U}}+\langle\bar{\lambda },G^{\prime}(u^{*})v\rangle=0\quad\forall\ v\in\mathcal{U};\\ -\langle\bar{\lambda},\zeta-G^{\prime}(u^{*})u^{*}\rangle\geq 0\quad\forall \ \zeta\in K,\end{cases} \tag{2.13}\] _if and only if the classical KKT system of the optimization problem (2.1) holds at \(u^{*}\) with the same proper Lagrange multiplier \(\bar{\lambda}\), i.e.,_ \[\begin{cases}\langle D_{u}f(u^{*}),v\rangle_{\mathcal{U}}+\langle\bar{\lambda },G^{\prime}(u^{*})v\rangle=0\quad\forall\ v\in\mathcal{U};\\ -\langle\bar{\lambda},\zeta-G(u^{*})\rangle\geq 0\quad\forall\ \zeta\in \mathcal{K}.\end{cases} \tag{2.14}\] Proof.: This follows from (2.11) directly. Note that if \(u^{*}\) satisfies (2.13), \(u^{*}\) is the global minimizer of the optimization problem (2.10). Meanwhile, according to the convex optimization theory, \(u^{*}\) is the global minimizer of the optimization problem (2.10) if and only if (cf. [15, Theorem 1] or [19, Theorem 4.14 and Theorem 4.19]) \[\langle D_{u}f(u^{*}),v\rangle_{\mathcal{U}}\geq 0\quad\forall\ v\in T(\tilde{M },u^{*}),\] which is equivalent to \[\langle D_{u}f(u^{*}),v\rangle_{\mathcal{U}}\geq 0\quad\forall\ v\in\overline{L( \mathcal{K},u^{*})}\] by Lemma 2.1 and (2.12). That is \[-D_{u}f(u^{*})\in\overline{L(\mathcal{K},u^{*})}^{\circ}=L^{\circ}(\mathcal{K },u^{*}). 
\tag{2.15}\] On the other hand, Lemma 4.2 of [13] states that \[L^{\circ}(\mathcal{K},u^{*})\subset T^{\circ}_{w}(M,u^{*}). \tag{2.16}\] Therefore, the first order necessary condition (2.8) holds automatically if \(u^{*}\) is the global minimizer of the optimization problem (2.10). This inspires us to use the optimization problem (2.10) to investigate the KKT system of the optimization problem (2.1) at \(u^{*}\). **Definition 2.3**.: _For \(f\in\mathcal{F}(u^{*})\), the optimization problem (2.10) is called a surrogate model of the optimization problem (2.1) at \(u^{*}\) if \(u^{*}\) is the global minimizer of the optimization problem (2.10)._ Theorem 1.1 establishes the existence results of the surrogate model of the optimization problem (2.1) at \(u^{*}\). We recall here Guignard's condition (cf. [13, 15]), which is \[T^{\circ}_{w}(M,u^{*})=L^{\circ}(\mathcal{K},u^{*}). \tag{2.17}\] ### Proof of Theorem 1.1 For any \(f\in\mathcal{F}(u^{*})\), since \(u^{*}\) is a global minimizer of the optimization problem (2.10), the condition (2.15) should be held, i.e., \(-D_{u}f(u^{*})\in L^{\circ}(\mathcal{K},u^{*})\), which yields \[D\mathcal{F}(u^{*})\subset L^{\circ}(\mathcal{K},u^{*}).\] Here \(D\mathcal{F}(u^{*})=\{-D_{u}f(u^{*})\in\mathcal{U}:\ f\in\mathcal{F}(u^{*})\}\). Note that Theorem 3.2 of [13] gives \[D\mathcal{F}(u^{*})=T^{\circ}_{w}(M,u^{*}).\] Hence, \(T^{\circ}_{w}(M,u^{*})=D\mathcal{F}(u^{*})\subset L^{\circ}(\mathcal{K},u^{*})\). Combining with (2.16), we arrive at (2.17). For the other direction, (2.8) holds, i.e., \[-D_{u}f(u^{*})\in T^{\circ}_{w}(M,u^{*})\] for any \(f\in\mathcal{F}(u^{*})\). If (2.17) holds, we have (2.15) holds, which implies \(u^{*}\) is a global minimizer of the optimization problem (2.10). Hence, the optimization problem (2.10) is a surrogate model of the optimization problem (2.1) at \(u^{*}\). This completes the proof. ### Nonsmooth cases In the previous arguments, we assume that \(f\) and \(G\) are Frechet differentiable. It is worth noting that the theory in this paper can also be applied to some nonsmooth cases, for example the case where \(f\) and \(G\) are only semismooth. In this case, we can choose \(p_{u^{*}}\in\partial f(u^{*})\) and \(S_{u^{*}}\in\partial G(u^{*})\) to do the analysis. If there exist \(p_{u^{*}}\in\partial f(u^{*})\) and \(S_{u^{*}}\in\partial G(u^{*})\) such that 1. \(\langle p_{u^{*}},v\rangle_{\mathcal{U}}\geq 0\quad\forall\ v\in T_{w}(M,u^{*})\); 2. \(T^{\circ}_{w}(M,u^{*})=L^{\circ}(\mathcal{K},u^{*},S_{u^{*}})\), where \(L(\mathcal{K},u^{*},S_{u^{*}})\) is \(L(\mathcal{K},u^{*})\) with \(G^{\prime}(u^{*})=S_{u}^{*}\), then the surrogate model can be defined as \[\min f(u^{*})+\langle p_{u^{*}},u-u^{*}\rangle_{\mathcal{U}}+\frac{c}{2}\|u- u^{*}\|_{\mathcal{U}}^{2}\quad\text{s.t.}\ \ S_{u^{*}}u\in K=\mathcal{K}-G(u^{*})+S_{u^{*}}u^{*}\] for any \(c>0\). ## 3. The essential Lagrange multiplier In this section we will establish the basic theory of the essential Lagrange multiplier and the proper Lagrange multiplier of the model problem (1.1). As an application of the essential Lagrange multiplier, we will use it to characterize the convergence of the multipliers generated by the classical ALM (see (A.3)). Without further explanation, we always assume that \(u^{*}\) is the global minimizer of the model problem (1.1) in this section. ### The essential Lagrange multiplier We will establish the existence and uniqueness theory of the essential Lagrange multiplier (see Definition 1.1) here. 
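Before turning to the general results, a minimal finite-dimensional sketch may help fix ideas. All data below (the point \(a\), the matrix \(S\) and the disc \(K\)) are invented purely for illustration and are not taken from the examples of this paper: with \(\theta(u)=\frac{1}{2}\|u-a\|^{2}\), \(S=\mathrm{diag}(1,0)\) and \(K\) the closed unit disc centred at \((1,0)\), the feasible set is \(\{u\in\mathbb{R}^{2}:0\leq u_{1}\leq 2\}\), and both conditions of Definition 1.1 can be checked directly.

```python
# Checking Definition 1.1 numerically for an invented finite-dimensional
# instance of the model problem: theta(u) = 0.5*||u - a||^2, S = diag(1, 0),
# K = closed disc of radius 1 centred at (1, 0), so K ∩ R(S) = {(t,0): 0<=t<=2}.
import numpy as np

a = np.array([3.0, 1.0])
S = np.array([[1.0, 0.0], [0.0, 0.0]])

# Feasible set {u : 0 <= u_1 <= 2}; the global minimizer clips a onto it.
u_star = np.array([np.clip(a[0], 0.0, 2.0), a[1]])        # = (2, 1)
grad = u_star - a                                         # D_u theta(u*) = (-1, 0)

# Condition (i): S^T lambda* = -D_u theta(u*).  The minimum-norm least-squares
# solution lies in R(S), as Definition 1.1 requires.
lam_star = np.linalg.lstsq(S.T, -grad, rcond=None)[0]
print("essential Lagrange multiplier:", lam_star)         # -> (1, 0)

# Condition (ii): -<lambda*, zeta - S u*> >= 0 on closure(K ∩ R(S)).
zeta = np.column_stack([np.linspace(0.0, 2.0, 201), np.zeros(201)])
print("variational inequality holds:",
      np.all(-(zeta - S @ u_star) @ lam_star >= -1e-12))  # -> True
```

In this toy setting the multiplier always exists because \(R(S)\) is finite-dimensional; the results below show that this is precisely where the finite and infinite-dimensional theories part ways.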
Our results indicate the existence theory of the essential Lagrange multiplier is different in finite and infinite-dimensional cases. More precisely, the essential Lagrange multiplier always exists in the finite-dimensional case, while in the infinite-dimensional case, this is no longer true. **Theorem 3.1**.: _The essential Lagrange multiplier is unique._ Proof.: This can be derived from Definition 1.1 directly. If there exist two essential Lagrange multipliers \(\lambda_{1}^{*}\in\overline{R(S)}\) and \(\lambda_{2}^{*}\in\overline{R(S)}\) at \(u^{*}\), it holds \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\lambda_{1}^{*},Sv \rangle=0\quad\forall\ v\in\mathcal{U}\] and \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\lambda_{2}^{*},Sv \rangle=0\quad\forall\ v\in\mathcal{U},\] which follows \[\langle\lambda_{1}^{*}-\lambda_{2}^{*},Sv\rangle=0\quad\forall\ v\in\mathcal{ U}.\] This gives \(\lambda_{1}^{*}=\lambda_{2}^{*}\), and it completes the proof. **Theorem 3.2**.: _The essential Lagrange multiplier exists at the global minimizer \(u^{*}\) of the model problem (1.1) if and only if_ \[-D_{u}\theta(u^{*})\in R(S^{*}),\] _where \(S^{*}\) is the adjoint operator of \(S\)._ Proof.: If the essential Lagrange multiplier \(\lambda^{*}\) exists, according to its definition, we have \[-D_{u}\theta(u^{*})=S^{*}\lambda^{*},\] which means \(-D_{u}\theta(u^{*})\in R(S^{*})\). If \(-D_{u}\theta(u^{*})\in R(S^{*})\), there exists \(\bar{\lambda}^{*}\in\mathcal{X}\) such that \[-D_{u}\theta(u^{*})=S^{*}\bar{\lambda}^{*},\] which is equivalent to \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\bar{\lambda}^{*},Sv \rangle=0\quad\forall\ v\in\mathcal{U}. \tag{3.1}\] Since \(u^{*}\) is the global minimizer, by Theorem 1.2, there exists \(\{\lambda^{k}\}\subset\mathcal{X}\) such that the W-AKKT system (1.3) holds. Therefore, \[\lim_{k\to+\infty}\langle\lambda^{k},Sv\rangle=\langle\bar{\lambda}^{*},Sv \rangle\quad\forall\ v\in\mathcal{U},\] by (3.1), and then \[-\langle\bar{\lambda}^{*},Sv-Su^{*}\rangle=-\lim_{k\to+\infty}\langle\lambda^ {k},Sv-Su^{*}\rangle\geq-\limsup_{k\to+\infty}\langle\lambda^{k},Sv-Su^{*} \rangle\geq 0\quad\forall\ v\in\mathcal{U},\] by (1.3). Finally, by the boundedness of \(\bar{\lambda}^{*}\), we arrive at \[-\langle\bar{\lambda}^{*},\zeta-Su^{*}\rangle\geq 0\quad\forall\ \zeta\in \overline{K\cap R(S)},\] which, together with (3.1), implies that the restriction of \(\bar{\lambda}^{*}\) to \(\overline{R(S)}\) is the essential Lagrange multiplier. This gives the existence of the essential Lagrange multiplier. **Theorem 3.3**.: _If \(R(S)\) is closed in \(\mathcal{X}\), then the essential Lagrange multiplier exists. Conversely, if the essential Lagrange multiplier always exists at \(u^{*}\) for any \(K\) and \(\theta(\cdot)\) satisfying the assumptions of the model problem (1.1), then \(R(S)\) is closed in \(\mathcal{X}\)._ Proof.: According to Theorem 1.2, there exists \(\{\lambda^{k}\}_{k=1}^{+\infty}\subset\mathcal{X}\) such that \[\lim_{k\to+\infty}\langle S^{*}\lambda^{k}+D_{u}\theta(u^{*}),v\rangle_{ \mathcal{U}}=0\quad\forall\ v\in\mathcal{U},\] i.e., \(\{S^{*}\lambda^{k}\}_{k=1}^{+\infty}\) weakly converges to \(-D_{u}\theta(u^{*})\) in \(\mathcal{U}\). If \(R(S)\) is closed, by the closed range theorem (cf. [41, p. 205]), \(R(S^{*})\) is closed, which further is weakly closed. Therefore, \(-D_{u}\theta(u^{*})\in R(S^{*})\) and then by Theorem 3.2, we have the existence of the essential Lagrange multiplier. 
Let \(u_{0}\in\mathrm{Ker}(S)^{\perp}\) be arbitrary, \(\theta(u)=\frac{1}{2}\|u\|_{\mathcal{U}}^{2}\) and \(K=\{Su_{0}\}\). In this case the global minimizer \(u^{*}=u_{0}\) and \(D_{u}\theta(u^{*})=u_{0}\in\mathrm{Ker}(S)^{\perp}\). Note that \(\mathrm{Ker}(S)^{\perp}=\overline{R(S^{*})}\). Since \(u_{0}\) is arbitrary, by the assumption on the existence of the essential Lagrange multiplier and Theorem 3.2, we have \(\mathrm{Ker}(S)^{\perp}\subset R(S^{*})\), which gives \(\overline{R(S^{*})}\subset R(S^{*})\). Therefore, \(R(S^{*})=\overline{R(S^{*})}\), which is equivalent to \(R(S)=\overline{R(S)}\) by the closed range theorem, i.e., \(R(S)\) is closed. **Remark 3.1**.: _Theorem 3.3 implies that the condition that \(R(S)\) is closed in \(\mathcal{X}\) is sufficient and almost necessary for the existence of the essential Lagrange multiplier._ Note that if \(R(S)\) is a finite-dimensional space, \(R(S)\) is closed. We have the following corollaries. **Corollary 3.2**.: _If \(R(S)\) is a finite-dimensional space, then the essential Lagrange multiplier exists._ **Corollary 3.3**.: _If \(\mathcal{X}\) is a finite-dimensional space, then the essential Lagrange multiplier exists._ The essential Lagrange multiplier can also be used to characterize the optimality of a feasible point. **Theorem 3.4**.: _If the essential Lagrange multiplier exists at a feasible point \(u^{*}\), then \(u^{*}\) is the global minimizer of the model problem (1.1)._ Proof.: According to the convex optimization theory, it suffices to show that (cf. [3, Proposition 26.5], [10, Proposition 2.1, Chapter II] or [19, Corollary 4.20]) \[\langle D_{u}\theta(u^{*}),v-u^{*}\rangle_{\mathcal{U}}\geq 0\quad\forall\ Sv \in K.\] Let \(\lambda^{*}\) be the essential Lagrange multiplier. The definition of \(\lambda^{*}\) gives \[\langle D_{u}\theta(u^{*}),v-u^{*}\rangle_{\mathcal{U}}=-\langle\lambda^{*}, S(v-u^{*})\rangle=-\langle\lambda^{*},Sv-Su^{*}\rangle\geq 0\quad\forall\ Sv\in K,\] which completes the proof. Furthermore, Theorem 3.3 and Theorem 3.4 lead to the following corollary. **Corollary 3.4**.: _Assume that \(R(S)\) is closed in \(\mathcal{X}\). Then a feasible point \(u^{*}\) is the global minimizer of the model problem (1.1) if and only if the essential Lagrange multiplier exists at \(u^{*}\)._ ### The proper Lagrange multiplier According to the definition of the proper Lagrange multiplier (see Definition 1.2) and the definition of the essential Lagrange multiplier (see Definition 1.1), the existence of the proper Lagrange multiplier always implies the existence of the essential Lagrange multiplier and \[\lambda^{*}=\bar{\lambda}|_{\overline{R(S)}}, \tag{3.2}\] where \(\bar{\lambda}|_{\overline{R(S)}}\) is the restriction of \(\bar{\lambda}\) to \(\overline{R(S)}\). In other words, if the essential Lagrange multiplier does not exist, neither does the proper Lagrange multiplier. Therefore, we can consider the theory of the proper Lagrange multiplier under the assumption that the essential Lagrange multiplier exists, which can be verified by the results in the previous subsection. We can also assume that \(\lambda^{*}\neq 0\). Otherwise \(\bar{\lambda}=0\) is a proper Lagrange multiplier. We will establish the existence and uniqueness theory of the proper Lagrange multiplier under Assumption 3.6. **Remark 3.5**.: _If the essential Lagrange multiplier does not exist, neither does the classical KKT system. This happens in infinite-dimensional cases as shown in the previous subsection. 
Therefore, it is necessary to use the asymptotic or approximate KKT system to characterize the optimality in some infinite-dimensional cases, which has been used in the literature (cf. [1, 6, 20, 37, 38]) as a technique, but did not confirm its necessity theoretically._ **Assumption 3.6**.: _The essential Lagrange multiplier \(\lambda^{*}\) exists at \(u^{*}\) and \(\lambda^{*}\neq 0\)._ Let \(\zeta^{*}=Su^{*}\) and \(\mathcal{N}(\zeta^{*},K)\) be the normal cone to \(K\) at \(\zeta^{*}\), i.e., \[\mathcal{N}(\zeta^{*},K)=\{\lambda\in\mathcal{X}:\ -\langle\lambda,\zeta- \zeta^{*}\rangle\geq 0\quad\forall\ \zeta\in K\}.\] **Theorem 3.5**.: _Suppose that Assumption 3.6 holds. The proper Lagrange multiplier exists at \(u^{*}\) if and only if there exist \(\tilde{\lambda}\in\mathcal{N}(\zeta^{*},K)\) and \(\tilde{\zeta}_{0}\in\overline{R(S)}\) such that_ \[\mathrm{Ker}(\tilde{\lambda})\cap\overline{R(S)}=\mathrm{Ker}(\lambda^{*}) \cap\overline{R(S)}\qquad(\text{Compatibility Condition}) \tag{3.3}\] _or equivalently_ \[\mathrm{Span}\{\tilde{\lambda}\}+\mathrm{Ker}(S^{*})=\mathrm{Span}\{\lambda^{* }\}+\mathrm{Ker}(S^{*})\] _and_ \[\begin{cases}\langle\tilde{\lambda},\bar{\zeta}_{0}\rangle>0,\\ \langle\lambda^{*},\bar{\zeta}_{0}\rangle>0.\end{cases}\qquad(\text{Consistency Condition}) \tag{3.4}\] Proof.: A direct calculation gives \[\operatorname{Ker}(\tilde{\lambda})\cap\overline{R(S)}=\operatorname {Ker}(\lambda^{*})\cap\overline{R(S)}\] \[\Longleftrightarrow [\operatorname{Ker}(\tilde{\lambda})\cap\overline{R(S)}]^{\perp} =[\operatorname{Ker}(\lambda^{*})\cap\overline{R(S)}]^{\perp}\] \[\Longleftrightarrow \operatorname{cl}\{[\operatorname{Ker}(\tilde{\lambda})]^{\perp}+ \operatorname{Ker}(S^{*})\}=\operatorname{cl}\{[\operatorname{Ker}(\lambda^{* })]^{\perp}+\operatorname{Ker}(S^{*})\}\] \[\Longleftrightarrow \operatorname{Span}\{\tilde{\lambda}\}+\operatorname{Ker}(S^{*}) =\operatorname{Span}\{\lambda^{*}\}+\operatorname{Ker}(S^{*}).\] If the proper Lagrange multiplier \(\bar{\lambda}\) exists, we have \(\bar{\lambda}\in\mathcal{N}(\zeta^{*},K)\) and \(\lambda^{*}=\bar{\lambda}|_{\overline{R(S)}}\), which follows \[\operatorname{Ker}(\bar{\lambda})\cap\overline{R(S)}=\operatorname{Ker}( \lambda^{*})\cap\overline{R(S)}.\] Since \(\lambda^{*}\neq 0\) on \(\overline{R(S)}\), there exists \(\zeta_{0}\in\overline{R(S)}\) such that \[\langle\lambda^{*},\zeta_{0}\rangle\neq 0.\] We assume that \[\langle\lambda^{*},\zeta_{0}\rangle>0.\] Otherwise, we can choose \(-\zeta_{0}\). Taking \(\bar{\zeta}_{0}=\zeta_{0}\), the condition (3.4) holds for \(\bar{\lambda}\). Hence, we can take \(\tilde{\lambda}=\bar{\lambda}\). Now we prove the other direction. Let \(\tilde{\lambda}\) be an element in \(\mathcal{N}(\zeta^{*},K)\) such that (3.3) and (3.4) hold. Let \[\bar{\lambda}=t_{0}\tilde{\lambda},\] where \(t_{0}=\langle\lambda^{*},\bar{\zeta}_{0}\rangle/\langle\tilde{\lambda},\bar{ \zeta}_{0}\rangle\) and \(\bar{\zeta}_{0}\) satisfies (3.4). We will prove that \(\bar{\lambda}\) is a proper Lagrange multiplier. Note that the condition (3.4) implies \(t_{0}>0\), which further gives \(\bar{\lambda}\in\mathcal{N}(\zeta^{*},K)\). Now, by the definition of the proper Lagrange multiplier, we only need to show that \[\lambda^{*}=\bar{\lambda}|_{\overline{R(S)}}.\] By (3.4) and (3.3), we have \(\overline{R(S)}=\operatorname{Ker}(\lambda^{*})\cap\overline{R(S)}+ \operatorname{Span}\{\bar{\zeta}_{0}\}=\operatorname{Ker}(\tilde{\lambda}) \cap\overline{R(S)}+\operatorname{Span}\{\bar{\zeta}_{0}\}\). 
Therefore, for any \(\zeta\in\overline{R(S)}\), there exist \(s\in\mathbb{R}\) and \(\zeta_{0}\in\operatorname{Ker}(\tilde{\lambda})\cap\overline{R(S)}\) such that \(\zeta=\zeta_{0}+s\bar{\zeta}_{0}\). It follows \[\langle\bar{\lambda},\zeta\rangle=t_{0}\langle\tilde{\lambda},\zeta\rangle=t _{0}\langle\tilde{\lambda},\zeta_{0}+s\bar{\zeta}_{0}\rangle=t_{0}s\langle \tilde{\lambda},\bar{\zeta}_{0}\rangle=s\langle\lambda^{*},\bar{\zeta}_{0} \rangle=\langle\lambda^{*},\zeta_{0}+s\bar{\zeta}_{0}\rangle=\langle\lambda^{* },\zeta\rangle,\] which implies \(\lambda^{*}=\bar{\lambda}|_{\overline{R(S)}}\). **Theorem 3.6**.: _Suppose that Assumption 3.6 holds. There exists a unique proper Lagrange multiplier at \(u^{*}\) if and only if (1) there exists \(\tilde{\lambda}\in\mathcal{N}(\zeta^{*},K)\) which satisfies (3.3) and (3.4), and (2) for any \(\hat{\lambda}\in\mathcal{N}(\zeta^{*},K)\) which satisfies (3.3) and (3.4), it holds \(\operatorname{Ker}(\hat{\lambda})=\operatorname{Ker}(\bar{\lambda})\)._ Proof.: Since all the proper Lagrange multipliers belong to \(\mathcal{N}(\zeta^{*},K)\) and their restrictions to \(\overline{R(S)}\) (\(\neq\{0\}\)) are the same, according to the proof of Theorem 3.5, we have \(\bar{\lambda}=\langle\lambda^{*},\bar{\zeta}_{0}\rangle/\langle\tilde{\lambda},\bar{\zeta}_{0}\rangle\bar{\lambda}\) is the unique proper Lagrange multiplier. Let \(\bar{\lambda}\) be the unique proper Lagrange multiplier. According to the condition (3.4) of Theorem 3.5, \(\bar{\lambda}\in\mathcal{N}(\zeta^{*},K)\), and (3.3) and (3.4) hold for \(\bar{\lambda}\). Suppose that there exists \(\tilde{\lambda}\in\mathcal{N}(\zeta^{*},K)\) which satisfies (3.3) and (3.4), it holds \(\operatorname{Ker}(\tilde{\lambda})\neq\operatorname{Ker}(\bar{\lambda})\). By Theorem 3.5, there exists a proper Lagrange multiplier \(\bar{\lambda}_{0}\) with \(\operatorname{Ker}(\bar{\lambda}_{0})\neq\operatorname{Ker}(\bar{\lambda})\). This is a contradiction to the uniqueness of the Lagrange multiplier. **Remark 3.7**.: _If \(\mathcal{X}=\overline{R(S)}\) or equivalently \(\operatorname{Ker}(S^{*})=\{0\}\), the proper Lagrange multiplier is the essential Lagrange multiplier, which implies that the proper Lagrange multiplier (if exists) is unique._ **Remark 3.8**.: _In the finite-dimensional case, the condition \(\mathrm{Ker}(S^{*})=\{0\}\) is the LICQ condition. The connections of the LICQ condition and the uniqueness of the proper Lagrange multiplier have been investigated in [40]. It has also been proved that the proper Lagrange multiplier is unique if and only if it satisfies SMFC in [25] for an optimization problem with both equality and inequality constraints under the assumption that the proper Lagrange multiplier exists. The uniqueness results for the case of general cone constraints can be found in [34, 35]._ In the following part we will use the results in quotient spaces to simplify the presentation of the results in Theorem 3.5 and Theorem 3.6. Let \(Y=\mathrm{Ker}(\lambda^{*})\cap\overline{R(S)}\) and \([\mathcal{X}]=\mathcal{X}/Y\) be the quotient space with respect to \(Y\). Since \(Y=\mathrm{Ker}(\lambda^{*})\cap\overline{R(S)}\) is a closed subspace of \(\mathcal{X}\), \([\mathcal{X}]\) is a Banach space under the conventional norm \[\|[\zeta]\|_{q}=\inf\{\|\zeta-\zeta^{\prime}\|:\ \zeta^{\prime}\in\mathrm{ Ker}(\lambda^{*})\cap\overline{R(S)}\}. \tag{3.5}\] Let \(K\) be a set in \(\mathcal{X}\) and \([K]=\{[\zeta]\in[\mathcal{X}]:\ \zeta\in K\}\). 
**Lemma 3.9**.: _For any \(\zeta\in\mathcal{X}\), there exists a unique \(\zeta_{0}\in Y^{\perp}\) such that_ \[[\zeta]=[\zeta_{0}],\ \ \|[\zeta]\|_{q}=\|\zeta_{0}\|\ \text{ and }\ \|\zeta_{0}\|\leq\|\zeta\|. \tag{3.6}\] Proof.: This follows directly from the orthogonal decomposition \(\mathcal{X}=Y\oplus Y^{\perp}\). Let \[\mathcal{X}_{Y}=\{f\in\mathcal{X}:\ Y\subset\mathrm{Ker}(f)\}.\] It can be checked that \(\mathcal{X}_{Y}\) is a closed subspace of \(\mathcal{X}\). **Lemma 3.10**.: _The space \(\mathcal{X}_{Y}\) is isometrically isomorphic to the dual space \([\mathcal{X}]^{\prime}\) of \(([\mathcal{X}],\|\cdot\|_{q})\)._ Proof.: For any \(f\in\mathcal{X}_{Y}\), we define \(F\) by \[F([\zeta])=\langle f,\zeta\rangle\quad\forall\ [\zeta]\in[\mathcal{X}]. \tag{3.7}\] If \([\zeta_{1}]=[\zeta_{2}]\), we have \(\zeta_{1}-\zeta_{2}\in Y\subset\mathrm{Ker}(f)\), which gives \(F([\zeta_{1}])=F([\zeta_{2}])\) and \(F\) is well defined. According to Lemma 3.9, \(F\in[\mathcal{X}]^{\prime}\) and \[\|F\|_{[\mathcal{X}]^{\prime}}=\|f\|. \tag{3.8}\] On the other hand, for any \(F\in[\mathcal{X}]^{\prime}\), we can define \(f\) by \[\langle f,\zeta\rangle=F([\zeta])\quad\forall\ \zeta\in\mathcal{X}. \tag{3.9}\] It is obvious that \(Y\subset\mathrm{Ker}(f)\). Again by Lemma 3.9, we have \(f\in\mathcal{X}_{Y}\) and (3.8) holds. Define \(T:\mathcal{X}_{Y}\to[\mathcal{X}]^{\prime}\) by \[T:f\to F, \tag{3.10}\] where \(F\) is defined by (3.7). By (3.7), (3.9) and (3.8), \(T\) is an isometrical isomorphism. This completes the proof. **Lemma 3.11**.: _For any \(f\in\mathcal{X}_{Y}\), let \(F\) be defined as in (3.7). It holds \([\mathrm{Ker}(f)]=\mathrm{Ker}(F)\)._ Proof.: For any \([\zeta]\in[\mathrm{Ker}(f)]\), we can assume \(\zeta\in\mathrm{Ker}(f)\). It gives \[F([\zeta])=\langle f,\zeta\rangle=0,\] i.e., \([\zeta]\in\mathrm{Ker}(F)\), which yields \([\mathrm{Ker}(f)]\subset\mathrm{Ker}(F)\). On the other hand, for any \([\zeta]\in\mathrm{Ker}(F)\), we have \(F([\zeta])=0\), which gives \(\langle f,\zeta\rangle=F([\zeta])=0\). This means \(\zeta\in\mathrm{Ker}(f)\), which further implies \([\zeta]\in[\mathrm{Ker}(f)]\) and \(\mathrm{Ker}(F)\subset[\mathrm{Ker}(f)]\). This completes the proof. **Lemma 3.12**.: _Let \(K\) be a convex set and \(\zeta^{*}\in K\). Then \(f\in\mathcal{X}_{Y}\cap\mathcal{N}(\zeta^{*},K)\) if and only if \(T(f)\in\mathcal{N}([\zeta^{*}],[K])\), where \(T\) is defined by (3.10) and_ \[\mathcal{N}([\zeta^{*}],[K])=\{\tilde{F}\in[\mathcal{X}]^{\prime}:\ -\tilde{F}([\zeta]-[\zeta^{*}])\geq 0 \quad\forall\ [\zeta]\in[K]\}. \tag{3.11}\] Proof.: This follows from the fact \[-T(f)([\zeta]-[\zeta^{*}])=-T(f)([\zeta-\zeta^{*}])=-\langle f,\zeta-\zeta^{*} \rangle\quad\forall\ \zeta\in K.\] **Theorem 3.7**.: _Suppose that Assumption 3.6 holds. The proper Lagrange multiplier exists at \(u^{*}\) if and only if there exist \(\bar{\Lambda}\in\mathcal{N}([\zeta^{*}],[K])\) and \(\bar{\zeta}_{0}\in\overline{R(S)}\) such that_ \[\begin{cases}\bar{\Lambda}([\bar{\zeta}_{0}])>0,\\ \langle\lambda^{*},\bar{\zeta}_{0}\rangle>0.\end{cases} \tag{3.12}\] Proof.: If the proper Lagrange multiplier exists, by Theorem 3.5, there exists \(\bar{\lambda}\in\mathcal{N}(\zeta^{*},K)\) satisfying (3.3) and (3.4). This gives \(\tilde{\lambda}\in\mathcal{X}_{Y}\) and we can define \(\bar{\Lambda}\in[\mathcal{X}]^{\prime}\) by \(\tilde{\lambda}\) and (3.7). Since \(\tilde{\lambda}\in\mathcal{N}(\zeta^{*},K)\), by Lemma 3.12, we have \(\bar{\Lambda}\in\mathcal{N}([\zeta^{*}],[K])\). 
According to the consistency condition (3.4) and the definition of \(\bar{\Lambda}\), it holds \[\bar{\Lambda}([\bar{\zeta}_{0}])=\langle\tilde{\lambda},\ \bar{\zeta}_{0} \rangle>0.\] For the other direction, assume that such \(\bar{\Lambda}\in\mathcal{N}([\zeta^{*}],[K])\) exists. We can define \(\tilde{\lambda}\) by \(\bar{\Lambda}\) and (3.9). By Lemma 3.12, (3.9) and (3.12), we have \[\tilde{\lambda}\in\mathcal{X}_{Y},\ \ \tilde{\lambda}\in\mathcal{N}(\zeta^{*},K) \ \text{ and }\ \langle\tilde{\lambda},\ \bar{\zeta}_{0}\rangle>0.\] Now by Theorem 3.5, we only need to prove that the compatibility condition (3.3) holds. Since \(\tilde{\lambda}\in\mathcal{X}_{Y}\), we have \[\text{Ker}(\lambda^{*})\cap\overline{R(S)}=Y\subset\text{Ker}(\tilde{\lambda} )\cap\overline{R(S)}. \tag{3.13}\] By (3.12), \([\overline{R(S)}]\) is not a subspace of \(\text{Ker}(\bar{\Lambda})\), which implies \[\text{Ker}(\bar{\Lambda})\cap[\overline{R(S)}]=[0],\] by \(\dim([\overline{R(S)}])\leq 1\). Note that Lemma 3.11 gives \[[\text{Ker}(\tilde{\lambda})]=\text{Ker}(\bar{\Lambda}).\] Then we have \[[\text{Ker}(\tilde{\lambda})]\cap[\overline{R(S)}]=\text{Ker}(\bar{\Lambda} )\cap[\overline{R(S)}]=[0].\] Since \[[\text{Ker}(\tilde{\lambda})\cap\overline{R(S)}]\subset[\text{Ker}(\tilde{ \lambda})]\cap[\overline{R(S)}],\] it follows \[[\text{Ker}(\tilde{\lambda})\cap\overline{R(S)}]=[0].\] This gives \[\text{Ker}(\tilde{\lambda})\cap\overline{R(S)}\subset\text{Ker}(\lambda^{*}) \cap\overline{R(S)}.\] Together with (3.13), we have \[\text{Ker}(\tilde{\lambda})\cap\overline{R(S)}=\text{Ker}(\lambda^{*})\cap \overline{R(S)}.\] This completes the proof. For the uniqueness of the proper Lagrange multiplier, it follows from the proof of Theorem 3.7 directly. **Theorem 3.8**.: _Suppose that Assumption 3.6 holds. The proper Lagrange multiplier is unique if and only if there exists a unique (up to multiplication by a positive constant) \(\bar{\Lambda}\in\mathcal{N}([\zeta^{*}],[K])\) satisfying (3.12) for some \(\bar{\zeta}_{0}\in\overline{R(S)}\)._ **Remark 3.13**.: _Since the canonical map from \(\mathcal{X}\) to \([X]\) is continuous, the open mapping theorem indicates that if \(K\) has an interior point, so does \([K]\). But the other direction is not true. This is useful in exploring the existence of the proper Lagrange multiplier._ The following examples are helpful in understanding the previous theoretical results. **Example 3.14**.: _We consider the optimization problem_ \[\min\frac{1}{2}[(x_{1}-\alpha)^{2}+x_{2}^{2}]\ \ \text{s.t.}\ \ Sx\in K,\] _where \(\alpha\in\mathbb{R}\), \(x\in\mathbb{R}^{2}\), \(S=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\) and \(K\subset\mathbb{R}^{2}\) is a closed convex set._ * _Let_ \(K=K_{1}\)_, where_ \[K_{1}=\{(\zeta_{1},\zeta_{2})^{T}\in\mathbb{R}^{2}:\ \zeta_{1}^{2}+(\zeta_{2}-1)^{2}\leq 1\}.\] _In this case_ \(K\cap R(S)=(0,0)^{T}\) _and the feasible set is_ \[M=\{(0,x_{2})^{T}\in\mathbb{R}^{2}:\ x_{2}\in\mathbb{R}\}.\] _The global minimizer of this problem is_ \(x^{*}=(0,0)^{T}\)_. The gradient of the objective function at this point is_ \((-\alpha,0)^{T}\) _and the essential Lagrange multiplier is_ \(\lambda^{*}=\alpha\)_. Note that the solution of_ \[\begin{pmatrix}-\alpha\\ 0\end{pmatrix}+S^{*}\lambda=0\] _is_ \(\lambda=(\alpha,\lambda_{2})^{T}\)_, for any_ \(\lambda_{2}\in\mathbb{R}\)_. Since the proper Lagrange multiplier must satisfy the above equation, we assume that_ \(\bar{\lambda}=(\alpha,\bar{\lambda}_{2})^{T}\) _for some_ \(\bar{\lambda}_{2}\in\mathbb{R}\)_. 
Now we consider the condition_ \[-\langle\bar{\lambda},\zeta-\zeta^{*}\rangle\geq 0\quad\forall\ \zeta\in K,\] _where_ \(\zeta^{*}=Sx^{*}=(0,0)^{T}\)_. It is equivalent to_ \[-\alpha\zeta_{1}-\bar{\lambda}_{2}\zeta_{2}\geq 0\quad\forall\ \zeta=(\zeta_{1},\zeta_{2})^{T}\in K.\] _If_ \(\alpha=0\)_, we can choose_ \(\bar{\lambda}_{2}=0\)_. If_ \(\alpha\neq 0\)_, there is no_ \(\bar{\lambda}_{2}\in\mathbb{R}\) _to satisfy the inequality. This means that unless_ \(\alpha=0\)_, the proper Lagrange multiplier does not exist for this problem at the global minimizer. Note that if_ \(\alpha\neq 0\)_, we have_ \[\mathcal{N}(\zeta^{*},K):=\{(0,\lambda_{2})^{T}\in\mathbb{R}^{2}:\ \lambda_{2}\leq 0\} \subset\mathrm{Ker}(S^{*}):=\{(0,\lambda_{2})^{T}\in\mathbb{R}^{2}:\ \lambda_{2}\in\mathbb{R}\}.\] _Both the compatibility condition (_3.3_) and the consistency condition (_3.4_) are not satisfied._ * _Let_ \(K=K_{2}\)_, where_ \[K_{2}=\{(\zeta_{1},\zeta_{2})^{T}\in\mathbb{R}^{2}:\ \zeta_{1}^{2}+(\zeta_{2}-1)^{2}\leq 1 \}\setminus\{(\zeta_{1},\zeta_{2})^{T}\in\mathbb{R}^{2}:\ \zeta_{1}-\zeta_{2}\leq 0\}.\] _The feasible set_ \(M\)_, the global minimizer and the essential Lagrange multiplier are the same as those in the case_ \(K=K_{1}\)_. As before, for the proper Lagrange multiplier_ \(\bar{\lambda}=(\alpha,\bar{\lambda}_{2})^{T}\) _we have_ \[-\alpha\zeta_{1}-\bar{\lambda}_{2}\zeta_{2}\geq 0\quad\forall\ \zeta=(\zeta_{1},\zeta_{2})^{T}\in K.\] _On the other hand, the normal cone_ \(\mathcal{N}(\zeta^{*},K)\) _is given by_ \[\mathcal{N}(\zeta^{*},K)=\{(\lambda_{1},\lambda_{2})^{T}\in\mathbb{R}^{2}:\ \lambda_{2}\leq 0,\ \lambda_{1}+\lambda_{2}\geq 0\}.\] 1. _If_ \(\alpha<0\)_,_ \((\alpha,\bar{\lambda}_{2})^{T}\not\in\mathcal{N}(\zeta^{*},K)\) _for any_ \(\bar{\lambda}_{2}\in\mathbb{R}\)_. Therefore, the proper Lagrange multiplier does not exist. Note that the condition (_3.4_) can not be satisfied, while the condition (_3.3_) is always true for any_ \(0\neq\lambda\in\mathcal{N}(\zeta^{*},K)\)_._ 2. _If_ \(\alpha>0\)_,_ \(\bar{\lambda}\) _is a proper Lagrange multiplier for any_ \(\bar{\lambda}_{2}\in[-\alpha,0]\)_. In this case, both the compatibility condition (_3.3_) and the consistency condition (_3.4_) can be fulfilled._ _Figure 1 gives an illustration of the results above._ **Example 3.15**.: _We consider the optimization problem_ \[\min\frac{1}{2}[(x_{1}-\alpha)^{2}+x_{2}^{2}]\ \text{ s.t. }\ Sx\in K,\] _where \(\alpha>0\), \(x\in\mathbb{R}^{2}\), \(S=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\) and \(K\subset\mathbb{R}^{2}\) is a closed convex set. Let \(0\leq 2r<\alpha\)._ * _Let_ \(K=K_{3}\)_, where_ \[K_{3}=\{(\zeta_{1},\zeta_{2})^{T}\in\mathbb{R}^{2}:\ (\zeta_{1}-r)^{2}+\zeta_{2}^{2} \leq r^{2}\}.\] _The feasible set of this example is_ \[M=\{(x_{1},x_{2})^{T}\in\mathbb{R}^{2}:\ 0\leq x_{1}\leq 2r\}\] _and the global minimizer of this problem is_ \(x^{*}=(2r,0)^{T}\)_. The gradient of the objective function at this point is_ \((2r-\alpha,0)^{T}\)_. The essential Lagrange multiplier is_ \(\lambda^{*}=\alpha-2r>0\) _and_ \(\zeta^{*}=Sx^{*}=(2r,0)^{T}\)_. The proper Lagrange multiplier should be in the form_ \(\bar{\lambda}=(\alpha-2r,\lambda_{2})^{T}\) _for some_ \(\lambda_{2}\in\mathbb{R}\)_. 
By the condition_ \[-\langle\bar{\lambda},\zeta-\zeta^{*}\rangle\geq 0\quad\forall\ \zeta\in K,\] _or equivalently_ \[-(\alpha-2r)(\zeta_{1}-2r)-\lambda_{2}\zeta_{2}\geq 0\quad\forall\ \zeta=(\zeta_{1},\zeta_{2})^{T}\in K,\] _we have_ \[\lambda_{2}=0.\] _Hence the proper Lagrange multiplier for this example is_ \(\bar{\lambda}=(\alpha-2r,0)^{T}\) _and it is unique. Note that Robinson's condition (_4.2_) holds in this case._ * _Let_ \(K=K_{4}\)_, where_ \[K_{4}=\{(\zeta_{1},\zeta_{2})^{T}\in\mathbb{R}^{2}:\ (\zeta_{1}-r)^{2}+\zeta_{2}^{2} \leq r^{2},\ \ \zeta_{2}\geq 0\}.\] _The global minimizer and the essential Lagrange multiplier are the same as in the previous case. In this case, the proper Lagrange multiplier_ \(\bar{\lambda}=(\alpha-2r,\bar{\lambda}_{2})^{T}\) _for any_ \(\bar{\lambda}_{2}\leq 0\)_, which is not unique. Note that Robinson's condition (_4.2_) is not true in this case._ _Figure_ 2 _gives an illustration of the results above._ **Example 3.16**.: _We consider the problem (cf. [11, Section 8.4])_ \[\min_{u\in H^{1}_{0}(\Omega)}\int_{\Omega}\nabla u\cdot\nabla u\ dx\quad\text{s.t. }\ \|u\|_{L^{2}(\Omega)}^{2}=1.\] _Let_ \[G(u)=\|u\|_{L^{2}(\Omega)}^{2}\quad\text{and}\quad\mathcal{K}=\{1\}\subset \mathbb{R}.\] _For this problem, we have_ \[T_{w}(M,u)=T(M,u)=\overline{L(\mathcal{K},u)}\] _for any \(u\). Note that \(G^{\prime}(u)\) is bounded from \(H^{1}_{0}(\Omega)\) to \(\mathbb{R}\) and \(R(G^{\prime}(u))=\mathbb{R}\). According to our theory, for a global minimizer \(u^{*}\in H^{1}_{0}(\Omega)\), the essential (or proper) Lagrange multiplier \(\lambda^{*}\in\mathbb{R}\) exists and the following KKT system holds_ \[\int_{\Omega}\nabla u^{*}\cdot\nabla v\ dx-\lambda^{*}\int_{\Omega}u^{*}v\ dx\quad\forall\ v\in H^{1}_{0}(\Omega)\quad\text{and} \quad\|u^{*}\|_{L^{2}(\Omega)}=1.\] _This is actually the weak form of the following eigenvalue problem_ \[\begin{cases}\quad-\Delta u^{*}=\lambda^{*}u^{*}\quad\text{in}\ \ \Omega;\\ \quad u^{*}=0\quad\quad\text{on}\ \ \partial\Omega;\\ \|u^{*}\|_{L^{2}(\Omega)}=1.\end{cases}\] ### The convergence of multipliers of the classical ALM The classical ALM for the model problem is given in (A.3). We will give an essential characterization of the convergence of the multipliers generated by the algorithm. **Theorem 3.9**.: _Let \(\{\lambda^{k}\}_{k=1}^{+\infty}\) be the multipliers generated in the classical ALM (A.3)._ 1. _If the essential Lagrange multiplier_ \(\lambda^{*}\) _exists at_ \(u^{*}\)_, then_ (3.14) \[\lim_{k\to+\infty}\langle\lambda^{k},Sv\rangle=\langle\lambda^{*},Sv\rangle \quad\forall\ v\in\mathcal{U}.\] 2. _If the restriction of_ \(\{\lambda^{k}\}_{k=1}^{+\infty}\) _to_ \(\overline{R(S)}\) _weakly converges in_ \(\overline{R(S)}\) _to some element_ \(\lambda^{*}\in\overline{R(S)}\)_, then_ \(\lambda^{*}\) _is the essential Lagrange multiplier at_ \(u^{*}\)_._ Proof.: Note that \(\{\lambda^{k}\}_{k=1}^{+\infty}\) satisfies (1.3). The first part follows from (1.3) and Definition 1.1 directly. 
For the second part, if the restriction of \(\{\lambda^{k}\}_{k=1}^{+\infty}\) to \(\overline{R(S)}\) weakly converges in \(\overline{R(S)}\) to some element \(\lambda^{*}\in\overline{R(S)}\), i.e., \[\lim_{k\to+\infty}\langle\lambda^{k},Sv\rangle=\langle\lambda^{*},Sv\rangle \quad\forall\ v\in\mathcal{U},\] then we have \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\lambda^{*},Sv\rangle =\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\lim_{k\to+\infty}\langle \lambda^{k},Sv\rangle=0\quad\forall\ v\in\mathcal{U}\] and \[-\langle\lambda^{*},Sv-Su^{*}\rangle=-\lim_{k\to+\infty}\langle\lambda^{k},Sv -Su^{*}\rangle\geq-\lim_{k\to+\infty}\langle\lambda^{k},Sv-Su^{*}\rangle\geq 0 \quad\forall\ Sv\in K,\] by (1.3). Therefore, \(\lambda^{*}\) is the essential Lagrange multiplier by the boundedness of \(\lambda^{*}\) and we finish the proof. The following corollary follows from Theorem 3.3 and Theorem 3.9 directly. **Corollary 3.17**.: _If \(R(S)\) is closed in \(\mathcal{X}\), then the restriction of \(\{\lambda^{k}\}_{k=1}^{+\infty}\) to \(R(S)\) always weakly converges in \(R(S)\) to the essential Lagrange multiplier \(\lambda^{*}\) at \(u^{*}\)._ **Example 3.18**.: _Let us consider the optimization problem_ \[\min\frac{1}{2}x^{2}\ \ \ \text{s.t.}\ \begin{cases}x=1;\\ 2x=2.\end{cases}\] _The global minimizer of this problem is \(x^{*}=1\) and the essential Lagrange multiplier at \(x^{*}\) is_ \[\lambda^{*}=-\frac{1}{5}\begin{pmatrix}1\\ 2\end{pmatrix}.\] _All the proper Lagrange multipliers of this example define the set_ \[\Lambda=\{\bar{\lambda}=(\lambda_{1},\lambda_{2})^{T}\in\mathbb{R}^{2}:\ \lambda_{1}+2\lambda_{2}+1=0\}.\] _The iterators of the classical ALM satisfy_ \[\left\{\begin{aligned} x^{k+1}-1&=\frac{1}{1+5\beta}(x^{k}-1); \\ \lambda^{k+1}+\frac{1}{5}\begin{pmatrix}1\\ 2\end{pmatrix}&=J\left[\lambda^{k}+\frac{1}{5}\begin{pmatrix}1\\ 2\end{pmatrix}\right];\\ \lambda_{1}^{k+1}+2\lambda_{2}^{k+1}+1&=\frac{1}{1+5\beta}(\lambda^{k}+2 \lambda^{k}+1),\end{aligned}\right.\] _where \(k=1,2,\dots\),_ \[J=\frac{1}{1+5\beta}\begin{pmatrix}1+4\beta&-2\beta\\ -2\beta&1+\beta\end{pmatrix}\] _and \(\beta>0\)._ _Note that \(J\) is symmetric, positive definite and \(\det J=1\), which implies \(\rho(J)>1\). We have the following convergence results._ * \(\{x^{k}\}_{k=1}^{+\infty}\) _converges to_ \(x^{*}=1\)_._ * \(\{\lambda^{k}\}_{k=1}^{+\infty}\) _is unbounded._ * \(\{\lambda^{k}\}_{k=1}^{+\infty}\) _converges to the essential Lagrange multiplier in_ \(R(S)\)_, i.e.,_ \[\lambda_{1}^{k}+2\lambda_{2}^{k}\to-1=-\frac{1}{5}(1\times 1+2\times 2)\ \ \text{as}\ \ k\to+\infty.\] * _The distance between_ \(\lambda^{k}\)__\((k=1,2,\dots)\) _and_ \(\Lambda\) _converges to_ \(0\)_._ ## 4. Revisiting the weak form asymptotic KKT system In the previous arguments we notice that in the infinite-dimensional case, the essential Lagrange multiplier may not exist. Here we derive a sufficient condition to guarantee the existence of the essential Lagrange multiplier by the W-AKKT system (1.3). We will also use the W-AKKT system (1.3) to give a proof of the existence of the proper Lagrange multiplier under Robinson's condition (cf. [18, p. 5], [28, 31, 42] or (4.2)). A sufficient condition, which includes Robinson's condition as a special case, to guarantee the existence of the proper Lagrange multiplier is also given. It is worth noting that all these results also give the convergence results of the multipliers generated by the classical ALM for the model problem. 
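As a concrete finite-dimensional check of this last statement, the recursion of the classical ALM (A.3) can be iterated directly for Example 3.18. The following sketch is an added illustration only (Python/NumPy, with an arbitrarily chosen penalty parameter and starting multiplier); it plays no role in the analysis.

```python
# Classical ALM (A.3) for Example 3.18:  min (1/2) x^2  s.t.  x = 1 and 2x = 2.
# Here S x = (x, 2x)^T and K = {(1, 2)^T}, so the zeta-update is simply zeta = (1, 2)^T.
import numpy as np

beta = 1.0
S = np.array([1.0, 2.0])
zeta = np.array([1.0, 2.0])
lam = np.array([3.0, -1.0])                      # arbitrary starting multiplier
for _ in range(60):
    # x-update: argmin_x  x^2/2 + lam.(S x - zeta) + (beta/2) ||S x - zeta||^2
    x = (beta * S @ zeta - S @ lam) / (1.0 + beta * S @ S)
    lam = lam + beta * (x * S - zeta)            # multiplier update of (A.3)

print(x)                                         # -> 1, the global minimizer x*
print(S @ lam)                                   # lam_1 + 2*lam_2 -> -1, convergence in R(S)
print(abs(S @ lam + 1.0) / np.linalg.norm(S))    # distance from lam^k to the set Lambda -> 0
```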
According to the W-AKKT system (1.3), we have \[\langle D_{u}\theta(u^{*}),v\rangle\mathcal{U}+\lim_{k\to+\infty}\langle \lambda^{k},Sv\rangle=0\ \ \ \forall\ v\in\mathcal{U}.\] A linear functional \(\tilde{\lambda}\) on \(\mathrm{R}(S)\subset\mathcal{X}\) can be defined as \[\tilde{\lambda}(\zeta)=\lim_{k\to+\infty}\langle\lambda^{k},\zeta\rangle\ \ \ \forall\ \zeta\in\mathrm{R}(S).\] The second condition leads to \[\lim_{k\to+\infty}\langle\lambda^{k},\zeta\rangle\leq\lim_{k\to+\infty} \langle\lambda^{k},\zeta^{*}\rangle\ \ \Longleftrightarrow\ \ \tilde{\lambda}(\zeta)\leq\tilde{\lambda}(\zeta^{*})\ \ \ \forall\ \zeta\in K\cap R(S).\] If \(\tilde{\lambda}\) can be extended to a bounded linear functional in \(\overline{R(S)}\), then we have the essential Lagrange multiplier. Therefore, the existence of the essential Lagrange multiplier is equivalent to the boundedness of the linear functional \(\tilde{\lambda}\) in \(\overline{R(S)}^{\prime}\). If the topology of \(\mathcal{X}\) is too weak to guarantee the boundedness of \(\tilde{\lambda}\), we can set the problem on a small subspace of \(\mathcal{X}\), where a stronger topology can be equipped on the subspace. More precisely, we can set the problem on a subspace \(\tilde{\mathcal{X}}\) where \(\mathrm{R}(S)\subseteq\tilde{\mathcal{X}}\subseteq\mathcal{X}\). In this case, we can define a strong topology on \(\tilde{\mathcal{X}}\), which is stronger than that of \(\mathcal{X}\), and the stronger topology will make it easier to guarantee the boundedness of \(\tilde{\lambda}\). Motivated by these arguments, we give the following results. **Theorem 4.1**.: _Let \(\|\cdot\|_{*}\) be a norm on \(R(S)\) such that \((R(S),\|\cdot\|_{*})\) is a Banach space which is continuously embedded into \((\mathcal{X},\|\cdot\|)\). There exists a unique essential Lagrange multiplier \(\tilde{\lambda}^{*}\in(R(S),\|\cdot\|_{*})^{\prime}\) which is the dual space of \((R(S),\|\cdot\|_{*})\) such that the following KKT system holds, i.e.,_ \[\left\{\begin{aligned} &\langle D_{u}\theta(u^{*}),v\rangle_{ \mathcal{U}}+\langle\tilde{\lambda}^{*},Sv\rangle_{*}=0\quad\forall\ v\in \mathcal{U};\\ &-\langle\tilde{\lambda}^{*},\zeta-Su^{*}\rangle_{*}\geq 0\quad \forall\ \zeta\in\overline{K\cap R(S)}^{\|\cdot\|_{*}},\end{aligned}\right. \tag{4.1}\] _where \(\langle\cdot,\cdot\rangle_{*}\) is the duality pair of \((R(S),\|\cdot\|_{*})^{\prime}\) and \((R(S),\|\cdot\|_{*})\)._ Proof.: Since \((R(S),\|\cdot\|_{*})\) is continuously embedded into \((\mathcal{X},\|\cdot\|)\) and \(\{\lambda^{k}\}\subseteq\mathcal{X}\), \(\{\lambda^{k}\}\subseteq(R(S),\|\cdot\|_{*})^{\prime}\). By the first condition in the W-AKKT system (1.3), we know \[\sup\{|\langle\lambda^{k},\zeta\rangle_{*}|:\ k=1,\ldots,+\infty\}<+\infty \quad\forall\ \zeta\in R(S).\] According to the uniform boundedness principle (cf. [36]), we know that \(\{\lambda^{k}\}_{k=1}^{+\infty}\) is bounded in \((R(S),\|\cdot\|_{*})^{\prime}\). Note that the unit ball in \((R(S),\|\cdot\|_{*})^{\prime}\) is weak-* compact. Again, by the first condition in the W-AKKT system (1.3), we know that there exists \(\tilde{\lambda}^{*}\in(R(S),\|\cdot\|_{*})^{\prime}\) such that (4.1) holds. The uniqueness follows from (4.1) directly. The W-AKKT system (1.3) can also be used to derive the existence of the proper Lagrange multiplier under Robinson's condition (cf. [18, p. 
5] and [28, 31, 42]), which is \[\mathcal{X}=S(\mathcal{U})-\mathcal{K}_{K,\zeta^{*}}, \tag{4.2}\] where \[\mathcal{K}_{K,\zeta^{*}}=\{t(\zeta-\zeta^{*}):\ \forall\ \zeta\in K\ \ \text{and}\ \ \forall\ t\geq 0\}.\] The following results cover the results under Robinson's condition and the proof below is based on the generalized open mapping theorem (cf. [18, Theorem 1.4], [28, Lemma 2.3] or [42, Theorem 2.1]). **Theorem 4.2**.: _Let \((\tilde{\mathcal{X}},\|\cdot\|_{*})\) be a Banach space which is continuously embedded into \((\mathcal{X},\|\cdot\|)\). Assume that the following condition holds for \((\tilde{\mathcal{X}},\|\cdot\|_{*})\) at \((u^{*},\zeta^{*})=(u^{*},Su^{*})\):_ \[\tilde{\mathcal{X}}=S(\mathcal{U})-\mathcal{K}_{K,\zeta^{*}}. \tag{4.3}\] _Then there exists a proper Lagrange multiplier \(\tilde{\lambda}\in(\tilde{\mathcal{X}},\|\cdot\|_{*})^{\prime}\) such that_ \[\left\{\begin{aligned} &\langle D_{u}\theta(u^{*}),v\rangle_{ \mathcal{U}}+\langle\tilde{\lambda},Sv\rangle_{*}=0\quad\forall\ v\in\mathcal{ U};\\ &-\langle\tilde{\lambda},\zeta-Su^{*}\rangle_{*}\geq 0\quad \forall\ \zeta\in K,\end{aligned}\right. \tag{4.4}\] _where \(\langle\cdot,\cdot\rangle_{*}\) is the duality pair of \((\tilde{X},\|\cdot\|_{*})^{\prime}\) and \((\tilde{X},\|\cdot\|_{*})\)._ Proof.: Since \((\tilde{\mathcal{X}},\|\cdot\|_{*})\) is continuously embedded into \((\mathcal{X},\|\cdot\|)\), we have \(\{\lambda^{k}\}\subseteq(\tilde{\mathcal{X}},\|\cdot\|_{*})^{\prime}\). By (4.3) and the generalized open mapping theorem (cf. [18, Theorem 1.4], [28, Lemma 2.3] or [42, Theorem 2.1]), there exists \(r>0\) such that \[\overline{B_{r,\tilde{\mathcal{X}}}}\subseteq S(B_{1,\mathcal{U}})-\mathcal{K} _{K,\zeta^{*}}\cap B_{1,\tilde{\mathcal{X}}},\] where \(B_{\rho,V}\) is the ball in \(V\) with center at the original and radius \(\rho>0\), \(V=\tilde{\mathcal{X}}\) or \(\mathcal{U}\), \(\rho=r\) or \(1\) and \(\overline{B_{\rho,V}}\) is its closure in \(V\). Let \(\{x^{k}\}\subseteq\tilde{\mathcal{X}}\) be a sequence of unit vectors such that \[\langle\lambda^{k},x^{k}\rangle_{*}\geq\frac{1}{2}\|\lambda^{k}\|_{(\tilde{ \mathcal{X}},\|\cdot\|_{*})^{\prime}}.\] Then \[-rx^{k}=Sv^{k}-t_{k}(\zeta^{k}-\zeta^{*}),\] where \(\{v^{k}\}\subseteq B_{1,\mathcal{U}}\), \(\{t_{k}(\zeta^{k}-\zeta^{*})\}\subseteq\mathcal{K}_{K,\zeta^{*}}\cap B_{1, \tilde{\mathcal{X}}}\) and \(t_{k}\geq 0\), \(\forall\ k=1,\ldots,\infty\). Note that \[\langle D_{u}\theta(u^{*}),v^{k}\rangle_{\mathcal{U}}+\lim_{k\to+\infty} \langle\lambda^{k},Sv^{k}\rangle_{*}=0\quad\forall\ k=1,\ldots,\infty\] and \[-\limsup_{k\to+\infty}\langle\lambda^{k},t_{k}(\zeta^{k}-\zeta^{*})\rangle_{* }\geq 0\quad\forall\ k=1,\ldots,\infty.\] Hence, it holds at least for a subsequence that for any \(\epsilon>0\) there exists \(N_{\epsilon}>0\) such that for any \(k>N_{\epsilon}\), \[-\langle\lambda^{k},Sv^{k}\rangle_{*}+\langle\lambda^{k},t_{k}(\zeta^{k}- \zeta^{*})\rangle_{*}\leq\epsilon+|\langle D_{u}\theta(u^{*}),v^{k}\rangle_{ \mathcal{U}}|\leq\epsilon+\|D_{u}\theta(u^{*})\|_{\mathcal{U}}<+\infty.\] Therefore, for a fixed \(\epsilon>0\) and any \(k>N_{\epsilon}\), we have \[\frac{r}{2}\|\lambda^{k}\|_{(\tilde{\mathcal{X}},\|\cdot\|_{*})^{\prime}} \leq\langle\lambda^{k},rx^{k}\rangle_{*}=-\langle\lambda^{k},Sv^{k}-t_{k}( \zeta^{k}-\zeta^{*})\rangle_{*}<+\infty.\] It follows that \(\{\lambda^{k}\}_{k=1}^{+\infty}\) is bounded in \((\tilde{\mathcal{X}},\|\cdot\|_{*})^{\prime}\). 
There exists a subsequence of \(\{\lambda^{k}\}_{k=1}^{+\infty}\) which weak-* converges to some element \(\tilde{\lambda}\) in \((\tilde{\mathcal{X}},\|\cdot\|_{*})^{\prime}\). According to the W-AKKT (1.3), (4.4) is valid and we complete the proof. Now we give a proof of the existence of the proper Lagrange multiplier under the condition (4.2) based on the W-AKKT system (1.3) and the uniform boundedness principle that has an elementary proof in [36]. **Lemma 4.1**.: _The W-AKKT system (1.3) is equivalent to_ \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\limsup_{k\to+\infty}\langle \lambda^{k},Sv+t(\zeta-Su^{*})\rangle\leq 0\quad\forall\ v\in\mathcal{U},\ \forall\ \zeta\in K,\ \ t\geq 0. \tag{4.5}\] Proof.: It is obvious that the W-AKKT system (1.3) implies (4.5). Now we prove that if (4.5) holds, so does the W-AKKT system (1.3). Taking \(t=0\), we have \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\limsup_{k\to+\infty}\langle \lambda^{k},Sv\rangle\leq 0\quad\forall\ v\in\mathcal{U}.\] If we change \(v\) to \(-v\), we have \[\langle D_{u}\theta(u^{*}),-v\rangle_{\mathcal{U}}-\liminf_{k\to+\infty} \langle\lambda^{k},Sv\rangle=\langle D_{u}\theta(u^{*}),-v\rangle_{\mathcal{U }}+\limsup_{k\to+\infty}\langle\lambda^{k},S(-v)\rangle\leq 0.\] These two inequalities give \[\limsup_{k\to+\infty}\langle\lambda^{k},Sv\rangle-\liminf_{k\to+\infty} \langle\lambda^{k},Sv\rangle\leq 0.\] On the other hand, by the definitions of \(\limsup_{k\to+\infty}\) and \(\liminf_{k\to+\infty}\), we have \[\limsup_{k\to+\infty}\langle\lambda^{k},Sv\rangle\geq\liminf_{k\to+\infty} \langle\lambda^{k},Sv\rangle.\] Therefore, \[\limsup_{k\to+\infty}\langle\lambda^{k},Sv\rangle=\liminf_{k\to+\infty} \langle\lambda^{k},Sv\rangle\] and \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\lim_{k\to+\infty}\langle \lambda^{k},Sv\rangle=0\quad\forall\ v\in\mathcal{U}.\] By taking \(v=0\) and \(t=1\), we have \[-\limsup_{k\to+\infty}\langle\lambda^{k},\zeta-Su^{*}\rangle\geq 0\quad\forall\ \zeta\in K.\] This completes the proof. Let \(\zeta^{*}=Su^{*}\) and \[\mathcal{C}(S,\zeta^{*},K)=\{Sv+t(\zeta-\zeta^{*})\in\mathcal{X}:\ \forall\ v\in \mathcal{U},\ \forall\ \zeta\in K\ \ \text{and}\ \ t\geq 0\}.\] Note that \(\mathcal{C}(S,\zeta^{*},K)\) is a convex cone in \(\mathcal{X}\). **Theorem 4.3**.: _Assume that there exists a norm \(\|\cdot\|_{*}\) such that \((\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})\) is a Banach space and it is continuously embedded into \((\mathcal{X},\|\cdot\|)\). Then there exists \(\tilde{\lambda}\in(\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})^{\prime}\) such that_ \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\tilde{\lambda},Sv+t (\zeta-\zeta^{*})\rangle_{*}\leq 0\quad\forall\ v\in\mathcal{U},\ \forall\ \zeta\in K\ \ \text{and}\ \ t\geq 0, \tag{4.6}\] _or equivalently,_ \[\begin{cases}\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\tilde{ \lambda},Sv\rangle_{*}=0\quad\forall\ v\in\mathcal{U};\\ -\langle\tilde{\lambda},\zeta-Su^{*}\rangle_{*}\geq 0\quad\forall\ \zeta\in K, \end{cases} \tag{4.7}\] _where \(\langle\cdot,\cdot\rangle_{*}\) is the duality pair of \((\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})^{\prime}\) and \((\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})\)._ Proof.: Let \(\{\lambda^{k}\}_{k=1}^{+\infty}\) be the same as that in the W-AKKT system (1.3). 
Since \((\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})\) is a Banach space and it is continuously embedded into \((\mathcal{X},\|\cdot\|)\), \(\{\lambda^{k}\}\subset(\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})^{\prime}\). Since \(\mathcal{C}(S,\zeta^{*},K)\) is a linear space, for any \(v\in\mathcal{U}\), \(\zeta\in K\) and \(t\geq 0\), we have \[-[Sv+t(\zeta-\zeta^{*})]\in\mathcal{C}(S,\zeta^{*},K),\] i.e., there exist \(w\in\mathcal{U}\), \(\bar{\zeta}\in K\) and \(s\geq 0\) such that \[-[Sv+t(\zeta-\zeta^{*})]=Sw+s(\zeta-\zeta^{*}).\] Therefore, by (4.5), we have \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\limsup_{k\to+\infty} \langle\lambda^{k},Sv+t(\zeta-\zeta^{*})\rangle\leq 0\] and \[\langle D_{u}\theta(u^{*}),w\rangle_{\mathcal{U}}-\limsup_{k\to+\infty} \langle\lambda^{k},Sv+t(\zeta-\zeta^{*})\rangle\] \[\leq\langle D_{u}\theta(u^{*}),w\rangle_{\mathcal{U}}-\liminf_{k\to+\infty} \langle\lambda^{k},Sv+t(\zeta-\zeta^{*})\rangle\] \[=\langle D_{u}\theta(u^{*}),w\rangle_{\mathcal{U}}+\limsup_{k\to+\infty} \langle\lambda^{k},-[Sv+t(\zeta-\zeta^{*})]\rangle\] \[=\langle D_{u}\theta(u^{*}),w\rangle_{\mathcal{U}}+\limsup_{k\to+\infty} \langle\lambda^{k},Sw+s(\bar{\zeta}-\zeta^{*})\rangle\] \[\leq 0,\] which follows \[\langle D_{u}\theta(u^{*}),w\rangle_{\mathcal{U}}\leq\limsup_{k\to+\infty} \langle\lambda^{k},Sv+t(\zeta-\zeta^{*})\rangle\leq-\langle D_{u}\theta(u^{*}), v\rangle_{\mathcal{U}}.\] Furthermore, we have \[|\limsup_{k\to+\infty}\langle\lambda^{k},Sv+t(\zeta-\zeta^{*})\rangle|\leq\|D_ {u}\theta(u^{*})\|_{\mathcal{U}}(\|v\|_{\mathcal{U}}+\|w\|_{\mathcal{U}})<+ \infty\quad\forall\ v\in\mathcal{U},\ \forall\zeta\in K.\] It follows that there exists a subsequence of \(\{\lambda^{k}\}_{k=1}^{+\infty}\) (which will still be denoted by \(\{\lambda^{k}\}_{k=1}^{+\infty}\)), such that \[|\langle\lambda^{k},\eta\rangle_{*}|=|\langle\lambda^{k},\eta\rangle|<+\infty \quad\forall\ \eta\in\mathcal{C}(S,\zeta^{*},K),\ \forall\ k=1,2,\ldots.\] According to the uniform boundedness principle, \(\{\lambda^{k}\}_{k=1}^{+\infty}\) is bounded in \((\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})^{\prime}\). Then there exists a subsequence of \(\{\lambda^{k}\}_{k=1}^{+\infty}\) (which will still be denoted by \(\{\lambda^{k}\}_{k=1}^{+\infty}\)) converges to \(\tilde{\lambda}\in(\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})^{\prime}\) in the weak-* topology of \((\mathcal{C}(S,\zeta^{*},K),\|\cdot\|_{*})^{\prime}\). Therefore, \[\lim_{k\to+\infty}\langle\lambda^{k},\eta\rangle=\langle\tilde{\lambda},\eta \rangle_{*}\quad\forall\ \eta\in\mathcal{C}(S,\zeta^{*},K).\] By (4.5), we have (4.6) holds. By taking \(t=0\) and \(v=0\), we can get the equivalence of (4.6) and (4.7). This completes the proof. 
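To make the role of the cone \(\mathcal{C}(S,\zeta^{*},K)\) more concrete, it can be computed explicitly in the finite-dimensional examples of Section 3 (an added illustration, not part of the original argument). In Example 3.15 with \(K=K_{3}\) and \(r>0\), one has \(\zeta^{*}=(2r,0)^{T}\) and \[\mathcal{K}_{K_{3},\zeta^{*}}=\{(a,b)^{T}\in\mathbb{R}^{2}:\ a<0\}\cup\{0\},\qquad\mathcal{C}(S,\zeta^{*},K_{3})=S(\mathbb{R}^{2})+\mathcal{K}_{K_{3},\zeta^{*}}=\mathbb{R}^{2},\] so Robinson's condition (4.2) is met, in agreement with the unique proper Lagrange multiplier found there. In Example 3.14 with \(K=K_{1}\) and \(\zeta^{*}=(0,0)^{T}\), instead, \[\mathcal{K}_{K_{1},\zeta^{*}}=\{(a,b)^{T}\in\mathbb{R}^{2}:\ b>0\}\cup\{0\},\qquad\mathcal{C}(S,\zeta^{*},K_{1})=\{(c,d)^{T}\in\mathbb{R}^{2}:\ d\geq 0\}\subsetneq\mathbb{R}^{2},\] which is not a linear space, so neither (4.2) nor the assumption of Theorem 4.3 is available; this is consistent with the failure of the proper Lagrange multiplier for \(\alpha\neq 0\) observed in Example 3.14.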
**Remark 4.2**.: _Note that \(\mathcal{X}=\mathcal{C}(S,\zeta^{*},K)\), which means Robinson's condition (4.2) holds, is a special case of the results in Theorem 4.3._ **Remark 4.3**.: _If there exists \(\lambda_{0}\) such that_ \[\langle\lambda_{0},Sv+t(\zeta-\zeta^{*})\rangle\leq\limsup_{k\to+\infty} \langle\lambda^{k},Sv+t(\zeta-\zeta^{*})\rangle\quad\forall\ v\in\mathcal{U},\ \forall\ \zeta\in K\ \ \text{and}\ \ t\geq 0,\] _then we have_ \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\langle\lambda_{0},Sv+t( \zeta-Su^{*})\rangle\leq 0,\quad\forall\ v\in\mathcal{U},\ \forall\ \zeta\in K\ \ \text{and}\ \ t\geq 0.\] _This implies \(\lambda_{0}\) is a proper Lagrange multiplier._ _Therefore, every weak accumulation point (if exists) of \(\{\lambda^{k}\}_{k=1}^{+\infty}\) is a proper Lagrange multiplier, and furthermore, the weak convergence of the multipliers generated by the classical ALM in \(\mathcal{X}\) implies the existence of a proper Lagrange multiplier. However, the existence of proper Lagrange Lagrange multipliers can not guarantee the convergence of the multipliers generated by the classical ALM in \(\mathcal{X}\), see e.g., Example 3.18 in this paper._ _This will be also useful for developing new constrained qualification conditions to guarantee the existence of the proper Lagrange multiplier._ ## 5. Optimal control problems with pointwise constraints In this section we apply our theory to optimal control problems with pointwise state or control constraints. We will discuss the existence of Lagrange multipliers for optimal control problems with pointwise state or control constraints. ### Optimal control problems with pointwise state constraints We consider the following optimal control problem with pointwise state constraints \[\min_{u\in L^{2}(\Omega),y\in Y_{ad}}\ J(y,u)=\frac{1}{2}\|y-y_{d}\|_{L^{2}( \Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L^{2}(\Omega)}^{2}\] subject to \[\begin{cases}-\Delta y=u&\text{ in }\Omega,\\ \quad\quad y=0&\text{ on }\partial\Omega,\end{cases}\] where \(u\in L^{2}(\Omega)\) is the control variable, \(y_{d}\in L^{2}(\Omega)\) is the desired state or observation, \(\alpha>0\) is the regularization parameter and \(Y_{ad}\) is the set of the state constraints. For simplicity, we assume that \(\Omega\subset\mathbb{R}^{2}\) is a convex domain in the following arguments. The extension to general cases is straightforward. We have two different state constraint sets \(Y_{ad,1}\) and \(Y_{ad,2}\), where \[Y_{ad,1}=\{y\in C(\bar{\Omega}):\ a(x)\leq y(x)\leq b(x)\ \ \text{a.e. in }\Omega\},\] \[Y_{ad,2}=\{y\in L^{2}(\Omega):\ a(x)\leq y(x)\leq b(x)\ \ \text{a.e. in }\Omega\}\] and \(a(x),b(x)\in L^{\infty}(\Omega)\). We refer the case \(Y_{ad}=Y_{ad,1}\) as the continuous setting and the case \(Y_{ad}=Y_{ad,2}\) as the \(L^{2}\) setting. In both cases we can prove the existence and uniqueness of the solution to the optimal control problem. Let \((\bar{u},\bar{y})\in L^{2}(\Omega)\times Y_{ad,2}\) be the solution of \[\min_{u\in L^{2}(\Omega),y\in Y_{ad,2}}\ J(y,u).\] By the regularity results of elliptic partial differential equations (cf. 
[14]), we have \[\bar{y}\in C(\bar{\Omega}),\] which implies \[\bar{y}\in Y_{ad,1}.\] Since \(Y_{ad,1}\subset Y_{ad,2}\), we have \[J(\bar{y},\bar{u})=\min_{u\in L^{2}(\Omega),y\in Y_{ad,2}}\ J(y,u)\leq\min_{u \in L^{2}(\Omega),y\in Y_{ad,1}}\ J(y,u)\leq J(\bar{y},\bar{u}).\] It gives \[\min_{u\in L^{2}(\Omega),y\in Y_{ad,2}}\ J(y,u)=\min_{u\in L^{2}(\Omega),y\in Y _{ad,1}}\ J(y,u),\] which yields that \((\bar{u},\bar{y})\) is also the unique solution of the problem in the continuous setting. In general, one can not expect the existence of a proper Lagrange multiplier in \(L^{2}(\Omega)\), see e.g., Example 7.1 in [7]. In the continuous setting, we can obtain a multiplier in the measure space if Slater's condition is satisfied (cf. [8]), which implies Robinson's condition (cf. [17, Theorem 1.55]) in this case. In the \(L^{2}\) setting, we can not get the existence results of the multipliers from the classical optimization theory. Note that there are not interior points in \(Y_{ad,2}\) and Slater's condition can not be imposed. ### A general problem in the form of the model problem Let us consider the following general optimal control problem with state and/or control constraints \[\min_{(u,y)\in\mathcal{U}_{ad}\times\mathcal{Y}_{ad}}\ J(y,u)=\frac{1}{2}\|y-y _{d}\|_{\mathcal{H}}^{2}+\frac{\alpha}{2}\|u\|_{\mathcal{U}}^{2}\] subject to \[y=\mathcal{L}u,\] where \(\mathcal{H}\), \(\mathcal{U}\) are two real Hilbert spaces, \(\mathcal{L}\) is a bounded operator from \(\mathcal{U}\) to \(\mathcal{W}\) (\(\subset\mathcal{H}\)) and \(\mathcal{U}_{ad}\), \(\mathcal{Y}_{ad}\) are two closed and convex sets in \(\mathcal{U}\) and \(\mathcal{H}\), respectively. Let \[\theta(u)=J(\mathcal{L}u,u)\] and set \[\mathcal{L}u=z,\quad u=w,\quad Su=\begin{pmatrix}\mathcal{L}u\\ u\end{pmatrix}\quad\text{and}\quad\zeta=\begin{pmatrix}z\\ w\end{pmatrix}.\] We also denote \[K=\mathcal{Y}_{ad}\times\mathcal{U}_{ad}.\] With the help of these notations, we can reformulate the general optimal control problem in the desired form \[\min_{u\in\mathcal{U}}\theta(u)\quad\text{s.t.}\ \ Su\in K,\] where \(K\subset\mathcal{X}=\mathcal{W}\times\mathcal{U}\). Note that in this case \(\theta(u)\) is strongly convex on \(\mathcal{U}\) and \(c_{0}\) can be chosen as \(\alpha\). If we take \(\mathcal{W}=\mathcal{H}=\mathcal{U}=L^{2}(\Omega)\), \(\mathcal{L}=\Delta^{-1}:L^{2}(\Omega)\mapsto H_{0}^{1}(\Omega)\hookrightarrow L ^{2}(\Omega)\), \(\mathcal{Y}_{ad}=Y_{ad,2}\) and \(\mathcal{U}_{ad}=\mathcal{U}\), this is the problem with pointwise state constraints in the \(L^{2}\) setting. In this case, \(S^{*}=(\Delta^{-1},I_{d})\) where \(I_{d}\) is the identity operator on \(L^{2}(\Omega)\). Since \(\mathrm{R}(\Delta^{-1})\) is dense in \(L^{2}(\Omega)\), by Theorem 3.2, in general we can not obtain an essential Lagrange multiplier (which is also a proper Lagrange multiplier) in \(L^{2}(\Omega)\) for the state constraints in this case. This theoretically clarifies the existence results of Lagrange multipliers in \(L^{2}(\Omega)\) for optimal control problems with pointwise state constraints. 
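The reformulation just described can also be written down for a discretized one-dimensional instance. The sketch below is an added illustration (Python/NumPy) and not part of the original text: the Poisson problem is discretized by finite differences, \(Su=(\mathcal{L}u,u)^{T}\) is assembled as a matrix, \(K\) collects the pointwise state bounds, and a few steps of the classical ALM (A.3) are run, with the exact joint \((u,\zeta)\)-minimization of (A.3) approximated by alternating sweeps and plain Euclidean inner products standing in for those of \(L^{2}(\Omega)\); all concrete numbers are arbitrary illustrative choices.

```python
# Finite-difference sketch of  min theta(u)  s.t.  S u in K  (state constraints, L^2 setting),
# followed by a few (approximate) iterations of the classical ALM (A.3).
import numpy as np

n, alpha, beta = 50, 1e-2, 10.0
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
Lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # discrete -Laplacian (Dirichlet)
A = np.linalg.inv(Lap)                      # discrete solution operator u -> y of -y'' = u
y_d = np.sin(np.pi * x)                     # desired state (illustrative)
a_lo, b_up = -0.1, 0.04                     # pointwise state bounds a <= y <= b (illustrative)

S = np.vstack([A, np.eye(n)])               # S u = (solution operator applied to u, u)^T, cf. Section 5.2

def proj_K(z):                              # projection onto K = Y_ad x U (controls unconstrained)
    return np.concatenate([np.clip(z[:n], a_lo, b_up), z[n:]])

u, zeta, lam = np.zeros(n), np.zeros(2 * n), np.zeros(2 * n)
M = A.T @ A + alpha * np.eye(n) + beta * (S.T @ S)    # normal matrix of the u-subproblem
for _ in range(100):
    for _ in range(5):                      # alternating sweeps approximating the joint minimizer
        u = np.linalg.solve(M, A.T @ y_d - S.T @ lam + beta * (S.T @ zeta))
        zeta = proj_K(S @ u + lam / beta)
    lam = lam + beta * (S @ u - zeta)       # multiplier update of (A.3)

print("||Su - zeta||:", np.linalg.norm(S @ u - zeta))
print("state-constraint multiplier range:", lam[:n].min(), lam[:n].max())
```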
Our theory also indicates that in the case with only pointwise control constraints, i.e., \(\mathcal{W}=\mathcal{H}=L^{2}(\Omega)\), \(\mathcal{L}=\Delta^{-1}\), \(\mathcal{Y}_{ad}=\mathcal{W}\) and \(\mathcal{U}_{ad}=\{u\in L^{2}(\Omega):\ u_{a}(x)\leq u(x)\leq u_{b}(x)\ \ \text{a.e.}\ x\in\Omega\}\), a proper Lagrange multiplier in \(L^{2}(\Omega)\) always exists for the control constraints, since the identity operator related to the control variable is closed. The same results can be obtained by a direct construction (cf. [17, 39]) or the results under Robinson's condition by the classical optimization theory (cf. [39]). Our theory can guarantee this naturally. **Remark 5.1**.: _Let_ \[W=\{y\in H^{1}_{0}(\Omega):\ \Delta y\in L^{2}(\Omega)\}.\] \(W\) _is a Hilbert space under the following norm (cf. [14, p. 59])_ \[\|\varphi\|_{W}^{2}=\|\varphi\|_{H^{1}(\Omega)}^{2}+\|\Delta\varphi\|_{L^{2}( \Omega)}^{2}.\] _Note that \(W\hookrightarrow C(\bar{\Omega})\) and_ \[(\Delta)^{-1}\ :\ L^{2}(\Omega)\to W\] _is onto, i.e., \(R((\Delta)^{-1})=W\)._ _If we take_ \[Y_{ad,3}=\{y\in W:\ a(x)\leq y(x)\leq b(x)\ \text{a.e. in}\ \Omega\},\] _then we have the same solution \((\bar{u},\bar{y})\) of the optimal control problem with pointwise state constraints as in the continuous and \(L^{2}\) settings. In this case, by Theorem 3.3 or Theorem 4.1, we have the existence of the essential Lagrange multiplier \(\lambda^{*}\) in \(W^{\prime}\)._ ## 6. Concluding remarks In this paper, we systematically developed a new decomposition framework to investigate Lagrange multipliers of the KKT system of constrained optimization problems and variational inequalities in Hilbert spaces. Our new framework is totally different from existing frameworks based on separation theorems. We derived the weak form asymptotic KKT system and introduced the essential Lagrange multiplier. The basic theory of the essential Lagrange multiplier has been established in this paper. The existence theory of the essential Lagrange multiplier shows essential differences in finite and infinite-dimensional cases. Based on it, we also gave necessary and sufficient conditions for the existence and uniqueness of the proper Lagrange multiplier. The results theoretically confirm the necessity of using the asymptotic or approximate KKT system in the infinite-dimensional case as well. We proved the convergence of the classical augmented Lagrangian method without using the information of Lagrange multipliers of the problem, and essentially characterized the convergence properties of the multipliers generated by the classical augmented Lagrangian method. Some sufficient conditions to guarantee the existence of the essential Lagrange multiplier and the proper Lagrange multiplier were also derived. ## Acknowledgement ### A. The proof of Theorem 1.2 We first prove that if a feasible point \(u^{*}\) satisfies the W-AKKT system (1.3), then \(u^{*}\) is the global minimizer of the model problem (1.1). According to the convex optimization theory (cf. 
[3, Proposition 26.5], [10, Proposition 2.1, Chapter II] or [19, Corollary 4.20]), it suffices to show that \[\langle D_{u}\theta(u^{*}),v-u^{*}\rangle_{\mathcal{U}}\geq 0\ \ \ \ \forall\ Sv\in K.\] Since \(u^{*}\) satisfies the W-AKKT system (1.3), there exists \(\{\lambda^{k}\}\subset\mathcal{X}\) such that \[\langle D_{u}\theta(u^{*}),v-u^{*}\rangle_{\mathcal{U}}+\lim_{k\to+\infty} \langle S^{*}\lambda^{k},v-u^{*}\rangle=0\ \ \ \forall\ v\in\mathcal{U}\] and \[-\limsup_{k\to+\infty}\langle\lambda^{k},\zeta-Su^{*}\rangle\geq 0\ \ \ \forall\ \zeta\in K.\] Therefore, \[-\lim_{k\to+\infty}\langle\lambda^{k},Sv-Su^{*}\rangle\geq 0\quad\forall\ Sv\in K,\] i.e., \[-\lim_{k\to+\infty}\langle S^{*}\lambda^{k},v-u^{*}\rangle\mu\geq 0\quad\forall\ Sv \in K.\] It follows (A.1) \[\begin{split}\langle D_{u}\theta(u^{*}),v-u^{*}\rangle_{\mathcal{U }}&=-\lim_{k\to+\infty}\langle S^{*}\lambda^{k},v-u^{*}\rangle_{ \mathcal{U}}\\ &\geq-\limsup_{k\to+\infty}\langle S^{*}\lambda^{k},v-u^{*} \rangle_{\mathcal{U}}\geq 0\quad\forall\ Sv\in K,\end{split}\] which implies that \(u^{*}\) is the global minimizer of the model problem (1.1). For the other direction, we will use the classical ALM for the model problem as an optimization procedure regularization to prove it. The algorithm will be given and the convergence analysis will be carried out without using any assumptions on Lagrange multipliers of the model problem. The convergence analysis borrows some ideas of that in [12] and [21] for ADMM. **Remark A.1**.: _A comprehensive discussion of the classical ALM in Banach spaces based on a different approach can be found in [37, Chapter 4], and a further application of these results to develop new constraint qualification conditions in Banach spaces can be found in [6]._ We first rewrite the model problem (1.1) into the following equivalent form (A.2) \[\min_{u\in\mathcal{U},\zeta\in\mathcal{X}}\theta(u)+I_{K}(\zeta)\ \ \text{s.t.}\ \ Su=\zeta,\] where \(I_{K}(\cdot)\) is the indicator function of \(K\), which is defined by \[I_{K}(\zeta)=\begin{cases}0,&\text{if }\zeta\in K,\\ +\infty,&\text{if }\zeta\not\in K.\end{cases}\] Note that the global minimizer of the problem (A.2) is given by \((u^{*},\zeta^{*})\), where \(u^{*}\) is the global minimizer of the model problem (1.1) and \(\zeta^{*}=Su^{*}\). ### The classical augmented Lagrangian method The augmented Lagrangian functional \(L_{\beta}(u,\zeta;\lambda):(\mathcal{U}\times\mathcal{X})\times\mathcal{X} \to\mathbb{R}\cup\{+\infty\}\) of the model problem (1.1) based on (A.2) is given by \[L_{\beta}(u,\zeta;\lambda)=\theta(u)+I_{K}(\zeta)+\langle\lambda,Su-\zeta \rangle+\frac{\beta}{2}\|Su-\zeta\|^{2},\] where \(\beta>0\). The classical augmented Lagrangian method for the model problem (1.1) based on this augmented Lagrangian functional is given by (A.3) \[\begin{cases}(u^{k+1},\zeta^{k+1})=\arg\min_{(u,\zeta)\in\mathcal{U}\times \mathcal{X}}L_{\beta}(u,\zeta;\lambda^{k}),\\ \lambda^{k+1}=\lambda^{k}+\beta(Su^{k+1}-\zeta^{k+1}),\end{cases}\] where \(\lambda^{1}\in\mathcal{X}\) is given. ### Convergence analysis In this part we will give an elementary proof of the convergence of the algorithm. The proof is composed of several steps. We first give a characterization of the iterators \(\{(u^{k},\zeta^{k},\lambda^{k})\}_{k=1}^{+\infty}\) by the first order optimality system of the subproblem in the first step. 
Since the subproblem in the first step is a convex problem, solving the subproblem is equivalent to solve \[\begin{cases}D_{u}\theta(u^{k+1})+S^{*}[\lambda^{k}+\beta(Su^{k+1}-\zeta^{k+1 })]=0;\\ I_{K}(\zeta)-I_{K}(\zeta^{k+1})-\langle\lambda^{k}+\beta(Su^{k+1}-\zeta^{k+1 }),\zeta-\zeta^{k+1}\rangle\geq 0\quad\forall\ \zeta\in\mathcal{X}.\end{cases}\] Hence, the iterators of the classical ALM satisfy (A.4) \[\begin{cases}D_{u}\theta(u^{k+1})+S^{*}\lambda^{k+1}=0;\\ I_{K}(\zeta)-I_{K}(\zeta^{k+1})-\langle\lambda^{k+1},\zeta-\zeta^{k+1}\rangle\geq 0 \quad\forall\ \zeta\in\mathcal{X};\\ \beta r^{k+1}=\lambda^{k+1}-\lambda^{k},\end{cases}\] where \(r^{k+1}=Su^{k+1}-\zeta^{k+1}\). According to (A.4), it holds (A.5) \[\lambda^{k}\in\partial I_{K}(\zeta^{k})\quad\forall\ k=2,3,\dots.\] Without loss of generality, we also assume that (A.5) holds for \(k=1\). #### a.2.1. Convergence of \(\{(u^{k},\zeta^{k})\}_{k=1}^{+\infty}\) We first prove that \(\{\|r^{k}\|\}_{k=1}^{+\infty}\) is Fejer monotone. It follows from (A.4) that \[\beta S^{*}r^{k+1}=S^{*}(\lambda^{k+1}-\lambda^{k})=S^{*}\lambda^{k+1}-S^{*} \lambda^{k}=D_{u}\theta(u^{k})-D_{u}\theta(u^{k+1})\] and \[\beta\langle r^{k+1},\zeta^{k+1}-\zeta^{k}\rangle =\langle\lambda^{k+1}-\lambda^{k},\zeta^{k+1}-\zeta^{k}\rangle\] \[=I_{K}(\zeta^{k})-I_{K}(\zeta^{k+1})-\langle\lambda^{k+1},\zeta^ {k}-\zeta^{k+1}\rangle\quad(\geq 0)\] \[\quad+I_{K}(\zeta^{k+1})-I_{K}(\zeta^{k})-\langle\lambda^{k}, \zeta^{k+1}-\zeta^{k}\rangle\quad(\geq 0)\] \[\geq 0,\] which yield \[\beta\langle r^{k+1}-r^{k},r^{k+1}\rangle =\beta\langle(Su^{k+1}-\zeta^{k+1})-(Su^{k}-\zeta^{k}),r^{k+1}\rangle\] \[=\beta\langle S(u^{k+1}-u^{k}),r^{k+1}\rangle-\beta\langle\zeta^ {k+1}-\zeta^{k},r^{k+1}\rangle\] \[=\langle u^{k+1}-u^{k},\beta S^{*}r^{k+1}\rangle_{\mathcal{U}}- \beta\langle r^{k+1},\zeta^{k+1}-\zeta^{k}\rangle\] \[\leq-\langle u^{k+1}-u^{k},D_{u}\theta(u^{k+1})-D_{u}\theta(u^{k })\rangle_{\mathcal{U}}\] \[\leq-c_{0}\|u^{k+1}-u^{k}\|_{\mathcal{U}}^{2},\] where we used (1.2) in the last inequality. Furthermore, by applying the identity \[\beta\langle r^{k+1}-r^{k},r^{k+1}\rangle=\frac{\beta}{2}\left(\|r^{k+1}\|^{2 }-\|r^{k}\|^{2}+\|r^{k+1}-r^{k}\|^{2}\right),\] we have \[\frac{\beta}{2}\left(\|r^{k+1}\|^{2}-\|r^{k}\|^{2}+\|r^{k+1}-r^{k}\|^{2} \right)\leq-c_{0}\|u^{k+1}-u^{k}\|_{\mathcal{U}}^{2}\] or equivalently (A.6) \[\|r^{k+1}\|^{2}\leq\|r^{k}\|^{2}-\|r^{k+1}-r^{k}\|^{2}-2c_{0}\beta^{-1}\|u^{k+ 1}-u^{k}\|_{\mathcal{U}}^{2}\leq\|r^{k}\|^{2},\] which shows that \(\{\|r^{k}\|\}_{k=1}^{+\infty}\) is Fejer monotone. Secondly, we prove that \[\sum_{k=1}^{+\infty}\|r^{k}\|^{2}<+\infty.\] We denote by \(\eta=(u,\zeta)\in\mathcal{U}\times\mathcal{X}\) and recall the Bregman distance induced by the convex functional \(\theta(u)+I_{K}(\zeta)\) at \((D_{u}\theta(u),\lambda)\) with \(\lambda\in\partial I_{K}(\zeta)\), which is given by (A.7) \[\mathcal{D}_{\eta}(\hat{\eta},\eta;\lambda):=\theta(\hat{u})-\theta(u)- \langle D_{u}\theta(u),\hat{u}-u\rangle_{\mathcal{U}}+I_{K}(\hat{\zeta})-I_{K }(\zeta)-\langle\lambda,\hat{\zeta}-\zeta\rangle\geq 0,\] for any \(\hat{\eta}\in\mathcal{U}\times\mathcal{X}\). By the assumption (1.2) of \(\theta(u)\), we have (A.8) \[\mathcal{D}_{\eta}(\hat{\eta},\eta;\lambda)\geq\theta(\hat{u})-\theta(u)- \langle D_{u}\theta(u),\hat{u}-u\rangle_{\mathcal{U}}\geq\frac{c_{0}}{2}\|\hat {u}-u\|_{\mathcal{U}}^{2}.\] Let \(\hat{\eta}=(\hat{u},\hat{\zeta})\) satisfy \(\hat{\zeta}=S\hat{u}\in K\). We will always assume this in the following analysis. 
It follows from (A.4), (A.5) and (A.7) that (A.9) \[\begin{split}&\mathcal{D}_{\eta^{k}}(\hat{\eta},\eta^{k};\lambda^{k})- \mathcal{D}_{\eta^{n}}(\hat{\eta},\eta^{n};\lambda^{n})+\mathcal{D}_{\eta^{n} }(\eta^{k},\eta^{n};\lambda^{n})\\ &=\langle D_{u}\theta(u^{n})-D_{u}\theta(u^{k}),\hat{u}-u^{k} \rangle_{\mathcal{U}}+\langle\lambda^{n}-\lambda^{k},\hat{\zeta}-\zeta^{k}\rangle \\ &=\langle\lambda^{n}-\lambda^{k},Su^{k}-S\hat{u}\rangle+\langle \lambda^{n}-\lambda^{k},S\hat{u}-\zeta^{k}\rangle\\ &=\langle\lambda^{n}-\lambda^{k},Su^{k}-\zeta^{k}\rangle=-\sum_{ i=n}^{k-1}\langle\lambda^{i+1}-\lambda^{i},Su^{k}-\zeta^{k}\rangle\\ &=-\beta\sum_{i=n}^{k-1}\langle r^{i+1},r^{k}\rangle,\end{split}\] for any \(k>n\). This gives \[\mathcal{D}_{\eta^{k}}(\eta^{k+1},\eta^{k};\lambda^{k})+\beta\|r^{k+1}\|^{2}= \mathcal{D}_{\eta^{k}}(\hat{\eta},\eta^{k};\lambda^{k})-\mathcal{D}_{\eta^{k+ 1}}(\hat{\eta},\eta^{k+1};\lambda^{k+1})\quad\forall\ k=1,2,\ldots.\] Summing over \(k\) from \(n\) to \(m\) (\(>n\)) on both sides, we have (A.10) \[\sum_{k=n}^{m}\left(\mathcal{D}_{\eta^{k}}(\eta^{k+1},\eta^{k};\lambda^{k})+ \beta\|r^{k+1}\|^{2}\right)=\mathcal{D}_{\eta^{n}}(\hat{\eta},\eta^{n}; \lambda^{n})-\mathcal{D}_{\eta^{m}}(\hat{\eta},\eta^{m};\lambda^{m}).\] Then by taking \(n=1\), using (A.8) and letting \(m\to+\infty\), we have \[\sum_{k=1}^{+\infty}\left(\mathcal{D}_{\eta^{k}}(\eta^{k+1},\eta^{k};\lambda^{ k})+\beta\|r^{k+1}\|^{2}\right)\leq\mathcal{D}_{\eta^{1}}(\hat{\eta},\eta^{1}; \lambda^{1})<+\infty,\] which yields (A.11) \[\sum_{k=1}^{+\infty}\|r^{k}\|^{2}<\infty\] and (A.12) \[\{\mathcal{D}_{\eta^{k}}(\hat{\eta},\eta^{k};\lambda^{k})\}_{k=1}^{+\infty} \ \ \text{is a Cauchy sequence},\] by (A.10). Moreover, we have (A.13) \[\|Su^{k}-\zeta^{k}\|=\|r^{k}\|\ \to 0\ \ \text{as}\ \ k\to+\infty.\] Now, we prove that \(\{u^{k}\}_{k=1}^{+\infty}\) and \(\{\zeta^{k}\}_{k=1}^{+\infty}\) are Cauchy sequences. For any \(k>n\), it follows from (A.6) that (A.14) \[|\beta\sum_{i=n}^{k-1}\langle r^{i+1},r^{k}\rangle|\leq\frac{\beta}{2}\sum_{i= n}^{k-1}(\|r^{i+1}\|^{2}+\|r^{k}\|^{2})\leq\beta\sum_{i=n}^{k}\|r^{i}\|^{2}.\] For any \(k>n\), by (A.8), (A.9), (A.14), (A.11) and (A.12), we have \[\begin{split}\frac{c_{0}}{2}\|u^{k}-u^{n}\|_{\mathcal{U}}^{2}& \leq\mathcal{D}_{\eta^{n}}(\eta^{k},\eta^{n};\lambda^{n})\\ &=-\beta\sum_{i=n}^{k-1}\langle r^{i+1},r^{k}\rangle-[\mathcal{D} _{\eta^{k}}(\hat{\eta},\eta^{k};\lambda^{k})-\mathcal{D}_{\eta^{n}}(\hat{\eta}, \eta^{n};\lambda^{n})]\\ &\leq\beta\sum_{i=n}^{k}\|r^{i}\|^{2}+|\mathcal{D}_{\eta^{k}}( \hat{\eta},\eta^{k};\lambda^{k})-\mathcal{D}_{\eta^{n}}(\hat{\eta},\eta^{n}; \lambda^{n})|\quad\to\ \ 0\ \ \text{as}\ n,k\to+\infty.\end{split}\] Therefore, \(\{u^{k}\}_{k=1}^{+\infty}\) is a Cauchy sequence and \(\{\zeta^{k}\}_{k=1}^{+\infty}\) is a Cauchy sequence by (A.13). Let \(\bar{u}\in\mathcal{U}\) and \(\bar{\zeta}\in\mathcal{X}\) be the limits of \(\{u^{k}\}_{k=1}^{+\infty}\) and \(\{\zeta^{k}\}_{k=1}^{+\infty}\) respectively, i.e., (A.15) \[\lim_{k\to+\infty}u^{k}=\bar{u}\quad\text{and}\quad\lim_{k\to+\infty}\zeta^{k}= \bar{\zeta}.\] Since \(\{\zeta^{k}\}\subset K\) and \(K\) is closed, we have \(\bar{\zeta}\in K\). It follows from (A.13) that (A.16) \[S\bar{u}=\bar{\zeta}.\] #### a.2.2. Global convergence of the algorithm Now, we prove that \((\bar{u},\bar{\zeta})\) is the global minimizer of the problem (A.2). 
Since \[\theta(u)+I_{K}(\zeta)\] is lower semicontinuous, we have (A.17) \[\theta(\bar{u})+I_{K}(\bar{\zeta})\leq\liminf_{k\to+\infty}[\theta(u^{k})+I_{K }(\zeta^{k})].\] Note that \(\lambda^{k}\in\partial I_{K}(\zeta^{k})\). The convexity of the objective functional gives \[\theta(\bar{u})+I_{K}(\bar{\zeta})-[\theta(u^{k})+I_{K}(\zeta^{k})]-[\langle \lambda^{k},\bar{\zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{k}),\bar{u}-u^ {k}\rangle_{\mathcal{U}}]\geq 0,\] i.e., (A.18) \[\theta(u^{k})+I_{K}(\zeta^{k})\leq\theta(\bar{u})+I_{K}(\bar{\zeta})-[\langle \lambda^{k},\bar{\zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{k}),\bar{u}-u^ {k}\rangle_{\mathcal{U}}].\] For any fixed \(n\) and \(k>n\), it follows from (A.16) and (A.9) with \(\hat{\eta}=(\bar{u},\bar{\zeta})\) that (A.19) \[\begin{split}&|\langle\lambda^{k},\bar{\zeta}-\zeta^{k}\rangle+ \langle D_{u}\theta(u^{k}),\bar{u}-u^{k}\rangle_{\mathcal{U}}|\\ &=|\langle\lambda^{k}-\lambda^{n},\bar{\zeta}-\zeta^{k}\rangle+ \langle D_{u}\theta(u^{k})-D_{u}\theta(u^{n}),\bar{u}-u^{k}\rangle_{\mathcal{U} }\\ &\quad+\langle\lambda^{n},\bar{\zeta}-\zeta^{k}\rangle+\langle D _{u}\theta(u^{n}),\bar{u}-u^{k}\rangle_{\mathcal{U}}|\\ &=|\beta\sum_{i=n}^{k-1}\langle r^{i+1},r^{k}\rangle+[\langle \lambda^{n},\bar{\zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{n}),\bar{u}-u^ {k}\rangle_{\mathcal{U}}]|\\ &\leq\beta\sum_{i=n}^{k}\|r^{i}\|^{2}+|\langle\lambda^{n},\bar{ \zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{n}),\bar{u}-u^{k}\rangle_{ \mathcal{U}}|.\end{split}\] Hence, by (A.18), (A.19), (A.11) and (A.15), we get \[\limsup_{k\to+\infty}[\theta(u^{k})+I_{K}(\zeta^{k})]\] \[\leq\theta(\bar{u})+I_{K}(\bar{\zeta})+\limsup_{k\to+\infty} \left\{\beta\sum_{i=n}^{k}\|r^{i}\|^{2}+|\langle\lambda^{n},\bar{\zeta}-\zeta ^{k}\rangle+\langle D_{u}\theta(u^{n}),\bar{u}-u^{k}\rangle_{\mathcal{U}}|\right\}\] \[=\theta(\bar{u})+I_{K}(\bar{\zeta})+\beta\sum_{i=n}^{+\infty}\|r ^{i}\|^{2}+|\langle\lambda^{n},\bar{\zeta}-\bar{\zeta}\rangle+\langle D_{u} \theta(u^{n}),\bar{u}-\bar{u}\rangle_{\mathcal{U}}|\] \[=\theta(\bar{u})+I_{K}(\bar{\zeta})+\beta\sum_{i=n}^{+\infty}\|r ^{i}\|^{2}.\] If we further take \(n\to+\infty\) and use (A.11), we obtain (A.20) \[\lim_{k\to+\infty}\sup[\theta(u^{k})+I_{K}(\zeta^{k})]\leq\theta(\bar{u})+I_{ K}(\bar{\zeta}).\] This together with (A.17) yields (A.21) \[\theta(\bar{u})+I_{K}(\bar{\zeta})=\lim_{k\to+\infty}[\theta(u^{k})+I_{K}( \zeta^{k})].\] For any \((\hat{u},\hat{\zeta})\) satisfing \(S\hat{u}=\hat{\zeta}\), we will prove that \[\theta(\bar{u})+I_{K}(\bar{\zeta})\leq\theta(\hat{u})+I_{K}(\hat{\zeta}).\] The proof is similar to that of (A.20). 
Since \[\theta(u^{k})+I_{K}(\zeta^{k})\leq\theta(\hat{u})+I_{K}(\hat{\zeta})-[\langle \lambda^{k},\hat{\zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{k}),\hat{u}-u^ {k}\rangle_{\mathcal{U}}]\] \[|\langle\lambda^{k},\hat{\zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{k}), \hat{u}-u^{k}\rangle_{\mathcal{U}}|\] \[=|\beta\sum_{i=n}^{k-1}\langle r^{i+1},r^{k}\rangle|+|\langle \lambda^{n},\hat{\zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{n}),\hat{u}-u^ {k}\rangle_{\mathcal{U}}|\] \[\leq\beta\sum_{i=n}^{k}\|r^{i}\|^{2}+|\langle\lambda^{n},r^{k}\rangle|\] where \(n\) is fixed and \(k>n\), we have \[\theta(u^{k})+I_{K}(\zeta^{k}) \leq\theta(\hat{u})+I_{K}(\hat{\zeta})-[\langle\lambda^{k},\hat{ \zeta}-\zeta^{k}\rangle+\langle D_{u}\theta(u^{k}),\hat{u}-u^{k}\rangle_{ \mathcal{U}}]\] \[\leq\theta(\hat{u})+I_{K}(\hat{\zeta})+\beta\sum_{i=n}^{k}\|r^{i }\|^{2}+|\langle\lambda^{n},r^{k}\rangle|.\] By (A.11) and (A.21), we get \[\theta(\bar{u})+I_{K}(\bar{\zeta})=\lim_{k\to+\infty}\theta(u^{k})+I_{K}( \zeta^{k})\leq\theta(\hat{u})+I_{K}(\hat{\zeta})+\beta\sum_{i=n}^{+\infty}\|r ^{i}\|^{2}.\] By taking \(n\to+\infty\) and using (A.11) again, it leads to \[\theta(\bar{u})+I_{K}(\bar{\zeta})\leq\theta(\hat{u})+I_{K}(\hat{\zeta}),\] which implies \((\bar{u},\bar{\zeta})\) is the global minimizer of the model problem. ### Deriving the weak form asymptotic KKT system Now we derive the weak form asymptotic KKT system (1.3). This can be achieved by exploring the limiting case of (A.4). We have proved that \(\{u^{k}\}_{k=1}^{+\infty}\) strongly converges to the global minimizer \(u^{*}\) of the model problem (1.1). Since \(\theta(u)\) is continuously differentiable, it gives \[\lim_{k\to+\infty}D_{u}\theta(u^{k})=D_{u}\theta(u^{*})\] and (A.22) \[\langle D_{u}\theta(u^{*}),v\rangle_{\mathcal{U}}+\lim_{k\to+\infty}\langle \lambda^{k},Sv\rangle=0\quad\forall\ v\in\mathcal{U},\] by (A.4). In order to get (1.3), it suffices to show that \[-\limsup_{k\to+\infty}\langle\lambda^{k},\zeta-Su^{*}\rangle\geq 0\quad \forall\ \zeta\in K.\] Denote by \(\zeta^{*}=Su^{*}\in K\). 
According to (A.4), for any \(\zeta\in K\), we have \[-\langle\lambda^{k+1},\zeta-\zeta^{k+1}\rangle\geq 0\] and (A.23) \[\quad-\langle\lambda^{k+1},\zeta-\zeta^{*}\rangle=-\langle\lambda^{k+1}, \zeta-\zeta^{k+1}\rangle-\langle\lambda^{k+1},\zeta^{k+1}-\zeta^{*}\rangle\] \[\geq\langle\lambda^{k+1},\zeta^{*}-\zeta^{k+1}\rangle\] For any fixed \(n\) and \(k+1>n\), applying (A.19) to \((u^{*},\zeta^{*})\), we have \[|\langle\lambda^{k+1},\zeta^{*}-\zeta^{k+1}\rangle+\langle D_{u}\theta(u^{k+1} ),u^{*}-u^{k+1}\rangle_{\mathcal{U}}|\] \[\leq\beta\sum_{i=n}^{k+1}\|r^{i}\|^{2}+|\langle\lambda^{n},\zeta^{*}-\zeta^{k+ 1}\rangle+\langle D_{u}\theta(u^{n}),u^{*}-u^{k+1}\rangle_{\mathcal{U}}|.\] Following the same arguments as before (see (A.19)-(A.20)), we get (A.24) \[|\langle\lambda^{k+1},\zeta^{*}-\zeta^{k+1}\rangle+\langle D_{u}\theta(u^{k+1} ),u^{*}-u^{k+1}\rangle_{\mathcal{U}}|\to 0\quad\text{as}\quad k\to+\infty.\] On the other hand, the strong convergence of \(\{u^{k}\}_{k=1}^{+\infty}\) and the boundedness of \(D_{u}\theta(u)\) around \(u^{*}\) give (A.25) \[\lim_{k\to+\infty}\langle D_{u}\theta(u^{k+1}),u^{*}-u^{k+1}\rangle_{\mathcal{U} }=0.\] Together with (A.23), (A.24) and (A.25), we arrive at (A.26) \[-\limsup_{k\to+\infty}\langle\lambda^{k},\zeta-\zeta^{*}\rangle=\lim_{k\to+ \infty}[-\langle\lambda^{k},\zeta-\zeta^{*}\rangle]\geq 0\quad\forall\ \zeta\in K.\] Therefore, by (A.22), (A.26) and \(\zeta^{*}=Su^{*}\), we have the weak form asymptotic KKT system (1.3) at \(u^{*}\). This completes the proof of Theorem 1.2.
2302.01181
$\mathcal{PT}$-symmetric effects in measurement-based quantum thermal machines
Measurement-based quantum thermal machines are fascinating models of thermodynamic cycles where measurement protocols play an important role in the performance and functioning of the cycle. Despite theoretical advances, interesting experimental implementations have been reported. Here we move a step further by considering in this class of cycle $\mathcal{PT}$-symmetric non-Hermitian Hamiltonians and their implications in quantum thermal machines fueled by generalized measurements. We present theoretical results indicating that $\mathcal{PT}$-symmetric effects and measurement protocols are related along the cycle. Furthermore, tuning the parameters suitably it is possible to improve the power output (engine configuration) and the cooling rate (refrigerator configuration), operating in the Otto limit, in a finite-time cycle that satisfies the quantum adiabatic theorem. Our model also allows switching the configuration of the cycle, engine, or refrigerator, depending on the strength of the measurement protocol.
Jonas F. G. Santos, Pritam Chattopadhyay
2023-02-02T16:09:26Z
http://arxiv.org/abs/2302.01181v2
# \(\mathcal{PT}\)-symmetric effects in measurement-based quantum thermal machines ###### Abstract Measurement-based quantum thermal machines are fascinating models of thermodynamic cycles where measurement protocols play an important role in the performance and functioning of the cycle. Despite theoretical advances, interesting experimental implementations have been reported. Here we move a step further by considering in this class of cycle \(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonians and their implications in quantum thermal machines fueled by generalized measurements. We present theoretical results indicating that \(\mathcal{PT}\)-symmetric effects and measurement protocols are related along the cycle. Furthermore, tuning the parameters suitably it is possible to improve the power output (engine configuration) and the cooling rate (refrigerator configuration), operating in the Otto limit, in a finite-time cycle that satisfies the quantum adiabatic theorem. Our model also allows switching the configuration of the cycle, engine, or refrigerator, depending on the strength of the measurement protocol. ## I Introduction The development of classical thermodynamics is a solid ground for the theoretical and practical study of thermal machines. For example, using the association between one thermal engine and one refrigerator it is possible to show the second law of classical thermodynamics and that no design of classical machines is able to overcome the Carnot bounds [1]. As the systems involved in the fabrication of machines get miniaturized, the working substance or the refrigerant system being the spin (or spins) of an atom (atoms) [2; 3; 4] or the vibrational modes of an ion [5; 6], quantum fluctuations become relevant in the proper understanding of the energy transfer and the entropy production during any cyclic operation [7; 8]. Quantum thermal machines (QTM) [9; 10; 11; 12; 13; 14; 15; 16] are the grounds for the so-called quantum thermodynamics. By investigating QTM we can study quantum effects such as entanglement and coherence and how they affect the performance of QTM. It is also possible to understand and elaborate quantum fluctuation theorems [7; 8; 17; 18] as well as thermodynamic uncertainty relations [19; 20; 21; 22; 23]. In recent years, quantum measurements have also played a crucial role in designing new QTM models, as they can change the state of a system and modify its internal energy [24; 25; 26]. Measurement protocols are non-unitary operations acting on a quantum system. Due to this aspect, many recent QTM's replace one thermalization process by a measurement protocol and the performance becomes dependent on how the measurement is implemented. Protocols involving quantum measurements are vast in the literature [27], and they are the basis of quantum mechanics theory. To get an interpretation of the non-ideal measurement, Hendrikx [28] in their work considered Ramsay's experiment to explore the generalized measurement protocols. While in projective measurements the system state is completely collapsed in a particular eigenstate after the measurement, in the so-called weak measurement protocols [29; 30] the system is only slightly perturbed resulting in many applications in quantum information [31]. On the other hand, generalized measurements are designed such that one can control the action on the quantum system and, depending on the parameter strength involved, it is possible to vary between weak and projective measurements. 
Generalized measurements are well described by the positive operator-valued measurements (POVMs). There is a growing interest in designing QTM where at least one of the steps is performed by quantum measurements, where it plays the role of non-unitary processes, thus relating quantum measurements to entropy production and irreversibility. In particular, recently an experimental implementation of QTM using generalized measurements in nuclear magnetic resonance setup was reported [4]. The theory of quantum mechanics supports an interesting generalization well-known as \(\mathcal{PT}\)-symmetric quantum mechanics, which mathematically means that the property of Hermiticity for the observables is relaxed and replaced by the \(\mathcal{PT}\)-symmetric conditions, i.e., the operators have to fulfill the conditions of invariance by spatial reflection (parity \(\mathcal{P}\)) and time reversal \(\mathcal{T}\) in order to assure real eigenvalues and thus they can represent physical systems [32; 33]. From the seminal paper by Bender and Boettcher [32], it became evident that this new class of Hamiltonians, so-called \(\mathcal{PT}\)-symmetric Hamiltonians, could have a huge set of applications in quantum physics, for instance, in fluctuation relations theorems [34; 35; 36], in quantum optics and photonics systems [37; 38; 39; 40]. In open quantum systems, Ref. [41] showed that the decoherence dynamics are considerably modified if the system-environment interaction is built by \(\mathcal{PT}\)-symmetric Hamiltonians [42; 43]. Furthermore, Ref. [44] proposed a thermal reservoir model with \(\mathcal{PT}\)-symmetric Hamiltonians based on a collisional model such that modifying the strength of the \(\mathcal{PT}\)-symmetry is sufficient to switch the configuration from the engine to refrigerator. Although there exists the study of \(\mathcal{PT}\)-symmetry properties in quantum thermodynamics, their applications in thermodynamic cycles are still unclear for general cycles besides the standard quantum Otto cycle. In this work, we extended the consideration of \(\mathcal{PT}\)-symmetric Hamiltonians for the case of measurement-based quantum thermal machines. Since the measurement mechanism playing the role of a thermal reservoir introduces a new class of quantum thermal machines, the introduction of a second non-trivial degree of freedom may be interesting to control the energy flow along the cycle and then modify its structure and performance. With this in mind, we propose a measurement-based cycle in which the working substance is a single harmonic oscillator, for instance, the vibration mode of a trapped ion, with one of the non-unitary processes a measurement protocol, whereas the second is the interaction with a thermal reservoir modeled by \(\mathcal{PT}\)-symmetric Hamiltonians via collisional model. This work is organized as follows. In section II we review the main properties of \(\mathcal{PT}\)-symmetric quantum mechanics and generalized measurements for continuous variable systems. Section III is dedicated to describe the proposed cycle in detail and discuss its consequences. The conclusion and final remarks are drawn in section IV. ## II \(\mathcal{PT}\)-symmetric quantum mechanics and generalized measurements ### \(\mathcal{PT}\)-symmetric quantum mechanics We briefly revisit the main aspects of the \(\mathcal{PT}\)-symmetric quantum mechanics. 
In order to represent physical observables, quantum mechanics imposes that operators have to be Hermitian, \(A=A^{\dagger}\), such that they have a complete set of eigenstates and real spectra. As evidenced in the pioneering work [32], non-Hermitian operators that are simultaneously invariant under parity \(\mathcal{P}\) and time reversal \(\mathcal{T}\) symmetries also fulfill the conditions to represent physical observables. This property is well-known as \(\mathcal{PT}\)-symmetry and for a non-Hermitian Hamiltonian \(H\left(q_{i},p_{i}\right)\) with \(i=1...,N\) and with eigenstates \(\left|\psi\left(t\right)\right\rangle\), it implies \(\left[H\left(q_{i},p_{i}\right),\mathcal{PT}\right]=0\) as well as \(\mathcal{PT}|\psi\left(t\right)\rangle=\left|\psi\left(t\right)\right\rangle\). Under this condition, the Hamiltonian is called \(\mathcal{PT}\)-symmetric and invariant under the transformations \[\mathcal{PT}q_{i}\left(\mathcal{PT}\right)^{-1} = -q_{i},\] \[\mathcal{PT}p_{i}\left(\mathcal{PT}\right)^{-1} = p_{i}\] \[\mathcal{PT}i\left(\mathcal{PT}\right)^{-1} = -i. \tag{1}\] Apart from the real spectra of a non-Hermitian Hamiltonian fulfilling the \(\mathcal{PT}\)-symmetry condition, its connection with its Hermitian partner is realized through the similarity transformation [45; 46] \[h\left(q_{i},p_{i}\right)=\eta H\left(q_{i},p_{i}\right)\eta^{-1}, \tag{2}\] with \(h\left(q_{i},p_{i}\right)\) the Hermitian partner of \(H\left(q_{i},p_{i}\right)\) and \(\eta=\eta\left(q_{i},p_{i}\right)\) is the Dyson map with the property \(\eta\eta^{-1}=\mathbb{I}\). \(\mathcal{PT}\)-symmetric Hamiltonians satisfying Eq. (2) together with the Hermitian condition guarantee the quasi-Hermiticity relation, \(\Theta H\left(q_{i},p_{i}\right)=H^{\dagger}\left(q_{i},p_{i}\right)\Theta\), with \(\Theta=\eta^{\dagger}\eta\) the metric operator in order to ensure the probability conservation [46; 47]. For Hermitian Hamiltonians, \(\Theta=\mathbb{I}\). Finally, expected values of observables of \(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonians and their Hermitian partner are linked through the relation \(\left\langle\phi\left(t\right)|O|\phi\right\rangle=\left\langle\psi\left(t \right)|\mathcal{O}|\psi\right\rangle\), where \(\mathcal{O}\) and \(O\) are the non-Hermitian and Hermitian partner observables, respectively, and \(\left|\phi\left(t\right)\right\rangle=\eta^{-1}|\psi\left(t\right)\rangle\)[47]. ### Generalized measurements Recent studies concerning measurement protocols in quantum mechanics assume generalized measurement as the most general set of measurements [27]. They range from the class of projective to weak measurements, depending on their strength. Generalized measurements of such a observable \(A\) with eigenvalues \(a_{\alpha}\) are characterized by Hermitian measurement operators \(M_{\alpha}=M_{\alpha}^{\dagger}\) such that \(\sum_{\alpha}M_{\alpha}^{2}=\mathbb{I}\)[24]. The state of the system after the measurement in the non-selective case has the form \(\rho^{M}=\sum_{\alpha}M_{\alpha}\rho M_{\alpha}^{\dagger}\). Quantum thermal machines powered by generalized measurements have been recently reported in [48; 4]. Generalized measurements have been used in quantum thermal machine models with qubit playing the role of working substance in Ref. [4; 24; 48]. 
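For such a qubit working substance, the algebra above can be made concrete with a small numerical sketch (an added illustration; the specific two-outcome operators \(M_{0},M_{1}\) below are an assumed example and are not taken from the original text).

```python
# Two-outcome generalized measurement on a qubit: Hermitian M_a with M_0^2 + M_1^2 = 1 and
# non-selective post-measurement state rho^M = M_0 rho M_0 + M_1 rho M_1.  The energy change
# at fixed Hamiltonian can be nonzero when [H, M_a] != 0.
import numpy as np

p = 0.8                                           # strength: p = 1/2 leaves any state untouched, p -> 1 is projective
plus, minus = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)
Pp, Pm = np.outer(plus, plus), np.outer(minus, minus)
M0 = np.sqrt(p) * Pp + np.sqrt(1.0 - p) * Pm
M1 = np.sqrt(1.0 - p) * Pp + np.sqrt(p) * Pm
assert np.allclose(M0 @ M0 + M1 @ M1, np.eye(2))  # completeness: sum_a M_a^2 = identity

rho = np.diag([1.0, 0.0])                         # qubit ground state
rho_M = M0 @ rho @ M0 + M1 @ rho @ M1             # non-selective post-measurement state

H = np.diag([0.0, 1.0])                           # fixed qubit Hamiltonian (hbar * omega = 1)
print(np.trace(rho_M).real)                       # the trace is preserved
print(np.trace(H @ (rho_M - rho)).real)           # energy exchanged with the measurement apparatus
```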
For continuous variable systems, we can perform Gaussian measurements on the position or momentum operators such that \(M_{\alpha}=\left(2\pi\sigma^{2}\right)^{-1/4}\exp\left[-\left(q-\alpha\right)^{2}/\left(4\sigma^{2}\right)\right]\) for the position case, with \(\alpha\) the measured position and \(\sigma^{2}\) the variance of the measurement apparatus, which characterizes its precision. Here the measurement operators satisfy the normalization condition \(\int d\alpha\,M_{\alpha}^{2}=\mathbb{I}\). Note that for \(\sigma^{2}\to 0\) we have infinite precision in the measurement, which corresponds to a projective measurement protocol. Also, these measurement operators correspond to the homodyne probability density associated with a measurement of the position [49]. It is important to stress the non-unitary character of this generalized measurement: whenever the Hamiltonian and the measurement operators do not commute, the system in general absorbs or releases energy during the measurement protocol. This energy exchange is interpreted as heat because the Hamiltonian is kept fixed during the process.

## III \(\mathcal{PT}\)-symmetry in a measurement-based quantum heat engine

A measurement-based quantum thermal machine with \(\mathcal{PT}\)-symmetric effects is studied. The cycle is illustrated in Fig. (1). The working substance is a single-mode quantum harmonic oscillator. The cycle is structured with two unitary and two non-unitary processes. In the unitary processes (the first and third strokes), changes in the frequency \(\omega(t)\) of the working substance are implemented quasi-adiabatically in order to avoid transitions between the eigenstates. On the other hand, the non-unitary processes are the position measurement process (second stroke) and the thermalization with a thermal reservoir (fourth stroke). We incorporate \(\mathcal{PT}\)-symmetry in the thermal reservoir through a collisional model, where each particle is a single harmonic oscillator with Hamiltonian [44; 48] \[H^{\mathcal{PT}}=\frac{p^{2}}{2m}+\frac{m\omega^{2}q^{2}}{2}+2i\omega\epsilon\,p\,q. \tag{3}\] Note that the above Hamiltonian is \(\mathcal{PT}\)-symmetric, and hence its eigenvalues are real. We use the Dyson map \(\eta=\exp\left[\epsilon/\left(m\omega\hbar\right)p^{2}\right]\) to obtain the Hermitian counterpart, \[h=\frac{\mu^{2}}{2m}p^{2}+\frac{m\omega^{2}q^{2}}{2}+\hbar\omega\epsilon, \tag{4}\] with \(\mu^{2}\equiv\left(1+4\epsilon^{2}\right)\); any \(\mathcal{PT}\)-symmetry signature is removed by setting \(\epsilon=0\). In the simplest collisional model (CM) [50], the thermal reservoir is modeled by a large number of non-interacting particles (ancillary systems), each of them prepared in a thermal state. The assumption that the ancillas are non-interacting ensures Markovian dynamics of the working substance [51; 52]. Furthermore, the CM implies that the working substance interacts with a single ancilla during each time interval \(\delta t\), after which ancilla \(i\) is discarded and a new ancilla \(i+1\) is brought in to interact.
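The Gaussian position measurement introduced above can be checked numerically on a truncated Fock space. The sketch below is an illustration, not part of the paper; the truncation dimension, integration grid, and parameter values are arbitrary choices. It verifies \(\int d\alpha\,M_{\alpha}^{2}\approx\mathbb{I}\) on the low-lying states and shows that the non-selective measurement raises the mean energy of a thermal oscillator state, since \([H,M_{\alpha}]\neq 0\).

```python
import numpy as np
from scipy.linalg import expm

# Truncated Fock space for a single harmonic oscillator (hbar = m = omega = 1).
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)             # annihilation operator
q = (a + a.conj().T) / np.sqrt(2)                       # position operator
H = a.conj().T @ a + 0.5 * np.eye(N)                    # oscillator Hamiltonian

def M(alpha, sigma):
    """Gaussian position-measurement operator M_alpha."""
    d = q - alpha * np.eye(N)
    return (2 * np.pi * sigma**2) ** (-0.25) * expm(-(d @ d) / (4 * sigma**2))

sigma = 0.3
alphas = np.linspace(-8.0, 8.0, 401)                    # grid for the alpha integral
da = alphas[1] - alphas[0]
Ms = [M(al, sigma) for al in alphas]

# Completeness on the low-lying Fock states: int d(alpha) M_alpha^2 ~ identity.
S = sum(m @ m for m in Ms) * da
print(np.round(np.diag(S).real[:5], 3))                 # ~ [1. 1. 1. 1. 1.]

# Measurement heat <Q^M> = Tr[rho^M H] - Tr[rho H] on a thermal state.
beta = 1.0
rho = expm(-beta * H)
rho /= np.trace(rho)
rho_M = sum(m @ rho @ m for m in Ms) * da
print((np.trace(rho_M @ H) - np.trace(rho @ H)).real)   # positive: heat is absorbed
```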
By repeating this procedure many times and taking the limit \(\delta t\to 0\), the CM results in the Lindblad master equation [53] \[\frac{d\rho}{dt}=-i\left[H_{S},\rho\right]+\gamma\left(N+1\right)\mathcal{D}[a]\rho+\gamma N\mathcal{D}\left[a^{\dagger}\right]\rho, \tag{5}\] where \(H_{S}=\hbar\omega_{S}\left(a^{\dagger}a+1/2\right)\) is the Hamiltonian of the working substance, \(\mathcal{D}\left[o\right]\rho=o\rho o^{\dagger}-\left(o^{\dagger}o\rho+\rho o^{\dagger}o\right)/2\) is the Lindblad dissipator, and \(N\) is the average number of photons associated with the thermal bath. Using the Hamiltonian in Eq. (4) to build the thermal states in which each ancilla is prepared, we can write them in the Fock basis as \(\rho^{\text{th}}=\sum_{n}\left[N^{n}/\left(N+1\right)^{n+1}\right]|n\rangle\langle n|\), with \(N=\left(e^{\beta\hbar\omega\mu}-1\right)^{-1}\) and \(\beta=1/\left(k_{B}T\right)\) the inverse temperature of the thermal reservoir. Note that it is possible to define an effective inverse temperature \(\beta_{c}^{\text{eff}}=\mu\beta_{c}\), similar to a squeezed thermal bath, so that tuning \(\mu\) amounts to a larger or smaller temperature of the thermal reservoir. The second non-unitary stroke is a position measurement, which is a Gaussian operation: since the state before the measurement is Gaussian, the state after it remains Gaussian. We choose the measurement operators to be \(M_{\alpha}=\left(2\pi\sigma^{2}\right)^{-1/4}\exp\left[-\left(q-\alpha\right)^{2}/\left(4\sigma^{2}\right)\right]\) [24]. The whole cycle is described by the following strokes.

First stroke. The working substance is detached from the thermal reservoir and from the measurement apparatus. An external field changes its frequency from \(\omega_{1}\) to \(\omega_{2}\). This process is implemented by satisfying the quantum adiabatic theorem, such that there is no transition between the different eigenstates. The work associated with this stroke is \(\langle W_{1}\rangle=\operatorname{Tr}\left[\rho_{\tau_{1}}\left(\omega_{2}\right)H\left(\omega_{2}\right)\right]-\operatorname{Tr}\left[\rho_{0}\left(\omega_{1}\right)H\left(\omega_{1}\right)\right]\).

Second stroke. A measurement apparatus is attached to the working substance such that its position is measured. Using the position measurement operator \(M_{\alpha}\) mentioned above, we measure the position \(\alpha\) with precision given by \(\sigma^{2}\). The post-measurement state is \(\rho^{M}\left(\omega_{2}\right)\) and the energy exchanged with the measurement apparatus is \(\left\langle Q_{2}^{M}\right\rangle=\operatorname{Tr}\left[\rho^{M}\left(\omega_{2}\right)H\left(\omega_{2}\right)\right]-\operatorname{Tr}\left[\rho_{\tau_{1}}\left(\omega_{2}\right)H\left(\omega_{2}\right)\right]\). This energy exchange is defined as heat because the Hamiltonian is kept fixed during this process.

Third stroke. The working substance is again detached from the measurement apparatus and an external field changes its frequency back from \(\omega_{2}\) to \(\omega_{1}\). This unitary process also satisfies the quantum adiabatic theorem and there is no transition between the eigenstates. The work associated with this process is \(\langle W_{3}\rangle=\operatorname{Tr}\left[\rho_{\tau_{3}}\left(\omega_{1}\right)H\left(\omega_{1}\right)\right]-\operatorname{Tr}\left[\rho^{M}\left(\omega_{2}\right)H\left(\omega_{2}\right)\right]\).

Figure 1: Illustration of the model. A single-temperature quantum thermal machine where the thermal reservoir is designed employing a \(\mathcal{PT}\)-symmetric Hamiltonian. The position measurement process is a Gaussian operation such that the state after it is still a Gaussian state.
Fourth stroke. In order to close the cycle, the working substance is brought to interact with the thermal reservoir for a sufficiently long time such that the asymptotic state is reached, i.e., \(\rho_{\tau_{4}}\left(\omega_{1}\right)=\rho_{0}\left(\omega_{1}\right)\). The heat exchanged during this process is \(\left\langle Q_{4}\right\rangle=\operatorname{Tr}\left[\rho_{\tau_{4}}\left(\omega_{1}\right)H\left(\omega_{1}\right)\right]-\operatorname{Tr}\left[\rho_{\tau_{3}}\left(\omega_{1}\right)H\left(\omega_{1}\right)\right]\).

Noticing that all the processes involved in the cycle are Gaussian and that the initial state is also Gaussian, we can use the Fock basis to directly compute the net work \(\left\langle W_{\mathrm{net}}\right\rangle=\left\langle W_{1}\right\rangle+\left\langle W_{3}\right\rangle\) as well as the heat associated with the non-unitary processes. We obtain (see the Appendix for details) \[\left\langle W_{\mathrm{net}}\right\rangle =\frac{\hbar\mu}{2}\left(\omega_{2}-\omega_{1}\right)\left(1-\frac{1}{\left(2\pi\sigma^{2}\right)^{1/2}}\right)\coth\left(\frac{\beta_{c}\hbar\omega_{1}\mu}{2}\right),\] \[\left\langle Q_{2}^{M}\right\rangle =\frac{\hbar\omega_{2}\mu}{2}\left(\frac{1}{\left(2\pi\sigma^{2}\right)^{1/2}}-1\right)\coth\left(\frac{\beta_{c}\hbar\omega_{1}\mu}{2}\right),\] \[\left\langle Q_{4}\right\rangle =\frac{\hbar\omega_{1}\mu}{2}\left(1-\frac{1}{\left(2\pi\sigma^{2}\right)^{1/2}}\right)\coth\left(\frac{\beta_{c}\hbar\omega_{1}\mu}{2}\right). \tag{6}\]

For the cycle to operate as an engine (refrigerator), the thermodynamic quantities have to satisfy \(\left\langle W_{\mathrm{net}}\right\rangle<0\), \(\left\langle Q_{2}^{M}\right\rangle>0\), and \(\left\langle Q_{4}\right\rangle<0\) (\(\left\langle W_{\mathrm{net}}\right\rangle>0\), \(\left\langle Q_{2}^{M}\right\rangle<0\), and \(\left\langle Q_{4}\right\rangle>0\)). From the results in (6) we obtain the efficiency \(\eta=-\left\langle W_{\mathrm{net}}\right\rangle/\left\langle Q_{2}^{M}\right\rangle=1-\omega_{1}/\omega_{2}\) for the engine regime and, for the refrigerator regime, the coefficient of performance \(\mathrm{COP}=\left\langle Q_{4}\right\rangle/\left\langle W_{\mathrm{net}}\right\rangle=\omega_{1}/\left(\omega_{2}-\omega_{1}\right)\). These results show that the proposed quantum machine operates in the Otto limit. This behavior is expected since no quantum coherence is produced in the unitary processes [15], so that the quantum friction along the cycle vanishes. Beyond the efficiency and the coefficient of performance, the important question is how the machine's behavior depends on the \(\mathcal{PT}\)-symmetry and on the position measurement process composing the cycle. In order to better understand this behavior, Fig. (2) depicts the heat exchanged during the measurement process and the net work as functions of the \(\mathcal{PT}\)-symmetric parameter \(\mu\), fixing the value of \(\sigma=0.1\) (\(\sigma=0.2\)) in the black (blue) curves. For comparison with a standard thermal reservoir, the straight lines mark the values at \(\mu=1\). We first note that for these values of \(\sigma\) the cycle operates as an engine for all values of \(\mu\). Moreover, the larger the value of \(\mu\), the larger the absorbed heat and the net work. This could imply, for instance, an increasing power output in a cycle operating in a finite-time regime but still satisfying the quantum adiabatic theorem.
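The closed-form expressions in Eq. (6) are straightforward to evaluate. The short sketch below is only illustrative; \(\omega_{1}\), \(\omega_{2}\) and \(\beta\) are taken from the values quoted in the figure captions. It checks energy conservation over the closed cycle, \(\langle W_{\mathrm{net}}\rangle+\langle Q_{2}^{M}\rangle+\langle Q_{4}\rangle=0\), and confirms that in the engine regime the efficiency equals the Otto value \(1-\omega_{1}/\omega_{2}\).

```python
import numpy as np

hbar = 1.0

def cycle(w1, w2, mu, sigma, beta):
    """Evaluate <W_net>, <Q_2^M>, <Q_4> from Eq. (6)."""
    s = (2 * np.pi * sigma**2) ** (-0.5)
    c = 1 / np.tanh(beta * hbar * w1 * mu / 2)          # coth
    W = 0.5 * hbar * mu * (w2 - w1) * (1 - s) * c
    Q2 = 0.5 * hbar * w2 * mu * (s - 1) * c
    Q4 = 0.5 * hbar * w1 * mu * (1 - s) * c
    return W, Q2, Q4

w1, w2, beta, sigma = 1.0, 2.0, 0.2, 0.1
for mu in (1.0, 5.0, 10.0):
    W, Q2, Q4 = cycle(w1, w2, mu, sigma, beta)
    assert abs(W + Q2 + Q4) < 1e-12                     # first law over the closed cycle
    assert W < 0 and Q2 > 0 and Q4 < 0                  # engine regime for sigma = 0.1
    print(mu, -W / Q2, 1 - w1 / w2)                     # efficiency equals the Otto value
```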
Another interesting point is that the effects of the \(\mathcal{PT}\)-symmetry of the thermal reservoir and of the measurement process are not independent of each other. This is because the \(\mathcal{PT}\)-symmetric property appears exactly in the part of the Hamiltonian which does not commute with the measurement operator. Thus a momentum measurement operator would not affect the thermodynamic quantities in the way observed in Fig. (2).

Figure 2: Thermodynamic quantities. (a) Heat absorbed by the working substance during the measurement process (second stroke) and (b) net work as functions of the \(\mathcal{PT}\)-symmetric parameter \(\mu\). Black (blue) curves are for \(\sigma=0.1\) (\(\sigma=0.2\)). Straight lines are set for \(\mu=1\), representing a cycle without \(\mathcal{PT}\)-symmetry in the thermal reservoir.

Figure (3) shows the thermodynamic quantities as functions of the position measurement parameter \(\sigma\) for fixed \(\mu\). We observe that tuning the measurement precision switches the quantum machine regime from refrigerator to engine. For \(\sigma^{2}\to 0\) we have infinite precision (a projective measurement protocol), and in this case the machine operates as an engine. However, if the measurement process is such that the working substance is only weakly modified, then the cycle operates as a refrigerator. This is an interesting aspect of generalized measurements in quantum thermal machines which is not present in quantum cycles fueled by projective measurements. The value of \(\sigma\) where the cycle regime switches is \(\sigma_{s}=1/\sqrt{2\pi}\). The fact that \(\sigma_{s}\) does not depend on the temperature is because the cycle contains only one thermal bath. For a finite-time cycle satisfying the quantum adiabatic theorem, this result shows that if sufficiently fine control over the measurement protocol is achieved, then the cycle can be tuned to provide a larger power output (cooling rate) in the engine (refrigerator) regime.

The proposed quantum thermal machine is theoretical in nature. However, some recent advances should be highlighted in order to indicate that this cycle could be experimentally performed or simulated in the future. Optomechanical systems are interesting platforms where continuous-variable protocols can be tested. For example, \(\mathcal{PT}\)-symmetric dynamics have been considered in optomechanical systems [54, 55]. In particular, Ref. [56] proposes to generate \(\mathcal{PT}\)-symmetric dynamics employing two coupled optomechanical systems, which mimics exactly the collisional model we have considered for the thermal bath. On the other hand, the measurement process (second stroke) has been investigated in Ref. [57], in which the position operator is measured. These advances show that an implementation of the present cycle could be possible using optomechanical systems.

## IV Conclusion

We have considered a measurement-based quantum thermodynamic cycle in which the only thermal reservoir is built employing \(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonians. With this, we introduced two non-trivial parameters into the cycle performance: the \(\mathcal{PT}\)-symmetry signature, through the Dyson map, and the measurement parameter. The detailed analysis of the cycle showed that, fixing the position measurement precision, the larger the \(\mathcal{PT}\)-symmetry parameter of the thermal reservoir, the larger the absorbed heat and the net work. This feature results in a larger power output as \(\mu\) increases.
On the other hand, for a fixed value of \(\mu\), varying the position measurement precision drives the conversion between the refrigerator and the engine regime. This interesting aspect sheds some light on the role played by generalized measurements in quantum thermodynamic protocols. Finally, we have pointed out some recent advances in the experimental scenario, in particular in optomechanical systems, which could be useful to realize or simulate the present quantum cycle. We hope our work contributes to unveiling the role played by \(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonians in quantum thermodynamics.

###### Acknowledgements.

J. F. G. Santos acknowledges Sao Paulo Research Grant No. 2019/04184-5, Universidade Federal da Grande Dourados and Universidade Federal do ABC for the support.

## Appendix

Here we detail the derivation of the results in Eq. (6). Under the collisional model assumptions in the main text, the initial state of the working substance is \[\rho_{0}=2\sinh\left[\frac{\beta\hbar\omega_{1}\mu}{2}\right]e^{-\beta\hbar\omega_{1}\mu(n+1/2)}. \tag{7}\] At \(t=0\) the Hamiltonian is \(H\left(0\right)=\frac{\hbar\omega_{1}\mu}{2}\left(1+2a^{\dagger}a\right)\), such that the initial energy is \[U_{0}=\operatorname{Tr}\left[\rho_{0}H\left(0\right)\right]=\frac{\omega_{1}\hbar\mu}{2}\coth\left(\frac{\beta\hbar\omega_{1}\mu}{2}\right). \tag{8}\] After the first stroke the Hamiltonian is \(H_{\tau_{1}}=\frac{\hbar\omega_{2}\mu}{2}\left[1+2a^{\dagger}a\right]\) and \(\rho_{\tau_{1}}=\rho_{0}\), resulting in the energy \[U_{1}=\operatorname{Tr}\left[\rho_{\tau_{1}}H\left(\tau_{1}\right)\right]=\frac{\omega_{2}\hbar\mu}{2}\coth\left(\frac{\beta\hbar\omega_{1}\mu}{2}\right). \tag{9}\]

Figure 3: Thermodynamic quantities. Heat absorbed/released by the working substance during the measurement process (red curve) and net work (black curve) as functions of the measurement parameter \(\sigma\). We have set \(\mu=10\) and \((\omega_{c},\omega_{h},\beta)=(1.0,2.0,0.2)\).

The second stroke is a position measurement process, such that the state after the measurement is \[\rho^{M}=2A^{2}\int d\alpha\,e^{-2\left(x-\alpha\right)^{2}/\left(4\sigma^{2}\right)}\sinh\left[\frac{\beta\hbar\omega_{1}\mu}{2}\right]e^{-\beta\hbar\omega_{1}\mu\left(n+1/2\right)}=\frac{\sqrt{2}}{\left(\pi\sigma^{2}\right)^{1/2}}\sinh\left[\frac{\beta\hbar\omega_{1}\mu}{2}\right]e^{-\beta\hbar\omega_{1}\mu\left(n+1/2\right)}. \tag{10}\] This results in the following internal energy after the measurement, \[U_{2}=\operatorname{Tr}\left[\rho_{\tau_{2}}H_{\tau_{2}}\right]=\frac{1}{\left(2\pi\sigma^{2}\right)^{1/2}}\frac{\omega_{2}\hbar\mu}{2}\coth\left(\frac{\beta\hbar\omega_{1}\mu}{2}\right), \tag{11}\] and the internal energy at the end of the third stroke, \[U_{3}=\operatorname{Tr}\left[\rho_{\tau_{3}}H_{\tau_{3}}\right]=\frac{1}{\left(2\pi\sigma^{2}\right)^{1/2}}\frac{\omega_{1}\hbar\mu}{2}\coth\left(\frac{\beta\hbar\omega_{1}\mu}{2}\right). \tag{12}\] The thermodynamic quantities, i.e., the net work, the heat during the measurement, and the heat exchanged with the thermal reservoir, are given respectively by \[\langle W_{\text{net}}\rangle=\langle U_{1}\rangle-\langle U_{0}\rangle+\langle U_{3}\rangle-\langle U_{2}\rangle,\qquad\langle Q_{2}^{M}\rangle=\langle U_{2}\rangle-\langle U_{1}\rangle,\qquad\langle Q_{4}\rangle=\langle U_{4}\rangle-\langle U_{3}\rangle, \tag{13}\] and we obtain the expressions in the main text.
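As a symbolic cross-check of the algebra above (not part of the paper), the internal energies \(U_{0}\)-\(U_{3}\) can be combined according to Eq. (13), using that the cycle closes (\(U_{4}=U_{0}\)), to recover Eq. (6):

```python
import sympy as sp

hbar, mu, beta, w1, w2, sigma = sp.symbols('hbar mu beta omega_1 omega_2 sigma', positive=True)
c = sp.coth(beta * hbar * w1 * mu / 2)
s = 1 / sp.sqrt(2 * sp.pi * sigma**2)

U0 = hbar * w1 * mu / 2 * c          # Eq. (8)
U1 = hbar * w2 * mu / 2 * c          # Eq. (9)
U2 = s * hbar * w2 * mu / 2 * c      # Eq. (11)
U3 = s * hbar * w1 * mu / 2 * c      # Eq. (12)

W_net = (U1 - U0) + (U3 - U2)        # Eq. (13)
Q2M = U2 - U1                        # Eq. (13)
Q4 = U0 - U3                         # Eq. (13) with U_4 = U_0 (closed cycle)
print(sp.simplify(W_net - hbar * mu / 2 * (w2 - w1) * (1 - s) * c))   # 0
print(sp.simplify(Q2M - hbar * w2 * mu / 2 * (s - 1) * c))            # 0
print(sp.simplify(Q4 - hbar * w1 * mu / 2 * (1 - s) * c))             # 0
```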
2304.11034
Revisiting Membership Problems in Subclasses of Rational Relations
We revisit the membership problem for subclasses of rational relations over finite and infinite words: Given a relation R in a class C_2, does R belong to a smaller class C_1? The subclasses of rational relations that we consider are formed by the deterministic rational relations, synchronous (also called automatic or regular) relations, and recognizable relations. For almost all versions of the membership problem, determining the precise complexity or even decidability has remained an open problem for almost two decades. In this paper, we provide improved complexity and new decidability results. (i) Testing whether a synchronous relation over infinite words is recognizable is NL-complete (PSPACE-complete) if the relation is given by a deterministic (nondeterministic) omega-automaton. This fully settles the complexity of this recognizability problem, matching the complexity of the same problem over finite words. (ii) Testing whether a deterministic rational binary relation is recognizable is decidable in polynomial time, which improves a previously known double exponential time upper bound. For relations of higher arity, we present a randomized exponential time algorithm. (iii) We provide the first algorithm to decide whether a deterministic rational relation is synchronous. For binary relations the algorithm even runs in polynomial time.
Pascal Bergsträßer, Moses Ganardi
2023-04-21T15:30:23Z
http://arxiv.org/abs/2304.11034v2
# Revisiting Membership Problems in Subclasses of Rational Relations ###### Abstract We revisit the _membership problem_ for subclasses of rational relations over finite and infinite words: Given a relation \(R\) in a class \(\mathbf{C}_{2}\), does \(R\) belong to a smaller class \(\mathbf{C}_{1}\)? The subclasses of rational relations that we consider are formed by the deterministic rational relations, synchronous (also called automatic or regular) relations, and recognizable relations. For almost all versions of the membership problem, determining the precise complexity or even decidability has remained an open problem for almost two decades. In this paper, we provide improved complexity and new decidability results. (i) Testing whether a synchronous relation over infinite words is recognizable is NL-complete (PSPACE-complete) if the relation is given by a deterministic (nondeterministic) \(\omega\)-automaton. This fully settles the complexity of this recognizability problem, matching the complexity of the same problem over finite words. (ii) Testing whether a deterministic rational binary relation is recognizable is decidable in polynomial time, which improves a previously known double exponential time upper bound. For relations of higher arity, we present a randomized exponential time algorithm. (iii) We provide the first algorithm to decide whether a deterministic rational relation is synchronous. For binary relations the algorithm even runs in polynomial time. ## I Introduction The study of _relations over words_ and their computational models, often called _transducers_, has become an active field of research, with applications in various fields, including algorithmic verification [1, 2], synthesis [3, 4], and graph databases [5]. While the class of regular languages is captured by several equivalent automata models, e.g. deterministic and nondeterministic automata, which read their input either in one or both directions, the same does not hold anymore for relations. The literature contains a number of transducer models for relations with varying tradeoffs between expressivity, closure properties, and algorithmic amenability. Finite-state transducers reading multiple words have been already introduced by Rabin and Scott in their seminal paper [6]. This basic model has later been extended to more expressive models such as streaming string transducers, transductions with origin semantics, visibly pushdown transducers, and transducers over infinite words, see [7] for an overview. Various algorithmic questions on basic transducer models remain challenging open problems, e.g. determining the precise complexity of the equivalence problem for deterministic streaming string transducers [1] or for deterministic multitape automata [8]. _The membership problem:_ In this paper we revisit the _membership problem_ (or the _definability problem_) for relations over words, i.e. given a relation \(R\) in a class \(\mathbf{C}_{2}\), does \(R\) belong to a smaller class \(\mathbf{C}_{1}\)? The membership problem for languages is a classical question in automata theory, in particular the question whether a given regular language belongs to a subclass of the regular languages [9, 10, 11, 12, 13], and the question whether a given language from a superclass of the regular languages is in fact regular [14, 15, 16]. For example, Schutzenberger's theorem effectively characterizes which regular languages are star-free [9]. Deciding whether an NFA accepts a star-free language is PSPACE-complete [17]. 
Another milestone result in this context is Valiant's regularity test for deterministic pushdown automata (DPDAs) [15]. Its running time is double exponential, improving on a previous triple exponential time algorithm by Stearns [18]. The only known lower bound is P-hardness inherited from emptiness problem, leaving an almost fifty year old double exponential gap between the upper and the lower bound. The membership problem for relations over words was first systematically studied by Carton, Choffrut, and Grigorieff [19] for subclasses of rational relations over finite words. Let us briefly introduce the most important subclasses, see Section II for formal definitions. A relation is _rational_ if it is recognized by a nondeterministic multitape automaton where the tapes are read asynchronously in one direction [20]. The deterministic variant of multitape automata [6] captures the class of _deterministic rational_ relations. Unfortunately, universality of rational relations and inclusion of deterministic rational relations are undecidable [21]. To overcome these undecidability barriers, one can put the additional restriction on the automaton that all heads read their input letter synchronously in parallel. Synchronous multitape automata recognize the _synchronous (rational) relations_[22], also called automatic or regular relations. Due to their effective closure under first-order operations, they enjoy pleasant algorithmic properties and form the basis of _automatic structures_[23, 24] and of _regular model checking_[25, 26]. The smallest class of relations we consider is formed by the _recognizable relations_, where the input words are processed by independent automata that synchronize only on the sequence of final states reached after reading the entire words. Alternatively, recognizable relations can be described as finite unions of Cartesian products of regular languages [27, Theorem 1.5]. All mentioned classes of relations over finite words are extended to infinite words, by adding a Buchi condition for nondeterministic automata or a parity condition for deterministic automata. The hierarchy of the considered subclasses of rational relations over finite and infinite words is displayed in Figure 1. However, as most interesting problems on rational relations, it is undecidable to test whether a given rational relation is recognizable, synchronous, or deterministic rational [21, 28]. Hence, we turn our attention to subclasses of deterministic rational relations. The following observation from [19] makes a simple connection between _binary_ rational relations \(R\subseteq\Sigma^{*}\times\Sigma^{*}\) and context-free languages: If \(R\) is rational then \(L_{R}=\{\mathsf{rev}(u)\#v\mid(u,v)\in R\}\) is context-free where \(\mathsf{rev}(u)\) is the reversal of \(u\); if \(R\) is deterministic rational then \(L_{R}\) is deterministic context-free. Furthermore, \(R\) is recognizable if and only if \(L_{R}\) is regular. Therefore, recognizability of binary deterministic rational relations can be easily reduced to regularity of DPDAs, which can be decided in double exponential time [15]. Using methods from the regularity algorithm, originally due to Stearns [18], one can also decide1 recognizability of deterministic rational relations of arbitrary arity [19]. Carton, Choffrut, and Grigorieff also present an algorithm to test whether a synchronous relation is recognizable [19], which runs in double exponential time (see the remark on its running time in [29]). Recently, Barcelo et al. 
determined the precise complexity of the same problem [30], see below. The question of how to decide whether a deterministic rational relation is synchronous has hitherto remained open.

Footnote 1: While the authors of [19] did not analyze the complexity of their algorithm, it is easy to see that their algorithm runs in elementary time for fixed arity \(k\).

_Recognizability and the infinite clique problem:_ It has been observed in [19] that the recognizability problem for subclasses of rational relations can be reduced to checking whether certain equivalence relations have finite index. For a relation \(R\subseteq(\Sigma^{*})^{k}\) and \(j\in[1,k-1]\) define the equivalence relations \(\sim_{j}^{R}\) on \((\Sigma^{*})^{j}\) by \[\mathbf{x}\sim_{j}^{R}\mathbf{y}\;\overset{\text{def}}{\Longleftrightarrow}\;\text{for all }\mathbf{z}\in(\Sigma^{*})^{k-j}\colon\big[(\mathbf{x},\mathbf{z})\in R\iff(\mathbf{y},\mathbf{z})\in R\big],\] resembling the Myhill-Nerode congruence for languages. If \(R\) is rational, then \(R\) is recognizable if and only if \(\sim_{j}^{R}\) has finite index for all \(j\in[1,k-1]\) [19, Proposition 3.8]. This characterization has been used in [30] to decide recognizability for synchronous relations, as follows. Given a DFA (NFA) for a synchronous relation \(R\), one can compute automata for the complement relations \(\not\sim_{j}^{R}\) in logarithmic space (polynomial space). Hence, to decide non-recognizability of \(R\) it suffices to test whether for a given _co-equivalence relation_ \(\not\sim\) there exists an infinite set \(X\) such that \(\mathbf{x}\not\sim\mathbf{y}\) for all distinct \(\mathbf{x},\mathbf{y}\in X\), in other words, whether \(\not\sim\) has an infinite clique. In fact, the infinite clique problem for arbitrary synchronous relations was shown to be NL-complete [30] (later simplified in [31]). However, in certain settings we need to exploit the fact that \(\not\sim_{j}^{R}\) is the complement of an equivalence relation \(\sim_{j}^{R}\). For example, Loding and Spinrath [29] have shown that the infinite clique problem for \(\omega\)-synchronous co-equivalence relations is decidable in double exponential time. This yields a double (triple) exponential time algorithm for the \(\omega\)-recognizability problem for \(\omega\)-synchronous relations given by (non)deterministic \(\omega\)-automata. Whether the infinite clique problem over _arbitrary_ \(\omega\)-synchronous relations is decidable is a longstanding open problem [32]. Another example where the difference between co-equivalence relations and arbitrary relations becomes apparent is the case of _tree-automatic relations_. It was proven in [31] that the infinite clique problem for tree-automatic relations is \(\mathsf{EXP}\)-complete; however, restricted to complements of transitive relations the infinite clique problem becomes \(\mathsf{P}\)-complete. This yields optimal complexity for the recognizability problem for tree-automatic relations: Recognizability is \(\mathsf{P}\)-complete for relations given by deterministic bottom-up or top-down tree automata, and \(\mathsf{EXP}\)-complete for nondeterministic tree automata.

_Contributions:_ We provide improved complexity and new decidability results for the membership problems in subclasses of rational relations over finite and infinite words. To do so, we refine the existing analyses in [19, 29] and identify patterns in the transducers which witness _non_-membership in the subclass.
As our first main result, we pinpoint the precise complexity of the \(\omega\)-recognizability problem of \(\omega\)-synchronous relations. **Theorem 1**.: _Given an \(\omega\)-synchronous relation \(R\) by a deterministic parity (resp. nondeterministic Buchi) automaton, it is \(\mathsf{NL}\)-complete (resp. \(\mathsf{PSPACE}\)-complete) to decide whether \(R\) is \(\omega\)-recognizable._ This matches the complexity of the recognizability problem of synchronous relations over finite words [30]. To prove Theorem 1, we follow the approach of [29] and solve the infinite clique problem for \(\omega\)-synchronous co-equivalence relations \(\not\sim\). Their algorithm constructs an automaton for a regular set of (ultimately periodic) representatives of \(\sim\), whose size is double (triple) exponential in the size of a (non)deterministic automaton for \(\not\sim\). We circumvent the construction of this large automaton and identify a simple pattern directly in the automaton for \(\not\sim\) which witnesses an infinite clique. Our second and third main result concerns decision problems on deterministic rational relations over finite words. We encounter two issues when applying the same reduction to the infinite clique problem on \(\sim_{j}^{R}\). If \(R\) is a binary relation, then it is not difficult to see that \(\not\sim_{1}^{R}\) is effectively rational since two runs on pairs \((x,z)\) and \((y,z)\) with a common second component \(z\) can be simulated in parallel by a 3-tape automaton reading \((x,y,z)\). However, to the best of the authors' knowledge, it is unknown whether the infinite clique problem for rational relations is decidable at all, even when restricted to co-equivalence relations. Moreover, if \(R\) has arity \(k>2\) then it is unclear whether the relations \(\not\sim_{j}^{R}\) are still rational. Instead of reducing to an infinite clique problem, we revisit the proof from [19] and obtain the following improved complexity bounds. **Theorem 2**.: _Given a \(k\)-ary deterministic rational relation \(R\), one can decide whether \(R\) is recognizable (i) in \(\mathsf{P}\) if \(k=2\), (ii) in \(\mathsf{coREXP}\) if \(k>2\) is fixed, and (iii) in \(\mathsf{coNEXP}\) if \(k\) is part of the input._ Here, \(\mathsf{coREXP}\subseteq\mathsf{coNEXP}\) is the class of all decision problems that can be solved by a randomized algorithm in exponential time, which may err on negative instances with probability at most \(1/2\). To show Theorem 2 we reduce the recognizability problem to the equivalence problem of deterministic \(k\)-tape automata. The reduction works in logspace if \(k=2\), but requires polynomial space for \(k>2\). Let us remark that the precise complexity of the equivalence problem is unknown: Harju and Karhumaki showed that testing equivalence of deterministic rational relations is in \(\mathsf{coNP}\)[33]. Moreover, Friedman and Greibach devised a polynomial time algorithm for binary relations [34], and for fixed arity \(k>2\) equivalence is decidable in randomized polynomial time [8]. For the reduction, we first observe that recognizability can also be described in terms of modified equivalence relations \(\approx_{j}^{R}\) over words instead of \(\sim_{j}^{R}\), which is defined over \(j\)-tuples of words. Second, we extract from [19] an automaton pattern that witnesses nonrecognizability. 
In addition, for the case of binary relations, we need the simple but crucial observation mentioned above that two runs on pairs of words with a common component can be simulated in parallel. Furthermore, we observe that over deterministic rational relations the equivalence problem is logspace reducible to the recognizability problem (Theorem 8). Essentially, this follows from a result by Friedman and Greibach [35], which reduces the equivalence problem of DPDAs restricted to a subclass \(\mathbf{C}\) to the problem of deciding membership of DPDAs in \(\mathbf{C}\). Hence, over binary deterministic rational relations the recognizability problem and the equivalence problem are in fact _logspace interreducible_. Moreover, we present a construction that transforms a deterministic multitape automaton into an equivalent double exponentially sized _independent multitape automaton_, assuming such an automaton exists, i.e. if the relation is recognizable. This provides an answer to the problem of how to compute _monadic decompositions_ for deterministic rational and synchronous relations, see [30, Section 6]. The construction is based on known ideas from [19] and imitates Valiant's construction of a double exponentially large DFA from a regular DPDA [15]. It seems that the missing piece for this construction was our characterization of recognizability via the equivalence relations \(\approx_{j}^{R}\). Finally, we prove that one can decide whether a deterministic rational relation is synchronous by a reduction to the recognizability problem, which was left open in [19].

**Theorem 3**.: _Given a \(k\)-ary deterministic rational relation \(R\), one can decide whether \(R\) is synchronous (i) in \(\mathsf{P}\) if \(k=2\) and (ii) in \((2k-4)\)-\(\mathsf{EXP}\) if \(k>2\)._

The intuition behind the algorithm is that the heads of a deterministic multitape automaton for a synchronous relation must have _bounded delay_ throughout the computation, see [22, Section 3], except if, from some point on, the components are independent of each other. To check the latter condition, we need the recognizability test from Theorem 2.

_Applications:_ As a corollary of our results on \(\omega\)-synchronous relations we will provide a \(\mathsf{PSPACE}\)-algorithm which tests whether a quantifier-free formula over mixed real-integer linear arithmetic \((\mathbb{R};\mathbb{Z},+,<,0,1)\) is _monadically decomposable_, i.e. equivalent to a Boolean combination of monadic formulas. Recently, recognizable relations have been featured in a decidable string constraint language, motivated by the verification of string-manipulating programs [2]. One semantic condition of the constraint language requires that the relations appearing in the constraint are _effectively_ recognizable, i.e. one can compute a representation as a union of Cartesian products of regular languages. If the given relations are deterministic rational, we can effectively decide recognizability (in polynomial time for binary relations) and compute the required representation in double exponential time.

## II Rational relations and their Subclasses

In the following we introduce the classes of rational relations, deterministic rational relations, synchronous relations, and recognizable relations, which are denoted by \(\mathbf{Rec}\subseteq\mathbf{Sync}\subseteq\mathbf{DRat}\subseteq\mathbf{Rat}\).

Fig. 1: The complexity landscape of deciding membership to subclasses of rational relations over finite and infinite words.
An arrow from \(\mathbf{C}_{2}\) to \(\mathbf{C}_{1}\) refers to the membership problem: given a relation from \(\mathbf{C}_{2}\), does it belong to \(\mathbf{C}_{1}\)? Membership of rational relations in any one of the three subclasses is undecidable [21, 28]. Dotted arrows mean that decidability of the problem is unknown.

Similarly, on infinite words we consider \(\omega\)-rational relations, deterministic \(\omega\)-rational relations, \(\omega\)-synchronous relations, and \(\omega\)-recognizable relations, denoted by \(\omega\)-\(\mathbf{Rec}\subseteq\omega\)-\(\mathbf{Sync}\subseteq\omega\)-\(\mathbf{DRat}\subseteq\omega\)-\(\mathbf{Rat}\). Since \(\omega\)-\(\mathbf{DRat}\) and \(\omega\)-\(\mathbf{Rat}\) will not be used in this work, we will not define these classes.

Let \(\Sigma\) be a finite alphabet. The product of \(k\) free monoids \((\Sigma^{*})^{k}\) forms a monoid with componentwise multiplication \((u_{1},\ldots,u_{k})(v_{1},\ldots,v_{k})=(u_{1}v_{1},\ldots,u_{k}v_{k})\). We often denote word tuples by boldface letters \(\mathbf{u}\) and denote the \(i\)-th entry of \(\mathbf{u}\) by \(u_{i}\). As usual we identify a pair of tuples \((\mathbf{u},\mathbf{v})\) with the concatenation of \(\mathbf{u}\) and \(\mathbf{v}\). Furthermore \(\mathbf{\varepsilon}=(\varepsilon,\ldots,\varepsilon)\) denotes a tuple of empty words of appropriate dimension. The _length_ of a word tuple, \(\|\mathbf{u}\|=\sum_{i=1}^{k}|u_{i}|\), is the total length of its entries. We assume familiarity with the basic models of (non)deterministic finite automata over finite and infinite words. Recall that the class of \(\omega\)-regular languages is described by _nondeterministic Buchi automata_ (NBAs) as well as _deterministic parity automata_ (DPAs) [36, Section 1].

_Rational relations:_ A \(k\)-tape automaton \(\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F)\) consists of a finite state set \(Q\), a finite alphabet \(\Sigma\), an initial state \(q_{0}\), a set \(F\subseteq Q\) of final states, and a finite set of transitions \(\Delta\subseteq Q\times(\Sigma^{*})^{k}\times Q\). A run of \(\mathcal{A}\) on a tuple \(\mathbf{w}\in(\Sigma^{*})^{k}\) from \(p_{0}\) to \(p_{n}\) is a sequence of transitions \(p_{0}\xrightarrow{\mathbf{w}_{1}}p_{1}\xrightarrow{\mathbf{w}_{2}}\cdots\xrightarrow{\mathbf{w}_{n}}p_{n}\) with \(\mathbf{w}=\mathbf{w}_{1}\mathbf{w}_{2}\cdots\mathbf{w}_{n}\). The relation \(R(\mathcal{A})\) accepted by \(\mathcal{A}\) consists of all tuples \(\mathbf{w}\in(\Sigma^{*})^{k}\) such that \(\mathcal{A}\) has a run on \(\mathbf{w}\) from the initial to a final state. Relations accepted by \(k\)-tape automata are called _rational_.

_Deterministic rational relations:_ For \(k\)-tape automata we define the sets \(H_{1},\ldots,H_{k}\) by \(H_{i}=\{\varepsilon\}^{i-1}\times\Sigma\times\{\varepsilon\}^{k-i}\). A \(k\)-tape automaton \(\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F)\) is _deterministic_ if (i) \(Q\) is equipped with a partition into sets \(Q=\bigcup_{i=1}^{k}Q_{i}\), (ii) the transition relation has the form \(\Delta\subseteq\bigcup_{i=1}^{k}Q_{i}\times H_{i}\times Q\), and (iii) for every \((p,h)\in Q_{i}\times H_{i}\) there exists exactly one transition \((p,h,q)\in\Delta\). For convenience, we represent \(\Delta\) as a transition function \(\delta\colon Q\times\Sigma\to Q\) instead. Observe that 1-tape (deterministic) automata are precisely NFAs (DFAs).
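To make the determinism conditions (i)-(iii) concrete, here is a minimal Python sketch (not from the paper) of a deterministic 2-tape automaton: every state is assigned the single tape it reads, and the transition function is total, with missing entries falling into a rejecting sink. The toy automaton is an assumption for illustration only; it accepts the identity relation over \(\{a,b\}^{*}\), with the symbol '#' standing in for the endmarker that is used below to define acceptance of a relation.

```python
# A deterministic 2-tape automaton in the sense of (i)-(iii): every state is
# assigned one tape, and the transition function is total (missing entries
# fall into the rejecting sink "rej"). Toy example: the identity relation.
TAPE = {"cmp": 0, "eq_a": 1, "eq_b": 1, "end": 1}      # which tape a state reads
DELTA = {
    ("cmp", "a"): "eq_a", ("cmp", "b"): "eq_b", ("cmp", "#"): "end",
    ("eq_a", "a"): "cmp", ("eq_b", "b"): "cmp", ("end", "#"): "acc",
}
FINAL = {"acc"}

def accepts(u: str, v: str) -> bool:
    """Simulate the unique run on (u#, v#), '#' playing the role of the endmarker."""
    tapes, pos, state = [u + "#", v + "#"], [0, 0], "cmp"
    while state not in ("acc", "rej"):
        t = TAPE[state]
        if pos[t] == len(tapes[t]):                    # the head ran off its tape
            return False
        state = DELTA.get((state, tapes[t][pos[t]]), "rej")
        pos[t] += 1
    return state in FINAL and pos == [len(tapes[0]), len(tapes[1])]

print(accepts("abba", "abba"), accepts("ab", "ba"), accepts("a", "aa"))  # True False False
```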
A relation \(R\subseteq(\Sigma^{*})^{k}\) is _deterministic rational_ if there exists a deterministic \(k\)-tape automaton \(\mathcal{A}\) over the alphabet \(\Sigma\cup\{\dashv\}\), where \(\dashv\notin\Sigma\) is an endmarker, such that \(R(\mathcal{A})=\{(w_{1}\dashv,\ldots,w_{k}\dashv)\mid(w_{1},\ldots,w_{k})\in R\}\).

**Lemma 1**.: _The equivalence relation \(\approx_{I}^{R}\) has finite index if and only if \(R\) conforms to \(\{I,[1,k]\setminus I\}\)._

If \(R\) conforms to \(P\) then clearly \(R\) conforms to any partition \(P^{\prime}\) that is coarser than \(P\). The _coarsest refinement_ \(P_{1}\sqcap P_{2}\) of two partitions is the set of all nonempty intersections \(I_{1}\cap I_{2}\) where \(I_{1}\in P_{1}\), \(I_{2}\in P_{2}\).

**Theorem 4** ([37]).: _If \(R\subseteq D^{k}\) conforms to two partitions \(P_{1},P_{2}\) then also to their coarsest refinement \(P_{1}\sqcap P_{2}\)._

The partition of \([1,k]\) _generated_ by subsets \(I_{1},\ldots,I_{n}\subseteq[1,k]\) is the coarsest refinement \(P_{1}\sqcap\cdots\sqcap P_{n}\) of the partitions \(P_{j}=\{I_{j},[1,k]\setminus I_{j}\}\). For example, the discrete partition is clearly generated by the singleton sets \(\{1\},\ldots,\{k-1\}\). It is also generated by all intervals \([1,j]\) for \(j\in[1,k-1]\).

**Lemma 2**.: _If a partition \(P\) is generated by \(I_{1},\ldots,I_{n}\subseteq[1,k]\) then \(R\subseteq D^{k}\) conforms to \(P\) if and only if \(\approx_{I_{j}}^{R}\) has finite index for all \(j\in[1,n]\)._

Proof.: By Theorem 4, \(R\) conforms to \(P\) if and only if \(R\) conforms to \(\{I_{j},[1,k]\setminus I_{j}\}\) for each \(j\in[1,n]\). By Lemma 1 this is equivalent to finite index of \(\approx_{I_{j}}^{R}\) for all \(j\in[1,n]\).

Choosing the discrete partition on \([1,k]\) as \(P\) in Lemma 2 we obtain the equivalence of 2) and 3) in Proposition 1.

## IV Deciding \(\omega\)-Recognizability in \(\omega\)-Sync

The goal of this section is to prove Theorem 1. The lower bounds are inherited from the finite-word case by padding, since recognizability of synchronous relations is \(\mathsf{PSPACE}\)-complete (resp. \(\mathsf{NL}\)-complete) if the relation is given by an NFA (resp. DFA) [30]. For the upper bounds we follow the same approach as in [30, 31] for the recognizability problem for synchronous relations. Given an (\(\omega\)-)synchronous relation \(R\), the complements \(\not\approx_{j}^{R}\) are again (\(\omega\)-)synchronous.
In fact, if \(R\) is given by a (non)deterministic automaton, then a nondeterministic automaton for \(\not\approx_{j}^{R}\) can be computed in logspace (polynomial space): Observe that \(x\not\approx_{j}^{R}y\) if and only if \[\exists\boldsymbol{z}\in(\Sigma^{\omega})^{k-1}\colon(x\odot_{j}\boldsymbol{z}\in R\ \wedge\ y\odot_{j}\boldsymbol{z}\notin R)\ \vee\ (x\odot_{j}\boldsymbol{z}\notin R\ \wedge\ y\odot_{j}\boldsymbol{z}\in R).\] If \(R\) is given by a DPA \(\mathcal{B}\), we can construct a DPA for \((\Sigma^{\omega})^{k}\setminus R\) in logarithmic space and convert it into an NBA \(\bar{\mathcal{B}}\) [38]. If \(R\) is given by an NBA \(\mathcal{B}\) then this step incurs an exponential blowup but can still be done in polynomial space [39]. From \(\mathcal{B}\) and \(\bar{\mathcal{B}}\) we can construct NBAs for the relations \(\not\approx_{j}^{R}\) in logspace (intersections, unions, and projections of NBAs are logspace computable). By Proposition 1 it remains to check whether for some \(j<k\) the relation \(\not\approx_{j}^{R}\) has an _infinite clique_, i.e. an infinite sequence of pairwise distinct words \(w_{1},w_{2},\ldots\) such that \(w_{i_{1}}\not\approx_{j}^{R}w_{i_{2}}\) for all \(i_{1}<i_{2}\). In [31] it is shown that the infinite clique problem can be solved in nondeterministic logspace for arbitrary synchronous relations over finite words. For arbitrary \(\omega\)-synchronous relations, it is a longstanding open problem whether the infinite clique problem is decidable. However, in [29] it is shown to be decidable in double exponential time for \(\omega\)-synchronous _co-equivalence relations_, i.e. complements of equivalence relations. In the following we will show that for those relations the infinite clique problem can even be solved in nondeterministic logspace. Applying this result to the relations \(\not\approx_{j}^{R}\) yields an \(\mathsf{NL}\) respectively \(\mathsf{PSPACE}\) algorithm for \(\omega\)-recognizability of \(\omega\)-synchronous relations, depending on whether \(R\) is given by a DPA or an NBA.

**Theorem 5**.: _It is \(\mathsf{NL}\)-complete to decide, given a nondeterministic Buchi automaton for an \(\omega\)-synchronous co-equivalence relation \(\bar{E}\), whether \(\bar{E}\) has an infinite clique._

The rest of this section is devoted to proving Theorem 5. One challenge in finding infinite cliques in \(\omega\)-synchronous relations is how to even finitely represent an infinite clique, i.e. an infinite sequence of infinite words. A strong indicator that this is indeed difficult is that there are \(\omega\)-synchronous relations which have infinite cliques but no _regular_ infinite clique. One such example is the complement of the _equal ends_ relation \(\sim_{\mathsf{e}}\) on \(\Sigma^{\omega}\) where \(u\sim_{\mathsf{e}}v\) if and only if there exist \(x,y\in\Sigma^{*}\), \(z\in\Sigma^{\omega}\) with \(|x|=|y|\) and \(u=xz\) and \(v=yz\).
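To illustrate the construction of an automaton for the complement relation described above, here is a deliberately simplified finite-word sketch (an illustration only, not the paper's \(\omega\)-word construction): \(R\) is assumed to be a length-preserving binary synchronous relation given by a complete DFA over the product alphabet, the toy choice below being the equality relation. A nondeterministic automaton for the complement guesses the common component \(z\) letter by letter, runs two copies of the DFA in parallel, and accepts if exactly one of them accepts.

```python
from itertools import product

SIGMA = "ab"

# Complete DFA over the product alphabet for a length-preserving relation R;
# as a toy assumption, R is the equality relation { (z, z) }.
def delta(p, a, b):
    return "eq" if p == "eq" and a == b else "neq"
INIT, FINAL = "eq", {"eq"}

def not_approx_accepts(x, y):
    """NFA for { (x, y) : exists z with exactly one of (x,z), (y,z) in R },
    evaluated by subset simulation; the guessed letters of z are the only
    source of nondeterminism."""
    assert len(x) == len(y)
    states = {(INIT, INIT)}                 # pairs of runs on (x,z) and (y,z)
    for a, b in zip(x, y):
        states = {(delta(p, a, c), delta(q, b, c))
                  for (p, q) in states for c in SIGMA}
    return any((p in FINAL) != (q in FINAL) for p, q in states)

# For the equality relation, x and y are inequivalent exactly when x != y.
for x, y in product(["aa", "ab", "ba", "bb"], repeat=2):
    assert not_approx_accepts(x, y) == (x != y)
print("ok")
```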
Kuske and Lohrey observed that, although the co-equivalence relation \(\not\sim_{\mathsf{e}}\) has infinite cliques, it contains no infinite clique that forms a regular set of \(\omega\)-words. The approach of [29] therefore works with a regular language \(L_{\#}(E)\) of ultimately periodic representatives \(u\#v\) (standing for the \(\omega\)-word \(uv^{\omega}\)) of the equivalence classes of \(E\): one decomposes \(L_{\#}(E)\) into a finite union of products \(P_{i}\{\#\}S_{j}\) of regular languages \(P_{i}\) and \(S_{j}\) and checks whether they are slender, where a language is _slender_ if the number of its words of each length is bounded by a constant. However, an automaton for the set of representatives \(L_{\#}(E)\) might be double exponentially large, since its size is exponential in the size of the automaton for \(E_{\#}\), which in turn is exponential in the size of the automaton for \(\bar{E}\) via a construction using transition profiles. This results in a double exponential time algorithm given an automaton for \(\bar{E}\). We deviate from this approach and use the slenderness property of the languages \(P_{i}\) and \(S_{j}\) only to identify the shape of unbounded cliques in \(\bar{E}\). In a second step, we search for patterns in the automaton for \(\bar{E}\) that witness unbounded cliques in \(\bar{E}\). The existence of these patterns can be checked in nondeterministic logspace given an automaton for \(\bar{E}\).

**Lemma 4**.: _A regular language \(L\subseteq\Sigma^{*}\) is not slender if and only if there are words \(u,v,w,x,y\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\) such that \(uv^{*}wx^{*}y\subseteq L\)._

Proof.: Consider the minimal trimmed DFA for \(L\). By [41, Theorem 4.23], \(L\) is not slender if and only if the DFA contains two distinct nonempty cycles \(p\xrightarrow{v}p\), \(q\xrightarrow{x}q\) (distinct means that their sets of transitions are distinct), and a run \(p\xrightarrow{w}q\). In particular, \(uv^{*}wx^{*}y\) is contained in \(L\) where \(u\) and \(y\) are words read from the initial state to \(p\), and from \(q\) to some final state. We can ensure that \(|w|\leq|v|=|x|\) by replacing \(v\) and \(x\) by \(v^{k|x|}\) and \(x^{k|v|}\), respectively, for a sufficiently large number \(k\). Furthermore, we can ensure \(|v|=|w|=|x|\) by extending the run \(p\xrightarrow{w}q\) to \(p\xrightarrow{wx_{1}}r\) for some prefix \(x_{1}\) of \(x\) with \(|wx_{1}|=|x|\) and rebasing the cycle \(q\xrightarrow{x}q\) on state \(r\).

The following lemma distinguishes two types of cliques: On the one hand, there are cliques whose words differ in a finite prefix but have equal ends; on the other hand, there are cliques whose words do not have equal ends. For example, consider the equality relation \(=\). In the complement \(\neq\) we can find the cliques \((a^{i}b^{n-i}a^{\omega})_{0\leq i\leq n}\) for all \(n\in\mathbb{N}\) of words that only differ in a finite prefix.
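As an aside, the cycle criterion used in the proof of Lemma 4 can be phrased as an executable test: the language of a trimmed DFA fails to be slender exactly when the trimmed transition graph contains two distinct cycles connected by a (possibly empty) path. The Python sketch below is only meant to make this criterion concrete under simplifying assumptions (the DFA is given as a dictionary and a cubic-time transitive closure handles reachability); it is not the paper's logspace procedure.

```python
from collections import Counter

def is_slender(sigma, delta, init, finals):
    """True iff the DFA (delta: dict (state, letter) -> state) accepts a slender
    language, i.e. its trimmed graph has no two distinct connected cycles."""
    # States reachable from the initial state.
    reach, stack = {init}, [init]
    while stack:
        p = stack.pop()
        for a in sigma:
            q = delta.get((p, a))
            if q is not None and q not in reach:
                reach.add(q); stack.append(q)
    # States from which a final state is reachable.
    coreach, changed = set(finals), True
    while changed:
        changed = False
        for (p, _), q in delta.items():
            if q in coreach and p not in coreach:
                coreach.add(p); changed = True
    trim = reach & coreach
    trans = [(p, a, q) for (p, a), q in delta.items() if p in trim and q in trim]

    # Transitive closure of the trimmed transition graph.
    nodes = sorted(trim); idx = {p: i for i, p in enumerate(nodes)}; n = len(nodes)
    R = [[False] * n for _ in range(n)]
    for p, _, q in trans:
        R[idx[p]][idx[q]] = True
    for k in range(n):
        for i in range(n):
            if R[i][k]:
                for j in range(n):
                    R[i][j] = R[i][j] or R[k][j]

    # Strongly connected components as classes of mutual reachability.
    comp = {i: frozenset({i} | {j for j in range(n) if R[i][j] and R[j][i]}) for i in range(n)}
    internal = Counter()                               # transitions inside each SCC
    for p, _, q in trans:
        if comp[idx[p]] == comp[idx[q]]:
            internal[comp[idx[p]]] += 1
    # (a) An SCC with more internal transitions than states has two distinct cycles.
    if any(internal[c] > len(c) for c in set(comp.values())):
        return False
    # (b) A path between two different cyclic SCCs also gives two distinct connected cycles.
    cyclic = {c for c in set(comp.values()) if internal[c] > 0}
    return not any(R[i][j] and comp[i] != comp[j] and comp[i] in cyclic and comp[j] in cyclic
                   for i in range(n) for j in range(n))

# a* b a* is not slender (n words of each length n + 1); (ab)* is slender.
print(is_slender("ab", {("0", "a"): "0", ("0", "b"): "1", ("1", "a"): "1"}, "0", {"1"}))  # False
print(is_slender("ab", {("0", "a"): "1", ("1", "b"): "0"}, "0", {"0"}))                   # True
```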
On the other hand, if we consider the equal ends \(\sim_{\mathsf{e}}\) equivalence relation then we observe that it does not suffice to look at the finite prefixes of words to determine whether they are in relation or not. In the complement \(\not\sim_{\mathsf{e}}\) we can find the cliques \(((a^{i}b^{n-i})^{\omega})_{0\leq i\leq n}\) for all \(n\in\mathbb{N}\) of words that differ in the periodic part. **Lemma 5**.: \(\bar{E}\) _contains an infinite clique if and only if \(\bar{E}\) contains cliques of the form \((uv^{i}wx^{n-i}yz^{\omega})_{0\leq i\leq n}\) or \((z(uv^{i}wx^{n-i}y^{\omega})_{0\leq i\leq n}\) for all \(n\in\mathbb{N}\) where \(u,v,w,x,y,z\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\)._ Proof.: The "if" direction is immediate since the existence of such cliques imply that \(\bar{E}\) contains unbounded cliques and therefore also an infinite clique. For the "only if" direction assume that \(\bar{E}\) contains an infinite clique which means that \(E\) has infinite index. Then by Lemma 3 there are non-empty regular languages \(P,S\subseteq\Sigma^{*}\) with \(P\{\#\}S\subseteq L_{\#}(E)\) such that \(P\) or \(S\) is not slender. By Lemma 4 there are \(u,v,w,x,y\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\) such that \(uv^{*}wx^{*}y\subseteq P\) or \(uv^{*}wx^{*}y\subseteq S\). If \(uv^{*}wx^{*}y\subseteq P\) we pick a word \(z\in S\) from the non-empty language \(S\). Then \(uv^{*}wx^{*}y\#z\subseteq L_{\#}(E)\). Since all words of the form \(uv^{i}wx^{j}y\) are pairwise different, \(\bar{E}\) contains the clique \((uv^{i}wx^{n-i}yz^{\omega})_{0\leq i\leq n}\) for all \(n\in\mathbb{N}\). Similarly, if \(uv^{*}wx^{*}y\subseteq S\), we pick a word \(z\in P\) and find the cliques \((z(uv^{i}wx^{n-i}y)^{\omega})_{0\leq i\leq n}\) of size \(n\) in \(\bar{E}\). A _3-cycles pattern_ consists of states \(q_{1},q_{2},q_{3},q_{4},q_{5}\in Q\) and words \(u,v,w,x,y\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\) such that \[q_{1}\xrightarrow{\big{[}\begin{smallmatrix}u\\ u\end{smallmatrix}\big{]}}q_{2},\quad q_{2}\xrightarrow{\big{[}\begin{smallmatrix}v \\ v\end{smallmatrix}\big{]}}q_{2},\quad q_{2}\xrightarrow{\big{[}\begin{smallmatrix} w\\ v\end{smallmatrix}\big{]}}q_{3},\quad q_{3}\xrightarrow{\big{[}\begin{smallmatrix}x \\ v\end{smallmatrix}\big{]}}q_{3},\] \[q_{3}\xrightarrow{\big{[}\begin{smallmatrix}x\\ v\end{smallmatrix}\big{]}}q_{4},\quad q_{4}\xrightarrow{\big{[}\begin{smallmatrix} x\\ x\end{smallmatrix}\big{]}}q_{4},\quad q_{4}\xrightarrow{\big{[}\begin{smallmatrix}y\\ y\end{smallmatrix}\big{]}}q_{5}.\] We say that the above is a _3-cycles pattern from \(q_{1}\) to \(q_{5}\)_. The 3-cycles pattern is called _final_ if one of the runs \(q_{1}\xrightarrow{\big{[}\begin{smallmatrix}u\\ u\end{smallmatrix}\big{]}}q_{2}\), \(q_{2}\xrightarrow{\big{[}\begin{smallmatrix}w\\ v\end{smallmatrix}\big{]}}q_{3}\), \(q_{3}\xrightarrow{\big{[}\begin{smallmatrix}x\\ w\end{smallmatrix}\big{]}}q_{4}\), \(q_{4}\xrightarrow{\big{[}\begin{smallmatrix}y\\ y\end{smallmatrix}\big{]}}q_{5}\) visits a final state. If there exists a (final) 3-cycles pattern from \(p\) to \(q\) we write \(q_{1}\xrightarrow{\text{3CP}}q_{5}\) and \(q_{1}\xrightarrow{\text{3CP}}_{F}q_{5}\), respectively. Clearly \(p\xrightarrow{\text{3CP}}q\) implies that the automaton contains \(p\)-\(q\)-runs reading \(uv^{i}wx^{n-i}y\otimes uv^{j}wx^{n-j}y\) for all \(i<j\leq n\), for some \(u,v,w,x,y\) with \(|v|=|w|=|x|>0\) and \(v\neq w\). To prove that the converse also holds, we use transition profiles. 
A _transition profile_\(\tau=(\Rightarrow,\xrightarrow{F})\) over \(\mathcal{A}\) consists of two binary relations \(\Rightarrow,\xrightarrow{F}\) over \(Q\). For each word \(w\in(\Sigma^{2})^{*}\) we define the transition profile \(\tau(w)\) such that \(p\Rightarrow q\) if and only if there exists a run \(p\xrightarrow{w}q\), and \(p\xrightarrow{F}q\) if and only if there exists a run \(p\xrightarrow{w}q\) visiting a final state. It is easy to see that \(\tau(uv)\) is determined by \(\tau(u)\) and \(\tau(v)\), and therefore the set \(\mathsf{TP}(\mathcal{A})=\{\tau(w)\mid w\in(\Sigma^{2})^{*}\}\) forms a finite monoid with the well-defined operation \(\tau(u)\cdot\tau(v)=\tau(uv)\) and neutral element \(\tau(\varepsilon)\). An element \(s\) in a monoid \(M\) is _idempotent_ if \(s^{2}=s\). Every finite monoid \(M\) has an _idempotent exponent_, i.e. a number \(n\geq 1\) so that \(s^{n}\) is idempotent for all \(s\in M\). **Lemma 6**.: _Let \(p\in\mathbb{N}\) be the idempotent exponent of \(\mathsf{TP}(\mathcal{A})\). If for words \(u,v,w,x,y,z\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\) and states \(q_{1},q_{5}\in Q\) there exists a run \(\rho\) in \(\mathcal{A}\) from \(q_{1}\) to \(q_{5}\) reading_ \[\big{[}\begin{smallmatrix}u\\ u\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}v^{m}\\ v^{m}\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}v^{p-1}\\ v^{p}\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}x^{m}\\ v^{m}\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}x^{p}\\ v^{m-1}w\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}x^{m}\\ x^{m}\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}y\\ y\end{smallmatrix}\big{]}\] _then \(q_{1}\xrightarrow{\text{3CP}}q_{5}\). If \(\rho\) visits a final state then \(q_{1}\xrightarrow{\text{3CP}}_{F}q_{5}\)._ Proof.: We will use the fact that \(\big{[}\begin{smallmatrix}v^{p}\\ v^{p}\end{smallmatrix}\big{]}\), \(\big{[}\begin{smallmatrix}x^{p}\\ v^{p}\end{smallmatrix}\big{]}\), and \(\big{[}\begin{smallmatrix}x^{p}\\ x^{p}\end{smallmatrix}\big{]}\) are idempotent in \(\mathsf{ and similarly for the factors \(\left[\begin{smallmatrix}x\\ v\end{smallmatrix}\right]\) and \(\left[\begin{smallmatrix}v\\ v\end{smallmatrix}\right]\). Therefore we find intermediate states \(q_{2},q_{3},q_{4}\) so that \(\rho\) has the form \[\rho_{1}\colon q_{1}\xrightarrow{\left[\begin{smallmatrix}uv^{i_{1}}\\ uv^{i_{1}}\end{smallmatrix}\right]}q_{2}, \sigma_{2}\colon q_{2}\xrightarrow{\left[\begin{smallmatrix}v^{i_{2}} \\ v^{i_{2}}\end{smallmatrix}\right]}q_{2}, \tag{2}\] \[\rho_{2}\colon q_{2}\xrightarrow{\left[\begin{smallmatrix}v^{i_{3}} \\ v^{i_{3}}\end{smallmatrix}\right]\left[\begin{smallmatrix}w\\ v^{i_{1}}\end{smallmatrix}\right]}q_{3}, \sigma_{3}\colon q_{3}\xrightarrow{\left[\begin{smallmatrix}v^{j_{2}} \\ v^{j_{2}}\end{smallmatrix}\right]}q_{3}\] \[\rho_{3}\colon q_{3}\xrightarrow{\left[\begin{smallmatrix}x^{j_{3} }\\ v^{j_{3}}\end{smallmatrix}\right]\left[\begin{smallmatrix}x^{k_{1}}\\ x^{k_{1}}\end{smallmatrix}\right]}q_{4}, \sigma_{4}\colon q_{4}\xrightarrow{\left[\begin{smallmatrix}x^{k_{2}} \\ x^{k_{2}}\end{smallmatrix}\right]}q_{4},\] \[\rho_{4}\colon q_{4}\xrightarrow{\left[\begin{smallmatrix}x^{k_{3} }y\\ x^{k_{3}}\end{smallmatrix}\right]}q_{5}\] for some numbers \(i_{1},j_{1},k_{1},i_{3},j_{3},k_{3}\geq 0\) and \(i_{2},j_{2},k_{2}\geq 1\). 
Since \(\left[\begin{smallmatrix}v\\ v\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}v\\ v\end{smallmatrix}\right]\), and \(\left[\begin{smallmatrix}x\\ x\end{smallmatrix}\right]\) are idempotent in \(\mathsf{TP}(\mathcal{A})\), there exist runs \(\tilde{\rho}_{1},\tilde{\sigma}_{2},\tilde{\rho}_{2},\tilde{\sigma}_{3},\tilde {\rho}_{3},\tilde{\sigma}_{4},\tilde{\rho}_{4}\) as in Equation (2) for \(i_{\ell}=j_{\ell}=k_{\ell}=1\) for all \(\ell\in[1,3]\). Then the five words \(uv,v^{3},vwx,x^{3},xy\) form the required 3-cycles pattern from \(q_{1}\) to \(q_{5}\). Assume that \(\rho\) visits a final state. We can ensure that the final state occurs in one of the subruns \(\rho_{i}\) in Equation (2): If the final state occurs in one of the cycles \(\sigma_{i}\) then we can append the cycle \(\sigma_{i}\) to \(\rho_{i}\). By the \(\underline{F}\)-component of transition profiles we can then choose the run \(\tilde{\rho}_{i}\) to visit a final state again, and therefore \(q_{1}\xrightarrow{\text{3CP}}_{F}q_{5}\). The next lemma shows that a 3-cycles pattern can be used to detect unbounded cliques in \(\bar{E}\) where the words differ in the finite prefix. **Lemma 7**.: \(\bar{E}\) _contains cliques \((uv^{i}wx^{n-i}yz^{\omega})_{0\leq i\leq n}\) for all \(n\in\mathbb{N}\) where \(u,v,w,x,y,z\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\) if and only if there is a 3-cycles pattern in \(\mathcal{A}\) from \(q_{0}\) to some state \(q\in Q\) such that \(\left[\begin{smallmatrix}z^{\prime\omega}\\ z^{\prime\omega}\end{smallmatrix}\right]\) is accepted from \(q\) for some word \(z^{\prime}\in\Sigma^{*}\)._ Proof.: We first observe that \(\bar{E}\) contains cliques \((uv^{i}wx^{n-i}yz^{\omega})_{0\leq i\leq n}\) as on the LHS of the lemma if and only if \[\left[\begin{smallmatrix}u\\ v\end{smallmatrix}\right]\left[\begin{smallmatrix}v\\ v\end{smallmatrix}\right]\left[\begin{smallmatrix}v\\ v\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ v\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ v\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ x\end{smallmatrix}\right]\left[\begin{smallmatrix}v\\ y\end{smallmatrix}\right]\left[\begin{smallmatrix}z\\ z\end{smallmatrix}\right]^{\omega}\subseteq L(\mathcal{A}) \tag{3}\] for some \(u,v,w,x,y,z\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\). Then the "if" direction of the lemma follows directly. For the "only if" direction assume that Equation (3) holds. Let \(n\stackrel{{\text{\tiny{def}}}}{{\equiv}}|Q|\) and \(p\in\mathbb{N}\) be the idempotent exponent of \(\mathsf{TP}(\mathcal{A})\). Then there is a run of \(\mathcal{A}\) on \(\left[\begin{smallmatrix}u(v^{2})^{n}v^{p-1}w(x^{p})^{2n+1}y\\ u(v^{2})^{2n+1}v^{p-1}w(x^{p})^{n}y\\ z^{2}\end{smallmatrix}\right]\) from \(q_{0}\) to some state \(q\in Q\) such that \(\left[\begin{smallmatrix}z^{\prime\omega}\\ z^{\prime\omega}\end{smallmatrix}\right]\) is accepted from \(q\). Applying Lemma 6 yields the desired 3-cycles pattern. The following lemma shows which pattern occurs if the words differ in the periodic part. In Lemma 9 we will see that this pattern is also sufficient to show the existence of unbounded cliques. 
**Lemma 8**.: _If \(\bar{E}\) contains cliques \((z(uv^{i}wx^{n-i}y)^{\omega})_{0\leq i\leq n}\) for all \(n\in\mathbb{N}\) where \(u,v,w,x,y,z\in\Sigma^{*}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\), then there are states \(q_{1},\ldots,q_{\ell}\in Q\) such that_

* \(q_{0}\xrightarrow{\left[\begin{smallmatrix}z\\ z\end{smallmatrix}\right]}q_{1}\),
* \(q_{1}\xrightarrow{\text{3CP}}q_{2}\xrightarrow{\text{3CP}}q_{3}\xrightarrow{\text{3CP}}\ldots\xrightarrow{\text{3CP}}q_{\ell-1}\xrightarrow{\text{3CP}}_{F}q_{\ell}\),
* \(q_{k}=q_{\ell}\) _for some_ \(k<\ell\).

Proof.: Suppose that \(\bar{E}\) contains cliques \((z(uv^{i}wx^{n-i}y)^{\omega})_{0\leq i\leq n}\) with \(|v|=|w|=|x|>0\) and \(v\neq w\). Let \(t\) be the word from Equation (1). Since \(\left[\begin{smallmatrix}z\\ z\end{smallmatrix}\right]t^{\omega}\) is accepted by \(\mathcal{A}\), it has an accepting run of the form

\[q_{0}\xrightarrow{\left[\begin{smallmatrix}z\\ z\end{smallmatrix}\right]}q_{1}\xrightarrow{t}q_{2}\xrightarrow{t}q_{3}\xrightarrow{t}\cdots.\]

Let \(m\in\mathbb{N}\) such that \(\{q_{0},\ldots,q_{m}\}=\{q_{i}\mid i\in\mathbb{N}\}\), i.e. all states \(q_{i}\) have been visited at least once after reaching \(q_{m}\). Since the run visits some final state infinitely often, there exists \(\ell>m\) such that the subrun between \(q_{\ell-1}\) and \(q_{\ell}\) visits a final state. Furthermore, there exists \(k\leq m\) such that \(q_{k}=q_{\ell}\).

**Lemma 9**.: _If there are states \(q_{1},\ldots,q_{\ell}\in Q\) such that_

* \(q_{0}\xrightarrow{\left[\begin{smallmatrix}z\\ z\end{smallmatrix}\right]}q_{1}\),
* \(q_{1}\xrightarrow{\text{3CP}}q_{2}\xrightarrow{\text{3CP}}q_{3}\xrightarrow{\text{3CP}}\ldots\xrightarrow{\text{3CP}}q_{\ell-1}\xrightarrow{\text{3CP}}_{F}q_{\ell}\),
* \(q_{k}=q_{\ell}\) _for some_ \(k<\ell\),

_then \(\bar{E}\) contains unbounded cliques._

Proof.: Let \(u_{j},v_{j},w_{j},x_{j},y_{j}\in\Sigma^{*}\) be the words of the 3-cycles pattern from \(q_{j}\) to \(q_{j+1}\) for \(j\in[1,\ell-1]\). Define \(t_{j}(i,n)\stackrel{\text{def}}{=}u_{j}v_{j}^{i}w_{j}x_{j}^{n-i}y_{j}\) for all \(0\leq i\leq n\) and \(1\leq j<\ell\). Then

\[(t_{1}(i,n)\cdots t_{k-1}(i,n)\big(t_{k}(i,n)\cdots t_{\ell-1}(i,n)\big)^{\omega})_{0\leq i\leq n}\]

forms a clique in \(\bar{E}\) for each \(n\in\mathbb{N}\).

We are now ready to prove Theorem 5. Since \(\bar{E}\) has an infinite clique if and only if it has unbounded cliques, by Lemmas 5 and 7 to 9 it suffices to check whether \(\mathcal{A}\) contains the pattern in Lemma 7 or the pattern in Lemmas 8 and 9. First, we can check in \(\mathsf{NL}\) whether, given states \(p,q\in Q\), there exists a word \(z\in\Sigma^{*}\) with \(p\xrightarrow{\left[\begin{smallmatrix}z\\ z\end{smallmatrix}\right]}q\), and whether there exists a word \(z\in\Sigma^{*}\) such that \(\left[\begin{smallmatrix}z^{\omega}\\ z^{\omega}\end{smallmatrix}\right]\) is accepted from \(q\). Furthermore, given two states \(q_{1},q_{5}\in Q\), we can check whether \(q_{1}\xrightarrow{\text{3CP}}q_{5}\) in \(\mathsf{NL}\) as follows: Construct in logspace an NFA

sufficient conditions on \(\mathfrak{A}\) under which the problem of testing monadic decomposability becomes decidable [43, Theorem 3]. Under these conditions a formula \(\varphi\) defining a \(k\)-ary relation \(R\) is monadically decomposable if and only if \(\approx_{[1,j]}^{R}\) has finite index for all \(j<k\) [43, Lemma 4]. In its generality the algorithm in [43] is not very efficient since it uses an unstructured enumeration procedure to find so called _definable invariant Skolem functions_.
There is a more straightforward procedure for monadic decomposability if \(\mathfrak{A}\) is \(\omega\)-automatic [24], i.e. its domain and relations are given by \(\omega\)-synchronous automata. In this setting we can translate \(\varphi\) into an \(\omega\)-synchronous automaton for the relation \(R\) over infinite words defined by \(\varphi\). Then, we construct automata for the relations \(\not\approx_{[1,j]}^{R}\) and solve the infinite clique problem by Theorem 5. In fact, if \(\varphi\) is quantifier-free (this assumption is also made in [42]), the automata for \(\not\approx_{[1,j]}^{R}\) are constructible in polynomial space. Combined with the NL-algorithm for the infinite clique problem, this yields \(\mathsf{PSPACE}\)-complexity for testing monadic decomposability.

An example for an \(\omega\)-automatic structure that satisfies the conditions of [43, Theorem 3] is real linear arithmetic (RLA) \((\mathbb{R};+,<,0,1)\). Its extension \((\mathbb{R};\mathbb{Z},+,<,0,1)\) to mixed real-integer linear arithmetic (RILA) is still \(\omega\)-automatic [44], but it is not immediately clear whether it fulfills the conditions of [43, Theorem 3]. However, we can use the fact that in the (standard) \(\omega\)-automatic presentation of RILA ultimately periodic words are rational numbers and therefore definable in RILA.

**Lemma 10**.: _Let \(\mathfrak{A}\) be an \(\omega\)-automatic structure with an \(\omega\)-automatic presentation over alphabet \(\Sigma\) such that each ultimately periodic word over \(\Sigma\) that represents an element in the domain is definable in \(\mathfrak{A}\). Then a formula \(\varphi\) in \(\mathfrak{A}\) defining the relation \(R\subseteq(\Sigma^{\omega})^{k}\) is monadically decomposable if and only if \(\approx_{j}^{R}\) has finite index for all \(j\in[1,k]\)._

Proof.: The "only if" direction is clear. For the "if" direction assume that \(\approx_{j}^{R}\) has finite index for all \(j\in[1,k]\). It was observed in [29, Proof of Lemma 3] that every finite-index \(\omega\)-synchronous equivalence relation has a set of ultimately periodic representatives. Let \(A_{1},\ldots,A_{k}\) be such representative sets for \(\approx_{1}^{R},\ldots,\approx_{k}^{R}\). Now, \(\varphi(x_{1},\ldots,x_{k})\) is equivalent to the following Boolean combination of monadic formulas:

\[\bigvee_{(a_{1},\ldots,a_{k})\in R\cap(A_{1}\times\cdots\times A_{k})}\bigwedge_{j=1}^{k}x_{j}\approx_{j}a_{j}\]

The statement \(x_{j}\approx_{j}a_{j}\) is definable since \(\approx_{j}\) is definable and the ultimately periodic words in \(A_{j}\) are definable in \(\mathfrak{A}\).

Therefore, under the assumption in Lemma 10 (e.g. for RILA) we can use the same approach from above to decide monadic decomposability for quantifier-free formulas in polynomial space.

## V Deciding Recognizability in \(\mathbf{DRat}\)

In this section we will show how to test whether a deterministic rational relation is recognizable (Theorem 2) and, if so, how to construct an equivalent independent multitape automaton. Notice that we can ignore the endmarkers in the definition of \(\mathbf{DRat}\) since a relation \(R\) is recognizable if and only if \(\{\boldsymbol{w}(\dashv,\ldots,\dashv)\mid\boldsymbol{w}\in R\}\) is recognizable. Hence, for the rest of this section let \(R\subseteq(\Sigma^{*})^{k}\) with \(R=R(\mathcal{A})\) for some deterministic \(k\)-tape automaton \(\mathcal{A}=(Q,\Sigma,q_{0},\delta,F)\) with \(n\) states. Furthermore we assume that all states are reachable from \(q_{0}\).
We also write \(R_{q}\) for the relation recognized from state \(q\), i.e. \(R_{q}\stackrel{{\text{\tiny def}}}{{=}}R(\mathcal{A}_{q})\) where \(\mathcal{A}_{q}=(Q,\Sigma,q,\delta,F)\). ### _Witness for nonrecognizability_ To decide whether \(R\) is recognizable it suffices to check whether the equivalence relations \(\approx_{1},\ldots,\approx_{k-1}\) have finite index by Proposition 1. To keep notation clean in the following we will focus on how to test whether \(\approx_{1}\) has finite index. By permuting the components of \(R\) we can reduce testing finite-index of any \(\approx_{j}\) to the \(\approx_{1}\)-case. We provide equivalent characterizations (Proposition 2) of when \(\approx_{1}\) has infinite index, which will be used for the decision procedures in Theorem 2. The characterizations will be deduced from the proof of Lemma 3.5 in [19], which states that, if \(\approx_{1}\) has finite index, then any word is \(\approx_{1}\)-equivalent to a word whose length is exponentially bounded in \(n\). We need a few definitions from [19]. A nonempty word \(v_{1}\in\Sigma^{+}\) is _null-transparent_ if for all \(s,t\in Q_{1}\) we have \(s\xrightarrow{(v_{1},\boldsymbol{e})}t\) implies \(t\xrightarrow{(v_{1},\boldsymbol{e})}t\). In other words, \(v_{1}\) induces an _idempotent_ transformation on \(Q_{1}\). Since every element \(m\) in a finite monoid has an idempotent power \(m^{\ell}\), every non-empty word \(v_{1}\) has a null-transparent power \(v_{1}^{\ell}\). We call a run \(s\xrightarrow{(x,\boldsymbol{x})}t\) an \(N\)_-path_ if the run switches from \(Q_{1}\) to \(Q\setminus Q_{1}\) at most \(N\) times. A nonempty word \(y\in\Sigma^{+}\) is called \(N\)_-invisible in the context of \(x\in\Sigma^{*}\)_ if any \(N\)-path \(s\xrightarrow{(x,\boldsymbol{z})}t\) with \(t\in Q_{1}\) implies \(t\xrightarrow{(y,\boldsymbol{e})}t\). **Lemma 11** ([19, Lemma 3.4]).: _Let \(n\) be the number of states of \(\mathcal{A}\) and let \(u_{1}\cdots u_{\ell}\in\Sigma^{*}\) be a product of \(\ell\) nonempty words._ 1. _If_ \(\ell>n!\) _then some factor_ \(u_{i+1}\cdots u_{j}\) _is null-transparent._ 2. _If_ \(\ell>2(Nn)^{N}\) _then some factor_ \(u_{i+1}\cdots u_{j}\) _is_ \(N\)_-invisible in the context of_ \(u_{1}\ldots u_{i}\)_._ We say that a set \(S\)_separates_ two sets \(X\) and \(Y\) if \(X\subseteq S\) and \(Y\cap S=\emptyset\), or \(Y\subseteq S\) and \(X\cap S=\emptyset\). If \(X\) is a singleton \(\{x\}\) we also say that \(S\) separates \(x\) and \(Y\) (similarly for \(Y\)). **Proposition 2**.: _The following conditions are equivalent:_ 1. \(\approx_{1}\) _has infinite index._ 2. _There exist words_ \(x,y,z\in\Sigma^{*}\) _such that_ \(y\) _is_ \(nn!\)_-invisible in the context of_ \(x\) _and_ \(xyz\not\approx_{1}xz\)_._ 3. _There exist_ \(\boldsymbol{v},\boldsymbol{w}\in(\Sigma^{*})^{k}\) _and a state_ \(q\in Q\) _such that_ \(q\xrightarrow{\boldsymbol{v}}q\)_,_ \(v_{1}\) _is null-transparent,_ \(R_{q}\) _separates_ \(\boldsymbol{w}\) _and_ \((v_{1},\boldsymbol{e})\boldsymbol{w}\)_._ 4. _There exist_ \(\boldsymbol{v},\boldsymbol{w}\in(\Sigma^{*})^{k}\) _and a state_ \(q\in Q\) _such that_ \(q\xrightarrow{\boldsymbol{v}}q\)_, and_ \(R_{q}\) _separates_ \(\boldsymbol{w}\) _and_ \((v_{1},\boldsymbol{e})^{+}\boldsymbol{w}\)_._ The implication (2 \(\Rightarrow\) 1) already appeared in [19, Proof of Lemma 3.5]. In our understanding, to prove this implication the authors used 3) as an intermediate step. 
Unfortunately, the proof for the implication (\(3\Rightarrow 1\)) contains an argument that we could not follow, see Appendix A for a discussion. For completeness, we reprove the implication (\(3\Rightarrow 1\)) using 4) as an intermediate step. Proof of Proposition 2.: Let us start with the easy directions. (\(4\Rightarrow 1\)): Consider any run \(q_{0}\xrightarrow{\mathbf{u}}q\). Then \(u_{1}v_{1}^{i}w_{1}\not\ncong_{1}u_{1}v_{1}^{i+j}w_{1}\) for all \(i\geq 0\), \(j\geq 1\) because \(R\) separates \(\mathbf{uv}^{i}\mathbf{w}\) and \(\mathbf{uv}^{i}(v_{1},\mathbf{\varepsilon})^{j}\mathbf{w}\). Hence \(\approx_{1}\) has infinite index. (\(1\Rightarrow 2\)): Assume that 2) is false. By Lemma 11, any word of length at least \(f(nn!)\) where \(f(N)\stackrel{{ d}}{{=}}2(Nn)^{n}\) can be written as \(uvw\) where \(v\) is nonempty and \(nn!\)-invisible in the context of \(u\), and therefore \(uvw\approx_{1}uw\). By repeating this argument, we obtain for any word an \(\approx_{1}\)-equivalent word of length at most \(f(nn!)\). Therefore, \(\approx_{1}\) has finite index. (\(2\Rightarrow 3\)): Assume that \(xyz\ncong_{1}xz\) where \(y\) is \(nn!\)-invisible in the context of \(x\). Choose a length-minimal tuple \(\mathbf{t}\in(\Sigma^{*})^{k-1}\) such that \[(xyz,\mathbf{t})\in R\iff(xz,\mathbf{t})\notin R. \tag{4}\] Let \(\rho\) be a prefix of the run on \((xyz,\mathbf{t})\) which reads \(x\) on the first tape. Observe that \(\rho\) is not a \(nn!\)-path since \(y\) is \(nn!\)-invisible in the context of \(x\) and otherwise one could remove the \((y,\mathbf{\varepsilon})\)-loop from the run, which would contradict Equation (4). In particular, \(\rho\) reads at least \(nn!\) symbols from \(\mathbf{t}\). Consider the sequence of states in \(\rho\) visited after reading a symbol from \(\mathbf{t}\). There is a state \(q\) which is visited more than \(n!\) times. We can factor \(x=\alpha_{1}\cdots\alpha_{\ell+1}\) and a prefix of \(\mathbf{t}\) into nonempty words \(\tau_{1}\cdots\tau_{\ell+1}\) such that \[q_{0}\xrightarrow{(\alpha_{1},\tau_{1})}q\xrightarrow{(\alpha_{2},\tau_{2})} q\xrightarrow{(\alpha_{3},\tau_{3})}\cdots\xrightarrow{(\alpha_{\ell},\tau_{ \ell})}q\xrightarrow{(\alpha_{\ell+1},\tau_{\ell+1})}p\] and \(\ell>n!\). By Lemma 11 there exists a null-transparent factor \(\alpha_{i+1}\cdots\alpha_{j}\) for some \(1\leq i<j\leq\ell\). Let us set \(x_{1}=\alpha_{1}\cdots\alpha_{i}\), \(x_{2}=\alpha_{i+1}\cdots\alpha_{j}\), and \(x_{3}=\alpha_{j+1}\cdots\alpha_{\ell+1}\). Consider the corresponding decomposition \(\mathbf{t}=\mathbf{t}_{1}\mathbf{t}_{2}\mathbf{t}_{3}\) such that \[q_{0}\xrightarrow{(x_{1},\mathbf{t}_{1})}q\xrightarrow{(x_{2},\mathbf{t}_{2})}q \xrightarrow{(x_{3}yz,\mathbf{t}_{3})}r_{+} \tag{5}\] and \[q_{0}\xrightarrow{(x_{1},\mathbf{t}_{1})}q\xrightarrow{(x_{2},\mathbf{t}_{2})}q \xrightarrow{(x_{3}z,\mathbf{t}_{3})}r_{-} \tag{6}\] where exactly one of the states \(r_{+},r_{-}\) belongs to \(F\). Since \(\mathbf{t}\) is a length-minimal tuple satisfying Equation (4) and \(\mathbf{t}_{2}\) is nonempty we know that \[(xyz,\mathbf{t}_{1}\mathbf{t}_{3})\in R\iff(xz,\mathbf{t}_{1}\mathbf{t}_{3})\in R\] and thus \[(x_{2}x_{3}yz,\mathbf{t}_{3})\in R_{q}\iff(x_{2}x_{3}z,\mathbf{t}_{3})\in R_{q}. \tag{7}\] We claim that either (i) \(R_{q}\) separates \((x_{2}x_{3}yz,\mathbf{t}_{3})\) and \((x_{3}yz,\mathbf{t}_{3})\) or (ii) \(R_{q}\) separates \((x_{2}x_{3}z,\mathbf{t}_{3})\) and \((x_{3}z,\mathbf{t}_{3})\), which proves the lemma. 
Otherwise, Equation (7) implies \[(x_{3}yz,\mathbf{t}_{3})\in R_{q}\iff(x_{3}z,\mathbf{t}_{3})\in R_{q},\] which contradicts Equations (5) and (6). Hence, we can set \(\mathbf{v}=(x_{2},\mathbf{t}_{2})\) and either set \(\mathbf{w}=(x_{3}yz,\mathbf{t}_{3})\) in case (i) or set \(\mathbf{w}=(x_{3}z,\mathbf{t}_{3})\) in case (ii). This concludes the proof. (\(3\Rightarrow 4\)): Let \(\mathbf{v},\mathbf{w}\) and \(q\in Q\) such that \(q\xrightarrow{\mathbf{v}}q\), \(v_{1}\) is null-transparent, \(R_{q}\) separates \(\mathbf{w}\) and \((v_{1},\mathbf{\varepsilon})\mathbf{w}\). Let \(m>|w_{2}\cdots w_{k}|+1\). Let \(\rho\) be the run on \((v_{1},\mathbf{\varepsilon})^{m}\mathbf{w}\) starting in \(q\). It contains a subrun reading \((v_{1},\mathbf{\varepsilon})\) between two \(Q_{1}\)-states, i.e. we can factor \((w_{2},\ldots,w_{k})=\mathbf{x}\mathbf{y}\) such that \[\rho\colon q\xrightarrow{(v_{1}^{i-1},\mathbf{x})}s\xrightarrow{(v_{1},\mathbf{ \varepsilon})^{\ell}}r\xrightarrow{(v_{1}^{m-i}w_{1},\mathbf{y})}t\] for some \(s,r\in Q_{1}\). Since \(v_{1}\) is null-transparent there is a cycle \(r\xrightarrow{(v_{1},\mathbf{\varepsilon})}r\). Therefore \(V\stackrel{{ d}}{{=}}\{(v_{1},\mathbf{\varepsilon})^{j}\mathbf{w}\mid j \geq m\}\) is either contained in \(R_{q}\) or disjoint from \(R_{q}\). Since \(R_{q}\) separates \(\mathbf{w}\) and \((v_{1},\mathbf{\varepsilon})\mathbf{w}\), it also separates one of them from \(V\). If \(R_{q}\) separates \(\mathbf{w}\) and \(V\), then \(\mathbf{v}^{m}\) and \(\mathbf{w}\) satisfy the condition from the proposition. Otherwise, \(R_{q}\) separates \((v_{1},\mathbf{\varepsilon})\mathbf{w}\) and \(V\), and the tuples \(\mathbf{v}^{m}\) and \((v_{1},\mathbf{\varepsilon})\mathbf{w}\) satisfy the condition from the proposition. ### _Polynomial-time algorithm for binary relations_ From Proposition 2 we can derive a pattern which is present in \(\mathcal{A}\) if and only if \(R\) is _not_ recognizable. For binary relations the pattern is visualized in Figure 2. This pattern can be detected in polynomial-time by reducing to the inequivalence problem for binary deterministic rational relations. **Proposition 3**.: \(\approx_{1}\) _has infinite index if and only if there exist words \(v_{1},w_{1}\in\Sigma^{*}\), tuples \(\mathbf{v}_{2},\mathbf{x},\mathbf{y}\in(\Sigma^{*})^{k-1}\), and states \(q,r\in Q\) such that_ 1. \(q\xrightarrow{(v_{1},\mathbf{v}_{2})}q\)_,_ \(q\xrightarrow{(v_{1},\mathbf{x})}r\)_,_ \(r\xrightarrow{(v_{1},\mathbf{\varepsilon})}r\)_,_ 2. \((w_{1},\mathbf{x}\mathbf{y})\in R_{q}\iff(w_{1},\mathbf{y})\notin R_{r}\)_._ Proof.: For the "if" direction observe that \(R_{q}\) separates \((w_{1},\mathbf{x}\mathbf{y})\) and \((v_{1},\mathbf{\varepsilon})^{+}(w_{1},\mathbf{x}\mathbf{y})\). Therefore \(\approx_{1}\) has infinite index by Proposition 2 point 4). For the "only if" direction assume that \(\approx_{1}\) has infinite index. Again, by Proposition 2 point 4) there exist \((v_{1},\mathbf{v}_{2}),(w_{1},\mathbf{w}_{2})\in(\Sigma^{*})^{k}\) and a state \(q\in Q\) such that \(q\xrightarrow{(v_{1},\mathbf{v}_{2})}q\), and \(R_{q}\) separates \((w_{1},\mathbf{w}_{2})\) and \((v_{1},\mathbf{\varepsilon})^{+}(w_{1},\mathbf{w}_{2})\). Let \(m>\|\mathbf{w}_{2}\|+1\) and let \(\ell\) be such that \(v_{1}^{\ell}\) is null-transparent. Consider the unique run \(\rho_{q}\) on \((v_{1},\mathbf{\varepsilon})^{m\ell}(w_{1},\mathbf{w}_{2})\) starting from \(q\). 
It must contain a subrun of the form \(s\xrightarrow{(v_{1},\mathbf{\varepsilon})^{\ell}}r\) where \(s,r\in Q_{1}\). Hence we can factorize \(\mathbf{w}_{2}=\mathbf{x}\mathbf{y}\) such that

\[\rho_{q}\colon\ q\xrightarrow{(v_{1}^{(i-1)\ell},\mathbf{x})}s\xrightarrow{(v_{1},\mathbf{\varepsilon})^{\ell}}r\xrightarrow{(v_{1}^{(m-i)\ell}w_{1},\mathbf{y})}t \tag{8}\]

for some \(i\in[1,m]\). Since \(v_{1}^{\ell}\) is null-transparent there exists a cycle \(r\xrightarrow{(v_{1},\mathbf{\varepsilon})^{\ell}}r\). Since \(\mathcal{A}\) is deterministic, this allows us to choose \(i=m\) in Equation (8) and write

\[\rho_{q}\colon\ q\xrightarrow{(v_{1}^{(m-1)\ell},\mathbf{x})}s\xrightarrow{(v_{1},\mathbf{\varepsilon})^{\ell}}r\xrightarrow{(w_{1},\mathbf{y})}t. \tag{9}\]

Since \(R_{q}\) separates \((w_{1},\mathbf{w}_{2})=(w_{1},\mathbf{x}\mathbf{y})\) and \((v_{1},\mathbf{\varepsilon})^{\ell}(w_{1},\mathbf{w}_{2})\), we know that \((w_{1},\mathbf{x}\mathbf{y})\in R_{q}\) if and only if \((w_{1},\mathbf{y})\notin R_{r}\). Hence the words \(v_{1}^{m\ell},w_{1}\) together with the tuples \(\mathbf{v}_{2}^{m\ell},\mathbf{x},\mathbf{y}\) satisfy the claim.

**Theorem 6**.: _The recognizability problem for binary deterministic rational relations is logspace reducible to the equivalence problem for binary deterministic rational relations._

Proof.: Let \(R\) be a binary deterministic rational relation, which is recognizable if and only if \(\approx_{1}\) has finite index by Proposition 1. By Proposition 3 this holds if and only if for all state pairs \(q,r\in Q\) and all words \(v_{1},w_{1},v_{2},x,y\in\Sigma^{*}\) the following two conditions are equivalent:

* (C1) \(q\xrightarrow{(v_{1},v_{2})}q\), \(q\xrightarrow{(v_{1},x)}r\), \(r\xrightarrow{(v_{1},\varepsilon)}r\), \((w_{1},xy)\in R_{q}\)
* (C2) \(q\xrightarrow{(v_{1},v_{2})}q\), \(q\xrightarrow{(v_{1},x)}r\), \(r\xrightarrow{(v_{1},\varepsilon)}r\), \((w_{1},y)\in R_{r}\)

Using an appropriate encoding we can reduce the equivalence of (C1) and (C2) to the equivalence problem for binary deterministic rational relations. Suppose \(\pi,\rho\) are runs which read the same input word (in our case, this would be \(v_{1}\)), i.e. we can write

\[\pi\colon\ s_{1}\xrightarrow{g_{0}}t_{1}\xrightarrow{a_{1}}s_{2}\xrightarrow{g_{1}}t_{2}\xrightarrow{a_{2}}\cdots\xrightarrow{a_{n}}s_{n+1}\xrightarrow{g_{n}}t_{n+1}\]

and

\[\rho\colon\ s_{1}^{\prime}\xrightarrow{h_{0}}t_{1}^{\prime}\xrightarrow{a_{1}}s_{2}^{\prime}\xrightarrow{h_{1}}t_{2}^{\prime}\xrightarrow{a_{2}}\cdots\xrightarrow{a_{n}}s_{n+1}^{\prime}\xrightarrow{h_{n}}t_{n+1}^{\prime}\]

where each \(a_{i}\) is a letter and the states \(t_{i}\), \(t_{i}^{\prime}\) are precisely the states in \(\pi\) and \(\rho\) in \(Q_{1}\). We define their synchronized shuffle \(\pi\shuffle\rho\in(\Sigma\cup\{\diamond\})^{*}\) as

\[\pi\shuffle\rho=g_{0}\diamond h_{0}\diamond a_{1}\diamond g_{1}\diamond h_{1}\diamond a_{2}\diamond\cdots\diamond a_{n}\diamond g_{n}\diamond h_{n}.\]

We encode (C1) as the binary relation

\[C_{1}=\{(q\,r\,w_{1}, (\pi\shuffle\rho)\,\$\,y)\mid q,r\in Q,\,\pi\colon q\xrightarrow{(v_{1},v_{2})}q,\ \rho\colon q\xrightarrow{(v_{1},x)}r,\,r\xrightarrow{(v_{1},\varepsilon)}r,\,(w_{1},xy)\in R_{q}\}\]

and (C2) as the binary relation

\[C_{2}=\{(q\,r\,w_{1}, (\pi\shuffle\rho)\,\$\,y)\mid q,r\in Q,\,\pi\colon q\xrightarrow{(v_{1},v_{2})}q,\ \rho\colon q\xrightarrow{(v_{1},x)}r,\,r\xrightarrow{(v_{1},\varepsilon)}r,\,(w_{1},y)\in R_{r}\}.\]

Observe that \(C_{1}=C_{2}\) if and only if (C1) and (C2) are equivalent. It remains to verify that \(C_{1}\) and \(C_{2}\) are deterministic rational and we can construct automata in logspace.
First, for each state pair \(q,r\in Q\) we can construct a DFA over \(\Sigma\cup\{\diamond\}\) which accepts precisely the synchronized shuffles \(\pi\shuffle\rho\) where \(\pi\colon q\xrightarrow{(v_{1},v_{2})}q\), \(\rho\colon q\xrightarrow{(v_{1},x)}r\) and \(r\xrightarrow{(v_{1},\varepsilon)}r\) for some words \(v_{1},v_{2},x\). Since \(x\) can be easily extracted as a subword of \(\pi\shuffle\rho\), a deterministic transducer can verify whether the input pair \((q\,r\,w_{1},(\pi\shuffle\rho)\,\$\,y)\) satisfies \((w_{1},xy)\in R_{q}\) and whether it satisfies \((w_{1},y)\in R_{r}\).

### _Arbitrary relations_

The approach from Theorem 6 does not work for arity \(k\geq 3\). The issue is that the words \(v_{2},x,y\) from (C1) and (C2) would become \((k-1)\)-tuples \(\mathbf{v}_{2},\mathbf{x},\mathbf{y}\). It is not clear how to appropriately encode the runs on \((v_{1},\mathbf{x})\) and \((w_{1},\mathbf{x}\mathbf{y})\) in (C1) so that they can be simulated by an automaton. Still, we can express (the negation of) property 3) in Proposition 2 as the equivalence of two polynomial space constructible deterministic multitape automata. Since equivalence of deterministic \(k\)-tape automata is in [33], and in [1], the complexity bounds from Theorem 3 follow.

**Theorem 7**.: _The recognizability problem for \(k\)-ary deterministic rational relations is polynomial space reducible to the equivalence problem for \(k\)-ary deterministic rational relations._

Proof.: Let \(R\) be a \(k\)-ary deterministic rational relation, which is recognizable if and only if \(\approx_{j}\) has finite index for all \(j<k\) by Proposition 1. It suffices to show how to reduce the test whether \(\approx_{1}\) has finite index to the equivalence problem of polynomial space constructible deterministic \(k\)-tape automata. By Proposition 2 point 3), \(\approx_{1}\) has finite index if and only if for all states \(q\in Q\) and all tuples \(\mathbf{v},\mathbf{w}\in(\Sigma^{*})^{k}\) the following two conditions are equivalent:

* (P1) \(q\xrightarrow{\mathbf{v}}q\), \(v_{1}\) null-transparent, \(\mathbf{w}\in R_{q}\)
* (P2) \(q\xrightarrow{\mathbf{v}}q\), \(v_{1}\) null-transparent, \((v_{1},\mathbf{\varepsilon})\mathbf{w}\in R_{q}\)

We encode (P1) and (P2) as deterministic rational relations. First observe that we can construct an exponentially large DFA for the language of all null-transparent words \(v_{1}\in\Sigma^{+}\). It simulates runs on \(v_{1}\) in parallel from every state \(s\in Q_{1}\), and verifies that \(s\xrightarrow{(v_{1},\mathbf{\varepsilon})}t\) implies \(t\xrightarrow{(v_{1},\mathbf{\varepsilon})}t\). We encode a run \(\pi\) as an alternating sequence \(\mathsf{flat}(\pi)\in(Q\Sigma)^{*}Q\) of states and input letters. Under this encoding, valid runs can be recognized by a polynomially sized DFA. Define the following \(k\)-ary relations

\[P_{1}=\{(q\,\mathsf{flat}(\pi)\,\$,\boldsymbol{\varepsilon})\,\mathbf{w}\mid q\in Q,\,v_{1}\,\,\text{null-transparent},\ \pi\colon q\xrightarrow{\mathbf{v}}q,\,\mathbf{w}\in R_{q}\}\]

and

\[P_{2}=\{(q\,\mathsf{flat}(\pi)\,\$,\boldsymbol{\varepsilon})\,\mathbf{w}\mid q\in Q,\,v_{1}\,\,\text{null-transparent},\ \pi\colon q\xrightarrow{\mathbf{v}}q,\,(v_{1},\mathbf{\varepsilon})\mathbf{w}\in R_{q}\}.\]

Since \(v_{1}\) can be easily extracted from \(\mathsf{flat}(\pi)\), we can construct exponentially sized deterministic \(k\)-tape automata for \(P_{1}\) and \(P_{2}\). Furthermore, the conditions (P1) and (P2) are equivalent if and only if \(P_{1}=P_{2}\).
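To make the run encoding in the previous proof concrete, here is a small illustrative sketch (the function names and the toy run are invented for this illustration and are not part of the construction) of the flattening \(\mathsf{flat}(\pi)\in(Q\Sigma)^{*}Q\) and of how the word read on a fixed tape, such as \(v_{1}\), can be read off from it, assuming each state is labeled with the tape it reads from next.

```python
def flatten(run):
    """flat(pi): the alternating sequence of states and input letters of a run."""
    states, letters = run              # run = ([q0, q1, ..., qn], [a1, ..., an])
    assert len(states) == len(letters) + 1
    out = [states[0]]
    for a, q in zip(letters, states[1:]):
        out += [a, q]
    return out

def extract_tape_word(flat, tape_of, tape):
    """Recover the word read on `tape` from flat(pi).

    tape_of(q) is the tape that state q reads from next (the partition Q_1, ..., Q_k)."""
    word = []
    for i in range(1, len(flat), 2):   # flat[i] is a letter, flat[i-1] the state reading it
        if tape_of(flat[i - 1]) == tape:
            word.append(flat[i])
    return "".join(word)

# Hypothetical 2-tape run: states 'p' and 'q' read tape 1, state 'r' reads tape 2.
run = (['p', 'q', 'r', 'p'], ['a', 'b', 'c'])
tape_of = {'p': 1, 'q': 1, 'r': 2}.get
print(flatten(run))                                   # ['p', 'a', 'q', 'b', 'r', 'c', 'p']
print(extract_tape_word(flatten(run), tape_of, 1))    # 'ab' -- the word read on tape 1
```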
### _Reducing equivalence to recognizability_ Let us complement the presented algorithms for recognizability with the following "converse direction". We solved the recognizability problem by reducing to the equivalence problem over deterministic rational relations (in logspace, for binary relations). In fact, the equivalence problem is logspace reducible to the recognizability problem (for arbitrary arity). **Theorem 8**.: _Let \(k\geq 2\). The equivalence problem for \(k\)-ary deterministic rational relations is logspace reducible to the recognizability problem for \(k\)-ary deterministic rational relations._ Proof.: Given two deterministic \(k\)-tape automata \(\mathcal{A}\) and \(\mathcal{B}\). First we ensure that both \(R(\mathcal{A})\) and \(R(\mathcal{B})\) are finite relations, and, in particular, recognizable. By [33] the automata \(\mathcal{A}\) and \(\mathcal{B}\) are equivalent if and only if they accept the same tuples of length at most \(n-1\), where \(n\) is the total number of states in \(\mathcal{A}\) and \(\mathcal{B}\). We can compute in logspace a deterministic \(k\)-tape automaton \(\mathcal{A}^{\prime}\) such that \(R(\mathcal{A}^{\prime})=R(\mathcal{A})\cap\{\mathbf{u}\in(\Sigma^{*})^{k}\mid\|\mathbf{ u}\|<n\}\), and analogously \(\mathcal{B}^{\prime}\) for \(\mathcal{B}\). The automaton \(\mathcal{A}^{\prime}\) tracks the length of the prefix tuple read so far, up to threshold \(n\), and rejects all tuples of length at least \(n\). We claim that \(R(\mathcal{A}^{\prime})=R(\mathcal{B}^{\prime})\) if and only if \[T= \{(a^{i}\#,a^{i}\#,\mathbf{\varepsilon})\mid i\in\mathbb{N}\}R( \mathcal{A}^{\prime})\ \cup\] \[\{(a^{i}\#,a^{j}\#,\mathbf{\varepsilon})\mid i\neq j\}R(\mathcal{B}^{ \prime})\] is recognizable where \(a\) and \(\#\) are fresh distinct letters. Observe that a deterministic \(k\)-tape automaton for \(T\) is logspace computable from \(\mathcal{A}^{\prime}\) and \(\mathcal{B}^{\prime}\). If \(R(\mathcal{A}^{\prime})=R(\mathcal{B}^{\prime})\) then \[T=\{(a^{i}\#,a^{j}\#,\mathbf{\varepsilon})\mid i,j\in\mathbb{N}\}R( \mathcal{A}^{\prime})\] is the concatenation of two recognizable relations and hence itself recognizable. Suppose that \(R(\mathcal{A}^{\prime})\neq R(\mathcal{B}^{\prime})\) and assume that there exists a tuple \(\mathbf{v}\in R(\mathcal{A}^{\prime})\setminus R(\mathcal{B}^{\prime})\) (the case where \(R(\mathcal{B}^{\prime})\setminus R(\mathcal{A}^{\prime})\neq\emptyset\) is similar). If \(T\) would be recognizable then \(T\mathbf{v}^{-1}=\{\mathbf{u}\mid\mathbf{uv}\in T\}\) would also be recognizable. However \[T\mathbf{v}^{-1}=\{(a^{i}\#,a^{i}\#,\mathbf{\varepsilon})\mid i\in\mathbb{N}\}\] is clearly not recognizable. ### _Constructing an independent automaton_ Theorem 2 raises the question how to translate a deterministic multitape automaton into an equivalent automaton with independent tapes, if one exists. Such a construction will be needed in the next section, to decide whether a deterministic multitape automaton recognizes a synchronous relation. Formally, an _independent \(k\)-tape automaton_\(\mathcal{I}\) is a tuple \(\mathcal{I}=(\mathcal{A}_{1},\ldots,\mathcal{A}_{k},F)\) consisting of DFAs \(\mathcal{A}_{i}\) without final states and a set of state tuples \(F\subseteq Q_{1}\times\cdots\times Q_{k}\), where \(Q_{i}\) is the state set of \(\mathcal{A}_{i}\). 
The relation \(R(\mathcal{I})\) recognized by \(\mathcal{I}\) is the set of all tuples \((w_{1},\ldots,w_{k})\) such that for each \(i\in[1,k]\) the unique run of \(\mathcal{A}_{i}\) on \(w_{i}\) ends in a state \(q_{i}\in Q_{i}\) with \((q_{1},\ldots,q_{k})\in F\). Note that independent multitape automata recognize exactly the relations in \(\mathbf{Rec}\). **Theorem 9**.: _Given a deterministic \(k\)-tape automaton for a recognizable relation \(R\), one can compute an independent \(k\)-tape automaton for \(R\) in double exponential time._ Proof.: For each \(j\in[1,k]\) define the relation \(\equiv_{j}\) by \[x\equiv_{j}y\iff\text{for all }z\in\Sigma^{*}\!:xz\approx_{j}yz,\] which is a right-congruence, i.e. \(x\equiv_{j}y\) implies \(xa\equiv_{j}ya\). Let \(\mathcal{A}=(Q,\Sigma,q_{0},\delta,F)\) be a deterministic \(k\)-tape automaton for a recognizable relation \(R\). Suppose that \(x,y,z\) are words where \(y\) is \(nn!\)-invisible in the context of \(x\). Then \(xyz\equiv_{j}xz\) since otherwise \(xy(zz^{\prime})\not\approx_{j}x(zz^{\prime})\) for some word \(z^{\prime}\), which contradicts Proposition 2. Hence, for each word \(w\) of length \(f(nn!)+1\) there exists an \(\equiv_{j}\)-equivalent word \(w^{\prime}\) of length at most \(f(nn!)\), by cutting out \(nn!\)-invisible factor according to Lemma 11 where \(f(N)\stackrel{{\text{\tiny def}}}{{=}}2(Nn)^{n}\). Furthermore, the function \(w\mapsto w^{\prime}\) can be computed in double exponential time, as remarked in [15, Section 8]. Hence, the independent \(k\)-tape automaton \((\mathcal{A}_{1},\ldots,\mathcal{A}_{k},F)\) works as follows. The states of \(\mathcal{A}_{j}\) are words of length at most \(f(nn!)\). The initial state is the empty word \(\varepsilon\). If the current state (word) is \(w\) and the next input symbol is \(a\in\Sigma\), then the next state is the word obtained from \(wa\) by removing an \(nn!\)-invisible factor, if possible. In this way, at each time step the reached state is a word that is \(\equiv_{j}\)-equivalent to the read prefix. Finally, a tuple \(\mathbf{v}\) is marked final if and only if \(\mathbf{v}\in R\). On input tuple \((w_{1},\ldots,w_{k})\) each DFA \(\mathcal{A}_{j}\) reaches a state \(v_{j}\) with \(v_{j}\equiv_{j}w_{j}\), and therefore \((v_{1},\ldots,v_{k})\in R\) if and only if \((w_{1},\ldots,w_{k})\in R\). We remark that the double exponential bound in Theorem 9 is optimal, which can be derived from the proof by Meyer and Fischer for the double exponential succinctness gap between DPDAs and DFAs [45]. To keep the paper self-contained, we provide an alternative proof. **Proposition 4**.: _There exists a recognizable relation \(R_{n}\subseteq\{0,1\}^{*}\times\{0,1\}^{*}\) which is accepted by a deterministic 2-tape automaton with \(O(n^{2}\log n)\) states so that any independent 2-tape automaton for \(R_{n}\) has in total at least \(2^{2^{n-1}}\) states._ Proof.: Let \(R_{n}\subseteq[1,n]^{*}\times[1,n]^{*}\) be the relation containing all pairs \((u,v)\) where \(|v|\leq 2n\) and \(v\) is a scattered subword of \(u\). Observe that \(R_{n}\) is accepted by a deterministic 2-tape automaton with \(O(n)\) states. Then \(\approx_{1}^{R_{n}}\) is _Simon's congruence_ with parameter \(2n\)[12]. Its index is finite and bounded from below by \(2^{2^{n-1}}\) by [46, Theorem 1.2]. If an independent 2-tape automaton \(\mathcal{I}=(\mathcal{A}_{1},\mathcal{A}_{2},F)\) recognizes \(R_{n}\) then the index of \(\approx_{1}^{R_{n}}\) is a lower bound for the number of states of \(\mathcal{A}_{1}\). 
Finally, we can replace the alphabet \([1,n]\) by codes from \(\{0,1\}^{\log n}\), increasing the automaton size by a \(\log n\)-factor. ## VI Deciding Synchronicity in \(\mathbf{DRat}\) In this section we prove Theorem 3 by showing that synchronicity can be reduced to recognizability for relations in \(\mathbf{DRat}\), which can be solved according to Theorem 2. Let us remark that there also exists a reduction in the reverse direction, whose proof can be found in Appendix B. **Proposition 5**.: _Given a \(k\)-tape automaton \(\mathcal{A}\) for a relation \(R\), one can compute in logspace a \(k\)-tape automaton \(\mathcal{B}\) for a relation \(S\) such that \(R\) is recognizable if and only if \(S\) is synchronous. If \(\mathcal{A}\) is deterministic, then so is \(\mathcal{B}\)._ In the rest of this section we show Theorem 3. We first give an intuition for the case \(k=2\). Suppose \(R\) is given by a deterministic 2-tape automaton \(\mathcal{A}\) with the property that every reachable cycle \(p\xrightarrow{(v_{1},v_{2})}p\) satisfies \(|v_{1}|=|v_{2}|\). This ensures that \(\mathcal{A}\) has _bounded delay_[22, Section 3], i.e. the head positions cannot be arbitrarily far apart during the computation of \(\mathcal{A}\). In fact, the delay is bounded by the number of states \(|Q|\). It is well-known that such an automaton recognizes a synchronous relation [22, Corollary 3.4], since letters that are "read ahead" on a tape can be stored in a queue of length \(|Q|\). Let us now consider the case where \(\mathcal{A}\) contains an _asynchronous_ cycle of the form \(p\xrightarrow{(v_{1},v_{2})}p\) with \(|v_{1}|<|v_{2}|\) (the case \(|v_{1}|>|v_{2}|\) is symmetric). We partition the automaton into an asynchronous part, containing all states which are reachable from an asynchronous cycle, and a synchronous part. While the simulation using a queue works in the synchronous part of the automaton, the delay can become unbounded in the asynchronous part, by traversing asynchronous cycles repeatedly. We claim that \(R\) is synchronous if and only if for each state \(q\) in the asynchronous part, the relation \(R_{q}\) is recognizable. If each such relation \(R_{q}\) is recognizable and in particular synchronous, then the computation from state \(q\) can be continued synchronously using a synchronous automaton for \(R_{q}\). For the other direction, assume that \(R_{q}\) is not recognizable for some asynchronous state \(q\). Therefore \(\approx_{1}^{R_{q}}\) has infinite index by Proposition 1, i.e. there exist words \((s_{i})_{i\geq 1}\) and \((t_{i,j})_{i<j}\) such that \(R_{q}\) separates \((s_{i},t_{i,j})\) and \((s_{j},t_{i,j})\) for all \(i<j\). We claim that the Myhill-Nerode equivalence relation \(\sim_{\otimes R}\) of the language \(\otimes R\) of convolutions has at least \(h\) classes for each \(h\in\mathbb{N}\): Take an asynchronous cycle \(p\xrightarrow{(v_{1},v_{2})}p\) from which \(q\) is reachable. We can produce runs \(q_{0}\xrightarrow{(u_{1},u_{2})}q\) where the delay \(|u_{2}|-|u_{1}|\) is arbitrary large. Pick such a run where \(|u_{2}|-|u_{1}|\geq\max\{|s_{1}|,\ldots,|s_{h}|\}\). Then any two words \((u_{1}s_{i})\otimes u_{2}\) and \((u_{1}s_{j})\otimes u_{2}\) for \(1\leq i<j\leq h\) are inequivalent with respect to \(\sim_{\otimes R}\) because \(\otimes R\) separates \((u_{1}s_{i})\otimes(u_{2}t_{i,j})\) and \((u_{1}s_{j})\otimes(u_{2}t_{i,j})\), see the illustration in Figure 3. 
Thus, the synchronicity problem can be reduced to checking recognizability of relations \(R_{q}\) where \(q\) is reachable from an asynchronous cycle. Let us now consider the general case of a \(k\)-ary deterministic rational relation \(R\). Again, we can ignore the endmarker \(\dashv\) since appending \((\dashv,\ldots,\dashv)\) to \(R\) preserves (non)synchronicity. Hence, for the rest of this section we assume that \(R\) is given by a deterministic \(k\)-tape automaton \(\mathcal{A}=(Q,\Sigma,q_{0},\delta,F)\) with \(R=R(\mathcal{A})\). Moreover, we assume that every state in \(Q\) is reachable from \(q_{0}\). A cycle in \(\mathcal{A}\) reading \((v_{1},\ldots,v_{k})\)_induces_ a partition \(P\) on the components \([1,k]\) where two components \(i\) and \(j\) are in the same block in \(P\) if and only if \(|v_{i}|=|v_{j}|\). For a state \(q\in Q\) we define the partition \(P_{q}\) as the coarsest refinement of all partitions induced by a cycle from which \(q\) is reachable. As before, let \(R_{q}\) be the relation recognized from state \(q\). **Lemma 12**.: _For every \(q\in Q\), \(P_{q}\) is computable in time polynomial in the size of \(\mathcal{A}\)._ Proof.: The algorithm proceeds as follows. As a first step we compute for each \(q\in Q\) the coarsest refinement \(S_{q}\) of all partitions that are induced by _simple_ cycles on \(q\). Check for every \(1\leq i<j\leq k\) if there exists a simple cycle \(q\xrightarrow{v}q\) such that \(|v_{i}|\neq|v_{j}|\) and if so, store it as a constraint that \(i\) and \(j\) are in different blocks. Then \(S_{q}\) is the coarsest partition of \([1,k]\) that fulfills all stored constraints. Note that the existence of a simple cycle \(q\xrightarrow{v}q\) with \(|v_{i}|\neq|v_{j}|\) can be checked in nondeterministic logspace by storing the current length difference of the words in the \(i\)-th and \(j\)-th component on the guessed path in a counter whose value is bounded by \(|Q|\). We claim that \(P_{q}=S_{q_{1}}\sqcap\cdots\sqcap S_{q_{n}}\) where \(q_{1},\ldots,q_{n}\) are the states from which \(q\) is reachable. By definition, \(P_{q}\) is finer than \(S_{q_{1}}\sqcap\cdots\sqcap S_{q_{n}}\). For the other direction let \(P\) be a partition induced by a cycle \(c\) from which \(q\) is reachable. For the sake of contradiction assume that there exist \(1\leq i<j\leq k\) that are in different blocks in \(P\) but in the same block in \(S_{q_{i}}\) for all \(\ell\in[1,n]\). Since any cycle contains a simple cycle, there exists a simple cycle \(p\xrightarrow{v}p\) that is contained in \(c\). By assumption, it holds that \(|v_{i}|=|v_{j}|\) which means that after removing \(p\xrightarrow{v}p\) from \(c\), the cycle \(c\) still induces a partition where \(i\) and \(j\) are in different blocks. Furthermore, \(q\) is still reachable from \(c\). We can repeat this argument until \(c\) is a simple cycle inducing a partition where \(i\) and \(j\) are in different blocks, a contradiction. Recall that, for binary relations we tested recognizability for all states reachable from asynchronous cycles. For higher arity relations, we need to test whether each relation \(R_{q}\) conforms to \(P_{q}\). **Lemma 13**.: _If \(R_{q}\) does not conform to \(P_{q}\) for some state \(q\in Q\), then \(R\) is not synchronous._ Proof.: Assume that \(R_{q}\) does not conform to \(P_{q}\). By Theorem 4 there exists a partition \(P\) induced by a cycle \(p\xrightarrow{v}p\) so that \(q\) is reachable from \(p\) and \(R_{q}\) does not conform to \(P\). 
By permuting components, we can assume that \(|v_{1}|\leq\cdots\leq|v_{k}|\). Hence \(P\) is a partition of \([1,k]\) into intervals \(B_{1},\ldots,B_{n}\), which are listed in ascending order. Since the intervals \(B_{1}\cup\cdots\cup B_{i}\) for \(i\in[1,n]\) generate \(P\), there exists an index \(r\in[1,n]\) such that \(\approx_{B_{1}\cup\dots\cup B_{r}}^{R_{q}}\) has infinite index by Lemma 2. Set \([1,m]\stackrel{\text{def}}{=}B_{1}\cup\dots\cup B_{r}\). Observe that for any number \(b\in\mathbb{N}\) there exists a run \(q_{0}\xrightarrow{\mathbf{u}_{q}}q\) such that \(|u_{i}|+b\leq|u_{j}|\) for all \(i\in[1,m]\) and \(j\in[m+1,k]\). Such runs can be constructed by traversing the cycle \(p\xrightarrow{\mathbf{v}}p\) sufficiently often.

Fig. 3: An asynchronous cycle can produce words \((u_{1},u_{2})\) with unbounded length difference. The words \(s_{i}\) are pairwise inequivalent words with respect to \(\approx_{1}^{R}\), separated by the words \(t_{i,j}\).

We show that for every \(h\in\mathbb{N}\), the Myhill-Nerode equivalence relation \(\sim_{\otimes R}\) of the language \(\otimes R\) of convolutions has at least \(h\) classes. This proves that \(\otimes R\) is not regular and therefore \(R\) is not synchronous. Let \(h\in\mathbb{N}\). Since \(\approx_{[1,m]}^{R_{q}}\) has infinite index there are tuples \(\mathbf{s}_{i}\stackrel{\text{def}}{=}(s_{i,1},\dots,s_{i,m})\) for \(i\in[1,h]\) and \(\mathbf{t}_{i,j}\stackrel{\text{def}}{=}(t_{i,j,m+1},\dots,t_{i,j,k})\) for \(1\leq i<j\leq h\) such that \((\mathbf{s}_{i},\mathbf{t}_{i,j})\in R_{q}\) if and only if \((\mathbf{s}_{j},\mathbf{t}_{i,j})\notin R_{q}\) for all \(1\leq i<j\leq h\). Let \(b\stackrel{\text{def}}{=}\)

\(|Q|\) for each component and simulate \(\mathcal{A}\) either on the next symbol in the queue or on the current input symbol if the queue is empty. If it simulates \(\mathcal{A}\) on the input symbol of the corresponding tape, we store the symbols read in the other components in the queues of those components. The simulation of \(\mathcal{A}\) on the queue is handled like an \(\varepsilon\)-transition where nothing is read from the input. Those \(\varepsilon\)-transitions can be removed and do not lead to nondeterminism since there is no branching of \(\varepsilon\)-transitions possible. If the simulation of \(\mathcal{A}\) leads to a state \(q^{\prime}\) that is not contained in \(Q_{P}\), i.e., \(q^{\prime}\in Q_{P^{\prime}}\) for some partition \(P^{\prime}=\{B_{1}^{\prime},\ldots,B_{n^{\prime}}^{\prime}\}\) that is strictly finer than \(P\), then \(\mathcal{A}_{q}^{\otimes}\) changes to the second part. In the second part \(\mathcal{A}_{q}^{\otimes}\) simulates the independent \(n^{\prime}\)-tape automaton \(\mathcal{I}_{q^{\prime}}^{\otimes}=(\mathcal{A}_{1},\ldots,\mathcal{A}_{n^{\prime}},F_{q^{\prime}})\).
For each class \(B_{i}\), for \(i=1,\ldots,n\), the DFAs \(\mathcal{A}_{j}\) that are responsible for components contained in \(B_{i}\) are simulated in parallel either on the queue or the current input symbol. After the whole input was read, \(\mathcal{A}_{q}^{\otimes}\) checks with final states whether the remaining content of the queues leads in each \(\mathcal{A}_{i}\) to some state \(f_{i}\) such that \((f_{1},\ldots,f_{n^{\prime}})\in F_{q^{\prime}}\).

Finally, we argue that the running time of the algorithm is \(2(k-2)\)-fold exponential in the automaton size \(|\mathcal{A}|\). Inductively, we prove that in layer \(L_{t}\) we can construct the automata \(\mathcal{A}_{q}^{\otimes}\) in \(2(k-t)\)-fold exponential time and the automata \(\mathcal{I}_{q}^{\otimes}\) in \(2(k-t+1)\)-fold exponential time. In particular, the automata sizes are bounded by their respective construction times. In layer \(L_{k}\) the automata \(\mathcal{A}_{q}^{\otimes}\) are constructed in polynomial time. In the other layers \(L_{t}\) the automata \(\mathcal{A}_{q}^{\otimes}\) are constructed by Lemma 15 in \(2(k-t)\)-fold exponential time. Each automaton \(\mathcal{I}_{q}^{\otimes}\) is constructed in double exponential time in the size of \(\mathcal{A}_{q}^{\otimes}\) (Theorem 9), which is \(2(k-t+1)\)-fold exponential in \(|\mathcal{A}|\). Furthermore, each recognizability test on \(\mathcal{A}_{q}^{\otimes}\) in layer \(L_{t}\) where \(t\in[3,k]\) takes double exponential time in \(|\mathcal{A}_{q}^{\otimes}|\) by Theorem 2, which is \(2(k-t+1)\)-fold exponential in \(|\mathcal{A}|\). The relations \(R_{q}^{\otimes}\) in layer \(L_{2}\) are binary and therefore recognizability can be tested in polynomial time in \(|\mathcal{A}_{q}^{\otimes}|\), which is \(2(k-2)\)-fold exponential in \(|\mathcal{A}|\).

## Acknowledgments

The authors thank Stefan Göller, Anthony W. Lin, and Georg Zetzsche for helpful discussions. This work is funded by the European Union (ERC, AV-SMP, 759969 and ERC, FINABIS, 101077902). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
2305.06661
Interstitial flow potentiates TGF-$β$/Smad-signaling activity in lung cancer spheroids in a 3D-microfluidic chip
Within the tumor microenvironment (TME), cancer cells use mechanotransduction pathways to convert biophysical forces to biochemical signals. However, the underlying mechanisms and functional significance of these pathways remain largely unclear. The upregulation of mechanosensitive pathways from biophysical forces such as interstitial flow, leads to the activation of various cytokines, most importantly transforming growth factor-$\beta$ (TGF-$\beta$). TGF-$\beta$ is a critical inducer of epithelial-mesenchymal transition (EMT) in cancer cells that leads to increased cell motility and invasion in a Smad dependent manner. Current research models have limited ability to investigate the combined effects of biophysical forces (such as interstitial flow) and cytokines (TGF-$\beta$) in a 3D microenvironment. We used a 3D-matrix based microfluidic platform to demonstrate the potentiating effect of interstitial flow (IF) on exogenous TGF-$\beta$ induced upregulation of the Smad-signaling activity and the expression of mesenchymal marker vimentin in A549 lung cancer spheroids. To monitor this, we used stably integrated fluorescent based reporters into the A549 cancer cell genome. Our results demonstrate that interstitial flow enhances exogenous TGF-$\beta$ induced Smad-signaling activity in lung cancer spheroids embedded in a matrix microenvironment. In addition, we observed an increased cell motility for A549 spheroids when exposed to interstitial flow and TGF-$\beta$. Our 3D-microfluidic model integrated with real-time imaging provides a powerful tool for investigating cancer cell signaling and motility associated with invasion characteristics in a physiologically relevant TME.
Zaid Rahman, Ankur Deep Bordoloi, Haifa Rouhana, Margherita Tavasso, Valeria Garbin, Peter ten Dijke, Pouyan E. Boukany
2023-05-11T08:56:41Z
http://arxiv.org/abs/2305.06661v2
Interstitial flow potentiates TGF-\(\beta\)/Smad-signaling activity in lung cancer spheroids in a 3D-microfluidic chip

###### Abstract

Within the tumor microenvironment (TME), cancer cells use mechanotransduction pathways to convert biophysical forces to biochemical signals. However, the underlying mechanisms and functional significance of these pathways remain largely unclear. The upregulation of mechanosensitive pathways from biophysical forces such as interstitial flow, leads to the activation of various cytokines, most importantly transforming growth factor-\(\beta\) (TGF-\(\beta\)). TGF-\(\beta\) is a critical inducer of epithelial-mesenchymal transition (EMT) in cancer cells that leads to increased cell motility and invasion in a Smad dependent manner. Current research models have limited ability to investigate the combined effects of biophysical forces (such as interstitial flow) and cytokines (TGF-\(\beta\)) in a 3D microenvironment. We used a 3D-matrix based microfluidic platform to demonstrate the potentiating effect of interstitial flow (IF) on exogenous TGF-\(\beta\) induced upregulation of the Smad-signaling activity and the expression of mesenchymal marker vimentin in A549 lung cancer spheroids. To monitor this, we used stably integrated fluorescent based reporters into the A549 cancer cell genome. Our results demonstrate that interstitial flow enhances exogenous TGF-\(\beta\) induced Smad-signaling activity in lung cancer spheroids embedded in a matrix microenvironment. In addition, we observed an increased cell motility for A549 spheroids when exposed to interstitial flow and TGF-\(\beta\). Our 3D-microfluidic model integrated with real-time imaging provides a powerful tool for investigating cancer cell signaling and motility associated with invasion characteristics in a physiologically relevant TME.

+ Footnote †: _Department of Cell and Chemical Biology and Oncode Institute, Leiden University Medical Center, Leiden, The Netherlands._

## 1 Introduction

The 3D tumor microenvironment (TME) plays a crucial role in the progression and metastasis of primary tumors to secondary tumor sites [1, 2]. It consists of key components such as the extracellular matrix (ECM), biophysical forces (interstitial flow and consequent fluid stresses), tumor cell-TME interactions in the presence of stromal cells, immune cells and cancer-associated fibroblasts (CAFs). The interplay of these components contributes to the metastatic cascade of events from early dissemination to extravasation [3]. However, most tumor cell migration and invasion studies have been performed in 2D/3D in-vitro models that poorly recapitulate the characteristics of solid tumors in-vivo. To overcome these limitations, microfluidic platforms provide an effective tool to replicate a physiologically relevant TME for studying cancer cell behavior [4]. Recent advances in microfluidic platforms based on a 3-D matrix have allowed for the incorporation of key components of the tumor microenvironment (TME) in cancer cell migration and invasion studies [4, 5].
To mimic the ECM, natural-hydrogel materials with tunable mechanical properties are used [6]. These hydrogels have been further embedded with single cancer cells and/or cancer cell aggregates/spheroids to include cell-matrix interactions. Advancements in modeling and fabrication technologies have improved microfluidic devices to introduce interstitial flow (IF) for long-term perfusion and culture conditions. In the past, IF studies were mostly performed on single cells embedded in a matrix material [7, 8, 9]. Recently, researchers have investigated the invasive and migratory cellular response of a breast tumor spheroid model under IF to show morphological and epigenetic changes [10]. However, these studies were limited to highly migratory breast cancer cells and did not include the effect of biochemical signals on EMT signaling pathways. The role of IF is important due to its direct influence on the remodeling of the ECM, where compressive, tensional and shear forces are sensed by cell-surface receptors that activate mechanotransduction pathways to trigger biochemical signals [7, 11, 12]. This further leads to the activation and upregulation of many core EMT cytokines, including the TGF-\(\beta\) cytokine, known as a key EMT inducer [13, 14]. In solid tumors, the poorly drained interstitial flow is responsible for interstitial fluid pressure build-up in the surrounding healthy tissue [7]. Moreover, the lung tumor tissue is constantly subjected to a mechanical load due to its physiological activities that may result in cancer cell invasion and migration [15, 16]. Therefore, it is of primary interest to study primary lung tumor models such as A549 lung adenocarcinoma when subjected to biophysical force induced stresses. It has been hypothesized that the exposure of cancer cells to IF promotes endogenous TGF-\(\beta\) driven non-Smad-signaling activity towards an EMT response [17, 18, 12]. Moreover, studies have also investigated the role of fluid-induced shear stress in promoting mechanotransduction pathways (such as YAP/TAZ) responsible for triggering EMT signaling for cancer cell invasion in non-small cell lung cancer, breast cancer and melanoma tumors [19, 20, 21]. TGF-\(\beta\) is capable of promoting cancer cell invasion and progression in various tumor types such as lung, breast and pancreatic cancer [22, 23]. TGF-\(\beta\) receptors at the cell-surface upon binding TGF-\(\beta\) activate the intracellular Smad-signaling pathway [22]. Activated Smads can act as transcription factors to mediate EMT associated with cancer. Many researchers have studied the role of TGF-\(\beta\) in static 2D/3D tumor models, highlighting its importance in activating EMT transcription factors including SNAI1, TWIST, ZEB1/2. Studies conducted on A549 lung adenocarcinoma cells showed EMT behavior upon exposure to TGF-\(\beta\) cytokine [24, 25, 26, 27]. Most studies focused on the upregulation of mesenchymal markers (such as vimentin) and an increased expression of transcription factors such as SNAI1 and ZEB2 highlighting the EMT response [24, 28, 29]. The upregulation of the vimentin mesenchymal marker and downregulation of the E-cadherin (epithelial marker) in A549 lung cancer cells were found to be associated with an aggressive motile response [23]. In recent years, researchers further studied A549 3D cancer models towards EMT behavior [30, 31]. However, these studies were performed in culture conditions devoid of matrix material and IF.
Thus, there is an evident lack of research on the effect of IF and exogenous TGF-\(\beta\) on A549 lung tumor EMT response in a relevant matrix microenvironment. In this research, we employed a 3D-matrix based microfluidic model to investigate the impact of IF and exogenous TGF-\(\beta\) cytokine on epithelial-like A549 spheroids (Fig. 1(A) and (B)). Specifically, we investigated the Smad-signaling pathway and vimentin biomarker expression in response to varying IF and exogenous TGF-\(\beta\) concentration towards cancer cell invasion response (Fig. 1(C)). These studies were conducted with genetically modified A549 lung tumor cells with dual artificial reporter constructs for Smad-signaling pathway (CAGA-12-GFP reporter gene) and vimentin biomarker (RFP reporter gene) (Fig. 1(D)). We demonstrate that IF potentiates Smad-signaling response when exposed to exogenous TGF-\(\beta\). The combined effect of IF and TGF-\(\beta\) also showed increased vimentin activity. Lastly, A549 lung tumor exhibited increased cell plasticity motion on the spheroid periphery showing cancer cell invasion characteristics. These findings suggest that external IF and cellular cues play critical roles in promoting the invasion characteristics of cancer cells within relevant matrix microenvironments, and highlight the importance of incorporating these factors in cancer research models. ## 2 Materials and Methods ### Cell culture A549-VIM-RFP cells were acquired from the company, ATCC [32]. These cells are CRISPR-ed reporter cell line that shows red fluorescent protein (RFP) when it undergoes EMT. To construct a dual reporter, A549-VIM-RFP cells were transduced with a lentiviral CAGA-12-GFP construct to produce a green fluorescent protein (GFP) upon Smad-pathway activation. The dual reporter cell line was a gift from Yifan Zhu (Department of Cell and Chemical Biology, LUMC). They were maintained in Dulbecco's Modified Eagle Medium High Glucose (DMEM,Sigma) containing 4.5 g/L glucose, L-glutamine without sodium pyruvate, and supplemented with 10% Fetal Bovine Serum (FBS), Sigma) and 1% Antibiotic-Antimycotic solution (Gibco). All cells were incubated at 37\({}^{\circ}\)C with 5% CO\({}^{2}\) and sub-cultured 2 times per week. Cells were frequently tested for absence of mycoplasma and checked for authenticity by STR profiling. ### Spheroid fabrication Spheroids were grown in a commercially available Corning(tm) Elplasia(tm) 96-well plate for high-throughput spheroid production. These well plates are round-bottom with Ultra-Low Attachment (ULA) surface that prevents cell-surface attachment and promotes cell-cell adhesion. We use an initial seeding density of 40 x 10\({}^{3}\) cells (500 cells per micro-well) for each well to produce 79 spheroids (Fig.S1). Spheroid size is dependent on the initial seeding density, cell proliferation rate and culture duration. Spheroids are ready to use after 4 days of culture in the wells and range between 180-220 \(\mu\)m in diameter. We restricted the spheroid diameter to less than 220 \(\mu\)m to avoid a necrotic core and to avoid contact with the glass bottom of the microfluidic chip. Any cancer spheroids that made contact with the microfluidic glass substrate, were excluded from the analysis in this work. ### Hydrogel synthesis and characterization Gelatin methacryloyl (GelMA), 300g bloom, 60 % degree substitution, was purchased from Sigma Aldrich. 
Like gelatin, gelMA is still a thermo-reversible gel; however, the methacrylic anhydride groups give the ability to undergo covalent cross-linking under UV light (365 nm) in the presence of a photoinitiator. 5 wt.% gelMA was used in experiments, with a mass ratio of 1:16 of photoinitiator (Lithium phenyl-2,4,6-trimethylbenzoylphosphinate, LAP; Sigma Aldrich). LAP and gelMA were added together and dissolved in Dulbecco's Phosphate Buffered Saline (DPBS; Gibco). The mixture was dissolved at 37\({}^{\circ}\)C in a water bath for about 2 hours. The hydrogel was then crosslinked using the Colibri Axio Observer microscope laser at 385 nm with a 5x objective lens for 45 seconds. The viscoelastic properties of crosslinked GelMA were investigated with a modular rotational rheometer (DSR 502, Anton Paar) equipped with a parallel plate of a diameter of 25 mm. Experimental details can be found in the SI. The 5 wt.% GelMA analyzed at a fixed strain of 1% with frequency sweeps (0.1 to 100 rad/s) at room temperature showed a solid-like behaviour, with a storage modulus G\({}^{\prime}\)\(\approx\) 250 Pa, higher than the loss modulus (G\({}^{\prime\prime}\)) by at least one order of magnitude (Fig. S1). The lung tumor tissue according to the literature has an elastic modulus of about 200 Pa in vivo [33]. To replicate the mechanical properties of the TME (under in vivo conditions), we employed a matrix material with similar mechanical properties.

### Microfluidic chip fabrication and interstitial flow characterization

The microfluidic chip was fabricated on a 4-inch silicon wafer by the photo-lithography process in a cleanroom facility using a \(\mu\)MLA Laser Writer (Heidelberg Instruments) (full procedure described in SI). The microfluidic chip design was inspired by IF studies on single cells and was upgraded to fabricate a channel height of 280 \(\pm\) 25 \(\mu\)m (Fig. 1A) [34]. From the master mould, polydimethylsiloxane (PDMS) based microfluidic chips were fabricated by the soft-lithography technique (refer to SI for the detailed procedure). The microfluidic chip consisted of three parallel channels separated by triangular pillars (all side lengths: 150 \(\mu\)m and height: 280 \(\pm\) 25 \(\mu\)m). The middle channel is loaded with 5 wt\(\%\) gelMA hydrogel, which is crosslinked under UV-light (385 nm) for 45 seconds. The top and bottom channels are the fluidic channels. The inlets of the top channel were maintained at a higher pressure (\(P_{1}\)) relative to the bottom channel (\(P_{2}\)) to generate an IF along the pressure gradient (Fig. 1A). By controlling the pressure of the reservoirs at (\(P_{1}\)) and (\(P_{2}\)), we were able to establish a pressure gradient to generate an IF through the gelMA hydrogel across the microfluidic device. The inlet and outlet pressures were controlled by a pressure pump (Fluigent) and operated via InP software to pressurize the sample reservoirs. According to Darcy's Law, flow velocity through a porous material is directly proportional to the pressure gradient, governed by the hydraulic permeability (K) of the material. In this case, we first calculated the hydraulic permeability of 5 wt\(\%\) gelMA and then estimated the average IF velocity (\(u_{m}\)); refer to SI for the detailed protocol. We tested two pressure drops (\(\Delta P=P_{1}-P_{2}\)) of 20 mbar and 30 mbar that corresponded to an interstitial flow velocity of \(u_{m}\) = 0.2 \(\mu\)m/s and 0.45 \(\mu\)m/s obtained via COMSOL Multiphysics using the Free and Porous Media Flow interface (Fig. S2(A and B)).
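As a back-of-the-envelope companion to the Darcy's-law estimate described above, the short sketch below converts a pressure drop into a mean interstitial velocity \(u_{m}=(K/\mu)\,\Delta P/L\). The permeability, medium viscosity and flow-path length used here are illustrative placeholder values only (the actual permeability measurement and the COMSOL porous-media model are described in the SI), so the printed numbers are not the reported ones.

```python
# Minimal Darcy's-law estimate of the mean interstitial velocity through the gel channel.
# K, mu and L below are illustrative placeholders, not values measured in this work.

def darcy_velocity(delta_p_mbar, permeability_m2, viscosity_pa_s, length_m):
    """u_m = (K / mu) * (dP / L), with dP converted from mbar to Pa."""
    delta_p_pa = delta_p_mbar * 100.0      # 1 mbar = 100 Pa
    return (permeability_m2 / viscosity_pa_s) * delta_p_pa / length_m

K = 1e-16     # m^2, assumed hydraulic permeability of the crosslinked gelMA
mu = 7e-4     # Pa*s, assumed viscosity of culture medium at 37 degrees C
L = 1e-3      # m, assumed flow path length across the hydrogel channel

for dp_mbar in (20, 30):                   # the two tested pressure drops
    u = darcy_velocity(dp_mbar, K, mu, L)
    print(f"dP = {dp_mbar} mbar  ->  u_m ~ {u * 1e6:.2f} um/s")
```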
### Microfluidic device setup for IF and exogenous TGF-\(\beta\) studies To investigate the effect of IF and exogenous TGF-\(\beta\) on spheroids, we used a step-wise procedure as described below. We first collected A549 spheroids from an Elplasia 96-well plate after 4 days of culture. The collected spheroids were then transferred to an empty well of a separate Corning Ultra-Low Attachment (ULA) 96-well plate. Fig. 1: 3D-matrix based microfluidic platform to study interstitial flow and TGF-\(\beta\)/Smad-signaling and vimentin expression in A549 lung tumor spheroids. (A) Schematic of the 3D-matrix based microfluidic platform. The inset figure shows the relative size of a spheroid embedded in the microfluidic channel. (B) Bright-field image displaying an A549 spheroid embedded in the 3D-hydrogel matrix scaffold in the hydrogel channel. Scale bar: 200 \(\mu\)m. (C) A549 lung tumor spheroid exposed to interstitial flow and exogenous TGF-\(\beta\), embedded in a matrix microenvironment. Exogenous TGF-\(\beta\) molecules specifically bind to TGF-\(\beta\) surface receptors to activate the intracellular Smad-derived transcriptional response. Cells sense interstitial flow to initiate mechanotransduction pathways that trigger EMT. (D) Upregulation in transcriptional Smad-derived CAGA-12-GFP reporter gene response and RFP-linked vimentin expression from exogenous TGF-\(\beta\) under flow conditions. Once all the spheroids settled at the bottom of the well after 5 minutes, the cell culture medium was aspirated out, leaving only the spheroids in the well. A small volume of 5 wt.% gelMA was added to this well to make a hydrogel-spheroid suspension. The hydrogel-spheroid suspension was then pipetted into the middle channel of the microfluidic device, allowing entry of multiple spheroids. Once the middle channel was full, we gently removed the pipette from the inlet without introducing any air bubbles. The chip was then transferred to a microscope stage for UV-crosslinking with the 385 nm source for 45 seconds using a 5x objective lens. After UV-irradiation, the hydrogel undergoes irreversible chemical crosslinking and acts as a 3D scaffold for the spheroids (Fig. 1B). To generate IF, we operated the microfluidic device as described in the section above. For experiments to study the effect of interstitial flow on A549 spheroids, the sample reservoir for the top channel was filled with cell culture medium (DMEM, high glucose, 10% v/v FBS, 1% v/v antibiotics). For IF with exogenous TGF-\(\beta\) experiments on A549 spheroids, we supplemented the culture medium with TGF-\(\beta\) (stock concentration: 5 ng/mL) to achieve a final concentration of 0.1-10 ng/mL. Brightfield and fluorescent images of the spheroids were captured on an inverted fluorescence microscope (Zeiss Axio Observer) at an interval of 1 hour for a duration of 70 hours, using a 20x/0.16 NA air objective and an ORCA Flash 4.0 VZ (Hamamatsu) digital camera with a resolution of 2048x2048 pixels. We used a software autofocus strategy with the best-contrast method to reduce background or out-of-focus fluorescence signal. For the GFP and RFP fluorescence, we used the 488 LED source (ex: 488 nm; em: 520 nm) and the 543 LED source (ex: 543 nm; em: 590 nm), respectively. All experiments were conducted at 37\({}^{\circ}\)C and 5% CO\({}_{2}\) using a stage-top incubator (ibidi). Bright-field images were taken at 10% light intensity and 100 ms exposure time.
Fluorescence signal intensities for the GFP and RFP images were analyzed via ImageJ (v1.53t, National Institutes of Health, USA). A region of interest was created encircling the entire spheroid area for both GFP and RFP, performed separately. This region of interest was quantified for pixel intensity density at every time point using the Measure function in ImageJ. The fluorescence intensity values are normalized with respect to the signal intensity at t = 0 hr. The device is robustly operational at pressure differences up to 30-35 mbar in the presence of spheroids. Increasing the pressure drop beyond this resulted in the hydrogel structure breaking and interrupted uniform IF after a few hours. Within this pressure drop range, we were able to perform long-term culture experiments (up to 70 hours) to visualize cancer cell spheroids for fluorescence reporter signaling activity and invasive response. ### Statistical analysis Statistical analyses were performed using Microsoft Excel (Microsoft Corporation, USA). Statistically significant differences between two experimental groups were determined by Student's t-test using the function _t-test: two sample with unequal variance_, and p values below 0.05 were considered significant. We categorize statistical significance as follows: p < 0.001 (***), p < 0.01 (**) and p < 0.05 (*). ## 3 Results and discussion ### Exogenous TGF-\(\beta\) induced CAGA-12-GFP reporter response under interstitial flow conditions To analyze the effect of exogenous TGF-\(\beta\) under interstitial flow (IF) on the Smad3/4-dependent transcriptional reporter response, we first examined the overall CAGA-12-GFP reporter fluorescence intensities at the end of 70 hrs for a fixed \(C_{0}\) = 10 ng/mL of exogenous TGF-\(\beta\): (i) with IF (IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\)) and (ii) without IF (IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\)). These two conditions are contrasted with an IF condition without any exogenous TGF-\(\beta\) (IF\({}^{+}\)TGF-\(\beta\)\({}^{-}\)). Fig. 2A shows the brightfield images superposed with GFP fluorescence intensity at 0 and 70 hrs for these three conditions. The IF conditions were obtained under a fixed pressure gradient of \(\Delta\)P = 30 mbar, equivalent to an average interstitial fluid velocity of \(u_{m}\) = 0.45 \(\mu\)m/s (measured separately via an independent experiment; see Fig. S2(B)). We observed an enhanced CAGA-12-GFP reporter expression with the addition of exogenous TGF-\(\beta\) (Fig. 2A (ii) vs. (iii)), which becomes further amplified across the spheroid under the imposed IF (Fig. 2A (i) vs. (ii)). This observation strongly suggests that IF enhances the exogenous TGF-\(\beta\) induced Smad-signaling activity in A549 spheroids. The relative increase in the CAGA-12-GFP reporter expression (\(I\)) at t = 70 hrs for these conditions was quantified for multiple spheroids based on the intensity readouts normalized by the baseline values (\(I_{0}\)) at t = 0. The box plot in Fig. 2B shows the average normalized reporter signal intensity (\(I/I_{0}\)) as a function of varying exogenous TGF-\(\beta\) conditions under fixed IF at \(\Delta\)P = 30 mbar. Among all the reported conditions, we observe the strongest reporter upregulation (\(I/I_{0}\) = 13 \(\pm\) 2.73) for an exogenous TGF-\(\beta\) concentration of \(C_{0}\) = 10 ng/mL under IF (i.e. IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL)).
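The quantification used here and described in the Methods (per-spheroid intensities normalized to their t = 0 value, compared between conditions with a two-sample t-test assuming unequal variance) can be sketched as follows; the arrays, sample sizes and values below are illustrative placeholders, not measured data.

```python
# Illustrative sketch of the quantification in the Methods: per-spheroid
# intensities are normalized to their t = 0 value, and two conditions are
# compared with a two-sample t-test assuming unequal variance (Welch's test).
import numpy as np
from scipy import stats

def normalize_to_baseline(intensity):
    """Normalize a (n_spheroids, n_timepoints) intensity array to its t = 0 column."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity / intensity[:, [0]]

# Placeholder raw intensity time series for two conditions (3 spheroids, 3 timepoints).
if_plus_tgf_plus = normalize_to_baseline([[100, 600, 1150], [90, 500, 1200], [110, 700, 1500]])
if_minus_tgf_plus = normalize_to_baseline([[100, 250, 400], [95, 230, 430], [105, 240, 410]])

end_a = if_plus_tgf_plus[:, -1]    # normalized I/I0 at the final time point
end_b = if_minus_tgf_plus[:, -1]
t_stat, p_value = stats.ttest_ind(end_a, end_b, equal_var=False)  # Welch's correction
print(f"Welch's t-test on endpoint I/I0: t = {t_stat:.2f}, p = {p_value:.3g}")
```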
We also observed that supplying exogenous TGF-\(\beta\) (10 ng/mL) without IF (IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL)) results in approximately 68% lower reporter expression when compared to 1 ng/mL under IF (IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) (1 ng/mL)) (Fig. 2B). In addition, we studied the effect of different IF conditions without exogenous TGF-\(\beta\) on the CAGA-12-GFP reporter response. We observed minimal reporter gene upregulation, which is linked to the inactivity of the Smad-pathway at t = 70 hrs in the absence of exogenous TGF-\(\beta\) (Fig. S3). To explore this potentiating effect between IF and exogenous TGF-\(\beta\) further, we employed time-lapse imaging to monitor the CAGA-12-GFP reporter expression profile through 70 hrs under varying IF (\(u_{m}\) = 0.2 \(\mu\)m/s and 0.45 \(\mu\)m/s at \(\Delta\)P = 20 and 30 mbar, respectively) and exogenous TGF-\(\beta\) concentrations (\(C_{0}\) = 1 and 10 ng/mL). Fig. 2C shows the time-wise variations in \(I/I_{0}\) for the following three combinations of IF and exogenous TGF-\(\beta\) concentrations: IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) at \(\Delta\)P = 30 mbar with \(C_{0}\) = 10 ng/mL; IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) at \(\Delta\)P = 20 mbar with \(C_{0}\) = 10 ng/mL; compared with the no-IF condition, i.e. IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\) (\(C_{0}\) = 10 ng/mL). We observed a clear influence on the CAGA-12-GFP signal intensity profile with changing IF pressure gradients. For \(C_{0}\) = 10 ng/mL, the IF condition at \(\Delta\)P = 30 mbar resulted in the fastest non-linear increase in the fluorescence signal intensity profile, which begins to show saturation over the 70-hour time period (Fig. 2C). For the IF at 20 mbar and the no-IF conditions, the fluorescence signal intensity showed relatively slower upregulation responses (Fig. 2C). Interestingly, for the IF condition of \(\Delta\)P = 30 mbar, decreasing the exogenous TGF-\(\beta\) concentration by an order of magnitude (i.e. \(C_{0}\) = 1 ng/mL) still resulted in an upregulation response faster than the \(\Delta\)P = 20 mbar; \(C_{0}\) = 10 ng/mL condition. This observation indicates that the Smad-dependent transcriptional reporter response is weakly sensitive to the exogenous TGF-\(\beta\) concentration, but has a stronger dependence on the IF. Under the fixed \(\Delta\)P = 30 mbar, for both \(C_{0}\) = 1 and 10 ng/mL the upregulation rates are nearly equal between 25-55 hrs, after an initial delayed response for the former. Additionally, we performed similar experiments under no-IF conditions with exogenous TGF-\(\beta\) concentrations of 1 and 10 ng/mL. The CAGA-12-GFP expression for 1 ng/mL and 10 ng/mL showed a similar upregulation profile (see Fig. S4), suggesting that the transcriptional response is fairly independent of the exogenous TGF-\(\beta\) concentration above \(C_{0}\) = 1 ng/mL. It has been proposed that the transcriptional gene response from the Smad-signaling pathway is dependent on a chain of reaction kinetics initiated by the binding of exogenous TGF-\(\beta\) molecules at the active receptor sites [35]. These reaction kinetics include the expression levels of TGF-\(\beta\) receptors and Smads and their activation states, the ability to translocate into the nucleus, and the ability to interact with other transcription factors, co-activators, co-repressors, chromatin modulators, etc. [35]. The local concentration of available TGF-\(\beta\) should influence the conversion capacity of binding sites to active receptor sites.
Additionally, active TGF-\(\beta\) availability is tightly controlled by its interaction with ECM proteins, and its ability to present itself to signaling receptors is regulated by co-receptors (without an intrinsic enzymatic motif), by integrins, and by other receptor molecules [22]. The upregulation rate of the transcriptional gene response is controlled by the density of binding sites and the reaction rate constant. To estimate the evolution of the local TGF-\(\beta\) concentration in the vicinity of a spheroid, we performed 2D mass transport simulations using the finite-element method (implemented in COMSOL Multiphysics) by varying the IF conditions and the input concentration of exogenous TGF-\(\beta\) (Fig. S5). By the 350-400 minute mark, each condition has achieved its respective saturation concentration (\(C_{0}\)) of the exogenous TGF-\(\beta\) (Fig. S5). Since the upregulation of CAGA-12-GFP was found to be fairly independent of the exogenous TGF-\(\beta\) concentration even under the no-IF condition (see Fig. S4), we expect that all the active binding sites on the spheroid interface are activated for each case by this point (350-400 minutes) of the experiment. Comparing this analysis with the results in Fig. 2C suggests that, besides the exogenous TGF-\(\beta\), there are additional biophysical-force-induced mechanotransduction pathways that contribute to the enhanced CAGA-12-GFP reporter upregulation from TGF-\(\beta\) induced Smad-signaling activity. It is likely that the flow-induced stress under IF triggers certain mechanotransduction pathways leading to the upregulation of Smad-signaling activities. Earlier studies have linked mechanotransduction-induced EMT for cancer cells under flow-induced shear stresses of 0.1-3 Pa [36, 37, 38]. However, these studies were limited to 2D monolayer culture without an extracellular matrix environment. The shear stress induced by an interstitial fluid flow is typically reported to be on the order of 0.01 Pa [39]. These values are reported for cells cultured on 2D substrates subjected to interstitial flow velocities in microfluidic systems. Fig. 2: Exogenous TGF-\(\beta\) induced CAGA-12-GFP reporter gene response of A549 spheroids under interstitial flow and no-flow conditions. (A) 20x GFP and bright-field merged microscope images of A549 spheroids at t = 0 and 70 hrs showing gene-reporter intensity upregulation for the following conditions: (i) IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL), (ii) IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL) and (iii) IF\({}^{+}\)TGF-\(\beta\)\({}^{-}\), scale bar: 100 \(\mu\)m. (B) Quantitative measurement of normalized CAGA-12-GFP reporter signal intensity at t = 70 hrs for varying exogenous TGF-\(\beta\) under fixed IF and no-flow conditions. (C) Time series quantification of fluorescence signal intensity (with time intervals of 5 hrs) for the CAGA-12-GFP upregulation profile under different IF (\(\Delta\)P = 20 mbar and 30 mbar) and exogenous TGF-\(\beta\) conditions (1 and 10 ng/mL), for n = 3 spheroids in each condition. Based on our simulation, the shear stress on a 2D spheroid model interface embedded in a low permeability matrix (mimicking the properties of gelMA used in the experiments) was found to be relatively low (\(\sim\) 0.1-0.3 mPa, see Fig. S2(C)). On the other hand, the normal stress caused by hydrodynamic pressure at the spheroid interface is significantly higher (\(\sim\) 1-2 kPa, see Fig. S2(D)).
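As a rough, assumption-laden consistency check on these magnitudes (not a substitute for the COMSOL simulations), the short script below estimates the Péclet number governing TGF-\(\beta\) transport around a spheroid and the pressure scale set by the applied pressure drop; the diffusivity and length scale are assumed values rather than measured ones.

```python
# Order-of-magnitude checks on the transport and stress scales discussed above.
# All inputs are assumed or typical values, not the simulated/measured ones.

u_m = 0.45e-6          # m/s, average interstitial velocity at dP = 30 mbar
L_spheroid = 200e-6    # m, characteristic spheroid diameter (180-220 um)
D_tgfb = 6e-11         # m^2/s, assumed diffusivity of TGF-beta in the hydrogel

peclet = u_m * L_spheroid / D_tgfb
print(f"Peclet number ~ {peclet:.1f} (advection and diffusion are comparable)")

# The full applied pressure drop across the gel is an upper bound on the
# hydrodynamic normal stress that can load the spheroid interface.
dp_applied_pa = 30 * 100.0   # 30 mbar in Pa
print(f"Applied pressure drop ~ {dp_applied_pa:.0f} Pa, "
      "consistent with the kPa-scale normal stress from the simulation")
```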
This suggests that the compression induced by the hydrodynamic pressure might be responsible for the faster upregulation of CAGA-12-GFP reporter intensity at \(\Delta\)P = 30 mbar IF, irrespective of the exogenous TGF-\(\beta\) concentration. Cancer cells have the ability to respond to these mechanical cues (matrix stiffness, fluid shear stress and compressive forces) by activation of cell-surface mechanosensors such as integrins, the focal adhesion complex, transient receptor potential (TRP) ion channels and the YAP/TAZ signaling pathway [20, 40, 41, 42, 43]. Activation of mechanotransduction-induced signaling pathways, such as YAP/TAZ, is commonly identified to promote cancer cell invasion and trigger EMT signaling pathways [19, 44, 45]. We hypothesize that IF induces a shear/compressive stress that further potentiates the Smad-signaling pathway through additionally activated mechanotransduction pathways. A detailed study to identify the upregulation in mechanotransduction pathways under IF in a TME will shed more light on this hypothesis. ### Local heterogeneity in Smad-pathway reporter activity under IF-exogenous TGF-\(\beta\) condition To examine the local heterogeneity of the transcriptional reporter gene response in a spheroid, we compared the fluorescence intensity profile for a fixed exogenous TGF-\(\beta\) concentration (\(C_{0}\) = 10 ng/mL) under two different values of IF (\(\Delta\)P = 20 and 30 mbar) and no-IF conditions. The evolution of the radially averaged polar intensity profiles for these three conditions was computed at four time points (t = 0, 24, 48 and 70 hrs). The intensity profile for each spheroid was normalized to its initial fluorescence value at t = 0 hrs. 2D GFP channel images acquired with the 20x objective lens were analyzed for these experimental results. An example of the methodology for this analysis technique is shown in Fig. S6. The difference in fluorescence signal intensity among these conditions is influenced by the varying IF conditions (as discussed in Section 3.1). Besides, compared to the axisymmetric fluorescence intensity profiles around the spheroids under the no-IF condition (Fig. 3C), both IF conditions showed locally asymmetric fluorescence intensity profiles (Fig. 3A,B). In the no-IF conditions, the fluorescence intensity is not influenced by any hydrodynamic factors or fluid-induced shear/compressive stress. We suspect that the absence of mechanical stress, and hence the inactivation of mechanotransduction pathways, explains the axisymmetric fluorescence profiles induced by exogenous TGF-\(\beta\) alone. The fore-aft asymmetry in the CAGA-12-GFP upregulation profiles under IF conditions could be attributed to higher hydrodynamic stress along the direction of flow (top to bottom). These observations can be further linked to the hypothesis of shear/compressive stress induced activation of mechanosensory molecules on the spheroid. ### Exogenous TGF-\(\beta\) induced vimentin activity towards cancer cell motility To further explore the potentiating effect of interstitial flow (IF) with exogenous TGF-\(\beta\), we examined the upregulation of vimentin, as measured by the RFP reporter response. Vimentin, a key mesenchymal biomarker, is upregulated in lung cancer cells in the presence of TGF-\(\beta\) as part of the EMT response [24, 26, 27]. We measured the upregulation in vimentin expression activity by quantifying RFP reporter expression under flow and no-flow conditions with the addition of exogenous TGF-\(\beta\). Fig. 4A shows the superposed microscope images of the brightfield and RFP channels at t = 0 and 70 hrs.
We observed an enhanced RFP reporter expression with IF and exogenous TGF-\(\beta\) (IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\)) (Fig. 4A (i)) compared to the IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\) condition (Fig. 4A (ii)). Fig. 4B shows the quantified RFP signal upregulation for the same conditions (as described in Section 3.1). We observed the strongest reporter upregulation for the IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL) condition, compared to the no-flow condition with the same exogenous TGF-\(\beta\) concentration (Fig. 4B). In addition, standard deviation analysis of the time-lapse images revealed increased activity of invasive cells at the spheroid periphery under IF with exogenous TGF-\(\beta\) compared to the no-flow condition (Fig. 4C and D; see also Fig. S2). This increased activity can be linked to our mechanotransduction hypothesis. It is highly likely that A549 cells under IF convert mechanical forces (fluid shear stress/compressive stress) into biochemical signals to promote a cancer cell invasion response. This brings us closer to our hypothesis of normal/shear stress activating mechanosensory molecules that trigger EMT signaling pathways. These findings highlight the importance of IF and exogenous TGF-\(\beta\), which directly influence A549 tumor cells to undergo morphological changes towards EMT-like behavior. ## 4 Conclusions and Outlook We used a 3D-matrix based microfluidic platform to investigate the potentiating effect of IF on exogenous TGF-\(\beta\) induced Smad-signaling activity in A549 lung cancer spheroids. Our platform allowed us to embed cancer spheroids in 3D using gelMA hydrogel as a relevant ECM material. This integrated platform of porous hydrogel material and cancer spheroids allowed us to mimic the interstitial flow (IF) conditions experienced by a tumor in a TME. One advantage of this microfluidic platform was the ability to investigate cancer cell-matrix interactions over time, allowing us to observe the effects of varying biophysical conditions and biochemical signals. By studying the interplay between the biophysical components (hydrogel matrix and IF) and the externally introduced cytokine (TGF-\(\beta\)), we aimed to better understand how these factors contribute to cancer spheroid response and invasive behavior. Fig. 4: Upregulation in vimentin expression as measured via the RFP reporter gene response toward cell motility in A549 spheroids. (A) 20x RFP and bright-field channel merged microscope images of A549 spheroids at \(t=0\) and 70 hrs showing gene-reporter intensity upregulation in the following conditions: (I) IF\({}^{+}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL) and (II) IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\) (10 ng/mL), scale bar: 100 \(\mu\)m. (B) Quantitative measurement of normalized RFP reporter signal intensity at \(t=70\) hrs for varying exogenous TGF-\(\beta\) under fixed IF (0.45 \(\mu\)m/s) at \(\Delta\)P = 30 mbar and no-flow conditions. (C) and (D) Standard deviation analysis showing spheroid peripheral activity of invasive cancer cells of A549 spheroids under IF and no-flow conditions with exogenous TGF-\(\beta\) (10 ng/mL), scale bar: 100 \(\mu\)m.
Fig. 3: Exogenous TGF-\(\beta\) induced CAGA-12-GFP reporter intensity profile under a fixed exogenous TGF-\(\beta\) concentration (10 ng/mL) and varying interstitial flow and no-flow conditions. Polar plots of radially averaged intensity profiles of A549 spheroids at four different times. Each time point is represented with a specific color. The dotted line shows the average of n = 3 spheroids and the shaded region of each color represents the standard deviation, showing the evolution and distribution of reporter expression for the conditions: (A) IF\({}^{+}\) (0.45 \(\mu\)m/s) TGF-\(\beta\)\({}^{+}\), (B) IF\({}^{+}\) (0.2 \(\mu\)m/s) TGF-\(\beta\)\({}^{+}\), and (C) no flow, IF\({}^{-}\)TGF-\(\beta\)\({}^{+}\). To this end, we monitored the upregulation in the transcriptional reporter response (CAGA-12-GFP) and vimentin expression (RFP reporter) in A549 lung spheroids using real-time imaging of artificial gene reporter constructs. Our findings suggest that the addition of IF within the 3D-matrix significantly enhances the CAGA-12-GFP reporter response from Smad-signaling activities upregulated by exogenous TGF-\(\beta\). This also leads to upregulation in vimentin expression and increased cell invasion and protrusions in a matrix microenvironment. Using complementary numerical simulations on a 2D spheroid model, we further characterized the mass transport of TGF-\(\beta\) and the flow-induced shear and normal stresses on the spheroid interface under different IF conditions. Based on these results and analyses, we hypothesize that the exogenous TGF-\(\beta\) induced Smad-signaling and vimentin expression responses are further upregulated by interstitial flow mediated mechanotransduction pathways. The 3D microfluidic platform introduced in this study has the potential to expand beyond tumor spheroid models, and can be applied to heterogeneous tumor spheroids, stromal cells, cancer-associated fibroblasts (CAFs) and immune cells. This versatility brings us closer to mimicking in vivo tumor microenvironment (TME) conditions. ## Author Contributions Z.R., P.T.D and P.E.B conceived the ideas and designed the experiments. Z.R. carried out the experiments and collected the data. H.R. and M.T. carried out additional experiments on interstitial flow rate measurement and rheology of the hydrogel. Z.R. and A.B. performed data and image analysis. A.B. performed numerical simulations. Z.R. wrote the paper and A.B., V.G., P.T.D and P.E.B edited it. ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements Z.R., A.B. and P.E.B gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no. 819424). P.T.D. and P.E.B. gratefully acknowledge funding from the Delft Health Technology grant (between LUMC and TU Delft) and ZonMW grant (09120012010061). The authors thank Yifan Zhu for the dual-reporter A549 cells.
2305.09552
InstaLoc: One-shot Global Lidar Localisation in Indoor Environments through Instance Learning
Localization for autonomous robots in prior maps is crucial for their functionality. This paper offers a solution to this problem for indoor environments called InstaLoc, which operates on an individual lidar scan to localize it within a prior map. We draw on inspiration from how humans navigate and position themselves by recognizing the layout of distinctive objects and structures. Mimicking the human approach, InstaLoc identifies and matches object instances in the scene with those from a prior map. As far as we know, this is the first method to use panoptic segmentation directly inferring on 3D lidar scans for indoor localization. InstaLoc operates through two networks based on spatially sparse tensors to directly infer dense 3D lidar point clouds. The first network is a panoptic segmentation network that produces object instances and their semantic classes. The second smaller network produces a descriptor for each object instance. A consensus based matching algorithm then matches the instances to the prior map and estimates a six degrees of freedom (DoF) pose for the input cloud in the prior map. The significance of InstaLoc is that it has two efficient networks. It requires only one to two hours of training on a mobile GPU and runs in real-time at 1 Hz. Our method achieves between two and four times more detections when localizing, as compared to baseline methods, and achieves higher precision on these detections.
Lintong Zhang, Tejaswi Digumarti, Georgi Tinchev, Maurice Fallon
2023-05-16T15:51:35Z
http://arxiv.org/abs/2305.09552v2
# InstaLoc: One-shot Global Lidar Localisation in Indoor Environments through Instance Learning ###### Abstract Localization for autonomous robots in prior maps is crucial for their functionality. This paper offers a solution to this problem for indoor environments called InstaLoc, which operates on an individual lidar scan to localize it within a prior map. We draw on inspiration from how humans navigate and position themselves by recognizing the layout of distinctive objects and structures. Mimicking the human approach, InstaLoc identifies and matches object instances in the scene with those from a prior map. As far as we know, this is the first method to use panoptic segmentation directly inferring on 3D lidar scans for indoor localization. InstaLoc operates through two networks based on spatially sparse tensors to directly infer dense 3D lidar point clouds. The first network is a panoptic segmentation network that produces object instances and their semantic classes. The second smaller network produces a descriptor for each object instance. A consensus based matching algorithm then matches the instances to the prior map and estimates a six degrees of freedom (DoF) pose for the input cloud in the prior map. The significance of InstaLoc is that it has two efficient networks. It requires only one to two hours of training on a mobile GPU and runs in real-time at 1 Hz. Our method achieves between two and four times more detections when localizing, as compared to baseline methods, and achieves higher precision on these detections. ## I Introduction Localization is a fundamental capability needed for mobile robots to navigate their environment and make decisions. There have been many studies on vision, lidar, and radar-based localization. The parent problem of Simultaneous Localisation and Mapping (SLAM) concerns a robot determining its pose while building a map of its environment concurrently. Localization, or place recognition, can contribute to SLAM by helping to _close loops_, or to determine the robot's position in a fixed prior map - the _kidnapped robot_ problem. Many popular localization methods using visual and lidar sensors have been proposed. Among visual-based approaches, visual teach-and-repeat [16, 17] is one of the most popular methods, where a robot first constructs a visual prior map and then localizes on its repeat phase. Compared to image-based camera solutions, modern 3D lidar sensors are view-invariant, robust to lighting changes, and can operate when the path traveled is offset from the original path. Given that lidar is a precise and long-range sensor, lidar localization has been heavily researched in outdoor environments, especially in the context of autonomous driving [21, 14, 27]. However, there are fewer approaches for indoor environments because these environments contain more complex structures and clutter, hence fewer clear separations between objects in lidar scans. In an indoor environment, there are many different classes of objects, with one dataset proposing 13 semantic classes [2]. The indoor scene varies greatly: from bare box-shaped rooms with four walls to narrow corridors with two long walls. Room surfaces are often covered with objects such as electronics, hanging art, ceiling lights, bookcases, and various decorative objects. Localization algorithms cannot rely on flat ground assumptions as there is often an incomplete view of the floor. In addition, there are changes in levels, with steps and staircases. 
Nevertheless, it is important to localize in these indoor scenarios to enable robots to operate robustly in complex office buildings, construction sites, warehouses, and other commercial environments. In this paper, we draw inspiration from how humans perceive the world and reach the "I know where I am" moment. By memorizing and recognizing the distinctive structures and unique objects inside a space, humans can spatially locate themselves in the environment. Based on the same principle, InstaLoc makes use of individual object instances to localize. Different from existing approaches that rely on primitive shapes or other handcrafted features, InstaLoc learns to segment and match individual objects to the prior scene. These scenes include both fixed objects (walls, ceilings, beams) and movable objects (chairs, desks), in order to tolerate active dynamics and longer term scene changes. To train the segmentation network with accurate class labels, we leverage a Fig. 1: An illustration of the _InstaLoc_ method, where a ‘live’ lidar scan (top) is localized inside the orange colored prior map (bottom) by matching semantically segmented objects (red lines). The green arrow shows the estimated position and the left corner image is the corresponding camera view. simulator to synthesize and automatically annotate every point -- thus avoiding onerous point cloud labeling. To overcome the challenge of imperfect instance segmentation, we designed a sparse convolutional descriptor network that infers many instances simultaneously and tolerates mild changes in the instance point cloud. To summarize, our contributions are: * A novel learning-based lidar localization approach for indoor environments that can process dense lidar scans on a mobile GPU in real time. * An improved panoptic segmentation network that works with single lidar scans. * A fast and efficient descriptor network to learn object instances with a variable number of input points. * State-of-the-art performance on indoor localization compared to other segment-based methods, achieving two to four times more detection. ## II Related Work In this section, we describe recent work on segment-based lidar localization and its applications to urban, natural, and industrial environments. We review approaches that use semantic segmentation for outdoor localization. Finally, we discuss methods that rely on the geometry of the scene and algorithms which can localize in maps made with other sensor modalities (co-localization). ### _Outdoor Segment-based localization_ Scan segment matching was first introduced by Douillard et al. [6], where segments were considered as a midway point between local and global approaches for describing a scene. The approach was initially applied to lidar localization by Dube et al. [7] where segments were extracted directly from raw point clouds and described with a descriptor based on the geometry of the segment (such as its eigenvalues and proportions). Later, Dube et al. [8], Tinchev et al. [20] described segments using a neural network to provide a richer and more meaningful descriptions. Building on this research, Ratz et al. [18] showed that lidar segments fused with visual data further improve the performance of global localization algorithms. Cramariuc et al. [5] fused both colour and semantic information from images to create an enriched point cloud that was later segmented and used for localization. 
We use this segment-direct concept as the basis of InstaLoc, however in contrast to these approaches, InstaLoc does not use engineered segmentation methods, nor images, to extract the semantic information but directly learns to predict the per-point instance annotation. There are several relevant outdoor lidar localization methods that make use of semantics and segments. Vidanapathirana et al. [21] used global descriptors with segments and spatiotemporal high-order pooling for place recognition. Kong et al. [14] presented a semantic graph-based approach to place recognition, where the topological information of the point cloud is preserved. Zhu et al. [27] extracted common semantic classes, such as vehicles, trunks, and poles from the raw point cloud for loop closure detection. The above methods are designed primarily for outdoor scenarios and are inadequate for an indoor setting. However, they demonstrate the value that semantic information brings to place recognition. To extend this line of research, we leverage a panoptic segmentation method that predicts both the semantic mask and instance label of each point. ### _Indoor localization_ Specifically focusing on indoor localization, the state-of-the-art methods often focus on planar floors or geometric features which describe corners and intersections as landmarks for localization. For example, Wei et al. [24] used planar floor assumption to constrain the vertical pose drift of a robot in a multi-floor parking lot. [26, 10] used planar surfaces to efficiently align two lidar scans for loop closure detection. Li et al. [15], Wang et al. [23] used floor plan features such as corners and wall intersections for localization. Bae et al. [3] proposed to use semantic features to detect and match corners of doors and walls. Other works rely upon a predefined map of the world such as a BIM model or a floor plan. Hendrikx et al. [12], Yin et al. [25] built a map from a subset of semantic entities and their associated geometries drawn from a BIM model of the world. They used a spatial database to query the position of the robot within a graph-based localization approach. They impose a prior to use static features for localization. In comparison to these approaches, we do not rely on planes or any other explicit structure to constrain our localization performance. Instead, our approach is to learn to segment semantically meaningful objects and match them between different observations of the scene. ## III Methodology In this section, we first formulate the research problem, then present the entire pipeline as shown in Fig.2: the panoptic segmentation network, the instance description module, and the matching and pose estimation module, ### _Overview_ The problem is defined as localizing a single query lidar scan \(\mathcal{Q}=\{\mathbf{q}_{i}\in\mathbb{R}^{3}\}\) within a prior map \(\mathcal{M}=\{P_{1},P_{2},\ldots P_{t_{i-1}}\}\). We seek to determine the pose of the lidar at time \(t_{i}\) defined as follows, \[\mathbf{x}_{i}\triangleq[\mathbf{t}_{i},\mathbf{R}_{i}]\in\mathrm{SO}(3)\times \mathbb{R}^{3} \tag{1}\] where \(\mathbf{t}_{i}\in\mathbb{R}^{3}\) is the translation, \(\mathbf{R}_{i}\in\mathrm{SO}(3)\) is the orientation of \(\mathcal{Q}\) in \(\mathcal{M}\). The map \(\mathcal{M}\) is a collection of registered lidar scans, \(P_{t}=\{\mathbf{m}_{i,t}\in\mathbb{R}^{3}\}\), accumulated over time. 
We approach the problem at the level of objects and compute the pose \(\mathbf{x}_{i}\) by matching object instances identified in the query scan \(\mathcal{Q}\) with those previously identified in the map \(\mathcal{M}\). The first step is to partition the map scans into meaningful object instances. Prior approaches have used planes or region growing methods to segment objects with a scan. This segmentation approach works well in outdoor environments such as in the case of autonomous vehicle localization. This is because sizes of outdoor objects and the separation between them is many times greater than the average inter-point distance in a lidar scan. However, in indoor environments, due to the close proximity of objects with each other, space partitioning and region growing approaches perform poorly. On the other hand, there are many distinguishable objects such as furniture, doors and windows; because of this we can use semantic object segmentation to partition the environment into these object segments. Objects observed in a query scan will often be quite different from those in the prior map. This could be due to observing the object from a different viewpoint in the query scan than from which it was observed in the prior map. Partial observations from different viewpoints, occlusions by other objects, or different point sampling densities ( if the query and map scans were taken from different ranges) all contribute to variation in the reconstruction of an object instance. Due to this variation in the observations, finding matches by aligning a 3D point cloud of the objects between the query and the map will result in poor pose estimation. To overcome this issue, we use object level descriptors that capture the distinguishing features of each object. Estimating pose by matching these descriptions provides some robustness against the variations which occur due to differing viewpoints and partial observations. Descriptor matching also requires lower computational and memory resources as the dimensions of the descriptor are typically smaller than the number of points in each object. After this step, descriptors of the objects segmented from the query scan need to be matched against the database of objects with descriptors in the map to determine correspondences between the query scan and the map. We use the approach from [1] to group descriptors based on their similarity and to find correspondences. Finally, we use RANSAC on a subset of correspondences to estimate the 6-DOF pose of the lidar sensor by aligning the matched objects between the two scans. In InstaLoc both the instance segmentation module and the instance description module are modeled using deep neural networks which work directly on 3D point cloud data. Typical lidar scans are point clouds with large amounts of spatially sparse data. We use sparse tensors to represent this data and designed both networks in our framework using the Spatially Sparse Convolution Library (SpConv) [4] which uses sub-manifold sparse convolutions in its neural network implementation. Sub-manifold convolutions have the advantage that they maintain a greater degree of sparsity than other sparse convolutions by overcoming the issue of sub-manifold dilation [11]. As a result, deeper networks with lower memory and computational requirements, and practical real-time capabilities can be constructed to work with large amounts of sparse data. Furthermore, Graham et al. 
[11] also showed that sub-manifold sparse convolutions are more efficient than alternate approaches that use spatial partitioning schemes. ### _Instance Segmentation_ The instance segmentation module is a point-wise panoptic segmentation network. Given a lidar scan, i.e. a set of \(N\) 3D points, \(P=\left\{\mathbf{p}_{1},\mathbf{p}_{2},\ldots\mathbf{p}_{N}|\mathbf{p}_{k}\in\mathbb{R}^{3}\right\}\) as input, the network predicts for each point \(\mathbf{p}_{k}\) a semantic label \(s_{k}\) corresponding to the object class that the point belongs to (e.g. chair, table, wall, ceiling) and an instance label \(i_{k}\) representing the unique object that the point corresponds to (e.g. chair1, chair14 or chair42). We use the state-of-the-art _Softgroup_[22] network architecture to construct this module. This architecture consists of three stages: (1) a U-Net based point-wise prediction network that generates semantic scores and an offset vector representing the distance from the point to the instance it belongs to, (2) a soft-grouping step where points are grouped by similarity of their semantic scores and their spatial distance to generate instance proposals, and (3) a refinement network that extracts features for every instance proposal and then uses a tiny U-Net based network to refine the proposals. A fixed distance threshold used to group the points in step (2) of the _Softgroup_ architecture works well for point clouds with uniform sampling density. In lidar scans, the sampling density is much lower along the vertical axis than along the horizontal axis, and points become further separated as the sensing range increases. If a fixed distance threshold is used for grouping, then the number of instance proposals would be overestimated in regions further away from the sensor; this can lead to incorrect object segmentation. We counter this issue by using an adaptive radius threshold proportionate to the vertical distance between two beams. Typically, 3D lidars rotate 360\({}^{\circ}\) horizontally and have a vertical field of view of \(\theta\) radians. For a point \(\mathbf{p}_{i}(x_{i},y_{i},z_{i})\) resulting from a lidar beam in a point cloud, with the sensor origin \(O\), its radius threshold \(\rho_{i}\) is: \[\rho_{i}=\alpha\cdot d(\mathbf{p}_{i},O)\cdot\tan(\frac{\theta}{N_{beam}})\approx\alpha \cdot d(\mathbf{p}_{i},O)\cdot\frac{\theta}{N_{beam}} \tag{2}\] where \[d(\mathbf{p}_{i},O)=\sqrt{x_{i}^{2}+y_{i}^{2}+z_{i}^{2}} \tag{3}\] and \(N_{beam}\) is the number of lidar beams and \(\alpha\) is a constant scale factor; the small-angle approximation holds since \(\theta/N_{beam}\) is small. The output of the panoptic segmentation network is a set of \(M\) object instances \(\mathcal{I}=\left\{I_{1},I_{2},\ldots,I_{M}\right\}\) where each object instance is a set of \(N_{j}\) points representing the 3D coordinates of the point and the semantic label \(s_{j}\) of the object, i.e. \(I_{j}=\left\{\mathbf{h}_{k,j}|k=\left\{1,2\ldots N_{j}\right\},\mathbf{h}_{k,j}=(\mathbf{ p}_{k,j},s_{j})\right\}\).
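A minimal sketch of the adaptive radius threshold of Eq. (2), assuming the points are given as a numpy array in the sensor frame; the value of the scale factor \(\alpha\) below is an illustrative placeholder, not the setting used in InstaLoc.

```python
import numpy as np

def adaptive_radius(points, fov_vertical_rad, n_beams, alpha=2.0):
    """Per-point grouping radius from Eq. (2): rho_i = alpha * d(p_i, O) * theta / N_beam.

    points: (N, 3) array of xyz coordinates in the sensor frame (origin O at 0).
    fov_vertical_rad: vertical field of view theta in radians.
    n_beams: number of lidar beams.
    alpha: constant scale factor (illustrative value, not the paper's setting).
    """
    distances = np.linalg.norm(points, axis=1)           # d(p_i, O), Eq. (3)
    return alpha * distances * (fov_vertical_rad / n_beams)

# Example: a 90-degree vertical FoV, 128-beam lidar as simulated in the paper.
pts = np.array([[1.0, 0.0, 0.2], [8.0, 3.0, 1.5], [20.0, -5.0, 2.0]])
rho = adaptive_radius(pts, fov_vertical_rad=np.deg2rad(90), n_beams=128)
print(rho)  # grows linearly with range, so far-away points are grouped more loosely
```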
### _Instance Description_ After the object instances are segmented, the next step is to generate descriptors for each of the instances. An overview of the network is shown in Fig. 3. The network is designed to be small and fast: it can take all instances in one batch, with a varying number of points per instance, as input. This is done using the instance descriptor network, which consists of four sub-manifold sparse convolutional layers of increasing feature size followed by three fully connected layers, with a dropout layer before the final fully connected layer. The input to the network is a set of object instances \(\mathcal{I}=\left\{I_{1},I_{2},\ldots,I_{M}\right\}\), the output of the instance segmentation network, with each instance \(I_{j}\) containing \(N_{j}\) points (with \(N_{j}\) varying for each object). The descriptor network output for each object instance \(I_{j}\) is an \(N_{j}\times D\) tensor where every row is a descriptor of length \(D\) for one point in the object instance. Finally, an average pooling layer computes the average of the \(N_{j}\) descriptors to create a single descriptor of length \(D\) for each object instance. This results in an output of dimensions \(M\times D\), where \(M\) is the number of object instances. The network is trained using a triplet loss. If \(a,p,n\in\mathbb{R}^{D}\) are the descriptors for an anchor, the corresponding positive element and a negative element respectively, then the triplet loss \(\mathcal{L}_{triplet}\) can be calculated as \[\mathcal{L}_{triplet}(a,p,n)=\max\left\{d(a,p)-d(a,n)+m,0\right\} \tag{4}\] where \[d(x,y)=||x-y||_{2}\] is the pairwise distance between the descriptors, and the margin \(m\) is set to 1. The average loss over all the samples in a mini-batch is used as the loss during training. ### _Matching_ For each instance in the query scan, we first obtain its \(N\) closest descriptors from the database of instances of the prior map. This generates a list of instance-to-instance correspondences. A correspondence grouping method from [1] is used to find the correct correspondences. We start from a seed correspondence \(c_{n}=\{I_{n}^{Q},I_{n}^{M}\}\), where \(I_{n}^{Q}\) and \(I_{n}^{M}\) are two instances from the query scan and the prior map respectively. We then loop through all candidate correspondences; another correspondence \(c_{m}=\{I_{m}^{Q},I_{m}^{M}\}\) can be grouped with \(c_{n}\) if: \[||I_{n}^{Q}-I_{m}^{Q}||-||I_{n}^{M}-I_{m}^{M}||<\epsilon \tag{5}\] where \(\epsilon\) is the parameter that restricts how strictly the grouping algorithm behaves. The accepted consensus group has to contain a minimum of \(\tau\) instances. Finally, for the 6 DoF pose estimation, we apply a RANSAC step on this subset of correspondences to align the query scan with the prior map, using the parameters \(\tau\) and \(\epsilon\). ## IV Implementation ### _Simulated Lidar Data_ Training deep learning algorithms requires large amounts of data. To bypass the need for time-consuming manual labeling, we constructed several indoor environments in the Unreal Engine game simulator to take advantage of automatic labeling. As well as being automatic, this eliminates errors in human labeling and can be easily extended to other environments. We created about 20 unique rooms and assembled them into six room networks which contained a total of \(\sim\)1500 objects. As an example, two of the six networks are shown in Fig. 4. We used the Airsim plugin [19] to capture over 90 scans from these spaces. The simulator allowed us to configure the lidar settings -- including frequency, range, the field of view, and the number of lidar beams. The simulated lidar configuration we used was modelled on the Ouster OS-128 lidar 1, which has \(\sim\)50 m range, 90\({}^{\circ}\) field of view, and 128 lidar beams. Note that this is a wide field-of-view, dense lidar that is now coming onto the market. Similarly to the existing indoor point cloud dataset, the Stanford 3D Indoor Scene Dataset (S3DIS) [2], we used 13 object classes: ceiling, floor, column, beam, wall, table, chair, bookcase, sofa, window, door, board, and clutter.
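As a concrete illustration of the matching stage described in Sec. III-D, the sketch below grows a consensus group with the pairwise-consistency test of Eq. (5) and then estimates a rigid transform from the grouped instances; representing each instance by its centroid and using a closed-form Kabsch alignment in place of the RANSAC step are simplifying assumptions of this sketch.

```python
import numpy as np

def grow_consensus_group(seed_idx, query_centroids, map_centroids, matches, eps):
    """Group candidate correspondences consistent with a seed, as in Eq. (5).

    matches: list of (query_instance_idx, map_instance_idx) candidate pairs.
    A candidate joins the seed's group when the centroid distance between the
    two query instances roughly equals the distance between their map matches.
    """
    q_n, m_n = matches[seed_idx]
    group = [matches[seed_idx]]
    for j, (q_m, m_m) in enumerate(matches):
        if j == seed_idx:
            continue
        d_query = np.linalg.norm(query_centroids[q_n] - query_centroids[q_m])
        d_map = np.linalg.norm(map_centroids[m_n] - map_centroids[m_m])
        if abs(d_query - d_map) < eps:
            group.append((q_m, m_m))
    return group

def kabsch_alignment(src, dst):
    """Closed-form rigid transform (R, t) that maps src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: grow a group from each seed, keep groups with at least tau members,
# then align the matched query/map centroids to obtain a candidate 6 DoF pose.
# query_centroids, map_centroids: (num_instances, 3) arrays of instance centroids.
# group = grow_consensus_group(0, query_centroids, map_centroids, matches, eps=0.3)
# if len(group) >= tau:
#     src = np.array([query_centroids[q] for q, _ in group])
#     dst = np.array([map_centroids[m] for _, m in group])
#     R, t = kabsch_alignment(src, dst)
```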
Footnote 1: [https://ouster.com/products/scanning-lidar/os0-sensor/](https://ouster.com/products/scanning-lidar/os0-sensor/) #### Iv-D1 Labeled Data for Instance Segmentation Each simulated lidar beam that intersects with an object would result in a range measurement and a unique object ID. Using the object ID, we can assign a semantic class and an instance number. These labels are used in the supervised training of the two networks. Overall, each point has five fields: \((X,Y,Z)\) coordinates, semantic class, and object instance number. Fig. 3: Instance descriptor network architecture. The input is a set of object instances with a variable number of points per instance, with each point representing the 3D coordinates of the point and the semantic label of the object. The network consists of layers of sub-manifold convolutions with increasing feature size followed by fully connected layers, with dropout before the final fully connected layer. Finally an average pooling layer computes a single descriptor for each object instance. Fig. 2: Overview of the proposed learned lidar localization system #### Iv-A2 Triplets for Descriptor Network To train the descriptor network, we need to generate object instances as triplets - with anchor, positive and negative instances. First, we generate two scans that are 2m apart and a 10\({}^{\circ}\) rotation in the simulator. Given that every object in the lidar scan is labeled, we classify the same objects in these two scans as the anchor and positive instance. We then randomly selected another object as the negative instance. Because the anchor and positive instances have mild viewpoint differences, the objects scanned in the point clouds may have slight changes. These slight appearance changes contribute to algorithm robustness. In total, about 9900 triplet object instances were generated for training, validation, and testing. ### _Training_ As mentioned in Sec. III-A, both networks are built with a sparse tensor framework and were trained on a 4GB mobile GPU, NVIDIA Quadro T2000. #### Iv-B1 Instance Segmentation Network We use a pre-existing Softgroup model (trained on S3DIS) as a warm start. The voxel size was set to 2 cm and the minimum number of points in each instance was set to 50. The network was trained for 50 epochs which took about one hour. #### Iv-B2 Instance Descriptor Network The network is trained from scratch with a triplet loss function, see (4). Compared to a whole scan (which usually contains over 100,000 points) each triplet instance is only a small fraction of a whole scan; because of this we could increase the batch size to allow parallel input. The descriptor network was trained for 90 epochs, and took around 90 minutes. Both networks were trained with an Adam optimizer with a learning rate of 0.001. ## V Experiment and Results In this section, we describe experiments conducted on instance segmentation and descriptor networks. This is followed by real world experiments using InstaLoc as a complete localization system. Lastly, we demonstrate that the algorithm is robust to a changing number of prior map scans which indicates robust performance. ### _Experimental Setup_ We use a fully labeled simulated dataset to train the instance segmentation and descriptor network. The dataset also holds 113 test scans for the instance segmentation network and 2123 test triplet instances for the descriptor network. 
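For completeness, a toy sketch of the triplet objective of Eq. (4) used to train the descriptor network on such triplet instances is shown below; the dense per-point MLP stands in for the sub-manifold sparse convolutional network of Sec. III-C, and all shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Toy stand-in for the instance descriptor network: a per-point MLP followed by
# average pooling over the points of one instance. The real network uses
# sub-manifold sparse convolutions; this dense version is only illustrative.
class ToyDescriptor(nn.Module):
    def __init__(self, descriptor_dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(),
            nn.Linear(32, descriptor_dim),
        )

    def forward(self, points):            # points: (n_points, 3) for one instance
        per_point = self.point_mlp(points)
        return per_point.mean(dim=0)      # average pooling -> (descriptor_dim,)

model = ToyDescriptor()
criterion = nn.TripletMarginLoss(margin=1.0, p=2)   # Eq. (4): L2 distance, m = 1
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative step on a single random triplet (anchor, positive, negative)
# with different point counts, as in the generated triplet instances.
anchor_pts, positive_pts, negative_pts = (torch.randn(n, 3) for n in (120, 95, 210))
a = model(anchor_pts).unsqueeze(0)
p = model(positive_pts).unsqueeze(0)
n = model(negative_pts).unsqueeze(0)

optimizer.zero_grad()
loss = criterion(a, p, n)
loss.backward()
optimizer.step()
print(f"triplet loss on this toy batch: {loss.item():.3f}")
```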
For the localization experiment, we collected an indoor environment dataset using an Ouster OS-128 lidar in small, medium, and large scale buildings. The dataset includes sequences in office rooms, meeting rooms, and social spaces as well as lecture theatres, staircases, and hallways. Fig. 5 shows the prior maps built with a lidar SLAM system. The SLAM poses are 0.7 m apart, so there are 147, 192, and 384 individual scans which form the final maps for the George, Thom, and IEB buildings. As an indication of size, the estimated map floor area for each building is around 500 m\({}^{2}\), 1100 m\({}^{2}\), and 2000 m\({}^{2}\) respectively. However, in our localization experiments, the prior map is made up of a subset of registered scans that are spaced 2.1 m apart. As the lidar sensor was running at 10 Hz, the localization system is triggered every ten scans - once per second. Tab. II presents specific details for each building. For example, the prior map of George Building consists of 32 scans, and the trajectory length is 96 m. In total, 106 scans were queried. A detection is classified as being correct when the estimated pose is within 0.2 m and the orientation is within 10\({}^{\circ}\) of the ground truth pose. Please note that there is no point cloud alignment step, such as Iterative Closest Point (ICP) refinement; the pose estimate comes from the instance correspondence matching alone. ### _Results_ #### V-B1 Instance Segmentation Results Fig. 6 shows two illustrations of instance segmentation results. The left side image of a scan is from the simulated dataset. In this classroom environment, each object instance has been assigned a random color. Chairs and tables are individually segmented, as well as each wall surface. Note that the door (colored in red) is partially segmented from the blue wall. In another example result, the right side image of a scan was captured in a hallway in George Building with the Ouster lidar. The hallway connects several rooms, with lidar beams scanning into those rooms, which resulted in several partially scanned walls. The ceiling is accurately segmented (colored in orange), but the blue wall is mixed with one light green and one black segment. The imperfection in segmentation is expected, as the current state-of-the-art instance segmentation method, SoftGroup, achieves an average precision (AP) of around 54.5 % on indoor datasets such as the S3DIS dataset [2]. Unlike the S3DIS dataset, there is no visual color information for lidar points in our simulator-synthesized data. In addition, our data contains a larger variety of spaces and objects than S3DIS, and some objects in the scans are scanned only partially. Hence, after applying the default SoftGroup on our synthesized data, it reaches 39 % average precision across 13 classes. Our proposed improvement of incorporating lidar properties in (2) improved the average AP from 39 % to 41 %, as shown in Tab. I. Larger objects such as ceilings, floors, and walls have higher AP. Objects such as boards, windows, and doors, which are gathered under "others1" and "others2" in Tab. I, have much lower AP, less than 20 %. One key design consideration for our localization method to be able to deal with imperfect segmentation is to use all available instances, with descriptors that can tolerate incomplete object point clouds. Fig. 4: Two indoor office networks constructed using the Unreal Engine to simulate lidar scans with semantic labels.
#### Iv-B2 Instance Descriptor Results In 3D point cloud learning, data representation and augmentation have a significant impact on achieving on best matching or labeling performance. We experimented with several approaches and found that centering individual instances and applying random rotations during data preparation can optimize learning results. We randomly eliminate 20 % of the points in each instance and add random noise to lidar point positions during data preparation to improve descriptor robustness. Fig. 7 (right) presents descriptor pairwise distances between the anchor, positive and negative instances in a subset of 120 test triplets. The blue lines correspond to smaller, positive distances (as desired). There is a clear separation between the typical positive and negative distances. The graph in Fig. 7 (left) shows the precision and recall curve for the 2000 test triplets. At a descriptor distance (\(\mathcal{L}_{2}\) norm) threshold of 0.56, the model can classify the instances with 91.4 % precision and 88.1 % recall. Here we purposely choose a smaller distance threshold to have higher precision as false positives are more detrimental to the localization system. Our network is fast and efficient. For comparison, we tested on the Thom Building dataset. Averaged across all scans, ESM [20] descriptor processed 30 segments in a scan in 72 ms, while InstaLoc descriptor network processed 30 instances in 21 ms. In addition, our descriptor network can operate on any number of input points but ESM needs to downsample a segment to 256 points and SegMap uses fixed 3D voxel grid dimension of \(32\times 32\times 16\) which compromises on detail. #### Iv-B3 Localisation Results Fig. 1 shows a lidar scan that has been successfully localized within the ground floor of Thom building. Several object instances have been matched including walls, sofas, and flat planes (TV screens). The top section shows the matched instances within the query scan, and the bottom section shows the matched instance within the larger prior map. The estimated pose is indicated with a gold arrow. InstaLoc and two state-of-the-art baselines were tested with datasets from George, Thom, and Information Engineering \begin{table} \begin{tabular}{l c|c c|c c} \hline \hline **Class** & **AP** & **Class** & **AP** & **Class** & **AP** \\ \hline ceiling & 0.923 & floor & 0.838 & wall & 0.565 \\ column & 0.632 & beam & 0.367 & chair & 0.723 \\ sofa & 0.402 & others1 & 0.144 & others2 & 0.163 \\ \hline \multicolumn{5}{c}{**Average AP**} & \multicolumn{5}{c}{41.3} \\ \hline \hline \end{tabular} \end{table} TABLE I: Average precision for each object class. others1 is the mean value of table, board, and window, others2 is the mean value of door, bookcase, and clutter. Fig. 5: Our indoor datasets. The height direction is indicated by color: blue is the lowest level and red is the highest level. _Left:_ Small size George building. _Middle:_ Medium size Thom building. _Right:_ Large size Information Engineering Building. Fig. 6: Instance segmentation results. Left: Result with a simulated lidar scan. Right: Result with a real Ouster lidar scan. Random colors are assigned to each instance. Fig. 7: _Left:_ A precision/recall curve for all test data in the descriptor network. _Right:_ A subset of the test data showing in blue the distance between the anchor and the positive instances. In red, the line shows the distance between the anchor and the negative instances. Building (IEB). 
InstaLoc successfully detected 48 out of 106 scans in the prior map of George building, and all detections were correct according to the ground truth. Hence the recall rate is 45 % and the precision is 100 %. For Thom building, the recall rate was 56 % but the precision was lower at 86 %. The lower precision was largely due to the two near identical lecture theatre halls on the two sides of Thom building, shown in Fig. 5. This caused confusion in the localization system. Over the three sequences, the average recall was around 47 %, and the average precision was about 94 %. Note that all three datasets are for test, the segmentation and description networks have not seen them as they are trained on simulated lidar scans. We selected two segment-based localization methods as comparative baselines, as they are most similar to our method. We modified the two algorithms to the best of our efforts to offer a fair comparison in indoor environments. In the Efficient Segmentation and Matching (ESM) paper[20], the authors used the Euclidean cluster extraction (ECE) method to segment the lidar scans. As it was originally designed for outdoor environments, objects in the scan are expected to be distinctly separated, especially after removing points corresponding to the ground. However, in an indoor environment, the ECE method cannot separate objects efficiently as walls and ceilings often become one segment. To mitigate this, we first calculate the curvature and remove high curvature points so there are distinct gaps between structured objects. After this, the ECE method can produce more reasonable segments. The second algorithm we test is SegMap [8]. We first simplified the system by removing the lidar accumulating through the odometry system, as the lidar scans in our test are from a 128-beam lidar so it is very dense compared to 16 or 32-beam lidar used in their paper. More importantly, we used the incremental region growing method [9] for segmentation, which computes local normals and curvatures for each point and uses these to extract flat or planar-like surfaces. After these two modifications, the system can operate in real time and have better segmentation performance. However, even as we improved the segmentation method in both systems, there is still a limitation in their descriptor network. One factor is that it does not use sparse tensor networks, and as a result only a small and fixed number of points can be used as input. A table presenting comparison results is shown in Tab. II, our approach outperforms the baseline methods by a factor of between two and four times in recall, and also achieved higher precision. In general, these systems tend to be tuned to prefer higher precision - for accurate and trustworthy localization. We also considered ScanContext [13] for comparison, but its descriptor is too rudimentary to work in tight indoor spaces, as opposed to the road networks it was designed for. #### Iv-B4 Varying the Size of the Prior Map As a robustness test, we conducted experiments to see how the number of individual scans in a prior map can affect localization results. Intuitively, as the number of prior map scans is reduced, the localization detection rate should reduce. Shown in Tab. III, the middle column is the baseline which has the same configuration as II, and the columns left and right of it have either an increased or decreased number of prior map scans. 
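One simple way to generate the denser and sparser prior maps used in this ablation is to greedily keep a registered scan only when its pose is at least a minimum travel distance from every scan already kept. This sketch is an assumption about the thinning procedure (the text only states the resulting spacings, e.g. 2.1 m in the default configuration) and uses hypothetical variable names.

```python
import numpy as np

def subsample_map_scans(positions, min_spacing=2.1):
    """Greedily select prior-map scans so the kept poses are >= min_spacing apart.

    positions: (N, 3) array of registered scan positions, in trajectory order.
    Returns the indices of the scans retained for the prior map.
    """
    kept = [0]                                   # always keep the first scan
    for i in range(1, len(positions)):
        d = np.linalg.norm(positions[i] - positions[kept], axis=1)
        if d.min() >= min_spacing:
            kept.append(i)
    return kept
```

Larger values of `min_spacing` correspond to the "Fewer Scans" configurations and smaller values to the "More Scans" configurations of Tab. III.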
With the decreased number of prior map scans, there is a slight reduction in recall values, but no negative effect on precision. This demonstrates the robustness of our system to changes in the number of prior map scans. ### _Limitations_ As mentioned in Sec. V-B1, the precision of the instance segmentation can directly impact the performance of the localization system. Since the descriptor network has already reached high precision and recall values, good instance segmentation is a key way of improving overall localization recall \begin{table} \begin{tabular}{c|c|c c|c c|c c|c c|c c} \hline \hline **Data** & **Length** & \multicolumn{2}{c|}{**Scan Number**} & \multicolumn{3}{c|}{**ESM* [20]**} & \multicolumn{3}{c|}{**SegMap* [8]**} & \multicolumn{3}{c}{**InstaLoc (Ours)**} \\ Building & (m) & Map & Query & Detect & Recall & Precision & Detect & Recall & Precision & Detect & Recall & Precision \\ \hline George & 96 & 32 & 106 & 12 & 11 \% & 75 \% & 28 & 26 \% & 81 \% & **56** & **49 \%** & **93 \%** \\ Thom & 121 & 45 & 137 & 36 & 26 \% & **92 \%** & 28 & 30 \% & 83 \% & **88** & **58 \%** & 91 \% \\ IEB & 253 & 98 & 211 & 29 & 14 \% & 93 \% & 27 & 13 \% & 56 \% & **94** & **42 \%** & **95 \%** \\ \hline \hline \end{tabular} \end{table} TABLE II: Numerical summary table showing the performance of InstaLoc compared to two state-of-the-art benchmarks. The prior map is made of N scans and the query scan is the total number of scans queried. *: both methods have been adapted for better performance. Fig. 8: An example of inferior instance segmentation within a corridor from two different viewpoints. In the side view scan, both the left and right walls are being over-segmented. In the top view scan, the points near the sensor origin (\(<\)1.0 m) have much higher noise, resulting in uneven wall surfaces. \begin{table} \begin{tabular}{c|c c|c c|c} \hline \hline **Data** & \multicolumn{2}{c|}{**Fewer Scans**} & \multicolumn{2}{c|}{**Default Density**} & \multicolumn{2}{c}{**More Scans**} \\ Building & Map & R/P \% & Map & R/P \% & Map & R/P \% \\ \hline George & 22 & 30 / 94 & 32 & 45 / 100 & 48 & 49 / 84 \\ Thom & 33 & 45 / 97 & 45 & 56 / 86 & 60 & 54 / 86 \\ IEB & 68 & 30 / 100 & 98 & 41 / 97 & 125 & 42 / 93 \\ \hline \hline \end{tabular} \end{table} TABLE III: Ablation study: varying the number of scans used for the prior map. The same number of query scans are used as Tab. II. R and P are the recall and precision values respectively. performance. In our experiments, the instance segmentation network performs well in structured and enclosed spaces such as theatres, classrooms, offices, etc. However, it performs much more poorly in corridors and staircases, especially when there are embedded small objects inside the walls, such as handrails. As shown in the camera image in Fig. 8 (left), there were fire extinguishers, radiators, and cupboards along the corridor walls. We did specifically use examples of hallways and corridors in our training dataset. While that did improve performance, the results still have room for improvement. This might be due to the inconsistent point cloud density on the walls and the noisier lidar measurements from the Ouster lidar at close distances. An example of this issue is shown in Fig. 8 (right) in a corridor. This is a topic for future work. ## VI Conclusion In this paper, we proposed a fast and accurate lidar localization approach. InstaLoc learns to segment and describe different object instances in a scene. 
Its two networks are joined together to first recognize and then describe the individual objects. InstaLoc can localize with two to four times as many matches as two state-of-the-art baseline methods while retaining high levels of precision. In future work, we want to improve the localization performance in hallways and corridor spaces. Moreover, we intend to combine visual information with lidar measurements in instance segmentation. Equally importantly, we aim to extend InstaLoc to outdoor environments and to make it independent of the type of operating environment. Lastly, we will add flexibility to InstaLoc so it can work with sparse scans from different lidars. ## Acknowledgments This research was partly funded by the Horizon Europe project DIGIFOREST (Grant ID 101070405), the UKRI/EPSRC ORCA Robotics Hub (EP/R026173/1), and a Royal Society University Research Fellowship (Fallon).
2306.11895
Learning Elastic Costs to Shape Monge Displacements
Given a source and a target probability measure supported on $\mathbb{R}^d$, the Monge problem asks to find the most efficient way to map one distribution to the other. This efficiency is quantified by defining a \textit{cost} function between source and target data. Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell^2_2(\mathbf{x},\mathbf{y})=\tfrac12\|\mathbf{x}-\mathbf{y}\|_2^2$. Recently, Cuturi et al. '23 highlighted the benefits of using elastic costs, defined through a regularizer $\tau$ as $c(\mathbf{x},\mathbf{y})=\ell^2_2(\mathbf{x},\mathbf{y})+\tau(\mathbf{x}-\mathbf{y})$. Such costs shape the \textit{displacements} of Monge maps $T$, i.e., the difference between a source point and its image, $T(\mathbf{x})-\mathbf{x}$, by giving them a structure that matches that of the proximal operator of $\tau$. In this work, we make two important contributions to the study of elastic costs: (i) For any elastic cost, we propose a numerical method to compute Monge maps that are provably optimal. This provides a much-needed routine to create synthetic problems where the ground truth OT map is known, by analogy to the Brenier theorem, which states that the gradient of any convex potential is always a valid Monge map for the $\ell_2^2$ cost; (ii) We propose a loss to \textit{learn} the parameter $\theta$ of a parameterized regularizer $\tau_\theta$, and apply it in the case where $\tau_{A}(\mathbf{z})=\|A^\perp \mathbf{z}\|^2_2$. This regularizer promotes displacements that lie on a low-dimensional subspace of $\mathbb{R}^d$, spanned by the $p$ rows of $A\in\mathbb{R}^{p\times d}$.
Michal Klein, Aram-Alexandre Pooladian, Pierre Ablin, Eugène Ndiaye, Jonathan Niles-Weed, Marco Cuturi
2023-06-20T21:17:32Z
http://arxiv.org/abs/2306.11895v2
# Learning Costs for Structured Monge Displacements ###### Abstract Optimal transport theory has provided machine learning with several tools to infer a push-forward map between densities from samples. While this theory has recently seen tremendous methodological developments in machine learning, its practical implementation remains notoriously difficult, because it is plagued by both computational and statistical challenges. Because of such difficulties, existing approaches rarely depart from the default choice of estimating such maps with the simple squared-Euclidean distance as the ground cost, \(c(x,y)=\|x-y\|_{2}^{2}\). We follow a different path in this work, with the motivation of _learning_ a suitable cost structure to encourage maps to transport points along engineered features. We extend the recently proposed Monge-Bregman-Occam pipeline (Cuturi et al., 2023), that rests on an alternative cost formulation that is also cost-invariant \(c(x,y)=h(x-y)\), but which adopts a more general form as \(h=\frac{1}{2}\ell_{2}^{2}+\tau\), where \(\tau\) is an appropriately chosen regularizer. We first propose a method that builds upon proximal gradient descent to generate ground truth transports for such structured costs, using the notion of \(h\)-transforms and \(h\)-concave potentials. We show more generally that such a method can be extended to compute \(h\)-transforms for entropic potentials. We study a regularizer that promotes transport displacements in low-dimensional spaces, and propose to learn such a basis change using Riemannian gradient descent on the Stiefel manifold. We show that these changes lead to estimators that are more robust and easier to interpret. ## 1 Introduction **On computing optimal transport.** Mapping a distribution of points onto another is a subtask that plays a crucial role across machine learning. Optimal transport (OT) theory (Santambrogio, 2015) has emerged as a fundamental building block to tackle such tasks, both to guide theoretical analysis Dalalyan (2017) and inform practice with novel methods across science (Schiebinger et al., 2019; Bunne et al., 2021, 2022; Janati et al., 2020; Tong et al., 2020), attention mechanisms (Tay et al., 2020; Sander et al., 2022), self-supervised learning(Caron et al., 2021; Oquab et al., 2023), domain adaptation (Courty et al., 2017) or learning on graphs (Vincent-Cuz et al., 2023). **Estimation challenges.** Computing optimal transport maps from data remains, however, a daunting task. Beyond the well-documented challenges associated with the computation of OT (Peyre and Cuturi, 2019), lies perhaps a more fundamental statistical limitation, commonly referred to as the curse of dimensionality in OT (Dudley et al., 1966; Weed and Bach, 2019). Owing to this limitation, most OT solvers rely on a prior dimensionality reduction of data, either with standard tools, such as PCA or VAE, or by taking the more drastic step of projecting measures onto 1D directions (Rabin et al., 2012; Bonneel et al., 2015). Alternatively, this reduction can be also carried out _jointly_ when estimating OT, on hyperplanes (Niles-Weed and Rigollet, 2022; Paty and Cuturi, 2019; Lin et al., 2020; Huang et al., 2021; Lin et al., 2021), lines (Deshpande et al., 2019; Kolouri et al., 2019), trees (Le et al., 2019) or using adversarial costs featurizers (Salimans et al., 2018). 
**Cost structure impacts map structure.** A promising research direction was recently unveiled by the Monge-Bregman Occam estimator (Cuturi et al., 2023), which rests on the observation that the choice of the ground cost function has a "structural" impact on Monge map estimators. Rather than setting the cost function \(c(x,y)\) that quantifies the cost of moving mass from a point \(x\) onto another \(y\) to be the \(\ell_{2}^{2}\) distance, Cuturi et al. propose to consider instead a translation invariant cost, namely \(c(x,y):=h(x-y)\), where \(h:\mathbb{R}^{d}\to\mathbb{R}\) reads \(h=\frac{1}{2}\|\cdot\|^{2}+\tau\), and \(\tau:\mathbb{R}^{d}\to\mathbb{R}\) can be interpreted as a regularizer. Following (Pooladian and Niles-Weed, 2021; Rigollet and Stromme, 2022); Cuturi et al. (2023) rely on entropy regularized transport to estimate a dual potential function \(f_{\varepsilon}\), and propose a Monge map estimator \(T_{\varepsilon}(\mathbf{x})\), whose computation involves using the _proximal_ operator of function \(\tau\), by applying it to the _gradient_ of a dual potential \(f_{\varepsilon}\). More precisely, they consider \(T_{\varepsilon}(\mathbf{x})=\mathbf{x}-\text{prox}_{\tau}\circ\nabla f_{ \varepsilon}(\mathbf{x})\), which they call the Monge-Bregman-Occam (MBO) estimator. **Towards an Adaptive Cost.** The model underlying the MBO estimator encodes structure directly by specifying a regularizer \(\tau\). In effect, the MBO regularizer seeks OT maps such that the values of \(\tau\) evaluated on displacements are small. In some cases, such structure priors are directly known and easily interpretable (e.g. sparsity in log-variations of gene expression levels in the single-cell genomics example in (Cuturi et al., 2023)). However, and as often the case in ML, it might be preferable to _learn_ such regularity from data, leading to a challenging inverse OT problem. **Our contributions.** We extend the MBO formalism in several directions to reach that goal: * We answer positively a question raised by (Cuturi et al., 2023) on our ability to generate ground-truth OT maps for structured costs. We define implicitly the \(h\)-transform of an arbitrary concave potential \(f\) as the minimizer (obtained using proximal gradient descent) of \(h(x,\cdot)-f(\cdot)\). We also study how \(h\)-transform operators can be applied to entropic estimators. * We introduce an adaptive MBO model to tune the parameters of the regularizer. Our model builds upon a bilevel optimization approach, using implicit differentiation of Sinkhorn solutions. We provide a direct application of that approach that can favor low-dimensional displacements among high-dimensional points. * When the cost is a "subspace structured cost", we prove sample-complexity estimates for the MBO estimator, and relate the estimator to the Spiked Transport Model Niles-Weed and Rigollet (2022). * We benchmark these approaches on synthetic generated ground truth transports, and discuss our ability to recover such transports. ## 2 Background: Optimal transport with translation invariant costs The (Structured) Monge Problem.We consider in this work ground cost functions \(c\) of the form \(c(\mathbf{x},\mathbf{y}):=h(\mathbf{x}-\mathbf{y})\), where \(h:\mathbb{R}^{d}\to\mathbb{R}\) and, to simplify a few computations, \(h\) is symmetric, i.e. \(h(\mathbf{z})=h(-\mathbf{z})\), and strictly convex. 
The Monge problem (1781) seeks a map \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) minimizing an average transport cost (as quantified by \(h\)) of the form: \[T^{\star}:=\operatorname*{arg\,inf}_{T_{\sharp}\mu=\nu}\int_{\mathbb{R}^{d}}h (\mathbf{x}-T(\mathbf{x}))\,\mathrm{d}\mu \tag{1}\] Because the set of admissible maps \(T\) is not convex, solving (1) requires taking a detour that involves relaxing (1) into the so-called Kantorovich dual and semi-dual formulations, involving respectively two functions (or only one in the case of the semi-dual): \[f^{\star},g^{\star}:=\operatorname*{arg\,sup}_{\begin{subarray}{c}f,g:\mathbb{ R}^{d}\to\mathbb{R}\\ f\not\in g\leq h\end{subarray}}\int_{\mathbb{R}^{d}}f\mathrm{d}\mu+\int_{ \mathbb{R}^{d}}g\,\mathrm{d}\nu\,=\operatorname*{arg\,sup}_{f:\mathbb{R}^{d} \to\mathbb{R},h\text{-concave}}\int_{\mathbb{R}^{d}}f\mathrm{d}\mu+\int_{ \mathbb{R}^{d}}\bar{f}^{h}\mathrm{d}\nu \tag{2}\] where for all \(\mathbf{x},\mathbf{y}\) we write \((f\oplus g)(\mathbf{x},\mathbf{y}):=f(\mathbf{x})+g(\mathbf{y})\) and for any function \(f:\mathbb{R}^{d}\to\mathbb{R}\), we define its \(h\)-transform as \[\bar{f}^{h}(\mathbf{y}):=\min_{\mathbf{x}}h(\mathbf{x}-\mathbf{y})-f(\mathbf{ x}). \tag{3}\] A function \(f\) is said to be \(h\)-concave if there exists a function \(g\) such that it is itself the \(h\)-transform of \(g\), i.e., \(f=\bar{g}^{h}\). Let us now recall the fundamental theorem in optimal transport ((Santambrogio, 2015, SS1.3)). Assuming the optimal, \(h\)-concave, potential for (2), \(f^{\star}\), is differentiable at \(\mathbf{x}_{0}\) (this turns out to be a mild assumption since \(f^{\star}\) is a.e. differentiable when \(h\) is), we have the relation \[T^{\star}(\mathbf{x})=\mathbf{x}-(\nabla h)^{-1}(\nabla f^{\star}(\mathbf{x})) =\mathbf{x}-\nabla h^{\ast}\circ\nabla f^{\star}(\mathbf{x})\,, \tag{4}\] where the convex conjugate of \(h\) reads: \(h^{\ast}(\mathbf{w}):=\max_{\mathbf{z}}\langle\mathbf{z},\mathbf{w}\rangle-h( \mathbf{z})\,.\) The classic Brenier theorem (1991), which is now a staple of OT estimation in machine learning (Korovtin et al., 2019; Makkuva et al., 2020; Korotin et al., 2021; Bunne et al., 2021) through input-convex neural networks (Amos et al., 2017), is a particular example, stating for \(h=\frac{1}{2}\|\cdot\|_{2}^{2}\), that \(T(\mathbf{x})=\mathbf{x}-\nabla f^{\star}(\mathbf{x}_{0})\), since in this case, \(\nabla h=(\nabla h)^{-1}=\text{Id}\) (see (Santambrogio, 2015; Theorem 1.22)). OT Maps and Structured Costs.(Cuturi et al., 2023) show that when the cost \(h\) has a particular _structure_, in the sense that it also includes a regularizer \(\tau\), i.e. \[h=\tfrac{1}{2}\|\cdot\|_{2}^{2}+\gamma\tau, \tag{5}\] then the optimal transport _absorbs the structure of the regularizer_. The optimal displacement reads \[T^{\star}(\mathbf{x})-\mathbf{x}=-\text{prox}_{\gamma\tau}\circ\nabla f^{ \star}(\mathbf{x})\,. \tag{6}\] The MBO Estimator.While the result above is theoretical, in the sense that is assumes knowledge of an optimal \(f^{\star}\), the MBO estimator proposes to evaluate that formula with an approximation of \(f^{\star}\), using samples from \(\mu\) and \(\nu\). The estimation of optimal potential functions can be carried out using entropic regularized transport (Cuturi, 2013) to result in entropic potentials (Pooladian and Niles-Weed, 2021). This involves choosing a regularization strength \(\varepsilon>0\), and solving numerically the following dual problem using the Sinkhorn algorithm. 
\[(\mathbf{f}^{\star},\mathbf{g}^{\star})=D^{\star}(\mathbf{X},\mathbf{a}, \mathbf{Y},\mathbf{b};h,\varepsilon):=\operatorname*{arg\,max}_{\mathbf{f}\in \mathbb{R}^{n},\mathbf{g}\in\mathbb{R}^{m}}\langle\mathbf{f},\mathbf{a}\rangle+ \langle\mathbf{g},\mathbf{b}\rangle-\varepsilon(e^{\frac{\varepsilon}{ \varepsilon}},Ke^{\frac{\varepsilon}{\varepsilon}})\,. \tag{7}\] where \(K_{ij}=[\exp(-h(\mathbf{x}_{i}-\mathbf{y}_{j})/\varepsilon)]_{ij}\). The entropy-regularized optimal transport matrix, associated with that cost \(h\) and on those samples, can be derived directly from these dual potentials, as \[P^{\star}(\mathbf{X},\mathbf{a},\mathbf{Y},\mathbf{b};h,\varepsilon):=\left[ \exp\left(\frac{-h(\mathbf{x}_{i}-\mathbf{y}_{j})+\mathbf{f}_{i}^{\star}+ \mathbf{g}_{j}^{\star}}{\varepsilon}\right)\right]_{ij}\in\mathbb{R}^{n\times m}. \tag{8}\] We now introduce the soft-minimum operator, and its gradient, defined for any vector \(\mathbf{u}\in\mathbb{R}^{q}\) as \[\text{min}_{\varepsilon}(\mathbf{u}):=-\varepsilon\log\sum_{l=1}^{q}e^{- \mathbf{u}_{l}/\varepsilon},\,\nabla\text{min}_{\varepsilon}(\mathbf{u})= \frac{e^{-\mathbf{u}_{k}/\varepsilon}}{\sum_{l=1}^{q}e^{-\mathbf{u}_{l}/ \varepsilon}}.\] Using vectors \(\mathbf{f}^{\star},\mathbf{g}^{\star}\), we can define estimators \(f_{\varepsilon}\) and \(g_{\varepsilon}\) for the optimal dual function (\(f^{\star},g^{\star}\)) of (2): \[f_{\varepsilon}:\mathbf{x}\mapsto\text{min}_{\varepsilon}([h(\mathbf{x}- \mathbf{y}_{j})-\mathbf{g}_{j}^{\star}]_{j})\,g_{\varepsilon}:\mathbf{y}\mapsto\text{min}_{ \varepsilon}([h(\mathbf{x}_{i}-\mathbf{y})+\mathbf{f}_{i}^{\star}]_{i})\,. \tag{9}\] Plugging now (9) into (6), we obtain the MBO estimator, \[T_{\varepsilon}(\mathbf{x})=\mathbf{x}-\text{prox}_{\tau}\left(\mathbf{x}+ \sum_{j=1}^{m}\mathbf{p}_{j}(\mathbf{x})\left(\nabla\tau(\mathbf{x}-\mathbf{ y}_{j})-\mathbf{y}_{j}\right)\right), \tag{10}\] where \(\mathbf{p}(\mathbf{x})=\nabla\text{min}_{\varepsilon}([h(\mathbf{x}-\mathbf{ y}_{j})-\mathbf{g}_{j}^{\star}]_{j})\). ## 3 On Ground Truth Structured Optimal Displacements We propose to deep-dive in this section on optimal transport maps induced by the family of costs in Equation (5). We consider first the practical computation of \(h\)-transforms for an arbitrary potential function \(f\) and arbitrary structured penalty \(\tau\). This approach is useful for two reasons: it provides a way to define ground-truth Monge maps for structured costs \(h\), answering positively a question raised in the last pages of (Cuturi et al., 2023) on our ability to generate ground-truth structured OT maps. In addition to that result, we also show that the \(h\)-transform of entropic maps results in a convex problem, a result that we use directly to follow the hyperparameter tuning approach outlined by Vacher and Vialard (2022). ### On the computation of \(h\)-concave potentials and ground truth \(h\)-transports We explore how to compute the \(h\)-transform, defined in Equation (3), of a potential function \(f\). The minimization of \(\bar{f}^{h}\) can be done using proximal gradient descent with step-size \(\lambda>0\) when \(f\) is concave and smooth and when \(\operatorname{prox}_{\lambda h}\) is available, following the iterations: \[\mathbf{x}\leftarrow\mathbf{y}+\operatorname{prox}_{\lambda h}(\mathbf{x}- \mathbf{y}+\lambda\nabla f(\mathbf{x})). \tag{11}\] Running these iterations, we obtain the \(h\)-transform of \(f\) as well as its gradient. 
In practice, because \(h\) is the sum of a function \(\gamma\tau\) and a quadratic term, one has, thanks to (Parikh et al., 2014, SS2.1.1), that the proximal operator of \(\lambda h\) is given by \[\operatorname{prox}_{\lambda h}(\mathbf{z})=\operatorname{prox}_{\frac{ \lambda\gamma}{\lambda+1}\tau}\left(\frac{\mathbf{z}}{1+\lambda}\right).\] This observation is summarized in the following proposition: **Proposition 1**.: _Assume \(f\) is \(L\)-smooth and concave and that \(\lambda<2/L\). Then, iterations (11) converge to a point \(\mathbf{x}^{\star}(\mathbf{y})=\operatorname{arg\,min}_{\mathbf{x}}h(\mathbf{ x}-\mathbf{y})-f(\mathbf{x})\). Furthermore, we have_ \[\bar{f}^{h}(\mathbf{y})=h(\mathbf{x}^{\star}(\mathbf{y})-\mathbf{y})-f( \mathbf{x}^{\star}(\mathbf{y}))\text{, and }\nabla\bar{f}^{h}(\mathbf{y})=-\nabla h( \mathbf{x}^{\star}(\mathbf{y})-\mathbf{y}). \tag{12}\] Proof.: The convergence of iterates (11) follows from (Beck and Teboulle, 2009, Theo. 1) or (Rockafellar, 1976, Theo. 1). Bauschke and Combettes (2011, Prop. 18.7) then give the last identities. Equipped with \(h\)-concave potentials, we can now define a ground-truth optimal map with respect to general structured costs \(h\), which can be used to produce ground-truth OT maps for \(h\). **Proposition 2**.: _Let \(\mu\) be a measure and push it forward using \(T_{f}^{h}:=\operatorname{Id}-\operatorname{prox}_{\gamma\tau}\circ\nabla\bar{ f}^{h}\) then \(T_{f}^{h}\) is optimal for cost \(h\) between \(\mu\) and \((T_{f}^{h})_{\sharp}\mu\)._ Proof.: The result follows from (Santambrogio, 2015, Theorem 1.17). In summary, the proximal operator of \(\tau\) is the only thing needed to implement iterations (11), and, as a result, the \(h\)-transform of a suitable concave potential. We can then plug the solution in (12) to compute the quantities of interest numerically, plugged back again in a proximal operator to compute the pushforward \(T_{f}^{h}\). In practice, we use the JAXOPT (Blondel et al., 2021) library to integrate these steps in our differentiable pipeline seamlessly. We illustrate numerically in \(2\)d the resulting transport maps for different choices of regularizer \(\tau\) in Fig. 1. In this illustration, we use the same base function \(f\), so we see clearly that the choice of cost has a drastic impact on the form of the transport map. Figure 1: Illustration of ground truth optimal transport maps with different costs \(h\), for the same base function \(f\). Here we take for \(f(\mathbf{z})=-\mathbf{z}^{T}W\mathbf{z}+\mathbf{v}^{T}\mathbf{z}\), a concave quadratic function, and compute the optimal transport map \(T_{f}^{h}\) following Prop. 2 using different base costs. On the left, with the usual \(\ell_{2}^{2}\) cost, the map simply corresponds to a linear map. With a sparsity-inducing cost (middle), we obtain sparse displacements: most arrows follow the canonical axes. On the right, we take a cost that has a stronger penalization in the direction of a vector \(\mathbf{b}\). We see that the displacements mostly align in a direction that is orthogonal to \(\mathbf{b}\). This idea is at the core of our proposal to learn structured costs for Monge displacements (Section 4.1). ### Estimation of \(\bar{f}_{\varepsilon}^{h}\), the \(h\)-transform of entropic potentials As recently established, our goal is to compute optimal transport maps induced by a ground-truth \(h\)-concave potentials \(\bar{f}^{h}\). The difficulty here, of course, lies in obtaining a reliable estimate of \(\bar{f}^{h}\). 
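To make Propositions 1 and 2 concrete, the following minimal numpy sketch builds a ground-truth transport for the \(\ell_1\) elastic cost \(h=\tfrac12\|\cdot\|_2^2+\gamma\|\cdot\|_1\), whose proximal operator is coordinate-wise soft-thresholding. The potential, step size and sample values are illustrative, and the hand-rolled loop stands in for the JAXOPT-based implementation used in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (coordinate-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_lambda_h(z, lam, gamma):
    # prox of lam*h with h(z) = 0.5*||z||^2 + gamma*||z||_1, using the
    # decomposition prox_{lam h}(z) = prox_{(lam*gamma/(1+lam)) tau}(z/(1+lam)).
    return soft_threshold(z / (1.0 + lam), lam * gamma / (1.0 + lam))

def ground_truth_map(x0, grad_f, lam, gamma, n_iter=500):
    """Optimal map at a single source point x0, following Propositions 1 and 2."""
    x = x0.copy()
    for _ in range(n_iter):                          # proximal gradient iterations (11)
        x = x0 + prox_lambda_h(x - x0 + lam * grad_f(x), lam, gamma)
    z = x - x0
    grad_fbar = -(z + gamma * np.sign(z))            # grad of the h-transform: -grad h(x*(x0)-x0)
    return x0 - soft_threshold(grad_fbar, gamma)     # Id - prox_{gamma tau} o grad fbar^h

# Example: concave quadratic f(x) = -0.5 x^T W x + v^T x (illustrative values).
rng = np.random.default_rng(0)
d = 5
Q = rng.normal(size=(d, 2 * d))
W = Q @ Q.T / (2 * d)                                # PSD, so f is concave
v = rng.normal(size=d)
grad_f = lambda x: -W @ x + v
lam = 1.0 / np.linalg.eigvalsh(W).max()              # step size below 2/L (Prop. 1)
X = rng.normal(size=(64, d))                         # source samples
T_X = np.stack([ground_truth_map(x, grad_f, lam, gamma=0.5) for x in X])
```

The displacements `T_X - X` are sparse, as in the middle panel of Fig. 1, and the pair `(X, T_X)` provides a synthetic benchmark whose optimal map for this cost is known.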
The MBO estimator outlined in Section 2 cements the validity of entropic potentials as computationally efficient surrogates. However, the choice of \(\varepsilon\) remains, as a poorly chosen regularization parameter can lead to a useless estimator. Following the pipeline introduced by Vacher and Vialard (2022), we propose a model selection framework for tuning \(\varepsilon\) that relies on the efficient computation of \(h\)-transforms. For a given candidate potential function \(\hat{f}\), Vacher and Vialard (2022) consider the value of the dual problem \[\mathcal{D}(\hat{f})=\int\hat{f}\mathrm{d}\mu+\int\hat{f}^{h}\mathrm{d}\nu \tag{13}\] as a criterion for model selection. However, Vacher and Vialard (2022) only consider the case of \(h(\cdot)=\frac{1}{2}\|\cdot\|^{2}\,\) where the \(h\)-transform becomes the Fenchel conjugate operator. In order to consider more general costs, we make a step towards computing \(h\)-transforms of entropic potentials by studying the hessian of their objective and show that this can be done efficiently using proximal gradient descent when \(h\) is a more general quadratic cost. This is outlined in the following propositions, where we establish convexity and smoothness of these potentials, which is required to use Proposition 1. **Proposition 3**.: _For fixed \(y\in\mathbb{R}^{d}\), the Hessian of \(x\mapsto h(\mathbf{x}-y)-f_{\varepsilon}(\mathbf{x})\) is given by_ \[\nabla^{2}_{xx}h(\mathbf{x})-\mathbb{E}_{\pi_{\varepsilon}}[\nabla^{2}_{xx}h( \mathbf{x}-Y)|X=\mathbf{x}]+\varepsilon^{-1}\mathrm{Cov}_{\pi_{\varepsilon}}[ \nabla_{x}h(\mathbf{x}-Y)|X=\mathbf{x}]\,. \tag{14}\] The proof can be found in Appendix A. This leads to the following corollary for quadratic costs. **Corollary 1**.: _Let \((f_{\varepsilon},g_{\varepsilon})\) be optimal entropic potentials for a quadratic cost \(h(x-\mathbf{y}):=\frac{1}{2}(x-\mathbf{y})^{\top}B(x-\mathbf{y}).\) Then the function \(h(\cdot-\mathbf{y})-f_{\varepsilon}\) is convex._ Proof.: Note that \(\nabla_{x}h(\mathbf{x}-Y)=B(\mathbf{x}-Y)\), which is linear, and \(\nabla^{2}_{x,x}h(\cdot)=B\) is constant in both \(x\) and \(Y\). Performing the appropriate cancellations, eq. (14) reads \[\varepsilon^{-1}\mathrm{Cov}_{\pi_{\varepsilon}}[B(\mathbf{x}-Y)|X=\mathbf{x }]\,, \tag{15}\] which remains positive semi-definite. Finally, to compute \(\bar{f}_{\varepsilon}^{h}\), control on the smoothness of \(f_{\varepsilon}\) is necessary. We claim that if \((f_{\varepsilon},g_{\varepsilon})\) are optimal entropic potentials corresponding to two compactly supported measures \(\mu\) and \(\nu\), and \(h\) is a quadratic cost, then \(f_{\varepsilon}\) is smooth with parameter \(O(\varepsilon^{-1})\), as a direct consequence of Vacher and Vialard (2022, Proposition 5). The following result, whose proof can be found in Appendix A, proposes sufficient conditions on \(\mathbf{y}\) so that \(\bar{f}_{\varepsilon}^{h}(\mathbf{y})\) is well defined. **Proposition 4**.: _Consider \(f_{\varepsilon}\) computed using Eq. (9). If the cost \(h\) is convex and \(\mathbf{y}\) is in the convex hull of the \(\mathbf{y}_{j}\)'s, then \(\bar{f}_{\varepsilon}^{h}(\mathbf{y})>-\infty\). Conversely, if \(h\) is quadratic, then \(\bar{f}_{\varepsilon}^{h}(\mathbf{y})>-\infty\) implies that \(\mathbf{y}\) is in the convex hull of the \(\mathbf{y}_{j}\)'s. 
Moreover, if \(h\) is Lipschitz continous, then \(\bar{f}_{\varepsilon}^{h}(\mathbf{y})>-\infty\) for any \(\mathbf{y}.\)_ ### General Dictionaries and Connections to Generalized Lasso When \(A\) is a general dictionary, namely a full-rank matrix \(A\in\mathbb{R}^{p\times d}\) with no properties, a closed-form solution is not available. However, the calculations can be done numerically via a proximal gradient algorithm, as done for instance with the generalized LASSO (Ali and Tibshirani, 2019). **Proposition 5**.: _For a generic matrix \(A\) one has \(\mathrm{prox}_{\gamma\tau(A)}(\mathbf{z})=\mathbf{z}-A^{T}\mathbf{d}^{\star},\) where \(\mathbf{d}^{\star}\) solves the dual maximization problem \(\max_{\mathbf{d}}\frac{1}{2}\|\mathbf{z}\|^{2}-\frac{1}{2}\|\mathbf{z}-A^{T} \mathbf{d}\|^{2}-\gamma\tau^{\star}(\mathbf{d}/\gamma).\)_ Proof.: The proof follows, by minimizing the objective and applying Fenchel-Rockafellar duality. \[\min_{\mathbf{w}} \tfrac{1}{2}\|\mathbf{w}-\mathbf{z}\|^{2}+\gamma\tau(A\mathbf{w})= \min_{\mathbf{w},\mathbf{q}:A\mathbf{w}=\mathbf{q}}\tfrac{1}{2}\|\mathbf{w}- \mathbf{q}\|^{2}+\gamma\tau(\mathbf{z})\] \[=\max_{\mathbf{d}}\min_{\mathbf{w},\mathbf{q}}\tfrac{1}{2}\| \mathbf{w}-\mathbf{z}\|^{2}+\gamma\tau(\mathbf{z})+\mathbf{d}^{T}(A\mathbf{w}- \mathbf{q})\] \[=\max_{\mathbf{d}}-(\|\cdot-\mathbf{z}\|^{2}/2)^{*}(-A^{T} \mathbf{d})-(\gamma\tau)^{*}(\mathbf{d})=\max_{\mathbf{d}}-(\|\cdot-\mathbf{z }\|^{2}/2)^{*}(-A^{T}\mathbf{d})-\gamma\tau^{*}(\mathbf{d}/\gamma)\] \[=\max_{\mathbf{d}}\tfrac{1}{2}\|\mathbf{z}\|^{2}-\tfrac{1}{2}\| \mathbf{z}-A^{T}\mathbf{d}\|^{2}-\gamma\tau^{*}(\mathbf{d}/\gamma)\enspace.\] the result follows from optimality conditions and primal/dual relationships. ## 4 On Learning Structured Costs for Monge Displacements We propose in this section a practical pipeline to _learn_ the parameters of the regularizer \(\tau\), to infer a regularized cost \(h\) that captures suitable regularity in displacements. ### On Learning Structured Costs Let \(\tau_{\theta}\) be a parameterized family of regularizers. Our goal is to _learn_, given input and target measures, a parameter \(\theta\) such that the bulk of the transport cost is dominated by displacements with low regularization value. Since the only moving piece in our pipeline will be \(\theta\), we consider all other parameters constant and re-write (8) as: \[P^{\star}(\theta):=P^{\star}\left(\mathbf{X},\mathbf{a},\mathbf{Y},\mathbf{b} ;\tfrac{1}{2}\ell_{2}^{2}+\gamma\tau_{\theta},\varepsilon\right). \tag{16}\] The matrix \(P^{\star}(\theta)\) contains \(n\times m\) weights that each quantify the association strength between a pair \((\mathbf{x}_{i},\mathbf{y}_{j})\). Such pairs are characterized by a displacement \(\mathbf{z}_{ij}:=\mathbf{y}_{j}-\mathbf{x}_{i}\). We expect \(P^{\star}(\theta)\) is such that \(\mathbf{z}_{ij}\) is well represented by points with a low value for \(\tau_{\theta}\). In other words, we expect that \(\tau_{\theta}(\mathbf{z}_{ij})\) be as small as possible when \(P^{\star}_{ij}(\theta)\) is high. We, therefore, consider the objective function \[\mathcal{L}(\theta):=\left\langle P^{*}(\theta),M(\theta)\right\rangle,\text{ with }M\text{ the matrix of entries }[M(\theta)]_{ij}=\tau_{\theta}(\mathbf{z}_{ij}). \tag{17}\] Here, \(P^{\star}(\theta)\) is itself obtained as the solution to an optimization problem. The problem of minimizing \(\mathcal{L}\) is, therefore, a _bilevel_ problem. In order to solve it, we need to be able to compute the gradient \(\nabla\mathcal{L}(\theta)\). 
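Before turning to the gradient, here is a minimal numpy sketch (illustrative parameter values, hand-written log-domain Sinkhorn in place of the OTT-JAX solver used in the paper) of how the objective \(\mathcal{L}(\theta)\) in (17) can be evaluated for a generic regularizer \(\tau_\theta\) acting on displacements.

```python
import numpy as np
from scipy.special import logsumexp

def bilevel_objective(X, Y, tau, gamma=1.0, eps=0.1, n_iter=1000):
    """Evaluate L(theta) = <P*(theta), M(theta)> of (17) from samples.

    X: (n, d) source samples, Y: (m, d) target samples,
    tau: callable mapping the (n, m, d) array of displacements to the
         (n, m) matrix of regularizer values M(theta).
    """
    n, m = len(X), len(Y)
    Z = Y[None, :, :] - X[:, None, :]                # displacements z_ij = y_j - x_i
    M = tau(Z)                                       # [M(theta)]_ij = tau_theta(z_ij)
    C = 0.5 * np.sum(Z**2, axis=-1) + gamma * M      # elastic cost 0.5*l2^2 + gamma*tau

    # Log-domain Sinkhorn iterations with uniform marginals.
    loga, logb = -np.log(n), -np.log(m)
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        f = eps * loga - eps * logsumexp((g[None, :] - C) / eps, axis=1)
        g = eps * logb - eps * logsumexp((f[:, None] - C) / eps, axis=0)
    P = np.exp((f[:, None] + g[None, :] - C) / eps)  # entropic coupling P*(theta), as in (8)

    return np.sum(P * M)                             # <P*(theta), M(theta)>
```

For instance, `tau = lambda Z: np.sum(np.abs(Z), axis=-1)` recovers the \(\ell_1\) regularizer; the subspace regularizer of the next subsection only changes that one line.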
The chain rule gives \[\nabla\mathcal{L}(\theta)=M(\theta)\left[\frac{\partial P^{\star}(\theta)}{ \partial\theta}\right]+P^{\star}(\theta)\left[\frac{\partial M(\theta)}{ \partial\theta}\right]. \tag{18}\] The first Vector-Jacobian Product (VJP) is computed using the implicit function theorem as \(P^{\star}(\theta)\) is the solution to a minimization problem. We rely on OTT-JAX[Cuturi et al., 2022] to compute it efficiently and seamlessly. The second VJP is a classical VJP computed using JAX's autodiff. Next, we turn to a parametrization \(\tau_{\theta}\) allowing us to learn a subspace on which the displacements lie. ### Subspace Structured Costs Recall that for a rank-\(p\) matrix \(A\in\mathbb{R}^{p\times d}\), \(p\leq d\), the projection matrix that maps it to its orthogonal is \(A^{\perp}=I-A^{T}(AA^{T})^{-1}A\). When \(A\) lies in the Stiefel manifold (i.e. \(AA^{T}=I\)), we have the simplification \(A^{\perp}=I-A^{T}A\). This results in the Pythagorean identity \(\|\mathbf{z}\|^{2}=\|A^{\perp}\mathbf{z}\|^{2}+\|A\mathbf{z}\|^{2}\), as intended. In order to promote displacements that happen _within_ the span of \(A\), we must set a regularizer that penalizes the presence of \(\mathbf{z}\) within its _complement_: \[\tau_{A^{\perp}}(\mathbf{z}):=\tfrac{1}{2}\|A^{\perp}\mathbf{z}\|_{2}^{2}= \tfrac{1}{2}\mathbf{z}^{T}(A^{\perp})^{T}A^{\perp}\mathbf{z}=\tfrac{1}{2} \mathbf{z}^{T}(I_{d}-A^{T}(AA^{T})^{-1}A)\mathbf{z}.\] Since \(\tau_{A^{\perp}}\) is evidently a quadratic form, its proximal operator can be obtained by solving a linear system [Parikh et al., 2014, SS6.1.1]; developing and using the matrix inversion lemma results in \[\operatorname{prox}_{\gamma\tau_{A^{\perp}}}(\mathbf{z})=\left(I_{d}+\gamma(A^ {\perp})^{T}A^{\perp}\right)^{-1}\mathbf{z}=\tfrac{1}{1+\gamma}(I+\gamma A^{T} (AA^{T})^{-1}A)\,\mathbf{z}. \tag{19}\] To summarize, given an orthogonal sub-basis \(A\) of \(p\) vectors (each of size \(d\)), promoting that a vector \(\mathbf{z}\) lies in its orthogonal can be achieved by regularizing its norm in the orthogonal of \(A\). That norm has a proximal operator that can be computed either by 1. Parameterizing \(A\)_implicitly_, through an _explicit_ parameterization of an orthonormal basis \(B\) for \(A^{\perp}\), as a matrix directly specified in the \((d-p)\times p\) Stiefel manifold. This can alleviate computations to obtain a closed form for its proximal operator: \[\operatorname{prox}_{\gamma_{\mathcal{T}A^{\perp}}}(\mathbf{z})=\operatorname{ prox}_{\gamma_{\mathcal{T}B}}(\mathbf{z})=\mathbf{z}-B^{T}\left(B\mathbf{z}-\frac{1} {1+\gamma}B\mathbf{z}\right)=\left(I_{d}-\frac{\gamma}{1+\gamma}B^{T}B\right) \mathbf{z},\] but requires storing \(B\), a \((d-p)\times d\) orthogonal matrix, which is cumbersome when \(p\ll d\). 2. Parameterizing \(A\)_explicitly_, either as a full-rank \(p\times d\) matrix, or more simply a \(p\times d\) orthogonal matrix, to recover the suitable proximal operator for \(\tau_{A^{\perp}}\), by either 1. Falling back on the right-most expression in (20) in the linear solve, which can be handled using sparse conjugate gradient solvers, since the application of the right-most linear operator has complexity \((p+1)\times d\) and is positive definite, in addition to the linear solve of complexity \(O(p^{3})\). This is of course much simplifies when \(A\) is orthogonal, \(A\in\mathcal{S}_{p,d}\) since in that case, \[\operatorname{prox}_{\gamma_{\mathcal{T}A^{\perp}}}(\mathbf{z})=\frac{1}{1+ \gamma}\left(I_{d}+\gamma A^{T}A\right)\mathbf{z}.\] (20) 3. 
Alternatively, compute a matrix in the \((d-p)\times p\) Stiefel manifold that spans the same linear space as, through the Gram-Schmidt process (Golub and Van Loan, 2013, p.254) of the \(d\times d\) matrix \(A^{\perp}\) or rank \(d-p\), \(B:=\text{Gram-Schmidt}(A^{\perp})\), to fall back on the expression above. We can then use this structured cost in the machinery described in Section 4.1. This way, we learn a matrix \(A\) such that displacements between the two target measures happen mostly in the range of \(A\). As discussed above, the cost function \(\mathcal{L}(A)\) should be optimized over the Stiefel manifold (Edelman et al., 1998). We use Riemannian gradient descent (Boumal, 2023) for this task, which iterates \[A\leftarrow\mathcal{P}(A-\eta\mathrm{grad}\mathcal{L}(A)),\] with \(\eta>0\) a step size, \(\mathrm{grad}\mathcal{L}(A)\) the Riemannian gradient of \(\mathcal{L}\), given by the formula \(\mathrm{grad}\mathcal{L}(A)=G-AG^{T}A\) with \(G=\nabla\mathcal{L}(A)\) the standard Euclidean gradient of \(A\) computed with autodiff, and \(\mathcal{P}\) the projection on the Stiefel manifold, with formula \(\mathcal{P}(A)=(AA^{\top})^{-1/2}A\). The Euclidean gradient of \(\mathcal{L}\) is computed using the IFT as described after Eq. (18). As proposed by Absil and Malick (2012), we use projections to stay on the manifold. ## 5 Statistical Aspects of Subspace Monge Maps The costs proposed in the second part of Section 4 are designed to encourage the displacements of the transport maps to lie in a particular subspace of \(\mathbb{R}^{d}\). In this section, we consider the statistical complexity of estimating such maps from data. The question of estimating transport maps was first studied in a statistical context by Hutter and Rigollet (2021), and subsequent research has proposed alternative estimation procedures, with different statistical and computational properties (Deb et al., 2021; Manole et al., 2021; Muzellec et al., 2021; Pooladian and Niles-Weed, 2021). We extend this line of work by considering the analogous problem for Monge maps with structured displacements. We show that with a proper choice of \(\varepsilon\), the MBO estimator defined by (10) is a consistent estimator of \(T^{\star}\) as \(n\to\infty\), and prove a rate of convergence in \(L^{2}(\mu)\). We also give preliminary theoretical evidence that, as \(\gamma\to\infty\), maps corresponding to the subspace structured cost \(\frac{1}{2}\ell_{2}^{2}+\gamma\tau_{A^{\perp}}\) can be estimated at a rate that depends only on the subspace dimension \(p\), rather than on the ambient dimension \(d\), thereby avoiding the _curse of dimensionality_. ### Sample complexity estimates for the MBO estimator The MBO estimator is a generalization of the entropic map estimator, originally defined by Pooladian and Niles-Weed (2021) for the quadratic cost \(h=\frac{1}{2}\ell_{2}^{2}\). This estimator has been statistically analyzed in several regimes, see e.g., Pooladian et al. (2023); Rigollet and Stromme (2022); del Barrio et al. (2022) and Goldfeld et al. (2022). We show that this procedure also succeeds for subspace structured costs of the form \(h=\frac{1}{2}\ell_{2}^{2}+\gamma\tau_{A^{\perp}}\). As a result of being recast as an estimation task for quadratic cost, the following sample-complexity result for the MBO estimator follows from (Pooladian and Niles-Weed, 2021, Theorem 3), and a computation relating the MBO estimator to a barycentric projection for the costs we consider (see Appendix B for the full statements and proofs). 
**Theorem 1**.: _Let \(A\in\mathbb{R}^{p\times d}\) be fixed, and consider \(\tilde{T}\) given by an \(M\)-smooth and \(m\)-strongly convex function, with smooth inverse, and suppose \(\mu\) has an upper and lower bounded density with compact support, and \(\nu\) is lower-bounded. Consider \(\hat{T}^{\star}\) of the form eq.23 for some \(\gamma\geq 0\) fixed, and suppose we have samples \(X_{1},\dots,X_{n}\sim\mu\) and \(Y_{1},\dots,Y_{n}\sim(T^{\star})_{\sharp}\mu\). Let \(\hat{T}_{\varepsilon}\) be the MBO estimator with \(\varepsilon\asymp n^{-\frac{1}{d+4}}\). Then it holds that_ \[\mathbb{E}\|\hat{T}_{\varepsilon}-T^{\star}\|_{L^{2}(\mu)}^{2}\lesssim n^{- \frac{2}{d^{\prime}+4}}\,,\] _where \(d^{\prime}=2\lceil d/2\rceil\), where the underlying constants depend on properties of \(\mu,\nu,\tilde{T},\gamma\) and \(A\)._ ### Connection to the Spiked Transport Model The additional structure we impose on the displacements allows us to closely relate our model to the "spiked transport model" as defined in Niles-Weed and Rigollet (2022), see also (Paty and Cuturi, 2019; Lin et al., 2020, 2021a). The authors studied the estimation of the Wasserstein distance in the setting where the Brenier map between \(\mu\) and \(\nu\) takes the form, \[T_{\text{spiked}}(\mathbf{x})=\mathbf{x}-A^{T}(A\mathbf{x}-S(A\mathbf{x}))\,, \tag{21}\] where \(A\in\mathcal{S}_{p,d}\) and \(S:\mathbb{R}^{p}\to\mathbb{R}^{p}\) is the gradient of a convex function on \(\mathbb{R}^{p}\). Divol et al. (2022) performed a statistical analysis of the map estimation problem under the spiked transport model. They constructed an estimator \(\hat{T}_{n}\) such that the \(L^{2}(\mu)\) risk decays with respect to the _intrinsic dimension_\(p\ll d\); this is summarized in the following theorem. **Theorem 2**.: _(_Divol et al._,_ 2022_, Theorem 3 with Proposition 4)_ _Suppose \(\tilde{T}\) is bi-Lipschitz (smooth, and strongly convex) and \(\mu\) has compact support, with density bounded above and below. Suppose further that there exists a matrix \(A\in\mathbb{R}^{p\times d}\) on the Stiefel manifold such that \(\nu:=(T_{\text{spiked}})_{\sharp}\mu\), with \(T_{\text{spiked}}\) defined as in eq.21. Assume that \(\mu\) is known explicitly. Given \(n\) i.i.d. samples from \(\nu\), there exists an estimator \(\hat{T}_{n}\) satisfying_ \[\mathbb{E}\|\hat{T}_{n}-T_{\text{spiked}}\|_{L^{2}(\mu)}^{2}\lesssim_{\log(n)} n^{-\frac{2}{p}}\,. \tag{22}\] We now argue that the spiked transport model can be recovered in the large \(\gamma\) limit of subspace structured costs. Indeed, if \(\gamma\to\infty\), then displacements in the subspace orthogonal to \(A\) are heavily disfavored, so that the optimal coupling will concentrate on the subspace given by \(A\), thereby recovering a map of the form (21), which by Theorem2 can be estimated at a rate independent of the ambient dimension. Making this observation quantitative by characterizing the rate of estimation of \(T^{\star}\) as a function of \(\gamma\) for \(\gamma\) large is an interesting question for future work. ## 6 Experiments We study with two synthetic tasks in this experimental study. Because of our ability to propose ground-truth \(h\)-optimal maps, we can now benchmark the MBO estimator in the simplest settings when \(h\) is known, and has the general structure considered in this work. The entire pipeline described in SS4 was implemented by creating a new family of parameterized \(\mathtt{RegTICost}\) in OTT-JAX1(Cuturi et al., 2022). 
This cost family can be fed into the \(\mathtt{Sinkhorn}\) solver, and their solution cast solutions as \(\mathtt{DualPotentials}\) objects that hold holds \(f_{\varepsilon}\) for a given \(h\). The application of the transport is then recovered using simply automatic differentiation. Footnote 1: [https://github.com/ott-jax/ott](https://github.com/ott-jax/ott) ### On the MBO Performance For A Synthetic Ground Truth Displacement In this section, we assume that the cost structure is _known_, using the same \(\tau\) both for generation and estimation, except for regularization strength \(\gamma\). We use that cost to evaluate the transport associated with \(\bar{f}_{\varepsilon}^{h}\) on a sample of points, using Proposition2, and then compare the performance of Sinkhorn based estimators, either with that cost or the standard \(\frac{1}{2}\ell_{2}^{2}\) cost (which corresponds to \(\gamma=0\)). We consider the \(\tau=\ell_{1}\) and \(\tau_{A^{\perp}}=\|A^{\perp}\mathbf{z}\|_{2}^{2}\) regularizer, and its associated proximal soft-thresholding operator. To estimate OT, we assume that we do not know \(\gamma\) (we do not use directly \(\gamma^{*}\)) and vary it, reporting mean-squared error of predictions. We first sample a random quadratic function \(f(\mathbf{z}):=\frac{1}{2}(\mathbf{z}-\mathbf{w})^{T}M(\mathbf{z}-\mathbf{w})\) where \(M\) is a Wishart matrix, sampled as \(M=QQ^{T}\), with \(Q\in\mathbb{R}^{d\times 2d}\) is multivariate Gaussian, and \(\mathbf{w}\) is a random Gaussian vector. We then sample \(n=200\) Gaussian points in dimension \(d=5\) and push them through the transport associated with \(\bar{f}_{\varepsilon}^{h}\), to recover matched train data \(\mathbf{Y}_{T}\) and \(\mathbf{X}_{T}\), and do the same for a test fold \(\mathbf{Y}_{t}\) and \(\mathbf{X}_{t}\) of the same size, to report our metric, the MSE. The MSE is defined, given an estimator for \(f\) and its associated cost function \(h\),as \(\|T_{f_{\varepsilon},\gamma}(\mathbf{X}_{t})-\mathbf{Y}_{t}\|_{2}^{2}\). We plot this MSE as a function of \(\gamma\), where \(\gamma=0\) would correspond exactly to the \(\ell_{2}^{2}\) cost. We observe in Figure 2, that our estimator outperforms significantly the \(\ell_{2}^{2}\) pipeline for any range of the parameter \(\gamma\). Here, the projection dimension \(p=2\), and data dimension \(d=5\). ### Learning Subspace from Synthetic Displacement We propose to test the ability of our pipeline to recover a ground truth \(A^{*}\) parameter within a squared-Euclidean cost. To do so, we proceed as follows. For dimension \(d\), we build our \(h\) function by selecting \(A^{*}\) by sampling a \(p^{*}\times d\) normal matrix that is projected on the \(p^{*}\times d\) Stiefel manifold. As in the previous section, we sample a random quadratic function \(f(\mathbf{z}):=\frac{1}{2}\mathbf{z}^{T}M\mathbf{z}\), sample a point cloud \(\mathbf{X}\) of \(n=512\) standard Gaussian points, and apply, following Proposition2, the corresponding ground-truth transport to obtain \(\mathbf{Y}\) of the same size. We set the regularization parameter \(\gamma\) manually, such that the \(p^{*}\) first singular values of displacements \(\mathbf{Y}-\mathbf{X}\) capture either 80% or 90% of the total inertia, ensuring that most displacements are indeed captured by \(p^{*}\) directions. 
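Two ingredients of the synthetic setup just described can be sketched compactly: drawing a ground-truth \(A^{*}\) with orthonormal rows (here via the polar factor \((GG^{T})^{-1/2}G\), i.e. the Stiefel projection \(\mathcal{P}\) of Section 4.2), and measuring the fraction of displacement inertia carried by the leading \(p^{*}\) singular directions, which is the criterion used above to set \(\gamma\) manually. Dimensions and seeds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p_star = 10, 2

# Ground-truth subspace: Gaussian matrix projected onto the Stiefel manifold
# via its polar factor, (G G^T)^{-1/2} G = U V^T with G = U S V^T.
G = rng.normal(size=(p_star, d))
U, _, Vt = np.linalg.svd(G, full_matrices=False)
A_star = U @ Vt                                  # (p*, d), rows orthonormal

def captured_inertia(X, Y, p):
    """Fraction of the squared singular-value mass of the displacements
    Y - X carried by their top-p singular directions."""
    s = np.linalg.svd(Y - X, compute_uv=False)
    return np.sum(s[:p] ** 2) / np.sum(s ** 2)
```

Raising \(\gamma\) concentrates the ground-truth displacements on the span of \(A^{*}\), pushing `captured_inertia(X, Y, p_star)` toward the 80 % or 90 % targets mentioned above.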
We then launch our solver with a dimension \(\hat{p}=2p^{*}\), and measure our recovery for \(\hat{A}\) by looking at the average (normalized by basis size) of the residual error, when projecting the vectors in \(A^{*}\) in the orthogonal basis \(\hat{A}\), namely \(\|A^{*}-\hat{A}\hat{A}^{T}A^{*}\|_{2}^{2}/p^{*}\). For simplicity, we use an iteration \(i\) stepsize of \(2/\sqrt{i}\). These results in Figure 3 highlight the phenomenon that larger \(p^{*}\) seem to be easier to recover. #### Conclusion. We propose in this paper an algorithmic mechanism to design ground-truth transports for structured costs. We exploit this to benchmark successfully the MBO estimator on two tasks, involving the \(\ell_{1}\) and an orthogonal projection norm. These experiments showcase the versatility of the MBO Figure 2: Performance of the MBO estimator on two ground-truth tasks involving the \(\tau=\ell_{1}\) (top row) and \(\tau_{A^{\perp}}=\|A^{\perp}\mathbf{z}\|_{2}^{2}\) (bottom row) structured costs, where \(p=2\) in dimension \(5\). \(\gamma^{*}\) is the level of regularization used for the ground truth in data generation, whereas performance are shown varying w.r.t \(\gamma\). Knowing the cost (not its strength) does help in these experiments in the estimation of transport map. framework. Additionally, we propose an inverse OT problem in which the goal is to learn the parameters of a regularizer. Intuitively, our aim is to learn a regularizer parameter, such that the OT displacements it promotes, have themselves low regularization values. We explored this approach by learning a subspace for displacements (and not, as considered previously, for the entire pointclouds) and provided encouraging recovery results.
2305.08894
First Impressions: Early-Time Classification of Supernovae using Host Galaxy Information and Shallow Learning
Substantial effort has been devoted to the characterization of transient phenomena from photometric information. Automated approaches to this problem have taken advantage of complete phase-coverage of an event, limiting their use for triggering rapid follow-up of ongoing phenomena. In this work, we introduce a neural network with a single recurrent layer designed explicitly for early photometric classification of supernovae. Our algorithm leverages transfer learning to account for model misspecification, host galaxy photometry to solve the data scarcity problem soon after discovery, and a custom weighted loss to prioritize accurate early classification. We first train our algorithm using state-of-the-art transient and host galaxy simulations, then adapt its weights and validate it on the spectroscopically-confirmed SNe Ia, SNe II, and SNe Ib/c from the Zwicky Transient Facility Bright Transient Survey. On observed data, our method achieves an overall accuracy of $82 \pm 2$% within 3 days of an event's discovery, and an accuracy of $87 \pm 5$% within 30 days of discovery. At both early and late phases, our method achieves comparable or superior results to the leading classification algorithms with a simpler network architecture. These results help pave the way for rapid photometric and spectroscopic follow-up of scientifically-valuable transients discovered in massive synoptic surveys.
Alexander Gagliano, Gabriella Contardo, Daniel Foreman-Mackey, Alex I. Malz, Patrick D. Aleo
2023-05-15T18:00:00Z
http://arxiv.org/abs/2305.08894v3
First Impressions: Early-Time Classification of Supernovae using Host Galaxy Information and Shallow Learning ###### Abstract Substantial effort has been devoted to the characterization of transient phenomena from photometric information. Automated approaches to this problem have taken advantage of complete phase-coverage of an event, limiting their use for triggering rapid follow-up of ongoing phenomena. In this work, we introduce a neural network with a single recurrent layer designed explicitly for early photometric classification of supernovae. Our algorithm leverages transfer learning to account for model misspecification, host galaxy photometry to solve the data scarcity problem soon after discovery, and a custom weighted loss to prioritize accurate early classification. We first train our algorithm using state-of-the-art transient and host galaxy simulations, then adapt its weights and validate it on the spectroscopically-confirmed SNe Ia, SNe II, and SNe Ib/c from the Zwicky Transient Facility Bright Transient Survey. On observed data, our method achieves an overall accuracy of \(82\pm 2\%\) within 3 days of an event's discovery, and an accuracy of \(87\pm 5\%\) within 30 days of discovery. At both early and late phases, our method achieves comparable or superior results to the leading classification algorithms with a simpler network architecture. These results help pave the way for rapid photometric and spectroscopic follow-up of scientifically-valuable transients discovered in massive synoptic surveys. supernovae(1668), light curve classification(1954), neural networks(1933) ## 1 Introduction Decades of spectroscopic insights have motivated the construction of a hierarchical classification taxonomy for the death knells of stars as supernovae (SNe; Filippenko, 1997; Gal-Yam, 2017). At its core are the four most commonly-observed classes: Type-Ia (SNe Ia); Type-II (SNe II); and Type-Ib and Type-Ic (collectively SNe Ib/c). SNe Ia, signposts for the thermonuclear detonations of carbon-oxygen white dwarfs, are defined by the early presence of strong Si II features. SNe II, the explosions of young, massive stars (with a Zero-Age-Main-Sequence mass of \(M_{\rm ZAMS}>8~{}M_{\odot}\)) following core collapse, are events whose spectra exhibit strong hydrogen lines with P-Cygni profiles. The less-common SNe Ib/c (comprising, together with the transitional SNe IIb, \(\sim\)30% of all core-collapse events; Shivvers et al., 2017) exhibit spectra devoid of both hydrogen and Si II features; these events result from the core-collapse of young stars that have undergone envelope stripping. Despite together comprising the vast majority of observed terminal explosions, SNe Ia, II, and Ib/c remain poorly understood in both their detonation mechanisms and the physics driving their observed diversity (Laplace et al., 2021). SNe Ia are likely triggered by either the runaway burning of an accreting white dwarf pushed beyond the Chandrasekhar limit by a close-in massive companion (with significant uncertainty surrounding the nature of the companion; see Hachisu et al., 1999), or the merger of two white dwarfs (Pakmor et al., 2012). Significant heterogeneity exists among SNe II in both their photometric evolution (Arcavi et al., 2012) and spectroscopic characterization (Taddia et al., 2013) that has yet to be connected definitively to the nature or pre-explosion behavior of their progenitors. 
Stripped-envelope SN (SN Ib/c/IIb) progenitors likely arise preferentially in binaries with some degree of mass transfer (Podsiadlowski et al., 1992), although some are believed to lose their envelopes in isolation through eruptive mass-loss and metallicity-driven stellar winds (Kuncarayakti et al., 2018). Further, observations have been unable to clarify whether SNe Ib, whose spectra contain clear helium signatures, and SNe Ic, whose spectra lack them, arise from distinct stripping channels or instead reflect extremes along a continuum in stripping degree. Although their underlying physics remain unclear, we are now discovering these transients in abundance. Time-domain surveys, in an attempt to balance sky coverage and depth, can be broadly distinguished between those that are low-redshift and wide-field, such as the Nearby SN Factory (SNFactory; Aldering et al., 2002), the Texas SN Search (Quimby, 2006), the Pan-STARRS Survey for Transients (PSST; Huber et al., 2015), the All-Sky Automated Survey for SNe (ASAS-SN; Kochanek et al., 2017), the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018), the Zwicky Transient Facility (ZTF; Bellm et al., 2019), and the Young SN Experiment (YSE; Jones et al., 2021); and those that are targeted, high-redshift, and narrow-field, including the Sloan Digital Sky Survey-II SN Survey (Sako et al., 2008), SN Cosmology project (Perlmutter et al., 1999), Deep Lens Survey (Becker et al., 2004), and the Pan-STARRS Medium Deep survey (MDS; Villar et al., 2019); among others. These complementary searches have contributed to an exponentially-increasing (\(\mathcal{O}(10^{4}\) yr\({}^{-1}\))) SN discovery rate. The Vera Rubin Observatory (Vera Rubin Obs; Ivezic et al., 2019), with its synoptic coverage (completely covering the Southern sky each \(3-4\) days in its Wide-Fast-Deep mode) and its unprecedented survey depths (\(r\sim 24.5\) in a single visit), is slated to annually discovery \(10^{6}\) transients1, single-handedly surpassing the current exponential scaling of SN discovery. Footnote 1: [https://www.lsst.org/sites/default/files/docs/sciencebook/SB](https://www.lsst.org/sites/default/files/docs/sciencebook/SB) 11.pdf SNe occur at the interface between a dying progenitor star and its post-explosion remnant. While we are unable to witness the exact moment of this metamorphosis, at best we can deepen our understanding by tightening our observations in both temporal directions: in other words, by observing and connecting the final moments of the progenitor star's life and the earliest moments of its destruction. Following this approach, recent high-cadence imaging has revealed multiple serendipitous detections of pre-cursor emission in the weeks to months preceding the explosions of core-collapse SNe (e.g., 2009ip, 2010mc, 2020tlf; Mauerhan et al., 2013; Ofek et al., 2013, and Jacobson-Galan et al., 2022, respectively). The depth of LSST imaging offers a promising avenue for detecting progenitor emission at _some_ pre-explosion phase, but the synoptic coverage of this and other upcoming surveys comes at the expense of sparse phase coverage of both any pre-explosion activity and of the terminal explosion. Our capacity to classify discovered events is further thwarted by our lack of complementary spectroscopic resources. 
Data from LSST and other upcoming surveys must be rapidly augmented by additional follow-up from targeted instruments to fully characterize the photometric and spectroscopic evolution of an event and link an explosion to its pre-cursor activity, particularly at early phases when the physics of an explosion is most directly encoded onto its light curve. To consolidate photometrically pure samples, and to facilitate the follow-up of scientifically valuable events, light curve classifiers have appeared on the scene en masse. Some of these require matching observed light curves to archival templates (e.g., Sako et al., 2011); others classify on features extracted from light curves (either properties of the rise, peak, and decline as estimated with parametric fits, or reduced-dimensionality representations of the full light curve; see Lochner et al., 2016 and Villar et al., 2019 for reviews of multiple popular techniques). Several of these algorithms are founded upon principles directly relevant to solving the early-time classification problem. These principles include: _speed_, with 15M light curve classifications per second processed with the neural network SuperNNova(Moller and de Boissiere, 2020); _flexibility_, with the auto-encoder and recurrent neural network SuperRAENN(Villar et al., 2020) able to develop realistic light curve representations for classification without complete phase coverage; and _adaptive prediction_, first shown as a proof-of-concept by Muthukrishna et al. (2019) with their recurrent network RAPID and more recently extended to more sophisticated network architectures in Qu and Sako (2022) and Pimentel et al. (2022). Despite these early successes, real-time transient classification remains in its infancy. Success in practice has been limited by two factors. First, to satiate the data-starved machine learning classifiers currently in use, it has become standard practice to generate and train on a large sample of synthetic light curves. Due to our limited understanding of the underlying physical systems involved, these transient models are driven by observed phenomenology, and so are necessarily simplified. This limits the performance of these classifiers on observed samples, particularly those containing out-of-distribution behavior. Second, the question of early-time classification has to date been explored _post-hoc_ through modifications to networks originally developed for full-phase classification. A classifier has yet to be designed explicitly to classify early and to bridge the divide spanning simulation and observation. The insights to be gleaned from high-cadence early-time coverage of SNe, even common ones, are vast. Early-time brightness variations can reveal interactions with a companion star in a binary, a surrounding progenitor envelope, or circumstellar material shed by the progenitor pre-explosion (e.g., Kasen, 2010; Gezari et al., 2008; Dimitriadis et al., 2019; Shappee et al., 2019; Gagliano et al., 2022), and comparisons with analytic models of \({}^{56}\)Ni decay can reveal tantalizing evidence for additional engines powering an explosion (as is the case for SLSNe-I and, recently, SNe Ic; see Hosseinzadeh et al., 2022 and Afsariardchi et al., 2021, respectively). 
The evolution of an explosion's color in the first few days, as a discrete tracer of its underlying spectral energy distribution (SED), can even unveil an asymmetric distribution of nucleosynthetic products, such as an overabundance of heavy elements on the surface of the progenitor star (Ni et al., 2022). Real-time classification will also help enable the discovery of a greater number of rare and rapidly-evolving transients that have eluded discovery within previous surveys. Serendipitous detections among massive survey streams have already revealed multiple transients in this region of parameter space, including Rapidly-Evolving Transients (RETs; Pursiainen et al., 2018); Fast Blue Optical Transients (FBOTs; Perley et al., 2019); and Fast-Evolving Luminous Transients (FELTs; Rest et al., 2018). Because the emission timescales of most SNe are determined by the radioactive half-life of \({}^{56}\)Ni, events evolving more rapidly than this hint at distinct explosion physics and central engines (such as a relativistic jet that interacts with a cloud of dense circumstellar material, as with FBOTs; Gottlieb et al., 2022). Increasing our sample sizes of these events is essential for constructing a complete picture of stellar death and the nature of the compact remnant that these doomed systems leave behind. In this work, we introduce a recurrent neural network designed end-to-end for _early_ classification of explosive transients. In contrast to the rise of deep-learning methods that introduce complex networks at the expense of model interpretability, our model consists of a single recurrent layer and leverages physical information about the explosion site to improve early performance. Because of the simplified architecture and the emphasis on classification with incomplete light curve information, we deem this approach 'shallow learning'. Our framework extends the initial real-time classification system developed by Muthukrishna et al. (2019) in four key respects: 1. We implement a temporally-weighted categorical cross-entropy loss function such that early classification is prioritized over full-phase classification. 2. We adopt a flexible Gaussian Process model parameterized by the correlation between filter passbands and observations in time to realistically interpolate observed transient photometry. 3. We utilize a two-component training strategy where the network is first trained on a large sample of simulated light curves, and then re-trained on a curated subset of real observations. 4. We include photometry from the galaxy where a transient occurred (its 'host galaxy') to reduce the reliance on sparse transient photometry at early phases. Host galaxy information is increasingly being considered for its value in early SN classification. The initial work of Foley and Mandel (2013) verified using the Lick Observatory SN Search sample (Filippenko et al., 2001) that host galaxy morphology and color could be used to construct photometric SN Ia samples as pure as those constructed using the then-leading light curve methods. Baldeschi et al. (2020) extended this work with a random forest classifier to distinguish between low and highly star-forming host galaxies, finding that this metric could serve as an indicator of transient type in the two-class scenario (SNe II versus SNe Ia). A postage-stamp classifier constructed by Carrasco-Davis et al. 
(2021) uses a convolutional neural network to distinguish between SNe, asteroids, variable stars, and bogus alerts with a single detection image (in the case of SNe, this necessarily includes host galaxy information). Gagliano et al. (2021) found that a random forest classifier trained directly on the photometric features of transient host galaxies achieved \(\sim 70\%\) accuracy distinguishing between SNe II and SNe Ia. FLEET, a random-forest classifier used to recover SLSNe-I from survey streams, combines a set of full-phase parametric light curve features with host galaxy properties and reports 20 SLSN-I discovered per year with an overall detection purity of 85% (Gomez et al., 2020). More recent work has extended these insights to additional sub-classes of SNe (Kisley et al., 2022). None of these methods leverage both host-galaxy and light curve information for _real-time_, _adaptive_ photometric classification. We leverage the photometric information from two surveys in this work, mimicking the realistic scenario in which heterogeneous data (potentially with observations collected in distinct filter systems) are consolidated from multiple sources. We consider SN photometry in ZTF-\(gr\) and \(grizy\) host-galaxy photometry from the Pan-STARRS 3-\(\pi\) survey (Chambers et al., 2016), both in our simulated and observed samples. We provide an overview of our analysis, which mimics the structure of this paper, in Fig. 1. In SS2, we introduce the simulated and observational datasets used in this work. We outline our prescription for pre-processing each of the transient light curves in SS3, and introduce our recurrent neural network architecture in SS4. We then describe our training of the model in SS5. We provide results from our recurrent classification tests and place them in the context of light curve only classification, and of other recently-released photometric classifiers, in SS6. We conclude with a discussion of science applications and future work in SS7. ## 2 Data ### Simulated Transients and their Host Galaxies #### 2.1.1 Modeling Transient Photometry with SNANA We first outline the pipeline used to generate the primary training set for our classification network. Our strategy lever ages the popular simulation code SNANA2(Kessler et al., 2009), which constructs a forward model for a transient event starting from a rest-frame SED and ending with synthetic observations matching the cadence and depth of a survey of interest. The simulation considers atmospheric distortions, instrument efficiency, and photometric calibration in estimating the uncertainty of each simulated observation. A common set of SN SED models used to train the current generation of photometric classifiers are those constructed for the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC; Kessler et al., 2019). Since the challenge was first run, the models associated with SNe Ia, SNe II, and SNe Ib/c have all been updated, and will soon be released in an updated iteration of the challenge. We adopt these updated SED templates for our simulations. Footnote 2: [https://github.com/RickKessler/SNANA/tree/master](https://github.com/RickKessler/SNANA/tree/master) Using SNANA, we simulate the properties of four classes of SNe: SNe Ia, SNe II, SNe Ib, and SNe Ic. We assume a flat \(\Lambda\)CDM cosmology with \(H_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}\) = 0.3, \(\Omega_{\Lambda}\) = 0.7, and \(w=-1\). 
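For concreteness, the assumed cosmology can be encoded with astropy; the short sketch below is illustrative (the example redshift and variable names are ours), and the resulting luminosity distances are the quantities that later enter the luminosity normalization of the light curves.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Flat LambdaCDM cosmology assumed for the simulations:
# H0 = 70 km/s/Mpc, Omega_M = 0.3 (Omega_Lambda = 0.7 and w = -1 follow for a flat model).
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

# Luminosity distance at an illustrative redshift inside the simulated range (0 < z < 0.2).
z_example = 0.05
d_L = cosmo.luminosity_distance(z_example)   # astropy Quantity, in Mpc
mu = cosmo.distmod(z_example)                # distance modulus, in mag
print(f"d_L(z={z_example}) = {d_L:.1f}, mu = {mu:.2f}")
```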
To parameterize the rise, stretch, and color of SNe Ia as a function of their rest-frame SEDs, we use the recently-released SALT3 model (Kenworthy et al., 2021), an update of the commonly-used SALT2 model (Guy et al., 2007). For SNe II, SNe Ib, and SNe Ic, we use the spectrophotometric templates provided by Vincenzi et al. (2019). These templates have been constructed from observed events of multiple sub-types; we consolidate the templates provided for SNe IIP/IIL, SNe IIn, and SNe IIb as "SNe II" to reflect the observational diversity of this class. We generate synthetic light curves in ZTF-\(g\) and ZTF-\(r\) for each transient class over the redshift range \(0<z<0.2\) to match the photometric properties of the data publicly reported in the alert stream of the Zwicky Transient Facility (ZTF; Bellm et al., 2019)3. Because we aim to apply our trained network to the ZTF Bright Transient Survey (ZTF BTS), which is magnitude-limited to ZTF-\(r<18.5\) (additional detail on the catalog is provided in §2.2), we cut from our simulations any transients without at least a single observation brighter than this limit. We also remove observations with a signal-to-noise ratio (S/N) of less than 4 (determined empirically to match the S/N distribution of observations reported in the ZTF BTS). The sky locations of transients in the simulated sample, drawn from a uniform distribution convolved with the ZTF footprint within which they can be detected, are then used to calculate the expected line-of-sight Galactic extinction assuming \(R_{V}\) = 3.1, an intrinsic scatter of \(\sigma_{E(B-V)}\) = 0.16, and the dust law from Cardelli et al. (1989). Footnote 3: [http://ztf.caltech.edu](http://ztf.caltech.edu) Because the simulation represents a forward model imposing observational cuts on an underlying 'true' transient sample, the final number of transients 'observed' is unknown _a priori_. We simulate 50,000 events of each class so that at least 20,000 remain after these cuts. Once we have a dataset of 'observed' ZTF SNe, we combine SNe Ib and SNe Ic into a single "SN Ib/c" class (due to the uncertainty surrounding their classification as distinct sub-types) and under-sample the dominant classes with the Python package imbalanced-learn(Lemaitre et al., 2017) so that the full dataset has the same number of events per class. This results in a sample containing 20,000 ZTF light curves per transient class (SN Ia, SN II, and SN Ib/c), or 60,000 in total. We have simulated our transients using 10 Intel Xeon "Haswell" processor nodes on the Cray XC40 "Cori" system at the National Energy Research Scientific Computing Center (NERSC). The full simulation took a total wall time of 5.2 hours.
Figure 1: Overview of our analysis pipeline, with relevant sections in this paper labeled. Data are outlined in blue, while analysis steps are given in green.
#### 2.1.2 Simulated Host-Galaxy Properties with Normalizing Flows Our goal is to leverage host-galaxy properties to predict a transient's class when explosion photometry is limited and spectroscopy is absent. Our neural network needs to be trained on data comprising only what is available at test-time, and should be informed by extant SN samples. Significant effort has been devoted to expanding the SNANA simulation code to simultaneously model samples of observed transients and their host galaxies (Brout et al., 2019; Vincenzi et al., 2021; Lokken et al., 2022); however, this approach has several limitations for our use case. 
First, the generation of realistic photometric redshifts demands that both the survey-specific photometry and colors of synthetic galaxies be well-calibrated against observations (a non-trivial problem; Korytov et al., 2019). Second, we wish to generate a large, balanced training sample spanning the same redshift range as the test set. This would likely require scaling up the number of low-redshift galaxies in existing synthetic or observed catalogs, which can introduce artificial structure if properties are repeated. Finally, as ZTF photometry does not exist for galaxies, our framework needs to match SNe observed in one survey to galaxies observed in another. We bypass these limitations by proposing a simple data-driven model trained to reproduce the class-specific photometric properties (PS1-\(grizy\) and photometric redshift) of observed SN host galaxies. By conditioning these properties on the true redshifts of our generated transients, we can ensure that the host galaxy properties are consistent with the redshift range simulated without repeating values or demanding that they be observed in the same survey. To this end, we train separate conditional normalizing flows for the host-galaxy properties of each transient class in our sample (SN Ia, SN Ib/c, and SN II). Normalizing flows (Jimenez Rezende and Mohamed, 2015) are generative models constructed to approximate and draw from a joint probability density function (PDF) associated with a complex multi-dimensional dataset. They are parameterized by an invertible bijective function (or bijector) that maps a simple distribution (e.g., a multi-dimensional Uniform distribution) to the observed data distribution. New samples can then be drawn from the data distribution by sampling the simple distribution and applying the learned bijective function. A flow can also be constructed to predict multiple parameters of the dataset conditioned on a separate parameter, as is done here. When trained on observed events, this approach allows us to generate realistic host-galaxy properties without an underlying galaxy model or an encoding of the observing systematics limiting their identification. We use the P2Flow4 package (Crenshaw et al., 2022) to build our conditional normalizing flow. Footnote 4: [https://github.com/jfcrenshaw/pzflow](https://github.com/jfcrenshaw/pzflow) We first consolidate PS1 best-fit Kron magnitudes in PanSTARRS \(grizy\)-passbands for the galaxies within the GHOST (Gagliano et al., 2021) catalog. After excluding events from the ZTF BTS sample (described in the following section), those without spectroscopic redshift information, and those without complete PS1-\(grizy\) photometry, we are left with \(\sim 8,000\) events. The PS1 photometry for each galaxy is corrected for Galactic extinction using the total reddening values at each location derived from Planck observations of the cosmic microwave background (Planck Collaboration et al., 2014). We further consolidate photometric redshifts for each host galaxy from the neural-network-produced 'PS1-STRM' catalog (Beck et al., 2021), and separate 'SNe Ia', 'SNe II', and 'SNe Ib/c' hosts. We then train a conditional flow to generate extinction-corrected PS1-\(grizy\) photometry and photometric redshifts for each transient type given the transient's spectroscopic redshift. Our bijective function is a Rational-Quadratic Neural Spline Coupling (Durkan et al., 2019), and our target distribution is a multi-dimensional Uniform distribution. 
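A minimal sketch of one such class-specific conditional flow is given below. It assumes the pzflow interface (a Flow object with data_columns, conditional_columns, train, and sample methods) behaves as in its documentation, uses the package's default bijector as a stand-in for the rational-quadratic neural spline coupling described above, and trains on a small synthetic table rather than the GHOST-derived photometry; all column names and numerical values are illustrative.

```python
import numpy as np
import pandas as pd
from pzflow import Flow

# Toy stand-in for the GHOST-derived training table of one SN class:
# extinction-corrected PS1 grizy magnitudes, PS1-STRM photo-z, and spectroscopic z.
rng = np.random.default_rng(0)
n = 500
z_spec = rng.uniform(0.01, 0.2, n)
host_df = pd.DataFrame({
    "z_spec": z_spec,
    "z_phot": z_spec + rng.normal(0.0, 0.02, n),
    "g": rng.normal(19.5, 1.0, n), "r": rng.normal(18.8, 1.0, n),
    "i": rng.normal(18.5, 1.0, n), "z": rng.normal(18.3, 1.0, n),
    "y": rng.normal(18.2, 1.0, n),
})

# The flow learns the joint distribution of host photometry and photo-z,
# conditioned on the transient's spectroscopic redshift. In practice the inputs
# would be shifted/scaled to a bounded range before training.
flow = Flow(data_columns=["g", "r", "i", "z", "y", "z_phot"],
            conditional_columns=["z_spec"])
losses = flow.train(host_df, epochs=100, verbose=False)  # monitor the training log-probability

# One draw of host properties for each simulated transient, conditioned on its true redshift.
sim_conditions = pd.DataFrame({"z_spec": [0.03, 0.11, 0.18]})  # illustrative redshifts
synthetic_hosts = flow.sample(1, conditions=sim_conditions)
print(synthetic_hosts.head())
```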
We train for 100 epochs and confirm convergence of the log-probability. We then condition our learned flow on the spectroscopic redshift of each SNANA-simulated transient to generate a posterior PDF for its host galaxy properties: \(p(g,r,i,z,y,z_{\text{phot}}|z_{\text{spec}})\). We draw from this joint PDF to simultaneously generate point estimates for each of these properties. We note that uncertainties for these properties can be estimated by repeatedly drawing from this conditional flow, or the full posterior can be used instead; for this work we limit ourselves to the single estimate of each property. We emphasize that we have not implemented any host-galaxy association technique, as no positional information exists: our simulated hosts are fully described by the properties we have generated with the normalizing flow. In practice, SNe observed within \(z<0.6\) may be misassociated at the 3-5% level using traditional techniques (Gagliano et al., 2021); reproducing this contamination rate using simulations requires an all-sky galaxy catalog with realistic number densities, far beyond the scope of this work. Because this contamination induces additional scatter in the observed host-galaxy correlations for each SN class, our normalizing flow trained on observations naturally encodes the effects of these misassociations in our simulated galaxy photometry (though misassociations are less likely within our considered redshift range of \(z<0.2\)). Next, we divide our simulated sample into training and test sets comprising 75% and 25% of the original dataset, respectively. We maintain the even class balance in the training set; however, we re-balance the test set to have approximately the same class proportions as exist in the ZTF BTS after quality cuts: 80% SNe Ia, 15% SNe II, and 5% SNe Ib/c (where we assume equal proportions of SNe Ib and SNe Ic in the combined SN Ib/c class). This allows us to track the performance of the network on a BTS-like sample of events as it trains. We achieve these proportions by undersampling our SN II and SN Ib/c light curves using imbalanced-learn. Our final simulated training set contains 10,000 events of each class, while our simulated test set contains 8,000 SNe Ia, 1,500 SNe II, and 500 SNe Ib/c. ### Observed Transients from The ZTF Bright Transient Survey (ZTF BTS) To evaluate the performance of our network on real observations, we use the set of 4,020 high-quality5 SNe from the ZTF BTS6 (Fremling et al., 2020; Perley et al., 2020), the largest spectroscopic sample of SNe constructed to date. Data from the ZTF public stream are collected using the 47 deg\({}^{2}\) field-of-view camera mounted atop the Palomar 48-inch Schmidt telescope, which observes in three passbands: ZTF-\(g\), ZTF-\(r\), and ZTF-\(i\). The data collected, which now include forced photometry at the locations of identified events (Masci et al., 2019), are then processed at the Infrared Processing and Analysis Center (IPAC) to produce public alerts \(\sim\)4 minutes from the time of observation. Now in the second phase of its public survey, which comprises half of its total time on-sky, ZTF scans the entire northern sky with a cadence of \(\sim\)2 nights in ZTF-\(g\) and ZTF-\(r\). Footnote 5: See [https://sites.astro.caltech.edu/ztf/bts/explorer_info.html](https://sites.astro.caltech.edu/ztf/bts/explorer_info.html) for a description of quality and purity cuts. 
Footnote 6: [https://sites.astro.caltech.edu/ztf/bts/bts.php](https://sites.astro.caltech.edu/ztf/bts/bts.php) The ZTF BTS is magnitude-limited to ZTF-\(r<18.5\) and nearly spectroscopically-complete, containing spectroscopic classifications for 93% of discovered SNe. The catalog (at the time of download on March 16th, 2023) consists of 3,275 SNe Ia, 563 SNe II, and 182 SNe Ib/c, whose photometry was released to alert brokers in near real-time and whose spectra were made public on the Transient Name Server7 within a day of observation. The catalog also contains multiple events belonging to rarer classes (including SLSNe-I); we do not consider these additional classes in this study, but note that significant host-galaxy correlations have been identified among these other classes (Lokken et al., 2022) and our model can easily be extended to include them. Footnote 7: [https://www.wis-tns.org](https://www.wis-tns.org) We identify the most likely host galaxies of these transients in Pan-STARRS (PS1) using the modified directional light-radius (Gupta et al., 2016) method implemented in the GHOST (Gagliano et al., 2021) software8. The host galaxies of 866 SNe could not be found using this method and were dropped. Through visual inspection, another 76 were found to have erroneous or ambiguous associations and were manually re-associated. Footnote 8: [https://pypi.org/project/astro-ghost/](https://pypi.org/project/astro-ghost/) For the identified hosts, PS1 best-fit Kron magnitudes in \(grizy\)-passbands were retrieved. 301 systems had unreported host galaxy photometry in at least one PS1 band and were dropped at this stage, leaving 2,853 events. We then downloaded the photometry of these transients using the ANTARES alert broker local client9(Matheson et al., 2021). Photometry for 2 transients could not be found and these events were removed. Footnote 9: [https://nsf-noiflab.gitlab.io/csdc/antares/client/installation.html](https://nsf-noiflab.gitlab.io/csdc/antares/client/installation.html) Extensive prior work has been done to construct and validate photometric redshift estimators for specific surveys and galaxy datasets. Due to the lack of generalizability of these methods, many surveys report neural-network generated photometric redshifts for each galaxy but not the associated code used to generate them. To bypass this issue, we have constructed a simple emulator for the photometric redshifts reported in Beck et al. (2021) for transient host galaxies. This allows us to recover redshifts with comparable statistical properties to those predicted by fine-tuned models. Similar to the approach for the simulated sample, we train a conditional flow to generate data with comparable statistical properties to our observed sample. Whereas our previous flow was used to generate both host-galaxy photometry _and_ photometric redshifts for the simulated sample, this model leverages observed host-galaxy photometry to predict _only_ the Beck et al. (2021)-reported photometric redshifts. We condition our flow on PS1-\(r\) and the colors of galaxies within the GHOST (Gagliano et al., 2021) sample, which correlate more strongly with redshift than galaxy brightness alone. As a result, our normalizing flow predicts the conditional distribution \(p(z_{\rm phot}|r,g-r,r-i,i-z,z-y)\). As before, we train the flow for 100 epochs and verify convergence of the log-probability. After training, we apply our normalizing flow to our ZTF BTS host galaxy photometry to recover photometric redshift point estimates. 
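The sketch below illustrates this photo-z emulation step under the same pzflow API assumptions as above: the condition columns are PS1-\(r\) and adjacent-band colors, and a point estimate is taken as the median of repeated conditional draws. The toy training table and all numerical values are illustrative, not the GHOST or BTS data themselves.

```python
import numpy as np
import pandas as pd
from pzflow import Flow

rng = np.random.default_rng(1)

def color_conditions(ps1):
    """Condition columns used by the emulator: PS1-r plus adjacent-band colors."""
    return pd.DataFrame({
        "r": ps1["r"],
        "g-r": ps1["g"] - ps1["r"], "r-i": ps1["r"] - ps1["i"],
        "i-z": ps1["i"] - ps1["z"], "z-y": ps1["z"] - ps1["y"],
    })

# Toy GHOST-like training table: PS1 grizy magnitudes and a PS1-STRM-like photo-z.
n = 500
mags = {b: rng.normal(m, 0.8, n) for b, m in zip("grizy", [19.6, 18.9, 18.6, 18.4, 18.3])}
train = pd.DataFrame(mags)
train["z_phot"] = np.clip(0.05 + 0.03 * (19.0 - train["r"]) + rng.normal(0, 0.02, n), 0.005, 0.6)

train_df = pd.concat([train[["z_phot"]], color_conditions(train)], axis=1)
flow = Flow(data_columns=["z_phot"],
            conditional_columns=["r", "g-r", "r-i", "i-z", "z-y"])
flow.train(train_df, epochs=50, verbose=False)

# Point estimates for new hosts: median of repeated conditional draws.
new_hosts = train.iloc[:3]               # stand-in for BTS host photometry
conds = color_conditions(new_hosts)
draws = pd.concat([flow.sample(1, conditions=conds)["z_phot"] for _ in range(50)], axis=1)
print(draws.median(axis=1).values)
```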
We use these values to normalize our light curves as described in the following section. We similarly use the PS1 zero-points and photometric redshifts to convert the host galaxy apparent magnitudes to host galaxy luminosities, which we normalize to near unity. We plot the distribution of photometric redshifts for both the simulated and observed samples as a function of their spectroscopic redshifts in Fig. 2. Qualitatively, the scatter in photometric redshifts between the two samples is similar. Quantitatively, we can measure the number of photometric redshift outliers as (Hildebrandt et al., 2010) \[\frac{|z_{\rm phot}-z_{\rm spec}|}{1+z_{\rm spec}}>0.15 \tag{1}\] By this metric, we measure an outlier fraction of 4.5% for the observed sample and 1.5% for the simulated sample (compared to 2.5% for the Beck et al. (2021)-reported photometric redshifts for the GHOST training sample). We note that the use of \(u\)-band photometry from the Sloan Digital Sky Survey (SDSS; York et al., 2000) would likely decrease this scatter, but the increased depth of PS1 results in higher-S/N host-galaxy photometry for use in our network. Further, redshift is a key feature used to estimate the absolute brightness of an explosion, and therefore a key discriminator between SNe Ia and core-collapse events. The reliability of photometric redshifts in the era of the Vera C. Rubin Observatory is an area of active research, and we avoid fine-tuning these estimates further so as not to report overly-optimistic performance. Our approach represents a significant advancement over previous light curve simulations that estimate photo-zs by interpolating look-up tables of host galaxy colors (see e.g., Graham et al., 2018). Our observed sample before pre-processing contains 2,851 SNe. An additional 44 were dropped during the light curve truncation stage described in SS3, leaving us with a final observed sample size of 2,807 SNe. While this is a fraction of the initial number in the ZTF BTS (\(\sim\)70%), we note that only a small fraction of the discovered SNe in upcoming surveys will be classified and selected for follow-up observations. By constructing a curated observational dataset, we optimize our classifier's ability to discover events for which we will be able to extract meaningful scientific insights. We compare the distributions of peak apparent magnitude, redshift, and host galaxy photometry after each of these preprocessing steps to ensure that we do not encode an additional observational bias into our observed sample with these cuts. We summarize our sample sizes after each of these cuts in Table 1. We divide our final spectroscopic sample into a 75% training set and a 25% test set without changing the relative proportions of classes. As a result, our spectroscopic training set consists of 1,704 SNe Ia, 280 SNe II, and 121 SNe Ib/c; our spectroscopic test set consists of 569 SNe Ia, 95 SNe II, and 38 SNe Ib/c. ## 3 Data Pre-Processing ### Phase Jitter of Synthetic Light Curves In our model, the date at which ZTF 'discovers' an SN (deemed the trigger date, or \(T_{\rm trigger}\)) is the epoch when a second observation is taken with \(S/N\geq 5\). Because the date at which a transient is discovered is determined by the combination of a transient's brightness evolution and the survey cadence, this date should not be assumed to be either the date of the start of the explosion or a constant offset relative to it. 
Nevertheless, the simplicity of SN simulations may lead a machine learning model to over-train on unrealistic correlations in the trigger dates of the training set instead of the properties of the light curve. This will limit its use when applied to data collected from a survey with more restrictive, or relaxed, trigger criteria. To mitigate this issue, we remove observations from the first \(d\) days of each transient light curve, where \(d\) is a number randomly sampled from a truncated Normal distribution with \(d\sim\mathcal{N}(T_{\rm trigger},\,0.25^{2})\) and \(d\geq T_{\rm trigger}\). We then re-define the trigger date of each event as the epoch of the first observation in this new truncated light curve. For each light curve in both the simulated and spectroscopic sample, we calculate the phase of each observation relative to trigger in days. We correct these phases for time dilation using the photometric redshifts calculated above and correct light curve flux values for Galactic extinction. We then remove observations from each light curve obtained greater than 30 days after this new trigger date (as we are interested in this work in early classification).
\begin{table} \begin{tabular}{c|c|c} Processing Cut & SNe Remaining & Fraction Cut \\ \hline \hline Initial Sample & 4,020 & – \\ Missing/Incorrect Host & 3,154 & 21.5\% \\ Unreported Host Phot. & 2,853 & 9.6\% \\ SN Light Curve Retrieval & 2,851 & \(<\) 0.1\% \\ Light Curve Truncation & 2,807 & 1.5\% \\ \hline Final SN Sample & 2,807 & 30.2\% \\ \end{tabular} \end{table} Table 1: Number of SNe from the ZTF BTS remaining, and fraction removed, after each step in the processing pipeline.
Figure 2: Photometric versus spectroscopic redshifts for the galaxies in our simulated and observed samples. ZTF BTS host galaxies are shown as black points, and synthetic host galaxies are shown as the two-dimensional histogram in blue. The discreteness in observed spectroscopic redshifts is an artifact of the template-matching algorithm through which they were determined.
We do not k-correct our photometry, as this would encode assumptions about the shape of the SED of each transient class (and further encode the reliance on accurately determined redshifts). Our classifier consists of a recurrent neural network, and these networks were initially constructed to process arrays of uniform length. To prepare irregularly-sampled light curves across multiple passbands for processing, common approaches include padding light curves to a fixed length and masking zero-valued observations in training (e.g., Charnock and Moss, 2017); or interpolating observations onto a grid of uniform length (e.g., Villar et al., 2020). We adopt both approaches, using a Gaussian Process model described in the following section. ### Gaussian Process Interpolation and Padding We use Gaussian Process Regression (GPR; Rasmussen, 2006) to construct a light curve model for each transient event and interpolate observations in apparent magnitudes onto an evenly-spaced grid in time. In GPR, observations are considered to be realizations of a latent function with Gaussian noise. A function describing the covariance between observations, called the kernel, is constructed and its parameters are chosen to minimize the loss function (and maximize the likelihood of the obtained observations). This results in a continuous posterior distribution for a class of models that describe the data. 
GPR can additionally consider a mean model for the observations, and this further conditions the subsequent model predictions. Our model, which is implemented in the lightweight code tinygp10, uses a Matern kernel with \(\nu=3/2\) that quantifies the correlation in time between observations \(t_{i}\) and \(t_{j}\) as \[k(t_{i},t_{j})=(1+\sqrt{3}r)e^{-\sqrt{3}r} \tag{2}\] In the above equation, \(r=||\frac{t_{i}-t_{j}}{l}||_{1}\) parameterizes the time between observations and the scale factor \(l\) is a free parameter. Footnote 10: [https://tinygp.readthedocs.io/en/stable/](https://tinygp.readthedocs.io/en/stable/) In addition to modeling the correlation between single-band observations in time, we model the correlations between _bands_. We construct a \(p\times p\) symmetric correlation matrix, where in this case \(p=2\) is the number of passbands considered. The diagonal and off-diagonal terms (2 and 1 terms, respectively) are free parameters to be fit. We use a constant value in each band as our mean model (\(\bar{g}\), \(\bar{r}\)), and add the uncertainty in each band to the diagonal of the covariance matrix along with additional free terms \(j_{g}\), \(j_{r}\) to capture any remaining measurement error. In total, our model consists of 8 free parameters. We transform our data from flux to magnitude space and minimize the negative marginal log probability of the model (our loss function) to determine the best-fit parameters for each event. We optimize our Gaussian Process fit with the Scipy package.
Figure 3: The training and testing datasets used in this work. **Above:** The composition of simulated events used in phase 1 of training the network. The total number of light curve segments in each sample are listed, and the number of unique transient events in each sample are given in parentheses. **Below:** The composition of spectroscopic events from the ZTF Bright Transient Survey used in the re-training stage (see text for details).
There is precedent for a Gaussian Process (GP) light curve model parameterized by the correlation in both wavelength/frequency and time. A similar method was first introduced by Boone (2019) with the photometric classifier Avocado, the winning submission of the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC; Kessler et al., 2019). In Avocado, the central wavelength of each passband was used as the coordinates of the wavelength dimension of the GP. Villar et al. (2020) also used a custom GP model described by a kernel in both wavelength and time, where the distance metric in wavelength space was the Wasserstein-1 distance between each filter's normalized throughput. Because these two GP methods place physical constraints on the covariance in wavelength, they are specialized cases of our model. A physically-motivated construction of the GP covariance matrix may be desirable when photometry is sparse; however, in this work we opt instead for a model with greater flexibility to fully characterize both the long-duration and short-duration behavior of each transient. We show the best-fit GP model for one simulated transient event across three phases of its light curve, in addition to the best-fit passband kernel at that phase, in Fig. 4. With few observations, minimal correlation is detected between passbands (left plot). As more data is obtained (middle and right plots), the correlation increases and is used to improve the posterior light curve fits across both passbands. 
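To make this construction concrete, the following is an illustrative from-scratch sketch (NumPy/SciPy rather than tinygp) of the time-and-band covariance: the Matern-3/2 kernel of Eq. (2) is multiplied element-wise by a 2x2 band covariance, constant per-band means and jitter terms are included, and the eight hyperparameters are fit by minimizing the negative log marginal likelihood. The toy data, the log/Cholesky parameterization (used here only to keep the example numerically stable; the paper fits the band matrix entries directly), and the choice of optimizer are our assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def matern32(t1, t2, scale):
    """Matern-3/2 kernel in time, Eq. (2): k(r) = (1 + sqrt(3) r) exp(-sqrt(3) r), r = |dt| / l."""
    r = np.abs(t1[:, None] - t2[None, :]) / scale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def neg_log_like(params, t, band, mag, yerr):
    """Negative log marginal likelihood (up to an additive constant) of the two-band GP."""
    log_scale, a, b, c, log_jg, log_jr, mean_g, mean_r = params
    # Band covariance built from a Cholesky factor so it stays positive semi-definite.
    L = np.array([[a, 0.0], [b, c]])
    B = L @ L.T
    K = matern32(t, t, np.exp(log_scale)) * B[band[:, None], band[None, :]]
    jitter = np.where(band == 0, np.exp(log_jg), np.exp(log_jr))
    K += np.diag(yerr**2 + jitter**2)
    resid = mag - np.where(band == 0, mean_g, mean_r)   # constant per-band mean model
    cfac = cho_factor(K)
    return 0.5 * resid @ cho_solve(cfac, resid) + np.sum(np.log(np.diag(cfac[0])))

# Toy two-band segment: phases in days from trigger, band index (0 = ZTF-g, 1 = ZTF-r),
# magnitudes, and magnitude uncertainties. All values are illustrative.
t = np.array([0.0, 1.9, 4.1, 6.0, 0.5, 2.4, 4.6, 6.5])
band = np.array([0, 0, 0, 0, 1, 1, 1, 1])
mag = np.array([18.9, 18.4, 18.1, 18.0, 18.7, 18.2, 17.9, 17.8])
yerr = np.full(mag.shape, 0.05)

# Eight free parameters in total, as in the text: time scale, three band-covariance terms,
# two per-band jitters, and two per-band constant means.
x0 = np.array([np.log(5.0), 0.5, 0.3, 0.3, np.log(0.01), np.log(0.01),
               mag[band == 0].mean(), mag[band == 1].mean()])
res = minimize(neg_log_like, x0, args=(t, band, mag, yerr), method="Nelder-Mead")
print("converged:", res.success, "best-fit parameters:", np.round(res.x, 3))
```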
This learned correlation also ensures that the model in each band is robust against low-S/N observations (not shown). Although a GP model whose hyperparameters are optimized based on a full-phase light curve would best characterize an event's evolution in time and across passbands, this information will not be available in real-time. We decompose each transient light curve into segments to realistically simulate constructing a model at different phases of observation. From a light curve with \(N\) total observations in the simulated sample, we construct a series of \(N/2\) segments from the first \(n\) observations, where \(n=2,4...N\). In other words, we generate a unique segment every two observations of the light curve. This cadence was chosen to reduce the volumes of training data without decreasing the number of transient events they contain (and the associated photometric diversity). Multiple strategies have been proposed to pre-process unevenly-sampled and multi-variate time-series datasets for use in recurrent neural networks, and it remains unclear which strategy yields the best performance for SN classification. With this in mind, we construct three different data representations of our light segments: 1. **Raw and Pad Model:** In the first approach, we construct an array directly from the light curve observations. This photometry has been corrected for distance, extinction, and time dilation but has not been interpolated using our GP model. In this method, our temporal array contains the (jittered) phases at which observations were obtained in any band; the magnitude and magnitude uncertainty arrays are filled with zeros where a given band was unobserved. These arrays are then padded with zeros at the tail end so that the matrices for all events have equal length. 2. **100-Timestep Model:** In the second approach, we construct a GP fit for each light curve segment and compute the mean and standard deviation of the GP posterior distribution at 100 independently-spaced phases spanning the full extent of the segment. We then convert these interpolated observations and their uncertainties from magnitude space back to flux space. We note that, in contrast to prior light curve methods, this interpolation scheme does not maintain uniform spacing in time across light curve segments. One partial-phase segment may consist of 10 observations spanning 5 days, in which case it will be interpolated onto a uniform grid with \(\delta t=0.05\) days; if 10 additional observations span another 5 days, the full 10-day segment would be interpolated onto a grid with \(\delta t=0.1\) days. 3. **0.2-Day Spacing Model:** In the third approach, we construct a GP fit as above and evaluate the GP model at phases beginning at the time of trigger and occurring every \(\delta t=0.2\) days until the end of the segment. We then convert these interpolated observations to flux. While this approach preserves consistent spacing in time, the interpolated light curves will not be equal in length. As in method 1, we pad each light curve array to a consistent length after interpolation. We have adopted a flexible GP model over a physical light curve model to capture the observed diversity in SN photometry, which allows unexpected phenomenology to be expressed in the interpolated data. When an SN is young, however, this flexibility can be detrimental: with very few observations, the best-fit GP model may not reflect expected early-time SN behavior. 
To bypass this limitation, we only interpolate our observations after 5 or more total observations have been obtained. Before this phase, we pad the photometry with zeros to a uniform length in all three of our models and pass these arrays directly into the neural network (as with our 'Raw and Pad' model above). We note that a more sophisticated approach may be to integrate a physically-motivated light curve model at early times, then transition to a more flexible GP model as more data is obtained. We leave this implementation to future work. After obtaining the interpolated or padded representations of each light curve segment, we convert its flux to normalized luminosity using the equation \[L_{\text{Norm}}=\frac{4\pi d^{2}F}{\alpha} \tag{3}\] where \(F\) is the GP-interpolated flux in each band; \(d\) is the luminosity distance to the transient in parsec, estimated from the photometric redshift of the host galaxy; and \(\alpha=10^{23}\) is a normalization constant used to keep the computed luminosities near unity (as neural network performance is often better if the input range is constrained). Photometric uncertainties extracted from the GP fit are similarly converted as \(\sigma_{\rm L,Norm}=L_{\rm Norm}\sigma_{F}/F\). Each interpolated or padded transient light curve segment is represented in our dataset as an \(N_{\rm pad}\times p\) matrix, where \(N_{\rm pad}\) is the full length of each light curve segment, including padded values (this value is different for each of the three methods) and \(p=2\) is the number of passbands (ZTF-\(g,r\)). To this matrix, we add the array of interpolated phases \(t=T-T_{\rm trigger}\); the GP-derived photometric uncertainties in each band; and \(N_{\rm pad}\)-length arrays containing the fixed repeated values of \(E(B-V)_{MW}\), \(z_{\rm phot}\), and host galaxy normalized luminosity in PS1-\(grizy\) filters. This results in a data matrix containing arrays of dimensionality \(N_{\rm pad}\times(2p+8)\) that are used to both train and validate the network. We consolidate our \(N_{\rm pad}\times(2p+8)\) arrays for all SN segments associated with the SNe in our training samples described in §2.1; this forms our training set. The segment arrays associated with the test SNe similarly form our test set. Our target array for prediction is a one-hot encoded representation of our three classes: SNe II are encoded as '0', SNe Ia as '1', and SNe Ib/c as '2'. We report the number of unique transient events across the simulated and spectroscopic training and test set, along with the number of light curve segments in each sample, in Fig. 3. ## 4 Model Architecture ### Overview To take advantage of the temporal correlations embedded within our partial-phase time-series samples, we design a recurrent neural network for photometric classification. A recurrent neural network (RNN) is comprised of multiple connected and directed "units" that are trained to learn a data representation from both the internal states of earlier units as well as a component of the input array. Because gradients propagate through the network, RNNs have traditionally suffered from vanishing and exploding gradients (those tending toward 0 and infinity, respectively) in training. This issue can be alleviated by the use of recurrent units that modulate information flow through the network; for example, Gated Recurrent Units (GRUs; Cho et al., 2014) feature coupled 'update' and 'reset' gates. 
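Before turning to the specific recurrent unit adopted below, the sketch that follows illustrates how one interpolated segment could be assembled into the \(N_{\rm pad}\times(2p+8)\) input matrix described above, using the normalization of Eq. (3). The padding length, column ordering, and all numerical values are illustrative assumptions rather than the exact layout used in this work.

```python
import numpy as np

def normalized_luminosity(flux, d_pc, alpha=1e23):
    """Eq. (3): L_norm = 4 pi d^2 F / alpha, with d in parsec (unit conversions folded into alpha)."""
    return 4.0 * np.pi * d_pc**2 * flux / alpha

def build_input_matrix(phase, f_g, f_r, sig_g, sig_r, d_pc,
                       mwebv, z_phot, host_lums, n_pad=60):
    """Stack one light-curve segment into an (n_pad, 2p + 8) array with p = 2 bands.

    host_lums: length-5 array of normalized host luminosities in PS1-grizy.
    Rows beyond the observed segment are zero-padded (and masked downstream).
    """
    lum_g = normalized_luminosity(f_g, d_pc)
    lum_r = normalized_luminosity(f_r, d_pc)
    sig_lg = lum_g * sig_g / f_g
    sig_lr = lum_r * sig_r / f_r

    n_obs = len(phase)
    per_epoch = np.column_stack([
        phase, lum_g, lum_r, sig_lg, sig_lr,            # time-dependent columns
        np.full(n_obs, mwebv), np.full(n_obs, z_phot),  # repeated scalars
        np.tile(host_lums, (n_obs, 1)),                 # repeated host grizy luminosities
    ])
    out = np.zeros((n_pad, per_epoch.shape[1]))
    out[:n_obs] = per_epoch
    return out

# Illustrative 4-epoch segment at z_phot ~ 0.05 (d ~ 230 Mpc); fluxes in arbitrary units.
x = build_input_matrix(
    phase=np.array([0.0, 0.2, 0.4, 0.6]),
    f_g=np.array([1.1e-5, 1.4e-5, 1.8e-5, 2.1e-5]),
    f_r=np.array([0.9e-5, 1.2e-5, 1.5e-5, 1.9e-5]),
    sig_g=np.full(4, 1e-6), sig_r=np.full(4, 1e-6),
    d_pc=2.3e8, mwebv=0.03, z_phot=0.05,
    host_lums=np.array([0.8, 1.1, 1.3, 1.4, 1.5]),
)
print(x.shape)   # (60, 12) -> N_pad x (2p + 8) with p = 2
```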
We have selected the more conventional Long Short-Term Memory cells (LSTM; Hochreiter & Schmidhuber, 1997) as our RNN units. LSTMs contain three gates: a 'forget' gate that removes information from the cell state; an 'input' gate that regulates the adding of new information from the data sequence into the cell state; and an 'output' gate, which regulates the output of the cell state into a subsequent unit. We use LSTMs for their ability to retain insights from the long-term behavior of complex time-series data, where they often achieve superior performance to GRUs (e.g., Cahuantzi et al., 2021).
Figure 4: Best-fit Gaussian Process model for one simulated light curve in ZTF passbands (ZTF-\(g\) in green and ZTF-\(r\) in red) at three phases in its evolution from the time of trigger (bottom). The best-fit band-wise covariance matrix associated with each fit is shown at top. Because our model is parameterized by the covariance between observations in both time and wavelength, observations taken in a single band inform the model in the other.
The architecture of our neural network is illustrated in Fig. 5. It consists of a single LSTM layer of 60 units, with a sigmoid activation function defined as \[S(x)=\frac{1}{1+e^{-x}} \tag{4}\] where \(x\) is the input tensor to the gate. The next layer is a masking layer, which constructs a boolean mask of equal dimensionality to the input data. This mask, which is propagated through successive layers along with the data, is constructed to flag and ignore padded values in training. This layer is followed by a dense layer to consolidate model outputs into a set of normalized probability scores corresponding to the likelihood that the light curve belongs to each of the three transient classes considered (SN Ia, SN II, SN Ib/c). Many previous RNN architectures for photometric classification have applied dropout layers that deactivate randomly-selected neurons during training to ensure that all neurons are used; batch normalization is also common to decrease the number of training epochs needed to achieve accurate classification results. We have found that neither modification significantly increases the accuracy of our model when applied to real data, and so have excluded them. Our network is implemented in TensorFlow(Abadi et al., 2015) using the Keras(Chollet et al., 2015) interface. ### Class-Weighted Loss For Accurate Early Predictions A common choice for the loss function used to train multi-class classification networks is the sparse categorical cross-entropy. For an event belonging to true class \(a\), the categorical cross-entropy is calculated as \[L(y,\hat{y})=-\sum_{j=0}^{M}\sum_{i=0}^{C}y_{ij}\times\log\left(\hat{y}_{ij}\right) \tag{5}\] where \(C\) is the number of transient classes (in this work \(C=3\)), \(M\) is the number of events in a training batch, \(y_{ij}\) is the true class of event \(j\) after one-hot-encoding (with \(y_{ij}=1\) where \(i=a\), else \(y_{ij}=0\)), and \(\hat{y}_{ij}\) is the set of predicted normalized probabilities output from the softmax layer of the network. The cross-entropy for all events in a given batch is summed at each training epoch to compute the network loss. We modify this framework to weight the contributions of particular segments over others. 
The modified cross-entropy loss is constructed as: \[L(y,\hat{y})=-\sum_{j=0}^{M}\sum_{i=0}^{C}\beta_{ij}y_{ij}\times\log\left(\hat{y}_{ij}\right) \tag{6}\] We calculate the weight of each light curve segment as a function of \(t_{N}\), the final phase covered by the segment (ignoring padded values): \[\beta_{ij}(t_{N})=\begin{cases}10,&\text{for $t_{N}\leq 3$ days}\\ 5,&\text{for $t_{N}>3$ days and $t_{N}\leq 15$ days}\\ 1,&\text{otherwise}\end{cases} \tag{7}\] These weights are constructed so that the network minimizes the loss by prioritizing the classification of SNe using only photometry provided in the first three days following detection. In the re-training stage using observed ZTF light curves (see §5 for details), we modulate these values by an additional weight according to the class of the transient associated with the segment: \[\beta_{ij}(t_{N})=\begin{cases}\beta_{ij}(t_{N})\times 2,&\text{for $j=0$}\\ \beta_{ij}(t_{N})\times 1,&\text{for $j=1$}\\ \beta_{ij}(t_{N})\times 5,&\text{for $j=2$}\end{cases} \tag{8}\] The multiplicative factors are added to make the network more sensitive to identifying SNe II and SNe Ib/c, which are poorly represented in our re-training dataset. SNe II are more photometrically heterogeneous than SNe Ia, and SNe Ib/c are more easily confused with SNe Ia; the network learns to classify these more challenging minority classes only through explicit weighting. We note that we have tested the performance of the network after using these class weights in the primary training stage (on synthetic data before re-training). We find no significant improvement in performance relative to the early-weighted cross-entropy loss. The same principle we have applied here can be used to adapt publicly-available classifiers for specific science cases (e.g., to prioritize SN Ia classification for cosmological studies, or to prioritize late classification for nebular-phase spectroscopic follow-up).
Figure 5: Architecture for our RNN, which is composed of a single LSTM layer of 60 units. The photometry in each band (interpolated, padded, or both) is stacked with the photometric redshift, extinction, and host galaxy photometry, which are repeated at each phase. This combined array serves as the input to our RNN, and the network returns a single classification prediction corresponding to the light curve segment processed. 
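As a companion to Fig. 5, the following is a hedged Keras sketch of this architecture and weighting scheme: a masking layer (applied before the recurrent layer, since Keras propagates masks forward), a single 60-unit LSTM with a sigmoid activation, and a softmax dense layer, with the temporal and class weights of Eqs. (7)-(8) supplied as per-segment sample weights so that the standard categorical cross-entropy reduces to Eq. (6). The array dimensions, toy data, and the use of sample_weight (rather than a bespoke loss class) are our assumptions, not the authors' exact implementation.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_PAD, N_FEATURES, N_CLASSES = 60, 12, 3   # illustrative dimensions (see the input-matrix sketch above)

model = keras.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(N_PAD, N_FEATURES)),  # ignore zero-padded epochs
    layers.LSTM(60, activation="sigmoid"),                            # single recurrent layer of 60 units
    layers.Dense(N_CLASSES, activation="softmax"),                    # normalized class probabilities
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

def segment_weights(t_last, labels, retraining=False):
    """Per-segment weights mirroring Eqs. (7)-(8): prioritize early phases and, during
    re-training only, up-weight the under-represented SN II and SN Ib/c classes."""
    w = np.where(t_last <= 3, 10.0, np.where(t_last <= 15, 5.0, 1.0))
    if retraining:
        class_factor = np.array([2.0, 1.0, 5.0])   # class labels: 0 = SN II, 1 = SN Ia, 2 = SN Ib/c
        w = w * class_factor[labels]
    return w

# Toy data in place of the simulated (batch size 32, 200 epochs) and BTS (batch size 12,
# 200 epochs) training stages described in Section 5.
rng = np.random.default_rng(0)
X_toy = rng.normal(size=(32, N_PAD, N_FEATURES)).astype("float32")
labels_toy = rng.integers(0, N_CLASSES, 32)
y_toy = keras.utils.to_categorical(labels_toy, N_CLASSES)
t_toy = rng.uniform(0, 30, 32)
model.fit(X_toy, y_toy, sample_weight=segment_weights(t_toy, labels_toy, retraining=True),
          batch_size=12, epochs=1, verbose=0)
```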
After learning the underlying phenomenological differences between the light curves of each SN class from the balanced simulated sample, the network in its re-training stage additionally considers the observed distribution of real events in a given survey. * **Increasing the Realism of Estimated Photo-\(z\)s:** Given the reliance of photometric classifiers on the predicted distance to the transient, it is imperative that our model is trained on a set of photometric redshifts with statistical properties comparable to those derived from observed photometry. While the overall scatter of the simulated and observed samples appears similar, re-training prevents the network from relying upon unphysical artifacts within the simulation for its prediction. First, we train our network on the simulated training sample for 200 epochs. We use the stochastic gradient descent optimizer Adam(Kingma & Ba, 2014), with a learning rate of \(l=10^{-4}\). We then re-train this model for 200 additional epochs and the same learning rate on the observed training sample. The temporal weights used in both training stages are identical; however, we have only used the class weights in the re-training stage, to account for the class imbalance. We use a training batch size of 32 in the simulated training stage and 12 in the spectroscopic training stage to account for the smaller data volumes of the latter dataset. We conducted the primary and secondary training for each of the three models across 24 CPUs of a node of the Flatiron Institute Scientific Computing Hub. The primary training stage was completed in a wall time of 31.2 - 32.1 hours per model, while the secondary training stage was conducted in 9.7 - 11.6 hours of wall time per model. ## 6 Results In the following section, we evaluate the performance of our three models as a function of phase. We construct ROC and precision-recall curves for each model, and report balanced metrics across the SN classes in our sample. ### Receiver Operator Characteristic and Precision-Recall Curves For both the simulated and the observed samples, we bin our light curve segments into early (\(t_{N}<3\) days), intermediate (\(3<t_{N}<15\) days), and late (\(15<t_{N}<30\) days) epochs depending on the phase of the final observation in the segment. The vast majority of events within the 'early' bin will still be brightening, while the late events will predominantly be dimming after peak. For a given phase bin, events correctly classified as belonging to a class \(a\) are called True Positives (\(TP\)). Events incorrectly classified as belonging to class \(a\) are called False Positives (\(FP\)). Conversely, events correctly and incorrectly classified as _not_ belonging to class \(a\) are deemed True Negatives (\(TN\)) and False Negatives (\(FN\)), respectively. Then, the classification precision, also known as its purity, is calculated as \[p_{a}=\frac{TP_{a}}{TP_{a}+FP_{a}} \tag{9}\] The classification recall, also known as its completeness, is calculated by \[r_{a}=\frac{TP_{a}}{TP_{a}+FN_{a}} \tag{10}\] A common method to evaluate the performance of a binary classifier is to consider the false positive rate \(FP_{a}\) as a function of the true positive rate \(TP_{a}\); a well-trained classifier should be able to maximally increase the rate at which it recovers events from class \(a\) while minimizing the rate at which it mistakenly identifies the alternative class as belonging to class \(a\). 
The ideal behavior of this \(TP-FP\) curve, also known as a Receiver Operator Characteristic Curve (ROC; Peterson et al., 1954), is to rapidly increase toward a true positive rate of unity with minimal false positives. We quantify this behavior by calculating the Area Under the ROC (AUROC; Pepe et al., 2006), which is unity in the limit of perfect classification and 0.5 in the case of random guessing. Another common metric is to plot the recall of the model as a function of its precision; a well-performing classifier exhibits a Precision-Recall (PR) curve with high recall rates (the fraction of recovered events from class \(a\)) even as the demanded model precision increases. As in the ROC curve, this trade-off can be quantified by the area under the precision-recall curve (AUPRC; Saito & Rehmsmeier, 2015), with AUPRC = 1 for perfect classification and AUPRC = \(f_{a}\), the fraction of the test data consisting of class \(a\), for random guessing. The AUROC and the AUPRC provide complementary views of a classifier's performance. Our work is motivated by the need for rapid characterization of common events to conduct follow-up with highly limited resources. We will be unable to study the vast majority of discovered transient events in detail, but we must ensure that we do not waste precious resources when we do. For this reason, this and other upcoming SN classifiers will need to optimize a method's precision over its recall. We note that if an event is intrinsically rare, a classifier's recall rate may be prioritized over its precision. We use the StratifiedKFold module in sklearn to generate five evenly-sized random splits of the test datasets. These data subsets are mutually exclusive: no events are repeated between them. Next, we binarize our classification predictions for each split using the one-versus-all approach (in which the true class \(a\) is assigned a value of 1 and the other two classes are assigned a value of 0). We then generate ROC and PR Curves for each data split and each phase range ('early', 'intermediate', and 'late'). Finally, we calculate the mean and standard deviation of the curves across all splits. The ROC and PR curves for the 'Raw and Pad', '100-Timestep GP', and '0.2-Day GP' models are shown in Figures 6, 7, and 8, respectively. We show these curves applied both to the simulated test segments after the first training stage and to the observed test segments after the retraining stage. We report the class-specific AUROC and AUPRC values in the figure legends. On the simulated sample, we achieve near-perfect classification for both SNe II and SNe Ia at late phases (\(15<t_{N}<30\)): Our mean AUROC values for the two classes are \(\geq 98\%\) across all three models, and the mean AUPRC values are \(\geq 95\%\). We observe substantially worse results at late phases for SNe Ib/c: in the worst case, we observe an AUROC of \(94\pm 1\%\) for the 100-Timestep GP model but an associated AUPRC of \(54\pm 11\%\). We attribute this to the class imbalance of the test set, to which the AUPRC is particularly sensitive: \(\sim 5\%\) of the sample is SN Ib/c. At late phases, we also observe the largest difference between the three classifiers tested on the simulated transients. Despite generally comparable performance for all models, we find the lowest metrics with the 100-Timestep GP model, with AUPRC values of \(99\pm<1\%\), \(95\pm 1\%\), and \(54\pm 11\%\) for SNe Ia, SNe II, and SNe Ib/c, respectively. 
It is possible that this is a reflection of the loss of physically-relevant timescales in pre-processing these light curves: because 100 observations are generated independent of the duration of the light curve segment, the same rise and decline times could comprise half of one processed array and span the full extent of another. Although we also pass the phase information explicitly to the network, we propose that the physically-motivated spacing between array elements aids the network in learning the temporal evolution of an event. Further, we note that the first five observations were passed in directly (without interpolation) for each of the models. Because ZTF observations are taken with a median cadence of \(\sim\)2 days, these observations encode physical information about the phase of the event in the number of array elements. The network trained to leverage this information from raw data (the 'Raw and Pad' model) is better-adapted to process this early data, evidenced by the highest early-phase precision for simulated SNe Ib/c (24 \(\pm\) 1%). We now consider the performance of our three models on the spectroscopically-confirmed sample of SNe from the ZTF BTS. For the dominant class in the sample, SN Ia, we note surprisingly consistent results when transitioning from simulations to observations: the late-phase classification achieves an AUROC of \(85\pm 4\%\) and an AUPRC of \(94\pm 3\%\) in the worst case (with the 'Raw and Pad' model). The performance on the other two classes is appreciably worse: we observe significantly greater variance among the performance results between the five cross-validation folds for each of the three models, particularly for SNe Ib/c. In the worst case, our mean AUROC metric for late-phase classification decreases by 36% for SNe Ib/c and 34% for SNe II with the 'Raw and Pad' model. Our AUROC and AUPRC values increase roughly monotonically with phase for each classifier applied to both the simulated and observed SN samples. There is a single exception: the mean SN Ib/c AUROC decreases from intermediate to late phase with the 'Raw and Pad' model, although the difference is not statistically significant. The '0.2-Day GP' model achieves marginally superior performance at late phases, and we conclude that this representation may be slightly better able to incorporate the long-term evolution of each light curve. We note that, in an earlier iteration of this work, the inclusion of ZTF SNe that did not pass the ZTF-imposed quality and purity cuts caused this performance increase with phase to disappear for the 'Raw and Pad' model. The fact that our simplified network architecture is able to reasonably fit the temporal behavior of the real data across all three light curve representations suggests that, for sufficient-quality observations, interpolation onto an evenly-spaced grid (as is commonplace in present-day photometric classifiers) may not be a required pre-processing step for accurate RNN classification. Finally, we find that _all three models are able to achieve classification performance in the first three days better than random guessing across all SN classes._ For the '0.2-Day GP' model, we find mean AUROC values of \(>72\%\) across all classes, and an AUPRC value of 90% for SNe Ia. We additionally find AUPRC values of \(22\pm 7\%\) for SNe Ib/c and \(38\pm 2\%\) for SNe II. These results are encouraging evidence that the light curve rise of an SN, coupled with its host galaxy photometry, can be used for real-time early classification. 
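The cross-validated, one-versus-all evaluation described above can be sketched schematically as follows; the random stand-in predictions and the use of average precision as the AUPRC estimator are our assumptions, included only to illustrate the procedure.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_curve, auc, average_precision_score

def cv_one_vs_all(y_true, y_prob, target_class, n_splits=5, seed=0):
    """Mean and standard deviation of the one-vs-all AUROC and AUPRC across stratified splits.

    y_true: integer labels (0 = SN II, 1 = SN Ia, 2 = SN Ib/c); y_prob: (N, 3) softmax outputs.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aurocs, auprcs = [], []
    for _, idx in skf.split(np.zeros(len(y_true)), y_true):   # each fold is one evaluation split
        y_bin = (y_true[idx] == target_class).astype(int)     # one-versus-all binarization
        scores = y_prob[idx, target_class]
        fpr, tpr, _ = roc_curve(y_bin, scores)
        aurocs.append(auc(fpr, tpr))
        auprcs.append(average_precision_score(y_bin, scores))
    return (np.mean(aurocs), np.std(aurocs)), (np.mean(auprcs), np.std(auprcs))

# Random stand-in predictions for a BTS-like class mix, for illustration only.
rng = np.random.default_rng(2)
y_true = rng.choice([0, 1, 2], p=[0.15, 0.80, 0.05], size=700)
y_prob = rng.dirichlet(np.ones(3), size=700)
for name, cls in [("SN II", 0), ("SN Ia", 1), ("SN Ib/c", 2)]:
    (roc_mu, roc_sd), (pr_mu, pr_sd) = cv_one_vs_all(y_true, y_prob, cls)
    print(f"{name}: AUROC = {roc_mu:.2f} +/- {roc_sd:.2f}, AUPRC = {pr_mu:.2f} +/- {pr_sd:.2f}")
```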
Our results also suggest that evaluating machine-learning solutions on simulated SN samples alone may yield overly optimistic performance estimates, and may not be reflective of their performance on real data.

### Comparison to Light Curve-Only Classification

Next, we quantify the impact of each component of our classification framework on our final results. Due to the slightly superior late-time performance of the '0.2-Day GP' model described in the previous section, we use this as our baseline for comparison. To facilitate this analysis, we have trained three additional models using the '0.2-Day GP' interpolation method. In the first, we remove host-galaxy photometry and consider only the transient's light curve, photometric redshift, and the Galactic extinction along the line of sight. In the second, we remove our primary training stage, and train our network exclusively on the imbalanced training set from the ZTF BTS. In the third, we remove our adaptive training stage and train our network exclusively on the balanced simulated sample.

To better facilitate a comparison between these models, we consider the macro-average of the AUROC, AUPRC, precision, and recall between the three classes. These metrics consider the contributions of each class equally, and consequently are not dominated by the network's performance on the most populous class in the observed sample (SN Ia). This is in contrast to the micro-average, in which a weighted average is computed using the number of events of each class. Our macro-averaged balanced metrics are denoted with a 'b-' prefix. We also introduce an additional metric to our comparison: the balanced-F\({}_{1}\) score, defined as the harmonic mean between the precision and recall of a classifier averaged between classes: \[F_{1}=\frac{1}{C}\sum_{i=1}^{C}\frac{2r_{i}p_{i}}{r_{i}+p_{i}} \tag{11}\] where \(C\) is the number of classes and \(p_{i}\) and \(r_{i}\) denote the precision and recall for class \(i\). The balanced-\(F_{1}\) score consolidates multiple performance metrics into a single value for directly comparing classifiers. We report this and our other metrics for the above models, evaluated on the early-phase (\(<3\) day) light curves of the observed ZTF BTS sample, in Table 2.

We find that _removing any component of our training sequence decreases the mean value of every metric_. The largest decrease in every metric comes from removing the adaptive training stage. Interestingly, removing the primary training stage results in a nearly comparable mean sample accuracy to our baseline classifier (81% compared to 82%). Removing the adaptive training stage results in a substantial drop in mean accuracy (66% compared to 82%), although these two truncated-training networks have comparable precision. This can be understood in the context of the class breakdown in each training set: training only on the fully balanced simulated sample results in inaccurate predictions on a sample comprised primarily of SNe Ia, while the opposite holds true for a training set dominated by SNe Ia. The smallest decrease comes from the removal of host-galaxy information. Although the uncertainties are large, the systematic decrease in every metric suggests the benefit of including this photometry in early classification efforts.
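For concreteness, the macro-averaged ('balanced') precision and recall and the balanced-\(F_{1}\) score of Equation (11) can be computed as in the following sketch; the function name and the use of scikit-learn's per-class scores are illustrative choices rather than a description of our exact implementation.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score


def balanced_scores(y_true, y_pred):
    """Macro-averaged precision and recall, and the balanced-F1 of Eq. (11)."""
    # average=None returns one score per class; averaging these treats classes equally
    p = precision_score(y_true, y_pred, average=None, zero_division=0)
    r = recall_score(y_true, y_pred, average=None, zero_division=0)
    f1 = np.divide(2.0 * p * r, p + r, out=np.zeros_like(p), where=(p + r) > 0)
    return p.mean(), r.mean(), f1.mean()
```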
### Comparison to Previous Classifiers

We now compare the design of our classifier to pre-existing methods. Because we have sought to overcome existing barriers to real-time classification using physically-motivated information instead of model complexity, our classifier is simpler in architecture than the majority of comparable leading methods. Earlier RNN methods for photometric classification (e.g., superNNova; Moller & de Boissiere, 2020) considered two layers of bi-directional GRUs, in which the RNN representation of the light curve at each unit is informed by information from both earlier and later phases. These architectures are suited for full-phase classification but are ill-suited for classification with an incomplete light curve. RAPID, the first recurrent neural network to be constructed for real-time classification, consisted of two layers of 100 uni-directional GRUs to avoid leaking information from later phases. It was later updated to use temporal convolutional network units (TCNs; Bai et al., 2018). Our method uses a single LSTM layer consisting of 60 uni-directional units.

To interpolate observations onto a uniform grid, Muthukrishna et al. (2019) interpolated synthetic ZTF observations onto a grid of 50 observations spanning \(-70<T-T_{\text{trigger}}<80\) days at a 3-day cadence. Model predictions were then wrapped in a TimeDistributed layer to provide a classification output at each interval in phase. In contrast, we process segments of the light curve separately through our network and interpolate each segment independently with a flexible GP. We have chosen this approach so that our light curve representation improves as additional observations are obtained, contrary to the stationary interpolation scheme considered by Muthukrishna et al. (2019). This mimics model construction in real time: as additional observations are obtained, our understanding of the shape and evolution of the light curve improves. Muthukrishna et al. (2019) also define a pseudo-class to distinguish pre-explosion from post-explosion phases, and use a parametric fit to each light curve in training to estimate the time of explosion relative to trigger. We avoid this framework so that the network does not need to estimate the time of explosion, which in the case of sparse light curves is non-trivial and non-essential for classification. We note that our model can easily be modified to incorporate pre-explosion non-detections, as is done in Muthukrishna et al. (2019); this may further improve its performance.

Figure 6: **First Row: Receiver Operator Characteristic (ROC) Curves for the network trained and tested directly on transient and host photometry (without using a GP interpolation scheme). The performance has been evaluated on light curve segments truncated at early, intermediate, and late times. The mean AUROC values are reported with 1-\(\sigma\) uncertainties in the legend of each panel. Third Row: The Precision-Recall Curves for the same model applied to synthetic photometry. The AUPRC values with 1-\(\sigma\) uncertainties are given in the corresponding legends. Second and Fourth Rows: ROC and Precision-Recall Curves for the same model after 200 epochs of re-training on observed light curve segments (see text for details).**

Figure 7: Same plot as Fig. 6 for the 100-Timestep GP model.

Figure 8: Same plot as Fig. 6 for the 0.2-Day GP model.

Two years after the development of RAPID, the real-time classifier SCONE (Qu et al., 2021) was introduced.
SCONE uses a convolutional neural network applied to 2-dimensional 'flux-heatmap' representations of transient light curves for classification, after GP interpolation in wavelength and time. The model consists of 22,606 free parameters for multi-class classification, whereas ours consists of 17,463. We note, however, that our model has only been constructed to distinguish three classes of SNe, whereas theirs includes SNe Iax, SNe 91bg, and SLSNe-I for a total of six possible SN classes. During the writing of this work, Pimentel et al. (2022) released a Deep Attention Model for photometric classification of transients discovered by ZTF. Attention Models (Mnih et al., 2014) are secondary neural networks added to a primary network and tasked with learning the input features most relevant to the output of the primary network. Using a custom time-modulated attention component (called 'TimeModAttn'), Pimentel et al. (2022) uses an auto-encoder to learn a continuous representation of a light curve from discrete observations. Two model approaches are considered: a serial one in which an auto-encoder simultaneously encodes the light curves in ZTF-\(g\) and ZTF-\(r\) into a single latent vector; and a parallel one in which the light curves of an event in each band are encoded separately, combined, and then compressed with a linear projection to arrive at a dimensionality comparable to that of the serial model. After the partial light curve is encoded, it is processed by a Multi-Layered Perceptron (MLP; Murtagh, 1991) model with two layers, a dropout fraction of 50%, a batch normalization component, and a final softmax layer to convert the output of the model to a series of classification probabilities. We note that this requires more pre-processing than our model, and we require no dropout, batch normalization, or attention component. Nevertheless, the inclusion of the attention component allows for the network to predict the specific observations most relevant for classification by the model, a major advancement in the interpretability of machine learning models used for photometric classification. We now compare the performance of our classifier to prior methods. We list our balanced performance metrics at each phase bin for the simulated and observed SN samples classified with the 0.2-Day GP model in Table 3. We acknowledge that few commonly-reported classification metrics are intuitive; for clarity, we also provide the overall accuracy of the network at each phase, although we caution that this is equal to the micro-averaged precision and so is sensitive to the class imbalance of the dataset (\(\sim\)80% SNe Ia). In Pimentel et al. (2022), the authors report the performance of their TimeModAttn models relative to a traditional RNN with LSTM units and a Balanced Random Forest algorithm constructed from 144 features extracted from the light curves. They evaluate these classifiers on a sample of spectroscopically-confirmed SNe from ZTF comparable to that used in this work. We note that these metrics are computed 100 days after detection, 70 days after our longest light curve segment, and so denote them as _Full-Phase_. Using the results from their parallel TimeModAttn network with synthetic pre-training and zero data augmentation, we provide these balanced metrics in Table 3. 
We find no statistically significant difference between the balanced precision, F\({}_{1}\) score, AUROC, or AUPRC of our classifier at late-phase (\(15<t_{N}<30\)) and those reported for the TimeModAttn model 100 days from trigger, although their mean balanced precision falls below the uncertainty range for ours. It is likely that our results represent lower limits for the performance of our classifier following 30 days. We find the TimeModAttn model to be superior to our method only in terms of balanced recall, and we emphasize that this metric is less important than precision for triggering follow-up spectroscopy on targeted events (except in the case of rare, high-value transients). We find the mean AUROC, AUPRC, precision, and F\({}_{1}\) score of our method at late-phase to be superior to the baseline random forest (BRF) model considered; however, we caution that our statistical uncertainties across samples are larger than each of their considered methods, and this limits our ability to directly compare them. In addition, Pimentel et al. (2022) considered four SN classes (SNe Ia, SNe Ib/c, SNe II, and SLSNe-I), whereas we have excluded SLSNe-I from our sample. Although the light curve and host galaxy photometry of these events are reasonably distinct from the other classes (and performance is often significantly worse on SNe Ib/c and SNe II; these were the two SN classes with the lowest performance across twelve classes considered in Muthukrishna et al., 2019, while SLSNe achieved the highest late-phase AUROC values in Pimentel et al., 2022), our results may differ when applied to the four-class problem.

Next, we compare the performance of our network to earlier methods at _partial-phase_ classification. The number of classifiers that report early-time performance on real data is low; as a result, we will begin by comparing to the early performance of RAPID and SCONE on synthetic samples. We caution that both of these classifiers considered more SN classes than this work; in addition, whereas RAPID simulated ZTF light curves, the SCONE framework was tested on LSST-simulated photometry from PLAsTiCC (Kessler et al., 2019). The impact on classification of the increased spectral coverage but lower cadence of LSST relative to ZTF is an open question, and should be investigated further. Figure 7 of Muthukrishna et al. (2019) shows the recall of the model across all 12 considered transient classes 2 days since trigger. The performance on SNe II and SNe Ib/c is the worst across all classes. Combining these classes into a balanced-recall score, we calculate a value of 0.36, compared to our value of 0.61 \(\pm\) 0.03. Figure 8 presents these values 40 days from trigger, when the model achieves a balanced-recall of 0.57 across SNe Ia, II, and SNe Ib/c. Our late-phase balanced recall on synthetic ZTF samples is 0.90 \(\pm\) 0.02, evaluated at least ten days earlier in phase. The confusion matrices shown in Muthukrishna et al. (2019) indicate that Calcium-rich transients and SNe Iax are significant contaminants for classifying SNe Ib/c; our superior performance is likely partially attributable to not considering these classes. Nevertheless, we find significantly improved performance on the partial-phase light curves of dominant SN classes relative to RAPID. Next, we consider the partial-phase performance of SCONE (Qu et al., 2021) on simulated LSST light curves. Fig.
4 of Qu & Sako (2022) indicates that their model achieves (when using redshift) a balanced-recall of 0.75 \(\pm\) 0.03 across SNe Ia, SNe II, and SNe Ib/c 5 days from trigger, compared to our lower recall of 0.61 \(\pm\) 0.03 within the first 3 days. 50 days following trigger, they report a balanced recall of 0.86 \(\pm\) 0.03, compared to our value of 0.90 \(\pm\) 0.02 in the first 30 days. Although these results are encouraging, Table 3 reveals the degradation in results when transitioning from synthetic to real samples. We strongly encourage the validation of RAPID, SCONE, and other early-time photometric classifiers on spectroscopic SN samples to better understand the advantages of each approach.

In addition to the full-phase metrics reported by Pimentel et al. (2022), plots are provided for the b-AUROC values of their models as a function of phase. At the earliest observation (approximately three days from trigger), they note a maximum b-AUROC of \(\sim\) 50% across all of their models on observed ZTF SNe. Our model achieves a significantly higher b-AUROC of \(74\pm 4\%\) within the first 3 days. They report a b-AUROC of \(\sim\) 75% for their BRF and LSTM RNN models, and \(\sim\) 85% for their TimeModAttn model at \(\sim\) 30 days from trigger, compared to our \(91\pm 4\%\). We conclude that a simple architecture is able to achieve comparable or superior performance to more complex methods, both in distinguishing between SNe Ia, II, and Ib/c at early phases and in improving performance as more observations are obtained.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline
Model & b-AUROC & b-AUPRC & b-Precision & b-Recall & b-F\({}_{1}\) Score & Accuracy \\ \hline \hline
Baseline & 0.74 \(\pm\) 0.04 & 0.52 \(\pm\) 0.07 & 0.58 \(\pm\) 0.13 & 0.46 \(\pm\) 0.09 & 0.48 \(\pm\) 0.11 & 0.82 \(\pm\) 0.02 \\
No Host & 0.72 \(\pm\) 0.08 & 0.48 \(\pm\) 0.09 & 0.48 \(\pm\) 0.12 & 0.41 \(\pm\) 0.09 & 0.40 \(\pm\) 0.08 & 0.78 \(\pm\) 0.02 \\
No Primary Training & 0.71 \(\pm\) 0.04 & 0.45 \(\pm\) 0.02 & 0.40 \(\pm\) 0.18 & 0.34 \(\pm\) 0.01 & 0.30 \(\pm\) 0.01 & 0.81 \(\pm<\) 0.01 \\
No Adaptive Training & 0.65 \(\pm\) 0.03 & 0.43 \(\pm\) 0.02 & 0.41 \(\pm\) 0.02 & 0.39 \(\pm\) 0.05 & 0.39 \(\pm\) 0.03 & 0.66 \(\pm\) 0.02 \\ \hline \end{tabular} \end{table} Table 2: Early-phase (\(<\) 3 day) classification metrics for the observed ZTF BTS Sample, with 0.2-day GP-interpolation scheme, for three cases: no host photometry with the baseline training scheme, host photometry with no primary training, and host photometry with no adaptive training (see text for details). Removing any of these components results in an average decrease in classifier performance across every metric considered.
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline
Model & Data & Phase & b-AUROC & b-AUPRC & b-Precision & b-Recall & b-F\({}_{1}\) Score & Accuracy \\ \hline \hline
0.2-Day GP & Synthetic & _Early_ & 0.82 \(\pm\) 0.01 & 0.55 \(\pm\) 0.02 & 0.51 \(\pm\) 0.01 & 0.61 \(\pm\) 0.03 & 0.54 \(\pm\) 0.02 & 0.74 \(\pm\) 0.01 \\
 & & _Intermediate_ & 0.94 \(\pm\) 0.02 & 0.78 \(\pm\) 0.04 & 0.65 \(\pm\) 0.02 & 0.78 \(\pm\) 0.03 & 0.70 \(\pm\) 0.03 & 0.85 \(\pm\) 0.01 \\
 & & _Late_ & 0.98 \(\pm\) 0.01 & 0.90 \(\pm\) 0.02 & 0.77 \(\pm\) 0.04 & 0.90 \(\pm\) 0.02 & 0.82 \(\pm\) 0.03 & 0.92 \(\pm\) 0.02 \\ \cline{2-9}
 & ZTF BTS & _Early_ & **0.74 \(\pm\) 0.04** & **0.52 \(\pm\) 0.07** & **0.58 \(\pm\) 0.13** & **0.46 \(\pm\) 0.09** & **0.48 \(\pm\) 0.11** & **0.82 \(\pm\) 0.02** \\
 & & _Intermediate_ & 0.86 \(\pm\) 0.06 & 0.59 \(\pm\) 0.06 & 0.65 \(\pm\) 0.07 & 0.58 \(\pm\) 0.07 & 0.60 \(\pm\) 0.07 & 0.84 \(\pm\) 0.04 \\
 & & _Late_ & 0.91 \(\pm\) 0.04 & 0.72 \(\pm\) 0.10 & 0.72 \(\pm\) 0.11 & 0.65 \(\pm\) 0.10 & 0.67 \(\pm\) 0.10 & 0.87 \(\pm\) 0.05 \\ \hline \hline
0.2-Day GP & ZTF BTS & _Late_ & 0.91 \(\pm\) 0.04 & 0.72 \(\pm\) 0.10 & 0.72 \(\pm\) 0.11 & 0.65 \(\pm\) 0.10 & 0.67 \(\pm\) 0.10 & 0.87 \(\pm\) 0.05 \\
BRF & ZTF & _Full_ & 0.87 \(\pm\) 0.02 & 0.60 \(\pm\) 0.05 & 0.53 \(\pm\) 0.03 & 0.69 \(\pm\) 0.05 & 0.53 \(\pm\) 0.04 & - \\
LSTM RNN & ZTF & _Full_ & 0.88 \(\pm\) 0.03 & 0.65 \(\pm\) 0.05 & 0.57 \(\pm\) 0.03 & 0.68 \(\pm\) 0.04 & 0.58 \(\pm\) 0.04 & - \\
TimeModAttn & ZTF & _Full_ & 0.90 \(\pm\) 0.02 & 0.68 \(\pm\) 0.06 & 0.59 \(\pm\) 0.02 & 0.73 \(\pm\) 0.04 & 0.61 \(\pm\) 0.03 & - \\ \hline \end{tabular} \end{table} Table 3: Classification metrics for the 0.2-Day GP RNN model evaluated on synthetic and observed (ZTF BTS) light curves at early (\(t_{N}<\) 3 days), intermediate (3 days \(<\)\(t_{N}<\) 15 days), and late phases (15 days \(<\)\(t_{N}<\) 30 days). Upper rows indicate the performance of the model in this work, whereas the lower rows indicate only its late-phase performance in comparison with the same metrics at full-phase (\(t_{N}\) = 100 days) for the BRF, LSTM RNN, and TimeModAttn models from Pimentel et al. (2022). The bolded row highlights the early performance of the classifier on the observed ZTF sample. The three models from literature are evaluated on observed ZTF light curves, but the specific SNe considered may vary.

## 7 Conclusions & Future Work

We have shown that an SN classifier leveraging both transient and host-galaxy photometry can achieve comparable performance to machine learning methods with complex architectures. Our algorithm represents one of the first attempts to prepare for real-time SN classification at every stage of the design and implementation pipeline. Below, we outline our primary conclusions:

1. Between the three light curve pre-processing methods we considered ('Raw and Pad', '0.2-Day GP', and '100-Timestep GP'), the '0.2-Day GP' model performs marginally better in late-phase classification. The three classifiers achieve comparable early and time-evolving classification when applied to the ZTF BTS data. This suggests that the temporal evolution of the photometry can be equally well-captured by an RNN trained on an interpolated representation of the light curve and the raw photometry itself, if the photometry is high-S/N and suffers from minimal Galactic extinction (as is the case with our high-quality sample).
2. Our late-time classification performance (between 15 and 30 days from trigger) is comparable to the leading methods (the balanced random-forest, LSTM RNN, and TimeModAttn networks given in Pimentel et al. 2022) 100 days from trigger. The mean balanced-recall of our method is worse than that of the other methods considered, but this is of secondary importance for prioritizing follow-up of common SN classes for upcoming surveys.

3. Our method, with only a single LSTM layer of 60 gated units, achieves \(\sim 25\%\) higher balanced-AUROC than prior methods within the first three days of observation. Table 2 suggests that this is due to a combination of host-galaxy photometry _and_ early light curve photometry. The decrease in performance from excluding host-galaxy information was within the uncertainties of the baseline results; other host galaxy properties, such as color, may be more informative.

4. The uncertainty in each metric (estimated with 5-fold cross-validation) is higher with our model applied to the ZTF BTS than on synthetic data, and also higher than that reported by Pimentel et al. (2022) for a comparable observational dataset. This may reflect the variation in the GP model of a single event as more data is collected. If this is indeed the case, it reflects the realistic stochasticity of light curve fitting in real time.

5. There remains a fundamental mismatch between performance on simulations and on observations. All three models achieve impressive performance on the synthetic light curves, and suffer a degradation in performance when applied to real observations. This could be due to overly simplistic host-galaxy correlations and/or overly optimistic redshifts in the simulations, but we predict that classification methods developed with simulated samples (e.g., those for PLAsTiCC and ELAsTiCC) will face an adaptation step to ensure reliable classification on observed samples.

By constructing a normalizing flow to generate redshifts and host-galaxy photometry for every transient in our synthetic sample, we have increased the realism of the current generation of transient simulations. We have used point estimates for photometric redshifts, but our probabilistic approach provides full posterior distributions for each galaxy. These distributions can reveal bimodalities and other complex behavior not captured by a point estimate, and future work should be devoted to developing classification methods that incorporate this information (e.g., by drawing from the redshift distribution multiple times and quantifying how classification results change with each point estimate, or by passing in a discretized form of the full posterior distribution).

We have found that classification performance suffered when applying our RNN models directly to the observational data after training with synthetic samples. The performance of our network after re-training suggests that transfer learning can improve the network's robustness to realistically complex datasets. Transfer learning is rapidly becoming a critical component of photometric classification architectures (with Pimentel et al. 2022 and Burhanudin and Maund 2022 released while our network was being developed), and will become increasingly valuable in the initial years of upcoming surveys such as LSST.

Our GP interpolation model is implemented in the lightweight and GPU-accelerated code tinygp; as a result, it is competitive with the fastest GP models in the literature.
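For illustration, the sketch below conditions a GP on a single-band light-curve segment and evaluates it on a uniform 0.2-day grid. It uses scikit-learn's GaussianProcessRegressor as a stand-in for the tinygp implementation adopted in this work, and the Matern-3/2 kernel and length scale are placeholder choices rather than the settings of our pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern


def interpolate_segment(t, flux, flux_err, dt=0.2):
    """Condition a GP on one light-curve segment and predict on a dt-day grid."""
    # Placeholder kernel: a free amplitude times a Matern-3/2 in time
    kernel = ConstantKernel(1.0) * Matern(length_scale=10.0, nu=1.5)
    # Measurement variances are added to the diagonal of the kernel matrix
    gp = GaussianProcessRegressor(kernel=kernel, alpha=flux_err ** 2, normalize_y=True)
    gp.fit(t.reshape(-1, 1), flux)
    t_grid = np.arange(t.min(), t.max() + dt, dt)
    mu, sigma = gp.predict(t_grid.reshape(-1, 1), return_std=True)
    return t_grid, mu, sigma
```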
When implementing our network for classification directly on the ZTF or LSST alert streams, our architecture can be further optimized to ensure rapid inference. We leave this model refinement to future work. We have avoided using our GP to interpolate early phases of our transient light curves, except where enough data has been collected to construct a reasonable model for its evolution. The growth of physics-informed machine learning in recent years (Wu et al. 2018; Rao et al. 2020; Karpov et al. 2022) represents a promising direction for balancing model flexibility and physically-motivated constraints. Multiple models exist for the photometric evolution of SNe (particularly for SNe Ia; Kenworthy et al. 2021; Mandel et al. 2022), and future work could incorporate one of these models into the mean model of the GP or into a distinct interpolation scheme that is only used at early phases, where limited data is available to condition a model.

The principles explored in this work have immediate utility for surveys whose science goals include the discovery of young SNe. One of these is the Young SN Experiment (YSE; Jones et al., 2021), a 1500 deg\({}^{2}\) survey that began in 2019 and that uses the Pan-STARRS telescopes to obtain well-sampled \(griz\) light curves of low-redshift SNe. A primary goal of this survey is the characterization of early-time flux excesses in young SNe and the construction of the low-redshift SN Ia anchor sample for upcoming cosmological analyses with the Vera C. Rubin Observatory. Simulations suggest that this survey at full operation will discover \(\geq\)2 SNe within 3 days of explosion per month, when they can be classified with \(\sim 82\%\) accuracy using our method. We aim to re-train our classifier using simulated YSE light curves and adapt the network using the light curves consolidated as part of the First Data Release for YSE (Aleo et al., 2023). We predict that the increased wavelength coverage and higher cadence achieved by augmenting ZTF light curves with YSE observations will result in superior early classification results to the ones shown in this work.

Although we have considered only the three dominant classes of SNe in training and testing our network at low-\(z\), the problem faced by classifiers operating directly on an alert stream such as that served by LSST will be more difficult: they will have to classify from among the full diversity of observed transient classes spanning \(0<z<3\). For this goal, the SCOTCH framework (Lokken et al., 2022) will be a valuable resource. SCOTCH encodes observed correlations for rarer classes, including Tidal Disruption Events (TDEs; French et al., 2017, 2020; Zabludoff et al., 2021), Hydrogen-Poor Superluminous SNe (SLSNe-I; Leloudas et al., 2015; Perley et al., 2016; Angus et al., 2016), and Kilonovae (KNe; Prochaska et al., 2006; Berger, 2009). These events will represent a fraction of the full stream, but high-cadence coverage of the early-time behavior of even a few events of these classes will be a significant contribution to the literature. Consequently, additional work should be dedicated to extending our classification schema to include these events.

The broad-band photometry of a galaxy in optical passbands is, at best, a limited indicator of its underlying spectral energy distribution. Multiple degeneracies exist between, for example, the distance to the galaxy and the color of its average stellar population, particularly across wide ranges in redshift.
We have limited our analysis to the approximate redshift range spanned by the ZTF BTS (\(z<0.2\)) in order to validate our methods on observed data, and it is likely that this has allowed us to largely avoid this drawback (although at these low distances, peculiar velocities introduce additional scatter). LSST will dramatically increase our sample of high-redshift (\(z<1\)) SNe, and at these distances it is unlikely that optical host galaxy photometry alone will have high discriminating power. We have begun investigating the value of host galaxy star-formation rate and stellar mass in simulated samples, but additional work should be devoted to rapid estimation of these parameters from host galaxies imaged with LSST (e.g., using Bayesian SED-fitting codes such as Prospector; Johnson et al., 2021). Further, our simulations considered only the photometry of transient hosts in PS1-\(grizy\); for high-redshift systems discovered by the Vera C. Rubin Observatory, the _Nancy Grace Roman Space Telescope_ (Spergel et al., 2015), and the _James Webb Space Telescope_ (Gardner et al., 2006), infrared photometry will play an increasing role in precise galaxy characterization, as it more precisely traces a galaxy's stellar mass. Observed transient host galaxy catalogs are increasingly incorporating photometry spanning ultraviolet and infrared wavelengths (Qin et al., 2022), and future work should be dedicated to reproducing these observations in a simulated host galaxy sample.

## 8 Acknowledgements

We wish to thank G. Narayan, V. Ashley Villar, R. Kessler, J. F. Crenshaw, A. Rest, and J. Pierel for conversations that improved this work. The authors further thank the anonymous referee for their thorough review, which has significantly strengthened this paper. A.G. acknowledges support from the Flatiron Institute Center for Computational Astrophysics Pre-Doctoral Fellowship Program in Spring 2022. A.G. is also supported by the Illinois Distinguished Fellowship, the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746047, and the Center for Astrophysical Surveys Graduate Fellowship at the University of Illinois. A.I.M. acknowledges support during this work from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research. A.I.M. is a member of the LSST Interdisciplinary Network for Collaboration and Computing (LINCC) Frameworks team; LINCC Frameworks is supported by Schmidt Futures, a philanthropic initiative founded by Eric and Wendy Schmidt, as part of the Virtual Institute of Astrophysics (VIA). P.D.A. is supported by the Center for Astrophysical Surveys at the National Center for Supercomputing Applications (NCSA) as an Illinois Survey Science Graduate Fellow. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. The ANTARES project has been supported by the National Science Foundation through a cooperative agreement with the Association of Universities for Research in Astronomy (AURA) for the operation of NOIRLab, through an NSF INSPIRE grant to the University of Arizona (CISE AST-1344024, PI: R. Snodgrass), and through a grant from the Heising-Simons Foundation.
ZTF is supported by National Science Foundation grant AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI: Graham). This research made use of the Cori system associated with the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory and operated under Contract No. DE-AC02-05CH11231.

ANTARES (Matheson et al., 2021), **Astropy** (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018), **imbalanced-learn** (Lemaitre et al., 2017), **Keras** (Chollet et al., 2015), **Matplotlib** (Hunter, 2007), **numpy** (Walt et al., 2011), **Pandas** (pandas development team, 2020), **PZFlow** (Crenshaw et al., 2022), **Seaborn** (Waskom et al., 2014), **Scikit-Learn** (Pedregosa et al., 2011), **Tensorflow** (Abadi et al., 2015), **tinygp**.
2306.00740
On the Limitations of Temperature Scaling for Distributions with Overlaps
Despite the impressive generalization capabilities of deep neural networks, they have been repeatedly shown to be overconfident when they are wrong. Fixing this issue is known as model calibration, and has consequently received much attention in the form of modified training schemes and post-training calibration procedures such as temperature scaling. While temperature scaling is frequently used because of its simplicity, it is often outperformed by modified training schemes. In this work, we identify a specific bottleneck for the performance of temperature scaling. We show that for empirical risk minimizers for a general set of distributions in which the supports of classes have overlaps, the performance of temperature scaling degrades with the amount of overlap between classes, and asymptotically becomes no better than random when there are a large number of classes. On the other hand, we prove that optimizing a modified form of the empirical risk induced by the Mixup data augmentation technique can in fact lead to reasonably good calibration performance, showing that training-time calibration may be necessary in some situations. We also verify that our theoretical results reflect practice by showing that Mixup significantly outperforms empirical risk minimization (with respect to multiple calibration metrics) on image classification benchmarks with class overlaps introduced in the form of label noise.
Muthu Chidambaram, Rong Ge
2023-06-01T14:35:28Z
http://arxiv.org/abs/2306.00740v3
# A Uniform Confidence Phenomenon in Deep Learning and its Implications for Calibration ###### Abstract Despite the impressive generalization capabilities of deep neural networks, they have been repeatedly shown to poorly estimate their predictive uncertainty - in other words, they are frequently overconfident when they are wrong. Fixing this issue is known as model calibration, and has consequently received much attention in the form of modified training schemes and post-training calibration procedures. In this work, we present a significant hurdle to the calibration of modern models: deep neural networks have large neighborhoods of almost certain confidence around their training points. We demonstrate in our experiments that this phenomenon consistently arises (in the context of image classification) across many model and dataset pairs. Furthermore, we prove that when this phenomenon holds, for a large class of data distributions with overlaps between classes, it is not possible to obtain a model that is asymptotically better than random (with respect to calibration) even _after_ applying the standard post-training calibration technique of temperature scaling. On the other hand, we also prove that it is possible to circumvent this defect by changing the training process to use a modified loss based on the Mixup data augmentation technique. ## 1 Introduction The past decade has seen a rapid increase in the prevalence of deep learning models across a variety of applications, in large part due to their impressive predictive accuracy on unseen test data. However, as these models begin to be applied to critical applications such as predicting credit risk (Clements et al., 2020), diagnosing medical conditions (Esteva et al., 2017, 2021; Elmarakeby et al., 2021), and autonomous driving (Bojarski et al., 2016; Grigorescu et al., 2020), it is crucial that the models are not only accurate but also predict with appropriate levels of uncertainty. In the context of classification, a model with appropriate uncertainty would be correct with a probability that is similar to its predicted confidence - for example, among samples on which the model predicts a class with 90% confidence, around 90% of them should indeed be the predicted class (a more formal definition is provided in Equation (3.1)). As a concrete (but highly simplified) example, consider the case of applying a deep learning model for predicting whether a patient has a life-threatening illness (Jiang et al., 2012). In this situation, suppose our model classifies the patient as not having the illness but does so with high confidence. A physician using this model for their assessments may then incorrectly diagnose the patient (with potentially grave consequences). On the other hand, if the model had lower confidence in this incorrect prediction, a physician may be more likely to do further assessments. Obtaining such models with good predictive uncertainty is the problem of _model calibration_, and has seen a flurry of recent work in the context of training deep learning models (Guo et al., 2017; Thulasidasan et al., 2019; Ovadia et al., 2019; Wen et al., 2020; Minderer et al., 2021). Without these calibration techniques, it is commonly observed that large neural network models are often over-confident (Guo et al., 2017) - they can make incorrect predictions with very high confidence. 
In this paper, we investigate what properties of trained neural networks lead to such overconfidence in practice, and whether these properties have further implications for how we should do model calibration. Our approach centers on analyzing model confidence in regions of the input space beyond just the training and test data. Such an analysis can allow us to predict model calibration performance for different training/test data distributions, and then consequently determine if there exist plausible distributions for which we cannot hope to be calibrated for. This idea of attempting to understand the confidence of neural networks outside of the training and test data is not new; a recent line of work (Hein et al., 2019; Meinke and Hein, 2020; Kristiadi et al., 2020) has shown empirically and theoretically that certain classes of ReLU networks can have high confidence predictions far away from the data that they were trained on. To the best of our knowledge, however, prior work has not precisely characterized (empirically or theoretically) the nature of these high confidence regions and their impacts on calibration. In this work, we attempt to make progress on this problem by answering the following questions: 1. How does the confidence of trained neural networks change as we move away from the training and test data? 2. Does the confidence behavior of neural networks imply settings for which we cannot calibrate them using post-training calibration methods? ### Main Contributions Our main empirical finding can be summarized as: _Deep neural networks have large regions of almost certain confidence around their training data points, across a wide variety of architectures and data, so long as they are trained to zero training error._ It is not surprising that neural networks trained to achieve zero training error have softmax outputs that are effectively point masses at their training data points, as this is optimal for the empirical cross-entropy. What is surprising about our results is that these softmax outputs remain almost constant in relatively large neighborhoods around _all_ of the training data points. We empirically establish this phenomenon for various image classification models ranging from popular baselines to current state-of-the-art in Section 2. Our results are consistent across multiple classification benchmarks, and continue to hold even if the true labels of the data are replaced with random labels. The ramifications are significant - for data distributions in which the supports of different classes overlap, it may not be possible to calibrate modern models using standard techniques. We formalize this idea into a theoretical framework in Section 3, and prove in Section 4 that for a wide class of data distributions, models following the aforementioned phenomenon are asymptotically no better than random classifiers in terms of calibration even **after** we apply post-training calibration techniques. On the other hand, we also prove that by using a generalization of Mixup training (Zhang et al., 2017), we can achieve reasonably good calibration. The key takeaway from our theory is that _post-training calibration can fail to fix problems that are fixable by modifying the loss function_. ### Related Work **Calibration in deep learning.** The calibration of deep learning models has received significant attention in recent years, largely stemming from the work of Guo et al. (2017) which empirically showed that modern, overparameterized models can have poor predictive uncertainty. 
Follow-up works (Thulasidasan et al., 2019; Wen et al., 2021) have supported these findings, although the recent work of Minderer et al. (2021) showed that current state-of-the-art architectures can be better calibrated than the previous generation of models. From a theoretical standpoint, work on calibration in deep learning is still nascent. As mentioned previously, the works of Hein et al. (2019) and Meinke and Hein (2020) showed that ReLU networks can have high confidence predictions away from their training data, but this does not by itself imply poor calibration. In this work, we show that when models exhibit a more general high confidence phenomenon, they will be provably poorly calibrated with respect to a large class of distributions.

**Methods for improving calibration.** Many different methods have been proposed for improving calibration, including: logit rescaling (Guo et al., 2017), data augmentation (Thulasidasan et al., 2019; Muller et al., 2020), ensembling (Lakshminarayanan et al., 2017; Wen et al., 2020), and modified loss functions (Kumar et al., 2018; Wang et al., 2021). The logit rescaling methods, namely temperature scaling and its variants (Kull et al., 2019; Ding et al., 2020), constitute perhaps the most applied calibration techniques, since they can be used on any trained model with the introduction of only a few extra parameters (see Section 3.1). However, we show in this work that this kind of post-training calibration can be insufficient for some data distributions, which can in fact require data augmentation/modified loss functions to achieve good calibration. We focus particularly on Mixup (Zhang et al., 2017) data augmentation, whose theoretical benefits for calibration were recently studied by Zhang et al. (2021) in the context of linear models and Gaussian data. Our results provide a complementary perspective to this prior work, as we address a much broader class of models and a different class of data distributions.

## 2 A Uniform Confidence Phenomenon

In this section we consider experimentally how the confidence of models changes as we move away from training points. Given a classification training dataset \(\mathcal{X}\), we first downsample \(\mathcal{X}\) to consist of 5000 points (approximately 10% of the original size of the datasets we consider) due to computational constraints (detailed below). Then, for every point \((x_{i},y_{i})\in\mathcal{X}\), we sample points uniformly from the surface of spheres (neighborhoods) centered around \(x_{i}\) with varying radii and compute the mean probability with which a trained model predicts the class \(y_{i}\) over these sampled points. Crucially, _the radii for the neighborhoods considered at each \(x_{i}\) vary with \(x_{i}\)_. This is because different points may have very different distances to the decision boundary, so it does not make sense to compute model predictions at fixed distances from each data point. Instead, we define what we refer to as the **Other-Class Nearest Neighbor (OCNN)** distance, which for a point \((x_{i},y_{i})\) is the minimum distance from \(x_{i}\) to another point \(x_{j}\) such that the label \(y_{j}\neq y_{i}\). We sample batches of 500 points at different fixed proportions of the OCNN distance away from each \(x_{i}\), and then take the mean of the predicted probability for the class \(y_{i}\) over these batches. We then report the mean and variance of original class (i.e. \(y_{i}\)) probabilities for each fixed proportion of the OCNN distance over the entire dataset.
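A minimal sketch of this measurement procedure is given below, assuming the (downsampled) training inputs are flattened into an array `X` with integer labels `y`, and that `predict_proba` wraps the trained model's softmax outputs; the helper names and the naive quadratic-memory distance computation are illustrative choices, not our exact implementation.

```python
import numpy as np


def ocnn_distances(X, y):
    """Other-Class Nearest Neighbor distance for every point (naive O(N^2) memory)."""
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d2 = np.clip(d2, 0.0, None)                   # guard against float round-off
    d2[y[:, None] == y[None, :]] = np.inf         # ignore same-class points (and self)
    return np.sqrt(d2.min(axis=1))


def sample_sphere(x, radius, n_samples, rng):
    """Uniform samples from the sphere of the given radius centered at x."""
    v = rng.normal(size=(n_samples, x.shape[0]))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return x[None, :] + radius * v


def mean_original_class_confidence(X, y, predict_proba, frac=0.3, n_samples=500, seed=0):
    """Mean softmax probability of the original class at frac * OCNN distance."""
    rng = np.random.default_rng(seed)
    radii = ocnn_distances(X, y)
    confs = []
    for i in range(len(X)):
        pts = sample_sphere(X[i], frac * radii[i], n_samples, rng)
        confs.append(predict_proba(pts)[:, y[i]].mean())
    return float(np.mean(confs)), float(np.std(confs))
```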
A visualization of this procedure is provided in Figure 1. Note that because our procedure requires computing nearest neighbor distances and sampling points conditional on these distances, it is not easy to vectorize and its computational cost scales quadratically in the dataset size (when implemented naively). While more efficient approaches to this computation exist, we have opted for downsampling the data due to its simplicity and easy reproducibility. Based on the consistency of our main experimental results, as well as the larger-scale explorations in Appendix B.4, we anticipate no significant changes to our findings as data size is scaled. **Findings.** Our main evaluation results are shown in Figure 2, with all model/dataset details described in Section 2.1. We consider ten neighborhoods around each training point, corresponding to equally spaced OCNN distance proportions from 0 to 0.5 (with 0 indicating just the original point itself). We observe that the mean original class softmax prediction for **all models, over all datasets** remains almost **constant at 1** for a non-trivial radius around each training point. Additionally, we find that this radius of confidence (as a proportion of the OCNN distance) appears to be relatively consistent across models and datasets (around 0.2 to 0.3). Figure 1: Visualization of our experimental setup. To get a sense of how surprising this is, we point out that it is known in the adversarial attack literature that moving roughly 0.1 (Euclidean distance) away from a data point is enough to flip the prediction of state-of-the-art deep learning models from the correct class to another class on CIFAR (Carlini and Wagner, 2017). On the other hand, the _minimum_ OCNN distance over all of our datasets was on the order of 10. We interpret this to mean that even though adversarial directions can exist, on average it is possible to move a significant distance from a data point without affecting model predictions at all. The lack of variance in Figure 2 up to 0.2-0.3 OCNN distance implies that this phenomenon holds at virtually every training data point, and hence we refer to it as **uniform confidence**. The main concern with a model exhibiting uniform confidence behavior is that any test data point from a class \(y\) that falls in a confidence neighborhood of a training data point from a class \(s\neq y\) will not only be classified incorrectly, but classified incorrectly with very high confidence. As discussed in Section 1, this kind of model behavior can have significant consequences for critical use cases, so it is desirable to have a means of mitigating it. We will show in Section 4, however, that uniform confidence cannot be handled in certain cases by post-training calibration methods. Instead, we need to rely on preventing uniform confidence entirely by modifying the training process of the model. We focus on one particular training modification - Mixup data-augmentation (Zhang et al., 2017) - which simply requires training models on random convex combinations of the original data points and their labels (see Section 3.3 for precise details). Mixup has been shown to appropriately decrease model confidence (Thulasidasan et al., 2019), and we confirm these findings in our setting by repeating our experiments but training using Mixup with a uniform mixing distribution, with corresponding results shown in Figure 3. As can be seen from comparing Figures 2 and 3, training with Mixup clearly avoids the uniform confidence phenomenon. 
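For reference, a minimal sketch of a Mixup training step with a uniform mixing distribution (the setting used for Figure 3) is shown below in PyTorch; `model`, `optimizer`, and the surrounding training loop are placeholders rather than our exact training code.

```python
import torch
import torch.nn.functional as F


def mixup_step(model, optimizer, x, y):
    """One training step on a batch (x, y) with lambda ~ Uniform(0, 1)."""
    lam = float(torch.rand(1))
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]       # convex combination of inputs
    logits = model(x_mix)
    # Cross-entropy against both label sets, weighted by the mixing coefficient
    loss = lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```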
In the rest of this work, we focus on developing theory to explain why this happens and when the regularization introduced by Mixup is useful with respect to calibration. The key idea is that Mixup _constrains model behavior away from the training data_. **Further Experiments.** In addition to the experiments in this section, we also: explore how the logits of all of the models considered in Figures 2 and 3 behave (Appendix B.1), analyze the impact of training horizon on uniform confidence (Appendix B.2), evaluate confidence on test data (Appendix B.3), and verify consistency of our findings for larger datasets (Appendix B.4). Figure 2: Mean original class softmax outputs across various neighborhood sizes for different models trained with _empirical risk minimization_, as described in our experimental setup. The shaded regions around each curve represent one standard deviation. ### Dataset and Model Details **Datasets.** We consider the standard image classification benchmarks of CIFAR-10, CIFAR-100, and SVHN downsampled (uniformly at random) to consist of 5000 training data points. Additionally, we also consider a version of CIFAR-10 with entirely randomized labels ("CIFAR10_randomized" in Figures 2 and 3), as was done by Zhang et al. (2017), in order to examine whether datalabel relationships have any influence on our results. We preprocess all datasets to have zero mean and unit variance, and maintain the original input resolution (\(3\times 32\times 32\) size images). This ensures that 5000 data points are enough to avoid trivial linear separability of each dataset. **Models.** We train ResNet-18 (He et al., 2015), ResNeXt-50 (Xie et al., 2016), DenseNet (Huang et al., 2016), MLP-Mixer (Tolstikhin et al., 2021), and ConvNeXt (Liu et al., 2022) architectures on all of the aforementioned datasets. We use the popular PyTorch (Paszke et al., 2019) open source implementations (MIT license) of Kuang Liu and Phil Wang for ResNet-18 and MLP-Mixer respectively, and for MLP-Mixer we use a patch size of \(8\times 8\), a depth of 8, and a hidden layer dimension of 1024 (this is smaller than any model considered in the original paper). We use ResNeXt \(32\times 4\), DenseNet-121, and ConvNeXt-Tiny (with untrained weights) as provided in PyTorch. **Training Details.** All models were trained for 500 epochs using Adam (Kingma & Ba, 2015) with the standard hyperparameters of \(\beta_{1}=0.9,~{}\beta_{2}=0.999\), a learning rate of \(0.001\), and a batch size of 500 on a single A5000 GPU. We did not use any additional training heuristics such as Dropout (Srivastava et al., 2014). In preliminary experiments, we found that changes to optimizer/learning rate did not influence our results so long as the training horizon was sufficiently large for achieving interpolation (zero training error) on the training data. ## 3 Theoretical Preliminaries **Notation.** We use \([k]\) to denote \(\{1,2,...,k\}\) for a positive integer \(k\). We consider \(k\)-class classification and use \(\mathcal{X}\) to denote a dataset of \(N\) points \((x_{i},y_{i})\) sampled from a distribution \(\pi(X,Y)\) whose support \(\mathrm{supp}(\pi)\) is contained in \(\mathbb{R}^{d}\times[k]\). We use \(\pi_{X}\) and \(\pi_{Y}\) to denote the respective marginal distributions of \(\pi\), and use \(\pi_{y}\) to denote the conditional distribution \(\pi(X\mid Y=y)\). 
We use \(d(A,B)\) to denote the Euclidean distance between two sets \(A,B\subset\mathbb{R}^{d}\), \(d_{\mathrm{KL}}(\pi_{1},\pi_{2})\) to denote the KL divergence between two distributions \(\pi_{1}\) and \(\pi_{2}\), and \(\mu_{d}\) for the Lebesgue measure on \(\mathbb{R}^{d}\). For a function \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\), we use \(g^{i}\) to denote the \(i^{\text{th}}\) coordinate function of \(g\). Lastly, we use \(\phi(\cdot)\) to denote the softmax function, i.e. \(\phi^{i}(g(x))=\exp(g^{i}(x))/\sum_{j\in[k]}\exp(g^{j}(x))\). In everything that follows, we assume both \(N\) and \(k\) are sufficiently large, and that \(N=\Omega(\operatorname{poly}(k))\) for some large degree polynomial \(\operatorname{poly}(k)\).

Figure 3: A replication of the experiments from Figure 2, but training using _Mixup_ (uniform mixing distribution) instead of empirical risk minimization.

### Calibration

In training a model \(g\) on a dataset \(\mathcal{X}\), the goal is for \(\phi^{Y}(g(X))\) to recover the ground truth conditional distribution \(\pi(Y\mid X)\). However, this is usually unattainable when training on finite datasets. We may thus hope instead to satisfy the weaker property that the trained model \(g\) is _calibrated_, which can be formulated as the following regular conditional probability condition: \[\mathbb{P}(Y=y\mid\phi^{y}(g(X))=p_{y})=p_{y} \tag{3.1}\] Equation (3.1) captures the earlier mentioned intuition that, when our model predicts probability \(p_{y}\) for a class \(y\), the true probability for the class \(y\) (for those predicted instances) is also \(p_{y}\). It is straightforward to translate Equation (3.1) into a notion of miscalibration by considering the expectation of the absolute difference between the left and right-hand sides; this is the expected calibration error (ECE). Although natural, ECE suffers from several theoretical and empirical drawbacks (Blasiok et al., 2023), and we opt to instead work with the expected KL divergence \(\mathbb{E}_{X\sim\pi_{X}}[d_{\mathrm{KL}}(\pi(Y\mid X),\cdot)]\), as minimizing this implies that Equation (3.1) is satisfied (and importantly, we will consider theoretical settings in which we allow post-training calibration methods access to \(\pi\)).

With a notion of miscalibration in hand, we consider methods to improve the calibration of a trained model \(g\). One of the most popular (and simplest) approaches is temperature scaling (Guo et al., 2017), which consists of introducing a single parameter \(T\) that is used to scale the outputs of \(g\). The value of \(T\) is obtained by optimizing the negative log-likelihood on a calibration dataset \(\mathcal{X}_{\mathrm{cal}}\): \[T=\operatorname*{argmin}_{\hat{T}\in(0,\infty)}-\frac{1}{|\mathcal{X}_{\mathrm{cal}}|}\sum_{(x_{i},y_{i})\in\mathcal{X}_{\mathrm{cal}}}\log\phi^{y_{i}}(g(x_{i})/\hat{T}) \tag{3.2}\] For our results, we will in fact consider an even more powerful (and impractical) form of temperature scaling in which we allow access to the ground-truth distribution \(\pi\): \[T=\operatorname*{argmin}_{\hat{T}\in(0,\infty)}\mathbb{E}_{X\sim\pi_{X}}\left[d_{\mathrm{KL}}(\pi(Y\mid X),\phi^{Y}(g(X)/\hat{T}))\right] \tag{3.3}\] We will henceforth refer to an optimally temperature-scaled model with respect to Equation (3.3) as \(g_{T}\). We will show in Section 4 that even when we allow this "oracle" temperature scaling, we cannot hope to calibrate models \(g\) that exhibit the uniform confidence behavior discussed in Section 2.
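The optimization in Equation (3.2) is simple to carry out in practice; the following PyTorch sketch fits a single temperature on held-out calibration logits (the function and variable names are ours, and the logits are assumed to be detached from the network's computation graph).

```python
import torch
import torch.nn.functional as F


def fit_temperature(logits, labels, max_iter=100):
    """Minimize the calibration-set NLL over a single temperature T > 0.

    logits: (N, k) tensor of pre-softmax outputs g(x_i) on the calibration set.
    labels: (N,) tensor of integer labels y_i.
    """
    log_t = torch.zeros(1, requires_grad=True)    # parameterize T = exp(log_t) > 0
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()


# The temperature-scaled predictive distribution is then softmax(g(x) / T).
```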
### Empirical Risk Minimization In practice, models are often trained via empirical risk minimization (ERM). Due to the large number of parameters in modern models, training often leads to an _interpolator_ that is very confident on every training data point \((x_{i},y_{i})\in\mathcal{X}\), as we formalize below: **Definition 3.1**.: [ERM Interpolator] For a dataset \(\mathcal{X}\), we say that a model \(g\) is an ERM interpolator if for every \((x_{i},y_{i})\in\mathcal{X}\) there exists a universal constant \(C_{i}\) such that: \[\min_{s\neq y_{i}}g^{y_{i}}(x_{i})-g^{s}(x_{i})>\log k\quad\text{and}\quad \max_{r,s\neq y_{i}}g^{s}(x_{i})-g^{r}(x_{i})<C_{i} \tag{3.4}\] Equation (3.4) is slightly stronger than directly assuming \(\phi^{y_{i}}(g(x_{i}))\approx 1\); it is motivated by further empirical analysis in Appendix B.1, in which we show that the logits associated with incorrect classes on the training data for the models in Section 2 are near zero. We now also introduce a direct theoretical analogue to the uniform confidence behavior in Section 2. **Definition 3.2**.: [Uniform \(\gamma\)-Confidence] For a point \((x_{i},y_{i})\in\mathcal{X}\), letting \(L\) be a universal constant, we define: \[\mathcal{B}_{\gamma}(x_{i}) =\{x\in\mathbb{R}^{d}:\;\|x_{i}-x\|\leq\gamma\} \tag{3.5}\] \[\mathcal{G}_{\gamma}(x_{i}) =\{x\in\mathbb{R}^{d}:\;|g^{y_{i}}(x_{i})-g^{y_{i}}(x)|\leq L\gamma\} \tag{3.6}\] We say that a model \(g\) is uniformly \(\gamma\)-confident over a set \(U\) if there exists a class \(y\in[k]\) and \(\Theta(N/k)\) points \((x_{i},y)\in\mathcal{X}\) with \(x_{i}\in U\) such that \(\pi_{y}(X\in\mathcal{G}_{\gamma}(x_{i})\mid X\in\mathcal{B}_{\gamma}(x_{i})) \geq 1-O(1/k)\). Basically, Definition 3.2 codifies the idea that the model logit \(g^{y_{i}}\) does not change much in a small enough neighborhood of each \(x_{i}\) in a set \(U\), with high probability. We will show in Theorem 4.5 that satisfying Definitions 3.1 and 3.2 is sufficient for poor calibration for a wide class of data distributions, even when using temperature scaling with access to the ground truth distribution oracle. ### Mixup In contrast, we can show that if we consider models minimizing a Mixup-like training objective instead of the usual negative log-likelihood in Equation (3.2), we can avoid uniform confidence. Let \(\mathcal{D}_{\lambda}\) denote a continuous distribution supported on \([0,1]\) and let \(z_{i,j}(\lambda)=\lambda x_{i}+(1-\lambda)x_{j}\) (using \(z_{i,j}\) when \(\lambda\) is clear from context) where \((x_{i},y_{i}),\ (x_{j},y_{j})\in\mathcal{X}\). Then we may define the empirical Mixup cross-entropy \(J_{\mathrm{mix}}(g,\mathcal{X},\mathcal{D}_{\lambda})\) as: \[J_{\mathrm{mix}}(g,\mathcal{X},\mathcal{D}_{\lambda})=-\frac{1}{N^{2}}\sum_{i \in[N]}\sum_{j\in[N]}\mathbb{E}_{\lambda\sim\mathcal{D}_{\lambda}}\left[ \lambda\log\phi^{y_{i}}(g(z_{i,j}))+(1-\lambda)\log\phi^{y_{j}}(g(z_{i,j}))\right] \tag{3.7}\] Essentially, minimizing Equation (3.7) forces a model to linearly interpolate between its predictions \(\phi^{y_{i}}(g(x_{i}))\) and \(\phi^{y_{j}}(g(x_{j}))\) over the line segment connecting the points \(x_{i}\) and \(x_{j}\). This already provides some intuition for why Mixup-optimal models will fail to satisfy a property like uniform \(\gamma\)-confidence from Definition 3.2: _their behavior is constrained away from the training data_. 
However, the line segment constraints of \(J_{mix}(g,\mathcal{X},\mathcal{D}_{\lambda})\) will not be enough to make this intuition rigorous when the data is in \(\mathbb{R}^{d}\) with \(d>1\), since in this case line segments are measure zero sets with respect to \(\mu_{d}\). We will thus augment Mixup to work with convex combinations of \(d+1\) points as opposed to two, and refer to this new objective as \(d\)**-Mixup**. In generalizing from Mixup to \(d\)-Mixup, it is helpful from a theoretical standpoint to constrain the set of allowed mixings \(\mathcal{M}_{d}(\mathcal{X})\subset[N]^{d+1}\). We will consider only mixing points at most some constant distance away from one another, and we will also preclude mixing points that are too highly correlated.1 The precise definition of \(\mathcal{M}_{d}(\mathcal{X})\) can be found in Definition A.1 of Appendix A.2; we omit it here due to its technical nature. Footnote 1: This means we do not mix points with themselves in \(d\)-Mixup; however, when \(\pi_{X}\) has a density, this makes little difference since we can mix in a neighborhood of any point. Now let \(\mathcal{D}_{\lambda,d}\) denote a continuous distribution supported on the \(d\)-dimensional probability simplex \(\Delta^{d}\subset\mathbb{R}^{d+1}\). Defining \(z_{\sigma}(\lambda)=\sum_{j\in[d+1]}\lambda_{j}x_{\sigma_{j}}\) for \(\lambda\in\mathrm{supp}(\mathcal{D}_{\lambda,d})\) and \(\sigma\in\mathcal{M}_{d}(\mathcal{X})\), we can define the empirical \(d\)-Mixup cross-entropy \(J_{\mathrm{mix},d}(g,\mathcal{X},\mathcal{D}_{\lambda,d})\): \[J_{\mathrm{mix},d}(g,\mathcal{X},\mathcal{D}_{\lambda,d})=-\frac{1}{|\mathcal{ M}_{d}(\mathcal{X})|}\sum_{\sigma\in\mathcal{M}_{d}(\mathcal{X})}\mathbb{E}_{ \lambda\sim\mathcal{D}_{\lambda,d}}\left[\sum_{j\in[d+1]}\lambda_{j}\log\phi^{ y_{\sigma_{j}}}(g(z_{\sigma}(\lambda)))\right] \tag{3.8}\] We will henceforth use \(\mathcal{X}_{\mathrm{mix},d}\) to denote the set of all \(z_{\sigma}\). The main benefit of introducing the set \(\mathcal{M}_{d}(\mathcal{X})\) instead of just generalizing Equation (3.7) to mixing over \([N]^{d+1}\) is that it allows us to use a reparameterization trick with which we can characterize the \(d\)-Mixup optimal prediction at every mixed point \(z_{\sigma}\). We state only an informal version of this result below and defer a formal statement and proof to Appendix A.2. **Lemma 3.3**.: [Informal Optimality Lemma] Every \(g^{*}\in\operatorname*{arginf}_{g}J_{\mathrm{mix},d}(g,\mathcal{X},\mathcal{D} _{\lambda,d})\) (where the \(\operatorname*{arginf}\) is over all extended \(\mathbb{R}^{d}\)-valued functions) satisfies \(\phi^{y}(g^{*}(z))=\alpha_{y}(z)/\sum_{s\in[k]}\alpha_{s}(z)\) for almost every \(z_{\sigma}\in\mathcal{X}_{\mathrm{mix},d}\), where \(\alpha_{y}(z)\) corresponds to the expected weight of class \(y\) points over all mixing sets \(\sigma\in\mathcal{M}_{d}(\mathcal{X})\) from which we can obtain \(z\). We note that this lemma is analogous to Lemma 2.3 in the work of Chidambaram et al. (2021), but avoids restrictions on the function class being considered and is non-asymptotic. Since we can characterize optimal predictions over \(\mathcal{X}_{\mathrm{mix},d}\), we can define \(d\)-Mixup interpolators as follows. 
**Definition 3.4**.: [\(d\)-Mixup Interpolator] For a dataset \(\mathcal{X}\), we say that \(g\) is a \(d\)-Mixup interpolator if \(\phi^{y}(g(z))=\phi^{y}(g^{*}(z))\pm O(1/k)\) for almost every \(z\in\mathcal{X}_{\mathrm{mix},d}\) and \(y\in[k]\), with \(g^{*}\in\operatorname*{arginf}_{g}J_{\mathrm{mix},d}(g,\mathcal{X},\mathcal{D} _{\lambda,d})\). In Theorem 4.6, we will show that \(d\)-Mixup interpolators can achieve good calibration on a subclass of distributions for which ERM interpolators perform poorly. **Remark 3.5**.: In practice it is unreasonable to mix \(d+1\) points when \(d\) is large. However, we conjecture that due to the regularity of practical models (i.e. neural networks), even mixing two points as in traditional Mixup is sufficient for achieving neighborhood constraints like those induced by \(d\)-Mixup, hence the results of Figure 3. We introduce \(d\)-Mixup because we make no such regularity assumptions on the models in our theory. ## 4 Theoretical Implications of the Uniform Confidence Phenomenon In this section, we show that even for simple data distributions, uniform confidence can prevent post-training calibration methods (temperature scaling) from producing well-calibrated models, while modifications in the training process (Mixup) can potentially address this issue. Prior to proving our main results, we begin first with a 1-dimensional example that contains the key ideas of our analysis. The full proofs of all results in this section can be found in Appendix A. ### Warm-Up: A Simple 1-D Example **Definition 4.1**.: [Overlapping Intervals] Let \(\tau(y)\) denote the parity of a nonnegative integer \(y\) and let \(\beta_{y}=\lfloor(y-1)/2\rfloor k+\tau(y-1)\) for \(y\in[k]\). Then we define \(\pi(X,Y)\) to be the distribution on \(\mathbb{R}\times[k]\) such that \(\pi_{Y}\) is uniform over \([k]\) and \(\pi(X\mid Y=y)\) is uniform over \([\beta_{y},\beta_{y}+2]\). Definition 4.1 corresponds to a distribution in which consecutive class-conditional densities are supported on overlapping intervals of length 2 (starting from 0) with a spacing of \(k\) between each pair of classes (see Figure 4). The idea is that ERM interpolators for \(\mathcal{X}\) that are uniformly confident will be poorly calibrated in each overlapping region in the support of \(\pi_{X}\). This is made precise in the following proposition. **Proposition 4.2**.: Let \(\mathcal{X}\) consist of \(N\) i.i.d. draws from the distribution \(\pi\) specified in Definition 4.1. Then with probability at least \(1-k\exp(-\Omega(N/k))\) over the randomness of \(\mathcal{X}\), the set \(\mathcal{S}\) of all models \(g\) that are ERM interpolators for \(\mathcal{X}\) and uniformly \(k/(2N)\)-confident over each overlapping region in \(\operatorname{supp}(\pi_{X})\) is non-empty (in fact, uncountable). Furthermore, the predictive distribution \(\hat{\pi}_{T}(Y\mid X)=\phi^{Y}(g_{T}(X))\) of the optimally temperature-scaled model \(g_{T}\) for any \(g\in\mathcal{S}\) satisfies: \[\mathbb{E}_{X\sim\pi_{X}}\left[d_{\mathrm{KL}}(\pi(Y\mid X),\hat{\pi}_{T}(Y \mid X))\right]\geq\Theta(\log k) \tag{4.1}\] In other words, even with temperature scaling every \(g\in\mathcal{S}\) is asymptotically no better than random. **Proof Sketch.** We can show that \(\mathcal{S}\) is non-trivial using Chernoff bound arguments, and then use uniform confidence to show that there is a significant fraction of \(\operatorname{supp}(\pi_{X})\) on which every \(g\in\mathcal{S}\) predicts incorrect probabilities. 
The key idea is then that temperature scaling will only improve incorrect predictions for ERM interpolators to uniformly random (i.e. \(1/k\)), whereas the correct prediction in an overlapping region is \(1/2\) for each of the overlapping classes. On the other hand, for \(d\)-Mixup, each point in the overlapping regions can be obtained as a mixture of points from the overlapping classes, so we will have non-trivial probabilities for both classes. **Proposition 4.3**.: Let \(\mathcal{X}\) be as in Proposition 4.2 and \(p(k)\) denote a polynomial in \(k\) of degree at least one. Then taking \(\mathcal{D}_{\lambda,1}\) to be uniform, _every_\(1\)-Mixup interpolator \(g\) for \(\mathcal{X}\) with the property that \(\phi^{y}(g(x))\leq 1-\Omega(1/p(k))\) for every \(x\in\operatorname{supp}(\pi_{X})\setminus\mathcal{X}_{\operatorname{mix},1}\) and \(y\in[k]\) satisfies with probability at least \(1-k^{2}\exp\bigl{(}-\Omega(N/k^{2})\bigr{)}\): \[\mathbb{E}_{X\sim\pi_{X}}\left[d_{\mathrm{KL}}(\pi(Y\mid X),\hat{\pi}(Y\mid X ))\right]\leq\Theta(1) \tag{4.2}\] Figure 4: Visualization of Definition 4.1 for the case \(k=4\). **Proof Sketch.** We can show with high probability that \(\mathcal{X}_{\mathrm{mix},1}\) covers most of \(\mathrm{supp}(\pi_{X})\) uniformly, and then we can use Lemma 3.3 to precisely characterize the \(1\)-Mixup predictions over \(\mathcal{X}_{\mathrm{mix},1}\). We needed to add the (relatively weak) stipulation that \(\phi^{y}(g(x))\leq 1-\Omega(1/\mathrm{poly}(k))\) in the statement above, since we cannot hope to prove an upper bound if \(g\) is allowed to behave arbitrarily on \(\mathrm{supp}(\pi_{X})\setminus\mathcal{X}_{\mathrm{mix},1}\). ### Generalizing to Higher Dimensions By extending the idea of overlapping regions in \(\mathrm{supp}(\pi_{X})\) from our 1-D example, we can generalize the failure of ERM interpolators to higher-dimensional distributions. **Definition 4.4**.: [General Data Distribution] We define \(\pi\) to be any distribution whose support is contained in \(\mathbb{R}^{d}\times[k]\) satisfying the following constraints: 1. (Classes are roughly balanced) \(\pi_{Y}(Y=y)=\Theta(1/k)\). 2. (Constant class overlaps) Letting \(M\) denote a nonnegative integer constant, there exist \(\Theta(k)\) classes \(y\) for which there are classes \(s_{1}(y),s_{2}(y),...,s_{m}(y)\) for some \(1\leq m<M\) with \(\pi_{X}(\mathrm{supp}(\pi_{y})\cap\mathrm{supp}(\pi_{s_{i}(y)}))\geq C\) for a universal constant \(C>0\), and all other \(s^{\prime}\in[k]\) satisfy \(\pi_{X}(\mathrm{supp}(\pi_{y})\cap\mathrm{supp}(\pi_{s^{\prime}}))=0\). 3. (Overlap density is proportional to measure) \(\pi_{y}(X\in A)=\Theta(\mu_{d}(A))\) and \(\pi_{s_{i}(y)}(X\in A)=\Theta(\mu_{d}(A))\) for every \(A\subseteq\mathrm{supp}(\pi_{y})\cap\mathrm{supp}(\pi_{s_{i}(y)})\). Definition 4.4 is quite broad in that we make no assumptions on the behavior of the class-conditional densities outside of the overlapping regions. We now generalize Proposition 4.2. **Theorem 4.5**.: Let \(\mathcal{X}\) consist of \(N\) i.i.d. draws from any distribution \(\pi\) satisfying Definition 4.4, and let \(r\in\mathbb{R}\) be such that the sphere with radius \(r\) in \(\mathbb{R}^{d}\) has volume \(k/(MN)\). Then the result of Proposition 4.2 still holds for the set \(\mathcal{S}_{d}\) of ERM interpolators for \(\mathcal{X}\) which are uniformly \(r\)-confident over each overlapping region in \(\mathrm{supp}(\pi_{X})\). To generalize Proposition 4.3, however, we need further restrictions on \(\pi\). 
Mainly, we need to have significant spacing between non-overlapping classes (as in Definition 4.1), and we need to restrict the class-conditional densities such that mixings in \(\mathcal{X}_{\mathrm{mix},d}\) are not too skewed towards a small subset of classes. The precise formulation of this assumption can be found in Appendix A.2. **Theorem 4.6**.: Let \(\mathcal{X}\) consist of \(N\) i.i.d. draws from any distribution \(\pi\) satisfying Definition 4.4 and Assumption A.3, and let \(p(k)\) be as in Proposition 4.3. Then the result of Proposition 4.3 still holds when considering \(d\)-Mixup interpolators \(g\) for \(\mathcal{X}\) where the mixing distribution \(\mathcal{D}_{\lambda,d}\) is uniform over the \(d\)-dimensional probability simplex. ## 5 Limitations and Discussion **Limitations.** Perhaps the main empirical limitation of our work is the reliance on subsampling due to computational constraints in the experiments of Section 2, and it may be interesting in future work to explore properties of the OCNN distance for larger-scale datasets. However, as mentioned in Section 2, our further experiments in Appendix B suggest that our findings should generalize. On the theoretical side, the main limitation is technicalities introduced due to \(d\)-Mixup. Namely, it becomes very difficult to reason about what the set of \(d\)-Mixup points \(\mathcal{X}_{\mathrm{mix},d}\) looks like in high dimensions, thereby forcing parts of our theory to directly assume some high probability behavior of \(\mathcal{X}_{\mathrm{mix},d}\) (justified by the simple example in Definition 4.1). **Discussion.** The key findings of our work are that trained neural networks can exhibit large regions of near-certain confidence around their training points, and that when this behavior occurs we cannot hope to fix calibration for data distributions with class overlaps using temperature-scaling-type methods. One clear direction suggested by our work is the idea of developing better _neighborhood constraints_ around training points; we study the constraints introduced by Mixup, but we anticipate there are likely better alternatives for improving calibration. For broader impacts, the main concern introduced by our work is that critical use cases (i.e. medical applications) may be significantly impacted by the uniform confidence phenomenon of Section 2 - we hope that this initial work instigates further studies into how models can be improved for such use cases. ## Acknowledgements Rong Ge and Muthu Chidambaram are supported by NSF Award DMS-2031849, CCF-1845171 (CAREER), CCF-1934964 (Tripods), and a Sloan Research Fellowship. Muthu would like to thank Kai Xu for helpful discussions during the early stages of this project.
2306.16122
Semantic Positive Pairs for Enhancing Visual Representation Learning of Instance Discrimination methods
Self-supervised learning algorithms (SSL) based on instance discrimination have shown promising results, performing competitively or even outperforming supervised learning counterparts in some downstream tasks. Such approaches employ data augmentation to create two views of the same instance (i.e., positive pairs) and encourage the model to learn good representations by attracting these views closer in the embedding space without collapsing to the trivial solution. However, data augmentation is limited in representing positive pairs, and the repulsion process between the instances during contrastive learning may discard important features for instances that have similar categories. To address this issue, we propose an approach to identify those images with similar semantic content and treat them as positive instances, thereby reducing the chance of discarding important features during representation learning and increasing the richness of the latent representation. Our approach is generic and could work with any self-supervised instance discrimination frameworks such as MoCo and SimSiam. To evaluate our method, we run experiments on three benchmark datasets: ImageNet, STL-10 and CIFAR-10 with different instance discrimination SSL approaches. The experimental results show that our approach consistently outperforms the baseline methods across all three datasets; for instance, we improve upon the vanilla MoCo-v2 by 4.1% on ImageNet under a linear evaluation protocol over 800 epochs. We also report results on semi-supervised learning, transfer learning on downstream tasks, and object detection.
Mohammad Alkhalefi, Georgios Leontidis, Mingjun Zhong
2023-06-28T11:47:08Z
http://arxiv.org/abs/2306.16122v2
# Semantic Positive Pairs for Enhancing Contrastive Instance Discrimination ###### Abstract Self-supervised learning algorithms based on instance discrimination effectively prevent representation collapse and produce promising results in representation learning. However, the process of attracting positive pairs (i.e., two views of the same instance) in the embedding space and repelling all other instances (i.e., negative pairs) irrespective of their categories could result in discarding important features. To address this issue, we propose an approach to identifying those images with similar semantic content and treating them as positive instances, named the semantic positive pairs set (SPPS), thereby reducing the risk of discarding important features during representation learning. Our approach could work with any contrastive instance discrimination framework such as SimCLR or MoCo. We conduct experiments on three datasets: ImageNet, STL-10 and CIFAR-10 to evaluate our approach. The experimental results show that our approach consistently outperforms the baseline method, vanilla SimCLR, across all three datasets; for example, our approach improves upon vanilla SimCLR under the linear evaluation protocol by \(4.18\%\) on ImageNet with a batch size of 1024 and 800 epochs. contrastive instance discrimination, self-supervised learning, representation learning ## 1 Introduction In supervised learning, models are trained with both input data \(X\) and their corresponding semantic labels/classes \(Y\). It is rather common for each class to have several hundreds of instances available for training, which enables the model to extract the important features and create useful representations for the given samples [1]. This type of training has been proven to perform well in various domains whenever data are available in abundance. In practice, that is often not the case, given that data annotation is laborious and expensive. Recently, contrastive instance discrimination (CID) was proposed, which is a kind of self-supervised learning (SSL) approach [2, 3, 4, 5]. CID reduces the reliance on large annotated datasets during training, while providing promising performance, close to or even better than supervised learning in some downstream tasks [3, 6, 7, 5, 8, 9]. Very popular CID approaches, such as SimCLR and MoCo [4, 3], learn image representations by employing data augmentations. The idea of CID is to consider each image as a class of its own; therefore, all the images in the dataset are randomly transformed so that the model becomes invariant to the augmentations and avoids learning trivial features [10, 11, 3, 5]. Next, the objective function encourages the model to bring the embeddings of the positive pair (two views of the same image) closer in the embedding space while repelling the embeddings of the negative pairs (other images in the batch) to avoid representation collapse (where the backbone encoder produces the same embedding for all the images in the dataset). Although CID approaches are capable of avoiding representation collapse, the repulsion between images in the embedding space regardless of their semantic content causes some semantic features that are common to instances of the same category to be discarded during training [8]. For example, Figure 1 shows two images that have similar semantic content (aeroplanes).
During CID, such images are repelled in the embedding space because the CID objective only attracts the positive pair and pushes apart all the other images, even though they have similar semantic content. This is a major limitation which requires attention, as the performance of downstream tasks depends on high-quality representation learning during self-supervised pre-training [12; 13; 14; 15; 16; 17; 18]. Two approaches, Nearest-Neighbor Contrastive Learning of Visual Representations (NNCLR) [9] and False Negative Cancellation (FNC) [8], introduce solutions that find semantic positive pairs to improve representation learning. NNCLR uses a support set (Q) to keep a representation of the dataset during model training. The model learns the visual representation by creating a representation for the two views of the input instance (\(z_{i}\), \(z_{i}^{+}\)), and then the nearest neighbour of the first view (\(z_{i}\)) is found in the support set, S = NN(\(z_{i}\), Q). After that, (S) is treated as a semantic positive pair for the second view of the instance (\(z_{i}^{+}\)), regardless of the semantic content of (S), which may belong to a category different from that of (\(z_{i}^{+}\)). FNC creates more than two views for each instance [\(z_{i}\), \(z_{i}^{+}\), \(z_{i}^{1}\), \(z_{i}^{2}\)], where (\(z_{i}\), \(z_{i}^{+}\)) are the positive pair and (\(z_{i}^{1}\), \(z_{i}^{2}\)) are support views for the instance. Next, potential semantic pairs are found between the support views and the negative examples in the batch by computing the similarity between them. Finally, the found semantic pairs are treated as positive pairs during model training. Finding images that have similar content and treating them as positive pairs increases the data diversity and improves representation learning by creating representations that are invariant to nuisances [10; 9; 8; 19]. On the contrary, when we provide the model with wrong semantic pairs and encourage it to learn the mutual information between the two views, representation learning is degraded [10]. Therefore, we need to improve the method of choosing the semantic pairs before training the model on them, which increases the quality of representation learning and the model performance. The aforementioned approaches (NNCLR and FNC) use a model that is still being trained (a non-pretrained model) with augmented images to find the semantic pairs, which may lead to sampling wrong semantic pairs. For example, two instances belonging to the same class may be treated as different instances because they look different after augmentation. Also, images from different classes may be treated as similar after augmentations such as cropping and resizing. Figure 2 shows an empirical example illustrating that the correct semantic positive pairs are not guaranteed to be obtained if we use a non-pretrained model with augmented images to find the instances that have similar classes. During training, the model needs several epochs before it can produce similar embedding vectors for images that have similar content, especially with augmented images [9]. Thus, the model may be pre-trained on wrong semantic pairs before it is able to produce similar embedding vectors for images with similar content, which reduces model performance and slows model convergence.
To obtain more accurate semantic positive pairs and improve the representation learning of the model, we should find the correct semantic positive pairs before applying the data augmentation and before starting to train the model. If we find the semantic positive pairs by using the original dataset and a pre-trained model, we solve two issues: 1) we increase the richness of the latent space, so the model can learn from the mutual information between instances belonging to the same class rather than from a single instance; 2) we avoid the limitation of finding inaccurate semantic pairs with a non-pretrained model and augmented images. So, the question is: how can we pre-train CID models on semantic positive pairs obtained from the original dataset without labelling? Figure 1: Example of an instance discrimination task where positive pairs are attracted together and negative pairs are pushed apart, even if they have similar semantic content. In this paper, we introduce a new approach that finds images with similar semantic content in the original dataset by using a pre-trained model and treats them as positive pairs during the training of CID models. This approach increases the diversity of the training data and provides more semantic variation than that provided by data augmentation, which subsequently improves the representation learning quality [9; 8; 19]. In summary, our contributions are as follows: * We introduce a novel and simple approach to finding the images that have similar semantic content in the dataset without labelling or clustering. * We show that our approach significantly boosts the performance of state-of-the-art (SOTA) contrastive instance discrimination algorithms, specifically SimCLR, under different epochs, batch sizes, and datasets. * We show that our approach does not require any algorithmic changes in the underlying methods; therefore it can work with any contrastive instance discrimination method. * We show that our approach does not require a support set, unlike other approaches [9; 8], so it avoids the computational overhead of auxiliary memory during pre-training. ## 2 Related Work There are multiple self-supervised learning (SSL) approaches, each of which has a different methodology for representation learning and for avoiding representation collapse. In this section, we provide a brief overview of these approaches. **Clustering-Based Methods.** In this approach, the samples that have similar features are assigned to the same cluster. Thus, discrimination in this case is based on groups of images rather than individual instances [20; 1; 21; 22]. DeepCluster [21] obtains the pseudo-labels from the previous iteration, which makes it computationally expensive and hard to scale up. SWAV [20] solved this issue by using online clustering, but it needs to determine the correct number of prototypes; otherwise, the performance will be affected. **Distillation Methods.** BYOL [23] and SimSiam [6] use techniques inspired by knowledge distillation, where a Siamese network has an online encoder and a target encoder. The target network parameters are not updated during backpropagation. Instead, the online network parameters are updated while it is encouraged to predict the representation of the target network. Although these methods have provided promising results, it is not fully understood how they avoid collapse.
Bag-of-visual-words methods [24; 25] also use a teacher-student scheme, but they include an approach inspired by natural language processing (NLP) to avoid representation collapse. The student network is encouraged to predict a histogram of features for the augmented images similar to the teacher network's histogram. **Information Maximization.** Barlow Twins [26] and VICReg [27] do not require negative examples, stop gradients or clustering. Instead, they use regularization to avoid representation collapse. The objective function of these methods is to reduce the redundant information in the embeddings by making the correlation matrix of the embedding vectors closer to the identity matrix. Though these methods provide promising results, they have some limitations, such as the representation learning being sensitive to regularization. The effectiveness of these methods is also reduced if certain statistical properties are not present in the data. **Contrastive Learning.** Instance discrimination methods, such as SimCLR, MoCo, and PIRL [3; 2; 4; 5], employ a similar idea. They attract the positive pair together and push the negative pairs apart in the embedding space, albeit through different mechanisms. SimCLR [3] uses an end-to-end mechanism where a large batch size is used for the negative examples and both encoders' parameters in the Siamese network are updated together. PIRL [5] uses a memory bank for negative examples, and both encoders' parameters are updated together. MoCo [4; 2] uses a momentum contrast approach whereby the query encoder is updated during backpropagation and the key encoder is updated as a momentum-based moving average of the query encoder. The negative examples are kept in a dictionary separate from the mini-batch, which enables a large pool of negatives without requiring large batch sizes or large GPUs. **Enhanced Contrastive Learning.** CID mechanisms focus on the importance of the negative examples and find different ways to sample negative examples regardless of their content, which may cause undesired behaviour between images with similar semantic content. Some studies have focused on improving the quality of the negative examples, which in turn improves representation learning. Kalantidis et al. [28] and Robinson et al. [29] focused on the hard negative samples around the positive anchor, while [30] introduces a percentile range for negative sampling. Another approach, introduced by Chuang et al. [7], assigns weights to positive and negative terms to reduce the effects of undesirable negatives. Other methods, by Dwibedi et al. [9] and Huynh et al. [8], use similarity metrics to identify the images that have similar semantic content and treat them as positive pairs during model training. These approaches, FNC [8] and NNCLR [9], provide a way to identify different instances of the same category and treat them as positive pairs. However, these approaches have drawbacks in computing the similarity between the images, which may lead to obtaining incorrect semantic positive pairs: 1. They use a model that is still under training (not yet converged) to represent the images before measuring the similarity between them. 2. They compute the similarity between transformed images, not the raw images in the original dataset. 3. These methods use a support set containing positive pairs for the anchor, which adds computational overhead. Relying on a non-pretrained model with augmented images to find semantic positive pairs, as in [9; 8], may lead to inaccurate results.
To demonstrate this, Figure 2 shows an empirical example of the inaccurate similarity scores we obtained when using a non-pretrained model and randomly augmented images to determine which instances belong to the same class in the dataset. The similarity scores show that the horse and deer are more similar to the anchor (car) than car 1 and car 2, and also that the train is more similar to the anchor than car 2, which is not correct. **Semi-Supervised Learning.** The advantage of training a model on different instances of the same category to improve representation learning is also exploited by semi-supervised approaches [31; 32]. Bosnjak et al. [31] present an approach to train the model on different semantic positive pairs by leveraging a small amount of labelled data. In their approach, they use labelled data and k-nearest neighbours to provide a pseudo-label for each unlabelled datapoint based on the most frequent class among its labelled neighbours. Following that, datapoints that share the same pseudo-label are treated as semantic positive pairs in a contrastive learning setting. Such methods still require labelled data during training to provide semantic positive pairs. Our proposed pre-processing method provides a different way of approaching this, as demonstrated below across several datasets and ablations. We use a model pre-trained by SSL and work on the original dataset rather than augmented images to determine semantic positive pairs. In addition, our method does not require labelled data, a specialised architecture, or a support set to hold the semantic positive pairs, which allows it to be used as a pre-processing step with any SSL CID method. ## 3 Methodology In this section, we propose an approach to enhance contrastive instance discrimination methods by using semantic positive pairs. \[\ell_{i,j}=-\log\frac{\exp(\mathrm{sim}(z_{i},z_{j})/\tau)}{\sum_{k=1}^{2N}\mathbb{1}_{[k\neq i]}\exp(\mathrm{sim}(z_{i},z_{k})/\tau)} \tag{1}\] Contrastive instance discrimination methods use random transformations with a contrastive objective function (Equation 1) to learn image representations by increasing the agreement between the positive pair (\(z_{i}\), \(z_{j}\)) while reducing the similarity with all the negative examples (\(z_{k}\)) in the batch [4; 3]. Our approach applies a pre-processing step before the data augmentation step to find semantic positive pairs in the dataset, using a pre-trained model and a similarity metric such as cosine similarity. After finding the semantic positive pairs, we treat them as positive pairs when training CID models. Thus, the backbone model (the CID model) is pre-trained on both semantic positive pairs (images that have similar content) and positive pairs (two views of the same instance). To find the semantic positive pairs, we select several images from the original dataset and then pair together the images that have similar semantic content. Next, a random transformation is applied to both the semantic positive pairs and the original images in the dataset (see Figure 3). Our method has two steps which will be described in the following sub-sections: the first step finds semantic positive pairs, and the second step combines the semantic positive pairs with the dataset. We refer to our approach as SePP-CID, which stands for "Semantic Positive Pairs for enhancing Contrastive Instance Discrimination". Algorithm 1 shows how the proposed method is implemented.
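For reference, the following is a minimal PyTorch-style sketch (not the paper's code) of the contrastive objective in Equation (1), assuming `z1` and `z2` hold the projected embeddings of the two views of each image in a batch.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive loss of Equation (1) for a batch of positive pairs (z1[i], z2[i])."""
    z = torch.cat([z1, z2], dim=0)                      # 2N x D
    z = F.normalize(z, dim=1)                           # cosine similarity via dot products
    sim = torch.mm(z, z.t()) / temperature              # 2N x 2N similarity matrix
    n = z1.size(0)
    # Index of the positive view for each row: i <-> i + N.
    pos_idx = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    # Exclude self-similarity from the denominator.
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))
    # Cross-entropy with the positive index as target gives -log softmax at the positive.
    return F.cross_entropy(sim, pos_idx)
```

Treating a semantic positive pair exactly like an augmented pair, as proposed above, amounts to feeding such pairs through this same loss.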
Figure 2: Similarity scores are shown for an anchor (car) with instances from other classes, using non-pre-trained models and randomly augmented images. ### Semantic Positive Pairs We use a model pre-trained by an SSL approach, together with cosine similarity, to find the semantic positive pairs. The pre-trained model is used to represent the dataset images as embedding vectors, and cosine similarity is used to compute the similarity between images in the dataset. The idea of the SePP-CID approach is to create a list containing \(K\) embedding vectors of images in the dataset, where \(K\) is a constant defining the number of images involved in the pre-processing. We treat each embedding vector in the list as an anchor and aim to find similar vectors (semantic positive pairs) for each anchor using cosine similarity, while excluding identical vectors (the same image). As shown in Figure 4, a list of images is encoded by a pre-trained model, and then the embedding vectors are duplicated into two lists A and B. After that, we use cosine similarity to find semantic positive pairs for each instance (anchor) in list B from list A. Each anchor in list B may pair with more than one instance from list A, which allows the model to capture important features for the anchor from different instances belonging to the same category. Finally, a new dataset is created containing tuples of semantic positive pairs. To ensure that the new dataset (a subset of the original dataset) most likely contains correct semantic positive pairs, we use two thresholds for the similarity score: a) a maximum threshold (0.99) to avoid having two identical images as a semantic positive pair, and b) a minimum threshold (0.96) to avoid pairing images that have different semantic content. Using thresholds together with a pre-trained model and the original dataset to find the semantic positive pairs is the key difference from the previous approaches FNC and NNCLR. Thus, only the pairs that comply with the two threshold conditions (max and min) are chosen as semantic positive pairs. For example, suppose the K-size (the number of images involved in the pre-processing) is equal to 10 images. All 10 images are encoded by an SSL model such as SWAV [20], and then the similarity between these images is computed. If two images satisfy the threshold conditions, they are added to the semantic positive pairs subset; otherwise we may obtain zero semantic positive pairs (as shown in Table 5, for K-size = 76800 we obtain only 5306 semantic positive pairs). In our approach, we impose additional constraints (using a pre-trained model, using the images of the original dataset, and the two thresholds "max" and "min") because we prefer to pre-train the model on zero semantic positive pairs rather than on wrong semantic positive pairs, which would degrade representation learning. Figure 3: Entire proposed methodology, where a number of images equal to (k) are picked from the dataset and encoded by a pre-trained model. Next, cosine similarity is used to find the semantic positive pairs for each anchor, and then data transformations are applied to both the dataset and the semantic positive pairs. Eventually, all the images are combined into one dataset, which is used to train a contrastive instance discrimination model. Figure 4: shows the first step in the methodology (finding semantic positive pairs). Our approach is different from the previous works (FNC and NNCLR), which mapped each
anchor in the batch to the most similar image in the support set as a semantic pair, regardless of the content of the images, which may reduce the quality of representation learning. ### Combine and Transform After creating the semantic positive pairs subset (SPPS), composite data augmentations akin to the ones used with SimCLR [3] are applied to both datasets (original and SPPS) to prevent the model from learning trivial features. In the original dataset, a copy of each instance is created, and then a random transformation is applied to each copy. For the SPPS we do not need to create copies of the instances because we already have pairs; thus we only need to apply the random transformations. **Algorithm 1** SePP-CID Approach ```
Input: dataset samples, constant K, pre-trained encoder f
ImageList = []
for k = 1 to K do
    image = tensor(x_k)                      # convert K images from the dataset to tensors
    emb_vector = f(image)                    # encode images with the pre-trained model
    norm_vector = Normalize(emb_vector)      # l2 normalisation
    ImageList.append(norm_vector)
end for
ImageList2 = ImageList                       # both lists hold the same embedding vectors
Max = 0.99, Min = 0.96                       # define thresholds
Sim = torch.mm(ImageList, ImageList2.T)      # compute pairwise cosine similarity
semantic_pairs_list = []
for i = 1 to Sim.size()[0] do
    for j = 1 to Sim.size()[1] do
        if Sim[i, j] >= Min and Sim[i, j] <= Max then
            positive_pair = tuple(Dataset[i], Dataset[j])
            semantic_pairs_list.append(positive_pair)
        end if
    end for
end for
Apply a random transformation to semantic_pairs_list
Combine semantic_pairs_list with the original dataset
Output: combined dataset
``` As seen in Figure 5, two random views are created for each instance in the original dataset, whereas random transformations are applied directly to the semantic positive pairs of the SPPS without creating copies of the instances. Next, all the positive pairs from the original dataset and the SPPS are combined into one dataset, which is used to pre-train SSL models. Combining the SPPS with the original dataset eliminates the need for the support set used by other approaches to hold the semantic positive pairs for the anchors during model training, thereby removing the computational overhead of auxiliary memory. Figure 5: shows the second step of the methodology, where the instances of both datasets are transformed and combined into one dataset. Figure 6 illustrates examples of semantic positive pairs in the SPPS obtained from the STL10 dataset. It shows that our approach finds the correct semantic positive pairs for the anchors, despite the anchor image having different properties (e.g. color, direction, background, and size). After adding the SPPS to the dataset, our model maximizes the similarity between the two views \(z\) and \(\tilde{z}\) of image \(x\), as well as the similarities between \(z\) and the views \(u\), which are the semantic positive samples of \(x\). Note that the number of semantic positive samples of \(x\) may vary. For example, some images may not have any semantic positive samples found by our method, while other images may have more than one semantic positive sample.
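The following is a minimal PyTorch-style sketch (illustrative only, not the authors' code) of the combine-and-transform step just described, assuming `augment` is the SimCLR-style random transformation and `spps` is the list of semantic positive pairs produced by Algorithm 1.

```python
from torch.utils.data import Dataset

class CombinedPairDataset(Dataset):
    """Yields positive pairs from both the original dataset and the SPPS subset."""

    def __init__(self, images, spps, augment):
        self.images = images      # list of original images
        self.spps = spps          # list of (img_a, img_b) semantic positive pairs
        self.augment = augment    # SimCLR-style random transformation

    def __len__(self):
        return len(self.images) + len(self.spps)

    def __getitem__(self, idx):
        if idx < len(self.images):
            # Original instance: two independently augmented views of the same image.
            img = self.images[idx]
            return self.augment(img), self.augment(img)
        # Semantic positive pair: one augmented view of each image in the pair.
        img_a, img_b = self.spps[idx - len(self.images)]
        return self.augment(img_a), self.augment(img_b)
```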
For defining our loss function, firstly we use the following contrastive function for computing the similarity between a positive pair: \[\ell(z,\tilde{z})=-\log\frac{\exp(\text{sim}(z,\tilde{z})/\tau)}{\exp(\text{ sim}(z,\tilde{z})/\tau)+\sum_{k=1}^{2N}\exp(\text{sim}(z,z_{k})/\tau)} \tag{2}\] where \(z_{k}\) denotes the views of the negative samples of \(x\). Since we are using two views of an image, therefore, we have \(2N\) number of negative views. As we are using views of an image as positive samples and as well as semantic positive samples, our loss function is defined as follows, \[loss=\frac{1}{N}\sum_{i=1}^{N}\left[\ell(z_{i},\tilde{z}_{i})+\sum_{m=1}^{M} \lambda_{im}\ell(z_{i},u_{m})\right] \tag{3}\] where \(0\leq\lambda_{im}\leq 1\) is a regularizer for using semantic positive samples for training our model. In the case of \(\lambda_{im}=0\), no positive samples are used and therefore our model reduces to SimCLR. This approach will increase the richness of the latent space and reduce the discarded features due to contrasting images that have similar content during the representation learning. This is yielded to improve the model performance on the downstream task. ## 4 Experiments and Results **Datasets.** We evaluated Sepp-CID approach on three datasets, i.e. STL-10 "unlabeled" with 100K training images [33], CIFAR-10 with 50K training images [34], and ImageNet-1K with 1.28M training images [35]. **Training.** We used SimCLR [3] as an example of a CID SSL with backbone ResNet50, temperature \(0.1\), weight decay \(1x10-6\), LARS optimizer, and the models are pre-trained up to 800 epochs on three datasets (CIFAR-10, STL-10, and ImageNet). We used the same configurations as SimCLR for both approaches, i.e. with our proposed methodology and without. To make a fair comparison of the effect of SePP-CID on the model performance across the three datasets, we fixed the proportion of the images involved in the pre-processing (\(K=5\%\)) (the number of images chosen from each dataset shown in Table 1). Figure 6: We show the semantic positive pairs for different anchors in the STL10-unlabeled dataset where the min threshold for similarity score is \(0.96\) and the max threshold is \(0.99\). **Evaluation.** We evaluated our approach by following the common protocol [3; 4]. The backbone is frozen and a linear classifier is trained for 90 epochs by using supervised learning, SGD optimizer, and zero weight decay. **Comparing SePP-CID to SimCLR.** We performed several full-scale experiments using different datasets, including ImageNet, across 100, 200, 400, 600, and 800 epochs, as well as varied batch sizes (256, 512, 1024) to demonstrate that our methodology consistently boosts the performance of contrastive instance discrimination methods, in this case, SimCLR is used as a baseline method. Table 2 illustrates a performance comparison on two datasets (CIFAR-10, STL-10) between SimCLR (vanilla) and SimCLR with our approach, referred to as SePP-CID. The comparison involves two factors batch size and the number of epochs. It is apparent in Table 2 that the SePP-CID approach continuously boosts the performance of contrastive instance discrimination where SePP-CID outperforms the vanilla SimCLR with different batch sizes and epochs. 
SePP-CID pre-trained 800 epochs and batch size 256 achieved 91% accuracy on CIFAR10: this is 3.56 % higher than Vanilla SimCLR under the same epochs and batch size, also 3.25% and 2.38% better than larger batch size 512 and 1024 consecutively with the same number of epochs. Table 2 also shows that SePP-CID surpasses SimCLR by 3.12% on STL-10 with pre-trained 800 epochs and batch size 1024. Table 3 shows that the SePP-CID method performs better than vanilla SimCLR on the ImageNet dataset. SePP-CID achieved 72.46% with 1024 batch size and 800 epochs which is better than vanilla SimCLR by 4.18%, in absolute terms. **Comparing SePP-CID to NNCLR and FNC.** After proving the consistency of our approach on different datasets with varying epochs and batch sizes, we linearly evaluate our approach with other approaches that treat the semantic pairs as \begin{table} \begin{tabular}{|l|l|} \hline Dataset & K-size \\ \hline \hline CIFAR10 & 2500 \\ STL10 & 5000 \\ ImageNet & 64000 \\ \hline \end{tabular} \end{table} Table 1: Number of images that equal \(5\%\) for each dataset. \begin{table} \begin{tabular}{|c c c c c c c c c c c c|} \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{STL-10} & \multicolumn{4}{c|}{CIFAR-10} \\ \cline{2-13} \multicolumn{1}{c}{} & Batch size/Epochs & 100 & 200 & 400 & 600 & 800 & 100 & 200 & 400 & 600 & 800 \\ \hline \hline SimCLR & 256 & 77.37\% & 80.87\% & 83.55\% & 85.29\% & 86.30\% & 80.23\% & 83.41\% & 86.66\% & 87.00 \% & 87.44\% \\ SePP-CID (_ours_) & 256 & 80.50\% & 84.51\% & 87.31\% & 88.67\% & 89.44\% & 22.24\% & 86.93\% & 88.92\% & 90.20\% & 91.00\% \\ \hline SimCLR & 512 & 77.78\% & 83.97\% & 86.71\% & 87.92\% & 88.73\% & 81.75\% & 84.38 \% & 86.90\% & 87.29\% & 87.75 \% \\ SePP-CID (_ours_) & 512 & 81.60\% & 85.62\% & 88.33\% & 89.45\% & 90.51\% & 83.34\% & 87.65\% & 89.40\% & 90.32\% & 91.64\% \\ \hline SimCLR & 1024 & 81.40\% & 85.83\% & 87.95\% & 88.19\% & 89.08\% & 83.92\% & 86.50 \% & 87.60\% & 88.00 \% & 88.62\% \\ SePP-CID (_ours_) & 1024 & 84.50\% & 88.25\% & 90.60\% & 91.47\% & 92.20\% & 85.39\% & 88.59\% & 90.68\% & 91.82 \% & 92.60\% \\ \hline \end{tabular} \end{table} Table 2: shows the performance of the two approaches SimCLR and SePP-CID on two datasets CIFAR-10 and STL-10 with different batch sizes and epochs. \begin{table} \begin{tabular}{c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{4}{c}{ImageNet} \\ \hline & Batch size/Epochs & 100 & 200 & 400 & 600 & 800 \\ \hline \hline SimCLR & 256 & 55.40 \% & 58.20\% & \% 61.34\% & 63.50\% & 64.80\% \\ SePP-CID (_ours_) & 256 & 58.60\% & 62.30\% & 64.63\% & 66.03 \% & 66.85 \% \\ \hline SimCLR & 512 & 59.70\% & 62.53 \% & 65.24 \% & 66.18 \% & 67.06 \% \\ SePP-CID (_ours_) & 512 & 61.55\% & 64.25\% & 66.86 \% & 67.70\% & 68.79 \% \\ \hline SimCLR & 1024 & 62.28\% & 65.46 \% & 66.69 \% & 67.86 \% & 68.28 \% \\ SePP-CID (_ours_) & 1024 & 66.58 \% & 68.51 \% & 69.53 \% & 70.89 \% & 72.46 \% \\ \hline \end{tabular} \end{table} Table 3: The classification accuracy of SimCLR and SePP-CID (ours) on ImageNet. positive pairs during model training. We used SimCLR as a baseline and we pre-trained the three approaches on two datasets (CIFAR-10 and STL-10) with a batch size equal to 1024 and varying epochs. Next, we froze the backbone for the models and used the evaluation protocol of [9; 8; 3] to evaluate the models on the same datasets. Table 4 shows that our approach (SePP-CID) significantly outperforms all the other approaches in both datasets with varying epochs. 
This supports our hypothesis that we can obtain more accurate semantic positive pairs by using pre-trained models and raw images in a dataset, thus we can pretrain our model on correct semantic pairs which improves the representation learning, yielding improved model performance on the downstream task. Also, the result shows that using a non-pretrained model with augmented images to determine the semantic pairs may slow the model convergence because the model needs several epochs to be able to pick the correct semantic positive pairs. For example, in Table 4 NNCLR achieved 85.47% on STL-10 dataset with 800 epochs while simCLR achieved 85.83% on 200 epochs. The NNCLR learn visual representation by attracting the nearest neighbour instances in the embedding space (semantic pairs) because they assume they have similar content. However, using non-pretrained model and augmented images to find images that have similar content may cause obtaining wrong semantic pairs during model training which leads to slow model convergence and reduced model performance as shown in Table 4. On the contrary, FNC and SePP-CID both use positive pairs and semantic positive pairs to learn visual representation. The difference between the two approaches is that FNC determines the semantic pairs by using non-pretrained model and augmented images which may also cause choose wrong semantic pairs and reduce the model performance. In our approach, we used a pre-trained model with raw images from the dataset to acquire more accurate semantic pairs, therefore the CID model is trained on the right semantic pairs from the beginning of training which leads to improves model performance. ### Ablation Study Here we provide a more in-depth analysis of our approach. We used various \(K\) values (a constant number to determine how many images from the original dataset are involved in the pre-processing) to check how it affects the performance of the contrastive instance discrimination. In addition, we picked random images from the original dataset, transformed them, and then added them back to the dataset. With this, we wanted to ensure that boosting the performance of CID methods originated from the semantic positive pairs not because of increasing the size of the dataset. For the ablation studies, we pre-trained all the models for up to 200 epochs with batch size 256 on the ImageNet dataset. It is obvious in Table 5 that when we increase the (K-size), the number of semantic positive pairs is increased which in turn increases the model's performance. This proves that semantic positive pairs are affected positively in the performance of contrastive instance discrimination. To ensure that the enhancement in the performance is because of the semantic positive pairs, not for increasing the size of the dataset. 
We randomly picked 4968 images from ImageNet, and then we created two copies for them (\(x_{i}\) and \(x_{i}^{\prime}\)) each of which has random data augmentation, after that we add them \begin{table} \begin{tabular}{c c c c c|c c c c} & \multicolumn{3}{c|}{STL-10} & \multicolumn{3}{c}{CIFAR-10} \\ \hline ApproachEpochs & 200 & 400 & 600 & 800 & 200 & 400 & 600 & 800 \\ \hline \hline SimCLR & 85.83\% & 87.95\% & 88.19\% & 89.08\% & 86.50\% & 87.60\% & 88.00\% & 88.62\% \\ NNCLR & 77.87\% & 81.53 \% & 83.67 \% & 85.47 \% & 84.13\% & 85.65 \% & 87.29\% & 88.18 \% \\ FNC & 85.33\% & 88.04 \% & 88.63 \% & 89.62 \% & 86.30 \% & 88.34 \% & 89.43\% & 90.51 \% \\ \hline SePP-CID (_ours_) & 88.25\% & 90.60 \% & 91.47 \% & 92.20\% & 88.59\% & 90.68\% & 91.82\% & 92.60\% \\ \hline \end{tabular} \end{table} Table 4: shows a comparison between the four approaches on two different datasets STL-10 and CIFAR-10 with the same batch size of 1024 and different epochs. \begin{table} \begin{tabular}{|c|c|c|c|} \hline K-size & K proportion to ImageNet dataset & semantic positive pairs & accuracy \\ \hline \hline 12800 & k=1\% & 900 & 59.30\% \\ 25600 & K = 2\% & 1762 & 60.18\% \\ 51200 & K = 4\% & 4710 & 61.87\% \\ 64000 & K = 5\% & 4968 & 62.30\% \\ 76800 & K = 6\% & 5306 & 62.38\% \\ \hline \end{tabular} \end{table} Table 5: Relation between K (number of images involved in the pre-process) with semantic positive pairs and model performance. again to the dataset. We picked this amount of images from the dataset because it is equal to the number of semantic positive pairs when the k-size= 64000 images (k=5% of the dataset). Thus, we can compare if the improvement came from semantic positive pairs or from increasing the size of the dataset. Adding random images to the original dataset after the transformation has a negligible improvement to the SimCLR performance (\(0.16\%\)) whereas adding the same number of semantic positive pairs by our approach increases the performance of SimCLR by \(4.1\%\) (see Table 6). ## 5 Limitation We find that increasing the (K-size) improves the performance of the contrastive instance discrimination approach but this improvement costs time to the model pretraining (Figure 7 shows the relation between time and K size). There is a linear relation between increasing the (K-size) and model pre-train time. Thus, if we want to find the semantic positive pairs between the first (10k) images in the dataset this preprocess will add 1 hour to the training of the contrastive instance discrimination model. ## 6 Conclusion & Future Work In this work, we highlighted some of the limitations of the previous approaches proposed for finding the semantic positive pairs, and proposed a new approach termed SePP-CID for solving this issue which improves the performance of contrastive instance discrimination methods. We evaluated SePP-CID on three datasets (ImageNet, STL10, and CIFAR10) and demonstrated a consistent increase in performance, outperforming the original SimCLR considerably, including on the full ImageNet dataset for 800 epochs. We find that increasing the number of images involved in the pre-processing (K-size) improves performance, but this improvement is computationally expensive and increases the model pre-training time. This needs to be factored in when employing this methodology. Lastly, although we used only SimCLR as a baseline CID method, we expect to evidence the same behaviour when other CID baseline methods are used, e.g. MoCo and MoCo v2. 
Our future work involves expanding our approach to other CID and non-CID methods.
2307.01777
Shapley Sets: Feature Attribution via Recursive Function Decomposition
Despite their ubiquitous use, Shapley value feature attributions can be misleading due to feature interaction in both model and data. We propose an alternative attribution approach, Shapley Sets, which awards value to sets of features. Shapley Sets decomposes the underlying model into non-separable variable groups using a recursive function decomposition algorithm with log linear complexity in the number of variables. Shapley Sets attributes to each non-separable variable group their combined value for a particular prediction. We show that Shapley Sets is equivalent to the Shapley value over the transformed feature set and thus benefits from the same axioms of fairness. Shapley Sets is value function agnostic and we show theoretically and experimentally how Shapley Sets avoids pitfalls associated with Shapley value based alternatives and are particularly advantageous for data types with complex dependency structure.
Torty Sivill, Peter Flach
2023-07-04T15:30:09Z
http://arxiv.org/abs/2307.01777v1
# Shapley Sets: Feature Attribution via Recursive Function Decomposition ###### Abstract Despite their ubiquitous use, Shapley value feature attributions can be misleading due to feature interaction in both model and data. We propose an alternative attribution approach, Shapley Sets, which awards value to sets of features. Shapley Sets decomposes the underlying model into non-separable variable groups using a recursive function decomposition algorithm with log linear complexity in the number of variables. Shapley Sets attributes to each non-separable variable group their combined value for a particular prediction. We show that Shapley Sets is equivalent to the Shapley value over the transformed feature set and thus benefits from the same axioms of fairness. Shapley Sets is value function agnostic and we show theoretically and experimentally how Shapley Sets avoids pitfalls associated with Shapley value based alternatives and are particularly advantageous for data types with complex dependency structure. Explainability, Feature Attribution, Shapley Value, Function Decomposition, Separability ## 1 The Shapley Value and Non-separable Functions In co-operative game theory, one central question is that of fair division: if players form a coalition to achieve a common goal, how should they split the profits? Let \(N\) be the set \(\{1,2,...n\}\) of players and \(2^{N}\) all coalitions of players. A function \(v:2^{N}\rightarrow\mathbb{R}\) is the \(n\)-person game in characteristic form, such that \(v(S),S\subseteq N\) defines the worth of coalition \(S\) where \(v(\varnothing)=0\). A solution concept is a mapping assigning a vector \(\mathbf{x}\in\mathbb{R}^{n}\) to the game \(v\). The Shapley value [1] is the most widely known solution concept which uniquely satisfies certain axioms of fairness: efficiency, dummy, symmetry and additivity. Please see [1] for definitions. **Definition 1.1** (Shapley Value).: For the game \(v\) the Shapley value of player \(i\in N\) is given as \[\phi_{i}(v)=\Sigma_{S\subseteq N\setminus\{i\}}\frac{|S|!(n-|S|-1)!}{n!}[v(S \cup\{i\})-v(S)] \tag{1}\] Under efficiency, the Shapley value decomposes the value of the grand coalition \(v(N)-v(\varnothing)\) to attribute worth to each individual player. The Shapley value is a fully separable function (Definition 1.2) such that \(v(N)-v(\varnothing)=\sum_{i}^{n}\phi_{i}(v)\). **Definition 1.2**.: [Additively Separable Function] A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) with variable set \(\mathbf{X}=\{X_{1},...,X_{n}\}\) is separable if it has the following form \(f(\mathbf{X})=\sum_{i=1}^{k}f_{i}(\mathbf{X}_{i})\:1<k\leq n\), where \(\mathbf{X}_{1},\mathbf{X}_{2},...,\mathbf{X}_{k}\) are \(k\) non-overlapped sub-vectors of \(\mathbf{X}\). Specifically, the function \(f\) is also called fully additively separable if \(k=n\), while it is regarded as fully non-separable if \(k=1\). While there are other forms of separability, In this paper we use the term separable to refer to additive separability. The set function \(v\) may not be fully separable. Within coalitional games this is due to the interaction between players. Consider the following example for the game \(v\) with player set \(N=\{1,2,3\}\) and \(v(1)=1,v(2)=0,v(3)=0,v(1,2)=1,v(1,3)=1,v(2,3)=2,v(1,2,3)=3\). Clearly the game is not fully separable as \(v(1)+v(2)+v(3)\neq v(1,2,3)\). 
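As a sanity check on Definition 1.1, the following self-contained Python sketch computes the Shapley value by enumerating all coalitions and applies it to the three-player game above; it returns \(\phi_{1}=\phi_{2}=\phi_{3}=1\), which sums to \(v(1,2,3)=3\) as required by efficiency, with the synergy between players 2 and 3 split evenly between them.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values (Definition 1.1) by enumerating all coalitions."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        phi[i] = total
    return phi

# The three-player game from the text.
worth = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 2,
         frozenset({1, 2, 3}): 3}

print(shapley_values([1, 2, 3], lambda s: worth[frozenset(s)]))
# {1: 1.0, 2: 1.0, 3: 1.0} -- efficiency: the values sum to v({1,2,3}) = 3.
```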
The non-separable interaction effects within coalitional games are dealt with by solution concepts which map partially separable into fully separable functions, allowing an individual attribution of worth to each player. The Shapley value provides an attribution where each player receives an average of their marginal contribution to all coalitions. ### Interaction Effects For Feature Attribution When applying the Shapley value to feature attribution, there are three functions to consider: The model to be explained \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) which operates on a variable set \(\mathbf{X}=\{X_{1},...,X_{n}\}\). The set function \(v\), which takes as input a set of features \(\mathbf{X}_{S}\subseteq\mathbf{X}\) and obtains \(f\)'s prediction on this coalition of features. The Shapley value \(\phi(v)\) which maps the set function \(v\) into a fully separable function. Given a particular prediction to attribute, \(f(\mathbf{x})\) where \(\mathbf{x}=\{x_{1},...,x_{n}\}\), the value function \(v(\mathbf{x},\mathbf{X}_{S})\) specifies how the subset of features, \(\mathbf{X}_{S}=\mathbf{X}\backslash\mathbf{X}_{S}\), should be removed from \(\mathbf{x}\). In coalitional game theory, the Shapley value attributes the difference in value between the grand coalition and the empty set of players. For feature attribution, the Shapley value attributes the change in prediction between instance and baseline. Therefore, the value of the empty coalition, \(v(\mathbf{x},\mathbf{X}_{\varnothing})\) is not guaranteed to be zero but some uninformative baseline prediction and thus the value function must account for the non-zero baseline: \(v(\mathbf{x},\mathbf{X}_{S})=f(\mathbf{X}_{\tilde{S}},\mathbf{X}_{S}=\mathbf{ x}_{S})-f(\mathbf{X}_{\varnothing})\). Similarly to coalitional games, the Shapley value fairly allocates interaction effects to each feature. Feature interaction may occur in the data \(f(X_{1},X_{2},X_{3})=X_{1}+X_{2}+X_{3}\) where \(X_{2}=\alpha X_{3}\) and/or in the model \(f(X_{1},X_{2},X_{3})=X_{1}+2X_{2}X_{3}\). The choice of value function, which acts as the interface between the Shapley value and the function \(f\) determines the kind of interactive effects the Shapley value must allocate between features. ### When Interaction Occurs in the Data While the following ideas have previously been discussed [2; 3; 4], we re-frame them here within the context of separability which allows us to motivate our proposed attribution method, Shapley Sets. **Example 1:** Given the binary variable set \(\mathbf{X}=\{X_{1},X_{2},X_{3}\}\) and function \(f(\mathbf{X})=X_{1}+X_{3}\) where \(X_{2}\) is the causal ancestor of \(X_{3}\) such that \(X_{3}=X_{2}\). It is clear that \(X_{2}\) has no impact on \(f(\mathbf{X})\) from the perspective of the model. However, from the perspective of the data distribution, \(X_{3}\) is dependent on \(X_{2}\). Changing \(X_{2}\) will result in a change in \(X_{3}\) therefore, changing \(X_{2}\) to a value non-consistent with \(X_{3}\) does not make sense. Whether to consider \(X_{2}\) as a separate player in the game and attribute value despite it having no direct influence on the model output is an open debate in the literature. **Off-manifold Value Functions** There are those who argue that features with no impact on the model should receive no attribution [5; 6]. 
These methods break all statistical relationships between the inputs to the model by using a value function which calculates the impact of each feature on the model independently of its impact on the distribution of other features. This approach was formalised as \(v_{marg}\) by [6] \[v_{marg}(\mathbf{x},\mathbf{X}_{S})=f(\mathbf{X}_{S}=\mathbf{x}_{S},\mathbb{ E}[\mathbf{X}_{\tilde{S}}])-f(\mathbb{E}[\mathbf{X}]) \tag{2}\] The expectation is usually taken over the input distribution \(\mathbf{X}_{input}\). However, if this is replaced by an arbitrary distribution, \(v_{marg}\) was generalised to \(v_{bs}\) by [7], which uses an arbitrary baseline sample \(\mathbf{z}\), \[v_{bs}(\mathbf{x},\mathbf{z},\mathbf{X}_{S})=f(\mathbf{X}_{S}=\mathbf{x}_{S}, \mathbf{X}_{\tilde{S}}=\mathbf{z}_{\tilde{S}})-f(\mathbf{X}=\mathbf{z}). \tag{3}\] There are those who argue that attributions independent of the statistical interactions in the data are inherently misleading [8; 4]. Firstly, from a causal perspective, if we consider Example 1, the Shapley value via \(v_{marg}\) would assign zero importance to \(X_{2}\). An attribution ignoring that \(X_{2}\) is directly responsible for \(X_{3}\) is misleading, especially if the attribution is used to recommend changes. Furthermore, \(v_{marg}\) evaluates the model on out-of-distribution samples. If we break the causal relationship between \(X_{2}\) and \(X_{3}\) and use their independent expected values \(\mathbb{E}[X_{2}]=\mathbb{E}[X_{3}]=0\) in \(v_{marg}\), the model is evaluated on samples such as \((x_{1},1,0)\), which is a complete misrepresentation of the truth. **On-manifold Value Functions** To combat this problem, on-manifold samples can be calculated by the use of the conditional value function, first introduced by [9], which does consider statistically related features as separate players in the game, allowing the distribution of out-of-coalition features to be impacted by the feature in question \[v_{cond}(\mathbf{x},\mathbf{X}_{S})=\mathbb{E}[f(\mathbf{X}_{S}=\mathbf{x}_{S},\mathbf{X}_{\tilde{S}})|\mathbf{X}_{S}=\mathbf{x}_{S}]-\mathbb{E}[f(\mathbf{X })]. \tag{4}\] \(v_{cond}\) is often taken as the observational conditional probability whereby the conditional expectation is calculated over \(\mathbf{X}_{input}\). This generates on-manifold data samples which address the problems discussed above. Furthermore, features which have no direct impact on the model but an indirect impact through other features are assigned a non-zero importance, more accurately reflecting reality. However, there are two significant issues with \(v_{cond}\): its computational complexity, which requires evaluating the model on \(2^{n}\) multivariate conditional distributions, and the undesirable effect of treating all features as players combined with the efficiency axiom, which we explicate below. In assigning non-marginal features a non-zero importance, \(v_{cond}\) can give misleading explanations which indicate features to change despite having zero impact on the outcome. This weakness of \(v_{cond}\) has been formalised as a "violation of sensitivity" [6]: _When the relevance of \(\phi_{i}\) is defined by \(v_{cond}\), \(\phi_{i}\neq 0\) does not imply that \(f\) depends on \(X_{i}\)._ The failure of sensitivity exhibited by \(v_{cond}\) leads to further issues with the generated attributions. Consider Example 1 again, where \(X_{3},X_{2},X_{1}\) are binary variables and \(X_{3}=X_{2}\). 
Given the input \(\mathbf{x}=(x_{1},x_{2},x_{3})=(1,1,1)\) and \(f(x_{1},x_{2},x_{3})=2\), under \(v_{cond}\) the Shapley attributions for \(X_{2}\) and \(X_{3}\) would both be greater than the attribution for \(X_{1}\). Clearly, the attribution of \(X_{2}\) violates sensitivity. Now, consider an alternative function \(f_{2}\) which is trained on just two features \(X_{1},X_{3}\). As \(X_{3}=X_{2}\), \(f_{2}(X_{1},X_{3})=f(X_{1},X_{2},X_{3})\). However, now the Shapley values for \(X_{1}\) and \(X_{3}\) are equal. The relative apparent importances of \(X_{1}\) and \(X_{3}\) depend on whether \(X_{2}\) is considered to be a third feature, even though the two functions are effectively the same. [10] propose a solution to the failure of sensitivity exhibited by \(v_{cond}\) following the intuition: If \(X_{i}\) is known to be the deterministic causal ancestor of \(X_{j}\), one might want to attribute all effect to \(X_{i}\) and none to \(X_{j}\). In contrast, [4] argue that the only way to remove the problems arising from the failure of sensitivity is to replace the observational \(v_{cond}\) with the interventional conditional distribution. However, both the asymmetric and interventional attributions above require the specification of the causal structure of the phenomenon being modelled. It has been argued [2] that this requirement is a significant limiting factor in the adoption of either approach. In this paper, we propose an attribution approach which can be used with on and off-manifold value functions. Under \(v_{cond}\), our method generates on-manifold attributions which avoid the failure of sensitivity without requiring any knowledge of the causal structure of the underlying data distribution. ### When Interaction Occurs in the Model While off-manifold value functions ignore interaction in the data, both on and off-manifold value functions recognize interaction in the model. It has been recognised, however, that the Shapley value, in the presence of feature interaction in the model, generates misleading attributions [11]. **Example 2:** Consider the function \(f(X_{1},X_{2},X_{3})=X_{1}+2X_{2}X_{3}\) and assume that the three features are statistically independent, i.e. all interaction between features is defined entirely by the model. Furthermore, it is given that \(\mathbb{E}[X_{1}]=\mathbb{E}[X_{2}]=\mathbb{E}[X_{3}]=0\) and that our sample to be explained is \(\mathbf{x}=(1,1,1)\). The Shapley value under both on and off-manifold value functions gives equal attributions to each feature. While this attribution makes sense from the perspective of how much each feature contributed to the change in prediction, it does not reflect the true behaviour of the model, where changing the value of \(X_{2}\) or \(X_{3}\) would have double the impact of changing \(X_{1}\). In this paper, we propose a solution concept which would group \(X_{2},X_{3}\) and, unlike the Shapley value, award them attribution together, resulting in attributions more faithful to the underlying model \(f\) when used with on or off-manifold value functions. ## 2 Shapley Sets of Non-Separable Variable Groups The problems with Shapley value attributions discussed above occur because it assigns individual value to variables belonging to Non-Separable Variable Groups (NSVGs) with regard to the underlying partially separable function \(f\) (Definition 1.2). Non-separable groups are used to describe the formed variable groups \(\{\mathbf{X}_{1},...,\mathbf{X}_{k}\}\) after a complete (or ideal) decomposition of \(f\). 
An NSVG can also be defined as a minimal set of interacting variables given the function \(f\), which we explicate in Definition 2.1. **Definition 2.1** (Non-Separable Variable Group (NSVG)).: Let \(\mathbf{X}=\{X_{1},X_{2},...X_{n}\}\) be the set of decision variables and \(f\) be a partially separable function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) satisfying Definition 1.2. If there exist any two candidate decision vectors \(\mathbf{x}=\{x_{1},...,x_{n}\}\) and \(\mathbf{x^{\prime}}=\{x^{\prime}_{1},...,x^{\prime}_{n}\}\), sampled from the domain of \(\mathbf{X}\), such that the following property holds for any two mutually exclusive subsets \(\mathbf{X}_{i},\mathbf{X}_{j}\subset\mathbf{X},\mathbf{X}_{i}\cap\mathbf{X}_{j }=\varnothing\), \[f(\mathbf{x})_{\mathbf{X}_{i}\cup\mathbf{X}_{j}}-f(\mathbf{x})_{\mathbf{X}_{j}} \neq f(\mathbf{x})_{\mathbf{X}_{i}}-f(\mathbf{x})_{\varnothing}, \tag{5}\] then the sets \(\mathbf{X}_{i},\mathbf{X}_{j}\) are said to interact. Here, \(f(\mathbf{x})_{\mathbf{X}_{S}}=f(\mathbf{X}_{S}=\mathbf{x}_{S},\mathbf{X}_{\tilde{S}}=\mathbf{x^{\prime}}_{\tilde{S}})\) and \(\mathbf{X}_{S}\cup\mathbf{X}_{\tilde{S}}=\mathbf{X}\). As an NSVG refers to a minimal set of interacting variables, if \(|\mathbf{X}_{i}|\) and \(|\mathbf{X}_{j}|\) are minimized such that Equation 5 still holds, then \(\mathbf{X}_{i}\cup\mathbf{X}_{j}\) is an NSVG (for a proof, see [12]). Translating Definition 2.1 for feature attribution, given that \(f(\mathbf{x})_{\mathbf{X}_{S}}\) is a function over the domain of all the possible subsets \(\mathbf{X}_{S}\subseteq\mathbf{X}\), we can rewrite Equation 5 in terms of \(v(\mathbf{x},\mathbf{X}_{S})\), where \(v\) could represent any of the value functions from the previous section, but in this paper we restrict \(v\in\{v_{cond},v_{bs}\}\). By setting \(\mathbf{X}_{i}=\{X_{i}\}\) and \(\mathbf{X}_{j}=\mathbf{X}_{S}\), for \(v_{bs}\), given that \(|\mathbf{X}_{S}|\) is minimised, if there exist any candidate vectors \(\mathbf{x},\mathbf{x}^{\prime}\) such that \[v_{bs}(\mathbf{x},\mathbf{x}^{\prime},\{X_{i}\}\cup\mathbf{X}_{S})-v_{bs}( \mathbf{x},\mathbf{x}^{\prime},\mathbf{X}_{S})\neq v_{bs}(\mathbf{x},\mathbf{x }^{\prime},\{X_{i}\}) \tag{6}\] then \(\{X_{i}\}\cup\mathbf{X}_{S}\) is an NSVG. For \(v_{cond}\), given that \(|\mathbf{X}_{S}|\) is minimised, if there exists any candidate vector \(\mathbf{x}\) such that \[v_{cond}(\mathbf{x},\{X_{i}\}\cup\mathbf{X}_{S})-v_{cond}(\mathbf{x},\mathbf{ X}_{S})\neq v_{cond}(\mathbf{x},\{X_{i}\}) \tag{7}\] then \(\{X_{i}\}\cup\mathbf{X}_{S}\) is an NSVG. Given the partially separable function from Example 2, under \(v_{bs}\), \(\{X_{2},X_{3}\}\) is an NSVG as \(v_{bs}(\mathbf{x},\mathbf{x}^{\prime},\{X_{3},X_{2}\})-v_{bs}(\mathbf{x}, \mathbf{x}^{\prime},\{X_{2}\})\neq v_{bs}(\mathbf{x},\mathbf{x}^{\prime},\{X_ {3}\})\) for settings \(\mathbf{x}=(1,1,1)\) and \(\mathbf{x}^{\prime}=(0,0,0)\). Given the partially separable function from Example 1, under \(v_{cond}\), the set \(\{X_{2},X_{3}\}\) is an NSVG as \(v_{cond}(\mathbf{x},\{X_{3},X_{2}\})-v_{cond}(\mathbf{x},\{X_{2}\})\neq v_{ cond}(\mathbf{x},\{X_{3}\})\) for the setting \(\mathbf{x}=(1,1,1)\). In this paper, we propose an alternative attribution method which, unlike the Shapley value, does not separate NSVGs to assign attribution. We work under the intuition that interacting features, whether the interaction is in the model or in the data, should not be considered as separate players in the coalitional game but should be awarded value together. 
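As a quick numeric check, a minimal Python sketch (ours) of the test in Equation 6 on the function of Example 2, with baseline \(\mathbf{x}^{\prime}=(0,0,0)\), confirms that \(\{X_{2},X_{3}\}\) interact while \(X_{1}\) is separable from both:

```python
def f(x1, x2, x3):
    return x1 + 2 * x2 * x3          # the model from Example 2

x      = (1, 1, 1)                   # instance to explain
x_base = (0, 0, 0)                   # baseline x'

def v_bs(S):
    """v_bs(x, x', X_S): plug in x on the index set S and the baseline elsewhere."""
    z = [x[i] if i in S else x_base[i] for i in range(3)]
    return f(*z) - f(*x_base)

def interact(A, B):
    # Equation 6; in practice the strict inequality is replaced by the
    # threshold epsilon introduced in Section 3.
    return v_bs(A | B) - v_bs(B) != v_bs(A)

print(interact({1}, {2}))   # True : X2 and X3 interact, so {X2, X3} is an NSVG
print(interact({0}, {1}))   # False: X1 is separable from X2
print(interact({0}, {2}))   # False: X1 is separable from X3
```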
In both the examples above, \(X_{2}\) and \(X_{3}\) would receive joint attribution under our proposed method. Given the partially separable function \(f\) satisfying Definition 1.2, variable set \(\mathbf{X}=\{X_{1},X_{2},...,X_{n}\}\), and a specified value function \(v(\mathbf{x},\mathbf{X}_{S})\) with \(v\in\{v_{cond},v_{bs}\}\), our proposed solution concept \(\varphi\), which we term Shapley Sets (SS), finds the optimal decomposition of \(f\) into the set of \(m>1\) NSVGs \(\{\mathbf{X}_{1},...,\mathbf{X}_{m}\}\). The resulting variable grouping \(\{\mathbf{X}_{1},...,\mathbf{X}_{m}\}\) satisfies Definition 1.2 and each variable group \(\mathbf{X}_{i}\) is composed solely of variables which satisfy Definition 2.1. From Definition 1.2, \(f(\mathbf{x})=\sum_{i=1}^{m}v(\mathbf{x},\mathbf{X}_{i})\). Given a prediction to be attributed, \(f(\mathbf{x})\), our proposed attribution, \(\varphi\), therefore returns the attribution for each variable group \(\mathbf{X}_{i}\), \(i\in\{1,...,m\}\), given as: \[\varphi_{\mathbf{X}_{i}}=v(\mathbf{x},\mathbf{X}_{i}) \tag{8}\] **Proposition 2.2**.: _If we model each NSVG \(\mathbf{X}_{i}\in\{\mathbf{X}_{1},...,\mathbf{X}_{m}\}\) as a super-feature \(Z_{i}\) such that \(\mathbf{Z}=\{Z_{1},...,Z_{m}\}\), \(z_{i}=\mathbf{x}_{i}\) and \(\mathbf{z}=\{z_{1},...,z_{m}\}\), then the Shapley value of each super-feature \(\phi_{Z_{i}}(v,\mathbf{z})\) is equivalent to \(v(\mathbf{z},Z_{i})\)._ Proof.: \[\phi_{Z_{i}}(v,\mathbf{z})=\sum_{\mathbf{Z}_{S}\subseteq\mathbf{Z}\setminus\{Z _{i}\}}\alpha[v(\mathbf{z},\{Z_{i}\}\cup\mathbf{Z}_{S})-v(\mathbf{z},\mathbf{ Z}_{S})]\] where \(\alpha=\frac{|\mathbf{Z}_{S}|!\,(|\mathbf{Z}|-|\mathbf{Z}_{S}|-1)!}{|\mathbf{Z}|!}\). Given that each \(Z_{i}\in\mathbf{Z}\) is an NSVG, from Definition 2.1 we know that \(v(\mathbf{z},\mathbf{Z}_{i}\cup\mathbf{Z}_{j})-v(\mathbf{z},\mathbf{Z}_{j})=v (\mathbf{z},\mathbf{Z}_{i})\) for any disjoint \(\mathbf{Z}_{i},\mathbf{Z}_{j}\subseteq\mathbf{Z}\). Therefore, \(v(\mathbf{z},\{Z_{i}\}\cup\mathbf{Z}_{S})-v(\mathbf{z},\mathbf{Z}_{S})=v( \mathbf{z},\{Z_{i}\})\). It follows, given \(\sum_{\mathbf{Z}_{S}\subseteq\mathbf{Z}\setminus\{Z_{i}\}}\frac{|\mathbf{Z}_{S }|!\,(|\mathbf{Z}|-|\mathbf{Z}_{S}|-1)!}{|\mathbf{Z}|!}=1\), that \[\phi_{Z_{i}}(v,\mathbf{z})=\sum_{\mathbf{Z}_{S}\subseteq\mathbf{Z}\setminus\{Z _{i}\}}\alpha\,v(\mathbf{z},\{Z_{i}\})=v(\mathbf{z},\{Z_{i}\})=v(\mathbf{x}, \mathbf{X}_{i})\] Proposition 2.2 shows how the attribution given by Shapley Sets (SS) to variable group \(\mathbf{X}_{i}\), \(\varphi_{\mathbf{X}_{i}}(v,\mathbf{x})\), is equivalent to the Shapley value when played over the feature set \(\mathbf{Z}\) containing the set of NSVGs \(\{Z_{1},...,Z_{m}\}=\{\mathbf{X}_{1},...,\mathbf{X}_{m}\}\) for a given \(v\in\{v_{cond},v_{bs}\}\). SS therefore satisfies the same axioms of fairness as the Shapley value: efficiency, dummy, additivity and symmetry when played over this feature set. However, we have discussed how, despite its axioms, the Shapley value can generate misleading attributions in the presence of feature interaction. In Section 4 we therefore give practical advantages of SS over the Shapley value. First, however, we provide a method for finding the optimal decomposition of \(f\) into its NSVGs. ## 3 Computing Shapley Sets Determining the NSVGs of a function \(f\) could be achieved manually by partitioning the variable set and determining interaction over every possible candidate vector. However, this would be computationally intractable. 
Instead, there exists a large body of literature surrounding function decomposition in global optimization problems. Of this work, automatic decomposition methods identify NSVGs. We therefore propose a method for calculating SS which is based on the Recursive Decomposition Grouping algorithm (RDG) as introduced in [12]. To identify whether two sets of variables \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\) interact, RDG uses a fitness measure based on Definition 2.1, with candidate vectors \(\mathbf{x},\mathbf{x}^{\prime}\) taken as the lower and upper bounds of the domain of \(\mathbf{X}\). If the difference between the left- and right-hand sides of Equation 5 exceeds some threshold \(\epsilon=\alpha\min\{|f(\mathbf{x}_{1})|,...,|f(\mathbf{x}_{k})|\}\), where \(\mathbf{x}_{1},...,\mathbf{x}_{k}\) are randomly selected candidate vectors, then \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\) are deemed by RDG to interact. To adapt RDG for \(v_{cond}\) and \(v_{bs}\), we propose an alternative fitness measure, Definition 3.1, with candidate vectors \(\mathbf{x},\mathbf{x}^{\prime}\) randomly sampled from \(\mathbf{X}_{input}\), which can identify NSVGs in the data and/or in the model. **Definition 3.1** (Shapley Sets Fitness Measure).: Given two sets of variables \(\mathbf{X}_{i},\mathbf{X}_{j}\) and a specified value function, \(v\in\{v_{bs},v_{cond}\}\), if \(|v_{cond}(\mathbf{x},\mathbf{X}_{i}\cup\mathbf{X}_{j})-v_{cond}(\mathbf{x}, \mathbf{X}_{j})-v_{cond}(\mathbf{x},\mathbf{X}_{i})|>\epsilon\) then there is interaction between \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\). Or, if \(|v_{bs}(\mathbf{x},\mathbf{x}^{\prime},\mathbf{X}_{i}\cup\mathbf{X}_{j})-v_{ bs}(\mathbf{x},\mathbf{x}^{\prime},\mathbf{X}_{j})-v_{bs}(\mathbf{x},\mathbf{x}^{ \prime},\mathbf{X}_{i})|>\epsilon\) then there is interaction between \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\). We substitute the SS fitness measure into the RDG algorithm, which identifies NSVGs by recursively identifying the variable sets \(\mathbf{X}_{j}\) with which a given variable \(X_{i}\) interacts. If \(X_{i}\) and a single variable \(X_{j}\) are found to interact, they are placed into the same NSVG, \(\mathbf{X}_{1}\), at which point conditional interaction between \(\mathbf{X}_{1}\) and the remaining variables is identified. The algorithm iterates over every variable \(X_{i}\in\mathbf{X}\) and returns the set of NSVGs. To compute the SS attributions for a given prediction \(f(\mathbf{x})\) we compute \(v(\mathbf{x},\mathbf{X}_{i})\) for each NSVG, \(\mathbf{X}_{i}\). Our full algorithm is shown in Algorithm 2. The runtime of SS is \(O(n\log n)\) as proven in [12]. 
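Algorithm 1 below gives the pseudocode of this recursive interaction check. As a concrete illustration only, the following minimal Python sketch of the same recursion under \(v_{bs}\) uses our own naming, with the baseline \(\mathbf{x}^{\prime}\) and threshold \(\epsilon\) assumed to be supplied.

```python
def value_interact(model, x, x_base, X1, X2, eps=1e-9):
    """Recursive interaction check in the spirit of Algorithm 1 (v_bs version).

    X1, X2 are disjoint lists of feature indices; the function returns X1
    enlarged by every variable of X2 found to interact with it.
    """
    def v_bs(S):
        # v_bs(x, x', X_S): plug x in on the index set S, the baseline elsewhere.
        z = [x[i] if i in S else x_base[i] for i in range(len(x))]
        return model(z) - model(list(x_base))

    sigma1 = v_bs(set(X1) | set(X2)) - v_bs(set(X2))
    sigma2 = v_bs(set(X1))
    if abs(sigma1 - sigma2) > eps:          # Definition 3.1: interaction detected
        if len(X2) == 1:
            return sorted(set(X1) | set(X2))
        mid = len(X2) // 2                  # otherwise bisect X2 and recurse
        g1 = value_interact(model, x, x_base, X1, X2[:mid], eps)
        g2 = value_interact(model, x, x_base, X1, X2[mid:], eps)
        return sorted(set(g1) | set(g2))
    return sorted(set(X1))
```

For the function of Example 2 with \(\mathbf{x}=(1,1,1)\) and \(\mathbf{x}^{\prime}=(0,0,0)\), calling `value_interact(lambda z: z[0] + 2*z[1]*z[2], (1, 1, 1), (0, 0, 0), [1], [0, 2])` returns `[1, 2]`, i.e. it groups \(X_{2}\) with \(X_{3}\) and leaves \(X_{1}\) out.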
``` 0:\(v\in\{v_{bs},v_{cond}\},\mathbf{X}_{input},\epsilon\) if\(v_{bs}\)then Sample \(\mathbf{x},\mathbf{x}^{\prime}\) from input distribution \(\mathbf{X}_{input}\) \(\sigma_{1}=v(\mathbf{x},\mathbf{x}^{\prime},\mathbf{X}_{1}\cup\mathbf{X}_{2 })-v(\mathbf{x},\mathbf{x}^{\prime}\mathbf{X}_{2})\) \(\sigma_{2}=v(\mathbf{x},\mathbf{x}^{\prime},\mathbf{X}_{1})\) endif if\(v_{cond}\)then Sample \(\mathbf{x}\) from input distribution \(\mathbf{X}_{input}\) \(\sigma_{1}=v(\mathbf{x},\mathbf{X}_{1}\cup\mathbf{X}_{2})-v(\mathbf{x}, \mathbf{X}_{2})\) \(\sigma_{2}=v(\mathbf{x},\mathbf{X}_{1})\) endif if\(|\sigma_{1}-\sigma_{2}|>\epsilon\)then if\(\mathbf{X}_{2}\) contains one variable then \(\mathbf{X}_{1}=\mathbf{X}_{1}\cup\mathbf{X}_{2}\) else Split \(\mathbf{X}_{2}\) into two equal groups \(G_{1},G_{2}\) \(\mathbf{X}_{1}^{1}\) = ValueInteract\((\mathbf{X}_{1},G_{1})\) \(\mathbf{X}_{1}^{2}\) = ValueInteract\((\mathbf{X}_{1},G_{2})\) \(\mathbf{X}_{1}=\mathbf{X}_{1}^{1}\cup\mathbf{X}_{1}^{2}\) endif endif Return \(\mathbf{X}_{1}\) ``` **Algorithm 1** ValueInteract(\(\mathbf{X}_{1},\mathbf{X}_{2}\)) ## 4 Motivating Shapley Sets The selection of the value function \(v\) determines the variable grouping generated. Used with \(v_{bs}\), as interacting features are placed in the same NSVG, the attributions resulting from SS will be more faithful to the underlying model. The SS attribution for \(v_{bs}\) in Example 2 would be \(\varphi_{X_{1}}=1\) and \(\varphi_{X_{2},X_{3}}=2\). Used with \(v_{cond}\), as interacting features are placed in the same NSVG, the attributions resulting from SS do not suffer from the violation of sensitivity as described in Section 1.2. Consider again Example 1, as \(\tilde{X}_{2},X_{3}\) now belong to a NSVG, the SS attributions for \(X_{1},X_{3}\) are now equal across both \(f\) and \(f_{2}\) therefore robust to whether non-directly impacting features are included in the model. SS offer a further advantage when used to compare the attributions under on and off-manifold examples. Consider again Example 2 yet now with \(X_{1}=\alpha X_{2}\). The SS attribution via \(v_{marg}\) would be \(\varphi_{X_{1}}=1\) and \(\varphi_{X_{2},X_{3}}=2\). However, if SS was calculated via \(v_{cond}\)\(\varphi_{\{X_{1},X_{2},X_{3}\}}=3-\mathbb{E}[f(\mathbf{X})]\) indicating that \(f\) is non-separable and all the features interact. The comparison between on and off-manifold SS therefore indicate _where_ the feature interaction takes place. We have thus far provided an alternative attribution method to the Shapley Value, SS which can be computed in \(O(nlogn)\) time with \(n\) being the number of features. SS can be adapted for arbitrary value functions and offers several advantages over Shapley value based attributions when used with on and off-manifold value functions. In Section 6 we empirically validate the theoretical claims made above but first we discuss related work. ## 5 Related Work As Shapley value based feature attribution has a rich literature, we differentiate SS from three approaches which are closest in essence to ours. SS enforce a coalition structure on the Shapley value such that players cannot be considered in isolation from their coalitions. **The Owen value** is a solution concept for games with an existing coalition structure [13]. The Owen value is the result of a two-step procedure: first, the coalitions play a quotient game among themselves, and each coalition receives a payoff which, in turn, is shared among its individual players in an internal game. 
Both payoffs are given by applying the Shapley value. This approach is not equivalent to SS, which assumes no prior coalitional structure and instead finds the optimum coalition structure, namely the decomposition of \(v\) into its NSVGs. **Shapley Residuals**[11] capture the level to which a value function is inessential to a coalition of features. They show, for \(v_{cond},v_{marg}\), that if the game function can be decomposed into \(v(\mathbf{x},\mathbf{X}_{S})=v(\mathbf{X}_{T})+v(\mathbf{X}_{\tilde{T}})\) for \(\mathbf{X}_{T}\subset\mathbf{X}_{S}\) then the value function \(v\) is inessential with respect to the coalition \(\mathbf{X}_{T}\). In this way we can view a non-zero Shapley residual, \(r_{S}\neq 0\), as an indication that a coalition is a non-separable variable group. However, the Shapley residuals are built on complex Hodge decomposition [14], are difficult to understand and do not offer a better way of attributing to features. In contrast, SS is built on the idea of additive separability, is easier to understand, is less computationally expensive and proposes a solution to the issues with the Shapley value which are analogous to those Shapley residuals were designed to identify. **Grouped Shapley Values** Determining the Shapley value of grouped variables has been previously suggested in [15; 16], which identify interaction in the data (based on measures of correlation) to partition the features into groups, after which the Shapley value is calculated. Shapley Sets is distinct from the above approaches in the following ways. Firstly, Shapley Sets is capable of uncovering interaction in the model as well as in the data. Secondly, Shapley Sets is designed to find the optimal grouping of the features such that the Shapley value theoretically reduces to the simple computation in Equation 8. Therefore, the grouping under Shapley Sets requires linear time to compute (given the prior decomposition of the variable set under log linear time), whereas the grouping proposed under grouped Shapley values [15; 16] still requires exponential computation (to compute exactly, although this can be approximated). Shapley Sets, to our knowledge, is the first contribution to the feature attribution literature which automatically decomposes a function into the optimal variable set by which to award attribution. ## 6 Experimental Motivation of Shapley Sets We begin with two synthetic experiments. The first of these motivates the use of SS in the presence of interaction in the model. The second motivates the use of SS in the presence of interaction in the data. We then compare SS to existing Shapley value (SV) based attribution methods on three benchmark datasets. We first, however, outline how the value functions \(v_{bs},v_{cond}\) are computed for our experiments. As discussed above, \(v_{bs}\) takes as input arbitrary reference vectors. For our experiments we select \(v_{marg}\) such that \(v_{bs}=v_{marg}\) (Equation 2). The expectation is taken over the empirical input distribution \(\mathbf{X}_{input}\). For the calculation of \(v_{cond}\) (Equation 4), as the true conditional probabilities for the underlying data distribution are unknown, we approximate \(p(\mathbf{X}_{\bar{S}}|\mathbf{X}_{S}=\mathbf{x}_{S})\) using the underlying data distribution. Approximating conditional distributions can be achieved by directly sampling from the empirical data distribution. 
However, as noted in [3], this method of approximating \(p(\mathbf{X}_{\bar{S}}|\mathbf{X}_{S}=\mathbf{x}_{S})\) suffers when \(|\mathbf{X}_{S}|>2\), due to sparsity in the underlying empirical distribution. We therefore adopt the approach of [3], where, under the assumption that each \(\mathbf{x}\in\mathbf{X}\) is sampled from a multivariate Gaussian with mean vector \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\), the conditional distribution \(p(\mathbf{X}_{\bar{S}}|\mathbf{X}_{S}=\mathbf{x}_{S})\) is also multivariate Gaussian such that \(p(\mathbf{X}_{\bar{S}}|\mathbf{X}_{S}=\mathbf{x}_{S})=\mathcal{N}_{\bar{S}}(\boldsymbol{\mu}_{\bar{S}|S},\boldsymbol{\Sigma}_{\bar{S}|S})\), where \(\boldsymbol{\mu}_{\bar{S}|S}=\boldsymbol{\mu}_{\bar{S}}+\boldsymbol{\Sigma}_{\bar{S}S}\boldsymbol{\Sigma}_{SS}^{-1}(\mathbf{x}_{S}-\boldsymbol{\mu}_{S})\) and \(\boldsymbol{\Sigma}_{\bar{S}|S}=\boldsymbol{\Sigma}_{\bar{S}\bar{S}}-\boldsymbol{\Sigma}_{\bar{S}S}\boldsymbol{\Sigma}_{SS}^{-1}\boldsymbol{\Sigma}_{S\bar{S}}\). We can therefore sample from the conditional Gaussian distribution with expectation vector and covariance matrix given by \(\boldsymbol{\mu}_{\bar{S}|S}\) and \(\boldsymbol{\Sigma}_{\bar{S}|S}\), where \(\boldsymbol{\mu}\) and \(\boldsymbol{\Sigma}\) are estimated by the sample mean and covariance matrix of \(\mathbf{X}_{input}\). ### Synthetic Experiment: Interaction in the Model We first construct three functions with linear and non-linear feature interactions: \[f_{1}(\mathbf{X})=X_{0}+(X_{1}/(2+X_{4}))+2(X_{2}*X_{3})+sin(2(X_{5})+X_{6})\] \[f_{2}(\mathbf{X})=2(sgn(X_{0}))+sgn(X_{1}X_{2}X_{3})+sgn(X_{4}X_{5}X_{6})\] \[f_{3}(\mathbf{X})=2(X_{0}X_{2}X_{3})+4(X_{4}X_{5})-3(X_{1})^{2}-(X_{6})\] We construct a synthetic dataset of seven features drawn independently from \(\mathcal{N}(-1,1)\). For each of 100 randomly drawn samples we compute SS under \(v_{marg}\). As \(|\mathbf{X}|=7\) we are able to compute the true SVs under \(v_{marg}\) for each feature, without relying on a sampling algorithm. As we know the ground truth we calculate the Mean Average Error across all features and samples as our evaluation metric, \[MAE=\frac{1}{k}\sum_{j=1}^{k}\frac{1}{n}\sum_{i=1}^{n}|m(X_{ij})-gt(X_{ij})|, \tag{9}\] where \(m(X_{ij})\) is the attribution given by \(m=SS\) or \(m=SV\) to feature \(i\) in sample \(j\). As SS calculates an attribution for a set of features, \(m_{SS}(X_{ij})=\varphi_{\mathbf{X}_{ij}}\), the ground truth attribution \(gt(X_{ij})\) is the ground truth value of each NSVG. For example, given \(f=2(X_{1}X_{2})\) and \(\mathbf{x}_{j}=(1,1)\), \(gt(X_{1,j})=2\) and \(gt(X_{2,j})=2\). Results are shown in Table 1. SS is successful in decomposing each function into its NSVGs and the attributions awarded to each set match the ground truth of the function, giving an MAE of zero for all samples and functions. SV attributions deviate from ground truth by dividing the value of each NSVG between each individual feature, which results in misleading attributions, particularly in the presence of inverse relationships between features. \begin{table} \begin{tabular}{|l|l|l|} \hline & SS & Shapley Value \\ \hline \(f_{1}\) & \(\mathbf{0.000\pm 0.000}\) & \(0.335\pm 0.400\) \\ \hline \(f_{2}\) & \(\mathbf{0.000\pm 0.000}\) & \(1.143\pm 0.990\) \\ \hline \(f_{3}\) & \(\mathbf{0.000\pm 0.000}\) & \(0.540\pm 0.580\) \\ \hline \end{tabular} \end{table} Table 1: Mean Average Error \(\pm\) std for SS and SV attributions under \(v_{marg}\) for the three functions outlined in Section 6.1. SS perfectly identifies NSVGs for all three functions. For example, 
consider the following sub-component \((X_{1})/(1-X_{2})\) and a particular sample \(\mathbf{x}=(1,0.2)\). SV gives \(X_{1}\) a positive attribution but \(X_{2}\)'s attribution is negative. Under SS, \(X_{1}\) and \(X_{2}\) are considered as non-separable and awarded a positive attribution together. From its SV attribution, a user may opt to change \(X_{2}\) rather than \(X_{1}\); however, as these features jointly move the outcome from the baseline to the target, the impact of changing \(X_{2}\) in isolation could be cancelled out by the impact of \(X_{1}\). ### Synthetic Experiment: Interaction in the Data We adopt the approach of [8] and propose an underlying linear regression model \(f(\mathbf{X})=X_{0}+0.5X_{1}+0.8X_{3}+0.2X_{2}+0.5X_{4}\). We construct a synthetic dataset comprising five features (\(n=5\)). \((X_{2},X_{3},X_{4})\) are all modelled as i.i.d. and drawn independently from \(\mathcal{N}(-1,1)\). \(X_{0},X_{1}\), however, are modelled as dependent features where \(X_{1}=\rho X_{0}\). We generate a synthetic dataset \(X_{train},X_{test}\) consisting of \(k=(2000,100)\) samples of each feature and obtain the ground truth labels \(y_{train},y_{test}=f(\mathbf{X}_{train}),f(\mathbf{X}_{test})\). We next select a model \(g\) which is trained on \(\mathbf{X}_{train},\mathbf{y}_{train}\) to approximate \(f\). We calculate the attributions for each sample in \(\mathbf{X}_{test}\) generated by the SV under both \(v_{marg}\) and \(v_{cond}\) and the attributions from SS under \(v_{cond}\). To evaluate attributions we use the coefficients of the linear regression model as our ground truth attributions \(c=\{1,0.5,0.8,0.2,0.5\}\). We use \(MAE\) (Equation 9), where the ground truth for feature \(i\) in sample \(j\) is \(gt(X_{ij})=c_{i}x_{i,j}\). Off-manifold attributions in the presence of interaction in the data recover the ground truth attributions reliably when \(g\) is a linear model; however, this breaks down when non-linear models are used as the approximating function \(g\) [8]. We therefore compare attributions under \(g_{1}\), a linear regression model, and \(g_{2}\), an XGBoost model. Results are shown in Table 2, where SS outperforms SV on both \(g_{1}\) and \(g_{2}\). Under \(g_{1}\), the MAE is lower for SV Marginal than for SV Conditional, validating the findings in [8]. However, when the non-linear \(g_{2}\) is used, the attributions from SS and SV under \(v_{cond}\) outperform SV under \(v_{marg}\). The attributions provided by SS outperform those generated by SV across both models. We now show experimentally that SS under an on-manifold value function avoids the issues related to sensitivity. To do this we add a dummy variable \(X_{5}=X_{0}\) to the dataset \(\mathbf{X}\) such that \(X_{5}\) is not used by \(f\). We train another XGBoost model, \(g_{3}\), using the new dataset and generate the three sets of attributions as before. Results are shown in Table 2. Under the influence of the dummy, the MAE of SV under \(v_{cond}\) increases, as the attribution of each of the non-dummy variables moves further away from its true value to accommodate the attribution of the new feature despite it having no effect on the true output. In contrast, SS includes this dummy feature in the non-separable set \(\{X_{0},X_{1}\}\); the resulting attribution to the existing features is unchanged and thus the MAE remains constant under the inclusion of dummy variables, demonstrating SS's robustness to how the underlying phenomenon is modelled. 
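Throughout these experiments, \(v_{cond}\) is approximated by sampling from the conditional Gaussian described above, with \(\boldsymbol{\mu}\) and \(\boldsymbol{\Sigma}\) estimated from \(\mathbf{X}_{input}\). A minimal NumPy sketch of that sampling step (illustrative only; the helper names and the vectorised `model` interface are our own assumptions) is:

```python
import numpy as np

def conditional_gaussian(mu, Sigma, S, x_S):
    """Parameters of p(X_Sbar | X_S = x_S) when X ~ N(mu, Sigma); S is a list of indices."""
    Sbar = [i for i in range(len(mu)) if i not in S]
    K = Sigma[np.ix_(Sbar, S)] @ np.linalg.inv(Sigma[np.ix_(S, S)])
    mu_cond = mu[Sbar] + K @ (x_S - mu[S])
    Sigma_cond = Sigma[np.ix_(Sbar, Sbar)] - K @ Sigma[np.ix_(S, Sbar)]
    return Sbar, mu_cond, Sigma_cond

def v_cond(model, x, S, mu, Sigma, baseline, n_samples=200, seed=0):
    """Monte-Carlo estimate of v_cond(x, X_S) = E[f | X_S = x_S] - baseline.

    `x`, `mu` are 1-d arrays, `model` accepts a 2-d array of samples,
    and `baseline` is the estimate of E[f(X)].
    """
    if not S:
        return 0.0
    Sbar, m, C = conditional_gaussian(mu, Sigma, S, x[S])
    if not Sbar:                                   # conditioning on every feature
        return float(model(x[None, :]).mean()) - baseline
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(m, C, size=n_samples)
    samples = np.tile(x, (n_samples, 1))
    samples[:, Sbar] = draws                       # keep x_S fixed, resample the rest
    return float(model(samples).mean()) - baseline
```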
### Shapley Sets of Real World Benchmarks We now evaluate SS on real data: the Diabetes, Boston and Correlation datasets from the Shap library [16]. For each dataset we train either an XGBoost or Random Forest model on the provided train set, obtaining \(R^{2}\) scores of 0.90 (RF), 0.89 (RF) and 0.86 (XGB) respectively. We compute SS attributions for 100 randomly selected samples from the test set under both \(v_{marg}\) and \(v_{cond}\). As the dimensionality of the datasets now exceeds that for which the true Shapley values can be computed, we compare the SS attributions with the most commonly used approximation techniques: Tree Shap (TS) [17] and Kernel Shap (KS) [16]. \begin{table} \begin{tabular}{|l|l|l|l|} \hline & SS & Shap Marg & Shap Cond \\ \hline \(g_{1}\) & \(\mathbf{0.204\pm 0.114}\) & \(0.226\pm 0.121\) & \(0.211\pm 0.127\) \\ \hline \(g_{2}\) & \(\mathbf{0.071\pm 0.031}\) & \(0.082\pm 0.032\) & \(0.073\pm 0.031\) \\ \hline \(g_{3}\) & \(\mathbf{0.074\pm 0.044}\) & \(0.110\pm 0.068\) & \(0.150\pm 0.059\) \\ \hline \end{tabular} \end{table} Table 2: Mean Average Error \(\pm\) std for SS under \(v_{cond}\) and SV under \(v_{cond}\) and \(v_{marg}\) for the three experiments outlined in Section 6.2. SS has lower MAE than SV for all models. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & SS Int & SS Cond & KS & TS \\ \hline B & \(0.020\pm 0.022\) & \(\mathbf{0.007\pm 0.006}\) & \(0.046\pm 0.047\) & \(0.047\pm 0.048\) \\ \hline D & \(0.081\pm 0.075\) & \(\mathbf{0.050\pm 0.039}\) & \(0.103\pm 0.085\) & \(0.010\pm 0.082\) \\ \hline C & \(\mathbf{0.005\pm 0.007}\) & \(0.033\pm 0.029\) & \(0.075\pm 0.057\) & \(0.072\pm 0.055\) \\ \hline \end{tabular} \end{table} Table 3: Average deletion \(\pm\) std for the attributions generated by SS under \(v_{marg}\) and \(v_{cond}\), KS and TS for the Boston (B), Diabetes (D) and Correlation (C) datasets. SS attributions have lowest deletion score across all datasets. Under its original implementation, KS is an approximation of an off-manifold value function and breaks the relationship between input features and the data distribution. TS does not make this assumption and is presented as an on-manifold Shapley value approximation. However, in practice TS performs poorly when there is high dependence between features in the dataset [3]. To evaluate the attributions generated by SS, KS and TS in the absence of a ground truth attribution, we use modified versions of the deletion and sensitivity measures which have been used widely across the literature [18]. Deletion is built on the intuition that the magnitude of a feature's score should reflect its impact on the output. Our metric therefore measures the absolute distance between the target prediction, \(v(\mathbf{x},\mathbf{X}_{\varnothing})\), and the prediction of a given sample, \(v(\mathbf{x},\mathbf{X})\), after the most important feature \(X_{i}^{\prime}=x_{i}\), determined by the attribution method under consideration \(m\), has been removed. \[AD=\frac{1}{k}\sum_{j=1}^{k}|v(\mathbf{x}_{j},\varnothing)-v(\mathbf{x}_{j},N \backslash\{i\})| \tag{10}\] Low AD indicates that the attribution technique has correctly identified an important feature to remove. As SS attributes to sets of features we allow \(\mathbf{X}^{\prime}\) to be a non-separable variable set as generated by SS. This may influence the reliability of AD due to a varying number of features being removed from an instance. 
We therefore also assess the sensitivity of the attribution technique, which calculates the difference between the sum of all the attributions given by the attribution technique and the prediction of the sample. Ideal attributions have a low sensitivity. \[AS=\frac{1}{k}\sum_{j=1}^{k}|v(\mathbf{x}_{j},N)-\sum_{i=1}^{n}m(\mathbf{X}_{ ij})| \tag{11}\] Tables 3 and 4 show that SS has lower (better) deletion than TS and KS across all three datasets. However, KS has the lowest sensitivity score on the Diabetes dataset; we note that for this dataset there is high variance in the sensitivity score for both SS attributions. This can be largely explained by the sensitivity of SS to the setting of \(\epsilon\), which is discussed further in Section 7. Figure 1 shows the advantage of sets rather than individual attributions. The red and green curves (KS and SS respectively) show the change in prediction as each feature in the sorted attributions is masked consecutively from the input. By considering the effect of sets of interacting features rather than individual features we can see that SS avoids the sub-optimal behaviour of KS which arises due to the interaction effects between features in the model masking each other's importance. Figure 1 also validates the use of deletion to compare individual and set attributions, as it is clear that masking more features does not guarantee a lower deletion score. ## 7 Conclusions, Limitations and Future Work This paper has introduced Shapley Sets (SS), a novel method for feature attribution, which automatically and optimally decomposes a function \(f\) into a set of NSVGs by which to award attribution. We have shown how SS generates more faithful explanations in the presence of feature interaction both in the data and in the model than Shapley value-based alternatives. To our knowledge, SS is the only method in the literature which automatically generates a grouped attribution vector. Below we explore some limitations of SS and ideas for future work. **Sensitivity to Parametrisation**: In Algorithm 2, \(\epsilon\) determines the degree to which two sets of variables are considered interacting. The original RDG algorithm recommends setting \(\epsilon\) proportional to the magnitude of the objective space. This setting works well for SS Interventional. However, we noticed a large variation in the variable grouping generated by SS Conditional under this setting of \(\epsilon\). This is not surprising as it is known that \(v_{cond}\) is sensitive to feature correlations in the data and it is difficult to know how much correlational structure to allow before two features are considered to be causally linked. Future work should therefore look at alternative methods of function decomposition which are not so dependent on the parametrisation of \(\epsilon\) [19]. **Assumption of Partially-Separable Model** SS assumes that the model to be explained is partially separable. 
If we consider the function \(f(\mathbf{X})=X_{1}X_{2}X_{3}\), SS would result in a single attribution to all three features of \(f(\mathbf{x})\). \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & SS Int & SS Cond & KS & TS \\ \hline B & \(0.015\pm 0.049\) & \(\mathbf{0.006\pm 0.031}\) & \(0.029\pm 0.000\) & \(0.030\pm 0.000\) \\ \hline D & \(0.021\pm 0.099\) & \(0.017\pm 0.067\) & \(\mathbf{0.004\pm 0.000}\) & \(0.076\pm 0.000\) \\ \hline C & \(\mathbf{0.000\pm 0.010}\) & \(0.008\pm 0.020\) & \(0.001\pm 0.000\) & \(0.035\pm 0.000\) \\ \hline \end{tabular} \end{table} Table 4: Average sensitivity \(\pm\) std for SS under \(v_{marg}\) and \(v_{cond}\), KS and TS for the Boston (B), Diabetes (D) and Correlation (C) datasets. SS results in the lowest sensitivity for B and C yet KS achieves lowest sensitivity for D. This is not useful from an explanation perspective, although it does inform us about the nature of the underlying model. Furthermore, the assumption of a partially separable function is also made by the Shapley value [2]. Future work should consider function decomposition under a wider class of separability such as multiplicative separability, where associated algorithms decompose a function into its additively and multiplicatively separable variable sets [19]. Figure 1: Curves show the change in prediction of two individual samples from the Boston dataset as an increasing number of features, sorted in order of importance by the attributions returned by SS (green) and KS (red), are removed from the instance. Original and target predictions are shown by the black and blue horizontal lines. An ideal attribution would result in a sharp increase or decrease towards the target. In both samples, SS results in a quicker and smoother transition from original to target prediction. ## Acknowledgments We would like to thank Giulia Oechini, Alexis Monks, Isobel Shaw and Jennifer Yates for their invaluable support during the writing of this paper. This work was supported by an Alan Turing Institute PhD Studentship funded under EPSRC grant EP/N510129/1.
2308.08868
Computing complexity measures of degenerate graphs
We show that the VC-dimension of a graph can be computed in time $n^{\log d+1} d^{O(d)}$, where $d$ is the degeneracy of the input graph. The core idea of our algorithm is a data structure to efficiently query the number of vertices that see a specific subset of vertices inside of a (small) query set. The construction of this data structure takes time $O(d2^dn)$; afterwards, queries can be computed efficiently using fast M\"obius inversion. This data structure turns out to be useful for a range of tasks, especially for finding bipartite patterns in degenerate graphs, and we outline an efficient algorithm for counting the number of times specific patterns occur in a graph. The largest factor in the running time of this algorithm is $O(n^c)$, where $c$ is a parameter of the pattern we call its left covering number. Concrete applications of this algorithm include counting the number of (non-induced) bicliques in linear time, the number of co-matchings in quadratic time, as well as a constant-factor approximation of the ladder index in linear time. Finally, we supplement our theoretical results with several implementations and run experiments on more than 200 real-world datasets -- the largest of which has 8 million edges -- where we obtain interesting insights into the VC-dimension of real-world networks.
Pål Grønås Drange, Patrick Greaves, Irene Muzi, Felix Reidl
2023-08-17T09:01:47Z
http://arxiv.org/abs/2308.08868v1
# Computing complexity measures of degenerate graphs ###### Abstract We show that the VC-dimension of a graph can be computed in time \(n^{\lceil\log d+1\rceil}d^{O(d)}\), where \(d\) is the degeneracy of the input graph. The core idea of our algorithm is a data structure to efficiently query the number of vertices that see a specific subset of vertices inside of a (small) query set. The construction of this data structure takes time \(O(d2^{d}n)\); afterwards, queries can be computed efficiently using fast Möbius inversion. This data structure turns out to be useful for a range of tasks, especially for finding bipartite patterns in degenerate graphs, and we outline an efficient algorithm for counting the number of times specific patterns occur in a graph. The largest factor in the running time of this algorithm is \(O(n^{c})\), where \(c\) is a parameter of the pattern we call its _left covering number_. Concrete applications of this algorithm include counting the number of (non-induced) bicliques in linear time, the number of co-matchings in quadratic time, as well as a constant-factor approximation of the ladder index in linear time. Finally, we supplement our theoretical results with several implementations and run experiments on more than 200 real-world datasets--the largest of which has 8 million edges--where we obtain interesting insights into the VC-dimension of real-world networks. Our first achievement is an algorithm that computes the VC-dimension of a \(d\)-degenerate graph in time \(O(n^{\lceil\log d+1\rceil}d^{O(d)})\). A core concept is a novel data structure which enables us to efficiently query the size of the intersection of several neighbourhoods for a small set of vertices, described in Section 3, which we use to quickly determine whether a given candidate set is shattered by its neighbours. But the general idea of this algorithm can be generalised to other bipartite "patterns" like bicliques, co-matchings, and ladders (defined in Section 2.2). These objects are also closely related to notions of "complexity" of graphs. They appear, for example, in the study of graph width measures [9] and algorithm design for sparse classes [12] (see also there for connections to stability theory). Our general pattern-finding algorithm presented in Section 3 can count bicliques in linear time, co-matchings in quadratic time and find partial ladders in linear time; see Section 4 for these and further results. Dense structures like cliques or bicliques are famously important in the analysis of networks, and we suggest that co-matchings and ladders might be of similar interest--but without a program to compute them, we cannot hope for these statistics to be trialled in practice. We therefore implemented algorithms to compute the VC-dimension, ladder index, maximum biclique1 and maximum co-matching of a graph. To establish their practicality, we ran these four algorithms on 206 real-world networks from various sources; see Section 5. The VC-dimension algorithm in our experiments terminated within 10 minutes on networks with up to \(\sim\)33K vertices, the other three on networks up to \(\sim\)93K vertices. This is already squarely in the region of "practical" for certain types of networks and we believe that with further engineering--in particular to improve space efficiency--our implementation can be used to compute these statistics on much larger networks. 
Footnote 1: There are probably faster programs to compute bicliques in practice; we compute this statistic here as a baseline. **Prior work.** We briefly mention a few relevant previous articles on the subject. Eppstein, Löffler, and Strash [11] gave an algorithm for enumerating maximal cliques in \(d\)-degenerate graphs in \(O(dn3^{d/3})\) time, i.e., fixed-parameter tractable time when parameterized by the degeneracy. They also give experimental results showing that their algorithm works well on large real-world networks. Bera, Pashanasangi, and Seshadhri [1], extending the classic result by Chiba and Nishizeki [6], show that for all patterns \(H\) of size less than six, we can count the number of appearances of \(H\) in a \(d\)-degenerate graph \(G\) in time \(O(m\cdot d^{k-2})\), where \(m\) is the number of edges in \(G\) and \(k\) is the number of vertices in \(H\). Recently, Bressan and Roth [3] gave algorithms for counting copies of a graph \(H\) in a \(d\)-degenerate graph \(G\) in time \(f(d,k)\cdot n^{\mathbf{im}(H)}\log n\), for some function \(f\), where \(k\) again is the number of vertices in \(H\), \(n\) the number of vertices in \(G\), and \(\mathbf{im}(H)\) is the size of a largest induced matching in \(H\). ## 2 Preliminaries For an integer \(k\), we use \([k]\) as a short-hand for the set \(\{0,1,2,\ldots,k-1\}\). We use black-board bold letters like \(\mathbb{X}\) to denote sets \(X\) associated with a total order \(<_{\mathbb{X}}\). The _index function_\(\iota_{\mathbb{X}}\colon X\to\mathbb{N}\) maps elements of \(X\) to their corresponding position in \(\mathbb{X}\). We extend this function to sets via \(\iota_{\mathbb{X}}(S)=\{\iota_{\mathbb{X}}(s)\mid s\in S\}\). For any integer \(i\in[|X|]\) we write \(\mathbb{X}[i]\) to mean the \(i\)th element in the ordered set. An _index set_\(I\) for \(\mathbb{X}\) is simply a subset of \([|X|]\) and we extend the index notation to sets via \(\mathbb{X}[I]\coloneqq\{\mathbb{X}[i]\mid i\in I\}\). We write \(\pi(H)\) for the set of all permutations of \(H\). For a graph \(G\) we use \(V(G)\) and \(E(G)\) to refer to its vertex- and edge-set, respectively. We use the shorthands \(|G|\coloneqq|V(G)|\) and \(\|G\|\coloneqq|E(G)|\). An _ordered graph_ is a pair \(\mathbb{G}=(G,<)\) where \(G\) is a graph and \(<\) a total ordering of \(V(G)\). We write \(<_{\mathbb{G}}\) to denote the ordering for a given ordered graph and extend this notation to the derived relations \(\leqslant_{\mathbb{G}}\), \(>_{\mathbb{G}}\), \(\geqslant_{\mathbb{G}}\). We use the same notations for graphs and ordered graphs; additionally, we write \(N^{-}(u)\coloneqq\{v\in N(u)\mid v<_{\mathbb{G}}u\}\) for the _left neighbourhood_ and \(N^{+}(u)\coloneqq\{v\in N(u)\mid v>_{\mathbb{G}}u\}\) for the _right neighbourhood_ of a vertex \(u\in\mathbb{G}\). We further use \(d_{\mathbb{G}}^{-}(u)\) and \(d_{\mathbb{G}}^{+}(u)\) for the left and right degree, as well as \(\Delta^{-}(\mathbb{G})\coloneqq\max_{u\in\mathbb{G}}d_{\mathbb{G}}^{-}(u)\) and \(\Delta^{+}(\mathbb{G})\coloneqq\max_{u\in\mathbb{G}}d_{\mathbb{G}}^{+}(u)\). We omit the graphs in the subscripts if clear from the context. A graph \(G\) is \(d\)-degenerate if there exists an ordering \(\mathbb{G}\) such that \(\Delta^{-}(\mathbb{G})\leqslant d\). An equivalent definition is that a graph is \(d\)-degenerate if every subgraph has a vertex of degree at most \(d\). 
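Such an ordering can be obtained by repeatedly peeling off a vertex of minimum degree. The following is a small illustrative Python sketch of this procedure (ours, using a lazy heap rather than the bucket queue required for the linear-time bound mentioned below):

```python
import heapq

def degeneracy_ordering(adj):
    """Peel minimum-degree vertices; adj maps each vertex to a set of neighbours.

    Returns (order, degeneracy): 'order' lists vertices so that every vertex has
    at most 'degeneracy' neighbours appearing earlier in the list.
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    removed, order, degeneracy = set(), [], 0

    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue                       # stale heap entry, skip it
        removed.add(v)
        order.append(v)
        degeneracy = max(degeneracy, d)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
    order.reverse()                        # reverse peel order = degeneracy order
    return order, degeneracy
```

Listing the vertices in reverse order of removal ensures that every vertex has at most \(d\) neighbours to its left, matching the condition \(\Delta^{-}(\mathbb{G})\leqslant d\) above.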
The number of edges in a \(d\)-degenerate graph is bounded by \(dn\) and many important sparse graph classes--bounded treewidth, planar graphs, graphs excluding a minor--have finite degeneracy. The degeneracy ordering of a graph can be computed in time \(O(n+m)\)[14], and \(O(dn)\) for \(d\)-degenerate graphs. Let \(\mathcal{F}\subseteq 2^{U}\) be a set family over \(U\). We define the intersection of a set family with set \(X\subseteq U\) as \(\mathcal{F}\cap X\coloneqq\{F\cap X\mid F\in\mathcal{F}\}\). A set \(X\subseteq U\) is then _shattered_ by \(\mathcal{F}\) if \(\mathcal{F}\cap X=2^{X}\). The _graph representation_ of a set family \(\mathcal{F}\) is the bipartite graph \(G(\mathcal{F})=(\mathcal{F},U,E)\) where for each \(F\in\mathcal{F}\) and \(x\in U\) we have the edge \(Fx\in E\) iff \(x\in F\). In the other direction, we define for a graph \(G\) its _neighbourhood set system_\(\mathcal{F}(G)\coloneqq\{N(v)\mid v\in G\}\). The Vapnik-Chervonenkis dimension (VC-dimension) of a set family \(\mathcal{F}\subseteq 2^{U}\) is the size of the largest set in \(U\) that is shattered by \(\mathcal{F}\) and we write this quantity as \(\mathbf{vc}(\mathcal{F})\). The VC-dimension of a graph \(G\) is defined as the VC-dimension of its neighbourhood set system, i.e. \(\mathbf{vc}(G)\coloneqq\mathbf{vc}(\mathcal{F}(G))\). ### Set dictionaries In the following we will make heavy use of data structures that model functions of the form \(f:2^{U}\to\mathbb{Z}\) for some universe \(U\). Since the arguments in our use-case are assumed to be small, we use prefix-tries [16] in our theoretical analysis (see notes on practical implementations below): [Subset dictionary] Let \(U\) be a set and let \(\mathbb{U}\) be an arbitrary total order of \(U\). A _subset dictionary_\(D\) over \(U\) associates a key \(X\subseteq U\) with an integer \(D[X]\) by storing the sequence \(\mathbb{X}\) of \(X\) under \(<_{\mathbb{U}}\) in a prefix trie. Accordingly, insertion/update/deletion of a value for a key \(X\) takes time \(O(|X|)\)_if_ we can assume the key \(X\) to be present in some canonical order. Our algorithms all work on graphs imbued with a (degeneracy) ordering and we will sort the left-neighbourhood \(N^{-}(*)\) of each vertex according to this global ordering, which we will simply call "sorting the left-neighbourhoods" for brevity. Subsets of these left-neighbourhoods are assumed to inherit this ordering, which covers all operations that we will need in our algorithms, which in conclusion means that we can assume that all sets used as keys in subset dictionaries have a canonical ordering. Unless otherwise noted, we will use the convention that \(D[X]=0\) for all keys \(X\) that have not been inserted into \(D\). ### Bipartite patterns and left-covers [Pattern] A _pattern_\(H\) is a complete graph whose edges are partitioned into sets \(B\), \(R\), and \(W\) (black, red and white). We say that a graph \(G\)_contains_\(H\) (or \(H\)_appears in \(G\)) if there exists a vertex set \(S\subseteq V(G)\) and a bijection \(\phi\colon V(H)\to S\) such that \(uv\in B\implies\phi(u)\phi(v)\in E(G)\) and \(uv\in R\implies\phi(u)\phi(v)\not\in E(G)\). We say that a pattern \(H\) is _bipartite_ if the vertex set of \(H\) can be partitioned into two sets \(X,Y\) such that all edges inside of \(X\) and inside of \(Y\) are white. For a vertex \(v\in V(H)\) we write \(N(v)\) to denote its neighbours according to the black edge relation only. 
An _ordered_ pattern \(\mathbb{H}\) is a pattern whose vertex set comes with a linear order \(<_{\mathbb{H}}\). Given a vertex \(v\in\mathbb{H}\), we write \(N^{-}(u)\coloneqq\{v\in N(u)\mid v<_{\mathbb{H}}u\}\). A _ladder_ (sometimes called a chain graph) \(L_{n}\) of size \(n\) is a bipartite pattern defined on two vertex sequences \(A=(a_{i})_{i\in[n]}\) and \(B=(b_{i})_{i\in[n]}\), where \(a_{i}b_{j}\in B\) if \(i>j\) and \(a_{i}b_{j}\in R\) otherwise. Note that for any \(1\leqslant l\leqslant r\leqslant n\) the subgraph induced by the sequences \((a_{i})_{i\in[l,r]}\) and \((b_{i})_{i\in[l,r]}\) induces a ladder. A _semi-ladder_\(\tilde{L}_{n}\) has the same black edges, but only the edges \(a_{i}b_{i}\), \(i\in[n]\) are red. All the remaining edges are white: The _Ladder index_ of a graph \(G\) is the largest \(n\) such that \(G\) contains the pattern \(L_{n}\). A _co-matching_\(\overline{M}_{n}\) (also called _crown_) has black edges \(a_{i}b_{j}\) for \(i\neq j\) and red edges \(a_{i}b_{i}\) for \(i\in[n]\). Finally, the _shattered_ pattern \(U_{n}\) of size \(n\) has a side \(S\) (the _shattered_ set) of size \(n\) and a side \(W\) (the _witness_ set) of size \(2^{n}\). We index the vertices of \(W\) by subsets \(I\subseteq S\), then the vertex \(w_{I}\) has black edges into \(I\) and red edges into \(S\setminus I\): [Left-cover, left-covering number] Given an ordered bipartite pattern \(\mathbb{H}\) with bipartition \((X,Y)\), a _left-cover_ is a set of vertices \(C\subseteq V(\mathbb{H})\) such that either \(X\subseteq N^{-}(C)\cup C\) or \(Y\subseteq N^{-}(C)\cup C\). The _left-covering number \(\operatorname{lc}(\mathbb{H})\)_ is the minimum size of a left cover of \(\mathbb{H}\). For an (unordered) pattern \(H\) we define its left-covering number as \[\operatorname{lc}(H)\coloneqq\max_{\mathbb{H}\in\pi(H)}\operatorname{lc}( \mathbb{H}).\] Note that we include the covering set \(C\) itself in the cover, this is necessary since for a given ordering of a pattern some vertices might not have right neighbours and can therefore not be covered by left neighbourhoods. The left-covering number of a pattern is the first important measure that will influence the running time of the main algorithm presented later. The second important measure relates to the number of non-isomorphic "half"-ordered patterns we can obtain from a bipartite pattern, that is, how many distinct objects we find by ordering one partition. A useful tool to concretise this notion is the following function: [Signature] Let \(H\) be a bipartite pattern with bipartition \((X,Y)\) and let \(\mathbb{Z}\) be an ordering of \(Z\in\{X,Y\}\). 
Then the _signature_\(\sigma_{\mathbb{Z}}(H)\) is defined as the multiset \[\sigma_{\mathbb{Z}}(H)\coloneqq\{\!\{t_{\mathbb{Z}}(N(u))\mid u\in(X\cup Y) \setminus Z\}\!\}.\] For orderings \(\mathbb{Z},\mathbb{Z}^{\prime}\in\pi(Z)\) we define the equivalence relation \[\mathbb{Z}\sim_{H}\mathbb{Z}^{\prime}\iff\sigma_{\mathbb{Z}}(H)=\sigma_{ \mathbb{Z}^{\prime}}(H).\] [Half-ordering asymmetry] Given a bipartite pattern \(H\) with bipartition \((X,Y)\) and a partite set \(Z\in\{X,Y\}\), we define the _half-ordering asymmetry \(\operatorname{hoa}(H,Z)\)_as the number of equivalence classes under the \(\sim_{H}\) relation \[\operatorname{hoa}(H,Z)\coloneqq|\pi(Z)/\sim_{H}|\,.\] We further define the half-ordering asymmetry of \(H\) as \[\operatorname{hoa}(H)\coloneqq\max\{\operatorname{hoa}(H,X),\operatorname{hoa }(H,Y)\}.\] Alternatively, \(\operatorname{hoa}(H,Z)\coloneqq|\{\sigma_{\mathbb{Z}}(H)\mid\mathbb{Z}\in\pi( Z)\}|\). ## 3 A general pattern-finding algorithm We first describe a general-purpose algorithm for finding patterns in degenerate graphs. Afterwards, we will describe more specialised algorithms using similar ideas to find specific patterns. Let \(G\) be a \(d\)-degenerate graph and let \(H\) be a bipartite pattern with bipartition \((X,Y)\) where \(|X|\geqslant|Y|\). Then after a preprocessing time of \(O(|X|^{\operatorname{lc}(H)}|H|!+d2^{d}n)\), we can in time \(O\big{(}n^{\operatorname{lc}(H)}(4d\operatorname{lc}(H))^{|X|}d|X|^{3} \operatorname{hoa}(H)\big{)}\) count how often \(H\) appears in \(G\). The main ingredient of our algorithm will be the following data structure: Let \(\mathbb{G}\) be an ordered graph on \(n\) vertices with degeneracy \(d\). After a preprocessing time of \(O(d2^{d}n)\), we can, for any given \(S\subseteq V(G)\), compute a subset dictionary \(Q_{S}\) in time \(O(|S|2^{|S|}+d|S|^{2})\) which for any \(X\subseteq S\subseteq V(G)\) answers the query \[Q_{S}[X]\coloneqq\big{|}\{v\in G\mid S\cap N(v)=X\}\big{|}\] in time \(O(|X|)\). Let \(\mathbb{G}\) be an ordered graph with degeneracy \(d\). Then in time \(O(d2^{d}n)\) we can compute a subset dictionary \(R\) over \(V(G)\) which for any \(X\subseteq V(G)\) answers the query \[R[X]\coloneqq\big{|}\{v\in G\mid X\subseteq N^{-}(v)\}\big{|}\] in time \(O(|X|)\). Proof.: Given \(\mathbb{G}\) as input, we compute \(R\) as follows: ``` Initialize \(R\) as an empty trie storing integers; for\(u\in\mathbb{G}\)do for\(X\subseteq N^{-}(u)\)do \(R[X]\gets R[X]+1\) // Non-existing keys are treated as zero return\(R\); ``` Note that every update of the data structure with key \(X\) takes time \(O(|X|)\), since \(|X|\leqslant d\) it follows that the total initialisation time is bounded by \(O(d2^{d}n)\). Let \(\mathbb{G}\) be an ordered graph with degeneracy \(d\) and let \(S\subseteq V(G)\). If we assume the subset dictionary \(R\) of Lemma 3 is given, we can construct in time \(O(|S|2^{|S|}+d|S|^{2})\) a subset dictionary \(Q_{S}\) over \(S\) which for \(X\subseteq S\) answer the query \[Q_{S}[X]\coloneqq\left|\{v\in G\mid S\cap N(v)=X\}\right|\] in time \(O(|X|)\). Proof.: We first construct an auxiliary subset dictionary \(\hat{Q}\) which for \(X\subseteq S\) answers the query \[\hat{Q}_{S}[X]\coloneqq\left|\{v\in G\mid S\cap N^{-}(v)=X\}\right|\] in time \(O(|X|)\). We first prove the following claim which implies that \(\hat{Q}_{S}\) is the (upwards) Mobius inversion of \(R\) over \(S\) and hence can be computed in time \(O(|S|2^{|S|})\) using Yate's algorithm [17, 13, 2]. \(\rhd\)Claim 10. 
\(\rhd\) Claim 10. \(\left|\{v\in G\mid S\cap N^{-}(v)=X\}\right|=\sum_{X\subseteq Y\subseteq S}(-1)^{|Y\setminus X|}R[Y]\).

Proof.: First consider \(v\not\geq_{\mathbb{G}}X\). Then \(X\) cannot be contained in \(N^{-}(v)\) and therefore \(v\) does not contribute to the left-hand side. Note that \(v\) is not counted by \(R[Y]\) for any \(Y\supseteq X\), therefore \(v\) does not contribute to the right-hand side either. Consider therefore \(v\geq_{\mathbb{G}}X\). First, assume that \(S\cap N^{-}(v)=X\) and therefore \(v\) contributes to the left-hand side. Then \(v\) is counted on the right-hand side exactly once, by the term \(R[X]\), which has a positive sign. Consider now \(v\) with \(S\cap N^{-}(v)\neq X\). If \(X\not\subseteq N^{-}(v)\), then \(v\) does not contribute to the left-hand side and it is not counted by any term \(R[Y]\), \(Y\supseteq X\), on the right-hand side. We are therefore left with vertices \(v\) where \(I\coloneqq S\cap N^{-}(v)\) satisfies \(X\subset I\). Note that such a \(v\) is counted by every term \(R[Y]\) with \(X\subseteq Y\subseteq I\). Since
\[\sum_{X\subseteq Y\subseteq I}(-1)^{|Y\setminus X|}=\sum_{0\leqslant k\leqslant|I\setminus X|}(-1)^{k}\binom{|I\setminus X|}{k}=0,\]
we conclude that these counts of \(v\) cancel out and contribute a sum-total of zero to the right-hand side. This covers all cases and we conclude that the claim holds.

It remains to be shown how the query \(Q_{S}[X]\) can be computed using \(\hat{Q}_{S}[X]\). To this end, consider a vertex \(v\in G\) where \(S\cap N(v)\neq S\cap N^{-}(v)\), as these contribute to \(\hat{Q}_{S}[X]\) but must not be counted by \(Q_{S}[X]\). Note that any such vertex must be contained in \(N^{-}(S)\) since \(v\) has at least one right-neighbour in \(S\). Accordingly, we apply the following correction to \(\hat{Q}_{S}\):

```
Let Q_S = Q̂_S
for u in N^-(S) do
    Q_S[N^-(u) ∩ S] ← Q_S[N^-(u) ∩ S] - 1
    Q_S[N(u) ∩ S]  ← Q_S[N(u) ∩ S] + 1
```
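To make the two-stage construction concrete (accumulating \(R\) over left-neighbourhood subsets, Möbius-inverting over \(S\), and then applying the correction loop above), here is a small Python sketch on a toy graph. It is our own illustration only, not the authors' Rust implementation, and it uses plain dictionaries instead of tries, so the per-query cost differs from the stated bounds.

```python
from itertools import combinations

def left_neighbourhoods(order, edges):
    """N^-(v) with respect to the given (degeneracy) ordering."""
    pos = {v: i for i, v in enumerate(order)}
    nbrs = {v: set() for v in order}
    for u, v in edges:
        nbrs[u].add(v); nbrs[v].add(u)
    return {v: frozenset(w for w in nbrs[v] if pos[w] < pos[v]) for v in order}, nbrs

def build_R(left):
    """R[X] = number of vertices v with X a subset of N^-(v)."""
    R = {}
    for v, L in left.items():
        for k in range(len(L) + 1):
            for X in combinations(sorted(L), k):
                R[frozenset(X)] = R.get(frozenset(X), 0) + 1
    return R

def build_Q(S, order, left, nbrs, R):
    """Q_S[X] = number of vertices v with S ∩ N(v) = X."""
    S = frozenset(S)
    # Möbius inversion of R over S gives Q̂_S[X] = #{v : S ∩ N^-(v) = X}.
    Qhat = {}
    for k in range(len(S) + 1):
        for X in map(frozenset, combinations(sorted(S), k)):
            total = 0
            for j in range(len(S - X) + 1):
                for Y in map(frozenset, combinations(sorted(S - X), j)):
                    total += (-1) ** len(Y) * R.get(X | Y, 0)
            Qhat[X] = total
    # Correction: vertices with a right-neighbour in S were filed under the wrong key.
    Q = dict(Qhat)
    for u in order:
        if nbrs[u] & S and not (nbrs[u] & S <= left[u]):  # u is a left-neighbour of S
            Q[left[u] & S] = Q.get(left[u] & S, 0) - 1
            Q[frozenset(nbrs[u]) & S] = Q.get(frozenset(nbrs[u]) & S, 0) + 1
    return Q

# Toy example: a path a-b-c-d with ordering (a, b, c, d).
order = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]
left, nbrs = left_neighbourhoods(order, edges)
Q = build_Q({"a", "c"}, order, left, nbrs, build_R(left))
print(Q[frozenset({"a", "c"})])  # vertices adjacent to exactly {a, c}: only b, so 1
```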
We now iterate through all subsets \(C\subseteq V(G)\) of size \(\operatorname{lc}(H)\) and for each such set we iterate through all subsets \(Z\subseteq N_{\mathbb{G}}^{-}(C)\cup C\) of size \(|X|\) or \(|Y|\), in total this takes time \(O(n^{\operatorname{lc}(H)}((d+1)\operatorname{lc}(H))^{|X|})\). We describe the remainder of the algorithm for a set \(X=Z\) of size \(|X|\), the procedure for a set \(Y\) works analogously. Let \(\mathbb{X}\) be the ordering of \(X\) in \(\mathbb{G}\). To verify that \(X\) can be completed into a pattern \(H\) in \(G\), we compute the data structure \(Q_{X}\) in time \(O(|X|2^{|X|}+d|X|^{2})\) as per Theorem 7. To check whether \(H\) exists in \(G\), we iterate through all signatures \(\sigma\in\mathcal{X}\) and test whether \(Q_{X}[\mathbb{X}[A]]>0\) for all index sets \(A\in\sigma\), this takes time \(O(|\mathcal{X}||X||Y|)\), in total the verification step for \(X\) takes time \[O\big{(}(|X|2^{|X|}+d|X|^{2})\cdot|\mathcal{X}||X||Y|\big{)}=O\big{(}d|X|^{3}2 ^{|X|}\operatorname{hoa}(H)\big{)}\] where we used that \(|X|\geqslant|Y|\) and \(|X|\geqslant 2\). This bound also holds for checking \(Y\) since \(|\mathcal{X}|+|\mathcal{Y}|\leqslant 2\operatorname{hoa}(H)\). Finally, if we exhaust all orderings of \(H\) without finding the pattern, we report that it does not exist in \(G\). To _count_ in how many ways \(X\) can be extended into the pattern \(H\) in \(G\), we compute \[c_{H,X}\coloneqq\sum_{\sigma\in\mathcal{X}}\prod_{A^{(k)}\in\sigma}\binom{Q_ {X}[\mathbb{X}[A]]}{k}\] where \(k\) denotes the multiplicity of \(A\) in the multiset \(\sigma\). Note, however, that we have to take care not to double-count the contribution of \(X\) to the overall count as we might encounter the set \(X\) multiple times. To that end, we record the intermediate result by setting \(K[X]\coloneqq c_{H,X}\) and we forgo the above computation if \(X\) exists already as a key in \(K\). The computation of \(c_{H,X}\) and this additional book keeping takes time \(O(|X|+|\mathcal{X}||X||Y|)\), in total we arrive at the same running time \(O\big{(}d|X|^{3}2^{|X|}\operatorname{hoa}(H)\big{)}\) like for the decision variant. After exhausting all orderings of \(H\) we report back the number of times \(H\) appears in \(G\) as the sum of all entries of \(K\). The total running time of either variant of the algorithm is, as claimed, \[O\Big{(}|X|^{\operatorname{lc}(H)}|H|!+d2^{d}n+dn+n^{\operatorname {lc}(H)}\big{(}(d+1)\operatorname{lc}(H)\big{)}^{|X|}\cdot d|X|^{3}2^{|X|} \operatorname{hoa}(H)\Big{)}\] \[=O\Big{(}|X|^{\operatorname{lc}(H)}|H|!+d2^{d}n+n^{\operatorname {lc}(H)}(4d\operatorname{lc}(H))^{|X|}d|X|^{3}\operatorname{hoa}(H)\Big{)}.\qed\] ## 4 Concrete applications ### Finding bicliques and co-matchings We note that \(\operatorname{lc}(K_{t,t})=1\) and \(\operatorname{hoa}(K_{t,t})=1\), therefore the application of Theorem 4 gives the following: Let \(G\) be a \(d\)-degenerate graph. Then we can compute the number of biclique patterns \(K_{s,t}\) (\(s\geqslant t\)) in time \(O\bigl{(}s\cdot(2s)!+d2^{d}n+n(4d)^{s}ds^{3}\bigr{)}\). Let \(\overline{M}_{t}\) be a co-matching on \(2t\) vertices. We will assume in the following that the partite sets of \(\overline{M}_{t}\) are \(X\coloneqq(x_{1},\ldots,x_{t})\) and \(Y\coloneqq(y_{1},\ldots,y_{t})\) so that the edges \(x_{i}y_{i}\) for \(i\in[t]\) are forbidden. \(\operatorname{lc}(\overline{M}_{t})=2\) and \(\operatorname{hoa}(\overline{M}_{t})=1\). 
Proof.: Let \(\bar{\mathbb{M}}_{t}\) be an ordering of \(\overline{M}_{t}\) and let \(z\) be the last vertex in that order. Then \(N^{-}(z)\) covers all vertices of one partite set except one vertex \(z^{\prime}\). Thus \(\{z,z^{\prime}\}\) is a left-cover of \(\bar{\mathbb{M}}_{t}\). To determine the half-ordering asymmetry, note that for _every_ ordering \(\mathbb{Z}\) of \(Z\in\{X,Y\}\) the signature \(\sigma_{\mathbb{Z}}(\overline{M}_{t})\) is simply the set \(\binom{[t]}{t-1}\), so the total number of signatures is one. Let \(G\) be a \(d\)-degenerate graph. Then we can compute the number of co-matching patterns \(\overline{M}_{t}\) in time \(O\bigl{(}t^{2}(2t)!+d2^{d}n+n^{2}(8d)^{t}dt^{3}\bigr{)}\). ### Finding shattered sets A direct application of Theorem 4 to locate a shattered pattern \(U_{t}\) is unsatisfactory as the running time will include a factor of \(n^{t}\) since \(\operatorname{lc}(U_{t})=t\). By the following observation, we can bound \(t\) by the degeneracy of the graph, but we can greatly improve the running time by further adjusting the algorithm. Let \(G\) be a \(d\)-degenerate graph. Then \(\operatorname{\mathbf{vc}}(G)\leqslant d+1\). Proof.: Assume \(S\subseteq V(G)\) is shattered by \(W\subseteq V(G)\), with \(S=|\operatorname{\mathbf{vc}}(G)|\). Let \(W^{\prime}\subseteq W\) be those witnesses that have \(|S|-1\) neighbours in \(S\). Then \(G[W^{\prime}\cup S]\) induces a graph of minimum degree \(|S|-1\) and we must have that \(|S|-1\leqslant d\) and accordingly \(\operatorname{\mathbf{vc}}(G)=|S|\leqslant d+1\). The core observation that allows further improvements is that many orderings of \(U_{d+1}\) have degeneracy _larger_ than \(d\) and can therefore not appear in a \(d\)-degenerate graph. In particular, the ordering in which all witnesses of \(U_{d+1}\) appear before the shattered set has degeneracy \(2^{d+1}\) and can therefore be ruled out. We refine this idea further in the following lemma. Let \(\mathbb{G}\) be a \(d\)-degenerate ordering of a graph \(G\). Let \(G\) contain the shattered pattern \(U_{t}\) and let \(\mathbb{U}_{t}\coloneqq\mathbb{G}[U_{t}]\) be its ordering. Then \(\operatorname{lc}(\mathbb{U}_{t})\leqslant\lceil\log d+1\rceil\). Specifically, we either have that \(t\leqslant\lceil\log d+1\rceil\) or that \(\mathbb{U}_{t}\) can be covered by \(\lceil\log d+1\rceil\) witness vertices. Proof.: Let \(S=(s_{1},\ldots,s_{t})\) and \(W=(w_{1},\ldots,w_{2^{t}})\) be the vertices of \(U_{t}\) in \(G\) and let the indices of the variables reflect the ordering of the corresponding vertices in \(\mathbb{U}_{t}\). Partition the set \(S\) into \(p\coloneqq\lceil\log d+1\rceil\) sets \(S_{1},\ldots,S_{p}\) such that each set has size at least \(\lfloor t/p\rfloor\) and at most \(\lceil t/p\rceil\). For each set \(S_{i}\) define the set of "apex"-witnesses \(A_{i}\coloneqq\{w\in W\mid N(w)\supset S_{i}\}\). Note that, for all \(i\in[p]\), \[|A_{i}|=2^{|S\setminus S_{i}|}\geqslant 2^{t-\lceil t/p\rceil}=2^{\lceil t \frac{p-1}{p}\rceil}.\] We call a set \(A_{i}\)_good_ if \(\max_{\mathbb{G}}A_{i}>\max_{\mathbb{G}}S_{i}\), that is, at least one apex vertex from \(A_{i}\) can be found to the right of \(S_{i}\). We now distinguish two cases: **Case 1**. All \(A_{i}\), \(i\in[p]\), are good. It follows that \(\mathbb{U}_{t}\) can be left-covered by taking one vertex from each \(A_{i}\), \(i\in[p]\). We conclude that \(\operatorname{lc}(\mathbb{U}_{t})\leqslant p=\lceil\log d+1\rceil\). **Case 2**. Some \(A_{i}\), \(i\in[p]\), is not good. 
Let \(u=\max_{\mathbb{G}}S_{i}\) be the last vertex in \(S_{i}\), note that \(A_{i}\leqslant_{\mathbb{G}}u\) and accordingly \(A_{i}\subseteq N^{-}(u)\). But then we must have that \(|A_{i}|\leqslant d\) and accordingly that \[2^{\lceil t\frac{p-1}{p}\rceil}\leqslant d \iff\lceil t\frac{p-1}{p}\rceil\leqslant\log d\implies t\frac{p-1} {p}\leqslant\log d\iff t\leqslant\frac{p}{p-1}\log d\] \[\iff t\leqslant\frac{\lceil\log d+1\rceil}{\lceil\log d+1 \rceil-1}\log d=\frac{\log d}{\lceil\log d\rceil}\lceil\log d+1\rceil \leqslant\lceil\log d+1\rceil.\] We therefore find that \(\operatorname{lc}(\mathbb{U}_{t})\leqslant|S|\leqslant\lceil\log d+1\rceil\). Let \(G\) be a \(d\)-degenerate graph on \(n\) vertices. Then we can determine the VC-dimension of its neighbourhood set system \(\mathcal{F}(G)\) in time \(O(n^{\lceil\log d+1\rceil}d^{d+2}(2d\log d)^{d+1})\). Proof.: We first compute an ordering \(\mathbb{G}\) of \(G\) with degeneracy \(d\) in time \(O(dn)\) and sort all left-neighbourhoods in time \(O(d\log d\cdot n)\). Let \(p\coloneqq\lceil\log d+1\rceil\) in the following. Let \(\mathbb{U}_{t}=(S,W)\) be a shattered set of size \(t\leqslant d+1\) in \(\mathbb{G}\). By Lemma 3 we then have that \(\operatorname{lc}(\mathbb{U}_{t})\leqslant p\). Therefore to locate the set \(S\) we first guess up to \(p\) vertices and then exhaustively search through their (closed) left-neighbourhoods in time \[\binom{n}{p}\binom{dp}{t}\leqslant\Big{(}\frac{en}{p}\Big{)}^{p}\Big{(}\frac {edp}{t}\Big{)}^{t}=O\left(n^{\lceil\log d+1\rceil}(d\log d)^{d+1}\right).\] Now that we can locate \(S\) we apply Theorem 3.2 in order to verify that \(S\) is indeed shattered: For each candidate set \(S\) from the previous step, we compute a subset dictionary \(Q_{S}\) in time \(O(|S|2^{|S|}+d|S|^{2})=O(d2^{d})\) and then check whether \(Q_{S}[X]>0\) for each \(X\subseteq S\). This latter step takes time \(O(|S|2^{|S|})\) and is therefore subsumed by the construction time of \(Q_{S}\). We conclude that the algorithm runs in total time \[O(d\log d\cdot n)+O(d2^{d}n)+O\left(n^{\lceil\log d+1\rceil}(d\log d)^{d+1} \cdot d2^{d}\right)=O\left(n^{\lceil\log d+1\rceil}d^{d+2}(2d\log d)^{d+1}\right)\] as claimed. We note that the exponent of \(\lceil\log d+1\rceil\) in the running time is almost tight: Graph VC-dimension parameterized by the degeneracy \(d\) of the input graph cannot be solved in time \(f(d)\cdot n^{o(\log d)}\) unless all problems in SNP can be solved in subexponential time. Proof.: We adapt the \(\operatorname{W}[1]\)-hardness reduction from \(k\)-Clique to VC-dimension by Downey, Evans, and Fellows [8] and combine it with the result by Chen _et al_. [5, 4] which states that \(k\)-Clique cannot be solved in time \(f(k)n^{o(k)}\) unless all problems in SNP admit subexponential-time algorithms. Given an instance \((H,k)\) for \(k\)-Clique, we construct a graph \(G\) as follows. We first create \(k\) copies \(V_{1},\ldots,V_{k}\) of \(V(H)\). For \(v\in H\), let us denote its copies by \(v^{(1)},\ldots,v^{(k)}\) with \(v^{(i)}\in V_{i}\) for \(i\in[k]\). 
We now add the following vertices and edges: * A single isolated vertex \(w_{0}\), * a vertex set \(W_{1}\) which contains one pendant vertex for each \(v^{(i)}\), \(v\in H\) and \(i\in[k]\), * a vertex set \(W_{2}\) which for each edge \(uv\in H\) contains \(\binom{k}{2}\) vertices \(w_{uv}^{ij}\), \(i,j\in[k]\), each of which \(u^{(i)}\) and \(v^{(j)}\) as its only neighbours, and * a vertex set \(A\) which for each index set \(I\subseteq[k]\) contains a vertex \(a_{I}\) which is connected to all vertices in \(V_{i}\) for each \(i\in I\). Note that the graph is bipartite with partite sets \(\mathcal{V}\coloneqq V_{1}\cup\dots\cup V_{k}\) and \(\mathcal{W}\coloneqq W_{1}\cup W_{2}\cup A\). Let us first show that if \(H\) contains a clique of size \(k\) then \(G\) contains a shattered set of size \(k\). Let \(u_{1},\dots,u_{k}\) be distinct vertices that form a complete graph in \(H\). We claim that then the set \(S\coloneqq\{u_{1}^{(1)},\dots,u_{k}^{(k)}\}\) is shattered in \(G\). First, note that for every subset \(X\subset S\), \(|X|\geqslant 3\), there exists a witness vertex \(a\in A\) such that \(N(a)\cap S=X\). For the empty set we have the witness \(w_{0}\), for every singleton subset \(\{u\}\subseteq S\) we have that the pendant vertex \(p\in N(u)\cap W_{1}\) witnesses \(\{u\}\). Therefore, only subsets of size exactly two need to be witnesses to shatter \(S\). Consider \(\{u_{i}^{(i)},u_{j}^{(j)}\}\subseteq S\) for \(i\neq j\). Since \(u_{i}u_{j}\in H\), the vertex \(w_{u_{i}u_{j}}^{ij}\) exists in \(W_{2}\) and its neighbourhood in \(S\) is exactly \(\{u_{i}^{(i)},u_{j}^{(j)}\}\). We conclude that all subsets of size two in \(S\) are witnessed as well and therefore \(S\) is shattered. In the other direction, assume that \(G\) contains a shattered set \((S,W)\) of size \(k\). Without loss of generality, assume that \(k\geqslant 3\). \(\rhd\) Claim 18.: \(S\subseteq\mathcal{V}\) and \(W\subseteq\mathcal{W}\). Proof.: Since \(G\) is bipartite we either have that \(S\subseteq\mathcal{V}\) and \(W\subseteq\mathcal{W}\) or that \(S\subseteq\mathcal{W}\) and \(W\subseteq\mathcal{V}\). Let us now show that the latter is impossible. Since \(k\geqslant 3\) we have that every vertex in \(S\) has degree at least four. Accordingly, \(W\) cannot contain vertices from \(W_{1}\) or \(W_{2}\), which leaves us with \(W\subseteq A\). However, all vertices in \(V_{i}\), \(i\in[k]\), have the exact same neighbours in \(A\). Therefore only \(k\) subsets of \(A\) are witnessed by vertices in \(\mathcal{V}\) and therefore the largest shattered set in \(A\) has size at most \(\log k\). We conclude that \(S\) cannot be contained in \(A\) and the claim holds. We now claim that \(|S\cap V_{i}|=1\) for all \(i\in[k]\). Assume otherwise, so let \(u^{(i)},v^{(i)}\in S\) for some \(i\in[k]\). But then the set \(\{u^{i},v^{i}\}\) cannot be witnessed: not by a vertex from \(W_{1}\), since it only contains vertices with one neighbour, not by a vertex from \(W_{2}\), since these vertices each have at most one neighbour in each set \(V_{i}\), and not by a vertex from \(A\) since we need all \(2^{k}-\binom{k}{2}-k-1\) vertices of \(A\) to witness subsets of \(S\) of size at least three. Therefore \(S\) intersects each \(V_{i}\) in exactly one vertex. Since \(S\) is shattered, every subset \(\{u^{(i)},v^{(j)}\}\), \(i\neq j\), is shattered. By the same logic as above, this can only be due to a witness \(w_{uv}^{ij}\in W_{2}\) and therefore \(uv\in H\). 
We conclude that indeed \(u_{1},\dots,u_{k}\) induce a complete graph in \(H\), as claimed. Finally, we need to determine the degeneracy of \(G\). Consider the following elimination sequence: We first delete all of \(\{w_{0}\}\cup W_{1}\cup W_{2}\), all of which have degree at most two. Note now that all vertices in \(\mathcal{V}\) have at most \(|A|<2^{k}\) neighbours in \(A\), so we delete \(\mathcal{V}\) and then \(A\). In total, the maximum degree we encountered in this deletion sequence is \(<2^{k}\). Assume we could solve Graph VC-Dimension in time \(f(d)n^{o(\log d)}\). In the above reduction the degeneracy of the constructed graph is \(d<2^{k}\), thus this running time for Graph VC-Dimension would imply a running time of \[f(d)\cdot n^{o(\log d)}=f(2^{k})\cdot n^{o(\log 2^{k})}=f(2^{k})\cdot n^{o(k)}\] for \(k\)-Clique. We conclude that VC-dimension parameterized by the degeneracy of the input graph cannot be solved in time \(f(d)n^{o(\log d)}\) unless all problems in SNP can be solved in subexponential time. We note that Lemma 15 allows us to approximate the VC-dimension of degenerate graphs. Let \(G\) be a \(d\)-degenerate graph on \(n\) vertices. Then for any \(0<\varepsilon\leqslant 1\) we can approximate the VC-dimension of \(G\) in time \(O(d2^{d}(2n)^{\lceil\varepsilon(1+\log d)\rceil})\) within a factor of \(\varepsilon\). Proof.: We first compute a \(d\)-degenerate ordering \(\mathbb{G}\) of \(G\) in time \(O(dn)\) and sort its left-neighbourhoods in time \(O(d\log d\cdot n)\). Let \(U_{t}=(S,W)\) be the largest shattered set in \(G\) and let \(\mathbb{U}_{t}\) be its ordering in \(\mathbb{G}\). We further prepare the use of Theorem 3.2 by computing the necessary data structure in time \(O(d2^{d}n)\). Let \(c\coloneqq\lceil\varepsilon(1+\log d)\rceil\). The algorithm now iterates over all \(C\subseteq V(G)\) of size \(c\) and searches the left-neighbourhood \(L\coloneqq N^{-}[C]\) for a shattered set by first computing a subset dictionary \(Q_{L}\) in time \(O(d2^{d})\) and then finding the largest shattered subset \(S\subseteq L\) by brute-force in time \(O(|L|2^{|L|})=O(cd2^{cd})\). We claim that this simple algorithm computes the claimed approximation of the VC-dimension. By Lemma 15 we either have that \(t\leqslant\log d+1\) or that \(\mathbb{U}_{t}\) can be left-covered by \(\log d+1\) witness vertices. In the first case, our algorithm will trivally locate an \(\varepsilon\)-fraction of a maximal solution since it tests every set of size \(c\). In the second case, the shattered set \(S\) of \(\mathbb{U}_{t}\) is covered by the left-neighbourhood of witness vertices \(w_{1},\dots,w_{p}\in W\) for \(p\coloneqq\log d+1\). Then by simple averaging, there exist \(c\) witnesses \(W^{\prime}\) such that \(|N^{-}[W^{\prime}]\cap S|\geqslant c|S|/p=ct/(\log d+1)\). Since the above algorithm will find the shattered set \(N^{-}[W]\cap S\) when inspecting the left-neighbourhood of \(W\), we conclude that it will output at least a value of \(ct/(\log d+1)\). In either case the approximation factor is \(\frac{c}{1+\log d}\geqslant\varepsilon\), as claimed. We would like to highlight the special case of \(c=1\) of the above theorem as it provides us with a linear-time approximation of the VC-dimension, which is probably a good starting point for practical applications: **Corollary 20**.: _Let \(G\) be a \(d\)-degenerate graph on \(n\) vertices. 
Then we can approximate the VC-dimension of \(G\) in time \(O(d2^{d}n)\) within a factor of \(\frac{1}{1+\log d}\)._ ### Approximating the ladder and semi-ladder index Before we proceed, we note that degenerate graphs cannot contain arbitrarily long ladders: **Observation 21**.: _If \(G\) is \(d\)-degenerate then \(G\) cannot contain a ladder of length \(2d+2\)._ Proof.: Note that a ladder of length \(t\) contains a complete bipartite graph \(K_{\lfloor t/2\rfloor,\lfloor t/2\rfloor}\), i.e. a subgraph of minimum degree \(\lfloor t/2\rfloor\). Therefore \(t<2d+2\). Again we find that a direct application of Theorem 3.2 to ladder patterns does not yield a satisfying running time since \(\operatorname{lc}(L_{t})\approx t/2\). However, we can always left-cover a large portion of a ladder with only one vertex: **Observation 22**.: _Let \((A,B)\) induce a ladder of length \(t\) in \(G\). For every ordering \(\mathbb{G}\) of \(G\) there exists a vertex \(u\in A\cup B\) such that \(|N^{-}(u)\cap(A\cup B)|\geqslant\lfloor t/2\rfloor\)._ Proof.: Let \(A^{\prime}\coloneqq(a_{i})_{i\geqslant t/2}\) and \(B^{\prime}\coloneqq(b_{i})_{i\leqslant t/2}\), then \(G[A^{\prime}\cup B^{\prime}]\) contains a biclique with partite sets \(A^{\prime},B^{\prime}\). Let \(u\in A^{\prime}\cup B^{\prime}\) be the largest vertex according to \(<_{\mathbb{G}}\), then \(N^{-}(u)\cap(A^{\prime},B^{\prime})\) is either all of \(A^{\prime}\) or all of \(B^{\prime}\). In either case the claim holds. **Theorem 23**.: _Let \(G\) be a \(d\)-degenerate graph on \(n\) vertices and let \(t\) be its ladder-index. Then we can in time \(O(d^{2}8^{d}\cdot n)\) decide whether \(G\) contains a ladder of size at least \(\lfloor t/2\rfloor\)._ Proof.: We compute a degeneracy ordering \(\mathbb{G}\) of \(G\) and initialize the data structure \(R\) as per Lemma 8 in time \(O(2^{d}n)\). Let \((A,B)\) induce a ladder of maximum size \(t\) in \(G\), by Observation 21 we have that \(t\leqslant 2d+1\). By Observation 22, there exists a vertex \(u\in A\cup B\) such that \(N^{-}(u)\) contains either \(A^{\prime}\coloneqq(a_{i})_{i\geqslant t/2}\) or \(B^{\prime}\coloneqq(b_{i})_{i\leqslant t/2}\). Wlog assume \(A^{\prime}\subseteq N^{-}(u)\) and let \(k\coloneqq|A^{\prime}|\). We guess \(u\) in \(O(n)\) time and \(A^{\prime}\subseteq N^{-}(u)\) in time \(O(2^{d})\). To verify that \(A^{\prime}\) can be completed into a ladder, we compute the data structure \(Q_{A^{\prime}}\) in time \(O(k2^{k}+dk)\) using Lemma 9. Finally, we verify that there exists a sequence of subsets \(A^{\prime}_{1}\subset A^{\prime}_{2}\subset\ldots\subset A^{\prime}_{k}=A^{\prime}\) where \(Q_{A^{\prime}}[A^{\prime}_{k}]>0\) for all \(i\in[k]\); as each lookup in \(Q_{A^{\prime}}\) has cost equal to the size of the query set this will take time proportional to \(\sum_{i=0}^{k}i\binom{k}{i}=k2^{k-1}\) in the worst case (where we have to query all subsets of \(A^{\prime}\) before finding the sequence). Since \(k\coloneqq\lfloor t/2\rfloor\), the total running time of this algorithm is \[O(2^{d}n)+O\Big{(}2^{d}n\cdot(k2^{k}+dk)\cdot k2^{k-1}\Big{)}=O\Big{(}2^{d}k2 ^{k}(k2^{k}+dk)\cdot n\Big{)}.\] We can simplify this expression further by using that \(k\leqslant d\) which leads us to the claimed running time of \(O(d^{2}8^{d}\cdot n)\) ## 5 Implementation and experiments Based on the above theoretical ideas, we implemented algorithms2 to compute the VC-dimension, find the largest biclique, co-matchings (within an additive error of \(1\)) and ladder (within a factor \(2\)). 
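Each of these routines starts from a degeneracy ordering of the input graph, as in the proofs above. The following Python sketch shows the standard greedy way to obtain such an ordering; it is our illustration for exposition only and is not part of the authors' Rust code base.

```python
import heapq

def degeneracy_ordering(adj):
    """Greedy minimum-degree removal. Returns (order, degeneracy), where the order
    is such that every vertex has at most `degeneracy` neighbours to its LEFT
    (the convention used for N^-(v) in the text).
    adj: dict mapping each vertex to a set of neighbours."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, removal_order, degeneracy = set(), [], 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue  # stale heap entry
        removed.add(v)
        removal_order.append(v)
        degeneracy = max(degeneracy, d)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
                heapq.heappush(heap, (deg[w], w))
    # Reversing the removal order makes the left-neighbourhoods small.
    return removal_order[::-1], degeneracy

# Example: a triangle {a, b, c} with a pendant vertex d has degeneracy 2.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(degeneracy_ordering(adj))  # e.g. (['c', 'b', 'a', 'd'], 2)
```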
The last three algorithms all simply check the left-neighbourhood for the respective structure. Aside from optimisations of the involved data structures we will not describe these algorithms in further detail.

Footnote 2: Source code available under [https://github.com/microgravitas/mantis-shrimp/](https://github.com/microgravitas/mantis-shrimp/)

We observe that for practical purposes the data structure \(R\) can be computed progressively: if we know that our algorithm currently only needs to compute \(Q_{S}\) from \(R\) (as per Lemma 9) with \(|S|=k\) (\(k\leqslant d\)), then it is enough to only count sets of size \(\leqslant k\) in \(R\). We can achieve this in time \(O(\binom{d}{k}n)\), which is far preferable to using \(O(2^{d}n)\) time to insert all left-neighbourhood subsets into \(R\). If \(k\) remains much smaller than \(d\), this improves our running time and space consumption substantially.

The second important optimisation regards subset dictionaries. While tries are useful in our theoretical analysis, in practice we opted to use bitsets for the data structures \(Q_{S}\), as their universe \(S\) can be assumed to be small. Bitsets also allow for a very concise and fast implementation of the fast Möbius inversion, which needs to happen very frequently inside the hot loop of the search algorithms.

The algorithm to compute the VC-dimension includes a few simple optimisations that vastly improved its performance. Note that if we are currently searching for a shattered set of size \(k\), then a candidate vertex for a shattered set of size \(k\) must have at least \(\binom{k-1}{i-1}\) neighbours of degree at least \(i\), for \(1\leqslant i\leqslant k-1\). Our algorithm recomputes the set of remaining candidates each time it finds a larger shattered set. The (progressive) computation of the data structure \(R\) can then also be restricted to only those left-neighbourhood subsets which contain only candidate vertices. Accordingly, the algorithm performs well if it finds large shattered sets fast. To that end, it first only looks at \(k\)-subsets of left-neighbourhoods of single vertices. Once that search is exhausted, it considers left-neighbourhoods of pairs, then triplets, _etc._, up to sets of \(\lceil\log d+1\rceil\) vertices (as per Lemma 15). As this search is very expensive once we need to consider the joint left-neighbourhood of several vertices, the algorithm estimates the work needed and compares it against simply brute-forcing all \(k\)-subsets of the remaining candidates. Since the number of candidates shrinks quite quickly in practice, the algorithm usually concludes with such a final exhaustive search.

### Results

We implemented all four algorithms in Rust and tested them on a diverse collection of 206 networks3, using a PC with an AMD Ryzen 3 2200G CPU and 24 GB RAM. The primary goal of our experiments was to verify that the data structures and algorithms in this paper could be of practical use; therefore we ran each algorithm only once per network4 and timed out after 10 minutes.

Footnote 3: [https://github.com/microgravitas/network-corpus](https://github.com/microgravitas/network-corpus)

Footnote 4: The variance in running times was on the order of seconds.

Of all four measures, computing the VC-dimension is, unsurprisingly, the most computationally challenging, and the program timed out or ran out of memory for networks larger than a few ten-thousand nodes or of degeneracy higher than 24.
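As an aside, the degree-based candidate filter for the VC-dimension search described in the previous paragraphs is easy to state in code. The following short Python sketch is our own illustration of that necessary condition only; the function name is ours and it is not taken from the released implementation.

```python
from math import comb

def can_be_in_shattered_set(v, adj, k):
    """Necessary (not sufficient) condition: a vertex of a shattered set of size k
    needs at least C(k-1, i-1) neighbours of degree >= i, for every 1 <= i <= k-1."""
    nbr_degrees = [len(adj[w]) for w in adj[v]]
    for i in range(1, k):
        need = comb(k - 1, i - 1)
        have = sum(1 for d in nbr_degrees if d >= i)
        if have < need:
            return False
    return True
```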
The broad summary of the results looks as follows:

\begin{tabular}{l l l l}
Measure & Networks completed & Max size (\(n\)) & Max degeneracy \\
\hline
VC-dimension & 126 & 33266 (BioGrid-Chemicals) & 24 (wafa-eies) \\
Biclique & 176 & 935591 (teams) & 191 (BioGrid-All) \\
Co-matching & 179 & 935591 (teams) & 255 (dogster_friendships) \\
Ladder index & 187 & 935591 (teams) & 191 (BioGrid-All) \\
\end{tabular}

Figure 1 visualizes these results in more detail.

Figure 1: Running times of all four algorithms on a collection of 206 networks. The size of the circles indicates the degeneracy of the networks; triangles indicate that the program timed out on the network after 10 minutes.

We are also interested in typical values of the VC-dimension of networks and how it compares to the degeneracy. This topic deserves a deeper investigation, but we can report some preliminary results here for those networks where our program terminated before the timeout. In Figure 2 we normalised the VC-dimension by the degeneracy-plus-one, so values close to one indicate that the VC-dimension is on the order of the degeneracy, while values close to zero indicate that it is much smaller than the degeneracy. We see a clear tendency that networks with larger degeneracy tend towards zero, which we interpret as the VC-dimension "growing slower" than the degeneracy in typical networks.

Figure 2: VC-dimension of networks normalized by their degeneracy \(+1\). Networks with large degeneracy tend towards the left, meaning that the VC-dimension does not increase proportionally to the degeneracy.

## 6 Conclusion

On the theoretical side, we outlined a general bipartite pattern-finding and -counting algorithm in degenerate graphs. Its running time crucially depends on two complexity measures of patterns, namely the left-covering number and the half-ordering asymmetry. These general algorithms can be further improved for specific patterns, which we exemplify for shattered set, ladder, co-matching and biclique patterns. Our results also include improved running times when the input graphs are of bounded degeneracy. On the experimental side, we demonstrate that this style of algorithm is feasible and practical for computation on real-world networks, which often exhibit low degeneracy. The experiments also suggest that the VC-dimension of networks tends to be a very small parameter, which makes it an interesting target for the development of fast algorithms that exploit low VC-dimension.
2306.06331
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
Xuan-Quy Dao, Ngoc-Bich Le
2023-06-10T02:01:02Z
http://arxiv.org/abs/2306.06331v3
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination ###### Abstract This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of \(83\%\); but, as the difficulty level rose, it scored poorly, with an accuracy rate of \(10\%\). The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of \(70\%\), followed by VNHSGE mathematics (\(58.8\%\)). However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging. ChatGPT large language model natural language processing Vietnamese high school graduation examination ## 1 Introduction In recent years, artificial intelligence (AI) has drawn a lot of interest and been extensively discussed. AI represents a creative and imaginative advancement in many fields, including mathematics instruction. The current work analyzes a number of studies that looked into the application of AI in a number of contexts, including medical [1], education [2], [3], [4], [5] and pandemics [6]. The role of educators should not be replaced by AI in the educational process; rather, AI should be used to enhance it [8]. The implementation of AI in education faces a variety of challenges despite the potential benefits. In order to improve student learning outcomes and get around obstacles like a shortage of qualified teachers and resources [9], [10], using AI in education is becoming more popular [11], [12],[13], [14], [15]. According to research, AI is crucial for guaranteeing sustainable societal growth and can boost student accomplishment. Despite the fact that literature evaluations have been undertaken on the use of AI in education across a variety of subjects, little is known about how AI especially affects mathematics education, including its nature, target grade levels, and study methodologies. Achievement in mathematics is important for kids' academic progress, future employment prospects, and social growth, and it is connected to civil rights issues [16], [17]. 
Therefore, preparing students with math skills and knowledge is crucial for adapting to a society that is changing quickly and ensuring sustainable development. A comprehensive literature review was undertaken by bin Mohamed et al. [18] to provide an overview of AI in mathematics education for students at all levels of education, one of the few studies on the effects of AI on mathematics education. This review contributes to the discussion about enhancing teaching and learning in mathematics education through the use of AI. In a different study, Hwang [19] used 21 empirical studies with 30 independent samples to conduct a meta-analysis to assess the overall impact of AI on elementary children' mathematical achievement. The results of the study revealed that AI had a negligible impact on primary kids' mathematical proficiency. The results showed that grade level and topic of mathematics learning variables considerably reduced the impact of AI on mathematical achievement. Other moderator variables' effects, however, were found to be insignificant. Based on the findings, this study offers both practical and theoretical insights that can help guide the appropriate application of AI in the teaching of mathematics to elementary school children. It is evident that additional meta-analysis is required to determine whether AI offers novel opportunities for mathematics learning [13], [15]. Studies examining how moderating variables affect the connection between them are also necessary. The area of education could undergo a revolution owing to recent advancements in natural language processing (NLP), which have led to the development of increasingly complex language models like GPT-3. Due to its capacity to produce natural language answers to a variety of questions, ChatGPT, a large language model based on the GPT architecture, has attracted a great deal of interest in the educational community. In recent years, there has been an increase in interest in using chatbots, particularly ChatGPT, in education. Several research have investigated the possible advantages, issues, and difficulties of this practice. Halaweh [20] addressed educators' worries about the adoption of ChatGPT into educational contexts, arguing for its inclusion and offering guidelines for safe implementation. In a research on the potential effects of ChatGPT on education, Zhai [21] recommended changing instructional objectives to emphasize students' creativity and critical thinking. In their discussion of the possible advantages and difficulties of employing large language models in educational contexts, Kasneci et al. [22] placed emphasis on the requirement for competences and literacies to comprehend the technology and its constraints. The effectiveness of ChatGPT in assessments has also been examined in studies. (Kortemeyer, 2023) discovered that ChatGPT displayed several misconceptions and mistakes typical of a beginner learner yet would only about pass a calculus-based physics course. Katz et al. [23] conducted an experimental evaluation of GPT-4's zero-shot performance on the complete Uniform Bar Examination (UBE), demonstrating that it performed better than human test-takers and previous models on the Multistate Bar Examination (MBE), which is a multiple-choice test. Gilson et al. [24] assessed ChatGPT's performance on multiple-choice questions related to the USMLE Step 1 and Step 2 tests and discovered that its performance is comparable to a third-year medical student. 
These studies show the potential of chatbots to enhance education and legal services, but they also raise questions about their accuracy and dependability in assessments. Through the simulation of various use cases, Frieder et al. [26] conducted a study to evaluate the mathematical proficiency of ChatGPT and determine its potential as a helpful assistant to professional mathematicians. The outcomes revealed that ChatGPT's mathematical skills were significantly inferior to those of the typical mathematics graduate student. However, it is critical to also assess ChatGPT's mathematical prowess at lower levels, such as high school. This evaluation would shed light on ChatGPT's capacity to support teachers and students at this level of mathematics learning.

NLP has received a lot of attention recently as a vital study area. Chatbots, one of its applications, have drawn attention for their capacity to mimic human interactions. While current research highlights the potential of chatbots to support students' learning in a variety of educational settings, their effectiveness in handling particular subjects, such as mathematics, in high-stakes exams has received little attention. By evaluating ChatGPT's ability to complete mathematical challenges and pass the VNHSGE exam, this study aims to fill this knowledge gap in the literature. This will be achieved by contrasting ChatGPT's performance in our test with that of earlier assessments made by the OpenAI team [27]. This study intends to advance knowledge of the benefits of utilizing cutting-edge technology in education to enhance student results by studying the efficiency of AI-powered chatbots in assisting students in high-stakes tests. The results of this study may be especially helpful to educators and policymakers who want to use AI to enhance learning outcomes.

In this article, we concentrate on examining ChatGPT's capability to solve mathematical problems within the framework of the VNHSGE exam. The Vietnamese educational system places a high value on mathematics, which is frequently seen as a key predictor of student achievement. The promise of AI-powered tools for enhancing mathematics education can therefore be shown by analyzing ChatGPT's mathematical capabilities in the context of the VNHSGE mathematics dataset [28]. Our work seeks to critically evaluate ChatGPT's performance on the mathematics questions of the VNHSGE exam and to explore the prospects of deploying AI-powered tools to help enhance mathematics teaching.

## 2 Objectives and Methodology

### Objectives

This study aims to offer a thorough analysis of ChatGPT's mathematical skills in relation to the mathematics evaluation for the VNHSGE exam. We seek to shed light on the possibilities of AI tools for educational support and investigate their role in changing the educational landscape by evaluating ChatGPT's performance in these areas. This study also attempts to illustrate ChatGPT's shortcomings when dealing with questions that differ from those present in the VNHSGE exam in terms of both structure and level of difficulty.

### Scope and Limitation

By analyzing ChatGPT's responses to the mathematics questions of the VNHSGE exam, this study seeks to assess ChatGPT's mathematical capabilities. Our objective is to assess how well ChatGPT responds to these questions and to provide details on ChatGPT's potential in the context of Vietnamese education. It is important to remember that our evaluations are restricted to the specific structure of the VNHSGE exam.
The results of ChatGPT are incapable of being extrapolated to tests with other numbers or difficulty levels. This restriction highlights the need for caution when extrapolating from our results and making generalizations regarding ChatGPT's potential uses in educational contexts outside the scope of this study. ### Methods In this study, we evaluated the capability of the ChatGPT model to answer mathematical problems in the VNHSGE mathematics dataset [28]. Using a sequence-to-sequence methodology, the model was developed using a dataset of math problems after being trained on a sizable corpus of text. The mathematical problem was the model's input, and the solution was its output. We compared the produced answers from ChatGPT with the accurate responses given in the exam papers in order to evaluate its performance. We created a detailed process with many phases to carry out this examination. In the beginning, we gathered information from official test papers made available by the Vietnamese Ministry of Education and Training. We chose these questions as an accurate representation of the actual exam because they were all taken from high school mathematics exams. The data needs to be formatted in a way that ChatGPT could interpret afterward. The exam questions contained mathematical equations and symbols, which we transformed into LaTeX format to display in a uniform manner. The exam questions were then transformed from their LaTeX format into JSON (JavaScript Object Notation), a lightweight data transfer standard that is frequently used in web applications. We were able to give the questions to the pre-trained ChatGPT model and get its generated answers after formatting the data in a way that ChatGPT could understand. Finally, we determined ChatGPT's performance score by comparing the generated answers to the accurate responses provided by the exam papers. Overall, this methodology allowed us to thoroughly evaluate ChatGPT's capacity to answer mathematical problems in the VNHSGE exam. By outlining the specific procedures, we took, we intend to offer a framework for future research examining the efficiency of chatbots powered by AI in assisting students in demanding exams. ## 3 Dataset The VNHSGE mathematics test dataset for the academic years 2019-2023 was used in this investigation. 250 multiple-choice math questions covering a range of subjects, such as algebra, geometry, and calculus, make up the dataset. Based on Bloom's Taxonomy, these questions were divided into four difficulty levels: K (knowledge), C (comprehension), A (application), and H (high application). The Vietnamese Ministry of Education and Training publicly released the dataset, which is frequently used to evaluate students' mathematical aptitude. ### Question Levels Different levels of competence in comprehending and using mathematical concepts are necessary for solving mathematical problems. The dataset includes a range of levels of difficulty, from K-based questions that evaluate fundamental understanding to high-application questions that assess the capacity to analyze and synthesize information in order to solve complex problems. This allows for a thorough evaluation of ChatGPT's mathematical problem-solving abilities. Based on the sort of cognitive activity and verbs used in responding to the questions, the four levels of complexity--K, C, A and H--were established. 
We can learn more about ChatGPT's strengths and drawbacks when we evaluate its performance on a range of mathematical problems of varying degrees of difficulty.

### Question Topics

The dataset provides a thorough assessment of ChatGPT's mathematical knowledge and abilities by encompassing a wide range of mathematical topics: M11A: Combinations and Probability; M11B: Number Series (Arithmetic Progression, Geometric Progression); M11C: Spatial Geometry; M12A: Derivatives and Applications; M12B: Exponential and Logarithmic Functions; M12C: Primitives and Integrals; M12D: Complex Numbers; M12E: Polyhedrons; M12F: Rotating Circle Block; and M12G: Oxyz Spatial Calculus. These topics were included to ensure a thorough evaluation of ChatGPT's mathematical abilities by testing its understanding, application, analysis, and evaluation of mathematical concepts and principles. Researchers can learn about ChatGPT's strengths and limitations and identify opportunities for development by analyzing how well it performs across all of these topics.

### Knowledge matrix

A key element of assessment systems that gives a thorough breakdown of the criteria and content to be evaluated is the question matrix. This technical design was deployed to create and compile questions for various tests and examinations. It acts as a reference for test designers in choosing appropriate questions that appropriately reflect the educational and learning objectives of the assessment system. By ensuring that the test questions assess the desired knowledge, skills, and abilities of the examinees and that they are aligned with the learning outcomes, the question matrix aids in assuring the validity, reliability, and fairness of the assessment. As a result, the question matrix is an essential tool for creating high-quality tests that accurately assess student achievement and guide educational decisions.

A knowledge matrix, which classifies each question according to its specific level and topic, can effectively depict the structure and substance of an exam. Administrators of exams and educators can gain a lot from employing a knowledge matrix, since it can be used to determine where students' knowledge is strong and weak and to build focused interventions to boost performance. Additionally, the knowledge matrix makes sure that the exam covers a wide range of subjects and levels of difficulty, providing a thorough evaluation of students' knowledge and abilities. The use of a knowledge matrix ensures that exam results accurately reflect students' abilities and accomplishments by increasing the validity and reliability of exam scores.

The knowledge matrix for the VNHSGE exam in Mathematics for the years 2019-2023 is displayed in Table 1. It gives the distribution of questions by topic and degree of difficulty, so for each topic we can read off how many questions of each level it contains. The distribution of questions by level, shown in Figure 1, is as follows: knowledge 103 (41%), comprehension 77 (31%), application 41 (16%), and high application 29 (12%). The breakdown of questions by topic is: M11A - 10 (4%), M11B - 5 (2%), M11C - 8 (3%), M12A - 57 (23%), M12B - 39 (16%), M12C - 33 (13%), M12D - 26 (10%), M12E - 17 (7%), M12F - 14 (6%), and M12G - 41 (16%). Generally, the knowledge matrix offers a thorough overview of the exam's structure and content, making it possible to assess and enhance students' mathematical understanding and problem-solving skills.
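For reference, these level percentages follow directly from the raw counts over the 250 questions:
\[\frac{103}{250}=41.2\%\approx 41\%,\qquad \frac{77}{250}=30.8\%\approx 31\%,\qquad \frac{41}{250}=16.4\%\approx 16\%,\qquad \frac{29}{250}=11.6\%\approx 12\%.\]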
The exam framework does not have a uniform allocation of questions. Some topics and problems call only for knowledge and comprehension, not high-level application. A majority of the questions (roughly \(70\%\)) are focused on knowledge and comprehension. In addition, only \(10\%\) of the questions concentrate on material from the 11th grade, while \(90\%\) are at the 12th-grade level. Questions on topics such as M12A, M12B, M12G, and M12C are plentiful. It should be emphasized, nonetheless, that the questions in topic M11B call for only a limited level of expertise.

The distribution of question levels and topics as a percentage is shown in Figure 1. The topic M12A, which comprises \(23\%\) of the total questions, is distributed as follows: \(9.60\%\) at the K level, \(6.00\%\) at the C level, \(2.40\%\) at the A level, and \(4.80\%\) at the H level. Based on this detailed distribution by level and topic, we can analyze the performance of a student or of ChatGPT specifically by level and by topic. This graphic portrayal makes possible a comprehensive grasp of the distribution of questions across the various levels and topics. Insights into the areas where test takers are anticipated to perform well and those that could need more improvement can be obtained by examining Figure 1. It offers useful data that teachers and curriculum designers may use to better understand the strengths and weaknesses of their students and the efficiency of their instructional strategies. Overall, Table 1 and Figure 1 together give a thorough breakdown of the distribution of the questions and are an effective tool for educational research and practice.

### Prompt and Answer

When asking questions to ChatGPT, we can receive answers in different formats. However, to make the processing of results easier and to ensure consistency, we ask ChatGPT to provide replies in a specific structure. Figure 2 and Table 2 demonstrate an example of the required structure for ChatGPT responses. This example demonstrates the adaptability and versatility of the model by giving instances of how ChatGPT can respond to different prompts in various formats. When we receive automatic responses, we utilize Word format on [https://chat.openai.com/](https://chat.openai.com/), while the "OpenAI API" uses JSON format. The table is divided into three columns: the first column reveals the prompt's format; the second column displays the prompt itself; and the third column provides the response that ChatGPT created. The ability to provide responses to prompts in many formats is a useful feature for many applications.

\begin{table}
\begin{tabular}{|c|c|l|c|c|l|}
\hline
ID & IQ & Q & C & IA & E \\
\hline
1 & & 1) The volume of a cube with edge 2a is: A. \(8a^{3}\) B. \(2a^{3}\) C. \(a^{3}\) D. \(6a^{3}\) & A & & The volume of a cube with edge 2a is: \(V=(2a)^{3}=8a^{3}\). \\
\hline
\end{tabular}
\end{table}
Table 2: An example of prompt and response.
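To make the JSON-based workflow concrete, the sketch below shows one way to send a formatted question to the model through the OpenAI Python client and to grade the structured reply against the answer key. It is an illustrative reconstruction rather than the authors' actual script; the model name, the JSON field names ("Choice", "Explanation"), and the helper functions are our assumptions.

```python
import json
from openai import OpenAI  # assumes the `openai` Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Answer the following multiple-choice question. "
    'Reply in JSON with the fields "Choice" (A/B/C/D) and "Explanation".\n\n{question}'
)

def ask(question: str, model: str = "gpt-3.5-turbo") -> dict:
    """Send one formatted question and parse the structured reply (assumed schema)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(question=question)}],
    )
    return json.loads(resp.choices[0].message.content)

def score(questions: list[dict], model: str = "gpt-3.5-turbo") -> float:
    """Grade against the answer key and convert to the 0-10 VNHSGE scale."""
    correct = 0
    for q in questions:                      # each q: {"Q": question text, "C": "A"}
        try:
            reply = ask(q["Q"], model)
            correct += reply.get("Choice") == q["C"]
        except (json.JSONDecodeError, KeyError):
            pass                             # a malformed reply counts as wrong
    return 10 * correct / len(questions)     # e.g. 27 correct out of 50 gives 5.4
```

The score column of Table 3 is exactly this rescaling of the raw number of correct answers to the 0-10 scale.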
Figure 2: Formatted question and ChatGPT response.

The exam covers ten topics (M11A, M11B, M11C, M12A-M12G) and four degrees of complexity (knowledge, comprehension, application, and high application). The exam aims to provide a thorough assessment of ChatGPT's mathematical knowledge and abilities by evaluating a wide range of topics. The questions are made to test ChatGPT's understanding, application, evaluation, and analysis of mathematical concepts and principles, ensuring a thorough evaluation of its mathematical skills. This rigorous assessment makes sure that ChatGPT's math-solving abilities are accurately measured and can be used to guide future NLP advances.

### ChatGPT score

The results of the mathematics test taken by ChatGPT from 2019 to 2023 are shown in Table 3 [28], together with the number of right answers and the corresponding score for each year. A score of 5 represents an average performance on a scale from 0 to 10. These outcomes show that ChatGPT performed better than average on the math test. ChatGPT's scores range from 5.2 to 6.6 points. This outcome can be attributed to ChatGPT's propensity to accurately respond to a significant portion of questions at the knowledge and comprehension levels, which make up \(70\%\) of the total questions. The middle-range ChatGPT score is explained by the fact that only a small number of questions at the application and high-application levels were correctly answered. Further clarification on this point will be provided in the upcoming sections.

### ChatGPT's performance in question order

Figure 3 illustrates the average number of right responses given by ChatGPT for each question across all years. The data show that the likelihood of ChatGPT providing an accurate response decreases as the question's level of complexity rises. ChatGPT's correct-answer rate is greater than 50% for questions 1 through 35, which are K- and C-level questions. The accuracy of ChatGPT, however, drops below 50% for questions 35 to 50, a decline that follows the pattern of the questions. The graph demonstrates that as question difficulty grows, ChatGPT's accuracy declines. Given that questions at higher knowledge levels tend to be more complicated and need in-depth comprehension and problem-solving abilities, this pattern is to be expected. The findings imply that the difficulty and complexity of the questions have a significant impact on ChatGPT's capacity to provide accurate answers. This finding has significant implications for the design of AI systems for educational applications, since it emphasizes the need for more sophisticated and advanced models that are capable of handling difficult and challenging tasks. Additionally, it suggests that more investigation is required to identify the specific factors that influence ChatGPT's performance on various question types. This understanding can guide the creation of more efficient AI-based educational tools and interventions.

Figure 3: ChatGPT's performance in question order.

The analysis of the model's performance in relation to the order of the questions can be beneficial in a number of ways, in addition to determining ChatGPT's accuracy in responding to the questions. In the first place, it can assist teachers in comprehending how the order of questions impacts ChatGPT's capacity to solve them and in optimizing
### ChatGPT's performance in levels and topics

According to the degree of difficulty, Table 4 shows the percentage of accurate responses given by ChatGPT for each year. The percentage of right answers for K-level questions ranged from 90\(\%\) in 2022 down to 75\(\%\) in 2023. The highest percentage of accurate answers for C-level questions was 72.22\(\%\) in 2022, and the lowest was 40\(\%\) in 2023. The highest and lowest percentages of right responses for questions at the A-level were 55.56\(\%\) and 0\(\%\), respectively. For the years 2021, 2022, and 2023, ChatGPT did not offer any accurate responses to H-type questions; for the remaining years, the percentages were 16.67\(\%\) and 22.22\(\%\). These results show how ChatGPT has performed over time at various levels of difficulty.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & **K** & **C** & **A** & **H** \\ \hline **2023** & 75.00 & 40.00 & 25.00 & 0.00 \\ \hline **2022** & 90.00 & 72.22 & 0.00 & 0.00 \\ \hline **2021** & 81.82 & 62.50 & 28.57 & 0.00 \\ \hline **2020** & 89.47 & 62.50 & 55.56 & 16.67 \\ \hline **2019** & 85.71 & 58.82 & 20.00 & 22.22 \\ \hline \end{tabular} \end{table} Table 4: ChatGPT’s performance in question levels

Figure 4: ChatGPT’s performance in question levels for 2019-2023.

In accordance with the questions' degree of complexity, Figure 4 depicts ChatGPT's accuracy from 2019 to 2023. For questions classified as type K, it indicates that ChatGPT attained an accuracy rate ranging from 75\(\%\) to 90\(\%\), with a small standard deviation indicating a high rate of consistency. This demonstrates ChatGPT's exceptional skill in answering questions that are not too challenging. For questions of type C, the accuracy rate falls to 40-72\(\%\), demonstrating that ChatGPT performs less effectively when answering questions of intermediate difficulty. Type A questions show the greatest diversity in ChatGPT's accuracy rate, with correct answers ranging from 0\(\%\) to 57\(\%\) and the highest standard deviation. This shows that ChatGPT performs the least consistently when attempting to answer challenging type-A questions. The accuracy of ChatGPT's answers to the most difficult type H questions ranges from 0\(\%\) to 22\(\%\), which is quite low. Based on these findings, it appears that ChatGPT performs better when answering questions that are easier to answer than those that are more complex.

The percentage of correct responses offered by ChatGPT for different topics from 2019 to 2023 is depicted in Table 5. ChatGPT provided 100\(\%\) accurate responses for all years for the topic M11B. Additionally, ChatGPT provided 100\(\%\) accurate responses for topics M11A, M12D, M12F, and M11C for a number of years. In 2022, however, ChatGPT's accuracy rate for the M11C topic was 0\(\%\). With the exception of the M12A topic on graphs and diagrams, ChatGPT's accuracy rate for the other topics was rather high.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-11} \multicolumn{1}{c|}{} & **M11C** & **M11B** & **M11A** & **M12A** & **M12B** & **M12C** & **M12D** & **M12E** & **M12F** & **M12G** \\ \hline **2023** & 50 & 100.00 & 50.00 & 30.00 & 75.00 & 57.14 & 83.33 & 33.33 & 50.00 & 44.44 \\ \hline **2022** & 0 & 100.00 & 50.00 & 50.00 & 75.00 & 71.43 & 66.67 & 66.67 & 66.67 & 62.50 \\ \hline **2021** & 50 & 100.00 & 100.00 & 20.00 & 75.00 & 71.43 & 66.67 & 66.67 & 66.67 & 62.50 \\ \hline **2020** & 100 & 100.00 & 100.00 & 46.15 & 62.50 & 42.86 & 100.00 & 66.67 & 100.00 & 75.00 \\ \hline **2019** & & 100.00 & 50.00 & 28.57 & 71.43 & 80.00 & 40.00 & 80.00 & 33.33 & 50.00 \\ \hline \end{tabular} \end{table} Table 5: ChatGPT’s performance in question topics
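The percentages in Tables 4 and 5 above (and the level-by-topic counts reported later in Table 6) are plain accuracy ratios over the graded answers, grouped by year, level, and topic. The sketch below shows that aggregation with pandas; the few data rows are hypothetical placeholders, not the study's actual grading log.

```python
import pandas as pd

# Hypothetical grading log: one row per question (placeholder values only).
df = pd.DataFrame(
    [(2022, "M12A", "K", 1), (2022, "M12A", "H", 0), (2022, "M11B", "K", 1),
     (2023, "M12G", "C", 1), (2023, "M12G", "A", 0)],
    columns=["year", "topic", "level", "correct"],
)

# Accuracy by year and level (cf. Table 4).
by_year_level = df.groupby(["year", "level"])["correct"].mean().mul(100).round(2)

# Accuracy by level and topic (cf. Table 6).
by_level_topic = df.pivot_table(index="level", columns="topic",
                                values="correct", aggfunc="mean").mul(100).round(2)

print(by_year_level)
print(by_level_topic)
```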
Recently, a lot of attention has been paid to how well AI models perform, particularly when answering questions. Figure 5 provides an informative examination of ChatGPT's accuracy in responding to various question types over the period 2019-2023. The findings show that ChatGPT's accuracy varies depending on the type of question being answered. In particular, ChatGPT answered M11C questions with an accuracy rate of 0-100\(\%\), M11B questions with 100\(\%\), M11A questions with 50-100\(\%\), M12A questions with 20-50\(\%\), M12B questions with 62-75\(\%\), M12C questions with 42-80\(\%\), M12D questions with 40-100\(\%\), M12E questions with 33-80\(\%\), M12F questions with 33-100\(\%\), and M12G questions with 44-75\(\%\). The level of difficulty of the questions, the number and quality of training data, and the model's internal architecture are just a few of the variables that can affect how well ChatGPT performs while answering these questions. Therefore, comprehending the variations in performance across various question types can offer insights into the model's advantages and disadvantages as well as guide future developments to enhance its performance.

Figure 5: ChatGPT’s performance in question topics for 2019-2023.

A thorough analysis of ChatGPT's performance on various levels and topics is presented in Table 6. First, consider the difficulty of the questions; ChatGPT was able to accurately respond to 85 of 103 questions at level K. Out of 77 questions at level C, 48 were correctly answered by ChatGPT. Only 12 of the 49 questions in level A could be correctly answered by ChatGPT, while only 3 of the 29 questions in level H could be answered by ChatGPT. Second, ChatGPT's performance varied depending on the type of question. For M11A, M11B, M11C, and M12A, ChatGPT correctly answered 7 out of 10 questions, 5 out of 5 questions, 4 out of 8 questions, and 20 out of 57 questions, respectively. For M12B, M12C, M12D, M12E, M12F, and M12G, respectively, ChatGPT correctly answered 28 out of 39 questions, 21 out of 33 questions, 18 out of 26 questions, 11 out of 16 questions, 9 out of 15 questions, and 24 out of 41 questions. It is crucial to keep in mind that certain topics only contain questions at the knowledge and comprehension levels that are quite simple to respond to, and ChatGPT did well on these because of its aptitude for natural language creation. Therefore, ChatGPT's high scores on these topics do not necessarily reflect its understanding of mathematics or capacity for reasoning. Furthermore, it is challenging to give a precise rating solely based on topics because some topics have
a preponderance of knowledge-level questions. Additionally, due to a lack of information, ChatGPT might not be able to respond to some knowledge-level questions. As an illustration, many questions in the topic of derivatives and applications (M12A) call for the interpretation of graphs or variable tables, which ChatGPT is unable to read from photos at this time. As a result, ChatGPT might be unable to respond to some inquiries that require an understanding of this subject. These findings show that ChatGPT has diverse degrees of competence in various math specialties. In general, ChatGPT performed well for some question types but poorly for others. These results collectively imply that while ChatGPT might be a valuable tool for addressing math-related queries, its accuracy varies between topics and levels. As a result, significant advancements are required to increase ChatGPT's math question-answering ability, especially in more difficult math subfields.

Figure 6 presents a more thorough breakdown of the percentage of right responses by difficulty level and topic so that users of ChatGPT can better understand how well it performs. For instance, in the case of M12G, ChatGPT attained a high accuracy rate of 76\(\%\) for questions at the K level, followed by 67\(\%\) for questions at the C level, 25\(\%\) for questions at the A level, and 0\(\%\) for questions at the H level. Notably, ChatGPT achieved a flawless accuracy rate of 100\(\%\) when responding to questions at the K level for M11A, M11B, M11C, M12B, M12D, and M12F. Additionally, ChatGPT was able to correctly respond to H-level questions for M12A (Derivatives and Applications) and M12E (Polyhedron), demonstrating its competency in handling more difficult questions in these topics. These results indicate that the topic and difficulty level have an impact on ChatGPT's accuracy, and that ChatGPT performs differently depending on how these two factors are coupled.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-13} \multicolumn{1}{c|}{} & **M11C** & **M11B** & **M11A** & **M12A** & **M12B** & **M12C** & **M12D** & **M12E** & **M12F** & **M12G** & \multicolumn{2}{c|}{**LEVEL**} \\ \hline **K** & 1 & 5 & 5 & 12 & 15 & 12 & 8 & 7 & 7 & 13 & 85 & 83\(\%\) \\ \hline **C** & 2 & & 1 & 6 & 11 & 7 & 8 & 2 & 1 & 10 & 48 & 62\(\%\) \\ \hline **A** & 1 & & 1 & 0 & 2 & 2 & 2 & 1 & 1 & 1 & 11 & 27\(\%\) \\ \hline **H** & 0 & & & 2 & 0 & 0 & 0 & 1 & & 0 & 3 & 10\(\%\) \\ \hline **TOPIC** & 4 & 5 & 7 & 20 & 28 & 21 & 18 & 11 & 9 & 24 & **147** & \multicolumn{1}{c|}{} \\ \cline{2-13} \end{tabular} \end{table} Table 6: ChatGPT’s performance in question levels and topics

These findings suggest that these particular issues contain linguistic nuances or complexities that the model was unable to
This result highlights the need for ongoing study to enhance the model's ability to handle a variety of linguistic complexities. This shortcoming might be brought on by the lack of training data or the intrinsic intricacy of the queries at this level. By evaluating how well language models--like ChatGPT--can respond to questions of varying degrees of cognitive complexity, one can assess the performance of these models. Knowledge, understanding, application, and strong application are the four categories for the levels of cognitive difficulty in answering questions. The ability to recognize and identify concepts, content, and issues is referred to as the recognition level. Understanding fundamental ideas and being able to articulate them in one's own words are requirements for the comprehension level. The application level necessitates applying concepts in unfamiliar or comparable circumstances. The high application level requires the capacity to apply fundamental ideas to an entirely new challenge. The effectiveness of ChatGPT was assessed by counting how many questions at each level of cognitive difficulty it correctly answered. Figure 7 demonstrates that ChatGPT properly identified and recognized 83\(\%\) of the ideas in the recognition level of the questions that were asked. 62\(\%\) of the questions at the comprehension level were correctly answered by ChatGPT, demonstrating an adequate understanding of the fundamental ideas. At the application level, where it could only accurately answer 27\(\%\) of the questions, its performance deteriorated dramatically. Only 10\(\%\) of the questions were correctly answered by ChatGPT at the highest cognitive complexity level, the high application level, demonstrating a limited capacity to apply fundamental ideas to novel problems. According to this performance evaluation, ChatGPT may have some restrictions when it comes to employing newly learned concepts in novel contexts. By giving language models more sophisticated and advanced problem-solving abilities, future language model development might concentrate on enhancing the models' capacity to solve novel challenges. The performance of language models at the application and high application levels may also be enhanced by additional training data and focused training techniques, enabling them to more effectively apply acquired concepts in real-world circumstances. Figure 8 demonstrates the astounding 100\(\%\) correct answer rate for the M11B question that ChatGPT attained. It's crucial to remember that this particular topic only included K-type questions. The correct answer rates for the remaining topics ranged from 58.89\(\%\) for M12G to 71.79\(\%\) for M12B. Notably, M11C and M12A had the lowest rates of correctly answered questions. Most questions were in M12A, and the majority of them were at the K-level. The lack of information in the figure, however, prevented ChatGPT from being able to respond to all questions. Similarly, ChatGPT did not show much promise for topics like M11C on spatial geometry and M12G on spatial analysis Oxyz. However, if we ignore the questions that required information from the figure, ChatGPT demonstrated a solid capacity to respond correctly for more than 50\(\%\) of all topics. This indicates that ChatGPT shows potential in some areas of the evaluated topics, but it may need more work to succeed in other areas that require more intricate inference and data interpretation. 
### ChatGPT's performance in VNHSGE and other exams

We evaluated ChatGPT's success rate in a number of well-known math competitions, as reported by OpenAI [27] and shown in Figure 9, in order to put its performance on the VNHSGE mathematics exam into perspective. With a success percentage of 70\(\%\), ChatGPT's performance in the SAT Math competition is better than its performance in the VNHSGE mathematics exam, according to our study. With rates of 40\(\%\) for AP Statistics, 25\(\%\) for the GRE Quantitative, 10\(\%\) for AMC 10, 4\(\%\) for AMC 12, and only 1\(\%\) for AP Calculus BC, ChatGPT performed much worse in the other competitions. It is important to note that these comparisons are just meant to be used as a guide because there are variations among math examinations in terms of their formats, structures, levels, and question kinds. As a result, it is impossible to assess the complexity of the VNHSGE exam just by looking at ChatGPT's performance in other competitions. However, this comparison provides a general idea of the VNHSGE exam's level of difficulty in relation to other math competitions.

Figure 9: ChatGPT’s performance in VNHSGE mathematics and other exams.

### ChatGPT's performance and Vietnamese students

Figures 10-13 compare ChatGPT math scores for four years (2019, 2020, 2021, and 2022) with Vietnamese students' scores. Notably, the findings show that across the investigated years, ChatGPT math scores have consistently been lower than those of the majority of Vietnamese pupils. Additional analysis of the performance data can shed light on potential causes of the performance gap between ChatGPT and human students. The variance in performance may be due to elements such as different learning styles and approaches, resource accessibility, and cultural background. Additionally, with further training and model improvement, ChatGPT's performance might be enhanced. Another key drawback of this AI model is ChatGPT's inability to access, read, and comprehend graphical information in test questions. Tables, charts, and other graphical representations are frequently used in mathematics exams to visually communicate data and information. However, ChatGPT's inability to interpret graphical data limits its capacity to offer precise answers to this kind of query. This restriction is not specific to ChatGPT; many other AI models also have trouble comprehending graphical data. This is because reading text takes a distinct set of abilities from analyzing images and other visual information. NLP is exploited by text-based AI models like ChatGPT to comprehend and process text-based inputs. In contrast, computer vision techniques are utilized by image-based AI models to comprehend visual inputs. Enhancing ChatGPT's capacity to comprehend visual data is one potential means of getting around this restriction. Adding computer vision capabilities to the model or creating a hybrid model that blends NLP and computer vision methods may achieve this. A potential alternative is to change the test format to eliminate graphical data or to offer alternative text-based representations of the graphical data. Though it might not always be possible, this solution would necessitate significant modifications to the test design.

Figure 10: Mathematics score spectrum of Vietnamese students in 2019.

Figure 11: Mathematics score spectrum of Vietnamese students in 2020.
Figure 12: Mathematics score spectrum of Vietnamese students in 2021.

Figure 13: Mathematics score spectrum of Vietnamese students in 2022.

## 5 Discussion

While ChatGPT has certain limitations in the field of mathematics [26], [29], [30], it has the potential to be a beneficial resource for educators and learners [31], [32]. Nevertheless, ChatGPT must continue to prove its ability in order to earn trust. Therefore, we need in-depth and detailed studies of its capabilities in areas like mathematics. The findings of this study demonstrate that ChatGPT, a big language model trained by OpenAI, is capable of solving math problems to a certain extent but still has difficulties comprehending and interpreting graphical data in test questions. ChatGPT's total success rate in the VNHSGE exam ranged from 52\(\%\) to 66\(\%\), less than the typical success rate of Vietnamese students taking the same exam. This shows that ChatGPT's capacity to tackle mathematical problems still needs to be enhanced. Further examination of ChatGPT's performance in resolving mathematical problems revealed that its success rate varied based on the level of difficulty and the topic of the problems. The questions at the K level had the greatest ChatGPT success rate, indicating a fundamental comprehension of the topic in question. However, the ChatGPT success rate significantly decreased as the question difficulty increased. This shows that ChatGPT has trouble solving more difficult math problems, particularly those at the H level. Additionally, ChatGPT's performance varied depending on the topic. This suggests that ChatGPT's current iteration has limits in its capacity to understand mathematical ideas that call for the use of visual reasoning or the interpretation of graphical data. Future development should focus on ChatGPT's shortcomings in comprehending graphical information in test questions. This constraint could be overcome by creating algorithms and models that enable ChatGPT to read and evaluate visual data, which is crucial for resolving many mathematical problems. In summary, ChatGPT performs inconsistently across various topics and difficulty levels, although showing promising results when solving mathematical inquiries. ChatGPT's comprehension of intricate mathematical ideas, particularly those using graphical data, requires more refinement. In our study, we compared how well ChatGPT performed in a number of well-known math competitions, including SAT Math, VNHSGE mathematics, AP Statistics, GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. The degree of difficulty, the format, and the nature of the questions employed in these contests all differ. With a 70\(\%\) success rate, ChatGPT had the highest success rate in the SAT Math competition, which is not surprising considering that the SAT Math test primarily evaluates high school math proficiency. The ChatGPT success rate for the VNHSGE mathematics exam, on the other hand, was 58.8\(\%\); it is a more thorough test that covers a wider range of math topics and difficulty levels. It is important to note that, as was mentioned in our earlier investigation, ChatGPT performed better in some areas than others. With success rates of 25\(\%\) and 1\(\%\), respectively, in the GRE Quantitative and AP Calculus BC competitions, ChatGPT performed much worse.
These contests are renowned for their high degree of complexity and difficulty, with questions that call for highly developed problem-solving abilities and a thorough comprehension of mathematical ideas. These types of challenges are difficult for ChatGPT to understand and analyze, which underlines the shortcomings of current language models. Overall, our analysis of ChatGPT's performance in several math competitions reveals the advantages and disadvantages of present language models for math problem-solving. Even though language models like ChatGPT have advanced significantly in recent years, they still have difficulties processing graphical data, comprehending intricate mathematical ideas, and working out difficult mathematical problems. The goal of future study could be to overcome these constraints and improve language models' capacity for mathematical problem solving.

## 6 Conclusion

In this study, we assessed how well ChatGPT performed when answering mathematics questions of various levels and topics. The findings revealed that ChatGPT performed well in some topics and levels while performing poorly in others. At level K, ChatGPT correctly answered 83\(\%\) of the questions, whereas at levels C, A, and H, the accuracy rate dropped to 62\(\%\), 27\(\%\), and 10\(\%\), respectively. Additionally, the accuracy rates of ChatGPT varied depending on the topic, with M11B, M12B, M11A, and M12D having the highest rates and M12A, M11C, and M12G having the lowest rates. It is crucial to highlight that ChatGPT had difficulty with problems requiring graphical interpretation because it could not read and comprehend the images, which led to a poor accuracy rate for queries about derivatives and applications. Furthermore, ChatGPT math scores were consistently lower than those of Vietnamese students in the same years. This might be a result of the language model's reliance on pre-existing data and algorithms, as well as its failure to comprehend the context and nuances of the Vietnamese language. In conclusion, ChatGPT showed potential in resolving mathematical problems, but its effectiveness was constrained by factors such as graphical interpretation and language understanding. Future studies might concentrate on addressing these limitations and investigating the potential of language models in math education.
2304.07656
Constraint stability in permutations and action traces
We introduce the notion of action trace as a function naturally associated to a probability measure preserving action of a group on a standard probability space. For countable amenable groups, we characterise stability in permutations using action traces. We extend such a characterisation to constraint stability. We give sufficient conditions for a group to be constraint stable. As an application, we obtain many new examples of groups stable in permutations, in particular, among free amalgamated products over a finite group. This is the first general result (besides trivial case of free products) which gives a wealth of non-amenable groups stable in permutations.
Goulnara Arzhantseva, Liviu Paunescu
2023-04-15T23:37:52Z
http://arxiv.org/abs/2304.07656v1
# Constraint stability in permutations and action traces ###### Abstract. We introduce the notion of action trace as a function naturally associated to a probability measure preserving action of a group on a standard probability space. For countable amenable groups, we characterise stability in permutations using action traces. We extend such a characterisation to constraint stability. We give sufficient conditions for a group to be constraint stable. As an application, we obtain many new examples of groups stable in permutations, in particular, among free amalgamated products over a finite group. This is the first general result (besides trivial case of free products) which gives a wealth of non-amenable groups stable in permutations. Key words and phrases:Metric ultraproducts, sofic groups, Loeb measure space, groups stable in permutations 2010 Mathematics Subject Classification: 20Fxx, 20F05, 20F69, 20B30, 22F10 L.P. was supported by grant number PN-II-RU-TE-2014-4-0669 of the Romanian National Authority for Scientific Research, CNCS - UEFISCDI A general theory of constraint metric approximations by an arbitrary approximating family endowed with a bi-invariant distance (not necessarily by permutations with \(d_{H}\)) and of constraint stability of arbitrary systems of group equations has been developed in our prior article [1]. In the present paper, we introduce the notion of _action trace_. Equipped with this new tool, we extend our study of constraint stability and provide new examples of groups stable in permutations with respect to \(d_{H}\). The following result gives a general ground for our examples, see Definition 2.3 for the terminology. **Theorem 1.1** (Theorem 4.8).: _Let \(G_{1}\) and \(G_{2}\) be two countable groups with a common subgroup \(H\). Suppose that \(G_{1}\) is stable in permutations and \(G_{2}\) is \(\varphi\)-constraint stable, for every homomorphism \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\). Then \(G_{1}\ast_{H}G_{2}\) is stable in permutations._ In the process, we generalise a few classical results, our conceptual results on stability of groups from [1] and results on stability of amenable groups from [1] (precise references are given below). The study of constraint stability initiated in [1] is more general than that of stability as considered in [1, 2]. The action traces are well-suited to this more general setting and allow to overcome the use of invariant random subgroups essential to [2]. The next theorem is our main technical result, see Definition 3.4 and Definition 4.1 for the terminology. **Theorem 1.2** (Theorem 4.7).: _Let \(H\leqslant G\) be countable groups, \(G\) amenable and \(H\) finite. Let \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism. Then \(G\) is \(\varphi\)-constraint stable if and only if every \(\varphi\)-constraint action trace is \(\varphi\)-constraint residually finite._ This result is a crucial ingredient towards our main source of new examples of groups stable in permutations: **Theorem 1.3** (Theorem 6.2).: _Let \(G_{1}\) be a countable group stable in permutations and \(H\) be a finite subgroup. Let \(G_{2}\) be a countable amenable group with \(Sub(G_{2})\) countable, every almost normal subgroup profinitely closed, and such that \(H\) is acting on \(G_{2}\). Then \(G_{1}\ast_{H}(G_{2}\rtimes H)\) is stable in permutations._ The paper is organised as follows. In Section 2, we review the notion of constraint stability and give an alternative to [1] formulation, in a more group-theoretical language. 
In Section 3, we define the action trace. Then we give a characterisation of stability in permutations for amenable groups using action traces, see Theorem 3.17. In Section 4, we prove an analogous characterisation of more general constraint stability, see Theorem 4.7. In Section 5, we give sufficient conditions for a group to be constraint stable. In Section 6, we provide new examples of groups stable in permutations, obtained from our study of constraint stability via action traces. We conclude, in Section 7, with a result on (very) flexible stability and a few open questions.

## 2. Preliminaries

Let \(\omega\) be a non-principal ultrafilter on \(\mathbb{N}\) and let \(n_{k}\in\mathbb{N}^{*}\) such that \(\lim_{k\to\omega}n_{k}=\infty\). The metric ultraproduct of \(S_{n_{k}},k\in\mathbb{N}\), with respect to the normalised Hamming distance is the _universal sofic group_ [1]:

\[\Pi_{k\to\omega}S_{n_{k}}=\Pi_{k}S_{n_{k}}/\{(p_{k})_{k}\in\Pi_{k}S_{n_{k}}:\lim_{k\to\omega}d_{H}(p_{k},1_{n_{k}})=0\},\]

endowed with the bi-invariant metric defined by \(d_{\omega}\big((p_{k})_{k},(q_{k})_{k}\big)=\lim_{k\to\omega}d_{H}(p_{k},q_{k})\). We write \(1_{\omega}\) for the identity element of this group and denote by

\[Q\colon\Pi_{k}S_{n_{k}}\to\Pi_{k\to\omega}S_{n_{k}}\]

the canonical projection homomorphism.

**Definition 2.1** (Sofic morphism / sofic representation).: A group homomorphism

\[\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\]

is called a _sofic morphism_ of \(G\). A sofic morphism at the maximal distance to the identity, that is, a group homomorphism

\[\theta\colon G\hookrightarrow\Pi_{k\to\omega}S_{n_{k}}\]

with \(d_{\omega}\left(\theta(g),1_{\omega}\right)=1\) for all \(g\neq 1_{G}\) in \(G\), is called a _sofic representation_.

**Definition 2.2** (Conjugated morphisms).: Two sofic morphisms \(\theta_{1},\theta_{2}\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) are called _conjugated_ if there exists \(p\in\Pi_{k\to\omega}S_{n_{k}}\) such that \(\theta_{1}(g)=p\theta_{2}(g)p^{-1}\) for every \(g\in G\).

Using matrices, \(S_{n}\) is identified with the group of permutation matrices. Then \(d_{H}(p,1_{n})=1-Tr(p)\), where \(Tr(p)\) is the normalised trace of the matrix \(p\in S_{n}\). We define \(Tr\left((p_{k})_{k}\right)=\lim_{k\to\omega}Tr\left(p_{k}\right)\) on \(\Pi_{k\to\omega}S_{n_{k}}\).

The following are instances of a general concept of a _constraint lift_ [1, Definition 2.15] and of a general theorem characterising _constraint stability as a lifting property_ of constraint morphisms [1, Theorem 2.16].

**Definition 2.3** (Constraint morphism / constraint lift / constraint stability).: Let \(H\leqslant G\) be countable groups and \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\), \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) be homomorphisms. We say that:

1. \(\theta\) is _\(\varphi\)-constraint_ if \(\theta|_{H}=Q\circ\varphi\);
2. \(\theta\) is _\(\varphi\)-constraint liftable_ if there exists a homomorphism \(\widetilde{\theta}\colon G\to\Pi_{k}S_{n_{k}}\), called a _\(\varphi\)-constraint lift_ of \(\theta\), such that \(\theta=Q\circ\widetilde{\theta}\) and \(\widetilde{\theta}|_{H}=\varphi\);
3. \(G\) is _constraint \(\varphi\)-stable_ if every \(\varphi\)-constraint homomorphism \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) is \(\varphi\)-constraint liftable.
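The identity \(d_{H}(p,1_{n})=1-Tr(p)\) recalled above simply says that the permutation matrix of \(p\) has a diagonal entry 1 exactly at each fixed point of \(p\). The following small numerical check is an illustration only; the helper functions are ours and not part of the paper.

```python
from fractions import Fraction

def hamming_dist(p, q):
    """Normalised Hamming distance between permutations of {0,...,n-1} given as tuples."""
    return Fraction(sum(p[i] != q[i] for i in range(len(p))), len(p))

def normalised_trace(p):
    """Normalised trace of the permutation matrix of p = fraction of fixed points."""
    return Fraction(sum(p[i] == i for i in range(len(p))), len(p))

p = (1, 0, 3, 2, 4, 5)          # the permutation (0 1)(2 3) in S_6; fixed points 4, 5
identity = tuple(range(6))

assert hamming_dist(p, identity) == 1 - normalised_trace(p)   # 2/3 == 1 - 1/3
print(hamming_dist(p, identity), 1 - normalised_trace(p))
```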
We say a _liftable_ homomorphism (it was termed _perfect_ in [1, Definition 4.1]), a _lift_, and \(G\) is _stable in permutations_, whenever \(H=\{1_{H}\}\) is the trivial subgroup in the preceding definitions. In [1], the constraint stability has been introduced in countable groups using the language of equations with coefficients and for arbitrary metric approximations (not only by permutations). In the present paper, Definition 2.3 uses pairs \(H\leqslant G\) of countable groups and their homomorphisms instead of equations. These two viewpoints on constraint stability are easily seen to be equivalent in the setting of finitely generated groups: fixing a finite set of generators of \(H\) as coefficients leads to finitely many equations as in [1, Definition 2.4] and, conversely, generating a subgroup by given coefficients, yields a pair \(H\leqslant G\) as in Definition 2.3. In [1], the constraint stability has been characterised as a lifting property in a theorem, the above-mentioned [1, Theorem 2.16]. This characterisation is now the content of Definition 2.3 (iii). Definition 2.3 extends immediately to arbitrary metric approximations and has a natural reformulation, in the spirit of [1], as constraint stability of almost solutions of systems consisting of countably many equations with coefficients. ## 3. Action traces Let \(G\) be a countable discrete group and \((X,\mu)\) be a standard probability space. Denote by \(\mathcal{P}_{f}(G)\) the set of finite subsets of \(G\). Let \(\alpha\colon G\to Aut\,(X,\mu)\) be a probability measure preserving action. We introduce two invariants associated to the action. **Definition 3.1** (Trace).: The _trace_ of \(\alpha\colon G\curvearrowright(X,\mu)\) is defined as follows: for each \(A\in\mathcal{P}_{f}(G)\), \[Tr_{\alpha}(A)=\mu(\{x\in X:\alpha(g)(x)=x,\ \forall g\in A\}).\] **Definition 3.2** (Benjamini-Schramm statistics).: The _Benjamini-Schramm statistics_ of \(\alpha\colon G\curvearrowright(X,\mu)\) are defined by: \[S_{\alpha}(A,B)=\mu(\{x\in X:\alpha(g)(x)=x\ \forall g\in A;\ \alpha(g)(x)\neq x \ \forall g\in B\}),\] where \(A,B\in\mathcal{P}_{f}(G)\). We use \(S\) and \(Tr\) without index when the action \(\alpha\) is clear from the context. **Proposition 3.3**.: _Given an action \(\alpha\colon G\curvearrowright(X,\mu)\) as above, the associated numbers \(S\) are determined by the numbers \(Tr\), and vice versa._ Proof.: This is straightforward, by the inclusion-exclusion principle. For example, \(S(\{g_{1},g_{2}\},\{h\})=Tr(\{g_{1},g_{2}\})-Tr(\{g_{1},g_{2},h\})\) and \(S(\{g\},\{h_{1},h_{2},h_{3}\})=Tr(\{g\})-Tr(\{g,h_{1}\})-Tr(\{g,h_{2}\})-Tr(\{g,h_{3}\})+Tr(\{g,h_{1},h_{2}\})+Tr(\{g,h_{2},h_{3}\})+Tr(\{g,h_{3},h_{1}\})- Tr(\{g,h_{1},h_{2},h_{3}\})\). In general: \[S_{\alpha}(A,B)=\sum_{V\subseteq B}(-1)^{|V|}\cdot Tr_{\alpha}(A\cup V),\] for every pair \(A,B\in\mathcal{P}_{f}(G)\). Conversely, \(Tr_{\alpha}(A)=S_{\alpha}(A,\emptyset)\). We now introduce our main concept. **Definition 3.4** (Action trace).: A function \(Tr\colon\mathcal{P}_{f}(G)\to[0,1]\) is called an _action trace_ if there exists a probability measure preserving action \(\alpha\colon G\to Aut(X,\mu)\) such that \(Tr=Tr_{\alpha}\). 
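For a finite permutation action, the quantities in Definitions 3.1 and 3.2 are elementary counts, and the inclusion-exclusion formula of Proposition 3.3 can be tested directly. The sketch below is an illustration only; the choice of permutations is arbitrary and the helper functions are ours, not the paper's.

```python
from fractions import Fraction
from itertools import chain, combinations

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations of {0,...,n-1} given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def Tr(perms, n):
    """Action trace of a finite set of permutations: fraction of common fixed points."""
    return Fraction(sum(all(p[i] == i for p in perms) for i in range(n)), n)

def S(A, B, n):
    """Benjamini-Schramm statistic: points fixed by every element of A and by no element of B."""
    return Fraction(sum(all(p[i] == i for p in A) and all(p[i] != i for p in B)
                        for i in range(n)), n)

n = 6
a = (1, 0, 3, 2, 4, 5)    # (0 1)(2 3)
b = (1, 0, 2, 3, 5, 4)    # (0 1)(4 5)
ab = compose(a, b)        # (2 3)(4 5)

# Check the inclusion-exclusion identity of Proposition 3.3 for A = {a}, B = {b, ab}.
A, B = [a], [b, ab]
subsets = chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))
incl_excl = sum((-1) ** len(V) * Tr(A + list(V), n) for V in subsets)
assert S(A, B, n) == incl_excl    # both equal 1/3 here
print(S(A, B, n), incl_excl)
```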
### Action traces of homomorphisms If a group \(G\) admits a homomorphism to \(S_{n}\), to the cartesian product \(\Pi_{k}S_{n_{k}}\) or to the universal sofic group \(\Pi_{k\to\omega}S_{n_{k}}\), then there is a natural action trace defined by such a homomorphism, induced by the canonical action \(\pi\colon S_{n}\curvearrowright(\{1,\dots,n\},\mu_{n})\), where \(\mu_{n}\) is the normalised cardinal measure. **Definition 3.5** (Action traces of homomorphisms).: 1. If \(\theta\colon G\to S_{n}\) is a homomorphism, then we define \(Tr_{\theta}=Tr_{\pi\circ\theta}\), where \(Tr_{\pi\circ\theta}\) is the trace of the action \(\pi\circ\theta\colon G\curvearrowright(\{1,\dots,n\},\mu_{n})\). 2. If \(\theta\colon G\to\Pi_{k}S_{n_{k}}\) is a homomorphism, then we define \(Tr_{\theta}=\lim_{k\to\omega}Tr_{\eta_{k}\circ\theta}\), where \(\eta_{k}\circ\theta\colon G\to S_{n_{k}}\) and \(\eta_{k}\colon\Pi_{k}S_{n_{k}}\twoheadrightarrow S_{n_{k}}\) is the canonical projection on the \(k\)-th factor. Such an action trace is said to be _residually finite_. 3. If \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) is a sofic morphism, then we define \(Tr_{\theta}\) to be the trace of the induced action on the Loeb measure space \(G\curvearrowright(X_{\omega},\mu_{\omega})\), where \(X_{\omega}=\Pi_{k}X_{n_{k}}/\sim_{\omega}\) is the algebraic ultraproduct of \(X_{n_{k}}=\{1,\dots,n_{k}\}\) and \(\mu_{\omega}=\lim_{k\to\omega}\mu_{n_{k}}\)[1, Section 2.2]. Such an action trace is said to be _sofic_. _Observation 3.6_.: For an action trace \(Tr\), being residually finite, or sofic, does not depend on the sequence \(\{n_{k}\}_{k}\). Indeed, if there exists a homomorphism \(\theta\colon G\to\Pi_{k}S_{n_{k}}\) such that \(Tr=Tr_{\theta}\), then there exists such a homomorphism for any other sequence \(\{m_{k}\}_{k}\), provided that \(\lim_{k\to\omega}m_{k}=\infty\). The proof is the same as our proof of [1, Proposition 6.1]. The following result is straightforward, by definitions. **Lemma 3.7**.: _Let \(\widetilde{\theta}\colon G\to\Pi_{k}S_{n_{k}}\) be a lift of a sofic morphism \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\), then \(Tr_{\widetilde{\theta}}=Tr_{\theta}\)._ ### Action traces of amenable groups We begin with proving, in Theorem 3.12, a generalisation of a celebrated theorem by Elek and Szabo which characterises finitely generated amenable groups as having a unique sofic representation, up to conjugacy [11, Theorem 2]. This generalisation is essentially due to Newman and Sohler, see [12] and [12, Theorem 3.1], who work in the setting of hyperfinite graphs and use the statistical distance to compare graphs. See also [1, Proposition 6.8] for an interpretation in the context of invariant random subgroups. Our formulation and proof of Theorem 3.12 are using action traces. For a fixed \(d>0\), we work with two types of graphs: 1. simple \(d\)-bounded degree graphs that are used in the Newman-Sohler theorem, see Theorem 3.8, and 2. oriented labeled \(d\)-bounded degree graphs that are defined by a finite collection of permutations, which is our setting. We define the statistical distance for these two types of graphs. For a graph \(\Gamma\), let \(V(\Gamma)\) and \(E(\Gamma)\) denote its vertex set and edge set, respectively. Let \(K=(V,E,r)\) be a simple rooted \(d\)-bounded degree graph, with a root \(r\in V\). We define a number \(\Gamma(K)\), that represents the ratio of vertices in \(V(\Gamma)\) that have a rooted subgraph isomorphic to \(K\). 
Formally,

\[\Gamma(K)=|\{x\in V(\Gamma):\exists f\colon V\to V(\Gamma),\text{ injective},f(r)=x,\forall(u,v)\in E,(f(u),f(v))\in E(\Gamma)\}|/|V(\Gamma)|.\]

Let \(\{K_{j}\}_{j\in\mathbb{N}^{*}}\) be an enumeration of all simple rooted \(d\)-bounded degree graphs. Then the _statistical distance_ between two simple \(d\)-bounded degree graphs \(\Gamma_{1}\) and \(\Gamma_{2}\) is defined as follows:

\[d_{stat}(\Gamma_{1},\Gamma_{2})=\sum_{j=1}^{\infty}\frac{1}{2^{j}}|\Gamma_{1}(K_{j})-\Gamma_{2}(K_{j})|.\]

The statistical distance between two oriented labeled \(d\)-bounded degree graphs is defined similarly by taking an enumeration of all rooted oriented labeled \(d\)-bounded degree graphs.

**Theorem 3.8**.: _[11, Theorem 5] Let \(\mathcal{P}\) be a hyperfinite family of simple \(d\)-bounded degree graphs for a fixed \(d>0\). Then for every \(\delta>0\), there exists \(f(\delta)>0\) such that if for a graph \(\Gamma_{1}\in\mathcal{P}\) and a \(d\)-bounded degree graph \(\Gamma_{2}\), \(|\Gamma_{1}|=|\Gamma_{2}|=n\) and \(d_{stat}(\Gamma_{1},\Gamma_{2})<f(\delta)\), then \(\Gamma_{1}\) and \(\Gamma_{2}\) are \(\delta\)-close, that is, we have a bijection \(\rho\colon V(\Gamma_{1})\to V(\Gamma_{2})\) such that_

\[|\rho^{-1}E(\Gamma_{2})\triangle E(\Gamma_{1})|<\delta n.\]

**Theorem 3.9**.: _Let \(\mathcal{P}\) be a hyperfinite family of oriented labeled \(d\)-bounded degree graphs for a fixed \(d>0\). Then for every \(\delta>0\) there exists \(f(\delta)>0\) such that if for a graph \(\Gamma_{1}\in\mathcal{P}\) and an oriented labeled \(d\)-bounded degree graph \(\Gamma_{2}\), \(|\Gamma_{1}|=|\Gamma_{2}|=n\) and \(d_{stat}(\Gamma_{1},\Gamma_{2})<f(\delta)\), then \(\Gamma_{1}\) and \(\Gamma_{2}\) are \(\delta\)-close, that is, we have a bijection \(\rho\colon V(\Gamma_{1})\to V(\Gamma_{2})\) such that_

\[|\{e\in E(\Gamma_{2}):\rho^{-1}(e)\notin E(\Gamma_{1})\text{ or }l(\rho^{-1}(e))\neq l(e)\}|<\delta n,\]

_where \(l\) denotes the labelling function, with edges of both graphs labeled by elements of a fixed finite alphabet._

Sketch of the proof.: Proceeding as in the proof of [11, Theorem 9], we encode an oriented labeled bounded degree graph into a simple bounded degree graph. Then, we apply Theorem 3.8.

Let \(\mathbb{F}_{m}\) denote the free group of rank \(m\) freely generated by \(x_{1},\ldots,x_{m}\).

**Definition 3.10** (Action graph).: For a homomorphism \(\psi\colon\mathbb{F}_{m}\to S_{n}\), we define an oriented labeled graph \(\Gamma_{\psi}=(V,E)\), where \(V=\{1,\ldots,n\}\) and \(E=\{(v,\psi(x_{i})(v)):\forall v\in V,\ \forall i=1,\ldots,m\}\), and oriented edges are labeled by the \(m\) letters \(x_{1},\ldots,x_{m}\), accordingly.

**Proposition 3.11**.: _There exist sequences \((A_{j})_{j}\) and \((B_{j})_{j}\) of finite subsets of \(\mathbb{F}_{m}\) such that for every \(n\) and every pair of homomorphisms \(\varphi,\psi\colon\mathbb{F}_{m}\to S_{n}\), we have_

\[d_{stat}(\Gamma_{\varphi},\Gamma_{\psi})=\sum_{j=1}^{\infty}\frac{1}{2^{j}}|S_{\varphi}(A_{j},B_{j})-S_{\psi}(A_{j},B_{j})|.\]

Proof.: It is straightforward from the definitions of the Benjamini-Schramm statistics and of the statistical distance.

**Theorem 3.12**.: _Let \(G\) be a countable amenable group. Let \(\theta_{1},\theta_{2}\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) be sofic morphisms. Then, \(\theta_{1}\) and \(\theta_{2}\) are conjugated if and only if \(Tr_{\theta_{1}}=Tr_{\theta_{2}}\)._

Proof.: The "only if" direction follows by definitions. Let us prove the "if" direction.
If \(G\) is finitely generated, then \(G=\mathbb{F}_{m}/N\) for some \(m\geqslant 1\) and \(N\trianglelefteq\mathbb{F}_{m}\). Let \(\pi\colon\mathbb{F}_{m}\twoheadrightarrow G=\mathbb{F}_{m}/N\) be the canonical projection. We define \(\psi_{j}\colon\mathbb{F}_{m}\to\Pi_{k\to\omega}S_{n_{k}}\) by \(\psi_{j}=\theta_{j}\circ\pi\), for \(j=1,2\). Let \(s_{k}^{i,j}\in S_{n_{k}}\) be such that \(\psi_{j}(x_{i})=\Pi_{k\to\omega}s_{k}^{i,j}\), where \(i\in\{1,\ldots,m\}\) and \(j=1,2\). We define the associated action graph, an oriented graph \(\Gamma_{k}^{j}=(V_{k}^{j},E_{k}^{j})\), by taking \(V_{k}^{j}=\{1,\ldots,n_{k}\}\) and \(E_{k}^{j}=\{(v,s_{k}^{i,j}(v)):\forall v\in V_{k}^{j},\ \forall i=1,\ldots,m\}\), labelled by the \(m\) letters \(s_{k}^{i,j}\), for \(i\in\{1,\ldots,m\}\), according to the orientation. Since \(G\) is amenable, by a result of Schramm [15], the sequence of graphs \(\{\Gamma_{k}^{j}\}_{k}\) is hyperfinite. Also, \(Tr_{\theta_{1}}=Tr_{\theta_{2}}\) implies, by Proposition 3.3 and Proposition 3.11, that \(\lim_{k\to\omega}d_{stat}(\Gamma_{k}^{1},\Gamma_{k}^{2})=0\). By Theorem 3.9, it follows that there exists a sequence of bijections \(\rho_{k}\colon\{1,\ldots,n_{k}\}\to\{1,\ldots,n_{k}\}\) such that:

\[\lim_{k\to\omega}|\{(v,i)\in\{1,\ldots,n_{k}\}\times\{1,\ldots,m\}:s_{k}^{i,1}(\rho_{k}(v))\neq\rho_{k}(s_{k}^{i,2}(v))\}|/n_{k}=0.\]

This is equivalent to the fact that \(\psi_{1}\) and \(\psi_{2}\) are conjugated by \(\Pi_{k\to\omega}\rho_{k}\in\Pi_{k\to\omega}S_{n_{k}}\). The same element that conjugates \(\psi_{1}\) to \(\psi_{2}\) will also conjugate \(\theta_{1}\) to \(\theta_{2}\).

If \(G\) is not finitely generated, then let \(\{G_{i}\}_{i}\) be an increasing sequence of finitely generated subgroups of \(G\), such that \(G=\cup_{i}G_{i}\). For each \(i\), we construct \(p_{i}\in\Pi_{k\to\omega}S_{n_{k}}\) such that \(\theta_{2}|_{G_{i}}=p_{i}(\theta_{1}|_{G_{i}})p_{i}^{-1}\). Then, we use the diagonal argument to construct the required \(p\in\Pi_{k\to\omega}S_{n_{k}}\) that conjugates \(\theta_{1}\) to \(\theta_{2}\).

_Example 3.13_.: The hypothesis \(Tr_{\theta_{1}}=Tr_{\theta_{2}}\) in Theorem 3.12 is on all trace numbers \(Tr_{\theta}(g_{1},\ldots,g_{n})\), for all \(g_{1},\ldots,g_{n}\in G,n\in\mathbb{N}^{*}\). Requiring only \(Tr_{\theta_{1}}(g)=Tr_{\theta_{2}}(g)\) for all \(g\in G\) is not sufficient to deduce the conjugacy of \(\theta_{1}\) and \(\theta_{2}\). Here is a counter-example, even in finite groups. Let \(G=\mathbb{Z}_{2}\times\mathbb{Z}_{2}=\langle a,b\mid a^{2}=b^{2}=(ab)^{2}=1\rangle\) and define \(\theta_{1},\theta_{2}\colon G\to S_{6}\) as follows:

\[\theta_{1}(a)=(12)(34)(5)(6),\ \theta_{1}(b)=(12)(3)(4)(56),\ \theta_{1}(ab)=(1)(2)(34)(56);\]

\[\theta_{2}(a)=(12)(34)(5)(6),\ \theta_{2}(b)=(13)(24)(5)(6),\ \theta_{2}(ab)=(14)(23)(5)(6).\]

Then the homomorphisms \(\theta_{1},\theta_{2}\) satisfy \(Tr(\theta_{1}(g))=Tr(\theta_{2}(g))=1/3\), or equivalently, \(Tr_{\theta_{1}}(g)=Tr_{\theta_{2}}(g)=1/3\) for all \(g\neq 1_{G}\) in \(G\). However, \(\theta_{2}\) has two global fixed points, while \(\theta_{1}\) does not have any. We deduce that \(\theta_{1}\) and \(\theta_{2}\) are not conjugated.

It is interesting to compare Theorem 3.12 with an analogous result on _hyperlinear morphisms_. It might be known to experts, although it is not in the literature.
We formulate it in our terms, using hyperlinear analogues of Definition 2.1 and Definition 2.2, where \((S_{n_{k}},d_{H})\) is replaced by \((U_{n},d_{HS})\), the finite rank unitary group endowed with the normalised Hilbert-Schmidt distance, defined, for two unitary matrices \(u,v\in U_{n}\), by \(d_{HS}(u,v)=\sqrt{Tr(u-v)^{*}(u-v)}\), where \(Tr\) is the normalised trace. **Theorem 3.14**.: _Let \(G\) be a countable amenable group and \(\theta_{1},\theta_{2}\colon G\to\Pi_{k\to\omega}U_{n_{k}}\) be hyperlinear morphisms. Then, \(\theta_{1}\) and \(\theta_{2}\) are conjugated if and only if \(Tr(\theta_{1}(g))=Tr(\theta_{2}(g))\) for all \(g\in G\)._ Proof.: We prove the non-trivial "if" direction. Let \(\varphi\colon G\to\mathbb{C}\) be defined by \(\varphi(g)=Tr(\theta_{i}(g))\) with \(i=1\) or \(2\). Then \(\varphi\) is a positive defined function, invariant on conjugacy classes, i.e. a character. Let \((M,Tr)\) be the von Neumann algebra generated by the GNS representation associated with \((G,\varphi)\). Since \(G\) is amenable, then \(M\) is hyperfinite. The von Neumann algebra generated by \(\theta_{1}(G)\) inside \(\Pi_{k\to\omega}U_{n_{k}}\) is isomorphic to \((M,Tr)\). The same is true for \(\theta_{2}(G)\). These are two embeddings of the same hyperfinite von Neumann algebra into \(\Pi_{k\to\omega}U_{n_{k}}\). By [11, Proposition 1], translated in the ultraproduct language by standard arguments, these two embeddings are conjugated. In particular, there is indeed a unitary matrix which conjugates \(\theta_{1}\) and \(\theta_{2}\) in Example 3.13, although we have seen that there is no such permutation matrix. Thus, the preceding two formulations using action trace and usual trace, respectively, allow us to distinguish sofic and hyperlinear morphisms of amenable groups. In the hyperlinear case, we deal with the trace, i.e. a classical character. While in the sofic case, we require the action trace, which is a 'character like' function associated to the action \(G\curvearrowright(X_{\omega},\mu_{\omega})\) on the Loeb measure space. The difference in the hypothesis on these two types of traces explains a greater difficulty to prove the stability results in permutations versus analogous results in unitary matrices. The next result is the action trace generalisation of the fact that every amenable group is sofic. An analogous result, in the setting of invariant random subgroups, is [1, Proposition 6.6]. **Proposition 3.15**.: _If \(G\) is a countable amenable group, then every action trace is sofic._ Proof.: Let \(Tr\) be an action trace of \(G\) and \(\alpha\colon G\to Aut(X,\mu)\) be a probability measure preserving action such that \(Tr=Tr_{\alpha}\). Let \(E_{\alpha}\) be the orbit equivalence relation of \(\alpha\) on \((X,\mu)\). Since \(G\) is amenable, by the Ornstein-Weiss theorem, \(E_{\alpha}\) is hyperfinite. It follows that \(E_{\alpha}\) is treeable. By [16, Proposition 3.16], \(E_{\alpha}\) is a sofic equivalence relation (cf. [1] that uses a different but, by [16, Proposition 3.22], equivalent terminology). Let \(M(E_{\alpha})\) be the tracial von Neumann algebra associated to \(E_{\alpha}\) by the Feldman-Moore construction and \(A\subseteq M(E_{\alpha})\) be the corresponding Cartan pair of \(E_{\alpha}\)[16, Section 2.3]. By [16, Proposition 2.17], there exists a sofic embedding \(\theta\colon M(E_{\alpha})\to\Pi_{k\to\omega}M_{n_{k}}\) into the metric ultraproduct of matrix algebras equipped with the normalised trace. 
The image of \(\alpha\) is included in \([E_{\alpha}]\), the full group of \(E_{\alpha}\), where \([E_{\alpha}]=\{\varphi\in Aut(X,\mu):(x,\varphi(x))\in E_{\alpha}\,\forall x\}\). Then, using the canonical injection \(\iota\colon[E_{\alpha}]\hookrightarrow M(E_{\alpha})\)[16, Definition 2.13], we have a map \(\iota\circ\alpha\colon G\to M(E_{\alpha})\). For a finite subset \(F\subseteq G\), let \(c_{F}=\{x\in X:\alpha(g)(x)=x\,\forall g\in F\}\), and let \(Q_{F}\in A\) be the projection on \(c_{F}\). Then, by construction of \(M(E_{\alpha})\), \(Tr(Q_{F})=\mu(c_{F})=Tr_{\alpha}(F)\). Let us prove that \(\theta\circ\iota\circ\alpha\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) is the required morphism. The image is in \(\Pi_{k\to\omega}S_{n_{k}}\), by definition of a sofic embedding [16, Definition 2.16]. For finite \(F\subseteq G\), let \(P_{F}\) be the projection on the set of common fixed points in the Loeb measure space of \(\theta\circ\iota\circ\alpha(g)\) for all \(g\in F\). We have to show that \(Tr(P_{F})=Tr_{\alpha}(F)\). We show that actually \(P_{F}=\theta(Q_{F})\). Since \(\theta\) is trace preserving, \(Tr(\theta(Q_{F}))=Tr(Q_{F})=Tr_{\alpha}(F)\), then this concludes the proof. For every \(g\in G\), \(Tr(P_{[g]})=Tr(\theta\circ\iota\circ\alpha(g))=Tr(\iota\circ\alpha(g))=Tr( \Theta(Q_{[g]}))=Tr(\theta(Q_{[g]}))\). Since \(\theta(Q_{[g]})\leqslant P_{[g]}\), it follows that \(\theta(Q_{[g]})=P_{[g]}\). Since \(\theta\) is a morphism, then \(P_{F}=\Pi_{g\in F}P_{[g]}=\Pi_{g\in F}\theta(Q_{[g]})=\theta(\Pi_{g\in F}Q_{[ g]})=\theta(Q_{F})\). The following result is the action trace generalisation of [1, Theorem 4.3] that states that a sofic group stable in permutations has to be residually finite. Cf. [1, Theorem 1.3 (i)]. **Proposition 3.16**.: _Let \(G\) be a countable group. If \(G\) is stable in permutations, then any sofic action trace is residually finite._ Proof.: Let \(Tr\colon\mathcal{P}_{f}(G)\to[0,1]\) be a sofic action trace. Thus, there exists \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) such that \(Tr=Tr_{\theta}\). Since \(G\) is stable in permutations, then there exists \(\widetilde{\theta}\colon G\to\Pi_{k}S_{n_{k}}\) such that \(\theta=Q\circ\widetilde{\theta}\). By Lemma 3.7, \(Tr_{\theta}=Tr_{\widetilde{\theta}}\), and hence, \(Tr\) is residually finite. The next result generalises [1, Main theorem, Corollary 6.5]. It can be viewed as a variant, formulated in the ultraproduct language suited to our setting, of [1, Theorem 1.3 (ii)]. **Theorem 3.17**.: _Let \(G\) be a countable amenable group. Then \(G\) is stable in permutations if and only if every action trace is residually finite._ Proof.: If \(G\) is stable in permutations, it follows by Propositions 3.15 and 3.16 that every action trace is residually finite. Conversely, let \(G\) be a group such that every action trace is residually finite. Let \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) be a sofic morphism. Since \(Tr_{\theta}\) is residually finite and by Observation 3.6, there exists \(\alpha\colon G\to\Pi_{k}S_{n_{k}}\) such that \(Tr_{\alpha}=Tr_{\theta}\). By Theorem 3.12, there exists \(p\in\Pi_{k}S_{n_{k}}\) such that \(Q\circ(p\alpha(g)p^{-1})=\theta(g)\) for every \(g\in G\). Then \(p\alpha p^{-1}\) is a lift of \(\theta\), and hence, \(G\) is stable in permutations. ## 4. Constraint action traces Now, we transport the results of the previous section to a more general _constraint_ setting. That is, we fix a subgroup \(H\) of \(G\) and a homomorphism \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\). 
Every homomorphism of \(G\) into \(\Pi_{k}S_{n_{k}}\) or \(\Pi_{k\to\omega}S_{n_{k}}\) will be an extension of \(\varphi\) or \(Q\circ\varphi\), respectively. **Definition 4.1** (Constraint action traces).: Let \(H\leqslant G\) be countable groups, \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism and \(Tr\colon\mathcal{P}_{f}(G)\to[0,1]\) be an action trace. We say that: 1. \(Tr\) is \(\varphi\)-_constraint_ if \(Tr(A)=Tr_{\varphi}(A)\) for each \(A\in\mathcal{P}_{f}(H)\); 2. \(Tr\) is \(\varphi\)-_constraint residually finite_ if there exists a homomorphism \(\theta\colon G\to\Pi_{k}S_{n_{k}}\) such that \(Tr=Tr_{\theta}\) and \(\theta|_{H}=\varphi\); 3. \(Tr\) is \(\varphi\)-_constraint sofic_ if there exists a sofic morphism \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) such that \(Tr=Tr_{\theta}\) and \(\theta|_{H}=Q\circ\varphi\). Observe that an action trace that is \(\varphi\)-constraint sofic or \(\varphi\)-constraint residually finite has to be \(\varphi\)-constraint. The next two propositions yield the converse statements, under the assumptions on amenability or constraint stability, respectively. **Proposition 4.2**.: _Let \(H\leqslant G\) be countable amenable groups and \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism. Let \(Tr\colon\mathcal{P}_{f}(G)\to[0,1]\) be a \(\varphi\)-constraint action trace. Then \(Tr\) is \(\varphi\)-constraint sofic._ Proof.: By Proposition 3.15, there exists \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) such that \(Tr=Tr_{\theta}\). Then, \(\theta|_{H}\) and \(Q\circ\varphi\) are two sofic morphisms of \(H\) with the same action trace. By Theorem 3.12, there exists \(p\in\Pi_{k\to\omega}S_{n_{k}}\) such that \(p(\theta|_{H})p^{-1}=Q\circ\varphi\). Then, \(p\theta p^{-1}\) is the required sofic morphism of \(G\). **Proposition 4.3**.: _Let \(H\leqslant G\) be countable groups and \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism such that \(G\) is \(\varphi\)-constraint stable. Let \(Tr\colon\mathcal{P}_{f}(G)\to[0,1]\) be a \(\varphi\)-constraint sofic action trace. Then \(Tr\) is \(\varphi\)-constraint residually finite._ Proof.: Since \(Tr\) is \(\varphi\)-constraint sofic, then there exists \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) such that \(\theta|_{H}=Q\circ\varphi\) and \(Tr=Tr_{\theta}\). Since \(G\) is \(\varphi\)-constraint stable, then there exists a homomorphism \(\widetilde{\theta}\colon G\to\Pi_{k}S_{n_{k}}\) such that \(\theta=Q\circ\widetilde{\theta}\) and \(\widetilde{\theta}|_{H}=\varphi\). By Lemma 3.7, we have \(Tr_{\widetilde{\theta}}=Tr_{\theta}\). Therefore, \(Tr\) is \(\varphi\)-constraint residually finite. These two propositions immediately imply: **Corollary 4.4**.: _Let \(H\leqslant G\) be countable amenable groups and \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism such that \(G\) is \(\varphi\)-constraint stable. Then every \(\varphi\)-constraint action trace is \(\varphi\)-constraint residually finite._ We shall prove the converse of this corollary whenever \(H\) is a finite subgroup of \(G\). **Proposition 4.5**.: _Let \(H\) be a finite group and \(\varphi_{1},\varphi_{2}\colon H\to\Pi_{k}S_{n_{k}}\), be two conjugated homomorphisms such that \(\lim_{k\to\omega}d_{H}(\varphi_{1}^{k}(h),\varphi_{2}^{k}(h))=0\) for every \(h\in H\). 
Then, there exists \(p_{k}\in S_{n_{k}}\) with \(\lim_{k\to\omega}d_{H}(p_{k},1_{n_{k}})=0\) such that \((p_{k})_{k}\in\Pi_{k}S_{n_{k}}\) conjugates \(\varphi_{1}\) to \(\varphi_{2}\)._ Proof.: Let \(\varepsilon_{k}=\max\{d_{H}(\varphi_{1}^{k}(h),\varphi_{2}^{k}(h)):h\in H\}\). Since \(H\) is finite, \(\lim_{k\to\omega}\varepsilon_{k}=0\). Let \(A_{k}=\{i:\varphi_{1}^{k}(h)(i)=\varphi_{2}^{k}(h)(i),\forall h\in H\}\subseteq \{1,\ldots,n_{k}\}\). Then, \(Card(A_{k})/n_{k}\geqslant 1-Card(H)\cdot\varepsilon_{k}\). Also, \(A_{k}\) is invariant under \(\varphi_{1}^{k}\) and \(\varphi_{2}^{k}\). Because \(\varphi_{1}^{k}\) and \(\varphi_{2}^{k}\) are conjugated, they are conjugated also on the complement \((A_{k})^{c}\). We construct \(p_{k}\in S_{n_{k}}\) such that \(p_{k}=1_{n_{k}}\) on \(A_{k}\), and \(p_{k}\) conjugates \(\varphi_{1}^{k}\) and \(\varphi_{2}^{k}\) on \((A_{k})^{c}\). Then, \((p_{k})_{k}\) conjugates \(\varphi_{1}\) to \(\varphi_{2}\), and \(\lim_{k\to\omega}d_{H}(p_{k},1_{n_{k}})\leqslant\lim_{k\to\omega}Card(H)\cdot \varepsilon_{k}=0\). _Example 4.6_.: Proposition 4.5 does not hold if \(H\) is an arbitrary infinite group. Let \(a_{k},b_{k}\in S_{k^{2}}\) be defined by: \[a_{k}= (1,2,\ldots,k)(k+1,\ldots,2k)\cdots(k^{2}-k+1,\ldots,k^{2});\] \[b_{k}= (1,2,\ldots,k-1)(k+1,\ldots,2k-1)\cdots(k^{2}-k+1,k^{2}-1)(k,2k, \ldots,k^{2}).\] By construction, \(a_{k}\) has \(k\) cycles of length \(k\), and \(b_{k}\) has \(k\) cycles of length \(k-1\) and one of length \(k\). These permutations are different only on inputs of type \(mk-1\) and \(mk\). Therefore, \(d_{H}(a_{k},b_{k})=2k/k^{2}=2/k\). Let us consider \(\varphi_{1},\varphi_{2}\colon\mathds{Z}\to\Pi_{k}S_{2k^{2}}\) defined by \(\varphi_{1}(1)=(a_{k}\oplus b_{k})_{k}\) and \(\varphi_{2}(1)=(b_{k}\oplus a_{k})_{k}\). Clearly, \(\varphi_{1}\) is conjugated to \(\varphi_{2}\) and \(d_{H}(\varphi_{1}^{k}(1),\varphi_{2}^{k}(1))=2/k\), so it tends to \(0\) as \(k\to\infty\). However, every \(p_{k}\in S_{2k^{2}}\) that conjugates \(\varphi_{1}^{k}(1)\) to \(\varphi_{2}^{k}(1)\) has the property \(d_{H}(p_{k},1_{2k^{2}})=1\). **Theorem 4.7**.: _Let \(H\leqslant G\) be countable groups, \(G\) amenable and \(H\) finite. Let \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism. Then \(G\) is \(\varphi\)-constraint stable if and only if every \(\varphi\)-constraint action trace is \(\varphi\)-constraint residually finite._ Proof.: The "only if" direction follows by Corollary 4.4. For the "if" direction, let \(\theta\colon G\to\Pi_{k\to\omega}S_{n_{k}}\) be a sofic morphism such that \(Q\circ\varphi=\theta|_{H}\). Then \(Tr_{\theta}\) is a \(\varphi\)-constraint action trace. By hypothesis, \(Tr_{\theta}\) is \(\varphi\)-constraint residually finite. Then, there exists a homomorphism \(\Phi\colon G\to\Pi_{k}S_{n_{k}}\) such that \(\Phi|_{H}=\varphi\) and \(Tr_{\Phi}=Tr_{\theta}\). By Theorem 3.12, \(\theta\) and \(Q\circ\Phi\) are conjugated. Let \(p\in\Pi_{k}S_{n_{k}}\) be such that \(Q\circ(p\cdot\Phi\cdot p^{-1})=\theta\). So \(Q\circ(p\cdot\Phi|_{H}\cdot p^{-1})=\theta|_{H}=Q\circ\varphi\). By Proposition 4.5, applied to \(p\cdot\Phi|_{H}\cdot p^{-1}\) and \(\varphi\), we get \(q\in\Pi_{k}S_{n_{k}}\), \(Q\circ q=1_{\omega}\) and \(qp\cdot\Phi|_{H}\cdot p^{-1}q^{-1}=\varphi\). Then \(qp\cdot\Phi\cdot(qp)^{-1}\) is the required lift of \(\theta\) as \((qp\cdot\Phi\cdot(qp)^{-1})|_{H}=\varphi\) and \(Q\circ(qp\cdot\Phi\cdot(qp)^{-1})=Q\circ(p\cdot\Phi\cdot p^{-1})=\theta\). Thus, \(G\) is \(\varphi\) -constraint stable. 
We do not know whether or not the "if" direction of this theorem holds for an arbitrary infinite subgroup \(H\). The current proof is not sufficient because Proposition 4.5 fails for some infinite \(H\). However, it is still possible for the theorem to hold. In this scenario, one has to choose a specific \(\Phi\colon G\to\Pi_{k}S_{n_{k}}\) that witnesses the constraint residually finiteness property of \(Tr_{\theta}\), at the beginning of the proof. We consider that this scenario is unlikely. The notions of constraint metric approximations and constraint stability that we have introduced in [1], give a rigorous framework to the study of arbitrary metric approximations and their stability of group-theoretical constructions such as free amalgamated products and HNN-extensions. The next theorem illustrates this general approach in the case of stability in permutations. Our goal is to apply this theorem and thus provide new examples of groups stable in permutations. **Theorem 4.8**.: _Let \(G_{1},G_{2}\) be two countable groups with a common subgroup \(H\). Suppose that \(G_{1}\) is stable in permutations and \(G_{2}\) is \(\varphi\)-constraint stable, for every homomorphism \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\). Then \(G_{1}\star_{H}G_{2}\) is stable in permutations._ Proof.: Let \(\theta\colon G_{1}\star_{H}G_{2}\to\Pi_{k\to\omega}S_{n_{k}}\) be a homomorphism. Since \(G_{1}\) is stable in permutations and \(G_{1}\) injects into \(G_{1}\star_{H}G_{2}\), there exists a homomorphism \(\psi_{1}\colon G_{1}\to\Pi_{k}S_{n_{k}}\) such that \(Q\circ\psi_{1}=\theta|_{G_{1}}\). Let \(\varphi=\psi_{1}|_{H}\). By hypothesis, \(G_{2}\) is \(\varphi\)-constraint stable. It follows that there exists a homomorphism \(\psi_{2}\colon G_{2}\to\Pi_{k}S_{n_{k}}\) such that \(Q\circ\psi_{2}=\theta|_{G_{2}}\) and \(\psi_{2}|_{H}=\varphi\). Now, \(\psi_{1}|_{H}=\psi_{2}|_{H}\), so, by the universal property of free amalgamated products, we can construct the homomorphism \(\psi_{1}\star_{H}\psi_{2}\colon G_{1}\star_{H}G_{2}\to\Pi_{k}S_{n_{k}}\). Moreover, \(Q\circ(\psi_{1}\star_{H}\psi_{2})=\theta\), so \(\theta\) is liftable. Theorem 4.8 remains true for arbitrary metric approximations (not necessarily by permutations), under suitable variants of Definition 2.3. The results of Section 3 and Section 4 have natural analogues for arbitrary metric approximations. In contrast, the use of Loeb measure space \((X_{\omega},\mu_{\omega})\) underlying the arguments of the next section makes the sofic approximations special. ## 5. Homomorphism extension property In this section, we use Theorem 4.7 in order to provide examples of groups that are \(\varphi\)-constraint stable for every homomorphism \(\varphi\) of a subgroup. First we give some preliminaries. **Definition 5.1** (Coset multiplicity).: Let \(\varphi\colon H\to S_{n}\) be a homomorphism and \(N\leqslant H\) be a subgroup. Then the _coset multiplicity_\(r(\varphi,N)\) is the multiplicity of \(H\curvearrowright H/N\) in \(\varphi\) divided by \(n\). For \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\), \(\varphi=(\varphi_{k})_{k}\), we define \(r(\varphi,N)=\lim_{k\to\omega}r(\varphi_{k},N)\). _Observation 5.2_.: If \(H\) is finite, then 1. \(\sum_{N\leqslant H}r(\varphi,N)\cdot|H/N|=1\); 2. \(r(\varphi,N)=S_{\varphi}(N,N^{c})/Card(\{h\in H:h^{-1}Nh\in N\})\), where \(N^{c}\) denotes the complement \(H\setminus N\). **Proposition 5.3**.: _Let \(\varphi,\psi\colon H\to S_{n}\) be two homomorphisms of a finite group \(H\). 
Then \(\varphi\) and \(\psi\) are conjugated if and only if \(r(\varphi,N)=r(\psi,N)\) for each subgroup \(N\leqslant H\)._

Proof.: If \(\varphi,\psi\) are conjugated, then \(S_{\varphi}(N,N^{c})=S_{\psi}(N,N^{c})\) for each subgroup \(N\leqslant H\). By the above observation, this implies \(r(\varphi,N)=r(\psi,N)\). For the reverse statement, \(r(\varphi,N)=r(\psi,N)\) implies that the multiplicity of \(H\curvearrowright H/N\) is the same in both \(\varphi\) and \(\psi\). This allows the construction of a permutation that conjugates \(\varphi\) into \(\psi\).

**Definition 5.4** (Homomorphism order).: Let \(\varphi\colon H\to S_{m}\) and \(\psi\colon H\to S_{n}\) be two homomorphisms. We write \(\varphi\preccurlyeq\psi\) whenever \(r(\varphi,N)\cdot m\leqslant r(\psi,N)\cdot n\) for each subgroup \(N\leqslant H\).

**Definition 5.5** (Homomorphism extension property).: A pair of countable groups \(H\leqslant G\) is said to be with _extension property_ if, for every \(n\in\mathbb{N}\) and for every homomorphism \(\varphi\colon H\to S_{n}\), there exists a homomorphism \(\widetilde{\varphi}\colon G\to S_{n}\) such that \(\widetilde{\varphi}|_{H}=\varphi\). Clearly, if \(K\leqslant H\) and \(H\leqslant G\) are with extension property, then \(K\leqslant G\) is with extension property.

**Definition 5.6** (Retract).: A subgroup \(H\) in a group \(G\) is a _retract_ of \(G\) if there exists a homomorphism \(\gamma\colon G\to H\) such that \(\gamma|_{H}=id_{H}\).

The next result is well known. We omit the proof as it is elementary.

**Lemma 5.7**.: _Let \(H\) be a subgroup of a group \(G\). The following are equivalent._
1. \(H\) _is a retract of_ \(G\)_._
2. _There exists_ \(K\trianglelefteq G\) _such that_ \(K\cap H=\{1_{G}\}\) _and_ \(G=K\rtimes H\)_._
3. _For every homomorphism_ \(\varphi\colon H\to L\) _to an arbitrary group_ \(L\)_, there exists a homomorphism_ \(\widetilde{\varphi}\colon G\to L\) _with_ \(\widetilde{\varphi}|_{H}=\varphi\)_._

Thus, if \(H\) is a retract of \(G\), then \(H\leqslant G\) is with extension property.

_Remark 5.8_.: There are examples of pairs \(H\leqslant G\) with extension property, where \(H\) is not necessarily a retract. For instance, \(\mathbb{Z}_{p}\leqslant S_{p}\), with a prime \(p\), is with extension property. The cyclic subgroup is not a retract in \(S_{p}\) as it has no normal complement, see Lemma 5.7 (ii). Given a pair \(H\leqslant G\), one can ask for an algorithm to decide whether or not it is with extension property. This question was recently addressed in complexity theory, in relation to list-decoding homomorphism codes [10, 1].

Given a countable group \(G\), we denote by \(2^{G}\) the power set of \(G\) and by \(Sub(G)\) the set of subgroups of \(G\), endowed with the subspace topology induced by the product topology on \(2^{G}\).

**Definition 5.9** (Almost normal subgroup).: A subgroup \(L\) of a group \(G\) is _almost normal_ if \(L\) has only a finite number of conjugates in \(G\), that is, if \([G:N_{G}(L)]<\infty\).

It follows from the definitions that being almost normal is preserved under taking homomorphic images and restrictions to a subgroup. It is well-known that every subgroup of a group \(G\) is almost normal if and only if the quotient group by the center \(G/Z(G)\) is finite [10] if and only if every abelian subgroup of \(G\) is almost normal [1].

**Definition 5.10** (Profinitely closed).: A subgroup \(L\) of a group \(G\) is _profinitely closed_ if there is a sequence \((K_{i})_{i=1}^{\infty}\) of finite index subgroups \(K_{i}\leqslant G\) such that \(L=\cap_{i=1}^{\infty}K_{i}\).
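As a quick, concrete illustration of Definitions 5.9 and 5.10 (added here for the reader; the facts below are elementary and not part of the original text): in \(G=\mathbb{Z}\) every subgroup is normal, hence almost normal, and every subgroup is profinitely closed, since each \(m\mathbb{Z}\) with \(m\geqslant 1\) already has finite index, while the trivial subgroup is a countable intersection of finite index subgroups,
\[\{0\}=\bigcap_{i\geqslant 1}i!\,\mathbb{Z}.\]
At the other extreme, in an infinite group with no proper finite index subgroups (such as the Higman group recalled in Section 7), the only profinitely closed subgroup is the group itself.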
We are now ready for the main result of this section. It generalises [1, Proposition 8.1]. Our formulation and proof are free from the invariant random subgroups used in [1].

**Proposition 5.11**.: _Let \(H\leqslant G\) be countable groups with extension property, \(H\) be finite. Let \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\) be a homomorphism. Suppose that \(Sub(G)\) is countable and that every almost normal subgroup of \(G\) is profinitely closed. Then every \(\varphi\)-constraint action trace is \(\varphi\)-constraint residually finite._

Proof.: Let \(Tr\) be a \(\varphi\)-constraint action trace and choose \(\alpha\colon G\to Aut(X,\mu)\) a measure preserving action such that \(Tr_{\alpha}=Tr\). Since \(Sub(G)\) is countable, we have \(X=\cup_{N\in Sub(G)}Stab^{-1}(N)\) and \(\sum_{N\in Sub(G)}\mu(Stab^{-1}(N))=1\), where \(Stab\colon X\to Sub(G),x\mapsto stab_{\alpha}(x)\) and \(stab_{\alpha}(x)\) is the stabiliser subgroup. Let \(N\in Sub(G)\) be such that \(\mu(Stab^{-1}(N))>0\). Since \(g\cdot Stab^{-1}(N)=Stab^{-1}(g^{-1}Ng)\), it follows that \(N\) is an almost normal subgroup. Let us assume for now that \(X=\cup_{g\in G}Stab^{-1}(g^{-1}Ng)\). We denote by \(j\) the number of different conjugates of \(N\) in \(G\). Let \(g_{1},\dots,g_{j}\) be a system of representatives for these conjugates. Given \(A\in\mathcal{P}_{f}(G)\), we have then \(Tr(A)=Card\{i:g_{i}^{-1}hg_{i}\in N,\ \forall h\in A\}/j\). Let \(M\) be the normaliser of \(N\) in \(G\). Then \(N\) is a normal subgroup in \(M\) and \(M\) is a subgroup of finite index in \(G\). Since \(N\) is profinitely closed, then there exists a chain of finite index normal subgroups \(N_{m}\) of \(M\) such that \(\cap_{m}N_{m}=N\). We choose each \(N_{m}\) such that whenever \(g_{i}^{-1}hg_{i}\not\in N\), for some \(i=1,\dots,j\) and \(h\in H\), we also have \(g_{i}^{-1}hg_{i}\not\in N_{m}\). We denote by \(\psi_{m}\) the action of \(G\) on \(G/N_{m}\). Then, \(Tr_{\psi_{m}}(A)=Card\{i:g_{i}^{-1}hg_{i}\in N_{m},\ \forall h\in A\}/j\). It follows that \(Tr_{\psi_{m}}\to_{m\to\infty}Tr\). Moreover, \(Tr(A)=Tr_{\psi_{m}}(A)\) for any \(A\subseteq H\), by the requirement on the groups \(N_{m}\). As a consequence, by Proposition 3.3, \(S_{\alpha}(T,H\setminus T)=S_{\psi_{m}}(T,H\setminus T)\) for each subgroup \(T\leqslant H\). The action trace \(Tr_{\alpha}\) is \(\varphi\)-constraint. Thus, \(S_{\alpha}(T,H\setminus T)=S_{\varphi}(T,H\setminus T)\). As such, by Observation 5.2, \(r(\varphi,T)=r(\psi_{m},T)\) for each subgroup \(T\leqslant H\). Let \(\varphi=(\varphi_{k})_{k}\), with \(\varphi_{k}\colon H\to S_{n_{k}}\). Fix \(m\in\mathbb{N}\) and define: \[s_{k}=\min_{T\leqslant H,r(\varphi,T)\neq 0}[\frac{r(\varphi_{k},T)\cdot n_{k}}{r(\psi_{m},T)\cdot|G/N_{m}|}].\] Since \(r(\varphi_{k},T)\to_{k\to\omega}r(\varphi,T)=r(\psi_{m},T)\), it is easy to see that \(s_{k}\cdot|G/N_{m}|/n_{k}\to_{k\to\omega}1\). Also, for any \(k\) in some set \(F\in\omega\), \(s_{k}\cdot r(\psi_{m},T)\cdot|G/N_{m}|\leqslant r(\varphi_{k},T)\cdot n_{k}\) for each subgroup \(T\leqslant H\). So, \(\psi_{m}\otimes 1_{s_{k}}\preccurlyeq\varphi_{k}\). By the extension property of \(H\leqslant G\), we construct a homomorphism \(\eta_{k}\colon G\to S_{n_{k}-s_{k}\cdot|G/N_{m}|}\) that extends \(\varphi_{k}\ominus\psi_{m}\otimes 1_{s_{k}}\), where \(\ominus\) denotes the skew sum of permutations.
Then, \(\psi_{m}\otimes 1_{s_{k}}\oplus\eta_{k}\) is a homomorphism of \(G\) to \(S_{n_{k}}\), that, while restricted to \(H\), is equal to \(\varphi_{k}\) and \(\lim_{k\to\omega}Tr_{\psi_{m}\otimes 1_{s_{k}}\oplus\eta_{k}}=Tr_{\psi_{m}}\). We use the diagonal argument to construct \(\theta\colon G\to\Pi_{k}S_{n_{k}}\) such that \(Tr_{\theta}=Tr\) and \(\theta|_{H}\) is conjugated to \(\varphi\). If \(X\) is not \(\cup_{g\in G}Stab^{-1}(g^{-1}Ng)\), then \(X\) is a disjoint union of such spaces. We need to partition sets \(\{1,\dots,n_{k}\}\) corresponding to this partition of \(X\). This can be done with an ultrafilter as in [1, Lemma 6.3].

**Corollary 5.12**.: _Let \(H\leqslant G\) be countable groups with extension property, \(G\) amenable and \(H\) finite. Suppose that \(Sub(G)\) is countable and that every almost normal subgroup of \(G\) is profinitely closed. Then \(G\) is \(\varphi\)-constraint stable, for every homomorphism \(\varphi\colon H\to\Pi_{k}S_{n_{k}}\)._

Proof.: This follows by Proposition 5.11 and Theorem 4.7.

**Theorem 5.13**.: _Let \(G_{1},G_{2}\) be two countable groups with a common finite subgroup \(H\). Suppose that \(G_{1}\) is stable in permutations, \(G_{2}\) is amenable, \(Sub(G_{2})\) is countable and that every almost normal subgroup of \(G_{2}\) is profinitely closed, and \(H\leqslant G_{2}\) is with extension property. Then \(G_{1}\ast_{H}G_{2}\) is stable in permutations._

Proof.: This follows by Theorem 4.8 and Corollary 5.12.

## 6. Examples of stable groups

We use the results in the last section to provide new examples of groups stable in permutations. The next result shows how to obtain pairs of groups satisfying the hypotheses of Corollary 5.12.

**Proposition 6.1**.: _Let \(G\) be a group such that \(Sub(G)\) is countable and that every almost normal subgroup of \(G\) is profinitely closed. Let \(H\) be a finite group acting on \(G\). Then \(G\rtimes H\) has countably many subgroups and every almost normal subgroup of \(G\rtimes H\) is profinitely closed._

Proof.: In order to prove that \(G\rtimes H\) has countably many subgroups, one can use [13, Lemma 2.1]. In any case, we have to study the structure of an arbitrary subgroup of \(G\rtimes H\) for the other statement. Let \(L\leqslant G\rtimes H\) be a subgroup. Define \(L_{e}=L\cap G\). Let \(\varphi\colon G\rtimes H\to H,(g,h)\mapsto h\) be the canonical projection homomorphism induced by the structure of the semidirect product. We define \(H_{0}=\varphi(L)\). Choose \(g_{h}\in L\) such that \(\varphi(g_{h})=h\) for each \(h\in H_{0}\). It is easy to see that \(L=\cup_{h\in H_{0}}L_{e}g_{h}\). This shows that \(Sub(G\rtimes H)\) is countable. Assume now that \(L\) is almost normal in \(G\rtimes H\). Then, using the definition, we see that \(L_{e}\) is almost normal in \(G\). By hypothesis, there exist finite index subgroups \(K_{i}\) of \(G\) such that \(L_{e}=\cap_{i}K_{i}\). We replace each \(K_{i}\) with \(\cap_{h\in H_{0}}g_{h}K_{i}g_{h}^{-1}\). Then, \(K_{i}\) are still finite index subgroups in \(G\) (since \(H_{0}\) is finite) and \(L_{e}=\cap_{i}K_{i}\). Moreover, \(gK_{i}g^{-1}=K_{i}\) for each \(g\in L\). As such, the subgroup generated in \(G\rtimes H\) by \(K_{i}\) and \(L\) is \(K_{i}L\). We use these subgroups to prove that \(L\) is profinitely closed. Clearly, \(K_{i}L\) are finite index subgroups of \(G\rtimes H\). Let \(g\in\cap_{i}K_{i}L\). Then, for each \(i\), there exist \(k_{i}\in K_{i}\) and \(h_{i}\in H_{0}\) such that \(g=k_{i}g_{h_{i}}\).
Now, \(\varphi(g)=\varphi(k_{i}g_{h_{i}})=h_{i}\), so \(h_{i}\) is independent of \(i\), and \(g=k_{i}g_{h}\) for some \(h\in H\). Then \(gg_{h}^{-1}\in K_{i}\) for each \(i\), so \(gg_{h}^{-1}\in L_{e}\). It follows that \(g\in L\), so \(\cap_{i}K_{i}L=L\), and hence, \(L\) is profinitely closed.

The class of groups with countably many subgroups is closed under taking subgroups and quotients but, in general, not under extensions, nor even direct products. For example, if \(p\) is a prime, the Prüfer \(p\)-group \(C_{p^{\infty}}\) has \(Sub(C_{p^{\infty}})\) countable, but its direct square \(C_{p^{\infty}}\times C_{p^{\infty}}\) has \(2^{\aleph_{0}}\) subgroups [13].

According to Lemma 5.7, the pair \(H\leqslant G\rtimes H\) is always with extension property. As such, under the hypothesis of Proposition 6.1, the pair \(H\leqslant G\rtimes H\) satisfies all the assumptions of Corollary 5.12. By also using Proposition 6.1 and Theorem 4.8, we obtain the following general result.

**Theorem 6.2**.: _Let \(G_{1}\) be a countable group stable in permutations and \(H\) be a finite subgroup. Let \(G_{2}\) be a countable amenable group with \(Sub(G_{2})\) countable, every almost normal subgroup profinitely closed, and such that \(H\) is acting on \(G_{2}\). Then \(G_{1}\ast_{H}(G_{2}\rtimes H)\) is stable in permutations._

Here are some concrete examples of groups stable in permutations by Theorem 6.2.

_Example 6.3_ (Virtually free examples).: The special linear group \(SL_{2}(\mathbb{Z})\cong\mathbb{Z}_{4}\ast_{\mathbb{Z}_{2}}(\mathbb{Z}_{3}\times\mathbb{Z}_{2})\) is stable in permutations as \(\mathbb{Z}_{2}\leqslant\mathbb{Z}_{3}\times\mathbb{Z}_{2}\) is with extension property and the other hypotheses of Theorem 6.2 are also satisfied. Given arbitrary groups \(G_{1},G_{2}\) and \(Q\), the semidirect product of \(G_{1}\ast_{H}G_{2}\) by \(Q\) is isomorphic to the free product of \(G_{1}\rtimes Q\) and \(G_{2}\rtimes Q\) amalgamated over \(H\rtimes Q\): \[(G_{1}\ast_{H}G_{2})\rtimes Q\cong(G_{1}\rtimes Q)\ast_{H\rtimes Q}(G_{2}\rtimes Q).\tag{$\rtimes$}\] In particular, for the general linear group: \(GL_{2}(\mathbb{Z})\cong SL_{2}(\mathbb{Z})\rtimes\mathbb{Z}_{2}\cong(\mathbb{Z}_{4}\rtimes\mathbb{Z}_{2})\ast_{\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}}(\mathbb{Z}_{6}\rtimes\mathbb{Z}_{2})\). It is stable in permutations. Indeed, by Gaschütz' complement theorem [11, Satz 1 on p. 99], since the normal subgroup \(\mathbb{Z}_{3}\trianglelefteq(\mathbb{Z}_{6}\rtimes\mathbb{Z}_{2})\) has a complement in \(\mathbb{Z}_{6}\cong\mathbb{Z}_{3}\times\mathbb{Z}_{2}\), it also has a complement in \(\mathbb{Z}_{6}\rtimes\mathbb{Z}_{2}\). Such a complement is isomorphic to \((\mathbb{Z}_{6}\rtimes\mathbb{Z}_{2})/\mathbb{Z}_{3}\cong\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}\). Therefore, \(\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}\) is a retract of \(\mathbb{Z}_{6}\rtimes\mathbb{Z}_{2}\), and hence, \(\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}\leqslant\mathbb{Z}_{6}\rtimes\mathbb{Z}_{2}\) is with extension property. The other hypotheses of Theorem 6.2 are clearly satisfied. Both \(SL_{2}(\mathbb{Z})\) and \(GL_{2}(\mathbb{Z})\) are virtually free groups, and hence, they are stable in permutations also by a different proof from [11].

_Example 6.4_ (Non-virtually free examples).: By varying the groups involved in the free amalgamated product from Theorem 6.2 or in the above semidirect product construction (\(\rtimes\)), we obtain many non-amenable groups stable in permutations, which are not virtually free.
For instance, \(GL_{2}(\mathbb{Z})\ast_{H}(BS(1,n)\rtimes H)\) is not virtually free and it is stable in permutations by Theorem 6.2. Indeed, the Baumslag-Solitar group \(BS(1,n)\) satisfies the hypothesis of Theorem 6.2 by [10, Corollary 8.4] and \(BS(1,n)\rtimes H\), where \(H\) is a finite subgroup of \(GL_{2}(\mathbb{Z})\), satisfies the hypothesis of Theorem 6.2, by Proposition 6.1. Gaschutz' type results and its generalisations [19] yield many pairs \(H\rtimes Q\leqslant G_{2}\rtimes Q\) with extension property so that, also by Proposition 6.1, Theorem 6.2 applies to the above semidirect product construction (\(\rtimes\)), where \(G_{1}\) and \(G_{2}\) are as in Theorem 6.2. _Example 6.5_ (Around just-infinite branch groups).: Let \(\Gamma\) be the first Grigorchuk group or the Gupta-Sidki \(p\)-group. Then, \(\Gamma\) is stable in permutations [18, Theorem 6.6]. Therefore, \(\Gamma\ast_{H}(G_{2}\rtimes H)\) is stable in permutations, where \(H\) is a finite subgroup of \(\Gamma\) and \(G_{2}\) is an arbitrary group satisfying the hypothesis of Theorem 6.2. ## 7. Further results and questions ### (Very) flexible stability There are natural dimension related relaxations of stability, called _flexible stability_ and _very flexible stability_: an almost solution in \(S_{n}\) is required to be close, in a suitable sense, to a solution in \(S_{N}\), for \(N\) not necessarily equal to \(n\)[10]. It is straightforward to adapt our concepts and results to such a setting. For instance, Theorem 6.2 has the following analogue. **Theorem 7.1**.: _Let \(G_{1}\) be a countable group flexibly (respectively, very flexibly) stable in permutations and \(H\) be a finite subgroup. Let \(G_{2}\) be a countable amenable group with \(Sub(G_{2})\) countable, every almost normal subgroup profinitely closed, and such that \(H\) is acting on \(G_{2}\). Then \(G_{1}\ast_{H}(G_{2}\rtimes H)\) is flexibly (respectively, very flexibly) stable in permutations._ This gives new examples of flexibly (respectively, very flexibly) stable groups. ### Open questions In proving results of Section 5, we require the homomorphism extension property of \(H\leqslant G\). By Lemma 5.7, every semidirect product \(G=K\rtimes H\) yields a pair \(H\leqslant G\) with extension property. In Remark 5.8, we have given an example of a pair \(H\leqslant G\) with extension property, where \(H\) is not a retract. How far can we go from the semidirect products? **Problem 7.2**.: _Let \(G=K\bowtie H\) be the Zappa-Szep product of two groups. Characterise the pairs \(H\leqslant G\) with extension property._ Stability in permutations is not preserved under arbitrary amalgamated free product or semidirect product constructions [10], and not even under the direct product with \(\mathbb{Z}\)[14]. In contrast, our results give many examples of non-amenable amalgamated free products and semidirect products which are stable in permutations. The following basic question is still open. **Question 7.3**.: _Let \(G\) be a countable group stable in permutations. Let \(H\) be a finite group. Is \(G\rtimes H\) stable in permutations?_ Together with the first Grigorchuk group and the Gupta-Sidki \(p\)-group, Grigorchuk's groups \(G_{\omega}\), with \(\omega\) in a certain uncountable subset of \(\{0,1,2\}^{\mathbb{Z}_{+}}\), are stable in permutations [10]. All these uncountably many groups are amenable but not elementary amenable. They are finitely generated but not finitely presented. 
**Question 7.4**.: _Does there exist a finitely presented amenable but not elementary amenable group stable in permutations?_ We expect a positive answer. A natural candidate is the finitely presented Grigorchuk group \(\Gamma_{\sigma}=\langle\Gamma,t\mid t^{-1}\Gamma t=\sigma(\Gamma)\rangle\), where \(\Gamma\) is the first Grigorchuk group and \(\sigma\) is Lysenok's endomorphism of \(\Gamma\)[11]. However, although \(\Gamma\) is residually finite, \(\Gamma_{\sigma}\) is not [12]. Therefore, since \(\Gamma_{\sigma}\) is amenable, then \(\Gamma_{\sigma}\) is not stable in permutations, by [1, Theorem 4.3] (also by [1, Theorem 2], using stability in permutations for presentations of groups, together with another result from [1], showing that stability is a group property, i.e., it is independent of the choice of the presentation). The free amalgamated product from the next question is a building block of the famous Higman group [10]: \[H\cong(BS(1,2)\ast_{\mathbb{Z}}BS(1,2))\ast_{\mathbb{F}_{2}}(BS(1,2)\ast_{ \mathbb{Z}}BS(1,2)),\] an infinite group all of whose finite quotients are trivial. It follows from [1, Theorem 4.3] that if \(H\) is stable in permutations, then \(H\) is not sofic. In detail, let us consider the Baumslag-Solitar group, \(BS(1,n)=\langle x_{i},t_{i}\mid t_{i}^{-1}x_{i}t_{i}=x_{i}^{n}\rangle,i=1,2,3,4\). Then we form three types of the free amalgamated products over an infinite cyclic group: \[H(t_{1},t_{2})=\langle x_{1},t_{1}\mid t_{1}^{-1}x_{1}t_{1}=x_{1 }^{n}\rangle\ast_{\langle t_{1}\rangle=\langle t_{2}\rangle}\langle x_{2},t_{2 }\mid t_{2}^{-1}x_{2}t_{2}=x_{2}^{n}\rangle,\] \[H(x_{1},x_{2})=\langle x_{1},t_{1}\mid t_{1}^{-1}x_{1}t_{1}=x_{1 }^{n}\rangle\ast_{\langle x_{1}\rangle=\langle x_{2}\rangle}\langle x_{2},t_{2 }\mid t_{2}^{-1}x_{2}t_{2}=x_{2}^{n}\rangle,\] \[H(x_{1},t_{2})=\langle x_{1},t_{1}\mid t_{1}^{-1}x_{1}t_{1}=x_{1 }^{n}\rangle\ast_{\langle x_{1}\rangle=\langle t_{2}\rangle}\langle x_{2},t_{2 }\mid t_{2}^{-1}x_{2}t_{2}=x_{2}^{n}\rangle.\] Being the free amalgamated products of sofic (even solvable) groups over an amenable (even cyclic) group, all these groups are sofic [1, 1]. The group \(H(t_{1},t_{2})\) is residually finite, since the amalgamation is along the retract [10]. The group \(H(x_{1},x_{2})\) is not Hopfian, and hence, it is not residually finite. It follows from [1, Theorem 4.3] that \(H(x_{1},x_{2})\) is not stable in permutations. Finally, for \(n=2\), the group \(H(x_{1},t_{2})\) is the above mentioned building block of the Higman group: \[H=H(x_{1},t_{2})\ast_{\langle t_{1},x_{2}\rangle=\langle t_{3},x_{4}\rangle}H( x_{3},t_{4}).\] **Question 7.5**.: _Let \(n\geqslant 2\). Is \(H(x_{1},t_{2})=BS(1,n)\ast_{\mathbb{Z}}BS(1,n)\) stable in permutations?_ If \(H(x_{1},t_{2})\) is not residually finite, then the answer is negative.
2306.00998
Towards Selection of Text-to-speech Data to Augment ASR Training
This paper presents a method for selecting appropriate synthetic speech samples from a given large text-to-speech (TTS) dataset as supplementary training data for an automatic speech recognition (ASR) model. We trained a neural network, which can be optimised using cross-entropy loss or Arcface loss, to measure the similarity of a synthetic data to real speech. We found that incorporating synthetic samples with considerable dissimilarity to real speech, owing in part to lexical differences, into ASR training is crucial for boosting recognition performance. Experimental results on Librispeech test sets indicate that, in order to maintain the same speech recognition accuracy as when using all TTS data, our proposed solution can reduce the size of the TTS data down below its $30\,\%$, which is superior to several baseline methods.
Shuo Liu, Leda Sarı, Chunyang Wu, Gil Keren, Yuan Shangguan, Jay Mahadeokar, Ozlem Kalinli
2023-05-30T17:24:28Z
http://arxiv.org/abs/2306.00998v1
# Towards Selection of Text-to-speech Data to Augment ASR Training ###### Abstract This paper presents a method for selecting appropriate synthetic speech samples from a given large text-to-speech (TTS) dataset as supplementary training data for an automatic speech recognition (ASR) model. We trained a neural network, which can be optimised using cross-entropy loss or Arcface loss, to measure the similarity of a synthetic data to real speech. We found that incorporating synthetic samples with considerable dissimilarity to real speech, owing in part to lexical differences, into ASR training is crucial for boosting recognition performance. Experimental results on Librispeech test sets indicate that, in order to maintain the same speech recognition accuracy as when using all TTS data, our proposed solution can reduce the size of the TTS data down below its \(30\;\%\), which is superior to several baseline methods. Shuo Liu\({}^{1,2}\), Leda Sarr\({}^{1}\), Chunyang Wu\({}^{1}\), Gil Keren\({}^{1}\), Yuan Shangguan\({}^{1}\), Jay Mahadeokar\({}^{1}\), Ozlem Kalinli\({}^{1}\)+\({}^{1}\) Meta AI \({}^{2}\) University of Augsburg, Augsburg, Germany [email protected], [email protected] Footnote †: Work was done when Shuo was interning at Meta AI. ## 1 Introduction Speech synthesis, or Text-to-Speech (TTS), is a fast developing technology that generates speech from given text. It has reached the stage that it can create audio that closely resembles natural human voice, allowing for the use of synthetic data to improve the training of Automatic Speech Recognition (ASR) models [1, 2, 3]. Ideally, creating more synthetic data for ASR training has always the potential to improve the accuracy of speech recognition. However, as the quantity of the training data grows, more training time and computational resources are demanded. On the other hand, since TTS speech data are often produced with limited alternative constrains in terms of, for instance, speakers, speech speed and pitch, the resultant synthetic dataset may include a substantial amount of redundancy. Hence, it should be possible to minimise the data size by only choosing more typical and representative samples in a given large TTS dataset to achieve a more cost-efficient ASR training. Recent research [4, 5] have demonstrated some data selection strategies that are effective for this data reduction purpose. In [4], it is suggested to exploit rejection sampling to accept or reject synthetic samples such that the distribution of the selected synthetic data is close to that of the real speech. The distribution is represented in five dimensions depending on the outputs of a pre-trained ASR model, such as Xent loss, CTC loss, and token lengths, etc. Thus, the chosen synthetic data should own high similarity to actual speech. In [5], the objective is to choose data from a general pool that reflects a domain of interest, in order to build a domain-optimised ASR model. For this purpose, two language models are separately trained on the data from the target domain and the general domain using discrete speech tokens as input. A domain relevance score is computed based on the output probabilities from the target and general domain language models. The general domain samples scored highest are selected for training the domain specific ASR model. Data selection is also considered essential for training natural language processing (NLP) models efficiently [6, 7]. 
To empower an NLP model with more lexical information, however, it has been advised to choose the data of low certainty for training so that extra linguistic knowledge may be included. When selecting TTS data for ASR training, a question that remains unanswered is whether we should choose the data that are similar to or distinct from the original real speech. From the acoustic perspective, we may want to select the samples of high similarity to ensure signal quality. However, these data may contain much less additional information and hence can only marginally improve an ASR model. On the other hand, a TTS sample that differs significantly from the original real speech may indicate the synthetic audio is contaminated with noise or artefacts produced during the synthesis process, which can be detrimental to the ASR training. To choose the appropriate TTS data for ASR training, our methodology strikes a compromise between the similarity and the discrepancy of the synthetic and the original real speech. In particular, we use a basic recurrent neural network, a Gated Recurrent Unit (GRU), to generate speech embeddings. Then, we compare the embeddings of the real and synthetic speech samples and use two scoring methods to reflect their agreement. For both scoring methods, when selecting the TTS data inside a certain scoring range, we can lower the needed TTS data size while preserving the same level of ASR performance as using all available TTS data.

## 2 Methodology

The system overview used in this work is given in Figure 1. We generate synthetic speech files from additional text resources using a TTS model. A scoring model is then trained to select some of these audio files that facilitate ASR training. These files are then added to the original real dataset to train an ASR model. For the test phase, we use only real test data.

To develop the scoring model, we train a two-layer Gated Recurrent Unit (GRU) model to extract a speech representation from an input audio file. The output of the last time-step is fed into a fully-connected (FC) layer, which is then optimised using binary cross-entropy (BCE) or Arcface loss [8]. When using the BCE loss, the FC output is linearly projected to the two classes, real or synthetic speech, through an additional FC layer. Arcface loss adds a margin parameter to the angle between the normalised speech representation and the weights of the FC layer, which serves to increase the separability between speech representations of distinct classes.

Real speech is regarded as class 1 and synthetic speech as class 0 in the training of the GRU model using BCE loss. After training, we calculate the Softmax value of the output logit to score an input audio, and a score of 0.5 can be seen as the threshold for distinguishing between real and synthetic speech. If the score exceeds \(0.5\), the input is recognised as real data; otherwise it is classified as TTS data. Hence, when a TTS sample's score is closer to \(1\), it is more likely to be identified as real data, indicating its higher similarity to real speech. However, as mentioned earlier, a dissimilar TTS sample may provide extra information needed to enhance ASR training. Therefore, it is necessary, however challenging, to find a proper scoring range for selecting the appropriate TTS data. A more straightforward approach to quantify the similarity between real and synthetic speech is to compute the cosine similarity of their speech representations.
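To make the scoring pipeline concrete, the following is a minimal PyTorch sketch (an illustration, not the exact training code of this work): the GRU and FC sizes follow the settings given in Section 3.3, while the input feature dimension, the single-logit sigmoid head (used here in place of the two-class softmax head described above, which is equivalent up to parametrisation), and the helper names are assumptions made for the example. It covers BCE-based scoring, cosine scoring against an average real embedding, and selection of TTS utterances inside a score range.

```python
# Illustrative sketch (PyTorch); layer sizes follow Section 3.3, other details are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUScorer(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, emb_dim=64):   # feat_dim is an assumption
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, emb_dim)     # speech embedding
        self.head = nn.Linear(emb_dim, 1)        # real (1) vs synthetic (0) logit

    def forward(self, frames):                   # frames: (batch, time, feat_dim)
        out, _ = self.gru(frames)
        emb = self.fc(out[:, -1, :])             # last time-step representation
        return emb, self.head(emb).squeeze(-1)

model = GRUScorer()
bce = nn.BCEWithLogitsLoss()                     # labels: 1.0 for real, 0.0 for TTS
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    emb, logit = model(frames)
    loss = bce(logit, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def bce_score(frames):
    """Score in [0, 1]; values near 1 mean 'looks like real speech'."""
    with torch.no_grad():
        _, logit = model(frames)
        return torch.sigmoid(logit)

def cosine_score(frames, avg_real_emb):
    """Cosine similarity of an utterance embedding to the average real embedding."""
    with torch.no_grad():
        emb, _ = model(frames)
        return F.cosine_similarity(emb, avg_real_emb.expand_as(emb), dim=-1)

def select_by_range(scores, low=0.2, high=0.5):
    """Keep only TTS utterances whose score falls inside the chosen range."""
    return [i for i, s in enumerate(scores.tolist()) if low < s < high]
```

The concrete ranges (for example, scores between 0.2 and 0.5 for the BCE-trained scorer, or cosine similarity between 0.2 and 0.8 for the Arcface variant) are determined empirically in the experiments reported below.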
In an attempt to improve the representation separability, Arcface loss is applied to the GRU model's optimisation. Based on the real samples in training set, we next calculate the average embedding for real speech, named as average real embedding. To score a TTS sample, the cosine similarity between its embedding and the average real embedding is computed. Again, the TTS samples scored with low similarity to real speech may signal the presence of new information that has the chance to contribute to the ASR training. Notably, since our proposed method exploits only a simple neural network, it allows the rapid scoring of synthetic speech samples compared to previous work that typically relies on a pre-trained ASR model, particularly when the size of the involved TTS dataset is enormous. ## 3 Experiments ### Dataset Our scoring models are trained based on the LibriSpeech corpus [9]. It consists of around 960 hours of read, clean speech derived from over 8 000 public domain audiobooks and has its own train, development, and test splits. We apply the TTS model described in Section 3.2 to the Librispeech transcriptions to generate the synthetic audio files. During training, the performance of the scoring models are monitored on the Librispeech development set without touching the test set. As additional text resources to generate the synthetic speech files for ASR training, we employ \(10\,\%\) of data from the language model (LM) corpus provided in Librispeech corpus, yielding a total of 4.11 million utterances. ### Tts & ASR Models The TTS model has a multi-stage framework. It begins with a rule-based Grapheme-to-Phoneme (G2P) Conversion module that transforms the input text into various levels of linguistic representations. These features are consumed by a prosody model [10] and a spectrum model [11] to generate spectrum features. The prosody model consists of multiple Transformer layers [12], with each layer contains a multi-head attention module and a feed-forward layer. In addition, residual connection and layer normalisation are performed to each transformer layer. Finally, a neural vocoder using WaveRNN achitecture [13] is used to transform the features into audio waveform. The ASR model used in this work processes a log Mel-spectrogram with 80 Mel bands, the spectrogram is created by taking the Short-time Fourier transform (STFT) of a speech signal with a window size of \(25\)ms and hop size of \(10\)ms. SpecAugment is applied to strengthen the robustness of the ASR model [14]. The model has the output vocabulary of \(5000\) sentence pieces [15] estimated over the \(960\)-hours Librispeech training set [9] (LS 960h). The Mel-spectrogram is linearly projected to \(128\) channels using an encoder, and every four time steps are concatenated along the feature dimensionality to squeeze the total frames. Then a stack of \(20\) efficient memory transformer (Emformer [16]) layers are used to capture the temporal context. The Emformer output is then fed into a recurrent neural transducer (RNN-T). Using a 3-layer LSTM model [17], the Emformer predictor generates embeddings of the size of \(512\) based on all the previous predicted symbols. The outputs of the Emformer encoder and predictor are combined and projected to a probability distribution over a vocabulary. ### Settings For a fair comparison of different data selection methods, the training parameters for all the ASR models developed throughout this work are consistent. 
The models are optimised by minimising the transducer loss using an Adam optimiser with an initial learning rate of \(0.001\). The total number of training epochs is \(200\), and beginning with the \(60\)th epoch, learning rate annealing is applied at a shrink rate of \(0.96\) per epoch. The maximum number of tokens in a training batch is capped at \(40000\), with the number of utterances not exceeding \(1000\). The ASR models are trained using \(32\) A100 GPU units. Both layers of our GRU scoring model have \(256\) hidden units. The dimensionality of the subsequent fully-connected layer is set to \(64\) when the model is trained with BCE loss. Its optimisation employs a batch size of \(64\) and an Adam optimiser with a fixed learning rate of \(0.0001\). When using Arcface loss for training, the scale and margin parameters are set to \(10\) and \(0.5\), respectively. A lower learning rate of \(0.00001\) is applied to ensure stable convergence.

Figure 1: _System overview_

### Baseline Methods

As the primary comparison result, we first evaluate the effect of adding all available TTS data to the original Librispeech training set as the new training data. In addition, we compare our data selection approach with three baseline methods: random selection, a method depending on confidence score, and an acoustic unit language model-based scoring method. The confidence score represents the joiner score of the word-pieces making up the words in an utterance. We consider the selection of TTS samples with the highest and lowest confidence scores for ASR training. We compare our methods with these baseline methods on the test sets of Librispeech in terms of word error rate (WER). The scoring method using an acoustic unit language model (ULM) is inspired by [5]. The acoustic units are obtained by a Hidden-Unit BERT (HuBERT) model [18], which is a self-supervised speech model. To emulate a written language having a finite vocabulary of discrete units, a HuBERT model produces the discrete speech representation by directly processing a speech waveform. The pseudo labels for HuBERT training are created by performing unsupervised clustering, such as K-means, on the MFCCs of the input signal. Hence, the learnt discrete speech representation is optimised to be close to the centroid of the cluster it belongs to, and far from other cluster centroids. It has been shown that fine-tuning a HuBERT model for ASR can reach state-of-the-art results [19]. In this work, we deploy a HuBERT model trained using the K-means clustering method with \(K=100\). A 13-layer neural LM is trained to predict the next HuBERT unit for a given sequence of input HuBERT units. Please note that this LM is not a conventional text-based LM but one trained on discrete speech units, and hence it has the ability to model acoustic properties. In order to obtain a score per utterance, we used either the average next unit prediction accuracy or the perplexity.

### Results

The ASR model trained using the original Librispeech training set achieves a WER of \(3.5\,\%\) on the test-clean split and \(8.7\,\%\) on the test-other split (Table 1). Adding all of our generated TTS data, which consists of \(4.11\) million synthetic speech files, as additional training material decreases the WER results on the evaluation sets to \(2.97\,\%\) and \(8.05\,\%\), respectively.
We assess our scoring methods for selecting a subset of the TTS audio files for ASR training while aiming to keep the same effectiveness of WER reduction. With the allowance of \(2\,\mathrm{\char 37}\) rising rate, we expect the resulting WER on test-clean to be lower than \(3.03\,\mathrm{\char 37}\), and on the test-other to be lower than \(8.24\,\mathrm{\char 37}\) As seen in Table 1, the data selection methods based on ULM can produce superior results than the other two baseline methods, namely random selection and confidence score-based scoring. Using the average next unit prediction perplexity as the criterion for data selection, \(30\,\mathrm{\char 37}\) of the synthetic audio files with the lowest scores yields a WER of \(3.01\,\mathrm{\char 37}\) on the test-clean set and \(8.20\,\mathrm{\char 37}\) on the test-other set. To attain the same level of performance, however, we need to incorporate \(1.5\) million TTS samples with the highest accuracy scores when choosing the data based on the average prediction accuracy. In particular, the trained ASR can reach the same WER on the test-clean set as using all of the TTS data available, and a WER of \(8.22\) on the test-other split. Instead of choosing the highest or lowest scored samples, as was the case with baseline methods, we discovered that our binary classification-based scoring approaches perform best when employing the TTS data within a medium scoring range. In other words, the synthetic audio files that can be differentiated from real speech samples are more beneficial for improving the ASR performance. After training our GRU scoring model using BCE loss, we acquire an unweighted average recall of \(92\,\mathrm{\char 37}\) classification accuracy on the Librispeech validation set. In particular, the recall score reaches approximately \(100\,\mathrm{\char 37}\) for real speech (class 1), and \(83\,\mathrm{\char 37}\) for synthetic speech (class 0), demonstrating that almost all the real speech samples can be identified accurately while some of the TTS samples are incorrectly recognised as real speech. The scoring model's prediction output are then used to score the TTS samples created from LM text resources. The WER can be reduced to \(2.96\,\mathrm{\char 37}\) and \(8.16\,\mathrm{\char 37}\) for the test-clean and test-other sets, respectively, by choosing only \(27\,\mathrm{\char 37}\) of the synthetic audio files rated between 0.2 and 0.5. Similar to this, when using Arcface loss to train the GRU model, we can successfully select TTS data that yields an even better result on the test-other set, a WER of \(8.09\,\mathrm{\char 37}\), by selecting \(30\,\mathrm{\char 37}\) of all TTS audio files which have similarity scores within the range of 0.2 to 0.8. ## 4 Discussion To support our recommended scoring ranges for TTS data selection, i.e., score between \(0.2\) and \(0.5\) for the GRU scoring model trained with BCE loss, similarity between \(0.2\) to \(0.8\) similarity for the scoring model trained with Arcface loss, we compare their effectiveness to the data scored outside the range. Specifically, for the first scoring model, we randomly select synthetic samples from those scored between \(0.2\) and \(0.5\) as the same size of the samples scored greater than 0.5, and compare their effec \begin{table} \begin{tabular}{l|l|c c c c} \hline \hline **Method** & **\# add. utter. 
[M]** & **dev-clean** & **test-clean** & **dev-other** & **test-other** \\ \hline **LS** & – & \(3.2\) & \(3.5\) & \(9.2\) & \(8.7\) \\ **LS + all synthetic speech** & \(4.11\) (\(100\,\mathrm{\char 37}\)) & \(2.67\) & \(2.97\) & \(8.39\) & \(8.05\) \\ \hline **Random** & \(1.23\) (\(30\,\mathrm{\char 37}\)) & \(2.87\) & \(3.21\) & \(8.63\) & \(8.63\) \\ \hline **Confidence score** & \(1.23\) (\(30\,\mathrm{\char 37}\) high) & \(3.10\) & \(3.24\) & \(8.90\) & \(8.65\) \\ & \(1.23\) (\(30\,\mathrm{\char 37}\) low) & \(2.77\) & \(3.05\) & \(8.46\) & \(8.28\) \\ \hline **ULM accuracy** & \(1.50\) (\(36\,\mathrm{\char 37}\) high) & \(2.72\) & \(2.97\) & \(8.63\) & \(8.22\) \\ & \(1.23\) (\(30\,\mathrm{\char 37}\) low) & \(2.91\) & \(3.13\) & \(8.58\) & \(8.31\) \\ \hline **ULM perplexity** & \(1.50\) (\(36\,\mathrm{\char 37}\) high) & \(2.87\) & \(2.99\) & \(8.96\) & \(8.35\) \\ & \(1.23\) (\(30\,\mathrm{\char 37}\) low) & \(2.76\) & \(3.01\) & \(8.39\) & \(8.20\) \\ \hline **Binary classifier (Xent)** & \(1.10\) (\(27\,\mathrm{\char 37}\)) & \(2.78\) & \(2.96\) & \(8.42\) & \(8.16\) \\ **Binary classifier (Arrefaee)** & \(1.23\) (\(30\,\mathrm{\char 37}\)) & \(2.81\) & \(2.97\) & \(8.46\) & \(8.09\) \\ \hline \hline \end{tabular} \end{table} Table 1: Testing results, WER [\(\%\)], of different data selection approaches on Librispeech test set. **# add. utter** indicates the number of TTS samples selected to be added to Librispeech 960h (LS) training data. The proportion of utterances selected from the synthetic samples generated based on LM text database are additionally given, where “high” and “low” denote the TTS samples are selected from those with highest or lowest scores, respectively. The binary classifiers are trained using Cross-entropy (Xent) or Arcface loss. tiveness in ASR improvement. Similar comparison has been made for the TTS samples with scores under \(0.2\). The random selection was performed \(5\) times for each comparison, and the averaged outcomes, shown in Table 2, reveal that the TTS samples from our suggested scoring range are more effective in ASR improvement compared to the top- and bottom-scoring samples. However, the samples scored below \(0.2\) are also not very helpful for ASR, as shown in Table 2. This lower-score boundary has been established experimentally; however, its determination can be challenging in practice. Thus, it is suggested to simply choose the highest-scoring TTS samples right below \(0.5\). Similarly, for the second scoring model, the TTS samples with too high or too low similarity to real speech are not as competitive as the samples with similarity between \(0.2\) and \(0.8\) in terms of ASR improvement. Overall, we are able to optimize the ASR performance using TTS samples while excluding samples that are too similar or too dissimilar to real speech. The results are partly due to the inclusion of out-of-vocabulary words in LM texts like Table 2, supporting the idea that we should select TTS data with enough uncertainty as additional ASR training material. These results are partially attributable to the incorporation of some extra unseen words given by LM text resources, and as a result, the selected data provide more additional information to benefit ASR training. In Table 3, we list the number of out-of-vocabulary words that are included in the evaluation splits of the Librispeech dataset and added to the new training set due to LM texts. 
An utterance containing these unseen words may have a lower similarity to the original training samples, but including them in ASR training can help to improve speech recognition accuracy. If the selected utterances containing these unseen words are filtered out for other utterances without them in the same scoring range, the results provided in Table 4 show that the speech recognition performance for both of our scoring methods declines to some extent. In an effort to further reduce the required TTS data size, we additionally investigate the combination of our classification-based scoring method with the ULM scoring method. Only the data chosen by both approaches is used to train ASR models, and the best-performing combination is given in Table 5. Combining ULM accuracy and a binary classifier (Xent) retains the WER results of \(3.02\) and \(8.23\) on the test-clean and test-other sets while only requiring \(0.72\) million of TTS samples, or about \(18\,\%\) of the total generated TTS data. However, the WER rise on the development set raises the question of the validity of this ASR model in practical use. ## 5 Conclusions In this paper, we presented a solution for selecting useful samples from a given large TTS dataset to augment the ASR training data. Our method uses a simple neural network to produce agreement between real and synthetic speech. Experimental results indicated that the ASR benefits from choosing synthetic data that are of sufficient additional information or uncertainty, resulting in better speech recognition performance compared to other existing methods that consider the maximum or minimum amount of agreement. \begin{table} \begin{tabular}{l|c|c c c c c c c} \hline \hline \multirow{2}{*}{**Scoring range**} & \multicolumn{2}{c|}{**\# add.**} & \multicolumn{5}{c}{**Words**} & \multicolumn{5}{c}{**Uterances**} \\ \cline{2-9} & **utter. [M]** & **dev-clean** & **test-clean** & **dev-other** & **test-other** & **dev-clean** & **test-clean** & **dev-other** & **test-other** \\ \hline \(Score>0.5\) & \(0.77\) (\(18\,\%\)) & \(48\) & \(46\) & \(49\) & \(66\) & \(290\) & \(322\) & \(234\) & \(378\) \\ \(0.2<Score<0.5\) & \(2.24\) (\(55\,\%\)) & \(157\) & \(151\) & \(156\) & \(182\) & \(1073\) & \(1191\) & \(927\) & \(1285\) \\ \(Score<0.2\) & \(1.10\) (\(27\,\%\)) & \(133\) & \(108\) & \(118\) & \(149\) & \(674\) & \(711\) & \(514\) & \(824\) \\ \hline \hline \end{tabular} \end{table} Table 4: Testing results, WER [\(\%\)], of classification scoring methods on Librispeech test set, after removing the LM TTS utterances containing unseen words. \begin{table} \begin{tabular}{l|l|l|l l l l l} \hline \hline **Method** & **Scoring range** & **\# add. utter. 
[M]** & **dev-clean** & **test-clean** & **dev-other** & **test-other** \\ \hline \multirow{3}{*}{**Binary classifier**} & \(Score>0.5\) & \multirow{3}{*}{\(0.77\) (\(18\,\%\))} & \(3.06\) & \(3.20\) & \(8.95\) & \(8.58\) \\ & \(0.2<Score<0.5\) & & & \(2.85\) & \(3.11\) & \(8.67\) & \(8.17\) \\ \cline{2-9} & \(Score<0.2\) & & & \(2.80\) & \(3.13\) & \(8.48\) & \(8.42\) \\ & \(0.2<Score<0.5\) & & & \(2.78\) & \(2.96\) & \(8.42\) & \(8.16\) \\ \hline \multirow{3}{*}{**Binary classifier**} & \(Similarity>0.8\) & \multirow{3}{*}{\(0.22\) (\(30\,\%\))} & \(2.85\) & \(3.04\) & \(8.60\) & \(8.34\) \\ & \(0.2<Similarity<0.8\) & & & \(2.81\) & \(2.97\) & \(8.46\) & \(8.09\) \\ \cline{1-1} \cline{2-9} & \(Similarity<0.2\) & & & \(2.96\) & \(3.25\) & \(9.03\) & \(8.45\) \\ \cline{1-1} & \(0.2<Similarity<0.8\) & & & \(2.91\) & \(3.00\) & \(8.39\) & \(8.19\) \\ \hline \hline \end{tabular} \end{table} Table 2: Informativeness Comparison in three scoring range using our binary classifier trained with cross-entropy or Arcface loss. \begin{table} \begin{tabular}{l|c|c c c c c c} \hline \hline **Method** & \begin{tabular}{c} **Unseen** \\ **words** \\ \end{tabular} & **dev-clean** & **test-clean** & **dev-other** & **test-other** \\ \hline \multirow{2}{*}{**Bin. cls.** & ✗} & \(2.90\) & \(3.09\) & \(8.74\) & \(8.30\) \\ & ✗ & \(2.78\) & \(2.96\) & \(8.42\) & \(8.16\) \\ \hline \multirow{2}{*}{**Bin. cls** & ✗} & \(2.88\) & \(3.07\) & \(8.65\) & \(8.20\) \\ & ✗ & \(2.81\) & \(2.97\) & \(8.46\) & \(8.09\) \\ \hline \hline \end{tabular} \end{table} Table 5: Testing results, WER [\(\%\)], by using the TTS data selected by both ULM and our data selection approaches. \begin{table} \begin{tabular}{l|l|c c c c c c} \hline \hline **Method** & \begin{tabular}{c} **\# add.** \\ **utter. [M]** \\ \end{tabular} & **dev-clean** & **test-clean** & **dev-other** & **test-other** \\ \hline \multirow{2}{*}{**Bin. cls. (Xent)**} & \(0.72\) & \multirow{2}{*}{\(2.90\)} & \(3.02\) & \(8.81\) & \(8.23\) \\ & & \(0.71\) & & \(2.89\) & \(3.05\) & \(8.83\) & \(8.26\) \\ & & \(17\) & & & & \\ **LAM accuracy** & \(0.65\) & & & & & \\ **\& Bin. cls. (Xent)** & \((16\,\%\)) & & & & \\ **\& Bin. cls. (Xent)** & \((16\,\%\)) & & & & \\ **\& Bin. cls. (Xent)** & \((16\,\%\)) & & & & \\ \hline \hline \end{tabular} \end{table} Table 5: Testing results, WER [\(\%\)], by using the TTS data selected by both ULM and our data selection approaches.
2306.12344
An efficient, provably exact, practical algorithm for the 0-1 loss linear classification problem
Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which is not linearly separable, it has been shown that this problem, in full generality, is NP-hard. Alternative approaches all involve approximations of some kind, including the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss) or approximate combinatorial search, none of which can be guaranteed to solve the problem exactly. Finding efficient algorithms to obtain an exact i.e. globally optimal solution for the 0-1 loss linear classification problem with fixed dimension, remains an open problem. In research we report here, we detail the rigorous construction of a new algorithm, incremental cell enumeration (ICE), that can solve the 0-1 loss classification problem exactly in polynomial time. We prove correctness using concepts from the theory of hyperplane arrangements and oriented matroids. We demonstrate the effectiveness of this algorithm on synthetic and real-world datasets, showing optimal accuracy both in and out-of-sample, in practical computational time. We also empirically demonstrate how the use of approximate upper bound leads to polynomial time run-time improvements to the algorithm whilst retaining exactness. To our knowledge, this is the first, rigorously-proven polynomial time, practical algorithm for this long-standing problem.
Xi He, Waheed Ul Rahman, Max A. Little
2023-06-21T15:41:34Z
http://arxiv.org/abs/2306.12344v2
# An efficient, provably exact, practical algorithm for the 0-1 loss linear classification problem

###### Abstract

Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which is not linearly separable, it has been shown that this problem, in full generality, is NP-hard. Alternative approaches all involve approximations of some kind, including the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss) or approximate combinatorial search, none of which can be guaranteed to solve the problem exactly. Finding efficient algorithms to obtain an exact i.e. globally optimal solution for the 0-1 loss linear classification problem with fixed dimension, remains an open problem. In research we report here, we detail the rigorous construction of a new algorithm, incremental cell enumeration (ICE), that can solve the 0-1 loss classification problem exactly in polynomial time. We prove correctness using concepts from the theory of hyperplane arrangements and oriented matroids. We demonstrate the effectiveness of this algorithm on synthetic and real-world datasets, showing optimal accuracy both in and out-of-sample, in practical computational time. We also empirically demonstrate how the use of approximate upper bound leads to polynomial time run-time improvements to the algorithm whilst retaining exactness. To our knowledge, this is the first, rigorously-proven polynomial time, practical algorithm for this long-standing problem.

## 1 Introduction

There has been an increasing trend to leverage machine learning (ML) for high-stakes prediction applications that deeply impact human lives. Many of these ML models are "black boxes" with highly complex, inscrutable functional forms. In high-stakes applications such as healthcare and criminal justice, black box ML predictions have incorrectly denied parole [20], misclassified highly polluted air as safe to breathe [17], and suggested poor allocation of valuable, limited resources in medicine and energy reliability [21]. In such high-stakes applications of ML, we always want the best possible prediction, and we want to know how the model makes these predictions so that we can be confident the predictions are meaningful [14]. In short, the ideal model is simple enough to be easily understood (_interpretable_), and optimally accurate (_exact_). When the model is simple enough to understand and optimally accurate, our interpretations of the results can be faithful to what the model actually computes. Another compelling reason why simple models are preferable is that such _low complexity_ models usually provide better _statistical generality_, in the sense that a classifier fit to some training dataset will work well on another dataset drawn from the same distribution to which we do not have access (works well _out-of-sample_). The _VC dimension_ is a key measure of the complexity of a classification model.
The simple \(D\)-dimensional _linear hyperplane_ classification model, which we discuss in detail below, has VC dimension \(D+1\) which is the lowest of other widely used models such as the decision tree model (axis-parallel hyper-rectangles, VC dimension \(2D\)) and the \(K\)-degree polynomial (VC dimension \(O\left(D^{K}\right)\)), for instance [22, 23]. Assume a dataset of size \(N\) is drawn i.i.d (independent and identically distributed) from the same distribution as the training dataset, according to Vapnik [23]'s _generalization bound theorem_, for the hyperplane classifier we have, with high probability, \[E_{\text{test}}{\leq}E_{\text{emp}}+O\left(\sqrt{\frac{\log\left(N/\left(D+1\right) \right)}{N/\left(D+1\right)}}\right), \tag{1}\] where \(E_{\text{test}}\), \(E_{\text{emp}}\) are the _test 0-1 loss_ and the _empirical 0-1 loss_ of on training data set, respectively (Mohri et al., 2018). Equation (1) motivates finding the exact 0-1 loss on the training data, since, among all possible linear hyperplane classifiers, none has better worst-case test 0-1 loss than the exact classifier. Such linear classification algorithms are widely used in practice. The 0-1 loss objective (minimizing the number of misclassifications) for the binary linear classifier is hard to optimize directly. To illustrate, assume a data set consists of \(N\)_data points_ (or data items) \(\mathbf{x}_{n}\), \(\forall n\in\{1,\ldots,N\}=\mathcal{N}\), where the data points \(\mathbf{x}_{n}\in\mathbb{R}^{D}\) and \(D\) is the dimension of the _feature space_. Each data point has a unique true _label_\(l_{n}\in\{-1,1\}\), \(\forall n\in\mathcal{N}\). All true labels in this data set are stored in a set \(\mathbf{l}=\left\{l_{1},l_{2},...,l_{N}\right\}^{T}\). The data points and their labels are packaged together into the dataset \(\mathcal{D}\). The objective function for 0-1 loss linear classification problem can be defined as \[E_{\text{0-1}}\left(\mathbf{w}\right)=\sum_{n\in\mathcal{N}}\mathbf{1}\left[\text{ sign}\left(\mathbf{w}^{T}\bar{\mathbf{x}}_{n}\right)\neq l_{n}\right], \tag{2}\] which is a sum of 0-1 loss functions \(\mathbf{1}\left[\right]\), each taking the value 1 if the Boolean argument is true, and 0 if false. The function sign returns \(+1\) is the argument is positive, and \(-1\) if negative (and zero otherwise). The linear decision function \(\mathbf{w}^{T}\bar{\mathbf{x}}\) with parameters \(\mathbf{w}\in\mathbb{R}^{D+1}\) (\(\bar{\mathbf{x}}\) is the data in homogeneous coordinates) is highly interpretable since it represents a simple hyperplane boundary in feature space separating the two classes. It would therefore be extremely useful to find an optimal linear decision function for a given dataset \(\mathcal{D}\). Equation (2) counts the number of misclassified data points given the parameter \(\mathbf{w}\), so that the supervised classification problem is solved by computing \[\hat{\mathbf{w}}=\underset{\mathbf{w}:\mathbb{R}^{D+1}}{\text{argmin}}\ E_{\text{0-1}} \left(\mathbf{w}\right). \tag{3}\] Although apparently simple, this is a surprisingly challenging optimization problem. Considered a continuous optimization problem, the standard ML optimization technique, gradient descent, is not applicable (since the gradients of \(E_{\text{0-1}}\left(\mathbf{w}\right)\) with respect to \(\mathbf{w}\) are zero everywhere they exist), and the problem is non-convex so there are a potentially very large number of local minima in which gradient descent can become trapped. 
Heuristics exist, in particular the classic _perceptron training algorithm_ and variants, which are only guaranteed to find one of these local minima. By replacing the loss function \(\mathbf{1}\left[\right]\) with more manageable _surrogates_ such as the _hinge loss_ \(E_{hinge}\left(\mathbf{w}\right)=\sum_{n\in\mathcal{N}}\max\left(0,1-l_{n}\mathbf{w}^{T} \bar{\mathbf{x}}_{n}\right)\), which is convex and differentiable nearly everywhere, the corresponding optimization is a linear problem solvable by general-purpose algorithms such as _interior-point primal-dual_ optimization. However, this can only find a sub-optimal decision function corresponding to an upper bound on the globally optimal value of the objective \(E_{\text{0-1}}\left(\mathbf{w}\right)\). An alternative is to recast the problem as a combinatorial one, in the following way. Every choice of parameters \(\mathbf{w}\) determines a particular classification prediction or _assignment_, \(\text{sign}\left(\mathbf{w}^{T}\mathbf{x}_{n}\right)\in\mathcal{S}\) for all \(n\in\mathcal{N}\), where \(\mathcal{S}\) is the _discrete search_ or _solution space_; in the linear classification problem \(\mathcal{S}\) consists of all \(2^{N}\) possible _assignments/configurations_. Every assignment entails a particular total loss \(E_{\text{0-1}}\left(\mathbf{w}\right)\), and we need to find the assignment that attains the optimal 0-1 loss \(\hat{E}_{\text{0-1}}\). For finite \(N\) there are only a finite number of equivalence classes of parameters \(\mathbf{w}\) which entail the same assignment \(\mathbf{z}=\left\{z_{1},z_{2},...,z_{N}\right\}\in\mathcal{S}\), that is, the set \(\left\{\mathbf{w}\in\mathbb{R}^{D+1}:\text{sign}\left(\mathbf{w}^{T}\mathbf{x}_{n}\right)= z_{n},\forall n\in\mathcal{N}\right\}\). In fact, the geometry of the problem implies that not all assignments correspond to one of these equivalence classes; the only ones which do are known as _dichotomies_ or _half spaces_, and while there are \(2^{N}\) possible assignments, there are only \(O\left(N^{D}\right)\) dichotomies for a data set in \(D\) dimensions (Cover, 1965). Thus, problem (3) can instead be treated as a combinatorial optimization problem of finding the best such dichotomies and their implied assignments, from which an optimal parameter \(\hat{\mathbf{w}}\) can be obtained. This is still a challenging optimization problem, and indeed, it has been shown to be NP-hard (Ben-David et al., 2003; Feldman et al., 2012). Because this problem involves both continuous-valued parameters \(\mathbf{w}\) and discrete-valued assignments \(\mathbf{z}\), it is an example of a _mixed continuous-discrete optimization problem_ (more precisely in this case, a _mixed integer-linear programming_, MIP, problem) for which general-purpose algorithms such as _branch-and-bound_ (BnB) or _mixed integer programming solvers_ (GLPK for instance) are applicable. Nonetheless, such generic algorithms do not come with guarantees on computational complexity, and are usually worst-case exponential time as in principle they will test all \(2^{N}\) possible assignments. In our research here, we will construct an algorithm to solve (3) exactly, with polynomial-time guarantees on worst-case computational complexity. The proof of correctness of this algorithm, i.e. that it solves (3), relies on several ideas in computational geometry and oriented matroid theory.
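To get a feel for the size of this reduction from assignments to dichotomies, the short Haskell sketch below is our own illustration (it is not part of the algorithm) and assumes Cover's counting formula for points in general position separated by affine hyperplanes; it compares the \(2^{N}\) raw assignments with the \(O\left(N^{D}\right)\) dichotomies.
```
-- Our own illustration, assuming Cover's count of linearly separable dichotomies of
-- n points in general position in d dimensions: 2 * sum_{k=0}^{d} binomial(n-1, k).
binomial :: Integer -> Integer -> Integer
binomial n k = product [n - k + 1 .. n] `div` product [1 .. k]

dichotomies :: Integer -> Integer -> Integer
dichotomies n d = 2 * sum [binomial (n - 1) k | k <- [0 .. d]]

-- For example, dichotomies 100 2 == 9902, whereas 2^100 is roughly 1.3e30:
-- this O(N^D) versus 2^N gap is what the combinatorial treatment of (3) exploits.
```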
We take a fundamentally different approach, based on several modern, broadly-applicable algorithm design principles such as sequential decision processes, short-cut filter fusion, tupling, and other acceleration techniques. We shall use functional programming to derive an efficient algorithm from a provably correct specification. The paper is organized as follows. In Section 2, we explain in detail how our algorithm is constructed. Section 3 shows the results of empirical computational comparisons against approximate algorithms applied to classification data sets from the UCI machine learning repository and synthetic data, to test predictions about the performance of our algorithm. Finally, in Section 4, we present a summary and brief discussion of contributions, review related work, and suggest future research directions. ## 2 Solving the problem: functional program derivation meets combinatorial geometry It is widespread practice for machine learning algorithms that solve problems like (3) to be given with little justification. Where formal proofs of correctness--essential to _guaranteeing exactness_ in our case--are provided, these proofs are given "after the fact". While this practice is not necessarily problematic, it can be error-prone and _ambiguous_, and provides little insight into _how_ the algorithm was developed. We argue that it is preferable to _derive_ algorithms from a _correct specification_, using simple, calculational steps, a process known as _program calculation_ [12]. This requires treating programs as if they are mathematical functions, and it is for this reason we develop our algorithm in a _strongly-typed_ and _side-effect free functional programming_ language, specifically Haskell. Strong typing provides the substantial benefit that algorithm correctness requires _type correctness_ of the program (and up to a certain degree of freedom, the implication goes the other way as well), and the syntactic and semantic rules of the computer language avoid ambiguity. For readers unfamiliar with this language and functional programming in general, therefore, we will give brief tutorial notes alongside the derivations. All the code in the paper is directly executable Haskell so that the reader can test out the ideas by direct experimentation if desired. ### Generic sequential decision processes _Sequential decision processes_ (SDPs) were originally identified by Richard Bellman in his pioneering study of _dynamic programming_ (DP) [10, 11]. Conceptually, SDPs decompose the problem into a set of all possible partial candidate solutions, each such partial solution consisting of a particular combinatorial configuration of the input data (Figure 1). Partial solutions are then combined to construct the optimal, complete solution (this is Bellman's _principle of optimality_). In Haskell, this process of sequentially constructing partial solutions is expressed as a _recursion_ over a _list_ of input data: Figure 1: Graphical illustration of the _sequential decision process_ of our exact 0-1 loss classification algorithm, generating a set of combinatorial configurations, c, each one representing a unique classification hyperplane. The root node is an empty configuration and two update (Haskell) functions cnfupd1 and cnfupd2 are applied to each candidate configuration in each step \(n=0,1,2\ldots\) through the input dataset, generating new candidate combinatorial configurations using the next dataset item, xl.
_Non-viable_ configurations and their descendants (shaded) have 0-1 loss larger than the approximate upper bound on the loss, ub (Haskell code, modell (model c) > ub) and thus cannot lead to an optimal configuration.
```
sdp fs e = foldl (choice fs) [e]
choice fs cs x = [f' c' x | f' <- fs, c' <- cs]
```
The function choice is implemented as a Haskell _list comprehension_ (set comprehension for ordered sequences) which systematically applies each decision function in the given list of decision functions fs :: [t1 -> t2 -> a], to each (partial) input configuration given in the input list cs :: [t1], combined with the next input data x :: t2, in the sequence. In these _type declarations_, <identifier> :: <type string>, t1, t2, a etc. stand for _arbitrary_ data types; the type declaration indicates the _pattern_ of types within e.g. the calling structure of functions. These types will be bound to particular data type instances (such as built-in types integer, Int or Boolean, Bool) when the function is applied to actual data to perform computations. Therefore, each decision function must have type t1 -> t2 -> a, indicating that it has two parameters of type t1 and t2, respectively, and outputs a value of type a. A list of these is provided, namely, fs :: [t1 -> t2 -> a]. By ensuring consistency among this type information, Haskell can _infer_ the type declaration choice :: [t1 -> t2 -> a] -> [t1] -> t2 -> [a]. In turn, this constrains the actual SDP program which applies the decisions in choice recursively and has type sdp :: [t2 -> t1 -> t2] -> t2 -> [t1] -> [t2]. The recursion is implemented using Haskell's built-in (_left_) _list fold_ function foldl :: (b -> a -> b) -> b -> [a] -> b. Informally, foldl f e [x1,x2,...,xN] computes the value
```
(((e `f` x1) `f` x2) `f` ... `f` xN)
```
where x `f` y indicates _infix_ function application as opposed to Haskell's usual, _prefix_ application notation f x y, of the two parameter function f :: b -> a -> b. Astute readers may notice that the definition of the function sdp mentions only two parameters fs and e, but the type declaration mentions another input, the input data list, [t1]. This is the use of _currying_: in the more traditional mathematical notation, parameters are _tupled_ \((x,y)\) so we write \(g\left(x,y\right)\), equivalent to Haskell's g (x, y). But in Haskell we can _fix_ parameter x--thereby _partially applying_ g--to obtain the new, single-parameter Haskell function g x, taking the lone parameter y. In this case, this unspecified parameter [t1] is inferred from the explicit type of foldl. Footnote 1: In fact technically _every_ function in Haskell only takes one parameter with the successive use of currying like this; the syntax of function declarations mostly hides this fact from the user. ### An efficient paired combination-sequence generator The algorithm we will derive for solving the 0-1 loss linear classification problem will be based on an SDP which pairs size-\(D\) combinations, to enumerate all unique boundary hyperplanes, with linear sequences, to evaluate each boundary's implied 0-1 classification loss. This _combination-sequence generator_ will _enumerate_ all such pairs of combinatorial configurations, and by evaluating the 0-1 loss over these configurations we will be guaranteed to test every possible assignment, and thus guarantee finding the optimal solution.
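As a quick, self-contained check of the generic sdp machinery above before specializing it, the toy decision lists below (our own illustration) show that a single decision reduces the SDP to an ordinary left fold, while two decisions make the candidate set branch at every step, mirroring the tree of Figure 1.
```
-- Toy decision lists for the generic sdp above (our own illustration).
sumOnly :: [Int]
sumOnly = sdp [(+)] 0 [1, 2, 3]            -- one decision: an ordinary left fold, evaluates to [6]

sumOrProduct :: [Int]
sumOrProduct = sdp [(+), (*)] 1 [1, 2, 3]  -- two decisions: 2^3 = 8 candidate configurations
```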
The generator for combinations will be based on a generator of _subsequences_ given as a special case of the function sdp defined above,
```
subs = sdp subsfs empty
subsfs = [ignore, include]
ignore c x = c
include c x = c ++ [x]
empty = []
```
where the decisions either "ignore" the new data item x or "include" it in the current configuration c. To illustrate, for a string (character-type list) input, subs "abc" evaluates to ["","a","b","ab","c","ac","bc","abc"]. The generator for linear sequences is trivial, having only one decision, and it merely reconstructs the input data,
```
seqn = sdp seqnfs empty
seqnfs = [include]
```
such that seqn "abcd" evaluates to just ["abcd"]. Now, we need to pair together all possible subsequences with linear sequences of the input. This could be done after generating them but only very inefficiently, because every one of the \(2^{N}\) subsequences would need to be paired with a linear sequence directly. Instead, we can use the _tupling trick_ (or "products", Bird and de Moor, 1997) to perform this pairing during the SDP recursion:
```
sdppair fs1 fs2 e1 e2 = sdp (map paircross (cpp fs1 fs2)) (e1, e2)
pair (f1, f2) x = (f1 x, f2 x)
cross (f1, f2) (a, b) = (f1 a, f2 b)
paircross (f1, f2) = pair . cross (f1, f2)
cpp xs1 xs2 = [(x1', x2') | x1' <- xs1, x2' <- xs2]
```
To understand sdppair, first note that the function cpp :: [a] -> [b] -> [(a, b)] takes two lists and pairs every element of the first list with every element in the second list. Thus, cpp subsfs seqnfs is [(ignore, include), (include, include)], which pairs together the two decision functions of subs and seqn. The function paircross applies the two functions f1, f2 to the first element (fst) of its first parameter (which is a tuple), followed by the second parameter (which is a single value), and the second element (snd) of the first tuple parameter followed by the second parameter, respectively. Footnote 2: This might be written as \(\left(f_{1},f_{2}\right)(\left(c_{1},c_{2}\right),x)=\left(f_{1}\left(c_{1},x \right),f_{2}\left(c_{2},x\right)\right)\) in the usual mathematical notation. Finally, the built-in Haskell function map :: (a -> b) -> [a] -> [b] simply applies this paired update to each pair of decision functions in the paired SDPs. In this way, we can now combine the two SDPs into a single SDP: subsseq = sdppair subsfs seqnfs empty empty using which subsseq "abc" evaluates to [("","abc"),("a","abc"),("b","abc"),("ab","abc"),("c","abc"),("ac","abc"),("bc","abc"),("abc","abc")] The paired SDP generates all subsequences, but we only want subsequences of size \(D\). The simple solution is just to remove the unwanted (_infeasible_) subsequences after they are generated, using the built-in Haskell function filter :: (a -> Bool) -> [a] -> [a], which takes a _predicate_ p :: a -> Bool and deletes any element of the input list where p evaluates to False. In our case, the predicate fixlen d = (== d) . length . fst with type Int -> ([a], b) -> Bool would have the desired effect. Here, this predicate is expressed in the _point-free function composition_ style: the function fst (picking out the subsequence from each paired SDP configuration) is followed by the list length, followed by a test for the size being equal to the integer d (which is set to \(D\)). Using the Haskell composition operator `.`
avoids the need to be explicit about the implied configuration input (the "point" in point-free in this case) for this predicate. Footnote 3: This point-free notational device is extremely useful for reasoning about algorithms in functional style, and we will make extensive use of it in this paper. The problem with this simple post-enumeration filtering is that it is inefficient. Instead, it would be better to filter out any configurations during the SDP recursion, i.e. apply the predicate straight after applying the decision functions. By doing this, we can hope to avoid processing any infeasible configurations. Unfortunately, for fixlen this cannot be done, because the subs SDP constructs subsequences from smaller subsequences generated earlier in the sequence (Bellman's principle of optimality). However, the slightly modified predicate maxlen d = (<= d) . length . fst has the property we need, preventing subsequences of size larger than \(D\) from being retained. Subsequently, once generation is complete, we can remove any subsequences of size \(0,1,2,\ldots,D-1\) with post-filtering. This is a generic principle: if for any feasibility predicate p we can find an auxiliary _prefix-closed_ predicate q such that p and q is equal to p, then we can _fuse_ the SDP with the post-filtering to potentially reduce computational effort: sdpflitpair p q fs1 fs2 e1 e2 = (filter p) . foldl (choicefilt q (map paircross (cpp fs1 fs2))) [(e1, e2)] and choicefilt q fs cs = (filter q) . (choice fs cs), where the new choicefilt function simply applies the predicate q after applying the decision functions. This finally leads to our efficient enumerator of paired combination-sequence configurations: combseq d = sdpflitpair (fixlen d) (maxlen d) subsfs seqnfs empty empty such that running combseq 2 "abcd" gives [("ab","abcd"),("ac","abcd"),("bc","abcd"),("ad","abcd"),("bd","abcd"),("cd","abcd")] which, to check, agrees with filter (fixlen 2) (subsseq "abcd"). ### Exhaustive, incremental cell enumeration (ICE) As mentioned in the introduction, every hyperplane decision boundary \(\boldsymbol{w}\) leads to an implied class assignment \(\{+1,-1\}\) for every data point in the dataset, and a corresponding 0-1 loss. There is an uncountable infinity of such boundaries because \(\boldsymbol{w}\in\mathbb{R}^{D+1}\) is continuous. Nevertheless, the number of data points \(N\) is finite and these points occupy zero volume of the Euclidean feature space \(\mathbb{R}^{D+1}\). This implies that there are a finite number of _equivalence classes_ of decision boundaries which share the same assignment. These equivalence classes are Cover's dichotomies, and they form a finite partition of the parameter space \(\mathbb{R}^{D+1}\). It follows that an exact solution for the 0-1 loss classification problem involves exhaustively enumerating all \(O\left(N^{D}\right)\) such dichotomies and then selecting (one of) those with the smallest 0-1 loss. This is the basis on which the combination-sequence algorithm given above can be used to solve (3) exactly, in polynomial time. **Theorem 1**.: _Consider a dataset \(\mathcal{D}\) of \(N\) data points of dimension \(D\) in general position, along with their associated labels. All globally optimal solutions to problem (3) for \(\mathcal{D}\) are contained in the set of solutions of all positively and negatively-oriented linear classification decision hyperplanes which go through \(D\) out of these \(N\) data points._ Proof.: A formal proof is given in the Appendix.
The informal intuition behind this claim goes as follows. We can construct these solutions by selecting all \(D\) out of the \(N\) data points, finding the hyperplane which goes exactly through these points, and computing the associated assignments for the entire dataset and their corresponding 0-1 loss. Every such hyperplane has two sets of assignments, corresponding to the two possible orientations of the boundary. However, the \(D\) points used to construct these two hyperplanes have undecided class assignments, because the boundary goes exactly through them (so the classification model evaluates to 0 for these points). There are \(2^{D}\) possible assignments of class labels to the \(D\) points on the boundary, and each of these \(2^{D}\) assignments is a unique dichotomy. The best such dichotomy is the one with the smallest 0-1 loss, and this is guaranteed by selecting the labels of the \(D\) points such that they agree with their labels in the training data. Footnote 4: We can reason about these geometric facts formally in the original feature space, but the well-developed theory of _hyperplane arrangements_ can be used to more clearly structure proofs of these claims through transformation to _dual space_, where points are replaced with hyperplanes and vice-versa. We now have all the ingredients to construct our algorithm, which will enumerate all these linear classification decision hyperplanes and thus solve (3). We will need some basic linear algebra such as real-valued Vector and Matrix types, solving linear systems linearsolve :: Matrix -> Vector -> Vector and matrix-vector multiplication matvecmult :: Matrix -> Vector -> Vector, which are defined in the imported Linearsolve module and listed in the Appendix for completeness. #### Dataset First, the input Dataset is defined type Label = Integer type Item = (Vector, Label) type Dataset = [Item] which is a set of data Items, each comprising a tuple of a real-valued Vector data point and its associated integer training Label, for clarity extracted from the tuple using label (x, l) = l point (x, l) = x The Haskell keyword type simply associates a type with a string label to make code more concise. #### Linear classification A linear model is the unique hyperplane parameter of type Vector which goes through a given set of data points, where the number of data points is equal to the dimension of the space: ones :: Int -> Vector ones n = take n [1.0,1.0..] fitw :: Double -> [Vector] -> Vector fitw sense dx = [-sense] ++ (map (*sense) (linearsolve dx (ones (length (head dx))))) Here, dx is a list of vectors of length \(D\), in other words a \(D\times D\) matrix, and the function fitw solves a linear system of equations to obtain the normal vector, in homogeneous coordinates, of the hyperplane passing through all the data points in dx. The Haskell function take :: Int -> [a] -> [a] simply truncates a given list to the given number of elements, and the arithmetic sequence [1.0,1.0..] is the infinite list of +1.0 real values. The sense parameter, taking on the values \(\{-1.0,+1.0\}\), is used to select the orientation of the normal vector. The function head :: [a] -> a extracts the first element of a list (which must be non-empty); here it is used to find the dimension \(D\) of the dataset. Footnote 5: Haskell is a _lazy_ language, in that terms are only evaluated when required. This allows for the specification of non-terminating structures like this, which on evaluation will turn out to be finite.
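As a small usage illustration of fitw (our own example values, assuming Vector is a list of Doubles and that linearsolve solves the square system as described), two points in \(D=2\) determine a unique boundary line, and the two sense values give the same line with opposite orientations:
```
-- Our own usage illustration of fitw; [[2,0],[0,2]] defines the line x1 + x2 = 2.
wPos, wNeg :: Vector
wPos = fitw   1.0  [[2.0, 0.0], [0.0, 2.0]]  -- expected [-1.0, 0.5, 0.5]
wNeg = fitw (-1.0) [[2.0, 0.0], [0.0, 2.0]]  -- expected [1.0, -0.5, -0.5]: same boundary, flipped normal
```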
With an (oriented) linear model obtained this way, we can apply it to a set of data points in order to make a decision function prediction evalw :: [Vector] -> Vector -> [Double] evalw dx w = matvecmult (map ([1.0]++) dx) w which is the oriented distance of all data items in dx to the linear model with normal vector w. Given that prediction function value, we can obtain the corresponding predicted assignment in \(\{0,1,-1\}\) (which is zero for points which lie on the decision boundary and which actually define the boundary): plabel :: [Double] -> [Label] plabel = map (round . signum . underflow) where smalleps = 1e-8 underflow v = if (abs v) < smalleps then 0 else v The Haskell where keyword is a notational convenience which allows local function and variable definitions that can access the enclosing, less indented scope. The reason for the underflow correction is that numerical imprecision leads to predictions for some points which are not exactly on the boundary, where they should be. The function round just type casts the label prediction to match the label type (integer). Lastly, combining these two functions above obtains pclass :: [Vector] -> Vector -> [Label] pclass dx w = plabel (evalw dx w) which, given a set of data points and a hyperplane, obtains the associated labels with respect to that hyperplane. ### Loss Next, given a pair of labels, we want to be able to compute the corresponding term in the 0-1 loss. This makes use of Haskell _guard_ syntax loss01 :: Label -> Label -> Integer loss01 l1 l2 | l1 == 0 = 0 | l2 == 0 = 0 | l1 /= l2 = 1 | otherwise = 0 This function handles the situation where either label is 0, which occurs for data points which lie on the defining hyperplane and whose predicted class is always assumed to be the same as the training label, and also the default case (otherwise) to ensure that loss01 is total. Using this, we can compute the 0-1 loss, \(E_{0-1}\), for a given pair of label lists e01 :: [Label] -> [Label] -> Integer e01 x y = sum (map (\(lx, ly) -> loss01 lx ly) (zip x y)) making use of the Haskell function zip :: [a] -> [b] -> [(a,b)] which pairs every element of the first given list with the corresponding element of the second given list. ### Configuration Our recursive combination-sequence SDP requires a partial configuration data type which is updated by application of the decisions (Figure 1). For computational efficiency, we package up the linear classification hyperplane defined by the size-\(D\) combination, with the 0-1 loss for its corresponding sequence, which together we define as the type (classification) Model: type Model = (Vector, Integer) modelw :: Model -> Vector modelw (w, l) = w modell :: Model -> Integer modell (w, l) = l and, combining this with the combination-sequence data type, gives us the SDP configuration Config: type Config = ([Item], [Item], Maybe Model) comb :: Config -> [Item] comb (c, s, m) = c seqn :: Config -> [Item] seqn (c, s, m) = s model :: Config -> Maybe Model model (c, s, m) = m The first element in this tuple is the combination of data items (with maximum size \(D\)) that is used to construct a linear model, and the second element is the sequence of data items which have been encountered so far in the recursion. Note that here, the value for Model in the configuration is _optional_. This is indicated by the use of Haskell's Maybe data type, which is roughly equivalent to allowing variables to take None values in Python.
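Before assembling the configurations into the main recursion, here is a small usage check of the prediction and loss functions just defined (our own example values, again assuming Vector is a list of Doubles): the point (2,0) lies exactly on the boundary through (2,0) and (0,2), so it receives the undecided label 0, which loss01 never counts as an error.
```
-- Our own usage check of pclass and e01 with the boundary w = [-1.0, 0.5, 0.5],
-- i.e. the line x1 + x2 = 2 through the points (2,0) and (0,2).
predicted :: [Label]
predicted = pclass [[2.0, 0.0], [0.0, 0.0], [3.0, 3.0]] [-1.0, 0.5, 0.5]  -- expected [0, -1, 1]

exampleLoss :: Integer
exampleLoss = e01 [1, 1, 1] predicted  -- expected 1: (0,0) is misclassified, the on-boundary point is not
```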
Consistent with this optional model, the initial configuration has empty combination-sequence pairs and a Nothing-valued model: empty :: Config empty = ([], [], Nothing) The reason for the model being optional should be obvious: for combinations of insufficient size, it is not possible to compute a model or corresponding 0-1 loss. #### Algorithms We are now in a position to give the main recursion e01gen, which is a combination-sequence SDP (Figure 1):
```
e01gen :: Double -> Integer -> Dataset -> [Config]
e01gen sense ub dxl = foldl (choicefilt retain [cnfupd1, cnfupd2]) [empty] dxl
  where
    dim = length (point (head dxl))
    retain c = (feasible c) && (viable c)
    feasible c = length (comb c) <= dim
    viable c = case (model c) of
                 Nothing -> True
                 Just (w, l) -> (l <= ub)
    cnfupd1 c xl = case (model c) of
                     Nothing -> (comb c, updseqn, Nothing)
                     Just (w, l) -> (comb c, updseqn, Just (w, l + e01 [label xl] (pclass [point xl] w)))
      where
        updseqn = (seqn c) ++ [xl]
    cnfupd2 c xl = case (model c) of
                     Nothing -> if (length updcomb == dim)
                                  then (updcomb, updseqn, Just (w, e01 (map label updseqn) (pclass (map point updseqn) w)))
                                  else (updcomb, updseqn, Nothing)
                     Just m -> (updcomb, updseqn, Just m)
      where
        updcomb = (comb c) ++ [xl]
        updseqn = (seqn c) ++ [xl]
        w = fitw sense (map point updcomb)
```
Given an orientation parameter sense :: Double, an approximate upper bound on the 0-1 loss ub :: Integer, and a (non-empty) dataset dxl :: Dataset, this outputs a list of candidate solutions of type [Config] which are potential globally optimal solutions to (3), with 0-1 loss no worse than ub. This efficient SDP is derived using all the same principles as section 2.2 above (generic recursive SDPs, tupling, prefix-closed filter fusion), but additionally includes updates to the configurations of type Config, and it augments the prefix-closed filter fusion with an additional approximate upper bound predicate. The variable dim is just the dimension \(D\) of the data points, extracted from the first data item in the dataset. Regarding configuration updates, the decision function cnfupd1 :: Config -> Item -> Config takes an input configuration and updates the sequence of the configuration with the new data item. Simultaneously, if there is no defined linear model, it leaves the model unchanged; if there is a model, then the model's 0-1 loss is updated with the loss for the new data item. It leaves the combination part of the configuration unchanged. Conversely, the decision function cnfupd2 :: Config -> Item -> Config is responsible for increasing the size of the configuration's combination; therefore, it must also compute a new linear model for the configuration (using fitw) when its combination reaches size \(D\) for the first time. Thus, the Maybe value of the model in the configuration undergoes a one-way _state transition_ from undefined (Nothing) to computed (Just m); when computed, the linear boundary hyperplane remains unchanged and the configuration's 0-1 loss is updated on each subsequent step of the SDP recursion. Looking at the filtering, the predicate feasible :: Config -> Bool is the function q, i.e. maxlen, in section 2.2. The predicate viable :: Config -> Bool checks whether a linear hyperplane model is defined for a configuration, and if so (case Just m), returns True when the 0-1 loss of this configuration is at most equal to the approximate upper bound.
This is prefix-closed because the 0-1 loss is non-decreasing as more data is scanned by the recursion; this is a very useful computational efficiency improvement since it can eliminate many non-optimal partial solutions. Since the conjunction of two prefix-closed predicates is also prefix-closed, these two predicates are combined into retain :: Config -> Bool, which forms the prefix-closed filtering for the SDP. Having generated partial solutions, the next stage is to select an optimal one. This involves a straightforward recursive iteration through a non-empty list of partial configurations (foldl1 :: (a -> a -> a) -> [a] -> a), comparing adjacent configurations remaining in the list. The best of the pair, that is, one with 0-1 loss at most as large as the other, is selected, using function best :: Config -> Config -> Config. At the same time, configurations with Nothing-valued (undefined) models are simultaneously removed in the same iteration. This is the implicit application of the predicate def = (/= Nothing) . model, which satisfies the required filter fusion condition from section 2.2 that def and viable and feasible, equals def and viable:
```
sel01opt :: [Config] -> Config
sel01opt = foldl1 best
  where
    best c1 c2 = case (model c1) of
                   Nothing -> c2
                   Just (w1, l1) -> case (model c2) of
                                      Nothing -> c1
                                      Just (w2, l2) -> if (l1 <= l2) then c1 else c2
```
Finally, we can give the program for solving problem (3). It generates all positively and negatively-oriented decision boundaries (which are viable with respect to the approximate upper bound ub) and selects an optimal one:
```
e01class :: Integer -> Dataset -> Config
e01class ub dxy = sel01opt ((e01gen 1.0 ub dxy) ++ (e01gen (-1.0) ub dxy))
```
An approximate upper bound may be computed by any reasonably good approximate method, for instance the support vector machine (SVM). The tighter this bound, the more partial solutions are removed during iteration of e01gen, which is desirable in order to achieve practical computational performance. ## 3 Empirical experiments In this section, we analyze the computational performance of our novel ICE algorithm on both synthetic and real-world data sets. Our evaluation aims to test the following predictions: (a) the ICE algorithm always obtains the best 0-1 loss (classification error) among the algorithms compared (hence obtains optimal prediction accuracy); (b) wall-clock run-time matches the worst-case time complexity analysis, and (c) viability filtering using the approximate upper bound leads to polynomial decrease in wall-clock run-time. Footnote 6: For practical purposes, all our results are obtained using a direct, efficient C++ translation of the Haskell code given in this paper. ### Real-world data set classification performance Various linear classification algorithms were applied to classification data sets from the UCI machine learning repository (Dua and Graff, 2019). We compare our exact algorithm, ICE, against approximate algorithms: support vector machine (SVM), logistic regression (LR) and linear discriminant analysis (LDA). As predicted (Table 1), the ICE algorithm always finds solutions with 0-1 loss at least as small as the approximate algorithms, and strictly smaller in most cases (the Inflammations data set is linearly separable, so all algorithms attain zero loss). ### Out-of-sample generalization tests From equation 1, we can predict that an exact algorithm has the best, worst-case test 0-1 loss.
In this section, we test this prediction by analyzing the performance of ICE algorithm using cross-validation (see Table 2), where the out-of-sample predictions use the _maximum margin representative_ of the equivalence class of the exact hyperplane (Vapnik, 1999). ### Run-time complexity analysis In the worst case situation, \(\mathtt{ub}\geq N/2\), viability filtering with the approximate global upper bound will do nothing because every combinatorial configuration will be feasible (if a model's objective function value \(E_{0-1}\geq N/2\), we can get its negative part by reversing the direction of the normal vector; the resulting model will have the 0-1 loss smaller than \(N/2\), both models represented by the same hyperplane). Therefore, all \(O\left(N^{D}\right)\) configurations will be enumerated for all \(N\) dataset items. In each iteration, a configuration takes constant time to update its 0-1 loss, followed by \(O\left(N\right)\) time required to calculate the complete 0-1 loss of a configuration. Hence, the ICE algorithm will have \(O\left(N^{D+1}\right)\) in the worst case. We test the wall clock time of our novel ICE algorithm on four different synthetic data sets with dimension ranging from \(1D\) to \(4D\). The \(1D\)-dimensional data set has data size ranging from \(N=1000\) to \(60000\), the \(2D\)-dimensional ranges from \(150\) to \(2400\), \(3D\)-dimensional from \(50\) to \(500\), and \(4D\)-dimensional data ranging from \(30\) to \(200\). The worst-case predictions are well-matched empirically (see Figure 2). Viability filtering using the approximate global upper bound \(\mathtt{ub}\) is a powerful technique which can substantially speed up our algorithm. Next, we will evaluate the effectiveness of the upper bound (see Figure 3). We generate five synthetic datasets with dimension ranging from \(D=1\) to \(D=4\), and varying \(\mathtt{ub}\) from \(\hat{E}_{0\text{-}1}\) to \(N\). The synthetic datasets are chosen such that they all have \(\hat{E}_{0\text{-}1}\) approximately equal to \(0.1N\) and \(0.2N\). Figure 3 shows polynomial degree decrease in run time as \(\mathtt{ub}\) is decreased from \(N/2\) to \(\hat{E}_{0\text{-}1}\), and it remains stable when \(\mathtt{ub}\geq N/2\) because then all configurations are viable. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline UCI dataset & \(N\) & \(D\) & Incremental cell enumeration (ICE) (ours) & Support vector machine (SVM) & Logistic regression (LR) & Linear discriminant analysis (LDA) \\ \hline \hline Habermans & 306 & 3 & **21.6\% (66)** & 24.8\% (76) & 23.9\% (73) & 25.2\% (77) \\ \hline Caesarian & 80 & 5 & **22.5\% (18)** & 27.5\% (22) & 27.5\% (22) & 27.5\% (22) \\ \hline Cryotherapy & 90 & 6 & **4.4\% (4)** & 8.9\% (8) & **4.4\% (4)** & 10.0\% (9) \\ \hline Voicepath & 704 & 2 & **2.7\% (19)** & 3.3\% (23) & 3.4\% (24) & 3.4\% (24) \\ \hline Inflammations & 120 & 6 & **0.0\% (0)** & **0.0\% (0)** & **0.0\% (0)** & **0.0\% (0)** \\ \hline \end{tabular} \end{table} Table 1: Empirical comparison of the classification error performance (smaller is better), \(\hat{E}_{0\text{-}1}\), of our novel incremental cell enumeration (ICE) algorithm, against approximate methods (support vector machine, logistic regression, Fisher’s linear discriminant) on real-world datasets from the UCI machine learning repository (Dua and Graff, 2019). Misclassification rates are given as classification error percentage, \(\hat{E}_{0\text{-}1}/N\%\) and number of classification errors, \(\hat{E}_{0\text{-}1}\) (in brackets). 
Best performing algorithm is marked bold. As predicted, ICE, being exact, outperforms all other non-exact algorithms. \begin{table} \begin{tabular}{|c|r|r|r|r|r|r|r|r|} \hline UCI dataset & ICE & ICE & SVM & SVM & LR & LR & LDA & LDA \\ & train & test & train & test & train & test & train & test \\ & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) \\ \hline \hline Habermans & **21.5** & **23.8** & 24.8 & 25.8 & 24.9 & 26.4 & 25.0 & 27.1 \\ & (0.6) & (6.8) & (0.6) & (5.7) & (1.2) & (6.8) & (1.0) & (6.0) \\ \hline Caesarian & **16.4** & **37.5** & 30.3 & 42.5 & 27.9 & 43.8 & 29.6 & 43.8 \\ & (1.4) & (16.8) & (4.9) & (13.9) & (3.7) & (15.1) & (2.8) & (15.1) \\ \hline Cryotherapy & **4.4** & **9.0** & 7.8 & 17.8 & **4.4** & **9.0** & 10 & 16.7 \\ & (0.6) & (10.5) & (1.7) & (10.1) & (0.6) & (9.2) & (1.7) & (9.0) \\ \hline Voicepath & **2.7** & **3.7** & 3.3 & 4.0 & 3.4 & 3.7 & 3.3 & 3.9 \\ & (0.2) & (1.4) & (0.2) & (1.7) & (0.2) & (1.3) & (0.3) & (1.1) \\ \hline Inflammations & **0.0** & **0.0** & **0.0** & **0.0** & **0.0** & **0.0** & **0.0** & **0.0** \\ & (0.0) & (0.0) & (0.0) & (0.0) & (0.0) & (0.0) & (0.0) & (0.0) \\ \hline \end{tabular} \end{table} Table 2: Ten-fold cross-validation out-of-sample tests on UCI data sets of our novel incremental cell enumeration (ICE) algorithm, against approximate methods (support vector machine, SVM; logistic regression, LR; linear discriminant analysis, LDA). Mean classification error (smaller is better) percentage \(E_{0\text{-}1}/N\%\) is given (standard deviation in brackets), for the training and test sets. Best performing algorithm is marked bold. As predicted, ICE always obtains better solutions on average on out-of-sample datasets than other, non-exact algorithms. Figure 2: Log-log wall-clock run time (seconds) for the ICE algorithm in \(1D\) to \(4D\) synthetic datasets, against dataset size \(N\), where the approximate upper bound is disabled (by setting it to \(N\)). The run-time curves from left to right (corresponding to \(D=1,2,3,4\) respectively) have slopes 2.0, 3.1, 4.1, and 4.9, a very good match to the predicted worst-case run-time complexity of \(O\left(N^{2}\right)\), \(O\left(N^{3}\right)\), \(O\left(N^{4}\right)\), and \(O\left(N^{5}\right)\) respectively. ## 4 Summary, discussion and future work In this paper, we have presented _incremental cell enumeration_, ICE, the first provably correct, worst-case polynomial \(O\left(N^{D+1}\right)\) run-time complexity algorithm for solving the 0-1 loss linear classification problem (3). Our solution combines novel ideas from combinatorial geometry and program transformations in functional programming. Compared with well-known approximate algorithms such as SVMs, logistic regression and linear discriminant analysis, our algorithm is guaranteed to find the best possible solution for a given dataset, whereas an approximate algorithm cannot be relied upon to find it. Our empirical investigations show that the exact solution is often significantly better than the best approximate solution. From the point of view of interpretability, this is critically important because it demonstrates that, contrary to widely held belief, simple linear models can also be _accurate_, i.e. good, purely linear classification models do indeed exist for many practical problems. Approximate algorithms are not guaranteed to be able to find these simple yet accurate linear models, and prior to ICE, provably correct exact algorithms for finding these optimal linear models were computationally intractable.
For medium-scale \(N\) and small \(D\), we have already demonstrated that ICE can be a good replacement for linear SVMs, for example. ### Related work Classification algorithms have a long history. The first classification algorithms date back to the early 20th century, perhaps most importantly logistic regression (LR) which is regarded as one of the most useful algorithms for linear classification. The logistic function first appeared in the early 19th century [Quetelet et al., 1826], and has been rediscovered a few more times throughout the late 19th century and early 20th century, yet Wilson and Worcester were the first to use the logistic model in bioassay research [Wilson and Worcester, 1943], and Cox was the first to construct the log-linear model for the linear classification problem [Cox, 1958, 1966]. In 1972, Nelder and Wederburn first proposed a generalization of the logistic model for linear-nonlinear classification. Support vector machine (SVM) is probably the most famous algorithm for the linear classification problem [Cortes and Vapnik, 1995], it optimizes the regularized hinge loss to obtain a feasible decision hyperplane with _maximal margin_. Most of these algorithms can be considered as optimizing over convex surrogate losses for the 0-1 loss function. More recent studies showed that optimizing surrogate losses, such as the hinge loss, is not robust to _outliers_[Long and Servedio, 2008, Liu and Wu, 2007]. The objectives of these surrogate losses, while leading to computationally efficient algorithms, fail to be robust compared with exact algorithms. In ML research, the study of exact algorithms for mixed continuous-discrete problems, has not received the same level of attention as approximate algorithms such as Markov Chain Monte Carlo or variational Bayes. Perhaps one of the main reasons for this is that typically and in the general case, these problems are NP-hard. Where exact solutions have been found, these are often obtained using inefficient _branch-and-bound_ (BnB) algorithms or off-the-shelf _mixed integer programming_ (MIP) solvers. For instance, the _\(K\)-clustering_ problem is known to be NP-hard when \(D\geq 2\) and constrained versions of \(K\)-clustering, for instance the _\(K\)-medians, \(K\)-means_ and _\(K\)-centers_ problems, are also NP-hard to Figure 3: Log-log wall-clock run time (seconds) of the ICE algorithm on synthetic data, as the approximate upper bound viability is varied, \(\hat{E}_{0\text{-}1}\leq\mathfrak{ub}\leq N\), for \(\hat{E}_{0\text{-}1}\) approximately \(0.1N\) (left), and approximately \(0.2N\) (right). It can be seen that the empirical run-time decreases polynomially as \(\mathfrak{ub}\) tends towards the exact \(\tilde{E}_{0\text{-}1}\) of the dataset. optimize. Fayed and Atiya (2013) use a _mixed breadth-depth first_ strategy BnB algorithm to solve the \(K\)-center problem, and Du Merle et al. (1999) combines the _interior point_ algorithm with BnB to solve the the \(K\)-means problem. Meanwhile, Peng and Xia (2005) use a _cutting-plane_ algorithm for solving the \(K\)-means problem. More recently, for the _decision/regression tree_ problem, the _Generalized and Scalable Optimal Sparse Decision Trees_ (GOSDT) framework was proposed to _find optimal sparse decision trees_ formulated as the solution to a wide variety of objective functions (Lin et al., 2020). Zhang et al. (2023) construct an algorithm that solves the sparse decision tree problem based on Lin et al. (2020)'s GOSDT framework. 
Bertsimas and Dunn (2017) provides a novel _mathematical programming_ formula for the decision tree problem and optimizes it exactly using modern MIP techniques. Little work appears to have been devoted to exact algorithms for the 0-1 loss classification problem. Tang et al. (2014) implemented a MIP approach to obtain the maximal margin boundary for the optimal 0-1 loss, and Brooks (2011) optimized SVM with "ramp loss" and the hard-margin loss using a _quadratic mixed-integer program_ (QMIP), where the ramp loss is a continuous function mixed with the 0-1 loss. Problems involving optimizing the _integer coefficient linear classifier_ have also drawn some attention (Chevaleyre et al., 2013; Carrizosa et al., 2016), again exact solutions have only been obtained using inefficient MIP. _Scoring system research_ is related to linear classification with integer coefficients, but many scoring systems are built using traditional heuristic/approximate classification methods, and Ustun (2017)'s empirical results show that the loss is substantial if we optimize convex surrogates. Therefore, Ustun and Rudin (2019) presented a cutting-plane algorithm to learn an optimal risk score, or solve it by formulating it as a MIP problem[Ustun and Rudin, 2016]. Perhaps closest to our work, Nguyen and Sanner (2013) developed a BnB algorithm for solving (3). Nguyen and Sanner (2013) also constructed a polynomial-time combinatorial search algorithm which is similar to our algorithm, but gave no proof of correctness. Hence, previous work on this problem of solving (3) is either computationally intractable i.e. worst case exponential run-time complexity, or uses inefficient, off-the-shelf MIP solvers, or is not provably correct (Nguyen and Sanner, 2013). Previously, we designed and implemented in Python, three other novel, exact algorithms (E01-ICG, E01-ICG-purge and E01-CE) for this problem (Xi and Little, 2023), although correctness proofs and formal algorithm derivations have yet to be provided. We performed some small-scale experiments to compare the wall-clock run-time of these three algorithms against an earlier BnB algorithm. ### Future work While the SDP used as the basis of this algorithm is an efficient _factorization_ of the combinatorics involved, it involves a certain trade-off between usage of memory and computational resources. Our results were obtained using a highly-optimized C++ implementation of the Haskell ICE algorithm listed here, but although this is practical for medium scale \(N\) and \(D\) there is only so much that an optimizing C++ compiler can achieve without fundamental design changes to the algorithm, for instance, using combinatorial generator patterns other than SDPs. These other generators also open up the possibility of using other strategies such as dynamic upper bound updating (as in classical BnB). Finally, _intrinsically parallel_ algorithms implemented on massively parallel _graphical processing units_ (GPUs) would be expected to lead to much better than \(O\left(N^{D+1}\right)\) worst-case run-time complexity, which may finally make exact 0-1 loss classification practical for problems with large \(N\) and \(D\).
2307.05511
Opinions with few disciples can win in the dynamical directed networks: an evolutionary game perspective
The voter model on networks is crucial to understand opinion formation. Uni-directional social interactions are ubiquitous in real social networks whereas undirected interactions are intensively studied. We establish a voter model on a dynamical directed network. We show that the opinion invasion is captured by a replicator equation of an emergent four-player two-strategy game, and the average in(out)-degree for the two opinions is fully captured by an emergent three-player two-strategy game. Interestingly, it is shown that the difference between the two emergent games arises from the uni-directionality of the network. The difference implies that the opinion with a small number of disciples can take over the population for in-group bias, provided that the network is directed. Our work makes an explicit connection between opinion dynamics and evolutionary games.
Yakun Wang, Bin Wu
2023-07-05T01:44:08Z
http://arxiv.org/abs/2307.05511v1
Opinions with few disciples can win in the dynamical directed networks: an evolutionary game perspective ###### Abstract The voter model on networks is crucial to understand opinion formation. Uni-directional social interactions are ubiquitous in real social networks whereas undirected interactions are intensively studied. We establish a voter model on a dynamical directed network. We show that the opinion invasion is captured by a replicator equation of an emergent four-player two-strategy game, and the average in(out)-degree for the two opinions is fully captured by an emergent three-player two-strategy game. Interestingly, it is shown that the difference between the two emergent games arises from the uni-directionality of the network. The difference implies that the opinion with a small number of disciples can take over the population for in-group bias, provided that the network is directed. Our work makes an explicit connection between opinion dynamics and evolutionary games. ## 1 Introduction Opinion dynamics have become attractive in diverse disciplines, such as statistical physics, control theory and system science [1, 2, 3, 4, 5, 6]. Two main topics of opinion dynamics are how opinions reach a consensus and how opinions coexist for a long time. The voter model is one of the classical models [7, 8, 9]. It is a discrete opinion dynamics model in which an individual adopts an opinion with a probability proportional to the fraction of that opinion in its neighborhood. Besides opinion dynamics, the voter model has various applications in many fields, such as epidemic spreading [10], catalytic reactions in chemistry [11] and prey-predator interaction in biology [12]. Individual interactions in opinion dynamics are typically captured by networks. The real-world networks are dynamical, rather than static [13, 14, 15, 16, 17, 18, 19, 20, 21]. The researches on the co-evolutionary dynamics of opinions and networks have been well thorough [22, 23, 24, 25]. A simple model with a single parameter controlling the balance of the two dynamics is built to investigate the opinion formation [26]. The modified model exhibits complicated topological behaviors via introducing heterophily [27]. One individual can rewire to an individual chosen at random from those with the same opinion or from the whole network. The rewire-to-same and rewire-to-random models have different phase transitions [28]. Master equation approximation, pair approximation and heterogeneous mean-field are well-known approaches to capture the opinion dynamics on the networks [7, 29, 30, 31]. But all of these works explicitly assume that the networks are bi-directional. Unidirectional social interactions are ubiquitous in the real world. For example, a user follows another user on Twitter based on a common interest, and this following relationship is asymmetric [32]: Sally enjoys Pilates, so she follows the blogger Jessica, who teaches Pilates online. But Jessica does not follow Sally. In the US National Longitudinal Study of Adolescent Health (the "AddHealth" study), high school students were asked to identify their friends within the school. More than half of the friendships are found to be unidirectional. Lisa considering Cindy to be her friend does not imply that Cindy considers Lisa to be her friend [33]. A large number of biological systems also have unidirectional interactions. For example in a wolf pack, wolves in general are subservient to the alpha wolf and their socialization is strictly one-way [34]. 
Directed dynamic networks are also widely present in the field of engineering [35, 36]. We concentrate on the unidirectional nature of the network [37, 38] besides the dynamic nature of the social network. In this paper, we establish a voter model on a dynamical directed network [39, 40]. Each node in the network represents an individual, and each directed link represents a directed social relationship. We are to address two questions, i.e., fate of opinions and transient topology. It is found that the fate of opinions is captured by an _emergent_ four-player two-strategy game. The expectation of in(out)-degree for the two opinions is captured by an _emergent_ three-player two-strategy game. The two emergent games are typically different for directed networks, which facilitates us to explain some counterintuitive phenomena. ## 2 Model Initially, the whole population of size \(N\) are situated on nodes of a regular directed graph. Each node has \(L\) incoming edges and \(L\) outgoing edges, as shown in Fig. 1(a). The total number of directed links is thus \(NL\). We assume that \(N\gg L\). It implies that each individual has a limited number of neighbors compared with the population size which is ubiquitous in social networks. There are two opinions, denoted as \(+\) and \(-\), respectively. Each individual holds one type of opinion and we denote \(\overrightarrow{XY}\) as the type of the directed link, where \(\overrightarrow{XY}\in\{\overrightarrow{+},\overrightarrow{+}, \overrightarrow{-},\overrightarrow{-},\overrightarrow{-}\}\stackrel{{ \Delta}}{{=}}S\). Here we propose a voter model on the evolving directed network. In the network, we define the direction of "learning": for example, if node \(B\) points to node \(A\), it implies that \(B\) unilaterally learns from \(A\) and \(A\) does not learn from \(B\). In other words, the source node plays the role of a student to learn the target node who plays the role of a teacher, as shown in Fig. 1(b). For a node, it has a student set and a teacher set. The student set is composed of the source nodes on the edges that flow into the node, and the teacher set is composed of the target nodes on the edges that flow out from the node. Each individual has an opportunity to either update its opinion with probability \(w\) or update its link with probability \(1-w\) at each time step, which is shown in Fig. 2. When \(w=1\), the social links between individuals are invariant, i.e., individuals only update their opinions. It refers to the opinion dynamics on a static directed network [41, 42]. When \(w=0\), the social network evolves all the time whereas the fractions of opinions are constant. For _opinion dynamics_, we focus on the voter model [6]. An individual is randomly selected from the population. The probability that the selected individual adopts opinion \(+\) is proportional to the number of teachers with opinion \(+\) in its teacher set. In other words, the selected individual adopts opinion \(+\) with probability \(Q_{+}/\left(Q_{+}+Q_{-}\right)\), where \(Q_{\pm}\) refers to the number of its teachers whose opinion is \(\pm\). It is notable that if the teacher set of the selected node is empty, then the individual has no teachers to learn from and keeps the opinion. For _linking dynamics_, our model focuses on the updating of directed links. The whole network is adjusted by at most one directed link at each time step. There are three steps as follows. (i) _Selecting a directed link_. 
A directed link \(\overrightarrow{XY}\) is randomly selected from all the directed links. The directed link \(\overrightarrow{XY}\) corresponds to the student \(X\) and the teacher \(Y\), where \(\overrightarrow{XY}\in S\). (ii) _Selecting \(X\) or \(Y\). \(X\)_ is selected with probability \(\alpha_{\overrightarrow{XY}}\), where \(0<\alpha_{\overrightarrow{XY}}<1\). Otherwise \(Y\) is selected with probability \(\beta_{\overrightarrow{XY}}\). We have \(\alpha_{\overrightarrow{XY}}+\beta_{\overrightarrow{XY}}=1\). (iii) _Breaking the directed link_. The \(\overrightarrow{XY}\) breaks off with a pre-defined probability \(k_{\overrightarrow{XY}}\), where \(0<k_{\overrightarrow{XY}}<1\). It implies that if the student \(X\)(teacher \(Y\)) is selected, then \(X(Y)\) would like to break the directed link with probability \(k_{\overrightarrow{XY}}\) to change the current Figure 1: **Uni-directional social interactions.** (a) This is a regular directed network. There are nine individuals and each individual has two incoming and two outgoing edges, i.e., \(N=9\) and \(L=2\). (b) There are two types of opinions and four types of edges, namely \(\overrightarrow{+\tau},\overrightarrow{+\tau},\overrightarrow{-\tau}, \overrightarrow{-\tau}\). A directed edge connects \(A\) and \(B\), which implies that \(B\) as a student can learn from \(A\) as a teacher. Figure 2: **Coevolutionary dynamics of opinions and directed social relationships.** (a) A population is described by a regular directed network. For example, the student set of \(A\) is \(\{B,G\}\) and the teacher set of \(A\) is \(\{C,D\}\). (b) With probability \(w\), opinion update happens. In this case, an individual is randomly chosen to update its opinion. It learns from its teachers who are the target nodes. The probability of the focal individual adopting the opinion \(\pm\) is proportional to the number of its teachers with opinion \(\pm\). For example, suppose \(A\) is selected and its teachers are \(C\) and \(D\). Then \(A\) adopts opinion + with probability \(1/2\). If the selected node has no outgoing edges, i.e., it has no teacher to learn from, then it will remain the original opinion. (c) With probability \(1-w\), the linking dynamics happens. Firstly, a directed link is selected randomly. Secondly, the source or target node of the directed link is chosen based on the respective pre-defined probability. Thirdly, the directed link breaks with a pre-defined probability depending on its type. If the student(teacher) is selected and the directed link is broken, then the node reconnects with the outgoing(incoming) edge to a random node that is neither in its current teacher set nor in its current student set. For example, the directed link \(\overrightarrow{BA}\) is selected. With probability \(\alpha_{\overrightarrow{\pm}}\), the source node \(B\) is selected and the directed link is broken with probability \(k_{\overrightarrow{\pm}}\). Then the node randomly chooses an individual which is neither in \(B\)'s current teacher set nor in \(B\)'s current student set (\(F\),\(G\),\(H\) or \(I\)). Otherwise, with probability \(\beta_{\overrightarrow{\pm}}\), the target node \(A\) is selected, and then it also randomly chooses an individual which is neither in \(A\)’s current teacher set nor in \(A\)’s current student set (\(E\),\(F\),\(H\) or \(I\)). Suppose \(B\) is selected and breaks the directed link \(\overrightarrow{BA}\). Finally, \(B\) finds a new teacher \(F\) and reconnects to \(F\) with a directed link. teacher(student). 
(iv) _Rewiring the node._ If student \(X\) is selected and the \(\overrightarrow{XY}\) is broken, then \(X\) will find a new teacher who is neither in \(X\)'s current teacher set nor in \(X\)'s current student set. If the teacher \(Y\) is selected and the \(\overrightarrow{XY}\) is broken, then \(Y\) will teach a new student who is neither in \(Y\)'s current teacher set nor in \(Y\)'s current student set. Notably, the number of teachers in the entire population is constant, since the sum of out-degrees of all the nodes in the network keeps unchanged over time. ## 3 Emergent games for the fate of opinions The voter model on the evolving network is a Markov chain with state \(x+\), i.e., the fraction of opinion \(+\) in the population. Thus, the state space is \(\{0,1/N,2/N,\cdots,1\}\). State 0 and state 1 are absorbing states, which implies that all the individuals reach a consensus. We focus on \(w\ll 1\). In this case, individuals prefer to adjust their social relationships rather than change their opinions. This is widespread in real social systems. For example, users on Twitter change their opinions much less frequently than adjust their followers [43]. It leads to a time scale separation, that is, all the directed links are almost in the stationary regime when the opinion update occurs (see A for details). For the evolutionary dynamics of opinions, \(x_{+}\) either increases or decreases by \(1/N\) within a time step. For example, \(x_{+}\) increases by \(1/N\) if an individual who adopts opinion \(-\) is selected with probability \(x_{-}=1-x_{+}\), i.e., the fraction of opinion \(-\) in the population. Then the focal individual with opinion \(-\) learns from its teachers with opinion \(+\). And it adopts opinion \(+\) with a probability proportional to the number of its teachers with opinion \(+\), i.e., \(q\pi_{\overrightarrow{-}\overrightarrow{+}}/(q\pi_{\overrightarrow{-} \overrightarrow{+}}+q\pi_{\overrightarrow{-}\overrightarrow{-}})=\pi_{ \overrightarrow{-}\overrightarrow{+}}/(\pi_{\overrightarrow{-} \overrightarrow{+}}+\pi_{\overrightarrow{-}\overrightarrow{-}})\). Here \(q\) is the average size of the teacher set captured by the average out-degree of the focal individual. Thus the transition probability that \(x_{+}\) increases by \(1/N\) is \[T_{x_{+}}^{+}=x_{-}\frac{\pi_{\overrightarrow{-}\overrightarrow{+}}}{\pi_ {\overrightarrow{-}\overrightarrow{+}}+\pi_{\overrightarrow{-}\overrightarrow {-}}}. \tag{1}\] Similarly, the transition probability that \(x+\) decreases by \(1/N\) is \[T_{x_{+}}^{-}=x_{+}\frac{\pi_{\overrightarrow{-}\overrightarrow{+}}}{\pi_ {\overrightarrow{-}\overrightarrow{+}}+\pi_{\overrightarrow{-}\overrightarrow {-}}}. \tag{2}\] The probability that \(x+\) remains constant is \(T_{x_{+}}^{0}=1-T_{x_{+}}^{+}-T_{x_{+}}^{-}\), since the each row sum of the transition probability matrix is unit one. For large population size, i.e., \(N\rightarrow+\infty\), the mean-field equation is given by \(\dot{x}_{+}=T_{x_{+}}^{+}-T_{x_{+}}^{-}\), capturing the evolution of the opinions. Taking Eqs. 
(1), (2) yields that \(\dot{x}_{+}=x_{+}x_{-}\)\([\)\(k_{\overrightarrow{-}\overrightarrow{-}}\)\((\alpha_{\overrightarrow{-}\overrightarrow{-}}\beta_{\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} 
\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} 
\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{ probability of breaking directed links \(k_{\overrightarrow{XY}}\) and the probability of choosing the student to reconnect \(\alpha_{\overrightarrow{XY}}\). Multiplying \(C\left(x_{+}\right)=A\left(x_{+}\right)B\left(x_{+}\right)k_{\overrightarrow{+ \overrightarrow{+}}}^{-1}k_{\overrightarrow{-\overrightarrow{-\overrightarrow{ +}}}}^{-1}\) which is positive on the right side does not alter the asymptotic dynamics, i.e., the fixed point and its stability. We end up with the equation \[\dot{x}_{+} =x_{+}x_{-}\left[\left(u_{1}x_{+}^{3}+u_{2}x_{+}^{2}x_{-}+u_{3}x _{+}x_{-}^{2}+u_{4}x_{-}^{3}\right)\right. \tag{3}\] \[-\left(v_{1}x_{+}^{3}+v_{2}x_{+}^{2}x_{-}+v_{3}x_{+}x_{-}^{2}+v_{4 }x_{-}^{3}\right)\right],\] where \(u_{1}=\left(k_{\overrightarrow{+\overrightarrow{-}}}/k_{\overrightarrow{+ \overrightarrow{+}}}\right)\alpha_{\overrightarrow{+\overrightarrow{- \overrightarrow{+}}}}^{2}\beta_{\overrightarrow{-\overrightarrow{+}}},u_{2}= \alpha_{\overrightarrow{+\overrightarrow{-}}}\alpha_{\overrightarrow{+ \overrightarrow{-\overrightarrow{-\overrightarrow{+}}}}}\beta_{\overrightarrow {-\overrightarrow{+}}}+\left(k_{\overrightarrow{+\overrightarrow{-}}}/k_{ \overrightarrow{+\overrightarrow{+}}}\right)\left[\alpha_{\overrightarrow {+\overrightarrow{-\overrightarrow{-\overrightarrow{-\overrightarrow{- \overrightarrow{-\overrightarrow{-\overrightarrow{-\overrightarrow{-\overrightarrow{- \overrightarrow{-\overrightarrow{-\overrightarrow{-\cdot \(\overrightarrow{XY}\in S\). Substituting it into Table 1, we obtain \[\left(\begin{array}{c}u_{1}\\ u_{2}/3\\ u_{3}/3\\ u_{4}\end{array}\right)=\frac{\alpha^{2}(1-\alpha)^{2}}{3}\left(\begin{array}{ cc}3&0\\ 2&1\\ 1&2\\ 0&3\end{array}\right)\cdot\left(\begin{array}{c}k_{\overrightarrow{\tau \overrightarrow{\tau}}}\\ k_{\overrightarrow{\tau\overrightarrow{\tau}}}\\ 1\end{array}\right)\] and \[\left(\begin{array}{c}v_{1}\\ v_{2}/3\\ v_{3}/3\\ v_{4}\end{array}\right)=\frac{\alpha^{2}(1-\alpha)^{2}}{3}\left(\begin{array} []{cc}3&0\\ 2&1\\ 1&2\\ 0&3\end{array}\right)\cdot\left(\begin{array}{c}1\\ k_{\overrightarrow{\tau\overrightarrow{\tau}}}\\ k_{\overrightarrow{\tau\overrightarrow{\tau}}}\end{array}\right)\] It implies that, for example, the payoff of one individual \(+\) who meets three other individuals with opinion \(+\) in the four-player game is equal to sum of the payoff of one individual \(+\) who meets one individual \(+\) in a two-player game, i.e., \(u_{1}=\alpha^{2}(1-\alpha)^{2}k_{\overrightarrow{\tau\overrightarrow{\tau} }}/k_{\overrightarrow{\tau\overrightarrow{\tau}}}\). 
Therefore, the four-player two-strategy game degenerates to the two-player two-strategy game, whose payoff matrix is \[M_{\rm opinion}= + \left(\begin{array}{cc}k_{\overrightarrow{\tau\overrightarrow{ \tau}}}&1\\ \hline k_{\overrightarrow{\tau\overrightarrow{\tau}}}&1\\ 1&k_{\overrightarrow{\tau\overrightarrow{\tau}}}\end{array}\right). \tag{4}\] The emergent payoff matrix is independent on \(\alpha\). Intuitively, the payoff of an individual \(+\) against an individual \(+\) is proportional to \(k_{\overrightarrow{\tau\overrightarrow{\tau}}}/k_{\overrightarrow{\tau \overrightarrow{\tau}}}\). If \(k_{\overrightarrow{\tau\overrightarrow{\tau}}}\) is increased solely, then the number of students with opinion \(+\) who learn opinion \(-\) decreases. A part of these students reconnect to new teachers with opinion \(+\) and adopt opinion \(+\). Hence the proportion of opinion \(+\) increases. In-group bias is a common phenomenon in the real world, which implies that individuals prefer to interact with those who take the same opinion [44, 45, 46]. It can lead to consensus in the population. That is to say, individuals tend to have the same opinion with in-group bias. In our model, in-group bias corresponds to \(k_{\overrightarrow{\tau\overrightarrow{\tau}}}>k_{\overrightarrow{\tau \overrightarrow{\tau}}}\) and \(k_{\overrightarrow{\tau\overrightarrow{\tau}}}>k_{\overrightarrow{\tau \overrightarrow{\tau}}}\). Students who adopt different opinions from their teachers are more likely to break the directed links than those who adopt the same opinions. The emergent payoff matrix in this case is a coordination game. There is only one internal equilibrium of the replicator equation and it is unstable. Thus all the individuals adopt opinion \(+\) if the initial fraction of opinion \(+\) exceeds \[x^{*}_{\rm opinion\,+}=\frac{k_{\overrightarrow{\tau\overrightarrow{\tau}}} /k_{\overrightarrow{\tau\overrightarrow{\tau}}}-1}{k_{\overrightarrow{ \tau\overrightarrow{\tau}}}/k_{\overrightarrow{\tau\overrightarrow{\tau}}}+k_ {\overrightarrow{\tau\overrightarrow{\tau}}}/k_{\overrightarrow{\tau \overrightarrow{\tau}}}-2}. \tag{5}\] Otherwise all, the individuals reach a consensus on opinion \(-\). It prevents the homogenization of opinions. The out-group bias implies that individuals prefer to interact with those who adopt different opinions [44, 45, 46]. In a large campaign, it is important that the chiefs focus on how to convert voters from the other camp to their own. Out-group bias in our model refers to \(k_{\overrightarrow{+\overrightarrow{-}}}<k_{\overrightarrow{+\overrightarrow{-}}}\) and \(k_{\overrightarrow{-\overrightarrow{+}}}<k_{\overrightarrow{-\overrightarrow{-}}}\). The payoff matrix refers to a coexistence game. Standard analysis shows that there is only one internal stable equilibrium \(x_{\mathrm{opinion}\,+}^{*}\) of the replicator equation. In other words, opinion \(+\) and opinion \(-\) coexist if they coexist in the beginning. The network has many directed links with inconsistent opinions, i.e., \(\overrightarrow{+\overrightarrow{-}}\) and \(\overrightarrow{-\overrightarrow{+}}\). Based on stable regimes, if \(k_{\overrightarrow{+\overrightarrow{-}}}\) is decreasing or \(k_{\overrightarrow{-\overrightarrow{-}}}\) is increasing, then the final fraction of opinion \(+\) increases [Figs. 3(a) and 3(b)]. Other cases are listed in Supplemental Material. 
Therefore, if the chiefs with opinion \(+\) would like to increase the size of their camp, then it can be achieved by decreasing \(k_{\overrightarrow{+\overrightarrow{-}}}\) or increasing \(k_{\overrightarrow{-\overrightarrow{-}}}\). That is to say, increasing the number of students on the opinion \(+\) or decreasing the number of students on the opposite side. #### 3.1.2 The same probability of breaking directed links We assume that the probabilities of breaking directed links are equal, i.e., there exists a \(k\in\left(0,1\right)\) such that \(k_{\overrightarrow{XY}}=k\), where \(\overrightarrow{XY}\in S\). It implies that the type of the directed links is not taken into account when the links are broken. Substituting \(k_{\overrightarrow{XY}}=k\) into Eq. (3), we find \(\dot{x}_{+}=D\left(x_{+}\right)x_{+}x_{-}\left[\left(\alpha_{\overrightarrow {-\overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}x_{+}+ \ \alpha_{\overrightarrow{-\overrightarrow{-}}}\beta_{\overrightarrow{- \overrightarrow{-}}}x_{-}\right)-\left(\alpha_{\overrightarrow{- \overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}x_{+}+ \alpha_{\overrightarrow{-\overrightarrow{-}}}\beta_{\overrightarrow{- \overrightarrow{-}}}x_{-}\right)\right]\), where \(D\left(x_{+}\right)=\beta_{\overrightarrow{+\overrightarrow{-}}}\alpha_{ \overrightarrow{-\overrightarrow{-}}}x_{+}^{2}+\left(\alpha_{\overrightarrow {-\overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}+ \alpha_{\overrightarrow{-\overrightarrow{-}}}\beta_{\overrightarrow{- \overrightarrow{-}}}\right)x_{+}x_{-}+\alpha_{\overrightarrow{- \overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}x_{-}^{2}\) is positive. Similarly, we end up with a replicator equation, i.e., \(\dot{x}_{+}=x_{+}x_{-}\left[\left(\alpha_{\overrightarrow{-\overrightarrow{- }}}\beta_{\overrightarrow{-\overrightarrow{-}}}x_{+}+\alpha_{\overrightarrow {-\overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}x_{-} \right)-\left(\alpha_{\overrightarrow{-\overrightarrow{-}}}\beta_{ \overrightarrow{-\overrightarrow{-}}}x_{+}+\alpha_{\overrightarrow{- \overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}x_{-}\right)\right]\), whose payoff matrix is the two-player two-strategy game \[R_{\mathrm{opinion}}=\begin{array}{c}+\\ -\end{array}\left(\begin{array}{c}\alpha_{\overrightarrow{-\overrightarrow{- }}}\beta_{\overrightarrow{-\overrightarrow{-}}}&\alpha_{\overrightarrow{- \overrightarrow{-}}}\beta_{\overrightarrow{-\overrightarrow{-}}}\\ \alpha_{\overrightarrow{-\overrightarrow{-}}}\beta_{\overrightarrow{- \overrightarrow{-}}}&\alpha_{\overrightarrow{-\overrightarrow{-}}}\beta_{ \overrightarrow{-\overrightarrow{-}}}\end{array}\right). \tag{6}\] Noteworthily, the emergent payoff matrix is independent on \(k\) and the payoff entry \(R_{XY}\) is proportional to \(l_{\overrightarrow{XY}}\), i.e., the number of directed links \(\overrightarrow{YX}\). For example, the payoff of an individual \(+\) meeting an individual \(-\) is proportional to \(l_{\overrightarrow{-\overrightarrow{-}}}\), which refers to the number of students \(-\) who have teachers with opinion \(+\). Here is an intuitive explanation: if \(\alpha_{\overrightarrow{-\overrightarrow{-}}}\) increases solely, then a part of students with opinion \(-\) reconnect to the new teachers with opinion \(+\). Hence \(l_{\overrightarrow{-\overrightarrow{-}}}\) increases. Similarly, we discuss the following two cases. 
We address a coordination game with \(\alpha_{\overrightarrow{-\overrightarrow{-}}}>\alpha_{\overrightarrow{- \overrightarrow{-}}}\) and \(\alpha_{\overrightarrow{-\overrightarrow{-}}}>\alpha_{\overrightarrow{- \overrightarrow{-}}}\). In this scenario, there is an unstable internal equilibrium given by \[y_{\mathrm{opinion}\,+}^{*}=\frac{\beta_{\overrightarrow{-\overrightarrow{-}}} \left(\alpha_{\overrightarrow{-\overrightarrow{-}}}-\alpha_{\overrightarrow{- \overrightarrow{-}}}\right)}{\beta_{\overrightarrow{-\overrightarrow{-}}} \left(\alpha_{\overrightarrow{-\overrightarrow{-}}}-\alpha_{\overrightarrow{- \overrightarrow{-}}}\right)+\beta_{\overrightarrow{-\overrightarrow{-}}}\left( \alpha_{\overrightarrow{-\overrightarrow{-}}}-\alpha_{\overrightarrow{- \overrightarrow{-}}}\right)} \tag{7}\] The individuals reach a consensus with opinion \(+\) if the initial fraction of opinion \(+\) exceeds \(y_{\mathrm{opinion}\,+}^{*}\). Otherwise, it reaches a consensus with opinion \(-\). We study a coexistence game defined by \(\alpha_{\overrightarrow{-\overrightarrow{-}}}<\alpha_{\overrightarrow{- \overrightarrow{-}}}\) and \(\alpha_{\overrightarrow{-\overrightarrow{-}}}<\alpha_{\overrightarrow{- \overrightarrow{-}}}\). In this case, opinion \(+\) and opinion \(-\) coexist for a long time if they coexist in the beginning. If \(\alpha_{\overrightarrow{-\overrightarrow{-}}}\) is decreasing or \(\alpha_{\overrightarrow{-\overrightarrow{-}}}\) is increasing, then the fraction fraction of opinion \(+\) increases [Figs. 3(c) and 3(d)]. And other cases see Supplemental Material for details. ### Emergent multi-player games: complexity analysis In subsection A, the four-player two-strategy game degenerates to the two-player two-strategy game provided that there are \(\alpha\in(0,1)\) and \(k\in(0,1)\) such that \(\alpha_{\overrightarrow{XY}}=\alpha\) or \(k_{\overrightarrow{XY}}=k\) for \(\forall\overrightarrow{XY}\in S\). But what is the complexity of our model? If \(u_{1}>v_{1}\), \(u_{2}<v_{2}\), \(u_{3}>v_{3}\) and \(u_{4}<v_{4}\) (or \(u_{1}<v_{1}\), \(u_{2}>v_{2}\), \(u_{3}<v_{3}\) and \(u_{4}>v_{4}\)) are satisfied in Table 1, \(f_{+}\left(x_{+}\right)-f_{-}\left(x_{+}\right)\) changes the sign three times with respect to \(x_{+}\) when non-zero coefficients are arranged from highest to lowest according to the power of \(x_{+}\). Based on Descartes' rule of signs [47], there are one or three roots, i.e., one internal equilibrium or three internal equilibria. We choose one parameter at random from \(\alpha_{\overrightarrow{XY}}\) and \(k_{\overrightarrow{XY}}\) respectively and make them equal. And we keep the other six parameters equal. We prove that it does not satisfy the condition of changing the sign three times (See Supplemental Material for details). Thus, to reveal the complexity, more parameters are needed to be unequal. We find a set of parameters, i.e., \(k_{\overrightarrow{x+}}=\rho,k_{\overrightarrow{x+}}=\rho,k_{\overrightarrow{ x-}}=\rho/4,k_{\overrightarrow{x-}}=\rho,\alpha_{\overrightarrow{x+}}=\rho/2, \alpha_{\overrightarrow{x-}}=\rho,\alpha_{\overrightarrow{x-}}=2\rho\) and \(\alpha_{\overrightarrow{-}}=\rho/4\), where \(0<\rho<0.5\). These eight parameters are only up to \(\rho\). There are three internal equilibria under the condition \(\left(21-\sqrt{249}\right)/32\approx 0.1631<\rho<0.5\), where \(\mathbf{u_{1}>v_{1}}\), \(\mathbf{u_{2}<v_{2}}\), \(\mathbf{u_{3}>v_{3}}\) and \(\mathbf{u_{4}<v_{4}}\). 
For example, substituting \(\rho=0.4\) into Table 1, we obtain Figure 3: **The fate of opinions.** We predict the fate of opinion \(+\) in the voter model on the directed evolving network. The proportion of one opinion increases when the number of students on the other opinion decreases. That is, the final fraction of opinion \(+\) increases as the number of students on opinion \(-\) decreases. If \(x_{\text{opinion}+}^{*}\) is stable, then \(x_{\text{opinion}+}^{*}\) increases as \(k_{\overrightarrow{x+}}\) decreases or \(k_{\overrightarrow{x}}\) increases. If \(y_{\text{opinion}+}^{*}\) is stable, then \(y_{\text{opinion}+}^{*}\) increases as \(\alpha_{\overrightarrow{x+}}\) decreases or \(\alpha_{\overrightarrow{-}}\) increases. Parameters: We focus on \(x_{\text{opinion}+}^{*}\) or \(y_{\text{opinion}+}^{*}\) is stable internal equilibrium for out-group bias. Hence, we set \(k_{\overrightarrow{x-}}<k_{\overrightarrow{x+}}\) and \(k_{\overrightarrow{x-}}<k_{\overrightarrow{-}}\) when \(\alpha_{\overrightarrow{XY}}=\alpha=0.5\) or \(\alpha_{\overrightarrow{x-}}<\alpha_{\overrightarrow{x+}}\) and \(\alpha_{\overrightarrow{-}}<\alpha_{\overrightarrow{-}}\) when \(k_{\overrightarrow{XY}}=k=0.5\). (a) \(\alpha_{\overrightarrow{XY}}=\alpha=0.5\), \(k_{\overrightarrow{-}}=0.1,k_{\overrightarrow{-}}=0.3\) and \(k_{\overrightarrow{-}}=0.6\). (b) \(\alpha_{\overrightarrow{XY}}=\alpha=0.5\), \(k_{\overrightarrow{x+}}=0.3\) and \(k_{\overrightarrow{-}}=0.1\). (c) \(k_{\overrightarrow{XY}}=k=0.5\), \(\alpha_{\overrightarrow{-}}=0.1,\alpha_{\overrightarrow{-}}=0.3\) and \(\alpha_{\overrightarrow{-}}=0.6\). (d) \(k_{\overrightarrow{XY}}=k=0.5\), \(\alpha_{\overrightarrow{x+}}=0.6,\alpha_{\overrightarrow{-}}=0.3\) and \(\alpha_{\overrightarrow{-}}=0.1\). We run \(10^{6}\) rounds of the simulation and set the millionth result as the final fraction of opinion \(+\). The initial state is \(x_{+}=0.5\). For each data point, it is averaged over 100 independent runs. We set \(N=100,L=4\) and \(w=0.01\). This four-player two-strategy game has three internal equilibria, i.e., \(x^{*}_{\rm opinion+}=0.29\), \(0.5\) and \(0.89\), as shown in Fig. 4. And \(x^{*}_{\rm opinion+}=0.5\) is only one internal stable equilibrium. In this case, the final opinions in the population are either diverse or reached a consensus among individuals. Therefore, the complexity of the voter model on the directed evolving network is captured by the four-player two-strategy game. *Complexity dynamics analysis of opinions.** We have shown that the opinion evolves as a replicator equation of a four-player two-strategy game Table 1. For such a game, there can be at most three internal equilibria. This game is related to the probabilities of choosing nodes and breaking directed links. We can obtain the complexity of opinion dynamics via an evolutionary game approach. There are three internal equilibria, i.e., \(x^{*}_{\rm opinion+}=0.29\), \(0.5\) and \(0.89\) under the condition of \(\rho=0.4\). \(x^{*}_{\rm opinion+}=0.5\) is the only internal stable equilibrium. If the initial fraction of opinion \(+\) is about \(0.29\), then the individuals reach a consensus on opinion \(-\) finally. If the initial fraction of opinion \(+\) is between \(0.29\) and \(0.89\), then the two types of opinions coexist and each one is equally divided. Otherwise, the individuals reach a consensus on opinion \(+\). Individuals can maintain diverse opinions or reach a consensus for this game. 
### **Robustness** We exchange the direction of learning in the network. For example, if node \(B\) points to node \(A\), it implies that \(A\) unilaterally learns from \(B\) and \(B\) does not learn from \(A\), that is, the target node learns the source node. Therefore, the transition probability that \(T^{+}_{x_{+}}=x_{-}\pi_{\overrightarrow{x}}/(\pi_{\overrightarrow{x}}+ \pi_{\overrightarrow{x}})\). The transition probability \begin{table} \begin{tabular}{c c c c c} \hline \hline Individual(s) & 3+ & 2+ & 1+ & 0+ \\ \hline \(+\) & 0.0256 & 0.0379 & 0.0836 & 0.0432 \\ \(-\) & 0.0128 & 0.0785 & 0.0328 & 0.0864 \\ \hline \hline \end{tabular} \end{table} Table 2: The value of payoff matrix. Figure 4: **Complexity dynamics analysis of opinions.** We have shown that the opinion evolves as a replicator equation of a four-player two-strategy game Table 1. For such a game, there can be at most three internal equilibria. This game is related to the probabilities of choosing nodes and breaking directed links. We can obtain the complexity of opinion dynamics via an evolutionary game approach. There are three internal equilibria, i.e., \(x^{*}_{\rm opinion+}=0.29\), \(0.5\) and \(0.89\) under the condition of \(\rho=0.4\). \(x^{*}_{\rm opinion+}=0.5\) is the only internal stable equilibrium. If the initial fraction of opinion \(+\) is about \(0.29\), then the individuals reach a consensus on opinion \(-\) finally. If the initial fraction of opinion \(+\) is between \(0.29\) and \(0.89\), then the two types of opinions coexist and each one is equally divided. Otherwise, the individuals reach a consensus on opinion \(+\). Individuals can maintain diverse opinions or reach a consensus for this game. that \(x_{+}\) decreases by \(1/N\) is \(T^{-}_{x_{+}}=x_{+}\pi_{\overrightarrow{-}\overrightarrow{+}}/(\pi_{ \overrightarrow{-}\overrightarrow{+}}+\pi_{\overrightarrow{-}\overrightarrow{-}})\). In this case, we obtain some dual results. 
Similarly, the voting behavior on the evolving directed network is captured by a four-player two-strategy game whose payoff matrix is given by Table 3, where \(u^{\prime}_{1}=(k_{\overrightarrow{-}\overrightarrow{+}}/k_{\overrightarrow{- }\overrightarrow{+}})\,\alpha_{\overrightarrow{-}\overrightarrow{+}}\beta^{2}_ {\overrightarrow{-}\overrightarrow{+}},u^{\prime}_{2}=\alpha_{\overrightarrow{- }\overrightarrow{+}}\alpha_{\overrightarrow{-}\overrightarrow{-}}\beta_{ \overrightarrow{-}\overrightarrow{+}}+(k_{\overrightarrow{-}\overrightarrow{+}} /k_{\overrightarrow{+}})\,[\alpha_{\overrightarrow{+}\overrightarrow{-}} \alpha_{\overrightarrow{-}\overrightarrow{-}}\beta_{\overrightarrow{-} \overrightarrow{-}}+\alpha_{\overrightarrow{+}\overrightarrow{+}}\alpha_{ \overrightarrow{-}\overrightarrow{-}}\beta_{\overrightarrow{-}\overrightarrow{+}} \beta_{\overrightarrow{-}\overrightarrow{+}}+\alpha_{\overrightarrow{+} \overrightarrow{+}}\alpha_{\overrightarrow{-}\overrightarrow{-}}\beta_{ \overrightarrow{-}\overrightarrow{-}}+\alpha_{\overrightarrow{-}\overrightarrow{-}} \alpha_{\overrightarrow{-}\overrightarrow{-}})]\,,\,u^{\prime}_{3}=\alpha_{ \overrightarrow{-}\overrightarrow{-}}\alpha_{\overrightarrow{-}}\beta_{ \overrightarrow{-}\overrightarrow{+}}+\alpha_{\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}}\beta_{\overrightarrow{-}\overrightarrow{+}}+(k_{ \overrightarrow{-}\overrightarrow{+}}/k_{\overrightarrow{+}})\,[\alpha_{ \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}+ \alpha_{\overrightarrow{-}\overrightarrow{-}}\alpha_{\overrightarrow{-} \overrightarrow{-}})]\,,\,u^{\prime}_{4}=\alpha_{\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}}\alpha_{\overrightarrow{-}\overrightarrow{-}}\beta_{ \overrightarrow{-}\overrightarrow{-}}\) and \(\,v^{\prime}_{1}=\alpha_{\overrightarrow{-}\overrightarrow{+}}\alpha_{ \overrightarrow{-}\overrightarrow{-}\overrightarrow{+}}\beta_{\overrightarrow {-}\overrightarrow{-}},v^{\prime}_{2}=\alpha_{\overrightarrow{-}\overrightarrow {+}}\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}+\alpha_{ \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}\beta_{\overrightarrow {-}\overrightarrow{+}}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}\beta_{ \overrightarrow{-}\overrightarrow{-}}\)\(+\beta_{\overrightarrow{-}\overrightarrow{-}}+(k_{ \overrightarrow{-}\overrightarrow{-}}/k_{\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}})\,[\alpha_{\overrightarrow{+}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}\beta_{\overrightarrow {-}\overrightarrow{-}}+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}\alpha_{ \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}}+(k_{ \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} 
\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} 
\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} 
\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-} \overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\)\(+\alpha_{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{-}\overrightarrow{- Sally's in-degree increases or decreases by at most one. If an individual who is not Sally's current student reconnects to her, Sally's in-degree \(d_{\text{in}+}\) increases by one: firstly, the probability of selecting the directed link \(\overrightarrow{XY}\) which is not point to Sally is \((NL-d_{\text{in}+})/NL\), where \(\overrightarrow{XY}\in S\). Secondly, the stationary distribution of the directed links is \(\pi_{S}=(\pi_{\overrightarrow{+}},\pi_{\overrightarrow{+}},\pi_{\overrightarrow {-}\overrightarrow{+}},\pi_{\overrightarrow{-}\overrightarrow{-}})\), which has been given by Eq. (A.4). Then student \(X\) is chosen with probability \(\alpha_{\overrightarrow{XY}}\) and breaks the directed link \(\overrightarrow{XY}\) with probability \(k_{\overrightarrow{XY}}\). Finally, student \(X\) connects to Sally with probability \(1/(N-1)\). Thus the transition Figure 5: **Markov transitions of Sally’s student size. Left Panel:** The number of Sally’s students increases by one. (1) Only part of the directed network is shown here. (2) Select a link \(\overrightarrow{XY}\) which is not point to Sally with probability \((NL-d_{\text{in}+})/NL\). (3) The probability that the type of the selected link is \(\overrightarrow{XY}\) depends on \(\pi_{S}\). And the probability of selecting the student \(X\) is \(\alpha_{\overrightarrow{XY}}\). Then the student chooses to change the teacher and breaks the link with probability \(k_{\overrightarrow{XY}}\). (4) Finally, \(X\) is connected to Sally with probability \(1/(N-1)\). Hence the number of Sally’s students increases by one. Without loss of generality, \(C\), \(D\) and \(E\) are not Sally’s students. We assume that the directed link \(\overrightarrow{CD}\) is selected. The type of \(\overrightarrow{CD}\) is \(\overrightarrow{+-}\). Then \(C\) is selected and breaks the link with probability \(\alpha_{\overrightarrow{+-}}k_{\overrightarrow{+-}}\). Eventually, student \(C\) chooses the new teacher Sally. **Right Panel:** The number of Sally’s students decreases by one. (1) Only part of the directed network is shown here. (2) Select a link \(\overrightarrow{XY}\) which is point to Sally with probability \(d_{\text{in}+}/NL\). (3) The probability that the type of the selected link is \(\overrightarrow{XY}\) depends on \(\pi_{S}\). And the probability of selecting the student \(X\) is \(\alpha_{\overrightarrow{XY}}\). Then the student chooses to change the teacher and breaks the link with probability \(k_{\overrightarrow{XY}}\). (4) Finally, \(X\) is connected to other nodes with probability \(1\). Hence the number of Sally’s students decreases by one. Without loss of generality, \(A\) and \(B\) are Sally’s students. We assume that the directed link \(\overrightarrow{BS}\) is selected. The type of \(\overrightarrow{BS}\) is \(\overrightarrow{+-}\). Then \(B\) is selected and breaks the link with probability \(\alpha_{\overrightarrow{+-}}k_{\overrightarrow{+-}}\). Eventually, student \(B\) chooses other nodes to find a new teacher \(A\). 
probability that \(d_{\text{in}\,+}\) increases by one is \[P_{d_{\text{in}\,+}}^{+}=\underbrace{\frac{NL-d_{\text{in}\,+}}{NL}}_{\text{ \begin{subarray}{c}\text{select a link which \\ is not point to Sally \\ \end{subarray}}}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ where \(a_{1}=\alpha_{\overrightarrow{\mp\mp}}\beta_{\overrightarrow{\mp\mp}}/k_{ \overrightarrow{\mp\mp}},b_{1}=\alpha_{\overrightarrow{\mp\mp}}\beta_{ \overrightarrow{\mp}}/k_{\overrightarrow{\mp\mp}},a_{2}=\alpha_{\overrightarrow {\mp\mp}}\beta_{\overrightarrow{\mp\mp}}/k_{\overrightarrow{\mp\mp}}+[\alpha_ {\overrightarrow{\mp\mp}}\left(\alpha_{\overrightarrow{\mp\mp}}-\alpha_{ \overrightarrow{\mp\mp}}\right)+\alpha_{\overrightarrow{\mp\mp}}\beta_{ \overrightarrow{\mp\mp}}]/k_{\overrightarrow{\mp\mp}},b_{2}=\alpha_{ \overrightarrow{\mp\mp}}\beta_{\overrightarrow{\mp\mp}}/k_{\overrightarrow{ \mp\mp}}+[\alpha_{\overrightarrow{\mp\mp}}\left(\alpha_{\overrightarrow{\mp \mp}}-\alpha_{\overrightarrow{\mp\mp}}\right)+\alpha_{\overrightarrow{\mp\mp}} \beta_{\overrightarrow{\mp\mp}}]/k_{\overrightarrow{\mp\mp}},\ a_{3}=\alpha_{ \overrightarrow{\mp\mp}}\beta_{\overrightarrow{\mp\mp}}/k_{\overrightarrow{ \mp\mp}},b_{3}=\alpha_{\overrightarrow{\mp\mp}}\beta_{\overrightarrow{\mp\mp}} /k_{\overrightarrow{\mp\mp}}\). The Nash equilibrium of the emergent game is the transient topology, at which the two opinions have the same student size [Fig. 6]. If the payoff of opinion \(+\) is larger than the payoff of opinion \(-\) for Table 4, then the average in-degree of opinion \(+\) is greater than that of opinion \(-\). When the four probabilities of breaking links are the same, i.e., \(k_{\overrightarrow{XY}}=k\), where \(\overrightarrow{XY}\in S\) and \(0<k<1\) and concentrate on the in-group bias. Noteworthily, Eq. (10) and Eq. (11) are approximations because Sally's out-degree i.e., her teachers are neglected. Bidirectional links are not excluded in the approximation. In spite of this error, the in-degree distribution via the simulation agrees perfectly with the theoretical approximations for both one opinion in majority and the other opinion in minority [Figs. 7(a) and 7(b)]. Intuitively, here \(N\gg L\), i.e., the total number of individuals is much larger than the number of students for an individual which is close to the reality. Thus, each node almost obeys the same in-degree distribution and each update is approximately independent. Hence, these approximations are acceptable. 
For the completeness of our study, we show the corresponding results for the out-degree (See Supplemental Material for details). ### Emergent two-player games: the student size We focus on two classes of breaking patterns, i.e., the probability of choosing nodes \(\alpha_{\overrightarrow{XY}}\) and the probability of breaking directed links \(k_{\overrightarrow{XY}}\). If there exists \(0<\alpha<1\), s.t., Figure 7: **Student size distribution.** The accuracy of our model with respect to the simulation results is good shown in (a) and (b), even though our analysis ignores the out-degree. For opinion \(+\), the theoretical value of the average in-degree is \(4.1428\) and the simulation value is \(4.1422\). The order of magnitude of the error is \(10^{-4}\). For opinion \(-\), the theoretical value of the average in-degree is \(1.2672\) and the simulation value is \(1.2985\). The order of magnitude of the error is \(10^{-2}\). (c) Transient topology of the directed network. The red nodes take opinion \(+\) and the green nodes take opinion \(-\). The label and the size of the nodes are determined by their in-degree. When \(x_{+}\) is large, the nodes with opinion \(-\) have a low in-degree, which implies that the number of students whose teachers hold the opinion \(-\) is small. As a result, it is difficult to spread the opinion \(-\), and opinion \(-\) ultimately invades unsuccessfully. When \(x_{-}\) is small, the connections between opinion \(-\) are not strong. As shown in the figure, there is only one directed green edge between opinion \(-\). The visualization of the directed network is obtained via GEPHI [50]. Parameters: \(k_{\overrightarrow{XY}}=k=0.5\), \(\alpha_{\overrightarrow{+\overrightarrow{+}}}=0.1\), \(\alpha_{\overrightarrow{+\overrightarrow{-}}}=0.6\), \(\alpha_{\overrightarrow{-\overrightarrow{+}}}=0.6\) and \(\alpha_{\overrightarrow{-\overrightarrow{-}}}=0.1\). The initial state is \(x_{+}=0.5\). When \(x_{+}=0.95\), we record the number of students who adopt either opinion \(+\) or opinion \(-\) for each individual. We run \(10^{3}\) rounds of the simulation and set \(N=100,L=4\), and \(w=0.01\). \(\alpha_{\overrightarrow{X\overrightarrow{Y}}}=\alpha\), the three-player game degenerates to the two-player game \[M_{\text{in-degree}}=\left(\begin{array}{cc}\frac{1}{k_{\overrightarrow{ \rightarrow}}}&\frac{1}{k_{\overrightarrow{\rightarrow}}}\\ \frac{1}{k_{\overrightarrow{\rightarrow}}}&\frac{1}{k_{\overrightarrow{ \rightarrow}}}\end{array}\right). \tag{14}\] We obtain a internal equilibrium \(x_{\text{in-degree}\,+}^{*}\) for in-group bias \[x_{\text{in-degree}\,+}^{*}=\frac{1/k_{\overrightarrow{\rightarrow}}-1/k_{ \overrightarrow{\rightarrow}}}{1/k_{\overrightarrow{\rightarrow}}-1/k_{ \overrightarrow{\rightarrow}}-1/k_{\overrightarrow{\rightarrow}}+1/k_{ \overrightarrow{\rightarrow}}}. \tag{15}\] The equilibrium is a Nash equilibrium of the emergent game Eq. (14). It refers to a _topology_ in which opinion \(+\) has as many students as opinion \(-\) does [Fig. 6]. For in-group bias, if \(x_{+}>x_{\text{in-degree}\,+}^{*}\), the average degree of opinion \(+\) is larger than opinion \(-\)'s. It implies that more students learn opinion \(+\). Otherwise, the average degree of opinion \(-\) is larger. Since the emergent games \(M_{\text{opinion}}\), i.e., Eq. (4) and \(M_{\text{in-degree}}\), i.e., Eq. (14) are not equal, we cannot capture both the opinion formation and the transient topology with just one emergent game. Thus, here are some counterintuitive cases. 
For in-group bias, \(k_{\overrightarrow{\rightarrow}}>k_{\overrightarrow{\rightarrow}}\) and \(k_{\overrightarrow{\rightarrow}}>k_{\overrightarrow{\rightarrow}}\), if the initial proportion of opinion \(+\) is larger than \(x_{\text{opinion}\,+}^{*}\), then opinion \(+\) is likely to take over. For \(k_{\overrightarrow{\rightarrow}}>k_{\overrightarrow{\rightarrow}}\), we have \(x_{\text{opinion}\,+}^{*}<x_{\text{in-degree}\,+}^{*}\). If the initial fraction of opinion \(+\) is between \(x_{\text{opinion}\,+}^{*}\) and \(x_{\text{in-degree}\,+}^{*}\), then opinion \(+\) invades successfully in the end, even if more students learn the opinion \(-\) than opinion \(+\) in the beginning [Fig. 8(a)]. Similarly, if \(k_{\overrightarrow{\rightarrow}}<k_{\overrightarrow{\rightarrow}}\), we have \(x_{\text{in-degree}\,+}^{*}<x_{\text{opinion}\,+}^{*}\). And if the initial fraction of opinion \(+\) is between \(x_{\text{in-degree}\,+}^{*}\) and \(x_{\text{opinion}\,+}^{*}\), then opinion \(+\) invades unsuccessfully eventually, even if more students learn the opinion \(+\) than the opinion \(-\) in the beginning [Fig. 8(b)]. It implies that the opinion with few students is likely to invade successfully. Hence, the student size is not the indicator of the successful invasion, which is counterintuitive. If there is \(0<k<1\), s.t., \(k_{\overrightarrow{X\overrightarrow{Y}}}=k\), then the degenerated payoff matrix is \[R_{\text{in-degree}}=\left(\begin{array}{cc}\alpha_{\overrightarrow{ \rightarrow}}\beta_{\overrightarrow{\rightarrow}}&\alpha_{\overrightarrow{ \rightarrow}}\beta_{\overrightarrow{\rightarrow}}\\ \alpha_{\overrightarrow{\rightarrow}}\beta_{\overrightarrow{\rightarrow}}& \alpha_{\overrightarrow{\rightarrow}}\beta_{\overrightarrow{\rightarrow}} \end{array}\right). \tag{16}\] \(R_{\text{in-degree}}\) is the same as \(R_{\text{opinion}}\), i.e., Eq. (6). It implies that the internal equilibrium \(y_{\text{in-degree}\,+}^{*}\) is equal to \(y_{\text{opinion}\,+}^{*}\). For the in-group bias, if the initial fraction of opinion \(+\) is larger than \(y_{\text{opinion}\,+}^{*}\), then more students learn the opinion \(+\) than the opinion \(-\) and the opinion \(+\) invades successfully. It implies that the student size is the indicator of the successful invasion in this case. We draw the directed network topology [Fig. 7(c)]. If the proportion of one opinion is quite small, then the in-degree of the opinion is small. ### An emergent three-player game for the student size: complexity analysis Some of the three-player games may be expanded by the two-player games. We take the number of internal equilibria of the replicated equation as the true complexity of our model. Based on Descartes' rule of signs [47], if \(a_{1}>b_{1}\), \(a_{2}<b_{2}\) and \(a_{3}>b_{3}\) (or \(a_{1}<b_{1}\), \(a_{2}>b_{2}\) and \(a_{3}<b_{3}\) ) are satisfied in Table 4, the three-player two-strategy game has at most two internal equilibria. At the equilibria, the in-degree of opinion \(+\) and that of opinion \(-\) are equal. 
To verify whether the same parameters simultaneously lead to three internal equilibria in a four-player two-strategy game and two internal equilibria in a three-player two-strategy game, we take the set of parameters, i.e., \(k_{\overrightarrow{x+}}=\rho,k_{\overrightarrow{x-}}=\rho,k_{\overrightarrow{ x-}}=\rho/4,k_{\overrightarrow{x-}}=\rho,\alpha_{\overrightarrow{x+}}=\rho/2, \alpha_{\overrightarrow{x-}}=\rho,\alpha_{\overrightarrow{-}}=2\rho\) and \(\alpha_{\overrightarrow{-}}=\rho/4\), where \(0<\rho<0.5\) into Table 4. However, there is only one internal equilibrium. This emergent three-player game differs in complexity from the four-player game to predict the fate of opinions. We show that the four-player two-strategy game with three internal equilibria and the three-player two-strategy game with two internal equilibria cannot occur at the same time (Supplemental Material). It indicates that the complexity of the two emergent games is different and we can not use the same emergent game to describe both the fate of opinions and the transient topology except some special cases, i.e., \(k_{\overrightarrow{X}\overrightarrow{Y}}=k\), where \(0<k<1\). We find a new set of parameters in which the three-player two-strategy game has two internal equilibria, as shown in Supplemental Material. Figure 8: **Opinions with fewer students can invade successfully.** For in-group bias, the number of disciples of an opinion is not the key factor in the success of invasion. The opinion with a large number of students does not necessarily end up with a successful invasion, and that with a small number of students does not necessarily invade unsuccessfully. (a) Even if more students learn the opinion \(-\) than the opinion \(+\), the opinion \(+\) eventually wins. Parameters: \(\alpha_{\overrightarrow{X}\overrightarrow{Y}}=\alpha=0.5\), \(k_{\overrightarrow{x+}}=0.3\), \(k_{\overrightarrow{x-}}=0.9\), \(k_{\overrightarrow{x-}}=0.6\) and \(k_{\overrightarrow{x-}}=0.2\). \(x^{*}_{\text{opinion}\,+}=0.5\) and \(x^{*}_{\text{in-degree}\,+}=0.6\). The initial fraction of opinion \(+\) is \(0.52\). (b) Even if more students learn the opinion \(+\) than the opinion \(-\), the opinion \(-\) eventually wins. Parameters: \(\alpha_{\overrightarrow{X}\overrightarrow{Y}}=\alpha=0.5\), \(k_{\overrightarrow{x+}}=0.3\), \(k_{\overrightarrow{x-}}=0.6\), \(k_{\overrightarrow{x-}}=0.9\) and \(k_{\overrightarrow{x-}}=0.2\). \(x^{*}_{\text{in-degree}\,+}=0.7\) and \(x^{*}_{\text{opinion}\,+}\approx 0.78\). The initial fraction of opinion \(+\) is \(0.75\). ## 5 Conclusion and Discussion Evolutionary game theory is a powerful mathematical framework to explore how individuals adjust their strategies, provided that the game interactions are given in prior [51, 52, 53]. Both opinion dynamics and evolutionary game dynamics have been benefited from the statistical physics method, yet they are treated as two distinct fields. We show that opinion dynamics is equivalent to the evolutionary games, both opinion wise and network wise. We focus on a voter model on an evolving directed network without any game interactions. We have shown that the fate of opinions is captured by a replicator equation of an emergent four-player two-strategy game. The complexity of the fate of opinions is thus the same as the classic evolutionary four-player two-strategy game. It has at most three internal equilibria. 
This equivalence result explicitly captures how opinions reach a consensus and how opinions coexist for a long time, which are the two main questions in opinion dynamics. On the other hand, we show that the transient topology is fully captured by an emergent three-player two-strategy game. Thus it has at most two internal equilibria. The Nash equilibrium of the emergent game is the transient topology, at which the two opinions have the same student size. We obtain the in(out)-degree distribution, which is typically challenging in previous works. This equivalence result explicitly tells who has how many neighbors during the opinion formation. Thus it demonstrates the transient topology during opinion formation. The emergent games degenerate to two-player two-strategy games, if the type of directed links is not considered when selecting an individual or initiating breaking the link, i.e., \(\alpha_{\overrightarrow{X\overrightarrow{Y}}}=\alpha\) or \(k_{\overrightarrow{X\overrightarrow{Y}}}=k\), where \(0<\alpha<1\), \(0<k<1\) and \(\overrightarrow{XY}\in S\). If we focus on the bi-directionality and set \(\alpha_{\overrightarrow{X\overrightarrow{Y}}}=\alpha=1/2\), the emergent game which captures the fate of opinions, i.e., Eq. (4) is equivalent to [21] where networks are undirected yet dynamical. For in-group bias, individuals can reach a consensus. For out-group bias, opinions can coexist if opinions coexist in the beginning. Furthermore, the condition \(\alpha_{\overrightarrow{X\overrightarrow{Y}}}=\alpha\) can be relaxed to \(\alpha_{\overrightarrow{x\overrightarrow{Y}}}=\alpha_{\overrightarrow{x \overrightarrow{Y}}}=\gamma_{1}\) and \(\alpha_{\overrightarrow{x\overrightarrow{Y}}}=\alpha_{\overrightarrow{x \overrightarrow{Y}}}=\gamma_{2}\), where \(0<\gamma_{1},\gamma_{2}<1\). For example, if the teachers have the same opinion \(+\), then their students have the same probability of being selected, i.e., \(\alpha_{\overrightarrow{x\overrightarrow{x}}}=\alpha_{\overrightarrow{x \overrightarrow{y}}}=\gamma_{1}\). We have \[M_{\text{opinion\_new}}=\ \left(\begin{array}{cc}\dfrac{\gamma_{2}k_{ \overrightarrow{x\overrightarrow{x}}}}{\gamma_{1}k_{\overrightarrow{x \overrightarrow{y}}}}&1\\ 1&\dfrac{\gamma_{1}k_{\overrightarrow{x\overrightarrow{y}}}}{\gamma_{2}k_{ \overrightarrow{x\overrightarrow{y}}}}\end{array}\right) \tag{17}\] and \[M_{\text{in-degree\_new}}=\left(\begin{array}{cc}\dfrac{\gamma_{2}}{k_{ \overrightarrow{x\overrightarrow{x}}}}&\dfrac{\gamma_{2}}{k_{ \overrightarrow{x\overrightarrow{y}}}}\\ \dfrac{\gamma_{1}}{k_{\overrightarrow{x\overrightarrow{y}}}}&\dfrac{\gamma_{ 1}}{k_{\overrightarrow{x\overrightarrow{y}}}}\end{array}\right). \tag{18}\] If \(\gamma_{1}=\gamma_{2}\), then \(M_{\text{opinion}}=M_{\text{opinion\_new}}\) and \(M_{\text{in-degree}}=M_{\text{in-degree\_new}}\). We reveal a counterintuitive phenomenon with the aid of the two different emergent games, i.e., \(M_{\text{opinion}}\) [Eq. (4)] and \(M_{\text{in-degree}}\) [Eq. (14)]. Intuitively, if the number of disciples of opinion \(+\) is larger than the opinion \(-\), then opinion \(+\) is learned by more students, hence the fraction of opinion + increases and opinion + can take over the whole population. However, we show that the number of disciples is _not_ the key to the success of the invasion. An opinion with a smaller student size can succeed in the population. Noteworthily, if \(k_{\overrightarrow{+-}}=k_{\overrightarrow{-+}}=k\), where \(0<k<1\), we have \(M_{\text{opinion}}=k\cdot M_{\text{in-degree}}\). 
The relation \(M_{\text{opinion}}=k\cdot M_{\text{in-degree}}\) implies that one emergent game is sufficient to capture both the fate of opinions and the transient topology. We also show that \(M_{\text{in-degree}}\) is the same as \(M_{\text{out-degree}}\) in this case (see Supplemental Material). This implies that the average in-degree is equal to the average out-degree, i.e., an individual has on average the same number of students and teachers, which mirrors an undirected-like network. In other words, if we do not distinguish \(\overrightarrow{+-}\) and \(\overrightarrow{-+}\), the network has symmetric-like properties in a statistical sense, although it is still a directed network. Furthermore, the number of students of a popular opinion is not higher than that of a non-popular opinion, whereas opinion leaders play a decisive role in static networks [42]. This shows that undirected and directed networks are fundamentally different. Clustering is believed to play a crucial role in complex systems [54, 55, 56, 57, 38, 58]. However, we find that even if individuals with opinion \(+\) gather together, opinion \(+\) does not necessarily invade successfully. Hence the clustering of individuals with the same opinion is _not_ the key to a successful invasion in the dynamical directed network (see more details in Supplemental Material). To sum up, our work bridges the gap between opinion dynamics and evolutionary game theory. Via this bridge, we are able to predict both the fate of opinions and the transient topology from a game perspective.

We gratefully acknowledge Xunlong Wang, who inspired us to find that the in-degree follows a Poisson distribution in the limit of infinitely large population size. We appreciate the support of NSFC No. 61751301.

## Appendix A Linking Dynamics

Here the number of directed links \(NL\) is constant. Each directed link \(i\ (i=1,2,\cdots,NL)\) is selected with probability \(1/NL\). At time \(t\), we randomly select a directed link \(i^{t}=i\). If the selected link \(i^{t}\) does not break, then \(i^{t+1}=i^{t}\). Otherwise, a new directed link is introduced, denoted by \(i^{t+1}\). We denote the type of the directed link \(i^{t}\) by \(T\left(i^{t}\right)\), where \(T\left(i^{t}\right)\in S\). The linking dynamics is captured by a Markov chain with transition matrix \(Q_{(\overrightarrow{AB})(\overrightarrow{CD})}\), whose entries are the probabilities that a link of type \(\overrightarrow{AB}\) transforms into a link of type \(\overrightarrow{CD}\) in one time step. For instance, \(Q_{(\overrightarrow{+-})(\overrightarrow{+-})}\) is the probability that \(i^{t}\) of type \(\overrightarrow{+-}\) transforms to \(i^{t+1}\) of type \(\overrightarrow{+-}\). In this case, one of the following two cases occurs: (1) \(i^{t}\) is not selected (with probability \(\left(NL-1\right)/NL\)). (2) \(i^{t}\) is selected (with probability \(1/NL\)). Then either the original \(\overrightarrow{+-}\) link is not broken (with probability \(1-k_{\overrightarrow{+-}}\)), or it is broken (with probability \(k_{\overrightarrow{+-}}\)) and either the student with opinion \(+\) reconnects to a new teacher with opinion \(-\) (with probability \(\alpha_{\overrightarrow{+-}}x_{-}\)) or the teacher with opinion \(-\) recruits a new student with opinion \(+\) (with probability \(\beta_{\overrightarrow{+-}}x_{+}\), where \(x_{+}\) is the fraction of opinion \(+\)). Hence,

\[Q_{(\overrightarrow{+-})(\overrightarrow{+-})}=\frac{NL-1}{NL}+\frac{1}{NL}\left(1-k_{\overrightarrow{+-}}+k_{\overrightarrow{+-}}\alpha_{\overrightarrow{+-}}x_{-}+k_{\overrightarrow{+-}}\beta_{\overrightarrow{+-}}x_{+}\right). \tag{10}\]

And \(x_{-}=1-x_{+}\) is the fraction of opinion \(-\). 
The transition probability matrix is given by

\[Q=\frac{NL-1}{NL}I_{4}+\frac{1}{NL}V, \tag{11}\]

where \(I_{4}\) is the \(4\times 4\) identity matrix and, ordering the link types as \(\left(\overrightarrow{++},\overrightarrow{+-},\overrightarrow{-+},\overrightarrow{--}\right)\), the matrix \(V\) of one-step transition probabilities between link types is

\[V=\begin{pmatrix}1-k_{\overrightarrow{++}}+k_{\overrightarrow{++}}x_{+}&k_{\overrightarrow{++}}\alpha_{\overrightarrow{++}}x_{-}&k_{\overrightarrow{++}}\beta_{\overrightarrow{++}}x_{-}&0\\ k_{\overrightarrow{+-}}\alpha_{\overrightarrow{+-}}x_{+}&1-k_{\overrightarrow{+-}}+k_{\overrightarrow{+-}}\left(\alpha_{\overrightarrow{+-}}x_{-}+\beta_{\overrightarrow{+-}}x_{+}\right)&0&k_{\overrightarrow{+-}}\beta_{\overrightarrow{+-}}x_{-}\\ k_{\overrightarrow{-+}}\beta_{\overrightarrow{-+}}x_{+}&0&1-k_{\overrightarrow{-+}}+k_{\overrightarrow{-+}}\left(\alpha_{\overrightarrow{-+}}x_{+}+\beta_{\overrightarrow{-+}}x_{-}\right)&k_{\overrightarrow{-+}}\alpha_{\overrightarrow{-+}}x_{-}\\ 0&k_{\overrightarrow{--}}\beta_{\overrightarrow{--}}x_{+}&k_{\overrightarrow{--}}\alpha_{\overrightarrow{--}}x_{+}&1-k_{\overrightarrow{--}}+k_{\overrightarrow{--}}x_{-}\end{pmatrix},\]

where rows refer to the current link type and columns to the link type after one step. The matrix \(V\) is an approximation, because the newly formed link may coincide with an existing link, i.e., an individual may reconnect to one of its current students or teachers. Since the population size is much larger than the average degree, i.e., \(N\gg L\), the approximation is well justified. The state space of the Markov chain is \(S\). If \(k_{\overrightarrow{++}}k_{\overrightarrow{+-}}k_{\overrightarrow{-+}}k_{\overrightarrow{--}}x_{+}x_{-}\neq 0\), there is a unique stationary distribution \(\pi_{S}=\left(\pi_{\overrightarrow{++}},\pi_{\overrightarrow{+-}},\pi_{\overrightarrow{-+}},\pi_{\overrightarrow{--}}\right)\) determined by the equation \(\pi_{S}Q=\pi_{S}\). 
We find that

\[\pi_{S}=\mathcal{N}\left(x_{+}\right)\left[\begin{array}{c}\dfrac{x_{+}^{2}}{k_{\overrightarrow{++}}}\left(x_{+}\alpha_{\overrightarrow{+-}}\beta_{\overrightarrow{-+}}+x_{-}\alpha_{\overrightarrow{--}}\beta_{\overrightarrow{-+}}+x_{-}\alpha_{\overrightarrow{+-}}\left(\alpha_{\overrightarrow{-+}}-\alpha_{\overrightarrow{--}}\right)\right)\\[2mm] \dfrac{x_{+}x_{-}}{k_{\overrightarrow{+-}}}\left(x_{+}\alpha_{\overrightarrow{++}}\beta_{\overrightarrow{-+}}+x_{-}\alpha_{\overrightarrow{-+}}\beta_{\overrightarrow{--}}\right)\\[2mm] \dfrac{x_{+}x_{-}}{k_{\overrightarrow{-+}}}\left(x_{+}\alpha_{\overrightarrow{+-}}\beta_{\overrightarrow{++}}+x_{-}\alpha_{\overrightarrow{--}}\beta_{\overrightarrow{+-}}\right)\\[2mm] \dfrac{x_{-}^{2}}{k_{\overrightarrow{--}}}\left(x_{-}\alpha_{\overrightarrow{-+}}\beta_{\overrightarrow{+-}}+x_{+}\alpha_{\overrightarrow{++}}\beta_{\overrightarrow{+-}}+x_{+}\alpha_{\overrightarrow{-+}}\left(\alpha_{\overrightarrow{+-}}-\alpha_{\overrightarrow{++}}\right)\right)\end{array}\right]^{\prime}, \tag{12}\]

where \(\mathcal{N}\left(x_{+}\right)>0\) is the normalization factor that makes the four entries sum to one, and \({}^{\prime}\) denotes transposition. Here \(\pi_{\overrightarrow{XY}}\) refers to the probability that a directed link \(i\) is of type \(\overrightarrow{XY}\) in the stationary regime.
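As a quick numerical check (not part of the original derivation), the following Python sketch builds the matrix \(V\) of Eq. (11) as reconstructed above for illustrative parameter values — all numbers below are assumptions, not taken from the paper — and computes the stationary distribution directly from \(\pi_{S}Q=\pi_{S}\); the result can be compared entrywise with the closed form (12).

```python
# Minimal sketch of the linking dynamics, assuming the reconstructed V of Eq. (11).
# States are ordered (++, +-, -+, --); alpha[s] is the probability that the student
# end of a broken link of type s rewires (beta = 1 - alpha), k[s] its breaking
# probability, and x_plus the fraction of opinion +.  Parameter values are assumptions.
import numpy as np

states = ["++", "+-", "-+", "--"]
k     = {"++": 0.3, "+-": 0.6, "-+": 0.5, "--": 0.4}   # breaking probabilities
alpha = {"++": 0.7, "+-": 0.4, "-+": 0.55, "--": 0.2}  # student-end rewiring probabilities
beta  = {s: 1.0 - alpha[s] for s in states}
x_plus, x_minus = 0.6, 0.4                             # opinion frequencies
NL = 100                                               # number of directed links

def V_entry(src, dst):
    """Probability that a selected link of type src becomes type dst in one step."""
    s, t = src                       # student and teacher opinions of the selected link
    if dst == src:
        x_t = x_plus if t == "+" else x_minus
        x_s = x_plus if s == "+" else x_minus
        # not broken, or broken and rewired to a partner of the same opinion
        return 1 - k[src] + k[src] * (alpha[src] * x_t + beta[src] * x_s)
    if dst[0] == s:                  # student kept its end, teacher opinion changed
        return k[src] * alpha[src] * (x_plus if dst[1] == "+" else x_minus)
    if dst[1] == t:                  # teacher kept its end, student opinion changed
        return k[src] * beta[src] * (x_plus if dst[0] == "+" else x_minus)
    return 0.0                       # both opinions cannot change in one rewiring step

V = np.array([[V_entry(a, b) for b in states] for a in states])
Q = ((NL - 1) * np.eye(4) + V) / NL
assert np.allclose(Q.sum(axis=1), 1.0)                 # Q is row-stochastic

# Stationary distribution: left eigenvector of Q for eigenvalue 1, normalized.
w, vl = np.linalg.eig(Q.T)
pi = np.real(vl[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 4))))
```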
2303.09804
Commutator subgroups and crystallographic quotients of virtual extensions of symmetric groups
The virtual braid group $VB_n$, the virtual twin group $VT_n$ and the virtual triplet group $VL_n$ are extensions of the symmetric group $S_n$, which are motivated by the Alexander-Markov correspondence for virtual knot theories. The kernels of natural epimorphisms of these groups onto the symmetric group $S_n$ are the pure virtual braid group $VP_n$, the pure virtual twin group $PVT_n$ and the pure virtual triplet group $PVL_n$, respectively. In this paper, we investigate commutator subgroups, pure subgroups and crystallographic quotients of these groups. We derive explicit finite presentations of the pure virtual triplet group $PVL_n$, the commutator subgroup $VT_n^{'}$ of $VT_n$ and the commutator subgroup $VL_n^{'}$ of $VL_n$. Our results complete the understanding of these groups, except that of $VB_n^{'}$, for which the existence of a finite presentation is not known for $n \ge 4$. We also prove that $VL_n/PVL_n^{'}$ is a crystallographic group and give an explicit construction of infinitely many torsion elements in it.
Pravin Kumar, Tushar Kanta Naik, Neha Nanda, Mahender Singh
2023-03-17T07:19:15Z
http://arxiv.org/abs/2303.09804v1
# Commutator subgroups and crystallographic quotients of virtual extensions of symmetric groups

###### Abstract.

The virtual braid group \(VB_{n}\), the virtual twin group \(VT_{n}\) and the virtual triplet group \(VL_{n}\) are extensions of the symmetric group \(S_{n}\), which are motivated by the Alexander-Markov correspondence for virtual knot theories. The kernels of natural epimorphisms of these groups onto the symmetric group \(S_{n}\) are the pure virtual braid group \(VP_{n}\), the pure virtual twin group \(PVT_{n}\) and the pure virtual triplet group \(PVL_{n}\), respectively. In this paper, we investigate commutator subgroups, pure subgroups and crystallographic quotients of these groups. We derive explicit finite presentations of the pure virtual triplet group \(PVL_{n}\), the commutator subgroup \(VT_{n}^{{}^{\prime}}\) of \(VT_{n}\) and the commutator subgroup \(VL_{n}^{{}^{\prime}}\) of \(VL_{n}\). Our results complete the understanding of these groups, except that of \(VB_{n}^{{}^{\prime}}\), for which the existence of a finite presentation is not known for \(n\geq 4\). We also prove that \(VL_{n}/PVL_{n}^{{}^{\prime}}\) is a crystallographic group and give an explicit construction of infinitely many torsion elements in it.

_Mathematics Subject Classification 2020._ Primary 20F55, 20H15; Secondary 20F36.

_Key words and phrases._ Bieberbach group, crystallographic group, Reidemeister-Schreier method, virtual braid group, virtual triplet group, virtual twin group

## 1. Introduction

The virtual triplet group \(VL_{n}\) is the group with presentation

\[VL_{n}=\Big{\langle}y_{1},y_{2},\ldots,y_{n-1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\ \ \big|\ \ y_{i}^{2}=1,\quad\rho_{i}^{2}=1\quad\text{for}\quad 1\leq i\leq n-1, \tag{1.0.6}\]
\[\rho_{i}\rho_{j}=\rho_{j}\rho_{i},\quad y_{i}\rho_{j}=\rho_{j}y_{i}\quad\text{for}\quad|i-j|\geq 2,\]
\[y_{i}y_{j}y_{i}=y_{j}y_{i}y_{j},\quad\rho_{i}\rho_{j}\rho_{i}=\rho_{j}\rho_{i}\rho_{j},\quad\rho_{i}y_{j}\rho_{i}=\rho_{j}y_{i}\rho_{j}\quad\text{for}\quad|i-j|=1\Big{\rangle}.\]

There are natural epimorphisms

\[VB_{n}\to S_{n}\quad\text{given by}\quad\sigma_{i},\rho_{i}\mapsto\tau_{i},\]
\[VT_{n}\to S_{n}\quad\text{given by}\quad s_{i},\rho_{i}\mapsto\tau_{i}\]
and
\[VL_{n}\to S_{n}\quad\text{given by}\quad y_{i},\rho_{i}\mapsto\tau_{i}.\]

The kernels of these epimorphisms are the pure virtual braid group \(VP_{n}\), the pure virtual twin group \(PVT_{n}\) and the pure virtual triplet group \(PVL_{n}\), respectively. The virtual groups together with their pure subgroups make the following diagram commute.

The present paper is concerned with commutator subgroups, pure subgroups and certain geometrically motivated quotients of virtual braid groups, virtual twin groups and virtual triplet groups. It is known from [5, Theorem 1.1] that the commutator subgroup \(VB_{n}^{{}^{\prime}}\) of the virtual braid group \(VB_{n}\) is generated by \((2n-3)\) elements for \(n\geq 4\), whereas \(VB_{3}^{{}^{\prime}}\) is infinitely generated. It is still not known whether \(VB_{n}^{{}^{\prime}}\) is finitely presented for \(n\geq 4\). In this paper, we determine a precise finite presentation of the commutator subgroup \(VT_{n}^{{}^{\prime}}\) of \(VT_{n}\) (Theorem 2.2) and the commutator subgroup \(VL_{n}^{{}^{\prime}}\) of \(VL_{n}\) (Theorem 3.2). A finite presentation of the pure virtual braid group is well-known due to Bardakov [2]. 
It has been proved in [25] that the pure virtual twin group \(PVT_{n}\) is an irreducible right-angled Artin group and a finite presentation has also been given for the same. We use the Reidemeister-Schreier method to derive a finite presentation of the pure virtual triplet group \(PVL_{n}\) (Theorem 4.2). This completes our understanding of these groups in the virtual setting. As by-products, we deduce that \(VT_{n}\) and \(VL_{n}\) are residually nilpotent if and only if \(n=2\) (Corollaries 2.4 and 3.4). Crystallographic groups play a crucial role in the study of groups of isometries of Euclidean spaces. In fact, there is a correspondence between torsion free crystallographic groups (called Bieberbach groups) and the class of compact flat Riemannian manifolds [9, Theorem 2.1.1]. Crystallographic quotients of virtual braid groups and virtual twin groups have been investigated in [28]. In this paper, we consider the case of virtual triplet groups. More precisely, we prove that \(VL_{n}/PVL_{n}^{{}^{\prime}}\) is a crystallographic group of dimension \(n(n-1)/2\) for \(n\geq 2\) (Theorem 5.4), and that it contains infinitely many torsion elements (Corollary 5.6). In fact, we give an explicit construction of infinitely many torsion elements in \(VL_{n}/PVL_{n}^{{}^{\prime}}\). We denote the \(n\)-th term of the lower central series of a group \(G\) by \(\gamma_{n}(G)\), where \(\gamma_{1}(G)=G\) and \(\gamma_{2}(G)=G^{{}^{\prime}}\) is the commutator subgroup of \(G\). ## 2. Presentation of commutator subgroup of virtual twin group In this section, we give a finite presentation of the commutator subgroup \(VT_{n}^{{}^{\prime}}\) of \(VT_{n}\). This is achieved through a reduced presentation of \(VT_{n}\), which we prove first. **Theorem 2.1**.: _The virtual twin group \(VT_{n}\)\((n\geq 3)\) has the following reduced presentation:_ 1. \(VT_{3}=\langle s_{1},\rho_{1},\rho_{2}\ \mid\ s_{1}^{2}=\rho_{1}^{2}=\rho_{2}^{2 }=(\rho_{1}\rho_{2})^{3}=1\rangle\)_._ 2. _For_ \(n\geq 4\)_,_ \(VT_{n}\) _has a presentation with generating set_ \(\{s_{1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\}\) _and defining relations as follows:_ (2.0.1) \[s_{1}^{2} =1,\] (2.0.2) \[\rho_{i}^{2} =1\quad\text{for}\quad 1\leq i\leq n-1,\] (2.0.3) \[\rho_{i}\rho_{j} =\rho_{j}\rho_{i}\quad\text{for}\quad|i-j|\geq 2,\] (2.0.4) \[\rho_{i}\rho_{i+1}\rho_{i} =\rho_{i+1}\rho_{i}\rho_{i+1}\quad\text{for}\quad 1\leq i\leq n-2,\] (2.0.5) \[\rho_{i}s_{1} =s_{1}\rho_{i}\quad\text{for}\quad i\geq 3,\] (2.0.6) \[(s_{1}\rho_{2}\rho_{1}\rho_{3}\rho_{2})^{4} =1.\] Proof.: The case \(n=3\) is immediate. For \(n\geq 4\), we first show that the generating set of \(VT_{n}\) can be reduced to \(\{s_{1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\}\). Claim 1: \(s_{i+1}=(\rho_{i}\rho_{i-1}\ldots\rho_{2}\rho_{1})(\rho_{i+1}\rho_{i}\ldots \rho_{3}\rho_{2})s_{1}(\rho_{2}\rho_{3}\ldots\rho_{i}\rho_{i+1})(\rho_{1}\rho _{2}\ldots\rho_{i-1}\rho_{i})\) for \(i\geq 1\). The case \(i=1\) follows from the relation \[\rho_{i}s_{j}\rho_{i}=\rho_{j}s_{i}\rho_{j}\quad\text{for}\quad|i-j|=1. 
\tag{2.0.7}\] Assuming the claim for \(i\) and using the relation (2.0.7) for \(j=i+1\) gives \[s_{i+1}=\rho_{i}\rho_{i+1}(\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i }\rho_{i-1}\ldots\rho_{3}\rho_{2})s_{1}(\rho_{2}\rho_{3}\ldots\rho_{i-1}\rho _{i})(\rho_{1}\rho_{2}\ldots\rho_{i-2}\rho_{i-1})\rho_{i+1}\rho_{i}.\] Now, the relation (2.0.3) gives \[s_{i+1}=(\rho_{i}\rho_{i-1}\ldots\rho_{2}\rho_{1})(\rho_{i+1}\rho_{i}\ldots \rho_{3}\rho_{2})s_{1}(\rho_{2}\rho_{3}\ldots\rho_{i}\rho_{i+1})(\rho_{1}\rho _{2}\ldots\rho_{i-1}\rho_{i}), \tag{2.0.8}\] which is desired. Hence, we can eliminate the generators \(s_{i}\) for \(i\geq 2\). Next, we show that the relations of \(VT_{n}\) involving \(s_{i}\) for \(i\geq 2\) can be recovered from the relations listed in the statement of the theorem. Claim 2: The relations \(\rho_{i}s_{i+1}\rho_{i}=\rho_{i+1}s_{i}\rho_{i+1}\) for \(1\leq i\leq n-2\) can be recovered from the relations (2.0.2) and (2.0.3). Using (2.0.8), we have \[\rho_{i}s_{i+1}\rho_{i} = \rho_{i}(\rho_{i}\ldots\rho_{1})(\rho_{i+1}\ldots\rho_{2})s_{1}( \rho_{2}\ldots\rho_{i+1})(\rho_{1}\ldots\rho_{i})\rho_{i}\] \[= (\rho_{i-1}\ldots\rho_{1})\rho_{i+1}(\rho_{i}\ldots\rho_{2})s_{1} (\rho_{2}\ldots\rho_{i})\rho_{i+1}(\rho_{1}\ldots\rho_{i-1})\] \[= \rho_{i+1}(\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_{2})s_{1} (\rho_{2}\ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})\rho_{i+1}\] \[= \rho_{i+1}s_{i}\rho_{i+1},\] which is desired. Claim 3: The relations \(s_{i}\rho_{j}=\rho_{j}s_{i}\) for \(|i-j|\geq 2\) can be recovered using the relations (2.0.3), (2.0.4) and (2.0.5). If \(j\leq i-2\), then we have \[s_{i}\rho_{j}\] \[= (\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i}\rho_{i-1} \ldots\rho_{3}\rho_{2})s_{1}(\rho_{2}\rho_{3}\ldots\rho_{i-1}\rho_{i})(\rho_ {1}\rho_{2}\ldots\rho_{i-2}\rho_{i-1})\rho_{j}\] \[= (\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i}\rho_{i-1} \ldots\rho_{3}\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i-1}\rho_{i})(\rho_{1}\rho_ {2}\ldots\rho_{j}\rho_{j+1}\rho_{j}\ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.3})})\] \[= (\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i}\rho_{i-1} \ldots\rho_{3}\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i-1}\rho_{i})(\rho_{1}\ldots \rho_{j+1}\rho_{j}\rho_{j+1}\ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.4})})\] \[= (\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i}\rho_{i-1} \ldots\rho_{3}\rho_{2})s_{1}(\rho_{2}\ldots\rho_{j+1}\rho_{j+2}\rho_{j+1} \ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.3})})\] \[= (\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i}\ldots\rho_{ j+2}\rho_{j+1}\rho_{j+2}\ldots\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i})(\rho_{1} \ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.3}) and (\ref{eq:2.0.5})})\] \[= (\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1})(\rho_{i}\ldots\rho_{ j+1}\rho_{j+2}\rho_{j+1}\ldots\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i})(\rho_{1} \ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.4})})\] \[= (\rho_{i-1}\ldots\rho_{j+1}\rho_{j}\rho_{j+1}\ldots\rho_{1})(\rho _{i}\ldots\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.3})})\] \[= (\rho_{i-1}\ldots\rho_{j}\rho_{j+1}\rho_{j}\ldots\rho_{1})(\rho_ {i}\ldots\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.4})})\] \[= \rho_{j}(\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_{2})s_{1}( \rho_{2}\ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})\ \ (\mbox{By (\ref{eq:2.0.3})})\] \[= \rho_{j}s_{i},\] which is desired. 
If \(j\geq i+2\), then the claim follows from the relations (2.0.3) and (2.0.5). Claim 4: The relations \(s_{i}s_{j}=s_{j}s_{i}\) for \(1\leq i\leq j-2\leq n-3\) can be recovered from the relations (2.0.1) through (2.0.6). For clarity, we underline the terms which are modified using the cited relations. First, we consider \(i=1\), in which case we have \[s_{1}s_{j}\] \[= s_{1}(\rho_{j-1}\ldots\rho_{1})(\rho_{j}\ldots\rho_{2})s_{1}(\rho _{2}\rho_{3}\ldots\rho_{j})(\rho_{1}\rho_{2}\ldots\rho_{j-1})\] \[= (\rho_{j-1}\ldots\rho_{3})s_{1}\rho_{2}\rho_{1}(\rho_{j}\ldots\rho _{3}\rho_{2})s_{1}\rho_{2}\rho_{1}\rho_{3}\rho_{2}(\rho_{4}\ldots\rho_{j})(\rho_ {3}\ldots\rho_{j-1})\] \[= (\rho_{j-1}\ldots\rho_{3})(\rho_{j}\ldots\rho_{4})(s_{1}\rho_{2} \rho_{1}\rho_{3}\rho_{2})^{2}(\rho_{4}\ldots\rho_{j})(\rho_{3}\ldots\rho_{j-1})\] \[= (\rho_{j-1}\ldots\rho_{3})(\rho_{j}\ldots\rho_{4})(\rho_{2}\rho_{ 3}\rho_{1}\rho_{2}s_{1})^{2}(\rho_{4}\ldots\rho_{j})(\rho_{3}\ldots\rho_{j-1})\] \[= (\rho_{j-1}\ldots\rho_{1})(\rho_{j}\ldots\rho_{2})s_{1}(\rho_{2} \rho_{3}\ldots\rho_{j})(\rho_{1}\rho_{2}\ldots\rho_{j-1})s_{1}\] \[= s_{j}s_{1}.\] Next, we assume that \(i\geq 2\). Then we have \[\begin{array}{ll}&s_{i}s_{j}\\ =&(\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_{2})s_{1}(\rho_{2}\ldots\rho_{i} )(\rho_{1}\ldots\rho_{i-1})(\rho_{j-1}\ldots\rho_{i+2}\rho_{i+1}\rho_{i}\ldots \rho_{1})(\rho_{j}\ldots\rho_{2})s_{1}\\ &(\rho_{2}\ldots\rho_{j})(\rho_{1}\ldots\rho_{j-1})\\ =&(\rho_{j-1}\ldots\rho_{i+2})(\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_{2 })s_{1}(\rho_{2}\ldots\rho_{i})\rho_{i+1}(\rho_{1}\ldots\rho_{i-1})(\rho_{i} \ldots\rho_{1})(\rho_{j}\ldots\rho_{2})s_{1}\\ &(\rho_{2}\ldots\rho_{j})(\rho_{1}\ldots\rho_{j-1})\ \ \mbox{(By (2.0.3))}\\ =&(\rho_{j-1}\ldots\rho_{i+2})(\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_ {2})s_{1}(\rho_{2}\ldots\rho_{i}\rho_{i+1})(\rho_{i}\ldots\rho_{2}\rho_{1} \rho_{2}\ldots\rho_{i})(\rho_{j}\ldots\rho_{2})s_{1}\\ &(\rho_{2}\ldots\rho_{j})(\rho_{1}\ldots\rho_{j-1})\ \ \mbox{(By (2.0.4))}\\ =&(\rho_{j-1}\ldots\rho_{i+2})(\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_ {2})s_{1}(\rho_{i+1}\ldots\rho_{3}\rho_{2}\rho_{3}\ldots\rho_{i+1})(\rho_{1} \ldots\rho_{i})\\ &\ Using (2.0.9), we can write \[\begin{array}{ll}&s_{i}s_{j}\\ =&(\rho_{j-1}\ldots\rho_{i+2})(\rho_{j}\ldots\rho_{i+3})(\rho_{i-1}\ldots\rho_{1 })(\rho_{i}\ldots\rho_{2})(\rho_{i+1}\ldots\rho_{3})(\rho_{i+2}\ldots\rho_{4}) \\ &s_{1}(\rho_{2}\rho_{3}\rho_{1}\rho_{2}s_{1}\rho_{2}\rho_{1}\rho_{3}\rho_{2})( \rho_{4}\rho_{3}\rho_{2}\rho_{1})\ldots(\rho_{i+2}\rho_{i+1}\rho_{i}\rho_{i-1}) (\rho_{i+3}\ldots\rho_{j})(\rho_{i+2}\ldots\rho_{j-1})\\ &\hbox{ \raise 1.29pt\hbox{\hbox to 0.0pt{\hbox{\hbox{\hbox{\hbox{\hbox{ \hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{\hbox{ \hbox{\hbox{\hbox{\hbox{ \hbox{ \hbox{ \hbox{ This completes the proof of the theorem. Our main computational tool will be the Reidemeister-Schreier method [22, Theorem 2.6]. Given a subgroup \(H\) of a group \(G=\langle S\ |\ R\rangle\), we fix a Schreier system \(\Lambda\) of coset representatives of \(H\) in \(G\). For an element \(w\in G\), let \(\overline{w}\) denote the unique coset representative of the coset of \(w\) in the Schreier set \(\Lambda\). 
Then, by Reidemeister-Schreier Theorem, the subgroup \(H\) has a presentation with generating set \[\big{\{}\gamma(\mu,a):=(\mu a)(\overline{\mu a})^{-1}\ \mid\ \mu\in\Lambda\ \ \text{and}\ \ a\in S\big{\}}\] and set of defining relations \[\{\tau\left(\mu r\mu^{-1}\right)\ \mid\ \mu\in\Lambda\ \ \text{and}\ \ r\in R\}.\] Here, \(\tau\) is the rewriting process, that is, given \(g=g_{1}g_{2}\ldots g_{k}\in G\) for some \(g_{i}\in S\), we have \[\tau(g)=\gamma(1,g_{1})\gamma(\overline{g_{1}},g_{2})\cdots\gamma(\overline{ g_{1}g_{2}\ldots g_{k-1}},g_{k}).\] We now use Theorem 2.1 and the Reidemeister-Schreier method to give a presentation of \(VT_{n}^{{}^{\prime}}\) for \(n\geq 2\). Since the abelianisation of \(VT_{n}\) is isomorphic to the elementary abelian \(2\)-group of order \(4\), we can take the Schreier system of coset representatives to be \[\text{M}=\{1,s_{1},\rho_{1},s_{1}\rho_{1}\}.\] In view of Theorem 2.1, we take \(S=\{s_{1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\}\) as a generating set for \(VT_{n}\). **Generators of \(VT_{n}^{{}^{\prime}}\):** The generating set \(\{\gamma(\mu,a)\ |\ \mu\in\text{M},\ a\in S\}\) of \(VT_{n}^{{}^{\prime}}\) can be determined explicitly. More precisely, for each \(2\leq i\leq n-1\), we have \[\gamma(1,s_{1}) =s_{1}(\overline{s_{1}})^{-1}=1, \gamma(\rho_{1},s_{1}) =\rho_{1}s_{1}(\overline{\rho_{1}s_{1}})^{-1}=(\rho_{1}s_{1})^{2},\] \[\gamma(1,\rho_{1}) =\rho_{1}(\overline{\rho_{1}})^{-1}=1, \gamma(\rho_{1},\rho_{1}) =\rho_{1}\rho_{1}(\overline{\rho_{1}\rho_{1}})^{-1}=1,\] \[\gamma(1,\rho_{i}) =\rho_{i}(\overline{\rho_{i}})^{-1}=\rho_{i}\rho_{1}, \gamma(\rho_{1},\rho_{i}) =\rho_{1}\rho_{i}(\overline{\rho_{1}\rho_{i}})^{-1}=\rho_{1}\rho_{ i},\] \[\gamma(s_{1},s_{1}) =s_{1}s_{1}(\overline{s_{1}s_{1}})^{-1}=1, \gamma(s_{1}\rho_{1},s_{1}) =s_{1}\rho_{1}s_{1}(\overline{s_{1}\rho_{1}s_{1}})^{-1}=(s_{1} \rho_{1})^{2},\] \[\gamma(s_{1},\rho_{1}) =s_{1}\rho_{1}(\overline{s_{1}\rho_{1}})^{-1}=1, \gamma(s_{1}\rho_{1},\rho_{1}) =s_{1}\rho_{1}\rho_{1}(\overline{s_{1}\rho_{1}\rho_{1}})^{-1}=1,\] \[\gamma(s_{1},\rho_{i}) =s_{1}\rho_{i}(\overline{s_{1}\rho_{i}})^{-1}=s_{1}\rho_{i}\rho_{ 1}s_{1}, \gamma(s_{1}\rho_{1},\rho_{i}) =s_{1}\rho_{1}\rho_{i}(\overline{s_{1}\rho_{1}\rho_{i}})^{-1}=(s_{1} \rho_{i}\rho_{1}s_{1})^{-1}.\] For \(2\leq i\leq n-1\), setting \[x_{i}:=\rho_{i}\rho_{1},\quad z_{i}:=s_{1}\rho_{i}\rho_{1}s_{1}\quad\text{and }\quad w:=(\rho_{1}s_{1})^{2},\] we see that \(VT_{n}^{{}^{\prime}}\) is generated by the set \[\{x_{i},z_{i},w\ \mid\ i=2,3,\ldots,n-1\}.\] **Relations in \(VT_{n}^{{}^{\prime}}\):** We now determine the relations in \(VT_{n}^{{}^{\prime}}\) corresponding to each relation in the presentation of \(VT_{n}\) given by Theorem 2.1. 1. First, we consider the relation \(s_{1}^{2}=1\). Then we obtain \[\tau(1s_{1}s_{1}1) = \gamma(1,s_{1})\gamma(\overline{s_{1}},s_{1})=1,\] \[\tau(s_{1}s_{1}s_{1}s_{1}) = (\gamma(1,s_{1})\gamma(s_{1},s_{1}))^{2}=1,\] \[\tau(\rho_{1}s_{1}s_{1}\rho_{1}) = \gamma(1,\rho_{1})\gamma(\rho_{1},s_{1})\gamma(s_{1}\rho_{1},s_{1 })\gamma(\rho_{1},\rho_{1})=1,\] \[\tau(s_{1}\rho_{1}(s_{1}s_{1})\rho_{1}s_{1}) = \gamma(1,s_{1})\gamma(s_{1},\rho_{1})\gamma(s_{1}\rho_{1},s_{1}) \gamma(s_{1}\rho_{1},\rho_{1})\gamma(s_{1},s_{1})=1.\] 2. Next, we consider the relations \(\rho_{i}^{2}=1\) for \(1\leq i\leq n-1\). 
This case gives \[\tau(1\rho_{i}\rho_{i}1) = \gamma(1,\rho_{i})\gamma(\overline{\rho_{i}},\rho_{i})=1,\] \[\tau(s_{1}\rho_{i}\rho_{i}s_{1}) = \gamma(1,s_{1})\gamma(s_{1},\rho_{i})\gamma(s_{1}\rho_{1},\rho_{ i})\gamma(s_{1},s_{1})=1,\] \[\tau(\rho_{1}\rho_{i}\rho_{i}\rho_{1}) = \gamma(1,\rho_{1})\gamma(\rho_{1},\rho_{i})\gamma(1,\rho_{i}) \gamma(\rho_{1},\rho_{1})=1,\] \[\tau(s_{1}\rho_{1}(\rho_{1}\rho_{1})\rho_{1}s_{1}) = \gamma(1,s_{1})\gamma(s_{1},\rho_{1})\gamma(s_{1}\rho_{1},\rho_{ 1})\gamma(s_{1}\rho_{1},\rho_{1})\gamma(s_{1},s_{1})=1,\] \[\mbox{for }i\geq 2.\] 3. Consider the relations \((\rho_{i}\rho_{j})^{2}=1\) for \(|i-j|\geq 2\). In this case, we get \[\tau(1(\rho_{i}\rho_{j})^{2}1) = (\gamma(1,\rho_{i})\gamma(\rho_{1},\rho_{j}))^{2}\] \[= \begin{cases}x_{j}^{-2}&\mbox{for}\quad j\geq 3,\\ (x_{i}x_{j}^{-1})^{2}&\mbox{for}\quad i\geq 2\mbox{ and }i+2\leq j\leq n-1,\end{cases}\] \[\tau(s_{1}(\rho_{i}\rho_{j})^{2}s_{1}) = (\gamma(s_{1},\rho_{i})\gamma(s_{1}\rho_{1},\rho_{j}))^{2}\] \[= \begin{cases}z_{j}^{-2}&\mbox{for}\quad 3\leq j\leq n-1,\\ (z_{i}z_{j}^{-1})^{2}&\mbox{for}\quad 2\leq i\leq n-2\quad\mbox{and}\quad j \geq i+2,\end{cases}\] \[\tau(\rho_{1}(\rho_{i}\rho_{j})^{2}\rho_{1}) = (\gamma(\rho_{1},\rho_{i})\gamma(1,\rho_{j}))^{2}\] \[= \begin{cases}x_{j}^{2}&\mbox{for}\quad 3\leq j\leq n-1,\\ (x_{i}^{-1}x_{j})^{2}&\mbox{for}\quad i\geq 2\quad\mbox{and}\quad i+2\leq j \leq n-1,\end{cases}\] \[\tau(s_{1}\rho_{1}(\rho_{i}\rho_{j})^{2}\rho_{1}s_{1}) = (\gamma(s_{1}\rho_{1},\rho_{i})\gamma(s_{1},\rho_{j}))^{2}\] \[= \begin{cases}z_{j}^{2}&\mbox{for}\quad j\geq 3,\\ (z_{i}^{-1}z_{j})^{2}&\mbox{for}\quad i\geq 2\quad\mbox{and}\quad i+2\leq j \leq n-1.\end{cases}\] Thus, the non-trivial relations in \(VT_{n}^{{}^{\prime}}\) are \[x_{j}^{2} =1\qquad\text{for}\quad 3\leq j\leq n-1, \tag{2.0.13}\] \[z_{j}^{2} =1\qquad\text{for}\quad 3\leq j\leq n-1,\] (2.0.14) \[(x_{i}x_{j}^{-1})^{2} =1\qquad\text{for}\quad 2\leq i\leq n-2\quad\text{and}\quad j \geq i+2,\] (2.0.15) \[(z_{i}z_{j}^{-1})^{2} =1\qquad\text{for}\quad 2\leq i\leq n-2\quad\text{and}\quad j \geq i+2. \tag{2.0.12}\] 4. Considering the relations \((\rho_{i}\rho_{i+1})^{3}=1\) for \(1\leq i\leq n-2\) give \[\tau(1(\rho_{i}\rho_{i+1})^{3}1) = (\gamma(1,\rho_{i})\gamma(\rho_{1},\rho_{i+1}))^{3}\] \[= \begin{cases}x_{2}^{-3}&\text{for}\quad i=1,\\ (x_{i}x_{i+1}^{-1})^{3}&\text{for}\quad 2\leq i\leq n-2,\end{cases}\] \[\tau(s_{1}(\rho_{i}\rho_{i+1})^{3}s_{1}) = (\gamma(s_{1},\rho_{i})\gamma(s_{1}\rho_{1},\rho_{i+1}))^{3}\] \[= \begin{cases}z_{2}^{-3}&\text{for}\quad i=1,\\ (z_{i}z_{i+1}^{-1})^{3}&\text{for}\quad 2\leq i\leq n-2,\end{cases}\] \[\tau(\rho_{1}(\rho_{i}\rho_{i+1})^{3}\rho_{1}) = (\gamma(\rho_{1},\rho_{i})\gamma(1,\rho_{i+1}))^{3}\] \[= \begin{cases}x_{2}^{3}&\text{for}\quad i=1,\\ (x_{i}^{-1}x_{i+1})^{3}&\text{for}\quad 2\leq i\leq n-2,\end{cases}\] \[\tau(s_{1}\rho_{1}(\rho_{i}\rho_{i+1})^{3}\rho_{1}s_{1}) = (\gamma(s_{1}\rho_{1},\rho_{i})\gamma(s_{1},\rho_{i+1}))^{3}\] \[= \begin{cases}z_{2}^{3}&\text{for}\quad i=1,\\ (z_{i}^{-1}z_{i+1})^{3}&\text{for}\quad 2\leq i\leq n-2.\end{cases}\] Thus, we have the following non-trivial relations in \(VT_{n}^{{}^{\prime}}\) \[x_{2}^{3} =1, \tag{2.0.17}\] \[z_{2}^{3} =1,\] (2.0.18) \[(x_{i}x_{i+1}^{-1})^{3} =1\qquad\text{for}\quad 2\leq i\leq n-2,\] (2.0.19) \[(z_{i}z_{i+1}^{-1})^{3} =1\qquad\text{for}\quad 2\leq i\leq n-2. \tag{2.0.16}\] 5. 
Considering the relations \((\rho_{i}s_{1})^{2}=1\) for \(3\leq i\leq n-1\) give \[\tau(1(\rho_{i}s_{1})^{2}1) = \gamma(1,\rho_{i})\gamma(\rho_{1},s_{1})\gamma(s_{1}\rho_{1},\rho _{i})=x_{i}wz_{i}^{-1}\quad\text{for}\quad i\geq 3,\] \[\tau(s_{1}(\rho_{i}s_{1})^{2}s_{1}) = \gamma(s_{1},\rho_{i})\gamma(s_{1}\rho_{1},s_{1})\gamma(\rho_{1}, \rho_{i})=z_{i}w^{-1}x_{i}^{-1}\quad\text{for}\quad i\geq 3,\] \[\tau(\rho_{1}(\rho_{i}s_{1})^{2}\rho_{1}) = \gamma(\rho_{1},\rho_{i})\gamma(s_{1},\rho_{i})\gamma(s_{1}\rho_{ 1},s_{1})=x_{i}^{-1}z_{i}w^{-1}\quad\text{for}\quad i\geq 3,\] \[\tau(s_{1}\rho_{1}(\rho_{i}s_{1})^{2}\rho_{1}s_{1}) = \gamma(s_{1}\rho_{1},\rho_{i})\gamma(1,\rho_{i})\gamma(\rho_{1},s_ {1})=z_{i}^{-1}x_{i}w\quad\text{for}\quad i\geq 3.\] Thus, we have the following non-trivial relations in \(VT_{n}^{{}^{\prime}}\) \[z_{i} = x_{i}w\quad\mbox{ for }\quad 3\leq i\leq n-1. \tag{2.0.20}\] * Finally, we consider the relation \((s_{1}\rho_{2}\rho_{1}\rho_{3}\rho_{2})^{4}=1\). Then we obtain \[\tau(1(s_{1}\rho_{2}\rho_{3}\rho_{1}\rho_{2}s_{1}\rho_{2}\rho_{1} \rho_{3}\rho_{2})^{2}1)\] \[= (\gamma(s_{1},\rho_{2})\gamma(s_{1}\rho_{1},\rho_{3})\gamma(s_{1} \rho_{1},\rho_{2})\gamma(1,\rho_{2})\gamma(1,\rho_{3})\gamma(\rho_{1},\rho_{2} ))^{2}\] \[= (z_{2}z_{3}^{-1}z_{2}^{-1}x_{2}x_{3}x_{2}^{-1})^{2},\] \[\tau(s_{1}(s_{1}\rho_{2}\rho_{3}\rho_{1}\rho_{2}s_{1}\rho_{2} \rho_{1}\rho_{3}\rho_{2})^{2}s_{1})\] \[= (\gamma(1,\rho_{2})\gamma(\rho_{1},\rho_{3})\gamma(\rho_{1},\rho _{2})\gamma(s_{1},\rho_{2})\gamma(s_{1},\rho_{3})\gamma(s_{1}\rho_{1},\rho_{2 }))^{2}\] \[= (x_{2}x_{3}^{-1}x_{2}^{-1}z_{2}z_{3}z_{1}^{-1})^{2},\] \[\tau(\rho_{1}(s_{1}\rho_{2}\rho_{3}\rho_{1}\rho_{2}s_{1}\rho_{2} \rho_{1}\rho_{3}\rho_{2})^{2}\rho_{1})\] \[= (\gamma(\rho_{1},s_{1})\gamma(s_{1}\rho_{1},\rho_{2})\gamma(s_{1 },\rho_{3})\gamma(s_{1},\rho_{2})\gamma(s_{1}\rho_{1},s_{1})\gamma(\rho_{1}, \rho_{2})\gamma(\rho_{1},\rho_{3})\gamma(1,\rho_{2}))^{2}\] \[= (wz_{2}^{-1}z_{3}z_{2}w^{-1}x_{2}^{-1}x_{3}^{-1}x_{2})^{2},\] \[\tau(s_{1}\rho_{1}(s_{1}\rho_{2}\rho_{3}\rho_{1}\rho_{2}s_{1} \rho_{2}\rho_{1}\rho_{3}\rho_{2})^{2}\rho_{1}s_{1})\] \[= (\gamma(s_{1}\rho_{1},s_{1})\gamma(\rho_{1},\rho_{2})\gamma(1, \rho_{3})\gamma(1,\rho_{2})\gamma(\rho_{1},s_{1})\gamma(s_{1}\rho_{1},\rho_{2 })\gamma(s_{1}\rho_{1},\rho_{3})\gamma(s_{1},\rho_{2}))^{2}\] \[= (w^{-1}x_{2}^{-1}x_{3}x_{2}wz_{2}^{-1}z_{3}^{-1}z_{2})^{2}.\] Thus, we get the following two non-trivial relations in \(VT_{n}^{{}^{\prime}}\) \[(z_{2}z_{3}^{-1}z_{2}^{-1}x_{2}x_{3}x_{2}^{-1})^{2} = 1, \tag{2.0.22}\] \[(wz_{2}^{-1}z_{3}z_{2}w^{-1}x_{2}^{-1}x_{3}^{-1}x_{2})^{2} = 1. \tag{2.0.21}\] We can eliminate the generators \(z_{i}\) for \(3\leq i\leq n-1\) using (2.0.20). Setting \(z:=z_{2}\), we see that the relations (2.0.16) through (2.0.22) yield the following result. 
**Theorem 2.2**.: _The following hold for the commutator subgroup \(VT_{n}^{{}^{\prime}}\) of \(VT_{n}\):_ * \(VT_{2}^{{}^{\prime}}\cong\mathbb{Z}\) _and is generated by_ \((\rho_{1}s_{1})^{2}\)_._ * \(VT_{3}^{{}^{\prime}}\cong\mathbb{Z}_{3}*\mathbb{Z}_{3}*\mathbb{Z}\) _and has a presentation_ \[\langle\rho_{2}\rho_{1},s_{1}\rho_{2}\rho_{1}s_{1},(\rho_{1}s_{1})^{2}\ \ |\ \ (\rho_{2}\rho_{1})^{3}=(s_{1}\rho_{2}\rho_{1}s_{1})^{3}=1\rangle.\] * _For_ \(n\geq 4\)_,_ \(VT_{n}^{{}^{\prime}}\) _has a presentation with generating set_ \[\{x_{i},z,w\ \ |\ \ i=2,3,\ldots,n-1\}\] and defining relations as follows:_ \[x_{2}^{3} = 1,\] \[x_{j}^{2} = 1\quad\text{for}\quad 3\leq j\leq n-1,\] \[z^{3} = 1,\] \[(x_{i}x_{i+1}^{-1})^{3} = 1\quad\text{for}\quad 2\leq i\leq n-2,\] \[(x_{i}x_{j}^{-1})^{2} = 1\quad\text{for}\quad 2\leq i\leq n-2\quad\text{and}\quad j \geq i+2,\] \[(x_{j}w)^{2} = 1\quad\text{for}\quad 3\leq j\leq n-1,\] \[(zw^{-1}x_{3}^{-1})^{3} = 1,\] \[(zw^{-1}x_{j}^{-1})^{2} = 1\quad\text{for}\quad 4\leq j\leq n-1,\] \[(zw^{-1}x_{3}^{-1}z^{-1}x_{2}x_{3}x_{2}^{-1})^{2} = 1,\] \[(wz^{-1}x_{3}wzw^{-1}x_{2}^{-1}x_{3}^{-1}x_{2})^{2} = 1.\] It is known that the second and the third term of the lower central series coincide for the braid group \(B_{n}\) when \(n\geq 3\)[14] and coincide for the virtual braid group \(VB_{n}\) when \(n\geq 4\)[3, Proposition 7(c)]. The same assertion does not hold for twin groups [30, Theorem 4.5]. However, it turns out that the assertion holds for virtual twin groups. **Proposition 2.3**.: \(VT_{n}^{{}^{\prime}}=\gamma_{3}(VT_{n})\) _for \(n\geq 3\)._ Proof.: It is enough to prove that \(VT_{n}/\gamma_{3}(VT_{n})\) is abelian. The group \(VT_{n}/\gamma_{3}(VT_{n})\) is generated by \[\tilde{s}_{i}:=s_{i}\gamma_{3}(VT_{n})\quad\text{and}\quad\tilde{\rho}_{i}:= \rho_{i}\gamma_{3}(VT_{n}),\] where \(1\leq i\leq n-1\). It is easy to see that \[\rho_{i+1}=\rho_{i}[\rho_{i},[\rho_{i},\rho_{i+1}]]\quad\text{and}\quad s_{i+ 1}=s_{i}[[\rho_{i+1},[\rho_{i+1},\rho_{i}]],s_{i}]^{-1}.\] This gives \[\tilde{\rho}_{i+1}=\rho_{i+1}\gamma_{3}(VT_{n})=\rho_{i}[\rho_{i},[\rho_{i}, \rho_{i+1}]]\gamma_{3}(VT_{n})=\rho_{i}\gamma_{3}(VT_{n})=\tilde{\rho}_{i}\] and \[\tilde{s}_{i+1}=s_{i+1}\gamma_{3}(VT_{n})=s_{i}[[\rho_{i+1},[\rho_{i+1},\rho_{ i}]],s_{i}]^{-1}\gamma_{3}(VT_{n})=s_{i}\gamma_{3}(VT_{n})=\tilde{s}_{i}.\] for each \(1\leq i\leq n-2\). Since \[\tilde{\rho}_{1}\tilde{s}_{1}=\tilde{\rho}_{3}\tilde{s}_{1}=\tilde{s}_{1} \tilde{\rho}_{3}=\tilde{s}_{1}\tilde{\rho}_{1},\] the assertion follows. **Corollary 2.4**.: \(VT_{n}\) _is residually nilpotent if and only if \(n=2\)._ We conclude the section with a result on freeness of commutator subgroup of \(PVT_{n}\). A graph is called _chordal_ if each of its cycles with more than three vertices has a chord (an edge joining two vertices that are not adjacent in the cycle). A _clique_ (or a complete subgraph) of a graph is a subset \(C\) of vertices such that every two vertices in \(C\) are connected by an edge. It is well-known that a graph is chordal if and only if its vertices can be ordered in such a way that the lesser neighbours of each vertex form a clique. 
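As a small computational aside (not part of the original argument, and assuming the `networkx` library as a dependency), the chordality criterion just recalled can be tested directly on the commutation graphs that appear in the next proposition: the graph of \(PVT_{n}\) has the generators \(\lambda_{i,j}\) as vertices, with an edge whenever the index sets are disjoint.

```python
from itertools import combinations
import networkx as nx   # assumption: any graph library with a chordality test would do

def pvt_commutation_graph(n):
    """Vertices: generators lambda_{i,j}; edges: pairs with disjoint index sets."""
    G = nx.Graph()
    G.add_nodes_from(combinations(range(1, n + 1), 2))
    for p, q in combinations(list(G.nodes), 2):
        if set(p).isdisjoint(q):        # lambda_p and lambda_q commute
            G.add_edge(p, q)
    return G

for n in (4, 5, 6):
    print(n, nx.is_chordal(pvt_commutation_graph(n)))
# expected: True for n = 4 (the graph is a perfect matching), False for n >= 5
# (the n = 5 graph is the Petersen graph, whose induced 5-cycles have no chords)
```

The output matches the dichotomy established below: the graph of \(PVT_{4}\) is vacuously chordal, while for \(n\geq 5\) the graph contains chordless induced cycles.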
**Proposition 2.5**.: _The commutator subgroup of \(PVT_{n}\) is free if and only if \(n\leq 4\)._ Proof.: By [25, Theorem 3.3], the pure virtual twin group \(PVT_{n}\) is an irreducible right-angled Artin group for each \(n\geq 2\), and has a presentation \[PVT_{n}=\langle\lambda_{i,j},\ 1\leq i<j\leq n\ \mid\ \lambda_{i,j}\lambda_{k,l}= \lambda_{k,l}\lambda_{i,j}\text{ for distinct integers }i,j,k,l\rangle,\] where \(\lambda_{i,i+1}=s_{i}\rho_{i}\) for each \(1\leq i\leq n-1\) and \(\lambda_{i,j}=\rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\lambda_{i,i+1}\rho_{i+1} \ldots\rho_{j-2}\rho_{j-1}\) for each \(1\leq i<j\leq n\) and \(j\neq i+1\). The assertion is now immediate for \(n=2,3\). The graph of \(PVT_{4}\) (see Figure 1) is vacuously chordal. By [29, Corollary 4.4], the commutator subgroup of a right-angled Artin group is free if and only if its associated graph is chordal, and hence \(PVT_{4}\) is free. For \(n\geq 5\), fix an ordering on the vertex set of the graph, for example, it could be the lexicographic ordering in our case. Let \(\lambda_{i,j}\) be a maximal vertex and \(p,q,r\in\{1,2,\ldots,n\}\setminus\{i,j\}\) be three distinct integers. Then both \(\lambda_{p,q}\) and \(\lambda_{p,r}\) are lesser neighbours of \(\lambda_{i,j}\), but there cannot be an edge between \(\lambda_{p,q}\) and \(\lambda_{p,r}\). Thus, lesser neighbours of the vertex \(\lambda_{i,j}\) do not form a clique, and hence the graph is not chordal. ## 3. Presentation of commutator subgroup of virtual triplet group In this section, we determine a finite presentation of the commutator subgroup \(VL_{n}^{{}^{\prime}}\) of \(VL_{n}\). The approach is similar to that of the preceding section. We will need the following reduced presentation of \(VL_{n}\). Figure 1. The graph of \(PVT_{4}\). Figure 2. The graph of \(PVT_{5}\). **Theorem 3.1**.: _The virtual triplet group \(VL_{n}\) (\(n\geq 3\)) has a presentation with generating set \(\{y_{1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\}\) and defining relations as follows:_ \[y_{1}^{2} =1, \tag{3.0.2}\] \[\rho_{i}^{2} =1\quad\text{for}\quad 1\leq i\leq n-1,\] (3.0.3) \[\rho_{i}\rho_{j} =\rho_{j}\rho_{i}\quad\text{for}\quad|i-j|\geq 2,\] (3.0.4) \[\rho_{i}\rho_{i+1}\rho_{i} =\rho_{i+1}\rho_{i}\rho_{i+1}\quad\text{for}\quad 1\leq i\leq n -2,\] (3.0.5) \[\rho_{i}y_{1} =y_{1}\rho_{i}\quad\text{for}\quad i\geq 3,\] (3.0.6) \[(y_{1}\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1})^{3} =1. \tag{3.0.1}\] Proof.: Replacing \(s_{i}\) by \(y_{i}\) in Claim 1 in the proof of Theorem 2.1 gives \[y_{i+1}=(\rho_{i}\rho_{i-1}\ldots\rho_{2}\rho_{1})(\rho_{i+1}\rho_{i}\ldots \rho_{3}\rho_{2})y_{1}(\rho_{2}\rho_{3}\ldots\rho_{i}\rho_{i+1})(\rho_{1}\rho _{2}\ldots\rho_{i-1}\rho_{i}) \tag{3.0.7}\] for \(i\geq 2\). Hence, we can eliminate the generators \(y_{i}\) for \(i\geq 2\). Next, we show that the relations of \(VL_{n}\) involving \(y_{i}\) for \(i\geq 2\) can be recovered from the relations listed in the statement of the theorem. Firstly, replacing \(s_{i}\) by \(y_{i}\) in Claim 2 in the proof of Theorem 2.1, we see that the relations \(\rho_{i}y_{i+1}\rho_{i}=\rho_{i+1}y_{i}\rho_{i+1}\) for \(1\leq i\leq n-2\) can be recovered from the relations (3.0.2) and (3.0.3). Similarly, the relation \(y_{i}\rho_{j}=\rho_{j}y_{i}\) for \(|i-j|\geq 2\) is a consequence of the relations 3.0.3, 3.0.4 and 3.0.5. Next, we claim that the relation \(y_{i}y_{i+1}y_{i}=y_{i+1}y_{i}y_{i+1}\) is a consequence of (3.0.7) and the relations (3.0.1) through (3.0.6). 
We compute \[y_{i}y_{i+1}y_{i}\] \[= (\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_{2})y_{1}(\rho_{2} \ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})(\rho_{i}\ldots\rho_{1})(\rho_{i+1} \ldots\rho_{2})y_{1}\] \[(\rho_{2}\ldots\rho_{i+1})(\rho_{1}\ldots\rho_{i})(\rho_{i-1} \ldots\rho_{1})(\rho_{i}\ldots\rho_{2})y_{1}(\rho_{2}\ldots\rho_{i})(\rho_{1} \ldots\rho_{i-1})\;\;\text{(By \eqref{eq:3.0.7})}\] \[= (\rho_{i-1}\ldots\rho_{1})\left(\rho_{i}\ldots\rho_{2}\right)y_{1 }(\rho_{2}\ldots\rho_{i})\left(\rho_{i}\ldots\rho_{2}\,\rho_{1}\rho_{2}\ldots \rho_{i}\right)(\rho_{i+1}\ldots\rho_{2})y_{1}\] \[(\rho_{2}\ldots\rho_{i+1})\left(\rho_{i}\ldots\rho_{2}\rho_{1} \rho_{2}\ldots\rho_{i}\right)(\rho_{i}\ldots\rho_{2})y_{1}\left(\rho_{2} \ldots\rho_{i}\right)(\rho_{1}\ldots\rho_{i-1})\;\;\;\text{(By \eqref{eq:3.0.4})}\] \[= (\rho_{i-1}\ldots\rho_{1})\left(\rho_{i}\ldots\rho_{2}\right)y_{1 }(\rho_{1}\rho_{2}\ldots\rho_{i})(\rho_{i+1}\rho_{i}\ldots\rho_{2})y_{1}(\rho _{2}\ldots\rho_{i}\rho_{i+1})(\rho_{i}\ldots\rho_{2}\,\rho_{1})y_{1}\] \[(\rho_{2}\ldots\rho_{i})\left(\rho_{1}\ldots\rho_{i-1}\right)\;\; \text{(By \eqref{eq:3.0.2})}\] \[= (\rho_{i-1}\ldots\rho_{1})\left(\rho_{i}\ldots\rho_{2}\right)y_{1 }\rho_{1}(\rho_{i+1}\ldots\rho_{3}\rho_{2}\rho_{3}\ldots\rho_{i+1})y_{1}(\rho _{i+1}\ldots\rho_{3}\rho_{2}\rho_{3}\ldots\rho_{i+1})\rho_{1}y_{1}\] \[(\rho_{2}\ldots\rho_{i})\left(\rho_{1}\ldots\rho_{i-1}\right)\;\; \text{(By \eqref{eq:3.0.4})}\] \[= (\rho_{i-1}\ldots\rho_{1})\left(\rho_{i}\ldots\rho_{2}\right)( \rho_{i+1}\ldots\rho_{3})y_{1}\rho_{1}\rho_{2}y_{1}(\rho_{3}\ldots\rho_{i+1}) \left(\rho_{i+1}\ldots\rho_{3}\right)\rho_{2}\rho_{1}y_{1}\left(\rho_{3} \ldots\rho_{i+1}\right)\] \[(\rho_{2}\ldots\rho_{i})\left(\rho_{1}\ldots\rho_{i-1}\right)\;\; \text{(By \eqref{eq:3.0.5})}\] \[(\rho_{2}\ldots\rho_{i})\left(\rho_{1}\ldots\rho_{i-1}\right)\;\; \text{(By \eqref{eq:3.0.3})}\] \[= (\rho_{i-1}\ldots\rho_{1})\left(\rho_{i}\ldots\rho_{2}\right)( \rho_{i+1}\ldots\rho_{3})\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1}y_{1}\rho_{2}y_{1 }\rho_{2}\rho_{1}(\rho_{3}\ldots\rho_{i+1})\left(\rho_{2}\ldots\rho_{i}\right) \left(\rho_{1}\ldots\rho_{i-1}\right)\] \[(\text{By \eqref{eq:3.0.6})}\] \[= (\rho_{i-1}\ldots\rho_{1})(\rho_{i}\ldots\rho_{1})(\rho_{i+1} \ldots\rho_{2})y_{1}\rho_{2}\rho_{1}y_{1}\rho_{1}\rho_{2}y_{1}(\rho_{2} \ldots\rho_{i+1})(\rho_{1}\ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})\;\;\text{(By \eqref{eq:3.0.3})}\] and \[y_{i+1}y_{i}y_{i+1}\] \[= (\rho_{i}\ldots\rho_{1})(\rho_{i+1}\ldots\rho_{2})y_{1}(\rho_{2} \ldots\rho_{i+1})(\rho_{1}\ldots\rho_{i})(\rho_{i-1}\ldots\rho_{1})(\rho_{i} \ldots\rho_{2})y_{1}\] \[(\rho_{2}\ldots\rho_{i})(\rho_{1}\ldots\rho_{i-1})(\rho_{i}\ldots \rho_{1})(\rho_{i+1}\ldots\rho_{2})y_{1}(\rho_{2}\ldots\rho_{i+1})(\rho_{1} \ldots\rho_{i})\ \ \mbox{(By (3.0.7))}\] \[= (\rho_{i}\ldots\rho_{1})\left(\rho_{i+1}\ldots\rho_{2}\right)y_{ 1}\left(\rho_{2}\ldots\rho_{i+1}\right)(\rho_{i}\ldots\rho_{2}\rho_{1}\ \rho_{2}\ldots\rho_{i})\left(\rho_{i}\ldots\rho_{2}\right)y_{1}\] \[(\rho_{2}\ldots\rho_{i})\left(\rho_{i}\ldots\rho_{2}\ \rho_{1}\rho_{2}\ldots\rho_{i} \right)\left(\rho_{i+1}\ldots\rho_{2}\right)y_{1}\left(\rho_{2}\ldots\rho_{i+ 1}\right)(\rho_{1}\ldots\rho_{i})\ \ \ \mbox{(By (3.0.4))}\] \[= (\rho_{i}\ldots\rho_{1})\left(\rho_{i+1}\ldots\rho_{2}\right)y_{ 1}(\rho_{2}\ldots\rho_{i+1})\left(\rho_{i}\ldots\rho_{2}\ \rho_{1}\right)y_{1}\left(\rho_{1}\ \rho_{2}\ldots\rho_{i}\right)(\rho_{i+1}\ldots\rho_{2})\] \[y_{1}\left(\rho_{2}\ldots\rho_{i+1}\right)(\rho_{1}\ldots\rho_{ i})\ \ \ \mbox{(By 
(3.0.2))}\] \[= (\rho_{i}\ldots\rho_{1})\left(\rho_{i+1}\ldots\rho_{2}\right)y_{ 1}(\rho_{i+1}\ldots\rho_{3}\rho_{2}\rho_{3}\ldots\rho_{i+1})\rho_{1}y_{1} \rho_{1}(\rho_{i+1}\ldots\rho_{3}\rho_{2}\rho_{3}\ldots\rho_{i+1})y_{1}\] \[(\rho_{2}\ldots\rho_{i+1})\left(\rho_{1}\ldots\rho_{i})\ \ \ \mbox{(By (3.0.4))}\] \[= (\rho_{i}\ldots\rho_{1})\left(\rho_{i+1}\ldots\rho_{2}\right)( \rho_{i+1}\ldots\rho_{3})\ y_{1}\rho_{2}\rho_{1}y_{1}\rho_{1}\rho_{2}y_{1} \ (\rho_{3}\ldots\rho_{i+1})\left(\rho_{2}\ldots\rho_{i+1}\right)(\rho_{1}\ldots \rho_{i})\] \[\ \ \mbox{(By (3.0.2), (3.0.3) and (3.0.5))}.\] But, (3.0.3) and (3.0.4) give \[(\rho_{i}\ldots\rho_{1})\left(\rho_{i+1}\rho_{i}\rho_{i-1}\ldots\rho_{2}\right) (\rho_{i+1}\rho_{i}\ldots\rho_{3})=(\rho_{i-1}\ldots\rho_{1})\left(\rho_{i} \ldots\rho_{1}\right)\left(\rho_{i+1}\ldots\rho_{2}\right),\] and the claim holds, which completes the proof. Since the abelianisation of \(VL_{n}\) is isomorphic to the elementary abelian 2-group of order 4, we can take \[\mbox{M}=\{1,y_{1},\rho_{1},y_{1}\rho_{1}\}\] as a Schreier system of coset representatives. We use the presentation of \(VL_{n}\) given by Theorem 3.1 and set \(S=\{y_{1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\}\). **Generators of \(VL_{n}^{{}^{\prime}}\):** We first compute the generating set \(\{\gamma(\mu,a)\ |\ \mu\in\mbox{M},\ a\in S\}\) explicitly. Since the generating set and the Schreier system is similar to the case of \(VT_{n}\), ignoring the trivial generators, we see that \(VL_{n}^{{}^{\prime}}\) is generated by the set \[\{\alpha_{i},\beta_{i},\delta\ \mid\ i=2,3,\ldots,n-1\},\] where \[\alpha_{i} := \rho_{i}\rho_{1}=\gamma(1,\rho_{i})=\gamma(\rho_{1},\rho_{i})^{-1},\] \[\beta_{i} := y_{1}\rho_{i}\rho_{1}y_{1}=\gamma(y_{1},\rho_{i})=\gamma(y_{1} \rho_{1},\rho_{i})^{-1},\] \[\delta := (\rho_{1}y_{1})^{2}=\gamma(\rho_{1},y_{1})=\gamma(y_{1}\rho_{1},y_ {1})^{-1}.\] **Relations in \(VL_{n}^{{}^{\prime}}\):** We now determine the relations in \(VL_{n}^{{}^{\prime}}\) corresponding to each relation in the presentation of \(VL_{n}\) given by Theorem 3.1. Since the computations are similar to the ones in the proof of Theorem 2.2, we elaborate only those which yield non-trivial relations in \(VL_{n}^{{}^{\prime}}\). 1. The relation \(y_{1}^{2}=1\) gives only trivial relations in \(VL_{n}^{{}^{\prime}}\). 2. Similarly, the relations \(\rho_{i}^{2}=1\) for \(1\leq i\leq n-1\) give only trivial relations in \(VL_{n}^{{}^{\prime}}\). 3. Next, we consider the relations \((\rho_{i}\rho_{j})^{2}=1\) for \(|i-j|\geq 2\). 
In this case, we get \[\tau(1(\rho_{i}\rho_{j})^{2}1) = (\gamma(1,\rho_{i})\gamma(\rho_{1},\rho_{j}))^{2}\] \[= \begin{cases}\alpha_{j}^{-2}&\text{for}\quad j\geq 3,\\ (\alpha_{i}\alpha_{j}^{-1})^{2}&\text{for}\quad i\geq 2\quad\text{and}\quad i+2 \leq j\leq n-1,\end{cases}\] \[\tau(y_{1}(\rho_{i}\rho_{j})^{2}y_{1}) = (\gamma(y_{1},\rho_{i})\gamma(y_{1}\rho_{1},\rho_{j}))^{2}\] \[= \begin{cases}\beta_{j}^{-2}&\text{for}\quad 3\leq j\leq n-1, \\ (\beta_{i}\beta_{j}^{-1})^{2}&\text{for}\quad 2\leq i\leq n-2\quad\text{and} \quad j\geq i+2,\end{cases}\] \[\tau(\rho_{1}(\rho_{i}\rho_{j})^{2}\rho_{1}) = (\gamma(\rho_{1},\rho_{i})\gamma(1,\rho_{j}))^{2}\] \[= \begin{cases}\alpha_{j}^{2}&\text{for}\quad 3\leq j\leq n-1, \\ (\alpha_{i}^{-1}\alpha_{j})^{2}&\text{for}\quad i\geq 2\quad\text{and}\quad i+2 \leq j\leq n-1,\end{cases}\] \[\tau(y_{1}\rho_{1}(\rho_{i}\rho_{j})^{2}\rho_{1}y_{1}) = (\gamma(y_{1}\rho_{1},\rho_{i})\gamma(y_{1},\rho_{j}))^{2}\] \[= \begin{cases}\beta_{j}^{2}&\text{for}\quad j\geq 3,\\ (\beta_{i}^{-1}\beta_{j})^{2}&\text{for}\quad i\geq 2\quad\text{and}\quad i+2 \leq j\leq n-1.\end{cases}\] Thus, we have the following non-trivial relations in \(VL_{n}^{{}^{\prime}}\) \[\alpha_{j}^{2} =1\qquad\text{for}\quad 3\leq j\leq n-1, \tag{3.0.9}\] \[\beta_{j}^{2} =1\qquad\text{for}\quad 3\leq j\leq n-1,\] (3.0.10) \[(\alpha_{i}\alpha_{j}^{-1})^{2} =1\qquad\text{for}\quad 2\leq i\leq n-2\quad\text{and}\quad j \geq i+2,\] (3.0.11) \[(\beta_{i}\beta_{j}^{-1})^{2} =1\qquad\text{for}\quad 2\leq i\leq n-2\quad\text{and}\quad j \geq i+2. \tag{3.0.8}\] 4. We now consider the relations \((\rho_{i}\rho_{i+1})^{3}=1\) for \(1\leq i\leq n-2\). Then we obtain \[\tau(1(\rho_{i}\rho_{i+1})^{3}1) = (\gamma(1,\rho_{i})\gamma(\rho_{1},\rho_{i+1}))^{3}\] \[= \begin{cases}\alpha_{2}^{-3}&\text{for}\quad i=1,\\ (\alpha_{i}\alpha_{i+1}^{-1})^{3}&\text{for}\quad 2\leq i\leq n-2,\end{cases}\] \[\tau(y_{1}(\rho_{i}\rho_{i+1})^{3}y_{1}) = (\gamma(y_{1},\rho_{i})\gamma(y_{1}\rho_{1},\rho_{i+1}))^{3}\] \[= \begin{cases}\beta_{2}^{-3}&\text{for}\quad i=1,\\ (\beta_{i}\beta_{i+1}^{-1})^{3}&\text{for}\quad 2\leq i\leq n-2,\end{cases}\] \[\tau(\rho_{1}(\rho_{i}\rho_{i+1})^{3}\rho_{1}) = (\gamma(\rho_{1},\rho_{i})\gamma(1,\rho_{i+1}))^{3}\] \[= \begin{cases}\alpha_{2}^{3}&\text{for}\quad i=1,\\ (\alpha_{i}^{-1}\alpha_{i+1})^{3}&\text{for}\quad 2\leq i\leq n-2,\end{cases}\] \[\tau(y_{1}\rho_{1}(\rho_{i}\rho_{i+1})^{3}\rho_{1}y_{1}) = (\gamma(y_{1}\rho_{1},\rho_{i})\gamma(y_{1},\rho_{i+1}))^{3}\] \[= \begin{cases}\beta_{2}^{3}&\text{for}\quad i=1,\\ (\beta_{i}^{-1}\beta_{i+1})^{3}&\text{for}\quad 2\leq i\leq n-2.\end{cases}\] Thus, we have the following non-trivial relations in \(VL_{n}^{{}^{\prime}}\) \[\alpha_{2}^{3} =1, \tag{3.0.13}\] \[\beta_{2}^{3} =1,\] (3.0.14) \[(\alpha_{i}\alpha_{i+1}^{-1})^{3} =1\qquad\text{for}\quad 2\leq i\leq n-2,\] (3.0.15) \[(\beta_{i}\beta_{i+1}^{-1})^{3} =1\qquad\text{for}\quad 2\leq i\leq n-2. \tag{3.0.12}\] 5. The relations \((\rho_{i}y_{1})^{2}=1\) for \(3\leq i\leq n-1\) give \[\tau(1(\rho_{i}y_{1})^{2}1) = \alpha_{i}\delta\beta_{i}^{-1},\] \[\tau(y_{1}(\rho_{i}y_{1})^{2}y_{1}) = \beta_{i}\delta^{-1}\alpha_{i}^{-1},\] \[\tau(\rho_{1}(\rho_{i}y_{1})^{2}\rho_{1}) = \alpha_{i}^{-1}\beta_{i}\delta^{-1},\] \[\tau(y_{1}\rho_{1}(\rho_{i}y_{1})^{2}\rho_{1}y_{1}) = \beta_{i}^{-1}\alpha_{i}\delta\] for each \(3\leq i\leq n-1\). 
Thus, the non-trivial relations in \(VL_{n}^{{}^{\prime}}\) are (3.0.16) \[\beta_{i} = \alpha_{i}\delta\quad\text{for}\quad 3\leq i\leq n-1.\] (6) Finally, we consider the relation \((y_{1}\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1})^{3}=1\). In this case, we get \[\tau(1(y_{1}\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1})^{3}1) = (\gamma(y_{1}\rho_{1},\rho_{2})\gamma(1,\rho_{2}))^{3}=(\beta_{2}^ {-1}\alpha_{2})^{3},\] \[\tau(y_{1}(y_{1}\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1})^{3}y_{1}) = (\gamma(\rho_{1},\rho_{2})\gamma(y_{1},\rho_{2}))^{3}=(\alpha_{2} ^{-1}\beta_{2})^{3},\] \[\tau(\rho_{1}(y_{1}\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1})^{3}\rho_ {1}) = (\gamma(\rho_{1},y_{1})\gamma(y_{1},\rho_{2})\gamma(y_{1}\rho_{1}, y_{1})\gamma(\rho_{1},\rho_{2}))^{3}=(\delta\beta_{2}\delta^{-1}\alpha_{2}^{-1})^{3},\] \[\tau(y_{1}\rho_{1}(y_{1}\rho_{1}\rho_{2}y_{1}\rho_{2}\rho_{1})^{3} \rho_{1}y_{1}) = (\gamma(y_{1}\rho_{1},y_{1})\gamma(1,\rho_{2})\gamma(\rho_{1},y_ {1})\gamma(y_{1}\rho_{1},\rho_{2}))^{3}=(\delta^{-1}\alpha_{2}\delta\beta_{2}^ {-1})^{3}.\] Thus, we have the following additional non-trivial relations in \(VL_{n}^{{}^{\prime}}\) \[(\alpha_{2}^{-1}\beta_{2})^{3} = 1, \tag{3.0.18}\] \[(\delta\beta_{2}\delta^{-1}\alpha_{2}^{-1})^{3} = 1. \tag{3.0.17}\] We eliminate the generators \(\beta_{i}\) for \(3\leq i\leq n-1\) using the relations (3.0.16) and set \(\beta:=\beta_{2}\). This together with the relations (3.0.12) through (3.0.18) gives the following result. **Theorem 3.2**.: _The following hold for the commutator subgroup \(VL_{n}^{{}^{\prime}}\) of \(VL_{n}\):_ _(1) \(VL_{2}^{{}^{\prime}}\cong\mathbb{Z}\) and is generated by \((\rho_{1}y_{1})^{2}\)._ _(2) \(VL_{3}^{{}^{\prime}}\cong\mathbb{Z}_{3}*\mathbb{Z}_{3}*\mathbb{Z}\) and has a presentation_ \[\langle\rho_{2}\rho_{1},y_{1}\rho_{2}\rho_{1}y_{1},(\rho_{1}y_{1})^{2}\ \ |\ \ (\rho_{2}\rho_{1})^{3}=(y_{1}\rho_{2}\rho_{1}y_{1})^{3}=1\rangle.\] _(3) For \(n\geq 4\), \(VL_{n}^{{}^{\prime}}\) has a presentation with generating set_ \[\{\alpha_{i},\beta,\delta\ \ |\ \ i=2,3,\ldots,n-1\}\] _and defining relations as follows:_ \[\alpha_{2}^{3} = 1,\] \[\alpha_{j}^{2} = 1\ \ \mbox{ for }\ \ 3\leq j\leq n-1,\] \[\beta^{3} = 1,\] \[(\alpha_{i}\alpha_{i+1}^{-1})^{3} = 1\ \ \mbox{ for }\ \ 2\leq i\leq n-2,\] \[(\alpha_{i}\alpha_{j}^{-1})^{2} = 1\ \ \mbox{ for }\ \ 2\leq i\leq n-2\ \ \ \mbox{and}\ \ \ j\geq i+2,\] \[(\alpha_{j}\delta)^{2} = 1\ \ \mbox{ for }\ \ 3\leq j\leq n-1,\] \[(\beta\delta^{-1}\alpha_{3}^{-1})^{3} = 1,\] \[(\beta\delta^{-1}\alpha_{j}^{-1})^{2} = 1\ \ \mbox{ for }\ \ 4\leq j\leq n-1,\] \[(\alpha_{2}^{-1}\beta)^{3} = 1,\] \[(\delta\beta\delta^{-1}\alpha_{2}^{-1})^{3} = 1.\] **Proposition 3.3**.: \(L_{n}^{{}^{\prime}}=\gamma_{3}(L_{n})\) _and \(VL_{n}^{{}^{\prime}}=\gamma_{3}(VL_{n})\) for \(n\geq 3\)._ Proof.: Consider the short exact sequence \[1\longrightarrow L_{n}^{{}^{\prime}}/\gamma_{3}\left(L_{n}\right) \longrightarrow L_{n}/\gamma_{3}\left(L_{n}\right)\longrightarrow\mathbb{Z}_{2} \longrightarrow 1.\] Since coset of each \(y_{i}\) maps to the generator of \(\mathbb{Z}_{2}\), it follows that there exist \(\alpha_{i}\in L_{n}^{{}^{\prime}}\) such that \(y_{i}=\alpha_{i}y_{1}\mod\gamma_{3}\left(L_{n}\right)\) for each \(1\leq i\leq n-1\), where \(\alpha_{1}=1\). Using the braid relation in \(L_{n}/\gamma_{3}\left(L_{n}\right)\), we obtain \[\alpha_{i}y_{1}\alpha_{i+1}y_{1}\alpha_{i}y_{1}=\alpha_{i+1}y_{1}\alpha_{i}y_{ 1}\alpha_{i+1}y_{1}\mod\gamma_{3}\left(L_{n}\right) \tag{3.0.19}\] for all \(1\leq i\leq n-1\). 
Since \(\alpha_{1}=1\) and \(L_{n}^{{}^{\prime}}/\gamma_{3}\left(L_{n}\right)\subseteq\mathrm{Z}(L_{n}/ \gamma_{3}\left(L_{n}\right))\), equation (3.0.19) gives \[\alpha_{i}=1\mod\gamma_{3}\left(L_{n}\right)\] for all \(1\leq i\leq n-1\). This implies that \(L_{n}/\gamma_{3}\left(L_{n}\right)\) is cyclic of order \(2\), and hence \(L_{n}^{{}^{\prime}}/\gamma_{3}\left(L_{n}\right)=1\), which is desired. The proof of \(VL_{n}^{{}^{\prime}}=\gamma_{3}(VL_{n})\) is similar to that of Proposition 2.3 with \(s_{i}\) replaced by \(y_{i}\). **Corollary 3.4**.: \(L_{n}\) _and \(VL_{n}\) are residually nilpotent if and only if \(n=2\)._ ## 4. Presentation of the pure virtual triplet group Recall that, for \(n\geq 2\), the virtual triplet group \(VL_{n}\) has a presentation with generating set \(\{y_{1},y_{2},\ldots,y_{n-1},\rho_{1},\rho_{2},\ldots,\rho_{n-1}\}\) and defining relations as follows: \[y_{i}^{2} = 1\quad\text{for}\quad 1\leq i\leq n-1, \tag{4.0.2}\] \[y_{i}y_{i+1}y_{i} = y_{i+1}y_{i}y_{i+1}\quad\text{for}\quad 1\leq i\leq n-2,\] (4.0.3) \[\rho_{i}^{2} = 1\quad\text{for}\quad 1\leq i\leq n-1,\] (4.0.4) \[\rho_{i}\rho_{j} = \rho_{j}\rho_{i}\quad\text{for}\quad|i-j|\geq 2,\] (4.0.5) \[\rho_{i}\rho_{i+1}\rho_{i} = \rho_{i+1}\rho_{i}\rho_{i+1}\quad\text{for}\quad 1\leq i\leq n-2,\] (4.0.6) \[\rho_{i}y_{j} = y_{j}\rho_{i}\quad\text{for}\quad|i-j|\geq 2,\] (4.0.7) \[\rho_{i}\rho_{i+1}y_{i} = y_{i+1}\rho_{i}\rho_{i+1}\quad\text{for}\quad 1\leq i\leq n-2. \tag{4.0.1}\] Let \(\pi:VL_{n}\longrightarrow S_{n}\) denote the natural epimorphism given by \(\pi\left(y_{i}\right)=\pi\left(\rho_{i}\right)=\tau_{i}\) for \(1\leq i\leq n-1\), where \(\tau_{i}\) denotes the transposition \(\left(i,i+1\right)\) in \(S_{n}\). Then, we have \(PVL_{n}=\ker(\pi)\). Further, the monomorphism \(S_{n}\to VL_{n}\) given by \(\tau_{i}\mapsto\rho_{i}\) gives a splitting of the exact sequence \[1\longrightarrow PVL_{n}\longrightarrow VL_{n}\stackrel{{\pi}}{{ \longrightarrow}}S_{n}\longrightarrow 1.\] We take the set \[\mathrm{M}_{n}=\left\{m_{1,i_{1}}m_{2,i_{2}}\ldots m_{n-1,i_{n-1}}\ \mid\ m_{k,i_{k}}=\rho_{k}\rho_{k-1}\ldots\rho_{i_{k}+1}\ \text{ for }1\leq k\leq n-1\text{ and }\ 0\leq i_{k}<k\right\}\] as a Schreier system of coset representatives of \(PVL_{n}\) in \(VL_{n}\). Further, we set \(m_{k,k}=1\) for \(1\leq k\leq n-1\). For an element \(w\in VL_{n}\), let \(\overline{w}\) denote the unique coset representative of the coset of \(w\) in the Schreier set \(\mathrm{M}_{n}\). Then \(PVL_{n}\) is generated by \[\{\gamma(\mu,a)\ \mid\ \mu\in\mathrm{M}_{n}\ \text{ and }\ a\in\{y_{1},\ldots,y_{n-1},\rho_{1},\ldots,\rho_{n-1}\}\}\] and has defining relations \[\{\tau\left(\mu r\mu^{-1}\right)\ \mid\ \mu\in\mathrm{M}_{n}\text{ and }r\text{ is a defining relation in }VL_{n}\}.\] We set \[\kappa_{i,i+1}:=y_{i}\rho_{i}\] for each \(1\leq i\leq n-1\) and \[\kappa_{i,j}:=\rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\kappa_{i,i+1}\rho_{i+1}\ldots \rho_{j-2}\rho_{j-1}\] for each \(1\leq i<j\leq n\) and \(j\neq i+1\). Let \(\mathcal{S}=\left\{\kappa_{i,j}\mid 1\leq i<j\leq n\right\}\) and \(\mathcal{S}\sqcup\mathcal{S}^{-1}=\left\{\kappa_{i,j}^{\pm 1}\mid\kappa_{i,j} \in\mathcal{S}\right\}\). For ease of notation, we set \(\kappa_{j,i}:=\kappa_{i,j}^{-1}\) whenever \(i<j\). 
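As a sanity check (not part of the original argument), the following pure-Python sketch verifies that the set \(\mathrm{M}_{n}\) above is indeed a transversal: sending \(\rho_{j}\mapsto\tau_{j}\) and multiplying out the words \(m_{1,i_{1}}m_{2,i_{2}}\ldots m_{n-1,i_{n-1}}\) produces every element of \(S_{n}\) exactly once (illustrated here for \(n=5\)).

```python
from itertools import product
from math import factorial

def transposition(n, j):
    """tau_j = (j, j+1) as a tuple of images of 1, ..., n."""
    img = list(range(1, n + 1))
    img[j - 1], img[j] = img[j], img[j - 1]
    return tuple(img)

def compose(p, q):
    """Composite permutation: (p*q)(x) = p(q(x))."""
    return tuple(p[q[x] - 1] for x in range(len(q)))

def m(n, k, i):
    """Image in S_n of m_{k,i} = rho_k rho_{k-1} ... rho_{i+1}, with m_{k,k} = 1."""
    perm = tuple(range(1, n + 1))
    for j in range(k, i, -1):
        perm = compose(perm, transposition(n, j))
    return perm

n = 5
images = set()
for choice in product(*[range(k + 1) for k in range(1, n)]):   # i_k in {0, ..., k}
    word = tuple(range(1, n + 1))
    for k, i in enumerate(choice, start=1):
        word = compose(word, m(n, k, i))
    images.add(word)
print(len(images) == factorial(n))   # True: the n! words hit every element of S_5 once
```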
**Lemma 4.1**.: _The conjugation action of \(S_{n}\cong\langle\rho_{1},\ldots,\rho_{n-1}\rangle\) on \(\mathcal{S}\sqcup\mathcal{S}^{-1}\) is given by_ \[\rho_{i}:\begin{cases}\kappa_{i,i+1}\longleftrightarrow\kappa_{i,i+1}^{-1}, \\ \kappa_{i,j}\longleftrightarrow\kappa_{i+1,j}\\ \kappa_{j,i}\longleftrightarrow\kappa_{j,i+1}\\ \kappa_{k,l}\longleftrightarrow\kappa_{k,l}\end{cases}\qquad\text{for}\quad i +2\leq j\leq n,\] _More precisely, \(S_{n}\) acts on \(\mathcal{S}\sqcup\mathcal{S}^{-1}\) by the rule_ \[\rho\cdot\kappa_{i,j}=\kappa_{\rho(i),\rho(j)}\] _for each \(\rho\in S_{n}\) and \(\kappa_{i,j}\in\mathcal{S}\)._ Proof.: First, we consider action on the generators \(\kappa_{i,i+1}\) for \(1\leq i\leq n-1\). 1. If \(1\leq k\leq i-2\) or \(i+2\leq k\leq n-1\), then \(\rho_{k}\kappa_{i,i+1}\rho_{k}=\kappa_{i,i+1}\). 2. If \(k=i-1\), then \[\rho_{k}\kappa_{i,i+1}\rho_{k} = \rho_{i-1}y_{i}\rho_{i}\rho_{i-1}\] \[= \rho_{i-1}y_{i}\rho_{i-1}(\rho_{i-1}\rho_{i}\rho_{i-1})\] \[= \rho_{i-1}(y_{i}\rho_{i-1}\rho_{i})\rho_{i-1}\rho_{i}\] \[= \rho_{i}\left(y_{i-1}\rho_{i-1}\right)\rho_{i}\] \[= \kappa_{i-1,i+1}.\] 3. If \(k=i\), then \(\rho_{k}\kappa_{i,i+1}\rho_{k}=\kappa_{i,i+1}^{-1}\). 4. If \(k=i+1\), then \(\rho_{k}\kappa_{i,i+1}\rho_{k}=\kappa_{i,i+2}\). Next, we consider action on \(\kappa_{i,j}\) for \(1\leq i<j\leq n\) with \(j\neq i+1\). 1. If \(1\leq k\leq i-2\) or \(j+1\leq k\leq n-1\), then \(\rho_{k}\kappa_{i,j}\rho_{k}=\kappa_{i,j}\). 2. For \(k=i-1\), we have \[\rho_{i-1}\kappa_{i,j}\rho_{i-1} = \rho_{i-1}\rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\kappa_{i,i+1}\rho_ {i+1}\ldots\rho_{j-2}\rho_{j-1}\rho_{i-1}\] \[= \rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\left(\rho_{i-1}\kappa_{i,i+1 }\rho_{i-1}\right)\rho_{i+1}\ldots\rho_{j-2}\rho_{j-1}\] \[= \rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\rho_{i}\left(y_{i-1}\rho_{i- 1}\right)\rho_{i}\rho_{i+1}\ldots\rho_{j-2}\rho_{j-1}\] \[= \kappa_{i-1,j}.\] 3. For \(k=i\), we have \[\rho_{i}\kappa_{i,j}\rho_{i} = \rho_{i}\rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\kappa_{i,i+1}\rho_{i+ 1}\ldots\rho_{j-2}\rho_{j-1}\rho_{i}\] \[= \rho_{j-1}\rho_{j-2}\ldots\rho_{i}\rho_{i+1}\kappa_{i,i+1}\rho_{i+ 1}\rho_{i}\ldots\rho_{j-2}\rho_{j-1}\] \[= \rho_{j-1}\rho_{j-2}\ldots(\rho_{i}\rho_{i+1}y_{i})\rho_{i}\rho_{ i+1}\rho_{i}\ldots\rho_{j-2}\rho_{j-1}\] \[= \rho_{j-1}\rho_{j-2}\ldots\rho_{i+2}y_{i+1}\big{(}\rho_{i}\rho_{i+ 1}\rho_{i}\rho_{i+1}\rho_{i}\big{)}\ldots\rho_{j-2}\rho_{j-1}\] \[= \rho_{j-1}\rho_{j-2}\ldots\rho_{i+2}\left(y_{i+1}\rho_{i+1}\right) \rho_{i+2}\ldots\rho_{j-2}\rho_{j-1}\] \[= \kappa_{i+1,j}.\] 4. If \(i+1\leq k\leq j-2\), then \[\rho_{k}\kappa_{i,j}\rho_{k} = \rho_{k}\rho_{j-1}\ldots\rho_{k+1}\rho_{k}\ldots\rho_{i+1}\kappa _{i,i+1}\rho_{i+1}\ldots\rho_{k}\rho_{k+1}\ldots\rho_{j-1}\rho_{k}\] \[= \rho_{j-1}\ldots\rho_{k}\rho_{k+1}\rho_{k}\ldots\rho_{i+1}\kappa _{i,i+1}\rho_{i+1}\ldots\rho_{k}\rho_{k+1}\rho_{k}\ldots\rho_{j-1}\] \[= \rho_{j-1}\ldots\rho_{k+1}\rho_{k}\rho_{k+1}\rho_{k-1}\ldots\rho _{i+1}\kappa_{i,i+1}\rho_{i+1}\ldots\rho_{k-1}\rho_{k+1}\rho_{k}\rho_{k+1} \ldots\rho_{j-1}\] \[= \rho_{j-1}\ldots\rho_{k+1}\rho_{k}\rho_{k-1}\ldots\rho_{i+1}\kappa _{i,i+1}\rho_{i+1}\ldots\rho_{k-1}\rho_{k}\rho_{k+1}\ldots\rho_{j-1}\] \[= \kappa_{i,j}.\] 5. If \(k=j-1\), then \(\rho_{k}\kappa_{i,j}\rho_{k}=\kappa_{i,j-1}\). 6. Finally, if \(k=j\), then \(\rho_{k}\kappa_{i,j}\rho_{k}=\kappa_{i,j+1}\). This completes the proof of the lemma. 
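As a quick consistency check (not part of the original proof), the rule \(\rho\cdot\kappa_{i,j}=\kappa_{\rho(i),\rho(j)}\) can be tested against the defining conjugation \(\kappa_{i,j}=\rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\kappa_{i,i+1}\rho_{i+1}\ldots\rho_{j-2}\rho_{j-1}\): acting on the pair \((i,i+1)\) by \(\tau_{i+1},\ldots,\tau_{j-1}\) (innermost conjugation first) must return the pair \((i,j)\). A tiny Python sketch:

```python
# Check that the action rule of Lemma 4.1 is consistent with the definition of kappa_{i,j}.
def tau(k):
    """The transposition (k, k+1) as a function on positive integers."""
    return lambda x: k + 1 if x == k else k if x == k + 1 else x

def act(perm, pair):
    """rho . kappa_{a,b} = kappa_{rho(a), rho(b)}, encoded on ordered pairs."""
    a, b = pair
    return (perm(a), perm(b))

n = 8
for i in range(1, n):
    for j in range(i + 2, n + 1):
        pair = (i, i + 1)
        for k in range(i + 1, j):          # conjugate by rho_{i+1}, then rho_{i+2}, ...
            pair = act(tau(k), pair)
        assert pair == (i, j), (i, j, pair)
print("Lemma 4.1 is consistent with the definition of kappa_{i,j} for n =", n)
```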
**Theorem 4.2**.: _The pure virtual triplet group \(PVL_{n}\) has a presentation with generating set \(\mathcal{S}=\{\kappa_{i,j}\mid 1\leq i<j\leq n\}\) and defining relations_ \[\kappa_{i,j}\kappa_{i,k}\kappa_{j,k}=\kappa_{j,k}\kappa_{i,k}\kappa_{i,j},\] _where \(1\leq i<j<k\leq n\)._ Proof.: We begin by observing that \[\overline{\alpha}=\alpha\quad\text{and}\quad\overline{\alpha_{1}y_{i_{1}} \ldots\alpha_{k}y_{i_{k}}}=\alpha_{1}\rho_{i_{1}}\ldots\alpha_{k}\rho_{i_{k}}\] for words \(\alpha\) and \(\alpha_{j}\) in the generators \(\{\rho_{1},\ldots,\rho_{n-1}\}\). Thus, we have \[\gamma\left(\mu,\rho_{i}\right)=\left(\mu\rho_{i}\right)\left(\mu\rho_{i} \right)^{-1}=1\] and \[\gamma\left(\mu,y_{i}\right)=\left(\mu y_{i}\right)\left(\mu\rho_{i}\right)^{-1 }=\mu y_{i}\rho_{i}\mu^{-1}=\mu\kappa_{i,i+1}\mu^{-1}\] for each \(\mu\in\mathrm{M}_{n}\) and \(1\leq i\leq n-1\). It follows from Lemma 4.1 that each \(\gamma\left(\mu,y_{i}\right)\) lie in \(\mathcal{S}\sqcup\mathcal{S}^{-1}\). Conversely, if \(\kappa_{i,j}\in\mathcal{S}\) is an arbitrary element, then conjugation by \(\left(\rho_{i-1}\rho_{i-2}\ldots\rho_{2}\rho_{1}\right)\left(\rho_{j-1}\rho_{j -2}\ldots\rho_{3}\rho_{2}\right)\) maps \(\kappa_{1,2}\) to \(\kappa_{i,j}\). Similarly, conjugation by the element \(\left(\rho_{i-1}\rho_{i-2}\ldots\rho_{3}\rho_{2}\right)\left(\rho_{j-1}\rho_{j -2}\ldots\rho_{2}\rho_{1}\right)\) maps \(\kappa_{1,2}\) to \(\kappa_{i,j}^{-1}\) (\(=\kappa_{j,i}\)). Thus, \(\mathcal{S}\sqcup\mathcal{S}^{-1}\), and consequently \(\mathcal{S}\) generates \(PVL_{n}\). Next, we determine the defining relations in \(PVL_{n}\). Consider an element \(\mu=\rho_{i_{1}}\rho_{i_{2}}\ldots\rho_{i_{k}}\in\mathrm{M}_{n}.\) Since \(\gamma\left(\mu,\rho_{i}\right)=1\) for all \(i\), no non-trivial relations for \(PVL_{n}\) can be obtained from the relations (4.0.3), (4.0.4) and (4.0.5) of \(VL_{n}\). We now consider the remaining relations of \(VL_{n}\). 1. First, consider the relations \(y_{i}^{2}=1\) for \(1\leq i\leq n-1\). Then we have \[\tau\left(\mu y_{i}^{2}\mu^{-1}\right) = \gamma\left(\rho_{i_{1}}\ldots\rho_{i_{k}}y_{i}y_{i}\rho_{i_{k}} \ldots\rho_{i_{1}}\right)\] \[= \gamma\left(1,\rho_{i_{1}}\right)\gamma\left(\overline{\rho_{i_{1 }}},\rho_{i_{2}}\right)\ldots\gamma\left(\bar{\mu},y_{i}\right)\gamma\left( \overline{\mu y_{i}},y_{i}\right)\ldots\gamma\left(\overline{\mu y_{i}y_{i} \rho_{i_{k}}\ldots\rho_{i_{2}}},\rho_{i_{1}}\right)\] \[= \gamma\left(\bar{\mu},y_{i}\right)\gamma\left(\overline{\mu y_{i} },y_{i}\right)\] \[= \gamma\left(\mu,y_{i}\right)\gamma\left(\mu\rho_{i},y_{i}\right)\] \[= \gamma\left(\mu,y_{i}\right)\left(\mu\rho_{i}y_{i}\mu^{-1}\right)\] \[= \gamma\left(\mu,y_{i}\right)\gamma\left(\mu,y_{i}\right)^{-1},\] which does not yield any non-trivial relation in \(PVL_{n}\). 2. 
For the relations \((y_{i}y_{i+1})^{3}=1\) for \(1\leq i\leq n-2\), we have \[\tau\left(\mu(y_{i}y_{i+1})^{3}\mu^{-1}\right) = \gamma\left(\rho_{i_{1}}\ldots\rho_{i_{k}}y_{i}y_{i+1}y_{i}y_{i+ 1}y_{i}y_{i+1}\rho_{i_{k}}\ldots\rho_{i_{1}}\right)\] \[= \gamma\left(\bar{\mu},y_{i}\right)\gamma\left(\overline{\mu y_{i }},y_{i+1}\right)\gamma\left(\overline{\mu y_{i}y_{i+1}},y_{i}\right)\] \[\gamma\left(\overline{\mu y_{i}y_{i+1}y_{i}},y_{i+1}\right)\gamma \left(\overline{\mu y_{i}y_{i+1}y_{i}y_{i+1}},y_{i}\right)\gamma\left( \overline{\mu y_{i}y_{i+1}y_{i}y_{i+1}y_{i}},y_{i+1}\right)\] \[= \gamma\left(\mu,y_{i}\right)\gamma\left(\mu\rho_{i},y_{i+1}\right) \gamma\left(\mu\rho_{i}\rho_{i+1},y_{i}\right)\] \[\gamma\left(\mu\rho_{i}\rho_{i+1}\rho_{i},y_{i+1}\right)\gamma \left(\mu\rho_{i}\rho_{i+1}\rho_{i}\rho_{i+1},y_{i}\right)\gamma\left(\mu\rho _{i}\rho_{i+1}\rho_{i}\rho_{i+1}\rho_{i},y_{i+1}\right)\] \[= \left(\mu\kappa_{i,i+1}\mu^{-1}\right)\left(\mu\kappa_{i,i+2}\mu^ {-1}\right)\left(\mu\kappa_{i+1,i+2}\mu^{-1}\right)\] \[\left(\mu\kappa_{i,i+1}\mu^{-1}\right)^{-1}\left(\mu\kappa_{i,i+2 }\mu^{-1}\right)^{-1}\left(\mu\kappa_{i+1,i+2}\mu^{-1}\right)^{-1}\] \[= \kappa_{\mu(i),\mu(i+1)}\kappa_{\mu(i),\mu(i+2)}\kappa_{\mu(i+1),\mu(i+2)}\kappa_{\mu(i),\mu(i+1)}^{-1}\kappa_{\mu(i),\mu(i+2)}^{-1}\kappa_{ \mu(i+1),\mu(i+2)}^{-1}.\] This gives non-trivial relations (4.0.8) \[\kappa_{\mu(i),\mu(i+1)}\kappa_{\mu(i),\mu(i+2)}\kappa_{\mu(i+1),\mu(i+2)}=\kappa_{\mu(i+1),\mu(i+2)}\kappa_{\mu(i),\mu(i+2)}\kappa_{\mu(i), \mu(i+1)}\] in \(PVL_{n}\). Since \(S_{n}\) action on \(\{1,2,\ldots,n\}\) is triply-transitive, (4.0.8) gives the non-trivial relations (4.0.9) \[\kappa_{i,j}\kappa_{i,k}\kappa_{j,k}=\kappa_{j,k}\kappa_{i,k}\kappa_{i,j},\] where \(i,j,k\) are all distinct. We claim that the relations (4.0.9) are consequences of the relations (4.0.10) \[\kappa_{p,q}\kappa_{p,r}\kappa_{q,r}=\kappa_{q,r}\kappa_{p,r}\kappa_{p,q},\] where \(1\leq p<q<r\leq n\). Note that, given three distinct integers \(i,j,k\), one of the following holds: 1. \(i<j<k\) 2. \(i<k<j\) 3. \(k<j<i\) 4. \(k<i<j\) 5. \(j<i<k\) 6. \(j<k<i\) 7. Nothing needs to be done in Case (a). In Case (b), the relation (4.0.9) becomes \[\kappa_{i,j}\kappa_{i,k}\kappa_{k,j}^{-1}=\kappa_{k,j}^{-1}\kappa_{i,k}\kappa_ {i,j}.\] Rewriting it gives \[\kappa_{k,j}\kappa_{i,j}\kappa_{i,k}=\kappa_{i,k}\kappa_{i,j}\kappa_{k,j},\] which is the relation (4.0.10). In Case (c), the relation (4.0.9) becomes \[\kappa_{j,i}^{-1}\kappa_{k,i}^{-1}\kappa_{k,j}^{-1}=\kappa_{k,j}^{-1}\kappa_{k,i }^{-1}\kappa_{j,i}^{-1}.\] Taking inverses give \[\kappa_{k,j}\kappa_{k,i}\kappa_{j,i}=\kappa_{j,i}\kappa_{k,i}\kappa_{k,j},\] which is the relation (4.0.10). The remaining cases can sorted out in a similar manner. 3. 
In case of the relations \(\left(y_{i}\rho_{j}\right)^{2}=1\) for \(|i-j|\geq 2\), we obtain \[\tau\left(\mu y_{i}\rho_{j}y_{i}\rho_{j}\mu^{-1}\right) = \gamma\left(\bar{\mu},y_{i}\right)\gamma\left(\overline{\mu y_{ i}},\rho_{j}\right)\gamma\left(\overline{\mu y_{i}\rho_{j}},y_{i}\right) \gamma\left(\overline{\mu y_{i}\rho_{j}y_{i}},\rho_{j}\right)\] \[= \gamma\left(\bar{\mu},y_{i}\right)\gamma\left(\overline{\mu y_{ i}\rho_{j}},y_{i}\right)\] \[= \gamma\left(\mu,y_{i}\right)\gamma\left(\mu\rho_{i}\rho_{j},y_{i }\right)\] \[= \gamma\left(\mu,y_{i}\right)\left(\mu\rho_{i}\rho_{j}y_{i}\rho_{ i}\rho_{j}\rho_{i}\mu^{-1}\right)\] \[= \gamma\left(\mu,y_{i}\right)\left(\mu\rho_{i}y_{i}\mu^{-1}\right)\] \[= \gamma\left(\mu,y_{i}\right)\left(\mu y_{i}\rho_{i}\mu^{-1}\right) ^{-1}\] \[= \gamma\left(\mu,y_{i}\right)\gamma\left(\mu,y_{i}\right)^{-1},\] which does not yield any non-trivial relation in \(PVL_{n}\). 4. Finally, consider the relations \(\rho_{i}y_{i+1}\rho_{i}\rho_{i+1}y_{i}\rho_{i+1}=1\) for \(1\leq i\leq n-2\). Computing \[\tau\left(\mu\rho_{i}y_{i+1}\rho_{i}\rho_{i+1}y_{i}\rho_{i+1}\mu ^{-1}\right) = \gamma\left(\overline{\mu\rho_{i}},y_{i+1}\right)\gamma\left( \overline{\mu\rho_{i}y_{i+1}\rho_{i}\rho_{i+1}},y_{i}\right)\] \[= \gamma\left(\mu\rho_{i},y_{i+1}\right)\gamma\left(\mu\rho_{i}\rho _{i+1}\rho_{i}\rho_{i+1},y_{i}\right)\] \[= \gamma\left(\mu\rho_{i},y_{i+1}\right)\left(\mu\rho_{i+1}\rho_{i} y_{i}\rho_{i+1}\mu^{-1}\right)\] \[= \gamma\left(\mu\rho_{i},y_{i+1}\right)\left(\mu\rho_{i+1}\rho_{i} \rho_{i+1}\rho_{i+1}y_{i}\rho_{i+1}\mu^{-1}\right)\] \[= \gamma\left(\mu\rho_{i},y_{i+1}\right)\left(\mu\rho_{i}\rho_{i+1} y_{i+1}\rho_{i}\mu^{-1}\right)\] \[= \gamma\left(\mu\rho_{i},y_{i+1}\right)\gamma\left(\mu\rho_{i},y_{ i+1}\right)^{-1},\] gives only trivial relations in \(PVL_{n}\). Thus, the only non-trivial relations amongst elements of \(\mathcal{S}\) are of the form \(\kappa_{i,j}\kappa_{i,k}\kappa_{j,k}=\kappa_{j,k}\kappa_{i,k}\kappa_{i,j}\), where \(1\leq i<j<k\leq n\). This completes the proof of the theorem. ## 5. Crystallographic quotients of virtual triplet groups A closed subgroup \(H\) of a Hausdorff topological group \(G\) is said to be _uniform_ if \(G/H\) is a compact space. A discrete and uniform subgroup \(G\) of \(\mathbb{R}^{n}\rtimes\mathrm{O}(n,\mathbb{R})\) is called a _crystallographic group_ of dimension \(n\). If in addition \(G\) is torsion free, then it is called a _Bieberbach group_ of dimension \(n\). The following characterisation of crystallographic groups is well-known [12, Lemma 8]. **Lemma 5.1**.: _A group \(G\) is a crystallographic group if and only if there is an integer \(n\), a finite group \(H\) and a short exact sequence_ \[0\longrightarrow\mathbb{Z}^{n}\longrightarrow G\overset{\eta}{\longrightarrow} H\longrightarrow 1\] _such that the integral representation \(\Theta:H\longrightarrow\operatorname{Aut}\left(\mathbb{Z}^{n}\right)\) defined by \(\Theta(h)(x)=zxz^{-1}\) is faithful, where \(h\in H\), \(x\in\mathbb{Z}^{n}\) and \(z\in G\) is such that \(\eta(z)=h\)._ The group \(H\) is called the _holonomy group_ of \(G\), the integer \(n\) is called the _dimension_ of \(G\), and \(\Theta:H\longrightarrow\operatorname{Aut}\left(\mathbb{Z}^{n}\right)\) is called the _holonomy representation_ of \(G\). It is known that any finite group is the holonomy group of some flat manifold [8, Theorem III.5.2]. Furthermore, there is a correspondence between the class of Bieberbach groups and the class of compact flat Riemannian manifolds [9, Theorem 2.1.1]. 
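To fix ideas (a standard example, not discussed in this paper): the Klein bottle group \(\langle a,b\mid aba^{-1}=b^{-1}\rangle\) contains \(\langle a^{2},b\rangle\cong\mathbb{Z}^{2}\) as a subgroup of index \(2\), the conjugation action of the nontrivial coset on \(\mathbb{Z}^{2}\) is the faithful integral representation \(\operatorname{diag}(1,-1)\), and the group is torsion free. By Lemma 5.1, it is therefore a Bieberbach group of dimension \(2\) with holonomy group \(\mathbb{Z}/2\mathbb{Z}\), corresponding to the flat Klein bottle.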
Crystallographic quotients of virtual braid and virtual twin groups have been considered in [28, Theorem 2.4] and [28, Theorem 3.5], respectively. **Theorem 5.2**.: _For \(n\geq 2\), there is a split short exact sequence_ \[1\longrightarrow\mathbb{Z}^{n(n-1)}\longrightarrow VB_{n}/VP_{n}^{{}^{\prime} }\longrightarrow S_{n}\longrightarrow 1\] _such that the group \(VB_{n}/VP_{n}^{{}^{\prime}}\) is a crystallographic group of dimension \(n(n-1)\)._ **Theorem 5.3**.: _For \(n\geq 2\), there is a split short exact sequence_ \[1\longrightarrow\mathbb{Z}^{n(n-1)/2}\longrightarrow VT_{n}/PVT_{n}^{{}^{ \prime}}\longrightarrow S_{n}\longrightarrow 1\] _such that the group \(VT_{n}/PVT_{n}^{{}^{\prime}}\) is a crystallographic group of dimension \(n(n-1)/2\)._ We prove the following result for virtual triplet groups. **Theorem 5.4**.: _For \(n\geq 2\), there is a split short exact sequence_ \[1\longrightarrow\mathbb{Z}^{n(n-1)/2}\longrightarrow VL_{n}/PVL_{n}^{{}^{ \prime}}\longrightarrow S_{n}\longrightarrow 1\] _such that the group \(VL_{n}/PVL_{n}^{{}^{\prime}}\) is a crystallographic group of dimension \(n(n-1)/2\)._ Proof.: Notice that \(PVL_{n}/PVL_{n}^{{}^{\prime}}\cong\mathbb{Z}^{n(n-1)/2}\). The split short exact sequence \[1\to PVL_{n}\to VL_{n}\to S_{n}\to 1\] induces the desired split short exact sequence \[1\longrightarrow\mathbb{Z}^{n(n-1)/2}\longrightarrow VL_{n}/PVL_{n}^{{}^{ \prime}}\overset{\pi}{\longrightarrow}S_{n}\longrightarrow 1.\] Let \(\Theta:S_{n}\rightarrow\operatorname{Aut}(PVL_{n}/PVL_{n}^{{}^{\prime}}))\) be the action of \(S_{n}\) on \(PVL_{n}/PVL_{n}^{{}^{\prime}}\), which is induced by the action given in Lemma 4.1. That is, for \(\rho\in S_{n}\) and \(\kappa_{i,j}\in PVL_{n}\), we have \(\Theta(\rho)(\kappa_{i,j})=\kappa_{\rho(i),\rho(j)}\mod PVL_{n}^{{}^{\prime}}\). Thus, \(\kappa_{\rho(i),\rho(j)}=\kappa_{i,j}\mod PVL_{n}^{{}^{\prime}}\) for all \(1\leq i<j\leq n\) if and only if \(\rho=1\). Hence, the holonomy representation is faithful and \(VL_{n}/PVL_{n}^{{}^{\prime}}\) is a crystallographic group. Next, we determine torsion in the crystallographic quotient considered above. Since \(VL_{n}=PVL_{n}\rtimes S_{n}\), any \(\beta\in VL_{n}\) can be written uniquely as \(\beta=w\theta\) for some \(w\in PVL_{n}\) and \(\theta\in S_{n}\). The element \(\beta\) acts on the set \(\{\kappa_{i,j}\mid 1\leq i\neq j\leq n\}\) via the action of its image \(\pi(\beta)=\theta\) (see Lemma 4.1). We denote the orbit of an element \(\kappa_{i,j}\) under this action by \(\mathcal{O}_{\theta}\left(\kappa_{i,j}\right)\). The following theorem is an analogue of similar results for virtual braid groups and virtual twin groups [28, Theorem 2.7 and Theorem 3.6]. **Theorem 5.5**.: _For each \(2\leq t\leq n\), let \(1\leq r_{1}<r_{2}<\cdots<r_{t-1}\leq n\) be a sequence of consecutive integers. Let \(\pi\left(\rho_{r_{1}}\rho_{r_{2}}\ldots\rho_{r_{t-1}}\right)=\theta\) and \(\mathcal{T}_{\theta}\) denote a set of representatives of orbits of the action of \(\rho_{r_{1}}\rho_{r_{2}}\ldots\rho_{r_{t-1}}\) on \(\{\kappa_{i,j}\mid 1\leq i\neq j\leq n\}\). Then the coset of the element_ \[(\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\left(\rho_{r_{1}}\rho_{r_{2}} \ldots\rho_{r_{t-1}}\right)\] _has order \(t\) in \(VL_{n}/PVL_{n}^{{}^{\prime}}\) if and only if_ \[a_{r,s}+a_{\theta^{-1}(r),\theta^{-1}(s)}+\cdots+a_{\theta^{-(t-1)}(r),\theta^ {-(t-1)}(s)}=0\] _for all \(\kappa_{r,s}\in\mathcal{T}_{\theta}\). 
Here, \(a_{i,j}\in\mathbb{Z}\) and \(a_{\theta^{\ell}(r),\theta^{\ell}(s)}:=-a_{\theta^{\ell}(s),\theta^{\ell}(r)}\) whenever \(1\leq\theta^{\ell}(s)<\theta^{\ell}(r)\leq n\)._ Proof.: We assume that \(r_{i}=i\) for all \(1\leq i\leq t-1\), and the other cases can be proved in a similar way. Note that, for each \(2\leq t\leq n\), the element \(\rho_{1}\rho_{2}\ldots\rho_{t-1}\) has order \(t\) in \(VL_{n}\). We will analyse the conditions on \(a_{i,j}\in\mathbb{Z}\) under which the coset of the element \((\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})(\rho_{1}\rho_{2}\ldots\rho_{t- 1})\) has order \(t\) in \(VL_{n}/PVL_{n}^{{}^{\prime}}\). To proceed, let \(\pi\left(\rho_{1}\rho_{2}\ldots\rho_{t-1}\right)=\theta\) and \(\mathcal{T}_{\theta}=\{\kappa_{r_{1},s_{1}},\kappa_{r_{2},s_{2}},\ldots,\kappa _{r_{m},s_{m}}\}\) a set of representatives of orbits of the action of \(\rho_{1}\rho_{2}\ldots\rho_{t-1}\) on \(\{\kappa_{i,j}\mid 1\leq i\neq j\leq n\}\). Then, we have \[\left((\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\left(\rho_{ 1}\rho_{2}\ldots\rho_{t-1}\right)\right)^{t}\] \[= (\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\left(\rho_{1}\rho _{2}\ldots\rho_{t-1}\right)\cdots(\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j} })\left(\rho_{1}\rho_{2}\ldots\rho_{t-1}\right)\] \[= (\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\left((\rho_{1} \ldots\rho_{t-1})\left(\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\left( \rho_{1}\ldots\rho_{t-1}\right)^{-1}\right)\] \[\left((\rho_{1}\ldots\rho_{t-1})^{2}\left(\prod_{1\leq i<j\leq n }\kappa_{i,j}^{a_{i,j}})\left(\rho_{1}\ldots\rho_{t-1}\right)^{-2}\right)\cdots\] \[\cdots\left((\rho_{1}\ldots\rho_{t-1})^{t-1}\left(\prod_{1\leq i <j\leq n}\kappa_{i,j}^{a_{i,j}})\left(\rho_{1}\ldots\rho_{t-1}\right)^{-(t-1)} \right)\left(\rho_{1}\ldots\rho_{t-1}\right)^{t}\] \[= (\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\ \left(\theta\cdot( \prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\right)\ \left(\theta^{2}\cdot( \prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})\right)\cdots\] \[\cdots\left(\theta^{t-1}\cdot(\prod_{1\leq i<j\leq n}\kappa_{i,j} ^{a_{i,j}})\right)\ \left(\rho_{1}\ldots\rho_{t-1}\right)^{t}\] \[= (\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})(\prod_{1\leq i<j \leq n}\kappa_{\theta(i),\theta(j)}^{a_{i,j}})(\prod_{1\leq i<j\leq n}\kappa_{ \theta^{2}(i),\theta^{2}(j)}^{a_{i,j}})\cdots(\prod_{1\leq i<j\leq n}\kappa_{ \theta^{t-1}(i),\theta^{t-1}(j)}^{a_{i,j}}).\] The last expression implies that the total exponent sum of a generator \(\kappa_{i,j}\) (with \(1\leq i<j\leq n\)) is the same as that of \(\kappa_{\theta^{\ell}(i),\theta^{\ell}(j))}\) for each \(0\leq\ell\leq t-1\). Thus, using \(\mathcal{T}_{\theta}\), we see that \[\left((\prod_{1\leq i<j\leq n}\kappa_{i,j}^{a_{i,j}})(\rho_{1}\rho_{2}\ldots \rho_{t-1})\right)^{t}=1\mod PVL_{n}^{{}^{\prime}}\] if and only if the following system of integer equations has a solution \[\begin{cases}a_{r_{1},s_{1}}+a_{\theta^{-1}(r_{1}),\theta^{-1}(s_{1})}+\cdots+ a_{\theta^{-(t-1)}(r_{1}),\theta^{-(t-1)}(s_{1})}=0,\\ a_{r_{2},s_{2}}+a_{\theta^{-1}(r_{2}),\theta^{-1}(s_{2})}+\cdots+a_{\theta^{-( t-1)}(r_{2}),\theta^{-(t-1)}(s_{2})}=0,\\ \vdots\vdots\vdots\vdots\vdots\vdots\vdots\vdots\\ a_{r_{m},s_{m}}+a_{\theta^{-1}(r_{m}),\theta^{-1}(s_{m})}+\cdots+a_{\theta^{-( t-1)}(r_{m}),\theta^{-(t-1)}(s_{m})}=0.\end{cases}\] This completes the proof. **Corollary 5.6**.: _Let \(m_{1},m_{2},\ldots,m_{r}\) be positive integers (not necessarily distinct) each greater than 1 such that \(\sum_{i=1}^{r}m_{i}\leq n\). 
Then \(VL_{n}/PVL_{n}^{{}^{\prime}}\) admits infinitely many elements of order \(\operatorname{lcm}\left(m_{1},\ldots,m_{r}\right)\). Further, there exists such an element whose corresponding permutation has cycle type \((m_{1},m_{2},\ldots,m_{r})\)._ Proof.: By Theorem 5.5, there exist elements \(w_{k}\in VL_{n}\) whose cosets have order \(m_{k}\) in \(VL_{n}/PVL_{n}^{{}^{\prime}}\) for each \(1\leq k\leq r\), where \[w_{k}=\begin{cases}(\prod_{\kappa_{i,j}\in\mathcal{O}_{\theta_{1}}(\kappa_{1,2} )}\kappa_{i,j}^{a_{i,j}})\;(\rho_{1}\cdots\rho_{m_{1}-1})&\text{for}\quad k=1,\\ (\prod_{\kappa_{i,j}\in\mathcal{O}_{\theta_{k}}(\kappa_{r_{k},k})}\kappa_{i,j} ^{a_{i,j}})\;(\rho_{\sum_{l=1}^{k-1}m_{l}+1}\cdots\rho_{\sum_{l=1}^{k}m_{l}- 1})&\text{for}\quad 2\leq k\leq r,\end{cases}\] \(a_{i,j}\in\mathbb{Z}\), \(r_{k}=\sum_{l=1}^{k-1}m_{l}+1,s_{k}=\sum_{l=1}^{k-1}m_{l}+2\) and \(\pi\left(w_{k}\right)=\theta_{k}\). Since \(\rho_{\sum_{l=1}^{k_{l}-1}m_{l}+1}\cdots\rho_{\sum_{l=1}^{k_{l}}m_{l}-1}\) and \(\rho_{\sum_{l=1}^{k_{j}-1}m_{l}+1}\cdots\rho_{\sum_{l=1}^{k_{j}}m_{l}-1}\) commute in \(VL_{n}/PVL_{n}^{{}^{\prime}}\), it follows that the element \(w_{1}w_{2}\cdots w_{r}\) has order \(\operatorname{lcm}\left(m_{1},m_{2},\ldots,m_{r}\right)\) in \(VL_{n}/PVL_{n}^{{}^{\prime}}\). Further, \(\pi\left(w_{1}w_{2}\cdots w_{r}\right)\) is an element of \(S_{n}\) of cycle type \((m_{1},m_{2},\ldots,m_{r})\). **Remark 5.7**.: The preceding corollary shows that \(VL_{n}/PVL_{n}^{{}^{\prime}}\) is not a Bieberbach group. In the spirit of [4, Question 3.2], it is interesting to determine Bieberbach subgroups of \(VL_{n}/PVL_{n}^{{}^{\prime}}\). We conclude by tabulating the results for braid groups, twin groups, triplet groups and their virtual counterparts. \begin{tabular}{|c||c|c|c|} \hline & \(B_{n}\) & \(T_{n}\) & \(L_{n}\) \\ \hline \hline Commutator & Finite presentation & Finite presentation & Finite presentation \\ subgroup & known for \(n\geq 2\)[14]. & known for \(n\geq 2\)[10]. & known for \(n\geq 2\)[7, 26]. \\ \hline Pure & Finite presentation & Finite presentation & It is a free group of \\ subgroup & known for \(n\geq 2\)[1]. & known for \(2\leq n\leq 6\) & finite rank for \(n\geq 4\) \\ & & [6, 13, 24]. & [20, 21]. \\ \hline Crystallographic & \(B_{n}/P_{n}^{{}^{\prime}}\) is crystallographic & \(T_{n}/PT_{n}^{{}^{\prime}}\) is crystallographic & \(L_{n}/PL_{n}^{{}^{\prime}}\) is crystallographic \\ quotient & for \(n\geq 2\)[12]. & for \(n\geq 4\)[21]. & for \(n\geq 4\)[21]. \\ \hline \end{tabular} Table 1. \begin{tabular}{|c||c|c|c|} \hline & \(VB_{n}\) & \(VT_{n}\) & \(VL_{n}\) \\ \hline \hline Commutator & Finite generation & Theorem 2.2. & Theorem 3.2. \\ subgroup & known for \(n\geq 4\)[5]. & & \\ & Finite presentation & & \\ & unknown for \(n\geq 4\). & & \\ \hline Pure & Finite presentation & Finite presentation & Theorem 4.2. \\ subgroup & known for \(n\geq 2\)[2]. & known for \(n\geq 2\)[25]. & \\ & & & \\ \hline Crystallographic & \(VB_{n}/VP_{n}^{{}^{\prime}}\) is crystallographic & \(VT_{n}/PVT_{n}^{{}^{\prime}}\) is crystallographic & Theorem 5.4. \\ quotient & for \(n\geq 2\)[28]. & for \(n\geq 2\)[28]. & \\ \hline \end{tabular} Table 2. **Acknowledgement.** Pravin Kumar is supported by the PMRF fellowship at IISER Mohali. Neha Nanda has received funding from European Union's Horizon Europe research and innovation programme under the Marie Sklodowska Curie grant agreement no. 101066588. 
Mahender Singh is supported by the Swarna Jayanti Fellowship grants DST/SJF/MSA-02/2018-19 and SB/SJF/2019-20.
2306.02157
Transforming to Yoked Neural Networks to Improve ANN Structure
Most existing classical artificial neural networks (ANNs) are designed as tree structures to imitate biological neural networks. In this paper, we argue that tree connectivity is not sufficient to characterize a neural network: the nodes of the same level of a tree cannot be connected with each other, i.e., these neural units cannot share information, which is a major drawback of ANNs. Although ANNs have been extended in recent years to more complex structures such as directed acyclic graphs (DAGs), these structures still impose a unidirectional and acyclic bias on the network. In this paper, we propose a method that builds a bidirectional complete graph over the nodes in the same level of an ANN, yoking the nodes of that level into a neural module. We call our model YNN for short. YNN promotes information transfer significantly, which clearly helps to improve the performance of the network. Our YNN imitates neural networks much better than the traditional ANN. In this paper, we analyze the structural bias of existing ANNs and propose the YNN model to efficiently eliminate such bias. In our model, nodes carry out aggregation and transformation of features, while edges determine the flow of information. We further impose an auxiliary sparsity constraint on the distribution of connectedness, which promotes the learned structure to focus on critical connections. Finally, based on the optimized structure, we design a small neural module structure based on the minimum cut technique to reduce the computational burden of the YNN model. This learning process is compatible with existing networks and different tasks. The obtained quantitative experimental results show that the learned connectivity is superior to the traditional NN structure.
Xinshun Liu, Yizhi Fang, Yichao Jiang
2023-06-03T16:56:18Z
http://arxiv.org/abs/2306.02157v3
# Transforming to Yoked Neural Networks to Improve ANN Structure ###### Abstract Most existing classical artificial neural networks (ANN) are designed as a tree structure to imitate neural networks. In this paper, we argue that the connectivity of a tree is not sufficient to characterize a neural network. The nodes of the same level of a tree cannot be connected with each other, i.e., these neural unit cannot share information with each other, which is a major drawback of ANN. Although ANN has been significantly improved in recent years to more complex structures, such as the directed acyclic graph (DAG), these methods also have unidirectional and acyclic bias for ANN. In this paper, we propose a method to build a bidirectional complete graph for the nodes in the same level of an ANN, which yokes the nodes of the same level to formulate a neural module. We call our model as YNN in short. YNN promotes the information transfer significantly which obviously helps in improving the performance of the method. Our YNN can imitate neural networks much better compared with the traditional ANN. In this paper, we analyze the existing structural bias of ANN and propose a model YNN to efficiently eliminate such structural bias. In our model, nodes also carry out aggregation and transformation of features, and edges determine the flow of information. We further impose auxiliary sparsity constraint to the distribution of connectedness, which promotes the learned structure to focus on critical connections. Finally, based on the optimized structure, we also design small neural module structure based on the minimum cut technique to reduce the computational burden of the YNN model. This learning process is compatible with the existing networks and different tasks. The obtained quantitative experimental results reflect that the learned connectivity is superior to the traditional NN structure. Introduction Deep learning successfully transits the feature engineering from manual to automatic design and enables optimization of the mapping function from sample to feature. Consequently, the search for effective neural networks has gradually become an important and practical direction. However, designing the architecture remains a challenging task. Certain research studies explore the impact of depth [1,2,3] and the type of convolution [4,5] on performance. Moreover, some researchers have attempted to simplify the architecture design. VGGNet [6] was directly stacked by a series of convolution layers with plain topology. To better adapt the optimization process of gradient descent process, GoogleNet [7] introduced parallel modules, while Highway networks [8] employed gating units to regulate information flow, resulting in elastic topologies. Driven by the significance of depth, the residual block consisted of residual mapping and shortcut was raised in ResNet [9]. Topological changes in neural networks successfully scaled up neural networks to hundreds of layers. The proposed residual connectivity was widely approved and was subsequently applied in other works such as MobileNet [10,11] and ShuffleNet [12]. Divergent from the relative sparse topologies, DenseNet [13] wired densely among blocks to fully leverage feature reuse. Recent advances in computer vision [25,26] also explored neural architecture search (NAS) methods [14,15,16] to search convolutional blocks. 
In recent years, Yuan proposed a topological perspective using directed acyclic graph (DAG) [29] to represent neural networks, enhancing the topological capabilities of artificial neural networks (ANNs). However, these approaches suffer from the bias of unidirectional and acyclic structures, limiting the signal's capability for free transmission in the network. At the heart of our innovation lies a critical reimagining of traditional ANNs. Currently, NNs operate on asynchronous tensor flow, often organized hierarchically in a tree-like structure. However, this approach inadvertently hampers the nodes within each level from effective communication, relegating them to mere information carriers devoid of meaningful interaction. This inherent limitation substantially diminishes the potential of ANNs, impeding their full capabilities. Our work transcends these constraints by introducing a paradigm shift. We present a method that enables synchronous communication among nodes within the same level, a fundamental departure from the status quo. This transformative adjustment yields a remarkable enhancement in information transformation, thereby significantly boosting the overall capacity of ANN structures. By fostering a collaborative environment among nodes, our approach leverages their collective power to unlock unprecedented capabilities. Particularly, what sets our research apart is its inspiration drawn from the intricate dynamics of biological neural systems. Unlike the traditional stacked unit approach, where neural elements operate in isolation, our approach mirrors the cooperative nature of biological neural modules. In these systems, multiple neural units collaboratively execute precise functional implementations, resulting in exquisite performance. Our innovation is poised to bridge the gap between artificial and biological neural networks, thus propelling ANN structures closer to the remarkable efficiency of their natural counterparts. The existing efforts in neural network connectivity have primarily focused on the tree structures where neural units at the same level cannot exchange information with each other, resulting in significant drawbacks for ANNs. This limitation arises due to the absence of a neural module concept. In this paper, we argue that the current connectivity approaches fail to adequately capture the essence of neural networks. Since the nodes at the same level of a tree cannot establish connections with each other, it hampers the transfer of information between these neural units, leading to substantial defects for ANNs. We argue that the nodes in the same level should form a neural module and establish interconnections. As a result, we introduce a method to build up a bidirectional complete graph for nodes at the same level of an ANN. By linking the nodes in a YOKE fashion, we create neural modules. Furthermore, when we consider all the nodes at the same level, we would have a chance to construct a bidirectional complete graph in ANNs and yields remarkable improvements. We refer to our model as Yoked Neural Network, YNN for brevity. It is important to note that if all the edge weights in the bidirectional complete graph become vestigial and approach to zero, our YNN would reduce to a traditional tree structure. In this paper, we analyze the structural bias of existing ANN structures. To more accurately mimic neural networks, our method efficiently eliminates structural bias. 
In our model, nodes not only aggregate and transform features but also determine the information flow. We achieve this by assigning learnable parameters to the edges, which reflect the magnitude of connections. This allows the learning process to resemble traditional learning methods, enhancing the overall performance of our model in imitating neural networks. As the nodes are relied on the values of other nodes, it is a challenging task designing a bidirectional complete graph for nodes at the same level. We address this challenge by introducing a synchronization method specifically tailored for learning the nodes at the same level. This synchronization method is crucial for ensuring the effective coordination and learning of these interconnected nodes. Finally, to optimize the structure of YNN, we further attach an auxiliary sparsity constraint that influences the distribution of connectedness. This constraint promotes the learned structure to prioritize critical connections, enhancing the overall efficiency of the learning process. The learning process is compatible with existing networks and exhibits adaptability to larger search spaces and diverse tasks, effectively eliminating the structural bias. We evaluate the effectiveness of our optimization method by conducting experiments on classical networks, demonstrating its competitiveness compared to existing networks. Additionally, to showcase the benefits of connectivity learning, we evaluate our method across various tasks and datasets. The quantitative results from these experiments indicate the superiority of the learned connectivity in terms of performance and effectiveness. Considering that the synchronization algorithm for nodes at the same level may be computationally intense, we also propose a method to design small neural modules to simplify our model. This approach significantly reduces the computational burden of our model while maintaining its effectiveness. To sum up, our contributions in this paper are as follows: 1. We provide an analysis of the structural bias present in existing ANN networks. 2. We propose the YNN model which involves YOKING the nodes at the same level together to simulate real neural networks. 3. We develop a synchronization method to effectively learn and coordinate the nodes at the same level, introducing the concept of neural modules. 4. We design a regularization-based optimization method to optimize the structure of the YNN model. 5. We propose the design of small neural modules to significantly reduce the computational complexity of our model, improving its efficiently. ## 2 Related Works We firstly review some related works on the design of neural network structures and relevant optimization methods. The design of neural network has been studied widely. From shallow to deep, the shortcut connection plays an important role. Before ResNet, an early practice [17] also added linear layers connected from input to output to train multi-layer perceptrons. [7] was composed of a shortcut branch and a few deeper branches. The existence of shortcut eases the vanishing or exploding gradients [8, 9]. Recently, Yuan [29] explained from a topological perspective that shortcuts offer dense connections and benefit optimization. Many networks with dense connections exist On macro-structures also. In DenseNet [13], all preceding layers are connected. HRNet [18] was benefited from dense high-to-low connections for fine representations. Densely connected networks promote the specific task of localization [19]. 
In contrast, our YNN optimizes the desired network from a bidirectional complete graph in a differentiable way. For the learning process, our method is consistent with DARTS [22], which is differentiable. Different from sample-based optimization methods [29], the connectivity is learned simultaneously with the weights of the network using our modified version of gradient descent. A joint training can shift the transferring step from one task to another and obtain a task-related YNN. This type was explored in [20, 21, 22, 23, 24] also, where weight-sharing is utilized across models at the cost of training. At the same time, for our YNN model, we also propose a synchronization method to get the node values in the same neural module. In order to optimize the learned structure, a sparsity constraint can be observed in other applications, e.g., path selection for a multi-branch network [27], pruning unimportant channels for fast inference [28], etc. In a recent work, Yuan used L1 regularization to optimize a topological structure. In this paper, we also use L1 as well as L2 regularization to search for a better structure. Secondly, many deep learning works in recent years deal with geometric data [40], making neural networks better able to cope with structure. Graph neural networks (GNNs) are connectivity-driven models, which address the need for geometric deep learning [30, 31]. In fact, a GNN adapts its structure to that of an input graph, and captures complex dependencies of an underlying system through an iterative process of aggregation of information. This allows the prediction of the properties of specific nodes, connections, or of the entire graph as a whole, and also generalization to unseen graphs. Due to these powerful features, GNNs have been utilized in many relevant applications to accomplish their tasks, such as recommender systems [33], natural language processing [34], traffic speed prediction [35], critical data classification [36], computer vision [25, 26, 37], particle physics [38], resource allocation in computer networks [39], and so on. In summary, the position of our work can be summarized as in Fig 0. Figure 1: Fig 0 ## 3 Methodology ### Why YNN is Introduced? ANN stands for a type of information flow. The traditional structure of ANN is a tree, which is a natural way to describe this type of information flow. Then, we can represent the architecture as \(G=(N,E)\), where \(N\) is the set of nodes and \(E\) denotes the set of edges. In this tree, each edge \(e_{ij}\in E\) performs a transformation operation parameterized by \(w_{ij}\), where \(ij\) stands for the topological ordering from the node \(n_{i}\) to node \(n_{j}\) with \(n_{i},n_{j}\in N\). In fact, the importance of the connection is determined by the weight of \(e_{ij}\). The tree structure, as a natural way to represent such information flow, is most frequently used in ANN. A tree is a hierarchical nested structure where a node can be influenced only by its precursor node, thereby causing transformation of information between them. In a tree structure, the root node has no precursor node, while each other node has one and only one precursor node. The leaf node has no subsequent nodes. The number of subsequent nodes of each other node can be one or more. In addition, the tree structure in mathematical statistics can represent some hierarchical relationships. A tree structure has many applications. It can also indicate subordinating relationships.
In recent years, some researchers attempted to generalize this structure. In those works, except for the root node, all other nodes are made to have multiple precursor nodes, i.e., the hierarchical information flow is made to form a directed acyclic graph (DAG). However, a tree or a DAG is still a hierarchical nested structure in which a node can be influenced only by its precursor nodes, which makes the transformation of information quite inadequate. Moreover, we find that this structure is far inferior in strength to real neural networks, whose connections form far more complex structures than a tree or a DAG, as shown in Fig 1. In fact, a tree or a DAG structure is used mainly because of its good mathematical properties, which allow backward propagation to be applied conveniently. In this paper, we represent the neural network as a bidirectional complete graph for the nodes of the same level, so that the description of the NN is much better than that of the traditional ANN. Further, the connections between nodes are represented as directed edges, which determine the flow of information between the connected nodes. We consider that any two nodes \(n_{i}\) and \(n_{j}\) of the same level construct an information clique if there exists a path between them. Compared with the traditional tree structure, we yoke the nodes of the same level to form a bidirectional complete graph. We call this structure YNN, which will be introduced in the next section. Figure 2: Artificial Neural Network ### Structure of YNN Our design is inspired by the neural network of human beings, as shown in Fig 2. In order to enhance the ability of the NN to express information, we design cliques for the nodes of each level of a neural network. **Definition 1**: _A clique is a bidirectional complete graph which considers that for any two nodes \(n_{i}\) and \(n_{j}\), an edge exists from \(n_{i}\) to \(n_{j}\)._ According to this definition, the model in our framework is considered as a bidirectional complete graph for the nodes of the same level. These nodes construct a clique, where every node is influenced not only by its precursor nodes but also by all other nodes of its level. The cliques are represented as information modules, which greatly enhance the characterization of the NN. According to the definition of a clique, a neural network can also be represented as a list of cliques. Further, we can also introduce the concept of a neural module. **Definition 2**: _A neural module is a collection of nodes that interact with each other._ According to this definition, a neural module can be part of a clique. In fact, if all the weights in a clique become zero, then the YNN model is reduced to the traditional tree structure. In each clique of our model, the nodes are first calculated by using their precursor nodes, which only distribute features to them. The last level is the output level, which only generates the final output of the graph. Figure 3: Compare with biological nervous systems Secondly, each node is also influenced by the nodes of the same level, and their values affect each other. During the traditional forward computation, each node aggregates inputs from connected preceding nodes. We divide such nodes into two parts. The first part contains the precursor nodes of the previous level, and the second part contains the nodes of the corresponding clique of the same level. Then, features are transformed to get an output tensor, which is sent to the nodes in the next level through the output edges.
Its specific calculation method will be introduced in the next section. In summary, according to the above definitions, each YNN is constructed as follows. Its architecture is represented as \(G=\{N,E\}\). For the nodes in the same level, bidirectional complete graphs are built as cliques \(C\). Each node \(n\) in \(C\) is first calculated by using only the precursor nodes, without the nodes in the clique; this value is called the meta value \(\hat{n}\) of the node. Then, we calculate its real value \(n\) by using the nodes of the clique. According to the meta value and the real value as introduced before, the structure of YNN is shown in Fig 3. The benefits of the structure are illustrated in the figure below. Figure 4: The first picture shows the tree structure of traditional ANN. The second picture shows our YNN model that yokes together the nodes of the first level. For the clique of the first level, the node spin part is based on its meta value, which also represents the connection with the pre nodes. As a result, we can decompose the spin node as shown in the third picture, which is to represent the meta value. The fourth and fifth pictures show the second level of our YNN model, which are the same as the second and third pictures, respectively. In the next section, we will explain how to calculate the values of the nodes by using the precursor nodes as well as the nodes in the clique. ### Forward Process Suppose we have \(n\) elements \[X=\{x_{1},x_{2},...,x_{n}\} \tag{1}\] as the input data fed to the first level of the ANN. Then, the meta value \(\widehat{N}^{1}\) of the first level can be calculated as \[\widehat{N}^{1}=X*W^{01}, \tag{2}\] where \(W^{01}\) is the fully connected weight matrix of the edges between the input nodes and level 1. Then, similarly, for the meta values, the full connection between the levels makes the information flow as \[\widehat{N}^{i}=f(N^{i-1})*W^{(i-1)i}, \tag{3}\] where \(N^{i-1}=\{1,n_{1}^{i-1},n_{2}^{i-1},...\}\), \(n_{j}^{i-1}\) is the real value of the \(j\)th node in the \((i-1)\)th level, the leading \(1\) accounts for the bias between the \((i-1)\)th and \(i\)th levels, and \(f\) is the activation function. Then, by introducing the weight \(W^{i}\) in the \(i\)th level and considering the bidirectional complete graph of that level as a clique, we propose a method to calculate the real value \(N^{i}\) based on the meta value \(\widehat{N}^{i}\) as introduced in the previous section. Suppose there are \(m\) nodes in the clique; they rely on the values of each other, and hence we need a synchronization method to solve the problem. Here, we take the problem as a system of multivariate equations involving the activation function \(f\). Then, for the real value \(n_{j}^{i}\) in \(N^{i}\) based on the meta value \(\widehat{n}_{j}^{i}\) in \(\widehat{N}^{i}\), the equations can be summarized as follows: \[\begin{cases}w_{01}^{i}+\sum\limits_{j\neq 1}f(n_{j}^{i})*w_{j1}^{i}+f(\widehat{n}_{1}^{i})*w_{11}^{i}=n_{1}^{i}\\ w_{02}^{i}+\sum\limits_{j\neq 2}f(n_{j}^{i})*w_{j2}^{i}+f(\widehat{n}_{2}^{i})*w_{22}^{i}=n_{2}^{i}\\ \vdots\\ w_{0m}^{i}+\sum\limits_{j\neq m}f(n_{j}^{i})*w_{jm}^{i}+f(\widehat{n}_{m}^{i})*w_{mm}^{i}=n_{m}^{i}\end{cases}\] In the above equations, \(w_{01}^{i}\), \(w_{02}^{i}\),..., \(w_{0m}^{i}\) are the biases of the real values of the nodes in the \(i\)th level. Note that, for the meta value, the bias is a value between the levels, while for a real value, the bias is a value within the individual level only.
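To make the synchronization step concrete, here is a minimal numerical sketch (ours, not code from the paper); the array shapes, the fixed-point solver, and the default choice of \(\tanh\) for the activation \(f\) are our own assumptions, and the iteration is only guaranteed to converge when the clique weights are small enough for the update to be a contraction.

```python
import numpy as np

def ynn_level_forward(prev_real, W_between, W_clique, bias, f=np.tanh,
                      n_iter=100, tol=1e-8):
    """One YNN level: compute meta values from the previous level, then
    synchronize the real values of the clique by fixed-point iteration.

    prev_real : (p,)     real values of the previous level (without the bias entry)
    W_between : (p+1, m) weights from [1, previous level] to this level (row 0 = bias)
    W_clique  : (m, m)   intra-level weights; W_clique[j, j] weights the node's own meta value
    bias      : (m,)     intra-level bias terms w^i_{0j}
    """
    prev_with_bias = np.concatenate(([1.0], prev_real))
    meta = f(prev_with_bias) @ W_between            # eq. (3): meta values of this level

    W_off = W_clique - np.diag(np.diag(W_clique))   # off-diagonal part: the other clique nodes
    self_term = f(meta) * np.diag(W_clique)         # each node's own meta contribution

    real = meta.copy()                              # initial guess for the real values
    for _ in range(n_iter):
        # the system of equations above, used as an update rule
        new = bias + f(real) @ W_off + self_term
        if np.max(np.abs(new - real)) < tol:
            real = new
            break
        real = new
    return meta, real
```

Any general nonlinear solver could replace the fixed-point loop; the point is simply that the real values of a level are obtained jointly, not one node at a time.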
Existing numerical methods would be able to solve the above equations efficiently. In the real applications, the efficiency can also be well optimized. In fact, for too large equations, we also propose a method to reduce the calculation scale efficiently. This method is introduced in the following section. ### Backward Process In this section, we introduce the backward process of our model. Firstly, let the gradient of the output be the gradient of the meta value of the last level. The loss of the model denote as \(\widehat{N}^{n}\). We calculate the node gradient for the real value of the \(i\)th level as: \[d(N^{i})=d(\widehat{N}^{i+1})*(W^{i(i+1)})^{T}. \tag{4}\] The meta value of \(\widehat{N}^{i}\) is calculated by using the real value of \(N^{i-1}\) according to the system of equations. Then, to get the value of \(d(\widehat{N}^{i-1})\), we need to consider the nodes as the variables in the system of equations. For convenient, we introduce operator \(C^{i}\) to represent the derivatives for the \(i\)th level, which can be expressed as: \[C^{i}=W^{i}-diag(W^{i})+eye(W^{i})\, \tag{5}\] where \(W^{i}\) is the adjacency matrix of the clique in the \(i\)th level, \(diag(W^{i})\) is the diagonal matrix of \(W^{i}\), \(eye(W^{i})\) is the identity matrix whose size is the same as that of \(w^{i}\), and operator \(C^{i}\) represents the transfer of other nodes for each node in the clique according to the system of equations. In the clique, the identity matrix is for the node itself. According to the system of equations, the meta value of a node is connected to its real value through the diagonal matrix of \(W^{i}\). Note that each node is calculated by using the activation function \(f\). As a result, after the transfer through the bidirectional complete graph, the gradient of the meta value of the nodes becomes: \[d(\widehat{N}^{i})=d(N^{i})*C^{i}*f^{-1}(N^{i})*diag(W^{i})*f^{-1}(\widehat{N }^{i}). \tag{6}\] Now, we have got the gradient of the meta value as well as that of the real value of each node. Finally, the gradient weight of the fully connected level \(W^{i(i+1)}\) between the \(i\)th and \((i+1)\)th level can be expressed as: \[d(W^{i(i+1)})^{T}=d(\widehat{N}^{i+1})^{T}*f(N^{i}). \tag{7}\] Now, we need to calculate the gradient of \(W^{i}\) for the clique in the \(i\)th level. According to the system of equations, we need to consider the weights of all the connected nodes. For any \(j\)th node in the clique, its connected weight is the \(j\)th column of the matrix. Similarly, for convenient, we introduce the following operator: \[D^{i}_{j}=(1,f(n^{i}_{1}),...,f(\widehat{n}^{i}_{j}),...,f(n^{i}_{m}))\, \tag{8}\] which can be found in the system of equations. Then, by the gradient of real value of the \(j\)th node \(n^{i}_{j}\) in \(N^{i}\), the following becomes the corresponding gradient of the clique: \[d(W^{i}(:,j))=d(n^{i}_{j})*(D^{i}_{j})^{T}. \tag{9}\] ### YNN Structure Optimization Consider that for the nodes in the same level, we construct a clique as stated before. Here, we consider a clique just as a universal set for all the possible connections. In our work, we can optimize the YNN structure to let our model to focus on important connections only. The optimization process can be L1 or L2 regularization as usual, which can be parameterized \(L_{1}\) and \(L_{2}\), respectively. 
For the \(j\)th node in the \(i\)th level, the process can be formulated as follows: \[opt\_n_{j}^{i}=n_{j}^{i}+L_{1}*\sum_{k}abs(w^{i}(k,j))+L_{2}*\sum_{k}(w^{i}(k,j))^{2} \tag{10}\] According to the L1 and L2 regularization, the L1 term makes our YNN focus on the important connections in the clique, while the L2 term keeps the clique weights small so that our model generalizes better. ### Structure of Neural Module According to the forward process of YNN as stated earlier, it solves a system of equations. A large number of nodes in the same level would bring too much computational burden to solve a large system of equations. In fact, we can optimize the graph of any level by L1 and L2 regularization, and then turn to a minimum cut technique, e.g., the NE algorithm, to reduce the computation significantly. For each cut subgraph, we design a neural module structure according to Definition 2 to simplify the system of equations, as shown in Fig 4. Since the nodes are influenced only by the nodes in the subgraph, the system of equations can be reduced to the size of each cut subgraph, which is formulated as a neural module in the sense of Definition 2. In summary, the structure of the neural module can be constructed as follows: 1. Construct the clique for the nodes in the same level; 2. Optimize the clique by using the L1 and L2 regularization; 3. Cut the optimized graph using the NE algorithm; 4. Construct the system of equations by taking each cut subgraph as a neural module. As explained before, in this way the system of equations can be reduced to \(N_{s}\)-ary equations, where \(N_{s}\) is the number of nodes in each neural module. Of course, if the computational cost is acceptable, taking the clique itself as the neural module is the most accurate choice, since the clique considers all connections in the level. ## 4 Experiments ### Optimization of Classical ANN In this section, we show the experiments with our method. Here, we compare our method with the traditional NN method, the stacked auto-encoder (SAE), as well as the generalized traditional NN proposed in recent years, which takes a topological perspective and treats the NN as a DAG. We show our results for three real data sets. The first dataset contains the codon usage frequencies in the genomic coding DNA of a large sample of diverse organisms obtained from different taxa tabulated in the CUTG database. Here, we further manually curated and harmonized the existing entries by re-classifying the bacteria (bct) class of CUTG into archaea (arc), plasmids (plm), and bacteria proper (keeping with the original label 'bct'). The second dataset contains optically recognized handwritten digits made available by NIST using preprocessing programs to extract normalized bitmaps of handwritten digits from a preprinted form, collected from a total of 43 people. The third dataset is Connect-4, which contains all the legal 8-ply positions in the game of connect-4 in which neither player has won yet and the next move is not forced. The outcome class is the theoretical value of the game for the first player. Here, we compared our method with the other methods for a variety of numbers of nodes. In this way, we can examine the effectiveness of our model at different levels of complexity of the traditional structure. These nodes are constructed by the NN, SAE, and DAG models. We compared these models in terms of the percentage error.
The obtained results are organized in the following Tables, where we can see that our YNN model achieves much better results in most of the cases. In fact, for all the data sets and a variety of nodes in the same level, our YNN Figure 5: If the clique is too large, we would have too much computational burden to solve the system of equations. Then, we can first optimize the structure and learn the importance of the connection, followed by the application of the minimum cut method to formulate the structure of the neural module. In this way, the calculation for the system of equations can be limited to each subgraph. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Condon Data} & \multicolumn{1}{c}{} & \\ \cline{2-5} & 35 Nodes & 40 Nodes & 45 Nodes & 50 Nodes \\ \hline NN & 0.2789\(\pm\)0.0075 & 0.285\(\pm\)0.012 & 0.2875\(\pm\)0.0134 & 0.3073\(\pm\)0.0259 \\ SAE & 0.3912\(\pm\)0.0416 & 0.331\(\pm\)0.0044 & 0.3346\(\pm\)0.0096 & 0.3366\(\pm\)0.0099 \\ DAG & 3519\(\pm\)0.05 & 0.2828\(\pm\)0.0053 & 0.2989\(\pm\)0.0081 & 0.3134\(\pm\)0.0382 \\ YNN & **0.2751\(\pm\)0.0174** & **0.2489\(\pm\)0.0004** & **0.2582\(\pm\)0.0045** & **0.2475\(\pm\)0.0068** \\ YNN\&L1 & 0.2758\(\pm\)0.026 & 0.2513\(\pm\)0.0017 & 0.2635\(\pm\)0.0029 & 0.2625\(\pm\)0.0093 \\ YNN\&L2 & 0.2826\(\pm\)0.0366 & 0.2495\(\pm\)0.002 & 0.262\(\pm\)0.0081 & 0.2485\(\pm\)0.0122 \\ \hline \hline \end{tabular} \end{table} Table 3: Connect-4 Dataset \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Condon Data} & \multicolumn{1}{c}{} & \\ \cline{2-5} & 35 Nodes & 40 Nodes & 45 Nodes & 50 Nodes \\ \hline NN & 0.2565\(\pm\)0.069 & 0.2181\(\pm\)0.445 & 0.1536\(\pm\)0.0323 & 0.259\(\pm\)0.0937 \\ SAE & 0.2871\(\pm\)0.04 & 0.3603\(\pm\)0.0086 & 0.4186\(\pm\)0.0419 & 0.3375\(\pm\)0.0376 \\ DAG & 0.2446\(\pm\)0.0409 & 0.2721\(\pm\)0.534 & 0.3475\(\pm\)0.0208 & 0.2585\(\pm\)0.0654 \\ YNN & **0.1433\(\pm\)0.0159** & 0.1725\(\pm\)0.0451 & 0.1552\(\pm\)0.0077 & 0.256\(\pm\)0.0001 \\ YNN\&L1 & 0.1633\(\pm\)0.0153 & 0.18\(\pm\)0.0247 & 0.1594\(\pm\)0.0225 & **0.1494\(\pm\)0.032** \\ YNN\&L2 & 0.1586\(\pm\)0.015 & **0.1614\(\pm\)0.0189** & **0.1483\(\pm\)0.142** & 0.1881\(\pm\)0.0001 \\ \hline \hline \end{tabular} \end{table} Table 2: Optical Recognition of Handwritten Digits \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Condon Data} & \multicolumn{1}{c}{} & \\ \cline{2-5} & 35 Nodes & 40 Nodes & 45 Nodes & 50 Nodes \\ \hline NN & 0.2789\(\pm\)0.0075 & 0.285\(\pm\)0.012 & 0.2875\(\pm\)0.0134 & 0.3073\(\pm\)0.0259 \\ SAE & 0.3912\(\pm\)0.0416 & 0.331\(\pm\)0.0044 & 0.3346\(\pm\)0.0096 & 0.3366\(\pm\)0.0099 \\ DAG & 3519\(\pm\)0.05 & 0.2828\(\pm\)0.0053 & 0.2989\(\pm\)0.0081 & 0.3134\(\pm\)0.0382 \\ YNN & **0.2751\(\pm\)0.0174** & **0.2489\(\pm\)0.0004** & **0.2582\(\pm\)0.0045** & **0.2475\(\pm\)0.0068** \\ YNN\&L1 & 0.2758\(\pm\)0.026 & 0.2513\(\pm\)0.0017 & 0.2635\(\pm\)0.0029 & 0.2625\(\pm\)0.0093 \\ YNN\&L2 & 0.2826\(\pm\)0.0366 & 0.2495\(\pm\)0.002 & 0.2622\(\pm\)0.0081 & 0.2485\(\pm\)0.0122 \\ \hline \hline \end{tabular} \end{table} Table 3: Connect-4 Dataset model could tend to get better results after the nodes are yoked together. The effect of our YNN could be improved by optimizing the structure as explained before. All of the first four lines of the Tables are for the results that do not be optimized by the L1 or L2 regularization. 
We can see that our YNN structure is more efficient even without regularization, compared with the traditional structure. ### Optimization of Structure In this section, we optimize the structure of our model. Since every structure is a subgraph of a fully connected graph, the initial clique can be a search space for our model. Our model is optimized by using the L1 and L2 regularization, which are effective tools for optimizing structures. The obtained results show that such optimizations can yield better effect. Here, we study the structure of the model for different L1 and L2 parameters, as shown in Fig 5. In the figure, the green line represents the results of YNN without optimization, while the blue and red lines are the results for a variety of L1 and L2 parameters, respectively. We can see that such optimization is effective for our YNN in most cases. We also show the pixel map of the matrix for the clique. In the figure, the black-and-white graph represents the matrix of the fully connected graph for the nodes in the same level. The more black of the pixel means a lower weight for the corresponding edge. According with the decline of the error, we can always seek a better structure compared with the bidirectional complete graph used in our YNN. Besides the L1 regularization, the L2 regularization is also an effective tool to optimize the structure of our model. A larger L2 regularization lowers the weights of all the edges, thus yields more black pixels. However, from the decline of error, we can find that the L2 regularization is also effective to optimize our YNN structure. Figure 6: Regularization of results based on L1 and L2 for Codon dataset, optically recognized handwritten digits and connect-4 dataset. ## 5 Conclusion In this paper, we propose a YNN structure to build a bidirectional complete graph for the nodes in the same level of ANN, so as to improve the effect of ANN by promoting the significant transfer of information. In our work, we analyze the structure bias. Our method eliminates structure bias efficiently. By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner. For our model, we propose a synchronization method to simultaneously calculate the values of the nodes in the same level. We further impose an auxiliary sparsity constraint to the distribution of connectedness by L1 and L2 regularization, which promotes the learned structure to focus on critical connections. We also propose a small neural module structure that would efficiently reduce the computational burden of our model. The obtained quantitative experimental results demonstrate that the learned YNN structure is superior to the traditional structures.
2305.14265
Adapting to Misspecification
Empirical research typically involves a robustness-efficiency tradeoff. A researcher seeking to estimate a scalar parameter can invoke strong assumptions to motivate a restricted estimator that is precise but may be heavily biased, or they can relax some of these assumptions to motivate a more robust, but variable, unrestricted estimator. When a bound on the bias of the restricted estimator is available, it is optimal to shrink the unrestricted estimator towards the restricted estimator. For settings where a bound on the bias of the restricted estimator is unknown, we propose adaptive estimators that minimize the percentage increase in worst case risk relative to an oracle that knows the bound. We show that adaptive estimators solve a weighted convex minimax problem and provide lookup tables facilitating their rapid computation. Revisiting some well known empirical studies where questions of model specification arise, we examine the advantages of adapting to -- rather than testing for -- misspecification.
Timothy B. Armstrong, Patrick Kline, Liyang Sun
2023-05-23T17:16:09Z
http://arxiv.org/abs/2305.14265v4
# Adapting to Misspecification+ ###### Abstract Empirical research typically involves a robustness-efficiency tradeoff. A researcher seeking to estimate a scalar parameter can invoke strong assumptions to motivate a restricted estimator that is precise but may be heavily biased, or they can relax some of these assumptions to motivate a more robust, but variable, unrestricted estimator. When a bound on the bias of the restricted estimator is available, it is optimal to shrink the unrestricted estimator towards the restricted estimator. For settings where a bound on the bias of the restricted estimator is unknown, we propose adaptive shrinkage estimators that minimize the percentage increase in worst case risk relative to an oracle that knows the bound. We show that adaptive estimators solve a weighted convex minimax problem and provide lookup tables facilitating their rapid computation. Revisiting four empirical studies where questions of model specification arise, we examine the advantages of adapting to--rather than testing for--misspecification. **Keywords:** Adaptive estimation, Minimax procedures, Specification testing, Shrinkage, Robustness. **JEL classification codes:** C13, C18. Introduction Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful. - Box and Draper (1987) Empirical research is typically characterized by a robustness-efficiency tradeoff. The researcher can either invoke strong assumptions to motivate an estimator that is precise, but sensitive to violations of model assumptions, or they can employ a less precise estimator that is robust to these violations. Familiar examples include the choice of whether to add a set of controls to a regression, whether to exploit over-identifying restrictions in estimation, and whether to allow for endogeneity or measurement error in an explanatory variable. As the quote from Box and Draper illustrates, decisions of this nature are often approached with a degree of pragmatism: imposing a false restriction may be worthwhile if doing so yields improvements in precision that are not outweighed by corresponding increases in bias. While precision is readily assessed with asymptotic standard errors, the measurement of bias is less standardized. A popular informal approach is to conduct a series of "robustness exercises," whereby estimates from models that add or subtract assumptions from some baseline are reported and examined for differences. While robustness exercises of this nature can be informative, they can also be perplexing. How should the results of this exercise be used to refine the baseline estimate of the parameter of interest? The traditional answer offered in econometrics textbooks and graduate courses is to use a specification test to select a model. Specification tests offer a form of asymptotic insurance against bias: as the degree of misspecification grows large relative to the noise in the data, the test rejects with near certainty. Yet when biases are modest, as one might expect of models that serve as useful approximations to the world, the price of this insurance in terms of increased variance can be exceedingly high. In this paper we explore an alternative to specification testing: _adapting_ to misspecification. Rather than selecting estimates from a single model, the adaptive approach combines estimates from multiple models in order to optimize a robustness-efficiency tradeoff. The robustness notion considered is the procedure's worst case risk. 
In the canonical case of squared error loss, the risk of relying on a potentially misspecified estimator is the sum of its variance and the square of its (unknown) bias. Contrasting a credible _unrestricted_ estimator with a potentially misspecified _restricted_ estimator provides a noisy estimate of the restricted estimator's bias. At first blush, it would appear difficult to trade off a combination procedure's robustness against its variance when the bias of one of its inputs is potentially infinite. Consider, however, an oracle who knows a bound \(B\) on the magnitude of the restricted estimator's bias. Given the emphasis on _minimax_ estimation procedures in modern empirical research, it is natural for the oracle to exploit its prior knowledge by searching for a function of the restricted and unrestricted estimators that minimizes worst case risk subject to the bound \(B\). Such _\(B\)-minimax_ estimators have a particularly simple structure, corresponding to a Bayes estimator utilizing a discrete least favorable prior on the restricted estimator's bias and an independent flat prior on the parameter of interest. When \(B=0\), the oracle knows that the unrestricted and restricted estimators are unbiased for the same parameter; consequently, the 0-minimax estimator amounts to the efficiently weighted Generalized Method of Moments (GMM) estimator. By contrast, when \(B=\infty\), the oracle fears that the restricted estimator is hopelessly biased; hence, the \(\infty\)-minimax estimator corresponds to the unrestricted estimator. For intermediate values of \(B\), the \(B\)-minimax estimator involves a type of shrinkage of the bias estimate towards zero that is used to adjust the GMM estimator for expected biases. Now consider a researcher who does not know a bound on the bias. To quantify the disadvantage this researcher faces relative to the oracle, we introduce the notion of _adaptation regret_, which gives the percentage increase in worst case risk an estimation procedure yields over the oracle's \(B\)-minimax procedure. Because adaptation regret depends on the true bias magnitude, it is unknown at the time of estimation. However, it is typically possible to deduce the maximal (i.e., the "worst case") adaptation regret of a procedure across all possible bias magnitudes ex-ante. Importantly, the worst case adaptation regret of a procedure can often be bounded even when the bias cannot. Our proposal for optimizing the robustness-efficiency tradeoff is to employ an _adaptive_ estimator that minimizes the worst case adaptation regret. The adaptive estimator achieves worst case risk near that of the oracle regardless of the true bias magnitude. We show that the adaptive estimator can equivalently be written as a conventional minimax estimation procedure featuring a scaled notion of risk. The adaptive estimator blends the insurance properties of specification tests with the potential for efficiency gains when the restriction being considered is approximately satisfied. Like a pre-test estimator, the risk of the adaptive estimator remains bounded as the bias grows large. When biases are modest, however, the risk of the adaptive estimator is correspondingly modest. And when biases are negligible, the adaptive estimator performs nearly as well as could be achieved if prior knowledge of the bias had been available. We show that the adaptive estimator takes a simple functional form, amounting to a weighted average of the GMM estimator and the unrestricted estimator. 
The combination weights depend on a shrinkage estimate of the restricted estimator's bias. As with the \(B\)-minimax estimator, the shrinkage estimate can be viewed as a Bayes estimate of bias under a discrete least favorable prior. In contrast with the \(B\)-minimax case, however, this prior requires no input from the researcher and is robust in the sense that the risk of the procedure remains bounded as the bias grows. Another appealing feature of the prior is that it depends only on the correlation between the restricted and unrestricted estimators. Enumerating these priors over a grid of correlation coefficients, we provide a lookup table that facilitates near instantaneous computation of the adaptive combination procedure. Though the adaptive estimator is conceptually simple and easy to compute using our automated lookup table, it is not analytic. Building on insights from Efron and Morris (1972) and Bickel (1984), we explore the potential of a soft-thresholding estimator to approximate the adaptive estimator's behavior. Interestingly, we find that optimizing the soft threshold to mimic the oracle yields worst-case regret comparable to the fully adaptive estimator, while typically delivering lower worst case risk. We also devise constrained versions of both the adaptive estimator and its soft-thresholding approximation that limit the increase in maximal risk to a pre-specified level, an extension that turns out to be important in cases where the restricted estimator is orders of magnitude more precise than the unrestricted estimator. MATLAB and R code implementing the adaptive estimator, its soft-thresholding approximation, and their risk limited variants is provided online at [https://github.com/lsun20/MissAdapt](https://github.com/lsun20/MissAdapt). We also provide routines for computing \(B\)-minimax estimates, which may be useful in settings where prior information about the magnitude of biases is available. To illustrate the advantages of adapting to--rather than testing for--misspecification, we revisit four empirical examples where questions of model specification arise. The first example, drawn from Dobkin et al. (2018), considers whether to control for a linear trend in an event study analysis. A second example from Berry et al. (1995) considers whether to exploit potentially invalid supply side instruments in demand estimation. A third example drawn from Gentzkow et al. (2011) compares a two-way fixed effects estimator that exhibits negative weights in many periods to a more variable convex weighted estimator proposed by de Chaisemartin and D'Haultfoeuille (2020b). The fourth example, drawn from Angrist and Krueger (1991), considers whether to instrument for years of schooling when estimating the returns to education. Online Appendix E provides an additional example, drawn from LaLonde (1986), illustrating the multivariate problem of adapting to multiple control groups. In all of the above examples, adapting between models is found to yield substantially lower worst case risk and worst case adaptation regret than selecting a single model via pre-testing. The automatic procedures developed in this paper therefore provide an attractive alternative to using specification tests to summarize robustness exercises, particularly given that pre-tests have long been criticized for also leading to selective reporting of results (Leamer, 1978; Miguel, 2021). 
While researchers planning prospectively (e.g., in a pre-analysis plan) to entertain multiple specifications may wish to commit ex-ante to reporting adaptive summaries of the specifications considered, consumers of statistical research can also easily compute adaptive estimates from reported point estimates, standard errors, and the correlation between estimators. We find in the majority of our examples that the restricted estimators considered are nearly efficient, suggesting that accurate adaptive estimates can often be recovered from published tables ex-post even when correlations between estimators are not reported and replication data are unavailable. **Related literature.** Our analysis builds on early contributions by Hodges and Lehmann (1952) and Bickel (1983, 1984) who consider families of robustness-efficiency tradeoffs defined over pairs of nested models. We extend this work by considering a continuum of models, indexed by different degrees of misspecification. Our general framework also allows for other sets of parameter spaces indexed by a regularity parameter, although computational constraints limit us to low dimensional applications in practice. We follow a large statistics literature on the problem of adaptation, defined as the search for an estimator that performs "nearly as well" as an oracle with additional knowledge of the problem at hand. We focus on the case where "nearly as well as an oracle" is defined formally as "up to the smallest constant multiplicative factor," which follows the definition used in Tsybakov (1998) and leads to simple risk guarantees and statements about relative efficiency. However, we also consider in detail an important departure from this definition that further restricts worst-case risk under the unconstrained parameter space. While the high dimensional statistics literature has mostly focused on asymptotic rates and constants, we focus on exact computation of quantities of interest in low dimensional settings. In particular, we apply methods for numerical computation of optimal procedures using least favorable priors similar to those used in the recent econometrics literature (e.g., Chamberlain, 2000; Elliott et al., 2015; Muller and Wang, 2019; Kline and Walters, 2021). To model bias, we work within a local asymptotic misspecification framework of the sort popularized recently by Andrews et al. (2017). However, the proposed adaptive procedures offer global risk guarantees for linear estimation problems. Armstrong and Kolesar (2021) study optimal inference in such settings under a known constraint on the bias of a potentially misspecified moment condition. A large literature considers Bayesian and empirical Bayesian schemes for either model selection or model averaging (Akaike, 1973; Mallows, 1973; Schwarz, 1978; Leamer, 1978; Hjort and Claeskens, 2003). The proposed adaptive estimator can be viewed as a Bayes estimator that utilizes a "robust" prior guaranteeing bounded influence of specification biases on risk. In contrast to recent empirical Bayesian proposals (e.g., Green and Strawderman, 1991; Hansen, 2007; Hansen and Racine, 2012; Cheng et al., 2019; Fessler and Kasy, 2019) our analysis considers a scalar estimand, which renders Stein style shrinkage arguments inapplicable. 
de Chaisemartin and D'Haultfoeuille (2020a) study an empirical MSE minimization approach in an analogous setting with a scalar parameter and misspecification, establishing that the maximum decrease in MSE of this approach over the unrestricted estimator is greater than the maximum increase in MSE over the unrestricted estimator. We demonstrate numerically that the risk-limited variants of our adaptive estimators also satisfy this property. It is natural to wonder if adaptive estimators can be used to construct adaptive confidence intervals (CIs) that exhibit nearly the same length as CIs based on efficient GMM when \(B=0\), while still maintaining coverage when \(B\) is large. Unfortunately, work dating back to Low (1997) establishes that this goal cannot be achieved; see Armstrong and Kolesar (2018) for impossibility results applicable to our main examples. Hence, while it is possible to construct an estimator that closely mimics an oracle, it is not possible to construct an analogous CI that adapts to biases while maintaining uniform size control. Replacing size control with other criteria amenable to adaptation is an interesting topic that we leave for future research. ## 2 Preliminaries Consider a researcher who observes data or initial estimate \(Y\) taking values in a set \(\mathcal{Y}\), following a distribution \(P_{\theta,b}\) that depends on unknown parameters \((\theta,b)\). We use \(E_{\theta,b}\) to denote expectation under the distribution \(P_{\theta,b}\). While we develop many results in a general setting, our main interest is in possibly misspecified models in a normal or asymptotically normal setting. **Main example.** The random variable \(Y=(Y_{U},Y_{R})\) consists of an "unrestricted" estimator \(Y_{U}\) of a scalar parameter \(\theta\in\mathbb{R}\) and a "restricted" estimator \(Y_{R}\) that is predicated upon additional model assumptions. The additional restrictions required to motivate the restricted estimator make it less robust but potentially more efficient. To capture this tradeoff, we assume that \(Y_{U}\) is asymptotically unbiased for \(\theta\), while \(Y_{R}\) may exhibit a bias of \(b\) stemming from violation of the additional restrictions. We focus on the case where \(Y_{R}\) is a single scalar-valued estimate, but extensions to vector-valued \(b\) are possible as well. It will often be convenient to work with the quantity \(Y_{O}=Y_{R}-Y_{U}\), which gives an estimate of the bias in \(Y_{R}\) that can be used in a test of overidentifying restrictions. We work with the large sample approximation \[\left(\begin{array}{c}Y_{U}\\ Y_{O}\end{array}\right)\sim N\left(\left(\begin{array}{c}\theta\\ b\end{array}\right),\Sigma\right),\quad\Sigma=\left(\begin{array}{cc}\Sigma_ {U}&\rho\sqrt{\Sigma_{U}}\sqrt{\Sigma_{O}}\\ \rho\sqrt{\Sigma_{U}}\sqrt{\Sigma_{O}}&\Sigma_{O}\end{array}\right).\] The variance matrix \(\Sigma\) is treated as known, which arises as a local approximation to misspecification. In practice, the asymptotic variance will typically be measured via a consistent ("misspecification robust") variance estimate. In the special case where \(Y_{R}\) is fully efficient the restriction \(\rho\sqrt{\Sigma_{U}}\sqrt{\Sigma_{O}}=-\Sigma_{O}\) ensues because the unrestricted estimator equals the restricted estimator plus uncorrelated noise. As famously noted by Hausman (1978), one can compute \(\Sigma_{O}\) in this case simply by subtracting the squared standard error of the restricted estimator from that of the unrestricted estimator. 
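The inputs to this main example are easy to assemble from a pair of estimates. The following Python sketch (function name and interface are ours, not from the paper's replication code) constructs \(Y_{O}\), \(\Sigma_{O}\), and \(\rho\), using the Hausman shortcut when the restricted estimator is treated as efficient and the general formula when the covariance between the two estimators is available.

```python
# Sketch (ours): assemble the main-example inputs (Y_O, Sigma_O, rho) from a
# restricted and an unrestricted estimate and their estimated variances.
import numpy as np

def main_example_inputs(y_u, y_r, var_u, var_r, cov_ur=None):
    y_o = y_r - y_u                          # estimate of the restricted estimator's bias
    if cov_ur is None:                       # Hausman shortcut: Y_R efficient under b = 0
        var_o = var_u - var_r                # Var(Y_O) = Var(Y_U) - Var(Y_R)
        cov_uo = -var_o                      # then Cov(Y_U, Y_O) = -Var(Y_O)
    else:                                    # general case with known Cov(Y_U, Y_R)
        var_o = var_u + var_r - 2.0 * cov_ur
        cov_uo = cov_ur - var_u
    rho = cov_uo / np.sqrt(var_u * var_o)
    return y_o, var_o, rho
```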
Commonly encountered examples of restricted versus unrestricted specifications include (respectively) "short" versus "long" regressions containing nested sets of covariates, estimators imposing linearity/additive separability versus "saturated" specifications, and estimators motivated by exogeneity/ignorability assumptions versus those motivated by models accommodating endogeneity. **Other settings.** While our main example considers a local misspecification setting with a single restricted estimator, the proposed approach applies more generally to other adaptation problems involving an unknown regularity parameter. Appendix A.1 provides results for a general setting with multiple restricted estimates and Online Appendix E studies an application involving two restricted estimators. ### Decision rules, loss and risk A decision rule \(\delta:\mathcal{Y}\rightarrow\mathcal{A}\) maps the data \(Y\) to an action \(a\in\mathcal{A}\). The loss of taking action \(a\) under parameters \((\theta,b)\) is given by the function \(L(\theta,b,a)\). While it is possible to analyze many types of loss functions in our framework, we will focus on the familiar case of estimation of a scalar parameter \(\theta\) with squared error loss: \(\theta\in\mathbb{R}\), \(\mathcal{A}=\mathbb{R}\) and the loss function is \(L(\theta,b,\hat{\theta})=(\hat{\theta}-\theta)^{2}\). The risk of a decision rule is given by the function \[R(\theta,b,\delta)=E_{\theta,b}L(\theta,b,\delta(Y))=\int L(\theta,b,\delta(y))\,dP_{\theta,b}(y).\] A decision \(\delta\) is _minimax_ over the set \(\mathcal{C}\) for the parameter \((\theta,b)\) if it minimizes the maximum risk over \((\theta,b)\in\mathcal{C}\). We are interested in a setting where the researcher entertains multiple parameter spaces \(\mathcal{C}_{B}\), indexed by \(B\in\mathcal{B}\), which may restrict the parameters \((\theta,b)\) in different ways. The maximum risk over the set \(\mathcal{C}_{B}\) is \[R_{\max}(B,\delta)=\sup_{(\theta,b)\in\mathcal{C}_{B}}R(\theta,b,\delta).\] A decision \(\delta\) is _minimax_ over \(\mathcal{C}_{B}\) if it minimizes \(R_{\max}(B,\delta)\). The _minimax risk_ for the parameter space \(\mathcal{C}_{B}\) is the risk of this decision: \[R^{*}(B)=\inf_{\delta}R_{\max}(B,\delta)=\inf_{\delta}\sup_{(\theta,b)\in\mathcal{C}_{B}}R(\theta,b,\delta).\] We use the term _\(B\)-minimax_ as shorthand for "minimax over \(\mathcal{C}_{B}\)" and \(B\)-minimax risk for "minimax risk for the parameter space \(\mathcal{C}_{B}\)." At times, we will use "minimax risk" or "\(B\)-minimax risk" for "maximum risk of \(\delta\) over \((\theta,b)\in\mathcal{C}_{B}\)" even when \(\delta\) is not actually the minimax decision. **Main example (continued).** In our main example, we define \(\mathcal{C}_{B}\) to place a bound \(B\) on the magnitude of the bias of the restricted estimator: \[\mathcal{C}_{B}=\{(\theta,b):\theta\in\mathbb{R},b\in[-B,B]\}=\mathbb{R}\times[-B,B].\] We consider the sets \(\mathcal{C}_{B}\) for \(B\in[0,\infty]\). Thus, \(B=\infty\) corresponds to the unrestricted parameter space, while \(B=0\) corresponds to the restricted parameter space.
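The risk objects just defined are simple to evaluate numerically for any candidate rule. The sketch below (ours) does so by Monte Carlo for a vectorized rule \(\delta(Y_{U},Y_{O})\); for translation-invariant rules the risk does not depend on \(\theta\), so the worst case over \(\mathcal{C}_{B}\) can be taken over a grid of biases with \(\theta\) fixed at zero.

```python
# Sketch (ours): Monte Carlo evaluation of R(theta, b, delta) and of the
# worst-case risk R_max(B, delta) over C_B for a candidate rule delta(Y_U, Y_O).
import numpy as np

def risk(delta, theta, b, var_u, var_o, rho, n_sim=200_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = rho * np.sqrt(var_u * var_o)
    Sigma = np.array([[var_u, cov], [cov, var_o]])
    draws = rng.multivariate_normal([theta, b], Sigma, size=n_sim)
    est = delta(draws[:, 0], draws[:, 1])          # delta must accept arrays
    return np.mean((est - theta) ** 2)

def worst_case_risk(delta, B, var_u, var_o, rho, n_b=21):
    # invariant rules: risk does not depend on theta, so fix theta = 0
    return max(risk(delta, 0.0, b, var_u, var_o, rho)
               for b in np.linspace(-B, B, n_b))
```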
It follows from the theory of minimax estimation in linear models that the \(\infty\)-minimax estimator (the \(B\)-minimax estimator when \(B=\infty\)) is \(Y_{U}\), while the 0-minimax estimator (the \(B\)-minimax estimator when \(B\)=0) is \(Y_{U}-(\rho\sqrt{\Sigma_{U}}/\sqrt{\Sigma_{O}})Y_{O}.\) Inspection of this formula reveals that the 0-minimax estimator is the efficient GMM estimator exploiting the restriction \(b=0\). In the special case where the restricted estimator is fully efficient, the 0-minimax estimator is additionally equal to the restricted estimator \(Y_{R}=Y_{U}+Y_{O}\). ### Adaptation Minimax procedures are ubiquitous in modern empirical research, perhaps because they place transparent guarantees on worst case risk. Sample averages, for instance, are minimax estimators of population means (Lehmann and Casella 1998, p. 317; Bickel and Lehmann 1981), while standard maximum likelihood estimators can typically be justified as asymptotically minimax procedures (van der Vaart, 1998, Section 8.7). Hence, in settings where it is known that \((\theta,b)\in\mathcal{C}_{B}\), \(B\)-minimax estimators provide a natural approach to incorporating prior restrictions into estimation. However, researchers are often unwilling to commit to a restricted parameter space \(\mathcal{C}_{B}\), either because they lack appropriate prior information or because their priors differ from those of their scientific peers. While one can always report a range of \(B\)-minimax estimates corresponding to different choices of \(B\), distilling this sensitivity analysis down to a single preferred estimate of \(\theta\) requires further guidance. For such settings, we propose adaptive estimators that yield worst case risk near \(R^{*}(B)\) for all \(B\). That is, they yield uniformly "near-minimax" performance without commitment to a particular choice of \(B\). How much must one give up in order to avoid specifying \(B\)? Consider an estimator \(\delta\) formed without reference to a particular parameter space \(\mathcal{C}_{B}\). Relative to an oracle that knows \(|b|\leq B\) and is able to compute the \(B\)-minimax estimator, \(\delta\) yields a proportional increase in worst-case risk given by \[A(B,\delta)=\frac{R_{\max}(B,\delta)}{R^{*}(B)}.\] We refer to \(A(B,\delta)\) as the _adaptation regret_ of the estimator \(\delta\) under the set \(\mathcal{C}_{B}\). In our leading example, risk corresponds to mean squared error. Hence, \((A(B,\delta)-1)\times 100\) gives the percentage increase in worst-case MSE over \(\mathcal{C}_{B}\) faced by an estimator \(\delta\) relative to the \(B\)-minimax estimator. The adaptation regret may be as large as \(A_{\max}(\mathcal{B},\delta)=\sup_{B\in\mathcal{B}}A(B,\delta)\), a quantity we term the _worst case adaptation regret_. The lowest possible value \(A_{\max}(\mathcal{B},\delta)\) can take is \[A^{*}(\mathcal{B})=\inf_{\delta}\sup_{B\in\mathcal{B}}A(B,\delta)=\inf_{\delta }\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta)}{R^{*}(B)}. \tag{1}\] Following Tsybakov (1998)\(A^{*}(\mathcal{B})\) gives the _loss of efficiency under adaptation_. An estimator \(\delta\) is _optimally adaptive_ if \(A_{\max}(\mathcal{B},\delta)=A^{*}(\mathcal{B})\). We use the notation \(\delta^{\mathrm{adapt}}\) to denote such an estimator. 
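The two endpoint \(B\)-minimax rules of the main example and the adaptation regret just defined can be written directly; the sketch below (ours, with our own function names) makes them explicit.

```python
# Sketch (ours): the 0-minimax and infinity-minimax rules from the main example
# and the adaptation regret A(B, delta) = R_max(B, delta) / R*(B).
import numpy as np

def efficient_gmm(y_u, y_o, var_u, var_o, rho):
    # 0-minimax rule: efficient GMM imposing the restriction b = 0
    return y_u - (rho * np.sqrt(var_u) / np.sqrt(var_o)) * y_o

def unrestricted(y_u, y_o):
    # infinity-minimax rule: ignore the restricted estimator entirely
    return y_u

def adaptation_regret(worst_case, oracle):
    # elementwise ratio of a rule's worst-case risk to the oracle risk R*(B)
    return np.asarray(worst_case) / np.asarray(oracle)
```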
To measure the efficiency of an ad hoc estimator \(\delta\) relative to the optimally adaptive estimator, one can compute \[\frac{A^{*}(\mathcal{B})}{A_{\max}(\mathcal{B},\delta)}=\frac{\inf_{\delta}A_ {\max}(\mathcal{B},\delta)}{A_{\max}(\mathcal{B},\delta)}.\] We refer to this quantity as the _adaptive efficiency_ of the estimator \(\delta\). **Main example (continued).** In our main example, \(\mathcal{C}_{B}=\mathbb{R}\times[-B,B]\), and we seek estimators that perform well even in the worst case when \(B=\infty\). Thus, we take the set of values of \(B\) under consideration to be \(\mathcal{B}=[0,\infty]\). **Granular \(\mathcal{B}\).** Bickel (1984) considered adapting over the finite set \(\mathcal{B}^{gran}=\{0,\infty\}\). Naturally, it is easier to adapt to the elements of \(\mathcal{B}^{gran}\) than to the infinite set \(\mathcal{B}=[0,\infty]\). Consequently, \(A^{*}(\mathcal{B}^{gran})\leq A^{*}(\mathcal{B})\). However, consideration of \(\mathcal{B}^{gran}\) may leave efficiency gains on the table for \(0<b<\infty\) because \(R^{*}(b)\leq R^{*}(\infty)\). Note that \(A(B,\delta)^{-1}=R^{*}(B)/R_{\max}(B,\delta)\) gives the _relative efficiency_ of the estimator \(\delta\) under the minimax criterion for parameter space \(\mathcal{C}_{B}\), according to the usual definition. Thus, the optimally adaptive estimator obtains the best possible relative efficiency that can be obtained simultaneously for all \(B\in\mathcal{B}\). The loss of efficiency under adaptation gives the reciprocal of this best possible simultaneous relative efficiency. Bickel (1982) studied an asymptotic regime where \(A(B,\delta^{adapt})\) tended to one, implying no asymptotic loss of efficiency under adaptation. By contrast, in the high-dimensional statistics literature, estimators typically exhibit non-negligible loss of efficiency under adaptation. For instance, the lasso achieves asymptotic MSE exceeding that of an oracle that knows the identity of the nonzero coefficients by a term that grows with the log of the number of regressors considered (Buhlmann and van de Geer, 2011, Ch. 6). ### Discussion Fundamentally, an optimally adaptive estimator is one that is "nearly \(B\)-minimax" for all \(B\in\mathcal{B}\), a notion that accords closely with the usual definitions in the literature (e.g., Tsybakov, 1998, 2009; Johnstone, 2019). The definition in (1) operationalizes "near" as "up to the smallest uniform multiplicative factor," which provides an intuitive link between statements about adaptation and relative efficiency. However, the approach developed in this paper is easily extended to other definitions of near, such as the smallest absolute distance from the relevant \(B\)-minimax risk. In Section 4.5 we also consider an extension that places a bound on worst-case risk relative to the unbiased estimator, a constraint that we argue is well suited to settings where \(A^{*}(\mathcal{B})\) is large. Adaptive estimators, like their minimax antecedents, provide convenient alternatives to Bayesian estimation that avoid the requirement to fully specify a prior. It is well known that minimax strategies can be justified on decision theoretic grounds by various axiomatizations of ambiguity aversion (Gilboa and Schmeidler, 1989; Schmeidler, 1989). Adaptation regret can be thought of as capturing the regret an ambiguity averse researcher feels over having exposed themselves to an unnecessarily high level of worst case risk, regardless of what losses were actually realized. 
A different sort of justification for minimax decisions--attributable to Savage (1954)--involves the potential of such decisions to foster consensus in settings where priors differ among members of a group. In Online Appendix B we develop a stylized extension of Savage (1954)'s argument that illustrates the ability of adaptive decisions to foster consensus among "committees" characterized by different sets of beliefs. Taking the committees to represent different camps of researchers, the model suggests adaptive estimation can help to forge consensus between researchers with varying beliefs about the suitability of different econometric models. In accord with the notion that the desirability of an optimally adaptive decision derives from its resemblance to the relevant \(B\)-minimax decision, the model suggests the prospects for achieving consensus decrease with the loss of efficiency under adaptation \(A^{*}(\mathcal{B})\). ## 3 An Illustration To build some intuition for \(B\)-minimax and optimally adaptive estimators, we consider an example drawn from Dobkin et al. (2018) concerning whether to detrend a quasi-experimental estimator of treatment effects. In this case \(Y_{R}\) corresponds to a two-way fixed effects estimator of the effect of unexpected hospitalization on medical spending, while \(Y_{U}\) corresponds to a linearly detrended estimate of the same quantity. In the constant coefficient framework entertained by Dobkin et al. (2018) these models are nested: the model excluding the trend is a restricted version of the model including the trend. We return to this example in Section 5 where further details on the econometric specification under consideration are provided. The \(B\)-minimax and optimally adaptive estimators are depicted in Figure 1. Both estimators have been computed numerically assuming squared error loss, implying risk is given by mean squared error (MSE). The first y-axis reports point estimates of \(\theta\), which is measured in dollars. Realized values of \(Y_{R}\), \(Y_{U}\), the efficient GMM estimator, and the optimally adaptive estimator are depicted by horizontal lines. Realized values of the \(B\)-minimax estimators are plotted as triangles. The x-axis has been set on a quadratic scale to highlight the properties of these estimators for choices of \(B\) that are small relative to the standard error \(\Sigma_{O}^{1/2}\) of the bias estimate \(Y_{O}\). In this example \(Y_{R}\) is not fully efficient, leading the GMM estimator to place positive weight on \(Y_{U}\). When \(B=0\), the \(B\)-minimax estimator coincides with efficient GMM. As \(B\) grows, the \(B\)-minimax estimator adjusts towards \(Y_{U}\), reflecting the tradeoff between robustness and efficiency. The adaptive estimator lies roughly halfway between the efficient GMM estimate and the realized value of \(Y_{U}\), coming very close ex-post to the \(B\)-minimax estimate that arises when \(B=\Sigma_{O}^{1/2}\). The second y-axis of Figure 1 measures worst case MSE scaled in terms of \(\Sigma_{U}\) (i.e., in terms of the risk of \(Y_{U}\)), which provides an ex-ante assessment--that is, before \(Y_{U}\) or \(Y_{R}\) have been realized--of an estimator's expected performance under a least favorable bias magnitude \(|b|\leq B\). The dashed line gives the worst case risk of an oracle that knows the bound \(B\) and computes the \(B\)-minimax estimator. When \(B=0\) the \(B\)-minimax oracle achieves a sizable \(27\%\) worst case MSE reduction relative to \(Y_{U}\). 
As \(B\) grows large, the minimax risk of the \(B\)-minimax oracle converges with that of \(Y_{U}\). Hence, by exploiting prior knowledge of the bound \(B\), the oracle can obtain an estimator with risk weakly lower than \(Y_{U}\). The adaptive estimator tries to limit worst case risk without prior knowledge of \(B\). The worst case risk of the optimally adaptive estimator is given by the dotted line, which follows a profile mimicking that of the \(B\)-minimax oracles. The price of not knowing the bound \(B\) in advance is that the worst case risk of the adaptive estimator lies everywhere above that of the corresponding oracle's risk. Fortunately, the worst case risk of \(\delta^{adapt}\) remains bounded as \(B\) approaches infinity. In fact, the adaptation regret \(A(B,\delta^{adapt})\) is nearly constant in the oracle bound \(B\). Consequently, the adaptation regret associated with not having used \(Y_{U}\) when \(B/\Sigma_{O}^{1/2}=9\) roughly equals the adaptation regret associated with not having used GMM when \(B=0\). Moreover, the reduction in risk relative to \(Y_{U}\) when \(B=0\) exceeds the increase in worst-case risk relative to \(Y_{U}\) when \(B/\Sigma_{O}^{1/2}=9\), a property emphasized by de Chaisemartin and D'Haultfoeuille (2020a). Figure 1: \(B\)-minimax and adaptive estimators As we show in the next section, both the adaptive estimator and its \(B\)-minimax antecedents can be thought of as Bayes estimators motivated by particular least favorable priors. Figure 2 depicts the least favorable priors utilized by the \(B\)-minimax estimator for two values of \(B\) along with the least favorable prior of the adaptive estimator. These distributions depend on the data only through the estimated value of \(\rho\), which takes the value -0.524 in this example. All three priors on \(b/\Sigma_{O}^{1/2}\) are discrete, symmetric about zero, and decreasing in \(|b|\). Hence, all three estimators will tend to be more efficient than \(Y_{U}\) when the true bias magnitude \(|b|\) is small. The adaptive prior has the important advantage over \(B\)-minimax priors of not requiring specification of the bound \(B\). A second advantage of the adaptive prior is that it is _robust_: the risk of \(\delta^{adapt}\) remains bounded as \(|b|\) grows large. In contrast, the risk of a \(B\)-minimax estimator grows rapidly and without limit once \(|b|\) exceeds the posited bound \(B\). ## 4 Main results Computing the optimally adaptive estimator requires solving (1). As we now show, this task amounts to solving a minimax problem with a scaled loss function, thereby allowing us to leverage results from the literature on computation of minimax estimators. Figure 2: Least favorable priors when \(\rho=-0.524\) ### Adaptation as minimax with scaled loss Plugging in the definition of \(R_{\max}(B,\delta)\), the criterion that the optimally adaptive estimator \(\delta^{\mathrm{adapt}}\) minimizes can be written \[\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta)}{R^{*}(B)}=\sup_{B\in\mathcal{B} }\sup_{(\theta,b)\in\mathcal{C}_{B}}\frac{R(\theta,b,\delta)}{R^{*}(B)}=\sup_ {(\theta,b)\in\cup_{B^{\prime}\in\mathcal{B}}\mathcal{C}_{B^{\prime}}}\sup_{B \in\mathcal{B}\text{ s.t. }(\theta,b)\in\mathcal{C}_{B}}\frac{R(\theta,b, \delta)}{R^{*}(B)}\] where the last equality follows by noting that the double supremum on either side of this equality is over the same set of values of \((B,\theta,b)\). Letting \[\omega(\theta,b)=\left(\inf_{B\in\mathcal{B}\text{ s.t. 
}(\theta,b)\in\mathcal{C}_{B}}R^{*}(B) \right)^{-1}, \tag{2}\] we obtain the following lemma. **Lemma 4.1**.: _The loss of efficiency under adaptation (1) is given by_ \[A^{*}(\mathcal{B})=\inf_{\delta}\sup_{(\theta,b)\in\cup_{B^{\prime}\in \mathcal{B}}\mathcal{C}_{B^{\prime}}}\omega(\theta,b)R(\theta,b,\delta)\] _and a decision \(\delta^{\mathrm{adapt}}\) that achieves this infimum (if it exists) is optimally adaptive._ Lemma 4.1 shows that finding an optimally adaptive decision can be written as a minimax problem with a weighted version of the original loss function. In particular, \(\delta\) is found to minimize the maximum (over \(\theta,b\)) of the objective \(\omega(\theta,b)R(\theta,b,\delta)=E_{\theta,b}\omega(\theta,b)L(\theta,b, \delta(Y))\). Hence, the optimal adaptive estimator corresponds to a minimax estimator under the loss function \(\omega(\theta,b)L(\theta,b,\delta(Y))\). Of course, \(\omega(\theta,b)\) must be computed, but this also amounts to computing a family of minimax problems. **Main example (continued).** In our main example, the sets \(\mathcal{C}_{B}=\mathbb{R}\times[-B,B]\) are nested so that \(R^{*}(B)\) is increasing in \(B\) and \(\omega(\theta,b)=R^{*}(|b|)^{-1}\). To summarize, provided that we have a general method for constructing minimax estimators, the optimally adaptive estimator can be computed via the following algorithm. **Algorithm 4.1** (General computation of optimally adaptive estimator).: **Input**: Set of parameter spaces \(\mathcal{C}_{B}\), loss function, \((Y,\Sigma)\) as described in Section 2, along with a generic method for computing minimax estimators **Output**: Optimally adaptive estimator \(\delta^{\rm adapt}\) and loss of efficiency under adaptation \(A^{*}(\mathcal{B})\) 1. Compute the minimax risk \(R^{*}(B)\) for each \(B\in\mathcal{B}\) and use this to form the weight \(\omega(\theta,b)\) as in (2). 2. Form the loss function \((\theta,b,a)\mapsto\omega(\theta,b)L(\theta,b,a)\). Compute the optimally adaptive estimator \(\delta^{\rm adapt}\) as the minimax estimator under the parameter space \(\cup_{B\in\mathcal{B}}\mathcal{C}_{B}\), and compute the loss of efficiency under adaptation \(A^{*}(\mathcal{B})\) as the corresponding minimax risk. ### Computing minimax estimators Algorithm 4.1 allows us to compute adaptive estimators once we have a generic method for solving minimax estimation problems. A typical approach to this problem is to use the insight that the minimax estimator can often be characterized as a Bayes estimator for a _least favorable prior_. Such estimators can be formulated as solving a convex optimization problem over distributions on \((\theta,b)\) that can be evaluated numerically using discretization or other approximation techniques so long as the dimension of \((\theta,b)\) is sufficiently low (see Chamberlain (2000), Elliott et al. (2015), Muller and Wang (2019) and Kline and Walters (2021) for recent applications in econometrics). We now summarize the relevant ideas as they apply to our general setup. In the next subsection, we use the fact that in our main example the minimax and adaptive estimators are invariant to certain transformations to reduce the problem to finding a least favorable prior over \(b\), with a flat (improper) prior on \(\theta\). Details on the choices made to evaluate the estimators numerically are provided in Online Appendix D. 
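A schematic rendering of Algorithm 4.1 in Python is given below. It is deliberately skeletal: `minimax_risk`, `solve_minimax`, and `loss` are placeholders for routines the user supplies (for instance the least-favorable-prior solver described in this subsection), the bias grid is assumed to cover the range of \(|b|\) of interest, and none of these names come from the paper's code.

```python
# Schematic sketch of Algorithm 4.1 (ours). `minimax_risk(B)` returns the
# B-minimax risk R*(B); `solve_minimax(scaled_loss)` is a generic minimax
# solver returning the optimal rule and its worst-case scaled risk.
def optimally_adaptive(B_grid, minimax_risk, solve_minimax, loss):
    # Step 1: tabulate R*(B) on a grid of bounds B
    R_star = {B: minimax_risk(B) for B in B_grid}

    # With nested sets C_B = R x [-B, B], the weight in (2) is 1 / R*(|b|);
    # the grid is assumed to contain a bound at least as large as any |b| used.
    def omega(theta, b):
        feasible = [R_star[B] for B in B_grid if abs(b) <= B]
        return 1.0 / min(feasible)

    # Step 2: minimax estimation under the scaled loss omega * L
    scaled_loss = lambda theta, b, a: omega(theta, b) * loss(theta, b, a)
    delta_adapt, A_star = solve_minimax(scaled_loss)
    return delta_adapt, A_star
```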
Consider the generic problem of computing a minimax decision over the parameter space \(\mathcal{C}\) for a parameter \(\vartheta\) under loss \(\bar{L}(\vartheta,\delta)\). We use \(E_{\vartheta}\) and \(P_{\vartheta}\) to denote expectation under \(\vartheta\) and the probability distribution of the data \(Y\) under \(\vartheta\). To implement Algorithm 4.1, \(\mathcal{C}_{B}\) plays the role of \(\mathcal{C}\) and \(L(\theta,b,\delta)\) plays the role of \(\bar{L}(\vartheta,\delta)\) for a \(B\) on a grid approximating \(\mathcal{B}\). We then solve this problem with \(\cup_{B\in\mathcal{B}}\mathcal{C}_{B}\) playing the role of \(\mathcal{C}\) and \(\omega(\theta,b)L(\theta,b,\delta)\) playing the role of \(\bar{L}(\vartheta,\delta)\). Letting \(\pi\) denote a _prior_ distribution on \(\mathcal{C}\), the _Bayes risk_ of \(\delta\) is given by \[R_{\rm Bayes}(\pi,\delta)=\int E_{\vartheta}\bar{L}(\vartheta,\delta(Y))\,d \pi(\vartheta)=\int\int\bar{L}(\vartheta,\delta(y))\,dP_{\vartheta}(y)d\pi( \vartheta).\] The _Bayes decision_, which we will denote \(\delta_{\pi}^{\rm Bayes}\), optimizes \(R_{\rm Bayes}(\pi,\delta)\) over \(\delta\). It can be computed by optimizing expected loss under the posterior distribution for \(\vartheta\) taking \(\pi\) as the prior. Under squared error loss, the Bayes decision is the posterior mean. \(R_{\rm Bayes}(\pi,\delta)\) gives a lower bound for the worst-case risk of \(\delta\) under \(\mathcal{C}\) and \(R_{\rm Bayes}(\pi,\delta_{\pi}^{\rm Bayes})\) gives a lower bound for the minimax risk. Under certain conditions, a _minimax theorem_ applies, which tells us that this lower bound is in fact sharp. In this case, letting \(\Gamma\) denote the set of priors \(\pi\) supported on \(\mathcal{C}\), the minimax risk over \(\mathcal{C}\) is given by \[\min_{\delta}\max_{\pi\in\Gamma}R_{\rm Bayes}(\pi,\delta)=\max_{\pi\in\Gamma} \min_{\delta}R_{\rm Bayes}(\pi,\delta)=\max_{\pi\in\Gamma}R_{\rm Bayes}(\pi, \delta_{\pi}^{\rm Bayes}).\] The distribution \(\pi\) that solves this maximization problem is called the _least favorable prior_. When the minimax theorem applies, the Bayes decision for this prior is the minimax decision over \(\mathcal{C}\). The expression \(R_{\rm Bayes}(\pi,\delta_{\pi}^{\rm Bayes})\) is convex as a function of \(\pi\) if the set of possible decision functions is sufficiently unrestricted and the set \(\Gamma\) is convex. While one may need to allow randomized decisions in general, the estimation problems we consider will be such that the Bayes decision is nonrandomized. Thus, we can use convex optimization software to compute the least favorable prior and minimax estimator so long as we have a way of approximating \(\pi\) with a finite dimensional object that retains the convex structure of the problem. In our applications, we approximate \(\pi\) with the finite dimensional vector \((\pi(\vartheta_{1}),\ldots,\pi(\vartheta_{J}))\) for a grid of \(J\) values of \(\vartheta\), following Chamberlain (2000). ### Adaptive estimation in main example In our main example, we use invariance to further simplify the problem before applying the methods for computing minimax estimators in Section 4.2. We focus in the main text on the case of squared error loss \(L(\theta,b,\delta)=(\theta-\delta)^{2}\). Appendix A.1 provides proofs of the results in this section and includes general loss functions for estimation of the form \(L(\theta,b,\delta)=\ell(\theta-\delta)\). 
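To make the discretization concrete, the following sketch (ours, assuming the `cvxpy` and `scipy` packages) solves the discretized bounded normal mean problem that recurs below, attacking the saddle point from the decision side: the rule is represented by its values on a grid of data points, and the maximal quadrature-approximated risk over a grid of means in \([-\tau,\tau]\) is minimized. When the minimax theorem applies, the attained objective approximates the same minimax risk that the least-favorable-prior formulation delivers.

```python
# Sketch (ours): discretized minimax rule for estimating theta in [-tau, tau]
# from Y ~ N(theta, 1). Returns the data grid, the rule on that grid, and the
# approximate minimax risk (the quantity r_BNM(tau) used later in equation (5)).
import numpy as np
from scipy.stats import norm
import cvxpy as cp

def bounded_normal_mean(tau, n_theta=41, y_lim=8.0, n_y=401):
    theta_grid = np.linspace(-tau, tau, n_theta) if tau > 0 else np.array([0.0])
    y_grid = np.linspace(-y_lim - tau, y_lim + tau, n_y)
    dy = y_grid[1] - y_grid[0]

    d = cp.Variable(n_y)        # decision rule evaluated on the y grid
    t = cp.Variable()           # worst-case risk over the theta grid
    constraints = []
    for th in theta_grid:
        w = norm.pdf(y_grid - th) * dy                       # quadrature weights under N(th, 1)
        constraints.append(cp.sum(cp.multiply(w, cp.square(d - th))) <= t)
    cp.Problem(cp.Minimize(t), constraints).solve()
    return y_grid, d.value, t.value
```

For instance, `bounded_normal_mean(1.0)[2]` should come out close to the known minimax risk of roughly 0.45 for \(\tau=1\); finer grids and wider truncation improve the approximation.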
It will be useful to transform the data to \(Y_{U},T_{O}\) where \(T_{O}=Y_{O}/\sqrt{\Sigma_{O}}\) is the \(t\)-statistic for a specification test of the null that \(b=0\). We observe \[\left(\begin{array}{c}Y_{U}\\ T_{O}\end{array}\right)\sim N\left(\left(\begin{array}{c}\theta\\ b/\sqrt{\Sigma_{O}}\end{array}\right),\left(\begin{array}{cc}\Sigma_{U}& \rho\sqrt{\Sigma_{U}}\\ \rho\sqrt{\Sigma_{U}}&1\end{array}\right)\right). \tag{3}\] where \(\Sigma_{U}\), \(\Sigma_{O}\) and \(\rho=\mathrm{corr}(Y_{U},T_{O})=\mathrm{corr}(Y_{U},Y_{O})\) are treated as known. This representation is equivalent to our original setting, as \(\Sigma_{O}\) is known and can be used to transform \(T_{O}\) to \(Y_{O}\). Applying invariance arguments and the Hunt-Stein theorem, it follows that the \(B\)-minimax estimator \(\delta_{B}^{*}(Y_{U},T_{O})\) takes the form \[\rho\sqrt{\Sigma_{U}}\delta\left(T_{O}\right)+Y_{U}-\rho\sqrt{\Sigma_{U}}T_{O}. \tag{4}\] To build some intuition for this expression, note that \(Y_{U}-\rho\sqrt{\Sigma_{U}}T_{O}\) is the optimal GMM estimator of \(\theta\) under the restriction \(b=0\). When \(\rho\sqrt{\Sigma_{O}}\sqrt{\Sigma_{U}}=-\Sigma_{O}\), optimal GMM reduces to the restricted estimator \(Y_{R}\), which is efficient in this case. If \(b\neq 0\), then GMM will exhibit a bias of \(-\frac{\rho\sqrt{\Sigma_{U}}}{\sqrt{\Sigma_{O}}}b\). The estimator in (4) subtracts from the GMM estimate a corresponding estimate \(-\rho\sqrt{\Sigma_{U}}\delta\left(\frac{Y_{O}}{\sqrt{\Sigma_{O}}}\right)\) of this bias term. The \(\delta\left(T_{O}\right)\) employed by the \(B\)-minimax estimator can be shown to evaluate to the _bounded normal mean_ estimator \(\delta^{\mathrm{BNM}}\left(T_{O};\frac{B}{\sqrt{\Sigma_{O}}}\right)\), where \(\delta^{\mathrm{BNM}}(y;\tau)\) denotes the minimax estimator of \(\vartheta\in\mathcal{C}=[-\tau,\tau]\) when \(Y\sim N(\vartheta,1)\). The bounded normal mean problem has been studied extensively (see, e.g., Lehmann and Casella, 1998, Section 9.7(i), p. 425) and we detail its computation in Online Appendix D.2. The corresponding \(B\)-minimax risk is \[R^{*}(B)=\rho^{2}\Sigma_{U}r^{\mathrm{BNM}}\left(\frac{B}{\sqrt{\Sigma_{O}}} \right)+\Sigma_{U}-\rho^{2}\Sigma_{U}, \tag{5}\] where \(r^{\mathrm{BNM}}(\tau)\) denotes minimax risk in the bounded normal mean problem. This expression was used to construct the oracle risk curve displayed in Figure 1. We evaluate \(r^{\mathrm{BNM}}(\tau)\) numerically by computing a least favorable prior on a grid approximating \([-\tau,\tau]\), following the methods described in Section 4.2 above. The scaling function (2) can now be written \(\omega(\theta,b)=R^{*}(|b|)\), where \(R^{*}\) for our problem is given in (5). To compute the optimally adaptive estimator for squared error loss, it therefore suffices to compute the minimax estimator for \(\theta\) under the scaled loss function \(R^{*}(|b|)^{-1}(\theta-\delta)^{2}\). Invariance arguments can again be applied to show that the optimally adaptive estimator takes the same form as in (4), but with \(\delta\) given by the estimator \(\tilde{\delta}^{\mathrm{adapt}}(t;\rho)\), which minimizes \[\sup_{\tilde{b}\in\mathbb{R}}\frac{E_{T\sim N(\tilde{b},1)}(\tilde{\delta}(T) -\tilde{b})^{2}+\rho^{-2}-1}{r^{\mathrm{BNM}}(|\tilde{b}|)+\rho^{-2}-1}. \tag{6}\] The loss of efficiency under adaptation \(A^{*}([0,\infty])\) is given by the minimized value of (6). 
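Given a routine for \(r^{\mathrm{BNM}}\), the oracle risk curve in (5) is immediate. The sketch below (ours) reuses the hypothetical `bounded_normal_mean` helper from the earlier sketch; the function name and interface are again our own.

```python
# Sketch (ours): the oracle risk curve R*(B) in equation (5) for a grid of
# finite bounds B, given Sigma_U, Sigma_O, and rho.
import numpy as np

def oracle_risk_curve(B_grid, var_u, var_o, rho):
    risks = []
    for B in B_grid:
        r_bnm = bounded_normal_mean(B / np.sqrt(var_o))[2]   # minimax risk of the scaled bias
        risks.append(rho**2 * var_u * r_bnm + var_u - rho**2 * var_u)
    return np.array(risks)

# Sanity checks: R*(0) = (1 - rho^2) * var_u (efficient GMM risk), and
# R*(B) approaches var_u (the risk of Y_U) as B grows large.
```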
Following the approach described in Section 4.2, we evaluate \(\tilde{\delta}^{\mathrm{adapt}}(t;\rho)\) and \(A^{*}([0,\infty])\) numerically by computing a least favorable prior for \(\tilde{b}\) over an equally spaced grid approximation of the interval \([-9,9]\). The least favorable prior for \(\tilde{b}\) corresponds to a prior on \(b/\sqrt{\Sigma_{O}}\), and the invariance arguments for \(\theta\) lead to a flat (improper) prior for \(\theta\). As detailed in Online Appendix D.3, we solve for the least favorable prior using convex programming methods. We summarize these results in the following theorem, which is proved in Appendix A.1. **Theorem 4.1**.: _Consider our main example, given by the model in (3) with parameter spaces \(\mathcal{C}_{B}=\mathbb{R}\times[-B,B]\) for \(B\in\mathcal{B}=[0,\infty]\) and squared error loss \(L(\theta,b,d)=(d-\theta)^{2}\). The following results hold:_ 1. _The_ \(B\)_-minimax estimator takes the form in (_4_) with_ \(\delta\left(\cdot\right)\) _given by_ \(\delta^{\mathrm{BNM}}\left(\cdot;\frac{B}{\sqrt{\Sigma_{O}}}\right)\) _and the minimax risk_ \(R^{*}(B)\) _is given by (_5_)._ 2. _An optimally adaptive estimator is given by (_4_) with_ \(\delta(\cdot)\) _given by a function_ \(\tilde{\delta}^{\mathrm{adapt}}(t;\rho)\) _that minimizes (_6_)._ 3. _The loss of efficiency under adaptation is_ \[\inf_{\tilde{\delta}}\sup_{\tilde{b}\in\mathbb{R}}\frac{E_{T\sim N(\tilde{b}, 1)}(\tilde{\delta}(T)-\tilde{b})^{2}+\rho^{-2}-1}{r^{\mathrm{BNM}}(|\tilde{b} |)+\rho^{-2}-1}=\sup_{\pi}\inf_{\tilde{\delta}}\int\frac{E_{T\sim N(\tilde{b},1)}(\tilde{\delta}(T)-\tilde{b})^{2}+\rho^{-2}-1}{r^{\mathrm{BNM}}(|\tilde{ b}|)+\rho^{-2}-1}\,d\pi(\tilde{b})\] _where the supremum is over all probability distributions_ \(\pi\) _on_ \(\mathbb{R}\)_._ #### 4.3.1 Weighted average interpretation One can write the estimator in (4) as a weighted average: \[w(T_{O})\cdot Y_{U}+(1-w(T_{O}))\cdot\underbrace{(Y_{U}-\rho\sqrt{\Sigma_{U}} \cdot T_{O})}_{\text{Optimal GMM}}, \tag{7}\] where \(w(T_{O})=\delta(T_{O})/T_{O}\) is a data-dependent weight. The \(B\)-minimax estimator takes \(\delta(\cdot)\) to be a minimax estimator that uses the constraint \(|b|\leq B\) with known \(B\), whereas the optimally adaptive estimator takes as \(\delta(\cdot)\) an estimator engineered to adapt to different values of \(B\) in this constraint. As detailed in Online Appendix D.4, we find numerically that the adaptive estimator "shrinks" \(T_{O}\) towards zero, leading the weight \(\delta(T_{O})/T_{O}\) to fall between zero and one for all values of \(\rho\). The data dependent nature of the weight \(w(T_{O})\) is clearly crucial for the robustness properties of the optimally adaptive estimator. As \(T_{O}\) grows large, less weight is placed on the optimal GMM estimator and more weight is placed on the unrestricted estimator \(Y_{U}\). If one were to commit ex-ante to a fixed (i.e., non-stochastic) weight on \(Y_{U}\) below one, the worst-case risk of the procedure would become unbounded as the optimal GMM estimator can exhibit arbitrarily large bias. Consequently, worst case adaptation regret would also become unbounded. #### 4.3.2 Impossibility of consistently estimating the asymptotic distribution Recall that (3) provides the asymptotic distribution of \((Y_{U},T_{O})\) under local misspecification. In this asymptotic regime, \(b\) gives the limit of the bias of the restricted estimator divided by \(\sqrt{n}\) and cannot be consistently estimated. 
In contrast, consistent estimates for \(\rho\) and \(\Sigma_{U}\) are available via the usual asymptotic variance formulas used in overidentification tests for GMM. To obtain the sampling distribution of the optimally adaptive estimator, one can plug the distribution of \((Y_{U},T_{O})\) stipulated in (3) into expression (7). Unfortunately, this distribution cannot be consistently estimated, as it depends on the local asymptotic bias \(b\). For instance, the asymptotic variance of the optimally adaptive estimator \(\delta^{\rm adapt}\) takes the form \(\rho^{2}\Sigma_{U}v(b/\sqrt{\Sigma_{O}})+\Sigma_{U}-\rho^{2}\Sigma_{U}\), where \(v(\tilde{b})=\mbox{var}_{T_{O}\sim N(\tilde{b},1)}(\tilde{\delta}^{\rm adapt}(T_{O};\rho))\) denotes the variance of \(\tilde{\delta}^{\rm adapt}(T_{O};\rho)\) when \(T_{O}\sim N(\tilde{b},1)\). Because \(\tilde{\delta}^{\rm adapt}(T_{O};\rho)\) is a nonlinear function of \(T_{O}\), this variance formula is a nonconstant function of \(b\). Since \(b\) cannot be consistently estimated, it is not possible to consistently estimate the asymptotic variance of \(\delta^{\rm adapt}\). See Leeb and Potscher (2005) for a discussion of these issues in the context of pre-test estimators. Related arguments (Low, 1997; Armstrong and Kolesar, 2018) establish the impossibility of constructing adaptive CIs. While it is not possible to consistently estimate the asymptotic variance of \(\delta^{\rm adapt}\), one can form an upper bound by taking the maximum of the asymptotic variance as a function of the unknown bias parameter \(b\). It can be shown numerically that, except for cases where \(|\rho|\) is very large - a setting which we argue below requires special care - the largest possible variance of the optimally adaptive estimator lies strictly below that of \(Y_{U}\). Hence, the asymptotic standard error associated with \(Y_{U}\) can generally be viewed as also providing a conservative estimate of the standard deviation of the optimally adaptive estimator. When \(b\) is given, one can construct consistent estimates of the sampling distribution of the adaptive estimator, which is useful for assessing its theoretical risk properties. In particular, the mean squared error of the estimator (4) is given by \[\rho^{2}\Sigma_{U}r(b/\sqrt{\Sigma_{O}})+\Sigma_{U}-\rho^{2}\Sigma_{U}\quad\text{where}\quad r(\tilde{b})=E_{T\sim N(\tilde{b},1)}(\delta(T)-\tilde{b})^{2}.\] In our applications, we report asymptotic risk functions by plotting them as a function of \(b\). #### 4.3.3 Lookup table To ease computation of the optimally adaptive estimator, we solved for the function \(\tilde{\delta}^{\text{adapt}}(t;\rho)\) numerically at a grid of values of the scalar parameter \(\rho\). Tabulating these solutions yields a simple lookup table that allows rapid retrieval of (a spline interpolation of) the empirically relevant function \(\tilde{\delta}^{\text{adapt}}(\cdot;\rho)\). We detail the construction of this lookup table in Online Appendix D.4. After evaluating this function at the realized \(T_{O}\), the remaining computations take an analytic closed form and can be evaluated nearly instantaneously. ### Simple "nearly adaptive" estimators While the optimally adaptive estimator is straightforward to compute via convex programming and is trivial to implement once the solution is tabulated, it lacks a simple closed form. To reduce the opacity of the procedure, one can replace the term \(\delta(T_{O})\) in (4) with an analytic approximation.
A natural choice of approximations for \(\delta(T_{O})\) is the class of _soft-thresholding_ estimators, which are indexed by a threshold \(\lambda\geq 0\) and given by \[\delta_{S,\lambda}(T)=\max\left\{|T|-\lambda,0\right\}\operatorname{sgn}(T)=\begin{cases}T-\lambda&\text{if }T>\lambda\\ T+\lambda&\text{if }T<-\lambda\\ 0&\text{if }|T|\leq\lambda,\end{cases}\] which leads to the estimator \[\rho\sqrt{\Sigma_{U}}\delta_{S,\lambda}\left(T_{O}\right)+Y_{U}-\rho\sqrt{\Sigma_{U}}T_{O}=\begin{cases}Y_{U}-\rho\sqrt{\Sigma_{U}}\lambda&\text{ if }T_{O}>\lambda\\ Y_{U}+\rho\sqrt{\Sigma_{U}}\lambda&\text{ if }T_{O}<-\lambda\\ Y_{U}-\rho\sqrt{\Sigma_{U}}T_{O}&\text{ if }|T_{O}|\leq\lambda.\end{cases}\] We also consider the class of _hard-thresholding_ estimators, which are given by \[\delta_{H,\lambda}(T)=T\cdot I(|T|>\lambda)=\begin{cases}T&\text{if }|T|>\lambda\\ 0&\text{if }|T|\leq\lambda,\end{cases}\] which leads to the estimator \[\rho\sqrt{\Sigma_{U}}\delta_{H,\lambda}\left(T_{O}\right)+Y_{U}-\rho\sqrt{\Sigma_{U}}T_{O}=\begin{cases}Y_{U}&\text{if }|T_{O}|>\lambda\\ Y_{U}-\rho\sqrt{\Sigma_{U}}T_{O}&\text{if }|T_{O}|\leq\lambda.\end{cases}\] Note that hard-thresholding leads to a simple pre-test rule: use the unrestricted estimator if \(|T_{O}|>\lambda\) (i.e. if we reject the null that \(b=0\) using critical value \(\lambda\)) and otherwise use the GMM estimator that is efficient under the restriction \(b=0\). The soft-thresholding estimator uses a similar idea, but avoids the discontinuity at \(|T_{O}|=\lambda\). To compute the hard and soft-thresholding estimators that are optimally adaptive in these classes of estimators, we minimize (6) numerically over \(\lambda\). The minimax theorem does not apply to these restricted classes of estimators. Fortunately, however, the resulting two dimensional minimax problem in \(\lambda\) and \(\tilde{b}\) is easily solved in practice as explained in Online Appendix D.5. The optimized value of (6) then gives the worst-case adaptation regret of the optimally adaptive soft or hard-thresholding estimator. Figure 3 plots the optimally adaptive and soft-thresholding estimators of the scaled bias as functions of \(T_{O}\). To ease visual inspection of the differences between these estimators, they have been plotted over the restricted range [-3,3]. These functions depend on the data only through the estimated value of \(\rho\), which takes the value -0.524 here, as in the two-way fixed effects example introduced in Section 3. The optimal soft threshold \(\lambda\) yielding the lowest worst case adaptation regret in this example is 0.52. Both the adaptive and soft-thresholding estimators continuously shrink small values of \(T_{O}\) towards zero. However, the soft-thresholding estimator sets all values of \(|T_{O}|\) less than \(0.52\) to zero, while the optimally adaptive estimator avoids flat regions. In contrast to the continuous nature of these two adaptive estimators, a conventional pre-test using \(\lambda=1.96\) exhibits large discontinuities at the hard threshold. Like the optimally adaptive estimator \(\delta^{adapt}\), the worst-case adaptation regret of the optimally adaptive soft and hard-thresholding estimators depends only on \(\rho\). We report comparisons between these estimators in our empirical applications in Section 5 and provide a more detailed analysis in Online Appendix C.1. Figure 3: Estimators of scaled bias when \(\rho=-0.524\)
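The thresholding rules above, and the resulting combination estimators, are simple to code. The sketch below (ours) also makes the weighted-average form (7) explicit; the threshold \(\lambda\) is an input, whereas the adaptive choice of \(\lambda\) (for example the value 0.52 reported above for \(\rho=-0.524\)) comes from minimizing (6) and is not reproduced here. Setting `rule=hard_threshold` with \(\lambda=1.96\) recovers the conventional pre-test estimator.

```python
# Sketch (ours): soft- and hard-thresholding rules and the combination
# estimator of theta built from them, in the weighted-average form (7).
import numpy as np

def soft_threshold(t, lam):
    # max{|t| - lam, 0} * sgn(t)
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def hard_threshold(t, lam):
    # keep t beyond the threshold, zero otherwise (a pre-test rule)
    return np.where(np.abs(t) > lam, t, 0.0)

def threshold_estimator(y_u, y_o, var_u, var_o, rho, lam, rule=soft_threshold):
    t_o = y_o / np.sqrt(var_o)                  # t-statistic for the null b = 0
    gmm = y_u - rho * np.sqrt(var_u) * t_o      # efficient GMM imposing b = 0
    shrunk = rule(t_o, lam)                     # shrinkage estimate of the scaled bias
    # equals w * y_u + (1 - w) * gmm with w = shrunk / t_o, as in (7)
    return gmm + rho * np.sqrt(var_u) * shrunk
```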
As discussed in Online Appendix C.1, soft-thresholding yields nearly optimal performance for the adaptation problem relative to \(\delta^{\rm adapt}\) in a wide range of settings. In contrast, hard-thresholding typically exhibits both substantially elevated worst case adaptation regret and worst case risk driven by the possibility that the scaled bias has magnitude near \(\lambda\). In Online Appendix C.2 we consider the behavior of these adaptive estimators as \(|\rho|\to 1\) and show that the worst-case adaptation regret of \(\delta^{\rm adapt}\), as well as the optimally adaptive soft and hard-thresholding estimators, increases at a logarithmic rate. These conclusions mirror the findings of Bickel (1984) for the case where the set \(\mathcal{B}\) of bounds \(B\) on the bias consists of the two elements \(0\) and \(\infty\). When \(|\rho|\) is close to \(1\), using the constraint \(b=0\) leads to a very large efficiency gain relative to the unconstrained estimator. As \(|\rho|\to 1\), it becomes increasingly difficult to achieve this large efficiency gain when \(b\) is small while retaining robustness to large values of \(b\). This dilemma leads to increasing loss of efficiency under adaptation for \(|\rho|\) near \(1\). In particular, the optimally adaptive estimator exhibits increasing worst-case risk relative to \(Y_{U}\) as \(|\rho|\to 1\) (see Lemma C.3 in Online Appendix C.2). ### Constrained adaptation If the loss of efficiency under adaptation \(A^{*}(\mathcal{B})\) is large, both the optimally adaptive estimator and its soft-thresholding approximation will possess worst case risk far above the oracle minimax risk, which limits their practical appeal as devices for building consensus among researchers with different priors. As noted in the previous subsection, \(A^{*}(\mathcal{B})\) will tend to be large when \(|\rho|\) is large, which corresponds to settings where \(Y_{R}\) is orders of magnitude more precise than \(Y_{U}\). In such settings, substantial weight will be placed on the GMM estimator to guard against the immense adaptation regret that would emerge if \(b=0\), which exposes the researcher to severe biases if \(|b|\) is large. In such cases, it may be attractive to temper the degree of adaptation that takes place by restricting attention to estimators that exhibit worst case risk no greater than a constant \(\bar{R}\). Formally, this leads to the problem \[A^{*}(\mathcal{B};\overline{R})=\inf_{\delta}\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta)}{R^{*}(B)}\quad\text{s.t.}\quad\sup_{B\in\mathcal{B}}R_{\max}(B,\delta)\leq\overline{R}. \tag{8}\] We can rewrite this formulation as a weighted minimax problem similar to the one in Section 4.1 by setting \(t=\overline{R}/A^{*}(\mathcal{B};\overline{R})\) and considering the problem \[\inf_{\delta}\sup_{B\in\mathcal{B}}\max\left\{\frac{R_{\max}(B,\delta)}{R^{*}(B)},\frac{R_{\max}(B,\delta)}{t}\right\}=\inf_{\delta}\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta)}{\min\left\{R^{*}(B),t\right\}}. \tag{9}\] Indeed, any solution to (8) must also be a solution to (9) with \(t=\overline{R}/A^{*}(\mathcal{B};\overline{R})\), since any decision function achieving a strictly better value of (9) would satisfy the constraint in (8) and achieve a strictly better value of the objective in (8).
Conversely, letting \(\tilde{A}^{*}(t)\) be the value of (9), any solution to (9) will achieve the same value of the objective (8) and will satisfy the constraint for \(\bar{R}=t\cdot\tilde{A}^{*}(t)\). In fact, this solution to (9) will also solve (8) for \(\bar{R}=t\cdot\tilde{A}^{*}(t)\) so long as this value of \(\bar{R}\) is large enough to allow some scope for adaptation. Arguing as in Section 4.1, we can write the optimization problem (9) as \[\inf_{\delta}\sup_{(\theta,b)\in\cup_{B^{\prime}\in\mathcal{B}}\mathcal{C}_{B^{\prime}}}\tilde{\omega}(\theta,b,t)R(\theta,b,\delta), \tag{10}\] \[\text{where }\tilde{\omega}(\theta,b,t)=\left(\inf_{B\in\mathcal{B}\text{ s.t. }(\theta,b)\in\mathcal{C}_{B}}\min\left\{R^{*}(B),t\right\}\right)^{-1}=\max\left\{\omega(\theta,b),1/t\right\}\] and \(\omega(\theta,b)\) is given in (2) in Section 4.1. Thus, we can solve (9) by solving for the minimax estimator under the loss function \((\theta,b,d)\mapsto\tilde{\omega}(\theta,b,t)L(\theta,b,d)\). Letting \(\tilde{A}^{*}(t)\) be the optimized objective function, we can then solve (8) by finding a \(t\) such that \(\bar{R}=t\cdot\tilde{A}^{*}(t)\). We summarize these results in the following lemma, which is proved in Section A.2 of the appendix.

**Lemma 4.2**.: _Any solution to (8) is also a solution to (10) with \(t=\overline{R}/A^{*}(\mathcal{B};\overline{R})\). Conversely, let \(\tilde{A}^{*}(t)\) denote the value of (10) and let \(\tilde{R}(t)=\tilde{A}^{*}(t)\cdot t\). If \(\tilde{R}(t)>\inf_{\delta}\sup_{B\in\mathcal{B}}R_{\max}(B,\delta)\) and \(\inf_{B\in\mathcal{B}}R^{*}(B)>0\), then \(A^{*}(\mathcal{B};\tilde{R}(t))=\tilde{A}^{*}(t)\) and any solution to (10) is also a solution to (8) with \(\bar{R}=\tilde{R}(t)\)._

How should the bound \(\overline{R}\) on worst-case risk be chosen? This choice depends on how one trades off efficiency when \(b\) is small against robustness when \(b\) is large. As noted by Bickel (1984) in his analysis of the granular case where \(\mathcal{B}=\{0,\infty\}\), it is often possible to greatly improve the risk at \(b=0\) relative to the unbiased estimator \(Y_{U}\) in exchange for modest increases in risk in the worst case. Similarly, we find that moderate choices of \(\overline{R}\) equal to \(20\%\) or \(50\%\) above the risk of \(Y_{U}\) yield large efficiency improvements in our applications when \(b\) is small. One way of measuring these tradeoffs, suggested by de Chaisemartin and D'Haultfoeuille (2020a), is to look for an estimator where the best-case decrease in risk relative to \(Y_{U}\) is greater than the worst-case increase in risk over \(Y_{U}\). We show numerically in Online Appendix C.1 that this property holds for the constrained soft-thresholding version of our estimator so long as \(\overline{R}\) is less than \(70\%\) above the risk of \(Y_{U}\), and that it holds even for unconstrained soft-thresholding (\(\overline{R}=\infty\)) when \(\rho^{2}\) is less than \(0.86\). The optimally adaptive estimator exhibits similar properties: depictions of its performance as a function of \(\rho^{2}\)--both when unconstrained and when \(\overline{R}\) is set at \(120\%\) of the risk of \(Y_{U}\)--are provided in Figure A5.

## 5 Examples

We now consider a series of examples where questions of specification arise and examine how adapting to misspecification compares to pre-testing and other strategies such as committing ex-ante to either the unrestricted or restricted estimator.
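Throughout these examples, the primitives of the adaptation problem are assembled from the two point estimates and their estimated covariance matrix. A minimal sketch of that bookkeeping (illustrative only; Python, with variable and function names that are ours rather than from any replication package) is:

```python
import numpy as np

def adaptation_inputs(y_u, y_r, var_u, var_r, cov_ur):
    """Map (Y_U, Y_R) and their estimated covariance matrix into the objects
    used below: the contrast Y_O = Y_R - Y_U, its variance, the correlation
    rho between Y_U and Y_O, and the scaled bias estimate T_O."""
    y_o = y_r - y_u
    var_o = var_r + var_u - 2.0 * cov_ur   # Var(Y_R - Y_U)
    cov_uo = cov_ur - var_u                # Cov(Y_U, Y_R - Y_U)
    rho = cov_uo / np.sqrt(var_u * var_o)
    t_o = y_o / np.sqrt(var_o)
    return y_o, var_o, rho, t_o
```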
Because the only inputs required to compute the adaptive estimator are the restricted and unrestricted point estimates along with their estimated covariance matrix, the burden on researchers of reporting adaptive estimates is very low. In the examples below, we draw on published tables of point estimates and standard errors whenever possible, using the replication data only to derive estimates of the covariance between the estimators. In the majority of these examples, we find that the restricted estimator is nearly efficient, implying the relevant covariances could have been inferred from published standard errors.

### Adapting to a pre-trend (Dobkin et al., 2018)

We begin by returning to an example from Dobkin et al. (2018), who study the effects of unexpected hospitalization on out of pocket (OOP) spending. They consider a panel specification of the form \[OOP_{it}=\gamma_{t}+X_{it}^{\prime}\alpha+\sum_{\ell=0}^{3}\mu_{\ell}D_{it}^{\ell}+\varepsilon_{it},\] where \(OOP_{it}\) is the OOP spending of individual \(i\) in calendar year \(t\), \(D_{it}^{\ell}=1\{t-e_{i}=\ell\}\) is an event time indicator, \(e_{i}\) is the date of hospitalization, and \(X_{it}\) is a vector of interactions between year dummies and grouped birth cohort dummies. The \(\{\mu_{\ell}\}_{\ell=0}^{3}\) are meant to capture the causal effect of hospitalization on OOP spending at various horizons, with \(\ell=0\) giving the contemporaneous impact. Concerned that their analysis may be confounded by trending omitted variables, the authors add a linear trend \(t-e_{i}\) to \(X_{it}\) in their baseline specification but also report results dropping the trend.

Table 1 shows the results of this robustness exercise at each horizon \(\ell\in\{0,1,2,3\}\), where we have denoted the ordinary least squares (OLS) estimates of \(\mu_{\ell}\) including the trend as \(Y_{U}\) and the estimates omitting the trend as \(Y_{R}\). These point estimates exactly replicate the numbers underlying Panel A of Dobkin et al. (2018)'s Figure 1. The restricted estimates of \(\mu_{0}\) exhibit standard errors about 25% lower than the corresponding unrestricted estimates, with larger precision gains present at longer horizons. The GMM estimator that imposes \(b=0\) tracks \(Y_{R}\) closely and yields trivial improvements in precision, suggesting the restricted estimator is fully efficient. Consequently, the variability of the difference \(Y_{O}\) between the restricted and unrestricted estimators can be closely approximated by the difference between the squared standard error of \(Y_{U}\) and that of \(Y_{R}\). At each horizon, we find a standardized difference \(T_{O}\) between the estimators of approximately 1.2. Since the difference \(Y_{O}\) between the restricted and unrestricted estimators is not statistically distinguishable from zero at conventional levels of significance, the pre-test estimator simply discards the noisy estimates that include a trend and selects the restricted model. However, \(Y_{O}\) offers a fairly noisy assessment of the restricted estimator's bias. While zero bias cannot be rejected at the 5% level in the year after hospitalization, neither can a bias equal to 50% of the restricted estimate. The adaptive estimator balances these considerations regarding robustness and precision, generating an estimate roughly halfway between \(Y_{R}\) and \(Y_{U}\).
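To make this concrete, the following sketch (illustrative only; Python) reproduces the year-0 GMM and soft-thresholding estimates reported in Table 1 below using only the published point estimates, the standard error of \(Y_{O}\), the reported correlation \(\rho\), and the soft threshold of 0.52.

```python
import numpy as np

# Year-0 (contemporaneous) figures from Table 1.
y_u, se_u = 2217.0, 257.0     # unrestricted: linear trend included
y_r = 2409.0                  # restricted: trend omitted
y_o, se_o = y_r - y_u, 160.0  # contrast Y_O = Y_R - Y_U and its std error
rho, lam = -0.524, 0.52       # reported correlation and soft threshold

t_o = y_o / se_o                                      # about 1.2
gmm = y_u - rho * se_u * t_o                          # about 2,379
delta_s = np.sign(t_o) * max(abs(t_o) - lam, 0.0)     # soft-thresholded bias
soft = rho * se_u * delta_s + y_u - rho * se_u * t_o  # about 2,287
print(round(t_o, 2), round(gmm), round(soft))
```

The same two lines of arithmetic, fed with the corresponding rows of Table 1, reproduce the GMM and soft-thresholding columns at the other horizons up to rounding in the published inputs.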
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Yrs since & & & & & & & Soft- & Pre- \\ hospital & & \(Y_{U}\) & \(Y_{R}\) & \(Y_{O}\) & GMM & Adaptive & threshold & test \\ \hline \hline 0 & Estimate & 2,217 & 2,409 & 192 & 2,379 & 2,302 & 2,287 & 2,409 \\ & Std Error & (257) & (221) & (160) & (219) & & & \\ & Max Regret & 38\% & \(\infty\) & & \(\infty\) & 15\% & 15\% & 68\% \\ & Threshold & & & & & & 0.52 & 1.96 \\ \hline 1 & Estimate & 1,268 & 1,584 & 316 & 1,552 & 1,435 & 1,408 & 1,584 \\ & Std Error & (337) & (241) & (263) & (239) & & & \\ & Max Regret & 98\% & \(\infty\) & & \(\infty\) & 33\% & 34\% & 124\% \\ & Threshold & & & & & & 0.59 & 1.96 \\ \hline 2 & Estimate & 989 & 1,436 & 447 & 1,394 & 1,246 & 1,210 & 1,436 \\ & Std Error & (430) & (270) & (373) & (267) & & & \\ & Max Regret & 159\% & \(\infty\) & & \(\infty\) & 47\% & 49\% & 161\% \\ & Threshold & & & & & & 0.66 & 1.96 \\ \hline 3 & Estimate & 1,234 & 1,813 & 579 & 1,752 & 1,574 & 1,530 & 1,813 \\ & Std Error & (530) & (313) & (482) & (309) & & & \\ & Max Regret & 195\% & \(\infty\) & & \(\infty\) & 54\% & 57\% & 180\% \\ & Threshold & & & & & & 0.69 & 1.96 \\ \hline \end{tabular}
\end{table}

Table 1: Impact of unexpected hospitalization on out of pocket (OOP) expenditures of the non-elderly insured (ages 50 to 59) from Dobkin et al. (2018). Standard errors in parentheses clustered by individual as in original study. “Yrs since hospital” refers to years since hospitalization. “Max regret” refers to the worst case adaptation regret in percentage terms \((A_{\max}(\mathcal{B},\delta)-1)\times 100\). The correlation coefficients between \(Y_{U}\) and \(Y_{O}\) by years since hospitalization are -0.524, -0.703, -0.784 and -0.813 respectively.

The worst case adaptation regret of the adaptive estimator rises from only \(15\%\) for the contemporaneous impact to \(54\%\) three years after hospitalization. The large value of \(A^{*}(\mathcal{B})\) found at \(\ell=3\) is attributable to the elevated precision gains associated with \(Y_{R}\) at that horizon: in exchange for bounded risk, we miss out on the potentially very large risk reductions if \(b=0\). By contrast, the low adaptation regret provided at horizon \(\ell=0\) reflects the milder precision gains offered by \(Y_{R}\) when considering contemporaneous impacts. In effect, the near oracle performance found at this horizon reflects that the efficiency cost of robustness is low here. The soft-thresholding estimator arrives at an estimate very similar to the adaptive estimator. By construction, the adaptive estimator exhibits lower worst case adaptation regret than the soft-thresholding estimator. Standard errors are not reported for the soft-thresholding, adaptive, or pre-test estimators because the variability of these procedures depends on the unknown bias level \(b\).

To assess the _ex-ante_ tradeoffs involved in adapting to misspecification, Figure 4 depicts the risk functions of the various estimation approaches listed in the first row of Table 1. Recall that these risk functions depict expected MSE before \(Y_{U}\) or \(Y_{R}\) have been realized. Here, the correlation coefficient \(\rho\) between \(Y_{U}\) and \(Y_{O}\) equals \(-0.524\): the value we estimated for the contemporaneous impact \(\mu_{0}\). As a normalization, the risk of the unrestricted estimator has been set to \(1\). The restricted estimator exhibits low risk when the bias is small but very high risk when the bias is large.
Figure 4: Risk functions for \(\mu_{0}\) (\(\rho=-0.524\))

Pre-testing yields good performance when the bias is either very large or very small. When the scaled bias is near the threshold value of \(1.96\), the pre-test estimator's risk becomes very large, as the results of the initial test become highly variable. The line labeled "oracle" plots the \(B\)-minimax risk for \(B=|b|\). The oracle's prior knowledge of the bias magnitude yields uniformly lower risk than any other estimator. The adaptive estimator mirrors the oracle, with nearly constant adaptation regret. When the bias in the restricted estimator is small, the adaptive estimator yields large risk reductions relative to \(Y_{U}\). When the bias is large, the adaptive estimator's risk remains bounded at a level substantially below the worst case risk experienced by the pre-test estimator.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{Unconstrained} & \multicolumn{2}{c}{Constrained \(R/\Sigma_{U}\leq 1.2\)} \\ \hline Years since hosp. & & Adaptive & Soft-threshold & Adaptive & Soft-threshold \\ \hline \hline 0 & Estimates & 2,302 & 2,287 & 2,302 & 2,287 \\ & Max Regret & 15\% & 15\% & 15\% & 15\% \\ & Max Risk & 13\% & 7\% & 13\% & 7\% \\ & Threshold & & 0.52 & & 0.52 \\ \hline 1 & Estimates & 1,435 & 1,408 & 1,429 & 1,408 \\ & Max Regret & 33\% & 34\% & 41\% & 34\% \\ & Max Risk & 28\% & 17\% & 19\% & 17\% \\ & Threshold & & 0.59 & & 0.59 \\ \hline 2 & Estimates & 1,246 & 1,210 & 1,248 & 1,176 \\ & Max Regret & 47\% & 49\% & 54\% & 60\% \\ & Max Risk & 41\% & 26\% & 19\% & 19\% \\ & Threshold & & 0.66 & & 0.56 \\ \hline 3 & Estimates & 1,574 & 1,530 & 1,569 & 1,463 \\ & Max Regret & 54\% & 57\% & 60\% & 77\% \\ & Max Risk & 48\% & 31\% & 19\% & 19\% \\ & Threshold & & 0.69 & & 0.53 \\ \hline \hline \end{tabular}
\end{table}

Table 2: Impact of unexpected hospitalization on out of pocket (OOP) expenditures of the non-elderly insured (ages 50 to 59) from Dobkin et al. (2018). “Yrs since hospital” refers to years since hospitalization. “Max regret” refers to the worst case adaptation regret in percentage terms \((A_{\max}(\mathcal{B},\delta)-1)\times 100\). “Max risk” refers to the worst case risk increase relative to \(Y_{U}\) in percentage terms \((R_{\max}(\delta)-\Sigma_{U})/\Sigma_{U}\times 100\). The correlation coefficients between \(Y_{U}\) and \(Y_{O}\) by years since hospitalization are -0.524, -0.703, -0.784 and -0.813 respectively.

Table 2 shows the results from constrained adaptation limiting the worst case risk to no more than 20% above the risk of \(Y_{U}\). This constraint results in relatively minor adjustments to the point estimates of both the adaptive and soft-thresholding estimators, even at horizon \(\ell=3\), at which unconstrained adaptation yields a 31-48% increase in worst case risk over \(Y_{U}\). Of course, larger adjustments would have occurred if more extreme values of \(T_{O}\) had been realized. Ex-ante, constraining the adaptive estimator cuts its worst case risk by more than half while yielding only a modest increase of 6 percentage points in its worst case adaptation regret. The tradeoff between worst case risk and adaptation regret is somewhat less favorable for the soft-thresholding estimator: reducing its worst case risk by roughly a third raises its worst case adaptation regret by a third. These worst case risk / adaptation regret tradeoffs are illustrated in Figure 5, which depicts the risk functions of the estimators at horizon \(\ell=3\).
Remarkably, the risk constrained adaptive estimator exhibits substantially lower risk than the unconstrained adaptive and soft-thresholding estimators at most bias levels, while exhibiting only slightly elevated risk when the bias is small. We expect most researchers would view this tradeoff favorably. Constraining the soft-thresholding estimator yields similar risk reductions when the bias is large but generates more substantial risk increases when the bias magnitude is negligible. Overall, however, the constrained soft-thresholding estimator provides a reasonably close approximation to the constrained adaptive estimator. ### Adapting to an invalid instrument (Berry et al., 1995) Our second example comes from Berry et al. (1995)'s seminal study of the equilibrium determination of automobile prices. As in Andrews et al. (2017) and Armstrong and Kolesar (2021), we focus on their analysis of average price-cost markups. \(Y_{U}\) is taken as the average markup implied by optimally weighted GMM estimation using a set of 8 demand-side instruments described in Andrews et al. (2017). We take as \(Y_{R}\) the GMM estimator that adds to the demand side instruments a set of 12 additional supply-side instruments. Following Armstrong and Kolesar (2021), we compute the GMM estimates in a single step using a weighting matrix allowing for unrestricted misspecification (\(B=\infty\)). Table 3 lists estimates under different estimation approaches. The realizations of \(Y_{R}\) and \(Y_{U}\) correspond, respectively, to the estimates labeled "all excluded supply" and "none" in Figure 1 of Armstrong and Kolesar (2021). Because both \(Y_{U}\) and \(Y_{R}\) are computed using an efficient weighting matrix, the variance of their difference \(Y_{O}\) is given by the difference in their squared standard errors. While relying on demand side instruments alone implies automobile prices average 53% above marginal cost, adding supply side instruments yields much lower markups, with prices approximately 34% above marginal cost on average. Adding the supply side instruments not only decreases the average markup estimate but also reduces the standard error by nearly 30%. However, the difference \(Y_{O}\) between the restricted and unrestricted estimates is large and statistically significant, with \(T_{O}\approx-11\). Detecting what appears to be severe misspecification, the adaptive estimator shrinks strongly towards \(Y_{U}\), as does the soft-thresholding estimator. The chosen soft-threshold is very low, indicating a relatively high level of robustness to bias: only scaled bias estimates smaller than 0.59 in magnitude are zeroed out. Consequently, even realizations of \(T_{O}\) near 3 would have yielded soft-thresholding point estimates close to \(Y_{U}\) in this setting. Evidently, entertaining instruments that turn out to be heavily biased yields little adaptation regret in this scenario, as both the soft-thresholding and optimally adaptive estimators are highly robust. Had the realized value of \(Y_{O}\) been small, these estimators would have placed significant weight on \(Y_{R}\), potentially yielding substantial efficiency gains relative to relying on \(Y_{U}\) alone. 
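The magnitude of \(T_{O}\) makes the shrinkage toward \(Y_{U}\) easy to see directly from the formulas in Section 4.4: once \(|T_{O}|\) exceeds the threshold, the soft-thresholding estimate equals \(Y_{U}\) plus a fixed shift of magnitude \(|\rho|\sqrt{\Sigma_{U}}\lambda\), no matter how extreme \(T_{O}\) is. A short check (illustrative only; Python) using the figures reported in Table 3 below:

```python
import numpy as np

# Figures from Table 3: average markup in percent.
y_u, se_u = 52.95, 2.54      # demand-side instruments only
y_o, se_o = -19.42, 1.78     # contrast from adding the supply-side instruments
rho, lam = -0.7, 0.59        # reported correlation and soft threshold

t_o = y_o / se_o                                   # about -10.9
delta_s = np.sign(t_o) * max(abs(t_o) - lam, 0.0)  # soft-thresholded bias
soft = rho * se_u * delta_s + y_u - rho * se_u * t_o
print(round(t_o, 1), round(soft, 1))  # roughly -10.9 and 51.9; the shift away
# from Y_U is |rho| * se_u * lam, about 1.05 percentage points
```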
\begin{table} \begin{tabular}{c c c c c c c} \hline & \(Y_{U}\) & \(Y_{R}\) & \(Y_{O}\) & Adaptive & Soft-threshold & Pre-test \\ \hline \hline Estimate & 52.95 & 33.53 & -19.42 & 49.44 & 51.89 & 52.95 \\ Std Error & (2.54) & (1.81) & (1.78) & & & \\ Max Regret & 96\% & \(\infty\) & & 32\% & 34\% & 107\% \\ Threshold & & & & & 0.59 & 1.96 \\ \hline \end{tabular} \end{table} Table 3: Adaptive estimates for the average markup (in percent). Point estimates and standard errors calculated using misspecification robust weighting matrix as in Armstrong and Kolesár (2021). “Max Regret” refers to worst case adaptation regret in percentage terms \((A_{\max}(\mathcal{B},\delta)-1)\times 100\). The correlation coefficient between \(Y_{U}\) and \(Y_{O}\) is \(\rho=-0.7\). ### Adapting to heterogeneous effects (Gentzkow et al., 2011) An influential recent literature emphasizes the potential for two-way fixed effects estimators to identify non-convex weighted averages of heterogeneous treatment effects (de Chaisemartin and D'Haultfoeuille, 2020; Sun and Abraham, 2021; Goodman-Bacon, 2021; Callaway and Sant'Anna, 2021). Convexity of the weights defining a causal estimand \(\theta\) is generally agreed to be an important desideratum, guaranteeing that when treatment effects are of uniform sign, \(\theta\) will also possess that sign. Hence, an estimator exhibiting asymptotically convex weights limits the scope of potential biases when treatment effects are all of the same sign. However, when treatment effect heterogeneity is mild, an estimator exhibiting asymptotic weights of mixed sign may yield negligible asymptotic bias and substantially lower asymptotic variance than a convex weighted alternative. Consequently, researchers choosing between standard two-way fixed effects estimators and recently proposed convex weighted estimators often face a non-trivial robustness-efficiency tradeoff. An illustration of this tradeoff comes from Gentzkow et al. (2011) who study the effect of newspapers on voter turnout in US presidential elections between 1868 and 1928. They consider the following linear model relating the first-difference of the turnout rate to the first difference of the number of newspapers available in different counties: \[\Delta y_{ct}=\beta\Delta n_{ct}+\Delta\gamma_{st}+\delta\Delta x_{ct}+\lambda \Delta z_{ct}+\Delta\varepsilon_{ct},\] where \(\Delta\) is the first difference operator, \(\gamma_{st}\) is a state-year effect, \(x_{ct}\) is a vector of observable county characteristics, and \(z_{ct}\) denotes newspaper profitability. The parameter \(\beta\) is meant to capture a causal effect of newspapers on voter turnout. In what follows, we take the OLS estimator of \(\beta\) as \(Y_{R}\). Studying this estimator in a heterogeneous treatment effects framework, de Chaisemartin and D'Haultfoeuille (2020) establish that \(Y_{R}\) yields a weighted average of average causal effects across different time periods and different counties, estimating that 46% of the relevant weights are negative. To guard against the potential biases stemming from reliance on negative weights, they propose a convex weighted estimator of average treatment effects featuring weights that are treatment shares. We take this convex weighted estimator as \(Y_{U}\), implying our estimand of interest \(\theta\) is the average treatment on the treated (ATT). When treatment effects are constant, the two-way fixed effects estimator is consistent for the same ATT parameter. 
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \(Y_{U}\) & \(Y_{R}\) & \(Y_{O}\) & GMM & Adaptive & Soft-threshold & Pre-test \\ \hline \hline Estimate & 0.0043 & 0.0026 & -0.0017 & 0.0024 & 0.0036 & 0.0036 & 0.0026 \\ Std Error & (0.0014) & (0.0009) & (0.001) & (0.0009) & & & \\ Max Regret & 145\% & \(\infty\) & & \(\infty\) & 44\% & 46\% & 118\% \\ Threshold & & & & & & 0.64 & 1.96 \\ \hline \hline \end{tabular}
\end{table}

Table 4: Estimates of the effect of one additional newspaper on turnout. Bootstrap standard errors in parentheses computed using the same 100 bootstrap samples utilized by de Chaisemartin and D'Haultfoeuille (2020b). “Max regret” refers to the worst case adaptation regret in percentage terms \((A_{\max}(\mathcal{B},\delta)-1)\times 100\). The correlation coefficient between \(Y_{U}\) and \(Y_{O}\) is -0.77.

Table 4 reports the realizations of \((Y_{U},Y_{R})\) and their standard errors, which exactly replicate those given in Table 3 of de Chaisemartin and D'Haultfoeuille (2020b). Once again the estimated variance of \(Y_{O}\) is closely approximated by the difference in squared standard errors between \(Y_{U}\) and \(Y_{R}\), suggesting \(Y_{R}\) is nearly efficient. Hence, the downstream GMM, adaptive, and soft-thresholding estimators could have been computed using only the published point estimates and standard errors. Though the realized value of \(Y_{U}\) is nearly twice as large as that of \(Y_{R}\), the two estimators are not statistically distinguishable from one another at the 5% level. Hence, a conventional pre-test suggests ignoring the perils of negative weights and confining attention to \(Y_{R}\) on account of its substantially increased precision. Like \(Y_{R}\), GMM exhibits a standard error roughly 35% below that of \(Y_{U}\). Consequently, relying solely on the convex-weighted but highly inefficient estimator \(Y_{U}\) exposes the researcher to a large worst-case adaptation regret of 145%.

In contrast to the pre-test, both the optimally adaptive estimator and its soft-thresholding approximation place substantial weight \(w(T_{O})\) on the convex estimator, yielding estimates roughly 60% of the way towards \(Y_{U}\) from GMM. This phenomenon owes to the fact that with \(T_{O}=-1.7\) both estimators detect the presence of a non-trivial amount of bias in \(Y_{R}\). We can easily compute the soft-thresholding bias estimate from the figures reported in the table as \((-1.7+.64)\times 0.001\approx-.001\), suggesting that \(Y_{R}\) exhibits a bias of nearly 40%. Balancing this bias against the estimator's increased precision leads the soft-thresholding estimator to essentially split the difference between the convex and non-convex weighted estimators, which yields a near optimal worst case adaptation regret of 46%.

### Adapting to endogeneity (Angrist and Krueger, 1991)

Our final example comes from Angrist and Krueger (1991)'s classic analysis of the returns to schooling using quarter of birth as an instrument for schooling attainment. Documenting that individuals born in the first quarter of the year acquire fewer years of schooling than those born later in the year, they demonstrate that those born in the first quarter of the year also earn less than those born later in the year. Table 5 replicates exactly the estimates reported in Angrist and Krueger (1991, Panel B, Table III) for men born 1930-39.
\begin{table}
\begin{tabular}{c c c c c c c} \hline & \(Y_{U}\) & \(Y_{R}\) & \(Y_{O}\) & Adaptive & Soft-threshold & Pre-test \\ \hline \hline Estimate & 0.102 & 0.0709 & -0.0311 & 0.071 & 0.071 & 0.071 \\ Std Error & (0.0239) & (0.0003) & (0.0239) & & & \\ Max Regret & 500145\% & \(\infty\) & & 493\% & 537\% & 17882\% \\ Threshold & & & & & 2.07 & 1.96 \\ \hline \end{tabular}
\end{table}

Table 5: Returns to schooling. Standard errors in parentheses computed under homoscedasticity as in original study. “Max regret” refers to the worst case adaptation regret in percentage terms \((A^{*}(\mathcal{B})-1)\times 100\). The correlation coefficient between \(Y_{U}\) and \(Y_{O}\) is \(\rho=-0.9998\).

\(Y_{U}\) gives the Wald-IV estimate of the returns to schooling using an indicator for being born in the first quarter of the year as an instrument for years of schooling completed, while \(Y_{R}\) gives the corresponding OLS estimate. Neither estimator controls for additional covariates. When viewed through the lens of the linear constant coefficient models that dominated labor economics research at the time, the IV estimator identifies the same parameter as OLS under strictly weaker exogeneity requirements. In particular, IV guards against "ability bias," which plagues OLS in such models (Griliches and Mason, 1972; Ashenfelter and Krueger, 1994). The first stage relationship between quarter of birth and years of schooling exhibits a z-score of 8.24, suggesting an asymptotic normal approximation to \(Y_{U}\) is likely to be highly accurate. As in our previous examples, the variance of the difference between \(Y_{U}\) and \(Y_{R}\) is very closely approximated by the difference in their squared standard errors, indicating this exercise could have been computed using only the information reported in the original published tables.

While the IV estimator accounts for endogeneity, it is highly imprecise, exhibiting a standard error two orders of magnitude greater than OLS. Consequently, the maximal regret associated with using IV instead of OLS is extremely large, as the variability of \(Y_{U}\) is more than 5,000 times that of \(Y_{R}\). IV and OLS cannot be statistically distinguished at conventional significance levels, with \(T_{O}\approx 1.3\). The inability to distinguish IV from OLS estimates of the returns to schooling is characteristic not only of the specifications reported in Angrist and Krueger (1991) but of the broader quasi-experimental literature spawned by their landmark study (Card, 1999). The confluence of extremely large maximal regret for \(Y_{U}\) with a statistically insignificant difference \(Y_{O}\) leads the adaptive estimator, the soft-thresholding estimator and the pre-test estimator to all coincide with \(Y_{R}\). The motives for this coincidence are of course quite different. The adaptive and soft-thresholding estimators seek to avoid the regret associated with missing out on the enormous efficiency gains of OLS if it is essentially unconfounded. By contrast, the pre-test estimator simply fails to reject the null hypothesis that years of schooling is exogenous at the proper significance level. Despite the agreement of the three approaches, the extremely large adaptation regret exhibited by the optimally adaptive estimator suggests it is unlikely to garner consensus in this setting. Committing to \(Y_{R}\) exposes the researcher to potentially unlimited risk.
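The role of the threshold in this example can be read directly off the published figures. A short check (illustrative only; Python) evaluates the soft-thresholding formula at the unconstrained threshold of 2.07 from Table 5 and at the risk-constrained threshold of 0.45 reported in Table 6 below, showing how tightening the worst-case risk constraint pulls the estimate away from \(Y_{R}\) and toward \(Y_{U}\).

```python
import numpy as np

# Figures from Table 5; the 0.45 threshold is taken from Table 6.
y_u, se_u = 0.102, 0.0239    # Wald-IV estimate of the returns to schooling
y_o, se_o = -0.0311, 0.0239  # OLS-minus-IV contrast and its std error
rho = -0.9998                # reported correlation between Y_U and Y_O

def soft_estimate(lam):
    t_o = y_o / se_o                                   # about -1.3
    delta_s = np.sign(t_o) * max(abs(t_o) - lam, 0.0)
    return rho * se_u * delta_s + y_u - rho * se_u * t_o

print(round(soft_estimate(2.07), 3))  # unconstrained: about 0.071, essentially Y_R
print(round(soft_estimate(0.45), 3))  # risk-constrained: about 0.091
```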
\begin{table}
\begin{tabular}{c c c c c} \hline & \multicolumn{2}{c}{Unconstrained} & \multicolumn{2}{c}{Constrained \(R/\Sigma_{U}\leq 1.2\)} \\ \hline & Adaptive & Soft-threshold & Adaptive & Soft-threshold \\ \hline \hline Estimate (fully nonlinear) & 0.071 & 0.071 & 0.087 & 0.091 \\ Max Regret & 493\% & 537\% & 30089\% & 34086\% \\ Max Risk & 455\% & 427\% & 20\% & 20\% \\ Threshold & & 2.07 & & 0.45 \\ \hline \end{tabular}
\end{table}

Table 6: Adaptive estimates of returns to schooling. “Max regret” refers to the worst case adaptation regret in percentage terms \((A_{\max}(\mathcal{B},\delta)-1)\times 100\). “Max risk” refers to the worst case risk increase relative to \(Y_{U}\) in percentage terms \((R_{\max}(\delta)-\Sigma_{U})/\Sigma_{U}\times 100\). The correlation coefficient is \(\rho=-0.9998\).

The adaptive and soft-thresholding estimators avoid committing to either \(Y_{U}\) or \(Y_{R}\) before observing the data but still expose the researcher to an approximately fivefold maximal risk increase relative to \(Y_{U}\). A skeptic concerned with the potential biases in OLS is therefore unlikely to be willing to rely on such an estimator. As shown in Table 6, if we instead follow the rule of thumb of limiting ourselves to a 20% increase in maximal risk, both the adaptive and soft-thresholding estimators yield returns to schooling estimates of roughly 9%, approximately halfway between OLS and IV. The maximal regret of these estimates is extremely high, reflecting the potential efficiency costs of weighting \(Y_{U}\) so heavily. These efficiency concerns are likely outweighed in this case by the potential for extremely large biases. Though these estimates are unlikely to garner consensus across camps of researchers with widely different beliefs, the risk-limited adaptive estimator should yield wider consensus than proposals to discard \(Y_{R}\) and rely on \(Y_{U}\) alone.

## 6 Conclusion

Empiricists routinely encounter robustness-efficiency tradeoffs. The reporting of estimates from different models has emerged as a best practice at leading journals. The methods introduced here provide a scientific means of summarizing what has been learned from such exercises and arriving at a preferred estimate that trades off considerations of bias against variance. Computing the adaptive estimators proposed in this paper requires only point estimates, standard errors, and the covariance between estimators, objects that are easily produced by standard statistical packages. As our examples revealed, in many cases the restricted estimator is nearly efficient, implying the relevant covariance can be deduced from the standard errors of the restricted and unrestricted estimators. In line with earlier results from Bickel (1984), we found that soft-thresholding estimators closely approximate the optimally adaptive estimator in the scalar case, while requiring less effort to compute. An interesting topic for future research is whether similar approximations can be developed for higher dimensional settings where the curse of dimensionality renders direct computation of optimally adaptive estimators infeasible.

## References

* Akaike (1973) Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In _Proc. 2nd International Symposium on Information Theory, 1973_, pp. 267-281. Akademiai Kiado.
* Andrews et al. (2017) Andrews, I., M. Gentzkow, and J. M. Shapiro (2017). Measuring the Sensitivity of Parameter Estimates to Estimation Moments. _The Quarterly Journal of Economics__132_(4), 1553-1592.
* Angrist and Krueger (1991) Angrist, J. D. and A. B. Krueger (1991). Does Compulsory School Attendance Affect Schooling and Earnings? _The Quarterly Journal of Economics__106_(4), 979-1014. * Armstrong and Kolesar (2018) Armstrong, T. B. and M. Kolesar (2018). Optimal Inference in a Class of Regression Models. _Econometrica__86_(2), 655-683. * Armstrong and Kolesar (2021) Armstrong, T. B. and M. Kolesar (2021). Sensitivity analysis using approximate moment condition models. _Quantitative Economics__12_(1), 77-108. * Ashenfelter and Krueger (1994) Ashenfelter, O. and A. Krueger (1994). Estimates of the economic return to schooling from a new sample of twins. _The American economic review_, 1157-1173. * Berry et al. (1995) Berry, S., J. Levinsohn, and A. Pakes (1995). Automobile Prices in Market Equilibrium. _Econometrica__63_(4), 841-890. * Bickel (1982) Bickel, P. J. (1982, September). On Adaptive Estimation. _The Annals of Statistics__10_(3), 647-671. * Bickel (1983) Bickel, P. J. (1983). Minimax estimation of the mean of a normal distribution subject to doing well at a point. In M. H. Rizvi, J. S. Rustagi, and D. Siegmund (Eds.), _Recent Advances in Statistics_, pp. 511-528. Academic Press. * Bickel (1984) Bickel, P. J. (1984, September). Parametric Robustness: Small Biases can be Worthwhile. _The Annals of Statistics__12_(3), 864-879. Publisher: Institute of Mathematical Statistics. * Bickel and Lehmann (1981) Bickel, P. J. and E. L. Lehmann (1981, September). A Minimax Property of the Sample Mean in Finite Populations. _The Annals of Statistics__9_(5), 1119-1122. Publisher: Institute of Mathematical Statistics. * Box and Draper (1987) Box, G. E. and N. R. Draper (1987). _Empirical model-building and response surfaces_. John Wiley & Sons. * Buhlmann and van de Geer (2011) Buhlmann, P. and S. van de Geer (2011, June). _Statistics for High-Dimensional Data: Methods, Theory and Applications_ (2011 edition ed.). Heidelberg ; New York: Springer. * Buhlmann and van de Geer (2012) Callaway, B. and P. H. Sant'Anna (2021). Difference-in-differences with multiple time periods. _Journal of Econometrics__225_(2), 200-230. * Card (1999) Card, D. (1999). The causal effect of education on earnings. _Handbook of labor economics__3_, 1801-1863. * Chamberlain (2000) Chamberlain, G. (2000, November). Econometric applications of maxmin expected utility. _Journal of Applied Econometrics__15_(6), 625-644. * Cheng et al. (2019) Cheng, X., Z. Liao, and R. Shi (2019). On uniform asymptotic risk of averaging GMM estimators. _Quantitative Economics__10_(3), 931-979. * de Chaisemartin and D'Haultfoeuille (2020a) de Chaisemartin, C. and X. D'Haultfoeuille (2020a, June). Empirical MSE Minimization to Estimate a Scalar Parameter. * de Chaisemartin and D'Haultfoeuille (2020b, September). Two-Way Fixed Effects Estimators with Heterogeneous Treatment Effects. _American Economic Review__110_(9), 2964-2996. * Dobkin et al. (2018) Dobkin, C., A. Finkelstein, R. Kluender, and M. J. Notowidigdo (2018). The Economic Consequences of Hospital Admissions. _American Economic Review__108_(2), 308-52. * Donoho (1994) Donoho, D. L. (1994, March). Statistical Estimation and Optimal Recovery. _The Annals of Statistics__22_(1), 238-270. * Efron and Morris (1972) Efron, B. and C. Morris (1972). Empirical Bayes on Vector Observations: An Extension of Stein's Method. _Biometrika__59_(2), 335-347. * Elliott et al. (2015) Elliott, G., U. K. Muller, and M. W. Watson (2015, March). 
Nearly Optimal Tests When a Nuisance Parameter Is Present Under the Null Hypothesis. _Econometrica__83_(2), 771-811. * Fessler and Kasy (2019) Fessler, P. and M. Kasy (2019). How to use economic theory to improve estimators: Shrinking toward theoretical restrictions. _Review of Economics and Statistics__101_(4), 681-698. * Gentzkow et al. (2011) Gentzkow, M., J. M. Shapiro, and M. Sinkinson (2011, December). The Effect of Newspaper Entry and Exit on Electoral Politics. _American Economic Review__101_(7), 2980-3018. * Gilboa and Schmeidler (1989) Gilboa, I. and D. Schmeidler (1989). Maxmin expected utility with non-unique prior. _Journal of mathematical economics__18_(2), 141-153. * Gersers et al. (2015) Goodman-Bacon, A. (2021). Difference-in-differences with variation in treatment timing. _Journal of Econometrics__225_(2), 254-277. * Green and Strawderman (1991) Green, E. J. and W. E. Strawderman (1991). A james-stein type estimator for combining unbiased and possibly biased estimators. _Journal of the American Statistical Association__86_(416), 1001-1006. * Griliches and Mason (1972) Griliches, Z. and W. M. Mason (1972). Education, income, and ability. _Journal of political Economy__80_(3, Part 2), S74-S103. * Hansen (2007) Hansen, B. E. (2007). Least Squares Model Averaging. _Econometrica__75_(4), 1175-1189. * Hansen and Racine (2012) Hansen, B. E. and J. S. Racine (2012). Jackknife model averaging. _Journal of Econometrics__167_(1), 38-46. * Hausman (1978) Hausman, J. A. (1978). Specification Tests in Econometrics. _Econometrica__46_(6), 1251-1271. * Hjort and Claeskens (2003) Hjort, N. L. and G. Claeskens (2003). Frequentist Model Average Estimators. _Journal of the American Statistical Association__98_(464), 879-899. * Hodges and Lehmann (1952) Hodges, J. L. and E. L. Lehmann (1952). The use of Previous Experience in Reaching Statistical Decisions. _The Annals of Mathematical Statistics__23_(3), 396-407. * Johnstone (2019) Johnstone, I. M. (2019). _Gaussian estimation: Sequence and wavelet models_. Online manuscript available at [https://imjohnstone.su.domains/](https://imjohnstone.su.domains/). * Kline and Walters (2021) Kline, P. and C. Walters (2021). Reasonable doubt: Experimental detection of job-level employment discrimination. _Econometrica__89_(2), 765-792. * LaLonde (1986) LaLonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. _The American economic review_, 604-620. * Leamer (1978) Leamer, E. E. (1978). _Specification searches: Ad hoc inference with nonexperimental data_, Volume 53. John Wiley & Sons Incorporated. * Leeb and Potscher (2005) Leeb, H. and B. M. Potscher (2005). Model selection and inference: Facts and fiction. _Econometric Theory__21_(01), 21-59. * Leeb and Potscher (2006) Lehmann, E. L. and G. Casella (1998). _Theory of Point Estimation_ (2nd edition ed.). New York: Springer. * Low (1997) Low, M. G. (1997, December). On nonparametric confidence intervals. _The Annals of Statistics__25_(6), 2547-2554. * Mallows (1973) Mallows, C. L. (1973). Some Comments on CP. _Technometrics__15_(4), 661-675. * Miguel (2021) Miguel, E. (2021). Evidence on research transparency in economics. _Journal of Economic Perspectives__35_(3), 193-214. * Muller and Wang (2019) Muller, U. K. and Y. Wang (2019, March). Nearly weighted risk minimal unbiased estimation. _Journal of Econometrics__209_(1), 18-34. * Savage (1954) Savage, L. J. (1954). _The Foundations of Statistics_. John Wiley & Sons. * Schmeidler (1989) Schmeidler, D. 
(1989). Subjective probability and expected utility without additivity. _Econometrica: Journal of the Econometric Society_, 571-587.
* Schwarz (1978) Schwarz, G. (1978). Estimating the Dimension of a Model. _The Annals of Statistics__6_(2), 461-464.
* Sun and Abraham (2021) Sun, L. and S. Abraham (2021). Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects. _Journal of Econometrics__225_(2), 175-199.
* Tsybakov (1998) Tsybakov, A. B. (1998, December). Pointwise and sup-norm sharp adaptive estimation of functions on the Sobolev classes. _The Annals of Statistics__26_(6), 2420-2469.
* Tsybakov (2009) Tsybakov, A. B. (2009). _Introduction to Nonparametric Estimation_. New York: Springer.
* van der Vaart (1998) van der Vaart, A. W. (1998, October). _Asymptotic Statistics_. Cambridge, UK ; New York, NY, USA: Cambridge University Press.

## Appendix A Details and proofs for Section 4

### Details for main example

We provide details and formal results for the results in Section 4.3 giving \(B\)-minimax and optimally adaptive estimators in our main example. We first provide a general theorem characterizing minimax estimators in a setting that includes our main example. We then specialize this result to derive the formula for the \(B\)-minimax estimator and optimally adaptive estimator for our main example given in Section 4.3, using a weighted loss function and Lemma 4.1 to obtain the optimally adaptive estimator. This proves Theorem 4.1.

We consider a slightly more general setting with \(p\) misspecified estimates, leading to a \(p\times 1\) vector \(Y_{O}\): \[Y=\left(\begin{array}{c}Y_{U}\\ Y_{O}\end{array}\right)\sim N\left(\left(\begin{array}{c}\theta\\ b\end{array}\right),\Sigma\right),\quad\Sigma=\left(\begin{array}{cc}\Sigma_{U}&\Sigma_{UO}\\ \Sigma_{UO}^{\prime}&\Sigma_{O}\end{array}\right), \tag{11}\] where \(Y_{U}\) and \(\theta\) are \(1\times 1\), \(Y_{O}\) and \(b\) are \(p\times 1\), \(\Sigma_{UO}\) is \(1\times p\), and \(\Sigma_{UO}^{\prime}\) is \(p\times 1\). In our main example, \(p=1\) and \(\rho=\Sigma_{UO}/\sqrt{\Sigma_{U}\Sigma_{O}}\). We are interested in the minimax risk of an estimator \(\delta:\mathbb{R}^{p+1}\rightarrow\mathbb{R}\) under the loss function \(L(\theta,b,d)\), which may incorporate a scaling to turn the minimax problem into a problem of finding an optimally adaptive estimator, following Lemma 4.1. We assume that the loss function satisfies the invariance condition \[L(\theta+t,b,d+t)=L(\theta,b,d)\quad\text{all }t\in\mathbb{R}. \tag{12}\] We consider minimax estimation over a parameter space \(\mathbb{R}\times\mathcal{C}\): \[\inf_{\delta}\sup_{\theta\in\mathbb{R},b\in\mathcal{C}}R(\theta,b,\delta). \tag{13}\]

**Theorem A.1**.: _Suppose that the loss function \(L(\theta,b,d)\) is convex in \(d\) and that (12) holds. Then the minimax risk (13) is given by_ \[\inf_{\bar{\delta}}\sup_{b\in\mathcal{C}}E_{0,b}[\tilde{L}(b,\bar{\delta}(Y_{O})-\Sigma_{UO}\Sigma_{O}^{-1}b)] \tag{14}\] \[=\sup_{\pi\text{ supported on }\mathcal{C}}\inf_{\bar{\delta}}\int E_{0,b}[\tilde{L}(b,\bar{\delta}(Y_{O})-\Sigma_{UO}\Sigma_{O}^{-1}b)]\,d\pi(b)\] _where \(\tilde{L}(b,t)=EL(0,b,t+V)\) with \(V\sim N(0,\Sigma_{U}-\Sigma_{UO}\Sigma_{O}^{-1}\Sigma_{UO}^{\prime})\).
Furthermore, the minimax problem (13) has at least one solution, and any solution \(\delta^{*}\) takes the form_ \[\delta^{*}(Y_{U},Y_{O})=Y_{U}-\Sigma_{UO}\Sigma_{O}^{-1}Y_{O}+\bar{\delta}^{*}( Y_{O})\] _where \(\bar{\delta}^{*}\) achieves the infimum in (14)._ Proof.: The minimax problem (13) is invariant (in the sense of pp. 159-161 of Lehmann and Casella (1998)) to the transformations \((\theta,b)\mapsto(\theta+t,b)\) and the associated transformation of the data \((Y_{U},Y_{O})\mapsto(Y_{U}+t,Y_{O})\), where \(t\) varies over \(\mathbb{R}\). Equivariant estimators for this group of transformations are those that satisfy \(\delta(y_{U}+t,y_{O})=\delta(y_{U},y_{O})+t\), which is equivalent to imposing that the estimator takes the form \(\delta(y_{U},y_{O})=\delta(0,y_{O})+y_{U}\). The risk of such an estimator does not depend on \(\theta\) and is given by \[R(\theta,b,\delta)=R(0,b,\delta)=E_{0,b}\left[L(0,b,\delta(0,Y_{O})+Y_{U}) \right].\] Using the decomposition \(Y_{U}-\theta=\Sigma_{UO}\Sigma^{-1}(Y_{O}-b)+V\) where \(V\sim N(0,\Sigma_{U}-\Sigma_{UO}\Sigma_{O}^{-1}\Sigma_{UO}^{\prime})\) is independent of \(Y_{O}\), the above display is equal to \[E_{0,b}\left[L(0,b,\delta(0,Y_{O})+\Sigma_{UO}\Sigma_{O}^{-1}(Y_{O}-b)+V) \right]=E_{0,b}\tilde{L}(b,\delta(0,Y_{O})+\Sigma_{UO}\Sigma_{O}^{-1}(Y_{O}-b)).\] Letting \(\bar{\delta}(Y_{O})=\delta(0,Y_{O})+\Sigma_{UO}\Sigma_{O}^{-1}Y_{O}\), the above display is equal to \(E_{0,b}[\tilde{L}(b,\bar{\delta}(Y_{O})-\Sigma_{UO}\Sigma_{O}^{-1}b)]\). Thus, if an estimator \(\bar{\delta}^{*}\) achieves the infimum in (14), the corresponding estimator \(\delta(Y_{U},Y_{O})=\delta(0,Y_{O})+Y_{U}=\bar{\delta}^{*}(Y_{O})-\Sigma_{UO} \Sigma_{O}^{-1}Y_{O}+Y_{U}\) will be minimax among equivariant estimators for (13). It will then follow from the Hunt-Stein Theorem (Lehmann and Casella, 1998, Theorem 9.2) that this minimax equivariant estimator is minimax among all estimators, that any other minimax estimator takes this form and that the minimax risk is given by the first line of (14). It remains to show that the infimum in the first line of (14) is achieved, and that the equality claimed in (14) holds. The equality in (14) follows from the minimax theorem, as stated in Theorem A.5 in Johnstone (2019) (note that \(d\mapsto\tilde{L}(b,d-\Sigma_{UO}\Sigma_{O}^{-1}b)\) is convex since it is an integral of the convex functions \(d\mapsto L(0,b,d-\Sigma_{UO}\Sigma_{O}^{-1}b+v)\) over the index \(v\)). The existence of an estimator \(\bar{\delta}^{*}\) that achieves the infimum in the first line of (14) follows by noting that the set of decision rules (allowing for randomized decision rules) is compact in the topology defined on p. 405 of Johnstone (2019), and the risk \(E_{0,b}[\tilde{L}(b,\bar{\delta}(Y_{O})-\Sigma_{UO}\Sigma_{O}^{-1}b)]\) is continuous in \(\bar{\delta}\) under this topology. As noted immediately after Theorem A.1 in Johnstone (2019), this implies that \(\bar{\delta}\mapsto\sup_{b}E_{0,b}[\tilde{L}(b,\bar{\delta}(Y_{O})-\Sigma_{UO }\Sigma_{O}^{-1}b)]\) is a lower semicontinuous function on the compact set of possibly randomized decision rules under this topology, which means that there exists a decision rule that achieves the minimum. From this possibly randomized decision rule, we can construct a nonrandomized decision rule that achieves the minimum by constructing a nonrandomized decision rule with uniformly smaller risk by averaging, following Johnstone (2019, p. 404). We now prove Theorem 4.1 by specializing this result. 
The notation is the same as in the main text, with \(\rho\) in the main text given by \(\Sigma_{UO}/\sqrt{\Sigma_{U}\Sigma_{O}}\). First, we derive the minimax estimator and minimax risk in (13) when \(L(\theta,b,d)=(\theta-d)^{2}\) and \(\mathcal{C}=[-B,B]\). We have \(\tilde{L}(b,t)=E(t+V)^{2}=t^{2}+\Sigma_{U}-\Sigma_{UO}^{2}/\Sigma_{O}\). Thus, (14) becomes \[\inf_{\bar{\delta}}\sup_{b\in[-B,B]}E_{0,b}\left[\left(\bar{\delta }(Y_{O})-\frac{\Sigma_{UO}}{\Sigma_{O}}b\right)^{2}\right]+\Sigma_{U}-\frac{ \Sigma_{UO}^{2}}{\Sigma_{O}}\] \[=\inf_{\bar{\delta}}\sup_{b\in[-B,B]}\frac{\Sigma_{UO}^{2}}{ \Sigma_{O}}E_{0,b}\left[\left(\frac{\sqrt{\Sigma_{O}}}{\Sigma_{UO}}\bar{\delta }(Y_{O})-\frac{b}{\sqrt{\Sigma_{O}}}\right)^{2}\right]+\Sigma_{U}-\frac{\Sigma_ {UO}^{2}}{\Sigma_{O}}.\] This is equivalent to observing \(T_{O}=Y_{O}/\sqrt{\Sigma_{O}}\sim N(t,1)\) and finding the minimax estimator of \(t\) under the constraint \(|t|\leq B/\sqrt{\Sigma_{O}}\). Letting \(\delta^{\text{BNM}}(T_{O};B/\sqrt{\Sigma_{O}})\) denote the solution to this minimax problem and letting \(r^{\text{BNM}}(B/\sqrt{\Sigma_{O}})\) denote the value of this minimax problem, the optimal \(\bar{\delta}\) in the above display satisfies \(\frac{\sqrt{\Sigma_{O}}}{\Sigma_{UO}}\bar{\delta}(Y_{O})=\delta^{\text{BNM}}(Y _{O}/\sqrt{\Sigma_{O}};B/\sqrt{\Sigma_{O}})\), which gives the value of the above display as \[\frac{\Sigma_{UO}^{2}}{\Sigma_{O}}r^{\text{BNM}}(B/\sqrt{\Sigma_{O}})+\Sigma_ {U}-\frac{\Sigma_{UO}^{2}}{\Sigma_{O}} \tag{15}\] and the \(B\)-minimax estimator as \[\frac{\Sigma_{UO}}{\sqrt{\Sigma_{O}}}\delta^{\text{BNM}}(Y_{O}/\sqrt{\Sigma_{ O}};B/\sqrt{\Sigma_{O}})+Y_{U}-\frac{\Sigma_{UO}}{\Sigma_{O}}Y_{O}. \tag{16}\] Substituting \(T_{O}=Y_{O}/\sqrt{\Sigma_{O}}\) and the notation \(\rho=\Sigma_{UO}/\sqrt{\Sigma_{U}\Sigma_{O}}\) used in the main text gives (4) and (5). This proves part (i) of Theorem 4.1. To find the optimally adaptive estimator and loss of efficiency under adaptation in our main example, we apply Lemma 4.1 with \(\omega(\theta,b)=R^{*}(|b|)^{-1}\), with \(R^{*}(B)\) given by (15). This leads to the minimax problem (13) with \(\mathcal{C}=\mathbb{R}\) and \(L(\theta,b,d)=R^{*}(|b|)^{-1}(\theta-d)^{2}\). The function \(\tilde{L}\) in Theorem A.1 is then given by \(\Sigma_{U}-\Sigma_{UO}^{2}/\Sigma_{O}\)), which gives (14) as \[\inf_{\bar{\delta}}\sup_{b\in\mathbb{R}}\frac{E_{0,b}\left[\left(\bar{ \delta}(Y_{O})-\frac{\Sigma_{UO}}{\Sigma_{O}}b\right)^{2}\right]+\Sigma_{U}- \frac{\Sigma_{UO}^{2}}{\Sigma_{O}}}{\frac{\Sigma_{UO}^{2}}{\Sigma_{O}}r^{ \mathrm{BNM}}(|b|/\sqrt{\Sigma_{O}})+\Sigma_{U}-\frac{\Sigma_{UO}^{2}}{\Sigma_{ O}}}=\inf_{\bar{\delta}}\sup_{b\in\mathbb{R}}\frac{E_{0,b}\left[\left(\frac{ \sqrt{\Sigma_{O}}}{\Sigma_{UO}}\bar{\delta}(Y_{O})-\frac{b}{\sqrt{\Sigma_{O}}} \right)^{2}\right]+\rho^{-2}-1}{r^{\mathrm{BNM}}(|b|/\sqrt{\Sigma_{O}})+\rho^ {-2}-1}.\] This proves part (iii) of Theorem 4.1. The above display is minimized by \(\bar{\delta}\) satisfying \(\frac{\Sigma_{UO}}{\Sigma_{UO}}\bar{\delta}(Y_{O})=\tilde{\delta}^{\mathrm{ adapt}}(Y_{O}/\sqrt{\Sigma_{O}};\rho)\) where \(\tilde{\delta}^{\mathrm{adapt}}(T;\rho)\) minimizes (6) in the main text. By Theorem A.1, the optimally adaptive estimator is given by \[\frac{\Sigma_{UO}}{\sqrt{\Sigma_{O}}}\tilde{\delta}^{\mathrm{adapt}}(Y_{O}/ \sqrt{\Sigma};\rho)+Y_{U}-\frac{\Sigma_{UO}}{\Sigma_{O}}Y_{O}=\rho\sqrt{ \Sigma_{U}}\tilde{\delta}^{\mathrm{adapt}}(T_{O};\rho)+Y_{U}-\rho\sqrt{\Sigma_ {U}}T_{O}. 
\tag{17}\] This proves the part (ii) of Theorem 4.1. ### Details for constrained adaptation We provide proof for Lemma 4.2, which shows the constrained adaption problem is equivalent to the weighted minimax problem with a particular set of weights. The first statement is immediate from the arguments proceeding the statement of the lemma in Section 4.5. For the second statement, let \(\bar{\delta}\) be a decision rule with \(\sup_{B\in\mathcal{B}}R_{\max}(B,\bar{\delta})<\tilde{R}(t)\). Such a decision rule exists and satisfies \(\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\bar{\delta})}{R^{*}(B)}<\infty\) by the assumptions of the lemma. Let \(\tilde{\delta}_{t}^{*}\) be a solution to (9). Suppose, to get a contradiction, that a decision \(\delta^{\prime}\) satisfies the constraint in (8) with \(\bar{R}=\tilde{R}(t)\) and achieves a strictly better value of the objective than \(\tilde{A}^{*}(t)\). For \(\lambda\in(0,1)\), let \(\delta^{\prime}_{\lambda}\) be the randomized decision rule that places probability \(\lambda\) on \(\bar{\delta}\) and probability \(1-\lambda\) on \(\delta^{\prime}\), independently of the data \(Y\). Note that \(R_{\max}(B,\delta^{\prime}_{\lambda})=\sup_{(\theta,b)\in\mathcal{C}_{B}}R( \theta,b,\delta^{\prime}_{\lambda})=\sup_{(\theta,b)\in\mathcal{C}_{B}}\left[ \lambda R(\theta,b,\bar{\delta})+(1-\lambda)R(\theta,b,\delta^{\prime})\right] \leq\sup_{(\theta,b)\in\mathcal{C}_{B}}\lambda R(\theta,b,\bar{\delta})+\sup _{(\theta,b)\in\mathcal{C}_{B}}(1-\lambda)R(\theta,b,\delta^{\prime})= \lambda R_{\max}(B,\bar{\delta})+(1-\lambda)R_{\max}(B,\delta^{\prime})\) so that, for \(\lambda\in(0,1)\), \[\sup_{B\in\mathcal{B}}R_{\max}(B,\delta_{\lambda})\leq\lambda\sup_{B\in \mathcal{B}}R_{\max}(B,\bar{\delta})+(1-\lambda)\sup_{B\in\mathcal{B}}R_{\max} (B,\delta^{\prime})<\tilde{R}(t)=\tilde{A}^{*}(t)\cdot t\] and \[\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta_{\lambda})}{R^{*}(B)}\leq\lambda \sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\bar{\delta})}{R^{*}(B)}+(1-\lambda)\sup _{B\in\mathcal{B}}\frac{R_{\max}(B,\delta^{\prime})}{R^{*}(B)}.\] Since \(\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta)}{R^{*}(B)}\) is finite and \(\frac{\sup_{B\in\mathcal{B}}R_{\max}(B,\delta^{\prime})}{R^{*}(B)}<\tilde{A}^{*} (t)\), the above display is strictly less than \(\tilde{A}^{*}(t)\) for small enough \(\lambda\). Thus, for small enough \(\lambda\), the objective function in (10) evaluated at the decision function \(\delta_{\lambda}\) evaluates to \[\max\left\{\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta_{\lambda})}{R^{*}(B)},\sup_{B\in\mathcal{B}}\frac{R_{\max}(B,\delta_{\lambda})}{t}\right\}<\max \left\{\tilde{A}^{*}(t),\tilde{R}(t)/t\right\}=\tilde{A}^{*}(t),\] a contradiction. # Online Appendix to "Adapting to Misspecification" Timothy B. Armstrong, Patrick Kline and Liyang Sun June 2023 ## Appendix B Group decision making interpretation This appendix develops a simple model of group decision making inspired by Savage (1954)'s arguments regarding the ability of minimax decisions to foster consensus among individuals with heterogeneous beliefs. Extending these arguments, we illustrate how adaptive decisions can serve to foster consensus across groups of individuals with different sets of beliefs. ### Consensus in a single committee Suppose there is a committee comprised of members with heterogeneous beliefs that include all priors supported on the set \(\mathcal{C}_{B}\). 
The committee chair, who we will call the \(B\)_-chair_, offers a take it or leave it proposal that her committee follow a decision rule \(\delta\) in exchange for the provision of a public good providing payoff \(G\) to each member of the committee. This public good might consist of a persuasive speech, a reduction in committee work, or an offer to end the meeting early. If the committee agrees to the proposal, the \(B\)-chair earns a payoff \(K-C(G)\), where \(K\) is the value of consensus and \(C(\cdot)\) is an increasing cost function. If some member of the committee does not agree to the proposal, the chair and all committee members receive payoff zero. The \(B\)-chair therefore seeks a rule \(\delta\) allowing payment of the smallest \(G\) that ensures consensus. A committee member who is certain of the parameters \((\theta,b)\) will accept the chair's offer if and only if \(R\left(\theta,b,\delta\right)\leq G\). However, the committee member with the most pessimistic beliefs regarding these parameters will require a public goods provision level of at least \(R_{\max}\left(B,\delta\right)\) to agree to the offer. To achieve consensus at minimal cost, the \(B\)-chair can propose the \(B\)-minimax decision, which requires public goods provision level \(R^{*}\left(B\right)\) to achieve consensus. The \(B\)-chair will be willing to provide this level of public goods if and only if \(K\geq C(R^{*}\left(B\right))\), in which case consensus ensues. If this condition does not hold, the chair deems the \(B\)-minimax decision too costly to implement and consensus is not achieved. Hence, when no individual holds beliefs that are too extreme, the minimax decision fosters consensus. ### Consensus among committees Now suppose there is a collection \(\mathcal{B}\) of committees that is led by a _chair of chairs_ (CoC) who would like for the \(B\)-chairs to agree on a common decision making rule \(\delta\). Suppose also that \(K>\sup_{B\in\mathcal{B}}C(R^{*}\left(B\right))\), so that each \(B\)-chair would privately prefer to implement the \(B\)-minimax decision. The CoC has a fixed budget that can be used to persuade the chairs to instead coordinate on a common rule \(\delta\). By the arguments above, each \(B\)-chair must pay a cost \(C(R_{max}\left(B,\delta\right))\) to secure consensus regarding the CoC's proposed plan \(\delta\), leaving her with payoff \(K-C(R_{max}\left(B,\delta\right))\). However, each chair can also defy the CoC and propose the \(B\)-minimax decision to her committee, yielding payoff \(K-C(R^{*}\left(B\right))\). Hence, to compel a \(B\)-chair to propose a decision \(\delta\), the CoC must offer a transfer of at least \(\Delta_{B}=C(R_{max}\left(B,\delta\right))-C(R^{*}\left(B\right))\). To economize on transfer costs, the CoC searches for a \(\delta\) that minimizes the maximal required payment \(\sup_{B\in\mathcal{B}}\Delta_{B}\) across all committees. Different functional forms for the cost function \(C\) yield different notions of adaptation. To motivate the formulation in (1), we assume \(C(G)=\ln G\), which suggests chairs produce the public good according to an increasing returns to scale technology that is exponential in effort costs. With this choice of \(C(\cdot)\), the CoC's problem is to find a \(\delta\) that minimizes \(\sup_{B\in\mathcal{B}}\ln\left(R_{max}\left(B,\delta\right)/R^{*}\left(B\right) \right)=\sup_{B\in\mathcal{B}}\ln A(B,\delta)\). 
The CoC will therefore propose the optimally adaptive decision \(\delta^{\text{adapt}}\), which yields \(\sup_{B\in\mathcal{B}}\Delta_{B}=\ln A^{*}(\mathcal{B})\). When \(A^{*}(\mathcal{B})\) is too large, the CoC balks at the cost and consensus fails.

### Discussion

Taking the committees to represent different camps of researchers, our stylized model suggests adaptive estimation can help to forge consensus between researchers with varying beliefs about the suitability of different econometric models. The prospects for achieving consensus are governed by the loss of efficiency under adaptation. When \(A^{*}(\mathcal{B})\) is small, consensus is likely, as the adaptive decision will yield maximal risk similar to each camp's perceived \(B\)-minimax risk. When \(A^{*}(\mathcal{B})\) is large, however, consensus is unlikely to emerge, as the optimally adaptive estimator will be perceived as excessively risky by camps with extreme beliefs.

## Appendix C Additional details

### Numerical results on estimators as a function of \(\rho^{2}\)

Section 4.4 introduces the class of soft thresholding estimators and hard thresholding estimators. In Figure A1, we plot the solution to the nearly adaptive objective function for soft-thresholding, which corresponds to a threshold that increases with \(\rho^{2}\). As \(\rho^{2}\) increases, to minimize the worst-case adaptation regret, more weight needs to be placed on the optimal GMM estimator, which explains the increase in the adaptive threshold. Correspondingly, the adaptive estimator incurs more bias as \(\rho^{2}\) increases, which narrows the range of true bias for which the adaptive estimator beats \(Y_{U}\) in terms of risk.

In practice, it is common to use a fixed threshold of 1.96, which corresponds to a pre-test rule that switches between the unrestricted estimator and the GMM estimator based on the result of the specification test. Doing so leads to a high level of worst-case adaptation regret, especially when \(\rho^{2}\) is close to one, as shown in Figure A2. To minimize the worst-case adaptation regret, the adaptive hard-threshold estimator needs to use a threshold that would increase to infinity as \(\rho^{2}\) gets closer to one.

A pre-test estimator utilizing a fixed threshold at 1.96 realizes its worst-case risk when the scaled bias \(\tilde{b}\) is itself near the 1.96 threshold. As shown in Figure A3, the pre-test estimator tends to exhibit substantially greater worst-case risk than the class of adaptive estimators for most values of \(\rho^{2}\). As discussed in Section 4.4, adaptive estimators have large worst-case risk when \(\rho^{2}\) is close to one. The pre-test estimator has lower worst-case risk in these cases, due to the fixed threshold at \(1.96\). However, one can achieve the same worst-case risk while achieving a much lower worst-case adaptation regret by constraining the worst-case risk directly as in Section 4.5.

Figure A4: “Max risk” refers to the worst case risk increase relative to \(Y_{U}\) in percentage terms \((R_{\max}(\infty,\delta)-\Sigma_{U})/\Sigma_{U}\times 100\). “Min risk” refers to the best case risk decrease relative to \(Y_{U}\) in percentage terms \((\min_{b}R(\theta,b,\delta)-\Sigma_{U})/\Sigma_{U}\times 100\). The calculations are based on the soft thresholding nearly adaptive estimator. The constrained variant bounds the worst-case risk to be less than \(70\%\) above the risk of \(Y_{U}\).
For example, Figure A4 shows that for the constrained soft-thresholding version of the adaptive estimator, even as we constrain the worst-case risk to be less than \(70\%\) above the risk of \(Y_{U}\), the best-case decrease in risk relative to \(Y_{U}\) is still greater than the worst-case increase in risk over \(Y_{U}\). Figure A5 shows that this property holds for the unconstrained optimally adaptive estimator so long as \(\rho^{2}\leq 0.65\) and also when the optimally adaptive estimator is constrained to exhibit risk no greater than \(120\%\) of the risk of \(Y_{U}\). ### Asymptotics as \(|\rho|\to 1\) This section considers the behavior of the worst-case adaptation regret as \(|\rho|\to 1\) for the optimally adaptive estimator as well as for the hard and soft-thresholding estimators. Let \(A(\delta,\rho)\) denote the worst-case adaptation regret of the estimator given by (4) under the given value of \(\rho\), so that \(A(\delta,\rho)\) returns the value of (6) with \(\tilde{\delta}=\delta\). We use \(A^{*}(\rho)=\inf_{\delta}A(\delta,\rho)\) (where the infimum is over all estimators) to denote the loss of efficiency under adaptation for the given value of \(\rho\). Likewise, we denote by \(A_{S}(\lambda,\rho)=A(\delta_{S,\lambda},\rho)\) and \(A_{H}(\lambda,\rho)=A(\delta_{H,\lambda},\rho)\) the worst-case adaptation regret for soft and hard-thresholding respectively with threshold \(\lambda\), where \(\delta_{S,\lambda}\) are \(\delta_{H,\lambda}\) are defined in Section 4.4. Finally, we use \(A_{S}^{*}(\rho)=\inf_{\lambda}A_{S}(\lambda,\rho)\) and \(A_{H}^{*}(\rho)=\inf_{\lambda}A_{H}(\lambda,\rho)\) to denote the minimum worst-case adaptation regret for soft and hard-thresholding respectively. To get some intuition for the interpretation of \(\rho\) close to \(1\), consider the Hausman setting where \(Y_{R}\) is efficient under the restriction \(b=0\). In this case, we have \(\operatorname{var}(Y_{R})=\operatorname{cov}(Y_{R},Y_{U})\), \(\operatorname{cov}(Y_{O},Y_{U})=\operatorname{cov}(Y_{R}-Y_{U},Y_{U})= \operatorname{var}(Y_{R})-\operatorname{var}(Y_{U})\) and \(\operatorname{var}(Y_{O})=\operatorname{var}(Y_{R})+\operatorname{var}(Y_{ U})-2\operatorname{cov}(Y_{R},Y_{U})=\operatorname{var}(Y_{U})-\operatorname{var}(Y_{R})\). It follows that \[\rho^{2}=\frac{\operatorname{cov}(Y_{O},Y_{U})^{2}}{\operatorname{var}(Y_{U} )\operatorname{var}(Y_{O})}=\frac{\operatorname{var}(Y_{U})-\operatorname{ var}(Y_{R})}{\operatorname{var}(Y_{U})}\] \[\rho^{-2}-1=\frac{\operatorname{var}(Y_{U})}{\operatorname{var}(Y_{U})- \operatorname{var}(Y_{R})}-1=\frac{\operatorname{var}(Y_{R})}{\operatorname{ var}(Y_{U})-\operatorname{var}(Y_{R})}=\frac{\operatorname{var}(Y_{R})/ \operatorname{var}(Y_{U})}{1-\operatorname{var}(Y_{R})/\operatorname{var}(Y_{U })}.\] Therefore, \(|\rho|\to 1\) corresponds to the case where \(\operatorname{var}(Y_{R})/\operatorname{var}(Y_{U})\to 0\). Furthermore, \(\rho^{-2}-1=\frac{\operatorname{var}(Y_{R})}{\operatorname{var}(Y_{U})}(1+o(1))\) as \(|\rho|\to 1\), revealing that this quantity captures the relative efficiency of the restricted estimator under proper specification. The following theorem characterizes the behavior of \(A^{*}(\rho)\), \(A^{*}_{S}(\rho)\) and \(A^{*}_{H}(\rho)\) as \(|\rho|\to 1\). 
**Theorem C.1**.: _We have_ \[\lim_{|\rho|\uparrow 1}\frac{A^{*}(\rho)}{2\log(\rho^{-2}-1)^{-1}}=\lim_{|\rho|\uparrow 1}\frac{A^{*}_{S}(\rho)}{2\log(\rho^{-2}-1)^{-1}}=\lim_{|\rho|\uparrow 1}\frac{A^{*}_{H}(\rho)}{2\log(\rho^{-2}-1)^{-1}}=1.\] In the remainder of this section, we prove Theorem C.1. We split the proof into upper bounds (Section C.2.1) and lower bounds (Section C.2.2). The lower bounds in Section C.2.2 are essentially immediate from results in Bickel (1983) for adapting to \(B\in\mathcal{B}=\{0,\infty\}\), whereas the upper bounds in Section C.2.1 involve new arguments to deal with intermediate values of \(B\). #### C.2.1 Upper bounds In this section, we show that \(A^{*}_{S}(\rho)\leq(1+o(1))2\log(\rho^{-2}-1)^{-1}\) and \(A^{*}_{H}(\rho)\leq(1+o(1))2\log(\rho^{-2}-1)^{-1}\). Since \(A^{*}(\rho)\) is bounded from above by both \(A^{*}_{S}(\rho)\) and \(A^{*}_{H}(\rho)\), this also implies \(A^{*}(\rho)\leq(1+o(1))2\log(\rho^{-2}-1)^{-1}\). Let \(r_{S}(\lambda,\mu)=E_{T\sim N(\mu,1)}(\delta_{S,\lambda}(T)-\mu)^{2}\) and \(r_{H}(\lambda,\mu)=E_{T\sim N(\mu,1)}(\delta_{H,\lambda}(T)-\mu)^{2}\) denote the risk of soft and hard thresholding. Then \[A_{S}(\lambda,\rho)=\sup_{\mu\in\mathbb{R}}\frac{r_{S}(\lambda,\mu)+\rho^{-2}-1}{r^{\operatorname{BNM}}(|\mu|)+\rho^{-2}-1}\] and similarly for \(A_{H}(\lambda,\rho)\). We use the following upper bound for \(r_{H}(\lambda,\mu)\) and \(r_{S}(\lambda,\mu)\), which follows immediately from results given in Johnstone (2019). **Lemma C.1**.: _There exists a constant \(C\) such that, for \(\lambda>C\), both \(r_{S}(\lambda,\mu)\) and \(r_{H}(\lambda,\mu)\) are bounded from above by \(\bar{r}(\lambda,\mu)\), where_ \[\bar{r}(\lambda,\mu)=\begin{cases}\min\left\{\lambda\exp\left(-\lambda^{2}/2\right)+1.2\mu^{2},1+\mu^{2}\right\}&|\mu|\leq\lambda\\ 1+\lambda^{2}&|\mu|>\lambda.\end{cases}\] Proof.: The bound for \(r_{H}(\lambda,\mu)\) follows from Lemma 8.5 in Johnstone (2019) along with the bound \(r_{H}(\lambda,0)\leq\frac{2+\varepsilon}{\sqrt{2\pi}}\lambda\exp\left(-\lambda^{2}/2\right)\), which holds for any \(\varepsilon>0\) for \(\lambda\) large enough by (8.15) in Johnstone (2019). The bound for \(r_{S}(\lambda,\mu)\) follows from Lemma 8.3 and (8.7) in Johnstone (2019). Let \(\tilde{\lambda}_{\rho}=\sqrt{2\log(\rho^{-2}-1)^{-1}}\). By Lemma C.1, \(A^{*}_{S}(\rho)\) and \(A^{*}_{H}(\rho)\) are, for \((\rho^{-2}-1)^{-1}\) large enough, bounded from above by the supremum over \(\mu\) of \[\frac{\bar{r}(\tilde{\lambda}_{\rho},\mu)+\rho^{-2}-1}{r^{\text{BNM}}(|\mu|)+\rho^{-2}-1} \tag{18}\] Let \(c(\rho)\) be such that \(c(\rho)/\tilde{\lambda}_{\rho}\to 0\) and \(c(\rho)\to\infty\) as \(|\rho|\uparrow 1\). We bound (18) separately for \(|\mu|\leq c(\rho)\) and for \(|\mu|\geq c(\rho)\). For \(|\mu|\leq c(\rho)\), we use the bound \(r^{\text{BNM}}(|\mu|)\geq.8\cdot\mu^{2}/(\mu^{2}+1)\) (Donoho, 1994), which gives an upper bound for (18) of \[\frac{\bar{r}(\tilde{\lambda}_{\rho},\mu)+\rho^{-2}-1}{.8\cdot \mu^{2}/(\mu^{2}+1)+\rho^{-2}-1}\leq\frac{\sqrt{2\log(\rho^{-2}-1)^{-1}}\cdot( \rho^{-2}-1)+1.2\mu^{2}+\rho^{-2}-1}{.8\cdot\mu^{2}/(\mu^{2}+1)+\rho^{-2}-1}\] \[\leq\sqrt{2\log(\rho^{-2}-1)^{-1}}+(1.2/.8)\cdot(\mu^{2}+1)+1\leq \sqrt{2\log(\rho^{-2}-1)^{-1}}+(1.2/.8)\cdot(c(\rho)^{2}+1)+1.\] As \(|\rho|\uparrow 1\), this increases more slowly than \(\log(\rho^{-2}-1)^{-1}\).
For \(|\mu|\geq c(\rho)\), we use the bound \(r^{\text{BNM}}(|\mu|)\geq r^{\text{BNM}}(c(\rho))\), which gives an upper bound for (18) of \[\frac{\bar{r}(\tilde{\lambda}_{\rho},\mu)+\rho^{-2}-1}{r^{\text{BNM}}(|c(\rho)|)+\rho^{-2}-1}\leq\frac{\bar{r}(\tilde{\lambda}_{\rho},\mu)}{r^{\text{BNM}}(|c(\rho)|)}+1\leq\frac{1+\tilde{\lambda}_{\rho}^{2}}{r^{\text{BNM}}(|c(\rho)|)}+1.\] As \(|\rho|\uparrow 1\), \(c(\rho)\to\infty\) and \(r^{\text{BNM}}(|c(\rho)|)\to 1\), so that the above display is equal to a \(1+o(1)\) term times \(\tilde{\lambda}_{\rho}^{2}=2\log(\rho^{-2}-1)^{-1}\), as required. #### C.2.2 Lower bounds In this section, we show that \(A^{*}(\rho)\geq(1+o(1))2\log(\rho^{-2}-1)^{-1}\). Since \(A^{*}_{S}(\rho)\) and \(A^{*}_{H}(\rho)\) are bounded from below by \(A^{*}(\rho)\), this also implies \(A^{*}_{S}(\rho)\geq(1+o(1))2\log(\rho^{-2}-1)^{-1}\) and \(A^{*}_{H}(\rho)\geq(1+o(1))2\log(\rho^{-2}-1)^{-1}\). Given an estimator \(\delta(T)\) of \(\mu\) in the normal means problem \(T\sim N(\mu,1)\), let \(m(\delta)=E_{T\sim N(0,1)}\delta(T)^{2}\) denote the risk at \(\mu=0\) and let \(M(\delta)=\sup_{\mu\in\mathbb{R}}E_{T\sim N(\mu,1)}(\delta(T)-\mu)^{2}\) denote the worst-case risk. The following lemma is immediate from Bickel (1983, Theorem 4.1). **Lemma C.2** (Bickel 1983, Theorem 4.1).: _For \(t\in(0,1]\), let \(\delta_{t}\) be an estimator that satisfies \(m(\delta_{t})\leq 1-t\). Then, as \(t\uparrow 1\), \(M(\delta_{t})\geq(1+o(1))\cdot 2\log(1-t)^{-1}\)._ Using this result, we prove the following lemma, which gives a lower bound for the worst-case adaptation regret and the worst-case risk of any estimator achieving the upper bound in Section C.2.1. The required lower bound \(A^{*}(\rho)\geq(1+o(1))2\log(\rho^{-2}-1)^{-1}\) follows from this result. **Lemma C.3**.: _For \(\rho\in(-1,1)\), let \(\delta_{\rho}:\mathbb{R}\to\mathbb{R}\) be an estimator of \(\mu\) in the normal means problem \(T\sim N(\mu,1)\). Suppose that the worst-case adaptation regret \(A(\delta_{\rho},\rho)\) of the corresponding estimator (4) satisfies \(A(\delta_{\rho},\rho)\leq(1+o(1))2\log(\rho^{-2}-1)^{-1}\) as \(|\rho|\to 1\). Then the following results hold as \(|\rho|\to 1\)._ 1. _The worst-case risk of the corresponding estimator (4) is bounded from below by a \(1+o(1)\) term times \(2\Sigma_{U}\log(\rho^{-2}-1)^{-1}\)._ 2. _\(A(\delta_{\rho},\rho)\geq(1+o(1))\cdot 2\log(\rho^{-2}-1)^{-1}\)._ Proof.: By the arguments in Section A.1, the worst-case risk of the estimator (4) with \(\delta=\delta_{\rho}\) is given by \(\Sigma_{U}\cdot\left[\rho^{2}\sup_{\mu}E_{T\sim N(\mu,1)}(\delta_{\rho}(T)-\mu)^{2}+1-\rho^{2}\right]\). As \(|\rho|\uparrow 1\), this is bounded from below by a \(1+o(1)\) term times \(\Sigma_{U}\sup_{\mu}E_{T\sim N(\mu,1)}(\delta_{\rho}(T)-\mu)^{2}\). Similarly, \(A(\delta_{\rho},\rho)\) is bounded from below by a \(1+o(1)\) term times \(\sup_{\mu}E_{T\sim N(\mu,1)}(\delta_{\rho}(T)-\mu)^{2}\) as \(|\rho|\uparrow 1\). Thus, it suffices to show that \(\sup_{\mu}E_{T\sim N(\mu,1)}(\delta_{\rho}(T)-\mu)^{2}\geq(1+o(1))\cdot 2\log(\rho^{-2}-1)^{-1}\).
To show this, note that it follows from plugging \(\tilde{b}=0\) into the objective in (6) that, for any \(\varepsilon>0\), we have, for \(|\rho|\) close enough to \(1\), \[\frac{E_{T\sim N(0,1)}\delta_{\rho}(T)^{2}}{\rho^{-2}-1}\leq A(\delta_{\rho},\rho)\leq(2+\varepsilon)\log(\rho^{-2}-1)^{-1}.\] Applying Lemma C.2 with \(1-t=(\rho^{-2}-1)\cdot(2+\varepsilon)\log(\rho^{-2}-1)^{-1}\), it follows that \[\sup_{\mu}E_{T\sim N(\mu,1)}(\delta_{\rho}(T)-\mu)^{2}\geq(1+o(1))\cdot 2\log\left(\left[(\rho^{-2}-1)\cdot(2+\varepsilon)\log(\rho^{-2}-1)^{-1}\right]^{-1}\right)\] \[=(1+o(1))\cdot\left[2\log(\rho^{-2}-1)^{-1}-2\log(2+\varepsilon)-2\log\log(\rho^{-2}-1)^{-1}\right]=(1+o(1))\cdot 2\log(\rho^{-2}-1)^{-1}\] as required. ## Appendix D Computational details In this section, we provide additional details on our computation of the adaptive estimator. ### Discrete approximation to estimators and risk function Operationally, discretizing the support of the random variable \(T\in\mathcal{T}\) into \(K\) points, finding an estimator \(\delta(T)\) is equivalent to finding a "policy" function \(\delta\left(t\right):\mathcal{T}\rightarrow\mathbb{R}\): \[\delta\left(t\right)=\sum_{k=1}^{K}\psi_{k}1\left\{t=t_{k}\right\}.\] Hence, we can rewrite the risk of estimator \(\delta(T)\) when \(T\sim N(b,1)\) as \[E_{T\sim N(b,1)}\left(\sum_{k=1}^{K}\psi_{k}1\left\{T=t_{k}\right\}-b\right)^{2}. \tag{19}\] Define \(\mu_{kb}=\Pr_{T\sim N(b,1)}\left(T=t_{k}\right)\) as the probability of falling into the \(k\)th grid point given bias \(b\), which can be evaluated analytically via the following discrete approximation to the normal distribution \[\mu_{kb}=\Phi\left(\left(t_{k}+t_{k+1}\right)/2-b\right)-\Phi\left(\left(t_{k}+t_{k-1}\right)/2-b\right), \tag{20}\] where we define \(t_{0}=-\infty\) and \(t_{K+1}=\infty\), which ensures that \(\sum_{k=1}^{K}\mu_{kb}=1\). The discretized approximation to the risk function (19) is therefore \[\sum_{k=1}^{K}\psi_{k}^{2}\mu_{kb}-2b\sum_{k=1}^{K}\psi_{k}\mu_{kb}+b^{2}. \tag{21}\] ### Computing minimax risk in the bounded normal mean problem We now provide details on how to compute the minimax risk \(r^{\text{BNM}}(|\tilde{b}|)\) in the bounded normal mean problem, which allows us to easily compute the \(B\)-minimax risk for the main example as described in 5 for each \(B\in\mathcal{B}\). This subsection is a specialized version of the first step of Algorithm 4.1. By definition, the minimax risk \(r^{\text{BNM}}(|\tilde{b}|)\) is the minimized value of the following minimax problem \[\min_{\delta}\max_{b\in[-|\tilde{b}|,|\tilde{b}|]}E_{T\sim N(b,1)}(\delta(T)-b)^{2}\] whose solution is the minimax estimator \(\delta^{\text{BNM}}\left(T;|\tilde{b}|\right)\). In particular, for each \(|\tilde{b}|=B/\sqrt{\Sigma_{O}}\in\{0.1,0.2,\ldots,9\}\) we calculate the minimax risk \(r^{\text{BNM}}(|\tilde{b}|)\) following the steps below. To compute the minimax risk function \(r^{\text{BNM}}(|\tilde{b}|)\) for values of \(|\tilde{b}|\) that are not included in the fine grid, we rely on spline interpolation. 1. Approximate the prior \(\pi\) with the finite dimensional vector \(\pi\in\Delta^{J}\), where the parameter space \([-|\tilde{b}|,|\tilde{b}|]\) is approximated by an equally spaced grid of \(b\) values spanning \([-|\tilde{b}|,|\tilde{b}|]\) with a step size of \(0.05\), totaling \(J\) grid values.
Approximate the conditional risk function as in (21), where the support for \(T\sim N(b,1)\) is approximated by an equally spaced grid of \(t\) values spanning \([-|\tilde{b}|-3,|\tilde{b}|+3]\) with a step size of \(0.1\), totaling \(K\) grid values. The minimax problem becomes \[\max_{\pi\in\Delta^{J}}\min_{\{\psi_{k}\}_{k=1}^{K}}\sum_{\ell=1}^{J}\pi_{\ell}\left(\sum_{k=1}^{K}\psi_{k}^{2}\mu_{kb_{\ell}}-2b_{\ell}\sum_{k=1}^{K}\psi_{k}\mu_{kb_{\ell}}+b_{\ell}^{2}\right). \tag{22}\] 2. The solution to the inner optimization yields the posterior mean \(\psi_{k}^{*}\left(\pi\right)=\frac{\sum_{\ell=1}^{J}\pi_{\ell}\mu_{kb_{\ell}}b_{\ell}}{\sum_{\ell=1}^{J}\pi_{\ell}\mu_{kb_{\ell}}}\). The outer problem is then \[\max_{\pi\in\Delta^{J}}\sum_{\ell=1}^{J}\pi_{\ell}\left(\sum_{k=1}^{K}\left(\psi_{k}^{*}\left(\pi\right)\right)^{2}\mu_{kb_{\ell}}-2b_{\ell}\sum_{k=1}^{K}\psi_{k}^{*}\left(\pi\right)\mu_{kb_{\ell}}+b_{\ell}^{2}\right).\] 3. Solve the outer problem for the least favorable prior \(\pi^{*}\) based on sequential quadratic programming via MATLAB's fmincon routine. The minimax estimator \(\delta^{\text{BNM}}\left(T;|\tilde{b}|\right)\) is therefore \(\sum_{k=1}^{K}\psi_{k}^{*}\left(\pi^{*}\right)1\left\{t=t_{k}\right\}\) and the minimax risk \(r^{\text{BNM}}(|\tilde{b}|)\) is the minimized value. Since the objective is concave in \(\pi\) (it is the pointwise infimum over a set of linear functions; see Boyd and Vandenberghe, 2004, p. 81), we can check that the algorithm has found a global maximum by checking for a local maximum. ### Computing the optimally adaptive estimator for a given \(\rho^{2}\) As explained in the main text, the adaptive problem in the main example only depends on \(\Sigma\) through the correlation coefficient \(\rho^{2}\). For a given value of \(\rho^{2}\), we use convex programming methods to solve for the function \(\tilde{\delta}^{\text{adapt}}(t;\rho)\) based on the steps described below, which is a specialized version of the second step of Algorithm 4.1. 1. Approximate the prior \(\pi\) with the finite dimensional vector \(\pi\in\Delta^{J}\), where the parameter space for \(b/\sqrt{\Sigma_{O}}\) is approximated by an equally spaced grid of \(\tilde{b}\) values spanning \([-9,9]\) with a step size of \(0.025\), totaling \(J\) grid values. Approximate the conditional risk function as in (21), where the support for \(T\sim N(\tilde{b},1)\) is approximated by an equally spaced grid of \(t\) values spanning \([-12,12]\) with a step size of \(0.05\), totaling \(K\) grid values. The adaptation problem (6) becomes \[\max_{\pi\in\Delta^{J}}\min_{\{\psi_{k}\}_{k=1}^{K}}\sum_{\ell=1}^{J}\pi_{\ell}\omega_{\ell}\left(\sum_{k=1}^{K}\psi_{k}^{2}\mu_{kb_{\ell}}-2b_{\ell}\sum_{k=1}^{K}\psi_{k}\mu_{kb_{\ell}}+b_{\ell}^{2}\right)+\rho^{-2}-1 \tag{23}\] where \(\omega_{\ell}=\left(r^{\text{BNM}}(|\tilde{b}_{\ell}|)+\rho^{-2}-1\right)^{-1}\) using output from the previous subsection. 2. The solution to the inner optimization yields \(\psi_{k}^{*}\left(\pi\right)=\frac{\sum_{\ell=1}^{J}\pi_{\ell}\mu_{kb_{\ell}}\omega_{\ell}b_{\ell}}{\sum_{\ell=1}^{J}\pi_{\ell}\mu_{kb_{\ell}}\omega_{\ell}}\). The outer problem is then \[\max_{\pi\in\Delta^{J}}\sum_{\ell=1}^{J}\pi_{\ell}\omega_{\ell}\left(\sum_{k=1}^{K}\left(\psi_{k}^{*}\left(\pi\right)\right)^{2}\mu_{kb_{\ell}}-2b_{\ell}\sum_{k=1}^{K}\psi_{k}^{*}\left(\pi\right)\mu_{kb_{\ell}}+b_{\ell}^{2}\right)+\rho^{-2}-1.\] 3.
Solve the outer problem for the least favorable (adaptive) prior \(\pi^{*}\) based on sequential quadratic programming via Matlab's fmincon routine. The adaptive estimator \(\tilde{\delta}^{\mathrm{adapt}}(t;\rho)\) is therefore \(\sum_{k=1}^{K}\psi_{k}^{*}\left(\pi^{*}\right)1\left\{t=t_{k}\right\}\). The loss of efficiency under adaptation is the minimized value. As with the bounded normal mean problem, the objective is concave in \(\pi\), so we can check that the algorithm has found a global maximum by checking for a local maximum. ### Computing the optimally adaptive estimator based on the lookup table To simplify the computation of the optimally adaptive estimator, we pre-calculate the adaptive estimates over an unequally spaced grid \(\tanh([0,0.05,0.10,\ldots,3])\) of correlation coefficients using the algorithm described above. As \(\rho^{2}\) approaches one, the solution becomes sensitive to small changes in \(\rho\). The uneven spacing of the \(\rho\) grid allows for more accurate interpolation based on the simple pre-tabulated lookup table that we describe next. To rapidly obtain a final estimator \(\tilde{\delta}^{\mathrm{adapt}}(T_{O};\rho)\) for a given application, we conduct 2D interpolation across \(\rho^{2}\) and \(t\) values to tailor the adaptive estimates to the exact parameter values desired. For example, we obtain \(\tilde{\delta}\left(T_{O};-0.524\right)\) based on spline interpolation at \(\rho^{2}=(-0.524)^{2}\) together with the observed test statistic \(T_{O}\) based on the 2D grid of \(\rho^{2}\) and \(t\) values. Figure A6 plots the maximum and minimum values of \(\delta(T_{O})/T_{O}\) against \(\rho^{2}\). For all enumerated values of \(\rho^{2}\), the adaptive estimator "shrinks" \(T_{O}\) towards zero. ### Computing the nearly adaptive estimators To find the nearly adaptive estimators in the class of soft thresholding estimators and hard thresholding estimators, it suffices to solve the two dimensional minimax problem in threshold \(\lambda\) and scaled bias level \(\tilde{b}\). We provide details for the claim in the main text that this two dimensional minimax problem can be easily solved in practice even though the minimax theorem does not apply to these restricted classes of estimators. The derivation is largely based on the following equality using moments of a truncated standard normal \(X_{i}\mid a<X_{i}<b\). Let \(\phi(x)\) and \(\Phi(x)\) denote the pdf and cdf of a standard normal distribution. Then for any \(a<b\), we have \[\int_{a}^{b}x^{2}\phi(x)dx=\Phi\left(b\right)-\Phi\left(a\right)-\left(b\phi(b)- a\phi(a)\right). 
\tag{24}\] #### d.5.1 Soft thresholding Rewrite the soft thresholding estimator as \(\delta_{S,\lambda}\left(T_{O}\right)=\mathbf{1}\left\{T_{O}>\lambda\right\} \left(T_{O}-\lambda\right)+\mathbf{1}\left\{T_{O}<-\lambda\right\}\left(T_{O} +\lambda\right)\) and its risk function can be expressed as \[E_{T_{O}\sim N\left(\tilde{b},1\right)}\left(\delta_{S,\lambda} \left(T_{O}\right)-\tilde{b}\right)^{2} \tag{25}\] \[= E_{T_{O}\sim N\left(\tilde{b},1\right)}\left(\mathbf{1}\left\{T _{O}>\lambda\right\}\left(T_{O}-\lambda-\tilde{b}\right)+\mathbf{1}\left\{T _{O}<-\lambda\right\}\left(T_{O}+\lambda-\tilde{b}\right)-\mathbf{1}\left\{- \lambda<T_{O}<\lambda\right\}\tilde{b}\right)^{2}\] \[= \tilde{b}^{2}\left(\Phi\left(\lambda-\tilde{b}\right)-\Phi\left( -\lambda-\tilde{b}\right)\right)+\int_{\lambda-\tilde{b}}^{\infty}\left(x- \lambda\right)^{2}\phi(x)dx+\int_{-\infty}^{-\lambda-\tilde{b}}\left(x+ \lambda\right)^{2}\phi(x)dx\] The integrals in (25) simplify to \[\int_{\lambda-\tilde{b}}^{\infty}\left(x-\lambda\right)^{2}\phi(x) dx+\int_{-\infty}^{-\lambda-\tilde{b}}\left(x+\lambda\right)^{2}\phi(x)dx\] \[= \int_{\lambda-\tilde{b}}^{\infty}x^{2}\phi(x)dx+\int_{-\infty}^{- \lambda-\tilde{b}}x^{2}\phi(x)dx\] \[-2\lambda\left(\int_{\lambda-\tilde{b}}^{\infty}x\phi(x)dx-\int_ {-\infty}^{-\lambda-\tilde{b}}x\phi(x)dx\right)\] \[+\lambda^{2}\left(1-\Phi\left(\lambda-\tilde{b}\right)+\Phi\left( -\lambda-\tilde{b}\right)\right)\] \[= 1-\Phi\left(\lambda-\tilde{b}\right)+\Phi\left(-\lambda-\tilde {b}\right)+\left((\lambda-\tilde{b})\phi(\lambda-\tilde{b})-(-\lambda-\tilde{ b})\phi(-\lambda-\tilde{b})\right)\] \[-2\lambda\left(\phi(\lambda-\tilde{b})+\phi(-\lambda-\tilde{b}) \right)+\lambda^{2}\left(1-\Phi\left(\lambda-\tilde{b}\right)+\Phi\left(- \lambda-\tilde{b}\right)\right)\] where we use the fact that \(\int_{\lambda-\tilde{b}}^{\infty}x^{2}\phi(x)dx+\int_{-\infty}^{-\lambda- \tilde{b}}x^{2}\phi(x)dx=\int_{-\infty}^{\infty}x^{2}\phi(x)dx-\int_{-\lambda- \tilde{b}}^{\lambda-\tilde{b}}x^{2}\phi(x)dx\) and Equation (24). The nearly adaptive objective function \[\min_{\lambda}\max_{\tilde{b}}\frac{E_{T_{O}\sim N(\tilde{b},1)}\left(\delta_{ S,\lambda}\left(T_{O}\right)-\tilde{b}\right)^{2}+\rho^{-2}-1}{r^{\text{BNM}}(| \tilde{b}|)+\rho^{-2}-1},\] can now be easily solved by Matlab's fminimax function when the risk function is evaluated based on the simplified expression derived above. To simplify the computation of the nearly adaptive estimator, we pre-calculate the adaptive thresholds over an unequally spaced grid \(\tanh([0,0.05,0.10,\ldots,3])\) of correlation coefficients as explained above. To rapidly obtain a final estimator \(\delta_{S,\lambda}\left(T_{O};\rho\right)\) for a given application, we conduct a spline interpolation across \(\rho^{2}\) values to tailor the threshold to the exact parameter values desired. For example, we obtain \(\delta_{S,\lambda}\left(T_{O};-0.524\right)\) firstly based on spline interpolation at \(\rho^{2}=(-0.524)^{2}\) to obtain the threshold \(\lambda\), and then with the observed test statistic \(T_{O}\). 
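To make this computation concrete, the sketch below evaluates the closed-form soft-thresholding risk derived above and approximates the nearly adaptive threshold by a plain grid search. It is a minimal Python illustration, not the authors' MATLAB/fminimax code: the function `r_bnm` is an assumed user-supplied interpolant of the bounded normal mean minimax risk (for example, the spline described earlier in this appendix), and the grid ranges are placeholders.

```python
# Minimal sketch: closed-form soft-thresholding risk (25) and a grid-search
# approximation to the nearly adaptive threshold. `r_bnm` is an assumed
# callable returning the bounded-normal-mean minimax risk on an array of |b|.
import numpy as np
from scipy.stats import norm


def soft_threshold_risk(lam, b):
    """E[(delta_{S,lam}(T) - b)^2] for T ~ N(b, 1), using the simplification of (25)."""
    a1, a2 = lam - b, -lam - b
    keep = norm.cdf(a1) - norm.cdf(a2)          # P(-lam < T < lam)
    tail = 1.0 - norm.cdf(a1) + norm.cdf(a2)    # P(T outside [-lam, lam])
    return (b ** 2 * keep + tail
            + a1 * norm.pdf(a1) - a2 * norm.pdf(a2)
            - 2.0 * lam * (norm.pdf(a1) + norm.pdf(a2))
            + lam ** 2 * tail)


def nearly_adaptive_soft_threshold(rho2, r_bnm,
                                   lams=np.linspace(0.0, 5.0, 501),
                                   bs=np.linspace(0.0, 12.0, 1201)):
    """Grid-search version of: min over lam of max over b of the regret ratio."""
    c = 1.0 / rho2 - 1.0
    risk = soft_threshold_risk(lams[:, None], bs[None, :])
    regret = (risk + c) / (r_bnm(np.abs(bs))[None, :] + c)
    worst = regret.max(axis=1)                  # worst-case regret per threshold
    return lams[worst.argmin()], worst.min()
```

Because the risk is symmetric in \(\tilde{b}\), restricting the grid to nonnegative bias values is enough; the grids would need to be widened as \(\rho^{2}\) approaches one, where the adaptive threshold diverges.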
#### d.5.2 Hard thresholding Similarly rewrite hard thresholding as \(\delta_{H,\lambda}\left(T_{O}\right)=\left(1-\mathbf{1}\left\{-\lambda<T_{O}< \lambda\right\}\right)T_{O}\) and its risk function can be simplified due to Equation (24) \[E_{T_{O}\sim N(\tilde{b},1))}\left(\delta_{H,\lambda}\left(T_{O} \right)-\tilde{b}\right)^{2}\] \[= E_{T_{O}\sim N(\tilde{b},1)}\left(\left(1-\mathbf{1}\left\{- \lambda<T_{O}<\lambda\right\}\right)\left(T_{O}-\tilde{b}\right)-\mathbf{1} \left\{-\lambda<T_{O}<\lambda\right\}\tilde{b}\right)^{2}\] \[= \tilde{b}^{2}\left(\Phi\left(\lambda-\tilde{b}\right)-\Phi\left( -\lambda-\tilde{b}\right)\right)+\int_{-\infty}^{\infty}x^{2}\phi(x)dx-\int_{- \lambda-\tilde{b}}^{\lambda-\tilde{b}}x^{2}\phi(x)dx.\] ## Appendix E Pooling controls (LaLonde, 1986) LaLonde (1986) contrasted experimental estimates of the causal effects of job training derived from the National Supported Work (NSW) demonstration with econometric estimates derived from observational controls, concluding that the latter were highly sensitive to modeling choices. Subsequent work by Heckman and Hotz (1989) argued that proper use of specification tests would have guarded against large biases in LaLonde (1986)'s setting. An important limitation of the NSW experiment, however, is that its small sample size inhibits a precise assessment of the magnitude of selection bias associated with any given non-experimental estimator. In what follows, we explore the prospects of improving experimental estimates of the NSW's impact on earnings by utilizing additional non-experimental control groups and adapting to the biases their inclusion engenders. We consider three analysis samples differentiated by the origin of the untreated ("control") observations. All three samples include the experimental NSW treatment group observations. In the first sample the untreated observations are given by the experimental NSW controls. In a second sample the controls come from LaLonde (1986)'s observational "CPS-1" sample, as reconstructed by Dehejia and Wahba (1999). In the third sample, the controls are a propensity score screened subsample of CPS-1. To estimate treatment effects in the samples with observational controls, we follow Angrist and Pischke (2009) in fitting linear models for 1978 earnings to a treatment dummy, 1974 and 1975 earnings, a quadratic in age, years of schooling, a dummy for no degree, a race and ethnicity dummies, and a dummy for marriage status. The propensity score is generated by fitting a probit model of treatment status on the same covariates and dropping observations with predicted treatment probabilities outside of the interval \([0.1,0.9]\). Let \(Y_{U}\) be the mean treatment / control contrast in the experimental NSW sample. We denote by \(Y_{R1}\) the estimated coefficient on the treatment dummy in the linear model described above when the controls are drawn from the CPS-1 sample. Finally, \(Y_{R2}\) gives the corresponding estimate obtained from the linear model when the controls come from the propensity score screened CPS-1 sample. We follow the applied literature in assuming trimming does not meaningfully change the estimand, a perspective that can be formalized by viewing the trimmed estimator as one realization of a sequence of estimators with trimming shares that decrease rapidly with the sample size Huber et al. (2013). Table A1 reports point estimates from all three estimation approaches along with standard errors derived from the pairs bootstrap. 
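The standard errors and the covariance matrix of \((Y_{U},Y_{R1},Y_{R2})\) come from the pairs bootstrap. The sketch below shows one plausible way to organize that computation in Python; the estimator callables and data arrays are placeholders, and the resampling scheme (a shared resample of the NSW treated rows and of the CPS-1 control rows within each replication, so that cross-estimator covariances reflect the shared data) is our reading of the procedure rather than the authors' exact code.

```python
# Hedged sketch of the pairs bootstrap behind Table A1. The three estimator
# callables (experimental contrast, CPS-1 regression, trimmed CPS-1 regression)
# and the data arrays are placeholders for the application described above.
import numpy as np


def pairs_bootstrap(nsw_treated, nsw_controls, cps_controls,
                    est_experimental, est_cps, est_cps_trimmed,
                    n_boot=1000, seed=0):
    """Return an (n_boot, 3) array of bootstrap replicates of (Y_U, Y_R1, Y_R2)."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_boot, 3))
    for b in range(n_boot):
        # Resample rows (pairs) with replacement; the treated rows and the
        # CPS-1 rows are drawn once per replication and shared across estimators.
        t = nsw_treated[rng.integers(0, len(nsw_treated), len(nsw_treated))]
        c = nsw_controls[rng.integers(0, len(nsw_controls), len(nsw_controls))]
        k = cps_controls[rng.integers(0, len(cps_controls), len(cps_controls))]
        out[b] = [est_experimental(t, c), est_cps(t, k), est_cps_trimmed(t, k)]
    return out

# Standard errors are the column standard deviations of the replicates, and
# Sigma_hat = np.cov(out, rowvar=False) feeds the GMM and adaptive steps below.
```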
The realizations of \((Y_{R1},Y_{R2})\) exactly reproduce those found in the last row of Table 3.3.3 of Angrist and Pischke (2009) but the reported standard errors are somewhat larger due to our use of the bootstrap, which accounts both for heteroscedasticity and uncertainty in the propensity score screening procedure. The realization of \(Y_{U}\) matches the point estimate reported in the first row of Angrist and Pischke (2009)'s Table 3.3.3 but again exhibits a modestly larger standard error reflecting heteroscedasticity with respect to treatment status. While the experimental mean contrast \((Y_{U})\) of $1,794 is statistically distinguishable from zero at the 5% level, considerable uncertainty remains about the magnitude of the average treatment effect of the NSW program on earnings. The propensity trimmed CPS-1 estimate lies closer to the experimental estimate than does the estimate from the untrimmed CPS-1 sample. However, the untrimmed estimate has a much smaller standard error than its trimmed analogue. Though the two restricted estimators are both derived from the CPS-1 sample, our bootstrap estimate of the correlation between them is only 0.75, revealing that each measure contains substantial independent information. Combining the three estimators together via GMM, a procedure we denote \(GMM_{3}\), yields roughly an 11% reduction in standard errors relative to relying on \(Y_{U}\) alone. However, the \(J\)-test associated with the \(GMM_{3}\) procedure rejects the null hypothesis that the three estimators share the same probability limit at the 5% level (\(p=0.04\)). Combining only \(Y_{U}\) and \(Y_{R2}\) by GMM, a procedure we denote \(GMM_{2}\), yields a standard error 7% below that of \(Y_{U}\) alone. The \(J\)-test associated with \(GMM_{2}\) fails to reject the restriction that \(Y_{U}\) and \(Y_{R2}\) share a common probability limit (\(p=0.51\)). Hence, sequential pre-testing selects \(GMM_{2}\). Letting \(b_{1}\equiv\mathbb{E}[Y_{R1}-\theta]\) and \(b_{2}\equiv\mathbb{E}[Y_{R2}-\theta]\) our pre-tests reject the null that \(b_{1}=b_{2}=0\) and fail to reject that \(b_{2}=0\). However, it seems plausible that both restricted estimators suffer from some degree of bias. The adaptive estimator seeks to determine the magnitude of those biases and make the best possible use of the observational estimates. In adapting to misspecification, we operate under the assumption that \(|b_{1}|\geq|b_{2}|\), which is in keeping with the common motivation of propensity score trimming as a tool for bias reduction (e.g., Angrist and Pischke, 2009, Section 3.3.3). Denoting the bounds on \((|b_{1}|,|b_{2}|)\) by \((B_{1},B_{2})\), we adapt over the finite collection of bounds \(\mathcal{B}=\{(0,0),(\infty,0),(\infty,\infty)\}\), the granular nature of which dramatically reduces the computational complexity of finding the optimally adaptive estimator. Note that the scenario \((B_{1},B_{2})=(0,\infty)\) has been ruled out by assumption, reflecting the belief that propensity score trimming reduces bias. See Appendix F for further details. From Table A1, the multivariate adaptive estimator yields an estimated training effect of $1,597: roughly two thirds of the way towards \(Y_{U}\) from the efficient \(GMM_{3}\) estimate. Hence, the observational evidence, while potentially quite biased, leads to a non-trivial (11%) adjustment of our best estimate of the effect of NSW training away from the experimental benchmark. 
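For readers who want to reproduce the GMM combinations and \(J\)-tests referenced above, the following sketch treats the estimates \(y=(Y_{U},Y_{R1},Y_{R2})\) and a covariance estimate (e.g. from the pairs bootstrap) as given inputs; under the null that all estimators share the same probability limit, efficient GMM reduces to GLS on a constant. The variable names are ours, not the authors'.

```python
# Sketch: optimally weighted GMM combination of estimators that share a common
# probability limit under the null, plus the over-identification (J) test.
import numpy as np
from scipy.stats import chi2


def gmm_common_mean(y, Sigma):
    """Efficient GMM estimate of theta, its standard error, and the J-test."""
    y = np.asarray(y, dtype=float)
    W = np.linalg.inv(np.asarray(Sigma, dtype=float))
    ones = np.ones_like(y)
    theta = (ones @ W @ y) / (ones @ W @ ones)
    se = np.sqrt(1.0 / (ones @ W @ ones))
    resid = y - theta * ones
    J = resid @ W @ resid
    p_value = chi2.sf(J, df=len(y) - 1)
    return theta, se, J, p_value

# GMM_3 combines all three estimates; GMM_2 drops Y_R1:
# theta3, se3, J3, p3 = gmm_common_mean([y_u, y_r1, y_r2], Sigma_hat)
# idx = np.ix_([0, 2], [0, 2])
# theta2, se2, J2, p2 = gmm_common_mean([y_u, y_r2], Sigma_hat[idx])
```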
In Table A2 we show that pairwise adaptation using only \(Y_{U}\) and \(Y_{R1}\) or only \(Y_{U}\) and \(Y_{R2}\) yields estimates much closer to \(Y_{U}\). A kindred approach, which avoids completely discarding the information in either restricted estimator, is to combine \(Y_{R1}\) and \(Y_{R2}\) together via optimally weighted GMM and then adapt between \(Y_{U}\) and the composite GMM estimate. As shown in Table A3, this two step approach yields an estimate of $1,624, extremely close to the multivariate adaptive estimate of $1,597, but comes with substantially elevated worst case adaptation regret relative to a multivariate oracle who knows which pair of bounds in \(\mathcal{B}\) prevails. While the multivariate adaptive estimate of $1,597 turns out to be very close to the pre-test estimate of $1,629, the adaptive estimator's worst case adaptation regret of 7.7% is substantially lower than that of the pre-test estimator, which exhibits a maximal regret of 47.5%. The adaptive estimator achieves this advantage by equalizing the maximal adaptation regret across the three bias scenarios \(\{(b_{1}=0,b_{2}=0),(b_{1}\neq 0,b_{2}=0),(b_{1}\neq 0,b_{2}\neq 0)\}\) allowed by our specification of \(\mathcal{B}\). When both restricted estimators are unbiased, the adaptive estimator yields a 14.5% reduction in worst case risk relative to \(Y_{U}\). However, an oracle that knows both restricted estimators are unbiased would choose to employ \(GMM_{3}\), implying maximal adaptation regret of \(0.855/0.793\approx 1.077\). When \(Y_{R1}\) is biased, but \(Y_{R2}\) is not, the adaptive estimator yields a 7.5% reduction in worst case risk. An oracle that knows only \(Y_{R1}\) is biased will rely on \(GMM_{2}\), which yields worst case scaled risk of 0.858; hence, the worst case adaptation regret of not having employed \(GMM_{2}\) in this scenario is \(0.925/0.858\approx 1.077\). Finally, when both restricted estimators are biased, the adaptive estimator can exhibit up to a 7.7% increase in risk relative to \(Y_{U}\). The near oracle performance of the optimally adaptive estimator in this setting suggests it should prove attractive to researchers with a wide range of priors regarding the degree of selection bias present in the CPS-1 samples. Both the skeptic that believes the restricted estimators may be immensely biased and the optimist who believes the restricted estimators are exactly unbiased should face at most a 7.7% increase in maximal risk from using the adaptive estimator. In contrast, an optimist could very well object to a proposal to rely on \(Y_{U}\) alone, as doing so would raise risk by 26% over employing \(GMM_{3}\). ## Appendix F Details of bivariate adaptation In Section E, we report the results of adapting simultaneously to the bias in two restricted estimators when the bias spaces take a nested structure. Denoting the bounds on \((|b_{1}|,|b_{2}|)\) of the two restricted estimators by \((B_{1},B_{2})\), we adapt over the finite collection of bounds \(\mathcal{B}=\{(0,0),(\infty,0),(\infty,\infty)\}\). Note that the scenario \((B_{1},B_{2})=(0,\infty)\) has been ruled out by assumption, reflecting the belief that propensity score trimming reduces bias. 
The minimax risk over each bias space \(\mathcal{C}_{(B_{1},B_{2})}\) is therefore \[R^{*}(\mathcal{C}_{(B_{1},B_{2})})=\begin{cases}\Sigma_{U}&\text{ for }(B_{1},B_{2})=(\infty,\infty)\\ \Sigma_{U}-\Sigma_{UO,2}\Sigma_{O,2}^{-1}\Sigma_{UO,2}&\text{ for }(B_{1},B_{2})=( \infty,0)\\ \Sigma_{U}-\Sigma_{UO}\Sigma_{O}^{-1}\Sigma_{UO}&\text{ for }(B_{1},B_{2})=( 0,0)\end{cases} \tag{26}\] Then \(\delta(Y_{O})\) is the solution to the following problem \[\inf_{\delta}\max_{(B_{1},B_{2})\in\mathcal{B}}\frac{\max_{b\in\mathcal{C}_{(B _{1},B_{2})}}E_{Y_{O}\sim N(b,\Sigma_{O})}(\delta(Y_{O})-\Sigma_{UO}\Sigma_{O} ^{-1}b)^{2}+\Sigma_{U}-\Sigma_{UO}\Sigma_{O}^{-1}\Sigma_{UO}}{R^{*}(\mathcal{C }_{(B_{1},B_{2})})}\] Since the three spaces are nested, we can rewrite the adaptation problem as \[\inf_{\delta}\sup_{b\in\mathbb{R}\times\mathbb{R}}\frac{E_{Y_{O}\sim N(b, \Sigma_{O})}(\delta(Y_{O})-\Sigma_{UO}\Sigma_{O}^{-1}b)^{2}+\Sigma_{U}-\Sigma_ {UO}\Sigma_{O}^{-1}\Sigma_{UO}}{\tilde{R}(\tilde{\mathcal{S}}(b))}\] where the scaling is \[\tilde{R}(\tilde{\mathcal{S}}(b))=\begin{cases}\Sigma_{U}-\Sigma_{UO}\Sigma_{O }^{-1}\Sigma_{UO}&\text{ if }b_{1}=b_{2}=0\\ \Sigma_{U}-\Sigma_{UO,2}\Sigma_{O,2}^{-1}\Sigma_{UO,2}&\text{ if }b_{1}\neq 0,b_{2}=0\\ \Sigma_{U}&\text{ if }b_{1}\neq 0,b_{2}\neq 0\end{cases} \tag{27}\] Given the high dimensionality of the adaptation problem, we use CVX instead of Matlab's fmincon to solve the scaled minimax problem. ### Shrinkage pattern To illustrate the shrinkage properties of the multivariate adaptive estimator, Figure A7 plots the adaptive minimax estimator of bias against its unbiased counterpart \(\Sigma_{U,O}\Sigma_{O}^{-1}Y_{O}\). The figure reveals a complex shrinkage pattern reflecting the asymmetric nature of \(\mathcal{C}_{B}\). When \(Y_{O1}=Y_{R1}-Y_{U}\) is small, \(Y_{O2}=Y_{R2}-Y_{U}\) is shrunk aggressively towards zero. However when \(Y_{O2}\) is small, \(Y_{O1}\) is shrunk less aggressively towards zero. When both \(Y_{O1}\) and \(Y_{O2}\) are large, the biases exhibit little shrinkage. ### Pairwise adaptation For comparison with the trivariate adaptation estimates reported in the text, we also consider pairwise adaptation using only \(Y_{U}\) and \(Y_{R1}\) or only \(Y_{U}\) and \(Y_{R2}\), keeping the bias spaces as before. Specifically to adapt using only \(Y_{U}\) and \(Y_{Rj}\), we consider an oracle where the set \(\mathcal{B}\) of bounds \(B\) on the bias consists of the two elements \(0\) and \(\infty\). Table A2 shows that pairwise adaptation produces estimates much closer to \(Y_{U}\) than the multivariate adaptive estimate. While pairwise adaptive estimates both incur smaller adaptation regret, the efficiency gain when the model is correct is smaller than with the multivariate adaptive estimate. ### Bivariate adaptation with GMM composite For another comparison with the trivariate adaptation estimates reported in the text, we also consider combining \(Y_{R1}\) and \(Y_{R2}\) first via optimally weighted GMM, which is a composite of the two \(Y_{\text{comp}}\). We then adapt between \(Y_{U}\) and \(Y_{\text{comp}}\). The bias space is now also a composite of the two-dimensional bias space \(\mathcal{C}_{(B_{1},B_{2})}\), and we consider an oracle where the set \(\mathcal{B}\) of bounds \(B\) on the bias consists of the two elements 0 and \(\infty\). Table A3 shows that composite adaptation produces estimates very similar to the multivariate adaptive estimate. 
The adaptation regret relative to an oracle who knows a bound on the bias of the composite is also small. However, for a fair comparison with multivariate adaptation, one should compare its efficiency loss relative to the multivariate oracle whose minimax risk is specified in (26). This notion of worst-case regret is substantially higher, at 25%, because bivariate adaptation against the GMM composite cannot leverage the nested structure of the multivariate parameter space \(\mathcal{B}\).
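To make the oracle benchmarks concrete, the short sketch below computes the three minimax risks in (26) from a covariance estimate for \((Y_{U},Y_{R1},Y_{R2})\), such as the pairs-bootstrap estimate; the scaled risks discussed around Table A1 are these quantities divided by \(\Sigma_{U}\). The input matrix is a placeholder.

```python
# Sketch: oracle minimax risks R*(C_{(B1,B2)}) from (26), computed from a
# covariance matrix Sigma for the stacked estimators (Y_U, Y_R1, Y_R2).
import numpy as np


def oracle_risks(Sigma):
    Sigma = np.asarray(Sigma, dtype=float)
    # Y_O = (Y_R1 - Y_U, Y_R2 - Y_U) is a linear map of (Y_U, Y_R1, Y_R2).
    A = np.array([[-1.0, 1.0, 0.0],
                  [-1.0, 0.0, 1.0]])
    Sigma_U = Sigma[0, 0]
    Sigma_UO = A @ Sigma[:, 0]                  # Cov(Y_O, Y_U)
    Sigma_O = A @ Sigma @ A.T                   # Var(Y_O)
    return {
        "(inf, inf)": Sigma_U,                                              # both biased
        "(inf, 0)": Sigma_U - Sigma_UO[1] ** 2 / Sigma_O[1, 1],             # only b2 = 0
        "(0, 0)": Sigma_U - Sigma_UO @ np.linalg.solve(Sigma_O, Sigma_UO),  # both unbiased
    }
```

The \((0,0)\) entry coincides with the variance of \(GMM_{3}\) and the \((\infty,0)\) entry with that of \(GMM_{2}\), which is why the regret comparisons above are anchored to those estimators.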
2306.03249
Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models
Latent Gaussian models have a rich history in statistics and machine learning, with applications ranging from factor analysis to compressed sensing to time series analysis. The classical method for maximizing the likelihood of these models is the expectation-maximization (EM) algorithm. For problems with high-dimensional latent variables and large datasets, EM scales poorly because it needs to invert as many large covariance matrices as the number of data points. We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversion. Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation. In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
Alexander Lin, Bahareh Tolooshams, Yves Atchadé, Demba Ba
2023-06-05T21:08:34Z
http://arxiv.org/abs/2306.03249v1
# Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models ###### Abstract Latent Gaussian models have a rich history in statistics and machine learning, with applications ranging from factor analysis to compressed sensing to time series analysis. The classical method for maximizing the likelihood of these models is the expectation-maximization (EM) algorithm. For problems with high-dimensional latent variables and large datasets, EM scales poorly because it needs to invert as many large covariance matrices as the number of data points. We introduce _probabilistic unrolling_, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversion. Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation. In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance. ## 1 Introduction Latent variable models with Gaussian prior and Gaussian likelihood, i.e. _latent Gaussian models_ (LGMs), are popular and powerful tools within statistics and machine learning. They have found applications in many settings, such as factor analysis (Basilevsky, 2009), sparse Bayesian learning (Tipping, 2001), state-space models (Durbin & Koopman, 2012), and neural linear models (Ober & Rasmussen, 2019). In these models, the means and/or covariances of the Gaussian distributions are functions of parameters that must be optimized to fit observed data. The expectation-maximization (EM) algorithm (Dempster et al., 1977) is a popular way to optimize the parameters by maximum likelihood estimation. One variant called _gradient EM_ (Lange, 1995) implements the M-step through a single iteration of gradient descent. For problems with high-dimensional latent variables and many training examples, gradient EM scales poorly due to the need to invert as many large covariance matrices as the number of examples. Advances in numerical linear algebra have demonstrated, in various contexts, that iterative solvers often provide a much faster alternative to matrix inversion (Saad, 2003; Ubaru et al., 2017; Gardner et al., 2018; Lin et al., 2022b). A separate, burgeoning literature on unrolled optimization has shown theoretical and practical benefits to differentiating through the iterations of deterministic optimizers (Maclaurin et al., 2015; Shaban et al., 2019; Ablin et al., 2020; Tolooshams & Ba, 2022; Malézieux et al., 2021). This literature begs questions as to the potential benefits, in a latent variable setting, of unrolling the iterations of a sampler (i.e. stochastic solver), and differentiating through them. **Contributions** We introduce _probabilistic unrolling_, a computational framework that accelerates maximum likelihood estimation for large-scale, high-dimensional LGMs. Our method provides a way to run gradient EM without matrix inversions. Specifically, we design iterative linear solvers to yield the probabilistic quantities needed by the EM algorithm (i.e. posterior means and covariance samples). Our method reduces the complexity of gradient EM from a cubic function of the latent dimension to a quadratic function in the general case, and a linear function in special cases.
We theoretically analyze the faithfulness of probabilistic unrolling to gradient EM when encountering two sources of error: (a) the _statistical error_ from using a finite number of covariance samples, and (b) the _optimization error_ from stopping the solver before convergence. We provide bounds for both of these factors, producing insights on how to pick the number of samples and the number of solver iterations. Finally, we show that our method can further improve its approximation to the true EM gradient by backpropagating through the unrolled iterations of the solver. Probabilistic unrolling can be viewed as training a recurrent network in which each layer applies a matrix operation from the unrolled linear solver. We implement this highly structured architecture in modern deep learning frameworks to further benefit from GPU acceleration. We perform several experiments with simulated and real data, showing that probabilistic unrolling can fit LGMs of practical interest up to \(70\) times faster than gradient EM. Our code is available at [https://github.com/al5250/prob-unroll](https://github.com/al5250/prob-unroll). ## 2 Background: Latent Gaussian Model Let \(\{\mathbf{y}^{(n)}\}_{n=1}^{N}\) denote \(N\) i.i.d. observations, each associated with a latent variable \(\mathbf{z}^{(n)}\). In a LGM, the _prior_ on each latent variable and _likelihood_ (i.e. conditional distribution) of each observation both follow Gaussian distributions, \[\mathbf{z}^{(n)}|\mathbf{\theta} \sim\mathcal{N}(\mathbf{\nu}_{\mathbf{\theta}},\mathbf{\Gamma}_{\mathbf{\theta}}^ {-1}), \tag{1}\] \[\mathbf{y}^{(n)}|\mathbf{z}^{(n)},\mathbf{\theta} \sim\mathcal{N}(\mathbf{\Phi}_{\mathbf{\theta}}\mathbf{z}^{(n)}+\mathbf{\eta}_{ \mathbf{\theta}},\mathbf{\Psi}_{\mathbf{\theta}}^{-1}),\quad n=1,\ldots,N.\] The prior and likelihood depend on a set of _canonical parameters_\((\mathbf{\nu}_{\mathbf{\theta}}\in\mathbb{R}^{D},\mathbf{\Gamma}_{\mathbf{\theta}}\in \mathbb{R}^{D\times D},\mathbf{\Phi}_{\mathbf{\theta}}\in\mathbb{R}^{M\times D},\mathbf{ \eta}_{\mathbf{\theta}}\in\mathbb{R}^{M}\), and a diagonal matrix \(\mathbf{\Psi}_{\mathbf{\theta}}\in\mathbb{R}^{M\times M}\)) that form the means and covariances of the Gaussian distributions. The canonical parameters are themselves functions of the model's _free parameters_\(\mathbf{\theta}\), which are individual values that can be learned through maximum likelihood estimation. **Examples** The LGM (1) generalizes many models within statistics and machine learning. Some famous examples include (a) _factor analysis_, a probabilistic generalization of PCA (Basilevsky, 2009), (b) _sparse Bayesian learning_, a Bayesian approach to compressed sensing (Wipf & Rao, 2004), and (c) _state-space models_, one of the most popular class of probabilistic time series models (Durbin & Koopman, 2012). With the advent of deep learning, the LGM class has broadened to include complex, non-linear structures such as (d) _neural linear models_, i.e. neural networks whose trainable weights correspond to free parameters (Ober & Rasmussen, 2019). For each of these models (and others), we work out the definition of free parameters \(\mathbf{\theta}\) and how they map to the canonical parameters in Appendix A. **Missing Data** In many applications of LGMs, \(\mathbf{y}^{(n)}\) may have missing values, i.e. we may not observe all its entries. 
To account for missing data, we assume that for each \(n\), we observe \(\mathbf{\tilde{y}}^{(n)}=\mathbf{\Omega}^{(n)}\mathbf{y}^{(n)}\), where the mask \(\mathbf{\Omega}^{(n)}\in\mathbb{R}^{M_{n}\times M}\) is a row-wise subset of the \(M\times M\) identity matrix. **EM Inference** To fit the parameters \(\mathbf{\theta}\in\Theta\) to data \(\mathbf{\tilde{y}}^{(1)},\ldots,\mathbf{\tilde{y}}^{(N)}\), we perform maximum likelihood estimation or, equivalently, minimize the negative log-likelihood, \[\mathcal{L}(\mathbf{\theta}):= \frac{1}{N}\sum_{n=1}^{N}-\log p(\mathbf{\tilde{y}}^{(n)}|\mathbf{\theta}) \tag{2}\] \[= \frac{1}{N}\sum_{n=1}^{N}-\log\int p(\mathbf{\tilde{y}}^{(n)}|\mathbf{z}^ {(n)},\mathbf{\theta})p(\mathbf{z}^{(n)}|\mathbf{\theta})d\mathbf{z}^{(n)}.\] Due to the latent variable \(\mathbf{z}^{(n)}\), one common approach to minimizing (2) is to use the _expectation-maximization_ (EM) algorithm (Dempster et al., 1977). EM revolves around the \(\mathcal{Q}\)-function, which is defined for any \(\{\mathbf{\theta}_{1},\mathbf{\theta}_{2}\}\in\Theta\times\Theta\) as \[\mathcal{Q}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2}) :=\frac{1}{N}\sum_{n=1}^{N}q^{(n)}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2 }), \tag{3}\] \[q^{(n)}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2}) :=\mathbb{E}_{p(\mathbf{z}^{(n)}|\mathbf{\tilde{y}}^{(n)},\mathbf{\theta}_{2 })}[-\log p(\mathbf{z}^{(n)},\mathbf{\tilde{y}}^{(n)}|\mathbf{\theta}_{1})].\] The \(\mathcal{Q}\)-function is called the _expected complete-data negative log-likelihood_ because it averages the negative log-likelihood of the observed data \(\mathbf{y}^{(n)}\) and the unobserved data \(\mathbf{z}^{(n)}\) over all possible realizations of \(\mathbf{z}^{(n)}\)(Bishop & Nasrabadi, 2006, Ch. 9). EM iterations repeatedly alternate between constructing \(\mathcal{Q}\) and minimizing it to make progress on \(\mathcal{L}\): Given a current solution \(\mathbf{\theta}^{\text{old}}\), the _E_-step computes the posterior distribution \(p(\mathbf{z}^{(n)}|\mathbf{\tilde{y}}^{(n)},\mathbf{\theta}^{\text{old}})\) to form the function \(\mathcal{Q}(\mathbf{\theta}|\mathbf{\theta}^{\text{old}})\), defined for all \(\mathbf{\theta}\in\Theta\). The \(M\)-step then finds a new solution \(\mathbf{\theta}^{\text{new}}\) such that \(\mathcal{Q}(\mathbf{\theta}^{\text{new}}|\mathbf{\theta}^{\text{old}})\leq\mathcal{Q} (\mathbf{\theta}^{\text{old}}|\mathbf{\theta}^{\text{old}})\). This guarantees that \(\mathcal{L}(\mathbf{\theta}^{\text{new}})\leq\mathcal{L}(\mathbf{\theta}^{\text{old}})\). Variants of EM differ in how they implement the \(M\)-step. _Classical EM_(Dempster et al., 1977) solves an optimization problem, i.e. \(\mathbf{\theta}^{\text{new}}:=\arg\min_{\mathbf{\theta}\in\Theta}\mathcal{Q}(\mathbf{ \theta}|\mathbf{\theta}^{\text{old}})\). We focus on a computationally-simpler alternative called _gradient EM_(Lange, 1995; Balakrishnan et al., 2017), \[\mathbf{\theta}^{\text{new}}:=\mathbf{\theta}^{\text{old}}-\alpha\cdot\nabla_{1} \mathcal{Q}(\mathbf{\theta}^{\text{old}}|\mathbf{\theta}^{\text{old}}), \tag{4}\] where \(\alpha\in\mathbb{R}\) is the step size and \(\nabla_{1}\mathcal{Q}\) means the gradient with respect to the first argument of \(\mathcal{Q}\), as defined in (3). **EM for the LGM** For latent Gaussian models, \(\mathcal{Q}\) and its gradient are computable in closed-form. 
Each \(q^{(n)}\) in (3) simplifies to (dropping the index \(n\) for convenience): \[q(\mathbf{\theta}_{1}|\mathbf{\theta}_{2})=\frac{1}{2}\mathbf{\mu}_{\mathbf{\theta}_{2}}^{\top}\mathbf{A}_{\mathbf{\theta}_{1}}\mathbf{\mu}_{\mathbf{\theta}_{2}}-\mathbf{b}_{\mathbf{\theta}_{1}}^{\top}\mathbf{\mu}_{\mathbf{\theta}_{2}}+\frac{1}{2}\text{Tr}(\mathbf{A}_{\mathbf{\theta}_{1}}\mathbf{\Sigma}_{\mathbf{\theta}_{2}})+c_{\mathbf{\theta}_{1}}, \tag{5}\] where, for all \(\mathbf{\theta}\in\Theta\), we define the quantities \[\mathbf{A}_{\mathbf{\theta}}:=\mathbf{\Gamma}_{\mathbf{\theta}}+\mathbf{\Phi}_{\mathbf{\theta}}^{\top}\mathbf{\Omega}^{\top}\mathbf{\Omega}\mathbf{\Psi}_{\mathbf{\theta}}\mathbf{\Omega}^{\top}\mathbf{\Omega}\mathbf{\Phi}_{\mathbf{\theta}}, \tag{6}\] \[\mathbf{b}_{\mathbf{\theta}}:=\mathbf{\Gamma}_{\mathbf{\theta}}\mathbf{\nu}_{\mathbf{\theta}}+\mathbf{\Phi}_{\mathbf{\theta}}^{\top}\mathbf{\Omega}^{\top}\mathbf{\Omega}\mathbf{\Psi}_{\mathbf{\theta}}\mathbf{\Omega}^{\top}(\mathbf{\tilde{y}}-\mathbf{\Omega}\mathbf{\eta}_{\mathbf{\theta}}),\] \[c_{\mathbf{\theta}}:=\tfrac{1}{2}(\mathbf{\tilde{y}}-\mathbf{\Omega}\mathbf{\eta}_{\mathbf{\theta}})^{\top}\mathbf{\Omega}\mathbf{\Psi}_{\mathbf{\theta}}\mathbf{\Omega}^{\top}(\mathbf{\tilde{y}}-\mathbf{\Omega}\mathbf{\eta}_{\mathbf{\theta}})+\tfrac{1}{2}\mathbf{\nu}_{\mathbf{\theta}}^{\top}\mathbf{\Gamma}_{\mathbf{\theta}}\mathbf{\nu}_{\mathbf{\theta}}-\tfrac{1}{2}\log\det\mathbf{\Omega}\mathbf{\Psi}_{\mathbf{\theta}}\mathbf{\Omega}^{\top}-\tfrac{1}{2}\log\det\mathbf{\Gamma}_{\mathbf{\theta}},\] and the posterior \(p(\mathbf{z}|\mathbf{\tilde{y}},\mathbf{\theta})\sim\mathcal{N}(\mathbf{\mu}_{\mathbf{\theta}},\mathbf{\Sigma}_{\mathbf{\theta}})\) is given by \[\mathbf{\mu}_{\mathbf{\theta}}:=\mathbf{\Sigma}_{\mathbf{\theta}}\mathbf{b}_{\mathbf{\theta}},\qquad\qquad\mathbf{\Sigma}_{\mathbf{\theta}}:=\mathbf{A}_{\mathbf{\theta}}^{-1}. \tag{7}\] The derivation for equations (5)-(7) is given in Appendix B. **Computational Challenges** Gradient EM involves computing the gradient of (5) (which we call the _exact gradient_), \[\mathbf{g}^{\star}(\mathbf{\theta}):=\nabla_{1}q(\mathbf{\theta}|\mathbf{\theta}). \tag{8}\] Since \(\mathbf{g}^{\star}(\mathbf{\theta})\) depends on the posterior moments \((\mathbf{\mu}_{\mathbf{\theta}},\mathbf{\Sigma}_{\mathbf{\theta}})\), it requires inverting a large matrix of size \(D\times D\). This has time cost \(\mathcal{O}(D^{3})\) and storage cost \(\mathcal{O}(D^{2})\), which becomes prohibitive for large \(D\). Furthermore, for \(N\) different data vectors, we need to compute \(N\) posterior moments \((\mathbf{\mu}_{\mathbf{\theta}}^{(n)},\mathbf{\Sigma}_{\mathbf{\theta}}^{(n)})\), which requires \(N\) separate matrix inversions. We now arrive at the main goal of the paper: In the ensuing sections, we introduce a computational framework called _probabilistic unrolling_ that can provably accelerate gradient EM by avoiding explicit matrix inversions. This allows us to fit latent Gaussian models at substantially greater scale in high dimensions \(D\) and for large dataset sizes \(N\). ## 3 Method: Probabilistic Unrolling Probabilistic unrolling circumvents matrix inversion by iteratively solving multiple linear systems in parallel. We design the systems to perform posterior inference, i.e. the solutions are the posterior mean \(\mathbf{\mu}_{\mathbf{\theta}}\) and covariance samples distributed as \(\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{\mathbf{\theta}})\).
We use these quantities to estimate the EM objective (5) and its gradient \(\mathbf{g}^{\star}(\mathbf{\theta})\) (8). This process requires less time and memory than computing \(\mathbf{g}^{\star}(\mathbf{\theta})\) directly. We also show that backpropagating through the linear solvers further improves our estimation of \(\mathbf{g}^{\star}(\mathbf{\theta})\). From a deep learning perspective, the overall method looks like a recurrent network (Fig. 1). We can view the unrolled sequence of solver iterations as a _recurrent encoder_, with weights \(\mathbf{\theta}\), that takes the observed data \(\mathbf{\tilde{y}}\) and refines _hidden states_ representing the distribution \(p(\mathbf{z}|\mathbf{\tilde{y}},\mathbf{\theta})\). The hidden states are then passed through an _output layer_, also parameterized by \(\mathbf{\theta}\), to evaluate the loss (5). Training this network is equivalent to running gradient EM for the LGM. ### Monte Carlo Gradient EM In high-dimensional settings, inverting a matrix to compute \(\mathbf{\Sigma}_{\mathbf{\theta}}\) is the main bottleneck of (5). The first step of our method replaces the trace term containing \(\mathbf{\Sigma}_{\mathbf{\theta}}\) with an unbiased estimator. Given any square matrix \(\mathbf{A}\) and a sample \(\mathbf{\sigma}_{\mathbf{\theta}}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{\mathbf{\theta}})\), it follows that \(\mathbb{E}[\mathbf{\sigma}_{\mathbf{\theta}}^{\top}\mathbf{A}\mathbf{\sigma}_{\mathbf{\theta}} ]=\text{Tr}(\mathbf{A}\mathbf{\Sigma}_{\mathbf{\theta}})\)(Skilling, 1989; Hutchinson, 1989). Using \(K>1\) independent samples \(\mathbf{\sigma}_{1,\mathbf{\theta}},\dots,\mathbf{\sigma}_{K,\mathbf{\theta}}\sim\mathcal{N}( \mathbf{0},\mathbf{\Sigma}_{\mathbf{\theta}})\) (to reduce variance) leads to the following approximation of (5), \[q^{\#}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2}):= \frac{1}{2}\mathbf{\mu}_{\mathbf{\theta}_{2}}^{\top}\mathbf{A}_{\mathbf{ \theta}_{1}}\mathbf{\mu}_{\mathbf{\theta}_{2}}-\mathbf{b}_{\mathbf{\theta}_{1}}^{\top}\mathbf{\mu} _{\mathbf{\theta}_{2}} \tag{9}\] \[\qquad+\frac{1}{2K}\sum_{k=1}^{K}\mathbf{\sigma}_{k,\mathbf{\theta}_{2}}^ {\top}\mathbf{A}_{\mathbf{\theta}_{1}}\mathbf{\sigma}_{k,\mathbf{\theta}_{2}}+c_{\mathbf{ \theta}_{1}}.\] Eq. (9) satisfies \(\mathbb{E}[q^{\#}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2})]=q(\mathbf{\theta}_{1}|\mathbf{ \theta}_{2})\), where the expectation is taken with respect to \(\mathbf{\sigma}_{1,\mathbf{\theta}},\dots,\mathbf{\sigma}_{K,\mathbf{\theta}}\). We now define the _Monte Carlo gradient_ \[\mathbf{g}^{\#}(\mathbf{\theta}):=\nabla_{1}q^{\#}(\mathbf{\theta}|\mathbf{\theta}), \tag{10}\] which can take the place of \(\mathbf{g}^{\star}(\mathbf{\theta})\) for updating \(\mathbf{\theta}\) in gradient EM. The estimator satisfies \(\mathbb{E}[\mathbf{g}^{\#}(\mathbf{\theta})]=\mathbf{g}^{\star}(\mathbf{\theta})\). **Constructing Samples** The question remains as to how we draw each sample \(\mathbf{\sigma}_{k,\mathbf{\theta}}\). Consider independent random vectors \(\mathbf{\xi}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{\Gamma}_{\mathbf{\theta}})\) and \(\mathbf{\zeta}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{\Psi}_{\mathbf{\theta}})\), and let \[\mathbf{\delta}_{k}:=\mathbf{\xi}_{k}+\mathbf{\Phi}_{\mathbf{\theta}}^{\top}\mathbf{\Omega}^{\top} \mathbf{\Omega}\mathbf{\zeta}_{k}. 
\tag{11}\] It follows from properties of Gaussian random vectors that \(\mathbf{\delta}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{A}_{\mathbf{\theta}})\), where \(\mathbf{A}_{\mathbf{\theta}}\) is defined in (6). Then, we let \[\mathbf{\sigma}_{k,\mathbf{\theta}}:=\mathbf{\Sigma}_{\mathbf{\theta}}\mathbf{\delta}_{k},\quad k=1,\dots,K. \tag{12}\] As a result, \(\mathbf{\sigma}_{k,\mathbf{\theta}}\) has covariance \(\mathbf{\Sigma}_{\mathbf{\theta}}\mathbf{A}_{\mathbf{\theta}}\mathbf{\Sigma}_{\mathbf{\theta}}=\bm {\Sigma}_{\mathbf{\theta}}\). ### Linear Systems and Iterative Solvers Although the large covariance matrix \(\mathbf{\Sigma}_{\mathbf{\theta}}\) is no longer explicitly written in the new objective (9), it still appears in the definitions for \(\mathbf{\mu}_{\mathbf{\theta}}\) and \(\mathbf{\sigma}_{k,\mathbf{\theta}}\) in (7) and (12), respectively. In this section, we show how to obtain \(\mathbf{\mu}_{\mathbf{\theta}},\mathbf{\sigma}_{k,\mathbf{\theta}}\)_without_ explicitly forming the covariance matrix. First, we cast \(\mathbf{\mu}_{\mathbf{\theta}}\) and \(\mathbf{\sigma}_{k,\mathbf{\theta}}\) as the solutions to linear systems, \[\mathbf{A}_{\mathbf{\theta}}\mathbf{\mu}_{\mathbf{\theta}}=\mathbf{b}_{\mathbf{\theta}},\quad \mathbf{A}_{\mathbf{\theta}}\mathbf{\sigma}_{k,\mathbf{\theta}}=\mathbf{\delta}_{k},\quad k=1, \dots,K, \tag{13}\] where \(\mathbf{A}_{\mathbf{\theta}}=\mathbf{\Sigma}_{\mathbf{\theta}}^{-1}\) is defined in (6). Then, we solve (13) using an _iterative linear solver_(Saad, 2003). For a system \(\mathbf{A}\mathbf{x}=\mathbf{b}\), iterative solvers refine a solution \(\mathbf{x}^{\langle i\rangle}\) over iterations \(i=1,\dots,I\) until \(\mathbf{x}^{\langle I\rangle}\approx\mathbf{A}^{-1}\mathbf{b}\). At iteration \(i\), \[\mathbf{x}^{\langle i+1\rangle}:=\mathbf{x}^{\langle i\rangle}+\mathbf{p}^{\langle i\rangle}, \tag{14}\] where \(\mathbf{p}^{(i)}\) is the search direction. Different solvers vary in how they construct \(\mathbf{p}^{(i)}\). Examples of popular solvers include _gradient descent_, _steepest descent_, and _conjugate gradient_(Saad, 2003), which we review in Appendix D. ### Gradients from Truncated Linear Solvers High-dimensional latent spaces \(D\) may require a large number of iterations \(I\) (hence a high computational cost) to obtain exact solutions. Thus, in practice, it is desirable to run the solver for small \(I\), which leads to approximations \((\mathbf{\mu}_{\mathbf{\theta}}^{(I)},\mathbf{\sigma}_{k,\mathbf{\theta}}^{(I)})\) of the true quantities \((\mathbf{\mu}_{\mathbf{\theta}},\mathbf{\sigma}_{k,\mathbf{\theta}})\). This section proposes two ways to obtain an approximate EM gradient from these partial solutions \((\mathbf{\mu}_{\mathbf{\theta}}^{(I)},\mathbf{\sigma}_{k,\mathbf{\theta}}^{(I)})\). We defer a theoretical analysis of the gradient error to Section 5. First, we substitute \((\mathbf{\mu}_{\mathbf{\theta}}^{(I)},\mathbf{\sigma}_{k,\mathbf{\theta}}^{(I)})\) for \((\mathbf{\mu}_{\mathbf{\theta}},\mathbf{\sigma}_{k,\mathbf{\theta}})\) in (9), i.e. 
\[q^{(I)}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2}):= \frac{1}{2}(\mathbf{\mu}_{\mathbf{\theta}_{2}}^{(I)})^{\top}\mathbf{A}_{\mathbf{\theta}_{1}}\mathbf{\mu}_{\mathbf{\theta}_{2}}^{(I)}-\mathbf{b}_{\mathbf{\theta}_{1}}^{\top}\mathbf{\mu}_{\mathbf{\theta}_{2}}^{(I)} \tag{15}\] \[\quad+\frac{1}{2K}\sum_{k=1}^{K}(\mathbf{\sigma}_{k,\mathbf{\theta}_{2}}^{(I)})^{\top}\mathbf{A}_{\mathbf{\theta}_{1}}\mathbf{\sigma}_{k,\mathbf{\theta}_{2}}^{(I)}+c_{\mathbf{\theta}_{1}},\] which satisfies \(\lim_{I\to\infty}q^{(I)}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2})=q^{\#}(\mathbf{\theta}_{1}|\mathbf{\theta}_{2})\). **Option 1: Output Gradient** We can take the gradient of (15) in a manner similar to (10) to obtain the _output gradient_ \[\widehat{\mathbf{g}}^{(I)}(\mathbf{\theta}):=\nabla_{1}q^{(I)}(\mathbf{\theta}|\mathbf{\theta}), \tag{16}\] which satisfies \(\lim_{I\to\infty}\widehat{\mathbf{g}}^{(I)}(\mathbf{\theta})=\mathbf{g}^{\#}(\mathbf{\theta})\). We interpret this gradient as backpropagating through only the output layer of the architecture in Fig. 1, hence the terminology. **Option 2: Network Gradient** Since the inputs to the output layer \((\mathbf{\mu}_{\mathbf{\theta}}^{(I)},\mathbf{\sigma}_{k,\mathbf{\theta}}^{(I)})\) are themselves functions of the parameters \(\mathbf{\theta}\), a natural question arises as to the benefits of additionally propagating the gradient through these quantities (and the linear solver). This leads to the _network gradient_ \[\widetilde{\mathbf{g}}^{(I)}(\mathbf{\theta}):=\frac{\partial}{\partial\mathbf{\theta}}\left[q^{(I)}(\mathbf{\theta}|\mathbf{\theta})-\frac{1}{K}\sum_{k=1}^{K}\mathbf{\delta}_{k}^{\top}\mathbf{\sigma}_{k,\mathbf{\theta}}^{(I)}\right], \tag{17}\] which backpropagates through the whole architecture in Fig. 1. There are two changes that (17) makes to (16): (a) the use of \(\frac{\partial}{\partial\mathbf{\theta}}\) instead of \(\nabla_{1}\) means that (17) differentiates with respect to both variables in (15) (not just the first argument); (b) (17) has an extra term with \(\mathbf{\delta}_{k}\), which is absent from (16) but is necessary in (17) to ensure \(\lim_{I\to\infty}\widetilde{\mathbf{g}}^{(I)}(\mathbf{\theta})=\mathbf{g}^{\#}(\mathbf{\theta})\) (short proof in Appendix C; longer proof in Appendix E.3). In Section 5.2, we will show that compared to the output gradient \(\widehat{\mathbf{g}}^{(I)}\), the network gradient \(\widetilde{\mathbf{g}}^{(I)}\) exhibits a "super-efficiency" phenomenon (Ablin et al., 2020; Tolooshams and Ba, 2022), which means that it converges faster to \(\mathbf{g}^{\#}\). ### Full Algorithm The probabilistic unrolling algorithm is given in Algorithm 1. The LinearSolver step depends on the particular choice of solver; options include gradient descent, steepest descent, and conjugate gradient. In addition to circumventing matrix inversion, probabilistic unrolling provides several computational benefits over EM, which we explain below. _Covariance-Free Computation._ The iterative solvers eliminate the need to explicitly form the \(D\times D\) covariance matrix \(\mathbf{\Sigma}_{\mathbf{\theta}}\) (or even its inverse \(\mathbf{A}_{\mathbf{\theta}}\)). At each iteration \(i\), a linear solver simply needs to compute matrix-vector products of the form \(\mathbf{A}_{\mathbf{\theta}}\mathbf{v}\), for any \(\mathbf{v}\in\mathbb{R}^{D}\), efficiently.
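To make the covariance-free computation concrete, below is a minimal matrix-free conjugate gradient sketch in PyTorch (illustrative only, not the implementation used for the experiments): the solver touches \(\mathbf{A}_{\mathbf{\theta}}\) only through a `matvec` closure, and the operator in the toy usage is a generic positive definite stand-in rather than the exact structure of (6).

```python
import torch

def conjugate_gradient(matvec, b, num_iters=30, tol=1e-8):
    """Solve A x = b given only a matrix-vector product `matvec(v) = A v`.

    The D x D matrix A (e.g. the posterior inverse covariance A_theta) is
    never formed explicitly; only O(D) memory is needed for the iterates.
    """
    x = torch.zeros_like(b)
    r = b - matvec(x)            # residual
    p = r.clone()                # search direction
    rs_old = r.dot(r)
    for _ in range(num_iters):
        Ap = matvec(p)
        alpha = rs_old / p.dot(Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy usage with a stand-in positive definite operator; in the paper, A_theta
# has the specific prior-plus-masked-likelihood structure of Eq. (6).
D, M = 500, 100
gamma_diag = torch.rand(D) + 1.0        # hypothetical diagonal term
Phi = torch.randn(M, D) / M ** 0.5      # hypothetical low-dimensional operator
psi = 2.0                               # hypothetical scalar weight

def matvec(v):
    return gamma_diag * v + psi * (Phi.T @ (Phi @ v))

b = torch.randn(D)
mu = conjugate_gradient(matvec, b)              # approximates A^{-1} b
print(torch.linalg.norm(matvec(mu) - b))        # small residual
```

Swapping in steepest descent or preconditioned conjugate gradient only changes how the search direction and step size are formed; the matrix-free interface stays the same.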
For an LGM, the matrix \(\mathbf{A}_{\mathbf{\theta}}\) is a highly-structured function of its canonical parameters and the data mask \(\mathbf{\Omega}\), as shown in (6). _Exploiting LGM Structure._ In many cases, the canonical parameters of the LGM exhibit additional structure, such as diagonal, Toeplitz, low rank, and sparse structure, to name a few examples. This can significantly reduce the computational and storage costs of each iteration of the linear solver. For example, in applications of sparse Bayesian learning (Lin et al., 2022), \(\mathbf{\Phi}_{\mathbf{\theta}}\) and its transpose often arise as Fourier-like operators. Efficient algorithms, in both computation and storage, exist for applying such operators to vectors. For a single linear system, the time cost of the solver is \(\mathcal{O}(I\tau_{\mathbf{\theta}})\), where \(I\) is the number of iterations and \(\tau_{\mathbf{\theta}}\) is the time needed to compute the matrix-vector multiplication \(\mathbf{A}_{\mathbf{\theta}}\mathbf{v}\). The space cost is \(\mathcal{O}(D+\omega_{\mathbf{\theta}})\), where \(\omega_{\mathbf{\theta}}\) is the space needed to store the canonical parameters. _Amenability to Parallelization._ Iterative solvers are simple and straightforward to parallelize for solving multiple linear systems (e.g. in (13)) through \(\mathbf{A}_{\mathbf{\theta}}\mathbf{X}_{\mathbf{\theta}}=\mathbf{B}_{\mathbf{\theta}}\), where \[\mathbf{X}_{\mathbf{\theta}}:=[\mathbf{\mu}_{\mathbf{\theta}}|\mathbf{\sigma}_{1,\mathbf{\theta}}|\cdots|\mathbf{\sigma}_{K,\mathbf{\theta}}],\ \ \ \mathbf{B}_{\mathbf{\theta}}:=[\mathbf{b}_{\mathbf{\theta}}|\mathbf{\delta}_{1}|\cdots|\mathbf{\delta}_{K}].\] For example, Gardner et al. (2018) and Lin et al. (2022) show how to parallelize the preconditioned conjugate gradient algorithm to solve for \(\mathbf{X}_{\mathbf{\theta}}\). They demonstrate that matrix-based parallelization is especially suitable for multi-core hardware, such as graphics processing units. In this work, we go a step further by parallelizing the solver across data points \(\{\widetilde{\mathbf{y}}^{(n)}\}_{n=1}^{N}\) to obtain solutions \(\{\mathbf{X}_{\mathbf{\theta}}^{(n)}\}_{n=1}^{N}\) for every \(n\). By (6), the operators \(\{\mathbf{A}_{\mathbf{\theta}}^{(n)}\}_{n=1}^{N}\) only differ in the masks \(\{\mathbf{\Omega}^{(n)}\}_{n=1}^{N}\). Thus, the total storage needed for performing \(NK\) matrix-vector multiplications with \(\{\mathbf{A}_{\mathbf{\theta}}^{(n)}\}_{n=1}^{N}\) is only \(\mathcal{O}(NKD+\omega_{\mathbf{\theta}})\) (where \(\omega_{\mathbf{\theta}}\) is at most \(\mathcal{O}(D^{2})\)) even though the matrices \(\{\mathbf{A}_{\mathbf{\theta}}^{(n)}\}_{n=1}^{N}\) have \(\mathcal{O}(ND^{2})\) entries. We compare the computational complexities of gradient EM using matrix inversion and probabilistic unrolling in Table 1. The additional factor of \(I\) in the space complexity of the network gradient comes from the need to store all \(I\) intermediate states of the solver for backpropagation. ## 4 Related Work **Efficient Learning with Linear Solvers.** Using iterative solvers to circumvent matrix inversion is a widely-known technique within numerical linear algebra (Saad, 2003; Halko et al., 2011). Recently, solvers such as the Lanczos algorithm (Lanczos, 1950) and conjugate gradient (CG) (Hestenes and Stiefel, 1952) have become popular for accelerating gradient-based learning for Gaussian processes (Dong et al., 2017; Gardner et al., 2018; Wang et al., 2019; Wenger et al., 2022). In addition, Lin et al.
(2022b) and Lin et al. (2022c) used CG to accelerate the classical EM algorithm for sparse Bayesian learning. Many of these works consider when the number of data vectors \(N=1\), as opposed to the setting of the LGM where \(N\) can be large. They also do not consider the idea of backpropagation through the solver. **Backpropagating through Optimization Algorithms.** Automatic differentiation (or "backpropagation") (Baydin et al., 2018) has been widely used and studied in machine learning (Domke, 2012; Deledalle et al., 2014; Shaban et al., 2019). Domke (2012) studied truncated backpropagation as a replacement for implicit differentiation (e.g. Foo et al., 2007; Blondel et al., 2022; Bertrand et al., 2022) when performing incomplete energy minimization. Shaban et al. (2019) studied the use of truncated backpropagation for parameter estimation using unrolled networks. Backpropagating through an unrolled parameter estimation mapping has also been applied to hyperparameter optimization (Maclaurin et al., 2015; Franceschi et al., 2018), and constructing generative adversarial networks (Metz et al., 2016). Ablin et al. (2020) theoretically studied how backpropagation can accelerate gradient estimation for bilevel (i.e. min-min) optimization problems, in the setting where the inner and outer objectives are the same, and when the inner optimization algorithm is gradient descent. Moreover, Tolooshams and Ba (2022); Malezieux et al. (2021) studied the acceleration phenomenon for the sparse coding problem. This paper differs from the aforementioned prior work as follows: (a) probabilistic unrolling is designed for the specific setting of the LGM (as opposed to the general energy minimization problem of Domke (2012)), and contains a novel Monte Carlo sampling step to avoid inversion of the covariance matrix, (b) the fact that our inner optimization originates from this sampling step necessitates statistical considerations and analyses absent from previous work, (c) we extend the result of Ablin et al. (2020), showing that backpropagation can accelerate gradient estimation even in cases in which the inner and outer objectives are different, and (d) we provide gradient convergence analysis for steepest descent (an algorithm that is more sophisticated than gradient descent, requiring analysis of backpropagation through the step size). **Unrolled Networks.** Our interpretation of unrolled solvers as a deep neural network is known as unrolled/unfolded networks in the literature. Gregor and LeCun (2010) introduced this approach for solving the sparse coding problem. Prior works designed and studied deep unrolled networks (Chen et al., 2018; Ablin et al., 2019). Moreover, unrolled networks have found advantages in various applications such as compressed sensing MRI (Sun et al., 2016), Poisson image denoising (Tolooshams et al., 2020), and pattern learning from physiological data (Malezieux et al., 2021). **Variational EM and Variational Auto-Encoders.** Variational inference (VI) is a popular approach for approximating posterior distributions with simpler surrogates. Using VI for the E-Step of EM leads to the _variational EM (VEM)_ algorithm (Murphy, 2023, Sec. 10.3.5), which is a potential alternative to probabilistic unrolling for accelerating EM inference. VEM is more flexible than probabilistic unrolling because it can perform inference for models outside the LGM family. 
However, the most common form of VEM learns a "mean-field" approximation to the posterior, which does not model covariance between latent variables and therefore biases the learning process away from the negative log-likelihood objective \(\mathcal{L}(\theta)\) (2) (Lin et al., 2022b); in contrast, probabilistic unrolling captures rich covariance structure using samples from the true posterior and like EM, still optimizes \(\mathcal{L}(\theta)\) as its central objective. The variational auto-encoder (VAE) (Kingma and Welling, 2013) is one of the most widely-used instances of VEM that trains a deep neural network to perform VI. Although VAEs are efficient tools for inference, they (a) require a separate inference network that is different from the generative model, increasing the number of parameters for training, and (b) require custom design of this network's architecture (e.g. layers, activations, etc.). In contrast, the probabilistic unrolling architecture (Fig. 1) is based on an interpretable linear solver that uses the same parameters as the generative model. ## 5 Theoretical Analysis How well probabilistic unrolling approximates the exact EM gradient depends on the number of solver iterations, and the quality of the Monte Carlo approximation. We conduct a theoretical analysis of these two sources of error. We begin by defining population-level quantities for each gradient, \[\mathbf{h}^{\star}:=\frac{1}{N}\sum_{n=1}^{N}\mathbf{g}^{\star,(n)},\qquad\mathbf{h}^{\#}:=\frac{1}{N}\sum_{n=1}^{N}\mathbf{g}^{\#,(n)}, \tag{18}\] \[\widehat{\mathbf{h}}^{(I)}:=\frac{1}{N}\sum_{n=1}^{N}\widehat{\mathbf{g}}^{(I),(n)},\qquad\widetilde{\mathbf{h}}^{(I)}:=\frac{1}{N}\sum_{n=1}^{N}\widetilde{\mathbf{g}}^{(I),(n)},\] where \(\mathbf{h}^{\star}(\mathbf{\theta})=\nabla_{1}\mathcal{Q}(\mathbf{\theta}|\mathbf{\theta})\) is the exact gradient EM update of (4). We denote the approximate gradient after \(I\) iterations of probabilistic unrolling by \(\mathbf{h}^{(I)}\), with variants \(\widehat{\mathbf{h}}^{(I)}\) and \(\widetilde{\mathbf{h}}^{(I)}\) corresponding, respectively, to the output and network gradients defined previously. The quantity of interest is \[\|\mathbf{h}^{\star}-\mathbf{h}^{(I)}\|\leq\underbrace{\|\mathbf{h}^{\star}-\mathbf{h}^{\#}\|}_{\text{statistical error}}+\underbrace{\|\mathbf{h}^{\#}-\mathbf{h}^{(I)}\|}_{\text{optimization error}}, \tag{19}\] which decomposes into two terms. The first term, which we name _statistical error_, comes from approximating \(\mathbf{h}^{\star}\) with Monte Carlo samples. We name the second term _optimization error_: this term captures the error due to performing a finite number \(I\) of iterations of the linear solver. ### Statistical Error Given \(\mathbf{\theta}\in\Theta\), we first bound \(\|\mathbf{h}^{\star}(\mathbf{\theta})-\mathbf{h}^{\#}(\mathbf{\theta})\|_{\infty}\). **Proposition 5.1**.: _Let \(N\) be the number of data points, \(K\) be the number of samples for each data point, and \(L\) be the dimensionality of \(\mathbf{\theta}\)._
For every \(n\in\{1,\dots,N\}\) we define_ \[\mathbf{M}^{(n,\ell)}:=(\mathbf{\Sigma}_{\mathbf{\theta}}^{(n)})^{1/2} \frac{\partial\mathbf{A}_{\mathbf{\theta}}^{(n)}}{\partial\theta_{\ell}}(\mathbf{ \Sigma}_{\mathbf{\theta}}^{(n)})^{1/2}, \tag{20}\] _where \(\mathbf{A}_{\mathbf{\theta}}^{(n)}\) is defined in (6), \(\mathbf{\Sigma}_{\mathbf{\theta}}^{(n)}\) is defined in (7), and \(\frac{\partial\mathbf{A}_{\mathbf{\theta}}^{(n)}}{\partial\theta_{\ell}}\) is the \(D\times D\) matrix of partial derivatives of the entries of \(\mathbf{A}_{\mathbf{\theta}}^{(n)}\) with respect to \(\theta_{\ell}\). Let \(\xi:=\max_{\ell}\max_{n}\|\mathbf{M}^{(n,\ell)}\|_{F}\), where \(\|\cdot\|_{2}\) denotes the spectral norm and \(\|\cdot\|_{F}\) denotes Frobenius norm._ _Then, there is an absolute constant \(C\) such that if the number of Monte Carlo samples \(K\) satisfies_ \[K\geq\frac{\log(4NL)}{C}\max_{\ell}\left(\frac{\max_{n}\|\mathbf{M}^{(n,\ell)} \|_{2}^{2}}{\sum_{n}\|\mathbf{M}^{(n,\ell)}\|_{F}^{2}}\right), \tag{21}\] _it follows that_ \[\Pr\left(\|\mathbf{h}^{\star}-\mathbf{h}^{\#}\|_{\infty}>\xi\sqrt{\frac {\log(4NL)}{CNK}}\right)\leq\frac{1}{N}. \tag{22}\] We give the proof in Appendix E.1. The implication of Prop. 5.1 is that with high probability, \(\mathbf{h}^{\#}\) is close to \(\mathbf{h}^{\star}\). The condition in (21) is a mild Monte Carlo sample size requirement and is satisfied for instance if \(K\geq\frac{\log(4NL)}{C\max(1,\kappa N)}\), where \(\kappa\) is any number such that for all \(\ell,n,n^{\prime}\), \(\frac{\|\mathbf{M}^{(n,\ell)}\|_{2}^{2}}{\|\mathbf{M}^{(n^{\prime},\ell)}\|_{2} ^{2}}\geq\kappa\). ### Optimization Error Next, we bound optimization error \(\|\mathbf{h}^{\#}(\mathbf{\theta})-\mathbf{h}^{(I)}(\mathbf{\theta})\|_{2}\). **Proposition 5.2**.: _Let \(I\) denote the number of linear solver iterations. Then, the output gradient \(\widehat{\mathbf{h}}^{(I)}\) and the network gradient \(\widetilde{\mathbf{h}}^{(I)}\) converge to \(\mathbf{h}^{\#}\) with the following rates:_ \[\|\mathbf{h}^{\#}-\widehat{\mathbf{h}}^{(I)}\|_{2}=\mathcal{O}(\rho^{I}), \quad\|\mathbf{h}^{\#}-\widetilde{\mathbf{h}}^{(I)}\|_{2}=\mathcal{O}(I\rho^{2I}),\] _where \(\rho<1\) is the solver convergence rate. For gradient descent (GD) and steepest descent (SD), these rates are_ \[\rho_{\text{GD}}:=\frac{\iota-1}{\iota},\qquad\qquad\rho_{\text{SD}}:=\frac{ \iota-1}{\iota+1}, \tag{23}\] _where \(\iota\) denotes the condition number (i.e. ratio between largest and smallest eigenvalues) of the matrix \(\mathbf{A}_{\mathbf{\theta}}\) (6)._ From Prop. 5.2, we draw three conclusions: First, both the output gradient \(\widehat{\mathbf{h}}^{(I)}\) and the network gradient \(\widetilde{\mathbf{h}}^{(I)}\) converge to \(\mathbf{h}^{\#}\) as \(I\to\infty\). Second, \(\widetilde{\mathbf{h}}^{(I)}\) achieves asymptotically better estimation of \(\mathbf{h}^{\#}\) (compared to \(\widehat{\mathbf{h}}^{(I)}\)). Third, the results suggest that the error in both gradients can be decreased by the use of solvers that converge faster than gradient descent, e.g., using steepest descent (as shown in Prop. 5.2), or conjugate gradient (CG), which has convergence rate \(\rho_{\text{CG}}=\frac{\sqrt{\iota}-1}{\sqrt{\iota}+1}\)(Shewchuk et al., 1994). The proof of Prop. 5.2 is given in Appendix E.2. It relies on a connection we build between probabilistic unrolling and _bilevel optimization_, i.e. minimizing functions defined as a minimum (Ablin et al., 2020). 
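To see how these rates behave numerically, the following short sketch evaluates \(\rho_{\text{GD}}\), \(\rho_{\text{SD}}\) and \(\rho_{\text{CG}}\) for a hypothetical condition number and compares the two error envelopes of Prop. 5.2; constants in the \(\mathcal{O}(\cdot)\) bounds are omitted, so only the decay with \(I\) is meaningful.

```python
import numpy as np

def solver_rates(cond):
    """Rates from Prop. 5.2 and the conjugate gradient rate quoted in the text."""
    rho_gd = (cond - 1) / cond
    rho_sd = (cond - 1) / (cond + 1)
    rho_cg = (np.sqrt(cond) - 1) / (np.sqrt(cond) + 1)
    return rho_gd, rho_sd, rho_cg

cond = 50.0                                   # hypothetical condition number of A_theta
rho_gd, rho_sd, rho_cg = solver_rates(cond)
print(f"rho_GD={rho_gd:.3f}  rho_SD={rho_sd:.3f}  rho_CG={rho_cg:.3f}")

# Error envelopes (up to constants) for the two gradient options, using the CG rate:
for I in (10, 25, 50):
    output_env = rho_cg ** I                  # output gradient:  O(rho^I)
    network_env = I * rho_cg ** (2 * I)       # network gradient: O(I rho^{2I})
    print(f"I={I:3d}   output ~ {output_env:.2e}   network ~ {network_env:.2e}")
```

For this condition number the network-gradient envelope decays roughly twice as fast in the exponent, which is the super-efficiency effect referred to earlier.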
Probabilistic unrolling (15) is an instance of bilevel optimization in which the outer level optimizes the EM objective by estimating its gradient with respect to parameters \(\mathbf{\theta}\). This gradient is itself dependent on the solutions \(\mathbf{\mu}_{\mathbf{\theta}},\{\mathbf{\sigma}_{k,\mathbf{\theta}}\}_{k=1}^{K}\) of \(K+1\) linear systems, each equivalent to minimizing an inner quadratic function that depends on \(\mathbf{\theta}\). As part of our proof, we introduce the following two lemmas, which may be of broader interest beyond our particular setting of probabilistic unrolling for LGMs. The first result (Lemma 5.3, proof in Appendix E.3) is a general statement on gradient convergence for bilevel optimization problems; it extends Prop. 2.2 of Ablin et al. (2020) to settings in which the outer and inner objectives have different forms. The second result (Lemma 5.4, proof in Appendix E.4) analyzes Jacobian convergence for iterative solvers based on gradient descent and steepest descent. **Lemma 5.3**.: _Consider a bilevel optimization problem with outer objective \(r(\mathbf{\theta},\mathbf{\beta})\) and inner objective \(s(\mathbf{\theta},\mathbf{\beta})\),_ \[\min_{\mathbf{\theta}}\ r(\mathbf{\theta},\mathbf{\beta}^{\#})\quad\text{s.t.}\quad\mathbf{ \beta}^{\#}:=\arg\min_{\mathbf{\beta}}s(\mathbf{\theta},\mathbf{\beta}), \tag{24}\] _in which the gradients \(\{\nabla_{1}r(\mathbf{\theta},\mathbf{\beta}),\nabla_{2}s(\mathbf{\theta},\mathbf{\beta})\}\) and the second derivatives \(\{\nabla_{2}^{2}s(\mathbf{\theta},\mathbf{\beta}),\nabla_{1}^{2}r(\mathbf{\theta},\mathbf{ \beta})\}\) are Lipschitz continuous in \(\mathbf{\beta}\). Let \(\mathbf{g}^{\#}:=\nabla_{1}r(\mathbf{\theta},\mathbf{\beta}^{\#})\) be the desired gradient. Let \(\mathbf{\beta}^{(I)}\) denote an approximation of \(\mathbf{\beta}^{\#}\) obtained from running an iterative (and differentiable) optimizer for \(I\) steps. We use \(\mathbf{\beta}^{(I)}\) to define two approximate gradients: (1) the analytic gradient (called "output gradient" in our work) \(\mathbf{\widehat{g}}^{(I)}:=\nabla_{1}r(\mathbf{\theta},\mathbf{\beta}^{(I)})\) and (2) the automatic gradient (called "network gradient" in our work) \(\mathbf{\widetilde{g}}^{(I)}:=\nabla_{1}r(\mathbf{\theta},\mathbf{\beta}^{(I)})+\frac{ \partial\mathbf{\beta}^{(I)}}{\partial\mathbf{\theta}}\cdot\nabla_{2}s(\mathbf{\theta}, \mathbf{\beta}^{(I)})\). Additionally, define the Jacobians \(\mathbf{J}^{\#}:=\frac{\partial\mathbf{\beta}^{\#}}{\partial\mathbf{\theta}}\) and \(\mathbf{J}^{(I)}:=\frac{\partial\mathbf{\beta}^{(I)}}{\partial\mathbf{\theta}}\), and let \(\mathbf{J}^{(I)}\) be bounded (i.e. \(\|\mathbf{J}^{(I)}\|_{2}\leq J_{M}\)). If the outer and inner objectives share second-order derivatives, i.e. \(\nabla_{12}^{2}r(\mathbf{\theta},\mathbf{\beta}^{\#})=\nabla_{12}^{2}s(\mathbf{\theta}, \mathbf{\beta}^{\#})\), then the analytic and automatic gradients converge at the following rates:_ \[\|\mathbf{\widehat{g}}^{(I)}-\mathbf{g}^{\#}\|_{2}=\mathcal{O}(\|\mathbf{ \beta}^{(I)}-\mathbf{\beta}^{\#}\|_{2}), \tag{25}\] \[\|\mathbf{\widehat{g}}^{(I)}-\mathbf{g}^{\#}\|_{2}=\mathcal{O}(\|\mathbf{ \beta}^{(I)}-\mathbf{\beta}^{\#}\|_{2}\cdot\|\mathbf{J}^{(I)}-\mathbf{J}^{\#}\|_{2}).\] **Lemma 5.4**.: _Given the bilevel optimization problem from Prop. 
5.3, let the inner objective \(s(\mathbf{\theta},\mathbf{\beta}):=\frac{1}{2}\mathbf{\beta}^{\top}\mathbf{A}_{\mathbf{\theta} }\mathbf{\beta}-\mathbf{u}_{\mathbf{\theta}}^{\top}\mathbf{\beta}\) be a strongly convex quadratic function with positive definite \(\mathbf{A}_{\mathbf{\theta}}\). Given \(\mathbf{\theta}\), let \(\mathbf{\beta}^{(I)}:=\textsc{LinearSolver}(\mathbf{A}_{\mathbf{\theta}},\mathbf{u}_{\mathbf{ \theta}},I)\) be the output of an \(I\)-step linear solver used to approximate \(\mathbf{\beta}^{\#}:=\arg\min_{\mathbf{\beta}}s(\mathbf{\theta},\mathbf{\beta})=\mathbf{A}_{ \mathbf{\theta}}^{-1}\mathbf{u}_{\mathbf{\theta}}\). Then, for gradient descent and steepest descent as the linear solver, the Jacobian error is the following function of solver error: \(\|\mathbf{J}^{(I)}-\mathbf{J}^{\#}\|_{2}=\mathcal{O}(I\cdot\|\mathbf{\beta}^{(I)}-\mathbf{ \beta}^{\#}\|_{2})\)._ **Insights on Unrolling Depth** Taken together, Prop. 5.1 and 5.2 offer insights in choosing the number of unrolling iterations \(I\). Since the overall gradient error is the sum of the optimization and statistical errors1, the latter being impervious to \(I\), the results suggest taking \(I\) just large enough to balance the two sources of error. A rough calculation yields \(I\approx C\log(NK)/\log(1/\rho)\) for output gradient and \(I\approx C\log(NK)/\log(1/\rho^{2})\) for network gradient, for some dimension-dependent constant \(C\), where \(\rho\) is the convergence rate of the solver. Footnote 1: Using the fact that the \(\ell_{2}\)-norm is an upper-bound on the \(\ell_{\infty}\)-norm, we can bound the overall error (19) in \(\ell_{\infty}\) norm (with probability \(1-1/N\)) by adding the results of Prop. 5.1 and 5.2. ## 6 Experiments We perform experiments on several LGM applications, ranging from recovering unknown parameters to solving inverse problems to predicting movie ratings. We demonstrate that probabilistic unrolling provides significant scalability over EM, without loss in model performance. In all instances of EM, we use a single gradient step for the M-Step update (i.e. gradient EM). We implement all algorithms in PyTorch and on a single Nvidia T4 GPU with 16 GB RAM. The main hyperparameters for probabilistic unrolling are the number of samples \(K\) and the number of solver iterations \(I\). When solving a linear system \(\mathbf{A}\mathbf{x}=\mathbf{b}\), we let \(I\) be just large enough so that the residual error \(\|\mathbf{b}-\mathbf{A}\mathbf{x}^{(I)}\|_{2}^{2}\) is below some small threshold (i.e. \(10^{-8}\)). We set \(K\) based on our theoretical analysis (21). We keep \(K\) small if either the number of data points \(N\) is large or the number of parameters \(L\) is small; otherwise we increase \(K\). In our experiments, we find that having \(I\) and \(K\) in the range [10, 30] is sufficient even when \(D\) increases to (tens of) thousands of dimensions. ### Parameter Recovery for Noisy AR Models The noisy auto-regressive (AR) model is a time series model with applications in radar (Cayir and Candan, 2021) and biomedical imaging (Luo et al., 2020). 
A noisy AR model of order \(P\) for a time series \(\mathbf{y}:=\{y_{d}\}_{d=1}^{D}\) is written as \[\{z_{1},\dots,z_{P}\}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_{\mathbf{\phi}}), \quad\mathbf{Q}_{\mathbf{\phi}}\in\mathbb{R}^{P}, \tag{26}\] \[z_{d}=\sum_{p=1}^{P}\phi_{p}\cdot z_{d-p}+w_{d},\quad w_{d}\sim \mathcal{N}(0,\kappa),\quad P<d\leq D,\] \[y_{d}=z_{d}+v_{d},\qquad\qquad\qquad v_{d}\sim\mathcal{N}(0,\lambda),\quad 1 \leq d\leq D.\] The initial covariance matrix \(\mathbf{Q}_{\mathbf{\phi}}\) is some function of the AR coefficients \(\mathbf{\phi}:=\{\phi_{1},\dots,\phi_{P}\}\) that ensures stationarity for the latent process (see Appendix F.1.1 for details). The model's free parameters are \(\mathbf{\theta}=\{\mathbf{\phi},\lambda,\kappa\}\). We can write this model as an LGM (1), where \(\mathbf{\nu}_{\mathbf{\theta}}=\mathbf{0},\mathbf{\Phi}_{\mathbf{\theta}}=\mathbf{I},\mathbf{\eta}_{ \mathbf{\theta}}=\mathbf{0}\), \(\mathbf{\Psi}_{\mathbf{\theta}}=\lambda^{-1}\mathbf{I}\), and \(\mathbf{\Gamma}_{\mathbf{\theta}}\) is a function of \(\{\mathbf{\phi},\kappa\}\). Complexity ComparisonUsing matrix inversion, exact-gradient EM will require \(\mathcal{O}(D^{3})\)-time and \(\mathcal{O}(D^{2})\)-space. In comparison, probabilistic unrolling scales with the time \(\tau_{\mathbf{\theta}}\) and space \(\omega_{\mathbf{\theta}}\) needed for matrix-vector multiplication with the posterior inverse-covariance matrix \(\mathbf{A}_{\mathbf{\theta}}\) (6). For the noisy AR model of order \(P\), \(\mathbf{A}_{\mathbf{\theta}}\) is a banded matrix with \(2P+1\) non-zero bands (derivation given in Appendix F.1.1). As a result, \(\tau_{\mathbf{\theta}}=\mathcal{O}(DP+P^{3})\) and \(\omega_{\mathbf{\theta}}=\mathcal{O}(DP+P^{2})\) which is much more efficient than EM.2 Footnote 2: We note that instead of using matrix inversion, we could cast (26) as a state-space model and use a Kalman smoother to run exact-gradient EM in \(\mathcal{O}(DP^{3})\)-time and \(\mathcal{O}(DP^{2})\)-space (see Appendix F.1.2). However, unlike probabilistic unrolling, the Kalman filter is a sequential algorithm and does not parallelize across \(D\). _Setup and Results._ We compare the accuracy and speed of exact-gradient EM and probabilistic unrolling in parameter recovery for noisy AR models of order \(P=5\). First, we randomly sample a set of true parameters \(\{\boldsymbol{\phi}^{\star},\lambda^{\star},\kappa^{\star}\}\), generate \(N=5\) time series according to (26), and randomly mask out 10% of the observations from each time series to create \(\boldsymbol{\tilde{y}}^{(1)},\ldots,\boldsymbol{\tilde{y}}^{(5)}\). Then, we perform maximum likelihood estimation using either gradient EM or probabilistic unrolling to produce parameter estimates \(\{\boldsymbol{\phi},\hat{\lambda},\hat{\kappa}\}\). We measure accuracy using the normalized root-mean-square error (NRMSE) \(r(\boldsymbol{\theta},\boldsymbol{\theta}^{\star}):=\|\boldsymbol{\hat{\theta} }-\boldsymbol{\theta}^{\star}\|_{2}/\|\boldsymbol{\theta}^{\star}\|_{2}\times 1 00\%\). For probabilistic unrolling, we use \(K=10\) Monte Carlo samples, unroll \(I=30\) iterations of the conjugate gradient solver, and use the network gradient. Other details can be found in Appendix F.1.3. We report results for different values of \(D\) in Table 2. Probabilistic unrolling consistently matches the performance of gradient EM, while being up to 47 times faster. 
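The data-generation step of this setup can be sketched as follows (illustrative NumPy, not the experiment code); the coefficient values are arbitrary, and the first \(P\) latents are drawn from a standard normal rather than the stationary covariance \(\mathbf{Q}_{\mathbf{\phi}}\) used in the paper.

```python
import numpy as np

def simulate_noisy_ar(phi, kappa, lam, D, mask_frac=0.1, seed=None):
    """Draw one masked observation from the noisy AR model of Eq. (26).

    Simplification: the first P latents are drawn from N(0, I) instead of the
    stationary covariance Q_phi described in the paper.
    """
    rng = np.random.default_rng(seed)
    P = len(phi)
    z = np.zeros(D)
    z[:P] = rng.standard_normal(P)
    for d in range(P, D):
        # z_d = sum_p phi_p * z_{d-p} + w_d
        z[d] = phi @ z[d - P:d][::-1] + np.sqrt(kappa) * rng.standard_normal()
    y = z + np.sqrt(lam) * rng.standard_normal(D)        # observation noise
    observed = rng.random(D) > mask_frac                  # keep ~90% of entries
    return y[observed], observed, z                       # (y_tilde, mask, truth)

phi_true = np.array([0.5, -0.2, 0.1, 0.05, -0.05])        # hypothetical AR(5) coefficients
y_tilde, mask, z_true = simulate_noisy_ar(phi_true, kappa=0.1, lam=0.5, D=1000, seed=0)
print(y_tilde.shape, mask.mean())
```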
For each \(D\), we report the smaller of the times between EM with matrix inversion and EM with a Kalman smoother (see Appendix F.1.2). Typically, inversion is faster for smaller \(D\) while using the Kalman smoother is faster for larger \(D\). Probabilistic unrolling is faster than both of these for all \(D\). We additionally perform comparisons between probabilistic unrolling and variational EM (as implemented through the variational auto-encoder (VAE) (Kingma & Welling, 2013)) in Appendix F.1.4. ### Bayesian Compressed Sensing of Sparse Signals With applications from radio astronomy (Wiaux et al., 2009) to MRI (Lustig et al., 2008), compressed sensing (CS) is a technique for reconstructing sparse, high-dimensional signals \(\boldsymbol{\tilde{z}}^{(n)}\) from measurements \(\boldsymbol{\tilde{y}}^{(n)}\). _Bayesian compressed sensing_(Ji et al., 2008b; Bilgic et al., 2011; Lin et al., 2021; 2022a) is an approach to CS that employs the sparse Bayesian learning model (Wipf & Rao, 2004) \[\boldsymbol{z}^{(n)} \sim\mathcal{N}(\mathbf{0},\text{diag}(\boldsymbol{\alpha})^{-1 }), n=1,\ldots,N \tag{27}\] \[\boldsymbol{\tilde{y}}^{(n)}|\boldsymbol{z}^{(n)} \sim\mathcal{N}(\boldsymbol{\Phi}^{(n)}\boldsymbol{z}^{(n)}, \beta^{-1}\mathbf{I}), n=1,\ldots,N,\] where each \(\boldsymbol{z}^{(n)}\in\mathbb{R}^{D}\) is an unknown signal, \(\boldsymbol{\tilde{y}}^{(n)}\in\mathbb{R}^{M}\) is a measurement associated of the signal, and \(\boldsymbol{\Phi}^{(n)}\in\mathbb{R}^{M\times D}\) is a so-called measurement matrix. The free parameters \(\boldsymbol{\theta}\) of the model are \(\boldsymbol{\alpha}\in\mathbb{R}^{D}\) and \(\beta\in\mathbb{R}\). When a common sparsity pattern underlies the observations \(\{\boldsymbol{\tilde{y}}^{(n)}\}_{n=1}^{N}\), maximum likelihood estimation will push many of the entries \(\alpha_{m}\) to adopt large values, tending to \(\infty\), and, thus, encouraging sparsity of samples from the posterior \(p(\boldsymbol{z}^{(n)}|\boldsymbol{\tilde{y}}^{(n)},\boldsymbol{\alpha},\beta)\)(Yee & Atchade, 2017). The mean \(\boldsymbol{\mu}^{(n)}\) of each posterior is then used as an estimate for \(\boldsymbol{\tilde{z}}^{(n)}\)(Ji et al., 2008a). _Complexity Comparison._ In several applications (e.g. MRI, astronomy), each \(\boldsymbol{\Phi}^{(n)}=\boldsymbol{\Omega}^{(n)}\boldsymbol{\Phi}\), where \(\boldsymbol{\Phi}\in\mathbb{C}^{D\times D}\) is the Fourier transform and \(\boldsymbol{\Omega}^{(n)}\in\mathbb{R}^{M\times D}\) is a random undersampling mask. Thus, (27) is an instance of the LGM, where \(\boldsymbol{\theta}:=\{\boldsymbol{\alpha},\beta\}\). Using gradient EM to fit \(\boldsymbol{\theta}\) requires \(\mathcal{O}(D^{3})\)-time and \(\mathcal{O}(D^{2})\)-space. On the other hand, probabilistic unrolling scales with the complexity needed to apply \(\mathbf{A}_{\boldsymbol{\theta}}\) (6) to vectors; this is dominated by the Fourier transform \(\boldsymbol{\Phi}\), which only requires \(\mathcal{O}(D\log D)\)-time and \(\mathcal{O}(D)\)-space. _Setup and Results._ We perform CS experiments on NIST (Grother, 1995), a dataset of handwritten digits. For each digit type (i.e. 0 through 9), we sample \(N=10\) images \(\boldsymbol{\tilde{z}}^{(n)}\) of size \(128\times 128\), which are high-dimensional signals with \(D=16{,}384\) pixels. Each image is naturally sparse because most pixels are zero. For each \(\boldsymbol{\tilde{z}}^{(n)}\), we randomly undersample its 2D Fourier transform by 15\(\%\) (i.e. 
\(M=0.15D\)) and add noise to construct the measurement \(\boldsymbol{\tilde{y}}^{(n)}\). Then, we fit a Bayesian compressed sensing model (27) to \(\{\boldsymbol{\tilde{y}}^{(n)}\}_{n=1}^{N}\) to obtain reconstructions \(\{\boldsymbol{\mu}^{(n)}\}_{n=1}^{N}\). We measure success using the NRMSE between \(\boldsymbol{\mu}\) and \(\boldsymbol{\tilde{z}}\), where \(\boldsymbol{\mu},\boldsymbol{\tilde{z}}\in\mathbb{R}^{ND}\) are the concatenations of \(\{\boldsymbol{\mu}^{(n)}\}_{n=1}^{N}\) and the true signals \(\{\boldsymbol{\tilde{z}}^{(n)}\}_{n=1}^{N}\), respectively. For probabilistic unrolling, we use \(K=30\) samples, \(I=25\) iterations of preconditioned conjugate gradient, and the network gradient. More details can be found in Appendix F.2.2. Results averaged over the 10 different digit types are given in Table 3. We find that \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \(D\) & \(r(\boldsymbol{\phi}^{\text{EM}},\boldsymbol{\phi}^{\star})\) & \(r(\boldsymbol{\phi}^{\text{PU}},\boldsymbol{\phi}^{\star})\) & \(r(\kappa^{\text{KM}},\kappa^{\star})\) & \(r(\kappa^{\text{PU}},\kappa^{\star})\) & \(r(\lambda^{\text{IM}},\lambda^{\star})\) & \(r(\lambda^{\text{PU}},\lambda^{\star})\) & EM Time (Best) & PU Time \\ \hline 1,000 & 7.5\(\pm\)4.7 \% & 6.8\(\pm\)3.1 \% & 3.6\(\pm\)4.4 \% & 3.1\(\pm\)1.5 \% & 5.0\(\pm\)2.9 \% & 6.0\(\pm\)3.7 \% & 41\(\pm\)0 s & **8\(\pm\)0 s** \\ 3,000 & 3.0\(\pm\)2.5 \% & 3.7\(\pm\)2.1 \% & 3.1\(\pm\)2.2 \% & 3.6\(\pm\)2.6 \% & 2.8\(\pm\)3.6 \% & 3.0\(\pm\)3.5 \% & 413\(\pm\)2 s & **10\(\pm\)0 s** \\ 10,000 & 1.8\(\pm\)1.1 \% & 2.5\(\pm\)2.2 \% & 1.3\(\pm\)0.8 \% & 1.5\(\pm\)0.7 \% & 1.3\(\pm\)0.3 \% & 1.0\(\pm\)0.5 \% & 1361\(\pm\)36 s & **29\(\pm\)0 s** \\ 30,000 & 0.5\(\pm\)0.2 \% & 0.4\(\pm\)0.2 \% & 0.4\(\pm\)0.1 \% & 0.4\(\pm\)0.2 \% & 0.7\(\pm\)0.1 \% & 0.8\(\pm\)0.2 \% & 4139\(\pm\)49 s & **87\(\pm\)1 s** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparing percent error in noisy AR parameter recovery and computation time for EM and probabilistic unrolling (PU). \begin{table} \begin{tabular}{c c c c} \hline \hline & \(r(\boldsymbol{\mu}^{\text{EM}},\boldsymbol{\tilde{z}})\) & \(r(\boldsymbol{\mu}^{\text{PU}},\boldsymbol{\tilde{z}})\) & EM Time & PU Time \\ \hline Avg. & 4.8\(\pm\)1.0 \% & 4.7\(\pm\)1.4 \% & 1481\(\pm\)19 s & **21\(\pm\)0 s** \\ \hline \hline \end{tabular} \end{table} Table 3: Averaged CS results (see Appendix F.2.4 for breakdown by digit type). Without Woodbury identity, EM time is 4725\(\pm\)61 s. probabilistic unrolling and gradient EM have similar error. However, probabilistic unrolling is approximately 70 times faster than gradient EM, even after we accelerate EM using the Woodbury matrix identity (see Appendix F.2.1). ### Collaborative Filtering through Factor Analysis The goal of recommender systems is to predict user ratings for various items. One common approach is _collaborative filtering_, in which we pool together incomplete ratings data for \(M\) items across \(N\) users to infer how all users would rate all items. One of the central challenges of collaborative filtering is the inherent sparsity of the data - for every user, we typically only observe ratings for a small fraction of items, leading to large amounts of missing data (Rendle et al., 2020; Wu et al., 2021). In this section, we use _factor analysis_ models for collaborative filtering. 
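Before writing down the model, note that the per-user masks \(\mathbf{\Omega}^{(n)}\) can be handled implicitly as index arrays, just like the undersampling masks above; the helper names and example ratings below are hypothetical.

```python
import numpy as np

def make_mask(observed_idx):
    """Row-selection mask Omega stored as an index array (never materialized)."""
    return np.asarray(observed_idx, dtype=int)

def apply_mask(omega, y):           # Omega @ y   : gather the observed items
    return y[omega]

def apply_mask_T(omega, r, M):      # Omega^T @ r : scatter back into R^M
    out = np.zeros(M)
    out[omega] = r
    return out

# Toy user with M = 6 items, ratings observed for items 1, 3 and 4 only.
M = 6
y_full = np.array([3.0, 5.0, 1.0, 4.0, 2.0, 5.0])    # hypothetical complete ratings
omega = make_mask([1, 3, 4])
y_tilde = apply_mask(omega, y_full)                  # observed ratings, shape (3,)
print(y_tilde, apply_mask_T(omega, y_tilde, M))
```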
Factor analysis is a Bayesian analog of matrix factorization, one of the state-of-the-art methods for recommender systems (Koren et al., 2009; Lawrence and Urtasun, 2009; Rendle et al., 2019). Let \(\mathbf{y}^{(n)}\in\mathbb{R}^{M}\) be the ratings for user \(n\) across \(M\) movies. Only part of this vector is known: \(\mathbf{\tilde{y}}^{(n)}=\mathbf{\Omega}^{(n)}\mathbf{y}^{(n)}\in\mathbb{R}^{M_{n}}\), where \(M_{n}<M\). The factor analysis model is written as \[\mathbf{z}^{(n)} \sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{28}\] \[\mathbf{\tilde{y}}^{(n)}|\mathbf{z}^{(n)} \sim\mathcal{N}(\mathbf{\Omega}^{(n)}(\mathbf{\Phi}\mathbf{z}^{(n)}+\mathbf{ \eta}),\mathbf{\Omega}^{(n)}\mathbf{\Psi}^{-1}(\mathbf{\Omega}^{(n)})^{\top}),\] where each \(\mathbf{z}^{(n)}\in\mathbb{R}^{D}\) for \(D<M\) is a set of latent factors for user \(n\). The free parameters of this model are \(\mathbf{\theta}:=\{\mathbf{\Phi},\mathbf{\eta},\mathbf{\Psi}\}\), where \(\mathbf{\Phi}\in\mathbb{R}^{M\times D}\), \(\mathbf{\eta}\in\mathbb{R}^{M}\), and \(\mathbf{\Psi}\) is a diagonal \(M\times M\) matrix. After estimating \(\mathbf{\theta}\), we can predict any unknown rating \(y_{m}^{(n)}\not\in\mathbf{\tilde{y}}^{(n)}\) using the mean of the distribution \(p(y_{m}^{(n)}|\mathbf{\tilde{y}}^{(n)},\mathbf{\theta})\) (i.e. \(\hat{y}_{m}^{(n)}=\mathbf{\phi}_{m}^{\top}\mathbf{\mu}_{\mathbf{\theta}}^{(n)}+\eta_{m}\), where \(\mathbf{\mu}_{\mathbf{\theta}}^{(n)}\) is defined by (7) and \(\mathbf{\phi}_{m}\) is the \(m\)-th row of \(\mathbf{\Phi}\)). _Setup and Results._ We perform collaborative filtering experiments on MovieLens (Harper and Konstan, 2015), a group of successively larger datasets with \(R=1\) million, 10 million, and 25 million ratings of thousands of movies, by thousands of users. For each dataset, we perform a 90%-10% train-test split of the ratings data (Sedhain et al., 2015). Then, we fit a factor analysis model to the training set using mini-batch gradient descent, where the gradients are calculated using either gradient EM or probabilistic unrolling. For probabilistic unrolling, we use \(K=10\) Monte Carlo samples, \(I=10\) unrolled iterations of conjugate gradient, and the output gradient to reduce memory consumption. After convergence, we calculate the root-mean-square error between all ratings in the test set \(y_{m}^{(n)}\) and the fitted model's predictions \(\hat{y}_{m}^{(n)}\). Further experimental details can be found in Appendix F.3.1. The results for the three MovieLens datasets are given in Table 4. We also report processing time and GPU memory utilized by EM and probabilistic unrolling. All of the results in Table 4 are for \(D=1{,}000\) latent factors. Figure 2 shows a plot of time/memory vs. \(D\) for other values of \(D\). An additional VAE baseline is provided in Appendix F.3.2. ## 7 Conclusion We introduced _probabilistic unrolling_, a computational framework for accelerating gradient-based maximum likelihood estimation for a large class of latent variable models with Gaussian prior and Gaussian likelihood. Our method combines Monte Carlo sampling with iterative solvers and unrolled optimization, leading to a novel means of back-propagating through a sampling algorithm. Our theoretical analyses demonstrated that this can accelerate gradient estimation and, hence, maximum likelihood estimation. Our analyses provide insight into the relationship between the number of solver iterations, i.e. network depth, the number of Monte Carlo samples, and the gradient approximation error. 
In the future, we will consider extensions of probabilistic unrolling to other classes of probabilistic latent variable models. ## Acknowledgements This work was supported by a National Defense Science and Engineering Graduate Fellowship, and grants PHY-2019786, DMS-2015485, and DMS-2210664 from the National Science Foundation. The authors also thank the anonymous reviewers, whose comments greatly improved this paper. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Dataset & \(N\) (users) & \(M\) (movies) & EM RMSE & PU RMSE & EM Time/Cycle & PU Time/Cycle & EM Mem & PU Mem \\ \hline ML-1M & 6,000 & 4,000 & 0.8433 & 0.8436 & 54 min, 42 s & **5 min, 50 s** & 1.94 GB & **0.17 GB** \\ ML-10M & 72,000 & 10,000 & 0.7809 & 0.7796 & 78 min, 36 s & **12 min, 8 s** & 5.62 GB & **2.64 GB** \\ ML-25M & 162,000 & 62,000 & — & 0.7700 & — & **31 min, 11 s** & \(>\)16 GB & **8.48 GB** \\ \hline \hline \end{tabular} \end{table} Table 4: MovieLens results. For timing, a _cycle_ is defined as 2,000 gradient steps. EM requires too much memory to run ML-25M. Figure 2: Time and memory versus \(D\) for the ML-1M dataset.
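As a closing illustration, the sketch below assembles one network-gradient step (Eq. (17)) on a toy model. It is not the implementation used in the experiments: the parameterization and shapes are invented, the \(c_{\mathbf{\theta}}\) term of (15) is dropped, \(\mathbf{b}_{\mathbf{\theta}}\) is taken independent of \(\mathbf{\theta}\), and the samples \(\mathbf{\delta}_{k}\) are held fixed (detached) when differentiating.

```python
import torch

torch.manual_seed(0)

# Toy LGM-like quantities; all shapes and parameterizations are illustrative.
D, M, K, I = 50, 20, 10, 30
F = torch.randn(M, D) / M ** 0.5            # stand-in for the masked emission operator
y = torch.randn(M)                          # stand-in observed data
theta = torch.zeros(D, requires_grad=True)  # log of a diagonal prior precision

def A_matvec(theta, v):                     # A_theta v = diag(e^theta) v + F^T F v
    return torch.exp(theta) * v + F.T @ (F @ v)

def sample_delta(theta):                    # delta ~ N(0, A_theta), cf. Eq. (11)
    eps1, eps2 = torch.randn(D), torch.randn(M)
    return torch.exp(0.5 * theta) * eps1 + F.T @ eps2

def unrolled_gd_solve(theta, rhs, num_iters):
    """Differentiable gradient-descent solver for A_theta x = rhs."""
    step = (1.0 / (torch.exp(theta).max() + (F ** 2).sum())).detach()  # fixed step
    x = torch.zeros_like(rhs)
    for _ in range(num_iters):
        x = x + step * (rhs - A_matvec(theta, x))
    return x

# One probabilistic-unrolling gradient step (network gradient, Eq. (17)).
b = F.T @ y                                                  # stand-in for b_theta
deltas = [sample_delta(theta).detach() for _ in range(K)]    # samples held fixed
mu = unrolled_gd_solve(theta, b, I)
sigmas = [unrolled_gd_solve(theta, d, I) for d in deltas]

q = 0.5 * mu @ A_matvec(theta, mu) - b @ mu \
    + sum(s @ A_matvec(theta, s) for s in sigmas) / (2 * K)  # c_theta omitted
loss = q - sum(d @ s for d, s in zip(deltas, sigmas)) / K    # correction term of (17)
loss.backward()                                              # theta.grad ~ network gradient
print(theta.grad.norm())
```

Replacing the network gradient by the output gradient (16) amounts to detaching `mu` and the `sigmas` before forming `q` and dropping the correction term.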
2307.03160
Invertibility criteria for the biharmonic single-layer potential
While the single-layer operator for the Laplacian is well understood, questions remain concerning the single-layer operator for the Bilaplacian, particularly with regard to invertibility issues linked with degenerate scales. In this article, we provide simple sufficient conditions ensuring this invertibility for a wide range of problems.
Alexandre Munnier
2023-07-06T17:42:08Z
http://arxiv.org/abs/2307.03160v3
# Invertibility criteria for the biharmonic single-layer potential ###### Abstract While the single-layer operator for the Laplacian is well understood, questions remain concerning the single-layer operator for the Bilaplacian, particularly with regard to invertibility issues linked with degenerate scales. In this article, we provide simple sufficient conditions ensuring this invertibility for a wide range of problems. _Keywords--_ Biharmonic single-layer potential, biharmonic equation. ## 1 Introduction Let \(\varGamma\) be a smooth curve in the plane (we don't intend to be rigorous at this stage). For all density \(q\in H^{-1/2}(\varGamma)\), the harmonic single-layer potential is defined by: \[S_{\varGamma}q(x)=\int_{\varGamma}g_{0}(x-y)q(y)\,\mathrm{d}s(y)\qquad\text{ for all }x\in\mathbb{R}^{2},\] where \(g_{0}\) is the fundamental solution of the Laplacian, that reads (\(\kappa\) is a positive parameter): \[g_{0}(x)=-\frac{1}{2\pi}\ln\frac{|x|}{\kappa}\qquad\text{for all }x\in\mathbb{R}^{2} \setminus\{0\}.\] The operator \(S_{\varGamma}\) is bounded from \(H^{-1/2}(\varGamma)\) into \(H^{1}_{loc}(\mathbb{R}^{2})\) and so is the operator: \[V_{\varGamma}:H^{-1/2}(\varGamma) \longrightarrow H^{1/2}(\varGamma)\] \[q \longmapsto\gamma_{\varGamma}^{D}\circ S_{\varGamma}q,\] where \(\gamma_{\varGamma}^{D}\) stands for the usual Dirichlet trace operator on \(\varGamma\). It is well known that \(V_{\varGamma}\) is invertible if and only if \(\kappa\neq\mathrm{Cap}_{\varGamma}\), where \(\mathrm{Cap}_{\varGamma}\) is a constant called the logarithmic capacity of \(\varGamma\). Furthermore, according to [6, Theorem 8.16], if \(\kappa>\mathrm{Cap}_{\varGamma}\), the operator \(S_{\varGamma}\) is positive definite on \(H^{-1/2}(\varGamma)\) and it has one negative eigenvalue if \(\kappa<\mathrm{Cap}_{\varGamma}\). The value of \(\mathrm{Cap}_{\varGamma}\) is difficult to evaluate in general (it is explicitly known only for certain geometries such as a disk, an ellipse, a square...). However, one can use the following simple estimate: \(\mathrm{Cap}_{\varGamma}\leqslant R\) where \(R>0\) is the radius of any circle that enclosed \(\varGamma\). The successful implementation of a BEM is therefore guaranteed by the verification of the criterion: \(\kappa>R\). This criterion is elementary and applies to a wide range of problems. To our knowledge, no such simple criterion is available for the biharmonic single-layer potential. In this paper we consider Jordan curves of class \(\mathcal{C}^{1,1}\) in the plane (see the last section for possible generalizations). We denote by \(\varGamma\) a disjoint union of a finite number of such curves. We define \(\varOmega_{\varGamma}^{-}\) the bounded domain consisting of the points enclosed by at least one curve and \(\varOmega_{\varGamma}^{+}\) its unbounded complement (see Fig. 1). The multi-connected curve \(\varGamma\) can be decomposed into \(\varGamma_{e}\), the boundary shared by \(\varOmega_{\varGamma}^{+}\) and \(\varOmega_{\varGamma}^{-}\), and the Jordan curves included in \(\varOmega_{\varGamma}^{-}\). On every Jordan curve, we define \(n\) the unit normal vector field directed towards the bounded domain enclosed by the curve (and we will stick to this convention throughout the paper). For every function \(u\in H^{2}_{loc}(\mathbb{R}^{2})\), we can define the Dirichlet and Neumann traces on \(\varGamma\) denoted respectively by \(\gamma^{D}_{T}u\) and \(\gamma^{N}_{T}u\) (the Neumann trace is defined taking into account the orientation of \(n\)). 
The total trace operator is next given by: \[\begin{array}{c}\gamma_{T}:H^{2}_{loc}(\mathbb{R}^{2})\longrightarrow H^{3/2 }(\varGamma)\times H^{1/2}(\varGamma)\\ u\longmapsto(\gamma^{D}_{T}u,\gamma^{N}_{T}u).\end{array}\] We will use the following expression for the fundamental solution of the Bilaplacian: \[G_{0}(x)=\frac{1}{8\pi}\Big{[}|x|^{2}\ln\frac{|x|}{\kappa_{0}}+\kappa_{1} \Big{]}\qquad\text{for all }x\in\mathbb{R}^{2}, \tag{1}\] where the parameters \((\kappa_{0},\kappa_{1})\in]0,+\infty[\times\mathbb{R}\) are introduced to cover all the classical definitions available in the literature. We denote \(H(\varGamma)=H^{3/2}(\varGamma)\times H^{1/2}(\varGamma)\) and \(H^{\prime}(\varGamma)=H^{-3/2}(\varGamma)\times H^{-1/2}(\varGamma)\). Using the usual abuse of notation to identify \(G_{0}(x-y)\) with a two-variables function \(G_{0}(x,y)\), the biharmonic single-layer potential is defined for every \(q=(q_{0},q_{1})\in H^{\prime}(\varGamma)\) by: \[\mathscr{S}_{\varGamma}q(x)=\int_{\varGamma}G_{0}(x,y)q_{0}(y)+\partial_{n(y) }G_{0}(x,y)q_{1}(y)\,\mathrm{d}s(y)\qquad\text{for all }x\in\mathbb{R}^{2}. \tag{2}\] The operator \(\mathscr{S}_{\varGamma}:H^{\prime}(\varGamma)\longrightarrow H^{2}_{loc}( \mathbb{R}^{2})\) is bounded so the same conclusion applies to the operator: \[\begin{array}{c}V_{\varGamma}:H^{\prime}(\varGamma)\longrightarrow H( \varGamma)\\ q\longmapsto\gamma_{\varGamma}\circ\mathscr{S}_{\varGamma}q.\end{array} \tag{3}\] Our goal is to determine conditions that ensure the invertibility of \(V_{\varGamma}\). Even in the simplest case where \(\varGamma\) reduces to a single Jordan curve, the answer is not clear in general. Indeed, for any fixed parameters \(\kappa_{0}\), \(\kappa_{1}\) (in identity (1)), it is known that there exist degenerate scales \(\rho>0\) for which \(V_{\rho\varGamma}\) is not invertible. Since the pioneering work [3], studies on degenerate scales (and more generally on the invertibility of \(V_{\varGamma}\)) have been reduced to questions of invertibility of a \(4\times 4\) matrix (known as the discriminant matrix). In this paper, we show that this matrix can be replaced by a simpler one, a \(3\times 3\) matrix called the Robin matrix (because of the similarity of its role to that of the Robin constant in potential theory). Recalling that the operator \(V_{\varGamma}\) (from \(H^{\prime}(\varGamma)\) into itself) is self-adjoint, our main results are as follows: **Theorem 1.1**.: 1. _The invertibility of_ \(V_{\varGamma}\) _depends on_ \(\varGamma_{e}\) _only._ 2. _To any multi-connected curve_ \(\varGamma\) _(as described above) we can associate a_ \(3\times 3\) _symmetric matrix_ \(\Lambda_{\varGamma_{e}}\) _(the Robin matrix). The operator_ \(V_{\varGamma}\) _is an isomorphism if and only if_ \(\det\Lambda_{\varGamma_{e}}\neq 0\)_._ 3. _Let_ \(R>0\) _be the radius of a circle_ \(\mathcal{C}_{R}\) _such that_ \(\varOmega^{+}_{\mathcal{C}_{R}}\subset\varOmega^{+}_{\varGamma}\) _(i.e. the circle_ \(\mathcal{C}_{R}\) _enclosed_ \(\varGamma\)_). If_ \(\kappa_{0}>eR\) _and_ \(\kappa_{1}>R^{2}\) _in the definition (_1_), then the matrix_ \(\Lambda_{\varGamma_{e}}\) _is positive definite and the operator_ \(V_{\varGamma}\) _is strongly elliptic on_ \(H^{\prime}(\varGamma)\)_._ 4. _Let_ \(R>0\) _be the radius of a circle_ \(\mathcal{C}_{R}\) _such that_ \(\varOmega^{+}_{\mathcal{C}_{R}}\subset\varOmega^{+}_{\mathcal{C}_{R}}\)_. 
If_ \(\kappa_{0}<eR\) _and_ \(\kappa_{1}<R^{2}\) _in the definition (_1_), then the matrix_ \(\Lambda_{\varGamma_{e}}\) _is negative definite._ Figure 1: The multi-connected curve \(\varGamma\) can be decomposed into the disjoint union of \(\varGamma_{e}\) and the Jordan curves included in \(\varOmega^{-}_{\varGamma}\). 5. _Let_ \(\kappa_{0}=1\) _and_ \(\kappa_{1}=0\) _and let_ \(\mathcal{C}_{R^{-}}\) _and_ \(\mathcal{C}_{R^{+}}\) _be circles of radii_ \(R^{-}\) _and_ \(R^{+}\) _such that_ \(\Omega^{+}_{\mathcal{C}_{R^{+}}}\subset\Omega^{+}_{\Gamma}\subset\Omega^{+}_{ \mathcal{C}_{R^{-}}}\)_._ _Degenerate scales_ \(\rho\) _for_ \(\Gamma\) _can occur only when_ \(1/(eR^{+})<\rho<1/(eR^{-})\)_._ Points 3 and 4 of Theorems 1.1 provide criteria (similar to those for the harmonic single-layer potential) ensuring the successful implementation of a BEM. These criteria are simple and apply to most of the problems that can be encountered. In particular, we emphasize that they apply to the case where \(\mathit{I}_{e}\) is multi-connected, this case being the blind spot of most publications on the subject. Some authors focus on the particular case where \(\kappa_{0}=1\) and \(\kappa_{1}=0\). The most advanced results concerning degenerate scales in this case can be found in the recent paper [2] (to which we refer also for a detailed overview of the known results on this topic). The authors look for sufficient conditions to prevent the appearance of degenerate scales. They prove that when \(\Gamma\) is a single Jordan curve such that the domain \(\Omega^{-}_{\Gamma}\) is star-shaped, or has symmetry properties, \(V_{\Gamma}\) is invertible provided that either a circle of radius \(1/e\) is included in \(\Omega^{-}_{\Gamma}\) or that \(\Omega^{-}_{\Gamma}\) is included in such a circle. They claim that when \(\mathit{I}_{e}\) is not connected, no general conclusion can be drawn. Points 3, 4 and 5 of Theorem 1.1 above extend their results to the general case considered in this article, and thus removes their geometric restrictions. In the same paper [2], it is showed that when \(\mathit{I}_{e}\) is a single Jordan curve, "holes" have no influence on the degenerate scales. The first assertion of Theorem 1.1 also extends this result, in particular to the case where \(\mathit{I}_{e}\) is multi-connected. The proof of Theorem 1.1 results straightforwardly from the combination of Theorems 4.1, 4.2, 5.1 and Property 5.1 established below. ## 2 The biharmonic transmission problem We continue to use the notation \(\Gamma\) to designate a disjoint union of Jordan curves as described in the previous section. Following an idea developed in the article [1], we introduce the weight functions: \[\rho(x)=\sqrt{1+|x|^{2}}\qquad\text{and}\qquad\lg(x)=\ln(2+|x|^{2})\qquad \text{for all }x\in\mathbb{R}^{2},\] and the weighted Sobolev space: \[W^{2}(\mathbb{R}^{2})=\Big{\{}u\in\mathscr{D}^{\prime}(\mathbb{R}^{2})\,:\, \frac{u}{\rho^{2}\lg}\in L^{2}(\mathbb{R}^{2}),\;\frac{1}{\rho\lg}\frac{ \partial u}{\partial x_{j}}\in L^{2}(\mathbb{R}^{2})\;\text{ and }\;\frac{ \partial^{2}u}{\partial x_{j}\partial x_{k}}\in L^{2}(\mathbb{R}^{2}),\,\forall \,j,k=1,2\Big{\}}.\] Note that the three-dimensional space of affine functions is a subspace of \(W^{2}(\mathbb{R}^{2})\). The affine functions will play a particular role in the analysis (the same role as played by the constants for the harmonic single-layer potential). 
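As a quick sanity check on the normalization in (1) (a SymPy sketch, not part of the original analysis), one can verify symbolically that \(G_{0}\) is biharmonic away from the origin and compute its Laplacian in closed form:

```python
import sympy as sp

x1, x2, k0, k1 = sp.symbols("x1 x2 kappa0 kappa1", positive=True)
r = sp.sqrt(x1**2 + x2**2)

G0 = (r**2 * sp.log(r / k0) + k1) / (8 * sp.pi)        # Eq. (1)
lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)  # Laplacian in the plane

print(sp.simplify(lap(G0)))        # equals (log(r/kappa0) + 1)/(2*pi)
print(sp.simplify(lap(lap(G0))))   # 0: G0 is biharmonic away from the origin
```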
We provide the space \(W^{2}(\mathbb{R}^{2})\) with the inner product: \[(u,v)_{W^{2}(\mathbb{R}^{2})}=(\Delta u,\Delta v)_{L^{2}(\mathbb{R}^{2})}+\int _{\Gamma}u\,v\,\mathrm{d}s\qquad\text{for all }u,v\in W^{2}(\mathbb{R}^{2}). \tag{4}\] According to [1], the norm associated to this scalar product is equivalent to the natural norm of \(W^{2}(\mathbb{R}^{2})\). We introduce the subspace: \[W^{2}_{\Gamma}(\mathbb{R}^{2})=\Big{\{}u\in W^{2}(\mathbb{R}^{2})\,:\,\gamma_ {\Gamma}u=0\Big{\}},\] and for every \(p=(p_{0},p_{1})\in H(\Gamma)\), we define: \[\mathsf{S}_{\Gamma}p=\operatorname{argmin}\Big{\{}\|u\|_{W^{2}(\mathbb{R}^{2} )}\,:\,u\in W^{2}(\mathbb{R}^{2}),\,\gamma_{\Gamma}u=p\Big{\}}. \tag{5}\] For any function \(u\) defined in \(\mathbb{R}^{2}\) we denote by \(u^{+}\) and \(u^{-}\) its restrictions to the domains \(\Omega^{+}_{\Gamma}\) and \(\Omega^{-}_{\Gamma}\) respectively. Some elementary properties of \(\mathsf{S}_{\Gamma}p\) are gathered in the following lemma: **Lemma 2.1**.: 1. _The function_ \(\mathsf{S}_{\Gamma}p\) _is well-defined for every_ \(p\in H(\Gamma)\) _(the minimum is achieved and is unique) and_ \(\mathsf{S}_{\Gamma}:H(\Gamma)\longrightarrow W^{2}_{\Gamma}(\mathbb{R}^{2})^{\perp}\) _is an isomorphism._ 2. _The function_ \(\mathsf{S}_{\Gamma}p\) _is the unique solution in the space_ \(W^{2}(\mathbb{R}^{2})\) _to the transmission problem:_ \[\begin{cases}\Delta^{2}u=0\quad\text{in }\mathbb{R}^{2}\setminus\Gamma\\ \gamma_{\Gamma}u=p.\end{cases}\] (6a) 3. _For every_ \(u\in W^{2}_{\Gamma}(\mathbb{R}^{2})^{\perp}\) _and_ \(v\in W^{2}(\mathbb{R}^{2})\)_,_ \((\Delta u,\Delta v)_{L^{2}(\Omega^{+}_{\Gamma})}=\int_{\mathit{I}_{e}}(\partial _{n}\Delta u^{+})v-(\Delta u^{+})\partial_{n}v\,\mathrm{d}s\)_._ 4. _If_ \(\mathscr{C}\) _is another set of Jordan curves as described in Section_ 1 _such that_ \(\varOmega_{\mathscr{C}}^{+}\subset\varOmega_{\varGamma}^{+}\)_, then for every_ \(p\in H(\varGamma)\)_,_ \(\mathsf{S}_{\mathscr{C}_{e}}\circ\gamma_{\mathscr{C}_{e}}\circ\mathsf{S}_{ \varGamma}p=\mathsf{S}_{\varGamma}p\) _in_ \(\varOmega_{\mathscr{C}}^{+}\)_._ 5. _For every_ \(p\in H(\varGamma)\)_,_ \(\Delta\mathsf{S}_{\varGamma}p(x)=\mathscr{O}(1/|x|)\) _as_ \(|x|\longrightarrow+\infty\)_._ Proof.: The proofs of statements 1 and 2 are straightforward. 1. Since \(\Delta u^{+}\) is a harmonic function in \(L^{2}(\varOmega_{\varGamma}^{+})\), it admits a Dirichlet and a Neumann trace in \(H^{-1/2}(\varGamma_{e})\) and \(H^{-3/2}(\varGamma_{e})\) respectively. The integration by parts formula results from the density of the space \(\mathscr{D}(\mathbb{R}^{2})\) in \(W^{2}(\mathbb{R}^{2})\) (asserted in [1, Theorem 7.2]). 2. We get the result by noticing that the function equal to \(\mathsf{S}_{\mathscr{C}_{e}}\circ\gamma_{\mathscr{C}_{e}}\circ\mathsf{S}_{ \varGamma}p-\mathsf{S}_{\varGamma}p\) in \(\varOmega_{\mathscr{C}}^{+}\) and to \(0\) in \(\varOmega_{\mathscr{C}}^{-}\), is in \(W^{2}_{\mathscr{C}_{e}}(\mathbb{R}^{2})\cap W^{2}_{\mathscr{C}_{e}}(\mathbb{R }^{2})^{\perp}\). 3. For every \(p\in H(\varGamma)\) and every \(x\in\varOmega_{\varGamma}^{+}\), the mean value property for harmonic functions asserts that: \[\Delta\mathsf{S}_{\varGamma}p(x)=\frac{1}{\pi R_{x}}\int_{D(x,R_{x})}\Delta \mathsf{S}_{\varGamma}p(y)\,\mathrm{d}y,\] where \(D(x,R_{x})\) is the disk of center \(x\) and radius \(R_{x}\) with \(R_{x}\) the distance from \(x\) to \(\varGamma\). 
Since \(\Delta\mathsf{S}_{\varGamma}p\in L^{2}(\varOmega_{\varGamma}^{+})\), the result follows from Cauchy-Schwarz inequality. It should be noted here that \(\mathsf{S}_{\varGamma}p\) is not the biharmonic single-layer potential of total trace \(p\). For instance, if \(p\) is the total trace of an affine function, then \(\mathsf{S}_{\varGamma}p\) is equal to this function while the biharmonic single-layer potential is not. ## 3 The Robin matrix In this section, we will define the Robin matrix of a multi-connected curve \(\varGamma\). Since the Robin matrix depends only on \(\varGamma_{e}\), to lighten the notation, we will assume that \(\varGamma\) is such that \(\varGamma=\varGamma_{e}\). We assume also that the origin lies in \(\varOmega_{\varGamma}^{-}\) and we introduce the functions: \[G_{1}(x)=-\frac{\partial G_{0}}{\partial x_{1}}(x)=-\frac{x_{1}}{8\pi}\Big{[} 2\ln\frac{|x|}{\kappa_{0}}+1\Big{]}\qquad\text{and}\qquad G_{2}(x)=-\frac{ \partial G_{0}}{\partial x_{2}}(x)=-\frac{x_{2}}{8\pi}\Big{[}2\ln\frac{|x|}{ \kappa_{0}}+1\Big{]},\] and therefore, denoting by \(\omega_{j}\) the Laplacian of \(G_{j}\) (for \(j=0,1,2\)), we have: \[\omega_{0}(x)=\frac{1}{2\pi}\Big{[}\ln\frac{|x|}{\kappa_{0}}+1\Big{]},\qquad \omega_{1}(x)=-\frac{1}{2\pi}\frac{x_{1}}{|x|^{2}},\qquad\omega_{2}(x)=-\frac{ 1}{2\pi}\frac{x_{2}}{|x|^{2}}\qquad\text{for all }x\in\mathbb{R}^{2} \setminus\{0\}.\] The following notations will also be helpful: \[G(x)=\begin{pmatrix}G_{0}(x)\\ G_{1}(x)\\ G_{2}(x)\end{pmatrix}\qquad\text{and}\qquad X(x)=\begin{pmatrix}1\\ x_{1}\\ x_{2}\end{pmatrix}\qquad\text{and}\qquad\omega(x)=\begin{pmatrix}\omega_{0}(x)\\ \omega_{1}(x)\\ \omega_{2}(x)\end{pmatrix}.\] Let \(u\) be a biharmonic function in \(H^{2}_{\ell oc}(\overline{\varOmega_{\varGamma}^{+}})\) and \(p\) be in \(H(\varGamma)\), and define: \[\big{[}u,p\big{]}_{\varGamma}=-\int_{\varGamma}\partial_{n}(\Delta u^{+}) \mathsf{S}_{\varGamma}p\,\mathrm{d}s+\int_{\varGamma}(\Delta u^{+})\partial _{n}\mathsf{S}_{\varGamma}p\,\mathrm{d}s-\int_{\varGamma}\partial_{n}u(\Delta \mathsf{S}_{\varGamma}^{+}p)\,\mathrm{d}s+\int_{\varGamma}u\,\partial_{n}( \Delta\mathsf{S}_{\varGamma}^{+}p)\,\mathrm{d}s. \tag{7}\] This bracket will prove crucial in the analysis. Let us collect some of its properties: **Lemma 3.1**.: 1. _For every_ \(p,q\in H(\varGamma)\)_,_ \(\big{[}\mathsf{S}_{\varGamma}p,q\big{]}_{\varGamma}=0\)_._ 2. _If_ \(\mathscr{C}\) _is another Jordan curve such that_ \(\varOmega_{\mathscr{C}}^{+}\subset\varOmega_{\varGamma}^{+}\)_, then for every_ \(p\in H(\varGamma)\) _and_ \(j=0,1,2\)_,_ \(\big{[}G_{j},\gamma_{\mathscr{C}}\circ\mathsf{S}_{\varGamma}p\big{]}_{\mathscr{C}}\)_._ 3. _For_ \(j,k=0,1,2\)_,_ \(\big{[}G_{j},\gamma_{\varGamma}X_{k}\big{]}_{\varGamma}=\delta_{jk}\)_._ 4. _For_ \(j,k=0,1,2\)_,_ \(\big{[}G_{j},\gamma_{\varGamma}G_{k}\big{]}_{\varGamma}=\big{[}G_{k},\gamma_{ \varGamma}G_{j}\big{]}_{\varGamma}\)_._ Proof.: 1. It suffices to combine the definition (7) with the third point of Lemma 2.1. 2. An integration by parts in the domain between \(\varGamma\) and \(\mathscr{C}\) followed with the fourth point of Lemma 2.1 leads to the equality. 3. Let \(\mathcal{C}_{R}\) be a large circle of radius \(R\), centered at the origin and enclosing \(\varGamma\). 
The preceding point asserts that: \[\big{[}G_{j},(\gamma_{\varGamma}X_{k})\big{]}_{\varGamma}=\big{[}G_{j},(\gamma _{\mathcal{C}_{R}}X_{k})\big{]}_{\mathcal{C}_{R}}=-\int_{\mathcal{C}_{R}}( \partial_{n}\omega_{j})X_{k}\,\mathrm{d}s+\int_{\mathcal{C}_{R}}\omega_{j}( \partial_{n}X_{k})\,\mathrm{d}s.\] (8) Then, explicit computations yield the result. 4. According to the third point of Lemma 2.1: \[\int_{\varGamma}(\partial_{n}G_{j})(\Delta\mathsf{S}_{\varGamma}^ {+}\circ\gamma_{\varGamma}G_{k})\,\mathrm{d}s-\int_{\varGamma}G_{j}\partial_{ n}(\Delta\mathsf{S}_{\varGamma}^{+}\circ\gamma_{\varGamma}G_{k})\,\mathrm{d}s=\\ \int_{\varGamma}(\partial_{n}G_{k})(\Delta\mathsf{S}_{\varGamma}^ {+}\circ\gamma_{\varGamma}G_{j})\,\mathrm{d}s-\int_{\varGamma}G_{k}\partial_{ n}(\Delta\mathsf{S}_{\varGamma}^{+}\circ\gamma_{\varGamma}G_{j})\,\mathrm{d}s.\] (9a) On the other hand, let \[D\] be the domain between \[\varGamma\] and a large circle \[\mathcal{C}_{R}\] centered at the origin. Integrating by parts the zero quantity \[(\Delta\omega_{j},G_{k})_{L^{2}(D)}\], we get: \[\int_{\varGamma}(\partial_{n}\omega_{j})G_{k}\,\mathrm{d}s-\int_{\varGamma} \omega_{j}(\partial_{n}G_{k})\,\mathrm{d}s=\int_{\mathcal{C}_{R}}(\partial_{n }\omega_{j})G_{k}\,\mathrm{d}s-\int_{\mathcal{C}_{R}}\omega_{j}(\partial_{n}G_ {k})\,\mathrm{d}s+(\omega_{j},\omega_{k})_{L^{2}(D)},\] (9b) and one easily verifies that the boundary integrals in the right hand side vanish when \[j\neq k\]. Using the equalities ( 9 ) in the definition of \[\big{[}G_{j},\gamma_{\varGamma}G_{k}\big{]}_{\varGamma}\] leads to the result. For every \(p\in H(\varGamma)\) we define \(\big{[}G,p\big{]}_{\varGamma}\) as the vector in \(\mathbb{R}^{3}\) whose components are \(\big{[}G_{j},p\big{]}_{\varGamma}\) (\(j=0,1,2\)). **Definition 3.1**.: _The \(3\times 3\) matrix:_ \[\Lambda_{\varGamma}=\big{(}\big{[}G_{j},\gamma_{\varGamma}G_{k}\big{]}_{ \varGamma}\big{)}_{0\leqslant i\leqslant 2\atop 0\leqslant i\leqslant 2},\] _will be called the Robin matrix (by analogy with the Robin constant for the harmonic single-layer potential). The fourth point of Lemma 3.1 asserts that this matrix is symmetric._ We define now, for \(k=0,1,2\): \[\mathscr{G}_{\varGamma}^{k}(x)=\begin{cases}G_{k}(x)-\mathsf{S}_{\varGamma} \circ\gamma_{\varGamma}G_{k}(x)+\big{[}G,\gamma_{\varGamma}G_{k}\big{]}_{ \varGamma}\cdot X(x)&(x\in\varOmega_{\varGamma}^{+})\\ \big{[}G,\gamma_{\varGamma}G_{k}\big{]}_{\varGamma}\cdot X(x)&(x\in\varOmega _{\varGamma}^{-})\end{cases}\qquad\text{and}\qquad\mathscr{G}_{\varGamma}= \begin{pmatrix}\mathscr{G}_{\varGamma}^{0}\\ \mathscr{G}_{\varGamma}^{1}\\ \mathscr{G}_{\varGamma}^{2}\end{pmatrix}. \tag{10}\] Applying the total trace operator to each component of the vectors, we obtain the equality: \[\gamma_{\varGamma}\mathscr{G}_{\varGamma}=\Lambda_{\varGamma}\big{(}\gamma_ {\varGamma}X\big{)}. \tag{11}\] ## 4 Invertibility of the biharmonic single-layer potential We now return to the general case in which \(\varGamma\) represents a union of Jordan curves, as described in Section 1. Up to a translation, we can assume that the origin lies in \(\varOmega_{\varGamma}^{-}\). For every \(p\in H(\varGamma)\), we denote by \(p_{e}\) the restriction of \(p\) to \(\varGamma_{e}\). 
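Before turning to the invertibility operator, the normalization \(\big{[}G_{j},\gamma_{\varGamma}X_{k}\big{]}_{\varGamma}=\delta_{jk}\) of Lemma 3.1 can be checked numerically on a circle through formula (8); the NumPy sketch below (not part of the paper) uses the inward normal convention of Section 1 and central finite differences for the normal derivatives.

```python
import numpy as np

kappa0, R, n_quad = 1.0, 2.0, 4000
t = np.linspace(0.0, 2 * np.pi, n_quad, endpoint=False)
pts = R * np.stack([np.cos(t), np.sin(t)], axis=1)        # quadrature points on C_R
normals = -pts / R                                        # n points toward the bounded side
ds = 2 * np.pi * R / n_quad

def omega(j, x):                  # omega_j = Laplacian of G_j (Section 3)
    r2 = x[:, 0] ** 2 + x[:, 1] ** 2
    if j == 0:
        return (0.5 * np.log(r2) - np.log(kappa0) + 1.0) / (2 * np.pi)
    return -x[:, j - 1] / r2 / (2 * np.pi)

def X(k, x):                      # X_0 = 1, X_1 = x1, X_2 = x2
    return np.ones(len(x)) if k == 0 else x[:, k - 1]

def normal_derivative(f, x, n, h=1e-5):
    return (f(x + h * n) - f(x - h * n)) / (2 * h)

B = np.zeros((3, 3))
for j in range(3):
    for k in range(3):
        term1 = -np.sum(normal_derivative(lambda y: omega(j, y), pts, normals) * X(k, pts)) * ds
        term2 = np.sum(omega(j, pts) * normal_derivative(lambda y: X(k, y), pts, normals)) * ds
        B[j, k] = term1 + term2
print(np.round(B, 4))             # expected: the 3x3 identity matrix
```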
**Definition 4.1**.: _If \(\det\Lambda_{\varGamma_{e}}\neq 0\), we define for every \(p\in H(\varGamma)\):_ \[\mathscr{S}_{\varGamma}^{\dagger}p=\mathsf{S}_{\varGamma}p-\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot X+\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda_{\varGamma_{e}}^{-1}\mathscr{G}_{\varGamma_{e}}. \tag{12}\] We are going to show that, this time, \(\mathscr{S}_{\varGamma}^{\dagger}p\) does coincide with the biharmonic single-layer potential of total trace \(p\). Every function \(u\) in \(L^{2}_{loc}(\mathbb{R}^{2})\) harmonic in \(\mathbb{R}^{2}\setminus\varGamma\) admits one-sided Dirichlet and Neumann traces on \(\varGamma\) in the spaces \(H^{-1/2}(\varGamma)\) and \(H^{-3/2}(\varGamma)\) respectively. Taking into account the orientation of the unit normal \(n\), we set: \[\big{[}u\big{]}_{\varGamma}=\gamma_{\varGamma}^{D}u^{+}-\gamma_{\varGamma}^{D}u^{-}\qquad\text{and}\qquad\big{[}\partial_{n}u\big{]}_{\varGamma}=\gamma_{\varGamma}^{N}u^{+}-\gamma_{\varGamma}^{N}u^{-}, \tag{13}\] and we define the operator: \[\begin{split} U_{\varGamma}:H(\varGamma)&\longrightarrow H^{\prime}(\varGamma)\\ p&\longmapsto\big{(}-\big{[}\partial_{n}\Delta\mathscr{S}_{\varGamma}^{\dagger}p\big{]}_{\varGamma},\big{[}\Delta\mathscr{S}_{\varGamma}^{\dagger}p\big{]}_{\varGamma}\big{)}.\end{split} \tag{14}\] **Theorem 4.1**.: _The operator \(V_{\varGamma}\) is invertible if and only if \(\det\Lambda_{\varGamma_{e}}\neq 0\). In this case \(\mathscr{S}_{\varGamma}^{\dagger}p=\mathscr{S}_{\varGamma}\circ U_{\varGamma}p\) for every \(p\in H(\varGamma)\). Taking the total trace on \(\varGamma\), this entails that \(V_{\varGamma}\circ U_{\varGamma}p=p\)._ The proof is based on a couple of technical lemmas. The first is the result of explicit calculations: **Lemma 4.1**.: _For any \(q=(q_{0},q_{1})\in H^{\prime}(\varGamma)\), the biharmonic single-layer potential \(\mathscr{S}_{\varGamma}q\) (defined by (2)) and its partial derivatives up to order 2 admit the following asymptotic expansions as \(|x|\) goes to \(+\infty\):_ \[\mathscr{S}_{\varGamma}q(x) =A_{\varGamma}(q)\cdot G(x)+B_{\varGamma}(q)\omega_{0}(x)+C_{\varGamma}(q)\frac{x_{1}^{2}-x_{2}^{2}}{|x|^{2}}+D_{\varGamma}(q)\frac{x_{1}x_{2}}{|x|^{2}}+\mathscr{O}(1/|x|), \tag{15a}\] \[\partial_{x_{j}}\mathscr{S}_{\varGamma}q(x) =A_{\varGamma}(q)\cdot\partial_{x_{j}}G(x)+\mathscr{O}(1/|x|),\] (15b) \[\partial_{x_{j}x_{k}}^{2}\mathscr{S}_{\varGamma}q(x) =A_{\varGamma}(q)\cdot\partial_{x_{j}x_{k}}^{2}G(x)+\mathscr{O}(1/|x|^{2}).
\tag{15c}\] _where \(A_{\varGamma}(q)\in\mathbb{R}^{3}\) is defined by:_ \[A_{\varGamma}(q)=\int_{\varGamma}q_{0}(y)X(y)+q_{1}\partial_{n}X(y)\,\mathrm{d}s,\] _and \(B_{\varGamma}(q)\), \(C_{\varGamma}(q)\) and \(D_{\varGamma}(q)\) are real constants depending on \(q\)._ **Lemma 4.2**.: _For every \(p\in H(\varGamma)\), \(A_{\varGamma}\big{(}U_{\varGamma}p\big{)}=\Lambda_{\varGamma_{e}}^{-1}\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\)._ Proof.: By definition, we have: \[A_{\varGamma}\big{(}U_{\varGamma}p\big{)}=-\int_{\varGamma}\big{[}\partial_{n}(\Delta\mathscr{S}_{\varGamma}^{\dagger}p)\big{]}_{\varGamma}\,X\,\mathrm{d}s+\int_{\varGamma}\big{[}\Delta\mathscr{S}_{\varGamma}^{\dagger}p\big{]}_{\varGamma}\partial_{n}X\,\mathrm{d}s,\] and since the functions \(X_{j}\) (for \(j=0,1,2\)) are harmonic in \(\mathbb{R}^{2}\), an integration by parts on the domain \(\varOmega_{\varGamma}^{-}\) yields: \[A_{\varGamma}\big{(}U_{\varGamma}p\big{)}=-\int_{\varGamma_{e}}\partial_{n}(\Delta\mathscr{S}_{\varGamma}^{\dagger}p)^{+}\,X\,\mathrm{d}s+\int_{\varGamma_{e}}(\Delta\mathscr{S}_{\varGamma}^{\dagger}p)^{+}\partial_{n}X\,\mathrm{d}s=\big{[}\mathscr{S}_{\varGamma}^{\dagger}p,\gamma_{\varGamma_{e}}X\big{]}_{\varGamma_{e}}.\] Using the definition (12) of \(\mathscr{S}_{\varGamma}^{\dagger}p\), the fourth point of Lemma 2.1 and the first point of Lemma 3.1, we obtain: \[\big{[}\mathscr{S}_{\varGamma}^{\dagger}p,\gamma_{\varGamma_{e}}X\big{]}_{\varGamma_{e}}=\big{[}\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda_{\varGamma_{e}}^{-1}G,\gamma_{\varGamma_{e}}X\big{]}_{\varGamma_{e}}.\] Then, the third point of Lemma 3.1 leads to the conclusion. Proof of Theorem 4.1.: Assume that \(\det\Lambda_{\varGamma_{e}}\neq 0\), let \(p\) be in \(H(\varGamma)\) and define \(u=\mathscr{S}_{\varGamma}^{\dagger}p-\mathscr{S}_{\varGamma}\circ U_{\varGamma}p\). By construction, \(\big{[}\Delta u\big{]}_{\varGamma}=0\) and \(\big{[}\partial_{n}\Delta u\big{]}_{\varGamma}=0\), hence the function \(\Delta u\) is harmonic in \(\mathbb{R}^{2}\). Moreover, according to the last assertion of Lemma 2.1: \[\Delta\mathscr{S}_{\varGamma}^{\dagger}p(x)=\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda_{\varGamma_{e}}^{-1}\omega(x)+\mathscr{O}(1/|x|)\qquad\text{as}\qquad|x|\longrightarrow+\infty. \tag{16}\] Combining (15c), (16) and Lemma 4.2, we deduce that \(\Delta u(x)\) tends to \(0\) as \(|x|\) goes to \(+\infty\) and Liouville's theorem allows us to conclude that \(\Delta u\) is equal to zero. Therefore, there exists a function \(h\) harmonic in \(\mathbb{R}^{2}\) such that \(\mathscr{S}_{\varGamma}^{\dagger}p=\mathscr{S}_{\varGamma}\circ U_{\varGamma}p+h\) and hence also: \[\big{(}\mathscr{S}_{\varGamma}^{\dagger}p-\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda_{\varGamma_{e}}^{-1}G\big{)}=\big{(}\mathscr{S}_{\varGamma}\circ U_{\varGamma}p-\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda_{\varGamma_{e}}^{-1}G\big{)}+h\qquad\text{in }\varOmega_{\varGamma}^{+}. \tag{17}\] Denote by \(p_{1}\) and \(p_{2}\) respectively the total traces on \(\varGamma_{e}\) of the functions in brackets. Considering again their asymptotic behavior, we deduce that they are equal to \(\mathsf{S}_{\varGamma_{e}}p_{1}\) and \(\mathsf{S}_{\varGamma_{e}}p_{2}\) respectively, in \(\varOmega_{\varGamma}^{+}\).
On the one hand, it follows that \(h\) is in the space \(W^{2}(\mathbb{R}^{2})\) and therefore that \(h=\alpha\cdot X\) with \(\alpha\in\mathbb{R}^{3}\) (the affine functions are the only functions harmonic in \(\mathbb{R}^{2}\) in \(W^{2}(\mathbb{R}^{2})\)). On the other hand, since \(p_{1}=p_{e}-\gamma_{\varGamma_{e}}\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda_{\varGamma_{e}}^{-1}G\), we have \(\big{[}G_{k},p_{1}\big{]}_{\varGamma_{e}}=0\) and \(\big{[}G_{k},\alpha\cdot(\gamma_{\varGamma_{e}}X)\big{]}_{\varGamma_{e}}=\alpha_{k}\) for \(k=0,1,2\), according to the third point of Lemma 3.1. We are now going to verify that \(\big{[}G_{k},p_{2}\big{]}_{\varGamma_{e}}=0\), which together with equality (17) will allow us to conclude that \(h=0\). Let \(\mathcal{C}_{R}\) be a large circle of radius \(R\) and centered at the origin enclosing \(\varGamma_{e}\). According to the second point of Lemma 3.1: \[\big{[}G_{k},p_{2}\big{]}_{\varGamma_{e}}=\big{[}G_{k},\gamma_{\mathcal{C}_{R}}\circ\mathsf{S}_{\varGamma_{e}}p_{2}\big{]}_{\mathcal{C}_{R}}.\] Define now: \[\nu_{0}(R)=\frac{R^{2}}{4}\Big{[}2\ln\Big{(}\frac{R}{\kappa_{0}}\Big{)}+1\Big{]}\qquad\lambda_{0}(R)=-\frac{R^{2}}{4\pi}\Big{[}\ln\Big{(}\frac{R}{\kappa_{0}}\Big{)}+\ln^{2}\Big{(}\frac{R}{\kappa_{0}}\Big{)}\Big{]}+\frac{1}{8\pi}\big{[}\kappa_{1}-R^{2}\big{]}, \tag{18a}\] \[\nu_{j}(R)=-\frac{R^{2}}{4}\qquad\lambda_{j}(R)=-\frac{1}{4\pi}\Big{[}\ln\Big{(}\frac{R}{\kappa_{0}}\Big{)}+1\Big{]},\qquad(j=1,2), \tag{18b}\] and the functions \(G^{\dagger}_{k}=\nu_{k}(R)\omega_{k}+\lambda_{k}(R)X_{k}\). Then \(\gamma_{\mathcal{C}_{R}}G_{k}=\gamma_{\mathcal{C}_{R}}G^{\dagger}_{k}\) (the total traces coincide on \(\mathcal{C}_{R}\)), and since the function \(G^{\dagger}_{k}\) is harmonic in \(\varOmega^{+}_{\mathcal{C}_{R}}\), we deduce, using the third point of Lemma 2.1, that: \[\int_{\mathcal{C}_{R}}\partial_{n}G_{k}(\Delta\mathsf{S}_{\varGamma_{e}}p_{2})\,\mathrm{d}s-\int_{\mathcal{C}_{R}}G_{k}\,\partial_{n}(\Delta\mathsf{S}_{\varGamma_{e}}p_{2})\,\mathrm{d}s=\int_{\mathcal{C}_{R}}\partial_{n}G^{\dagger}_{k}(\Delta\mathsf{S}_{\varGamma_{e}}p_{2})\,\mathrm{d}s-\int_{\mathcal{C}_{R}}G^{\dagger}_{k}\,\partial_{n}(\Delta\mathsf{S}_{\varGamma_{e}}p_{2})\,\mathrm{d}s=0. \tag{19}\] Using the expansion (15a) for \(\mathsf{S}_{\varGamma_{e}}p_{2}\) (recall that \(\mathsf{S}_{\varGamma_{e}}p_{2}=\mathscr{S}_{\varGamma}\circ U_{\varGamma}p-A_{\varGamma}(U_{\varGamma}p)\cdot G\) in \(\varOmega^{+}_{\varGamma}\)), we arrive at: \[\big{[}G_{k},\gamma_{\mathcal{C}_{R}}\circ\mathsf{S}_{\varGamma_{e}}p_{2}\big{]}_{\mathcal{C}_{R}}=-\int_{\mathcal{C}_{R}}(\partial_{n}\omega_{k})(\mathsf{S}_{\varGamma_{e}}p_{2})\,\mathrm{d}s+\int_{\mathcal{C}_{R}}\omega_{k}(\partial_{n}\mathsf{S}_{\varGamma_{e}}p_{2})\,\mathrm{d}s\longrightarrow 0\qquad\text{as}\qquad R\longrightarrow+\infty.\] All together, we have proved that \(\big{[}G_{k},p_{2}\big{]}_{\varGamma_{e}}=0\) for \(k=0,1,2\) and therefore that \(u=\mathscr{S}^{\dagger}_{\varGamma}p-\mathscr{S}_{\varGamma}\circ U_{\varGamma}p=0\). Assume now that \(\det\Lambda_{\varGamma_{e}}=0\). According to the definition (10), this implies that there exists \(\xi\in\mathbb{R}^{3}\), \(\xi\neq 0\) such that the function \(\mathcal{G}=\mathscr{G}_{\varGamma_{e}}\cdot\xi\) is zero in \(\varOmega^{-}_{\varGamma}\). Let \(q=\big{(}-\big{[}\partial_{n}\Delta\mathcal{G}\big{]}_{\varGamma},\big{[}\Delta\mathcal{G}\big{]}_{\varGamma}\big{)}\) and \(v=\mathcal{G}-\mathscr{S}_{\varGamma}q\).
We are going to verify that \(v=0\) and hence that \(V_{\varGamma}\) is not injective. Since \(\mathcal{G}\) vanishes in \(\varOmega^{-}_{\varGamma}\), we have: \[A_{\varGamma}(q)=-\int_{\varGamma}\big{[}\partial_{n}(\Delta\mathcal{G})\big{]}_{\varGamma}\,X\,\mathrm{d}s+\int_{\varGamma}\big{[}\Delta\mathcal{G}\big{]}_{\varGamma}\partial_{n}X\,\mathrm{d}s=-\int_{\varGamma_{e}}\partial_{n}(\Delta\mathcal{G})^{+}\,X\,\mathrm{d}s+\int_{\varGamma_{e}}(\Delta\mathcal{G})^{+}\partial_{n}X\,\mathrm{d}s. \tag{20}\] On the other hand, combining the first and third points of Lemma 3.1, we obtain that: \[-\int_{\varGamma_{e}}\partial_{n}(\Delta\mathscr{G}^{k}_{\varGamma_{e}})^{+}\,X_{j}\,\mathrm{d}s+\int_{\varGamma_{e}}(\Delta\mathscr{G}^{k}_{\varGamma_{e}})^{+}\partial_{n}X_{j}\,\mathrm{d}s=\delta_{jk}\qquad\text{for all }j,k=0,1,2. \tag{21}\] Comparing (20) and (21), it follows that \(A_{\varGamma}(q)=\xi\) and therefore that \(\mathcal{G}\) and \(\mathscr{S}_{\varGamma}q\) have the same asymptotic behavior. The rest of the proof is similar to the one establishing that \(u=0\) above. **Theorem 4.2**.: _If the matrix \(\Lambda_{\varGamma_{e}}\) is positive definite, the operator \(V_{\varGamma}\) is strongly elliptic in \(H^{\prime}(\varGamma)\)._ Proof.: Assume that the matrix \(\Lambda_{\varGamma_{e}}\) is positive definite and define in \(H(\varGamma)\) the inner product: \[(p,q)_{\varGamma}=\big{(}\Delta\mathsf{S}_{\varGamma}p,\Delta\mathsf{S}_{\varGamma}q\big{)}_{L^{2}(\mathbb{R}^{2})}+\big{[}G,p_{e}\big{]}_{\varGamma_{e}}\cdot\Lambda^{-1}_{\varGamma_{e}}\big{[}G,q_{e}\big{]}_{\varGamma_{e}}\qquad\text{for all }p,q\in H(\varGamma). \tag{22}\] The norm \(\|\cdot\|_{\varGamma}\) associated to this scalar product is clearly equivalent to the usual norm on \(H(\varGamma)\). The inclusion \(H(\varGamma)\subset L^{2}(\varGamma)\times L^{2}(\varGamma)\) being continuous and dense, we can use \(\mathbf{L}^{2}(\varGamma)=L^{2}(\varGamma)\times L^{2}(\varGamma)\) as pivot space (identified with its dual) and obtain a Gelfand triple of Hilbert spaces: \[H(\varGamma)\subset\mathbf{L}^{2}(\varGamma)\subset H^{\prime}(\varGamma).\] With this configuration, it is well known that the operator \(H(\varGamma)\longrightarrow H^{\prime}(\varGamma)\), \(p\longmapsto(p,\cdot)_{\varGamma}\) is an isometry and elementary calculations can be used to check that it is equal to \(U_{\varGamma}\). We have therefore: \[\langle U_{\varGamma}p,p\rangle=\|p\|_{\varGamma}^{2}\qquad\text{for all }p\in H(\varGamma),\] where the brackets \(\langle\cdot,\cdot\rangle\) stand for the duality pairing on \(H^{\prime}(\varGamma)\times H(\varGamma)\) that extends the inner product of \(\mathbf{L}^{2}(\varGamma)\). Since \(V_{\varGamma}\) is the inverse of the isometric operator \(U_{\varGamma}\), we get the result. ## 5 Properties of the Robin matrix Since the Robin matrix depends only on \(\varGamma_{e}\), we assume again in this section (as in Section 3) that the multi-connected curve \(\varGamma\) is such that \(\varGamma=\varGamma_{e}\). Let \(\varGamma^{\prime}\) be another such curve and denote by \(\lambda^{1}_{\varGamma}\leqslant\lambda^{2}_{\varGamma}\leqslant\lambda^{3}_{\varGamma}\) the eigenvalues of \(\Lambda_{\varGamma}\) and by \(\lambda^{1}_{\varGamma^{\prime}}\leqslant\lambda^{2}_{\varGamma^{\prime}}\leqslant\lambda^{3}_{\varGamma^{\prime}}\) those of \(\Lambda_{\varGamma^{\prime}}\).
**Theorem 5.1**.: _If \(\varOmega_{\varGamma}^{+}\subset\varOmega_{\varGamma^{\prime}}^{+}\) then \(\lambda^{j}_{\varGamma}\leqslant\lambda^{j}_{\varGamma^{\prime}}\) for \(j=1,2,3\)._ Proof.: Let \(\xi\) be in \(\mathbb{R}^{3}\) and define \(F=\xi\cdot G\) and \(\mathscr{F}_{\varGamma^{\prime}}=\xi\cdot\mathscr{G}_{\varGamma^{\prime}}\). Then \(\xi\cdot\Lambda_{\varGamma^{\prime}}\xi=\big{[}F,\gamma_{\varGamma^{\prime}}F\big{]}_{\varGamma^{\prime}}\) and proceeding as for establishing the equality (21), we obtain: \[\big{[}F,\gamma_{\varGamma^{\prime}}F\big{]}_{\varGamma^{\prime}}=-\int_{\varGamma^{\prime}}\big{(}\partial_{n}\Delta\mathscr{F}_{\varGamma^{\prime}}^{+}\big{)}F\,\mathrm{d}s+\int_{\varGamma^{\prime}}\big{(}\Delta\mathscr{F}_{\varGamma^{\prime}}^{+}\big{)}(\partial_{n}F)\,\mathrm{d}s.\] From (10) and recalling (5) we deduce that the above equality can be transformed into: \[\big{[}F,\gamma_{\varGamma^{\prime}}F\big{]}_{\varGamma^{\prime}}=-\int_{\varGamma^{\prime}}\big{(}\partial_{n}\Delta F\big{)}F\,\mathrm{d}s+\int_{\varGamma^{\prime}}\big{(}\Delta F\big{)}(\partial_{n}F)\,\mathrm{d}s-\min\big{\{}\|\Delta u\|_{L^{2}(\varOmega_{\varGamma^{\prime}}^{+})}^{2}\,:\,u\in W^{2}(\mathbb{R}^{2}),\,\gamma_{\varGamma^{\prime}}u=F\big{\}}.\] Denote by \(D\) the bounded domain between \(\varGamma^{\prime}\) and \(\varGamma\). On the one hand: \[-\int_{\varGamma^{\prime}}\big{(}\partial_{n}\Delta F\big{)}F\,\mathrm{d}s+\int_{\varGamma^{\prime}}\big{(}\Delta F\big{)}(\partial_{n}F)\,\mathrm{d}s=-\int_{\varGamma}\big{(}\partial_{n}\Delta F\big{)}F\,\mathrm{d}s+\int_{\varGamma}\big{(}\Delta F\big{)}(\partial_{n}F)\,\mathrm{d}s+\int_{D}|\Delta F|^{2}\,\mathrm{d}x.\] On the other hand: \[\min\big{\{}\|\Delta u\|_{L^{2}(\varOmega_{\varGamma^{\prime}}^{+})}^{2}\,:\,u\in W^{2}(\mathbb{R}^{2}),\,\gamma_{\varGamma^{\prime}}u=F\big{\}}\leqslant\int_{D}|\Delta F|^{2}\,\mathrm{d}x+\min\big{\{}\|\Delta u\|_{L^{2}(\varOmega_{\varGamma}^{+})}^{2}\,:\,u\in W^{2}(\mathbb{R}^{2}),\,\gamma_{\varGamma}u=F\big{\}},\] and thus we have proved that \(\big{[}F,\gamma_{\varGamma^{\prime}}F\big{]}_{\varGamma^{\prime}}\geqslant\big{[}F,\gamma_{\varGamma}F\big{]}_{\varGamma}\). The Courant-Fischer min-max principle leads to the conclusion of the theorem. **Proposition 5.1**.: _Let \(\mathcal{C}_{R}\) be a circle of radius \(R>0\). Then \(\Lambda_{\mathcal{C}_{R}}=\mathrm{diag}(\lambda_{0}(R),\lambda_{1}(R),\lambda_{2}(R))\) where the expressions of the real numbers \(\lambda_{j}(R)\) (\(j=0,1,2\)) are given in (18)._ Proof.: For \(k=0,1,2\) we have \(\mathsf{S}_{\mathcal{C}_{R}}\circ\gamma_{\mathcal{C}_{R}}G_{k}=\nu_{k}(R)\omega_{k}+\lambda_{k}(R)X_{k}\) in \(\varOmega_{\mathcal{C}_{R}}^{+}\), where the expressions of the constants \(\nu_{k}(R)\) and \(\lambda_{k}(R)\) are given in (18). The result follows from the definition of the entries \(\big{[}G_{k},\gamma_{\mathcal{C}_{R}}G_{j}\big{]}_{\mathcal{C}_{R}}\) of the matrix \(\Lambda_{\mathcal{C}_{R}}\). ## 6 Generalization The \(\mathcal{C}^{1,1}\) regularity of \(\varGamma\) is the weakest for which the total trace operator from \(H^{2}_{loc}(\mathbb{R}^{2})\) into \(H^{3/2}(\varGamma)\times H^{1/2}(\varGamma)\) is well defined and onto. With weaker regularity, \(H^{3/2}(\varGamma)\) has no more intrinsic definition and the space \(H(\varGamma)\) must be defined simply as the image of \(H^{2}_{loc}(\mathbb{R}^{2})\) by the total trace operator. In general, this image is difficult to characterize.
Nevertheless, some cases are dealt with in the literature: curvilinear \(\mathcal{C}^{1,1}\) polygons in [5] and Lipschitz continuous curves in [4]. For \(\mathcal{C}^{1,1}\) curvilinear polygons, a generalization of the work done in this document seems well within reach. Another possible generalization would be to adopt the approach of [3], in which \(\varGamma\) is any compact set in \(\mathbb{R}^{2}\). In this case, the spaces \(H(\varGamma)\) and \(H^{\prime}(\varGamma)\) would be replaced by the spaces denoted respectively by \(H^{2}_{\gamma}(\varGamma)\) and \(H^{-2}_{\varGamma}\) in [3]. The space \(W^{2}(\mathbb{R}^{2})\) would obviously remain unchanged and \(W^{2}_{\varGamma}(\mathbb{R}^{2})\) would be defined as the closure of \(\mathscr{D}(\mathbb{R}^{2}\setminus\varGamma)\) in \(W^{2}(\mathbb{R}^{2})\). With these settings, the domain \(\varOmega_{\varGamma}^{+}\) is the unbounded connected component of \(\mathbb{R}^{2}\setminus\varGamma\) and \(\varGamma_{e}\) is the boundary of \(\varOmega_{\varGamma}^{+}\). The details of the analysis still need to be verified, and this will be done in a future work.
2303.11093
An exterior calculus framework for polytopal methods
We develop in this work the first polytopal complexes of differential forms. These complexes, inspired by the Discrete De Rham and the Virtual Element approaches, are discrete versions of the de Rham complex of differential forms built on meshes made of general polytopal elements. Both constructions benefit from the high-level approach of polytopal methods, which leads, on certain meshes, to leaner constructions than the finite element method. We establish commutation properties between the interpolators and the discrete and continuous exterior derivatives, prove key polynomial consistency results for the complexes, and show that their cohomologies are isomorphic to the cohomology of the continuous de Rham complex.
Francesco Bonaldi, Daniele A. Di Pietro, Jerome Droniou, Kaibo Hu
2023-03-20T13:26:50Z
http://arxiv.org/abs/2303.11093v2
# An exterior calculus framework for polytopal methods ###### Abstract We develop in this work the first polytopal complexes of differential forms. These complexes, inspired by the Discrete De Rham and the Virtual Element approaches, are discrete versions of the de Rham complex of differential forms built on meshes made of general polytopal elements. Both constructions benefit from the high-level approach of polytopal methods, which leads, on certain meshes, to leaner constructions than the finite element method. We establish commutation properties between the interpolators and the discrete and continuous exterior derivatives, prove key polynomial consistency results for the complexes, and show that their cohomologies are isomorphic to the cohomology of the continuous de Rham complex. **Key words.** Discrete de Rham complex, Virtual Element Method, differential forms, exterior calculus, polytopal methods **MSC2020.** 65N30, 65N99, 14F40 ## 1 Introduction This work is a first step towards merging two extremely successful avenues of research in numerical analysis: finite element differential forms and arbitrary-order polytopal methods. The well-posedness of important classes of partial differential equations (PDEs), and the development of stable approximations thereof, hinges on the properties of underlying Hilbert complexes [23]. The best-known example is provided by the de Rham complex which, for an open connected polyhedral domain \(\Omega\subset\mathbb{R}^{3}\), reads \[\{0\}\xrightarrow{}H^{1}(\Omega)\xrightarrow{\mathbf{grad}}\mathbf{H}( \mathbf{curl};\Omega)\xrightarrow{\mathbf{curl}}\mathbf{H}(\mathrm{div};\Omega) \xrightarrow{\mathrm{div}}L^{2}(\Omega)\xrightarrow{}\{0\}, \tag{1.1}\] where \(H^{1}(\Omega)\) is the space of scalar-valued functions over \(\Omega\) that are square-integrable along with their gradient, while \(\mathbf{H}(\mathbf{curl};\Omega)\) and \(\mathbf{H}(\mathrm{div};\Omega)\) are the spaces of vector-valued functions over \(\Omega\) that are square-integrable along with their curl and divergence, respectively. Using the framework of differential forms (see Appendix A), the de Rham complex (1.1) can be generalised to a domain \(\Omega\) of any dimension \(n\) as: \[\{0\}\xrightarrow{}H\Lambda^{0}(\Omega)\xrightarrow{\mathrm{d}^{0}}\cdots \xrightarrow{\mathrm{d}^{k-1}}H\Lambda^{k}(\Omega)\xrightarrow{\mathrm{d}^{k}} \cdots\xrightarrow{\mathrm{d}^{m-1}}H\Lambda^{n}(\Omega)\xrightarrow{}\{0\}. \tag{1.2}\] In what follows, we shall possibly omit the index \(k\) from exterior derivatives and spaces in (1.2) when no ambiguity can arise. The de Rham complex enters the well-posedness analysis of PDEs through its cohomology spaces \(\mathrm{Ker}\,\mathrm{d}^{k}/\mathrm{Im}\,\mathrm{d}^{k-1}\). A classical result links these spaces to the topological features of the domain and their dimensions to its Betti numbers. Preserving such homological structures at the discrete level leads to _compatible_ methods and is key to the design of stable numerical schemes. The compatible finite element approximation of the vector-valued spaces appearing in the de Rham complex (1.1) arose as a research subject in the late 70s [53, 54]. In the late 80s, links with Whitney forms were identified [17]. More recently, the development of Finite Element Exterior Calculus (FEEC) [2, 4, 5] has provided a unified perspective on the generation and analysis of finite element approximations of the de Rham complex (1.2). 
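As a concrete illustration of the role played by these cohomology spaces — an example of ours, not taken from the original text — consider the three-dimensional complex (1.1) on a connected domain \(\Omega\): the dimensions of the two intermediate cohomology spaces coincide with the first and second Betti numbers of \(\Omega\), \[\dim\big(\operatorname{Ker}\mathbf{curl}/\operatorname{Im}\mathbf{grad}\big)=b_{1}(\Omega)\qquad\text{and}\qquad\dim\big(\operatorname{Ker}\operatorname{div}/\operatorname{Im}\mathbf{curl}\big)=b_{2}(\Omega),\] so that both quotients vanish on a ball (\(b_{1}=b_{2}=0\), the complex is exact), while on a solid torus (\(b_{1}=1\), \(b_{2}=0\)) the first quotient is one-dimensional and on a hollow ball (\(b_{1}=0\), \(b_{2}=1\)) the second one is. Compatible discretisations aim at reproducing these dimensions at the discrete level.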
Finite Element Systems (FES) are a generalisation of FEEC covering spaces which are not necessarily piecewise polynomial inside mesh elements (but can be, for example, piecewise polynomial on subdivisions of these elements); see [29, 30, 31]. FEEC and FES led to the unification of several families of finite elements and heavily hinge on the notion of subcomplex, which makes them naturally geared towards conforming approximations. While conforming methods are still widely used, their construction can only be carried out on conforming meshes, typically made of elements with simple shapes (e.g., tetrahedra or hexahedra); extensions to more general meshes, such as the barycentric dual of a simplicial mesh, have been considered, e.g., in [26]. In recent years, significant efforts have been made to develop and analyse numerical methods that support more general meshes including, e.g., general polytopal elements and non-matching interfaces; a representative but by no means exhaustive list of contributions includes [1, 6, 7, 13, 16, 18, 21, 22, 24, 33, 36, 38, 42, 45, 46, 48]. Polytopal technologies typically introduce some degree of non-conformity, either because they are formulated in a fully discrete setting (like Hybrid High-Order [36, 42] or Discrete de Rham - DDR methods [33, 38]) or through the use of projections (as in Virtual Element Methods - VEM [7]). Despite their non-conformity, polytopal technologies can be used to develop compatible frameworks. Polytopal discretisations of the de Rham complex (1.1) have been proposed, e.g., in [33, 38, 10], and applied to a variety of models, such as magnetostatics [8, 34], the Stokes equations [11], and the Yang-Mills equations [47]; they have also inspired further developments, based on the same principles, for other complexes of interest such as variants of the de Rham complex with increased regularity [32, 55], elasticity complexes [19, 44], and the Stokes complex [12, 14, 49]. Polytopal complexes have additionally been used to construct methods that are robust with respect to the variations of physical parameters, in particular for the Stokes problem [11], for the Reissner-Mindlin equation [43], or the Brinkman model [33]. Many of these models have also been tackled using finite element complexes and related methods (see, e.g., [2, 3, 27, 31]). However, due to their higher-level design, which does not require explicit expressions for the basis functions, polytopal methods offer distinctive advantages over finite elements. These include, in addition to the support of general meshes, the possibility to reduce the dimension of discrete spaces, sometimes below their finite element counterparts [35, Table 3], through systematic processes such as enhancement or serendipity [9, 35] The purpose of the present work is to take one step further and show how exterior calculus can be used to generalise the construction and analysis of polytopal complexes. More specifically, we present two discrete de Rham complexes in arbitrary dimension and with arbitrary approximation degree that generalise those introduced in [33] (DDR) and [8] (VEM). Three key features set these constructions apart from Finite Element complexes: * No explicit spaces of globally conforming differential forms (i.e., subspaces of \(H\Lambda(\Omega)\)) are sought. Instead, we work with _fully discrete spaces_ made of vectors of polynomial components on the mesh cells (of various dimensions). The meaning of these components is provided by the interpolators on the fully discrete spaces. 
* Due to the absence of explicit underlying conforming spaces, the differential operator of the complex cannot be the exterior derivative. Instead, a _discrete exterior derivative_ is constructed combining the polynomial components to mimic the Stokes formula. * _Discrete potentials_ are also designed, again mimicking the Stokes formula. They are piecewise (discontinuous) polynomial forms on the mesh used, in particular, to define an \(L^{2}\)-structure on the discrete spaces (an essential tool to discretise PDEs written in weak form). The choice of the polynomial components in the spaces and the design of discrete exterior derivatives and potentials revolve around two key properties: _polynomial consistency_, which is related to the ability to reproduce polynomial differential forms up to a selected polynomial degree, and _compatibility_, linked to the existence of an isomorphism between the cohomology of the discrete and continuous de Rham complexes. While both the DDR- and VEM-inspired constructions heavily rely on discrete versions of the Stokes formula, they do so in a radically different spirit: in the DDR construction, the choice of components in the discrete spaces is inspired by the formula to reconstruct a discrete exterior derivative, which is then used to construct discrete potentials. In the VEM construction, on the other hand, the space components (and, in particular, those associated with differentials) are chosen based on the formula used to define a discrete potential. While the choice in the DDR construction leads to leaner spaces, the study of its properties is more elaborated. Notice that, at this early stage, we haven't tried to identify the virtual (conforming) spaces that underlie the VEM-inspired construction and we have made no effort whatsoever in trying to reduce the dimension of the discrete spaces through serendipity. The rest of this work is organised as follows. In Section 2 we establish the setting. In Section 3 we present and analyse the discrete complex generalising the DDR construction of [33]. Section 4 contains the definition and analysis of the complex generalising the VEM construction of [8]. In Section 5, we discuss in greater detail similarities and differences with respect to the FEEC, FES and Distributional Differential Forms frameworks. Differential forms of any degree in dimensions 2 and 3 have interpretations in terms of vector fields. To make the exposition self-contained and improve the legibility for the reader not used to differential forms, we recall some facts on these so-called vector proxies in Appendix A, and we include throughout the exposition a series of examples to illustrate the development in the differential forms framework through vector calculus operators. ## 2 Setting We present here the main notions used in the construction of the polytopal complexes of differential forms. For the reader not used to the framework of differential forms, we recall in Appendix A some basic concepts and definitions. ### Spaces of differential forms Let \(M\) denote an \(n\)-dimensional manifold. In what follows, \(M\) will typically be a cell of a polytopal mesh (see Section 2.5 below), and thus a relatively open set in a subspace of \(\mathbb{R}^{m}\) for some \(m\geq n\). For any natural number \(\ell\) such that \(0\leq\ell\leq n\), we will denote by \(\Lambda^{\ell}(M)\) the space of differential \(\ell\)-forms (often just called \(\ell\)-forms) on \(M\) without explicit regularity requirements. 
When relevant, regularity is made explicit by prepending the appropriate space (e.g., \(L^{2}\Lambda^{\ell}(M)\) stands for square-integrable \(\ell\)-forms). ### Integration by parts We recall the following integration by parts (Stokes) formula: \[\int_{M}\mathrm{d}\omega\wedge\mu=(-1)^{\ell+1}\int_{M}\omega\wedge\mathrm{d}\mu+ \int_{\partial M}\mathrm{tr}_{\partial M}\,\omega\wedge\mathrm{tr}_{\partial M}\,\mu \qquad\forall(\omega,\mu)\in C^{1}\Lambda^{\ell}(\overline{M})\times C^{1} \Lambda^{n-\ell-1}(\overline{M}), \tag{2.1}\] where, for any form degree \(m\), \(\mathrm{tr}_{\partial M}:C^{0}\Lambda^{m}(\overline{M})\to C^{0}\Lambda^{m}( \partial M)\) is the trace operator. Formula (2.1) will provide the starting point to define discrete counterparts of the exterior derivative and of the corresponding potentials on mesh cells. It will also drive the choice of the components in the discrete spaces, geared at ensuring that the reconstructions preserve certain polynomial differential forms. ### Hodge star Assume now that \(M\) is an open set in a subspace of \(\mathbb{R}^{m}\). We denote by \(\star:\Lambda^{\ell}(M)\to\Lambda^{n-\ell}(M)\) the Hodge star operator, and we set \[\star^{-1}\coloneqq(-1)^{\ell(n-\ell)}\star, \tag{2.2}\] a notation justified observing that, for any \(\omega\in\mathrm{Alt}^{\ell}(V)\), \(\star^{-1}\star\omega=\omega\) (see (A.3) in the appendix). ### \(L^{2}\)-orthogonal projectors Integrating the inner product of \(\mathrm{Alt}^{\ell}(V)\) over \(M\) yields the inner product of \(L^{2}\Lambda^{\ell}(M)\). For any closed subspace \(\mathcal{X}\) of \(L^{2}\Lambda^{\ell}(M)\), we therefore have an \(L^{2}\)-orthogonal projector \(\pi_{\mathcal{X}}:L^{2}\Lambda^{\ell}(M)\to\mathcal{X}\) on \(\mathcal{X}\), defined by the following relation: For all \(\omega\in L^{2}\Lambda^{\ell}(M)\), \(\pi_{\mathcal{X}}\omega\in\mathcal{X}\) satisfies \[\int_{M}\pi_{\mathcal{X}}\omega\wedge\star\mu=\int_{M}\omega\wedge\star\mu \qquad\forall\mu\in\mathcal{X}. \tag{2.3}\] To improve legibility, we will introduce in the following sections specific notations to \(\pi_{\mathcal{X}}\) for some polynomial subspaces \(\mathcal{X}\) that are particularly relevant to our construction. For future use, we note the following property. **Lemma 1** (Projectors on subspaces of differential forms).: _Let \((k,d)\) be integers such that \(k\leq d\leq n\), \(f\in\Delta_{d}(\mathcal{M}_{h})\) and \(\mathcal{X}\) be a closed subspace of \(L^{2}\Lambda^{d-k}(f)\). Then, it holds: For all \(\omega\in L^{2}\Lambda^{k}(f)\) and all \(\mu\in\mathcal{X}\),_ \[\int_{f}\star^{-1}\pi_{\mathcal{X}}(\star\omega)\wedge\mu=\int_{f}\mu\wedge \star\pi_{\mathcal{X}}(\star\omega)=\int_{f}\omega\wedge\mu. \tag{2.4}\] Proof.: The first relation in (2.4) follows from (A.4). To prove the second relation, we write \[\int_{f}\mu\wedge\star\pi_{\mathcal{X}}(\star\omega)=\int_{f}\pi_{\mathcal{X} }(\star\omega)\wedge\star\mu=\int_{f}\mu\wedge(\star\star\omega)=\int_{f} \omega\wedge\mu,\] where the first equality follows from (A.4) (with \((\omega,\mu)\leftarrow(\pi_{\mathcal{X}}(\star\omega),\mu)\)), the cancellation of the projector is justified by its definition (2.3), the second equality is obtained using (A.4) again, and the conclusion follows from (A.3) and the anticommutativity (A.1) of \(\wedge\). ### Polytopal mesh From this point on, \(\Omega\) will denote a polytopal domain of \(\mathbb{R}^{n}\). 
We let \(\mathcal{M}_{h}\) denote a _polytopal mesh_ of \(\Omega\), i.e., a collection of disjoint polytopal sets (mesh entities) of dimensions in \([0,n]\), relatively open in their spanned affine space, such that the boundary of each \(d\)-cell (polytopal set of dimension \(d\)) is the union of mesh entities of dimension \(<d\), and such that any \(d\)-cell for \(d<n\) is contained in the boundary of some \((d+1)\)-cell. For any \(d\in[0,n]\), the set collecting all \(d\)-cells of \(\mathcal{M}_{h}\) is denoted by \(\Delta_{d}(\mathcal{M}_{h})\). Notice that this notion of polytopal mesh essentially coincides with that of CW-complex in algebraic topology. Thus, when \(\Omega\) is a domain in dimension \(n=3\), \(\mathcal{M}_{h}\) gathers the vertices collected in the set \(\mathcal{V}_{h}\coloneqq\Delta_{0}(\mathcal{M}_{h})\), the edges collected in the set \(\mathcal{E}_{h}\coloneqq\Delta_{1}(\mathcal{M}_{h})\), the faces collected in the set \(\mathcal{F}_{h}\coloneqq\Delta_{2}(\mathcal{M}_{h})\), and the elements collected in the set \(\mathcal{T}_{h}\coloneqq\Delta_{3}(\mathcal{M}_{h})\). For all \(f\in\mathcal{M}_{h}\), we select a point \(\boldsymbol{x}_{f}\in f\) which, when \(\mathcal{M}_{h}\) belongs to a refined mesh sequence, is assumed at a distance from the boundary of \(f\) comparable to the meshsize. If \(f\in\Delta_{d}(\mathcal{M}_{h})\) and \(d^{\prime}\leq d\) is an integer, we denote by \(\Delta_{d^{\prime}}(f)\) the set of subcells of \(f\) of dimension \(d^{\prime}\). Hence, if \(n=d=3\), so that \(f=T\in\mathcal{T}_{h}\) is a polyhedral element of the mesh, \(f\in\Delta_{d^{\prime}}(T)\) is a vertex of \(T\) if \(d^{\prime}=0\), an edge of \(T\) if \(d^{\prime}=1\), a polygonal face of \(T\) if \(d^{\prime}=2\), or \(T\) itself if \(d^{\prime}=3\). ### Local polynomial spaces of differential forms Let \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(0\leq d\leq n\). For any integer \(r\geq 0\), we denote by \(\mathcal{P}_{r}\Lambda^{\ell}(f)\) the space of polynomial \(\ell\)-forms of total degree \(\leq r\) on \(f\). We also adopt the standard convention \(\mathcal{P}_{-1}\Lambda^{\ell}(f)\coloneqq\{0\}\). We denote by \(\pi^{\ell}_{r,f}:L^{2}\Lambda^{\ell}(f)\to\mathcal{P}_{r}\Lambda^{\ell}(f)\) the \(L^{2}\)-orthogonal projector on \(\mathcal{P}_{r}\Lambda^{\ell}(f)\), defined by (2.3) with \(\mathcal{X}=\mathcal{P}_{r}\Lambda^{\ell}(f)\). The Koszul differential on \(f\) (translated by \(\boldsymbol{x}_{f}\)) is denoted by \(\kappa\) so that, for all \(\omega\in\Lambda^{\ell}(f)\), \(\kappa\omega\in\Lambda^{\ell-1}(f)\) satisfies \((\kappa\omega)_{\boldsymbol{x}}(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell-1})=\omega_{\boldsymbol{x}}(\boldsymbol{x}-\boldsymbol{x}_{f},\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell-1})\) for all vectors \(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell-1}\) tangent to \(f\). For any \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(1\leq d\leq n\), any integer \(\ell\in[0,d]\), and any polynomial degree \(r\geq 0\), we define the Koszul complement space as \[\mathcal{K}^{\ell}_{r}(f)\coloneqq\kappa\mathcal{P}_{r-1}\Lambda^{\ell+1}(f). \tag{2.5}\] The indices \(r\) and \(\ell\) in this notation serve as a reminder that elements in \(\mathcal{K}^{\ell}_{r}(f)\) are polynomial \(\ell\)-forms of polynomial degree \(r\).
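To make the definition (2.5) concrete, here is a minimal example (ours, not part of the source): take a face \(f\in\Delta_{2}(\mathcal{M}_{h})\) with tangential coordinates \((x_{1},x_{2})\), write \(\boldsymbol{x}_{f}=(x_{f,1},x_{f,2})\), and let \(\ell=1\) and \(r=1\). Then \(\mathcal{P}_{0}\Lambda^{2}(f)=\operatorname{span}\{\mathrm{d}x_{1}\wedge\mathrm{d}x_{2}\}\) and, by the definition of \(\kappa\), \[\kappa(\mathrm{d}x_{1}\wedge\mathrm{d}x_{2})_{\boldsymbol{x}}(\boldsymbol{v})=(\mathrm{d}x_{1}\wedge\mathrm{d}x_{2})(\boldsymbol{x}-\boldsymbol{x}_{f},\boldsymbol{v})=(x_{1}-x_{f,1})\,\mathrm{d}x_{2}(\boldsymbol{v})-(x_{2}-x_{f,2})\,\mathrm{d}x_{1}(\boldsymbol{v}),\] so that \(\mathcal{K}^{1}_{1}(f)=\operatorname{span}\{(x_{1}-x_{f,1})\,\mathrm{d}x_{2}-(x_{2}-x_{f,2})\,\mathrm{d}x_{1}\}\) is a one-dimensional space of polynomial \(1\)-forms of degree \(1\); its vector proxy is spanned by \(\boldsymbol{x}-\boldsymbol{x}_{f}\) or by its rotation through a right angle, depending on the identification chosen in Example 2 below.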
Note also that, since \(\mathcal{P}_{-1}\Lambda^{\ell}(f)=\{0\}\) and \(\Lambda^{d+1}(f)=\{0\}\), we have \[\mathcal{K}^{\ell}_{0}(f)=\mathcal{K}^{d}_{r}(f)=\{0\}\text{ for all }\ell\text{ and all }r\text{, respectively.} \tag{2.6}\] Moreover, since \(\kappa\Lambda^{0}(f)=\{0\}\), we adopt the convention \(\mathcal{K}^{-1}_{r}(f)\coloneqq\{0\}\) for all \(r\). We denote by \(\pi^{\mathcal{K},\ell}_{r,f}\) the \(L^{2}\)-orthogonal projector \(L^{2}\Lambda^{\ell}(f)\to\mathcal{K}^{\ell}_{r}(f)\), defined by (2.3) with \(\mathcal{X}=\mathcal{K}^{\ell}_{r}(f)\). For all integers \(r\geq 0\) and \(\ell\in[0,d]\), the following direct decomposition holds (see [4, Eq. (3.11)] for \(\ell\geq 1\), the case \(\ell=0\) can be directly checked): \[\mathcal{P}_{r}\Lambda^{0}(f) =\mathcal{P}_{0}\Lambda^{0}(f)\oplus\mathcal{K}^{0}_{r}(f), \tag{2.7a}\] \[\mathcal{P}_{r}\Lambda^{\ell}(f) =\mathrm{d}\mathcal{P}_{r+1}\Lambda^{\ell-1}(f)\oplus\mathcal{K}^ {\ell}_{r}(f)\quad\text{ if }\ell\geq 1. \tag{2.7b}\] Since \(\mathrm{d}\circ\mathrm{d}=0\) and \(\mathrm{d}\mathcal{P}_{0}\Lambda^{0}(f)=\{0\}\), this shows that \[\mathrm{d}\mathcal{P}_{r}\Lambda^{\ell}(f)=\mathrm{d}\mathcal{K}^{\ell}_{r}(f). \tag{2.8}\] Applying this relation to \((r+1,\ell-1)\) instead of \((r,\ell)\) and recalling that \(\mathrm{d}\) is one-to-one on \(\mathcal{K}^{\ell-1}_{r+1}(f)\) (see [4, Theorem 3.2]), this shows that, for \(\ell\geq 1\), the following mapping is an isomorphism: \[\mathcal{K}^{\ell-1}_{r+1}(f)\times\mathcal{K}^{\ell}_{r}(f) \xrightarrow{\cong}\mathcal{P}_{r}\Lambda^{\ell}(f), \tag{2.9}\] \[(\mu,\nu) \mapsto\mathrm{d}\mu+\nu.\] **Example 2** (Interpretation in terms of vector proxies).: _In the case \(n=3\), thanks to the links between differential forms and vector proxies (see Appendix A), we can associate to each space of polynomial differential forms a space of (vector- or scalar-valued) polynomial fields. Let us consider decomposition (2.7b). We denote by \(f_{d}\) a \(d\)-cell of \(\mathcal{M}_{h}\), and we use a notation analogous to that of [33] for polynomial spaces and vector calculus differential operators (with the exception that polynomial degrees are in subscripts instead of superscripts). 
Then, by definition (2.5) of the Koszul space, when \(f_{3}=T\in\mathcal{T}_{h}=\Delta_{3}(\mathcal{M}_{h})\) is a mesh element, we have_ \[\mathrm{d}\mathcal{P}_{r+1}\Lambda^{0}(f_{3})\leftrightarrow\mathbf{\mathcal{G}}_{r}(T)\coloneqq\mathbf{grad}\,\mathcal{P}_{r+1}(T),\qquad\mathcal{K}_{r}^{1}(f_{3})\leftrightarrow\mathbf{\mathcal{G}}_{r}^{\mathrm{c}}(T)\coloneqq(\mathbf{x}-\mathbf{x}_{T})\times\mathbf{\mathcal{P}}_{r-1}(T),\] \[\mathrm{d}\mathcal{P}_{r+1}\Lambda^{1}(f_{3})\leftrightarrow\mathbf{\mathcal{R}}_{r}(T)\coloneqq\mathbf{curl}\,\mathbf{\mathcal{P}}_{r+1}(T),\qquad\mathcal{K}_{r}^{2}(f_{3})\leftrightarrow\mathbf{\mathcal{R}}_{r}^{\mathrm{c}}(T)\coloneqq(\mathbf{x}-\mathbf{x}_{T})\mathcal{P}_{r-1}(T),\] \[\mathrm{d}\mathcal{P}_{r+1}\Lambda^{2}(f_{3})\leftrightarrow\operatorname{div}\mathbf{\mathcal{P}}_{r+1}(T)=\mathcal{P}_{r}(T),\qquad\mathcal{K}_{r}^{3}(f_{3})=\{0\},\] _where the first identity in the last line results from the surjectivity of the divergence operator._ _On the other hand, when \(f_{2}=F\in\mathcal{F}_{h}=\Delta_{2}(\mathcal{M}_{h})\) is a mesh face, we obtain the following pair of possible correspondences:_ \[\mathrm{d}\mathcal{P}_{r+1}\Lambda^{0}(f_{2})\leftrightarrow\mathbf{\mathcal{G}}_{r}(F)\coloneqq\mathbf{grad}_{F}\,\mathcal{P}_{r+1}(F),\qquad\mathcal{K}_{r}^{1}(f_{2})\leftrightarrow\mathbf{\mathcal{G}}_{r}^{\mathrm{c}}(F)\coloneqq(\mathbf{x}-\mathbf{x}_{F})^{\perp}\mathcal{P}_{r-1}(F) \tag{2.10}\] _or_ \[\mathrm{d}\mathcal{P}_{r+1}\Lambda^{0}(f_{2})\leftrightarrow\mathbf{\mathcal{R}}_{r}(F)\coloneqq\mathbf{rot}_{F}\,\mathcal{P}_{r+1}(F),\qquad\mathcal{K}_{r}^{1}(f_{2})\leftrightarrow\mathbf{\mathcal{R}}_{r}^{\mathrm{c}}(F)\coloneqq(\mathbf{x}-\mathbf{x}_{F})\mathcal{P}_{r-1}(F), \tag{2.11}\] _where, for any \(\mathbf{v}\in\mathbb{R}^{2}\), \(\mathbf{v}^{\perp}=\varrho_{-\pi/2}\mathbf{v}\) is the clockwise rotation of \(\mathbf{v}\) with respect to the orientation of \(F\). The existence of two possible correspondences between polynomial \(1\)-forms and polynomial vector fields is due to the fact that, when \(d=2\), one can identify a \(1\)-form either with a vector field \(\mathbf{v}=(v_{1},v_{2})\) or with its rotation through a right angle (cf. [2, Chapter 6]); in particular, we choose to identify it with the clockwise rotation \(\mathbf{v}^{\perp}=(v_{2},-v_{1})\) (see Appendix A for further details). By (2.6), we have \(\mathcal{K}_{r}^{2}(f_{2})=\{0\}\) and, according to whether we consider the vector proxy leading to (2.10) or (2.11),_ \[\mathrm{d}\mathcal{P}_{r+1}\Lambda^{1}(f_{2})\leftrightarrow\operatorname{rot}_{F}\,\mathbf{\mathcal{P}}_{r+1}(F)=\mathcal{P}_{r}(F)\quad\text{or}\quad\mathrm{d}\mathcal{P}_{r+1}\Lambda^{1}(f_{2})\,\leftrightarrow\operatorname{div}_{F}\,\mathbf{\mathcal{P}}_{r+1}(F)=\mathcal{P}_{r}(F).\] _Hence, since both \(1\)-forms and \(2\)-forms in \(\mathbb{R}^{3}\) can be identified with vector fields, and accounting for the two-fold identification of \(1\)-forms in \(\mathbb{R}^{2}\), the decomposition (2.7b) reads, in terms of proxies,_ \[\mathbf{\mathcal{P}}_{r}(f_{d})=\mathbf{\mathcal{G}}_{r}(f_{d})\oplus\mathbf{\mathcal{G}}_{r}^{\mathrm{c}}(f_{d})=\mathbf{\mathcal{R}}_{r}(f_{d})\oplus\mathbf{\mathcal{R}}_{r}^{\mathrm{c}}(f_{d}),\quad d\in\{2,3\},\] _i.e., the same expressions as [33, Eqs. (2.4) and (2.6)].
On the other hand, concerning \(0\)-forms, the decomposition (2.7a) reads, in terms of proxies,_ \[\mathcal{P}_{r}(f_{d})=\mathcal{P}_{0}(f_{d})\oplus\mathcal{P}_{r}^{\mathrm{b }}(f_{d}),\quad d\in\{0,\dots,3\},\] _where we have introduced the notation \(\mathcal{P}_{r}^{\mathrm{b}}(f)\coloneqq(\mathbf{x}-\mathbf{x}_{f})\cdot\mathbf{\mathcal{ P}}_{r-1}(f)\) for any \(f\in\Delta_{d}(\mathcal{M}_{h})\)._ ### Trimmed local polynomial spaces We recall the following local trimmed polynomial spaces (see e.g. [4, Theorem 3.5]): For any \(f\in\,\Delta_{d}(\mathcal{M}_{h})\), \(1\leq d\leq n\), \[\mathcal{P}_{r}^{-}\Lambda^{0}(f) =\mathcal{P}_{r}\Lambda^{0}(f), \tag{2.12a}\] \[\mathcal{P}_{r}^{-}\Lambda^{\ell}(f) =\mathrm{d}\mathcal{P}_{r}\Lambda^{\ell-1}(f)\oplus\mathcal{K}_{r }^{\ell}(f)\qquad\text{for $\ell\geq 1$}. \tag{2.12b}\] In (2.12b), comparing with the decompositions (2.7), we have decreased by one the polynomial degree of the first space in the direct sum. Note that this definition leads to the choice \[\mathcal{P}_{r}^{-}\Lambda^{0}(f)\coloneqq\mathcal{P}_{r}\Lambda^{0}(f) \cong\mathbb{R}\qquad\forall f\in\Delta_{0}(\mathcal{M}_{h}). \tag{2.13}\] The \(L^{2}\)-orthogonal projector \(L^{2}\Lambda^{\ell}(f)\to\mathcal{P}_{r}^{-}\Lambda^{\ell}(f)\) is denoted by \(\pi_{r,f}^{-,\ell}\), and is defined by (2.3) with \(\mathcal{X}=\mathcal{P}_{r}^{-}\Lambda^{\ell}(f)\). Let us note a few properties of trimmed polynomial spaces. For \(r=0\), only the space (2.12a) is non-trivial, that is, \(\mathcal{P}_{0}^{-}\Lambda^{\ell}(f)=\{0\}\) if \(\ell\in[1,d]\). Applying, if \(r\geq 1\) and \(\ell\geq 1\), (2.7b) with \(r-1\) instead of \(r\) and noting that \(\mathcal{K}_{r-1}^{\ell}(f)\subset\mathcal{K}_{r}^{\ell}(f)\), we obtain the equality \[\mathcal{P}_{r}^{-}\Lambda^{\ell}(f)=\mathcal{P}_{r-1}\Lambda^{\ell}(f)+ \mathcal{K}_{r}^{\ell}(f). \tag{2.14}\] This equality, which obviously also holds for \(\ell=0\) (see (2.7a)), shows that trimmed polynomial spaces sit between full polynomial spaces: \[\mathcal{P}_{r-1}\Lambda^{\ell}(f)\subset\mathcal{P}_{r}^{-}\Lambda^{\ell}(f )\subset\mathcal{P}_{r}\Lambda^{\ell}(f).\] Recalling that \(\mathcal{K}_{r}^{d}(f)=\{0\}\) and that \(\mathrm{d}\mathcal{P}_{r}\Lambda^{d-1}(f)=\mathcal{P}_{r-1}\Lambda^{d}(f)\) (by exactness of the tail of the polynomial de Rham sequence [2, Corollary 7.3]), it holds \[\mathcal{P}_{r}^{-}\Lambda^{d}(f)=\mathcal{P}_{r-1}\Lambda^{d}(f). 
\tag{2.15}\] Applying (2.8) with \(\ell-1\) instead of \(\ell\), we moreover have \[\mathcal{P}_{r}^{-}\Lambda^{\ell}(f)=\mathrm{d}\mathcal{K}_{r}^{\ell-1}(f)+\mathcal{K}_{r}^{\ell}(f)\qquad\text{for $\ell\geq 1$.} \tag{2.16}\] Since \(\mathrm{d}\) is one-to-one on \(\mathcal{K}_{r}^{\ell-1}(f)\), this gives the following isomorphism, whenever \(\ell\geq 1\): \[\mathcal{K}_{r}^{\ell-1}(f)\times\mathcal{K}_{r}^{\ell}(f)\xrightarrow{\cong}\mathcal{P}_{r}^{-}\Lambda^{\ell}(f),\qquad(\mu,\nu)\mapsto\mathrm{d}\mu+\nu. \tag{2.17}\] **Lemma 3** (Trace of trimmed spaces).: _Let \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(1\leq d\leq n\), and let \(f^{\prime}\in\Delta_{d^{\prime}}(f)\) with \(d^{\prime}\leq d\). Then, for all integers \(r\geq 0\) and \(0\leq\ell\leq d^{\prime}\), it holds \(\mathrm{tr}_{f^{\prime}}\,\mathcal{P}_{r}^{-}\Lambda^{\ell}(f)\subset\mathcal{P}_{r}^{-}\Lambda^{\ell}(f^{\prime})\)._ Proof.: The trace is a pullback, so it commutes with \(\mathrm{d}\), and we thus have \[\mathrm{tr}_{f^{\prime}}(\mathrm{d}\mathcal{P}_{r}\Lambda^{\ell-1}(f))=\mathrm{d}(\mathrm{tr}_{f^{\prime}}\,\mathcal{P}_{r}\Lambda^{\ell-1}(f))\subset\mathrm{d}\mathcal{P}_{r}\Lambda^{\ell-1}(f^{\prime}),\] where the inclusion holds since the trace preserves full polynomial spaces. Given the definition (2.12b) of the trimmed spaces, the lemma follows if we show that \[\mathrm{tr}_{f^{\prime}}\,\mathcal{K}_{r}^{\ell}(f)\subset\mathcal{P}_{r}^{-}\Lambda^{\ell}(f^{\prime})=\mathcal{P}_{r-1}\Lambda^{\ell}(f^{\prime})+\mathcal{K}_{r}^{\ell}(f^{\prime}) \tag{2.18}\] (where the equality follows from (2.14) applied to \(f^{\prime}\) instead of \(f\)). Let \(\omega\in\mathcal{P}_{r-1}\Lambda^{\ell+1}(f)\). The definitions of \(\mathrm{tr}_{f^{\prime}}\) and \(\kappa_{f}\) give, for any \(\boldsymbol{x}\in f^{\prime}\) and \(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell}\) tangent to \(f^{\prime}\), \[\mathrm{tr}_{f^{\prime}}(\kappa_{f}\omega)_{\boldsymbol{x}}(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell})=\omega_{\boldsymbol{x}}(\boldsymbol{x}-\boldsymbol{x}_{f},\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell})\] \[=\omega_{\boldsymbol{x}}(\boldsymbol{x}_{f^{\prime}}-\boldsymbol{x}_{f},\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell})+\omega_{\boldsymbol{x}}(\boldsymbol{x}-\boldsymbol{x}_{f^{\prime}},\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell})\] \[=\alpha_{\boldsymbol{x}}(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell})+(\kappa_{f^{\prime}}\,\mathrm{tr}_{f^{\prime}}\,\omega)_{\boldsymbol{x}}(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{\ell}),\] where we have used the linearity of \(\omega_{\boldsymbol{x}}\) with respect to its first argument to obtain the second equality, and introduced the differential form \(\alpha\coloneqq\omega(\boldsymbol{x}_{f^{\prime}}-\boldsymbol{x}_{f},\cdot)\) in the third equality.
Hence, \(\mathrm{tr}_{f^{\prime}}(\kappa_{f}\omega)=\alpha+\kappa_{f^{\prime}}\, \mathrm{tr}_{f^{\prime}}\,\omega\), which proves (2.18) since \(\alpha\in\mathcal{P}_{r-1}\Lambda^{\ell}(f^{\prime})\) (as \(\boldsymbol{x}_{f^{\prime}}-\boldsymbol{x}_{f}\) is constant) and \(\mathrm{tr}_{f^{\prime}}\,\omega\in\mathcal{P}_{r-1}\Lambda^{\ell+1}(f^{ \prime})\). ## 3 Discrete de Rham complex We define in this section a discrete counterpart of the de Rham complex of differential forms (1.2) in the spirit of [33, 38]. Let, from this point on, an integer \(r\geq 0\) be fixed corresponding to the polynomial degree of the discrete sequence. The general idea is, for each form degree \(k\in[0,n]\), to select the polynomial components of the discrete spaces in order to reconstruct, on each \(d\)-cell \(f\) and iteratively on the dimension \(d\): * A _discrete exterior derivative_ in the full polynomial space \(\mathcal{P}_{r}\Lambda^{k+1}(f)\) that can reproduce exactly the exterior derivative of differential forms in \(\mathcal{P}_{r+1}^{-}\Lambda^{k}(f)\); * Based on this discrete exterior derivative and on traces on \((d-1)\)-cells (either directly available or reconstructed), a _discrete potential_ in \(\mathcal{P}_{r}\Lambda^{k}(f)\) that can reproduce exactly differential forms belonging to this same space. ### Definition #### 3.1.1 Discrete spaces The discrete counterpart \(\underline{X}_{r,h}^{k}\) of the space \(H\Lambda^{k}(\Omega)\), \(0\leq k\leq n\), is defined as \[\underline{X}_{r,h}^{k}\coloneqq\bigtimes_{d=k}^{n}\bigtimes_{f\in\Delta_{d}( \mathcal{M}_{h})}\mathcal{P}_{r}^{-}\Lambda^{d-k}(f). \tag{3.1}\] We define the restrictions of the global space (3.1) to a mesh entity or its boundary as follows: For all integers \(k\) and \(d\) such that \(0\leq k\leq d\leq n\) and all \(f\in\Delta_{d}(\mathcal{M}_{h})\), \[\underline{X}_{r,f}^{k}\coloneqq\bigtimes_{d^{\prime}=k}^{d}\bigtimes_{f^{ \prime}\in\Delta_{d^{\prime}}(f)}\mathcal{P}_{r}^{-}\Lambda^{d^{\prime}-k}(f^{ \prime})\quad\text{and}\quad\underline{X}_{r,\partial f}^{k}\coloneqq\bigtimes_{d ^{\prime}=k}^{d-1}\bigtimes_{f^{\prime}\in\Delta_{d^{\prime}}(f)}\mathcal{P}_{r }^{-}\Lambda^{d^{\prime}-k}(f^{\prime})\text{ if }d\geq 1.\] We shall use the notation \(\underline{\omega}_{h}=(\omega_{f})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k,n]}\in\underline{X}_{r,h}^{k}\) for a generic element of the global discrete space of \(k\)-forms and \(\underline{\omega}_{f}=(\omega_{f^{\prime}})_{f^{\prime}\in\Delta_{d^{\prime}}(f ),\,d^{\prime}\in[k,d]}\in\underline{X}_{r,f}^{k}\) (resp., \(\underline{\omega}_{\partial f}=(\omega_{f^{\prime}})_{f^{\prime}\in\Delta_{d^{ \prime}}(f),\,d^{\prime}\in[k,d-1]}\in\underline{X}_{r,h}^{k}\)). \(\underline{X}^{k}_{r,\partial f}\)) for its restriction to \(f\) (resp., \(\partial f\)), obtained collecting the components on the mesh entities \(f^{\prime}\in\Delta_{d^{\prime}}(f)\), \(d^{\prime}\in[k,d]\) (resp., \(d^{\prime}\in[k,d-1]\)). As a generic convention in this article, underlined letters denote spaces or vectors made of polynomial components on mesh entities. Table 1 gives an overview of the polynomial unknowns in \(\underline{X}^{k}_{r,f}\), along with their vector proxies, in dimensions 0 to 3. #### 3.1.2 Interpolators and interpretation of the polynomial components The precise meaning of the components in each DDR space is provided by the corresponding interpolator. 
For \(f\in\Delta_{d}(\mathcal{M}_{h})\) and \(k\leq d\), the interpolator \(\underline{I}^{k}_{r,f}:C^{0}\Lambda^{k}(\overline{f})\to\underline{X}^{k}_{r,f}\) is defined by: For all \(\omega\in C^{0}\Lambda^{k}(\overline{f})\), \[\underline{I}^{k}_{r,f}\,\omega\coloneqq(\pi^{-,d^{\prime}-k}_{r,f^{\prime}}( \star\operatorname{tr}_{f^{\prime}}\omega))_{f^{\prime}\in\Delta_{d^{\prime }}(f),\,d^{\prime}\in[k,d]}. \tag{3.2}\] In other words, a discrete \(k\)-form on the mesh is made of polynomial forms attached to each mesh entity of dimension \(d\geq k\); on each such entity, the form is of degree \(d-k\) as it corresponds to the Hodge star of an underlying \(k\)-form. The Hodge star operator is used in the definition of the polynomial components to ensure that the full space \(\mathcal{P}_{r}\Lambda^{0}(f)\) (see (2.12a)) is attached to the lowest-dimensional cells \(f\in\Delta_{k}(\mathcal{M}_{h})\). #### 3.1.3 Local discrete potentials and discrete exterior derivative Let \(0\leq k\leq n\) be a fixed integer. For all \(f\in\Delta_{d}(\mathcal{M}_{h})\) with \(d\geq k\), we define the _discrete potential_\(P^{k}_{r,f}:\underline{X}^{k}_{r,f}\to\mathcal{P}_{r}\Lambda^{k}(f)\) and, if \(d\geq k+1\), the _discrete exterior derivative_\(\mathrm{d}^{k}_{r,f}:\underline{X}^{k}_{r,f}\to\mathcal{P}_{r}\Lambda^{k+1}(f)\) recursively on the dimension \(d\) as follows: * If \(d=k\), then the discrete potential on \(f\) is directly given by the component of \(\underline{\omega}_{f}\) on \(f\): \[P^{k}_{r,f}\,\underline{\omega}_{f}\coloneqq\star^{-1}\omega_{f}\,\in\mathcal{ P}_{r}\Lambda^{d}(f).\] (3.3) * If \(k+1\leq d\leq n\): \begin{table} \begin{tabular}{c|c c c c} \hline \hline \(k\)\({}^{d}\) & 0 & 1 & 2 & 3 \\ \hline 0 & \(\mathbb{R}=\mathcal{P}_{r}\Lambda^{0}(f_{0})\) & \(\mathcal{P}_{r-1}\Lambda^{1}(f_{1})\) & \(\mathcal{P}_{r-1}\Lambda^{2}(f_{2})\) & \(\mathcal{P}_{r-1}\Lambda^{3}(f_{3})\) \\ 1 & & \(\mathcal{P}_{r}\Lambda^{0}(f_{1})\) & \(\mathcal{P}_{r}^{-}\Lambda^{1}(f_{2})\) & \(\mathcal{P}_{r}^{-}\Lambda^{2}(f_{3})\) \\ 2 & & & \(\mathcal{P}_{r}\Lambda^{0}(f_{2})\) & \(\mathcal{P}_{r}^{-}\Lambda^{1}(f_{3})\) \\ 3 & & & & \(\mathcal{P}_{r}\Lambda^{0}(f_{3})\) \\ \hline \(k\)\({}^{d}\) & 0 & 1 & 2 & 3 \\ \hline 0 & \(\mathbb{R}=\mathcal{P}_{r}\left(f_{0}\right)\) & \(\mathcal{P}_{r-1}(f_{1})\) & \(\mathcal{P}_{r-1}(f_{2})\) & \(\mathcal{P}_{r-1}(f_{3})\) \\ 1 & & \(\mathcal{P}_{r}\left(f_{1}\right)\) & \(\mathcal{RT}_{r}\left(f_{2}\right)\) & \(\mathcal{RT}_{r}\left(f_{3}\right)\) \\ 2 & & & \(\mathcal{P}_{r}\left(f_{2}\right)\) & \(\mathcal{N}_{r}\left(f_{3}\right)\) \\ 3 & & & & \(\mathcal{P}_{r}\left(f_{3}\right)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Polynomial components attached to each mesh entity \(f_{d}\) of dimension \(d\in\{0,\ldots,3\}\) for the space \(\underline{X}^{k}_{r,h}\) for \(k\in\{0,\ldots,3\}\) (top) and counterpart through vector proxies (bottom). 1. 
First, the discrete exterior derivative is defined by: For all \(\underline{\omega}_{f}\in\underline{X}_{r,f}^{k}\), \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge\mu=(-1)^{k+1} \int_{f}\star^{-1}\omega_{f}\,\wedge\mathrm{d}\mu+\int_{\partial f}P_{r, \partial f}^{k}\,\underline{\omega}_{\partial f}\,\wedge\mathrm{tr}_{\partial f }\,\mu\\ \forall\mu\in\mathcal{P}_{r}\Lambda^{d-k-1}(f),\] (3.4) where we have introduced the piecewise polynomial boundary potential \(P_{r,\partial f}^{k}:\underline{X}_{r,\partial f}^{k}\to\Lambda^{k}(\partial f)\) such that \((P_{r,\partial f}^{k}\,)_{|f^{\prime}}\coloneqq P_{r,f^{\prime}}^{k}\), for all \(f^{\prime}\in\Delta_{d-1}(f)\) (\(P_{r,f^{\prime}}^{k}\), being the discrete potential on the \((d-1)\)-cell \(f^{\prime}\) defined at the previous step). 2. Then, the discrete potential on the \(d\)-cell \(f\) is given by: For all \(\underline{\omega}_{f}\in\underline{X}_{r,f}^{k}\), \[(-1)^{k+1}\int_{f}P_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge( \mathrm{d}\mu+\nu)\\ =\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge\mu -\int_{\partial f}P_{r,\partial f}^{k}\,\underline{\omega}_{\partial f}\, \wedge\mathrm{tr}_{\partial f}\,\mu+(-1)^{k+1}\int_{f}\star^{-1}\omega_{f}\, \wedge\nu\\ \forall(\mu,\nu)\in\mathcal{K}_{r+1}^{d-k-1}(f)\times\mathcal{K}_ {r}^{d-k}(f).\] (3.5) Some remarks are in order. _Remark 5_ (Definitions (3.4) and (3.5)).: The fact that condition (3.4) defines \(\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\) uniquely is an immediate consequence of the Riesz representation theorem for \(\mathcal{P}_{r}\Lambda^{k+1}(f)\) equipped with the \(L^{2}\)-product \((\rho,\beta)\ni L^{2}\Lambda^{k+1}(f)\times L^{2}\Lambda^{k+1}(f)\mapsto\int_ {f}\rho\wedge\star\beta\in\mathbb{R}\), after observing that (3.4) can be equivalently reformulated as follows (notice the change in the degree of the test differential form, with \(\beta\) below corresponding to \(\star^{-1}\mu\) in (3.4)): \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge\star\beta=(-1)^{k +1}\int_{f}\,\omega_{f}\,\wedge\star\mathrm{d}\star\beta+\int_{\partial f}\,P_ {r,\partial f}^{k}\,\underline{\omega}_{\partial f}\,\wedge\mathrm{tr}_{ \partial f}\,\star\beta\qquad\forall\beta\in\mathcal{P}_{r}\Lambda^{k+1}(f),\] where we have additionally used (A.4) for the first term in the right-hand side. Similar considerations apply to the definition (3.5) of \(P_{r,f}^{k}\), applying the isomorphism (2.9) with \(\ell=d-k\geq 1\). _Remark 6_ (Validity of (3.5)).: For \(k+1\leq d\leq n\), equation (3.5) actually holds for all \(\mu\in\mathcal{P}_{r+1}^{-}\Lambda^{d-k-1}(f)\). To prove this assertion, since (3.5) holds for \(\mu\in\mathcal{K}_{r+1}^{d-k-1}(f)\), it suffices to show that it also holds for \(\nu=0\) and \(\mu\) belonging to \(\mathcal{P}_{0}\Lambda^{0}(f)\) if \(d=k+1\) (see (2.12a) and (2.7a)) or \(\mathrm{d}\mathcal{P}_{r+1}\Lambda^{d-k-2}(f)\) if \(d\geq k+2\) (see (2.12b)). In both cases, we have \(\mathrm{d}\mu=0\), so that the left-hand side of (3.5) vanishes; since \(\mu\in\mathcal{P}_{r}\Lambda^{d-k-1}(f)\), the right-hand side of (3.5) also vanishes due to the definition (3.4) of the discrete exterior derivative, which concludes the argument. 
_Remark 7_ (Potential for \(k=0\)).: In the case \(k=0\), we can define an improved potential \(P_{r+1,f}^{0}\,:\underline{X}_{r,f}^{0}\to\mathcal{P}_{r+1}\Lambda^{0}(f)\) of polynomial degree \(r+1\) (instead of \(r\)) as follows: For all \(\underline{\omega}_{f}\,\in\underline{X}_{r,f}^{0}\), * If \(d=0\), then \(P_{r+1,f}^{0}\,\underline{\omega}_{f}=\star^{-1}\omega_{f}\,\in\mathcal{P}_{r} \Lambda^{0}(f)\cong\mathbb{R}\cong\mathcal{P}_{r+1}\Lambda^{0}(f)\) (since \(f\) has dimension \(0\)); * If \(1\leq d\leq n\), \[-\int_{f}\,P_{r+1,f}^{0}\,\underline{\omega}_{f}\,\wedge\mathrm{d}\mu=\int_{f }\,\mathrm{d}_{r,f}^{0}\,\underline{\omega}_{f}\,\wedge\mu-\int_{\partial f}\,P_ {r+1,\partial f}^{0}\,\underline{\omega}_{\partial f}\,\wedge\mathrm{tr}_{ \partial f}\,\mu\qquad\forall\mu\in\mathcal{K}_{r+2}^{d-1}(f).\] (3.6) This definition is justified by the isomorphism (2.9) with \(\ell=d\) and \(r+1\) instead of \(r\) (recalling that \(\mathcal{K}_{r+1}^{d}(f)=\{0\}\)), and it can easily be checked, testing (3.5) and (3.6) with \(\mu\in\mathcal{K}_{r+1}^{d-1}(f)\), that \(\pi_{r,f}^{0}\,P_{r+1,f}^{0}\,\underline{\omega}_{f}=P_{r,f}^{0}\,\underline{ \omega}_{f}\,\). We will moreover see in Remark 18 that \(P_{r+1,f}^{0}\) enjoys optimal consistency properties. _Remark 8_ (Space of DDR potentials).: The space of DDR reconstructed potentials, that is, \[\{(P_{r,f}^{k}\underline{\omega}_{h})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k,n ]\,\,:\,\underline{\omega}_{h}\in\underline{X}_{r,h}^{k}\}\] cannot be considered as a space of differential forms with global regularity, as the reconstructed polynomials do not have any compatibility condition of the traces; they are inherently piecewise discontinuous polynomials. **Example 9** (Interpretation in terms of vector proxies).: _We start by considering \(k=0\). In this case, formula (3.3) means that (constant) real values are attached to the vertices \(f_{0}=V\in\mathcal{V}_{h}=\Delta_{0}(\mathcal{M}_{h})\) of the mesh, so that an iterative procedure can be initialised to reconstruct discrete gradients and related traces/potentials over higher-dimensional cells. Indeed, formula (3.4) reconstructs a (scalar) gradient over edges \(f_{1}=E\in\mathcal{E}_{h}=\Delta_{1}(\mathcal{M}_{h})\) (i.e., the derivative along the direction given by the orientation of \(E\)) based on the values at the vertices and the value on the edge itself. This edge gradient, in turn, enters (3.5) to define a scalar edge trace over \(E\). When \(d\) takes the values \(2\) and \(3\), the successive application of formulas (3.4)-(3.5) defines, respectively, the pairs (face gradient, scalar face trace) on mesh faces \(f_{2}=F\in\mathcal{F}_{h}=\Delta_{2}(\mathcal{M}_{h})\), and (element gradient, scalar element potential) on mesh elements \(f_{3}=T\in\mathcal{T}_{h}=\Delta_{3}(\mathcal{M}_{h})\)._ _Let us now turn to the case \(k=1\), for which we provide more details. The vector proxy for the space \(\underline{X}_{r,h}^{1}\) is the space_ _and, with standard DDR notation, we denote by \(\underline{X}_{\text{\bf curl},Y}^{r}\) its restriction to a mesh element or face \(Y\in\mathcal{T}_{h}\cup\mathcal{F}_{h}\). 
By (3.3) with \(d=k=1\), the reconstruction process is initialised by \(1\)-forms, whose vector proxies are scalar-valued polynomials of degree \(r\) over edges \(f_{1}=E\in\mathcal{E}_{h}\) that play the role of edge tangential traces._ _Then, for each mesh face \(f_{2}=F\in\mathcal{F}_{h}\), we sequentially reconstruct a scalar face curl \(C_{F}^{r}:\underline{X}_{\text{\bf curl},F}^{r}\to\mathcal{P}_{r}(F)\) by (3.4) with \(d=k+1=2\) and a vector face tangential trace \(\boldsymbol{\gamma}_{\text{t},F}^{r}:\underline{X}_{\text{\bf curl},F}^{r} \to\mathcal{P}_{r}(F)\) by (3.5). \(C_{F}^{r}\) is such that, for all \(\underline{\boldsymbol{v}}_{F}=\left((v_{E})_{E\in\mathcal{E}_{F}}, \boldsymbol{v}_{F}\right)\in\underline{X}_{\text{\bf curl},F}^{r}\),_ \[\int_{F}C_{F}^{r}\underline{\boldsymbol{v}}_{F}\ q=\int_{F}\boldsymbol{v}_{F} \cdot\text{\bf rot}_{F}\ q+\sum_{E\in\mathcal{E}_{F}}\varepsilon_{FE}\int_{E} v_{E}\ q\qquad\forall q\in\mathcal{P}_{r}(F),\] _where, for all \(E\in\mathcal{E}_{F}\) (the set of edges of \(F\)), \(\varepsilon_{FE}\in\{-1,+1\}\) denotes the orientation of \(E\) relative to \(F\), while \(\boldsymbol{\gamma}_{\text{t},F}^{r}\) satisfies, for all \(\underline{\boldsymbol{v}}_{F}\in\underline{X}_{\text{\bf curl},F}^{r}\),_ \[\int_{F}\boldsymbol{\gamma}_{\text{t},F}^{r}\underline{ \boldsymbol{v}}_{F}\cdot(\text{\bf rot}_{F}\ q+\boldsymbol{w})=\int_{F}C_{F}^{ r}\underline{\boldsymbol{v}}_{F}\ q-\sum_{E\in\mathcal{E}_{F}}\varepsilon_{FE}\int_{E} v_{E}\ q+\int_{F}\boldsymbol{v}_{F}\cdot\boldsymbol{w},\\ \forall(q,\boldsymbol{w})\in\mathcal{P}_{r+1}^{\flat}(F)\times \mathcal{R}_{r}^{\flat}(F).\] _The alternative interpretation of \(1\)-forms in dimension \(d=2\) results in a rotation of \(\underline{X}_{\text{\bf curl},F}^{r}\) by a right angle. 
Correspondingly, (3.4) yields a face divergence (see (A.9) and (A.10) in Appendix A.2)._ _Next, for each mesh element \(f_{3}=T\in\mathcal{T}_{h}\), (3.4) defines the element curl \(\boldsymbol{C}_{T}^{r}:\underline{X}_{\text{\bf curl},T}^{r}\to\mathcal{P}_{r} (T)\) such that, for all \(\underline{\boldsymbol{v}}_{T}=\left((v_{E})_{E\in\mathcal{E}_{T}},( \boldsymbol{v}_{F})_{F\in\mathcal{F}_{T}},\boldsymbol{v}_{T}\right)\in \underline{X}_{\text{\bf curl},T}^{r}\),_ \[\int_{T}\boldsymbol{C}_{T}^{r}\underline{\boldsymbol{v}}_{T}\cdot\boldsymbol{w} =\int_{T}\boldsymbol{v}_{T}\cdot\text{\bf curl}\,\boldsymbol{w}+\sum_{F\in \mathcal{F}_{T}}\varepsilon_{TF}\int_{F}\boldsymbol{\gamma}_{\text{t},F}^{r} \underline{\boldsymbol{v}}_{F}\cdot(\boldsymbol{w}\times\boldsymbol{n}_{F}) \qquad\forall\boldsymbol{w}\in\mathcal{P}_{r}(T),\] _where, for all \(F\in\mathcal{F}_{T}\) (the set of faces of \(T\)), \(\varepsilon_{TF}\in\{-1,+1\}\) denotes the orientation of \(F\) relative to \(T\), while (3.5) defines the vector potential \(\mathbf{P}^{r}_{\text{\bf curl},T}:\underline{\mathbf{X}}^{r}_{\text{\bf curl},T} \to\mathcal{P}_{r}(T)\) such that, for all \(\underline{\mathbf{v}}_{T}\in\underline{\mathbf{X}}^{r}_{\text{\bf curl},T}\),_ \[\int_{T}\mathbf{P}^{r}_{\text{\bf curl},T}\,\underline{\mathbf{v}}_{T} \cdot(\text{\bf curl}\,\mathbf{w}+\mathbf{z})=\int_{T}\mathbf{C}^{r}_{T}\,\underline{\mathbf{v }}_{T}\cdot\mathbf{w}-\sum_{F\in\mathcal{F}_{T}}\varepsilon_{TF}\int_{F}\mathbf{ \gamma}^{r}_{t,F}\,\underline{\mathbf{v}}_{F}\cdot(\mathbf{w}\times\mathbf{n}_{F})+\int_{T }\mathbf{v}_{T}\cdot\mathbf{z}\\ \forall(\mathbf{w},\mathbf{z})\in\mathcal{G}^{\text{c}}_{r+1}(T)\times \mathcal{R}^{\text{c}}_{r}(T).\] _When \(k=2\), (3.4) reconstructs on mesh elements \(f_{3}=T\in\mathcal{T}_{h}\) a discrete divergence of order \(r\) based on the polynomial scalar trace defined by (3.3), which plays the role of a normal trace on the face \(f_{2}=F\in\mathcal{F}_{T}\). Then, (3.5) defines a vector potential of degree \(r\) over \(T\)._ _Finally, in the case \(k=3\), (3.3) simply yields a polynomial over mesh elements \(f_{3}=T\in\mathcal{T}_{h}\)._ #### 3.1.4 Global discrete exterior derivative and DDR complex To arrange the spaces \(\underline{\mathbf{X}}^{k}_{r,h}\) into a sequence that mimics the continuous de Rham complex, for any form degree \(k\) such that \(0\leq k\leq n-1\), we introduce the _global discrete exterior derivative_\(\underline{\mathrm{d}}^{k}_{r,h}:\underline{\mathbf{X}}^{k}_{r,h}\to\underline{ \mathbf{X}}^{k+1}_{r,h}\) defined as follows: \[\underline{\mathrm{d}}^{k}_{r,h}\,\underline{\omega}_{h}\coloneqq\left(\pi^{- d-k-1}_{r,f}(\star\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f})\right)_{f \in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n]}. \tag{3.7}\] In what follows, given a \(d\)-cell \(f\in\Delta_{d}(\mathcal{M}_{h})\) with \(d\in[k+1,n]\), we denote by \(\underline{\mathrm{d}}^{k}_{r,f}\) the _local discrete exterior derivative_ collecting the components of \(\underline{\mathrm{d}}^{k}_{r,h}\) on \(f\) and its boundary. 
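To make the definition (3.7) more concrete, consider for instance \(n=3\) and \(k=1\): for all \(\underline{\omega}_{h}\in\underline{X}_{r,h}^{1}\), it gives
\[\underline{\mathrm{d}}_{r,h}^{1}\,\underline{\omega}_{h}=\Big(\big(\star\mathrm{d}_{r,F}^{1}\,\underline{\omega}_{F}\big)_{F\in\Delta_{2}(\mathcal{M}_{h})},\big(\pi_{r,T}^{-,1}(\star\mathrm{d}_{r,T}^{1}\,\underline{\omega}_{T})\big)_{T\in\Delta_{3}(\mathcal{M}_{h})}\Big),\]
where the projector has been omitted on faces since \(\star\mathrm{d}_{r,F}^{1}\,\underline{\omega}_{F}\in\mathcal{P}_{r}\Lambda^{0}(F)=\mathcal{P}_{r}^{-}\Lambda^{0}(F)\); in the vector proxies of Example 9, the face components correspond to the face curls \(C_{F}^{r}\) and the element components to the projections of the element curls \(\boldsymbol{C}_{T}^{r}\) on the proxy of \(\mathcal{P}_{r}^{-}\Lambda^{1}(T)\).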
The DDR sequence reads
\[\mathrm{DDR}(r)\coloneqq\{0\}\xrightarrow{}\underline{X}_{r,h}^{0}\xrightarrow{\underline{\mathrm{d}}_{r,h}^{0}}\underline{X}_{r,h}^{1}\xrightarrow{\underline{\mathrm{d}}_{r,h}^{1}}\cdots\xrightarrow{\underline{\mathrm{d}}_{r,h}^{n-1}}\underline{X}_{r,h}^{n}\xrightarrow{}\{0\}. \tag{3.8}\]

#### 3.1.5 Discrete \(L^{2}\)-products

Using the potentials built in Section 3.1.3, we can define, for all \(k\in[0,n]\), an inner product \((\cdot,\cdot)_{k,h}:\underline{X}_{r,h}^{k}\times\underline{X}_{r,h}^{k}\rightarrow\mathbb{R}\) that induces an \(L^{2}\)-structure on \(\underline{X}_{r,h}^{k}\). Specifically, we set: For all \((\underline{\omega}_{h},\underline{\mu}_{h})\in\underline{X}_{r,h}^{k}\times\underline{X}_{r,h}^{k}\),
\[\begin{split}(\underline{\omega}_{h},\underline{\mu}_{h})_{k,h}\coloneqq\sum_{f\in\Delta_{n}(\mathcal{M}_{h})}(\underline{\omega}_{f},\underline{\mu}_{f})_{k,f}\\ \text{with }(\underline{\omega}_{f},\underline{\mu}_{f})_{k,f}\coloneqq\int_{f}P_{r,f}^{k}\,\underline{\omega}_{f}\wedge\star P_{r,f}^{k}\,\underline{\mu}_{f}+s_{k,f}(\underline{\omega}_{f},\underline{\mu}_{f})\text{ for all }f\in\Delta_{n}(\mathcal{M}_{h}),\end{split} \tag{3.11}\]
where \(s_{k,f}:\underline{X}_{r,f}^{k}\times\underline{X}_{r,f}^{k}\rightarrow\mathbb{R}\) is the stabilisation bilinear form such that
\[s_{k,f}(\underline{\omega}_{f},\underline{\mu}_{f})=\sum_{d^{\prime}=k}^{n-1}h_{f}^{n-d^{\prime}}\sum_{f^{\prime}\in\Delta_{d^{\prime}}(f)}\int_{f^{\prime}}(\operatorname{tr}_{f^{\prime}}P_{r,f}^{k}\,\underline{\omega}_{f}-P_{r,f^{\prime}}^{k}\,\underline{\omega}_{f^{\prime}})\wedge\star(\operatorname{tr}_{f^{\prime}}P_{r,f}^{k}\,\underline{\mu}_{f}-P_{r,f^{\prime}}^{k}\,\underline{\mu}_{f^{\prime}}),\]
with \(h_{f}\) denoting the diameter of \(f\). The first term in the right-hand side of \((\cdot,\cdot)_{k,f}\) is responsible for consistency, while the second one ensures the positivity of this bilinear form. More specifically, by Theorem 11 and Remark 12 it holds, for all \(f\in\Delta_{n}(\mathcal{M}_{h})\),
\[(\underline{I}_{r,f}^{k}\,\omega,\underline{\mu}_{f})_{k,f}=\int_{f}\omega\wedge\star P_{r,f}^{k}\,\underline{\mu}_{f}\qquad\forall\omega\in\mathcal{P}_{r}\Lambda^{k}(f)\,,\ \forall\underline{\mu}_{f}\in\underline{X}_{r,f}^{k}.\]
Additionally, by (3.20) below, the mapping \(\underline{X}_{r,f}^{k}\ni\underline{\omega}_{f}\mapsto\|\underline{\omega}_{f}\|_{k,f}\coloneqq(\underline{\omega}_{f},\underline{\omega}_{f})_{k,f}^{\nicefrac{1}{2}}\in\mathbb{R}\) defines a norm on \(\underline{X}_{r,f}^{k}\). Numerical schemes for linear PDEs related to the de Rham complex are typically obtained replacing continuous spaces and \(L^{2}\)-products with their discrete counterparts, according to the principles illustrated, e.g., in [33, Section 7].

_Remark 13_ (Stabilisation).: A more general expression for the local \(L^{2}\)-product in (3.11) is obtained replacing \(s_{k,f}\) with
\[s_{\mathcal{B},k,f}(\underline{\omega}_{f},\underline{\mu}_{f})=\mathcal{B}_{f}(\underline{I}_{r,f}^{k}\,P_{r,f}^{k}\,\underline{\omega}_{f}-\underline{\omega}_{f},\underline{I}_{r,f}^{k}\,P_{r,f}^{k}\,\underline{\mu}_{f}-\underline{\mu}_{f}),\]
with \(\mathcal{B}_{f}:\underline{X}_{r,f}^{k}\times\underline{X}_{r,f}^{k}\rightarrow\mathbb{R}\) denoting a symmetric positive definite bilinear form inducing a norm that scales in \(h_{f}\) as \(\|\cdot\|_{k,f}\) defined above.
Crucially, \(s_{\mathcal{B},k,f}\) depends on its arguments only through the difference operator \(\underline{X}_{r,f}^{k}\ni\underline{\omega}_{f}\mapsto\underline{I}_{r,f}^{k}\,P_{r,f}^{k}\,\underline{\omega}_{f}-\underline{\omega}_{f}\in\underline{X}_{r,f}^{k}\), which guarantees that it vanishes whenever one of its arguments is the interpolate of a differential form in \(\mathcal{P}_{r}\Lambda^{k}(f)\) (as can be checked in the same spirit as [36, Lemma 2.11]).

### Complex property

We introduce, for all integers \(d\in[1,n]\), the piecewise polynomial boundary exterior derivative \(\operatorname{d}_{r,\partial f}^{k}:\underline{X}_{r,\partial f}^{k}\to\Lambda^{k+1}(\partial f)\) such that \((\operatorname{d}_{r,\partial f}^{k})_{|f^{\prime}}\coloneqq\operatorname{d}_{r,f^{\prime}}^{k}\), for all \(f^{\prime}\in\Delta_{d-1}(f)\) (\(\operatorname{d}_{r,f^{\prime}}^{k}\) being the discrete exterior derivative on the \((d-1)\)-cell \(f^{\prime}\) defined by (3.4)). The following lemma is a generalisation of the links, in the DDR framework based on vector proxies, between element gradients (resp., curls) and face gradients (resp., curls), see [33, Propositions 1 and 4].

**Lemma 14** (Link between discrete exterior derivatives on subcells).: _It holds, for all \(d\geq k+2\), all \(f\in\Delta_{d}(\mathcal{M}_{h})\), and all \(\underline{\omega}_{f}\in\underline{X}_{r,f}^{k}\),_
\[\int_{f}\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f}\wedge\operatorname{d}\alpha=(-1)^{k+1}\int_{\partial f}\operatorname{d}_{r,\partial f}^{k}\,\underline{\omega}_{\partial f}\wedge\operatorname{tr}_{\partial f}\,\alpha\qquad\forall\alpha\in\mathcal{P}_{r+1}^{-}\Lambda^{d-k-2}(f). \tag{3.12}\]

Proof.: Take \(\mu=\mathrm{d}\alpha\in\mathcal{P}_{r}\Lambda^{d-k-1}(f)\) in (3.4) and use \(\mathrm{d}\circ\mathrm{d}=0\) and \(\mathrm{tr}_{\partial f}\,\mathrm{d}=\mathrm{d}\,\mathrm{tr}_{\partial f}\) (since the trace is a pullback, it commutes with the exterior derivative) to get
\[\int_{f}\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\wedge\mathrm{d}\alpha=\int_{\partial f}P^{k}_{r,\partial f}\,\underline{\omega}_{\partial f}\wedge\mathrm{d}\,\mathrm{tr}_{\partial f}\,\alpha. \tag{3.13}\]
For each \(f^{\prime}\in\Delta_{d-1}(f)\) forming \(\partial f\), by Lemma 4 we have \(\mathrm{tr}_{f^{\prime}}\alpha\in\mathcal{P}_{r+1}^{-}\Lambda^{d-k-2}(f^{\prime})\) so, by (3.5) applied to \(f^{\prime}\) instead of \(f\) with test function \((\mu,\nu)=(\mathrm{tr}_{f^{\prime}}\alpha,0)\) (see Remark 6), we have
\[(-1)^{k+1}\int_{f^{\prime}}P^{k}_{r,f^{\prime}}\,\underline{\omega}_{f^{\prime}}\wedge\mathrm{d}\,\mathrm{tr}_{f^{\prime}}\,\alpha=\int_{f^{\prime}}\mathrm{d}^{k}_{r,f^{\prime}}\,\underline{\omega}_{f^{\prime}}\wedge\mathrm{tr}_{f^{\prime}}\,\alpha-\int_{\partial f^{\prime}}P^{k}_{r,\partial f^{\prime}}\,\underline{\omega}_{\partial f^{\prime}}\wedge\mathrm{tr}_{\partial f^{\prime}}(\mathrm{tr}_{f^{\prime}}\,\alpha).\]
Use \(\mathrm{tr}_{\partial f^{\prime}}\circ\mathrm{tr}_{f^{\prime}}=\mathrm{tr}_{\partial f^{\prime}}\), sum these relations over \(f^{\prime}\in\Delta_{d-1}(f)\), invoke \(\partial(\partial f)=0\) (which implies that the integrals over \((\partial f^{\prime})_{f^{\prime}\in\Delta_{d-1}(f)}\) cancel out due to compatible orientations), and plug the result into (3.13) to conclude.
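To illustrate (3.12), take \(n=3\), \(k=1\) and \(d=3\): for all \(T\in\Delta_{3}(\mathcal{M}_{h})\) and all \(\underline{\omega}_{T}\in\underline{X}_{r,T}^{1}\), the relation reads
\[\int_{T}\mathrm{d}_{r,T}^{1}\,\underline{\omega}_{T}\wedge\mathrm{d}\alpha=\int_{\partial T}\mathrm{d}_{r,\partial T}^{1}\,\underline{\omega}_{\partial T}\wedge\mathrm{tr}_{\partial T}\,\alpha\qquad\forall\alpha\in\mathcal{P}_{r+1}^{-}\Lambda^{0}(T),\]
which corresponds, in the vector proxies of Example 9, to the link between the element curl \(\boldsymbol{C}_{T}^{r}\) and the face curls \(C_{F}^{r}\) of [33, Proposition 4].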
**Theorem 15** (Link between discrete potentials and exterior derivatives, complex property).: _It holds, for all integers \(k\in[1,n]\) and \(d\geq k\), all \(f\in\Delta_{d}(\mathcal{M}_{h})\), and all \(\underline{\omega}_{f}\in\underline{X}_{\star,f}^{k-1}\),_ \[P^{k}_{r,f}\,(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{\omega}_{f})= \mathrm{d}^{k-1}_{r,f}\underline{\omega}_{f}\,, \tag{3.14}\] _and, if \(d\geq k+1\),_ \[\mathrm{d}^{k}_{r,f}\,(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{\omega}_ {f})=0. \tag{3.15}\] _As a consequence, the sequence (3.8) defines a complex._ Proof.: The proof is done by induction on \(\rho\coloneqq d-k\). If \(\rho=0\) (i.e., \(d=k\)), by the definitions (3.3) of the discrete potential and (3.7) of the global discrete exterior derivative with \(k-1\) instead of \(k\), we have \(P^{k}_{r,f}\,(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{\omega}_{f})= \star^{-1}(\star\mathrm{d}^{k-1}_{r,f}\underline{\omega}_{f})=\mathrm{d}^{k-1} _{r,f}\underline{\omega}_{f}\) (notice that, in the first passage, we can omit the projector found in the definition of \(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{\omega}_{f}\) in front of \(\star\mathrm{d}^{k-1}_{r,f}\underline{\omega}_{f}\) since this quantity sits in \(\mathcal{P}_{r}\Lambda^{0}(f)=\mathcal{P}_{r}^{-}\Lambda^{0}(f)\), and is therefore left unchanged by \(\pi_{r,f}^{-0}\)). This proves (3.14), and the relation (3.15) is irrelevant here since \(d=k\). Let us now assume that (3.14) and (3.15) hold for a given \(\rho\geq 0\), and let us consider \(d\) and \(k\) such that \(d-k=\rho+1\). We start by considering (3.15) (which we need to prove since \(d\geq k+1\) in the present case). Let us take \(f\in\Delta_{d}(\mathcal{M}_{h})\). Applying (3.4) with \(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{\omega}_{f}\) instead of \(\underline{\omega}_{f}\) and a generic \(\mu\in\mathcal{P}_{r}\Lambda^{d-k-1}(f)\), we have, expanding the local discrete exterior derivative \(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{\omega}_{f}\) according to its definition (i.e., the restriction to \(f\) of (3.7) with \(k-1\) instead of \(k\)), \[\int_{f}\mathrm{d}^{k}_{r,f}\,(\underline{\mathrm{d}}^{k-1}_{r,f }\underline{\omega}_{f})\wedge\mu=(-1)^{k+1}\int_{f}\star^{-1}(\pi_{r,f}^{-,d-k }\,(\star\mathrm{d}^{k-1}_{r,f}\underline{\omega}_{f}))\wedge\mathrm{d}\mu\\ +\int_{\partial f}P^{k}_{r,\partial f}\,(\underline{\mathrm{d}}^{k -1}_{r,f}\underline{\omega}_{\partial f})\wedge\mathrm{tr}_{\partial f}\ \mu. \tag{3.16}\] By the induction hypothesis, (3.14) holds on each \(f^{\prime}\in\Delta_{d-1}(f)\) (since \((d-1)-k=\rho\)), and thus \[P^{k}_{r,\partial f}\,(\underline{\mathrm{d}}^{k-1}_{r,\partial f}\,\underline{ \omega}_{\partial f})=\mathrm{d}^{k-1}_{r,\partial f}\,\underline{\omega}_{ \partial f}\,. 
\tag{3.17}\] Invoking then (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{P}_{r}^{-}\Lambda^{d-k}(f),\mathrm{d }^{k-1}_{r,f}\underline{\omega}_{f},\mathrm{d}\mu)\), noticing that \(\mathrm{d}\mu\in\mathcal{P}_{r-1}\Lambda^{d-k}(f)\subset\mathcal{P}_{r}^{-} \Lambda^{d-k}(f)\) (by (2.14) with \(\ell=d-k\)) to handle the first term in the right-hand side of (3.16), we infer \[\int_{f}\mathrm{d}^{k}_{r,f}\,(\underline{\mathrm{d}}^{k-1}_{r,f}\underline{ \omega}_{f})\wedge\mu=(-1)^{k+1}\int_{f}\mathrm{d}^{k-1}_{r,f}\underline{ \omega}_{f}\,\wedge\mathrm{d}\mu+\int_{\partial f}\mathrm{d}^{k-1}_{r,\partial f }\,\underline{\omega}_{\partial f}\,\wedge\mathrm{tr}_{\partial f}\ \mu=0, \tag{3.18}\] where the conclusion follows from the link (3.12) between discrete exterior derivatives on subcells applied with \(k-1\) instead of \(k\) and \(\alpha=\mu\in\mathcal{P}_{r}\Lambda^{d-k-1}(f)\subset\mathcal{P}_{r+1}^{-1} \Lambda^{d-(k-1)-2}(f)\). Since \(\mu\) is arbitrary in \(\mathcal{P}_{r}\Lambda^{d-k-1}(f)\), (3.18) proves (3.15). We next prove (3.14). For any \((\mu,\nu)\in\mathcal{K}_{r+1}^{d-k-1}(f)\times\mathcal{K}_{r}^{d-k}(f)\), the definition (3.5) of the potential applied to \(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f}\) gives \[(-1)^{k+1}\int_{f}P_{r,f}^{k}\,(\underline{\mathrm{d}}_{r,f}^{k- 1}\underline{\omega}_{f})\wedge(\mathrm{d}\mu+\nu)=\int_{f}\mathrm{d}_{r,f}^{k }\,(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f})\wedge\mu\\ -\int_{\partial f}P_{r,\partial f}^{k}\,(\underline{\mathrm{d}}_ {r,\partial f}^{k-1}\,\underline{\omega}_{\partial f})\wedge\mathrm{tr}_{ \partial f}\,\,\mu+(-1)^{k+1}\int_{f}\,\star^{-1}\pi_{r,f}^{-d-k}\,(\star \mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f})\wedge\nu,\] where we have additionally used, in the last term, the definition of the local discrete exterior derivative \(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f}\), corresponding to the restriction to \(f\) of (3.7) with \(k-1\) instead of \(k\). Using the complex property (3.15) that we have just proved, we have \(\mathrm{d}_{r,f}^{k}\,(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{ f})=0\). Moreover, the induction hypothesis (3.17) yields \(P_{r,\partial f}^{k}\,(\underline{\mathrm{d}}_{r,\partial f}^{k-1}\underline{ \omega}_{\partial f})=\mathrm{d}_{r,\partial f}^{k-1}\,\underline{\omega}_{ \partial f}\). Hence, invoking (3.12) with \(k-1\) instead of \(k\) and \(\alpha=\mu\) (notice that \(\mu\in\mathcal{K}_{r+1}^{d-k-1}(f)\subset\mathcal{P}_{r+1}^{-}\Lambda^{d-(k-1) -2}(f)\) by (2.12)) and applying (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{P}_{r}^{-}\Lambda^{d-k}(f), \mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f},\nu)\), which is valid since \(\nu\in\mathcal{K}_{r}^{d-k}\,(f)\subset\mathcal{P}_{r}^{-}\Lambda^{d-k}(f)\) by (2.12b) with \(\ell=d-k\geq 1\), we obtain \[(-1)^{k+1}\int_{f}\,P_{r,f}^{k}\,(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{ \omega}_{f})\wedge(\mathrm{d}\mu+\nu)=-\,(-1)^{k}\int_{f}\mathrm{d}_{r,f}^{k- 1}\underline{\omega}_{f}\,\wedge\mathrm{d}\mu+(-1)^{k+1}\int_{f}\mathrm{d}_{ r,f}^{k-1}\underline{\omega}_{f}\,\wedge\nu.\] Simplifying by \((-1)^{k+1}\) and recalling the isomorphism (2.9) concludes the proof of (3.14). 
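As an illustration, for \(n=3\) the instances of (3.15) corresponding to \(k=1\) on faces and \(k=2\) on elements read
\[\mathrm{d}_{r,F}^{1}(\underline{\mathrm{d}}_{r,F}^{0}\,\underline{\omega}_{F})=0\quad\forall\underline{\omega}_{F}\in\underline{X}_{r,F}^{0},\qquad\qquad\mathrm{d}_{r,T}^{2}(\underline{\mathrm{d}}_{r,T}^{1}\,\underline{\omega}_{T})=0\quad\forall\underline{\omega}_{T}\in\underline{X}_{r,T}^{1},\]
for all \(F\in\Delta_{2}(\mathcal{M}_{h})\) and \(T\in\Delta_{3}(\mathcal{M}_{h})\); in the vector proxy language of Example 9, these are the discrete counterparts of the identities \(\operatorname{rot}\circ\mathbf{grad}=0\) on faces and \(\operatorname{div}\circ\mathbf{curl}=0\) on elements.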
### Commutation The following lemma shows that the reconstructed potential \(P_{r,f}^{k}\,\underline{\omega}_{f}\) on a \(d\)-cell \(f\) is built by adding a high-order correction to \(\star^{-1}\omega_{f}\); this correction is designed to obtain a polynomial consistency unachievable by the component alone (see (3.9)). **Lemma 16** (Links between component and potential reconstruction).: _For all integers \(d\in[0,n]\) and \(k\leq d\), if \(f\in\Delta_{d}(\mathcal{M}_{h})\) and \(\underline{\omega}_{f}\,\in\underline{X}_{r,f}^{k}\), then it holds_ \[(-1)^{k+1}\int_{f}\,P_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge (\mathrm{d}\mu+\nu)=(-1)^{k+1}\int_{f}\star^{-1}\omega_{f}\,\wedge(\mathrm{d}( \pi_{r,f}^{d-k-1}\mu)+\nu)\\ +\int_{f}\,\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge (\mu-\pi_{r,f}^{d-k-1}\mu)-\int_{\partial f}\,P_{r,\partial f}^{k}\,\underline{ \omega}_{\partial f}\,\wedge\mathrm{tr}_{\partial f}\,\,(\mu-\pi_{r,f}^{d-k-1}\mu) \\ \forall(\mu,\nu)\in\mathcal{K}_{r+1}^{d-k-1}(f)\times\mathcal{K}_ {r}^{d-k}(f). \tag{3.19}\] _As a consequence,_ \[\pi_{r,f}^{-d-k}\,(\star P_{r,f}^{k}\,\underline{\omega}_{f})=\omega_{f}. \tag{3.20}\] Proof.: If \(d=k\), the relation (3.19) follows from \(\mathcal{K}_{r+1}^{d-k-1}(f)=\mathcal{K}_{r+1}^{-1}(f)=\{0\}\) and \(P_{r,f}^{k}\,\underline{\omega}_{f}=\star^{-1}\omega_{f}\) (see (3.3)), which also establishes (3.20) since \(\pi_{r,f}^{-0}=\mathrm{Id}\) on \(\mathcal{P}_{r}\,\Lambda^{0}(f)=\mathcal{P}_{r}^{-}\Lambda^{0}(f)\). Consider now \(d\geq k+1\) and take \((\mu,\nu)\in\mathcal{K}_{r+1}^{d-k-1}(f)\times\mathcal{K}_{r}^{d-k}(f)\). Inserting \(\pm\pi_{r,f}^{d-k-1}\mu\) into the definition (3.5) of \(P^{k}_{r,f}\,\underline{\omega}_{f}\,\) we have \[(-1)^{k+1} \int_{f}\,P^{k}_{r,f}\,\underline{\omega}_{f}\,\wedge\,(\mathrm{d} \mu+\nu)\] \[=\,\int_{f}\,\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\,\wedge \pi^{d-k-1}_{r,f}\mu+\int_{f}\,\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\, \wedge\,(\mu-\pi^{d-k-1}_{r,f}\mu)-\int_{\partial f}\,P^{k}_{r,\partial f}\, \underline{\omega}_{\partial f}\,\wedge\mathrm{tr}\alpha_{f}\,\mu \tag{3.21}\] \[\quad+(-1)^{k+1}\int_{f}\,\star^{-1}\omega_{f}\,\wedge\,\nu.\] On the other hand, the definition (3.4) of \(\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\,\) applied to \(\pi^{d-k-1}_{r,f}\mu\) yields \[\int_{f}\,\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\,\wedge\pi^{d-k-1}_{r,f}\mu=(-1)^{k+1}\int_{f}\,\star^{-1}\omega_{f}\,\wedge\,\mathrm{d}(\pi^{d-k-1 }_{r,f}\mu)+\int_{\partial f}\,P^{k}_{r,\partial f}\,\underline{\omega}_{ \partial f}\,\wedge\mathrm{tr}\alpha_{f}\,(\pi^{d-k-1}_{r,f}\mu).\] Substituting this relation into (3.21) yields (3.19). To prove (3.20) we apply (3.19) with \((\mu,\nu)\in\mathcal{K}^{d-k-1}_{r}(f)\times\mathcal{K}^{d-k}_{r}(f)\) and notice that \(\mu=\pi^{d-k-1}_{r,f}\mu\) since \(\mathcal{K}^{d-k-1}_{r}(f)\subset\mathcal{P}_{r}\Lambda^{d-k-1}(f)\), to get \[\int_{f}\,P^{k}_{r,f}\,\underline{\omega}_{f}\,\wedge\,(\mathrm{d}\mu+\nu)= \int_{f}\,\star^{-1}\omega_{f}\,\wedge\,(\mathrm{d}\mu+\nu). \tag{3.22}\] The isomorphism (2.17) with \(\ell=d-k\geq 1\) shows that \(\mathrm{d}\mu+\nu\) spans \(\mathcal{P}^{-}_{r}\Lambda^{d-k}(f)\) when \((\mu,\nu)\) span \(\mathcal{K}^{d-k-1}_{r}(f)\times\mathcal{K}^{d-k}_{r}(f)\). 
Hence, (3.22) gives \[\int_{f}\,\star^{-1}\omega_{f}\,\wedge\,\alpha=\int_{f}\,P^{k}_{r,f}\, \underline{\omega}_{f}\,\wedge\,\alpha\,\stackrel{{\eqref{eq: d-k-1}}}{{=}}\int_{f}\,\star^{-1}\pi^{-,d-k}_{r,f}\,(\star P^{k}_{r,f}\, \underline{\omega}_{f}\,)\wedge\,\alpha\qquad\forall\alpha\in\mathcal{P}^{-}_{ r}\Lambda^{d-k}(f),\] proving (3.20) since \(\star^{-1}\omega_{f}\,\in\mathcal{P}^{-}_{r}\Lambda^{d-k}(f)\) and \(\star^{-1}\) is an isomorphism. **Theorem 17** (Commutation property for the local discrete exterior derivative).: _For all integers \(d\in[1,n]\) and \(k\leq d-1\), and for all \(f\in\Delta_{d}(\mathcal{M}_{h})\), recalling the definition (3.2) of the interpolators, it holds_ \[\underline{\mathrm{d}}^{k}_{r,f}\,(\underline{l}^{k}_{r,f}\,\omega)= \underline{l}^{k+1}_{r,f}(\mathrm{d}\omega)\qquad\forall\omega\in C^{1} \Lambda^{k}(\overline{f}), \tag{3.23}\] _expressing the commutativity of the following diagram:_ Proof.: Given the definitions of the interpolator and of the local discrete exterior derivative (see (3.2) and (3.7)), we have to prove that, for all \(f^{\prime}\in\Delta_{d^{\prime}}(f)\) with \(d^{\prime}\in[k+1,d]\), \(\pi^{-,d^{\prime}-k-1}_{r,f^{\prime}}(\star\mathrm{d}^{k}_{r,f},\underline{l} ^{k}_{r,f^{\prime}},\omega)=\pi^{-,d^{\prime}-k-1}_{r,f^{\prime}}(\star\, \mathrm{tr}_{f^{\prime}}(\mathrm{d}\omega))\). Recalling the definition of the projector \(\pi^{-,d^{\prime}-k-1}_{r,f^{\prime}}\) (i.e., (2.3) with \(\mathcal{X}=\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-k-1}(f^{\prime})\)), we need to prove that, for any \(\mu\in\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-k-1}(f^{\prime})\) \[\int_{f^{\prime}}\,\star\mathrm{d}^{k}_{r,f}\,\underline{l}^{k}_{r,f^{\prime}} \,\omega\wedge\star\mu=\int_{f^{\prime}}\,\star\,\mathrm{tr}_{f^{\prime}}( \mathrm{d}\omega)\wedge\star\mu.\] Applying (A.4), this amounts to proving that \[\int_{f^{\prime}}\,\mathrm{d}^{k}_{r,f}\,\underline{l}^{k}_{r,f^{\prime}}\, \omega\wedge\mu=\int_{f^{\prime}}\,\mathrm{tr}_{f^{\prime}}(\mathrm{d}\omega) \wedge\mu. \tag{3.24}\] Using the definitions (3.4) of the discrete exterior derivative on \(f^{\prime}\) and (3.2) of \(\underline{I}^{k}_{\boldsymbol{r},f^{\prime}}\), we have (3.25) where the substitution is justified by (2.4) with \((X,\omega,\mu)\leftarrow(\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-k}(f^{\prime}),\mathrm{tr}_{f^{\prime}}\,\omega,\mathrm{d}\mu)\), since \(\mathrm{d}\mu\in\mathrm{d}\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-k-1}(f^{ \prime})\subset\mathrm{d}\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-k-1}(f^{ \prime})\subset\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-k}(f^{\prime})\) (see (2.12b)). 
For all \(f^{\prime\prime}\in\Delta_{d^{\prime}-1}(f^{\prime})\) we have \(\mathrm{tr}_{f^{\prime\prime}}\,\mu\in\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-1-k}(f^{\prime\prime})\) (see Lemma 4), so
\[\int_{f^{\prime\prime}}P^{k}_{r,f^{\prime\prime}}\,\underline{I}^{k}_{r,f^{\prime}}\,\omega\wedge\mathrm{tr}_{f^{\prime\prime}}\,\mu=\int_{f^{\prime\prime}}\star^{-1}\pi^{-,d^{\prime}-1-k}_{r,f^{\prime\prime}}(\star P^{k}_{r,f^{\prime\prime}}\,\underline{I}^{k}_{r,f^{\prime\prime}}\,\omega)\wedge\mathrm{tr}_{f^{\prime\prime}}\,\mu=\int_{f^{\prime\prime}}\star^{-1}\pi^{-,d^{\prime}-1-k}_{r,f^{\prime\prime}}(\star\,\mathrm{tr}_{f^{\prime\prime}}\,\omega)\wedge\mathrm{tr}_{f^{\prime\prime}}\,\mu=\int_{f^{\prime\prime}}\mathrm{tr}_{f^{\prime\prime}}\,\omega\wedge\mathrm{tr}_{f^{\prime\prime}}\,\mu,\]
where the first and last equalities follow from (2.4) (since \(\mathrm{tr}_{f^{\prime\prime}}\,\mu\in\mathcal{P}^{-}_{r}\Lambda^{d^{\prime}-1-k}(f^{\prime\prime})\)), and the second one from (3.20) together with the definition (3.2) of the interpolator. Summing these relations over \(f^{\prime\prime}\in\Delta_{d^{\prime}-1}(f^{\prime})\) to handle the boundary term in (3.25) and invoking the integration by parts formula (2.1), we infer
\[\int_{f^{\prime}}\mathrm{d}^{k}_{r,f^{\prime}}\,\underline{I}^{k}_{r,f^{\prime}}\,\omega\wedge\mu=(-1)^{k+1}\int_{f^{\prime}}\mathrm{tr}_{f^{\prime}}\,\omega\wedge\mathrm{d}\mu+\int_{\partial f^{\prime}}\mathrm{tr}_{\partial f^{\prime}}\,\omega\wedge\mathrm{tr}_{\partial f^{\prime}}\,\mu=\int_{f^{\prime}}\mathrm{tr}_{f^{\prime}}(\mathrm{d}\omega)\wedge\mu,\]
which is precisely (3.24), thus concluding the proof.

_Remark 18_ (Consistency property of the improved potential for \(k=0\)).: In the case \(k=0\), the improved potential defined in Remark 7 satisfies the following consistency property:
\[P^{0}_{r+1,f}\,\underline{I}^{0}_{r,f}\,\omega=\omega\qquad\forall\omega\in \mathcal{P}_{r+1}\Lambda^{0}(f).\] To see this, first notice that when \(d=k=0\) we have \(P^{0}_{r+1,f}=P^{0}_{r,f}\) since \(\mathcal{P}_{r+1}\Lambda^{0}(f)=\mathcal{P}_{r}\,\Lambda^{0}(f)\cong\mathbb{R}\), and then, for \(d\geq k+1\), invoke the definition (3.6) of \(P^{0}_{r+1,f}\,\underline{I}^{0}_{r,f}\,\omega\), apply (3.10) (since \(\mathcal{P}^{-}_{r+1}\Lambda^{0}(f)=\mathcal{P}_{r+1}\Lambda^{0}(f)\)) and a recursion argument on \(d\). ### Cohomology A strategy to establish the exactness of the de Rham complex (for a domain with trivial topology) is to design a Poincare operator \(p:C^{1}\Lambda^{k}(\overline{\Omega})\to C^{1}\Lambda^{k-1}(\overline{\Omega})\), that satisfies \(\mathrm{d}p+p\mathrm{d}=\mathrm{Id}\). The Poincare operator is built integrating a certain flow of contracted differential forms; see [31, 51] for details and applications to the design of finite element complexes. Extending such a construction to the context of fully discrete spaces is not trivial, as it is not clear how the discrete polynomial components on cells should evolve with such a flow. We therefore select an alternative approach, more suited to hierarchical discrete spaces. The starting point is the following idea: if \(\eta\in C^{1}\Lambda^{k}(\overline{\Omega})\) satisfies \(\mathrm{d}\eta=0\) and we have \(\omega\in C^{2}\Lambda^{k-1}(\overline{\Omega})\) such that \(\mathrm{d}\omega=\eta\), then (2.1) shows that, for any \(d\)-cell \(f\), \[(-1)^{k}\int_{f}\,\omega\wedge\mathrm{d}\mu=\int_{f}\,\eta\wedge\mu-\int_{ \partial f}\,\mathrm{tr}_{\partial f}\,\omega\wedge\mathrm{tr}_{\partial f}\, \mu\qquad\forall\mu\in C^{1}\Lambda^{d-k}(\overline{\Omega}). \tag{3.26}\] In the discrete setting, \(\omega\) is built starting from the lowest-dimensional cells, and (3.26) thus gives a condition on \(\omega\) over \(f\) based on the already constructed \(\mathrm{tr}_{\partial f}\,\omega\). To start this process, we must fix the values of \(\omega\) on the lowest-dimensional cells, which is not an easy task in general. Actually, from the point of view of differential forms, the lowest-dimensional cells encode the topology of the domain, and thus the cohomology of the complex; for a generic \(\eta\), the recursive construction of \(\omega\) can therefore only be fully complete if the complex is exact, and thus the topology trivial. This limitation is circumvented by using the following idea: if \(\eta\) has zero average on \(k\)-cells, then \(\omega\) can be set to zero on \((k-1)\)-cells, which completes the construction above (see Lemma 19 below). This result is then exploited, through the extension/reduction strategy developed in [35, 37], to compare the cohomology of the arbitrary-order \(\mathrm{DDR}(r)\) complex to that of the lowest-order \(\mathrm{DDR}(0)\) complex, which is trivially isomorphic to the CW complex based on the mesh. 
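Before turning to this construction, let us recall for context the classical example of such a Poincaré operator (only given here for illustration, and not used in the sequel): when \(\Omega\) is star-shaped with respect to the origin, one can take, for any \(k\geq 1\) and \(\omega\in C^{1}\Lambda^{k}(\overline{\Omega})\),
\[(p\,\omega)_{x}(v_{1},\ldots,v_{k-1})\coloneqq\int_{0}^{1}t^{k-1}\,\omega_{tx}(x,v_{1},\ldots,v_{k-1})\,\mathrm{d}t,\]
for which the identity \(\mathrm{d}p+p\mathrm{d}=\mathrm{Id}\) is the classical Cartan homotopy formula; see [31, 51]. As discussed above, however, this construction based on a flow of contracted differential forms does not transfer easily to fully discrete spaces, which motivates the alternative strategy adopted here.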
We therefore start by considering the subspace \(\underline{X}^{k}_{r,h,\flat}\) of \(\underline{X}^{k}_{r,h}\) made of vectors of differential forms whose integrals over cells of dimension \(d=k\) vanish: \[\underline{X}^{k}_{r,h,\flat}\coloneqq\left\{\underline{\omega}_{h}=(\omega_{ f}\,)_{f\,\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k,n]}\ :\ \int_{f}\,\star^{-1}\omega_{f}=0\quad\forall f\in\Delta_{k}(\mathcal{M}_{h}) \right\}.\] **Lemma 19** (Exactness property for \(\underline{X}^{k}_{r,h,\flat}\)).: _For any integer \(k\in[0,n]\), if \(\underline{\eta}_{h}\in\underline{X}^{k}_{r,h,\flat}\) satisfies \(\underline{\mathrm{d}}^{k}_{r,h}\underline{\eta}_{h}=\underline{0}\), then there exists \(\underline{\omega}_{h}\in\underline{X}^{k-1}_{r,h,\flat}\) such that \(\underline{\eta}_{h}=\underline{\mathrm{d}}^{k-1}_{r,h}\,\underline{\omega}_ {h}\), where, in accordance with (3.8), we have set \(\underline{\mathrm{d}}^{-1}_{r,h}=\underline{\mathrm{d}}^{n}_{r,h}\coloneqq 0\)._ _Remark 20_ (Exact sub-complex).: It can easily be checked that \(\underline{\mathrm{d}}^{k}_{r,h}:\underline{X}^{k}_{r,h,\flat}\to\underline{X} ^{k+1}_{r,h,\flat}\). As a consequence, the previous lemma shows that \((\underline{X}^{k}_{r,h,\flat},\underline{\mathrm{d}}^{k}_{r,h})_{k}\) is an exact sub-complex of \(\mathrm{DDR}(r)\) (even if the latter complex is not exact). Proof.: We first notice that the case \(r=0\) is trivial since, for all \(k\), \(\underline{X}^{k}_{0,h,\flat}=\{(0)_{f\,\in\Delta_{k}(\mathcal{M}_{h})}\}\). This comes from the fact that the space \(\underline{X}^{k}_{0,h}\) only has non-zero components (which are moreover constant) on cells of dimension \(d=k\); to check this, notice that the spaces (2.12b) are all trivial since the first component vanishes for \(k\)-forms with constant coefficients, while the second is zero by (2.6). We can therefore assume that \(r\geq 1\). The cases \(k=0\) and \(k\geq 1\) have to be handled separately. Case \(k=0\). We prove that, if \(\underline{\eta}_{h}\in\underline{X}^{0}_{r,h,b}\) and \(\underline{\mathrm{d}}^{0}_{r,h}\underline{\eta}_{h}=\underline{0}\), then \(\eta_{f}=0\) for all \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(d\in[0,n]\). This is done by induction on \(d\). The case \(d=0\) follows immediately from the definition of \(\underline{X}^{0}_{r,h,b}\) which shows that the value of \(\star^{-1}\eta_{f}\) on any vertex \(f\in\Delta_{0}(\mathcal{M}_{h})\) is zero. Assuming that all components of \(\underline{\eta}_{h}\) on cells of dimension \(d-1\geq 0\) vanish, we now prove that \(\eta_{f}=0\) for all \(f\in\Delta_{d}(\mathcal{M}_{h})\). Note first that, by (3.14), the property \(\underline{\mathrm{d}}^{0}_{r,f}\,\underline{\eta}_{f}=\underline{0}\) implies \(\mathrm{d}^{0}_{r,f}\,\underline{\eta}_{f}=0\). Enforcing then \(\underline{\eta}_{\partial f}=\underline{0}\) (by induction hypothesis) in the definition (3.4) of \(\mathrm{d}^{0}_{r,f}\,\underline{\eta}_{f}\) gives \[\int_{f}\,\star^{-1}\eta_{f}\,\wedge\,\mathrm{d}\mu=0\qquad\forall\mu\in \mathcal{P}_{r}\Lambda^{d-1}(f).\] By definition (2.12b) of the trimmed space with \(\ell=d\), and accounting for (2.6), we have \(\mathrm{d}\mathcal{P}_{r}\Lambda^{d-1}(f)=\mathcal{P}_{r}^{-}\,\Lambda^{d}(f)\), so the relation above and (A.4) with \((\omega,\mu)\leftarrow(\eta_{f}\,,\mathrm{d}\mu)\) and \(\rho=\mathrm{d}\mu\) show that \(\int_{f}\,\eta_{f}\,\wedge\star\rho=0\) for all \(\rho\in\mathcal{P}_{r}^{-}\Lambda^{d}(f)\). Since \(\eta_{f}\) belongs to that space, we conclude that \(\eta_{f}=0\). 
Case \(k\geq 1\). Let \(\underline{\eta}_{h}\in\underline{X}^{k}_{r,h,b}\) be such that \(\underline{\mathrm{d}}^{k}_{r,h}\underline{\eta}_{h}=\underline{0}\), and let us construct \(\underline{\omega}_{h}\in\underline{X}^{k-1}_{r,h,b}\) such that \(\underline{\mathrm{d}}^{k-1}_{r,h}\underline{\omega}_{h}=\underline{\eta}_{h}\). This construction of \(\underline{\omega}_{h}\) is done by increasing dimension \(d\in[k-1,n]\) of the cells. For all \(f\in\Delta_{k-1}(\mathcal{M}_{h})\), we set \(\omega_{f}=0\) (which ensures, in particular, that the zero-average condition embedded in the space \(\underline{X}^{k-1}_{r,h,b}\) is fulfilled). Assume now that the components of \(\underline{\omega}_{h}\) have been constructed up to cells of dimension \(d-1\geq k-1\), and consider \(f\in\Delta_{d}(\mathcal{M}_{h})\). We choose \(\omega_{f}\in\mathcal{P}_{r}^{-}\Lambda^{d-k+1}(f)\) such that the following relation holds: \[(-1)^{k}\int_{f}\,\star^{-1}\omega_{f}\,\wedge\,\mathrm{d}\mu=\int_{f}\,P_{r, f}^{k}\,\underline{\eta}_{f}\,\wedge\mu-\int_{\partial f}\,P_{r,\partial f}^{k-1 }\,\underline{\omega}_{\partial f}\,\wedge\,\mathrm{tr}_{\partial f}\,\mu \qquad\forall\mu\in\mathcal{K}_{r}^{d-k}(f). \tag{3.27}\] Notice that, since the construction is recursive on the dimension of the cells, \(\underline{\omega}_{\partial f}\) has already been constructed at this stage. Owing to the isomorphism (2.17) with \(\ell=d-k+1\geq 1\), this relation completely defines the projection of \(\omega_{f}\) on \(\mathrm{d}\mathcal{K}_{r}^{d-k}(f)\subset\mathcal{P}_{r}^{-}\,\Lambda^{d-k+1} (f)\). The projection of \(\omega_{f}\) on the remaining component \(\mathcal{K}_{r}^{d-k+1}(f)\) of \(\mathcal{P}_{r}^{-}\,\Lambda^{d-k+1}(f)\) is not relevant to the rest of the proof and can be set to \(0\). Let us now prove that \(\underline{\mathrm{d}}^{k-1}_{r,h}\underline{\omega}_{h}=\underline{\eta}_{h}\). It suffices to show that \[\mathrm{d}^{k-1}_{r,f}\underline{\omega}_{f}=P_{r,f}^{k}\,\underline{\eta}_{f }\qquad\forall f\in\Delta_{d}(\mathcal{M}_{h})\,,\,\,d\in[k,n]. \tag{3.28}\] Indeed, applying \(\pi_{r,f}^{-,d-k}\star\) to this relation and using (3.20) yields \(\pi_{r,f}^{-,d-k}\,(\star\mathrm{d}^{k-1}_{r,f}\,\underline{\omega}_{f})=\eta _{f}\); using this relation for all cells \(f\), and recalling the definition (3.7) of the global discrete exterior derivative (with \(k-1\) instead of \(k\)), then gives \(\underline{\mathrm{d}}^{k-1}_{r,h}\underline{\omega}_{h}=\underline{\eta}_{h}\) as claimed. The relation (3.28) is a direct consequence of the following property, for the same \(f\) and \(d\): \[\int_{f}\,\mathrm{d}^{k-1}_{r,f}\,\underline{\omega}_{f}\,\wedge\mu=\int_{f}\,P_ {r,f}^{k}\,\underline{\eta}_{f}\,\wedge\mu\qquad\forall\mu\in\mathcal{P}_{r} \,\Lambda^{d-k}(f). \tag{3.29}\] Owing to (2.7), we only need to prove this relation for \(\mu\in\mathcal{K}_{r}^{d-k}(f)\), and \(\mu\in\mathcal{P}_{0}\Lambda^{0}(f)\) if \(d=k\) or \(\mu\in\mathrm{d}\mathcal{P}_{r+1}\Lambda^{d-k-1}(f)\) if \(d\geq k+1\). If \(\mu\in\mathcal{K}_{r}^{d-k}(f)\), the definition (3.4) of \(\mathrm{d}^{k-1}_{r,f}\,\underline{\omega}_{f}\) together with the property (3.27) immediately give (3.29). Let us consider the case \(d=k\) and \(\mu\in\mathcal{P}_{0}\Lambda^{0}(f)\). 
Then \(\mathrm{d}\mu=0\), so the definition (3.4) of \(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f}\) and \(\underline{\omega}_{\partial f}=0\) (by construction, \(\underline{\omega}_{h}\) vanishes on cells of dimension \(d-1=k-1\)) show that the left-hand side of (3.29) vanishes. Since \(P_{r,f}^{k}\underline{\eta}_{f}=\star^{-1}\eta_{f}\) (see (3.3)) and \(\int_{f}\star^{-1}\eta_{f}=0\) as \(\underline{\eta}_{h}\in\underline{X}_{r,h,b}^{k}\), the right-hand side of (3.29) vanishes as well, and this relation holds. Finally, we turn to the case \(d\geq k+1\) and \(\mu\in\mathrm{d}\mathcal{P}_{r+1}\Lambda^{d-k-1}(f)\), which is proved by induction on \(d\) (the base case \(d=k\) having already been covered). By (2.8) with \((\ell,r)\leftarrow(d-k-1,r+1)\), we have \(\mu\in\mathrm{d}\mathcal{N}_{r+1}^{d-k-1}(f)\), and we can therefore write \(\mu=\mathrm{d}\alpha\) with \(\alpha\in\mathcal{K}_{r+1}^{d-k-1}(f)\subset\mathcal{P}_{r+1}^{-1}\Lambda^{d- k-1}(f)\) (see (2.12)). Invoking the link (3.12) between discrete exterior derivatives on subcells (notice that \(d\geq(k-1)+2\)), we obtain \[\int_{f}\mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f}\,\wedge\mu=(-1)^{k}\int_ {\partial f}\mathrm{d}_{r,\partial f}^{k-1}\underline{\omega}_{\partial f}\, \wedge\,\mathrm{tr}_{\partial f}\,\,\alpha=(-1)^{k}\int_{\partial f}P_{r, \partial f}^{k}\underline{\eta}_{\partial f}\,\wedge\,\mathrm{tr}_{\partial f} \,\,\alpha,\] where the second equality follows from the induction hypothesis that (3.29) holds on subcells of \(f\). We have \(\underline{\mathrm{d}}_{r,f}^{k}\,\underline{\eta}_{f}=0\) and \(d\geq k+1\), so we can apply (3.14) with \(k+1\) instead of \(k\) to get \(\mathrm{d}_{r,f}^{k}\,\underline{\eta}_{f}=0\); the definition (3.5) of \(P_{r,f}^{k}\,\underline{\eta}_{f}\) (with \((\mu,\nu)\leftarrow(\alpha,0)\), see Remark 6 for the validity of this choice of \(\mu\)) allows us to continue with \[\int_{f}\mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f}\,\wedge\mu=-(-1)^{k} \times(-1)^{k+1}\int_{f}P_{r,f}^{k}\,\underline{\eta}_{f}\,\wedge\,\mathrm{d}\alpha.\] Recalling that \(\mathrm{d}\alpha=\mu\) concludes the proof of (3.29). Proof of Theorem 10.: As in [37, Lemma 4], it is straightforward to see that the (discrete) de Rham map establishes a chain isomorphism between the lowest-degree complex \(\mathrm{DDR}(0)\) and the CW complex defined by \(\mathcal{M}_{h}\). Since this CW complex has the same cohomology as the de Rham complex (1.2), the proof is complete if we show that the cohomology of \(\mathrm{DDR}(r)\) is isomorphic to the cohomology of \(\mathrm{DDR}(0)\). This obviously means that we can assume \(r\geq 1\) in the following. _Step 1: Reductions and extensions._ With the goal of applying [35, Proposition 2], we define reduction and extension maps between \(\mathrm{DDR}(r)\) and \(\mathrm{DDR}(0)\) as in (3.30). 
\[\begin{array}{cccccc}
\mathrm{DDR}(r): & \cdots\longrightarrow & \underline{X}_{r,h}^{k} & \xrightarrow{\ \underline{\mathrm{d}}_{r,h}^{k}\ } & \underline{X}_{r,h}^{k+1} & \longrightarrow\cdots\\[0.5em]
 & & \underline{E}_{h}^{k}\Big\uparrow\ \Big\downarrow\underline{R}_{h}^{k} & & \underline{E}_{h}^{k+1}\Big\uparrow\ \Big\downarrow\underline{R}_{h}^{k+1} & \\[0.5em]
\mathrm{DDR}(0): & \cdots\longrightarrow & \underline{X}_{0,h}^{k} & \xrightarrow{\ \underline{\mathrm{d}}_{0,h}^{k}\ } & \underline{X}_{0,h}^{k+1} & \longrightarrow\cdots
\end{array} \tag{3.30}\]
The reduction \(\underline{R}_{h}^{k}:\underline{X}_{r,h}^{k}\to\underline{X}_{0,h}^{k}\) is obtained projecting the components on \(k\)-cells onto constants:
\[\underline{R}_{h}^{k}\,\underline{\omega}_{h}\coloneqq(\pi_{0,f}^{0}\,\omega_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})}\qquad\forall\underline{\omega}_{h}\in\underline{X}_{r,h}^{k}, \tag{3.31}\]
the components on cells of dimension \(d\geq k+1\) being omitted since the corresponding polynomial spaces in \(\underline{X}_{0,h}^{k}\) are trivial (see the proof of Lemma 19). The extension \(\underline{E}_{h}^{k}:\underline{X}_{0,h}^{k}\to\underline{X}_{r,h}^{k}\), with \(\underline{E}_{h}^{k}\,\underline{\eta}_{h}\coloneqq(E_{f}^{k}\,\underline{\eta}_{f})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k,n]}\), is defined cell by cell, by increasing cell dimension, as follows: For all \(\underline{\eta}_{h}\in\underline{X}_{0,h}^{k}\) and all \(f\in\Delta_{d}(\mathcal{M}_{h})\),

* If \(d=k\), we simply set
\[E_{f}^{k}\,\underline{\eta}_{f}\coloneqq\eta_{f}\in\mathcal{P}_{0}\Lambda^{0}(f)\subset\mathcal{P}_{r}\Lambda^{0}(f); \tag{3.32a}\]
Proof of (C1)._ The definitions (3.31) and (3.32a) of the reduction and the extension components on the lowest dimensional cells directly shows that \(\underline{R}^{k}_{h}\underline{E}^{k}_{h}=\mathrm{Id}\) on \(\underline{X}^{k}_{r,h}\), which establishes a stronger result than (C1). _2. Proof of (C3) for the extension._ We now turn to (C3), considering first the case of the extension. We have to show that, for all \(\underline{\eta}_{h}\in\underline{X}^{k}_{0,h}\) it holds \(\underline{\mathrm{d}}^{k}_{r,h}\underline{E}^{k}_{h}\underline{\eta}_{h}= \underline{E}^{k+1}_{h}\underline{\mathrm{d}}^{k}_{0,h}\underline{\eta}_{h}\). Given the definitions (3.7) of the global discrete exterior derivative and of the extension, this boils down to showing that \[\star^{-1}\pi_{r,f}^{-d-k-1}(\star\mathrm{d}^{k}_{r,f}\underline{E}^{k}_{f} \underline{\eta}_{f})=\star^{-1}E^{k+1}_{f}\underline{\mathrm{d}}^{k}_{0,f} \underline{\eta}_{f}\qquad\forall f\in\Delta_{d}(\mathcal{M}_{h})\ \text{with}\ d\geq k+1\] which, testing against \(\rho\in{\cal P}_{r}^{-}\Lambda^{d-k-1}(f)\) and recalling the relation (2.4), can be recast as \[\begin{split}\int_{f}\mathrm{d}_{r,f}^{k}\underline{E}_{f}^{k} \underline{\eta}_{f}\wedge\rho=\int_{f}\star^{-1}E_{f}^{k+1}\underline{ \mathrm{d}}_{0,f}^{k}\underline{\eta}_{f}\wedge\rho\qquad\forall f\in\Delta_{d }({\cal M}_{h})\text{ with }d\geq k+1\,,\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\forall\rho\in{\cal P}_{r}^{-}\Lambda^{d-k-1 }(f).\end{split} \tag{3.35}\] We start by noticing that, by (3.33), the relation (3.35) holds for \(\rho\in{\cal K}_{r}^{d-k-1}(f)\). The decompositions (2.7a) of \({\cal P}_{r}\,\Lambda^{0}(f)={\cal P}_{r}^{-}\Lambda^{0}(f)\) (if \(d=k+1\)) and (2.16) of \({\cal P}_{r}^{-}\Lambda^{d-k-1}(f)\) (if \(d\geq k+2\)) then show that we only have to prove (3.35) for \(\rho\in{\cal P}_{0}\Lambda^{0}(f)\) (if \(d=k+1\)) or \(\rho\in\mathrm{d}{\cal K}_{r}^{d-k-2}(f)\) (if \(d\geq k+2\)). This fact is proved by induction on \(d\): * Let us first consider \(d=k+1\) and take \(\rho\in{\cal P}_{0}\Lambda^{0}(f)\). We can use this polynomial form as a test function in the definition (3.4) of \(\mathrm{d}_{0,f}^{k}\underline{\eta}_{f}\) to get \[\int_{\partial f}P_{0,\partial f}^{k}\underline{\eta}_{\partial f}\wedge \operatorname{tr}_{\partial f}\,\rho=\int_{f}\,\mathrm{d}_{0,f}^{k}\, \underline{\eta}_{f}\wedge\rho=\int_{f}\,\star^{-1}E_{f}^{k+1}\underline{ \mathrm{d}}_{0,f}^{k}\,\underline{\eta}_{f}\wedge\rho\qquad\forall\rho\in{ \cal P}_{0}\Lambda^{0}(f),\] (3.36) where the second equality follows from (3.32a) with \((k,\underline{\eta}_{f})\leftarrow(k+1,\underline{\mathrm{d}}_{0,f}^{k} \underline{\eta}_{f})\). For all \(f^{\prime}\in\Delta_{k}(f)\), by definition (3.3) of \(P_{0,f^{\prime}}^{k}\), and (3.32a) of \(E_{f^{\prime}}^{k}\), we have \(P_{0,f^{\prime}}^{k}\underline{\eta}_{f^{\prime}}=\star^{-1}\eta_{f^{\prime}} =\star^{-1}E_{f}^{k}\underline{\eta}_{f^{\prime}}=P_{r,f^{\prime}}^{k} \underline{E}_{f}^{k}\underline{\eta}_{f^{\prime}}\), where the last relation follows applying the definition (3.3) of \(P_{r,f^{\prime}}^{k}\). 
We infer from this equality and (3.36) that \[\int_{\partial f}P_{r,\partial f}^{k}\underline{E}_{\partial f}^{k}\underline{ \eta}_{\partial f}\wedge\operatorname{tr}_{\partial f}\,\rho=\int_{f}\,\star^ {-1}E_{f}^{k+1}\underline{\mathrm{d}}_{0,f}^{k}\,\underline{\eta}_{f}\wedge \rho\qquad\forall\rho\in{\cal P}_{0}\Lambda^{0}(f).\] Applying the definition (3.4) of \(\mathrm{d}_{r,f}^{k}\,\underline{E}_{f}^{k}\,\underline{\eta}_{\partial f}\) with \(\mu=\rho\) (which satisfies \(\mathrm{d}\rho=0\)) to the left-hand side then concludes the proof of (3.35). * We now take \(d\geq k+2\) and \(\rho\in\mathrm{d}{\cal K}_{r}^{d-k-2}(f)\), which we write \(\rho=\mathrm{d}\alpha\) with \(\alpha\in{\cal K}_{r}^{d-k-2}(f)\subset{\cal P}_{r}^{-}\Lambda^{d-k-2}(f)\). Applying the link (3.12) between discrete exterior derivatives on \(f\) and \(\partial f\), we have \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{E}_{f}^{k}\,\underline{\eta}_{f}\wedge \rho=(-1)^{k+1}\int_{\partial f}\mathrm{d}_{r,\partial f}^{k}\,\underline{E}_{ \partial f}^{k}\,\underline{\eta}_{\partial f}\wedge\operatorname{tr}_{ \partial f}\,\alpha.\] (3.37) By Lemma 4, for all \(f^{\prime}\in\Delta_{d-1}(f)\), \(\operatorname{tr}_{f^{\prime}}\alpha\in{\cal P}_{r}^{-}\Lambda^{d-k-2}(f^{ \prime})\), so we can apply (3.35) on \(f^{\prime}\) (by the induction hypothesis) to get \[\int_{f}\mathrm{d}_{r,f^{\prime}}^{k}\underline{E}_{f}^{k}\cdot\underline{\eta}_ {f^{\prime}}\wedge\operatorname{tr}_{f^{\prime}}\alpha=\int_{f^{\prime}}\star^ {-1}E_{f^{\prime}}^{k+1}\underline{\mathrm{d}}_{0,f^{\prime}}^{k}\,\underline{ \eta}_{f^{\prime}}\wedge\operatorname{tr}_{f^{\prime}}\alpha=\int_{f^{\prime}} P_{r,f^{\prime}}^{k+1}\underline{\mathrm{d}}_{0,f}^{k+1}\underline{\mathrm{d}}_{0,f^{ \prime}}^{k}\,\underline{\eta}_{f^{\prime}}\wedge\operatorname{tr}_{f^{\prime}} \alpha,\] the second equality being justified by (3.20) and (2.4) (with \(({\cal X},f,d,k)\leftarrow({\cal P}_{r}^{-}\Lambda^{(d-1)-(k+1)}(f^{\prime}),f^{ \prime},d-1,k+1)\)) and the fact that \(\operatorname{tr}_{f^{\prime}}\alpha\in{\cal P}_{r}^{-}\Lambda^{(d-1)-(k+1)}(f^{ \prime})\). Plugging this relation into (3.37) yields \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{E}_{f}^{k}\,\underline{\eta}_{f}\wedge \rho=(-1)^{k+1}\int_{\partial f}P_{r,\partial f}^{k+1}\underline{\mathrm{d}}_{ 0,f}^{k}\,\underline{\mathrm{d}}_{0,f}^{k}\,\underline{\eta}_{\partial f}\wedge \operatorname{tr}_{\partial f}\,\alpha.\] Invoking then the definition (3.32b) of \(E_{f}^{k+1}\underline{\mathrm{d}}_{0,f}^{k}\,\underline{\eta}_{f}\) with \((k,\mu,\nu,\underline{\eta}_{f})\leftarrow(k+1,\alpha,0,\underline{\mathrm{d}}_{0,f}\,\underline{\eta}_{f})\), and using the property \(\mathrm{d}_{0,f}^{k+1}\circ\underline{\mathrm{d}}_{0,f}^{k}=0\) (consequence of (3.15) with \(k+1\) instead of \(k\)) we infer \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{E}_{f}^{k}\,\underline{\eta}_{f}\wedge \rho=\int_{f}\star^{-1}E_{f}^{k+1}\underline{\mathrm{d}}_{0,f}^{k}\,\underline{ \eta}_{f}\wedge\mathrm{d}\alpha\] and (3.35) follows by recalling that \(\rho=\mathrm{d}\alpha\). 3. _Proof of (C3) for the reduction._ To conclude the proof of (C3), it remains to show that \(\underline{R}_{h}^{k+1}\underline{\mathrm{d}}_{r,h}^{k}\underline{\omega}_{h}= \underline{\mathrm{d}}_{0,h}^{k}\underline{R}_{h}^{k}\underline{\omega}_{h}\) for all \(\underline{\omega}_{h}\in\underline{X}_{r,h}^{k}\). 
Since vectors in \(\underline{X}_{0,h}^{k+1}\) only have constant components on cells of dimension \(k+1\), and since \(\underline{R}_{h}^{k+1}\) is defined by (3.31), we only have to show that \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge\rho=\int_{f} \mathrm{d}_{0,f}^{k}\,\underline{R}_{f}^{k}\underline{\omega}_{f}\,\wedge\rho \qquad\forall f\in\Delta_{k+1}(\mathcal{M}_{h})\,,\ \forall\rho\in\mathcal{P}_{0}\Lambda^{0}(f). \tag{3.38}\] Let \(\rho\) as above and apply the definition (3.4) of \(\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\) to \(\mu=\rho\); accounting for \(\mathrm{d}\rho=0\), we obtain \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge\rho=\int_{ \partial f}\,P_{r,\partial f}^{k}\,\underline{\omega}_{\partial f}\,\wedge \mathrm{tr}_{\partial f}\,\rho. \tag{3.39}\] For each \(f^{\prime}\in\Delta_{k}(f)\), by definition (3.3) of \(P_{r,f^{\prime}}^{k}\), we can write \[\int_{f^{\prime}}P_{r,f^{\prime}}^{k}\,\underline{\omega}_{f}\,\wedge\mathrm{ tr}_{f^{\prime}}\,\rho=\int_{f^{\prime}}\star^{-1}\omega_{f}\,\wedge\mathrm{ tr}_{f^{\prime}}\,\rho=\int_{f^{\prime}}\star^{-1}\pi_{0,f}^{0}\,\omega_{f}\,\wedge \mathrm{tr}_{f^{\prime}}\,\rho=\int_{f^{\prime}}P_{0,f^{\prime}}^{k}\, \underline{R}_{f}^{k}\,\underline{\omega}_{f^{\prime}}\,\wedge\mathrm{tr}_{ f^{\prime}}\,\rho, \tag{3.40}\] where we have used the fact that \(\mathrm{tr}_{f^{\prime}}\,\rho\in\mathcal{P}_{0}\Lambda^{0}(f^{\prime})\) to insert the projector in the second equality and the definitions (3.31) of \(\underline{R}_{f}^{k}\), and (3.3) of \(P_{0,f^{\prime}}^{k}\) to conclude. Combining (3.39) and (3.40), we find \[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\,\wedge\rho=\int_{ \partial f}\,P_{0,\partial f}^{k}\,\underline{R}_{\partial f}^{k}\,\underline {\omega}_{\partial f}\,\wedge\mathrm{tr}_{\partial f}\,\rho.\] Applying the definition (3.4) of \(\mathrm{d}_{0,f}^{k}\,\underline{R}_{f}^{k}\,\underline{\omega}_{f}\) then concludes the proof of (3.38). 4. _Proof of (C2)._ Finally, to prove (C2), we notice that if \(\underline{\omega}_{h}\in\underline{X}_{r,h}^{k}\), then by (3.31) and (3.32a) the components of \(\underline{E}_{h}^{k}\underline{R}_{h}^{k}\underline{\omega}_{h}\) on the lowest dimensional cells \(f\in\Delta_{k}(\mathcal{M}_{h})\) are just the averages of the components of \(\underline{\omega}_{h}\) on these cells; hence, \(\underline{E}_{h}^{k}\underline{R}_{h}^{k}\underline{\omega}_{h}-\underline{ \omega}_{h}\in\underline{X}_{r,h,b}^{k}\). Moreover, by the cochain map property (C3), \(\underline{\mathrm{d}}_{r,h}^{k}\,(\underline{E}_{h}^{k}\underline{R}_{h}^{k} \underline{\omega}_{h}-\underline{\omega}_{h})=\underline{E}_{h}^{k}\underline {R}_{h}^{k}\underline{\mathrm{d}}_{r,h}^{k}\underline{\omega}_{h}-\underline {\mathrm{d}}_{r,h}^{k}\underline{\omega}_{h}=\underline{0}\) whenever \(\underline{\omega}_{h}\in\mathrm{Ker}\,\underline{\mathrm{d}}_{r,h}^{k}\). We can thus, for such an \(\underline{\omega}_{h}\), apply Lemma 19 with \(\underline{\omega}_{h}\leftarrow\underline{E}_{h}^{k}\underline{R}_{h}^{k} \underline{\omega}_{h}-\underline{\omega}_{h}\) to see that this element belongs to \(\mathrm{Im}\,\underline{\mathrm{d}}_{r,h}^{k-1}\), establishing (C2). \(\Box\) ## 4 A VEM-inspired complex In this section we consider an alternative construction inspired by the Virtual Element complex of [8]. 
Notice that we make here no effort to reduce the polynomial degree of certain components of the discrete spaces, which is known to be possible; see, e.g., [9] and also [35] for a general framework with application to DDR methods. Notice also that we work in a fully discrete spirit, without attempting to identify the underlying virtual spaces (which are not needed for the purposes of the present work). Let again a polynomial degree \(r\geq 0\) be fixed. The general principle to design the VEM-inspired sequence is to select polynomial components that make it possible to reconstruct, for each \(d\)-cell and inductively on the dimension \(d\), a discrete potential capable of reproducing polynomial forms in \(\mathcal{P}_{r+1}^{-}\Lambda^{k}(f)\). The main difference with respect to the DDR approach illustrated in Section 3 is that, with the exception of \((k+1)\)-cells, the required information on the discrete exterior derivative is directly encoded in the discrete spaces. Adopting this approach has several, far-reaching, consequences. The first one is that the discrete spaces contain a mix of both traces and exterior derivatives (which, in passing, requires higher regularity in the definition of the interpolators). The components on \(k\)- and \((k+1)\)-cells in the discrete space of \(k\)-forms play a slightly different role than the others (and are, as a result, treated separately in the definition of the space). The second consequence is that the proofs of key properties (polynomial consistency, cohomology, etc.) are carried out by induction on the dimension (and not on the difference between the dimension and the form degree, as in Theorems 11 and 15). This leads to somewhat simpler arguments, at the cost of larger discrete spaces. Also, the commutation property is essentially obtained by definition of the local discrete exterior derivative (with the exception of lowest-dimensional cells). ### Definition #### 4.1.1 Discrete spaces We define the following discrete counterpart of \(H\Lambda^{k}(\Omega)\), \(0\leq k\leq n\): \[\underline{V}^{k}_{r,h}\coloneqq\sum_{f\in\Delta_{k}(\mathcal{M }_{h})}\mathcal{P}_{r}\Lambda^{0}(f)\times\sum_{f\in\Delta_{k+1}(\mathcal{M} _{h})}\mathcal{K}^{1}_{r+1}(f)\times\mathcal{K}^{0}_{r}(f)\\ \times\sum_{d=k+2}^{n}\sum_{f\in\Delta_{d}(\mathcal{M}_{h})} \mathcal{K}^{d-k}_{r+1}(f)\times\mathcal{K}^{d-k-1}_{r+1}(f). \tag{4.1}\] Notice that, on \((k+1)\)-cells, the second component has polynomial degree reduced by one compared to \(d\)-cells with \(d\geq k+2\), i.e., we have \(\mathcal{K}^{0}_{r}(f)\) instead of \(\mathcal{K}^{0}_{r+1}(f)\). A generic element of \(\underline{V}^{k}_{r,h}\) will be denoted by \[\underline{\omega}_{h}=\big{(}(\omega_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})},(\omega_{f},D_{\omega_{f},f})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n] }\big{)}. \tag{4.2}\] The notation \(D_{\omega,f}\) is reminescent of the fact that these polynomial components are interpreted as Hodge stars of exterior derivatives. We refer to Table 2 for an overview of the polynomial unknowns in \(\underline{V}^{k}_{r,f}\) in dimensions \(0\) to \(3\), as well as their vector proxies. 
\begin{table} \begin{tabular}{c|c c c c} \hline \hline \(k\)\(d\) & \(0\) & \(1\) & \(2\) & \(3\) \\ \hline \(0\) & \(\mathbb{R}=\mathcal{P}_{r}\Lambda^{0}(f_{0})\) & \(\{0\}\times\mathcal{K}^{0}_{r}(f_{1})\) & \(\{0\}\times\mathcal{K}^{1}_{r+1}(f_{2})\) & \(\{0\}\times\mathcal{K}^{2}_{r+1}(f_{3})\) \\ \(1\) & & \(\mathcal{P}_{r}\Lambda^{0}(f_{1})\) & \(\mathcal{K}^{1}_{r+1}(f_{2})\times\mathcal{K}^{0}_{r}(f_{2})\) & \(\mathcal{K}^{2}_{r+1}(f_{3})\times\mathcal{K}^{1}_{r+1}(f_{3})\) \\ \(2\) & & & \(\mathcal{P}_{r}\Lambda^{0}(f_{2})\) & \(\mathcal{K}^{1}_{r+1}(f_{3})\times\mathcal{K}^{0}_{r}(f_{3})\) \\ \(3\) & & & & \(\mathcal{P}_{r}\Lambda^{0}(f_{3})\) \\ \hline \hline \(k\)\(d\) & \(0\) & \(1\) & \(2\) & \(3\) \\ \hline \(0\) & \(\mathbb{R}=\mathcal{P}_{r}(f_{0})\) & \(\{0\}\times\mathcal{P}^{b}_{r}(f_{2})\) & \(\{0\}\times\mathcal{R}^{c}_{r+1}(f_{2})\) & \(\{0\}\times\mathcal{R}^{c}_{r+1}(f_{3})\) \\ \(1\) & & \(\mathcal{P}_{r}(f_{1})\) & \(\mathcal{R}^{c}_{r+1}(f_{2})\times\mathcal{P}^{b}_{r}(f_{2})\) & \(\mathcal{R}^{c}_{r+1}(f_{3})\times\mathcal{G}^{c}_{r+1}(f_{3})\) \\ \(2\) & & & \(\mathcal{P}_{r}(f_{2})\) & \(\mathcal{G}^{c}_{r+1}(f_{3})\times\mathcal{P}^{b}_{r}(f_{3})\) \\ \(3\) & & & & \(\mathcal{P}_{r}(f_{3})\) \\ \hline \hline \end{tabular} \end{table} Table 2: Polynomial components attached to each mesh entity \(f_{d}\) of dimension \(d\in\{0,\ldots,3\}\) for the space \(\underline{V}^{k}_{r,h}\) for \(k\in\{0,\ldots,3\}\) (top) and counterparts through vector proxies (bottom). #### 4.1.2 Interpolators For all integers \(0\leq k\leq d\leq n\) and any \(f\in\Delta_{d}(\mathcal{M}_{h})\), the local interpolator is such that, for all \(\omega\in C^{1}\Lambda^{k}(\overline{f})\), \[\underline{I}_{r,f}^{k}\,\omega\coloneqq\Big{(} (\pi_{r,f}^{0},(\star\operatorname{tr}_{f^{\prime}}\omega))_{f^{ \prime}\in\Delta_{k}(f)}, \tag{4.3}\] \[\big{(}\pi_{r+1,f^{\prime}}^{\mathcal{K},d^{\prime}-k}(\star \operatorname{tr}_{f^{\prime}}\omega),\pi_{r+1,f^{\prime}}^{\mathcal{K},d^{ \prime}-k-1}(\star\operatorname{tr}_{f^{\prime}}\operatorname{d}\omega)\big{)} _{f^{\prime}\in\Delta_{d^{\prime}}(f),\,d^{\prime}\in[k+2,d]}\Big{)}.\] _Remark 21_ (Domain of the interpolator).: Owing to the presence of polynomial components that are interpreted as exterior derivatives (compare (3.2) with (4.3)), the interpolator in the VEM-inspired construction requires higher regularity of the interpolated functions compared to the DDR complex presented in Section 3, namely \(C^{1}\Lambda^{k}(\overline{f})\) instead of \(C^{0}\Lambda^{k}(\overline{f})\). #### 4.1.3 Global discrete exterior derivative and VEM complex For all \(f\in\Delta_{k+1}(\mathcal{M}_{h})\), we define the _discrete exterior derivative_\(\operatorname{d}_{r,f}^{k}\,:\,\underline{V}_{r,f}^{k}\,\to\mathcal{P}_{r} \Lambda^{k+1}(f)\) such that, for all \(\underline{\omega}_{f}\,\in\,\underline{V}_{r,f}^{k}\), \[\int_{f}\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f}\, \wedge\,(\mu+\nu)=\int_{\partial f}\,\star^{-1}\omega_{\partial f}\,\wedge \operatorname{tr}_{\partial f}\,\mu+\int_{f}\,\star^{-1}D_{\omega,f}\,\wedge \,\nu\\ \forall(\mu,\nu)\in\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K} _{r}^{0}(f), \tag{4.4}\] where, as before, \(\omega_{\partial f}\) is defined by \((\omega_{\partial f})_{[f^{\prime}}=\omega_{f^{\prime}}\in\mathcal{P}_{r} \Lambda^{0}(f^{\prime})\) for all \(f^{\prime}\in\Delta_{k}(f)\). 
Notice that the above equation defines \(\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f}\) uniquely as, by (2.7a), \(\mu+\nu\) spans \(\mathcal{P}_{r}\Lambda^{0}(f)\) as \((\mu,\nu)\) spans \(\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\). Moreover, taking \(\mu=0\) and letting \(\nu\) span \(\mathcal{K}_{r}^{0}(f)\), we infer, using (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{K}_{r}^{0}(f),\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f},\nu)\),
\[D_{\omega,f}=\pi_{r,f}^{\mathcal{K},0}(\star\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f})\qquad\forall f\in\Delta_{k+1}(\mathcal{M}_{h}). \tag{4.5}\]
Unlike the DDR complex, the construction of a _global discrete exterior derivative_ for the VEM complex does not require first reconstructing traces on lower-dimensional cells, as all the necessary information is encoded in the polynomial components \((D_{\omega,f})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+2,n]}\) supplemented by \((\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f})_{f\in\Delta_{k+1}(\mathcal{M}_{h})}\). More specifically, for all integers \(k\in[0,n-1]\), we let \(\underline{\operatorname{d}}_{r,h}^{k}\,:\,\underline{V}_{r,h}^{k}\to\underline{V}_{r,h}^{k+1}\) be such that, for all \(\underline{\omega}_{h}\in\underline{V}_{r,h}^{k}\),
\[\underline{\operatorname{d}}_{r,h}^{k}\,\underline{\omega}_{h}\coloneqq\big((\star\operatorname{d}_{r,f}^{k}\,\underline{\omega}_{f})_{f\in\Delta_{k+1}(\mathcal{M}_{h})},(D_{\omega,f},0)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+2,n]}\big) \tag{4.6}\]
(compare with (4.2) and notice the different positioning, compared to \(\underline{\omega}_{h}\), of the polynomial components \(D_{\omega,f}\)). As for the DDR complex, we will denote by \(\underline{\operatorname{d}}_{r,f}^{k}\) the restriction of \(\underline{\operatorname{d}}_{r,h}^{k}\) to \(f\in\Delta_{d}(\mathcal{M}_{h})\) with \(d\in[0,n]\) such that \(k\leq d-1\). The VEM sequence of spaces and operators then reads
\[\operatorname{VEM}(r)\coloneqq\{0\}\xrightarrow{}\underline{V}_{r,h}^{0}\xrightarrow{\underline{\operatorname{d}}_{r,h}^{0}}\underline{V}_{r,h}^{1}\xrightarrow{}\cdots\xrightarrow{}\underline{V}_{r,h}^{n-1}\xrightarrow{\underline{\operatorname{d}}_{r,h}^{n-1}}\underline{V}_{r,h}^{n}\xrightarrow{\underline{\operatorname{d}}_{r,h}^{n}}\{0\}.
\tag{4.7}\]

#### 4.1.4 Local discrete potentials and discrete exterior derivatives

Given a form degree \(k\in[0,n]\), for all \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(k\leq d\leq n\), we define the _local discrete potential_ \(P_{r,f}^{k}\,:\,\underline{V}_{r,f}^{k}\to\mathcal{P}_{r+1}^{-}\Lambda^{k}(f)\) by induction on \(d\) as follows: For all \(\underline{\omega}_{f}\in\underline{V}_{r,f}^{k}\),

* If \(d=k\), we simply set \[P^{k}_{r,f}\underline{\omega}_{f}\coloneqq\star^{-1}\omega_{f}\in\mathcal{P}_{r}\Lambda^{d}(f)=\mathcal{P}^{-}_{r+1}\Lambda^{d}(f),\] (4.8) where the last equality follows from (2.13) if \(d=0\) (after noticing that \(\mathcal{P}_{r}\Lambda^{d}(f)\cong\mathbb{R}\cong\mathcal{P}^{-}_{r+1}\Lambda^{d}(f)\)) and from (2.15) if \(d\geq 1\);
* If \(k+1\leq d\leq n\), using the isomorphism (2.17) with \(\ell=d-k\geq 1\) and \(r\) replaced by \(r+1\), we define \(P^{k}_{r,f}\underline{\omega}_{f}\in\mathcal{P}^{-}_{r+1}\Lambda^{k}(f)\) as the unique solution of the following equation: \[(-1)^{k+1}\int_{f}P^{k}_{r,f}\underline{\omega}_{f}\wedge(\mathrm{d}\mu+\nu)=\int_{f}\star^{-1}\widetilde{D}_{\omega,f}\wedge\mu-\int_{\partial f}P^{k}_{r,\partial f}\underline{\omega}_{\partial f}\wedge\operatorname{tr}_{\partial f}\mu+(-1)^{k+1}\int_{f}\star^{-1}\omega_{f}\wedge\nu\\ \forall(\mu,\nu)\in\mathcal{K}^{d-k-1}_{r+1}(f)\times\mathcal{K}^{d-k}_{r+1}(f),\] (4.9) where \[\widetilde{D}_{\omega,f}\coloneqq\left\{\begin{aligned} \star\mathrm{d}^{k}_{r,f}\underline{\omega}_{f}&\text{ if }d=k+1,\\ D_{\omega,f}&\text{ if }d\geq k+2,\end{aligned}\right.\] (4.10) and we have introduced the piecewise polynomial boundary potential \(P^{k}_{r,\partial f}:\underline{V}^{k}_{r,\partial f}\to\Lambda^{k}(\partial f)\) such that \((P^{k}_{r,\partial f})_{|f^{\prime}}\coloneqq P^{k}_{r,f^{\prime}}\), for all \(f^{\prime}\in\Delta_{d-1}(f)\).

Leveraging the above-defined discrete potentials, we can define the _discrete exterior derivative_ \(\mathrm{d}^{k}_{r,f}:\underline{V}^{k}_{r,f}\to\mathcal{P}_{r}\Lambda^{k+1}(f)\) for all \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(k+2\leq d\leq n-1\) (this object was previously only defined for \(d=k+1\), see (4.4)), setting:
\[\mathrm{d}^{k}_{r,f}\underline{\omega}_{f}\coloneqq P^{k+1}_{r,f}\underline{\mathrm{d}}^{k}_{r,f}\underline{\omega}_{f}\qquad\forall\underline{\omega}_{f}\in\underline{V}^{k}_{r,f}. \tag{4.11}\]
Notice that these discrete exterior derivatives are not relevant in the definition of the VEM complex, but may be useful in practical applications.

### Main properties of the VEM complex

The main results for the VEM complex are stated below.

**Theorem 22** (Cohomology of the VEM complex).: _The VEM sequence (4.7) is a complex and its cohomology is isomorphic to the cohomology of the continuous de Rham complex (1.2)._

Proof.: See Section 4.6.

**Theorem 23** (Polynomial consistency of the discrete potential and exterior derivative).: _For all integers \(0\leq k\leq d\leq n\) and all \(f\in\Delta_{d}(\mathcal{M}_{h})\), it holds_
\[P^{k}_{r,f}\underline{I}^{k}_{r,f}\omega=\omega\qquad\forall\omega\in\mathcal{P}^{-}_{r+1}\Lambda^{k}(f), \tag{4.12}\]
_and, if \(d\geq k+1\),_
\[\mathrm{d}^{k}_{r,f}\underline{I}^{k}_{r,f}\omega=\mathrm{d}\omega\qquad\forall\omega\in\mathcal{P}^{-}_{r+1}\Lambda^{k}(f). \tag{4.13}\]

Proof.: See Section 4.5.
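Both (4.4) and (4.9) define the corresponding object as the unique solution of a small linear system posed on a single cell, obtained by expressing all pairings in chosen bases of the polynomial spaces involved. The following sketch only illustrates this computational pattern; the matrix and vector names are placeholders introduced for the illustration, not objects defined in the construction above.

```python
import numpy as np

def local_reconstruction(pairing_matrix: np.ndarray, rhs: np.ndarray) -> np.ndarray:
    """Solve the cell-local system giving the coefficients of a reconstruction
    such as d^k_{r,f} in (4.4) or P^k_{r,f} in (4.9).

    pairing_matrix[i, j] stands for the pairing of the j-th basis function of
    the reconstruction space against the i-th test function, and rhs[i]
    collects the boundary and volume integrals of the right-hand side.
    """
    return np.linalg.solve(pairing_matrix, rhs)

# Toy usage with placeholder data standing in for the actual pairings on a cell.
rng = np.random.default_rng(0)
n_dofs = 6                                # dimension of the reconstruction space
A = rng.standard_normal((n_dofs, n_dofs))
M = A @ A.T + n_dofs * np.eye(n_dofs)     # invertible, well-conditioned placeholder
b = rng.standard_normal(n_dofs)
coeffs = local_reconstruction(M, b)
assert np.allclose(M @ coeffs, b)
```

Because each such system only involves the polynomial components attached to a cell and its boundary, the resulting operators are computable on general polytopal meshes.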
### Complex property

**Lemma 24** (Complex property).: _The sequence (4.7) defines a complex, i.e., for all integers \(k\in[1,n-1]\) and all \(\underline{\omega}_{h}\in\underline{V}_{r,h}^{k-1}\),_
\[\underline{\mathrm{d}}_{r,h}^{k}(\underline{\mathrm{d}}_{r,h}^{k-1}\underline{\omega}_{h})=\underline{0}.\]

Proof.: Applying the definition (4.6) of the global discrete exterior derivative first for \(k-1\) then for \(k\), we obtain
\[\underline{\mathrm{d}}_{r,h}^{k-1}\underline{\omega}_{h}=\left((\star\mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})},\,(D_{\omega,f},0)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n]}\right)\in\underline{V}_{r,h}^{k}, \tag{4.14}\]
which shows that, for all \(d\in[k+1,n]\) and all \(f\in\Delta_{d}(\mathcal{M}_{h})\), the exterior derivative components of \(\underline{\mathrm{d}}_{r,h}^{k-1}\underline{\omega}_{h}\) are zero, and thus that
\[\underline{\mathrm{d}}_{r,h}^{k}(\underline{\mathrm{d}}_{r,h}^{k-1}\underline{\omega}_{h})=\left(\big(\star\mathrm{d}_{r,f}^{k}\,(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f})\big)_{f\in\Delta_{k+1}(\mathcal{M}_{h})},\,(0,0)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+2,n]}\right)\in\underline{V}_{r,h}^{k+1}.\]
The assertion is therefore proved if we show that \(\mathrm{d}_{r,f}^{k}\,(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f})=0\) for all \(f\in\Delta_{k+1}(\mathcal{M}_{h})\). Applying the definition of the local discrete exterior derivative (see (4.4)) with \(\underline{\omega}_{f}\) replaced by \(\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f}\) obtained by restricting (4.14) to \(f\), we get: For all \((\mu,\nu)\in\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\),
\[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\mathrm{d}}_{r,f}^{k-1}\underline{\omega}_{f}\wedge(\mu+\nu)=\int_{\partial f}\mathrm{d}_{r,\partial f}^{k-1}\underline{\omega}_{\partial f}\wedge\operatorname{tr}_{\partial f}\mu=0,\]
where the conclusion follows using the definition (4.4) of \(\mathrm{d}_{r,f^{\prime}}^{k-1}\underline{\omega}_{f^{\prime}}\), with \((\mu,\nu)\leftarrow(\mathrm{tr}_{f^{\prime}}\,\mu,0)\) for all \(f^{\prime}\in\Delta_{k}(f)\) and noticing, as at the end of the proof of Lemma 14, that the sum over \(f^{\prime}\) of the integrals over \(\partial f^{\prime}\) is zero.

### Commutation

**Proposition 25** (Commutation property for the discrete exterior derivative in dimension \(d=k+1\)).: _For all \(f\in\Delta_{k+1}(\mathcal{M}_{h})\), it holds_
\[\mathrm{d}_{r,f}^{k}\,\underline{I}_{r,f}^{k}\,\omega=\star^{-1}\pi_{r,f}^{0}\,(\star\mathrm{d}\omega)\qquad\forall\omega\in C^{1}\Lambda^{k}(\overline{f}), \tag{4.15}\]
_expressing the commutativity of the following diagram:_

Proof.: Plugging the definition (4.3) of the interpolator into (4.4) we get, for all \((\mu,\nu)\in\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\),
\[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{I}_{r,f}^{k}\,\omega\wedge(\mu+\nu)=\int_{\partial f}\star^{-1}\pi_{r,\partial f}^{0}\,(\star\mathrm{tr}_{\partial f}\,\omega)\wedge\mathrm{tr}_{\partial f}\,\mu+\int_{f}\star^{-1}\pi_{r,f}^{\mathcal{K},0}(\star\mathrm{d}\omega)\wedge\nu,\]
where \(\pi_{r,\partial f}^{0}\) denotes the piecewise \(L^{2}\)-orthogonal projector obtained patching together the \(\pi_{r,f^{\prime}}^{0}\), \(f^{\prime}\in\Delta_{k}(f)\).
Using (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{K}_{r}^{0}(f^{\prime}),\mathrm{d}\omega,\nu)\) for the second term and, for each \(f^{\prime}\in\Delta_{k}(f)\), \((\mathcal{X},d,f)\leftarrow(\mathcal{P}_{r}\Lambda^{0}(f^{\prime}),k,f^{\prime})\) for the first term, the projectors can be removed. The Stokes formula (2.1) along with \(\mathrm{d}\mu=0\) (since \(\mu\) is constant) then yields
\[\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{I}_{r,f}^{k}\,\omega\wedge(\mu+\nu)=\int_{f}\mathrm{d}\omega\wedge\mu+\int_{f}\mathrm{d}\omega\wedge\nu=\int_{f}\star^{-1}\pi_{r,f}^{0}\,(\star\mathrm{d}\omega)\wedge(\mu+\nu),\]
where the conclusion follows from (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{P}_{r}\Lambda^{0}(f),\mathrm{d}\omega,\mu+\nu)\). Since, by (2.7a), \(\mu+\nu\) spans \(\mathcal{P}_{r}\Lambda^{0}(f)\) as \((\mu,\nu)\) spans \(\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\), this concludes the proof.

**Proposition 26** (Commutation property for the local discrete exterior derivative).: _For all integers \(d\in[1,n]\) and \(k\leq d-1\), and all \(f\in\Delta_{d}(\mathcal{M}_{h})\), it holds_
\[\underline{\mathrm{d}}_{r,f}^{k}\,(\underline{I}_{r,f}^{k}\,\omega)=\underline{I}_{r,f}^{k+1}(\mathrm{d}\omega)\qquad\forall\omega\in C^{2}\Lambda^{k}(\overline{f}), \tag{4.16}\]
_expressing the commutativity of the following diagram:_

Proof.: Immediate consequence of (4.15) along with the definition (4.3) of the interpolator, and the property \(\mathrm{d}\circ\mathrm{d}=0\).

### Polynomial consistency

Proof of Theorem 23.: The proof proceeds by induction on the dimension \(d\). When \(d=k\), (4.12) is a direct consequence of the definitions (4.8) of the potential and (4.3) of the interpolator, which give \(P_{r,f}^{k}\,\underline{I}_{r,f}^{k}\,\omega=\star^{-1}\pi_{r,f}^{0}\,(\star\omega)=\omega\), where, to remove the projector, we have used the fact that \(\star\omega\in\mathcal{P}_{r}\Lambda^{0}(f)\), since \(\omega\in\mathcal{P}_{r+1}^{-}\Lambda^{d}(f)=\mathcal{P}_{r}\Lambda^{d}(f)\) (see (2.15) with \(r+1\) instead of \(r\)).

We next prove (4.12) for \(d\geq k+1\) assuming that it holds for \(d-1\). Writing the definition (4.9) of the potential for \(\underline{\omega}_{f}=\underline{I}_{r,f}^{k}\,\omega\), we get, for all \((\mu,\nu)\in\mathcal{K}_{r+1}^{d-k-1}(f)\times\mathcal{K}_{r+1}^{d-k}(f)\),
\[(-1)^{k+1}\int_{f}P_{r,f}^{k}\,(\underline{I}_{r,f}^{k}\,\omega)\wedge(\mathrm{d}\mu+\nu) \tag{4.17}\]
\[=\int_{f}\star^{-1}\widetilde{D}_{\omega,f}\wedge\mu-\int_{\partial f}P_{r,\partial f}^{k}\,(\underline{I}_{r,\partial f}^{k}\,\mathrm{tr}_{\partial f}\,\omega)\wedge\mathrm{tr}_{\partial f}\,\mu+(-1)^{k+1}\int_{f}\star^{-1}(\pi_{r+1,f}^{\mathcal{K},d-k}\star\omega)\wedge\nu\]
\[=\int_{f}\star^{-1}\widetilde{D}_{\omega,f}\wedge\mu-\int_{\partial f}\mathrm{tr}_{\partial f}\,\omega\wedge\mathrm{tr}_{\partial f}\,\mu+(-1)^{k+1}\int_{f}\omega\wedge\nu,\]
where we have used the induction hypothesis for the second term in the right-hand side after noticing that, by Lemma 4 with \(\ell=k\), \(\mathrm{tr}_{f^{\prime}}\,\omega\in\mathcal{P}_{r+1}^{-}\Lambda^{k}(f^{\prime})\) for all \(f^{\prime}\in\Delta_{d-1}(f)\), together with (2.4) for the third one. Recalling the definition (4.10) of \(\widetilde{D}_{\omega,f}\), we distinguish two cases for the first term in the right-hand side.
If \(d=k+1\), (4.13) (immediate consequence of (4.15) after observing that \(\mathrm{d}\mathcal{P}_{r+1}^{-}\Lambda^{k}(f)\subset\mathcal{P}_{r}\Lambda^{k+1}(f)\)) gives \(\star^{-1}\widetilde{D}_{\omega,f}=\star^{-1}\star\mathrm{d}_{r,f}^{k}\,(\underline{I}_{r,f}^{k}\,\omega)=\mathrm{d}\omega\). If, on the other hand, \(d\geq k+2\), recalling the definition (4.3) of the interpolator, we have \(\int_{f}\star^{-1}\widetilde{D}_{\omega,f}\wedge\mu=\int_{f}\star^{-1}(\pi_{r+1,f}^{\mathcal{K},d-k-1}\star\mathrm{d}\omega)\wedge\mu\overset{(2.4)}{=}\int_{f}\mathrm{d}\omega\wedge\mu\). Plugging these relations into (4.17), using the Stokes formula (2.1), and simplifying, we get
\[\int_{f}P_{r,f}^{k}\,(\underline{I}_{r,f}^{k}\,\omega)\wedge(\mathrm{d}\mu+\nu)=\int_{f}\omega\wedge(\mathrm{d}\mu+\nu),\]
which yields (4.12) for \(d\geq k+1\) since, by (2.17) with \(\ell=d-k\geq 1\), \(\mathrm{d}\mu+\nu\) spans \(\mathcal{P}_{r+1}^{-}\Lambda^{d-k}(f)\) as \((\mu,\nu)\) spans \(\mathcal{K}_{r+1}^{d-k-1}(f)\times\mathcal{K}_{r+1}^{d-k}(f)\).

We have already seen above that (4.13) holds for \(d=k+1\). To prove this relation for \(d\geq k+2\), it suffices to recall (4.11) and (4.16) to write
\[\mathrm{d}_{r,f}^{k}\,\underline{I}_{r,f}^{k}\,\omega=P_{r,f}^{k+1}(\underline{\mathrm{d}}_{r,f}^{k}\,\underline{I}_{r,f}^{k}\,\omega)=P_{r,f}^{k+1}(\underline{I}_{r,f}^{k+1}\mathrm{d}\omega)=\mathrm{d}\omega,\]
where the conclusion follows from (4.12) after observing that \(\mathrm{d}\omega\in\mathcal{P}_{r}\Lambda^{k+1}(f)\subset\mathcal{P}_{r+1}^{-}\Lambda^{k+1}(f)\).

### Cohomology

As in Section 3.5, given a form degree \(k\in[0,n]\), we first consider the following subspace of \(\underline{V}_{r,h}^{k}\):
\[\underline{V}_{r,h,b}^{k}\coloneqq\left\{\underline{\omega}_{h}=\left((\omega_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})},\,(\omega_{f},D_{\omega,f})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n]}\right)\,:\,\int_{f}\star^{-1}\omega_{f}=0\quad\forall f\in\Delta_{k}(\mathcal{M}_{h})\right\}.\]

**Lemma 27** (Exactness property for \(\underline{V}_{r,h,b}^{k}\)).: _For all \(k\in[0,n]\), if \(\underline{\eta}_{h}\in\underline{V}_{r,h,b}^{k}\) satisfies \(\underline{\mathrm{d}}_{r,h}^{k}\underline{\eta}_{h}=\underline{0}\), then there exists \(\underline{\omega}_{h}\in\underline{V}_{r,h,b}^{k-1}\) such that \(\underline{\eta}_{h}=\underline{\mathrm{d}}_{r,h}^{k-1}\underline{\omega}_{h}\), where, in accordance with the sequence (4.7), we have set \(\underline{\mathrm{d}}_{r,h}^{-1}=\underline{\mathrm{d}}_{r,h}^{n}\coloneqq 0\)._

Proof.: Recalling the definition (4.6) of \(\underline{\mathrm{d}}_{r,h}^{k}\underline{\eta}_{h}\), we have
\[\underline{\mathrm{d}}_{r,h}^{k}\underline{\eta}_{h}=\left((\star\mathrm{d}_{r,f}^{k}\underline{\eta}_{f})_{f\in\Delta_{k+1}(\mathcal{M}_{h})},(D_{\eta,f},0)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+2,n]}\right).\]
If \(k=0\), then \(\int_{f}\star^{-1}\eta_{f}=0\) implies \(\eta_{f}=0\) for all \(f\in\Delta_{0}(\mathcal{M}_{h})\); moreover, \(\eta_{f}=0\) for all \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(d\in[1,n]\), by definition (4.1) of \(\underline{V}_{r,h}^{0}\) (recall that \(\mathcal{K}_{r}^{d}(f)=\{0\}\) for all \(r\), cf. (2.6)).
The condition \(\underline{\mathrm{d}}_{r,h}^{k}\underline{\eta}_{h}=\underline{0}\) together with (4.5) yields \(D_{\eta,f}=0\) for all \(f\in\Delta_{d}(\mathcal{M}_{h})\), \(d\geq k+1\), and thus
\[\underline{\eta}_{h}=\left((0)_{f\in\Delta_{0}(\mathcal{M}_{h})},\,(0,0)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n]}\right)=\underline{\mathrm{d}}_{r,h}^{-1}0.\]
If \(1\leq k\leq n-1\), on the other hand, from \(\underline{\mathrm{d}}_{r,h}^{k}\underline{\eta}_{h}=\underline{0}\) and (4.5) we infer
\[\underline{\eta}_{h}=\left((\eta_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})},(\eta_{f},0)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n]}\right), \tag{4.18a}\]
while, if \(k=n\), we simply have
\[\underline{\eta}_{h}=(\eta_{f})_{f\in\Delta_{n}(\mathcal{M}_{h})}. \tag{4.18b}\]
Let now
\[\underline{\omega}_{h}=\left((0)_{f\in\Delta_{k-1}(\mathcal{M}_{h})},\,(0,\pi_{r,f}^{\mathcal{K},0}\eta_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})},\,(0,\eta_{f})_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\in[k+1,n]}\right)\in\underline{V}_{r,h,b}^{k-1}.\]
To check that this \(\underline{\omega}_{h}\) is well defined, it suffices to notice that, if \(f\in\Delta_{d}(\mathcal{M}_{h})\) with \(d\geq k+1=(k-1)+2\), then \(\eta_{f}\in\mathcal{K}_{r+1}^{d-k}(f)=\mathcal{K}_{r+1}^{d-(k-1)-1}(f)\) is a suitable choice for the corresponding component of \(\underline{\omega}_{h}\). By definition (4.4) of \(\mathrm{d}_{r,f}^{k-1}\), we have: For all \(f\in\Delta_{k}(\mathcal{M}_{h})\) and all \((\mu,\nu)\in\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\), since \(\omega_{f^{\prime}}=0\) for all \(f^{\prime}\in\Delta_{k-1}(f)\),
\[\int_{f}\mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f}\wedge(\mu+\nu)=\int_{f}\star^{-1}\pi_{r,f}^{\mathcal{K},0}\eta_{f}\wedge\nu=\int_{f}\star^{-1}\eta_{f}\wedge(\mu+\nu),\]
where the cancellation of \(\pi_{r,f}^{\mathcal{K},0}\) is made possible by (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{K}_{r}^{0}(f),\star^{-1}\eta_{f},\nu)\), while the introduction of \(\mu\) in the last passage is justified observing that \(\underline{\eta}_{h}\in\underline{V}_{r,h,b}^{k}\) implies \(\int_{f}\star^{-1}\eta_{f}=0\) for all \(f\in\Delta_{k}(\mathcal{M}_{h})\). This relation gives \(\mathrm{d}_{r,f}^{k-1}\underline{\omega}_{f}=\star^{-1}\eta_{f}\) for all \(f\in\Delta_{k}(\mathcal{M}_{h})\) which, combined with the definition (4.6) of the global discrete exterior derivative and the expression (4.18) of \(\underline{\eta}_{h}\), readily yields \(\underline{\eta}_{h}=\underline{\mathrm{d}}_{r,h}^{k-1}\underline{\omega}_{h}\) and concludes the proof.

Proof of Theorem 22.: Contrary to the DDR(0) complex, the VEM(0) complex is not isomorphic to the CW complex (the VEM spaces for \(r=0\) do not have only constant polynomial components on the lowest-dimensional cells). As a consequence, designing extensions and reductions between the VEM(\(r\)) and VEM(0) complexes would not directly allow us, as it was the case for the DDR complex in the proof of Theorem 10, to analyse the cohomology of the VEM complex. To circumvent this difficulty, we will instead design extensions \(\underline{E}_{h}^{k}:\underline{X}_{0,h}^{k}\to\underline{V}_{r,h}^{k}\) and reductions \(\underline{R}_{h}^{k}:\underline{V}_{r,h}^{k}\to\underline{X}_{0,h}^{k}\) between the \(\operatorname{VEM}(r)\), \(r\geq 0\), and the DDR(0) complexes, in order to show that their cohomologies are isomorphic.
By Theorem 10, this will prove that the cohomology of \(\operatorname{VEM}(r)\) is isomorphic to the continuous de Rham cohomology. Throughout the rest of this proof, \((P_{0,f}^{k},\operatorname{d}_{0,f}^{k})\) and \((P_{r,f}^{k},\operatorname{d}_{r,f}^{k})\) denote, respectively, the couple (potential reconstruction, discrete exterior derivative) of the \(\operatorname{DDR}(0)\) and \(\operatorname{VEM}(r)\) complexes. We do not need to differentiate these notations, as the argument removes all ambiguity. For all form degrees \(k\in[0,n]\), the reduction is obtained setting
\[\underline{R}_{h}^{k}\underline{\omega}_{h}\coloneqq\big((\pi_{0,f}^{0}\,\omega_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})}\big)\qquad\forall\underline{\omega}_{h}\in\underline{V}_{r,h}^{k}, \tag{4.19}\]
while the extension is given by
\[\underline{E}_{h}^{k}\underline{\eta}_{h}\coloneqq\Big((\eta_{f})_{f\in\Delta_{k}(\mathcal{M}_{h})}, \tag{4.20}\]
\[\big(\pi_{r+1,f}^{\mathcal{K},1}\,(\star P_{0,f}^{k}\,\underline{\eta}_{f}),\pi_{r,f}^{\mathcal{K},0}(\star\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f})\big)_{f\in\Delta_{k+1}(\mathcal{M}_{h})},\]
\[\big(\pi_{r+1,f}^{\mathcal{K},d-k}\,(\star P_{0,f}^{k}\,\underline{\eta}_{f}),\pi_{r+1,f}^{\mathcal{K},d-k-1}\,(\star\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f})\big)_{f\in\Delta_{d}(\mathcal{M}_{h}),\,d\geq k+2}\Big)\qquad\forall\underline{\eta}_{h}\in\underline{X}_{0,h}^{k}.\]
As in the proof of Theorem 10, we need to establish the properties (C1)-(C3) of [35, Assumption 1] to obtain the desired isomorphism in cohomology (also in this case, the relation (3.34) is an immediate consequence of (C1) and (C3)).

_Proof of (C1)._ An inspection of the definitions (4.19) of the reduction and (4.20) of the extension shows that \(\underline{R}_{h}^{k}\underline{E}_{h}^{k}\underline{\eta}_{h}=\underline{\eta}_{h}\) for all \(\underline{\eta}_{h}\in\underline{X}_{0,h}^{k}\), and thus (C1) holds a fortiori.

_Proof of (C3)._ We need to prove that both the reduction and extension are cochain maps. Let us start with the extension. We have to prove that, for any integer \(k\in[0,n-1]\) and all \(\underline{\eta}_{h}\in\underline{X}_{0,h}^{k}\), \(\underline{E}_{h}^{k+1}(\underline{\operatorname{d}}_{0,h}^{k}\underline{\eta}_{h})=\underline{\operatorname{d}}_{r,h}^{k}(\underline{E}_{h}^{k}\underline{\eta}_{h})\). Owing to the definitions (4.20) of the extension, (3.7) of \(\underline{\operatorname{d}}_{0,h}^{k}\), and (4.6) of \(\underline{\operatorname{d}}_{r,h}^{k}\), and since \(\operatorname{d}_{0,f}^{k+1}\circ\operatorname{d}_{0,f}^{k}=0\) (by (3.15) with \(r=0\) and \(k+1\) instead of \(k\)) this amounts to proving that
\[\star\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f}=\star\operatorname{d}_{r,f}^{k}\,(\underline{E}_{f}^{k}\,\underline{\eta}_{f})\qquad\forall f\in\Delta_{k+1}(\mathcal{M}_{h}), \tag{4.21}\]
\[\pi_{r+1,f}^{\mathcal{K},d-k-1}\,(\star P_{0,f}^{k+1}\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f})=\pi_{r+1,f}^{\mathcal{K},d-k-1}\,(\star\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f})\qquad\forall f\in\Delta_{d}(\mathcal{M}_{h}),\,d\geq k+2. \tag{4.22}\]
The relation (4.22) trivially follows from \(P_{0,f}^{k+1}\operatorname{d}_{0,f}^{k}=\operatorname{d}_{0,f}^{k}\), see (3.14) with \((k,r)\leftarrow(k+1,0)\).
To prove (4.21), let \(f\in\Delta_{k+1}(\mathcal{M}_{h})\) and take \((\mu,\nu)\in\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\) in the definition (4.4) of \(\operatorname{d}_{r,f}^{k}\) written for \(\underline{\omega}_{f}=\underline{E}_{f}^{k}\underline{\eta}_{f}\). By the definition (4.20) of the extension, the components of \(\underline{E}_{f}^{k}\underline{\eta}_{f}\) on the \(k\)-cells of \(f\) coincide with those of \(\underline{\eta}_{f}\), while the component playing the role of \(D_{\omega,f}\) is \(\pi_{r,f}^{\mathcal{K},0}(\star\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f})\). Hence,
\[\int_{f}\operatorname{d}_{r,f}^{k}(\underline{E}_{f}^{k}\,\underline{\eta}_{f})\wedge(\mu+\nu)=\int_{\partial f}\star^{-1}\eta_{\partial f}\wedge\operatorname{tr}_{\partial f}\mu+\int_{f}\star^{-1}\pi_{r,f}^{\mathcal{K},0}(\star\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f})\wedge\nu=\int_{f}\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f}\wedge(\mu+\nu),\]
where the conclusion follows removing the projector in the second term thanks to (2.4), and recognising in the first term, by the definition (3.7) of the DDR(0) discrete exterior derivative (the potentials on \(k\)-cells reducing to \(\star^{-1}\eta_{f^{\prime}}\)), the quantity \(\int_{f}\operatorname{d}_{0,f}^{k}\,\underline{\eta}_{f}\wedge\mu\). Since, by (2.7a), \(\mu+\nu\) spans \(\mathcal{P}_{r}\Lambda^{0}(f)\) as \((\mu,\nu)\) spans \(\mathcal{P}_{0}\Lambda^{0}(f)\times\mathcal{K}_{r}^{0}(f)\), this proves (4.21).

Let us now turn to the reduction. We need to show that, for any integer \(k\in[0,n-1]\) and all \(\underline{\omega}_{h}\in\underline{V}_{r,h}^{k}\), \(\underline{R}_{h}^{k+1}(\underline{\mathrm{d}}_{r,h}^{k}\underline{\omega}_{h})=\underline{\mathrm{d}}_{0,h}^{k}\,(\underline{R}_{h}^{k}\underline{\omega}_{h})\), i.e., accounting for the definitions (4.19) of the reduction, (3.7) of \(\underline{\mathrm{d}}_{0,h}^{k}\) (additionally noticing that \(\pi_{0,f}^{-,0}\) coincides with \(\pi_{0,f}^{0}\) owing to (2.12a)), and (4.6) of \(\underline{\mathrm{d}}_{r,h}^{k}\),
\[\pi_{0,f}^{0}\,(\star\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f})=\star\mathrm{d}_{0,f}^{k}\,\underline{R}_{f}^{k}\,\underline{\omega}_{f}\qquad\forall f\in\Delta_{k+1}(\mathcal{M}_{h}). \tag{4.23}\]
To check this relation, let \(f\in\Delta_{k+1}(\mathcal{M}_{h})\) and write, for all \(\mu\in\mathcal{P}_{0}\Lambda^{0}(f)\),
\[\int_{f}\star^{-1}\pi_{0,f}^{0}\,(\star\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f})\wedge\mu=\int_{f}\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f}\wedge\mu=\int_{\partial f}\star^{-1}\omega_{\partial f}\wedge\operatorname{tr}_{\partial f}\mu=\int_{f}\mathrm{d}_{0,f}^{k}\,(\underline{R}_{f}^{k}\,\underline{\omega}_{f})\wedge\mu,\]
where the first equality follows from (2.4) with \((\mathcal{X},\omega,\mu)\leftarrow(\mathcal{P}_{0}\Lambda^{0}(f),\mathrm{d}_{r,f}^{k}\,\underline{\omega}_{f},\mu)\), the second from (4.4) with \(\nu=0\), and the third from the definitions (4.19) of the reduction and of \(\mathrm{d}_{0,f}^{k}\), after removing the projectors \(\pi_{0,f^{\prime}}^{0}\) on the boundary thanks to (2.4) (the test function \(\operatorname{tr}_{f^{\prime}}\mu\) being constant). Since both sides of (4.23) correspond to constant forms on \(f\), this proves (4.23) and establishes that the reduction is a cochain map.

(in which case \(A^{k}(f)\) is a space of piecewise polynomial forms on a subdivision of \(f\)) or with higher inter-element regularity (\(C^{1}\) spaces, for example). The concept of (faithful) _mirror system_ plays the role of degrees of freedom in the FES framework. Mirror systems are constructed on a case-by-case basis for each FES, and are auxiliary tools in the framework: they are not required to design the FES spaces, but they identify (by duality) a basis of such spaces. A mirror system for \(A^{k}(\mathcal{M}_{h})\) is a family of subspaces of linear forms:
\[Z^{k}(\mathcal{M}_{h})=\bigvee_{\begin{subarray}{c}f\in\Delta_{d}(\mathcal{M}_{h})\\ d\in[k,n]\end{subarray}}Z^{k}(f)\qquad\text{ with }Z^{k}(f)\subset A^{k}(f)^{*}\text{ for all }f\in\Delta(\mathcal{M}_{h}), \tag{5.2}\]
where \(A^{k}(f)^{*}\) is the dual space of \(A^{k}(f)\) (actually, to link mirror systems and interpolators, each \(Z^{k}(f)\) is chosen as a subspace of \(\tilde{X}^{k}(f)^{*}\) with \(\tilde{X}^{k}(f)\supset A^{k}(f)\), but we won't need this in the discussion here). As can be seen in (5.2), a mirror system is built hierarchically on the mesh, and each \(Z^{k}(f)\) identifies the modes of the FES forms that are "interior" to \(f\); to obtain all the modes (interior and boundary) associated with \(f\), one must consider \(\bigtimes_{f^{\prime}\in\Delta_{d^{\prime}}(f),\,d^{\prime}\in[k,d]}Z^{k}(f^{\prime})\). A particular case of interest in the present context is when \(Z^{k}(f)\subset L^{2}\Lambda^{k}(f)^{*}\) (see Remark 5).
Using the Riesz representation theorem and applying the Hodge star transformation, \(Z^{k}(f)\) can then be identified with a family of subspaces of \(L^{2}\)-integrable \((d-k)\)-forms: \[Z^{k}(\mathcal{M}_{h})\,\cong\bigvee_{\begin{subarray}{c}f\in\Delta_{d}( \mathcal{M}_{h})\\ d\in[k,n]\end{subarray}}\widetilde{Z}^{d-k}(f)\quad\text{ with }\widetilde{Z}^{d-k}(f)\subset L^{2} \Lambda^{d-k}(f). \tag{5.3}\] Here, and contrary to (5.1), no compatibility condition of the traces is imposed: the spaces \(\widetilde{Z}^{d-k}(f)\) are completely disconnected from each other. FEEC and FES provide computable spaces - that is, in which functions are entirely described in an algebraic manner - only on certain types of meshes. This is due to the requirement of compatible traces. In the DDR and VEM constructions of Sections 3 and 4, on the other hand, this requirement of computability for conforming subspaces is relaxed. Actually, no such space even needs to be identified: polytopal methods can be entirely built using spaces of polynomial functions on the mesh, without any compatibility condition on the traces. These spaces are explicit, and their basis is directly given by the polynomial components. Comparing (3.1) and (5.3) for example, we see that the DDR space plays the role of a mirror system, and puts discrete polynomial components at the center of the construction. A similar approach is also true for the VEM-inspired spaces (4.1), with, contrary to DDR, some polynomial components representing exterior derivatives; see the definition (4.3) of the interpolator. A closer link between DDR and FES can be drawn by noticing that the FES [30, Section 2.1] has the DDR spaces as mirror system (in the sense of (5.3)). This FES space, based on liftings of harmonic functions on each cell, therefore identifies a space of conforming functions whose degrees of freedom correspond to the DDR polynomial components; note that, in the context of vector proxies, another such identification was done in [11, Section 6.2]. The analogies, however, seem to stop here. While the FES theory would then use this conforming space and the continuous exterior derivative d to construct a discrete de Rham complex, DDR reconstructs _directly from the unknowns (mirror system)_ a discrete exterior derivative \(\underline{\mathrm{d}}_{,h}\), whose link with the continuous derivative of the corresponding FES function is not immediate. The appeal of this fully discrete approach is that, even when the FES space may not be computable (e.g., on polytopal meshes) and thus not directly usable in a scheme, the DDR space, its discrete exterior derivative, and its potential reconstruction are always computable, and are polynomially consistent, thus ensuring their practical applicability and optimal approximation properties. ### Distributional Differential Forms The theory of Distributional Differential Forms (DDF) has been introduced in [52] as a generalisation of the construction in [20] for the a posteriori error analysis of Nedelec edge elements. DDF are built on triangulations of the domain and, using their relation with the underlying simplicial complexes (as well as the concept of double complexes), their cohomology was analysed in [52] for rather general boundary conditions. Poincare-Friedrichs inequalities were later established in [28]. 
As is the case for the spaces appearing in the DDR and VEM complexes, DDF spaces are collections of differential forms on cells of various dimensions, with form degree depending on the dimension of the cell: if the domain \(\Omega\) has dimension \(n\), the DDF space of degree \(k\) is made of \((k-n+d)\)-forms on \(d\)-cells. No compatibility of the traces is enforced on these forms, which can be completely discontinuous between two \(d\)-simplices. The discrete distributional exterior derivative on the DDF space is then composed of two contributions: the exterior derivative inside the simplices, and a trace term. For example, focusing on the highest dimension \(d=n\), if the DDF space of \(k\)-forms is \[\hat{\Lambda}^{k}_{-2}(\Delta_{n}(\mathcal{M}_{h}))=\hat{\Lambda}^{k}_{-1}( \Delta_{n}(\mathcal{M}_{h}))\oplus\hat{\Lambda}^{k-1}_{-1}(\Delta_{n-1}( \mathcal{M}_{h})), \tag{5.4}\] (with \(\hat{\Lambda}^{\ell}_{-1}\) subspace of piecewise \(C^{\infty}\Lambda^{\ell}\) forms, the index \(-1\) expressing the absence of continuity properties at the interfaces), for a family \(\omega_{n,h}=(\omega_{f})_{f\,\in\Delta_{n}(\mathcal{M}_{h})}\in\hat{\Lambda} ^{k}_{-1}(\Delta_{n}(\mathcal{M}_{h}))\), we define the distributional derivative \(\hat{\mathrm{d}}^{k}_{h}:\hat{\Lambda}^{k}_{-1}(\Delta_{n}(\mathcal{M}_{h})) \rightarrow\hat{\Lambda}^{k+1}_{-2}(\Delta_{n}(\mathcal{M}_{h}))\) by \[\hat{\mathrm{d}}^{k}_{h}\omega_{n,h}=\left((\mathrm{d}^{k}\omega_{f})_{f\,\in \Delta_{n}(\mathcal{M}_{h})}\cdot\left(-\sum_{f\,\in\Sigma_{n}(f^{\prime})} \varepsilon_{f\,f^{\prime}}\,\mathrm{tr}_{f^{\prime}}\,\omega_{f}\right)_{f^ {\prime}\in\Delta_{n-1}(\mathcal{M}_{h})}\right), \tag{5.5}\] where \(\Sigma_{n}(f^{\prime})\) is the set of \(n\)-simplices \(f\) that share \(f^{\prime}\) (that is, \(f^{\prime}\in\Delta_{n-1}(f)\)), and \(\varepsilon_{f\,f^{\prime}}\) is the relative orientation of the simplex \(f^{\prime}\) with respect to the simplex \(f\). Note that, in (5.5), we have adopted a presentation of the distributional derivative that distributes its two contributions (D and T in [52]) on the corresponding components \((\hat{\Lambda}^{k+1-i}_{-1}(\Delta_{n-i}(\mathcal{M}_{h})))_{i=0,1}\) of \(\hat{\Lambda}^{k+1}_{-2}(\Delta_{n}(\mathcal{M}_{h}))\) (see (5.4) with \(k+1\) instead of \(k\)), instead of writing \(\hat{\mathrm{d}}^{k}_{h}\) as a sum of elements in the global space \(\hat{\Lambda}^{k+1}_{-2}(\Delta_{n}(\mathcal{M}_{h}))\); this is to better compare with the definition (3.7). This definition of distributional derivative is a global one, obtained by testing the piecewise smooth form \(\omega_{n,h}\) against globally smooth forms, which classically results in a term inside each \(f\in\Delta_{n}(\mathcal{M}_{h})\) corresponding to the standard exterior derivative (first component in (5.5)), and a jump across the \((n-1)\)-sub-simplices based on the difference of traces on the two adjacent \(n\)-simplices (second component in (5.5)). A crucial remark is that, in (5.5), the component \((\mathrm{d}^{k}\omega_{f})_{f\,\in\Delta_{n}(\mathcal{M}_{h})}\) of \(\hat{\mathrm{d}}^{k}_{h}\omega_{n,h}\) on \(n\)-cells only depends on the values \(\omega_{n,h}\) of the discrete distributional differential form on \(n\)-cells, not on the values of these forms on lower-dimensional cells (e.g., \(\hat{\Lambda}^{k-1}_{-1}(\Delta_{n-1}(\mathcal{M}_{h}))\) in (5.4)). 
This is in contrast with the discrete exterior derivatives in DDR and VEM complexes, whose definition on higher-dimensional cells depends on polynomial components on their sub-cells; see (3.4) and (4.6). Another difference between DDR and DDF can be seen when recasting the discrete exterior derivative: integrating by parts (3.4) yields the following characterisation: \[\int_{f}\,\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\,\wedge\mu =-\int_{f}\,\mathrm{d}(\star^{-1}\omega_{f})\wedge\mu+\int_{\partial f}\,(P^{k }_{r,\partial f}\,\underline{\omega}_{\partial f}-\mathrm{tr}_{\partial f}\,( \star^{-1}\omega_{f}))\wedge\mathrm{tr}_{\partial f}\,\mu\\ \forall\mu\in\mathcal{P}_{r}\Lambda^{d-k-1}(f).\] This relation reveals that \(\mathrm{d}^{k}_{r,f}\,\underline{\omega}_{f}\,\) is, as in DDF, composed of an exterior derivative term in the \(d\)-cell and a boundary term involving jumps. However, contrary to DDF, the jumps here are between the trace of the \(d\)-cell unknown and the potential \(P^{k}_{r,\partial f}\,\underline{\omega}_{\partial f}\,\) reconstructed on \((d-1)\)-cells (which depends on the unknowns on all \(d^{\prime}\)-subcells of \(f\), \(k\leq d^{\prime}\leq d\)), not between traces of two \(d\)-cells unknowns (as in (5.5) with \(d=n\)). In this respect, the "jump" term in DDR relates more to the kind of face differences encountered in polytopal methods (e.g., the HHO method [36]) while the jump term in DDF is more akin to those arising in discontinuous Galerkin (DG) methods [41]. This comparison can be extended to the potential reconstructions themselves. Equation (3.19) shows that \(P^{k}_{r,f}\,\underline{\omega}_{f}\) is obtained applying a higher-order correction to the cell component \(\star^{-1}\omega_{f}\), designed from the discrete exterior derivative on \(f\) and the potentials on \(\partial f\). This _enhancement_ process ensures the high-order consistency of the method starting from lower-order polynomial unknowns. In the context of elliptic equations, it is commonly used in methods with unknowns in the elements and on the faces of the mesh, but it is not directly available in DG methods. In DDF, as in DG, the cell unknown itself must be used (e.g., in a scheme to discretise the source term), and the consistency is therefore limited by the degree of this unknown. ## Appendix A Differential forms and vector proxies In this section, we briefly recall basic concepts on alternating (resp. differential) forms, and their representation in terms of vectors (resp. vector fields); these representations are often referred to as "vector proxies". We refer the reader to [2, Chapter 6] for a presentation in the framework of Finite Element Exterior Calculus, and to [15], [25, Chapter 1], [50, Chapter 1] for an introduction in more general scientific and engineering contexts. ### Exterior algebra in \(\mathbb{R}^{n}\) #### a.1.1 Alternating forms Let \(\{\boldsymbol{e}_{i}\}_{i\in[1,n]}\) be the canonical basis of \(\mathbb{R}^{n}\), equipped with the standard inner product. A basis for the space of linear forms over \(\mathbb{R}^{n}\), i.e., the dual space \((\mathbb{R}^{n})^{\prime}\) of \(\mathbb{R}^{n}\), is given by \(\{\mathrm{d}x^{i}\}_{i\in[1,n]}\), with \(\mathrm{d}x^{i}(\boldsymbol{e}_{j})\coloneqq\delta^{i}_{j}\) (Kronecker symbol), for all \((i,j)\in[1,n]^{2}\). The starting point of exterior calculus is to consider _alternating_ multilinear forms, vanishing whenever they are applied to a set of linearly dependent vectors in \(\mathbb{R}^{n}\). 
For any integer \(k\geq 1\), the set of alternating \(k\)-linear forms on \(\mathbb{R}^{n}\) is denoted by \(\mathrm{Alt}^{k}(\mathbb{R}^{n})\); by convention, we set \(\mathrm{Alt}^{0}(\mathbb{R}^{n})\coloneqq\mathbb{R}\). We also note that \(\mathrm{Alt}^{1}(\mathbb{R}^{n})=(\mathbb{R}^{n})^{\prime}\) and that \(\mathrm{Alt}^{k}(\mathbb{R}^{n})=\{0\}\) if \(k>n\) (since families of \(k>n\) vectors are always linearly dependent). It can be checked that \(\dim\mathrm{Alt}^{k}(\mathbb{R}^{n})=\binom{n}{k}\). In particular, \(\mathrm{Alt}^{n}(\mathbb{R}^{n})\) is the 1-dimensional space spanned by the determinant in the canonical basis vol (called the volume form). #### a.1.2 Exterior product Given two alternating multilinear forms \(\omega\in\mathrm{Alt}^{i}(\mathbb{R}^{n})\) and \(\mu\in\mathrm{Alt}^{j}(\mathbb{R}^{n})\), their _exterior product_\(\omega\wedge\mu\in\mathrm{Alt}^{i+j}(\mathbb{R}^{n})\) is defined, for any vectors \(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{i+j}\in\mathbb{R}^{n}\), by \[(\omega\wedge\mu)(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{i+j})\coloneqq \sum_{\sigma\in\Sigma_{i,j}}\mathrm{sign}(\sigma)\,\omega(\boldsymbol{v}_{ \sigma_{1}},\ldots,\boldsymbol{v}_{\sigma_{i}})\,\mu(\boldsymbol{v}_{\sigma_{ i+1}},\ldots,\boldsymbol{v}_{\sigma_{i+j}}),\] where \(\Sigma_{i,j}\) is the set of all permutations \(\sigma\) of the \((i+j)\)-tuple \((1,\ldots,i+j)\) such that \(\sigma_{1}<\cdots<\sigma_{i}\) and \(\sigma_{i+1}<\cdots<\sigma_{i+j}\). The exterior product satisfies the anticommutativity law \[\omega\wedge\mu=(-1)^{ij}\mu\wedge\omega,\] (A.1) so that, in particular, we have \(\mathrm{d}x^{i}\wedge\mathrm{d}x^{i}=0\) and \(\mathrm{d}x^{i}\wedge\mathrm{d}x^{j}=-\mathrm{d}x^{j}\wedge\mathrm{d}x^{i}\). With these definitions, for \(k\in[1,n]\) a basis of the space \(\mathrm{Alt}^{k}(\mathbb{R}^{n})\) is \(\{\mathrm{d}x^{\sigma_{1}}\wedge\cdots\wedge\mathrm{d}x^{\sigma_{k}}\}_{\sigma}\) where \(\sigma\) spans all strictly increasing functions \([1,k]\to[1,n]\). Hence, any \(\omega\in\mathrm{Alt}^{k}(\mathbb{R}^{n})\) can be written \[\omega=\sum_{1\leq\sigma_{1}<\cdots<\sigma_{k}\leq n}a_{\sigma}\ \mathrm{d}x^{\sigma_{1}}\wedge\cdots\wedge\mathrm{d}x^{\sigma_{k}},\quad a_{ \sigma}\in\mathbb{R}.\] (A.2) #### a.1.3 Hodge star operator The scalar product in \(\mathbb{R}^{n}\) induces a scalar product, denoted by \(\langle\cdot,\cdot\rangle\), on \(\operatorname{Alt}^{n-k}(\mathbb{R}^{n})\) - namely, the scalar product for which the aforementioned basis \(\{\operatorname{d}\!x^{\sigma_{1}}\wedge\dots\wedge\operatorname{d}\!x^{\sigma _{n-k}}\}_{\sigma}\) of \(\operatorname{Alt}^{n-k}(\mathbb{R}^{n})\) is orthonormal. The _Hodge star operator_ is the unique linear mapping \(\star:\operatorname{Alt}^{k}(\mathbb{R}^{n})\to\operatorname{Alt}^{n-k}( \mathbb{R}^{n})\) such that, for all \(\omega\in\operatorname{Alt}^{k}(\mathbb{R}^{n})\), \(\langle\star\omega,\mu\rangle\mathrm{vol}=\omega\wedge\mu\) for all \(\mu\in\operatorname{Alt}^{n-k}(\mathbb{R}^{n})\). It can be checked that \[\star(\operatorname{d}\!x^{\sigma_{1}}\wedge\dots\wedge\operatorname{d}\!x^{ \sigma_{k}})=\operatorname{sign}(\sigma,\tau)(\operatorname{d}\!x^{\tau_{1}} \wedge\dots\wedge\operatorname{d}\!x^{\tau_{n-k}}),\] where \((\sigma,\tau)=(\sigma_{1},\dots,\sigma_{k},\tau_{1},\dots,\tau_{n-k})\) is a permutation of \((1,\dots,n)\) such that \(\sigma_{1}<\dots<\sigma_{k}\) and \(\tau_{1}<\dots<\tau_{n-k}\). 
From the above identity, one can infer that \[\star(\star\omega)=(-1)^{k(n-k)}\omega\qquad\forall\omega\in\operatorname{Alt }^{k}(\mathbb{R}^{n})\] (A.3) and, hence, that \(\langle\star\omega,\star\mu\rangle=\langle\omega,\mu\rangle\), i.e., \(\star\) is an isometry. Formula (A.3) justifies the definition (2.2) of \(\star^{-1}\). The anticommutativity (A.1) of \(\wedge\), the definition of \(\star\), and the symmetry of \(\langle\cdot,\cdot\rangle\) then give \[\star^{-1}\omega\wedge\mu=\mu\wedge\star\omega=\omega\wedge\star\mu\qquad \forall\omega,\mu\in\operatorname{Alt}^{k}(\mathbb{R}^{n}).\] (A.4) **Example 28** (Hodge star operator in two and three dimensions).: _If \(\omega\in\operatorname{Alt}^{2}(\mathbb{R}^{3})\), i.e., \(\omega=a_{12}\operatorname{d}\!x^{1}\wedge\operatorname{d}\!x^{2}+a_{13} \operatorname{d}\!x^{1}\wedge\operatorname{d}\!x^{3}+a_{23}\operatorname{d} \!x^{2}\wedge\operatorname{d}\!x^{3}\) (see (A.2)), one obtains \(\star\omega\in\operatorname{Alt}^{1}(\mathbb{R}^{3})\) with_ \[\star\omega=a_{12}\operatorname{d}\!x^{3}-a_{13}\operatorname{d}\!x^{2}+a_{2 3}\operatorname{d}\!x^{1}.\] _If \(\omega\in\operatorname{Alt}^{1}(\mathbb{R}^{2})\), i.e., \(\omega=a_{1}\operatorname{d}\!x^{1}+a_{2}\operatorname{d}\!x^{2}\), then \(\star\omega\in\operatorname{Alt}^{1}(\mathbb{R}^{2})\) with_ \[\star\omega=a_{1}\operatorname{d}\!x^{2}-a_{2}\operatorname{d}\!x^{1}.\] #### a.1.4 Vector proxies for alternating forms As already mentioned in Section A.1.1, \(\operatorname{Alt}^{0}(\mathbb{R}^{n})=\mathbb{R}\) and \(\operatorname{Alt}^{n}(\mathbb{R}^{n})\cong\mathbb{R}\). Using the Riesz representation theorem to identify \((\mathbb{R}^{n})^{\prime}\) and \(\mathbb{R}^{n}\), we can identify two further spaces of alternating forms. Specifically, \(\operatorname{Alt}^{1}(\mathbb{R}^{n})=(\mathbb{R}^{n})^{\prime}\cong\mathbb{ R}^{n}\) and, writing \(\star\operatorname{Alt}^{n-1}(\mathbb{R}^{n})=\operatorname{Alt}^{1}(\mathbb{ R}^{n})\cong\mathbb{R}^{n}\), since \(\star\) is bijective, we obtain the identification \(\operatorname{Alt}^{n-1}(\mathbb{R}^{n})\cong\mathbb{R}^{n}\). Applied with \(n=3\), and recalling the formula for Hodge star transformations of 2-forms in Remark 28, these identifications lead to considering a vector \(\boldsymbol{v}=(a,b,c)\in\mathbb{R}^{3}\) as a _proxy_ for both the alternating linear and bilinear forms \[\operatorname{Alt}^{1}(\mathbb{R}^{3})\ni\omega=a\operatorname{d}\!x^{1}+b \operatorname{d}\!x^{2}+c\operatorname{d}\!x^{3}\text{ and }\operatorname{Alt}^{2}(\mathbb{R}^{3})\ni\mu=a \operatorname{d}\!x^{2}\wedge\operatorname{d}\!x^{3}-b\operatorname{d}\!x^{1 }\wedge\operatorname{d}\!x^{3}+c\operatorname{d}\!x^{1}\wedge\operatorname{d} \!x^{2}.\] On the other hand, when \(n=2\), the discussion above gives two possible ways to identify \(\operatorname{Alt}^{1}(\mathbb{R}^{2})=\operatorname{Alt}^{2-1}(\mathbb{R}^{2})\) with \(\mathbb{R}^{2}\). This leads to associating \(a\operatorname{d}\!x^{1}+b\operatorname{d}\!x^{2}=\omega\in\operatorname{Alt}^{ 1}(\mathbb{R}^{2})\) either to the vector \(\boldsymbol{v}=(a,b)\in\mathbb{R}^{2}\), or to its rotation by a right angle \(\varrho_{-\pi/2}\boldsymbol{v}=(b,-a)\in\mathbb{R}^{2}\). 
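The identities above are straightforward to verify numerically through the coefficient representations. The following minimal check is an illustration only, using the coefficient orderings of Example 28 and the identifications just described; it confirms \(\star(\star\omega)=(-1)^{k(n-k)}\omega\) for \(k=1\) in dimensions \(2\) and \(3\).

```python
import numpy as np

# 1-form coefficients are ordered (dx1, dx2, dx3); 2-form coefficients are
# ordered (dx1^dx2, dx1^dx3, dx2^dx3).
def star_1form_3d(a):
    a1, a2, a3 = a
    return np.array([a3, -a2, a1])      # star dx1 = dx2^dx3, star dx2 = -dx1^dx3, star dx3 = dx1^dx2

def star_2form_3d(a):
    a12, a13, a23 = a
    return np.array([a23, -a13, a12])   # Example 28

def star_1form_2d(a):
    a1, a2 = a
    return np.array([-a2, a1])          # star(a1 dx1 + a2 dx2) = a1 dx2 - a2 dx1

rng = np.random.default_rng(0)
w3, w2 = rng.standard_normal(3), rng.standard_normal(2)
# (A.3): star(star w) = +w for (k, n) = (1, 3) ...
assert np.allclose(star_2form_3d(star_1form_3d(w3)), w3)
# ... and star(star w) = -w for (k, n) = (1, 2).
assert np.allclose(star_1form_2d(star_1form_2d(w2)), -w2)
```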
Based on the above identifications, when \(n=3\), one can interpret the exterior product of two alternating multilinear forms \(\omega\wedge\mu\) in terms of vector proxies \((\boldsymbol{w},\boldsymbol{v})\) as follows:

* the vector product \(\mathbb{R}^{3}\times\mathbb{R}^{3}\ni(\boldsymbol{w},\boldsymbol{v})\mapsto\boldsymbol{w}\times\boldsymbol{v}\in\mathbb{R}^{3}\) when \((\omega,\mu)\in\operatorname{Alt}^{1}(\mathbb{R}^{3})\times\operatorname{Alt}^{1}(\mathbb{R}^{3})\);
* the dot product \(\mathbb{R}^{3}\times\mathbb{R}^{3}\ni(\boldsymbol{w},\boldsymbol{v})\mapsto\boldsymbol{w}\cdot\boldsymbol{v}\in\mathbb{R}\) when \((\omega,\mu)\in\operatorname{Alt}^{1}(\mathbb{R}^{3})\times\operatorname{Alt}^{2}(\mathbb{R}^{3})\).

On the other hand, if \(n=2\) and \(\omega\), \(\mu\in\operatorname{Alt}^{1}(\mathbb{R}^{2})\), we can write \(\omega\wedge\mu=(a\operatorname{d}\!x^{1}+b\operatorname{d}\!x^{2})\wedge(f\operatorname{d}\!x^{1}+g\operatorname{d}\!x^{2})=(ag-bf)\operatorname{d}\!x^{1}\wedge\operatorname{d}\!x^{2}\). Considering the correspondences \(\omega\leftrightarrow\boldsymbol{w}=(a,b)\) and \(\mu\leftrightarrow\boldsymbol{v}=(f,g)\), we obtain
\[\omega\wedge\mu=(\boldsymbol{w}\cdot\varrho_{-\pi/2}\boldsymbol{v})\,\operatorname{d}\!x^{1}\wedge\operatorname{d}\!x^{2}.\] (A.5)

#### a.1.5 Contraction and trace

For a given vector \(\mathbf{v}\in\mathbb{R}^{n}\), the _contraction_ \(\omega\lrcorner\mathbf{v}\in\operatorname{Alt}^{k-1}(\mathbb{R}^{n})\) of \(\omega\in\operatorname{Alt}^{k}(\mathbb{R}^{n})\) with \(\mathbf{v}\) is defined, for any \(\mathbf{v}_{1},\ldots,\mathbf{v}_{k-1}\in\mathbb{R}^{n}\), by
\[(\omega\lrcorner\mathbf{v})(\mathbf{v}_{1},\ldots,\mathbf{v}_{k-1})\coloneqq\omega(\mathbf{v},\mathbf{v}_{1},\ldots,\mathbf{v}_{k-1}).\] (A.6)
In terms of vector proxies, in the case where \(n=3\), this contraction with \(\mathbf{v}\) corresponds to

* the scalar product \(\mathbb{R}^{3}\ni\mathbf{w}\mapsto\mathbf{v}\cdot\mathbf{w}\in\mathbb{R}\) when \(\mathbf{w}\leftrightarrow\omega\in\operatorname{Alt}^{1}(\mathbb{R}^{3})\);
* the vector product \(\mathbb{R}^{3}\ni\mathbf{w}\mapsto\mathbf{w}\times\mathbf{v}\in\mathbb{R}^{3}\) when \(\mathbf{w}\leftrightarrow\omega\in\operatorname{Alt}^{2}(\mathbb{R}^{3})\);
* the multiplication of \(\mathbf{v}\) by the real number \(w\), i.e., \(\mathbb{R}\ni w\mapsto w\mathbf{v}\in\mathbb{R}^{3}\), when \(w\leftrightarrow\omega\in\operatorname{Alt}^{3}(\mathbb{R}^{3})\).

Let now \(V\subset W\) be finite dimensional subspaces of \(\mathbb{R}^{n}\), and \(\iota_{V}:V\hookrightarrow W\) be the inclusion of \(V\) in \(W\). The _trace_ \(\operatorname{tr}_{V}:\operatorname{Alt}^{k}(W)\to\operatorname{Alt}^{k}(V)\) is the pullback under \(\iota_{V}\), that is: for any \(\mathbf{v}_{1},\ldots,\mathbf{v}_{k}\in V\),
\[\operatorname{tr}_{V}\omega(\mathbf{v}_{1},\ldots,\mathbf{v}_{k})\coloneqq\omega(\iota_{V}\mathbf{v}_{1},\ldots,\iota_{V}\mathbf{v}_{k}).\] (A.7)
The trace respects the exterior product, i.e., \(\operatorname{tr}_{V}(\omega\wedge\mu)=\operatorname{tr}_{V}\omega\wedge\operatorname{tr}_{V}\mu\). It is easy to see that, through the vector proxy of \(\operatorname{Alt}^{1}\) spaces, \(\operatorname{tr}_{V}:\operatorname{Alt}^{1}(W)\to\operatorname{Alt}^{1}(V)\) is the orthogonal projection \(\pi_{V}:W\to V\) of a vector \(\mathbf{w}\in W\) onto \(V\).
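These proxy interpretations can be spot-checked in the same spirit. The snippet below is again purely illustrative, with the same coefficient orderings as above; it verifies that the wedge of two 1-forms in \(\mathbb{R}^{3}\) reproduces the cross product of the proxies, that the 2D identity (A.5) holds, and that the contraction of a 2-form with \(\mathbf{v}\) reproduces \(\mathbf{w}\times\mathbf{v}\).

```python
import numpy as np

def wedge_11_3d(u, v):
    """Proxy of the wedge of two 1-forms in R^3, expanded from the definition
    (2-form proxy (c1, c2, c3) <-> c1 dx2^dx3 - c2 dx1^dx3 + c3 dx1^dx2)."""
    c12 = u[0]*v[1] - u[1]*v[0]
    c13 = u[0]*v[2] - u[2]*v[0]
    c23 = u[1]*v[2] - u[2]*v[1]
    return np.array([c23, -c13, c12])

def wedge_11_2d(u, v):
    """Coefficient of dx1^dx2 in the wedge of two 1-forms in R^2."""
    return u[0]*v[1] - u[1]*v[0]

def contract_2form_3d(w, v):
    """Proxy of (2-form with proxy w) contracted with v, expanded from (A.6)."""
    a, b, c = w                          # w <-> a dx2^dx3 - b dx1^dx3 + c dx1^dx2
    return np.array([b*v[2] - c*v[1], c*v[0] - a*v[2], a*v[1] - b*v[0]])

rng = np.random.default_rng(1)
w, v = rng.standard_normal(3), rng.standard_normal(3)
p, q = rng.standard_normal(2), rng.standard_normal(2)
rot = lambda z: np.array([z[1], -z[0]])                      # varrho_{-pi/2}
assert np.allclose(wedge_11_3d(w, v), np.cross(w, v))        # wedge <-> cross product
assert np.isclose(wedge_11_2d(p, q), p @ rot(q))             # identity (A.5)
assert np.allclose(contract_2form_3d(w, v), np.cross(w, v))  # contraction <-> w x v
```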
Let us fix an integer \(m\in[1,n]\) and suppose that \(\dim(W)=m\) and \(\dim(V)=m-1\), and that both spaces are oriented; let \(\mathbf{n}_{V}\) be the unit normal to \(V\) such that, given a positively oriented basis \((\mathbf{e}_{1},\ldots,\mathbf{e}_{m-1})\) of \(V\), the family \((\mathbf{n}_{V},\mathbf{e}_{1},\ldots,\mathbf{e}_{m-1})\) forms a positively oriented basis of \(W\). Then, an identification of the trace \(\operatorname{tr}_{V}:\operatorname{Alt}^{m-1}(W)\to\operatorname{Alt}^{m-1}(V)\) through vector proxies is the scalar product with the vector \(\mathbf{n}_{V}\), that is, \(W\ni\mathbf{w}\mapsto\mathbf{w}\cdot\mathbf{n}_{V}\in\mathbb{R}\).

### Exterior calculus in \(\mathbb{R}^{n}\)

#### a.2.1 Differential forms

Let \(M\) be an \(n\)-dimensional flat manifold. When the coefficients in (A.2) are functions \(a_{\sigma}:M\to\mathbb{R}\), the map \(\omega:M\to\operatorname{Alt}^{k}(\mathbb{R}^{n})\) is referred to as a _differential form_, or simply a \(k\)-form. Consistently with the notation adopted in Section 2.1, the space of \(k\)-forms over \(M\) without any specific smoothness requirement on the coefficients \(a_{\sigma}\) is denoted by \(\Lambda^{k}(M)\). If \(\omega\in\Lambda^{k}(M)\), the value of \(\omega\) at \(\boldsymbol{x}\in M\) is denoted by \(\omega_{\boldsymbol{x}}\in\operatorname{Alt}^{k}(\mathbb{R}^{n})\). If the coefficients \(a_{\sigma}\) in (A.2) are polynomial functions, \(\omega\) is said to be a _polynomial differential form_. Specifically, for an integer \(r\geq 0\), the space of polynomial \(k\)-forms of degree \(\leq r\) is defined as
\[\mathcal{P}_{r}\Lambda^{k}(M)\coloneqq\left\{\sum_{1\leq\sigma_{1}<\cdots<\sigma_{k}\leq n}p_{\sigma}\;\mathrm{d}x^{\sigma_{1}}\wedge\cdots\wedge\mathrm{d}x^{\sigma_{k}}\;:\;p_{\sigma}\in\mathcal{P}_{r}(M)\right\},\]
where \(\mathcal{P}_{r}(M)\) is the space of scalar polynomials of degree \(\leq r\) over \(M\). All the arguments concerning vector proxies presented in Section A.1 for alternating \(k\)-linear forms can be immediately extended to the case of \(k\)-forms. Hence, when \(n\in\{2,3\}\), their corresponding vector proxies are scalar fields over \(M\) when \(k\in\{0,n\}\), and vector fields over \(M\) when \(k\in\{1,n-1\}\).

#### a.2.2 Exterior derivative and de Rham complexes

Provided that the coefficients \(a_{\sigma}\) in (A.2) are smooth enough, the _exterior derivative_ of a \(k\)-form \(\omega\in\Lambda^{k}(M)\) is the linear unbounded operator \(\mathrm{d}:\Lambda^{k}(M)\to\Lambda^{k+1}(M)\) such that, in terms of standard coordinates on \(\mathbb{R}^{n}\),
\[\mathrm{d}\omega=\sum_{1\leq\sigma_{1}<\cdots<\sigma_{k}\leq n}\sum_{i=1}^{n}\frac{\partial a_{\sigma}}{\partial x_{i}}\;\mathrm{d}x^{i}\wedge\mathrm{d}x^{\sigma_{1}}\wedge\cdots\wedge\mathrm{d}x^{\sigma_{k}}.\]
The interpretation of the exterior derivative in terms of vector calculus operators, through vector proxies of alternating forms and when \(M\) is a domain \(\Omega\) of \(\mathbb{R}^{3}\), is given in (A.8). We have used in this diagram the spaces defined in the introduction of the paper. (A.8) In the case \(n=2\), we have two possible vector proxies for \(\operatorname{Alt}^{1}(\mathbb{R}^{2})\), leading to two possible interpretations of the exterior derivative.
These interpretations are illustrated in (A.9) when \(\omega=a\,\operatorname{dx}^{1}+b\,\operatorname{dx}^{2}\in\operatorname{ Alt}^{1}(\mathbb{R}^{2})\) is identified with \(\boldsymbol{v}=(a,b)\), and in (A.10) when \(\omega\in\operatorname{Alt}^{2-1}(\mathbb{R}^{2})\) is identified with \(\boldsymbol{\varrho}_{-\pi/2}\boldsymbol{v}\) (with \(\operatorname{rot}=\operatorname{div}\,\varrho_{-\pi/2}\) and \(\operatorname{rot}=\varrho_{-\pi/2}\operatorname{\mathbf{grad}}\), respectively, denoting the scalar and vector curls, and \(\boldsymbol{H}(\operatorname{rot};\Omega)\) the space of square-integrable vector-valued functions whose rot is also square-integrable). (A.9) Notice, finally, that the exterior derivative satisfies the _complex property_\(\operatorname{d}\circ\operatorname{d}=0\). This property translates, through vector proxies, into the well-known identities \(\operatorname{\mathbf{curl}}\operatorname{\mathbf{grad}}=\boldsymbol{0}\) and \(\operatorname{div}\operatorname{\mathbf{curl}}=0\) for \(n=3\), and \(\operatorname{rot}\operatorname{\mathbf{grad}}=0\), \(\operatorname{div}\operatorname{\mathbf{rot}}=0\) when \(n=2\). #### a.2.3 Koszul differential Given \(\boldsymbol{x}_{M}\in\mathbb{R}^{n}\), the _Koszul differential_\(\kappa_{M}:\Lambda^{k}(M)\to\Lambda^{k-1}(M)\) is defined pointwise over \(M\) as follows: For all \(\boldsymbol{x}\in M\), recalling the definition (A.6) of the contraction \(\lrcorner\), \[(\kappa_{M}\omega)_{\boldsymbol{x}}\coloneqq\omega_{\boldsymbol{x}}\lrcorner( \boldsymbol{x}-\boldsymbol{x}_{M}).\] Its interpretation in terms of vector fields proxy is then analogous to that of a contraction of an alternating multilinear form with a vector, except that the contraction is made pointwise with the vector field \(\mathbb{R}^{n}\ni\boldsymbol{x}\mapsto\boldsymbol{x}-\boldsymbol{x}_{M}\in \mathbb{R}^{n}\). The terminology "differential" is legitimate, as \(\kappa_{M}\) satisfies the complex property \(\kappa_{M}\circ\kappa_{M}=0\) (since any alternating form applied to the same vector twice vanishes). #### a.2.4 Trace If \(P\subset Q\) are (relatively) open sets in affine subspaces \(V\subset W\) of \(\mathbb{R}^{n}\), the trace operator \(\operatorname{tr}_{P}:C^{0}\Lambda^{k}(\overline{Q})\to C^{0}\Lambda^{k}( \overline{P})\) on differential forms is defined pointwise, using the trace operator (A.7) on alternating forms: For all \(\omega\in C^{0}\Lambda^{k}(\overline{Q})\), \[(\operatorname{tr}_{P}\omega)_{\boldsymbol{x}}\coloneqq\operatorname{tr}_{V} \omega_{\boldsymbol{x}}\qquad\forall\boldsymbol{x}\in P.\] Note that, in the case \(P=Q\), the trace is simply the identity operator (and can be defined without any continuity assumption): \(\operatorname{tr}_{P}\omega=\omega\) for all \(\omega\in\Lambda^{k}(P)\). Applying the same arguments as in Section A.1 pointwise over \(P\), the trace operator in terms of vector fields proxy gives * the restriction of functions, when \(k=0\); * the orthogonal projection onto \(V\) (that is, \(\operatorname{tr}_{P}\omega\leftrightarrow\pi_{V}\mathbf{w}\) if \(\omega\leftrightarrow\mathbf{w}\)), when \(k=1\); * the normal component on \(P\) along the direction \(\mathbf{n}\) (that is, \(\operatorname{tr}_{P}\omega\leftrightarrow\mathbf{w}\cdot\mathbf{n}\) if \(\omega\leftrightarrow\mathbf{w}\)), with \(\mathbf{n}\) unit normal vector field preserving the orientations of \(V\) and \(W\), when \(k=\dim(P)=\dim(Q)-1\). 
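The complex property \(\mathrm{d}\circ\mathrm{d}=0\) and its vector-proxy translations recalled above can also be checked symbolically for generic smooth fields. The short script below is an illustration only; it verifies \(\operatorname{\mathbf{curl}}\operatorname{\mathbf{grad}}=\boldsymbol{0}\) and \(\operatorname{div}\operatorname{\mathbf{curl}}=0\) in \(\mathbb{R}^{3}\), as well as \(\operatorname{rot}\operatorname{\mathbf{grad}}=0\) in \(\mathbb{R}^{2}\).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)
v = [sp.Function(name)(x, y, z) for name in ('v1', 'v2', 'v3')]

grad = lambda s: [sp.diff(s, c) for c in (x, y, z)]
def curl(w):
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]
div = lambda w: sum(sp.diff(w[i], c) for i, c in enumerate((x, y, z)))

# curl grad = 0 and div curl = 0: the proxy form of d o d = 0 in R^3.
assert all(sp.simplify(c) == 0 for c in curl(grad(f)))
assert sp.simplify(div(curl(v))) == 0

# rot grad = 0: the proxy form of d o d = 0 in R^2 (rot = scalar curl).
g = sp.Function('g')(x, y)
rot2 = lambda w: sp.diff(w[1], x) - sp.diff(w[0], y)
assert sp.simplify(rot2([sp.diff(g, x), sp.diff(g, y)])) == 0
```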
**Example 29** (Interpretation of the Stokes formula for \(\ell=1\) and \(n=3\)).: _We rewrite here, for the reader's convenience, the integration by parts formula (2.1) for \(\ell=1\) and \(n=3\):_ \[\int_{M}\mathrm{d}\omega\wedge\mu=\int_{M}\omega\wedge\mathrm{d}\mu+\int_{ \partial M}\operatorname{tr}_{\partial M}\omega\wedge\operatorname{tr}_{ \partial M}\mu\qquad\forall(\omega,\mu)\in\Lambda^{1}(M)\times\Lambda^{1}(M).\] (A.11) _Given the previous interpretations of the exterior derivative and product in terms of vector proxies, if \(\omega\leftrightarrow\mathbf{w}\) and \(\mu\leftrightarrow\mathbf{v}\), then \(\mathrm{d}\omega\wedge\mu\leftrightarrow\operatorname{\mathbf{curl}}\mathbf{w} \cdot\mathbf{v}\) and \(\omega\wedge\mathrm{d}\mu\leftrightarrow\mathbf{w}\cdot\operatorname{\mathbf{curl}} \mathbf{v}\). This leads to considering the integration by parts formula for the curl:_ \[\int_{M}\operatorname{\mathbf{curl}}\mathbf{w}\cdot\mathbf{v}=\int_{M}\mathbf{w}\cdot \operatorname{\mathbf{curl}}\mathbf{v}+\int_{\partial M}\left(\mathbf{n}\times\left( \mathbf{w}\times\mathbf{n}\right)\right)\cdot\left(\mathbf{v}\times\mathbf{n}\right),\] (A.12) _where \(\mathbf{n}\) is the outer unit normal vector field over \(\partial M\). For any fixed \(\mathbf{x}\in\partial M\), we have \(\mathbf{n}(\mathbf{x})\times\left(\mathbf{w}(\mathbf{x})\times\mathbf{n}(\mathbf{x})\right)=\pi_{\mathbf{ T}_{\mathbf{x}}\partial M}\mathbf{w}(\mathbf{x})\) (here, \(T_{\mathbf{x}}\partial M\) is the tangent space of \(\partial M\) at \(\mathbf{x}\)), whereas \(\mathbf{v}(\mathbf{x})\times\mathbf{n}(\mathbf{x})=\varrho_{-\pi/2}(\pi_{\mathbf{T}_{\mathbf{x}} \partial M}\mathbf{v}(\mathbf{x}))\), where the rotation is considered with respect to the orientation of the tangent plane given by \(\mathbf{n}(\mathbf{x})\). The boundary terms of (A.11) and (A.12) therefore coincide, through the vector proxy for the exterior product of 1-forms in dimension 2 (see (A.5))._ ## Acknowledgements Daniele Di Pietro and Jerome Droniou acknowledge the partial support of _Agence Nationale de la Recherche_ through the grant ANR-20-MRS2-0004 "NEMESIS". Francesco Bonaldi and Daniele Di Pietro acknowledge the partial support of _Agence Nationale de la Recherche_ through the grant ANR-16-IDEX-0006 "RHAMNUS". Kaibo Hu acknowledges the partial support of a _Royal Society University Research Fellowship_ through the grant URF\(\backslash\)R1\(\backslash\)221398.
2305.12614
Enabling Team of Teams: A Trust Inference and Propagation (TIP) Model in Multi-Human Multi-Robot Teams
Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if not no, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. In a multi-human multi-robot team, we postulate that there exist two types of experiences that a human agent has with a robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (${N=30}$). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
Yaohui Guo, X. Jessie Yang, Cong Shi
2023-05-22T00:43:31Z
http://arxiv.org/abs/2305.12614v3
# Enabling Team of Teams: ###### Abstract Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if not no, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. In a multi-human multi-robot team, we postulate that there exist two types of experiences that a human agent has with a robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (\(N=30\)). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams. ## I Introduction A human agent's trust in an autonomous/robotic agent, defined as "the attitude that an agent will help achieve an individual's goals in situations characterized by uncertainty and vulnerability [1]", is a central factor for effective human-robot interaction (HRI) [2, 3, 4]. Optimal interaction can be achieved only when an appropriate level of trust is established between the human and the robotic agents [5, 6]. Despite the extensive research efforts over the past thirty years, existing literature is predominantly focused on trust modeling in dyadic human-robot teams where one human agent interacts with one robot [7]. There is little, if not no, research on trust modeling in teams consisting of multiple human agents and multiple robots. Consider a scenario where two human agents (figure 1), \(x\) and \(y\), and two robots, \(A\) and \(B\), are to perform a task. The four agents are allowed to form sub-teams to enhance task performance (e.g., maximizing throughput and minimizing task completion time). For instance, they could initially form two dyadic human-robot teams to complete the first part of the task, merge to complete the second part and split again with a different configuration to complete the third part of the task, and so on. This scenario illustrates a new organizational model known as "team of teams [8, 9]" in which the team composition is fluid and team members come and go as the nature of the problem changes. In this scenario, we postulate that there exist two types of experiences that a human agent has with a robot: _direct and indirect experiences_. Direct experience, by its name, means that a human agent's interaction with a robot is by him-/her-self; indirect experience means that a human agent's interaction with a robot is mediated by another party. Considering the third part of the task (see figures 1 and 2), human \(x\) works directly with robot \(B\) (i.e., direct experience). 
Even though there is no direct interaction between \(x\) and \(A\) in part 3, we postulate that \(x\) could still update his or her trust in \(A\) by learning about her human teammate \(y\)'s experience with \(A\), i.e., \(y\)'s direct experience with \(A\) becomes \(x\)'s _in_direct experience with \(A\), based on which \(x\) can update her trust in \(A\), \(t^{x,A}\). Essentially, \(y\)'s trust in \(A\) _propagates_ to \(x\). Fig. 1: Four agents can form sub-teams. In part 1, human \(x\) and robot \(A\) form a dyad, and human \(y\) and robot \(B\) form a dyad. In part 2, two dyads merge. In part 3, human \(x\) and robot \(B\) form a dyad, and human \(y\) and robot \(A\) form a dyad. Under the direct and indirect experience framework, prior work on trust modeling in dyadic human-robot teams can be regarded as examining how _direct_ experience influences a person's trust in a robot. In multi-human multi-robot teams, we postulate that _both direct and indirect experiences drive a human agent's trust in a robot_. In this study, we develop the **T**rust **I**nference and **P**ropagation (TIP) model for multi-human multi-robot teams. The proposed model explicitly accounts for the direct and indirect experiences a human agent may have with a robot. We examine trust dynamics under the TIP framework and prove theoretically that trust converges after repeated (direct and indirect) interactions. To evaluate the proposed TIP model, we conducted a human-subject experiment with 15 pairs of participants (\(N=30\)). Each pair worked with two drones to perform a threat detection task for 15 sessions. We compared the TIP model (which accounts for both the direct and indirect experiences) and a direct-experience-only model (which accounts only for the direct experience a human agent has with a robot). Results show that the TIP model successfully captures people's trust dynamics with a significantly smaller root-mean-square error (RMSE) compared to the direct-experience-only model. The key contribution of this work is three-fold: * To the best of our knowledge, the proposed TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams. The TIP model accounts for both the direct and indirect experiences (through trust propagation) a human agent has with a robot in multi-human multi-robot teams. As a result, the TIP model is well-suited for trust estimation in networks involving multiple humans and robots. * We prove theoretically that trust converges to the unique equilibrium in probability after repeated direct and indirect interactions under our TIP framework. Such an equilibrium can also be efficiently computed. * We conduct a human-subject experiment to assess the TIP model. Results reveal the superior performance of the TIP model in capturing trust dynamics in a multi-human multi-robot team. This paper is organized as follows. Section II presents related work, including trust modeling in dyadic human-robot teams and reputation/trust management in e-commerce. In Section III, we describe the mathematical formulation of the TIP model and examine its behavior under different types of interactions. Section IV presents the human-subject study. In Section V, we present and discuss the results. Section VI concludes the paper. ## II Related Work In this section, we review two bodies of research motivating the present study: the extensive literature on trust in dyadic human-robot teams and the literature on reputation/trust management.
The latter is a research topic in computer science that shares commonalities with the underlying research question of trust modeling in multi-human multi-robot teams. ### _Trust Modeling in Dyadic Human-robot Interaction_ Trust in autonomous/robotic agents attracts research attention from multiple disciplines. One line of research is to identify factors influencing a human's trust in autonomy/robots and quantify their effects. These factors can be categorized into human-related factors such as personality [10], robot-related factors such as reliability [11, 12] and transparency [13, 14], and task-related factors such as task emergency [15]. For a review of the factors, see [6]. More recently, another line of research has emerged that focuses on understanding the dynamics of trust formation and evolution when a person interacts with autonomy repeatedly [3, 16, 17]. Empirical studies have investigated how trust strengthens or decays due to moment-to-moment interactions with autonomy [3, 18, 19, 20]. Based on the empirical research, three major properties of trust dynamics have been identified and summarized, namely _continuity_, _negativity bias_, and _stabilization_ [17, 21]. Acknowledging that trust is a dynamic variable, several computational trust models in dyadic human-robot teams have been developed [17, 22, 23, 24]. Notably, Xu and Dudek [23] proposed the online probabilistic trust inference model (OPTIMo) utilizing Bayesian networks to estimate human trust based on the autonomous agent's performance and human behavioral signals. Guo and Yang [17] modeled trust as a Beta random variable parameterized by the positive and negative interaction experience a human agent has with a robotic agent. Soh et al. [25] proposed a Bayesian model which combines Gaussian processes and recurrent neural networks to predict trust over different tasks. For a detailed review, refer to [26]. ### _Reputation/Credential Management_ Despite limited research on trust modeling in multi-human multi-robot teams, insights can be drawn from studies on reputation management. In consumer-to-consumer electronic marketplaces like eBay, reputation systems play a crucial role in generating trust among buyers to facilitate transactions with unknown sellers [27]. These systems can be categorized as centralized, where reputation values are stored centrally, representing the overall trustworthiness of sellers, or decentralized, where buyers maintain their evaluation scores privately [28]. In decentralized systems, a propagation mechanism allows buyers to obtain reputation values, even in the absence of prior transactions. Various propagation mechanisms have been developed, such as subjective logic integrated into the Beta reputation management system [29, 30] or the concept of "witness reputation" in the FIRE reputation management model [31], facilitating the transfer of reputation scores among agents in a network. These propagation mechanisms provide valuable insights into modeling trust update through indirect experience in HRI. Yet, their direct application to HRI settings is impeded due to the distinct characteristics of human trust in robots, as reviewed in Section II-A. Fig. 2: An arrow points from a trustor to a trustee, representing the trust \(t^{\text{trustor, trustee}}\). Human \(x\) updates her trust in robot \(B\) based on direct experience. Even though \(x\) does not have direct interaction with \(A\), \(x\) could still update her trust toward \(A\) through a third party, \(y\).
## III Mathematical Model We present the TIP model in this section. Our key motivation is to develop a fully computational trust inference and propagation model that works in general multi-human multi-robot settings. First, we discuss the assumptions and introduce the mathematical formulation. Second, we examine the behavior of the model under repeated human-robot interactions. Finally, we present the parameter inference method and trust estimation using the TIP model. ### _Assumptions_ We make three major assumptions in the context of HRI. First, we assume that _each human agent communicates trust as a single-dimensional value_. In some prior work, trust is represented as a tuple. For example, in [30], trust is represented as a triplet, i.e., belief, disbelief, and uncertainty. Although a multi-dimensional representation conveys more information, our study as well as some prior studies show that a one-dimensional representation of trust suffices in capturing trust evolution [17, 22, 23, 24, 22]. Moreover, querying a single-dimension trust value increases operational feasibility because keeping track of multiple numbers adds unnecessary cognitive load and may not be pragmatic for non-experts. Therefore, we assume a simple one-dimension form of trust in this study. Second, we assume that _the human agents are cooperative_, i.e., they are honest and willing to share their trust in a robot truthfully with their human teammates. Third, we take an ability/performance-centric view of trust and assume that a human agent's trust in a robot is primarily driven by the ability or performance of the robot. This ability/performance-centric view has been widely used in prior research for modeling trust in task-oriented HRI contexts (i.e., a robot is to perform a specific task) [22, 17, 23]. We discuss the limitations of the assumptions in Section VI. ### _Proposed Model_ **Trust as a Beta random variable.** We take a probabilistic view to model trust as in [17]. At time \(k\), the trust \(t_{k}^{a,b}\) that a human agent \(a\) feels toward another agent \(b\) follows a Beta distribution, i.e., \[t_{k}^{a,b}\sim\mathrm{Beta}\left(\alpha_{k}^{a,b},\beta_{k}^{a,b}\right), \tag{1}\] where \(\alpha_{k}^{a,b}\) and \(\beta_{k}^{a,b}\) are the positive and negative experiences \(a\) had about \(b\) up to time \(k\), respectively, \(k=0,1,2,\ldots\). When \(k=0\), \(\alpha_{0}^{a,b}\) and \(\beta_{0}^{a,b}\) represent the prior experiences that \(a\) has before any interaction with \(b\). The expected trust is given by \[\mu_{k}^{a,b}=\alpha_{k}^{a,b}/\left(\alpha_{k}^{a,b}+\beta_{k}^{a,b}\right). \tag{2}\] Here we note that \(t_{k}^{a,b}\) is the queried trust given by the agent \(a\), which has some randomness due to subjectivity, while \(\mu_{k}^{a,b}\) is the expected trust determined by the experiences. **Trust update through direct experience.** Similar to [17], we update the experiences through direct interaction at time \(k\) by setting \[\alpha_{k}^{a,b} =\alpha_{k-1}^{a,b}+s^{a,b}\cdot p_{k}^{b} \tag{3}\] \[\beta_{k}^{a,b} =\beta_{k-1}^{a,b}+f^{a,b}\cdot\bar{p}_{k}^{b}.\] Here \(p_{k}^{b}\) and \(\bar{p}_{k}^{b}\) are the measurements of \(b\)'s success and failure during time \(k\), respectively; \(s^{a,b}\) and \(f^{a,b}\) are \(a\)'s unit experience gains with respect to success or failure of \(b\). We require \(s^{a,b}\) and \(f^{a,b}\) to be positive to ensure that cumulative experiences are non-decreasing. 
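As a concrete illustration of the direct-experience update in Eqs. (1)-(3), the short Python sketch below simulates a few sessions; the prior experiences, unit gains, and robot reliability are illustrative placeholders, not values estimated in this paper.

```python
import numpy as np

def direct_update(alpha, beta, p, p_bar, s, f):
    """One direct-experience update per Eq. (3); experiences can only grow."""
    return alpha + s * p, beta + f * p_bar

alpha, beta = 1.0, 1.0     # prior experiences alpha_0, beta_0 (assumed)
s, f = 1.0, 2.0            # unit gains for success / failure (assumed)
rng = np.random.default_rng(0)

for k in range(1, 16):
    p = rng.binomial(10, 0.9) / 10.0        # robot's performance measure p_k
    alpha, beta = direct_update(alpha, beta, p, 1.0 - p, s, f)
    expected = alpha / (alpha + beta)        # expected trust, Eq. (2)
    sampled = rng.beta(alpha, beta)          # queried trust, Eq. (1)
    print(f"session {k:2d}: E[trust] = {expected:.3f}, sampled trust = {sampled:.3f}")
```

Because \(s^{a,b}\), \(f^{a,b}\), and the performance measures are non-negative, the accumulated experiences in the sketch never decrease, matching the requirement stated above.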
The updated trust \(t_{k}^{a,b}\) follows the distribution \(\mathrm{Beta}(\alpha_{k}^{a,b},\beta_{k}^{a,b})\). **Trust update through indirect experience propagation.** Let \(x\) and \(y\) denote two human agents and let \(A\) denote a robot agent, as illustrated in figure 2. At time \(k\), \(y\) communicates his or her trust \(t_{k}^{y,A}\) in \(A\) with \(x\), and then \(x\) updates his or her experiences through indirect interaction by \[\alpha_{k}^{x,A} =\alpha_{k-1}^{x,A}+\hat{s}^{x,A}\cdot t_{k}^{x,y}\cdot\left[t_{ k}^{y,A}-t_{k-1}^{x,A}\right]^{+}, \tag{4}\] \[\beta_{k}^{x,A} =\beta_{k-1}^{x,A}+\hat{f}^{x,A}\cdot t_{k}^{x,y}\cdot\left[t_{ k-1}^{x,A}-t_{k}^{y,A}\right]^{+},\] where the superscript '\(+\)' means taking the positive part of the corresponding number, i.e., \(t^{+}=\max\{0,t\}\) for a real number \(t\), and \(t_{k}^{x,A}\sim\mathrm{Beta}(\alpha_{k}^{x,A},\beta_{k}^{x,A})\). The intuition behind this model is that \(x\) needs to reason upon \(t_{k}^{y,A}\), i.e., \(y\)'s trust toward \(A\). First, \(x\) compares \(y\)'s trust \(t_{k}^{y,A}\) with his or her previous trust \(t_{k-1}^{x,A}\). Let \(\Delta t:=t_{k}^{y,A}-t_{k-1}^{x,A}\) be the difference. If \(\Delta t\geq 0\), \(x\) gains positive indirect experience about \(A\), which amounts to the product of the trust difference \(\Delta t\), a coefficient \(\hat{s}^{x,A}\), and a discounting factor \(t_{k}^{x,y}\), i.e., \(x\)'s trust in \(y\); if \(\Delta t<0\), then \(x\) gains negative indirect experience about \(A\), which is defined similarly. ### _Asymptotic Behavior under Repeated Interactions_ We examine the behavior of the proposed model under both direct and indirect trust updates. Consider a scenario where human agents \(x\) and \(y\) take turns working with robot \(A\) repetitively. Suppose each \(x\)'s turn contains \(m\) interactions while each \(y\)'s turn contains \(n\) interactions; and, after each interaction, the agent who works directly with \(A\) informs the other agent of his or her trust in \(A\). Figure 3 illustrates the interaction process. In addition, we assume that robot \(A\) has constant reliability \(r\), i.e., \(A\)'s performance measure are \(p_{k}^{A}=r\) and \(\bar{p}_{k}^{A}=\bar{r}\), for \(k=1,2,\ldots,K\), where \(\bar{r}:=1-r\), and \(x\) has constant trust \(t^{x,y}\) in \(y\). To avoid triviality, we exclude the case when \(m=n=0\) (where no interactions occur). Without loss of generality, we assume \(m>0\) and \(n\geqslant 0\). (The case \(m\geqslant 0\) and \(n>0\) is symmetric.) We have the following main result on the asymptotic behavior of \(t_{k}^{x,A}\) and \(t_{k}^{y,A}\). **Theorem 1**.: _When \(m>0\) and \(n\geqslant 0\), \(t_{k}^{x,A}\) and \(t_{k}^{y,A}\) converge in probability (i.p.) respectively, i.e., there exists \(t^{x}\) and \(t^{y}\) such that, for any \(\epsilon>0\),_ \[\lim_{k\to\infty}\Pr\left(\left|t_{k}^{x,A}-t^{x}\right|>\epsilon \right) =0\] \[\text{and}\quad\lim_{k\to\infty}\Pr\left(\left|t_{k}^{y,A}-t^{y} \right|>\epsilon\right) =0.\] Theorem 1 exhibits that, under alternating interactions with the robot, both agents' trust will stabilize and converge after sufficiently many interactions. The next result gives an exact method to compute the limiting equilibrium. 
**Theorem 2**.: _The equilibrium \(t^{x}\) and \(t^{y}\) in Theorem 1 satisfy_ \[S^{x}\frac{1-t^{x}}{t^{x}} =\hat{F}^{x}\left(t^{x}-t^{y}\right)+F^{x}\quad\text{and} \tag{5}\] \[F^{y}\frac{t^{y}}{1-t^{y}} =\hat{S}^{y}\left(t^{x}-t^{y}\right)+S^{y},\] _if \(S^{x}F^{y}\geqslant F^{x}S^{y}\); otherwise, they satisfy_ \[F^{x}\frac{t^{x}}{1-t^{x}} =\hat{S}^{x}\left(t^{y}-t^{x}\right)+S^{x}\quad\text{and} \tag{6}\] \[S^{y}\frac{1-t^{y}}{t^{y}} =\hat{F}^{y}\left(t^{y}-t^{x}\right)+F^{y},\] _where \(\hat{S}^{x}=nt^{x,y}\hat{s}^{x,A}\), \(\hat{F}^{x}=nt^{x,y}\hat{f}^{x,A}\), \(S^{x}=ms^{x,A}r\), \(F^{x}=mf^{x,A}\overline{r}\), \(\hat{S}^{y}=mt^{y,x}\hat{s}^{y,A}\), \(\hat{F}^{y}=mt^{y,x}\hat{f}^{y,A}\), \(S^{y}=ns^{y,A}r\), and \(F^{y}=nf^{y,A}\overline{r}\)._ The capitalized variables in Theorem 2 are related to the average experience gains in the long run, e.g., \(S^{x}\) is \(x\)'s direct positive experience gain after each turn of \(m\) direct updates. The condition \(S^{x}F^{y}\geqslant F^{x}S^{y}\) can be interpreted as follows: compared with \(y\), \(x\) tends to have a higher trust gain in \(A\) after each turn via direct experience. Note that \(t^{x}\) and \(t^{y}\) can be computed exactly by solving a cubic equation or readily approximated by Newton's method. Details are given in the appendix. A special case is when \(n=0\), i.e., agent \(x\) only updates trust in \(A\) via direct experience, and agent \(y\) only updates trust via indirect experience. Theorem 2 leads to the following corollary with a closed-form equilibrium: **Corollary 1**.: _When \(m>0\) and \(n=0\), \(x\)'s trust \(t_{k}^{x,A}\) in \(A\) converges to \(t^{x}=\frac{s^{x,A}r}{f^{x,A}\overline{r}+s^{x,A}r}\) in probability, i.e., for any \(\epsilon>0\),_ \[\lim_{k\to\infty}\Pr\left(\left|t_{k}^{x,A}-\frac{s^{x,A}r}{f^{x,A}\overline{r}+s^{x,A}r}\right|>\epsilon\right)=0.\] _The difference between \(t_{k}^{x,A}\) and \(t_{k}^{y,A}\) converges to 0 in probability, i.e., for any \(\epsilon>0\),_ \[\lim_{k\to\infty}\Pr\left(\left|t_{k}^{x,A}-t_{k}^{y,A}\right|>\epsilon\right)=0.\] _Equivalently, we have \(t^{x}=t^{y}\) in Theorem 2._ Corollary 1 implies that under direct-only trust updates, \(x\)'s trust will stabilize around the closed-form value \[\frac{s^{x,A}r}{f^{x,A}\overline{r}+s^{x,A}r},\] which is determined by \(x\)'s unit experience gains \(s^{x,A}\) and \(f^{x,A}\) via direct trust and the robot's reliability \(r\); moreover, under indirect-only updates, \(y\)'s trust will converge to \(x\)'s trust. ### _Parameter Inference_ The proposed model characterizes a human agent's trust in a robot by six parameters. For instance, the parameter of \(x\) on robot \(A\), defined as \[\mathfrak{g}^{x,A}=\left(\alpha_{0}^{x,A},\beta_{0}^{x,A},s^{x,A},f^{x,A},\hat{s}^{x,A},\hat{f}^{x,A}\right), \tag{7}\] includes \(x\)'s prior experiences \(\alpha_{0}^{x,A}\) and \(\beta_{0}^{x,A}\), the unit direct experience gains \(s^{x,A}\) and \(f^{x,A}\), and the unit indirect experience gains \(\hat{s}^{x,A}\) and \(\hat{f}^{x,A}\). Denote the indices of \(x\)'s direct and indirect interactions with \(A\) up to time \(k\) as \(D_{k}\) and \(\overline{D}_{k}\). We can compute \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\), according to Eqs.
(3) and (4), as \[\alpha_{k}^{x,A}= \alpha_{0}^{x,A}+s^{x,A}\sum_{j\in D_{k}}p_{j}^{A}+\hat{s}^{x,A}\sum_{j\in\overline{D}_{k}}t_{j}^{x,y}\left[t_{j}^{y,A}-t_{j-1}^{x,A}\right]^{+} \tag{8}\] \[\beta_{k}^{x,A}= \beta_{0}^{x,A}+f^{x,A}\sum_{j\in D_{k}}\overline{p}_{j}^{A}+\hat{f}^{x,A}\sum_{j\in\overline{D}_{k}}t_{j}^{x,y}\left[t_{j-1}^{x,A}-t_{j}^{y,A}\right]^{+}.\] We compute the optimal parameter \(\mathfrak{g}_{*}^{x,A}\) by maximum likelihood estimation (MLE), i.e., \[\mathfrak{g}_{*}^{x,A}= \arg\max\log\Pr\left(\text{data}\middle|\mathfrak{g}^{x,A}\right)\] \[= \arg\max\sum_{k=0}^{K}\log\text{Beta}\left(t_{k}^{x,A}\middle|\alpha_{k}^{x,A},\beta_{k}^{x,A}\right).\] Specifically, the problem of estimating \(x\)'s parameter \(\mathfrak{g}_{*}^{x,A}\) on robot \(A\) is formulated as follows: given \(x\)'s full trust history in \(A\), \(\{t_{k}^{x,A}\}_{k=0,1,\ldots,K}\), \(A\)'s performance history during \(x\)'s direct trust updates in \(A\), \(\{(p_{k}^{A},\overline{p}_{k}^{A})\}_{k\in D_{K}}\), \(x\)'s trust in \(y\) during \(x\)'s indirect trust updates in \(A\), \(\{t_{k}^{x,y}\}_{k\in\overline{D}_{K}}\), and \(y\)'s trust in \(A\) during \(x\)'s indirect trust updates in \(A\), \(\{t_{k}^{y,A}\}_{k\in\overline{D}_{K}}\), we compute the parameter \(\mathfrak{g}_{*}^{x,A}\) that maximizes the log-likelihood function \[H(\mathfrak{g}^{x,A}):=\sum_{k=0}^{K}\log\text{Beta}\left(t_{k}^{x,A}\middle|\alpha_{k}^{x,A},\beta_{k}^{x,A}\right), \tag{9}\] where \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\) are defined in Eq. (8). We note that \(\log\text{Beta}(t_{k}^{x,A}|\alpha_{k}^{x,A},\beta_{k}^{x,A})\) is concave in \(\mathfrak{g}^{x,A}\) because it is concave in \((\alpha_{k}^{x,A},\beta_{k}^{x,A})\) and \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\) are non-decreasing linear functions of \(\mathfrak{g}^{x,A}\). Consequently, \(H(\mathfrak{g}^{x,A})\) is concave in \(\mathfrak{g}^{x,A}\) since it is a summation of several concave functions. Therefore, we can run the gradient descent method to compute the optimal parameters. Fig. 3: \(x\) and \(y\) take turns to interact with \(A\). Now we explicitly give the formulas for the gradient descent method. By expressing the probability density function of Beta random variables in terms of Gamma functions, we can rewrite Eq. (9) as \[H(\mathbf{\uptheta}^{x,A})\] \[= \sum_{k=0}^{K}\left[\log\Gamma(\alpha_{k}^{x,A}+\beta_{k}^{x,A})-\log\Gamma(\alpha_{k}^{x,A})-\log\Gamma(\beta_{k}^{x,A})\right.\] \[\left.+(\alpha_{k}^{x,A}-1)\log t_{k}^{x,A}+(\beta_{k}^{x,A}-1)\log(1-t_{k}^{x,A})\right],\] where \(\Gamma(\cdot)\) stands for the Gamma function.
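Before deriving the explicit gradient, the following sketch shows how the log-likelihood in Eq. (9) could be maximized numerically with a generic bounded optimizer (SciPy's default L-BFGS-B when bounds are given) rather than the hand-derived gradient presented next; the interaction history below is synthetic and only indicates the shape of the inputs.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

K = 15
rng = np.random.default_rng(1)
direct = rng.integers(0, 2, size=K).astype(bool)               # True: direct session for x
perf = rng.binomial(10, 0.9, size=K) / 10.0                    # A's performance p_k
t_xA = np.clip(rng.normal(0.7, 0.05, size=K + 1), 0.01, 0.99)  # x's reported trust in A
t_xy = np.clip(rng.normal(0.8, 0.05, size=K), 0.01, 0.99)      # x's trust in y
t_yA = np.clip(rng.normal(0.7, 0.05, size=K), 0.01, 0.99)      # y's communicated trust in A

def experiences(theta):
    """Accumulate (alpha_k, beta_k) for k = 0..K following Eq. (8)."""
    a0, b0, s, f, s_hat, f_hat = theta
    alpha, beta = [a0], [b0]
    for k in range(1, K + 1):
        if direct[k - 1]:                       # direct update, Eq. (3)
            da = s * perf[k - 1]
            db = f * (1.0 - perf[k - 1])
        else:                                   # indirect update, Eq. (4)
            gap = t_yA[k - 1] - t_xA[k - 1]     # communicated trust minus x's previous trust
            da = s_hat * t_xy[k - 1] * max(gap, 0.0)
            db = f_hat * t_xy[k - 1] * max(-gap, 0.0)
        alpha.append(alpha[-1] + da)
        beta.append(beta[-1] + db)
    return np.array(alpha), np.array(beta)

def neg_log_lik(theta):
    alpha, beta = experiences(theta)
    return -np.sum(beta_dist.logpdf(t_xA, alpha, beta))        # negative of Eq. (9)

theta0 = np.ones(6)
res = minimize(neg_log_lik, theta0, bounds=[(1e-3, None)] * 6)
print("estimated parameters:", np.round(res.x, 3))
```

Since \(H\) is concave in the parameters, as argued above, such a generic solver and the gradient scheme below should reach the same optimum.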
Define the following variables: \[P_{k}:=\sum_{j\in D_{k}}p_{j}^{A},\quad Q_{k}:=\sum_{j\in\overline{D}_{k}}t_{j}^{x,y}\left[t_{j}^{y,A}-t_{j-1}^{x,A}\right]^{+},\] \[\overline{P}_{k}:=\sum_{j\in D_{k}}\overline{p}_{j}^{A},\quad\overline{Q}_{k}:=\sum_{j\in\overline{D}_{k}}t_{j}^{x,y}\left[t_{j-1}^{x,A}-t_{j}^{y,A}\right]^{+}.\] Then (8) becomes \[\alpha_{k}^{x,A}= \alpha_{0}^{x,A}+s^{x,A}P_{k}+\hat{s}^{x,A}Q_{k}\] \[\beta_{k}^{x,A}= \beta_{0}^{x,A}+f^{x,A}\overline{P}_{k}+\hat{f}^{x,A}\overline{Q}_{k}\,.\] Calculation shows the gradient can be written as \[\nabla H(\mathbf{\uptheta}^{x,A})=\sum_{k=0}^{K}\mathbf{C}_{k}\mathbf{v}_{k}, \tag{10}\] where \[\mathbf{C}_{k}=\begin{bmatrix}1&-1&0&1&0\\ 1&0&-1&0&1\\ P_{k}&-P_{k}&0&P_{k}&0\\ \overline{P}_{k}&0&-\overline{P}_{k}&0&\overline{P}_{k}\\ Q_{k}&-Q_{k}&0&Q_{k}&0\\ \overline{Q}_{k}&0&-\overline{Q}_{k}&0&\overline{Q}_{k}\end{bmatrix} \tag{11}\] and \[\mathbf{v}_{k}=\begin{bmatrix}\psi\left(\alpha_{k}^{x,A}+\beta_{k}^{x,A}\right)\\ \psi\left(\alpha_{k}^{x,A}\right)\\ \psi\left(\beta_{k}^{x,A}\right)\\ \log t_{k}^{x,A}\\ \log\left(1-t_{k}^{x,A}\right)\end{bmatrix}. \tag{12}\] Here \(\psi\) is the digamma function. Note that \(\mathbf{C}_{k}\) is constant throughout the gradient descent while \(\mathbf{v}_{k}\) needs to be computed in every iteration. ### _Trust Estimation_ In real HRI scenarios, querying human trust after every interaction is impractical as it introduces extra workload and reduces collaboration efficiency. Instead, we consider the case when human trust is only queried after some, but not all, of the interactions. In particular, we are interested in inferring the model parameter \(\mathbf{\uptheta}^{x,A}\) defined in Eq. (7) with missing trust values and estimating these missing values with the TIP model. Specifically, the input of the trust estimation problem is the same as the parameter inference problem in Section III-D, except that \(t_{u}^{x,A}\), \(t_{u}^{x,y}\), and \(t_{u}^{y,A}\) are missing for \(u\in U\), where \(U\) is the collection of interactions without trust ratings. We assume \(0\notin U\), that is, the initial trust ratings, \(t_{0}^{x,y}\), \(t_{0}^{y,A}\), and \(t_{0}^{x,A}\), are known. The optimal parameter is defined as the maximizer of the log-likelihood given the available data: \[H_{U}(\mathbf{\uptheta}^{x,A}):=\sum_{k\in\{0,\ldots,K\}\backslash U}\log\mathrm{Beta}\left(t_{k}^{x,A}\middle|\alpha_{k}^{x,A},\beta_{k}^{x,A}\right).\] Eq. (8) implies that computing the experiences \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\) relies on the trust ratings \(t_{j}^{x,y}\), \(t_{j}^{y,A}\), and \(t_{j}^{x,A}\). We approximate them by the following recursive relations: \[\widehat{t}_{j}^{x,y}=t_{j}^{x,y},\ \widehat{t}_{j}^{y,A}=t_{j}^{y,A},\ \text{and}\ \widehat{t}_{j}^{x,A}=t_{j}^{x,A},\] for \(j\notin U\); \[\widehat{t}_{j}^{x,y}=t_{j^{\prime}}^{x,y},\ \widehat{t}_{j}^{y,A}=t_{j^{\prime}}^{y,A},\ \text{and}\ \widehat{t}_{j}^{x,A}=t_{j^{\prime}}^{x,A}, \tag{13}\] for \(j\in U\), where \(j^{\prime}=\max\{0,1,\ldots,j-1\}\backslash U\). In other words, we use the trust rating from the most recent rated interaction to approximate the missing values. We note that the index \(j^{\prime}\) is well defined in Eq. (13) since we assume the initial trust ratings are known.
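A small sketch of the forward-fill rule in Eq. (13): a missing rating is replaced by the most recent available one. The session indices and values below are made up for illustration.

```python
def forward_fill(ratings, missing):
    """Approximate missing trust ratings per Eq. (13).

    ratings: dict mapping session index k to the reported trust (None if not queried)
    missing: set U of session indices without ratings (0 is assumed never missing)
    """
    filled = {}
    for k in sorted(ratings):
        if k in missing:
            j = max(j for j in filled if j not in missing)   # j' = latest rated session before k
            filled[k] = filled[j]
        else:
            filled[k] = ratings[k]
    return filled

# Example: ratings queried at sessions 0-3 and 6, missing at sessions 4 and 5.
ratings = {0: 0.50, 1: 0.55, 2: 0.60, 3: 0.70, 4: None, 5: None, 6: 0.80}
print(forward_fill(ratings, missing={4, 5}))   # sessions 4 and 5 reuse the session-3 rating
```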
Now, we can compute \(\alpha_{k}^{x,A}\) and \(\beta_{k}^{x,A}\) by the approximated trust values as follows \[\alpha_{k}^{x,A}= \alpha_{0}^{x,A}+s^{x,A}\sum_{j\in D_{k}}p_{j}^{A}+\hat{s}\sum_{j \in\overline{D}_{k}}\widehat{t}_{j}^{x,y}\left[\widehat{t}_{j}^{y,A}-\widehat {t}_{j-1}^{x,A}\right]^{+} \tag{14}\] \[\beta_{k}^{x,A}= \beta_{0}^{x,A}+f^{x,A}\sum_{j\in D_{k}}\overline{p}_{j}^{A}+ \hat{f}\sum_{j\in\overline{D}_{k}}\widehat{t}_{j}^{x,y}\left[\widehat{t}_{j-1}^ {x,A}-\widehat{t}_{j}^{y,A}\right]^{+}.\] Similar to maximizing \(H\), we can apply the gradient descent method to find the maximizer \(\mathbf{\uptheta}_{*}^{x,A}\) of \(H_{U}\). The gradient \(\nabla H_{U}\) can be computed in the same way as Eq. (10) except that the summation is over \(\{0,\ldots,K\}\backslash U\) instead of \(\{0,\ldots,K\}\), i.e., \[\nabla H_{U}(\mathbf{\uptheta}^{x,A})=\sum_{k\in\{0,\ldots,K\}\backslash U}\mathbf{ C}_{k}\mathbf{v}_{k},\] where \(\mathbf{C}_{k}\) and \(\mathbf{v}_{k}\) are defined in Eqs. (11) and (12) and computed with the estimated trust values. By substituting \(\mathbf{\uptheta}_{*}^{x,A}\) to Eq. (14), we can approximate the experiences and further estimate the missing trust rating \(t_{u}^{x,A}\) by the expectation \(\mu_{u}^{x,A}=\frac{\alpha_{u}^{x,A}}{\alpha_{u}^{x,A}+\beta_{u}^{x,A}}\) for \(u\in U\). ## IV Human Subject Study We conducted a human-subject experiment with 30 participants to evaluate the TIP model. The experiment, inspired by [20], simulated a search and detection task where two human agents work with two smart drones to search for threats at multiple sites. ### _Participants_ A total of \(N=30\) participants (average age = 25.3 years, SD = 4.3 years, 16 females, 14 males) with normal or corrected-to-normal vision formed 15 teams and participated in the experiment. Each participant received a base payment of $15 and a bonus of up to $10 depending on their team performance. ### _Experimental Task and Design_ In the experiment, a pair of participants performed a simulated threat detection task with two assistant drones for \(K=15\) sessions on two separate desktop computers. At each session, each participant was assigned one drone and worked on the detection tasks. After the session, they were asked to report their trust in each drone and their trust in their human teammate. For clarity, we named the two drones \(A\) and \(B\) and colored them in red and blue, respectively; and we denoted the participants as \(x\) and \(y\). A trust rating is denoted as \(t_{k}^{a,b}\), where the superscript \(a\in\{x,y\}\) stands for the trustor, the superscript \(b\in\{x,y,A,B\}\) stands for the trustee, and the subscript \(k\) is the session index. For example, \(t_{2}^{x,A}\) is person \(x\)'s trust in drone \(A\) after the 2nd session. The range of a trust rating is \([0,1]\), where 0 stands for "(do) not trust at all" and 1 stands for "trust completely". The flow of the experimental task is illustrated in figure 3(a). **Initial trust rating.** At the start, each participant gave their initial trust in the two drones based on their prior experience with automation/robots. Additionally, they gave their initial trust in each other. These trust ratings were indexed by 0, e.g., \(x\)'s initial trust rating in \(A\) was denoted as \(t_{0}^{x,A}\). **Robot assignment.** At each session, each participant was randomly assigned one drone as his or her assistant robot, as shown in figure 5. **Search and detection task.** Each session consisted of 10 locations to detect. 
As shown in figure 3(b), four views were present at each location. If a threat, which looked like a combatant, was in any of the views, the participant should click the 'Danger' button; otherwise, they should click the 'Clear' button. Meanwhile, his or her drone would assist by highlighting a view if the drone detected a threat there. In addition, a 3-second timer was set for each location. If a participant did not click either button before the timer counted down to zero, the testbed would move to the next location automatically. After all 10 locations, an end-of-session screen was shown, displaying how many correct choices the participant and the drone had made in the current session. A correct choice means correctly identifying a threat or declaring 'Clear' within 3 seconds. **Trust rating.** After each session, each participant reported three trust values. First, each participant updated his or her trust in the drone s/he just worked with, i.e., through direct experience, based on the drone's detection ability. Next, through a server (see figure 5), each participant communicated their trust in the drone s/he just worked with to their human teammate. After that, each participant updated his or her trust in the other player's drone (i.e., through indirect experience). Note that only trust ratings were communicated and drones' performances were not. Finally, each participant updated his or her trust in the human teammate based on the teammate's ability to rate trust in the drones accurately. Hence, after the \(k\)th session, there would be 6 additional self-reported trust values, \(t_{k}^{x,A}\), \(t_{k}^{x,B}\), \(t_{k}^{y,A}\), \(t_{k}^{y,B}\), \(t_{k}^{x,y}\), and \(t_{k}^{y,x}\). An illustration of the rating interface is shown in figure 3(c). After participants completed all 15 sessions, the experiment ended. ### _Experimental Procedure_ Prior to the experiment, participants were instructed not to engage in any interaction with each other. Initially, each participant signed a consent form and filled in a demographic survey. To familiarize participants with the setup, two practice sessions were provided, wherein a practice drone was used to assist the participants. The participants were informed that the practice drone differed from the two drones used in the real experiment. After the experiment started, the assignment of drones was randomized for each pair of participants. To motivate participants to provide accurate trust ratings, team performance instead of individual performance was used to determine the bonus, which was calculated as \(\max\{0,(\bar{a}-0.7)/0.3\}\), where \(\bar{a}\) was the average detection accuracy of the two participants over all the tasks. Specifically, the participants would receive a bonus if their average detection accuracy exceeded \(70\%\). Participants were explicitly informed that truthful and accurate communication of their trust values would assist the other participant in determining the appropriate level of trust in the drones, thereby increasing their detection accuracy and potential bonus. Fig. 4: Experimental process and task interface. ## V Results and Discussion ### _Analysis of Trust Convergence within Teams_ We first conduct two types of team-level analysis to demonstrate that leveraging both direct and indirect interaction with a robot leads to faster trust convergence at the team level. We then compare the within- vs.
between-team trust deviation and illustrate statistically the existence and benefits of leveraging both direct and indirect experience for trust updating. We denote the set of participants as \(P=\{x_{1},y_{1},\ldots,x_{15},y_{15}\}\), where \(x_{i}\) and \(y_{i}\) are the two members of the \(i\)th team. **Within-team trust average over time.** We calculate the within-team trust average for team \(i\) on drone \(R\) at session \(k\) as \[t_{k}^{i,R}:=\frac{1}{2}(t_{k}^{x_{i},R}+t_{k}^{y_{i},R}),\] where \(R\in\{A,B\}\) indicates the drone type. The within-team trust average represents a team of players' overall trust in a robot. Figure 6 shows how the within-team average trust changed as the number of interactions increased. The initial and final trust values in drone \(A\) (\(\frac{1}{15}\sum_{i=1}^{15}t_{0}^{i,A}\) and \(\frac{1}{15}\sum_{i=1}^{15}t_{15}^{i,A}\)) were \(0.57\pm 0.16\) (mean \(\pm\) SD) and \(0.83\pm 0.09\), respectively. The initial and final trust values in drone \(B\) (\(\frac{1}{15}\sum_{i=1}^{15}t_{0}^{i,B}\) and \(\frac{1}{15}\sum_{i=1}^{15}t_{15}^{i,B}\)) were \(0.61\pm 0.15\) and \(0.46\pm 0.19\), respectively. A two-way repeated measures analysis of variance (ANOVA) showed a significant main effect of drone type (drone \(A\) vs. \(B\), \(F(1,14)=58.81\), \(p<.001\)), and a non-significant effect of time (initial vs. final, \(F(1,14)=3.66\), \(p=.08\)). There was also a significant interaction effect (\(F(1,14)=73.02\), \(p<.001\)). Prior to the experiment, the within-team average trust in drone \(A\) and that in drone \(B\) were similar. As the amount of interaction increased, the within-team average trust in drones \(A\) and \(B\) tended to reflect the different detection accuracies of drone \(A\) and drone \(B\), which were set to 90% and 60%, respectively. The within-team average trust in drone \(A\) gradually increased and that in drone \(B\) decreased. At the end of the experiment, the within-team average trust in drone \(A\) was significantly larger than that in drone \(B\) (\(p<0.001\)). **Within-team trust deviation over time.** We define the within-team trust deviation of team \(i\) on drone \(R\) at session \(k\) as the difference in trust ratings between the two human players in a team, regardless of whether the trust update is due to direct or indirect interaction, calculated as \[\text{dev}_{k,\text{W/N}}^{i,R}:=|t_{k}^{x_{i},R}-t_{k}^{y_{i},R}|,\] where \(R\in\{A,B\}\) is the drone type and the subscript "W/N" stands for "within." In contrast to the within-team trust average, the within-team trust deviation focuses on the differences between the two players in a team. Figure 7 plots the within-team trust deviation in drone \(A\) and drone \(B\). For both drones, the within-team trust deviation decreased rapidly in the first few sessions and became relatively stable afterward. For drone \(A\), the initial and final within-team trust deviations (\(\frac{1}{15}\sum_{i=1}^{15}\text{dev}_{0,\text{W/N}}^{i,A}\) and \(\frac{1}{15}\sum_{i=1}^{15}\text{dev}_{15,\text{W/N}}^{i,A}\)) were \(0.27\pm 0.25\) and \(0.06\pm 0.08\). For drone \(B\), the initial and final trust deviation values (\(\frac{1}{15}\sum_{i=1}^{15}\text{dev}_{0,\text{W/N}}^{i,B}\) and \(\frac{1}{15}\sum_{i=1}^{15}\text{dev}_{15,\text{W/N}}^{i,B}\)) were \(0.27\pm 0.24\) and \(0.07\pm 0.09\). A two-way repeated measures ANOVA revealed a significant main effect of time: the within-team trust deviation at the end of the experiment was significantly smaller than that prior to the experiment (\(F(1,14)=11.51\), \(p=.004\)). Neither the drone type (\(F(1,14)=.06\), \(p=.82\)) nor the interaction effect (\(F(1,14)=.313\), \(p=.59\)) was significant. Fig. 5: Illustration of drone assignment. Participant \(x\) is randomly assigned to work with drone \(A\) in session 1, with drone \(B\) in session 2, and so on. Fig. 6: Within-team trust _average_ in drones \(A\) and \(B\) over time. Solid lines indicate mean values and the shaded regions indicate the sample standard errors. Fig. 7: Within-team trust _deviation_ in drones \(A\) and \(B\) over time. Solid lines indicate the mean values and the shaded regions indicate the sample standard errors.
**Within- vs. between-team trust deviation.** To statistically show the existence of trust propagation among team members, we compare the within-team and between-team trust deviations as human agents gain more interaction experience. If trust propagation between the two players in a team had not occurred (i.e., if participants had updated their trust in the drones based solely on direct interaction), the within-team and between-team trust deviations would be statistically equal throughout the entire experiment. The between-team trust deviation of the \(i\)th team on drone \(R\) after the \(k\)th session is defined as \[\begin{split}&\text{dev}_{k,\text{BTW}}^{i,R}\\ :=&\frac{1}{N-2}\sum_{p\in P\backslash\{x_{i},y_{i}\}}\frac{1}{2}\left(|t_{k}^{x_{i},R}-t_{k}^{p,R}|+|t_{k}^{y_{i},R}-t_{k}^{p,R}|\right),\end{split}\] where \(R\in\{A,B\}\) and \(N\) is the total number of participants. Figure 8 illustrates the calculation of within- and between-team trust deviations. Figure 9 shows the within- vs. between-team trust deviations at the beginning and end of the experiment. In the beginning, the within- and between-team trust deviations in drone \(A\) were \(0.28\pm 0.25\) and \(0.27\pm 0.22\), respectively, and in drone \(B\) were \(0.27\pm 0.24\) and \(0.25\pm 0.21\), respectively (figure 9(a)). A two-way repeated measures ANOVA showed no significant difference between the within- and between-team trust deviation (\(F(1,14)=.07\), \(p=.90\)). No difference was found between drone \(A\) and drone \(B\) (\(F(1,14)=2.82\), \(p=.12\)). The interaction effect was not significant either (\(F(1,14)=0.75\), \(p=.40\)). At the end of the experiment, the within-team and between-team trust deviations in drone \(A\) were \(0.06\pm 0.08\) and \(0.11\pm 0.04\), and in drone \(B\) were \(0.07\pm 0.09\) and \(0.22\pm 0.08\) (figure 9(b)). A two-way repeated measures ANOVA revealed that the within-team trust deviation is significantly smaller than the between-team deviation (\(F(1,14)=71.16\), \(p<.001\)), and that the trust deviation in drone \(A\) is significantly smaller than that in drone \(B\) (\(F(1,14)=9.81\), \(p=.007\)). In addition, there was also a significant interaction effect (\(F(1,14)=5.86\), \(p=.03\)). The above results demonstrate the existence, and more importantly, the benefits of trust propagation. As shown in figures 6 and 7, the within-team trust average quickly stabilized and the within-team trust deviation rapidly decreased because of trust propagation within a team. Statistically speaking, at the beginning of the experiment, the within-team and between-team trust deviations in both drones were not significantly different (see figure 9(a)).
At the end of the experiment, the within-team trust deviation was significantly smaller than the between-team trust deviation (see figure 9(b)). Had there not been trust propagation between the two players in a team (i.e., had participants updated their trust in the drones based _only_ on direct interaction), the within-team and between-team trust deviations would have remained statistically equal. Therefore, the significant difference at the end of the experiment was attributed to the trust propagation within a team. Being able to fuse one's direct and indirect experience, instead of relying solely on the direct experience, contributes to the quick convergence of trust assessments on a robot, leading to a significantly smaller within-team trust deviation compared to the between-team trust deviation. Fig. 8: Within-team trust deviation of team \(i\) is the trust difference between \(x_{i}\) and \(y_{i}\), indicated by the dashed line in the figure. Between-team trust deviation of team \(i\) is the average trust difference between the trust ratings of \(x_{i}\) and \(y_{i}\) and all the other participants in other teams, indicated by the solid lines. Fig. 9: Within- vs. between-team trust deviations (a) at the beginning and (b) end of the experiment. ### _Model Fitting_ For clarity, we relabel the participants as \(P=\{p_{1},p_{2},\ldots,p_{30}\}\). We utilize the gradient descent method in Section III-D to compute the optimal parameters \(\mathfrak{\theta}_{*}^{p_{i},A}\) and \(\mathfrak{\theta}_{*}^{p_{i},B}\) for each participant \(p_{i}\). The fitting results are shown in figure 10. We set the performance measurements of drone \(A\) at session \(k\) as \(p_{k}^{A}=A_{k}/10\) and \(\overline{p}_{k}^{A}=1-p_{k}^{A}\), where \(A_{k}\) is the number of correct choices drone \(A\) made in the \(k\)th session; and we define \(p_{k}^{B}\) and \(\overline{p}_{k}^{B}\) similarly. To measure the performance of the model, we define the fitting error at each session for each participant as \[e_{k}^{p_{i},R}=\left|\mu_{k}^{p_{i},R}-t_{k}^{p_{i},R}\right|,\ R\in\{A,B\},\] where \(t_{k}^{p_{i},R}\) is the participant's reported trust while \(\mu_{k}^{p_{i},R}\) is the expected trust computed according to Eq. (2), with \(\alpha_{k}^{p_{i},R}\) and \(\beta_{k}^{p_{i},R}\) generated by Eq. (8) based on \(\theta_{*}^{p_{i},R}\); and we define the root-mean-square error (RMSE) between the ground truth and the expected trust value as \[\text{RMSE}^{R}=\left[\frac{1}{N}\sum_{i=1}^{N}\frac{1}{K+1}\sum_{k=0}^{K}\left(e_{k}^{p_{i},R}\right)^{2}\right]^{1/2},\] for \(R\in\{A,B\}\). The results are \(\text{RMSE}^{A}=0.057\) and \(\text{RMSE}^{B}=0.082\). For comparison, we consider two baseline models: one accounting solely for direct experience and the other solely for indirect experience. The direct-experience-only model corresponds to the TIP model with zero unit gains in indirect experience, i.e., \(\hat{s}^{x,A}=\hat{f}^{x,A}=0\); while the indirect-experience-only model corresponds to \(s^{a,b}=f^{a,b}=0\). We recompute the parameters for the baseline models, and the resulting RMSEs are \(\text{RMSE}^{A}_{\text{direct}}=0.085\), \(\text{RMSE}^{B}_{\text{direct}}=0.107\), \(\text{RMSE}^{A}_{\text{indirect}}=0.128\), and \(\text{RMSE}^{B}_{\text{indirect}}=0.130\).
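A minimal sketch of how these per-session fitting errors and the RMSE could be computed; the arrays below are random placeholders with the shapes used in the experiment (30 participants, ratings indexed 0 to 15, one drone), not the collected data.

```python
import numpy as np

N, K = 30, 15            # participants and sessions; ratings are indexed 0..K
rng = np.random.default_rng(2)

# Placeholder data: reported trust t_k^{p_i,R} and model-expected trust mu_k^{p_i,R}.
reported = np.clip(rng.normal(0.8, 0.10, size=(N, K + 1)), 0.0, 1.0)
expected = np.clip(reported + rng.normal(0.0, 0.05, size=(N, K + 1)), 0.0, 1.0)

fit_err = np.abs(expected - reported)          # e_k^{p_i,R}
rmse = np.sqrt(np.mean(fit_err ** 2))          # RMSE^R as defined above
per_participant = fit_err.mean(axis=1)         # mean fitting error per participant
print(f"RMSE = {rmse:.3f}; mean per-participant fitting error = {per_participant.mean():.3f}")
```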
In addition, we compare each participant's fitting error \(\bar{e}^{p_{i},R}:=1/(K+1)\sum_{k=0}^{K}e_{k}^{p_{i},R}\) of the TIP model (\(A\): \(0.044\pm 0.037\); \(B\): \(0.069\pm 0.045\)), the direct-experience-only model (\(A\): \(0.075\pm 0.041\); \(B\): \(0.095\pm 0.051\)), and the indirect-experience-only model (\(A\): \(0.116\pm 0.053\); \(B\): \(0.118\pm 0.054\)) using a paired-sample t-test. Results reveal that the fitting error of the TIP model is significantly smaller than that of the direct-experience-only model, with \(t(29)=-6.18\), \(p<.001\) for drone \(A\), and \(t(29)=-7.31\), \(p<.001\) for drone \(B\), and significantly smaller than that of the indirect-experience-only model, with \(t(29)=-9.28\), \(p<.001\) for drone \(A\), and \(t(29)=-10.06\), \(p<.001\) for drone \(B\). Furthermore, the fitting error of the direct-experience-only model is significantly smaller than that of the indirect-experience-only model, with \(t(29)=-4.73\), \(p<.001\) for drone \(A\), and \(t(29)=-3.73\), \(p<.001\) for drone \(B\). A bar plot is shown in figure 11. This comparison indicates that a human agent mainly relies on direct experience to update his or her trust, while indirect experience also plays a vital role in trust dynamics. Fig. 10: Fitting results. Red curves are for drone \(A\) while blue curves are for drone \(B\). The solid lines are the participants' trust feedback, while the dashed lines are the expected trust values given by the model. The shaded areas indicate the 90% probability interval of the Beta distribution at each session. The index \(i\)-\(j\) stands for the \(j\)th participant in the \(i\)th group. The horizontal axes represent the session number, ranging from 0 (the rating prior to any interaction) to 15. The vertical axes indicate trust levels, ranging from 0 to 1, where 0 represents "(do) not trust at all" and 1 indicates "trust completely". Fig. 11: Fitting error comparison between the TIP model and the two baseline models. ### _Trust Estimation_ To measure the estimation accuracy of the proposed model, we remove some trust ratings in the data and compute the RMSE of the estimated trust values. Specifically, for each participant \(p_{i}\), we set \(U_{\hat{K}}=\{K-\hat{K}+1,\ldots,K\}\) to remove the last \(\hat{K}\) trust ratings, where \(U_{\hat{K}}\) is the index set of sessions without trust ratings as defined in Section III-E, and estimate the missing trust values by \(\mu_{u}^{p_{i},A}\) and \(\mu_{u}^{p_{i},B}\) for \(u\in U_{\hat{K}}\). The root-mean-square errors are defined as \[\text{RMSE}_{\hat{K}}^{R}=\left[\frac{1}{N}\sum_{i=1}^{N}\frac{1}{\hat{K}}\sum_{u=K-\hat{K}+1}^{K}\left(e_{u}^{p_{i},R}\right)^{2}\right]^{\frac{1}{2}},\] for \(R\in\{A,B\}\). Figure 12 shows the RMSEs under different \(\hat{K}\). When \(\hat{K}\leq 7\), the TIP model can successfully estimate the trust values in the late sessions with a small RMSE (\(<0.1\)) by learning from previous data. In particular, \(\text{RMSE}_{\hat{K}=7}^{A}=0.052\) and \(\text{RMSE}_{\hat{K}=7}^{B}=0.077\), which implies that, with the first 9 sessions' trust ratings available, the RMSEs of the estimation for the last 7 sessions are under 0.08 for both drones. The result also illustrates that \(\text{RMSE}_{\hat{K}}^{A}\) is smaller than \(\text{RMSE}_{\hat{K}}^{B}\) in general. This could be explained by the performance difference between the two drones.
Indeed, because the number of correct choices each drone could make follows binomial distributions (\(\text{Bin}(10,0.9)\) for \(A\) and \(\text{Bin}(10,0.6)\) for \(B\)), the variances of their performance are 0.09 and 0.24, respectively. The greater variance of drone \(B\) may cause a human subject to need more information to stabilize his or her trust and thus leads to higher uncertainty in trust feedback values, which makes it difficult for the model to learn trust dynamics in a short time. ## VI Conclusion In this study, we proposed the TIP model, which accounts for both the direct and indirect experiences a human agent may have with a robot in multi-human multi-robot teams. To the best of our knowledge, it is the first mathematical framework for computational trust modeling in multi-human multi-robot teams. In addition, we proved theoretically that trust converges after repeated direct and indirect interactions under our TIP framework. Using a human-subject experiment, we showed that being able to fuse one's direct and indirect experiences, instead of relying solely on the direct experience, contributes to the quick convergence of trust in a robot. In addition, we showed that the TIP model significantly outperformed the baseline direct-experience-only model in capturing the trust dynamics in multi-human multi-robot teams. The TIP model can be applied to various human-robot teaming contexts including team of teams [9] and multi-echelon networks [7]. In particular, the TIP model can update a human agent's trust in a robot whenever a direct or indirect experience is available and thus can be applied for trust estimation in a network consisting of multiple humans and robots. Our results should be viewed in light of several limitations. First, we assumed that the two human players within a team were cooperative and willing to share their trust in a robot truthfully. In a non-cooperative context where a human player is motivated to cheat, a quick convergence of trust assessment is less likely to occur. Further research is needed to examine the non-cooperative scenario. Second, we used a one-dimensional trust scale in the experiment. Even though this scale has been used in prior literature [10, 19, 20], it may not capture the different underlying dimensions of trust. Third, we take an ability/performance-centric view of trust and assume a human agent's trust in a robot is primarily driven by the ability or performance of the robot. Based on research in organizational management, trust can be influenced by three elements, namely ability, integrity, and benevolence [33]. Future research should investigate ways to integrate the benevolence and integrity elements into trust modeling, in particular, for HRI contexts that involve a strong emotional component, for example, educational or home-care robots. Moreover, conducting further ablation studies is essential to comprehensively understand the impact of various factors on the dynamics of trust. For instance, varying the number of sessions versus drone performances would provide insights into the rate at which trust converges.
2303.07829
Glassy materials for Silicon-based solar panels: present and future
Glass provides mechanical, chemical, and UV protection to solar panels, enabling these devices to withstand weathering for several decades. The increasing demand for solar electricity and the need to reduce anthropogenic carbon emissions require researchers to develop new materials and processes to make solar even more sustainable. Here, we review the current research to create environmentally friendly glasses and to add new features to the cover glass used in silicon solar panels, such as anti-reflection, self-cleaning, and spectral conversion properties. While several studies have proposed spectral converter designs and reported information regarding their light-conversion efficiency, there is still a need for a standardized protocol to investigate and compare the impact of these modified materials on the electrical output of photovoltaic systems. In light of these issues, we propose a framework for quantifying parameters that can serve as benchmarks for comparing different cover glasses, which is especially important in the search for a viable spectral converter.
Marcos Paulo Belançon, Marcelo Sandrini, Vitor Santaella Zanuto, Robson Ferrari Muniz
2023-03-14T12:06:51Z
http://arxiv.org/abs/2303.07829v4
# Glassy materials for Silicon-based solar panels: present and future ###### Abstract Glass provides mechanical, chemical, and UV protection to solar panels, enabling these devices to withstand weathering for several decades. The increasing demand for solar electricity and the need to reduce anthropogenic carbon emissions require researchers to develop new materials and processes to make solar even more sustainable. Here, we review the current research to create environmentally friendly glasses and to add new features to the cover glass used in silicon solar panels, such as anti-reflection, self-cleaning, and spectral conversion properties. While several studies have proposed spectral converter designs and reported information regarding their light-conversion efficiency, there is still a need for a standardized protocol to investigate and compare the impact of these modified materials on the electrical output of photovoltaic systems. In light of these issues, we propose a framework for quantifying parameters that can serve as benchmarks for comparing different cover glasses, which is especially important in the search for a viable spectral converter. + Footnote †: journal: Journal of Non-Crystalline Solids ###### Contents * 1 Introduction * 2 Reducing energy inputs * 2.1 Alternative glass matrix * 2.1.1 Aluminosilicates * 2.1.2 YAG Glass Ceramics * 2.1.3 SLS variations * 2.1.4 Silicates containing fluorine * 2.1.5 Other alternatives * 3 Increasing energy outputs * 3.1 Anti-reflective glass surfaces * 3.2 Self-cleaning and multifunctional glass surfaces * 3.3 Spectral converters (SC) * 3.3.1 Main rare-earth dopants for spectral conversion * 3.3.2 Non-rare-earth ions for spectral conversion * 3.3.3 A benchmark framework for spectral converters * 3.3.4 The ideal SC * 4 Emerging trends * 4.1 Lowering carbon emission with Hydrogen * 4.2 Alternative materials and methods * 5 Discussion * 6 Conclusion * 7 Acknowledgement ## 1 Introduction The annual glass consumption worldwide surpassed 21 kg per person in 2014 [1]. Besides traditional applications such as packaging or flat glass for cars and buildings, the glass demand for cover glasses (CG) in solar panels is significant. Silicon-based photovoltaic panels (PV) are already responsible for about 3% of electricity produced annually worldwide, and this share is expected to grow significantly in the following decades [2; 3]. A standard PV produces an electrical output of \(\sim\)210 \(W_{p}/m^{2}\) from 1000 \(W/m^{2}\) of sunlight, which corresponds to efficiencies of about 21% at the industry level [4]. As the world transitions to more sustainable energy sources, new PVs are installed as fast as 183 \(GW_{p}\) per year, corresponding to an additional area of about 1 billion square meters. CG demand is high, and the share of bifacial PVs (which may have glass on both the front and back sides [5]) is growing and pushing the consumption of float glass by this sector even further. However, several aspects of the PV technology need further improvements to guarantee its sustainability in the future [3, 6, 7, 8, 9, 10], and some of them are related to the glass ecological footprint [11, 12], as well as its features, such as UV-filtering, anti-reflective and self-cleaning properties [13]. Glass makes up 67%-76% of the total solar panel weight.
There is a growing concern about the industrial impact of glass production, which includes significant energy inputs and emissions of about 60 million tons of CO\({}_{2}\) equivalent per year [12]. On the other hand, silicon's characteristic spectral sensitivity limits the efficiency of sunlight-to-power conversion, and the industry is already reaching the practical limits imposed by the Shockley-Queisser theory [14, 15]. In this context, glass science may address these problems and help expand and develop more sustainable technologies, materials, and processes. Here, we review some of the glass research related to this subject, highlighting where advances are already being made and where more effort seems essential in light of the challenges associated with PV expansion worldwide. To effectively synthesize and evaluate the vast array of information available, we implemented a systematic approach in our literature review by categorizing the relevant topics into distinct sections. Our objective was to address specific issues of interest to the glass science community, and to establish a cohesive framework for analysing these topics. These topics are organized in terms of the "energy return on investment" (EROI) concept [16, 17, 18, 19, 20]. The EROI of a solar panel is the total energy produced during the panel lifespan divided by the energy invested in all the materials and processes needed to build the device. In other words, there are advances that researchers may pursue that will contribute to one or the other part of this ratio, and some of the most interesting ones are presented and discussed in this work. In the 1950s, the Pilkington process, or float process, revolutionized the glass-making industry [21, 22]. It allowed glass production to grow while glass was made faster, with much higher quality and at a reduced cost; these features were necessary to enable the massive production of PVs seen in the world today. However, even though the knowledge of several families of glasses is abundant, only soda-lime silicate (SLS) glass is suited to Pilkington's process and cheap enough to meet the industry's needs. It is important to remember that SLS glass making is an energy-intensive process due to this material's high melting temperature (\(\sim 1500^{o}C\)), requiring about 7-8 GJ/t of glass produced [1]. This value is significantly higher for other materials, such as aluminum (90-100 GJ/t). However, speaking specifically of the CG, the challenge lies not so much in how much energy is consumed per ton of glass as in the amount of material needed in each panel, the growing demand [4; 11], and the difficult task of recycling CG. As this material should have a specific composition, which includes a low iron concentration, it is challenging to perform the recycling process without introducing impurities, namely iron. In this way, while SLS glass can be indefinitely recycled, the amount of recycled glass (cullet) mixed into the melt in float glass production is limited to about \(\sim 11\%\) [1] to avoid negative impacts on the final glass sheet. Considering the vast number of different glasses and surface treatments under investigation by researchers worldwide, here we review some trends that may potentially enhance the EROI of PVs. Several possibilities exist that could either reduce the energy input or increase the energy output of the panel. We will now focus our discussion on those that pertain to the former alternative.
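To make the glass term in this energy balance concrete, the rough sketch below combines the figures quoted above (7-8 GJ/t for SLS glass, \(\sim\)210 \(W_{p}/m^{2}\) of module output) with assumed values for cover-glass thickness, glass density, capacity factor, and panel lifetime; all of the assumed numbers are illustrative and would need to be replaced by case-specific data.

```python
# Back-of-the-envelope energy payback of the cover glass, under stated assumptions.
EMBODIED_GJ_PER_TON = 7.5     # SLS glass, mid-range of the 7-8 GJ/t cited above
GLASS_THICKNESS_M = 0.0032    # assumed 3.2 mm front cover glass
GLASS_DENSITY_KG_M3 = 2500.0  # typical soda-lime glass density (assumption)
PANEL_OUTPUT_WP = 210.0       # W_p per m^2 of panel, from the text
CAPACITY_FACTOR = 0.17        # assumed yearly average
LIFETIME_YEARS = 25           # assumed panel lifespan

glass_mass_kg = GLASS_THICKNESS_M * GLASS_DENSITY_KG_M3      # per m^2 of panel
embodied_MJ = glass_mass_kg * EMBODIED_GJ_PER_TON            # 1 GJ/t equals 1 MJ/kg
yearly_output_MJ = PANEL_OUTPUT_WP * CAPACITY_FACTOR * 8760 * 3600 / 1e6
lifetime_output_MJ = yearly_output_MJ * LIFETIME_YEARS

print(f"cover glass mass: {glass_mass_kg:.1f} kg/m^2")
print(f"embodied energy of the cover glass: {embodied_MJ:.0f} MJ/m^2")
print(f"lifetime electricity output: {lifetime_output_MJ:.0f} MJ/m^2")
print(f"glass-only energy payback: {12 * embodied_MJ / yearly_output_MJ:.2f} months")
```

Under these assumptions the cover glass alone is paid back within roughly a month of generation, which is consistent with the point above that the main challenges lie in total material demand, added functionality, and recyclability rather than in the energy per ton of glass.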
## 2 Reducing energy inputs At the industry level, glass has become a synonym for SLS glass. However, other glasses, such as borosilicates, are fundamental for some applications due to their improved chemical and thermal resistance. It presents a reduced expansion coefficient, vital for several applications [23], but also preventing it from being thermally toughened [13]. This fact clearly illustrates that, when developing a new glass system, an improved property often has a downside. Anyhow, one can see that basic research plays a critical role in creating an environment that enables breakthroughs to occur. In the case of borosilicate, there is still much research going on about its structure [24; 25; 26], corrosion [27], effects of dopants [28] and other basic science studies. In such a context, we are not looking to present a glass ready to replace the SLS in solar panels but to highlight some of the most recent and exciting results in the literature concerning the search for alternatives. ### Alternative glass matrix Glasses and glass ceramics have been a constant subject of research worldwide. Besides several applications that include lasers [29], amplifiers [30], glass fibers [31; 32], sensors [33; 34; 35] and white-light applications [36; 37; 38; 39; 40; 41; 42; 43], several studies have been developed aiming to apply a glassy material to enhance photovoltaic energy production. In Table 1, we have listed some of these materials recently investigated for this application, as well as their primary raw materials and the respective melting temperature (T\({}_{Melt}\)). This list includes materials with chemistry very similar to the commercial SLS glass and some based on entirely different systems. From the point of view of reducing the energy inputs, it would be interesting to develop low-melting temperature materials. However, it seems impossible for large-scale applications to be based on some of those materials in table 1. 
For example, even though Tellurium is used to produce commercial thin-film technologies such as CdTe solar cells [67], this mineral is a secondary product in mining [68; 69], and its availability would never allow the production of tellurite glass at even a fraction of the volume demanded by PVs. \begin{table} \begin{tabular}{|c|c|c|} \hline **Glass** & **Main Components** & **T\({}_{Melt}\)** (\({}^{o}C\)) \\ \hline Aluminosilicates [44; 45] & SiO\({}_{2}\)-CaO-Al\({}_{2}\)O\({}_{3}\) & 1400-1600 \\ \hline YAG Glass-Ceramic [46] & SiO\({}_{2}\)-Al\({}_{2}\)O\({}_{3}\)-Y\({}_{2}\)O\({}_{3}\)-B\({}_{2}\)O\({}_{3}\) & 1500 \\ \hline Fluorosilicate [47] & SiO\({}_{2}\)-Al\({}_{2}\)O\({}_{3}\)-CaF\({}_{2}\) & 1500 \\ \hline SLS-Lithium [13] & SiO\({}_{2}\)-Na\({}_{2}\)O-CaO-MgO-Al\({}_{2}\)O\({}_{3}\)-Li\({}_{2}\)O & 1450-1480 \\ \hline SLS-Titanium [48] & SiO\({}_{2}\)-Na\({}_{2}\)O-CaO-TiO\({}_{2}\) & - \\ \hline Alkali alumina-borate GC [49] & Al\({}_{2}\)O\({}_{3}\)-B\({}_{2}\)O\({}_{3}\)-K\({}_{2}\)O-Li\({}_{2}\)O & 1400 \\ \hline SCS [50; 51] & SiO\({}_{2}\)-Na\({}_{2}\)O-CaO-Al\({}_{2}\)O\({}_{3}\)-CaF\({}_{2}\) & 1150-1250 \\ \hline Recycled SLS [52; 53] & SiO\({}_{2}\)-Na\({}_{2}\)O-CaO-MgO-Al\({}_{2}\)O\({}_{3}\) & 1100 \\ \hline Phosphates [54; 55; 56] & NaH\({}_{2}\)PO\({}_{4}\)-H\({}_{2}\)O & 1000 \\ \hline Lead-Bismuthate [57] & Li\({}_{2}\)O-Bi\({}_{2}\)O\({}_{3}\)-PbO & 800-1000 \\ \hline Lithium-Tellurite [58] & TeO\({}_{2}\)-Li\({}_{2}\)O & 850 \\ \hline Fluorochlorozirconate [59] & ZrF\({}_{4}\)-BaCl\({}_{2}\)-NaF-AlF\({}_{3}\) & 825 \\ \hline Fluorozirconate [60] & ZrF\({}_{4}\)-BaF\({}_{2}\)-NaF-AlF\({}_{3}\) & 745 \\ \hline Tellurite-Tungstate [30; 61] & TeO\({}_{2}\)-WO\({}_{3}\)-Nb\({}_{2}\)O\({}_{5}\)-Na\({}_{2}\)O & 700-800 \\ \hline Zinc-Tellurite [62; 63; 64; 65; 66] & TeO\({}_{2}\)-ZnO-Na\({}_{2}\)O & 600-800 \\ \hline \end{tabular} \end{table} Table 1: Some glassy materials recently investigated for photovoltaic applications and their T\({}_{Melt}\) values or ranges. On the other hand, the upper part of table 1 contains a few glass matrices that rely on quite common and abundant minerals. Yttrium is not as abundant as silicon or aluminum, although it is the second most abundant rare earth [70]. Some of these glasses are presented and discussed hereafter, in descending order of melting temperature. #### 2.1.1 Aluminosilicates The incorporation of aluminum brings substantial modification to silicates in general. Properties such as the elastic modulus and hardness increase monotonically with alumina content from 30 mol% to 60 mol% [71]. Related to this phenomenon, pure silica fibers may have their Brillouin coefficient reduced by two orders of magnitude due to aluminum [31]. Aluminosilicate compositions with up to 39% Al\({}_{2}\)O\({}_{3}\) content [44] have already been used as active media for lasers [29; 72], which demonstrates the high optical quality and efficient luminescent properties that are both required to develop spectral converters (SCs) for photovoltaics. However, the viscosity of the silica melt is significantly reduced by the aluminum [73], and conventional production methods of flat glass are not compatible with these melts [31]. One exciting possibility is the development of thin films, recently demonstrated by Savi et al. [45]. Though a sophisticated UV-pulsed laser deposition technique was used, it is remarkable that films as thin as 17 nm were obtained, and most properties of the bulk glass samples were preserved.
Several complementary studies could be exciting, such as investigating the mechanical properties and strength of the film, as well as its production by more straightforward techniques. #### 2.1.2 YAG Glass Ceramics Yttrium-Aluminum-Garnet (YAG) has been used as an active medium and phosphor for lighting applications for several decades. Tai et al. [46] have prepared glass samples containing Yttrium, which were heat-treated at \(\sim 750^{o}C\) to grow YAG nanocrystals inside bulk samples. The resulting Nd\({}^{3+}\)-Yb\({}^{3+}\) co-doped glass-ceramic demonstrated near-infrared (NIR) down-conversion (DC) of photons with quantum efficiencies as high as 185%. One exciting aspect of these materials is the production of nanocrystals at moderately low temperatures, which may favor their production for practical applications. Besides, analyzing this remarkable material on a photovoltaic panel prototype would be essential, since the material refractive index and its optical quality may introduce significant reflection losses. As recently demonstrated, such evaluation can be performed under natural sunlight irradiation using an affordable apparatus [66]. #### 2.1.3 SLS variations Modified SLS glass has also been under investigation aiming at photovoltaic applications. Allsopp et al. [13] have demonstrated an extensive study of Bi\({}^{3+}\)-Gd\({}^{3+}\) co-doped SLS glass, which was also slightly modified with the incorporation of Li\({}_{2}\)O to facilitate the production of flat samples. Miniaturized solar module prototypes fabricated using the optimal glass samples demonstrated enhanced electricity production, which was explained in terms of the fluorescent dopants incorporated into the glass, even though the authors also pointed out that further measurements are advisable. Remarkably, such samples are similar to the commercial CG, indicating that mass production could be feasible. Another possible improvement of the SLS glass is mechanical strengthening due to TiO\({}_{2}\) incorporation. Bengtsson et al. [48] reported a decrease in the alkali diffusion coefficient of SLS glass when Titanium is incorporated by ion exchange. This may be essential to expand the PV lifespan, as Na\({}^{+}\) diffusion is one of the leading causes of potential-induced degradation (PID) in these devices. It has been demonstrated that Ti films in SLS glass reduce the PID [74], and the potential to combine this feature with others, such as self-cleaning [75; 76] and anti-reflective [77] properties, is quite exciting. #### 2.1.4 Silicates containing fluorine The incorporation of fluorine in silicate glasses has been extensively investigated [78; 79; 80], and it is well-known that this modification reduces both glass transition and peak crystallization temperatures while also improving glass transparency. All these effects can help develop an environmentally friendly CG, as they may reduce energy inputs in glass manufacturing and enhance the sunlight power reaching the solar cells. Muniz, one of the authors of this work, has investigated silicates containing up to \(\sim\)20% of CaF\({}_{2}\) [50], and rare-earth doped samples [51; 81] based on the system 50SiO\({}_{2}\)-29Na\({}_{2}\)O-12.5CaO-7.5CaF\({}_{2}\)-1Al\({}_{2}\)O\({}_{3}\). Such a high Na concentration, coupled with the fluorine effect, resulted in a significant decrease in the melting temperature to only 1200\({}^{o}\)C. 
Down-conversion with up to \(\sim\)87% efficiency was achieved in co-doped samples [51], and the next steps are underway to evaluate the performance of solar panel prototypes based on this Sodium-Calcium-Silicate (SCS) glass. Besides the results already demonstrated, this material should have its chemical and mechanical properties evaluated carefully to check if it can reach the standards needed for applications in solar panels [13]. One critical aspect requiring investigation is the chemical stability of the glass. The dissolution of silicates in water [82; 83; 84; 85; 86; 87] is a complex phenomenon, which is likely to be altered by the high sodium concentration [88] and the presence of fluorine, and this can make it more challenging to process the material at an industrial scale. #### 2.1.5 Other alternatives The number of glass systems is limitless and constantly growing and expanding. Many recent works have demonstrated interesting spectroscopic properties [89; 90; 91; 92; 93; 94; 95; 96; 97; 98], though in many of these cases the glass composition depends on a scarce mineral (such as Te) or a toxic one (such as Bi, Cd or Pb), or even results in chemically unstable materials. In this regard, though the search for innovative materials for several applications should always be pursued, for PV, more common chemistries, such as those mentioned above, seem to have the best chances. One possibility that does not seem to have been fully explored is the modification of SLS glass, such as that proposed by Allsopp et al. [13], but also approaches including surface modification, doping by ion-exchange and others, and we believe researchers worldwide should be encouraged to pursue them. ## 3 Increasing energy outputs The conversion of sunlight into electricity is subject to the Carnot heat-engine limit [99; 100]. As the sun is a black body at \(5500^{o}C\) and a Si-cell has a working temperature of about \(80^{o}C\), the Carnot efficiency limit is 94% for the energy transfer from the sun to the cell [101]. However, a Si-cell is not an ideal black body. Due to its spectral sensitivity and internal losses such as electron recombination, the practical efficiency limit for PVs is about 30% [14; 15; 101; 102]. Considering these constraints, and the relatively complex and expensive processes needed to produce solar cells, it has been fundamental to expand electricity production by maximizing the sunlight reaching the cells. ### Anti-reflective glass surfaces The CG and the encapsulant material in PVs should be very transparent and exhibit proper refractive indices to reduce reflection losses. Even though sunlight is scattered when reaching the Earth, most of the power produced in PVs is related to the plane-polarized light component. Fresnel theory states that the fraction of this component reflected at an interface between two media is given by Equation 1, \[R_{s}=\left(\frac{n_{1}\cos\theta_{i}-n_{2}\sqrt{1-\left(\frac{n_{1}}{n_{2}}\sin\theta_{i}\right)^{2}}}{n_{1}\cos\theta_{i}+n_{2}\sqrt{1-\left(\frac{n_{1}}{n_{2}}\sin\theta_{i}\right)^{2}}}\right)^{2}, \tag{1}\] where \(R_{s}\) is the reflection coefficient, \(n_{1}\) and \(n_{2}\) are the refractive indices of the two media, and \(\theta_{i}\) is the incidence angle. Before reaching the Silicon, sunlight is subjected to air-glass, glass-encapsulant, and encapsulant-silicon interfaces. As the refractive index of Silicon is very high, an anti-reflective coating on the Silicon surface is mandatory to avoid a \(\sim\)20% loss at its interface. 
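To make the reflection-loss numbers concrete, the short Python sketch below evaluates Equation 1 for the air/cover-glass interface, assuming n = 1.00 for air and n = 1.52 for SLS glass (the value used in the next paragraph). It is only an illustrative calculation, not part of the original study.

```python
import numpy as np

def fresnel_rs(n1, n2, theta_i_deg):
    """Reflection coefficient of the s-polarized component (Equation 1)."""
    theta_i = np.radians(theta_i_deg)
    root = np.sqrt(1.0 - (n1 / n2 * np.sin(theta_i)) ** 2)
    return ((n1 * np.cos(theta_i) - n2 * root) /
            (n1 * np.cos(theta_i) + n2 * root)) ** 2

# Air-glass interface, assuming n = 1.00 for air and n = 1.52 for SLS cover glass.
for angle in (0, 30, 60):
    print(f"theta_i = {angle:2d} deg -> R_s = {fresnel_rs(1.00, 1.52, angle):.3f}")
# At normal incidence this gives R_s ~ 0.043, i.e. the >4% reflection loss
# quoted below for the air/cover-glass interface.
```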
On the other hand, SLS glass has a refractive index of 1.52, which results in over 4% loss by reflection for perpendicular incidence of light. Surface texturing of the CG has been explored to produce a refractive index gradient, which further reduces air-glass interface reflection. These structures may enhance PV efficiency by up to 8.7% [103]. Some theoretical work has even proposed surface texturing to increase the panel's emissivity [104] as a pathway to reduce its temperature. Anti-reflective (AR) coatings based on "Moth-eye" structures and multiple interference films have been investigated, and several techniques to produce them have already been demonstrated [105; 106]. These and other results are fascinating; however, as PVs should withstand weathering for at least a few decades, it is fundamental to investigate the durability of these coatings under a wide range of climatic and meteorological conditions, as well as under region-specific conditions [107; 108]. ### Self-cleaning and multifunctional glass surfaces PVs are supposed to produce as much electricity as possible during their lifetimes. These devices are installed on every continent and are therefore subject to a wide range of environments, wind, storms, etc. Soiling is an important issue [109], as it may significantly reduce the amount of light reaching the solar cells inside the panel. This has fuelled the development of self-cleaning surfaces. In some cases, rain is well distributed throughout the year, which can be enough to keep PVs satisfactorily clean. However, in some environments, such as dry or icy ones, even though one may have an excellent potential for solar power production, keeping the CG surfaces clean can be challenging as water may not be available [110; 111]. In the context discussed in this work, some exciting results have proposed multifunctional coatings exhibiting self-cleaning, anti-reflective, and even luminescent properties [112; 113]. We will come back to this subject in Section 4.2, but next we review some research on spectral converters. ### Spectral converters (SC) In a commercial PV, most of the sunlight is converted into heat [114], and the spectral mismatch between the sunlight and silicon sensitivity plays a central role in this inefficiency [14; 15; 101; 102]. The standard 1.5G solar spectrum has an intensity of 1000 \(W/m^{2}\). Photons below the silicon bandgap (\(\sim\)1100-2200 nm) account for 164 \(W/m^{2}\) [115]. Theoretically, these photons can be up-converted to higher energy ones, enabling additional sunlight to produce electricity. However, up-conversion efficiency is inherently low [116], and after extensive research on the subject in the last decades [117], there is no clear evidence of a feasible up-converter for PVs. Most experiments claiming to measure an increment in electricity output due to up-converter materials were performed in impractical conditions, often based on laser illumination and/or highly concentrated light [118]. In practice, the Stokes shift is far more likely to occur than the anti-Stokes shift, hence we will focus here on the spectral conversion of above-bandgap photons only. Each incoming photon may produce a maximum of one free electron in the Si-cell by creating an electron-hole pair. Above-bandgap photons, however, have excess energy that will be wasted. About 149 \(W/m^{2}\) of the incoming sunlight consists of UV-blue photons (\(\sim\)300-450 nm) with at least twice the silicon bandgap energy [115]. 
Besides this significant loss due to energy excess, these high-energy photons are also linked to solar panel degradation [119; 120; 121; 122; 123; 124]. The electricity output could be increased if these photons were converted or split into two half-energy photons. This could modify the spectrum to enrich the NIR part, where PV is more sensitive, reducing the heat associated with the electron-hole pair creation or even increasing the number of photons reaching the solar cell (if the SC exhibits quantum efficiency higher than one). Most of the proposed SCs are based on rare-earth active ions, though there is plenty of research on other dopants, such as transition metals or metallic nanoparticles [125; 126]. Two main approaches concerning SC exist: developing CGs containing optically active ions and producing optically active films on top of a standard CG. Although the list of dopants being investigated is long and has been reviewed recently [125; 126], here we provide a short description of the most common ones, highlighting some key aspects of the complex task of developing a viable SC. #### 3.3.1 Main rare-earth dopants for spectral conversion _Ce\({}^{3+}\)_ Cerium is one of the most abundant rare-earths and is widely used in industrial applications such as catalysis, UV-blocking agent in glasses, etc. When incorporated into glass, it can exist in two different valence states, either Ce\({}^{3+}\) or Ce\({}^{4+}\)[65]. As Ce has the atomic number 58, it will have 54 electrons in its oxidized state and present full electron shells and no optical transitions. The additional electron in the Ce\({}^{3+}\) ion, though, has a 4f-5d parity allowed transition [127], which is very sensitive to the host crystal field. Absorption bands of Ce\({}^{3+}\) are often observed in the UV-blue range [128], while emission bands have been reported ranging from UV-blue [129; 130] to the yellow-red parts of the visible spectrum [131]. In this way, to develop SCs for PVs, Cerium is often used to absorb UV-blue light. In most cases, it is designed to work coupled with some other ion such as Nd\({}^{3+}\)[132; 133] or Yb\({}^{3+}\)[134; 135; 136] which are well-known for their emission lines near the silicon sensitivity peak [114]. One of the drawbacks reported with Cerium-doped glasses is the difficulty of controlling its valence states. In most cases, the final material will have a mix of both states [137; 138; 139]. For PV applications, Ce\({}^{4+}\) may be an excellent UV-blocking agent, though it may prevent spectral conversion [65]. _Pr\({}^{3+}\)_ The Pr\({}^{3+}\) energy diagram is broadly investigated as it has several absorption and emission bands between the blue and the near-infrared, including an emission line near the silicon sensitivity peak around \(1\mu m\)[30]. In practice, most of the blue light absorbed by this ion will suffer a slight Stokes shift and result in visible emission, which will barely benefit the PV efficiency. Similar to the case of Cerium, Praseodymium has been proposed to be used alongside a NIR emitter, such as Yb\({}^{3+}\)[140]. Such mechanism of spectral conversion has been demonstrated experimentally [141]. The energy diagram of this ion results in a relatively narrow absorption band in the blue region due to ground state absorption to the \({}^{3}\)P levels. Because of that, besides co-doping schemes with a NIR emitter, Pr\({}^{3+}\) has also been used as co-dopant with Ce\({}^{3+}\)[142]. 
However, as we have pointed out in a recent work [66], one should not neglect the negative impact of any ion absorption bands on solar cell performance. Even if the spectral conversion can be observed, it has no straightforward relation to the overall efficiency of a solar cell prototype. In the case of Praseodymium doped and co-doped samples, electrons excited to the \({}^{3}\)P levels may result in quantum-cutting or down-conversion. However, the lower-energy \({}^{1}\)D\({}_{2}\) level is expected to reduce the yellow-red (\(\sim 590\) nm) transmission through the sample. Even though resonant emission from this level is expected (\(\sim\)612 nm), it will also result in the emission of low-energy photons, including some with energy below the silicon bandgap [30] (for example at \(\sim\)1480 nm). _Eu\({}^{2+}\)_ Europium-doped glasses will more often result in Eu\({}^{3+}\)-rich materials, and this valence state provides a mechanism to obtain intense red emission [36, 64, 81, 143]. This ion can sometimes be found as, or reduced to, Eu\({}^{2+}\)[51, 144], which has spectroscopic properties similar to Ce\({}^{3+}\). Indeed, Dorenbos [145, 146] has demonstrated a strong correlation between the spectroscopic properties of these two ions. Also similar to the case of Cerium, Europium has been incorporated into several different materials where often a NIR emitter co-dopant is used, such as Pr\({}^{3+}\)[147], Dy\({}^{3+}\)[148], Nd\({}^{3+}\)[51, 149, 150] and Yb\({}^{3+}\)[132]. _Nd\({}^{3+}\)_ As previously mentioned, Nd\({}^{3+}\) has been used as a NIR emitter in materials proposed as SCs for PVs [51, 132, 143, 149, 150, 151]. However, this ion has also been used as a donor to Yb\({}^{3+}\) ions [45, 46]. Though it could be theoretically possible to perform some spectral conversion using Nd\({}^{3+}\) single-doped samples, in practice the ion has several absorption lines in the visible and NIR that are likely to have some negative impact on the spectrum reaching the Si-cell. We will come back to this question hereafter. _Yb\({}^{3+}\)_ Yb\({}^{3+}\) is probably the most investigated NIR emitter for PVs [45, 46, 141, 152, 153, 154, 155, 156]. The main reason is the simple energy-level diagram of this ion, which has absorption bands (between 850-1000 nm) and intense emission (between 970-1050 nm) in the NIR range. Though absorption in this region may not be beneficial for PVs [13, 157], Yb\({}^{3+}\) can be excited by energy transfer from several rare-earths and transition metals [156, 158]. In the next section, we discuss the challenges related to choosing dopants. _Selecting rare earth elements for spectral converters._ As we have pointed out, several rare-earth doped materials have been investigated as a mechanism to achieve spectral conversion. In Figure 1, we show a simplified representation of the main energy levels in Yb\({}^{3+}\) and Nd\({}^{3+}\), which play an important role in the development of SCs. Absorption of photons with wavelengths higher than 1100 nm is harmless to the electrical output of PVs, as these photons carry too little energy. As we have mentioned, up-conversion of these photons seems unlikely and is not within the scope of this work. On the other hand, absorption in the range of 300-1100 nm is desirable only if we can increase the electrical output of the PV after the spectral modification. As the optimum sensitivity of PVs is found near the Si bandgap, there is no point in having an SC absorbing energy in this region. 
Figure 1: Representative energy levels diagram of NIR emitters for spectral conversion. Semi-transparent arrows indicate emission lines below the Si bandgap.

Yb\({}^{3+}\) is transparent over almost the entire silicon sensitivity range and exhibits a resonant absorption/emission matching the PV sensitivity peak. On the other hand, Nd\({}^{3+}\) exhibits many absorption peaks along the UV-visible range, together with strong emission in the NIR region. However, the same excited level related to the emission at \(\sim\)1064 nm is also the origin of the emission at \(\sim\)1350 nm, below the bandgap. Nd\({}^{3+}\) is also a strong absorber between 730-830 nm, a region where the down-conversion of a photon is not favorable for PV. To the best of our knowledge, the research on both Yb\({}^{3+}\) and Nd\({}^{3+}\) based SCs has yet to consider the negative impacts of these ions on the overall efficiency of PVs. Most of the research has focused on optimizing the dopant concentrations to obtain higher quantum efficiencies in the conversion of photons; however, the PV electrical output is not a function of this single variable, and neither correlates straightforwardly to it. In some materials exhibiting high conversion efficiencies, a high dopant concentration of Yb\({}^{3+}\) or Nd\({}^{3+}\) is used, which could reduce the electricity output of Si-cells. It is essential to highlight that several reports on high conversion efficiencies have been demonstrated in the last few years. However, none have included experimental measurements to quantify a prototype's electrical output under sunlight irradiation. As recently shown in a Pr\({}^{3+}\)-doped tellurite glass [66], it is possible to detect rare-earth emission bands under natural light, and we believe SCs should be evaluated under such conditions. In Section 3.3.3, we propose an analysis with this goal in mind. As one can imagine, the list of ions that could be explored to provide UV-blue absorption and NIR emission is extensive. Other rare-earths such as Tb\({}^{3+}\)[159], Er\({}^{3+}\)[160], Dy\({}^{3+}\)[161], Ho\({}^{3+}\)[162], Tm\({}^{3+}\)[153] have also been investigated. However, as we have discussed, a "too rich" energy diagram will often result in a loss of sunlight power somewhere in the spectrum by one of two mechanisms: 1) undesirable absorption of photons or 2) undesirable emission of photons without overall gains for the electricity output. #### 3.3.2 Non-rare-earth ions for spectral conversion Several transition metals (TMs) have also been investigated as active ions for spectral conversion [163; 164; 165; 166]. Some of the most exciting results reported concern the possibility of replacing cerium with a TM to block UV transmission, as Cerium may react with traces of iron in the SLS glass under UV radiation. The result, in this case, is an increase of the Fe\({}^{2+}\) concentration in the CG [157], a NIR absorber that will have a deleterious effect on the PV electricity output. It has been demonstrated that TMs [163] can provide significant UV-blocking capabilities, with the bonus that some TMs exhibit VIS/NIR emission, so down-conversion can potentially be achieved as well. #### 3.3.3 A benchmark framework for spectral converters To the best of our knowledge, there is no standardized test to measure the performance of SCs. Indeed, as we have discussed, most works proposing an SC report some information about the conversion efficiency of the absorbed light. 
However, they lack specific measurements of the effect of the material on the electrical output of a PV. On the other hand, we recognize how difficult it is to produce solar cell prototypes covered by the SC. It can be a resource- and time-consuming task that requires a different set of skills and equipment and could introduce uncertainties in comparing different SCs. With this aim, we propose a framework for quantifying some parameters that could be used as a benchmark to compare SCs. Our approach is based on the following considerations:

1. The Global 37\({}^{o}\) tilt ASTM-G173-03 sunlight spectrum (1.5G spectrum) is used as the reference for standard sunlight illumination.
2. The Thorlabs S120VC Si sensor responsivity spectrum is used to represent the spectral sensitivity of a Si solar cell.
3. The product of these two spectra represents the solar cell current output per nm under sunlight illumination.

The three curves enumerated above are shown in Figure 2, for the range 280-1100 nm, with a resolution of 5 nm. The areas under these curves were calculated for different sections of the data, all using the trapezoidal rule. The integral of the 1.5G spectrum in this range and resolution corresponds to 805.65 W/m\({}^{2}\), and as one can see, the sunlight intensity peaks at \(\sim\)500 nm. On the other hand, the Si sensitivity peaks at \(\sim\)1000 nm. At the bottom of the figure, we show the resulting "current output spectrum" obtained by multiplying the sunlight irradiance by the sensitivity curve. The total area under the last curve corresponds to 24159 mA/m\({}^{2}\), and it is essential to highlight the "flat" region, between 450-900 nm, which is responsible for about 75% of the total current output (\(\sim\)17917 mA/m\({}^{2}\)). One fundamental insight of this approach is that, in the flat region of the current curve, its value is \(\sim\)40 \(mA\,m^{-2}\,nm^{-1}\). This means a production of about 200 mA/m\({}^{2}\) for each 5 nm interval. As one can see, even a narrow absorption band may significantly affect electricity production depending on the SC thickness. For example, a 5 nm narrow band in the 450-900 nm range that absorbs 50% of the incoming light will reduce the current output by \(\sim\)100 mA/m\({}^{2}\), which corresponds to about 0.41% of the current output under direct sunlight. Using the material absorption spectrum and this current output reference curve, one can estimate the negative impact of an SC on the electricity output. To illustrate that, we evaluated the absorption data in the range 280-900 nm from a 0.1 mm thick Cerium-doped xerogel [167], and the results are shown in Figure 3.

Figure 2: Reference curves used in our proposed framework.

Considering the sunlight spectrum in that range, this material applied to a PV would reduce the current output from 20114 to 19471 mA/m\({}^{2}\) due to its absorbance. Of this 643 mA/m\({}^{2}\) loss, 265 mA/m\({}^{2}\) are related to wavelengths below 420 nm, demonstrating that the reduction, in this case, is more significant in the UV range. This illustrates how to quantify the losses introduced by the material, though, for an SC, it is also necessary to evaluate the potential gains which can be introduced. A viable SC must add enough spectral conversion to overcome its absorption losses. In the next section, we introduce a model for an "ideal SC", which could be used as a reference scale for different materials proposed as SCs. 
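As a concrete illustration of the framework above, the Python sketch below computes the reference "current output spectrum" and the back-of-envelope loss from a narrow absorption band. It is only illustrative: the file names and data format are assumptions, and the quoted figures (e.g. \(\sim\)24159 mA/m\({}^{2}\)) are only reproduced when the same reference data and 5 nm grid described above are used.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule, as used for all integrals in the proposed framework."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Assumed inputs (hypothetical file names): wavelength (nm) and value columns,
# resampled on a common 280-1100 nm grid with 5 nm resolution.
wl, irradiance = np.loadtxt("astm_g173_am15g.csv", delimiter=",", unpack=True)   # W m^-2 nm^-1
_, responsivity = np.loadtxt("si_responsivity.csv", delimiter=",", unpack=True)  # A W^-1

# "Current output spectrum": irradiance x responsivity, converted to mA m^-2 nm^-1.
current_spectrum = irradiance * responsivity * 1e3

total_current = trapz(current_spectrum, wl)                # ~24159 mA/m^2 with the reference data
flat = (wl >= 450) & (wl <= 900)
flat_current = trapz(current_spectrum[flat], wl[flat])     # ~75% of the total

# Back-of-envelope loss from a hypothetical 5 nm band at 600 nm absorbing 50% of the light:
# width x absorbed fraction x local current density (~40 mA m^-2 nm^-1) ~ 100 mA/m^2.
band_loss = 5.0 * 0.5 * np.interp(600.0, wl, current_spectrum)

print(f"total: {total_current:.0f} mA/m^2, 450-900 nm: {flat_current:.0f} mA/m^2")
print(f"5 nm band loss: {band_loss:.0f} mA/m^2 ({100 * band_loss / total_current:.2f}%)")
```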
Figure 3: Current output under 1.5G sunlight spectrum and the effect of a Ce-doped xerogel based film [167].

#### 3.3.4 The ideal SC We can think of different types of SCs, made from different compositions and having specific spectroscopic characteristics. However, to develop our reference for an ideal SC, it is enough to consider that it exhibits the following attributes:

* Full absorption of light below a certain wavelength;
* Full transparency above this same wavelength;
* Conversion of all the absorbed radiation into wavelengths near the Si sensitivity peak;
* No additional losses due to sunlight reflection or scattering.

The immediate implication is that this ideal SC will have more photons emitted than absorbed because we are converting high-energy photons into NIR ones without generating any phonons. Though a quantum efficiency as high as this is unlikely, the above considerations are enough to provide a simple and helpful reference. On the other hand, the wavelength that delimits the absorption and transparency of the SC is arbitrary. However, as discussed before, 75% of the current output in our framework originates from sunlight in the 450-900 nm range, so it seems reasonable not to allow absorption near that range. In this context, we chose the wavelength limit between absorption and transparency for our ideal SC to be 420 nm. The Si sensitivity response we are considering, as shown in Figure 2, peaks around 965 nm. An ideal SC should profit from that by emitting radiation near it. Here again, there is room for discussion on the shape and width of the SC luminescence band. A "laser-like" luminescence is quite unlikely, but on the other hand, a too-broad band would mean some photons falling below the Si bandgap. In this way, we choose a simple flat emission band between 940-990 nm, which is 50 nm wide and centered at 965 nm. Using the reference spectra shown in Figure 2, we could evaluate some estimates of this ideal SC effect on PV performance. The main results are presented in Figure 4 and discussed next.

Figure 4: Ideal SC effect on sunlight spectrum and current output.

By integrating the irradiance spectrum below 420 nm we obtain a value of 71.85 W m\({}^{-2}\) for the total radiation absorbed by the ideal SC. This radiation would produce 1324 mA m\({}^{-2}\) if it could reach the PV, and we will call it the "SC current drop" (C\({}_{D}\)). Accordingly, it represents a primary adverse effect of the SC on the PV performance. As the total current output without the SC accounts for 24159 mA m\({}^{-2}\), even for our ideal SC the current drop is significant, corresponding to 5.5% of the total. However, our ideal SC is supposed to have total efficiency in converting incident radiation. Assuming this same 71.85 W m\({}^{-2}\) is emitted around 965 nm, it would produce an additional current output of about 3113 mA m\({}^{-2}\), which we will call the "SC current gain" (C\({}_{G}\)). In other words, the net effect of an SC would be \[C_{G}-C_{D}=C_{N}, \tag{2}\] where \(C_{N}\) is the net current for the SC. For reference, the proposed ideal SC described above would result in \(C_{N}=1789\ mA\ m^{-2}\). This net current increment corresponds to a 7.4% increase in the total current output (from 24159 to 25948 mA m\({}^{-2}\)). Unfortunately, collecting data from most of the materials proposed as SCs in the literature is challenging, and this review does not include a comparison between them in terms of the net current described above. 
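Building on the previous sketch (and the same assumed reference data), the ideal-SC estimate of Equation 2 can be written as a small function. With the reference data it should give values close to the C\({}_{D}\approx 1324\) and C\({}_{G}\approx 3113\) mA m\({}^{-2}\) quoted above; this is a sketch under those assumptions, not part of the original study.

```python
import numpy as np

def trapz(y, x):
    # Trapezoidal rule over an irregular or regular wavelength grid.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def ideal_sc_net_current(wl, irradiance, responsivity,
                         cutoff=420.0, emission_band=(940.0, 990.0)):
    """Estimate C_N = C_G - C_D (Equation 2) for the 'ideal' spectral converter."""
    current_spectrum = irradiance * responsivity * 1e3        # mA m^-2 nm^-1

    # C_D: current lost because everything below the cutoff is absorbed by the SC.
    below = wl < cutoff
    c_drop = trapz(current_spectrum[below], wl[below])

    # Power absorbed below the cutoff (~71.85 W/m^2 with the reference data) ...
    absorbed_power = trapz(irradiance[below], wl[below])

    # ... assumed to be fully re-emitted as a flat band between 940 and 990 nm.
    lo, hi = emission_band
    band = (wl >= lo) & (wl <= hi)
    emitted = absorbed_power / (hi - lo)                      # W m^-2 nm^-1
    c_gain = trapz(emitted * responsivity[band] * 1e3, wl[band])

    return c_gain - c_drop, c_drop, c_gain                    # C_N, C_D, C_G
```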
We believe, though, that this balance of negative and positive impacts of the SCs is critical, and its evaluation under our proposed framework can accelerate the development of SCs and be helpful to all researchers in this field. In summary, our framework to characterize and compare materials proposed as SCs can be split into the following steps:

1. Measure the material absorption spectrum;
2. Define a thickness, depending on how this material is intended to be deployed in a PV (nanometer/micrometer film, thick sample, etc.);
3. Calculate the absorbance spectrum for this specific material and thickness;
4. Calculate C\({}_{D}\), due to this absorbance spectrum;
5. Estimate C\({}_{G}\), due to the SC luminescence;
6. Obtain the C\({}_{N}\) associated with the SC.

We intentionally used the term "estimate" for the fifth step, recognizing the complexity of measuring or calculating luminescence. Even though luminescence can be quantified for one sample in a specific range, measuring it from the visible to the infrared often requires different sensors, gratings, etc. It may be challenging to have calibrated equipment to perform luminescence measurements in such a broad range. Comparing the luminescence intensity between several samples can be even more difficult, as reflection and scattering of the pump source and luminescent-light self-absorption may vary among them (mainly for SCs containing resonant emitters, such as Yb\({}^{3+}\)). ## 4 Emerging trends ### Lowering carbon emission with Hydrogen Scientists are pursuing ways to reduce \(CO_{2}\) emissions related to glass production by replacing the natural gas used in the float process. One of the most promising approaches is replacing natural gas with hydrogen, though this also looks challenging. The energy content of these two gases is significantly different, and the combustion of each one results in different amounts of heat transfer through radiation. The modified atmosphere composition inside the furnace also seems to increase the water and NO\({}_{x}\) content. Consequently, recent studies have addressed the effect of this hydrogen-modified float process on the optical quality of the final glass sheet [168; 169; 170]. On the one hand, it seems interesting to look for alternatives to the SLS glass, or to modify it. On the other hand, it seems likely that the float process itself will need adjustments to become greener. In both cases, there is plenty of work for the glass science community in understanding and proposing methods to optimize the quality of float glass in light of this new challenge, so that the industry can operate within environmental boundaries. ### Alternative materials and methods A modern float SLS plant produces several thousand square meters of glass per hour [66], and some thin films may be produced inline with the process by chemical vapor deposition (CVD). Though this enables fast and cheap production of coatings, many others cannot be produced by the CVD method. In such a context, here we highlight some recent studies concerning materials, often produced by alternative methods, which are proposed for PV applications. Besides several glassy films [45; 171; 172] and glass-polymer matrices [167; 173; 174], one may find several innovative methods to introduce dopants [81] and coatings [175; 176], including even the use of organic molecules such as Chlorophyll [176]. 
As one can see, even though there are robust materials and techniques available at the industry level, the need to decarbonize the industry [168; 177; 170], expand PV lifespan and efficiency, and increase the reuse [178] and recyclability [179; 180; 181] of glass indicates a wide range of possibilities to be explored by glass scientists. ## 5 Discussion Glass is undoubtedly an essential part of PV devices, and there is room for glass-related breakthroughs that could result in expanded net energy production of silicon-based solar electricity. There is the possibility of developing CGs with reduced energy intensity, and the need to reduce emissions from the flat glass production process. On the other hand, particular features, such as self-cleaning and anti-reflective coatings, are already available at the industry level. Others, such as SCs, are still in the early research and development stage. Regarding the latter, there are countless works published in the last few decades, though the different techniques and methods used to characterize SC materials make it quite hard to compare them. Based on this fact, we proposed a benchmark framework for SCs, which could also be applied to UV filters for PVs. As we have demonstrated, even narrow absorption bands can significantly affect the current output, mainly if those bands are between 450-900 nm. Our framework considers the ASTM-G173-03 sunlight spectrum and a commercial Si sensor response, resulting in a "current output spectrum" more suitable for comparing spectrally selective materials. Such a model seems straightforward and effective. If a material is to be used to cover PVs, one should consider the spectral response of silicon and the overall net effect of this material on the electrical output. As we have discussed in a previous work [66], a class AAA solar simulator should reproduce the solar irradiance within a margin of \(\pm 25\%\) in six bands of the spectrum (five 100 nm wide and the last one between 900 and 1100 nm), using the AM1.5G as reference. Though such an approach is fundamental at the industry level to evaluate PVs, this low-resolution analysis is inadequate for comparing and developing spectrally selective materials, namely SCs and UV filters. As demonstrated here, even a narrow absorption band of a cover material just 5 nm wide can significantly reduce the PV current output. The proposed model is an essential step towards a standard for theoretically and experimentally comparing different SCs. Considering some of the emerging trends presented in this review, it seems clear that the glass industry is again heading for a revolution. On the one hand, there is not yet a candidate glass to replace the SLS in PVs; on the other, developing a fossil fuel-free float glass process to decarbonize the glass industry would itself be a breakthrough. This could also be an opportunity to develop a modified-SLS glass more suitable for this new environmentally friendly process, or even to develop new glass systems. In conclusion, continued research and development in the field of spectrally selective materials could lead to significant advancements in the efficiency and sustainability of PV technology, ultimately contributing to a cleaner and greener future. ## 6 Conclusion In this work, the literature on cover glasses and spectral converters has been reviewed. Several new glasses, glass ceramics, and multi-functional thins films have been investigated for PV applications in the last few years, and promising results have been reported. 
However, the quantitative comparison between these materials has not been performed consistently, owing to a lack of standardized parameters; here we propose an instructive model as a step towards such standardization. Considering the AM1.5G solar spectrum and the spectral response of Si, our model provides a path to quantify the effect of these new materials on the electrical output of a PV, no matter where the absorption and emission bands are located. By calculating both the current drop and gain due to the SC, we can theoretically evaluate the effect of the spectral properties of the material on the electrical output. There are no restrictions on the material's geometry, so the model can also compare bulk and thin-film materials. The solar photovoltaic industry remains focused on Silicon technology. There are predictions of a significant increase in the share of bifacial solar panels in the following decades, indicating that the flat glass demand for this sector can be expected to grow. In this context, innovation is needed, as it is mandatory to decarbonize the glass industry as soon as possible. Such a challenging task may be accomplished by replacing natural gas in the float process and developing modified SLS glasses. Besides that, expanding the efficiency or the lifespan of PVs will also contribute to reducing the environmental impact and, consequently, the cost of solar power worldwide. This contribution summarizes the role of the cover glass in PVs, highlighting some of the most recent and exciting research results on glassy materials for silicon solar photovoltaic applications. The glass community has plenty of opportunities to develop new materials and processes that may reduce our carbon emissions and environmental footprint. ## 7 Acknowledgement The authors would like to thank the _Conselho Nacional de Desenvolvimento Científico e Tecnológico_ (CNPq), Brazil, grant 409475/2021-1, and the Central de Análises laboratory at UTFPR-PB.
2310.07911
Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention
Scaling pre-trained language models has resulted in large performance gains in various natural language processing tasks but comes with a large cost in memory requirements. Inspired by the position embeddings in transformers, we aim to simplify and reduce the memory footprint of the multi-head attention (MHA) mechanism. We propose an alternative module that uses only a single shared projection matrix and multiple head embeddings (MHE), i.e. one per head. We empirically demonstrate that our MHE attention is substantially more memory efficient compared to alternative attention mechanisms while achieving high predictive performance retention ratio to vanilla MHA on several downstream tasks. MHE attention only requires a negligible fraction of additional parameters ($3nd$, where $n$ is the number of attention heads and $d$ the size of the head embeddings) compared to a single-head attention, while MHA requires $(3n^2-3n)d^2-3nd$ additional parameters.
Huiyin Xue, Nikolaos Aletras
2023-10-11T21:38:40Z
http://arxiv.org/abs/2310.07911v1
Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention ###### Abstract Scaling pre-trained language models has resulted in large performance gains in various natural language processing tasks but comes with a large cost in memory requirements. Inspired by the position embeddings in transformers, we aim to simplify and reduce the memory footprint of the multi-head attention (MHA) mechanism. We propose an alternative module that uses only a single shared projection matrix and multiple head embeddings (MHE), i.e. one per head. We empirically demonstrate that our MHE attention is substantially more memory efficient compared to alternative attention mechanisms while achieving high predictive performance retention ratio to vanilla MHA on several downstream tasks. MHE attention only requires a negligible fraction of additional parameters (\(3nd\), where \(n\) is the number of attention heads and \(d\) the size of the head embeddings) compared to a single-head attention, while MHA requires \((3n^{2}-3n)d^{2}-3nd\) additional parameters.1 Footnote 1: Code: [https://github.com/HUYINGUE/simpleMHE](https://github.com/HUYINGUE/simpleMHE) ## 1 Introduction Scaling pre-trained language models (PLMs) aims to enhance performance by increasing their size and capacity, leading to models with an unprecedented number of parameters Kaplan et al. (2020); Chowdhery et al. (2022); Hoffmann et al. (2022). Just by increasing the size of PLMs and the pre-training data has yielded state-of-the-art performance on various natural language processing (NLP) tasks Devlin et al. (2019); Liu et al. (2019); Clark et al. (2020); Raffel et al. (2020); Brown et al. (2020); Clark et al. (2022); Ouyang et al. (2022); Touvron et al. (2023). However, the pursuit of developing larger PLMs comes with large computational requirements. This has direct environmental implications such as large carbon emissions Lacoste et al. (2019); Strubell et al. (2019); Weidinger et al. (2022), conflicting with the principles of Green artificial intelligence development Schwartz et al. (2020). Moreover, scaling can hinder researchers with limited access to computing resources to participate in advancing the field Schwartz et al. (2020). This results in inequalities, where only a privileged few can actively contribute, potentially impeding diversity and inclusivity Weidinger et al. (2022). The backbone of transformers Vaswani et al. (2017) is the multi-head attention (MHA) module that extends the standard single-head attention (SHA) proposed by Cho et al. (2014). MHA applies an attention mechanism (i.e. head) multiple times for the same set of queries, keys and values by using a different set of parameters (i.e. projection matrices) for each of them. This results in MHA modules with a large memory footprint that increases with the number of layers and attention heads per layer in PLMs Devlin et al. (2019); Brown et al. (2020); Ouyang et al. (2022); Touvron et al. (2023). Figure 1 shows how the number of parameters of a single attention sublayer increases with its number of attention heads. Previous work has attempted to address this issue by proposing to share projection matrices or Figure 1: Number of parameters for an attention sublayer and different number of attention heads using multi-head attention MHA and our multi-head embedding attention MHE. We fix the dimension of attention to 64, only counting the parameters for projecting queries, keys, and values. 
eliminating them entirely to improve the parameter efficiency of MHA. Lan et al. (2020) proposed sharing projection parameters for keys, queries and values across layers, while Kitaev et al. (2020) introduced a method for sharing the projection matrix between keys and values within each transformer layer. Additionally, similar approaches use a multi-query attention approach that uses a pair of global projection matrices for keys and values in each layer Shazeer (2019); Chowdhery et al. (2022); Ainslie et al. (2023). Furthermore, Yan et al. (2021) eliminate the projection matrices entirely and directly treat the input hidden states as both keys and values. In a different direction, Lee-Thorp et al. (2022) propose models that replace the attention blocks with token-mixture blocks (i.e. using linear or Fourier transformations) that contain fewer or no parameters compared to MHA. Inspired by the position embeddings in transformers Vaswani et al. (2017); Devlin et al. (2019), we aim to simplify and reduce the memory footprint of the MHA mechanism. We achieve this using only a single projection matrix for each of the keys, queries and values respectively shared across all attention heads, and one embedding per head (MHE). Our contributions are as follows: * We propose MHE, a novel attention module that uses shared projection matrices across heads that are modified by corresponding embedding heads. Our method generates multiple attention heads requiring only a small fraction of additional parameters compared to single-head attention. * We empirically demonstrate that our MHE attention is substantially more parameter efficient compared to alternative attention mechanisms while achieving high predictive performance retention ratio (i.e. 92.9-98.7%) to MHA on several downstream tasks. MHE is \((3n^{2}-3n)d^{2}-3nd\) smaller than MHA for a single attention sublayer with \(n\) attention heads and a hidden dimension of \(d\) per head. ## 2 Related Work ### Model Compression To make PLMs memory efficient, previous work has focused on the following post-hoc model compression approaches Ganesh et al. (2021); Tay et al. (2022). QuantizationHubara et al. (2017) proposed representing weights using fewer bits to reduce memory requirements. Zadeh et al. (2020) introduced a method for identifying the outliers in weights and excluded them during quantization. Another direction involves additional training steps to adjust the quantized weights, i.e. quantization-aware training Zafrir et al. (2019); Boo and Sung (2020); Stock et al. (2020); Shen et al. (2020); Tambe et al. (2021); Tao et al. (2022). Bai et al. (2022) developed a more efficient post-training quantization approach that minimizes the reconstruction error incurred by quantization. PruningThese compression approaches remove entirely parts of the network such as weights close to zero Gordon et al. (2020); Mao et al. (2020); Chen et al. (2020) and weights that move towards zero during fine-tuning Sanh et al. (2020); Tambe et al. (2021). Different to operating on individual weights, previous work attempted to remove structured blocks of weights or even architectural components such as attention heads and encoder layers Fan et al. (2019); Prasanna et al. (2020); Khetan and Karnin (2020); Li et al. (2020); Lin et al. (2020); Tay et al. (2021). Knowledge DistillationThis set of techniques typically train a light-weight student model to mimic the outputs of a larger teacher PLM Sun et al. (2019); Li et al. (2020); Jiao et al. (2020); Sun et al. 
(2020); Li et al. (2021); Tahaei et al. (2022). In a similar direction, smaller PLMs have been recently fine-tuned on text generated by larger PLMs Chiang et al. (2023); Taori et al. (2023). Weight Matrix DecompositionPrevious work also proposed replacing large weight matrices by the product of two smaller ones for reducing model size and runtime memory. Weight matrix decomposition has been applied to linear layers Mao et al. (2020); Ben Noach and Goldberg (2020), the embedding matrix Lan et al. (2020); Tambe et al. (2021); Wang et al. (2022), and attention blocks Hu et al. (2022); Wang et al. (2022). Embedding Matrix CompressionFinally, various attempts have been introduced for compressing the embedding matrix during pre-training and fine-tuning Xue et al. (2022); Clark et al. (2022); Xue and Aletras (2022). ### Improving Attention Efficiency Previous work on making attention more efficient includes efforts towards (1) speeding-up pairwise computations between token representations; and (2) parameter efficiency. Computational EfficiencyWhile improving computational efficiency of attention is out of the scope of our paper, we provide a brief overview of previous work since it is complementary to parameter efficiency. One approach to speed up attention computation is by reducing the number of similarity computations between representations in different positions using predefined local windows, fixed or dynamic strides Child et al. (2019); Zaheer et al. (2020); Beltagy et al. (2020); Kitaev et al. (2020). Other methods leverage the approximation of SoftMax to change the order of matrix multiplications, resulting in lower computational complexity Katharoopoulos et al. (2020); Choromanski et al. (2021); Schlag et al. (2021); Qin et al. (2022). Related approaches along this direction proposed kernel functions that require additional parameters Choromanski et al. (2021); Wang et al. (2020). Finally, Dao et al. (2022) proposed improvements in GPU memory access to optimize and accelerate the MHA computation. Memory EfficiencyLan et al. (2020) introduced a method for sharing the projection parameters for queries, keys and values across transformer layers. Furthermore, Kitaev et al. (2020) proposed sharing the projection matrix between keys and values within each layer. Additionally, other methods use a multi-query attention approach that shares projection weights for keys and values across different heads Shazeer (2019); Chowdhery et al. (2022); Ainslie et al. (2023), while Yan et al. (2021) directly treat the input hidden states as both keys and values. In a different direction, Lee-Thorp et al. (2022) proposed replacing the attention blocks with faster token-mixture blocks consisting of a few parameters or no parameters at all. This includes methods such as linear or Fourier transformations in the token-mixture block. However, these approaches tend to yield lower predictive performance compared to MHA. ## 3 Multiple Head Embeddings Attention Inspired by the absolute position embeddings Vaswani et al. (2017); Devlin et al. (2019) for distinguishing the representation of the same token in different contexts, we propose Multiple Head Embeddings (MHE) attention. MHE uses a shared'seed' projection matrix that is subsequently combined with distinct head embeddings to generate multiple attention heads. ### Multi-head Attention (MHA) We first begin by formally defining MHA. 
MHA consists of different projection matrices (\(\mathbf{W}_{i}^{Q},\mathbf{W}_{i}^{K},\mathbf{W}_{i}^{V}\in\mathbb{R}^{d_{m} \times d_{h}},i=1,...,n\), where \(d_{m}\) is the dimension of the input representation and \(d_{h}\) is the dimension of \(n\) attention heads) for queries (\(Q\)), keys (\(K\)) and values (\(V\)) per head, \(3\times n\) in total. It is computed as follows: \[\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i} =\mathbf{X}\mathbf{W}_{i}^{Q,K,V} \tag{1}\] \[\mathbf{H}_{i} =\text{Att}(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i})\] (2) \[=\text{SoftMax}(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{\top}}{\sqrt {d_{h}}})\mathbf{V}_{i} \tag{3}\] Note that we use scale-dot attention, but our method can be used with any other attention mechanism. ### Seed Projection Matrix Unlike MHA that uses different projection matrices per head, MHE attention employs only a single projection matrix for each of the queries, keys and values, \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbb{R}^{d_{m}\times d_{h}}\). These matrices are shared across all attention heads. We obtain query, key and values projections of the input sequence \(\mathbf{X}\) as follows: \[\mathbf{Q},\mathbf{K},\mathbf{V}=\mathbf{X}\mathbf{W}^{Q,K,V} \tag{4}\] ### Attention Head Embeddings Using a seed projection matrix for \(\mathbf{Q},\mathbf{K},\mathbf{V}\) is equivalent to a single-head attention (SHA) module. Therefore, we need a mechanism to transform the seed projection matrices to obtain different attention head. For this purpose, we represent each attention head \(i\) by specific head embeddings \(\mathbf{e}_{i}^{Q},\mathbf{e}_{i}^{K},\mathbf{e}_{i}^{V}\in\mathbb{R}^{d_{h}},i =1,...,n\) for queries, key and values. These embeddings have a substantially smaller memory footprint compared to using different projection matrices per head. The contextualized representation \(\mathbf{H}_{i}\) of the entire input sequence \(\mathbf{X}\) for head \(i\) is computed as follows: \[\mathbf{\tilde{Q}}_{i},\mathbf{\tilde{K}}_{i},\mathbf{\tilde{V}} _{i} =\psi(\mathbf{Q};\mathbf{K};\mathbf{V},\mathbf{e}_{i}^{Q,K,V}) \tag{5}\] \[\mathbf{H}_{i} =\text{Att}(\mathbf{\tilde{Q}}_{i},\mathbf{\tilde{K}}_{i}, \mathbf{\tilde{V}}_{i}) \tag{6}\] where \(\psi(\cdot)\) is a function that modifies the query, key and value matrices with a corresponding head embedding \(\mathbf{e}_{i}\). ### Modifying Queries, Keys and Values with Head Embeddings We propose two MHE variants, one adds and the other multiplies the head embeddings with the seed projection matrices. MHE-Add:Motivated by the absolute position embedding Devlin et al. (2019), we use the addition operation in Equation 5, represented as \(\psi(\mathbf{A},\mathbf{b}):=\mathbf{A}+\mathbf{b}\), where \(\mathbf{A}\in\{\mathbf{Q},\mathbf{K},\mathbf{V}\}\) and \(\mathbf{b}\in\{\mathbf{e}^{Q},\mathbf{e}^{K},\mathbf{e}^{V}\}\) respectively. MHE-Mul:Likewise, motivated by the rotary position embedding Su et al. (2021), MHE-Mul employs multiplication as the integrating operation in Equation 5 as \(\psi(\mathbf{A},\mathbf{b}):=\mathbf{A}\odot(\mathbf{b}+1)\), where \(\odot\) represents the Hadamard product.2 Footnote 2: We add 1 to avoid elements in queries, keys and values become too small during initialization. Figure 2 shows an overview of the MHE mechanism compared to MHA. ## 4 Experimental Setup ### Attention Mechanisms We compare our MHE attention with the following attention mechanisms:3 Footnote 3: We have also experimented with Linear and Fourier tokenizer models Lee-Thorp et al. 
(2022) yielding substantially lower performance. For full results of these methods, see Appendix A. * **Multi-head Attention (MHA):** This is the original multi-head attention mechanism Vaswani et al. (2017); Devlin et al. (2019). * **Single-head Attention (SHA):** Similar to MHA but using only one attention head. * **EL-att:** Introduced by Yan et al. (2021), this attention variant completely eliminates the projection matrices for all keys and values. * **MQA:** Introduced by Shazeer (2019), this approach uses shared projection matrices for keys and values across all attention heads. Note that different projection matrices are used for queries across heads. * **SKV:** Introduced by Kitaev et al. (2020), this attention variant enforces keys and values to share the same projection matrix within each attention module. ### Data We experiment with a diverse range of tasks including: (1) two standard natural language understanding benchmarks in English, Glue Wang et al. (2018) and SuperGlue Wang et al. (2019); (2) two question and answering benchmarks in English, SQuAD v1.1 Rajpurkar et al. (2016) and SQuAD v2.0 Rajpurkar et al. (2018); (3) WMT-14 English-to-German machine translation Bojar et al. (2014); and (4) two language modelling datasets in English WikText-103 Merity et al. (2017) and Penn Treebank Marcus et al. (1993). ### Models We test all different attention variants on two architectures: (1) encoder-only transformer Devlin et al. (2019) and (2) encoder-decoder transformer Vaswani et al. (2017). Encoder-onlyFor Glue, SuperGlue, SQuAD v1.1 and SQuAD v2.0, we use a BERT-base architecture. This consists of 12 transformer layers, embedding size of 768, hidden states dimension of 768, 12 attention heads and a maximum sequence length of 512. Decoder-onlyWe also test a decoder-only model using the GPT2-base architecture on WikiText-103, Penn Treebank and Glue. GPT2-base consists of 12 transformer layers, embedding size of 768, hidden states dimension of 768, 12 attention heads and a maximum sequence length of 512. Figure 2: Multi-head attention (left) requires \(3\times n\) projection matrices for queries, keys and values (\(\mathbf{W}^{Q,K,V}\)) where \(n\) is the number of attention heads. Multi-head embedding attention (right) uses only three projection matrices and \(3\times n\) head embeddings. Encoder-decoderFor WMT-14, we train an encoder-decoder transformer from scratch. It consists of 12 layers (6 for the encoder and decoder respectively), an embedding size of 512, hidden states dimension of 512 and 8 attention-heads and a maximum sequence length of 100. We set the number of attention heads to 1 for all SHA models. Experimenting with larger models and different number of attention heads is out of the scope of our paper and left for future work due to limited access to computing resources. ### Implementation Details Pre-trainingWe pre-train all models on the English Wikipedia and BookCorpus Zhu et al. (2015) from HuggingFace Lhoest et al. (2021) for up to 1M steps with a batch size of 128. We choose masked language modelling as the pre-training objective. For all models, we use a 30K WordPiece vocabulary Devlin et al. (2019). Fine-tuning and TrainingFor Glue, SuperGLue, SQuAD v1.1 and SQuAD v2.0, we fine-tune all pre-trained models up to 20 epochs with early stopping fixing the batch size to 32. For each task, we use five different seeds and report the average. 
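As a reference alongside these implementation details, below is a minimal sketch of how the MHE attention of Section 3 could be implemented. It is written in PyTorch as an illustrative assumption (module and variable names are ours, masking and dropout are omitted, and an output projection is assumed as in standard MHA); the released code may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHEAttention(nn.Module):
    """Sketch of multi-head-embedding (MHE) attention: one shared Q/K/V projection
    of size d_model x d_head plus one d_head-dim embedding per head (Eqs. 4-6)."""

    def __init__(self, d_model=768, d_head=64, n_heads=12, variant="add"):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_head)   # shared 'seed' projections
        self.k_proj = nn.Linear(d_model, d_head)
        self.v_proj = nn.Linear(d_model, d_head)
        # Only 3 * n_heads * d_head extra parameters: one embedding per head for Q, K, V.
        self.head_emb = nn.Parameter(0.02 * torch.randn(3, n_heads, d_head))  # small init (assumption)
        self.out_proj = nn.Linear(n_heads * d_head, d_model)   # assumed output projection
        self.variant, self.scale = variant, d_head ** -0.5

    def forward(self, x):                                      # x: (batch, seq, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)  # each (batch, seq, d_head)
        eq, ek, ev = self.head_emb                             # each (n_heads, d_head)
        if self.variant == "add":                              # MHE-Add: psi(A, b) = A + b
            q = q.unsqueeze(1) + eq[None, :, None, :]
            k = k.unsqueeze(1) + ek[None, :, None, :]
            v = v.unsqueeze(1) + ev[None, :, None, :]
        else:                                                  # MHE-Mul: psi(A, b) = A * (b + 1)
            q = q.unsqueeze(1) * (eq[None, :, None, :] + 1)
            k = k.unsqueeze(1) * (ek[None, :, None, :] + 1)
            v = v.unsqueeze(1) * (ev[None, :, None, :] + 1)
        att = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (batch, heads, seq, seq)
        heads = att @ v                                        # (batch, heads, seq, d_head)
        heads = heads.transpose(1, 2).flatten(2)               # concatenate heads
        return self.out_proj(heads)
```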
We train the encoder-decoder model from scratch on the training set of WMT-14 English-to-German machine translation dataset up to 100K steps with a batch size of 256. WMT-14 contains 4.5M sentence pairs and evaluate on its test set. We train the tokenizer using byte-pair-encoding Sennrich et al. (2016) with 37K merging steps on the training set. We enable both source language and target language to share the vocabulary. We use one random seed and report the average on the last five epochs. We optimize all models using AdamW Loshchilov and Hutter (2019). HyperparametersHyperparameter selection details are in Appendix B. HardwareFor pre-training, we use four NVIDIA Tesla A100 GPUs and one for fine-tuning on downstream tasks. ### Predictive Performance Evaluation For Glue, SuperGlue, SQuAD v1.1 and SQuAD v2.0, we use the official metric of each task (see Appendix A for details on metrics for each task). We report F1 score for SQuAD v1.1 and SQuAD v2.0. We use BLEU to report performance in WMT-14 English-to-German machine translation task. We use perplexity (PPL) to report generative performance on WikiText-103 and Penn Treebank by fixing the stride length to 256. ### Memory Efficiency Evaluation Furthermore, we use the following metrics to measure and compare the memory efficiency of MHE and the baselines. * **Performance Retention Ratio:** We compute the ratio between the predictive performance of each attention mechanism compared to MHA upper-bound baseline performance (the higher the better). For direct indicator (e.g. accuracy etc.): \[\text{PRR}=\frac{\text{score}_{\text{model}}}{\text{score}_{\text{MHA}}}\] For inverse indicator (e.g. perplexity etc.): \[\text{PRR}=1-\frac{\text{score}_{\text{model}}-\text{score}_{\text{MHA}}}{ \text{score}_{\text{MHA}}}\] * **Performance Elasticity of Parameters:** Inspired by the concept of elasticity in economics Bittermann (1934), which measures the responsiveness of an economic variable (e.g. investment demand) to a change in another (e.g. interest rate), we extend it to measure the parameter utilization rate of a target model compared to the SHA lower-bound. The performance elasticity of parameters (PEoP) indicates how effectively parameters contribute to predictive performance, compared to SHA. It is computed as follows: For direct indicator (e.g. accuracy etc.): \[\text{PEoP}=\frac{(\text{score}_{\text{model}}/\text{score}_{\text{HA}})-1}{ (\text{params}_{\text{model}}/\text{params}_{\text{SHA}})-1}\] For inverse indicator (e.g. perplexity etc.): \[\text{PEoP}=-\frac{(\text{score}_{\text{model}}/\text{score}_{\text{SHA}})-1}{ (\text{params}_{\text{model}}/\text{params}_{\text{SHA}})-1}\] PEoP quantifies the extent to which a model's performance can be boosted with 1% additional parameters compared to a baseline model (the higher the better).4 Footnote 4: We subtract 1 in both nominator and denominator, following the original definition of elasticity. ## 5 Results ### Predictive Performance Comparison Table 1 presents results on Glue, SuperGlue, SQuAD v1.1 and SQuAD v2.0 for our MHE variants and all baselines. We first observe that both the performance of our MHE-Add and MHE-Mul are comparable to the vanilla MHA on two text classification benchmarks (80.4, 80.6 vs. 81.9 on average Glue and 69.1, 69.6 vs. 70.5 on average SuperGlue) with high performance retention ratios (PRR) between 97.9% and 98.7%. On question answering tasks SQuAD v1.1 and SQuAD v2.0, both MHE variants are also competitive, with PRRs higher than 93%. 
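For reference, the two evaluation metrics defined in the memory efficiency evaluation above reduce to a few lines of code. The sketch below is only illustrative; the example uses the rounded scores and parameter counts from Table 1, so it only approximates the reported PEoP values.

```python
def prr(score_model, score_mha, inverse=False):
    # Performance Retention Ratio with respect to the MHA upper bound.
    if inverse:   # e.g. perplexity: lower is better
        return 1 - (score_model - score_mha) / score_mha
    return score_model / score_mha

def peop(score_model, params_model, score_sha, params_sha, inverse=False):
    # Performance Elasticity of Parameters with respect to the SHA lower bound.
    gain = score_model / score_sha - 1
    cost = params_model / params_sha - 1
    return -gain / cost if inverse else gain / cost

# Example with rounded numbers from Table 1 (MHE-Mul on Glue); the exact reported
# PEoP differs slightly because the tables round the parameter counts.
print(prr(80.6, 81.9), peop(80.6, 8.88e6, 79.2, 8.85e6))
```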
Similar results are observed on the WMT-14 English-to-German machine translation task for the encoder-decoder transformer. According to Table 3, MHE-Add and MHE-Mul achieve BLEU scores of 23.0 and 23.6, respectively. The performance of MHE-Mul is only negligibly lower than that of MHA (24.8) while the model is substantially smaller.

Consistent results for the decoder-only transformer are shown in Table 2. The PRRs for MHE-Add and MHE-Mul on Glue are still high (i.e. 97.8% and 99.0%). Using the intrinsic metrics for evaluation, MHE-Mul leads to perplexities of 53.8 and 50.7 compared to 43.0 and 44.3 for MHA on WikiText-103 and Penn Treebank respectively, indicating a stable PRR of at least 74.9%.

In all tasks, MHE consistently outperforms SHA by a large margin (0.6-17.4 points) with only 0.03M extra parameters. For example, 69.6 vs. 67.1 in SuperGlue, 72.3 vs. 67.6 in SQuAD v2.0, 23.6 vs. 22.5 in WMT-14 and 53.8 vs. 62.0 PPL in WikiText-103 for the MHE-Mul variant. We also note that the MQA and SKV attention mechanisms generally perform better than MHE; however, they are 1.7 and 2.4 times larger than MHE, i.e. 15.34M and 21.23M vs. 8.88M parameters. It is worth noting that MHE-Mul outperforms El-Att on three out of five benchmarks, despite having nearly half the parameters in the attention module.

### Memory Efficiency Comparison

Our results so far indicate that performance increases with the number of attention mechanism parameters, which is expected. Next, we inspect how efficiently different attention mechanisms utilize their parameters.5

| Attention | #params | Glue (Acc / PRR / PEoP) | SuperGlue (Acc / PRR / PEoP) | SQuAD v1.1 (Acc / PRR / PEoP) | SQuAD v2.0 (Acc / PRR / PEoP) |
| --- | --- | --- | --- | --- | --- |
| SHA | 8.85M | 79.2 / 96.7 / - | 67.1 / 95.1 / - | 82.5 / 93.1 / - | 67.6 / 87.8 / - |
| MHA | 28.32M | 81.9 / 100.0 / 0.02 | 70.5 / 100.0 / 0.02 | 88.6 / 100.0 / 0.03 | 77.0 / 100.0 / 0.06 |
| EL-att | 14.16M | 80.3 / 98.0 / 0.02 | 69.5 / 98.5 / 0.06 | 86.5 / 97.6 / 0.08 | 72.2 / 93.8 / 0.11 |
| MQA | 15.34M | 81.3 / 99.2 / 0.04 | 69.3 / 98.2 / 0.04 | 86.7 / 97.9 / 0.07 | 74.8 / 97.1 / 0.15 |
| SKV | 21.23M | **81.4** / **99.4** / 0.02 | **69.9** / **99.1** / 0.03 | **88.1** / **99.4** / 0.05 | **75.9** / **98.6** / 0.09 |
| MHE-Add | 8.88M | 80.4 / 98.2 / 4.92 | 69.1 / 97.9 / 9.44 | 83.7 / 94.5 / 4.65 | 71.8 / 93.2 / 19.88 |
| MHE-Mul | 8.88M | 80.6 / 98.3 / **5.53** | 69.6 / 98.7 / **12.07** | 85.9 / 97.0 / **13.19** | 72.3 / 93.9 / **22.25** |

Table 1: Results of the encoder-only architecture on the Glue, SuperGlue, SQuAD v1.1 and SQuAD v2.0 dev sets with performance retention ratio (PRR) and performance elasticity of parameters (PEoP) over five runs. **Bold** values denote the best performing method in each benchmark.

| Attention | #params | Glue (Acc / PRR / PEoP) | WikiText-103 (PPL / PRR / PEoP) | Penn Treebank (PPL / PRR / PEoP) |
| --- | --- | --- | --- | --- |
| SHA | 8.85M | 75.3 / 97.2 / - | 62.0 / 55.8 / - | 68.1 / 46.3 / - |
| MHA | 28.32M | 77.5 / 100.0 / 0.01 | 43.0 / 100.0 / 0.14 | 44.3 / 100.0 / 0.16 |
| EL-att | 14.16M | 76.6 / 98.9 / 0.03 | 57.1 / 67.2 / 0.13 | 56.1 / 73.4 / 0.29 |
| MQA | 15.34M | 76.9 / 99.2 / 0.03 | 49.7 / 84.4 / 0.27 | 49.3 / 88.7 / 0.38 |
| SKV | 21.23M | **77.1** / **99.5** / 0.02 | **46.2** / **92.6** / 0.18 | **45.5** / **97.3** / 0.24 |
| MHE-Add | 8.88M | 75.8 / 97.8 / 2.18 | 54.0 / 74.4 / 41.29 | 55.3 / 75.2 / 60.15 |
| MHE-Mul | 8.88M | 76.7 / 99.0 / **5.92** | 53.8 / 74.9 / **42.32** | 50.7 / 85.6 / **81.76** |

Table 2: Results of the decoder-only architecture on the Glue dev sets and the WikiText-103 and Penn Treebank test sets with performance retention ratio (PRR) and performance elasticity of parameters (PEoP) over five runs. **Bold** values denote the best performing method in each benchmark.

Tables 1 and 3 show how parameter efficient our two MHE attention variants and all baselines are, measured in PEoP. Note that PEoP scores for SHA cannot be computed as it is used as the reference model. We also report PRR using MHA as a baseline for completeness; however, this metric does not take the model size into account.

Footnote 5: For a detailed report on the memory usage of different attention mechanisms, see Appendix C.

We first observe in Table 1 that both our MHE-Add and MHE-Mul achieve the highest PEoP scores on the two natural language understanding benchmarks (4.92, 5.53 on Glue, and 9.44, 12.07 on SuperGlue) and the two question answering tasks (4.65, 13.19 on SQuAD v1.1, and 19.88, 22.25 on SQuAD v2.0). In contrast, vanilla MHA results in the lowest PEoP scores among all models, as expected, ranging from 0.02 to 0.06. This indicates the memory inefficiency of MHA. The PEoPs of the more light-weight El-Att and SKV are similar to that of MHA (0.02) on average Glue, barely 4% of that of MHE, indicating that they are far more memory-inefficient compared to MHE.

Similar findings are observed on WMT-14 for the encoder-decoder models, depicted in Table 3. MHE-Add and MHE-Mul achieve PEoP scores of 20.0 and 27.9, respectively. In contrast, the PEoP scores of MHA, El-Att, MQA and SKV are close to zero (barely 0.1). This means that investing more parameters into their attention modules would not bring proportional benefits in predictive performance. Even for SKV, which is half the size of MHA and achieves a high PRR, increasing the number of parameters by 1% over SHA increases the BLEU score by a negligible 0.1%. However, with the same relative increase in parameters, our memory-efficient MHE-Mul is able to improve the BLEU score by 11.0%. Such a rate of return is 110 times larger than that of SKV. Leveraging the head embeddings by adding only a negligible number of parameters efficiently improves the predictive performance.

We further observe that MHE-Add and MHE-Mul are architecture-agnostic, obtaining similar memory efficiency for the decoder-only model in Table 2. Both our MHE-Add and MHE-Mul achieve the highest PEoP scores on the two language modelling benchmarks (41.29, 42.32 on WikiText-103 and 60.15 and 81.76 on Penn Treebank) and Glue (2.18 and 5.92).
At the same time, MHA fails to perform well on Glue and Penn Treebank, with a PEoP of 0.01 and 0.16 respectively. MHE-Add and MHE-Mul also consistently outperform the other efficient attention variants (i.e. EL-Att, MQA and SKV) by 72-340 times on PEoP across the three benchmarks. In all tasks, MHE consistently outperforms MHA by orders of magnitude in parameter efficiency. We also note that El-Att, MQA and SKV only lead to PEoP scores of the same magnitude as MHA. This highlights the superior parameter utilization of the MHE attention variants, achieving state-of-the-art memory efficiency.

### Theoretical Memory Complexity

Table 4 presents the theoretical memory complexity and the total number of parameters of our two MHE and baseline attention mechanisms in a single transformer sublayer. First, we see that the theoretical memory complexity of MHA and the other efficient variants (El-Att, MQA and SKV) is quadratic in the number of attention heads, while our two MHE variants are the only ones whose complexity is linear in the number of attention heads, similar to SHA. Taking a closer look at the rightmost column in Table 4, we observe that the number of extra parameters of all attention variants compared to SHA has a quadratic relationship to both the number \(n\) and the dimension \(d\) of attention heads, except for our two MHE variants. MHE only requires a relatively small fraction of additional parameters compared to SHA.

| Attention | #params | BLEU | PRR | PEoP |
| --- | --- | --- | --- | --- |
| SHA | 6.49M | 22.5 | 90.8 | - |
| MHA | 18.87M | 24.8 | 100.0 | 0.1 |
| EL-att | 9.44M | 23.9 | 96.6 | 0.1 |
| MQA | 10.62M | 24.2 | 97.6 | 0.1 |
| SKV | 14.16M | **24.7** | **99.5** | 0.1 |
| MHE-Add | 6.52M | 23.0 | 92.9 | 5.3 |
| MHE-Mul | 6.52M | 23.6 | 95.0 | **11.0** |

Table 3: BLEU scores on the WMT-14 English-to-German machine translation task with performance retention ratio (PRR) and performance elasticity of parameters (PEoP). **Bold** values denote the best performing method in each benchmark.

### Scaling the Number of Attention Parameters

Delving deeper into the effect of scaling on the memory footprint, we show in Figure 3 the total number of parameters needed for a single attention module (e.g. in an encoder layer). We fix the dimension of attention heads to 64, as commonly used by BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), GPT-2 Radford et al. (2019), BART Lewis et al. (2020) and T5 Raffel et al. (2020). In general, we note that the number of parameters in MHA could reach more than 200M if employing 128 attention heads. At the same time, SKV, MQA and El-Att would require 2/3, 1/3 and 1/3 of that number respectively. In contrast, MHE only accounts for 1% of the MHA parameters.

Moreover, we also present in Figure 4 the total number of parameters required across attention variants when stacking 12, 24 and 48 layers along with 32 and 64 attention heads respectively. We again fix the dimension of attention heads to 64. We can observe that, when the number of attention heads reaches 64, MHA with 24 layers already occupies more than 1B parameters, while El-Att and MQA reach 0.8B parameters with 48 layers. SKV takes 24 layers to reach 0.8B parameters. However, the total number of parameters in MHE attention does not exceed 0.1B even when scaling to 48 layers with 64 attention heads.
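The counts quoted in this subsection can be reproduced directly from the per-sublayer formulas in Table 4. The following short sketch is our own illustration of that arithmetic; like Table 4, it excludes the final projection that pools the attention heads.

```python
# Per-sublayer attention parameter counts following Table 4
# (n = number of heads, d = head dimension; output pooling projection excluded).
def params(variant, n, d, layers=1):
    per_layer = {
        "SHA":    3 * d * d * n,
        "MHA":    3 * d * d * n * n,
        "EL-att": d * d * n * n,
        "MQA":    d * d * n * n + 2 * d * d * n,
        "SKV":    2 * d * d * n * n,
        "MHE":    3 * d * d * n + 3 * d * n,
    }[variant]
    return per_layer * layers

# Figure 3 setting: d = 64, a single attention module with 128 heads.
print(params("MHA", n=128, d=64) / 1e6)           # ~201M, i.e. "more than 200M"
# Figure 4 setting: d = 64, 64 heads.
print(params("MHA", n=64, d=64, layers=24) / 1e9)  # ~1.2B, i.e. above 1B with 24 layers
print(params("MHE", n=64, d=64, layers=48) / 1e9)  # ~0.04B, well below 0.1B
```

With \(n=96\) heads, \(d=128\) and 96 layers, the same formulas give roughly 43.5B parameters for MHA and about 0.46B for MHE, in line with the GPT-3-scale estimates discussed next.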
It is also clear that scaling the attention module to 48 layers with 32 attention heads requires a comparable number of parameters to 12 layers with 64 attention heads for MHA, El-Att, MQA or SKV. This indicates that LLM developers have to choose between doubling the number of attention heads and cutting the number of layers down to a quarter when working under a tight memory budget. However, MHE does not suffer from such issues.

Further, we project these estimates to the popular GPT-3 model Brown et al. (2020). It is a decoder-only model with 96 decoder layers, 96 attention heads per layer, and a head dimension of 128. The vanilla multi-head attention module requires a massive 43.48B parameters. However, using MHE attention, this number can be significantly reduced to 0.46B parameters, i.e. approximately a reduction by 98.9%.6 Comparing this to other parameter-efficient attention variants such as EL-att (14.50B parameters), MQA attention (14.80B parameters), and SKV attention (28.99B parameters), it becomes evident that our MHE offers better memory efficiency. This makes it a compelling alternative for memory-constrained scenarios. See Appendix D for a detailed study on the robustness of MHE to model size changes (i.e. scaling).

Footnote 6: It would have been great to report results by pre-training our own MHE GPT-3 model, however this is prohibitive with the modest compute we have available.

| Attention | Complexity | #Params | #Params (extra vs. SHA) |
| --- | --- | --- | --- |
| SHA | \(\mathcal{O}(n)\) | \(3d^{2}n\) | 0 |
| MHA | \(\mathcal{O}(n^{2})\) | \(3d^{2}n^{2}\) | \((3n^{2}-3n)d^{2}\) |
| El-att | \(\mathcal{O}(n^{2})\) | \(d^{2}n^{2}\) | \((n^{2}-3n)d^{2}\) |
| MQA | \(\mathcal{O}(n^{2})\) | \(d^{2}n^{2}+2d^{2}n\) | \((n^{2}-n)d^{2}\) |
| SKV | \(\mathcal{O}(n^{2})\) | \(2d^{2}n^{2}\) | \((2n^{2}-3n)d^{2}\) |
| MHE-Add (ours) | \(\mathcal{O}(n)\) | \(3d^{2}n+3dn\) | \(3nd\) |
| MHE-Mul (ours) | \(\mathcal{O}(n)\) | \(3d^{2}n+3dn\) | \(3nd\) |

Table 4: Memory complexity regarding the number of parameters in each attention sublayer, while fixing the dimension of attention heads to \(d\). \(n\) denotes the number of attention heads. To simplify, the dimension of hidden states \(d_{m}\) is set to \(nd\). The last projection for pooling attention heads is excluded.

Figure 3: Number of parameters per attention sublayer, while scaling the number of attention heads in different attention variants. We fix the dimension of attention heads to 64.

## 6 Discussion

MHA enables the model to attend to information from different representation subspaces at different positions Vaswani et al. (2017). It uses distinct projection matrices for each attention head and integrates the information from these different representation subspaces. However, Vaswani et al. (2017) did not explore different methods for performing space transformations per head. Previous work has pointed out that over-parameterized models might have a low intrinsic dimension. Therefore, transforming the projection matrices to smaller low-rank ones usually does not severely harm model predictive performance (Li et al., 2018; Aghajanyan et al., 2020). Meanwhile, the classic MHA approach also does not impose any constraints on the orthogonality of these subspaces during pre-training and fine-tuning. The column vectors in those projection matrices could be highly collinear, i.e. the projection matrices could be rank-deficient.
As a result, its inner working mechanism could be simply understood as introducing levels of variation to the encoded representation of the same token at the same position across different heads. Our MHE approach is able to achieve memory efficiency (similar to SHA) together with a high PRR compared to MHA by mimicking the position embeddings for representing different attention heads. On one hand, the addition operation in MHE-Add is used for transforming the keys, queries and values. This can be seen as a small distortion of the subspace obtained through projection, followed by rotation. For an input representation, the difference between the projected and injected (i.e. through head embedding addition) queries, keys and values is a constant vector across any pair of heads. On the other hand, the MHE-Mul approach employs a multiplication operation, which more aggressively distorts and reshapes the keys, queries and values subspaces. Head embeddings in MHE-Mul play the role of scaling factors, stretching each dimension of the input representation. Thus, the difference between the keys, queries and values generated by different heads for the same input representation is a vector parallel to the projected input. This vector is dependent on the specific input, unlike the constant vector in MHE-Add. Interestingly, our experimental results consistently show that the multiplication operation outperforms addition in the majority of benchmarks. This corroborates the findings of a previous empirical study by Su et al. (2021) that compared rotary position embeddings (somewhat analogous to MHE-Mul) with absolute position embeddings (analogous to MHE-Add).

## 7 Conclusions

We have proposed MHE attention, which employs a single shared projection matrix along with multiple head embeddings to simplify and reduce the memory footprint of MHA. Our experimental results have demonstrated that MHE attention exhibits superior memory efficiency compared to other memory-efficient attention variants, while achieving a high predictive performance ratio relative to MHA on various downstream tasks. Compared to single-head attention, MHA requires \((3n^{2}-3n)d^{2}\) additional parameters for \(n\) attention heads and head dimensionality \(d\), while MHE barely requires a negligible \(3nd\). For future research, we plan to investigate scaling up MHE models and to explore their linguistic capabilities (Vulic et al., 2020; Koto et al., 2021).

## Limitations

We experiment only using 'base' size models without experimenting with larger architectures, due to limited access to computational resources. Similarly, we did not experiment with larger decoder-only architectures (Brown et al., 2020), which we leave for future work. We have not combined our MHE method with computationally efficient attention methods with linear complexity, such as Linformer (Wang et al., 2020). We expect that this would speed up the computation of MHE, but it is out of the scope of our paper.

## Acknowledgments

We would like to thank Constantinos Karouzos, Miles Williams and the anonymous reviewers for their invaluable feedback.
2304.05041
What Food Do We Tweet about on a Rainy Day?
Food choice is a complex phenomenon shaped by factors such as taste, ambience, culture or weather. In this paper, we explore food-related tweeting in different weather conditions. We inspect a Latvian food tweet dataset spanning the past decade in conjunction with a weather observation dataset consisting of average temperature, precipitation, and other phenomena. We find which weather conditions lead to specific food information sharing; automatically classify tweet sentiment and discuss how it changes depending on the weather. This research contributes to the growing area of large-scale social network data understanding of food consumers' choices and perceptions.
Maija Kāle, Matīss Rikters
2023-04-11T07:57:10Z
http://arxiv.org/abs/2304.05041v1
# What Food Do We Tweet about on a Rainy Day? ###### Abstract Food choice is a complex phenomenon shaped by factors such as taste, ambience, culture or weather. In this paper, we explore food-related tweeting in different weather conditions. We inspect a Latvian food tweet dataset spanning the past decade in conjunction with a weather observation dataset consisting of average temperature, precipitation, and other phenomena. We find which weather conditions lead to specific food information sharing; automatically classify tweet sentiment and discuss how it changes depending on the weather. This research contributes to the growing area of large-scale social network data understanding of food consumers' choices and perceptions. ## 1 Introduction This paper focuses on the relationship between food sentiment and weather using the previously collected Latvian Twitter Eater Corpus (LTEC (Sprogis and Rikters, 2020)). We seek to answer (1) is there a correlation between food sentiment and weather experienced at the time of tweeting and (2) what are the differences in the term frequencies of food mentioned depending on the weather. The rationale for this paper is to contribute to deeper understanding of human-food relationship, in particular in relation to weather data. We believe that with more nuanced knowledge of human-food relationships and factors influencing them, we can provide valuable inputs for public health policy makers when they develop their strategies and nudge consumers to choose more healthy options of food. Weather people - this is a term that Bakhshi (Maderer, 2014) used to explain our dependence on the weather regarding food choices and satisfaction with food. While the weather is known to alter consumers' mood significantly and consequently their behaviour (Bujisic et al., 2019), there have been surprisingly few studies that illustrate weather's impact on food perception and food choices, except some that have used online and offline restaurant reviews as a proxy of measuring it (Bakhshi et al., 2014; Bujisic et al., 2019). They find that weather impacts both the frequency of the feedback that food consumers provide, as well as its content. Typically, sunny and pleasant weather leads to more frequent and more positive feedback, since low levels of humidity and high levels of sunlight are associated with high mood. At the same time, reviews written on rainy or snowy days, namely days with precipitation, tend to have lower ratings. Instead of analysing restaurant reviews, we focus on Twitter, where food represents one of the key themes discussed, providing us with spontaneous reactions, which is a unique feature when compared to other data collection methods like reviews or food diaries (Puerta et al., 2020). Our analysis of the LTEC provides a food-related set of discussions that we can correlate with weather data, leading to the following research inquiries: 1) is there a correlation between food tweet sentiment and the weather that the tweet authors are experiencing at the time of tweeting? 2) what are the differences in terms of frequencies of what food is mentioned in tweets depending on weather? One of the reasons, why there are few weather-food choice related studies, is the lack of data - we do not have access to retailers' food sales data that could be correlated with the weather data. Instead, we are focusing how food is represented in social media - in particular Twitter, assuming that tweet is an appropriate proxy to measure sentiment related to food consumption. 
By analysing weather-related dynamics in LTEC, we contribute to the research field that links food and mood, adding weather impact on the mood. ## 2 Related Work Food consumption is a complex process that is impacted interchangeably by various endogenous factors, such as taste, quality, texture, colour and others, as well as exogenous or external factors ranging from demography, educational level, time of the day, weather, the ambience where it is consumed and others (Velasco et al., 2021; Bujisic et al., 2019). Mood is the determining factor in food choice, where good mood is associated with healthier food choices and bad mood with less healthy food choices (Spence, 2021). Food choice is also seasonally patterned in particular in areas with more seasonal climate in terms of temperature. Even though most of our modern lives are spent indoors, weather and climate conditions still impact our food preferences and consumption (Spence, 2021). While seasonal food consumption patterns are culture-based and differ in various geographical regions, weather-related preferences seem universal. Sunny and moderate temperature-wise weather leads to better mood, while more extreme weather (hot, cold, precipitation) is less pleasant and impacts mood, food consumption experiences. A large-scale study on demographics, weather, and restaurant reviews reveals that pleasant weather impacts not only the content but also the frequency that is higher than during non-pleasant weather conditions (Bakhshi et al., 2014). This is an important indicator that a review can serve as a proxy for measuring the weather's impact on mood and, thus, the food consumption experience. Consumer comments and word-of-mouth have also been studied in relation to weather, implying that consumers' pre-consumption mood directly influences post-consumption mood, and consumers' satisfaction with the service accordingly. Pre-consumption mood, is viewed via weather conditions, where eight weather-related variables have been considered, including visibility, rain, storm, humidity, wind speed, pressure. By including temperature, barometric pressure, and rain as variables reduces unexplained variance and improves results of the experiment. This study successfully links weather to mood and its transfer to affective experience and consumer behaviour (Bujisic et al., 2019). Considering previous studies that prove the link of weather to mood and food perception accordingly, with our work, we aim to illustrate this link via tweet sentiment evaluation. We refine our study by looking at frequencies - what foods authors tweet more in pleasant weather and unpleasant weather conditions, mapping the weather-related food scene in Latvian language Twitter. ## 3 Case Study of Latvia Latvia has four distinct seasons: winter is December to February, spring - March to May, summer - June to August, autumn - September to November. The average annual air temperature in Latvia is only +5.9\({}^{\circ}\)C. The warmest month is July, and the coldest months are January, February (also the snowiest). Months with the most precipitation are July and August, while the least is in February and March. The highest wind speeds are in November, December and January, and the lowest are in July and August (LVGMC, 2009). Latvia provides an example of a country in the Northern hemisphere with various weather conditions to analyse from the perspective of tweeting about food. Besides recognising weather data, Latvian national cuisine seasonality aspects should be considered. 
Specific foods are consumed in certain seasons in Latvia - cold soup in summer, and grey peas, tangerines and gingerbread for the Christmas season (Kale et al., 2021). This cultural context is important for interpreting weather-related patterns in food tweets. Other cyclical events that are present in any modern society should also be considered. Weather and seasonal celebrations are not the only factors that are cyclical in nature and correlate with the time of the year. There are other variables that correspond to the time of year and could be possible confounds, for example, school schedules, holiday seasons, election events, sport events, etc. While aware of such cyclical events, we do not highlight them here due to the lack of previous research to provide us with reference data. The only study about the timeline of food-related tweets in Latvia reveals that a slight decrease in food tweeting was observed on weekend evenings, and a significant one on weekend mornings (Kale et al., 2021). These results imply overall differences in mood and behaviour at various times of the day/meals: people tend to be more 'virtuous' in the mornings by choosing healthy and nutritious food, while snacking during afternoons (Spence, 2021). The nuances to consider can be categorised into individual circadian rhythms, culture- and climate-bound seasonality cycles, celebrations, and cyclical events. While being aware of those multiple factors, in this work we focus primarily on weather data, linking it with tweet sentiment without additional references to the cyclical nature of human life.

## 4 Data Collection and Processing

We used a combination of the LTEC for tweets and weather data exported from Meteostat1. We mainly focused on tweets and weather relating to Riga, the capital of Latvia, since most tweets with location data originated there, and it was difficult to obtain detailed historical weather data for the smaller regions.

Footnote 1: [https://meteostat.net/en/place/lv/riga](https://meteostat.net/en/place/lv/riga)

The LTEC has a total of 2.4M tweets generated by 169k users. It has been collected over ten years following 363 eating-related keywords in Latvian. Among the tweets, 167k have location metadata specified, of which 68k were from Riga and 9k more from areas around Riga. To further increase the number of location-related tweets, we selected all remaining tweets which mention Riga or any of its surrounding areas (Marupe, Kekava, etc.) in any valid inflected form. This added 54k tweets, for a total of 131,595.

In addition to location metadata, the LTEC provides all food items mentioned in the text and a separate subset of sentiment-annotated tweets for training sentiment analysis models. We use the 5,420 annotated tweets to fine-tune a multilingual BERT Devlin et al. (2019) model for this task, along with \(\sim\)20,000 sentiment-annotated Latvian tweets from other sources2. Evaluation was performed on the 743-tweet test set from the LTEC and reached an accuracy of 74.06%. We then use the model to automatically classify the location-specific tweets as positive, neutral or negative.

Footnote 2: [https://github.com/Usprogis/Latvian-Twitter-Eater-Corpus/tree/master/sub-corpora/sentiment-analysis](https://github.com/Usprogis/Latvian-Twitter-Eater-Corpus/tree/master/sub-corpora/sentiment-analysis)

We could reliably obtain only data for temperature and precipitation from Meteostat, while data for snowfall was only available up to 2017, and data for wind speed and air pressure was only available from 2018 onward.
There was no available data to trace daily sunshine, but it can be inferred from looking at precipitation, snowfall and air pressure. ### Limitations and Assumptions Our work has several important limitations that can be grouped into categories of 1) data availability, 2) tweet author's demographic profile, and 3) generalisation of the results. First, we could only obtain fairly superficial weather data while weather change during the same day was not considered due to lack of detail. Second, we cannot provide a demographic outlook of the usual tweet author in LTEC, and our analysis includes tweets by general digitally literate people active on Twitter. Third, considering the limitations discussed, our results are not an exact extrapolation of weather-related food perception in Latvian society. Nevertheless, our approach adds to the understanding of weather's impact on the part of the Latvian society which tweets about food. ## 5 Analysis and Results While the results of tweet sentiment in terms of the percentage of negative, neutral and positive tweets are largely the same for all weather conditions, we can observe considerably fewer positive tweets during windy weather and high-pressure, as shown in Table 2. Surprisingly, even during low-pressure weather conditions, tweets are not necessarily dominated by negative sentiment - quite the opposite - food tweets have been related to mostly positive sentiment. It could be explained by the fact that people are tweeting about comfort food (e.g. coffee, chocolate, other) or that any food could be comforting during days of low-pressure weather conditions. This remains to be answered in a more fine-grained manual analysis. The right part of Table 1 shows that tea exceeds coffee during cold weather, and there is also a slight increase in tweets about chocolate in cold weather, while the frequency of ice-cream tweets doubles in warm weather. Interestingly, in hot or cold weather tweet amount about meat, cake or soup remains largely similar. While warm weather tweets include strawberries, cold weather tweets include ginger-bread, which coincides with seasonal Christmas food. There are no other notable differences between warm and cold weather tweets, which leads to a conclusion that spending so much time indoors has harmonised foods tweeted about in different seasons and conditions. A slightly different result is revealed in the left part of Table 1, which indicates that during windy weather, meat becomes the most popular food item, while in rainy weather, the results are similar to cold weather where tea dominates. While it is difficult to explain this, a speculation could be that wind is less visible than temperature that is frequently reported in media or precipitation that is visually noticeable before leaving the home, and, thus, without proper clothing during windy weather one might become uncomfortably cold, which in turn could lead to higher willingness to consume meat. Chocolate is twice as popular during rainy weather than during windy weather, and it could be related to a lack of sunshine during rainy weather that needs to be compensated with chocolate, while a windy day can still be sunny. Only potatoes remain stable in terms of tweeting frequencies in any weather - warm, cold, windy or rainy. This can be explained by the fact that potatoes are part of a daily diet in Latvia and constitute the basis for energy intake. 
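As an illustration of how the classified tweets and the daily weather observations described in Section 4 can be combined for this kind of analysis, here is a minimal pandas sketch. The file names, column names and weather thresholds are hypothetical placeholders of our own, not the released LTEC or Meteostat schemas or the exact buckets used above.

```python
import pandas as pd

# Hypothetical inputs: one row per classified tweet and one row per day of weather.
tweets = pd.read_csv("riga_food_tweets.csv", parse_dates=["date"])     # date, sentiment
weather = pd.read_csv("riga_daily_weather.csv", parse_dates=["date"])  # date, tavg, prcp

merged = tweets.merge(weather, on="date", how="inner")

# Simple illustrative weather buckets; precipitation takes precedence here.
merged["condition"] = "mild"
merged.loc[merged["tavg"] <= 0, "condition"] = "cold"
merged.loc[merged["tavg"] >= 20, "condition"] = "warm"
merged.loc[merged["prcp"] > 0, "condition"] = "rainy"

# Share of positive / neutral / negative tweets per weather condition.
shares = (merged.groupby("condition")["sentiment"]
                .value_counts(normalize=True)
                .unstack(fill_value=0))
print(shares.round(3))
```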
## 6 Conclusion This paper contributes to understanding how weather impacts the mood of food consumers by examining influence on food tweets. The knowledge can be useful to public health policymakers and applied when nudging consumers to choose more healthy food alternatives in different weather conditions and seasons. Obesity, type 2 diabetes and cardiovascular diseases are just a few of the health problems acquired due to nutritional specifics (Min et al., 2019; Mai et al., 2011). The global spread of obesity has been labelled a pandemic and it is of utmost importance to understand the underlying factors behind food choice. Acknowledging and understanding the impact of weather on food consumers and their affective reactions helps explain the complexities associated - food waste, healthy vs. unhealthy choices and other issues. We also highlight the lack of weather data to obtain precise results. A more fine-grained and longitudinal weather data set could allow for higher precision for food tweet data correlation. Besides that, there should also be additional studies done with regard to other cyclical events encountered in modern lives - e.g. school schedule and holidays, annual sport events and others - to capture the impact of weather and non-weather related seasonality on food tweet sentiment. We aim to contextualise the behaviour of tweeting about food in a given geographical area and build a framework for more nuanced understanding of food-related discourse in Latvian language Twitter (Velasco et al., 2021). The contextual knowledge created can be helpful to researchers working with personalised food and health application model development, since humans are social beings, and peer behaviour impacts their choice. Furthermore, we wish to highlight how interconnected our digital and analogue lives are - following up the tweet sentiment and frequency indicators with actual purchasing behaviour and food sales data. We plan to release the tweet-weather dataset as an addition to the existing LTEC and make it public on GitHub.
2308.14299
Reinforcement Strategies in General Lotto Games
Strategic decisions are often made over multiple periods of time, wherein decisions made earlier impact a competitor's success in later stages. In this paper, we study these dynamics in General Lotto games, a class of models describing the competitive allocation of resources between two opposing players. We propose a two-stage formulation where one of the players has reserved resources that can be strategically pre-allocated across the battlefields in the first stage of the game as reinforcements. The players then simultaneously allocate their remaining real-time resources, which can be randomized, in a decisive final stage. Our main contributions provide complete characterizations of the optimal reinforcement strategies and resulting equilibrium payoffs in these multi-stage General Lotto games. Interestingly, we determine that real-time resources are at least twice as effective as reinforcement resources when considering equilibrium payoffs.
Keith Paarporn, Rahul Chandan, Mahnoosh Alizadeh, Jason R. Marden
2023-08-28T04:34:58Z
http://arxiv.org/abs/2308.14299v1
# Reinforcement Strategies in General Lotto Games ###### Abstract Strategic decisions are often made over multiple periods of time, wherein decisions made earlier impact a competitor's success in later stages. In this paper, we study these dynamics in General Lotto games, a class of models describing the competitive allocation of resources between two opposing players. We propose a two-stage formulation where one of the players has reserved resources that can be strategically _pre-allocated_ across the battlefields in the first stage of the game as reinforcements. The players then simultaneously allocate their remaining _real-time_ resources, which can be randomized, in a decisive final stage. Our main contributions provide complete characterizations of the optimal reinforcement strategies and resulting equilibrium payoffs in these multi-stage General Lotto games. Interestingly, we determine that real-time resources are at least twice as effective as reinforcement resources when considering equilibrium payoffs. ## I Introduction System planners must make investment decisions to mitigate the risks posed by disturbances or adversarial interference. In many practical settings, these investments are made and built over time, leading up to a decisive point of conflict. Security measures in cyber-physical systems and public safety are deployed and accumulated over long periods of time. Attackers can consequently use knowledge of the pre-deployed elements to identify vulnerabilities and exploits in the defender's strategy [2, 7, 31]. Many types of contests involves deciding how much effort to exert over multiple rounds of competition [1, 6, 15, 25, 30]. Indeed, investment decisions are dynamic, where early investments affect how successful a competitor is at later points in time. Many of these scenarios involve the strategic allocation of resources, exhibiting trade-offs between the costs of investing resources in earlier periods and reserving resources for later stages. In particular, an adversary is often able to learn how the resources were allocated in the earlier periods and can exploit this knowledge in later periods. In this manuscript, we seek to characterize the interplay between early and late resource investments. We study these elements in General Lotto games, a game-theoretic framework that describes the competitive allocation of resources between opponents. The General Lotto game is a popular variant of the classic Colonel Blotto game, wherein two budget-constrained players, \(A\) and \(B\), compete over a set of valuable battlefields. The player that deploys more resources to a battlefield wins its associated value, and the objective for each player is to win as much value as possible. Outcomes in the standard formulations are determined by a single simultaneous allocation of resources, i.e. they are typically studied as one-shot games. The formulations considered in this paper focus on a multi-stage version of the General Lotto game where one of the players can reinforce various battlefields before the competition begins by pre-allocating resources to battlefields; hence, we refer to these reinforcement strategies as pre-allocation strategies. More formally, our analysis is centered on the following multi-stage scenario: Player \(A\) is endowed with \(P\geq 0\) resources to be pre-allocated, and both players possess real-time resources \(R_{A},R_{B}\geq 0\) to be allocated at the time of competition. 
In the first stage, player \(A\) decides how to deploy the pre-allocated resources \(P\) over the battlefields. The pre-allocation decision is binding and known to player \(B\). In the final stage, both players engage in a General Lotto game where they simultaneously decide how to deploy their real-time resources, and payoffs are subsequently derived. The pre-allocated resources may represent, for example, the installation of anti-virus tools on system servers. The capabilities of anti-virus software are typically static and well-known, and thus a potential attacker would have knowledge about the system's base level of defensive capability. However, the attacker would not generally have knowledge about the system's placement of intrusion-detection systems, which are often dynamic and part of a "moving target defense" strategy [5, 32, 33]. Moreover, attackers' strategies must be unpredictable in an attempt to exploit defenses. Thus, the use of real-time resources in our model represents such dynamic and unpredictable strategies. A full summary of our contributions is provided below. **Our Contributions:** Our main contribution in this paper is a full characterization of equilibrium strategies and payoffs to both players in the aforementioned two-stage General Lotto game (Theorem 3.1). By characterizing these optimal reinforcement strategies, we are able to provide Pareto frontiers for player \(A\) as one balances a combination of real-time and pre-allocated resources (Lemma 4.1). Interestingly, Theorem 4.1 demonstrates that real-time resources are at least twice as effective as pre-allocated resources when considering the equilibrium payoff of player \(A\). Our second set of results in this manuscript focus on the optimal investment levels of pre-allocated and real-time resources. Rather than player \(A\) being equipped with a fixed budget of resources \((P,R_{A})\), we rather consider a setting where player \(A\) has a monetary budget \(M_{A}\) and each type of resource is associated with a given per-unit cost. Building upon the above characterization of the optimal reinforcement strategies in Theorem 3.1, in Theorem 4.2 we characterize the optimal investment strategies for this per-unit cost variant of the two-stage General Lotto game. This provides an understanding of the precise combination of pre-allocated and real-time resources that optimize player \(A\)'s equilibrium payoff. Our last contribution focuses on a variant of this General Lotto game where both players can employ pre-allocated resources. In particular, we consider a scenario where player \(B\) is able to respond to player \(A\)'s pre-allocation with its own pre-allocated resources, before engaging in the final-stage General Lotto game. This is formulated as a Stackelberg game, where both players have monetary budgets \(M_{A},M_{B}\) and per-unit costs for investing in the two types of resources. We fully characterize the Stackelberg equilibrium (Proposition 5.1), which highlights that having the opportunity to respond to an opponent's early investments can significantly improve one's eventual performance. **Related works:** This manuscript takes steps towards understanding the competitive allocation of resources in multi-stage scenarios. There is widespread interest in this research objective that involves the analysis of zero-sum games [14], [20], [21], differential or repeated games [13], [27], and Colonel Blotto games [1], [19], [26], [29]. 
The goal of many of these works is to develop tools to compute decision-making policies for agents in adversarial and uncertain environments. In comparison, our work provides explicit, analytical characterizations of equilibrium strategies, which draws sharper insights that relate the players' performance with the various elements of adversarial interaction. As such, our work is related to a recent research thread in which allocation decisions are made over multiple stages [4], [9], [10], [17], [19], [23], [26], [29]. Our work also draws significantly from the primary literature on Colonel Blotto and General Lotto games [8], [18], [24], [28]. In particular, the simultaneous-move subgame played in the final stage of our formulations was first proposed by Vu and Loiseau [28], and is known as the _General Lotto game with favoritism_ (GL-F). Favoritism refers to the fact that pre-allocated resources provide an incumbency advantage to one player's competitive chances. Their work establishes existence of equilibria and develops computational methods to calculate them to arbitrary precision. However, this prior work considers pre-allocated resources as exogenous parameters of the game. In contrast, we model the deployment of pre-allocated resources as a strategic element of the competitive interaction. Furthermore, we provide the first analytical characterizations of equilibria and the corresponding payoffs in GL-F games. ## 2 Problem formulation The _General Lotto game with pre-allocations_ (GL-P) is a two-stage game with players \(A\) and \(B\), who compete over a set of \(n\) battlefields, denoted as \(\mathcal{B}=\{1,\ldots,n\}\). Each battlefield \(b\in\mathcal{B}\) is associated with a known valuation \(w_{b}>0\), which is common to both players. Player \(A\) is endowed with a pre-allocated resource budget \(P>0\) and a real-time resource budget \(R_{A}>0\). Player \(B\) is endowed with a budget \(R_{B}>0\) of real-time resource, but no pre-allocated resources. The two stages are played as follows: _- Stage 1 (pre-allocation):_ Player \(A\) decides how to allocate her \(P\) pre-allocated resources to the battlefields, i.e., it selects a vector \(\mathbf{p}=(p_{1},\ldots,p_{n})\in\Delta_{n}(P):=\{\mathbf{p}^{\prime}\in\mathbb{R}_{ +}^{n}:\|\mathbf{p}^{\prime}\|_{1}=P\}\). We term the vector \(\mathbf{p}\) as player \(A\)'s _pre-allocation profile_. No payoffs are derived in Stage 1, and \(A\)'s choice \(\mathbf{p}\) becomes binding and common knowledge. _- Stage 2 (decisive point of conflict):_ Players \(A\) and \(B\) then compete in a simultaneous-move sub-game with their real-time resource budgets \(R_{A}\), \(R_{B}\). Here, both players can randomly allocate these resources as long as their expenditure does not exceed their budgets in expectation. Specifically, a strategy for player \(i\in\{A,B\}\) is an \(n\)-variate (cumulative) distribution \(F_{i}\) over allocations \(\mathbf{x}_{i}\in\mathbb{R}_{+}^{n}\) that satisfies \[\mathbb{E}_{\mathbf{x}_{i}\sim F_{i}}\left[\sum_{b\in\mathcal{B}}x_{i,b}\right] \leq R_{i}. \tag{1}\] We use \(\mathcal{L}(R_{i})\) to denote the set of all strategies \(F_{i}\) that satisfy (1). 
Given that player \(A\) chose \(\mathbf{p}\) in Stage 1, the expected payoff to player \(A\) is given by \[U_{A}(\mathbf{p},F_{A},F_{B}):=\mathbb{E}_{\mathbf{x}_{A}\sim F_{B}}\left[\sum_{b\in \mathcal{B}}w_{b}\cdot\mathds{1}\{x_{A,b}+p_{b}\geq qx_{B,b}\}\right] \tag{2}\] where \(\mathds{1}\{\cdot\}\) is the usual indicator function taking a value of 1 or 0.1 In words, player \(B\) must overcome player \(A\)'s pre-allocated resources \(p_{b}\) as well as player \(A\)'s allocation of real-time resources \(x_{A,b}\) in order to win battlefield \(b\). The parameter \(q>0\) is the relative quality of player \(B\)'s real-time resources against player \(A\)'s resources. For simpler exposition, we will simply set \(q=1\), noting that all of our results are easily attained for any other value of \(q\). The payoff to player \(B\) is \(U_{B}(\mathbf{p},F_{A},F_{B})=1-U_{A}(\mathbf{p},F_{A},F_{B})\), where we assume without loss of generality that \(\sum_{b\in\mathcal{B}}w_{b}=1\). Footnote 1: The tie-breaking rule (i.e., deciding who wins if \(x_{A,b}+p_{b}=x_{B,b}\)) can be assumed to be arbitrary, without affecting any of our results. This property is common in the General Lotto literature, see, e.g., [18], [28]. Stages 1 and 2 of GL-P are illustrated in Figure 1. We specify an instantiation of the game as GL-P\((P,R_{A},R_{B},\mathbf{w})\). We focus on the subgame-perfect equilibrium solution concept. **Definition 2.1**: _A profile \((\mathbf{p}^{*},F_{A}^{*}(\mathbf{p}),F_{B}^{*}(\mathbf{p}))\) where \(\mathbf{p}^{*}\in\Delta_{n}(P)\) and \(F_{i}^{*}(\mathbf{p}):\Delta_{n}(P)\rightarrow\mathcal{L}(R_{i})\), for \(i=A,B\), is a subgame-perfect equilibrium (SPE) if the following conditions hold._ * For any \(\mathbf{p}\in\Delta_{n}(P)\), \((F^{*}_{A}(\mathbf{p}),F^{*}_{B}(\mathbf{p}))\) constitutes a Nash equilibrium of the Stage 2 subgame: \[\begin{split} U_{A}(\mathbf{p},F^{*}_{A}(\mathbf{p}),F^{*}_{B}(\mathbf{p}))& \geq U_{A}(\mathbf{p},F_{A},F^{*}_{B}(\mathbf{p}))\\ \text{and}& U_{B}(\mathbf{p},F^{*}_{A}(\mathbf{p}),F^{*}_{B}( \mathbf{p}))&\geq U_{B}(\mathbf{p},F^{*}_{A}(\mathbf{p}),F_{B})\end{split}\] (3) for any \(F_{A}\in\mathcal{L}(R_{A})\) and \(F_{B}\in\mathcal{L}(R_{B})\). * The pre-allocation \(\mathbf{p}^{*}\) satisfies \[U_{A}(\mathbf{p}^{*},F^{*}_{A}(\mathbf{p}^{*}),F^{*}_{B}(\mathbf{p}^{*}))\geq U_{A}(\mathbf{p},F^{*}_{A}(\mathbf{p}),F^{*}_{B}(\mathbf{p}))\] (4) for any \(\mathbf{p}\in\Delta_{n}(P)\). In an SPE, the players select their Stage 2 strategies conditioned on player \(A\)'s choice of pre-allocation \(\mathbf{p}\) in Stage 1, such that \(F^{*}_{A}(\mathbf{p}),F^{*}_{B}(\mathbf{p})\) forms a Nash equilibrium of the one-shot subgame of Stage 2. We stress the importance of the common knowledge assumption for the pre-allocation choice \(\mathbf{p}\) before Stage 2 - over time, an opponent is likely to learn the placement of past resources and would be able to exploit this knowledge at a later point in time. The second condition in the above definition asserts that player \(A\)'s SPE pre-allocation \(\mathbf{p}^{*}\) in Stage 1 optimizes its equilibrium payoff in the subsequent Stage 2 subgame. We remark that the Stage 2 subgame has been studied in the recent literature, where it is termed a _General Lotto game with Favoritism_[28]. We denote it as GL-F\((\mathbf{p},R_{A},R_{B})\). There, a pre-allocation vector \(\mathbf{p}\) is viewed as an exogenous fixed parameter, whereas in our GL-P formulation, it is an endogenous strategic choice. 
It is established in [28] that a Nash equilibrium exists and its payoffs are unique for any instance of GL-F\((\mathbf{p},R_{A},R_{B})\). Consequently, the players' SPE payoffs in our GL-P game are necessarily unique. We will denote \(\pi^{*}_{i}(P,R_{A},R_{B}):=U_{i}(\mathbf{p}^{*},F^{*}_{A}(\mathbf{p}^{*}),F^{*}_{B}(\mathbf{p}^{*}))\), \(i\in\{A,B\}\), as the players' payoffs in an SPE when the dependence on the vector \(\mathbf{w}\) is clear. While [28] provided numerical techniques to compute an equilibrium of GL-F\((\mathbf{p},R_{A},R_{B})\) to arbitrary precision, analytical characterizations of them (e.g. closed-form expressions) were not provided. In the next section, we develop techniques to derive such characterizations, as they are required to precisely express the SPE of the GL-P game.

## 3 Equilibrium characterizations

In this section, we present our main results regarding the characterization of players' SPE payoffs in the GL-P game. These results highlight the relative effectiveness of pre-allocated vs real-time resources.

### Main results

The result below provides an explicit characterization of the players' payoffs in an SPE of the two-stage GL-P game.

**Theorem 3.1**: _Consider the game GL-P\((P,R_{A},R_{B},\mathbf{w})\). Player \(A\)'s payoff \(\pi^{*}_{A}(P,R_{A},R_{B})\) in an SPE is given as follows:_

1. _If_ \(R_{B}\leq P\)_, or_ \(R_{B}>P\) _and_ \(R_{A}\geq\frac{2(R_{B}-P)^{2}}{P+2(R_{B}-P)}\)_, then_ \(\pi^{*}_{A}(P,R_{A},R_{B})\) _is_ \[1-\frac{R_{B}}{2R_{A}}\left(\frac{R_{A}+\sqrt{R_{A}(R_{A}+2P)}}{P+R_{A}+\sqrt{R_{A}(R_{A}+2P)}}\right)^{2}.\] (5)
2. _If_ \(R_{B}>P\) _and_ \(0<R_{A}<\frac{2(R_{B}-P)^{2}}{P+2(R_{B}-P)}\)_, then_ \(\pi^{*}_{A}(P,R_{A},R_{B})\) _is_ \[\frac{R_{A}}{2(qR_{B}-P)}.\] (6)
3. _If_ \(R_{A}=0\)_, then_ \(\pi^{*}_{A}(P,R_{A},R_{B})\) _is_ \[\left(1-\min\left\{\frac{R_{B}}{P},1\right\}\right)\] (7)

_Player_ \(B\)_'s SPE payoff is given by_ \(\pi^{*}_{B}(P,R_{A},R_{B})=1-\pi^{*}_{A}(P,R_{A},R_{B})\)_. In all instances, player_ \(A\)_'s SPE pre-allocation is_ \(\mathbf{p}^{*}=\mathbf{w}\cdot P\)_._

A visualization of the parameter regimes of the three cases above is shown in the right plot of Figure 1. Note that the standard General Lotto game (without pre-allocations, [11]) is included as the vertical axis at \(P=0\). An illustration of the SPE payoffs to player \(A\) is shown in the center plot of Figure 1. We notice that given a sufficiently high amount of pre-allocated resources (i.e. \(P>R_{B}\)), player \(A\) can attain a positive payoff even without any real-time resources (\(R_{A}=0\)). For \(P<R_{B}\), player \(A\) receives zero payoff since player \(B\) can simply exceed the pre-allocation on every battlefield. Observe that the SPE payoff \(\pi_{A}^{*}(P,R_{A},R_{B})\) exhibits diminishing marginal returns in \(R_{A}\) and in \(P\) for larger values of \(P\), but is not in general a concave function in \(P\) - see the \(R_{A}=0.5\) curve, which has an inflection.

Figure 1: (Left) The two-stage General Lotto game with Pre-allocations (GL-P). Players \(A\) and \(B\) compete over \(n\) battlefields, whose valuations are given by \(\{w_{b}\}_{b=1}^{n}\). In Stage 1, player \(A\) decides how to deploy \(P\) pre-allocated resources to the battlefields. Player \(B\) observes the deployment. In Stage 2, the players simultaneously decide how to deploy their real-time resources \(R_{A}\) and \(R_{B}\), and final payoffs are determined. (Center) This plot shows the SPE payoff to player \(A\) under varying resource endowments (Theorem 3.1). Obtaining more pre-allocated resources improves the payoff with decreasing marginal returns. Here, we have fixed \(R_{B}=1\). (Right) The characterization of the SPE payoff is broken down into three separate cases in the game's parameters. These are shown as the three regions in this plot, here parameterized by \(P\) and \(R_{A}\), which correspond to the items in Theorem 3.1.

**Proof approach and outline:** The derivation of the SPE payoffs in Theorem 3.1 follows a backwards induction approach. First, for any fixed pre-allocation vector \(\boldsymbol{p}\), one characterizes the equilibrium payoff of the Stage 2 sub-game GL-F\((\boldsymbol{p},R_{A},R_{B})\). We denote this payoff as \[\pi_{A}(\boldsymbol{p},R_{A},R_{B}):=U_{A}(\boldsymbol{p},F_{A}^{*}(\boldsymbol{p}),F_{B}^{*}(\boldsymbol{p})) \tag{8}\] where \(F_{A}^{*}(\cdot),F_{B}^{*}(\cdot)\) satisfy the first condition of Definition 2.1. Then, the SPE payoff is calculated by solving the following optimization problem, \[\pi_{A}^{*}(P,R_{A},R_{B})=\max_{\boldsymbol{p}\in\Delta_{n}(P)}\pi_{A}(\boldsymbol{p},R_{A},R_{B}). \tag{9}\] The following outline is used to derive the SPE strategies and payoffs.

**Part 1:** We first detail analytical methods to derive the equilibrium payoff \(\pi_{A}(\boldsymbol{p},R_{A},R_{B})\) of the second stage subgame GL-F\((\boldsymbol{p},R_{A},R_{B})\).

**Part 2:** We show that \(\boldsymbol{p}^{*}=\boldsymbol{w}\cdot P\) is an SPE pre-allocation, i.e. it solves the optimization problem (9).

**Part 3:** We derive the analytical expressions for \(\pi_{A}^{*}\) reported in Theorem 3.1.

Each one of the three parts has a corresponding Lemma that we present in the following subsection.

### Proof of Theorem 3.1

**Part 1:** The recent work of Vu and Loiseau [28] provides a method to derive an equilibrium of the General Lotto game with Favoritism GL-F\((\boldsymbol{p},R_{A},R_{B})\). This method involves solving the following system2 of two equations for two unknown variables \((\kappa_{A},\kappa_{B})\in\mathbb{R}_{++}^{2}\):

Footnote 2: The problem settings considered in [28] are more general, which considers two-sided favoritism (i.e. \(p_{b}<0\) for some \(b\)). However, exact closed-form solutions were not provided. The paper [28] provided computational approaches to calculate an equilibrium to arbitrary precision.

\[R_{A}\!=\!\sum_{b=1}^{n}\frac{[h_{b}(\kappa_{A},\kappa_{B})-p_{b}]^{2}}{2w_{b}\kappa_{B}},\ R_{B}\!=\!\sum_{b=1}^{n}\frac{h_{b}^{2}(\kappa_{A},\kappa_{B})-p_{b}^{2}}{2w_{b}\kappa_{A}} \tag{10}\]

where \(h_{b}(\kappa_{A},\kappa_{B}):=\min\{w_{b}\kappa_{B},w_{b}\kappa_{A}+p_{b}\}\) for \(b\in\mathcal{B}\). The two equations above correspond to the expected budget constraint (1) for both players. There always exists a solution \((\kappa_{A}^{*},\kappa_{B}^{*})\in\mathbb{R}_{++}^{2}\) to this system [28], which allows one to calculate the following equilibrium payoffs.

**Lemma 3.1** (Adapted from [28]): _Suppose \((\kappa_{A}^{*},\kappa_{B}^{*})\in\mathbb{R}_{++}^{2}\) solves (10). Let \(\mathcal{B}_{1}:=\{b\in\mathcal{B}:h_{b}(\kappa_{A}^{*},\kappa_{B}^{*})=w_{b}\kappa_{B}^{*}\}\) and \(\mathcal{B}_{2}=\mathcal{B}\backslash\mathcal{B}_{1}\).
Then there is a Nash equilibrium \((F_{A}^{*},F_{B}^{*})\) of GL-F\((\boldsymbol{p},R_{A},R_{B})\) where player \(A\)'s equilibrium payoff is given by_ \[\pi_{A}(\boldsymbol{p},R_{A},R_{B}) =\sum_{b\in\mathcal{B}_{1}}w_{b}\!\left[1-\frac{\kappa_{B}^{*}}{ 2\kappa_{A}^{*}}\left(1-\frac{p_{i}^{2}}{(w_{b}\kappa_{B})^{2}}\right)\right] \tag{11}\] \[\quad+\sum_{b\in\mathcal{B}_{2}}w_{b}\frac{\kappa_{A}^{*}}{2 \kappa_{B}^{*}}\] _and the equilibrium payoff to player \(B\) is \(\pi_{B}(\boldsymbol{p},R_{A},R_{B})=1-\pi_{A}(\boldsymbol{p},R_{A},R_{B})\)._ Lemma 3.1 provides an expression for \(\pi_{A}(\boldsymbol{p},R_{A},R_{B})\) in terms of a solution \((\kappa_{A}^{*},\kappa_{B}^{*})\) to the system of equations (10). However, in order to study the optimization (9), we need to be able to either find closed-form expressions for the solution \((\kappa_{A}^{*},\kappa_{B}^{*})\) in terms of the defining game parameters \(\boldsymbol{p},R_{A},R_{B},\boldsymbol{w}\), or establish certain properties about the payoff function (11), such as concavity in \(\boldsymbol{p}\). Unfortunately, we find that this function is not generally concave for \(\boldsymbol{p}\in\Delta_{n}(P)\). Our approach in Part 2 is to show that it is always increasing in the direction pointing to \(\boldsymbol{p}^{*}\). **Part 2:** This part of the proof is devoted to showing that \(\boldsymbol{p}^{*}=\boldsymbol{w}\cdot P\) is an SPE pre-allocation for player \(A\). This divides the total pre-allocated resources \(P\) among the battlefields proportionally to their values \(w_{b}\), \(b\in\mathcal{B}\). **Lemma 3.2**: _The vector \(\boldsymbol{p}^{*}=\boldsymbol{w}\cdot P\) is an SPE pre-allocation._ Equivalently, \(\boldsymbol{p}^{*}\) solves the optimization problem (9). Proof: The proof will follow two sub-parts, 2-a and 2-b. In part 2-a, we first establish that \(\boldsymbol{p}^{*}\) is a local maximizer of \(\pi_{A}(\boldsymbol{p},R_{A},R_{B})\), which necessarily occurs when either \(\mathcal{B}_{1}=\mathcal{B}\) or \(\mathcal{B}_{2}=\mathcal{B}\). In part 2-b, we show that no choice of \(\boldsymbol{p}\in\Delta_{n}(P)\) that results in both sets \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) being non-empty achieves a higher payoff than \(\pi_{A}(\boldsymbol{p}^{*},R_{A},R_{B})\), thus establishing Lemma 3.2. **Part 2-a:**\(\boldsymbol{p}^{*}\) _is a local maximizer of \(\pi_{A}(\boldsymbol{p},R_{A},R_{B})\)._ From Lemma 3.1 and the definition of \(h_{b}(\kappa_{A},\kappa_{B})\), we find that the solution to (10) under the pre-allocation \(\boldsymbol{p}^{*}\) is always in one of two completely symmetric cases: 1) \(\mathcal{B}_{1}=\mathcal{B}\); or 2) \(\mathcal{B}_{2}=\mathcal{B}\). Thus, we need to show \(\boldsymbol{p}^{*}\) is a local maximizer in both cases. **Case 1** (\(\mathcal{B}_{1}=\mathcal{B}\))**:** For \(\boldsymbol{p}\in\Delta_{n}(P)\), the system (10) is written \[R_{A}=\sum_{b=1}^{n}\frac{(w_{b}\kappa_{B}-p_{b})^{2}}{2w_{b} \kappa_{B}}\ \text{and}\ R_{B}=\sum_{b=1}^{n}\frac{(w_{b}\kappa_{B})^{2}-p_{b}^{2}}{2w_{b} \kappa_{A}}\] \[\ \text{where}\ 0<w_{b}\kappa_{B}-p_{b}\leq\kappa_{A}\ \text{ holds}\ \forall b\in\mathcal{B}. \tag{12}\] It yields the algebraic solution \[\kappa_{B}^{*} =P+R_{A}+\sqrt{(P+R_{A})^{2}-\|\boldsymbol{p}\|_{\boldsymbol{w}}^{2}} \tag{13}\] \[\kappa_{A}^{*} =\frac{(P+R_{A})\kappa_{B}^{*}-\|\boldsymbol{p}\|_{\boldsymbol{w }}^{2}}{R_{B}}.\] where \(\|\boldsymbol{p}\|_{\boldsymbol{w}}^{2}:=\sum_{b=1}^{n}\frac{p_{b}^{2}}{w_{b}}\). 
This solution needs to satisfy the set of conditions \(0<w_{b}\kappa_{B}-p_{b}\leq\kappa_{A}\ \forall b\in\mathcal{B}\), but the explicit characterization of these conditions is not needed to show that \(\boldsymbol{p}^{*}\) is a local maximum. Indeed, first observe that the expression for \(\kappa_{B}^{*}\) is required to be real-valued, which we can write as the condition \[\boldsymbol{p}\in R^{(1n)}:=\left\{\boldsymbol{p}\in\Delta_{n}(P):\| \boldsymbol{p}\|_{\boldsymbol{w}}^{2}<(P+R_{A})^{2}\right\}. \tag{14}\] We thus have a region \(R^{(1n)}\) for which player \(A\)'s equilibrium payoff (Lemma 3.1) is given by the expression \[\pi_{A}^{(1n)}(\mathbf{p}):=1-\frac{R_{B}}{f(\|\mathbf{p}\|_{\mathbf{w}})}\bigg{(}1-\frac{\| \mathbf{p}\|_{\mathbf{w}}^{2}}{(P\!+\!R_{A}\!+\!f(\|\mathbf{p}\|_{\mathbf{w}}))^{2}}\bigg{)} \tag{15}\] where \(f(\|\mathbf{p}\|_{\mathbf{w}}):=\sqrt{(P+R_{A})^{2}-\|\mathbf{p}\|_{\mathbf{w}}^{2}}\). The partial derivatives are calculated to be \[\frac{\partial\pi_{A}^{(1n)}}{\partial p_{b}}(\mathbf{p})=\frac{p_{b}}{w_{b}}\cdot \frac{2R_{B}}{f(\|\mathbf{p}\|_{\mathbf{w}})(P+R_{A}+f(\|\mathbf{p}\|_{\mathbf{w}}))^{2}} \tag{16}\] A critical point of \(\pi_{A}^{(1n)}\) must satisfy \(\mathbf{z}^{\top}\nabla\pi_{A}^{(1n)}(\mathbf{p})=0\) for any \(\mathbf{z}\in\mathbb{T}_{n}\), where we define \(\mathbb{T}_{n}:=\{\mathbf{z}\in\mathbb{R}^{n}:\sum_{b=1}^{n}z_{b}=0\}\) as the tangent space of \(\Delta_{n}(P)\). Indeed for any \(\mathbf{p}\in R^{(1n)}\), we calculate \[(\mathbf{p}-\mathbf{w}\cdot P)^{\top}\nabla\pi_{A}^{(1n)}(\mathbf{p}) =g(\|\mathbf{p}\|_{\mathbf{w}})\cdot(\|\mathbf{p}\|_{\mathbf{w}}^{2}-P^{2}\big{)} \tag{17}\] \[\geq 0\] where \(g(\|\mathbf{p}\|_{\mathbf{w}}):=\frac{2R_{B}}{f(\|\mathbf{p}\|_{\mathbf{w}})(P+R_{A}+f(\|\mathbf{ p}\|_{\mathbf{w}}))^{2}}>0\) for any \(\mathbf{p}\in R^{(1n)}\). The inequality above is met with equality if and only if \(\mathbf{p}=\mathbf{p}^{*}\). This is due to the fact that \(\min_{\mathbf{p}\in\Delta_{n}(P)}\|\mathbf{p}\|_{\mathbf{w}}^{2}=\|\mathbf{p}^{*}\|_{\mathbf{w}}^{ 2}=P^{2}\). Thus, \(\mathbf{p}^{*}\) is the unique maximizer of \(\pi_{A}^{(1n)}(\mathbf{p})\) on \(R^{(1n)}\). **Case 2 (\(\mathcal{B}_{2}=\mathcal{B}\)):** For \(\mathbf{p}\in\Delta_{n}(P)\), the system is written as \[R_{A}=\sum_{b=1}^{n}\frac{(w_{b}\kappa_{A})^{2}}{2w_{b}\kappa_{B}}\text{ and }R_{B}=\sum_{b=1}^{n}\frac{(w_{b}\kappa_{A}-p_{b})^{2}-(p_{b})^{2}}{2w_{b}\kappa_{A }},\] where \(w_{b}\kappa_{B}-p_{b}>w_{b}\kappa_{A}\) holds for all \(b\in\mathcal{B}\). This readily yields the algebraic solution: \[\kappa_{B}^{*}=2\frac{(R_{B}-P)^{2}}{R_{A}}\text{ and }\kappa_{A}^{*}=2(R_{B}-P). \tag{18}\] For this solution to be valid, the following conditions are required: \(\bullet\)\(\kappa_{A}^{*},\kappa_{B}^{*}\in\mathbb{R}_{++}\): This requires that \(R_{B}-P>0\). \(\bullet\)\(w_{b}\kappa_{B}^{*}-p_{b}>w_{b}\kappa_{A}^{*}\) for all \(b\in\mathcal{B}\): This requires that \[2\frac{(R_{B}-P)^{2}}{R_{A}}-2(R_{B}-P)-\max_{b}\{\frac{p_{b}}{w_{b}}\}>0.\] The left-hand side is quadratic in \(R_{B}-P\), and thus requires that either \[R_{B}-P<\frac{R_{A}}{2}\left(1-\sqrt{1+\frac{2}{R_{A}}\max_{b}\{\frac{p_{b}}{w_ {b}}\}}\right)\] or \[R_{B}-P>\frac{R_{A}}{2}\left(1+\sqrt{1+\frac{2}{R_{A}}\max_{b}\{\frac{p_{b}}{w_ {b}}\}}\right). \tag{19}\] The former cannot hold since the numerator on the right-hand side is strictly negative, but \(\kappa_{A}^{*},\kappa_{B}^{*}\in\mathbb{R}_{++}\) requires \(R_{B}-P>0\). 
Thus, (19) must hold, Clearly, this is more restrictive than \(R_{B}-P>0\). This dictates the boundary of Case 2. For any \(\mathbf{p}\in\Delta_{n}(P)\) such that all battlefields are in Case 2, the expression for player \(A\)'s payoff in (11) simplifies to \[\pi_{A}(\mathbf{p},R_{A},R_{B})=\sum_{b=1}^{n}w_{b}\frac{\kappa_{A}^{*}}{2\kappa_{ B}^{*}}=\frac{R_{A}}{2(R_{B}-P)},\] where we use the expression for \(\kappa_{B}^{*}\) and \(\kappa_{A}^{*}\) in (18). Observe that player \(A\)'s payoff is constant in the quantity \(\mathbf{p}\). Thus, for any \(\mathbf{p}\) that satisfies (19), it holds that all battlefields are in Case 2, and that player \(A\)'s payoff is the above. We conclude sub-part 2-a noting that, for given quantities \(R_{A}\) and \(P\), if there exists any \(\mathbf{p}\in\Delta_{n}(P)\) such that (19) is satisfied, then \(\mathbf{p}^{*}=\mathbf{w}\cdot P\) must also satisfy (19), since \(||\mathbf{p}||_{\infty}\geq||\mathbf{p}^{*}||_{\infty}\) and the right-hand side in (19) is increasing in \(||\mathbf{p}||_{\infty}\). **Part 2-b:**_Any pre-allocation \(\mathbf{p}\) that corresponds to a solution of (10) with \(\mathcal{B}_{1},\mathcal{B}_{2}\neq\varnothing\) satisfies \(\pi_{A}(\mathbf{p},R_{A},R_{B})\leq\pi_{A}(\mathbf{p}^{*},R_{A},R_{B})\)._ For easier exposition, the proof of Part 2-b is presented in the Appendix. Together, Parts 2-a and 2-b imply that \(\mathbf{p}^{*}\) is a global maximizer of the function \(\pi_{A}(\mathbf{p},R_{A},R_{B})\), completing the proof of Lemma 3.2. **Part 3:** In the third and final part, we obtain the formulas for SPE payoffs reported in Theorem 3.1. Proof.: [Proof of Theorem 3.1] We proceed to derive closed-form solutions for the SPE payoff \(\pi_{A}^{*}(P,R_{A},R_{B})\). From Lemmas 3.1 and 3.2, the SPE payoff is attained by evaluating \(\pi_{A}(\mathbf{p}^{*},R_{A},R_{B})\), i.e. from equation (11). From the discussion of Part 2-a, this amounts to analyzing the two completely symmetric cases \(\mathcal{B}_{1}=\mathcal{B}\) and \(\mathcal{B}_{2}=\mathcal{B}\). **Case 1 (\(\mathcal{B}_{1}=\mathcal{B}\)):** Substituting \(\mathbf{p}^{*}=\mathbf{w}\cdot P\) into (13) and simplifying, we obtain \[\kappa_{B}^{*} =P+R_{A}+\sqrt{R_{A}(R_{A}+2P)} \tag{20}\] \[\kappa_{A}^{*} =\frac{(P+R_{A})\kappa_{B}^{*}-P^{2}}{R_{B}}.\] Next, we verify that this solution satisfies the conditions \(0<\kappa_{B}^{*}-P\leq\kappa_{A}^{*}\) imposed by the case \(\mathcal{B}_{1}=\mathcal{B}\). \(\bullet\)\(\kappa_{B}^{*}-P>0\): This holds by inspection. \(\bullet\)\(\kappa_{B}^{*}-P\leq\kappa_{A}\): We can write this condition as \[R_{B}-P\leq R_{A}+\frac{PR_{A}}{R_{A}+\sqrt{R_{A}(R_{A}+2P)}} \tag{21}\] We note that whenever \(R_{B}\leq P\), this condition is always satisfied. When \(R_{B}>P\), this condition does not automatically hold, and an equivalent expression of (21) is given by \[R_{A}\geq\frac{2(R_{B}-P)^{2}}{P+2(R_{B}-P)}. \tag{22}\] Observe that \(R_{A}=\frac{2(R_{B}-P)^{2}}{P+(R_{B}-P)}\) satisfies (21) with equality, and is in fact the only real solution (one can reduce it to a cubic polynomial in \(R_{A}\)). When these conditions hold, the equilibrium payoff \(\pi_{A}^{*}(P,R_{A},R_{B})=\pi_{A}(\mathbf{p}^{*},R_{A},R_{B})\) can be directly computed from Lemma 3.1, i.e. (11). It is given by the expression (5). **Case 2 (\(\mathcal{B}_{2}=\mathcal{B}\)):** Substituting \(\mathbf{p}=\mathbf{w}\cdot P\) into (18) and simplifying, we obtain \[\kappa_{A}^{*}=\frac{2(R_{B}-P)}{W}\text{ and }\kappa_{B}^{*}=\frac{2(R_{B}-P)^{2}}{R_{A}}. 
\tag{23}\] ## 4 Interplay between resource types In this section, we present some implications from Theorem 3.1 regarding the interplay between pre-allocated and real-time resources. Specifically, we seek to compare the relative effectiveness of the two types of resources in the GL-P game by first quantifying an _effectiveness ratio_. We then leverage this analysis to address how player \(A\) should invest in both types of resources when they are costly to acquire. ### The effectiveness ratio We define the effectiveness ratio as follows. **Definition 4.1**: _For a given \(R_{A},R_{B}>0\), let \(P^{\text{eq}}(R_{A},R_{B})>0\) be the unique value such that \(\pi_{A}^{*}(P^{\text{eq}},0,R_{B})=\pi_{A}^{*}(0,R_{A},R_{B})\). The effectiveness ratio is defined as_ \[E(R_{A},R_{B}):=\frac{P^{\text{eq}}}{R_{A}} \tag{24}\] In words, \(P^{\text{eq}}\) is the amount of pre-allocated resources required to achieve the same level of performance as the amount \(R_{A}\) of real-time resources in the absence of pre-allocated resources. The effectiveness ratio \(E\) thus quantifies the multiplicative factor of pre-allocated resources needed compared to real-time resources \(R_{A}\). The following result establishes the effectiveness ratio for any given parameters. **Theorem 4.1**: _For a given \(R_{A},R_{B}>0\), the effectiveness ratio is_ \[E(R_{A},R_{B})=\begin{cases}2&\text{if }R_{A}\geq R_{B},\\ \frac{2(R_{B})^{2}}{R_{A}(2R_{B}-R_{A})}&\text{if }R_{A}<R_{B}.\end{cases} \tag{25}\] _Here, it is interesting to note that the ratio \(E\) is lower-bounded by \(2\) - real-time resources are at least twice as effective as pre-allocated resources. Additionally, as \(R_{A}\to 0^{+}\), the ratio grows unboundedly \(E\to\infty\). This is due to the fact that without any real-time resources, player \(A\) needs \(P\geq R_{B}\) pre-allocated resources to obtain a positive payoff (see third case of Theorem 3.1). A plot of the ratio \(E\) is shown in the left Figure 2._ The proof of Theorem 4.1 relies on the following technical lemma, which provides the level curves of the SPE payoff \(\pi_{A}^{*}(P,R_{A},R_{B})\). A level curve with fixed performance level \(\Pi\in[0,1]\) is defined as the set of points \[L_{\Pi}:=\{(P,R_{A})\in\mathbb{R}_{+}^{2}:\pi_{A}^{*}(P,R_{A},R_{B})=\Pi\}. \tag{26}\] **Lemma 4.1**: _Given any \(R_{B}>0\) and \(\mathbf{w}\in\mathbb{R}_{++}^{n}\), fix a desired performance level \(\Pi\in[0,1]\). The level curve \(L_{\Pi}\) is given by_ \[L_{\Pi}=\bigcup_{P\in\left[0,\frac{R_{B}}{1-\Pi}\right]}(P,R_{\Pi}(P)) \tag{27}\] _where if \(0\leq\Pi<\frac{1}{2}\),_ \[R_{\Pi}(P)=\begin{cases}2\Pi(R_{B}-P)&\text{for }P\in\left[0,\frac{(1-2\Pi)R_{B} }{1-\Pi}\right)\\ \frac{(R_{B}-(1-\Pi)P)^{2}}{2R_{B}(1-\Pi)}&\text{for }P\in\left[\frac{(1-2\Pi)R_{B} }{1-1\Pi},\frac{WR_{B}}{1-1\Pi}\right]\end{cases} \tag{28}\] _and if \(\frac{1}{2}\leq\Pi\leq 1\),_ \[R_{\Pi}(P)=\frac{(R_{B}-(1-\Pi)P)^{2}}{2R_{B}(1-\Pi)} \tag{29}\] _If \(P>\frac{R_{B}}{1-\Pi}\), then \(\pi_{A}^{*}(P,R_{A},R_{B})>\Pi\) for any \(R_{A}\geq 0\)._ The proof of this Lemma directly follows from the expressions in Theorem 3.1 and is thus omitted. In the center Figure 2, we illustrate level curves associated with varying performance levels \(\Pi\). We can now leverage the above Lemma to complete the proof of Theorem 4.1. [Proof of Theorem 4.1] First, suppose \(R_{A}<R_{B}\). Then \(\pi_{A}^{*}(0,R_{A},R_{B})=\frac{R_{A}}{2R_{B}}<1/2\). 
Focusing on the level curve associated with the value \(\Pi=\frac{R_{A}}{2R_{B}}\), the quantity \(P^{\text{eq}}\) is determined as the endpoint of this curve where there are zero real-time resources. From (28), this occurs when \(P=\frac{R_{B}}{1-\Pi}=\frac{2(R_{B})^{2}}{2R_{B}-R_{A}}\). Now, suppose \(R_{A}\geq R_{B}\). Then \(\pi_{A}^{*}(0,R_{A},R_{B})=(1-\frac{R_{B}}{2R_{A}}\geq 1/2\). Similarly, the quantity \(P^{\text{eq}}\) is determined as the endpoint of the level curve associated with \(\Pi=(1-\frac{R_{B}}{2R_{A}})\). From (29), this occurs when \(P=\frac{R_{B}}{1-\Pi}=2R_{A}\). ### Optimal investment in resources In addition to the effectiveness ratio, the interplay between the two types of resources is also highlighted by the following scenario: player \(A\) has an opportunity to make an investment decision regarding its resource endowments. That is, the pair \((P,R_{A})\in\mathbb{R}_{+}^{2}\) is a strategic choice made by player \(A\) before the game GL-P\((P,R_{A},R_{B},\mathbf{w})\) is played. Given a monetary budget \(M_{A}>0\) for player \(A\), any pair \((P,R_{A})\) must belong to the following set of feasible investments: \[\mathcal{I}(M_{A}):=\{(P,R_{A}):R_{A}+c_{A}P\leq M_{A}\} \tag{30}\] where \(c_{A}\geq 0\) is the per-unit cost for purchasing pre-allocated resources, and we assume the per-unit cost for purchasing real-time resources is 1 without loss of generality. We are interested in characterizing player \(A\)'s optimal investment subject to the above cost constraint, and given player \(B\)'s resource endowment \(R_{B}>0\). This is formulated as the following optimization problem: \[\pi_{A}^{\text{opt}}:=\max_{(P,R_{A})\in\mathcal{I}(M_{A})}\pi_{A}^{*}(P,R_{A},R_{B}). \tag{31}\] In the result below, we derive the complete solution to the optimal investment problem (31). **Theorem 4.2**: _Fix a monetary budget \(M_{A}>0\), relative per-unit cost \(c_{A}>0\), and \(R_{B}>0\) real-time resources for player \(B\). Then, player \(A\)'s optimal investment in pre-allocated resources in (31) is_ \[P^{*}=\begin{cases}\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}},&\text{if }c_{A}<t\\ \in[0,\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}}],&\text{if }c_{A}=t\\ 0,&\text{if }c_{A}>t\end{cases}. \tag{32}\] _where \(t:=\min\{1,\frac{M_{A}}{R_{B}}\}\). The optimal investment in real-time resources is \(R_{A}^{*}=M_{A}-c_{A}P^{*}\). The resulting payoff \(\pi_{A}^{\text{opt}}\) to player \(A\) is given by_ \[\begin{cases}1-\frac{R_{B}}{2M_{A}}c_{A}(2-c_{A}),&\text{if }c_{A}<t\\ 1-\frac{R_{B}}{2M_{A}},&\text{if }c_{A}\geq t\text{ and }\frac{M_{A}}{R_{B}}\geq 1 \\ \frac{M_{A}}{2R_{B}},&\text{if }c_{A}\geq t\text{ and }\frac{M_{A}}{R_{B}}<1\end{cases}. \tag{33}\] A plot of the optimal investment \(P^{*}\) (32) is shown in the right Figure 2. If the cost \(c_{A}\) exceeds 1, then there is no investment in pre-allocated resources since they are less effective than real-time resources. Thus, \(c_{A}\) must necessarily be cheaper than real-time resources in order to invest in any positive amount. We note that while an optimal investment can purely consist of real-time resources, no optimal investment from Theorem 4.2 can purely consist of pre-allocated resources. Interestingly, when the monetary budget is small (\(M_{A}<1\)), there is a discontinuity in the investment level \(P^{*}\) at \(c_{A}=R_{B}\). A visual illustration of how the optimal investments are determined is shown in Figure 3, which is detailed in the proof below. 
We first observe that for any \(\Pi\in(0,1)\), the level curve \(R_{\Pi}(P)\) (from Lemma 4.1) is strictly decreasing and convex in \(P\in[0,\frac{R_{B}}{1-\Pi}]\). Hence, the function \(\pi_{A}(P,R_{A},R_{B})\) is quasi-concave in \((P,R_{A})\). Observe that the set of points \((P,R_{A})\in\mathbb{R}_{+}^{2}\) that satisfy \(R_{A}+c_{A}P=M_{A}\) consists of the line segment \(R_{A}=M_{A}-c_{A}P\), \(P\in[0,M_{A}/c_{A}]\), with slope \(-c_{A}\), and end-points \((M_{A},0)\) and \((0,M_{A}/c_{A})\). Thus, the optimization amounts to finding the highest level curve that intersects with \(R_{A}=M_{A}-c_{A}P\), \(P\in[0,M_{A}/c_{A}]\). The slope of a level curve \(R_{\Pi}(P)\) at \(P=0\) is \[\frac{\partial R_{\Pi}}{\partial P}(0)=\begin{cases}-2\Pi,&\text{if }\Pi<\frac{1}{2}\\ -1,&\text{if }\Pi\geq\frac{1}{2}\end{cases}. \tag{34}\] Let \(M_{A}\geq 0\) such that \(\pi_{A}^{*}(0,M_{A},R_{B})=\Pi\geq 1/2\). Then, note that, if \(-c_{A}<-1\) (or, equivalently \(c_{A}>1\)), then \(R_{A}=M_{A}-c_{A}P\) shrinks faster in \(P\) than the level curve \(R_{\Pi}(P)\) by monotonicity and convexity of the level curve in \(P\). Thus, the allocation \((P,R_{A})=(0,M_{A})\) maximizes \(A\)'s payoff, as all other points on the line segment \(R_{A}=M_{A}-c_{A}P\), \(P\in[0,M_{A}/c_{A}]\), intersect with strictly lower level curves. Similarly, for \(M_{A}\geq 0\) such that \(\pi_{A}^{*}(0,M_{A},R_{B})=\Pi<1/2\), the allocation \((P,R_{A})=(0,M_{A})\) maximizes \(A\)'s payoff when \(c_{A}>2\Pi\). Since the condition \(\pi_{A}^{*}(0,M_{A},R_{B})=\Pi\geq 1/2\) is equivalent to \(M_{A}\geq R_{B}\) and \(\pi_{A}^{*}(0,M_{A},R_{B})=\frac{R_{A}}{2R_{B}}\) when \(M_{A}<R_{B}\), it follows that \(P^{*}=0\) if \(c_{A}>\min\{1,\frac{M_{A}}{R_{B}}\}\). For the remainder of the proof, we use \(t=1\) (resp. \(t=\frac{M_{A}}{R_{B}}\)) and \(M_{A}\geq R_{B}\) (resp. \(M_{A}<R_{B}\)) interchangeably. Suppose that \(c_{A}=t\) and \(t=\frac{M_{A}}{R_{B}}\). Then the level curve corresponding to \(\Pi(M_{A})=\frac{W}{2}c_{A}\) has an interval of budget-feasible points \((P,R_{A})\) parameterized by \(P\in[0,(1-\frac{c_{A}}{2-c_{A}})\frac{M_{A}}{c_{A}}]\) with \(R_{A}=M_{A}-cP\). If \(t=1\), then there is a single budget-feasible point \((P,R_{A})=(0,M_{A})\) for the level curve corresponding to \(\Pi(M_{A})=(1-\frac{R_{B}}{2M_{A}})\). In both cases, there are no budget-feasible points for any level curve corresponding to \(\Pi>\Pi(M_{A})\). Now, suppose \(t=1\) and \(c_{A}<t\). We wish to find the level curve for which the line \((P,M_{A}-c_{A}P)\), \(P\in[0,M_{A}/c_{A}]\), lies Figure 3: This plot illustrates how to determine the optimal investment \((P^{*},R_{A}^{*})\in\mathbb{R}_{+}^{2}\) subject to the cost constraint in (30). The set of feasible investments \(\mathcal{I}(M_{A})\) is the line segment connecting \((0,M_{A})\) and \((M_{A}/c_{A},0)\). The optimal investment lies on the level curve tangent to this line segment. For example, when \(c=0.423\), the optimal investment is \((2.309,0.357)\) (unfilled circle), which gives a performance level of \(\Pi=0.75\). For sufficiently high cost \(c_{A}\), \(\mathcal{I}(M_{A})\) will not be tangent to any level curve, and the optimal investment is \((0,M_{A})\). For example, when \(c_{A}=1.333\), the highest level curve that intersects \(\mathcal{I}(M_{A})\) is \(\Pi=0.625\), and the optimal investment is \((0,4/3)\) (filled square). 
Figure 2: (Left) A plot of the effectiveness ratio \(E(R_{A},R_{B})\) (Theorem 4.1), which quantifies the multiplicative factor of pre-allocated resources needed to achieve the same performance as an amount of real-time resources \(R_{A}\). Notably, real-time resources are at least twice as effective as an equivalent amount of pre-allocated resources. (Center) This plot shows a collection of level curves for player \(A\)’s SPE payoff. A level curve corresponds to a fixed performance level \(\Pi\), and any point \((P,R_{A})\) on the level curve satisfies \(\pi_{A}^{*}(P,R_{A},R_{B})=\Pi\) (Lemma 4.1). (Right) This plot shows player \(A\)’s optimal investment in pre-allocated resources \(P^{*}\) when it has a per-unit cost of \(c_{A}\) and a fixed monetary budget of \(M_{A}\) to invest in both types of resources (Theorem 4.2). Player \(A\) invests the remaining \(M_{A}-c_{A}P^{*}\) in real-time resources. In these plots, we set \(R_{B}=1\), and \(W=1\). tangent. The point(s) of tangency yields the optimal solution due to the quasi-concavity of \(\pi_{A}^{*}\). Furthermore, since \(M_{A}\geq R_{B}\) and \(c_{A}<1\), a solution \((P,R_{A})\) must satisfy \(\Pi\in[\frac{1}{2},1]\) and \[\begin{split}\frac{\partial R_{\Pi}}{\partial P}(P^{*})& =\frac{P^{*}(1-\Pi)}{R_{B}W}-1&=-c_{A}\\ R_{\Pi}(P^{*})&=\frac{(R_{B}-(1-\Pi)P^{*})^{2}}{2R_ {B}(1-\Pi)}&=M_{A}-c_{A}P^{*}\end{split} \tag{35}\] From the first equation, we obtain \(P^{*}=\frac{R_{B}(1-c_{A})}{1-\Pi}\). Plugging this expression into the second equation, we obtain \(\Pi=(1-\frac{R_{B}}{2M_{A}}c_{A}(2-c_{A}))\in[\frac{1}{2},1]\), which leads to the unique solution \(P^{*}=(1-\frac{c_{A}}{2-c_{A}})\frac{M_{A}}{c_{A}}\frac{M_{A}}{c_{A}}\). Lastly, suppose \(c_{A}<t\) and \(t=\frac{M_{A}}{R_{B}}\) (\(M_{A}<R_{B}\)). Similar to the preceding case, we seek the highest level curve for which the budget constraint is tangent. Due to the assumption that \(M_{A}<R_{B}\) and \(c<\frac{M_{A}}{R_{B}}\), we observe that tangent points cannot exist for \(P<\frac{1-\Pi}{1-\Pi}R_{B}\) and \(\Pi<\frac{1}{2}\), i.e. in the region where the level curve is linear. Thus, it must be that either \(\Pi<\frac{1}{2}\) and \(P\geq\frac{1-\Pi}{1-\Pi}R_{B}\), or \(\Pi\geq\frac{1}{2}\). In either case, a solution must also satisfy the equations in (35), from which we obtain an identical expression for \(P^{*}\). ## 5 Two-sided pre-allocations The scenarios studied thus far have considered one-sided pre-allocations, where only player \(A\) has the opportunity for early investments. The goal in this section is to take preliminary steps in understanding how multiple rounds of early investments, on the part by both competitors, impacts the players' performance in the final stage. We will consider a scenario where player \(B\) has an opportunity to respond to the pre-allocation decision of player \(A\) with its own pre-allocated resources, which we formulate as a Stackelberg game. **Remark 5.1**: _Before formalizing this game, we remark that such a scenario admits positive and negative pre-allocations, i.e. \(p_{b}>0\) for some subset of battlefields and \(p_{b}<0\) on the others. Here, \(p_{b}<0\) means that the amount \(|p_{b}|\) of pre-allocated resources favors player \(B\). While the work in [28] establishes existence of equilibrium for any such pre-allocations as well as numerical approaches to compute equilibria to arbitrary precision, it does not provide analytical characterizations of them. Indeed, while our current techniques (i.e. 
from Theorem 3.1) analytically derive the equilibria for any positive pre-allocation vector, they are yet unable to account for such two-sided favoritism. Developing appropriate methods is subject to future study._ In light of the aforementioned limitations, we may still investigate the impact of player \(B\)'s response in the context of a single-battlefield environment3. The Stackelberg game is defined as follows. Player \(A\) has a monetary budget \(M_{A}\) with per-unit cost \(c_{A}\in(0,1)\) for stationary resources. Similarly, player \(B\) has a monetary budget \(M_{B}\) with per-unit cost \(c_{B}\in(0,1)\). The players compete over a single battlefield of unit value. Footnote 3: In contrast to Colonel Blotto games, General Lotto games with a single battlefield still provides rich insights that often generalize to multi-battlefield scenarios [11, 12, 22]. \(-\)_Stage 1:_ Player \(A\) chooses its pre-allocation investment \(p_{A}\in[0,\frac{M_{A}}{c_{A}}]\). This becomes common knowledge. \(-\)_Stage 2:_ Player \(B\) chooses its pre-allocation investment \(p_{B}\in[0,\frac{M_{B}}{c_{B}}]\). \(-\)_Stage 3:_ The players engage in the General Lotto game with favoritism GL-F\((p_{A}-p_{B},M_{A}-c_{A}p_{A},M_{B}-c_{B}p_{B})\). Players derive the final payoffs \[\begin{split} u_{A}(p_{A},p_{B})&:=\pi_{A}^{*}(p_{A }-p_{B},M_{A}-c_{A}p_{A},M_{B}-c_{B}p_{B})\\ u_{B}(p_{A},p_{B})&:=1-u_{A}(p_{A},p_{B})\end{split} \tag{36}\] Note that \(p_{A}-p_{B}\) is the favoritism to player \(A\). When it is non-negative, \(\pi_{A}^{*}\) is given precisely by Theorem 3.1. When it is negative, \(\pi_{A}^{*}=1-\pi_{B}^{*}\) where \(\pi_{B}^{*}\) is given as in Theorem 3.1 with the indices switched. Let us denote this game as GL-S\((\{M_{i},c_{i}\}_{i=A,B})\). We seek to characterize the following equilibrium concept. **Definition 5.1**: _The investment profile \((p_{A}^{*},p_{B}^{*})\) is a Stackelberg equilibrium if_ \[p_{A}^{*}\in\arg\max_{p_{A}\in[0,M_{A}/c_{A}]}\left(\min_{p_{B}\in[0,M_{B}/c_{ B}]}u_{A}(p_{A},p_{B})\right) \tag{37}\] _and_ \[p_{B}^{*}\in\arg\min_{p_{B}\in[0,M_{B}/c_{B}]}u_{A}(p_{A}^{*},p_{B}). \tag{38}\] Note that the definition in (37) is in a max-min form, since the final payoffs in GL-S are constant-sum. The characterization of the Stackelberg equilibrium is given in the result below. **Proposition 5.1**: _The Stackelberg equilibrium of GL-S\((\{M_{i},c_{i}\}_{i=A,B})\) is given as follows._ 1. _Suppose_ \(\frac{M_{B}}{c_{B}}\leq M_{A}\)_. Then_ \(p_{A}^{*}\) _is given according to Theorem_ 4.2 _and_ \(p_{B}^{*}=0\)_._ 2. _Suppose_ \(M_{A}<\frac{M_{B}}{c_{B}}\leq\frac{M_{A}}{c_{A}}\)_. If_ \(p_{A}^{\dagger}<\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}}\)_, then_ \(p_{A}^{*}\) _is given according to Theorem_ 4.2 _and_ \(p_{B}^{*}=0\)_, where_ \(p_{A}^{*}\in(0,\frac{M_{A}}{c_{A}}]\) _is the unique value that satisfies_ \(u_{B}(p_{A},0)=u_{B}(p_{A},\hat{p}_{B})\)_, with_ \[\hat{p}_{B}:=\frac{M_{B}}{c_{B}}-\frac{M_{B}-c_{B}p_{A}}{2-c_{B}}.\] (39) _If_ \(p_{A}^{\dagger}\geq\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{B}}\)_, then_ \(p_{A}^{*}=p_{A}^{\dagger}\)_, and_ \(p_{B}^{*}=0\) _or_ \(\hat{p}_{B}\)_._ 3. _Suppose_ \(\frac{M_{A}}{c_{A}}<\frac{M_{B}}{c_{B}}\)_. Then_ \(p_{A}^{*}=0\) _and_ \(p_{B}^{*}=\hat{p}_{B}\)_._ _Several comments are in order. In the first interval \(\frac{M_{B}}{c_{B}}\leq M_{A}\), player \(B\) is sufficiently weak such that it does not respond with any of its own stationary resources against any player \(A\) investment \(p_{A}\in[0,\frac{M_{A}}{c_{A}}]\). 
Thus, the Stackelberg solution recovers the result from Theorem 4.2. For a large part of the middle interval \(M_{A}<\frac{M_{B}}{c_{B}}\leq\frac{M_{A}}{c_{A}}\), \(p_{A}^{*}\) coincides with the investment from Theorem 4.2, which forces player \(B\) to respond with zero stationary resources (i.e. when \(p_{A}^{\dagger}<\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}}\)). However, when \(p_{A}^{\dagger}\geq\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}}\), player \(A\)'s optimal investment makes player \(B\) indifferent between responding with zero or with an amount \(\hat{p}_{B}>p_{A}^{*}\) of stationary resources that exceeds the pre-allocation of player \(A\). In the last interval \(\frac{M_{A}}{c_{A}}<\frac{M_{B}}{c_{B}}\), player \(B\) is sufficiently strong such that it is optimal for player \(A\) to invest zero stationary resources. ### _The impact of responding_ We illustrate the implications of Proposition 5.1 regarding the responding player's (\(B\)) performance in the numerical example shown in Figure 4. Here, we compare player \(B\)'s Stackelberg equilibrium payoff to the payoff it would have obtained if it did not have an opportunity to respond. We recall that this payoff was characterized in Theorem 4.2. By definition, the Stackelberg payoff is necessarily at least as high as the non-responding payoff - one performs better being able to respond to a pre-allocation. However, what is notable in Figure 4 is a significant, discontinuous increase in payoff for player \(B\) once its budget \(\frac{M_{B}}{c_{B}}\) is sufficiently high (i.e. exceeds \(\frac{M_{A}}{c_{A}}\)). The presence of the discontinuity strongly suggests that being in a resource-advantaged position with regards to the monetary budget \(\frac{M_{B}}{c_{B}}\) is a crucial factor for performance in multi-stage resource allocation. At the same time, Proposition 5.1 asserts that no response is optimal if player \(B\) is resource-disadvantaged (first item). This conclusion is contrasted with the classic simultaneous-move General Lotto games, in which there is no such discontinuity in the equilibrium payoffs - they vary continuously in the players' budgets [11, 18]. The remainder of this section provides the proof of Proposition 5.1, which utilizes two supporting Lemmas. ### _Follower's best-response_ We begin by analyzing player \(B\)'s best-response to any player \(A\) pre-allocation \(p_{A}\). We need to find \(p_{B}^{*}\) that solves \[\max_{p_{B}\in[0,M_{B}/c_{B}]}u_{B}(p_{A},p_{B}). \tag{40}\] Let us denote \(\textbf{p}=(p_{A},p_{B})\), \(R_{A}=M_{A}-c_{A}p_{A}\), and \(R_{B}=M_{B}-c_{B}p_{B}\). Define \(f_{A}(\textbf{p}):=R_{A}+\sqrt{R_{A}(R_{A}+2(p_{A}-p_{B}))}\), \(f_{B}(\textbf{p}):=\sqrt{R_{B}}\), and \(g_{B}(\textbf{p}):=\sqrt{R_{B}+2(p_{B}-p_{A})}\). 
Define \[u_{B}^{1A}(\textbf{p}):=\frac{M_{B}-c_{B}p_{B}}{2R_{A}}\left(\frac{f_{A}( \textbf{p})}{p_{A}-p_{B}+f_{A}(\textbf{p})}\right)^{2}\] \[u_{B}^{2A}(\textbf{p}):=1-\frac{R_{A}}{2(M_{B}-c_{B}p_{B}-(p_{A}-p_{B}))}\] \[u_{B}^{1B}(\textbf{p}):=1-\frac{R_{A}}{2}\left(\frac{f_{B}(\textbf{p})+g_{B}( \textbf{p})}{p_{B}-p_{A}+f_{B}(\textbf{p})(f_{B}(\textbf{p})+g_{B}(\textbf{p }))}\right)^{2}\] \[u_{B}^{2B}(\textbf{p}):=\frac{M_{B}-c_{B}p_{B}}{2(R_{A}-(p_{B}-p_{A}))} \tag{41}\] From Theorem 3.1, one can write player \(B\)'s payoff in GL-S as \[u_{B}(\textbf{p})=\begin{cases}u_{B}^{1A}(\textbf{p})&\text{if }\textbf{p}\in \mathcal{R}^{1A}\\ u_{B}^{2A}(\textbf{p})&\text{if }\textbf{p}\in\mathcal{R}^{2A}\\ u_{B}^{1B}(\textbf{p})&\text{if }\textbf{p}\in\mathcal{R}^{1B}\\ u_{B}^{2B}(\textbf{p})&\text{if }\textbf{p}\in\mathcal{R}^{2B}\end{cases} \tag{42}\] where \[\mathcal{R}^{1A}=\{p_{A}\geq p_{B}:R_{B}<p_{A}-p_{B},\text{ or }\] \[R_{B}\geq p_{A}-p_{B}\text{ and }R_{A}\geq\frac{2(R_{B}-(p_{A}-p_{B})) ^{2}}{2R_{B}-(p_{A}-p_{B})}\}\] \[\mathcal{R}^{2A}=\{p_{A}\geq p_{B}\}\backslash\mathcal{R}^{1A}\] \[\mathcal{R}^{1B}=\{p_{B}>p_{A}:R_{A}<p_{B}-p_{A},\text{ or }\] \[R_{A}\geq p_{B}-p_{A}\text{ and }R_{B}\geq\frac{2(R_{A}-(p_{B}-p_{A})) ^{2}}{2R_{A}-(p_{B}-p_{A})}\}\] \[\mathcal{R}^{2B}=\{p_{B}>p_{A}\}\backslash\mathcal{R}^{1B} \tag{43}\] The following Lemma details player \(B\)'s payoff for any response \(p_{B}\in[0,\frac{M_{B}}{c_{B}}]\). **Lemma 5.1**: _Consider any fixed strategy \((p_{A},R_{A})\) for player \(A\). Player \(B\)'s payoff is given as follows._ 1. _If_ \(\frac{M_{B}}{c_{B}}\leq R_{A}+p_{A}\)_, then_ \(u_{B}(\textbf{p})\) _is decreasing for all_ \(p_{B}\in[0,\frac{M_{B}}{c_{B}}]\)_._ 2. _If_ \(R_{A}+p_{A}<\frac{M_{B}}{c_{B}}\leq\frac{R_{A}}{c_{B}}+p_{A}\)_, then_ \[u_{B}(\textbf{p})=\begin{cases}u_{B}^{1A}(\textbf{p}),&\text{if }p_{B}\in[0,p_{A}]\\ u_{B}^{2B}(\textbf{p}),&\text{if }p_{B}\in(p_{A},p_{B}^{1B}]\\ u_{B}^{1B}(\textbf{p}),&\text{if }p_{B}\in(p_{B}^{1B},\frac{M_{B}}{c_{B}}]\end{cases}\] (44) _where_ \(p_{B}^{1B}\) _is the unique solution to_ \[F(p_{B}):=\frac{(R_{A}+p_{A}-p_{B})^{2}}{2R_{A}+p_{A}-p_{B}}=M_{B}-c_{B}p_{B}.\] (45) 3. _If_ \(\frac{R_{A}}{c_{B}}+p_{A}<\frac{M_{B}}{c_{B}}\leq\frac{1}{c_{B}}\left(p_{A}+ \frac{R_{A}+\sqrt{R_{A}(R_{A}+2p_{A})}}{2}\right)\)_, then_ \[u_{B}(\textbf{p})=\begin{cases}u_{B}^{1A}(\textbf{p}),&p_{B}\in[0,p_{B}^{1A}]\\ u_{B}^{2A}(\textbf{p}),&p_{B}\in(p_{A}^{1B},p_{A}]\\ u_{B}^{1B}(\textbf{p}),&p_{B}\in(p_{A},\frac{M_{B}}{c_{B}}]\end{cases}\] (46) _where_ \(p_{B}^{1A}\) _is the unique solution to_ \[G(p_{B}):=\frac{2(M_{B}+(1-c_{B})p_{B}-p_{A})^{2}}{2(M_{B}-c_{B}p_{B})-(p_{A}- p_{B})}=R_{A}.\] (47) _on_ \(p_{B}\in(0,p_{A})\) Fig. 4: This plot illustrates the Stackelberg equilibrium payoff (red line, Proposition 5.1) to player \(B\) contrasted with its payoff if it did not have the opportunity to respond with pre-allocated resources, i.e. setting \(p_{B}=0\) (green dashed line, Theorem 4.2). Notably, there is a dramatic improvement in performance when player \(B\) is sufficiently budget-rich, \(\frac{M_{B}}{c_{B}}=\frac{M_{A}}{c_{A}}\). In this example, we set \(M_{A}=0.5\), \(c_{A}=0.2\), and \(c_{B}=0.5\). We vary \(M_{B}\) from 0 to 3. 
* If \(\frac{M_{B}}{c_{B}}>\frac{1}{c_{B}}\left(p_{A}+\frac{R_{A}+\sqrt{R_{A}(R_{A}+2p_{A })}}{2}\right)\), then \[u_{B}(\mathbf{p})=\begin{cases}u_{B}^{2A}(\mathbf{p}),&\text{if }p_{B}\in[0,p_{A}]\\ u_{B}^{1B}(\mathbf{p}),&\text{if }p_{B}\in(p_{A},\frac{M_{B}}{c_{B}}].\end{cases}\] (48) The proof is deferred to Appendix B. The final Lemma characterizes player \(B\)'s best response to any fixed player \(A\) strategy. **Lemma 5.2**: _Consider any fixed strategy \((p_{A},R_{A})\) for player \(A\). Player \(B\)'s best-response_ \[p_{B}^{*}:=\operatorname*{arg\ max}_{p_{B}\in[0,\frac{M_{B}}{c_{B}}]}u_{B}(p_{ A},p_{B}) \tag{49}\] _is determined according to_ \[p_{B}^{*}=\begin{cases}0,&\text{if }\frac{M_{B}}{c_{B}}<h(R_{A},p_{A})\\ \hat{p}_{B},&\text{if }\frac{M_{B}}{c_{B}}>h(R_{A},p_{A})\\ 0\text{ or }\hat{p}_{B},&\text{if }\frac{M_{B}}{c_{B}}=h(R_{A},p_{A})\end{cases} \tag{50}\] _where \(\hat{p}_{B}\) was defined in (39), and \(h(R_{A},p_{A})\in(R_{A}+p_{A},p_{A}+\frac{R_{A}+\sqrt{R_{A}(R_{A}+2p_{A})}}{2})\) is the unique value of \(\frac{M_{B}}{c_{B}}\) at which \(u_{B}^{1A}(p_{A},0)=u_{B}^{1B}(p_{A},\hat{p}_{B})\)._ The proof is deferred to Appendix C, where we thoroughly analyze the properties of the functions \(u_{B}^{1A}\), \(u_{B}^{2A}\), \(u_{B}^{1B}\), and \(u_{B}^{2B}\) as characterized by Lemma 5.1. This allows us to identify the maximizer \(p_{B}^{*}\in[0,\frac{M_{B}}{c_{B}}]\) of \(u_{B}\) for all possible cases stated in Lemma 5.1. ### Proof of Proposition 5.1 We are now ready to establish Proposition 5.1. Let us define \[T_{1}(p_{A}) :=p_{A}+R_{A}(p_{A}) \tag{51}\] \[T_{2}(p_{A}) :=p_{A}+\frac{R_{A}(p_{A})+\sqrt{R_{A}(p_{A})(R_{A}(p_{A})+2p_{A} )}}{2}.\] where \(R_{A}(p_{A})=M_{A}-c_{A}p_{A}\). It holds that \(T_{1}(0)=T_{2}(0)=M_{A}\), \(T_{1}(\frac{M_{A}}{c_{A}})=T_{2}(\frac{M_{A}}{c_{A}})=\frac{M_{A}}{c_{A}}\), and \(T_{1}(p_{A})<T_{2}(p_{A})\) on the interval \(p_{A}\in(0,\frac{M_{A}}{c_{A}})\). We prove the result item by item. 1) Suppose \(\frac{M_{B}}{c_{B}}\leq M_{A}\). In this case, we have \(\frac{M_{B}}{c_{B}}\leq T_{1}(p_{A})\) for all \(p_{A}\in[0,\frac{M_{A}}{c_{A}}]\), with equality at \(p_{A}=0\) if and only if \(\frac{M_{B}}{c_{B}}=M_{A}\). Thus, player \(B\)'s best-response against any \(p_{A}\) is \(p_{B}^{*}=0\) (Lemma 5.2). The scenario reduces to the optimization problem from Corollary 4.1. 2) Suppose \(M_{A}<\frac{M_{B}}{c_{B}}\leq\frac{M_{A}}{c_{A}}\). By Lemma 5.2, the threshold at which player \(B\)'s best-response switches is given by the value of \(p_{A}\) that satisfies \(h(M_{A}-c_{A}p_{A},p_{A})=\frac{M_{B}}{c_{B}}\). Equivalently, this is the value of \(p_{A}\) that satisfies \(u_{B}^{1A}(p_{A},0)=u_{B}^{1B}(p_{A},\hat{p}_{B})\) with \(\frac{M_{B}}{c_{B}}\leq\frac{1}{c_{B}}T_{2}(p_{A})\). The latter condition ensures that either \(u_{B}^{1A}(p_{A},0)\) or \(u_{B}^{1B}(p_{A},\hat{p}_{B})\) is player \(B\)'s best-response payoff (Lemma 5.1). One can write \[u_{B}^{1B}(p_{A},\hat{p}_{B})=1-\frac{c_{A}(2-c_{B})}{2}\frac{\frac{M_{A}}{c_{A }}-p_{A}}{\frac{M_{B}}{c_{B}}-p_{A}} \tag{52}\] This is a decreasing and concave function on \(p_{A}\in[0,\frac{M_{B}}{c_{B}})\), and it decreases to \(-\infty\) as \(p_{A}\rightarrow\frac{M_{B}}{c_{B}}\). 
The payoff \(u_{B}^{1A}(p_{A},0)\) is given by \[u_{B}^{1A}(p_{A},0)=\frac{M_{B}}{2\sqrt{R_{A}(p_{A})}}\left(\frac{2(T_{2}(p_{A} )-p_{A})}{T_{1}(p_{A})+2(T_{2}(p_{A})-p_{A})}\right)^{2} \tag{53}\] This function has a single critical point in the interval \([0,\frac{M_{A}}{c_{A}}]\) at \(\bar{p}_{A}=\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}}\), which is a local minimum. It is decreasing on \([0,\bar{p}_{A}]\) and increasing on \((\bar{p}_{A},\frac{M_{A}}{c_{A}}]\). There is a unique value \(p_{A}^{\dagger}\in(0,\frac{M_{B}}{c_{B}})\) such that \(\frac{M_{B}}{c_{B}}\leq\frac{1}{c_{B}}T_{2}(p_{A}^{\dagger})\) where these two functions intersect. Note that from Lemma 5.2, \(p_{A}^{\dagger}\) is the unique value that satisfies \[h(M_{A}-c_{A}p_{A}^{\dagger},p_{A}^{\dagger})=\frac{M_{B}}{c_{B}}. \tag{54}\] Player \(B\)'s best-response is \(p_{B}^{*}=\hat{p}_{B}\) for \(p_{A}<p_{A}^{\dagger}\), and \(p_{B}^{*}=0\) for \(p_{A}>p_{A}^{\dagger}\) (Lemma 5.2). Consequently, player \(A\)'s payoff (under player \(B\)'s best-response) is given by \[u_{A}(p_{A},p_{B}^{*})=\begin{cases}1-u_{B}^{1B}(p_{A},\hat{p}_{B}),&\text{if }p_{A} \in[0,p_{A}^{\dagger})\\ 1-u_{B}^{1A}(p_{A},0),&\text{if }p_{A}\in(p_{A}^{\dagger},\frac{M_{A}}{c_{A}}]\end{cases} \tag{55}\] We observe that \(u_{A}(p_{A},p_{B}^{*})\) is increasing on \(p_{A}\in[0,p_{A}^{\dagger})\). If \(p_{A}^{\dagger}\geq\bar{p}_{A}\), then \(u_{A}(p_{A},p_{B}^{*})\) is decreasing on \((p_{A}^{\dagger},\frac{M_{A}}{c_{A}}]\), and hence player \(A\)'s security strategy is \(p_{A}^{*}=p_{A}^{\dagger}\). The resulting payoff to player \(A\) is \[1-u_{B}^{1B}(p_{A}^{\dagger},\hat{p}_{B})=\frac{c_{A}(2-c_{B})}{2}\frac{\frac{M_{A}} {c_{A}}-p_{A}^{\dagger}}{\frac{M_{B}}{c_{B}}-p_{A}^{\dagger}}. \tag{56}\] If \(p_{A}^{\dagger}<\bar{p}_{A}\), then \(u_{A}(p_{A},p_{B}^{*})\) is increasing on \((p_{A}^{\dagger},\bar{p}_{A}]\) and decreasing on \((\bar{p}_{A},\frac{M_{A}}{c_{A}})\). Hence, player \(A\)'s security strategy is \(p_{A}^{*}=\bar{p}_{A}=\frac{2(1-c_{A})}{2-c_{A}}\frac{M_{A}}{c_{A}}\). The resulting payoff to player \(A\) is given by Corollary 4.1. 3) Suppose \(\frac{M_{B}}{c_{B}}>\frac{M_{A}}{c_{A}}\). In this case, the function \(u_{B}^{1B}(p_{A},\hat{p}_{B})\) is strictly increasing on \(p_{A}\in[0,\frac{M_{A}}{c_{A}}]\), any intersection (if respond to the pre-allocation before the final decisive round. This highlights the significance that more dynamic and sequential interactions can have on a player's eventual performance. Future work will involve studying these dynamic interactions in richer environmental contexts, e.g. with multiple fronts of battlefields.
2307.13373
Magnetic Effect on the Potential Barrier for Nucleosynthesis
We demonstrated that a weak magnetic field can increase the permittivity, leading to a reduction in the potential barrier within the Debye sphere consisting of electrons and a nucleus. By solving the Boltzmann equation with the inclusion of the magnetic field, we obtained the magnetized permittivity. The resulting enhanced permittivity field inversely decreases the potential barrier, thereby increasing the reaction rate between two fusing nuclei. We compared this Boltzmann kinetic approach with the Debye potential method. We found that they are qualitatively consistent. Further, we also derived the magnetized Debye potential composed of the conventional term with a new magnetic effect. Both approaches indicate that magnetized plasmas, which have existed since the Big Bang, have ultimately influenced permittivity, potential barrier, and nucleosynthesis.
Kiwan Park
2023-07-25T09:47:10Z
http://arxiv.org/abs/2307.13373v1
# Magnetic Effect on the Potential Barrier for Nucleosynthesis ###### Abstract We demonstrated that a weak magnetic field can increase the permittivity, leading to a reduction in the potential barrier within the Debye sphere consisting of electrons and a nucleus. By solving the Boltzmann equation with the inclusion of the magnetic field, we obtained the magnetized permittivity. The resulting enhanced permittivity field inversely decreases the potential barrier, thereby increasing the reaction rate between two fusing nuclei. We compared this Boltzmann kinetic approach with the Debye potential method. We found that they are qualitatively consistent. Further, we also derived the magnetized Debye potential composed of the conventional term with a new magnetic effect. Both approaches indicate that magnetized plasmas, which have existed since the Big Bang, have ultimately influenced permittivity, potential barrier, and nucleosynthesis. Vlasov equation, magnetized permittivity, potential barrier, Debye potential ## 1 Introduction Magnetic fields (\(B\)) and plasmas are prevalent throughout the Universe. However, despite extensive research, the effect of magnetic fields on the evolution of celestial plasma systems remains partially understood. In particular, the role of magnetized plasma in nucleosynthesis is not well understood. Even a longstanding debate exists regarding the behavior of unmagnetized plasma (electrons) in fusion ions. Briefly, nucleosynthesis proceeds through a series of processes including the proton-proton (pp) chain, CNO cycle, and triple-alpha reaction. The reaction rate is represented as \[R\sim\langle\sigma v\rangle = \frac{2^{3/2}}{\sqrt{\pi\mu}}\frac{1}{(k_{B}T)^{3/2}}\int_{0}^{ \infty}S(E)\,exp\bigg{[}-\frac{E}{k_{B}T}-\frac{Z_{1}Z_{2}e^{2}}{2\epsilon \hbar}\bigg{(}\frac{\mu}{2E}\bigg{)}^{1/2}\bigg{]}dE \tag{1}\] \[\sim \frac{S(E_{0})}{T^{2/3}}exp\bigg{[}-\bigg{(}\frac{\mu Z_{1}^{2}Z_ {2}^{2}}{m_{p}}\ \frac{7.726\times 10^{10}K}{\epsilon_{r}T}\bigg{)}^{1/3} \bigg{]}, \tag{2}\] where \(\mu\) and \(m_{p}\) are respectively'reduced mass' and 'proton mass' with \(E=3k_{B}T/2\). As this formula shows, the nucleosynthesis requires a significant amount of energy to overcome the Coulomb barrier between two fusing ions regardless of quantum tunneling effect. For instance, in the Solar core (\(T\sim 10^{7}K\)) and the early Universe after the Big Bang (\(t\sim 1-10^{2}s,\,T\sim 10^{10}K\)), the reaction rates for synthesizing deuterium \({}^{2}\)H in the initial step of the proton-proton (pp) chain are suppressed by \(1.53\times 10^{-7}\) and \(0.21\), respectively. Furthermore, in the subsequent step involving \({}^{3}\)He, these rates are further reduced by \(1.24\times 10^{-12}\) and \(0.064\). So, to explain the ubiquitously existing nuclei,,i.e., fundamental elements, in the whole Universe, various models have been suggested. And, one of them is the screening effect, attributed to the presence of dense electrons surrounding the ions. The screening effect is believed to lower the Coulomb barrier and enhance the reaction rate (see Debye-Huckel screening Boyd and Sanderson (2003)). Actually, it is evident that high-density plasmas can reduce the potential barrier from the positive nucleus and elevate the reaction rate. However, such dense plasma state is limited, rather dilute plasmas are more commonly observed. Salpeter (1954) proposed the concept of static electron screening surrounding fusing nuclei, which is essentially equivalent to Debye-Huckel screening. 
Subsequently, several studies and suggestions were based on this groundbreaking work. Bahcall _et al._ (1998) solved the Debye potential using the WKB approximation, where the Coulomb wave function naturally emerges from Salpeter's formulation. And, Gruzinov and Bahcall (1998) calculated the partial differential equation for the electron density matrix in the vicinity of two nuclei. Also, Dewitt et al. Dewitt _et al._ (1973) and Bruggen et al. Bruggen and Gough (1997) derived the reaction rate based on the free energy between two ions under the assumption of weak screening. Simultaneously, the suitability of Salpeter's static screening effect for dynamic stellar cores was called into question. For example, Shaviv and Shaviv (1997); Carraro _et al._ (1988); Hwang _et al._ (2021) considered the dynamic effects arising from the disparate velocities of nuclei and electrons. And, Opher and Opher (2000) provided a statistical reinterpretation of the Gibbs distribution of particles in plasmas. Furthermore, Shaviv and Shaviv (1996) investigated the interaction effects of electrons surrounding fusing nuclei. These examples demonstrate that authors have developed their own plasma models based on their respective backgrounds and approaches (Bahcall _et al._ (2002), and references therein). Interestingly, however, the influence of the ubiquitous background magnetic field on the permittivity \(\epsilon_{r}=\epsilon/\epsilon_{0}\) in the penetration factor \(P\sim\exp[-g(\epsilon,\,E,\,Z_{1},\,Z_{2})]\) has not yet been thoroughly explored (refer to Eq.(1)). The magnetic field has existed ubiquitously since the Big Bang. In the very early Universe, various quantum fluctuations, such as QCD or phase transitions followed by plasma fluctuation (Biermann battery effect), induced magnetic fields (Biermann, 1950; Cheng and Olinto, 1994; Tevzadze _et al._, 2012). These primordial magnetic fields (PMF) are inferred to have been very weak (\(10^{-62}-10^{-19}G\)) compared to the currently observed mean magnetic field strength (\(10^{-5}G\)), which implies various dynamo processes. However, the electrons surrounding the nuclei can be magnetized regardless of the strength of the magnetic field. Moreover, a weak magnetic field, which loosely constrains the charged particles (electrons) but still accelerates their motion, can perturb the distribution more efficiently than a strong magnetic field. In contrast, the strong magnetic field has the effect of suppressing the perturbation through the strong constraint. Statistically, the closed structure composed of a nucleus and electrons can be regarded as a canonical ensemble system dominated by Hamiltonian dynamics with generalized coordinates "\(q_{s}\)" and momentum "\(p_{s}(=m_{s}v_{s}+q_{s}A)\)". Liouville's theorem indicates that the total time (material) deriva tive of the density or distribution function in phase space is \(zero\) as we move along the trajectory dominated by Hamiltonian dynamics. Therefore, some external influences on the system can change the distributions of components, especially light electrons. And it has the effect of modifying the electron density shielding the static electric field from the heavy nuclei. We show that the magnetic effect increases permittivity followed by the drop of the potential barrier between two reacting nuclei. In section 2, we briefly show how to get the permittivity with Boltzmann equation and electromagnetic theory, analytically and numerically. 
In section 3, we show our numerical results for the magnetized permittivity, potential barrier, and penetration factor. In section 4, we derive the magnetized Debye potential. We used the conventional approach with the additional magnetic effect. In section 5, we summarize our work. ## 2 Theoretical analysis I: Kinetic approach In comparison to the overall distribution \(f(\mathbf{r},\,\mathbf{v},\,t)\), the slightly higher-density electrons surrounding the nucleus can be regarded as the perturbed distribution \(f_{1}(\mathbf{r},\,\mathbf{v},\,t)\). Moreover, since the electrons shield the electric field from the nucleus, they effectively act as bound charges \(\rho_{b}=\int f_{1}(\mathbf{r},\,\mathbf{v},\,t)d\mathbf{r}d\mathbf{v}\) and polarize the system with a dipole moment \(\mathbf{P}\): \(\mathbf{D}=\epsilon\mathbf{E}=\epsilon_{0}\mathbf{E}+\mathbf{P}\). By utilizing the convolution property of Fourier Transformation and taking its divergence, we can separate the longitudinal permittivity \(\epsilon_{l}\) from the electric displacement field \(\mathbf{D}\) as follows: \[kD(k,\,\omega)=k\epsilon(k,\,\omega)E(k,\,\omega)=k\,\epsilon_{0}E(k,\,\omega )-\rho_{b}(k,\,\omega)=k\,\epsilon_{0}E(k,\,\omega)+e\int f_{1}(\mathbf{r},\, \mathbf{v},\,t)\,d\mathbf{r}d\mathbf{v}. \tag{3}\] We apply this relation to the system that is weakly magnetized with \(B_{0}\). The perturbed distribution function \(f_{1}\) is1 Footnote 1: Harris dispersion relation is also obtained from this equation (Arfken and Weber, 2005; Gurnett and Bhattacharjee, 2017). However, Harris mode solves \(\epsilon\) for a nontrivial potential in Poisson equation \(\nabla^{2}\Phi=-\rho/\epsilon_{0}\rightarrow\Phi(k,\,\omega)=N(k,\,\omega)/D(k,\,\omega)\). With \(D(k,\,\omega)=0(=\epsilon)\), the dispersion relation constraining \(k\) and \(\omega\) is derived. \(\epsilon\) is not permittivity. \[\frac{\partial f_{1}}{\partial t}+\mathbf{v}\cdot\nabla f_{1}-\frac{e}{m_{e}} \mathbf{E}\cdot\nabla_{V}f_{0}-\frac{e}{m_{e}}\mathbf{v}\times\mathbf{B}_{0} \cdot\nabla_{V}f_{1}=0. \tag{4}\] Using \(v_{x}=v_{\perp}\cos\,\phi\), \(v_{y}=v_{\perp}\sin\,\phi\), and cyclotron frequency \(\omega_{ce}\equiv eB_{0}/m_{e}\), we can convert the fourth term, i.e., Lorentz force into \(\omega_{ce}\partial f_{1}/\partial\phi\). Then, the Fourier transformed Boltzmann equation is represented as \[\frac{\partial f_{1}}{\partial\phi}-i(\alpha+\beta\cos\,\phi)f_{1}+\frac{e}{m_ {e}\omega_{ce}}\mathbf{E}\cdot\nabla_{V}f_{0}=0, \tag{5}\] where \(\alpha\equiv(k_{\parallel}v_{\parallel}-\omega)/\omega_{ce}\) and \(\beta\equiv k_{\perp}v_{\perp}/\omega_{ce}\). And, then, we get \[f_{1}=-\frac{e}{m_{e}\omega_{ce}}\,e^{(\alpha\phi+\beta\sin\,\phi)}\int^{\phi }e^{-(\alpha\phi^{\prime}+\beta\sin\,\phi^{\prime})}\,\mathbf{E}\cdot\nabla_{ V}f_{0}\,d\phi^{\prime}. \tag{6}\] Applying \(f_{1}\) to Eq.(3), we can derive the magnetized permittivity as follows: \[\epsilon_{l}=\epsilon_{0}+\frac{i\epsilon_{0}}{k^{2}\omega_{ce}}\omega_{pe}^{ 2}\int v_{\perp}dv_{\perp}dv_{\parallel}d\phi\overbrace{e^{i(\alpha\phi+\beta \sin\,\phi)}}^{A}\int^{\phi}\overbrace{e^{-i(\alpha\phi^{\prime}+\beta\sin \,\phi^{\prime})}\,\mathbf{k}\cdot\nabla_{V}F_{0}\,d\phi^{\prime}}^{B}. \tag{7}\] Here, plasma frequency \(\omega_{pe}^{2}\) is defined as \(n_{e0}q_{s}^{2}/\epsilon_{0}m_{e}\), and the volume element in cylindrical coordinate is \(d^{3}v=v_{\perp}dv_{\perp}dv_{\parallel}d\phi\). 
Also, we use the anisotropic Maxwell distribution \(F_{0}=f_{0}/n_{e0}\): \[F_{0}=\bigg{(}\frac{1}{2\pi k_{B}T_{\perp}}\bigg{)}\bigg{(}\frac{1}{2\pi k_{B}T _{\parallel}}\bigg{)}^{1/2}e^{-\frac{m_{e}}{2k_{B}T_{s}}(v_{\parallel}^{2}+v _{\perp}^{2})}. \tag{8}\] The exponential term in '\(A\)' can be represented by Bessel function \(e^{\alpha+\beta\sin\,\phi}=\sum_{m=-\infty}^{\infty}J_{m}(\beta)e^{i(\alpha+m)\phi}\), and \(k\cdot\nabla_{V}\) is written as \(k_{\parallel}\partial/\partial\,V_{\parallel}+k_{\perp}\partial/\partial\,V_ {\perp}cos\,\phi\)(Boyd and Sanderson, 2003; Arfken and Weber, 2005). Combined '\(A\)' and '\(B\)' are \[\sum_{m,\,n}J_{m}(\beta)J_{n}(\beta)\bigg{[}ik_{\parallel}\frac{\partial F_{0} }{\partial v_{\parallel}}\frac{e^{i(m-n)\phi}}{\alpha+n}+i\frac{k_{\perp}}{2} \frac{\partial F_{0}}{\partial v_{\parallel}}\bigg{\{}\frac{e^{i(m-n+1)\phi} }{\alpha+n-1}+\frac{e^{i(m-n-1)\phi}}{\alpha+n+1}\bigg{\}}\bigg{]}. \tag{9}\] The index '\(n\)' is a dummy variable, and \(\int_{0}^{2\pi}e^{i(m-n)}d\,\phi\) is defined as Dirac delta function \(2\pi\delta_{m,\,n}\). Using Bessel recurrence relation \(J_{n+1}(\beta)+J_{n-1}(\beta)=(2\pi/\beta)J_{n}(\beta)\), we can derive \[\frac{\epsilon_{l}}{\epsilon_{0}}=1+\frac{2\pi\omega_{pe}^{2}}{k^{2}}\int_{- \infty}^{\infty}dv_{\parallel}\int_{0}^{\infty}v_{\perp}\,dv_{\perp}\sum_{n} \bigg{[}\frac{m_{s}k_{\parallel}v_{\parallel}}{k_{B}T_{e}}+\frac{nm_{e}\omega_ {ce}}{k_{B}T_{e}}\bigg{]}\frac{F_{0}J_{n}^{2}(\beta)}{k_{\parallel}v_{\parallel }-\omega+n\omega_{ce}}. \tag{10}\] Expanding Bessel function, we can make the result more suitable for the numerical calculation (Arfken and Weber, 2005). \[\frac{\epsilon_{l}}{\epsilon_{0}} = 1+\frac{2\pi\omega_{pe}^{2}}{k^{2}}\int_{-\infty}^{\infty}dv_{ \parallel}\int_{0}^{\infty}v_{\perp}\,dv_{\perp}\sum_{n}\bigg{[}\frac{m_{e}k_{ \parallel}v_{\parallel}}{k_{B}T_{e}}+\frac{nm_{e}\omega_{ce}}{k_{B}T_{e}}\bigg{]} \frac{F_{0}}{k_{\parallel}v_{\parallel}-\omega+n\omega_{ce}}\bigg{(}\sum_{e=0 }^{\infty}\frac{(-1)^{s}}{s!(s+n)!}\bigg{(}\frac{\beta}{2}\bigg{)}^{n+2s} \bigg{)}^{2}.\] Technically, permittivity \(\epsilon_{l}\) in this equation represents the area between the horizontal axis of \(v_{\parallel}\) and integrand. However, the usual residue theorem with singularities cannot be applied because of the divergent \(F_{0}\) with \(v_{im}\rightarrow\infty\). Instead, we should integrate its principal value and poles directly. \[\frac{\epsilon_{l}}{\epsilon_{0}} = 1+\frac{2\pi\omega_{pe}^{2}}{k^{2}}P\int_{-\infty}^{\infty}dv_{ \parallel}\int_{0}^{\infty}v_{\perp}\,dv_{\perp} \tag{12}\] \[\bigg{(}\sum_{n}\frac{m_{e}}{k_{B}T_{e}}\bigg{(}\frac{k\,v_{ \parallel}\cos\theta+n\omega_{ce}}{k\,v_{\parallel}\cos\theta-\omega+n\omega_ {ce}}\bigg{)}F_{s0}\bigg{[}\sum_{s=0}^{\infty}\frac{(-1)^{s}}{s!(s+n)!}\bigg{(} \frac{m_{e}kv_{\perp}\sin\theta}{2eB_{0}}\bigg{)}^{n+2s}\bigg{]}^{2}\bigg{)}\] \[+i\pi\frac{k}{|k|}\frac{2\pi\omega_{pe}^{2}}{k^{2}}\sum_{n}\frac{ m_{e}}{k_{B}T_{e}}\big{(}k\,v_{\parallel}\cos\theta+nm_{e}\omega_{ce}\bigg{)} \bigg{[}\sum_{s=0}^{\infty}\frac{(-1)^{s}}{s!(s+n)!}\bigg{(}\frac{m_{e}kv_{ \perp}\sin\theta}{2eB_{0}}\bigg{)}^{n+2s}\bigg{]}^{2}F_{0}.\] We applied trapezoidal rule to calculate Eq.(12) but did not consider the imaginary part in this paper (Newman, 2012). The range of wavenumber \(k\) is from 1 to 3000, \(v_{min}=-10^{8}\) to \(v_{max}=10^{8}\), and mesh size is \(\Delta v=0.5\). 
We inferred the electron density \(n_{e0}=8.18363\times 10^{13}m^{-3}\) and temperature \(T_{e}=2.38\times 10^{6}\,K\) near the Solar tachocline regime with arbitrary frequency \(\omega=10^{4}Hz\) smaller than the plasma frequency \(\omega_{pe}=5.1\times 10^{8}Hz\). The numerical range within 30% of light velocity \(c\) is large enough to normalize the distribution function with the mesh. Also, the wavenumber is sufficient for the discrete Fourier transform. We divided the integral range of \(v_{\parallel}\) into \((v_{min},\,v_{res,\,n}-\delta)\) and \((v_{res,\,n}+\delta,\,v_{max})\) skipping the singular point \(v_{res,\,n}\) with \(\delta=5\times 10^{-7}m\). We expanded Bessel function up to \(v_{\perp}^{18}\) for the case that the velocity is almost parallel to the \(B\) field, i.e., \(\beta\sim v_{\perp}\sim 0\). And, the result was already saturated in the order of \(v_{\perp}^{10}\). ## 3 Numerical Result Fig.1(a) illustrates the Fourier-transformed evolving permittivity (\(\epsilon_{0}\epsilon_{r}\), \(\epsilon_{0}=8.85\times 10^{-12}F/m\)) influenced by the magnetic field. We applied various magnetic fields (\(0-1\times 10^{-5}G\)) to the system with wavenumbers \(k\) ranging from 1 to 3000. The permittivity remains degenerate up to a certain critical wavenumber \(k_{crit}\), regardless of the magnetic field strength. However, it becomes separated for \(k>k_{crit}\) and is amplified by the weak magnetic field. The degree of separation is inversely proportional to the strength of the magnetic field. For magnetic fields stronger than \(1\times 10^{-7}G\), the permittivity becomes essentially the same as the unmagnetized case. This can be attributed to the \(B\) term in the denominator of Eq.(12). Furthermore, Eq.(4) demonstrates the growth of \(f_{1}\) as the Lorentz force decreases, which is consistent with Liouville's theorem. Fig.1(b) shows \(\epsilon_{r}(r/\lambda_{D})\) in real space. Here, '\(r\)' represents the distance from the nucleus, \(\lambda_{D}\) is Debye length \(\sqrt{\epsilon_{0}k_{B}T/e^{2}n_{e}}\sim 1.17\times 10^{-2}cm\). We performed an inverse Fourier transform of \(\epsilon(k)\) using \[\epsilon(r_{n})=\frac{1}{N}\sum_{k=0}^{N-1}\epsilon(k)exp\bigg{(}i\frac{2\pi kn }{N}\bigg{)}, \tag{13}\] where \(r_{n}=nL/N\). Near the nucleus, \(\epsilon_{r}\) is split into the various levels according to the applied \(B\) field, inversely proportional to the magnetic field. However, above the critical \(B_{crit}\) field, permittivity is not split but converges to the nonmagnetized case. And, at \(n\sim N-1\), the oscillation by \(\cos(2\pi kn/N)\) is almost negligible, which appears as the sudden increase of \(\epsilon(r)\) at \(r\sim\lambda_{D}\). Fig.1(c) includes the evolution of potential energy \(\phi=Q/4\pi\epsilon r\) for a hydrogen nucleus with permittivity \(\epsilon(r)\). Since permittivity constitutes the denominator, potential energy evolves in response to the \(B\) field. The plot illustrates that the weak magnetic field decreases the potential barrier. Fig.1(d) shows the evolution of penetration factor \(P(E)\). The result clearly show that the weak \(B\) field, which reduces the potential barrier, enhances the probability of penetration and reaction. The actual potential barrier is of course much more complex. In principle. it should be calculated with the interaction energy among the screening charges around two interacting nuclei and environmental lighter nuclei. However, we do not their effects at the moment. 
We focus on the weak magnetic effect on the perturbed distribution \(f_{1}\), bound charges, and the enhanced nucleus reaction. This may be a more common mechanism in the whole Universe history. In Fig. 2, we compared the kinetic approach with the conventional Debye screened potential. In the absence of the magnetic field (\(B_{0}=0\)), the perturbed electron density around the nucleus can be simply represented as \[f_{1}=\frac{e}{i\,m_{e}}\frac{\mathbf{E}\cdot\nabla_{V}f_{0}}{(\mathbf{k}\cdot \mathbf{v}-\omega)}. \tag{14}\] This equation can be calculated using the same method as in Eq.(4)-(12) and compared to Debye potential: \[\phi=\frac{Q}{4\pi\epsilon_{0}r}\exp\bigg{[}-\frac{\sqrt{2}r}{\lambda_{D}} \bigg{]},\ \epsilon\rightarrow\epsilon_{0}\exp\bigg{[}r\sqrt{\frac{2n_{e}e^{2}}{\epsilon_ {0}k_{B}T_{e}}}\bigg{]}. \tag{15}\] Fig.2(a) illustrates that potential energy increases with the increasing temperature. However, potential energy in Fig.2(b) decreases as the electron density increases. The nonmagnetized potential energy from Eq.(14) is qualitatively consistent with the Debye potential. However, if there is a current density \(\mathbf{J}=Nq_{e}\mathbf{V}\) present, the dependence of potential energy on the electron density becomes opposite to our case and the Debye screening potential energy Bergman (2000); Das (2013). The kinetic model and Debye approach explicitly and implicitly assume the presence of bound charge rather than the current density \(\mathbf{J}\). ## 4 Theoretical analysis II: magnetized Debye potential We can also consider the magnetic effect on the conventional Debye potential. The momentum equation with the \(B\) field and collision frequency \(\nu_{m}\) is represented as \[\frac{d\mathbf{V}_{s}}{dt} = q_{s}(\mathbf{E}+\mathbf{V}_{s}\times\mathbf{B})-\frac{k_{B}T}{ n_{s}}\nabla n_{s}-\nu_{m}\mathbf{V}_{s} \tag{16}\] \[\rightarrow-i\omega V_{s} \sim -q_{s}\nabla\Phi+q_{s}V_{s}B-\frac{k_{B}T}{n_{s}}\nabla n_{s}- \nu_{m}V_{s}.\,\,(s=i,\,e) \tag{17}\] We integrate the equation from \(\infty\) to \(r\). Then, \[n_{i}(r)=n_{0}\,\exp\big{[}-\frac{e\Phi(r)}{k_{B}T}+\frac{1}{3eB }(i\omega-\nu_{m}+eB)\big{]}, \tag{18}\] \[n_{e}(r)=n_{0}\,\exp\big{[}+\frac{e\Phi(r)}{k_{B}T}-\frac{1}{3eB }(i\omega-\nu_{m}-eB)\big{]}. \tag{19}\] Here, we used the dimensional analysis and mean value theorem assuming the quasi-continuous velocity distribution: \[n_{e}(\infty)=n_{i}(\infty)\equiv n_{0},\int_{\infty}^{r}\nabla \Phi\,dr=\Phi(r), \tag{20}\] \[\int_{\infty}^{r}V_{i}dr\sim U_{i}(r)-U_{i}(\infty)\rightarrow \overline{U}_{i}\overline{r}_{i},\,\,\int_{\infty}^{r}V_{e}dr\sim U_{e}(r)-U_{ e}(\infty)\rightarrow-\overline{U}_{e}\overline{r}_{e} \tag{21}\] Additionally, we used \(k_{B}T=3m_{s}\overline{U}_{s}^{2}\) and \(\overline{r}_{s}=m_{s}\overline{U}_{s}/q_{s}B\) based on the balance between Lorentz force and centrifugal force. It should be noted that \(\overline{U}_{s}\) with \(m_{s}\) for \(s=i,\,e\) was replaced by the system temperature \(T\). Subsequently, by applying \(-\epsilon_{0}\nabla^{2}\Phi=e(n_{i}-n_{e})\), the potential energy can be represented as follows: \[\frac{1}{r^{2}}\frac{\partial}{\partial\,r}\bigg{(}r^{2}\frac{ \partial\,\Phi}{\partial\,r}\bigg{)}-\frac{2}{\lambda_{D}^{2}}\Phi+\frac{2n_{ 0}}{3\epsilon_{0}B}\left(i\omega-\nu_{m}\right)=0. 
\tag{22}\] With a trial function \(\Phi=QF(r)/4\pi\epsilon_{0}r\), we have \[\Phi(r)=c_{1}\frac{Q}{4\pi\epsilon_{0}r}e^{-\frac{\sqrt{2}r}{\lambda_{D}}}+c_{2}\frac{Q}{4\pi\epsilon_{0}r}e^{\frac{\sqrt{2}r}{\lambda_{D}}}+\frac{n_{0}\lambda_{D}^{2}}{3\epsilon_{0}BQ}(i\omega-\nu_{m}). \tag{23}\] Since \(\nu_{m}\) is caused by the combined effect of the electric field, magnetic field, and thermal pressure, the collision frequency may be limited to the internal range of \(\lambda_{D}\). So, in order to satisfy \(\Phi(\infty)=0\) for \(r\gg\lambda_{D}\), \(c_{2}\) can be \[c_{2}=-\frac{4\pi r}{Q}e^{-\sqrt{2}r/\lambda_{D}}\frac{i\omega\,n_{0}\lambda_{D}^{2}}{3BQ}. \tag{24}\] If we set \(c_{1}=1\) for consistency, the modified potential is \[\Phi(r)=\frac{Q}{4\pi\epsilon_{0}r}e^{-\frac{\sqrt{2}r}{\lambda_{D}}}-\frac{n_{0}\nu_{m}\lambda_{D}^{2}}{3\epsilon_{0}BQ}. \tag{25}\] This result demonstrates that the potential is proportional to the magnetic field \(B\) and is consistent with the kinetic model. The magnetic effect becomes evident with the balance between the centrifugal force and the Lorentz force. Detailed information on \(\nu_{m}\) is required for a more precise investigation. We will not delve further into this topic at present. Nonetheless, note that the magnetic field acts as if it were an additional charged particle in the modified Debye potential. ## 5 Summary In our study, we solved the weakly magnetized Boltzmann equation by considering the system as an isolated canonical ensemble composed of the nucleus and bound charges. We demonstrated that the permittivity is inversely proportional to the magnetic field, indicating that the potential barrier between two fusing nuclei evolves proportionally with the magnetic field. This result is related to the Liouville theorem, which states that the net change of the density or distribution function in phase space is zero as we move along the trajectory dominated by Hamiltonian dynamics. The weak magnetic field reduces the acceleration effect in the Boltzmann equation, resulting in a decreased constraint on electrons by the magnetic field. This leads to an enhanced fluctuating electron distribution \(f_{1}(\mathbf{r},\;\mathbf{v},\;t)\) in configuration space to compensate for the loss. The equation \(\nabla\cdot(\epsilon\mathbf{E})=\epsilon_{0}\nabla\cdot\mathbf{E}+e\int f_{1}d\mathbf{r}d\mathbf{p}\) explains how the growth of \(f_{1}\) contributes to the increasing permittivity, which in turn leads to a decrease in the potential barrier. In contrast, for magnetic fields beyond a critical threshold, the electrons are strongly constrained, causing the system to behave similarly to a non-magnetized system. It is worth noting that in addition to the magnetic field, the permittivity is also influenced by factors such as electron density, temperature, and current density (Veselago, 1968; Bergman, 2000; Griffiths, 2017). The modified permittivity resulting from these effects ultimately affects the reaction rate in nucleosynthesis in the Universe. The authors acknowledge the support from the National Research Foundation of Korea: NRF-2021R1I1A1A01057517, NRF-2020R1A2C3006177, NRF-2021R1A6A1A03043957, and NRF-2020R1F1A1072570.
2308.12830
Bourgain-Brezis-Mironescu formula for $W^{s,p}_q$-spaces in arbitrary domains
Under certain restrictions on $s,p,q$, the Triebel-Lizorkin spaces can be viewed as generalised fractional Sobolev spaces $W^{s,p}_q$. In this article, we show that the Bourgain-Brezis-Mironescu formula holds for $W^{s,p}_q$-seminorms in arbitrary domains. This addresses an open question raised by Brazke-Schikorra-Yung in [Bourgain-Brezis-Mironescu convergence via Triebel-Lizorkin spaces; Calc. Var. Partial Differential Equations; 2023].
Kaushik Mohanta
2023-08-24T14:44:56Z
http://arxiv.org/abs/2308.12830v4
# Bourgain-Brezis-Mironescu formula for Triebel-Lizorkin spaces in arbitrary domains ###### Abstract. We show that the Bourgain-Brezis-Mironescu formula, regarding the limits of Gagliardo-type seminorms as \(s\to 1-\), holds for Triebel-Lizorkin spaces defined in arbitrary domains. This answers an open question raised by Brazke-Schikorra-Yung in [Bourgain-Brezis-Mironescu convergence via Triebel-Lizorkin spaces; Calc. Var. Partial Differential Equations; 2023]. Key words and phrases: BBM formula; Triebel-Lizorkin spaces; internal distance; arbitrary domain 2020 Mathematics Subject Classification: 46E35; 42B35 ## 1. Introduction Sobolev spaces arise naturally in the study of partial differential equations. They are defined in terms of weak derivatives. For an open set \(\Omega\subseteq\mathbb{R}^{N}\) and \(1\leq p<\infty\), the Sobolev space \(W^{1,p}(\Omega)\) is defined to be \(\{f\in L^{p}(\Omega)\ |\ [f]_{W^{1,p}(\Omega)}<\infty\}\), where \[[f]_{W^{1,p}(\Omega)}^{p}:=\int_{\Omega}|\nabla f(x)|^{p}dx.\] Some closely related spaces are the Triebel-Lizorkin spaces \(F_{p,q}^{s}(\Omega)\), equipped with the norm \([\cdot]_{F_{p,q}^{s}(\mathbb{R}^{N})}\). The explicit definition and classical results regarding these spaces can be found in [59]. For \(1\leq p,q<\infty\), \(\max\{0,\frac{N(q-p)}{pq}\}<s<1\), we have the characterization (see Theorem 1.2 of [56]) \[F_{p,q}^{s}(\mathbb{R}^{N}):=\left\{f\in L^{\max\{p,q\}}(\mathbb{R}^{N})\ \Big{|}\ \|f\|_{L^{p}(\mathbb{R}^{N})}+[f]_{W^{s,p}_{q}(\mathbb{R}^{N})}<\infty\right\},\] where \[[f]_{W^{s,p}_{q}(\Omega)}:=\left(\int_{\Omega}\left(\int_{\Omega}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{\frac{1}{p}}. \tag{1}\] In the special case \(p=q\), these spaces are related to the so-called fractional Sobolev spaces \(W^{s,p}(\Omega)\), defined by \[W^{s,p}(\Omega):=\left\{f\in L^{p}(\Omega)\ \Big{|}\ \|f\|_{L^{p}(\Omega)}+[f]_{W^{s,p}(\Omega)}<\infty\right\},\] where \([f]_{W^{s,p}(\Omega)}:=[f]_{W^{s,p}_{p}(\Omega)}\). Of course, when \(\Omega=\mathbb{R}^{N}\), or \(\Omega\) is a fractional extension domain (see [25]), we have \(W^{s,p}(\Omega)=F_{p,p}^{s}(\Omega)\). Bourgain-Brezis-Mironescu [8] showed that for any smooth and bounded domain \(\Omega\), \(1\leq p<\infty\), and any \(f\in W^{1,p}(\Omega)\), \[\lim_{s\to 1-}(1-s)[f]_{W^{s,p}(\Omega)}^{p}=K\|\nabla f\|_{L^{p}(\Omega)}^{p}.\] Conversely, for any \(f\in L^{p}(\Omega)\), if we have \[\lim_{s\to 1-}(1-s)[f]_{W^{s,p}(\Omega)}^{p}<\infty,\] then \(f\in W^{1,p}(\Omega)\) if \(p>1\) and \(f\in BV(\Omega)\) if \(p=1\). Later Davila [23] extended this result and proved that for any \(f\in BV(\Omega)\), \(\lim_{s\to 1-}(1-s)[f]_{W^{s,1}(\Omega)}=K|\nabla f|(\Omega)\). This is commonly known as the Bourgain-Brezis-Mironescu formula (BBM formula for short). Extensive research regarding the results of these two papers has been done by many authors. A non-exhaustive list of such works can be found in [1, 2, 3, 4, 6, 7, 9, 10, 11, 14, 15, 16, 17, 18, 19, 21, 24, 26, 28, 29, 30, 31, 32, 34, 33, 35, 36, 37, 38, 39, 41, 42, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 57, 60, 61]. Our first interest lies in works regarding the domain, that is, how far the smooth-and-bounded condition can be relaxed. In this direction, we mention three papers. The first is due to Leoni-Spector [43, 44].
They showed that for an arbitrary domain \(\Omega\), and \(f\in L^{p}(\Omega)\), \[\lim_{\lambda\to 0+}\lim_{s\to 1-}\int_{\Omega_{\lambda}}\int_{\Omega_{\lambda}}\frac{|f(x)-f(y)|^{p}}{|x-y|^{N+sp}}dydx=K[f]^{p}_{W^{1,p}(\Omega)},\] where \[\Omega_{\lambda}:=\{x\in\Omega\ |\ \text{dist}(x,\partial\Omega)>\lambda\}\cap B(0,\lambda^{-1}). \tag{2}\] The second result is due to the author with Bal-Roy [5], where it has been shown that the BBM-formula [8] holds if we take \(\Omega\) to be a \(W^{1,p}\)-extension domain. The third result is due to Drelichman-Duran [27]. They showed that for \(1<p<\infty\), an arbitrary bounded domain \(\Omega\), and any \(\tau\in(0,1)\), we have \[\lim_{s\to 1-}\int_{\Omega}\int_{B(x,\tau\text{dist}(x,\partial\Omega))}\frac{|f(x)-f(y)|^{p}}{|x-y|^{N+sp}}dydx=K[f]^{p}_{W^{1,p}(\Omega)}.\] The second direction of work regarding the BBM-formula that we are interested in is its extension to Triebel-Lizorkin spaces. The first work in this direction was done by Brazke-Schikorra-Yung [12]. By thoroughly examining various embedding constants, they explained why, although \(F^{s}_{p,p}=W^{s,p}\) for \(s\in(0,1)\) while \(F^{1}_{p,2}=W^{1,p}\), it makes sense for the scaled \(W^{s,p}\)-seminorm to converge to the \(W^{1,p}\)-seminorm even when \(p\neq 2\). They posed the open problem [12, Question 1.12] about the asymptotic behaviour of the \(W^{s,p}_{q}(\Omega)\)-seminorm as \(s\to 1\) in \(\mathbb{R}^{N}\). To our knowledge, [22] is the only work giving a partial answer to this question for \(\mathbb{R}^{N}\) and in the case \(1<q<p<\infty\). For a special class of bounded extension domains (called \((\varepsilon,\infty)\)-domains), the question is also answered for \(1<q<p<\infty\) in the preprint [62, Theorem 6.1]. The purpose of the present paper is to answer the question raised in [12]. In fact, we concentrate our focus on the following seminorm \[[f]_{\dot{W}^{s,p}_{q}(\Omega)}:=\left(\int_{\Omega}\left(\int_{B(x,\tau\text{dist}(x,\partial\Omega))}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{\frac{1}{p}}, \tag{3}\] as one can then extend the above question to arbitrary bounded domains, motivated by [27]. We go one step further and show that the boundedness of the domain is not necessary. Our main results are the following: **Theorem 1**.: _Let \(\Omega\subset\mathbb{R}^{N}\) be any open set and \(\tau\in(0,1)\). Assume one of the following conditions_ 1. \(1\leq q\leq p<\infty\)_,_ 2. \(1<p<q<\infty\) _with_ \(p\leq N\) _and_ \(q<\frac{Np}{N-p}\)_,_ 3. \(N<p<q<\infty\)_._ _Then there is a constant \(K=K(N,p,q)>0\) such that for any \(f\in W^{1,p}(\Omega)\), we have_ \[\lim_{s\to 1-}(1-s)^{\frac{p}{q}}\int_{\Omega}\left(\int_{B(x,\tau\text{dist}(x,\partial\Omega))}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx=K\int_{\Omega}|\nabla f(x)|^{p}dx. \tag{4}\] **Theorem 2**.: _Let \(\Omega\subset\mathbb{R}^{N}\) be an open set, \(\tau\in(0,1)\) and \(1\leq p,q<\infty\). If \(f\in L^{p}(\Omega)\cap L^{q}(\Omega)\) is such that_ \[L^{*}_{p,q}(f):=\lim_{s\to 1-}\int_{\Omega}\left((1-s)\int_{B(x,\tau\text{dist}(x,\partial\Omega))}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx<\infty,\] _then \(f\in W^{1,p}(\Omega)\) when \(p>1\), and \(f\in BV(\Omega)\) when \(p=1\)._ _Remark 3_.: Before proceeding further, let us discuss some difficulties that arise here, and strategies for overcoming them. The proofs of the main results roughly follow the outline of [8]. However, there are certain obstacles along that path.
The first obstacle arises when we want to apply the dominated convergence theorem to interchange the limit and the integral. A similar difficulty was faced and overcome in [27], but we had to take a different route (see lemma 9) for this purpose. The introduction of the second exponent \(q\) forces us to deviate from the usual route again; the case \(q\leq p\) is rather easy to handle, while for the case \(p<q\) a careful use of Sobolev embedding is needed. To take into account the case where the domain \(\Omega\) is unbounded, we need to restrict the seminorm further and define some new fractional Sobolev spaces (see eq. (5)), prove a version of the main result, theorem 1, in that context (see theorem 14), and then finally derive the proof of the main results from there. The paper is organized as follows: In section 2, we list some preliminary results, already known in the literature, which shall be useful for the proof of our main results. In section 3, we introduce a variant of fractional Sobolev spaces and prove some relevant embedding results. In section 4 we prove the main results in the context of these new spaces (see theorems 14 and 15). Finally, in section 5, we prove theorems 1 and 2. ## 2. Preliminary Results For the sake of completeness, we first state the well-known Sobolev inequality results: **Lemma 4**.: _Let \(1\leq p,q\leq\infty\), \(\tau\in(0,1)\), and one of the following hold_ 1. \(p<N\)_, and_ \(q\leq\frac{Np}{N-p}\)_,_ 2. \(p=N\)_, and_ \(q<\infty\)_,_ 3. \(p>N\)_._ _Then there is a constant \(C=C(p,q,N)>0\) such that the following \((q,p)\)-Poincare inequality holds for any \(f\in W^{1,p}(B(0,\frac{t}{\tau}))\):_ \[\frac{1}{t^{N}}\int_{B(0,t)}|f(y)|^{q}dy\leq C(N,p,q)t^{q}\left(\frac{1}{t^{N}}\int_{B(0,t)}|\nabla f(y)|^{p}dy\right)^{\frac{q}{p}}+C(N,p,q)\left(\frac{1}{t^{N}}\int_{B(0,t)}|f(y)|^{p}dy\right)^{\frac{q}{p}}.\] The following lemma was established in [20] for \(1<p<\infty\); the \(p=1\) case can be found in [58]. **Lemma 5**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be an open set with Lipschitz boundary and \(1\leq p<\infty\). Then for any \(f\in W^{1,p}(\Omega)\) there is some \(\tilde{f}\in W^{1,p}(\mathbb{R}^{N})\) such that \(\tilde{f}|_{\Omega}=f\) and, for some constant \(C=C(N,\Omega,p)\),_ \[\|\tilde{f}\|_{W^{1,p}(\mathbb{R}^{N})}\leq C\|f\|_{W^{1,p}(\Omega)}.\] The following result can be found in Proposition 9.3 and Remark 6 of [13]. **Lemma 6**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be open, \(1\leq p<\infty\), \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\), and \(f\in L^{p}(\Omega)\). Assume that there is a constant \(C>0\) such that for any \(\varphi\in C_{c}^{\infty}(\Omega)\)_ \[\left|\int_{\Omega}f(x)\frac{\partial\varphi(x)}{\partial x_{i}}dx\right|\leq C\|\varphi\|_{L^{p^{\prime}}(\Omega)}\quad\text{for }i=1,2,\cdots,N.\] _Then \(f\in W^{1,p}(\Omega)\) when \(1<p\), and \(f\in BV(\Omega)\) when \(p=1\)._ Next we list a special case of Proposition 2/(ii) [59, Chapter 2.3.3] combined with the fact that \(W^{1,p}(\mathbb{R}^{N})=F^{1}_{p,2}(\mathbb{R}^{N})\). **Lemma 7**.: _Let \(1\leq p,q<\infty\), \(s\in(0,1)\). Then_ \[W^{1,p}(\mathbb{R}^{N})\subseteq F^{s}_{p,q}(\mathbb{R}^{N}).\] The following result is taken from Lemma 8 of [5]. **Lemma 8**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be open and \(\lambda>0\) be sufficiently small.
Then there is a bounded open set \(\Omega^{*}_{\lambda}\) with smooth boundary such that \(\Omega_{\lambda}\subseteq\Omega^{*}_{\lambda}\subseteq\Omega\), where \(\Omega_{\lambda}\) is as in (2)._ The next result can be found in Theorem 2.1 of [40]. It will play a crucial role in our paper. **Lemma 9**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\), \(\{\Omega_{i}\}_{i\in\mathbb{N}}\) be such that \(\Omega=\cup_{i}\Omega_{i}\), \(F_{n},F\in L^{1}(\Omega)\) for \(n\in\mathbb{N}\) be such that for a.e. \(x\in\Omega\), \(F_{n}(x)\to F(x)\) as \(n\to\infty\). Assume that_ 1. \[\limsup_{n\to\infty}\sup_{x\in\Omega_{i}}|F_{n}(x)-F(x)|<\infty\quad\text{for all }i\in\mathbb{N},\] 2. \[\liminf_{i\to\infty}\limsup_{n\to\infty}\int_{\Omega\setminus\Omega_{i}}F_{n}(x)dx=0.\] _Then_ \[\lim_{n\to\infty}\int_{\Omega}F_{n}(x)dx=\int_{\Omega}F(x)dx.\] ## 3. Fractional Sobolev space with restricted internal distance Fix \(R>0\) and \(\tau\in(0,1)\) once and for all. Denote \(\delta_{x,R,\tau}=\min\{R,\tau\text{dist}(x,\partial\Omega)\}\). We shall often drop the \(R\) and \(\tau\) in the above notation and write \(\delta_{x}\) to denote \(\delta_{x,R,\tau}\). _Remark 10_.: If the function \(x\mapsto\text{dist}(x,\partial\Omega)\) is bounded in \(\Omega\), we can choose \(R>0\) large enough so that \(\delta_{x}=\text{dist}(x,\partial\Omega)\). Then the particular case \(p=q\) of theorems 14 and 15 is similar to the results proved in [27], but here \(\Omega\) need not be a bounded domain; for example, it can be a cylindrical domain or any open subset of \(\mathbb{R}^{N}\setminus\mathbb{Z}^{N}\). Define, for any open set \(\Omega\subseteq\mathbb{R}^{N}\), \(1\leq p,q<\infty\), \(0<s<1\), \(\hat{W}_{q}^{s,p}(\Omega):=\{f\in L^{p}(\Omega)\ |\ [f]_{\hat{W}_{q}^{s,p}(\Omega)}<\infty\}\) where \[[f]_{\hat{W}_{q}^{s,p}(\Omega)}^{p}:=\int_{x\in\Omega}\left(\int_{y\in B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx. \tag{5}\] We shall need some embedding results for these new fractional Sobolev spaces for our purpose. As expected, the cases \(q\leq p\) and \(p<q\) are treated separately. **Lemma 11**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be open and \(1\leq q\leq p<\infty\). Assume either \(D=\tilde{D}=\Omega\) or, for some \(\frac{1}{2R}>\alpha>0\), \(D=\{x\in\Omega\ |\ \text{dist}(x,\partial\Omega)<\alpha\}\) with \(\tilde{D}=\{x\in\Omega\ |\ \text{dist}(x,\partial\Omega)<2\alpha\text{ or }|x|>\frac{1}{2\alpha}\}\). Then there is a constant \(C=C(p,q,R,\Omega,N)\) such that for any \(f\) in \(W^{1,p}(\Omega)\),_ \[(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{y\in B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\leq C[f]_{W^{1,p}(\tilde{D})}^{p}. \tag{6}\] Proof.: We have \[(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{y\in B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\\ =(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{h\in B(0,\delta_{x})}\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx\\ =(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{h\in B(0,\delta_{x})}\frac{|f(x+h)-f(x)|^{q}}{|h|^{q}}\frac{dh}{|h|^{N+sq-q}}\right)^{\frac{p}{q}}dx\\ \leq(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{h\in B(0,\delta_{x})}\int_{0}^{1}|\nabla f(x+th)|^{q}dt\frac{dh}{|h|^{N+sq-q}}\right)^{\frac{p}{q}}dx.\] The last inequality follows since \(W^{1,p}\)-functions satisfy the \(ACL\)-property; see the short computation below.
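In more detail, the last step combines the \(ACL\)-property with Jensen's inequality (a minimal sketch, assuming \(q\geq 1\)): for a.e. \(x\) and \(h\), \[|f(x+h)-f(x)|=\Big{|}\int_{0}^{1}\nabla f(x+th)\cdot h\,dt\Big{|}\leq|h|\int_{0}^{1}|\nabla f(x+th)|\,dt,\] and, since \(dt\) is a probability measure on \((0,1)\), Jensen's inequality gives \[\frac{|f(x+h)-f(x)|^{q}}{|h|^{q}}\leq\left(\int_{0}^{1}|\nabla f(x+th)|\,dt\right)^{q}\leq\int_{0}^{1}|\nabla f(x+th)|^{q}\,dt.\]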
We now have, after a change of variable \(y=x+th\), (noteing that \(B(x,t\delta_{x})\subset B(x,\delta_{x})\)) \[(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{y\in B(x,\delta_{x}) }\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\\ \leq(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{y\in B(x,\delta_ {x})}\int_{0}^{1}|\nabla f(y)|^{q}t^{sq-q}dt\frac{dy}{|x-y|^{N+sq-q}}\right)^{ \frac{p}{q}}dx\\ =\frac{(1-s)^{\frac{p}{q}}}{(sq-q+1)^{\frac{p}{q}}}\int_{x\in D} \left(\int_{y\in B(x,\delta_{x})}|\nabla f(y)|^{q}\frac{dy}{|x-y|^{N+sq-q}} \right)^{\frac{p}{q}}dx.\] Note that in the above inequality, \(\nabla f\) is required to be defined only inside \(\tilde{D}\). So, we shall take a \(0\)-extension of \(\nabla f\) outside \(\tilde{D}\). Since we have \(\frac{p}{q}\geq 1\), we can use Young's convolution inequality to get \[(1-s)^{\frac{p}{q}}\int_{x\in D}\left(\int_{y\in B(x,\delta_{x})} \frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\\ \leq\frac{(1-s)^{\frac{p}{q}}}{(sq-q+1)^{\frac{p}{q}}}\int_{x\in D} \left(\int_{y\in B(x,R)}|\nabla f(y)|^{q}\frac{dy}{|x-y|^{N+sq-q}}\right)^{ \frac{p}{q}}dx\\ \leq\frac{(1-s)^{\frac{p}{q}}}{(sq-q+1)^{\frac{p}{q}}}\int_{x\in \tilde{D}}|\nabla f(x)|^{p}dx\left(\int_{x\in B(0,R)}\frac{dx}{|x|^{N+sq-q}} \right)^{\frac{p}{q}}=\frac{R^{p-sp}}{q^{\frac{p}{q}}}\int_{x\in\tilde{D}}| \nabla f(x)|^{p}dx.\] **Lemma 12**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be open, \(1<p<q<\infty\), and one of the following hold_ 1. \(p<N\)_, and_ \(q<\frac{Np}{N-p}\)_,_ 2. \(N\leq p\)_._ _Assume either \(D=\tilde{D}=\Omega\) or, for some \(\frac{1}{2R}>\alpha>0\), \(D=\{x\in\Omega\mid\text{dist}(x,\partial\Omega)<\alpha\}\) and \(\tilde{D}=\{x\in\Omega\mid\text{dist}(x,\partial\Omega)<2\alpha\text{ or }|x|>\frac{1}{2\alpha}\}\). Then there is a constant \(C=C(N,R,p,q)>0\) such that for any \(f\in W^{1,p}(\Omega)\), eq. (6) holds._ Proof.: Note that \[\int_{t=|h|}^{\delta_{x}}\frac{dt}{t^{N+sq+1}}=\frac{1}{N+sq}\left(\frac{1}{|h |^{N+sq}}-\frac{1}{\delta_{x}^{N+sq}}\right),\] which gives \[\frac{1}{|h|^{N+sq}}=(N+sq)\int_{t=|h|}^{\delta_{x}}\frac{dt}{t^{N+sq+1}}+ \delta_{x}^{-N-sq}.\] So, we can write \[C(p,q)\int_{x\in D}\left((1-s)\int_{h\in B(0,\delta_{x})}\frac{|f (x+h)-f(x)|^{q}}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx\\ \leq\int_{x\in D}\left((N+sq)(1-s)\int_{h\in B(0,\delta_{x})}\int _{t=|h|}^{\delta_{x}}\frac{|f(x+h)-f(x)|^{q}}{t^{N+sq+1}}dtdh\right)^{\frac{p} {q}}dx\\ +\int_{x\in D}\left((1-s)\int_{h\in B(0,\delta_{x})}\frac{|f(x+h)- f(x)|^{q}}{\delta_{x}^{N+sq}}dh\right)^{\frac{p}{q}}dx=I_{1}+I_{2}. \tag{7}\] According to either \(p<N\) or \(p\geq N\), fix \(\beta\in(0,1)\) such that \((q,\beta p)\)-type Poincare inequality is satisfied. We use this to estimate \(I_{1}\) below. First, we change the order of integration between \(t\) and \(h\), then apply Sobolev inequality lemma 4. \[I_{1}=\int_{x\in D}\left((N+sq)(1-s)\int_{t=0}^{\delta_{x}}\frac{ 1}{t^{N+sq+1}}\int_{h\in B(0,t)}|f(x+h)-f(x)|^{q}dhdt\right)^{\frac{p}{q}}dx\\ \leq\int_{x\in D}\left((N+sq)(1-s)\int_{t=0}^{\delta_{x}}\frac{1}{ t^{sq+1-q}}\left(\frac{1}{t^{N}}\int_{h\in B(x,t)}|\nabla f(h)|^{\beta p}dh \right)^{\frac{q}{Np}}dt\right)^{\frac{p}{q}}dx\\ +\int_{x\in D}\left((N+sq)(1-s)\int_{t=0}^{\delta_{x}}\frac{1}{t^{ sq+1}}\left(\frac{1}{t^{N}}\int_{h\in B(0,t)}|f(x+h)-f(x)|^{\beta p}dh \right)^{\frac{q}{Np}}dt\right)^{\frac{p}{q}}dx\\ =I_{1,1}+I_{1,2}. \tag{8}\] As in the proof of the previous lemma, we shall take a \(0\)-extension of \(\nabla f\) outside \(\tilde{D}\). 
Now using Hardy-Littlewood maximal inequality, we get \[I_{1,1}\leq\int_{x\in D}(M|\nabla f(x)|^{\beta p})^{\frac{1}{\beta}}\left((N+sq) (1-s)\int_{t=0}^{\delta_{x}}\frac{1}{t^{sq+1-q}}dt\right)^{\frac{p}{q}}dx\\ =I_{1,1}+I_{1,2}. \tag{9}\] \[=C(N,p,q,R)\int_{x\in\mathbb{R}^{N}}(M|\nabla f(x)|^{\beta p})^{\frac{1}{ \beta}}dx\leq C(N,p,q,R)\int_{x\in\tilde{D}}|\nabla f(x)|^{p}dx.\] Again, \[I_{1,2} \leq\int_{x\in D}\left((N+sq)(1-s)\int_{t=0}^{\delta_{x}}\frac{1} {t^{sq+1-q}}\left(\frac{1}{t^{N}}\int_{h\in B(0,t)}\frac{|f(x+h)-f(x)|^{\beta p }}{|h|^{\beta p}}dh\right)^{\frac{q}{\beta p}}dt\right)^{\frac{p}{q}}dx\] \[\leq\int_{x\in D}\left((N+sq)(1-s)\int_{t=0}^{\delta_{x}}\frac{1} {t^{sq+1-q}}\left(\int_{r=0}^{1}\frac{1}{t^{N}}\int_{h\in B(0,t)}|\nabla f(x+rh) |^{\beta p}dhdr\right)^{\frac{q}{\beta p}}dt\right)^{\frac{p}{q}}dx\] \[=\int_{x\in D}\left((N+sq)(1-s)\int_{t=0}^{\delta_{x}}\frac{1}{t^ {sq+1-q}}\left(\int_{r=0}^{1}\frac{1}{(rt)^{N}}\int_{h\in B(x,rt)}|\nabla f(h) |^{\beta p}dhdr\right)^{\frac{q}{\beta p}}dt\right)^{\frac{p}{q}}dx\] \[\leq\int_{x\in\mathbb{R}^{N}}(M|\nabla f(x)|^{\beta p})^{\frac{1} {\beta}}\left((N+sq)(1-s)\int_{t=0}^{R}\frac{1}{t^{sq+1-q}}\left(\int_{r=0}^{1} dr\right)^{\frac{q}{\beta p}}dt\right)^{\frac{p}{q}}dx\] \[\leq C(p,q,N,R)\int_{x\in\tilde{D}}|\nabla f(x)|^{p}dx \tag{10}\] Combining eqs. (9) and (10), we get \[I_{1}\leq C(p,q,N,R)\|\nabla f\|_{L^{p}(\tilde{D})}^{p}. \tag{11}\] Again, we can estimate \(I_{2}\), in similar way as above with \(\delta_{x}\) in place of \(t\). We have a better estimate this time. Also, we can apply \((q,p)\)-Poincare inequality this time. \[I_{2} =\int_{x\in D}\left((1-s)\int_{h\in B(0,\delta_{x})}\frac{|f(x+h)- f(x)|^{q}}{\delta_{x}^{N+sq}}dh\right)^{\frac{p}{q}}dx\] \[\leq\int_{x\in D}\left(\frac{(1-s)}{\delta_{x}^{sq}}\left(\delta_ {x}^{p-N}\int_{h\in B(0,\delta_{x})}|\nabla f(x+h)|^{p}dh+\delta_{x}^{-N}\int_ {h\in B(0,\delta_{x})}|f(x+h)-f(x)|^{p}dh\right)^{\frac{q}{p}}\right)^{\frac{p }{q}}dx\] \[\leq\frac{(1-s)}{\delta_{x}^{sp-p}}\int_{x\in D}\left(\left(\delta _{x}^{-N}\int_{h\in B(0,\delta_{x})}|\nabla f(x+h)|^{p}dh+\delta_{x}^{-N}\int_ {h\in B(0,\delta_{x})}\frac{|f(x+h)-f(x)|^{p}}{|h|^{p}}dh\right)^{\frac{q}{p} }\right)^{\frac{p}{q}}dx\] \[\leq\frac{(1-s)}{\delta_{x}^{sp-p}}\int_{x\in D}\left(\delta_{x}^ {-N}\int_{h\in B(0,\delta_{x})}|\nabla f(x+h)|^{p}dh+\int_{r=0}^{1}\frac{1}{(r \delta_{x})^{N}}\int_{h\in B(0,r\delta_{x})}|\nabla f(x+h)|^{p}dhdr\right)dx\] \[\leq(1-s)^{\frac{p}{q}}\delta_{x}^{p-sp}\int_{x\in D}|\nabla f(x) |^{p}dx. \tag{12}\] Combing eqs. (7), (11) and (12), \[\int_{x\in D}\left((1-s)\int_{h\in B(0,\delta_{x})}\frac{|f(x+h)-f(x)|^{q}}{|h |^{N+sq}}dh\right)^{\frac{p}{q}}dx\leq C(N,p,q,R)\|\nabla f\|_{L^{p}(\tilde{D})} ^{p}.\] This proves the lemma. ## 4. BBM formula for \(\hat{W}^{s,p}_{q}\)-seminorms First, we state the following result whose proof can be found in the proof of Theorem of [8] as the quantity \(\delta_{x}\) is bounded by \(R\). **Lemma 13**.: _Let \(\Omega\subset\mathbb{R}^{N}\) be any open set, \(1\leq q<\infty\), \(0<s<1\). Then for any \(f\in C^{2}(\Omega)\), we have for all \(x\in\Omega\),_ \[\lim_{s\to 1-}(1-s)\int_{B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy=K| \nabla f(x)|^{q}. \tag{13}\] We now prove the following BBM-type results which are closely related with theorems 1 and 2. **Theorem 14**.: _Let \(\Omega\subset\mathbb{R}^{N}\) be any open set. Assume one of the following conditions_ 1. \(1\leq q\leq p<\infty\)_,_ 2. 
\(1<p<q<\infty\) _with_ \(p\leq N\) _and_ \(q<\frac{Np}{N-p}\)_,_ 3. \(N<p<q<\infty\)_._ _Then there is a constant \(K=K(N,p,q)>0\) such that for any \(f\in W^{1,p}(\Omega)\), we have_ \[\lim_{s\to 1-}(1-s)^{\frac{p}{q}}\int_{\Omega}\left(\int_{B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx=K\int_{\Omega}|\nabla f(x)|^{p}dx. \tag{14}\] Proof of Theorem 14.: **Step-1:** We show that it is enough to prove eq. (14) for \(f\in W^{1,p}(\Omega)\cap C^{2}(\Omega)\). Let \(f\in W^{1,p}(\Omega)\) and \(\varepsilon>0\) be fixed. Since \(C^{2}(\Omega)\cap W^{1,p}(\Omega)\) is dense in \(W^{1,p}(\Omega)\), there exists \(g\in C^{2}(\Omega)\cap W^{1,p}(\Omega)\) such that \[[f-g]_{W^{1,p}(\Omega)}<\varepsilon. \tag{15}\] Since we have assumed that eq. (14) holds for functions in \(C^{2}(\Omega)\cap W^{1,p}(\Omega)\), for \(s>\frac{1}{2}\), we have \[\left|(1-s)^{\frac{1}{q}}[g]_{s,p,q,\Omega,R}-K^{\frac{1}{p}}[g]_{W^{1,p}(\Omega)}\right|<\varepsilon. \tag{16}\] Using the triangle inequality, and then eq. (16) followed by eq. (15) and either lemma 11 or lemma 12, we get \[\left|(1-s)^{\frac{1}{q}}[f]_{s,p,q,\Omega,R}-K^{\frac{1}{p}}[f]_{W^{1,p}(\Omega)}\right|\leq(1-s)^{\frac{1}{q}}\left|[f]_{s,p,q,\Omega,R}-[g]_{s,p,q,\Omega,R}\right|\\ +\left|(1-s)^{\frac{1}{q}}[g]_{s,p,q,\Omega,R}-K^{\frac{1}{p}}[g]_{W^{1,p}(\Omega)}\right|+K^{\frac{1}{p}}\left|[g]_{W^{1,p}(\Omega)}-[f]_{W^{1,p}(\Omega)}\right|\\ \leq(1-s)^{\frac{1}{q}}[f-g]_{s,p,q,\Omega,R}+\varepsilon+K^{\frac{1}{p}}[f-g]_{W^{1,p}(\Omega)}\\ \leq[f-g]_{W^{1,p}(\Omega)}+\varepsilon+K^{\frac{1}{p}}[f-g]_{W^{1,p}(\Omega)}\leq C(K,q)\varepsilon.\] The proof of step-1 follows. **Step-2:** In view of the previous step, it is now enough to assume that \(f\in C^{2}(\Omega)\cap W^{1,p}(\Omega)\) and prove eq. (14). Let us take an arbitrary sequence \(s_{n}\in(0,1)\) such that \(s_{n}\to 1-\) as \(n\to\infty\). Set \[F_{n}(x):=\left((1-s_{n})\int_{B(0,\delta_{x})}\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+s_{n}q}}dh\right)^{\frac{p}{q}},\] and \[F(x):=K|\nabla f(x)|^{p}.\] Also note that lemma 13 implies that \(F_{n}\to F\) pointwise a.e. in \(\Omega\). To complete the proof, it is enough to show that \[\lim_{n\to\infty}\int_{\Omega}F_{n}(x)dx=\int_{\Omega}F(x)dx.\] We shall apply lemma 9 to \(F_{n}\) to show that the interchange of limit and integral is valid. For any \(i\in\mathbb{N}\), consider the sets \(\Omega_{i}:=\{x\in\Omega\ |\ \mathrm{dist}(x,\partial\Omega)>\frac{1}{i}\}\cap B(0,i)\). We need to verify the hypotheses of lemma 9. First, note that for \(x\in\Omega_{i}\), \(h\in B(0,\delta_{x})\), \(t\in(0,1)\), we have \[\mathrm{dist}(x+th,\partial\Omega)>\mathrm{dist}(x,\partial\Omega)-|h|\geq(1-\tau)\mathrm{dist}(x,\partial\Omega).\] Thus \(x+th\in\Omega_{i^{2}}\) for \(i>\frac{1}{(1-\tau)}\). Thus, using the triangle inequality and then the mean value inequality, we have \[|F_{n}(x)-F(x)|= \left|\left((1-s_{n})\int_{B(0,\delta_{x})}\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+s_{n}q}}dh\right)^{\frac{p}{q}}-K|\nabla f(x)|^{p}\right|\] \[\leq\sup_{y\in\Omega_{i^{2}}}|\nabla f(y)|^{p}\left((1-s_{n})\int_{B(0,R)}\frac{dh}{|h|^{N+s_{n}q-q}}\right)^{\frac{p}{q}}+K|\nabla f(x)|^{p}\leq C(N,p,q,R)\sup_{y\in\Omega_{i^{2}}}|\nabla f(y)|^{p}.\] Since \(\nabla f\) is continuous on the closure of the bounded open set \(\Omega_{i^{2}}\), hypothesis (1) of lemma 9 is satisfied for 'large' \(i\in\mathbb{N}\).
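For the reader's convenience, the elementary computation behind the constant \(C(N,p,q,R)\) in the last display is the following (here \(\omega_{N-1}\) denotes the surface measure of the unit sphere \(\mathbb{S}^{N-1}\), a notation used only for this sketch): passing to polar coordinates, \[(1-s_{n})\int_{B(0,R)}\frac{dh}{|h|^{N+s_{n}q-q}}=(1-s_{n})\,\omega_{N-1}\int_{0}^{R}r^{q(1-s_{n})-1}\,dr=\frac{\omega_{N-1}}{q}\,R^{q(1-s_{n})}\leq\frac{\omega_{N-1}}{q}\max\{1,R^{q}\},\] uniformly in \(s_{n}\in(0,1)\); together with the term \(K|\nabla f(x)|^{p}\leq K\sup_{y\in\Omega_{i^{2}}}|\nabla f(y)|^{p}\), this yields the constant \(C(N,p,q,R)\).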
Note that, to show that hypothesis (2) of lemma 9 is satisfied, it is enough to show that \[\lim_{i\to\infty}\lim_{n\to\infty}\int_{\Omega\setminus\Omega_{2i}}F_{n}(x)dx=0.\] We start with an arbitrary \(x\in\Omega\setminus\Omega_{2i}\), \(h\in B(0,\delta_{x})\) and \(t\in(0,1)\). There can be two cases: **Case 1:** \(\delta_{x}=\mathrm{dist}(x,\partial\Omega)<\frac{1}{2i}\). We have \(\mathrm{dist}(x+th,\partial\Omega)\leq|x+th-x|+\mathrm{dist}(x,\partial\Omega)<\frac{\tau}{2i}+\frac{1}{2i}<\frac{1}{i}\). Thus \(x+th\in\Omega\setminus\Omega_{i}\). **Case 2:** \(|x|>2i\) and \(\frac{1}{2i}<\delta_{x}=R<\mathrm{dist}(x,\partial\Omega)\). Moreover, we can assume \(R<i\) without loss of generality. We have \(|x+th|\geq|x|-\tau R\geq 2i-\tau R\geq i\). Thus \(x+th\in\Omega\setminus B(0,i)\subseteq\Omega\setminus\Omega_{i}\). Hence we always have \[x\in\Omega\setminus\Omega_{2i}\implies x+th\in\Omega\setminus\Omega_{i}. \tag{17}\] From eq. (17), we get \[\lim_{i\to\infty}\lim_{n\to\infty}\int_{\Omega\setminus\Omega_{2i}}F_{n}(x)dx=\lim_{i\to\infty}\lim_{n\to\infty}\int_{x\in\Omega\setminus\Omega_{2i}}\left((1-s_{n})\int_{y\in B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+s_{n}q}}dy\right)^{\frac{p}{q}}dx.\] Now we apply lemma 11 or lemma 12 with \(D=\Omega\setminus\Omega_{2i}\) (so that \(\tilde{D}=\Omega\setminus\Omega_{i}\)) to get \[\lim_{i\to\infty}\lim_{n\to\infty}\int_{\Omega\setminus\Omega_{2i}}F_{n}(x)dx\leq\lim_{i\to\infty}\lim_{n\to\infty}C(p,q,R,N)[f]_{W^{1,p}(\Omega\setminus\Omega_{i})}^{p}=\lim_{i\to\infty}C(p,q,R,N)[f]_{W^{1,p}(\Omega\setminus\Omega_{i})}^{p}=0.\] Hence we can integrate eq. (13) and interchange the limit and the integral to get the result. **Theorem 15**.: _Let \(\Omega\subset\mathbb{R}^{N}\) be an open set. If \(f\in L^{p}(\Omega)\cap L^{q}(\Omega)\) is such that_ \[L_{p,q}(f):=\lim_{s\to 1-}\int_{\Omega}\left((1-s)\int_{B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx<\infty,\] _then \(f\in W^{1,p}(\Omega)\) when \(p>1\), and \(f\in BV(\Omega)\) when \(p=1\)._ Proof of Theorem 15.: We divide the proof into two parts. First, we prove it for a particular case with slightly stronger assumptions, and then give the general proof. **Step-1**: \(\Omega\) is bounded with \(\Omega\subseteq B(0,\lambda)\), and \[\tilde{L}_{p,q}(f):=\lim_{s\to 1-}\int_{\Omega}\left((1-s)\int_{\Omega}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx<\infty.\] Extend \(f\) by \(0\) outside \(\Omega\). From the proofs of Theorems 2 and 3 in [8] we can see that for any \(i=1,2,\cdots,N\), and \(\varphi\in C_{c}^{\infty}(\Omega)\), \[\left|\int_{\Omega}f(x)\frac{\partial\varphi(x)}{\partial x_{i}}dx\right|\leq C(\Omega,N,p,q)(1-s)(J_{1,s}+J_{2,s}), \tag{18}\] where \[J_{1,s}=\int_{\Omega}\int_{\Omega}\frac{|f(x)-f(y)|}{|x-y|^{1+N+sq-q}}|\varphi(y)|dydx\] and \[J_{2,s}=\int_{\mathbb{R}^{N}\setminus\Omega}\int_{\mbox{supp}\,\varphi}\frac{|f(y)||\varphi(y)|}{|x-y|^{1+N+sq-q}}dydx.\] We estimate \(J_{1,s}\) using Fubini's theorem to change the order of integration, then using Holder's inequality twice, first with respect to the measure \(\frac{dx}{|x-y|^{N+sq-q}}\) and then with respect to \(dy\).
We get \[J_{1,s}\leq\int_{\Omega}\left(\int_{\Omega}\frac{|f(x)-f(y)|^{q}}{|x-y|^{q+N+ sq-q}}dx\right)^{\frac{p}{q}}\left(\int_{\Omega}\frac{|\varphi(y)|^{q^{\prime}}}{|x- y|^{N+sq-q}}dx\right)^{\frac{1}{q^{\prime}}}dy\\ \leq\left(\int_{\Omega}\left(\int_{\Omega}\frac{|f(x)-f(y)|^{q}} {|x-y|^{N+sq}}dx\right)^{\frac{p}{q}}dy\right)^{\frac{1}{p}}\left(\int_{\Omega }\left(\int_{\Omega}\frac{|\varphi(y)|^{q^{\prime}}}{|x-y|^{N+sq-q}}dx\right)^ {\frac{p^{\prime}}{q^{\prime}}}dy\right)^{\frac{1}{p^{\prime}}}\\ \leq\left(\int_{\Omega}\left(\int_{\Omega}\frac{|f(x)-f(y)|^{q}} {|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{\frac{1}{p}}\left(\int_{\Omega }|\varphi(y)|^{p^{\prime}}\left(\int_{B(0,\lambda)}\frac{dx}{|x-y|^{N+sq-q}} \right)^{\frac{p^{\prime}}{q^{\prime}}}dy\right)^{\frac{1}{p^{\prime}}}\\ =C(p,q,N,\lambda)(1-s)^{\frac{-1}{q^{\prime}}}\left(\int_{\Omega} \left(\int_{\Omega}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{2}{q}} dx\right)^{\frac{1}{p}}\left\|\varphi\right\|_{L^{p^{\prime}}(\Omega)}\] Now using the hypothesis of Step-1, we have \[(1-s)J_{1,s}\leq C(p,q,N,\lambda)\tilde{L}_{p,q}(f)\|\varphi\|_{L^{p^{\prime} }(\Omega)}. \tag{19}\] Using Holder's inequality, we estimate \(J_{2,s}\) as in [8] to get \[(1-s)J_{2,s}\leq C(N,p,q,\lambda)\|\varphi\|_{L^{p^{\prime}}(\Omega)}\|f\|_{L ^{p}(\Omega)} \tag{20}\] Using eqs. (18) to (20), we get \[\left|\int_{\Omega}f(x)\frac{\partial\varphi(x)}{\partial x_{i}}\right|\leq C (\Omega,N,p,q,\lambda,f)\|\varphi\|_{L^{p^{\prime}}(\Omega)},\] Hence by lemma 6, the result follows. **Step-2:** We now prove the theorem in full generality. For \(1<p<\infty\), define \(X^{1,p}(\Omega):=W^{1,p}(\Omega)\), and \(X^{1,1}(\Omega):=BV(\Omega)\). Using lemma 8, choose an increasing sequence of bounded open sets \(\{\Omega_{n}\}_{n}\) with smooth boundary such that \(\cup_{n}\Omega_{n}=\Omega\), and \(\mbox{dist}(x,\partial\Omega)>\frac{1}{n}\), for \(x\in\Omega_{n}\). From the hypothesis, it follows that \[\lim_{s\to 1-}\int_{x\in\Omega_{n}}\left((1-s)\int_{y\in\Omega_{n}\cap B(x, \delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx<\infty.\] We also have, for \(s>\frac{1}{2}\), and \(R>\frac{1}{n}\), \[\int_{\Omega_{n}}\left(\int_{\Omega_{n}\setminus B(x,\delta_{x})} \frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\leq\int_{\Omega_{n }}\left(\int_{\Omega_{n},\ |x-y|>\frac{1}{n}}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\\ \leq n^{\frac{Np}{q}+sp}\int_{\Omega_{n}}\left(\int_{\Omega_{n}, \ |x-y|>\frac{1}{n}}|f(x)-f(y)|^{q}dy\right)^{\frac{p}{q}}dx\\ \leq C(n,N,p,q)\left[\|f\|_{L^{p}(\Omega_{n})}^{p}|\Omega_{n}|^{ \frac{p}{q}}+\|f\|_{L^{q}(\Omega_{n})}^{p}|\Omega_{n}|\right].\] Since, we have \(f\in L^{p}(\Omega)\cap L^{p}(\Omega)\) frfrom the hypotheses, and \(\Omega_{n}\) are bounded domains, we have \[\lim_{s\to 1-}\int_{\Omega_{n}}\left((1-s)\int_{\Omega_{n}}\frac{|f(x)-f(y)|^{q}}{ |x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx<\infty.\] From Step-1, we can conclude that \(f\in X^{1,p}(\Omega_{n})\) for all \(n\). 
Further the \(X^{1,p}\)-seminorms are uniformly bounded (independent of \(n\)) as can be seen from the following calculation, where we use theorem 14, \[K[f]^{p}_{X^{1,p}(\Omega_{n})}=\lim_{s\to 1-}\int_{x\in\Omega_{n}} \left((1-s)\int_{y\in B(x,\delta_{x,\Omega_{n}})}\frac{|f(x)-f(y)|^{p}}{|x-y|^ {N+sp}}dy\right)^{\frac{p}{q}}dx\\ \leq\lim_{s\to 1-}\int_{x\in\Omega}\left((1-s)\int_{y\in B(x, \delta_{x})}\frac{|f(x)-f(y)|^{p}}{|x-y|^{N+sp}}dy\right)^{\frac{p}{q}}dx=L_{p,q}(f)<\infty.\] The proof follows from the observation that \(K[f]^{p}_{X^{1,p}(\Omega)}=\sup_{n}K[f]^{p}_{X^{1,p}(\Omega_{n})}\). ## 5. Proof of Theorems 1 and 2 Note that theorem 2 is a straightforward consequence of theorem 15, as \(L_{p,q}(f)\leq L_{p,q}^{*}(f)\). Theorem 1 is also a consequence of theorem 14, but it requires a bit more work. To complete the proof of theorem 1, we only need the following lemma: **Lemma 16**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be an open set, \(1\leq p<\infty\), \(\tau\in(0,1)\), \(f\in W^{1,p}(\Omega)\) and \(R>0\). Assume one of the following conditions_ 1. \(1\leq q\leq\frac{Np}{N-p}\) _with_ \(p<N\)_,_ 2. \(1\leq q<\infty\) _with_ \(p\geq N\)_._ _Then eq. (14) implies eq. (4)._ In order to prove this, we first prove a bit more general result. **Proposition 17**.: _Let \(\Omega\subseteq\mathbb{R}^{N}\) be an open set, \(1\leq p,q<\infty\), \(\tau\in(0,1)\), \(f\in L^{p}(\Omega)\cap L^{q}(\Omega)\) and \(R>0\). Additionally, in the case \(p<q\), assume that for some \(s_{0}\in(0,1)\),_ \[\int_{\Omega}\left(\int_{R\leq|h|\leq\tau\text{dist}(x,\partial\Omega)}\frac{ |f(x+h)-f(x)|^{q}}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx<\infty. \tag{21}\] _Then eq. (14) implies eq. (4)._ Proof of Proposition 17.: Note that, since \[\int_{\Omega}\left(\int_{B(x,\delta_{x})}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}} dy\right)^{\frac{p}{q}}dx\leq\int_{\Omega}\left(\int_{B(x,\tau\text{dist}(x, \partial\Omega))}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx,\] from eq. (14) we have \[\lim_{s\to 1-}(1-s)^{\frac{p}{q}}\int_{\Omega}\left(\int_{B(x,\tau\text{dist}(x, \partial\Omega))}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx \geq K\int_{\Omega}|\nabla f(x)|^{p}dx.\] We focus on the reverse inequality. Observe that for \(x,y\in\Omega\), \(\delta_{x}<|x-y|\leq\tau\text{dist}(x,\partial\Omega)\) implies \(R<|x-y|\leq\tau\text{dist}(x,\partial\Omega)\). Hence we can write, using triangle inequality for \(L^{p}\)-norms, \[\left(\int_{\Omega}\left(\int_{B(x,\tau\text{dist}(x,\partial\Omega))}\frac{ |f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{\frac{1}{p}}\\ =\left(\int_{\Omega}\left(\int_{B(x,\delta_{x})}\frac{|f(x)-f(y)| ^{q}}{|x-y|^{N+sq}}dy+\int_{\delta_{x}\leq|x-y|\leq\tau\text{dist}(x,\partial \Omega)}\frac{|f(x)-f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{ \frac{1}{p}}\] \[\leq\left(\int_{\Omega}\left(\int_{B(x,\delta_{x})}\frac{|f(x)-f(y) |^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{\frac{1}{p}}+\left(\int_{ \Omega}\left(\int_{R\leq|x-y|\leq\tau\text{dist}(x,\partial\Omega)}\frac{|f(x)- f(y)|^{q}}{|x-y|^{N+sq}}dy\right)^{\frac{p}{q}}dx\right)^{\frac{1}{p}}\] \[=:I_{1}^{\frac{1}{p}}+I_{2}^{\frac{1}{p}}.\] In order to complete the proof, in view of eq.14, we need to show that \(I_{2}\) is bounded as \(s\to 1-\). We estimate \(I_{2}\) in two separate cases. **Case-1: \(1\leq q\leq p<\infty\)**. 
Using Minkowsky's integral inequality and taking the \(0\)-extension of \(f\) outside \(\Omega\), we have \[I_{2}^{\frac{q}{p}}\leq\left(\int_{\Omega}\left(\int_{R\leq|h| \leq\tau\text{dist}(x,\partial\Omega)}\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+sq}}dh \right)^{\frac{p}{q}}dx\right)^{\frac{q}{p}}\\ \leq\int_{\mathbb{R}^{N}}\left(\int_{\Omega}\frac{|f(x+h)-f(x)|^{ p}}{|h|^{\frac{Np}{q}+sp}}\chi_{B(0,\tau\text{dist}(x,\partial\Omega))\setminus B(0,R)}(h )dx\right)^{\frac{q}{p}}dh\] \[\leq\int_{|h|\geq R}\frac{1}{|h|^{N+sq}}\left(\int_{\Omega}|f(x+h) -f(x)|^{p}dx\right)^{\frac{q}{p}}dh\] \[\leq C(p,q)\int_{|h|\geq R}\frac{1}{|h|^{N+sq}}\left(\int_{\Omega} |f(x)|^{p}dx\right)^{\frac{q}{p}}dh\leq C(p,q,R,N)\|f\|_{L^{p}(\Omega)}^{q}.\] Hence the proof follows in this case. **Case-2: \(1\leq p\leq q<\infty\)**. From eq.21 we get that there is some \(\lambda_{f}>0\) such that for \(s\in(s_{0},1)\), \[I_{2}\leq\int_{\Omega}\left(\int_{R\leq|h|\leq\tau\text{dist}(x, \partial\Omega)}\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx \\ \leq 2\int_{\Omega\cap B(0,\lambda_{f})}\left(\int_{R\leq|h|\leq \tau\text{dist}(x,\partial\Omega)}\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+sq}}dh \right)^{\frac{p}{q}}dx.\] Using Holder's inequality, we get \[I_{2}\leq 2\lambda_{f}^{N(1-\frac{p}{q})}\left(\int_{\Omega\cap B(0,\lambda_{f})} \int_{R\leq|h|\leq\tau\text{dist}(x,\partial\Omega)}\frac{|f(x+h)-f(x)|^{q}}{|h |^{N+sq}}dhdx\right)^{\frac{p}{q}}.\] Now we can proceed as in Case-1 to show that \[I_{2}\leq C(N,p,q,R,f)\|f\|_{L^{p}(\Omega)}^{p}.\] This completes the proof. Now we can prove lemma16 and thereby complete the proof of theorem1. Proof of lemma16.: By the standard embedding theorems, we already know that \(f\in L^{q}(\Omega)\). In order to prove the statement, we need to show that when \(p<q\), eq.21 holds. Let \(\Omega_{1}\) be a smooth domain such that \[\{x\in\Omega\ |\ \text{dist}(x,\partial\Omega)>R\}\subseteq\Omega_{1}\subseteq\Omega.\] Clearly \(\Omega_{1}\) is a \(W^{1,p}\)-extension domain. Let \(\tilde{f}\in W^{1,p}(\mathbb{R}^{N})\) be an extension of \(f|_{\Omega_{1}}\), that is \[\|\tilde{f}\|_{W^{1,p}(\mathbb{R}^{N})}\leq C(p,q,N,\Omega)\|f\|_{W^{1,p}( \Omega_{1})}\leq C(p,q,N,\Omega)\|f\|_{W^{1,p}(\Omega)}<\infty.\] This, along with lemma7 \[\int_{\Omega}\left(\int_{R\leq|h|\leq\tau\text{dist}(x,\partial\Omega) }\frac{|f(x+h)-f(x)|^{q}}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx=\int_{\Omega_{1}} \left(\int_{R\leq|h|\leq\tau\text{dist}(x,\partial\Omega)}\frac{|f(x+h)-f(x)|^{q }}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx\\ \leq\int_{\mathbb{R}^{N}}\left(\int_{\mathbb{R}^{N}}\frac{|\tilde{ f}(x+h)-\tilde{f}(x)|^{q}}{|h|^{N+sq}}dh\right)^{\frac{p}{q}}dx<\infty.\] ## Acknowledgement I would like to thank Antti Vahakangas and Pekka Koskela for all the discussions we had during preparation of the manuscript. I would like to thank Emiel Lorist for pointing out a mistake in the first version of the paper. ## Funding The research is funded by Academy of Finland grant: Geometrinen Analyysi(21000046081).
2307.04169
Heavy Higgs boson Searches at the LHC in the light of a Left-Right Symmetric Model
We investigate a Left-Right symmetric model respecting $SU(3)_C \otimes SU(2)_L \otimes U(1)_L \otimes SU(2)_R \otimes U(1)_R$ local gauge symmetry. We study the interactions of the heavy neutral and charged scalars of this model along with their production at the hadron collider and their subsequent decays. We analyze the collider searches of two heavy scalars, one of them charge neutral and the other singly charged. In both cases we consider their associated production at the Large Hadron Collider (LHC) and finally concentrate only on the leptonic final states. We perform both cut-based and multivariate analysis using the Boosted Decision Tree algorithm for the 14 TeV as well as the 27 TeV LHC run with 3000 fb$^{-1}$ integrated luminosity. As expected, the multivariate analysis shows a better signal-background discrimination compared to the cut-based analysis. In this article, we show that a charged Higgs of mass 750 GeV and 1.2 TeV can be probed with $2.77 \sigma$ ($4.58 \sigma$) and $1.38 \sigma$ ($3.66 \sigma$) significance at the 14 (27) TeV run of the LHC.
Sanchari Bhattacharyya
2023-07-09T13:31:23Z
http://arxiv.org/abs/2307.04169v2
# Heavy Higgs Searches at the LHC in the light of a Left-Right Symmetric Model ###### Abstract We investigate a Left-Right symmetric model respecting \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{L}\otimes SU(2)_{R}\otimes U(1)_{R}\) local gauge symmetry. We study the interactions of the heavy neutral and charged scalars of this model along with their production at the hadron collider and their subsequent decays. We analyze the collider searches of two heavy scalars, one of them charge neutral and the other singly charged. In both cases we consider their associated production at the Large Hadron Collider (LHC) and finally concentrate only on the leptonic final states. We perform both cut-based and multivariate analysis using the Boosted Decision Tree algorithm for the 14 TeV as well as the 27 TeV LHC run with 3000 fb\({}^{-1}\) integrated luminosity. As expected, the multivariate analysis shows a better signal-background discrimination compared to the cut-based analysis. In this article, we show that a charged Higgs of mass 750 GeV and 1.2 TeV can be probed with \(2.77\sigma\) (\(4.58\sigma\)) and \(1.38\sigma\) (\(3.66\sigma\)) significance at the 14 (27) TeV run of the LHC. ## 1 Introduction It is well known that the Standard Model (SM) of particle physics has been extremely successful in describing the interactions of the elementary particles. The discovery of the Higgs boson at the Large Hadron Collider (LHC), CERN [1, 2], has added another feather in its cap. Despite being so successful, it is still unable to explain some natural phenomena which are already experimentally established, for example Dark Matter (DM) or the tiny neutrino masses. It is also unknown whether the discovered Higgs boson is the only scalar in nature or whether there are other, heavier scalars which are similarly responsible for Electroweak Symmetry Breaking (EWSB). All of these unexplained facts motivate physicists to look beyond the SM. In the existing literature, there are several studies which deal with the phenomenology of an extended Higgs sector [3]. Many of them argue that the picture of a single Higgs boson may not be complete and that other representations may give rise to additional Higgs bosons, heavier or lighter than the SM Higgs boson. We are hopeful that, with the advancement of technology, a detailed study of the properties of the SM Higgs boson, for example its decays, branching ratios (BR), couplings, and precision measurements [4], will be possible, which will make the picture of the scalar sector clearer. An extended Higgs sector may also have some bearing on the dark matter sector, the Higgs mass hierarchy, or the neutrino mass issue. In some models, a singlet scalar has been considered as a suitable DM candidate [5]. The presence of a charged Higgs may contribute to the radiative masses of neutrinos [6]. In Left-Right symmetric models (LRSM), the mass generation of neutrinos has been studied with the help of additional triplet or singlet Higgs bosons [7], [8]. Additional Higgs bosons can also play a crucial role in dealing with flavor problems [9]. So far, direct searches at the LHC have not confirmed the existence of such a scalar, which pushes the exclusion limits on the masses of such scalars to higher and higher scales.
In the quest for such a complete theory, we investigate a model which respects \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{L}\otimes SU(2)_{R}\otimes U(1)_{R}\) (32121) [10] local gauge symmetry. This can be obtained via a two-step symmetry breaking of the \(E_{6}\)[11] Grand Unified group with \([SU(3)_{C}\otimes SU(3)_{L}\otimes SU(3)_{R}]\) as the intermediate step. We shall only be interested in the Left-Right (LR) symmetric gauge group 32121 and in the phenomenology of its scalar sector. This model contains the fermions from the full **27**-plet of \(E_{6}\); among them, 11 are heavy exotic fermions. Two of these heavy fermions, one being Dirac-like and the other Majorana-like, are suitable Dark Matter (DM) candidates [12]. This model gives rise to a two-component DM scenario. One of the DM candidates has a larger rate of interaction compared to the other. The relic particle with the larger interaction rate satisfies the constraints from direct detection experiments when a dimension-6 effective four-fermion interaction is introduced with a new coupling strength. The other DM candidate, with a smaller interaction rate, is able to satisfy the relic density constraints only when the coannihilation channels between these two relic candidates are opened up. Thus together they present a promising DM scenario, and one can constrain the parameter space using the recent results of the direct detection of Dark Matter and the relic density measurements from the PLANCK collaboration. The detailed analysis regarding the Dark Matter aspects of this model has been discussed in [12]. Apart from the SM gauge bosons, the gauge sector comprises three heavy BSM gauge bosons. The scalars in 32121 arise from the \((\textbf{1},\textbf{3},\bar{\textbf{3}})\) representation of \(SU(3)_{C}\otimes SU(3)_{L}\otimes SU(3)_{R}\). They are color-singlet, heavy scalars. One of them must have properties similar to the SM Higgs boson. Some of the BSM Higgs bosons show interesting signatures at the High Luminosity-LHC (HL-LHC). In this article we shall mainly analyse the properties of some of the heavy Higgs bosons and their signatures at the 14 and 27 TeV high-luminosity runs of the LHC. We plan to describe the model briefly in section 2, where we mainly discuss the particle sector of this model with special emphasis on the scalar sector of our interest. We also discuss the properties and production mechanisms of some exotic scalars, including heavy neutral and singly charged scalars, at the LHC. In section 3 we perform the signal-background phenomenology of these two BSM Higgs bosons considering both cut-based and multivariate analyses. We shall see that the signal-background discrimination is much better in the case of the multivariate analysis. Finally, we conclude in section 4. ## 2 Description of 32121 Model We start with a Left-Right (LR) symmetric gauge group \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{L}\otimes SU(2)_{R}\otimes U(1)_{R}\), namely 32121. A two-step symmetry breaking of \(E_{6}\) can lead to 32121, though we will not be interested in this specific symmetry breaking pattern. This model is rich in particles. In this article, among all the particles, we will mainly study the interactions of some of the scalars which may generate interesting signatures at a hadron collider. The gauge bosons along with the matter fields present in this model are listed in Table 1 with their corresponding gauge quantum numbers.
The Higgs multiplets present in this table are instrumental in breaking down \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{L}\otimes SU(2)_{R}\otimes U(1)_{R}\) to the \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\) and then to \(SU(3)_{C}\otimes U(1)_{EM}\). \(L\) and \(R\) denote Left and Right repectively. One can calculate the electric charge, \(Q\) as, \(Q=T_{3L}+T_{3R}+Y_{L}/2+Y_{R}/2\) where \(Y_{L}/2\) and \(Y_{R}/2\) are noted down in the last two columns of Table 1 respectively. ### Gauge sector The gauge sector of 32121 model has two charged gauge bosons and four neutral gauge bosons. In the charged sector, one has been identified with the SM \(W\) boson and the other field is the heavy \(W^{\prime}\) boson. In the neutral gauge sector two fields have been identified with SM \(Z\) and photon. Rest two fields are denoted as \(Z^{\prime}\) and \(A^{\prime}\). The masses and mixings along with the interactions in electro-weak gauge sector are controlled by the four gauge coupling constants, \(g_{2L}\), \(g_{2R}\), \(g_{1L}\) and \(g_{1R}\) along with the vacuum expectation values (vevs) of the scalar fields. If one follows the symmetry breaking pattern of \(SU(2)_{R}\otimes U(1)_{L}\otimes U(1)_{R}\) to \(U(1)_{Y}\), one can have an expression like, \[\frac{1}{g_{Y}^{2}}=\frac{1}{g_{2R}^{2}}+\frac{1}{g_{1L}^{2}}+\frac{1}{g_{1R}^ {2}} \tag{1}\] where \(g_{Y}\) denotes the \(U(1)_{Y}\) gauge coupling constant. \(g_{2L}\) is identified with the \(SU(2)_{L}\) gauge coupling constant of SM, \(g\). We have chosen \(g_{2L}=g_{2R}=g\) and \(g_{1L}=g_{1R}\) to keep our Lagrangian Left-Right symmetric. With these choices one can fix the gauge parameters of the 32121 model. On the other hand, the lower limits of the vevs of the Higgs fields can be fixed from the experimental lower limits of the heavy gauge bosons. A deltailed study on the gauge sector of 32121 model can be found in [10]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline & & \(3_{C}\) & \(2_{L}\) & \(2_{R}\) & \(1_{L}\) & \(1_{R}\) \\ \hline & \(L_{L}\) & 1 & 2 & 1 & \(-1/6\) & \(-1/3\) \\ & \(\bar{L}_{R}\) & 1 & 1 & 2 & \(1/3\) & \(1/6\) \\ & \(\bar{L}_{B}\) & 1 & 2 & 2 & \(-1/6\) & \(1/6\) \\ Fermions & \(\bar{l}_{S}\) & 1 & 1 & 1 & \(1/3\) & \(-1/3\) \\ & \(Q_{L}\) & 3 & 2 & 1 & \(1/6\) & \(0\) \\ & \(\bar{Q}_{R}\) & \(\bar{3}\) & 1 & 2 & \(0\) & \(-1/6\) \\ & \(\bar{Q}_{LS}\) & \(\bar{3}\) & 1 & 1 & \(-1/3\) & \(0\) \\ & \(Q_{RS}\) & 3 & 1 & 1 & \(0\) & \(1/3\) \\ \hline & \(\Phi_{B}\) & 1 & 2 & 2 & \(1/6\) & \(-1/6\) \\ Scalar & \(\Phi_{L}\) & 1 & 2 & 1 & \(1/6\) & \(1/3\) \\ Fields & \(\Phi_{R}\) & 1 & 1 & 2 & \(-1/3\) & \(-1/6\) \\ & \(\Phi_{S}\) & 1 & 1 & 1 & \(-1/3\) & \(1/3\) \\ \hline & \(G^{i},\,i=1,...,8\) & 8 & 1 & 1 & \(0\) & \(0\) \\ & \(W_{L}^{i},i=1,2,3\) & 1 & 3 & 1 & \(0\) & \(0\) \\ Gauge fields & \(W_{R}^{i},i=1,2,3\) & 1 & 1 & 3 & \(0\) & \(0\) \\ & \(B_{L}\) & 1 & 1 & 1 & \(0\) & \(0\) \\ & \(B_{R}\) & 1 & 1 & 1 & \(0\) & \(0\) \\ \hline \end{tabular} \end{table} Table 1: Fermions and Bosons in 32121 model with their respective gauge quantum numbers ### Fermion sector As already mentioned, in 32121 model we have 27 fermions. 
Their chiral components are as follows, \[L_{L} = \begin{pmatrix}\nu_{L}\\ e_{L}\end{pmatrix},\hskip 56.905512ptL_{R}=\begin{pmatrix}\nu_{R}\\ e_{R}\end{pmatrix}\] \[Q_{L} = \begin{pmatrix}u_{L}\\ d_{L}\end{pmatrix},\hskip 56.905512ptQ_{R}=\begin{pmatrix}u_{R}\\ d_{R}\end{pmatrix}\] \[Q_{LS} = q_{SL},\hskip 14.226378ptQ_{RS}=q_{SR},\hskip 14.226378ptl_{S} \hskip 14.226378pt\mbox{and},\] \[L_{B} = \begin{pmatrix}N_{1}&E_{1}\\ E_{2}&N_{2}\end{pmatrix}\hskip 14.226378pt\mbox{and}\hskip 14.226378pt\tilde{L}_{B}= \begin{pmatrix}N_{2}^{c}&E_{2}^{c}\\ E_{1}^{c}&N_{1}^{c}\end{pmatrix} \tag{2}\] \(L_{L,R}\) and \(Q_{L,R}\) contain the SM leptons and quarks respectively along with a right-handed neutrino. Rest of fields are exotic fermions. \(Q_{LS}\) and \(Q_{RS}\) form a four-component Dirac-like color triplet quark whereas \(N_{1},\ N_{2}^{c}\) and \(E_{1},\ E_{2}^{c}\) construct neutral and singly charged Dirac-like lepton \(N\) and \(E\) respectively. \(l_{S}\) and \(l_{S}^{c}\) form a Majorana-like neutral fermion \(L_{S}\). The interactions between the Higgs fields and the fermions are responsible for the masses of the fermions. The relevant Yukawa Lagrangian is as follows. \[\mathcal{L}_{Y} = y_{qij}\bar{Q}_{iL}\Phi_{B}Q_{jR}+\tilde{y}_{qij}\bar{Q}_{iR} \tilde{\Phi}_{B}Q_{jL}+y_{lij}\bar{L}_{iL}\Phi_{B}L_{jR}+\tilde{y}_{lij}\bar{L }_{iR}\tilde{\Phi}_{B}L_{jL} \tag{3}\] \[+ y_{sij}\bar{Q}_{iLS}\Phi_{S}Q_{jRS}+y_{LBij}\ Tr\left[\bar{L}_{ iB}\tilde{L}_{jB}\right]\Phi_{S}^{c}+\frac{y_{LSi}\bar{j}}{\Lambda}\bar{l}_{iS}l_{jS}^ {c}\Phi_{S}\Phi_{S}\] \[+ y_{B}B_{ij}\ Tr\left[\bar{L}_{iB}\tilde{\Phi}_{B}\right]l_{jS}^ {c}+y_{ijBR}\bar{L}_{iL}L_{jB}\Phi_{R}+y_{ijBL}\bar{L}_{iR}L_{jB}^{\dagger} \tilde{\Phi}_{L}\] \[+ y_{ijLRS}\bar{Q}_{iL}Q_{jRS}^{*}\tilde{\Phi}_{L}+y_{ijLRS}\bar{ Q}_{iR}Q_{jLS}^{*}\tilde{\Phi}_{R}+h.c.\] where, \(i,j=1,2,3\) are generation numbers and \(y\)(s) are Yukawa coupling constants. \(\Phi_{S}^{*}\) is complex conjugate of \(\Phi_{S}\), \(\tilde{\Phi}_{B}=\sigma_{2}\Phi_{B}^{*}\sigma_{2}\) and \(\tilde{L}_{B}=\sigma_{2}L_{B}^{*}\sigma_{2}\). The first line of Eq. 3 shows the terms generating the masses of the SM fermions. The terms present in the second line of Eq. 3 are responsible for giving masses to the heavy exotic fermions. It is to be noted that we have written a dimension-5 term for generating the Majorana mass of \(l_{S}\). The rest of the terms represent the mixings among exotic and SM fermions. Here we note that, we can write only Dirac-like mass term for the nutrino in our model. In [10] the fermion sector of this model is discussed in more detail. ### Scalar sector of 32121 There are several scalar fields this model. The Higgs fields which are mainly responsible for the symmetry breaking from \(32121\longrightarrow SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y} \longrightarrow SU(3)_{C}\otimes U(1)_{EM}\) are, one Higgs bi-doublet (\(\Phi_{B}\)), one left-handed (\(\Phi_{L}\)), one right-handed (\(\Phi_{R}\)) weak doublets and a singlet Higgs boson (\(\Phi_{S}\)). \(\Phi_{S}\) is \(SU(2)\) singlet but carries \(U(1)\) hypercharge. These color singlet scalars arise from (\(\bf 1\), \(\bf 3\), \(\bar{\bf 3}\)) representation of the Trinification gauge group (\([SU(3)_{C}\otimes SU(3)_{L}\otimes SU(3)_{R}]\)). Among these fields, \(\Phi_{R}\) is instrumental in breaking the LR symmetry. The alignment of the Higgs fields are as following. 
\[\Phi_{B} = \begin{pmatrix}\frac{1}{\sqrt{2}}(k_{1}+h_{1}^{0}+i\xi_{1}^{0})&h _{1}^{+}\\ h_{2}^{-}&\frac{1}{\sqrt{2}}(k_{2}+h_{2}^{0}+i\xi_{2}^{0})\end{pmatrix},\] \[\Phi_{L} = \begin{pmatrix}h_{L}^{+}\\ \frac{1}{\sqrt{2}}(v_{L}+h_{L}^{0}+i\xi_{L}^{0})\end{pmatrix},\Phi_{R}= \begin{pmatrix}\frac{1}{\sqrt{2}}(v_{R}+h_{R}^{0}+i\xi_{R}^{0})\\ h_{R}^{-}\end{pmatrix},\Phi_{S}=\frac{1}{\sqrt{2}}(v_{S}+h_{S}^{0}+i\xi_{S}^ {0}) \tag{4}\] The Higgs potential of the 32121 model, \({\cal V}\) is composed of two parts, \({\cal V}_{1}\) and \({\cal V}_{2}\). It is given by, \[{\cal V}_{1}= - \mu_{1}^{2}Tr\left(\Phi_{B}{}^{\dagger}\Phi_{B}\right)-\mu_{2}^{2} \left(\Phi_{L}{}^{\dagger}\Phi_{L}+\Phi_{R}{}^{\dagger}\Phi_{R}\right)-\mu_{4}^ {2}\Phi_{S}{}^{\dagger}\Phi_{S} \tag{5}\] \[+ \lambda_{1}Tr\left[(\Phi_{B}{}^{\dagger}\Phi_{B})^{2}\right]+ \lambda_{3}\left(Tr\left[\Phi_{B}{}^{\dagger}\tilde{\Phi}_{B}\right]Tr\left[ \tilde{\Phi}_{B}^{\dagger}\Phi_{B}\right]\right)\] \[+ \alpha_{1}(\Phi_{S}^{\dagger}\Phi_{S})^{2}+\beta_{1}Tr\left[\Phi _{B}{}^{\dagger}\Phi_{B}\right](\Phi_{S}^{\dagger}\Phi_{S})+\gamma_{1}\left[ (\Phi_{L}^{\dagger}\Phi_{L})+(\Phi_{R}^{\dagger}\Phi_{R})\right](\Phi_{S}^{ \dagger}\Phi_{S})\] \[+ \rho_{1}\left[(\Phi_{L}^{\dagger}\Phi_{L})^{2}+(\Phi_{R}^{ \dagger}\Phi_{R})^{2}\right]+\rho_{3}\left[(\Phi_{L}^{\dagger}\Phi_{L})(\Phi_ {R}^{\dagger}\Phi_{R})\right]+c_{1}Tr\left[\Phi_{B}{}^{\dagger}\Phi_{B}\right] \left[(\Phi_{L}^{\dagger}\Phi_{L})+(\Phi_{R}^{\dagger}\Phi_{R})\right]\] \[+ c_{3}\left[(\Phi_{L}^{\dagger}\Phi_{B}\Phi_{B}^{\dagger}\Phi_{L} )+(\Phi_{R}^{\dagger}\Phi_{B}^{\dagger}\Phi_{B}\Phi_{R})\right]+c_{4}\left[( \Phi_{L}^{\dagger}\tilde{\Phi}_{B}\tilde{\Phi}_{B}^{\dagger}\Phi_{L})+(\Phi_ {R}^{\dagger}\tilde{\Phi}_{B}^{\dagger}\tilde{\Phi}_{B}\Phi_{R})\right]\] and, \[{\cal V}_{2}=\mu_{BS}Tr\left[\Phi_{B}^{\dagger}\tilde{\Phi}_{B}\right]\Phi_{S }^{*}+h.c. \tag{6}\] The parameters in \({\cal V}\) are considered to be real. \({\cal V}\) is also LR symmetric and obeys the gauge symmetry of 32121 model. In the above, \(\tilde{\Phi}_{B}\equiv\sigma_{2}\Phi_{B}^{*}\sigma_{2}\). Apart from the above symmetries, \({\cal V}_{1}\) is also symmetric under the global phase transformations like, \[\Phi_{B}\to e^{i\theta_{B}}\ \Phi_{B};\ \ \Phi_{L}\to e^{i\theta_{L}}\ \Phi_{L};\ \ \Phi_{R}\to e^{i\theta_{R}}\ \Phi_{R}\ {\rm and}\ \Phi_{S}\to e^{i\theta_{S}}\ \Phi_{S}. \tag{7}\] Whereas, the terms present in \({\cal V}_{2}\) explicitly breaks this symmetry. Now, if we choose both \(k_{1}\) and \(k_{2}\) to be non-zero, the terms proportional to \(\lambda_{3}\) in \({\cal V}_{1}\) give rise to some bilinear terms like \(h_{1}^{0}h_{2}^{0}\), \(h_{1}^{+}h_{2}^{-}\) which makes \({\cal V}_{1}\) break the aforementioned global symmetry spontaneously. This actually causes an extra undesirable massless Goldstone mode. This issue of getting unwanted Goldstone mode can be avoided in two ways. One simple option is to choose any between \(k_{1}\) and \(k_{2}\) to be zero which will make such bilinear terms (like \(h_{1}^{0}h_{2}^{0}\), \(h_{1}^{+}h_{2}^{-}\)) vanish and turn the potential \({\cal V}_{1}\) invariant under such global symmetry. Another way is to consider \({\cal V}_{2}\) in addition to \({\cal V}_{1}\) as the scalar potential. As \({\cal V}_{2}\) breaks the global symmetry explicitly, we can get rid of the extra massless mode in this way. 
In [10], it is discussed in detail that the presence of \({\cal V}_{2}\) does not affect the masses and the mixings in the scalar sector in a significant way. Hence, we choose \(k_{2}\) to be _zero_. A non-zero value of \(v_{R}\) is necessary to drive the Left-Right symmetry breaking, and \(v_{S}\) also needs to be non-zero, as it is responsible for the \(U(1)\) symmetry breaking. A non-zero value of \(v_{L}\), together with a non-zero \(v_{R}\), would again spontaneously break the global symmetry mentioned in Eq. 7 and give rise to an extra unwanted Goldstone mode. In order to avoid such a problem, we choose \(v_{L}=0\) [10]. There are 10 real parameters in the scalar potential of this model: \(\lambda_{1}\), \(\lambda_{3}\), \(\rho_{1}\), \(\rho_{3}\), \(c_{1}\), \(c_{3}\), \(c_{4}\), \(\alpha_{1}\), \(\beta_{1}\) and \(\gamma_{1}\). We accept only those values of the quartic parameters which make the scalar potential bounded from below and which are allowed by the SM-Higgs signal strengths [10]. Among all the scalar fields, there are five neutral CP-even scalar fields, \(h^{0}\), \(h_{2}^{0}\), \(h_{L}^{0}\), \(H_{R}^{0}\) and \(H_{S}^{0}\). \(h^{0}\) has been identified with the SM Higgs. The neutral CP-odd scalar sector contains two physical fields, \(\xi_{2}^{0}\) and \(\xi_{L}^{0}\). In addition to these scalars, there are two charged Higgs fields, \(H_{1}^{\pm}\) and \(H_{L}^{\pm}\). \(h_{2}^{0}\) and \(\xi_{2}^{0}\) are mass degenerate at tree level. In a similar fashion, \(h_{L}^{0}\) and \(\xi_{L}^{0}\) also have the same mass. In this article, we will mainly concentrate on the scalars that belong to the Higgs bi-doublet \(\Phi_{B}\) and discuss their properties. \(\bullet\)**Scalars from the bi-doublet Higgs field:** Apart from the SM-like Higgs, the bi-doublet Higgs field \(\Phi_{B}\) comprises some exotic scalar fields: a neutral CP-even scalar (\(h_{2}^{0}\)), a neutral CP-odd scalar (\(\xi_{2}^{0}\)) and a singly charged Higgs \(H_{1}^{\pm}\). At tree level, the above scalar (\(h_{2}^{0}\)) and pseudoscalar (\(\xi_{2}^{0}\)) have equal masses. With \(k_{2}=0\), \[m_{h_{2}^{0}}^{2}=m_{\xi_{2}^{0}}^{2}=\frac{1}{2}[4\lambda_{3}k_{1}^{2}+(c_{4}-c_{3})v_{R}^{2}] \tag{8}\] The _zero_ value of \(k_{2}\) prevents \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) from coupling to a pair of other scalars or gauge bosons, but they can still interact with a pair of SM fermions (see Eq. 3). From Eq. 3 it is evident that the coupling of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) to the up-quark sector is proportional to the bottom-sector Yukawa coupling and vice versa. This implies that the coupling of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) to a pair of bottom quarks is proportional to the top Yukawa coupling. To find the limit on the mass of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) we have produced these heavy scalars in association with a pair of \(b\)-quarks with a further decay to a \(b\)-quark pair. ATLAS and CMS have already performed searches for a heavy neutral scalar produced in association with a pair of \(b\) quarks at \(\sqrt{s}=13\) TeV [13, 14]. Using this result, we compare the \(\sigma\times BR\) obtained in the 32121 model with the rate measured by the ATLAS Collaboration and find a lower limit on \(m_{h_{2}^{0}}\) (\(m_{\xi_{2}^{0}}\)). We find that \(m_{h_{2}^{0}}\) (\(m_{\xi_{2}^{0}}\)) must be greater than 800 GeV [10]. At the LHC, one of the dominant ways of producing \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) is gluon-gluon fusion.
Unlike for the SM Higgs, here a bottom-quark triangle loop mainly controls the production cross-section [10]. Another dominant way to produce \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) at the hadron collider is the associated Higgs production discussed previously: one can produce \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) in association with two bottom quarks. This large production cross-section depends sensitively on the top Yukawa coupling, which in turn leads us to consider the associated production mechanism when generating the heavy scalars at the collider. We present the associated production cross-section and the decay branching ratios of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) in Fig. 1. We note that \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) decays dominantly to \(b\bar{b}\) until the decay to \(H_{1}^{\pm}W^{\mp}\) becomes kinematically allowed. In this plot the mass of \(H_{1}^{\pm}\) has been set to 750 GeV. In order to generate such events, we have first implemented our model in FeynRules2.0 [15] and then generated the processes using Madgraph [16]. We have also taken the QCD K-factor (\(\sim 1.1\)) into account following refs. [17, 18]. Figure 1: The left figure represents the branching ratios of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) to different final states. The figure on the right shows the associated production cross section (\(\sigma\)) of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) at the LHC at 14 and 27 TeV proton-proton centre-of-mass energy. Now, coming to the singly charged Higgs boson, \(H_{1}^{\pm}\): it is the other scalar field of interest to us. \(H_{1}^{\pm}\) has a mass \[m_{H_{1}^{\pm}}^{2}=\frac{1}{2}(c_{4}-c_{3})(k_{1}^{2}+v_{R}^{2}) \tag{9}\] It can couple to SM fermions via Yukawa couplings (see Eq. 3) and also interacts with the SM \(W\) boson and the heavy neutral scalar \(h_{2}^{0}\) (\(\xi_{2}^{0}\)). One dominant process for producing this charged scalar at the LHC is the production in association with a top and a bottom quark. Other mechanisms include the Drell-Yan process or even vector-boson fusion. Both the ATLAS and CMS collaborations have searched for a heavy charged Higgs boson decaying to a top and a bottom quark in the 13 TeV run [19, 20, 21]. In our analysis, we have also produced \(H_{1}^{\pm}\) in association with a top and a bottom, with a further decay of \(H_{1}^{\pm}\) again to a top and a bottom. We compare the event rates obtained in the 32121 model with the corresponding result provided by the ATLAS collaboration for \(H_{1}^{\pm}\) and find \(m_{H_{1}^{\pm}}>720\) GeV [10]. While performing our analysis, we have considered producing \(H_{1}^{\pm}\) at the collider in association with \(t\,b\). The leading contribution comes from \(gg\to\bar{t}bH_{1}\). In Fig. 2, we present the production cross-section of \(H_{1}^{\pm}\) at centre-of-mass energies of 14 TeV and 27 TeV, along with the branching ratios of \(H_{1}^{\pm}\) to different final states. We observe that \(H_{1}^{\pm}\) mainly decays to a top and a bottom quark until the decay to \(h_{2}^{0}\) (\(\xi_{2}^{0}\))\(W^{\pm}\) becomes kinematically allowed. The \(H^{\pm}tb\) production cross-section varies from 0.15 (1) pb for \(m_{H_{1}^{\pm}}=720\) GeV to 0.005 (0.06) pb for \(m_{H_{1}^{\pm}}=1500\) GeV at the 14 (27) TeV LHC run. In the next section, we present the signal-background study of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and \(H_{1}^{\pm}\) production at the LHC at the 14 and 27 TeV runs with 3000 fb\({}^{-1}\) integrated luminosity.
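For orientation, the tree-level mass relations of Eqs. (8) and (9) can be evaluated numerically. The short Python sketch below uses assumed, illustrative values for \(\lambda_{3}\) and \((c_{4}-c_{3})\) and a \(v_{R}\) compatible with the bound quoted later (\(v_{R}>14.7\) TeV); it is not a fit result.

```python
import math

# Illustrative (assumed) parameter values; k1 is fixed by the electroweak scale.
k1 = 246.0       # GeV
vR = 15000.0     # GeV, consistent with vR > 14.7 TeV
lam3 = 0.10      # assumed quartic coupling lambda_3
dc = 0.012       # assumed (c4 - c3)

# Eq. (8): degenerate h2^0 / xi2^0 mass (with k2 = 0)
m_h2 = math.sqrt(0.5 * (4.0 * lam3 * k1**2 + dc * vR**2))
# Eq. (9): charged Higgs mass
m_H1 = math.sqrt(0.5 * dc * (k1**2 + vR**2))

print(f"m_h2 = m_xi2 = {m_h2:.0f} GeV, m_H1 = {m_H1:.0f} GeV")
# For vR >> k1 both masses are driven by (c4 - c3) vR^2 / 2, so h2^0 and H1^+-
# come out nearly degenerate (~1.16 TeV here), above the 800 GeV and 720 GeV limits.
```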
## 3 Collider Phenomenology

In the previous section we have already discussed the production mechanisms and subsequent decays of the two scalars, \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and \(H_{1}^{\pm}\). The heavy neutral and charged scalars have some exotic decay channels. In this section we concentrate on the signal-background analysis of these two scalars at the LHC, where we consider such exotic decay channels of the heavy scalars. One of the interesting channels to probe \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) at the LHC is the following (see Fig. 3). \[pp\to h_{2}^{0}(\xi_{2}^{0})b\bar{b}\to(H_{1}^{\pm}W^{\mp})b\bar{b}\to(t\bar{b}l^{-}\bar{\nu}_{l})b\bar{b}\to b\bar{b}b\bar{b}l^{+}l^{-}\nu_{l}\bar{\nu}_{l}\] Figure 2: The figure on the left shows the branching ratios of \(H_{1}^{\pm}\) to different final states, where the mass of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) is set to its lowest limit (800 GeV). The right figure shows the production cross-section (\(\sigma\)) of \(H_{1}^{\pm}\) via the \(pp\to\bar{t}bH_{1}^{+}\) process at the 14 and 27 TeV LHC runs. Similarly, to look for \(H_{1}^{\pm}\) at the hadron collider, one may consider (see Fig. 3), \[pp\to H_{1}^{\pm}tb\to(t\bar{b})\bar{t}b\to(W^{-}\bar{b}b)W^{+}b\bar{b}\to b\bar{b}b\bar{b}l^{+}l^{-}\nu_{l}\bar{\nu}_{l}\] We shall now briefly discuss these two channels with leptonic decays of the \(W\) bosons, which have a not-too-large background, in the context of the HL-LHC at 14 TeV and 27 TeV centre-of-mass energy. Fig. 3 shows the most dominant leading-order Feynman diagrams for the production of the heavy neutral \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and singly charged \(H_{1}^{\pm}\) scalars at the LHC. We denote the production of \(h_{2}^{0}(\xi_{2}^{0})\) and \(H_{1}^{\pm}\) as signal 1 (\(S_{1}\)) and signal 2 (\(S_{2}\)) respectively. Both signals discussed above have similar final states, with four b jets, two oppositely charged leptons and missing transverse energy. This specific combination makes these signals unique, as the probability of obtaining similar states from the Standard Model is quite low. Among all the background processes, \(t\bar{t}\) + jets production is the most dominant. Other significant backgrounds include \(b\bar{b}t\bar{t}\) production, \(ht\bar{t}\) production, \(Zt\bar{t}\) production, multijet processes, etc. Initially we require the transverse momenta of b-tagged jets, light jets and leptons to satisfy \(p_{Tb}>40\) GeV, \(p_{Tj}>30\) GeV and \(p_{Tl}>10\) GeV, respectively. We also place an initial cut on the missing transverse energy, \(\not{E}_{T}>20\) GeV. We perform this analysis for four chosen benchmark points corresponding to four different sets of masses of the scalars and their decay properties. \(S_{1}\) depends on the branching ratio of \(h_{2}^{0}\) to the \(H_{1}^{\pm}W^{\mp}\) channel, which is non-zero only above a certain mass of \(h_{2}^{0}\) (see Fig. 1). In contrast, \(S_{2}\) depends on the \(H_{1}\to tb\) branching ratio, which is non-zero throughout the mass range of \(H_{1}^{\pm}\) (see Fig. 2) but is reduced once the \(H_{1}\longrightarrow h_{2}^{0}\) (\(\xi_{2}^{0}\))\(W\) decay channel opens up. In Table 2, the four choices of the benchmark points are presented.
For BP1 and BP4, the masses of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and \(H_{1}^{\pm}\) are such that both signals, \(S_{1}\) and \(S_{2}\), are _on_, as BR(\(h_{2}^{0}\to H_{1}^{\pm}W^{\mp}\)) and BR(\(H_{1}\to tb\)) are non-zero, whereas for BP2 and BP3 BR(\(h_{2}^{0}\to H_{1}^{\pm}W^{\mp}\)) is zero, turning _Signal 1 off_. Furthermore, in the case of BP1, \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and \(H_{1}\) decay dominantly to \(H_{1}^{\pm}W^{\mp}\) and \(t\bar{b}\), respectively, with the highest BR in the corresponding channels. For BP2, BR(\(H_{1}\to tb\)) remains the same as for BP1, keeping _Signal 2_ unchanged. However, for BP3 \(H_{1}\) dominantly decays to \(h_{2}^{0}W^{\pm}\), making BR(\(H_{1}\to tb\)) small. In a similar fashion, for BP4 both signals are _on_ as for BP1, but with reduced branching ratios in the corresponding channels and with reduced cross-sections. It is important to note that BP2 and BP3 practically imply the production of the charged Higgs (\(H_{1}^{\pm}\)) only, whereas BP1 and BP4 denote the production of both scalars. Figure 3: Feynman diagrams for the most dominant production processes of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) (a) and \(H_{1}^{\pm}\) (b) at the LHC.

### Cut-based Analysis

In this section we present the signal-background analysis of \(h^{0}_{2}(\xi^{0}_{2})\) and \(H_{1}^{\pm}\) production at the LHC using a cut-based approach. We have implemented our model in FeynRules2.0 [15] and generated the signal and background events with Madgraph [16] using the NNPDF3.0 parton distribution functions [22]. To account for showering and hadronization, we have passed the events through Pythia8 [23], already built into Madgraph, and used Delphes3.5 [24] for the detector simulation. We demand at least three b-tagged jets and at least one charged lepton (\(e\) or \(\mu\)) in the final state. Such a choice effectively suppresses multijet production, which allows us to ignore this background. With these demands in place, we plot the distributions of two important variables, the transverse momentum of the leading b-tagged jet, \(p_{T}^{b}\), and the scalar sum of the \(p_{T}\) of all visible jets, \(H_{T}\), at the 14 and 27 TeV HL-LHC runs with 3000 fb\({}^{-1}\) integrated luminosity. In Figs. 4 and 5, we present such distributions for BP1 only: the distributions of the \(p_{T}\) of the leading b-tagged jet and of \(H_{T}\) are shown for each signal and background process at 14 and 27 TeV centre-of-mass energy, respectively, with an integrated luminosity of 3000 fb\({}^{-1}\). The processes corresponding to the different color codes are indicated inside the plots. It is clear from the plots that an appropriate cut on the \(p_{T}\) of the leading b-tagged jet and on \(H_{T}\) can in each case effectively reduce the background events relative to the signal events. We note that for the other benchmark points the signal distributions are not significantly different from what is shown for BP1. We have optimized the cuts such that the signal significance does not vary significantly across the benchmark points. \(\bullet\)**Event Selection** As already mentioned, we keep only those events with at least three b-tagged jets in the final state. Using the information we get from the distribution plots in Figs.
4 and 5, we apply and optimize cuts on the variables we have considered, i.e., \(p_{T}^{b}\) and \(H_{T}\), so that we reduce the background events while keeping as many signal events as possible. In other words, we apply our cuts on the suitable variables in such a way that we obtain the maximum significance, \(\mathcal{S}\), where \(\mathcal{S}\) is given by, \[\mathcal{S}=\sqrt{2\left[(S+B)\,\log\left(\frac{S+B}{B}\right)-S\right]} \tag{10}\] \(S\) and \(B\) stand for the number of signal and background events, respectively. In Table 3, we show the optimized cut flows providing the maximum significance (see Eq. 10) for all of the benchmark points at the 14 TeV HL-LHC run. Here we select only those events where the transverse momentum of the leading b-jet satisfies \(p_{T}^{b}>240\) GeV and \(H_{T}>990\) GeV. Figure 5: Distribution plots of the transverse momentum, \(p_{T}\), of the leading b-tagged jet and of the scalar sum of the \(p_{T}\) of all visible jets, \(H_{T}\), for the signals and backgrounds, for an integrated luminosity of 3000 fb\({}^{-1}\) at the 27 TeV run of the LHC. As both signals have similar final states, in the case of BP1 and BP4 \(S\) is effectively the sum of the two signal event numbers, \(S=S_{1}+S_{2}\). In Table 4, we present the case for the 27 TeV LHC run with 3000 fb\({}^{-1}\) integrated luminosity. Here we obtain the maximum significance when we select only those events that pass the criteria \(p_{T}^{b}>230\) GeV and \(H_{T}>680\) GeV. From Tables 3 and 4, we observe that the significance for the 14 TeV HL-LHC run is rather small, while it improves considerably for the 27 TeV HL-LHC run. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline & & \(n_{b}\geq 3\) & \(p_{Tb1}>230\) GeV & \(H_{T}\)\(>\) 680 GeV & \(S=N_{S1}+N_{S2}\) & \({\cal S}\) \\ \hline \hline \multirow{4}{*}{Backgrounds} & \(t\bar{t}+jets\) & 23648538 & 4122984 & 3326268 & \(-\) & \(-\) \\ & \(b\bar{b}t\bar{t}\) & 197815 & 45933 & 38253 & \(-\) & \(-\) \\ & \(ht\bar{t}\) & 29491 & 5831 & 4715 & \(-\) & \(-\) \\ & \(Zt\bar{t}\) & 6489 & 1367 & 1034 & \(-\) & \(-\) \\ \hline \hline \multirow{4}{*}{Signals} & BP1 & \(N_{S1}\) & 7890 & 5427 & 5099 & 9379 & 5.106 \\ \cline{2-6} & \(N_{S2}\) & 7034 & 4584 & 4280 & & \\ \cline{2-6} & BP2 & \(N_{S1}\) & 0 & 0 & 0 & 4280 & 2.331 \\ \cline{2-6} & \(N_{S2}\) & 7034 & 4584 & 4280 & & \\ \cline{2-6} & BP3 & \(N_{S1}\) & 0 & 0 & 0 & 2925 & 1.593 \\ \cline{2-6} & \(N_{S2}\) & 3543 & 2958 & 2925 & & \\ \cline{2-6} & BP4 & \(N_{S1}\) & 3246 & 2562 & 2505 & 5538 & 3.016 \\ \cline{2-6} & \(N_{S2}\) & 3988 & 3110 & 3033 & & \\ \hline \hline \end{tabular} \end{table} Table 4: The cut flow table for the signals and backgrounds at 27 TeV for the four benchmark points.
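As a quick numerical cross-check of the last column of Tables 3 and 4, Eq. 10 can be evaluated directly on the quoted post-cut yields. The short Python snippet below is purely illustrative and simply re-derives the BP1 significances from the tabulated numbers.

```python
import math

def significance(s, b):
    """Significance of Eq. 10: sqrt(2 [ (S+B) ln((S+B)/B) - S ])."""
    return math.sqrt(2.0 * ((s + b) * math.log((s + b) / b) - s))

# BP1 post-cut yields (S = N_S1 + N_S2, B = sum of the four backgrounds)
yields = {
    "14 TeV (Table 3)": (139 + 185, 209137 + 1856 + 274 + 54),
    "27 TeV (Table 4)": (5099 + 4280, 3326268 + 38253 + 4715 + 1034),
}
for run, (s, b) in yields.items():
    print(f"{run}: S = {s}, B = {b}, significance = {significance(s, b):.2f}")
# Prints ~0.70 and ~5.11, in agreement with the values quoted in the tables.
```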
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline & & \(N_{b}\geq 3\) & \(p_{Tb1}>240\) GeV & \(H_{T}\)\(>\) 990 GeV & \(S=N_{S1}+N_{S2}\) & \({\cal S}\) \\ \hline \hline \multirow{4}{*}{Backgrounds} & \(t\bar{t}+jets\) & 5127095 & 624321 & 209137 & \(-\) & \(-\) \\ & \(b\bar{b}t\bar{t}\) & 40239 & 6228 & 1856 & \(-\) & \(-\) \\ & \(ht\bar{t}\) & 7035 & 955 & 274 & \(-\) & \(-\) \\ & \(Zt\bar{t}\) & 1483 & 220 & 54 & \(-\) & \(-\) \\ \hline \hline \multirow{4}{*}{Signals} & BP1 & \(N_{S1}\) & 501 & 305 & 139 & 324 & 0.704 \\ \cline{2-6} & \(N_{S2}\) & 717 & 417 & 185 & & \\ \cline{2-6} & BP2 & \(N_{S1}\) & 0 & 0 & 0 & 185 & 0.402 \\ \cline{2-6} & \(N_{S2}\) & 717 & 417 & 185 & & \\ \cline{2-6} & BP3 & \(N_{S1}\) & 0 & 0 & 0 & 234 & 0.509 \\ \cline{2-6} & \(N_{S2}\) & 372 & 296 & 234 & & \\ \cline{2-6} & BP4 & \(N_{S1}\) & 342 & 253 & 167 & 495 & 1.076 \\ \cline{2-6} & \(N_{S2}\) & 683 & 502 & 328 & & \\ \hline \hline \end{tabular} \end{table} Table 3: The cut flow table for the signals and backgrounds at 14 TeV for the four benchmark points.

BP1 provides a significance (\(\mathcal{S}\)) of 0.7 for the 14 TeV run, whereas the significance increases to 5.1 for the 27 TeV run at the HL-LHC (Table 4). The results obtained with the cut-based approach therefore do not make this method particularly useful. This motivates us to explore our results using a multivariate analysis, which we discuss in the following.

### Multivariate Analysis

In this section, we mainly concentrate on the results obtained using the Boosted Decision Tree (BDT) algorithm. This part of the analysis has been performed in the TMVA framework [25]. Decision trees are classifiers that separate signal-like from background-like events: a suitable variable is chosen, and the application of a proper cut on this variable separates the signal from the background as well as possible. One can choose a number of variables and train on the signal and background sample events. Modification of the weights corresponding to the sample events creates new _boosted_ decision trees. After training and testing on the signal and background-like events, this method outperforms the generic cut-based analysis by providing a much better discrimination between signal and background events. To perform the BDT analysis we have considered the following 11 variables, which provide the best possible signal significance (a minimal sketch of how the angular separations are built from the selected objects is given after the list).

* The transverse momentum of the leading b-tagged jet, \(p_{T}^{b1}\).
* The missing transverse energy, \(\not{E}_{T}\).
* The transverse momentum of the leading lepton, \(p_{T}^{l1}\).
* \(\Delta\eta^{b_{i}b_{j}}\) between the three leading b-tagged jets.
* \(\Delta\phi^{b_{i}b_{j}}\) between the three leading b-tagged jets.
* \(\Delta\eta^{l1l2}\) between the leading and sub-leading lepton.
* \(\Delta\phi^{l1l2}\) between the leading and sub-leading lepton.
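A minimal sketch (not the authors' code) of how these angular separations can be computed; the simple \((p_{T},\eta,\phi)\) tuple format for the selected objects is an assumption made purely for illustration.

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

def delta_eta(eta1, eta2):
    return abs(eta1 - eta2)

def bdt_angular_inputs(bjets, leptons):
    """Angular BDT inputs from pT-ordered b-jets and leptons given as (pt, eta, phi)."""
    b1, b2, b3 = bjets[:3]
    l1, l2 = leptons[:2]
    pairs = {"b1b2": (b1, b2), "b1b3": (b1, b3), "b2b3": (b2, b3), "l1l2": (l1, l2)}
    out = {}
    for name, (o1, o2) in pairs.items():
        out["deta_" + name] = delta_eta(o1[1], o2[1])
        out["dphi_" + name] = delta_phi(o1[2], o2[2])
    return out
```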
\begin{table} \begin{tabular}{|c|c|c|} \hline \hline Rank & Variable Importance & Variable Importance \\ & (14 TeV) & (27 TeV) \\ \hline \hline 1 & \(\not{E}_{T}\) & \(\not{E}_{T}\) \\ 2 & \(\Delta\phi^{b1b2}\) & \(\Delta\phi^{b1b3}\) \\ 3 & \(\Delta\phi^{l1l2}\) & \(\Delta\phi^{b1b2}\) \\ 4 & \(\Delta\phi^{b1b3}\) & \(\Delta\phi^{l1l2}\) \\ 5 & \(\Delta\phi^{b2b3}\) & \(\Delta\eta^{l1l2}\) \\ 6 & \(\Delta\eta^{l1l2}\) & \(p_{T}^{b1}\) \\ 7 & \(p_{T}^{b1}\) & \(\Delta\phi^{b2b3}\) \\ 8 & \(\Delta\eta^{b1b3}\) & \(\Delta\eta^{b1b2}\) \\ 9 & \(\Delta\eta^{b2b3}\) & \(\Delta\eta^{b1b3}\) \\ 10 & \(\Delta\eta^{b1b2}\) & \(\Delta\eta^{b2b3}\) \\ 11 & \(p_{T}^{l1}\) & \(p_{T}^{l1}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Importance of the variables used in the BDT analysis

Table 5 shows the ranks of the above variables according to their relevance for both the 14 and 27 TeV signal-background studies. The rank of a variable is determined by how many times it is used to split decision-tree nodes. In both the 14 and 27 TeV runs, \(\not{E}_{T}\) is the most important variable. In our study, the important parameters of the BDT analysis have been set as follows: the number of trees is 850, the maximum depth is 3, and the boost type is AdaBoost. The normalized distributions of the above variables are shown in Fig. 6. The blue-shaded (red-dashed) distributions are for the signal (background). We mention that in this analysis all four backgrounds have been taken into consideration, despite the fact that \(t\bar{t}+jets\) production is the most dominant one. Figure 6: Distribution plots of the variables considered for the multivariate analysis, for BP1 at the 14 TeV HL-LHC run. Figure 7: The linear correlations between the variables considered for the multivariate analysis are shown here as percentages for the signal (a) as well as the background (b) for benchmark point BP1 at the 14 TeV HL-LHC run. The negative sign implies that the two corresponding variables are anti-correlated. The linear correlation matrix for the chosen variables is shown in Fig. 7 for benchmark point BP1 only. The correlations between any two variables are presented in % in this figure. One can see that in most cases the variables are not significantly correlated. The signal and background events have been trained for each of the four benchmark points. Partial overtraining is quite possible for the boosted-decision-tree algorithm and must be avoided. It can be tested by comparing the performance on the training and testing samples. We have ensured with a Kolmogorov-Smirnov (KS) test that the effect of overtraining of signal and background is minimal in our cases. In general, the KS score should be \(\sim 0.1\); values greater than 0.01 may be acceptable if they remain stable when the statistics of the signal and background events are varied. In Fig. 8, one can see that the KS probability is \(\sim 0.187\) (0.428) and \(\sim 0.195\) (0.184) for the signal (background) for BP1 at the 14 TeV and 27 TeV HL-LHC runs, respectively. Figure 8: The result of the Kolmogorov-Smirnov test for BP1 at the 14 TeV (a) and 27 TeV (b) LHC runs with an integrated luminosity of 3000 fb\({}^{-1}\). Figure 9: The BDT response of the signal and backgrounds for the 14 and 27 TeV HL-LHC runs for BP1. Half of the signal and background events have been used for training and the other half of the same sample for testing.
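The BDT booking summarised above (850 trees, maximum depth 3, AdaBoost boosting, half of the events used for training) corresponds to a standard TMVA configuration. The PyROOT sketch below is illustrative only; the file, tree and variable names are assumptions and are not taken from the analysis.

```python
import ROOT
from ROOT import TMVA

TMVA.Tools.Instance()
out_file = ROOT.TFile("tmva_bp1.root", "RECREATE")
factory = TMVA.Factory("bp1", out_file, "!V:!Silent:AnalysisType=Classification")
loader = TMVA.DataLoader("dataset")

# The 11 input variables listed above (names assumed)
for var in ["pt_b1", "met", "pt_l1",
            "deta_b1b2", "deta_b1b3", "deta_b2b3",
            "dphi_b1b2", "dphi_b1b3", "dphi_b2b3",
            "deta_l1l2", "dphi_l1l2"]:
    loader.AddVariable(var, "F")

signal_tree = ROOT.TFile.Open("signal_bp1.root").Get("events")        # assumed inputs
background_tree = ROOT.TFile.Open("background.root").Get("events")
loader.AddSignalTree(signal_tree, 1.0)
loader.AddBackgroundTree(background_tree, 1.0)
# nTrain = 0 lets TMVA use half of the events for training and half for testing
loader.PrepareTrainingAndTestTree(ROOT.TCut(""),
                                  "SplitMode=Random:nTrain_Signal=0:nTrain_Background=0")

factory.BookMethod(loader, TMVA.Types.kBDT, "BDT",
                   "NTrees=850:MaxDepth=3:BoostType=AdaBoost")
factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
out_file.Close()
```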
After successful training and testing of the signal and background samples, the BDT algorithm improves the results for both the 14 and 27 TeV HL-LHC runs compared to the cut-based analysis. The TMVA response of the classification shows a good discrimination between signal and background, as presented in Fig. 9 for the BP1 benchmark point at both the 14 and 27 TeV HL-LHC runs. The significance, estimated using the expression in Eq. 10, improves significantly compared to the cut-based scenario, as summarized in Table 6 for both the 14 and 27 TeV LHC runs and for all of the benchmark points. In Fig. 10 the signal efficiency, background efficiency and signal significance are presented for the two benchmark points BP1 and BP2 at the 14 and 27 TeV HL-LHC runs, where the results for BP2 correspond solely to probing a charged Higgs at the hadron collider. The significances obtained from the BDT analysis for each case are given in the form of a table (see Table 6). The significance obtained for BP1 is \(\sim 3.87\), which, as one would expect, is much better than the significance achieved in the cut-based analysis. Similar improvements are observed for the other benchmark points BP2, BP3 and BP4 at the 14 TeV run. The results obtained for the 27 TeV HL-LHC run are more encouraging. Figure 10: The signal and background efficiency and significance for the 14 and 27 TeV HL-LHC runs for BP1 and BP2, respectively. From the results for BP2 and BP3, in the 32121 model one can hope to probe a charged Higgs of mass 750 GeV and 1.2 TeV with 2.77 \(\sigma\) (4.58 \(\sigma\)) and 1.38 \(\sigma\) (3.66 \(\sigma\)) significance respectively at the 14 (27) TeV HL-LHC run (see Figs. 10 (b), 10 (d) for BP2).

## 4 Conclusions

We start with an \(E_{6}\) GUT inspired gauge theory \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{L}\otimes SU(2)_{R}\otimes U(1)_{R}\), namely 32121. This gauge group can arise after a two-step symmetry breaking of \(E_{6}\). We have mainly concentrated on the Left-Right symmetry breaking from \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{L}\otimes SU(2)_{R}\otimes U(1)_{R}\) down to \(SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}\). The fermions in this model belong to the full **27**-plet of \(E_{6}\). The Higgs bosons of this model arise from the \(({\bf 1},{\bf 3},{\bf\bar{3}})\) representation of \(SU(3)^{3}\). The vevs (\(k_{1}=246\) GeV, \(v_{R}>14.7\) TeV, \(v_{S}>12.61\) TeV) of the scalar fields have been constrained from the masses of the \(W\), \(W^{\prime}\) and \(A^{\prime}\) gauge bosons, respectively. The gauge sector of the 32121 model contains five gauge couplings whose values have been fixed following the pattern of Left-Right symmetry breaking. Apart from the SM gauge bosons, this model contains \(W^{\prime}\), \(Z^{\prime}\) and \(A^{\prime}\) gauge bosons, where \(A^{\prime}\) is the hallmark of the extra \(U(1)\) gauge symmetry. In the fermionic sector, among all the fermions of the **27**-plet of \(E_{6}\), two color-singlet, charge-neutral fermions are suitable DM candidates. The scalar sector of the 32121 model contains a number of Higgs bosons, one of which is the SM-like Higgs. In this article we have mainly focused on two types of exotic heavy Higgs bosons: the charge-neutral CP-even Higgs field \(h_{2}^{0}\) and its CP-odd partner \(\xi_{2}^{0}\), which have similar masses and couplings, and a singly charged Higgs, \(H_{1}^{\pm}\). Both \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and \(H_{1}^{\pm}\) arise from the Higgs bi-doublet \(\Phi_{B}\).
\(h_{2}^{0}\) (\(\xi_{2}^{0}\)) dominantly decays to \(b\bar{b}\) until the decay channel to \(H_{1}^{\pm}W^{\mp}\) becomes kinematically accessible. \(H_{1}^{+}\) dominantly decays to \(t\bar{b}\) until the decay channel to \(h_{2}^{0}\) (\(\xi_{2}^{0}\))\(W^{+}\) becomes kinematically allowed. We have used this information on the exotic decay channels when discussing the signatures of these scalars at the LHC. For \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) we have mainly chosen the dominant production mechanism of this scalar, which in our case is the associated Higgs production. The production cross-section of \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) in association with \(b\bar{b}\) is 0.3 (3) pb at the 14 (27) TeV LHC run for a 1 TeV mass. \(H_{1}^{\pm}\), in contrast, has been produced in association with \(tb\); the corresponding production cross-section is 0.04 (0.35) pb at the 14 (27) TeV LHC run for a 1 TeV scalar mass. We have then performed a detailed signal-background analysis of the two heavy Higgs bosons, \(h_{2}^{0}\) (\(\xi_{2}^{0}\)) and \(H_{1}^{\pm}\). The associated production of both gives rise to similar final states with three or more b-tagged jets, more than one charged lepton and missing transverse energy. The dominant background arises from \(t\bar{t}\) production with jets. Other background events arise from \(b\bar{b}t\bar{t}\), \(ht\bar{t}\) and \(Zt\bar{t}\) production. Depending on the masses and decay properties of the heavy neutral and charged scalars in our model, we choose four benchmark points (BP) to perform our analysis. To begin with, we have presented our results using the cut-based analysis for the four benchmark points. We have applied a series of cuts on suitable variables such as the transverse momentum (\(p_{T}\)) of the leading b-tagged jet and the scalar sum of the \(p_{T}\) of all jets (\(H_{T}\)). For BP1, at 14 (27) TeV the cut-based signal significance is 0.7 (5.1), whereas for the other benchmark points it is somewhat lower, except for BP4 at the 14 TeV run (\(\mathcal{S}\sim 1.1\)). \begin{table} \begin{tabular}{|c|c|c|} \hline \hline & \({\cal S}\) (14 TeV) & \({\cal S}\) (27 TeV) \\ \hline BP1 & 3.8695 & 10.4518 \\ BP2 & 2.7677 & 4.5766 \\ BP3 & 1.3838 & 3.6577 \\ BP4 & 2.5171 & 6.5159 \\ \hline \hline \end{tabular} \end{table} Table 6: The significances obtained with the multivariate analysis for each benchmark point at the 14 and 27 TeV LHC runs. In order to distinguish signal events from background-like events more accurately, we have used a multivariate analysis, for which we have chosen the BDT method. With this method, as expected, a better significance is achieved for all of the benchmark points. For BP1, at 14 (27) TeV the significance is 3.87 (10.45), which clearly shows a better signal-background discrimination. With the results we obtained, one can hope to probe a heavy charged Higgs of mass 750 GeV in the 32121 model with \(2.77\sigma\) (\(4.58\sigma\)) significance at the 14 (27) TeV LHC run with 3000 fb\({}^{-1}\) integrated luminosity. **Acknowledgement:** SB acknowledges financial support from DST, Ministry of Science and Technology, Government of India in the form of an INSPIRE-Senior Research Fellowship. SB acknowledges Prof. Anindya Datta for his valuable suggestions throughout the analysis. SB also acknowledges Gourab Saha and Nivedita Ghosh for their help in dealing with some technical issues. SB is thankful to Prof. Partha Konar for the insightful discussions.
2305.12172
Leveraging on-shell interference to search for FCNCs of the top quark and the Z boson
Flavour-changing-neutral currents (FCNCs) involving the top quark are highly suppressed within the Standard Model (SM). Hence, any signal in current or planned future collider experiments would constitute a clear manifestation of physics beyond the SM. We propose a novel, interference-based strategy to search for top-quark FCNCs involving the $Z$ boson that has the potential to complement traditional search strategies due to a more favourable luminosity scaling. The strategy leverages on-shell interference between the FCNC and SM decay of the top quark into hadronic final states. We estimate the feasibility of the most promising case of anomalous $tZc$ couplings using Monte Carlo simulations and a simplified detector simulation. We consider the main background processes and discriminate the signal from the background with a deep neural network that is parametrised in the value of the anomalous $tZc$ coupling. We present sensitivity projections for the HL-LHC and the FCC-hh. We find an expected $95\%$ CL upper limit of $\mathcal{B}_{\mathrm{excl}}(t\rightarrow Zc) = 6.4 \times 10^{-5}$ for the HL-LHC. In general, we conclude that the interference-based approach has the potential to provide both competitive and complementary constraints to traditional multi-lepton searches and other strategies that have been proposed to search for $tZc$ FCNCs.
Lucas Cremer, Johannes Erdmann, Roni Harnik, Jan Lukas Späh, Emmanuel Stamou
2023-05-20T11:21:44Z
http://arxiv.org/abs/2305.12172v1
# Do-Th 23/04 ###### Abstract Flavour-changing-neutral currents (FCNCs) involving the top quark are highly suppressed within the Standard Model (SM). Hence, any signal in current or planned future collider experiments would constitute a clear manifestation of physics beyond the SM. We propose a novel, interference-based strategy to search for top-quark FCNCs involving the \(Z\) boson that has the potential to complement traditional search strategies due to a more favourable luminosity scaling. The strategy leverages on-shell interference between the FCNC and SM decay of the top quark into hadronic final states. We estimate the feasibility of the most promising case of anomalous \(tZc\) couplings using Monte Carlo simulations and a simplified detector simulation. We consider the main background processes and discriminate the signal from the background with a deep neural network that is parametrised in the value of the anomalous \(tZc\) coupling. We present sensitivity projections for the HL-LHC and the FCC-hh. We find an expected \(95\%\) CL upper limit of \(\mathcal{B}_{\rm excl}(t\to Zc)=6.4\times 10^{-5}\) for the HL-LHC. In general, we conclude that the interference-based approach has the potential to provide both competitive and complementary constraints to traditional multi-lepton searches and other strategies that have been proposed to search for \(tZc\) FCNCs. ## 1 Introduction A flavour-changing-neutral-current (FCNC) process is one in which a fermion changes its flavour without changing its gauge quantum numbers. In the Standard Model (SM), FCNCs are absent at tree level, suppressed by Cabibbo-Kobayashi-Maskawa (CKM) elements, and potentially additionally suppressed by fermion mass-differences at loop level via the Glashow-Iliopoulos-Maiani (GIM) mechanism [1]. The SM predictions for FCNCs that involve the top quark are extremely small due to the highly effective GIM suppression. The resulting branching ratios (\(\mathcal{B}\)) for the top-quark two-body decays via FCNCs range from \(\mathcal{B}(t\to uH)_{\rm SM}\sim 10^{-17}\) to \(\mathcal{B}(t\to cg)_{\rm SM}\sim 10^{-12}\)[2; 3; 4; 5; 6; 7]. However, the top quark plays an important role in multiple theories beyond the SM due to its large coupling to the Higgs, which is relevant for models addressing the Hierarchy Problem and models for electroweak-scale baryogenesis. Several of these models predict enhanced top-quark FCNC couplings [8; 9; 10; 11; 4; 12], which we collectively denote here by \(g\). Typically, constraints on \(g\) from low-energy and electroweak-precision observables are mild [13; 14; 15; 16; 17; 18], motivating direct searches for FCNC top-quark decays (\(t\to qX\) with \(q=u\), \(c\)) and FCNC single-top-quark production (\(pp\to tqX\) or \(qX\to t\)). While we focus on FCNC interactions with SM bosons in this paper, FCNC interactions of the top quark with new, scalar bosons have been proposed [19] and searched for [20]. Using data taken at the LHC, the ATLAS and CMS collaborations have placed the most stringent upper limits on top-quark FCNC interactions via a photon [21; 22], \(Z\) boson [23; 24], Higgs boson [25; 26], and gluon [27; 28]. Even though many searches take advantage of both the FCNC decay and single production to search for a non-zero \(g\), the limits are traditionally presented in terms of FCNC branching ratios, \(\mathcal{B}(t\to qX)\). 
The most stringent limits at 95% confidence level (CL) range from \(\mathcal{B}(t\to u\gamma)<8.5\times 10^{-6}\)[21] to \(\mathcal{B}(t\to cH)<7.3\times 10^{-4}\)[26]. For FCNCs via the \(Z\) boson, the most stringent limits are obtained in a search that uses the decay of the \(Z\) boson to \(e^{+}e^{-}\) or \(\mu^{+}\mu^{-}\) in association with a semileptonically decaying top quark [23]. The resulting 95% CL upper limits on \(g\) translate to \(\mathcal{B}(t\to uZ)<6.2\)-\(6.6\times 10^{-5}\) and \(\mathcal{B}(t\to cZ)<1.2\)-\(1.3\times 10^{-4}\), depending on the chirality of the coupling. While the limits in Ref. [23] are obtained with \(\mathcal{L}_{\rm int}=\int\mathcal{L}\,\mathrm{d}t=139\) fb\({}^{-1}\) of data at \(\sqrt{s}=13\) TeV, the HL-LHC is expected to provide approximately \(3000\) fb\({}^{-1}\) at 14 TeV. Improved sensitivity to top-quark FCNC processes is hence expected at the HL-LHC, because statistical uncertainties play an important role in these searches. With systematic uncertainties being subdominant, one may naively expect that the upper limits on \(\mathcal{B}(t\to qZ)\) scale with the shrinking statistical uncertainty.1 Using this extrapolation, the sensitivity is expected to improve roughly by a factor \(\sqrt{3000\,\mathrm{fb}^{-1}/139\,\mathrm{fb}^{-1}}\approx 5\) at the HL-LHC.2 The reason for this luminosity scaling is that the partial width for the two-body top-quark FCNC decay and the cross section for FCNC single production are proportional to \(g^{2}\) due to the lack of interference with SM processes.3 As a result, the sensitivity to \(\mathcal{B}(t\to qX)\) naively scales as \(1/\sqrt{\mathcal{L}_{\rm int}}\) and the sensitivity to \(g\) as \(1/\sqrt[4]{\mathcal{L}_{\rm int}}\). Finding instead an observable that scales linearly with \(g\) due to interference with the SM would modify favourably the luminosity scaling. Such an interference-based approach would hence be very useful for the search for top-quark FCNCs. In the present work we propose such a novel approach and investigate the feasibility of employing it to search for \(tZq\). There are multiple, phenomenologically relevant examples in which New-Physics (NP) interference with the SM is instrumental for precision NP searches. Examples include searching for \(H\to c\bar{c}\) via exclusive Higgs decays, which makes use of interference with the SM \(H\to\gamma\gamma\) amplitude [31], or searching for NP in high-energy diboson distributions by exploiting the interference between the SM and energy-enhanced NP contributions from dimension-six operators [32, 33]. Here, we introduce a new setup that can be applied to improve top-quark FCNCs searches. As opposed to other approaches, here both NP and SM amplitudes will be mostly resonant, i.e., contain on-shell -but different- intermediate particles. At tree level, a resonant signal amplitude does not generally interfere with a continuum amplitude, because the former is imaginary and the latter is real. However, if both the signal and the background contain an on-shell particle, interference may occur, as long as the final state is identical.4 In this case of on-shell interference, NP and SM amplitudes will still interfere, yet the interference will only be large in a restricted phase-space region. 
This potential caveat is different to the ones in the aforementioned examples: exclusive decays of the Higgs boson are suppressed by the hadronisation probability to the relevant final-state, e.g., \(J/\psi\), and the interference in diboson tails is suppressed with the decreasing SM amplitude. Our proposal is to search for the three-body decay \(t\to qb\bar{b}\) in the phase-space region in which there is potentially large NP-SM interference. Footnote 4: For an example of this at the optics table, see Ref. [34]. The decay \(t\to qb\bar{b}\) contains two interfering contributions: the NP contribution \(t\to qZ\to qb\bar{b}\) and the SM one \(t\to bW^{+}\to qb\bar{b}\), as illustrated in figure 1. Consequently, the partial width contains a part that is proportional to \(g\). For sufficiently small \(g\) the interference term dominates over the NP\({}^{2}\) term (\(\propto g^{2}\)) in which case the sensitivity to \(g\) is expected to scale like \(1/\sqrt{\mathcal{L}_{\rm int}}\), i.e., it improves faster with increasing luminosity than the traditional approach without interference. The interference argument also holds for probing the top-quark FCNCs with the Higgs boson (\(tHq\)) or with photons (\(tq\gamma\)) and gluons (\(tqg\)). For the Higgs, the interference is suppressed by the light-quark masses of the final-state quarks (\(m_{b}\) and \(m_{q}\)) due to the different chirality structure of the SM (vector) and NP (scalar) couplings. For the photon and gluon FCNCs the SM amplitudes peak at small dijet invariant masses with potentially large QCD backgrounds, which require a dedicated study. We will thus focus in this work on top-quark FCNCs with the \(Z\)-boson. We stress that the interference signal is not only sensitive to the magnitude of the \(tZq\) coupling but is also sensitive to its phase. The interference approach is hence inherently complementary to the traditional FCNC searches and of particular interest in case signs of an anomalous \(tZq\) coupling are observed. We will also focus on the \(tZc\) coupling, because the interference is larger compared to \(tZu\) due to the larger CKM matrix element \(|V_{cb}|\) compared to \(|V_{ub}|\). In section 2, we establish the theory framework and discuss how to leverage interference based on parton-level expressions for the interference-based rate and its kinematic properties. In section 3, we introduce the Monte Carlo (MC) samples that we use for the sensitivity estimate and discuss the event Figure 1: The leading-order diagrams for the three-body decay \(t\to cb\bar{b}\). The left diagram shows the decay via the FCNC \(tZc\) coupling and the right the SM decay via a \(W\) boson. In the small region of phase space in which the \(c\bar{b}\)-pair reconstructs the \(W\)-boson mass and the \(b\bar{b}\)-pair reconstructs the \(Z\)-boson mass, both the \(W\) and the \(Z\) bosons are on-shell and the two amplitudes interfere. selection that is tailored towards the FCNC signal. In section 4.1, we briefly introduce the setup of the statistical analysis and then describe in section 4.2 the optimization of the parametrised deep neural network (DNN) that we use for the analysis of the simulated data. The results are given in section 4.3 for the HL-LHC and in section 4.4 for the FCC-hh. We present our conclusions in section 5. 
## 2 \(t\to cZ\) from on-shell interference in \(t\to cb\bar{b}\)

The focus of this section is to study the three-body top-quark decay \(t\to cb\bar{b}\) in the presence of an anomalous, NP \(tZc\) coupling with emphasis on how to take advantage of NP-SM interference to probe the NP coupling. The decay rate is affected by interference between the NP and SM amplitudes, illustrated in the left and right diagram in figure 1, respectively. The results of this section are equally well applicable to the \(t\to ub\bar{b}\) decay when an anomalous \(tZu\) coupling is present. However, this channel is less promising to provide competitive constraints from an interference-based analysis since the SM amplitude is highly CKM suppressed. We thus concentrate on the \(t\to cb\bar{b}\) case. Given the smallness of the bottom and charm-quark masses with respect to the top-quark mass, the NP-SM interference is large when the chirality of the NP couplings is the same as the one of the SM \(W\)-boson contribution, i.e., left-handed vector couplings \(\bar{t}_{L}\gamma^{\mu}c_{L}Z_{\mu}\). In contrast, the NP-SM interference is suppressed by the small \(b\)- and \(c\)-quark masses if the NP originates from right-handed vector or tensor operators. Therefore, we only consider here the most promising case of anomalous left-handed couplings. The Standard Model Effective Field Theory (SMEFT) parametrises these couplings in terms of two dimension-six operators \[\mathscr{L}\supset\frac{C^{(1)}_{\varphi q;pr}}{\Lambda^{2}}(\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{q}_{p}\gamma^{\mu}q_{r})+\frac{C^{(3)}_{\varphi q;pr}}{\Lambda^{2}}(\varphi^{\dagger}i\overleftrightarrow{D}{}^{a}_{\mu}\varphi)(\bar{q}_{p}\gamma^{\mu}\tau^{a}q_{r})\,. \tag{1}\] Here, \(\varphi\) is the Higgs doublet, \(q_{p}\) left-handed quark-doublets, and \(p,r\) flavour indices in the conventions of Ref. [35]. In the broken phase, by rotating to the quark-mass eigenstates these SMEFT operators can lead to anomalous tree-level \(tZc\) couplings to the left-handed quarks, which are the subject of this work. We parametrise them with the phenomenological Lagrangian \[\mathscr{L}_{tZc}=\frac{g}{2}e^{i\phi_{\text{NP}}}\;\bar{t}_{L}\gamma^{\mu}c_{L}\;Z_{\mu}+\text{h.c.}\,, \tag{2}\] with the NP parameter \(g>0\) and the NP phase \(0\leq\phi_{\text{NP}}<2\pi\).5 In the up-quark mass basis, the coupling in Eq. (2) is related to the SMEFT Wilson coefficients via \(ge^{i\phi_{\text{NP}}}=\frac{e}{s_{w}c_{w}}\frac{v^{2}}{\Lambda^{2}}\big{(}C^{(1)}_{\varphi q;32}-C^{(3)}_{\varphi q;32}\big{)}\), where \(e\) is the electromagnetic coupling, \(s_{w}\) (\(c_{w}\)) the sine (cosine) of the weak mixing angle, and \(v\simeq 246\,\text{GeV}\) the electroweak vacuum-expectation value. Footnote 5: In unitarity gauge, only the couplings in Eq. (2) enter the computation of \(t\to cb\bar{b}\). In \(R_{\xi}\) gauges also the corresponding Goldstone couplings must be included. The squared amplitude for the \(t\to cb\bar{b}\) decay contains three terms: the SM\({}^{2}\) term, the NP\({}^{2}\) term, and their interference, i.e., \[|\mathcal{A}|^{2}=|\mathcal{A}_{\text{SM}}|^{2}+\underbrace{|\mathcal{A}_{\text{NP}}|^{2}}_{\propto g^{2}}+\underbrace{2\text{Re}(\mathcal{A}_{\text{SM}}^{*}\mathcal{A}_{\text{NP}})}_{\propto g\cos(\phi_{\text{NP}}-\phi_{\text{SM}})\text{ and }g\sin(\phi_{\text{NP}}-\phi_{\text{SM}})} \tag{3}\] where the underbraces indicate the dependence on the NP parameters.
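For orientation, the matching relation just quoted can be evaluated numerically. The Python sketch below uses illustrative, assumed electroweak inputs and an assumed benchmark of \(\Lambda=1\) TeV with \(C^{(1)}-C^{(3)}=1\); none of these numbers are taken from the paper.

```python
import math

# Illustrative electroweak inputs (assumed values)
alpha_em = 1.0 / 128.0      # electromagnetic coupling at the Z pole
sw2 = 0.2312                # sin^2(theta_w)
v = 246.0                   # GeV

e = math.sqrt(4.0 * math.pi * alpha_em)
sw, cw = math.sqrt(sw2), math.sqrt(1.0 - sw2)

def g_from_smeft(c1_minus_c3, lam_in_tev):
    """|g| from g e^{i phi_NP} = e/(sw cw) * v^2/Lambda^2 * (C1 - C3)."""
    lam = 1000.0 * lam_in_tev   # GeV
    return e / (sw * cw) * v**2 / lam**2 * abs(c1_minus_c3)

print(g_from_smeft(1.0, 1.0))   # ~0.045 for Lambda = 1 TeV and C1 - C3 = 1
```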
The interference term depends linearly on the NP coupling \(g\) and also on the relative, CP-violating phase between the NP and SM contributions: \[\phi\equiv\phi_{\text{NP}}-\phi_{\text{SM}}\qquad\text{with}\qquad\phi_{\text{SM}}\equiv\arg(V_{tb}^{*}V_{cb}) \tag{4}\] As indicated by Eq. (3) and further discussed in the following, the fully differential rate of \(t\to cb\bar{b}\) is sensitive to the interference term and thus potentially sensitive to both a term that is CP-even in the kinematic variables and proportional to \(\cos\phi\) as well as a term that is CP-odd and proportional to \(\sin\phi\). The cases \(\phi=\{0,\pi\}\) lead to a differential rate of \(t\to cb\bar{b}\) that is CP conserving. In this case, the SM and NP sources of CP violation are aligned and the differential rate is insensitive to CP violation. The coupling-scaling of the amplitudes does not capture the dependence on the kinematic variables describing the three-body decay. This dependence is essential for designing the search that leverages interference in an optimal manner. The \(t\to cb\bar{b}\) kinematics are fully specified by the two invariant masses \(m_{c\bar{b}}^{2}\equiv(p_{c}+p_{\bar{b}})^{2}\) and \(m_{b\bar{b}}^{2}\equiv(p_{b}+p_{\bar{b}})^{2}\). The different topologies of the NP and the SM amplitudes (compare the two diagrams in figure 1) lead to final states with distinct kinematic configurations: "SM events" originate mostly from on-shell \(W\)'s, i.e., \(m_{c\bar{b}}\sim M_{W}\), whereas "NP events" from on-shell \(Z\)'s, i.e., \(m_{b\bar{b}}\sim M_{Z}\). We illustrate this in figure 2a, which shows the standard Dalitz plot for the three-body decay in the top-quark rest frame in terms of \(m_{c\bar{b}}\) and \(m_{b\bar{b}}\). The gray area marks the kinematically allowed phase-space. The SM\({}^{2}\) and NP\({}^{2}\) parts of the squared amplitude mainly populate the blue (vertical band) and green (horizontal band) regions, respectively. The \(W\)- and \(Z\)-boson widths (\(\Gamma_{W}\), \(\Gamma_{Z}\)) control the level of deviations from the on-shell case, i.e., the width of the vertical and horizontal bands in figure 2a. Figure 2: In (a) the Dalitz plot for the three-body decay \(t\to cb\bar{b}\) in the rest frame of the top-quark in terms of the two invariant masses \(m_{b\bar{b}}\) and \(m_{c\bar{b}}\). In gray the kinematically physical region. The dotted vertical and horizontal line indicates the phase-space points of resonant \(Z\)- and \(W\)-boson production (same in (b) and (c)). “Pure SM” events predominantly populate the vertical blue region whereas “pure NP” events the horizontal green region. The red region marks the doubly-on-shell region in which NP–SM interference is the largest. In (b) and (c), we show the rate originating from NP–SM interference proportional to \(g\cos\phi\) and \(g\sin\phi\), respectively. The figure ranges correspond to the doubly-on-shell region (red region in (a)) and the dotted rectangle centered at the doubly-on-shell point has the width \(\Gamma_{W}\) and the height \(\Gamma_{Z}\). Brown regions correspond to negative and green to positive contributions to the branching ratio. This is best seen by employing the Breit-Wigner approximation for the massive vector propagators \[i\Delta_{\mu\nu}(q)=-i\frac{g_{\mu\nu}-q_{\mu}q_{\nu}/M^{2}}{q^{2}-M^{2}+iM\Gamma}\,, \tag{5}\] which enhances the SM amplitude when \(m_{c\bar{b}}\sim M_{W}\) and the NP one when \(m_{b\bar{b}}\sim M_{Z}\).
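The Breit-Wigner factors in Eq. (5) are what confine sizeable interference to the doubly-on-shell region. The small Python sketch below simply quantifies how fast the squared propagator factor falls off away from the pole; the mass and width inputs are assumed, PDG-like values.

```python
def bw2(q2, m, gamma):
    """Squared modulus of the Breit-Wigner factor 1/((q^2 - M^2)^2 + M^2 Gamma^2)."""
    return 1.0 / ((q2 - m**2)**2 + (m * gamma)**2)

MW, GW = 80.4, 2.085    # GeV (assumed inputs)
MZ, GZ = 91.19, 2.495   # GeV (assumed inputs)

# On-shell versus 10 GeV off-shell: the weight drops by roughly two orders of
# magnitude, so the product of the W and Z propagators is only sizeable near
# (m_cb, m_bb) ~ (M_W, M_Z).
for m, g in [(MW, GW), (MZ, GZ)]:
    on = bw2(m**2, m, g)
    off = bw2((m + 10.0)**2, m, g)
    print(f"M = {m:5.1f} GeV: on-shell / 10-GeV-off-shell = {on / off:.0f}")
```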
By integrating over the full phase-space and taking the narrow-width approximation \(\Gamma_{W}/M_{W},\Gamma_{Z}/M_{Z}\ll 1\), we recover the usual relations for the fully inclusive branching ratios originating from the SM\({}^{2}\) and NP\({}^{2}\) terms in Eq. (3): \[|\mathcal{A}_{\text{SM}}|^{2}\propto\mathcal{B}(t\to cb\bar{b})_{\text{SM}}= \mathcal{B}(t\to Wb)_{\text{SM}}\ \mathcal{B}(W\to c\bar{b})_{\text{SM}}\,,\] \[|\mathcal{A}_{\text{NP}}|^{2}\propto\mathcal{B}(t\to cb\bar{b})_{ \text{NP}}=\underbrace{\mathcal{B}(t\to Zc)_{\text{NP}}}_{\propto g^{2}}\ \ \mathcal{B}(Z\to b\bar{b})_{\text{SM}}\,, \tag{6}\] with \(\mathcal{B}(W\to c\bar{b})_{\text{SM}}\propto M_{W}/\Gamma_{W}\) and \(\mathcal{B}(Z\to b\bar{b})_{\text{SM}}\propto M_{Z}/\Gamma_{Z}\). We collect the expressions for the two-body branching fractions in appendix A. However, as we shall demonstrate next, the interference is large in the small phase-space region in which both \(W\) and \(Z\) bosons are on-shell (red region in figure 2a): \[M_{W}-\Gamma_{W}\lesssim m_{c\bar{b}}\lesssim M_{W}+\Gamma_{W}\,,\quad M_{Z}- \Gamma_{Z}\lesssim m_{b\bar{b}}\lesssim M_{Z}+\Gamma_{Z}\,.\quad\text{[ doubly-on-shell region]} \tag{7}\] Explicit computation shows that the NP\({}^{2}\) and SM\({}^{2}\) rates in this doubly-on-shell region are parametrically suppressed by the widths and masses of the \(Z/W\) bosons with respect to their inclusive values in Eq. (6) \[\mathcal{B}^{\text{doubly on-shell}}_{\text{NP/SM}}\sim\mathcal{B}_{\text{ NP/SM}}\frac{\Gamma_{Z/W}}{M_{Z/W}}\frac{M_{Z/W}^{4}}{m_{t}^{4}}\,. \tag{8}\] The net effect is that in total \(\mathcal{B}^{\text{doubly on-shell}}_{\text{NP/SM}}\) are neither enhanced by \(M_{Z/W}/\Gamma_{Z/W}\) nor suppressed by \(\Gamma_{Z/W}/M_{Z/W}\) factors, since \(\mathcal{B}_{\text{NP/SM}}\propto 1/\Gamma_{Z/W}\). The relative suppression, however, is welcome as both of these contributions constitute a background for the interference-based analysis we are proposing. In contrast to "pure SM" and "pure NP" events, "interference-based" events predominantly populate the doubly-on-shell phase-space region, since \(2\text{Re}(\mathcal{A}^{*}_{\text{SM}}\mathcal{A}_{\text{NP}})\) is proportional to the product of \(W\)- and \(Z\)-boson Breit-Wigner propagators. Summing over final-state polarisations and averaging over the top-quark polarisation we find the double-differential branching ratio originating from the interference term in Eq. 
(3) to be \[\begin{split}\frac{\mathrm{d}^{2}\mathcal{B}_{\text{Int}}}{\mathrm{d}m_{b\bar{b}}^{2}\mathrm{d}m_{c\bar{b}}^{2}}=-g&\frac{N_{\text{Int}}}{m_{t}^{3}\Gamma_{t}}\frac{\left(m_{b\bar{b}}^{2}+m_{c\bar{b}}^{2}\right)\left(m_{t}^{2}-m_{b\bar{b}}^{2}-m_{c\bar{b}}^{2}\right)}{\left((M_{W}^{2}-m_{c\bar{b}}^{2})^{2}+\Gamma_{W}^{2}M_{W}^{2}\right)\left((M_{Z}^{2}-m_{b\bar{b}}^{2})^{2}+\Gamma_{Z}^{2}M_{Z}^{2}\right)}\bigg{[}\\ &\qquad+\cos\phi\left(\left(M_{W}^{2}-m_{c\bar{b}}^{2}\right)\left(M_{Z}^{2}-m_{b\bar{b}}^{2}\right)+M_{W}\Gamma_{W}M_{Z}\Gamma_{Z}\right)\\ &\qquad+\sin\phi\left(M_{Z}\Gamma_{Z}\left(M_{W}^{2}-m_{c\bar{b}}^{2}\right)-M_{W}\Gamma_{W}\left(M_{Z}^{2}-m_{b\bar{b}}^{2}\right)\right)\bigg{]}\end{split} \tag{9}\] \[\equiv\frac{\mathrm{d}^{2}\mathcal{B}^{\text{cos}}_{\text{Int}}}{\mathrm{d}m_{b\bar{b}}^{2}\mathrm{d}m_{c\bar{b}}^{2}}\times g\cos\phi+\frac{\mathrm{d}^{2}\mathcal{B}^{\text{sin}}_{\text{Int}}}{\mathrm{d}m_{b\bar{b}}^{2}\mathrm{d}m_{c\bar{b}}^{2}}\times g\sin\phi\,,\] with \(N_{\text{Int}}=e^{3}(3-2s_{w}^{2})|V_{cb}||V_{tb}|/(1536\pi^{3}c_{w}s_{w}^{3})\). The last line defines a shorthand notation for the terms proportional to \(g\cos\phi\) and \(g\sin\phi\). In figures 2b and 2c we show \(\mathrm{d}^{2}\mathcal{B}^{\text{cos}}_{\text{Int}}\) and \(\mathrm{d}^{2}\mathcal{B}^{\text{sin}}_{\text{Int}}\), respectively, in terms of the two Dalitz variables. In brown are the regions with a negative rate and in green the ones with a positive rate. The intersection of the dotted vertical and horizontal line corresponds to the doubly-on-shell point and we have overlaid a rectangle with width and height equal to \(\Gamma_{W}\) and \(\Gamma_{Z}\). Eq. (9) and its illustration in figures 2b and 2c contain the most relevant parametric dependences that underpin the idea of leveraging interference to probe anomalous \(tZc\) couplings.

* The denominator in the first line stems from the product of the two Breit-Wigner propagators for the \(W\) and \(Z\) bosons, see Eq. (5). They enhance the rate from interference in the doubly-on-shell region, which is regulated by both \(\Gamma_{W}\) and \(\Gamma_{Z}\). The enhancement of the doubly-on-shell region with respect to the rest of the phase-space region is best seen in figures 2b and 2c for \(\mathrm{d}^{2}\mathcal{B}_{\mathrm{Int}}^{\mathrm{cos}}\) and \(\mathrm{d}^{2}\mathcal{B}_{\mathrm{Int}}^{\mathrm{sin}}\). The main part of the integrated rate comes from the phase-space region close to the doubly-on-shell region.
* The rate from interference contains terms proportional to both \(\cos\phi\) and \(\sin\phi\). Interference is present independently of whether there is CP violation in the decay (\(\sin\phi\neq 0\)) or whether there is no CP violation (\(\cos\phi=\pm 1\)). However, the CP-odd term proportional to \(\sin\phi\) is odd under the interchange of \(W\leftrightarrow Z\) and \(m_{b\bar{b}}\leftrightarrow m_{c\bar{b}}\) in Eq. (9), see also figure 2c for \(\mathrm{d}^{2}\mathcal{B}_{\mathrm{Int}}^{\mathrm{sin}}\). The consequence is that the integrated rate proportional to \(g\sin\phi\) vanishes for the symmetric case \(M_{W}=M_{Z}\). A measurement of the phase \(\phi\) thus requires separating events within the doubly-on-shell region, which is experimentally extremely challenging given the jet energy resolution.
In contrast, the integrated rate proportional to \(g\cos\phi\) is even under the aforementioned interchanges and does not vanish after integration, see figure 2b for \(\mathrm{d}^{2}\mathcal{B}_{\mathrm{Int}}^{\mathrm{cos}}\). A dedicated search in the doubly-on-shell region is thus potentially sensitive to \(g\cos\phi\). In section 3, we will use Monte-Carlo (MC) techniques to simulate events, including a simplified detector simulation, populating the doubly-on-shell region based on the full matrix elements, which lead to Eq. (9) and the corresponding expressions for the NP\({}^{2}\) and SM\({}^{2}\) terms. To obtain a first rough estimate of the rate from interference and to illustrate the parametric dependences we present here an approximate phase-space integration of the rate in Eq. (9). Most of the rate originates from events in the doubly-on-shell region, see _i_) above. We thus keep the \(m_{b\bar{b}}\) and \(m_{c\bar{b}}\) dependence in the Breit-Wigner denominators but set \(m_{b\bar{b}}=M_{Z}\), \(m_{c\bar{b}}=M_{W}\) in the remaining squared amplitude. We then perform the approximate phase-space integration by integrating over the Breit-Wigner factors via \[\int_{-\infty}^{+\infty}dp^{2}\frac{1}{(p^{2}-M^{2})^{2}+M^{2}\Gamma^{2}}=\frac{\pi}{\Gamma M}\,,\] to obtain a rough estimate of the integrated, interference-based rate \[\mathcal{B}_{\mathrm{Int}}\approx-\pi^{2}N_{\mathrm{Int}}\frac{m_{t}}{\Gamma_{t}}\left(1-\frac{M_{W}^{2}}{m_{t}^{2}}-\frac{M_{Z}^{2}}{m_{t}^{2}}\right)\left(\frac{M_{W}^{2}}{m_{t}^{2}}+\frac{M_{Z}^{2}}{m_{t}^{2}}\right)\times g\cos\phi\,. \tag{10}\] We stress that this is only a rough approximation. In fact, the approximation overestimates the rate by a factor of two with respect to properly integrating Eq. (9) over the physical kinematic region and including the full \(m_{b\bar{b}}\) and \(m_{c\bar{b}}\) dependence. As expected from the discussion in _ii_) above, Eq. (10) does not contain \(g\sin\phi\) terms. The resulting rate is positive (constructive interference) when \(\cos\phi<0\) and negative when \(\cos\phi>0\) (destructive interference), see the colormap of \(\mathrm{d}^{2}\mathcal{B}_{\mathrm{Int}}^{\mathrm{cos}}\) in figure 2b. For this reason, in the following sections, we will concentrate on the case of constructive interference by choosing \[\cos\phi\equiv\cos(\phi_{\mathrm{NP}}-\phi_{\mathrm{SM}})\stackrel{{!}}{{=}}-1\,. \tag{11}\] While it may also be possible to search for destructive interference, i.e., a deficit of events in the doubly-resonant phase space, as for example employed in searches for heavy scalars [36; 37] that decay to \(t\overline{t}\), we will not pursue this direction here. Eq. (10) also illustrates that \(\mathcal{B}_{\rm Int}\) is not suppressed by factors of \(\Gamma_{W/Z}/M_{W/Z}\). As discussed below Eq. (8), the same holds for the NP\({}^{2}\) and SM\({}^{2}\) rates in the doubly-on-shell region, \(\mathcal{B}_{\rm NP/SM}^{\text{doubly on-shell}}\). Therefore, the interference-based rate can compete with the NP\({}^{2}\) rate for sufficiently small \(g\) if the analysis targets the doubly-on-shell region. In what follows we investigate the experimental viability of such a dedicated search.

## 3 Simulated samples and event selection

We generated Monte-Carlo (MC) samples with MadGraph5_aMC@NLO 3.2.0 (MG5) [38] using a custom UFO [39] model, which includes the contact \(tZc\) coupling as parametrised in Eq. (2), setting \(\phi=\pi\) (see discussion in Eq.
(11)), in addition to the full SM Lagrangian with non-diagonal CKM matrix. All matrix elements are calculated at leading order in perturbative QCD. We validated the custom model by simulating the decay \(t\to cb\bar{b}\) and comparing the distribution of events in the two-dimensional plane spanned by the Dalitz variables \(m_{c\bar{b}}^{2}\) and \(m_{b\bar{b}}^{2}\) (cf. section 2) with the expectation from the explicit calculation (figure 2). In the following, we simulate proton-proton collisions at a centre-of-mass energy of \(14\) TeV. The structure of the proton is parametrised with the NNPDF2.3LO set of parton distribution functions [40]. Factorisation and renormalisation scales are set dynamically event-by-event to the transverse mass of the irreducible \(2\to 2\) system resulting from a \(k_{\rm T}\) clustering of the final-state particles [41]. We simulate the FCNC contribution (\(\propto g^{2}\)), also referred to as NP\({}^{2}\) in section 2, and the interference contribution (\(\propto g\)) to the signal process \(t\bar{t}\to cb\bar{b}\,\mu^{-}\nu_{\mu}\bar{b}\) separately, whereas the SM contribution to this process is treated as irreducible background. We only simulate the muon channel for simplicity. The reducible background processes always include top-quark pair production with subsequent decay in the lepton\(+\)jets channel with first- or second-generation quarks \(q\) and \(q^{\prime}\). Besides the six-particle final state (\(b\bar{q}q^{\prime}\,\mu^{-}\nu_{\mu}\bar{b}\)), we also simulate resonant production of additional bottom quarks from \(t\bar{t}Z(\to b\bar{b})\) and non-resonant contributions from \(t\bar{t}b\bar{b}\) and \(t\bar{t}c\bar{c}\). We do not simulate several other small background processes, such as \(W^{-}+{\rm jets}\) production, diboson production with additional jets or \(t\bar{t}H\) production, because their contribution is expected to be negligible either due to their low cross section or their very different kinematic properties. We only generate muons and final-state partons with transverse momenta larger than \(20\) GeV and require final-state partons to have a minimum angular distance6 of \(\Delta R=0.4\) to each other, motivated by the minimum angular distance obtained with jet clustering algorithms. We require the same angular distance between final-state partons and the muon in order to mimic a muon isolation criterion. For events in the six-particle final state, i.e., signal and background contributions to \(cb\bar{b}\,\mu^{-}\nu_{\mu}\bar{b}\) as well as the reducible background \(b\bar{q}q^{\prime}\,\mu^{-}\nu_{\mu}\bar{b}\), we require muons and final-state partons to be in the central region of the detector (\(|\eta|<2.5\)). Footnote 6: \(\Delta R=\sqrt{\left(\Delta\phi\right)^{2}+\left(\Delta\eta\right)^{2}}\) with \(\phi\) the azimuthal angle and \(\eta\) the pseudorapidity. For simplicity, we do not use a parton shower in our studies. Instead, we smear the parton-level objects by the detector resolution in order to approximate detector-level jets, muons, and missing transverse momentum. The jet resolution is parametrised as \(\sigma(p_{\rm T})/p_{\rm T}=-0.334\cdot\exp(-0.067\cdot p_{\rm T})+5.788/p_{\rm T}+0.039\), where the transverse momentum, \(p_{\rm T}\), is in units of GeV. We obtain this parametrisation from a fit to values from the ATLAS experiment [42]. We recalculate the energy of each jet based on the smeared \(p_{\rm T}\) with the jet direction unchanged. 
We smear the \(x\)- and \(y\)-components of the missing transverse-momentum vector independently by adding a random number drawn from a Gaussian distribution with mean zero and standard deviation of \(24\,\text{GeV}\) [43]. We then calculate the scalar missing transverse momentum and the corresponding azimuthal angle. We take the muon transverse momentum resolution to be 2% [44; 45] with no kinematic dependence. We select events with criteria that are typical for top-quark analyses by the CMS and ATLAS collaborations. We require the muon to be in the central region of the detector (\(|\eta|<2.5\)) and to have a transverse momentum larger than \(25\,\text{GeV}\) to mimic typical single-muon trigger thresholds [46; 47]. We do not take trigger, identification, or isolation efficiencies into account. We only accept events with exactly four central jets (\(|\eta|<2.5\)) to reduce the contamination from the reducible background processes with higher jet multiplicity. Each jet has to have a transverse momentum larger than \(25\,\text{GeV}\) and we require the missing transverse momentum to be at least \(30\,\text{GeV}\). Given the signal final state, \(cb\bar{b}\,\mu^{-}\nu_{\mu}\bar{b}\), we demand the four jets in the event to fulfill the following \(b\)-tagging criteria. We require three jets to fulfill a \(b\)-tagging criterion with a \(b\)-tagging efficiency of 70% and corresponding mis-identification efficiencies of 4% and 0.15% for \(c\)-jets and light jets, respectively [48]. The additional fourth jet is often a \(c\)-jet and needs to pass a looser \(b\)-tagging criterion with a \(b\)-tagging efficiency of 91% and a correspondingly larger efficiency for \(c\)-jets [48]. The mis-identification efficiency for light jets of this looser \(b\)-tagging criterion is 5%. We choose the \(b\)-tagging selection from various combinations of \(b\)-tagging criteria with different \(b\)-tagging efficiencies and corresponding mis-tagging efficiencies. We select the combination with the highest value of \(S/\sqrt{S+B}\), where \(S\) and \(B\) are the total number of weighted events for the signal and the background contributions, respectively, as calculated by sampling of jets according to the \(b\)-tagging efficiencies for the different jet flavours (\(S\) contains both the FCNC and interference contribution). Instead of removing events that did not pass the \(b\)-tagging criteria, we weight events by the total \(b\)-tagging probability to avoid large uncertainties due to the limited size of the MC datasets. We weight events in samples for the six-particle final states, where we required all four partons to be central already at generator-level, by a factor of \(\varepsilon_{\text{4j}}=0.5\), as roughly half of the events in top-quark pair production at the LHC have more than four jets due to additional radiation [49]. We use \(k\)-factors to scale the MG5 leading-order cross sections of the MC samples to higher orders in perturbation theory. 
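To make the object-level emulation above concrete, the following is a minimal NumPy sketch of the smearing step: the quoted jet-resolution parametrisation, the 24 GeV Gaussian smearing of the missing-transverse-momentum components, and the flat 2% muon resolution. This is only an illustration under the stated assumptions; the function and variable names are placeholders and not taken from the analysis code.

```python
import numpy as np

rng = np.random.default_rng(42)

def smear_jet_pt(pt):
    """Gaussian smearing with the quoted resolution
    sigma(pT)/pT = -0.334*exp(-0.067*pT) + 5.788/pT + 0.039 (pT in GeV)."""
    rel_sigma = -0.334 * np.exp(-0.067 * pt) + 5.788 / pt + 0.039
    return rng.normal(pt, rel_sigma * pt)

def smear_met(met_x, met_y, sigma=24.0):
    """Smear the x- and y-components of the missing transverse momentum
    independently by a 24 GeV Gaussian; return the scalar MET and its azimuth."""
    mx = met_x + rng.normal(0.0, sigma)
    my = met_y + rng.normal(0.0, sigma)
    return np.hypot(mx, my), np.arctan2(my, mx)

def smear_muon_pt(pt, rel_sigma=0.02):
    """Flat 2% muon transverse-momentum resolution, no kinematic dependence."""
    return rng.normal(pt, rel_sigma * pt)

# Example: a 60 GeV parton-level jet; the energy is rescaled by the same factor
# so that the jet direction (eta, phi) stays unchanged.
pt_true, e_true = 60.0, 75.0
pt_smeared = smear_jet_pt(pt_true)
e_smeared = e_true * pt_smeared / pt_true
```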
\begin{table} \begin{tabular}{c c c c c c c} Process & \(\sigma_{\text{MG}}\) [pb] & \(k\)-factor & \(\varepsilon_{\text{4j}}\) & \(\varepsilon_{\text{pass}}\) & \(\varepsilon_{\text{btag}}\) & \(N_{\text{exp}}\) \\ \hline \(\bar{t}t\) & \(1.73\cdot 10^{1}\) & \(1.63\) & \(0.5\) & \(4.4\cdot 10^{-1}\) & \(6.7\cdot 10^{-4}\) & \(1.24\cdot 10^{4}\) \\ \(\bar{t}t\bar{b}b\) & \(2.29\cdot 10^{-1}\) & \(1.17\) & \(1\) & \(2.8\cdot 10^{-3}\) & \(5.7\cdot 10^{-1}\) & \(1.27\cdot 10^{3}\) \\ \(\bar{t}t\bar{c}c\) & \(2.12\cdot 10^{-1}\) & \(2.41\) & \(1\) & \(2.8\cdot 10^{-3}\) & \(2.9\cdot 10^{-2}\) & \(1.21\cdot 10^{2}\) \\ \(\bar{t}tZ\) & \(3.07\cdot 10^{-3}\) & \(1.44\) & \(1\) & \(2.1\cdot 10^{-2}\) & \(5.7\cdot 10^{-1}\) & \(1.58\cdot 10^{2}\) \\ \(\bar{t}t_{bc}\) & \(1.46\cdot 10^{-2}\) & \(1.63\) & \(0.5\) & \(4.4\cdot 10^{-1}\) & \(1.5\cdot 10^{-1}\) & \(2.33\cdot 10^{3}\) \\ Interference & \(3.35\cdot 10^{-5}\) & \(1.63\) & \(0.5\) & \(4.6\cdot 10^{-1}\) & \(1.5\cdot 10^{-1}\) & \(5.53\cdot 10^{0}\) \\ FCNC & \(3.32\cdot 10^{-4}\) & \(1.63\) & \(0.5\) & \(4.6\cdot 10^{-1}\) & \(1.5\cdot 10^{-1}\) & \(5.58\cdot 10^{1}\) \\ \hline \end{tabular} \end{table} Table 1: The leading-order cross section \(\sigma_{\text{MG}}\) from MG5, the \(k\)-factors, the probability to have only four jets at the LHC for the processes with a six-particle final state, \(\varepsilon_{\text{4j}}\), the fraction of simulated events passing the event selection, \(\varepsilon_{\text{pass}}\), the \(b\)-tag efficiency, \(\varepsilon_{\text{btag}}\), and the expected number of events \(N_{\text{exp}}\) for an integrated luminosity of 3000 fb\({}^{-1}\) for each process. \(t\bar{t}_{bc}\) denotes the irreducible SM-background contribution to the \(b\bar{q}q^{\prime}\,\mu^{-}\nu_{\mu}\bar{b}\) final state. The values of the interference and the FCNC contribution are given for \(g=0.01\) and \(\cos\phi=-1\). For the six-particle final states associated with top-quark pair production, we use a value of \(986\,\mathrm{pb}\) as calculated at next-to-next-to-leading order in QCD including next-to-next-to-leading logarithmic soft gluon resummation [50]. For \(t\bar{t}b\bar{b}\) and \(t\bar{t}c\bar{c}\), we use cross sections of \(3.39\,\mathrm{pb}\) and \(8.9\,\mathrm{pb}\), respectively, as calculated with MG5 at next-to-leading order [51]. For \(t\bar{t}Z\) production, we use a cross section of \(1.015\,\mathrm{pb}\), which includes next-to-leading order QCD and electroweak corrections [52]. Table 1 summarizes the efficiencies of the event selection, the MG5 leading-order cross sections, the \(k\)-factors, the \(b\)-tagging efficiencies, and the expected number of events for an integrated luminosity of \(3000\,\mathrm{fb}^{-1}\). To show the detector-level distribution of the expected number of events for \(3000\,\mathrm{fb}^{-1}\) we define the variables \(m_{W,\mathrm{reco}}\) and \(m_{Z,\mathrm{reco}}\) in analogy to the parton-level Dalitz variables \(m_{c\bar{b}}\) and \(m_{b\bar{b}}\) (cf. section 2). For each event, the three jets with invariant mass closest to the top-quark mass form the hadronically decaying top-quark candidate. From these three jets, we assume the jet with the lowest sampled \(b\)-tag score to be the \(c\)-jet. In case of a tie, we choose the jet with the higher \(p_{\mathrm{T}}\). The invariant mass of the two remaining jets is \(m_{Z,\mathrm{reco}}\). 
We then calculate the invariant mass of the \(c\)-tagged jet combined with each of the remaining two jets of the hadronic top-quark system, and take the invariant mass closer to \(M_{W}\) as \(m_{W,\mathrm{reco}}\). In figure 3, we show the expected number of events for 3000 fb\({}^{-1}\) in the two-dimensional plane spanned by \(m_{W,\mathrm{reco}}\) and \(m_{Z,\mathrm{reco}}\) originating from different contributions: in (a) events from the pure FCNC contribution, in (b) events from constructive interference, in (c) events from destructive interference, and in (d) events from the sum of all background processes. Figure 3: Expected number of events for 3000 fb\({}^{-1}\) in the \(m_{W,\mathrm{reco}}\) vs. \(m_{Z,\mathrm{reco}}\) plane (in bins of 2 GeV \(\times\) 2 GeV) for the representative value \(g=0.01\) and \(\cos\phi=-1\): in (a) from the pure FCNC contribution, in (b) from the interference contribution with positive and in (c) with negative event weights, and in (d) from the sum of the background processes. The results in figures 3(b) and 3(c) are in qualitative agreement with the parton-level result proportional to \(g\cos\phi\) shown in figure 2(b). Compared to it, the distributions are more spread out due to the finite detector resolution. However, the characteristic differences between pure FCNC, interference, and background contributions are still visible. ## 4 Sensitivity at hadron colliders Next, we estimate the sensitivity of the interference-based approach to the \(tZc\) FCNC coupling in the form of expected upper limits on the coupling constant \(g\) and compare it with the traditional approach that focuses on the leptonic decay of the \(Z\) boson. The statistical methodology is briefly outlined in section 4.1. To separate the FCNC signal, i.e., the pure FCNC contribution, as well as the interference contribution, from the background, we use a classifier based on deep neural networks (DNN). We parametrise the DNN as a function of the FCNC coupling \(g\) for optimal separation over a large range of coupling values. In section 4.2, the architecture and the optimisation of the DNN are explained. The prospects at the HL-LHC are presented in section 4.3, and section 4.4 contains estimates for the sensitivity to \(g\) in various future scenarios. The section concludes with a comparison to other approaches to constrain \(tZc\) FCNC couplings in section 4.5. ### Outline of the statistical methods Our metric for the sensitivity to the \(tZc\) FCNC coupling is the 95% CL expected upper limit on \(g\) since this allows for a straightforward comparison with existing searches. The method to derive the upper limit is the following: We create pseudo-measurements by sampling from the background-only histogram assuming a Poisson distribution for the counts per bin. Motivated by the Neyman-Pearson lemma [53], we construct a likelihood-ratio test statistic, \(t\), by comparing the bin counts from the pseudo-measurements \(\vec{x}\) with the expectation values from the MC simulation under the \(s\)+\(b\)-hypothesis (\(b\)-only-hypothesis) \(\vec{\lambda}_{s+b}\) (\(\vec{\lambda}_{b}\)) for each pseudo-measurement: \[t=-2\ln\left(\frac{\mathcal{L}(\vec{x}\mid\vec{\lambda}_{s+b})}{\mathcal{L}(\vec{x}\mid\vec{\lambda}_{b})}\right)\,,\quad\text{with}\quad\mathcal{L}=\prod_{i=1}^{N_{\text{bins}}}\frac{\lambda_{i}^{x_{i}}}{x_{i}!}\mathrm{e}^{-\lambda_{i}}\,. 
\tag{12}\] The nominal expected upper limit on the coupling strength, \(g_{\mathrm{excl}}\), is derived as the median of all pseudo-measurements under the assumption of the absence of a signal with the \(\mathrm{CL_{s}}\) method [54]. ### Optimisation of the parametrised deep neural networks Resolution effects, in particular the jet-energy resolution, and wrong assignments of jets to the decay branches complicate the reconstruction of invariant masses at detector level and motivate the use of machine-learning techniques to optimise the separation of signal and background in a high-dimensional space. We use the following 31 variables for the training of the DNN: for the \(b\)-tagged jets, their transverse momenta, pseudorapidities, azimuthal angles, energies and the highest-efficiency \(b\)-tagging working point that the jet passes; for the single muon, its transverse momentum, pseudorapidity and azimuthal angle; for the missing transverse momentum, its magnitude and azimuthal angle. The values of all azimuthal angles \(\phi\) are replaced by the combination of \(\sin\phi\) and \(\cos\phi\) due to the periodicity of the azimuthal angle. The natural logarithm is applied to all transverse momentum and energy spectra and the missing transverse momentum spectrum, as these variables have large positive tails. The dataset is split with fractions of 60% : 20% : 20% into training, validation and test sets. As a last step, all variables are studentised using \(y^{\prime}_{i}=(y_{i}-\mu)/\sigma\), where \(\mu\) refers to the arithmetic mean of the respective variable and \(\sigma\) is the estimated standard deviation. Besides these 31 observables, we also use the coupling constant \(g\) as an input to the DNN, which leads to a parametrised DNN [55]. The idea is to present different values of \(g\) to the DNN during the training so that the DNN learns the relative importance of the different signal contributions as a function of \(g\). For example, for \(g\gtrsim\mathcal{O}(0.1)\) the DNN should not focus on the interference contribution at all and instead concentrate on the separation of the FCNC contribution against the backgrounds. This is because the weight of the FCNC contribution exceeds that of the interference contribution by orders of magnitude in that regime. Conversely, for \(g\lesssim\mathcal{O}(0.001)\) the DNN should start to focus on the interference contribution more and more to leverage the slower decrease of the number of expected events for the interference contribution compared to the FCNC contribution. To give the DNN the possibility to learn this dependence, we further split the training and the validation set into five stratified subsets. Each of these subsets corresponds to a specific value of \(g\in\{0.001,\,0.005,\,0.01,\,0.05,\,0.1\}\). These values are chosen to cover the range around the current best exclusion limit of about \(0.0126\)[23]. For the training, the weights of the signal events are adjusted so that for a given value of \(g\) the sum of weights in each subset corresponds to the sum of weights of the background contribution. The constructed DNN has four output nodes: one for pure FCNC events, one for interference events with positive weight, one for interference events with negative weight, and one for background events. For the output layer, we use softmax and for the hidden layers ReLU as the activation function. We use the Adam optimiser [56] and categorical cross-entropy as the loss function. 
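As an illustration of the parametrised classifier described above, the following is a minimal Keras sketch, not the authors' implementation, of a network with 31 kinematic inputs plus the coupling \(g\) as a parametrising input, ReLU hidden layers, four softmax output nodes, the Adam optimiser, and a categorical cross-entropy loss. The layer sizes follow the final configuration quoted in the next paragraph; all names and the training call are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# 31 kinematic observables + the coupling g as an additional, parametrising input.
n_inputs = 32

model = models.Sequential([
    layers.Input(shape=(n_inputs,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    # Four output nodes: FCNC, positive interference, negative interference, background.
    layers.Dense(4, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="categorical_crossentropy",
)

# Hypothetical training call: X_train must contain g as one of its columns and
# w_train the per-event weights discussed in the text.
# model.fit(X_train, y_train, sample_weight=w_train, batch_size=1000, epochs=50)
```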
For the determination of the expected exclusion limit, a one-dimensional discriminant \[d=\frac{1-\alpha_{\mathrm{bkg}}-\alpha_{\mathrm{negInt}}+\alpha_{\mathrm{posInt}}+\alpha_{\mathrm{FCNC}}}{2}\in[0,1] \tag{13}\] is constructed based on the activation \(\alpha\) of the respective output nodes. We assign a negative prefactor to the output node corresponding to the negative interference contribution, to increase the difference between the background-only and the signal distribution of \(d\). The corresponding histograms of \(d\) consist of 10 equidistant bins. To account for charge-conjugated processes, the bin contents are multiplied by a factor of two. The structure of the DNN as well as the learning rate and the batch size during the training are manually optimised based on the expected exclusion limit on the validation set. A learning rate of 0.001 and a batch size of 1000 are chosen. The final structure of the DNN is \([32,\,128,\,256,\,128,\,64,\,32,\,4]\), with the numbers referring to the number of nodes in the respective layer. The evolution of the expected exclusion limit during the training of the DNN is shown in figure 4(a). ### Prospects for HL-LHC The integrated luminosity expected at the HL-LHC is \(\mathcal{L}=3000\,\mathrm{fb}^{-1}\)[57]. Figure 4(b) contains the \(\mathrm{CL_{s}}\) values resulting from the evaluation of the DNN on the test set as a function of the coupling constant \(g\). We find an expected upper exclusion limit at 95% CL of \[g_{\mathrm{excl}}=8.8^{+1.7}_{-1.3}\times 10^{-3}. \tag{14}\] The corresponding nominal upper limit on the branching fraction is \(\mathcal{B}_{\mathrm{excl}}(t\to Zc)=6.4\times 10^{-5}\). In the following, we highlight some of the features of the machine-learning based analysis to illustrate the employed methods. The distributions of the discriminant for \(g=g_{\rm excl}\) and the rejected hypothesis \(g=0.02\) are shown in figure 5 for the signal and the background-only hypothesis. Since the DNN is parameterised in \(g\), the background-only distribution depends on \(g\) as well. The number of background events expected in the rightmost bins increases for \(g=0.02\) compared to the bin contents expected for \(g=g_{\rm excl}\). This implies that the DNN adapts to the simplifying kinematics due to the decreasing importance of interference events. In figure 6 we show both the bin contents expected for \(g=g_{\rm excl}\) for each background process and the shapes of the signal contributions. Since the irreducible SM background \(t\overline{t}_{\overline{b}c}\) has the same final state as the signal, the separation from signal events turns out to be rather difficult compared to the reducible backgrounds. In contrast to this irreducible component, top-quark pair production with decays to only first- and second-generation quarks, denoted by \(t\overline{t}\), can be separated better. Nevertheless, this process remains the most important background contribution due to its high cross section. The DNN separates the signal from the three processes with an additional heavy-flavour quark pair well; this can be attributed to the different kinematical structure due to the additional particles in the event. It should also be noted that the FCNC distribution has a slightly higher mean than the positive interference distribution. 
This is due to two factors: Firstly, in the vicinity of \(g=g_{\rm excl}\) the sum of weights of the FCNC contribution is still a bit larger than the sum of weights of the positive interference contribution. Thus, the DNN focusses on separating the FCNC events from the background events because of their larger relative impact on the loss function. Secondly, the distribution of the events in the considered phase space inherently offers more separation power from the background for the FCNC events compared to the interference events, as visualised in the \(m_{W,\text{reco}}\) vs. \(m_{Z,\text{reco}}\) plane shown in figure 3. Additionally, the mean value of the distribution for negative interference events is only slightly lower compared to the positive interference contribution, even though the definition of the discriminant in Eq. (13) considers these with opposite relative signs. This validates the observation from figure 3 that the distribution of the negative-interference events in the phase space is quite spread out and thus difficult to separate from the horizontal band of the FCNC contribution in the \(m_{W,\text{reco}}\) vs. \(m_{Z,\text{reco}}\) plane as well as from the similarly distributed positive-interference contribution. Figure 4: In (a), the expected 95% CL exclusion limit on \(g\) calculated on the validation set after each epoch during the training of the DNN. In (b), the \(\text{CL}_{\text{s}}\) value estimated for various values of the coupling constant \(g\) and the corresponding \(\pm 1\sigma\) and \(\pm 2\sigma\) uncertainty bands. ### Prospects for future experiments We explore the potential of the interference-based approach based on various future scenarios. These include developments in the realms of analysis methods, detector development, and future colliders. **Improved \(b\)-tagging.** The performance of \(b\)-tagging algorithms is crucial for the suppression of background contributions. This is evident when considering that the main background contribution after the event selection (see section 3) is \(t\overline{t}\to b\overline{s}c\,\mu^{-}\nu_{\mu}\overline{b}\), which only differs from the signal final state by an \(\overline{s}\) instead of a \(\overline{b}\) quark. Thus, we expect a gain in sensitivity with increasing light-jet rejection factors at the considered \(b\)-tagging working points. The \(b\)-tagging algorithms that provide this rejection are being constantly improved by the experimental collaborations. An approach based on Graph Neural Networks [58] has already shown increased performance in comparison to traditional approaches. To examine the effects of improved \(b\)-tagging algorithms, the analysis is repeated with light-jet rejection rates multiplied by a factor of two. The resulting exclusion limit is \[g_{\text{excl}}^{\text{tag}}=8.0^{+1.6}_{-1.2}\times 10^{-3}. \tag{15}\] This amounts to a relative improvement of the expected limit of around 9\(\%\) compared to the baseline result presented in section 4.3. Figure 5: The signal and background distribution of the discriminant for \(g=8.8\times 10^{-3}\) and \(g=0.02\). As the DNN is parameterised in \(g\), the background distribution depends on \(g\) as well. The bottom panel shows the ratio of expected signal+background events divided by the number of expected background events, \((S+B)/B\). 
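For concreteness, the sketch below shows, under stated assumptions, how the discriminant of Eq. (13) can be built from the four softmax activations and how the Poisson likelihood-ratio test statistic of Eq. (12) is evaluated on background-only pseudo-experiments. The ten bin contents are purely illustrative placeholders and do not correspond to the distributions in figures 5 and 6.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminant(alpha):
    """Eq. (13): combine the softmax activations, ordered here as
    [FCNC, positive interference, negative interference, background],
    into a single discriminant d in [0, 1]."""
    a_fcnc, a_pos, a_neg, a_bkg = alpha.T
    return (1.0 - a_bkg - a_neg + a_pos + a_fcnc) / 2.0

def test_statistic(x, lam_splusb, lam_b):
    """Eq. (12): Poisson likelihood-ratio test statistic for one pseudo-measurement.
    The x! terms are constant in the ratio and cancel."""
    def log_l(lam):
        return np.sum(x * np.log(lam) - lam)
    return -2.0 * (log_l(lam_splusb) - log_l(lam_b))

# Toy example with 10 bins of the discriminant histogram (illustrative numbers only).
lam_b = np.array([800, 400, 200, 100, 60, 30, 15, 8, 4, 2], dtype=float)
lam_s = 0.05 * lam_b[::-1]                              # placeholder signal shape
toys = rng.poisson(lam_b, size=(10_000, lam_b.size))    # background-only pseudo-data
t_median = np.median([test_statistic(x, lam_b + lam_s, lam_b) for x in toys])
```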
**Improved jet-energy resolution.** As discussed in section 2, the reconstruction of the Dalitz variables \(m_{c\bar{b}}^{2}\) and \(m_{b\bar{b}}^{2}\) enables the separation of the different contributions to the parton-level \(t\to b\overline{b}c\) decay. However, for the full process, \(t\overline{t}\to bq\overline{q}^{\prime}\,\mu^{-}\nu_{\mu}\overline{b}\), the separation power degrades due to wrong jet combinations chosen in the reconstruction of the invariant masses and the limited jet-energy resolution. Significant improvements in the resolution are expected for experiments at the FCC-hh [59] based on simulation studies for calorimetry [60]. To investigate the impact of this improvement, we recompute the expected limit with an improved jet \(p_{\mathrm{T}}\) resolution, without changing any other parameter. This results in \[g_{\mathrm{excl}}^{\mathrm{res}}=7.4^{+1.4}_{-1.2}\times 10^{-3}, \tag{16}\] which corresponds to an improvement of about 16\(\%\). **Improved statistical power.** The FCC-hh is projected to deliver an integrated luminosity of the order of \(20\,\mathrm{ab}^{-1}\) at a centre-of-mass energy of 100 TeV [59]. This presents an excellent opportunity to search for \(tZc\) FCNC effects in the realm of small coupling constants with the interference-based approach. We do not generate new MC samples for \(\sqrt{s}=100\) TeV. Instead, we scale the event weights by a common factor of \(\sigma_{t\overline{t}}(100\,\mathrm{TeV})/\sigma_{t\overline{t}}(14\,\mathrm{TeV})\approx 35\), which is the increase of the \(t\overline{t}\) cross section due to the higher centre-of-mass energy [61], as the signal and the main background processes rely on \(t\overline{t}\) production. However, we neglect any difference in the \(\sqrt{s}\) scaling of the cross sections in the presence of additional jets for the background processes. The projected exclusion limit for this scenario is hence a rough estimate. Including these changes and repeating the analysis yields a limit of \[g_{\mathrm{excl}}^{\mathrm{stat}}=1.9^{+0.5}_{-0.4}\times 10^{-3}\,, \tag{17}\] which amounts to an improvement of around a factor of four. Figure 6: Number of events for each background process in bins of the discriminant \(d\). The expected number of events in each bin is determined from the nominal expected exclusion limit \(g=8.8\times 10^{-3}\) and an integrated luminosity of 3000 fb\({}^{-1}\) at HL-LHC. In addition, the shapes of the signal distributions are illustrated. **Combination of improvements.** As a last scenario, we combine all three improvements discussed above. Therefore, this scenario corresponds to a rough projection of the sensitivity at a future general-purpose detector at the FCC-hh with significantly improved \(b\)-tagging algorithms and jet resolution. Retraining and evaluating the DNN on the adjusted dataset, we obtain an expected limit of \[g_{\text{excl}}^{\text{comb}}=1.2^{+0.4}_{-0.3}\times 10^{-3}. \tag{18}\] This corresponds to an improvement of about a factor of seven and results in an upper limit on the branching fraction of \(\mathcal{B}_{\text{excl}}^{\text{comb}}(t\to Zc)=1.2\times 10^{-6}\). ### Comparison to other approaches We compare the sensitivity of the interference-based approach to other approaches that target \(tZc\) FCNC effects. We briefly introduce three alternative approaches and then discuss the relative sensitivities of the different methods. 
**Leptonic analysis.** Traditionally, \(tZq\) FCNCs are searched for by using the leptonic \(Z\to\ell^{+}\ell^{-}\) decay mode instead of the hadronic decay \(Z\to b\overline{b}\). This leads to three-lepton final states for the signal, which are associated with low SM-background contributions. Ref. [23] provides the tightest expected exclusion limit for \(\mathcal{B}(t\to Zc)\) of \(11\times 10^{-5}\) to date. It considers both single-top quark production via an FCNC \(tZc\) vertex (\(qg\to tZ\)7) and top-quark pair production with an FCNC decay of one of the top quarks. Using the simple scaling introduced in section 1, we obtain an expected exclusion limit for \(3000\,\text{fb}^{-1}\) of Footnote 7: We implicitly include charge-conjugated processes in the following discussions. \[\mathcal{B}_{\text{excl}}^{\text{lep}}(t\to Zc)\approx 11\times 10^{-5}\cdot \sqrt{\frac{139}{3000}}\approx 2.4\times 10^{-5}\,. \tag{19}\] Here, we have taken the limit for a left-handed coupling, just as in our studies, and have assumed that systematic uncertainties will reduce according to the same scaling as the statistical uncertainties with the increase in integrated luminosity. This simple projection shows some tension with the extrapolation in Ref. [30] of the search for \(tZc\) FCNC effects with \(36.1\,\text{fb}^{-1}\) at \(\sqrt{s}=13\,\text{TeV}\)[29] by the ATLAS collaboration, which gives an expected upper limit of \(4\) to \(5\times 10^{-5}\) for the HL-LHC, depending on the assumptions on the reduction of systematic uncertainties. This limit is looser than the one obtained from the scaling above. This hints at the importance of the correct estimation of the long-term reduction of systematic uncertainties and highlights that the assumption that systematic uncertainties decrease according to the same scaling as statistical uncertainties may indeed be over-optimistic for the leptonic approach. The extrapolation to the FCC-hh scenario results in an expected limit of \(1.6\times 10^{-6}\), where we again have used an integrated luminosity of \(20\,\text{ab}^{-1}\) and included a factor of \(35\) for the increase of the cross sections with \(\sqrt{s}\), based again on the scaling of the \(t\bar{t}\) cross section. This projection is probably optimistic and we regard it as a rough estimate. In particular, the factor of \(35\) is unlikely to capture the increase of the cross section of the FCNC production mode accurately. Additionally, this scaling implies a reduction of systematic uncertainties by a factor of more than \(15\), which does not seem realistic given the challenging experimental conditions at the FCC-hh. **Ultraboosted approach.** In Ref. [62], it was proposed to search for top-FCNC effects in \(t\gamma\) and \(tZ\) production in the ultraboosted regime in which the decay products of the top quark merge into a single jet. In contrast to our approach, this method is only sensitive to the production mode. The ultraboosted approach is projected to yield an exclusion limit of \(\mathcal{B}(t\to Zc)<1.6\times 10^{-3}\) at the HL-LHC,8 considering a single source of systematic uncertainty on the number of background events of \(20\%\)[62]. The projected limit for the FCC-hh is \(3.5\times 10^{-5}\)[62].9 Footnote 8: We quote the significantly more sensitive semileptonic decay channel of the top quark and do not attempt to provide a combination with the hadronic decay channel. 
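The scaling argument above can be reproduced with a few lines of arithmetic. This is only the naive \(1/\sqrt{\sigma\mathcal{L}_{\rm int}}\) scaling described in the text, taking the Run-2 limit of Ref. [23] as input; it is a rough cross-check, not a replacement for the dedicated extrapolations discussed above.

```python
import math

# Naive luminosity scaling of the leptonic-channel limit: B_excl ~ 1/sqrt(N_events),
# with N_events proportional to the cross section times the integrated luminosity.
b_run2 = 11e-5   # expected limit from Ref. [23] with 139 fb^-1

# HL-LHC: same cross section, 3000 fb^-1
b_hllhc = b_run2 * math.sqrt(139.0 / 3000.0)

# FCC-hh: 20 ab^-1 and a roughly 35-fold larger ttbar cross section at 100 TeV
b_fcchh = b_run2 * math.sqrt(139.0 / (20_000.0 * 35.0))

print(f"HL-LHC projection: {b_hllhc:.1e}")   # ~2.4e-05
print(f"FCC-hh projection: {b_fcchh:.1e}")   # ~1.6e-06
```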
**Triple-top-quark production.** Another way to search for top-quark FCNC effects is in triple-top-quark production: \(qg\to tB^{*}\) with \(B^{*}\to t\bar{t}\) [63, 64, 65, 66]. In this process, a single top quark is produced alongside an off-shell boson \(B^{*}\) mediating the FCNC, which splits into a \(t\bar{t}\) pair. The studies are performed for the same-sign lepton topology \(\nu_{\ell}\ell^{+}b\,q\bar{q^{\prime}}\bar{b}\,\nu_{\ell^{\prime}}\ell^{+}b\), which benefits from the fact that SM background contributions are small. However, as is also the case for ultraboosted \(tZ\) production, the expected limit on \(\mathcal{B}(t\to Zc)\) of \(1.35\times 10^{-2}\) at the HL-LHC [65] is relatively weak and has already been surpassed by analyses from the ATLAS [23] and CMS collaborations [24] using the leptonic analysis. The limit achievable at the FCC-hh is estimated to be \(4.6\times 10^{-4}\) [66]. **Discussion.** We summarise the expected limits of the individual approaches in table 2. The leptonic analysis yields the most stringent limit at the HL-LHC, while both the ultraboosted and triple-top approaches perform significantly worse than the interference-based method. This is to be expected since these two approaches use the production mode that is suppressed by the charm-quark parton distribution function. Our projected limit for the interference-based approach at HL-LHC of \(6.4\times 10^{-5}\) is likely to degrade when including systematic uncertainties. However, we restricted ourselves to only one analysis region with exactly four central \(b\)-tagged jets. The inclusion of more signal regions would improve the sensitivity while data-driven background estimations from dedicated control regions could mitigate the impact of systematic uncertainties. Additionally, the inclusion of the electron channel will improve the sensitivity. For the FCC-hh, the relative sensitivity of the interference-based approach compared to the leptonic analysis improves when compared to the HL-LHC scenario. This highlights the power of the interference-based approach when moving towards the realm of smaller and smaller couplings and the analysis of larger datasets with increasing statistical power. Nevertheless, it should be recognised that the FCC-hh would operate in a regime of very high pileup: the average number of visible interactions per bunch crossing is projected to be \(\mu\sim\mathcal{O}(1000)\)[59]. This poses notable challenges for flavour tagging and analyses that focus on jets in general. Because of this, more thorough studies with a dedicated detector simulation would be needed to assess and compare the sensitivity of the two approaches at the FCC-hh. \begin{table} \begin{tabular}{l c c} Approach & HL-LHC (\(3\,\mathrm{ab}^{-1}\)) & FCC-hh (\(20\,\mathrm{ab}^{-1}\)) \\ \hline Interference & \(6.4\times 10^{-5}\) & \(1.2\times 10^{-6}\) \\ Leptonic & \(2.4\times 10^{-5}\) & \(1.6\times 10^{-6}\) \\ Ultraboosted & \(1.6\times 10^{-3}\) & \(3.5\times 10^{-5}\) \\ Triple-top & \(1.4\times 10^{-2}\) & \(4.6\times 10^{-4}\) \\ \hline \end{tabular} \end{table} Table 2: Expected 95% CL limits for the HL-LHC and FCC-hh scenarios for the presented interference-based approach, the approach with leptonic \(Z\to\ell^{+}\ell^{-}\) decay (scaled based on [23]), the ultraboosted approach [62], and triple-top-quark production in the same-sign lepton channel [65, 66]. Footnote 9: The limits for the ultraboosted and the triple-top approaches from the references are scaled by \(1/\sqrt{2}\) to account for our assumption that roughly \(20\,\mathrm{ab}^{-1}\) will be available at the FCC-hh. 
The ultraboosted approach benefits significantly more from the energy gain from \(14\,\text{TeV}\) to \(100\,\text{TeV}\), as the limit is estimated to improve by a factor of approximately \(46\), while the limit from triple-top-quark production is only projected to improve by a factor of around \(29\). A clear hierarchy can be deduced: The triple top-quark approach only yields an expected limit of the order of \(10^{-4}\), while the ultraboosted approach is expected to perform better by around one order of magnitude. The interference-based approach and the leptonic analysis are both projected to push this even further to \(\mathcal{O}(10^{-6})\). It should also be noted that the \(Z\to\ell\ell\) and the interference approach have a different sensitivity to \(tZc\) and \(tZu\) FCNC couplings and are hence complementary. The \(Z\to\ell\ell\) analysis that focuses on the production mode is less sensitive to the \(tZc\) than to the \(tZu\) coupling due to the difference in parton distribution functions. Nevertheless, the sensitivities to the two couplings in the production mode are expected to be more similar at FCC-hh due to the evolution of the parton distribution functions considering higher energy scales and the tendency for lower Bjorken \(x\) compared to the LHC. In the decay mode, the \(Z\to\ell\ell\) approach has similar sensitivity to both couplings but relies on charm-quark identification for the distinction of these couplings. In contrast, the interference approach is almost exclusively sensitive to the \(tZc\) coupling. Thus, in case an excess over the SM prediction is observed in the future, the combination of these approaches will allow us to disentangle possible effects from these two couplings. ## 5 Conclusions Top-quark FCNCs are so highly suppressed within the SM that any observation at the LHC or planned future hadron colliders would constitute a clear signal of physics beyond the SM. At hadron colliders, the traditionally most promising and most employed channel to search for \(tZq\) FCNCs uses a trilepton signature, relying on the leptonic \(Z\to\ell^{+}\ell^{-}\) decay. Since the \(t\to Zq\) decay rate is quadratically proportional to the FCNC coupling, i.e., \(\propto g^{2}\), the resulting sensitivity to probe \(g\) scales as \(1/\sqrt[4]{\mathcal{L}_{\rm int}}\) with the integrated luminosity \(\mathcal{L}_{\rm int}\) (assuming systematic uncertainties are small compared to the statistical ones). Given the large datasets expected at the HL-LHC and planned future hadron colliders, we investigated how to improve upon this luminosity scaling with a novel strategy. We propose to target the hadronic, three-body decay \(t\to qb\bar{b}\). In the presence of \(tZq\) FCNCs, the decay receives two interfering contributions: one from the FCNC (\(t\to qZ(\to b\bar{b})\)) and one from the SM (\(t\to bW^{+}(\to q\bar{b})\)). Since the two contributions interfere, the three-body rate contains a term linear in the FCNC coupling, i.e., \(\propto g\). Therefore, for sufficiently small \(g\), the sensitivity to probe \(g\) scales as \(1/\sqrt{\mathcal{L}_{\rm int}}\) in this channel, thus more favourably than in the traditional multi-lepton searches. 
We studied the leading parametric dependencies controlling the kinematics of \(t\to qb\bar{b}\) and identified the requirements on the FCNC couplings that would allow leveraging the interference to compete with and complement traditional searches. The interference depends on the chirality and the phase of the FCNC coupling. It is largest for a left-handed \(tZq\) coupling, while for a right-handed one it is suppressed by the small masses of the bottom and \(q\) quark. We have thus focussed on the former case of left-handed \(tZq\) couplings. The interference is active in a small kinematical region in which both the \(Z\) and \(W\) bosons are "on-shell". In this small doubly-on-shell region, we showed that the parametric dependence on \(\Gamma/M\) is the same for the SM and the interference contribution. Therefore, targeting this doubly-on-shell region with a dedicated search has the potential to provide sensitivity with an improved luminosity scaling. Based on these findings, we studied the prospects of the proposed search strategy for the case of left-handed FCNC \(tZc\) couplings with constructive interference. We consider the production of \(t\bar{t}\to cb\bar{b}\,\mu^{-}\nu_{\mu}\bar{b}\) from \(tZc\) FCNCs as the signal process. We simulated this signal and relevant background processes with MadGraph5_aMC@NLO and emulated the detector response by smearing the parton-level objects with resolutions similar to those at the ATLAS and CMS experiments. We then separated the FCNC signal processes from the backgrounds with a deep neural network that is parameterised in the value of the FCNC coupling \(g\). This setup accounts for the varying FCNC-interference contribution to the total FCNC signal. If no signs of FCNC production were found, the resulting expected 95% confidence-level upper limit with the HL-LHC dataset is \(\mathcal{B}_{\text{excl}}(t\to Zc)=6.4\times 10^{-5}\). At the FCC-hh, the expected limit is improved by up to a factor \(\sim 50\), depending on the assumed detector performance. While this study only considered statistical uncertainties, the effect of systematic uncertainties should be studied in the future. The main backgrounds are \(t\bar{t}\) production with light-quark jets misidentified as \(b\)- or \(c\)-jets and \(t\bar{t}\) production with a \(W\to cb\) decay. As in most \(t\bar{t}\) measurements, uncertainties in the modelling of the \(t\bar{t}\) process may impact the sensitivity. The same is true for \(b\)-tagging and jet-related uncertainties. Heavy-flavour-associated \(t\bar{t}\) production is only a minor background and the potentially large associated systematic uncertainties are unlikely to significantly affect the sensitivity. Given the promising signal-background separation of the parameterised deep neural network, the statistical uncertainties on the number of events in the signal-dominated phase space may still compete with the systematic uncertainties in the background contributions. As the integrated luminosity increases, the advantage of the new strategy over the traditional approach generally becomes more pronounced. At the HL-LHC, the new strategy may not outperform the traditional search based on \(Z\to\ell\ell\) decays. However, at the FCC-hh, it has the potential to be competitive with the established approach. Nevertheless, given their complementarity, the combination of the two strategies will improve over the traditional search alone at both the HL-LHC and the FCC-hh. 
Additionally, the new interference-based approach demonstrates excellent prospects compared to several other alternative proposals for top-quark FCNC searches. Our study focussed on the case in which SM- and NP-sources of CP violation are aligned. It would be intriguing to relax this assumption and design dedicated observables, e.g., asymmetry distributions, that optimally leverage the interference in \(t\to qb\bar{b}\) to probe possible CP-violating phases in top-quark FCNC processes. In general, the interference approach will be important to understand the nature of the anomalous coupling in case top-quark FCNCs are observed, as it also provides information on its Lorentz structure. Given the results of our study on the proposed interference-based approach, it will be interesting to perform an analysis using current LHC data with a consistent treatment of systematic uncertainties and to estimate the sensitivity at the HL-LHC and future hadron-collider experiments under realistic experimental conditions. ## Acknowledgements The authors thank Fady Bishara and Nuno Castro for useful comments on the manuscript. ES thanks LianTao Wang for multiple inspiring discussions. The authors acknowledge the support from the Deutschlandstipendium (LC), the German Research Foundation (DFG) Heisenberg Programme (JE), the Studienstiftung des deutschen Volkes (JLS), and the partial support by the Fermi Fellowship at the Enrico Fermi Institute and the U.S. Department of Energy, Office of Science, Office of Theoretical Research in High Energy Physics under Award No. DE-SC0009924 (ES). ## Appendix A Two-body branching fractions Resonant \(W\)- and \(Z\)-boson production (if top FCNCs are present) dominates the inclusive rate for the three-body decay \(t\to cb\bar{b}\) via the diagrams in figure 1. As discussed in section 2, these contributions are well described in the narrow-width approximation in terms of inclusive two-body decay rates. Here, we collect the two-body decay rates in Eq. (6) that enter the decay \(t\to cb\bar{b}\) in the SM and when an anomalous \(tZc\) coupling is present: \[\mathcal{B}(t\to Zc)_{\text{FCNC}} =\frac{g^{2}}{128\pi}\frac{m_{t}}{\Gamma_{t}}\frac{m_{t}^{2}}{M_{Z}^{2}}\left(1-\frac{M_{Z}^{2}}{m_{t}^{2}}\right)^{2}\left(1+2\frac{M_{Z}^{2}}{m_{t}^{2}}\right)\,, \tag{20}\] \[\mathcal{B}(t\to W^{+}b)_{\text{SM}} =\frac{e^{2}|V_{tb}|^{2}}{64\pi s_{w}^{2}}\frac{m_{t}}{\Gamma_{t}}\frac{m_{t}^{2}}{M_{W}^{2}}\left(1-\frac{M_{W}^{2}}{m_{t}^{2}}\right)^{2}\left(1+2\frac{M_{W}^{2}}{m_{t}^{2}}\right)\,, \tag{21}\] \[\mathcal{B}(W^{+}\to c\bar{b})_{\text{SM}} =n_{c}\frac{e^{2}\left|V_{cb}\right|^{2}}{48\pi s_{w}^{2}}\frac{M_{W}}{\Gamma_{W}}\,, \tag{22}\] \[\mathcal{B}(Z\to b\bar{b})_{\text{SM}} =n_{c}\frac{e^{2}}{864\pi c_{w}^{2}s_{w}^{2}}\left(9-12s_{w}^{2}+8s_{w}^{4}\right)\frac{M_{Z}}{\Gamma_{Z}}\,, \tag{23}\] with \(s_{w}\) and \(c_{w}\) the sine and cosine of the weak mixing angle, and \(n_{c}=3\) the number of colours.
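As a quick numerical illustration of Eq. (20), the snippet below evaluates the FCNC branching fraction for the two exclusion limits quoted in the main text. The electroweak and top-quark inputs are representative placeholder values rather than the exact inputs of the paper, so the results only agree with the quoted limits at the level of these input choices.

```python
import math

# Representative inputs (placeholders, not taken from the paper)
MZ, mt, Gamma_t = 91.19, 172.5, 1.42   # GeV

def br_t_to_Zc(g):
    """Eq. (20): FCNC branching fraction B(t -> Zc) for coupling strength g."""
    r = MZ**2 / mt**2
    return g**2 / (128.0 * math.pi) * (mt / Gamma_t) / r * (1.0 - r)**2 * (1.0 + 2.0 * r)

for g in (8.8e-3, 1.2e-3):
    print(f"g = {g:.1e}  ->  B(t->Zc) = {br_t_to_Zc(g):.1e}")

# With these inputs, g = 8.8e-3 gives B ~ 6.8e-5 and g = 1.2e-3 gives B ~ 1.3e-6,
# of the same order as the quoted limits (6.4e-5 and 1.2e-6); the small offsets
# reflect the placeholder parameter choices above.
```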
2310.19291
A3SA: Advanced Data Augmentation via Adjoint Sensitivity Analysis
Innovative machine learning techniques have facilitated the inverse design of photonic structures for numerous practical applications. Nevertheless, within these approaches, the quantity of data and the initial data distribution are paramount for the discovery of highly efficient photonic devices. These devices often require simulated data ranging from thousands to several hundred thousand data points. This issue has consistently posed a major hurdle in machine learning-based photonic design problems. Therefore, we propose a novel data augmentation algorithm grounded in the adjoint method, capable of generating more than 300 times the amount of original data while enhancing device efficiency. The adjoint method forecasts changes in the figure of merit (FoM) resulting from structural perturbations, requiring only two full-wave Maxwell simulations for this prediction. By leveraging the adjoint gradient values, we can augment and label several thousand new data points without any additional computations. Furthermore, the augmented data generated by the proposed algorithm displays significantly improved FoMs owing to the precise FoM change predictions enabled by the adjoint gradients. We apply this algorithm to a multi-layered metalens design problem and demonstrate that it consequently exhibits a 343-fold increase in data generation efficiency. After incorporating the proposed algorithm into a generative adversarial network (GAN), the optimized metalens exhibits a maximum focusing efficiency of 92.93%, comparable to the theoretical upper bound (93.80%).
Chanik Kang, Dongjin Seo, Svetlana V Boriskina, Haejun Chung
2023-10-30T06:12:44Z
http://arxiv.org/abs/2310.19291v2
# A3SA: Advanced Data Augmentation via Adjoint Sensitivity Analysis ###### Abstract Innovative machine learning techniques have facilitated the inverse design of photonic structures for numerous practical applications. Nevertheless, within these approaches, the quantity of data and the initial data distribution are paramount for the discovery of highly efficient photonic devices. These devices often require simulated data ranging from thousands to several hundred thousand data points. This issue has consistently posed a major hurdle in machine learning-based photonic design problems. Therefore, we propose a novel data augmentation algorithm grounded in the adjoint method, capable of generating more than 300 times the amount of original data while enhancing device efficiency. The adjoint method forecasts changes in the figure of merit (FoM) resulting from structural perturbations, requiring only two full-wave Maxwell simulations for this prediction. By leveraging the adjoint gradient values, we can augment and label several thousand new data points without any additional computations. Furthermore, the augmented data generated by the proposed algorithm displays significantly improved FoMs owing to the precise FoM change predictions enabled by the adjoint gradients. We apply this algorithm to a multi-layered metalens design problem and demonstrate that it consequently exhibits a 343-fold increase in data generation efficiency. After incorporating the proposed algorithm into a generative adversarial network (GAN), the optimized metalens exhibits a maximum focusing efficiency of 92.93%, comparable to the theoretical upper bound (93.80%). \(\dagger\)These authors contributed equally to this work. ## I Introduction The field of photonics, which involves a study of detection, generation, and manipulation of light, has advanced with the growing interest in its versatile applications including light detection and ranging (LiDAR) [1; 2; 3], optical communication [4; 5; 6; 7], imaging [8; 9; 10], optical sensing [11; 12; 13; 14; 15; 16], quantum computing [17; 18; 19; 20; 21; 22; 23], and holography [24; 25; 26; 27; 28; 29]. In particular, nanophotonics [30; 31; 32; 33; 34], which merges the principles of nanotechnology and photonics to control light at the nanoscale, has facilitated the precise implementation of complex photonic structures. Furthermore, the demand for highly efficient and complex structures has led to a need for advanced design techniques. Conventional photonic design approaches include parameter sweep [35; 36; 37], Bayesian optimization [38; 39; 40; 41; 42], and global optimizations such as particle swarm optimization [43; 44; 45; 46; 47] or genetic algorithms [48; 49; 50; 51]. However, these methods encounter significant limitations when confronted with intricately complex design problems. More recently, efficient inverse design approaches have been proposed in photonics [52; 53; 54; 55; 56; 57]. Inverse design in photonics is a framework for optimizing photonic structures with respect to the figure of merit (FoM) in a parameter space with many degrees of freedom. One prominent methodology for the inverse photonic design involves the utilization of adjoint sensitivity analysis [52; 58; 59], which predicts the gradient of the FoM with respect to the dielectric permittivity values of engineered materials with only two runs of simulations. 
The gradient value is utilized to update design parameters to increase the FoM, and this iterative process is referred to as adjoint optimization [52; 60; 61; 62; 63; 64; 65]. However, the adjoint optimization process often converges to local rather than global optimal designs or requires an intricate binarization process [66; 67; 68; 69; 70], which sometimes leads to the degradation of the FoM. Meanwhile, deep learning has proven its effectiveness in designing complex photonic structures through the high-level expression of complex and nonlinear functions empowered by data-driven approaches [71; 72; 73; 74; 75]. In particular, generative models [67; 76; 77; 74; 77] have piqued considerable interest owing to their power of representation and flexibility in learning the complex structures of given image data. However, deep learning inherently requires a large dataset, often including a few hundred thousand data points, and relies heavily on the initial data distribution [78; 79; 80; 81]. Furthermore, conventional generative models [82; 83] intrinsically do not improve device performance because their optimization function generally relies on the likelihood or its correlated value [84]. A generative adversarial network (GAN) [85] represents an innovative approach for training neural networks, which comprises two distinct components: a generator and a discriminator. These two networks engage in a competitive interaction, wherein each strives to outperform the other, leading to mutual improvement. In the context of photonics, GANs exhibit remarkable generation efficiency, enabling rapid generation of numerous photonic devices [74; 75; 76]. However, GANs rarely demonstrate performance enhancements due to their dependence on the distribution of training data. A further complication arises from the loss function used in GAN training, which creates a minimax game between the generator and the discriminator, making it difficult for the learning process to converge [86]. Consequently, a new method for utilizing generative models in photonics is required. In this study, we introduce an innovative data augmentation algorithm for deep-learning-based photonic design. This algorithm, which we name A3SA (Advanced Data Augmentation via Adjoint Sensitivity Analysis), is built on the principle of adjoint sensitivity analysis. It can generate over 1,000 times the initial dataset without requiring many simulations, while simultaneously improving the distribution of the augmented data. Specifically, the adjoint gradients provide a highly accurate prediction of FoM changes caused by structural variations, resulting in augmented data with much higher device efficiencies. Consequently, the A3SA algorithm overcomes the limitations of deep learning in photonics such as the need for a large dataset and reliance on the initial data distribution. In addition, the algorithm can avoid convergence to the local optimum structure, which is often observed in adjoint optimization. We apply the proposed algorithm to a multi-layered metalens design problem involving high structural degrees of freedom. A3SA shows up to 343 \(\times\) data augmentations from the original dataset of size 100. In addition, we apply A3SA to a GAN and discover a multi-layered metalens design with a focusing efficiency of 92.93%. Based on its high versatility and efficiency, our data augmentation algorithm may open a new era in data-driven design in photonics. 
## II Advanced Data Augmentation via Adjoint Sensitivity Analysis We introduce A3SA, a novel photonic data augmentation method based on adjoint sensitivity calculations. The calculation provides the gradients of the FoM over the entire design space with only two simulations, forward and adjoint, as illustrated in Fig. 1 (a). The critical step in the adjoint sensitivity calculation is the efficient computation of the gradients with respect to the numerous geometrical degrees of freedom by combining the Lorentz reciprocity and the Born approximation [52; 53]. The Born approximation allows small changes in the dielectric constant to be represented by dipole sources with magnitudes linearly proportional to the unperturbed field E at the same points. In turn, the reciprocity principle allows obtaining an adjoint field by using coherent dipole sources with amplitudes derived from the definition of the design FoM, as shown in Fig. 1 (a). Then, the variation in the FoM can be calculated as \(\mathfrak{Re}\left(\mathbf{E}_{\mathrm{dir}}\cdot\mathbf{E}_{\mathrm{adj}}^{\ast}\right)\), where \(\mathbf{E}_{\mathrm{dir}}\) and \(\mathbf{E}_{\mathrm{adj}}^{\ast}\) can be obtained from the forward and adjoint simulations. The critical insight behind our algorithm is that the computed adjoint gradient value serves as both a "navigator" and a "barometer" for newly generated photonic data. Firstly, as a "navigator", it effectively guides the distribution of the figure of merit (FoM) by enhancing it. Secondly, as a "barometer", it accurately labels the FoM for newly generated data. The A3SA algorithm starts with a randomly generated initial structure, followed by the computation of adjoint gradients over the design space using the adjoint sensitivity analysis. An example of this process is illustrated in Fig. 1 (b) for the case of a photonic structure with cylindrical symmetry. Here, the adjoint gradients are averaged over the smallest design feature with cylindrical symmetry, a width of 50nm, and a height of 500nm. A3SA then searches for the highest absolute adjoint gradient (\(|g|\)). Next, it inverts the material density function (\(\rho\)) of the smallest design feature having the maximum value of \(|g|\). The inversion rule is the following: if an adjoint gradient is positive _and_ \(\rho\) is negative, an inversion takes place to increase \(\rho\) (i.e., inversion from low refractive index to high refractive index). If an adjoint gradient is negative _and_ \(\rho\) is positive, an inversion reduces \(\rho\) (i.e., inversion from high refractive index to low refractive index). The new structure with a locally inverted material density function is augmented data with a greater FoM. Material density inversion can be repeated multiple times until the total structural change exceeds the validity range of the Born approximation used in the adjoint sensitivity analysis. Therefore, the amount of augmented data can increase more than a thousandfold in a large photonic design problem, where small local changes do not violate the Born approximation. Augmented photonic structures can also be labeled with negligible error using a FoM of \(F+\Delta F\), where \(\Delta F\approx\frac{dF}{d\varepsilon}\Delta\varepsilon\), which enables A3SA to be utilized in a score-based deep neural network (DNN) model. The total number of inverted cells per iteration, denoted by \(k\), is set as a model hyperparameter. The value of \(k\) is proportional to the size of the design area and must not exceed a certain threshold to remain within the validity range of the Born approximation. 
Figure 1: Schematics of our A3SA (Advanced Data Augmentation via Adjoint Sensitivity Analysis) algorithm. (a) Illustration of the adjoint sensitivity analysis. Instead of running independent simulations for every geometrical perturbation, the adjoint sensitivity analysis employs the forward and adjoint (backward) simulations to compute the gradients with respect to the numerous geometrical degrees of freedom. Combining the Lorentz reciprocity and Born approximation [52; 53], the adjoint sensitivity analysis can provide exact gradients (\(\frac{dF}{d\varepsilon}\)) of the given structure with only two simulations. (b) The A3SA algorithm can increase the amount of the initial photonic data with given structural gradients by more than a thousandfold depending on the number of structure parameters. Furthermore, the additionally-generated photonic data have a FoM distribution shifted to a higher mean value by manually inverting the material density function (\(\rho\)) using the computed gradient information \(\frac{dF}{d\varepsilon}\). The optimal value of \(k\) can be determined by performing multiple inversions and analyzing the data distribution. Next, we select the optimal number of inversions by examining the mean and the maximum values of the distribution at each iteration. In mathematical terms, if we define the design space as \(X\) and a design feature as \(\mathbf{x}\), with each \(\mathbf{x}_{i}\)'s corresponding permittivity value represented as \(\varepsilon_{\mathbf{x}_{i}}\), the optimization process of one iteration is expressed in Eqs. (1), (2), and (3). Equation (1) shows the procedure for selecting design features based on their adjoint gradient values, starting from the feature with the largest absolute gradient and proceeding in order of decreasing absolute adjoint gradient. \[\begin{split}\mathbf{x}_{1}=\operatorname*{argmax}_{\mathbf{x}_{i}\in X}\left|\frac{dF}{d\varepsilon_{\mathbf{x}_{i}}}\right|\\ \mathbf{x}_{2}=\operatorname*{argmax}_{\mathbf{x}_{i}\in X\backslash\{\mathbf{x}_{1}\}}\left|\frac{dF}{d\varepsilon_{\mathbf{x}_{i}}}\right|\\ \vdots\\ \mathbf{x}_{k}=\operatorname*{argmax}_{\mathbf{x}_{i}\in X\backslash\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{k-1}\}}\left|\frac{dF}{d\varepsilon_{\mathbf{x}_{i}}}\right|\end{split} \tag{1}\] In Eq. (1), the previously selected design features are excluded in the subsequent steps of the selection. Our algorithm may contain multiple iterations of (1), where each adjoint gradient profile is computed per iteration using only two (forward and adjoint) simulations from the adjoint sensitivity analysis. The material densities of the selected design features are then inverted according to the aforementioned rules. Specifically, if a design feature has a positive adjoint gradient and its current material has the lower permittivity, the material is transitioned to increase the permittivity. However, if the gradient is positive and the design feature already consists of the high-permittivity material, no action is taken on the design feature. When the adjoint gradient is negative, the material is switched to decrease the permittivity. The change in permittivity (\(\Delta\varepsilon_{\mathbf{x}_{i}}\)) for each condition is described in Eq. (2). 
\[\Delta\varepsilon_{\mathbf{x}_{i}}=\begin{cases}\varepsilon_{\text{high}}- \varepsilon_{\text{low}}&\text{if }\frac{dF}{d\varepsilon_{\mathbf{x}_{i}}}>0\text{ and } \varepsilon_{\mathbf{x}_{i}}=\varepsilon_{\text{low}}\\ -(\varepsilon_{\text{high}}-\varepsilon_{\text{low}})&\text{if }\frac{dF}{d \varepsilon_{\mathbf{x}_{i}}}<0\text{ and }\varepsilon_{\mathbf{x}_{i}}= \varepsilon_{\text{high}}\\ 0&\text{otherwise}\end{cases} \tag{2}\] In Eq. (2), \(\varepsilon_{\text{high}}\) denotes the permittivity of the material with a higher value, while \(\varepsilon_{\text{low}}\) represents that of a lower value. In our multi-layer metalens design problem, \(\varepsilon_{\text{high}}\) is equivalent to \(\varepsilon_{\text{TiO2}}\) and \(\varepsilon_{\text{low}}\) to \(\varepsilon_{\text{SU-8}}\). After the inversion process, the FoM values of the newly generated devices are labeled using Eq. (3). \[\begin{split} F_{1}&=F_{0}+\frac{dF}{d\varepsilon_{ \mathbf{x}_{1}}}\Delta\varepsilon_{\mathbf{x}_{1}}\\ F_{2}&=F_{0}+\frac{dF}{d\varepsilon_{\mathbf{x}_{1}}} \Delta\varepsilon_{\mathbf{x}_{1}}+\frac{dF}{d\varepsilon_{\mathbf{x}_{2}}} \Delta\varepsilon_{\mathbf{x}_{2}}\\ &\vdots\\ F_{k}&=F_{0}+\sum_{i=0}^{k-1}\frac{dF}{d\varepsilon_{ \mathbf{x}_{i}}}\Delta\varepsilon_{\mathbf{x}_{i}}\end{split} \tag{3}\] Here, we use \(F_{1}\) to \(F_{k}\) to construct a new dataset, excluding \(F_{0}\). This approach results in a dataset expansion of \(k-1\) times the original dataset size, which reduces simulation costs associated with data generation. In Eq. (3), when \(\Delta\varepsilon_{\mathbf{x}_{i}}=0\), there is no alteration in the structure or in the value of the FoM in the step's progression. To confirm the accuracy and improvement of data distribution of the A3SA, we perform a validation study in a free-form 2D structure. Our validation begins with a planar 2D structure with 4,000 pixels in the design space, as shown in Fig. 2(a). We assume a two-dimensional lens problem which is a field maximization at its focal point. First, Figure 2: Validation of A3SA in a free-form 2D structure. Our validation begins with a planar 2D structure. We then compare the FoMs of the augmentation data to those of the simulated data (ground truth). (a) The simulation environment in a two-dimensional design space composed of 4,000 pixels. We set the electric field intensity at the focal point as the FoM in the optimization process. (b) The adjoint gradient profile is calculated via the adjoint sensitivity analysis, as shown in the inset surface plot. the adjoint gradient profile is calculated by adjoint sensitivity analysis, as shown in the inset surface plot of Fig. 2(b). Based on this, we randomly select the locations of material density inversions for 1 to 1,000 pixels where negative inversion occurs for negative adjoint gradient while positive inversion occurs for positive adjoint gradient. Then, we obtain simulated FoMs (ground truth) for the augmented (inverted) photonic structures to calculate the error of the FoM prediction (denoted as gray boxes) of the A3SA. As shown in Fig. 2(b), the A3SA successfully predicts FoM changes over 500-pixel inversions over a total of 4,000 pixels with less than 1% prediction error. Theoretically, this 500-pixel inversion corresponds to the possible data augmentation of 6.27\(\times 10^{652}\) since we can randomly select the locations of the inversion within the 4000 pixels in the design space. This is an extraordinary data augmentation enabled by only two simulations. 
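The selection, inversion, and labeling rules of Eqs. (1)-(3) can be written compactly as a single sketched routine. The binary density array `rho`, the per-feature gradient array `g`, and the return format below are our own simplifying assumptions rather than the authors' code; only the rules themselves follow the equations above.

```python
import numpy as np

def a3sa_iteration(rho, g, F0, eps_low, eps_high, k):
    """One A3SA iteration: k gradient-guided inversions with first-order FoM labels.

    rho : binary array, 1 for the high-index material, 0 for the low-index one.
    g   : adjoint gradient dF/d(eps) per design feature (same shape as rho).
    F0  : FoM of the starting structure.
    Returns the list of (structure, labeled FoM) pairs F_1 .. F_k of Eq. (3).
    """
    order = np.argsort(-np.abs(g))[:k]   # Eq. (1): features ranked by |dF/d(eps)|
    deps = eps_high - eps_low
    new_rho, F, out = rho.copy(), F0, []
    for i in order:
        if g[i] > 0 and new_rho[i] == 0:        # Eq. (2): raise permittivity
            new_rho[i], dF = 1, g[i] * deps
        elif g[i] < 0 and new_rho[i] == 1:      # Eq. (2): lower permittivity
            new_rho[i], dF = 0, -g[i] * deps
        else:                                    # no admissible inversion
            dF = 0.0
        F = F + dF                               # Eq. (3): first-order label
        out.append((new_rho.copy(), F))
    return out
```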
Also, 700 to 1,000-pixel inversions demonstrate prediction errors of 1.09% and 1.79%, respectively. Therefore, they can also be employed in the data augmentation of deep generative models such as variational autoencoders (VAE) [83], GAN [85], or diffusion models [87; 88]. ## III Multilayered metalenses Metasurfaces are flat optical devices with subwavelength structures that manipulate incident waves in unprecedented ways, providing a more remarkable precision of the manipulation than their bulky conventional counterparts [89; 90]. This approach enables a new way of compact imaging through a metalens, a two-dimensional device that focuses light with a geometric phase delay [91; 92; 93]. However, the standard metalens design approach, which stitches subwavelength unit cells together into a larger device, is limited to low numerical apertures or low focusing efficiencies due to sampling errors in the stitching process [94; 95; 96; 97]. Recent studies [98; 99] suggest that increasing the volume of the metasurface may relax the limitation of the metasurface's performance due to both increased geometric degrees of freedom and the provision of more room for light manipulation. However, a conventional metasurface design approach, known as unit-cell design, cannot provide a blueprint for multi-layer design because it cannot predict the interactions among the unit cells in different layers. Therefore, in this study, we apply our proposed data augmentation algorithm to solve the high-NA multi-layer metalens design problem. We employ cylindrical symmetry in our designs to minimize the computational burden without sacrificing the focusing efficiency, as illustrated in Fig. 3(a). The Fraunhofer diffraction [100] from circular apertures results in a pattern called an Airy disk [101]. This pattern features a dark region, referred to as a dark ring, where the destructive interference of light occurs. We define the focusing efficiency of our metalens by integrating focused energy within the third dark ring. The design space is confined to the r-z cross-section of the cylindrical metalenses, consisting of multi-layer TiO\({}_{2}\) nanostructures with SU-8 background, as illustrated in Fig. 3(b). TiO\({}_{2}\) and SU-8 offer a refractive-index difference of 0.9264 at a 1000nm wavelength, making them suitable components for highly resonant nanophotonic structures [102][103]. The strong resonance is crucial for designing "fast lenses" (high-NA), where a required phase profile rapidly varies over the radial direction of the lens. Moreover, a TiO\({}_{2}\) nanopattern with SU-8 background is feasible for fabrication by electron-beam lithography of SU-8, a commonly used epoxy-based negative photoresist used in microfabrication and spin-coating with SU-8 [104]. The full-wave simulations are performed using Meep [105; 106], an open-source software package for a finite-difference time-domain (FDTD) simulation. The minimum grid spacing of the FDTD simulation is 50nm, which corresponds to the minimum width of the TiO\({}_{2}\) nanostructures in the multi-layer Figure 3: Illustration of a multi-layer metalens design problem. (a) A circularly polarized plane wave propagates from the bottom of the cylindrically-symmetric metalens. The cross-sectional view of the metalens is depicted in (b). We confine the designable region to three layers of nanoring structures composed of TiO\({}_{2}\) and SU-8 materials. Intermediate layers are filled with SU-8 to make the multi-layer structure fabricable. metalens. 
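The forward solves described above are run in Meep. Below is a deliberately simplified 2D sketch of such a setup, using the 50 nm grid spacing and 1000 nm wavelength from the text; the cell size, the single illustrative TiO\({}_{2}\) bar, the approximate refractive indices, and the monitor position are placeholder choices of ours and not the cylindrically symmetric r-z configuration actually used for the metalens.

```python
import meep as mp

# Illustrative values only (not the paper's exact cylindrical r-z setup).
wvl = 1.0                       # 1 Meep unit = 1 um, so wvl corresponds to 1000 nm
res = 20                        # 20 pixels per unit -> 50 nm grid spacing
n_tio2, n_su8 = 2.50, 1.58      # approximate indices near 1000 nm (assumed)

cell = mp.Vector3(12, 10, 0)    # 2D cell: x transverse, y = propagation axis
geometry = [
    mp.Block(center=mp.Vector3(0, -2), size=mp.Vector3(mp.inf, 1.5, mp.inf),
             material=mp.Medium(index=n_su8)),      # SU-8 background layer
    mp.Block(center=mp.Vector3(0, -2), size=mp.Vector3(0.05, 0.5, mp.inf),
             material=mp.Medium(index=n_tio2)),     # one 50 nm-wide TiO2 bar
]
sources = [mp.Source(mp.ContinuousSource(frequency=1 / wvl), component=mp.Ez,
                     center=mp.Vector3(0, -4), size=mp.Vector3(12, 0))]

sim = mp.Simulation(cell_size=cell, resolution=res, geometry=geometry,
                    sources=sources, boundary_layers=[mp.PML(1.0)])
sim.run(until=200)

# |E|^2 at a nominal focal point above the lens stack (illustrative location).
print(abs(sim.get_field_point(mp.Ez, mp.Vector3(0, 4.4))) ** 2)
```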
The design parameters are NA=0.75, wavelength=1000nm, focal length=5.65\(\lambda\); we design the structure as shown in Fig. 3 and use cylindrical symmetry to reduce the computational costs of the design process. In the problem setup of a cylindrical metalens, multiple pixels are clustered in a nanoring. This indicates that each inversion of the nanoring structure involves several pixels simultaneously. Each pixel may have different adjoint values; thus, the material density inversion predicted by a spatially averaged adjoint value in each nanoring may not increase FoM significantly unlike the inversion of a free-form structure. We note that the multi-layered metalens setup is influenced by the findings presented in previous studies [107; 108; 60; 65]. ## IV Results First, we study the threshold of multiple inversions of the nanoring structure, which is equivalent to one iteration of A3SA, in the multi-layer metalens structure. The numerical experiment is motivated by the insight that a large number of inversions may result in breaking of the conditions of the Born approximation validity. As illustrated in Fig. 4, the experiment shows that the new FoMs gradually increase over a greater number of inversions up to seven and then decrease, which implies that the Born approximation may be violated around the seventh' inversions, leading to a failure of FoM prediction in the new structure. We apply multiple iterations of the A3SA to the multi-layer metalens design problem to demonstrate the efficacy of this method. Specifically, we start with one hundred randomly generated initial data entries (green bars) shown in Fig. 5(a), where they have an average focusing efficiency of 15.42% and a maximum focusing efficiency of 31.05%. Then, the A3SA augments one hundred data to seven hundred by multiple inversions of the material density of the nanoring structure. The data distribution after the single iteration of the A3SA shows an average and maximum efficiency of 25.63% and 68.24%, respectively, as illustrated in Fig. 5(a). The second iteration of the A3SA is applied to the 700 hundred data obtained from the first iteration. The total amount of data is now 4,900 in the second and 32,300 in the third iteration. The maximum focusing efficiency increases significantly over the multiple iterations of the A3SA. It ranges from 31.05% (initial data) to 68.24% (first iteration), 75.04% (second iteration), and 81.39% (third iteration). The sequential enhancement of the focusing efficiency and the number of augmented data sets proves the effectiveness of the algorithm. We also compare the augmented data with randomly generated data with the same amount as shown in Fig. 5(b). The augmented data shows higher average (25.63%) and maximum (68.24%) efficiencies compared to the randomly generated data. Figure 4: Box plot illustrating the focusing efficiency for randomly initialized multi-layer metalenses and A3SA-generated data derived from the initial dataset. Each plot provides a summary of each dataset, and the minimum and maximum values are denoted by the whiskers’ lower and upper ends, respectively. The bottom and top edges of the box represent the first (Q1) and third (Q3) quantiles, respectively. The median value is indicated by the line within the box. The mean value is depicted as an “x” mark inside the box. An inversion number of seven is chosen as it exhibits the peak average efficiency, with a maximum focusing efficiency value of 57.18%, a mean of 28.36%, and a standard deviation of 10.85%. 
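As noted above, the inversion threshold \(k\) is chosen empirically by sweeping the number of inversions and inspecting the resulting FoM distribution, as in Fig. 4. A schematic version of that sweep, reusing the hypothetical `a3sa_iteration` helper sketched earlier, could look as follows; the data containers and the stopping criterion are assumptions on our part.

```python
import numpy as np

def sweep_inversion_threshold(structures, gradients, foms, eps_lo, eps_hi, k_max=10):
    """Record mean/max labeled FoM versus number of inversions (cf. Fig. 4)."""
    stats = []
    for k in range(1, k_max + 1):
        labels = []
        for rho, g, F0 in zip(structures, gradients, foms):
            out = a3sa_iteration(rho, g, F0, eps_lo, eps_hi, k)  # sketched above
            labels.append(out[-1][1])          # labeled FoM after k inversions
        stats.append((k, np.mean(labels), np.max(labels)))
    # Pick k at the peak mean, before the Born approximation starts to break down.
    return stats
```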
We also implemented a generative adversarial network (GAN) approach to design multi-layer metalenses using A3SA data. GANs are renowned for their fast inference speeds, and they have been already successfully implemented in the inverse design of photonic structures [74; 75; 76]. Figure 6(a) illustrates the schematics of the GAN based on the A3SA data. The generative model used in this study comprises two networks: a generator (\(G\)) and a discriminator (\(D\)). The generator generates structural data \(x_{gen}=G(z)\) from the random input noise \(z\). The discriminator determines whether the input data are fake (labeled 0) or true (labeled 1). In the 1 training process illustrated in Fig. 6(a), we train two networks by adversarial learning grounded in the minimax optimization of a loss function \(L(D,G)\), which is mathematically expressed in Eq. (4). \[\min_{G}\max_{D}L(D,G)=\mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x)]+ \mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))] \tag{4}\] As described in Eq. (4), generator \(G\) aims to minimize the loss, and discriminator \(D\) attempts to maximize the loss simultaneously. Note that in the first term, data sampled from the true dataset \(p_{\text{data}}(x)\) are employed to train the discriminator \(D\), whereas in the second term, the generated data \(G(z)\) are used to train both the generator \(G\) and the discriminator \(D\). During the training process, a continuous interplay occurs between the generator and the discriminator. The generator attempts to deceive the discriminator by creating indistinguishable synthetic samples. In contrast, the discriminator attempts to distinguish true samples from the synthetic samples generated by the generator. The iterative process continues until the two networks reach a Nash equilibrium [109; 110], where the generator is expected to produce high-quality samples that the discriminator can hardly differentiate from the true samples. A progressive enhancement of data generation can be achieved by successive iterations of stages 1, 2, and 3, as depicted in Fig. 6 (a). In 1, we train both the generator (\(G\)) and the discriminator (\(D\)) networks together. This is succeeded by 2, where we take the top 40th percentile of generated devices from the trained generator, which is a strategy benchmarked from the previous study [74]. The filtered devices are then used as training data for the next iteration of the process. The GAN provides stochasticity to the data distribution, avoiding the convergence to a bad local optima. To further enhance the filtered data, we additionally apply 3 A3SA to them. In 3, we utilize a subset of the data generated from the A3SA, ensuring that the size of both input and output data of the A3SA remains constant. This ensures fairness of the comparative study between A3SA-based GAN and basic GAN. A comparison of the A3SA algorithm (labeled A3SA) with its corresponding ablation study (labeled GAN) is shown in Fig. 6 (b). In the ablation setup, the proposed A3SA process is excluded so that only 1 and 2 in Fig. 6 (a) are performed per iteration. Both A3SA-based GAN and basic GAN demonstrate increased focusing efficiencies over the iteration. However, the maximum and average focusing efficiencies of the A3SA-based GAN are much higher Figure 5: Analysis of the data distribution in the multiple iterations of the A3SA. (a) The initial data has a size of 100, where the average and maximum focusing efficiencies are 15.42% and 31.05%, respectively. 
After applying a single step of A3SA, the dataset size increases to 700. The distribution after the process has an average and maximum efficiency of 25.63% and 68.24%. (b) Seven hundred randomly generated data are compared against the A3SA-generated data. The average efficiency of the randomly generated data is 16.20%, and the maximum efficiency is 43.38%, which are lower than those of the A3SA-generated data (c) Comparison of the datasets generated per each iteration of A3SA. After successive expansions of the A3SA process, the initial one hundred data grew to 700 in the first iteration, 4,900 in the second, and 32,300 in the third iteration. In parallel, the maximum focusing efficiency notably increases from 31.05% (initial data) to 68.24% (first iteration), 75.04% (second iteration), and 81.39% (third iteration). The sequential enhancement of the focusing efficiency and the number of augmented data proves the effectiveness of the algorithm. than those of the basic GAN. Specifically, at the ninth iteration of the A3SA employed GAN, we find a multi-layer metalens design that demonstrates 92.93 % focusing efficiency, which is close to the theoretical maximum efficiency (\(\sim\)93.80%) of the third dark ring of the Airy disk [100; 101]. Figures 6 (c) and (d) show the normalized field intensities of the optimal structures discovered from the GAN and the combination of the GAN with the A3SA, respectively. The intensity plot in Fig. 6 (c) corresponds to a structure with a 60.03% efficiency, while Fig. 6 (d) corresponds to a 92.93% efficiency. At the target focal length indicated by a white dashed line, it is observable that the incident wave is more effectively focused in the A3SA-optimized multi-layer metalens. It implies that the A3SA combined with a machine-learning algorithm may pave a new way of designing ultra-high-efficiency photonic devices within a feasible amount of the simulations. Figure 6: (a) Schematics of the integration of A3SA with GAN. 1 We train both the generator (\(G\)) and discriminator (\(D\)), which constitute the GAN. The generator (\(G\)) synthesizes fake data (\(x_{gen}\)), and the discriminator (\(D\)) distinguishes if the data are from the true dataset (\(x_{original}\)) or the generator. 2 In each iteration, we select the top 40% of the devices generated from the generator. 3 We apply A3SA to enhance the input dataset, which is then utilized as training data (\(x_{original}\)) for the subsequent iteration of the process. An ablation setup is also illustrated as an upper circuit of the switch, which only involves 1 and 2. (b) Result of a comparative analysis performed between the A3SA algorithm (marked as A3SA) and its corresponding ablation study (marked as GAN). The dotted lines illustrate the average focusing efficiency of a dataset for each iteration of (a). At the final (ninth) iteration, A3SA displays 81.92% and GAN shows 41.57% of average focusing efficiency. (c) and (d) Illustration of the normalized field intensity profile of the optimized multi-layer metalenses by ‘GAN’ (c) and ‘A3SA’ (d), respectively. A white dotted line denotes the desired focal length. The field intensity profile in (d) validates the high focusing efficiency (92.93%) of the metalens structure generated by A3SA. The black (TiO\({}_{2}\)) and white (SU-8) structures indicate a cross-section of the optimized metalenses. Conclusion In this work, we have demonstrated a novel way of augmenting photonic device designs without running numerious simulations. 
The proposed A3SA algorithm is built on the principle of adjoint sensitivity analysis, forecasting changes in the figure of merit resulting from structural perturbations. By leveraging the gradient values, we can augment and label numerous new designs without additional computations. We validate the A3SA on both a free-form design problem and a multi-layer metalens design problem. In the former example, A3SA successfully generates new data within 1% prediction error and admits a possible data augmentation factor of 6.27\(\times\)10\({}^{652}\). In the multi-layer design problem, it generates more than 300 times the amount of original data while enhancing the device efficiency. After incorporating the proposed algorithm into a GAN, the optimized metalens exhibits a maximum focusing efficiency of 92.93%, comparable to the theoretical upper bound (93.80%). Our method opens a promising way of sidestepping two major hurdles of applying deep learning in photonics: the cost of data generation and a poor initial data distribution.
2308.15839
Utilizing Task-Generic Motion Prior to Recover Full-Body Motion from Very Sparse Signals
The most popular type of devices used to track a user's posture in a virtual reality experience consists of a head-mounted display and two controllers held in both hands. However, due to the limited number of tracking sensors (three in total), faithfully recovering the user in full-body is challenging, limiting the potential for interactions among simulated user avatars within the virtual world. Therefore, recent studies have attempted to reconstruct full-body poses using neural networks that utilize previously learned human poses or accept a series of past poses over a short period. In this paper, we propose a method that utilizes information from a neural motion prior to improve the accuracy of reconstructed user's motions. Our approach aims to reconstruct user's full-body poses by predicting the latent representation of the user's overall motion from limited input signals and integrating this information with tracking sensor inputs. This is based on the premise that the ultimate goal of pose reconstruction is to reconstruct the motion, which is a series of poses. Our results show that this integration enables more accurate reconstruction of the user's full-body motion, particularly enhancing the robustness of lower body motion reconstruction from impoverished signals. Web: https://mjsh34.github.io/mp-sspe/
Myungjin Shin, Dohae Lee, In-Kwon Lee
2023-08-30T08:21:52Z
http://arxiv.org/abs/2308.15839v1
# Utilizing Task-Generic Motion Prior

###### Abstract

The most popular type of devices used to track a user's posture in a virtual reality experience consists of a head-mounted display and two controllers held in both hands. However, due to the limited number of tracking sensors (three in total), faithfully recovering the user in full-body is challenging, limiting the potential for interactions among simulated user avatars within the virtual world. Therefore, recent studies have attempted to reconstruct full-body poses using neural networks that utilize previously learned human poses or accept a series of past poses over a short period. In this paper, we propose a method that utilizes information from a neural motion prior to improve the accuracy of reconstructed user's motions. Our approach aims to reconstruct user's full-body poses by predicting the latent representation of the user's overall motion from limited input signals and integrating this information with tracking sensor inputs. This is based on the premise that the ultimate goal of pose reconstruction is to reconstruct the motion, which is a series of poses. Our results show that this integration enables more accurate reconstruction of the user's full-body motion, particularly enhancing the robustness of lower body motion reconstruction from impoverished signals. Web: https://mjsh34.github.io/mp-sspe/.

## 1 Introduction

The technology of today's Mixed Reality (MR) has extended traditional interpersonal experiences into the virtual realm, from social gatherings and gaming to collaborative work, just to name a few. While these experiences have traditionally been limited to settings in which all participants are present in the same environment, MR systems instead rely on virtual avatars to simulate the experience and benefits of non-verbal communication. Unfortunately, the real-time data streams provided by commercial MR devices, which typically consist of a head-mounted display (HMD) and two hand-held controllers, each tracked in a small room-scale grid, are insufficient to accurately reproduce the full pose of the user, often resulting in MR environments where avatars only show head and hands. Studies have shown that a hand-only avatar provides little sense of embodiment [63, 26, 11], whereas a full-body avatar can significantly enhance the user experience by creating a better sense of embodiment and presence [26, 11, 16]. Tasked with the challenge of recovering the full-body posture of humans from sparse signal streams, previous works have attempted to reconstruct the human body from tracking signals from four or more joints including the pelvis [21, 62, 31, 66, 68, 69], and from ego-centric cameras [23, 71, 42, 72], which are unavailable on most MR devices at the present day. Recent works have attempted to reconstruct full-body poses from pose information alone from the HMD and handheld controllers [1, 9, 3, 24, 46, 65, 67]. However, when the reconstructed poses are combined to form a complete motion, they often lead to unnatural motions that fail to match the user's desired action. These systems also often fail to faithfully reproduce lower body motion beyond basic actions such as standing still and walking at various speeds. We propose a method effectively utilizing a task-generic neural motion prior [29, 15, 49, 55] (Section 2.4) aimed at solving the issues mentioned above. 
We exploit a generative motion prior model with an encoder-decoder architecture Figure 1: We present a method that utilizes a motion prior to encode the overall motion of a user for full-body pose reconstruction, using only the information of head and two hands. that is initially trained to reconstruct full-body motion while learning a latent space of human motions. We train a motion encoder to predict latent representations of motion from a sequence of sparse poses obtained from the three sources mentioned earlier, utilizing latent space learned by motion prior. Finally, we train a sequence (time-series) model that generates full-body pose from the sparse pose sequence and the latent representations of the overall motion. We achieve the following: * Our method utilizing a motion prior outperforms state-of-the-art methods in reconstructing a full-body pose at a single frame, and in reconstructing motions from combined full-body pose reconstructions from three tracking signals. We evaluate static pose reconstruction and full motion reconstruction performances using appropriate metrics. * We show that our method produces natural-looking motions which match the intended action of the underlying full-body motion. * Our method improves on reconstructing lower body motions which methods without any prior on motion struggle at. We show our model's superior performance against previous works using a diverse set of quantitative metrics (Table 1), user studies (Tables 2 and 3, and Figure 4), and qualitative evaluation (Figure 5). ## 2 Related Works ### Human Body Representations The SMPL model [32] represents the human body as a kinematic tree consisting of 24 joints using two parameters: \(\mathbf{\theta}\) and \(\mathbf{\beta}\), where \(\mathbf{\theta}\in\mathbb{R}^{24\times 3}\) represents the rotations of all 24 joints in the axis-angle representation, and \(\mathbf{\beta}\in\mathbb{R}^{10}\) represents the shape parameter that describes the body type derived via principle component analysis [32] for each gender (male, female, and neutral). We parametrize the full-body pose using the joint rotations in 6D form, which is a continuous representation of 3D rotation proposed by Yi et al. [76] to be effective in training neural networks (also used by previous works on the same task [9, 3, 24]). For training and inference, we use the neutral body with mean body shape: \(\mathbf{\beta}=\mathbf{0}\), disregarding variations in body shape, similar to the approach taken in previous works [9, 3, 24] which also do not consider body shape diversity. ### Full-Body Reconstruction from Various Signal Streams An abundance of literature is dedicated to the recovery of full-body pose from observations of various modalities such as images [4, 43, 73, 74], videos [39, 53, 70, 22], and sparsely-worn body trackers [62, 21, 69, 66, 68]. Notably, the last set of research, while similar to our problem setting, work with richer information by the tracking the pelvis at the very least and often the lower body as well [62, 21, 69, 68]. ### Full-Body Reconstruction from Head and Hands Unlike most previous works that focus on recovering full-body pose from sparse body-worn trackers introduced in Section 2.2, efforts have been made to make use of only three tracking signals, namely the positions and rotations of the head and hands, which most commercial MR devices provide [1, 9, 3, 24, 46, 65, 67]. 
We categorize the recent lines of work into four types: (1) Motion Matching: Given sparse observations, these works [1, 46] attempt to find the most fitting motion from a predefined animation database. Their primary goals lie not in precise reconstruction of the full body, but in accentuating and stylizing motion. (2) Physics-Based Simulation: Recently, QuestSim [65] and Neural3Points [67] have been proposed for simulating full-body avatars by predicting parameters of a physical simulation, rendering motions based on the laws of physics. In their studies [65, 67], the authors observed that while the synthesized motions are physically plausible, they can exhibit stiffness and unnaturalness. These models also encounter challenges when attempting to replicate complex lower-body motions that have low correlation with their corresponding upper-body movements. Moreover, they are susceptible to deviating from actual movements as errors in the physical simulation accumulate, and to falling over, in which case the simulation needs to be restarted. Neural3Points [67] attempts to mitigate the last issues by using a direct full-pose prediction model in conjunction, but still resorts to restarting the simulation if too many errors accumulate, compromising the realism and accuracy of the generated motions. The remaining two lines of work focus on directly predicting full-body pose at every frame. (3) Sequence (Time-Series) Model for Full-Body Pose Estimation: AvatarPoser [24] proposes using a Transformer Encoder [61] to parse a 40-frame sequence of sparse pose signals to predict the full-pose. (4) Generative Latent Space-Based Full-Body Pose Estimation: VAE-HMD [9] and FLAG [3] rely on the decoder of a pose prior to predict the full-pose given a latent code derived from sparse pose signals at the current frame or over a short sequence of past frames. For full-pose priors, VAE-HMD uses a \(\beta\)-VAE [18] and FLAG uses RealNVP [8] as encoder and decoder. To estimate latent codes from sparse signals (as a substitute for full-pose encoders), VAE-HMD optimizes a \(\beta\)-VAE objective with a new encoder, and FLAG employs a Transformer-based [61] predictor. We have found (3) sequence model-based methods to produce smoother motions than (4) generative latent space-based methods, while the latter tend to more accurately depict the full-body pose at a given frame. Our method integrates the approaches of (3) and (4), predicting the full-body pose using a sequence model while simultaneously making use of a generative latent space of motion to produce smooth and accurate full-body motions. We also utilize an explicit motion prior, as opposed to the static pose priors used by [9, 3]. 
Time-series models predict future motions based on past observations and are typically autoregressive, with HuMoR [49] being an example. Space-Time models, on the other hand, directly model the spatio-temporal kinematic state, often by taking in a whole motion as input at once [15, 29, 55]. We select MotionCLIP [55] for our full-body motion prior, a space-time motion prior with an auto-encoder [13] architecture. MotionCLIP learns to embed the input motion in the latent space of CLIP [48], a large-scale neural network trained jointly on image and text. The CLIP space has demonstrated its effectiveness for use in downstream tasks in various domains, such as image [10, 12, 44, 50], 3D [40, 51, 64], and human motion [55, 57, 75]. Furthermore, MotionCLIP achieves zero-shot action classification performance close to 2s-AGCN [52], a dedicated action classifier, demonstrating the latent space's ability to discriminate between different action types [55]. ## 3 Methods Our framework consists of the following components: _full motion prior_, _sparse motion encoder_, and _sequence model_. The full motion prior consists of full motion encoder and decoder (Figure 3), trained on full-pose motions to learn the motion latent space. We train this component first, followed by training sparse motion encoder. The goal of the sparse motion encoder is to predict the _motion latent_ (\(\mathbf{M}\)) in the space learned by our full motion prior, utilizing only sparse pose signals. Sparse motion encoder and sequence model are used directly for full-body pose estimation, the process visualized in Figure 2. We feed the sparse pose signals to the sparse motion encoder after augmentation, from which we extract the _motion embedding_ (\(\mathbf{E}\)), which is a compressed representation of motion. We then concatenate the motion embedding with the augmented sparse pose signals after normalization step. Finally, the concatenated sequence is input to sequence model for full-pose reconstruction. ### Input and Output Representations We represent the sparse pose signals at time \(t\) as \(\mathbf{x}_{t}\), defined as: \[\mathbf{x}_{t}=[\mathbf{g}_{t}^{\mathbf{3}},\mathbf{r}_{t}^{\mathbf{3}}], \tag{1}\] where \(\mathbf{g}_{t}^{\mathbf{3}}\) and \(\mathbf{r}_{t}^{\mathbf{3}}\) respectively represent the global positions (3D) and the rotations in 6D form [76] of head and hands, as would be provided by the MR device. From this data, we derive _sparse motion signals_ at time \(t\) denoted \(\mathbf{X}_{t}\), and a _sparse motion sequence_ of length \(T\) at time \(t\) denoted \(\mathbf{X}_{t-T+1:t}\), each defined as: \[\mathbf{X}_{t}=[\mathbf{g}_{t}^{\mathbf{3}},\mathbf{g}_{t}^{\mathbf{3}}, \mathbf{r}_{t}^{\mathbf{3}};\mathbf{\hat{r}}_{t}^{\mathbf{3}}]\in\mathbb{R}^{ 54}, \tag{2}\] \[\mathbf{X}_{t-T+1:t}=[\mathbf{X}_{t-T+1},\mathbf{X}_{t-T+2},...,\mathbf{X}_{t }]\in\mathbb{R}^{54\times T}, \tag{3}\] where \(\mathbf{\hat{g}}_{t}^{\mathbf{3}}\) and \(\mathbf{\hat{r}}_{t}^{\mathbf{3}}\) respectively represent the velocities and angular velocities in 6D form derived from \(\mathbf{g}_{t}^{\mathbf{3}}\) and \(\mathbf{r}_{t}^{\mathbf{3}}\) as done in AvatarPoser [24]. This process is represented as "Input Augmentation" in Figure 2. The global position \(\mathbf{g}_{t}^{\mathbf{3}}\) depends on an origin point decided by the MR device, which can be arbitrary. 
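As a concrete reading of Eqs. (2)-(3), the "Input Augmentation" step could be sketched as below. The array layout and the finite-difference velocity computation are our assumptions; in particular, the 6D angular velocities are approximated here by simple frame differences of the 6D rotations, which is a simplification of the derivation used in AvatarPoser.

```python
import numpy as np

def augment_inputs(g, r):
    """Build the sparse motion signals X_t of Eq. (2) for a whole sequence.

    g : (T, 9)  global positions of head and both hands (3 joints x 3D)
    r : (T, 18) absolute rotations of head and both hands in 6D form (3 joints x 6D)
    Returns X of shape (T-1, 54): positions, velocities, rotations, rotation rates.
    """
    v = g[1:] - g[:-1]            # positional velocities (finite differences)
    w = r[1:] - r[:-1]            # crude stand-in for the 6D angular velocities
    return np.concatenate([g[1:], v, r[1:], w], axis=-1)   # 9 + 9 + 18 + 18 = 54
```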
We counteract the randomness of input by "normalizing" the global positions \(\mathbf{g}_{t}^{\mathbf{3}}\) in the horizontal axes (\(x\) and \(z\) axes) as follows: \[\mathbf{g}_{t}^{\mathbf{3}}=[\mathbf{g}_{t}^{head},\mathbf{g}_{t}^{lhand},\mathbf{g}_{t}^{rhand}], \tag{4}\] \[\mathbf{g}_{t}.xz=\Big{(}\frac{\mathbf{g}_{t}^{head}+\mathbf{g}_{t}^{lhand}+\mathbf{g}_{t}^{rhand}}{3}\Big{)}.xz, \tag{5}\] \[\mathbf{n}_{t}^{j}.xz=(\mathbf{g}_{t}^{j}-\mathbf{g}_{t}).xz,\quad\mathbf{n}_{t}^{j}.y=\mathbf{g}_{t}^{j}.y,\quad\forall j\in\{head,lhand,rhand\}, \tag{6}\] \[\mathbf{n}_{t}^{\mathbf{3}}=[\mathbf{n}_{t}^{head},\mathbf{n}_{t}^{lhand},\mathbf{n}_{t}^{rhand}], \tag{7}\] where \(\mathbf{n}_{t}^{\mathbf{3}}\) denotes the normalized positions of the three joints, which have zero mean along the \(x\) and \(z\) axes. This process is represented as "Global Pos Normalization" in Figure 2. We found empirically that the information lost by using normalized global positions on the horizontal axes is compensated for by the other inputs obtained via augmentation, and that the normalization step helps the sequence model produce more stable motions. We did not observe the same benefit while training the sparse motion encoder, so we apply normalization only before the sequence model. The output consists of relative rotations (6D form) for 22 SMPL [32] joints, which can be used to recover the entire kinematic tree up to both wrists via forward kinematics (FK).

### Full Motion Prior Pretraining

The full motion prior denotes a motion prior whose encoder and decoder are trained on full-pose sequences, i.e., full motions. We use MotionCLIP [55], which is a full-body motion auto-encoder [13] exploiting the powerful latent space of CLIP [48]. As visualized in Figure 3, when the full 60-frame pose sequence is input to the encoder (based on the Transformer Encoder architecture [61]), denoted _full motion encoder_, the output is a latent vector lying in CLIP space, denoted _motion latent_ (M). The decoder (based on the Transformer Decoder architecture [61]), denoted _full motion decoder_, aims to reconstruct the same full motion from the motion latent. The loss \(\mathcal{L}_{fm}\) used to train the full motion prior is formulated as follows: \[\mathcal{L}_{fm}=\mathcal{L}_{recon}+\lambda_{text}\mathcal{L}_{text}+\lambda_{image}\mathcal{L}_{image}, \tag{8}\] \[\mathcal{L}_{text}=1-\cos(\textit{CLIP}_{text}(t),\mathbf{M}), \tag{9}\] \[\mathcal{L}_{image}=1-\cos(\textit{CLIP}_{image}(s),\mathbf{M}), \tag{10}\] where \(\mathcal{L}_{recon}\) is the reconstruction loss of the full motion, and \(\mathbf{M}\in\mathbb{R}^{512}\) represents the motion latent, which is the output of the full motion encoder. \(\mathcal{L}_{text}\) and \(\mathcal{L}_{image}\) are the cosine distances from the motion latent \(\mathbf{M}\) to its corresponding text projection \(\textit{CLIP}_{text}(t)\) and image projection \(\textit{CLIP}_{image}(s)\), respectively. Instead of learning a new latent space, as a variational auto-encoder [27] would do, the CLIP space projections of the text labels, \(\textit{CLIP}_{text}(t)\), and of the rendered images, \(\textit{CLIP}_{image}(s)\), corresponding to each motion (both of which are part of the dataset) are used to guide the motion latents to lie close to their corresponding text and image projections in the same space. We train with the configuration named "paper_model" [56]. This module is trained before all else. 
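As a concrete reading of Eqs. (8)-(10), the full motion prior loss might be assembled as below. This is only a schematic PyTorch sketch: the reconstruction term, the precomputed CLIP text and image embeddings, and the default weights are stand-ins and do not reproduce MotionCLIP's actual "paper_model" configuration.

```python
import torch
import torch.nn.functional as F

def full_motion_prior_loss(motion, recon, M, clip_text, clip_image,
                           lam_text=1.0, lam_image=1.0):
    """Eqs. (8)-(10): reconstruction plus cosine alignment to CLIP space.

    motion, recon : (B, T, J, 6)  ground-truth and reconstructed motions
    M             : (B, 512)      motion latents from the full motion encoder
    clip_text     : (B, 512)      CLIP projections of the text labels
    clip_image    : (B, 512)      CLIP projections of the rendered frames
    """
    l_recon = F.mse_loss(recon, motion)                                 # L_recon
    l_text = (1 - F.cosine_similarity(M, clip_text, dim=-1)).mean()     # Eq. (9)
    l_image = (1 - F.cosine_similarity(M, clip_image, dim=-1)).mean()   # Eq. (10)
    return l_recon + lam_text * l_text + lam_image * l_image            # Eq. (8)
```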
### Estimating Motion Latent from Sparse Motion Sequence We train the sparse motion encoder to estimate the motion latent from sparse motion sequence \(\mathbf{X}_{t-T+1:t}\). The architecture is adapted from full motion encoder's Transformer Encoder [61] having the linear layer before the Transformer Encoder modified to accept \(\mathbf{X}_{t-T+1:t}\in\mathbb{R}^{54\times T}\). We use \(T=60\) which is the same motion length used by the full motion prior. We keep the full motion decoder and keep its weights frozen when we train the auto-encoder consisting of sparse motion encoder and full motion decoder (Figure 3) with loss \(\mathcal{L}_{sm}\) as follows: \[\mathcal{L}_{sm}=\lambda_{text}^{*}\mathcal{L}_{text}^{*}+\lambda_{image}^{* }\mathcal{L}_{image}^{*}, \tag{11}\] \[\mathcal{L}_{text}^{*}=1-\cos(\textit{CLIP}_{text}(t),\mathbf{M}^{*}), \tag{12}\] Figure 3: **Motion Prior.** After pretraining full motion prior, sparse motion encoder is trained to predict the same motion latent as the full motion encoder. Figure 2: **Model Overview.** Sparse pose signals are fed to sparse motion encoder after augmentation to derive a motion embedding. Motion embeddings are combined with normalized augmented signals and input the sequence model for full-pose reconstruction. We tried different network architectures during development, and readers may refer to the supplementary material for details and experimental results. \[\mathcal{L}_{image}^{*}=1-\cos(\textit{CLIP}_{image}(s),\mathbf{M}^{*}), \tag{13}\] where \(\mathbf{M}^{*}\in\mathbb{R}^{512}\) denotes the motion latent predicted by the sparse motion encoder. We set \(\lambda_{text}^{*}=\lambda_{image}^{*}=0.01\). This module is trained after the full motion prior. ### Sequence Model to Reconstruct Full-Pose from Sparse Motion Sequence and Motion Latent Sequence model takes as input the length-\(S\) sparse motion sequence \(\mathbf{X}_{t-S+1:t}\) and the corresponding sequence of _motion embeddings_\(\mathbf{E}_{t-S+1:t}=[\mathbf{E}_{t-S+1},\mathbf{E}_{t-S+2},...,\mathbf{E}_{t}]\) as input. A motion embedding \(\mathbf{E}_{t}\in\mathbb{R}^{64}\) is derived by passing the predicted motion latent \(\mathbf{M}_{t}^{*}\in\mathbb{R}^{512}\) through a linear layer to retrieve a compressed 64-dimensional representation of the motion. Then, \(\mathbf{X}_{t-S+1:t}\) and \(\mathbf{E}_{t-S+1:t}\) are concatenated along the time axis, to be input to the sequence model. 3-Layer LSTM [19] is our choice of sequence model, which outputs a single full-body pose at time \(t\) given a sequence of inputs of length \(S\) from time \(t-S+1\) to \(t\). We represent the full-pose as the 6D relative rotation values of 22 joints of the SMPL model, from which we can recover the absolute rotations \(\hat{\mathbf{r}}_{t}^{\mathbf{22}}\) and the body root-relative positions \(\hat{\mathbf{p}}_{t}^{\mathbf{22}}\) of 22 joints via FK. The loss \(\mathcal{L}_{seq}\) is computed as the weighted sum of rotational loss \(\mathcal{L}_{rot}\), positional loss \(\mathcal{L}_{pos}\), velocity loss \(\mathcal{L}_{vel}\) (ablation for \(\mathcal{L}_{vel}\) in supplementary), and motion loss \(\mathcal{L}_{mo}\). \(\mathcal{L}_{rot}\), \(\mathcal{L}_{pos}\), and \(\mathcal{L}_{vel}\) encourage accurate full-pose reconstruction at every frame, and are computed as the L2 norm between the predicted and corresponding GT values, respectively. 
The motion loss \(\mathcal{L}_{mo}\) encourages the model to learn the correct motion given the consecutive 60-frame full-pose predictions \(\hat{\mathbf{p}}_{t-60+1:t}^{\mathbf{22}}\), which are passed through a full motion encoder to obtain motion latent \(\hat{\mathbf{M}}_{t}\). We also obtain the ground truth motion latent \(\mathbf{M}_{t}\) with the corresponding ground truth motion \(\mathbf{p}_{t-60+1:t}^{\mathbf{22}}\) via the same full motion encoder. While we use the pretrained full motion encoder in Section 3.2 for this purpose, a different full motion encoder could substitute it. We then compute the cosine distance between the two motion latents to obtain motion loss: \[\mathcal{L}_{mo}=1-\cos(\hat{\mathbf{M}}_{t},\mathbf{M}_{t}). \tag{14}\] Finally, the total loss of the sequence model \(\mathcal{L}_{seq}\) is computed as follows: \[\mathcal{L}=\lambda_{rot}\mathcal{L}_{rot}+\lambda_{pos}\mathcal{L}_{pos}+ \lambda_{vel}\mathcal{L}_{vel}+\lambda_{mo}\mathcal{L}_{mo}. \tag{15}\] We set the coefficients \(\lambda_{rot}=\lambda_{pos}=\lambda_{vel}=1.0,\lambda_{mo}=0.1\). Moreover, we found freezing the sparse motion encoder's weights to yield better results (Section 5), and to allow preprocessing the motion latents in advance for faster training. ## 4 Experimental Results ### Data Preparation and Network Training We train and test all our models on the AMASS [37] dataset, a large-scale human motion dataset parametrized by the SMPL model [32]. Since AMASS contains motion capture data with varying frame rates, we downsample each mocap data to be close to 30 FPS. From AMASS, we extract the head (joint index 15) and two wrist joints (joints indices 20 and 21) and derive their root-relative positions via FK, followed by adding translation to simulate global position signals given by MR devices. We also derive the absolute rotations of head and hands to simulate rotation signals. Data are then ready to be processed by the procedure described in Section 3.1. All of our model components and baselines share the same input and output format (n.b., while VAE-HMD [9] was originally tested given pelvis-relative positions as input, we input global positions to reflect the signals from MR devices). We use the AMASS subset consisting of BMLrub [58], EyesJapan-Dataset [34], TotalCapture [59], KIT [38], ACCAD [38], CMU [5], PosePrior [2], TCDHands [20], EKUT for training and set aside HumanEva [54], HDM05 [41], SFU [60], MoSh [31], Transitions, SSM for evaluation. For training motion priors, we additionally use BABEL [47], dataset containing per-frame action labels corresponding to a large portion of AMASS for the text labels, and images are rendered via MotionCLIP's official open-source implementation [56]. The full motion prior and the sparse motion encoder each takes 10 hours, and the sequence model 5 hours after preprocessing the motion latents via the trained sparse motion encoder (possible because sparse motion encoder is kept frozen during sequence model training) which takes about an hour, for a total 26 hours for full training on a single NVIDIA RTX 2080 Ti. For the baseline models, we followed the setup described in the original papers [9, 24] as closely as possible. For AvatarPoser, we use the official open-source implementation [25]. For VAE-HMD [9], which has no open-source implementation available, we implemented their best performing model according to the original paper which contains a pretrained pose prior component. 
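The data preparation described above (downsampling AMASS to roughly 30 FPS and extracting the head and wrist joints) can be sketched as follows. The SMPL joint indices are those quoted in the text; the array layout, the source of the absolute 6D rotations, and the crude integer-stride downsampling are our own assumptions.

```python
import numpy as np

HEAD, LWRIST, RWRIST = 15, 20, 21      # SMPL joint indices used in the paper

def make_sparse_signals(joint_pos, joint_rot6d, trans, src_fps, target_fps=30):
    """Build simulated HMD/controller signals from an AMASS-style sequence.

    joint_pos   : (T, 24, 3) root-relative joint positions from forward kinematics
    joint_rot6d : (T, 24, 6) absolute joint rotations in 6D form
    trans       : (T, 3)     global root translation
    """
    step = max(1, int(round(src_fps / target_fps)))    # crude downsampling to ~30 FPS
    idx = np.arange(0, joint_pos.shape[0], step)
    sel = [HEAD, LWRIST, RWRIST]
    g = joint_pos[idx][:, sel] + trans[idx][:, None]   # global positions of 3 trackers
    r = joint_rot6d[idx][:, sel]                       # absolute 6D rotations
    return g.reshape(len(idx), -1), r.reshape(len(idx), -1)   # (T', 9), (T', 18)
```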
Note that we selected a much wider subset of AMASS for training and testing than the original works [9, 24]. Refer to the supplementary material for more details about training and baseline implementations. ### Quantitative Evaluation We quantitatively evaluate our model using a diverse set of metrics against two baselines: AvatarPoser [24] and VAE-HMD [9]. The quantitative results are presented in Table 1. Originally, AvatarPoser and VAE-HMD had window sizes of 40 frames and 16 frames, respectively. To ensure fairness, we additionally compare against AvatarPoser and VAE-HMD each adapted to have a 60-frame window size, which is the same as the window size that our motion prior sees. The adapted versions are labeled AvatarPoser-60 and VAE-HMD-60 in Table 1 (further results and analyses in supplementary). **Per-Joint Errors**. We use four per-joint error metrics to evaluate our approach: MPJPE (mean per-joint position error [cm]), Legs MPJPE [cm], Global MPJPE [cm], and MPJVE (mean per-joint velocity error [cm/s]). Global MPJPE is computed by first mapping the predicted joints to global space, which involves combining GT head position (given by MR device) with the predicted joint rotations of the full body. The results in Table 1 demonstrate that our approach outperforms baselines on the majority of the metrics evaluated, rivalled only by VAE-HMD on Legs MPJPE. Since the sparse pose signals only contain direct information about the upper body, accurate reconstruction of legs is challenging especially when leg motions have low correlation with co-occurring upper body motion. Use of a motion prior generally results in lower error in leg motions, which can also be seen in Section 5 and in Figure 5 qualitatively. VAE-HMD contains a decoder component of a pre-trained auto-encoder whose weights are frozen, and during pretraining, it receives full-body motions as input and learns a prior, which helps reconstruct some difficult motions, as evidenced by the legs MPJPE metric being as low as ours. However, VAE-HMD suffers from relatively high velocity error (MPJVE) compared to other methods. **Motion-Related Errors**. Motion distance measures the difference between the overall motion (spanning 60 frames), between the GT motion and predicted motion calculated the same way as Equation 14. For this evaluation we use a different motion prior from one we optimized with, namely "classes_model" of MotionCLIP [55]). Although classes_model's motion latents also lie in CLIP space [48], different training parameters and dataset are used to train, resulting in different motion latents being predicted (Also, the text and images' CLIP projections do not align [55], so different motion latent spaces are learned depending on training configuration). FID (Fretchet Inception Distance [17]) measures the similarity between the distributions of ground truth motions and generated motions, the lower the more similar. While FID has been used widely in the context of generated human motions [14, 45, 30, 15], our work is the first to use it for full-body motions reconstructed from sparse signals. Our model achieves the lowest FID, followed by AvatarPoser variants, and VAE-HMD-60 with the highest FID. These results are consistent with the findings from our user studies (Section 4.3) where we evaluate the quality of generated motions. and VAE-HMD [9] for each segment were rendered using SMPL Blender addon [33]. **User Study I**. 
To compare our model with one of the baselines side-by-side, we placed ground truth animation on top and juxtaposed our model's prediction and one of AvatarPoser and VAE-HMD's animations at the bottom (latter's order randomized) for the same underlying motion. We informed participants that the two animations at the bottom are different reconstructions of the top animation from partial information, and we asked to choose (1) a better reconstruction among the two (preference), and (2) one whose motion matched the ground truth motion better, with "neutral" option added for the latter. The results for (1) can be found in Table 2. The table shows participants' preference towards our model's predictions for both sets, with more pronounced results with Hard Set. This demonstrates our model's ability to better reconstruct motions that the baseline, which does not make use of a motion prior, struggles with. Moreover, participants clearly preferred our model's predictions over VAE-HMD's, noting that VAE-HMD's animations looked unnatural primarily due to jitter. This is consistent with the high velocity error (MPJVE) measured in Section 4.2. The results of (2) can be found in Table 3, where we evaluated the motion matching capability of our model in addition to the quantitative motion distance metric in Section 4.2. The results show similar trends as (1). **User Study II**. We played all four animations with the same underlying motion simultaneously, ordered randomly, asking participants to rate the naturalness of each motion on a scale of 1 to 7, 7 being the highest (participants were allowed to replay animations as they desired). From the mean scores plotted in Figure 4, we can observe that users found GT motions the most natural, followed by generations from our model's predictions, then AvatarPoser and VAE-HMD, in that order. While the scores for predicted motions for Hard Set fall behind those for Random Set, participants found the motions generated via our model more natural than other models' generations for both sets. ## 5 Ablation Studies We conduct ablation studies by removing or modifying different subcomponents, the main quantitative results shown in Table 4. We first assess the role of the motion prior component in our architecture by completely removing it from our model, leaving only the input processing and sequence model components (see Figure 2 for reference). The results in "No Motion Prior" row in Table 4 show degradation in values for all quantitative metrics. (This model was used to curate the Hard Set for the user study, as explained in Section 4.3.) Additionally, we group all motion segments in the test dataset by action types defined in BABEL [47], and sort them by improvement of Legs MPJPE and MPJPE respectively. We present the top 5 and bottom 5 action types in Table 5, left column showing top 5 and right column showing bottom 5. Top 5 improved action types for both metrics consist of actions involving a high amount of leg motions 1, showing that the motion prior contributes to better reconstruction of leg motions given only upper body signals. We can also observe degradation from not using the motion distance loss ("No Motion Distance Loss") and from unfreezing the motion prior ("With Finetuned Motion Prior"). Footnote 1: A list of BABEL [47] action subtypes corresponding to each action type can be found in [36]. We experimented with various task-generic motion priors during development and settled on MotionCLIP as it gave the best overall performance. 
The final row of Table 4 shows the result of using a Transformer VAE-based [61, 27] motion prior whose architecture is based on ACTOR [45], from which we removed action conditioning part to have an unconditional motion VAE [27]. ## 6 Conclusions and Limitations We present a method of utilizing motion prior to effectively reconstruct full-body motion from impoverished signals of pose. Our method recovers intended full-body motions that look natural, with improved lower body over baselines. However, our work only considers a single body type, and we wish to allow people of diverse body shapes to utilize our system effectively in a future work. Moreover, we sometimes observe footsliding artifacts from generated motions, and we wish to measure their severity and eliminate them. Figure 4: **Result of User Study II: Naturalness of Motions. Scores range from 1 (worst) to 7 (best).** \begin{table} \begin{tabular}{|l||l l l l|l l|} \hline & \multicolumn{4}{|c|}{Per-Joint Errors} & \multicolumn{2}{|c|}{Motion-Related Statistics} \\ \hline Method & MPJPE & \begin{tabular}{l} Legs \\ MPJPE \\ \end{tabular} & \begin{tabular}{l} Global \\ MPJPE \\ \end{tabular} & \begin{tabular}{l} MPJVE \\ \end{tabular} & \begin{tabular}{l} Motion \\ Distance \(\downarrow\) \\ \end{tabular} & \begin{tabular}{l} FID \(\downarrow\) \\ \end{tabular} \\ \hline \hline Ours & **7.25** & 9.34 & **7.38** & **25.42** & \(\mathbf{5.12\cdot 10^{-3}}\) & \(\mathbf{6.03\cdot 10^{-2}}\) \\ \hline No Motion Prior & 7.37 & 9.67 & 7.62 & 26.22 & \(5.52\cdot 10^{-3}\) & \(7.54\cdot 10^{-2}\) \\ No Motion Distance Loss & 7.32 & 9.43 & 7.45 & 25.71 & \(5.31\cdot 10^{-3}\) & \(7.08\cdot 10^{-2}\) \\ With Finetuned Motion Prior & 7.39 & 9.77 & 7.67 & 26.10 & \(5.28\cdot 10^{-3}\) & \(6.24\cdot 10^{-2}\) \\ With a Different Motion Prior & 7.29 & **9.27** & 7.41 & 26.15 & \(5.15\cdot 10^{-3}\) & \(6.20\cdot 10^{-2}\) \\ \hline \end{tabular} \end{table} Table 4: **Ablation Studies: Main Quantitative Results** Figure 5: **Qualitative Results**. We show qualitative results on difficult motions with less common lower body movements. Left Column: Kicking Motion (Transitions/mazen_c3d/kick_push_poses [37]). Right Column: Moonwalking (Transitions/mazen_c3d/rrun_stand_poses [37]). \begin{table} \begin{tabular}{|l||l|} \hline Improved Action Type & \multicolumn{1}{l|}{Degraded Action Type} \\ \hline \hline \multicolumn{3}{|l|}{Legs MPJPE Improvement/Degradation} \\ \hline **knee movement** & \multicolumn{1}{l|}{place something} \\ **cartwheel** & \multicolumn{1}{l|}{grasp object} \\ **crouch** & \multicolumn{1}{l|}{**poses**} \\ **squat** & \multicolumn{1}{l|}{stretch} \\ bend & \multicolumn{1}{l|}{face direction} \\ \hline \multicolumn{3}{|l|}{MPJPE Improvement/Degradation} \\ \hline **cartwheel** & \multicolumn{1}{l|}{face direction} \\ **shuffle** & \multicolumn{1}{l|}{place something} \\ **knee movement** & \multicolumn{1}{l|}{**lean**} \\ throw & \multicolumn{1}{l|}{take/pick something up} \\ **touch ground** & \multicolumn{1}{l|}{shout} \\ \hline \end{tabular} \end{table} Table 5: **Ablation Studies: Action Types [47] Improved by Using Motion Prior.** Actions containing high amount of leg motions are in bold. Our model’s top improvements lie in actions containing much leg motions.
2303.06289
Machine Learning Enhanced Hankel Dynamic-Mode Decomposition
While the acquisition of time series has become more straightforward, developing dynamical models from time series is still a challenging and evolving problem domain. Within the last several years, to address this problem, there has been a merging of machine learning tools with what is called the dynamic mode decomposition (DMD). This general approach has been shown to be an especially promising avenue for accurate model development. Building on this prior body of work, we develop a deep learning DMD based method which makes use of the fundamental insight of Takens' Embedding Theorem to build an adaptive learning scheme that better approximates higher dimensional and chaotic dynamics. We call this method the Deep Learning Hankel DMD (DLHDMD). We likewise explore how our method learns mappings which tend, after successful training, to significantly change the mutual information between dimensions in the dynamics. This appears to be a key feature in enhancing the DMD overall, and it should help provide further insight for developing other deep learning methods for time series analysis and model generation.
Christopher W. Curtis, D. Jay Alford-Lago, Erik Bollt, Andrew Tuma
2023-03-11T02:56:29Z
http://arxiv.org/abs/2303.06289v3
# Machine Learning Enhanced Hankel ###### Abstract While the acquisition of time series has become more straightforward, developing dynamical models from time series is still a challenging and evolving problem domain. Within the last several years, to address this problem, there has been a merging of machine learning tools with what is called the dynamic mode decomposition (DMD). This general approach has been shown to be an especially promising avenue for accurate model development. Building on this prior body of work, we develop a deep learning DMD based method which makes use of the fundamental insight of Takens' Embedding Theorem to build an adaptive learning scheme that better approximates higher dimensional and chaotic dynamics. We call this method the Deep Learning Hankel DMD (DLHDMD). We likewise explore how our method learns mappings which tend, after successful training, to significantly change the mutual information between dimensions in the dynamics. This appears to be a key feature in enhancing the DMD overall, and it should help provide further insight for developing other deep learning methods for time series analysis and model generation. **This work uses machine learning to develop an accurate method for generating models of chaotic dynamical systems using measurements alone. A number of challenging examples are examined which show the broad utility of the method and point towards its potential impacts in advancing data analysis and modeling in the physical sciences. Finally, we present quantitative studies of the information theoretic behavior of the machine learning tools used in our work, thereby allowing for a more detailed understanding of what can otherwise be an inscrutable method.** Introduction The incorporation of modern machine learning methodology into dynamical systems is creating an ever expanding array of techniques pushing the boundaries of what is possible with regards to describing nonlinear multi-dimensional time series. Longstanding problems such as finding optimal Takens' embeddings [1, 2] now have powerful and novel deep learning based algorithmic approaches [3] which would not have been feasible even ten years ago. Likewise, the field of equation free modeling using Koopman operator methods, broadly described by Dynamic Mode Decomposition (DMD), has seen several innovative deep learning based methods emerge over the last several years [4, 5, 6] which have been shown to greatly expand the accuracy and flexibility of DMD based approaches. There have also been related and significant advances in model identification and solving nonlinear partial differential equations via deep learning techniques [7, 8, 9, 10]. With this background in mind, in this work we focus on extending the methods in [6] which were called Deep Learning DMD (DLDMD). In that work, a relatively straightforward method merging auto-encoders with the extended DMD (EDMD) was developed. This was done by using an encoder to embed dynamics in a sufficiently high enough dimensional space which then generated a sufficiently large enough space of observables for the EDMD to generate accurate linear models of the embedded dynamics. Decoding then returned the embedded time series to the original variables in such a way as to guarantee the global stability of iterating the linear model to generate both reconstructions and forecasts of the dynamics. 
The DLDMD was shown to be very effective in finding equation-free models which were able to both reconstruct and then forecast from data coming from planar dynamical systems. However, when chaotic time series from the Lorenz-63 system were examined, the performance of the DLDMD was found to degrade. While this clearly makes the DLDMD approach limited in its scope, we note that DMD based approaches which accurately reconstruct or forecast chaotic dynamics are not readily available. Other methods such as HAVOK [11] or SINDy [12] are more focused on the analysis of chaotic time series or the discovery of model equations which generate chaotic dynamics, though of course if one has an accurate model, then one should be able to generate accurate forecasts. In this vein, there are also methods using reservoir computing (RC) [13, 14], though again, nonlinear models are essentially first learned and then used to generate forecasts. However, both SINDy and RC rely on proposing libraries of terms to build models which are then fit to (or learned from) data. While effective, such approaches do not allow for the spectral or modal analysis which has proven to be such an attractive and useful feature of DMD based methods. Likewise, they require a number of user decisions about how to construct the analytic models used in later regressive fitting that amount to a guess-and-check approach to generating accurate reconstructions and forecasts. Therefore in this work, using insights coming from the Takens' Embedding Theorem (TET) [15; 3], we expand the DLDMD framework so as to make it accurate in generating both reconstructions and forecasts of chaotic time series. This is done by first making the EDMD over embedded coordinates global, as opposed to the local approach of [6]; see also [4; 5]. Second, we develop an adaptive Hankel matrix based ordering of the embedded coordinates which adds more expressive power for approximating dynamics to the deep learning framework. To study our method, we use data generated by the Lorenz-63 and Rossler systems as well as twelve-dimensional projections of data from the Kuramoto-Sivashinsky (KS) equation. In all of these cases, we show that by combining our proposed modifications to the DLDMD we are able to generate far more accurate reconstructions for chaotic systems than with the DLDMD alone. Moreover, we have built a method which still allows for the straightforward modal analysis which DMD affords and keeps user choices to a handful of real-valued hyperparameters while still producing results competitive with other approaches in the literature. Further, motivated by the classic information theory (IT) studies of the TET [1], as well as modern insights into the role that information plays in deep learning [16; 17], we study how the fully trained encoder changes the information content of the dynamics coming from the Lorenz-63 and Rossler systems. For the Lorenz-63 system, the encoder tends either to slightly decrease the mutual information or to cause strong phase shifts which decrease the coupling times across dimensions. However, the characteristic timescales corresponding to lobe switching in the Lorenz 'butterfly' are clearly seen to be preserved in the dynamics of the information for the Lorenz-63 system. In contrast then, for the Rossler system, the slow/fast dichotomy in the dynamics seen in the original coordinates is made more uniform so that rapid transients in the information coupling are removed by the encoder. 
Thus in either case, we see that the encoder generates significant differences in the information content between dimensions in the latent coordinates relative to the original ones, and that this strong change in information content is a critical feature in successful training. Of course, the present work is ultimately preliminary, and there are a number of important questions left to be resolved. First, while we are able to easily display computed spectra, the affiliated global Koopman modes we find are not as straightforward to show. We generate our results from random initial conditions, so the most effective means of constructing the global Koopman modes would be via radial-basis functions, but the implementation would be nontrivial due to the infamous ill-conditioning issues which can plague the approach [18]. Second, there is a clear need for a comparison across SINDy, RC, and our DLHDMD methods. In particular, the present work generates excellent reconstructions and thus modal decompositions, but learning a method which generates accurate longer time novel predictions beyond the given data has proven too challenging thus far. How well other methods address this issue relative to their reconstruction and other diagnostic properties, and how all of these methods compare in these several different ways, is as yet unclear. While acknowledging then the limitations of the present work, we defer addressing these issues to later works where each can be dealt with in the detail that is needed. The structure of this paper is as follows. In Section 2, we provide an introduction to the Extended DMD and then explain the extensions we develop which are critical to the success of the present work. In Section 3, we introduce the Hankel DMD and, incorporating the extensions introduced in Section 2, we show how well it does and does not perform on several examples. Then in Section 4 we introduce the Deep Learning Hankel DMD and provide results on its performance. Section 5 presents our analysis of how the mutual information changes in the latent variables. Section 6 provides our conclusions and discussion. ## 2 Extended Dynamic Mode Decomposition To begin, we suppose that we have the data set \(\left\{\mathbf{y}_{j}\right\}_{j=1}^{N_{T}+1}\) where \[\mathbf{y}_{j}=\varphi(t_{j};\mathbf{x}),\ t_{j+1}=t_{j}+\delta t,\ \mathbf{x}\in\mathbb{R}^{N_{s}}\] where \(\delta t\) is the time step at which data is sampled and \(\varphi(t;\mathbf{x})\) is a flow map such that \(\varphi(t_{1};\mathbf{x})=\mathbf{x}\). From the flow map, we define the affiliated _Koopman operator_ \(\mathcal{K}^{t}\) such that for a given scalar observable \(g(\mathbf{x})\), one has \[\mathcal{K}^{t}g(\mathbf{x})=g(\varphi(t,\mathbf{x})),\] so that the Koopman operator linearly tracks the evolution of the observable along the flow. We likewise define the associated Hilbert space of _observables_, say \(L_{2}\left(\mathbb{R}^{N_{s}},\mathbb{R},\mu\right)\), or more tersely as \(L_{2}\left(\mathcal{O}\right)\), so that \(g\in L_{2}\left(\mathcal{O}\right)\) if \[\int_{\mathbb{R}^{N_{s}}}\left|g(\mathbf{x})\right|^{2}d\mu\left(\mathbf{x}\right)<\infty,\] where \(\mu\) is some appropriately chosen measure. 
This makes the infinite-dimensional Koopman operator \(\mathcal{K}^{t}\) a map such that \(\mathcal{K}^{t}:L_{2}\left(\mathcal{O}\right)\to L_{2}\left(\mathcal{O} \right).\) Following [19, 20], given our time snapshots \(\left\{\mathbf{y}_{j}\right\}_{j=1}^{N_{T}+1}\), we suppose that any observable \(g(\mathbf{x})\) of interest lives in a finite-dimensional subspace \(\mathcal{F}_{D}\subset L_{2}\left(\mathcal{O}\right)\) described by a given basis of observables \(\left\{\psi_{l}\right\}_{l=1}^{N_{ob}}\) so that \[g(\mathbf{x})=\sum_{l=1}^{N_{ob}}a_{l}\psi_{l}\left(\mathbf{x}\right).\] Given this ansatz, we then suppose that \[\mathcal{K}^{\delta t}g(\mathbf{x})= \sum_{l=1}^{N_{ob}}a_{l}\psi_{l}\left(\varphi\left(\delta t,\mathbf{ x}\right)\right)\] \[= \sum_{l=1}^{N_{ob}}\psi_{l}(\mathbf{x})\left(\mathbf{K}_{a}^{T} \mathbf{a}\right)_{l}+r(\mathbf{x};\mathbf{K}_{a})\] where \(r(\mathbf{x};\mathbf{K}_{a})\) is the associated error which results from the introduction of the finite-dimensional approximation of the Koopman operator represented by \(\mathbf{K}_{a}\). We can then find \(\mathbf{K}_{a}\) by solving the following minimization problem \[\mathbf{K}_{a}= \text{arg}\,\min_{\mathbf{K}}\left|r(\mathbf{x};\mathbf{K}) \right|^{2} \tag{1}\] \[= \text{arg}\,\min_{\mathbf{K}}\sum_{j=1}^{N_{T}}\left|\sum_{l=1}^ {N_{ob}}\left(a_{l}\psi_{l}(\mathbf{y}_{j+1})-\psi_{l}(\mathbf{y}_{j})\left( \mathbf{K}^{T}\mathbf{a}\right)_{l}\right)\right|^{2}\] \[= \text{arg}\,\min_{\mathbf{K}}\sum_{j=1}^{N_{T}}\left|\left\langle \mathbf{\Psi}_{j+1}-\mathbf{K}\mathbf{\Psi}_{j},\mathbf{a}^{*}\right\rangle \right|^{2},\] where \(\mathbf{a}=(a_{1}\cdots a_{N_{ob}})^{T}\), \(\mathbf{\Psi}_{j}=(\psi_{1}(\mathbf{y}_{j})\cdots\psi_{N_{ob}}(\mathbf{y}_{j} ))^{T}\), the inner product \(\left\langle,\right\rangle\) is the standard one over \(\mathbb{C}^{N_{ob}}\), and the \(*\) symbol denotes complex conjugation. It is straightforward to show that an equivalent and easier to solve form of this optimization problem is given by \[\mathbf{K}_{a}=\underset{\mathbf{K}}{\text{argmin}}\left|\left|\mathbf{\Psi} _{+}-\mathbf{K}\mathbf{\Psi}_{-}\right|\right|_{F}^{2}, \tag{2}\] where \(\left|\left|\cdot\right|\right|_{F}\) is the Frobenius norm, and the \(N_{ob}\times N_{T}\) matrices \(\mathbf{\Psi}_{\pm}\) are given by \[\mathbf{\Psi}_{-}=\left\{\mathbf{\Psi}_{1}\,\,\mathbf{\Psi}_{2}\,\,\cdots\, \,\mathbf{\Psi}_{N_{T}}\right\},\quad\mathbf{\Psi}_{+}=\left\{\mathbf{\Psi}_ {2}\,\,\mathbf{\Psi}_{3}\,\,\cdots\,\,\mathbf{\Psi}_{N_{T}+1}\right\}.\] In practice, we solve this equation using the Singular-Value Decomposition (SVD) of \(\mathbf{\Psi}_{-}\) so that \[\mathbf{\Psi}_{-}=\mathbf{U}\mathbf{\Sigma}\mathbf{W}^{\dagger}.\] This then gives us \[\mathbf{K}_{a}=\mathbf{\Psi}_{+}\mathbf{W}\mathbf{\Sigma}^{-1}\mathbf{U}^{ \dagger},\] with the corresponding error in the Frobenius norm \(E_{r}(\mathbf{K}_{a})\) where \[E_{r}(\mathbf{K}_{a})=\left|\left|\mathbf{\Psi}_{+}\left(I-\mathbf{W}\mathbf{ W}^{\dagger}\right)\right|\right|_{F}.\] To complete the algorithm, after diagonalizing \(\mathbf{K}_{a}\) so that \[\mathbf{K}_{a}=\mathbf{V}\mathbf{L}\mathbf{V}^{-1},\,\,\mathbf{L}_{ll}=\ell_{l}, \tag{3}\] then one can show that the Koopman eigenfunctions \(\phi_{l}(\mathbf{y}_{j})\) are found via the equations \[\mathbf{\Phi}_{\pm}=\mathbf{V}^{-1}\mathbf{\Psi}_{\pm}. 
\tag{4}\] From here, one can, starting from the initial conditions, approximate the dynamics via the reconstruction formula \[\mathbf{y}(t;\mathbf{x})\approx\sum_{l=1}^{N_{ob}}\mathbf{k}_{l}e^{t\lambda_{l}}\phi_{l}(\mathbf{x}), \tag{5}\] where \(\lambda_{l}=\ln(\ell_{l})/\delta t\) and the _Koopman modes_ \(\mathbf{k}_{l}\in\mathbb{C}^{N_{s}}\) solve the initial-value problem \[\mathbf{x}=\sum_{l=1}^{N_{ob}}\mathbf{k}_{l}\phi_{l}(\mathbf{x}).\] Again, in matrix/vector notation, keeping in mind that \(\mathbf{x}\in\mathbb{R}^{N_{s}}\) and that in general \(N_{s}\neq N_{ob}\), we have \[\mathbf{x}=\mathbf{K}_{M}\begin{pmatrix}\phi_{1}(\mathbf{x})\\ \vdots\\ \phi_{N_{ob}}(\mathbf{x})\end{pmatrix}\] where \(\mathbf{K}_{M}\) is the \(N_{s}\times N_{ob}\) matrix whose columns are the Koopman modes \(\mathbf{k}_{j}\). As can be seen then, generically, one can only find the Koopman modes through least-squares solutions of the non-square problem. In this regard, one would do well to have information from as many initial conditions as possible to over-determine the problem. ### Extensions to EDMD To wit, if we had a collection of initial conditions \(\{\mathbf{x}_{k}\}_{k=1}^{N_{C}}\) with corresponding path data \(\{\mathbf{y}_{j,k}\}_{j,k=1}^{N_{T}+1,N_{C}}\), we can extend the optimization problem in Equation (1) to be \[\mathbf{K}_{a}=\text{arg min}_{\mathbf{K}}\sum_{k=1}^{N_{C}}\left|r(\mathbf{x}_{k};\mathbf{K})\right|^{2},\] so that now the problem of finding \(\mathbf{K}_{a}\) is no longer strictly localized to a particular path labeled by the initial condition \(\mathbf{x}\). Following the same logic above leads one to simply concatenate across observables column-wise when generating the \(\mathbf{\Psi}_{\pm}\) matrices so that \[\mathbf{\Psi}_{-}=\{\mathbf{\Psi}_{1,1}\ \mathbf{\Psi}_{2,1}\ \cdots\ \mathbf{\Psi}_{N_{T},1}\ \cdots\ \mathbf{\Psi}_{1,N_{C}}\ \mathbf{\Psi}_{2,N_{C}}\ \cdots\ \mathbf{\Psi}_{N_{T},N_{C}}\}\] where \[\mathbf{\Psi}_{j,k}=\left(\psi_{1}(\varphi(t_{j};\mathbf{x}_{k}))\cdots\psi_{N_{ob}}(\varphi(t_{j};\mathbf{x}_{k}))\right)^{T}\] The matrix \(\mathbf{\Psi}_{+}\) is defined similarly. Using then the EDMD algorithm outlined above, we arrive at the following matrix problem for determining \(\mathbf{K}_{M}\) \[\mathbf{X}=\mathbf{K}_{M}\mathbf{\Phi}_{0}\] where \[\mathbf{X}=\left(\mathbf{x}_{1}\cdots\mathbf{x}_{N_{C}}\right),\ \mathbf{\Phi}_{0}=\begin{pmatrix}\phi_{1}(\mathbf{x}_{1})&\cdots&\phi_{1}(\mathbf{x}_{N_{C}})\\ \phi_{2}(\mathbf{x}_{1})&\cdots&\phi_{2}(\mathbf{x}_{N_{C}})\\ \vdots&\vdots&\vdots\\ \phi_{N_{ob}}(\mathbf{x}_{1})&\cdots&\phi_{N_{ob}}(\mathbf{x}_{N_{C}})\end{pmatrix}.\] Likewise, given that Equation (4) gives us time series of the Koopman eigenfunctions, which necessarily must satisfy, assuming sufficient accuracy of the approximation implied by Equation (3), the identity \[\phi_{l}\left(\varphi(t_{j};\mathbf{x}_{k})\right)=\mathcal{K}^{j}\phi_{l}(\mathbf{x}_{k})=\ell_{l}^{j}\phi_{l}\left(\mathbf{x}_{k}\right),\] we can generalize Equation (5) via the model \[\mathbf{Y}_{N_{st}}\approx\mathbf{K}_{M}\mathbf{L}^{N_{st}}\mathbf{\Phi}_{-},\ N_{st}\in\mathbb{N}\cup\left\{0\right\}, \tag{6}\] where \[\mathbf{Y}_{N_{st}}\approx\left\{\mathbf{y}_{N_{st},1}\cdots\mathbf{y}_{N_{st}+N_{T},1}\ \cdots\ \mathbf{y}_{N_{st},N_{C}}\cdots\mathbf{y}_{N_{st}+N_{T},N_{C}}\right\},\] which generates a reconstruction of the data for time steps \(N_{st}\leq j\leq N_{T}+1\) and a forecast for steps with index \(N_{T}+1\leq j\leq N_{T}+N_{st}\). 
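For readers who want to experiment with the above, the EDMD regression of Equation (2), the spectral decomposition of Equation (3), and the global mode and reconstruction step of Equation (6) fit in a short NumPy sketch. All function and variable names below are our own, and this is a minimal illustration of the formulas above rather than the authors' implementation.

```python
import numpy as np

def edmd(Psi_minus, Psi_plus):
    """EDMD: solve Equation (2) via the SVD of Psi_- and diagonalize K_a as in Equation (3).

    Psi_minus, Psi_plus : (N_ob, N) arrays of observables at consecutive snapshots.
    Returns K_a, its eigenvalues ell, and the eigenfunction series Phi_- = V^{-1} Psi_-.
    """
    U, S, Wh = np.linalg.svd(Psi_minus, full_matrices=False)
    K_a = Psi_plus @ Wh.conj().T @ np.diag(1.0 / S) @ U.conj().T
    ell, V = np.linalg.eig(K_a)
    Phi_minus = np.linalg.inv(V) @ Psi_minus
    return K_a, ell, Phi_minus

def koopman_reconstruction(X0, Phi0, Phi_minus, ell, N_st):
    """Koopman modes and the shifted reconstruction of Equation (6).

    X0   : (N_s, N_C) matrix of initial conditions.
    Phi0 : (N_ob, N_C) eigenfunctions evaluated at those initial conditions.
    Returns the mode matrix K_M and Y_{N_st} = K_M L^{N_st} Phi_-.
    """
    K_M = X0 @ np.linalg.pinv(Phi0)          # least-squares solution of X = K_M Phi_0
    Y_shift = K_M @ np.diag(ell ** N_st) @ Phi_minus
    return K_M, Y_shift
```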
Using this formula allows for far greater flexibility in employing the EDMD since we can control how many time steps we wish to reconstruct, which is relatively easy. This is in contrast to generating forecasts through the iteration of the diagonal matrix \(\mathbf{L}\), which is a process that is generally sensitive to small variations in the position of the eigenvalues \(\ell_{l}\), especially for those near the unit circle in the complex plane. We will make great use of this generalization in the later sections of this paper. ## 3 Hankel DMD When implementing EDMD, the most natural observables are the projections along the canonical Cartesian axes, i.e. \[\psi_{l}(\mathbf{x})=x_{l},\ l=1,\cdots,N_{s}.\] If we stick to this space of observables, the EDMD method reduces to the standard DMD method. Thus the idea with EDMD is to include more nonlinear observables to hopefully represent a richer subspace of dynamics and thereby make the approximation of the corresponding Koopman operator more accurate and sophisticated. With this in mind, [15] built upon the classic idea of Takens embeddings [21] and explored using affiliated Hankel matrices to generate natural spaces of observables for EDMD, an approach we describe as Hankel DMD (HDMD). Also of note in this direction is the HAVOK method developed in [11], though in some ways HAVOK is more akin to the _embedology_ methods explored in such classic works as [22, 2]. HDMD thus begins with an affiliated scalar measurement of our time series, say \(\left\{g(\mathbf{y}_{j})\right\}_{j=1}^{N_{T}+1}\). From this, by introducing a _window_ size \(N_{w}\) one builds the affiliated Hankel matrix \(\tilde{\mathbf{H}}_{g}\left(\mathbf{x}\right)\) where \[\tilde{\mathbf{H}}_{g}\left(\mathbf{x}\right)=\begin{pmatrix}g(\mathbf{y}_{1})&\cdots&g(\mathbf{y}_{N_{w}})\\ g(\mathbf{y}_{2})&\cdots&g(\mathbf{y}_{N_{w}+1})\\ \vdots&\vdots&\vdots\\ g(\mathbf{y}_{N_{ob}})&\cdots&g(\mathbf{y}_{N_{T}+1})\end{pmatrix},\] where the number of observables \(N_{ob}=N_{T}+1-(N_{w}-1)\). What one sees then is that each row of \(\tilde{\mathbf{H}}_{g}(\mathbf{x})\) is some iteration of the Koopman operator \(\mathcal{K}^{\delta t}\). From here then, each row of \(N_{w}\) time steps is defined to be its own separate observable \(\psi_{l}(\mathbf{x})\), i.e. \[\psi_{l}(\mathbf{x})=\mathcal{K}^{l\delta t}g(\mathbf{x}),\ l=1,\cdots,N_{ob}.\] One then proceeds as above with the EDMD algorithm, where we emphasize that \(N_{T}\) is replaced by \(N_{w}-1\). This is an interesting feature, or arguably limitation, of the HDMD method in which we generate matrices \(\Phi_{\pm}\) (see Equation (4)) up to the time index \(N_{w}-1\leq N_{T}\). Thus later times are used to build approximations at prior times. This makes the issue of forecasting data more difficult since one must iterate the EDMD results, as is done via Equation (6), from time index \(N_{w}-1\) up to \(N_{T}\) to reconstruct the original data that was used in the first place. Throughout the remainder of the paper then, we take care to distinguish between _iterated reconstructions_ and actual _forecasts_ which make novel predictions beyond the given data. 
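As a concrete illustration of the Hankel construction just described, the sketch below (with helper names of our own choosing) builds \(\tilde{\mathbf{H}}_{g}\) from a scalar observable series, stacks one block per state dimension, and splits the result into the shifted snapshot pair that feeds the EDMD step sketched earlier.

```python
import numpy as np

def hankel_observables(g_series, N_w):
    """Hankel matrix of a scalar observable series.

    g_series : 1D array of length N_T + 1 holding g(y_1), ..., g(y_{N_T+1}).
    N_w      : window size, so the number of rows is N_ob = N_T + 1 - (N_w - 1).
    Row l is the l-step shift of g along the trajectory, matching the matrix in the text.
    """
    N_T = len(g_series) - 1
    N_ob = N_T + 1 - (N_w - 1)
    return np.array([g_series[l:l + N_w] for l in range(N_ob)])

def hdmd_snapshot_pair(trajectory, N_w):
    """Stack one Hankel block per state dimension and form Psi_-, Psi_+.

    trajectory : (N_T + 1, N_s) array of state snapshots.
    Returns two (N_s * N_ob, N_w - 1) arrays ready for the EDMD regression.
    """
    blocks = [hankel_observables(trajectory[:, d], N_w)
              for d in range(trajectory.shape[1])]
    H = np.vstack(blocks)
    return H[:, :-1], H[:, 1:]
```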
Finishing our explanation of HDMD, if one has data along multiple initial conditions, say \(\left\{\mathbf{x}_{k}\right\}_{k=1}^{N_{C}}\), we can extend the above algorithm by concatenating Hankel matrices so that we perform EDMD on the combined matrix \(\tilde{\mathbf{H}}_{C}\) given by \[\tilde{\mathbf{H}}_{C}=\left(\tilde{\mathbf{H}}_{g}\left(\mathbf{x}_{1}\right)\ \cdots\ \tilde{\mathbf{H}}_{g}\left(\mathbf{x}_{N_{C}}\right)\right).\] The inclusion of other observables can be done in a similar fashion. ### Results for HDMD The ultimate promise of the HDMD is that it should facilitate an adaptable implementation of the EDMD framework which allows for the number of observables to simply be adjusted by the window size. To see this, in all of the following results we let \(t_{f}=20\), \(dt=.05\), and we use \(N_{C}=128\) random initial conditions which are then stacked together. For HDMD, observables along each dimension of the dynamical system are used. Reconstructions and forecasts are generated using Equation (6) for \(N_{st}=20\), which, for a time step of \(dt=.05\), means forecasts are produced up to a unit of non-dimensional time. We note though that the choice of \(N_{ob}\) defines the variable \(N_{w}=N_{T}+1-(N_{ob}-1)\), so that instead of using EDMD on data from \(0\leq t\leq t_{f}\), we now use data from \(0\leq t\leq t_{f,w}\) where \(t_{f,w}=N_{w}\delta t\). If we then take data from the standard harmonic oscillator, where for \(\mathbf{y}(t)=(y_{1}(t),y_{2}(t))^{T}\) we have \[\dot{y}_{1}=y_{2},\ \dot{y}_{2}=-\sin(y_{1}),\ \mathbf{y}(0)=\mathbf{x},\] then HDMD produces the results seen in Figure 1. Using \(N_{ob}=10\), excellent iterated reconstructions and forecasts (note \(N_{ob}<N_{st}\)) are obtained for the entire field of initial conditions examined. The computed eigenvalues are largely localized along the complex unit circle. We emphasize that the HDMD method does this without any added guidance or control on the part of the user. Figure 1: HDMD results for the harmonic oscillator with \(N_{ob}=10\), so \(t_{f,w}=19.5\), and \(N_{st}=20\), so that the reconstruction is generated for times \(1\leq t\leq t_{f,w}\) and iterated reconstruction and forecasting is done for times \(t_{f,w}\leq t\leq t_{f,w}+1\). The computed eigenvalues using \(N_{ob}=10\) are shown in (a) and the trajectory reconstructions and forecasts are shown in (b). Moving on to the more complicated case of the Van der Pol oscillator, where \[\dot{y}_{1}=y_{2},\ \dot{y}_{2}=-y_{1}+\mu(1-y_{1}^{2})y_{2},\ \mu=1.5,\] we find, as seen in Figure 2, that \(N_{ob}=10\) does not produce reconstructions and forecasts as accurate as those we readily obtained for the harmonic oscillator. By increasing \(N_{ob}\) to \(20\) though, we are able to generate far more accurate results, though at the cost of being able to forecast beyond the given time series. We further note that by fixing \(N_{ob}=20\) and letting \(N_{st}=30\), we get essentially the same degree of degradation in the forecast as when we chose \(N_{ob}=10\) and \(N_{st}=20\). This limitation aside, we see in all cases that the eigenvalues generated in this method naturally fall on or inside the unit circle, thereby generating very stable, even if inaccurate, dynamics. 
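For completeness, ensembles of trajectories like the stacked \(N_{C}=128\) random initial conditions used above can be generated with a standard ODE integrator. The sketch below integrates the Van der Pol system with the \(\mu=1.5\) value from the text; the sampling box for the initial conditions is our own choice and is not specified in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu=1.5):
    return [y[1], -y[0] + mu * (1.0 - y[0] ** 2) * y[1]]

def sample_trajectories(rhs, n_traj=128, t_f=20.0, dt=0.05, seed=0):
    """Integrate n_traj random initial conditions and return an
    (n_traj, N_T + 1, 2) array of snapshots sampled every dt."""
    rng = np.random.default_rng(seed)
    t_eval = np.arange(0.0, t_f + dt, dt)
    data = []
    for _ in range(n_traj):
        y0 = rng.uniform(-2.0, 2.0, size=2)   # sampling box is an assumption of ours
        sol = solve_ivp(rhs, (0.0, t_f), y0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
        data.append(sol.y.T)
    return np.stack(data)

trajectories = sample_trajectories(van_der_pol)
```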
Figure 2: HDMD results for the Van der Pol equation with \(N_{st}=20\) and \(N_{ob}=10\) and \(N_{ob}=20\), so that the reconstruction is generated for \(1\leq t\leq t_{f,w}\) and iterated reconstruction and forecasting is done for times \(t_{f,w}\leq t\leq t_{f,w}+1\) where \(t_{f,w}=19.5\) for \(N_{ob}=10\) and \(t_{f,w}=19\) for \(N_{ob}=20\). We note the enhanced accuracy for \(N_{ob}=20\) comes at the expense of generating novel forecasts of the time series. Along the top row are the eigenvalue plots, while reconstructions are presented along the bottom row. As can be seen, relative to the choice of \(N_{st}\), doubling the number of observables greatly enhances the accuracy of the reconstructions and forecast. In contrast to these results, we find that the Lorenz Equations \[\dot{y}_{1}= \sigma(y_{2}-y_{1}),\] \[\dot{y}_{2}= \rho y_{1}-y_{2}-y_{1}y_{3},\] \[\dot{y}_{3}= -by_{3}+y_{1}y_{2}, \tag{7}\] where \[\sigma=10,\ \rho=28,\ b=\frac{8}{3},\] provide a case in which the HDMD is not able to adequately capture the dynamics for any reasonable choices of \(N_{ob}\). This is not necessarily surprising given that, for the parameter choices made, we know that the dynamics trace out the famous Butterfly strange attractor as seen in Figure 3. Given that we are trying to approximate dynamics on a strange attractor, we would reasonably anticipate the HDMD to struggle. However, as seen in Figure 4, the method essentially fails completely for parameter choices identical to those used above. Arguably, by comparing Figures 4 (e) and (f) to one another, we see that doubling the number of observables gives one a better approximation of the \((y_{1},y_{2})\) projection of the Butterfly, but that is a coarse metric at best. That all said, the position of the computed spectra seen in Figures 4 (a) and (b) is still relatively ideal, so further adaptation of the HDMD method might produce more desirable results. We will see how to realize this through the use of neural networks in the following section. ## 4 Deep Learning HDMD To improve the HDMD such that it is able to deal with chaotic systems such as the Lorenz equation, we now turn to and adapt the framework of the deep learning DMD (DLDMD) developed in [6]. Our deep learning enhanced HDMD begins with an autoencoder composed of neural networks \(\mathcal{E}\) (the encoder) and \(\mathcal{D}\) (the decoder) such that \[\mathcal{E}:\mathbb{R}^{N_{s}}\rightarrow\mathbb{R}^{N_{s}},\ \mathcal{D}:\mathbb{R}^{N_{s}}\rightarrow\mathbb{R}^{N_{s}},\] and such that our auto-encoder is a near identity, i.e. \[\tilde{\mathbf{y}}=\mathcal{E}\left(\mathbf{y}\right),\ \mathcal{D}\left(\tilde{\mathbf{y}}\right)\approx\mathbf{y}.\] Note, we call the encoded coordinates _latent variables_ or _latent dimensions_ in line with the larger literature on machine learning. The encoded coordinates should represent a set of observables which enhance the overall accuracy of HDMD approximations of the dynamics. Figure 3: The Lorenz Butterfly in (a) with its projection along the \((y_{1},y_{2})\) plane in (b). 
To train for this, after making reasonable choices for how to initialize the weights of the auto-encoder, and fixing a choice for \(N_{st}\), given the training data, say \(\{\mathbf{y}_{j,k}\}_{j,k=1}^{N_{T}+1,N_{C}}\), and the validation data \(\left\{\mathbf{y}_{j,k}^{(vl)}\right\}_{j,k=1}^{N_{T}+1,N_{C}^{(vl)}}\), we use the following loss function \[\mathcal{L}_{tot}=\mathcal{L}_{recon}+\mathcal{L}_{pred}+\mathcal{L}_{dmd}+\alpha\mathcal{L}_{reg}\] where \[\mathcal{L}_{recon} =\left[\frac{1}{N_{T}+1}\sum_{j=1}^{N_{T}+1}||\mathbf{y}_{j,\cdot}-\mathcal{D}\circ\mathcal{E}\left(\mathbf{y}_{j,\cdot}\right)||_{2}^{2}\right]_{N_{B}},\] \[\mathcal{L}_{dmd} =\left[\frac{1}{N_{lg}}\sum_{p=0}^{N_{lg}-1}\frac{1}{\Delta_{p}}\sum_{j=N_{st}-p}^{N_{w}-1}\left|\left|\tilde{\mathbf{y}}_{j,\cdot}-\left(\tilde{\mathbf{Y}}_{N_{st}-p}\right)_{j-(N_{st}-p)+1,\cdot}\right|\right|_{2}^{2}\right]_{N_{B}},\] \[\mathcal{L}_{pred} =\left[\frac{1}{N_{lg}}\sum_{p=0}^{N_{lg}-1}\frac{1}{\Delta_{p}}\sum_{j=N_{st}-p}^{N_{w}-1}\left|\left|\mathbf{y}_{j,\cdot}-\mathcal{D}\left(\left(\tilde{\mathbf{Y}}_{N_{st}-p}\right)_{j-(N_{st}-p)+1,\cdot}\right)\right|\right|_{2}^{2}\right]_{N_{B}},\] with \([\cdot]_{N_{B}}\) denoting averaging over a given batch, \(\Delta_{p}=N_{w}-N_{st}+p\), and where we have modified Equation (6) so that \[\tilde{\mathbf{Y}}_{N_{st}-p}=\mathbf{K}_{M}\mathbf{L}^{N_{st}-p}\mathbf{\Phi}_{-},\ p=0,\cdots,N_{lg}-1.\] The number of lags \(N_{lg}\) we introduce can be adjusted so as to reinforce learning dynamics by iterating the eigenvalues which come from EDMD. See [6] for a more complete motivation and discussion of this loss function. We collect the details of our learning method in Algorithm 1, which we call the Deep Learning HDMD (DLHDMD). ```
Data: Choose parameters N_C, N_B, alpha, E_max, N_st, N_lg
Data: Choose initial value of N_ob
Input: N_C trajectories shuffled into batches of size N_B
1  for l <- 1 to E_max do
2      for k <- 1 to N_B do
3          y~_{j,k} <- E(y_{j,k})
4          Apply the HDMD to {y~_{j,k}} to generate Y~_{N_st}
5          L_tot <- L_recon + L_pred + L_dmd + alpha * L_reg
6          Find E and D so as to minimize L_tot
7      if l = 0 mod E_up then
8          Find the minimum of L_dmd over the number of observables N_ob - 1, N_ob, N_ob + 1 using the validation data
``` **Algorithm 1** The DLHDMD Algorithm Note, we perform the update of \(N_{ob}\) over the validation data since we typically have \(N_{C}^{(vl)}\ll N_{C}\), thereby keeping this step relatively economical in terms of computational cost. Also, \(\mathcal{L}_{reg}\) is a standard 2-norm regularization of the weights of the auto-encoder. Figure 4: HDMD results for the Lorenz system with \(N_{st}=20\), so that the reconstruction is generated for times \(1\leq t\leq t_{f,w}\) and iterated reconstruction and forecasting is done for times \(t_{f,w}\leq t\leq t_{f,w}+1\) where \(t_{f,w}=19.5\) for \(N_{ob}=10\) and \(t_{f,w}=19\) for \(N_{ob}=20\). In the top row are eigenvalue plots, while reconstructions are presented along the bottom row. As can be seen, doubling the number of observables does little to enhance the accuracy of the reconstructions. ### Results for DLHDMD We now show how the DLHDMD performs on several dynamical systems. 
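Before turning to specific systems, it may help to see the auto-encoder component of Algorithm 1 in code. The PyTorch sketch below is a schematic of our own, not the authors' implementation: it shows a five-layer, 128-neuron encoder/decoder pair and only the reconstruction term \(\mathcal{L}_{recon}\); the activation function is our choice, and the \(\mathcal{L}_{dmd}\) and \(\mathcal{L}_{pred}\) terms would be obtained by running the HDMD of the previous section on each encoded batch inside the training loop.

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out, width=128, depth=5):
    """Fully connected network with `depth` hidden layers of `width` neurons."""
    layers, d = [], n_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ELU()]   # activation is an assumption of ours
        d = width
    layers.append(nn.Linear(d, n_out))
    return nn.Sequential(*layers)

class AutoEncoder(nn.Module):
    """Near-identity auto-encoder E, D : R^{N_s} -> R^{N_s}."""
    def __init__(self, n_state):
        super().__init__()
        self.encoder = mlp(n_state, n_state)
        self.decoder = mlp(n_state, n_state)

    def forward(self, y):
        y_tilde = self.encoder(y)
        return y_tilde, self.decoder(y_tilde)

model = AutoEncoder(n_state=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def reconstruction_loss(batch):
    """L_recon: mean squared reconstruction error over time steps and batch.

    batch has shape (N_B, N_T + 1, N_s)."""
    _, y_hat = model(batch)
    return ((batch - y_hat) ** 2).sum(dim=-1).mean()
```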
We take as our training set 10000 randomly chosen initial conditions with their affiliated trajectories, 3000 randomly chosen validation set initial conditions, and 2000 randomly chosen initial conditions for testing purposes. Aside from our results for the KS equation, the training is done over \(E_{max}=100\) epochs using an ADAM optimizer with learning rate \(\gamma=10^{-4}\). The encoder and decoder each consist of five layers of 128 neurons each, and all weights are initially drawn from truncated Gaussian distributions of zero mean and \(\sigma=.1\). The batch size \(N_{B}=256\), and the regularization hyperparameter \(\alpha=10^{-14}\). For the Lorenz-63 and Rossler systems, we choose the initial number of observables to be \(N_{ob}=10\), and we update every \(E_{up}=5\) epochs. For the KS system, we initially choose \(N_{ob}=5\) and let \(E_{up}=10\). In all cases, we choose \(N_{lg}=1\), which was found to be sufficient for efficient training. #### 4.1.1 DLHDMD for the Lorenz-63 System The results of running the DLHDMD for the Lorenz-63 system are found in Figure 5. The maximum positive Lyapunov exponent, say \(\lambda_{L}\), for this version of the Lorenz-63 system can be numerically computed, and we find that \(\lambda_{L}\approx.8875\). In this case then, our prediction window is only slightly less than \(1/\lambda_{L}\approx 1.127\), so that we are making predictions up to the point where the strange attractor would tend to induce significant separations in what were initially nearby trajectories. Moreover, as can be seen, the overall reconstruction and forecast, plotted for times \(t\) such that \(1\leq t\leq t_{f,w}+1\), shows excellent agreement with the plot of the Lorenz Butterfly in Figure 3. This degree of accuracy is quantified by the graph of \(\mathcal{L}_{pred}\), which shows a relative accuracy of about \(1\%\) by the \(100^{th}\) epoch. To achieve this, we see that the DLHDMD progressively raises the value of \(N_{ob}\), thereby adding observables and concomitantly eigenvalues. As seen in Figure 5, this process continues until about the \(50^{th}\) epoch, at which point \(N_{ob}=N_{st}\) and a saturation effect kicks in whereby \(\mathcal{L}_{dmd}\) collapses for the given choice of observables. That this is also the point at which we no longer have novel forecasts points to this effect being a kind of overfitting. We note though that if one initially chooses \(N_{ob}=N_{st}\), then the training is generally not successful. Thus the model still needs to train to the point at which \(N_{ob}=N_{st}\), and it cannot happen too quickly without compromising the success of the training of the machine. As we increase \(N_{st}\) for \(N_{lg}=1\), we see that this same effect occurs when \(N_{ob}=N_{st}\). Experiments with \(N_{lg}=5\) showed this collapse in \(\mathcal{L}_{dmd}\) continues when \(N_{ob}=N_{st}\), though the overall training was stabilized and larger values of \(N_{st}\) were able to be used in training. Again, we believe that further exploring the choice of lags through the \(N_{lg}\) parameter should help improve this situation, but this will be a subject of future research. Further experiments showed that by setting \(E_{up}=10\), one just delays the epoch at which \(N_{ob}=N_{st}\), and until this point is reached, the machine is not able to produce accurate reconstructions, let alone forecasts. We now look at a typical trajectory both in the original and latent variables to get a better sense of the action of the encoder. 
As seen in Figure 6, the encoder rescales the data to be more uniform in magnitude across dimensions. However, we also see that the time scales of oscillation are essentially unchanged in the latent relative to the original coordinates. Thus, we see that the HDMD encourages better scaling of the incoming data rather than causing any significant changes in the rates of the dynamics for the Lorenz-63 system. Figure 5: Results of the DLHDMD on the Lorenz-63 system after 100 epochs of training. In the top row, moving from left to right, we see the reconstructed, iterated reconstructed, and forecast data generated by the DLHDMD, the affiliated spectra from the HDMD, and the plot of \(N_{ob}\) over epochs. In the bottom row, moving from left to right, we plot \(\mathcal{L}_{recon}\), \(\mathcal{L}_{pred}\), and \(\mathcal{L}_{dmd}\). Again, the reconstruction is generated for times \(1\leq t\leq t_{f,w}\) and forecasting is done for times \(t_{f,w}\leq t\leq t_{f,w}+1\). Error plots are over validation data. Figure 6: Comparison of original and latent variables for the Lorenz-63 system for a typical test trajectory. #### 4.1.2 DLHDMD for the Rossler System We now study the Rossler system given by \[\dot{y}_{1}= -y_{2}-y_{3},\] \[\dot{y}_{2}= y_{1}+ay_{2},\] \[\dot{y}_{3}= b+y_{3}\left(y_{1}-c\right),\] where \[a=.1,\ b=.1,\ c=14.\] Aside from the dynamics coalescing onto a strange attractor, the disparity in parameter values gives rise to multiscale phenomena so that there are slow and fast regimes of the dynamics, with the slow portions being approximated by harmonic motion in the \((y_{1},y_{2})\) plane with fast departures along the \(y_{3}\) coordinate. This strong disparity in time scales also appears by way of \(\lambda_{L}\approx 1.989\), which is more than double the maximal Lyapunov exponent for the Lorenz-63 system. Thus dynamics separate along the strange attractor twice as fast. Using then the same parameter choices described above, we get the following results for the training and validation of the DLHDMD on the Rossler system; see Figure 7. The performance of the DLHDMD is essentially identical to that seen for the Lorenz-63 system. We likewise see the same plummet in the \(\mathcal{L}_{dmd}\) term around the \(50^{th}\) epoch mark when \(N_{ob}=N_{st}\), though we do see some dynamics in \(N_{ob}\) as it seeks to optimize the performance of \(\mathcal{L}_{pred}\). Thus we see that our method is able to address slow/fast dynamics with no particular modifications of the algorithm needed. We do note though that a visual inspection of trajectories shows that the error in our model is most apparent when one is trying to capture the fast transients affiliated with the multiscale dynamics of the Rossler system. Overall though, our iterated reconstruction window is almost twice the length of time over which trajectories separate on the attractor, so the results appear quite good in light of this fact. Again, we look at a typical trajectory both in the original and latent variables to get a better sense of the action of the encoder. As seen in Figure 8, the encoder, similar to its effect for the Lorenz-63 system, rescales the data so that it is more uniform across dimensions. However, we also see that fast transients along \(y_{3}\) are completely removed so that \(\tilde{y}_{3}\) is now a more uniform oscillator. Taking this information together with the Lorenz-63 results, we see the HDMD algorithm guides the learning process to push data to be more regular both in amplitude and in the rate of dynamics. 
The linear nature of DMD based algorithms, with their particular focus on iterating eigenvalues to produce dynamics, makes the latent variable results unsurprising. Figure 7: Results of the DLHDMD on the Rossler system after 100 epochs of training. In the top row, moving from left to right, we see the reconstructed, iterated reconstructed, and forecast data generated by the DLHDMD, the affiliated spectra from the HDMD, and the plot of \(N_{ob}\) over epochs. In the bottom row, moving from left to right, we plot \(\mathcal{L}_{recon}\), \(\mathcal{L}_{pred}\), and \(\mathcal{L}_{dmd}\). Again, the reconstruction is generated for times \(1\leq t\leq t_{f,w}\) and forecasting is done for times \(t_{f,w}\leq t\leq t_{f,w}+1\). Error plots are over validation data. Figure 8: Comparison of original and latent variables for the Rossler system for a typical test trajectory. #### 4.1.3 DLHDMD for the KS Equation To test the limits of our method, we now examine spatio-temporal chaos generated by the KS equation with periodic boundary conditions in the form \[u_{t}+u_{xx}+u_{xxxx}+uu_{x}=0,\ u(x+2L,t)=u(x,t).\] Note, given the vast size of the literature around the KS equation, we refer the reader to [23] for an extensive bibliography with regards to details and pertinent proofs of facts used in this section. Introducing the rescalings \[\tilde{t}=\frac{t}{T},\ \tilde{x}=\frac{\pi}{L}x,\ u=A\tilde{u},\] and taking the balances \[A=\frac{L}{\pi T},\ T=\left(\frac{L}{\pi}\right)^{2},\] we get the equivalent KS equation (dropping tildes for ease of reading) \[u_{t}+u_{xx}+\nu u_{xxxx}+uu_{x}=0,\ \nu=\left(\frac{\pi}{L}\right)^{2}.\] Looking at the linearized dispersion relationship \(\omega(k)=k^{2}-\nu k^{4}\), we see that the \(\nu\) parameter acts as a viscous damping term. Thus, as the system size \(L\) is increased, the effective viscosity is decreased, thereby allowing for more complex dynamics to emerge. As is now well known, for \(L\) sufficiently large, a strange attractor of finite, fractional dimension forms which produces intricate spatio-temporal dynamics while also allowing for a far simpler representation of said dynamics. It has been shown in many different works (see for example [24]) that \(L=11\) generates a strange attractor with dimension between eight and nine, and that this is about the smallest value of \(L\) which is guaranteed to generate chaotic dynamics. We therefore set \(L=11\) throughout the remainder of this section. To study the DLHDMD on the KS equation, we use KS data numerically generated by a method-of-lines approach [25] that is pseudo-spectral in space and uses fourth-order exponential-differencing Runge-Kutta time stepping. For the pseudo-spectral method, \(K=128\) total modes are used, giving an effective spatial mesh width of \(2L/K=.172\), while the time step for the Runge-Kutta scheme is set to \(\delta t=.25\). These particular choices were made with regards to practical memory and simulation time length constraints. After a burn-in time of \(t_{b}=\left(L/\pi\right)^{4}=150.3\), which is the time scale affiliated with the fourth-order spatial derivative for the chosen value of \(L\), 15000 trajectories of total simulation time length \(t_{f}=\left(L/\pi\right)^{4}\) were used with gaps of \(L/\pi\) in between to allow for nonlinear effects to make each sample significantly different from its neighbors. Each of the 15000 space/time trajectories was then separated via a POD into space and time modes; see [26]. 
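The POD step just mentioned amounts to a single SVD per space-time sample. The helper below is a minimal sketch of our own, demonstrated on placeholder random data standing in for one KS trajectory; it returns the leading temporal coefficients and the fraction of energy they capture, which can be compared against the energy percentages reported below.

```python
import numpy as np

def pod_reduce(u, n_modes=12):
    """POD of a space-time field u with shape (n_time, n_space).

    Returns the temporal coefficients (n_time, n_modes) of the leading modes
    and the fraction of total energy (squared singular values) they capture.
    """
    U, S, Vh = np.linalg.svd(u, full_matrices=False)
    coeffs = U[:, :n_modes] * S[:n_modes]            # time series of the leading modes
    energy = np.sum(S[:n_modes] ** 2) / np.sum(S ** 2)
    return coeffs, energy

# Placeholder demonstration on random data in place of a real KS sample.
u_sample = np.random.randn(601, 128)
coeffs, energy = pod_reduce(u_sample)
print(coeffs.shape, energy)
```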
Taking \(N_{s}=12\) modes captured between 97.8% and 99.4% of the total energy. The choice of the total time scale \(t_{f}\) also ensured that the ratio of the largest and smallest singular values affiliated with the POD was between \(10^{-1.1}\) and \(10^{-1.9}\) so that the relative importance of each of the modes was roughly the same across all samples. We take this as an indication that each 12-dimensional affiliated time series is accurately tracing dynamics along a common finite-dimensional strange attractor as expected in the KS equation. Using the methods of [27], we find across batches that the largest positive Lyapunov exponent is typically \(\lambda_{L}\approx.3930\), so that \(1/\lambda_{L}\approx 2.545\) is the time after which we anticipate the strange attractor starting to fully pull trajectories apart. With regards to the details of the DLHDMD, we again use 10000 samples for training, 3000 for validation, and 2000 for testing. The best results with regards to window size were found when we initially set \(N_{ob}=5\) and \(E_{up}=10\). The iterated reconstruction/forecast horizon determined by the choice of \(N_{st}\) was chosen so that \(N_{st}=(L/\pi)/\delta t\approx 14\), corresponding to the time scale over which nonlinear advection acts. Thus, reconstruction is done on each sample for values of \(t\) such that \(L/\pi\leq t\leq t_{f,w}\), and iterated reconstruction/forecasting is done for \(t\) such that \(t_{f,w}\leq t\leq t_{f,w}+L/\pi\). Note, for our initial choice of \(N_{w}\), we have that initially \(t_{f,w}=(L/\pi)^{4}-1.25\). The results of DLHDMD training on the \(N_{s}=12\) dimensional POD reduction of the KS dynamics are shown in Figures 9 and 10. Likewise, our prediction window is longer than the timescale set by \(\lambda_{L}\), so we argue our forecasts are over time scales for which chaotic effects are significant. We see that the reconstruction and predictions appear accurate; see in particular the comparisons in Figure 10. The collapse of the DMD approximation seen in the previous examples above is now absent, though we see that \(N_{ob}\) has just reached \(N_{st}\) in our simulations. Thus, by using a window update that is half the rate used in the prior systems, we avoid the affiliated overfitting seen in the prior cases, though we should anticipate that it would probably occur with a few more training epochs. ## 5 Mutual Information for Characterizing Embeddings Given the success of the DLHDMD in reconstructing and forecasting dynamics along a strange attractor, especially when compared to the relative failure of trying to do the same using just the HDMD alone, it is of further interest to try to assess exactly what role the auto-encoder plays in improving the outcome of the HDMD. While we can certainly point to the performance of the components of the loss function \(\mathcal{L}_{tot}\) to explain the impact of the encoder, this does not provide us with any more explanatory power. In [6], it was empirically shown that the role of the encoder was to generally transform time series into nearly monochromatic periodic signals, which is to say, the effect of encoding was to generate far more localized Fourier spectral representations of the original time series. Figure 9: Results of the DLHDMD on the KS system after 100 epochs of training. In the top row, moving from left to right, we see the reconstructed, iterated reconstructed, and forecast data generated by the DLHDMD, the affiliated spectra from the HDMD, and the plot of \(N_{ob}\) over epochs. In the bottom row, moving from left to right, we plot \(\mathcal{L}_{recon}\), \(\mathcal{L}_{pred}\), and \(\mathcal{L}_{dmd}\). 
Again, the reconstruction is generated for times \(L/\pi\leq t\leq t_{f,w}\) and forecasting is done for times \(t_{f,w}\leq t\leq t_{f,w}+L/\pi\), where \(t_{f,w}\) is initially \((L/\pi)^{4}-1.25\). Error plots are over validation data. Figure 10: Comparison of DLHDMD results and original KS data. This does not turn out to be the case though for the DLHDMD. Instead, inspired both by the evolving understanding of how mutual information better explains results in dynamical systems [1, 28] and machine learning [16, 17], we assess the impact of the encoder on the DLHDMD by tracking how the information across dimensions and time lags changes in the original and latent variables. For two random variables \(\mathbf{X}\) and \(\mathbf{Y}\) with joint density \(p(\mathbf{X},\mathbf{Y})\), the mutual information (MI) between them \(I(\mathbf{X},\mathbf{Y})\) is defined to be \[I(\mathbf{X},\mathbf{Y})=\int p(\mathbf{x},\mathbf{y})\log\left(\frac{p(\mathbf{x},\mathbf{y})}{p(\mathbf{x})p(\mathbf{y})}\right)d\mathbf{x}d\mathbf{y},\] where \(p(\mathbf{X})\) and \(p(\mathbf{Y})\) are the affiliated marginals. One can readily show that \(I(\mathbf{X},\mathbf{Y})\geq 0\) and \(I(\mathbf{X},\mathbf{Y})=0\) if and only if \(\mathbf{X}\) and \(\mathbf{Y}\) are independent. Thus information gives us a stronger metric of statistical coupling between random variables than more traditional tools in time series analysis such as correlation measurements. We also should note here that \(I(\mathbf{X},\mathbf{Y})=I(\mathbf{Y},\mathbf{X})\), which is to say it is symmetric. We also note that MI is invariant under the action of diffeomorphisms of the variables. Thus we cannot expect to get much use from computing the full multidimensional MI of the original and latent variables, since the encoder acts as a near diffeomorphism and the two computations would therefore essentially agree. Instead, using the \(N_{C}=2000\) trajectories in the test data, we define the \(m\)-step averaged lagged self-information (ALSI) between the \(n^{th}\) and \(v^{th}\) dimensions \(I_{nv}(m)\) to be \[I_{nv}(m)=\frac{1}{N_{C}}\sum_{k=1}^{N_{C}}I\left(y_{n,\cdot,k},y_{v,\cdot+m,k}\right).\] We refer to the parameter \(m\) as a _lag_. In words then, after averaging over the ensemble of initial conditions in the test data, we compute the degree to which the signal becomes statistically independent from itself across all of the dimensions along which the dynamics evolve. We emphasize that due to the strong nonlinearities in our dynamics, we compute the lagged information as opposed to the more traditional auto-correlation so as to get a more accurate understanding of the degree of self-dependence across dimensions in our dynamics. Further, by measuring the lagged MI across isolated dimensions, we break the invariance of MI with respect to diffeomorphisms, thereby allowing meaningful differences to appear between the original and latent variable computations. ### MI for the Lorenz-63 System The results of computing the ALSI for the Lorenz-63 system are plotted in Figure 11. As can be seen, the impact of the encoder is either to weakly attenuate the dependency between dimensions, as seen for \(I_{11}\) and \(I_{22}\), or to leave the ALSI essentially unchanged, as for \(I_{33}\). Finally, we also see significant phase shifts in the lag count; see \(I_{13}\) and \(I_{23}\). 
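For concreteness, ALSI curves such as those in Figure 11 can be estimated directly from the test trajectories. The sketch below uses a simple histogram-based MI estimator; the binning, the estimator itself, and the function names are our own choices and may well differ from what the authors used.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of the mutual information between two scalar series."""
    p_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = p_xy / p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

def alsi(trajectories, n, v, m, bins=32):
    """m-step averaged lagged self-information I_{nv}(m).

    trajectories : (N_C, N_T + 1, N_s) array; n and v index dimensions; m is the lag.
    """
    vals = [mutual_information(traj[:-m or None, n], traj[m:, v], bins=bins)
            for traj in trajectories]
    return float(np.mean(vals))
```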
In these phase shifts, we see that the shift is always to the left, towards shorter lags, so that the dependence in the latent variables decays more rapidly than in the original variables. In this sense then, the overall tendency of the encoder is to either reduce MI or cause time series to become more independent more rapidly. Otherwise though, the timescales of oscillation in the latent variables are essentially identical to those seen in the original variables, which is confirmed by comparisons of the original and latent variable dynamics presented in Figure 6. In terms of the DLHDMD, we might then say that the encoder assists the HDMD by generally making the rows of the affiliated Hankel matrices more independent, especially over longer time scales, and therefore more meaningful with regards to generating more accurate approximations of the underlying Koopman operator. ### MI for the Rossler System When we examine the evolution over lags of the ALSI, we see in Figure 12 that the encoder is causing large and significant changes to the dynamics. In particular, as we might expect from looking at the comparisons in Figure 8, when we look at the plots of \(I_{12}\) and \(I_{23}\), we see that the sharp transients in the ALSI for the original coordinates are removed and the overall ALSI is relatively flattened in the latent coordinates. This would seem to indicate that the slow/fast dichotomy in the Rossler dynamics is removed, so that the dynamics are made more uniform. Also of note though is \(I_{13}\), which shows that the dependency between the \(\tilde{y}_{1}\) and \(\tilde{y}_{3}\) axes is enhanced relative to the coupling between \(y_{1}\) and \(y_{3}\) and that said dependency increases with lags. This reflects the more uniform coupling across dimensions in the latent variables which was seen in Figure 8. ## 6 Conclusion and Discussion In this work, we have developed a machine learning enhanced version of the HDMD which we call the DLHDMD. We have shown that its performance is significantly better than the HDMD method alone, and when comparing against existing results in [6] we see radical improvement over the DLDMD method for the Lorenz-63 system. Likewise, we find that our method is successful across several challenging chaotic dynamical systems varying in dynamical features and size. Thus, we have a parallel approach of similar accuracy fitting within the larger framework of Koopman operator based methods. Moreover, we have a method which computes Koopman modes globally and naturally localizes spectra around the complex unit circle without further control of the method. Finally, our analysis of the relative information dynamics across physical dimensions in the original and latent variables provides us a means of understanding the impact of the encoder network on the dynamics, in line with modern thinking in machine learning, and points towards an understanding that the HDMD is enhanced by decreasing the relative statistical dependence across physical variables. Figure 11: For the Lorenz-63 system, plots of the ALSI \(I_{nv}(m)\) for \((n,v)=(1,1)\) (a), \((n,v)=(2,2)\) (b), \((n,v)=(1,2)\) (c), \((n,v)=(3,3)\) (d), \((n,v)=(1,3)\) (e), and \((n,v)=(2,3)\) (f) for both the original and latent coordinates. As can be seen, the encoder tends to reduce the ALSI along each physical dimension aside from those involving the third physical dimension, for which the ALSI is enhanced for shorter lags and decreased for longer ones. 
As explained in detail in the Introduction, there are of course a number of questions that remain to be addressed, and they will certainly be the subject of future research. Figure 12: For the Rossler system, plots of the ALSI \(I_{nv}(m)\) for \((n,v)=(1,1)\) (a), \((n,v)=(2,2)\) (b), \((n,v)=(1,2)\) (c), \((n,v)=(3,3)\) (d), \((n,v)=(1,3)\) (e), and \((n,v)=(2,3)\) (f) for both the original and latent coordinates. ## Acknowledgements C.W. Curtis would like to acknowledge the generous support of the Office of Naval Research and its Summer Research Faculty Program for supporting this project. D.J. Alford-Lago acknowledges the support of the Naval Information Warfare Center. E. Bollt was funded in part by the U.S. Army Research Office grant W911NF-16-1-0081, by the U.S. Naval Research Office, the Defense Advanced Research Projects Agency, the U.S. Air Force Research Office STTR program, and the National Institutes of Health through the CRCNS.
2305.02361
Simulating $\mathbb{Z}_2$ lattice gauge theory on a quantum computer
The utility of quantum computers for simulating lattice gauge theories is currently limited by the noisiness of the physical hardware. Various quantum error mitigation strategies exist to reduce the statistical and systematic uncertainties in quantum simulations via improved algorithms and analysis strategies. We perform quantum simulations of $1+1d$ $\mathbb{Z}_2$ gauge theory with matter to study the efficacy and interplay of different error mitigation methods: readout error mitigation, randomized compiling, rescaling, and dynamical decoupling. We compute Minkowski correlation functions in this confining gauge theory and extract the mass of the lightest spin-1 state from fits to their time dependence. Quantum error mitigation extends the range of times over which our correlation function calculations are accurate by a factor of six and is therefore essential for obtaining reliable masses.
Clement Charles, Erik J. Gustafson, Elizabeth Hardt, Florian Herren, Norman Hogan, Henry Lamm, Sara Starecheski, Ruth S. Van de Water, Michael L. Wagman
2023-05-03T18:01:02Z
http://arxiv.org/abs/2305.02361v2
# Simulating \(\mathbb{Z}_{2}\) lattice gauge theory on a quantum computer ###### Abstract Quantum simulations of lattice gauge theories are currently limited by the noisiness of the physical hardware. Various error mitigation strategies exist to extend the use of quantum computers. We perform quantum simulations to compute two-point correlation functions of the \(1+1d\)\(\mathbb{Z}_{2}\) gauge theory with matter to determine the mass gap for this theory. These simulations are used as a laboratory for investigating the efficacy and interplay of different error mitigation methods: readout error mitigation, randomized compiling, rescaling, and dynamical decoupling. We find interesting synergies between these methods and that their combined application increase the simulation times at a given level of accuracy by a factor of six or more compared to unmitigated results. + Footnote †: preprint: FERMILAB-PUB-23-171-SQMS-T ## I Introduction Efficient classical algorithms for nonperturbatively solving quantum field theories with systematically improvable approximations rely on using Monte Carlo methods to evaluate lattice regularized path integrals. However, many physical systems of interest to high-energy physics and beyond - including finite-density systems [1], chiral gauge theories [2], and systems containing baryons [3] - require computational resources scaling exponentially with system size in order to overcome _sign problems_, complex path integrands which cannot be treated as probability distributions for Monte Carlo importance sampling. A particularly severe sign problem obstructs classical simulations of real-time evolution in lattice gauge theories (LGTs). Although there has been progress in developing novel classical algorithms [4; 5; 6; 7], simulations of the dynamics of four-dimensional LGTs relevant for studying phase transitions in the early universe, heavy ion collisions, and high-energy hadronic cross sections remain unfeasible. Quantum computers offer the possibility to simulate real-time dynamics of LGTs without sign problems by using Hamiltonian time evolution of quantum states instead of Monte Carlo evaluations of path integrals [8]. Although large-scale fault-tolerant quantum computers capable of simulating four-dimensional LGTs may not be realized for many years, it is important to understand the possibilities and challenges associated with quantum simulations of LGTs using current hardware. In particular, while quantum error mitigation (QEM) may extend the reach of noisy simulations, it is expected to suffer exponential scaling similar to classical simulations [9] albeit with potentially smaller prefactors. Substantial effort has been dedicated to error correction or mitigation for general purpose algorithms [10; 11; 12; 13; 14; 15; 16; 17; 18; 19] and other fields of physics e.g. [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. In contrast, for LGT, their use [33; 34; 35; 36; 37; 38; 39; 40; 41; 42] and systematic study [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57] are more limited; the simulations primarily focus on using zero noise extrapolation, random compilation, and readout error mitigation. Further, many approaches for digitization of LGTs (see Sec VI.b of [58] for a review) have been proposed, and it will be illuminating to compare their performance on currently accessible systems to understand which approaches should be pursued for large-scale quantum simulations. 
The limited quantum resources have focused research on \(1+1d\) quantum electrodynamics (QED), also called the Schwinger model [59], which demonstrates chiral symmetry breaking and confinement. Here, we study the \(1+1d\) \(\mathbb{Z}_{2}\) gauge theory coupled to one fermion field, which is the simplest discrete subgroup approximation of the Schwinger model [60; 61]. Other proposed schemes for rendering the photon-field Hilbert space finite dimensional include truncating the compact QED variables [62], quantum link models [63], and quantum cellular automatons [64]. Implementations of the Schwinger model exist for most quantum platforms, including trapped ions [65; 66; 67], cold atoms [68; 69; 70], and transmons [33]. The Schwinger model and its approximations have been used to test a variety of quantum simulation methods. In particular, state preparation techniques using adiabatic [62; 71] and variational [72; 73; 74] methods as well as thermal-pure-quantum states [75] have been tested. Ground state properties have been calculated at finite temperature [76; 73] and density [74; 75], including topological terms [71; 60; 77]. Additional work has studied non-equilibrium dynamics [76; 78] and the role of dynamical quantum phase transitions [79, 80]. Specialized error mitigation [52] and correction [57, 81] have been developed. Further, resource estimates for some implementations were derived [82]. In this work, we present results from simulations of real-time evolution in \((1+1)d\) \(\mathbb{Z}_{2}\) lattice gauge theory. In Sec. II, we first discuss Hamiltonian time evolution in \(\mathbb{Z}_{2}\) gauge theory and its implementation via quantum circuits. Sec. III discusses the study of QEM strategies for these circuits. We apply the QEM strategies in Sec. IV to compute the real-time evolution of fermion-antifermion bound states and determine the mass gap from these results. The performance and interplay of various QEM techniques are detailed for simulations using multiple IBM quantum computers. Finally, we conclude with a discussion of the results and future work in Sec. V. ## II Theory In this work we use the Kogut-Susskind lattice Hamiltonian for \(1+1d\) \(\mathbb{Z}_{2}\) gauge theory with staggered fermionic matter and open boundary conditions (OBCs) [83, 84]. A convenient feature of this model is that both the fermionic and bosonic degrees of freedom have the same local Hilbert space dimension as qubits. This allows for a straightforward mapping to qubit-based quantum computers. While many simplifications can reduce the quantum resources further, e.g. integrating out the nondynamical gauge fields and using the block structure of the symmetry sectors, we avoid these optimizations as they may not persist in higher dimensions [85]. ### Hamiltonian The Hamiltonian governing \(\mathbb{Z}_{2}\) gauge theory on a lattice of length \(N_{s}\) with OBCs is \[\begin{split}H=\sum_{n=1}^{N_{s}-1}&\Big{[}\frac{1}{2}\sigma_{n,n+1}^{x}+\frac{\eta}{2}(\bar{\psi}_{n}\sigma_{n,n+1}^{z}\psi_{n+1}+h.c.)\Big{]}\\ &+m_{0}\sum_{n=1}^{N_{s}}(-1)^{n}\bar{\psi}_{n}\psi_{n},\end{split} \tag{1}\] where \(m_{0}\) and \(\eta\) are the bare fermion mass and the gauge coupling respectively, \(\psi_{n}\) is a fermion field on lattice site \(n\), \(\bar{\psi}_{n}\) is the corresponding antifermion field, and the Pauli matrices \(\sigma_{n,n+1}^{x}\) and \(\sigma_{n,n+1}^{z}\) act on the two-component state of the \(\mathbb{Z}_{2}\) gauge-field. 
The three terms represent the gauge kinetic term, the fermion hopping term, and the fermion mass term respectively. Here and below, we use units where the spatial lattice spacing is set to unity. When associating the 1-component staggered fields \(\psi_{n}\) with the continuum 2-component spinors \(\psi(x)\), we denote the components on even sites by \(p_{n}\) (for positron) and the components on odd sites by \(e_{n}\) (for electron). The field content of an \(N_{s}\)-site lattice with OBCs is \(N_{s}/2\) electron sites, \(N_{s}/2\) positron sites, and \(N_{s}-1\) gauge-field links. A depiction for the case of \(N_{s}=4\) is provided in Fig. 1. After a Jordan-Wigner transformation [86] to convert the fermionic degrees of freedom to bosonic ones, \(H\) becomes \[\begin{split} H=&\frac{1}{2}\sum_{n=0}^{N_{s}-1} \sigma_{n,n+1}^{x}-\frac{m_{0}}{2}\sum_{n=0}^{N_{s}-1}(-1)^{n}Z_{n}\\ &+\frac{\eta}{4}\sum_{n=0}^{N_{s}-2}(X_{n}X_{n+1}+Y_{n}Y_{n+1}) \sigma_{n,n+1}^{z}.\end{split} \tag{2}\] The operators \(X_{n}\), \(Y_{n}\), and \(Z_{n}\) denote Pauli matrices acting on the electron and positron qubit states. This form of the Hamiltonian is used in the quantum simulations below. ### Circuits A quantum circuit implementation of the time evolution operator, \(U(t)=e^{-itH}\), is necessary to simulate the real-time dynamics of this theory. It is not generally possible to efficiently diagonalize \(H\); therefore we approximate \(U(t)\) via second order Trotterization [87, 88, 89]: \[U(t)\approx\mathcal{U}(t/\varepsilon)^{N_{t}}=\bigg{(}\prod_{i}\mathcal{U}_{i }\bigg{)}^{N_{t}}=\bigg{(}\prod_{i}e^{-i\varepsilon H_{i}}\bigg{)}^{N_{t}}. \tag{3}\] In the above equation, \(N_{t}\epsilon=t\), \(N_{t}\) is an integer, and the Hamiltonian is broken up as \(H=\sum_{i}H_{i}\) into three sets of commuting terms that are individually simple to diagonalize. The terms are the combination of the gauge-kinetic and fermion mass terms \[H_{1}=\frac{1}{2}\sum_{n=0}^{N_{s}-1}\sigma_{n,n+1}^{x}-\frac{m_{0}}{2}\sum_{n =0}^{N_{s}-1}(-1)^{n}Z_{n}, \tag{4}\] and the even- and odd-site fermion-hopping terms \[\begin{split} H_{2}&=\frac{\eta}{2}\sum_{n=\text{ even}}^{N_{s}-2}(X_{n}X_{n+1}+Y_{n}Y_{n+1})\sigma_{n,n+1}^{z},\\ H_{3}&=\frac{\eta}{2}\sum_{n=\text{odd}}^{N_{s}- 2}(X_{n}X_{n+1}+Y_{n}Y_{n+1})\sigma_{n,n+1}^{z}.\end{split} \tag{5}\] Figure 1: Pictorial representation of the \(N_{s}=4\) one-dimensional lattice used in this work. The lattice is represented using seven qubits: two represent “electron” components of the staggered fermion \(e_{n}\) (green), two represent the analogous “positron” components \(p_{n}\) (red), and three represent \(\mathbb{Z}_{2}\)-valued “photon” fields for which operators are labeled by adjacent pairs of lattice sites, e.g. \(\sigma_{n,n+1}\) (orange). While \(\mathcal{U}_{1}\) only requires single-qubit gates, \(\mathcal{U}_{2}\) and \(\mathcal{U}_{3}\) are more complicated to construct and require entangling gates. The \(X_{n}X_{n+1}\) and \(Y_{n}Y_{n+1}\) pairs are mutually diagonalizable because they commute [90]. Since \(H_{2}\) and \(H_{3}\) differ only on which lattice sites they act on, the same fermion hopping term operator \(\mathcal{U}_{fh}\) can be used if connectivity and noise are not issues. On noisy intermediate-scale quantum (NISQ) era hardware, the qubit layout is a critical factor limiting the size of simulations. A lattice of size \(N_{s}\) can be encoded in \(2N_{s}-1=3+4i=3,7,...\) qubits where \(i\) is an integer. 
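For readers who want to reproduce the structure of Eqs. (2)-(5) in code, the following is a minimal Qiskit sketch that assembles the Pauli-string form of Eq. (2) on \(2N_{s}-1\) qubits and synthesizes one second-order Trotter step with a generic product formula. It does not reproduce the optimized \(\mathcal{U}_{fh,1}\)/\(\mathcal{U}_{fh,2}\) decompositions of Fig. 2, the qubit layout (matter site \(n\) on qubit \(2n\), link \((n,n+1)\) on qubit \(2n+1\)) is an assumption rather than something specified in the text, and the API calls assume a recent Qiskit release.

```python
# Minimal sketch (not code from this work): Pauli-string form of Eq. (2) on
# 2*N_s - 1 qubits and one second-order Trotter step, Eq. (3), synthesized with
# a generic product formula instead of the optimized circuits of Fig. 2.
# Assumed layout: matter site n -> qubit 2n, link (n, n+1) -> qubit 2n + 1.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.circuit.library import PauliEvolutionGate
from qiskit.synthesis import SuzukiTrotter

def z2_pauli_op(n_sites, m0, eta):
    nq = 2 * n_sites - 1
    terms = []
    for n in range(n_sites - 1):                    # gauge kinetic term on each link
        terms.append(("X", [2 * n + 1], 0.5))
    for n in range(n_sites):                        # staggered mass term
        terms.append(("Z", [2 * n], -0.5 * m0 * (-1) ** n))
    for n in range(n_sites - 1):                    # gauged fermion hopping term
        qs = [2 * n, 2 * n + 1, 2 * n + 2]
        terms.append(("XZX", qs, 0.25 * eta))       # coefficients follow Eq. (2)
        terms.append(("YZY", qs, 0.25 * eta))
    return SparsePauliOp.from_sparse_list(terms, num_qubits=nq)

def second_order_trotter_step(n_sites, m0, eta, eps):
    op = z2_pauli_op(n_sites, m0, eta)
    gate = PauliEvolutionGate(op, time=eps, synthesis=SuzukiTrotter(order=2))
    qc = QuantumCircuit(op.num_qubits)
    qc.append(gate, range(op.num_qubits))
    return qc

step = second_order_trotter_step(n_sites=4, m0=1.0, eta=1.0, eps=0.3)
print(step.decompose(reps=3).count_ops())           # rough gate content of one step
```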
One way to implement this is a chain with linear connectivity of length \(2N_{s}-1\) as shown in Fig. 1. This arrangement requires that the fermion hopping term has both fermion qubits connected to the gauge link qubit to avoid including SWAP gates. We label this circuit \(\mathcal{U}_{fh,1}\) and it is shown in Fig. 2. The quantum simulations described in this work were performed using NISQ era hardware: the IBM quantum computers ibm_nairobi and ibmq_jakarta1 with code available from Ref. [91]. Quantum computers including these and other IBM devices often have nonlinear connectivities such as square and heavy-polygon (qubits on both the \(N_{p}\) edges and \(N_{p}\) vertices) lattices. On such devices, the \(N_{s}\) is limited by the longest linear graph. In the limit of many qubits, this corresponds to fractional qubit use of \(\frac{N_{p}-2}{N_{p}-1}\). For heavy-squares and heavy-hexagons, this yields \(78\%\) and \(86\%\) respectively. In the small qubit limit, boundary effects can dramatically reduce this fraction, and for ibm_nairobi and ibmq_jakarta only 3 of the 7 or 43% of the qubits can be used with this layout. Footnote 1: Sample specifications for the various machines are provided in the files accompanying this work. Using circuit identities, we designed a second circuit \(\mathcal{U}_{fh,2}\) which couples all entangling gates to one of the fermions instead of the gauge link. This circuit is shown in Fig. 2. With this, qubits not along the longest linear graph can be used to store gauge links. Further, \(\mathcal{U}_{fh,2}\) requires 4 CNOT's compared to the 6 of \(\mathcal{U}_{fh,1}\). This novel design allows us to simulate larger lattices on quantum computers with nonlinear connectivity. We show the mapping for an example heavy-square device in Fig. 3 and the particular mappings relevant for the IBM devices used in this study in Fig. 4. With this additional gate, it is possible to use all 7 qubits of ibm_nairobi and ibmq_jakarta without SWAP gates. Figure 3: Example mapping of 1+1d \(\mathbb{Z}_{2}\) which tessellates heavy-square qubit connectivity layouts relevant for ibm_nairobi and ibmq_jakarta. Solid (dashed) lines indicate \(U_{fh,1}\) (\(U_{fh,2}\)) gates are used to implement fermion hopping terms involving a given pair of lattice sites and grayed qubits denote ones unnecessary for even numbers of lattice sites. This is the fewest number of idle qubits possible for this graph. Figure 2: The quantum circuit \(\mathcal{U}_{1}\) implementing the gauge kinetic term and the fermion mass term is shown in the left panel. In the right panel, two equivalent quantum circuits implementing the fermion hopping term appearing in \(\mathcal{U}_{2}\) and \(\mathcal{U}_{3}\) are shown. These two circuits have different qubit connectivities and are used in conjunction to provide efficient mappings between logical and physical qubits as described in the main text. ### Simulation Prescription It is anticipated that a primary advantage of quantum simulations will be their ability to efficiently describe the non-equilibrium responses of strongly-coupled quantum systems to applied currents that can be used to predict observables ranging from transport coefficients to scattering cross sections [58]. Predictions for such non-equilibrium observables require calculations of real-time correlation functions including the time-evolution operator \(U(t)\). 
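For the small lattices considered here, \(U(t)\) and the spectrum of Eq. (2) can also be obtained classically by dense diagonalization, which is the kind of reference computation the comparisons below rely on. The sketch below is illustrative only: it uses the same assumed qubit layout as above and performs no gauge-sector projection or Trotterization, both of which matter for the exact reference values quoted later in Table 3.

```python
# Minimal sketch (not code from this work): dense construction of Eq. (2) and
# its exact spectrum for a small lattice, with the assumed layout
# matter site n -> qubit 2n, link (n, n+1) -> qubit 2n + 1.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def op_on(ops, n_qubits):
    """Tensor product placing the given single-qubit matrices on chosen qubits."""
    out = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        out = np.kron(out, ops.get(q, I2))
    return out

def z2_hamiltonian(n_sites, m0, eta):
    nq = 2 * n_sites - 1
    H = np.zeros((2**nq, 2**nq), dtype=complex)
    for n in range(n_sites - 1):                 # gauge kinetic term on each link
        H += 0.5 * op_on({2 * n + 1: X}, nq)
    for n in range(n_sites):                     # staggered mass term
        H += -0.5 * m0 * (-1) ** n * op_on({2 * n: Z}, nq)
    for n in range(n_sites - 1):                 # gauged fermion hopping term
        H += 0.25 * eta * (op_on({2 * n: X, 2 * n + 1: Z, 2 * n + 2: X}, nq)
                           + op_on({2 * n: Y, 2 * n + 1: Z, 2 * n + 2: Y}, nq))
    return H

H = z2_hamiltonian(n_sites=4, m0=1.0, eta=1.0)   # 7 qubits, 128 x 128 matrix
print("lowest energies:", np.round(np.linalg.eigvalsh(H)[:4], 4))
```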
It is convenient to work with matrix elements of operators of the form \(U^{\dagger}(t)OU(t)\) where \(O\) is a Hermitian operator in order to ensure Hermiticity of all operators whose expectation values are computed in quantum simulations.2 A simple example is given by the correlation function Footnote 2: Computing the expectation value of non-Hermitian operators is possible but requires multiple circuits and ancilla qubits as in e.g. the Hadamard test (see Sec. 2.4.3 of Ref [92]). \[C(t)=\left\langle\phi\right|U^{\dagger}(t)OU(t)\left|\phi\right\rangle, \tag{6}\] which depends on the operator \(O\), the initial state \(\left|\phi\right\rangle\), and total evolution time \(t\). If \(\left|\phi\right\rangle\) is an energy eigenstate \(\left|E_{n}\right\rangle\), then \(C(t)\) will have trivial time dependence because the time evolution factors of \(e^{\pm iE_{n}t}\) will cancel. On the other hand, if \(\left|\phi\right\rangle\) is not an energy eigenstate then there is non-trivial time dependence where \[C(t)=\sum_{n,m}\left\langle\phi|E_{m}\right\rangle\left\langle E_{n}|\phi\right\rangle\left\langle E_{m}\right|O\left|E_{n}\right\rangle e^{-i(E_{n}-E_{m})t}. \tag{7}\] If \(\left|\phi\right\rangle\) has non-zero overlap with only a small number of states, then measurements of the squared magnitudes of correlation functions obtained from quantum simulations can be fit to this oscillatory form in order to extract the energy differences \(E_{n}-E_{m}\) of those states. As an idealized example, taking \(\left|\phi\right\rangle\) to be \(\left|\phi_{01}\right\rangle=(\left|E_{0}\right\rangle+\left|E_{1}\right\rangle)/\sqrt{2}\) would lead to a correlation function of the form \[\begin{split}\left\langle\phi_{01}\right|U^{\dagger}(t)OU(t)&\left|\phi_{01}\right\rangle=\cos((E_{1}-E_{0})t)\left\langle E_{0}\right|O\left|E_{1}\right\rangle\\ &+\frac{1}{2}\left\langle E_{0}\right|O\left|E_{0}\right\rangle+\frac{1}{2}\left\langle E_{1}\right|O\left|E_{1}\right\rangle.\end{split} \tag{8}\] Identifying states with identically zero overlap onto all but a small number of energy eigenstates is not possible in general; nevertheless correlation functions involving states dominantly overlapping with a small number of energy eigenstates can be useful for the determination of energy splittings. These calculations can be used for scale setting and subsequent predictions in eventual applications of quantum simulation to 3+1 dimensional lattice gauge theories. In other applications, the inclusive sum over states could be of interest for calculations of real-time response functions. The initial state \(\left|\phi(N_{s})\right\rangle\) used in our quantum simulations is a superposition of two gauge-invariant states: \(\left|\Omega(N_{s})\right\rangle\), which is the noninteracting vacuum state and expected to have significant overlap with the interacting vacuum state, and \(\left|P(N_{s})\right\rangle\), which is expected to have significant overlap with excited states such as electron-positron bound states. Explicitly, the states \(\left|\Omega(N_{s})\right\rangle\) and \(\left|P(N_{s})\right\rangle\) are defined for lattices with \(N_{s}\) sites as \[\begin{split}\left|\Omega(N_{s})\right\rangle=&\left(\prod_{n=\text{even}}^{N_{s}-2}H_{n,n+1}X_{n+1}\right)\left|0\right\rangle^{\otimes 2N_{s}-1},\\ \left|P(N_{s})\right\rangle=& X_{m}\sigma_{m,m+1}X_{m+1}\left|\Omega(N_{s})\right\rangle,\end{split} \tag{9}\] where \(H\) is the Hadamard gate and \(m=N_{s}/2-1\) is the center lattice site.
The superposition of these two states \[\left|\phi(N_{s})\right\rangle=\frac{1}{\sqrt{2}}\Big{(}\left|\Omega(N_{s})\right\rangle+\left|P(N_{s})\right\rangle\Big{)}, \tag{10}\] is used as the initial state in our simulations. The circuit to build this state from the computational \(\left|0...0\right\rangle\) state for \(N_{s}=4\) is given in Fig. 5. Correlation functions \(C(t)\) defined using this state as in Eq. (6) have time-dependence generically given by Eq. (7), and in the limit of large \(t\) their Fourier transformations should be sharply peaked about values corresponding to some energy differences. If \(\left|\Omega\right\rangle\) and \(\left|P\right\rangle\) overlap strongly with the ground state and a particular excited energy eigenstate as expected, then the simple time-dependence of Eq. (8) will approximately emerge and the Fourier transformation of \(C(t)\) will have a single dominant peak. The last piece to construct \(C(t)\) with time dependence analogous to Eq. (8) requires an operator \(O\) that has significant off-diagonal matrix elements such as \(\left\langle E_{0}\right|O\left|E_{1}\right\rangle\). This can be achieved using a meson-like operator such as \(\psi_{n}^{\dagger}\sigma_{n,n+1}^{z}\psi_{n+1}+h.c.\). Adding additional terms that have small matrix elements between these states will affect the constant term but not the time dependence in Eq. (8), and we select the operator \[O=\psi_{n}^{\dagger}\sigma_{n,n+1}^{z}\psi_{n+1}+\psi_{n}^{\dagger}\sigma_{n,n+1}^{z}\psi_{n+1}^{\dagger}+h.c. \tag{11}\] This operator takes a simple form in the qubit spin basis: \[O=X_{n}\sigma_{n,n+1}^{z}X_{n+1}, \tag{12}\] which allows it to be included efficiently in quantum simulations. Further, using Eq. (9) shows that diagonal matrix elements of \(O\) vanish for these states, \[\left\langle\Omega(N_{s})\right|O\left|\Omega(N_{s})\right\rangle=0=\left\langle P(N_{s})\right|O\left|P(N_{s})\right\rangle, \tag{13}\] while off-diagonal matrix elements are unity, \[\left\langle\Omega(N_{s})\right|O\left|P(N_{s})\right\rangle=1=\left\langle P(N_{s})\right|O\left|\Omega(N_{s})\right\rangle. \tag{14}\] Thus, the value of \(C(t=0)\) can be computed exactly: \[C(0)=\left\langle\phi(N_{s})\right|O\left|\phi(N_{s})\right\rangle=1. \tag{15}\] In particular, these matrix element results imply that if \(\left|P(N_{s})\right\rangle\) and \(\left|\Omega(N_{s})\right\rangle\) each approximately overlap with a single energy eigenstate, then the simplified spectral representation of Eq. (8) has a vanishing constant term. This means that \(C(t)\) can be expressed as \[\begin{split} C(t)&=\left\langle\phi(N_{s})\right|U^{\dagger}(t)\ O\ U(t)\left|\phi(N_{s})\right\rangle\\ &=\cos(Mt)+\ldots,\end{split} \tag{16}\] where \(M\) is the energy difference between the eigenstates dominantly overlapping with \(\left|P(N_{s})\right\rangle\) and \(\left|\Omega(N_{s})\right\rangle\) and the \(\ldots\) denotes contributions from other states that can lead to different \(t\)-dependence than a single-cosine form. Fits of quantum simulation results to Eq. (16) can be used to study the validity of this approximation. If the single-state contribution \(\cos(Mt)\) provides a good fit to correlation function results, then the fit parameter \(M\) can be expected to describe the energy gap between an electron-positron bound state and the vacuum, or in other words the mass gap in this theory.
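A compact way to check this construction is to build \(\left|\Omega\right\rangle\), \(\left|P\right\rangle\), and \(O\) explicitly for a small lattice and evaluate Eq. (6) with exact time evolution. The sketch below does this under two assumptions that the text leaves implicit: the link operator in Eq. (9) is taken to be \(\sigma^{z}\), and \(O\) is placed on the central link, which together reproduce Eqs. (13)-(15).

```python
# Minimal sketch (not code from this work) of Eqs. (9)-(16): build |Omega>,
# |P>, and O for N_s = 4 and evaluate the exact C(t) of Eq. (6). Assumptions:
# sigma_{m,m+1} in Eq. (9) acts as sigma^z, O sits on the central link, and the
# qubit layout is matter site n -> qubit 2n, link (n, n+1) -> qubit 2n + 1.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
HAD = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op_on(ops, n_qubits):
    out = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        out = np.kron(out, ops.get(q, I2))
    return out

def z2_hamiltonian(n_sites, m0, eta):            # Eq. (2), coefficients as written there
    nq = 2 * n_sites - 1
    H = np.zeros((2**nq, 2**nq), dtype=complex)
    for n in range(n_sites - 1):
        H += 0.5 * op_on({2 * n + 1: X}, nq)
        H += 0.25 * eta * (op_on({2 * n: X, 2 * n + 1: Z, 2 * n + 2: X}, nq)
                           + op_on({2 * n: Y, 2 * n + 1: Z, 2 * n + 2: Y}, nq))
    for n in range(n_sites):
        H += -0.5 * m0 * (-1) ** n * op_on({2 * n: Z}, nq)
    return H

def initial_state_and_operator(n_sites):
    nq = 2 * n_sites - 1
    omega = np.zeros(2**nq, dtype=complex)
    omega[0] = 1.0                                                # |0...0>
    for n in range(0, n_sites - 1, 2):                            # Eq. (9), vacuum
        omega = op_on({2 * n + 1: HAD, 2 * (n + 1): X}, nq) @ omega
    m = n_sites // 2 - 1                                          # central site
    O = op_on({2 * m: X, 2 * m + 1: Z, 2 * m + 2: X}, nq)         # Eq. (12)
    P = O @ omega                 # with the sigma^z assumption, |P> = O|Omega>
    return (omega + P) / np.sqrt(2), O                            # Eq. (10)

H = z2_hamiltonian(4, m0=1.0, eta=1.0)
phi, O = initial_state_and_operator(4)
C = []
for t in np.arange(0.0, 6.0, 0.3):
    psi_t = expm(-1j * H * t) @ phi                               # U(t)|phi>
    C.append(np.real(np.vdot(psi_t, O @ psi_t)))                  # Eq. (6)
print("C(0) =", round(C[0], 6), " (Eq. (15) predicts 1)")
```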
This correspondence can be directly tested by comparing fitted results for \(M\) with exact spectral results, which can be numerically computed on classical computers for small \(N_{s}\). As discussed in Sec. II.1, it is not straightforward to directly simulate \(U(t)\) for large \(t\) on a quantum computer and the Trotterized product \(\mathcal{U}(t/\varepsilon)^{N_{t}}\) is used in quantum simulations instead. The corresponding Trotterized approximation to \(C(t)\) is defined by \[\mathfrak{C}(t/\varepsilon)=\left\langle\phi(N_{s})\right|\mathcal{U}^{ \dagger}(t/\varepsilon)^{N_{t}}\ O\ \mathcal{U}(t/\varepsilon)^{N_{t}}\left|\phi(N_{s})\right\rangle. \tag{17}\] We performed quantum simulations of \(\mathfrak{C}(t)\) using the parameter choices \(m_{0}\in\{1,2\}\) and \(N_{s}\in\{2,4\}\) in the Hamiltonian in Eq. (2) with \(\eta=1\) in all cases. Each simulation was performed for \(N_{t}=20\) Trotter steps with \(\varepsilon=0.3\). For each \(N_{t}\), \(N_{\mathrm{rc}}=30\) randomly compiled circuits were run with \(N_{\mathrm{meas}}=2,000\) measurements collected for each. These production simulations were carried out on ibm_nairobi and ibmq_jakarta while additional testing simulations were also investigated on ibmq_manila and ibmq_quito. The full details of the simulations are listed in Table 1. ## III Error mitigation of a quantum simulation Many quantum error mitigation strategies have been studied for reducing the systematic uncertainties associated with errors in NISQ era quantum simulations. A primary goal of this work is to study the interplay between different QEM methods and the reliability of quantum simulation results using combinations of state-of-the-art methods. This section briefly introduces the QEM methods that we found to provide significant improvements in these calculations: randomized compiling, readout error mitigation, rescaling, and dynamic decoupling. These QEM strategies introduce additional correlations between quantum simulation results, and it is important to accurately determine and include these correlations in analyses of quantum simulation results. 
Throughout \begin{table} \begin{tabular}{c c c c c c} \(m_{0}\) & \(N_{s}\) & Machine & Date & Time1 & DD & Qubits2 \\ \hline 1 & 2 & ibmq\_jakarta & 1/3/23 & 20:501 & None & (5,6,3) \\ 1 & 2 & ibmq\_jakarta & 1/4/23 & 18:41 & XY4 & (5,6,3) \\ 1 & 2 & ibm\_nairobi & 9/10/22 & 13:28 & None & (5,4,6) \\ 1 & 2 & ibm\_nairobi & 9/10/22 & 11:53 & XY4 & (5,4,6) \\ 1 & 4 & ibmq\_jakarta & 9/21/22 & 15:23 & None & (0,2,1,3,5,4,6) \\ 1 & 4 & ibmq\_jakarta & 9/21/22 & 19:39 & XY4 & (0,2,1,3,5,4,6) \\ 1 & 4 & ibm\_nairobi & 1/5/23 & 08:571 & None & (0,2,1,3,5,4,6) \\ 1 & 4 & ibm\_nairobi & 1/5/23 & 09:301 & XY4 & (0,2,1,3,5,4,6) \\ 2 & 2 & ibmq\_jakarta & 8/27/22 & 22:04 & None & (5,6,3) \\ 2 & 2 & ibmq\_jakarta & 9/22/22 & 21:44 & XY4 & (5,6,3) \\ 2 & 2 & ibm\_nairobi & 1/4/23 & 21:411 & None & (5,4,6) \\ 2 & 2 & ibm\_nairobi & 1/5/23 & 22:511 & XY4 & (5,4,6) \\ 2 & 4 & ibmq\_jakarta & 1/4/23 & 20:361 & None & (0,2,1,3,5,4,6) \\ 2 & 4 & ibmq\_jakarta & 1/4/23 & 21:011 & XY4 & (0,2,1,3,5,4,6) \\ 2 & 4 & ibm\_nairobi & 9/12/22 & 11:26 & None & (0,2,1,3,5,4,6) \\ 2 & 4 & ibm\_nairobi & 9/14/22 & 11:22 & XY4 & (0,2,1,3,5,4,6) \\ \hline 1 & 2 & ibmq\_manila & 8/4/22 & 10:25 & XY4 & (3,1,4) \\ 1 & 2 & ibmq\_manila & 8/4/22 & 09:06 & CMPG & (3,1,4) \\ 1 & 2 & ibmq\_manila & 8/4/22 & 10:26 & EDD & (3,1,4) \\ 1 & 2 & ibmq\_manila & 8/4/22 & 16:57 & XY4 & (1,0,2) \\ 1 & 2 & ibmq\_manila & 8/4/22 & 14:58 & CMPG & (1,0,2) \\ 1 & 2 & ibmq\_manila & 8/4/22 & 16:56 & EDD & (1,0,2) \\ 1 & 2 & ibmq\_quito & 7/27/22 & 13:38 & XY4 & (3,2,4) \\ 1 & 2 & ibmq\_quito & 7/27/22 & 13:26 & CMPG & (3,2,4) \\ 1 & 2 & ibmq\_quito & 7/27/22 & 13:39 & EDD & (3,2,4) \\ 1 & 2 & ibmq\_quito & 7/27/22 & 12:05 & XY4 & (1,0,2) \\ 1 & 2 & ibmq\_quito & 7/27/22 & 11:46 & CMPG & (1,0,2) \\ 1 & 2 & ibmq\_quito & 7/27/22 & 12:08 & EDD & (1,0,2) \\ \end{tabular} \end{table} Table 1: Details of the simulations performed in this work. The different dynamic decoupling (DD) schemes are described below in Sec. III.4. this work we use bootstrap resampling [93; 94; 95] to determine all statistical uncertainties and correlations between observables, in particular using correlated resampling of quantities arising through QEM that appear in multiple observables computed using the same quantum computer. We performed \(N_{\text{meas}}=N_{\text{shots}}N_{\text{rc}}=6\times 10^{4}\) measurements of each quantum circuit and used \(N_{\text{boot}}=10^{4}\) random bootstrap samples of this ensemble of measurements. Bootstrap covariance matrices were determined for all correlated observables and the associated uncertainties were propagated to fitted quantities in a correlated way using the gvar and lsqfit packages [96; 97; 98; 99; 100]. ### Randomized Compiling Randomized compiling transforms coherent systematic uncertainties associated with the imperfect fidelity of quantum gates into stochastic systematic uncertainties that can be quantified with a Markovian noise model [101; 102; 103; 104; 105; 106; 107; 108]. It has seen great success in other \((1+1)d\) LGT applications [109; 40; 39; 42; 56]. At present it must be implemented by hand, but it is expected to become a standard part of transpilation through parametric compilation [110; 111; 112; 113; 114; 115; 116]. RC utilizes multiple gates that are equivalent in the absence of noise but that differ in the presence of gate errors. 
Therefore, the averages of the results often have smaller errors than any individual circuit and the variance of the results provides a partial measure of the size of systematic uncertainties arising from gate errors. A strategy for implementing randomized compiling is "Pauli twirling", in which a gate \(\Lambda\) is replaced by a gate including additional sets of Pauli gates \(\{\sigma_{i}\}\) and \(\{\sigma^{\prime}_{i}\}\), where \(i\) indexes the qubits acted on by \(\Lambda\), that are chosen to satisfy [105; 107] \[\left[\bigotimes_{i}\sigma_{i}\right]\Lambda\left[\bigotimes_{j}\sigma^{\prime }_{j}\right]=\Lambda. \tag{18}\] Any solution to Eq. (18) provides a valid gate that is equivalent to \(\Lambda\) on an ideal quantum computer and can be used for randomized compiling. For any choice of \(\{\sigma_{i}\}\), the \(\{\sigma^{\prime}_{i}\}\) required to produce such a solution is simply obtained by multiplying both sides Eq. (18) by the inverses of the gates appearing and is given by \[\left[\bigotimes_{j}\sigma^{\prime}_{j}\right]=\Lambda^{\dagger}\left[\bigotimes _{i}\sigma_{i}\right]\Lambda. \tag{19}\] For complicated gates such as the fermion hopping term above, 64 solutions to Eq. (19) can be produced in this manner, while for the CNOT there are 16. In order to better handle circuit scheduling constraints on cloud computing platforms, a random set of \(N_{\text{RC}}\) solutions can be chosen at compile time, with solutions chosen independently for each instance of \(\Lambda\) appearing in a quantum circuit. The optimal value of \(N_{\text{twirl}}\) depends on both \(\Lambda\) and the hardware is run and can be determined empirically by increasing \(N_{\text{twirl}}\) until the effects of randomized compiling saturate or are offset by a prohibitively larger number of simulations. Pauli twirling removes correlations between repeated \(\Lambda\), but any internal correlations persist. Thus as the correlation between native gates decreases with hardware and implementation improvements, resources devoted to Pauli twirling can be reduced by only implementing them for larger gates like \(U_{fh}\). On the present systems, we investigated Pauli twirling at the level of \(U_{fh}\) and at the level of the CNOTs within it. A mild but statistically significant preference to twirling the CNOTs was observed which will be used for the remainder of this work. ### Readout Error Mitigation The measurement operation on quantum computers is quite noisy. There are many causes for these errors such as classical bit-flips, amplitude dampening, and cross talk [117; 118; 119; 120; 121; 122; 123]. It is important to mitigate these errors as they will bias the observed value of an operator measured on a quantum computer. While many methods to correct these errors exist [124; 125; 126; 127; 128; 129], we use regularized response matrix inversion [130; 131; 132; 133; 134]. For a single qubit if we prepare the system in the state \(|0\rangle\) there is a probability \(p_{0}\) that we measure the qubit in the state \(|0\rangle\) and a probability \(1-p_{0}\) that we measure it in the \(|1\rangle\) state. We can then use these and the analogous probabilities for an initial \(|1\rangle\) state to construct a calibration matrix, \[M=\begin{pmatrix}p_{0}&1-p_{0}\\ 1-p_{1}&p_{1}\end{pmatrix}. \tag{20}\] By acting on the vector of measured qubit state results with \(M^{-1}\), one can mitigate readout error and return a "corrected" output closer to the underlying distribution. 
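A minimal sketch of this correction step is shown below (illustrative only; the calibration probabilities are placeholders, and the actual analysis uses regularized response-matrix inversion and the mitigation built into Qiskit Runtime). The response matrix is written column-wise here, so that column \(j\) holds the outcome probabilities for preparation state \(\left|j\right\rangle\); this is the transpose of the convention displayed in Eq. (20), and either convention works if applied consistently.

```python
# Minimal sketch of readout-error mitigation via a calibration matrix
# (illustrative; p0/p1 are placeholder calibration values, not device data).
import numpy as np

def response_matrix(p0, p1):
    # Column j holds P(measured outcome | prepared |j>): transpose convention
    # of Eq. (20); either convention works when applied consistently.
    return np.array([[p0, 1.0 - p1],
                     [1.0 - p0, p1]])

def mitigate(raw_probs, p0, p1):
    corrected = np.linalg.solve(response_matrix(p0, p1),
                                np.asarray(raw_probs, dtype=float))
    corrected = np.clip(corrected, 0.0, None)   # crude stand-in for regularization
    return corrected / corrected.sum()

# Single-qubit example with asymmetric readout errors.
print(mitigate([0.62, 0.38], p0=0.97, p1=0.93))
# For several qubits with uncorrelated readout errors, the per-qubit matrices
# are combined with np.kron, as described in the following paragraph.
```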
Assuming that readout errors are uncorrelated, we can construct a tensor product \(M^{\otimes N}\) and correct the readouts individually rather than the exponentially time consuming task of measuring all possible elements of the full readout correction matrix. Qiskit RunTime has readout mitigation built in [135]. The size of readout mitigation effects will depend on the observable under study, which is taken to be \(\mathfrak{C}(t)\) below. The calibration matrix may introduce correlations between any simulations that are transformed using the same estimated process. The absolute shift in the correlation function due to readout mitigation is defined as \[\mathcal{A}(t)=|\mathfrak{C}_{\text{RO}}(t)-\mathfrak{C}_{\text{raw}}(t)|, \tag{21}\] where \(\mathfrak{C}_{\text{RO}}(t)\) is the readout mitigated observable and \(\mathfrak{C}_{\text{raw}}(t)\) is the observable calculated using the unmitigated data. The relative shift is defined as \[\mathcal{R}(t)=\frac{\mathcal{A}(t)}{|\mathfrak{C}_{\mathrm{raw}}(t)|}. \tag{22}\] Figure 6 shows the absolute shift of the observable as a function of the observable magnitude for the simulations performed on ibm_nairobi. The relative independence of \(\mathcal{R}\) on \(\mathfrak{C}_{\mathrm{raw}}\) indicates that the absolute size \(\mathcal{A}\) of readout mitigation effects is correlated with, and in particular approximately proportional to, \(\mathfrak{C}_{\mathrm{raw}}\). This is not unexpected as errors on physical hardware are commonly asymmetric [136]. Similar patterns were observed on ibmq_jakarta and together we can conclude the relative shift is approximately constant with circuit depth as seen in Fig. 7. Separate simulations with the same or different numbers of Trotter steps may become correlated because of the calibration matrix. The correlations between randomly compiled circuits were observed to be \(\lesssim 5\%\). Including these correlations has a noticeable effect when averaging these circuits, as seen in Fig. 8. Figure 9 shows the correlations introduced by readout mitigation between different time steps for the same parameters as Fig. 8. It is unsurprising that observables involving different numbers of Trotter steps are less significantly correlated than those with the same number because the wave function is not as similar. ### Rescaling Measured observables will include an exponential decay with respect to the circuit depth [137]. It is possible to counteract this signal damping using rescaling [138; 55; 139]. This method rescales the measurements of one set of circuits using information from a related set of circuits. The first set is the randomly compiled quantum simulation circuits \(\mathfrak{C}(t)\) of interest. The second set uses the same circuits except all non-Clifford gates are removed, which is denoted \(r(t)\). The first set of circuits has an unknown output that the quantum simulation is designed to determine. On the other hand, the second set containing only Clifford gates can be efficiently simulated classically [140]. Such classical algorithms can be extended further to some cases where some non-Clifford gates are allowed [141; 142]. This allows for comparison between the exact answer and the noisy result which can be used to mitigate some errors. 
On a quantum computer, assuming only a depolarizing noise channel, the noisy estimate \(\widetilde{r}(t)\) of the classically Figure 6: Relative shifts \(\mathcal{R}\) in the Trotterized correlation function \(\mathfrak{C}_{\mathrm{raw}}\) (aggregated across \(t/e\)) defined in Eq. (22) due to readout mitigation for ibm_nairobi and ibmq_jakarta. The data points are aggregated across all possible randomized compiling circuits that did not use dynamic decoupling. Each data point was measured with \(N_{\mathrm{meas}}=2,000\). computable observable \(r(t)\) is, \[\widetilde{r}(t)=(1-\varepsilon)r(t)+\frac{\varepsilon}{2^{n}}\left\langle\text{ Tr}[U_{r}(t)]\right\rangle, \tag{23}\] where \(U_{r}(t)\) is an operator satisfying \(r(t)=\left\langle U_{r}(t)\right\rangle\) for expectation values taken in the state \(\left|\phi(N_{s})\right\rangle\) and \(\varepsilon\) is the strength of the depolarizing noise. Since \(U_{r}(t)\) in our case is traceless and Hermitian because it is a tensor product of Pauli matrices, the result simplifies to \(\widetilde{r}(t)=(1-\varepsilon)r(t)\). Since \(r(t)\) is easily computable, we can determine \((1-\varepsilon)\) from a measurement of \(\widetilde{r}(t)\). Then we can correct for the same depolarizing noise in our correlation function of interest by rescaling the analogous noisy estimator \(\widetilde{\mathbf{\mathcal{C}}}(t)\) by \((1-\varepsilon)^{-1}\)[138, 55, 139]. The resulting rescaled correlation function is given by \[\mathfrak{C}_{\text{Rescaled}}(t)=\frac{\widetilde{\mathbf{\mathcal{C}}}(t)}{1- \varepsilon}=\frac{\widetilde{\mathbf{\mathcal{C}}}(t)r(t)}{\widetilde{r}(t)}. \tag{24}\] This method is less feasible for long depth circuits because \(\widetilde{r}(t)\) can be vanishingly small. A pictorial representation is shown in Fig. 10. The efficacy of rescaling is found to depend significantly on whether or not dynamical decoupling is included and is discussed further in the next section. Figure 8: Comparison of the absolute shifts from readout mitigation using calibration matrices computed with and without taking into account correlations between different circuit measurements. Figure 7: Effects of readout correction for various depth circuits corresponding to \(t/\varepsilon\) Trotter steps. The top (bottom) figure corresponds to the \(N_{s}=2\) (\(N_{s}=4\)) lattice volume, and the different colored points in each plot correspond to the different bare fermion masses indicated. Note that simulations for different masses were performed using different machines in both cases (ibm_mairobi for parameters \(\{N_{s}=2,m_{0}=1\}\) and \(\{N_{s}=4,m_{0}=2\}\) and ibm_jakarta for the other combinations), and the central values and uncertainties of the results will therefore differ. Figure 9: Correlation matrices (that is, normalized covariance matrices) for circuits with different numbers of Trotter steps for the representative case of simulations with \(m_{0}=1\) and \(N_{s}=2\). ### Dynamical Decoupling Dynamical decoupling (DD) is a method to reduce errors arising from spectator qubits that are acted on trivially by a given gate. When a qubit is idling, a set of single qubit operations are interleaved using basis transformations so that environmental contamination or spurious signals from other qubits become decoupled. As a result the coherence time of the quantum circuit becomes extended. For a review of the method see Ref. [143]. 
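Before detailing the individual decoupling sequences, it is worth making the rescaling step of Eq. (24) concrete. The sketch below is a minimal implementation with assumed variable names; in practice \(\widetilde{r}(t)\) comes from running the Clifford-only circuits on hardware, and the division becomes unreliable once \(\widetilde{r}(t)\) is consistent with zero, as noted above.

```python
# Minimal sketch of the rescaling step of Eq. (24) (assumed variable names):
# the Clifford-only estimator fixes the depolarizing factor (1 - eps), which is
# then divided out of the noisy correlator estimate.
import numpy as np

def rescale(C_noisy, r_noisy, r_exact):
    """Eq. (24): C_rescaled = C_noisy * r_exact / r_noisy.

    C_noisy : noisy estimates of the Trotterized correlator, shape (N_t + 1,)
    r_noisy : noisy estimates of the Clifford-only circuit, same shape
    r_exact : classically computed values of r(t) (identically 1 in this work)
    """
    C_noisy, r_noisy, r_exact = map(np.asarray, (C_noisy, r_noisy, r_exact))
    one_minus_eps = r_noisy / r_exact          # per-step depolarizing factor
    return C_noisy / one_minus_eps

# Toy illustration: a cosine signal damped by the same depolarizing factor.
t = np.arange(21) * 0.3
true_C = np.cos(0.95 * t)
damping = 0.9 ** np.arange(21)
print(np.allclose(rescale(true_C * damping, damping, np.ones(21)), true_C))
```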
There exist many methods for DD [143; 144; 145; 146; 147; 148] and extensive studies on different DD sequences have been done [149; 150; 148; 149]. It is well known the effectiveness of a given DD sequence is problem- [151; 152; 153; 154] and hardware-dependent [19; 148]. The time-dependent evolution can be described using a total Hamiltonian \(H\) that depends on the Hamiltonian of the ideal system \(H_{S}\), the Hamiltonian of the environment \(H_{E}\), and the interaction between the system and the environment \(H_{SE}\) as \[H=H_{S}+H_{E}+H_{SE}. \tag{25}\] We can view \(H_{SE}\) as an error term in the desired error-mitigated Hamiltonian and cancel them out to some degree using time-dependent inversion pulses in long periods of system-environment interaction. These pulses can be incorporate into a circuit via a transpiler pass, as described below, and are available within Qiskit. When a circuit is prepared to be run on a quantum computer, it is first transformed into a logically equivalent circuit in terms of the basis gates supported by the quantum computer through a process called transpiling. The DD transpiler pass [155] analyzes a transpiled circuit for idle periods and inserts delay instructions. Although this will be effective in keeping a system in phase during single-qubit gates, CNOTs have longer gate times (10 or more times that of a single-qubit gate) and require a DD pulse sequence to decouple the idle qubits. Research on CNOT-induced idle periods and the best strategies for DD implementation is detailed in Ref. [156]. In this work, we studied three different dynamical decoupling sequences: Carr-Purcell-Meiboom-Gill (CPMG) [144; 145], XY4 [146], and Eulerian dynamical decoupling (EDD) [147], which are described below. These pulse sequences are shown in Fig. 11 where, \(\tau\) is the length of idle time on the qubit minus the single qubit gate operation times. One of the earliest described decoupling sequences was proposed by Carr & Purcell in 1954 [144] and elaborated on by Meiboom & Gill four years later [145]. Called CPMG, this sequence gives first-order protection to environmental coupling. It involves two \(X\) pulses, symmetrically placed on a spectator like in Fig. 11. While CPMG has been shown to perform better than a system without DD [19], it makes assumptions about the pulses being ideal. In addition, CPMG can only decouple states close to the equator of the Bloch sphere (such as the \(|+\rangle\) and \(|-\rangle\) states). To protect all states universally, more than just \(X\) pulses are needed. Following previous work on DD sequences, Maudsley brought forward a sequence with universal first-order protection of states in the ideal pulse limit [146] by introducing a Y-rotation to the CPMG sequence as seen in Fig. 11. This additional direction of rotation cancels out the final undesirable \(H_{SE}\) terms. It is important to note that to properly implement XY4, the total delay time \(t\) must be bounded by the number of single qubit gates. If the total CNOT time is less than the time it takes to implement 4 single qubit gates, then XY4 cannot be implemented. For IBM devices, this issue does not arise because CNOT times are 10 or more times that of single qubit gates. However, other factors such as pulse alignment can place restrictions on the allowed \(f_{t}\). Like CPMG, XY4's efficacy is based on an ideal pulse model and may not accurately describe some noise sources arising on real quantum computers. 
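As a concrete example of the transpiler-pass route mentioned above, the following sketch pads idle windows with an XY4 sequence using Qiskit's scheduling passes. The gate durations are placeholders rather than calibrated device values, and the pass names assume a recent Qiskit release.

```python
# Sketch of inserting an XY4 sequence into idle windows with Qiskit's
# scheduling passes (placeholder durations; not calibrated device values).
from qiskit import QuantumCircuit
from qiskit.circuit.library import XGate, YGate
from qiskit.transpiler import PassManager, InstructionDurations
from qiskit.transpiler.passes import ALAPScheduleAnalysis, PadDynamicalDecoupling

durations = InstructionDurations(
    [("h", None, 160), ("x", None, 160), ("y", None, 160),
     ("cx", None, 1600), ("measure", None, 4000)], dt=2.2e-10)

qc = QuantumCircuit(3)
qc.h(1)
qc.cx(1, 0)
qc.cx(1, 2)            # while this CNOT runs, qubit 0 idles: a DD window opens
qc.measure_all()

pm = PassManager([
    ALAPScheduleAnalysis(durations),
    PadDynamicalDecoupling(durations,
                           dd_sequence=[XGate(), YGate(), XGate(), YGate()]),
])
print(pm.run(qc).draw(idle_wires=False))
```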
Eulerian Dynamical Decoupling (EDD) is a class of sequences proposed by Viola and Knill [147] that provides universal first-order protection and takes into account imperfect pulses as well. The name of this procedure is derived from the Eulerian cycles on a Cayley graph of the discrete group (in our case, rotational gates from which a sequence is constructed), a formalism which is elaborated on in Ref. [148]. The explicit sequence of gates we used is described in Ref. [19] and shown in Fig. 11. Environmental couplings can introduce both oscillatory effects and exponential damping to the underlying signal as discussed above. DD can mitigate these oscillatory and exponential damping effects. It is easiest to see these effects on the simple observables \(r(t)\) used for rescaling. With an ideal quantum computer, the expected value of \(r(t)\) for our studies should be 1 regardless of the circuit depth. We show the effects of including DD on the rescaling circuit in Fig. 12. If only Pauli or depolarizing noise channels are affecting the quantum system, then this cir Figure 10: Pictorial representation of how rescaling circuits restore the signal. A noisy measurement (black dashed line) of an observable with a known \(t\)-independent expectation value (black solid line) is used to rescale a noisy measurement of an observable (yellow dashed line), resulting in the rescaled observable (yellow solid line). cuit should decay exponentially with depth. However we observe that without DD environmental effects introduce oscillatory terms which invalidate Eq. (23). To quantify the efficacy of the different DD sequences, we ran simulations of the rescaling circuit on ibmq_quito and ibmq_manila with CPMG, XY4, and EDD protocols. In order to avoid over-optimizing our choice of DD sequence, these studies used a different initial state than the one used in our final results corresponding to \(|\psi\rangle=\frac{1}{\sqrt{2}}(|0\rangle|+\rangle|0\rangle+|1\rangle|- \rangle|1\rangle)\). The results were then fit to the ansatz \[f(x)=Ae^{-Bx}\cos(Cx)+D, \tag{26}\] which is inspired by studies in Ref. [143]. The inclusion of the oscillatory term is often seen when superposition states are prepared [157]. When comparing the fits with and without DD, we expect to see that the coefficients \(B\) and \(C\) should decrease when DD sequences are included into the quantum simulation. We find fit coefficients for the example case shown in Fig. 12 and in this an all other cases it is indeed observed that DD lowers the fitted values of \(B\) and \(C\). For the example shown in Fig. 12, we find that \(B=0.1356(58)\) and \(C=0.3664(68)\) without DD and \(B=0.1016(18)\) and \(C=-0.0919(54)\) when DD is included. Similar trends are observed with the other simulations. The rescaling circuits were also fit to Eq. (26) and the resulting \(B\) values are shown in Figs. 13- 14 and Table 2. The results show that XY4 and EDD generally perform better for this study than CPMG since they have a smaller value of \(B\). There is a slight preference for EDD over XY4; however there are physical hardware constraints that limit the ability to implement EDD due to its large number of gates. Thus XY4 was used for all further simulations. values of \(t/\varepsilon\) rescaling leads to overcorrections or drives the simulation results further way from the expected result. 
The overcorrection could arise from the fact that the rescaling circuit and the time evolution circuit are not exactly the same length and the small mismatch leads to an accumulation of error as the simulation circuit depth increases. In addition, randomized compiling using only Pauli matrices does not exactly map the coherent error to a depolarizing channel. This mapping is only true if the full Clifford group is used. Given this, the rescaling procedure defined in Eq. (26) is not exactly correct. Rescaling will typically fail if the observable used for rescaling has any oscillatory components. The simultaneous inclusion of DD and rescaling leads to significant improvements over the use of either technique alone. The quantum simulation results for \(t/\varepsilon\lesssim 10\) agree with theoretical expectations within \(5\%\) precision. Even for larger \(t/\varepsilon\), results obtained with DD and rescaling are much closer to theoretical expectations than unmitigated results. These results show that the amplitude of the fully mitigated results still leaves the expected physical range \begin{table} \begin{tabular}{c c c c c c} \hline \hline Machine & Qubits & Sequence & \(B\) & \(C\) & \(\frac{\chi^{2}}{\lambda\sigma}\) \\ \hline ibmq\_quito & (1,0,2) & CPMG & 0.1137(72) & 0.070(43) & 0.87 \\ ibmq\_quito & (1,0,2) & XY4 & 0.117(72) & 0.069(58) & 2.2 \\ ibmq\_quito & (1,0,2) & EDD & 0.116(10) & 0.116(76) & 1.6 \\ ibmq\_quito & (3,2,4) & CPMG & 0.0870(37) & -0.068(18) & 0.97 \\ ibmq\_quito & (3,2,4) & XY4 & 0.0836(49) & 0.0006(81) & 1.4 \\ ibmq\_quito & (3,2,4) & EDD & 0.0725(31) & -0.075(18) & 2.2 \\ ibmq\_manila & (1,0,2) & CPMG & 0.1493(60) & 0.076(23) & 0.53 \\ ibmq\_manila & (1,0,2) & XY4 & 0.1321(46) & 0.095(14) & 0.42 \\ ibmq\_manila & (1,0,2) & EDD & 0.1418(54) & 0.087(15) & 0.60 \\ ibmq\_manila & (3,1,4) & CPMG & 0.1163(97) & 0.1939(68) & 1.4 \\ ibmq\_manila & (3,1,4) & XY4 & 0.1079(84) & 0.0001(31) & 1.9 \\ ibmq\_manila & (3,1,4) & EDD & 0.0793(60) & 0.00007(62) & 3.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Best-fit parameters and associated goodness-of-fit for fits of rescaling circuit results to Eq. (26) using the three dynamical decoupling sequences described in the main text. Calculations were performed using ibmq_manila and ibmq_quito. Figure 14: Comparison of noisy estimates of the rescaling circuits and correlation functions using the three DD sequences describes in the main text for an \(N_{s}=2\) lattice representing \([e_{0},\sigma_{0,1},p_{1}]\) using qubits [3; 2; 4] on ibmq_manila. Figure 13: Comparison of noisy estimates of the rescaling circuits and correlation functions using the three DD sequences describes in the main text for an \(N_{s}=2\) lattice representing \([e_{0},\sigma_{0,1},p_{1}]\) using qubits [3; 2; 4] on ibmq_quito. for large \(t/\varepsilon\) but suggest that the frequency of observed oscillations approximately matches the expected frequency. Similar behavior can be observed in fully mitigated quantum simulation results for each set of parameters \(m_{0}\) and \(N_{s}\) considered here, as shown in Fig. 16. To quantify the accuracy of our simulation results with and without QEM, we perform fits of \(\mathfrak{C}(t)\) to the spectral representation discussed in Sec. II.3. The simplest ansatz is the single-cosine form shown in Eq. (16), which assumes that the initial state can be approximated as a superposition of two energy eigenstates. 
The largest correction to this form arising from Trotterization is that physical times \(t\) are equal to \(a_{t}N_{t}\), where the renormalized Trotterization scale \(a_{t}\) is only equal to \(\varepsilon\) for a non-interacting theory and otherwise is a function \(a_{t}(\varepsilon,m_{0},\eta,N_{s})\) that must be determined for a given set of parameters by matching a dimensionful observable to a known value. Taking \(t=a_{t}N_{t}\) in Eq. (16) allows us to express the single-state fit ansatz as \[\mathfrak{C}(t)\approx\cos(a_{t}MN_{t})e^{Bt}, \tag{27}\] where \(e^{Bt}\) factor introduces a nuisance parameter \(B\) in order to account for residual effects of depolarizing noise that are not completely removed by rescaling. This shows that \(a_{t}M\) is the dimensionless Fourier conjugate variable to the number of Trotter steps \(N_{t}\). Correlated \(\chi^{2}\)-minimization fits to this functional form are used to extract \(a_{t}M\). The ansatz is then fit to all possible ranges of fit data with six or more consecutive Trotter steps. We then use Bayesian model averaging from Ref. [158] to determine the time dependent mass and associated systematic and statistical uncertainties. If these results described a physically relevant lattice gauge theory, we could match observables like \(a_{t}M\) to their experimental values in order to determine \(a_{t}\) and therefore make unambiguous predictions for other energies in physical units. For the 1+1 dimensional model at hand, \(a_{t}M\) provides a proof-of-principle demonstration of the calculation of an observable that could be used for scale setting and is the final result of this work. Results for \(a_{t}M\) obtained from fitting our quantum simulation results at each \(m_{0}\) and \(N_{s}\) studied are shown in Table 3. The theoretically expected exact results \((a_{t}M)_{\text{expected}}\) computed classically are shown for comparison in the same table, and \(1\sigma\) agreement between quantum simulation results and these expectations is found in the \(a_{s}m_{0}=1\) cases. However, significant discrepancies are found in the \(a_{s}m_{0}=2\) cases. These discrepancies may arise from couplings to other excited states whose exact energies are close to the ones extracted from fits to our quantum simulations. It is noteworthy that applying QEM methods, and in particular the combination of rescaling and DD, is found to be necessary for achieving a good fit to Eq. (27). Performing analogous fits to results without either of these techniques leads to less accurate estimates of the mass gap with larger uncertainties. Figure 15: Effects of rescaling and/or DD on correlation-function results. The results labeled “No QEM” include readout mitigation and randomized compiling but neither rescaling nor DD. Errors show statistical uncertainties determined using bootstrap methods as described in the main text. The parameters for this simulation are \(m_{0}=1\) and \(N_{s}=2\). Data points outside of the range \(\pm 2\) are not shown. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(N_{s}\) & \(m_{0}\) & \(\varepsilon\) & \(a_{t}M\) & \((a_{t}M)_{\text{exact}}\) \\ \hline 2 & 1.0 & 0.3 & 0.89(13) & 0.9473 \\ 4 & 1.0 & 0.3 & 1.02(18) & 0.9386 \\ 2 & 2.0 & 0.3 & 1.619(53) & 1.5204 \\ 4 & 2.0 & 0.3 & 1.591(27) & 1.5168 \\ \hline \hline \end{tabular} \end{table} Table 3: Mass gap determined by fitting quantum simulation results to the single-state ansatz described in the main text. 
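A simplified version of this fitting step is sketched below, using a plain least-squares fit of Eq. (27) to toy data in place of the correlated, model-averaged analysis described in the text; the decay parameter here is defined per Trotter step, so it absorbs a factor of \(a_{t}\) relative to Eq. (27).

```python
# Sketch of a single-state fit to Eq. (27), C(N_t) ~ cos(a_t M N_t) exp(B N_t).
# Illustrative only: toy data, uncorrelated errors, and assumed starting values.
import numpy as np
from scipy.optimize import curve_fit

def ansatz(n_t, atM, B):
    return np.cos(atM * n_t) * np.exp(B * n_t)

def fit_mass_gap(n_t, C, C_err, atM0=1.0, B0=-0.05):
    popt, pcov = curve_fit(ansatz, n_t, C, sigma=C_err, absolute_sigma=True,
                           p0=[atM0, B0])
    return popt, np.sqrt(np.diag(pcov))

# Toy data with the qualitative features of the mitigated results.
rng = np.random.default_rng(0)
n_t = np.arange(21)
C_err = np.full(21, 0.05)
C_obs = ansatz(n_t, 0.95, -0.03) + rng.normal(0.0, C_err)
popt, perr = fit_mass_gap(n_t, C_obs, C_err)
print(f"a_t M = {popt[0]:.3f} +/- {perr[0]:.3f}")
```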
The effect of QEM can also be studied in Fourier space by calculating the discrete Fourier transform (DFT) \[f(a_{t}\omega)\propto\sum_{t/\varepsilon=0}^{N_{t}}\mathfrak{C}(t)\,\cos(a_{t}\omega t/\varepsilon). \tag{28}\] The squared magnitude \(|f(a_{t}\omega)|^{2}\) for the \(m_{0}=2\) and \(N_{s}=4\) (7 qubit lattice) correlation functions with and without DD and rescaling are shown in Fig. 17. In a noiseless simulation, a clear peak around \(a_{t}\omega\approx(a_{t}M)_{\text{expected}}\) should be visible that would approach a \(\delta\)-function as \(N_{t}\to\infty\). While the simulation without DD and rescaling shows an apparent peak close to this value, it is statistically not significant at 1.3 \(\sigma\). Including DD and rescaling leads to a drastic decrease in relative uncertainty in frequency space and the peak close to \((a_{t}M)_{\text{expected}}\) is clearly visible at \(6.4\sigma\). The frequencies \(a_{t}\omega\) associated with these peaks in the Fourier spectrum correspond to energy gaps (relative to the vacuum) for states that have significant overlap with the state studied here. As expected from the success of single-cosine fits to correlation functions in the time domain, the location of the statistically significant peak visible in mitigated results is consistent with the fitted values of \(a_{t}M\) in Table 3 and with \((a_{t}M)_{\text{exact}}\). The remaining spurious oscillations in the simulation including all error mitigation techniques could arise from correlations between different frequency DFTs and possible ringing artifacts due to the finite number of Trotter steps. To measure the efficacy of QEM, we use a figure of merit that quantifies how much longer a mitigated circuit can be simulated over an unmitigated one. This definition uses the relative deviation \(\delta_{\lambda}^{\mathcal{M}}(t)\) of a simulation with parameters \(\lambda=\{m_{0},N_{s},\epsilon,\text{device}\}\) at a time \(t/\varepsilon\) from the noiseless exact value with a mitigation strategy \(\mathcal{M}\): \[\delta_{\lambda}^{\mathcal{M}}(t)=\sqrt{\frac{\sum_{t_{i}=0}^{t}\left[\mathfrak{C}_{\text{exact}}(t_{i})-\mathfrak{C}_{\mathcal{M}}(t_{i})\right]^{2}}{\sum_{t_{i}=0}^{t}\mathfrak{C}_{\text{exact}}(t_{i})^{2}}}. \tag{29}\] We then define \(t_{\lambda}^{\mathcal{M}}(\Delta)\) as the first Trotter step \(t/\varepsilon\) such that \(\delta_{\lambda}^{\mathcal{M}}(t)\) is larger than a threshold \(\Delta\). An improvement factor from mitigation can then be defined as \[T_{\lambda}(\Delta)=\frac{t_{\lambda}^{\mathcal{M}}(\Delta)}{t_{\lambda}^{\mathbf{0}}(\Delta)}, \tag{30}\] where \(\mathcal{M}=\mathbf{0}\) corresponds to the unmitigated or only randomly compiled cases that we use as baselines. This is similar to the relative error mitigation metric proposed in Refs. [159; 160]. For three of our parameter sets, we have \(\mathcal{M}=\mathbf{0}\) results available for comparison. Figure 16: Fully mitigated results (including rescaling and DD) for \(m_{0}=1\) and \(N_{s}=2\) on ibm_nairobi (top left), \(m_{0}=1\) and \(N_{s}=4\) on ibmq_jakarta (bottom left), \(m_{0}=2\) and \(N_{s}=2\) on ibmq_jakarta (top right), and \(m_{0}=2\) and \(N_{s}=4\) on ibm_nairobi (bottom right) simulations. Points that are overly noisy, unreliable, or outside the plot range are not shown.
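The figure of merit of Eqs. (29)-(31) can be written down directly; the sketch below uses illustrative names and makes one assumption the text leaves implicit, namely that a deviation curve which never crosses the threshold is assigned the full number of Trotter steps.

```python
# Sketch of the improvement-factor metric of Eqs. (29)-(31) (illustrative names).
import numpy as np

def delta(C_exact, C_mitigated):
    """Eq. (29), evaluated for every truncation time t."""
    num = np.cumsum((np.asarray(C_exact) - np.asarray(C_mitigated)) ** 2)
    den = np.cumsum(np.asarray(C_exact) ** 2)
    return np.sqrt(num / den)

def first_crossing(dev, threshold):
    """t^M(Delta): first Trotter step where delta exceeds the threshold."""
    above = np.nonzero(dev > threshold)[0]
    return above[0] if above.size else len(dev)   # never crosses -> full length

def improvement_factor(C_exact, C_mitigated, C_baseline, threshold):
    """Eq. (30): mitigated reach divided by the unmitigated (M = 0) reach."""
    return (first_crossing(delta(C_exact, C_mitigated), threshold)
            / max(first_crossing(delta(C_exact, C_baseline), threshold), 1))

def averaged_T(C_exact, C_mitigated, C_baseline, n_thresholds=25):
    """Eq. (31), restricted here to a single parameter set lambda."""
    thresholds = [(d + 1) / n_thresholds for d in range(n_thresholds)]
    return np.mean([improvement_factor(C_exact, C_mitigated, C_baseline, th)
                    for th in thresholds])
```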
We find that \(1\leq T_{\lambda}(\Delta)\leq 20\) for \(0\leq\Delta\leq 1\) with \(T\) monotonically increasing with \(\Delta\) -- here \(\Delta\lesssim 0.2\) trivially shows no signs of improvement because \(t_{\lambda}^{\mathcal{M}}(\Delta)\) is equal to \(1\) for all \(\mathcal{M}\), while the maximum value of \(T_{\lambda}\) achieved corresponds to the number of Trotter steps \(N_{t}\). Finally, to compute a single value, we average over a wide range of reasonable choices \(\Delta=d/25\) with \(d\in\{1,\dots,25\}\) as \[\bar{T}=\frac{1}{N_{\Delta}N_{\lambda}}\sum_{\Delta,\lambda}T_{\lambda}( \Delta). \tag{31}\] This leads to \(\bar{T}=5.92(12)\), which indicates that QEM enables about six times more Trotter steps to be computed with a given level of precision. ## V Conclusions The ultimate success of quantum computers in solving problems in quantum field theories -- especially in the NISQ era -- depends critically upon leveraging quantum error mitigation strategies. In this work we have investigated multiple QEM methods and how they interact with each other in order to extend the range of times for which a \(1+1d\)\(\mathbb{Z}_{2}\) gauge theory could be simulated. A number of broadly applicable lessons can be drawn from these QEM studies. Readout calibration can introduce correlations between observables calculated at equal times. Dynamic decoupling sequences are powerful at increasing circuit depth, but must be chosen carefully to balance their increased expense and mitigation power. Rescaling can work effectively for short depth circuits but becomes less effective for deep circuits due to a signal-to-noise problem that grows exponentially more severe with increasing circuit depth. Further, we find that the assumption that a depolarizing channel dominates noise is violated in practice; however, applying dynamic decoupling together with rescaling improves this issue and leads to much more significant systematic error reduction than either approach alone. Leveraging all of these error mitigation strategies together, we were able to increase the total simulation time by a factor of around \(6\). One mitigation strategy left for future work because of its complexity is the question of how gauge violating errors can be specifically prevented, since these are anticipated to more efficiently mitigate the specific errors affecting lattice gauge theories [57]. Figure 17: Fourier transform of the \(m_{0}=2\) and \(N_{s}=4\) correlation-function results. The yellow dash-dotted line shows results from the simulation only applying readout mitigation and randomized compiling, while the black solid line shows results from the simulation including dynamic decoupling and rescaling, respectively. The apparent oscillatory behavior of the amplitude could arise from correlations among neighboring points and ringing artifacts. The blue line indicates the mass gap expected from exact diagonalization of the Trotterized Hamiltonian. Figure 18: The mass gap in lattice units, \(a_{t}M\), determined using quantum simulation results is shown as a function of the bare mass \(a_{s}m_{0}\) as points with error bars. Exact results obtained by diagonalizing the Trotterized Hamiltonian are shown for comparison as solid lines. With these improvements, it is possible to extract a Minkowski two-point correlation function and from it derive the mass gap for \((1+1)d\)\(\mathbb{Z}_{2}\) lattice gauge theory for multiple bare mass parameters and lattice volumes. 
This allows for direct scale setting on a quantum computer, and is a first step toward computing Minkowski observables in the continuum limit. By comparing to exact results, we further demonstrate that energy spectra can be accurately recovered from DFTs of Minkowski two-point correlation functions as long as the simulation time extent is sufficiently long and the spectrum of states that have significant overlap with a given operator is not too dense. A number of directions for future work are suggested by our results. While our initial state and Trotterized Hamiltonian evolution were empirically observed to lead to little excited state contamination, this is unlikely to persist as larger volumes and times become accessible with improving quantum hardware. Thus, designing improved interpolating operators [161, 162, 163, 164, 165] and improved Hamiltonians [165, 40] will be required in order to extract meaningful results efficiently. As larger devices come online, it will be possible to simulate multiple lattice volumes and lattice spacings such that extrapolations to a continuum limit become possible. Finally, it is anticipated that the efficacy of particular error mitigation strategies will change dramatically for larger gauge groups and nonabelian gauge theories where the gauge and fermion registers themselves require multiple qubits. For these cases, additional QEM methods such as those discussed in Refs. [57, 47] may be important to include. ###### Acknowledgements. The authors would like to thank Elias Bernreuther, Arnaud Carignan-Dugas, Pedro Machado, and Tanner Trickle. This work is supported by the Department of Energy through the Fermilab QuantiSED program in the area of "Intersections of QIS and Theoretical Particle Physics" and National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under the contract No. DE-AC02-07CH11359. Fermilab is operated by Fermi Research Alliance, LLC under contract number DE-AC02-07CH11359 with the United States Department of Energy. F.H. acknowledges support by the Alexander von Humboldt foundation. E.G. was supported by the NASA Academic Mission Services, Contract No. NNA16BD14C. We acknowledge use of IBM Q for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Q team.
2307.15064
Self-Supervised Visual Acoustic Matching
Acoustic matching aims to re-synthesize an audio clip to sound as if it were recorded in a target acoustic environment. Existing methods assume access to paired training data, where the audio is observed in both source and target environments, but this limits the diversity of training data or requires the use of simulated data or heuristics to create paired samples. We propose a self-supervised approach to visual acoustic matching where training samples include only the target scene image and audio -- without acoustically mismatched source audio for reference. Our approach jointly learns to disentangle room acoustics and re-synthesize audio into the target environment, via a conditional GAN framework and a novel metric that quantifies the level of residual acoustic information in the de-biased audio. Training with either in-the-wild web data or simulated data, we demonstrate it outperforms the state-of-the-art on multiple challenging datasets and a wide variety of real-world audio and environments.
Arjun Somayazulu, Changan Chen, Kristen Grauman
2023-07-27T17:59:59Z
http://arxiv.org/abs/2307.15064v2
# Self-Supervised Visual Acoustic Matching ###### Abstract Acoustic matching aims to re-synthesize an audio clip to sound as if it were recorded in a target acoustic environment. Existing methods assume access to paired training data, where the audio is observed in both source and target environments, but this limits the diversity of training data or requires the use of simulated data or heuristics to create paired samples. We propose a self-supervised approach to visual acoustic matching where training samples include only the target scene image and audio--without acoustically mismatched source audio for reference. Our approach jointly learns to disentangle room acoustics and re-synthesize audio into the target environment, via a conditional GAN framework and a novel metric that quantifies the level of residual acoustic information in the de-biased audio. Training with either in-the-wild web data or simulated data, we demonstrate it outperforms the state-of-the-art on multiple challenging datasets and a wide variety of real-world audio and environments. ## 1 Introduction The acoustic properties of the audio we hear are strongly influenced by the geometry of the room and the materials that make up its surfaces--large, empty rooms with hard surfaces like concrete and glass lead to longer reverberation times, while smaller, more cluttered rooms with soft materials like curtains and carpets will absorb sound waves quickly and produce audio that sounds dull and anechoic. Human perception exploits this acoustic-visual correspondence, and we rely on perceiving natural-sounding audio that is consistent with our environment as we navigate daily life. Likewise, this phenomenon is important in virtual environments, such as in AR/VR applications. When one hears audio that is acoustically consistent with the virtual environment they are seeing, their brain can better integrate audio and visual information, leading to a more immersive experience. Conversely, when one hears audio that does not match the expected acoustics of the virtual environment, the perceptual mismatch can be jarring. The problem of audio-visual acoustic correspondence extends well beyond AR/VR applications. Film and media production involve recording audio in diverse spaces, which can be expensive and logistically challenging. Similarly, interior designers and architects face the problem of previewing how a space will sound before it is built. Today's approaches for modeling acoustic-visual coherence typically assume physical access to the target space [38; 25; 5; 39], which can be impractical or impossible in some cases. In particular, in _acoustic matching_, the audio captured in one environment is re-synthesized to sound as if it were recorded in another target environment, by matching the statistics (e.g. reverberation time) of audio samples recorded in that target environment [9; 15; 21; 24; 26; 43]. _Visual acoustic matching_ (VAM) instead takes an image of the target space as input, learning to transform the audio to match the likely acoustics in the depicted visual scene [3] (see Figure 1(a)). In both cases, however, learned models ideally have access to _paired_ training data, where each training audio clip is recorded in two different environments. This permits a straightforward supervised learning strategy, since a model can learn to transform one clip (source) to the other (target). See Figure 1(b), left. 
Unfortunately, this approach puts heavy demands on data collection that make large-scale collection of paired data from a variety of diverse environments impractical. In-the-wild Web videos provide us with a large, readily available corpus of diverse acoustic environments and human speakers. However, this data is _unpaired_--we only observe the sound recorded in the target space, without a second recording in a different environment for reference. Prior work attempts to turn unpaired data into paired data by using an off-the-shelf dereverberator model [7] to produce pseudo-anechoic source recordings from in-the-wild reverberant audio, which are then passed to a VAM model with the true reverberant audio as the target [3]. While their approach is inspiring, it has a fundamental limitation. The automatic dereverberation process is necessarily imperfect, which means that _residual acoustic information indicative of the target environment's acoustics can remain in the (pseudo) source example_. In turn, the acoustic matching model trained to produce the target audio can learn to use those residual acoustic cues in the audio--instead of the target space's visual features. When evaluated at test-time on arbitrary source audio and unseen images, the residual acoustic clues exploited during training are no longer available, leading to poor acoustically matched audio. We propose a _self-supervised visual acoustic matching_ approach that accommodates training with unpaired data from in-the-wild videos (See Figure 1(b), right). Our key insight is a training objective that explicitly removes residual acoustics in the audio, forcing reliance on the visual target. In particular, our approach jointly trains an audio-visual _debiaser_--which is trained to adversarially minimize residual acoustic information in the dereverberated audio--alongside a reverberator that performs visual acoustic matching. To this end, we develop an _acoustic residue_ metric that quantifies the level of residual acoustic information in a waveform, based on the difference in performance between a) an acoustic matching model that conditions on the target image and b) a model that does not condition on any image. Intuitively, training on audio with low residual acoustic information frees the model from relying on that (unrealistic) information during training, allowing it to instead leverage the necessary visual cues. We use a time-domain conditional GAN as our debiaser, and continually update the integrated reverberator as the distribution of generated audio shifts during training. Unlike prior work, our approach allows training directly on unpaired videos and speech.2 Footnote 2: We focus on human speech in indoor settings given its relevance to many of the applications cited above, and due to the fact that human listeners have strong prior knowledge about how room acoustics should affect speech. However, our model design is not specific to speech and could be applied to other audio sources. Our proposed LeMARA model outperforms existing approaches [3; 7; 14] on challenging in-the-wild audio and environments from multiple datasets. Furthermore, to benchmark this task, we introduce a high audio-visual correspondence subset of the AVSpeech [10] video dataset. Though we focus on the task of visual-acoustic matching, our insight potentially has broader implications for other self-supervised multi-modal learning tasks in which one wants to control the impact of a dominating modality. 
Figure 1: **Self-supervised visual acoustic matching. (a) Given source audio and target image1, the goal is to re-synthesize the audio to reflect the acoustics of the target environment. (b) Two possible data settings: the _paired audio_ setting (left) observes both the source audio and the audio in the target environment, allowing for supervised training, while the _unpaired audio_ setting (right) observes only the audio in the target environment. Our self-supervised strategy handles the unpaired setting.** ## 2 Related work Room acoustics and spatial soundAudio-visual acoustic matching has limited prior work, though there is growing interest from the vision and audio communities. Image2Reverb [38] learns to map an input image to its corresponding Room Impulse Response (RIR)--the transfer function that characterizes acoustics at a particular listener/speaker location--which can then be convolved with an arbitrary source waveform. Generalizing RIR inference to full 3D environments, Few-Shot RIR [25] and Neural Acoustic Fields [23] sample impulse responses at multiple places in an environment in order to synthesize the RIR at novel locations with a transformer. In related tasks, Novel View Acoustic Synthesis [5] directly synthesizes a source audio at a new camera pose in the room, while other methods binauralize monaural source audio [16; 34]. Unlike our model, which can train from arbitrary images, all of these methods require knowledge of the ground truth RIRs [38; 23; 25] or paired source-target audio [5; 16; 34]. Most relevant to our approach, AViTAR [3] uses a cross-modal transformer to re-synthesize the audio; it relies on an off-the-shelf audio-visual dereverberator trained in simulation [7] to produce a pseudo-clean source signal, and suffers from the acoustic residue issue discussed above. Our results illustrate how our model overcomes this shortcoming. Speech synthesis and enhancementRecent work for speech re-synthesis treats acoustic properties as a "style" which can be disentangled from the underlying speech and used to perform acoustic-style matching of audio [27; 31; 30; 42]. However, these methods require either paired data or learned speaker embeddings. Web videos (our target domain) can consist of entirely unique speakers, making it difficult to learn a robust speaker embedding. While environment-aware text-to-speech [39] is applicable even in settings where each speaker in the dataset is unique, the model requires access to paired speech for supervised training. Supervised methods for speech enhancement and dereverberation assume access to paired training data, i.e., anechoic "clean" reference waveforms alongside the altered waveforms [40; 7; 44; 35; 1; 12; 13]. Unsupervised speech enhancement approaches [22; 19; 14] such as MetricGAN-U [14] optimize generic perceptual speech quality scores, such as PESQ [36] (which requires paired samples) and the speech-to-reverberation modulation energy ratio (SRMR) [11] (which does not). While we share the objective of relaxing supervision requirements, our overall goal is distinct: rather than optimize a generic quality metric, we aim to retarget the input sound to match a specific (pictured) environment. To achieve that goal, we introduce a novel debiasing objective applicable in a conditional GAN framework. ## 3 Approach We present **Le**arning to **M**atch **A**coustics by **R**emoving **A**coustics, **LeMARA**, to address self-supervised visual acoustic matching (VAM). 
Let \(A\in\mathcal{A}\) denote an audio waveform and let \(V\in\mathcal{V}\) denote an image frame. During training, we are given \(N\) unlabeled examples of _target_ audio and scenes \(\{(A_{t},V_{t})\}_{t=1}^{N}\), taken from frames and accompanying audio in YouTube videos (cf. Sec. 4 for dataset details). While the data is multi-modal, it is unpaired: it has both audio and visual features, but it lacks a paired sample of the audio in some other source domain (see Fig. 1). With this data, we wish to learn a function \(f(A_{s},V_{t}):\mathcal{A},\mathcal{V}\rightarrow\mathcal{A}\) that takes _source_ audio \(A_{s}\) (which may or may not be reverberant) and a _target_ image \(V_{t}\), and produces the audio re-synthesized to sound like it was recorded in the target scene. To self-supervise \(f\) using unpaired training data, we would like a model that (1) dereverberates \(A_{t}\) to strip it of its room acoustics, yielding pseudo-source audio \(\hat{A}_{t}^{(s)}\) and then (2) reverberates \(\hat{A}_{t}^{(s)}\) by "adding" in the room acoustics of image \(V_{t}\) to regenerate the target audio: \(f(\hat{A}_{t}^{(s)},V_{t})\approx A_{t}\). A naive joint training of two such modules--a dereverberator and a reverberator--would collapse to a trivial solution of \(f\) doing nothing. A better solution would pre-train the dereverberator with a well-trained audio-only model, yet as we show in experiments, this leaves signals of the target environment in the audio \(\hat{A}_{t}^{(s)}\) that handicap the reverberator at inference time, when such signals will not exist. The key insight of our approach is to make the dereverberation stage a _de-biasing_ stage that is explicitly trained to remove room acoustics information not available in the target image \(V_{t}\). This forces reliance on the visual target and helps our model \(f\) generalize to unseen audio and scenes. Our model has two main components that are trained jointly: (1) a de-biaser model \(G\) responsible for removing room acoustics from the target audio and (2) a visually-guided reverberator \(R\) that injects acoustics from the target environment into the output waveform. We first introduce the de-biaser architecture (Sec. 3.1), followed by our novel acoustic residue metric that is optimized during de-biaser training (Sec. 3.2), and our training strategy that allows joint fine-tuning of the reverberator and de-biaser (Sec. 3.3). Finally, we present our approach for training self-supervised VAM (Sec. 3.4). Figure 2 overviews our model. ### De-biasing Conditional Generative Adversarial Network The role of our de-biaser is to dereverberate audio samples in a way that minimizes any residual room acoustics information. To that end, we devise an adversarial de-biaser based on MetricGANs [12; 14]. MetricGANs are similar to conventional generative adversarial networks (GAN) [17]--with a generator that aims to enhance a speech waveform--except the discriminator's job is to mimic the behavior of some target _quality function_. Our de-biaser module extends the basic MetricGAN, augmenting it with a novel acoustic residue metric (Sec. 3.2) and a training procedure (Sec. 3.3) that accounts for the evolving distribution of the de-biased audio. Our GAN consists of a generator \(G\), a discriminator \(D\), and an audio quality metric \(\mathcal{M}\). \(D\) is trained to approximate the quality metric \(\mathcal{M}\), and \(G\) is trained to maximize the metric score on generated data, using \(D\) as a learned surrogate of the metric function \(\mathcal{M}\). 
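Before detailing the GAN losses, a minimal PyTorch-style sketch of the reconstruction half of this objective may be helpful; every name below is an illustrative placeholder (not the authors' released code), the de-biaser \(G\) is held fixed here, and its adversarial de-biasing update is what the next subsections describe.

```python
import torch
import torch.nn.functional as F

# Illustrative names only: G is the de-biaser, R_v the visually-guided
# reverberator, and (A_t, V_t) an unpaired target-audio / target-image pair.
def reverberator_step(G, R_v, A_t, V_t, optimizer):
    """One reconstruction step: regenerate A_t from its own de-biased version."""
    with torch.no_grad():
        pseudo_source = G(A_t)              # \hat{A}_t^{(s)}: strip room acoustics
    prediction = R_v(pseudo_source, V_t)    # re-add the acoustics pictured in V_t
    loss = F.mse_loss(prediction, A_t)      # f(\hat{A}_t^{(s)}, V_t) ≈ A_t
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # updates R_v only; G is trained
    return loss.item()                      # adversarially (Secs. 3.1-3.2)
```

Keeping \(G\) out of this reconstruction update is what prevents the trivial "do nothing" collapse mentioned above; \(G\) is instead pushed by the adversarial objective defined next.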
\(G\) is a conditional generator: given an input waveform, it produces a modified waveform which optimizes the quality metric score. Let \(\{A_{t},V_{t}\}\) be a dataset of target audio-image pairs, and let \(\mathcal{M}(A_{t},V_{t})\in[0,1]\) be our quality metric \(\mathcal{M}\) (defined below) that produces a scalar measure of the residual room acoustic information in speech. As in the conventional GAN framework, we alternate between discriminator and generator updates. During an epoch of discriminator training, \(D\) trains to approximate the metric function \(\mathcal{M}\)'s scores on both raw audio \(A_{t}\) and generated audio \(G(A_{t})\). The discriminator loss function is: \[\mathcal{L}_{D}=\|D(A_{t})-\mathcal{M}(A_{t},V_{t})\|_{2}+\|D(G(A_{t}))- \mathcal{M}(G(A_{t}),V_{t})\|_{2}+\|D(A_{hist})-s_{hist}\|_{2}, \tag{1}\] where the first and second terms incentivize \(D\) to produce score estimates that approximate the metric function when evaluated on raw input audio (\(A_{t}\)) and generated audio \(G(A_{t})\), respectively. Following [12], the third term trains the discriminator on samples from a historical replay buffer \(\{(A_{hist},s_{hist})\}\), where \(A_{hist}=G_{prev}(A_{t})\) is a generated sample from a previous epoch, and \(s_{hist}=\mathcal{M}_{prev}(G_{prev}(A_{t}),V_{t})\) is its associated metric score. Training on these historical samples helps improve stability and mitigates issues with catastrophic forgetting in the discriminator. The generator is trained with an adversarial loss, using the discriminator \(D\) learned from the previous epoch of discriminator training (which depends only on \(A_{t}\)) as a surrogate of the true metric \(\mathcal{M}\) (which depends on both \(A_{t}\) and \(V_{t}\)). The generator loss is \[\mathcal{L}_{G}=\|D(G(A_{t}))-1\|_{2}. \tag{2}\] Our metric is normalized to produce scores between 0 and 1 (1 being optimal), so this loss forces \(G\) to generate audio that maximizes the estimated metric score. Next, we introduce our metric \(\mathcal{M}\). ### Acoustic Residue Metric Rather than train the de-biaser GAN to optimize a generic speech quality metric [12; 14], we wish to quantify the amount of _residual room acoustics information_ in an audio sample. Hence, we define a metric \(\mathcal{M}\) that allows the downstream reverberator model \(R\) itself to quantify the level of residual acoustic information in the waveform. Specifically, the metric consists of two models trained to perform reverberation on dereverberated speech. Importantly, one model \(R_{v}\) has been trained to perform VAM (using the target image as conditioner), while the other, \(R_{b}\), has been trained to perform blind acoustic matching (without the target image as conditioner). We next define the reverberator modules, and then return to their role in \(\mathcal{M}\). Inspired by recent work in time-domain signal modeling [41; 34; 5], we use a WaveNet-like architecture for the reverberators consisting of multiple stacked blocks of 1D convolutional layers, with an optional gated fusion network to inject visual information for \(R_{v}\). Similar to [5], the reverberators use a sinusoidal activation function followed by two separate 1D conv layers that produce residual and skip connections, the latter being mean pooled and fed to a decoder to produce reverberated audio. We choose this model because it is parameter-efficient, consisting entirely of 1D convolutions, and because it allows time-domain conditional waveform generation. See Sec. 5 for training details. 
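For concreteness, a hedged sketch of the MetricGAN-style objectives in Eqns. (1) and (2) follows; the tensor shapes, reductions, and function signatures are our assumptions, with `D`, `M`, `G_A_t`, `A_hist`, and `s_hist` standing in for the discriminator, the quality metric, generated audio \(G(A_t)\), and the replay-buffer samples and scores.

```python
import torch

def discriminator_loss(D, M, A_t, V_t, G_A_t, A_hist, s_hist):
    """Eqn (1): D regresses the metric M on raw, generated, and replayed audio."""
    l_raw  = (D(A_t)    - M(A_t, V_t)).pow(2).mean()
    l_gen  = (D(G_A_t)  - M(G_A_t, V_t)).pow(2).mean()
    l_hist = (D(A_hist) - s_hist).pow(2).mean()   # historical replay buffer term
    return l_raw + l_gen + l_hist

def generator_loss(D, G_A_t):
    """Eqn (2): push the surrogate score of generated audio toward 1 (optimal)."""
    return (D(G_A_t) - 1.0).pow(2).mean()
```

As in the text, the two losses are alternated: an epoch of discriminator regression against \(\mathcal{M}\), then a generator update that uses the frozen discriminator as a surrogate for the metric.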
We use these models to compute the acoustic residue metric. Given input audio \(A\) and image \(V\), our metric is defined as: \[\mathcal{M}(A,V)=\sigma\left(\frac{|\mathcal{RT}(R_{b}(A))-\mathcal{RT}(A_{t} )|-|\mathcal{RT}(R_{v}(A,V))-\mathcal{RT}(A_{t})|}{\max(0.1,\mathcal{RT}( A_{t}))}\right), \tag{3}\] where \(A_{t}\) denotes the known target audio and \(\mathcal{RT}\) is a scalar-output function characterizing the general reverberant properties of its input audio, which we define below. Eqn. 3 quantifies the level of acoustic residue--that is, how much greater the blind reverberator's error is compared to the visually-guided reverberator's error. If de-biasing of \(A\) has gone well, this value will be high. When evaluated on audio that contains a high level of residual acoustic information, the visual features will not provide additional useful information, resulting in similar performance by both visual and blind reverberation models. In other words, if the two errors are similar, the visual input is not adding much, and there is room acoustic information lingering in the audio \(A\). This pushes the \(\mathcal{M}\) score to be smaller (poor quality under the metric). On the other hand, when a waveform contains very little residual acoustic information, the visual features will help the visual model \(R_{v}\) produce a more accurate acoustically matched waveform, with lower error compared to the blind model \(R_{b}\). This will result in a higher metric score. Figure 4 (left) visualizes the effect of de-biasing. Reverberant audio (left) has been imperfectly dereverberated (middle), leaving residual reverberation trails which contain information about the original acoustic environment. De-biased audio (right) removes these residual artifacts. The waveform plots (right) show de-biased audio (green) significantly attenuates the long sound decay present in both reverberant (blue) and dereverberated (orange) audio. This forces the reverberator to learn acoustics from the target image. For the function \(\mathcal{RT}\) in Eqn. 3, we leverage a classic content-invariant metric for characterizing room acoustics: the Reverberation Time at 60 dB, or "RT60". RT60 is the amount of time after a sound source ceases that it takes for the sound pressure level to reduce by 60 dB--which depends directly on the geometry and materials of the space. For example, no matter the initial direct sound, a big cathedral will yield a high RT60, and a cozy living room will yield a low RT60. While RT60 can be quantified by sensing when one has access to the physical environment, we use a learned estimator for RT60 to allow generalization (see Sec. 4 for details). In Eqn. 3, the normalization by the RT60 of the target audio improves stability of discriminator training. We use the clipping function \(\max(0.1,\cdot)\) here to prevent samples with extremely low RT60 from destabilizing training. Training with this acoustic residue metric allows the downstream reverberator models \(R_{v},R_{b}\) themselves to improve the performance of the de-biaser model \(G\). Unlike SRMR, DNSMOS [33], or any existing off-the-shelf metric that quantifies dereverberation, our metric directly addresses the problem of _residual_ acoustic information in audio. Although \(G\) may learn a function similar to that of dereverberation, we use the term de-biaser to describe the generator to highlight the novel training objective it is trained against, which distinguishes it from a conventional dereverberation model. Figure 2: **LeMARA overview. 
a) Training procedure. Reverberant audio is first processed with an off-the-shelf dereverberator. It is then input to a de-biaser model which strips acoustics from the audio. The clean audio is passed to the reverberator along with the target image for acoustic matching. b) De-biaser architecture. \(G\) is trained to adversarially maximize the score of the discriminator \(D\), which learns a surrogate of the acoustic residue metric \(\mathcal{M}\) (Sec. 3.1 and 3.2). c) Acoustic residue metric. Both \(R_{v}\) and \(R_{b}\) are continually trained on generated data to ensure that they provide accurate metric scores as the distribution of generated data evolves during training (Sec. 3.3). At test time, we use the trained de-biaser \(G\) and the visual reverberator \(R_{v}\) to perform VAM.** ### Joint Training of the De-biaser and Reverberators At initialization, \(R_{v}\) and \(R_{b}\) are trained on a certain distribution of speech. When training the de-biasing GAN with the acoustic residue metric, generated speech can eventually fall out of the distribution on which these reverberators were trained, causing \(\mathcal{M}\) to produce unreliable metric scores that destabilize training. To address this, we introduce a strategy to update \(R_{v}\) and \(R_{b}\) during training of the de-biasing GAN. Updating these models ensures that \(\mathcal{M}\) consistently produces reliable acoustic residue scores as the distribution of generated speech shifts over the course of GAN training. At the start of GAN training, we initialize the _target networks_ \(R_{v}^{t}\) and \(R_{b}^{t}\) as copies of \(R_{v}\) and \(R_{b}\) respectively. During discriminator training, each batch of \(\{(G(A_{i}),V_{i})\}\) samples is passed to \(R_{v}\) and \(R_{b}\) to compute metric scores under their current frozen state. This batch is also passed to the target networks, which compute the losses \[\mathcal{L}_{\text{visual}}=\|R_{v}^{t}(G(A),V)-A\|_{2} \tag{4}\] \[\mathcal{L}_{\text{blind}}=\|R_{b}^{t}(G(A))-A\|_{2} \tag{5}\] and perform an update step. Every \(E\) epochs, the target networks' weights are copied over into the metric networks. This update strategy allows \(R_{v}\) and \(R_{b}\) to be jointly fine-tuned with the de-biaser model \(G\). Figure 2 overviews the model components and data flow. ### Training and Inference Training proceeds in three steps. (1) First we pretrain the de-biaser \(G\). This entails pretraining a MetricGAN-U [14] with the speech-to-reverberation modulation energy ratio (SRMR) metric [11] on speech pre-processed with our off-the-shelf dereverberator (see Sec. 5). By refining the dereverberated output with the MetricGAN-U, we improve its quality and intelligibility without introducing additional supervision requirements. (2) Second, we pretrain the reverberators that perform (visual) acoustic matching. Specifically, we train \(R_{v}\) and \(R_{b}\) with \(\mathcal{L}_{visual}\) and \(\mathcal{L}_{blind}\), respectively, using the dereverberated and SRMR-optimized outputs from the MetricGAN in step (1). (3) Finally, we jointly fine-tune both the de-biaser \(G\) and reverberators, using the acoustic residue metric (Eqn. 3) for the GAN metric \(\mathcal{M}\), the generator and discriminator losses given in Eqns. 2 and 1, together with our alternating training scheme defined in Sec. 3.3. 
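A small sketch of the acoustic residue metric (Eqn. 3) that serves as \(\mathcal{M}\) in step (3) is given below; it assumes a learned scalar RT60 estimator `rt60` as described in Sec. 3.2, and `R_v` / `R_b` are the visually-guided and blind reverberators. All names are illustrative, not the authors' implementation.

```python
import torch

def acoustic_residue_metric(A, V, A_t, R_v, R_b, rt60):
    """Eqn (3): compare blind vs. visually-guided reverberation error via RT60.

    A higher score (toward 1) means less residual acoustic information in A,
    i.e. the target image V was genuinely needed to recover A_t's acoustics.
    """
    rt_target  = rt60(A_t)
    err_blind  = (rt60(R_b(A))    - rt_target).abs()
    err_visual = (rt60(R_v(A, V)) - rt_target).abs()
    score = (err_blind - err_visual) / torch.clamp(rt_target, min=0.1)
    return torch.sigmoid(score)   # normalized to (0, 1), as in the paper
```

The text below explains how this score is blended with SRMR during fine-tuning for stability.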
To improve stability in training, since the discriminator \(D\) starts with a good approximation of the SRMR metric, we continue training in step 3 using a weighted combination of SRMR and our residue metric: \(\alpha\text{SRMR}(A)+(1-\alpha)\mathcal{M}(A,V)\). At test time, we use LeMARA for visual-acoustic matching as follows: given a source audio \(A_{s}^{(q)}\) and target environment image \(V_{t}^{(q)}\), we apply the trained de-biaser \(G\) followed by the visual reverberator \(R_{v}\): \[f(A_{s}^{(q)},V_{t}^{(q)})=R_{v}(G(A_{s}^{(q)}),V_{t}^{(q)}). \tag{6}\] Altogether, our approach adds the room acoustics depicted in \(V_{t}^{(q)}\) to the input audio. In the case that the source audio \(A_{s}^{(q)}\) is known to be anechoic (e.g., a user is using LeMARA to re-synthesize stock anechoic sounds for new scenes), then we simply bypass the de-biaser \(G\) and directly apply \(R_{v}\). ## 4 Datasets We use two datasets: SoundSpaces-Speech [7] and AVSpeech [10]. See Figure 3. For all results, we test only on audio and environments that are not observed during training. SoundSpaces-SpeechFor SoundSpaces-Speech, we discard the source audio and only use the target audio (simulated reverberant audio). We use train/val/test splits of 28,853/280/1,489 samples. AVSpeech-RoomsAVSpeech [10] is a large-scale video dataset consisting of 3-10 second clips from YouTube videos, most of which feature a single speaker and little background noise. Not all of the clips have naturalistic audio-visual correspondence, due to video editing tricks, microphone locations, virtual backgrounds, etc. Hence, we create a subset of AVSpeech, called AVSpeech-Rooms, that preserves samples with useful information about room geometry and material types (See Supp. for details). A randomly selected frame from the video clip is used as the target image. Our final set consists of 72,615/1,911/1,911 train/val/test samples. See Figure 3(right). ## 5 Experiments Implementation DetailsWe use the procedure outlined in Sec. 3.4 to train on SoundSpaces-Speech. For training on AVSpeech-Rooms, we proceed directly to step (2), pre-training \(R_{v}\) and \(R_{b}\) on audio that has been de-biased using the fine-tuned SoundSpaces-Speech de-biaser model (instead of an SRMR-optimized model trained on AVSpeech-Rooms). We then proceed to step (3) as in SoundSpaces-Speech. We refer to this setup as "shortcut training" to highlight our use of the SoundSpaces-Speech trained de-biaser to bypass step (1) when training on AVSpeech-Rooms. While AVSpeech can be trained with the full procedure outlined in Sec. 3.4 (see ablations in Supp.), shortcut training allows us to take advantage of the strong prior for de-biasing learned by the fine-tuned SoundSpaces de-biaser. We train LeMARA using the combined acoustic residue metric with \(\alpha=0.7\). We train a WaveNet-based dereverberator [34] ("off-the-shelf") on paired SoundSpaces-Speech audio which is "reversed" (reverberant input audio, anechoic target audio). We use this dereverberator for both LeMARA and the ViGAS [5] baseline. Prior VAM work [3] used an audio-visual dereverberator [7] trained on both simulated and real-world data to pre-process reverberant audio. For fair evaluation, we train their model with their original audio-visual dereverberator.3 Footnote 3: [https://github.com/facebookresearch/visual-acoustic-matching](https://github.com/facebookresearch/visual-acoustic-matching) We adapt our code for the reverberator models and ViGAS from [34].4. 
We train ViGAS with the same hyperparameters and loss as our reverberators during pre-training. Our de-biaser is adapted from the speechbrain MetricGAN-U implementation [32]. Our RT60 estimator is trained on reverberant samples from a SoTA audio simulator [6], each paired with the ground truth RT60 of the RIR used to produce it. We use a ResNet18 [18] model to encode our visual features from RGB images. The last feature map before the final pooling layer is flattened and used as the visual feature conditioner. See Supp. for training details and architecture for these models. We plan to release our code to facilitate further research. Footnote 4: [https://github.com/facebookresearch/BinauralSpeechSynthesis](https://github.com/facebookresearch/BinauralSpeechSynthesis) Figure 3: **Datasets. SoundSpaces-Speech** (left) renders panoramic views of people in indoor environments. **AVSpeech-Rooms** (right) contains a wide variety of naturalistic indoor environments with diverse acoustic properties. MetricsWe rely on two metrics to evaluate the quality of VAM models: **STFT Error**, which computes the MSE loss between the magnitude spectrograms of predicted and target speech, and **RT60 Error (RTE)**, which measures the MSE between RT60 estimates of predicted and target speech. The former applies only when we have ground truth re-synthesized audio (in simulation), while the latter is content-invariant and captures room acoustics signatures for any target. BaselinesAs baselines, we compare to state-of-the-art models for audio-visual re-targeting: (1) **AViTAR**[3], the only prior method that addresses the visual-acoustic matching task. It consists of a Transformer for audio-visual fusion, followed by a generator that synthesizes the reverberant waveform from the audio-visual latent feature. As discussed above, for self-supervised training, the authors use a pre-trained audio-visual dereverberation model [7] to create pseudo-source audio, which is passed as input to the model. (2) **ViGAS**[5], a model designed for novel-view acoustic synthesis, conditioned on a camera pose. We adopt its Acoustic Synthesis module, a WaveNet model based on [34], for our task. To apply it to VAM, we replace the camera pose with the flattened feature from the ResNet. (3) **Non-visual LeMARA**. We evaluate LeMARA with the blind reverberator \(R_{b}\) fine-tuned during training. (4) **Input audio**. We copy the dereverberated audio to the output. AViTAR and ViGAS are trained with the data augmentation strategy introduced in [3] (see Supp. for details). **Results on SoundSpaces-Speech** Table 1 (left two columns) shows results on SoundSpaces-Speech. LeMARA outperforms the baselines on all metrics. Non-visual LeMARA performs significantly worse, indicating LeMARA effectively utilizes visual features during acoustic matching. This shows the success of our acoustics de-biasing, which forces stronger learning from the visual stream. Notably, our model significantly outperforms ViGAS--which shares the same architecture as our reverberator--indicating that our performance improvement over AViTAR can be attributed to our novel training objective, and not simply due to a shift in architecture from Transformer to WaveNet. **Results on AVSpeech-Rooms** Table 1 (right three columns) shows the results on AVSpeech-Rooms. 
We test in two scenarios: (1) where the source audio and target visual come from the same AVSpeech-Rooms sample (unobserved during training) and (2) where the source audio comes from LibriSpeech [29], a dataset of anechoic source samples of people reading passages in English. In both cases the target visual environment is a frame from an unseen AVSpeech video. LeMARA outperforms the baselines in both settings on RTE. We outperform ViGAS in the LibriSpeech scenario despite sharing the same reverberator architecture, highlighting the efficacy of our novel training objective. Figure 4 (right) shows the distribution of RT60 values for audio reverberated by our model (pink), the baselines, and the target RT60 distribution (orange). Our model best matches the target RT60 distribution. Although our model performs poorly on STFT error on AVSpeech-Rooms, the naive baseline achieves the lowest STFT error, indicating that here the dereverberated audio strongly resembles reverberant audio prior to acoustic matching. Models that use this dereverberator without further de-biasing will display artificially low STFT error when evaluated in-dataset (AVSpeech-Rooms \(\rightarrow\) AVSpeech-Rooms). Thus, it is important to balance the in-dataset evaluation with the LibriSpeech generalization case (far right in Table 1) to gain a complete picture of model performance. \begin{table} \begin{tabular}{c|c c|c c|c} \hline \hline Train & \multicolumn{2}{c|}{_SoundSpaces-Speech_} & \multicolumn{3}{c}{_AVSpeech-Rooms_} \\ Test & \multicolumn{2}{c|}{_SoundSpaces-Speech_} & \multicolumn{2}{c|}{_AVSpeech-Rooms_} & \multicolumn{1}{c}{_LibriSpeech_} \\ Model & RTE (s) & STFT & RTE (s) & STFT & RTE (s) \\ \hline Input audio (naive) & 0.3204 & 1.4267 & 0.3101 & **1.3265** & 0.3582 \\ AViTAR [3] & 0.0804 & 2.4110 & 0.1359 & 2.8944 & 0.2390 \\ ViGAS [5] & 0.1079 & 4.3726 & 0.1086 & 7.0065 & 0.2539 \\ \hline LeMARA w/o visual & 0.1516 & 5.6124 & 0.1370 & 6.2560 & 0.2225 \\ LeMARA (ours) & **0.0789** & **0.6905** & **0.0713** & 6.2982 & **0.2100** \\ & \(\pm\)0.005 & \(\pm\)0.031 & \(\pm\)0.002 & \(\pm\)0.64 & \(\pm\)0.002 \\ \hline \hline \end{tabular} \end{table} Table 1: VAM results on two datasets. Our approach improves the state of the art (see text). Figure 4: **De-biaser (left).** De-biased audio (right) removes residual acoustic traces in audio, and attenuates long sound decay faster than dereverberated speech. **RT60 distribution (right).** The distribution of RT60 values for LeMARA reverberated audio more closely matches the ground truth target distribution. An ablation study of our model analyzing the impact of varying the target metric during training shows the clear advantage of our residue metric compared to SRMR alone (see Supp.). In particular, training with an SRMR objective alone yields an RTE of 0.2308 on LibriSpeech, whereas training with a pure acoustic residue metric yields an RTE of 0.2156; our combined metric yields an RTE of 0.2123. This reinforces the efficacy of our metric as a training objective: de-biasing is distinct from generic enhancement. Figure 5 shows examples of our model's generated sounds for different target images, compared to the output of AViTAR [3] (the best performing baseline). We display the RT60 of the source audio, the visual scene's ground truth RT60, and the RT60 of the two methods' generated audio. In the majority of cases, our model produces audio that more closely matches the true acoustics of the visual scene, as measured by RT60. 
Our model learns to add little reverberation in enclosed environments with soft surfaces (top left), and to add more reverberation in open acoustic environments with hard surfaces (bottom right). We also highlight a failure mode in which our model does not inject proper acoustics (bottom left), likely due to the irregular shape of the room. Figure 5: **Acoustically matched audio.** LeMARA produces audio with more accurate acoustics than AViTAR [3] across diverse acoustic and visual scenes. Shown here for the LibriSpeech setting. Human perception studyWe augment these quantitative results with a human subject study, in order to gauge how listeners perceive our results as successfully retargeting the audio to the pictured environment. We design a survey using 23 images \(V_{t}^{(q)}\) selected from AVSpeech-Rooms that show room geometry and materials clearly, and are representative of a diverse variety of acoustic environments. We couple those with 23 anechoic source samples \(A_{s}^{(q)}\) from LibriSpeech [29]. For each sample, we generate the acoustically matched audio with both LeMARA and AViTAR [3]--the best performing baseline. We anonymize and shuffle the generated audio, and ask 10 subjects with normal hearing to identify which room matches best with the audio among three given rooms, one of which is the true room \(V_{t}^{(q)}\). Users correctly identified the target room with 46.1% accuracy on speech generated by LeMARA versus 34.7% accuracy with speech generated by AViTAR. This shows our model achieves higher quality acoustic matching according to human perception. That said, the subjects' accuracy rates are fairly low in the absolute sense, which suggests both the difficulty of the problem and subtlety of the perception task. Our results have pushed the state of the art, both objectively and subjectively, but there remain challenges to fully mature visual acoustic matching. ## 6 Conclusions and Future Work We introduced a self-supervised approach to visual acoustic matching. Built on a novel idea for disentangling room acoustics from audio with a GAN-debiaser, our model improves the state of the art on two datasets. Our acoustic residue metric and adversarial training concept have potential to generalize to other multi-modal learning settings where there is risk of unintentionally silencing a paired modality during training. For example, our framework could be explored for binauralization of mono sounds using video or audio-visual source separation. In future work, we plan to explore generalizations to spatial dynamics that would account for movement of a speaker throughout 3D space. Please see our Supp. video. Supplementary In this supplementary material we provide the following: 1. A video for qualitative evaluation of our model's performance (7.1). 2. Details regarding AVSpeech-Rooms curation (7.2) (referenced in Sec. 4 of main paper) 3. Details on our ablation study with different metric training objectives (7.3) (referenced in Sec. 5 -- "Results on AVSpeech-Rooms" of main paper) 4. A sample survey slide from our human perception study (Figure 6) 5. Model/training details for our RT60 estimator, de-biaser, discriminator, and reverberator (7.4) (referenced in Sec. 5 -- "Implementation Details" of main paper) 6. Details on our data augmentation strategy (7.5) (referenced in Sec. 5 -- "Baselines" of main paper) 7. 
A brief discussion of our work's limitations and broader impact (7.6, 7.7) ### Supplementary Video Our video contains several illustrative examples generated by LeMARA on both SoundSpaces-Speech and AVSpeech-Rooms. We provide audio generated by the current state-of-the-art (AViTAR) for reference on each example. We recommend wearing headphones for a better listening experience. ### AVSpeech-Rooms Acoustic AVSpeech consists of audio clips from YouTube videos along with an RGB image frame selected randomly from the corresponding video clip. To create AVSpeech-Rooms, we design a set of criteria which we use to filter out samples in which the image contains uninformative, non-natural, or misleading acoustic information about the space. We focus on cases in which the room is not visible, a microphone is being used, or a virtual background/screen is present -- any of which will disturb the natural room acoustics for the speaker's voice. We query each sample with our criteria using a Visual Question Answering (VQA) model [20], which we found more reliable than manual annotations we originally obtained on MTurk. Table 2 contains information about our criteria. ### Ablations Table 3 displays our experiments with different self-supervised training objectives. We report performance on the LibriSpeech evaluation setting. The first three rows correspond to experiments in which we do not utilize the shortcut training strategy (referenced in Sec. 3 -- "Training" of main paper). Using SRMR alone (row 1) produces the largest (worst) RTE. Training with the acoustic residue metric instead (row 2) leads to a large improvement in RTE, providing empirical support for our metric as an effective training objective. Using our combined metric and the shortcut training strategy (both described in Sec. 3 -- "Training" of our main paper) further improves the performance by a small margin. \begin{table} \begin{tabular}{l c c} \hline \hline Question & Answer & \begin{tabular}{c} dataset \\ \% \\ \end{tabular} \\ \hline \hline Is a microphone or headset visible in the image? & yes & 7.2 \\ Is there a whiteboard/blackboard in the background? & yes & 3.4 \\ Is the entire background one solid color and material? & yes & 23.4 \\ Is there a large projector screen covering most of the background? & yes & 2.2 \\ Is part or all of the background virtual? & yes & 1.3 \\ Are there multiple screens in the image? & yes & 3.5 \\ Is the wider room clearly visible? & no & 3.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Filtering criteria and % of Acoustic AVSpeech samples removed. ### Model/Training details RT60 estimatorWe adopt the RT60 estimator from [3]. The estimator takes a spectrogram as input, encodes it with a ResNet18 [18], and outputs a scalar RT60 estimate. The model is trained on 2.56s clips of reverberant speech simulated on the SoundSpaces platform [4] paired with the ground truth RT60 computed from the RIR used to generate the reverberant speech. The model trains using MSE loss between predicted and ground truth RT60 values. Ground truth RT60 is computed using the Schroeder method [37]. De-biaser architectureThe de-biaser \(G\) takes a magnitude spectrogram as input. This is passed to a bi-directional LSTM with input size 257 and two hidden layers each of size 200, which produces an output with the same temporal length as the input spectrogram. This is passed through a linear layer of size 300 and a leakyReLU activation, followed by another linear layer of size 257 and a Sigmoid activation. 
The final mask is multiplied with the input magnitude spectrogram to create the generated magnitude spectrogram. A resynthesis module computes phase information from the input audio waveform, combines this with the generated magnitude spectrogram, and performs an inverse STFT to produce the generated waveform. The discriminator \(D\) consists of four 2D convolutional layers with kernel size (5,5) and 15 output channels, followed by a channel averaging operation and two linear layers of sizes 50 and 10. A LeakyReLU activation with negative slope = 0.3 is used after each intermediate layer. The final layer outputs a scalar-valued metric score estimate. De-biaser trainingIn stage (1) (see Sec. 3 -- "Training" of our main paper), we train with batch size 32. During stage (3) fine-tuning, we use a batch size of 2. \(G\) and \(D\) are trained with learning rates of 2e-6 and 5e-4 respectively in both stages. In each epoch, we train on 10k samples randomly selected from the train set without replacement. The reverberator models \(R_{v}\) and \(R_{b}\) are updated with the target networks at a frequency of \(E=8\) epochs. For all models, we clip each audio sample to 2.56s during training and evaluation. Reverberator trainingWe train the reverberators with batch size 4 and a learning rate of 1e-2 in stage (2). During stage (3) fine-tuning, we use batch size 2 and a learning rate of 1e-6. Both reverberator models and the ViGAS baseline are trained with MSE loss between the log magnitude spectrogram of predicted and ground truth audio. \begin{table} \begin{tabular}{c c} \hline \hline & _LibriSpeech_ \\ Metric & RTE \\ \hline SRMR [11] & 0.2308 \\ AR & 0.2156 \\ AR (combined) & 0.2123 \\ AR (combined) w/ shortcut & **0.2100** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study using different metric training objectives. AR denotes the proposed acoustic residue metric. Figure 6: **Human perception study. The instructions given to the user at the start of the survey (left), and a sample slide from the survey (right). The user is asked to listen to the audio clip, and identify which room image most closely matches its acoustics.** Baseline training detailsWe use a learning rate of 1e-2 and a batch size of 4 to train ViGAS. We train AViTAR with batch size 4 -- all other hyperparameters are set as described in [3]. ComputeAll models are trained on 8 NVIDIA Quadro RTX 6000 GPUs. ### Augmentation strategy We follow a data augmentation strategy similar to that proposed in [3] for training the baseline models, which was shown to produce better generalization performance on the LibriSpeech setting than when trained without this augmentation strategy. In particular, to each batch of dereverberated audio we add colored noise, perform a polarity inversion on the waveform with \(p=0.5\), and convolve the waveform with a randomly selected Room Impulse Response (RIR) from a different acoustic environment with \(p=0.9\). At test time, we evaluate without these audio augmentations. This strategy is designed to mask over residual acoustic information in dereverberated audio during training. We do not use this augmentation strategy in our approach as our model directly learns to remove residual acoustic information, obviating the need for a heuristic strategy to mask it out. ### Limitations Our approach focuses on visual acoustic matching on mono-channel audio exclusively. However, binaural cues in audio play a fundamental role in our perception of reverberation and room acoustics [8]. 
We leave it to future work to extend our approach to binaural audio. ### Broader impact While training on in-the-wild web videos allows wider access to a diverse variety of speakers and environments, it also introduces uncontrolled biases, speaker privacy concerns, and potentially harmful content into the model. ### Data examples Refer to video to view samples from both SoundSpaces-Speech and AVSpeech-Rooms.
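Looking back at the de-biaser generator described in Sec. 7.4, a minimal PyTorch sketch of that mask-based architecture follows. The 400-dimensional bi-LSTM output feeding the first linear layer (two directions × hidden size 200) is our reading of the description, the LeakyReLU slope is left at its default, and the phase-based resynthesis module is omitted; treat this as a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DebiaserGenerator(nn.Module):
    """Mask-based de-biaser generator following the description in Sec. 7.4."""
    def __init__(self):
        super().__init__()
        # Bi-directional LSTM: input size 257, two layers, hidden size 200
        self.blstm = nn.LSTM(input_size=257, hidden_size=200, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.fc1 = nn.Linear(400, 300)   # 2 x 200 from the bi-directional LSTM
        self.act = nn.LeakyReLU()
        self.fc2 = nn.Linear(300, 257)
        self.sigmoid = nn.Sigmoid()

    def forward(self, mag_spec):         # mag_spec: (batch, time, 257)
        h, _ = self.blstm(mag_spec)
        mask = self.sigmoid(self.fc2(self.act(self.fc1(h))))
        return mask * mag_spec           # masked magnitude spectrogram
```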
2303.07197
An alternative explanation of the 'spokes' observed in Saturn's rings
Observed first by amateur astronomer Stephen J. O'Meara in the 1970s and then subsequently observed by the Voyager Spacecraft flybys in the early 1980s, it was realised that the 'spokes' flare out like spokes on a bicycle wheel. The observed behaviour of the 'spokes' indicates that they are not governed by gravitational interactions with the planets, moons, or ring material. In 2005 the Cassini probe confirmed that the 'spokes' are likely under the influence of the gas giant's global magnetic field. Here we show that the 'spokes' that appear in Saturn's rings consist of grains of silicates coated in pyrolytic carbon through the process of Chemical Vapour Deposition (CVD). Pyrolytic carbon is a highly diamagnetic substance that can levitate above a sufficiently strong magnetic field. The 'spokes' also consist of ice particles that are diamagnetic as well. The photoelectric effect can be used to explain why the silicates coated in pyrolytic carbon return to the main ring structure when exposed to sunlight of a specific frequency. The pyrolytic carbon grains become paramagnetic when some of the unhybridised 2pz orbitals lose their unpaired delocalised electrons, thus collapsing the pi bond molecular orbital structure. The pyrolytic carbon grains are now attracted to the magnetic field emanating above and below the main ring structure. It is suggested that the 'spokes' in Saturn's B ring are always present and no plasma triggering event is required to increase plasma density. The 'spokes', however, are only visible when a favourable viewing angle is allowed, and their visibility is also dependent on the angle of the sunlight hitting Saturn's rings.
Fenton John Doolan
2023-03-10T13:25:39Z
http://arxiv.org/abs/2303.07197v4
# An alternative explanation of the 'spokes' observed in Saturn's Rings ###### Abstract Observed first by amateur astronomer Stephen J. O'Meara in the 1970s and then subsequently observed by the Voyager Spacecraft flybys in the early 1980s (Fig. 1), it was realized that the 'spokes' flare out like spokes on a bicycle wheel [2]. The observed behaviour of the 'spokes' indicates that they are not governed by gravitational interactions with the planet, moons, or ring material. In 2005 the Cassini probe confirmed that the 'spokes' are likely under the influence of the gas giant's global magnetic field [20]. Here we show that the 'spokes' that appear in Saturn's rings consist of grains of silicates coated in pyrolytic carbon through the process of Chemical Vapour Deposition (CVD). Pyrolytic carbon is a highly diamagnetic substance and can levitate above a sufficiently strong magnetic field. The 'spokes' also consist of ice particles that are diamagnetic as well. The photoelectric effect can be used to explain why the silicates coated in pyrolytic carbon return to the main ring structure when exposed to sunlight of a specific frequency. The pyrolytic carbon grains become paramagnetic when some of the unhybridised 2p\({}_{z}\) orbitals lose their unpaired delocalised electrons, thus collapsing the \(\pi\) bond molecular orbital structure. The pyrolytic carbon grains are now attracted towards the magnetic field emanating above and below the main ring structure. It is suggested that the 'spokes' in Saturn's B ring are always present and that no plasma triggering event is required to increase plasma density. The 'spokes', however, are only visible when a favourable viewing angle is allowed, and their visibility is also dependent on the angle of the sunlight hitting Saturn's rings. Figure 1: Voyager's image of the spokes (Image credit: JPL NASA) ## 1 Introduction Various models have been proposed to explain the appearance of the 'spokes' observed in Saturn's rings. The most widely accepted model purports that meteorite bombardment of the rings produces a transient cloud of dense plasma that charges the dust, causing the dust to levitate above and below the plane of the rings. It is theorised that the 'spokes' are created by resonant interactions between the oscillations within the rings and the rotating magnetosphere [14]. Another model (Jones, 2006) suggests that the 'spokes' may be produced by lightning-induced electron beams striking the rings, at locations magnetically connected to thunderstorms. The researchers suggest that Saturn's ionospheric density controls the location of the 'spokes' formation. Electron beam propagation to the rings may produce the observed X-ray emissions and supply particles to Saturn's radiation belts, thus modifying the rings' composition over time [18]. Scientists (Goertz, 1984) suggest that the 'spokes' in Saturn's rings are formed by electrostatically charged dust particles that are suspended in Saturn's magnetic field [13]. As such, these particles rotate in sync with the planet rather than with its ring particles, which display Keplerian motion about Saturn. At times, due to the angle of incoming sunlight, these electrostatically charged dust particles lose their electrostatic charge and fall back into the main ring structure; hence the 'spokes' disappear at times. The 'spokes' have been observed by the Cassini spacecraft to form on a time scale of minutes and fade away in a few hours [10]. 
## 2 Hypothesis The small percentage of carbon that constitutes Saturn's rings is diamagnetic pyrolytic carbon. During the formation of Saturn's protoplanetary disk, it is hypothesised that pyrolytic carbon would have been deposited via Chemical Vapour Deposition (CVD) of hydrocarbon gases, such as methane, onto fine grains of silicates which acted as a substrate. These fine grains of silicates coated in pyrolytic carbon can levitate above or below a strong magnetic field due to pyrolytic carbon being highly diamagnetic. It is also suggested that Saturn's B ring has a sufficiently strong magnetic field emanating orthogonally above and below its plane to levitate these pyrolytic carbon grains. ## 3 Justification of Hypothesis In the laboratory it has been demonstrated that diamagnetic pyrolytic carbon levitates above a sufficiently strong magnetic field, see (Fig. 2). Pyrolytic carbon is a man-made substance, but it is predicted that Saturn's rings consist of a small percentage of pyrolytic carbon. This type of carbon is formed in a vacuum at high temperatures above 1400K; this process is known as flash vacuum pyrolysis. The dark 'spokes' which are observed in Saturn's B ring consist of levitating particles that transition periodically from motion in sync with the rotation of Saturn's magnetic field to normal Keplerian motion within the main ring. The dark 'spokes' are only observed in Saturn's B ring, which corresponds to the 1500K region in the protoplanetary disk formation (Fig. 3). ## 4 Research Research (Henning et al., 2013) suggests that during Saturn's formation the innermost parts of its protoplanetary disk would have reached these temperatures. In the protoplanetary diagram below (Fig. 3), the 1500K region would correspond to Saturn's B ring, which is why the dark 'spokes' are only observed in this ring. The temperature beyond the 1500K region would be too cold, so no dark 'spoke' formation would be observed beyond Saturn's B ring (Kasner et al., 2013). Figure 2: Levitating pyrolytic carbon (Image credit: scitoys.com) "Saturn's ring system is the closest thing we have to the disc of dust and rubble that gave birth to Earth and the other planets 4.55 billion years ago. The protoplanetary disc took shape when a spherical cloud of ultra-cold gas and dust began to collapse under its own gravity. As the spinning cloud shrank, it took the form of a disc, swirling around the newborn sun. Once the sun had blown away the gas, the disc of orbiting rubble would have resembled the disc of Saturn's ring system" says Professor Carl Murray [4]. Protoplanetary disk surface temperatures, masses, and the rate of mass infall onto the disks can be estimated by observations of suspected planet-forming disks. The solar nebula's temperature is constrained to locations and times as suggested by the ongoing analysis of the formation of primitive meteorites, comets, and their components. Disk temperatures are in good agreement with theoretical models of disks undergoing accretion of mass due to an infalling cloud envelope. As such, the predicted structure is a moderately warm (500-1500K) inner disk surrounded by a cool (50-150K) outer disk [1]. When thinking of the planet Saturn, one would naturally associate very cold temperatures with the planet, thus having the required temperature of 1500K to form pyrolytic carbon would seem unlikely. However, recent data (Fukuhara, 2020) suggests that Saturn's core is actually very hot. Figure 3: Protoplanetary disk formation (Image credit: astrochymist.org) 
Saturn's molten rocky metallic core has an estimated temperature of 12200 \({}^{\circ}\)C [12], making Saturn's core hotter than the surface of the Sun. ### Saturn's 'Ring Rain' chemical composition Hydrocarbons such as methane can be converted to pyrolytic carbon at temperatures above 1400K as indicated in equation (1) below. \[\mathrm{CH}_{4(g)}\;\rightarrow\;\mathrm{C}_{(s)}\;+\;2\;\mathrm{H}_{2(g)} \tag{1}\] Research (Spilker, 2019) also indicates that the Cassini mission found an abundance of various hydrocarbons in the 'rain' produced by Saturn's rings [26]; (Fig. 4) shows the composition of this 'ring rain'. It should be noted that silicates were also detected in the 'ring rain', which (Fig. 4) does not show. The researchers found methane, ammonia, carbon monoxide, molecular nitrogen, and carbon dioxide. The methane was totally unexpected, as was the carbon dioxide. What the researchers were expecting was a lot more water ice. Figure 4: Composition of ring rain (Image credit: NASA/JPL/SwRI) Molecular hydrogen was the most abundant constituent at all altitudes sampled. Water infall from the rings was observed, along with substantial amounts of methane, ammonia, molecular nitrogen, carbon monoxide, carbon dioxide, and impact fragments of organic nanoparticles [30]. The downpour from the rings included large amounts of water and hydrocarbons such as butane and propane [21]. ### Pyrolysis and Gasification Syngas, also called synthesis gas, is a mix of molecules containing hydrogen, methane, carbon monoxide, carbon dioxide, and water vapour, as well as other hydrocarbons and condensable compounds. It is a main product of gasification and a major product of high-temperature pyrolysis carried out on biomass, residues, and waste. When produced in pyrolysis, it is created by the vaporisation of volatile compounds from the raw material using heat, which sets off a series of complex reactions. Gases from pyrolysis typically contain large amounts of methane, hydrogen, carbon monoxide, and carbon dioxide, as well as larger hydrocarbons that increase their calorific value and make them important fuels for the chemical and energy industries [25]. Gasification is a process that converts biomass into gases, producing large quantities of nitrogen, carbon monoxide, hydrogen, and carbon dioxide. This is achieved by reacting the biomass at high temperatures (typically greater than 700 \({}^{\circ}\)C), without combustion, by controlling the amount of oxygen and/or steam present in the reaction. The resulting syngas is a highly flammable fuel due to the large quantities of hydrogen and carbon monoxide of which the gas is largely composed. Further reactions occur between the carbon monoxide and residual water from the organic material to form methane and excess carbon dioxide, equation (2) [15]. \[4\;{\rm CO}_{(g)}\;+\;2\;{\rm H}_{2}{\rm O}_{(l)}\;\rightarrow\;{\rm CH}_{4(g)}\;+\;3\;{\rm CO}_{2(g)} \tag{2}\] In gasification, reforming is a means to enhance the proportion of hydrogen by decomposing the hydrocarbons into carbon monoxide and hydrogen, equation (3). If the biomass is already made of hydrocarbons, reforming is the first stage towards syngas. 
Thus, Robin Canup's hypothesis that Saturn's rings were formed when a Titan-sized moon with a rocky core and an icy mantle spiralled into Saturn may not be required [3]. ### Diamagnetism The diamagnetic properties of a material are determined by a property called magnetic susceptibility, \(\chi\). The relationship between magnetic susceptibility and magnetic permeability is expressed in equation (4). A diamagnetic material has a magnetic susceptibility of less than zero and a minimum value of -1: \[\chi_{\nu}=\mu_{\nu}-1 \tag{4}\] \(\mu_{\nu}\) is the magnetic permeability of the material. Superconductors are the most diamagnetic material (-1.04 x 10\({}^{-3}\)) followed by pyrolytic carbon (-4.09 x 10\({}^{-4}\)) then bismuth (-1.66 x 10\({}^{-4}\)) see Table 1. Diamagnetic forces induced in materials by a magnetic field have a very different behaviour from inverse-square law forces. The diamagnetic force depends on the gradient of the squared magnetic field as shown in equation (5): \[\mathrm{F}_{d}=\chi^{V}\,\overset{\rightarrow}{\nabla}\,\overset{\rightarrow}{ \nabla}\,\overset{2}{\mu}_{o} \tag{5}\] where, V is the volume of the diamagnetic material, \(\mu_{o}\) is the vacuum magnetic permeability and B is the magnetic field [24]. ### Magnetic susceptibility of pyrolytic carbon Pyrolytic carbon which is highly diamagnetic may explain the dark'spokes' observed in Saturn's rings. Because pyrolytic carbon is so diamagnetic (repelled by a magnetic field) none of it would have been detected in the 'ring rain' falling from Saturn's rings to its atmosphere. It is suggested that the dark'spokes' are fine grains of silicates that have been covered in pyrolytic carbon due to the process of flash vacuum pyrolysis during the formation of Saturn. Pyrolytic carbon is the best-known material that displays the most similar properties to superconducting materials i.e diamagnetism. Strong variations in ion density in Saturn's electrically charged atmosphere produce strong coupling in the visible rings which consists mainly of electrically \begin{table} \begin{tabular}{|c|c|} \hline **Materials** & **Diamagnetic strength x \(10^{-5}\) SI units** \\ \hline Superconductor & -105 \\ Pyrolytic carbon & -40.9 \\ Bismuth & -16.6 \\ Mercury & -2.9 \\ Silver & -2.6 \\ Carbon(diamond) & -2.1 \\ Lead & -1.8 \\ Carbon(graphite) & -1.6 \\ Copper & -1.0 \\ Water & -0.91 \\ \hline \end{tabular} \end{table} Table 1: Diagmagnetic materials (Image credit:byjus.com) charged ice particles [29]. Saturn's rings are therefore electrically charged and thus must produce an electric field. Mitchell et al. state: "Because of the charging of the ring and the resulting electric field, the electron and ion densities immediately above the ring will not be equal" [20]. Since Saturn's rings are electrically charged with an associated electric field, it is proposed that the rings must emanate a magnetic field orthogonally above and below the ring plane. As such, Saturn's rings can be considered to be an electromagnetic phenomenon, as also suggested by Professor Vladimir Tchernyi [28]. The ring's magnetic fields enable the fine grains of silicates covered in pyrolytic carbon to levitate above and below the main ice-ring structures due to their highly diamagnetic nature, thus producing the observed dark and bright'spoke' structures that rotate in sync with the rotation of Saturn's magnetic field. 
Analysis of the spectral radiation power of the spokes provides a specific periodicity of about 640.6 ± 3.5 min, which almost coincides with the period of rotation of Saturn's magnetic field (639.4 min). Professor Tchernyi states: "Moreover, a strong correlation between the maxima and minima of activity of the spokes with the spectral magnetic longitudes is connected to the presence or absence of the radiation of Saturn's Kilometric Radiation (SKR)" [27].

The dark 'spokes' are most visible at the two seasonal equinoxes, when the illumination of the rings is greatly reduced, making possible unique observations that highlight features departing from the ring plane, i.e. the levitated fine grains of silicates covered in pyrolytic carbon. The dark 'spokes' become visible mainly for two reasons. The first is planet-shine from Saturn, which reflects off the ice particles in the rings but is absorbed by the fine grains of silicates covered in pyrolytic carbon during the seasonal equinoxes. Hedman (2017) states: "We will highlight the importance of including illumination sources other than the Sun in the radiative transfer analysis, namely the Saturn-shine and the ring-shine" [9]. The second is an increase in plasma density caused by the reduced illumination of Saturn's rings at the equinoxes, allowing the silicates coated in pyrolytic carbon to become 'recharged', thus regaining their diamagnetic properties and enabling them to levitate above and below Saturn's B ring plane.

In 2012, a research group in Japan (Kobayashi, 2012) demonstrated that pyrolytic carbon can respond to laser light or sufficiently powerful natural sunlight by spinning or moving in the direction of the field gradient. The carbon's magnetic susceptibility weakens upon sufficient illumination, leading to an unbalanced magnetisation of the material and movement when using a specific geometry [19]. This may explain why the dark 'spokes' in Saturn's rings appear seasonally at the equinoxes, when the illumination from the sun is at a minimum; hence the pyrolytic carbon grains levitate above and below the main ring. Illumination from the sun causes a weakening of the pyrolytic carbon's magnetic susceptibility, causing the pyrolytic carbon grains to return to the main ring.

### Hybridisation of Carbon

During the process of Chemical Vapour Deposition (CVD) of methane gas, the carbon atoms share their hybridised sp\({}^{2}\) electrons with their three neighbouring carbon atoms (Fig. 5), thus forming a planar honeycomb network (120\({}^{\circ}\) bond angle), also known as monolayer graphene. In pyrolytic carbon, these monolayers form a turbostratic structure, i.e. the graphene layers are stacked without order but have some covalent links between the layers (Fig. 6).

When sunlight of a sufficiently high frequency hits the surface of the pyrolytic carbon grains, the delocalised \(\pi\) molecular orbital (Fig. 7), created by the overlap of the unhybridised 2p\({}_{z}\) orbitals, collapses due to a decrease in electron density caused by electron ejection via the photoelectric effect. According to valence bond theory, the unhybridised 2p\({}_{z}\) orbitals will be vacant or will have an unpaired electron, thus making the pyrolytic carbon grains highly paramagnetic. The pyrolytic carbon grains are now attracted towards the magnetic field emanating from Saturn's B ring.
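The photoelectric argument above can be illustrated with a back-of-envelope comparison of photon energy, \(E=hc/\lambda\), against a threshold energy for electron ejection. The 4.6 eV figure used below is the commonly quoted work function of graphite and is only an assumed stand-in for pyrolytic carbon, which this paper does not quantify; on that assumption, only the ultraviolet part of sunlight would exceed the threshold.

```python
# Photon energy E = h*c/lambda compared against an assumed work function,
# to illustrate the photoelectric-ejection argument.  The 4.6 eV value is
# the commonly quoted work function of graphite, used as a stand-in here.

H  = 6.62607015e-34   # Planck constant, J s
C  = 2.99792458e8     # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

WORK_FUNCTION_EV = 4.6  # assumed threshold for electron ejection

for wavelength_nm in (200, 270, 400, 550, 700):
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    verdict = "can eject" if energy_ev > WORK_FUNCTION_EV else "below threshold"
    print(f"{wavelength_nm:4d} nm -> {energy_ev:5.2f} eV  {verdict}")
```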
After returning to the main ring structure, the pyrolytic carbon grains become 'recharged' when the plasma density reaches a maximum; this occurs when the sun's illumination of the rings is at a minimum, i.e. at the equinoxes. The unhybridised 2p\({}_{z}\) orbitals are now able to re-establish their \(\pi\) molecular orbital structure due to the increase in electron density. Due to electromagnetic induction, the electrons in the \(\pi\) bond become highly diamagnetic.

Figure 5: The formation of sp\({}^{2}\) hybrid orbitals [11]

Figure 6: Turbostratic structure of graphene monolayer (Image credit: Wikimedia Commons)

Figure 7: a. Valence bond theory 2p\({}_{z}\) orbital depiction b. \(\pi\) molecular orbital depiction (Image credit: SlideServe)

### Electromagnetic Induction

The silicates coated in pyrolytic carbon become diamagnetic when exposed to the strong magnetic field emanating orthogonally from Saturn's B ring plane. Due to the process of electromagnetic induction, a changing magnetic flux is experienced by the delocalised electrons in the pyrolytic carbon's \(\pi\) molecular orbitals. Faraday's second law of electromagnetic induction, equation (6), stipulates that an EMF will be induced in the electrons in the 2p\({}_{z}\) unhybridised orbitals, which will tend to oppose the changing magnetic flux according to Lenz's law, equation (7). Faraday's second law requires that the induced EMF be equal to the rate of change of flux linkage:

\[E=N\frac{\Delta\Phi}{\Delta t} \tag{6}\]

hence Lenz's law states

\[E=-N\,\frac{\Delta\Phi}{\Delta t} \tag{7}\]

where \(E\) is the induced EMF, N is the number of spin-orbits, \(\Delta\Phi\) is the change in magnetic flux, and \(\Delta t\) is the change in time. Thus, there will be an EMF induced that tends to oppose the magnetic field. The EMF will induce a larger current in some of the 2p\({}_{z}\) delocalised electrons' spin-orbits compared to other 2p\({}_{z}\) delocalised electrons that have opposing spin-orbits. The resulting effect is that the magnetic moments no longer cancel out, making all the 2p\({}_{z}\) delocalised electrons in the pyrolytic carbon act like tiny magnets; consequently, the pyrolytic carbon grains will be highly diamagnetic and will be repelled by the magnetic field emanating above and below Saturn's B ring.
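A minimal numerical illustration of equations (6) and (7) is given below. The paper does not supply values for \(N\), \(\Delta\Phi\) or \(\Delta t\), so the numbers are placeholders intended only to show how the induced EMF and its opposing sign follow from the flux change.

```python
# Induced EMF from equations (6)-(7): E = -N * dPhi/dt (Lenz's law form).
# All numbers are illustrative placeholders; the paper gives no values
# for N or for the flux change experienced by the pi electrons.

N         = 1        # number of spin-orbits considered (assumed)
delta_phi = 2.0e-9   # change in magnetic flux, Wb (assumed)
delta_t   = 0.5      # time over which the change occurs, s (assumed)

emf = -N * delta_phi / delta_t   # negative sign: opposes the flux change
print(f"induced EMF = {emf:.2e} V (sign opposes the change in flux)")
```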
## 5 Discussion

Only when the sun is at the equinoxes, so that the illumination of the rings is at a minimum, does the plasma density above and below Saturn's rings reach a maximum. As such, no triggering mechanism, such as lightning, is required to increase the plasma density above or below the ring plane. Due to the increased plasma density, the 2p\({}_{z}\) unhybridised orbitals are able to gain electrons, re-establishing the \(\pi\) molecular orbitals on the surface of the pyrolytic carbon. Through the process of electromagnetic induction the pyrolytic carbon grains become highly diamagnetic, causing them to levitate above and below Saturn's B ring. The dark 'spokes' in Saturn's B ring become visible due to the backscattering of light (Fig. 9). This proposed mechanism suggests that Saturn's B ring produces a magnetic field that emanates orthogonally above and below its ring plane. The bright 'spokes' (Fig. 8) are visible on the unilluminated side of Saturn's rings when the sun's illumination of the rings is at a maximum (directly below or above the ring plane).

The bright 'spokes' can be explained by the forward scattering of light (Fig. 9) caused by the grains of pyrolytic carbon that levitate above the plane of Saturn's rings. The bright 'spokes' that appear on the illuminated side of Saturn's rings may be due to sunlight reflecting off the small, levitated ice particles. These small ice particles are also diamagnetic but, unlike the silicates covered in pyrolytic carbon, their magnetic susceptibility is unaffected by natural sunlight, i.e. the photoelectric effect will not occur, so they remain levitated above Saturn's main rings. Hence, the bright 'spokes' should always be observable given sufficient illumination by the sun and the correct angle of observation by the observer. The bright 'spokes' may suddenly disappear when the illumination of the sun reaches a critical point, causing the levitated ice particles to sublimate due to the sunlight's intensity.

Figure 8: Bright spokes in Saturn's B ring observed by Cassini (Image credit: Smithsonian National Air and Space Museum)

### Cassini / VIMS Spectrometer

The first detection of 'spokes' in Saturn's B ring with Cassini's VIMS spectrometer was reported in July 2018. A wide spectral range (0.35-5.1 \(\upmu\)m) was used to measure the first complete reflectance spectrum of multiple 'spokes' (Fig. 10). The spectrum obtained indicates that the 'spokes' consist of spheroidal water ice particles. These ice particles have a wide size distribution centred on a modal radius of approximately 1.90 \(\upmu\)m, which is significantly larger than previously modelled. It has been verified by spectral analysis that the bright 'spokes' on the illuminated side of Saturn's rings are water ice particles. The spectral analysis of the rings showed that H\({}_{2}\)O dominates in the near-IR. However, there was a steep decline in the reflectance shortward of approximately 0.6 \(\upmu\)m, which must be due to some other substance [8].

Figure 9: Forward scattering and backscattering of light [17]

Figure 10: Cassini/VIMS spectral image of a spoke [8]

Recent studies (Ciarniello et al., 2019) of the rings' visible and near-IR spectra have identified tholins (reddish, organic-rich, refractory materials) as a possible contaminant responsible for the distinctly reddish colour of the rings at visual wavelengths [5]. Cuzzi and Estrada (1998), in a study of the colour of Saturn's rings, found a material that produces a distinct red colour in the particles of the A and B rings. They note that "No silicates have the appropriate combination of steep spectral slope and high absorptivity to explain the rings' visual colour while remaining compatible with microwave observations." Titan tholin matches the colours and albedos of the particles when incorporated into the water ice grains. They suggest that, for the darker rings, "material with properties like carbon black, as seen in at least some comets and interplanetary dust particles, is needed" [7]. Modelling by Poulet and Cuzzi (2002) and Poulet et al. (2003) suggests a mixture of tholins and amorphous carbon to achieve fits to the observational data in the 0.3-4.0 \(\upmu\)m range [23]. Poulet et al. (2003) concluded further that, for the A and B rings, while the carbon grains are intimately mixed in a "salt and pepper" fashion with the ice, the tholins are mixed at the molecular level within the ice grains themselves [22].
Cruikshank et al. (2005) suggest that tholins exhibit relatively few and weak absorption features in the VIMS near-IR region, except at wavelengths near 3.0 and 4.5 \(\upmu\)m where water ice is also highly absorbing [6].

No dark 'spokes' will be observable on the sunlit underside of Saturn's B ring, because the illumination from the sun causes the pyrolytic carbon grains to move back to the main ring via the photoelectric effect. The pyrolytic carbon grains lose some of their delocalised unhybridised 2p\({}_{z}\) electrons, thus becoming paramagnetic, and can then move back to the main ring as they are attracted to the magnetic field. The dark 'spokes' become visible only at the equinoxes, due to the sun's minimum level of illumination of the rings and the backscattering of light. The plasma density above and below Saturn's B ring increases to a maximum, causing the pyrolytic carbon 2p\({}_{z}\) orbitals to be 'recharged', i.e. to gain electrons. Thus, the pyrolytic carbon grains can reform their \(\pi\) molecular orbitals and regain their highly diamagnetic nature, enabling them to levitate above and below Saturn's B ring plane.

## 6 Conclusion

The chemical process of flash vacuum pyrolysis has converted hydrocarbons such as methane to pyrolytic carbon at temperatures above 1400 K during the protoplanetary disk formation of Saturn. The pyrolytic carbon has coated the fine grains of silicates through the process of Chemical Vapour Deposition (CVD). The silicates coated in pyrolytic carbon are able to levitate above or below the ring plane in the magnetic field emanating from Saturn's rings, due to the highly diamagnetic nature of pyrolytic carbon. If some of the 'spokes' do consist of pyrolytic carbon, this would be proof that Saturn's rings were formed after the collapse of the protoplanetary cloud during the formation of Saturn. The argument concerning the age of Saturn's rings would therefore be put to rest once and for all.

Depending on the angle and frequency of the sunlight hitting the fine grains of silicates coated in pyrolytic carbon, the photoelectric effect will cause the ejection of some of the delocalised \(\pi\) electrons from the pyrolytic carbon's molecular orbitals. This causes the pyrolytic carbon's molecular orbitals to collapse into discrete unpaired 2p\({}_{z}\) unhybridised orbitals, thus becoming highly paramagnetic. The pyrolytic carbon grains then return to the main ring as they are attracted to the magnetic field emanating from Saturn's B ring. The dark 'spokes' in Saturn's B ring are only observable at the equinoxes, due to the minimum illumination of the rings by the sun. The bright 'spokes' should be visible when the sun is above or below the plane of Saturn's rings. Therefore, it can be concluded that Saturn always has 'spokes', but whether they appear dark or bright depends on the position of the sun relative to Saturn and the angle at which the observer views the rings.

Saturn's rings are an electromagnetic phenomenon, as suggested by Russian Professor Vladimir Tchernyi [28]. Saturn creates electromagnetic fields which encompass the equatorial region of the planet. These electromagnetic fields consist of magnetic fields which emanate orthogonally above and below the ring plane. Saturn's rings are analogous to the magnetic field lines produced in a laboratory using a neodymium magnet and iron filings, as shown in Fig. 11.
It is also predicted that the recent samples taken from the carbonaceous asteroids Bennu and Ryugu will contain a large quantity of pyrolytic 'diamagnetic' carbon. Further research is suggested concerning the processes of pyrolysis and gasification as possible chemical processes that may help to explain where all Earth's water originated from.

Figure 11: The magnetic field lines created by a neodymium magnet look similar to Saturn's rings (Image credit: Phys.org)
2307.02518
Analyzing the Performance of ChatGPT in Cardiology and Vascular Pathologies
The article aims to analyze the performance of ChatGPT, a large language model developed by OpenAI, in the context of cardiology and vascular pathologies. The study evaluated the accuracy of ChatGPT in answering challenging multiple-choice questions (QCM) using a dataset of 190 questions from the Siamois-QCM platform. The goal was to assess ChatGPT potential as a valuable tool in medical education compared to two well-ranked students of medicine. The results showed that ChatGPT outperformed the students, scoring 175 out of 190 correct answers with a percentage of 92.10\%, while the two students achieved scores of 163 and 159 with percentages of 85.78\% and 82.63\%, respectively. These results showcase how ChatGPT has the potential to be highly effective in the fields of cardiology and vascular pathologies by providing accurate answers to relevant questions.
Walid Hariri
2023-04-15T20:08:48Z
http://arxiv.org/abs/2307.02518v1
# Analyzing the Performance of ChatGPT in Cardiology and Vascular Pathologies ###### Abstract The article aims to analyze the performance of ChatGPT, a large language model developed by OpenAI, in the context of cardiology and vascular pathologies. The study evaluated the accuracy of ChatGPT in answering challenging multiple-choice questions (QCM) using a dataset of 190 questions from the Siamois-QCM platform. The goal was to assess ChatGPT potential as a valuable tool in medical education compared to two well-ranked students of medicine. The results showed that ChatGPT outperformed the students, scoring 175 out of 190 correct answers with a percentage of 92.10%, while the two students achieved scores of 163 and 159 with percentages of 85.78% and 82.63%, respectively. These results showcase how ChatGPT has the potential to be highly effective in the fields of cardiology and vascular pathologies by providing accurate answers to relevant questions. Siamois-QCM ChatGPT Natural language processing Cardiology Vascular pathology ## 1 Introduction ChatGPT, a large language model developed by OpenAI, has gained significant attention for its potential in various domains, including the field of medicine [1]. As a language model trained on vast amounts of text data, ChatGPT has demonstrated the ability to generate coherent and contextually appropriate responses to a wide range of queries [2, 3]. In the medical domain, ChatGPT has shown promise as a tool for medical education and exam preparation, particularly in assisting students in their residency exams [4]. The potential in medical education and exams is particularly relevant and very challenging in the context of cardiology and vascular pathologies. These specialized areas require a deep understanding of complex medical concepts and the ability to accurately answer questions and provide relevant explanations. Therefore, the dataset described below will be a very good challenge for both students and ChatGPT. ## 2 Dataset The performance of ChatGPT and two medical students in cardiology and vascular pathologies was evaluated using a dataset of medical exams from the Siamois-QCM platform. [5], which provides multiple-choice questions (QCM) in French language to assist students in their residency exam preparation. This platform contains more than 50,000 medical, pharmacy, and dental students to prepare for their residency exams and competitions. The platform provides the possibility to choose the materials, and also specific lessons. Each lesson has a different number of questions. The students have chosen both the material (cardiology and vascular pathologies) and the seven lessons to make the comparison to ChatGPT more challenging. We will be analyzing a total of 190 questions related to the lessons presented in Figure 1. The distribution of the 190 questions on the lessons is shown in Figure 2 Methodology This article aims to analyze the performance of ChatGPT in the material of "cardiology and vascular pathologies" by utilizing a dataset of questions from Siamois-QCM platform and assessing its accuracy in answering the questions. The selected questions belong to the \(6^{th}\) year Medicine faculty program of Algiers, Algeria, known for being a challenging exam. We then compare the results of ChatGPT with the performance of two well-ranked students of medicine who are currently studying in the same program. Figure 1: The seven lessons related to cardiology and vascular pathologies material. 
Figure 2: The number of questions per lesson from Siamois-QCM platform. This study specifically focuses on seven lessons within the material of "cardiology and vascular pathologies". Below, we provide a brief overview of lesson and the related disease: * **Abdominal aortic aneurysm:** A bulge in the lower part of the aorta, which can be life-threatening if it ruptures. * **Antihypertensive medications:** Drugs used to lower high blood pressure and prevent associated complications such as stroke and heart attack. * **Normal and pathological electrocardiograms (ECG):** A test that records the electrical activity of the heart to diagnose heart problems, including abnormal rhythms and damage. * **Atrioventricular block:** A condition where the electrical signals between the upper and lower chambers of the heart are disrupted, resulting in a slower heart rate or irregular heartbeat. * **Varicose veins:** Twisted, enlarged veins, usually in the legs, that can cause pain and discomfort. * **Chronic pulmonary heart disease (CPC):** A condition where the lungs and heart are unable to function properly due to long-term lung disease. * **Syncope and pre-syncope:** Fainting or feeling like one might faint, often caused by a drop in blood pressure, lack of oxygen to the brain, or other underlying medical conditions. ## 4 Results The findings of this study highlight the potential of ChatGPT as a valuable tool in medical education within the material of "cardiology and vascular pathologies. Table 1 shows that ChatGPT outperformed the scores of the two well-ranked students by achieving 175 correct answers out of 190 questions, with a percentage of 92.10%. On the other hand, the two students achieved scores of 163 and 159, with a percentage of 85.78% and 82.63% respectively. Figure 3 summarizes the test results per lesson. Figures 4 and 5 depict a correct and an incorrect answer, respectively, from ChatGPT on the Siamois-QCM platform. Therefore, questions that contain numerical values with different units may prove more challenging for ChatGPT, potentially resulting in incorrect answers. ## 5 Conclusion This paper demonstrates the high potential of ChatGPT in the field of cardiology and vascular pathologies by providing answers to related questions from Siamois-QCM platform. Although ChatGPT has outperformed the score of the 2 well-ranked students with a 6% advantage, it is necessary to provide further refinement and improvement in its performance for specific medical domains. Further research and development are necessary to enhance ChatGPT capabilities for assisting students in residency exam preparation and supporting medical education in this specialized field. This analyzing study will be expanded to include more materials to further evaluate ChatGPT's performance in the medical field.
2304.03356
Extensional Flow of a Free Film of Nematic Liquid Crystal with Moderate Elasticity
Motivated by problems arising in tear film dynamics, we present a model for the extensional flow of thin sheets of nematic liquid crystal. The rod-like molecules of these substances impart an elastic contribution to its response. We rescale a weakly elastic model due to Cummings et al. [European Journal of Applied Mathematics 25 (2014): 397-423] to describe a case of moderate elasticity. The resulting system of two nonlinear partial differential equations for sheet thickness and axial velocity is nonlinear and fourth order in space, but still represents a significant reduction of the full system. We analyze solutions arising from several different boundary conditions, motivated by the underlying application, with particular focus on dynamics and underlying mechanisms under stretching. We solve the system numerically, via collocation with either finite difference or Chebyshev spectral discretization in space, together with implicit time stepping. At early times, depending on the initial film shape, pressure either aids or opposes extensional flow, which changes the shape of the sheet and may result in the loss of a minimum or maximum at the moving end. We contrast this finding with the cases of weak elasticity and Newtonian flow, where the sheet retains all extrema from the initial condition throughout time.
M. J. Taranchuk, L. J. Cummings, T. A. Driscoll, R. J. Braun
2023-04-06T20:11:56Z
http://arxiv.org/abs/2304.03356v1
# Extensional Flow of a Free Film of Nematic Liquid Crystal with Moderate Elasticity ###### Abstract Motivated by problems arising in tear film dynamics, we present a model for the extensional flow of thin sheets of nematic liquid crystal. The rod-like molecules of these substances impart an elastic contribution to its response. We rescale a weakly elastic model due to Cummings et al. [European Journal of Applied Mathematics 25 (2014): 397-423] to describe a case of moderate elasticity. The resulting system of two nonlinear partial differential equations for sheet thickness and axial velocity is nonlinear and fourth order in space, but still represents a significant reduction of the full system. We analyze solutions arising from several different boundary conditions, motivated by the underlying application, with particular focus on dynamics and underlying mechanisms under stretching. We solve the system numerically, via collocation with either finite difference or Chebyshev spectral discretization in space, together with implicit time stepping. At early times, depending on the initial film shape, pressure either aids or opposes extensional flow, which changes the shape of the sheet and may result in the loss of a minimum or maximum at the moving end. We contrast this finding with the cases of weak elasticity and Newtonian flow, where the sheet retains all extrema from the initial condition throughout time. Introduction The tear film of the eye is a thin multi-layer protective liquid film lying over the cornea. It is painted onto the ocular surface during the upstroke of the blink, and is re-formed rapidly after each blink.[1] Proper function of the tear film is essential for eye health and clear vision.[2] The most abundant component of the tear film is the aqueous layer, sandwiched between a mucin layer called the glycocalyx, that is bound to the ocular surface, and a thin lipid layer that floats on it. A sketch of a cross section of a small part of the tear film is shown in Fig. 1. Proceeding toward the eye from the surrounding air, the first layer encountered is the lipid layer, which averages on the order of tens of nanometers thick.[3] Next comes the aqueous layer, which is typically a few microns thick[4], and which contains large molecules such as soluble mucins and proteins.[5] The glycocalyx is a forest of membrane-bound mucins and associated molecules that form a protective barrier for the ocular surface.[6; 7; 8; 9] Finally, the outer surface of the corneal epithelium is the beginning of the ocular surface itself.[10] The normal tear film structure can fail to form initially, or sometime after a blink develop tear breakup, where the tear film fails to coat the ocular surface.[11; 12] Tear breakup and associated hyperosmolarity (excessive saltiness of the local tears) is thought to play an important role in the development of dry eye disease, which affects millions of people.[13; 14; 15] Figure 1: A sketch of the tear film on the ocular surface. Here LL denotes the lipid layer, AL the aqueous layer, G the glycocalyx, and C is the outermost part of the corneal epithelium. The objects in the interior of the aqueous layer represent large mucin and protein molecules. The tear film lipid layer is of interest because it plays an important role in preventing tear breakup. Simultaneous imaging of the lipid layer and the aqueous layer [16] shows a strong correlation between lipid layer dynamics and tear breakup. 
The lipid layer is typically thought to be a barrier to evaporation, thus providing an important function to preserve the tear film between blinks.[17; 18] However, the lipid layer composition [19] and structure [20; 21; 22] are complex and not yet fully understood. Meibum, an oily secretion from meibomian glands in the eyelids,[23] is the primary component of the lipid layer; it is not uncommonly used as a model for the lipid layer. X-ray scattering methods applied to _in vitro_ meibum films have suggested that there are ordered particles in the meibum films with layered structures;[22] these particles may have liquid crystal structure. Hot-stage imaging of meibum droplets have shown birefringence,[19] another sign of order within the meibum. And in the meibomian glands[23] in the human eyelid which produce meibum, freeze fracture with electron microscopy shows a layered structure of the lipids inside the cells that are the source of the meibum.[24] We interpret this evidence to suggest that the tear film lipid layer could be an extended liquid crystalline layer with (possibly many) defects.[25] It is not known whether the entire lipid layer has these qualities, or whether isolated chunks of structured particles float in the layer; however, there is general agreement that the lipid layer has non-Newtonian properties.[21; 22; 26; 27] These areas of structure in the lipid layer are thought to provide the barrier against evaporation of the aqueous layer.[19; 22] In addition, cooling of liquid crystals facilitates orientation of the molecules in the same direction.[28] The cooling of the lipid layer may encourage the formation of liquid crystal structure _in vivo_.[20] As the eye reopens during a blink, the lipid layer undergoes extensional flow as the tear film is painted across the surface of the eye.[1] Rather than spreading smoothly and uniformly over the eye, imaging of the tear film reveals stripes or ripples in the lipid layer (see Fig. 2).[1] The goal of this paper is to model extensional flow of thin sheets of liquid crystal using both weak and moderate elasticity limits, and to lay the foundation to explore whether we can replicate the type of rippling seen in the tear film. Theoretical modeling of extensional flow was developed quite extensively in the twentieth century[29] and continues to be an active area of study, in part because of industrial applications such as optical fiber drawing[30] and the use of polymers for a wide range of industrial purposes. Thus, much work has been done on extensional flow of both Newtonian and non-Newtonian fluids, especially thin sheets or fibers. We do not attempt a comprehensive review here, but simply highlight a few studies of relevance to our problem. Evolution of Newtonian fibers under extensional flow has been studied extensively, from axisymmetric viscous fibers with one-dimensional flow by Schultz and Davis,[31] to more complicated three-dimensional models for non-axisymmetric fibers by Dewynne et al.[32; 33] Wylie et al.[34] discuss the role of inertia and surface tension in the extensional flow of viscous fibers, and find that, while effects of surface tension are higher order and can be neglected, there are times when inertia plays an important role in the evolution. Howell[30] also presented exact solutions for the extensional flow of both sheets and fibers of primarily Newtonian fluid (and also provides a good overview of earlier extensional flow modeling). 
Such flows are relevant for glass manufacturing, printing, and other applications; see Dewynne, Howell, and Wilmott[33] for further discussion and references. Non-Newtonian fluids have also received attention; for example, the development of beads on a string has been described by Clasen et al.[35] for polymer fluids in a jet or liquid bridge, and by Sostarecz and Belmonte[36] for micellar fluid under stretching, while Smolka et al.[37] presented an exact solution for the extensional flow of a thread of fluid under both weakly and strongly viscoelastic limits. Most relevant to our application, however, Cummings et al.[38] studied extensional flow of nematic liquid crystals, and this is the scenario on which we now focus, as we hope it can help to explain the dynamics of the tear film of the eye during the blink cycle. As a starting point, we use the model of Cummings et al.,[38] which uses the Ericksen-Leslie equations to describe extensional flow of a thin sheet of nematic liquid crystal. The main focus of that paper is the response of the liquid crystal film to an applied electric field (relevant to many technological applications, such as electronic displays). In a biological setting such as the human eye, however, no electric field is present, thus, we neglect this aspect of the modeling but follow the same asymptotic approach. We rescale the governing equations and consider a new limit for the case of moderate elastic effects. We analyze a range of boundary conditions, which are found to strongly affect the shape of the evolving sheet under stretching. We then investigate rippling in the sheet by introducing more waves into the initial condition. The paper is organized as follows. In Section II we describe the problem formulation, and the mathematical models for both weak and moderate elasticity. Section III provides details of the numerical methods used to solve the models. In Section IV we present our results. These include profiles of the sheet thickness, fluid velocity, and film pressure that result from the different boundary conditions. We track the location of minimum sheet thickness under these different scenarios. We compare and contrast the solutions under weak and moderate elasticity, when either the surface tension or speed of the moving end is varied. Then we present results when multiple waves are added to the initial condition, and the mechanisms for the observed dynamics. Finally, in Section V we discuss the results and outline our conclusions. ## II Models To reduce the complexity of the lipid layer geometry seen in Fig. 1, we simplify the cross section of the tear film (the sagittal plane) to the geometry shown in Fig. 3, where the ripples in the lipid layer of the tear film appear in a 2D configuration analogous to beads on a string. In this work, we neglect the aqueous layer, and consider the lipid layer alone, in two dimensions. Thus, as a first step, we consider it to be a thin free film in a sheet configuration, with multiple waves on the fluid/air interfaces in the initial condition. The sheet of fluid is assumed fixed at the left end, while the right end moves with a Figure 3: Simplified sketch of a cross section of the tear film. Figure 2: Ripples in the lipid layer of the tear film before a blink (left), and after a blink (right). The ripples become compressed after a blink, and may not extend to cover the cornea once the eye is open.[1] prescribed constant speed \(v_{0}\), providing a simple model of the opening eyelid following a blink. 
A sketch is shown in Fig. 4. As a further simplification, the lipid sheet is assumed symmetric about its midline, and the midline is assumed to be straight. We denote the thickness of the sheet by \(h(x,t)\), the axial fluid velocity by \(u(x,t)\), and the transverse velocity by \(w(x,t)\). The liquid crystal molecules in the lipid sheet are assumed to have a preferred angle of \(\theta_{B}\) relative to \(\hat{n}\), the outward-facing unit vector normal to the sheet surface. The angle of the molecules within the sheet is described by the director field \(\mathbf{n}=(\sin\theta,\cos\theta)\); the director field is discussed further in the appendix. ### Weak elasticity Our approach follows that of Cummings et al.,[38] who used multiple scale perturbation methods to simplify the Ericksen-Leslie equations[39] governing nematic liquid crystal dynamics. The Ericksen-Leslie equations (see Eqs. (A1) of the Appendix) are nondimensionalized using the scalings given below, where primes denotes dimensional quantities. The coordinates \((x^{\prime},z^{\prime})\) and velocity components \((u^{\prime},w^{\prime})\) correspond to the axial and transverse directions respectively, \(h^{\prime}\) represents the sheet thickness, \(t^{\prime}\) is time, \(p^{\prime}\) is pressure, and \(\gamma^{\prime}\) is the surface tension at the film/air interface: \[x^{\prime} =Lx, z^{\prime} =\delta Lz, u^{\prime} =Uu, w^{\prime} =\delta Uw, \tag{1}\] \[h^{\prime} =\hat{h}h, t^{\prime} =\frac{L}{U}t, p^{\prime} =\frac{\mu U}{L}p, \gamma^{\prime} =\frac{\mu U}{\delta}\gamma. \tag{2}\] Figure 4: Schematic of a sheet of nematic liquid crystal stretched between two plates. The left end is fixed, while the right end is moved with a prescribed velocity \(v_{0}\). The sheet thickness is \(h(x,t)\) and the axial fluid velocity, \(u(x,t)\). The transverse velocity is \(w(x,t)\). Molecules on the surface lie at an angle \(\theta_{B}\) relative to \(\hat{n}\), the outward-facing normal vector. The dimensional parameters used in the model are defined in Table 1, along with the non-dimensional parameters that result from the chosen scalings. Asymptotic expansion of the dependent variables in the small parameter \(\delta=\hat{h}/L\) (see section A.3 of the Appendix), yields a closed system of equations for the (leading order) sheet thickness \(h\) and axial velocity \(u\): \[h_{t}+(hu)_{x} =0, \tag{3}\] \[\frac{F(\theta_{B})}{G(\theta_{B})}(hu_{x})_{x}+\frac{\gamma}{2}hh _{xxx} =0. \tag{4}\] Eq. (3) represents conservation of mass, and Eq. (4) is the axial force balance. The coefficient of the axial gradient term is formed from functions \(F(\theta_{B})\) and \(G(\theta_{B})\), which depend on material properties of the fluid as well as the leading order solution for the director angle, \(\theta_{0}\) (see Eqs. (19) and (20) in the Appendix). However, in the situation considered here, \(\theta_{0}=\theta_{B}\) is a fixed angle, and \(F\) and \(G\) are themselves also constant. If the properties of a Newtonian fluid are used, then \(F/G=4\), and Eq. (4) simplifies to \[4(hu_{x})_{x}+\frac{\gamma}{2}hh_{xxx}=0. \tag{5}\] For the remainder of this paper, we use this coefficient value of 4 when presenting weak elasticity solutions. 
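For orientation, the sketch below evaluates the dimensionless groups of Table 1 from the scalings in Eqs. (1)-(2). None of the dimensional values are taken from the paper; they are hypothetical placeholders (viscosity, speed, length, thickness, surface tension and elastic constant) used only to show how \(\delta\), \(\gamma\) and \(\hat{N}\) are formed.

```python
# Evaluating the dimensionless groups of Table 1 for illustrative,
# assumed parameter values (none of these numbers come from the paper).

mu      = 0.05      # dynamic viscosity, Pa s          (assumed)
U       = 0.05      # typical axial velocity, m/s      (assumed)
L       = 1.0e-2    # typical sheet length, m          (assumed)
h_hat   = 5.0e-7    # typical sheet thickness, m       (assumed)
gamma_p = 0.03      # surface tension gamma', N/m      (assumed)
K       = 5.0e-12   # liquid crystal elastic constant, N (assumed)

delta = h_hat / L                      # aspect ratio
gamma = delta * gamma_p / (mu * U)     # weak-elasticity surface tension scale
N_hat = K / (mu * U * delta * L)       # inverse Ericksen number

print(f"delta = {delta:.2e}")
print(f"gamma = {gamma:.2e}")
print(f"N_hat = {N_hat:.2e}")
```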
We note that for the weak elasticity scalings chosen here, the pressure \begin{table} \begin{tabular}{c l} Parameter & Description \\ \hline \(\mu\) & dynamic viscosity \\ \(U\) & typical axial velocity \\ \(L\) & typical sheet length \\ \(\hat{h}\) & typical initial sheet thickness \\ \(\gamma^{\prime}\) & surface tension of air/sheet interface \\ \(K\) & elastic constant of the liquid crystal \\ \(\delta=\frac{\hat{h}}{L}\ll 1\) & aspect ratio \\ \(\gamma=\frac{\delta}{\mu U}\gamma^{\prime}\) & surface tension/viscosity comparison \\ \(\hat{N}=\frac{K}{\mu U\delta L}\) & inverse Ericksen number \\ \end{tabular} \end{table} Table 1: Parameters used in the model scalings for weak elasticity. Different scales for \(\gamma\) and \(\hat{N}\) are used in the moderate elasticity model; see Eq. (10). Note that \(\hat{N}\), \(\delta\) and \(\gamma\) are dimensionless. is defined as [38] \[p=-2u_{x}-\frac{\gamma}{2}h_{xx}. \tag{6}\] Whenever the pressure is shown for solutions to the weak elasticity model, we make use of Eq. (6). The Newtonian limit, with zero surface tension \(\gamma=0\), becomes the Trouton model [40], considered extensively within a Newtonian framework by Howell [30] (see also references therein). The tension \(T(t)\) in the sheet is found by taking the first integral of the axial force balance in the Newtonian case, Eq. (5), which gives \[T(t)=4hu_{x}+\frac{\gamma}{2}\left(hh_{xx}-\frac{1}{2}h_{x}^{2}\right). \tag{7}\] The tension is spatially uniform throughout the sheet (independent of \(x\)) [30]. Since we specify the speed of the moving end, we impose the following boundary conditions (BCs), where \(s(t)=1+v_{0}t\) denotes the location of the moving end, \[u(0,t) =0,\quad u(s(t),t)=v_{0}, \tag{8}\] \[h_{x}(0,t) =0,\quad h_{x}(s(t),t)=0. \tag{9}\] Typically, we take \(v_{0}=1\), with the exception of Section IV.5, where we explore varying the speed of the moving end. Neumann BCs on \(h\) specify the contact angle of the film with end plates; the plates are assumed to have no effect on the director field. For the weak elasticity case, we solve the system of partial differential equations (PDE) found in Eqs. (3) and (5), subject to the BCs in Eqs. (8) and (9), as well as given initial conditions (ICs) for \(h(x,0)\) and \(u(x,0)\) discussed below. ### Moderate elasticity To consider the case of moderate elasticity, we rescale the inverse Ericksen number, the pressure, and the surface tension as follows, while keeping the other scalings the same: \[\hat{N}=\frac{K}{\mu UL},\quad p^{\prime}=\frac{\mu U}{\delta L}p,\quad \gamma^{\prime}=\frac{\mu U}{\delta^{2}}\gamma. \tag{10}\] Here primes denote dimensional quantities. For \(O(1)\)\(p\) and \(\gamma\), the dimensional values are both scaled to be larger than the weak elasticity case. Following the derivation outlined in A 3 of the Appendix, we find the the leading order pressure \[p=-\frac{\gamma}{2}h_{xx}, \tag{11}\] and obtain the following system \[h_{t}+\left(hu\right)_{x} =0, \tag{12}\] \[\left(hu_{x}\right)_{x}+\tilde{\gamma}(h^{2}h_{xxx})_{x} =0, \tag{13}\] where \(\tilde{\gamma}=\gamma C_{2}(\theta_{B})/B_{2}(\theta_{b})\) is the scaled surface tension with the scale factors \(B_{2}\) and \(C_{2}\) given in Eqs. (13) and (14) of the Appendix. For simplicity, we take \(\tilde{\gamma}=\gamma\) in our computational solutions. In this case of moderate elasticity, the tension in the sheet is now given by \[T(t)=hu_{x}+\gamma h^{2}h_{xxx}. 
\tag{14}\] Although the surface tension at the lipid layer/air interface of the tear film is unknown, we use a value based on surface tension measurements for the nematic liquid crystal 5CB at a range of temperatures surrounding 35\({}^{\circ}\)C [41], which is close to the temperature at the surface of the eye [42]. Unless otherwise noted, we take \(\gamma=0.025\). We note that Eq. (13) is higher order than the weak elasticity or Newtonian cases; this change will be consequential for the dynamics of the film. This higher order system requires more boundary conditions on \(h\). To determine the number of boundary conditions needed we use Eq. (14) to eliminate \(hu_{x}\) from Eq. (12), yielding \[h_{t}+uh_{x}+T(t)-\gamma h^{2}h_{xxx}=0. \tag{15}\] The highest derivative in this equation is third order, implying that we need three boundary conditions on \(h\) to solve the system; thus, we will need an additional boundary condition apart from those given in Eqs. (8) and (9). #### ii.1.1 Reducing the order To solve the model numerically, it is preferable to reduce the order of the system by adding a dependent variable. We can add the pressure, \(p\), shown in Eq. (11) to our system of PDEs as an additional dependent variable, and substitute into the axial force balance Eq. (13) to reduce the order of the highest derivative appearing in the system. We obtain: \[h_{t}+(hu)_{x} =0, \tag{16}\] \[\left(hu_{x}\right)_{x}-(h^{2}p_{x})_{x} =0,\] (17) \[p+\frac{\gamma}{2}h_{xx} =0. \tag{18}\] Using this substitution, we can write the equation for tension in the moderate elasticity case (see Eq. (14)) as \[T(t)=hu_{x}-h^{2}p_{x}. \tag{19}\] The space-dependent terms on the right hand side of this equation combine to be independent of \(x\). #### ii.1.2 Boundary and initial conditions The boundary conditions for the axial velocity \(u\) are as in Eq. (8) for the weak elasticity case: \(u(0,t)=0\) and \(u(s(t),t)=v_{0}\). For the sheet thickness \(h\), we consider four sets of boundary conditions for the moderate elasticity model that are summarized in Table 2. In all cases, the third (additional) boundary condition on \(h\) is enforced by setting \(p_{x}(0,t)=0\) on the fixed end. Turning to Table 2, Cases I and II specify Neumann conditions (homogeneous and non-homogenous) on \(h\). Cases III and IV specify a Robin condition on the right (moving) or left (fixed) end respectively. The parameter \(\nu\) may vary between 0 and 1; a smaller value for \(\nu\) results in a boundary condition that is close to a pure Dirichlet condition at that end. In the physical sense, the Robin boundary conditions model capillarity on one end of the sheet. A Dirichlet condition would represent fluid pinned to the plate, with the slope free to vary. The Neumann conditions specify the contact angle formed by the liquid crystal fluid and the plate, but the thickness of the film is free to vary. Homogeneous Neumann conditions represent a contact angle of \(\pi/2\). The tension equation Eq. (19) can be used to determine the remaining boundary condition that is needed. To evaluate the individual terms in Eq. (19), we use the initial condition \(h(x,0)=0.9+0.1\cos(2\pi x)\) and find \(u(x,0)\) by solving Eq. (13) subject to \(u(0,0)=0\) and \(u(1,0)=v_{0}\) with \(\gamma=0.025\) and \(v_{0}=1\). To find \(p(x,0)\), the definition in Eq. (18) is used. We then plot the individual terms from Eq. (19) (or equivalently, Eq. (14)). These curves result from valid initial conditions for which we present solutions below. 
We see that one component of the tension, \(h^{2}p_{x}\), is zero at the left end, while the other is not. This is important because it suggests that \(p_{x}(0,t)=0\), and that we can enforce it as an additional boundary condition at \(x=0\) for the moderate elasticity model. In physical terms, Fig. 5 shows that, at \(x=0\), all of the tension is in the extensional term while none is in the pressure term. The initial condition for \(h\) is chosen as \[h(x,0)=a+b\cos(2\pi k_{0}x)+c\,x(x-1). \tag{20}\] The wavenumber \(k_{0}\) will typically be \(k_{0}=1\) but will be systematically varied in later sections. The quadratic term (\(c\neq 0\)) is used only in Case II, where we allow a nonzero slope at the ends. The initial condition for \(p\) is calculated exactly via Eq. (17). One must solve for \(u(x,0)\) in order to have a consistent initial condition for the numerical solvers that we use. We return to this point in Section III below. Figure 5: The tension \(T(t)\) and its component terms from Eq. (19) are plotted at \(t=0\) for \(h(x,0)=0.9+0.1\cos(2\pi x)\). At \(x=0\), all the tension comes from \(hu_{x}\neq 0\), and none from \(h^{2}p_{x}=0\). Because \(h(0,t)\neq 0\), this justifies the choice of \(p_{x}(0,t)=0\) as a third boundary condition in the case of moderate elasticity. ## III Numerical solution To solve the models numerically, we first map from a moving domain \(0<x<s(t)\) with \(s(t)=1+v_{0}t\), to a fixed domain \(0<\xi<1\) using \(\xi=x/s(t)\). On the fixed domain, the unknowns become \(H(\xi,t)=h(x,t)\) and \(U(\xi,t)=u(x,t)\). We then apply the mapping to both the weak and moderate elasticity models. ### Weak Elasticity After mapping Eqs. (3), (4), (8) and (9) to the fixed domain, one obtains \[H_{t}-v0(\xi/s)H_{\xi}+(1/s)(UH)_{\xi}=0, \tag{21}\] \[(4/s^{2})(U_{\xi}H)_{\xi}+(\gamma/2s^{3})HH_{\xi\xi\xi}=0,\] (22) \[H_{\xi}(0,t)=0,\ \ H_{\xi}(1,t)=0,\] (23) \[U(0,t)=0,\ \ U(1,t)=v_{0},\] (24) \[H(\xi,0)=a+b\cos(2\pi k_{0}x). \tag{25}\] Note that for this model there are only two BCs for \(h\) and two BCs for \(u\);[38] we do not impose the BC on \(p\). \begin{table} \begin{tabular}{c c c c c c} Case & Fixed end, \(x=0\) & Moving end, \(x=1+v_{0}t\) & \(a\) & \(b\) & \(c\) \\ \hline I & \(h_{x}=0,\ \ \ p_{x}=0\) & \(h_{x}=0\) & 0.9 & 0.1 & 0 \\ II & \(h_{x}=-c\), \(p_{x}=0\) & \(h_{x}=c\) & 0.9 & 0.1 & 0.1 \\ III & \(h_{x}=0,\ \ \ p_{x}=0\) & \((1-\nu)(h-1)+\nu h_{x}=0\) & 0.9 & 0.1 & 0 \\ IV & \((1-\nu)(h-1)-\nu h_{x}=0,\ p_{x}=0\) & \(h_{x}=0\) & 0.9 & 0.1 & 0 \\ \end{tabular} \end{table} Table 2: Summary of boundary conditions on a moving domain with an initial condition \(h(x,0)=a+b\cos(2\pi k_{0}x)+c\,x(x-1)\), and \(0<\nu<1\). The quadratic term in the initial condition is only used in Case II. ### Moderate Elasticity For the moderate elasticity case, the problem defined in Eqs. (16), (17), and (18), along with the boundary conditions for Case III, becomes \[H_{t}-v_{0}\frac{\xi}{s}H_{\xi}+\frac{1}{s}(UH)_{\xi} =0, \tag{26}\] \[\left(U_{\xi}H\right)_{\xi}-\left(H^{2}P_{\xi}\right)_{\xi} =0,\] (27) \[P+\frac{\gamma}{2s^{2}}H_{\xi\xi} =0,\] (28) \[H_{\xi}(0,t)=0,\;P_{\xi}(0,t)=0,\;(1-\nu)sH(1,t)+\nu H_{\xi}(1,t) =0,\] (29) \[H(\xi,0)=a+b\cos(2\pi k_{0}\xi)+c\xi(\xi-1). \tag{30}\] Boundary condition Case I is recovered by setting \(\nu=1\) in Case III here, while Cases II and IV are transformed similarly. ### Numerical methods We describe the implementation for the moderate elasticity case here in detail; the weak elasticity case is treated similarly. 
After mapping to a fixed domain, we apply a version of the method of lines; we implement two approaches to validate our results. The spatial derivatives are approximated via collocation with either finite difference or Chebyshev spectral discretization. When utilizing finite difference methods, we use a uniform grid. Second-order centered formulas are used inside the domain, and the appropriate second-order non-centered formulas are used to approximate the derivatives at the left and right ends of the sheet. The result is a system of differential algebraic equations (DAEs) at the grid points that we solve forward in time in Matlab (MathWorks, Natick, MA, USA) using ode15s. In general, the number of grid points is \(N=512\). As a check, we use the trapezoidal method to calculate the fluid volume, and observe that it is conserved to the order of our imposed tolerances of \(10^{-4}\). Alternatively, we use Chebyshev spectral discretization in space, which also results in a DAE system solved in the same way.[43] Typically, the number of grid points we used was \(N=128\) for this method. For either discretization method, the initial sheet thickness \(h(x,0)\) was first specified, then \(p(x,0)\) computed from its definition in Eq. (18). Finally, the discrete version of the axial force balance Eq. (17) was solved for \(u\) on the grid points using the backslash. The results using both methods agree, until the final times when error accumulates at the ends with the finite difference method. However, the spectral method could not complete computations over as wide a range of parameter values (for example, for surface tension) as could the finite difference method. ## IV Results We begin by showing solutions for thickness and velocity for the simple case of a flat sheet. We then present solutions for thickness, velocity, and pressure obtained for the various boundary conditions outlined in Table 2 in the case of moderate elasticity, and we compare them with the corresponding results in the case of weak elasticity (where the condition \(p_{x}(0,t)=0\) is not used). We note how the location of the sheet's minimum thickness changes depending on the boundary conditions imposed. Next, we vary both the surface tension and speed of the moving end, and demonstrate the effect for both moderate and weak elasticity. We investigate dynamics resulting from increasing the number of sinusoidal waves in the initial condition, and show examples of how the wave profile changes through time depending on the surface tension value, and the amplitude and period of the imposed initial waves. Finally, we discuss mechanism responsible for those dynamics. ### Neumann conditions on \(h\) We begin by showing solutions for an initially flat sheet. We take \(h(x,0)=1\), with BC Case I given by \(h_{x}(0,t)=h_{x}(1,t)=0\), \(u(x,0)=v_{0}x\), \(u(0,t)=0\), and \(u(1,t)=v_{0}=1\). In this scenario, the sheet remains spatially uniform for all time, and the PDEs governing \(h\) and \(u\) for both weak and moderate elasticity are the same, as the terms containing surface tension are lost. Solutions for \(h\) and \(u\) are shown in Fig. 6 on the moving domain. The thickness decreases uniformly, and the velocity increases linearly across the sheet. Note that \(p_{x}=0\) trivially for all \(x\) and \(t\) for both moderate and weak elasticity models. In the case of moderate elasticity, the pressure is zero at each time level. In the case of weak elasticity, from Eq. 
(6), \(p=-2u_{x}\) and so \(p\) is constant in \(x\) but decreasing in time. Next, we consider the moderate elasticity solutions for the sheet thickness, axial velocity, and pressure when \(\gamma=0.1\) for a sinusoidal IC with \(a=0.9\), \(b=0.1\) and \(c=0\). Results are shown in Fig. 7. Initially, the axial velocity is negative for much of the sheet, meaning that the fluid in these areas is moving to the left. This changes the profile of the sheet thickness very quickly, and extensional flow leads to thinning of the sheet at the right end. The fluid away from the right end is left behind, and by \(t=0.25\), there is no longer a local maximum in the thickness at the right end. From that time until \(t=3\), the sheet thickness has lost approximately half a wave from the initial one full period. The tendency of fluid to gather at the left end while the right end becomes thinner is a characteristic of moderate elasticity that is not seen in the case of weak elasticity. Fig. 8 Figure 6: Profiles of sheet thickness, \(h\), and fluid velocity, \(u\), for an initially flat sheet with homogeneous Neumann boundary conditions and \(v_{0}=1\) (both weak and moderate elasticity cases have the same evolution for these variables). Figure 7: Profiles of sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for moderate elasticity with a sinusoidal IC and BC Case I when \(\gamma=0.1\) and \(v_{0}\)=1. shows the analogous solutions for \(h\), \(u\) and \(p\) of a sheet of fluid with weak elasticity. The sheet remains symmetric about its midpoint throughout the computation, and \(h_{min}\) occurs in the middle of the sheet. The middle plot of Fig. 8 shows that as time progresses, the strain rate \(u_{x}\) is largest in the middle of the sheet, and the sheet thins fastest there. The pressure remains negative throughout the sheet, but the pressure and its gradient decrease as time increases. ### Robin boundary conditions (moderate elasticity) Imposing a Robin boundary condition at the moving end (BC Case III) of a sheet with moderate elasticity leads to the formation of a meniscus there, as shown in Fig. 9. The sheet thins primarily in the middle and left (fixed) end of the sheet, with a narrow portion of the fluid at the right traveling at roughly the same speed as the right (moving) end. The pressure remains positive at the left end due to capillarity, but becomes negative throughout the part of the sheet that forms the meniscus. Fig. 10 shows the results of imposing a Robin condition at the fixed end on the left (BC Case IV). This meniscus is smaller in both height and width than that of Fig. 9, where the Robin condition is imposed at the right. As observed in Fig. 9, thinning corresponds to increased strain rate \(u_{x}\) in the portion of the sheet where it occurs. As time increases the meniscus grows, and the pressure becomes large and negative at \(x=0\), while approaching zero in the rest of the film. Figure 8: Profiles of sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for weak elasticity when \(\gamma=0.1\) and \(v_{0}\)=1. Table 3 summarizes the differences: in both cases, the maximum sheet thickness \(h_{max}\) occurs at the end where the Robin condition is enforced. 
When the condition is enforced at the left (Case IV), both \(h_{max}\) and the range of observed sheet thicknesses (\(\Delta h=h_{max}-h_{min}\)) are smaller, and at the final time \(t=4\), \(h_{min}\) is less than half the corresponding value when the Robin condition is imposed at the right (Case III). ### Location of minimum thickness As mentioned before, in the weak elasticity case, the evolution of the sheet is symmetric about the midpoint for the chosen boundary and initial conditions. The minimum sheet Figure 10: Profiles of sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for moderate elasticity with BC Case IV, a Robin boundary condition at the left end; \(\nu=0.1\), \(\gamma=0.025\), and \(v_{0}=1\). Figure 9: Profiles of sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for moderate elasticity with BC Case III, a Robin boundary condition at the right end; \(\nu=0.1\), \(\gamma=0.025\), and \(v_{0}=1\). thickness \(h_{min}\) begins, and remains, at the midpoint throughout the evolution. For moderate elasticity, however, the situation is more complicated. Fig. 11 summarizes a range of results for different BCs, with the initial condition \(h(x,0)=a+b\cos(2\pi x)\) with \(a=0.9\), \(b=0.1\) (as used in the results of Secs. IV.1 and IV.2 above), and \(\gamma=0.025\) unless otherwise noted. Fig. 11 demonstrates that, even with a simple initial film shape that is initially symmetric about the midpoint, the minimum thickness migrates from the midpoint, and can occur in a variety of locations on the sheet that depend on the boundary conditions imposed. If we consider BC Case I (homogeneous Neumann conditions on \(h\)), with \(\gamma=0.1\), then the minimum rapidly migrates to the right end of the domain, by about \(t=0.25\). For BC Case I with \(\gamma=0.025\) (not shown in Fig. 11), the minimum remains in the right half of the domain near \(x=0.5\). Allowing a slight slope on the end (Case II, with \(c=0.1\)) keeps the minimum slightly more centered than BC Case I for the same \(\gamma\). With \(\gamma=0.025\) and Case I and II BCs, the minimum starts in the center of the sheet (as dictated by the initial condition), shifts to the right by \(t=0.15\) or so, and then slowly begins to approach the center of the sheet again. A Robin boundary condition on the right (Case III) leads to a minimum location that begins similarly to Case I: the minimum shifts to about \(\xi=x/s(t)=0.7\), but then stays there. A Robin boundary condition on the left (Case IV) causes the location of the minimum to move around the most. Referring to Fig. 10, we see that for early times, the sheet has two local minima, with the global minimum closest to the moving end. As the sheet lengthens, that dip flattens, and the global minimum shifts to the bottom of the steep meniscus near the fixed end. For the remaining time, the minimum stays close to the left (fixed) end. This switch in the location of the global minimum is clearly seen in Fig. 11. \begin{table} \begin{tabular}{l c c c c c c} & & \(t=0.5\) & & & \(t=4\) & \\ Robin Location & \(h_{max}\) & \(h_{min}\) & \(\Delta h\) & \(h_{max}\) & \(h_{min}\) & \(\Delta h\) \\ \hline Right end (BC Case III, Fig. 9) & 0.835 & 0.497 & 0.338 & 0.763 & 0.086 & 0.676 \\ Left end (BC Case IV, Fig. 10) & 0.758 & 0.538 & 0.220 & 0.431 & 0.037 & 0.394 \\ \end{tabular} \end{table} Table 3: Comparison of imposing Robin boundary conditions at either end of the sheet (moderate elasticity solutions of Figs. 9 and 10). 
Here \(h_{max}\) is the maximum sheet thickness, \(h_{min}\) is the minimum sheet thickness, and \(\Delta h\) = \(h_{max}-h_{min}\). ### Varying the surface tension (moderate elasticity) We summarize the effect of the surface tension \(\gamma\) on the sheet thickness for the moderate elasticity model in Fig. 12, where we compare a range of \(\gamma\)-values, spanning four orders of magnitude. The first plot of Fig. 12 shows the minimum sheet thickness \(h_{min}\) versus time \(t\) on a semilog scale. The relationship between \(h_{min}\) and \(\gamma\) is not monotone; the largest values of \(h_{min}\) for all values of time occur when surface tension is largest (\(\gamma=1\)), while the smallest values occur at \(\gamma=0.1\). Smaller values of \(\gamma\) lead to intermediate minimum thickness values. The second plot of Fig. 12 shows the film thickness at the right end of the sheet, \(h_{end}=h(s(t),t)\), versus \(t\), on a semilog scale. The minimum thickness may occur at the right end (see Fig. 7). ### Varying the speed of the moving end In the previous results, we varied surface tension \(\gamma\), while fixing the speed of the moving end at \(v_{0}=1\). Now we vary the speed, for fixed surface tension \(\gamma=0.025\). Figs. 13 and 14 show, for moderate and weak elasticity respectively, how the sheet thickness (as characterized by \(h_{min}\) and \(h_{end}\)) is affected when \(v_{0}\) varies from 0.25 to 2.5. The plots show Figure 11: Location of minimum sheet thickness shown on a fixed domain through time. In all cases, \(h(x,0)=0.9+0.1\cos(2\pi x)\), and \(\gamma=0.025\) except where otherwise stated. In Case II, \(c=0.1\) (see Table 2). For the weakly elastic case, the minimum remains at \(x=0.5\) for all time. \(h_{min}\) and \(h_{end}\) versus time \(t\), on a semilog scale. Unsurprisingly, the faster the speed of the moving end, the thinner the sheet at its minimum, for all time points, and for both moderate and weak elasticity models. Comparing the minimum thickness in Figs. 13 and 14, the trend over time is remarkably similar, although \(h_{min}\) is slightly lower for the case of moderate elasticity. We note that as time progresses the thickness of the sheet at the moving end may merge or cross at around \(h_{end}\approx 0.2\), even when varying the speed. This contrasts with the weak elasticity case shown in Fig. 14: comparing the \(h_{end}\) plots in Figs. 13 and 14, we see that for weak elasticity, the moving end of the sheet continues to decrease in thickness as the speed increases for all points in time. This is another way in which the model with moderate elasticity differs from that with weak elasticity. The moderate elasticity solution for the sheet thickness, axial velocity, and pressure corresponding to Fig. 13 with \(v_{0}=2\) are shown in Fig. 15. While the initial sheet profile is retained, qualitatively, under stretching, the right end is slightly thinner than the left. The slower the speed of the moving end, the more of the original wave is lost as time progresses. Fig. 16 compares sheet evolution in time, for the moderate and weak elasticity cases, for three different values of the sheet extension speed \(v_{0}\) at times \(t=0.5\) and \(t=4\). 
At all speeds, the sheet with weak elasticity remains symmetric about its midpoint, and retains the wavenumber of the initial condition while being stretched over the lengthening domain. Figure 12: Evolution of the minimum sheet thickness, \(h_{min}\), and thickness \(h_{end}\) at the right end of the sheet, as the surface tension varies for moderate elasticity. Results are for BC Case I with \(v_{0}=1\). In some cases, the minimum occurs at the right end. In the left plot, the curve for \(\gamma=0.001\) (in blue) lies directly under that of \(\gamma=0.01\) (in red). This is not the case for moderate elasticity solutions. For the slowest extension speed \(v_{0}=0.5\), the moving end of the sheet thins significantly, such that roughly half of the initial wave is lost by \(t=0.5\), leading to very large differences between the weak and moderate elasticity predictions. When \(v_{0}=1\), more of the original shape is retained, but the moving end still thins significantly relative to the left end; the prediction is again substantially different from the weak elasticity case. Figure 13: Evolution of the minimum sheet thickness, \(h_{min}\), and height of the right end of the sheet, \(h_{end}\), for moderate elasticity as the speed varies from 0.25 to 2.5. Results are for BC Case I with \(\gamma=0.025\). In some cases the minimum occurs at the right end. Figure 14: Evolution of the minimum sheet thickness, \(h_{min}\), and height of the right end of the sheet, \(h_{end}\), for weak elasticity as the speed varies from 0.25 to 2.5. Results are for BC Case I with \(\gamma=0.025\). In some cases the minimum occurs at the right end. The differences between the two models are least pronounced for the fastest extension speed \(v_{0}=2\). At both time points shown, the moderate elasticity model yields a sheet that is only slightly thicker over the left half than the right. The sheet thickness at the left end remains very similar for the two models, but the moving end of the sheet with moderate elasticity is thinner. ### Increasing wavenumber in ICs Imaging of the tear film has on occasion shown stripes or ridges in the lipid layer.[1] To investigate whether our model can sustain multiple waves during extensional flow, we experiment with increasing the wavenumber \(k_{0}\) in the initial condition Eq. (20). For all of the following results, we use the case of moderate elasticity with BC Case I and \(v_{0}=1\). Fig. 17 shows the sheet solution profiles at \(t=0.5\) and \(t=3\) for three different values of the initial wavenumber \(k_{0}\). For each IC, the sheet thickness is shown for three different values of the surface tension, \(\gamma=0.0025,\ 0.01\), and \(0.025\). The lower the surface tension, the more of the original waves are retained as time progresses. We note that the reduction of wavenumber appears to be complete by time \(t=0.5\); after that, the resulting shape primarily stretches as the sheet lengthens (this point is discussed further below). In particular, in the first example with wavenumber \(k_{0}=2\), both waves are retained for the smallest value of \(\gamma\), while for \(\gamma=0.01\) and \(\gamma=0.025\), half a wave and a full wave (respectively) are lost from the initial shape by the final time.
Similar differences are also apparent at higher wavenumbers: for \(k_{0}=2.5\) the smallest surface tension simulation (\(\gamma=0.0025\)) loses just half a wave by the final time, while \(\gamma=0.01\) loses a full wave and \(\gamma=0.025\) loses 1.5 waves; and for \(k_{0}=3\) the simulation for \(\gamma=0.0025\) again loses just half a wave, while \(\gamma=0.01\) loses 1.5 waves and \(\gamma=0.025\) loses 2 full waves. Figure 15: Solutions for sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for moderate elasticity. Results are for BC Case I with \(\gamma=0.025\), and \(v_{0}=2\). We further investigate simulations for \(\gamma=0.0025\), since this value leads to persistent waves in the sheet. For this value of \(\gamma\) we vary the wave amplitude \(b\) in the initial condition Eq. (20) and observe the change of wavenumber over time as the sheet is stretched (specifically, the number of complete waves that are lost); the results are summarized in Table 4. The top row of this table corresponds to the \(\gamma=0.0025\) simulations of Fig. 17. Figure 16: Sheet thicknesses for moderate and weak elasticity models for increasing values of extension speed \(v_{0}\), shown at times \(t=0.5\) and \(t=4\). The solid black curve shows the initial sheet profile for both models at \(t=0\). Results are shown for BC Case I, with \(\gamma=0.025\). Figure 17: Profiles of sheet thickness, \(h\), at \(t=0\) (top curve in each plot), \(t=0.5\) (middle curves), and \(t=3\) (bottom curves) when the initial condition Eq. (20) has wavenumber \(k_{0}=2\), \(2.5\), and \(3\) (\(b=0.1,a=0.9\) in all cases). The higher the surface tension, the more waves are lost over time. Note that the shape of the sheet appears to be largely determined by \(t=0.5\); subsequent evolution results in the extension of the sheet shape, but not the loss of more waves. We see that the value of the initial wavenumber \(k_{0}\) is more influential than the initial wave amplitude \(b\). We also test our earlier assertion, that the reduction in wavenumber appears to be determined at an early stage of the stretching, by running simulations to larger times. We used the event detection option in Matlab and let the sheet stretch until \(h_{min}<0.01\) (assumed to represent sheet breakup in the model). The results are summarized in Table 5, which records the IC used in the simulation, the time to breakup, the number of waves lost from the IC during evolution, and whether the final extremum of sheet thickness at the moving end is a maximum or minimum. In each case, the sheet reached this minimum thickness threshold before any noticeable change in shape from that noted at \(t=0.5\). When the moving end of the sheet is (or evolves to) a local minimum, the sheet "breaks" faster than when the moving end is a local maximum (sheet contains an integer number of full waves). For example, the two ICs with \(k_{0}=2.5\) and \(k_{0}=3\) both lose half a wave under stretching. The curve resulting from \(k_{0}=2.5\) develops a local maximum at the right end, and can stretch for more than twice as long as the curve for \(k_{0}=3\), which develops a local minimum there. Fig. 18 shows the sheet profiles at the time that the thickness reaches the threshold of \(h<0.01\) for four initial conditions. Interestingly, although the sheet profiles that have a minimum at the moving end always appear to break first, the breakup does not always appear at the moving end, but may happen at an interior minimum.
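For readers who prefer to reproduce the stopping rule outside Matlab, the event-detection step can be sketched as follows. This is a minimal Python/SciPy analogue (ours, not the code used for the paper's results): instead of the full method-of-lines system it evolves the uniformly stretching flat sheet of Fig. 6, for which mass conservation with \(u=v_{0}x/s(t)\) and \(s(t)=s_{0}+v_{0}t\) gives \(h(t)=h_{0}s_{0}/s(t)\) exactly; in the full solver the event function would instead return \(\min_{j}h_{j}(t)-0.01\) over the spatial grid.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for the PDE solver: uniformly stretching flat sheet,
# dh/dt = -h*v0/s(t) with s(t) = s0 + v0*t, whose exact solution is
# h(t) = h0*s0/(s0 + v0*t).  The event stops the run once h reaches 0.01.
v0, s0, h0, h_break = 1.0, 1.0, 1.0, 0.01

def rhs(t, y):
    return [-y[0] * v0 / (s0 + v0 * t)]      # y[0] is the (spatially uniform) thickness

def breakup(t, y):
    # In a full method-of-lines code this would be h.min() - h_break.
    return y[0] - h_break

breakup.terminal = True      # stop integrating at the event
breakup.direction = -1       # trigger only when h decreases through the threshold

sol = solve_ivp(rhs, (0.0, 200.0), [h0], events=breakup, rtol=1e-10, atol=1e-12)
print("detected breakup time :", sol.t_events[0][0])
print("exact breakup time    :", s0 * (h0 / h_break - 1.0) / v0)   # = 99 for these values
```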
\begin{table} \begin{tabular}{l c c c c c c c} & & & & \multicolumn{3}{c}{wavenumber \(k_{0}\)} & \\ Amplitude \(b\) & 1 & 1.5 & 2 & 2.5 & 3 & 3.5 & 4 \\ \hline 0.2 & 0 & 0 & 0 & 1/2 & 1/2 & - & - \\ 0.1 & 0 & 0 & 0 & 1/2 & 1/2 & 1 & 1 \\ 0.05 & 0 & 0 & 0 & 1/2 & 1/2 & 1 & 1 \\ 0.025 & 0 & 0 & 0 & 1/2 & 1/2 & 1 & 1 \\ \end{tabular} \end{table} Table 4: Table entries show number of waves lost from initial condition Eq. (20) at \(t=4\) as amplitude \(b\) and wavenumber \(k_{0}\) are varied. Here, \(a=1-b\) in Eq. (20) and \(\gamma=0.0025\). \begin{table} \begin{tabular}{l c c c} Initial condition & Time to \(h<0.01\) & Waves lost & Final extremum at right \\ \hline \(a+b\cos(3\pi x)\) & 12.9935 & None & minimum \\ \(a+b\cos(4\pi x)\) & 13.4664 & None & maximum \\ \(a+b\cos(5\pi x)\) & 10.2892 & 1/2 & maximum \\ \(a+b\cos(6\pi x)\) & 4.2099 & 1/2 & minimum \\ \(a+b/2\cos(5\pi x)\) & 18.2691 & 1/2 & maximum \\ \(a+b/2\cos(6\pi x)\) & 9.5259 & 1/2 & minimum \\ \end{tabular} \end{table} Table 5: Comparison of the time to reach \(h<0.01\), which represents sheet breakup, for various initial conditions when \(\gamma=0.0025\). Here \(a=0.9\) and \(b=0.1\). Figure 18: Sheet thickness, \(h\), at \(t=0\) (top curve in each plot), \(t=0.5\) (second curve), and \(t=3\) (third curve) and the time to reach the threshold thickness of \(h<0.01\) (bottom curve) for initial condition Eq. (20) with \(a=0.9,b=0.1\) and wavenumbers \(k_{0}=1.5\), \(2\), \(2.5\), and \(3\) with \(v_{0}=1\). ### Mechanisms For weak elasticity, the oscillations contained in the initial condition are retained in the sheet throughout time, and are stretched as the sheet lengthens. The sheet retains any symmetry in the initial condition, and the locations of minimum and maximum thickness are unchanged through time when plotted in terms of the coordinate \(\xi=x/s(t)\); see Fig. 19. If we compare the individual terms of the PDE, as shown in Fig. 21 (where only the right half of the domain is shown), we see that it is primarily the extensional terms from \((hu_{x})_{x}\) that balance; the role of surface tension is minor. The velocity profile is nearly linear with small fluctuations in the slope. Pressure remains negative through the entire sheet, as extension is dominating capillarity, and decreases in magnitude as the sheet lengthens. However, for moderate elasticity, solutions are more complicated. We compare the sheet thickness, velocity, and pressure when the initial condition contains either two and a half (\(k_{0}=2.5\)) or three (\(k_{0}=3\)) waves; the solutions are shown in Figs. 22 through 24. When \(k_{0}=2.5\) (Fig. 22), the moving end begins as a thickness minimum. Local low pressure draws fluid toward the moving end, and this local minimum becomes a global maximum by time \(t=0.5\). The minimum thickness occurs in the interior, in the trough closest to the moving Figure 19: Location of local minima in thickness, for weak and moderate elasticity models, at \(t=0,\,0.0625,\,0.125,\,0.25,\,0.5,\,0.75,\,1,\,2,\,3\) on a fixed domain with \(\gamma=0.0025,\,v_{0}=1\). Each plot corresponds to a different wavenumber: \(k_{0}=2,\,2.5,\,3\) in Eq. (20) (with \(a=0.9,b=0.1\)). The sheets thin in time, so lower points correspond to later times. The straight black lines emphasize that locations of thickness minima are stationary for weak elasticity on a fixed domain. For moderate elasticity with \(k_{0}=3\), the local minimum on the right travels to the moving end and becomes the global minimum. end. 
The early rapid movement of fluid toward the moving end is shown in the velocity profile at \(t=0.0625\), where the velocity briefly increases above the pulling velocity (\(v_{0}=1\)) near the moving end. Fluctuations in the velocity profile smooth after this time, and the profile becomes nearly linear. Pressure decays to near zero for \(t>1\). Fig. 23 shows the role of each term in the PDE. At early times, we see that the terms with the highest derivatives are flipping roles. As pressure diminishes, extensional terms take over. When \(k_{0}=3\) (Fig. 24), we see the role of pressure has changed. Together, pressure and extension prevent fluid from keeping up with the moving end, and the right end quickly becomes the global minimum. A boundary layer in the velocity profile is seen to form at the right end of the sheet in the middle column of Fig. 24. A maximum in the pressure develops at the right end by \(t=0.0625\), and remains a global maximum until about \(t=0.25\). The pressure diminishes thereafter. Figure 21: Individual terms in the axial force balance Eq. (4) for the case of weak elasticity shown in Fig. 20 (\(k_{0}=2\)). Note that only the right half of the domain is shown, and the scale of the vertical axis changes in the final plot. Figure 20: Profiles of sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for the case of weak elasticity when \(k_{0}=2\). Results are for BC Case I with \(\gamma=0.0025\) and \(v_{0}=1\). In summary, our results for the moderate elasticity model show that, depending on the initial condition, the number of waves in the sheet may be reduced, and there are significant changes in the shape of the sheet as fluid moves due to changes in pressure. Model parameters, in particular the surface tension \(\gamma\), can also strongly influence the number of waves retained in the sheet under extension; in this subsection such model parameters were fixed. At early times, pressure either cooperates with or opposes extension at the moving end, which redistributes fluid there and may result in the loss of a maximum or minimum in the sheet thickness there. When a maximum is lost from the moving end, a boundary layer forms in the velocity profile. As time increases, the pressure decreases in magnitude, Figure 22: Sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for the case of moderate elasticity when the initial condition has two and a half waves. Results are for BC Case I with \(\gamma=0.0025\) and \(v_{0}=1\). Each row represents a time level; each column shows the respective dependent variable. its influence on the shape of the sheet decreases, and the role of the extension becomes more pronounced. In general, the roles of pressure and extension are more intertwined than in the case of weak elasticity. ## V Discussion and Conclusion We present a new model for describing the extensional 2D flow of nematic liquid crystal sheets with moderate elasticity, and compare results to the analogous weak elasticity model. For moderate elasticity, the pressure, surface tension and elastic energy were all promoted to larger values compared to the weak elasticity case studied by Cummings et al. [38] The axial force balance, Eq. (13), in the new model is of higher spatial order than the model for weak elasticity; in terms of the sheet thickness, the equation is fourth order rather than third in spatial derivatives. This change necessitates an additional boundary condition. Consideration of the individual terms in the sheet tension in Eq. 
(19) motivated the additional condition that we used, \(p_{x}(0,t)=0\). Numerical exploration suggested that the single equation Eq. (15), describing the sheet profile evolution, may be viewed as being dispersive, and that the additional boundary condition may be considered as specifying the Figure 23: Individual terms in the axial force balance Eq. (17) for the case of moderate elasticity whose full profiles are shown in the previous Fig. 22 (only half the domain is shown here). Note that the scale of the vertical axis changes on the second row. value for an incoming characteristic. For initial conditions, we use sinusoidal curves, and we explore a range of initial wavenumbers. Further work could include formulating a consistent initial condition for a Dirichlet condition on either end. We examine the effect of varying surface tension and the speed of the moving on the dynamics of the evolving sheet under stretching. The response of the moderately elastic sheet is markedly different from that of weak elasticity or Newtonian fluids. Cummings et al. [38] modeled liquid crystal with weak elasticity, however this work focused primarily on the effect of an electric field on the liquid crystal. For liquid crystals with moderate elasticity, the elastic quality of the material is demonstrated Figure 24: Profiles of sheet thickness, \(h\), fluid velocity, \(u\), and pressure, \(p\), for the case of moderate elasticity when the initial condition has three waves and \(\gamma=0.0025\). Each row represents a time level; each column shows the dynamics of the respective dependent variable. well in Fig. 19, which shows a recoil in the location of minima in a sheet with multiple waves. In the case of weak elasticity, the minima maintain their relative position in the sheet while undergoing stretching. In Fig. 11, we show that depending on the initial condition, the minimum sheet thickness can occur at almost any position in the sheet, from the very right end, to close to the left end. When varying the surface tension, we again see the elastic quality of the material; see Fig. 12. When varying the speed of the moving end, we see that for the same speed, the sheet with moderate elasticity thins slightly faster than in the case of weak elasticity; see Fig. 13. We also considered dynamics and mechanism for different initial wavenumbers in the sheet profile. We increase the number of waves in the initial condition, and observe the shape of the sheet as it undergoes stretching. We find, as might be expected, that the higher the surface tension, the more waves are lost from the initial shape under stretching. The amplitude of the waves has much less influence than the number of waves, as seen in Table 4. At early times, depending on the number of sinusoidal waves, pressure either aids or opposes extensional flow, which changes the shape of the sheet and may result in the loss of a minimum or maximum at the moving end. When a maximum is lost from the moving end, and specifically when the moving end switches from a maximum to a minimum, we see a boundary layer form in the velocity profile. Fluid flows quickly out of the region at the end, and the sheet is unable to stretch for very long times before numerics fail. This illustrates the more prominent role that pressure plays in determining the shape of the sheet with moderate elasticity; see, for example, Fig. 23. 
The menisci that develop in the thickness profiles when using Robin boundary conditions for moderate elasticity are reminiscent of the profiles found by several previous authors [44; 45; 46; 47; 48; 49; 50; 51] for the aqueous layer of the tear film during a blink. Specifically, BC Case III (Robin condition at the moving end) yields profiles comparable to those of the tear film during the upstroke of a blink. BC Case IV (Robin condition at the fixed end) is similar to the meniscus corresponding to the lower lid during the upstroke when the upper lid would be moving away from it. We note that weak and moderate elasticity limits were considered for a nematic liquid crystal film on a substrate by Lin et al. [52] Those authors found that a larger scaling for the elastic terms (only) introduced an additional term in the single nonlinear PDE for the thickness \(h\); the new term was diffusion-like and is similar to the effect of gravity in Newtonian films.[53] In our work, there is no substrate for the free film, and compared to the weakly elastic limit, we made both the elastic and surface tension parameters larger. As a result, we scaled the pressure to be larger, and the new balance gave us two PDEs, one each for the film thickness \(h\) and axial velocity \(u\), as is typical for extensional flow.[38] The results reported here clearly show elastic behavior, and likely more obviously than the model found by Lin et al.[52] There are some limitations with our model in terms of computing for a longer time interval for for a wider range of parameter values. Both the finite differences and spectral methods work well up until \(t=3\), and for some parameter values, much longer than that; however, to run other cases for longer time may require domain decomposition or an adaptive method to adequately resolve regions of small thickness.[54] Scenarios that result in beads on a string[35; 36] involve much more extension and longer computation times; the deformation we see here is less severe, and it is unclear what would result in our case. We shall continue developing models for the lipid layer of the tear film in the eye. This will require using more realistic parameter values, and modifying the sheet's end speed to be more realistic.[46; 49; 55; 56] We have a model in hand with a shear-dominated aqueous layer added to the lipid layer in the spirit of previous works.[57; 58; 42; 59] Much work remains to connect the dynamics of such models to the observed patterns of the lipid layer in the tear film. ###### Acknowledgements. This work was supported by National Science Foundation grants DMS 1909846, DMS 2206127 and DMS 1815613. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding source. ## Appendix A Derivations ### Ericksen-Leslie equations The Ericksen-Leslie equations describe the flow of nematic liquid crystals; here the \({}^{\prime}\) denotes a dimensional quantity. \[\frac{\partial}{\partial x_{i}^{\prime}}\left(\frac{\partial W^{ \prime}}{\partial\theta_{x_{i}^{\prime}}}\right)-\frac{\partial W^{\prime}}{ \partial\theta}+\tilde{g}_{i}^{\prime}\frac{\partial n_{i}}{\partial\theta} =0, \tag{101a}\] \[-\frac{\partial\pi^{\prime}}{\partial x_{i}^{\prime}}+\tilde{g}_{ k}^{\prime}\frac{\partial n_{k}}{\partial x_{i}^{\prime}}+\frac{\partial\tilde{t}_{ ik}^{\prime}}{\partial x_{k}^{\prime}} =0,\] (101b) \[\frac{\partial v_{i}^{\prime}}{\partial x_{i}^{\prime}} =0. 
\tag{101c}\] The first equation corresponds to conservation of energy; the second to conservation of momentum, and the third to conservation of mass. Expanded, these equations become \[\frac{\partial}{\partial x^{\prime}}\left(\frac{\partial W^{ \prime}}{\partial\theta_{x}}\right)+\frac{\partial}{\partial z^{\prime}}\left( \frac{\partial W^{\prime}}{\partial\theta_{z}}\right)-\frac{\partial W^{ \prime}}{\partial\theta}+\tilde{g}_{x}^{\prime}\frac{\partial n_{x}}{\partial \theta}+\tilde{g}_{z}^{\prime}\frac{\partial n_{z}}{\partial\theta} =0, \tag{102}\] \[-\frac{\partial\pi^{\prime}}{\partial x^{\prime}}+\tilde{g}_{x}^ {\prime}\frac{\partial n_{x}}{\partial x^{\prime}}+\tilde{g}_{z}^{\prime} \frac{\partial n_{z}}{\partial x^{\prime}}+\frac{\partial\tilde{t}_{xz}^{ \prime}}{\partial z^{\prime}}+\frac{\partial\tilde{t}_{xx}^{\prime}}{\partial x ^{\prime}} =0,\] (103) \[-\frac{\partial\pi^{\prime}}{\partial z^{\prime}}+\tilde{g}_{x}^ {\prime}\frac{\partial n_{x}}{\partial z^{\prime}}+\tilde{g}_{z}^{\prime} \frac{\partial n_{z}}{\partial z^{\prime}}+\frac{\partial\tilde{t}_{xx}^{ \prime}}{\partial x^{\prime}}+\frac{\partial\tilde{t}_{zz}^{\prime}}{\partial z ^{\prime}} =0,\] (104) \[\frac{\partial v^{\prime}}{\partial x^{\prime}}+\frac{\partial w^ {\prime}}{\partial z^{\prime}} =0, \tag{105}\] where \[\tilde{g}_{i}^{\prime}= -\gamma_{1}N_{i}^{\prime}-\gamma_{2}e_{ik}^{\prime}n_{k}, e_{ij}^{\prime} =\frac{1}{2}\left(\frac{\partial v_{i}^{\prime}}{\partial x_{j}^{ \prime}}+\frac{\partial v_{j}^{\prime}}{\partial x_{i}^{\prime}}\right), \tag{106}\] \[N_{i}^{\prime}=\dot{n}_{i}^{\prime}-\omega_{ik}^{\prime}n_{k}, \omega_{ij}^{\prime} =\frac{1}{2}\left(\frac{\partial v_{i}^{\prime}}{\partial x_{j}^ {\prime}}-\frac{\partial v_{j}^{\prime}}{\partial x_{i}^{\prime}}\right), \pi^{\prime} =\ p^{\prime}+W^{\prime},\] (107) \[W^{\prime}=\frac{1}{2}\bigg{[}K_{1}(\nabla^{\prime}\cdot{\bf n })^{2}+K_{2}({\bf n}\cdot\nabla^{\prime}\wedge{\bf n})^{2}+K_{3}(({\bf n} \cdot\nabla^{\prime}){\bf n})\cdot(({\bf n}\cdot\nabla^{\prime}){\bf n}) \bigg{]},\] (108) \[\tilde{t}_{ij}^{\prime}=\alpha_{1}^{\prime}n_{k}n_{p}e_{kp}^{ \prime}n_{i}n_{j}+\alpha_{2}^{\prime}N_{i}^{\prime}n_{j}+\alpha_{3}^{\prime}N _{j}^{\prime}n_{i}+\alpha_{4}^{\prime}e_{ij}^{\prime}+\alpha_{5}^{\prime}e_{ ik}^{\prime}n_{k}n_{j}+\alpha_{6}^{\prime}e_{jk}^{\prime}n_{k}n_{i}. \tag{109}\] Summation over the repeated indices is understood, and \(\dot{n}_{i}\) denotes the convective derivative of the ith component of \({\bf n}\). The quantities defined above are described in Table 6. We solve the governing equations subject to the boundary conditions that follow. We list the boundary conditions for the top surface, \(z^{\prime}=\frac{1}{2}h^{\prime}(x^{\prime},t^{\prime})\); those for the bottom surface, \(z^{\prime}=-\frac{1}{2}h^{\prime}(x^{\prime},t^{\prime})\), are defined in the same way. The normal stress condition is \[\hat{n}^{\prime}\cdot\sigma^{\prime}\cdot\hat{n}^{\prime}=-\gamma^{\prime} \kappa^{\prime}\hat{n}^{\prime}\quad\text{at}\;z=\frac{1}{2}h^{\prime}(x^{ \prime},t^{\prime}), \tag{110}\] where \(\hat{n}^{\prime}=(-h^{\prime}_{x^{\prime}}/2,1)/\sqrt{1+\left(h^{\prime}_{x^{ \prime}}/2\right)^{2}}\) is the unit vector normal to the top surface, and \(\kappa^{\prime}=-\nabla\cdot\hat{n}^{\prime}\) is the curvature of the top surface, and \(\sigma^{\prime}_{ij}=-p^{\prime}\delta_{ij}+\theta_{i}\theta_{j}+\tilde{t}^{ \prime}_{ij}\). 
The definition of \(\sigma^{\prime}\) is taken from Lin et al., [52] and contains an additional term from the form given in Cummings et al. [38] The tangential stress condition is \[\hat{n}^{\prime}\cdot\sigma^{\prime}\cdot t^{\prime}=0\quad\text{at}\;z^{\prime }=\frac{1}{2}h^{\prime}(x^{\prime},t^{\prime}), \tag{101}\] where \(t^{\prime}=(1,h^{\prime}_{x^{\prime}}/2)/\sqrt{1+\left(h^{\prime}_{x^{\prime}} /2\right)^{2}}\) is the unit vector tangent to the top surface. The kinematic boundary condition is \[w^{\prime}=\frac{1}{2}\left(h^{\prime}_{t^{\prime}}+u^{\prime}h^{\prime}_{x^{ \prime}}\right)\quad\text{at}\;z^{\prime}=\frac{1}{2}h^{\prime}(x^{\prime},t^ {\prime}). \tag{102}\] Finally, the anchoring boundary condition, in the absence of an electric field, is \[\theta=\theta_{B}\quad\text{at}\;z^{\prime}=\frac{1}{2}h^{\prime}(x^{\prime}, t^{\prime}). \tag{103}\] ## 2 Scalings for weak elasticity The scalings for weak elasticity are \[x^{\prime}=Lx, z^{\prime}=\delta Lz, t^{\prime}=\frac{L}{U}t, \gamma^{\prime}=\frac{\mu U}{\delta}\gamma,\] \[u^{\prime}=Uu, w^{\prime}=\delta Uw, h^{\prime}=\hat{h}h, \hat{N}=\frac{K}{\mu U\delta L},\] \[p^{\prime}=\frac{\mu U}{L}p, W^{\prime}=\frac{K}{\delta^{2}L^{2}}W, \alpha^{\prime}_{i}=\mu\alpha_{i}.\] Nondimensionalizing with these scalings yields \[\hat{N}\frac{\partial}{\partial x}\left(\frac{\partial W}{ \partial\theta_{x}}\right)+\hat{N}\frac{\partial}{\partial z}\left(\frac{ \partial W}{\partial\theta_{z}}\right)-\hat{N}\frac{\partial W}{\partial\theta }+g_{x}\frac{\partial n_{x}}{\partial\theta}+g_{z}\frac{\partial n_{z}}{ \partial\theta}=0, \tag{104}\] \[-\delta^{2}\frac{\partial p}{\partial x}-\delta\hat{N}\frac{ \partial W}{\partial x}+\delta g_{x}\frac{\partial n_{x}}{\partial x}+\delta g _{z}\frac{\partial n_{z}}{\partial x}+\delta\frac{\partial t_{xx}}{\partial x }+\frac{\partial t_{xz}}{\partial z}=0,\] (105) \[-\delta\frac{\partial p}{\partial z}-\hat{N}\frac{\partial W}{ \partial z}+g_{x}\frac{\partial n_{x}}{\partial z}+g_{z}\frac{\partial n_{z}}{ \partial z}+\delta\frac{\partial t_{xx}}{\partial x}+\frac{\partial t_{zz}}{ \partial z}=0. \tag{106}\] Then, the leading order system of equations is \[h_{t}+(hu)_{x}=0, \tag{107}\] \[\frac{F(\theta_{B})}{G(\theta_{B})}(hu_{x})_{x}+\frac{\gamma}{2}hh _{xxx}=0. \tag{108}\] where \[G(\theta_{B}) = \alpha_{1}-2\alpha_{2}+2\alpha_{3}+8+2\alpha_{5}+2\alpha_{6}-\alpha _{1}\cos(4\theta_{B}) \tag{19}\] \[-2\cos(2\theta_{B})(\alpha_{2}+\alpha_{3}-\alpha_{5}+\alpha_{6}),\] \[F(\theta_{B}) = \alpha_{1}(-\alpha_{2}+\alpha_{3}+8+2\alpha_{5}+2\alpha_{6})- \alpha_{2}(8+\alpha_{5}+3\alpha_{6})\] (20) \[+\alpha_{3}(8+\alpha_{5}+\alpha_{6})+32+\alpha_{5}(16+2\alpha_{5} +4\alpha_{6})+\alpha_{6}(16+2\alpha_{6})\] \[-2\cos(2\theta_{B})(\alpha+4+\alpha_{5}+\alpha_{6})(\alpha_{2}+ \alpha_{3}-\alpha_{5}+\alpha_{6})\] \[-\cos(4\theta_{B})\Big{[}\alpha_{1}\alpha_{2}-\alpha_{1}\alpha_{3 }+(\alpha_{2}+\alpha_{3})(\alpha_{5}-\alpha_{6})\Big{]}.\] In the Newtonian case, all viscosities are zero (\(\alpha_{i}=0,\ i=1,\cdots 6\)), and \(F(\theta_{B})/G(\theta_{B})=4\). 
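As a quick consistency check of the long expressions above, the ratio \(F(\theta_{B})/G(\theta_{B})\) is easy to evaluate numerically; the short Python sketch below (ours) confirms the Newtonian value of 4 quoted in the text. We read the bare \(\alpha\) appearing in the printed \(F(\theta_{B})\) as \(\alpha_{1}\), which we assume is a typesetting slip; note that \(\alpha_{4}\) does not enter either expression as written.

```python
import numpy as np

def G(theta_B, a):
    # G(theta_B) as printed above; a = (alpha1, ..., alpha6) are the nondimensional Leslie viscosities.
    a1, a2, a3, a4, a5, a6 = a
    return (a1 - 2*a2 + 2*a3 + 8 + 2*a5 + 2*a6
            - a1*np.cos(4*theta_B)
            - 2*np.cos(2*theta_B)*(a2 + a3 - a5 + a6))

def F(theta_B, a):
    # F(theta_B) as printed above, with the bare "alpha" read as alpha1 (our assumption).
    a1, a2, a3, a4, a5, a6 = a
    return (a1*(-a2 + a3 + 8 + 2*a5 + 2*a6) - a2*(8 + a5 + 3*a6)
            + a3*(8 + a5 + a6) + 32 + a5*(16 + 2*a5 + 4*a6) + a6*(16 + 2*a6)
            - 2*np.cos(2*theta_B)*(a1 + 4 + a5 + a6)*(a2 + a3 - a5 + a6)
            - np.cos(4*theta_B)*(a1*a2 - a1*a3 + (a2 + a3)*(a5 - a6)))

theta_B = 0.3                    # any anchoring angle works for this check
newtonian = np.zeros(6)          # all Leslie viscosities zero
print(F(theta_B, newtonian) / G(theta_B, newtonian))   # prints 4.0, as stated in the text
```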
\begin{table} \begin{tabular}{l l} Quantity & Description \\ \hline \(\mathbf{v}^{\prime}=(u^{\prime},0,w^{\prime})\) & velocity field of the flow \\ \(\mathbf{n}=(\sin\theta,0,\cos\theta)\) & director field \\ \(\theta(x,z,t)\) & angle the director angle makes with the z-axis \\ \(p^{\prime}\) & pressure \\ \(W^{\prime}\) & bulk energy density \\ \(\pi^{\prime}=p^{\prime}+W^{\prime}\) & modified pressure \\ \(\tilde{l}^{\prime}_{ij}\) & extra stress tensor (viscous stress) \\ \(\sigma\) & stress tensor \\ \(\alpha^{\prime}_{i}\), \(i=1,...,6\) & Leslie viscosities (Newtonian: \(\mu^{\prime}=\alpha^{\prime}_{4}/2\), all other \(\alpha_{i}=0\)) \\ \(\gamma^{\prime}_{1}=\alpha^{\prime}_{3}-\alpha^{\prime}_{2}\) & rotational/twist viscosity \\ \(\gamma^{\prime}_{2}=\alpha^{\prime}_{6}-\alpha^{\prime}_{5}\) & torsion coefficient \\ \(K_{1}\), \(K_{2}\), \(K_{3}\) & elastic constants representing splay, twist, and bend respectively \\ \(\omega^{\prime}_{ij}\) & vorticity tensor \\ \(e^{\prime}_{ij}\) & rate of strain tensor \\ \(N_{i}\) & co-rotational time flux of the director \(\mathbf{n}\) \\ \end{tabular} \end{table} Table 6: Parameters and variables used in the model.[60] Deriving the equations for moderate elasticity To consider the case of moderate elasticity, we rescale the inverse Ericksen number, the pressure, the surface tension as follows, while keeping the other scalings the same. \[\hat{N}=\frac{K}{\mu UL},\quad p^{\prime}=\frac{\mu U}{\delta L}p,\quad\gamma^{ \prime}=\frac{\mu U}{\delta^{2}}\gamma.\] Using the scaling for moderate elasticity, the nondimensionalized governing equations become \[\hat{N}\frac{\partial}{\partial x}\left(\frac{\partial W}{ \partial\theta_{x}}\right)+\hat{N}\frac{\partial}{\partial z}\left(\frac{ \partial W}{\partial\theta_{z}}\right)-\hat{N}\frac{\partial W}{\partial \theta}+\delta g_{x}\frac{\partial n_{x}}{\partial\theta}+\delta g_{z}\frac{ \partial n_{z}}{\partial\theta}=0,\] (A21) \[-\delta\frac{\partial p}{\partial x}-\hat{N}\frac{\partial W}{ \partial x}+\delta g_{x}\frac{\partial n_{x}}{\partial x}+\delta g_{z}\frac{ \partial n_{z}}{\partial x}+\delta\frac{\partial t_{xx}}{\partial x}+\frac{ \partial t_{xz}}{\partial z}=0,\] (A22) \[-\delta\frac{\partial p}{\partial z}-\hat{N}\frac{\partial W}{ \partial z}+\delta g_{x}\frac{\partial n_{x}}{\partial z}+\delta g_{z}\frac{ \partial n_{z}}{\partial z}+\delta^{2}\frac{\partial t_{xx}}{\partial x}+\delta \frac{\partial t_{zz}}{\partial z}=0.\] (A23) We consider the case with no electric field, so we set \(\mathbf{E}=\mathbf{0}\). Now we asymptotically expand the dependent variables in powers of \(\delta=\hat{h}/L\). For example, \[u=u_{0}(x,z,t)+\delta\,u_{1}(x,z,t)+\delta^{2}\,u_{2}(x,z,t)+\cdots,\] and we do the same for \(\theta,\,v,\,p,\) and \(h\). We substitute these into the governing equations and boundary conditions, and then collect like powers of \(\delta\). 
Then at order 1, \[\frac{1}{2}\big{(}\alpha_{2}+\alpha_{3}-\alpha_{5}+\alpha_{6}+2 \alpha_{1}\cos 2\theta_{0}\big{)}\sin 2\theta_{0}u_{0z}\theta_{0z}-\hat{N} \theta_{0z}\theta_{0zz}\] \[+\frac{1}{2}\left[2+(\alpha_{5}-\alpha_{2})\cos^{2}\theta_{0}+( \alpha_{3}+\alpha_{6})\sin^{2}\theta_{0}+\frac{1}{2}\alpha_{1}\sin^{2}2\theta_ {0}\right]u_{0zz}=0,\] (A24) \[-\hat{N}\theta_{0z}\theta_{0zz}=0,\] (A25) \[\hat{N}\theta_{0zz}=0,\] (A26) \[u_{0x}+w_{0z}=0,\] (A27) \[-\hat{N}\theta_{0z}^{2}=0,\] (A28) \[\frac{1}{2}\left[2+(\alpha_{5}-\alpha_{2})\cos^{2}\theta_{0}+( \alpha_{3}+\alpha_{6})\sin^{2}\theta_{0}+\frac{1}{2}\alpha_{1}\sin^{2}2\theta_ {0}\right]u_{0z}\] \[-\frac{1}{2}\hat{N}\theta_{0z}^{2}h_{0x}-\hat{N}\theta_{0z}\theta _{0x}=0,\] (A29) \[w_{0}-\frac{1}{2}\big{(}h_{0t}+u_{0}h_{0x}\big{)}=0,\] (A30) \[\theta_{0}-\theta_{B}=0.\] (A31) Starting from the top, the equations represent momentum in the x and z-components respectively, energy, and continuity, followed by the boundary conditions on the top surface: normal and tangential stress, kinematic, and anchoring. Solving the order 1 equations, we obtain \[\theta_{0}=\theta_{B},\] (A32) \[u_{0}=u_{0}(x,t),\] (A33) \[w_{0}=w_{0}(x,z,t)=-u_{0x}z,\] (A34) \[h_{0t}+(u_{0}h_{0})_{x}=0,\] (A35) where (A35) is the mass balance equation. At order 1, we do not have an equation for \(h\), so to close the system, we continue on to order \(\delta\). After making the above substitutions, the order \(\delta\) equations are \[\frac{1}{8}\Big{[}8+\alpha_{1}-2(\alpha_{2}+\alpha_{3}+\alpha_{5 }+\alpha_{6})-2(\alpha_{2}+\alpha_{3}-\alpha_{5}+\alpha_{6})\cos 2\theta_{B}\] \[-\alpha_{1}\cos 4\theta_{B}\Big{]}u_{1zz}-p_{0x} =0,\] (A36) \[p_{0z} =0,\] (A37) \[\hat{N}\theta_{1zz} =0,\] (A38) \[u_{1x}+w_{1z} =0,\] (A39) \[p_{0}+\frac{\gamma}{2}h_{0xx} =0,\] (A40) \[-\frac{1}{2}(\alpha_{1}\cos 2\theta_{B}-\alpha_{5}+\alpha_{6}) \sin 2\theta_{B}u_{0x}\] \[+\frac{1}{4}\Big{[}4-2(\alpha_{2}-\alpha_{5})\cos^{2}\theta_{B}+ 2(\alpha_{3}+\alpha_{6})\sin^{2}\theta_{B}+\alpha_{1}\sin^{2}2\theta_{B}\Big{]} u_{1z} =0,\] (A41) \[w_{1}-\frac{1}{2}(h_{1t}+u_{1}h_{0x}+u_{0}h_{1x}) =0,\] (A42) \[\theta_{1} =0.\] (A43) Solving, we obtain \[p_{0}(x,t) =-\frac{\gamma}{2}h_{0xx},\] (A44) \[u_{1}(x,z,t) =\frac{p_{0x}}{F(\theta_{B})}\left(\frac{z^{2}}{2}-\frac{h_{0}}{2 }z\right)-\frac{A(\theta_{B},u_{0x})}{B(\theta_{B})}z+K(x,t),\] (A45) \[\theta_{1}(x,z,t) =0,\] (A46) where \(K(x,t)\) is as yet unknown. While we have found some higher order terms, we must proceed to order \(\delta^{2}\) to find an equation for \(h\). We make the above substitutions, in addition to the substitution \(w_{1zz}=-u_{1xz}\), obtained from differentiating the the continuity equation. At this order, the equations are too long to be profitable displayed in their entirety, so we summarize the steps. 
First, we use \(z\)-momentum and the normal stress condition to determine \[p_{1}(x,z,t)=G_{1}(\theta_{B})u_{0x}+H_{1}(\theta_{B})\gamma(2z-h_{0})h_{0xxx} -\frac{\gamma}{2}h_{1xx},\] (A47) where \[G_{1}(\theta_{B})= \frac{1}{4}\big{[}-8-\alpha_{1}-2\alpha_{5}-2\alpha_{6}-2(\alpha_ {1}+\alpha_{5}+\alpha_{6})\cos 2\theta_{B}-\alpha_{1}\cos 4\theta_{B}\big{]}\] \[+\frac{(-\alpha_{1}-\alpha_{2}-\alpha_{3}-\ \alpha_{5}-\alpha_{6}- \alpha_{1}\cos 2\theta_{B})(-\alpha_{5}+\alpha_{6}+\alpha_{1}\cos 2\theta_{B}) \sin 2\theta_{B}^{2}}{8+\alpha_{1}-2\alpha_{2}+2\alpha_{3}+2\alpha_{5}+2 \alpha_{6}-2(\alpha_{2}+\alpha_{3}-\alpha_{5}+\alpha_{6})\cos 2\theta_{B}- \alpha_{1}\cos 4\theta_{B}},\] \[H_{1}(\theta_{B})=\frac{(-\alpha_{1}-\alpha_{2}-\alpha_{3}-\alpha_{5}-\alpha_ {6}\ -\alpha_{1}\cos 2\theta_{B})\sin 2\theta_{B}}{2\big{[}8+\alpha_{1}-2 \alpha_{2}+2\alpha_{3}+2\alpha_{5}+2\alpha_{6}-2(\alpha_{2}+\alpha_{3}-\alpha _{5}+\alpha_{6})\cos 2\theta_{B}-\alpha_{1}\cos 4\theta_{B}\big{]}}.\] Then, substituting \(p_{1}\) and others into the x-momentum equation, we solve for \(u_{2zz}\) and integrate across the sheet. From the tangential stress condition we get the solvability condition, which leads to \[\frac{B_{2}(\theta_{B})}{A_{2}(\theta_{B})}\left(h_{0}u_{0x} \right)_{x}+\frac{C_{2}(\theta_{B})}{A_{2}(\theta_{B})}\gamma(h_{0}^{2}h_{0xxx })_{x}+4\gamma h_{0}h_{1xxx}=0,\] (A48) where \[A_{2}(\theta_{B})= \ 8+\alpha_{1}-2\alpha_{2}+2\alpha_{3}+2\alpha_{5}+2\alpha_{6},\] \[-2(\alpha_{2}+\alpha_{3}-\alpha_{5}+\alpha_{6})\cos 2\theta_{B}- \alpha_{1}\cos 4\theta_{B},\] (A49) \[B_{2}(\theta_{B})= -4\Big{[}2\alpha_{1}\alpha_{2}+2\alpha_{1}\alpha_{3}-2\alpha_{1} \alpha_{5}+2\alpha_{1}\alpha_{6}\Big{]}\cos 6\theta_{B}\] \[-4\Big{[}-64-16\alpha_{1}-\alpha_{1}^{2}+16\alpha_{2}+2\alpha_{1 }\alpha_{2}-16\alpha_{3}-2\alpha_{1}\alpha_{3}-32\alpha_{5}\] \[-4\alpha_{1}\alpha_{5}+6\alpha_{2}\alpha_{5}-2\alpha_{3}\alpha_{ 5}-4\alpha_{5}^{2}-32\alpha_{6}-4\alpha_{1}\alpha_{6}+2\alpha_{2}\alpha_{6}-6 \alpha_{3}\alpha_{6}\] \[-8\alpha_{5}\alpha_{6}-4\alpha_{6}^{2}+2\big{(}\alpha_{2}+\alpha _{3}-\alpha_{5}+\alpha_{6}\big{)}\big{(}\alpha_{1}+2[4+\alpha_{5}+\alpha_{6 }]\big{)}\cos 2\theta_{B}\] \[+2\big{(}\alpha_{1}[\alpha_{2}-\alpha_{3}]-[\alpha_{2}+\alpha_{3 }][\alpha_{5}-\alpha_{6}]\big{)}\cos 4\theta_{B}+\alpha_{1}^{2}\cos 8\theta_{B} \Big{]},\] (A50) \[C_{2}(\theta_{B})= -8\big{(}\alpha_{2}+\alpha_{3}+\alpha_{1}\cos 2\theta_{B}\big{)} \sin\theta_{B}.\] (A51) Note that the undetermined function \(K(x,t)\) from \(u_{1}\) does not appear in the solvability condition. However, if take the continuity equation from order \(\delta\), integrate with respect to \(z\), and apply the kinematic boundary conditions, we find \[h_{1t}+u_{0}h_{1x}+\big{(}h_{0}K(x,t)\big{)}_{x}-\frac{\gamma}{6A_{2}(\theta_{B} )}\big{(}h_{0}^{3}h_{0xxx}\big{)}_{x}=0, \tag{100}\] where \(A_{2}(\theta_{B})\) is as defined above. To close the system, we assume that \[h(x,t)=h_{0}(x,t)+\delta^{2}h_{2}(x,t)+O(\delta^{3}). \tag{101}\] In other words, there is no correction to \(h\) at order \(\delta\). Then (101), (102), and (100) simplify to \[h_{0t}+(u_{0}h_{0})_{x} =0, \tag{102}\] \[B_{2}(\theta_{B})\left(h_{0}u_{0x}\right)_{x}+C_{2}(\theta_{B}) \gamma(h_{0}^{2}h_{0xxx})_{x} =0,\] (103) \[\big{(}h_{0}K(x,t)\big{)}_{x}-\frac{\gamma}{6A_{2}(\theta_{B})} \big{(}h_{0}^{3}h_{0xxx}\big{)}_{x} =0. \tag{104}\] So we have three equations with three unknowns: \(u_{0}(x,t),\ h_{0}(x,t)\), and \(K(x,t)\). We solve for \(K(x,t)\) by integrating Eq. 
(104) with respect to \(x\). We determine the constant of integration by integrating \(u_{1}\) through the depth; no net flux along the film due to \(u_{1}\) results in \[K(x,t) =\frac{\gamma}{2}\frac{h_{0}^{2}h_{0xxx}}{3A_{2}(\theta_{B})}, \tag{105}\] \[u_{1}(x,z,t) =\frac{p_{0x}}{F(\theta_{B})}\left(\frac{z^{2}}{2}-\frac{h_{0}}{ 2}z\right)-\frac{A(\theta_{B},u_{0x})}{B(\theta_{B})}z+\frac{\gamma}{2}\frac{ h_{0}^{2}h_{0xxx}}{3A_{2}(\theta_{B})}. \tag{106}\] Then, to find the axial velocity \(u_{0}\) and the sheet thickness \(h_{0}\), we can solve the coupled system \[h_{0t}+(u_{0}h_{0})_{x} =0, \tag{107}\] \[\left(h_{0}u_{0x}\right)_{x}+\hat{\gamma}(h_{0}^{2}h_{0xxx})_{x} =0, \tag{108}\] where \[\hat{\gamma}=\frac{C_{2}(\theta_{B})}{B_{2}(\theta_{B})}\gamma. \tag{109}\] In the paper we take \(\hat{\gamma}=\gamma\).
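For completeness, we sketch how the closed system for \(h_{0}\) and \(u_{0}\) just derived can be advanced numerically. Integrating the force balance once in \(x\) gives \(h_{0}u_{0x}+\hat{\gamma}h_{0}^{2}h_{0xxx}=C(t)\), so for a given thickness profile the axial velocity follows from a single quadrature, with \(C(t)\) fixed by the end conditions (here taken as \(u_{0}(0,t)=0\) at the fixed end and \(u_{0}(s(t),t)=v_{0}\) at the moving end, as in the main text). The Python sketch below (ours, not the code used for the results) performs this elliptic sub-step on the sinusoidal initial profile; a full solver would then update \(h_{0}\) from the mass balance and repeat on the lengthened domain.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Elliptic sub-step: given h0(x), recover u0(x) from
#   h0*u0_x + ghat*h0^2*h0_xxx = C(t),   u0(0) = 0,   u0(s) = v0.
a, b, k0 = 0.9, 0.1, 1            # h0(x,0) = a + b*cos(2*pi*k0*x) on [0, 1]
ghat, v0, s = 0.025, 1.0, 1.0     # surface-tension parameter, end speed, current domain length

x = np.linspace(0.0, s, 2001)
h = a + b * np.cos(2 * np.pi * k0 * x)
h_xxx = b * (2 * np.pi * k0) ** 3 * np.sin(2 * np.pi * k0 * x)   # exact third derivative of the IC

# u0(x) = C*I1(x) - ghat*I2(x), with I1 = int_0^x dx'/h0 and I2 = int_0^x h0*h0_xxx dx'.
I1 = cumulative_trapezoid(1.0 / h, x, initial=0.0)
I2 = cumulative_trapezoid(h * h_xxx, x, initial=0.0)
C = (v0 + ghat * I2[-1]) / I1[-1]          # enforce u0(s) = v0
u = C * I1 - ghat * I2                     # u0(0) = 0 automatically

print(f"C = {C:.4f},  u(0) = {u[0]:.4f},  u(s) = {u[-1]:.4f}")
# A full solver would now advance h0 via h0_t = -(u0*h0)_x and repeat the
# quadrature on the lengthened domain s(t) = 1 + v0*t.
```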
2304.04439
Unparticle effects at the MUonE experiment
We investigate possible effects of unparticles at the MUonE experiment by considering a general model for unparticle with broken scale invariance, characterized by the scaling dimension $d$ and the energy scale $\mu$ at which the scale invariance is broken. Taking into account available relevant constraints on the couplings of the unparticles with the Standard Model (SM) leptons, we found that the MUonE experiment at the level of 10 ppm systematic accuracy is sensitive to such effects if $1<d\lesssim 1.4$ and $1\le \mu \lesssim 12$ GeV for vector unparticles. The effects of scalar unparticles are too feeble to be detected. The vector unparticles can induce a significant shift on the best-fit value of $a_\mu^\text{had}$ at the MUonE, thereby providing an opportunity to detect unparticles or to obtain a new bound on the unparticle-SM couplings in the case of no anomaly.
Duc Ninh Le, Van Dung Le, Duc Truyen Le, Van Cuong Le
2023-04-10T07:53:03Z
http://arxiv.org/abs/2304.04439v2
# Unparticle effects at the MUonE experiment ###### Abstract We investigate possible effects of unparticles at the MUonE experiment by considering a general model for unparticle with broken scale invariance, characterized by the scaling dimension \(d\) and the energy scale \(\mu\) at which the scale invariance is broken. Taking into account available relevant constraints on the couplings of the unparticles with the Standard Model (SM) leptons, we found that the MUonE experiment at the level of 10 ppm systematic accuracy is sensitive to such effects if \(1<d\lesssim 1.4\) and \(1\leq\mu\lesssim 12\) GeV for vector unparticles. The effects of scalar unparticles are too feeble to be detected. The vector unparticles can induce a significant shift on the best-fit value of \(a_{\mu}^{\rm had}\) at the MUonE, thereby providing an opportunity to detect unparticles or to obtain a new bound on the unparticle-SM couplings in the case of no anomaly. ###### Contents * 1 Introduction * 2 Unparticle models * 3 Constraints on unparticle-SM interactions * 4 MUonE experiment * 5 Unparticle effects at the MUonE experiment * 6 Conclusions ## 1 Introduction Unparticle is an interesting idea proposed by Georgi in 2007 [1]. The idea is that there exists a new hidden sector which is scale invariant. This sector is called unparticle. Unparticle can interact with the Standard Model (SM) fields, leading to possible effects in precision low energy experiments. A scheme for unparticle can be sketched as follows. At a high energy regime above the SM scale, there exists a UV-completion theory with the SM fields and new fields called Banks-Zaks (BZ) fields [1]. These two sets of fields interact via exchanging particles with masses of the order of \(\Lambda_{\rm UV}\) or higher. Below this energy scale, the effective interaction between the SM fields and the Banks-Zaks fields reads \[\mathcal{L}_{\rm UV}=\frac{c_{\rm UV}}{\Lambda_{\rm UV}^{d_{\rm SM}+d_{\rm UV }-4}}\mathcal{O}_{\rm SM}\mathcal{O}_{\rm UV}, \tag{1}\] where \(\mathcal{O}_{\rm SM}\) is an operator with mass dimension \(d_{\rm SM}\) built out of the SM fields and \(\mathcal{O}_{\rm UV}\) an operator with mass dimension \(d_{\rm UV}\) built out of the BZ fields, \(c_{\rm UV}\) is a dimensionless coupling. At some scale \(\Lambda_{\mathcal{U}}<\Lambda_{\rm UV}\), the interactions among the BZ fields then cause dimensional transmutation to form a scale-invariance system, named above as unparticle. The effective interaction between the SM fields and the unparticle is given by \[\mathcal{L}_{\rm SM-\mathcal{U}}=\frac{c_{\mathcal{U}}}{\Lambda_{\mathcal{U}} ^{d_{\rm SM}+d-4}}\mathcal{O}_{\rm SM}\mathcal{O}_{\mathcal{U}}, \tag{2}\] where \(c_{\mathcal{U}}\) is a dimensionless coupling, \(d\) is the mass dimension of the unparticle operator. Unparticle can be thought of as a set of \(d\) massless particles [1]. The novel feature here is that \(d\) is non-integral. This system is, by definition, scale invariant. It is important to note that we do not require unparticle to be conformal invariant, which is more restrictive (see e.g. [2] for the distinction between scale and conformal invariance). For example, for a conformal field theory, unitary constraint imposes a lower limit on the value of \(d\), namely \(d\geq 1\) for a scalar unparticle system and \(d\geq 3\) for a vector case [2; 3]. Such a constraint on the scaling dimension is not demanded in this study. 
As done in most studies in the literature, we will focus on the range \(1<d<2\), which is most natural since it is close to the particle limit of \(d=1\). Moreover, unparticle effects are largest in this region. The original idea of Georgi [1] assumes that unparticles exist below the scale \(\Lambda_{\mathcal{U}}\). It was then very soon realized that data from cosmology and low-energy experiments puts severe limits on the couplings between the unparticles and the SM sector, see e.g. [4; 5; 6], making it impossible to observe unparticle effects at present or near-future experiments. However, these constraints can be evaded if the scale invariance is broken at a energy scale \(\mu\) which is sufficiently large compared to the scales of the cosmology and low-energy experiment processes. Ref. [7] found that \(\mu\gtrsim 1\,\text{Ge\kern-1.0ptV}\) is enough. The case \(\mu>M_{Z}\) was studied in [8], and \(1\lesssim\mu<M_{Z}\) in [7], using a simple model proposed in [9] for unparticle with broken scale invariance. In the limit \(\mu\to 0\) the unbroken case of Georgi is then recovered. Recently, a new experiment named MUonE has been proposed with the letter of intent submitted to CERN in 2019 [10]. In this experiment, the differential cross section of the elastic \(e\mu\to e\mu\) scattering, occurring at the energy of \(0.4\,\text{Ge\kern-1.0ptV}\), will be measured with very high accuracy. All the systematics effects are expected to be known at 10 ppm [10]. If this experiment is realized it will be an excellent place to probe unparticles in the region of \(\mu\approx 1\,\text{Ge\kern-1.0ptV}\). The purpose of this work is to study unparticle effects at the MUonE experiment taking into account the latest available constraints on the unparticle-SM couplings. The paper is outlined as follows. The unparticle models used in this work are first described in Section 2. In Section 3, we then survey the available constraints on the unparticle-SM couplings using data provided in the literature. In this section, we provide a new bound on the (pseudo-)scalar unparticle couplings using the mono-photon cross section data at LEP2. This result will be useful for unparticle studies. Basics of the MUonE experiment are summarized in Section 4. Our main results are presented in Section 5 where an analytical formula for calculating unparticle effects at the MUonE is given as well as numerical results for sensitivity curves and shifts on the best-fit value of \(a_{\mu}^{\text{had}}\). Summary and conclusions are provided in Section 6. ## 2 Unparticle models We are interested in the interaction between the SM sector and the unparticle as given in Eq. (2) and consider separately the following four operators [11; 12; 1]: \[\frac{c_{S}}{\Lambda_{\mathcal{U}}^{d-1}}\overline{f}f\mathcal{O}_{\mathcal{U} },\quad\frac{ic_{P}}{\Lambda_{\mathcal{U}}^{d-1}}\overline{f}\gamma_{5}f \mathcal{O}_{\mathcal{U}},\quad\frac{c_{V}}{\Lambda_{\mathcal{U}}^{d-1}} \overline{f}\gamma_{\mu}f\mathcal{O}_{\mathcal{U}}^{\mu},\quad\frac{c_{A}}{ \Lambda_{\mathcal{U}}^{d-1}}\overline{f}\gamma_{\mu}\gamma_{5}f\mathcal{O}_{ \mathcal{U}}^{\mu}, \tag{1}\] which are called scalar, pseudo-scalar, vector, and axial-vector unparticle models, respectively. The parameters \(c_{i}\) (\(i=S,P,V,A\)) are dimensionless couplings, \(\mathcal{O}_{\mathcal{U}}\) is a scalar unparticle operator, \(\mathcal{O}_{\mathcal{U}}^{\mu}\) a vector unparticle operator. 
Since the value of \(\Lambda_{\mathcal{U}}\) is unknown, we trade it to \(M_{Z}\) by rewriting the couplings as follows \[c_{X}=\lambda_{X}\left(\frac{\Lambda_{\mathcal{U}}}{M_{Z}}\right)^{d-1},\ \ X=S,P,V,A. \tag{2}\] The operators in Eq. (1) then read: \[\frac{\lambda_{S}}{M_{Z}^{d-1}}\overline{f}f\mathcal{O}_{\mathcal{U}},\quad \frac{i\lambda_{P}}{M_{Z}^{d-1}}\overline{f}\gamma_{5}f\mathcal{O}_{\mathcal{ U}},\quad\frac{\lambda_{V}}{M_{Z}^{d-1}}\overline{f}\gamma_{\mu}f\mathcal{O}_{ \mathcal{U}}^{\mu},\quad\frac{\lambda_{A}}{M_{Z}^{d-1}}\overline{f}\gamma_{\mu }\gamma_{5}f\mathcal{O}_{\mathcal{U}}^{\mu}. \tag{3}\] The propagators of the scalar and vector unparticles have the following interesting forms [2; 11] \[\Delta_{F}(p) =\frac{iZ_{d}}{(-k^{2}-i\epsilon)^{2-d}}, \tag{4}\] \[\Delta_{F}^{\mu\nu}(p) =\frac{iZ_{d}}{(-k^{2}-i\epsilon)^{2-d}}(-g^{\mu\nu}+a\frac{k^{ \mu}k^{\nu}}{k^{2}}), \tag{5}\] where \(k\) is the momentum of the unparticle, \[A_{d}=\frac{16\pi^{5/2}}{(2\pi)^{2d}}\frac{\Gamma(d+1/2)}{\Gamma(d-1)\Gamma(2 d)},\quad Z_{d}=\frac{A_{d}}{2\sin(d\pi)}. \tag{6}\] The parameter \(a\) is only relevant for the axial-vector case as its dependence is canceled out in the case of vector unparticle. Its value depends on the nature of the unparticle: \(a=1\) if the operator \(\mathcal{O}_{\mathcal{U}}^{\mu}\) is transverse [11], or \(a=2(d-2)/(d-1)\) for \(d\geq 3\) if conformal invariance is required [2]. Since \(1<d<2\) in this work, we will set \(a=1\) in the numerical results. We have checked that the dependence on \(a\) is completely negligible, i.e the change of the differential cross section in Eq. (5) is less than 1 ppt when varying \(a\in[0,2]\). Figure 1: Absolute value of the \(Z_{d}\) factor as a function of the scaling dimension \(d\). \(Z_{d}\) is negative when \(d\) is in \((1,2)\) or \((3,4)\), positive in \((2,3)\). The factor \(Z_{d}\) is plotted in Fig. 1. Because of the denominator \(\sin(d\pi)\) this factor is singular at integer values \(d=n\) with \(n\geq 2\) as shown in the plot. We remark that the above unparticle propagators are very different from the usual particle ones. It is characterized only by the scale dimension \(d\) of the unparticle operators. It is interesting to notice that the unparticle propagator (scalar or vector) reproduces the propagator of a massless particle in the limit \(d\to 1\). Note that, if conformal invariance is required then \(d\geq 3\) for the (axial-)vector cases [2], thereby excluding the limit \(d\to 1\). Since kinetic singularity occurs in the limit \(E_{\mathcal{U}}\to 0\) (i.e. vanishing unparticle energy) in the case of \(d<1\)[1], we will focus on the range \(1<d<2\) for both the scalar and vector unparticles, as discussed in the introduction. The nice thing of unparticle is that it provides a new propagator, leading to novel interference effects with the SM amplitudes [11]. This may help us to explain new effects which will be discovered in the future. Adding broken scale invariance is relatively simple, as done in [9]. In this model, it is assumed that the contribution from the broken phase (i.e. energy less than \(\mu\)) is suppressed. The modes with energy less than \(\mu\) are therefore removed from the spectral density function. This affects the unparticle propagator. 
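For reference, Eq. (6) is simple to evaluate; the short Python sketch below (ours, not part of the original analysis) reproduces the sign pattern of \(Z_{d}\) described in the caption of Fig. 1.

```python
import numpy as np
from scipy.special import gamma

def A_d(d):
    # Eq. (6)
    return (16 * np.pi**2.5 / (2 * np.pi)**(2 * d)
            * gamma(d + 0.5) / (gamma(d - 1.0) * gamma(2.0 * d)))

def Z_d(d):
    # Eq. (6); singular at integer d >= 2 because of sin(d*pi)
    return A_d(d) / (2.0 * np.sin(d * np.pi))

for d in (1.1, 1.5, 1.9, 2.5, 3.5):
    print(f"d = {d}:  A_d = {A_d(d):+.4f}   Z_d = {Z_d(d):+.4f}")
# Z_d comes out negative for 1 < d < 2 and 3 < d < 4, positive for 2 < d < 3.
```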
The new propagators taking into account the effect of broken scale invariance now read [7] \[\Delta_{F}(p) =\frac{iZ_{d}}{(-k^{2}+\mu^{2}-i\epsilon)^{2-d}}, \tag{7}\] \[\Delta_{F}^{\mu\nu}(p) =\frac{iZ_{d}}{(-k^{2}+\mu^{2}-i\epsilon)^{2-d}}(-g^{\mu\nu}+a \frac{k^{\mu}k^{\nu}}{k^{2}}), \tag{8}\] which are obtained from Eq. (4) and Eq. (5) by the replacement \((k^{2}+i\epsilon)\to(k^{2}-\mu^{2}+i\epsilon)\). The full scale invariance case as originally proposed by Georgi is recovered in the limit \(\mu\to 0\). ## 3 Constraints on unparticle-SM interactions Most studies on the literature have been focusing on the case of exact scale invariance, namely \(\mu=0\). In this limit, it was found that the Big Bang Nucleosynthesis (BBN) [4] and SN 1987A [13, 14, 15, 16] constraints put a severe limit on the strength of the unparticle-SM interactions, with the former giving a more stringent bound [4, 7]. For \(1<d<2\), the coupling \(|\lambda_{V}|\) between the vector unparticle and the SM fermions in Eq. (3) must be smaller than \(10^{-7}\)[4, 7]. Since this constraint was obtained using dimensional analysis [4], and the dimensions of the vector and scalar unparticles in Eq. (3) are exactly the same, we expect similar result for the (pseudo-)scalar and axial-vector unparticle couplings, namely \(|\lambda_{S}|\), \(|\lambda_{P}|\), \(|\lambda_{A}|<10^{-7}\). With these tiny couplings, it is impossible to detect unparticle effects at the present or near-future collider experiments. For the case \(\mu\gtrsim 1\) GeV, the BBN and SN 1987A constraints are evaded because the scale invariance is broken at the energy scale \(\mu\) sufficiently large compared to the relevant energy scales, \(1\) MeV for BBN and \(30\) MeV for SN 1987A, as already observed in [7]. For the same reason, other constraints from low energy experiments such as the electron and muon anomalous magnetic moments [17], positronium decays [5], neutrino decays into unparticles [18], neutrino-electron scattering [6] are evaded as well. In this paper we consider the MUonE experiment where the scattering energy is \(\sqrt{s}\approx 0.4\,\mathrm{GeV}\) in the center-of-mass system (c.m.s), focusing on the region of the parameter space where \(1<d<2\) and \(\mu\geq 1\,\mathrm{GeV}\). It is then clear that the unparticle effects are largest in the region of \(d\approx 1\) and \(\mu\approx 1\,\mathrm{GeV}\) and decrease when \(d\) or \(\mu\) gets larger. We will show that the MUonE experiment at 10 ppm accuracy is insensitive to the region of \(\mu>12\,\mathrm{GeV}\). For the case \(1\leq\mu\lesssim 12\,\mathrm{GeV}\), we will take into account the available constraints from mono-photon production at LEP2 for (pseudo-)scalar and (axial-)vector unparticles [19], from \(\mu^{+}\mu^{-}\) cross section and forward-backward assymmetry data at LEP-Aleph [20], KEK-Venus [21; 22; 23], PETRA-MarkJ [24; 25] for (axial-)vector unparticles [7], from mono-\(Z\) production at the LHC-CMS for (pseudo-)scalar unparticles [26]. These are the strongest and most relevant constraints on the unparticles that we can find in the literature. They are plotted in Fig. 2. Bound from the mono-\(Z\) production at LEP2 [27] has been ignored because it is weaker compared to the mono-photon data [7]. The 95% CL upper limit (black line) on the (axial-)vector unparticle couplings has been obtained using the differential cross section of the process \(e^{+}e^{-}\to\gamma+\text{unparticle}\) provided in [7; 12]. 
For the case of (pseudo-)scalar unpartiles, our calculation gives \[d\sigma=\frac{A_{d}e^{2}\lambda_{i}^{2}E_{\gamma}}{16\pi^{3}M_{Z}^{2}s}\left( \frac{s-2\sqrt{s}E_{\gamma}-\mu^{2}}{M_{Z}^{2}}\right)^{d-2}\frac{\cos^{2} \theta_{\gamma}}{1-\cos^{2}\theta_{\gamma}}dE_{\gamma}d\Omega, \tag{1}\] where \(\lambda_{i}=\lambda_{S}\), \(\lambda_{P}\) and \(\sqrt{s}\) is the c.m.s energy. Using the LEP2 95% CL upper limit of \(\sigma\approx 0.2\,\mathrm{pb}\) obtained with the kinematic cuts \(E_{\gamma}\in[5\,\mathrm{GeV},(s-\mu^{2})/(2\sqrt{s})]\), \(|\cos\theta_{\gamma}|<0.97\), \(\sqrt{s}=207\) GeV from Ref. [19] we then get the upper bound on the coupling \(\lambda_{i}\) as plotted in Fig. 2 (green line). This new result will be useful for other unparticle studies. Figure 2: Upper limits at 95% CL from LEP, CMS and other experiments (see text) data on (axial-)vector and (pseudo-)scalar unparticle parameters. The regions above the curves are excluded. In Fig. 2 we have set \(\mu=0\) for the mono-photon and mono-\(Z\) constraints. For higher values of \(\mu\), the unparticle cross sections get smaller, leading to higher upper limits on the couplings. However, as long as the value of \(\mu\) remains small compared to the colliding energy, the \(\mu\) dependence is tiny and can be safely neglected. The muon-pair bounds are taken from [7], obtained with \(\mu=1.5\,\text{Ge\kern-1.0ptV}\). Note that, the curves stop at \(d=1.1\) because smaller values are not provided. These constraints are weaker than the mono-photon one, except in the region of \(d<1.2\) where the muon-pair bound on \(\lambda_{V}\) is marginally more stringent. For the case of vector unparticles, the most relevant constraint is therefore the LEP2 mono-photon bound. For the scalar unparticles, the LEP2 mono-photon and CMS mono-\(Z\) constraints are complementary. The former is more stringent when \(d<1.05\) and gets weaker with increasing \(d\). Finally, we note that the upper limits presented in Fig. 2 apply for the absolute values of the couplings because we consider the four cases of unparticles separately. For the same reason, other results of this study are not sensitive to the sign of the unparticle-SM couplings. ## 4 MUonE experiment The aim of the MUonE experiment is to measure precisely the shape of the differential cross section as it does not rely on the exact value of the luminosity [10]. From this shape, a template fit will be used to determine the value of \(a_{\mu}^{\text{had}}\). At LO in the SM the unpolarized differential cross section reads \[\frac{d\sigma_{\text{SM}}}{dT}=\frac{\pi\alpha^{2}(t)}{(E_{\mu}^{2}-m_{\mu}^{ 2})m_{e}^{2}T^{2}}[2E_{\mu}m_{e}(E_{\mu}-T)-T(m_{e}^{2}+m_{\mu}^{2}-m_{e}T)], \tag{10}\] where \[t=-2m_{e}T=(p_{\mu}-p_{\mu}^{\prime})^{2},\quad T=E_{e}^{\prime}-m_{e}\geq 0, \tag{11}\] with \(p_{\mu}\) and \(p_{\mu}^{\prime}\) being the momentum of the initial-state and final-state muons, respectively. \(E_{e}^{\prime}\) is the energy of the final-state electron in the laboratory (Lab) frame. The variable \(T\) is essentially \(E_{e}^{\prime}\) in practice. The energy of the incoming muon in the laboratory frame is \(E_{\mu}=150\,\text{Ge\kern-1.0ptV}\) (using \(160\,\text{Ge\kern-1.0ptV}\) as in [28] does not change our conclusions). The center-of-mass energy is \(\sqrt{s}=\sqrt{2E_{\mu}m_{e}+m_{\mu}^{2}+m_{e}^{2}}\approx 0.4\,\text{Ge \kern-1.0ptV}\). Because of this low center-of-mass energy, the contribution from the \(Z\) boson is negligible and has been removed from Eq. (10). 
In the center of mass frame we have \[t=-2|\vec{p}|^{2}(1-\cos\theta),\quad|\vec{p}|^{2}=\frac{s^{2}+(m_{\mu}^{2}-m_ {e}^{2})^{2}-2s(m_{\mu}^{2}+m_{e}^{2})}{4s}, \tag{12}\] where \(\theta\) is the scattering angle. In the absence of kinematic cuts, the full range of \(t\) is \([-4|\vec{p}|^{2},0]\), which amounts to \([-0.143,0]\,\mathrm{GeV}^{2}\) for \(E_{\mu}=150\,\mathrm{GeV}\). The full range of \(T\) is then \([0,139.818]\) GeV. In this paper, following [29], we impose a cut on the energy of the final-state electron \(E_{e}^{\prime}>1\,\mathrm{GeV}\) in the Lab frame. The range of \(T\) then reads \([1,139.818]\) GeV. To calculate \(a_{\mu}^{\rm had}\), it is conventional to parameterize \(t\) in terms of the \(x\) parameter as \[t(x)=-\frac{m_{\mu}^{2}x^{2}}{1-x},\quad x\in[0,1). \tag{10}\] We then have [30] \[a_{\mu}^{\rm had}=\frac{\alpha}{\pi}\int_{0}^{1}dx(1-x)\Delta \alpha_{\rm had}[t(x)], \tag{11}\] where \(\alpha=\alpha(0)\), and \(\Delta\alpha_{\rm had}(t)\) is the hadronic contribution to the running of the coupling \(\alpha\). The function \(\Delta\alpha_{\rm had}(t)\) is fitted from experimental data using Eq. (10) with \[\alpha(t)=\frac{\alpha(0)}{1-\Delta\alpha(t)},\quad\Delta\alpha( t)=\Delta\alpha_{\rm had}(t)+\Delta\alpha_{\rm lep}(t), \tag{12}\] where smaller contributions to \(\Delta\alpha(t)\) from the top quark and the \(W^{\pm}\) bosons have been neglected. For \(\Delta\alpha_{\rm lep}(t)\), the precise prediction of the SM is used [10]. In this work, the values of \(\Delta\alpha_{\rm had}(t)\) and \(\Delta\alpha_{\rm lep}(t)\) are obtained using the package alphaQEDc19[31]. ## 5 Unparticle effects at the MUonE experiment Switching on the unparticles, the MUonE differential cross section then becomes, using the Feynman rules given in Table 1, \[\frac{d\sigma_{\rm new}}{dT} = \frac{1}{128\pi m_{e}(E_{\mu}^{2}-m_{\mu}^{2})}|\mathcal{M}_{X}+ \mathcal{M}_{\gamma}|^{2} \tag{13}\] \[= \frac{1}{128\pi m_{e}(E_{\mu}^{2}-m_{\mu}^{2})}\left(\frac{16\pi^ {2}\alpha^{2}(t)}{t^{2}}\text{Tr}(\gamma\gamma)\right.\] \[+ \left.\frac{8\pi\alpha(t)}{t}\frac{\lambda_{X}^{2}Z_{d}|t-\mu^{2}| ^{d-2}}{(M_{Z}^{2})^{d-1}}\text{Tr}(\gamma X)+\frac{\lambda_{X}^{4}Z_{d}^{2}| t-\mu^{2}|^{2d-4}}{(M_{Z}^{2})^{2d-2}}\text{Tr}(XX)\right),\] where the results for \(\text{Tr}(\gamma\gamma)\), \(\text{Tr}(\gamma X)\), and \(\text{Tr}(XX)\) are provided in Table 2. The relative sign between the photon and the unparticle amplitudes is important as it affects the interference term. In order to see the differences between different types of unparticles, we choose the following benchmark point \[\text{P0}:\quad d=1.1,\quad\lambda_{i}=0.02,\quad\mu=1\,\text{GeV}, \tag{14}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(Y\) & \(\gamma\) & S & P & V & A \\ \hline \hline \(\bar{\ell}\ell Y\) vertex & \(ie\gamma_{\mu}\) & \(i\lambda_{S}/M_{Z}^{d-1}\) & \(-\lambda_{P}\gamma_{5}/M_{Z}^{d-1}\) & \(i\lambda_{V}\gamma_{\mu}/M_{Z}^{d-1}\) & \(i\lambda_{A}\gamma_{\mu}\gamma_{5}/M_{Z}^{d-1}\) \\ \hline Propagator & \(-ig_{\mu\nu}/t\) & \(iZ_{d}|t-\mu^{2}|^{d-2}\) & \(iZ_{d}|t-\mu^{2}|^{d-2}\) & \(-iZ_{d}|t-\mu^{2}|^{d-2}p_{\mu\nu}\) & \(-iZ_{d}|t-\mu^{2}|^{d-2}p_{\mu\nu}\) \\ \hline \end{tabular} \end{table} Table 1: Feynman rules for \(e\mu\to e\mu\) scattering, where we have used the notation \(p_{\mu\nu}=g_{\mu\nu}-ak_{\mu}k_{\nu}/t\) for the vector unparticles. where \(i=S,P,V,A\). This point P0 satisfies all the constraints presented in Section 3.
From Fig. 2 one sees that P0 is at the edge of the allowed region of the (axial-)vector couplings, while there is still a good distance from it to the nearest bound of the (pseudo-)scalar couplings. In Fig. 3 we present the SM differential cross section \(d\sigma/dT\) (left) and the four unparticle corrections to this distribution (right) calculated at the benchmark point P0 with respect to the SM values. The MUonE systematic accuracy level of 10 ppm (dashed brown) is plotted to see whether unparticle effects can be observed by the detector. The results are interesting. We see that, with the same coupling strength, the vector unparticle effect (solid blue) is the largest, followed by the axial-vector (dotted red), scalar (solid green), and pseudo-scalar (dotted black) ones, in order of decreasing magnitude. Notice that all unparticle effects are positive, except for the scalar case, where we have switched its sign for the logarithmic-scale plotting. This is due to the destructive interference between the scalar unparticle and the photon amplitude.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Type & \(A_{0}\) & \(A_{1}\) & \(A_{2}\) \\ \hline \(\mathrm{Tr}(\gamma\gamma)\) & & \(64E_{\mu}^{2}m_{e}^{2}\) & \(-64E_{\mu}m_{e}^{2}-32m_{e}^{3}-32m_{e}m_{\mu}^{2}\) & \(32m_{e}^{2}\) \\ \hline \(\mathrm{Tr}(\gamma X)\) & S & \(-64E_{\mu}m_{e}^{2}m_{\mu}\) & \(32m_{e}^{2}m_{\mu}\) & \(0\) \\ \cline{2-5} & P & \(0\) & \(0\) & \(0\) \\ \cline{2-5} & V & \(64E_{\mu}^{2}m_{e}^{2}\) & \(-64E_{\mu}m_{e}^{2}-32m_{e}^{3}-32m_{e}m_{\mu}^{2}\) & \(32m_{e}^{2}\) \\ \cline{2-5} & A & \(0\) & \(64E_{\mu}m_{e}^{2}\) & \(-32m_{e}^{2}\) \\ \hline \(\mathrm{Tr}(XX)\) & S & \(64m_{e}^{2}m_{\mu}^{2}\) & \(32m_{e}^{3}+32m_{e}m_{\mu}^{2}\) & \(16m_{e}^{2}\) \\ \cline{2-5} & P & \(0\) & \(0\) & \(16m_{e}^{2}\) \\ \cline{2-5} & V & \(64E_{\mu}^{2}m_{e}^{2}\) & \(-64E_{\mu}m_{e}^{2}-32m_{e}^{3}-32m_{e}m_{\mu}^{2}\) & \(32m_{e}^{2}\) \\ \cline{2-5} & A & \(64E_{\mu}^{2}m_{e}^{2}+64bm_{e}^{2}m_{\mu}^{2}\) & \(-64E_{\mu}m_{e}^{2}+32m_{e}^{3}+32m_{e}m_{\mu}^{2}\) & \(32m_{e}^{2}\) \\ \hline \end{tabular} \end{table} Table 2: The coefficients \(A_{i}\) of the traces, written as \(A_{0}+A_{1}T+A_{2}T^{2}\). For the \(\mathrm{Tr}(XX)\) of the axial-vector case, we have introduced an auxiliary parameter \(b=2-2a-a^{2}\) in the coefficient \(A_{0}\).

Figure 3: Left: Differential cross section of the SM. Right: Various unparticle effects calculated at the parameter point P0 relative to the SM values. The MUonE systematic accuracy level of 10 ppm is indicated by the dashed brown line.

The vector (axial-vector) unparticle effect is greater than the marked detector accuracy level when \(T>3\,\text{GeV}\) (\(24\,\text{GeV}\)), indicating that the MUonE experiment may be able to explore the region around the point P0 in the parameter space. On the other hand, the (pseudo-)scalar unparticle curves are well below the accuracy level, by more than one order of magnitude. It is therefore impossible to detect the scalar unparticles around P0 at the MUonE experiment. Indeed, we will later show that, by using a sensitivity threshold based on the \(\chi^{2}\) defined in Eq. (10), it is impossible to observe (pseudo-)scalar unparticle effects in the entire parameter space region allowed by the LEP2 mono-photon bound. We then vary the scaling dimension \(d\) (Fig. 4 left) and the scale-invariance-breaking scale \(\mu\) (Fig. 4 right) to see how the unparticle corrections depend on them.
Since these dependences are the same for all unparticle types, we plot only the vector case as a representative. As expected, the correction decreases monotonically with increasing \(d\) or increasing \(\mu\). With the other parameters kept fixed at the P0 values, we see that the vector unparticle effect drops below the accuracy threshold for \(d=1.4\) or \(\mu=10\,\text{GeV}\). We now explore the region of \(1.01\leq d\leq 1.99\) and determine the sensitivity curves of the MUonE experiment on the unparticle-SM couplings \(\lambda_{i}\). The sensitivity curves are defined at \(\chi^{2}(d,\lambda_{i},\mu)=3.84\), corresponding to a 95% CL upper limit. The \(\chi^{2}\) is computed as [32] \[\chi^{2}(d,\lambda,\mu)=\sum_{i=1}^{N_{\text{bin}}}\frac{\left(N _{i}^{\text{new}}(d,\lambda,\mu)-N_{i}^{\text{SM}}\right)^{2}}{\Delta_{i, \text{stat}}^{2}+\Delta_{i,\text{sys}}^{2}}, \tag{11}\] where the statistical error \(\Delta_{i,\text{stat}}=\sqrt{N_{i}^{\text{new}}}\) and the systematic error \(\Delta_{i,\text{sys}}=10^{-5}N_{i}^{\text{SM}}\). The number of events in each bin is calculated as \[N_{i}=L\int_{T_{i}}^{T_{i}+\Delta T}\frac{d\sigma}{dT}(T)dT, \tag{12}\] where \(L=1.5\times 10^{7}\) nb\({}^{-1}\) is the integrated luminosity [29] and the differential cross section is provided in Eq. (4.1) and Eq. (5.1). We take \(N_{\rm bin}=30\) as in [29].

Figure 4: Similar to the right plot in Fig. 3, but we plot here only the case of the vector unparticle with various values of the scaling dimension \(d\) (left) and of the energy scale \(\mu\) (right). The other parameters are kept at the P0 benchmark values.

Results for the scalar, pseudo-scalar, vector, and axial-vector unparticles are shown in Fig. 5. For the case of scalar unparticles, it is enough to show the case of \(\mu=1\) GeV, since already at this minimum energy all parameter points satisfying the LEP mono-photon and CMS mono-\(Z\) constraints lie below the sensitivity curves. Increasing the value of \(\mu\) only pushes the sensitivity curves up. For the vector unparticles, results are presented for \(\mu=1\), 5, 12 GeV. For \(\mu>12\) GeV the unparticle effects, when the couplings satisfy the current experimental constraints, are too small to be detected at the MUonE. We therefore conclude that the MUonE experiment is insensitive to the (pseudo-)scalar unparticle effects for \(\mu\geq 1\) GeV and sensitive to the (axial-)vector unparticles when \(1\leq\mu\lesssim 12\) GeV and \(1<d\lesssim 1.4\). Sensitivities for the vector unparticle are slightly better than for the axial-vector case. We now estimate the effects of the (axial-)vector unparticles on the determination of \(a_{\mu}^{\rm had}\).

Figure 5: Sensitivity curves of the MUonE experiment on the unparticle-SM coupling \(\lambda\) for the cases of scalar (S), pseudo-scalar (P), vector (V), and axial-vector (A) unparticles. The 95% CL upper limits from the mono-photon, mono-\(Z\), and muon-pair productions are also plotted. The muon-pair bound for the pseudo-scalar case is not plotted as it is irrelevant.
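The sensitivity machinery of Eqs. (11)-(12) above can be sketched in Python as follows; the two differential cross sections (in nb/GeV) are placeholder callables to be supplied by the user, and equal-width bins over the cut range of \(T\) are an assumption of this sketch.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the chi^2 of Eq. (11) with the binned event counts of Eq. (12).
L_INT = 1.5e7                   # integrated luminosity [nb^-1]
N_BIN = 30
T_MIN, T_MAX = 1.0, 139.818     # GeV, T range after the E_e' > 1 GeV cut

def events_per_bin(dsigma_dT):
    """Expected events in each of N_BIN equal-width T bins (Eq. 12)."""
    edges = np.linspace(T_MIN, T_MAX, N_BIN + 1)
    return np.array([L_INT * quad(dsigma_dT, lo, hi)[0]
                     for lo, hi in zip(edges[:-1], edges[1:])])

def chi2(dsigma_dT_new, dsigma_dT_sm):
    """Chi^2 of Eq. (11) with statistical and 10 ppm systematic errors."""
    n_new = events_per_bin(dsigma_dT_new)
    n_sm = events_per_bin(dsigma_dT_sm)
    err2 = n_new + (1.0e-5 * n_sm) ** 2     # stat^2 + sys^2
    return float(np.sum((n_new - n_sm) ** 2 / err2))

# A point (d, lambda, mu) lies on the sensitivity curve when chi2(...) = 3.84.
```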
For this purpose, we choose the following benchmark points: \[{\rm P1}:\quad d=1.01,\quad\lambda_{i}=0.0033,\quad\mu=1\,{\rm GeV}, \tag{5.5}\] \[{\rm P2}:\quad d=1.3,\quad\lambda_{i}=0.025,\quad\mu=1\,{\rm GeV},\] (5.6) \[{\rm P3}:\quad d=1.01,\quad\lambda_{i}=0.029,\quad\mu=1\,{\rm GeV},\] (5.7) \[{\rm P4}:\quad d=1.01,\quad\lambda_{i}=0.015,\quad\mu=5\,{\rm GeV},\] (5.8) \[{\rm P5}:\quad d=1.1,\quad\lambda_{i}=0.027,\quad\mu=5\,{\rm GeV},\] (5.9) \[{\rm P6}:\quad d=1.01,\quad\lambda_{i}=0.029,\quad\mu=5\,{\rm GeV}, \tag{5.10}\] where \(i=V,A\). As can be seen from Fig. 5, these points lie above the sensitivity curves and below the upper-limit bound. For the case \(\mu=1\,{\rm GeV}\), the first three points P1, P2, P3 form a triangle region approximately covering the whole area of the parameter space which the MUonE is sensitive to and is still allowed by the current constraints. The points P4, P5, P6, for \(\mu=5\,{\rm GeV}\), are similarly chosen. The results are shown in Table 3. These are obtained using a simplified \(\chi^{2}\) fitting with one free parameter \(k\), which enters the theoretical parametrization via the replacement \(\Delta\alpha_{\rm had}(t)\to k\Delta\alpha_{\rm had}(t)\). The function \(\Delta\alpha_{\rm had}(t)\) is still calculated using the package alphaQEDc19. The errors are computed from \(\Delta\chi^{2}(k)=\chi^{2}(k)-\chi^{2}_{\rm min}=1\). The best-fit result at the MUonE experiment with \(1.5\times 10^{7}\) nb\({}^{-1}\) integrated luminosity for the SM-only case is \(a_{\mu}^{\rm had,SM}=6903(29)\times 10^{-11}\), where the mean value is given by the alphaQEDc19 program, agreeing within \(1\sigma\) with the present world-average result of \(6931(40)\times 10^{-11}\) provided in Ref. [33]. The MUonE error of \(0.4\%\), obtained using the \(\chi^{2}\) defined in Eq. (5.3), is in accordance with the estimate of the experiment collaboration [29]. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & SM & Axial-vector & Vector \\ \hline P1 & \(6903(29)\times 10^{-11}\) & \(6957(29)\times 10^{-11}\) & \(6986(29)\times 10^{-11}\) \\ Pull & \(0\) & \(1.9\) & \(2.8\) \\ \hline P2 & \(-\) & \(6980(29)\times 10^{-11}\) & \(7019(29)\times 10^{-11}\) \\ Pull & \(-\) & \(2.6\) & \(4.0\) \\ \hline P3 & \(-\) & \(11073(29)\times 10^{-11}\) & \(13250(29)\times 10^{-11}\) \\ Pull & \(-\) & \(143\) & \(218\) \\ \hline \hline P4 & \(-\) & \(6954(29)\times 10^{-11}\) & \(6979(29)\times 10^{-11}\) \\ Pull & \(-\) & \(1.7\) & \(2.6\) \\ \hline P5 & \(-\) & \(6971(29)\times 10^{-11}\) & \(7006(29)\times 10^{-11}\) \\ Pull & \(-\) & \(2.3\) & \(3.5\) \\ \hline P6 & \(-\) & \(7091(29)\times 10^{-11}\) & \(7186(29)\times 10^{-11}\) \\ Pull & \(-\) & \(6.4\) & \(9.7\) \\ \hline \end{tabular} \end{table} Table 3: Best-fit value of \(a_{\mu}^{\rm had}\) at the MUonE experiment in various scenarios: only SM, SM with axial-vector unparticle, SM with vector unparticle. The pull is defined as \((a_{\mu}^{\rm had,new}-a_{\mu}^{\rm had,SM})/\Delta_{\rm SM}\), where \(\Delta_{\rm SM}\) is the error of the \(a_{\mu}^{\rm had,SM}\) under the assumption of no new physics. The results show that if unparticles exist then the best-fit value of \(a_{\mu}^{\rm had}\) determined at the MUonE experiment will be larger than the one of the scenario without new physics. The increase (pull) is ranging from \(1.7\sigma\) to \(218\sigma\), and is higher for vector unparticle than for axial-vector case, being consistent with the results shown in Fig. 3 (right) and Fig. 5. 
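The simplified one-parameter fit described above can be sketched as below; chi2_of_k is a placeholder callable that rebuilds the binned \(\chi^{2}\) after the replacement \(\Delta\alpha_{\rm had}\to k\,\Delta\alpha_{\rm had}\), and the scan range of \(k\) is an arbitrary choice.

```python
import numpy as np

# Sketch of the one-parameter fit: the hadronic running is rescaled as
# Delta_alpha_had -> k * Delta_alpha_had, and the 1-sigma error follows
# from Delta chi^2(k) = 1.
def fit_k(chi2_of_k, k_grid=None):
    if k_grid is None:
        k_grid = np.linspace(0.9, 1.1, 2001)       # scan range is arbitrary
    chi2 = np.array([chi2_of_k(k) for k in k_grid])
    i_best = int(np.argmin(chi2))
    allowed = k_grid[chi2 <= chi2[i_best] + 1.0]   # Delta chi^2 <= 1 band
    return k_grid[i_best], allowed.min(), allowed.max()
```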
For the points lying exactly on the sensitivity curves the pull is \(1.8\sigma\). ## 6 Conclusions The unparticle is a fascinating idea which offers a novel form for the propagator of an intermediate state. If this effect is visible in the energy range around \(0.5\,\mathrm{GeV}\), then the newly proposed MUonE experiment, where the differential cross section of elastic \(e\mu\to e\mu\) scattering is measured at high accuracy, is a natural place to search for such effects. In this work we have considered the general case of unparticles with broken scale invariance, where the scale invariance of the unparticle system is broken at the energy \(\mu\). In the limit of \(\mu\to 0\) we recover the unbroken case originally proposed by Georgi. To avoid the severe limits placed on the couplings between unparticles and the SM fields from cosmology, astronomy, and precise low-energy experiments, one then requires \(\mu\geq 1\,\mathrm{GeV}\), which is sufficiently large compared to the energy scales of the constraining processes. Under this condition, we have surveyed the relevant upper bounds on the unparticle-SM couplings from \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) experiments, forward-backward asymmetry data at LEP, mono-photon production at LEP2 and mono-\(Z\) production at the LHC. As a by-product, we have provided a new bound on the (pseudo-)scalar unparticle couplings using the mono-photon data at LEP2. Taking into account these constraints, we then investigated the effects of unparticles at the MUonE experiment. We have considered separately four scenarios of unparticles: pseudo-scalar, scalar, axial-vector, and vector. This classification is based on the Lorentz structure of the unparticle-SM interaction terms. Given the same values for the input parameters (energy \(\mu\), scaling dimension \(d\), coupling \(\lambda\)), we found that the vector unparticle induces the largest effects. Significantly smaller but still visible is the axial-vector case. The impact of (pseudo-)scalar unparticles is tiny and invisible. We have also studied the dependence of unparticle effects on the scaling dimension \(d\) and on the energy scale \(\mu\). As expected, the effect is largest when these parameters are small and decreases as they get larger. It was found that the MUonE experiment is sensitive to (axial-)vector unparticle effects when \(1<d\lesssim 1.4\) and \(1\leq\mu\lesssim 12\,\mathrm{GeV}\). Finally, we have estimated the effect of the (axial-)vector unparticles on the \(a_{\mu}^{\rm had}\) measurement at the MUonE. Scanning the currently allowed parameter space, we found that unparticles can increase the best-fit value significantly. The MUonE experiment will thus be able to provide hints of unparticles, or new constraints on their couplings with the SM leptons if no significant deviation from the SM value is found. The above results are obtained for the MUonE with the muon-beam energy \(E_{\mu}=150\,\mathrm{GeV}\). For \(E_{\mu}=160\,\mathrm{GeV}\) as studied in [28], the c.m.s energy is slightly increased from \(0.4055\,\mathrm{GeV}\) to \(0.4180\,\mathrm{GeV}\). This change is too small to produce any visible differences in the results. Our conclusions are therefore unchanged. ## Acknowledgments This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.01-2020.17.
2303.17384
Thermal instability as a constraint for warm X-ray corona in AGN
Context. Warm corona is a possible explanation for the Soft X-ray Excess in Active Galactic Nuclei (AGN). This paper contains self-consistent modeling of an accretion disk with an optically thick corona, where the gas is heated by the magneto-rotational instability (MRI) dynamo and cooled by radiation which undergoes free-free absorption and Compton scattering. Aims. We determine the parameters of the warm corona in AGN using a disk-corona structure model that takes into account magnetic and radiation pressure. We aim to show the role of thermal instability (TI) as a constraint for the warm, optically thick X-ray corona in AGN. Methods. With the use of a relaxation code, the vertical solution of the disk driven by MRI together with radiative transfer in hydrostatic and radiative equilibrium is calculated, which allows us to point out how TI affects the corona for a wide range of global parameters. Results. We show that magnetic heating is strong enough to heat the upper layers of the accretion disk atmosphere, which form the warm corona covering the disk. Magnetic pressure does not remove TI caused by radiative processes operating in the X-ray emitting plasma. TI disappears only in the case of accretion rates higher than 0.2 of Eddington and a high magnetic field parameter $\alpha_{\rm B}$ > 0.1. Conclusions. TI plays the major role in the formation of the warm corona above a magnetically driven accretion disk in AGN. The warm, Compton-cooled corona responsible for the soft X-ray excess resulting from our model has a typical temperature in the range of 0.01 - 2 keV and an optical depth even up to 50, which agrees with recent observations.
Dominik Gronkiewicz, Agata Różańska, Pierre-Olivier Petrucci, Renaud Belmont
2023-03-30T13:53:45Z
http://arxiv.org/abs/2303.17384v1
# Thermal instability as a constraint for warm X-ray corona in AGN ###### Abstract Context: Warm corona is a possible explanation for the Soft X-ray Excess in Active Galactic Nuclei (AGN). This paper contains self-consistent modeling of an accretion disk with an optically thick corona, where the gas is heated by the magneto-rotational instability (MRI) dynamo and cooled by radiation which undergoes free-free absorption and Compton scattering. Aims: We determine the parameters of the warm corona in AGN using a disk-corona structure model that takes into account magnetic and radiation pressure. We aim to show the role of thermal instability (TI) as a constraint for the warm, optically thick X-ray corona in AGN. Methods: With the use of a relaxation code, the vertical solution of the disk driven by MRI together with radiative transfer in hydrostatic and radiative equilibrium is calculated, which allows us to point out how TI affects the corona for a wide range of global parameters. Results: We show that magnetic heating is strong enough to heat the upper layers of the accretion disk atmosphere, which form the warm corona covering the disk. Magnetic pressure does not remove TI caused by radiative processes operating in the X-ray emitting plasma. TI disappears only in the case of accretion rates higher than 0.2 of Eddington and a high magnetic field parameter \(\alpha_{\rm B}>0.1\). Conclusions: TI plays the major role in the formation of the warm corona above a magnetically driven accretion disk in AGN. The warm, Compton-cooled corona responsible for the soft X-ray excess resulting from our model has a typical temperature in the range of 0.01 - 2 keV and an optical depth even up to 50, which agrees with recent observations. ## 1 Introduction Soft X-ray excess is commonly observed in the large majority of active galactic nuclei (AGN) (e.g. Pounds et al., 1987; Walter and Fink, 1993; Magdziarz et al., 1998; Page et al., 2004; Gierlinski and Done, 2004; Bianchi et al., 2009; Mehdipour et al., 2011; Done et al., 2012; Petrucci et al., 2013; Keck and Ballantyne, 2016; Petrucci et al., 2018) including quasars (Madau, 1988; Laor et al., 1994, 1997; Gierlinski and Done, 2004; Piconcelli et al., 2005). It appears as an excess in emission when extrapolating the 2-10 keV power law of AGN to the soft X-ray band. The origin of this spectral component is still under debate, but two major scenarios are currently considered. The first scenario holds that the soft X-ray excess is a result of blurred ionized reflection (Crummy et al., 2006; Walton et al., 2013; Garcia et al., 2019), but such a model produces many lines that are not observed in the soft X-ray band, and extreme blurring is generally required to wash them out. Recently, detailed radiative transfer analysis of the warm corona proved that those lines cannot be easily created due to the full domination of internal heating and Compton scattering over the line transitions (Petrucci et al., 2020; Ballantyne, 2020). The second scenario relies on the fact that the soft X-ray excess is produced by Comptonization in an optically thick (\(\tau\sim 10-20\)) warm corona, which is a reasonable assumption when fitting the data (e.g. Magdziarz et al., 1998; Jin et al., 2012; Petrucci et al., 2013; Porquet et al., 2018; Petrucci et al., 2018, and references therein). Nevertheless, from data fitting, both scenarios above require the existence of a warm layer which is additionally heated not only by radiation but also by a mechanical process, and which is located next to the accretion disk.
On the other hand, none of those models consider how this warm layer is physically produced, and only a constant heating over the gas volume of the warm corona is assumed and included in the model computations as a free parameter (Petrucci et al., 2020; Ballantyne, 2020). Analytically, we have predicted that a high optical depth and a high temperature of the warm, Compton-cooled corona are possible when an additional pressure component and mechanical heating are taken into account (Rozanska et al., 2015). Recently, we extended this analytical model by numerical calculations of a magnetically heated disk, with free-free processes taken into account in addition to Compton scattering, in the case of an accretion disk around a black hole of stellar mass, i.e. Galactic black hole binaries (GBHB) (Gronkiewicz and Rozanska, 2020, hereafter GR20). For such sources, the accretion disk is already quite hot, visible in X-rays, and a slightly warmer layer can form at different distances from the black hole. In GR20, we showed that the warm corona, which is physically fully coupled with the cold, magnetically supported disk (MSD), arises naturally due to magnetic heating. Such a corona is cooled mostly by Compton scattering, in agreement with the observations. We clearly demonstrated that the classical thermal instability (TI) (Field, 1965; Rozanska and Czerny, 1996), caused by gas cooling processes, cannot be removed by magnetic pressure, and it shapes the radial extent of the warm corona in those objects. But the case of AGN is different, since the disk temperature is relatively cool, of the order of \(10^{4-5}\) K, and the transition to the warm, \(10^{7}\) K, X-ray corona is accompanied by a strong change of the global gas parameters, such as density and pressure. On the other hand, a physically consistent model of the warm corona responsible for the soft X-ray excess is desirable, as the collection of observed sources indicating such a feature, obtained with modern X-ray satellites, keeps growing. In this paper, we adapt the GR20 model to the case of AGN, where the geometrically thin and optically thick accretion disk is around a supermassive black hole (SMBH). We assume that the accretion disk is magnetized and that the magneto-rotational instability (MRI) is the primary source of viscosity and energy dissipation. We directly use the analytic formula derived by Begelman et al. (2015, hereafter BAR15), where the vertical profile of magnetic heating of the accretion flow is determined. On top of this assumption, the disk vertical structure together with the radiative transfer equation in a gray atmosphere are fully solved with the relaxation method proposed originally by Henyey et al. (1964). As a result of our computations, we obtain the optical depth and temperature of the warm corona in AGN, which are the main observables when analyzing X-ray data. The specific heating of the warm corona formed in our model is computed self-consistently based on MRI and reconnection. The value of this heating, together with the magnetic pressure, changes in the vertical direction, which is a significant improvement over the constant-heating warm corona slab used in recent calculations of the energy spectrum from the warm corona (Petrucci et al., 2020; Ballantyne, 2020). The next improvement over our analytical model (Rozanska et al., 2015) is the inclusion of the free-free radiative process, which is crucial for TI to appear. We show that, in the case of AGN, the magnetic pressure is high enough to produce a stable branch in the classical TI curve obtained under the constant gas and radiation pressure condition.
MSD with warm corona is dominated by magnetic pressure which makes radiative cooling gradient always positive when computed under constant gas plus magnetic pressure. Therefore, initially thermally unstable zone, through which the radiative cooling gradient under constant gas pressure is negative, is frozen in the magnetic field allowing corona to exist. All results obtained by us, are compared with observations of unabsorbed type 1 AGN (Jin et al., 2012; Petrucci et al., 2013; Ursini et al., 2018; Middei et al., 2019, and references therein), and with models computed in case of GBHB (GR20). We show, that the measured parameters agree with those obtained by our numerical computations. Our solutions are prone to TI in a wide range of global parameters, indicating that TI plays a crucial role in soft corona formation in AGN. The structure of the paper is as follows: Sec. 2 presents the method of our computations, while model parameters in case of AGN are given in Sec. 2.1. The implementation of MRI-quenching and the role of magnetic pressure is separately discussed in Sec. 2.2. The radiation pressure supported solution is analyzed in Sec. 2.4. For comparison with observations we used models generated in our scheme of random choice described in Sec. 3.4 and measurements described in Sec. 3.5. Results of our numerical computations are presented in Sec. 3, where we display vertical structure of disk/corona system, but we also show radial limitation for which optically thick corona can exist. Discussion and conclusions are given in Sec. 4 and 5 respectively. ## 2 Set-up of the model We further investigate the model developed by GR20, adapted for the case of AGN, where an accretion disk is around SMBH. The code solves the vertical structure of a stationary, optically thick accretion disk, with radiation transfer assuming gray medium and temperature determined by solving balance between total heating and cooling including magnetic reconnection and radiative processes. The gas heating is powered by the magnetic field, according to the analytic formula given by BAR15, and the disk is magnetically supported over it's whole vertical extent. As an output, we can determine the vertical profile of magnetic heating of the accretion disk, as well as of the corona that forms on the top of the disk. For the purpose of this paper we consider Compton scattering and free-free emission/absorption as radiative processes which balance the magnetic heating. Our approach is innovative because it connects the magnetically supported disk (BAR15) with transfer of radiation in optically thick and warm medium, in thermal equilibrium (Rozanska et al., 2015). Such approach allows us to verify if the warm corona can be formed above MSD, and stay there in optically thick regime, which will be consistent with many observational cases. 
The following set of five equations is solved: \[z\frac{dP_{\rm mag}}{dz}+\left(2+\frac{\alpha_{\rm B}\nu}{\eta}\right)P_{\rm mag }-\frac{\alpha_{\rm B}}{\eta}\left(P+P_{\rm mag}\right)=0, \tag{1}\] \[\mathcal{H}=2(\eta+\alpha_{B}\nu)\Omega P_{\rm mag}-\alpha_{B}\Omega\left(P+ P_{\rm mag}\right)=\Lambda(\rho,T,T_{\rm rad}), \tag{2}\] \[\frac{dF_{\rm rad}}{dz}=\Lambda(\rho,T,T_{\rm rad}), \tag{3}\] \[\frac{dT_{\rm rad}}{dz}+\frac{3\kappa\rho}{16\sigma T_{\rm rad}^{3}}F_{\rm rad }=0, \tag{4}\] \[\frac{dP_{\rm gas}}{dz}+\frac{dP_{\rm mag}}{dz}+\rho\left[\Omega^{2}z-\frac{ \kappa F_{\rm rad}}{c}\right]=0, \tag{5}\] where the first two equations describe the magnetic field structure and magnetic heating as presented by BAR15. The vertical gradient of the magnetic pressure, \(dP_{\rm mag}/dz\), depends on the magnetic parameters: the total magnetic viscosity \(\alpha_{\rm B}\), the magnetic buoyancy parameter \(\eta\), and the reconnection efficiency parameter \(\nu\) (see GR20 for definitions), and on the sum of gas and radiation pressure \(P=P_{\rm gas}+P_{\rm rad}\). At each point of the disk/corona vertical structure, the local magnetic heating \(\mathcal{H}\) is balanced by the net radiative cooling rate \(\Lambda(\rho,T,T_{\rm rad})\), where the latter is calculated from the frequency-integrated radiative transfer equation (see GR20 for exact formulae). Equations (3) and (4) describe how the radiation flux is locally generated and transported by the gas. The fifth equation is the momentum equation in the stationary case, i.e. hydrostatic equilibrium, where \(\rho\) is the gas density, \(\kappa\) is the Rosseland mean opacity, \(\Omega\) is the Keplerian angular velocity, and \(c\) is the speed of light. The gas and radiation pressure in our model can locally be described as \(P_{\rm gas}=\frac{k}{\mu m_{\rm H}}\rho T\) and \(P_{\rm rad}=\frac{4\sigma}{3c}T_{\rm rad}^{4}\), which is typical in the case of a gray atmosphere, with \(k\) being the Boltzmann constant, \(\sigma\) the Stefan-Boltzmann constant, \(\mu\) the mean molecular weight, and \(m_{\rm H}\) the hydrogen mass. To formulate our net radiative cooling function, we assume Compton electron scattering and free-free absorption/emission as the radiative processes that occur in the medium. The radiative cooling function is calculated at each point of the MSD's vertical structure and in this paper has the form: \[\Lambda\left(\rho,T,T_{\rm rad}\right)\equiv 4\sigma\rho\left[\kappa_{\rm ff} \left(T^{4}-T_{\rm rad}^{4}\right)+\kappa_{\rm es}T_{\rm rad}^{4}\frac{4k\left( \gamma T-T_{\rm rad}\right)}{m_{\rm e}c^{2}}\right], \tag{6}\] where \(\gamma=1+4kT/(m_{\rm e}c^{2})\) accounts for the relativistic limit in Compton cooling, \(\kappa_{\rm es}\) is the electron scattering opacity and \(\kappa_{\rm ff}\) is the Planck-averaged free-free opacity. In principle, more processes can be included (Rozanska et al., 1999), as long as the Planck-averaged opacities are known. However, in this paper we clearly show that free-free absorption is efficient enough to trigger the onset of TI at a particular optical depth in the magnetically supported accretion disk atmosphere. Usually, TI is connected with ionization and recombination processes, which we do not take into account. We show here that ionization is not necessary to study TI, but it will be valuable to test in the future how ionization/recombination processes influence the width of the thermally unstable region, which, as we show below, is connected with the strength of the warm corona. We plan to do this in future work, since it requires a substantial extension of our numerical code.
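A minimal Python sketch of the net cooling rate of Eq. (6) is given below; the electron-scattering opacity and the Kramers-type Planck-mean free-free opacity are illustrative assumptions standing in for the opacity values actually used in the code, and cgs units are assumed.

```python
# Minimal sketch of the net radiative cooling rate of Eq. (6), in cgs units.
# The opacity prescriptions below are illustrative assumptions, not the exact
# values used in the diskvert code.
K_B = 1.380649e-16        # Boltzmann constant [erg/K]
SIGMA_SB = 5.670374e-5    # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
ME_C2 = 8.187105e-7       # electron rest energy m_e c^2 [erg]
KAPPA_ES = 0.34           # electron-scattering opacity [cm^2/g] (assumed)

def kappa_ff(rho, T):
    """Kramers-type Planck-mean free-free opacity [cm^2/g] (assumed form)."""
    return 3.7e22 * rho * T**-3.5

def cooling_rate(rho, T, T_rad):
    """Net cooling Lambda(rho, T, T_rad) of Eq. (6) [erg cm^-3 s^-1]."""
    gamma = 1.0 + 4.0 * K_B * T / ME_C2              # relativistic correction
    free_free = kappa_ff(rho, T) * (T**4 - T_rad**4)
    compton = KAPPA_ES * T_rad**4 * 4.0 * K_B * (gamma * T - T_rad) / ME_C2
    return 4.0 * SIGMA_SB * rho * (free_free + compton)

# Example: gas slightly hotter than the radiation field near the corona base.
print(cooling_rate(rho=1.0e-12, T=2.0e6, T_rad=1.0e6))
```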
The set of equations and the associated boundary conditions are solved using a relaxation method where the differential equations are discretized and then iteratively solved for convergence (Henyey et al., 1964, GR20). We adopt typical boundary conditions appropriate for a cylindrical geometry of an accretion disk i.e.: radiative flux at the mid-plane must be zero, but there is non-zero magnetic pressure at the equatorial plane. The value of magnetic parameter at the disk mid-plane \(\beta_{0}=P_{\rm gas}/P_{\rm mag}\) can be derived from three magnetic input parameters, described in Sec. 2.1 below. At the top of atmosphere of MSD, we assume that the sum of the flux carried away by radiation and of the magnetic field is equal to the flux obtained by Keplerian disk theory. We also take into account standard boundary condition for radiative transfer, i.e. mean intensity \(J=2H\), where Eddington flux \(H\) connects to the radiative flux as \(4\pi H=F_{\rm rad}\). For full numerical procedure and boundary conditions of our code we refer the reader to GR20 paper Appendix A. The numerical program in FORTRAN, Python and Sympy we developed, is available online1. Footnote 1: [http://github.com/gronki/diskvert](http://github.com/gronki/diskvert) ### Model parameters The model is parameterized by six parameters. The first three are associated with an accretion disk and they are: black hole mass \(M_{\rm BH}\), accretion rate \(\dot{m}\) and distance from the nucleus \(R\). Wherever not specified, we assume \(M_{\rm BH}=10^{8}\,{\rm M}_{\odot}\), \(\dot{m}=0.1\) in the units of Eddington accretion rate, and \(R=6R_{\rm Schw}\), where \(R_{\rm Schw}=2GM/c^{2}\), with \(G\) being the gravitational constant, as our canonical model named: CM. Those parameters were chosen only for the representation of our results, nevertheless, we are able to compute the models for other typical parameters within the broad range. The next three parameters, which are: magnetic viscosity - \(\alpha_{\rm B}\), magnetic buoyancy parameter - \(\eta\), and reconnection efficiency parameter - \(\nu\), define the local vertical structure of MSD. We know how those parameters depend on each other, but there is no one good way of determining what values they should take. In our previous paper GR20, we made an effort to compare magnetic structure with the one obtained from MHD simulations by Salvesen et al. (2016), and we adjusted those parameter relations to fit the simulation results. However, this is only one of the assumptions that can be made about the relation between \(\alpha_{\rm B}\), \(\eta\) and \(\nu\), and in this paper we choose to follow a slightly different approach. First, we require that the vertically averaged ratio of magnetic torque over magnetic pressure, marked as \(A\), should be roughly constant in the disk, consistent with simulations reported by Jiang et al. (2014) and Salvesen et al. (2016). It can be proven that this is true if the value \(A\) is constant for any \(\alpha_{\rm B}\), since \[t_{\rm r\phi}\Omega^{-1}=\alpha_{\rm B}P_{\rm tot}=\alpha_{\rm B}\frac{P_{\rm tot }}{P_{\rm mag}}P_{\rm mag}=2AP_{\rm mag}, \tag{7}\] where \[A=\frac{1}{2}\alpha_{\rm B}\nu+\eta. \tag{8}\] With this information we can simplify our parametrization of the magnetic torque by making \(\eta\) and \(\nu\) depend on \(\alpha_{\rm B}\), since the simulations by Salvesen et al. (2016) show that there is some correlation between three parameters for different magnetic field strengths (see: Fig.2 in GR20). 
We first eliminate \(\eta\) by introducing the constant \(p\), and assuming that it depends on \(0<\alpha_{\rm B}<2A\) as follows: \[\eta\left(\alpha_{\rm B}\right)=A\left(\frac{\alpha_{\rm B}}{2A}\right)^{p}. \tag{9}\] Next, we transform the relation in Eq. 8 and substitute the expression in Eq. 9 to obtain \(\nu\) \[\nu\left(\alpha_{\rm B}\right)=2\frac{A-\eta}{\alpha_{\rm B}}=\frac{1-\left( \frac{\alpha_{\rm B}}{2A}\right)^{p}}{\frac{\alpha_{\rm B}}{2A}}. \tag{10}\] The above relations are our assumption, and they were formulated in the process of understanding what really influences the vertical structure of the magnetic heating. For instance, we have tested that our results are not sensitive to the selection of \(A\) and \(p\); therefore we keep those values constant for the results of this paper, i.e. \(A=0.3\) and \(p=0.3\). Therefore, keeping \(A\) and \(p\) constant, the two extremes of our magnetic viscosity parameter range are realized for \(\alpha_{\rm B}\approx 2A\), which corresponds to a strongly magnetized disk, and for \(\alpha_{\rm B}\approx 0\), corresponding to a weakly magnetized disk (Eq. 7). Furthermore, the magnetic buoyancy parameter \(\eta\) and the reconnection efficiency parameter \(\nu\) are automatically provided by Eqs. 9 and 10 for an assumed value of \(\alpha_{\rm B}\). Such a convention may seem a bit complicated, but it gives us an intuition about the efficiency of the other processes, such as magnetic buoyancy and reconnection. In the case of a strongly magnetized disk, we obtain \(\eta\approx A\) and \(\nu\approx 0\), which means that such a disk has both very few dynamo polarity reversals and low reconnection efficiency. A weakly magnetized disk yields \(\eta\approx 0\) and \(\nu\gg 1\), and it is realized when frequent polarity reversals cause the majority of the energy to be dissipated by magnetic reconnection. ### Implementation of MRI-quenching The vertical support in the disk is caused by the toroidal field produced by the MRI dynamo in the presence of a net vertical magnetic field, and the efficiency of this process is described by the magnetic viscosity parameter \(\alpha_{\rm B}\). However, when the magnetic field becomes too strong compared to the kinetic force of the gas, the MRI is inefficient, and the dynamo is quenched (BAR15). This causes a decrease in toroidal field production in some areas of the vertical structure and reduces the magnetic support. The approximate condition for this efficiency limit is given by Pessah & Psaltis (2005) and can be written as a limit on the magnetic pressure: \[P_{\rm mag}\leq P_{\rm mag,max}=\sqrt{\frac{5}{3}\rho P_{\rm gas}}\ \Omega R. \tag{11}\] Since the limit on the magnetic pressure depends on the density and gas pressure, it becomes particularly significant in AGN disks, where the contribution of gas pressure relative to radiation pressure is smaller than in stellar-mass black hole accretion disks, i.e. radiation pressure dominates for a larger range of accretion rates and out to larger distances from the black hole (Kato et al., 2008). We implement this condition by limiting the \(\alpha_{\rm B}\) parameter: \[\alpha_{\rm B}^{\prime}=\alpha_{\rm B}\ T(P_{\rm mag,max}/P_{\rm mag})=\alpha_{ \rm B}\ T(x), \tag{12}\] where \(T(x)=x^{4}/(1+x^{4})\) is a smooth threshold-like function with values close to 0 for \(x<1\) and close to 1 for \(x\gg 1\). Using this method, we are able to obtain a gradual transition between the zones where the MRI dynamo operates and the zones where it is quenched.
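A short sketch of the parameter relations of Eqs. (9)-(10) and of the quenching prescription of Eq. (12) is given below, with \(A=0.3\) and \(p=0.3\) as adopted in the text; the printed values of \(\alpha_{\rm B}\) are only examples.

```python
# Sketch of Eqs. (9)-(10) with A = 0.3 and p = 0.3, and of the smooth
# MRI-quenching threshold T(x) = x^4 / (1 + x^4) entering Eq. (12).
A_TORQUE = 0.3
P_EXP = 0.3

def eta(alpha_b):
    """Magnetic buoyancy parameter, Eq. (9); valid for 0 < alpha_B < 2A."""
    return A_TORQUE * (alpha_b / (2.0 * A_TORQUE)) ** P_EXP

def nu(alpha_b):
    """Reconnection efficiency parameter, Eq. (10)."""
    x = alpha_b / (2.0 * A_TORQUE)
    return (1.0 - x ** P_EXP) / x

def alpha_b_quenched(alpha_b, p_mag, p_mag_max):
    """Effective alpha_B' of Eq. (12): suppressed where P_mag > P_mag,max."""
    x = p_mag_max / p_mag
    return alpha_b * x**4 / (1.0 + x**4)

for ab in (0.02, 0.1, 0.5):   # weakly to strongly magnetized examples
    print(f"alpha_B = {ab:4.2f}:  eta = {eta(ab):.3f},  nu = {nu(ab):.2f}")
```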
We have checked that the final results of our model are not sensitive to the choice of the particular shape of the threshold function. The magnetic field has never been measured in an accretion disk around an SMBH, and therefore we have no constraints on what it should be. Only simulations give us some clues, and we have checked that our resulting vertical magnetic field distribution agrees with simulations (GR20). Even if the BAR15 model is assumed, the energy released in the magnetically heated disk is comparable to the energy generated by viscosity, and it presents a natural way of producing energy by MRI and transferring this energy to the gas. Below we show how this magnetic pressure affects radiative processes in accretion disks around SMBHs. ### Thermal instability To determine the temperature of the gas, we need to solve the thermal balance equation, where on one side we have the heating terms, and on the other side the radiative cooling function. This equation normally has one solution, and the cooling rate should have a positive derivative with respect to temperature. In some circumstances this problem has more than one solution, typically three, one of them being unstable. For that unstable solution, assuming that the heating is roughly independent of the gas parameters, the condition for instability reads \[\frac{d\ln\Lambda}{d\ln T}=\frac{\partial\ln\Lambda}{\partial\ln T}-a\frac{ \partial\ln\Lambda}{\partial\ln\rho}\leq 0, \tag{13}\] where \(a\) is a numerical constant, depending on the constraining regime. Some typical values are: \(a=1\) for \(\delta P_{\rm gas}=0\), \(a=0\) for \(\delta\rho=0\) and \(a=\frac{\beta}{\beta+2}\) for \(\delta P_{\rm gas}+\delta P_{\rm mag}=0\), where \(\beta=P_{\rm gas}/P_{\rm mag}\) (Field, 1965). Usually, \(\delta P_{\rm gas}=0\) is the assumed thermodynamic constraint for the instability, and if only Compton scattering and free-free cooling are taken into account, this limits the density of the gas, according to the formula given in GR20: \[n/n_{0}<(T/T_{\rm rad})^{-1/2}, \tag{14}\] where \[n_{0}=4\times 10^{16}\left(\frac{T_{\rm rad}}{10^{6}\,{\rm K}}\right)^{9/2}. \tag{15}\] If we assume that around the base of the corona \(T\approx T_{\rm rad}\), this condition becomes \(n<n_{0}\), and we can estimate the approximate limit for the optical depth of the corona. The estimate for the density given in Eq. 15 assumes that the gas pressure is constant. In a magnetically dominated corona, however, the density in hydrostatic equilibrium is mostly imposed by the magnetic field gradient. For that case, we might assume that the constant-density model is more relevant than the constant-pressure model. Luckily, in the isochoric regime the condition for instability is never satisfied, and when magnetic pressure is added, even small values are enough to completely eliminate the instability. ### Radiation-pressure supported solutions For SMBHs, as opposed to black holes of stellar mass, radiation pressure has a large contribution in comparison to the gas pressure in supporting the disk structure. This is important because radiation pressure dominated disks are considered unstable for a certain range of parameters (Lightman & Eardley, 1974; Shibazaki & Hoshi, 1975; Shakura & Sunyaev, 1976; Begelman, 2006; Janiuk & Czerny, 2011).
When solving the vertical structure of a geometrically thin accretion disk, this radiation pressure instability manifests itself as a "blown up" solution, where the density is roughly constant over a large part of the disk height, but it may happen that the density is larger around the photosphere than at the disk mid-plane. In a non-magnetic disk, this would satisfy the criterion for the convective instability and the density gradient would be restored (Rozanska et al., 1999). In our models, we do not include convection, as it might be restricted by the magnetic field, which shapes the disk structure. Instead, we compute the gradient of the density with respect to the total pressure, \(\propto\frac{d\ln\rho}{d\ln P}\), and check its minimal value. A negative value means that a density inversion occurs and the model might not be stable. We mark these models clearly while presenting the results of our computations. Nevertheless, since we do not solve time-dependent radial equations (Szuszkiewicz & Miller, 1997), we are not able to compare changes caused by TI to those generated by the radiation pressure instability front, where we expect that the former moves in the vertical direction, while the latter moves in the radial direction. We plan to address this analysis in future work. ## 3 Results For better visualization of the physical conditions occurring at the border between the disk and the warm corona, we plot several quantities describing the processes which take place in magnetically supported and radiatively cooled accretion disks. From X-ray spectral fitting we measure an averaged temperature of the warm corona cooled by Comptonization: \[T_{\rm avg}=\tau^{-1}\int_{0}^{\tau_{\rm cor}}Td\tau. \tag{16}\] Note that for the purpose of this paper we adopt \(\tau\) to be the Thomson optical depth measured from the surface towards the disk mid-plane. This is due to the fact that the electron scattering optical depth is commonly measured during data analysis, when the soft X-ray excess is fitted with a Comptonization model. We define the base of the corona \(\tau_{\rm cor}\) as the \(\tau\) for which the temperature reaches its minimum. In some figures displaying the vertical structure, we also mark the position of the photosphere, where the total optical depth \(\tau+\tau_{\rm ff}^{\rm p}=1\), or the thermalization zone, where the effective optical depth \(\tau^{*}=\sqrt{\tau_{\rm ff}^{\rm p}(\tau+\tau_{\rm ff}^{\rm p})}=1\), with \(\tau_{\rm ff}^{\rm p}\) being the Planck-mean optical depth for free-free absorption. To trace TI, the important quantity is the well-known ionization parameter (Krolik et al., 1981; Adhikari et al., 2015), defined as: \[\Xi=\frac{P_{\rm rad}}{P_{\rm gas}}. \tag{17}\] Although we do not compute the ionization states of the matter, the above parameter indicates the place in the disk structure where TI physically starts (Rozanska & Czerny 1996; Rozanska 1999). TI appears exactly when the gradient of the cooling function, the so-called stability parameter computed under constant gas pressure, becomes negative, \[\mathcal{L}_{\rm rad}\ =\ \left.\frac{d\ln\Lambda}{d\ln T}\right|_{\delta P_{\rm gas}=0}<0\, \tag{18}\] as we also visualize in our figures below. Our computations allow us to indicate the Thomson optical depth, \(\tau_{\rm min}\), at which the above stability parameter is minimal. In addition, we calculate the value of the net cooling rate gradient with respect to temperature under constant gas plus magnetic pressure, in order to show how magnetic pressure affects the stability of the disk heated by MRI.
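The stability parameter of Eq. (18) can be estimated numerically from any net cooling function; the sketch below uses a centred finite difference of \(\ln\Lambda\) along a constant-gas-pressure perturbation (\(\rho T=\mathrm{const}\)) and assumes a state with a positive net cooling rate.

```python
import math

def stability_parameter(cooling_rate, rho, T, T_rad, eps=1.0e-4):
    """Finite-difference estimate of L_rad = dlnLambda/dlnT at delta P_gas = 0 (Eq. 18).

    cooling_rate(rho, T, T_rad) is any net cooling function, e.g. the sketch of
    Eq. (6) given earlier; rho * T is held constant along the perturbation.
    """
    def ln_lambda(ln_t):
        t_pert = math.exp(ln_t)
        rho_pert = rho * T / t_pert          # delta P_gas = 0  =>  rho * T = const
        return math.log(cooling_rate(rho_pert, t_pert, T_rad))

    ln_t0 = math.log(T)
    return (ln_lambda(ln_t0 + eps) - ln_lambda(ln_t0 - eps)) / (2.0 * eps)

# Negative values of the returned quantity mark the classically unstable zone.
```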
To estimate the importance of the magnetic pressure versus the gas pressure at each depth of the disk, we use the commonly adopted magnetic pressure parameter, defined as: \[\beta=\frac{P_{\rm gas}}{P_{\rm mag}}. \tag{19}\] For the purpose of this paper, in analogy to the ionization parameter, we define the magnetic ionization parameter as: \[\Xi_{\rm m}=\frac{P_{\rm rad}}{P_{\rm mag}+P_{\rm gas}}, \tag{20}\] which helps us to show the domination of radiation pressure across the disk vertical structure. Furthermore, we have demonstrated in GR20 that the density of the corona depends mostly on the magnetic field gradient: \[q=-\frac{d\ln P_{\rm mag}}{d\ln z}. \tag{21}\] Note that all parameters change across the disk/corona vertical structure, and we mark them with a subscript "0" when they are defined at the mid-plane. We also define the magnetic dissipation rate as follows: \[q_{\rm h}=\left(\frac{m_{\rm H}}{\rho}\right)^{2}\mathcal{H}, \tag{22}\] in erg cm\({}^{3}\) s\({}^{-1}\), so it can be compared to other heating/cooling rates which are given in the literature in the same units.

Figure 1: Vertical structure of an accretion disk for our CM model versus \(z\) in cm measured from the disk mid-plane (\(z=0\)) up to the surface (right side of the figure panels), and for three different sets of magnetic parameters ranging from the strongest (M1 - yellow line) to the weakest (M3 - black line) magnetic field, as listed in the box below the figure. Local temperature, ionization parameters (Eqs. 17 and 20), and stability parameter (Eq. 18) are plotted in the top row, while pressures, fluxes relative to the total dissipated flux, and heating rate are plotted in the bottom row, respectively. In addition, the value of \(\alpha_{\rm B}^{\prime}\) according to Eq. 12 is shown in the last panel of the second row by dashed lines. The positions of the corona base \(\tau_{\rm cor}\) are marked by vertical dotted lines, and the positions of the photosphere by dashed lines, for each model respectively.

As an outcome of our computations we also deliver information about the total surface density \(\Sigma\) in g cm\({}^{-2}\) of the disk/corona system, which is the density integrated over the total height. Finally, the properties of the Compton-cooled surface zone are observationally tested with the use of the Compton parameter: \[y_{\rm avg}=\int_{0}^{\tau_{\rm cor}}\frac{4k(T-T_{\rm rad})}{m_{\rm e}c^{2}}\,(1+ 2\tau)\,d\tau\, \tag{23}\] and in the section below we determine this parameter either up to the thermalization layer \(\tau^{\star}\) or up to the corona base \(\tau_{\rm cor}\). In analogy to Haardt & Maraschi (1991), we denote the fraction of the energy released in the corona versus the total thermal energy \(F_{\rm rad}\) as \[f=\frac{F_{\rm rad}^{\rm cor}}{F_{\rm rad}}=1-\frac{F_{\rm rad}^{\rm disk}}{F_{ \rm rad}}, \tag{24}\] where \(f=0\) corresponds to a passive corona, whereas \(f=1\) corresponds to a passive disk. In the case of a magnetically supported disk, \(f\) depends only on the magnetic parameters, and both fluxes \(F_{\rm rad}^{\rm cor}\) and \(F_{\rm rad}^{\rm disk}\) are not assumed but numerically calculated in our model. The division between the disk and the corona is taken at the layer where the temperature reaches its minimum, i.e. at \(\tau_{\rm cor}\). All quantities listed in this section are self-consistently derived as an outcome of our model. We present them in order to check the model correctness and to compare them to observations. ### Vertical structure In order to understand the formation of the warm corona, the vertical structure of our CM model is shown in Fig. 1.
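Two of the diagnostics defined above, \(T_{\rm avg}\) of Eq. (16) and \(y_{\rm avg}\) of Eq. (23), can be computed from any tabulated vertical profile by simple quadrature; the sketch below assumes numpy arrays ordered from the surface inward with increasing Thomson optical depth \(\tau\).

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal quadrature (kept explicit for clarity)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

K_OVER_MEC2 = 1.380649e-16 / 8.187105e-7   # k_B / (m_e c^2) [1/K]

def corona_base_index(T):
    """Corona base tau_cor: location of the temperature minimum."""
    return int(np.argmin(T))

def t_avg(tau, T):
    """Averaged corona temperature of Eq. (16)."""
    i = corona_base_index(T)
    return _trapz(T[:i + 1], tau[:i + 1]) / tau[i]

def y_avg(tau, T, T_rad):
    """Compton parameter of Eq. (23), integrated down to tau_cor."""
    i = corona_base_index(T)
    f = 4.0 * K_OVER_MEC2 * (T[:i + 1] - T_rad[:i + 1]) * (1.0 + 2.0 * tau[:i + 1])
    return _trapz(f, tau[:i + 1])
```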
In all panels the horizontal axis is the height above the mid-plane of the disk in units of cm. Each figure panel represents the structure of the parameter given in the panel's title. Three models for various magnetic parameters are shown by different line colors described in the box above the figure caption. The model called M1 stands for a highly magnetized disk with a value of \(\alpha_{\rm B}=0.5\), while the model M3 stands for a weakly magnetized disk with \(\alpha_{\rm B}=0.02\). The gas temperature of the disk (panel a, solid lines), in all three models, follows the trend of a warmer center and a cooler photosphere, which eventually undergoes an inversion and forms a corona much hotter than the center of the disk. The inversion occurs roughly at \(z=10^{13}\) cm. The temperature of the corona rises gradually with height as \(z^{2}\), even though the heating rate per volume, plotted by solid lines in panel f of Fig. 1, decreases as \(z^{-q}\). Closer to the equator, the temperature rise typically present in the case of the standard viscous \(\alpha\)-disk (Shakura & Sunyaev 1973) saturates due to magnetic heating being distributed much further away from the equatorial plane, resulting in a lower radiative flux and a less steep temperature gradient. The radiation pressure is large in the core, between 2-3 orders of magnitude higher than the gas pressure, and can be of comparable magnitude to the magnetic plus gas pressure (panel b). It fully dominates in the atmosphere where the warm corona is formed, where both ionization parameters, \(\Xi\) and \(\Xi_{\rm m}\), increase by two orders of magnitude with height towards the surface. In addition, the inversion of both parameters is present, regardless of the value of the magnetic pressure. Such a course of \(\Xi\) and \(\Xi_{\rm m}\) is connected with the structure of the stability parameter, presented in panel c of Fig. 1, which is negative even for the most stringent condition of constant gas pressure. Although not large geometrically, this area actually constitutes a large part of the optically thick corona. The gas pressure (panel d) is roughly constant in the center (due to radiation pressure) and decreases as \(z^{-(q+2)}\), with the exception of the model M3, which has the weakest magnetic field. In that model, a region of lower gas pressure forms in the core of the disk, due to radiation pressure being the primary support for the disk structure; this happens because convection, which would remove this inversion, is absent in our model. In the region of the photosphere the gas pressure levels out, just to enter a sharp decrease in the magnetic-pressure-dominated corona.

Figure 2: Temperature structure of an accretion disk for our CM model versus ionization parameter (left panel), stability parameter (middle panel) and all three pressure components (right panel). The gas pressure structure is given by a dashed-dotted line, the magnetic pressure by a dashed line, and the radiation pressure by a dotted line. Three models of different magnetization are defined as in Fig. 1, and given by colors: M1 – black, M2 – green, and M3 – red. In each panel, the position of the temperature minimum is clearly indicated by a triangle of the same color. The gray dotted line in the middle panel marks the limit of negative values of the stability parameter. Thick solid lines in the right panel cover the values of pressure and temperature for which the classical stability parameter (under constant gas pressure) is negative.
It is worth noting that the magnetic pressure dominates significantly over the gas pressure in our model at all points of the vertical structure. The radiative and magnetic fluxes are released at a comparable rate inside the disk (panel e), where the fluxes are given in relation to the total flux released by accretion. In the magnetically dominated corona, the magnetic flux is quickly depleted in the sub-inversion area, and it is almost zero when it enters the corona, with the exception of the strongly magnetized models. The magnetic heating in the corona decreases sharply with height (panel f), except for the strongest magnetic field case M1, where the peak heating occurs in the transition region and then \(\mathcal{H}\) gently decreases. In this model, the MRI process begins to quench at around \(z=10^{12}\) cm, which is ten times lower than in the other models, and this can be seen as a decrease in \(\alpha_{\rm B}^{\prime}\) (given by the dashed lines in the same panel) slightly above the equatorial plane. By \(z=3\times 10^{13}\) cm, the MRI becomes almost completely inactive, due to the rise in the gas pressure in the corona. To produce such a corona, the value of the toroidal magnetic field at the equatorial plane is \(\log B_{0}=4.25\), 4.28, and 4.22 G for the considered models M1, M2 and M3, respectively, and it is three orders of magnitude lower than in the case of GBHB (GR20). To follow the vertical extension of the TI zone, we present the temperature structure versus the ionization parameter defined by Eq. 17 in the left panel of Fig. 2, for our three models M1, M2, and M3 of different magnetization. For each model, we mark the corona base \(\tau_{\rm cor}\) with a triangle. The part of the stability curve with negative slope is clearly present in each of the models, which shows that magnetic heating does not remove TI in regions of high temperature, of the order of 1 keV, and high density, of the order of \(10^{12}\) cm\({}^{-3}\), cooled by radiative processes. Interestingly, such TI is created with only Compton scattering and free-free absorption/emission taken into account. Going down in temperature, below the corona base, there is another turning point. Below this point, the gas is stabilized by magnetic heating. This can first be deduced from the fact that the disk temperature increases towards the equatorial plane, which is clearly presented in panel a of Fig. 1. In order to show how magnetic pressure affects the warm heated layer above the cold disk, we plot the stability parameter in the middle panel of Fig. 2, for all three models, in two versions. The solid line presents the classical cooling rate gradient under constant gas pressure, while the dashed line shows the same gradient under constant gas plus magnetic pressure.

Figure 3: The properties of the warm corona on the accretion rate – magnetic viscosity parameter plane. All models have been calculated for \(M_{\rm BH}=10^{8}M_{\odot}\) and \(R=6\,R_{\rm Schw}\). Upper panels show: a - average temperature \(T_{\rm avg}\) of the corona in keV (colors) and optical depth of the base of the warm corona \(\tau_{\rm cor}\) (contours); b - \(\Xi\) (colors) and magnetic pressure parameter \(\beta\) (contours), both at \(\tau_{\rm cor}\); and c - number density \(n_{\rm H}\) in cm\({}^{-3}\) (colors) at \(\tau_{\rm cor}\) and total column density \(\Sigma\) in g cm\({}^{-2}\) (contours). The dotted contour here shows where \(d\Sigma/d\dot{m}=0\). Bottom panels display: d - \(y_{\rm avg}\) parameter of the warm corona at the thermalization zone, i.e. \(\tau^{*}=1\) (colors), and at the base of the corona \(\tau_{\rm cor}\) (contours); e - fraction of radiative energy produced by the corona \(f\) (colors) and the maximum value of the magnetic field gradient \(q\) (contours); and f - the minimum value of \(\mathcal{L}_{\rm rad}\) throughout the disk height (colors) and the Thomson optical depth \(\tau_{\rm min}\) at which it occurs (contours); green areas are stable disks, whereas magenta areas indicate thermal instability in the corona, and the gray dotted line corresponds to \(\mathcal{L}_{\rm rad}=0\). In all panels, the dark contour indicates the parameter sub-space affected by the density inversion.
It is clear that a negative value of the stability parameter occurs only in the case of the classical assumption of constant gas pressure. Magnetic pressure makes the cooling rate gradient always positive, acting as a freezer for the eventual evolution of the gas due to TI. For all three models, the base of the corona occurs below the TI zones under the classical constant gas pressure condition. In a forthcoming paper, we plan to calculate the proper timescales for TI evolution and for the existence of the magnetically heated corona. In a magnetically supported disk, the radiation pressure dominates in the disk interior, while it becomes comparable to the magnetic pressure in the thermally unstable zones. This is clearly demonstrated in the right panel of Fig. 2, where thick solid lines cover the pressure curves in the regions where the classical stability parameter is negative. The gas pressure, which reflects the gas structure, is always at least four orders of magnitude lower than the magnetic pressure. It is worth noting that, going from the surface towards the corona base, the gas temperature decreases while all three pressure components tend to increase. After passing \(\tau_{\rm cor}\), the temperature starts to increase with depth, together with all three pressure components. Small negative slopes in the vertical structure of the gas pressure do not affect the overall stability of the disk, since the gas distribution is mostly maintained by the magnetic pressure. ### Warm corona across the parameter space The dependence of the coronal parameters derived from our model on the accretion rate and on the magnetic field strength for black hole mass \(M_{\rm BH}=10^{8}M_{\odot}\) and at the radius \(R=6\,R_{\rm Schw}\) is shown in Fig. 3. We adopt that the corona extends from the zone where the gas temperature reaches its minimum, as visible in panel a of Fig. 1, up to the top of the atmosphere. The optical depth, \(\tau_{\rm cor}\), is greater for a higher accretion rate and a stronger magnetic viscosity parameter. On the other hand, the temperature \(T_{\rm avg}\), averaged over \(\tau\) from the surface down to \(\tau_{\rm cor}\), seems to be mostly dependent on the magnetic field strength. As can be read from the ratios of radiation, gas and magnetic pressures, displayed in panel b of Fig. 3 by plotting \(\Xi\) (color maps) and \(\beta\) (contours) at the corona base, the disk is dominated by radiation pressure over the whole parameter space. In addition, the magnetic pressure overwhelms the gas pressure, and this is the main reason why we obtain an optically thick soft corona layer, as predicted by Rozanska et al. (2015). The magnetic pressure is limited by the condition in Eq. 11, which appears as a sharp transition in all global parameters at around \(\dot{m}>0.01\). For such a large accretion rate, the contribution of radiation pressure is very high, and if the magnetic support is not strong enough, the solution becomes unstable (see Sec. 2.4), which is marked as a black overlay in all panels.
which is marked as a black overlay in all panels. This effectively restricts the solutions at high accretion rates only to models where the magnetic field is sufficiently strong (\(\alpha_{\rm B}>0.05\)). The gas density at the base of the corona (panel c of Fig. 3) is highest for both a high accretion rate and a high magnetic parameter.

Decreasing the magnetic field also has a global stability effect on the disk. For a thin accretion disk, the quantity \(d\Sigma/d\dot{m}\) should be positive; otherwise we encounter the well-known stability problem of the standard viscous \(\alpha\)-disk (Shakura & Sunyaev, 1973). For a weak magnetic field, this value is negative even for \(\dot{m}=0.003\), as indicated by the region enclosed by the dotted contour in panel c of Fig. 3. If we introduce a stronger magnetic field, however, the band of accretion rates where this criterion is met is narrowed down considerably.

Investigating the \(y_{\rm avg}\) parameter (panel d of Fig. 3) and comparing it to the corona-to-total flux ratio \(f\) (panel e of Fig. 3), we find similar behavior, where the largest values of both parameters are obtained for \(\alpha_{\rm B}>0.3\). Nevertheless, the Compton parameter has its largest values for a high accretion rate, while the amount of energy generated in the corona is largest for low values of the accretion rate. The magnetic field gradient parameter \(q\) is mostly imposed by the model parameters, but it is worth noting that its value only drops below 2 for the most magnetized models (panel e). On the other hand, the value of the stability parameter \(\mathcal{L}_{\rm rad}\) presented in panel f is almost exclusively dependent on the accretion rate, while the optical depth \(\tau_{\rm min}\) can reach values up to 5.

### Radial structure of the warm corona

Fig. 4 shows the radial dependence of the global parameters of the warm corona for our canonical BH mass \(M_{\rm BH}=10^{8}M_{\odot}\), and for three strengths of the magnetic field, i.e. the M1, M2, and M3 models, with the same magnetic parameters \(\alpha_{\rm B}\), \(\eta\), \(\nu\) as in Fig. 1. Upper panels show maps of \(T_{\rm avg}\) given by Eq. 16, and contours of \(\tau_{\rm cor}\). The dotted contour separates thermally stable regions at high accretion rates from thermally unstable ones at low accretion rates by the condition \(\mathcal{L}_{\rm rad}=0\). Bottom panels present maps of the number density of the warm corona, and contours indicate the total column density \(\Sigma\) computed from the top down to the mid-plane. In the case of the M3 model, with the lowest magnetization, we clearly indicate the sub-space affected by the classical radiation pressure instability, marked by dark contours.

The overall radial dependence of the warm corona parameters follows the trend that the corona is hotter and more optically thick for a strongly magnetized disk (model M1, left panels). A higher accretion rate makes the corona cooler, i.e. lower \(T_{\rm avg}\), but optically thicker. Such a warm and optically thick corona in the case of a high accretion rate is denser than the hotter corona that appears at a low accretion rate. However, for the model with a strong field, the quantity \(d\Sigma/d\dot{m}\), which is associated with the stability of the viscous accretion disk, is positive in all but a narrow stripe of the parameter space, marked by dotted lines in the bottom panels of Fig. 4. On the other hand, for the weak-field M3 model, there is a maximum of the column density around \(\dot{m}=0.03\), above which \(\Sigma\) decreases.
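The viscous-stability criterion quoted above is simple to check numerically once \(\Sigma(\dot{m})\) has been tabulated from a grid of models. The following is only a minimal sketch of that check; the \(\dot{m}\) and \(\Sigma\) arrays below are illustrative placeholders, not output of the model.

```python
import numpy as np

# Placeholder grid of accretion rates and the corresponding total column
# densities Sigma(mdot) at fixed alpha_B and radius (illustrative values only).
mdot = np.array([0.003, 0.01, 0.03, 0.1, 0.3])          # Eddington units
sigma = np.array([1.2e3, 1.6e3, 1.9e3, 1.7e3, 1.4e3])   # g cm^-2

# Viscous stability of the standard alpha-disk requires dSigma/dmdot > 0.
dsigma_dmdot = np.gradient(sigma, mdot)
unstable = dsigma_dmdot < 0

for m, d, u in zip(mdot, dsigma_dmdot, unstable):
    print(f"mdot = {m:6.3f}  dSigma/dmdot = {d:10.3e}  "
          f"{'viscously unstable' if u else 'stable'}")
```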
In order to specify how TI, caused by gas cooling processes, affects the existence of the warm corona, we plot the vertical extent of the thermally unstable zones versus the distance from the black hole in Fig. 5, for three different magnetic parameters and three accretion rates. The extent of the TI zone is presented as a pink filled area. The base of the corona is clearly marked by a thick black solid line. The optical depth of the corona base is larger for a higher accretion rate. As was shown in the previous section, the TI indicated by negative values of the stability parameter is present for a broad range of parameters under the assumption of constant gas pressure, but disappears when the parameter is computed at a constant sum of gas and magnetic pressure. Furthermore, it is produced by radiative processes, such as Comptonization and free-free emission/absorption, which are the only cooling processes in the models considered in this paper. Ionization/recombination may influence the extent of TI (Rozanska & Czerny, 1996), but the final result cannot be predicted analytically. Further computations are needed to show whether ionization/recombination decreases or increases the thermally unstable zone.

Only for high values of the accretion rate, higher than 0.2, and only in the inner regions of the accretion disk, within 20 \(R_{\rm Schw}\), is the existence of a thermally stable warm corona under the classical condition of constant gas pressure possible. We claim here that, in the case of an MSD, the condition of thermal stability under constant total (i.e. gas plus magnetic) pressure should be used in order to estimate whether the TI zone is frozen into the magnetic field. In the forthcoming paper, we plan to calculate the relevant timescales for this kind of radiative cooling in magnetically supported plasma.

### Random models sample

Our numerical code (GR20) allows us to calculate a huge number of models in a finite time. In order to put constraints on the basic observational parameters resulting from our model, the temperature of the corona and its optical depth, we have selected results from a random distribution of the six input parameters of our calculations described in Sec. 2.1, to account for their uncertainty. We use the notation where \(U(a,b)\) is a random variable with a uniform distribution between \(a\) and \(b\), and \(N(\mu,\sigma)\) is a normal distribution with expected value \(\mu\) and spread \(\sigma\). We use a fixed radius \(R=6R_{\rm Schw}\), while the black hole mass \(M_{\rm BH}\) and accretion rate \(\dot{M}\) (in g s\({}^{-1}\)) are selected so as to mimic the distribution of the same quantities in the sample of 51 AGN for which the warm corona was observed (Jin et al. 2012, see section below). The random values are drawn from the following distributions:

\[\log\left(M_{\rm BH}\right)=N(7.83,0.63)\,, \tag{25}\]

\[\log(\dot{M})=0.27\left(\log\left(M_{\rm BH}\right)-8\right)+N(25.83,0.52)\,, \tag{26}\]

where the expected and spread values were estimated from the observed sample. After running the code and solving the structure, we reject the models that satisfy the radiation pressure criteria described in Sec. 2.4. We generated a set of models based on the random distribution of the global disk parameters described by Eqs. 25 and 26. The vertical structures of all models in the sample are plotted collectively in Fig. 6: the full sample (upper panels) as well as the sample clipped according to the density-inversion criterion (lower panels) described in Sec. 2.4.
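For reference, the random draws of Eqs. 25 and 26 can be reproduced with a few lines of NumPy. This is only a sketch of the sampling step; the disk-structure solver applied to each draw is not shown, and the seed and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_models = 1000

# Eq. 25: log10(M_BH) ~ N(7.83, 0.63), M_BH in solar masses
log_mbh = rng.normal(loc=7.83, scale=0.63, size=n_models)

# Eq. 26: log10(Mdot) = 0.27 (log10(M_BH) - 8) + N(25.83, 0.52), Mdot in g/s
log_mdot = 0.27 * (log_mbh - 8.0) + rng.normal(loc=25.83, scale=0.52, size=n_models)

m_bh = 10.0 ** log_mbh   # black hole mass
mdot = 10.0 ** log_mdot  # accretion rate

print(f"median log M_BH = {np.median(log_mbh):.2f}, "
      f"median log Mdot = {np.median(log_mdot):.2f}")
```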
The temperature inversion occurs between \(\tau=0.01\) and \(\tau=30\) for all models, with both the magnetic field and the accretion rate having a positive effect on the corona depth, as shown in the left panels of both rows of Fig. 6. The density follows a rather predictable relation \(n_{\rm H}\propto z^{-q_{\rm H}-2}\propto\left(N_{\rm H}\right)^{\frac{q_{\rm H}-2}{q_{\rm H}^{2}+1}}\) and the heating rate changes as \(q_{\rm h}\propto z^{q_{\rm H}+4}\propto\left(N_{\rm H}\right)^{-\frac{q_{\rm H}-2}{q_{\rm H}^{2}+1}}\), with small fluctuations due to the opacity. When we remove the models that do not satisfy our criterion for the density gradient, i.e. we switch from the upper to the lower panels of Fig. 6, the picture does not change much, but it becomes more complex when we omit the parts of the structure which are thermally unstable (when \(d\ln\Lambda/d\ln T<0\)). For the set of models we analyzed, the TI is confined to the area where \(10^{-24}\leq q_{\rm h}\leq 10^{-22}\), and this does not seem to vary strongly with any of the model parameters we consider. Thus, we can summarize that, independently of the model parameters, those associated with the accretion disk or with the magnetic field, the classical TI (under constant gas pressure) occurs for a magnetically supported disk with radiative processes such as Comptonization and free-free emission.

Figure 4: The properties of the warm corona on the accretion rate – radius plane, given in units of \(R_{\rm Schw}\). All models have been calculated for \(M_{\rm BH}=10^{8}M_{\odot}\), and the magnetic parameters correspond to M1 (left), M2 (middle), and M3 (right panel columns), with values given under Fig. 1. Upper panels display the average temperature \(T_{\rm avg}\) of the corona in keV (colors) and the optical depth of the base of the warm corona \(\tau_{\rm cor}\) (contours). Dotted contours in the upper panels indicate where \(\mathcal{L}_{\rm rad}=0\) under constant gas pressure. Bottom panels show the number density \(n_{\rm H}\) in cm\({}^{-3}\) (colors) at \(\tau_{\rm cor}\) and the total column density \(\Sigma\) in g cm\({}^{-2}\) (contours). Dotted contours in the bottom panels show where \(d\Sigma/d\dot{m}=0\). Dark contours, in the case of the M3 model, indicate the parameter sub-space affected by the density inversion.

### Observational predictions

As the main outcome of our computations, we obtain values of the warm corona temperature and optical depth, which are directly comparable to the measurements reported in the literature. The data analysis concerning the warm corona requires high-resolution, sensitive telescopes to collect enough photons in the soft X-ray energy range around 1 keV. The overall modeling is time consuming, and usually one paper is devoted to one source. We have looked at the literature and chose those sources for which both parameters, the warm corona temperature and its optical depth, were measured. Warm corona measurements for a sample of objects were presented only by Jin et al. (2012), who collected 51 sources. We take all those points into account when comparing to the data. In addition, we selected measured warm corona parameters of individual sources found in Magdziarz et al. (1998); Page et al. (2004); Mehdipour et al. (2011); Done et al. (2012); Petrucci et al. (2013); Matt et al. (2014); Mehdipour et al. (2015); Middei et al. (2018); Porquet et al. (2018); Ursini et al. (2018); Middei et al. (2019). All data points compared to our model in Sec. 3.5 below are listed in Tab. 1.
From our random sample of models, we extracted the essential parameters: the optical depth of the warm corona, its temperature, the accretion rate, the black hole mass, and the magnetic field gradient, all of them described at the beginning of Sec. 3. The results are shown in Fig. 7, where the random values described in Sec. 3.4 are selected so as to roughly match the observational points taken from Tab. 1 in terms of the accretion rate and the black hole mass. At the same time, we allowed the magnetic parameters of the disk to vary over a very broad range, from \(\beta_{0}\approx 10^{3}\) to \(\beta_{0}\approx 1\). This allows us to scan all possibilities while remaining close to observations in terms of the known quantities. Observations are marked on the right side of the figure according to the data analyzed in the papers listed in Tab. 1. Some sources are reported twice or even more, but this does not matter, since we are interested in the general observational trend of the warm corona.

Figure 5: Thomson scattering optical depth of the TI zones, plotted against the radial distance from the BH of mass \(M_{\rm BH}=10^{8}M_{\odot}\), and shown for three values of the magnetic parameters M1, M2, M3 (from left to right columns) and accretion rates \(\dot{m}=0.12, 0.27\), and 0.1 (from upper to bottom rows). The photosphere is marked using a dashed green line, the temperature minimum using a thick black solid line, and the thermalization depth (\(\tau^{*}=1\)) using a red dotted line. The extent of thermal instability is presented as a pink filled area.

In the left panel of Fig. 7, we include all results of our models as colored open circles, with no limit on radiation pressure or viscous instability (as long as the model has converged). The size of a circle indicates the value of the accretion rate, while its color reflects the value of the magnetic field gradient, both given on the right side of the figure. The cloud of points on the \(\tau_{\rm cor}\)–\(T_{\rm avg}\) plane coincides with the observational results, but the range of corona parameters obtained with our numerical code is broader. A clear correlation of the magnetic field strength and the accretion rate with the resulting corona parameters is seen. Observations follow the models computed for higher accretion rates and a moderate magnetic field gradient.

In the right panel of Fig. 7, we removed all the models where the density inversion occurs, which are the models dominated by the radiation pressure and not stabilized by the magnetic pressure. We reject these models because, unless convection is present to transfer the energy from the disk, the solution becomes "puffed up" and generally unstable. The models that are removed in this step are mostly models with \(\tau_{\rm cor}<10\) or optically thick models with high accretion and a low magnetic field. The data points seem to group around the models which do not display TI in the structure of the warm corona, but some of them follow thin lines, which are thermally unstable zones. The distribution among instability strips can be attributed to our choice of the cutoff of the corona base at the temperature minimum. Had it been chosen according to a different criterion, a shallower and hotter corona would be obtained. TI might cause the corona to be shallower than the temperature minimum, which could potentially happen at an undetermined point in the vertical structure.

\begin{table} \begin{tabular}{l|l|l}
Source & Type & Ref. \\ \hline \hline
NGC 5548 & Sy1 & Magdziarz et al. (1998) \\ \hline
Q 0056-363 & QSO & \\
Mrk 876 & QSO & Page et al. (2004) \\
B2 1028+31 & QSO & \\ \hline
Mrk 509 & Sy1.5 & Mehdipour et al. (2011) \\ \hline
RE 1034+396 & NLS1 & Done et al. (2012) \\
PG 1048+213 & BLS1 & \\ \hline
51 sources & AGN1 & Jin et al. (2012) \\ \hline
Mrk 509 & Sy1.5 & Petrucci et al. (2013) \\ \hline
Ark 120 & Sy1 & Matt et al. (2014) \\ \hline
NGC 5548 & Sy1 & Mehdipour et al. (2015) \\ \hline
NGC 7469 & Sy1 & Middei et al. (2018) \\ \hline
Ark 120 & Sy1 & Porquet et al. (2018) \\ \hline
3C 382 & BLRG & Ursini et al. (2018) \\ \hline
NGC 4593 & Sy1 & Middei et al. (2019) \\ \hline
\end{tabular} \end{table} Table 1: Observational points from the literature compared to our model.

Figure 6: Vertical structure of the temperature, density, and dissipation heating rate defined by Eq. 22 for the sample of models computed according to the method described in Sec. 3.4. The first row shows the complete sample of models, while the second row only shows models that do not exhibit the density inversion discussed in Sec. 2.4. Zones where TI occurs are omitted, which produces the gaps visible in many models. The color of the line corresponds to the magnetic gradient \(q\) (red is a stronger field, blue is weaker), while the thickness shows the accretion rate \(\dot{m}\) (thicker is higher). The gray vertical lines show the optical depths \(\tau\) equal to (from left to right): 100, 10, 1, 0.1, 0.01.

To check at which point of the vertical structure the data are in line with the model, we took the black hole mass and the accretion rate for each point from Jin et al. (2012), and ran a corresponding model, assuming constant magnetic parameters \(\alpha_{\rm B}=0.18\), \(\eta=0.21\) and \(\nu=1\). The global disk parameters were chosen to fall roughly in the middle of the range consistent with the observations. We then proceed to tune the parameter \(\alpha_{\rm B}\) for each model, so that at the optical depth \(\tau_{\rm cor}\) the average temperature yields \(T_{\rm avg}\), both values taken from the observational data. For some models this is not possible, but for most (44 objects from the sample of 51) such a value of \(\alpha_{\rm B}\) does exist. Both the fixed-parameter and the fine-tuned models are shown in Fig. 8, where different colors display different global disk parameters. The observed cases of \(T_{\rm avg}\) versus \(\tau_{\rm cor}\) are marked by filled circles, while the same parameters as an outcome of our adjusted models are given by crosses. Additionally, by thin solid lines, we plot the averaged temperature integrated from the surface down to the local optical depth given on the x-axis of our graph. Thick solid lines display the extensions of the thermally unstable zones. High accretion rate cases at the top do not indicate TI. It is clear from the figure that tuning the magnetic parameters does not result in much change, which means that the fixed values are a good approximation for this set of observations. In most cases, the observed optical depth of the warm corona is shallower than the \(\tau_{\rm cor}\) derived from our model and indicated by crosses in the figure, with the exception of a few models with the lowest accretion rate, where it seems to be deeper than predicted. For a range of models, the location of the warm corona coincides with the bottom rather than the top of the TI zone, and the locations of the two exhibit some correlation in that range. Above some threshold, our model does not predict the occurrence of the TI, yet the observed optical depth falls short compared to the \(\tau_{\rm cor}\) from our model.
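The per-source tuning of \(\alpha_{\rm B}\) described above amounts to a one-dimensional root-finding problem. A minimal sketch is given below; the function `t_avg_at_tau_cor` is only a stand-in for the full disk/corona solver (its toy form is purely illustrative), and the bracketing interval for \(\alpha_{\rm B}\) is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

def t_avg_at_tau_cor(alpha_b, m_bh, mdot):
    """Stand-in for the full disk/corona solver: it should return the
    temperature (keV) averaged down to the observed tau_cor for a model
    with magnetic viscosity alpha_b, mass m_bh (M_sun) and accretion
    rate mdot (Eddington units). The toy dependence below is only so
    that the example runs end to end."""
    return 0.1 + 1.5 * alpha_b - 0.2 * np.log10(mdot / 0.1)

def tune_alpha_b(t_avg_obs, m_bh, mdot, lo=0.01, hi=0.5):
    """Find alpha_B such that the model T_avg matches the observed value."""
    f = lambda a: t_avg_at_tau_cor(a, m_bh, mdot) - t_avg_obs
    if f(lo) * f(hi) > 0:       # no sign change -> no matching alpha_B exists
        return None
    return brentq(f, lo, hi)

alpha_b = tune_alpha_b(t_avg_obs=0.25, m_bh=1e8, mdot=0.1)
print("tuned alpha_B:", alpha_b)
```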
## 4 Discussion

We computed a sample of models of the disk/corona vertical structure, where both layers are physically governed by magnetic field heating and Compton and free-free cooling. Our description of the MRI is parameterized to reproduce the results obtained from simulations, as explained by GR20. The warm, optically thick corona is formed self-consistently on top of the accretion disk for a range of global disk and magnetic parameters. In general, the optical depth of the warm corona is larger for a higher accretion rate and higher magnetization. Nevertheless, the data show that even models of moderate accretion rate reproduce the observations when the magnetic parameters are adjusted.

Magnetic heating does not remove the classical TI under constant gas pressure present in regions of high temperatures, of the order of 1 keV, and high densities, of the order of \(10^{12}\) cm\({}^{-3}\), cooled by radiative processes. Interestingly, such TI is created only for Compton and free-free absorption/emission. When magnetic heating, with a simple description of MRI and reconnection, is taken into account as the main source of gas heating, the disk becomes dominated by magnetic pressure, which acts as a freezer for the TI zone. In order to check this behavior, the stability parameter should be computed under constant gas plus magnetic pressure. In such a case, the classical TI exists, with a negative slope of temperature versus ionization parameter, but the stability parameter under constant gas plus magnetic pressure is always positive. To prove whether such gas is stable, the timescales for TI frozen into the magnetic field should be estimated, and we plan to do this in the forthcoming paper.

In all previous approaches to ionized disk atmospheres, the ionization structure was always solved only up to the depth of the photosphere, i.e. to the lower branch, which was stabilized due to atomic cooling and heating. Many papers showing such a stability curve never ran the computations deeper into the disk structure, where the energy is dissipated. In our approach, the magnetic heating, which is generated at each point of the vertical structure of the disk and corona, leads to the normal behavior of the gas, i.e. the cold disk temperature increases when going towards the equatorial plane, which is explicitly presented in panel a of Fig. 1. Compton and free-free processes fully account for the extension of the stability curve from the hot layers at the corona surface down to the corona base at \(\tau_{\rm cor}\). Ionization/recombination may increase the unstable zone, but this should be checked in future studies. TI may impact the gas evolution on thermal timescales. Future, time-dependent simulations may show the impact of TI on the warm corona gas.

Figure 7: Parameters of the corona, \(\tau_{\rm cor}\) and \(T_{\rm avg}\), determined for a random sample of models are marked by open circles. The size of the points indicates the accretion rate and the color of the points corresponds to the magnetic field gradient (similar to Fig. 6), both given on the right side of the figure. For models where the TI is present, the values of \(\tau\) and \(T_{\rm avg}\) computed at every point of the instability are displayed by thin lines. In the left panel, all models are shown, while in the right panel, models where the density inversion occurs (see Sec. 2.4) are filtered out. Additionally, data points from the literature are drawn using the black markers listed on the right side of the figure. The data points clearly follow the TI strips.
For all analyzed models, the structure was stable when the isobaric constraint of constant gas plus magnetic pressure was assumed. Of course, such a treatment of the magnetic field only holds true in the idealized case, assuming that the field is frozen into the matter and there are no significant pressure gradients along the flux tube. If there is a pressure change, the magnetic structure would need to inflate as a whole, without any escape of the matter. Practically, magnetic structures in the corona are tangled and can be considered as pressurized but open. If a thermal runaway or collapse occurred in a flux tube, matter could be pushed or sucked into the flux tube in order to maintain the pressure equilibrium. In such a case, even in the presence of a strong field, the original criterion of constant gas pressure would hold, and strong TI would occur within separate flux tubes, forming prominence-like structures. If we searched for conditions to obtain a uniform, warm corona (and not clumpy prominences), the density \(n_{0}\) (Eq. 15) would still be a sensible order-of-magnitude limitation.

Figure 8: Profiles of the averaged temperature versus the optical depth for the fine-tuned models. The temperature is averaged between the optical depth given by the coordinate of the horizontal axis and the surface of the corona. Thicker lines are overlaid to indicate where the condition for the TI is fulfilled. Circle markers are the temperatures and optical depths obtained by Jin et al. (2012). Cross markers indicate the averaged temperature at \(\tau_{\rm cor}\) of a given model. Magnetic parameters are adjusted so that the temperature profile crosses the observational point. Each profile is shifted slightly for a clearer presentation, and the models are sorted according to the accretion rate \(\dot{m}\).

### Observational predictions and modeling implications

As we demonstrated in our paper, observations follow our model of the warm corona heated by the MRI. All of the data points are in the range \(T=\)0.01-1 keV and \(\tau_{\rm es}=\)2-50. The corona temperature appears to be strongly correlated with the coronal magnetic field (particularly its spatial gradient). The accretion rate, on the other hand, seems to increase the optical depth of the corona, at a slight cost to the average temperature.

Our numerical model has several weaknesses. The largest limitation of our model is that it is static and one-dimensional. This implies that the corona must be assumed to be layer-like, and that clumpy, unstable, optically thick structures cannot be studied. Given the degree to which the models are affected by the thermal instability, according to our study, we cannot definitively exclude at this point that a clumpy corona can also reproduce the observations. It is possible that a dynamical model could resolve the issue of a local instability as long as the overall optically thick disk structure survives. It is also possible that the thin-disk approximation is not well-suited to the problem, as there is growing evidence that multilayer accretion, where the accretion of the disk and the corona are decoupled, may be more realistic (Lancova et al., 2019; Wielgus et al., 2022). We are unable to treat such an accretion mode with our approach. Another limitation is that we do not obtain the spectrum from our model.
While quantities such as the optical depth and temperature can be fitted to spectra, and we compare those values with our model, there is some degeneracy, and the best way to verify the correctness of our predictions would be to at least compare the spectral parameter \(\Gamma\), which is the slope of the power law usually fitted to the data. Despite those limitations, our model allows us to determine the location and properties of the TI. That being said, we cannot exclude that there is a coincidence between the occurrence of TI in our models and the warm corona in nature.

One peculiar possibility is that the warm corona could actually be driven by the TI. Instead of one layer, many scatterings through more and less dense areas could be more effective than a single, uniform layer in producing a Comptonization spectrum. More investigation is needed using dynamic models that allow the unstable, clumpy medium to develop (including the magnetic field as an essential part of maintaining the physical structure of the clumps), followed by radiative transfer calculations, to prove whether such a structure can indeed create warm Comptonization signatures.

If we assume the opposite but (as far as we consider) more likely alternative, that the clumpy medium does not contribute to the observed spectrum, another interesting possibility emerges. It could be that the corona exists on the verge of the instability. New warm matter is constantly inflowing, and once it reaches the unstable conditions, it collapses into clumps that are ejected or (more likely) fall back into the disk core. If the condensation timescale is long compared to the dynamical timescale, the matter could exist at the brink of the instability long enough to continuously Comptonize the soft photons. This could act as a stabilizing mechanism and explain why the observed properties of the warm corona seem to be clustered in a narrow range of the parameter space. While we cannot distinguish which of these scenarios is closer to the truth, the striking coincidence of the warm corona with TI is hard to miss.

## 5 Conclusions

We clearly demonstrated that magnetic heating can produce a warm, optically thick corona above the accretion disk in AGN. The obtained theoretical values of the coronal temperatures and optical depths agree with those observed for the best-known sources. The magnetic viscosity parameter increases the coronal temperature and optical thickness.

Magnetic heating does not remove the local thermal instability of the corona (TI) caused by radiative processes in the warm (\(\sim\)1.0 keV) and dense (above \(\sim 10^{12}\) cm\({}^{-3}\)) gas. The classical TI operates for accretion rates lower than 0.1 in Eddington units and does not depend much on the magnetic viscosity; nevertheless, the combination of those parameters always fully determines the instability strip. When magnetic heating is taken into account, the stability parameter computed under constant gas plus magnetic pressure is always positive, suggesting that the classically thermally unstable zone becomes effectively frozen into the magnetic field and can produce interesting observational features, including the soft X-ray excess. The case of AGN differs from GBHB, where TI occurs for \(\dot{m}<0.003\) for low \(\alpha_{\rm B}\) and \(\dot{m}<0.05\) for high \(\alpha_{\rm B}\) (GR20). Also, the optical thickness of the warm corona in AGN is higher than in the case of GBHB by a factor of 5. Interestingly, TI operates only when Compton heating and free-free emission are introduced.
It does not require ionization/recombination processes to be included. Nevertheless, we claim here that ionization may influence the extension of the TI zone, and we plan to check this in our future work. Observations follow the TI strip of the warm corona in a magnetically supported disk. Thermally unstable gas may undergo evolution on a thermal timescale, which in principle can be computed in our models. Such a theoretical timescale should be compared to variability measurements of the soft corona. We plan to do this in the next step of our project. We conclude here that the TI caused in the gas by radiative cooling may be a common mechanism affecting the existence of the warm corona above accretion disks around black holes across different masses, since we obtain TI in the disk vertical structure in the case of AGN, considered here, as well as in the case of GBHB, presented in our previous paper (GR20).

###### Acknowledgements. This research was supported by Polish National Science Center grants No. 2019/33/N/ST9/02804 and 2021/41/B/ST9/04110. We acknowledge financial support from the International Space Science Institute (ISSI) through the International Team proposal "Warm coronae in AGN: Observational evidence and physical understanding". _Software_: FORTRAN, Python (Van Rossum & Drake, 2009), Sympy (Meurer et al., 2017), LAPACK (Anderson et al., 1990) and GNU Parallel (Tange, 2011).
2304.13145
T Cell Receptor Protein Sequences and Sparse Coding: A Novel Approach to Cancer Classification
Cancer is a complex disease characterized by uncontrolled cell growth and proliferation. T cell receptors (TCRs) are essential proteins for the adaptive immune system, and their specific recognition of antigens plays a crucial role in the immune response against diseases, including cancer. The diversity and specificity of TCRs make them ideal for targeting cancer cells, and recent advancements in sequencing technologies have enabled the comprehensive profiling of TCR repertoires. This has led to the discovery of TCRs with potent anti-cancer activity and the development of TCR-based immunotherapies. In this study, we investigate the use of sparse coding for the multi-class classification of TCR protein sequences with cancer categories as target labels. Sparse coding is a popular technique in machine learning that enables the representation of data with a set of informative features and can capture complex relationships between amino acids and identify subtle patterns in the sequence that might be missed by low-dimensional methods. We first compute the k-mers from the TCR sequences and then apply sparse coding to capture the essential features of the data. To improve the predictive performance of the final embeddings, we integrate domain knowledge regarding different types of cancer properties. We then train different machine learning (linear and non-linear) classifiers on the embeddings of TCR sequences for the purpose of supervised analysis. Our proposed embedding method on a benchmark dataset of TCR sequences significantly outperforms the baselines in terms of predictive performance, achieving an accuracy of 99.8\%. Our study highlights the potential of sparse coding for the analysis of TCR protein sequences in cancer research and other related fields.
Zahra Tayebi, Sarwan Ali, Prakash Chourasia, Taslim Murad, Murray Patterson
2023-04-25T20:43:41Z
http://arxiv.org/abs/2304.13145v2
# T Cell Receptor Protein Sequences and Sparse Coding: A Novel Approach to Cancer Classification ###### Abstract Cancer is a complex disease characterized by uncontrolled cell growth and proliferation, which can lead to the development of tumors and metastases. The identification of the cancer type is crucial for selecting the most appropriate treatment strategy and improving patient outcomes. T cell receptors (TCRs) are essential proteins for the adaptive immune system, and their specific recognition of antigens plays a crucial role in the immune response against diseases, including cancer. The diversity and specificity of TCRs make them ideal for targeting cancer cells, and recent advancements in sequencing technologies have enabled the comprehensive profiling of TCR repertoires. This has led to the discovery of TCRs with potent anti-cancer activity and the development of TCR-based immunotherapies. To analyze these complex biomolecules effectively, it is essential to represent them in a way that captures their structural and functional information. In this study, we investigate the use of sparse coding for the multi-class classification of TCR protein sequences with cancer categories as target labels. Sparse coding is a popular technique in machine learning that enables the representation of data with a set of informative features and can capture complex relationships between amino acids and identify subtle patterns in the sequence that might be missed by low-dimensional methods. We first compute the \(k\)-mers from the TCR sequences and then apply sparse coding to capture the essential features of the data. To improve the predictive performance of the final embeddings, we integrate domain knowledge regarding different types of cancer properties such as Human leukocyte antigen (HLA) types, gene mutations, clinical characteristics, immunological features, and epigenetic modifications. We then train different machine learning (linear and non-linear) classifiers on the embeddings of TCR sequences for the purpose of supervised analysis. Our proposed embedding method on a benchmark dataset of TCR sequences significantly outperforms the baselines in terms of predictive performance, achieving an accuracy of 99.8%. Our study highlights the potential of sparse coding for the analysis of TCR protein sequences in cancer research and other related fields. ## 1 Introduction T cell receptors (TCRs) play a crucial role in the immune response by recognizing and binding to antigens presented by major histocompatibility complexes (MHCs) on the surface of infected or cancerous cells, Figure 1(Shah et al., 2021). The specificity of TCRs for antigens is determined by the sequence of amino acids that make up the receptor, which is generated through a process of genetic recombination and somatic mutation. This enables T cells to produce a diverse repertoire of receptors capable of recognizing a wide range of antigens (Courtney et al., 2018). MHC molecules bind to peptide fragments that originate from pathogens and present them on the cell surface. This allows for recognition by the corresponding T cells (Janeway Jr et al., 2001). TCR sequencing involves analyzing the DNA or RNA sequences that code for the TCR protein on T cells, and it can be used to identify changes in the TCR repertoire that occur in response to cancer, as well as to identify specific TCR sequences that are associated with particular types of cancer (Kidman et al., 2020; Robins et al., 2009). 
TCR protein sequences have become a focus of interest for cancer immunotherapy, as they can be used to engineer TCR-based therapies for cancer treatment (Eshhar et al., 1996). Embedding-based methods involve mapping sequences onto low-dimensional vector representations, which can then be used for various downstream analyses (Yang et al., 2020). Classification is a common application of embedding-based methods in protein sequence analysis. By representing protein sequences as embeddings, classifiers can be trained to predict the functional class of new, unlabeled protein sequences based on their similarity to the training data (Hristescu and Farach-Colton, 1999; Iqbal et al., 2014). This technique can also be used to identify groups of related sequences with similar features, predict protein-protein interactions, and classify disease-causing mutations (Yang et al., 2020).

Figure 1: T-cells mount a targeted immune response against the invading pathogen or cancerous cells.

Embedding-based methods for protein sequence analysis face challenges in terms of generalizability and complexity (Alberts et al., 2002; Mikolov et al., 2013). However, deep learning methods, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and transformer networks, have shown promise in addressing these challenges (Wang et al., 2019; Liu, 2017; Nambiar et al., 2020). By training on large and diverse datasets, deep learning models can improve generalization performance and make accurate predictions on new sequences. However, the quality and diversity of the training data, the model architecture, and the optimization strategy are still critical factors influencing their performance (Min et al., 2021). Additionally, the interpretability of deep learning models for protein sequence analysis can be a challenge, requiring careful consideration and evaluation. Therefore, while deep learning can improve the generalizability of embedding-based methods, it may not entirely solve the problem.

In this study, we propose a multi-class classification approach using sparse coding for predicting cancer categories from TCR protein sequences. Sparse coding is a machine learning technique that finds a sparse representation of the input data using a dictionary of sparse basis functions (Olshausen and Field, 2004). By representing TCR protein sequences as sparse linear combinations of basis functions, we can capture the inherent structure and variation in the data, which can be used for accurate cancer classification. We preprocess the data by encoding the amino acid sequences as numerical vectors and perform sparse coding using a dictionary of \(k\)-mers, a commonly used basis function for analyzing biological sequences. We evaluate the performance of our approach using various metrics and compare it to other state-of-the-art methods. Our results demonstrate that our method outperforms existing approaches in terms of classification predictive performance.

The proposed approach possesses the following properties: invariance, robustness to noise, interpretability, transferability, and flexibility.

**Invariant property:** The invariant property is achieved in sparse coding by designing the basis vectors to be invariant to certain transformations of the input data. For example, in the case of protein sequences, the basis vectors may be designed to be invariant to amino acid substitutions or deletions.
This means that even if small changes are made to the input sequence, the resulting sparse code will be relatively unchanged.

**Robustness to Noise:** Sparse coding-based embeddings are robust to noise in the input data because they learn a sparse representation that focuses on the most important features while ignoring the noise.

**Interpretability:** Sparse coding-based embeddings are interpretable because they learn a sparse representation that highlights the most important features of the protein sequences. This makes it possible to identify the specific amino acid residues that are most important for the function of a given protein.

**Transferability:** Sparse coding-based embeddings can be easily transferred to related tasks because the learned representation captures the most relevant information about the protein sequences.

**Flexibility:** Sparse coding-based embeddings can be adapted to different types of protein sequence data (e.g., protein families, domains, etc.) by training the model on a diverse set of sequences. This allows the model to learn a more general representation that can be applied to a wide range of problems.

In general, our contributions in this paper are the following:

1. We propose an efficient embedding method based on the idea of sparse coding and the power of \(k\)-mers to embed the T cell sequences into an alignment-free and fixed-length numerical representation.
2. Using domain knowledge about different cancer types, we show that the predictive performance for supervised analysis can be greatly improved when it is combined with the sparse coding-based representation.
3. We show that the proposed embedding method gives near-perfect predictive performance and significantly outperforms all baselines in terms of classification accuracy. This behavior shows that although T cell sequences are very short, they can still be distinguished from each other using an efficient representation.

Given these considerations, we proceed as outlined below. In Section 2, we discuss related work on the problem addressed in this paper. The proposed approach is given in Section 3. The details regarding the dataset and experimental setup are given in Section 4. The results for the proposed and baseline methods are reported in Section 5. Finally, the paper is concluded in Section 6.

## 2 Related Work

TCR sequencing data has been used to identify neoantigens that are specifically recognized by T cells, providing potential targets for immunotherapy (van den Berg et al., 2020). Studies using TCR sequencing have shown that T cells play a prominent role in orchestrating the immune response to viral diseases, including SARS-CoV-2 infection (Gittelman et al., 2022), and that it can be used for measuring the adaptive immune response to SARS-CoV-2 (Elyanow et al., 2021). Furthermore, TCR sequencing data has been used to identify T-cell clones that are associated with tumor regression in response to immunotherapy (Lu et al., 2019). These studies have provided valuable insights into the mechanisms of immunotherapy and potential targets for therapy. Classification methods have been shown to be beneficial for biological data analysis, such as protein sequence analysis in cancer classification (Hoadley et al., 2018).
Support Vector Machines (SVM), Random Forest, and Logistic Regression are classification methods that can group protein sequences with similar features together, allowing researchers to identify subpopulations of protein sequences such as TCR sequences that may be associated with specific cancer types (Lin et al., 2019; Ru et al., 2019; Chen et al., 2020). In one study, the authors used discriminant information of mutated genes in protein amino acid sequences for lung cancer classification (Sattar and Majid, 2019). They described variations in amino acid sequences by statistical and physicochemical properties of amino acids and used these features to develop a lung cancer classification model (LCCM). Another study used SVM for TCR sequence classification to predict disease-free survival and overall survival in breast cancer patients (Bai et al., 2018). Also, K-means classification showed promising results in protein sequence analysis in distinguishing colorectal cancer patients from normal individuals (Bae et al., 2021). Overall, classification methods are valuable tools for analyzing protein sequences for cancer-type detection and can provide important insights into the immune response to cancer. Deep learning methods have been used to predict various protein properties, such as structure, function, stability, and interactions (Wan et al., 2019). Convolutional neural networks (CNNs) have been used to predict protein structure and function from amino acid sequences (Bileschi et al., 2019). Recurrent neural networks (RNNs) have been shown to be effective in modeling the temporal dependencies and long-range interactions in protein sequences (Zhang et al., 2021). A Universal Representation of Amino Acid Sequences (UniRep) is a method for embedding protein sequences using an RNN architecture (Alley et al., 2019). These methods have shown promising results in various biological data analysis tasks, including protein sequence analysis. ProtVec stand for Protein Representation Vector for Protein Function Prediction is another method that proposed a distributed representation of protein sequences using word embedding techniques (Asgari and Mofrad, 2010). The method is a natural language processing technique and works based on training a skip-gram model on amino acid n-grams and using the learned embeddings to predict protein functions (Ostrovsky-Berman et al., 2021). Another embedding method is SeqVec (Sequence-to-Vector) which is for embedding protein sequences using a hierarchical representation that captures both local and global features of the sequence (Heinzinger et al., 2019). ESM (Evolutionary Scale Modeling) is another method that has a transformer-based architecture to encode protein sequences and refers to a protein representation method based on the ESM family of protein models (Lin et al., 2023). ESM is trained on a large dataset of protein sequences and structures to predict the 3D structure of a given protein sequence (Hu et al., 2022). It is worth noting that the effectiveness of deep learning models in protein sequence analysis hinges on several factors, including the quality and diversity of the training data, the architecture of the model, and the optimization strategy used during training. However, as we discussed earlier, in this domain the interpretability of deep learning modes results is challenging, which can make it difficult to diagnose and address issues related to the model's generalizability. 
While deep learning can enhance the generalizability of embedding-based techniques for protein sequence analysis, it may not offer a complete solution and requires thorough evaluation and consideration. ## 3 Proposed Approach Sparse coding is a machine-learning approach that has been used for generating feature representations for protein sequences. In sparse coding, the goal is to represent the input data (in this case, protein sequences) as a sparse linear combination of basis vectors. The basis vectors are learned from the data and are designed to capture the most important features of the input data. We combine the idea of sparse coding with the power of \(k\)-mers to design an efficient embedding representation for the protein sequences. We also use domain knowledge to improve the information richness of the final embeddings. In this section, we start by describing the domain knowledge related to different cancers and how we are using them in our embeddings. We then discuss the proposed algorithm followed by a flow-chart-based discussion of the whole pipeline. ### Incorporating Domain Knowledge In this section, we give some examples of the additional property values for the four cancers we mentioned. There are many factors that can increase the risk of developing cancer including Human leukocyte antigen (HLA) types, gene mutations, clinical characteristics, immunological features, and epigenetic modifications. #### HLA types: HLA genes are a group of genes that encode proteins on the surface of cells. These proteins are responsible for presenting antigens, which are small molecules derived from pathogens or other foreign substances, to immune cells called T cells (Davies and Cohen, 1996). In the case of cancer, mutations in HLA genes can lead to changes in the presentation of antigens to T cells. This can result in the immune system failing to recognize and eliminate cancer cells, which can then grow and spread unchecked (Schaafsma et al., 2021). #### Gene mutations: In addition to HLA types, mutations in other genes can also contribute to the development and progression of cancer. For example, mutations in the tumor suppressor genes TP53 and BRCA1/2 are associated with an increased risk of several types of cancer, including breast, ovarian, and colorectal cancer (Peshkin et al., 2011)(Network et al., 2011)(Visvader, 2011). #### Clinical characteristics: Clinical characteristics such as age, gender, and family history can impact a person's risk of developing cancer (Rex et al., 1993). For example, certain types of cancer are more common in older individuals, while others are more common in younger people. Additionally, some types of cancer may be more prevalent in one gender over another. Furthermore, family history can play a role in cancer risk (Lee et al., 2019). #### Immunological features: Another important factor that can affect cancer outcomes is the immune system. Immune cells play a critical role in identifying and eliminating cancer cells (De Visser et al., 2006). In some cases, however, cancer cells may evade the immune system's surveillance and continue to grow and spread. The presence and activity of immune cells within tumors can impact cancer progression and response to treatment (Gonzalez et al., 2018). ### Epigenetic modifications: Epigenetic modifications are another cancer factor, including DNA methylation, which plays an important role in the development and progression of cancer (Kelly et al., 2010). 
DNA methylation is a process by which a methyl group is added to the cytosine base of DNA, typically at cytosines followed by guanine residues (CpG) sites, resulting in gene silencing or altered gene expression (Maekita et al., 2006). #### 3.1.1 Breast Cancer Breast cancer is the second most common type of cancer worldwide. Many studies have shown a link between HLA and breast cancer (Liang et al., 2021). Specifically, the HLA-A2, HLA-B7, and HLA-DRB1*15:01 alleles have been implicated in breast cancer susceptibility (Anagnostouli et al., 2014). Mutations in several genes are another factor that has been associated with an increased risk of breast cancer, including Breast Cancer 1, (BRCA1), Breast Cancer 2 (BRCA2), tumor suppressor protein 53 (TP53), and PIK3CA (Johnson et al., 2007). Clinical characteristics of breast cancer are another important factor that includes the size and grade (how different the cancer cells look) of the tumor compared to normal cells (Elston and Ellis, 1991). Further, Estrogen receptor-positive breast cancer cells and Progesterone receptor-positive breast cancer cells that have receptors that bind to the hormone estrogen and hormone progesterone respectively, can stimulate the growth of breast cancer (Sommer and Fuqua, 2001). The last clinical characteristic of breast cancer is the presence of Human Epidermal Growth Factor Receptor 2 (HER2) which is a protein that promotes cell growth and is found on the surface of some breast cancer cells (Loibl and Gianni, 2017). One important immunological factor that affects the behavior and progression of the tumor is the infiltration of TILs, which are immune cells that can recognize and attack cancer cells (Salgado et al., 2015). Studies have shown that breast cancer patients with higher levels of TILs tend to have better clinical outcomes (Stanton and Disis, 2016). Another key aspect of breast cancer immunology is the expression of immune checkpoint molecules such as Programmed Death 1 (PD-1), Programmed Death Ligand 1 (PD-L1), and cytotoxic T-lymphocyte-Associated Antigen 4 (CTLA-4) (Carosella et al., 2015). These molecules play a critical role in regulating the immune response and preventing the immune system from attacking healthy cells. DNA methylation of BRCA1 and BRCA2 genes is an epigenetic modification that has shown promise as a potential biomarker for early detection, therapy monitoring, assessment of prognosis, or prediction of therapy response for breast cancer (Martens et al., 2009). #### 3.1.2 Colorectal Cancer Colorectal cancer (CRC) is a type of cancer that starts in the colon or rectum. It often begins as a noncancerous growth called a polyp, which can turn cancerous over time. CRC is a complex disease that involves multiple genetic and environmental factors. The HLA system is known to be involved in the development and progression of CRC. For example, the HLA-A11 and HLA-B44 alleles have been associated with a decreased risk of CRC, while the HLA-DRB1*04 allele has been associated with an increased risk (Dunne et al., 2020). There are several gene mutations that have been linked to the development of colorectal cancer, including Adenomatous Polyposis Coli (APC), Kirsten Rat Sarcoma viral oncogene homolog (KRAS), Tumor Protein p53 (TP53), and B-Raf Proto-oncogene, serine/threonine kinase (BRAF). APC gene mutations are found in about 80% of sporadic (non-hereditary) colorectal cancers (Fodde, 2002). 
The clinical characteristics of CRC include tumor location, stage, differentiation grade, and lymph node involvement. Another key player in the development and progression of CRC is the immune system. Several immunological features have been identified that are associated with CRC, including tumor-infiltrating lymphocytes (TILs) and the expression of immune checkpoint molecules such as PD-1, PD-L1, and CTLA-4 (Rotte, 2019). As an epigenetic modification, DNA methylation of the Cyclin-Dependent Kinase Inhibitor 2A (CDKN2A) gene is a critical epigenetic modification that plays an important role in the development and progression of colorectal cancer (Kim et al., 2010). #### 3.1.3 Liver Cancer Liver cancer, also known as hepatocellular carcinoma, is a type of cancer that starts in the liver cells. Risk factors for liver cancer include chronic infection with hepatitis B or C viruses, heavy alcohol use, and certain genetic conditions (Levrero, 2006). Studies have shown that individuals who have HLA-A2, HLA-B35, and HLA-DRB1*01:01 may be at a higher risk of developing liver cancer, especially if they are also infected with hepatitis B or C viruses (Mosaad, 2015). Also, gene mutations such as TP53, CTNNB1, Axin 1 (AXIN1), and AT-rich Interactive Domain-containing protein 1A (ARID1A) are thought to contribute to the development and progression of liver cancer by promoting abnormal cell growth and division (Ozen et al., 2013). Clinical characteristics that are commonly used to assess the severity and prognosis of liver cancer include tumor size, number of nodules, portal vein invasion, and Alpha-fetoprotein (AFP) levels. For example, larger tumors are generally associated with a worse prognosis and may be more difficult to treat (Makuuchi et al., 1993). Moreover, Liver cancer immunological features that contribute to its progression and resistance to treatment can be the presence of TILs and the expression of immune checkpoint molecules such as PD-1, PD-L1, and CTLA-4, as well as high expression of immune regulatory genes such as Producing the Forkhead Box P3 (FOXP3) and Indoleamine 2,3-dioxygenase (IDO) (Bufe et al., 2022). Research has shown that aberrant DNA methylation of CDKN2A, Methylguanine Methyltransferase (MGMT), and Glutathione S-Transferase Pi 1 (GSTP1) genes which is an epigenetic modification is commonly found in liver cancer and can contribute to the development and progression of the disease (ZHU, 2005). #### 3.1.4 Urothelial Cancer Urothelial cancer, also known as transitional cell carcinoma, is cancer that affects the cells lining the bladder, ureters, and renal pelvis (Perez-Montiel et al., 2006). There is some evidence to suggest that certain HLA types such as HLA-A2 and HLA-B7 may be associated with an increased risk of developing urothelial cancer. Further, there is evidence to suggest that mutations in certain genes, including Fibroblast Growth Factor Receptor 3 (FGFR3), TP53, Retinoblastoma (RB1), and PIK3CA, may play a role in the development of urothelial cancer (Smal et al., 2014). The clinical characteristics of urothelial cancer are determined by several factors, including tumor stage, grade, and location. Immunological features of urothelial cancer can include the presence of TILs and the expression of immune checkpoint molecules such as PD-1, PD-L1, and CTLA-4. These are important immunological features of urothelial cancer that can impact prognosis and treatment options (Carosella et al., 2015). 
DNA methylation of CDKN2A, p16, and Ras Association Domain Family Protein 1A (RASSF1A) genes is an important epigenetic modification that can play a critical role in the development and progression of urothelial cancer. DNA methylation patterns can be used as biomarkers for diagnosis, prognosis, and treatment for urothelial cancer (Florl et al., 1999). ### Algorithmic Pseudocode The code in Algorithm 1 defines a function \(GenerateKmers\) that takes in a sequence \(s\in S\) and an integer k and generates all possible \(k\)-mers of length k from the input sequence. The total number of \(k\)-mers, that can be generated for any sequence s of length \(n\) is the following: \[\text{Total k-mers}=n-k+1 \tag{1}\] For ease of understanding, we use Python-like pseudocode in both Algorithm 1 and Algorithm 2. ``` sequence s, k Output: set of kmers \(\leftarrow\) [] \(\triangleright\) generate all possible k-mers of length k for\(i\)\(\leftarrow\) 0to len(s) - k + 1do kmers.append(s[i:i+k]) end returnkmers ``` **Algorithm 1** GenerateKmers The pseudocode to compute sparse coding + \(k\)-mers-based embedding is given in Algorithm 2. This algorithm takes a set of sequences \(S=\{s_{1},s_{2},\ldots,N\}\) and \(k\), where \(N\) is the number of sequences and \(k\) is the length of \(k\)-mers. The algorithm computed the sparse embedding by iterating over all sequences and computing the set of \(k\)-mers for each sequence. Then it iterates over all the \(k\)-mers, computes the one-hot encoding (OHE) based representation for each amino acid within a \(k\)-mer, and concatenates it with the OHE embeddings of other amino acids within the \(k\)-mer. Finally, the OHE embeddings for all \(k\)-mers within a sequence are concatenated to get the final sparse coding-based representation. To avoid the curse of dimensionality, we used Lasso regression as a dimensionality reduction technique (Ranstam and Cook, 2018). The objective function, which we used in lasso regression is the following: \[\min(\text{Sum of square residuals}+\alpha\times|slope|) \tag{2}\] In the above equation, the \(\alpha\times|slope|\) is referred to as the penalty terms, which reduces the slope of insignificant features to zero. For experiments, we use \(k=4\), which is decided using the standard validation set approach. ``` Input: set of sequences S, \(k\)-mers length k Output: SparseEmbedding \(\triangleright\) unique amino acid characters \(\text{totValues}=21\) \(\triangleright\) unique amino acid characters \(\text{final\_sparse\_embedding}\leftarrow[]\) for\(i\leftarrow\)\(\textbf{0}\) to \(|S|\)do \(\text{seq}\leftarrow\text{S[i]}\) \(\text{kmers}\leftarrow\text{generateKmers}(\text{seq, k})\)\(\triangleright\) generate set of \(k\)-mers \(\text{encoded\_kmers}\leftarrow[]\) forkmerinkmersdo \(\text{encodedVec}\leftarrow\text{np.zeros}(\text{totValues}^{k})\)\(\triangleright\)\(21^{3}=9261\) dimensional vector fori, aain enumerate(kmer)do \(\text{pos}\leftarrow\text{i}\times\text{totValues}^{k-i-1}\) \(\text{encodedVec}[\text{pos:pos+totValues}]\leftarrow\text{OneHotEncoding}(\text{ aa})\) end for \(\text{encoded\_kmers.append}(\text{encodedVec})\) end for final_sparse_embedding.append(np.array(encoded_kmers).flatten()) end for SparseEmbedding\(\leftarrow\text{LassoRegression}(\text{final\_sparse\_embedding})\)\(\triangleright\) dim. 
The same process is repeated when we consider domain knowledge about cancer properties such as HLA types, gene mutations, clinical characteristics, immunological features, and epigenetic modifications. That is, for each property, we generate a one-hot encoding-based representation, where the length of the vector equals the total number of possible property values. All OHE representations of the property values are concatenated in the end to get the final representations for all properties that we consider from domain knowledge. The resultant embeddings from sparse coding (from Algorithm 2) and domain knowledge are concatenated in the end to get the final embedding. These embeddings are used to train machine-learning models for supervised analysis. ### Flow Chart Figure 2 shows a flowchart of the overall embedding generation process. Initially, a TCR protein sequence associated with colorectal cancer is provided (Figure 2 (1-a)). To preserve the ordering information of the sequence, \(k\)-mers are generated for the sequence, as shown in Figure 2 (1-b). Subsequently, a one-hot encoding (OHE) based representation is computed for each amino acid in each generated \(k\)-mer. The resulting one-hot encodings for each \(k\)-mer are combined together, as shown in Figure 2 (1-c to 1-d). Finally, the one-hot encoding vectors of all generated \(k\)-mers are merged to create a final embedding vector (Figure 2 (1-e)). To avoid overfitting, Lasso regression is utilized as a dimensionality reduction technique, and the final merged one-hot vectors are passed through this method to generate the final sparse coding-based representation, as shown in Figure 2 (g). In parallel, the properties of colorectal cancer, such as HLA types and gene mutations, are taken into consideration. For each factor in each category of colorectal cancer properties, a one-hot encoding vector is generated (in Figure 2 (2-a), two categories of cancer properties are shown, but we considered all five cancer property categories). All the one-hot encoding vectors for the factors in each category of cancer properties are combined to create a single vector. Finally, the one-hot encoding vectors for all the cancer properties are merged to create the final embedding vectors, as depicted in Figure 2 (2-a to g). The resulting final embedding vectors are utilized as inputs to the classification methods to detect the related cancer type. This process is repeated for all other cancer types.

Figure 2: Flowchart of TCR sequence analysis

## 4 Experimental Setup In this section, we describe the details related to the dataset, followed by the description of the baseline models, evaluation metrics, and data visualization. The experiments were performed on a system with an Intel(R) Core i5 processor @ 2.10GHz and a Windows 10 64 bit operating system, with 32 GB of memory. The implementation of the model was carried out using Python, and the code (and preprocessed dataset) has been made available online for reproducibility 1. Footnote 1: Accessible in the published version ### Dataset Statistics We extract the TCR sequence data from TCRdb, which is a comprehensive database for \(T\)-cell receptor sequences with a powerful search function (Chen et al., 2021). TCRdb has collected over 130 projects and 8000 samples from public databases, making it a comprehensive TCR sequence resource to date. In this study, we identified and extracted the four most prevalent types of cancer based on their incidence rates. 
The extracted data contains 4 types of cancers within 23331 TCR sequences that are selected randomly by using the Stratified ShuffleSplit method to get the original proportion of the original data. Detailed statistics of the dataset can be seen in Table 1. Our embedding generation method for this dataset uses TCR sequences, along with cancer names and specific cancer properties. Table 2 demonstrates examples of TCR sequences, which are short sequences of amino acids, along with cancer names and gene mutations for specific cancer types. The table includes four different cancer types with their corresponding gene mutations. The "Gene Mutation" column refers to the tumor suppressor gene mutations that can increase the risk of these types of cancer. For instance, mutations in the tumor suppressor genes TP53 and BRCA1/2 are associated with an increased risk of breast and colorectal cancer. \begin{table} \begin{tabular}{c c c c c} \hline \hline & & \multicolumn{3}{c}{Sequence Length Statistics} \\ \cline{3-5} Cancer Name & Number of Sequences & Min. & Max. & Average \\ \hline \hline Breast & 4363 & 8 & 20 & 14.2264 \\ Colorectal & 10947 & 7 & 26 & 14.5573 \\ Liver & 3520 & 8 & 20 & 14.3005 \\ Urothelial & 4501 & 7 & 24 & 14.6538 \\ \hline Total & 23331 & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset Statistics. \begin{table} \begin{tabular}{c c c} \hline \hline Sequence & Cancer Name & Gene Mutation \\ \hline \hline CASSRGQYEQYF & Breast & BRCA1, BRCA2, TP53, PIK3CA \\ CASSLEAGRAYEQYF & Colorectal & APC, KRAS, TP53, BRAF \\ CASSLGSGQETQYF & Liver & TP53, CTNNB1, AXIN1 \\ CASSGQGSSNSPLHF & Urothelial & FGFR3,TP53, RB1, PIK3CA \\ \hline \hline \end{tabular} \end{table} Table 2: An example of sequences for different cancer types, Breast, Colorectal, Liver, and Urothelial along with their respective gene mutations. ### Baseline Models To evaluate our proposed system we used various baselines and the details of each baseline method are as follows, #### 4.2.1 One Hot Encoding (OHE) (Kuzmin et al., 2020) OHE generates binary vectors to represent the sequences in numerical form. For every alphabet in a sequence, it creates binary vectors of size equal to the number of all possible alphabets and assigns a 1 to the corresponding character's location while all others have 0 value. These vectors are concatenated to produce the final numerical embedding of the sequence. #### 4.2.2 Spike2Vec (Ali and Patterson, 2021) Spike2Vec is a method to convert bio-sequences into numerical form for enabling ML-based classification of the sequences. It generates the embedding of a sequence by counting its \(k\)-mers occurrences, as \(k\)-mers are known to retain the ordering information of the sequence. For a sequence, its \(k\)-mers consists of consecutive substrings of length \(k\) driven from the sequence. For instance, a sequence of length \(N\) will have \(N-k+1\)\(k\)-mers. For our experiments, we used \(k=3\). #### 4.2.3 PWM2Vec (Ali et al., 2022) PWM2Vec is another technique to get numerical embeddings of the biological sequences following the \(k\)-mers concept, however, rather than using the \(k\)-mers frequencies, it assigns weights to each amino acid of the \(k\)-mers and employs these weights to generate the embeddings. Assigning weights enables to preserve the position-wise relative importance of the amino acids in a \(k\)-mer, along with retaining the ordering information. The weights are determined using a position weight matrix (PWM). For our experiments, we used \(k=3\). 
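To illustrate the \(k\)-mer counting idea underlying the Spike2Vec baseline above (PWM2Vec additionally weights positions via a position weight matrix), a minimal sketch is shown below; the amino-acid alphabet and the use of \(k=3\) are assumptions, and this is not the baseline authors' implementation.

```
# Hypothetical sketch of a k-mer frequency (Spike2Vec-style) embedding.
from itertools import product
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY*"   # assumed 21-symbol alphabet

def kmer_frequency_vector(seq, k=3):
    """Count occurrences of every possible k-mer over the alphabet."""
    index = {"".join(p): i for i, p in enumerate(product(AMINO_ACIDS, repeat=k))}
    vec = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:
            vec[index[kmer]] += 1
    return vec

v = kmer_frequency_vector("CASSLEAGRAYEQYF", k=3)   # 21^3 = 9261-dimensional count vector
```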
#### 4.2.4 Spaced \(k\)-mers (Singh et al., 2017) The embeddings generated using \(k\)-mers suffer from sparsity and curse-of-dimensionality challenges, which affect the analytical performance negatively. To tackle these issues, the concept of spaced \(k\)-mers is introduced, in which non-contiguous \(k\)-mers are extracted from substrings of length \(g\), known as \(g\)-mers. For a given sequence, spaced \(k\)-mers first compute the \(g\)-mers of the sequence and then extract \(k\)-mers from these \(g\)-mers to form an embedding. We use \(k=4\) and \(g=9\) in our experiments. #### 4.2.5 Auto Encoder (Xie et al., 2016) In this approach, a neural network is used to get the numerical features of the bio-sequence data. It follows the autoencoder architecture, where the encoder module is optimized to get the embeddings. The encoder performs a non-linear transformation of data from space X to a low-dimensional numerical feature space Z. We use a two-layered autoencoder network, along with an ADAM optimizer and an MSE loss function, in our experiments. #### 4.2.6 Wasserstein Distance Guided Representation Learning (WDGRL) (Shen et al., 2018) WDGRL employs a neural network to extract the numerical features by optimizing the Wasserstein distance (WD) between the source and target distributions. It falls into the category of unsupervised domain adaptation techniques. For a given sequence, its one-hot encoded vector is generated, and this vector is passed on to the WDGRL to get the final embeddings. #### 4.2.7 String Kernel (Farhan et al., 2017) The string kernel works by designing a kernel matrix to generate the numerical vectors for the bio-sequences. Given two sequences, it measures the similarity between them by determining the number of matched and mismatched \(k\)-mers. Once the kernel matrix is created, it is passed to kernel PCA to get the principal components-based feature embeddings. #### 4.2.8 Protein Bert (Brandes et al., 2022) This approach employs a pre-trained language model to perform the classification of protein sequences. This model has introduced novel architectural elements to deal with long protein sequences efficiently. It captures both global and local representations within protein sequences. This end-to-end technique uses a transformer to perform classification directly on protein sequences. #### 4.2.9 SeqVec (Heinzinger et al., 2019) The SeqVec (Sequence-to-Vector) technique uses a language model known as ELMo (Embeddings from Language Models) (Sarzynska-Wawer et al., 2021) to obtain the feature vectors of protein sequences. This model is trained using unlabeled data from UniRef50, and it generates embeddings depending on the context of the word. ### Classifiers & Evaluation Metrics Various ML classifiers are used to perform the classification task: Support Vector Machine (SVM), Naive Bayes (NB), Multi-Layer Perceptron (MLP), K-Nearest Neighbors (KNN) with \(k=3\), Random Forest (RF), Logistic Regression (LR), and Decision Tree (DT). We use a 70-30% train-test split based on stratified sampling for classification. From the training set, we use 10% of the data as a validation set for hyperparameter tuning. The results are computed 5 times and the average results are reported. 
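The evaluation protocol described above (70–30% stratified split, seven classifiers, weighted/macro F1, and one-vs-rest ROC-AUC) could be sketched as follows; classifier settings beyond those stated in the text (e.g., \(k=3\) for KNN) fall back to scikit-learn defaults and are assumptions, not the authors' configuration.

```
# Hypothetical sketch of the classification and evaluation protocol (not the authors' code).
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(X, y, seed=0):
    # 70-30 split with stratified sampling, as described in the text.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    models = {
        "SVM": SVC(probability=True),
        "NB": GaussianNB(),
        "MLP": MLPClassifier(max_iter=500),
        "KNN": KNeighborsClassifier(n_neighbors=3),
        "RF": RandomForestClassifier(),
        "LR": LogisticRegression(max_iter=1000),
        "DT": DecisionTreeClassifier(),
    }
    results = {}
    for name, clf in models.items():
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        proba = clf.predict_proba(X_te)
        results[name] = {
            "acc": accuracy_score(y_te, pred),
            "f1_weighted": f1_score(y_te, pred, average="weighted"),
            "f1_macro": f1_score(y_te, pred, average="macro"),
            "roc_auc_ovr": roc_auc_score(y_te, proba, multi_class="ovr", average="weighted"),
        }
    return results
```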
Furthermore, the performance of all the baselines and our proposed method is evaluated using different evaluation metrics: average accuracy, precision, recall, F1 (weighted), F1 (macro), Receiver Operating Characteristic Area Under the Curve (ROC-AUC), and training runtime. Moreover, the one-vs-rest approach is used to convert the binary evaluation metrics to multi-class ones. ### Data Visualization In order to see if there is any natural (hidden) clustering in the data, we use t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten and Hinton, 2008), which maps input sequences to a 2D representation. Particularly in the natural sciences, t-SNE is well-liked because of its ability to handle vast volumes of data and its usage for dimensionality reduction while maintaining the structure of the data. The t-SNE plots for different embedding methods are shown in Figure 3 for One Hot Encoding (OHE), Spike2Vec, PWM2Vec, Spaced \(k\)-mers, Autoencoder, and Sparse Coding, respectively. In general, we can observe that OHE, Spike2Vec, PWM2Vec, and Autoencoder show a smaller set of groups for different cancer types. However, Spaced \(k\)-mers show a very scattered representation of the data. Moreover, the proposed Sparse Coding approach does not show any scattered representation, hence preserving the overall structure better.

Figure 3: t-SNE plots for different feature embedding methods. The figure is best seen in color.

## 5 Results And Discussion This section discusses the classification results achieved by our proposed system and the baselines against various ML models. The results for different evaluation metrics are reported in Table 3. We can observe that our proposed technique (Sparse Coding) outperforms all the feature-engineering-based baseline models (OHE, Spike2Vec, PWM2Vec, Spaced \(k\)-mers) for every evaluation metric except training runtime. For instance, Sparse Coding obtains 48.5% more accuracy than OHE, 55.8% more than Spike2Vec, 58.4% more than PWM2Vec, and 48.6% more than Spaced \(k\)-mers using the LR classifier. These results imply that the feature embeddings generated by the Sparse Coding method retain more information about the biological sequences in terms of classification performance as compared to the feature-engineering-based baselines. Similarly, our technique also performs better than the neural network-based baselines (WDGRL, Autoencoder). For example, the WDGRL approach has 53.03% less accuracy than Sparse Coding, and the Autoencoder achieves 53.83% lower accuracy than Sparse Coding, corresponding to the SVM classifier. This indicates that the feature vectors generated from the neural network-based techniques are not as efficient for performing classification tasks as the Sparse Coding-based feature vectors. Note that SVM yields the best results among all the classifiers corresponding to the WDGRL and Autoencoder methods. Likewise, our method also shows better predictive results as compared to the kernel-based baseline (String Kernel); e.g., it has 53.83% more accuracy than String Kernel using the SVM model. This yet again shows that Sparse Coding creates more efficient numerical vectors for protein sequences. Furthermore, we also compared Sparse Coding performance with the pre-trained models (SeqVec and Protein Bert), and the results illustrate that Sparse Coding outperforms them in terms of all the evaluation metrics except the training runtime. 
This is an indication that the pre-trained models are not able to generalize well to our dataset and therefore exhibit lower predictive performance. As is evident in our dataset, some classes have fewer examples than others, so we are dealing with a class imbalance problem in the data. This can cause problems for classification models because they may become biased towards the majority class, leading to poor performance in predicting the minority class. To confirm whether our classifiers are biased toward a single class, we examined various metrics, such as precision, recall, F1 (weighted and macro), and ROC-AUC, which are sensitive to class imbalance. All of these evaluation metrics showed that the combination of Sparse Coding with all seven classification methods outperformed the baseline methods, which ensures that our results are reliable and can be used to inform future research in the field. We can also notice that Sparse Coding takes a longer training runtime as compared to most of the baseline methods; however, it shows a substantial performance improvement over all the baseline methods in terms of every other evaluation metric. As our aim is to design an efficient feature extraction model that yields higher classification results, Sparse Coding has been shown to achieve that goal. ### Statistical Significance of Results We assessed the statistical significance of our computed results by conducting a Student's t-test and examining the \(p\)-values. To compute the p-values, we used the average and standard deviation (SD) of 5 runs. Our analysis revealed that the majority of the p-values were less than 0.05, which indicates the statistical significance of the results. This is likely due to the low SD values observed in our study.
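A sketch of how such p-values can be computed from the reported mean and SD of the 5 runs (via a t-test on summary statistics) is shown below; the numbers used are placeholders, not values from the paper.

```
# Hypothetical sketch: p-value from mean/SD of 5 runs per method (placeholder numbers).
from scipy.stats import ttest_ind_from_stats

n_runs = 5
mean_ours, sd_ours = 0.95, 0.005     # placeholder summary statistics, not paper values
mean_base, sd_base = 0.51, 0.010

t_stat, p_value = ttest_ind_from_stats(
    mean1=mean_ours, std1=sd_ours, nobs1=n_runs,
    mean2=mean_base, std2=sd_base, nobs2=n_runs,
    equal_var=False)                 # Welch's variant of the t-test
print(p_value < 0.05)                # True if the difference is significant at the 5% level
```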
Table 3: Classification results (average accuracy, precision, recall, weighted F1, macro F1, ROC-AUC, and training runtime in seconds) of the proposed Sparse Coding embedding and all baseline embedding methods (OHE, Spike2Vec, PWM2Vec, Spaced \(k\)-mers, Autoencoder, WDGRL, String Kernel, Protein Bert, and SeqVec) for the seven classifiers (SVM, NB, MLP, KNN, RF, LR, DT).
## 6 Conclusion In this study, we proposed a novel approach for cancer classification using sparse coding and TCR protein sequences. Our approach first transforms TCR sequences into feature vectors using \(k\)-mers and then generates embeddings using sparse coding. We also incorporate domain knowledge about different cancer properties into the final embeddings to enhance their predictive performance. We then trained several multi-class classifiers on the sparse codes generated by the learned dictionary, achieving state-of-the-art performance on a benchmark dataset of TCR sequences compared to several baseline methods. Our method achieves up to 99.9% accuracy and 99.9% F1 and ROC-AUC scores for this data classification. The use of sparse coding allows us to capture the essential features of biological data, which can ultimately aid in the development of more effective cancer therapies. The invariant and robust properties of sparse coding make it a promising technique for the analysis of biological sequences. Further research can investigate the potential of sparse coding for other types of biological data analysis and explore ways to optimize the method for different cancer types and TCR sequences. One potential avenue for future research is to explore the use of alternative basis functions for sparse coding, beyond \(k\)-mers. For example, other types of basis functions, such as position weight matrices or hidden Markov models, could be investigated for their suitability in capturing the structure of TCR protein sequences. Furthermore, it would be interesting to investigate the potential of using TCR protein sequences in combination with other types of data, such as gene expression or epigenetic data, to improve cancer classification and gain a more comprehensive understanding of the underlying biology of cancer. Exploring the scalability of the current method on multi-million sequences is another potential future direction.
2305.04710
ElasticHash: Semantic Image Similarity Search by Deep Hashing with Elasticsearch
We present ElasticHash, a novel approach for high-quality, efficient, and large-scale semantic image similarity search. It is based on a deep hashing model to learn hash codes for fine-grained image similarity search in natural images and a two-stage method for efficiently searching binary hash codes using Elasticsearch (ES). In the first stage, a coarse search based on short hash codes is performed using multi-index hashing and ES terms lookup of neighboring hash codes. In the second stage, the list of results is re-ranked by computing the Hamming distance on long hash codes. We evaluate the retrieval performance of \textit{ElasticHash} for more than 120,000 query images on about 6.9 million database images of the OpenImages data set. The results show that our approach achieves high-quality retrieval results and low search latencies.
Nikolaus Korfhage, Markus Mühling, Bernd Freisleben
2023-05-08T13:50:47Z
http://arxiv.org/abs/2305.04710v1
# _ElasticHash_: Semantic Image Similarity Search by Deep Hashing with Elasticsearch ###### Abstract We present _ElasticHash_, a novel approach for high-quality, efficient, and large-scale semantic image similarity search. It is based on a deep hashing model to learn hash codes for fine-grained image similarity search in natural images and a two-stage method for efficiently searching binary hash codes using Elasticsearch (ES). In the first stage, a coarse search based on short hash codes is performed using multi-index hashing and ES terms lookup of neighboring hash codes. In the second stage, the list of results is re-ranked by computing the Hamming distance on long hash codes. We evaluate the retrieval performance of _ElasticHash_ for more than 120,000 query images on about 6.9 million database images of the OpenImages data set. The results show that our approach achieves high-quality retrieval results and low search latencies. Keywords:deep hashing similarity search Elasticsearch ## 1 Introduction Query-by-content approaches based on feature representations that are learned by deep convolutional neural networks (CNNs) have greatly increased the performance of content-based image retrieval systems. However, state-of-the-art methods in the field of semantic image similarity search suffer from shallow network architectures and small data sets with few image classes in the training as well as in the evaluation phases. Few image classes in the training phase lead to poor generalizability to query images with unknown content in the evaluation phase, i.e., a more fine-grained modeling of the image content is required. Thus, high accuracy for arbitrary search queries, fast response times, and scalability to millions of images are necessary to meet many users' needs both in scientific and commercial applications. In this paper, we present _ElasticHash_, a high-quality, efficient, and scalable approach for semantic image similarity search based on the most popular enterprise full-text search and analytics engine Elasticsearch1 (ES). ES processes queries very fast due to inverted indices based on Lucene2, scales to hundreds of servers, provides load balancing, and supports availability and reliability. Apparently, the properties of ES are not only desirable for full-text search, but also for semantic image similarity search. Furthermore, integrating image similarity search into ES allows multi-modal queries, e.g., combining text and images in a single query. The contributions of the paper are as follows: * We present _ElasticHash_, a novel two-stage approach for semantic image similarity search based on multi-index hashing and integrate it via terms lookup queries into ES. * We present experimental results to show that _ElasticHash_ achieves fast response times and high-quality retrieval results at the same time by leveraging the benefits of short hash codes (better search times) and long hash codes (higher retrieval quality). To the best of our knowledge, we provide the first evaluation of image similarity search for more than 120,000 query images on about 6.9 million database images of the OpenImages data set. * We make our deep image similarity search model, the corresponding ES indices, and a demo application available at [http://github.com/umr-ds/ElasticHash](http://github.com/umr-ds/ElasticHash). The paper is organized as follows. In Section 2, we discuss related work. Section 3 presents _ElasticHash_. 
In Section 4, we evaluate _ElasticHash_ on the OpenImages data set in terms of search latency and retrieval quality. Section 5 concludes the paper and outlines areas for future work. ## 2 Related Work Deep learning, in particular deep CNNs, led to strong improvements in content-based image similarity search. With increasing sizes of the underlying image databases, the need for an efficient similarity search strategy arises. Since high-dimensional CNN features are not suitable to efficiently search in very large databases, large-scale image similarity search systems focus on binary image codes for quantization or compact representations and fast comparisons rather than full CNN features. Recently, several deep hashing methods were introduced [25, 6, 23, 21, 2, 15, 4]. Many of them employ pairwise or triplet losses. While these methods often achieve state-of-the-art performance on their test data sets, they are not necessarily suitable for very large data sets and fine-grained image similarity search based on thousands of classes. Existing deep hashing methods are often trained using small CNNs that usually cannot capture the granularity of very large image data sets. Often, CNN models like AlexNet [12] are used as their backbones, and they are usually evaluated on a small number of image classes [22, 4, 6, 25] (e.g., a sample of 100 ImageNet categories [4], about 80 object categories in COCO [14], NUS-WIDE [5] with 81 concepts, or even only 10 classes as in MNIST or CIFAR). Additionally, the image dimensions in CIFAR and MNIST are very small (32x32 and 28x28, respectively) and thus not sufficient for image similarity search in real-world applications. Many approaches are trained on relatively small training data sets (e.g., 10,000 - 50,000 images [3, 4, 15]). In addition, there are no standardized benchmark data sets, and each publication uses different splits of training, query, and database images, which further complicates a comparison of the methods. Furthermore, training from scratch can be prohibitively expensive for large data sets. We observed that for large data sets with a high number of image classes, a transfer learning approach that combines triplet loss and classification loss leads to good retrieval results. To the best of our knowledge, _ElasticHash_ is the first work that presents a deep hashing model trained and evaluated on a sufficiently large number of image classes. The currently best performing approaches for learning to hash image representations belong either to product quantization (PQ) methods [8, 9] or to methods based on deep hashing (DH) [23, 6]. Amato et al. [1] present PQ approaches that transform neural network features into text formats suitable for being indexed in ES. However, this approach cannot match the retrieval performance of FAISS [9]. Therefore, we focus on deep hashing, which in combination with multi-index hashing (MIH) [17] can circumvent exhaustive search in Hamming space and achieve low search latency while maintaining high retrieval quality. _ElasticHash_ is related to other image similarity search methods integrated into ES. For example, FENSHSES [16] integrates MIH into ES and has a search latency comparable to FAISS. The method works efficiently for small radii of the Hamming ball and relatively small data sets (500,000 images). Small Hamming radii, however, often produce too few neighbors for a query [17]. MIH like FENSHSES is thus not suitable for our scenario of large-scale image retrieval in ES with long binary codes (256 bits), where we require sub-second search latency on a data set of about 7 million images. Furthermore, we address the shortcomings of FENSHSES by using only a subset of bits rather than the whole hash codes to perform our MIH-based coarse search. While other works extend ES for image similarity search by modifying the Lucene library [7], our approach is seamlessly integrated into ES without modifying its code base. ## 3 _ElasticHash_ _ElasticHash_ consists of several components as shown in Figure 1: a deep hashing component, an ES cluster, and a retrieval component. The deep hashing component is realized as a web service using Tensorflow Serving, where the integrated deep hashing model is applied to images and the corresponding binary codes are returned. In the indexing phase, the binary codes are extracted from the database images using the deep hashing component and stored in the ES cluster. After initially building the index, the retrieval component handles incoming query images and visualizes the retrieval results. For this purpose, the binary codes are extracted from the query images using the web service, and the corresponding ES queries are assembled and sent to the ES cluster, which returns the final list of similar images. The entire similarity search system can be easily deployed for production via Docker.

Figure 1: Overview of the workflows for image similarity search in ES.

The deep hashing model is described in more detail in Section 3.1, including the training strategy and network architecture. In Section 3.2, the ES integration is presented. ### Deep Hashing Model We now describe our deep hashing model and how it is used to extract both short and long hash codes. The model training consists of two phases that both use ADAM as the optimization method. First, an ImageNet-pretrained EfficientNetB3 [19] model is trained on a data set with a larger number of classes in order to obtain a more fine-grained embedding. In contrast to the original ImageNet dataset, it contains all ImageNet classes with more than 1000 training images and all classes of the Places2 [24] data set, which results in a total number of 5,390 classes. The model is trained with cross-entropy loss on a Softmax output. After two epochs of training the final layer with a learning rate of 0.01, all layers are trained for another 16 epochs with a learning rate of 0.0001. In the second phase, the classification model's weights are used to initialize the deep hashing model. This model includes an additional 256-bit coding layer before the class output layer, with \(tanh\) activation and 256 outputs. This model is trained for 5 epochs with a learning rate of 0.0001. It is trained on the same data set as before, however, by combining cross-entropy loss on the output and hard triplet loss [18] on the coding layer. With the classification loss \[\mathcal{L}_{c}=-\sum_{i=1}^{K}y_{i}\log p_{i} \tag{1}\] for \(K\) classes with labels \(y_{i}\) and predictions \(p_{i}\), and the triplet loss \[\mathcal{L}_{t}=\max(d(a,p)-d(a,n)+\gamma,0) \tag{2}\] for Euclidean distance \(d\) between the 256-dimensional output of the coding layer for anchor image \(a\) and positive example \(p\) and between \(a\) and a negative example
MIH like FENSHSES is thus not suitable for our scenario of large-scale image retrieval in ES with long binary codes (256 bits), where we require sub-second search latency on a data set of about 7 million images. Furthermore, we solve the shortcomings of FENSHSES using only a subset of bits rather than the whole hash codes to perform our MIH-based coarse search. While other works extend ES for image similarity search by modifying the Lucene library [7], our approach is seamlessly integrated into ES without modifying its code base. ## 3 _ElasticHash_ _ElasticHash_ consists of several components as shown in Figure 1: a deep hashing component, an ES cluster, and a retrieval component. The deep hashing component is realized as a web service using Tensorflow Serving where the integrated deep hashing model is applied to images and the corresponding binary codes are returned. In the first phase, the binary codes are extracted from the database images in the indexing phase using the deep hashing component and stored into the ES cluster. After initially building the index, the retrieval component handles incoming query images and visualizes the retrieval results. For this purpose, the binary codes are extracted from the query images using the web service, the corresponding ES queries are assembled and sent to the ES cluster that returns the final list of similar images. The entire similarity search system can be easily deployed for production via Docker. The deep hashing model is described in more detail in Section 3.1, including the training strategy and network architecture. In Section 3.2, the ES integration is presented. ### Deep Hashing Model We now describe our deep hashing model and how it is used to extract both short and long hash codes. The model training consists of two phases that both use ADAM as the optimization method. First, an ImageNet-pretrained EfficientNetB3 [19] model is trained on a data set with a larger number of classes in order to obtain a more fine-grained embedding. In contrast to the original ImageNet dataset, it contains all ImageNet classes with more than 1000 training images and all classes of the Places2 [24] data set, which results in a total number of classes of 5,390. The model is trained with cross-entropy loss on a Softmax output. After two epochs of training the final layer with a learning rate of 0.01, all layers are trained for another 16 epochs with a learning rate of 0.0001. In the second phase, the classification model's weights are used to initialize the deep hashing model. This model includes an additional 256-bit coding layer before the class output layer with \(tanh\) activation and 256 outputs. This model is trained for 5 epochs with a learning rate of 0.0001. It is trained on the same data set as before, however, by combining cross-entropy loss on the output and hard triplet loss [18] on the coding layer. With the classification loss \[\mathcal{L}_{c}=\sum_{i=1}^{K}y_{i}\log p_{i} \tag{1}\] for \(K\) classes with labels \(y_{i}\) and predictions \(p_{i}\), and the triplet loss \[\mathcal{L}_{t}=max(d(a,p)-d(a,n)+\gamma,0) \tag{2}\] for Euclidean distance \(d\) between the 256-dimensional output of the coding layer for anchor image \(a\) and positive example \(p\) and between \(a\) and a negative example Figure 1: Overview of the workflows for image similarity search in ES. 
\(n\), respectively, the combined loss function is given by: \[\mathcal{L}=\alpha\mathcal{L}_{\rfloor}+\beta\mathcal{L}_{\sqcup}, \tag{3}\] where we set margin \(\gamma=2\) and weights \(\alpha=1\) and \(\beta=5\). We first sample a batch of size \(b=128\) images from a uniform distribution of the classes. This batch is used for both computing the classification loss and generating \(b\) hard triplets. To make the similarity search more robust, we used heavy data augmentation in both phases, which in addition to standard augmentation methods includes inducing JPEG compression artifacts. After training, the model generates 256-bit codes. These codes can be decomposed into four 64-bit codes for fast computation of Hamming distance on long integers. However, using codes of this length on a corpus of about 10 million images is too expensive, even when using multi-index hashing. We therefore extracted 64-bit codes from the original 256-bit codes to perform the filtering on shorter codes and thus smaller Hamming ball radii. To extract the 64 most important bits from the 256-bit codes, we first partition the 256-bit codes into four partitions by applying the Kernighan-Lin algorithm [10] on the bit correlations. From each of the four decorrelated partitions, we then take the first 16 bits to compose 64-bit codes. ### Integration into ES Before describing our image similarity search integration into ES, we will shortly review MIH in Hamming space [17]. The idea of MIH is based on the following observation: for two binary codes \(h=(h^{1},...,h^{m})\) and \(g=(g^{1},...,g^{m})\) where \(m\) is the number of partitions, \(h^{k}\) and \(g^{k}\) are the \(k^{th}\) subcodes and \(H\) is the Hamming norm, the following proposition holds: \[\left\|h-g\right\|_{H}\leq r\Rightarrow\exists k\in\{1,...,m\}\;\left\|h^{k}- g^{k}\right\|_{H}\leq\left\lfloor\frac{r}{m}\right\rfloor \tag{4}\] For the case of 64-bit codes that are decomposed into \(m=4\) subcodes, this means that a code is in a Hamming radius \(r<12\) if at least one of the subcodes has a distance of \(d\leq\left\lfloor\frac{r}{m}\right\rfloor=2\) from the query subcode. The performance of MIH can be increased if the subcodes are maximally independent of each other [20], especially for shorter codes [16]. Thus, after training a deep hashing model, the bit positions should be permutated accordingly. The ES index used for retrieval contains four short codes (f_0 - f_3) and four long subcodes (r_0 - r_3) for each image. The short codes are used for MIH and efficiently utilize the reverse index structure of ES and are thus separated into four subcodes of type "keyword". The long codes are also separated into four subcodes in order to allow fast computation of Hamming distances for values of type long. An additional index is used for fast lookup of neighboring subcodes within the retrieval query. The neighbors index does not change once it has been created and merely serves as an auxiliary index for term queries. It requires pre-computing all nearest neighbors for all possible 16-bit subcodes. Thus, the index of neighbors contains \(2^{16}\) documents. The document id corresponds to the unsigned integer representation of a 16-bit subcode and can therefore accessed within a term query. It contains a single field "nbs" that is assigned to a list of all neighboring 16-bit codes within a Hamming radius of \(d\) of the corresponding query subcode. 
Since this index basically works as a lookup table, it could also be realized somewhere else, i.e., not as an ES index. However, integrating the lookup table this way eliminates the need for external code and enables fast deployment of the whole system. All documents representing all possible 16-bit subcodes are inserted according to the query in Listing 1.1. ``` POST/nbs-d2/doc/<16bitsubcode> {"nbs":[<d2neighborsof16bitsubcode>]} ``` Listing 1.1: Query for adding an entry to neighbor lookup index. In this stage, MIH is realized by querying the additional index of neighbors for fast neighbor lookup. Even with MIH, using the full code length of the deep hashing model trained for 256-bit codes is too expensive for larger databases. We therefore limit the code length for the filtering stage to 64-bit codes. To obtain a sufficiently large set of candidate hash codes in the first stage, we need to search within a Hamming ball with a correspondingly large radius. We set \(d=2\), which will return at least all codes within \(r=11\) of a 64-bit code. In our setting with \(d=2\), this results in 137 neighbors per subcode, i.e., 548 neighbors in total. In ES, we realize MIH by using a terms lookup. It fetches the field values of an existing document and then uses these values as search terms (see Listing 1.1). In contrast to putting all neighbors into the query, using a dedicated index for subcode neighbors has the advantage that the retrieval of neighboring subcodes is carried out within ES. Thus, the query load is small, and no external handling of neighbor lookup is necessary. In the second stage, all codes obtained by MIH are re-ranked according to their Hamming distance to the long code. To compute the Hamming distance of the 256-bit code, the Painless Script in Listing 1.2 is applied to each of the four subcodes. ``` POST_scripts/hd64 {"script":{"lang":"painless", "source":64-Long.bitCount(params.subcode"doc[params.field].value)} } ``` Listing 2.2: Query for adding a Painless Script. The query in Listing 1.3 combines the MIH step as a filter with a term query and the re-ranking step as an application of the painless script from Listing 1.2 on the filtered retrieval list. ## 4 Experimental Evaluation To determine the search latency and retrieval quality of _ElasticHash_, we evaluate three settings for using the binary hash codes generated by our deep hashing model for large-scale image retrieval in ES: (1) short codes, i.e., 64 bits for both filtering and re-ranking, (2) long codes, i.e., 256 bits for both filtering and re-ranking, and (3) _ElasticHash_, i.e., 64 bits for filtering, 256 bits for re-ranking. Settings (1) and (2) are similar to the MIH integration of Mu et al. [16]. To evaluate our approach, we use OpenImages [13], which is currently the largest annotated image data set publicly available. It contains multi-label annotations for 9.2 million Flickr images with 19,794 different labels and is partitioned into training, validation, and test data set. On the average, there are 2.4 positive labels for the training split, while the validation and test splits have 8.8. As our database images we use all training images being available when downloading the data set, i.e., 6,942,071 images in total. To evaluate the retrieval quality, we use all downloaded images from the OpenImages test and validation set as query images (121,588 images in total). From these images, we draw a sample of 10,000 images to measure the search latencies for the three different settings. 
The quality of the retrieval lists is evaluated using the average precision (AP) score, which is the most commonly used quality measure in image retrieval. The AP score is calculated from the list of retrieved images as follows: \[AP(\rho)=\frac{1}{|R\cap\rho^{N}|}\sum_{k=1}^{N}\frac{\left|R\cap\rho^{k} \right|}{k}\psi(i_{k}),\text{with}\quad\psi(i_{k})=\left\{\begin{array}{ll}1& \text{if }i_{k}\in R\\ 0&\text{otherwise}\end{array}\right. \tag{5}\] where \(N\) is the length of the ranked image list, \(\rho^{k}=\{i_{1},i_{2},\ldots,i_{k}\}\) is the ranked image list up to rank \(k\), \(R\) is the set of relevant documents, \(\left|R\cap\rho^{k}\right|\) is the number of relevant images in the top-\(k\) of \(\rho\) and \(\psi(i_{k})\) is the relevance function. We consider an image as relevant, if it has at least one label in common with the query image. To evaluate the overall performance, the mean AP score is calculated by taking the mean value of the AP scores over all queries. ### Results We first evaluate the search latency for the queries. Next, we compare the retrieval quality in terms of AP. The experiments were performed on a system with an Intel Core i7-4771 CPU @ 3.50GHz and 32 GB RAM. Table 1 shows that for a \(k\) up to 250 there is no notable decrease in retrieval quality when employing _ElasticHash_ rather than using the long codes for both stages. Figure 2 shows examples of the top-10 retrieval results for the three settings. It is evident that the retrieval quality of _ElasticHash_ is similar to using \begin{table} \begin{tabular}{|r||r|r|r|r|r|r|r|} \hline top \(k\) & 10 & 25 & 50 & 100 & 250 & 500 & 1000 \\ \hline \hline short & 87.94 & 86.08 & 84.44 & 82.54 & 79.41 & 76.44 & 72.86 \\ \hline long & 95.35 & 94.72 & 94.23 & 93.71 & 92.90 & 92.09 & 90.95 \\ \hline _ElasticHash_ & 95.21 & 94.48 & 93.90 & 93.22 & 92.02 & 90.61 & 88.42 \\ \hline \end{tabular} \end{table} Table 1: Retrieval quality in terms of mean AP for different thresholds of \(k\) on 121,588 query images. \begin{table} \begin{tabular}{|l||r|r|r|r|r|r|r|r|} \hline top \(k\) & & 10 & 25 & 50 & 100 & 250 & 500 & 1000 \\ \hline \hline short & \(\mu\) & 23.09 & 23.98 & 24.45 & 25.58 & 28.38 & 33.09 & 42.20 \\ & \(\sigma\) & 4.74 & 4.65 & 4.70 & 4.72 & 4.86 & 5.20 & 6.07 \\ \hline long & \(\mu\) & 111.83 & 111.58 & 111.99 & 113.05 & 116.77 & 121.98 & 132.60 \\ & \(\sigma\) & 16.50 & 16.58 & 16.72 & 16.54 & 17.04 & 17.13 & 17.99 \\ \hline \multirow{2}{*}{_ElasticHash_} & \(\mu\) & 36.12 & 36.75 & 37.28 & 38.17 & 40.88 & 45.73 & 55.23 \\ & \(\sigma\) & 7.80 & 7.96 & 7.81 & 7.89 & 7.93 & 8.12 & 8.64 \\ \hline \end{tabular} \end{table} Table 2: Search latencies for ES queries (ms) with standard deviation for different thresholds of \(k\) on 10,000 query images. Figure 2: Top-10 retrieval results for (a) short codes, (b) long codes, and (c) _ElasticHash_ for the same query image (first on the left); green: relevant result; red: irrelevant result. long codes, and both are superior to using short codes. On the other hand, Table 2 indicates that the average retrieval time only slightly increases compared to using short codes for both stages. This suggests that _ElasticHash_ is a good trade-off between retrieval quality and search latency. Although our deep hashing model was trained on 5,390 classes, but almost 20,000 classes occur in the validation data set, high AP values are achieved for _ElasticHash_. 
## 5 Conclusion We presented _ElasticHash_, a novel two-stage approach for semantic image similarity search based on deep multi-index hashing and integrated via terms lookup queries into ES. Our experimental results on a large image data set demonstrated that we achieve low search latencies and high-quality retrieval results at the same time by leveraging the benefits of short hash codes (better search times) and long hash codes (higher retrieval quality). There are several areas for future work. For example, it would be interesting to investigate how many classes are necessary to obtain a high degree of generalizability. Furthermore, our loss function could be adapted to multi-label image data. Finally, we plan to extend our approach to achieve intentional image similarity search [11] using ES. ## 6 Acknowledgements This work is financially supported by the German Research Foundation (DFG project number 388420599) and HMWK (LOEWE research cluster Nature 4.0).
2310.11815
Conservative Predictions on Noisy Financial Data
Price movements in financial markets are well known to be very noisy. As a result, even if there are, on occasion, exploitable patterns that could be picked up by machine-learning algorithms, these are obscured by feature and label noise rendering the predictions less useful, and risky in practice. Traditional rule-learning techniques developed for noisy data, such as CN2, would seek only high precision rules and refrain from making predictions where their antecedents did not apply. We apply a similar approach, where a model abstains from making a prediction on data points that it is uncertain on. During training, a cascade of such models are learned in sequence, similar to rule lists, with each model being trained only on data on which the previous model(s) were uncertain. Similar pruning of data takes place at test-time, with (higher accuracy) predictions being made albeit only on a fraction (support) of test-time data. In a financial prediction setting, such an approach allows decisions to be taken only when the ensemble model is confident, thereby reducing risk. We present results using traditional MLPs as well as differentiable decision trees, on synthetic data as well as real financial market data, to predict fixed-term returns using commonly used features. We submit that our approach is likely to result in better overall returns at a lower level of risk. In this context we introduce an utility metric to measure the average gain per trade, as well as the return adjusted for downside risk, both of which are improved significantly by our approach.
Omkar Nabar, Gautam Shroff
2023-10-18T09:14:19Z
http://arxiv.org/abs/2310.11815v1
# Conservative Predictions on Noisy Financial Data ###### Abstract. Price movements in financial markets are well known to be very noisy. As a result, even if there are, on occasion, exploitable patterns that could be picked up by machine-learning algorithms, these are obscured by feature and label noise rendering the predictions less useful, and risky in practice. Traditional rule-learning techniques developed for noisy data, such as CN2, would seek only high precision rules and refrain from making predictions where their antecedents did not apply. We apply a similar approach, where a model abstains from making a prediction on data points that it is uncertain on. During training, a cascade of such models are learned in sequence, similar to rule lists, with each model being trained only on data on which the previous model(s) were uncertain. Similar pruning of data takes place at test-time, with (higher accuracy) predictions being made albeit only on a fraction (support) of test-time data. In a financial prediction setting, such an approach allows decisions to be taken only when the ensemble model is confident, thereby reducing risk. We present results using traditional MLPs as well as differentiable decision trees, on synthetic data as well as real financial market data, to predict fixed-term returns using commonly used features. We submit that our approach is likely to result in better overall returns at a lower level of risk. In this context we introduce an _utility_ metric to measure the average gain per trade, as well as the return adjusted for downside-risk, both of which are improved significantly by our approach. Omkar Nabar and Gautam Shroff. 2023. Conservative Predictions on Noisy Financial Data. In _4th ACM International Conference on AI in Finance (ICAIF '23)_, November 27-29, 2023, Brooklyn, NY, USA. ACM, New York, NY, USA. 9 pages. [https://doi.org/10.1145/3604237.3626859](https://doi.org/10.1145/3604237.3626859)
We report experimental results on real market data as well as synthetic data created to mimic a simplistic mean-reverting market behavior using sine waves. Varying levels of noise are added to this synthetic data to study whether using a cascade of models improves accuracy with acceptable support. Note that in a financial market scenario, it is preferable to have a model with 70% or even 60% accuracy on say 20% or even 10% of the data, on which it makes confident predictions, and abstains on the balance, as this serves to minimise risk in any decisions taken based on the model's predictions. Further, it is more useful to have confident predictions at the extremes of the target attribute's distribution: Such predictions are actionable (as opposed to confidently predicting placid market behaviors), and we define _utility_ of predictions as a metric to measure actionability in the above sense. 
Utility measures the average gain per trade; thus, higher utility model recommends fewer, albeit successful, as well as less risky trades. We also compute a measure of the return adjusted for downside risk. Our experimental results show that using a cascade of models indeed achieves this effect especially on synthetic data, with DDTs resulting in higher support at comparable accuracy to MLPs, and higher utility as well as risk-adjusted return. On real-data, while both models perform relatively poorly with respect to the final support, we observe that their predictions, however rare, nevertheless exhibit higher utility and risk-adjusted return, and are therefore actionable with lower level of risk. Finally we discuss how this behaviour can be exploited in the context of financial trading strategies. ## 2. Methods: Algorithm & Models ### Differentiable Decision Trees (DDT) Decision Trees have long been used to generate interpretable models for tabular datasets and perform well with classification problems. However, despite the numerous algorithms developed to produce near optimal decision trees, they still are highly sensitive to their training set. Deep Neural Networks (DNN) or **Multi-Layer Perceptrons (MLP)**, on the other hand are analytic in nature and thus can be optimised to produce good results on both regression and classification problems. However, this comes at a cost of interpretability. Suarez and Lutsko (Suarez and Lutsko, 2017) proposed a modification to the 'crisp' decision trees formed by the **CART algorithm**. They replaced the hard decisions taken at each node with 'fuzzy' decisions, determined by the sigmoid function. The leaf nodes are modified to represent a probability distribution over the classes. This allows the fuzzy decision tree or 'differentiable decision tree' to be trained with gradient descent like a multi-layer perceptron. Further work by Frosst and Hinton (Frosst and Hinton, 2017) introduced a regularization term to encourage a balanced split at each internal nodes. The differentiable decision tree model used in the experiments is a combination of these ideas from previous works. #### 2.1.1. Forward pass and optimization The decision tree is initialized as a balanced binary tree, with the depth fixed before-hand. The model works in a hierarchical manner, with each inner node \(i\) having its own weight \(w_{i}\) and bias \(b_{i}\) while the leaf nodes have a learned distribution \(Q_{i}\). At each inner node \(i\), the probability of taking the right branch is given by the sigmoid function: \[p_{i}=\sigma(xw_{i}+b_{i})\] To prevent the decisions from being too soft, temperature scaling (\(T\)) is added to the sigmoid function as follows: \[p_{i}=\sigma(T(xw_{i}+b_{i}))\] At each inner node \(i\) the path probability up until that node is given by \(P_{i}\). This value is then multiplied by \(p_{i}\) for the right branch and \(1-p_{i}\) for the left branch which get passed down as the path probability for that node. The final output is calculated as a **weighted sum** of the probability distributions of all the leaves: \[p=\Sigma_{l\in Leaves}P_{l}Q_{l}\] where \(P_{l}\) is the path probability at leaf node \(l\). The **cross-entropy** loss on this output \(p\) is used for the optimization of the model. Applying the softmax function on \(p\) gives us the probability for each class. 
\[P_{c}=\frac{e^{p_{c}}}{\Sigma_{c^{\prime}\in classes}e^{p_{c^{\prime}}}}\] \(P_{c}\) denotes the probability of class \(c\) while \(p_{c}\) is the \(c^{\rm th}\) element of the output \(p\). The cross-entropy is minimized by gradient descent to train the parameters \(w_{i},b_{i}\) as well as the scaling parameter \(T\). #### 2.1.2. Regularization The DDT tends to get stuck in a local minimum quite often, with one path getting much higher probabilities than the others, which results in the model favouring one leaf over the others for most of the predictions. To solve this issue, Frosst and Hinton (Frosst and Hinton, 2017) introduced a regularization term which encourages each inner node to make a more balanced split. At each inner node \(i\) an \(\alpha\) term is calculated: \[\alpha_{i}=\frac{\Sigma_{x}P_{i}(x)p_{i}(x)}{\Sigma_{x}P_{i}(x)}\] The penalty summed over all the inner nodes is: \[P=-\lambda\Sigma_{i\in InnerNodes}\left[0.5\log(\alpha_{i})+0.5\log(1-\alpha_{i})\right]\] where \(\lambda\) varies as \(2^{-d}\) and \(d\) is the depth of the node. ### Cascading Models We introduce a method in which the model makes selective predictions, choosing not to predict on the part of the data where it is not confident about its prediction. We start with a classification model which outputs probabilities of the classes, such as an MLP or DDT. The more imbalanced the probabilities, the more confident the model is in its predictions. We use **Gini Impurity** to calculate the imbalance in the probabilities: \[GiniImpurity=1-\Sigma_{c\in classes}p_{c}^{2}\] The lower the Gini impurity, the greater the imbalance in the probability distribution of the classes. We set a value to be the maximum admissible impurity in the predictions, and the model is confident on the ones with lower impurity. The expectation is that the model would ideally give a much higher accuracy on the subset of the predictions (**support**) on which it is confident. However, in cases of very noisy data, this support can be very low. Therefore we use **Cascading Models**, which help to increase the support of the predictions while maintaining a high test set performance. The data-points on which we encounter high impurity values are **pruned**, i.e. the model doesn't make any prediction on those points. These data-points act as the train or test set for the next model. Finally, the accuracy of each model is calculated only on the data-points for which it makes a prediction. The training procedure is described in Algorithm 1:
```
Require: MaxImpurity ∈ (0,1) and Levels ≥ 0
procedure Cascading(MaxImpurity, Levels, D)
    Unpruned = ∅                 ▷ Contains predictions and the un-pruned data-points
    for level = 1 to Levels do
        D' = ∅                   ▷ Initialize an empty data-set
        model.train(D)           ▷ Train a fresh model on the data-set
        for d ∈ D do
            p = model.forward(d)
            if GiniImpurity(p) ≥ MaxImpurity then
                Append d to D'
            else
                Append (p, d) to Unpruned
            end if
        end for
        D = D'
    end for
end procedure
```
**Algorithm 1** Cascading models using data pruning (training) During testing, given a sequence of models, each data-point is passed through the sequence, with each model either making a prediction or passing the data-point onto the next model. The test accuracy is calculated solely on the un-pruned points. This method can be used with any classification model which can output a probability distribution over the classes. 
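To make the procedure concrete, the following is a minimal sketch of the cascade idea in Python, assuming generic scikit-learn-style classifiers with a `predict_proba` method; the function names and the use of `MLPClassifier` are illustrative assumptions, not the authors' implementation (the inference pass mirrors Algorithm 2, given next).
```python
# A sketch of Gini-gated cascading (cf. Algorithms 1 and 2); illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

def gini_impurity(probs):
    """Gini impurity of a class-probability vector: 1 - sum_c p_c^2."""
    return 1.0 - np.sum(np.asarray(probs) ** 2)

def train_cascade(X, y, make_model, max_impurity=0.5, levels=3):
    """Train up to `levels` models; each keeps the points it is confident on
    (low Gini impurity) and defers the high-impurity points to the next level."""
    models = []
    for _ in range(levels):
        if len(X) == 0:
            break
        model = make_model()
        model.fit(X, y)
        models.append(model)
        impurity = np.array([gini_impurity(p) for p in model.predict_proba(X)])
        pruned = impurity >= max_impurity      # too uncertain: pass to the next model
        X, y = X[pruned], y[pruned]
    return models

def predict_cascade(models, X, max_impurity=0.5):
    """Return (predictions, answered-mask); -1 marks points every model abstains on."""
    preds = np.full(len(X), -1)
    for i, x in enumerate(X):
        for model in models:
            p = model.predict_proba(x.reshape(1, -1))[0]
            if gini_impurity(p) <= max_impurity:
                preds[i] = model.classes_[np.argmax(p)]
                break
    return preds, preds != -1

# Toy usage with five discretised classes, as in the paper's setup
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.integers(0, 5, size=500)
models = train_cascade(X, y, lambda: MLPClassifier((32,), max_iter=300))
preds, answered = predict_cascade(models, X)
print("support:", answered.mean())
```
Accuracy on the un-pruned points would then be computed only over the `answered` mask, exactly as described above.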
For the experiments, DDT (differentiable decision trees) and MLP were used. The inference procedure is described in Algorithm 2:
```
Require: MaxImpurity ∈ (0,1)
procedure Predict(MaxImpurity, Models, D)
    Unpruned = ∅                 ▷ Contains predictions and the un-pruned data-points
    n = length(Models)           ▷ Number of models in the cascade
    for d ∈ D do
        for i = 1 to n do
            model = Models[i]
            p = model.forward(d)
            if GiniImpurity(p) ≤ MaxImpurity then
                Append (p, d) to Unpruned
                break
            end if
        end for
    end for
end procedure
```
**Algorithm 2** Inference using Cascading Models ## 3. Materials: Data & Features ### Market Data We have used equity price data from the Indian equity market captured as five-minute candles in the standard open, high, low, close and volume form. Data for each ticker (stock) is normalised for each day by the close price of the first candle. Thus, each day starts with a normalised close price of one, with remaining values through the day recorded in multiples of this value. Further, volume data for each symbol is normalised by its historical average 5-minute volume, computed on a prior year's data (i.e., discretisation is not based on volume values in the data itself). As a result, the time-series for each symbol-day pair are on the same scale for the purposes of training machine-learning models. The OHLCV values are augmented with day information, captured as the number of days since some earlier reference date, and included as an attribute 'era'. ### Synthetic Data & Noise Synthetic price data is generated as sine waves, with a different frequency and amplitude for each synthetic day, or 'era'. Further, amplitude is also allowed to vary during the course of the day as a function of noise, as described below. Note that each wave starts at a value of one, with a phase of zero or ninety degrees chosen randomly. As noted earlier, base amplitude and frequency are also chosen randomly for each day. Next, varying levels of noise are added to each sine-wave, parameterised by two values, base noise \(\epsilon(t)\sim\mathcal{N}(0,\sigma)\) and peak noise computed as follows: Each sine wave has a number of peaks, say \(K\), \(k=0\ldots K-1\). Points between each zero-crossing are allocated to the intervening peak, defining \(K\) intervals. Noise is added to each peak as peak noise \(\epsilon_{p[k]}\sim\mathcal{N}(0,\sigma_{p})\). 
Thus, each noisy sine wave of amplitude \(A\) and \(K\) peaks is characterized by two additional noise parameters \(\sigma\) and \(\sigma_{p}\), and computed as (for \(t=0\) to 1, with the latter corresponding to the end of the day): \[f(t)=1+\epsilon(t)\pm\Sigma_{k:\,t\in I_{k}}A(1+\epsilon_{p[k]})\sin\!\left(\frac{K+1}{2}\pi t\right)\] for \(K\neq 0\). If \(K=0\), a noisy ascending or descending straight line is computed as: \[f(t)=1+\epsilon(t)\pm At\] Each (noisy) sine wave (i.e., for a day's worth of synthetic prices) is sampled 375 times, corresponding to one-minute price samples. The time-series thus represents a synthetic trading day of about six and a half hours. This series is then converted into 75 five-minute OHLC candles, and similarly normalised with the first close value as described earlier for market data. For synthetic data the volume is fixed at one for every candle. As earlier, these OHLCV values are augmented with day information in the 'era' attribute to distinguish each random selection of frequency and base amplitude. Note that a synthetic price dataset, as described above, is computed for a number of days ('eras') for different noise levels, i.e., each \((\sigma,\sigma_{p})\) combination. ### Features We add technical analysis features such as moving averages of price and volume, relative strength index, moving-average convergence-divergence, and Bollinger bands to the basic OHLCV attributes. In addition to these standard technical indicators we add **logical** and **temporal features** as follows: (i) Differences of selected attributes: open - close, high - low, and the 20- and 10-window moving averages of price. (ii) Slopes of close values using varying window lengths (3, 5, and 10). (iii) 'Change-length' for each of the previous logical features as well as for all the base features and technical indicators: **Change-length** is computed to be the _number_ of consecutive candles, i.e., current and previous five-minute time intervals, in which a feature value is _monotonically increasing or decreasing_. Change-length is positive if the feature has been increasing and negative if it was decreasing. Target values, i.e., values to be predicted by machine-learning, are computed as the 10-candle return, i.e., the difference between the normalised close value at time \(t\) and time \(t+50\) (since each candle represents a five-minute interval). Finally, _all_ the features, i.e., OHLCV values, technical indicators as well as the logical and temporal features above, and target values are discretised into 5 bins. Thresholds for discretisation are computed using a prior year's similar data in a manner that results in equally populated bins, i.e., a percentile-based binning. For synthetic data, an independently sampled set is used to compute discretisation thresholds. (Thus, any potential discretisation-based information leakage is avoided during machine-learning.) ## 4. Experiments & Results The models, standard as well as cascading MLPs and DDTs, are evaluated via four experiments on market as well as synthetic data. The experiments evaluate the models on their test set performance for a given combination of train and test data-sets. The experiments are as follows: 1. Training on data from a set of eras and testing on data from the same set of eras. 2. Training on data from a set of eras but testing on data from a different set of eras. 3. Training and testing on data from a single era. 4. Training on data from a single era but testing on data from a different era. 
In addition to the above experiments, two experiments were carried out solely on the synthetic data to examine the performance of the models when the noise levels in the train and test data-sets are different. 1. Training on clean data and testing on noisy data. 2. Training on noisy data and testing on clean data. K-fold cross-validation is used to determine the optimal hyperparameters for each model. The training set used in the first experiment is used in the cross-validation for both data and the same hyper-parameters are used for all the experiments on that set of data (synthetic and market). #### 4.0.1. Cascading Models Two types of models, DDT and MLP were trained using this method. DDT of depth 6 were trained for 500 epochs and MLP of hidden dimensions (128,64,32) trained for 1000 epochs were used. Additionally, for further investigation on the market data, DDTs of depth 4 trained for 1000 epochs and MLP of hidden layers (256,128,64,32) trained for 1000 epochs were also tested. For all the experiments, the maximum admissible impurity was 0.5 and the number of levels of cascading was 3. Using 3 levels, as observed in the experiments, improved the support by as much as 50% or more compared to using a single level. ### Training on data from a set of eras and testing on data from the same set of eras The training data consists of the first 80% data points from each era in the data-set. The test set consists of the latter 20% from each era. These results are tabulated in Table 1 and Table 2. Unsurprisingly, performances on the test data reduce with an increase in noise. The models are highly reactive to noise addition as observed by the reduction in performance from noise level [0,0.05] to [0.01,0]. However, post this drop, even though the performance decreases, it isn't as sharp, and accuracy curves are smoother. The cascaded models offer tangible improvement in test set performance when compared to their single model counterparts. The Cascading MLPs provide test accuracies that are as good as, if not better, than those of Cascading DDTs albeit with a substantially lower support for both train and test sets. This result, however, is reversed for the market data, where the DDTs provide greater accuracy with lower support. On testing the DDT of depth 4 (trained for 1000 epochs) on the market data, it was found that it gave a slightly better performance with marginally higher support. The extended MLP gives significantly better results than the one used (however the base model performance is lower), albeit with much lesser support. Lastly, the test-time confusion matrices revealed that the predictions made by the cascade of models i.e., predictions on the un-pruned data, lie primarily at the extremes of the target distribution. The implications of this result are further explained in experiment 2 where a similar result was observed. ### Training on data from a set of eras and testing on data from a different set of eras The training set consists of all the data-points from every era in a set. 
The test set also consists of the same, however, the eras are \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Noise & \multicolumn{2}{c|}{BASE MODEL} & \multicolumn{4}{c|}{COMBINATION} \\ \hline & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{Support} \\ \hline & Train & Test & Train & Test & Train & Test \\ \hline [0, 0] & 0.964 & 0.964 & 0.973 & 0.972 & 0.997 & 0.993 \\ \hline [0, 0.05] & 0.926 & 0.866 & 0.957 & 0.897 & 0.96 & 0.954 \\ \hline [0.01, 0] & 0.659 & 0.604 & 0.899 & 0.808 & 0.43 & 0.42 \\ \hline [0.01, 0.05] & 0.71 & 0.676 & 0.858 & 0.775 & 0.677 & 0.69 \\ \hline [0.03, 0] & 0.554 & 0.522 & 0.83 & 0.759 & 0.317 & 0.322 \\ \hline [0.05, 0.05] & 0.551 & 0.485 & 0.818 & 0.726 & 0.296 & 0.288 \\ \hline [0.075, 0.05] & 0.516 & 0.47 & 0.808 & 0.706 & 0.221 & 0.224 \\ \hline [0.075, 0.05] & 0.546 & 0.457 & 0.81 & 0.738 & 0.237 & 0.23 \\ \hline \end{tabular} \end{table} Table 1. Training on data from all eras and testing on data with same set of eras. (synthetic data) (DDT) from a different set and there is no common era between the test and training sets. The number of eras in both sets is equal. The test accuracies and supports, in this case, are presented as line graphs in Figure 5 and Figure 6. The trend is similar to experiment 1, but the accuracy values are lower. Further, the accuracy level is better maintained across noise levels but leads to diminishing support at higher noise levels. On the real data (Tables 3 and 5), the cascading models don't provide as much improvement in accuracy and also end up with significantly lower support. The DDT of depth 4 and the extended MLP provided improved results at the cost of lower support as shown in Tables 4 and 6. ### Training on a single era and testing in the same era The training set in this experiment is the first 80% of the data from a single era and the test set is the remainder of the data. Performances on eras are averaged over the results on a noise level. Due to lower data-set sizes, the models tend to over-fit the data, with test accuracy on the market data coming to be about 30%, which is much lower than the train accuracies. The Cascading models also don't provide much improvement, as the test accuracies on both, synthetic and market data, are almost the same as that of the base model. Due to over-fitting, the model gives low impurities for most of the points as observed by the supports, while pruning, it does not extract the data-points which is more helpful for increasing test accuracy. ### Training on a single era and testing on a different era In this experiment, O(N) combinations of train and test era are used, where N is the number of eras in each of the two sets in experiment 2. The entirety of the data from the eras is used in the train and test sets. While training performances are usually good, the test accuracy usually depends on the similarity in distribution between the data in the two eras. ### Performance in cases where train and test set have different noise levels The following experiments were conducted only on the synthetic data to assess the ability of models to extract patterns from the training data and effectively apply it to data at a different noise level. #### 4.5.1. Training on clean data and testing on noisy data In this case, the models give low accuracies once noise is added and remain approximately the same for all noise levels. The cascading models don't improve upon this either. 
For the MLP, the mean test accuracy is 32.9% and lies in the range (30.8,34.3). For DDTs, the mean is slightly higher at 37.5% and lies in a longer range of (35.1,44). This is due to the model over-fitting on the clean data, which forces it to give predictions with low impurity even with very noisy data which hampers the working of the cascading model. #### 4.5.2. Training on noisy data and testing on clean data For the standard ensemble models and the base models, the training accuracy decreases with increasing noise, and the test accuracy remains significantly lower than the train. However, the cascading models offer significant improvement in this case. Here, the Cascading MLPs give significantly better results than the base models, and better results than the Cascading DDTs, which give a lower accuracy but with higher support. We omit the detailed results for the above cases in Sections 4.3-4.5 due to lack of space. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{BASE MODEL} & \multicolumn{3}{c|}{COMBINATION} \\ \hline \multicolumn{2}{|c|}{Accuracy} & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{Support} \\ \hline Train & Test & Train & Test & Train & Test \\ \hline 0.314 & 0.243 & 0.783 & 0.461 & 0.062 & 0.046 \\ \hline \end{tabular} \end{table} Table 6. Training on market data from all eras and testing on data with different set of eras. (MLP with an extra layer) Figure 5. Test Accuracies for Base and Cascaded Models vs Noise for Experiment 2 on synthetic data \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{BASE MODEL} & \multicolumn{3}{c|}{COMBINATION} \\ \hline \multicolumn{2}{|c|}{Accuracy} & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{Support} \\ \hline Train & Test & Train & Test & Train & Test \\ \hline 0.333 & 0.265 & 0.802 & 0.475 & 0.026 & 0.022 \\ \hline \end{tabular} \end{table} Table 4. Training on market data from all eras and testing on data with different set of eras. (DDT with depth 4 and trained for 1000 epochs) \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{BASE MODEL} & \multicolumn{3}{c|}{COMBINATION} \\ \hline \multicolumn{2}{|c|}{Accuracy} & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{Support} \\ \hline Train & Test & Train & Test & Train & Test \\ \hline 0.314 & 0.245 & 0.802 & 0.475 & 0.026 & 0.022 \\ \hline \end{tabular} \end{table} Table 4. Training on market data from all eras and testing on data with different set of eras. (DDT with depth 4 and trained for 1000 epochs) Figure 6. Test Support vs Noise for Experiment 2 on synthetic data \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{BASE MODEL} & \multicolumn{3}{c|}{COMBINATION} \\ \hline \multicolumn{2}{|c|}{Accuracy} & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{Support} \\ \hline Train & Test & Train & Test & Train & Test \\ \hline 0.331 & 0.245 & 0.802 & 0.475 & 0.026 & 0.022 \\ \hline \end{tabular} \end{table} Table 4. Training on market data from all eras and testing on data with different set of eras. (DDT with depth 4 and trained for 1000 epochs) ### Utility and Risk-adjusted Return The confusion matrices formed by the cascading models in experiments 1 and 2 reveal that a major portion lies at the extremes of the target distribution, 0 and 4. These 2 classes generally determine the trading decision to be made at a certain point in time and are highly actionable, making these predictions very useful. 
We use a metric to compare the average gain per prediction (utility) of the models before and after cascading. This value is calculated using the points where the predictions are either 0 or 4 (the extreme ends of the classes). Since the 'target' refers to stock-specific returns, decisions to trade would mainly take place when a target of 0 or 4 is predicted, so we focus only on these two columns. Each pair of prediction and ground truth has a 'utility' attached to it. If the ground truth is 2, the utility is 0 regardless of the prediction as there would be no profit/loss in any case. For predictions that are 0, if the ground truth is 0, a utility of +2 is awarded, indicating a successful short sale. If the ground truth is 1, a utility of +1 is awarded, since a short sale would still have been profitable, albeit less so. For ground truths of 3 and 4, -1 and -2 are awarded to the model as the prediction would result in a loss. Predictions of class 4 are awarded the reverse payoffs. Then: \[AverageUtility=\frac{(Gain-Loss)}{\#\text{ of predictions of class }0\text{ or }4}\] In addition to this, the downside-risk-adjusted return (DRAR) is also calculated for each model, to measure gain versus risk to capital: \[DRAR=\frac{(Gain-Loss)}{Loss}\] as well as a 'Traded Sharpe ratio', defined as the ratio of utility to the standard deviation of returns _measured across the trades actually recommended, i.e., when class 0 or 4 is predicted_. We motivate the utility, DRAR, and Traded Sharpe metrics further in the next Section. Table 7 reports utility, DRAR, and Traded Sharpe for experiments 1 and 2, and for synthetic and market data. **Cascaded DDT performs better than the base model in all cases, and also better than the cascaded MLP**, except for the case of market data in experiment 1. (This may be because the higher-capacity MLP captures more era-specific patterns.) ## 5. Discussion Consider an algorithmic trading scenario where a machine-learning model is being used to recommend long/short trades based on its prediction of where the market will be after a fixed number of time-steps. Consider two strategies: (a) using a model that recommends only a small number of trades, each with high utility, i.e., expected gain, and (b) using a model that recommends a large number of trades but each having lower utility. While it is conceivable that the expected total return is higher for strategy (b), as can also be seen from the figures in Table 7, in the world of financial trading minimising risk is as important as maximising return. The standard mechanism for measuring risk-adjusted return is the Sharpe ratio, which measures the overall expected gain per time-step adjusted by the standard deviation of returns. However, since our models predict far less often than the base models, we instead measure the downside-risk-adjusted return and Traded Sharpe ratio as defined earlier. As we have already seen, the cascaded models, especially the DDT-based ones, are far superior in terms of these metrics as reported in Table 7. To further motivate these metrics, consider that short-term trading often takes place in a leveraged manner, i.e., using funds borrowed from a broker and only minimal capital of the trader's own; in practice this ratio is at least 5X and can be as high as 10X. A strategy that incurs large draw-downs (i.e., intermediate losses) risks margin calls in the process, requiring the traders to put up extra capital; with high leverage, such calls could even wipe them out. 
On the other hand, strategies that avoid large losses at all costs, including lower overall returns, allow traders to put up more base capital and/or take higher leverage and thus increase their total return, with the risks of serious margin calls that could wipe them out being far reduced. ## 6. Related Work Applying machine learning to financial markets has been and is probably continuing to be widely attempted in practice as well as in research. Apart from application of the standard data-science pipeline using off-the-shelf models, which we do not recount, some novel approaches deserve mention. A number of prior works such as (Bahdan et al., 2017) and (Bahdan et al., 2017) have converted financial data into image data and thereafter applied CNN-based deep-learning models for predicting returns. While the latter directly use images of price-series, the former converts level-2 data, i.e., the limit-order book over a window into an image. Recent works applying machine learning to trading based on price signals alone have also used reinforcement learning: 'Deep Momentum Networks' (Mnih et al., 2015) as well as 'Momentum Transformer' (Mnih et al., 2015) formulate the trading task as one of suggesting the position to take, e.g., 1 for a long (i.e., buy) position, -1 for a short (i.e., sell) position, and 0 for no position. Exiting a buy/sell position takes place when a 0 action follows the previous 1 actions, etc. Neural networks are trained to directly optimize volatility-adjusted expected returns, adjusted for transaction costs, over a trading period. The former paper uses MLPs and LSTMs, while the latter uses transformers. Both works are essentially applying vanilla REINFORCE to the MDP formulation of the trading problem. Deep Reinforcement Learning in Trading (Mnih et al., 2015) uses the same formulation as the above two works, but applies more refined reinforcement learning techniques, e.g., policy-gradient, actor-critic, and deep-Q-learning algorithms. Most recently, meta-reinforcement learning via the RL\({}^{2}\) algorithm (Bahdan et al., 2017) was used along with logical features (including temporal ones) learned automatically via ILP (Mnih et al., 2015). The case for using logical rule-based models for financial data as an antidote to noise was made in (Bahdan et al., 2017). More generally, the original CN2 algorithm (Bahdan et al., 2017) was developed explicitly to deal with noisy data; RIPPER (Bahdan et al., 2017) is a modern version of the same. More modern approaches to combat noise, specifically label noise (i.e., features are assumed relatively noise-free in comparison) are represented by (Kal on train and test datasets. One such recent approach also uses data pruning during training (as opposed to post-training, as in our proposed approach), to improve calibration error as well as improve fraction of high-confidence predictions (Krishnan et al., 2019). ## 7. Conclusions and Future Work We have introduced cascading models as an approach to deal with noisy data and make predictions only on the fraction of data where the model cascade is confident. We have presented results using cascaded differentiable decision trees as well as MLPs on synthetic data with varying levels of noise as well as real market data. We observe that using cascaded models results in more accurate predictions and degrade more gracefully with noise at the expense of support. 
Further, the predictions that _are_ made have high utility for making trading decisions and result in better risk-adjusted returns, as measured by the approximate metrics used. We also find that the cascaded differentiable decision trees perform better than cascaded MLPs, especially in the utility of their predictions and in terms of risk-adjusted returns and Traded Sharpe ratio. The performances of all the models evaluated degrade when tested on distributions different from those they were trained on (represented by different eras in our case). Future work towards addressing this could be to employ meta-learning (Beng et al., 2017) or continual learning (Beng et al., 2017) techniques, combining these with the idea of using cascaded models via data-pruning as we have proposed. Additionally, cascaded models may use train-time pruning approaches such as in (Krishnan et al., 2019).
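For completeness, the sketch below illustrates one way the utility, DRAR, and Traded Sharpe scores of Section 4.6 could be computed from discretised predictions and targets, following the scoring rules stated there; it is an illustrative reading of those rules, not the authors' evaluation code.
```python
# Illustrative computation of utility, DRAR and Traded Sharpe (cf. Section 4.6).
import numpy as np

# Per-trade payoff when class 0 (short) is predicted, indexed by ground truth 0..4;
# a prediction of class 4 (long) is scored with the reverse payoffs.
SHORT_PAYOFF = np.array([+2, +1, 0, -1, -2], dtype=float)

def trade_payoffs(preds, truths):
    """Payoff of every recommended trade, i.e. predictions of class 0 or 4 only."""
    payoffs = []
    for p, t in zip(preds, truths):
        if p == 0:
            payoffs.append(SHORT_PAYOFF[t])
        elif p == 4:
            payoffs.append(-SHORT_PAYOFF[t])
    return np.array(payoffs)

def utility_metrics(preds, truths):
    pay = trade_payoffs(preds, truths)
    if len(pay) == 0:
        return {"utility": 0.0, "drar": 0.0, "traded_sharpe": 0.0}
    gain = pay[pay > 0].sum()
    loss = -pay[pay < 0].sum()                  # loss expressed as a positive number
    utility = (gain - loss) / len(pay)          # average gain per recommended trade
    drar = (gain - loss) / loss if loss > 0 else np.inf
    sharpe = utility / pay.std() if pay.std() > 0 else np.inf
    return {"utility": utility, "drar": drar, "traded_sharpe": sharpe}

# Example: a handful of discretised predictions and ground truths
print(utility_metrics(preds=[0, 0, 4, 2, 4], truths=[0, 3, 4, 4, 1]))
```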
2305.07219
Forward-backward multiplicity and momentum correlations in pp and pPb collisions at the LHC energies
Correlations and fluctuations between produced particles in an ultra-relativistic nuclear collision remain one of the successor to understand the basics of the particle production mechanism. More differential tools like Forward-Backward (FB) correlations between particles from two different phase-space further strengthened our cognizance. We have studied the strength of FB correlations in terms of charged particle multiplicity and summed transverse momentum for proton-proton ($pp$) and proton-lead ($pPb$) collisions at the centre-of-mass energies $\sqrt{s}$ = 13 TeV and $\sqrt{s_{\rm NN}}$ = 5.02 TeV respectively for the EPOS3 simulated events with hydrodynamical evolution of produced particles. Furthermore, the correlation strengths are separately obtained for the particles coming from the core and the corona. FB correlation strengths are examined as a function of psedorapidity gap ($\eta_{gap}$), psedorapidity window-width ($\delta\eta$), centre-of-mass energy ($\sqrt{s}$), minimum transverse momentum ($p_{Tmin}$) and different multiplicity classes following standard kinematical cuts used by the ALICE and the ATLAS experiments at the LHC for all three EPOS3 event samples. EPOS3 model shows a similar trend of FB multiplicity and momentum correlation strengths for both $pp$ \& $pPb$ systems, though the correlation strengths are found to be larger for $pPb$ system than $pp$ system. Moreover, $\delta\eta$-weighted average of FB correlation strengths as a function of different center-of-mass energies for $pp$ collisions delineates a tendency of saturation at very high energies.
Joyati Mondal, Hirak Koley, Somnath Kar, Premomoy Ghosh, Argha Deb, Mitali Mondal
2023-05-12T03:20:17Z
http://arxiv.org/abs/2305.07219v1
Forward-backward multiplicity and momentum correlations in \(pp\) and \(pPb\) collisions at the LHC energies ###### Abstract Correlations and fluctuations between produced particles in an ultra-relativistic nuclear collision remain one of the successor to understand the basics of the particle production mechanism. More differential tools like Forward-Backward (FB) correlations between particles from two different phase-space further strengthened our cognizance. We have studied the strength of FB correlations in terms of charged particle multiplicity and summed transverse momentum for proton-proton (\(pp\)) and proton-lead (\(pPb\)) collisions at the centre-of-mass energies \(\sqrt{s}=13\) TeV and \(\sqrt{s_{\rm NN}}=5.02\) TeV respectively for the EPOS3 simulated events with hydrodynamical evolution of produced particles. Furthermore, the correlation strengths are separately obtained for the particles coming from the core and the corona. FB correlation strengths are examined as a function of pseudorapidity gap (\(\eta_{gap}\)), pseudorapidity window-width (\(\delta\eta\)), centre-of-mass energy (\(\sqrt{s}\)), minimum transverse momentum (\(p_{Tmin}\)) and different multiplicity classes following standard kinematical cuts used by the ALICE and the ATLAS experiments at the LHC for all three EPOS3 event samples. EPOS3 model shows a similar trend of FB multiplicity and momentum correlation strengths for both \(pp\) & \(pPb\) systems, though the correlation strengths are found to be larger for \(pPb\) system than \(pp\) system. Moreover, \(\delta\eta\)-weighted average of FB correlation strengths as a function of different center-of-mass energies for \(pp\) collisions delineates a tendency of saturation at very high energies. ## I Introduction The formation of a hot dense medium of quasi-free quarks and gluons, known as the Quark-Gluon Plasma, in relativistic heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) and at the Large Hadron Collider (LHC) unlatch the path to understand the early Universe as well as to testify the theory of the strong interactions between quarks mediated by gluons [1; 2; 3]. The relativistic viscous hydrodynamic calculations [4; 5] have been found to be most successful in explaining the properties of the produced hot and dense matter in heavy-ion collisions and demonstrate the space-time evolution of the medium through observables such as harmonic flow (\(v_{n}\)) [6; 7; 8] which represent the translation of initial state spatial inhomogeneities to the final state momentum anisotropies. In heavy-ion collisions, the initial energy density fluctuates strongly event to event which leads to the fluctuations of the space-time evolution of the produced medium in the final state. Owing to the viscous hydrodynamics, such density fluctuations are manifested as anisotropic harmonic flow. The large initial state fluctuations effectuate the observed long-range correlations (LRC) between final state particles which are observed as correlation between multiplicity densities in different pseudo-rapidity (\(\eta\))-windows [9]. Another aspect of longitudinal multiplicity correlations is the short-range correlations (SRC), localized over a smaller range of \(\eta\), manifested in the single-jet, mini-jets, resonance decays etc. 
Forward-Backward (FB) correlations between charged particle multiplicities or transverse momenta in two symmetrically separated \(\eta\)-windows about the collision vertex promote us to differentiate between LRC and SRC components [10] and to study the dynamics of particle production mechanism in high-energy hadron or nuclear collisions. Positive FB multiplicity correlation strength was first observed in \(p\bar{p}\) collisions at \(\sqrt{s}=540\) GeV at CERN SPS collider [12] and it has been then rigorously inspected in \(p\bar{p}\) collisions at ISR energies from \(\sqrt{s}=200\) GeV to 900 GeV [13; 14]. Later, FB correlations were also examined in \(pp\) and \(p\bar{p}\) collisions over wider range of collision energies [15; 16; 17]. No significant FB multiplicity correlation has been reported in \(e^{+}e^{-}\) collisions [18] whereas a very weak correlation strength was observed in \(e^{+}e^{-}\) annihilation [19; 20]. CLAN structure had been used for better understanding of observed stronger positive FB correlation in \(pp\) and \(p\bar{p}\) collisions compared to weak correlation in \(e^{+}e^{-}\) annihilation [21]. There are also significant positive FB correlation values reported in different collision systems e.g. \(pp\), \(pA\) and \(AA\) collisions [22; 23; 24; 25; 9]. The ATLAS [24] and the ALICE [25] collaborations at the LHC reported strong FB correlations in \(pp\) collisions at \(\sqrt{s}=0.9\), 2.76 and 7 TeV which contradicts STAR collaboration findings of weak correlation [23]. To explain experimental results, numerous theoret ical models have been put forth. In Dual Parton Model (DPM) [11] a Pomeron exchange between colliding hadrons was initially thought of as an inelastic scattering, later, the idea of many Pomeron exchanges was implemented. DPM projected that particles created in two well selected rapidity intervals would have a significant long-range correlation [26]. The Quark Gluon String Model (QGSM) [27; 28] is similar to DPM with some essential differences. In this model new objects, quark-gluon strings are formed which fragment into hadrons and resonances. The QGSM model successfully described ALICE data and concluded that the multistring processes due to multi-Pomeron exchanges were the main contributor to the FB correlations. The String Fusion Model (SFM) [29; 30] incorporates the string fusion phenomenon and is based on the Parton model of strong interactions. The SFM framework is used to investigate the characteristics of the strongly intensive variable that characterizes correlations between the number of particles in two separated rapidity intervals in \(pp\) interactions at LHC energy [31]. Using various dynamics of the string interaction assumptions, correlations between multiplicities and average transverse momentum are accomplished in the percolating colour strings picture [32; 33]. A string percolation process in \(pp\) collisions was used to study the FB correlations [32] and observed an approximately constant FB correlation over a substantial range of rapidity window. The correlations between mean transverse momentum and multiplicity of charged particles in \(pp\) and \(p\bar{p}\) collisions at \(\sqrt{s}\) from 17 GeV to 7 TeV are studied using a modified multi-pomeron exchange model in which string collectivity has been included in an effective way [34]. In \(pp\), \(pPb\), and \(PbPb\) collisions at LHC energies it is explored using a dipole based Monte Carlo String Fusion Model [35]. 
According to the Color Glass Condensate model [36; 37; 38] long-range rapidity correlations continue throughout the development of the quark-gluon plasma that result from the collision. Using a model that regards strings as independent identical emitters, the forward-backward (FB) charged particle multiplicity correlations between windows spaced apart in rapidity and azimuth are investigated in Ref. [39]. Theoretical background of long-range correlations in heavy-ion collisions has been studied using a Monte-Carlo method in Ref. [40]. The high multiplicity data of \(pp\) and \(pPb\) collisions at the LHC and \(dAu\) collisions at the RHIC show some collective-like features resembling the heavy-ion collision [41; 42; 43; 44; 45; 46; 47; 48]. The two-particle correlation studies in high multiplicity \(pp/pPb\) collisions showed the heavy-ion signature; "the ridge" which triggered many theoretical discussion/interpretations on the origin of it. Recent studies [49; 50] show that the hydrodynamical modelling which remains successful in explaining many features of heavy-ion collisions, is also found to be applicable in small collision systems. We discussed in our previous article [51] on how EPOS3 model [52] with hydrodynamical evolution of produced particles (referred as "with hydro" in rest of the texts) successfully reproduced many features of small collision systems at the LHC energies [53]. Furthermore, we investigated the FB multiplicity and momentum correlations using EPOS3 model by switching ON/OFF hydrodynamical evolution of produced particles which does not affect much the final outcomes. Studies using different models show that the FB correlation strength is found to be decreasing with increasing nuclear size upon the selected \(\eta\)-windows and with increasing collision energy for a fixed collision system [54; 55]. It has also been proposed that instead of the contribution coming from particle production in initial stages of collisions, subsequent stage could modify the behaviour of FB correlations and hadron nucleus collision is expected to give more information on the whole scenario. Keeping this in mind, a comparative analysis of \(pp\) and \(pPb\) systems has been performed to improve our current understanding of FB phenomena. We have inspected FB multiplicity and momentum correlation in \(pp\) and \(pPb\) collisions at the centre-of-mass energy \(\sqrt{s}=13\) TeV and \(\sqrt{s_{\rm NN}}=5.02\) TeV respectively using EPOS3 simulated with hydro events. In order to clarify the functions of each component of the model that contribute to the outcomes, we further divided the model into core and corona approaches. The energy density of the strings in the core is sufficient to activate the hydrodynamically evolving QGP description. In the corona, hadron creation from nucleon-nucleon collisions is viewed as an independent phenomenon [52]. The rest of the paper is organized as follows: the definition of FB multiplicity and momentum correlation coefficients are introduced in Section II. Section III describes briefly the basic principles of EPOS3 model and sample size of generated events. Choice of EPOS3 simulated events and FB windows are illustrated in Section IV. In Section V, the dependence of FB correlation strength by varying \(\eta_{gap}\), \(\delta\eta\), \(p_{T_{min}}\) and different multiplicity classes have been discussed in details. 
More importantly, the behaviour of \(\delta\eta\)-weighted average of FB multiplicity and momentum correlation strengths as a function of centre-of-mass energy using EPOS3 simulated \(pp\) events have been studied for the first time. Finally, conclusions are drawn in Section VI. ## II Forward-backward charged particle multiplicity and momentum correlation coefficient Forward-backward correlations are measured between different observables in separated \(\eta\)-intervals, namely n-n (the correlation between charged-particle multiplicities), \(p_{T}\)-\(p_{T}\) (the correlation between mean or summed transverse momenta of charged particles) and \(p_{T}\)-n (the correlation between mean or summed transverse momenta in one pseudorapidity interval and the multiplicity of charged particles in another pseudorapidity interval) [56]. Two \(\eta\)-intervals, one from the forward and another from the backward window, are symmetrically chosen around the collision centre. The detailed window construction has already been shown and discussed in Ref. [51]. A linear relationship between average charged particle multiplicity in backward window (\(\langle N_{b}\rangle_{N_{f}}\)) and the charged particle multiplicity in the forward window (\(N_{f}\)) has been reported and discussed in Ref. [13; 14]: \[\langle N_{b}\rangle_{N_{f}}=a+b_{corr}(mult)N_{f} \tag{1}\] Here, FB multiplicity correlation strength is characterized by \(b_{corr}(mult)\). Considering the linear relation between (\(\langle N_{b}\rangle_{N_{f}}\)) and \(N_{f}\), \(b_{corr}(mult)\) can be determined using the following Pearson Correlation Coefficient: \[b_{corr}(mult)=\frac{\langle N_{f}N_{b}\rangle-\langle N_{f}\rangle\langle N _{b}\rangle}{\langle N_{f}^{2}\rangle-\langle N_{f}\rangle^{2}} \tag{2}\] The measurement of FB multiplicity correlation coefficient \(b_{corr}(mult)\) is defiled by the so-called "volume fluctuations", which arises due to the event-by-event fluctuations of the number of the participating nucleons [57; 58]. Hence, we have considered an intensive observable like the sum of the absolute transverse momentum of charged-particles within the selected \(\eta\) windows to reduce the contribution of volume fluctuations. Similar to the multiplicity correlation we have estimated FB momentum correlation coefficient, \(b_{corr}(\Sigma p_{T})\) using the following formula: \[b_{corr}(\Sigma p_{T})=\frac{\langle\Sigma p_{T_{f}}\Sigma p_{T_{b}}\rangle- \langle\Sigma p_{T_{f}}\rangle\langle\Sigma p_{T_{b}}\rangle}{\langle(\Sigma p _{T_{f}})^{2}\rangle-\langle\Sigma p_{T_{f}}\rangle^{2}} \tag{3}\] Here, \(\Sigma p_{T_{f}}\) (\(\Sigma p_{T_{b}}\)) denotes the event averaged transverse momenta of charged-particles in forward (backward) window. An intuitive observable, \(\delta\eta\)-weighted average of FB multiplicity and momentum correlation strength has been introduced for the first time and is defined as follows: \[<b_{corr}(mult/\Sigma p_{T})>_{\delta\eta}=\frac{\Sigma_{i}b_{corr}(mult/ \Sigma p_{T})_{i}\delta\eta_{i}}{\Sigma_{i}\delta\eta_{i}} \tag{4}\] The behaviour of such observable has been studied as a function of the centre-of-mass energy in \(pp\) collisions taking into account of our earlier measurements in smaller collision system [51]. ## III Epos3 model The EPOS3 model [52] is based on Gribov-Regge multiple scattering theory. In this approach an individual scattering is labeled as a 'Pomeron'. 
A pomeron creates a parton ladder which may be considered as longitudinal flux tube carrying the transverse momentum from the initial hard scatterings [59]. In a collision, many elementary parton-parton hard scatterings form a large number of flux tubes that expand and are fragmented into string segments. Higher string density forms the so-called "core" which undergoes a three-dimensional (3D)+1 viscous hydrodynamical evolution expecting no jet parton escape and hadronizes via usual Cooper-Frye formalism at a "hadronization temperature", \(\rm T_{H}\). Another part of lower string density forms the so-called "corona" where we can expect the escape of jet partons. Such string segments having high transverse momentum that are close to the surface leave the bulk matter and hadronize (including jet hadrons) via the Schwinger mechanism. The phase transition from parton to hadron follows a realistic equation of state which is compatible with the lattice gauge results with subsequent hadronic cascade using UrQMD model [60]. Using EPOS3 model, we have generated 3 million minimum-bias \(pp\) events at \(\sqrt{s}=13\) TeV and \(pPb\) events at \(\sqrt{s_{\rm NN}}=5.02\) TeV. On top of the minimum bias analysis, a more differential approach has been introduced by taking particles coming from either core or corona and we have varied certain model parameter in order Figure 1: Charged-particle invariant yields as a function of \(p_{\rm T}\) in \(pp\) collisions at \(\sqrt{s}=13\) TeV (top) and in \(pPb\) collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV (bottom) compared to ALICE data [61; 62]. to achieve it. We have measured FB multiplicity and momentum correlations for the EPOS3 generated events with all charged particles and particles from core and corona. To validate the generated event samples of different centre-of-mass energies, we have compared minimum bias EPOS3 simulated events with ALICE data [61; 62; 63]. Fig. 1 shows that the invariant yields of charged-particles as a function of \(p_{\rm T}\) as measured by ALICE experiment in \(pp\) collisions at \(\sqrt{s}\) = 13 TeV (top) and in \(pPb\) collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV (bottom) have been successfully reproduced by the EPOS3 simulated events at the chosen energies. Average pseudorapidity density and pseudorapidity density of charged-particles has been plotted in Fig. 2 for EPOS3 simulated \(pp\) events at \(\sqrt{s}\) = 13 TeV (top) and pPb events at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV (bottom) respectively. Compared results reflect that EPOS3 simulated events agreed well with the experimental measurements in the chosen kinematic intervals [61; 63]. ## IV Events & FB window selection Events are selected with a minimum of two charged particles in the chosen kinematic interval. The whole analyses for the \(pp\) and \(pPb\) events have been carried out following ALICE [25] and ATLAS [24] kinematics. By the word ALICE kinematics we mean the cuts on the kinematic variables \(p_{\rm T}\) and \(\eta\) as \(0.3<p_{\rm T}<\)1.5 GeV/c and \(|\eta|<0.8\) respectively. Similarly for the ATLAS kinematics we use \(p_{\rm T}>0.1\) GeV/c and \(|\eta|<2.5\). Only caveat is those cuts were used for lower centre-of-mass energies for \(pp\) collisions by the ALICE and the ATLAS Collaborations. ## V Results & Discussions We have calculated and plotted in Fig. 
3 the average backward multiplicity (\(\langle N_{b}\rangle_{N_{f}}\)) for each fixed value of forward multiplicity \(N_{f}\) for window width \(\delta\eta=0.6\) and \(\eta_{gap}=0.4\) for EPOS3 simulated \(pp\) events at \(\sqrt{s}\) = 13 TeV (left panel) and \(pPb\) events at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV (right panel). From the scatter plots we can see a linear relationship between \(\langle N_{b}\rangle_{N_{f}}\) and \(N_{f}\). A linear fit has been displayed by the red lines in both panels. Slope of these lines actually quantify the correlation strength between multiplicities in F & B windows. We have, therefore, applied Pearson Correlation Coefficient formula de Figure 3: Variation of \(\langle N_{b}\rangle_{N_{f}}\) with \(N_{f}\) for FB window width \(\delta\eta\) = 0.6 and \(\eta_{gap}=0.4\) for EPOS3 generated \(pp\) events at \(\sqrt{s}\) = 13 TeV (left panel) and \(pPb\) events (right panel) at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV. A linear fit has been performed (red line) for both the systems. Figure 2: (Average) Pseudorapidity density of charged particles in (\(pp\) collisions at \(\sqrt{s}\) = 13 TeV (top)) \(pPb\) collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV (bottom) compared to ALICE data [61; 63]. scribed in Eq. 2 to compute FB multiplicity correlation strengths. To eliminate the incorporated volume fluctuations in FB multiplicity correlation, we have evaluated FB momentum correlation coefficient, \(b_{corr}(\Sigma p_{\rm T})\) using Eq. 3 for the EPOS3 simulated \(pp\) events at \(\sqrt{s}=13\) TeV and \(pPb\) events at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV. ### Dependence on the gap between FB windows (\(\eta_{gap}\)) The variation of FB multiplicity and momentum correlation coefficients with \(\eta_{gap}\) for four different window widths (\(\delta\eta\) = 0.2, 0.4, 0.6 & 0.8) have been shown in Fig. 4 and Fig. 5 respectively for EPOS3 simulated all charged particles, only-core and only-corona particles in \(pp\) collisions at \(\sqrt{s}=13\) TeV (top panel) and \(pPb\) collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV (bottom panel). We have compared all three cases for window widths \(\delta\eta\) = 0.2 & 0.4 (left panel) and \(\delta\eta\) = 0.6 & 0.8 (right panel) for both \(pp\) and \(pPb\) events. We observed that for a fixed window width, the FB correlation strengths decrease slowly with the increasing \(\eta_{gap}\) and increase with increasing \(\delta\eta\) at a fixed \(\eta_{gap}\) which resemble the trend at lower centre-of-mass energies in \(pp\) collisions as described in our earlier study [51]. Quantitatively we found that the correlation strengths are larger for \(pPb\) collisions than \(pp\) collisions for all the chosen \(\eta_{gap}\) and \(\delta\eta\) combinations. The asymmetric nature of \(pPb\) collisions where the proton collides with a nucleus having a larger number of sources compared to \(pp\) collisions, results in a larger initial-state parton density in the lead nucleus compared to the proton. Such asymmetric collisions could have larger fluctuations in the final state which may contribute to stronger forward-backward correlation strength in \(pPb\) collisions w.r.t \(pp\) collisions. Interestingly, we have noticed that for \(pPb\) events the correlation strengths decrease faster with increasing \(\eta_{gap}\) as compared to \(pp\) events. The SRC component depends strongly on collision system and it is asymmetric between forward and backward windows in \(pPb\) collisions while the LRC component is nearly symmetric in all collision systems [9]. 
Thus, the faster dilution of the SRC component at large \(\eta_{gap}\) between the forward and backward regions could be the reason behind the faster decrease of the correlation strength in asymmetric \(pPb\) collisions w.r.t symmetric \(pp\) collisions. The dominance of the correlation strength due to only-core particles over only-corona particles is clearly visible. Since the particles from the corona are mostly dominated by jets or minijet partons, the paucity of the LRC component results in a smaller correlation strength for particles from the corona than from the core at large \(\eta_{\rm gap}\) for both collision systems.

Figure 4: FB multiplicity correlation strength, \(b_{corr}\)(mult), as a function of \(\eta_{gap}\) for \(\delta\eta\) = 0.2, 0.4, 0.6 and 0.8 for EPOS3 generated all charged particles and particles from core & corona in \(pp\) collisions at \(\sqrt{s}=13\) TeV (top panel) and \(pPb\) collisions (bottom panel) at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV.

Figure 5: Forward-backward summed-\(p_{\rm T}\) correlation as a function of \(\eta_{gap}\) for four window widths \(\delta\eta\) = 0.2, 0.4, 0.6 and 0.8 for EPOS3 generated all charged particles and particles from core & corona in \(pp\) and \(pPb\) collisions at \(\sqrt{s}\) = 13 TeV (top panel) and \(\sqrt{s_{\rm NN}}\) = 5.02 TeV (bottom panel) respectively.

### Dependence on the width of FB windows (\(\delta\eta\))

The \(\delta\eta\)-dependence of the FB multiplicity and momentum correlation coefficients for contiguous (\(\eta_{gap}\) = 0), symmetrical windows with respect to the collision centre is shown in Fig. 6 and Fig. 7 for \(pp\) collisions at \(\sqrt{s}=13\) TeV and \(pPb\) collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV using EPOS3 events simulated with hydro. We have studied and presented the multiplicity and momentum correlation coefficients for the EPOS3 generated event samples with all charged particles and particles from core & corona. We observed that the only-core and only-corona cases underestimate both correlation strengths for the minimum-bias event sample. The correlation coefficients increase non-linearly with \(\delta\eta\) for both \(pp\) and \(pPb\) events, though the values are higher for \(pPb\) events, which may be due to the reasons described in V.1. The results are qualitatively similar to our earlier measurements [51], featuring the dominance of the SRC component in the non-linear growth of the FB correlation strengths. As discussed and explained in V.1, here also we have found that the correlation strengths w.r.t \(\delta\eta\) are larger for only-core particles than for only-corona particles.

### Dependence on collision energy (\(\sqrt{s}\))

We have examined the behaviour of the \(\delta\eta\)-weighted average of the FB multiplicity and momentum correlation strengths as a function of centre-of-mass energy using EPOS3 simulated \(pp\) event samples. Such an unconventional measurement is still not available experimentally. Hence, to compare our findings in a systematic way, we have evaluated the \(\delta\eta\)-weighted average for the available experimental \(b_{corr}(mult)\) and \(b_{corr}(\Sigma p_{\rm T})\) values of the ALICE [25] and ATLAS [24] data. Fig. 8 and Fig. 9 show the \(\delta\eta\)-weighted average of the FB multiplicity and momentum correlations as a function of centre-of-mass energy following ALICE and ATLAS kinematics. In the left panels of Fig. 8 and Fig. 9, we observed that the \(\langle b_{corr}(mult)\rangle_{\delta\eta}\) and \(\langle b_{corr}(\Sigma p_{\rm T})\rangle_{\delta\eta}\) values initially increase rapidly with increasing \(\sqrt{s}\) up to 2.76 TeV and then grow moderately up to \(\sqrt{s}=7\) TeV for both the EPOS3 simulated events and the experimental data. For comparison, we have incorporated the results from other available theoretical models, which also show a similar trend for the \(\delta\eta\)-weighted average of the FB correlations [28; 39]. It is also very interesting to find that, for EPOS3 simulated events, the \(\delta\eta\)-weighted averages of the FB correlation strengths as a function of \(\sqrt{s}\) lean towards saturation approximately beyond \(\sqrt{s}=7\) TeV, where no experimental data for such a measurement are available. The results from the QGSM model, however, do not show such a strong saturation effect at higher energy. To gain a more nuanced understanding of the fascinating behaviour observed in the \(\delta\eta\)-weighted average of the FB correlation strengths, we computed this metric separately for the particles coming either from the core or from the corona. We have found that the observed saturation at higher centre-of-mass energy is predominantly due to the saturation for the only-core particles, whereas the only-corona particles show an increasing trend.

Figure 8: Comparison of the \(\delta\eta\)-weighted average FB multiplicity correlations, \(\langle b_{corr}(mult)\rangle_{\delta\eta}\), as a function of \(\sqrt{s}\) for the EPOS3 simulated \(pp\) events (all charged particles, core and corona) with derived ALICE (left), ATLAS (right) data and theoretical models (left).

Figure 6: FB multiplicity correlation strength, \(b_{corr}\)(mult), as a function of \(\delta\eta\) for \(\eta_{gap}=0\) using EPOS3 generated \(pp\) and \(pPb\) events at \(\sqrt{s}=13\) TeV and \(\sqrt{s_{\rm NN}}=5.02\) TeV respectively for all charged particles and particles from core & corona.

In the EPOS3 model, the corona is dominated by high-\(p_{T}\) particles, whereas the core contains particles which undergo 3+1D hydro, mimicking the formation of a QGP-like medium [64]. The exchange of multiple pomerons between the colliding particles [65] remains the primary source of fluctuations producing multiple particles in a correlated way. The multiplicity of produced particles and their transverse momenta are thus very much influenced by the initial conditions of a collision and, in particular, are much more apparent in small collision systems like \(pp\) or \(pA\), where final-state effects are smaller. In the CGC framework [66; 67] it has been argued that at small \(x\sim p_{T}/\sqrt{s}\) the gluon density first grows and then saturates with increasing energy, which results in a moderate increase of the charged-particle multiplicity density in \(pp\) or \(pA\) collisions as the beam energy increases [68]. Since an observable like \(b_{corr}\) (mult or \(\Sigma p_{\rm T}\)) is an extensive quantity, it might show such a saturation effect mainly because the fluctuations associated with the number of sources get saturated [65]. Hence, the EPOS3 model based FB correlation analysis at higher centre-of-mass energy encourages further experimental study.

### Dependence on the minimum transverse momentum (\(p_{\rm T_{min}}\))

The variation of the FB multiplicity and momentum correlations with the minimum transverse momentum of charged particles (\(p_{\rm T_{min}}\)) is shown in Fig. 10 and Fig. 11 for both EPOS3 generated with hydro \(pp\) events at \(\sqrt{s}\) = 13 TeV and \(pPb\) events at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV following ATLAS kinematics [24]. We calculated the values of \(b_{corr}\) at seven different levels of minimum transverse momentum (\(p_{\rm T_{min}}\)), specifically at \(p_{\rm T_{min}}\) = 0.1, 0.2, 0.3, 0.5, 1.0, 1.5, and 2.0 GeV. These calculations were performed for symmetric forward-backward (FB) windows without any separation. The multiplicity and momentum correlation strengths decrease rapidly with the increase of \(p_{\rm T_{min}}\) for both \(pp\) and \(pPb\) events, confirming the similar trend found at lower centre-of-mass energies [51]. With the increase of \(p_{\rm T_{min}}\), the domination of the LRC component decreases, resulting in a weaker FB multiplicity correlation strength and suggesting the transition from soft to hard processes with increasing transverse momentum of the produced particles. As discussed in V.1, here also the correlation strengths are found to be greater for \(pPb\) collisions than for \(pp\) collisions.

Figure 11: Forward-backward summed-\(p_{T}\) correlation as a function of \(p_{\rm T_{min}}\) for window width \(\delta\eta=0.5\) for EPOS3 generated \(pp\) and \(pPb\) events.

Figure 9: Comparison of the \(\delta\eta\)-weighted average FB summed-\(p_{T}\) correlations, \(\langle b_{corr}(\Sigma p_{T})\rangle_{\delta\eta}\), as a function of \(\sqrt{s}\) for EPOS3 simulated \(pp\) events (all charged particles, core and corona) following ALICE kinematics (left) and with derived ATLAS (right) data.

### Multiplicity dependent \(b_{corr}(\Sigma p_{\rm T})\)

In addition to the study using minimum-bias EPOS3 events, we have also carried out a multiplicity-dependent summed-\(p_{\rm T}\) FB correlation study. Fig. 12 and Fig. 13 show the FB momentum correlation as a function of \(\eta_{gap}\) for \(\delta\eta\) = 0.5 in three multiplicity ranges estimated following ATLAS kinematics [24], using EPOS3 simulated \(pp\) events at \(\sqrt{s}\) = 13 TeV and \(pPb\) events at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV. Red, blue and green points correspond to non-overlapping multiplicity regions: Low- (2 \(<\) N\({}_{\rm ch}\)\(<\) 30), Mid- (30 \(<\) N\({}_{\rm ch}\)\(<\) 60) and High-multiplicity (60 \(<\) N\({}_{\rm ch}\)\(<\) 90) regions, respectively. We have kept the multiplicity ranges the same for both \(pp\) and \(pPb\) events for better understanding. We can see a similar decrease of the correlation strength with increasing \(\eta_{gap}\), and for a fixed \(\eta_{gap}\) value \(b_{corr}(\Sigma p_{\rm T})\) also decreases with increasing multiplicity, which may be due to the fact that the fusion of strings into the core in high-multiplicity EPOS3 events lowers the FB correlation strength [51].

Figure 12: Forward-backward summed-\(p_{\rm T}\) correlations as a function of \(\eta_{gap}\) for window width \(\delta\eta\) = 0.5 in different multiplicity ranges for EPOS3 simulated \(pp\) events at \(\sqrt{s}\) = 13 TeV.

Figure 13: Forward-backward summed-\(p_{\rm T}\) correlations as a function of \(\eta_{gap}\) for window width \(\delta\eta\) = 0.5 in different multiplicity ranges for EPOS3 simulated \(pPb\) events at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV.

## VI Summary and Conclusions

We have performed a rigorous study of FB multiplicity and momentum correlations in \(pp\) and \(pPb\) collisions at centre-of-mass energies \(\sqrt{s}\) = 13 TeV and \(\sqrt{s_{\rm NN}}\) = 5.02 TeV, respectively, at the LHC using EPOS3 simulated events with all charged particles and particles from core & corona. We have investigated the behaviour of the FB correlation strengths as a function of the gap between the two pseudo-rapidity windows (\(\eta_{gap}\)), the window width (\(\delta\eta\)), the minimum transverse momentum (\(p_{Tmin}\)) and different multiplicity classes. Many LHC findings confirm a strong analogy between the small collision systems \(pp\) and \(pPb\), particularly in terms of particle correlations and fluctuations [42; 43; 69]. We have also noticed here that the general trends of both FB correlation strengths are similar for the EPOS3 simulated with hydro \(pp\) and \(pPb\) collisions, irrespective of the energy difference. We have found the following results:

* The linear relationship between \(\langle N_{b}\rangle_{N_{f}}\) and \(N_{f}\) is verified for EPOS3 generated \(pp\) and \(pPb\) events and is steeper for the \(pPb\) events than for the \(pp\) events.
* Our model based study fairly describes two general features of both FB correlation coefficients in the two collision systems (\(pp\) and \(pPb\)): they decrease slowly with the increase of the gap between the two selected \(\eta\)-windows irrespective of the window width, and they increase non-linearly with the window width for a fixed separation between the \(\eta\)-windows.
* A rapid decrease in the values of both \(b_{corr}(mult)\) and \(b_{corr}(\sum p_{T})\) with a small increase of the minimum transverse momentum \(p_{Tmin}\) has been observed for both \(pp\) and \(pPb\) events.
* The multiplicity dependent summed-\(p_{T}\) correlation study also reveals that with increasing multiplicity the value of \(b_{corr}(\sum p_{T})\) decreases at a fixed \(\eta_{gap}\) for both \(pp\) and \(pPb\) events.

All these facts resemble our previous assessment of FB correlations in \(pp\) collisions at three comparatively lower centre-of-mass energies, \(\sqrt{s}\) = 0.9, 2.76 & 7 TeV [51]. The observed larger FB multiplicity and momentum correlation strength in \(pPb\) collisions w.r.t \(pp\) collisions could be due to the fact that the initial asymmetry of the \(pPb\) collisions and the larger system size w.r.t the \(pp\) collisions may enhance the event-by-event fluctuations, which in turn may increase the FB correlation. The most interesting result from our present study is the behaviour of the \(\delta\eta\)-weighted average of the FB multiplicity and momentum correlation strengths as a function of centre-of-mass energy (\(\sqrt{s}\) = 0.9, 2.76, 7 & 13 TeV) using EPOS3 simulated \(pp\) events. The increase of both correlation strengths (\(b_{corr}(mult)\) & \(b_{corr}(\Sigma p_{\rm T})\)) with \(\sqrt{s}\) is clearly visible and, interestingly, we observed that they tend to saturate at very high energy. We have incorporated different theoretical model studies to compare our results, and the correlation strengths are found to follow a similar trend to the EPOS3 simulated events. To investigate the possible reason behind such an interesting observation, we have calculated the correlation strengths for the EPOS3 generated events with all charged particles and particles from core & corona. We found that for the particles from the corona the \(\delta\eta\)-weighted average of the FB correlations does not show any saturation trend, whereas the particles from the core clearly exhibit it. We may infer that this may be due to the dominance of the gluon-saturation effect at such higher centre-of-mass energies.
Our analyses uncover the fact that the FB multiplicity and momentum correlation as a function of \(\eta_{gap}\), \(\delta\eta\), \(p_{Tmin}\) and different multiplicity classes in EPOS3 simulated \(pPb\) events qualitatively resemble the outcome of EPOS3 simulated \(pp\) events though the values of correlation coefficients are higher for \(pPb\) events than those for \(pp\) events. Overall, we may conclude that the systematic study on FB correlations in different dimensions using the hybrid Monte-Carlo model EPOS3 [52] adds more valuable information to understand the existing experimental results as well as encourages more experimental measurements at higher centre-of-mass energies and in different collision systems. ## Acknowledgements The authors are thankful to Dr. Klaus Warner for providing us with the EPOS3 model. The authors are thankful to the members of the grid computing team of VECC and cluster computing team of dept. of Physics, Jadavpur University for providing uninterrupted facility for event generation and analyses. We also gratefully acknowledge the financial help from the DST-GOI under the scheme "Mega facilities in basic science research" [Sanction Order No. SR/MF/PS-02/2021-Jadavpur (E-37128) dated 31.12.2021]. One of the authors (JM) acknowledges DST-INDIA for providing fellowship under INSPIRE Scheme.
2308.10941
Stellar Wind Yields of Very Massive Stars
The most massive stars provide an essential source of recycled material for young clusters and galaxies. While very massive stars (VMS, M>100M) are relatively rare compared to O stars, they lose disproportionately large amounts of mass already from the onset of core H-burning. VMS have optically thick winds with elevated mass-loss rates in comparison to optically thin standard O-star winds. We compute wind yields and ejected masses on the main sequence, and we compare enhanced mass-loss rates to standard ones. We calculate solar metallicity wind yields from MESA stellar evolution models in the range 50 - 500M, including a large nuclear network of 92 isotopes, investigating not only the CNO-cycle, but also the Ne-Na and Mg-Al cycles. VMS with enhanced winds eject 5-10 times more H-processed elements (N, Ne, Na, Al) on the main sequence in comparison to standard winds, with possible consequences for observed anti-correlations, such as C-N and Na-O, in globular clusters. We find that for VMS 95% of the total wind yields is produced on the main sequence, while only ~5% is supplied by the post-main sequence. This implies that VMS with enhanced winds are the primary source of 26Al, contrasting previous works where classical Wolf-Rayet winds had been suggested to be responsible for Galactic 26Al enrichment. Finally, 200M stars eject 100 times more of each heavy element in their winds than 50M stars, and even when weighted by an IMF their wind contribution is still an order of magnitude higher than that of 50M stars.
Erin R. Higgins, Jorick S. Vink, Raphael Hirschi, Alison M. Laird, Gautham N. Sabhahit
2023-08-21T18:00:02Z
http://arxiv.org/abs/2308.10941v1
# Stellar Wind Yields of Very Massive Stars ###### Abstract The most massive stars provide an essential source of recycled material for young clusters and galaxies. While very massive stars (VMS, M\(>\)100 M\({}_{\odot}\)) are relatively rare compared to O stars, they lose disproportionately large amounts of mass already from the onset of core H-burning. VMS have optically thick winds with elevated mass-loss rates in comparison to optically thin standard O-star winds. We compute wind yields and ejected masses on the main sequence, and we compare enhanced mass-loss rates to standard ones. We calculate solar metallicity wind yields from MESA stellar evolution models in the range 50 - 500 M\({}_{\odot}\), including a large nuclear network of 92 isotopes, investigating not only the CNO-cycle, but also the Ne-Na and Mg-Al cycles. VMS with enhanced winds eject 5-10 times more H-processed elements (N, Ne, Na, Al) on the main sequence in comparison to standard winds, with possible consequences for observed anti-correlations, such as C-N and Na-O, in globular clusters. We find that for VMS 95% of the total wind yields is produced on the main sequence, while only \(\sim\) 5% is supplied by the post-main sequence. This implies that VMS with enhanced winds are the primary source of \({}^{26}\)Al, contrasting previous works where classical Wolf-Rayet winds had been suggested to be responsible for Galactic \({}^{26}\)Al enrichment. Finally, 200 M\({}_{\odot}\) stars eject 100 times more of each heavy element in their winds than 50 M\({}_{\odot}\) stars, and even when weighted by an IMF their wind contribution is still an order of magnitude higher than that of 50 M\({}_{\odot}\) stars. keywords: stars: massive - stars: evolution - stars: abundances - stars: mass loss - stars: interiors - nuclear reactions, nucleosynthesis, abundances ## 1 Introduction The chemical composition of galaxies relies on the production of elements in stars, which are subsequently released in stellar winds and supernovae. This ejected material is then responsible for enriching the neighbouring environment with heavier elements. The evolution of galaxies therefore depends on the main production sites of various chemical isotopes and the relevant feedback of enriched material which ultimately affects the metallicity of a given galaxy or cluster (Tinsley, 1980). The origin of elements thereby concerns the stellar nucleosynthesis, wind ejecta and chemical yields, of a given population, providing a broad perspective on galactic chemical evolution (GCE; Kobayashi et al., 2020). This rejuvenation of galaxies over generations of stars has led us to the metal-rich environment of our own Galaxy, with abundant quantities of carbon (C), oxygen (O), nitrogen (N) and iron (Fe). These fusion products are key for establishing life in our modern-day Universe, including the enrichment of our solar system with elements such as radioactive aluminium (\({}^{26}\)Al). Surveys of the Milky Way by COMPTEL (Diehl et al., 1995) and INTEGRAL (Diehl et al., 2006) found \(\sim\) 3 M\({}_{\odot}\) of \({}^{26}\)Al which is a radioactive isotope of Al with a half-life of \(\sim\) 0.7Myr. The observed \({}^{26}\)Al was therefore produced and expelled into our Galaxy recently, likely by massive stars. Another intriguing puzzle from the last few decades concerns the origin of the C-N and Na-O anti-correlations in Globular Clusters (Bastian & Lardo, 2018). 
These hydrogen-burning by-products have been suggested to either originate from asymptotic giant branch (AGB) stars (D'Ercole et al., 2010), massive stars (Decressin et al., 2009), and even supermassive stars (SMS) (Denissenkov & Hartwick, 2014; Gieles et al., 2018). However, it remains to be shown that SMS are actually formed in Nature. Very massive stars (VMS) with masses over 100 M\({}_{\odot}\) have therefore been proposed as an alternative pollutre (Vink, 2018) as VMS are actually seen in Nature, such as in the Arches cluster of the Milky Way and the Tarantula Nebula of the Large Magellanic Cloud (LMC). These VMS have been theoretically predicted, and observed, to have enhanced stellar winds, expelling significant amounts of mass during their lifetime. Vink et al. (2011) calculated Monte Carlo simulations of VMS finding an upturn in the mass-loss rates of stars above a given transition point where stellar winds change from being optically thin to optically thick. A similar increase in the mass-loss rates was observed for the VMS or Hydrogen-rich Wolf-Rayet class of WWh stars in the LMC by Bestenlehner et al. (2014). Recent work by Sabhahit et al. (2022) provided a physically-motivated wind prescription for VMS which adopts enhanced winds above the Vink & Grafener (2012) transition point, whilst retaining standard O-star rates for stars below the transition point. A comparison of the observed VMS in both the Galaxy and LMC showed good agreement with stellar properties such as luminosities (L) and effective temperatures (T\({}_{\rm eff}\)). A subsequent study by Higgins et al. (2022) implemented this new wind prescription for stars with initial masses ranging from 100 M\({}_{\odot}\) to 1000 M\({}_{\odot}\) discovering that the surface Hydrogen (H) abundance could be used to infer the interior H-burning abundance. This is a result of chemically-homogeneous evolution (CHE), where even non-rotating stars are fully mixed as a result of the large convective cores of VMS which comprise \(\sim 90\%\) of the entire star. With such enhanced winds already at the Zero-Age-Main-Sequence (ZAMS), VMS could be the main contributors of processed material which regenerates their host young cluster. In fact, while supernovae ejecta likely dominate the contribution of massive star yields for M \(\sim 20\) M\({}_{\odot}\), this would only occur after \(\sim 5\)-10Myr, while the constant source of enriched material from VMS likely dominates the first 1-5Myr of a given region. For this reason, we explore the contribution of VMS wind yields at solar metallicity (Z) for a range of masses, implementing the enhanced wind prescription for VMS. We compare the effects of adopting the optically thin O star winds which have been previously implemented in the literature for VMS. Ejected masses and net wind yields are provided for VMS on the main sequence (MS) and we explore the effects of MS winds on the post-MS. Finally, we examine the contribution of VMS winds when weighted by an initial mass function (IMF) as compared to standard O stars. The wind and supernovae yields of \({}^{26}\)Al from massive single stars have been estimated previously by Limongi & Chieffi (2006, 2018); Martinet et al. (2022), while the effect of binary interaction has been explored more recently by Brinkman et al. (2019, 2021). We present an overview of our model grid at solar metallicity in Sect. 2, with a description of stellar winds in Sect. 2.2. The results of our stellar models are shown in Sect. 
3, with details of VMS nucleosynthesis in Sect. 3.2, key features of VMS evolution in Sect. 3.3 and observable surface enrichment in Sect.3.4. We provide the ejected masses and yield calculations for various isotopes in Sect. 4, with the contribution of MS mass loss discussed in Sect. 4.1, and a discussion of the impact that MS winds have on the post-MS following in Sect. 4.2. A comparison of VMS and O star wind yields is provided in Sect. 5 as a function of their contribution to their host environment for a given IMF. Finally, we provide our conclusions in Sect. 6. ## 2 Method In this section, we provide an overview of our evolutionary models with the relevant wind prescriptions and nuclear reactions necessary for estimating wind yields at solar Z. We compare two stellar wind prescriptions in order to showcase the impact of VMS wind yields on their environment. In Sect. 4 we describe our method of calculating ejected masses and net yields for a given initial mass and chemical isotope. ### Stellar models Stellar models have been calculated using the one-dimensional stellar evolution code MESA (r10398; Paxton et al., 2011, 2013, 2015, 2018, 2019) for a grid of initial masses of 100, 200, 300, 400, and 500 M\({}_{\odot}\). We have also computed comparable models at lower initial masses of 50 M\({}_{\odot}\) and 80 M\({}_{\odot}\). All calculations begin with a pre-main sequence and then evolve from the ZAMS until core O-exhausation (\({}^{16}\)O\({}_{\rm c}\) + 0.00001). We implement a nuclear reaction network which includes the relevant isotopes for massive star evolution until the end of core O-burning. This nuclear network comprises the following 92 isotopes: n, \({}^{1,2}\)H, \({}^{3,4}\)He, \({}^{6}\)Ti, \({}^{7,9,10}\)Be, \({}^{8,10,11}\)B, \({}^{12,13}\)C, \({}^{13-16}\)N, \({}^{14-19}\)O, \({}^{17-20}\)Te, \({}^{18-23}\)Ne, \({}^{21-24}\)Na, \({}^{23-27}\)Mg, \({}^{25-28}\)Al, \({}^{27-33}\)Si, \({}^{30-34}\)P, \({}^{35-38}\)Cl, \({}^{35-41}\)Ar, \({}^{39-44}\)K, and \({}^{39-44,46,48}\)Ca. Our stellar models are computed with solar metallicity, where \(X=0.720\), \(Y=0.266\) and Z\({}_{\odot}=0.014\) where the relative composition is adopted from Asplund et al. (2009), provided in Table 1. We avail of the OPAL opacity tables from Rogers & Nayfonov (2002), and adopt nuclear reaction rates from the JINA Realcib Database (Cyburt et al., 2010). The mixing-length-theory (MLT) of convection describes the treatment of convection in our models, where we apply an efficiency of \(\alpha_{\rm mlt}\)= 1.67 (Arnett et al., 2019). The Schwarzschild criterion defines the convective boundaries in our models and as such we do not implement semiconductive mixing. For convective boundary mixing (CBM), we include the exponential decaying diffusive model of Freytag et al. (1996) (see also Herwig, 2000) with \(f_{\rm ov}\)= 0.03 (corresponding to \(\alpha_{\rm ov}\)\(\simeq 0.3\)) for the top of convective cores and shells, and with \(f_{\rm ov}\)= 0.006 for the bottom of convective shells. Bowman (2020) find a range of \(\alpha_{\rm ov}\) from asteroseismology results with \(\alpha_{\rm ov}\) up to 0.4, so our value for the top of convective zones falls in the range of asteroseismology-inferred values. This value also falls in between the majority of published large grids of stellar models such as \(\alpha_{\rm ov}\) = 0.1 in Ekstrom et al. (2012), \(\alpha_{\rm ov}\) = 0.335 in Brott et al. 
(2011), and recent studies on CBM (Higgins & Vink, 2019; Scott et al., 2021) supporting values for \(\alpha_{\rm ov}\) up to at least 0.5 for stars above 20 M\({}_{\odot}\). For the bottom boundary, a CBM value of 1/5 the value of the top boundary is based on 3D hydrodynamic simulations (Cristiani et al., 2017, 2019; Rizzuti et al., 2022) finding that CBM is slower at the bottom boundaries due to being stiffer and therefore harder to penetrate. In order to evolve such high mass models, without enhanced winds for comparisons, we apply convection in superadiabatic layers via the MLT++ prescription which aids convergence of such models to late evolutionary stages. The temporal resolution of our models has been set with \(\texttt{varcontroltarget}=0.0001\), and a corresponding spatial resolution of meshdelta = 0.5. \begin{table} \begin{tabular}{c c c c} \hline \hline Isotope & Mass fraction & Isotope & Mass fraction \\ \hline \hline \({}^{1}\)H & 0.719986 & \({}^{20}\)Ne & 1.356E-3 \\ \({}^{2}\)H & 1.440E-5 & \({}^{22}\)Ne & 1.097E-4 \\ \({}^{3}\)He & 4.416E-5 & \({}^{22}\)Na & 2.909SE-5 \\ \({}^{4}\)He & 0.266 & \({}^{24}\)Mg & 4.363E-4 \\ \({}^{12}\)C & 2.380E-3 & \({}^{22}\)Mg & 5.756E-5 \\ \({}^{14}\)N & 7.029 E-4 & \({}^{26}\)Mg & 6.58E-5 \\ \({}^{16}\)O & 6.535E-3 & \({}^{27}\)Al & 5.051E-5 \\ \({}^{18}\)O & 1.475E-5 & \({}^{28}\)Si & 5.675E-4 \\ \({}^{19}\)F & 3.475E-7 & \({}^{32}\)S & 2.917E-4 \\ \hline \end{tabular} \end{table} Table 1: Initial abundances of chemical isotopes in mass fractions for our grid of models at Z\({}_{\odot}\). ### Mass Loss In this work, we compare 2 stellar wind prescriptions, and explore their effects on VMS evolution and corresponding wind yields. Theoretical mass-loss rates of massive stars were calculated by Vink et al. (2001) as a function of mass, luminosity, effective temperature, terminal velocity, and metallicity, \[\begin{split}\log\,\dot{M}_{\rm V01}=&-6.697\\ &+2.194\,\log(L/L_{\odot}/10^{5})\\ &-1.313\,\log(M/M_{\odot}/30)\\ &-1.226\,\log((v_{\infty}/v_{\rm esc})/2)\\ &+0.933\,\log(T_{\rm eff}/40000)\\ &-10.92\,\left(\log(T_{\rm eff}/40000)\right)^{2}\\ &+0.85\,\log(Z/Z_{\odot})\end{split} \tag{1}\] as shown in equation (1). These mass-loss rates were calculated with Monte-Carlo (MC) simulations which trace the number of photons travelling below the photosphere through the stellar wind, thereby calculating the radiative acceleration and mass-loss rate. The MC simulations were calculated for hot (\(\log_{10}\) (T\({}_{\rm eff}\)/K) \(\geq\) 4.0), optically thin OB stars. This mass-loss recipe has been implemented across many stellar evolution and population synthesis codes for massive stars, and in some cases extrapolated to higher masses, which have been shown to under-predict the stellar winds of VMS, (Vink, 2006; Bestenlehner et al., 2014). Following this, Vink et al. (2011) computed MC simulations up to 300 M\({}_{\odot}\) finding a 'kink' or upturn in the mass-loss rates at the highest masses. Similarly, the massive star observations in 30 Dor also displayed a 'kink' in the mass-loss rates of the most massive stars (Bestenlehner et al., 2014). This transition point aligns with the observed spectral transition from O stars with optically thin winds to Of/WNh stars with optically thick winds. While new dependencies on L/M were provided by Vink et al. (2011), showing a strong \(\Gamma\)-dependence on \(\dot{M}\), absolute rates were not calculated. As a result, a recent study by Sabhahit et al. 
(2022) provided a complete mass-loss prescription which switches from the optically thin Vink et al. (2001) rates below the transition point (\(\sim 77\) M\({}_{\odot}\) at Z\({}_{\odot}\)) to the updated Vink et al. (2011) rates for VMS above the transition point. Since there are a number of transition stars in the Arches cluster of the Milky Way and 30 Dor in the Large Magellanic Cloud (LMC), the absolute rates of both recipes were anchored to the transition point such that a complete recipe was achievable, \[\begin{split}\log\,\dot{M}_{\rm V11}=&-8.445\\ &+4.77\,\log(L/L_{\odot}/10^{5})\\ &-3.99\,\log(M/M_{\odot}/30)\\ &-1.226\,\log((v_{\infty}/v_{\rm esc})/2)\\ &+0.761\,\log(Z/Z_{\odot})\end{split} \tag{2}\] as shown in equation (2). The Sabhahit et al. (2022) study provided a comparison with observed transition stars and VMS in the Galaxy and LMC, showing excellent agreement with absolute mass-loss rates and evolutionary traits. In fact, the self-regulatory behaviour found in the enhanced wind VMS models, illustrated in the Hertzsprung-Russell diagram (HRD), demonstrated that observed VMS could be reproduced in a narrow effective temperature range with a steep drop in luminosity, as a result of the enhanced wind prescription. We adopt this mass-loss recipe throughout the paper, hereafter 'V11'. We compare the updated V11 mass-loss recipe with the standard O star wind of Vink et al. (2001), hereafter 'V01'. Our implementation of V11 winds follows that of Sabhahit et al. (2022) such that the V01 and V11 prescriptions are connected at the observed transition point and the maximum mass-loss rate of the 2 recipes (as a function of L, M, T, Z, and \(v_{\infty}\)) is adopted at each time-step. Therefore stars which are near the transition point, or evolve beyond the transition point, adopt the appropriate mass-loss rate at all evolutionary stages. In our model grid, all VMS (M\(>\)100 M\({}_{\odot}\)) lie beyond the transition point and as such apply the V11 prescription as shown in equation 2 throughout their evolution. On the other hand, the lower mass model calculated with M\({}_{\rm init}=50\) M\({}_{\odot}\) evolves below the transition point and applies the V01 prescription outlined in equation 1 throughout its entire evolution. Finally, the 80 M\({}_{\odot}\) model already begins its evolution near the transition point (at Z\({}_{\odot}\), M\({}_{\rm trans}\sim 76\) M\({}_{\odot}\)) and as a result switches between V01 and V11 dependencies in line with the wind physics discussed in Vink et al. (2011) and Sabhahit et al. (2022).

Figure 1: Time evolution of the surface abundance of H, He, C, N and O as a function of mass, for 300 M\({}_{\odot}\) models at Z\({}_{\odot}\). As the star loses mass the time evolution goes from right to left. We apply the optically-thick, enhanced wind prescription outlined in equation (2) on the left, and the standard optically thin, O star wind prescription from equation (1) on the right. The white region shows the mass lost over the MS lifetime, while the grey shaded region illustrates the remaining terminal age main-sequence (TAMS) stellar mass, with the black line denoting the TAMS.

## 3 Results

In this section, we present the initial results of our model grid for VMS masses ranging from 100-500 M\({}_{\odot}\).
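For reference in the comparisons that follow, equations (1) and (2) together with the switching rule described in Sect. 2.2 can be written compactly as below. This is a minimal sketch, not the MESA implementation; it assumes luminosity and mass in solar units, \(T_{\rm eff}\) in kelvin, \(Z\) in units of Z\({}_{\odot}\) and \(v_{\infty}/v_{\rm esc}=2\) by default, and the function names are illustrative only.

```python
import math

def log_mdot_v01(L, M, Teff, vratio=2.0, Z=1.0):
    """Optically thin O-star rate of equation (1) (Vink et al. 2001)."""
    t = math.log10(Teff / 40000.0)
    return (-6.697
            + 2.194 * math.log10(L / 1e5)
            - 1.313 * math.log10(M / 30.0)
            - 1.226 * math.log10(vratio / 2.0)
            + 0.933 * t
            - 10.92 * t**2
            + 0.85 * math.log10(Z))

def log_mdot_v11(L, M, vratio=2.0, Z=1.0):
    """Optically thick VMS rate of equation (2) (Vink et al. 2011)."""
    return (-8.445
            + 4.77 * math.log10(L / 1e5)
            - 3.99 * math.log10(M / 30.0)
            - 1.226 * math.log10(vratio / 2.0)
            + 0.761 * math.log10(Z))

def log_mdot(L, M, Teff, vratio=2.0, Z=1.0):
    """Switching rule described in the text: adopt the maximum of the two
    rates at each time-step, so models above the transition point follow the
    enhanced V11 rate while those below it retain the standard V01 rate."""
    return max(log_mdot_v01(L, M, Teff, vratio, Z),
               log_mdot_v11(L, M, vratio, Z))
```

Because the two recipes are anchored at the transition point and the V11 rate rises more steeply with luminosity, taking the maximum naturally reproduces the upturn in \(\dot{M}\) for stars above \(\sim 77\) M\({}_{\odot}\) at Z\({}_{\odot}\).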
In particular we focus on the nucleosynthesis and key characteristics of VMS evolution, resulting from two sets of models with one set implementing enhanced, optically thick, V11 stellar winds, and the other including the standard, optically thin, V01 winds, as outlined in Sect. 2. ### Effect of stellar winds on VMS Initially, we compare the effect of applying the enhanced V11 wind prescription in line with observations of VMS and theoretical predictions of enhanced mass loss above the transition point, with the standard O star winds of V01 in Fig. 1. We show the time evolution of surface composition (\({}^{1}\)H, \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N and \({}^{16}\)O) for a 300 M\({}_{\odot}\) star, as a function of total stellar mass, where mass is lost with time from right to left. By comparing the grey shaded region which illustrates the TAMS mass for each model, we can see that the wind yields and post-MS evolution are significantly affected by the MS mass-loss rate. Figure 1 highlights the primary difference for VMS winds where the final TAMS mass differs by 100 M\({}_{\odot}\). The subsequent effect of the additional mass lost by the V11 model during this MS phase is evident by the increased amounts of He and N ejected (white region of Fig.1 (b) when compared to the larger white region in (a)). We explore this further for a wider range of isotopes in Figs. 2 and B1, and discuss the key effects in Sect. 4.1. We present an overview of the TAMS masses and evolutionary timescales of our models in Table 2. ### Nucleosynthesis VMS are extremely efficient nuclear fusion generators, with convective cores which are a significant fraction of their total mass, and burning timescales of just a few Myrs. Stars with an initial mass greater than \(\sim 8\) M\({}_{\odot}\) fuse H into He via the CNO-cycle as opposed to the p+p chain due to their increased central temperatures and relatively lower central densities. The heavier elements in the CNO-cycle act as catalysts with the net result of converting H into He. Initially the CN-cycle processes \({}^{12}\)C towards \({}^{15}\)N by proton-capture before returning to \({}^{12}\)C again via (p,\(\alpha\)) reactions. This CN-cycle reaches equilibrium in \(\sim 10^{4}\) yr with a factor of 10 increase in \({}^{14}\)N. Simultaneously, the second CNO-cycle converts \({}^{15}\)N into \({}^{16}\)O - \({}^{17}\)F - \({}^{17}\)O before returning to \({}^{14}\)N again by (p,\(\alpha\)). Therefore, as a result of the CNO-processing during core H-burning Figure 3: Time evolution of the surface composition during core He-burning in log-scale as a function of stellar mass with the interior compositions shown at the end of core He-burning, for a model with an initial mass of 300 M\({}_{\odot}\) applying the V11 wind prescription. The He-exhausted core is shown in the grey shaded region (left) while the ejected material lost during the core He-burning phase can be seen in white (right). Figure 2: Time evolution of the surface composition during the MS coupled with the composition of the interior of the star (grey shaded area) at the end of the MS (both in mass fraction units and using a log-scale) for a model with an initial mass of 300 M\({}_{\odot}\) and the V11 wind prescription (where the legend details the various isotopes with coloured solid/dashed lines). 
Given that mass loss reduces the total mass of the star with time, the time evolution goes from right to left and the material ejected during the MS corresponds to the white area (right, non-shaded), with the hydrogen-exhausted stellar interior shown by the grey shaded region (left). The black solid line illustrates the TAMS where the surface evolution at core H-exhaustion occurs, and the central abundances are then shown in the grey region at the same TAMS point. in massive stars, the initial C and O abundances are depleted at the expense of producing \({}^{14}\)N. At sufficiently high temperatures (T\({}_{\rm c}\sim 5\times 10^{7}\) K), secondary reactions can occur during core H-burning. This includes the NeNa-cycle which processes the \({}^{20}\)Ne into \({}^{22}\)Ne and \({}^{23}\)Na before returning to \({}^{20}\)Ne again. The processed material can result in an observable increase in \({}^{23}\)Na. In addition, at T \(>10^{7}\) K massive stars produce \({}^{26}\)Al via the MgAl-cycle where \({}^{24}\)Mg is converted to \({}^{25}\)Al - \({}^{25}\)Mg - \({}^{26}\)Al before decaying to \({}^{26}\)Mg or proton-captures to \({}^{27}\)Si. Interestingly, the ground state \(\tau_{1/2}\) of \({}^{26}\)Al, (which makes up \(\sim 77\)% of \({}^{26}\)Al synthesised; Laird et al. 2023) is \(\sim 0.7\) Myr allowing observations to trace the production of \({}^{26}\)Al as it decays. In fact, COMPTEL has observed \(\sim 3\) M\({}_{\odot}\) of radioactive \({}^{26}\)Al in the Galactic plane of the Milky Way (Diehl et al. 1995), where massive star-forming regions are present. Therefore, massive star nucleosynthesis is expected to be crucial for explaining the presence of such an abundance of \({}^{26}\)Al in our Galaxy (Laird et al. 2023). We note that in our MESA models, the nuclear reaction network combines the ground and isomeric state of \({}^{26}\)Al, such that our \({}^{26}\)Al wind yields could be reduced by approximately 23% for \(\gamma\)-ray observation comparisons, as the isomeric component will decay to \({}^{26}\)Mg effectively instantaneously (\(\tau_{1/2}=6.35\) s). We refer to a forthcoming future work for updated \({}^{26}\)Al reaction rates and independent ground and isomeric states of \({}^{26}\)Al for the precise yields produced of this isotope. Figure 2 presents the nucleosynthesised material from core H-burning which would be lost in stellar winds via the V11 prescription. Isotopes are shown in logarithmic-scale and represented as a function of stellar mass. In this case we display the abundances of a 300 M\({}_{\odot}\) star evolving at solar Z. Each isotope evolves from right to left as the star synthesis material and loses mass through stellar winds on the MS. The ejected mass lost during core H-burning can be seen in white (right), with the H-exhausted stellar interior shown by the grey shaded region (left). As the star loses mass during the core H-burning phase (about 90% of the star's entire lifetime), the total mass is reduced significantly from 300 M\({}_{\odot}\) to \(\sim\)30 M\({}_{\odot}\). This is due to strong stellar winds experienced by the most massive stars which evolve close to the Eddington limit. With such large convective cores, these VMS are almost fully mixed, leading to nuclear fusion-products, like N, being exposed at the stellar surface early in the evolution. With strong outflows stripping these outer layers, the contribution of VMS winds on their environment is significant. 
Therefore, the white region showcases the stellar wind yields that are expected for each isotope (with the legend detailing the various isotopes with coloured solid/dashed lines). We can see from 220 \(\lesssim\) M / M\({}_{\odot}\lesssim\) 280 that the C and O abundances are reduced at the expense of increased N due to the CNO-cycle. We also find an increase in \({}^{26}\)Al during this phase as a result of the MgAl-cycle, as well as increased \({}^{23}\)Na and reduced \({}^{22}\)Ne due to the NeNa-cycle. Interestingly, the crossover from H to He enhancement seen at M\(\approx\)200 M\({}_{\odot}\) represents the chemically-homogeneous nature of these VMS which display surface H abundances which can be used as a 'clock' to infer their core's evolutionary stage (Higgins et al. 2022). At this point, the central abundance is already exposed at the stellar surface, meaning the outer H-rich layers have been stripped from the star. The increase by a factor of 10 in \({}^{14}\)N at \(\approx 280\) M\({}_{\odot}\) showcases CN-equilibrium which is reached quickly on the MS, with a comparable increase in \({}^{23}\)Na. The abundance of \({}^{20}\)Ne remains constant relative to the initial composition due to the regeneration of \({}^{20}\)Ne at the end of the NeNa-cycle. The post-MS is displayed in Fig. 3 where He-processed material is displayed in the white region (right) leaving the He-exhausted core in the grey-shaded region (left). As in Fig.2, the evolution of various isotopes goes from right to left as the star loses mass through stellar winds, and the elements shown in white will be lost in these winds while the core in grey retains any further processed material. We present the continuation of the 300 M\({}_{\odot}\) model showcased in Fig.2, for the core He-burning stage of evolution, which also implements the enhanced VMS wind of V11. In Fig. 3, we do not include the already lost MS wind matter, but only include wind yields during the He-burning phase in white, allowing for a more detailed study of elements processed during core He-burning. We note that while we present a 300 M\({}_{\odot}\) model for the post-MS, all models which include the V11 wind result in the same final mass and element structure. Therefore, our 100 M\({}_{\odot}\) model could be discussed interchangeably here (see Fig. 2). We find that at the onset of core He-burning (M\(\approx 26\) M\({}_{\odot}\)), the N-rich material from the MS is quickly reprocessed into \({}^{22}\)Ne which is enriched by a factor of 1000. This would make \({}^{22}\)Ne a strong spectroscopic observable of early He-burning nucleosynthesis in stripped stars, particularly as it is \(\sim\)10 times more abundant than \({}^{20}\)Ne. We also see that as He converts into C and O. Their abundances increase by a factor of 100 and 1000 respectively (M\(\approx 25\) M\({}_{\odot}\)). We note an increase in \({}^{26}\)Mg at the expense of \({}^{26}\)Al at the same point. 
This demonstrates that classical WR stars do not eject \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \(M_{\rm i}/\)M\({}_{\odot}\) & \(M_{\rm TAMS}/\)M\({}_{\odot}\) & \(M_{\rm Aa}/\)M\({}_{\odot}\) & \(M_{\rm He-TAMS}/\)M\({}_{\odot}\) & \(M_{\rm CO}/\)M\({}_{\odot}\) & \(M_{\rm f}/\)M\({}_{\odot}\) & \(\tau_{\rm RS}/\)Myr & \(\tau_{\rm He}/\)Myr & \(\tau_{\rm CO}/\)yr & \(\dot{M}\) (Dutch, V01, V11) \\ \hline \hline 100 & 32.95 & 29.57 & 16.20 & 15.82 & 15.94 & 3.12 & 0.379 & 6044 & V11 \\ 200 & 31.82 & 29.57 & 16.16 & 15.27 & 15.72 & 2.76 & 0.381 & 6095 & V11 \\ 300 & 32.20 & 28.71 & 16.05 & 15.47 & 15.80 & 2.62 & 0.381 & 6083 & V11 \\ 400 & 32.46 & 28.71 & 16.03 & 14.49 & 15.85 & 2.54 & 0.380 & 6065 & V11 \\ 500 & 32.66 & 29.25 & 16.15 & 14.53 & 15.89 & 2.48 & 0.380 & 6066 & V11 \\ \hline 100 & 61.86 & 46.82 & - & - & - & 3.05 & - & - & V01 \\ 200 & 98.37 & 41.67 & - & - & - & 2.46 & - & - & V01 \\ 300 & 132.20 & 125.846 & - & - & - & 2.25 & - & - & V01 \\ 400 & 161.33 & 156.636 & - & - & - & 2.14 & - & - & V01 \\ 500 & 189.33 & 185.432 & - & - & - & 2.07 & - & - & V01 \\ \hline \end{tabular} \end{table} Table 2: Key characteristics of the model grid with masses provided at the ZAMS (M\({}_{\rm d}\)), at the end of core H-burning (M\({}_{\rm TAMS}\)), end of core He-burning (M\({}_{\rm He-TAMS}\)), and of final masses (M\({}_{\rm f}\)). We also provide core masses at the end of core H-burning (M\({}_{\rm A}\)), and at the end of core C-burning (M\({}_{\rm CO}\)). Evolutionary timescales are provided for the MS (\(\tau_{\rm MS}\)), core He-burning phase and core CO-burning phase. The mass-loss prescription applied to the model is provided in the final column (V01,V11), as outlined in Sect.2.2. Note that the V01 models are calculated during the core H-burning phase only. meaningful amounts of \({}^{26}\)Al before it has decayed to \({}^{26}\)Mg, due to the lack of H remaining. ### Evolution of VMS We evolve VMS models with initial masses ranging from 100-500 M\({}_{\odot}\) adopting the appropriate enhanced-wind prescription from Sabhahit et al. (2022) for stars above the transition point. This \(\Gamma\)-dependent mass loss results in a self-regulatory effect where stars lose a significant fraction of their total mass on the MS leading to a drop in luminosity at a constant effective temperature as has been observed in the Arches cluster of the Milky Way and 30Dor in the LMC. We extend this \(\Gamma\)-dependent mass loss through the post-MS stages of evolution in V11 models as the relative dependencies are consistent with that of WR stars (Sander et al., 2020). This results in a second drop in luminosity after the onset of core Heburning, see Fig. 4. Comparisons by Sabhahit et al. (2023) have shown that the absolute rates of the hydrodynamically-consistent WR rates by Sander et al. (2020) are in good agreement with the extension of the V11 rates during core He-burning. The 100 M\({}_{\odot}\) model initially evolves to cooler effective temperatures before losing sufficient mass to evolve quasi-chemically homogeneously, while stars with an initial mass greater than 200 M\({}_{\odot}\) already begin the MS and evolve chemically homogeneously. The enhanced wind of V11 models (solid lines) results in a steep drop in mass with a mass turnover point at 1.6 Myrs as previously explored in Higgins et al. (2022). We compare the mass evolution of V11 models and V01 models during core H-burning in Fig. 
5, showcasing the consequences of the weaker wind rate of V01 designed for O stars below the transition point. We find that the enhanced wind models converge to a TAMS mass of \(\sim 32\) M\({}_{\odot}\) while models applying the V01 wind have TAMS masses ranging from 60 M\({}_{\odot}\) up to 190 M\({}_{\odot}\). The net yields and ejected mass lost on the MS are impacted significantly by this change in mass. This effect can also be seen by comparing Fig. 2 with the reduced winds in Fig. 11. The TAMS mass in Fig. 11 is 132 M\({}_{\odot}\), reducing the contribution of wind yields substantially.

Figure 4: Hertzsprung-Russell diagram of the grid of models comprising initial masses of 100 M\({}_{\odot}\), 200 M\({}_{\odot}\), 300 M\({}_{\odot}\), 400 M\({}_{\odot}\) and 500 M\({}_{\odot}\), including the enhanced wind prescription V11.

Figure 5: Mass evolution of the grid of models comprising initial masses of 100 M\({}_{\odot}\), 200 M\({}_{\odot}\), 300 M\({}_{\odot}\), 400 M\({}_{\odot}\) and 500 M\({}_{\odot}\), including the wind prescriptions V11 (solid) and V01 (dashed) during core H-burning only.

### Observable surface chemical signatures

The surface evolution of various isotopes shown in Figs. 2 and 3 illustrates the dominant isotopes through each evolutionary stage. However, we can also showcase the change in surface enrichment by providing relative surface abundances at particular evolutionary times. Therefore, we consider surface abundances at core H-exhaustion, He-exhaustion and O-exhaustion, which represent the surface properties at the black line of Figs. 2 and 3, as well as the final surface profile of each model. We provide surface ratios of \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N, \({}^{16}\)O, and \({}^{22}\)Ne in mass fractions in Table 3 to compare with abundance ratios which may be observed in H-burning WNh stars or post-MS WR stars. As in Ekstrom et al. (2012), we present ratios of N/C and N/O abundances, as well as surface \({}^{4}\)He, finding that our VMS results correspond to the same order of magnitude as their M \(>60\) M\({}_{\odot}\) models. We compare our 80-500 M\({}_{\odot}\) model ratios of surface N/C to the observed VMS in the Arches cluster (\(\sim\) Z\({}_{\odot}\)), and find an excellent agreement between our H-exhaustion ratios (N/C \(\sim\) 90-110) and the sample of WNh stars in Martins et al. (2008, their Fig. 8), which are observed to have log N/C \(\sim\) 2. We also provide O/He ratios for later evolutionary stages, comparable to Meynet et al. (1994, their Fig. 10) and Crowther et al. (2002). The C/He and O/He ratios at He-exhaustion and O-exhaustion suggest that our 50 M\({}_{\odot}\) model best represents the WR stars from Crowther et al. (2002). We note that the N/C and N/O ratios will be substantially higher on the MS than the post-MS due to the CNO-cycle, while the Ne/He and O/He will increase in the post-MS as \({}^{14}\)N is processed into \({}^{22}\)Ne and \({}^{16}\)O is produced during core He-burning. Interestingly, the surface \({}^{4}\)He is almost 100% at the end of core H-burning for VMS, while the 50 M\({}_{\odot}\) model is not enriched in \({}^{4}\)He at all during core H-burning. As previously mentioned, this is a result of 2 features of VMS evolution: the fully mixed interior, where a large fraction of the star is occupied by the convective core, and strong stellar winds stripping the exterior envelope.
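The comparison with the Arches WNh sample can be read off directly from the mass-fraction ratios of Table 3; the short sketch below, with values transcribed from the H-exhaustion rows, simply takes the logarithm of the tabulated N/C ratios.

```python
import math

# H-exhaustion surface N/C ratios (by mass fraction) for the 80-500 Msun models (Table 3)
nc_ratio = {80: 1.108e2, 100: 9.403e1, 200: 9.526e1,
            300: 9.479e1, 400: 9.451e1, 500: 9.429e1}

for m_init, nc in nc_ratio.items():
    # every model sits near log N/C ~ 2, the value quoted for the
    # WNh stars of Martins et al. (2008)
    print(f"M_init = {m_init:3d} Msun : log N/C = {math.log10(nc):.2f}")
```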
A similar effect can be seen for the 50 M\({}_{\odot}\) ratios of N/C and N/O which increase in later burning stages due to delayed stripping of this earlier-processed material, compared to VMS which display surface abundances which are representative of their core abundances. ## 4 Yields and ejected masses In this section we provide calculations of ejected masses and net wind yields for V11 models until core O-exhaustion (Table 5), as well as a comparison of MS ejected masses for both sets of models, applying V11 and V01 winds (Table 4). We discuss the key variations in ejected isotopes when implementing enhanced VMS winds or O-star winds, with consequences for galactic chemical evolution (GCE). Since most of the ejecta are lost during core H-burning, the key differences in ejected (element) masses will occur as a result of the MS mass-loss prescription. We adapt the relations from Hirschi et al. (2005) for our yield calculations. The net stellar wind yield calculated for a star of initial mass, \(m\), and isotope, \(i\), is defined as: \[m_{i}^{\rm wind}=\int_{0}^{\tau\,(m)}\dot{M}(m,t)\left[X_{i}^{S}(m,t)-X_{i}^{0 }\right]dt \tag{3}\] where \(\dot{M}\) is the mass-loss rate, \(X_{i}^{S}\) is the surface abundance of a given isotope, and \(X_{i}^{0}\) is the initial abundance of a given isotope (see Table 1), integrated from the ZAMS until \(\tau\,(m)\), the final age of the star. We also calculate ejected masses (EM) of each isotope, \(i\), by using: \[EM_{im}=\int_{0}^{\tau\,(m)}\dot{M}\,X_{i}^{S}(m,t)\ dt. \tag{4}\] We find that all V11 models reach O-exhaustion with the same final mass (\(\sim 16\) M\({}_{\odot}\)) and structure, and assume that they all collapse to form black holes without a supernova. Therefore, we implement the above wind yield equations such that the ejected masses and net yields are all attributed to stellar winds. We present the complete table of ejected masses (top) and net yields (bottom) in solar mass units for our V11 model grid in Table 5. We find that with increased initial mass, more \({}^{1}\)H, \({}^{4}\)He and \({}^{14}\)N are expelled, as would be expected. However, we find that the ejected masses of \({}^{12}\)C, \({}^{16}\)O, and \({}^{22}\)Ne are relatively constant with initial mass since they are post-MS products. This demonstrates the dominant role that MS mass loss plays on the entire evolution of VMS, including their total yields. Furthermore, we can see from the increasing \({}^{4}\)He ejecta that much of the element is lost in the MS before converting into \({}^{12}\)C in the post-MS. Hence, the H-processed elements will produce the majority of stellar wind yields. Interestingly, we see an increase in the amount of \({}^{20}\)Ne, \({}^{23}\)Na, \({}^{26}\)Al and \({}^{27}\)Al ejected with higher initial masses, suggesting that the most massive stars may be responsible for polluting their environments with these trace elements. This is important for comparisons with \(\gamma\)-ray observations, and globular clusters which show enrichment of \({}^{23}\)Na and \({}^{27}\)Al (Bastian & Lardo, 2018). The net wind yields, useful for GCE calculations, are provided in the lower section of Table 5. Negative values show that the net element mass has been processed into another element, for example \({}^{1}\)H yields are negative at the expense of \({}^{4}\)He. 
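Equations (3)-(5) amount to simple quadratures over the evolutionary track. A minimal numerical sketch is given below; it assumes the track has been extracted as arrays of age, mass-loss rate and surface mass fraction (the variable names are illustrative, not MESA output names).

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal rule: sum of 0.5 * (y_k + y_{k+1}) * (x_{k+1} - x_k)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def wind_ejecta(age, mdot, x_surf, x_init):
    """Ejected mass (equation 4) and net wind yield (equation 3) of one isotope.

    age    : stellar age along the track [yr]
    mdot   : mass-loss rate at each age [Msun/yr], taken positive
    x_surf : surface mass fraction of the isotope at each age
    x_init : initial mass fraction of the isotope (Table 1)
    """
    ejected = integrate(mdot * x_surf, age)           # EM_im, equation (4)
    net = integrate(mdot * (x_surf - x_init), age)    # m_i^wind, equation (3)
    return ejected, net

# Equation (5) is the same bookkeeping in closed form: the integral of mdot over
# the track equals the total mass lost, so  net = ejected - x_init * (M_i - M_f).
```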
Similarly, for Mini \begin{table} \begin{tabular}{c c c c c c c c} \hline \(M_{\rm i}\) & \(Y_{\rm e}\) & N/C & N/O & Ne/He & O/He & C/N & C/He \\ \hline \hline \multicolumn{8}{c}{H-exhaustion} \\ \hline 50 & 2.660E-1 & 3.393E-1 & 9.369E-2 & 4.961E-4 & 2.458E-2 & 2.947 & 6.788E-3 \\ 80 & 8.047E-1 & 1.108E+2 & 1.683E+2 & 6.462E-6 & 6.146E-5 & 9.022E-3 & 9.330E-5 \\ 100 & 9.572E-1 & 9.403E+1 & 1.921E+2 & 1.150E-5 & 4.518E-5 & 1.063E-2 & 9.229E-5 \\ 200 & 9.532E-1 & 9.526E+1 & 1.899E+2 & 1.076E-5 & 4.591E-5 & 1.050E-2 & 9.150E-5 \\ 300 & 9.548E-1 & 9.479E+1 & 1.908E+2 & 1.099E-5 & 4.561E-5 & 1.055E-2 & 9.179E-5 \\ 400 & 9.557E-1 & 9.451E+1 & 1.913E+2 & 1.114E-5 & 4.543E-5 & 1.058E-2 & 9.197E-5 \\ 500 & 9.564E-1 & 9.429E+1 & 1.917E+2 & 1.125E-5 & 4.530E-5 & 1.061E-2 & 9.211E-5 \\ \hline \multicolumn{8}{c}{He-exhaustion} \\ \hline 50 & 5.095E-1 & 9.473 & 2.427 & 9.178E-5 & 4.596E-3 & 1.056E-1 & 1.177E-3 \\ 80 & 1.544E-1 & \(\sim\)0 & \(\sim\)0 & 7.725E-2 & 2.270 & 8.117E+13 & 3.083 \\ 100 & 1.980E-1 & \(\sim\)0 & \(\sim\)0 & 6.258E-2 & 1.399 & 1.067E+14 & 2.559 \\ 200 & 2.079E-1 & \(\sim\)0 & \(\sim\)0 & 5.998E-2 & 1.265 & 1.147E+14 & 2.457 \\ 300 & 2.043E-1 & \(\sim\)0 & \(\sim\)0 & 6.084E-2 & 1.311 & 1.120E+14 & 2.493 \\ 400 & 2.020E-1 & \(\sim\)0 & \(\sim\)0 & 6.147E-2 & 1.343 & 1.103E+14 & 2.517 \\ 500 & 2.002E-1 & \(\sim\)0 & \(\sim\)0 & 6.196E-2 & 1.368 & 1.092E+14 & 2.536 \\ \hline \multicolumn{8}{c}{O-exhaustion} \\ \hline 50 & 5.195E-1 & 1.460E+1 & 2.567 & 8.903E-5 & 4.412E-3 & 6.848E-2 & 7.757E-4 \\ 80 & 1.369E-1 & \(\sim\)0 & \(\sim\)0 & 8.569E-2 & 2.781 & 7.188E+13 & 3.384 \\ 100 & 1.761E-1 & \(\sim\)0 & \(\sim\)0 & 6.964E-2 & 1.741 & 9.492E+13 & 2.831 \\ 200 & 1.852E-1 & \(\sim\)0 & \(\sim\)0 & 6.667E-2 & 1.578 & 1.017E+14 & 2.723 \\ 300 & 1.818E-1 & \(\sim\)0 & \(\sim\)0 & 6.774E-2 & 1.636 & 9.937E+13 & 2.763 \\ 400 & 1.797E-1 & \(\sim\)0 & \(\sim\)0 & 6.843E-2 & 1.673 & 9.794E+13 & 2.788 \\ 500 & 1.782E-1 & \(\sim\)0 & \(\sim\)0 & 6.896E-2 & 1.703 & 9.694E+13 & 2.807 \\ \hline \end{tabular} \end{table} Table 3: Relative surface abundances in mass fractions for a range of initial masses provided in solar mass units. The surface abundance ratios are provided from V11 models and are shown for three evolutionary stages. \(>200\,\mathrm{M}_{\odot}\) the additional NeNa and MgAl-cycles produce \({}^{23}\)Na at the cost of \({}^{20}\)Ne and \({}^{26}\)Al at the expense of \({}^{24}\)Mg. In fact, increased \({}^{20}\)Ne abundances also demonstrates evidence of previously processed \({}^{23}\)Na. Moreover, at these high initial masses the stars evolve chemically-homogeneously, so stars with \(\mathrm{M}_{\mathrm{init}}>200\,\mathrm{M}_{\odot}\) show that \({}^{14}\)N produces a net positive yield as a result of processing \({}^{16}\)O during the CNO-cycle, while the \(100\,\mathrm{M}_{\odot}\) model which is not fully-mixed retains a positive yield in both elements. We also provide an IMF-weighted contribution of our stellar wind yields in Fig. 6. We adopt the relation from Salpeter (1955) for our VMS study, where \(M^{-2.35}\). We compare the IMF-weighted ejected masses applied in Fig. 6 with a top-heavy IMF in Fig. 10 finding similar results. Figure 11 comparatively shows the stellar wind yields divided by initial mass as a function of the initial mass of models implementing the V11 wind prescription. This enables a direct comparison of the relative wind yield for each stellar mass. ### Main sequence In Table 4 we explore the effects of MS mass loss on ejected masses of key isotopes. 
As previously discussed, the MS winds dominate the total yields, therefore we compare 2 sets of models with differing mass-loss rates on the MS (V11, top and V01, bottom). The ejected masses can easily be converted into the net wind yields, seen in Table 5, using the relation: \[m_{i}^{\mathrm{wind}}=EM_{lim}-X_{i}(M_{i}-M_{f}) \tag{5}\] where the product of the mass lost and initial abundance, provided in Table 1, is removed from the ejected mass of a given isotope. We only compare here the ejected masses of V11 and V01 models, since the mass is directly impacted by the wind prescriptions discussed. The most significant differences noted in Table 4 are of course \({}^{1}\)H and \({}^{4}\)He, where V11 models eject almost twice the amount of \({}^{4}\)He compared to V01 models, see also Fig.1. Moreover, the V11 models have higher ejected masses of \({}^{14}\)N, due to the surface exposed fusion-products in V11 models. Interestingly, the ejected \({}^{26}\)Al differs considerably with V11 producing up to 10 times more than V01 models. The difference in CNO masses highlights that V01 models do not reveal these core-processed materials early in the evolution. Moreover, the trace elements such as \({}^{20}\)Ne, \({}^{23}\)Na and \({}^{26}\)Al are significantly impacted by the wind prescription with reduced ejected masses for V01 models. We note that the most significant impact on \({}^{26}\)Al ejecta occurs at \(\mathrm{M}_{\mathrm{init}}=100\,\mathrm{M}_{\odot}\) where V01 models predict a much lower value than V11 models. This suggests that previous studies of VMS could have under-predicted the contribution of VMS in the enrichment of \({}^{26}\)Al as a result of wind-driving physics. ### Effect of VMS winds on the post-main sequence The post-MS evolution of VMS is severely impacted by the MS winds, dictating both the He-ZAMS mass and structure. From Fig.1 we can see that enhanced winds leave a much lower He-ZAMS mass than with standard winds. For models implementing the V11 wind (\(\mathrm{M}_{\mathrm{init}}\geq 100\,\mathrm{M}_{\odot}\)), all He-ZAMS masses are \(\sim 32\,\mathrm{M}_{\odot}\) with very similar chemical abundances, as can be seen by comparing the models with initial masses of \(300\,\mathrm{M}_{\odot}\) and \(100\,\mathrm{M}_{\odot}\) in Figs. 3 and 10 respectively. The ejected masses and net wind yields are also completely dominated by the MS wind. By comparing Tables 4 and 5 we see that only \(0.021\,\mathrm{M}_{\odot}\) of \({}^{20}\)Ne, \(\sim\)\(0.05\,\mathrm{M}_{\odot}\) of \({}^{14}\)N and \(0.0004\,\mathrm{M}_{\odot}\) of \({}^{26}\)Al is ejected in the post-MS for a \(100\,\mathrm{M}_{\odot}\) model. We find similar results for all initial masses which implement the V11 wind, showcasing that the MS wind dictates the total ejected masses and wind yields within \(\sim 0.1\)-\(0.001\,\mathrm{M}_{\odot}\), or within 1-5%. 
As a consequence of the MS wind deciding the He-ZAMS mass, by He-exhaustion the central \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \(M_{i}/\mathrm{M}_{\odot}\) & \(\dot{M}\) & \({}^{1}\)H & \({}^{4}\)He & \({}^{12}\)C & \({}^{14}\)N & \({}^{16}\)O & \({}^{20}\)Ne & \({}^{22}\)Ne & \({}^{22}\)Na & \({}^{26}\)Mg & \({}^{26}\)Al & \({}^{27}\)Al & \({}^{28}\)Si \\ \hline \hline 100 & V11 & 27.418 & 38.339 & 0.033 & 0.422 & 0.118 & 0.105 & 0.002 & 0.013 & 4.88E-3 & 2.271E-3 & 3.454E-3 & 0.039 \\ 200 & V11 & 67.603 & 97.182 & 0.041 & 1.234 & 0.148 & 0.261 & 0.003 & 0.038 & 0.011 & 7.391E-3 & 0.009 & 0.097 \\ 300 & V11 & 115.623 & 146.385 & 0.045 & 2.046 & 0.167 & 0.416 & 0.004 & 0.060 & 0.017 & 0.013 & 0.015 & 0.154 \\ 400 & V11 & 166.382 & 192.622 & 0.050 & 2.846 & 0.196 & 0.571 & 0.005 & 0.082 & 0.023 & 0.018 & 0.021 & 0.221 \\ 500 & V11 & 222.696 & 239.014 & 0.065 & 3.649 & 0.264 & 0.736 & 0.007 & 0.104 & 0.029 & 0.023 & 0.026 & 0.272 \\ \hline 100 & V01 & 22.805 & 12.598 & 0.030 & 0.166 & 0.117 & 0.058 & 2.248E-3 & 4.339E-3 & 3.170E-3 & 4.005E-4 & 1.491E-3 & 0.021 \\ 200 & V01 & 46.798 & 52.162 & 0.037 & 0.682 & 0.140 & 0.158 & 3.008E-3 & 0.020 & 7.549E-3 & 3.311E-3 & 5.328E-3 & 0.058 \\ 300 & V01 & 68.392 & 94.944 & 0.040 & 1.221 & 0.150 & 0.258 & 3.795E-3 & 0.038 & 0.011 & 6.796E-3 & 9.865E-3 & 0.096 \\ 400 & V01 & 91.733 & 140.717 & 0.045 & 1.794 & 0.164 & 0.366 & 4.867E-3 & 0.056 & 0.014 & 0.011 & 0.015 & 0.137 \\ 500 & V01 & 119.558 & 186.017 & 0.056 & 2.380 & 0.197 & 0.480 & 6.361E-3 & 0.075 & 0.018 & 0.015 & 0.020 & 0.180 \\ \hline \end{tabular} \end{table} Table 4: Ejected masses calculated with equation (4) for V11 and V01 models during core H-burning only. Initial masses and ejected masses are provided in solar mass units. Figure 6: An IMF-weighted contribution of the logarithmic stellar wind ejected masses shown for models including the V11 wind prescription. The IMF included here adopts the Salpeter (1955) relation where \(M^{-2.35}\). and surface abundances, He-wind yields and ejected masses are all the same regardless of initial mass (see Figs. 14 and 3 for instance). Interestingly, we find that the relative abundances of \({}^{12}\)C and \({}^{22}\)Ne increase dramatically during the core He-burning stage (see Fig. 15, M\({}_{\odot}\)\(\sim\) 25 M\({}_{\odot}\)) and therefore, 10 times more are ejected during the post-MS. We highlight that models at core He-exhausation show surface enrichment with a factor of 10 increase in \({}^{22}\)Ne abundance compared to \({}^{20}\)Ne, regardless of initial mass, suggesting that \({}^{22}\)Ne could be the dominant Ne-isotope observed in WRs. This isotope may be a key tracer of WR evolution, suggesting that the impact of VMS on WR studies could be broader than previously expected. Kobayashi et al. (2011) find that the solar \({}^{22}\)Ne/\({}^{20}\)Ne ratios are in good agreement with current GCE models for M \(<\) 40 M\({}_{\odot}\), which presents an interesting comparison to the VMS yields of \({}^{20}\)Ne and classical WR yields of \({}^{22}\)Ne. From Fig.1 we can see, by comparing the MS wind effect on the He abundance that, for the same current mass, for example 150 M\({}_{\odot}\), He-rich objects have stripped less of their envelope with low mass-loss rates (V01, Fig. 14) than objects with the same current mass which have a lower surface He abundance with higher mass-loss rates (V11, Fig. 2). 
Therefore, MS stars with \(Y=0.8\)-1.0 suggest that either (i) VMS of \(\sim\)300 M\({}_{\odot}\) do not exist, or (ii) V01 mass-loss rates are too low for these objects on the MS, since these surface abundances can act as a 'clock' as described in Higgins et al. (2022). Due to the large cores and strong winds, the surface evolution of H and He reveals the core evolution as well.
## 5 Contribution of VMS compared to O-stars
Canonical OB stars in the 8-20 M\({}_{\odot}\) range likely end their lives in various supernovae, ejecting heavy elements into their host galaxy and leaving a compact remnant. The most massive O stars (30 M\({}_{\odot}\) \(<\) M \(<\) 60 M\({}_{\odot}\)) eject material during their lives through stellar winds, though mostly in the form of \({}^{1}\)H and \({}^{4}\)He, with 10\({}^{-2}\) to 10\({}^{-6}\) M\({}_{\odot}\) of heavier elements like \({}^{12}\)C, \({}^{14}\)N, and \({}^{16}\)O (Hirschi et al., 2005). VMS can eject substantially higher masses of nuclear-processed elements, not only because of their enhanced winds, but because stripping their outer envelope early on the MS, combined with their CHE nature, can even expose the nuclear-burning core at the surface, leading to increased net yields of elements such as \({}^{26}\)Al, \({}^{14}\)N, \({}^{20}\)Ne and \({}^{23}\)Na. With such high initial masses and strong winds, VMS can dominate the yields of an entire IMF in their host galaxy or cluster. Table 5 includes the ejected masses and wind yields of 50 M\({}_{\odot}\) and 80 M\({}_{\odot}\) stars, demonstrating the magnitude of VMS ejecta compared to O stars. The 50 M\({}_{\odot}\) model ejects \(\sim\)100 times less of each isotope (\({}^{12}\)C, \({}^{14}\)N, \({}^{16}\)O, \({}^{22}\)Ne, \({}^{23}\)Na and \({}^{28}\)Si) than the 80 M\({}_{\odot}\) and 100 M\({}_{\odot}\) models. This means that a single VMS of \(\sim\)100 M\({}_{\odot}\) would produce the same wind yields as 100 massive O-stars. Crucially, the wind contribution of \({}^{26}\)Al is zero for a 50 M\({}_{\odot}\) star, suggesting that the wind contribution to the observed \({}^{26}\)Al in the Galaxy is dominated by VMS. Table 14 shows that the net yields of a 50 M\({}_{\odot}\) star, relative to the initial abundances, are also negligible or negative as opposed to those of the VMS, indicating that O stars do not replenish their host galaxies in the way that VMS do. We reiterate that the 50 M\({}_{\odot}\) model would represent a standard O star, while the 80-100 M\({}_{\odot}\) models are in the transition region and show properties of both O stars and VMS. While the 80-100 M\({}_{\odot}\) models are not fully mixed (CHE) like the models with M \(\geq\) 200 M\({}_{\odot}\), they do experience enhanced winds. As a result, the 80 M\({}_{\odot}\) and 100 M\({}_{\odot}\) stars eject similar amounts of each isotope (on the same order as VMS). Interestingly, these models eject more \({}^{12}\)C and \({}^{16}\)O than their more massive counterparts. This is because these stars have lost less mass on the MS and do not expose their cores during core H-burning, leaving increased amounts of \({}^{4}\)He to be processed into \({}^{12}\)C and \({}^{16}\)O during the post-MS (now as stripped stars), where large amounts of these elements are then ejected as they are produced. In contrast to previous work by Martinet et al. (2022) and Brinkman et al. (2019), we find that the most massive stars are responsible for ejecting the primary contribution of \({}^{26}\)Al, due to the implementation of enhanced winds. 
Where previous studies have suggested that evolved, WR stars are responsible for the enrichment of \({}^{26}\)Al, we find that the post-MS stage produces only \(\lesssim\) 5% of the total wind yields, and therefore, WR winds are not the dominant polluters of \({}^{26}\)Al to their host environments. In fact, Martinet et al. (2022) adopt the higher mass-loss rates of Nugis & Lamers (2000), designed for stripped WR stars, in their VMS models when the surface H abundance falls below 40%, leading to higher \({}^{26}\)Al yields than would be expected for the MS-winds of O stars, (Vink et al., 2001). We compare our enhanced wind models with the VMS models of Martinet et al. (2022) finding that our 80 M\({}_{\odot}\) model yields a factor of 10 more \({}^{26}\)Al than the 85 M\({}_{\odot}\) model from Mar \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \(M_{\rm i}\)/M\({}_{\odot}\) & \({}^{1}\)H & \({}^{4}\)He & \({}^{12}\)C & \({}^{14}\)N & \({}^{16}\)O & \({}^{20}\)Ne & \({}^{22}\)Ne & \({}^{23}\)Na & \({}^{26}\)Mg & \({}^{26}\)Al & \({}^{27}\)Al & \({}^{28}\)Si \\ \hline \hline 50 & 5.698 & 2.105 & 0.014 & 4.85E-3 & 0.052 & 0.013 & 1.04E-3 & 2.25E-4 & 5.85E-4 & 0 & 3.03E-4 & 4.60E-3 \\ 80 & 22.131 & 33.552 & 4.788 & 0.268 & 1.337 & 0.099 & 0.195 & 0.0129 & 6.113E-3 & 1.33E-3 & 3.35E-3 & 0.036 \\ 100 & 27.617 & 51.069 & 3.203 & 0.476 & 0.808 & 0.130 & 0.135 & 0.019 & 6.54E-3 & 2.68E-3 & 4.70E-3 & 0.049 \\ 200 & 67.670 & 109.516 & 2.973 & 1.288 & 0.767 & 0.286 & 0.126 & 0.043 & 0.013 & 7.77E-3 & 0.011 & 0.107 \\ 300 & 115.687 & 158.687 & 3.061 & 2.101 & 0.822 & 0.444 & 0.130 & 0.066 & 0.019 & 0.013 & 0.016 & 0.165 \\ 400 & 169.343 & 206.126 & 3.125 & 2.903 & 0.880 & 0.602 & 0.134 & 0.088 & 0.025 & 0.018 & 0.022 & 0.223 \\ 500 & 222.756 & 251.503 & 3.175 & 3.703 & 0.935 & 0.761 & 0.137 & 0.109 & 0.031 & 0.023 & 0.028 & 0.282 \\ \hline 50 & -5.752-5 & 5.99E-7 & -7.87E-7 & 3.64E-8 & -1.24E-10 & -2.55E-11 & -3.01E-11 & 2.89E-11 & -1.16E-12 & 0 & -5.99E-13 & -9.08E-12 \\ 80 & -22.932 & 16.906 & 4.675 & 0.230 & 0.928 & -3.56E-3 & 0.186 & 0.0111 & 1.49E-3 & 1.33E-3 & 9.47E-4 & -4.69E-5 \\ 100 & -32.668 & 28.800 & 3.051 & 0.425 & 0.261 & -0.006 & 0.124 & 0.016 & 3.49E-4 & 2.68E-3 & 1.49E-3 & -9.76E-6 \\ 200 & -64.112 & 60.617 & 2.640 & 1.176 & -0.435 & -0.014 & 0.101 & 0.038 & -7.45E-4 & 7.77E-3 & 3.55E-3 & 1.06E-5 \\ 300 & -87.268 & 83.717 & 2.548 tinet et al. (2022), while our 200 M\({}_{\odot}\) model ejects 15 times more \({}^{26}\)Al than their comparable 180 M\({}_{\odot}\) model. Finally, we compare directly with their 300 M\({}_{\odot}\) model finding that our enhanced wind models yield 4 times more when compared with the implementation of WR winds on the MS. Interestingly, while binary stellar models have been suggested to eject more \({}^{26}\)Al due to stripping of enriched material, our single 80 M\({}_{\odot}\) model still ejects 10 times more than the 80 M\({}_{\odot}\) primary component from Brinkman et al. (2019, 2021). These comparisons prove key in reproducing the \({}^{26}\)Al-rich material in our Galaxy, and that the most appropriate wind physics is required to provide accurate constraints on the chemical yields from the most massive stars. 
While our models do not treat the ground and isomeric states of \({}^{26}\)Al separately, the branching ratio of these states is approximately 77% for the ground state (\({}^{26}\)Al\({}_{\rm{g}}\)) and 23% for the isomeric state in massive stars; we therefore estimate that our \({}^{26}\)Al yields are upper limits, with a potential uncertainty of approximately 23%, in contrast to the orders-of-magnitude increase in \({}^{26}\)Al yields from VMS with enhanced winds. This inconsequential uncertainty does not significantly change the overabundance of \({}^{26}\)Al ejected by VMS, and in comparison to lower mass stars and other works which eject factors of 10 less, the relative uncertainty in our treatment of the ground and isomeric states does not impact our conclusion that VMS winds are the primary donors responsible for \({}^{26}\)Al-enrichment in the Galaxy (Vink et al., 2015). We note that while VMS likely form black holes at the end of their lives and as such do not produce supernova yields, for lower-mass OB stars of \(\sim\) 15 M\({}_{\odot}\) the supernova ejecta may also be included in the yield calculation. In Hirschi et al. (2005) supernova yields and wind yields are both included, providing a complete overview of ejected isotopes in this mass range. Due to the supernova contribution, lower mass (12-25 M\({}_{\odot}\)) models yield significant amounts of \({}^{4}\)He, \({}^{12}\)C and \({}^{16}\)O, with IMF-weighted values ranging from 10\({}^{-2}\) to 10\({}^{-3}\). We find that, in comparison to our wind yields, the isotopes dominated by lower mass supernova progenitors would be \({}^{12}\)C, \({}^{16}\)O and \({}^{20}\)Ne. GCE models for a range of metallicities from Kobayashi et al. (2020), accounting for SNe only, can reproduce the \({}^{16}\)O, \({}^{24}\)Mg, and heavier elements observed in galaxies of varied Z. However, some elements are overproduced, mainly at the second and third s-process peaks, which depends on the remaining C abundance prior to the SNe. They find that C, N and \(\alpha\)-elements may be ejected prior to collapse for massive stars, in order to best reproduce the observed galactic evolution trend. Similarly, Limongi & Chieffi (2018) present IMF-weighted yields for Z\({}_{\odot}\) including the wind and supernova contributions of rotating models, where the 13-25 M\({}_{\odot}\) models include supernova yields while higher mass models only include wind yields. Their IMF-weighted total yields show that \({}^{12}\)C, \({}^{16}\)O and \({}^{20}\)Ne are in good agreement with observations, suggesting that, due to the IMF, SNe dominate the production of these specific isotopes (see also Nomoto et al., 2013). However, \({}^{14}\)N and \({}^{26}\)Al are underproduced, and some heavy isotopes are overestimated (for example Ga, Ge, As, and Rb). The total (wind and supernova) \({}^{12}\)C yields of a 20 M\({}_{\odot}\) model from Limongi & Chieffi (2003, 2018) are in line with our 50 M\({}_{\odot}\) wind yields, but are a factor of 10 lower than our VMS wind yields, showcasing that an individual VMS will eject significantly more enriched material, though due to their scarcity VMS will not dominate the \({}^{12}\)C production of an entire population. We investigate the contribution of O stars and VMS to the net yields and ejected masses of their host galaxy, applying an IMF from Salpeter (1955). We compare with a top-heavy IMF from Schneider et al. (2018), which was found for the 30 Dor region of the LMC. 
Since we calculate models of VMS in this work, we compare with this IMF relation in order to test the effect of VMS on IMF-weighted ejected masses, however we note that by comparing Figs. 6 and 8, we find little difference in the IMF-weighted yields. We do not infer an upper mass in our IMF as in Maeder (1992) or Ritter et al. (2018) since the 'effective' upper mass limit as explored by Vink (2018) relies on a number of uncertain properties such as the pre-MS accretion rate and mass-loss rate, the ignition of the H core as a function of the star formation process, and relative to each of these properties - the host Z content. Table 6 shows the IMF-weighted wind yields (top), calculated as, \[m_{i}^{\rm{IMF}}=m_{im}^{\rm{wind}}\times M_{i}^{-1.9} \tag{6}\] for stars with initial masses ranging from 50-500 M\({}_{\odot}\). We find that VMS make a substantial contribution to the net yields, even when weighted by an IMF. For instance, when compared to the wind yields of lower mass (12-40 M\({}_{\odot}\)) stars from Hirschi et al. (2005) we find that the contribution of VMS winds results in 10 times more ejected mass of \({}^{14}\)N than the wind contribution of O stars (12-60 M\({}_{\odot}\)). In fact, the positive \({}^{12}\)C and \({}^{16}\)O yields of 60-100 M\({}_{\odot}\) compared to \begin{table} \begin{tabular}{c c c c c c c c c} \hline \(M_{\rm{I}}\)/M\({}_{\odot}\) & \({}^{4}\)He & \({}^{12}\)C & \({}^{14}\)N & \({}^{16}\)O & \({}^{20}\)Ne & \({}^{22}\)Ne & \({}^{22}\)Na & \({}^{26}\)Al \\ \hline \hline 50 & 3.54E-10 & -4.66E-10 & 2.15E-11 & -7.33E-14 & -1.51E-14 & -1.78E-14 & 1.71E-14 & 0 \\ 80 & 4.09E-03 & 1.13E-03 & 5.57E-05 & 2.25E-04 & -8.63E-07 & 4.50E-05 & 2.69E-06 & 3.22E-07 \\ 100 & 4.56E-03 & 4.84E-04 & 6.74E-05 & 4.14E-05 & -9.51E-07 & 1.97E-05 & 2.54E-06 & 4.25E-07 \\ 200 & 2.57E-03 & 1.12E-04 & 4.99E-05 & -1.85E-05 & -5.95E-07 & 4.29E-06 & 1.61E-06 & 3.30E-07 \\ 300 & 1.65E-03 & 5.01E-05 & 3.79E-05 & -2.04E-05 & -3.73E-07 & 1.81E-06 & 1.12E-06 & 2.56E-07 \\ 400 & 1.18E-03 & 2.77E-05 & 3.02E-05 & -1.86E-05 & -2.84E-07 & 9.44E-07 & 8.76E-07 & 2.05E-07 \\ 500 & 9.13E-04 & 1.71E-05 & 2.54E-05 & -1.66E-05 & -2.23E-07 & 5.44E-07 & 7.07E-07 & 1.71E-07 \\ \hline 50 & 1.25E-03 & 8.28E-06 & 2.87E-06 & 3.08E-05 & 7.69E-06 & 6.18E-07 & 1.33E-07 & 0 \\ 80 & 8.13E-03 & 1.16E-03 & 6.49E-05 & 3.24E-04 & 2.40E-05 & 4.72E-05 & 3.12E-06 & 3.22E-07 \\ 100 & 8.09E-03 & 5.08E-04 & 7.54E-05 & 1.28E-04 & 2.06E-05 & 2.14E-05 & 3.01E-06 & 4.25E-07 \\ 200 & 4.65E-03 & 1.26E-04 & 5.47E-05 & 3.26E-05 & 1.21E-05 & 5.35E-06 & 1.83E-06 & 3.30E-07 \\ 300 & 3.13E-03 & 6.02E-05 & 4.13E-05 & 1.62E-05 & 8.73E-06 & 2.56E-06 & 1.30E-06 & 2.56E-07 \\ 400 & 2.35E-03 & 3.56E-05 & 3.30E-05 & 1.00E-05 & 6.85E-06 & 1.52E-06 & 1.00E-06 & 2.05E-07 \\ 500 & 1.87E-03 & 2.36E-05 & 2.76E-05 & 6.96E-06 & 5.67E-06 & 1.02E-06 & 8.12E-07 & 1.71E-07 \\ \hline \end{tabular} \end{table} Table 6: IMF-weighted net yields (top) and ejected masses (bottom) calculated with equation (6), for V11 models over the complete evolution until core O-exhaustion. We adopt the IMF of Schneider et al. (2018) where M\({}^{-1.90}\). the negative yields of these 12-40 M\({}_{\odot}\) stars. Table 6 demonstrates the predominant effect that VMS have on their host environment compared to standard O stars in terms of their IMF-weighted yields (top) and IMF-weighted ejected masses (bottom). 
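As a minimal sketch of the IMF weighting in equation (6), the snippet below multiplies a net wind yield by \(M_{\rm init}^{-1.9}\); applied to the \({}^{26}\)Al net yield of the 100 M\({}_{\odot}\) V11 model from Table 5, it reproduces the corresponding entry of Table 6. The exponent \(-2.35\) would instead give the Salpeter (1955) weighting used in Fig. 6.

```python
def imf_weighted(yield_msun, m_init, alpha=-1.90):
    """Eq. (6): IMF-weighted yield, with alpha = -1.90 for the
    Schneider et al. (2018) IMF adopted in Table 6."""
    return yield_msun * m_init ** alpha

# Net 26Al wind yield of the 100 Msun V11 model (Table 5): 2.68E-3 Msun.
print(imf_weighted(2.68e-3, 100.0))  # ~4.25e-07, matching the Table 6 entry
```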
We find that our VMS models still contribute 10 times more mass than a 50 M\({}_{\odot}\) star when weighted by an IMF. Since they lose \(\sim\)90% of their mass over their lifetime, they can return significant amounts of mass to their host galaxy (we refer back to the total ejected masses of Table 5). Furthermore, while the 50 M\({}_{\odot}\) stars may be \(\sim\)100 times more abundant, they eject \(\sim\)10 times less mass of \({}^{4}\)He, \({}^{12}\)C and \({}^{16}\)O than the IMF-weighted VMS, suggesting that VMS ejecta are a significant provider of enriched material to their environments. Interestingly, the transition-point model with M\({}_{\rm init}\) = 80 M\({}_{\odot}\) ejects more \({}^{12}\)C than the 100-500 M\({}_{\odot}\) stars. We also note that while the contribution of SNe ejecta of lower mass stars will play a role at t \(>\) 6 Myr, for young clusters the ejected masses and net yields will be dominated by winds from VMS for t \(\approx\) 0-4 Myr.
## 6 Conclusions
The stellar wind contribution of VMS is investigated in this work, with enhanced mass-loss rates appropriate for stars which are observed to have optically thick winds. We have provided a comparison of the resulting ejected masses and net wind yields when implementing these enhanced winds with the previously adopted standard O star winds. We present the nucleosynthesis of VMS throughout their evolution, from core H-burning until core O-exhaustion, calculated with a large nuclear network comprising 92 isotopes. The dominant effects are explored during the MS evolution, with consequences for the post-MS. We consider the impact of stellar winds in lower Z environments for a subset of models. Finally, we evaluate the contribution of VMS winds compared with standard O stars, with IMF-weighted yields and a comparison of ejected masses for M = 50 M\({}_{\odot}\) and M \(>\) 100 M\({}_{\odot}\). On the MS, 95% of the total wind yields are produced, compared to just 5% ejected on the post-MS. This showcases the dominance of the MS winds of VMS when compared to evolved stars. We compare the effects of enhanced, optically thick winds as opposed to standard, optically thin winds on the MS and find that VMS with enhanced winds eject up to 10 times more of the H-burning products \({}^{14}\)N, \({}^{20}\)Ne, \({}^{23}\)Na, and \({}^{26}\)Al than VMS with standard winds. During the entire evolution of VMS, enhanced winds yield 10 times more \({}^{14}\)N than M \(\lesssim\) 50 M\({}_{\odot}\) O stars, but more importantly they yield positive amounts of \({}^{4}\)He, \({}^{12}\)C, and \({}^{22}\)Ne relative to their initial abundances, when compared to O stars which do not replenish their environments via stellar winds. In fact, single stars with initial masses below 50 M\({}_{\odot}\) do not eject any \({}^{26}\)Al, but VMS eject \(10^{-2}\) to \(10^{-3}\,\mathrm{M}_{\odot}\) of \({}^{26}\)Al and are likely responsible for the significant mass of \({}^{26}\)Al observed in our Galaxy. Moreover, we show that a 100 M\({}_{\odot}\) star with enhanced winds ejects 100 times more \({}^{12}\)C, \({}^{16}\)O, and \({}^{28}\)Si than a 50 M\({}_{\odot}\) star. We also find that the ejected masses of \({}^{20}\)Ne, \({}^{23}\)Na and \({}^{26}\)Al increase with increasing initial mass. This suggests that the presence of H-burning products such as \({}^{20}\)Ne, \({}^{23}\)Na and \({}^{27}\)Al seen in globular clusters (Gratton et al., 2004; Bastian & Lardo, 2018) points towards VMS. 
Although our models are computed for \(Z_{\odot}\), whereas globular clusters typically have low Z ([Fe/H] = -1.5), there is no reason why the nucleosynthetic production of Na or Al would be affected. Whether this material is ejected in sufficient quantities to explain the observed anti-correlations in globular clusters remains to be seen. While mass-loss rates are expected to be reduced at lower Z, and VMS would normally not be considered numerous enough to pollute globular clusters with sufficient material, we have shown here that the total mass loss is an order of magnitude higher than previously considered. Therefore, for more definitive answers on the role of VMS as polluters in globular clusters, we need to consider VMS models with appropriate mass-loss scalings, such as the recent low-Z mass-loss framework of Sabhahit et al. (2023). Interestingly, we discover that transition stars in the intermediate mass range M\({}_{\rm init}\) = 80-100 M\({}_{\odot}\) eject more \({}^{12}\)C and \({}^{16}\)O than higher mass stars, as they do not experience CHE and do not lose a significant amount of \({}^{4}\)He before it is processed on the post-MS. Additionally, we show that VMS (M \(\gtrsim\) 100 M\({}_{\odot}\)) produce the same He-ZAMS mass and surface composition regardless of initial mass when implementing enhanced stellar winds. Finally, when weighted by an IMF, uncovering the realistic contribution of VMS, we find that a 100 M\({}_{\odot}\) model with enhanced winds still ejects 10 times more \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N, \({}^{16}\)O and \({}^{20,22}\)Ne than a 50 M\({}_{\odot}\) star. Our conclusions reflect the significant impact that VMS winds have on their host galaxy or young cluster. The vast amount of mass lost already on the MS illustrates the presiding role that VMS winds have in replenishing their environments with reprocessed material, such as \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N, and \({}^{16}\)O. In fact, adopting the appropriate wind prescription for VMS in both stellar evolution calculations and GCE simulations is crucial for providing accurate yields and ejected masses, as well as for many other stellar and galactic properties highlighted throughout this work. We find that, by adopting either enhanced winds or standard O star winds, the ejected masses and post-MS stellar masses can differ by \(\gtrsim\) 100 M\({}_{\odot}\). Finally, the winds of VMS prove to be the dominant source of \({}^{26}\)Al, with 10-100 times more mass ejected by VMS than by O stars or evolved WR stars in a given population, showcasing that VMS may be responsible for the enrichment of observed \({}^{26}\)Al in the Galaxy and should be considered in future work.
## Acknowledgements
The authors acknowledge the MESA authors and developers for their continued revisions and public accessibility of the code. JSV, AML, and ERH are supported by STFC funding under grant number ST/V000233/1 in the context of the BRIDGCE UK Network. RH acknowledges support from STFC, the World Premier International Research Centre Initiative (WPI Initiative), MEXT, Japan, and the IReNA AccelNet Network of Networks (National Science Foundation, Grant No. OISE-1927130). This article is based upon work from the ChETEC COST Action (CA16117) and the European Union's Horizon 2020 research and innovation programme (ChETEC-INFRA, Grant No. 101008324).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
2304.07130
OPI at SemEval 2023 Task 9: A Simple But Effective Approach to Multilingual Tweet Intimacy Analysis
This paper describes our submission to the SemEval 2023 multilingual tweet intimacy analysis shared task. The goal of the task was to assess the level of intimacy of Twitter posts in ten languages. The proposed approach consists of several steps. First, we perform in-domain pre-training to create a language model adapted to Twitter data. In the next step, we train an ensemble of regression models to expand the training set with pseudo-labeled examples. The extended dataset is used to train the final solution. Our method was ranked first in five out of ten language subtasks, obtaining the highest average score across all languages.
Sławomir Dadas
2023-04-14T13:49:28Z
http://arxiv.org/abs/2304.07130v1
# OPI at SemEval 2023 Task 9: A Simple But Effective Approach to Multilingual Tweet Intimacy Analysis ###### Abstract This paper describes our submission to the SemEval 2023 multilingual tweet intimacy analysis shared task. The goal of the task was to assess the level of intimacy of Twitter posts in ten languages. The proposed approach consists of several steps. First, we perform in-domain pre-training to create a language model adapted to Twitter data. In the next step, we train an ensemble of regression models to expand the training set with pseudo-labeled examples. The extended dataset is used to train the final solution. Our method was ranked first in five out of ten language subtasks, obtaining the highest average score across all languages. ## 1 Introduction Intimacy can be expressed in language in a variety of ways. The degree of intimacy in an utterance is indicated by both thematic and stylistic features, often subtle and difficult to quantify automatically. One of the most apparent aspects of intimacy is self-disclosure. Sharing personal details about oneself or one's life can create a sense of intimacy. This information can relate to both factual as well as emotional spheres, addressing matters such as feelings, goals, dreams, or fears. Other features which may indicate intimacy involve the use of certain types of terms or phrases, especially those creating a sense of closeness and connection between the author and the reader. Automatic identification and quantification of intimacy in natural language is a challenging problem, with a difficulty similar to automatic emotion recognition. Both tasks involve measuring inherently subjective and ambiguous aspects. Intimacy can be influenced by a variety of factors such as the context, culture, and personal experiences of the individual. Intimacy analysis in multilingual text presents additional challenges due to differences in language structure, cultural norms, and expression of emotions across different languages. So far, this topic has not received much attention. Pei and Jurgens (2020) conducted an intimacy analysis of questions from social media, books, and movies. In the study, they created a question dataset and examined the performance of automatic intimacy prediction using methods such as logistic regression and transformer-based language models (Devlin et al., 2019; Liu et al., 2019). The multilingual intimacy analysis task was a part of SemEval 2023. The goal of the task was to measure the intimacy of Twitter posts in ten languages (Pei et al., 2023). The organizers provided a training set of 9,491 tweets and a test set of 13,797 tweets, in which each sample was annotated with an intimacy score ranging from 1 to 5. The training data included texts in only six of the ten languages, while the evaluation was performed on all ten. The task thus tested the performance of the submitted solutions for both standard fine-tuning and zero-shot prediction. The metric selected to evaluate the solutions was the Pearson correlation coefficient. The systems were ranked according to the correlation value for the entire test set, as well as for subsets in each language. This paper presents our solution to the multilingual tweet intimacy analysis shared task. The proposed approach is a combination of domain adaptation and semi-supervised learning. We first train a transformer language model on a large corpus of multilingual tweets and then create an ensemble of regression models to expand the training set with pseudo-labeled examples. 
Our method achieved the best score in five out of ten language subtasks, the highest number among all participants. According to the Pearson correlation calculated for the entire dataset, the proposed method was ranked third. ## 2 System description Our solution for the multilingual tweet intimacy analysis task can be summarized in the following three steps: 1. We adapt a transformer language model to the problem domain by fine-tuning it on a large corpus of tweets in multiple languages. The model trained by us has been made publicly available.1 Footnote 1: [https://huggingface.co/sdadas/xlm-roberta-large-twitter](https://huggingface.co/sdadas/xlm-roberta-large-twitter) 2. We employ the fine-tuned language model to train an ensemble of regressors. These models are then used to label new data, which we append to the original training set. 3. We train a new ensemble on the expanded training set, which is used to generate the final predictions. Figure 1 shows our approach on a diagram. In the following sections, we explain the steps of this process in detail. ### Domain adaptation The importance of adapting language models to domains and tasks has been highlighted in the scientific literature in recent years (Howard and Ruder, 2018; Gururangan et al., 2020; Ramponi and Plank, 2020). By further pre-training the language model on data from the target domain, the model can better capture the language patterns and nuances specific to that domain, resulting in improved accuracy and performance. Additionally, by leveraging knowledge and patterns learned from the source domain, the language model can be trained more effectively on a supervised task, needing fewer data samples. In the case of Twitter data, this is particularly relevant, as the community on this platform uses a specific language, which differs from the typical texts on which publicly available language models have been trained. It is characterized by the use of non-standard abbreviations, acronyms, and truncated words, making the language more informal and less structured. It is also common practice to replace words or phrases with hashtags. Additionally, due to the limited character count and informal nature of Twitter, users may not always adhere to traditional spelling and grammar rules. Common deviations include the omission of articles and prepositions, the use of contractions and slang, and the omission of punctuation. The basis of our solution is the XLM RoBERTa large model (Conneau et al., 2020). It is a transformer-based language model, trained on a dataset of 100 languages. In order to adapt this model to the Twitter domain, we further optimized it utilizing masked language modeling (MLM) on a dataset of over 156 million tweets. 
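The in-domain MLM step can be outlined as follows. The paper does not state which training code was used for this stage, so the snippet below is only an illustrative sketch based on the Hugging Face Trainer; the corpus file name and the per-device batch size are placeholders (the reported effective batch size was 1024).

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")

# "tweets.txt" is a placeholder for the preprocessed multilingual tweet corpus.
corpus = load_dataset("text", data_files={"train": "tweets.txt"})["train"]
corpus = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="xlm-roberta-large-twitter",
    per_device_train_batch_size=32,   # effective batch size of 1024 in the paper
    learning_rate=2e-5,               # peak learning rate
    lr_scheduler_type="polynomial",
    warmup_ratio=0.03,                # roughly 6% of the first of two epochs
    num_train_epochs=2,
)

Trainer(model=model, args=args, train_dataset=corpus,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)).train()
```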
The dataset \begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline \multirow{2}{*}{**Language**} & \multicolumn{2}{c|}{**In-domain**} & \multicolumn{2}{c}{**Pseudo-labeled**} \\ & \multicolumn{1}{c|}{**pre-training data**} & \multicolumn{1}{c}{**training data**} \\ \hline English (EN) & 79.4m & 50.9\% & 37.2t & 12.7\% \\ Spanish (ES) & 22.4m & 14.4\% & 49.7t & 17.0\% \\ Portuguese (PT) & 16.3m & 10.5\% & 42.9t & 14.7\% \\ Italian (IT) & 2.5m & 1.6\% & 37.6t & 12.9\% \\ French (FR) & 6.6m & 4.2\% & 34.6t & 11.8\% \\ Chinese (ZH) & 4.0m & 2.5\% & 25.4t & 8.7\% \\ Hindi (HI) & 2.7m & 1.7\% & 23.9t & 8.2\% \\ Dutch (NL) & 1.1m & 0.7\% & 17.5t & 6.0\% \\ Korean (KO) & 8.3m & 5.3\% & 12.6t & 4.3\% \\ Arabic (AR) & 12.9m & 8.2\% & 11.2t & 3.8\% \\ \hline **Total** & 156.2m & 100\% & 292.5t & 100\% \\ \hline \hline \end{tabular} \end{table} Table 1: Distribution of tweets by language in the pre-training and expanded training dataset. The number of tweets (in millions or thousands) and the percentage of each language in the datasets are shown. Figure 1: Our solution to the multilingual tweet intimacy analysis task. First, we further pre-train an existing language model on a large corpus of tweets. Next, we train an ensemble, which is used to label additional data. The expanded dataset is utilized to create a new set of models, which are the final solution to the task. was derived from _archive.org_ Twitter stream collection2, from which we extracted data spanning four months, from May to August 2021. Next, we discarded all posts shorter than 20 characters and written in languages other than those covered by the shared task. We also applied the same preprocessing procedure as the authors of XLM-T (Barbieri et al., 2022), replacing all usernames with the string _@user_ and all URLs with _http3_. Table 1 shows the number and percentage of records from each language included in the pre-training dataset. Footnote 2: [https://archive.org/details/twitterstream](https://archive.org/details/twitterstream) Footnote 3: [https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base) The model was trained for two epochs. We used a learning rate scheduler with warmup and polynomial decay. The peak learning rate was set to 2e-5 and the warmup phase lasted for 6% iterations of the first epoch. We trained the model with a batch size of 1024 on eight Nvidia V100 graphic cards for two weeks. ### Dataset expansion The second stage of our solution was to expand the training set by automatically labeling additional data. For this, we employed a method known as pseudo-labeling (Lee, 2013). It is a semi-supervised learning technique in which a model is first trained on a small set of labeled data, and then used to predict the labels of the remaining unlabeled data. These predicted labels are then added to the training set as if they were actual labels, creating a larger dataset that can be used to retrain the model. In our approach, an ensemble of five regression models was used to predict the scores for unlabeled examples. The procedure we used involved dividing the original training set into five equal parts. This created five possible data splits, with each split consisting of 80% training data and 20% intended for validation. For each such split, we trained five regression models with different random seeds and then selected the model achieving the highest Pearson correlation value on the validation part. 
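The split-and-select step just described can be sketched as follows; `fit_fn` is a stand-in for fine-tuning the Twitter-adapted regressor on the given indices with the given random seed, and is an assumption rather than the authors' actual training code.

```python
import numpy as np
from scipy.stats import pearsonr

def build_ensemble(texts, scores, fit_fn, n_splits=5, n_seeds=5):
    """For each of the five 80/20 splits, train five regressors with
    different seeds and keep the one with the highest validation Pearson r."""
    folds = np.array_split(np.random.permutation(len(texts)), n_splits)
    ensemble = []
    for k in range(n_splits):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        candidates = [fit_fn(train_idx, seed) for seed in range(n_seeds)]
        best = max(candidates,
                   key=lambda m: pearsonr(m.predict([texts[i] for i in val_idx]),
                                          [scores[i] for i in val_idx])[0])
        ensemble.append(best)
    return ensemble  # the ensemble's score for a tweet is the mean of the five outputs
```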
The process thus consisted of training a total of 25 models (5 splits, 5 models per split), from which the best five were selected, one for each split. From these models, an ensemble was created. The individual models were fine-tuned with MSE loss and a batch size of 32 for three epochs. A learning rate scheduler with warmup and polynomial decay was used with a peak learning rate of 1e-5. The trained ensemble was used for pseudo-labeling. For each sample from a corpus of 156 million tweets, we calculated the intimacy score as the mean value of the predictions returned by the models. In addition, we also calculated the standard deviation of each score. Our intention was to include only the samples in the expanded dataset, which were predicted with high confidence. Accordingly, we set the threshold at 0.05 and only tweets with a standard deviation below this value were selected. In order to create a more balanced dataset, we also imposed additional limits on the number of records with similar characteristics. The number of examples from the same language and having the same range of intimacy scores (e.g. from 2.0 to 3.0) could not exceed 10 thousand. Using the described procedure, we were able to extract the dataset of over 292 thousand pseudo-labeled examples, the distribution of which is shown in Table 1. ### Generating predictions In the last step, we add pseudo-labeled examples to the training dataset and create a new ensemble of regressors, which was used to generate the final results. The procedure for training the models is similar to the one previously described. Once again, we split the original dataset into five parts, one part of which we use for model validation. In this case, however, for training in addition to the other four parts of the original data, we also used the entire pseudo-labeled dataset. As before, we trained 25 models and selected the best one from each split to form the final ensemble. Predictions for the test dataset were calculated as the mean value of the outputs from the individual models. ## 3 Experiments and results This section contains a discussion of the official results of the multilingual tweet intimacy analysis task. We also conducted post-evaluation experiments using the gold labels provided by the organizers to analyze the results obtained by models different from the submitted solution. ### Official results The evaluation covered ten languages, six of which were present in the training data, whereas four appeared only in the test dataset. The set of seen languages included English, Spanish, Italian, Portuguese, French, and Chinese. The set of unseen languages, intended to test the performance of solutions in a zero-shot setting, included Hindi, Arabic, Dutch, and Korean. 45 teams participated in the shared task. Our solution was ranked third in the main classification. Our method scored high on all but two languages. The weaker points of our solution were French and Korean, on which we were ranked 13th and 12th, respectively. We won in five language-specific subtasks and placed second in one. We also obtained the highest average correlation value on all languages among the submitted solutions. The results of the top five ranked solutions according to the correlation value for the entire test set are shown in Table 2. Based on the results, we can observe a problem associated with the Pearson correlation coefficient, which was chosen as an evaluation metric. 
In the general case, the value of this coefficient for disjoint subgroups of the population may not necessarily be related to the value for the entire population. In the case of the discussed task, the results for individual languages are not fully aligned with the results on the entire test set. This can be examined by comparing the coefficient value for the whole dataset and the average value of the coefficient for all languages. In the case of the latter value, among the top participants, only our solution and the winning solution achieved high performance across languages. Although there were other participants who obtained an average value above 0.63, they were ranked lower, even outside of the top ten teams. For example, the 12th-placed team achieved the best score in two languages and was in the top three in six. The overall Pearson coefficient for this solution was only 0.587, while the average of the coefficients was 0.636. ### Post-evaluation results In post-evaluation experiments, we fine-tuned publicly available multilingual language models on the tweet intimacy analysis task, comparing their results with the results obtained by the submitted solution. In the experiment, we included the original XLM RoBERTa (Conneau et al., 2020) models in base and large sizes, as well as our version of the large model tuned on a corpus of 156 million tweets. We also utilized XLM-T models, published by Barbieri et al. (2022) as a part of their study on Twitter sentiment analysis. The authors trained XLM RoBERTa base model on a dataset of 198 million tweets, and then further tuned it on sentiment analysis datasets in eight languages. We evaluate both the pre-trained and fine-tuned versions of this model. The comparison is shown in Table 3. We demonstrate the performance of models trained only on the original training data, and those trained on the extended dataset. For each row in the table, a given \begin{table} \begin{tabular}{l|c|c c c c c c c c c c} \hline \hline **System** & **ALL** & **AVG** & **EN** & **ES** & **PT** & **IT** & **FR** & **ZH** & **HI** & **NL** & **KO** & **AR** \\ \hline Ohio State University & **0.616** & 0.635 & 0.758 & 0.770 & 0.689 & **0.739** & **0.726** & **0.756** & 0.226 & 0.623 & **0.414** & 0.643 \\ University of Zurich & **0.614** & 0.616 & 0.722 & 0.740 & 0.689 & 0.723 & 0.710 & 0.718 & 0.224 & 0.619 & 0.380 & 0.636 \\ \hline **Our system** & 0.613 & **0.638** & 0.749 & **0.775** & **0.702** & **0.743** & 0.695 & **0.763** & 0.238 & **0.679** & 0.370 & **0.663** \\ University of Tyumen & 0.599 & 0.621 & 0.717 & 0.740 & 0.684 & 0.734 & 0.708 & 0.721 & 0.242 & 0.639 & 0.361 & **0.662** \\ NetEase Inc & 0.599 & 0.619 & 0.728 & 0.746 & 0.699 & 0.735 & 0.701 & 0.734 & 0.223 & 0.640 & 0.333 & 0.652 \\ \hline \hline \end{tabular} \end{table} Table 2: The performance of five top-rated teams in the multilingual tweet intimacy analysis task according to the official results. We show the Pearson correlation value for the entire dataset (ALL), for individual language subtasks, and the average correlation value across all languages (AVG). Blue color indicates the best score in a given category among all participants, red color indicates the second-best score. 
\begin{table} \begin{tabular}{l|c c c c} \hline \hline **Model** & **Avg** & **StdDev** & **Max** & **Min** \\ \hline \multicolumn{4}{l}{**Training on the original dataset**} \\ \hline XLM-T (pre-trained) & 0.565 & \(\pm\)0.016 & 0.588 & 0.545 \\ XLM-T (sentiment) & 0.558 & \(\pm\)0.022 & 0.594 & 0.530 \\ XLM-R (base) & 0.537 & \(\pm\)0.008 & 0.545 & 0.522 \\ XLM-R (large) & 0.580 & \(\pm\)0.016 & 0.599 & 0.561 \\ XLM-R (ours) & 0.602 & \(\pm\)0.026 & **0.636** & 0.564 \\ \hline \multicolumn{4}{l}{**Training on the expanded dataset**} \\ \hline XLM-T (pre-trained) & 0.603 & \(\pm\)0.001 & 0.604 & 0.602 \\ XLM-T (sentiment) & 0.598 & \(\pm\)0.003 & 0.603 & 0.594 \\ XLM-R (base) & 0.590 & \(\pm\)0.003 & 0.595 & 0.588 \\ XLM-R (large) & 0.595 & \(\pm\)0.002 & 0.600 & 0.590 \\ XLM-R (ours) & 0.611 & \(\pm\)0.002 & 0.614 & 0.608 \\ \hline \multicolumn{4}{l}{**Submitted solution**} \\ \hline Individual models & 0.612 & \(\pm\)0.003 & 0.616 & **0.608** \\ Ensemble & **0.613** & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison of the submitted solution and fine-tuned language models in two ways: on the original training data and using the extended dataset. The results shown refer to the Pearson correlation values for the entire dataset. model was trained five times with different random seeds. The table shows the average value of the achieved results, as well as the standard deviation, maximum and minimum values. We can see that extending the dataset with pseudo-labeled examples yielded better average results in each case, and also significantly reduced the standard deviation of the results. Training the models on the original dataset appears to be unstable, giving varying results for different runs. Interestingly, one of the fine-tuned models achieved an overall Pearson correlation of 0.636, higher than any solution in the evaluation phase. The same model scored low on individual languages, performing worse in 8 out of 10 languages compared to the solution we submitted, which once again indicates a disparity between overall and individual scores. The variant of XLM RoBERTa adapted by us to the Twitter domain obtained the best average correlation values for both the original and extended datasets. This shows the effectiveness of pre-training on in-domain data, as the results achieved by our model are significantly better than those of the original XLM-R models. A second choice could be XLM RoBERTa large or pre-trained XLM-T, which also achieved solid results. On the other hand, the use of the ensemble in the final submission does not seem to yield a clear improvement over the individual models. The average score obtained by the standalone models is only 0.001 lower than the ensemble solution. ## 4 Conclusion In this paper, we described our solution for the multilingual tweet intimacy analysis shared task. Our system placed first in five out of the ten languages. The paper demonstrated a method for combining domain adaptation with semi-supervised learning. As part of our research, we trained and published a multilingual language model using a corpus of 156 million tweets. Building on this model, we fine-tuned an ensemble of regressors to extend the training dataset with pseudo-labeled examples. We also performed additional experiments, comparing the model to other publicly available multilingual models, in which our method proved to be more effective in predicting intimacy scores.
2306.16644
Probabilistic Linguistic Knowledge and Token-level Text Augmentation
This paper investigates the effectiveness of token-level text augmentation and the role of probabilistic linguistic knowledge within a linguistically-motivated evaluation context. Two text augmentation programs, REDA and REDA$_{NG}$, were developed, both implementing five token-level text editing operations: Synonym Replacement (SR), Random Swap (RS), Random Insertion (RI), Random Deletion (RD), and Random Mix (RM). REDA$_{NG}$ leverages pretrained $n$-gram language models to select the most likely augmented texts from REDA's output. Comprehensive and fine-grained experiments were conducted on a binary question matching classification task in both Chinese and English. The results strongly refute the general effectiveness of the five token-level text augmentation techniques under investigation, whether applied together or separately, and irrespective of various common classification model types used, including transformers. Furthermore, the role of probabilistic linguistic knowledge is found to be minimal.
Zhengxiang Wang
2023-06-29T03:02:04Z
http://arxiv.org/abs/2306.16644v2
# Probabilistic Linguistic Knowledge and Token-level Text Augmentation ###### Abstract This paper investigates the effectiveness of token-level text augmentation and the role of probabilistic linguistic knowledge within a linguistically-motivated evaluation context. Two text augmentation programs, REDA and REDA\({}_{NG}\), were developed, both implementing five token-level text editing operations: Synonym Replacement (SR), Random Swap (RS), Random Insertion (RI), Random Deletion (RD), and Random Mix (RM). REDA\({}_{NG}\) leverages pretrained \(n\)-gram language models to select the most likely augmented texts from REDA's output. Comprehensive and fine-grained experiments were conducted on a binary question matching classification task in both Chinese and English. The results strongly refute the general effectiveness of the five token-level text augmentation techniques under investigation, whether applied together or separately, and irrespective of various common classification model types used, including transformers. Furthermore, the role of probabilistic linguistic knowledge is found to be minimal. ## 1 Introduction Data serves as a crucial component in training high-performing and robust machine learning models that can effectively tackle real-world learning tasks. However, data availability is often unpredictable and not guaranteed. In the realm of supervised learning, the development of reliably deployable models typically requires the collection of vast amounts of annotated data, which is affordable only for a select few. In low-resource settings, in particular, the available data may be limited or entirely nonexistent. There are also situations where existing data is imbalanced for specific classes, causing models trained on such data to be easily biased towards classes with abundant training examples. This can potentially be harmful when the models are deployed. Practical considerations like these have given rise to data augmentation, a widely adopted strategy to mitigate the problems of scarce or imbalanced data. Data augmentation involves applying label-preserving transformations to existing data to generate novel labeled data. This approach has seen considerable success in various fields, such as image and speech recognition [1; 2; 3; 4; 5; 6; 7]. Text augmentation, a subcategory of data augmentation that focuses on augmenting text data, is a promising yet challenging domain within NLP [8; 9; 10; 11; 12; 13]. The challenge arises due to the lack of well-established methods that can consistently generate diverse and accurate augmented texts simultaneously. In contrast to images or speech, where physical features can be relatively easily manipulated without altering the label, text is highly sensitive to the arrangement and combination of words. For instance, to augment an image, one can rotate, crop, flip, or change its color specifications in a predetermined manner, while still assuming that the augmented images represent the same object as the original [1]. However, when augmenting text, one cannot merely replace and shuffle words in an automated fashion to generate paraphrases. It becomes evident that there is a need for foundational research exploring the factors that influence the effectiveness of text augmentation [9]. The primary objective of this study is to gain a deeper understanding of the effectiveness of token-level text augmentation within a linguistically-motivated evaluation context. 
Token-level text augmentation is assessed, as opposed to other more complex methods (see Sec 2), due to its applicability across various tasks and languages. The insights regarding its effectiveness may prove valuable in low-resource domains, where text augmentation is primarily employed in real-world scenarios. More specifically, this study aims to address the following two research questions: (1) How effective is token-level text augmentation? (2) Can the incorporation of probabilistic linguistic knowledge enhance the effectiveness of text augmentation? To address these two research questions, comprehensive experiments were conducted on a binary question matching classification task, involving both Chinese and English languages, at a reasonably large scale. The objective of this task is to predict whether two given questions share the same expressed intent. This type of task, which entails the classification of text pairs, is well-suited for evaluating text augmentation as it demands high fidelity of the augmented texts to the original texts to maintain label preservation. Conversely, since token-level text augmentation is not strictly paraphrastic, its success in such tasks serves as strong evidence of its overall effectiveness. To explore the impact of probabilistic linguistic knowledge, pretrained \(n\)-gram language models are also utilized to select augmented texts, which, in theory, should be statistically more likely and expected to be closer to natural texts or of higher quality. Consequently, it is anticipated that probabilistic linguistic knowledge will enhance the effectiveness of text augmentation. The paper proceeds as follows. Sec 2 reviews related works, while the augmentation methods and experimental settings of the study are detailed in Sec 3 and Sec 4, respectively. Sec 5 presents the results of the main experiments, and the findings of three supplementary experiments are reported in Sec 6. Sec 7 offers further discussion on the discoveries and provides a conclusion. ## 2 Related Works Over the years, three major types of text augmentation have been employed in NLP to generate label-preserving data [13]: token-level augmentation, sentence-level augmentation, and hidden-level augmentation. Token-level augmentation focuses on individual tokens and involves word replacements, which can be either dictionary-based [14] or embedding-based [15], as well as deletion, insertion, or shuffling of tokens in a random [16] or predefined [17; 18] manner. Sentence-level augmentation, on the other hand, typically entails paraphrasing at the sentence level. Back translation [19; 20; 21], a widely popular technique that involves translating a text into another language and then retranslating it back, exemplifies this approach. Additionally, researchers have utilized language models to generate novel data conditioned on given text or labels [22; 23; 24]. Lastly, hidden-level augmentation pertains to the manipulation of hidden representations, such as linear transformations, in order to create new perturbed representations [25; 26; 27]. Many of the aforementioned studies have reported slight yet inconsistent performance gains when training models with augmented data for their respective NLP tasks, mostly text classification tasks. A common explanation for any observed performance improvement is that the augmented data introduces noise to the original training data, thus preventing the trained models from overfitting [28]. This, in turn, improves their performance on test sets. 
A notable and widely cited example is provided by [16]. The paper employs four simple token-level text editing operations to augment train sets of varying sizes and demonstrates their general effectiveness in boosting model performance across five sentiment-related and text type classification tasks. Although claimed to be universal, these four text editing operations, which are also examined in this study (see Sec 3), have been found not to be consistently beneficial. More specifically, they have been shown to negatively impact model performance in more complex tasks, such as natural language inference and paraphrasing [13], and fail to consistently improve performance for transformers [29]. This study aims to enrich the existing literature on the effectiveness of token-level text augmentation by conducting comprehensive and fine-grained cross-linguistic experiments for an under-explored task. It additionally examines the role of probabilistic linguistic knowledge, also an under-explored yet fundamental question. ## 3 Augmentation methods ### Text augmentation techniques In this study, five token-level text augmentation techniques, or text editing operations, are employed: Synonym Replacement (SR), Random Swap (RS), Random Insertion (RI), Random Deletion (RD), and Random Mix (RM). The first four techniques were initially proposed by [16] as a simple but universal set of text augmentation techniques named as EDA (Easy Data Augmentation). For a single text edit, they work as follows. SR randomly replaces a word, where possible, with one of its randomly sampled synonyms (if more than one) based on a predefined dictionary. RS, on the other hand, swaps the positions of a random word pair within a text. RI inserts a random synonym immediately after an eligible target word, while RD deletes a word at random. For \(n\) text edits, these techniques are simply applied \(n\) times. Additionally, RM, introduced in [30], is a random combination of 2-4 of the other four techniques, resulting in a more diversified text. Given their randomness, these techniques are also referred to as random text perturbations [31]. The five text augmentation techniques are implemented in Python using a program called REDA (Revised EDA). In addition to incorporating an extra text editing operation (RM), REDA differs from EDA in three key aspects. Firstly, REDA prevents duplicates in the output text(s), which can occur when there are no synonyms available for replacement (SR) or insertion (RS) for words in the input text, or when the same words are replaced or swapped back during the SR and RS operations. Secondly, REDA does not preprocess the input text (e.g., removing stop words), as this is believed to have minimal impact and better aligns with the fundamental concept of random text perturbations that underlie these augmentation techniques. Lastly, REDA replaces only one word with its synonym at a given position per text edit, rather than replacing all occurrences, which are regarded as additional edits. In this study, the synonym dictionary for English is derived from WordNet [32], while for Chinese, it is obtained from multiple reputable sources through web scraping1. Furthermore, rather than applying a fixed number of text edits to texts of varying lengths, this study employs an _editing rate_, which adjusts the number of text edits proportionally to the text length. 
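To make these operations concrete, the sketch below shows simplified versions of three of them driven by an editing rate; it is not the released REDA implementation, and details such as duplicate filtering are omitted. The `synonyms` dictionary stands in for the WordNet-derived (or scraped Chinese) synonym resources.

```python
import random

def n_edits(tokens, rate):
    # Number of edits proportional to text length (Python rounding: round(0.5) -> 0).
    return round(len(tokens) * rate)

def random_swap(tokens, rate=0.2):
    out = tokens[:]
    for _ in range(n_edits(out, rate)):
        if len(out) < 2:
            break
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(tokens, rate=0.1):
    out = tokens[:]
    for _ in range(n_edits(out, rate)):
        if len(out) > 1:
            out.pop(random.randrange(len(out)))
    return out

def synonym_replacement(tokens, synonyms, rate=0.2):
    out = tokens[:]
    for _ in range(n_edits(out, rate)):
        candidates = [i for i, w in enumerate(out) if synonyms.get(w)]
        if not candidates:
            break
        i = random.choice(candidates)          # replace one occurrence per edit
        out[i] = random.choice(synonyms[out[i]])
    return out

print(random_swap("which city is the capital of france".split()))
```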
Footnote 1: [https://github.com/jaack-wang/Chinese-Synonyms](https://github.com/jaack-wang/Chinese-Synonyms)
### \(N\)-gram language model
An \(n\)-gram is a string of \(n\) words \(W_{1}^{n}=(w_{1},...,w_{n})\). An \(n\)-gram language model is a Markov model that estimates the probability (typically expressed in logarithmic terms [33]) of a given string \(W_{1}^{L}\) of length \(L\) (where \(L\geq n\)) by the product of the probabilities of all its \(n\)-long substrings. \[\log P(W_{1}^{L})\approx\log\prod_{i=1}^{L-n+1}P(W_{i}^{i+n-1})=\sum_{i=1}^{L-n+1}\log P(W_{i}^{i+n-1}) \tag{1}\] where \(P(W_{i}^{i+n-1})\) represents \(P(w_{i+n-1}|W_{i}^{i+n-2})\). Let \(C(W_{i}^{i+n-1})\) denote the frequency of occurrence of a string \(W_{i}^{i+n-1}\) in a much larger corpus used as the training data. The maximum-likelihood probability estimate for \(P(W_{i}^{i+n-1})\) is the relative frequency of \(W_{i}^{i+n-1}\) against its previous \((n-1)\) words \(W_{i}^{i+n-2}\) in counts: \[P(W_{i}^{i+n-1})\approx\frac{C(W_{i}^{i+n-1})}{C(W_{i}^{i+n-2})} \tag{2}\] Since both \(C(W_{i}^{i+n-1})\) and \(C(W_{i}^{i+n-2})\) can be 0 during the deployment of a pretrained \(n\)-gram language model, this leads to inaccurate or undefined probability estimates. Inspired by both Eq. (1) and Stupid Backoff [34], this study uses a non-discounting method that estimates the probability of an unseen \(n\)-gram by multiplying the probabilities of its two \((n-1)\)-grams together, as shown in Eq. (3). The method will continue to back off into unigrams if all higher-order \(n\)-grams do not occur in the training data, and unseen unigrams are simply assigned the same probability as one-off unigrams. \[P(W_{i}^{i+n-1})\approx\left\{\begin{array}{ll}\frac{C(W_{i}^{i+n-1})}{C(W_{i}^{i+n-2})},&\quad\text{if $C(W_{i}^{i+n-1})>0$}\\ P(W_{i}^{i+n-2})\,P(W_{i+1}^{i+n-1}),&\quad\text{otherwise}\end{array}\right\} \tag{3}\] In this study, I trained both the Chinese and English \(n\)-gram language models using \(n\)-grams up to 4-grams. The pretrained \(n\)-gram language models are utilized as a filter to select the \(k\) most likely outputs from the \(m\) possible outputs generated by REDA, where \(m\) is at least 20 times greater than \(k\) in this study. The program that combines REDA with an \(n\)-gram language model is denoted as REDA\({}_{NG}\).
## 4 Experimental settings
### Task and data
The task under analysis is a binary classification task over text pairs, commonly known as question matching. The aim is to predict whether a given question pair \((Q,Q^{\prime})\) expresses similar intents, which is a fundamental sub-task for question answering or, more broadly, a downstream task for semantic matching [35]. Two labels are used, with 0 denoting a negative match of \((Q,Q^{\prime})\) and 1 a positive match. This study considers two large-scale benchmark datasets for question matching. One is the Large-scale Chinese Question Matching Corpus (LCQMC, [35]) for Chinese. The other is the Quora Question Pairs Dataset (QQQD)2 for English. Footnote 2: [https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) For LCQMC, I reused the original train, development, and test sets as provided by [35]. For QQQD, three label-balanced data sets were created from its train set, since the test set is unlabeled for the online competition. The basic statistics about these two datasets are given in Table 1. 
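A compact sketch of the scoring defined by Eqs. (1)-(3) is given below. The `counts` structure, mapping \(k\)-gram tuples to training counts for \(k\) = 1 to 4, is an assumed representation of the pretrained models rather than the released implementation; REDA\({}_{NG}\) would score each of REDA's \(m\) candidate outputs this way and keep the \(k\) highest-scoring ones.

```python
import math

def logprob(words, counts, n=4):
    """Log-probability of a token sequence under Eqs. (1)-(3).
    counts[k] maps k-gram tuples to their training-corpus frequencies."""
    total_unigrams = sum(counts[1].values())

    def prob(gram):
        k = len(gram)
        if k == 1:
            # unseen unigrams get the same probability as one-off unigrams
            return counts[1].get(gram, 1) / total_unigrams
        if counts[k].get(gram, 0) > 0:
            return counts[k][gram] / counts[k - 1][gram[:-1]]   # Eq. (2)
        return prob(gram[:-1]) * prob(gram[1:])                 # Eq. (3) backoff

    return sum(math.log(prob(tuple(words[i:i + n])))            # Eq. (1)
               for i in range(len(words) - n + 1))
```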
### Classification models In the main experiments, four classic neural network models were chosen: Continuous Bag of Words (CBOW, [36]), Convolutional Neural Network (CNN, [37]), Gated Recurrent Units (GRU, [38]) and Long Short-Term Memory (LSTM, [39]). Since the focus here is to evaluate the effectiveness of token-level text augmentation and the role of probabilistic linguistic knowledge, the use of various classification models is not meant to contrast the learning difference among them, but rather to make the examination more comprehensive. Pretrained word embeddings were not utilized to simulate low-resource settings. For the same reason, transformers [40] were only used in the supplementary experiments, instead of the main ones. The models can also be divided into three groups, depending on the type of train sets they train on. _Baseline models_ refer to models that train on train sets without augmentation, i.e., train sets that only contain original training examples. Models that train on train sets augmented by REDA are called as _REDA models_, and similarly _REDA\({}_{NG}\) models_ are the ones that train on train sets augmented by REDA\({}_{NG}\). REDA models and REDA\({}_{NG}\) models are also called _augmented models_, since both are trained on augmented train sets. Augmented train sets contain augmented examples on the top of the original examples, based on which the augmented examples are produced. For convenience, the three respective types of train sets are simply called _baseline train sets_, _REDA train sets_, and _REDA\({}_{NG}\) train sets_. ### Training details The models were constructed in PaddlePaddle3, a deep learning framework developed by Baidu and trained on Baidu Machine Learning CodeLab's AI Studio with Tesla V100 GPU and 32G RAM. The models were trained using mini batches of size 64 with the objective of reducing cross-entropy loss. The Adam optimizer [41] with \begin{table} \begin{tabular}{l l l} \hline \hline **Split** & **LCQMC** & **QQQD** \\ & (Matched \& Mismatched) & (Matched \& Mismatched) \\ \hline Train & 238,766 & 260,000 \\ & (138,574 \& 100,192) & (130,000 \& 130,000) \\ \hline Dev & 8,802 & 20,000 \\ & (4,402 \& 4,400) & (10,000 \& 10,000) \\ \hline Test & 12,500 & 18,526 \\ & (6,250 \& 6,250) & (9,263 \& 9,263) \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the data splits for LCQMC and QQQD. 5e-4 learning rate was applied to assist training. The training time was consistently 3 epochs long, since most of the models overfitted the train sets within 3 epochs. Development sets were used for validation purposes. The basic structure for the classification models is simple and unified as follows. Each model begins with two separate embedding layers with the same embedding size to convert the input text pair (\(Q,Q^{\prime}\)) into two respective embedded sequences, \(\mathbf{Embd}_{Q}\) and \(\mathbf{Embd}_{Q^{\prime}}\). Then, \(\mathbf{Embd}_{Q}\) and \(\mathbf{Embd}_{Q^{\prime}}\) each pass through an encoder layer, whose structure is determined by the classification model in use, to obtain two encoded sequences, \(\mathbf{Enc}_{Q}\) and \(\mathbf{Enc}_{Q^{\prime}}\). The encoded sequences, \(\mathbf{Enc}_{Q}\) and \(\mathbf{Enc}_{Q^{\prime}}\), are concatenated along the last axis and then passed to a fully connected feed-forward network (FFN) that consists of two linear transformations with a tanh activation function in between. 
\[FFN(\mathbf{x})=tanh(\mathbf{x}\mathbf{W}_{1}+b_{1})\mathbf{W}_{2}+b_{2} \tag{4}\] For CBOW, the encoder is the point-wise summation of the embeddings of tokens in the input text pair followed a tanh function. For the rest, the encoder layers are simply CNN layer, GRU layer, and LSTM layer, corresponding to the model names. ### Augmentation details Due to experimental costs, it was not possible for this study to evaluate the effects of different initializations of REDA/REDA\({}_{NG}\) (i.e., editing rate, number of augmentation per example) on the trained models' performance. Therefore, I initialized REDA/REDA\({}_{NG}\) with small editing rates, informed by [16], who recommend small editing rates over large ones and demonstrate that large editing rates lead to performance decline in their ablation experiments. This makes sense since large editing rates are more likely to cause label changes. Intuitively, if small editing rates do not work well, larger ones will not either. The number of augmentation per example, to be mentioned below, was kept small for the same consideration. More concretely, REDA and REDA\({}_{NG}\) were initialized with the following editing rates for SR, RS, RI, and RD, respectively: 0.2, 0.2, 0.1, and 0.1. I applied Python rounding rule to calculate and perform the number of edits needed for each operation. That means, if the number of edits is less than or equal to 0.5, it will be rounded down to 0 and thus no editing operation will apply. To make the experiments more controlled and doable, (1) I made RM only randomly perform two of the other four editing operations with one edit each; (2) and every editing operation produced up to 2 non-duplicated augmented texts per text (or 4 per text pair), if the train set size was less than 50k; otherwise, there would only be one augmented text per text instead. Every augmented text was crossed-paired with the other text that was the pair to the text being augmented, with the original label retained for the augmented text pair. These settings were also applied for the supplementary experiments. Table 2 shows the size of the augmented train sets for the main experiments. ## 5 Main experiments This section reports the test set performance of the four classification models trained on train sets of varying size with and without augmentation for the binary question matching task in Chinese and in English. As the test sets for LCQMC and QQQD are equally balanced across labels (see Sec 4.1), accuracy is considered as the primary evaluation metric. The average precision and recall are taken as secondary metrics for more nuanced analyses. ### Chinese: LCQMC Table 3 shows the test set accuracy of the four classification models trained on the three types of train sets (baseline, REDA, and REDA\({}_{NG}\)) of different sizes. Contrary to the expectation, incorporating probabilistic linguistic knowledge into the five token-level text augmentation techniques does not lead to superior model performance, as the REDA\({}_{NG}\) models never outperform the REDA models in terms of average performance, when given the same amounts of training data. Instead, the REDA models almost always achieve slightly better performance than the REDA\({}_{NG}\) counterparts. Moreover, it appears that the augmented models (REDA and REDA\({}_{NG}\)) do not necessarily have better test set accuracy than that of the baseline models, unless augmentation is applied to sufficient original training examples (i.e., at least 50k). 
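For concreteness, the augmentation settings of Sec 4.4 can be summarized in the sketch below. Here `reda.apply` is a placeholder for REDA's editing operations (Random Mix is omitted for brevity), `ngram_lm` supplies the optional REDA\({}_{NG}\) filter, and none of these names come from the released code.

```python
# Editing rates for SR, RS, RI, and RD, as set in Sec 4.4.
RATES = {"sr": 0.2, "rs": 0.2, "ri": 0.1, "rd": 0.1}

def num_edits(rate, num_tokens):
    # Python rounding: a value of exactly 0.5 rounds down to 0, so very
    # short texts may receive no edits for a given operation.
    return round(rate * num_tokens)

def augment_pair(q1, q2, label, reda, train_size, ngram_lm=None, k=2):
    """Cross-pair each augmented text with the untouched member of the pair,
    retaining the original label. `q1` and `q2` are token lists; `reda.apply`
    is a placeholder for REDA's editing operations, not the released code."""
    per_text = 2 if train_size < 50_000 else 1
    examples = []
    for text, other in ((q1, q2), (q2, q1)):
        for op, rate in RATES.items():
            n = num_edits(rate, len(text))
            if n == 0:
                continue
            # Over-generate candidates, then optionally filter with the n-gram
            # LM (REDA_NG keeps the k most likely of at least 20k candidates).
            candidates = reda.apply(text, op, n_edits=n, n_out=20 * k)
            if ngram_lm is not None:
                candidates = sorted(candidates, key=ngram_lm.log_prob,
                                    reverse=True)[:k]
            for aug in candidates[:per_text]:
                examples.append((aug, other, label))
    return examples
```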
The average test set precision and recall, as shown in Table 4, may elucidate the performance gains of the augmented models over the baseline models. There are two factors at play. First, the augmented models consistently exhibit higher precision than the baseline models starting from the training size 50k. Second, the gap in recall between the baseline models and the augmented models becomes much narrower in favor of the augmented models at the same time. This suggests that the augmented models seem to learn to make significantly fewer false negatives with sufficient original training examples augmented, resulting in a sudden improvement in recall compared to the baseline models. It appears that 50k is a threshold, prior to \begin{table} \begin{tabular}{l l l l} \hline **LCQMC** & **Augmented** & **QQQD** & **Augmented** \\ \hline 5k & 66,267 & 10k & 148,341 \\ 10k & 132,513 & 50k & 543,066 \\ 50k & 563,228 & 100k & 1,086,063 \\ 100k & 929,176 & 150k & 1,629,178 \\ 240k & 2,218,512 & 260k & 2,823,733 \\ \hline \end{tabular} \end{table} Table 2: Size of augmented train sets for the main experiments on LCQMC and QQQD. For convenience, 240k is hereafter used to refer to the full size (i.e., 238,766 to be exact) of LCQMC. Note that, all the subsets of the full train sets were randomly and independently sampled. which augmentation seems detrimental to model performance, despite the substantial increase in training examples. ### English: QQQD The test set accuracy on QQQD shown in Table 5 exhibits a similar pattern to that on LCQMC for two reasons. First, the difference between REDA and REDA\({}_{NG}\) models \begin{table} \begin{tabular}{l c c c c c} \hline \hline Models & 5k & 10k & 50k & 100k & 240k \\ \hline Precision & **57.4** & 59.2 & 62.8 & 64.3 & 69.0 \\ +REDA & 56.8 & **59.5** & 64.1 & **66.8** & **70.3** \\ +REDA\({}_{NG}\) & **57.4** & 58.2 & **64.4** & 66.4 & 69.9 \\ \hline Recall & **75.2** & **77.5** & **82.0** & **86.2** & 89.2 \\ +REDA & 73.8 & 72.7 & 81.2 & 85.8 & **90.4** \\ +REDA\({}_{NG}\) & 71.1 & 76.1 & 79.9 & 85.5 & 89.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Average test set precision and recall (%) of the four classification models trained on LCQMC’s train sets of varying size with and without augmentation. Best performance given a train set size and a metric is highlighted in **bold**. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Models & 5k & 10k & 50k & 100k & 240k \\ \hline CBOW & **59.4** & 60.4 & 65.4 & 67.8 & 73.8 \\ +REDA & 58.1 & **60.9** & **68.2** & **72.2** & **76.4** \\ +REDA\({}_{NG}\) & 58.8 & 59.6 & 68.1 & 71.2 & 76.0 \\ \hline CNN & 59.3 & **63.4** & 67.2 & 69.0 & 72.9 \\ +REDA & 59.8 & 62.6 & 66.8 & **69.8** & **74.9** \\ +REDA\({}_{NG}\) & **60.3** & 62.0 & **67.9** & 69.1 & 74.0 \\ \hline LSTM & **60.0** & **62.1** & 66.2 & 69.6 & 74.8 \\ +REDA & 58.9 & 61.5 & **67.7** & **71.8** & **76.4** \\ +REDA\({}_{NG}\) & 57.7 & 60.9 & **67.7** & 71.7 & 75.9 \\ \hline GRU & **59.8** & **61.9** & 68.1 & 70.3 & **76.8** \\ +REDA & 58.7 & 61.3 & **68.7** & **72.7** & **76.8** \\ +REDA\({}_{NG}\) & 58.8 & 60.0 & 67.8 & 72.5 & 76.6 \\ \hline Average & **59.6** & **62.0** & 66.7 & 69.2 & 74.6 \\ +REDA & 58.9 & 61.6 & **67.9** & **71.6** & **76.1** \\ +REDA\({}_{NG}\) & 58.9 & 60.6 & **67.9** & 71.1 & 75.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Test set accuracy (%) of the four classification models trained on LCQMC’s train sets of varying size with and without augmentation. 
The header denotes train set sizes in terms of original training examples. Best performance given a train set size and a model type is highlighted in **bold**. remains negligible, reaffirming the trivial role probabilistic linguistic knowledge plays in the five token-level text augmentation techniques. Second, the augmented models do not outperform the baseline models until a sufficient number of original training examples are seen. However, unlike the experiment on LCQMC, this time the REDA\({}_{NG}\) models consistently perform better than the REDA models. Moreover, it appears that the threshold for the REDA and REDA\({}_{NG}\) models to outperform the baseline models is much larger, or 100k and 150k, respectively. These two differences are likely attributable to some training artifacts related to the datasets, the pretrained \(n\)-gram language models, and the likes. Also differing from the LCQMC experiment is how the baseline models compare to the augmented models in terms of average test set precision and recall. Instead of displaying a consistent advantage over the augmented models in one of these two metrics, the baseline models show better precision in a way highly correlated with accuracy. In other words, the augmented models do not outperform the baseline models until their respective thresholds. This indicates that the baseline models tend to make fewer false positives, compared to the augmented models when the original training data is insufficient for the text augmentation to become effective. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Models & 10k & 50k & 100k & 150k & 260k \\ \hline CBOW & **64.4** & **69.9** & 72.1 & 74.2 & 77.7 \\ +REDA & 62.5 & 68.5 & 71.6 & 74.8 & 78.0 \\ +REDA\({}_{NG}\) & 62.9 & 69.4 & **74.0** & **75.5** & **78.2** \\ \hline CNN & **66.1** & **71.1** & 72.6 & 73.4 & 75.9 \\ +REDA & 63.7 & 69.9 & **72.7** & **75.3** & 77.6 \\ +REDA\({}_{NG}\) & 63.5 & 69.3 & **72.7** & 74.7 & **77.7** \\ \hline LSTM & **65.7** & **71.6** & **72.9** & 75.0 & 77.9 \\ +REDA & 64.0 & 69.8 & 72.5 & **75.1** & **78.1** \\ +REDA\({}_{NG}\) & 64.9 & 70.3 & 72.7 & 75.0 & **78.1** \\ \hline GRU & **67.2** & **71.0** & **74.3** & 74.7 & 77.4 \\ +REDA & 63.3 & 70.0 & 72.8 & 74.8 & 78.1 \\ +REDA\({}_{NG}\) & 64.0 & 70.2 & 73.8 & **75.7** & **78.9** \\ \hline Average & **65.9** & **70.9** & 73.0 & 74.3 & 77.2 \\ +REDA & 63.4 & 69.6 & 72.4 & 75.0 & 78.0 \\ +REDA\({}_{NG}\) & 63.8 & 69.8 & **73.3** & **75.2** & **78.2** \\ \hline \hline \end{tabular} \end{table} Table 5: Test set accuracy (%) of the four classification models trained on QQQD’s train sets of varying size with and without augmentation. ### Interim summary Overall, the results presented above demonstrate that incorporating probabilistic linguistic knowledge into REDA does not make a significant difference. Pairwise Mann-Whitney U tests confirm that there is no statistically significant difference in the test set performance between the REDA and REDA\({}_{NG}\) models, with the obtained \(p\)-values close to 1.0, regardless of the specific metric in use. Additionally, it is revealed that the five token-level text augmentation techniques are not always effective, irrespective of whether an \(n\)-gram language model is employed to optimize the augmented outputs or not. The results indicate that for both Chinese and English binary question matching tasks, the augmented models only outperform the baseline models when a sufficient amount of original training examples are augmented. 
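The significance test mentioned above can be reproduced in outline as follows. The accuracies are those reported in Table 3, pooled across the four classifiers and five train sizes purely for illustration; the exact pairing scheme used in the study may differ.

```python
from scipy.stats import mannwhitneyu

# Test-set accuracies (%) of the REDA and REDA_NG models on LCQMC, copied
# from Table 3 (four classifiers x five train sizes), pooled for illustration.
reda    = [58.1, 60.9, 68.2, 72.2, 76.4, 59.8, 62.6, 66.8, 69.8, 74.9,
           58.9, 61.5, 67.7, 71.8, 76.4, 58.7, 61.3, 68.7, 72.7, 76.8]
reda_ng = [58.8, 59.6, 68.1, 71.2, 76.0, 60.3, 62.0, 67.9, 69.1, 74.0,
           57.7, 60.9, 67.7, 71.7, 75.9, 58.8, 60.0, 67.8, 72.5, 76.6]

u_stat, p_value = mannwhitneyu(reda, reda_ng, alternative="two-sided")
print(u_stat, p_value)   # a large p-value indicates no significant difference
```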
There are two differences observed between the experiments in Chinese and English. The differences concern the relative performance of the REDA\({}_{NG}\) models against the REDA models, and the relative performance of the augmented models against the baseline models. Nevertheless, these differences do not impact the two general observations made above for the purpose of this study. ## 6 Supplementary experiments Following the main results obtained in Sec 5, three important follow-up questions arise: (1) Does REDA\({}_{NG}\) truly produce higher-quality augmented texts than REDA under the same conditions? (2) Would the results remain valid if state-of-the-art transformer models were employed instead? (3) What if the five token-level text augmentation techniques were applied separately, rather than together? Question (1) is crucial because it determines whether the insignificant difference between the REDA and REDA\({}_{NG}\) models found in Sec 5 is due to the marginal role of probabilistic linguistic knowledge, or simply because the texts augmented by REDA \begin{table} \begin{tabular}{l c c c c c} \hline \hline Models & 10k & 50k & 100k & 150k & 260k \\ \hline Precision & **65.0** & **69.8** & 71.6 & 72.1 & 76.0 \\ +REDA & 61.8 & 68.0 & 70.6 & 73.6 & 76.2 \\ +REDA\({}_{NG}\) & 62.6 & 69.0 & **72.3** & **74.2** & **77.3** \\ \hline Recall & 69.5 & 73.6 & 76.2 & **79.4** & 79.6 \\ +REDA & **70.4** & **73.7** & **77.0** & 78.0 & **81.4** \\ +REDA\({}_{NG}\) & 69.2 & 72.0 & 75.7 & 77.4 & 80.0 \\ \hline \hline \end{tabular} \end{table} Table 6: Average test set precision and recall (%) of the four classification models trained on QQQD’s train sets of varying size with and without augmentation. and REDA\({}_{NG}\) are indistinguishable in terms of quality. Questions (2) and (3) assess the generality of the observations made so far. Due to resource constraints and for simplicity, the supplementary experiments in this section are based on LCQMC. ### Comparison of texts augmented by REDA and REDA\({}_{ng}\) Directly comparing the texts augmented by REDA and REDA\({}_{NG}\) is not feasible, three text restoration experiments were therefore designed to approximate the comparison. These experiments assess the ability of both programs to restore natural texts when given distorted texts or a pseudo-synonym dictionary for the following text editing operations: Synonym Replacement (SR), Random Swap (RS), and Random Deletion (RD). Random Insertion (RI) and Random Mix (RM) are omitted since inserting random synonyms is generally not representative of natural language use, and the text quality resulting from RM can be inferred from the other basic operations. Table 7 presents the average accuracy, with the experiment details provided in Appendix A. As shown, while the performance for both approaches declines as the number of edits increases, REDA\({}_{NG}\) consistently outperforms REDA. For REDA, restoring the distorted texts to their original form is merely a matter of chance, equal to the reciprocal of the number of possible augmented outputs. However, REDA\({}_{NG}\) augments texts based on the maximum likelihood principle, which tends to be closer to natural texts. This also holds true when natural texts are used as inputs. For instance, through manual inspection, I found that REDA\({}_{NG}\) performed much better in selecting appropriate synonyms, a problem for REDA due to its randomness and the ubiquitous existence of polysemy. 
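The surface-similarity statistics reported next (bigram overlap rate and Levenshtein edit distance) can be computed with a generic sketch such as the one below; the exact definitions used in the study may differ slightly, e.g., in how repeated bigrams are counted.

```python
def bigram_overlap_rate(original, augmented):
    """Fraction of the original token sequence's bigrams preserved in the
    augmented sequence (a set-based approximation)."""
    bigrams = lambda toks: {tuple(toks[i:i + 2]) for i in range(len(toks) - 1)}
    orig = bigrams(original)
    return len(orig & bigrams(augmented)) / max(len(orig), 1)

def levenshtein(a, b):
    """Token-level Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Toy example: a token sequence randomly swapped twice versus the original.
orig = ["how", "do", "i", "learn", "python", "fast"]
aug  = ["how", "i", "do", "learn", "fast", "python"]
print(bigram_overlap_rate(orig, aug), levenshtein(orig, aug))
```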
By measuring the bigram overlap rate and Levenshtein edit distances of output texts randomly swapped twice from the natural texts, I further found that the average overlap rate for REDA was much lower (i.e., 0.29 versus 0.77), and that the average edit distances were much larger (i.e., 3.0 versus 1.4) \begin{table} \begin{tabular}{l l l l l} \hline \hline & & One & Two & Three \\ \hline SR & REDA & 22 & 6 & 2 \\ & REDA\({}_{NG}\) & **88** & **79** & **64** \\ \hline RS & REDA & 9 & 4 & 4 \\ & REDA\({}_{NG}\) & **69** & **41** & **34** \\ \hline RD & REDA & 16 & 5 & 2 \\ & REDA\({}_{NG}\) & **39** & **22** & **15** \\ \hline \hline \end{tabular} \end{table} Table 7: Average accuracy (%) in three text restoration tasks based on different number of edits (header). SR: Synonym Replacement; RS: Random Swap; RD: Random Deletion. Best performance given an edit number and an augmentation method is highlighted in **bold**. than REDA\({}_{NG}\). This suggests that REDA\({}_{NG}\) preserves more collocational features of natural texts than REDA and thus augments higher-quality texts. ### Effect of transformer ERNIE-Gram [42] is a transformer-based pretrained large language model and was chosen for its state-of-the-art performance on LCQMC. The fine-tuning of the ERNIE-Gram models shared identical training details with the main experiments, except that a smaller learning rate (i.e., 5e-5) was used. Table 8 shows the test set performance across the three metrics for the fine-tuned ERNIE-Gram models. Not surprisingly, the fine-tuned ERNIE-Gram models achieve significantly better results than the four classification models trained in the main experiments on LCQMC. Notably, using only 5k original examples, the fine-tuned ERNIE-Gram models outperform any model trained in the main experiments, regardless of augmentation. This highlights the impressive effectiveness of transfer learning resulting from fine-tuning large language model on downstream tasks. The implication may be that transfer learning is a more robust and effective way of boosting model performance than the text augmentation approaches considered in this study. However, it remains unknown whether this is also the case in low-resource settings. Despite the noticeable performance gain, the ERNIE-Gram models fine-tuned on augmented train sets are consistently outperformed by the baseline models without augmentation in terms of both accuracy and precision in the test set. Thus, both text augmentation approaches appear to be overall detrimental to model performance. Furthermore, no evidence indicates a significant difference between REDA and REDA\({}_{NG}\), even when a transformer, such as ERNIE-Gram, is used. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Models & 5k & 10k & 50k & 100k & 240k \\ \hline Accuracy & **78.7** & **81.7** & **85.9** & **87.1** & **87.4** \\ +REDA & 77.5 & 80.3 & 84.1 & 85.0 & 85.7 \\ +REDA\({}_{NG}\) & 78.6 & 80.1 & 83.8 & 84.6 & 85.8 \\ \hline Precision & **71.2** & **74.7** & **80.3** & **81.7** & **82.0** \\ +REDA & 70.0 & 72.9 & 77.7 & 79.1 & 79.5 \\ +REDA\({}_{NG}\) & 71.0 & 73.2 & 77.4 & 78.6 & 80.0 \\ \hline Recall & **96.6** & 95.9 & 95.2 & **95.6** & 95.9 \\ +REDA & 96.4 & **96.4** & **95.6** & 95.1 & **96.1** \\ +REDA\({}_{NG}\) & 95.7 & 95.1 & 95.3 & 95.1 & 95.5 \\ \hline \hline \end{tabular} \end{table} Table 8: Test set, accuracy, precision and recall (%) for the Ernie-Gram models fine-tuned on LCQMC’s train sets of varying size with and without augmentation. 
### Effect of single augmentation technique To understand the role of each text augmentation technique, models were trained on train sets augmented using only one augmentation technique. The original train set was partitioned into 11 different sizes, rather than 5, to validate the observation in Sec 5 that the effectiveness of the augmentation is restricted to a sufficient number of original training examples. The experimental details can be found in Appendix B. Fig 1 displays the average test set accuracy of the four classification models trained on the three types of train sets under different text augmentation techniques and across various training sizes. In line with the previous findings, the effect of probabilistic linguistic knowledge on each of the five techniques is minimal and shows no statistically significant difference, both individually and on average. Also consistent with the previous findings is the existence of a threshold where the augmented models outperform the baseline models in test set accuracy, which appears to be around the training size 100k, rather than 50k as in the related main experiments. The discrepancy may be explained by the different epoch numbers (see Appendix B) and, more importantly, the separation of the augmentation techniques, which, however, are beyond the scope of this study. The average test set precision and recall resemble the data patterns observed in the main experiments, with an updated threshold mentioned above. Please refer to Appendix B for details. Figure 1: Average test set accuracy of the four classification models trained on LCQMC’s train sets of varying size with and without augmentation under different conditions (i.e., augmentation technique, train size size). The sixth plot averages the statistics of the previous five plots. ## 7 Discussion and conclusion In this study, I evaluate the effectiveness of five token-level text augmentation techniques and the role of probabilistic linguistic knowledge thereof. To this end, two related programs were created: REDA and REDA\({}_{NG}\), the latter of which utilizes pre-trained \(n\)-gram language models to select most likely augmented texts from REDA's output. Experiments on binary question matching classification task in Chinese and English strongly indicate that the role of probabilistic linguistic knowledge for token-level text augmentation is minimal and that the related augmentation techniques are not generally effective. These two findings are further discussed as follows. First, the difference between the REDA models and the REDA\({}_{NG}\) models is trivial. However, the supplementary experiment on three pseudo text restoration tasks in Sec 6.1 shows that, REDA\({}_{NG}\) arguably generates higher-quality augmented texts compared to REDA, as it preserves more collocational features of natural texts. An intuitively plausible explanation for the insignificant role of probabilistic linguistic knowledge may be due to the inherent inability of the five augmentation techniques to produce strictly paraphrastic augmented texts. In other words, the texts augmented by REDA and REDA\({}_{NG}\) are to a considerable extent comparable in the sense that they are mostly not the paraphrases of the original texts being augmented. Although the REDA\({}_{NG}\) models appear to be slightly better than the REDA models in the English experiments, the opposite is true for the Chinese experiments. The observed differences are highly likely to result from training artifacts. 
Nevertheless, none of them are statistically significant. Second, the effectiveness of the augmentation techniques, whether applied together or separately, and irrespective of the classification model type (including transformers), only surfaces when a sufficiently large number of original training examples are supplied. This finding shows that the effectiveness is task specific and not always positive, contrasting with [16] and aligning with [13; 29]. Unlike the one-text-one-label classification tasks experimented with in [16], question matching involves classifying a given question pair into a label that indicates the similarity of the pair's intents. As such, the task is inherently more sensitive to the semantic changes caused by text augmentation and thus arguably represents a more reliable evaluative task. The performance decline of the augmented models in cases with insufficient original training examples may be due to the negative effects of the falsely matched augmented text pairs generated by REDA and REDA\({}_{NG}\). However, with enough original training examples seen, the augmented models learn to mitigate these negative effects and turn them somewhat into regularization, which helps the models generalize better. Nevertheless, the requirement of a sufficiently large number of training examples makes the token-level text augmentation investigated here a less practical and less preferable approach for tasks similar in nature to question matching. One might argue that the differences between REDA/REDA\({}_{NG}\) and EDA [16], as described in Sec 3, could be a possible cause for the failure of text augmentation on small train sets in this study. Specifically, by disallowing duplicates, REDA and REDA\({}_{NG}\) are more likely to produce more diverse yet non-paraphrastic augmented texts than EDA, given comparably small editing rates. This might exacerbate the negative effects of random text perturbations, thereby requiring more original training examples to mitigate such effects. However, I argue that the differences between REDA/REDA\({}_{NG}\) and EDA are not crucial. Since the augmented models in this study do not necessarily outperform the baseline models even with a non-trivial number of original training examples (i.e., at least 50k) and when the number of augmentations is only 1 per text, there is no reason to believe that the augmented models would perform better with fewer original training examples while having the same proportion of augmented examples. Furthermore, it is not surprising that EDA works for simple one-text-one-label classification tasks, despite producing imperfect augmented texts. The reason, again, is task specificity. For example, in sentence-level sentiment analysis, the sentiment of a sentence is often captured by only a few keywords [43]. It follows that, as long as an augmented text retains these few keywords or close replacements, it still reasonably preserves the sentiment label of the original text even if it is grammatically problematic. The key lesson here is that token-level text augmentation may easily introduce noise into the training examples of such simple classification tasks without causing label changes. As a result, the trained models generalize better. Systematically and fairly evaluating a text augmentation technique is not easy, and how best to do so remains an open question. The limitations of this study are clear: constrained by the available computing resources, it does not experiment with different initializations of REDA/REDA\({}_{NG}\) or different configurations of the classification models.
Nevertheless, this study showcases a linguistically-motivated way of evaluating text augmentation and highlights the benefits and insights it provides. The main takeaway is that although token-level text augmentation is simple and potentially useful, it should be used with caution, particularly for complex tasks. ###### Acknowledgements. The paper is based on two of my previous publications [30; 31]. I thank anonymous reviewers from ICNLSP 2022, ACL 2022 Workshop on Insights from Negative Results in NLP, and AACL-IJCNLP 2022 Workshop on Evaluation & Comparison of NLP Systems, for their feedback. Any remaining errors are solely my responsibility. ## Appendix A Text restoration experiments For the Synonym Replacement (SR) experiment, I created a pseudo synonym dictionary consisting of 3,855 one-word-four-synonym pairs. Each word was mapped to four pseudo synonyms, including the word itself and three non-synonym random words. All the words in the dictionary were those with frequencies ranking between the 1,000th and the 10,000th positions in the unigram dictionary compiled for the Chinese \(n\)-gram language model. For the Random Swap (RS) and Random Deletion (RD) experiments, I randomly reordered the natural texts and added random words from the texts before performing RS and RD, respectively. For each comparison made, I randomly sampled 10,000 texts from LCQMC's train set for five runs. ## Appendix B Ablation experiments on LCQMC The training conditions were the same as the main experiments, except for training time. Specifically, to save resources, the training time was reduced to 2 epochs when the train size was 50k or 100k, and to 1 epoch when the size was over 100k. Since the aim is to compare the test set performance among the baseline, REDA, and REDA\({}_{NG}\) models, and because larger train sizes require fewer epochs to fit the train sets, the reduction of the training time is considered reasonable. For the ablation experiments, Table 9 displays the size of the augmented train sets, and Figs 2 and 3 show the average test set precision and recall, respectively.
2301.12485
Generating Novel, Designable, and Diverse Protein Structures by Equivariantly Diffusing Oriented Residue Clouds
Proteins power a vast array of functional processes in living cells. The capability to create new proteins with designed structures and functions would thus enable the engineering of cellular behavior and development of protein-based therapeutics and materials. Structure-based protein design aims to find structures that are designable (can be realized by a protein sequence), novel (have dissimilar geometry from natural proteins), and diverse (span a wide range of geometries). While advances in protein structure prediction have made it possible to predict structures of novel protein sequences, the combinatorially large space of sequences and structures limits the practicality of search-based methods. Generative models provide a compelling alternative, by implicitly learning the low-dimensional structure of complex data distributions. Here, we leverage recent advances in denoising diffusion probabilistic models and equivariant neural networks to develop Genie, a generative model of protein structures that performs discrete-time diffusion using a cloud of oriented reference frames in 3D space. Through in silico evaluations, we demonstrate that Genie generates protein backbones that are more designable, novel, and diverse than existing models. This indicates that Genie is capturing key aspects of the distribution of protein structure space and facilitates protein design with high success rates. Code for generating new proteins and training new versions of Genie is available at https://github.com/aqlaboratory/genie.
Yeqing Lin, Mohammed AlQuraishi
2023-01-29T16:44:19Z
http://arxiv.org/abs/2301.12485v3
# Generating Novel, Designable, and Diverse Protein Structures by Equivariantly Diffusing Oriented Residue Clouds ###### Abstract Proteins power a vast array of functional processes in living cells. The capability to create new proteins with designed structures and functions would thus enable the engineering of cellular behavior and development of protein-based therapeutics and materials. Structure-based protein design aims to find structures that are designable (can be realized by a protein sequence), novel (have dissimilar geometry from natural proteins), and diverse (span a wide range of geometries). While advances in protein structure prediction have made it possible to predict structures of novel protein sequences, the combinatorially large space of sequences and structures limits the practicality of search-based methods. Generative models provide a compelling alternative, by implicitly learning the low-dimensional structure of complex data distributions. Here, we leverage recent advances in denoising diffusion probabilistic models and equivariant neural networks to develop Genie, a generative model of protein structures that performs discrete-time diffusion using a cloud of oriented reference frames in 3D space. Through _in silico_ evaluations, we demonstrate that Genie generates protein backbones that are more designable, novel, and diverse than existing models. This indicates that Genie is capturing key aspects of the distribution of protein structure space and facilitates protein design with high success rates. Code for generating new proteins and training new versions of Genie is available at [https://github.com/aqlaboratory/genie](https://github.com/aqlaboratory/genie). ## 1 Introduction Proteins play an essential role in all cellular processes, ranging from chemical catalysis to molecular transport. Over the course of evolution, nature has explored a breathtaking diversity of protein structures and accordant functions. Yet, relative to the full potential size of foldable protein space, evolution has only explored a small subregion (Huang et al., 2016). This leaves open the possibility of designing new proteins unlike any seen in nature, if suitable algorithms can be developed that correctly model uncharted parts of fold space. Protein design efforts historically focused on optimizing functional properties of naturally occurring proteins through directed evolution (Dougherty and Arnold, 2009) or through rational design of novel protein sequences that hew closely to known structural motifs (Kuhlman et al., 2003). This limited exploration of fold space to regions adjacent to naturally occurring proteins. With recent advances in protein structure prediction methods, new approaches have been proposed that leverage representations learned by these methods to more broadly explore structure space. For example, Anishchenko et al. (2021) performed Monte Carlo sampling in sequence space using trRosetta (Yang et al., 2020) as a guide and were able to discover novel structures. One disadvantage of this approach, however, is the reliance on sampling, which can be computationally expensive and difficult to steer toward desirable design goals. The development of generative models capable of capturing complex data distributions has provided a new direction for _de novo_ protein design. Instead of sampling from protein sequence space, protein design could be achieved by implicitly learning the underlying space of structures.
Generative modeling offers a number of paradigms, most of which were developed and refined for image generation. Which paradigm is best suited for proteins, and how it should be adapted to their 3D world, are key outstanding questions. **Generative modeling trilemma** Generative models generally contend with a trilemma in optimizing between generation quality (in the protein context, physicality and designability), mode coverage (novelty and diversity of generated structures), and sampling time (Xiao et al., 2022). Multiple modeling paradigms exist, each making a different trade-off. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) pit two neural networks in competition with one another--a generator network that creates new data instances and a discriminator network that distinguishes between real and generated instances. Through this process, GANs learn to map a simple latent space defined by, _e.g._, a multivariate Gaussian distribution to the complex space that implicitly captures, in part, the underlying data distribution. GANs are capable of rapid generation of high quality samples but suffer from unstable training and mode collapse in which parts of the data distribution are left uncaptured (Brock et al., 2019; Karras et al., 2020; Zhao et al., 2021). Variational AutoEncoders (VAEs) (Kingma & Welling, 2013) are likelihood-based generative models that learn a low-dimensional latent space through reconstruction. More specifically, they consist of two separate networks, an encoder that maps data instances into a low-dimensional latent space and a decoder that reconstructs data instances from their corresponding latent representations. By minimizing a variational lower bound, VAEs aim to maximize the probability of observed data. In practice, VAEs achieve good mode coverage but produce lower quality samples (van den Oord et al., 2017; Razavi et al., 2019). More recently, denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020; Nichol & Dhariwal, 2021) have shown considerable promise in generating high quality 2D images, as exemplified by DALL-E 2 (Ramesh et al., 2022) and Stable Diffusion (Rombach et al., 2022). DDPMs consist of a forward process that iteratively adds a small amount of Gaussian noise to a sample, and a reverse process that iteratively removes noise from a noisy sample. DDPMs optimize for sample quality and diversity, achieving state-of-the-art performance on both (Dhariwal & Nichol, 2021). However, they suffer from a slow sampling process, as generation proceeds through a denoising process that repeatedly executes computationally-intensive inference. Nonetheless, because of the notably higher quality and diversity of generated samples, DDPMs have become the generative paradigm of choice for images. **Application to protein design** Multiple prior efforts have applied generative modeling to structure-based protein design. The first generation of methods parameterized protein geometry using inter-residue distances, leveraging the pre-existing machinery for 2D image generation. For instance, Anand and Huang (2018) used GANs to generate pairwise distance matrices of \(C_{\alpha}\) atoms in proteins, followed by convex optimization to reconstruct the corresponding 3D coordinates. Anand et al. (2019) later introduced an additional refinement network to improve coordinate reconstruction.
One limitation of this approach is the lack of a guarantee that generated pairwise distances are embeddable in 3D space, leading to potential inconsistencies between raw samples (in distance matrix space) and generated coordinates. Errors in distance matrices often lead to significant deterioration in structural quality (Eguchi et al., 2022), and prevent the model from being optimized in an end-to-end fashion for final 3D geometry. An alternate parameterization for protein structure can be formulated using internal coordinates, where torsion angles between adjacent residues are used to encode 3D geometry. This approach sidesteps the embeddability problem of distance-based representations, but may be overly reliant on reasoning over local protein geometry (AlQuraishi, 2019). One example is FoldingDiff (Wu et al., 2022), which performs diffusion in the space of internal coordinates using a bidirectional transformer with relative positional embeddings to iteratively denoise a sequence of torsion angles. FoldingDiff yields protein-like backbones, but the majority of generated structures are predicted to not be designable when assessed using _in silico_ sequence-structure self-consistency metrics (described later). One possible reason for this is that errors in predicted internal coordinates can accumulate and propagate along the protein chain during the coordinate reconstruction process. A third approach parameterizes proteins using atomic coordinates in Cartesian space. Unlike distance-based and internal coordinate parameterizations, this approach is not inherently invariant to rotations and translations (SE(3)-invariance). As proteins do not have preferred orientations or locations, capturing these invariances would improve the data efficiency of the generative model. Recent developments in geometric neural networks, including EGNN (Satorras et al., 2021) and GVP (Jing et al., 2021), provide powerful tools for geometric reasoning in an SE(3)-equivariant manner. Employing EGNNs for this purpose, Trippe et al. (2022) developed ProtDiff, a DDPM that directly generates the \(C_{\alpha}\) coordinates of protein structures. Although a promising approach, ProtDiff struggles to produce geometries with realizable protein sequences. One potential reason for this is the reflection-invariant property of EGNNs, which is non-physical and frequently yields left-handed alpha helices, an exceedingly rare structural element in real proteins. In protein structure prediction, AlphaFold2 (Jumper et al., 2021) achieved great success by combining implicit reasoning in a latent space (evoformer module) with geometric reasoning in Cartesian space (structure module). A key feature of the latter is Invariant Point Attention (IPA), a mechanism for computationally-efficient, SE(3)-equivariant reasoning that is sensitive to reflections. IPA parameterizes proteins using rigid body frames anchored at residues, which can be defined in a consistent manner irrespective of global position or orientation by taking advantage of the polymeric nature of protein backbones. Using a cloud of reference frames, instead of a point cloud, retains angular information between residues and thus accounts for chemical chirality. Reformulating this construction for protein design, Anand and Achim (2022) combine IPA with a DDPM to generate full-atom protein structures and their corresponding sequences by conditioning on secondary structure topology.
They perform (structure) diffusion in frame space, using random Gaussian-distributed 3D vectors and random rotation matrices as noise (the latter does not satisfy the Gaussian assumption of the original DDPM construction). Using this approach the resulting model shows promising empirical results for secondary structure-conditioned tertiary structure generation. In this work, we similarly combine aspects of the SE(3)-equivariant reasoning machinery of IPA with DDPMs to create an (unconditional) diffusion process over protein backbone geometry. Unlike Anand and Achim (2022), we introduce a geometric asymmetry in how protein residues are represented--as point clouds in the standard forward process of DDPMs (the noising procedure) and as a cloud of reference frames in the reverse process (sample generation procedure). This enables us to use a simple and cheap process for noising structures while retaining the full expressivity of IPA during generation and without having to violate the Gaussian assumption of DDPMs. The resulting model, Genie, is capable of generating diverse structures that are simultaneously designable and novel. Through _in silico_ comparisons with leading methods, we show that Genie achieves state-of-the-art performance on key design metrics. Contemporaneous with this work, two other methods reported highly performant DDPMs for protein design inspired by similar ideas (Ingraham et al., 2022; Watson et al., 2022), although their architectural details and training procedures are distinct from Genie. ## 2 Methods Genie is a DDPM that generates protein backbones as a sequence of \(C_{\alpha}\) atomic coordinates. It performs diffusion directly in Cartesian space and uses an SE(3)-equivariant denoiser that reasons over a cloud of reference frames to predict noise displacements at each diffusion step. In Section 2.1, we describe our tailored implementation of DDPMs for protein backbone generation. In Section 2.2, we provide details on the SE(3)-equivariant denoiser. In Sections 2.3 and 2.4, we describe how we train and sample from the model, respectively. ### Denoising Diffusion Probabilistic Model Let \(\mathbf{x}=[\mathbf{x}^{1},\mathbf{x}^{2},\cdots,\mathbf{x}^{N}]\) denote a sequence of \(C_{\alpha}\) coordinates of length \(N\), corresponding to a protein with \(N\) residues. 
Given a sample \(\mathbf{x}_{0}\) from the unknown data distribution over protein structures, the forward process iteratively adds isotropic Gaussian noise to the sample following a cosine variance schedule \(\boldsymbol{\beta}=[\beta_{1},\beta_{2},\cdots,\beta_{T}]\): \[q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t}\ |\ \sqrt{1- \beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}) \tag{1}\] By reparameterization, we have \[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t}\ |\ \sqrt{\bar{ \alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}) \tag{2}\] where \[\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\quad\text{and}\quad\alpha_{t}=1- \beta_{t}\] Since the isotropic Gaussian noise added at each diffusion step is small, the reverse process can be modeled as a Gaussian process: \[p(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1}\ |\ \boldsymbol{ \mu}_{\theta}(\mathbf{x}_{t},t),\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t )\mathbf{I}) \tag{3}\] where \[\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon }_{\theta}(\mathbf{x}_{t},t)\right)\] \[\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t)=\beta_{t}\] By starting the reverse process from pure white noise and then iteratively removing noise, Genie generates protein backbones _de novo_. Figure 1 illustrates diffusion of a protein backbone. Running the reverse process requires evaluating \(\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t)\), which predicts the noise added at time step \(t\). We model this using a novel geometric noise predictor that forms the core of our model. ### Noise Prediction In Genie, the noise predictor first takes the \(C_{\alpha}\) coordinates at diffusion step \(t\), denoted by \(\mathbf{x}_{t}\), and computes discrete Frenet-Serret (FS) frames based on the backbone geometry encoded by \(\mathbf{x}_{t}\). Each FS frame represents the position and orientation of a residue relative to the global reference frame. Once constructed, these FS frames enable downstream model components, including IPA, to reason about the relative orientations of protein residues and parts. FS frames are passed together with a sinusoidal encoding of diffusion step \(t\) to an SE(3)-invariant encoder and an SE(3)-equivariant decoder to compute a new set of FS frames, from which updated coordinates are extracted (see Appendix A.1 for more details). Noise is then computed as a set of displacement vectors between the original and updated coordinates, which is the final prediction of \(\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t)\). Figure 2 summarizes Genie's architecture. **FS frame construction** Following Hu et al. (2011) and Chowdhury et al. (2022), we construct discrete FS frames \(\mathbf{F}\) as \[\mathbf{t}^{i}=\frac{\mathbf{x}^{i+1}-\mathbf{x}^{i}}{\|\mathbf{x}^{i+1}- \mathbf{x}^{i}\|}\] \[\mathbf{b}^{i}=\frac{\mathbf{t}^{i-1}\times\mathbf{t}^{i}}{\|\mathbf{t}^{i-1} \times\mathbf{t}^{i}\|}\] \[\mathbf{n}^{i}=\mathbf{b}^{i}\times\mathbf{t}^{i}\] \[\mathbf{R}^{i}=[\mathbf{t}^{i},\mathbf{b}^{i},\mathbf{n}^{i}]\] \[\mathbf{F}^{i}=(\mathbf{R}^{i},\mathbf{x}^{i})\] where the first element of \(\mathbf{F}^{i}\) is the rotation matrix and the second element is the translation vector. To handle the edge cases corresponding to the N- and C-termini of proteins, we assign the frames of the second and second-to-last residues to the first and last residues, respectively. 
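A minimal NumPy sketch of the forward noising step (Eq. 2) and the discrete FS frame construction above is shown below. It is illustrative and simplified relative to the released Genie code; `alpha_bar` is assumed to hold the cumulative products \(\bar{\alpha}_{t}\) of the cosine variance schedule.

```python
import numpy as np

def forward_noise(x0, t, alpha_bar):
    """Eq. (2): sample x_t ~ q(x_t | x_0) for an (N, 3) cloud of C-alpha
    coordinates; `alpha_bar` holds the cumulative schedule products."""
    eps = np.random.randn(*x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

def fs_frames(x, tol=1e-8):
    """Discrete Frenet-Serret frames from (N, 3) C-alpha coordinates.
    Returns per-residue rotations (N, 3, 3); the translations are x itself."""
    d = x[1:] - x[:-1]
    t = d / (np.linalg.norm(d, axis=-1, keepdims=True) + tol)    # tangents t^i
    b = np.cross(t[:-1], t[1:])
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + tol)    # binormals b^i
    n = np.cross(b, t[1:])                                       # normals n^i
    R = np.stack([t[1:], b, n], axis=-1)       # columns [t^i, b^i, n^i]
    # Termini: reuse the frames of the second and second-to-last residues.
    return np.concatenate([R[:1], R, R[-1:]], axis=0)
```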
**SE(3)-invariant encoder** Given frames \(\mathbf{F}_{t}\) and the sinusoidal encoding of the corresponding diffusion step \(t\), the encoder generates and refines single residue and paired residue-residue representations, which are used later by the decoder to update the structure. As illustrated in Figure 2 ("Invariant Encoder"), the Single Feature Network first creates per residue representations (\(\mathbf{s}_{t}\)) from sinusoidal encodings of residue indices and the diffusion step. The Pair Feature Network then computes paired residue-residue representations (\(\mathbf{p}_{t}\)) from the outer sum of the (single) residue representations, relative positional encodings of residue pairs, and a pairwise distance matrix representation of the structure (based on \(C_{\alpha}\) coordinates). These pair representations \(\mathbf{p}_{t}\) are iteratively refined in the Pair Transform Network using triangular multiplicative updates (Jumper et al., 2021). The encoder is SE(3)-invariant since both its single and pair representations are derived from SE(3)-invariant features. Appendix A.2 provides further details on the encoder. **SE(3)-equivariant decoder** Given frames \(\mathbf{F}_{t}\) and the single (\(\mathbf{s}_{t}\)) and pair representations (\(\mathbf{p}_{t}\)) from the encoder, the decoder iteratively refines the structure by operating over \(\mathbf{F}_{t}\) in an SE(3)-equivariant manner. As illustrated in Figure 2 ("Equivariant Decoder"), the decoder first uses IPA to generate a new single representation \(\mathbf{s}^{\prime}_{t}\) based on \(\mathbf{F}_{t}\), \(\mathbf{s}_{t}\), and \(\mathbf{p}_{t}\). Here, frames are initialized using \(\mathbf{F}_{t}\) in lieu of the "black hole" initialization used by AlphaFold2. The Backbone Update Network then computes and applies frame updates based on the updated single representation \(\mathbf{s}^{\prime}_{t}\), resulting in a new set of frames \(\mathbf{F}^{\prime}_{t}\). The decoder is SE(3)-equivariant since frame updates are computed based on the SE(3)-invariant \(\mathbf{s}^{\prime}_{t}\). Thus, any global transformation of the input frames is also applied to the final output frames. **Noise prediction** Given the input coordinates \(\mathbf{x}_{t}\) and the updated frames \(\mathbf{F}^{\prime}_{t}\), we extract the updated coordinates \(\mathbf{x}^{\prime}_{t}\) from the translation component of \(\mathbf{F}^{\prime}_{t}\) and compute the predicted noise \(\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t)\) as \(\mathbf{x}_{t}-\mathbf{x}^{\prime}_{t}\). ### Training Since the forward diffusion process is predefined with a fixed variance schedule, training Genie essentially reduces to training the noise prediction model. By minimizing the error in noise prediction for each diffusion step, Genie learns to iteratively reverse the diffusion process and generate novel protein backbones. For this work, we consider a maximum sequence length of 128. Appendix A.3 provides more details on the training process. **Loss** Following Ho et al. 
(2020), which found that diffusion models achieve better performance when using noise \(\boldsymbol{\epsilon}_{t}\) as the prediction target instead of the mean in the reverse probability distribution \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), we define our loss as: \[L =\mathbb{E}_{t,\mathbf{x}_{0},\boldsymbol{\epsilon}}\left[\sum_{i =1}^{N}\lVert\boldsymbol{\epsilon}_{t}-\boldsymbol{\epsilon}_{\theta}( \mathbf{x}_{t},t)\rVert^{2}\right]\] \[=\mathbb{E}_{t,\mathbf{x}_{0},\boldsymbol{\epsilon}}\left[\sum_{i =1}^{N}\lVert\boldsymbol{\epsilon}_{t}-\boldsymbol{\epsilon}_{\theta}(\sqrt{ \overline{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{ \epsilon}_{t},t)\rVert^{2}\right]\] At each training step, we sample a protein domain \(x_{0}\) from the training dataset, a diffusion step \(t\) from a uniform distribution of integers between \(1\) and \(T\), and noise vectors \(\boldsymbol{\epsilon}_{t}\) from a unit Gaussian, and update model weights in the direction of minimizing the sum of per residue \(L_{2}\) distances between true and predicted noise vectors. **Dataset** For training data we use structures of protein domains from the Structural Classification of Proteins -- extended (SCOPe) dataset. Protein domains are filtered so that no two domains share more than \(40\%\) sequence identity, to ensure that the data is non-redundant and diverse. Figure 1: Diffusion of protein backbone in Cartesian space. The forward process iteratively adds isotropic Gaussian noise to \(C_{\alpha}\) coordinates, while the reverse process iteratively denoises noisy coordinates through an SE(3)-equivariant model. We also use the structural hierarchy defined by SCOPe to delineate protein domains along four major classes: all alpha proteins, all beta proteins, alpha and beta proteins (\(\alpha/\beta\)), and alpha and beta proteins (\(\alpha+\beta\)). We remove domains with multiple chains and missing backbone atoms. Our resulting training set comprises 8,766 domains, with 3,942 domains having at most 128 residues. ### Sampling To generate a new protein backbone of length \(N\), we first sample a random sequence \(\mathbf{x}_{T}=[\mathbf{x}_{T}^{1},\mathbf{x}_{T}^{2},\cdots,\mathbf{x}_{T}^{N}]\) of \(C_{\alpha}\) coordinates drawn from \(\mathbf{x}_{T}^{i}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) for all \(i\in[1,N]\). This sequence of coordinates \(\mathbf{x}_{T}\) is then recursively fed through the reverse diffusion process until diffusion step 0 is reached. Using Equation 3, the update rule is: \[\mathbf{x}_{t-1}=\begin{cases}\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t)+ \sqrt{\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t)}\cdot\boldsymbol{\epsilon},&\text{if }t>1\\ \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t),&\text{otherwise}\end{cases}\] where \(\boldsymbol{\epsilon}=[\boldsymbol{\epsilon}^{1},\boldsymbol{\epsilon}^{2}, \cdots,\boldsymbol{\epsilon}^{N}]\) and each \(\boldsymbol{\epsilon}^{i}\) is drawn from a unit Gaussian distribution. ## 3 Results To evaluate Genie, we generate 10 proteins for each sequence length between 50 and 128 residues and assess the resulting structures on designability, diversity, and novelty. We compare the performance of Genie on these criteria with ProtDiff and FoldingDiff, two recent diffusion models described earlier, and find that it outperforms both on all three criteria (see Appendix A.4 for more details). 
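Before turning to the individual evaluation criteria, the sampling procedure of Sec 2.4 can be summarized as the loop below. This is a sketch rather than the released implementation: the schedule arrays are assumed to be indexed 1..T, and `predict_noise(x_t, t)` stands in for Genie's SE(3)-equivariant denoiser, whose exact calling convention is an assumption here.

```python
import numpy as np

def sample_backbone(n_residues, T, alpha, alpha_bar, beta, predict_noise):
    """Reverse diffusion of Sec 2.4: start from Gaussian noise and iteratively
    denoise the C-alpha coordinates of an n_residues-long backbone."""
    x = np.random.randn(n_residues, 3)              # x_T ~ N(0, I)
    for t in range(T, 0, -1):
        eps_hat = predict_noise(x, t)
        mu = (x - beta[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alpha[t])
        if t > 1:                                   # add noise except at t = 1
            x = mu + np.sqrt(beta[t]) * np.random.randn(n_residues, 3)
        else:
            x = mu
    return x
```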
### Designability The first assessment criterion we consider is designability, _i.e.,_ the realizability of a generated backbone structure by a designed protein sequence. We follow the self-consistency Figure 2: Architecture of SE(3)-equivariant denoiser, including SE(3)-invariant encoder (bottom left) and SE(3)-equivariant decoder (bottom right). Notation: \(r\): number of residues, \(c_{s}\): dimensionality of single representation, \(c_{p}\): dimensionality of pair representation. Template Modeling (scTM) approach proposed by Trippe et al. (2022). Although purely an _in silico_ method, it has shown promise in correctly identifying designable structures (Dauparas et al., 2022). Briefly, scTM takes a generated backbone structure and feeds it into ProteinMPNN, a state-of-the-art structure-conditioned sequence generation method. Using ProteinMPNN set at a sampling temperature of 0.1, we generate eight sequences per input structure and then use OmegaFold (Wu et al., 2022b) to predict the structure of each putative sequence. The original scTM approach used AlphaFold2 but we substitute OmegaFold for AlphaFold2 as it outperforms the latter on single-sequence structure prediction (we also employ ESMFold (Lin et al., 2022) for the same purpose and observe similar trends -- see Appendix C). Finally, we compute scTM by measuring the TM-score (Zhang and Skolnick, 2004) -- a metric of structural congruence -- of the OmegaFold-predicted structure with respect to the original generated structure. scTM scores range from 0 to 1 with higher numbers corresponding to increased likelihoods that an input structure is designable. Appendix B provides an illustration of the scTM pipeline. Figure 3: Evaluation results. (A) Heatmap of the relative frequencies of generated domains with specific combinations of highest scTM and pLDDT values achieved by ProtDiff, FoldingDiff, and Genie. (B) Heatmap of relative frequencies of confidently designable domains with specific combinations of fractional SSE content. The number of designed domains for each model is shown in parentheses. (C) Heatmap of relative frequencies of our SCOPe dataset. This diagram uses the same color scheme as (B) and is provided for reference. (D) Histogram of confidently designable domains as a function of sequence length. (E) Bar chart of number of designable domains generated by different methods out of a fixed budget of 780 attempted designs per method. For each protein structure generated by each method, we compute the highest scTM score achieved across the eight putative sequences and the predicted local distance difference test score averaged across all residues (pLDDT) for the designed structure with the highest scTM score. pLDDT is an OmegaFold-derived score (initially introduced by AlphaFold2) that summarizes OmegaFold's own confidence in its predictions, ranging in value from 0 to 100 with pLDDT \(>70\) corresponding to confident predictions. Figure 3A shows the distribution of highest scTM scores versus pLDDTs for all three models. Similar to previous work (Trippe et al., 2022; Wu et al., 2022), we first use scTM \(>0.5\) as a cutoff for designability since it suggests that the generated and designed structures are of the same general fold (Xu and Zhang, 2010). \(81.5\%\) of protein domains generated by Genie have scTM \(>0.5\), far exceeding the percentages for ProtDiff (\(5.1\%\) and \(11.8\%\) for retrained and reported models, respectively) and FoldingDiff (\(19.6\%\) and \(22.7\%\) for resampled and reported results, respectively). 
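In outline, the scTM evaluation described above can be expressed as follows. Here `design_sequences`, `predict_structure`, and `tm_score` are placeholders for calls to ProteinMPNN, OmegaFold, and TM-align respectively, not their actual interfaces.

```python
def sc_tm(backbone_pdb, design_sequences, predict_structure, tm_score):
    """Self-consistency TM (scTM): design sequences for a generated backbone,
    predict their structures, and keep the best agreement with the backbone.
    Returns the highest scTM and the pLDDT of that best design."""
    best_sctm, best_plddt = 0.0, 0.0
    for seq in design_sequences(backbone_pdb, n=8, temperature=0.1):
        pred_pdb, plddt = predict_structure(seq)
        score = tm_score(pred_pdb, backbone_pdb)
        if score > best_sctm:
            best_sctm, best_plddt = score, plddt
    return best_sctm, best_plddt

def confidently_designable(best_sctm, best_plddt):
    """Designability criteria used here: scTM > 0.5 and pLDDT > 70."""
    return best_sctm > 0.5 and best_plddt > 70
```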
This indicates that on average Genie generates more designable protein structures. While scTM reflects a model's ability to find structures with designable sequences, it leaves open the possibility that OmegaFold-predicted structures are of insufficient quality to be used reliably in computing scTM scores. We thus place an additional constraint that predicted structures achieve pLDDT \(>70\) to enrich for confidently-predicted structures. When considering both criteria, \(58.3\%\) of domains generated by Genie are designable with confidently-predicted structures (henceforth, "confidently designable"), while only \(3.2\%\) and \(17.7\%\) of ProtDiff- and FoldingDiff-generated domains are, respectively. Figure 3D shows the distribution of confidently designed structures binned by sequence length. We observe that Genie outperforms ProtDiff and FoldingDiff across short and long proteins. Furthermore, Genie-generated structures universally satisfy physical chirality constraints while those generated by ProtDiff often contain left-handed helices. ### Diversity The second assessment criterion we consider is the diversity of generated structures. We first evaluate diversity by considering the relative proportion of secondary structure elements (SSEs) in generated domains. SSEs are local patterns of structure within proteins that are characterized by specific types of hydrogen bonding networks. The most common types of SSEs are \(\alpha\)-helices and \(\beta\)-strands, and Figure 4: Design space of Genie. 455 Genie-generated structures that are confidently designable were embedded in 2D space using multidimensional scaling (MDS) with pairwise TM scores as the distance metric. Domains are colored by their maximum TM score to the training set (central panel), fraction of helical residues (top left panel), fraction of beta strand residues (middle left panel), and sequence length (bottom left panel). Eight novel designed domains are shown as representatives. we focus on these in our assessments. To identify SSE in generated structures, we use the Protein Secondary Element Assignment (P-SEA) algorithm (Labesse et al., 1997). P-SEA detects SSEs using a set of hand-crafted rules based on distances and angles between consecutive \(C_{\alpha}\) atoms in protein backbones. We applied P-SEA to all confidently designable structures (\(\text{scTM}>0.5\); \(\text{pLDDT}>70\)). Figure 3B shows the relative frequencies of designed domains with different fractions of SSEs and Figure 3C provides the baseline frequencies of our SCOPe dataset for reference. Domains generated by FoldingDiff and ProtDiff are dominated by mainly \(\alpha\)-helical domains, with only 2 (out of 25, \(8\%\)) and 10 (out of 138, \(7.25\%\)) of their designs containing \(\beta\)-strands, respectively. In contrast, Genie designs are more diverse, with 254 mainly \(\alpha\)-helical, 25 mainly \(\beta\)-strand, and 176 \(\alpha,\beta\)-mixed domains. In addition to SSE content, we assess the diversity of tertiary structures in confidently designed domains. For each domain, we compute its maximum TM score to all other confidently designed domains, which quantifies its similarity to the most structurally similar domain in the designed set. For a diverse set of domains, most domains should have small maximum TM scores to all other domains. Genie achieves, on average, a maximum TM score of \(0.561\pm 0.086\) relative to the designed set, which is lower than both ProtDiff (\(0.583\pm 0.115\)) and FoldingDiff (\(0.668\pm 0.178\)). 
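The tertiary-structure diversity statistic above (each domain's maximum TM score to all other designed domains) can be computed as in the sketch below, given a pairwise TM-score matrix precomputed with TM-align; the toy matrix is for illustration only.

```python
import numpy as np

def pairwise_max_tm(tm):
    """Given a symmetric (D, D) matrix of pairwise TM scores among designed
    domains, return each domain's maximum TM score to any *other* domain."""
    tm = np.asarray(tm, dtype=float).copy()
    np.fill_diagonal(tm, -np.inf)          # ignore self-comparisons
    return tm.max(axis=1)

# Toy 3-domain example; the paper reports the mean +/- std of these maxima
# (e.g., 0.561 +/- 0.086 for Genie's confidently designable set).
tm = [[1.00, 0.42, 0.55],
      [0.42, 1.00, 0.48],
      [0.55, 0.48, 1.00]]
max_tm = pairwise_max_tm(tm)
print(max_tm.mean(), max_tm.std())
```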
This suggests that Genie-designed domains are more diverse and better able to capture the fold distribution of protein structure space. ### Novelty The third assessment criterion we consider is the novelty of generated protein structures. As a major goal of protein design is the creation of new protein folds and geometries, novelty is a key feature of any structure-based protein design tool. To quantify the novelty of generated structures we compute their maximum TM scores with respect to all structures in the training set. We use the TM-Align software package for this purpose (Zhang and Skolnick, 2004). To classify a confidently designable domain as novel, we require that its maximum TM score to the training set is less than 0.5 -- a widely used heuristic for determining when two protein domains are of dissimilar folds. Using this criterion, we find that 98 out of 455 (\(21.5\%\)) confidently designable structures generated by Genie are novel, relative to \(4\%\) (1 out of 25) and \(20.3\%\) (28 out of 138) for ProtDiff and FoldingDiff, respectively. Figure 3E summarizes the statistics on generated domains for all three models. To visualize the design space of Genie, we apply multi-dimensional scaling (MDS) to the pairwise TM scores of all 455 confidently designable domains and show the resulting 2D space in Figure 4. By overlaying maximum TM scores (relative to the training set) and SSE content on the MDS embedding, we confirm that Genie can generate diverse protein domains. We further observe that this diversity is evenly distributed throughout the embedding, with some localization of beta-dominated domains. We illustrate the quality and diversity of generated domains by showing eight novel designs chosen from diverse embedding locations. Appendix D provides additional visualizations of novel domains. ## 4 Conclusion In this work, we present Genie, a novel DDPM for _de novo_ protein design that substantially outperforms previous structure-based methods. One important contributing factor to Genie's success is the use of dual representations for protein residues. By representing a protein as a sequence of \(C_{\alpha}\) coordinates in Cartesian space instead of FS frames, we can perform diffusion by injecting isotropic Gaussian noise into \(C_{\alpha}\) coordinates, bypassing the need to noise rotation matrices, a more delicate task. On the other hand, during noise prediction, proteins are represented as sequences of FS frames, allowing Genie to reason about inter-residue orientations and achieve better structural quality. Thus we simultaneously achieve simplicity of design and geometric expressiveness. Practically, noise prediction is accomplished by combining IPA with backbone updates, which provide a powerful way to reason spatially about protein structure, maintaining equivariance to both translations and rotations while being sensitive to reflections. Future directions for this work center around two key areas. The first is to expand Genie to include a sequence generation module, allowing it to perform _de novo_ sequence-structure design. By learning to generate sequence and structure concurrently, the model may better capture the space of foldable proteins and achieve greater success at designing novel proteins. Additional architectural improvements may also be necessary to enable efficient training of Genie and generation of larger proteins. The second area is to facilitate application of Genie to biologically functional designs. 
Genie's success in unconditional structure generation immediately indicates its potential for conditional generation based on structural or functional properties. Such conditional generation has been achieved by other methods, for example in producing protein structures that contain functional sites (Wang et al., 2022). The use of pretrained classifiers (_e.g._, function classifiers) to guide the DDPM generation process towards novel proteins with desired properties is particularly promising, including in drug discovery. Contemporaneous methods (Ingraham et al., 2022; Watson et al., 2022) have shown promise in this direction and we hope that the innovations introduced by Genie will further drive progress.
2304.01541
Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation
Privacy and communication constraints are two major bottlenecks in federated learning (FL) and analytics (FA). We study the optimal accuracy of mean and frequency estimation (canonical models for FL and FA respectively) under joint communication and $(\varepsilon, \delta)$-differential privacy (DP) constraints. We show that in order to achieve the optimal error under $(\varepsilon, \delta)$-DP, it is sufficient for each client to send $\Theta\left( n \min\left(\varepsilon, \varepsilon^2\right)\right)$ bits for FL and $\Theta\left(\log\left( n\min\left(\varepsilon, \varepsilon^2\right) \right)\right)$ bits for FA to the server, where $n$ is the number of participating clients. Without compression, each client needs $O(d)$ bits and $\log d$ bits for the mean and frequency estimation problems respectively (where $d$ corresponds to the number of trainable parameters in FL or the domain size in FA), which means that we can get significant savings in the regime $ n \min\left(\varepsilon, \varepsilon^2\right) = o(d)$, which is often the relevant regime in practice. Our algorithms leverage compression for privacy amplification: when each client communicates only partial information about its sample, we show that privacy can be amplified by randomly selecting the part contributed by each client.
Wei-Ning Chen, Dan Song, Ayfer Ozgur, Peter Kairouz
2023-04-04T05:37:17Z
http://arxiv.org/abs/2304.01541v1
Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation ###### Abstract Privacy and communication constraints are two major bottlenecks in federated learning (FL) and analytics (FA). We study the optimal accuracy of mean and frequency estimation (canonical models for FL and FA respectively) under joint communication and \((\varepsilon,\delta)\)-differential privacy (DP) constraints. We consider both the central and the multi-message shuffling DP models. We show that in order to achieve the optimal \(\ell_{2}\) error under \((\varepsilon,\delta)\)-DP, it is sufficient for each client to send \(\Theta\left(n\min\left(\varepsilon,\varepsilon^{2}\right)\right)\) bits for FL and \(\Theta\left(\log\left(n\min\left(\varepsilon,\varepsilon^{2}\right)\right)\right)\) bits for FA to the server, where \(n\) is the number of participating clients. Without compression, each client needs \(O(d)\) bits and \(O\left(\log d\right)\) bits for the mean and frequency estimation problems respectively (where \(d\) corresponds to the number of trainable parameters in FL or the domain size in FA), meaning that we can get significant savings in the regime \(n\min\left(\varepsilon,\varepsilon^{2}\right)=o(d)\), which is often the relevant regime in practice. We propose two different ways to leverage compression for privacy amplification and achieve the optimal privacy-communication-accuracy trade-off. In both cases, each client communicates only partial information about its sample and we show that privacy is amplified by randomly selecting the part contributed by each client. In the first method, the random selection is revealed to the server, which results in a central DP guarantee with optimal privacy-communication-accuracy trade-off. In the second method, the random data parts at each client are privatized locally and anonymized by a secure shuffler, eliminating the need for a trusted server. This results in a multi-message shuffling scheme with the same optimal trade-off. As a result, our paper establishes the optimal three-way trade-off between privacy, communication, and accuracy for both the central DP and multi-message shuffling frameworks.1 Footnote 1: The notion of optimality in our paper is order-wise and within a logarithmic factor in some cases. See results for exact statements. ## 1 Introduction In the basic setting of federated learning (FL) [67, 63, 60] and analytics (FA), a server wants to execute a specific learning or analytics task on raw data that is kept on clients' devices. Consider, for example, model updates in FL or histogram estimation in FA, both of which can be modeled as a distributed mean estimation problem. This problem is solved by having the clients communicate targeted messages to the server. The privacy of the users' data is ensured (in terms of explicit differential privacy (DP) [38] guarantees) by having the server inject noise into the computed mean before releasing it to the next module (e.g., the server can compute the average model update and corrupt it with the addition of noise). This is called the trusted server or central DP model, as it entrusts the central server with privatization and is one of the most common ways in which federated learning and analytics are implemented today 2. Footnote 2: We assume a trusted service provider who applies the DP mechanism faithfully. This can be enforced by implementing the DP mechanism inside of a remotely attestable trusted execution environment [11]. 
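As a concrete reference point for what follows, here is a minimal sketch of this trusted-server baseline: the server averages the clients' \(\ell_{2}\)-clipped vectors and adds Gaussian noise before releasing the result. The noise calibration shown is the textbook Gaussian-mechanism formula (valid for \(\varepsilon\leq 1\)) with replace-one sensitivity \(2C/n\); it is used only for illustration and is not the calibration derived later in this paper.

```python
import numpy as np

def central_dp_mean(X, clip_norm, epsilon, delta, rng=None):
    """Trusted-server baseline: exact mean of l2-clipped client vectors plus
    Gaussian noise added by the server before release.

    The noise uses the classical Gaussian mechanism, sigma = sensitivity *
    sqrt(2 ln(1.25/delta)) / epsilon (valid for epsilon <= 1), with replace-one
    sensitivity 2 * clip_norm / n -- an illustrative textbook calibration."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    sensitivity = 2.0 * clip_norm / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return X_clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```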
In this paper, we start by asking the following question about distributed mean estimation in this central DP setting: given that the server is required to release only a noisy, or equivalently approximate, version of the mean, can the clients communicate "less information" to the server? More precisely, given a desired privacy level, can we reduce the communication load of the network without sacrificing accuracy, i.e., while still achieving the same (order-wise optimal) accuracy for the desired privacy level? In recent years, there has been significant interest in the central DP model [1] as well as communication efficiency and privacy for FL and FA under different models, including local DP [77, 62, 57, 79, 17, 6, 16, 32], shuffle [40, 43] and distributed DP [9, 58, 8, 33, 34]; however, this basic question appears to remain unanswered. The communication load at the clients can be reduced by having the clients communicate partial information about their samples to the server. For example, in the case of model updates, each client can update only a subset of the model coefficients. In the case of histogram estimation, information about a client's sample can be "split" into multiple parts, and the client can communicate only one part. However, this means that the server collects less information, or effectively fewer samples to estimate the target quantity. For example, in the aforementioned case of model updates, each model coefficient is updated only by a subset of the clients. A quick calculation reveals that this increases the sensitivity of the estimate to each user's sample and therefore requires the addition of larger noise at the server to achieve the same privacy level, which leads to lower accuracy. We circumvent this challenge with a simple but insightful observation: when each client communicates only partial information about its sample, we can amplify privacy by randomly selecting the part contributed by each client. A downstream module which has only access to the final estimate revealed by the server does not know which part was contributed by which client, which leads to privacy amplification. Privacy amplification by subsampling has been studied in the prior literature [65, 14] but usually only in the case of privacy amplification when the server selects a random subset of the clients (from a larger pool of available clients). In our case, the randomness is incorporated in the compression scheme of each client and it relates to the piece of information communicated by each client and not to the choice of the participating clients. We call this gain privacy amplification via compression and use it to establish the optimal communication-privacy-accuracy trade-off for the central DP model. Note that this same type of gain cannot be leveraged in the local DP model where the server is untrusted and knows the message and the identity of each client. Indeed, in the local DP model the privacy-accuracy trade-off is known to be significantly worse than the central DP model (see Table 1). This naturally leads to a follow-up question: can we leverage privacy amplification via compression and achieve the same three-way trade-off by using secure aggregation [33] and shuffling [40] type models? The answer is no for secure aggregation. The communication cost for secure aggregation has been studied in [31] and is significantly larger than the communication cost for central DP we establish in this paper (see Table 1). On the other hand, the communication cost for shuffling remains an open problem as discussed in [31].
We resolve this open problem by showing that the optimal central DP trade-off can be also achieved with a multi-message shuffling scheme, which also establishes the optimal communication cost for multi-message shuffling schemes. As before, our scheme leverages a similar privacy amplification gain. Each client communicates partial information about its sample; the identity of the message is erased by the secure shuffler, and hence the untrusted server does not know which part is contributed by each client. However, to achieve the optimal trade-off, it is critical for each client to split its information into multiple messages and employ multiple shuffling rounds by carefully splitting the privacy budget across different rounds. In contrast, a similar gain cannot be leveraged in secure aggregation because the linearity of secure aggregation requires all participating clients to communicate consistent information (same parts), hence precluding privacy amplification by compression. See Table 1 for a detailed comparison. **Our contributions.** We consider distributed mean and frequency estimation as canonical building blocks for FL and FA. We consider both the central DP and the multi-message shuffling models. We characterize the order-optimal privacy-accuracy-communication trade-off for distributed mean estimation and provide an achievable scheme for frequency estimation in the central DP model. Our results reveal that privacy and communication efficiency can be achieved simultaneously with no additional penalty on accuracy. In particular, we show that \(\tilde{O}\left(n\min\left(\varepsilon,\varepsilon^{2}\right)\right)\) and \(\tilde{O}\left(\log\left(n\min\left(\varepsilon,\varepsilon^{2}\right)\right)\right)\) bits of (per-client) communication are sufficient to achieve the order-optimal error under \(\left(\varepsilon,\delta\right)\)-privacy for mean and frequency estimation respectively, where \(n\) is the number of participating clients. Without compression, each client needs \(O(d)\) bits and \(\log d\) bits for the mean and frequency estimation problems respectively (where \(d\) is the number of trainable parameters in FL or the domain size in FA), which means that we can get significant savings in the regime \(n\varepsilon^{2}=o(d)\) (assuming \(\varepsilon=O(1)\)). We note that this is often the relevant regime not only for cross-silo but also for cross-device FL/FA. For instance, in practical FL, \(d\) usually ranges from \(10^{6}\)-\(10^{9}\), and \(n\), the _per-epoch_ sample size, is usually much smaller (e.g., of the order of \(10^{3}\)-\(10^{5}\)). For distributed mean estimation, we show that the central DP trade-off can also be achieved with a multi-message shuffling scheme (within a \(\log d\) factor in communication cost). Hence our paper establishes the three-way trade-off between privacy, communication, and accuracy for both the central DP and multi-message shuffling frameworks, both of which were open problems in the prior literature. Compared with local DP where 1 bit is sufficient when \(\varepsilon=O(1)\), this shows that central/shuffling DP has a larger communication cost but can achieve much smaller error (by a factor of \(n\)) and hence is usually preferable in practical applications. 
Compared with distributed DP where the server aggregates local (encoded) messages with secure multi-party computation (e.g., [23, 8, 34]), we can improve the communication cost by a factor of \(n\), therefore showing that the communication cost can be reduced with a trusted server or shuffler. We summarize the comparisons of our main results to local and distributed DP in Table 1. **Notation.** Throughout this paper, we use \([m]\) to denote the set of \(\{1,...,m\}\) for any \(m\in\mathbb{N}\). Random variables (vectors) \((X_{1},...,X_{m})\) are denoted as \(X^{m}\). We also make use of Bachmann-Landau asymptotic notation, i.e., \(O,o,\Omega,\omega,\text{ and }\Theta\). ## 2 Problem Formulation We first present the distributed mean estimation (DME) [73] problem under differential privacy. Note that DME is closely related to federated learning with SGD (or similar stochastic optimization methods, such as FedAvg [67]), where in each iteration, the server updates the global model by a noisy mean of the local model updates. This noisy estimate is typically obtained by using a DME scheme, and thus one can easily build a distributed DP-SGD scheme (and hence a private FL scheme) from a differentially private DME scheme. Moreover, as shown in [49], as long as we have an unbiased estimate of the gradient at each round, the convergence rates of SGD (or DP-SGD) depend on the \(\ell_{2}\) estimation error. Distributed mean estimation.Consider \(n\) clients each with local data \(x_{i}\in\mathbb{R}^{d}\) that satisfies \(\left\|x_{i}\right\|_{2}\leq C\) for some constant \(C>0\) (one can think of \(x_{i}\) as a clipped local gradient). A server wants to learn an estimate \(\hat{\mu}\) of the mean \(\mu(x^{n})\triangleq\frac{1}{n}\sum_{i}x_{i}\) from \(x^{n}=(x_{1},\ldots,x_{n})\) after communicating with the \(n\) clients. 
Toward this end, each client locally compresses \(x_{i}\) into a \(b\)-bit message \(Y_{i}=\mathsf{enc}_{i}\left(x_{i}\right)\in\mathcal{Y}\) through a local encoder \(\mathsf{enc}_{i}:\mathcal{X}\mapsto\mathcal{Y}\) (where \(|\mathcal{Y}|\leq 2^{b}\)) and sends it to the central server, which upon receiving \(Y^{n}=(Y_{1},\ldots,Y_{n})\) computes an estimate \(\hat{\mu}=\mathsf{dec}\left(Y^{n}\right)\) that satisfies the following differential privacy guarantee: **Definition 2.1** (Differential Privacy).: _The mechanism \(\hat{\mu}\) is \((\varepsilon,\delta)\)-differentially private if for any neighboring datasets \(x^{n}\coloneqq(x_{1},...,x_{i},...,x_{n})\), \(x^{\prime n}\coloneqq(x_{1},...,x_{i}^{\prime},...,x_{n})\), and measurable \(\mathcal{S}\subseteq\mathcal{Y}\),_ \[\Pr\left\{\hat{\mu}\in\mathcal{S}|x^{n}\right\}\leq e^{\varepsilon}\cdot\Pr\left\{\hat{\mu}\in\mathcal{S}|x^{\prime n}\right\}+\delta,\] _where the probability is taken over the randomness of \(\hat{\mu}\)._ Our goal is to design schemes that minimize the \(\ell_{2}^{2}\) estimation error: \[\min_{(\mathsf{enc}_{1}(\cdot),...,\mathsf{enc}_{n}(\cdot),\mathsf{dec}(\cdot))}\max_{x^{n}}\mathbb{E}\left[\left\|\hat{\mu}\left(\mathsf{enc}_{1}(x_{1}),...,\mathsf{enc}_{n}(x_{n})\right)-\mu(x^{n})\right\|_{2}^{2}\right],\] subject to \(b\)-bit communication and \(\left(\varepsilon,\delta\right)\)-DP constraints.

\begin{table} \begin{tabular}{|c|c|c|} \hline & Communication (bits) & \(\ell_{2}\) error \\ \hline Local DP [32, 42] & \(\Theta\left(\left\lceil\varepsilon\right\rceil\right)\) & \(\Theta\left(\frac{d}{n\min(\varepsilon^{2},\varepsilon)}\right)\) \\ \hline Distributed DP (with SecAgg) [33] & \(\tilde{O}\left(n^{2}\min\left(\varepsilon,\varepsilon^{2}\right)\right)\) & \(\Theta\left(\frac{d}{n^{2}\min(\varepsilon^{2},\varepsilon)}\right)\) \\ \hline Central DP (Theorem 4.4) & \(\tilde{O}\left(n\min\left(\varepsilon,\varepsilon^{2}\right)\right)\) & \(O\left(\frac{d\log d}{n^{2}\min(\varepsilon^{2},\varepsilon)}\right)\) \\ \hline Shuffle DP (Theorem 6.4) & \(\tilde{O}\left(n\log(d)\min\left(\varepsilon,\varepsilon^{2}\right)\right)\) & \(O\left(\frac{d}{n^{2}\min(\varepsilon^{2},\varepsilon)}\right)\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of the communication costs of \(\ell_{2}\) mean estimation under local, distributed, central, and shuffle DP (with \(\delta\) terms hidden). Compared to local DP, we see that error under central DP decays much faster (e.g., \(1/n^{2}\) as opposed to \(1/n\)); compared to distributed DP with secure aggregation, our schemes achieve similar accuracy but save the communication cost by a factor of \(n\).

Distributed frequency estimation.Similarly, frequency estimation can also be formulated as a mean estimation problem but with sparse (one-hot) vectors. Let each user \(i\) hold an item \(x_{i}\) in a size \(d\) domain \(\mathcal{X}\). The server aims to estimate the histogram of the \(n\) items. Without loss of generality, we can assume that \(\mathcal{X}\coloneqq\left\{e_{1},...,e_{d}\right\}\in\left\{0,1\right\}^{d}\) (where \(e_{j}\) is the \(j\)-th standard basis vector in \(\mathbb{R}^{d}\)), i.e., each item is expressed as a one-hot vector. Then, the histogram of the \(n\) items can be expressed as \(\pi\left(x^{n}\right)\coloneqq\sum_{i\in\left[n\right]}x_{i}\).
Similar to the mean estimation problem, clients locally compute and then send \(Y_{i}=\mathsf{enc}_{i}\left(x_{i}\right)\in\mathcal{Y}\) (for some \(\mathcal{Y}\) such that \(\left|\mathcal{Y}\right|\leq 2^{b}\)), and the central server computes the estimate \(\hat{\pi}=\mathsf{dec}\left(Y^{n}\right)\). Our goal is to design schemes that minimize the \(\ell_{2}^{2}\) or \(\ell_{1}\) error3: Footnote 3: Note that the \(\ell_{1}\) error corresponds to the total variation distance between the true and estimated frequency vectors. \[\min_{\left(\mathsf{enc}_{1}\left(\cdot\right),...,\mathsf{enc}_{n}\left( \cdot\right),\mathsf{dec}\left(\cdot\right)\right)}\max_{x^{n}}\mathbb{E} \left[\left\|\hat{\pi}\left(\mathsf{enc}_{1}(x_{1}),...,\mathsf{enc}_{n}(x_{n} )\right)-\pi(x^{n})\right\|\right],\] subject to communication and DP constraints (where \(\left\|\cdot\right\|\) can be \(\ell_{1}\) or \(\ell_{2}^{2}\)). ## 3 Related Works Federated learning and distributed mean estimation.Federated learning [63, 67, 59] emerges as a decentralized machine learning framework that provides data confidentiality by retaining clients' raw data on edge devices. In FL, communication between clients and the central server can quickly become a bottleneck [67], so previous works have focused on compressing local model updates via gradient quantization [67, 10, 48, 73, 78, 76, 24], sparsification [18, 55, 41]. To further enhance data security, FL is often combined with differential privacy [38, 1, 9]. Among these works, [55] also employs gradient sparsification (or gradient subsampling) to reduce the problem dimensionality. However, the sparsification takes place _after_ the aggregation of local gradients, so the randomness introduced during sparsification cannot be leveraged to amplify the differential privacy guarantee. As a result, this approach leads to a suboptimal trade-off between privacy and communication compared to our scheme. Note that in this work, we consider FL (or more specifically, the distributed mean estimation) under a _central_-DP setting where the server is trusted, which is different from the local DP model [62, 37, 69, 75, 22, 32] and the distributed DP model with secure aggregation [23, 21, 58, 8, 33, 34]. A key step in our mean estimation scheme is pre-processing the local data via Kashin's representation [66]. While various compression schemes, based on quantization, sparsification, and dithering have been proposed in the recent literature, Kashin's representation has also been explored in a few works for communication efficiency [47, 72, 29, 70] and for LDP [42] and is particularly powerful in the case of joint communication and privacy constraints as it helps spread the information in a vector evenly in every dimension. Distributed frequency estimation and heavy hitters.Distributed frequency estimation (a.k.a. histogram estimation) is another canonical task that has been heavily studied under a distributed setting with DP. Prior works either focus on 1) the local DP model with or without communication constraints, e.g., [20, 19, 25, 26, 56] (under an \(\ell_{\infty}\) loss for heavy hitter estimation) and [57, 79, 75, 6, 5, 32, 46, 71, 45] (under an \(\ell_{1}\) or \(\ell_{2}\) loss), or 2) the central DP model _without_ communication constraints [38, 52, 64, 27, 13, 80, 36]. As suggested in [37, 3, 2, 4, 16], compared to central DP, local DP models usually incur much larger estimation errors and can significantly decrease the utility. 
In this work, we consider central DP but with explicit communication constraints. Local DP with shuffling.A recent line of works [40, 35, 12, 43, 50, 51] considers _shuffle_-DP, showing that one can significantly boost the central DP guarantees by randomly shuffling local (privatized) messages. In this work, we show that the same shuffling technique can be used to achieve the optimal central DP error with nearly optimal communication cost. Therefore, we can obtain the same level of central DP with small communication costs while weakening the security assumption: achieving the optimal communication cost (under central DP) only requires a secure shuffler (as opposed to a fully trusted central server).

## 4 Distributed Mean Estimation

In this section, we present a mean estimation scheme that achieves the optimal \(\tilde{O}_{\delta}\left(\frac{C^{2}d}{n^{2}\varepsilon^{2}}\right)\) error under \((\varepsilon,\delta)\)-DP while only using \(\tilde{O}(n\varepsilon^{2})\) bits of per-client communication. We first consider a slightly simpler, discrete setting with \(\ell_{\infty}\) geometry (as opposed to the \(\ell_{2}\) mean estimation stated in Section 2): assume each client observes \(x_{i}\in\left\{-c,c\right\}^{d}\) where \(c>0\) is a constant, and a central server aims to estimate the mean \(\mu\left(x^{n}\right)\coloneqq\frac{1}{n}\sum_{i=1}^{n}x_{i}\) by minimizing the \(\ell_{2}^{2}\) error subject to the privacy and communication constraints. We argue later that solutions to the above \(\ell_{\infty}\) problem can be used for \(\ell_{2}\) mean estimation by applying Kashin's representation. To solve the aforementioned \(\ell_{\infty}\) mean estimation problem, first observe that each client's local data can be expressed in \(d\) bits since each coordinate of \(x_{i}\) can only take values in \(\left\{c,-c\right\}\). To reduce the communication load to \(o(d)\) bits, each client adopts the following subsampling strategy: for each coordinate \(j\in[d]\), client \(i\) chooses to send \(x_{i}(j)\) to the server with probability \(\gamma\). We assume that this subsampling step is performed with a seed shared by the client and the server4, hence the server knows which coordinates are communicated by each client. Therefore upon receiving the client messages, it can compute the mean of each coordinate and privatize it by adding Gaussian noise. The key observation we leverage is that the randomness in the compression algorithm can be used to amplify privacy or equivalently reduce the magnitude of the Gaussian noise that is needed for privatization. Note that such randomness needs to be kept private from an adversary as the privacy guarantee of the scheme relies on it. Footnote 4: In practice, such randomness can be agreed by both sides ahead of time, or it can be generated by the server and communicated to each client.

```
Input: users' data \(x_{1},...,x_{n}\), sampling parameters \(\gamma\coloneqq b/d\), DP parameters \((\varepsilon,\delta)\).
Output: mean estimator \(\hat{\mu}\).
for user \(i\in[n]\) do
  for coordinate \(j\in[d]\) do
    Draw \(Z_{i,j}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathsf{Bern}(\gamma)\).
    if \(Z_{i,j}=1\) then
      Send \(x_{i}(j)\) to the server.
    endif
  endfor
endfor
for coordinate \(j\in[d]\) do
  Server computes the average \(\hat{\mu}_{j}\coloneqq\frac{1}{n\gamma}\sum_{i:Z_{ij}=1}x_{i}(j)+N(0,\sigma^{2})\), where \(\sigma^{2}\) is computed according to (1) in Theorem 4.1.
endfor
Return: \(\hat{\mu}\coloneqq(\hat{\mu}_{1},\hat{\mu}_{2},...,\hat{\mu}_{d})\).
``` **Algorithm 1** Coordinate Subsampled Gaussian Mechanism (CSGM) We summarize the scheme in Algorithm 1 and state its privacy and utility guarantees in the following theorem. **Theorem 4.1** (\(\ell_{\infty}\) mean estimation.).: _Let \(x_{1},...,x_{n}\in\left\{-c,c\right\}^{d}\) and let_ \[\sigma^{2}=O\left(\frac{c^{2}\log(1/\delta)}{n^{2}\gamma^{2}}+\frac{c^{2}d( \log(d/\delta)+\varepsilon)\log(d/\delta)}{n^{2}\varepsilon^{2}}\right). \tag{1}\] _Then for any \(\varepsilon,\delta>0\), Algorithm 1 is \((\varepsilon,\delta)\)-DP and yields an unbiased estimator on \(\mu\). In addition, the (average) per-client communication cost is \(\gamma\cdot d=b\) bits, and the \(\ell_{2}^{2}\) estimation error of \(\hat{\mu}\) is at most_ \[\mathbb{E}\left[\left\|\hat{\mu}-\mu\right\|_{2}^{2}\right]\leq \frac{dc^{2}}{n\gamma}+d\sigma^{2}\] \[=O\Big{(}\frac{d^{2}c^{2}}{nb}+\frac{d^{3}c^{2}\log(d/\delta)}{n^ {2}b^{2}}+\frac{c^{2}d^{2}(\log(1/\delta)+\varepsilon)\log(d/\delta)}{n^{2} \varepsilon^{2}}\Big{)}. \tag{2}\] **Remark 4.2** (Unbiasedness).: _Note that for mean estimation, we usually want the final mean estimator to be unbiased since standard convergence analyses of SGD [49] require an unbiased estimate of the true gradient in each optimization round. Given that our proposed mean estimation schemes (Algorithm 1 and Algorithm 2 in the next section) are all unbiased, we can combine them with SGD/federated averaging and readily apply [49] to obtain a convergence guarantee for the resulting communication-efficient DP-SGD._ For the \(\ell_{2}\) mean estimation task formulated in Section 2, we pre-process local vectors by first computing their Kashin's representations and then performing randomized rounding [61, 74, 42, 32]. Specifically, if \(x_{i}\) has \(\ell_{2}\) norm bounded by \(C\), then its Kashin's representation (with respect to a tight frame \(K\in\mathbb{R}^{d\times D}\) where \(D=\Theta(d)\)) \(\tilde{x}_{i}\) has bounded \(\ell_{\infty}\) norm: \(\left\|\tilde{x}_{i}\right\|_{\infty}\leq c=O\left(\frac{C}{\sqrt{d}}\right)\) and satisfies \(x_{i}=K\cdot\tilde{x}_{i}\). This allows us to convert the \(\ell_{2}\) geometry to an \(\ell_{\infty}\) geometry. Furthermore, by randomly rounding each coordinate of \(\tilde{x}_{i}\) to \(\{-c,c\}\) (see for example [32]), we can readily apply Algorithm 1 and obtain the following result for \(\ell_{2}\) mean estimation as a corollary: **Corollary 4.3** (\(\ell_{2}\) mean estimation).: _Let \(x_{1},...,x_{n}\in\mathcal{B}_{2}(C)\) (i.e., \(\left\|x_{i}\right\|_{2}\leq C\) for all \(i\in[n]\)). Then for any \(\varepsilon,\delta>0\), Algorithm 1 combined with Kashin's representation and randomized rounding yields an \((\varepsilon,\delta)\)-DP unbiased estimator for \(\mu\) with \(\ell_{2}^{2}\) estimation error bounded by_ \[O\left(\underbrace{\frac{dC^{2}}{nb}+\frac{C^{2}d^{2}\log(1/\delta)}{n^{2}b^{ 2}}}_{(\alpha)}+\underbrace{\frac{C^{2}d(\log(d/\delta)+\varepsilon)\log(d/ \delta)}{n^{2}\varepsilon^{2}}}_{(\beta)}\right). \tag{3}\] The first term \((\alpha)\) in the estimation error in Corollary 4.3 is the error due to compression, and the second term \((\beta)\) is the error due to privatization (which is order-optimal under \((\varepsilon,\delta)\)-DP up to an additional \(\log(d/\delta)\) factor as we discuss in Section 4.2). 
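To make the mechanism above concrete, here is a minimal NumPy sketch of Algorithm 1 (CSGM): each client reveals each coordinate independently with probability \(\gamma\), and the server averages the revealed values, rescales by \(1/(n\gamma)\), and adds per-coordinate Gaussian noise. The noise level \(\sigma\) is passed in as a parameter because Theorem 4.1 only pins it down up to constants, and the Kashin-representation preprocessing used for \(\ell_{2}\) vectors is omitted; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def csgm_mean_estimate(X, gamma, sigma, rng=None):
    """Sketch of Algorithm 1 (CSGM) for the l_inf setting above.

    X:     (n, d) array with entries in {-c, +c}.
    gamma: per-coordinate sampling probability, gamma = b / d.
    sigma: std of the Gaussian noise added to each coordinate; Theorem 4.1 only
           specifies the required sigma^2 up to constants, so it is an input here.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Each client reveals coordinate j independently with probability gamma
    # (in the paper this randomness comes from a seed shared with the server).
    Z = rng.random((n, d)) < gamma

    # Unbiased estimate: sum of revealed values rescaled by 1/(n*gamma),
    # then privatized with per-coordinate Gaussian noise.
    mu_hat = (X * Z).sum(axis=0) / (n * gamma)
    return mu_hat + rng.normal(0.0, sigma, size=d)

# Example with n = 1000 clients, d = 200 coordinates, gamma = 0.1
# (~20 coordinates, i.e. ~20 bits, revealed per client on average).
rng = np.random.default_rng(0)
X = 1.0 * (2 * rng.integers(0, 2, size=(1000, 200)) - 1)
print(np.mean((csgm_mean_estimate(X, 0.1, 0.05, rng) - X.mean(axis=0)) ** 2))
```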
In particular, if we ignore the poly-logarithmic terms and assume \(\varepsilon=O(1)\), the privatization error \((\beta)\) can be simplified to \(\tilde{O}\left(\frac{dC^{2}}{n^{2}\varepsilon^{2}}\right)\), which dominates the total \(\ell_{2}^{2}\) error when \(b=\tilde{\Omega}_{\delta}\left(\max\left(n\varepsilon^{2},\sqrt{d} \varepsilon\right)\right)\), i.e. in this regime the total \(\ell_{2}^{2}\) error is order-wise equal to the optimal centralized DP error \((\beta)\). This implies that no more than \(b=\tilde{\Omega}_{\delta}\left(\max\left(n\varepsilon^{2},\sqrt{d} \varepsilon\right)\right)\) bits per client are needed to achieve the order-optimal \(\ell_{2}^{2}\) error under \((\varepsilon,\delta)\)-DP. In the next section, we introduce a modification to Algorithm 1, which allows the removal of the \(\Omega\left(\sqrt{d}\varepsilon\right)\) term in the communication cost. ### Dimension-free communication cost In order to remove the dependence on the dimension \(d\) in the communication cost \(b=\tilde{\Omega}_{\delta}\left(\max\left(n\varepsilon^{2},\sqrt{d} \varepsilon\right)\right)\) from the previous section, we need to improve the performance of our scheme in the _small-sample_ regime \(n\varepsilon^{2}=o(\sqrt{d}\varepsilon)\). Equivalently, we want to be able to achieve the centralized DP performance by using only \(b=\tilde{\Omega}_{\delta}\left(n\varepsilon^{2}\right)\) bits per client when \(n\varepsilon=o(\sqrt{d})\). Assuming \(\varepsilon\approx 1\), note that this implies that the total communication bandwidth of the system \(nb=o(d)\), i.e. the server can receive information about at most \(nb=n^{2}\varepsilon^{2}=o(d)\) coordinates. We show that in this regime the performance of the scheme can be improved by a priori restricting the server's attention to a subset of the coordinates. We make the following modification to Algorithm 1: before performing Algorithm 1, the server randomly selects \(d^{\prime}\approx O\left(\min(d,n^{2}\varepsilon^{2})\right)\) coordinates and only requires clients to run Algorithm 1 on them. We present the modified scheme in Algorithm 2 and summarize its performance in Theorem 4.4. Similarly, we can obtain the following \(\ell_{2}\) mean estimation via Kashin's representations: **Theorem 4.4** (\(\ell_{2}\) mean estimation.).: _Let \(x_{1},...,x_{n}\in\mathcal{B}_{2}(C)\) (i.e., \(\left\|x_{i}\right\|_{2}\leq C\) for all \(i\in[n]\)), \(d^{\prime}=\min\left(d,nb,\frac{n^{2}\varepsilon^{2}}{(\log(1/\delta)+ \varepsilon)\log(d/\delta)}\right)\), and_ \[\sigma^{2}=O\left(\frac{C^{2}\log(1/\delta)}{d^{\prime}n^{2}\gamma^{2}}+\frac{C ^{2}d^{\prime}(\log(1/\delta)+\varepsilon)\log(d^{\prime}/\delta)}{dn^{2} \varepsilon^{2}}\right). \tag{4}\] _Then for any \(\varepsilon,\delta>0\), Algorithm 2 is \((\varepsilon,\delta)\)-DP. In addition, the (average) per-client communication cost is \(\gamma d=b\) bits, and the \(\ell_{2}^{2}\) estimation error is at most_ \[\text{O}\left(\max\left(\frac{C^{2}d\log(d/\delta)}{nb},\frac{C^{2}d\log(d/ \delta)(\log(1/\delta)+\varepsilon)}{n^{2}\varepsilon^{2}}\right)\right). 
\tag{5}\] **Corollary 4.5**.: _As long as \(b=\Omega\left(\frac{n\varepsilon^{2}}{\log(1/\delta)+\varepsilon}\right)\), the \(\ell_{2}^{2}\) error of mean estimation is_ \[\text{O}\left(\frac{C^{2}d\log(d/\delta)(\log(1/\delta)+\varepsilon)}{n^{2}\varepsilon^{2}}\right).\] As suggested by Corollary 4.5, we see that when \(\varepsilon=O(1)\), \(b=\tilde{\Omega}\left(n\varepsilon^{2}\right)\) bits per client are sufficient to achieve the order-optimal \(\tilde{O}_{\delta}\left(\frac{C^{2}d}{n^{2}\varepsilon^{2}}\right)\) error (even in the small sample regime \(n\leq\sqrt{d}\)), i.e. the communication cost of the scheme is independent of the dimension \(d\).

### Lower bounds

In this section, we argue that the estimation error in Theorem 4.4 is optimal up to a \(\log\left(d/\delta\right)\) factor. Specifically, Theorem 5.3 of [31] shows that any \(b\)-bit _unbiased_ compression scheme will incur \(\Omega\left(\frac{C^{2}d}{nb}\right)\) error for the \(\ell_{2}\) mean estimation problem (even when privacy is not required). This matches the first term in (5) up to a logarithmic factor. On the other hand, the centralized Gaussian mechanism (under a central \((\varepsilon,\delta)\)-DP) achieves \(O\left(\frac{C^{2}d\log(1/\delta)}{n^{2}\varepsilon^{2}}\right)\) MSE [15] (which is order-optimal in most parameter regimes; see the lower bounds in Theorem 3.1 of [28] or Proposition 23 of [30]). Hence, we can conclude that the total communication received by the server has to be at least \(\Omega(n^{2}\varepsilon^{2})\) bits in order to achieve the same error as the Gaussian mechanism. Therefore, the (average) per-client communication cost has to be at least \(\Omega(n\varepsilon^{2})\) bits. Hence we conclude that Algorithm 2 is optimal (up to a logarithmic factor). For completeness, we state the communication lower bound in the following theorem: **Theorem 4.6** (Communication lower bound for mean estimation under central DP).: _Let \(x_{1},...,x_{n}\in\mathcal{B}_{2}(C)\). Let \(Y_{1},...,Y_{n}\) be any \(b\)-bit local reports generated from a (possibly interactive) compressor and be unbiased in the sense that_ \[\mathbb{E}\left[\sum_{i}Y_{i}\right]=\sum_{i}x_{i}.\] _Then if_ \[\mathbb{E}\left[\left\|\frac{1}{n}\sum_{i}Y_{i}-\frac{1}{n}\sum_{i}x_{i}\right\|_{2}^{2}\right]\leq O\left(\frac{C^{2}d\log(1/\delta)}{n^{2}\varepsilon^{2}}\right),\] _it holds that_ \[b=\Omega\left(\frac{n\varepsilon^{2}}{\log(1/\delta)}\right).\] Finally, we remark that the logarithmic gap between the upper and lower bounds may be due to the specific composition theorem (Theorem III.3 of [39]) we use in our proof, which is simpler to work with but possibly slightly weaker. However, in our experiments, we compute and account for all privacy budgets with Renyi DP [68, 81], and hence can obtain better constants compared to our theoretical analysis.

## 5 Distributed Frequency Estimation

In this section, we consider the frequency estimation problem for federated analytics. Recall that for the frequency estimation task, each client's private data \(x_{i}\in\{0,1\}^{d}\) satisfies \(\left\|x_{i}\right\|_{0}=1\), and the goal is to estimate \(\pi\coloneqq\frac{1}{n}\sum_{i}x_{i}\) by minimizing the \(\ell_{2}\) (or \(\ell_{1},\ell_{\infty}\)) error \(\mathbb{E}\left[\left\|\pi-\hat{\pi}(Y^{n})\right\|_{2}^{2}\right]\) subject to communication and \((\varepsilon,\delta)\)-DP constraints. When the context is clear, we sometimes use \(x_{i}\) to denote, by abuse of notation, the index of the item, i.e., \(x_{i}\in[d]\).
To fully make use of the \(\ell_{0}\) structure of the problem, a standard technique is applying a Hadamard transform to convert the \(\ell_{0}\) geometry to an \(\ell_{\infty}\) one and then leveraging the recursive structure of Hadamard matrices to efficiently compress local messages. Specifically, for a given \(b\)-bit constraint, we partition each local item \(x_{i}\) into \(2^{b-1}\) chunks \(x_{i}^{(1)},...,x_{i}^{(2^{b-1})}\in\{0,1\}^{B}\), where \(B\coloneqq d/2^{b-1}\) and \(x_{i}^{(j)}=x_{i}[B\cdot(j-1):B\cdot j-1]\). Note that since \(x_{i}\) is one-hot, only one chunk of \(x_{i}^{(j)}\) is non-zero. Then, client \(i\) performs the following Hadamard transform for each chunk: \(y_{i}^{(\ell)}=H_{B}\cdot x_{i}^{(\ell)}\), where \(H_{B}\) is defined recursively as follows: \[H_{2^{n}}=\frac{1}{\sqrt{2}}\begin{bmatrix}H_{2^{n-1}},&H_{2^{n-1}}\\ H_{2^{n-1}},&-H_{2^{n-1}}\end{bmatrix},\text{ and }H_{1}=\begin{bmatrix}1\end{bmatrix}.\] Each client then generates a sampling vector \(Z_{ij}\overset{\text{i.i.d.}}{\sim}\mathsf{Bern}\left(\frac{1}{B}\right)\) via shared randomness that is also known by the server, and commits \((y_{i}^{(1)}(j),...,y_{i}^{(2^{b-1})}(j))\) as its local report. Since \((y_{i}^{(1)}(j),...,y_{i}^{(2^{b-1})}(j))\) only contains a single non-zero entry that can be \(\frac{1}{\sqrt{B}}\) or \(-\frac{1}{\sqrt{B}}\), the local report can be represented in \(b\) bits (\(b-1\) bits for the location of the non-zero entry and \(1\) bit for its sign). From the local reports, the server can compute an unbiased estimator by summing them together (with proper normalization) and performing an inverse Hadamard transform. Moreover, with an adequate injection of Gaussian noise, the frequency estimator satisfies \((\varepsilon,\delta)\)-DP. The idea has been used in previous literature under local DP [19, 6, 3, 32], but in order to obtain the order-optimal trade-off under _central_-DP, one has to combine Hadamard transform with a random subsampling step and incorporate the privacy amplification due to random compression in the analysis. In Algorithm 3, we provide a summary of the resultant scheme which builds on the Recursive Hadamard Response (RHR) mechanism from [32], which was originally designed for communication-efficient frequency estimation under _local_ DP. In the following theorem, we control the \(\ell_{\infty}\) error of Algorithm 3. **Theorem 5.1**.: _Let \(\hat{\pi}(x^{n})\) be the output of Algorithm 3. Then it holds that for all \(j\in[d]\),_ \[\mathbb{E}\left[\left|\pi(j)-\hat{\pi}(j)\right|\right]\leq\sqrt{\frac{\sum_{i}\mathbb{1}_{\{x_{i}\in[B\cdot(j-1),\,B\cdot j-1]\}}}{n^{2}}+\frac{\sigma^{2}}{B}}, \tag{6}\] _and the \(\ell_{2}^{2}\) and \(\ell_{1}\) errors are bounded by_ \[\mathbb{E}\left[\left\|\pi-\hat{\pi}\right\|_{2}^{2}\right]\leq\frac{B}{n}+\frac{d\sigma^{2}}{B},\text{ and } \tag{7}\] \[\mathbb{E}\left[\left\|\pi-\hat{\pi}\right\|_{1}\right]\leq\sqrt{\frac{dB}{n}+\frac{d^{2}\sigma^{2}}{B}}. \tag{8}\]
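Since Algorithm 3 is only referenced above (its listing appears in the paper, not in this excerpt), the following is a minimal NumPy sketch of the chunked-Hadamard encoding and the server-side unbiased decoding described in this section. Two simplifications are assumptions of the sketch: each client samples exactly one coordinate \(j\) instead of a Bernoulli\((1/B)\) mask, and the Gaussian noise scale \(\sigma\) is passed in directly rather than calibrated as in Theorem 5.2 below.

```python
import numpy as np

def hadamard(B):
    """Normalized Sylvester construction: orthonormal H_B with entries +-1/sqrt(B).
    B must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < B:
        H = np.block([[H, H], [H, -H]]) / np.sqrt(2.0)
    return H

def chunked_hadamard_frequency_estimate(items, d, b, sigma, rng=None):
    """items: length-n array of item indices in [0, d); b: per-client bits;
    d is split into 2^(b-1) chunks of (power-of-two) size B = d / 2^(b-1)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(items)
    num_chunks = 2 ** (b - 1)
    B = d // num_chunks
    H = hadamard(B)

    # Server-side accumulator of (estimates of) the Hadamard-transformed chunks.
    acc = np.zeros((num_chunks, B))
    for item in items:
        chunk, pos = divmod(int(item), B)
        j = rng.integers(B)            # shared-randomness coordinate for this client
        # The client only reports y^(chunk)(j) = H[j, pos] (a sign and a chunk id);
        # rescaling by B makes B * y(j) * e_j an unbiased estimate of H x^(chunk).
        acc[chunk, j] += B * H[j, pos]

    acc += rng.normal(0.0, sigma, size=acc.shape)   # Gaussian privatization
    # H is orthonormal, so applying H^T inverts the transform; normalize by n.
    pi_hat = (acc @ H) / n
    return pi_hat.reshape(-1)          # estimate of pi = (1/n) * sum_i x_i
```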
**Theorem 5.2**.: _For any \(\varepsilon,\delta>0\), Algorithm 3 is \((\varepsilon,\delta)\)-DP, if_ \[\sigma^{2}\geq O\left(\frac{B^{2}\log(B/\delta)}{n^{2}}+\frac{B(\log(1/\delta)+\varepsilon)\log(B/\delta)}{n^{2}\varepsilon^{2}}\right).\] By combining Theorem 5.1 and Theorem 5.2, we conclude that Algorithm 3 achieves \((\varepsilon,\delta)\)-DP with \(\ell_{2}^{2}\) error \[O\left(\frac{B}{n}+\frac{dB\log(B/\delta)}{n^{2}}+\frac{d(\log(1/\delta)+\varepsilon)\log(B/\delta)}{n^{2}\varepsilon^{2}}\right)\] \[=O\left(\frac{d}{n^{2}b}+\frac{d^{2}\log(d/\delta)}{n^{2}b}+\frac{d(\log(1/\delta)+\varepsilon)\log(d/\delta)}{n^{2}\varepsilon^{2}}\right).\] Notice that when \(n=\tilde{\Omega}(d)\), the error can be simplified to \[O\left(\frac{d}{n^{2}b}+\frac{d(\log(1/\delta)+\varepsilon)\log(d/\delta)}{n^{2}\varepsilon^{2}}\right),\] which matches the order-optimal estimation error (up to a \(\log d\) factor) subject to a \(b\)-bit constraint [54, 3, 2] and \((\varepsilon,\delta)\)-DP constraint [15, 7].

## 6 Achieving the Optimal Trade-off via Shuffling

In Section 4 and Section 5, we see that the communication cost can be reduced (to \(\tilde{O}\left(n\varepsilon^{2}\right)\) bits for mean estimation and \(\tilde{O}\left(\log\left(\lceil n\varepsilon^{2}\rceil\right)\right)\) bits for frequency estimation) while still achieving the order-wise optimal error, as long as the server is _trusted_. On the other hand, when the server is untrusted, [33, 31] show that optimal error under \(\left(\varepsilon,\delta\right)\)-DP can be achieved with secure aggregation. However, the communication cost of these schemes is \(\tilde{O}\left(n^{2}\varepsilon^{2}\right)\) bits per client for mean estimation and \(\tilde{O}\left(n\varepsilon\right)\) bits per client for frequency estimation. This corresponds to a factor of \(n\) increase for mean estimation and an exponential increase for frequency estimation. In this section, we investigate whether the optimal communication-accuracy-privacy trade-off from the previous sections can be achieved when the server is not fully trusted. We show that if there exists a _secure_ shuffler that randomly permutes clients' locally privatized messages and releases them to the server, we can achieve the nearly optimal (within a \(\log d\) factor) central-DP error in mean estimation with \(\tilde{O}\left(n\varepsilon^{2}\right)\) bits of communication. Specifically, we present a mean estimation scheme that combines a local-DP mechanism with privacy amplification via shuffling by building on the following recent result [40, 43]: **Lemma 6.1** ([43]).: _Let \(\mathcal{M}_{i}\) be an independent \(\left(\varepsilon_{0},0\right)\)-LDP mechanism for each \(i\in[n]\) with \(\varepsilon_{0}\leq 1\) and \(\pi\) be a random permutation of \([n]\). Then for any \(\delta\in[0,1]\) such that \(\varepsilon_{0}\leq\log\left(\frac{n}{16\log\left(2/\delta\right)}\right)\), the mechanism_ \[\mathcal{S}:\left(x_{1},\ldots,x_{n}\right)\mapsto\left(\mathcal{M}_{1}\left(x_{\pi(1)}\right),\ldots,\mathcal{M}_{n}\left(x_{\pi(n)}\right)\right)\] _is \(\left(\varepsilon,\delta\right)\)-DP for some \(\varepsilon\) such that \(\varepsilon=O\left(\varepsilon_{0}\frac{\sqrt{\log\left(1/\delta\right)}}{\sqrt{n}}\right).\)_ Privacy analysis.With the above amplification lemma, we only need to design the local randomizers \(\mathcal{M}_{i}\) that satisfy \(\varepsilon_{0}\)-LDP.
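As a small illustration of the mechanism \(\mathcal{S}\) in Lemma 6.1, the following sketch applies each client's \(\varepsilon_{0}\)-LDP randomizer to a uniformly permuted assignment of the inputs, so that the released reports carry no link to client identities. The local randomizers themselves (e.g., SQKR, discussed below) are passed in as callables and are not implemented here.

```python
import numpy as np

def shuffle_mechanism(local_randomizers, data, rng=None):
    """Mechanism S of Lemma 6.1: apply the i-th local (eps_0, 0)-LDP randomizer
    to a uniformly random permutation of the clients' inputs, so the server
    cannot associate any released report with a particular client."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(len(data))
    return [M(data[p]) for M, p in zip(local_randomizers, perm)]
```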
Note that the above lemma is only tight when \(\varepsilon_{0}=O(1)\), thus restricting the (amplified) central \(\varepsilon=O(1/\sqrt{n})\), i.e. to be very small. To accommodate larger \(\varepsilon\), users can send different portions of their messages to the server in separate shuffling rounds. Equivalently, we repeat the shuffled LDP mechanism for \(T=O\left(\lceil n\varepsilon^{2}\rceil\right)\) rounds while ensuring that in each round clients communicate an independent piece of information about their sample to the server. More precisely, within each round, each client applies the local randomizers \(\mathcal{M}_{i}\) with a per-round _local privacy budget_\(\varepsilon_{0}=O(1)\) and sends an independent message to the server. This results in (amplified) central \(O(1/\sqrt{n})\)-DP per round, which after composition over \(T=O\left(\lceil n\varepsilon^{2}\rceil\right)\) rounds leads to \(\varepsilon\)-DP for the overall scheme as suggested by the composition theorem [57]). We detail the algorithm in Algorithm 4 in Appendix E. **Remark 6.2**.: _Although our analysis mainly focuses on \(\left(\varepsilon,\delta\right)\)-DP, one can also obtain Renyi DP guarantees (which may facilitate practical privacy accounting) using the recent amplification result [44]._ Communication costs.The communication cost of the above \(T\)-round scheme can be computed as follows. As shown in [32], the optimal communication cost of an \(\varepsilon_{0}\)-LDP mean estimation is \(O\left(\lceil\varepsilon_{0}\rceil\right)\) bits. In addition, the (private-coin) SQKR scheme proposed in [32] uses \(O\left(\lceil\varepsilon_{0}\rceil\log d\right)\) bits of communication (we state the formal performance guarantee for this scheme in Lemma 6.3), where compression is done by subsampling coordinates and privatization is performed with Randomized Response. Therefore, since the per-round \(\varepsilon_{0}=O(1)\), the total per-client communication cost is \(O\left(n\varepsilon^{2}\log d\right)\), matching the optimal communication bounds in Section 4 within a \(\log d\) factor. **Lemma 6.3** (SQKR [32]).: _For all \(\varepsilon_{0}>0,b_{0}>0\), there exists a \(\left(\varepsilon_{0},0\right)\)-LDP mechanism \(x_{i}\mapsto\hat{\mu}\) using \(b_{0}\log(d)\) bits such that \(\hat{\mu}\) is unbiased and satisfies_ \[\mathbb{E}\left[\left\|\mu\left(x^{n}\right)-\hat{\mu}\left(x^{n}\right)\right\| _{2}^{2}\right]=O\left(\frac{c^{2}d}{n\min\left(\varepsilon_{0}^{2}, \varepsilon_{0},b_{0}\right)}\right).\] Finally, we summarize the performance guarantee for the overall scheme (Algorithm 4) in the following theorem. **Theorem 6.4** (\(\ell_{2}\) mean estimation).: _Let \(x_{1},...,x_{n}\in\mathcal{B}_{2}(C)\) (i.e., \(\left\|x_{i}\right\|_{2}\leq C\) for all \(i\in[n]\)). For all \(\varepsilon>0,b>0,n>30\), and \(\delta\in(\delta_{\min},1]\) where \(\delta_{\min}=O\left(\frac{be^{-n}}{\log(d)}\right)\), Algorithm 4 combined with Kashin's representation and randomized rounding is \((\varepsilon,\delta)\)-DP, uses no more than \(b\) bits of communication, and achieves_ \[\mathbb{E}\left[\left\|\mu\left(x^{n}\right)-\hat{\mu}\left(x^{n}\right)\right\| _{2}^{2}\right]=O\left(C^{2}d\max\left(\frac{\log(d)}{nb},\frac{\log(b/\delta) (\log(1/\delta)+\varepsilon)}{n^{2}\varepsilon^{2}}\right)\right).\] **Remark 6.5**.: _As opposed to previous schemes Algorithm 1-3, the shuffled SQKR requires some condition on \(\delta\), i.e., \(\delta\in[\delta_{\min},1]\) due to the specific shuffling lemma we used. 
In practice, however, \(\delta_{\min}\) is small due to the exponential dependence on \(n\). The order-wise optimal error of \(O\left(\frac{C^{2}d}{n^{2}\min(\varepsilon^{2},\varepsilon)}\right)\) is achieved, up to logarithmic factors, when \(b=\Omega_{\delta}\left(n\log(d)\min\left(\varepsilon^{2},\varepsilon\right)\right)\)._ **Remark 6.6**.: _We note that similar ideas of private mean estimation based on shuffling have been studied before, see, for instance, [53]. However, these papers do not use the above privacy budget splitting trick over multiple rounds, so their result is only optimal when \(\varepsilon\) is very small. The above scheme can be viewed as a multi-message shuffling scheme [35, 51], and in particular, can be regarded as a generalization of the scalar mean estimation scheme [35] to \(d\)-dim mean estimation._

## 7 Experiments

In this section, we empirically evaluate our mean estimation scheme (CSGM) from Section 4, examine its privacy-accuracy-communication trade-off, and compare it with other DP mechanisms (including the shuffling-based mechanism introduced in Section 6). Setup.For a given dimension \(d\) and number of samples \(n\), we generate local vectors \(X_{i}\in\mathbb{R}^{d}\) as follows: let \(X_{i}(j)\overset{\mathrm{i.i.d.}}{\sim}\frac{1}{\sqrt{d}}\left(2\cdot\mathsf{Ber}(0.8)-1\right)\) where \(\mathsf{Ber}(0.8)\) is a Bernoulli random variable with bias \(p=0.8\). This ensures \(\left\|X_{i}\right\|_{\infty}\leq 1/\sqrt{d}\) and \(\left\|X_{i}\right\|_{2}\leq 1\), and in addition, the empirical mean \(\mu\left(X^{n}\right)\coloneqq\frac{1}{n}\sum_{i}X_{i}\) does not converge to \(0\). Note that as our goal is to construct an unbiased estimator, we did not project our final estimator back to the \(\ell_{\infty}\) or \(\ell_{2}\) space as the projection step may introduce bias. Therefore, the \(\ell_{2}\) estimation error can be greater than \(1\). We account for the privacy budget with Renyi DP [68] and the privacy-amplification by subsampling lemma in [81] and convert Renyi DP to \((\varepsilon,\delta)\)-DP via [30]. Privacy-accuracy-communication trade-off of CSGM.In the first set of experiments (Figure 1), we apply Algorithm 1 with different sampling rates \(\gamma\), which leads to different communication budgets (\(b=\gamma d\)). Note that when \(\gamma=1\), the scheme reduces to the central Gaussian mechanism without compression. In Figure 1, we see that with a fixed communication budget, CSGM approximates the central (uncompressed) Gaussian mechanism in the high privacy regime (small \(\varepsilon\)) and starts deviating from it when \(\varepsilon\) exceeds a certain value. In addition, that value of \(\varepsilon\) depends only on sample size \(n\) and the communication budget \(b\) and not the dimension \(d\) as predicted by our theory: recall that the compression error dominates the total error, and hence the performance starts to deviate from the (uncompressed) Gaussian mechanism when \(b=o(n\varepsilon^{2})\), a condition that is independent of \(d\). Observe, for example, that when \(b=50\) bits, the Gaussian mechanism starts outperforming CSGM at \(\varepsilon\geq 0.5\) for both \(d=500\) and \(d=5000\). Hence, for \(\varepsilon\approx 0.5\) CSGM is able to provide 10X compression when \(d=500\), but 100X compression when \(d=5000\) without impacting MSE. Comparison with local and shuffle DP.Next, we compare the CSGM with local and shuffled DP for \(d=10^{3}\) and \(n=500\).
For local DP, we consider the private-coin SQKR scheme introduced in Section 6 which uses \(\lceil\log d\rceil=10\) bits when \(\varepsilon\leq 1\) and DJW [37] which is known to be order-optimal when \(\varepsilon=O(1)\) (but is not communication-efficient). For shuffle-DP, we apply the amplification lemma in [43] to find the corresponding local \(\varepsilon_{0}\) (see Section 6 for more details) and simulate both SQKR and DJW as the local randomizers. We note that all shuffle-DP mechanisms considered in this experiment are single-round (as opposed to the multi-round schemes in Section 6), which is optimal in the high privacy regime (i.e., when \(\varepsilon\) is small). The MSEs of all mechanisms are reported in Figure 2. Our results suggest that for a fixed communication budget (say, 10 bits), CSGM outperforms the shuffled-DP mechanisms in practice, including the shuffled SQKR and DJW. In addition, the amplification gain of shuffling diminishes fast as \(\varepsilon\) increases. Indeed, when \(\varepsilon\geq 0.8\), we observe no amplification gain compared to the pure local DP.
2303.05780
TAKT: Target-Aware Knowledge Transfer for Whole Slide Image Classification
Transferring knowledge from a source domain to a target domain can be crucial for whole slide image classification, since the number of samples in a dataset is often limited due to high annotation costs. However, domain shift and task discrepancy between datasets can hinder effective knowledge transfer. In this paper, we propose a Target-Aware Knowledge Transfer framework, employing a teacher-student paradigm. Our framework enables the teacher model to learn common knowledge from the source and target domains by actively incorporating unlabelled target images into the training of the teacher model. The teacher bag features are subsequently adapted to supervise the training of the student model on the target domain. Despite incorporating the target features during training, the teacher model tends to overlook them under the inherent domain shift and task discrepancy. To alleviate this, we introduce a target-aware feature alignment module to establish a transferable latent relationship between the source and target features by solving the optimal transport problem. Experimental results show that models employing knowledge transfer outperform those trained from scratch, and our method achieves state-of-the-art performance among other knowledge transfer methods on various datasets, including TCGA-RCC, TCGA-NSCLC, and Camelyon16.
Conghao Xiong, Yi Lin, Hao Chen, Hao Zheng, Dong Wei, Yefeng Zheng, Joseph J. Y. Sung, Irwin King
2023-03-10T08:29:35Z
http://arxiv.org/abs/2303.05780v2
# Knowledge Transfer via Multi-Head Feature Adaptation for Whole Slide Image Classification

###### Abstract

Transferring prior knowledge from a source domain to the same or similar target domain can greatly enhance the performance of models on the target domain. However, it is challenging to directly leverage the knowledge from the source domain due to task discrepancy and domain shift. To bridge the gaps between different tasks and domains, we propose a Multi-Head Feature Adaptation module, which projects features in the source feature space to a new space that is more similar to the target space. Knowledge transfer is particularly important in Whole Slide Image (WSI) classification since the number of WSIs in one dataset might be too small to achieve satisfactory performance. Therefore, WSI classification is an ideal testbed for our method, and we adapt multiple knowledge transfer methods for WSI classification. The experimental results show that models with knowledge transfer outperform models that are trained from scratch by a large margin regardless of the number of WSIs in the datasets, and our method achieves state-of-the-art performance among knowledge transfer methods on multiple datasets, including the TCGA-RCC, TCGA-NSCLC, and Camelyon16 datasets. Keywords: Whole Slide Image Classification, Knowledge Transfer

## 1 Introduction

Deep learning is the de facto method in computer vision [10, 21, 29, 31] and **W**hole **S**lide **I**mage (WSI) classification, such as breast [19], prostate [3], skin [13], pancreas [17] and lung cancer [5], _etc._ However, for some diseases, the patient cohort is small, limiting the total number of WSIs in the dataset [2]. Furthermore, WSIs are gigapixel images and it takes several hours for a pathologist to annotate one single WSI, which is labour-intensive and expensive. Hence, the number of WSIs can be insufficient, resulting in an unsatisfactory performance of the model. Transfer learning, which transfers prior knowledge in a pre-trained model (teacher model) obtained from a source domain to the model (student model) on a target domain, is one of the feasible ways to tackle the data insufficiency problem [21, 31]. The most frequently-used transfer learning method is fine-tuning, which initialises the student model with weights from a teacher model. Also, feature transfer is another option, which transfers the features into a new feature space [14]. Most methods for WSI classification only use the models pre-trained on ImageNet to extract features from patches [16, 23, 24]. However, transfer learning faces the challenges of task discrepancy and domain shift. The tasks of the source domain and target domain can be quite different. For instance, the tasks of **T**he **C**ancer **G**enome **A**tlas (TCGA) datasets can be cancer subtyping, survival prediction, _etc._, while that of Camelyon16 [18] is to detect metastases. Also, differences between domains can be significant. First, tumour regions of Camelyon16 only take up less than 10% of the WSIs, while they can account for 80% in TCGA datasets. Second, different institutions might have different staining materials, leading to tone differences in the WSIs. Adding a projection head, _i.e._, feature transfer, is one viable way of solving these problems [4, 8]. Direct supervision is "hard supervision", in which the outputs from the teacher model are unchangeable. In contrast, adding a projection head is "soft supervision" that allows slight errors. The projection head maps features from the teacher model (teacher features) to another feature space that is closer to the target feature space, so that the teacher knowledge can be more accurately transferred to the student model. We propose a **M**ulti-**H**ead **F**eature **A**daptation (MHFA) module that follows this path and utilises **M**ulti-**H**ead **A**ttention (MHA) [25, 26] to fully exploit new patterns and combinations that are closer to the target space.
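The MHFA architecture itself is specified in the following sections; purely as an illustration of the projection-head idea introduced here, a multi-head-attention-based adapter that maps a teacher bag feature of dimension \(d_{t}\) into the student feature space of dimension \(d_{s}\) could look like the following sketch. Splitting the projected feature into tokens, the layer sizes, and the head count are all assumptions made for this example; it is not the authors' MHFA implementation.

```python
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """Illustrative projection head: adapts a teacher bag feature (dim d_t) into
    the student feature space (dim d_s) with multi-head attention. A sketch of
    the general idea only -- not the paper's exact MHFA module."""

    def __init__(self, d_t, d_s, num_tokens=8, num_heads=4):
        super().__init__()
        assert d_s % num_tokens == 0
        self.num_tokens = num_tokens
        self.token_dim = d_s // num_tokens
        self.proj_in = nn.Linear(d_t, d_s)   # move into the student dimension
        self.attn = nn.MultiheadAttention(self.token_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(self.token_dim)

    def forward(self, h_t):
        # h_t: (batch, d_t) teacher bag features.
        b = h_t.shape[0]
        x = self.proj_in(h_t).view(b, self.num_tokens, self.token_dim)
        attn_out, _ = self.attn(x, x, x)     # let the projected pieces interact
        x = self.norm(x + attn_out)
        return x.reshape(b, -1)              # (batch, d_s) adapted feature

# Usage: the adapted teacher feature supervises the student bag feature, e.g.
# with a squared-error (RSS-style) loss on top of the usual classification loss.
adapter = FeatureAdapter(d_t=1024, d_s=512)
h_teacher, h_student = torch.randn(4, 1024), torch.randn(4, 512)
transfer_loss = ((adapter(h_teacher) - h_student) ** 2).sum(dim=1).mean()
```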
The projection head maps features from the teacher model (teacher features) to another feature space that has less distinction with the target feature space, in which way the teacher knowledge can be more accurately transferred to the student model. We propose a **M**ulti-**H**ead **F**eature **A**daptation (MHFA) module that follows this path and utilises **M**ulti-**H**ead **A**ttention (MHA) [25, 26] to fully exploit new patterns and combinations that are closer to the target space. Besides the data insufficiency scenario, Our method also provides consistent improvements when the data is abundant. Additionally, we also investigate knowledge distillation, which transfers knowledge from a large teacher model to a smaller student model on the same dataset [1, 6, 9, 11, 27]. Given the fact that the sizes of the new models are growing [7, 22, 28], knowledge distillation compresses large models to smaller ones so that users with limited computational resources can also achieve comparable performances. Our method is also applicable to knowledge distillation since transfer learning and knowledge distillation can be formulated under knowledge transfer [1]. 1. We propose an MHFA module to project the teacher features to another space so that the gaps between different tasks and domains can be bridged. 2. We adapt multiple knowledge transfer methods for WSI classification to overcome the data insufficiency problem by leveraging prior knowledge. 3. We conduct extensive experiments on multiple datasets, including TCGA-RCC, TCGA-NSCLC and Camelyon16. The experimental results show that our method outperforms other methods consistently. ## 2 Methodology ### Knowledge Transfer Formulation **Preliminary Definitions.** A domain \(\mathcal{D}=(\mathcal{X},P(\mathbf{X}))\) consists of a feature space \(\mathcal{X}\) and a marginal probability distribution \(P(\mathbf{X})\), where \(\mathbf{X}=\{\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\}\in\mathcal{X}\). A task \(\mathcal{T}=(\mathcal{Y},f(\cdot))\) is also comprised of two components: the corresponding label collection \(\mathcal{Y}\) and the objective predictive function \(f(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\). **Knowledge Transfer.** Considering the source and target domains \(\mathcal{D}_{s},\mathcal{D}_{t}\) and the tasks on the two domains \(\mathcal{T}_{s},\mathcal{T}_{t}\) with teacher model \(f_{t}(\cdot)\) trained and student model \(f_{s}(\cdot)\) untrained, knowledge transfer utilises \(f_{t}(\cdot)\) from the source domain to enhance the objective predictive function at the target domain \(f_{s}(\cdot)\). Furthermore, if \(\mathcal{D}_{s}\neq\mathcal{D}_{t}\) or \(\mathcal{T}_{s}\neq\mathcal{T}_{t}\), this is the transfer learning scenario, and if \(\mathcal{D}_{s}=\mathcal{D}_{t},\mathcal{T}_{s}=\mathcal{T}_{t}\) and \(f_{t}(\cdot)\) is more complex than \(f_{s}(\cdot)\), it is the knowledge distillation scenario. The goal of this work is to investigate knowledge transfer methods applicable to both transfer learning and knowledge distillation. ### Knowledge Transfer Framework for WSI Classification The overview of the knowledge transfer framework is shown in Fig. 1. Given the teacher model \(f_{t}(\cdot)\) trained on \(\mathcal{D}_{s}\) with \(\mathcal{T}_{s}\), knowledge transfer delivers the knowledge acquired in \(f_{t}(\cdot)\) to the student model \(f_{s}(\cdot)\) on \(\mathcal{D}_{t}\) with \(\mathcal{T}_{t}\). In this work, we study four knowledge transfer techniques for WSI classification. 
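The four techniques are described next. Apart from fine-tuning, they share a common training pattern: a frozen teacher provides a supervision signal (logits, attention scores, or bag features) that the student matches through an \(\alpha\)-weighted RSS term added to the task loss, as formalised in Eq. (1) below. A minimal PyTorch-style sketch of this shared step is given here; the dictionary-style model interface and all names are illustrative assumptions, not the paper's released code.

```python
import torch

def rss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Residual Sum of Squares between two supervision signals."""
    return ((a - b) ** 2).sum()

def knowledge_transfer_step(teacher, student, bag, label, task_loss_fn,
                            alpha: float = 0.1, signal: str = "feature"):
    """One step of logit/attention/feature transfer (cf. Eq. (1) below).

    Both models are assumed to take one bag of patch features and return a
    dict with keys "logits", "attention" and "feature"; the teacher is frozen.
    """
    with torch.no_grad():
        t_out = teacher(bag)                      # teacher supervision signal
    s_out = student(bag)
    loss = task_loss_fn(s_out["logits"], label)   # classification loss of the target task
    loss = loss + alpha * rss(t_out[signal], s_out[signal])
    return loss
```

The default \(\alpha=0.1\) mirrors the coefficient used in the experiments later; which signal is matched distinguishes the three transfer variants.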
**Fine-tuning.** Initialisation has significant impacts on neural network training. Models that are initialised with pre-trained weights on related datasets tend to converge faster and achieve better performance. Hence, fine-tuning, meaning that the model is first initialised with a pre-trained model and then trained on the target dataset, is a popular method since the preliminary knowledge from the source dataset can greatly facilitate the training on the target dataset. **Logit Transfer.** Logits are the predicted probability distribution of the labels. Logits from a well-trained teacher model \(\mathbf{p}_{t}\in\mathbb{R}^{1\times c}\), where \(c\) is the number of classes, can enhance the student model [12] by pulling the logits of the student model \(\mathbf{p}_{s}\in\mathbb{R}^{1\times c}\) closer to \(\mathbf{p}_{t}\), since \(\mathbf{p}_{t}\) can serve as pseudo-labels for \(\mathbf{p}_{s}\). Figure 1: Illustrations of knowledge transfer techniques and our MHFA module. \(f_{t}(\cdot)\) is the teacher model trained on \(\mathcal{D}_{s}\) and \(f_{s}(\cdot)\) is the student model on \(\mathcal{D}_{t}\). Attention Transfer.Under the MIL setting, the WSIs are divided into patches and the importance of each patch can differ. The attention map indicates the significance of each patch, and it can be easily derived from attention-based MIL models. Therefore, assume the number of patches is \(n\), pulling the attention maps \(\mathbf{a}_{s}\in\mathbb{R}^{1\times n}\) of the student model closer to those of the teacher model \(\mathbf{a}_{t}\in\mathbb{R}^{1\times n}\) helps the student model learn the most discriminative regions [30]. **Feature Transfer.** The bag features that are decisive for the prediction can be used as the supervision signal to improve the bag representation of the student model \(\mathbf{h}_{s}\in\mathbb{R}^{1\times d_{s}}\) under the guidance of the bag representation from the teacher model \(\mathbf{h}_{t}\in\mathbb{R}^{1\times d_{t}}\). In some scenarios like knowledge distillation, \(d_{t}\) is typically larger than \(d_{s}\). Therefore, dimension reduction techniques, such as singular value decomposition and principal component analysis, are often required [11]. In summary, the loss functions for these methods are given in Eq. (1) as, \[\mathcal{L}=\left\{\begin{aligned} &\alpha\,\text{RSS}(\mathbf{p}_{t},\mathbf{p}_{s })+\mathcal{L}_{\mathcal{T}_{t}},&\text{logit transfer},\\ &\alpha\,\text{RSS}(\mathbf{a}_{t},\mathbf{a}_{s})+\mathcal{L}_{\mathcal{ T}_{t}},&\text{attention transfer},\\ &\alpha\,\text{RSS}(\mathbf{h}_{t},\mathbf{h}_{s})+\mathcal{L}_{\mathcal{ T}_{t}},&\text{feature transfer},\\ \end{aligned}\right. \tag{1}\] where RSS is the **R**esidual **S**um of **S**quares (RSS) loss, \(\alpha\) is a coefficient, and \(\mathcal{L}_{\mathcal{T}_{t}}\) is the classification loss function of \(\mathcal{T}_{t}\). ### Multi-Head Feature Adaptation Module We adopt feature-based knowledge transfer in our work, as it is well-suited to both transfer learning and knowledge distillation. To address the task discrepancy and domain shift problems, we propose an MHFA module, which projects the teacher feature \(\mathbf{h}_{t}\in\mathbb{R}^{1\times d_{t}}\) to a new feature \(\mathbf{h}_{\text{MHFA}}\in\mathbb{R}^{1\times d_{s}}\). Therefore, the loss function for our method can be expressed in Eq. (2) as, \[\mathcal{L}=\alpha\,\text{RSS}(\mathbf{h}_{\text{MHFA}},\mathbf{h}_{s})+\mathcal{L}_{ \mathcal{T}_{t}}. 
\tag{2}\] First, it normalises the teacher feature vectors using **P**ower **T**emperature **S**caling (PTS) normalisation function [11], which is given in Eq. (3) as, \[\mathbf{h}^{\prime}_{t}=\text{PTS}(\mathbf{h}_{t})=\text{sign}(\mathbf{h}_{t})|\frac{\bm {h}_{t}}{T}|^{\frac{1}{t}}\in\mathbb{R}^{1\times d_{t}}, \tag{3}\] where \(T=0.1\) and \(t=3\) as stated in [11], \(\text{sign}(\cdot)\) is the sign function. Then the MHA module [25] is applied on the normalised teacher feature vector \(\mathbf{h}^{\prime}_{t}\) to discover new patterns and combinations of it. The MHA module can be expressed in Eq. (4) as, \[\mathbf{H}_{t}=\text{MHA}(\mathbf{h}^{\prime}_{t},m)=\text{concat}(\text{SHA}_{1}(\bm {h}^{\prime}_{t}),\cdots,\text{SHA}_{m}(\mathbf{h}^{\prime}_{t}))\in\mathbb{R}^{ m\times d_{s}}, \tag{4}\] where \(\text{concat}(\cdot,\cdots,\cdot)\) is the concatenation operation, \(m\) is the number of attention heads, \(d_{s}\) is the dimension of the student feature, and \(\text{SHA}_{i}(\cdot)\) is the \(i\)-th **S**ingle-**H**ead **A**ttention (SHA) module [25], which is defined in Eq. (5) as, \[\text{SHA}_{i}(\mathbf{h}^{\prime}_{t})=\frac{(\mathbf{h}^{\prime}_{t}\mathbf{W}_{Q,i}) \cdot(\mathbf{h}^{\prime}_{t}\mathbf{W}_{K,i})^{T}}{\sqrt{d_{t}}}(\mathbf{h}^{\prime}_{t} \mathbf{W}_{V,i})\in\mathbb{R}^{1\times d_{s}}, \tag{5}\] where \(\mathbf{W}_{Q,i},\mathbf{W}_{K,i}\in\mathbb{R}^{d_{t}\times d_{k}}\), \(\mathbf{W}_{V,i}\in\mathbb{R}^{d_{t}\times d_{s}}\) are three learnable matrices. Finally, we utilised the gated attention mechanism \(g(\cdot)\)[15] to assign importance scores to the features \(\mathbf{H}_{t}\). These features are useful to different extents, and the purpose of the gated attention mechanism is to find the most decisive ones. The gated attention mechanism is given in Eq. (6) as, \[g(\mathbf{H}_{t})=\left(\left(\text{softmax}\left(\tanh\left(\mathbf{H}_{t}\mathbf{W}_{V} \right)\odot\sigma\left(\mathbf{H}_{t}\mathbf{W}_{U}\right)\right)\right)\mathbf{w}\right) ^{T}\mathbf{H}_{t}\in\mathbb{R}^{1\times d_{s}}, \tag{6}\] where \(\tanh(\cdot)\) is the tanh function, \(\sigma(\cdot)\) is the sigmoid function, \(\mathbf{w}\in\mathbb{R}^{d^{\prime}\times 1}\), \(\mathbf{W}_{V},\mathbf{W}_{U}\in\mathbb{R}^{d_{s}\times d^{\prime}}\) are the learnable matrices and \(d^{\prime}\) is the hidden dimension of the gated attention mechanism. Combining all three steps mentioned above, the final supervision feature from the teacher model \(\mathbf{h}_{\text{MHFA}}\) is given in Eq. (7) as, \[\mathbf{h}_{\text{MHFA}}=\text{MHFA}(\mathbf{h}_{t},m)=g\left(\text{MHA}\left(\text{ PTS}(\mathbf{h}_{t}),m\right)\right). \tag{7}\] ## 3 Experiments and Results ### Dataset Descriptions **Camelyon16.**[18] There are 399 WSIs of lymph nodes with and without metastasis in women breast cancer tissues. The training and test datasets contain 270 and 129 WSIs, respectively. We split the training dataset into training and validation datasets by 8:2 and compare the performances on the official test dataset. **TCGA-RCC5.** There are 940 WSI slides in this dataset. It consists of 121 WSIs from 109 cases in Kidney Chromophobe Renal Cell Carcinoma (TGCA-KICH), 519 WSIs from 513 cases in Kidney Renal Clear Cell Carcinoma (TCGA-KIRC), and 300 WSIs from 276 cases in Kidney Renal Papillary Cell Carcinoma (TCGA-KIRP). The dataset is split into training, validation, and test datasets by the ratio of 6:1.5:2.5, respectively. 
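For concreteness, a minimal PyTorch-style sketch of the MHFA module of Sec. 2.3 (Eqs. (3)-(7)) is given below; tensor shapes, initialisation, and the single-bag (batch-size-one) interface are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MHFA(nn.Module):
    """Sketch of Multi-Head Feature Adaptation: maps a teacher bag feature
    h_t of shape (1, d_t) to a supervision feature h_MHFA of shape (1, d_s)."""

    def __init__(self, d_t: int, d_s: int, m: int = 8, d_k: int = 64,
                 d_hidden: int = 128, T: float = 0.1, t: float = 3.0):
        super().__init__()
        self.m, self.T, self.t, self.d_t = m, T, t, d_t
        # Per-head projections W_Q, W_K in R^{d_t x d_k}, W_V in R^{d_t x d_s} (Eq. (5)).
        self.W_Q = nn.Parameter(torch.randn(m, d_t, d_k) * 0.02)
        self.W_K = nn.Parameter(torch.randn(m, d_t, d_k) * 0.02)
        self.W_V = nn.Parameter(torch.randn(m, d_t, d_s) * 0.02)
        # Gated attention parameters (Eq. (6)).
        self.V = nn.Linear(d_s, d_hidden, bias=False)
        self.U = nn.Linear(d_s, d_hidden, bias=False)
        self.w = nn.Linear(d_hidden, 1, bias=False)

    def forward(self, h_t: torch.Tensor) -> torch.Tensor:   # h_t: (1, d_t)
        # Eq. (3): PTS normalisation.
        h = torch.sign(h_t) * (h_t / self.T).abs() ** (1.0 / self.t)
        # Eqs. (4)-(5): m single-head attentions, each producing a (1, d_s) vector.
        q = torch.einsum("nd,mdk->mnk", h, self.W_Q)        # (m, 1, d_k)
        k = torch.einsum("nd,mdk->mnk", h, self.W_K)        # (m, 1, d_k)
        v = torch.einsum("nd,mds->mns", h, self.W_V)        # (m, 1, d_s)
        scores = (q @ k.transpose(1, 2)) / self.d_t ** 0.5  # (m, 1, 1)
        H = (scores @ v).squeeze(1)                          # (m, d_s)
        # Eq. (6): gated attention over the m heads (softmax taken over heads).
        a = self.w(torch.tanh(self.V(H)) * torch.sigmoid(self.U(H)))  # (m, 1)
        a = F.softmax(a, dim=0)
        return a.transpose(0, 1) @ H                          # (1, d_s) = h_MHFA
```

The output \(\mathbf{h}_{\text{MHFA}}\) then supervises the student bag feature \(\mathbf{h}_{s}\) through the RSS term of Eq. (2).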
Footnote 5: The results shown here are in whole or part based upon data generated by the TCGA Research Network: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga) **TCGA-NSCLC.** There are 1,053 WSI slides in this dataset. It consists of 512 WSIs from 478 cases in Lung Squamous Cell Carcinoma (TCGA-LUSC), and 541 WSIs from 478 cases in Lung Adenocarcinoma (TCGA-LUAD). The dataset split is the same as TCGA-RCC dataset. ### Implementation Details **Evaluation Metrics.** Area Under the **C**urve (AUC), F1 and accuracy are the evaluation metrics. These metrics can holistically reflect the performances of the models. The thresholds of F1 and accuracy scores are set to 0.5. **Training Settings.** The baseline knowledge transfer methods include (1) no knowledge transfer, (2) fine-tuning, (3) **L**ogit **T**ransfer (LT) [12], (4) **A**ttention **T**ransfer (AT) [30] and (5) PTS norm [11]. The MIL model in our method is initialised with the teacher model when possible. The base models are CLAM **S**mall, CLAM **B**ig [20]. The model with the lowest validation loss is chosen for inference on the test dataset and the result of that is reported. The model training is ceased when the validation loss stops decreasing for 20 epochs. We run the models three times and the average performances are reported. **Hyper-parameters.** The learning rate and weight decay are set to 2e-4 and 1e-5. The Adam optimizer is used. Dropout is set to 0.25 for CLAM S and B. The coefficient \(\alpha\) is set to 0.1. The number of attention head \(m\) is 8. The WSIs are split into 256\(\times\)256 non-overlapping patches at level 1 resolution (20\(\times\)). ResNet-50 [10] pre-trained on ImageNet is used to extract features from the patches. ### Comparison with Related Methods We compare our method with other methods in the following settings: (1) transfer learning, (2) transfer learning in low-resource settings, (3) knowledge distillation, (4) knowledge distillation and transfer learning. **Transfer Learning.** The results are shown in Table 2 and Table 1. Our proposed method achieves the best performance across all metrics, especially on Camelyon16, where our method outperforms other methods by a large margin, \begin{table} \begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{**METHOD**} & \multicolumn{2}{c|}{**RCC\(\rightarrow\)Samelyon16**} & \multicolumn{2}{c}{**NSCLC\(\rightarrow\)Campelyon16**} \\ & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** \\ \hline \hline CLAM S [20] & 0.847\({}_{0.035}\) & 0.789\({}_{0.037}\) & 0.819\({}_{0.030}\) & 0.847\({}_{0.035}\) & 0.789\({}_{0.037}\) & 0.819\({}_{0.030}\) \\ Fine-tuning & 0.880\({}_{0.021}\) & 0.850\({}_{0.025}\) & 0.868\({}_{0.021}\) & 0.894\({}_{0.005}\) & 0.835\({}_{0.014}\) & 0.855\({}_{0.009}\) \\ AT [30] & 0.844\({}_{0.010}\) & 0.793\({}_{0.027}\) & 0.819\({}_{0.024}\) & 0.848\({}_{0.013}\) & 0.787\({}_{0.030}\) & 0.817\({}_{0.024}\) \\ PTS norm [11] & 0.858\({}_{0.031}\) & 0.830\({}_{0.043}\) & 0.848\({}_{0.039}\) & 0.778\({}_{0.007}\) & 0.781\({}_{0.015}\) & 0.809\({}_{0.009}\) \\ \hline \hline Ours & **0.910\({}_{0.006}\)** & **0.867\({}_{0.003}\)** & **0.879\({}_{0.004}\)** & **0.899\({}_{0.027}\)** & **0.852\({}_{0.023}\)** & **0.871\({}_{0.018}\)** \\ \hline \end{tabular} \end{table} Table 1: Results on Camelyon16 with teacher and student models being CLAM S, and source domains being TCGA-RCC and TCGA-NSCLC. 
RCC\(\rightarrow\)Campelyon16 column contains performances of the student model trained on Camelyon16 and the teacher model trained on TCGA-RCC. The best results are in red bold, and the second best ones are in blue underline. The subscript in each cell is the standard derivation. Our method is highlighted with light cyan. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{**METHOD**} & \multicolumn{2}{c|}{**RCC\(\rightarrow\)NSCLC**} & \multicolumn{2}{c}{**NSCLC\(\rightarrow\)RCC**} \\ & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** \\ \hline \hline CLAM S [20] & 0.934\({}_{0.007}\) & 0.863\({}_{0.008}\) & 0.863\({}_{0.008}\) & 0.973\({}_{0.004}\) & 0.866\({}_{0.009}\) & 0.892\({}_{0.009}\) \\ Fine-tuning & 0.936\({}_{0.005}\) & 0.863\({}_{0.004}\) & 0.863\({}_{0.004}\) & 0.974\({}_{0.002}\) & 0.886\({}_{0.005}\) & 0.905\({}_{0.002}\) \\ AT [30] & 0.946\({}_{0.006}\) & 0.876\({}_{0.018}\) & 0.876\({}_{0.018}\) & 0.975\({}_{0.003}\) & 0.869\({}_{0.004}\) & 0.896\({}_{0.002}\) \\ PTS norm [11] & 0.949\({}_{0.004}\) & 0.878\({}_{0.010}\) & 0.878\({}_{0.010}\) & 0.970\({}_{0.002}\) & 0.877\({}_{0.015}\) & 0.887\({}_{0.012}\) \\ \hline \hline Ours & **0.953\({}_{0.004}\)** & **0.886\({}_{0.008}\)** & **0.886\({}_{0.008}\)** & **0.980\({}_{0.001}\)** & **0.897\({}_{0.017}\)** & **0.913\({}_{0.014}\)** \\ \hline \end{tabular} \end{table} Table 2: Results on TCGA-RCC and TCGA-NSCLC with teacher and student models being CLAM S. demonstrating the effectiveness of our method. Furthermore, due to the areas of tumour regions, Camelyon16 is more difficult than TCGA datasets. The experimental results further prove that our method can effectively transfer knowledge from a simpler dataset to a harder one. Comparing these two tables, methods with additional supervision signals perform better within TCGA dataset transfer, and fine-tuning performs better when transferring knowledge from TCGA datasets to Camelyon16 dataset. The reason for this is that tasks and domains for TCGA datasets are more similar. Methods with additional supervision signals actually all rely on the attention scores since both bag features and logits are derived from the attention scores. Furthermore, the areas of the tumour regions are drastically different between Camelyon16 and TCGA datasets, leading to different attention distributions. Therefore, attention-derived methods may consistently introduce bias, resulting in lower performance compared to fine-tuning. **Transfer Learning in Low Resource Settings.** One of the essential applications for knowledge transfer is to transfer the knowledge acquired in a large dataset to the model in a smaller dataset. To make the experiments easier, we choose to only consider transfer learning without knowledge distillation. We create new training and validation datasets with varying sizes and the test dataset remains the same. The experimental results are shown in Fig. 2. Generally, for every method, the larger the dataset is, the better the performance will be. Our method achieved the best performance in most of these experiments. On extremely low resource settings, CLAM S has competitive AUC scores, but not so when the dataset gets larger. AT performs particularly well in low-resource settings for TCGA-RCC dataset, but has similar performances as CLAM S on Camelyon16. PTS norm is not a competitive method in low-resource settings. 
Figure 2: Comparison of different knowledge transfer methods on the TCGA-RCC and Camelyon16 datasets of different sizes. The curves for our method are in bold.

**Knowledge Distillation.** Fine-tuning is no longer valid since the model architectures are different. Instead, since the source and target datasets are the same, logit transfer becomes feasible. The results of the experiments are shown in Table 3. Overall, most methods provide improvements compared to CLAM S. Our method improves the AUC score on TCGA-RCC dataset by 0.7%, and 1.9% on TCGA-NSCLC dataset. PTS norm outperforms all other methods by a large margin on TCGA-NSCLC dataset. However, it is the only method that does not enhance the original CLAM S on TCGA-RCC dataset. LT achieves the second-best AUC scores and the best F1 and accuracy scores on TCGA-RCC dataset. AT provides consistent improvements on both datasets.

**Knowledge Distillation and Transfer Learning.** Since the models and the datasets are both different, neither logit transfer nor fine-tuning is appropriate in this scenario. The results of the experiments are shown in Table 4. Our method again achieves the best performances across both datasets, demonstrating its efficacy. Neither PTS norm nor AT enhances the performance on TCGA-RCC dataset under this setting, yet they provide 1.4% and 1.5% improvements on TCGA-NSCLC AUC scores, compared to CLAM S.

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{**METHOD**} & \multicolumn{3}{c|}{**NSCLC**} & \multicolumn{3}{c}{**RCC**} \\ & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** \\ \hline \hline CLAM S [20] & \(0.934_{0.007}\) & \(0.863_{0.008}\) & \(0.863_{0.008}\) & \(0.973_{0.004}\) & \(0.866_{0.009}\) & \(0.892_{0.009}\) \\ CLAM B [20] & \(0.942_{0.005}\) & \(0.847_{0.011}\) & \(0.848_{0.012}\) & \(0.974_{0.005}\) & \(0.856_{0.003}\) & \(0.886_{0.017}\) \\ LT [12] & \(0.950_{0.002}\) & \(0.868_{0.006}\) & \(0.868_{0.006}\) & \(0.978_{0.000}\) & \(\mathbf{0.885}_{0.006}\) & \(\mathbf{0.905}_{0.005}\) \\ AT [30] & \(0.947_{0.003}\) & \(0.866_{0.008}\) & \(0.866_{0.008}\) & \(0.975_{0.002}\) & \(0.876_{0.020}\) & \(0.900_{0.012}\) \\ PTS norm [11] & \(\mathbf{0.962}_{0.003}\) & \(\mathbf{0.894}_{0.004}\) & \(\mathbf{0.894}_{0.004}\) & \(0.964_{0.005}\) & \(0.853_{0.003}\) & \(0.877_{0.009}\) \\ \hline \hline Ours & \(0.953_{0.001}\) & \(0.885_{0.002}\) & \(0.885_{0.002}\) & \(\mathbf{0.980}_{0.001}\) & \(0.880_{0.006}\) & \(0.900_{0.005}\) \\ \hline \end{tabular} \end{table} Table 3: Results of knowledge distillation on TCGA-RCC and TCGA-NSCLC with teacher and student models being CLAM S and B.

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{**METHOD**} & \multicolumn{3}{c|}{**RCC\(\rightarrow\)NSCLC**} & \multicolumn{3}{c}{**NSCLC\(\rightarrow\)RCC**} \\ & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** & **AUC\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** \\ \hline \hline CLAM S [20] & \(0.934_{0.007}\) & \(0.863_{0.008}\) & \(0.863_{0.008}\) & \(0.973_{0.004}\) & \(0.866_{0.009}\) & \(0.892_{0.009}\) \\ CLAM B [20] & \(0.942_{0.005}\) & \(0.847_{0.011}\) & \(0.848_{0.012}\) & \(0.974_{0.005}\) & \(0.856_{0.003}\) & \(0.886_{0.017}\) \\ AT [30] & \(0.948_{0.010}\) & \(0.868_{0.011}\) & \(0.868_{0.011}\) & \(0.970_{0.001}\) & \(0.864_{0.004}\) & \(0.887_{0.007}\) \\ PTS norm [11] & \(0.949_{0.002}\) & \(\mathbf{0.882}_{0.004}\) & \(\mathbf{0.882}_{0.004}\) & \(0.970_{0.001}\) & \(0.883_{0.009}\) & \(0.899_{0.005}\) \\ \hline \hline Ours & \(\mathbf{0.951}_{0.005}\) & \(0.874_{0.008}\) & \(0.875_{0.008}\) & \(\mathbf{0.981}_{0.001}\) & \(\mathbf{0.888}_{0.013}\) & \(\mathbf{0.906}_{0.011}\) \\ \hline \end{tabular} \end{table} Table 4: Results of knowledge distillation and transfer learning on TCGA-RCC and TCGA-NSCLC with teacher and student models being CLAM S and B.

## 4 Conclusion

To address the task discrepancy and domain shift problems, we propose an MHFA module, which projects the teacher features to a new feature space that is more similar to the target feature space, to discover new patterns and combinations of the teacher features. Our method achieves state-of-the-art performance among other adapted knowledge transfer methods, as evidenced by our experimental results. Besides, we adapt multiple knowledge transfer methods for WSI classification to improve the performance of the models and present a benchmark for it. Our experimental results demonstrate that knowledge transfer significantly and consistently enhances the model performance compared to training from scratch regardless of the number of samples in the dataset.
2305.01668
Visual Reasoning: from State to Transformation
Most existing visual reasoning tasks, such as CLEVR in VQA, ignore an important factor, i.e.~transformation. They are solely defined to test how well machines understand concepts and relations within static settings, like one image. Such \textbf{state driven} visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has shown to be equally important for human cognition in Piaget's theory. To tackle this problem, we propose a novel \textbf{transformation driven} visual reasoning (TVR) task. Given both the initial and final states, the target becomes to infer the corresponding intermediate transformation. Following this definition, a new synthetic dataset namely TRANCE is first constructed on the basis of CLEVR, including three levels of settings, i.e.~Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views). Next, we build another real dataset called TRANCO based on COIN, to cover the loss of transformation diversity on TRANCE. Inspired by human reasoning, we propose a three-staged reasoning framework called TranNet, including observing, analyzing, and concluding, to test how recent advanced techniques perform on TVR. Experimental results show that the state-of-the-art visual reasoning models perform well on Basic, but are still far from human-level intelligence on Event, View, and TRANCO. We believe the proposed new paradigm will boost the development of machine visual reasoning. More advanced methods and new problems need to be investigated in this direction. The resource of TVR is available at \url{https://hongxin2019.github.io/TVR/}.
Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
2023-05-02T14:24:12Z
http://arxiv.org/abs/2305.01668v1
# Visual Reasoning: from State to Transformation ###### Abstract Most existing visual reasoning tasks, such as CLEVR in VQA, ignore an important factor, i.e. transformation. They are solely defined to test how well machines understand concepts and relations within static settings, like one image. Such **state driven** visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has shown to be equally important for human cognition in Piaget's theory. To tackle this problem, we propose a novel **transformation driven** visual reasoning (TVR) task. Given both the initial and final states, the target becomes to infer the corresponding intermediate transformation. Following this definition, a new synthetic dataset namely TRANCE is first constructed on the basis of CLEVR, including three levels of settings, i.e. Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views). Next, we build another real dataset called TRANCO based on COIN, to cover the loss of transformation diversity on TRANCE. Inspired by human reasoning, we propose a three-staged reasoning framework called TranNet, including observing, analyzing, and concluding, to test how recent advanced techniques perform on TVR. Experimental results show that the state-of-the-art visual reasoning models perform well on Basic, but are still far from human-level intelligence on Event, View, and TRANCO. We believe the proposed new paradigm will boost the development of machine visual reasoning. More advanced methods and new problems need to be investigated in this direction. The resource of TVR is available at [https://hongxin2019.github.io/TVR/](https://hongxin2019.github.io/TVR/). Visual Reasoning, Transformation, Visual Understanding, Deep Learning ## 1 Introduction Visual reasoning goes well beyond object recognition, which is the process of solving problems on the basis of analyzing visual information. Although this task is easy for humans, it is tremendously difficult for vision systems, because it usually requires higher-order cognition and reasoning about the world. Recently, several visual reasoning tasks have been proposed and attract lots of attention in the community of artificial intelligence. For example, CLEVR [1], the most representative visual question answering (VQA) task, defines a question answering paradigm to test whether machines have spatial, relational, and other reasoning abilities for a given image. Visual entailment tasks such as NLVR [2, 3] ask models to determine whether a given description is true about states of images. Visual commonsense reasoning tasks, such as VCR [4], further require a rationale explaining the predicting answer. It has been shown from the above description that these visual reasoning tasks are all defined at the _state_ level. For example, the questions and answers in VQA and VCR as well as the language descriptions in NLVR are just related to the concepts or relations within states, i.e. an image or images. We argue that this kind of _state driven visual reasoning_ fails to test the ability to reason dynamics between different states. In the bottom line of Fig. 1, the first image shows a cat on a ladder, and in the second image, the same cat is under the ladder. It is natural for a human to reason dynamics here after analyzing, that the cat jumps down the ladder. 
Piaget's cognitive development theory [5] describes the dynamics between states as transformation, and tells that human intelligence must have functions to represent both the transformational and static aspects of reality. In addition, without modeling transformation, complicated tasks such as visual storytelling [6] and visual commonsense inference [7] are hard to be solved, since these tasks involve not only static states but also dynamic transformations, such as actions and events. Though these tasks are closer to reality, they are too complicated to serve as a good testbed for transformation based reasoning. Because these tasks combine too many other requirements, such as Fig. 1: State-driven visual reasoning (top) v.s. transformation-driven visual reasoning (bottom). recognition and language generation abilities, which makes it hard to independently assess transformation reasoning. Therefore, it is crucial to define a specific task to be able to quantitatively evaluate the ability to reason transformation. In this paper, we define a novel _transformation driven visual reasoning_ (TVR) task. Given the initial and final states, like two images, the goal is to infer the corresponding single-step or multi-step transformation. While states are naturally represented as images, the transformation has many choices in its form. Without loss of generality, in this paper, we explore two definitions. In the first definition, transformations are changes of object attributes, therefore a single-step and multi-step transformation are represented as a triplet (_object_, _attribute_, _value_) and a sequence of triplets, respectively. These triplets, which are basic transformation units, are called _atomic transformations_. In the second definition, atomic transformations are video clips to show the entire change process. Therefore, a single-step and multi-step transformation are respectively represented as a clip of video and a sequence of video clips. Following the definition of TVR, we first construct a new dataset called TRANCE, to test and analyze how well machines can understand the transformation. We construct TRANCE based on the synthetic dataset CLEVR [1], since it is better to first study TVR in a simple setting and then move to more complex real scenarios, just like people first study VQA on CLEVR and then generalize to more complicated settings like GQA. CLEVR has defined five types of attributes, i.e. color, shape, size, material, and position. Therefore, it is convenient to define the transformation for each attribute, e.g. the color of an object is changed from red to blue. Given the initial and final states, i.e. two images, where the final state is obtained by applying a single-step or multi-step transformation on the initial state, a learner is required to well infer such transformation. To facilitate the test for different reasoning levels, we design three settings, i.e. Basic, Event, and View. Basic is designed for testing single-step transformation. Event and View are designed for more complicated multi-step transformation, where the difference is that View further considers variant views in the final state. Fig. 2 gives an example of three settings. The biggest limitation of TRANCE is the small diversity of transformation. Imagine that objects in real life can be transformed into different states through a large number of different transformations. 
Therefore, we build another dataset called TRANCO to reduce the gap between TRANCE with real, by reasoning transformations on real data. Given such a large transformation space, it is infeasible to list and label all available atomic transformations like TRANCE. As a result, the alternative way is to further require models to generalize to unseen transformations, which is actually the basic requirement for practical applications. TRANCO is thus designed to reason "open-world" [8] transformations. That is, given the initial and final states, a learner needs to find a sequence of atomic transformations from test candidates, while these test candidates can not be accessed during training. This setting is different from TRANCE since atomic transformations in TRANCE are a constant set of attribute changes on limited objects. In TRANCO, atomic transformations are represented as the aforementioned video clips, so that annotation of existing datasets could be used. Specifically, TRANCO is built on the instructional video dataset COIN, which contains clip annotations that are equivalent to atomic transformations. That is to say, each video contains multiple annotated clips and each clip is corresponding to a step of completing a certain job. The problem then becomes to finding correct video clips given the initial and final frames, while the results are evaluated under the protocol of TVR. In the experiments, we would like to test how well existing reasoning techniques [9, 10] work on TVR. However, since these models are mainly designed for existing reasoning tasks, they cannot be directly applied to TVR. To tackle this problem, we propose a human-inspired reasoning framework specific for TVR, called as TranNet. The design philosophy, as well as the architectural details, are introduced in Sec. 6. In brief, TranNet extracts essential features from two-state images, and then circularly decodes latent representations to predict a sequence of atomic transformations. With TranNet, existing techniques can be conveniently adapted to TVR. For example, we consider ResNet [11], Bilinear-CNN [12], DUDA [13], and CLIP [14] for encoding, GRU [15], and Transformer [16] for decoding. Experimental results show that deep models perform well on the Basic setting of TRANCE, but are far from human's level on Event, View, and even worse on TRANCO, demonstrating high research potential in this direction. In summary, the contributions of our work include: 1) the definition of a new visual reasoning paradigm, to learn the dynamics between different states, i.e. transformation; 2) a new synthetic dataset called TRANCE, to test three levels of transformation reasoning, i.e. Basic, Event, and View; 3) a real dataset called TRANCO, to test "open-world" transformation reasoning; 4) the proposal of a human-inspired transformation reasoning framework TranNet; 5) experimental studies of the existing SOTA reasoning techniques on TRANCE and TRANCO show the challenges of the TVR and some insights for future model designs. ## 2 Related Works Visual reasoning is an emerging research topic in the field of machine learning, which requires more artificial intelligence than tasks like classification, detection, and captioning. Visual Question Answering (VQA) is the most popular visual reasoning task. Questions in the earliest VQA dataset [17, 18, 19] are usually concerned about the category or attribute of objects. Recent VQA datasets have improved the requirements for image understanding by asking more complex questions, e.g. 
Visual7W [20], CLEVR [1], OK-VQA [21], and GQA [22]. In addition, two other forms of visual reasoning tasks need to be mentioned. Visual entailment tasks [2, 3, 24, 23] ask models to determine whether a given description is true about visual inputs. Visual commonsense reasoning [4, 25] tasks further require the model to provide a rationale explaining why its answer is right. Solving these tasks is meaningful and requires various reasoning abilities. However, the above tasks are all constrained to be within static states, which ignores the dynamics between different states. Recently, several new visual reasoning tasks have jointly considered multiple states. For example, CATER [26] tests the ability to recognize compositions of object movements, while our task contains more diverse transformations rather than just moving. Furthermore, CATER along with other video reasoning tasks such as CLEVRER [27] and physical reasoning [28, 29] is usually based on dense input states, which make the transformations hard to define and evaluate. Before moving to these complex scenarios, our TVR provides a simpler formulation by explicitly defining the transformations between two states, which is more suitable for testing the ability of transformation reasoning. CLEVR-Change [13], the most relevant work, requires captioning the change between two images. The novelty is that TVR isolates the ability to reason dynamics from captioning to provide a more thorough evaluation. Furthermore, CLEVR-Change only focuses on single-step transformations. The concept of transformation has also been mentioned in many other fields. In [30, 31, 32], transformations are used to learn good attribute representations to improve classification accuracy. In [33, 34, 35, 36, 37], transformations on object or environment are detected to improve the performance of action recognition. However, those works in attribute learning and action recognition fields only consider single-step transformation, thus not appropriate for testing a complete transformation reasoning ability. Procedure planning [38] has a similar task formulation to ours but we see this problem from different perspectives. TVR motivates transformation as important as the state, while procedure planning specially cares about actions to complete a goal. Specifically, we provide a more comprehensive definition and evaluation for transformation, from synthetic to real, from single-step to multiple-step, and procedure planning can be seen as a special case of TVR. ## 3 Task Description Transformation driven Visual Reasoning (TVR) is a visual reasoning task that aims at testing the ability to reason the dynamics between states. Formally, we denote the state space as \(\mathcal{S}\) and the transformation space as \(\mathcal{T}\). The transformational process can be illustrated as a function \(f:\mathcal{S}\times\mathcal{T}\rightarrow\mathcal{S}\), which means a state is transformed into another state under the effect of a transformation. And our task is defined as: **Transformation Driven Visual Reasoning:** \(\mathcal{S}\) is the state space, and \(\mathcal{T}\) is the transformation space. **Input:** * the initial state \(S\in\mathcal{S}\), represented as an image, * the final state \(S^{\prime}\in\mathcal{S}\), represented as an image. **Output:** A transformation \(T\in\mathcal{T}\), so that \(f(S,T)=S^{\prime}\). With this definition, most existing state driven visual reasoning tasks can be extended to the corresponding transformation driven ones. 
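Before discussing these extensions, the contract above can be summarised as a minimal interface sketch; the types and names below are ours, purely illustrative, and TVR does not ship such code.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

State = Any                      # a state S is an image of the scene
# In the attribute-level definition an atomic transformation is a triplet
# (object, attribute, value); in the video definition it is a clip. Both are
# treated here as opaque items of the atomic transformation space T_A.
AtomicTransformation = Any
Transformation = List[AtomicTransformation]   # T = [t_1, ..., t_n]; order matters


@dataclass
class TVRSample:
    initial_state: State    # S
    final_state: State      # S' = f(S, T) for the hidden reference transformation T
    reference: Transformation


# A TVR model maps the two observed states to a predicted transformation,
# which is correct when applying it to S reproduces S' (or, for real data,
# when it matches the reference sequence).
TVRModel = Callable[[State, State], Transformation]
```

The extensions of existing tasks discussed next all instantiate this same contract.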
For example, the VQA task, such as CLEVR, can be extended to ask about the transformation between two given images, with answers as the required transformation. In the extension of NLVR, the task becomes to determine whether a sentence describing the transformation is true about the two images, e.g. the color of the bus is changed to red. Since TVR itself is defined as an interpretation task, we do not need any further rational explanations, and the extension of VCR will stay the same as CLEVR. We can see that the intrinsic reasoning target of these tasks is the same, that is to infer the correct transformation, while the difference lies in the manifestation. In TVR, states are naturally represented as images to capture static moments, but the transformation has many choices in its form. For example, any changes in pixel value can be treated as a transformation, but this representation is meaningless for humans. Another way to describe transformation is natural language [13]. However, natural language is not precise and sometimes ambiguous, making it difficult to evaluate the accuracy of the predicted transformations. In this paper, we explore two transformation definitions. In the first definition, transformations only affect limited attributes with limited options just like [13], but the form is changed from the caption to a more concrete one, i.e. attribute-level change of an object, represented as a triplet \((o,a,v)\), which means the object \(o\) with the attribute \(a\) is changed to the value \(v\). Except for the representation, another limitation of [13] is they only consider single attribute changes between states, while multiple attribute changes could exist between states in practice. A more general formulation should consider multiple transformations as well as their order. To be clear, a basic transformation such as the triplet \((o,a,v)\) is called an _atomic transformation_, denotes as \(t\). And the transformation \(T\), denotes as a sequence of atomic transformations that \(T=\{t_{1},t_{2},\ldots,t_{n}\},t_{i}=(o_{i},a_{i},v_{i})\in\mathcal{T}_{ \mathcal{A}}\), where \(n\) is the number of atomic transformations, and \(\mathcal{T}_{\mathcal{A}}\subset\mathcal{T}\) is the atomic transformation space. In more complex scenarios, such as in real data, one single atomic transformation may affect multiple attributes. Take the cat example again, a simple jumping affects at least the location and the pose of the cat. It is not suitable to represent transformations as attribute changes in this situation. Instead, representing an atomic transformation as a clip of video, completely showing the whole change process is natural and more friendly for annotating. The definition of the transformation keeps the same as \(T=\{t_{1},t_{2},\ldots,t_{n}\}\), while \(t_{i}=c_{i}\in\mathcal{T}_{\mathcal{A}}\) and \(c_{i}\) is a clip from a video. Different definitions of transformation can lead to different ways of evaluation. The most ideal way of evaluating the prediction \(\hat{T}\), is to first obtain the corresponding simulated final state \(\hat{S}^{\prime}=f(S,\hat{T})\), and then check whether \(\hat{S}^{\prime}\) is the same as ground truth final state \(\hat{S}\). The first definition that represents transformations as attribute changes of objects is appropriate for this evaluation. However, in real scenarios, it is hard to obtain a simulated final state. We have defined the transformation as a sequence of clips. 
The goodness of this definition is annotating-friendly, but it is limited for the real data that the evaluation could only be done by comparing predicting \(\hat{T}\) with the given reference transformation \(T\). The problem here is that \(T\) may not be the only way in practice to transform the state from \(S\) into \(S^{\prime}\), thus the evaluation is imperfect. Sec. 4.3 and Sec. 5.3 will introduce the detailed evaluation protocols for TRANCE and TRANCO. ## 4 Synthetic Data: Trance We first study TVR under the synthetic setting, in which we build a new data set by extending CLEVR, namely TRANCE (Transformation on CLEVR). Besides, we describe how to define proper TVR objectives and corresponding evaluation protocols with respect to TRANCE. ### _Dataset Setups_ CLEVR [1] is a popular VQA dataset, which first introduces the concept of visual reasoning. The target of CLEVR is to answer questions about counting, comparing, logical reasoning, and so on, according to given images. The content of images is about simple objects, such as cubes, spheres, and cylinders, which have different sizes, materials, and colors. Specifically, for each object, there are 3 shapes, 2 sizes, 2 materials, 8 colors, and infinity locations to be selected, as listed in Tab. I annotated with *. With so many attributes that are convenient to be modified, we can easily define atomic transformations as changes of these attributes on objects. This is the major reason that we choose CLEVR to extend. Another reason is that images can be synthesized using Blender [39] with small costs. Therefore, it is practicable to create millions of samples. CLEVR provides a good foundation on attributes and values, which are fundamental items of the atomic transformation triplet \((o,a,v)\), as we introduced in Sec. 3. However, the distance to defining atomic transformations well still exists unless we proceed with several modifications or designs. The first problem is how to represent an object in the answer. Existing methods such as CLEVR and CLEVR-Change use text which has ambiguity issues making the evaluation unreliable, while CLEVR-Ref+ [40] employs bounding boxes that are specific but require the additional ability of detection. Therefore, we design to provide additional information, which is the attributes of the initial objects, including the index, color, material, and other attribute values. In this way, an object can be referred to with its index. Note machines still need to perform their own recognition to align objects in images with given attributes. The second problem is available values in size and material are too few, therefore we add medium size and glass material. The last problem is the available values of position transformation are infinite in the space of \(\mathbb{R}^{2}\), which is not computational friendly. To reduce the available values, we change the position from absolute values into relative values by using direction and step to represent the position transformation. Specifically, we consider eight directions as shown in Tab. I. In addition, we define a coordinate system, in which \(x\) and \(y\) are both restricted to \([-40,40]\), and objects can only be placed on integer coordinates. The moving step can be valued as 1 or 2, where 1 step equals 10 in our coordinate system. 
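As an illustration of this representation, the toy sketch below applies TRANCE-style atomic transformation triplets to a simplified scene graph. The direction names, the dictionary encoding of objects, and the omission of the overlap check are assumptions made for illustration; TRANCE's actual data format may differ.

```python
# Hypothetical scene graph: objects referred to by their index in the list.
scene = [
    {"index": 0, "shape": "cube",   "size": "small", "material": "rubber",
     "color": "red",  "position": (10, -20)},
    {"index": 1, "shape": "sphere", "size": "large", "material": "metal",
     "color": "blue", "position": (-30, 0)},
]

# For position transformations, the value is a (direction, step) pair;
# one step equals 10 units in the [-40, 40] x [-40, 40] coordinate system.
DIRECTIONS = {"front": (0, -1), "behind": (0, 1), "left": (-1, 0), "right": (1, 0),
              "front_left": (-1, -1), "front_right": (1, -1),
              "behind_left": (-1, 1), "behind_right": (1, 1)}

def apply_atomic(scene, t):
    """Apply one atomic transformation (o, a, v) to the scene graph in place.
    The no-overlap constraint is omitted here for brevity."""
    o, a, v = t
    obj = scene[o]
    if a == "position":
        direction, step = v
        dx, dy = DIRECTIONS[direction]
        x, y = obj["position"]
        nx, ny = x + 10 * step * dx, y + 10 * step * dy
        assert -40 <= nx <= 40 and -40 <= ny <= 40, "object moved off the plane"
        obj["position"] = (nx, ny)
    else:
        obj[a] = v

# A multi-step transformation T is an ordered sequence of triplets.
T = [(0, "color", "blue"), (1, "position", ("right", 2)), (0, "size", "medium")]
for t in T:
    apply_atomic(scene, t)
```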
Except for normal moving action, we are also interested in whether the vision system could understand actions like moving in and moving out, so the plane is split, where the visible area is at the center and the invisible area is around the visible area, and the moving in and out operations can be defined correspondingly. To be reasonable, objects shouldn't be overlapped and moved out of the plane during transformation. Having defined the atomic transformation, we will now move on to introduce how to generate samples. The first step is the same as CLEVR, which is randomly sampling a scene graph. According to the scene graph, CLEVR then generates questions and answers with a functional program and renders the image with Blender. Different from CLEVR, the next step in TRANCE becomes randomly sampling a sequence of atomic transformations, where the length ranges from 1 to 4, which is called the _reference transformation_. By applying the reference transformation to the initial scene graph, we obtain the final scene graph. At last, two scene graphs are rendered into images (\(h:240\times w:320\)). To reduce the potential bias from random sampling, we carefully control the sampling process of scene graph and transformation by balancing several factors. In scene graph sampling, we balance objects' attributes and the number of visual objects in the initial state. In transformation sampling, the length of the transformation, the object number, n-gram atomic transformation, and the move type are all balanced. Throughout all elements, N-gram atomic transformation is the hardest to be balanced and it refers to the sub-sequence of atomic transformations with the length of \(n\). By balancing these factors, we reduce the possibility that a learner utilizes statistics features in the data to predict answers. In the supplementary material, we show the statistics of the dataset and our balancing method in detail. ### _Three Levels of Settings_ We design three settings, i.e. Basic, Event, and View, to facilitate the study on different levels of transformation reasoning. Basic is first designed for single-step transformation and then Event is for multi-step transformation. To further evaluate the ability of reasoning transformation under a more real condition, we extend Event with variant views to propose View. Fig. 2 shows three different settings and more examples can be found in the supplementary material. **Basic.** Basic is a simple problem designed to mainly test how well a learner understands atomic transformations. The target of Basic is to infer the single-step transformation between the initial and final states. That is, given a pair of images, the task is to find out which attribute \(a\) of which object \(o\) is changed to which value \(v\). We can see that this task is similar to the previous game "Spot the Difference" [41], in which the player is asked to point out the differences between two images. However, Basic is substantially different from the game. Basic cares about the object level differences while the game focuses on the pixel level differences. Therefore, Basic can be viewed as a more advanced visual reasoning task than the game. **Event.** Considering only the single-step transformation is obviously not enough. In reality, it is very common that multi-step transformation exists between two states. Therefore, we construct this multi-step transformation setting to test whether machines can handle this situation. 
The number of transformations between the two states is randomly set from 1 to 4. The goal is to predict a sequence of atomic transformations that could reproduce the same final state from the initial state. To resolve this problem, a learner must find all atomic transformations and arrange them correctly. Compared with Basic, it is possible to have multiple atomic transformations, which improves the difficulty of finding them all. Meanwhile, the order is essential in the Event because atomic transformations may be dependent. For example, moving A first and then moving B to A's place is non-exchangeable, otherwise, B will overlap A. **View.** In real applications, the angle of observation is not fixed like in Basic and Event. To tackle this problem, we extend the Event setting to View, by capturing two states with cameras in different positions. In practice, for simplicity but without loss of generality, we set three cameras, placed on the left, center, and right sides of the plane. The initial state is always captured by the center camera, while for the final state, images are captured with all three cameras. Thus, for each sample, we obtain three pairs for training, validation, and testing with the same initial state but different views of the final states. With this design, it is possible to evaluate how well a vision system understands object-level transformation under variant views. ### _The Evaluation Protocol_ For the single-step transformation setting, i.e. Basic, the answer is unique. Therefore, we can evaluate the performance by directly comparing the prediction with the reference transformation. Specifically, in the TRANCE dataset, we consider fine-grained accuracy and overall accuracy. _ObjAcc, AttrAcc, ValAcc_. Fine-grained accuracy corresponds to three elements in the transformation triplet, including object accuracy (ObjAcc), attribute accuracy (AttrAcc), and value accuracy (ValAcc). _Acc._ The overall accuracy (Acc) only counts the absolutely correct transformation triplets. For multi-step transformation settings, i.e Event and View, it is not suitable to use the above evaluation metrics, since there may exist multiple feasible answers. This is because exchanging some steps like color transformation and shape transformation is acceptable and the final state keeps unchanged. Benefiting from the simple setting of TRANCE, it is convenient to evaluate the predicted transformation by simulation. Specifically, we input the item of predicted transformation sequence \(\hat{T}=\{\hat{t_{1}},\hat{t_{2}},\cdots,\hat{t_{n}}\}\) one by one to transform the initial state to the simulated final state \(\hat{S^{\prime}}\), i.e. \(S\times\hat{T}\rightarrow\hat{S^{\prime}}\). _A distance_ can be computed by counting the attribute level difference between two final states, i.e. \(\hat{S^{\prime}}\) and \(S^{\prime}\). If the intermediate states do not violate the pre-defined two constraints, including no overlapping and no moving out of the plane, and the distance is zero, then the sequence is _correct_. If we ignore the two constraints, which means the order of the sequence is ignored, and find the distance is zero, then the sequence is called _loose correct_. _AD, AND._ A _normalized distance_ is a distance that is normalized by the length of the reference transformation. _AD_ and _AND_ are the average distance and average normalized distance over all samples, respectively. 
_Acc, LAcc._ The accuracy is the proportion of _correct_ samples, while the loose accuracy is the proportion of _loose correct_ samples without considering the order: \[\begin{split}& Acc=\frac{1}{m}\sum_{i=1}^{m}[T_{i}\text{ is \textit{correct}}],\\ & LAcc=\frac{1}{m}\sum_{i=1}^{m}[T_{i}\text{ is \textit{loose correct}}],\end{split} \tag{1}\] where \(m\) is the total number of test samples. _EO._ At last, to measure how well the right order is assigned when all atomic transformations have been found, the error of order \(EO=(LAcc-Acc)/LAcc\) is computed.

## 5 Real Data: Tranco

In addition to the synthetic data, we build another dataset called TRANCO (Transformation on COIN), to explore the potential role of visual reasoning research in real scenarios.

### _Data Setups_

TRANCO is built on a well-known, comprehensive instructional video dataset, namely COIN [42], which consists of 11,827 YouTube videos covering 180 different tasks in daily activities. COIN is widely used in instructional video analysis tasks, including step localization, action segmentation, procedure localization, task recognition, and step recognition. Each video of COIN is comprised of a series of steps annotated with temporal boundaries and descriptions. For example, Fig. 3 shows three main steps of cooking noodles, where each step is represented as a video clip along with a sentence to describe the step.

Fig. 3: Illustration of an example from TRANCO. The target is to find a sequence of video clips between the initial and the final state.
The major difficulty is the objective, i.e. reasoning "open-world" transformation, which requires additional ability to transfer into unseen atomic transformations. Another difficulty comes from the requirement of the higher recognition ability to represent real images or videos. Experiments in Sec. 8 also confirm these two major difficulties of TRANCO. ### _The Evaluation Protocol_ As we discussed in Sec. 3, the definition of transformation can affect the way of evaluation. In order to determine whether the predicted transformation is correct, it is not feasible to compare simulated final state \(\hat{S^{\prime}}\) with the ground truth final state \(S^{\prime}\) here, since it is hard to simulate the real transformation in TRANCO. The alternative way is to directly compare the predicted transformation \(\hat{T}\) with the reference transformation \(T\). Nevertheless, it is acceptable for TRANCO, since the steps in instructional videos are usually unique and can not be exchanged. We consider four metrics for evaluation, including the overall exact match rate, two metrics on the ability to find correct atomic transformations without considering the order, and one especially for order assessment. These metrics are introduced in the following. _Exact Match Rate (EMR)_. The first metric is exact match rate, which evaluates the overall performance. It reflects how many predicted transformations are exactly the same as reference transformations, which requires not only the atomic transformations but also the order are exactly the same. We use the exact match rate here to distinguish with the _Acc_ in TRANCE, since the meaning and the evaluation method are different. _Recall, Precision_. These two metrics both concern the ability to find correct atomic transformations and ignore the order of predicted transformations. Recall reflects how many atomic transformations in the reference transformation are found, while precision reflects how many predicting atomic transformations are right. They are given by: \[\text{Recall}=\frac{|T\cap\hat{T}|}{|T|},\quad\text{Precision}=\frac{|T\cap \hat{T}|}{|\hat{T}|}. \tag{2}\] _KTD_. In contrast to recall and precision, KTD (Kendall's-\(\tau\) distance) only focuses on order evaluation to reflect how well models sort atomic transformations. KTD is a commonly used metric in the field of information retrieval to evaluate ranking models, the detail can be found in [43]. When computing KTD, we only consider the order of intersected atomic transformations \(T\cap\hat{T}\). We define that \(\text{KTD}=1\) if \(T\cap\hat{T}=\emptyset\). _SD, NSD_. Similar to TRANCE, we provide step difference and normalized step difference to reflect how well models estimate the number of steps between the initial and final states. SD is the absolute difference between the number of predicted steps and the number of ground truth steps. NSD is the normalized SD, which is the ratio of SD to the number of ground truth steps. ## 6 The TranNet Framework In this section, we propose a general framework to tackle the transformation driven visual reasoning problem, including both synthetic and real scenarios. ### _The Basic Idea_ TranNet is inspired by the OODA decision loop theory [44] and our study about how human reason transformation reasoning during human experiments. In Fig. 4, the top row shows the three stages that we understand about the reasoning process. 
To reason about the transformation between two states, a human first observes the images, and then repeatedly analyzes the image contents and concludes transformations. Take the jumping cat as an example: a human first observes that the two images are about a cat. Next, they analyze the two images and find that the location of the cat has changed. Finally, they search their mind for a feasible transformation that could explain the observed state change, namely "the cat jumps down". If the transformation process is complex, e.g. cooking noodles, so that a single-step transformation is not enough to explain the entire state change, one repeats the analyzing and concluding stages until working out a sequence of transformations as the explanation. We transform the three stages accordingly into modules, as shown in the bottom row of Fig. 4, including encoder, decoder, and predictor, to form the TranNet framework. In the following, we briefly introduce how these modules work and show the instantiations of TranNet on our two problems in Sec. 6.2 and Sec. 6.3. \begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline & videos & clips & \multicolumn{6}{c}{videos with \(k\) clips} \\ \cline{4-9} & & & \(k=\)2 & \(k=\)3 & \(k=\)4 & \(k=\)5 & \(k=\)6 & \(k=\)7 \\ \hline Train & 8651 & 30244 & 2451 & 2497 & 1874 & 947 & 554 & 328 \\ Val & 1024 & 3616 & 283 & 291 & 235 & 96 & 76 & 43 \\ Test & 1430 & 4918 & 432 & 425 & 279 & 154 & 87 & 53 \\ \hline Total & 11105 & 38778 & 3166 & 3213 & 2388 & 1197 & 717 & 424 \\ \hline \hline \end{tabular} \end{table} TABLE II: Statistics of TRANCO. **Encoder.** The goal of the encoder is to extract effective features from image pairs, which are mainly associated with the content within and the relation between the two states. Specifically, an encoder \(E\) extracts image features \(\mathbf{h}\) from the two state images \(S\) and \(S^{\prime}\): \[\mathbf{h}=E(S,S^{\prime}). \tag{3}\] For the pair of inputs, there are two common ways to extract features, i.e. early fusion and latter fusion. In early fusion, the input images are combined before being fed into the network, while in latter fusion, the images are first encoded separately and then interact at the feature level. The backbone of the encoder can be any common image encoder, such as ResNet [11] or Vision Transformer [45]. **Decoder.** The decoder is a bridge between the encoder and the predictor. Its goal is to iteratively decode information from the image representation for the predictor to predict atomic transformations. In the \(i\)th step, in addition to \(\mathbf{h}\) from the encoder, the decoder also accepts the previous atomic transformations \(\mathbf{t_{k<i}}\) as inputs: \[\mathbf{g_{i}}=D(\mathbf{h},\mathbf{t_{0}},\cdots,\mathbf{t_{i-1}}), \tag{4}\] where \(\mathbf{t_{0}}\) is the initial atomic transformation, which can be set by different strategies, e.g. a randomly initialized vector optimized during learning. RNNs (e.g. GRU [15]) and the transformer [16] are selected as two variants of decoders, both being commonly used techniques for sequence generation. **Predictor.** The predictor is responsible for translating the information from the decoder into one specific atomic transformation, which should belong to the candidate atomic transformations. 
This is implemented by finding the \(\mathbf{t}\in\mathcal{T_{A}}\) that maximizes the _score_ for the received \(\mathbf{g_{i}}\): \[\mathbf{t_{i}}=\operatorname*{arg\,max}_{\mathbf{t}\in\mathcal{T_{A}}}score(\mathbf{g_{i} },\mathbf{t}). \tag{5}\] In general, there are two ways to implement the score function, corresponding to two different problem formulations. The first regards the score as a _classification_ function, which maximizes the likelihood of the desired atomic transformation given \(\mathbf{g_{i}}\). The second is a _contrastive learning_ formulation, which maximizes the similarity between \(\mathbf{g_{i}}\) and \(\mathbf{t}\). The major difference is that the labels or candidates in the classification formulation must be fixed and shared between training and testing, while contrastive learning does not require this. Therefore, the first way is more suitable for problems with few labels, and the second has advantages in generalization ability. The second difference is that the contrastive way needs an extra encoder to encode \(\mathbf{t}\) so that the similarity between \(\mathbf{g_{i}}\) and \(\mathbf{t}\) can be computed in the same vector space. Having introduced the basic idea of TranNet, the following two sections discuss how to implement TranNet in the two specific scenarios, i.e. TRANCE and TRANCO. Fig. 4: What we understand about human transformation reasoning (top), and the TranNet framework it inspires (bottom). Fig. 5: The architecture of TranceNet. ### _TranceNet_ There are two guidelines we follow to design TranceNet for TRANCE. The first is to design effective encoders; therefore we compare encoders with different encoding ways and architectures. The second is to formulate the prediction as a classification problem, since the atomic transformation space in TRANCE is fixed. Fig. 5 shows the architecture of TranceNet. In the encoder part, we consider two types of early fusion encoders and two types of latter fusion encoders. Early fusion ways include subtracting (\(-\)) or concatenating (\(\oplus\)) two images before feeding them into the networks, such as a vanilla CNN or ResNet. We use the network name with a subscript to denote early fusion encoders; for example, ResNet\({}_{-}\) means a ResNet fed with subtracted image pairs. The latter fusion encoders include BCNN [12] and DUDA [13]. BCNN is a classical model for fine-grained image classification, designed to distinguish categories with small visual differences. DUDA was originally proposed for change detection and captioning. The main difference between BCNN and DUDA lies in the way of feature-level interaction. We choose GRU [15] and the transformer as two different decoders for comparison. The GRU unit updates its hidden state and receives only the previous atomic transformation, so Eq. (4) becomes: \[\mathbf{g_{i}}=D(\mathbf{g_{i-1}},\mathbf{t_{i-1}}), \tag{6}\] where \(\mathbf{g_{0}}=\mathbf{h}\), and \(\mathbf{t_{0}}\) is a learned variable. Since the atomic transformation space of TRANCE is fixed (ten objects times all attribute values), it is better to formulate the problem as classification, and the final loss function simplifies to a combination of two cross-entropy losses, for the object and the value respectively: \[\mathcal{L}=-\frac{1}{n}\sum_{i=1}^{n}(\mathbf{t_{i}^{o}}\cdot\log\mathbf{g_{i}^{o}}+ \mathbf{t_{i}^{v}}\cdot\log\mathbf{g_{i}^{v}}). \tag{7}\] Note that the attribute in the triplet is implied by the value, since each value belongs to exactly one attribute here. 
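To make the classification formulation concrete, here is a minimal sketch of the per-step loss in Eq. (7), assuming PyTorch and assuming the decoder output \(\mathbf{g_{i}}\) has already been projected into separate object and value logits; the function and variable names are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def trance_step_loss(obj_logits, val_logits, obj_targets, val_targets):
    """Combined cross-entropy over the object and the value of each predicted
    atomic transformation, in the spirit of Eq. (7).

    obj_logits: (n, num_objects) -- one row per transformation step
    val_logits: (n, num_values)  -- values pooled across all attributes
    obj_targets, val_targets: (n,) integer class indices
    """
    loss_obj = F.cross_entropy(obj_logits, obj_targets)  # averages over the n steps
    loss_val = F.cross_entropy(val_logits, val_targets)
    return loss_obj + loss_val

# Toy usage: a 3-step transformation, 10 objects, 33 attribute values.
obj_logits = torch.randn(3, 10)
val_logits = torch.randn(3, 33)
obj_targets = torch.tensor([2, 7, 0])
val_targets = torch.tensor([5, 12, 30])
print(trance_step_loss(obj_logits, val_logits, obj_targets, val_targets))
```

Because each value belongs to exactly one attribute, no separate attribute head is needed, matching the note above.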
### _TrancoNet_ The requirements for TrancoNet are higher than for TranceNet. Compared with TranceNet, our first guideline additionally demands high recognition ability, therefore we employ pretrained CLIP. The second guideline is to formulate the transformation prediction in a contrastive learning style, because the atomic transformation space of TRANCO is dynamic rather than fixed from training to testing. Fig. 6 shows the architecture of TrancoNet. We choose the transformer as the main backbone to better model the order of atomic transformations. Meanwhile, we use a pretrained CLIP image encoder to reduce the training cost of extracting features from the real image and video data. CLIP is pretrained on massive image-text pairs and achieves SOTA on many multi-modal tasks, including video retrieval [46]. In the encoder part, we only consider the latter fusion way, since early fusion changes the input space and it is impossible to obtain good performance without tuning the CLIP models. The input images are first encoded separately with the CLIP image encoder, and then interact through a transformer encoder. In the decoder part, in the \(i\)th step, a transformer predicts the latent representation \(\mathbf{g_{i}}\) by applying cross attention to the state representation \(\mathbf{h}\) and the previous atomic transformations \(\{\mathbf{c_{0}}\cdots\mathbf{c_{i-1}}\}\), where \(\mathbf{c_{0}}\) is chosen to be the initial state and \(\mathbf{c_{i-1}}\) is the \((i-1)\)th video clip. Following [46], video clips are also encoded with the CLIP image encoder, by averaging the encodings of sampled video frames. Finally, in the predictor part, since the task is more like a problem of ranking the candidates and the candidates differ between training and testing, it is more natural to formulate the problem in a contrastive learning way that maximizes the similarity between \(\mathbf{g_{i}}\) and the corresponding encoded reference clip \(\mathbf{c_{i}}\), while decreasing the similarity with the other video clips from the candidate set: \[\mathcal{L}=-\frac{1}{n}\sum_{i=1}^{n}\log\frac{\exp(\mathbf{g_{i}}\cdot\text{CLIP }(\mathbf{c_{i}})/\tau)}{\sum_{\mathbf{c}\in\mathcal{T}_{A}}\exp(\mathbf{g_{i}}\cdot\text{ CLIP}(\mathbf{c})/\tau)}. \tag{8}\] The score function can then be written as the cosine similarity: \[score(\mathbf{g_{i}},\text{CLIP}(\mathbf{c}))=\frac{\mathbf{g_{i}}\cdot\text{CLIP}(\mathbf{c} )}{|\mathbf{g_{i}}||\text{CLIP}(\mathbf{c})|}. \tag{9}\] During inference, the prediction loop ends when the predictor matches the final state image. ## 7 Experiments on Trance In this section, we first briefly introduce the experimental settings, and then show our experimental results on the three settings of TRANCE, i.e. Basic, Event, and View. We also conduct analyses to provide some insights about machines' ability to reason about transformations. We would like to test how well existing methods work on this new task. However, since the inputs and outputs of TVR are quite different from those of existing visual reasoning tasks, existing methods like [9, 10] cannot be directly applied. Instead, we compare eight TranceNet variants as well as humans as the initial benchmark. _TranceNets._ In the encoder part, we test two networks encoding images in the early fusion way, i.e. a vanilla CNN and ResNet, combined with two fusion methods, i.e. subtraction (\(-\)) and concatenation (\(\oplus\)), yielding CNN\({}_{-}\), CNN\({}_{\oplus}\), ResNet\({}_{-}\), ResNet\({}_{\oplus}\). 
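As a concrete illustration of the two early fusion variants just listed, the sketch below (assuming PyTorch; the tiny convolutional backbone is a stand-in, not the architecture used in the paper) shows that subtraction and concatenation differ only in how the state pair is merged before the backbone.

```python
import torch
import torch.nn as nn

class EarlyFusionEncoder(nn.Module):
    """Minimal early-fusion encoder: fuse (S, S') first, then encode."""

    def __init__(self, fusion: str = "sub", out_dim: int = 128):
        super().__init__()
        assert fusion in ("sub", "cat")
        self.fusion = fusion
        in_ch = 3 if fusion == "sub" else 6  # RGB difference vs. stacked RGB pair
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, s, s_prime):
        if self.fusion == "sub":            # the subtraction (-) variants
            x = s_prime - s
        else:                               # the concatenation variants
            x = torch.cat([s, s_prime], dim=1)
        return self.backbone(x)

h = EarlyFusionEncoder("cat")(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(h.shape)  # torch.Size([2, 128])
```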
We also test BCNN and DUDA as encoders in the latter fusion way. The decoder of the first six models is a GRU, while the decoder of the last two models is a transformer. The predictor is shared, as described in Sec. 6.2. We denote these models by their encoders' names suffixed with 'G' and 'T' for the GRU decoder and the transformer decoder respectively. For example, ResNet\({}_{\oplus}\)-G means the encoder is a ResNet fed with concatenated image pairs and the decoder is a GRU. During training, teacher forcing [47] is applied for faster convergence. More implementation details such as the number of layers and kernel sizes can be found in the supplementary. _Human._ To compare with humans, for each of the three settings, we also collect human results on 100 samples in total. These results come from 10 CS Ph.D. candidates who are familiar with our problems and the testing system. Fig. 6: The architecture of TrancoNet. ### _Results on Three Settings_ From the results of Basic in the left part of Tab. III, we can see that all models perform quite well, in the sense that the performance gap between these models and humans is not very large. Comparing these models, whose difference lies in the encoder, ResNet\({}_{-/\oplus}\)-G performs better than BCNN-G and DUDA-G. Recall that CNN\({}_{-/\oplus}\) and ResNet\({}_{-/\oplus}\) are early fusion encoders while BCNN and DUDA are latter fusion encoders. We can conclude that the early fusion way is better than the latter fusion way on the Basic setting, as the parameter sizes of ResNet\({}_{-/\oplus}\), BCNN, and DUDA are similar. Looking closely at the fine-grained accuracy, we can see that the way of encoding affects the ability to find the correct objects and values, while the ability to distinguish different attributes is almost the same. The middle part of Tab. III shows the experimental results of Event. The large performance gap between models and humans suggests that Event is very challenging for machines. The major reason is that the answer space grows exponentially as the number of steps increases. In our experiments, the size of the answer space is \(\sum_{i=1}^{4}(33\times 10)^{i}\), about 11.86 billion. The performance (e.g. Acc) gap between CNN\({}_{-/\oplus}\)-G and ResNet\({}_{-/\oplus}\)-G becomes even larger on Event compared with Basic, which suggests that larger encoders have advantages in extracting sufficient features to decode transformation sequences. ResNet\({}_{-/\oplus}\)-T performs better than ResNet\({}_{-/\oplus}\)-G by about 5% on the test samples, which shows the advantage of the transformer over the GRU. We also employ reinforcement learning to train models. Specifically, signals such as the _correctness_ and the _distance_ of a prediction to the reference transformation can be easily obtained after a simulation. Therefore, these signals can be used as rewards in the REINFORCE [48] algorithm to further train ResNet\({}_{-}\)-T models (a sketch of this reward-weighted training is given at the end of this subsection). Tab. IV shows that all three rewards significantly improve performance, and the difference among them is small. The right part of Tab. III shows the results of View. While humans are insensitive to view variations, the performance of all deep models drops sharply from Event to View according to \(\Delta\)Acc, from -0.0497 to -0.2119. Among these models, CNN models with fewer parameters drop more sharply, while ResNet\({}_{-/\oplus}\)-T are affected the least, which shows the benefit of larger models and the advantage of the transformer. 
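The REINFORCE fine-tuning mentioned above can be sketched as follows (assuming PyTorch; `reinforce_loss` and the way the reward is obtained are illustrative assumptions rather than the authors' implementation). The idea is to sample a transformation sequence, obtain a reward after simulating it, e.g. the _correctness_ or the negative _distance_ to the reference, and weight the sequence log-likelihood by that reward.

```python
import torch
from torch.distributions import Categorical

def reinforce_loss(step_logits, reward):
    """step_logits: list of (num_candidates,) tensors, one per decoding step.
    reward: scalar obtained after simulating the sampled sequence, e.g. +1 for
    a correct transformation ('corr') or the negative distance ('dist')."""
    log_probs, actions = [], []
    for logits in step_logits:
        dist = Categorical(logits=logits)
        action = dist.sample()                # non-differentiable sample
        actions.append(action.item())
        log_probs.append(dist.log_prob(action))  # differentiable w.r.t. logits
    loss = -reward * torch.stack(log_probs).sum()
    return loss, actions

# Toy usage: 3 decoding steps over 330 (object, value) candidates.
logits = [torch.randn(330, requires_grad=True) for _ in range(3)]
loss, sampled = reinforce_loss(logits, reward=1.0)
loss.backward()
```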
### _Detailed Analysis on Event and View_ According to the above experimental results, models perform worse on Event and View. To understand the task more deeply and provide some insights for future model designs, we conduct a detailed analysis of two crucial factors of transformation, i.e. sequence length and order. Firstly, we analyze the effect of transformation sequence length on Event, which is the major condition that differs from Basic. Specifically, we separate all test samples into four groups based on their lengths, i.e. samples with \(k\)-step transformation (\(k=1,2,3,4\)). Then we plot the Acc for each group in Fig. 7. From the results, both human and deep models work quite well when the length is short, e.g. 1. As the length increases, humans still capture complicated transformations very well. However, the performance of deep models declines sharply. Take CNN\({}_{-}\)-G as an example, the performances for the four different groups are 92%, 55%, 23%, and 8%. These results indicate that future studies should focus more on how to tackle transformations with long steps. Another conclusion is that transformer is more \begin{table} \begin{tabular}{l c c c} \hline \hline Model & AD\(\downarrow\) & AND\(\downarrow\) & LAcc\(\uparrow\) & Acc\(\uparrow\) \\ \hline ResNet\({}_{-}\)-T & 0.8389 & 0.2601 & 0.6865 & 0.6553 \\ + _corr_ & 0.7711 & 0.2367 & 0.7061 & 0.6729 \\ + _dist_ & 0.7741 & 0.2370 & 0.7065 & 0.6734 \\ + _corr \& dist_ & **0.7681** & **0.2354** & **0.7069** & **0.6740** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Results of ResNet\({}_{-}\)-T trained using REINFORCE [48] with different rewards on Event. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Basic} & \multicolumn{4}{c}{Event} & \multicolumn{4}{c}{View} & \multirow{2}{*}{\(\Delta\)Acc\(\uparrow\)} \\ \cline{2-2} \cline{4-13} & ObjAcc\(\uparrow\) & AttrAcc\(\uparrow\) & ValAcc\(\uparrow\) & Acc\(\uparrow\) & & AD\(\downarrow\) & AND\(\downarrow\) & LAcc\(\uparrow\) & Acc\(\uparrow\) & AD\(\downarrow\) & AND\(\downarrow\) & LAcc\(\uparrow\) & Acc\(\uparrow\) & \\ \hline CNN\({}_{-}\)-G & 0.9596 & 0.9954 & 0.9834 & 0.9440 & 1.5842 & 0.5217 & 0.4568 & 0.4419 & 2.2649 & 0.8851 & 0.2376 & 0.2300 & -0.2119 \\ CNN\({}_{\oplus}\)G & 0.9570 & 0.9942 & 0.9798 & 0.9390 & 1.4146 & 0.4725 & 0.4961 & 0.4797 & 2.0671 & 0.7887 & 0.2898 & 0.2789 & -0.2008 \\ BCNN-G & 0.9684 & 0.9946 & 0.9818 & 0.9524 & 1.1219 & 0.3623 & 0.5847 & 0.5610 & 1.2915 & 0.4437 & 0.4977 & 0.4749 & -0.0861 \\ DUDA-G & 0.9534 & 0.9922 & 0.9838 & 0.9394 & 1.3184 & 0.4170 & 0.5612 & 0.5401 & 1.4943 & 0.5130 & 0.4837 & 0.4645 & -0.0756 \\ ResNet\({}_{-}\)-G & 0.9808 & **0.9982** & 0.9934 & 0.9744 & 1.0072 & 0.3108 & 0.6350 & 0.6057 & 1.0552 & 0.3564 & 0.5704 & 0.5454 & -0.0603 \\ ResNet\({}_{\oplus}\)-G & **0.9856** & 0.9980 & **0.9954** & **0.9814** & 1.0624 & 0.3336 & 0.6217 & 0.5932 & 1.1353 & 0.3760 & 0.5681 & 0.5426 & -0.0507 \\ ResNet\({}_{-}\)-T & - & - & - & **0.8389** & **0.2601** & **0.6865** & **0.6553** & **0.8832** & **0.2933** & **0.6324** & **0.6012** & -0.0541 \\ ResNet\({}_{\oplus}\)-T & - & - & - & - & 0.8873 & 0.2777 & 0.6743 & 0.6424 & 0.9260 & 0.3084 & 0.6243 & 0.5927 & **-0.0497** \\ \hline Human & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 0.3700 & 0.1200 & 0.8300 & 0.8300 & 0.3200 & 0.0986 & 0.8433 & 0.8433 & 0.0133 \\ \hline \hline \end{tabular} \end{table} TABLE III: Model and human performance on Basic, Event, and View. 
\(\Delta\)Acc is the accuracy difference between View and Event. Fig. 7: Results on Event with respect to different steps. advanced than the GRU because of its higher ability to model longer sequences. Then we analyze the effect of the order on Event, which is another important factor in this data. We collect results on order-sensitive samples. Specifically, we first build a subset of order-sensitive samples by testing, for each sample in the test set, whether there exists a permutation of the sequence that prevents a successful transformation, caused by overlapping or by moving out of the plane. We then test models on these samples, comprising 6.2% of the test set (see Footnote 1); the result is shown in Tab. V. The metric EO is directly defined to measure the influence of order; LAcc and Acc are listed for reference. From the results, we can see that the EO of humans is zero. That is to say, once humans find all the correct atomic transformations, it is not hard for them to figure out the order. However, for all deep models, the EOs are larger than zero, which indicates a clear effect of the order on the reasoning process. In order to find out the extent of this effect, i.e., whether \(0.0942\sim 0.2050\) is a large deviation, we perform an experiment on 100 randomly selected order-sensitive samples. Specifically, we randomly permute the reference atomic transformations. As a result, the EO is 0.5008, which can be viewed as an upper bound of the order error. Therefore, the current deep models indeed have some ability to handle the order, but there is still large room for improvement. Footnote 1: In another subset that contains only positional transformations, where 25% of the samples are order-sensitive, the experimental results are similar. We finally analyze the effect of view variation. For each model, we provide the results for different final views, as shown in Fig. 8. Please note that the results of CNN\({}_{\oplus}\)-G, BCNN-G, ResNet\({}_{\oplus}\)-G, and ResNet\({}_{\oplus}\)-T are quite similar to those of CNN\({}_{-}\)-G, DUDA-G, ResNet\({}_{-}\)-G, and ResNet\({}_{-}\)-T, so we only give the results for the latter, typical models. The results of humans across different views change little, demonstrating humans' powerful ability to adapt to different views. In some cases, humans perform even better when the view is changed than when it is unchanged. That is because humans usually spend more time solving problems when the view is altered, which reduces the chance of errors. In contrast, the deep learning models share a common trend that view variations hurt performance. Among these models, CNN-G decreases the most, while DUDA-G shows its robustness. In conclusion, models with more parameters are more robust to view variations, and feature-level interaction like that used in DUDA-G is helpful. ## 8 Experiments on Tranco The previous section analyzed the experimental results on the synthetic dataset TRANCE. This section analyzes how models perform on real data. As before, the experimental setting is first briefly introduced, and then we present the analysis of the results. In terms of baselines, we first set a random baseline to provide a lower bound on performance as a reference. We then compare five TrancoNet models to set the initial benchmark for TRANCO. _Random._ First, the total number of steps \(n\) is randomly selected from 2 to 7. Next, \(n\) non-repeating atomic transformations are sequentially and randomly sampled from the candidate set as the prediction. 
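The random baseline just described is simple enough to state exactly; a minimal sketch in plain Python, with candidate clips represented by opaque IDs, is:

```python
import random

def random_baseline(candidates, min_steps=2, max_steps=7, seed=None):
    """TRANCO random baseline: pick a length n in [2, 7], then sample n distinct
    atomic transformations (video clips) from the candidate set, in a random
    order, as the predicted transformation."""
    rng = random.Random(seed)
    n = rng.randint(min_steps, max_steps)
    n = min(n, len(candidates))  # guard for very small candidate sets
    return rng.sample(list(candidates), n)

# Toy usage with a candidate set of 100 clip IDs.
prediction = random_baseline([f"clip_{i}" for i in range(100)], seed=0)
print(len(prediction), prediction[:3])
```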
_TrancoNets._ In the encoder part, we consider three types of encoders borrowed from CLIP [14], including RN101, ViT-B/16, and ViT-B/32. The input images are encoded in the latter fusion way. In the decoder part, in addition to the transformer decoder described in Sec. 6.3, a GRU decoder is also compared. These models are denoted by their encoders' names suffixed with 'G' or 'T', indicating GRU and transformer respectively. During training, it is computationally expensive to include all available video clips of the training set in the candidate set. Therefore, for each sample, we randomly select negative atomic transformations from other training samples to constitute a candidate set of size 20, which is a trade-off between performance and resource consumption. Further analysis of the candidate set size and more implementation details of the models are included in the supplementary material. During evaluation, in addition to the full test candidate set, which contains 4918 atomic transformations (video clips), we also construct a tiny candidate set of size 100 for each sample. This helps us study how the candidate set size affects the models' performance. The results on tiny candidate sets are suffixed with '@100', e.g. EMR@100. Fig. 8: Results for different final views (Center, Left, Right). \begin{table} \begin{tabular}{l c c c} \hline \hline Model & LAcc\(\uparrow\) & Acc\(\uparrow\) & EO\(\downarrow\) \\ \hline Random (avg. of 100) & 1.0000 & 0.4992 & 0.5008 \\ \hline CNN\({}_{-}\)-G & 0.1540 & 0.1395 & **0.0942** \\ DUDA-G & 0.1944 & 0.1613 & 0.1701 \\ BCNN-G & 0.2339 & 0.1935 & 0.1724 \\ ResNet\({}_{-}\)-G & 0.3226 & 0.2565 & 0.2050 \\ ResNet\({}_{-}\)-T & **0.3556** & **0.2911** & 0.1814 \\ \hline Human & 0.7273 & 0.7273 & 0.0000 \\ \hline \hline \end{tabular} \end{table} TABLE V: Results on the 6.2% order-sensitive samples from Event. ### _Results on Tranco_ Tab. VI shows the performance of the five models on the two sizes of candidate sets. From the table, we can see that the EMR@100 of the random baseline is exactly zero. This is because the transformation space, a combination of different atomic transformations with different orders, is large. Given such a huge space, it is almost impossible to find a correct answer by picking random atomic transformations and assigning a random order. Another comparison is between the results of ResNet\({}_{\oplus}\)-G on TRANCE and the results of RN101-G here. While RN101-G has more parameters than ResNet\({}_{\oplus}\)-G, and is pretrained, the EMR of RN101-G on TRANCO (0.0490) is much lower than the Acc of ResNet\({}_{\oplus}\)-G on View (0.5425). These results show that TRANCO is hard, much more difficult than TRANCE. Next, by comparing the left part of the table with the right part, we find that, compared with the EMR on tiny candidate sets, the EMR of all models on the full candidate set drops by more than 60 percent, which suggests that the high diversity of atomic transformations is one reason why TRANCO is difficult. Finally, comparing the transformer-based models with the GRU-based models shows that the transformer performs better at reasoning about transformations. The large gap in recall and KTD indicates that the transformer is better at finding the complete set of atomic transformations and at capturing the order. ### _Detailed Analysis on TRANCO_ As previously analyzed on TRANCE, sequence length and order are two important factors for transformation reasoning. In this section, we analyze the impact of sequence length again. However, order cannot be analyzed further, since evaluating it on real data is not convenient. 
Instead, we analyze how pretrained CLIP matters, since real data requires additional recognition ability and pretrained CLIP is expected to help. We first analyze how the transformation sequence length affects the models' performance. The results are shown in Fig. 9. The length of a transformation ranges from 2 to 7 on TRANCO. From the results, we can see that models answer half of the 2-step samples correctly. However, the EMR@100 drops sharply when the length is larger than 2 and becomes zero when the length is larger than 4. These results confirm the previous findings on TRANCE that transformations with more steps are difficult and should be focused on in future studies. Another finding is that the transformer indeed performs better than the GRU on longer transformations, due to its outstanding ability to capture long-range dependencies. Another important question is how pretrained CLIP benefits the models. Therefore, we compare three different strategies for training ViT-B/32-T; the results are shown in Tab. VII. We can see that models initialized with pretrained weights perform much better than models trained from scratch, improving by about 60% on EMR@100 and 100% on EMR. During training, we also observe that models initialized with pretrained weights converge much faster. All these results suggest that pretrained weights from CLIP indeed benefit transformation reasoning, thanks to CLIP's strong ability to extract semantically meaningful representations. However, the performance drops slightly when the pretrained weights are further tuned. By jointly analyzing the EMR curve during training, we find that tuning the pretrained weights results in overfitting while fixing them does not. We believe the small training set does not support tuning a better feature extractor, therefore the pretrained weights are fixed in all other experiments on TRANCO. ## 9 Discussion: from TRANCE to TRANCO From the experimental results on TRANCE (e.g. Event) and TRANCO, there are some similarities and differences between the synthetic and real settings. The biggest similarity is that transformations with more steps are harder to reason about correctly, according to Fig. 7 and Fig. 9. With a deeper analysis of the failure cases from the two datasets, we find the types of mistakes are slightly different. In TRANCE, even in failure cases, models are able to find most objects and actions but may fail to match the action to the correct object or to find a correct order, as shown in Fig. 10. In TRANCO, by contrast, models most of the time fail even to find all correct transformations from the candidates, let alone the right order. This is mainly due to the different characteristics of the two problems. Objects and their attributes are simple in TRANCE but significantly diverse in TRANCO. 
Therefore, the requirement \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{6}{c}{Tiny Candidates (100)} & \multicolumn{6}{c}{Full Candidates (4918)} \\ \cline{2-13} & R \(\uparrow\) & P \(\uparrow\) & KTD \(\downarrow\) & SD \(\downarrow\) & NSD \(\downarrow\) & EMR \(\uparrow\) & R \(\uparrow\) & P \(\uparrow\) & KTD \(\downarrow\) & SD \(\downarrow\) & NSD \(\downarrow\) & EMR \(\uparrow\) \\ \hline Random & 0.0316 & 0.0409 & 0.9958 & 1.9364 & 0.6845 & 0.0000 & 0.0005 & 0.0006 & 1.0000 & 1.9888 & 0.7093 & 0.0000 \\ \hline RN101-G & 0.5733 & **0.8633** & 0.4540 & 1.5503 & 0.4002 & 0.1524 & 0.3374 & 0.4865 & 0.7489 & 1.8147 & 0.5509 & 0.0490 \\ ViT-B/32-T & **0.7549** & 0.8434 & 0.2981 & 1.4154 & 0.4248 & 0.1986 & **0.4890** & 0.5205 & 0.5982 & 1.7399 & 0.5709 & **0.0881** \\ ViT-B/16-T & 0.7416 & 0.8420 & 0.2912 & 1.4413 & 0.4343 & 0.1860 & 0.4883 & **0.5394** & 0.5933 & 1.6007 & 0.5148 & 0.0811 \\ RN101-T & 0.7188 & 0.8620 & **0.2545** & **1.1154** & **0.2908** & **0.2161** & 0.4598 & 0.5006 & **0.5865** & **1.2832** & **0.3976** & 0.0727 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Model results on TRANCO (R and P are short for Recall and Precision). Fig. 9: Results on TRANCO with respect to different steps. \begin{table} \begin{tabular}{l c c} \hline \hline Pretrain strategy & EMR@100 \(\uparrow\) & EMR \(\uparrow\) \\ \hline from scratch & 0.1210 & 0.0378 \\ pretrain w/o finetune & **0.1986** & **0.0881** \\ pretrain w/ finetune & 0.1965 & 0.0748 \\ \hline \hline \end{tabular} \end{table} TABLE VII: Results on TRANCO with respect to different patterns. for image recognition ability is higher on TRANCO. This is why we empirically found pretrained ResNet has little positive effects on TRANCE but pretrained image encoders such as CLIP make a huge difference (Tab. 7) on TRANCO. However, both datasets require context reasoning ability to generate the correct sequence of transformations, especially when the number of steps is large. Transformer is known to be good at modeling long range dependencies, and this is why it performs better than GRU on both problems. From these two observations, we believe that improving visual transformation reasoning is primarily a matter of finding models with greater abilities of image recognition and contextual reasoning, to make models robust even when reasoning transformations with many steps. ## 10 Conclusion To tackle the problem that most existing visual reasoning tasks are solely defined in static settings and cannot well capture the dynamics between states, we propose a new visual reasoning paradigm, namely transformation driven visual reasoning (TVR). Given the initial and final states, the target is to infer the corresponding sequence of atomic transformations, while the atomic transformation is represented by a triplet (object, attribute, value) or a video clip. In this paper, as an example, we use CLEVR to construct a new synthetic data, namely TRANCE, which includes three different levels of settings, i.e. Basic for single-step transformation, Event for multi-step transformation, and View for multi-step transformation with variant views. We also construct a real dataset called TRANCO to test reasoning "open-world" transformations. To study the effectiveness of existing SOTA reasoning techniques, we propose a human-inspired reasoning framework named TranNet. 
The experimental results show that our best model works well on Basic, while still having difficulties solving Event, View, and the more difficult TRANCO. Specifically, the difficult point of Event is to find all atomic transformations and arrange them in a feasible order, especially when the sequence is long. The view variations in View bring great challenges to these models, but have little impact on humans. TRANCO, in turn, brings the extra challenge of a massive and diverse set of atomic transformations. ## Acknowledgments This work was supported by the National Key R&D Program of China under Grant 2022YFB3103704, in part by the National Natural Science Foundation of China (NSFC) under Grant 62276248, and in part by Beijing Academy of Artificial Intelligence (BAAI) under Grant BAAI2020ZJ0303.
2304.00887
Compressed Indexing for Consecutive Occurrences
The fundamental question considered in algorithms on strings is that of indexing, that is, preprocessing a given string for specific queries. By now we have a number of efficient solutions for this problem when the queries ask for an exact occurrence of a given pattern $P$. However, practical applications motivate the necessity of considering more complex queries, for example concerning near occurrences of two patterns. Recently, Bille et al. [CPM 2021] introduced a variant of such queries, called gapped consecutive occurrences, in which a query consists of two patterns $P_{1}$ and $P_{2}$ and a range $[a,b]$, and one must find all consecutive occurrences $(q_1,q_2)$ of $P_{1}$ and $P_{2}$ such that $q_2-q_1 \in [a,b]$. By their results, we cannot hope for a very efficient indexing structure for such queries, even if $a=0$ is fixed (although at the same time they provided a non-trivial upper bound). Motivated by this, we focus on a text given as a straight-line program (SLP) and design an index taking space polynomial in the size of the grammar that answers such queries in time optimal up to polylog factors.
Paweł Gawrychowski, Garance Gourdel, Tatiana Starikovskaya, Teresa Anna Steiner
2023-04-03T11:15:47Z
http://arxiv.org/abs/2304.00887v1
# Compressed Indexing for Consecutive Occurrences+ ###### Abstract The fundamental question considered in algorithms on strings is that of indexing, that is, preprocessing a given string for specific queries. By now we have a number of efficient solutions for this problem when the queries ask for an exact occurrence of a given pattern \(P\). However, practical applications motivate the necessity of considering more complex queries, for example concerning near occurrences of two patterns. Recently, Bille et al. [CPM 2021] introduced a variant of such queries, called gapped consecutive occurrences, in which a query consists of two patterns \(P_{1}\) and \(P_{2}\) and a range \([a,b]\), and one must find all consecutive occurrences \((q_{1},q_{2})\) of \(P_{1}\) and \(P_{2}\) such that \(q_{2}-q_{1}\in[a,b]\). By their results, we cannot hope for a very efficient indexing structure for such queries, even if \(a=0\) is fixed (although at the same time they provided a non-trivial upper bound). Motivated by this, we focus on a text given as a straight-line program (SLP) and design an index taking space polynomial in the size of the grammar that answers such queries in time optimal up to polylog factors. Introduction In the indexing problem, the goal is to preprocess a string for locating occurrences of a given pattern. For a string of length \(N\), structures such as the suffix tree [36] or the suffix array [31], use space linear in \(N\) and allow for answering such queries in time linear in the length of the pattern \(m\). By now, we have multiple space- and time-efficient solutions for this problem (both in theory and in practice). We refer the reader to the excellent survey by Lewenstein [29] that provides an overview of some of the approaches and some of its extensions, highlighting its connection to orthogonal range searching. However, from the point of view of possible applications, it is desirable to allow for more general queries than just locating an exact match of a given pattern in the preprocessed text, while keeping the time sublinear in the length of the preprocessed string. A very general query is locating a substring matching a regular expression. Very recently, Gibney and Thankachan [19] showed that if the Online Matrix-Vector multiplication conjecture holds, even with a polynomial preprocessing time we cannot answer regular expression query in sublinear time. A more reasonable and yet interesting query could concern occurrences of two given patterns that are closest to each other, or just close enough. Preprocessing a string for queries concerning two patterns has been first studied in the context of document retrieval, where the goal is to preprocess a collection of strings. There, in _the two patterns document retrieval problem_ the query consists of two patterns \(P_{1}\) and \(P_{2}\), and we must report all documents containing both of them [32]. In _the forbidden pattern query problem_ we must report all documents containing \(P_{1}\) but not \(P_{2}\)[15]. For both problems, the asymptotically fastest linear-space solutions need as much as \(\Omega(\sqrt{N})\) time to answer a query, where \(N\) is the total length of all strings [23, 22]. That is, the complexity heavily depends on the length of the strings. Larsen et al. [28] established a connection between Boolean matrix multiplication and the two problems, thus providing a conditional explanation for the high \(\Omega(\sqrt{N})\) query complexity. Later, Kopelowitz et al. 
[27] provided an even stronger argument using a connection to the 3SUM problem. Even more relevant to this paper is the question considered by Kopelowitz and Krauthgamer [26], who asked for preprocessing a string for computing, given two patterns \(P_{1}\) and \(P_{2}\), their occurrences that are closest to each other. The main result of their paper is a structure constructible in \(O(N^{1.5}\log^{\epsilon}N)\) time that answers such queries in \(O(|P_{1}|+|P_{2}|+\sqrt{N}\log^{\epsilon}N)\), for a string of length \(N\), for any \(\epsilon>0\). They also established a connection between Boolean matrix multiplication and this problem, highlighting a difficulty in removing the \(O(\sqrt{N})\) from both the preprocessing and query time at the same time. The focus of this paper is the recently introduced variant of the indexing problem, called _gapped indexing for consecutive occurrences_, in which a query consists of two patterns \(P_{1}\) and \(P_{2}\) and a range \([a,b]\), and one must find the pairs of consecutive occurrences of \(P_{1},P_{2}\) separated by a distance in the range \([a,b]\). Navarro and Thankanchan [33] showed that for \(P_{1}=P_{2}\) there is a \(O(n\log n)\)-space index with optimal query time \(O(m+\text{occ})\), where \(m=|P_{1}|=|P_{2}|\) and \(\text{occ}\) is the number of pairs to report, but in conclusion they noticed that extending their solution to the general case of two patterns might not be possible. Bille et al. [4] provided an evidence of hardness of the general case and established a (conditional) lower bound for gapped indexing for consecutive occurrences, by connecting its complexity to that of set intersection. This lower bound suggests that, at least for indexes of size \(\tilde{O}(N)\), achieving _query time better_ than \(\tilde{O}(|P_{1}|+|P_{2}|+\sqrt{N})\) would contradict the Set Disjointness conjecture, even if \(a=0\) is fixed. In particular, obtaining query time depending mostly on the lengths of the patterns (perhaps with some additional logarithms), arguably the whole point of string indexing, is unlikely in this case. Motivated by the (conditional) lower bound for gapped indexing for consecutive occurrences, we consider the compressed version of this problem for query intervals \([0,b]\). For exact pattern matching, there is a long line of research devoted to designing the so-called compressed indexes, that is, indexing structures with the size being a function of the length of the compressed representation of the text, see e.g. the entry in the Encyclopedia of Algorithms [30] or the Encyclopedia of Database Systems [13]. This suggests the following research direction: can we design an efficient compressed gapped index for consecutive occurrences? The answer of course depends on the chosen compression method. With a goal to design an index that uses very little space, we focus on the most challenging setting when the compression is capable of describing a string of exponential length (in the size of its representation). An elegant formalism for such a compression method is that of straight-line programs (SLP), which are context-free grammars describing exactly one string. SLPs are known to capture the popular Lempel-Ziv compression method up to a logarithmic factor [7, 35], and at the same time provide a more convenient interface, and in particular, allow for random access in \(O(\log N)\) time [5]. By now it is known that pattern matching admits efficient indexing in SLP-compressed space. 
Assuming a string \(S\) of length \(N\) described by an SLP with \(g\) productions, Claude and Navarro [9] designed an \(O(g)\)-space index for \(S\) that allows retrieving all occurrences of a pattern of length \(m\) in time \(O(m^{2}\log\log N+\operatorname{occ}\log g)\). Recently, several results have improved the query time bound while still using a comparable \(O(g\log N)\) amount of space: Claude, Navarro and Pacheco [10] showed an index with query time \(O((m^{2}+\operatorname{occ})\log g)\); Christiansen et al. [8] used string attractors to further improve the time bound to \(O(m+\operatorname{occ}\log^{\epsilon}N)\); and Diaz-Dominguez et al. [12] achieved \(O((m\log m+\operatorname{occ})\log g)\) query time. However, it is not always the case that a highly compressible string is easier to preprocess. On the negative side, Abboud et al. [1] showed that, for some problems on compressed strings, such as computing the LCS, _one cannot completely avoid a high dependency on the length of the uncompressed string, and that for other problems on compressed strings, such as context-free grammar parsing or RNA folding, one essentially cannot hope for anything better than just decompressing the string and working with the uncompressed representation!_ This is also the case for some problems related to linear algebra [2]. Hence, it was not clear to us whether one can avoid a high dependency on the length of the uncompressed string in the gapped indexing for consecutive occurrences problem. In this work, we address the lower bound of Bille et al. [4] and show that, despite the negative results by Abboud et al. [1], one can circumvent it assuming that the text is very compressible: **Theorem 1.1**.: _For an SLP of size \(g\) representing a string \(S\) of length \(N\), there is an \(O(g^{5}\log^{5}N)\)-space data structure that maintains the following queries: given two patterns \(P_{1},P_{2}\) both of length \(O(m)\), and a range \([0,b]\), report all \(\operatorname{occ}\) consecutive occurrences of \(P_{1}\) and \(P_{2}\) separated by a distance \(d\in[0,b]\). The query time is \(O(m\log N+(1+\operatorname{occ})\cdot\log^{4}N\log\log N)\)._ While achieving \(O(g)\) space and \(O(m+\operatorname{occ})\) query time would contradict the Set Disjointness conjecture by the reduction of Bille et al. [4], one might wonder if the space can be improved without increasing the query time, and what the true complexity of the problem is when \(a\) is not fixed (recall that \([a,b]\) is the range limiting the distance between co-occurrences to report). While we leave improving the space and the general case as interesting open questions, we show that in the simpler case \(a=0,b=N\) (i.e. when there is no bound on the distance between the starting positions of \(P_{1}\) and \(P_{2}\)), our techniques do allow for \(O(g^{2}\log^{4}N)\) space complexity, see Corollary 3.2 (and Footnote 1). Footnote 1: Note that the conditional lower bound of Bille et al. [4] does not hold for this simpler case. Throughout the paper we assume a unit-cost RAM model of computation with word size \(\Theta(\log N)\). All space complexities refer to the number of words used by a data structure. ## 2 Preliminaries A _string_ \(S\) of length \(|S|=N\) is a sequence \(S[0]S[1]\ldots S[N-1]\) of characters from an alphabet \(\Sigma\). We denote the _reverse_ \(S[N-1]S[N-2]\ldots S[0]\) of \(S\) by \(\operatorname{rev}(S)\). 
We define \(S[i\ldots j]\) to be equal to \(S[i]\ldots S[j]\) which we call a _substring_ of \(S\) if \(i\leq j\) and to the empty string otherwise. We also use notations \(S[i\ldots j]\) and \(S(i\ldots j]\) which naturally stand for \(S[i]\ldots S[j-1]\) and \(S[i+1]\ldots S[j]\), respectively. We call a substring \(S[0\ldots i]\)_a prefix_ of \(S\) and use a simplified notation \(S[\ldots i]\), and a substring \(S[i\ldots N-1]\)_a suffix_ of \(S\) denoted by \(S[i\ldots]\). We say that \(X\) is a _substring_ of \(S\) if \(X=S[i\ldots j]\) for some \(0\leq i\leq j\leq N-1\). The index \(i\) is called an _occurrence_ of \(X\) in \(S\). An occurrence \(q_{1}\) of \(P_{1}\) and an occurrence \(q_{2}\) of \(P_{2}\) form a _consecutive occurrence (co-occurrence)_ of strings \(P_{1},P_{2}\) in a string \(S\) if there are no occurrences of \(P_{1},P_{2}\) between \(q_{1}\) and \(q_{2}\), formally, there should be no occurrences of \(P_{1}\) in \((q_{1},q_{2}]\) and no occurrences of \(P_{2}\) in \([q_{1},q_{2})\). For brevity, we say that a co-occurrence is _\(b\)-close_ if \(q_{2}-q_{1}\leq b\). An integer \(\pi\) is a _period_ of a string \(S\) of length \(N\), if \(S[i]=S[i+\pi]\) for all \(i=0,\ldots,N-1-\pi\). The smallest period of a string \(S\) is called _the period_ of \(S\). We say that \(S\) is _periodic_ if the period of \(S\) is at most \(N/2\). We exploit the well-known corollary of the Fine and Wilf's periodicity lemma [14]: **Corollary 2.1**.: _If there are at least three occurrences of a string \(Y\) in a string \(X\), where \(|X|\leq 2|Y|\), then the occurrences of \(Y\) in \(X\) form an arithmetic progression with a difference equal to the period of \(Y\)._ ### Grammars **Definition 2.2** (Straight-line program [25]).: _A straight-line program (SLP) \(G\) is a context-free grammar (CFG) consisting of a set of non-terminals, a set of terminals, an initial symbol, and a set of productions, satisfying the following properties:_ * _A production consists of a left-hand side and a right-hand side, where the left-hand side is a non-terminal_ \(A\) _and the right-hand side is either a sequence_ \(BC\)_, where_ \(B,C\) _are non-terminals, or a terminal;_ * _Every non-terminal is on the left-hand side of exactly one production;_ * _There exists a linear order_ \(<\) _on the non-terminals such that_ \(A<B\) _whenever_ \(B\) _occurs on the right-hand side of the production associated with_ \(A\)_._ A _run-length straight-line program_ (RLSLP) [34] additionally allows productions of form \(A\to B^{k}\) for positive integers \(k\), which correspond to concatenating \(k\) copies of \(B\). If \(A\) is associated with a production \(A\to a\), where \(a\) is a terminal, we denote \(\mathsf{head}(A)=a\), \(\mathsf{tail}(A)=\varepsilon\) (the empty string); if \(A\) is associated with a production \(A\to BC\), we denote \(\mathsf{head}(A)=B\), \(\mathsf{tail}(A)=C\); and finally if \(A\) is associated with a production \(A\to B^{k}\), then \(\mathsf{head}(A)=B\), \(\mathsf{tail}(A)=B^{k-1}\). The _expansion_\(\overline{S}\) of a sequence of terminals and non-terminals \(S\) is the string that is obtained by iteratively replacing non-terminals by the right-hand sides in the respective productions, until only terminals remain. We say that \(G\)_represents_ the expansion of its initial symbol. 
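For readers less familiar with this formalism, the following plain-Python sketch (the dictionary-based encoding is purely illustrative) shows an RLSLP with terminal, binary, and run-length productions, together with expansion and expansion-length computations; it also hints at why the expansion can be exponentially longer than the grammar itself.

```python
from functools import lru_cache

# An RLSLP as a dictionary: terminal rules map a non-terminal to a character,
# binary rules are ('bin', B, C), run-length rules are ('run', B, k).
# 'S' plays the role of the initial symbol.
rules = {
    'Ta': ('term', 'a'),
    'Tb': ('term', 'b'),
    'X1': ('bin', 'Ta', 'Tb'),   # expansion "ab"
    'X2': ('bin', 'X1', 'X1'),   # expansion "abab"
    'S':  ('run', 'X2', 3),      # expansion ("abab")^3
}

@lru_cache(maxsize=None)
def exp_len(sym):
    """Length of the expansion, computed without decompressing; chains of
    doubling rules make this length grow exponentially in the grammar size."""
    kind = rules[sym][0]
    if kind == 'term':
        return 1
    if kind == 'bin':
        _, b, c = rules[sym]
        return exp_len(b) + exp_len(c)
    _, b, k = rules[sym]
    return k * exp_len(b)

def expand(sym):
    """Full expansion -- only sensible for tiny examples."""
    kind = rules[sym][0]
    if kind == 'term':
        return rules[sym][1]
    if kind == 'bin':
        _, b, c = rules[sym]
        return expand(b) + expand(c)
    _, b, k = rules[sym]
    return expand(b) * k

print(exp_len('S'), expand('S'))  # 12 abababababab
```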
**Definition 2.3** (Parse tree).: _The parse tree of a SLP (RLSLP) is a rooted tree defined as follows:_ * _The root is labeled by the initial symbol;_ * _Each internal node is labeled by a non-terminal;_ * _If_ \(S\) _is the expansion of the initial symbol, then the_ \(i\)_th leaf of the parse tree is labeled by a terminal_ \(S[i]\)_;_ * _A node labeled with a non-terminal_ \(A\) _that is associated with a production_ \(A\to BC\)_, where_ \(B,C\) _are non-terminals, has_ \(2\) _children labeled by_ \(B\) _and_ \(C\)_, respectively. If_ \(A\) _is associated with a production_ \(A\to a\)_, where_ \(a\) _is a terminal, then the node has one child labeled by_ \(a\)_._ * _(RLSLP only) A node labeled with non-terminal_ \(A\) _that is associated with a production_ \(A\to B^{k}\)_, where_ \(B\) _is a non-terminal, has_ \(k\) _children, each labeled by_ \(B\)_._ The _size_ of a grammar is its number of productions. The _height_ of a grammar is the height of the parse tree. We say that a non-terminal \(A\) is an _ancestor_ of a non-terminal \(B\) if there are nodes \(u,v\) of the parse tree labeled with \(A,B\) respectively, and \(u\) is an ancestor of \(v\). For a node \(u\) of the parse tree, denote by \(\operatorname{off}(u)\) the number of leaves to the left of the subtree rooted at \(u\). **Definition 2.4** (Relevant occurrences).: _Let \(A\) be a non-terminal associated with a production \(A\to\textsf{head}(A)\textsf{tail}(A)\). We say that an occurrence \(q\) of a string \(P\) in \(\overline{A}\) is relevant with a split \(s\) if \(q=|\overline{\textsf{head}}(A)|-s\leq|\overline{\textsf{head}}(A)|\leq q+|P|-1\)._ For example, in Fig. 1 the occurrence \(q=3\) of \(P=cab\) is a relevant occurrence in \(\overline{C}\) with a split \(s=1\) but \(\overline{A}\) contains no relevant occurrences of \(P\). **Claim 2.5**.: _Let \(q\) be an occurrence of a string \(P\) in a string \(S\). Consider the parse tree of an RLSLP representing \(S\), and let \(w\) be the lowest node containing leaves \(S[q],S[q+1],\ldots,S[q+|P|-1]\) in its subtree, then either_ 1. _The label_ \(A\) _of_ \(w\) _is associated with a production_ \(A\to BC\)_, and_ \(q-\operatorname{off}(w)\) _is a relevant occurrence in_ \(\overline{A}\)_; or_ 2. _The label_ \(A\) _of_ \(w\) _is associated with a production_ \(A\to B^{r}\) _and_ \(q-\operatorname{off}(w)=q^{\prime}+r^{\prime}|\overline{B}|\) _for some_ \(0\leq r^{\prime}\leq r\)_, where_ \(q^{\prime}\) _is a relevant occurrence of_ \(P\) _in_ \(\overline{A}\)_._ Proof.: Assume first that \(A\) is associated with a production \(A\to BC\). We then have that the subtree rooted at the left child of \(w\) (that corresponds to \(\overline{B}\)) does not contain \(S[q+|P|-1]\) and the subtree rooted at the right child of \(w\) (that corresponds to \(\overline{C}\)) does not contain \(S[q]\). As a consequence, \(q-\operatorname{off}(w)\) is a relevant occurrence in \(\overline{A}\). Consider now the case where \(A\) is associated with a production \(A\to B^{r}\). The leaves labeled by \(S[q]\) and \(S[q+|P|-1]\) belong to the subtrees rooted at different children of \(A\). If \(S[q]\) belongs to the subtree rooted at the \((r^{\prime}+1)\)-th child of \(A\), then \(q^{\prime}=q-\operatorname{off}(w)-|\overline{B}|\cdot r^{\prime}\) is a relevant occurrence of \(P\) in \(\overline{A}\). **Definition 2.6** (Splits).: _Consider a non-terminal \(A\) of an RLSLP \(G\). 
If it is associated with a production \(A\to BC\), define_ \[\operatorname{Splits}(A,P)=\operatorname{Splits}_{\operatorname{rev}}(A,P)= \{s:q\text{ is a relevant occurrence of }P\text{ in }\overline{A}\text{ with a split }s\}.\] _If \(A\) is associated with a rule \(A\to B^{k}\), define_ \[\operatorname{Splits}(A,P) =\{s:q\text{ is a relevant occurrence of }P\text{ in }\overline{A}\text{ with a split }s\};\] \[\operatorname{Splits}_{\operatorname{rev}}(A,P) =\{|P|-s:q\text{ is a relevant occurrence of }\operatorname{rev}(P)\text{ in } \operatorname{rev}(\overline{A})\text{ with split }s\}.\] _Define \(\operatorname{Splits}(G,P)\)\((\operatorname{Splits}_{\operatorname{rev}}(G,P))\) to be the union of \(\operatorname{Splits}(A,P)\)\((\operatorname{Splits}_{\operatorname{rev}}(A,P))\) over all non-terminals \(A\) in \(G\), and \(\operatorname{Splits}^{\prime}(G,P)=\operatorname{Splits}(G,P)\cup \operatorname{Splits}_{\operatorname{rev}}(G,P)\)._ We need the following lemma, which can be derived from Gawrychowski et al. [18]: **Lemma 2.7**.: _Let \(G\) be an SLP of size \(g\) representing a string \(S\) of length \(N\), where \(g\leq N\). There exists a Las Vegas algorithm that builds a RLSLP \(G^{\prime}\) of size \(g^{\prime}=O(g\log N)\) of height \(h=O(\log N)\) representing \(S\) in time \(O(g\log N)\) with high probability. This RLSLP has the following additional property: For a pattern \(P\) of length \(m\), we can in \(O(m\log N)\) time provide a certificate that \(P\) does not occur in \(S\), or compute the set \(\operatorname{Splits}^{\prime}(G^{\prime},P)\). In the latter case, \(|\operatorname{Splits}^{\prime}(G^{\prime},P)|=O(\log N)\)._ ### Compact Tries We assume the reader to be familiar with the definition of a compact trie (see e.g. [21]). Informally, a trie is a tree that represents a lexicographically ordered set of strings. The edges of a trie are labeled with strings. We define the label \(\lambda(u)\) of a node \(u\) to be the concatenation of labels on the path from the root to \(u\) and an interval \(I(u)\) to be the interval of the set of strings starting with \(\lambda(u)\). From the implementation point of view, we assume that a node \(u\) is specified by the interval \(I(u)\). The _locus_ of a string \(P\) is the minimum depth node \(u\) such that \(P\) is a prefix of \(\lambda(u)\). The standard tree-based implementation of a trie for a generic set of strings \(\mathcal{S}=\{S_{1},\ldots,S_{k}\}\) takes \(\Theta\left(\sum_{i=1}^{k}|S_{i}|\right)\) space. Given a pattern \(P\) of length \(m\) and \(\tau>0\) suffixes \(Q_{1},\ldots,Q_{\tau}\) of \(P\), the trie allows retrieving the ranges of strings in (the lexicographically-sorted) \(\mathcal{S}\) prefixed by \(Q_{1},\ldots,Q_{\tau}\) in \(O(m^{2})\) time. However, in this work, we build the tries for very special sets of strings only, which allows for a much more efficient implementation based on the techniques of Christiansen et al. [8], the proof is given in Appendix A: **Lemma 2.8**.: _Given an RLSLP \(G\) of size \(g\) and height \(h\). Assume that every string in a set \(\mathcal{S}\) is either a prefix or a suffix of the expansion of a non-terminal of \(G\) or its reverse. 
The trie for \(\mathcal{S}\) can be implemented in space \(O(|\mathcal{S}|)\) to maintain the following queries in \(O(m+\tau\cdot(h+\log m))\) time: Given a pattern \(P\) of length \(m\) and suffixes \(Q_{i}\) of \(P\), \(1\leq i\leq\tau\), find, for each \(i\), the interval of strings in the (lexicographically sorted) \(\mathcal{S}\) prefixed by \(Q_{i}\)._ ## 3 Relevant, extremal, and predecessor occurrences in a non-terminal In this section, we present a data structure that allows various efficient queries, which we will need to prove Theorem 1.1. We also show how it can be leveraged for an index in the simpler case of consecutive occurrences (\(a=0,b=N\)). Recall that the text \(S\) is a string of length \(N\) represented by an SLP \(G\) of size \(g\). By applying Lemma 2.7, we transform \(G\) into an RLSLP \(G^{\prime}\) of size \(g^{\prime}=O(g\log N)\) and depth \(h=O(\log N)\) representing \(S\), which we fix from now on. We start by showing that \(G^{\prime}\) can be processed in small space to allow multiple efficient queries: **Theorem 3.1**.: _There is a \(O(g^{2}\log^{4}N)\)-space data structure for \(G^{\prime}\) that given a pattern \(P\) of length \(m\) can preprocess it in \(O(m\log N+\log^{2}N)\) time to support the following queries for a given non-terminal \(A\) of \(G^{\prime}\):_ 1. _Report the sorted set of relevant occurrences of_ \(P\) _in_ \(\overline{A}\) _in_ \(O(\log N)\) _time;_ 2. _Decide whether there is an occurrence of_ \(P\) _in_ \(\overline{A}\) _in_ \(O(\log N\log\log N)\) _time;_ 3. _Report the leftmost and the rightmost occurrences of_ \(P\) _in_ \(\overline{A}\)_,_ \(\overline{\textsf{head}(A)}\)_, and_ \(\overline{\textsf{tail}(A)}\) _in_ \(O(\log^{2}N\log\log N)\) _time;_ _._ 4. _Given a position_ \(p\)_, find the rightmost (leftmost) occurrence_ \(q\leq p\) _(_\(q\geq p\)_) of_ \(P\) _in_ \(\overline{A}\) _in_ \(O(\log^{3}N\log\log N)\) _time (predecessor/successor)._ Before we proceed to the proof, let us derive a data structure to report all consecutive occurrences (co-occurrences) of a given pair of patterns. **Corollary 3.2**.: _For an SLP of size \(g\) representing a string \(S\) of length \(N\), there is an \(O(g^{2}\log^{4}N)\)-space data structure that supports the following queries: given two patterns \(P_{1},P_{2}\) both of length \(O(m)\), report all \(\operatorname{occ}\) co-occurrences of \(P_{1}\) and \(P_{2}\) in \(S\). The query time is \(O(m\log N+(1+\operatorname{occ})\cdot\log^{3}N\log\log N)\)._ Proof.: We exploit the data structure of Theorem 3.1 for \(G^{\prime}\). To report all co-occurrences of \(P_{1},P_{2}\) in \(S\), we preprocess \(P_{1},P_{2}\) in \(O(m\log N+\log^{2}N)\) time and then proceed as follows. Suppose that we want to find the leftmost co-occurrence of \(P_{1}\) and \(P_{2}\) in the string \(S[i\dots]\), where at the beginning \(i=0\). We find the leftmost occurrence \(q^{\prime}_{1}\) of \(P_{1}\) with \(q^{\prime}_{1}\geq i\) (if it exists) by a successor query on the initial symbol of \(G^{\prime}\) (the expansion of which is the entire string \(S\)). Then we find the leftmost occurrence \(q_{2}\) of \(P_{2}\) with \(q_{2}\geq q^{\prime}_{1}\) (if it exists) by a successor query and the rightmost occurrence \(q_{1}\) of \(P_{1}\) with \(q_{1}\leq q_{2}\) by a predecessor query. If either \(q^{\prime}_{1}\) or \(q_{2}\) do not exist, then there are no more co-occurrences in \(S[i\dots]\). 
Otherwise, clearly, \((q_{1},q_{2})\) is a co-occurrence, and there can be no other co-occurrences starting in \(S[i\dots q_{2}]\). In this case, we return \((q_{1},q_{2})\) and set \(i=q_{2}+1\). The running time of the retrieval phase is \(O(\log^{3}N\log\log N\cdot(\operatorname{occ}+1))\), since we use at most three successor/predecessor queries to either output a new co-occurrence or decide that there are no more co-occurrences. ### Proof of Theorem 3.1 The data structure consists of two compact tries \(T_{pre}\) and \(T_{suf}\) defined as follows. For each non-terminal \(A\), we store \(\operatorname{rev}(\overline{\mathsf{head}(A)})\) in \(T_{pre}\) and \(\overline{\mathsf{tail}(A)}\) in \(T_{suf}\). We augment \(T_{pre}\) and \(T_{suf}\) by computing their heavy path decomposition: **Definition 3.3**.: _The heavy path of a trie \(T\) is the path that starts at the root of \(T\) and at each node \(v\) on the path branches to the child with the largest number of leaves in its subtree (heavy child), with ties broken arbitrarily. The heavy path decomposition is a set of disjoint paths defined recursively, namely it is defined to be a union of the singleton set containing the heavy path of \(T\) and the heavy path decompositions of the subtrees of \(T\) that hang off the heavy path._ For each non-terminal \(A\) of \(G^{\prime}\), a heavy path \(h_{pre}\) in \(T_{pre}\), and a heavy path \(h_{suf}\) in \(T_{suf}\), we construct a multiset of points \(\mathcal{P}(A,h_{pre},h_{suf})\). For every non-terminal \(A^{\prime}\) and nodes \(u\in h_{pre}\), \(v\in h_{suf}\) the multiset contains a point \((|\lambda(u)|,|\lambda(v)|)\) iff \(A^{\prime},u,v\) satisfy the following properties: 1. \(A\) is an ancestor of \(A^{\prime}\); 2. \(I(u)\) contains \(\operatorname{rev}(\overline{\mathsf{head}(A^{\prime})})\) and \(I(v)\) contains \(\overline{\mathsf{tail}(A^{\prime})}\). 3. \(u,v\) are the lowest nodes in \(h_{pre},h_{suf}\), respectively, satisfying Property 2. (See Fig. 1.) The set \(P(A,h_{pre},h_{suf})\) is stored in a two-sided 2D orthogonal range emptiness data structure [29, 6] which occupies \(O(|\mathcal{P}(A,h_{pre},h_{suf})|)\) space. Given a 2D range of the form \([\alpha,\infty]\times[\beta,\infty]\), it allows to decide whether the range contains a point in \(\mathcal{P}(A,h_{pre},h_{suf})\) in \(O(\log\log N)\) time. **Claim 3.4**.: _The data structure occupies \(O(g^{2}\log^{4}N)\) space._ Proof.: Each non-terminal \(A^{\prime}\) has at most \(g^{\prime}\) distinct ancestors and each root-to-leaf path in \(T_{pre}\) or \(T_{suf}\) crosses \(O(\log g^{\prime})\) heavy paths (as each time we switch heavy paths, the number of leaves in the subtree of the current node decreases by at least a factor of two). As a corollary, each non-terminal creates \(O(g^{\prime}\log^{2}g^{\prime})=O(g\log^{3}N)\) points across all orthogonal range emptiness data structures. When we receive a pattern \(P\), we compute \(\operatorname{Splits}^{\prime}(G^{\prime},P)\) via Lemma 2.7 in \(O(m\log N)\) time or provide a certificate that \(P\) does not occur in \(S\), in which case there are no occurrences of \(P\) in the expansions of the non-terminals of \(G^{\prime}\). Recall that \(|\operatorname{Splits}^{\prime}(G^{\prime},P)|\in O(\log N)\). We then sort \(\operatorname{Splits}^{\prime}(G^{\prime},P)\) in \(O(\log^{2}N)\) time (a technicality which will allow us reporting relevant occurrences sorted without time overhead). 
Finally, we compute, for each \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P)\), the interval of strings in \(T_{pre}\) prefixed by \(\operatorname{rev}(P[\ldots s])\) (which is the interval \(I(u)\) for the locus \(u\) of \(\operatorname{rev}(P[\ldots s])\) in \(T_{pre}\)) and the interval of strings in \(T_{suf}\) prefixed by \(P(s\ldots]\) (which is the interval \(I(u)\) for the locus \(u\) of \(P(s\ldots]\) in \(T_{suf}\)). By Lemma 2.8, with \(\tau=|\operatorname{Splits}^{\prime}(G^{\prime},P)|=O(\log N)\) and \(h=O(\log N)\), this step takes \(O(m+\log^{2}N)\) time. Reporting relevant occurrences is easy: by definition, each relevant occurrence \(q\) of \(P\) in \(\overline{A}\) is equal to \(|\overline{\operatorname{head}(A)}|-s\) for some \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P)\) such that \(\operatorname{rev}(P[\ldots s])\) is a prefix of \(\operatorname{rev}(\overline{\operatorname{head}(A)})\) and \(P(s\ldots]\) is a prefix of \(\overline{\operatorname{tail}(A)}\). As we already know the intervals of the strings in \(T_{suf}\) and \(T_{pre}\) starting with \(\operatorname{rev}(P[\ldots s])\) and \(P(s\ldots]\), respectively, both conditions can be checked in constant time per split, or in \(O(|\operatorname{Splits}^{\prime}(G^{\prime},P)|)=O(\log N)\) time overall. Note that since \(\operatorname{Splits}^{\prime}(G^{\prime},P)\) are sorted, the relevant occurrences are reported sorted as well. We now explain how to answer emptiness queries on a non-terminal: **Claim 3.5**.: _Let \(A\) be a non-terminal labeling a node in the parse tree of \(G^{\prime}\). We can decide whether \(\overline{A}\) contains an occurrence of \(P\) in \(O(\log N\log\log N)\) time._ Proof.: Below we show that \(P\) occurs in \(\overline{A}\) iff there exists a split \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P)\) such that for \(u\) being the locus of \(\operatorname{rev}(P[\ldots s])\) in \(T_{pre}\) and \(v\) the locus of \(P(s\ldots]\) in \(T_{suf}\), for \(h_{pre}\) the heavy path containing \(u\) in \(T_{pre}\) and \(h_{suf}\) the heavy path containing \(v\) in \(T_{suf}\), the rectangle \([|\lambda(u)|,+\infty]\times[|\lambda(v)|,+\infty]\) contains a point from \(\mathcal{P}(A,h_{pre},h_{suf})\). Before we proceed to the proof, observe that by the bound on \(|\operatorname{Splits}^{\prime}(G^{\prime},P)|\) this allows us to decide whether \(P\) occurs in \(\overline{A}\) in \(O(\log N)\) range emptiness queries, which results in \(O(\log N\log\log N)\) query time. Assume that \([|\lambda(u)|,+\infty]\times[|\lambda(v)|,+\infty]\) contains a point \((x,y)\in\mathcal{P}(A,h_{pre},h_{suf})\) corresponding to a non-terminal \(A^{\prime}\). By construction, \(A\) is an ancestor of \(A^{\prime}\), the subtree of \(u\) contains a leaf corresponding to \(\operatorname{rev}(\overline{\operatorname{head}(A^{\prime})})\) and the subtree of \(v\) contains a leaf corresponding to \(\overline{\operatorname{tail}(A^{\prime})}\). Consequently, \(\overline{A^{\prime}}\) contains an occurrence of \(P\), which implies that \(\overline{A}\) contains an occurrence of \(P\).

Figure 1: A string \(S=\operatorname{aababacacabc}\) is generated by an SLP \(G^{\prime}\). Nodes \(u\) and \(v\) are the loci of \(\mathtt{c}\) and \(\mathtt{ab}\) in \(T_{pre}\) and \(T_{suf}\) respectively. The heavy paths \(h_{pre}\) in \(T_{pre}\) and \(h_{suf}\) in \(T_{suf}\) are shown in blue. We have \((2,2)\in\mathcal{P}(A,h_{pre},h_{suf})\) corresponding to \(C,u,v\).
To show the reverse direction, let \(\ell=\operatorname{off}(u)+1\) and \(r=\operatorname{off}(u)+|\overline{A}|\), i.e. \(S[\ell\ldots r]=\overline{A}\). The string \(\overline{A}\) contains an occurrence \(\overline{A}[q\ldots q+|P|)\) of \(P\) iff \(S[\ell+q\ldots\ell+q+|P|)\) is an occurrence of \(P\) in \(S\). From Claim 2.5 it follows that if \(w\) is the lowest node in the parse tree of \(G^{\prime}\) that contains leaves \(S[\ell+q],\ldots,S[\ell+q+|P|-1]\) in its subtree and \(A^{\prime}\) is its label, then there exists a split \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P)\) such that \(\operatorname{rev}(P[\ldots s])\) is a prefix of \(\operatorname{rev}(\overline{\operatorname{head}(A^{\prime})})\) and \(P(s\ldots]\) of \(\overline{\operatorname{tail}(A^{\prime})}\). By definition of \(u\) and \(v\), the leaf of \(T_{pre}\) labeled with \(\operatorname{rev}(\overline{\operatorname{head}(A^{\prime})})\) belongs to \(I(u)\) and the leaf of \(T_{suf}\) labeled with \(\overline{\operatorname{tail}(A^{\prime})}\) belongs to \(I(v)\). Let \(h_{pre}\) (\(h_{suf}\)) be the heavy path in \(T_{pre}(T_{suf})\) containing \(u\) (\(v\)) and \((x,y)\) be the point in \(\mathcal{P}(A,h_{pre},h_{suf})\) created for \(A^{\prime}\). As \(|\lambda(u)|\leq x\) and \(|\lambda(v)|\leq y\), the rectangle \([|\lambda(u)|,+\infty]\times[|\lambda(v)|,+\infty]\) is not empty. It remains to explain how to retrieve the leftmost/rightmost occurrences in a non-terminal, as well as to answer predecessor/successor queries. The main idea for all four types of queries is to start at any node of the parse tree of \(G^{\prime}\) labeled by \(A\) and recurse down via emptiness queries and case inspection. Since the length of the expansion decreases each time we recurse from a non-terminal to its child and the height of \(G^{\prime}\) is \(h=O(\log N)\), this allows to achieve the desired query time. We provide full details in Appendix B. ## 4 Compressed Indexing for Close Co-occurrences In this section, we show our main result, Theorem 1.1. Recall that \(S\) is a string of length \(N\) represented by an SLP \(G\) of size \(g\). We start by applying Lemma 2.7 to transform \(G\) into an RLSLP \(G^{\prime}\) of size \(g^{\prime}=O(g\log N)\) and height \(h=O(\log N)\) representing \(S\). The query algorithm uses the following strategy: first, it identifies all non-terminals of \(G^{\prime}\) such that their expansion contains a \(b\)-close relevant co-occurrence, where a relevant co-occurrence is defined similarly to a relevant occurrence: **Definition 4.1** (Relevant co-occurrence).: _Let \(A\) be a non-terminal of \(G^{\prime}\). We say that a co-occurrence \((q_{1},q_{2})\) of \(P_{1},P_{2}\) in \(\overline{A}\) is relevant if \(q_{1}\leq|\overline{\operatorname{head}(A)}|\leq q_{2}+|P_{2}|-1\)._ Second, it retrieves all \(b\)-close relevant co-occurrences in each of those non-terminals, and finally, reports all \(b\)-close co-occurrences by traversing the (pruned) parse tree of \(G^{\prime}\), which is possible due to the following claim: **Claim 4.2**.: _Assume that \(P_{2}\) is not a substring of \(P_{1}\), and let \((q_{1},q_{2})\) be a co-occurrence of \(P_{1},P_{2}\) in a string \(S\). In the parse tree of \(G^{\prime}\), there exists a unique node \(u\) such that either_ 1. _Its label_ \(A\) _is associated with a production_ \(A\to BC\)_, and_ \((q_{1}-\operatorname{off}(u),q_{2}-\operatorname{off}(u))\) _is a relevant co-occurrence of_ \(P_{1},P_{2}\) _in_ \(\overline{A}\)_;_ 2. 
_Its label_ \(A\) _is associated with a production_ \(A\to B^{k}\)_,_ \(q_{1}-\operatorname{off}(u)=q^{\prime}_{1}+k^{\prime}|\overline{B}|\)_,_ \(q_{2}-\operatorname{off}(u)=q^{\prime}_{2}+k^{\prime}|\overline{B}|\) _for some_ \(0\leq k^{\prime}\leq k\)_, where_ \((q^{\prime}_{1},q^{\prime}_{2})\) _is a relevant co-occurrence of_ \(P_{1},P_{2}\) _in_ \(\overline{A}\)_._ Proof.: Let \(A\) be the label of the lowest node \(u\) in the parse tree that contains leaves \(S[q_{1}],S[q_{1}+1],\ldots,S[q_{2}+|P_{2}|-1]\) in its subtree. Because \(P_{2}\) is not a substring of \(P_{1}\), \(A\) cannot be associated with a production \(A\to a\). By definition, \(S[\operatorname{off}(u)+1]\) is the leftmost leaf in the subtree of this node. Assume first that \(A\) is associated with a production \(A\to BC\). We then have that the subtree rooted at the left child of \(u\) (labeled by \(B\)) does not contain \(S[q_{2}+|P_{2}|-1]\) and the subtree rooted at the right child of \(u\) (labeled by \(C\)) does not contain \(S[q_{1}]\). As a consequence, \((q_{1}-\operatorname{off}(u),q_{2}-\operatorname{off}(u))\) is a relevant co-occurrence of \(P_{1},P_{2}\) in \(\overline{A}\). Consider now the case where \(A\) is associated with a production \(A\to B^{k}\). The leaves labeled by \(S[q_{1}]\) and \(S[q_{2}+|P_{2}|-1]\) belong to the subtrees rooted at different children of \(A\). If \(S[q_{1}]\) belongs to the subtree rooted at the \(k^{\prime}\)-th child of \(A\), then \((q_{1}-\operatorname{off}(u)-|\overline{B}|\cdot(k^{\prime}-1),q_{2}- \operatorname{off}(u)-|\overline{B}|\cdot(k^{\prime}-1))\) is a relevant co-occurrence of \(P_{1},P_{2}\) in \(\overline{A}\). ### Combinatorial observations Informally, we define a set of \(O(g^{2})\) strings and show that for any patterns \(P_{1},P_{2}\) there are two strings \(S_{1},S_{2}\) in the set with the following property: whenever the expansion of a non-terminal \(A\) in \(G^{\prime}\) contains a pair of occurrences \(P_{1},P_{2}\) forming a relevant co-occurrence, there are occurrences of \(S_{1},S_{2}\) in the proximity. This will allow us to preprocess the non-terminals of \(G^{\prime}\) for occurrences of the strings in the set and use them to detect \(b\)-close relevant co-occurrences of \(P_{1},P_{2}\). Consider two tries, \(T_{pre}\) and \(T_{suf}\): For each production of \(G^{\prime}\) of the form \(A\to BC\), we store \(\overline{C}\) in \(T_{suf}\) and \(\operatorname{rev}(\overline{B})\) in \(T_{pre}\). For each production of the form \(A\to B^{k}\), we store \(\overline{B}\), \(\overline{B^{2}}\), \(\overline{B^{k-2}}\), and \(\overline{B^{k-1}}\) in \(T_{suf}\) and the reverses of those strings in \(T_{pre}\). For \(j\in\{1,2\}\) and \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{j})\) define \(S_{j}(s)=\operatorname{rev}(U)V\), where \(U\) is the label of the locus of \(\operatorname{rev}(P_{j}[\ldots s])\) in \(T_{pre}\) and \(V\) is the label of the locus of \(P_{j}(s\ldots]\) in \(T_{suf}\). Let \(l_{j}(s)=|\operatorname{rev}(U)|\) and \(\Delta_{j}(s)=l_{j}(s)-s\). Consider a non-terminal \(A\) such that its expansion \(\overline{A}\) contains a relevant co-occurrence \((q_{1},q_{2})\) of \(P_{1},P_{2}\). 
**Claim 4.3**.: _There exists \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{2})\) such that \(p_{2}=q_{2}-\Delta_{2}(s)\) is an occurrence of \(S_{2}(s)\) in \(\overline{A}\) and \([p_{2},p_{2}+|S_{2}(s)|)\supseteq[q_{2},q_{2}+|P_{2}|)\)._ Proof.: Below we show that there exists a descendant \(A^{\prime}\) of \(A\) and a split \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{2})\) such that either \(\operatorname{rev}(P_{2}[\ldots s])\) is a prefix of \(\operatorname{rev}(\operatorname{head}(A^{\prime}))\) and \(P_{2}(s\ldots]\) is a prefix of \(\overline{\operatorname{tail}(A^{\prime})}\), or \(A^{\prime}\) is associated with a rule \(A^{\prime}\to(B^{\prime})^{k}\), \(\operatorname{rev}(P_{2}[\ldots s])\) is a prefix of \(\operatorname{rev}(\overline{(B^{\prime})^{2}})\) and \(P_{2}(s\ldots]\) is a prefix of \(\overline{(B^{\prime})^{k-2}}\). The claim follows by the definition of \(T_{pre}\), \(T_{suf}\), and \(S_{2}(s)\). If \(q_{2}\) is relevant in \(\overline{A}\), there exists a split \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{2})\) such that \(\operatorname{rev}(P_{2}[\ldots s])\) is a prefix of \(\operatorname{rev}(\operatorname{head}(A))\) and \(P_{2}(s\ldots]\) is a prefix of \(\overline{\operatorname{tail}(A)}\) by definition. If \(q_{2}\) is not relevant, then \(q_{2}\geq|\overline{\operatorname{head}(A)}|\) by the definition of a co-occurrence. By Claim 2.5, there is a descendant \(A^{\prime}\) of \(A\) corresponding to a substring \(\overline{A}[\ell\ldots r]\) for which either \((q_{2}-\ell)\) is relevant (and then we can repeat the argument above), or \(A^{\prime}\) is associated with a rule \(A^{\prime}\to(B^{\prime})^{k}\) and \((q_{2}-\ell)-k^{\prime}\cdot|\overline{B^{\prime}}|\) is relevant, for some \(0\leq k^{\prime}\leq k\). Consider the latter case. If \(A^{\prime}=A\), then \(k^{\prime}=1\), as otherwise \(q_{1}<q_{2}^{\prime}=q_{2}-|\overline{B^{\prime}}|<q_{2}\) is an occurrence of \(P_{2}\) in \(\overline{A}\) contradicting the definition of a co-occurrence (recall that \((q_{1},q_{2})\) is a relevant co-occurrence and hence by definition \(q_{1}<|\overline{\operatorname{head}(A)}|\)), and therefore \(s=|\overline{(B^{\prime})^{2}}|-q_{2}+\ell\in\operatorname{Splits}^{\prime}(G ^{\prime},P_{2})\), \(\operatorname{rev}(P_{2}[\ldots s])\) is a prefix of \(\operatorname{rev}((B^{\prime})^{2})\) and \(P_{2}(s\ldots]\) is a prefix of \(\overline{(B^{\prime})^{k-2}}\). If \(A^{\prime}\neq A\), then we can analogously conclude that \(k^{\prime}=0\), which implies \(s=|\overline{B^{\prime}}|-q_{2}+\ell\in\operatorname{Splits}^{\prime}(G^{ \prime},P_{2})\), \(\operatorname{rev}(P_{2}[\ldots s])\) is a prefix of \(\operatorname{rev}(B^{\prime})\) and \(P_{2}(s\ldots]\) is a prefix of \(\overline{(B^{\prime})^{k-1}}\). As the definition of a co-occurrence is not symmetric, \(q_{1}\) does not enjoy the same property. However, a similar claim can be shown: **Lemma 4.4**.: _There exists \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{1})\) and an occurrence \(p_{1}\) of \(S_{1}(s)\) in \(\overline{A}\) such that \([p_{1},p_{1}+|S_{1}(s)|)\supseteq[q_{1},q_{1}+|P_{1}|)\) and at least one of the following holds:_ 1. \(q_{1}-\Delta_{1}(s)\) _is an occurrence of_ \(S_{1}(s)\)_;_ 2. 
\(q_{2}\) _is a relevant occurrence of_ \(P_{2}\) _in_ \(\overline{A}\)_, the period of_ \(S_{1}(s)\) _equals the period_ \(\pi_{1}\) _of_ \(P_{1}\)_, and there exists an integer_ \(k\) _such that_ \(p_{1}=q_{1}-\Delta_{1}(s)-\pi_{1}\cdot k\) _and_ \(q_{2}+\pi_{1}-1\leq p_{1}+|S_{1}(s)|-1\leq q_{2}+|P_{2}|-1\) Proof.: If \(q_{1}\) is a relevant occurrence of \(P_{1}\) in \(A\) with a split \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{1})\), then \(\operatorname{rev}(P_{1}[\ldots s])\) is a prefix of \(\operatorname{rev}(\overline{\operatorname{head}(A)})\) and \(P_{1}(s\ldots]\) is a prefix of \(\overline{\operatorname{tail}(A)}\) and therefore the first case holds by the definition of \(T_{pre}\) and \(T_{suf}\). Otherwise, by Claim 2.5, there is a descendant \(A^{\prime}\) of \(\operatorname{head}(A)\) corresponding to a substring \(\overline{A}[\ell\ldots r]\) for which either \((q_{1}-\ell)\) is relevant (and then we can repeat the argument above), or \(A^{\prime}\) is associated with a rule \(A^{\prime}\to(B^{\prime})^{k}\) and \((q_{1}-\ell)-k^{\prime}\cdot|\overline{B^{\prime}}|\), for some \(0\leq k^{\prime}\leq k\), is a relevant occurrence of \(P_{1}\) in \(\overline{A^{\prime}}\) with a split \(s\in\operatorname{Splits}^{\prime}(G^{\prime},P_{1})\). Consider the latter case. We must have (1) \(q_{1}+|P_{1}|-1+|\overline{B^{\prime}}|\geq r\) or (2) \(q_{1}+|\overline{B^{\prime}}|-1\geq q_{2}\), because if both inequalities do not hold, then \(q_{1}<q_{1}+|\overline{B^{\prime}}|\leq q_{2}\) is an occurrence of \(P_{1}\) in \(\overline{A}\), which contradicts the definition of a co-occurrence. Additionally, if (1) holds, then by definition there exists a split \(s^{\prime}\in\operatorname{Splits}^{\prime}(G^{\prime},P_{1})\) (which might be different from the split \(s\) above) such that \(\operatorname{rev}(P_{1}[\ldots s^{\prime}])\) is a prefix of \(\operatorname{rev}((B^{\prime})^{r-1})\) and \(P_{1}(s^{\prime}\ldots]\) is a prefix of \(\overline{B^{\prime}}\) and we fall into the first case of the lemma. From now on, assume that (2) holds and (1) does not. Since \(q_{1}+|\overline{B^{\prime}}|\leq r\leq|\overline{\operatorname{head}(A)}|\) and \((q_{1},q_{2})\) is a relevant co-occurrence, \(q_{2}\) must be a relevant occurrence of \(P_{2}\) in \(\overline{A}\). If \(|P_{1}|-s\leq|\overline{(B^{\prime})^{2}}|\), then \(\operatorname{rev}(P_{1}[\ldots s])\) is a prefix of \(\operatorname{rev}(\overline{B^{\prime}})\) and \(P_{1}(s\ldots]\) is a prefix of \(\overline{(B^{\prime})^{2}}\) and therefore \(q_{1}-\Delta_{1}(s)\) is an occurrence of \(S_{1}(s)\). Otherwise, by Fine and Wilf's periodicity lemma [14], the periods of \(\overline{A^{\prime}}\), \(P_{1}\), and \(S_{1}(s)\) are equal, since \(P_{1}\) and hence \(S_{1}(s)\) span at least two periods of \(\overline{A^{\prime}}\). By periodicity, \(S_{1}(s)\) occurs at positions \(q_{1}-\Delta_{1}(s)-|\overline{B^{\prime}}|\cdot k\) of \(\overline{A}\). Let \(p_{1}\) be the leftmost of these positions which satisfies \(p_{1}+|S_{1}(s)|-1\geq q_{1}+|P_{1}|-1\). This position is well-defined as (1) does not hold, and furthermore \([q_{1},q_{1}+|P_{1}|)\subseteq[p_{1},p_{1}+|S_{1}(s)|)\) as \(s\leq l_{1}(s)\) and \(|S_{1}(s)|-l_{1}(s)\geq|P_{1}|-s\). 
We have \(p_{1}=q_{1}-\Delta_{1}(s)-\pi_{1}\cdot k^{\prime\prime}\) for some integer \(k^{\prime\prime}\) (as \(|\overline{B^{\prime}}|\) is a multiple of \(\pi_{1}\)), and \(q_{2}+\pi_{1}-1\leq q_{1}+2|\overline{B^{\prime}}|-1\leq q_{1}+|P_{1}|-1\leq p _{1}+|S_{1}(s)|-1\leq r<q_{2}+|P_{2}|-1\), where the last inequality holds as \(q_{2}\) is a relevant occurrence in \(\overline{A}\). The claim of the lemma follows. We summarize Claim 4.3 and Lemma 4.4: **Corollary 4.5**.: _Let \((q_{1},q_{2})\) be a co-occurrence of \(P_{1},P_{2}\) in the expansion of a non-terminal \(A\). There exist splits \(s_{1}\in\operatorname{Splits}^{\prime}(G^{\prime},P_{1}),s_{2}\in\operatorname{Splits }^{\prime}(G^{\prime},P_{2})\) and occurrences \(p_{1}\) of \(S_{1}(s_{1})\) and \(p_{2}\) of \(S_{2}(s)\), where \([p_{1},p_{1}+|S_{1}(s_{1})|)\supseteq[q_{1},q_{1}+|P_{1}|)\) and \([p_{2},p_{2}+|S_{2}(s_{2})|)\supseteq[q_{2},q_{2}+|P_{2}|)\), such that at least one of the following holds:_ 1. _The occurrence_ \(p_{1}\) _is either relevant or_ \(p_{1}+|S_{1}(s_{1})|-1\leq|\overline{\operatorname{head}(A)}|\)_. The occurrence_ \(p_{2}\) _is either relevant or_ \(p_{2}>|\overline{\operatorname{head}(A)}|\)_. Additionally,_ \(p_{1}=q_{1}-\Delta_{1}(s_{1})\) _and_ \(p_{2}=q_{2}-\Delta_{2}(s_{2})\)_._ 2. _The occurrence_ \(p_{2}\) _is relevant and_ \(p_{1}\leq|\overline{\operatorname{head}(A)}|\)_. Additionally,_ \(p_{2}=q_{2}-\Delta_{2}(s_{2})\)_, the period of_ \(S_{1}(s)\) _equals the period_ \(\pi_{1}\) _of_ \(P_{1}\)_, and there exists an integer_ \(k\) _such that_ \(p_{1}=q_{1}-\Delta_{1}(s_{1})-\pi_{1}\cdot k\) _and_ \(p_{2}+\pi_{1}-1\leq p_{1}+|S_{1}(s_{1})|-1\leq p_{2}+|S_{2}(s_{2})|-1\)_._ The reverse observation holds as well: **Observation 4.6**.: _If \(p_{j}\) is an occurrence of \(S_{j}(s)\) in \(\overline{A}\), \(j=1,2\), then \(q_{j}=p_{j}+\Delta_{j}(s)\) is an occurrence of \(P_{j}\). Furthermore, if \(S_{1}(s)\) is periodic with period \(\pi_{1}\), then \(q_{1}+\pi_{1}\cdot k\), \(0\leq k\leq|(|S_{1}(s)|-q_{1}-|P_{1}|)/\pi_{1}|\), are occurrences of \(P_{1}\) in \(\overline{A}\)._ Finally, the following trivial observation will be important for upper bounding the time complexity of our query algorithm: **Observation 4.7**.: _If a string contains a pair of occurrences \((q_{1},q_{2})\) of \(P_{1}\) and \(P_{2}\) such that \(0\leq q_{2}-q_{1}\leq b\), then it contains a \(b\)-close co-occurrence of \(P_{1}\) and \(P_{2}\)._ ### Index The first part of the index is the data structure of Theorem 3.1 and the index of Christiansen et al. [8]: **Fact 4.8** ([8, Introduction and Theorem 6.12]).: _There is a \(O(g\log^{2}N)\)-space data structure that can find the \(\operatorname{occ}\) occurrences of any pattern \(P[1\ldots m]\) in \(S\) in time \(O(m+\operatorname{occ})\)._ The second part of the index are the tries \(T_{pre}\) and \(T_{suf}\), augmented as explained below. Consider a quadruple \((u_{1},u_{2},v_{1},v_{2})\), where \(u_{1}\) and \(u_{2}\) are nodes of \(T_{pre}\) and \(v_{1}\) and \(v_{2}\) are nodes of \(T_{suf}\). Let \(U_{1},U_{2},V_{1},V_{2}\) be the labels of \(u_{1},u_{2},v_{1},v_{2}\), respectively. Define \(S_{1}=\operatorname{rev}(U_{1})V_{1}\) and \(S_{2}=\operatorname{rev}(U_{2})V_{2}\), and let \(l_{1}=|\operatorname{rev}(U_{1})|\) and \(l_{2}=|\operatorname{rev}(U_{2})|\). 
First, we store a binary search tree \(\mathcal{T}_{1}(u_{1},u_{2},v_{1},v_{2})\) that for each non-terminal \(A\) contains at most six integers \(d=p_{2}-p_{1}\), where \(p_{1},p_{2}\) are occurrences of \(S_{1},S_{2}\) in \(\overline{A}\), satisfying at least one of the below: 1. \(p_{1}\) is the rightmost occurrence of \(S_{1}\) such that \(p_{1}+|S_{1}|-1<|\overline{\mathsf{head}(A)}|\) and \(p_{2}\) is the leftmost occurrence of \(S_{2}\) such that \(p_{2}\geq|\overline{\mathsf{head}(A)}|\); 2. \(p_{1}\) is a relevant occurrence of \(S_{1}\) with a split \(l_{1}\) and \(p_{2}\) is the leftmost occurrence of \(S_{2}\) such that \(p_{2}\geq|\overline{\mathsf{head}(A)}|\); 3. \(p_{1}\) is a relevant occurrence of \(S_{1}\) with a split \(l_{1}\), \(p_{2}\) is a relevant occurrence of \(S_{2}\) with a split \(l_{2}\); 4. \(p_{2}\) is a relevant occurrence of \(S_{2}\) with a split \(l_{2}\) and \(p_{1}\) is the rightmost occurrence of \(S_{1}\) such that \(p_{1}+|S_{1}|-1<p_{2}\); 5. \(p_{2}\) is a relevant occurrence of \(S_{2}\) with a split \(l_{2}\) and \(p_{1}\) is the leftmost or second leftmost occurrence of \(S_{1}\) in \(\overline{\mathsf{head}(A)}\) such that \(p_{1}<p_{2}\leq p_{1}+|S_{1}|-1<p_{2}+|S_{2}|-1\).

Figure 2: Subcases of Lemma 4.4.

Second, we store a list of non-terminals \(\mathcal{L}(u_{2},v_{2})\) such that their expansion contains a relevant occurrence of \(S_{2}\) with a split \(l_{2}\). Additionally, for every \(k\in[0,\log N]\), we store, if defined: 1. The rightmost occurrence \(p_{1}\) of \(S_{1}\) in \(S_{2}\) such that \(p_{1}+(|S_{1}|-1)\leq l_{2}-2^{k}\); 2. The leftmost occurrence \(p_{1}^{\prime}\) of \(S_{1}\) in \(S_{2}\) such that \(p_{1}^{\prime}\leq l_{2}-2^{k}\leq p_{1}^{\prime}+|S_{1}|-1\); 3. The rightmost occurrence \(p_{1}^{\prime\prime}\) of \(S_{1}\) in \(S_{2}\) such that \(p_{1}^{\prime\prime}\leq l_{2}-2^{k}\leq p_{1}^{\prime\prime}+|S_{1}|-1\). Finally, we compute and memorize the period \(\pi_{1}\) of \(S_{1}\). If the period is well-defined (i.e., \(S_{1}\) is periodic), we build a binary search tree \(\mathcal{T}_{2}(u_{1},u_{2},v_{1},v_{2})\). Consider a non-terminal \(A\) containing a relevant occurrence \(p_{2}\) of \(S_{2}\) with a split \(l_{2}\). Let \(p_{1}\) be the leftmost occurrence of \(S_{1}\) such that \(p_{1}\leq p_{2}\leq p_{1}+|S_{1}|-1\leq p_{2}+|S_{2}|-1\) and \(p_{1}^{\prime}\) the rightmost. If \(p_{1}\) and \(p_{1}^{\prime}\) exist (\(p_{1}\) might be equal to \(p_{1}^{\prime}\)) and \(p_{1}^{\prime}+|S_{1}|-1\geq p_{2}+\pi_{1}-1\), we add an integer \((p_{1}^{\prime}-p_{1})/\pi_{1}\) to the tree and associate it with \(A\). We also memorize a number \(\operatorname{ov}(S_{1},S_{2})=p_{2}-p_{1}^{\prime}\), which does not depend on \(A\) by Corollary 2.1 and therefore is well-defined (it corresponds to the longest prefix of \(S_{2}\) periodic with period \(\pi_{1}\)). **Claim 4.9**.: _The data structure occupies \(O(g^{5}\log^{5}N)\) space._ Proof.: The data structure of Theorem 3.1 occupies \(O(g^{2}\log^{4}N)\) space. The index of Christiansen et al. occupies \(O(g\log^{2}N)\) space. The tries, by Lemma 2.8, use \(O(g^{\prime})=O(g\log N)\) space. There are \(O((g^{\prime})^{4})\) quadruples \((u_{1},u_{2},v_{1},v_{2})\) and for each of them the trees take \(O(g^{\prime})\) space. The arrays of occurrences of \(S_{1}\) in \(S_{2}\) use \(O(\log N)\) space. Therefore, overall the data structure uses \(O(g^{5}\log^{5}N)\) space.
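Before turning to the query algorithm, the following sketch (ours, purely illustrative) summarizes what is stored per quadruple \((u_{1},u_{2},v_{1},v_{2})\); plain Python containers stand in for the balanced search trees that would support logarithmic-time range queries, and all names are ours.

```python
from collections import defaultdict

class CoOccurrenceIndex:
    """Schematic layout of the per-quadruple structures described above.

    Purely illustrative: a "quad" is a quadruple (u1, u2, v1, v2) of trie
    nodes of T_pre and T_suf; real balanced search trees and range
    structures are replaced by plain containers for readability.
    """

    def __init__(self):
        # T1(quad): for each non-terminal A, up to six offsets d = p2 - p1.
        self.t1 = defaultdict(list)            # quad -> sorted list of (d, A)
        # L(u2, v2): non-terminals whose expansion contains a relevant
        # occurrence of S2 with split l2.
        self.relevant_s2 = defaultdict(list)   # (u2, v2) -> [A, ...]
        # For every k in [0, log N]: the occurrences p1, p1', p1'' of S1
        # inside S2 relative to the position l2 - 2**k (None if undefined).
        self.s1_in_s2 = defaultdict(dict)      # quad -> {k: (p1, p1p, p1pp)}
        # Period pi1 of S1 (None if S1 is not periodic), the overlap
        # ov(S1, S2), and T2(quad) storing (p1' - p1) / pi1 per non-terminal.
        self.period = {}                       # quad -> pi1 or None
        self.ov = {}                           # quad -> ov(S1, S2)
        self.t2 = defaultdict(list)            # quad -> sorted list of (q, A)

    def t1_range(self, quad, lo, hi):
        """Non-terminals stored in T1(quad) with an offset d in [lo, hi]."""
        return [A for (d, A) in self.t1[quad] if lo <= d <= hi]

    def t2_at_least(self, quad, threshold):
        """Non-terminals stored in T2(quad) with an integer q >= threshold."""
        return [A for (q, A) in self.t2[quad] if q >= threshold]
```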
### Query Recall that a query consists of two strings \(P_{1},P_{2}\) of length at most \(m\) each and an integer \(b\), and we must find all \(b\)-close co-occurrences of \(P_{1},P_{2}\) in \(S\), let occ be their number. We start by checking whether \(P_{2}\) occurs in \(P_{1}\) using a linear-time and constant-space pattern matching algorithm such as [11]. If it is, let \(q_{2}\) be the position of the first occurrence. If \(q_{2}>b\), then there are no \(b\)-close co-occurrences of \(P_{1},P_{2}\) in \(S\). Otherwise, to find all \(b\)-close co-occurrences of \(P_{1},P_{2}\) in \(S\) (that _always_ consist of an occurrence of \(P_{1}\) in \(S\) and the first occurrence of \(P_{2}\) in \(P_{1}\)), it suffices to find all occurrences of \(P_{1}\) in \(S\), which we do using the index of Christiansen et al. [8] in time \(O(|P_{1}|+\operatorname{occ})=O(m+\operatorname{occ})\). From now on, assume that \(P_{2}\) is not a substring of \(P_{1}\). Let \(\mathcal{N}\) be the set of all non-terminals in \(G^{\prime}\) such that their expansion contains a relevant \(b\)-close co-occurrence of \(P_{1},P_{2}\). By Claim 4.2, \(|\mathcal{N}|\leq\operatorname{occ}\). **Lemma 4.10**.: _Assume that \(P_{2}\) is not a substring of \(P_{1}\). One can retrieve in \(O(m+(1+\operatorname{occ})\log^{3}N)\) time a set \(\mathcal{N}^{\prime}\supset\mathcal{N}\), \(|\mathcal{N}^{\prime}|=O(\operatorname{occ}\log N)\)._ Proof.: We start by computing \(\operatorname{Splits}^{\prime}(G^{\prime},P_{1})\) and \(\operatorname{Splits}^{\prime}(G^{\prime},P_{2})\) via Lemma 2.7 in \(O((|P_{1}|+|P_{2}|)\log N)=O(m\log N)\) time (or providing a certificate that either \(P_{1}\) or \(P_{2}\) does not occur in \(S\), in which case there are no co-occurrences of \(P_{1},P_{2}\) in \(S\) and we are done). Recall that \(|\operatorname{Splits}^{\prime}(G^{\prime},P_{1})|,|\operatorname{Splits}^{ \prime}(G^{\prime},P_{2})|\in O(\log N)\). For each fixed pair of splits \(s_{1}\in\operatorname{Splits}^{\prime}(G^{\prime},P_{1})\) \(s_{2}\in\operatorname{Splits}^{\prime}(G^{\prime},P_{2})\) and \(j\in\{1,2\}\), we compute the interval of strings in \(T_{pre}\) prefixed by \(\operatorname{rev}(P_{j}[\dots s_{j}])\), which corresponds to the locus \(u_{j}\) of \(\operatorname{rev}(P_{j}[\dots s_{j}])\) in \(T_{pre}\) and the interval of strings in \(T_{suf}\) prefixed by \(P_{j}(s_{j}\dots]\), which corresponds to the locus \(v_{j}\) of \(P_{j}(s_{j}\dots]\) in \(T_{suf}\). Computing the intervals takes \(O(m+\log^{2}N)\) time for all the splits by Lemma 2.8. Consider the strings \(S_{1}=\operatorname{rev}(U_{1})V_{1}\) and \(S_{2}=\operatorname{rev}(U_{2})V_{2}\), where \(U_{1},U_{2},V_{1},V_{2}\) are the labels of \(u_{1},v_{1},u_{2},v_{2}\), respectively. Let \(l_{1}=|\operatorname{rev}(U_{1})|\), \(\Delta_{1}=l_{1}-s_{1}\), \(l_{2}=|\operatorname{rev}(U_{2})|\), \(\Delta_{2}=l_{2}-s_{2}\), and \(\Delta=\Delta_{1}-\Delta_{2}\). Consider a relevant co-occurrence \((q_{1},q_{2})\) of \(P_{1},P_{2}\) in the expansion of a non-terminal \(A\). By Corollary 4.5, \(q_{1},q_{2}\) imply existence of occurrences \(p_{1},p_{2}\) of \(S_{1},S_{2}\) such that \([p_{1},p_{1}+|S_{1}|)\supseteq[q_{1},q_{1}+|P_{1}|)\) and \([p_{2},p_{2}+|S_{2}|)\supseteq[q_{2},q_{2}+|P_{2}|)\). Our index must treat both cases of Corollary 4.5. We consider eight subcases defined in Fig. 3, which describe all possible locations of \(p_{1}\) and \(p_{2}\). Subcases (1.1)-(1.4). 
To retrieve the non-terminals, we query \(\mathcal{T}_{1}(u_{1},u_{2},v_{1},v_{2})\) to find all integers that belong to the range \([\Delta,\Delta+b]\) (and the corresponding non-terminals). Recall that, for each non-terminal \(A\), the tree stores an integer \(d=p_{2}-p_{1}\), where \(p_{1}\) is the starting position of an occurrence of \(S_{1}\) in \(\overline{A}\) and \(p_{2}\) of \(S_{2}\). By Observation 4.6, \(p_{1}+\Delta_{1}\) is an occurrence of \(P_{1}\) and \(p_{2}+\Delta_{2}\) is an occurrence of \(P_{2}\). The distance between them is in \([0,b]\) iff \(d\in[\Delta,\Delta+b]\). By Observation 4.7, the expansion of each retrieved non-terminal contains a \(b\)-close co-occurrence of \(P_{1},P_{2}\). On the other hand, if \(\overline{A}\) contains a co-occurrence \((q_{1},q_{2})\) corresponding to one of Subcases (1.1)-(1.4), then by Corollary 4.5, \(p_{1}=q_{1}-\Delta_{1}\) is an occurrence of \(S_{1}\) and \(p_{2}=q_{2}-\Delta_{2}\) is an occurrence of \(S_{2}\) and by construction \(\mathcal{T}_{1}(u_{1},u_{2},v_{1},v_{2})\) stores an integer \(d=p_{2}-p_{1}\). Therefore, the query retrieves all non-terminals corresponding to Subcases (1.1)-(1.4). Subcases (1.5) and (2.1). We must decide whether an occurrence of \(P_{1}\) in \(S_{2}\) forms a \(b\)-close co-occurrence with the occurrence \(\Delta_{2}\) of \(P_{2}\) in \(S_{2}\), and if so, report all non-terminals such that their expansion contains a relevant occurrence of \(S_{2}\) with a split \(l_{2}\), which are exactly the non-terminals stored in the list \(\mathcal{L}(u_{2},v_{2})\). Let \(k=\lceil\log(s_{2})\rceil\). Recall that the index stores the following information for \(k\): 1. \(p_{1}\), the rightmost occurrence of \(S_{1}\) in \(S_{2}\) such that \(p_{1}+(|S_{1}|-1)\leq l_{2}-2^{k}\); 2. \(p_{1}^{\prime}\), the leftmost occurrence of \(S_{1}\) in \(S_{2}\) such that \(p_{1}^{\prime}\leq l_{2}-2^{k}\leq p_{1}^{\prime}+(|S_{1}|-1)\); 3. \(p_{1}^{\prime\prime}\), the rightmost occurrence of \(S_{1}\) in \(S_{2}\) such that \(p_{1}^{\prime\prime}\leq l_{2}-2^{k}\leq p_{1}^{\prime\prime}+(|S_{1}|-1)\). (See Fig. 4). By Observation 4.6, the occurrence \(p_{1}\) of \(S_{1}\) induces an occurrence \(q_{1}=p_{1}+\Delta_{1}\) of \(P_{1}\). Furthermore, if \(S_{1}\) is periodic with period \(\pi_{1}\), then \(q_{1}+\pi_{1}\cdot k\), \(0\leq k\leq\lfloor(|S_{1}|-q_{1}-|P_{1}|)/\pi_{1}\rfloor\), are also occurrences of \(P_{1}\). One can decide whether the distance from any of these occurrences to the occurrence \(\Delta_{2}\) of \(P_{2}\) is in \([0,b]\) in constant time, and if yes, then \(S_{2}\) contains a \(b\)-close co-occurrence of \(P_{1},P_{2}\) by Observation 4.7. Second, by Corollary 2.1, if \(S_{1}\) is not periodic, then there are no occurrences of \(S_{1}\) between \(p_{1}^{\prime}\) and \(p_{1}^{\prime\prime}\), and by Observation 4.6, \(p_{1}^{\prime}\) and \(p_{1}^{\prime\prime}\) induce occurrences \(p_{1}^{\prime}+\Delta_{1},p_{1}^{\prime\prime}+\Delta_{1}\) of \(P_{1}\). Otherwise, there are occurrences of \(P_{1}\) in every position \(p_{1}^{\prime}+\Delta_{1}+k\cdot\pi_{1}\), \(0\leq k\leq\lfloor(|S_{1}|+p_{1}^{\prime\prime}-|P_{1}|-p_{1}^{\prime})/\pi_{1}\rfloor\). Similarly, we can decide whether the distance from any of them to the occurrence \(\Delta_{2}\) of \(P_{2}\) in \(S_{2}\) is in \([0,b]\) in constant time. Finally, let \(q_{1}\) be the rightmost occurrence of \(P_{1}\) in \(S_{2}\) in the interval \([l_{2}-2^{k}+1,\Delta_{2}]\).
We extract \(S_{2}(l_{2}-2^{k},\Delta_{2}+|P_{2}|)\) via Fact A.4 and search for \(q_{1}\) using a linear-time pattern matching algorithm for \(P_{1}\), which takes \(O(|P_{1}|+|P_{2}|)=O(m)\) time. If \(0\leq\Delta_{2}-q_{1}\leq b\), then there is a \(b\)-close co-occurrence of \(P_{1},P_{2}\) in \(S_{2}\). Correctness follows from Corollary 4.5, Observation 4.6 and Observation 4.7.

Figure 4: Query algorithm for Subcases (1.5) and (2.1).

Subcase (2.2). Let \(\pi_{1}\) be the period of \(S_{1}\). We retrieve the non-terminals associated with the integers \(q\in\mathcal{T}_{2}(u_{1},u_{2},v_{1},v_{2})\) such that the intersection of an interval \(I=[a,b]\) and \([\ell,q]\) is non-empty, where \[a=\lceil(\Delta-\operatorname{ov}(S_{1},S_{2}))/\pi_{1}\rceil,\,b=\lfloor(\Delta-\operatorname{ov}(S_{1},S_{2})+b)/\pi_{1}\rfloor\text{ and }\ell=-\lfloor(|S_{1}|-|P_{1}|-\Delta_{1})/\pi_{1}\rfloor\] (See the description of the index for the definition of \(\operatorname{ov}(S_{1},S_{2})\)). As \(\ell\) is fixed, we can implement the query via at most one binary tree search: If \(b\leq\ell\), the output is empty; if \(a\leq\ell\leq b\), we must output all integers; and if \(\ell\leq a\), we must output all \(q\geq a\). Let us now explain why the algorithm is correct. Consider a non-terminal \(A\) for which \(\mathcal{T}_{2}(u_{1},u_{2},v_{1},v_{2})\) stores an integer \(q\). By construction, \(\overline{A}\) contains a relevant occurrence of \(S_{2}\) with a split \(l_{2}\). A position \(p_{1}=|\overline{\mathsf{head}(A)}|-l_{2}-\operatorname{ov}(S_{1},S_{2})-q\cdot\pi_{1}\) is the leftmost occurrence of \(S_{1}\) in \(\overline{A}\) such that
As shown above, the algorithm reports a set \(\mathcal{N}^{\prime}\supset\mathcal{N}\) of non-terminals and each non-terminal in \(\mathcal{N}^{\prime}\) contains a \(b\)-close co-occurrence. By Claim 4.2 and since the height of \(G^{\prime}\) is \(h=O(\log N)\), we have \(|\mathcal{N}^{\prime}|=O(\operatorname{occ}\log N)\). Furthermore, for a fixed pair of splits of \(P_{1},P_{2}\), each non-terminal in \(\mathcal{N}^{\prime}\) can be reported a constant number of times. Since \(|\mathrm{Splits}^{\prime}(G^{\prime},P_{1})|\cdot|\mathrm{Splits}^{\prime}(G^{ \prime},P_{2})|=O(\log^{2}N)\), the total size of the output is \(|\mathcal{N}^{\prime}|\cdot O(\log^{2}N)=O(\operatorname{occ}\cdot\log^{3}N)\). We therefore obtain that the running time of the algorithm is \(O(m+\log^{3}N+\operatorname{occ}\log^{3}N)=O(m+(1+\operatorname{occ})\log^{ 3}N)\) as desired. Once we have retrieved the set \(\mathcal{N}^{\prime}\), we find all \(b\)-close relevant co-occurrences for each of the non-terminals in \(\mathcal{N}^{\prime}\) using Theorem 3.1. In fact, our algorithm acts naively and computes _all_ relevant co-occurrences for a non-terminal in \(\mathcal{N}^{\prime}\), and then selects those that are \(b\)-close. By case inspection, one can show that a relevant co-occurrence for a non-terminal \(A\) always consists of an occurrence of \(P_{2}\) that is either relevant or the leftmost in \(\overline{\mathsf{tail}}(A)\), and a preceding occurrence of \(P_{1}\). Intuitively, this allows to compute all relevant co-occurrences efficiently and guarantees that their number is small. Formally, we show the following claim: **Lemma 4.11**.: _Assume that \(P_{2}\) is not a substring of \(P_{1}\). After \(O(m\log N+\log^{2}N)\)-time preprocessing, the data structure of Theorem 3.1 allows to compute all \(b\)-close relevant co-occurrences of \(P_{1},P_{2}\) in the expansion of a given non-terminal \(A\) in time \(O(\log^{3}N\log\log N)\)._ A part of the index of Christiansen et al. [8] is a pruned copy of the parse tree of \(G^{\prime}\). They showed how to traverse the tree to report all occurrences of a pattern, given its relevant occurrences in the non-terminals. By using essentially the same algorithm, we can report all \(b\)-close co-occurrences in amortized constant time per co-occurrence, which concludes the proof of Theorem 1.1. (For completeness, we provide all details of this step in Appendix C, Lemma C.2.)
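To summarize the query procedure at a high level, the following sketch (ours) strings the above steps together; every `index.*` call is a placeholder for the corresponding component (plain pattern matching via Fact 4.8, the candidate set of Lemma 4.10, the per-non-terminal retrieval of Lemma 4.11, and the parse-tree traversal of Lemma C.2), not an actual API.

```python
def report_close_cooccurrences(P1: str, P2: str, b: int, index):
    """High-level sketch of the Theorem 1.1 query (0-based offsets).

    Every `index.*` call below is a placeholder for a component of the
    data structure, not a real API.
    """
    # Special case: P2 occurs inside P1 (checked with a linear-time matcher;
    # str.find stands in for it here).
    j = P1.find(P2)
    if j != -1:
        if j > b:
            return []  # the first occurrence of P2 inside P1 is already too far
        # Every b-close co-occurrence pairs an occurrence q1 of P1 in S with
        # the first occurrence of P2 inside that copy of P1.
        return [(q1, q1 + j) for q1 in index.occurrences_of(P1)]

    # General case.
    candidates = index.candidate_nonterminals(P1, P2, b)              # Lemma 4.10
    relevant = {A: index.relevant_close_cooccurrences(A, P1, P2, b)   # Lemma 4.11
                for A in candidates}
    # Map relevant co-occurrences inside non-terminal expansions to absolute
    # positions in S by traversing the pruned parse tree.
    return index.expand_in_parse_tree(relevant)
```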
2306.07749
Provably Learning Nash Policies in Constrained Markov Potential Games
Multi-agent reinforcement learning (MARL) addresses sequential decision-making problems with multiple agents, where each agent optimizes its own objective. In many real-world instances, the agents may not only want to optimize their objectives, but also ensure safe behavior. For example, in traffic routing, each car (agent) aims to reach its destination quickly (objective) while avoiding collisions (safety). Constrained Markov Games (CMGs) are a natural formalism for safe MARL problems, though generally intractable. In this work, we introduce and study Constrained Markov Potential Games (CMPGs), an important class of CMGs. We first show that a Nash policy for CMPGs can be found via constrained optimization. One tempting approach is to solve it by Lagrangian-based primal-dual methods. As we show, in contrast to the single-agent setting, however, CMPGs do not satisfy strong duality, rendering such approaches inapplicable and potentially unsafe. To solve the CMPG problem, we propose our algorithm Coordinate-Ascent for CMPGs (CA-CMPG), which provably converges to a Nash policy in tabular, finite-horizon CMPGs. Furthermore, we provide the first sample complexity bounds for learning Nash policies in unknown CMPGs, and, which under additional assumptions, guarantee safe exploration.
Pragnya Alatur, Giorgia Ramponi, Niao He, Andreas Krause
2023-06-13T13:08:31Z
http://arxiv.org/abs/2306.07749v1
# Provably Learning Nash Policies in ###### Abstract Multi-agent reinforcement learning (MARL) addresses sequential decision-making problems with multiple agents, where each agent optimizes its own objective. In many real-world instances, the agents may not only want to optimize their objectives, but also ensure safe behavior. For example, in traffic routing, each car (agent) aims to reach its destination quickly (objective) while avoiding collisions (safety). Constrained Markov Games (CMGs) are a natural formalism for safe MARL problems, though generally intractable. In this work, we introduce and study _Constrained Markov Potential Games_ (CMPGs), an important class of CMGs. We first show that a Nash policy for CMPGs can be found via constrained optimization. One tempting approach is to solve it by Lagrangian-based primal-dual methods. As we show, in contrast to the single-agent setting, however, CMPGs do not satisfy strong duality, rendering such approaches inapplicable and potentially unsafe. To solve the CMPG problem, we propose our algorithm **Co**ordinate-**A**scent for **CMPGs** (CA-CMPG), which provably converges to a Nash policy in tabular, finite-horizon CMPGs. Furthermore, we provide the first sample complexity bounds for learning Nash policies in unknown CMPGs, and, which under additional assumptions, guarantee safe exploration. ## 1 Introduction Multi-Agent Reinforcement Learning (MARL) addresses sequential decision-making problems with _multiple agents_, where the decisions of individual agents may also affect others. In this work, we focus on a rich and fundamental class of MARL problems, known as _Markov Potential Games_, (MPGs, Leonardos et al., 2022). Important applications, such as traffic routing (Altman et al., 2006) or wireless communication (Yamamoto, 2015), can be modeled as MPGs. The main characteristic of an MPG is the existence of an underlying _potential function_, which captures the agents' incentives to deviate between different policies. MPGs can model both fully cooperative scenarios1 and scenarios, in which the agents have individual objectives, as long as such a potential function exists. Footnote 1: In fully cooperative scenarios, the agents have one common objective. In many real-world applications, however, the standard MPG framework fails to incorporate additional requirements like _safety_. For instance, in traffic routing, we do want to find the fastest route to the individual destinations, while ensuring that the vehicles drive safely and do not collide. In this work, we introduce the framework of _Constrained Markov Potential Games_ (CMPGs) to study safety in the context of MPGs. We incorporate safety using _coupled_ constraints on the policies of the agents. Coupled constraints are relevant because they allow us to model requirements like collision avoidance. Our objective is to find a Nash policy (Nash et al., 1950; Altman and Shwartz, 2000), i.e., a set of policies such that no agent has the incentive to deviate unilaterally within the constrained set of policies. Prior work on algorithms for (unconstrained) MPGs, in which each agent improves its own objective _independently_, cannot be applied to the con strained setting, as the agents may need to _coordinate_ to satisfy the constraints. A more detailed discussion of prior work is provided in Section 2. We study tabular CMPGs in the finite-horizon setting and summarize our contributions here: 1. 
First, we show that a Nash policy can in principle be recovered by solving a constrained optimization problem, which, however, becomes intractable as the number of agents increases (Section 4). 2. Given tractable algorithms for unconstrained MPGs (cf. Leonardos et al., 2022; Fox et al., 2022), a tempting approach would be to utilize Lagrangian duality to reduce the constrained problem to an unconstrained one (Diddigi et al., 2019; Parnika et al., 2021). Unfortunately, we show that strong duality does not hold for our problem (Section 4), rendering such approaches _sub-optimal_ and _unsafe_. This is in sharp contrast to the single-agent setting, for which strong duality does hold (Paternain et al., 2019). 3. Instead of solving the constrained optimization problem, we propose to directly search for a Nash policy. We present our algorithm - **Co**ordinate-**A**scent for **CMPGs** (CA-CMPG) - which provably converges to an \(\varepsilon\)-Nash policy, assuming that the agents have full knowledge of the CMPG (Section 5). 4. Finally, we prove a sample complexity bound for our algorithm CA-CMPG, when the agents do not know the CMPG beforehand (Section 6). With access to a generative model (Section 6.1), the agents converge to an \(\varepsilon\)-Nash policy with \(\widetilde{\mathcal{O}}\left(\frac{H^{8}}{\varepsilon^{3}\zeta^{2}}\right)\) samples, where \(\zeta\) is the Slater constant of the CMPG and \(H\) is the horizon. On the other hand, if the agents do not have access to a generative model, but still want to ensure safe exploration, we obtain a sample complexity bound of \(\widetilde{\mathcal{O}}\left(\frac{H^{10}}{\varepsilon^{2}\zeta^{2}}\right)\) (Section 6.2), where \(c\in(0,\zeta]\) is a quantity related to the constraint set of the CMPG. ## 2 Related Work **Markov Potential Games:** MPGs have become popular in recent years and have been studied for the tabular setting (Leonardos et al., 2022; Zhang et al., 2022, 2021b; Chen et al., 2022; Mao et al., 2022; Maheshwari et al., 2022; Fox et al., 2022) and for state-action spaces with function approximation (Ding et al., 2022). For the tabular setting with _known_ rewards and transitions, Leonardos et al. (2022) prove that independent policy gradient (IPG) converges to an \(\varepsilon\)-Nash policy in \(O(1/\varepsilon^{2})\) iterations. If rewards and transitions are _unknown_, Mao et al. (2022) prove that IPG with access to a stochastic gradient oracle converges to an \(\varepsilon\)-Nash policy with a sample complexity of \(\mathcal{O}\left(1/\varepsilon^{4.5}\right)\). In these IPG algorithms, the agents improve their own objectives _independently_. It is challenging to apply these algorithms with coupled constraints, as the agents may need to coordinate to satisfy those constraints, at least during the learning process. Song et al. (2021) present a different approach for tabular MPGs with unknown rewards and transitions, in which the agents _coordinate_ to compute an \(\varepsilon\)-Nash policy with a sample complexity of \(\widetilde{\mathcal{O}}(1/\varepsilon^{3})\). While their algorithm is for unconstrained MPGs, we show in our work, that this type of approach can be extended to the constrained setting. Maheshwari et al. (2022) present a different approach with asymptotic convergence to a Nash policy, whereas we target finite-time convergence. Note that MPGs are only one way to model MARL problems, and for a more comprehensive overview on MARL, we refer the reader to the surveys by Yang and Wang (2021) and Zhang et al. (2021). 
**Constrained Markov Decision Processes:** A common approach to constrained _single-agent_ RL are _Constrained Markov Decision Processes_ (CMDPs, Altman, 1999). CMDPs are widely studied, and a comprehensive survey is given by Garcia et al. (2015). Below, we focus on aspects relevant to our work. In CMDPs, the agent optimizes a reward function subject to constraints. Lagrangian duality is a common approach for constrained optimization and Paternain et al. (2019) proved that CMDPs possess the _strong duality property_, giving theoretical justification for the use of Lagrangian dual approaches. **Constrained Markov Games:** One of the common approaches to constrained multi-agent RL are _Constrained Markov Games_ (CMGs, Altman and Shwartz, 2000). CMGs restrict the policies of the agents, which can be used to model safety objectives. Note that CMPGs are one class of CMGs. In cooperative CMPGs2, where the agents have one common reward function, the CMPG objective very much resembles the CMDP formulation. Furthermore, Diddigi et al. (2019) and Parnika et al. (2021) demonstrate good experimental results for cooperative CMPGs with Lagrangian dual approaches, but provide no theoretical guarantees. We prove in our work, however, that strong duality does not hold in general for CMPGs (cf. Section 4), rendering Lagrangian dual approaches inapplicable in those cases. Furthermore, we demonstrate that the dual might even return unsafe solutions. Footnote 2: Note that cooperative games are a strict subclass of CMPGs, as CMPGs are able to model non-cooperative settings too. ## 3 Background and Problem Definition **Notation:** For any \(n\in\mathbb{N}\), we use the short-hand notation \([n]\) to refer to the set of integers \(\{1,...,n\}\). For any finite set \(X\), we denote by \(\Delta_{X}\) the probability simplex over \(X\), i.e., \(\Delta_{X}=\{v\in[0,1|^{|X||}|\sum_{x\in X}v(x)=1\}\). ### Markov Potential Games An \(n\)-agent _Markov Potential Game_ (MPG) is a tuple \(\mathcal{G}=(\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{n},H,\{\mathcal{P}_{h}\}_{h=1 }^{H},\)\(\{\{r_{i,h}\}_{h=1}^{H}\}_{i=1}^{n},\mu)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}_{i}\) is agent \(i\)'s action space. We denote by \(\mathcal{A}\triangleq\times_{i=1}^{n}\mathcal{A}_{i}\) the joint action space, \(H\in\mathbb{N}_{>0}\) the horizon. \(\mathcal{P}_{h}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{\mathcal{S}}\) is the environment's transition function at time \(h\in[H]\) and \(\mathcal{P}_{h}(s^{\prime}|s,a)\) denotes the probability of moving to state \(s^{\prime}\) from state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\) at step \(h\in[H]\), \(r_{i,h}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) is agent \(i\)'s reward function at step \(h\in[H]\) and \(\mu\in\Delta_{\mathcal{S}}\) denotes the initial state distribution. We assume \(\mathcal{S}\) and \(\mathcal{A}\) to be finite. **Policies:** For every agent \(i\in[n]\), we define its policy space as \(\Pi^{i}\triangleq\big{\{}\{\pi_{i,h}\}_{h=1}^{H}\mid\pi_{i,h}:\mathcal{S} \rightarrow\Delta_{\mathcal{A}_{i}},\forall h\in[H]\big{\}}\). If agent \(i\) follows a policy \(\pi\in\Pi^{i}\), it means that at step \(h\in[H]\) and state \(s\in\mathcal{S}\), the agent samples its next action from \(\pi_{h}(\cdot|s)\). We denote by \(\Pi\triangleq\big{\{}\boldsymbol{\pi}=(\pi_{1},...,\pi_{n})\,|\pi_{i}\in\Pi^{ i},\forall i\in[n]\big{\}}\) the set of _joint_ policies. 
For any policy \(\boldsymbol{\pi}\in\Pi\) and agent \(i\in[n]\), we denote by \(\boldsymbol{\pi}_{-i}\) the policy of the _other_\(n-1\) agents. **Value Function:** For any policy \(\boldsymbol{\pi}\in\Pi\) and agent \(i\in[n]\), the value function \(V^{r_{i}}(\boldsymbol{\pi})\) measures the expected, cumulative reward of agent \(i\), and is defined as follows: \[V^{r_{i}}(\boldsymbol{\pi})\triangleq\underset{\begin{subarray}{c}s\sim\mu _{i,}\\ a_{h}\sim\boldsymbol{\pi}_{h}(\cdot|s_{h}),\\ s_{h+1}\sim\mathcal{P}_{h}(\cdot|s_{h},a_{h})\end{subarray}}{\mathbb{E}}\big{[} \sum_{h=1}^{H}r_{i,h}(s_{h},a_{h})|s_{0}=s\big{]}. \tag{1}\] **Potential Function:** An MPG possesses an underlying potential function \(\Phi:\Pi\rightarrow\mathbb{R}\) such that: \[V^{r_{i}}(\pi_{i},\boldsymbol{\pi}_{-i})-V^{r_{i}}(\pi^{\prime}_{i}, \boldsymbol{\pi}_{-i})=\Phi(\pi_{i},\boldsymbol{\pi}_{-i})-\Phi(\pi^{\prime}_ {i},\boldsymbol{\pi}_{-i})\qquad\forall\pi^{\prime}_{i}\in\Pi^{i},\forall \boldsymbol{\pi}\in\Pi,\forall i\in[n]. \tag{2}\] This is an adaptation of the potential function defined in Leonardos et al. (2022) to the finite-horizon setting. Instead of defining a per-state potential function, we directly consider the potential function with respect to the initial distribution \(\mu\). **Remark:** Note that the potential function is a property of the MPG and is typically not known to the agents. In a cooperative game, the agents have one shared reward function \(r\) such that \(r_{i}\equiv r\), \(\forall i\in[n]\). In this case, the potential function is simply the value function of the agents, i.e., \(\Phi=V^{r}\). Note, however, that cooperative games are a _strict_ subset of MPGs, and MPGs have the ability to express non-cooperative scenarios, such as traffic congestion. In Section 7, we describe different instances in detail. ### Constrained Markov Potential Games An \(n\)-agent _Constrained Markov Potential Game_ (CMPG) is an MPG \(\mathcal{G}=(\mathcal{S},\{\mathcal{A}_{i}\}_{i=1}^{n},\)\(H,\{\mathcal{P}_{h}\}_{h=1}^{H},\)\(\{\{r_{i,h}\}_{h=1}^{H}\}_{i=1}^{n},\mu)\) with constraints \(\{(\{c_{j,h}\}_{h=1}^{H},\alpha_{j})\}_{j=1}^{k}\), where \(c_{j,h}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) denotes the \(j\)-th cost function at step \(h\in[H]\) and \(\alpha_{j}\in[0,H]\) is the constraint threshold.3 Footnote 3: Even though we define our problem in the finite-horizon setting, our results can be easily extended to the discounted, infinite-horizon setting. **Feasible Policies:** We call a policy \(\boldsymbol{\pi}\in\Pi\)_feasible_, if it satisfies the following constraints: \[V^{c_{j}}_{\mu}(\boldsymbol{\pi})\triangleq\underset{\begin{subarray}{c}s\sim \mu_{i,}\\ a_{h}\sim\boldsymbol{\pi}_{h}(\cdot|s_{h}),\\ s_{h+1}\sim\mathcal{P}_{h}(\cdot|s_{h},a_{h})\end{subarray}}{\mathbb{E}}\big{[} \sum_{h=1}^{H}c_{j,h}(s_{h},a_{h})\Big{|}s_{0}=s\big{]}\leq\alpha_{j},\quad \forall j\in[k].\] In the rest of the paper, we use \(\Pi_{C}\) to refer to the set of _feasible_ policies. For every agent \(i\) and policy \(\boldsymbol{\pi}_{-i}\) of the other \(n-1\) agents, we define \(\Pi_{C}^{i}(\boldsymbol{\pi}_{-i})\triangleq\big{\{}\pi_{i}\in\Pi^{i}|(\pi_{i},\boldsymbol{\pi}_{-i})\in\Pi_{C}\big{\}}\). We refer to this type of constraints as _coupled_ constraints, as the values of the constraints depend on the _joint_ actions of the agents. If we wish to model an intersection in a traffic scenario, an important constraint to incorporate would be collision avoidance. 
To decide whether a certain set of actions causes a collision or not, we need to take the actions of _all_ agents at the intersection into account. In a CMPG, each agent \(i\) aims to maximize its own value function \(V^{r_{i}}\). Since the rewards and transitions depend on the _joint_ policy, it may not be possible to find a policy that is globally optimal for all value functions simultaneously. Instead, the agents typically need to settle for an equilibrium policy, at which no agent has an incentive to deviate unilaterally. Many different types of equilibria exist in the literature, such as the Nash equilibrium (Nash et al., 1950), correlated equilibrium (Aumann, 1987) or Stackelberg equilibrium (Breton et al., 1988). In this work, our goal is to obtain a _Nash equilibrium policy_ (Nash et al., 1950; Altman and Shwartz, 2000) in a CMPG. We define a relaxed notion in the following paragraph. \(\varepsilon\)**-Nash Equilibrium Policy:** For any \(\varepsilon\geq 0\), a policy \(\mathbf{\pi}^{*}=(\pi_{1}^{*},...,\pi_{n}^{*})\in\Pi_{C}\) is an \(\varepsilon\)_-Nash equilibrium policy_ if it is an \(\varepsilon\)-best-response policy for each agent, i.e.,4: Footnote 4: This is an extension of the _generalized Nash equilibrium_ (Facchinei and Kanzow, 2010) to CMPGs. \[\max_{\pi_{i}\in\Pi_{C}^{i}(\mathbf{\pi}_{-i}^{*})}V^{r_{i}}(\pi_{i},\mathbf{\pi}_{-i}^{*})-V^{r_{i}}(\mathbf{\pi}^{*})\leq\varepsilon,\qquad\forall i\in[n]. \tag{3}\] We call \(\mathbf{\pi}^{*}\) a _Nash equilibrium policy_ if Eq. (3) holds with \(\varepsilon=0\). In the rest of the paper, we refer to the Nash equilibrium policy as _Nash policy_.

### Constrained Markov Decision Processes

A _Constrained Markov Decision Process_ (CMDP) is a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},H,\{\mathcal{P}_{h}\}_{h=1}^{H},\{r_{h}\}_{h=1}^{H},\mu,\{(\{c_{j,h}\}_{h=1}^{H},\alpha_{j})\}_{j=1}^{k})\). In a CMDP, there is a _single_ agent. However, the individual elements in \(\mathcal{M}\) carry the same meaning as in CMPGs. Furthermore, the policy sets \(\Pi,\Pi_{C}\) and the value functions \(V^{r}:\Pi\to\mathbb{R}\) (reward), \(V^{c_{j}}:\Pi\to\mathbb{R},j\in[k]\) (costs) are defined in the same way as for CMPGs. In a CMDP, the agent aims to find a policy \(\pi^{*}\) that satisfies: \[\pi^{*}\in\arg\max_{\pi\in\Pi_{C}}V^{r}(\pi). \tag{4}\] In the following section, we prove that a Nash policy in a CMPG can be found by maximizing the potential function with respect to the given constraints, similar to Eq. (4). We will show that Lagrangian duality, a common approach for constrained optimization, will not work in general for CMPGs.

## 4 Duality for Constrained Markov Potential Games?

For an MPG with potential function \(\Phi\), a globally optimal policy \(\mathbf{\pi}^{*}\in\arg\max_{\mathbf{\pi}\in\Pi}\Phi(\mathbf{\pi})\) is also a Nash policy (Leonardos et al., 2022). We show in Proposition 1 that this property generalizes to CMPGs. We defer the proofs for the theoretical results in this section to Appendix A. **Proposition 1**.: _Define the following constrained optimization problem:_ \[\mathbf{\pi}^{*}\in\arg\max_{\mathbf{\pi}\in\Pi_{C}}\Phi(\mathbf{\pi}). \tag{5}\] _Then, \(\mathbf{\pi}^{*}\) is a Nash policy for a CMPG with potential function \(\Phi\)._ Solving Eq. (5) directly is not trivial; even if the agents know the rewards and transitions, the potential function is usually not known. Moreover, the fact that we have _coupled_ constraints makes solving Eq. (5) directly intractable.
Nevertheless, a common approach for solving constrained optimization problems is _Lagrangian duality_, which, in our case, turns the CMPG into an (unconstrained) MPG with modified rewards (Proposition 2). This would enable the use of scalable algorithms that have been developed for unconstrained MPGs (Leonardos et al., 2022). Furthermore, in previous works (Liu et al., 2021; Diddigi et al., 2019), Lagrangian duality was used for cooperative CMPGs and showed promising experimental results. This makes Lagrangian duality a tempting approach for CMPGs. For this, we define the _Lagrangian_ \(\mathcal{L}:\Pi\times\mathbb{R}_{+}^{k}\to\mathbb{R}\) and the primal5 and dual problems for Eq. (5) as follows: Footnote 5: Note that the primal is equivalent to Eq. (5). \[\mathcal{L}(\mathbf{\pi},\mathbf{\lambda})\triangleq\Phi(\mathbf{\pi})+\sum_{j=1}^{k}\lambda_{j}\left(\alpha_{j}-V^{c_{j}}(\mathbf{\pi})\right)\] (Lagrangian) \[P^{*}=\max_{\mathbf{\pi}\in\Pi}\min_{\mathbf{\lambda}\in\mathbb{R}_{+}^{k}}\mathcal{L}(\mathbf{\pi},\mathbf{\lambda})\] (Primal) \[D^{*}=\min_{\mathbf{\lambda}\in\mathbb{R}_{+}^{k}}\max_{\mathbf{\pi}\in\Pi}\mathcal{L}(\mathbf{\pi},\mathbf{\lambda}).\] (Dual) As a first step, in Proposition 2, we prove that the dual problem does indeed correspond to an (unconstrained) MPG. **Proposition 2**.: _For any \(\mathbf{\lambda}\in\mathbb{R}_{+}^{k}\), \(\mathcal{L}(\cdot,\mathbf{\lambda})\) is a potential function for an MPG with reward functions \(\check{r}_{i,h}\triangleq r_{i,h}-\sum_{j=1}^{k}\lambda_{j}c_{j,h},\forall i\in[n],\forall h\in[H]\)._ Then, weak duality guarantees that \(D^{*}\geq P^{*}\) holds. Unfortunately, in the following proposition we show that _strong duality_, i.e., \(D^{*}=P^{*}\), _does not hold_ in general for CMPGs. **Proposition 3**.: _There exists a CMPG for which strong duality does not hold, i.e., for which \(P^{*}\neq D^{*}\)._ Proof.: We prove this using a counter-example. Consider the following two-agent CMPG with \(|\mathcal{S}|=1\), \(\mathcal{A}_{1}=\mathcal{A}_{2}=\{1,2\}\), reward functions \(r=r_{1}=r_{2}\), constraint function \(c\) and threshold \(\alpha=1/2\). The rewards and constraints are specified via the matrices \[A=\begin{bmatrix}3&2\\ 2&4\end{bmatrix},\quad B=\begin{bmatrix}0&0\\ 0&1\end{bmatrix},\] where \(r(i,j)=A(i,j)\) and \(c(i,j)=B(i,j),\forall i,j\in\{1,2\}\). This is a _cooperative_ CMPG with potential function \(\Phi(\mathbf{\pi})=\pi_{1}^{T}A\pi_{2}\). The optimization formulation (Eq. (5)) corresponding to this CMPG is: \[\max_{\pi_{1},\pi_{2}\in\Delta_{2}}\pi_{1}^{T}A\pi_{2},\text{ subject to: }\pi_{1}^{T}B\pi_{2}\leq 1/2. \tag{6}\] **Primal problem:** First, we solve the primal problem, which is defined as follows: \[P^{*}=\max_{\pi_{1},\pi_{2}\in\Delta_{2}}\min_{\lambda\in\mathbb{R}_{+}}\left(\pi_{1}^{T}A\pi_{2}+\lambda\left(\frac{1}{2}-\pi_{1}^{T}B\pi_{2}\right)\right)\] The policies \(\pi_{1}=\pi_{2}=\left[1-\sqrt{\frac{1}{2}},\sqrt{\frac{1}{2}}\right]\) solve the primal problem with a reward of \(P^{*}\approx 3.09\). One can easily verify that these policies also form a Nash policy. **Dual problem:** Next, we solve the dual problem, which is defined as follows: \[D^{*}=\min_{\lambda\in\mathbb{R}_{+}}\underbrace{\max_{\pi_{1},\pi_{2}\in\Delta_{2}}\left(\pi_{1}^{T}A\pi_{2}+\lambda\left(\frac{1}{2}-\pi_{1}^{T}B\pi_{2}\right)\right)}_{=:d(\lambda)},\] where \(d(\lambda)\) is the _dual function_. Fig. 1 visualizes \(d(\lambda)\) for \(\lambda\in[0,2]\).
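Since the figure itself is not reproduced in this excerpt, the following short script (ours, not part of the original proof) numerically re-evaluates the primal value of Eq. (6) and the dual function \(d(\lambda)\) over a grid of product policies; it recovers the values \(P^{*}\approx 3.09\) and \(D^{*}=3.5\) used below.

```python
import numpy as np

# Reward and constraint matrices of the counter-example; threshold alpha = 1/2.
A = np.array([[3.0, 2.0], [2.0, 4.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
alpha = 0.5

# Enumerate product policies pi1 = [1 - a, a], pi2 = [1 - b, b] on a fine grid.
t = np.linspace(0.0, 1.0, 1001)
a, b = np.meshgrid(t, t, indexing="ij")
pi1 = np.stack([1.0 - a, a], axis=-1)                  # shape (1001, 1001, 2)
pi2 = np.stack([1.0 - b, b], axis=-1)
reward = np.einsum("...i,ij,...j->...", pi1, A, pi2)   # pi1^T A pi2
cost = np.einsum("...i,ij,...j->...", pi1, B, pi2)     # pi1^T B pi2

# Primal value of Eq. (6): best feasible product policy.
P_star = reward[cost <= alpha].max()

# Dual function d(lambda) and its minimum over lambda in [0, 2].
lambdas = np.linspace(0.0, 2.0, 201)
d = np.array([(reward + lam * (alpha - cost)).max() for lam in lambdas])
D_star, lam_star = d.min(), lambdas[d.argmin()]

print(f"P* = {P_star:.2f}")                             # ~ 3.09 (exactly 4.5 - sqrt(2))
print(f"D* = {D_star:.2f} at lambda = {lam_star:.2f}")  # 3.50 at lambda = 1.00
```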
Since the dual function is always convex, it is sufficient to focus only on this interval. From Fig. 1, we can see that \(d(\lambda)\) reaches its minimum at \(\lambda_{D}^{*}=1\) with \(D^{*}=d(\lambda_{D}^{*})=3.5\). Note that this is strictly larger than the primal solution \(P^{*}\approx 3.09\), and therefore, strong duality does not hold here.

Figure 1: This figure displays the dual function \(d(\lambda)\) for the CMPG in Eq. (6), evaluated at 1000 equidistant locations \(\lambda\in[0,2]\).

Next, we list two policies that are solutions to the dual problem, i.e., policies \(\pi_{D}^{*}\) that satisfy \(\pi_{D}^{*}\in\arg\max_{\pi}\left\{\pi_{1}^{T}A\pi_{2}+\lambda_{D}^{*}\left(\frac{1}{2}-\pi_{1}^{T}B\pi_{2}\right)\right\}\):

1. \(\pi_{1}=\left[1,0\right],\pi_{2}=\left[1,0\right]\)
2. \(\pi_{1}=\left[0,1\right],\pi_{2}=\left[0,1\right]\)

The first policy satisfies the constraints and is indeed a Nash policy, with a reward of \(3<P^{*}\). The second policy, however, does not satisfy the constraints. Solving the dual problem does therefore not necessarily guarantee a feasible policy.

**Remark:** To give an intuition on Proposition 3, consider a cooperative CMPG with \(\Phi\equiv V^{r}\), i.e., the potential function is equal to the shared value function \(V^{r}\). Note that, in this case, the primal problem very much resembles the CMDP objective (Eq. (4)) and it is tempting to solve the CMPG as a CMDP with a large action space \(\mathcal{A}=\times_{i=1}^{n}\mathcal{A}_{i}\). Recall also that strong duality does indeed hold for CMDPs (Paternain et al., 2019) and CMDPs can be solved via primal-dual algorithms. By solving this large CMDP, we obtain a solution \(\mathbf{\pi}^{*}\) that specifies distributions over the _joint_ action space \(\mathcal{A}\). To obtain a solution for the original CMPG, however, we require a policy that can be factored into a set of independent policies \(\{\pi_{i}^{*}\}_{i\in[n]}\) such that \(\mathbf{\pi}_{h}^{*}(a|s)=\prod_{i=1}^{n}\pi_{i,h}^{*}(a_{i}|s),\forall(s,a,h)\in\mathcal{S}\times\mathcal{A}\times[H]\).

## 5 Solving Constrained Markov Potential Games

In this section, we propose an efficient algorithm to compute Nash policies in CMPGs. Similar to the work on unconstrained MPGs by Song et al. (2021), in our algorithm **C**oordinate-**A**scent for **CMPGs** (CA-CMPG), agents take turns to solve a _Constrained Markov Decision Process_ (CMDP), i.e., a single-agent reinforcement learning problem, in every iteration. To do this, the agents need to coordinate, such that, when one agent is solving the CMDP, the others provide a stationary environment to that agent by keeping their policies fixed. There are some technical challenges compared to the unconstrained MPG setting. The main difference is that in the CMPG setting, to ensure the convergence to a Nash policy, we also need to ensure that the intermediate policies remain _feasible_ (see remark at the end of this section). Our algorithm CA-CMPG is described in Algorithm 1.

Footnote 6: Note that we may not find a Nash policy that solves Eq. (5) though.

We assume for now that the agents know their own reward functions, the cost functions as well as the transition model. As a starting point for CA-CMPG, the agents require access to a feasible, initial policy, which we state in the following assumption:

**Assumption 1**.: _Given a CMPG, the agents have access to a feasible policy \(\mathbf{\pi}^{S}\in\Pi_{C}\)._

This type of assumption is common for safe exploration in CMDPs (Bura et al., 2022; Liu et al., 2021b).
We discuss in Appendix B, why we require it for our setting. While, in general, it may be computationally hard to compute a feasible \(\mathbf{\pi}^{S}\) in the multi-agent setting, we now discuss two examples, for which it is easy to compute \(\mathbf{\pi}^{S}\). **Example 1** (Single Constraint).: _Consider the problem \(\min_{\mathbf{\pi}\in\Pi}V^{c_{1}}(\mathbf{\pi})\). Since the constraint set is feasible, we must have that \(\min_{\mathbf{\pi}\in\Pi}V^{c_{1}}(\mathbf{\pi})\leq\alpha_{1}\). Note that this is an unconstrained Markov decision process (MDP) with state space \(\mathcal{S}\) and action space \(\mathcal{A}\). It is well-known that MDPs always possess at least one deterministic, optimal policy, which can be computed using dynamic programming techniques. Thus, we compute a deterministic policy \(\mathbf{\pi}^{C}\in\arg\min_{\mathbf{\pi}\in\Pi}V^{c_{1}}(\mathbf{\pi})\), s.t. for every state \(s\in\mathcal{S}\) and step \(h\in[H]\), there is exactly one action \(a=(a_{1},...,a_{n})\in\mathcal{A}\), for which \(\mathbf{\pi}_{h}^{C}(a|s)=1\) and \(\mathbf{\pi}_{h}^{C}(a^{\prime}|s)=0,\forall a^{\prime}\neq a\). Then, for every agent \(i\in[n]\), we set \(\pi_{i,h}^{C}(a_{i}|s)=1\) and \(\pi_{i,h}^{C}(a_{i}^{\prime}|s)=0\), for all \(a_{i}^{\prime}\neq a_{i}\). It is easy to verify that \(\mathbf{\pi}^{C}=\prod_{i=1}^{n}\pi_{i}^{C}\)._ **Example 2** (Independent Transitions and Composite Constraints).: _Consider a CMPG with per-agent state spaces \(\mathcal{S}_{1},...,\mathcal{S}_{n}\) and transition models \(\mathcal{P}_{1},...,\mathcal{P}_{m}\), where \(\mathcal{P}_{j,h}(s^{\prime}|s,a)\) is the probability that agent \(j\) transitions to state \(s^{\prime}\in\mathcal{S}_{j}\) from state-action pair \((s,a)\in\mathcal{S}_{j}\times\mathcal{A}\) at step \(h\in[H]\). We denote by \(\mathcal{S}\triangleq\times_{i=1}^{n}\mathcal{S}\) the joint state space and define \(\mathcal{P}_{h}(s^{\prime}|s,a)\triangleq\prod_{i=1}^{n}\mathcal{P}_{h}^{i}(s ^{\prime}_{i}|s_{i},a_{i})\) as the joint probability of transitioning to state \(s^{\prime}\in\mathcal{S}\) from state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\) at step \(h\in[H]\). Furthermore, assume that for each \(j\in[k]\), the constraint function \(c_{j}\) can be written as \(c_{j,h}(s,a)\triangleq\sum_{i=1}^{n}c_{j,h}^{i}(s_{i},a_{i})\). Due to this, the cumulative constraints can be written as \(V^{c_{j}}(\mathbf{\pi})=\sum_{i=1}^{n}V^{c_{j}^{i}}(\pi_{i})\), \(\forall j\in[k]\). To find a feasible policy, each agent \(i\in[n]\) computes \(\pi_{i}\in\left\{\pi\in\Pi^{i}\middle|V^{c_{j}^{i}}(\pi)\leq c_{i}^{*},\forall j \in[k]\right\}\), where \(c_{i}^{*}\triangleq\min_{c\in\mathbb{R}}\left\{\exists\pi\in\Pi^{i}\middle|V^{ c_{j}^{i}}(\pi)\leq c,\forall j\in[k]\right\}\). Assuming that the constraint set is feasible, it is easy to see that \(\mathbf{\pi}^{S}=(\pi_{1}^{S},...,\pi_{n}^{S})\) must be feasible._ In CA-CMPG, the agents start with the feasible policy \(\mathbf{\pi}^{S}\). In every iteration, the agents take turns to maximize their own value function. While one agent is maximizing its value function, the other agents keep their policy fixed (Line 4); therefore, that agent is essentially solving a CMDP. 
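Returning briefly to Example 1: the feasible starting policy there can be obtained with standard backward induction on the cost, treated as the objective of an unconstrained MDP over the joint action space. A minimal sketch, assuming dense numpy arrays with hypothetical shapes (this is not the paper's implementation):

```python
import numpy as np

def min_cost_joint_policy(P, cost):
    """Backward induction minimizing the cumulative cost V^{c_1}.
    P: (H, S, A_joint, S) transitions, cost: (H, S, A_joint) per-step costs.
    Returns the optimal cost-to-go V and a deterministic joint policy (H, S)."""
    H, S, A, _ = P.shape
    V = np.zeros(S)
    greedy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = cost[h] + P[h] @ V          # (S, A): immediate cost + expected cost-to-go
        greedy[h] = Q.argmin(axis=1)
        V = Q.min(axis=1)
    return V, greedy                     # feasible iff mu @ V <= alpha_1
```

The greedy joint action at every \((h,s)\) can then be split into one action per agent (e.g. with `np.unravel_index` over the per-agent action counts), giving the deterministic per-agent policies \(\pi_{i}^{C}\) described in Example 1.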
In iteration \(t\), agent \(i\in[n]\) faces the CMDP \(\mathcal{M}=\left(\mathcal{S},\mathcal{A}_{i},H,\left\{\widetilde{\mathcal{P}} _{h}\right\}_{h=1}^{H},\left\{\widetilde{r}_{h}\right\}_{h=1}^{H},\mu,\left\{ \left(\left\{\widetilde{c}_{j,h}\right\}_{h=1}^{H},\alpha_{j}\right)\right\}_{ j=1}^{k}\right)\), where the reward function \(\widetilde{r}\), cost functions \(\left\{\widetilde{c}_{j}\right\}_{j\in[n]}\) and transition model \(\widetilde{\mathcal{P}}\) are defined according to Eq. (7), Eq. (8) and Eq. (9): \[\widetilde{r}_{h}(s,a_{i}) \triangleq\sum_{a_{-i}\in\mathcal{A}\setminus\mathcal{A}_{i}}r_ {i,h}(s,(a_{i},a_{-i}))\cdot\mathbf{\pi}_{-i,h}^{t-1}(a_{-i}|s), \tag{7}\] \[\widetilde{c}_{j,h}(s,a_{i}) \triangleq\sum_{a_{-i}\in\mathcal{A}\setminus\mathcal{A}_{i}}c_ {j,h}(s,(a_{i},a_{-i}))\cdot\mathbf{\pi}_{-i,h}^{t-1}(a_{-i}|s),\] (8) \[\widetilde{\mathcal{P}}_{h}(s^{\prime}|s,a_{i}) \triangleq\sum_{a_{-i}\in\mathcal{A}\setminus\mathcal{A}_{i}} \mathcal{P}_{h}(s^{\prime}|s,(a_{i},a_{-i}))\cdot\mathbf{\pi}_{-i,h}^{t-1}(a_{-i} |s), \tag{9}\] for all \((s,a_{i},s^{\prime},h)\in\mathcal{S}\times\mathcal{A}_{i}\times\mathcal{S} \times[H]\). Let us recall the CMDP objective from Eq. (4). In practice, we can only solve Eq. (4) _approximately_. Given \(\varepsilon>0\), we assume that in every iteration \(t\), agent \(i\in[n]\) can efficiently compute a policy \(\hat{\pi}_{i}^{t}\in\Pi_{C}^{i}(\mathbf{\pi}_{-i}^{t-1})\) such that it satisfies the following conditions7: Footnote 7: This can be achieved using state-of-the-art primal-dual methods, such as the work by Ding et al. (2020); Paternain et al. (2019). \[\max_{\pi\in\Pi_{C}^{i}(\mathbf{\pi}_{-i}^{t-1})}V^{r_{i}}(\pi,\mathbf{\pi}_{-i}^{t-1} )-V^{r_{i}}(\hat{\pi}_{i}^{t},\mathbf{\pi}_{-i}^{t-1})\leq\varepsilon/2. \tag{10}\] Due to the potential property (Eq. (2)), if agent \(i\in[n]\) improves its own value function, it implicitly also improves the potential function. To prove that the potential function can be increased only a finite number of times, implying termination of CA-CMPG, we require the potential function to be bounded. **Lemma 1**.: _Fix an arbitrary base policy \(\mathbf{\pi}^{B}\in\Pi\). Then, for every \(\mathbf{\pi}\in\Pi\), the potential function can be bounded as: \(\Phi(\mathbf{\pi})\leq nH+\Phi(\mathbf{\pi}^{B})\)._ We defer the proofs to all theoretical results in this section to Appendix B. CA-CMPG terminates when the agents cannot deviate unilaterally and improve their value function by more than \(\varepsilon\), i.e., when they reach an \(\varepsilon\)-Nash policy. We state this result in the following theorem: **Theorem 1**.: _Suppose that Assumption 1 holds. Then, given \(\varepsilon>0\), if we invoke CA-CMPG with \(T=\frac{2nH}{\varepsilon}\), it converges to an \(\varepsilon\)-Nash policy._ **Remark:** What if we relax the feasibility requirement in Eq. (10) and allow the CMDP solver to return an \(\varepsilon\)-_feasible_ policy \(\mathbf{\pi}\) such that \(V^{c_{j}}(\mathbf{\pi})\leq\alpha_{j}+\varepsilon\), \(\forall j\in[k]\), for an \(\varepsilon>0\)? In that case, the intermediate policies might not be feasible and CA-CMPG may get stuck in an infeasible policy, which is not a Nash policy. ## 6 Learning in Unknown Constrained Markov Potential Games In this section, we assume that the agents do not know the transition model beforehand. For simplicity, we assume that they do know the rewards and costs8. Our objective is to establish a _sample complexity_ bound for learning in CMPGs. 
Concretely, we want to construct an algorithm, such that, given any \(\varepsilon>0,\delta\in(0,1)\), the algorithm returns an \(\varepsilon\)-Nash policy with probability at least \(1-\delta\), using at most \(\mathcal{F}(\varepsilon,\delta)\) _samples_ from the transition model \(\mathcal{P}\). Before we proceed, we define an important quantity related to the constraint set, which also contributes to the final sample complexity.

Footnote 8: In general, learning the transitions is harder than learning rewards and costs. Concretely, this also means that learning rewards and costs will not add any dominating terms to the overall sample complexity (see Vaswani et al. (2022)).

**Definition 1** (Slater constant).: _Given a feasible CMPG \(\mathcal{G}\), we define its Slater constant \(\zeta\) as follows:_

\[\zeta\triangleq\min_{i\in[n]}\min_{\mathbf{\pi}_{-i}\in\Pi\setminus\Pi^{i}}\max_{\pi\in\Pi^{i}}\{\alpha-V^{c}(\pi,\mathbf{\pi}_{-i})\}.\]

_We call \(\mathcal{G}\) strictly feasible if and only if \(\zeta>0\)._

In the rest of this section, we assume that the agents face an unknown, strictly feasible CMPG with Slater constant \(\zeta>0\). Next, we discuss which parts of CA-CMPG need to be adapted for this setting.

1. In every iteration \(t\), each agent \(i\in[n]\) needs to solve the CMDP described in Section 5 (Line 4). To solve this CMDP, we assume access to a _sample-efficient_ CMDP solver, which has the following guarantees: Given \(\varepsilon>0,\delta\in(0,1)\), the solver uses at most \(\mathcal{F}_{C}\left(|\mathcal{S}|,|\mathcal{A}_{i}|,H,\zeta,\delta,\frac{\varepsilon}{4}\right)\) samples and returns a policy \(\hat{\pi}_{i}^{t}\in\Pi_{C}^{i}(\boldsymbol{\pi}_{-i}^{t-1})\) such that it satisfies the following, with probability at least \(1-\delta\): \[\max_{\pi\in\Pi_{C}^{i}(\boldsymbol{\pi}_{-i}^{t-1})}V^{r_{i}}(\pi,\boldsymbol{\pi}_{-i}^{t-1})-V^{r_{i}}(\hat{\pi}_{i}^{t},\boldsymbol{\pi}_{-i}^{t-1})\leq\varepsilon/4. \tag{11}\] Compared to the setting with known transitions, we have a stricter bound on the approximation error of \(\varepsilon/4\) here. We discuss in Appendix C why we require this.

2. To compute \(\varepsilon_{i}^{t}\) in step \(t\), agent \(i\) needs to estimate the value functions \(V^{r_{i}}(\hat{\pi}_{i}^{t},\boldsymbol{\pi}_{-i}^{t-1})\) and \(V^{r_{i}}(\boldsymbol{\pi}^{t-1})\). For the former, the agents execute the policy \((\hat{\pi}_{i}^{t},\boldsymbol{\pi}_{-i}^{t-1})\) for \(M>0\) episodes and agent \(i\) estimates \(\hat{V}^{r_{i}}(\hat{\pi}_{i}^{t},\boldsymbol{\pi}_{-i}^{t-1})\) with the average of the observed, cumulative rewards. For the latter, similarly, the agents execute \(\boldsymbol{\pi}^{t-1}\) for \(M\) episodes, but these observations can be used to estimate \(V^{r_{1}}(\boldsymbol{\pi}^{t-1}),...,V^{r_{n}}(\boldsymbol{\pi}^{t-1})\) simultaneously.

Footnote 9: Each episode is a sequence of \(H\) steps. At the beginning of each episode, the initial state is freshly sampled from \(\mu\).

Footnote 10: This holds because we assumed that the reward functions are known.

The resulting algorithm **C**oordinate-**A**scent for **CMPGs** with **E**xploration (CA-CMPG-E) is described in Algorithm 2.

**Theorem 2**.: _Given a strictly feasible CMPG \(\mathcal{G}\) with Slater constant \(\zeta>0\), suppose that the agents have access to an initial feasible policy (cf. Assumption 1)._
Furthermore, assume that the agents have access to a sample-efficient CMDP solver (Eq. (11)). Then, for any \(\varepsilon>0\), \(\delta\in(0,1)\), CA-CMPG-E invoked with \(M=\frac{32H^{2}}{\varepsilon^{2}}\log\left(\frac{32n^{2}H}{\varepsilon\delta}\right)\) and \(T=\frac{4nH}{\varepsilon}\) returns an \(\varepsilon\)-Nash policy with probability at least \(1-\delta\), using the following number of samples:_ \[\mathcal{F}(\varepsilon,\delta)\triangleq\sum_{t=1}^{T}\sum_{i=1}^{n} \mathcal{F}_{C}\left(|S|,|\mathcal{A}_{i}|,H,\zeta,\frac{\varepsilon\delta}{8 n^{2}H},\frac{\varepsilon}{4}\right)+\frac{256n^{2}H^{4}}{\varepsilon^{3}}\log \left(\frac{32n^{2}H}{\varepsilon\delta}\right).\] In the next two sub-sections, we will instantiate CA-CMPG-E with two different state-of-the-art CMDP solvers and state the resulting sample complexity bounds. Both algorithms are designed for CMDPs with a _single_ constraint. Due to this, we set \(k=1\) and denote our cost function by \(\left\{c_{h}\right\}_{h=1}^{H}\) and refer to the constraint parameter as \(\alpha\). Note that this is due to a limitation of the existing CMDP algorithms and not of CA-CMPG-E. ### Generative model In this section, we assume that the agents have access to a _generative model_, i.e., they can directly obtain samples from the transition model \(\mathcal{P}_{h}(\cdot|s,a)\), for any state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\) and any \(h\in[H]\). Similar to previous results in CMDPs Vaswani et al. (2022) we propose a novel algorithm for finite-horizon CMDPs and describe it in Algorithm 3. Lemma 2 (cf. Appendix D) establishes the sample complexity for Algorithm 3. **Corollary 1**.: _Given a strictly feasible CMPG \(\mathcal{G}\), assume that its Slater constant \(\zeta>0\) is known. Furthermore, assume that the agents invoke Algorithm 3 with \(\varepsilon^{\prime}=\frac{\varepsilon}{4}\), \(\delta^{\prime}=\mathcal{O}\left(\frac{\varepsilon\delta}{n^{2}H}\right)\) and parameters set as in Lemma 2 to solve Eq. (11)._ Then, for any \(\varepsilon>0,\delta\in(0,1)\), CA-CMPG-E invoked with \(M=\mathcal{O}\left(\frac{H^{2}}{\varepsilon^{2}}\log\left(\frac{nH}{\varepsilon \delta}\right)\right)\) and \(T=\frac{4nH}{\varepsilon}\), returns an \(\varepsilon\)-Nash policy with probability at least \(1-\delta\) with an overall sample complexity of: \[\mathcal{F}(\varepsilon,\delta)\leq\widetilde{\mathcal{O}}\left(\frac{n| \mathcal{S}|H^{8}\log\left(\frac{1}{\varepsilon\delta}\right)\sum_{i=1}^{n}| \mathcal{A}_{i}|}{\varepsilon^{3}\zeta^{2}}+\frac{n^{2}H^{4}\log\left(\frac{1 }{\varepsilon\delta}\right)}{\varepsilon^{3}}\right).\] **Remark:** Compared to the result for _unconstrained_ MPGs (Song et al., 2021, Theorem 7), our Corollary 1 has an additional dependence on \(\frac{1}{\varepsilon^{2}}\) and a worse dependence on the horizon \(H\). These are due to the fact that our CMDP solver must always return a _feasible_ policy. Finally, the sample complexity result in Song et al. (2021) explicitly depends on \(\Phi_{max}\triangleq\max_{\pi\in\Pi}\Phi(\pi)\), whereas we substituted \(\Phi_{\max}\leq nH\) (Lemma 1). ### Safe exploration without a generative model We now consider the more challenging setting where the agents do not have access to a generative model, but can only explore by executing policies and observing the transitions. Moreover, during the learning process, we want to ensure that the agents explore _safely_. 
Existing algorithms with safe exploration (Bura et al., 2022; Liu et al., 2021b) have guarantees on the _regret_, but no sample complexity guarantees. To address this, we derive a sample complexity bound for the algorithm by Bura et al. (2022) (Algorithm 4) in Lemma 14. To apply this CMDP solver in CA-CMPG-E, we need to ensure that in every iteration, the agents have access to a _strictly_ feasible policy. We state a stronger condition in the following assumption.

**Assumption 2**.: _There exists \(c\in(0,\zeta]\) s.t. for any agent \(i\in[n]\) and policy \(\pi_{-i}\in\Pi_{C}\setminus\Pi^{i}\) of the other agents, the agent can obtain a strictly feasible policy \(\pi\in\Pi^{i}\) s.t. \(V^{c}(\pi,\pi_{-i})\leq\alpha-c\)._

This is a stronger assumption than in Section 6.1, as we additionally require access to a strictly feasible policy for every CMDP that is solved in CA-CMPG-E.

**Corollary 2**.: _Suppose that Assumption 2 holds. Given \(\varepsilon>0,\delta\in(0,1)\), assume that we invoke CA-CMPG-E with \(M=\mathcal{O}\left(\frac{H^{2}}{\varepsilon^{2}}\log\left(\frac{nH}{\varepsilon\delta}\right)\right)\) and \(T=\frac{4nH}{\varepsilon}\). Furthermore, assume that we use Algorithm 4 as CMDP solver with \(\varepsilon^{\prime}=\frac{\varepsilon}{4}\), \(\delta^{\prime}=\mathcal{O}\left(\frac{\varepsilon\delta}{n^{2}H}\right)\) and parameters set as in Lemma 14. Then, CA-CMPG-E returns an \(\varepsilon\)-Nash policy with probability at least \(1-\delta\) with an overall sample complexity of:_

\[\mathcal{F}(\varepsilon,\delta)\leq\widetilde{\mathcal{O}}\left(\frac{n|\mathcal{S}|^{2}H^{10}\log\left(\frac{1}{\varepsilon\delta}\right)\sum_{i=1}^{n}|\mathcal{A}_{i}|}{\varepsilon^{5}c^{2}}+\frac{n^{2}H^{4}\log\left(\frac{1}{\varepsilon\delta}\right)}{\varepsilon^{3}}\right).\]

Note that to satisfy Assumption 2, any \(c\in(0,\zeta]\) is a valid choice. A large \(c\) yields a better sample complexity for Corollary 2, but restricts the set of strictly feasible policies for the CMDP solver. A smaller \(c\) increases the sample complexity, but gives more flexibility, as it allows for a larger set of strictly feasible policies. Comparing the two corollaries, we observe that safe exploration without a generative model leads to a worse dependence on \(|S|\), \(H\) and \(\varepsilon\).

## 7 Experiments

### Grid world

We consider a cooperative CMPG with two agents, in which the agents navigate in a 4x4 grid world (cf. Fig. 2(a)). Each cell in the grid represents a state and in every state, each agent can choose to move _up_, _right_, _down_ or _left_. State transitions are deterministic and if an agent selects an action that would make it leave the grid, it remains in the current state. Fig. 2(a) illustrates the rewards that an agent can obtain in the individual states. Both agents start from the bottom left state and their goal is to reach the target state, which is the state with a reward of 10. To model this as a cooperative game, we set the agents' joint objective to be the sum of their individual rewards. Whenever the agents occupy the same state, excluding the start and target states, they _collide_ and incur a cost of 1. The agents must keep the expected cost below a pre-defined threshold \(\alpha\in[0,1]\). We evaluate our algorithm CA-CMPG with known transitions and use a primal-dual algorithm as CMDP solver. We set the horizon to \(H=6\) and use a threshold of \(\alpha=0.1\).
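A minimal sketch of this grid-world environment (the per-state rewards are placeholders, since their exact values are only given in Fig. 2(a); the start and target cells below are assumptions for illustration):

```python
import numpy as np

MOVES = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}   # up, right, down, left
START, TARGET = (3, 0), (0, 3)                            # assumed corner cells

def step(state, actions):
    """state: ((r1, c1), (r2, c2)), actions: (a1, a2). Deterministic transitions;
    a move that would leave the 4x4 grid keeps the agent in place.  A collision
    (same cell, excluding start and target) incurs a cost of 1, which must stay
    below alpha in expectation."""
    nxt = []
    for (r, c), a in zip(state, actions):
        dr, dc = MOVES[a]
        nr, nc = r + dr, c + dc
        nxt.append((nr, nc) if 0 <= nr < 4 and 0 <= nc < 4 else (r, c))
    collide = nxt[0] == nxt[1] and nxt[0] not in (START, TARGET)
    return tuple(nxt), (1.0 if collide else 0.0)
```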
Fig. 3(a) (top row) displays the reward differences between the current policy and the new policy for both agents after every cycle of the algorithm, averaged over 20 runs. One cycle corresponds to one full iteration of Algorithm 2, i.e. all agents solving their CMDPs. When the reward differences reach zero for both agents, this implies that the agents have converged to a Nash policy. The bottom row tracks the cost over the cycles of Algorithm 2. The agents start from a strictly feasible policy with a cost of 0, and converge to a policy with a cost close to \(\alpha\). The resulting policies with the corresponding probabilities are shown in Fig. 2(b). With this, the agents collide once with probability 0.1, thus satisfying the constraint of the experiment. On the other hand, if we solve the Lagrangian dual problem directly (Section 4), one of the returned policies is illustrated in Fig. 2(c). In this case, the agents always have one collision, which does not satisfy the constraint of the experiment.

Figure 2: Grid world experiment: Fig. 2(a) illustrates the state space that the agents navigate in. Both agents start from the bottom left state and their goal is to maximize the sum of their individual rewards. The numbers on the states indicate the rewards associated with those states. The choice of parameters for our evaluation is described in Section 7. Fig. 2(b) displays the policies with their corresponding probabilities returned by CA-CMPG. If the agents were to solve the dual problem directly (Section 4), they might obtain the policy illustrated in Fig. 2(c), which is not feasible.

### Congestion game

We consider a finite-horizon version of the setup described in Leonardos et al. (2022), i.e., a non-cooperative MPG in which every state is a congestion game. The game consists of two states \(\mathcal{S}=\{\mathtt{safe},\mathtt{unsafe}\}\), \(N\) agents and action space \(\mathcal{A}=\{A,B,C,D\}\) for every agent. Each action \(a\in\mathcal{A}\) in state \(s\in\mathcal{S}\) has a weight \(w_{a}^{s}>0\) associated with it. In the safe state, an agent that selects action \(a\in\mathcal{A}\) receives a reward of \(k_{a}\cdot w_{a}^{\texttt{safe}}\), where \(k_{a}\) denotes the number of agents that selected action \(a\). In the unsafe state, the reward structure is similar, however, we subtract an offset \(c\geq 0\), resulting in a reward of \(k_{a}\cdot w_{a}^{\texttt{unsafe}}-c\). In both states \(s\in\mathcal{S}\), the weights follow the order \(w_{A}^{s}<w_{B}^{s}<w_{C}^{s}<w_{D}^{s}\). Thus, in both states, the agents prefer to take the action that is chosen by most agents. Furthermore, for every action \(a\in\mathcal{A}\), \(k_{a}\cdot w_{a}^{\texttt{safe}}\gg k_{a}\cdot w_{a}^{\texttt{unsafe}}-c\) s.t. the agents prefer to stay in the safe state. In the safe state, if more than \(N/2\) agents choose the same action, the system transitions to the unsafe state. To get back to the safe state from the unsafe state, the agents must equally distribute themselves among the four actions. The transitions are illustrated in Fig. 4. We evaluate our algorithm CA-CMPG with \(N=8\) agents and a horizon of \(H=2\). Furthermore, we assume that the transitions are known and use a linear program to solve the CMDPs (Altman, 1999). For the initial state, we set \(\mu(\texttt{safe})=\mu(\texttt{unsafe})=0.5\). At step \(h=1\), in the unsafe state, if more than \(N/2\) agents select the same action, the agents incur a cost of 1. Their goal is to keep the cost below a threshold \(\alpha=0.5\).
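The linear-programming formulation mentioned above (Altman, 1999) works with occupancy measures. A generic sketch for a finite-horizon CMDP with a single constraint, assuming scipy and dense arrays with hypothetical shapes (this is not necessarily the exact solver used in the experiments):

```python
import numpy as np
from scipy.optimize import linprog

def solve_cmdp_lp(P, r, cost, mu, alpha):
    """P: (H, S, A, S) transitions, r, cost: (H, S, A), mu: (S,), alpha: budget.
    Maximizes the expected reward over occupancy measures d_h(s, a) subject to
    flow constraints and the expected-cost constraint; returns a policy (H, S, A)."""
    H, S, A, _ = P.shape
    n = H * S * A
    idx = lambda h, s, a: (h * S + s) * A + a

    A_eq, b_eq = [], []
    for s in range(S):                               # sum_a d_1(s, a) = mu(s)
        row = np.zeros(n); row[[idx(0, s, a) for a in range(A)]] = 1.0
        A_eq.append(row); b_eq.append(mu[s])
    for h in range(H - 1):                           # flow conservation across steps
        for s2 in range(S):
            row = np.zeros(n)
            row[[idx(h + 1, s2, a) for a in range(A)]] = 1.0
            for s in range(S):
                for a in range(A):
                    row[idx(h, s, a)] -= P[h, s, a, s2]
            A_eq.append(row); b_eq.append(0.0)

    res = linprog(-r.reshape(-1),                    # maximize total expected reward
                  A_ub=cost.reshape(1, -1), b_ub=[alpha],
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n)
    d = res.x.reshape(H, S, A)
    occ = d.sum(axis=2, keepdims=True)
    return np.where(occ > 1e-12, d / np.maximum(occ, 1e-12), 1.0 / A)
```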
Fig. 3(b) (top row) displays, as before, the reward differences between the current policy and the new policy, for each agent and averaged over 50 runs. When this difference reaches zero, this implies that the agents have converged to a Nash policy. The bottom plots track the cost over the cycles of Algorithm 2. The agents start from a strictly feasible policy with a cost of 0, and converge to a value close to \(\alpha\).

Figure 3: These plots illustrate the results of the grid world (Fig. 3(a)) and congestion game (Fig. 3(b)) experiments. One cycle on the x-axis corresponds to one full iteration of Algorithm 2, i.e. all \(N\) agents solving their CMDPs. The top row displays, for each agent, an average of their reward difference between the current and new policy. When the difference reaches zero, they have converged to a Nash policy. The bottom row tracks the averaged cost over the cycles of Algorithm 2 (green, solid line). The agents start from a strictly feasible policy and converge to a cost close to \(\alpha\) (red, dashed line). In all plots, we additionally display the standard error.

Figure 4: Congestion game experiment: For every action \(a\in\mathcal{A}\), we denote by \(k_{a}\) the number of agents that select \(a\) in the current step. This figure visualizes the state transitions in every step, where \(k^{*}\triangleq\max_{a\in\mathcal{A}}k_{a}\) denotes the maximum number of agents that have selected the same action.

Fig. 4(a) and Fig. 4(b) plot the resulting distributions over the actions for steps \(h=1\) and \(h=2\), respectively. We observe that in step \(h=1\), if the agents start from the safe state, they select their actions s.t. in step \(h=2\), the system remains in the safe state. At step \(h=2\), the agents maximize their rewards by selecting action D, irrespective of the state that the system is in. At step \(h=1\), without the constraints, all agents would prefer to choose action D at the unsafe state. With the choice of our constraints, as we can observe in Fig. 4(a), the agents distribute themselves equally amongst actions C and D.

## 8 Conclusion

In this paper, we proved that strong duality does not always hold in CMPGs, making primal-dual approaches inapplicable. An interesting future question could be to understand under which conditions primal-dual methods may work for CMPGs. To tackle CMPGs, we presented our algorithm CA-CMPG, which provably converges to an \(\varepsilon\)-Nash policy. Note that while this paper focuses on the finite-horizon setting, our algorithm CA-CMPG can be adapted to the discounted, infinite-horizon setting by using an appropriate CMDP solver as a sub-routine. Furthermore, we established the first sample complexity bound for learning in CMPGs. In CA-CMPG, exploration happens only within the CMDP sub-routines. It would be interesting to understand whether the sample complexity bound for the generative model setting (Section 6.1) can be made tighter if we move the exploration outside the CMDP sub-routines.

## Acknowledgments and Disclosure of Funding

We thank Daniil Dmitriev, Manish Prajapat and Vignesh Ram Somnath for their valuable comments on the paper. This research was primarily supported by the ETH AI Center. Pragnya Alatur has been funded in part by ETH Foundations of Data Science (ETH-FDS). Giorgia Ramponi is partially funded by Google Brain.
2303.15593
Limits of polyhedral multinomial distributions
We consider limits of certain measures supported on lattice points in lattice polyhedra defined as the intersection of half-spaces $\{x\in\mathbb{R}^n \mid \langle v_i,x\rangle+a_i \geq 0\}$, where $\sum_i v_i = 0$. The measures are densities associated to lattice random variables obtained by restriction of multinomial random variables. We find the limiting Gaussian distributions explicitly.
Aniket Shah
2023-03-27T20:50:35Z
http://arxiv.org/abs/2303.15593v1
# Limits of polyhedral multinomial distributions ###### Abstract. We consider limits of certain measures supported on lattice points in lattice polyhedra defined as the intersection of half-spaces \(\{m\in\mathbb{R}^{n}|\langle v_{i},x\rangle+a_{i}\geq 0\}\), where \(\sum_{i}v_{i}=0\). The measures are densities associated to lattice random variables obtained by restriction of multinomial random variables. We find the limiting Gaussian distributions explicitly. The author was supported by Charles University project PRIMUS/21/SCI/014. ## 1. Introduction Let \(\mathbf{a}=(a_{1},\ldots,a_{r})\in\mathbb{Z}^{r}\) and \(v_{1},\ldots,v_{r}\in\mathbb{Z}^{n}\) define hyperplanes \(H_{i}=\{x\in\mathbb{R}^{n}|\langle v_{i},x\rangle+a_{i}=0\}\), such that the corresponding intersection of half-spaces \(P=\bigcap_{i}\{x\in\mathbb{R}^{n}|\langle v_{i},x\rangle+a_{i}\geq 0\}\) is compact, and each \(H_{i}\) touches \(P\). Then, we define the random vector \(\mathbf{X}_{\mathbf{a}}\) in \(\mathbb{R}^{n}\) by \[\mathbb{P}(\mathbf{X}_{\mathbf{a}}=x)=\frac{1}{b}\cdot\binom{\sum_{i}\langle v _{i},x\rangle+a_{i}}{\langle v_{1},x\rangle+a_{1},\ldots,\langle v_{r},x \rangle+a_{r}},\] for each lattice point \(x\in P\cap\mathbb{Z}^{n}\). We call its distribution the _polyhedral multinomial distribution_ associated to \(\mathbf{a}\). Imposing the further condition that \(\sum_{i}v_{i}=0\), the top term in the multinomial simplifies to \(|\mathbf{a}|:=\sum_{i}a_{i}\), so \[\mathbb{P}(\mathbf{X}_{\mathbf{a}}=x)=\frac{1}{b}\cdot\binom{|\mathbf{a}|}{ \langle v_{1},x\rangle+a_{1},\ldots,\langle v_{r},x\rangle+a_{r}}.\] In the special case that \(r=n+1\), \(v_{i}=e_{i}\) for \(i\) from \(1\) to \(n\), and \(v_{n+1}=-\sum_{i}e_{i}\), the corresponding distribution is the usual multinomial. More generally, \(\mathbf{X}_{\mathbf{a}}\) follows a conditional distribution of a multinomial distribution on a higher dimensional vector space. We are interested in the limiting behavior of \(\mathbf{X}_{k\cdot\mathbf{a}}\) as \(k\) goes to infinity. For multinomial distributions, the limit is well-known to approach a Gaussian, due to the central limit theorem. More generally, the distribution of \(\mathbf{X}_{k\cdot\mathbf{a}}\) does not remain within the the region (which grows sublinearly in \(k\) around the mean) controlled by e.g. the central or local limit theorems [1, 2] for higher dimensional multinomial distributions. Our approach is to instead approximate directly using Stirling's formula. Our main result is that after recentering and scaling appropriately, the polyhedral multinomial distributions \(\mathbf{X}_{k\cdot\mathbf{a}}\) converge to a specific Gaussian distribution supported on the subspace \(L\) generated by differences of vectors in \(P\). We let \(I_{\mathbf{a}}\subset\{1,\ldots,r\}\) denote the indices \(i\) such that \(\langle v_{i},x\rangle\) is not constant on \(P\). The point \(m_{\mathbf{a}}\in P\) is defined in Proposition 2.2. 
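As a concrete instance (a sketch, not from the paper): for \(n=1\), \(r=2\), \(v_{1}=1\), \(v_{2}=-1\) and \(\mathbf{a}=(0,l)\), the polyhedron \(P\) is the interval \([0,l]\) and the weight of a lattice point \(x\) is \(\binom{l}{x}\), so \(\mathbf{X}_{\mathbf{a}}\) is simply a Binomial\((l,1/2)\) random variable:

```python
import numpy as np
from scipy.special import comb

l = 6
weights = np.array([comb(l, x, exact=True) for x in range(l + 1)], dtype=float)
pmf = weights / weights.sum()       # P(X_a = x) for x = 0, ..., l
print(pmf)                          # symmetric around l/2, a Binomial(l, 1/2)
```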
**Theorem 1.1**.: _When \(\sum_{i}v_{i}=0\), the sequence \(\mathbf{Y}_{k}=\frac{\mathbf{X}_{k\cdot\mathbf{a}}-\mathbb{E}[\mathbf{X}_{k\cdot\mathbf{a}}]}{\sqrt{k}}\) converges weakly to \(\mathbf{Y}\) with density given by the Dirac measure \(\delta_{L}\) times a Gaussian with mean \(0\) and variance given by the quadratic form_

\[x\mapsto\sum_{i\in I_{\mathbf{a}}}\frac{\langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}.\]

**Remark 1.2**.: This research was motivated by consideration of an analogue of Duistermaat-Heckman measure in [1] while studying line bundles on the toric arc scheme as defined in [1]. Those familiar with toric geometry may recognize that besides \(\sum_{i}v_{i}=0\), the conditions on \(v_{i}\) and \(\mathbf{a}\) relate to \(\sum_{i}a_{i}D_{i}\) defining a nef divisor on an associated toric variety.

## 2. The potential corresponding to the data \((a_{1},\ldots,a_{r})\)

Let \(\delta_{x}\) be the Dirac probability measure supported at \(x\in\mathbb{R}^{n}\). We let \(\mathbf{a}=(a_{1},\ldots,a_{r})\in\mathbb{Z}^{r}\) and \(v_{1},\ldots,v_{r}\in\mathbb{Z}^{n}\) define \(H_{i}=\{x\in\mathbb{R}^{n}|\langle v_{i},x\rangle+a_{i}=0\}\) as in the introduction. We assume the intersection \(P=\bigcap_{i}\{x\in\mathbb{R}^{n}|\langle v_{i},x\rangle+a_{i}\geq 0\}\) is compact, and each \(H_{i}\) touches \(P\). The distribution of the random vector \(\mathbf{X}_{\mathbf{a}}\) is

\[\mu_{\mathbf{a}}=\frac{1}{b}\cdot\sum_{x\in P\cap\mathbb{Z}^{n}}\binom{|\mathbf{a}|}{\langle v_{1},x\rangle+a_{1},\ldots,\langle v_{r},x\rangle+a_{r}}\delta_{x},\]

and the distribution for \(\mathbf{Y}_{k}=\frac{\mathbf{X}_{k\cdot\mathbf{a}}-\mathbb{E}[\mathbf{X}_{k\cdot\mathbf{a}}]}{\sqrt{k}}\) is

\[\nu_{k}:=\left(\tau_{\sqrt{k}}\right)_{*}\left(\mu_{k\cdot\mathbf{a}}*\delta_{-\mathbb{E}[\mathbf{X}_{k\cdot\mathbf{a}}]}\right).\]

We will relate \(\nu_{k}\) to the following function, which we call the _potential_ of \(\mathbf{a}\).

**Definition 2.1**.: Let \(\varphi_{\mathbf{a}}:P\rightarrow\mathbb{R}_{>0}\) be the function

\[x\mapsto\prod_{i=1}^{r}\left(\langle v_{i},x\rangle+a_{i}\right)^{\langle v_{i},x\rangle+a_{i}}.\]

In this product we read \(0^{0}\) as \(1\), so \(\prod_{i=1}^{r}\left(\langle v_{i},x\rangle+a_{i}\right)^{\langle v_{i},x\rangle+a_{i}}=\prod_{i\in I_{\mathbf{a}}}\left(\langle v_{i},x\rangle+a_{i}\right)^{\langle v_{i},x\rangle+a_{i}}.\) We let \(\mathrm{rel.int.}(P)\) be the interior of \(P\) viewed as a subspace of the affine linear span of vectors in \(P\). For \(x\in\mathrm{rel.int.}(P)\) and each \(i\in I_{\mathbf{a}}\), \(\langle v_{i},x\rangle+a_{i}>0\).

**Proposition 2.2**.: _The function \(\varphi_{\mathbf{a}}\) is convex on \(P\), and there is a unique \(m_{\mathbf{a}}\in\mathrm{rel.int.}(P)\) minimizing \(\varphi_{\mathbf{a}}\)._

Proof.: Let \(x\) be in the relative interior of \(P\), so \(\langle v_{i},x\rangle+a_{i}>0\) for \(i\in I_{\mathbf{a}}\), and let \(x^{\prime}\) be in \(L\). We calculate

\[\frac{d\left(\varphi_{\mathbf{a}}(x+tx^{\prime})\right)}{dt}=\varphi_{\mathbf{a}}(x)\cdot\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x^{\prime}\rangle\left(\log(\langle v_{i},x+tx^{\prime}\rangle+a_{i})+1\right).\]

We have assumed that \(0=\sum_{i}v_{i}=\sum_{i\in I_{\mathbf{a}}}v_{i}+\sum_{i\notin I_{\mathbf{a}}}v_{i}\). Any \(x^{\prime}\in L\) can be written as a sum of differences of elements of \(P\), so for \(i\notin I_{\mathbf{a}}\), we have that \(\langle v_{i},x^{\prime}\rangle=0\).
Then we can calculate that \(\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x^{\prime}\rangle=0\) as well. Thus, \[\frac{d\left(\varphi_{\mathbf{a}}(x+tx^{\prime})\right)}{dt}=\varphi_{\mathbf{ a}}(x)\cdot\left(\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x^{\prime}\rangle \log(\langle v_{i},x+tx^{\prime}\rangle+a_{i})\right).\] If \(x\) is in the relative interior of \(P\), \(\langle v_{i},x\rangle+a_{i}>0\) for \(i\in I_{\mathbf{a}}\). As \(x+tx^{\prime}\) approaches the boundary of \(P\) at some finite positive \(t_{0}\), we have that for some \(i\in I_{\mathbf{a}}\), \(\langle v_{i},x+tx^{\prime}\rangle+a_{i}\) decreases to \(0\). This implies both that \(\langle v_{i},x^{\prime}\rangle<0\), and that \(\log(\langle v_{i},x+tx^{\prime}\rangle+a_{i})\) goes to negative infinity. Thus, \(\frac{d\left(\varphi_{\mathbf{a}}(x+tx^{\prime})\right)}{dt}\) is positive near the boundary, so \(\varphi_{\mathbf{a}}(x+tx^{\prime})\) cannot be minimized there. On the other hand, we see that the second derivative \[\frac{d^{2}\left(\varphi_{\mathbf{a}}(x+tx^{\prime})\right)}{dt^{2}}=\varphi_ {\mathbf{a}}(x)\cdot\left(\left(\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x^{ \prime}\rangle\log(\langle v_{i},x+tx^{\prime}\rangle+a_{i})\right)^{2}+\sum_ {i}\frac{\langle v_{i},x^{\prime}\rangle^{2}}{\langle v_{i},x+tx^{\prime} \rangle+a_{i}}\right),\] is strictly positive when \(x+tx^{\prime}\in\mathrm{rel.int.}(P)\), so \(\varphi_{\mathbf{a}}(x)\) is convex. Thus, there is a unique minimizer \(m_{\mathbf{a}}\) in the relative interior of \(P\) **Example 2.3**.: Let \(v_{1}=1\), \(v_{2}=-1\) in \(\mathbb{R}\), and let \(\mathbf{a}=(a_{1},a_{2})=(0,l)\). Then \(P\) is the interval \([0,l]\subset\mathbb{R}\), and \(\varphi_{\mathbf{a}}\) is \[\varphi_{\mathbf{a}}(x)=x^{x}(l-x)^{l-x},\] which is minimized at \(\frac{l}{2}\). We now show the following technical lemma for \(\varphi_{\mathbf{a}}\), which we will use later. Note that for any \(x\in L\), and \(k\) large enough, \(\varphi_{k\cdot\mathbf{a}}(km_{\mathbf{a}}+\sqrt{k}x)\) is defined. 
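For Example 2.3, the minimizer \(m_{\mathbf{a}}=l/2\) can also be found numerically by minimizing \(\log\varphi_{\mathbf{a}}\); a small sketch assuming scipy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

l = 6.0
log_phi = lambda x: x * np.log(x) + (l - x) * np.log(l - x)   # log of x^x (l-x)^(l-x)
res = minimize_scalar(log_phi, bounds=(1e-9, l - 1e-9), method="bounded")
print(res.x)   # ~ l/2 = 3.0, the unique minimizer m_a from Proposition 2.2
```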
**Lemma 2.4**.: _For all \(x\in L\),_ \[\lim_{k\to\infty}\frac{\varphi_{k\cdot\mathbf{a}}(km_{\mathbf{a}})}{\varphi_{ k\cdot\mathbf{a}}(km_{\mathbf{a}}+\sqrt{k}x)}=e^{-\frac{1}{2}\sum_{i\in l_{ \mathbf{a}}}\frac{(v_{i}x)^{2}}{(v_{i}m_{\mathbf{a}})+a_{i}}}.\] _We show this by showing a fortiori that if \(x\in L\) and \(|x|<k^{c}\) for some \(0<c<\frac{1}{6}\), then_ \[\left|\frac{\varphi_{k\cdot\mathbf{a}}(km_{\mathbf{a}})}{\varphi_{k\cdot \mathbf{a}}(km_{\mathbf{a}}+\sqrt{k}x)}\sqrt{\prod_{i\in l_{\mathbf{a}}}\frac {\langle v_{i},km_{\mathbf{a}}\rangle+ka_{i}}{\langle v_{i},km_{\mathbf{a}}+ \sqrt{k}x\rangle+ka_{i}}}-e^{-\frac{1}{2}\sum_{i\in l_{\mathbf{a}}}\frac{ \langle v_{i}x\rangle^{2}}{(v_{i}m_{\mathbf{a}})+a_{i}}}\right|=e^{-\frac{1}{2 }\sum_{i\in l_{\mathbf{a}}}\frac{\langle v_{i},x\rangle^{2}}{(v_{i}m_{\mathbf{ a}})+a_{i}}}\cdot O(k^{3c-\frac{1}{2}}).\] Proof.: We can rewrite the \(k\)-dependent expression as a product of three parts which we will deal with separately: \[\frac{\varphi_{k\cdot\mathbf{a}}(km_{\mathbf{a}})}{\varphi_{k\cdot \mathbf{a}}(km_{\mathbf{a}}+\sqrt{k}x)}\sqrt{\prod_{i\in l_{\mathbf{a}}}\frac {\langle v_{i},km_{\mathbf{a}}\rangle+ka_{i}}{\langle v_{i},km_{\mathbf{a}}+ \sqrt{k}x\rangle+ka_{i}}}= \prod_{i\in l_{\mathbf{a}}}\sqrt{\frac{\langle v_{i},m_{\mathbf{ a}}\rangle+a_{i}}{\langle v_{i},m_{\mathbf{a}}+\frac{x}{\sqrt{k}}\rangle+a_{i}}} \tag{2}\] \[\cdot\left(\prod_{i\in l_{\mathbf{a}}}\frac{\langle v_{i},m_{ \mathbf{a}}\rangle+a_{i}}{\langle v_{i},m_{\mathbf{a}}+\frac{x}{\sqrt{k}} \rangle+a_{i}}\right)^{k(\langle v_{i},m_{\mathbf{a}})+a_{i})}\] (3) \[\cdot\prod_{i\in l_{\mathbf{a}}}\frac{1}{\left(\langle v_{i},m_{ \mathbf{a}}+\frac{x}{\sqrt{k}}\rangle+a_{i}\right)^{\sqrt{k}\langle v_{i},x \rangle}}. \tag{1}\] If \(x\in L\) and \(|x|<k^{c}\), then \[\sqrt{\frac{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}{\langle v_{i},m_{ \mathbf{a}}+\frac{x}{\sqrt{k}}\rangle+a_{i}}}=1+O(k^{c-\frac{1}{2}}).\] For the second, which can be written \(\prod_{i\in l_{\mathbf{a}}}\left(\frac{1}{1+\frac{\langle v_{i},x\rangle}{ \sqrt{k}(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}}\right)^{k(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}\) we have \[\prod_{i\in l_{\mathbf{a}}}\left(\frac{1}{1+\frac{\langle v_{i},x\rangle}{ \sqrt{k}(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}}\right)^{k(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}=\exp\left(-\sum_{i\in l_{\mathbf{a}}}k(\langle v _{i},m_{\mathbf{a}}\rangle+a_{i})\sum_{l=1}^{\infty}\frac{(-1)^{l-1}}{l} \left(\frac{\langle v_{i},x\rangle}{\sqrt{k}(\langle v_{i},m_{\mathbf{a}} \rangle+a_{i})}\right)^{l}\right).\] The argument of \(\exp\) can be written \[-\sqrt{k}\left(\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x\rangle\right)+\left(\sum _{i\in I_{\mathbf{a}}}\frac{1}{2}\frac{\langle v_{i},x\rangle^{2}}{\langle v_{i },m_{\mathbf{a}}\rangle+a_{i}}\right)-\frac{1}{\sqrt{k}}\left(\sum_{i\in I_{ \mathbf{a}}}\frac{\langle v_{i},x\rangle^{3}}{\langle(v_{i},m_{\mathbf{a}} \rangle+a_{i})^{2}}\sum_{l=0}^{\infty}\frac{1}{l+3}\left(\frac{-\langle v_{i}, x\rangle}{\sqrt{k}(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}\right)^{l} \right).\] Since \(x\in L\), \(\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x\rangle=0\). 
If \(|x|<k^{c}\), then the above is \[\left(\sum_{i\in I_{\mathbf{a}}}\frac{1}{2}\frac{\langle v_{i},x\rangle^{2}}{ \langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}\right)+O(k^{3c-\frac{1}{2}}),\] so \[\left(\prod_{i\in I_{\mathbf{a}}}\frac{\langle v_{i},m_{\mathbf{a}}\rangle+a_ {i}}{\langle v_{i},m_{\mathbf{a}}+\frac{x}{\sqrt{k}}\rangle+a_{i}}\right)^{k( \langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}=e^{\frac{1}{2}\sum_{i\in I_{ \mathbf{a}}}\frac{\langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}} \rangle+a_{i}}}\cdot(1+O(k^{3c-\frac{1}{2}})).\] Finally, the last term in the product can be factored further: \[\prod_{i}\frac{1}{\left(\langle v_{i},m_{\mathbf{a}}+\frac{x}{\sqrt{k}} \rangle+a_{i}\right)^{\sqrt{k}\langle v_{i},x\rangle}}=\prod_{i}\left(\frac{1 }{\left(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}\right)^{\langle v_{i},x \rangle}}\right)^{\sqrt{k}}\cdot\left(\frac{1}{1+\frac{\langle v_{i},x\rangle} {\sqrt{k}(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i})}}\right)^{\sqrt{k} \langle v_{i},x\rangle}.\] Recall that \(m_{\mathbf{a}}\) is defined as the unique critical point of the function \(\varphi_{\mathbf{a}}\) from Proposition 2.2, and so for any \(x\), \(\frac{d\left(\varphi_{\mathbf{a}}(m_{\mathbf{a}}+tx)\right)}{dt}|_{t=0}=0.\) Computing the derivative, we get \[0=\varphi_{\mathbf{a}}(m_{\mathbf{a}})\cdot\sum_{i\in I_{\mathbf{a}}}\langle v _{i},x\rangle\log(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}).\] Since \(\varphi_{\mathbf{a}}(m_{\mathbf{a}})\) is positive, we have \(\sum_{i\in I_{\mathbf{a}}}\langle v_{i},x\rangle\log(\langle v_{i},m_{\mathbf{ a}}\rangle+a_{i})=0\), and consequently \[1=\prod_{i\in I_{\mathbf{a}}}\left(\langle v_{i},m_{\mathbf{a}}\rangle+a_{i} \right)^{\langle v_{i},x\rangle}.\] The other term is easy to estimate in a manner similar to the second product, i.e. \[\left(\frac{1}{1+\frac{\langle v_{i},x\rangle}{\sqrt{k}(\langle v_{i},m_{ \mathbf{a}}\rangle+a_{i})}}\right)^{\sqrt{k}(\langle v_{i},x\rangle}=e^{-\frac {\langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}\cdot( 1+O(k^{3c-\frac{1}{2}})).\] Thus, (4) \[\frac{\varphi_{k\cdot\mathbf{a}}(km_{\mathbf{a}})}{\varphi_{k\cdot \mathbf{a}}(km_{\mathbf{a}}+\sqrt{k}x)}\sqrt{\prod_{i\in I_{\Delta}}\frac{ \langle v_{i},km_{\mathbf{a}}\rangle+ka_{i}}{\langle v_{i},km_{\mathbf{a}}+ \sqrt{k}x\rangle+ka_{i}}}= 1\cdot(1+O(k^{c-\frac{1}{2}}))\] (5) \[\cdot e^{\frac{1}{2}\sum_{i\in I_{\Delta}}\frac{\langle v_{i},x \rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a\varphi_{i}}}\cdot(1+O(k^{3c -\frac{1}{2}}))\] (6) \[\cdot e^{-\sum_{i\in I_{\Delta}}\frac{\langle v_{i},x\rangle^{2}}{ \langle v_{i},m_{\mathbf{a}}\rangle+a\varphi_{i}}}\cdot(1+O(k^{3c-\frac{1}{2} }))\] (7) \[= e^{-\frac{1}{2}\sum_{i\in I_{\Delta}}\frac{\langle v_{i},x \rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a\varphi_{i}}}\cdot(1+O(k^{3c -\frac{1}{2}})).\] (8) We've assumed that \(c<\frac{1}{6}\), so the error term goes to \(0\). ## 3. The auxiliary distribution \(\nu_{k}^{\prime}\) Because we are not able to directly access \(\mathbb{E}[\mathbf{X}_{k\cdot\mathbf{a}}]\), it is easier to calculate the limit of an auxiliary distribution rather than the distributions of \(\mathbf{Y}_{k}\) (the \(\nu_{k}\)). 
Define \[\nu_{k}^{\prime}:=\left(\tau_{\sqrt{k}}\right)_{*}\left(\mu_{k\cdot\mathbf{a} }*\delta_{-km_{\mathbf{a}}}\right).\] From the definitions, \(\nu_{k}^{\prime}\) is a sum of Dirac measures with multinomial coefficients over lattice points in \(kP\), but the next proposition shows that as \(k\) goes to infinity, there is an alternative formula. **Proposition 3.1**.: _Let \(0<c<\frac{1}{6}\). There are constants \(d_{k}\) such that_ \[\lim_{k\to\infty}\nu_{k}^{\prime}=\lim_{k\to\infty}\frac{1}{d_{k}}\sum_{x\in B _{k^{c}}(0)\cap\frac{M\to\mu_{\mathbf{a}}}{\sqrt{k}}}e^{-\frac{1}{2}\sum_{i \in I_{\Delta}}\frac{\langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}} \rangle+a_{i}}}\delta_{x}.\] Proof.: By definition, \[\nu_{k}^{\prime}=\frac{1}{b_{k}}\sum_{x\in kP\cap\mathbb{Z}^{n}}\left(\langle v _{1},x\rangle+ka_{1},\ldots,\langle v_{r},x\rangle+ka_{r}\right)\delta_{ \frac{x-km_{\mathbf{a}}}{\sqrt{k}}}, \tag{9}\] for \(b_{k}\) normalizing constants (explicitly, \(b_{k}=\sum_{x\in kP\cap\mathbb{Z}^{n}}\left(\langle v_{1},x\rangle+ka_{1}, \ldots,\langle v_{r},x\rangle+ka_{r}\right)\)). We will start by showing that most of the terms can be ignored as as \(k\) goes to infinity. Let \[c_{L}=\sup_{x\in L}\inf_{m\in L\cap\mathbb{Z}^{n}}|x-m|,\] and let \(m_{\mathrm{inner},k}\) be a point in \(B_{c_{L}+1}(km_{\mathbf{a}})\cap(km_{\mathbf{a}}+L)\cap\mathbb{Z}^{n}\) (which certainly must be non-empty). The mass from terms in the sum outside of \(B_{k^{\frac{1}{2}+c}}(km_{\mathbf{a}})\) can be bounded: \[\frac{1}{b_{k}}\sum_{x\in\left(kP\setminus B_{\frac{1}{k^{\frac{1}{2}+c}}(km_{ \mathbf{a}})}\right)\cap\mathbb{Z}^{n}}\left(\langle v_{1},x\rangle+ka_{1}, \ldots,\langle v_{r},x\rangle+ka_{i}\right)\leq\sum_{x\in\left(kP\setminus B_{ \frac{1}{k^{\frac{1}{2}+c}}(km_{\mathbf{a}})}\right)\cap\mathbb{Z}^{n}}\frac{ \left(\langle v_{1},x\rangle+ka_{1},\ldots,\langle v_{r},x\rangle+ka_{i}\right) }{k\sum_{i}a_{i}}.\] Using Stirling's formula (see e.g. [WW, Section 12.33]), we can bound the summands: \[\frac{\left(\langle v_{1},x\rangle+ka_{1},\ldots,\langle v_{r},x \rangle+ka_{i}\right)}{\left(\langle v_{1},m_{\mathrm{inner},k}\rangle+ka_{i} \right)\prod_{\begin{subarray}{c}i\\ k\sum_{i}a_{i}\\ \langle v_{1},m_{\mathrm{inner},k}\rangle+ka_{1},\ldots,\langle v_{r},m_{ \mathrm{inner},k}\rangle+ka_{i}\end{subarray}}}.\leq c^{\prime}\cdot\frac{ \prod_{\begin{subarray}{c}i\\ i\\ k\sum_{i}a_{i}\end{subarray}}\left(\langle v_{i},m_{\mathrm{inner},k}\rangle+ka _{i}\right)^{\frac{1}{2}+\langle v_{i},m_{\mathrm{inner},k}\rangle+ka_{i}}}{ \prod_{\begin{subarray}{c}i\\ \langle v_{i},x\rangle+ka_{i}\geq 1\end{subarray}}\left(\langle v_{i},x \rangle+ka_{i}\right)^{\frac{1}{2}+\langle v_{i},x\rangle+ka_{i}}}, \tag{11}\] \[\leq c^{\prime\prime}k^{l}\cdot\left(\frac{\varphi_{\mathbf{a}} \left(\frac{m_{\mathrm{inner},k}}{k}\right)}{\varphi_{\mathbf{a}}\left(\frac{x }{k}\right)}\right)^{k}, \tag{10}\] where \(c^{\prime},c^{\prime\prime}\), and \(l\) are constants independent of \(x\). By assumption, \(|\frac{m_{\mathrm{inner},k}}{k}-m_{\mathbf{a}}|<\frac{c_{l}+1}{k}\). If \(x\in kP\setminus B_{\frac{1}{k^{\frac{1}{2}+c}}}(km_{\mathbf{a}})\), then \(|\frac{x}{k}-m_{\mathbf{a}}|\geq\frac{1}{k^{\frac{1}{2}-c}}\). 
Let us define distinguished points \(m_{\mathrm{outer},k}\) such that \(|\frac{m_{\mathrm{outer},k}}{k}-m_{\mathbf{a}}|=\frac{1}{k^{\frac{1}{2}-c}}\) and additionally among all \(x\) satisfying \(|\frac{x}{k}-m_{\mathbf{a}}|=\frac{1}{k^{\frac{1}{2}-c}}\), the point \(m_{\mathrm{outer},k}\) minimizes \(x\mapsto\varphi_{\mathbf{a}}\big{(}\frac{x}{k}\big{)}\). Then, since \(\varphi_{\mathbf{a}}\) is convex, for any \(x\) such that \(|\frac{x}{k}-m_{\mathbf{a}}|\geq\frac{1}{k^{\frac{1}{2}-c}}\), we have \(\varphi_{\mathbf{a}}\big{(}\frac{m_{\mathrm{outer},k}}{k}\big{)}\leq\varphi_ {\mathbf{a}}\big{(}\frac{x}{k}\big{)}\). On the other hand, by writing \(\varphi_{\mathbf{a}}\) as a Taylor polynomial around \(m_{\mathbf{a}}\), we have that \(\frac{\varphi_{\mathbf{a}}\big{(}\frac{m_{\mathrm{inner},k}}{k}\big{)}}{ \varphi_{\mathbf{a}}\big{(}\frac{m_{\mathrm{outer},k}}{k}\big{)}}<R<1\) for all \(k\) sufficiently large. Thus, \[\sum_{x\in\left(kP\setminus B_{\frac{1}{k^{\frac{1}{2}+c}}(km_{ \mathbf{a}})}\right)\cap\mathbb{Z}^{n}}\frac{\left(\langle v_{1},x\rangle+ka_{ 1},\ldots,\langle v_{r},x\rangle+ka_{i}\right)}{\left(\langle v_{1},m_{ \mathrm{inner},k}\rangle+ka_{1},\ldots,\langle v_{r},m_{\mathrm{inner},k} \rangle+ka_{i}\right)}<\sum_{x\in\left(kP\setminus B_{\frac{1}{k^{\frac{1}{2}+c }}(km_{\mathbf{a}})}\right)\cap\mathbb{Z}^{n}}c^{\prime\prime}k^{l}R^{k} \tag{13}\] \[\leq p(k)c^{\prime\prime}k^{l}R^{k}, \tag{12}\] where \(p(k)\) is a polynomial in \(k\) (e.g. the Ehrhart polynomial of \(P\) which counts integer points in \(kP\)). This quantity goes to \(0\) as \(k\) goes to infinity, showing that \[\lim_{k\to\infty}\nu_{k}^{\prime}=\lim_{k\to\infty}\frac{1}{b_{k}}\sum_{x\in B \atop k^{2}+\zeta(km_{\mathfrak{a}})\cap kP\cap\mathbb{Z}^{n}}\left(\begin{matrix} k\sum_{i}a_{i}\\ \langle v_{1},x\rangle+ka_{1},\ldots,\langle v_{r},x\rangle+ka_{r}\end{matrix} \right)\delta_{\frac{x-km_{\mathfrak{a}}}{\sqrt{k}}}.\] We can rewrite the terms within the limit on the right-hand side of the above as \[\frac{1}{b_{k}}\sum_{x\in B_{k^{c}}(0)\cap L\cap\frac{\mathbb{Z}^{n-km_{ \mathfrak{a}}}}{\sqrt{k}}}\left(\begin{matrix}k\sum_{i}a_{i}\\ \langle v_{1},km_{\mathfrak{a}}+\sqrt{k}x\rangle+ka_{1},\ldots,\langle v_{r},km_{\mathfrak{a}}+\sqrt{k}x\rangle+ka_{r}\end{matrix}\right)\delta_{x},\] so we can then start comparing. Note: to reduce the line sizes below, we write \(\binom{k\sum_{i}a_{i}}{(v_{i},km_{\mathfrak{a}}+\sqrt{k}x)+ka_{1}}\) for \(\binom{k\sum_{i}a_{i}}{(v_{1},km_{\mathfrak{a}}+\sqrt{k}x)+ka_{1},\ldots,(v_{ r},km_{\mathfrak{a}}+\sqrt{k}x)+ka_{r}}\). Let \(d_{k}=b_{k}\frac{\varphi_{k\mathfrak{a}}(km_{\mathfrak{a}})\sqrt{\prod_{i\in \mathfrak{a}}(v_{i},km_{\mathfrak{a}})+ka_{i}}}{(2\pi)^{1-|\mathfrak{a}|}(k \sum_{i\in\mathfrak{a}}a_{i})^{\frac{1}{2}+\sum_{i\in\mathfrak{a}}a_{i}}}\). 
Then we can compare \(\nu_{k}^{\prime}\) and \(\frac{1}{d_{k}}\sum_{x\in B_{k^{c}}(0)\cap L\frac{\mathbb{Z}^{n-km_{ \mathfrak{a}}}}{\sqrt{k}}}e^{-\frac{1}{2}\sum_{i\in\mathfrak{a}}\frac{(v_{i},x )^{2}}{(v_{i},km_{\mathfrak{a}})+a_{i}}}\delta_{x}\) via \[\sum_{x\in B_{k^{c}}(0)\cap L\cap\frac{\mathbb{Z}^{n-km_{ \mathfrak{a}}}}{\sqrt{k}}}\left|\frac{1}{b_{k}}\binom{k\sum_{i}a_{i}}{(v_{i}, km_{\mathfrak{a}}+\sqrt{k}x)+ka_{i}}-\frac{(2\pi)^{1-|\mathfrak{a}|}(k\sum_{i \in\mathfrak{l}_{\mathfrak{a}}}a_{i})^{\frac{1}{2}+k\sum_{i\in\mathfrak{l}_{ \mathfrak{a}}}a_{i}}}{\varphi_{k\mathfrak{a}}(km_{\mathfrak{a}}+\sqrt{k}x) \sqrt{\prod_{i\in\mathfrak{l}_{\mathfrak{a}}}\langle v_{i},km_{\mathfrak{a}}+ \sqrt{k}x\rangle+ka_{i}}}\right|\] \[+\sum_{x\in B_{k^{c}}(0)\cap L\cap\frac{\mathbb{Z}^{n-km_{ \mathfrak{a}}}}{\sqrt{k}}}\left|\frac{1}{b_{k}}\frac{(2\pi)^{1-|\mathfrak{l}_{ \mathfrak{a}}|}(k\sum_{i\in\mathfrak{l}_{\mathfrak{a}}}a_{i})^{\frac{1}{2}+k \sum_{i\in\mathfrak{l}_{\mathfrak{a}}}a_{i}}}{\varphi_{k\mathfrak{a}}(km_{ \mathfrak{a}}+\sqrt{k}x)\sqrt{\prod_{i\in\mathfrak{l}_{\mathfrak{a}}}\langle v _{i},km_{\mathfrak{a}}+\sqrt{k}x\rangle+ka_{i}}}-\frac{1}{d_{k}}e^{-\frac{1}{2 }\sum_{i\in\mathfrak{l}_{\mathfrak{a}}}\frac{(v_{i},x)^{2}}{(v_{i},km_{ \mathfrak{a}})+a_{i}}}\right|.\] As \(k\) goes to infinity, the first sum goes to \(0\) by applying standard bounds associated to Stirling's approximation. The terms of the second can be rewritten \[\frac{(2\pi)^{1-|\mathfrak{a}|}(k\sum_{i\in\mathfrak{l}_{\mathfrak{a}}}a_{i}) ^{\frac{1}{2}+k\sum_{i\in\mathfrak{l}_{\mathfrak{a}}}a_{i}}}{b_{k}\varphi_{k \mathfrak{a}}(km_{\mathfrak{a}})\sqrt{\prod_{i\in\mathfrak{l}_{\mathfrak{a}}} \langle v_{i},km_{\mathfrak{a}}\rangle+ka_{i}}}\left|\frac{\varphi_{k\cdot \mathfrak{a}}(km_{\mathfrak{a}})\sqrt{\prod_{i\in\mathfrak{l}_{\mathfrak{a}}} \langle v_{i},km_{\mathfrak{a}}\rangle+ka_{i}}}{\varphi_{k\cdot\mathfrak{a}}( km_{\mathfrak{a}}+\sqrt{k}x)\sqrt{\prod_{i\in\mathfrak{l}_{\mathfrak{a}}} \langle v_{i},km_{\mathfrak{a}}+\sqrt{k}x\rangle+ka_{i}}}-e^{-\frac{1}{2} \sum_{i\in\mathfrak{l}_{\mathfrak{a}}}\frac{(v_{i},x)^{2}}{(v_{i},m_{\mathfrak{a}} )+a_{i}}}\right|.\] By Lemma 2.4, the whole sum is therefore bounded by some constant times \[k^{3c-\frac{1}{2}}\frac{1}{b_{k}}\sum_{x\in B_{k^{c}}(0)\cap L\cap\frac{Z^{n-km_{ \mathbf{a}}}}{\sqrt{k}}}\frac{(2\pi)^{1-|I_{\mathbf{a}}|}(k\sum_{i\in I_{ \mathbf{a}}}a_{i})^{\frac{1}{2}+k\sum_{i\in I_{\mathbf{a}}}a_{i}}}{\varphi_{k \cdot\mathbf{a}}(km_{\mathbf{a}})\sqrt{\prod_{i\in I_{\mathbf{a}}}\langle v_{i },km_{\mathbf{a}}\rangle+ka_{i}}}e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}} \frac{\langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}.\] Recall that \(b_{k}=\sum_{y\in\mathcal{D}\cap\mathbb{Z}^{n}}\binom{k\sum_{i}a_{i}}{\langle v _{1},y\rangle+ka_{1},\ldots,\langle v_{r},y\rangle+ka_{r}}\), so using Stirling's formula and Lemma 2.4, we have for some small \(\epsilon>0\) that \[b_{k}\geq(1-\epsilon)\sum_{y\in B_{k^{c}}(0)\cap L\cap\frac{Z^{n-km_{\mathbf{a }}}}{\sqrt{k}}}\frac{(2\pi)^{1-|I_{\mathbf{a}}|}(k\sum_{i\in I_{\mathbf{a}}}a_{ i})^{\frac{1}{2}+k\sum_{i\in I_{\mathbf{a}}}a_{i}}}{\varphi_{k\cdot\mathbf{a}}(km_{ \mathbf{a}})\sqrt{\prod_{i\in I_{\mathbf{a}}}\langle v_{i},km_{\mathbf{a}} \rangle+ka_{i}}}e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}}\frac{\langle v_{i},x \rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}},\] when \(k\) is large enough. 
Thus besides the \(k^{3c-\frac{1}{2}}\) factor, the expression is bounded above by a constant, and so the whole sum vanishes. **Corollary 3.2**.: _The measures \(\nu_{k}^{\prime}\) weakly converge to a scalar multiple of the measure_ \[e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}}\frac{\langle v_{i},x\rangle^{2}}{ \langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}\cdot\delta_{L}.\] This is true because the sequence \[\frac{1}{d_{k}}\sum_{x\in B_{k^{c}}(0)\cap L\cap\frac{Z^{n-km_{\mathbf{a}}}}{ \sqrt{k}}}e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}}\frac{\langle v_{i},x \rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}f(x),\] converges to some multiple of \(\int_{L}e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}}\frac{\langle v_{i},x \rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}f(x)dx\) by the definition of the Riemann integral. Since \(\lim_{k\to\infty}\frac{1}{d_{k}}\sum_{x\in B_{k^{c}}(0)\cap L\cap\frac{Z^{n-km_ {\mathbf{a}}}}{\sqrt{k}}}e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}}\frac{ \langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}\delta_ {x}\) is equal to \(\lim_{k\to\infty}\nu_{k}^{\prime}\) which is a probability measure, we can determine finally that \(\lim_{k\to\infty}\nu_{k}^{\prime}=\frac{e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a }}}\frac{\langle v_{i},x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}} \delta_{L}}}{\int_{L}e^{-\frac{1}{2}\sum_{i\in I_{\mathbf{a}}}\frac{\langle v_{i },x\rangle^{2}}{\langle v_{i},m_{\mathbf{a}}\rangle+a_{i}}}dx}\). ## 4. The limit of \(\nu_{k}\) Now we can directly address the distribution of \(\mathbf{Y}_{k}\), which is the rescaling of the polyhedral multinomial random vector \(X_{k}\), translated to have mean \(0\). The distribution has the formula \[\nu_{k}=\left(\tau_{\sqrt{k}}\right)_{\ast}\left(\mu_{k\cdot\mathbf{a}}\ast \delta_{-\mathbb{E}[\mathbf{X}_{k\cdot\mathbf{a}}]}\right).\] According to the results of the previous section, \[\lim_{k\to\infty}\nu_{k}^{\prime}=\lim_{k\to\infty}\left(\tau_{\sqrt{k}}\right)_{* }\left(\mu_{k\cdot\mathbf{a}}*\delta_{-k\cdot m_{\mathbf{a}}}\right)=\frac{e^{- \frac{1}{2}\sum_{i\in\mathbf{l_{a}}}\frac{(p_{i}x)^{2}}{(p_{i}m_{\mathbf{a}})+a _{i}}}\delta_{L}}{\int_{L}e^{-\frac{1}{2}\sum_{i\in\mathbf{l_{a}}}\frac{(p_{i} x)^{2}}{(p_{i}m_{\mathbf{a}})+a_{i}}}dx}.\] **Theorem 4.1**.: _The limit distribution of \(\mathbf{Y}_{k}\) is the probability measure_ \[\frac{e^{-\frac{1}{2}\sum_{i\in\mathbf{l_{a}}}\frac{(p_{i}x)^{2}}{(p_{i}m_{ \mathbf{a}})+a_{i}}}\delta_{L}}{\int_{L}e^{-\frac{1}{2}\sum_{i\in\mathbf{l_{a} }}\frac{(p_{i}x)^{2}}{(p_{i}m_{\mathbf{a}})+a_{i}}}dx}\] Proof.: Integrating \((x_{1},\ldots,x_{n})\) against \(\lim_{k\to\infty}\nu_{k}^{\prime}\) is \(0\), so \(\lim_{k\to\infty}\frac{\operatorname{E}[X_{k\cdot\mathbf{a}}]-k\cdot m_{ \mathbf{a}}}{\sqrt{k}}=0\). Therefore \[\lim_{k\to\infty}\mathbf{Y}_{k} =\lim_{k\to\infty}\frac{X_{k\cdot\mathbf{a}}-\operatorname{E}[X_ {k\cdot\mathbf{a}}]}{\sqrt{k}},\] \[=\lim_{k\to\infty}\frac{X_{k\cdot\mathbf{a}}-km_{\mathbf{a}}}{ \sqrt{k}},\] which is the random variable with distribution \(\lim_{k\to\infty}\nu_{k}^{\prime}\).
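For the one-dimensional example above (where \(\mathbf{X}_{k\cdot\mathbf{a}}\) is Binomial\((kl,1/2)\), \(m_{\mathbf{a}}=l/2\) and the quadratic form equals \(4x^{2}/l\), i.e. a limiting variance of \(l/4\)), Theorem 4.1 can be checked numerically; a small sketch assuming scipy:

```python
import numpy as np
from scipy.stats import binom, norm

l, k = 6, 400
n = k * l                                  # X_{k a} ~ Binomial(k l, 1/2) in this case
xs = np.arange(n + 1)
ys = (xs - n / 2) / np.sqrt(k)             # recentred and rescaled as in Theorem 4.1
emp = binom.pmf(xs, n, 0.5) * np.sqrt(k)   # density of Y_k on its lattice (spacing 1/sqrt(k))
pred = norm.pdf(ys, scale=np.sqrt(l / 4))  # Gaussian with quadratic form 4 y^2 / l
mask = np.abs(ys) < 3
print(np.max(np.abs(emp[mask] - pred[mask])))   # small, and shrinking as k grows
```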
2301.03588
Multiscale Metamorphic VAE for 3D Brain MRI Synthesis
Generative modeling of 3D brain MRIs presents difficulties in achieving high visual fidelity while ensuring sufficient coverage of the data distribution. In this work, we propose to address this challenge with composable, multiscale morphological transformations in a variational autoencoder (VAE) framework. These transformations are applied to a chosen reference brain image to generate MRI volumes, equipping the model with strong anatomical inductive biases. We structure the VAE latent space in a way such that the model covers the data distribution sufficiently well. We show substantial performance improvements in FID while retaining comparable, or superior, reconstruction quality compared to prior work based on VAEs and generative adversarial networks (GANs).
Jaivardhan Kapoor, Jakob H. Macke, Christian F. Baumgartner
2023-01-09T09:15:30Z
http://arxiv.org/abs/2301.03588v2
# Multiscale Metamorphic VAE for 3D Brain MRI Synthesis

###### Abstract

Generative modeling of 3D brain MRIs presents difficulties in achieving high visual fidelity while ensuring sufficient coverage of the data distribution. In this work, we propose to address this challenge with composable, multiscale morphological transformations in a variational autoencoder (VAE) framework. These transformations are applied to a chosen reference brain image to generate MRI volumes, equipping the model with strong anatomical inductive biases. We show substantial performance improvements in FID while retaining comparable, or superior, reconstruction quality compared to prior work based on VAEs and generative adversarial networks (GANs).

## 1 Introduction

The paradigm of generative modeling could be an invaluable tool for better understanding and diagnosing brain disorders. Deep generative models such as VAEs [10] and GANs [6] have enjoyed tremendous success when used for disease attribution maps [4], longitudinal brain aging [16], and synthetic data generation [15; 14]. Several previous works [9; 12; 16; 3; 5; 15; 11] have employed such models to capture the distribution of Magnetic Resonance Images (MRIs) of the brain. However, due to the complementary strengths of GANs and VAEs, these methods either suffer from blurry outputs, insufficient data distribution coverage, or lack a latent space for further analyses [3]. Structural MR images of the brain are morphologically constrained to a smaller set compared to that of natural images such as ImageNet, due to similar brain anatomy across subjects. Morphologically constrained generative models have been used previously for medical image registration [2; 19], where the task is to map an image to a common space for subsequent analysis. This approach has also been explored previously in [5; 18; 15] for image synthesis. However, such models have only been shown to work on 2D MRI slices of the brain [5; 18], or do not contain a manipulable latent space for downstream analyses [15]. In this work, we extend the popular VAE approach by taking advantage of the strong anatomical priors in the brain. In contrast to regular VAEs, our approach outputs compositions of _morphological transformations_ comprising diffeomorphisms and intensity transformations at different scales. Those are then iteratively applied to a fixed reference MRI template. This allows us to generate high-fidelity MRI volumes while retaining a meaningful latent space.

## 2 Methodology

In the VAE paradigm, we aim to generate samples, denoted by \(\hat{\mathbf{x}}\in\mathcal{X}\), from a data distribution \(p_{\text{data}}(\mathbf{x})\) by sampling a random latent variable \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), which is then passed to a decoder to output \(p(\mathbf{x}|\mathbf{z})\). During training, given a datapoint \(\mathbf{x}\), we infer the posterior distribution \(q(\mathbf{z}|\mathbf{x})=\mathcal{N}(\mu_{\mathbf{z}},\sigma_{\mathbf{z}})\) of latent variable \(\mathbf{z}\) through an encoder. The encoder and decoder parameters \(\theta\) are then jointly optimized to maximize the Evidence Lower BOund (ELBO), which is a lower bound on \(\log p(\mathbf{x})\) and can be written as \(\text{ELBO}(\theta|\mathbf{x})=\mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x})}\left[\log p(\mathbf{x}|\mathbf{z})\right]-\text{KL}\left[q(\mathbf{z}|\mathbf{x})\|\mathcal{N}(\mathbf{0},\mathbf{I})\right]\).
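In practice the model is trained on a \(\beta\)-weighted surrogate of the negative ELBO (Eq. (2) below). A minimal PyTorch-style sketch of that term, with names chosen here for illustration rather than taken from the authors' code:

```python
import torch
import torch.nn.functional as F

def elbo_loss(x, x_hat, mu_z, logvar_z, beta=3.0):
    """L1 reconstruction plus beta-weighted KL(q(z|x) || N(0, I)); a sketch of
    the L_ELBO term, not the authors' exact implementation."""
    recon = F.l1_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar_z - mu_z.pow(2) - logvar_z.exp())
    return recon + beta * kl
```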
This optimization reduces to minimizing a reconstruction term in the data space and a regularization term over the latent space. The proposed model, which we term _multiscale metamorphic autoencoder (\(M^{3}AE\))_, is based on the standard VAE. However, instead of direct pixelwise outputs, our model outputs morphological transforms that map a fixed standard template \(\mathbf{T}\in\mathcal{X}\) to the input image \(\mathbf{x}\). If the template image \(\mathbf{T}\) is sufficiently similar to \(\mathbf{x}\), the generation task may be substantially simpler than generating images 'from scratch'. We use two classes of transforms: deformations and additions. The diffeomorphic transform \(\mathbf{T}\rightarrow(\mathbf{T}\circ\phi)\), where the deformation field \(\phi\) is applied as in the LDDMM framework [1], performs non-affine elastic deformations, which capture atrophy patterns and structural variations. The intensity transformation is an additive transformation \(\mathbf{T}\rightarrow(\mathbf{T}+\mathbf{A})\), and is expected to capture subject- and scanner-specific intensity variations, lesions, and topological irregularities. The two transforms can be composed into _metamorphic_ transformations [5; 17], denoted as \(\mathcal{M}(\mathbf{T}|\phi,\mathbf{A})=(\mathbf{T}+\mathbf{A})\circ\phi\). We then apply these transforms in a cascaded, multiscale fashion at increasing scales. For generation, the decoder is split into two backbones - Decoder\({}_{\phi}\) and Decoder\({}_{\mathbf{A}}\) (Fig. 1). Each decoder, made up of upscaling blocks, is given a subset of \(\mathbf{z}\) as input (i.e., \(\mathbf{z}_{\phi}\) / \(\mathbf{z}_{A}\) to Decoder\({}_{\phi}\) / Decoder\({}_{\mathbf{A}}\), respectively) and outputs a corresponding set of coarse-to-fine transform parameters \(\{\mathbf{A}^{(i)},\phi^{(i)}\}_{i}\). By composing coarse and fine transforms in an interleaved fashion as illustrated in Fig. 1, we obtain more expressive transformations and hence better coverage of the data space. The last transformation is only additive in nature and outputs the final reconstruction \(\hat{\mathbf{x}}\). The loss function, \(\mathcal{L}_{\text{total}}\), for training the model can be written as \[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{ELBO}}+\sum_{i=1}^{\text{levels}}\mathcal{L}_{\text{RegRecon}}^{(i)} \tag{1}\] \[\mathcal{L}_{\text{ELBO}}=\|\mathbf{x}-\hat{\mathbf{x}}\|_{1}+\beta\,\text{KL}\left[\mathcal{N}(\mu_{\mathbf{z}},\sigma_{\mathbf{z}})\|\mathcal{N}(\mathbf{0},\mathbf{I})\right] \tag{2}\] \[\mathcal{L}_{\text{RegRecon}}^{(i)}=\gamma_{1}^{(i)}\|\mathbf{x}-\hat{\mathbf{x}}^{(i)}\|_{1}+\underbrace{\gamma_{2}^{(i)}\|\mathbf{A}^{(i)}\|^{2}+\gamma_{3}^{(i)}\|\nabla\phi^{(i)}\|^{2}+\gamma_{4}^{(i)}\|\nabla\cdot\phi^{(i)}\|^{2}+\gamma_{5}^{(i)}\|\phi^{(i)}\|^{2}}_{\text{regularizing coarse-to-fine transform parameters}} \tag{3}\] In Eq. 1, \(\mathcal{L}_{\text{ELBO}}\) is the \(\beta\)VAE [8] loss, and \(\mathcal{L}_{\text{RegRecon}}^{(i)}\) consists of intermediate regularization and reconstruction loss terms for the \(i\)th transform level. To ensure appropriate behaviour of the transformation cascade, we constrain intermediate volumes to be close to the final volume at each level (first term in Eq. 3). The transformation parameters are regularized using decay terms for \(\mathbf{A}^{(i)},\phi^{(i)}\), and spatial gradient and divergence minimizers for \(\phi^{(i)}\). \(\beta,\gamma_{j}^{(i)}\) are hyperparameters to appropriately scale the loss terms. Figure 1: Description of the proposed \(M^{3}AE\) model. The decoder consists of two separate backbones Decoder\({}_{\phi}\) and Decoder\({}_{A}\) that output diffeomorphic deformation fields and additive intensity transformations, respectively. These transformations are composed from coarse-to-fine scales that increase by a factor of 2 at each level. The last transformation only consists of an intensity transformation.
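As an illustration of how such a metamorphic step \((\mathbf{T}+\mathbf{A})\circ\phi\) could be applied to a volume, here is a simplified PyTorch sketch. It treats \(\phi\) as a displacement field in normalized coordinates added to an identity sampling grid (a simplification of the LDDMM velocity-field integration described above) and upsamples coarse parameters to the full resolution; the function names and tensor layouts are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def identity_grid(shape):
    """Identity sampling grid for a (D, H, W) volume, in grid_sample's
    normalized [-1, 1] coordinates with the last axis ordered (x, y, z)."""
    axes = [torch.linspace(-1.0, 1.0, s) for s in shape]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)  # (D, H, W, 3) as (z, y, x)
    return grid.flip(-1).unsqueeze(0)                                 # (1, D, H, W, 3) as (x, y, z)

def metamorphic_step(volume, phi, A):
    """One metamorphic transform (T + A) o phi: add intensities, then warp."""
    grid = identity_grid(volume.shape[2:]).to(volume) + phi           # phi: (N, D, H, W, 3)
    return F.grid_sample(volume + A, grid, align_corners=True)

def compose_multiscale(template, phis, As):
    """Apply coarse-to-fine (phi, A) pairs; coarse fields are upsampled first."""
    out, full = template, template.shape[2:]
    for phi, A in zip(phis, As):
        phi = F.interpolate(phi.permute(0, 4, 1, 2, 3), size=full, mode="trilinear",
                            align_corners=True).permute(0, 2, 3, 4, 1)
        A = F.interpolate(A, size=full, mode="trilinear", align_corners=True)
        out = metamorphic_step(out, phi, A)
    return out
```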
## 3 Evaluation We evaluated the M\({}^{3}\)AE model on 3D T1 MRI volumes from the ADNI [13] dataset. We used 4789 volumes from a total of 992 unique subjects for training. An additional 2592 volumes from 526 unique subjects served as the test set. Volumes were preprocessed by applying bias correction, skull stripping, and linear registration using FSL1, and were then downscaled and cropped to \(80\times 96\times 80\) pixels, mainly to reduce training times for the baseline and the proposed method. We used the cognitively normal baseline scan for subject 003S692 as the fixed template. Footnote 1: FMRIB Software Library v6.0 Created by the Analysis Group, FMRIB, Oxford, UK. The model's Encoder, Decoder\({}_{\phi}\) and Decoder\({}_{\mathbf{A}}\) are made up of 3D convolutions. We used four layers of morphological transforms, where each layer progressively doubles in each dimension until it reaches the original volume size. The model was trained using the AdamW optimizer with a learning rate of 0.0003. As our first baseline we chose the \(\alpha\)WGAN [11], which consists of an autoencoder with adversarial losses over the generated volume and the latent space vector. We retrained the model using the code provided by the authors. We also evaluated against a standard \(\beta\)VAE [8]. For the \(\beta\)VAE as well as for our proposed M\({}^{3}\)AE we set \(\beta=3\). In future work, we also aim to include the 3D StyleGAN [9] in our evaluations. Unfortunately, we could not include it in this submission due to implementation issues and time constraints. We quantitatively assessed the generation quality of M\({}^{3}\)AE with the \(\alpha\)WGAN baseline using FID [7] scores on 2D slices of randomly generated volumes. Following related work [9], we chose four slices (at locations [30,40,50,60] of the volume) for each of the three axes and averaged the per-axis FID scores over them. We also computed reconstruction metrics in the form of SSIM, PSNR, and MSE. Table 1 shows the values obtained for the above metrics. M\({}^{3}\)AE substantially outperformed \(\alpha\)WGAN as well as \(\beta\)VAE in FID scores. The proposed model also outperformed \(\alpha\)WGAN in terms of reconstruction quality. \(\beta\)VAE and M\({}^{3}\)AE perform similarly in terms of reconstruction metrics, although the \(\beta\)VAE has a slight advantage. This can be explained by the morphological constraint placed on our model, which limits outputs to transformations of the fixed template. This constraint, however, is also what enables the superior FID scores. Qualitative analysis of the compared methods reveals that samples from \(\beta\)VAE are not anatomically correct in many cases and display regions that appear scrambled. \(\alpha\)WGAN generates anatomically viable samples; however, in many of them the cortical folds do not follow the anatomical structure. Additionally, as in \(\beta\)VAE, samples exhibit regions with artifacts that are dramatically different from the rest of the volume. Samples from our model are the most anatomically correct due to starting from the fixed template.
However, the samples occasionally have a wavy visual quality due to improperly generated random deformation fields at finer scales. We furthermore observe a lack of sufficient topological diversity in the cortical folds. Samples for each of the above methods are shown in Section A of Supplementary Material. \begin{table} \begin{tabular}{l|l|l|l|l|l|l} \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{FID scores} & \multicolumn{3}{c}{reconstruction metrics} \\ \cline{2-7} \multicolumn{1}{c|}{} & axial(\(\downarrow\)) & coronal(\(\downarrow\)) & saggital(\(\downarrow\)) & \(L_{2}(\downarrow\)) & SSIM(\(\uparrow\)) & PSNR(\(\uparrow\)) \\ \hline \(\alpha\)WGAN [11] & 110.4 & \(95.4\) & 98.7 & 0.137 & 0.576 & 12.21 \\ \(\beta\)VAE [8] & 178.1 & 169.5 & 175.1 & \(\mathbf{0.045}\) & \(\mathbf{0.768}\) & \(\mathbf{16.99}\) \\ M\({}^{3}\)AE (ours) & \(\mathbf{82.2}\) & \(\mathbf{83.1}\) & \(\mathbf{76.1}\) & 0.047 & 0.759 & 16.81 \\ \hline Validation set & 11.3 & 11.5 & 8.5 & – & – & – \\ \hline \end{tabular} \end{table} Table 1: Evaluations of Frechet Inception Distance (FID) scores and reconstruction metrics on the test set. FID scores between the validation set and test set are provided as lower bounds. Arrows indicate whether lower (\(\downarrow\)) or higher (\(\uparrow\)) values are better. Discussion This work presents a novel generative modeling approach for 3D MRI synthesis taking advantage of strong anatomical priors and multiscale morphological transformations. Quantitative analysis shows that our model obtains substantially better coverage of the data distribution as well as comparable or better reconstruction quality compared to baselines. In contrast to GAN based approaches our model also retains an expressive latent space which enables further downstream analysis. Our initial results show that using a fixed template to take advantage of anatomical knowledge offers a promising avenue for MRI volume synthesis. In future works, we aim to address current limitations and apply this model for longitudinal modeling of brain MRIs in the context of Alzheimer's disease. ## 5 Potential negative impact This work aims to generate high-fidelity MRI volumes for better downstream analyses on a range of domain-specific tasks in medical imaging and clinical research. If we do not train the model on data representative of the clinical population, it may introduce biases in the downstream analyses and thus lead to sub-optimal or harmful predictions. This may also happen if the model does not equitably model features in the data based on protected attributes. ## 6 Acknowledgements This work is part of a project funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC number 2064/1 - Project number 390727645. This work was supported by the German Federal Ministry of Education and Research (BMBF): Tubingen AI Center, FKZ: 01IS18039A. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Jaiwardhan Kapoor. The authors also thank Dr. Sergios Gatidis (MPI-IS, Germany) for helpful feedback.
2307.07460
Priority Downward Closures
When a system sends messages through a lossy channel, then the language encoding all sequences of messages can be abstracted by its downward closure, i.e. the set of all (not necessarily contiguous) subwords. This is useful because even if the system has infinitely many states, its downward closure is a regular language. However, if the channel has congestion control based on priorities assigned to the messages, then we need a finer abstraction: The downward closure with respect to the priority embedding. As for subword-based downward closures, one can also show that these priority downward closures are always regular. While computing finite automata for the subword-based downward closure is well understood, nothing is known in the case of priorities. We initiate the study of this problem and provide algorithms to compute priority downward closures for regular languages, one-counter languages, and context-free languages.
Ashwani Anand, Georg Zetzsche
2023-07-14T16:38:52Z
http://arxiv.org/abs/2307.07460v2
# Priority Downward Closures ###### Abstract When a system sends messages through a lossy channel, then the language encoding all sequences of messages can be abstracted by its downward closure, i.e. the set of all (not necessarily contiguous) subwords. This is useful because even if the system has infinitely many states, its downward closure is a regular language. However, if the channel has congestion control based on priorities assigned to the messages, then we need a finer abstraction: The downward closure with respect to the priority embedding. As for subword-based downward closures, one can also show that these priority downward closures are always regular. While computing finite automata for the subword-based downward closure is well understood, nothing is known in the case of priorities. We initiate the study of this problem and provide algorithms to compute priority downward closures for regular languages, one-counter languages, and context-free languages. ###### Contents * 1 Introduction * 2 Preliminaries * 3 The Block Order * 4 Regular Languages * 5 One-counter Languages * 6 Context-free Languages ## 1 Introduction **Priorities.** However, if the messages are not dropped arbitrarily but as part of congestion control, then taking the set of all subwords would be too coarse an abstraction: Suppose we want to prioritize critical messages that can only be dropped if there are no lower-priority messages in the channel. For example, RFC 2475 describes an architecture that allows specifying relative priority among the IP packets from a finite set of priorities and allows the network links to drop lower priority packets to accommodate higher priority ones when the congestion in the network reaches a critical point [10]. As another example, in networks with an Asynchronous Transfer Mode layer, cells carry a priority in order to give preference to audio or video packets over less time-critical packets [24]. In these situations, the subword downward closure would introduce behaviors that are not actually possible in the system. To formally capture the effect of dropping messages by priorities, Haase, Schmitz and Schnoebelen [17] introduced _Priority Channel Systems (PCS)_. These feature an ordering on words (i.e. channel contents), called the _Prioritised Superseding Order (PSO)_, which allows the messages to have an assigned priority, such that higher priority messages can supersede lower priority ones. This order indeed allows the messages to be treated discriminatively, but the superseding is asymmetric: A message can be superseded only if there is a higher priority letter coming in the channel later. This means that the PSO is the "priority counterpart" of the subword order for channels with priorities. In particular, in these systems, components can be abstracted by their _priority downward closure_, the downward closure with respect to the PSO. Fortunately, just as for subwords, priority downward closures are also always regular. This raises the question of whether it is possible to compute finite automata for the priority downward closure for given infinite-state systems.
For example, consider a recursive program that sends messages into a lossy channel with congestion control. Then, the set of possible message sequences that can arrive is exactly the priority downward closure of the language \(S\) of sent messages. Since \(S\) is context-free in this case, we would like to compute a finite automaton for \(S\mbox{$\downarrow_{\mathsf{P}}$}\). While this problem is well-understood for subwords, nothing is known for priority downward closures. ContributionWe initiate the study of computing priority downward closures. We show two main results. On the one hand, we study the setting above--computing priority downward closures of context-free languages. Here, we show that one can compute a doubly-exponential-sized automaton for its priority downward closure. On the other hand, we consider a natural restriction of context-free languages: We show that for one-counter automata, there is a polynomial-time algorithm to compute the priority downward closure. The first step is to consider a related order on words, which we call _block order_, which also has priorities assigned to letters, but imposes them more symmetrically. Moreover, we show that under mild assumptions, computing priority downward closures reduces to computing block downward closures. Both our constructions--for one-counter automata and context-free languages--require new ideas. For one-counter automata, we modify the subword-based downward closures construction from [3] in a non-obvious way to block downward closures. Crucially, our modification relies on the insight that, in some word, repeating existing factors will always yield a word that is larger in the block order. For context-free languages, we present a novel inductive approach: We decompose the input language into finitely many languages with fewer priority levels and apply the construction recursively. Outline of the paperWe fix notation in Section 2 and introduce the block order and show its relationship to the priority order in Section 3. In Sections 4-6, we then present methods for computing block and priority downward closures for regular languages, one-counter languages, and context-free languages, respectively. ## 2 Preliminaries We will use the convention that \([i,j]\) denotes the set \(\{i,i+1,\ldots,j\}\). By \(\Sigma\), we represent a finite alphabet. \(\Sigma^{*}\) (\(\Sigma^{+}\)) denotes the set of (non-empty) words over \(\Sigma\). When defining the priority order, we will equip \(\Sigma\) with a set of priorities with total order \((\mathcal{P},\preccurlyeq)\), i.e. there exists a fixed priority mapping from \(\Sigma\) to \(\mathcal{P}\). The set of priority will be the set of integers \([0,d]\), with the canonical total order. By sets \(\Sigma_{=p}\) (\(p\in\mathcal{P}\)), we denote the set of letters in \(\Sigma\) with priority \(p\). For priority \(p\in\mathcal{P}\), \(\Sigma_{\leq p}=\Sigma_{=0}\cup\cdots\cup\Sigma_{=p}\), i.e. the set of letters smaller than or equal to \(p\). For a word \(w=a_{0}a_{1}\cdots a_{k}\), where \(a_{i}\in\Sigma\), by \(w[i,j]\), we denote the infix \(a_{i}a_{i+1}\cdots a_{j-1}a_{j}\), and by \(w[i]\), we denote \(a_{i}\). Finite automata and regular languagesA _non-deterministic finite state automaton (NFA)_ is a tuple \(\mathcal{A}=(Q,\Sigma,\delta,q_{0},F)\), where \(Q\) is a finite set of _states_, \(\Sigma\) is its _input alphabet_, \(\delta\) is its set of _edges_ i.e. 
a finite subset of \(Q\times\Sigma\cup\{\epsilon\}\times Q\), \(q_{0}\in Q\) is its _initial state_, and \(F\subseteq Q\) is its set of _final states_. A word is accepted by \(\mathcal{A}\) if it has a run from the initial state ending in a final state. The language _recognized_ by an NFA \(\mathcal{A}\) is called a regular language, and is denoted by \(\mathcal{L}(\mathcal{A})\). The _size of a NFA_, denoted by \(|\mathcal{A}|\), is the number of states in the NFA. (Well-)quasi-ordersA _quasi-order_, denoted as \((X,\leq)\), is a set \(X\) with a reflexive and transitive relation \(\leq\) on \(X\). If \(x\leq y\) (or equivalently, \(y\geq x\)), we say that \(x\) is smaller than \(y\), or \(y\) is greater than \(x\). If \(\leq\) is also anti-symmetric, then it is called a _partial order_. If every pair of elements in \(X\) is comparable by \(\leq\), then it is called a _total_ or _linear_ order. Let \((X,\leq_{1})\) and \((Y,\leq_{2})\) be two quasi orders, and \(h:X\to Y\) be a function. We call \(h\) a _monomorphism_ if it is one-to-one and \(x_{1}\leq_{1}x_{2}\iff h(x_{1})\leq_{2}h(x_{2})\). A quasi order \((X,\leq)\) is called a _well-quasi order (WQO)_, if any infinite sequence of elements \(x_{0},x_{1},x_{2},\ldots\) from \(X\) contains an increasing pair \(x_{i}\leq x_{j}\) with \(i<j\). If \(X\) is the set of words over some alphabet, then a WQO \((X,\leq)\) is called _multiplicative_ if \(\forall u,u^{\prime},v,v^{\prime}\in X\), \(u\leq u^{\prime}\) and \(v\leq v^{\prime}\) imply that \(uv\leq u^{\prime}v^{\prime}\). SubwordsFor \(u,v\in\Sigma^{*}\), we say \(u\preccurlyeq v\), which we refer to as _subword order_, if \(u\) is a subword (not necessarily, contiguous) of \(v\), i.e. if \[u = u_{1}u_{2}\cdots u_{k}\] \[\text{and, }v = v_{0}u_{1}v_{1}u_{2}v_{2}\cdots v_{k-1}u_{k}v_{k}\] where \(u_{i}\in\Sigma\) and \(v_{i}\in\Sigma^{*}\). In simpler words, \(u\preccurlyeq v\) if some letters of \(v\) can be dropped to obtain \(u\). For example, let \(\Sigma=[0,1]\). Then, \(0\preccurlyeq 00\preccurlyeq 010\not\preccurlyeq 110\); \(0\) and \(00\) can be obtained by dropping letters from \(00\) and \(010\), respectively. But \(010\) cannot be obtained from \(110\), as the latter does not have sufficiently many \(0\)s. If \(u\preccurlyeq v\), we say that \(u\) is _subword smaller_ than \(v\), or simply that \(u\) is a _subword_ of \(v\). And we call a mapping from the positions in \(u\) to positions in \(v\) that witnesses \(u\preccurlyeq v\) as the _witness position mapping_. Since \(\Sigma\) is a WQO with the equality order, by Higman's lemma, \(\Sigma^{*}\) is a WQO with the subword order. It is in fact a multiplicative WQO: if \(u\preccurlyeq u^{\prime}\) and \(v\preccurlyeq v^{\prime}\), then dropping the same letters from \(u^{\prime}v^{\prime}\) gives us \(uv\). Priority orderWe take an alphabet \(\Sigma\) with priorities totally ordered by \(\lessdot\). We say \(u\preccurlyeq v\), which we refer to as _priority order_, if \(u=\epsilon\) or, \[u = u_{1}u_{2}\cdots u_{k}\] \[\text{and, }v = v_{1}u_{1}v_{2}u_{2}\cdots v_{k}u_{k},\] such that \(\forall i\in[1,k]\), \(u_{i}\in\Sigma\) and \(v_{i}\in\Sigma_{\leq u_{i}}^{*}\). It is easy to observe that the priority order is multiplicative, and is finer than the subword order, i.e. \(\forall u,v\in\Sigma^{*},u\preccurlyeq v\implies u\preccurlyeq v\). As shown in [17, Theorem 3.6], the priority order on words over a finite alphabet with priorities is a well-quasi ordering: \((\Sigma^{*},\preccurlyeq)\) is a WQO. 
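To make the priority order concrete, the following is a small illustrative Python check of \(u\preccurlyeq_{\mathsf{P}}v\) written directly from the decomposition above (a sketch for experimentation, not part of the paper): it tries all positions at which each letter of \(u\) can be matched in \(v\), only allows skipping letters whose priority does not exceed that of the next matched letter, and requires \(v\) to end exactly at the last matched letter.

```python
from functools import lru_cache

def priority_embeds(u, v, prio):
    """Decide u <=_P v over a priority alphabet; `prio` maps each letter to an int."""
    if not u:                       # by definition, the empty word is below every word
        return True

    @lru_cache(maxsize=None)
    def match(i, j):
        # embed u[i:] into v[j:] such that v ends exactly at the match of u[-1]
        if i == len(u):
            return j == len(v)
        p = prio[u[i]]
        for k in range(j, len(v)):
            if v[k] == u[i] and match(i + 1, k + 1):
                return True
            if prio[v[k]] > p:      # letters skipped before u[i] may not outrank it
                return False
        return False

    return match(0, 0)

# Example with priorities 0 < 1: '0' is a plain subword of '10', but it is not
# priority-embedded, since the higher-priority '1' in front of it cannot be dropped.
prio = {"0": 0, "1": 1}
assert priority_embeds("10", "0100", prio)
assert not priority_embeds("0", "10", prio)
```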
Downward closureWe define the _subword downward closure_ and _priority downward closure_ for a language \(L\subseteq\Sigma^{*}\) as follows: \[L\!\!\downarrow:=\{u\in\Sigma^{*}\ |\ \exists\ v\in L\colon u\ \preccurlyeq v\}, \qquad\quad L\!\!\downarrow_{\mathsf{P}}:=\{u\in\Sigma^{*}\ |\ \exists\ v\in L\colon u\ \preccurlyeq v\}.\] The following is the starting point for our investigation: It shows that for every language \(L\), there exist finite automata for its downward closures w.r.t. \(\preccurlyeq\) and \(\preccurlyeq_{\mathsf{P}}\). Every subword downward closed sets and every priority downward closed set is regular. For the subword order, this was shown by Haines [20]. The same idea applies to the priority ordering: A downward closed set is the complement of an upward closed set. Therefore, and since every upward closed set in a well-quasi ordering has finitely many minimal elements, it suffices to show that the set of all words above a single word is a regular language. This, in turn, is shown using a simple automaton construction. In Appendix A, we prove an analogue of this for the block ordering (Lemma 3). We stress that Lemma 2 is not effective: It does not guarantee that finite automata for downward closures can be computed for any given language. In fact, there are language classes for which they are not computable, such as reachability sets of _lossy channel systems_ and _Church-Rosser languages_[16, 26]. Therefore, our focus will be on the question of how to effectively compute automata for priority downward closures. ## 3 The Block Order We first define the block order formally and then give the intuition behind the definition. Let \(\Sigma\) be a finite alphabet, and \(\mathcal{P}=[0,d]\) be a set of priorities with a total order \(\lessdot\). Then for \(u,v\in\Sigma^{*}\), where maximum priority occurring among \(u\) and \(v\) is \(p\), we say \(u\preccurlyeq_{\mathsf{B}}v\), if 1. if \(u,v\in\Sigma_{=p}^{*}\), and \(u\preccurlyeq v\), or 2. if \[u = u_{0}x_{0}u_{1}x_{1}\cdots x_{n-1}u_{n}\] and, \(v = v_{0}y_{0}v_{1}y_{1}\cdots y_{m-1}v_{m}\) where \(x_{0},\ldots x_{n-1},y_{0},\ldots,y_{m-1}\in\Sigma_{=p}\), and for all \(i\in[0,n]\), we have \(u_{i},v_{i}\in\Sigma_{\leq p-1}^{*}\) (the \(u_{i}\) and \(v_{i}\) are called _sub-\(p\)_ blocks), and there exists a strictly monotonically increasing map \(\phi:[0,n]\to[0,m]\), which we call the _witness block map_, such that 1. \(u_{i}\preccurlyeq_{\mathsf{B}}v_{\phi(i)}\), \(\forall i\), 2. \(\phi(0)=0\), 3. \(\phi(n)=m\), and * \(x_{i}\prec v_{\phi(i)}y_{\phi(i)}v_{\phi(i)+1}\cdots v_{\phi(i+1)}\), \(\forall i\in[0,n-1]\). Intuitively, we say that \(u\) is _block smaller_ than \(v\), if either * both words have letters of same priority, and \(u\) is a subword of \(v\), or, * the largest priority occurring in both words is \(p\). Then we split both words along the priority \(p\) letters, to obtain sequences of sub-\(p\) blocks of words, which have words of strictly less priority. Then by item \(\mathrm{iia}\), we embed the sub-\(p\) blocks of \(u\) to those of \(v\), such that they are recursively block smaller. Then with items \(\mathrm{iib}\) and \(\mathrm{iic}\), we ensure that the first (and last) sub-\(p\) block of \(u\) is embedded in the first (resp., last) sub-\(p\) block of \(v\). We will see later that this constraint allows the order to be multiplicative. Finally, by item \(\mathrm{iid}\), we ensure that the letters of priority \(p\) in \(u\) are preserved in \(v\), i.e. 
every \(x_{i}\) indeed occurs between the embeddings of the sub-\(p\) block \(u_{i}\) and \(u_{i+1}\). Consider the alphabet \(\Sigma=\{0^{a},0^{b},1^{a},1^{b},2^{a},2^{b}\}\) with priority set \(\mathcal{P}=[0,2]\) and \(\Sigma_{=i}=\{i^{a},i^{b}\}\). In the following examples, the color helps to identify the largest priority occurring in the words. First, notice that \(\epsilon\prec_{\mathsf{B}}0^{a}\prec_{\mathsf{B}}0^{a}0^{b}\), and hence \[1^{\circ}0^{a}\prec_{\mathsf{B}}0^{a}1^{\circ}0^{a}0^{a}1^{a}0^{a}0^{b}, \mathsf{but} 1^{\circ}0^{a}\not\prec_{\mathsf{B}}0^{a}1^{\circ}0^{a}0^{a}1^{ \circ}0^{b}0^{b}.\] This is because \(0^{a}\not\prec_{\mathsf{B}}0^{b}0^{b}\), i.e. the last sub-\(1\) block of the former word cannot be mapped to the last sub-\(1\) block of the latter word. As another example, we have \[2^{a}1^{b}0^{a}\prec_{\mathsf{B}}0^{a}2^{\circ}0^{a}1^{b}0^{a}0^{a}1^{a}0^{a}0 ^{b}, \mathsf{but} 2^{a}1^{b}0^{a}\not\prec_{\mathsf{B}}0^{a}2^{\circ}0^{a}1^{b}0 ^{a}0^{a}1^{a}0^{a}0^{b}.\] This is because \(2^{a}\) does not exist in the latter word, violating item \(\mathrm{iid}\). Finally, notice that \[1^{a}1^{b}\not\prec_{\mathsf{B}}1^{a}2^{\circ}1^{b}, \tag{1}\] because the sub-\(2\) block \(1^{a}1^{b}\) would have to be mapped to a single sub-\(2\) block in the right-hand word; but none of them can accomodate \(1^{a}1^{b}\). Note that by items \(\mathrm{iid}\) and \(\mathrm{iia}\), we have that \(u\prec_{\mathsf{B}}v\implies u\prec v\), for all \(u,v\in\Sigma^{*}\). Then there exists a position mapping \(\rho\) from \([0,|u|]\) to \([0,|v|]\) such that \(u[i]=v[\rho(i)]\), for all \(i\). We say that a position mapping _respects block order_ if for all \(i\), \(v[\rho(i),\rho(i+1)]\) contains letters of priorities smaller than \(u[i]\) and \(u[i+1]\). It is easy to observe that if \(u\prec_{\mathsf{B}}v\), then there exists a position mapping from \(u\) to \(v\) respecting the block order. The following is a straightforward repeated application of Higman's Lemma [22] (see Appendix A). \((\Sigma^{*},\prec_{\mathsf{B}})\) is a WQO. In fact, the block order is multiplicative, i.e. for all \(u,v,u^{\prime},v^{\prime}\in\Sigma^{*}\) such that \(u\prec_{\mathsf{B}}u^{\prime}\) and \(v\prec_{\mathsf{B}}v^{\prime}\), it holds that \(uv\prec_{\mathsf{B}}u^{\prime}v^{\prime}\). \((\Sigma^{*},\prec_{\mathsf{B}})\) is a multiplicative WQO. Proof.: For singleton \(\mathcal{P}\), the result trivially holds because it coincides with the subword order. Let \((\Sigma^{*}_{\leq p-1},\prec_{\mathsf{B}})\) be multiplicative. Now we show that \((\Sigma^{*}_{\leq p},\prec_{\mathsf{B}})\) is multiplicative. To this end, let \(u\prec_{\mathsf{B}}u^{\prime}\), \(v\prec_{\mathsf{B}}v^{\prime}\), and \(\phi,\psi\) be the witnessing block maps respectively. We assume \[u = u_{0}x_{0}u_{1}x_{1}u_{2}x_{2}\cdots x_{k-1}u_{k}\] \[v = v_{0}y_{0}v_{1}y_{1}v_{2}y_{2}\cdots y_{l-1}v_{l}\] \[u^{\prime} = u^{\prime}_{0}x^{\prime}_{0}u^{\prime}_{1}x^{\prime}_{1}u^{\prime }_{2}x^{\prime}_{2}\cdots x^{\prime}_{k-1}u^{\prime}_{k^{\prime}}\] \[v^{\prime} = v^{\prime}_{0}y^{\prime}_{0}v^{\prime}_{1}y^{\prime}_{1}v^{ \prime}_{2}y^{\prime}_{2}\cdots y^{\prime}_{l-1}v^{\prime}_{l^{\prime}}\] where \(x_{i},y_{i},x_{i}^{\prime},y_{i}^{\prime}\in\Sigma_{=p}\). 
Consider the function \(\delta\colon[0,k+l-1]\to[0,k^{\prime}+l^{\prime}-1]\) with \[i\mapsto\begin{cases}\phi(i),\text{ if }1\leq i\leq k\\ \psi(i-k+1),\text{ if }k<i\leq k+l-1\end{cases}\] Since the \(k^{th}\) sub-\(p\) block of \(u\) and the \(1^{st}\) sub-\(p\) block of \(v\) combines in \(uv\) to form one sub-\(p\) block, we have \(k+l-1\) sub-\(p\) blocks. Similarly, \(u^{\prime}v^{\prime}\) has \(k^{\prime}+l^{\prime}-1\) sub-\(p\) blocks. And hence \(u_{k}v_{1}\prec_{\mathsf{B}}u_{k^{\prime}}^{\prime}v_{1}^{\prime}\), by induction hypothesis. The recursive embedding is obvious for other sub-\(p\) blocks. We also have that \(\delta(0)=0\) and \(\delta(k+l-1)=k^{\prime}+l^{\prime}-1\). By monotonicity of \(\phi\) and \(\psi\), \(\delta\) is also strictly monotonically increasing. Hence, \(\delta\) witnesses \(uv\prec_{\mathsf{B}}u^{\prime}v^{\prime}\). PumpingIn the subword ordering, an often applied property is that for any words \(u,v,w\), we have \(uw\prec uvw\), i.e. inserting any word leads to a superword. This is not true for the block ordering, as we saw in Example 3.1, (1). However, one of our key observations about the block order is the following property: If the word we insert is just a repetition of an existing factor, then this yields a larger word in the block ordering. This will be crucial for our downward closure construction for one-counter automata in Section 5. [Pumping Lemma] For any \(u,v,w\in\Sigma^{*}\), we have \(uvw\prec_{\mathsf{B}}uvvw\). Before we prove Lemma 3.4, let us note that by applying Lemma 3.4 multiple times, this implies that we can also repeat multiple factors. For instance, if \(w=w_{1}w_{2}w_{3}w_{4}w_{5}\), then \(w\prec_{\mathsf{B}}w_{1}w_{2}^{2}w_{3}w_{4}^{3}w_{5}\). Figure 1 shows an example on how to choose the witness block map. Proof.: We proceed by induction on the number of priorities. If there is just a single priority (i.e. \(\mathcal{P}=\{0\}\)), then \(\prec_{\mathsf{B}}\) coincides with \(\prec\) and the statement is trivial. Let us assume the lemma is established for words with up to \(n\) priorities. We distinguish two cases. * Suppose \(v\) contains only letters of priorities \([0,n]\). Then repeating \(v\) means repeating a factor inside a sub-\((n+1)\) block, which is a word with priorities in \([0,n]\). Hence, the statement follows by induction: Formally, this means we can use the embedding mapping that sends block \(i\) of \(uvw\) to block \(i\) of \(uvvw\). * Suppose \(v\) contains a letter of priority \(n+1\). write \(v=v_{0}x_{1}v_{1}\cdots x_{m}v_{m}\), where \(x_{1},\ldots,x_{m}\) are the letters of priority \(n+1\) in \(v\) and \(v_{0},\ldots,v_{m}\) are the sub-\((n+1)\) blocks of \(v\). Then: \[uvw=uv_{0}x_{1}\cdots v_{m-1}x_{m}v_{m}w,\ \ uvvw=uv_{0}x_{1}\cdots v_{m-1}x_{m} \underbrace{v_{m}v_{0}x_{1}\cdots v_{m-1}x_{m}}_{\text{skipped}}v_{m}w.\] The idea is simple: Our witness block map just skips the \(m\) sub-\((n+1)\) blocks inside of \(v_{m}v_{0}x_{1}\cdots v_{m-1}x_{m}\). Thus, the sub-\((n+1)\) blocks in \(uv_{0}x_{1}\cdots v_{m-1}x_{m}\) are mapped to the same blocks in \(uv_{0}x_{1}\cdots v_{m-1}x_{m}\), and the sub-\((n+1)\) blocks in \(v_{m}w\) are mapped to the same blocks in \(v_{m}w\). This is clearly a valid witness block map, since the first (resp. last) sub-\((n+1)\) block is mapped to the first (resp. last), and each sub-\((n+1)\) block is mapped to an identical sub-\((n+1)\) block. 
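To make the sub-\(p\) block decomposition used throughout this section concrete, here is a small illustrative helper (not from the paper) that splits a word at its highest-priority letters; this decomposition is the first step of any block-order comparison and of the pumping argument above.

```python
def split_blocks(w, prio):
    """Split w at its maximal-priority letters: returns (blocks, separators) with
    w == blocks[0] + seps[0] + blocks[1] + ... + seps[-1] + blocks[-1]."""
    if not w:
        return [""], []
    p = max(prio[c] for c in w)
    blocks, seps, cur = [], [], ""
    for c in w:
        if prio[c] == p:
            blocks.append(cur)
            seps.append(c)
            cur = ""
        else:
            cur += c
    blocks.append(cur)
    return blocks, seps

# With priorities given by the digit itself, "120110" splits at its single '2':
# the sub-2 blocks "1" and "0110" are then compared recursively.
print(split_blocks("120110", {c: int(c) for c in "012"}))
# (['1', '0110'], ['2'])
```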
Regular downward closuresAs for \(\prec\) and \(\prec_{\mathsf{P}}\), we define \(L\mathord{\downarrow}_{\mathsf{B}}=\{u\in\Sigma^{*}\mid\exists v\in L\colon u \prec_{\mathsf{B}}v\}\) for any \(L\subseteq\Sigma^{*}\). For every \(L\subseteq\Sigma^{*}\), \(L\mathord{\downarrow}_{\mathsf{B}}\) is a regular language. For the proof of Lemma 3.5, one can argue as mentioned above: The complement \(\Sigma^{*}\setminus(L\mathord{\downarrow}_{\mathsf{B}})\) of \(L\mathord{\downarrow}_{\mathsf{B}}\) is upward closed. And since \(\prec_{\mathsf{B}}\) is a WQO, \(\Sigma^{*}\setminus(L\mathord{\downarrow}_{\mathsf{B}})\) has finitely many minimal elements. It thus remains to show that for each word \(w\in\Sigma^{*}\), the set of words \(\prec_{\mathsf{B}}\)-larger than \(w\) is regular, which is a simple exercise. Details can be found in Appendix A. Block order vs. priority orderWe will later see (Theorem 4.4) that under mild conditions, computing priority downward closures reduces to computing block downward closures. The following lemma is the main technical ingredient in this: It shows that the block order refines the priority order on words that end in the same letter, assuming the alphabet has a certain shape. A priority alphabet \((\Sigma,\mathcal{P})\) with \(\mathcal{P}=[1,d]\) is called _flat_ if \(|\Sigma_{=i}|=1\) for each \(i\in[1,d]\). If \(\Sigma\) is flat and \(u,v\in\Sigma^{*}a\) for some \(a\in\Sigma\), then \(u\preccurlyeq_{\mathsf{B}}v\) implies \(u\preccurlyeq v\). Proof.: Since \(u\preccurlyeq_{\mathsf{B}}v\), there exists a witness position mapping \(\rho\) that maps the positions of the letters in \(u\) to that of \(v\), such that it respects the block order, and it maps the last position of \(u\) to the last of \(v\). Let \(u=u_{0}u_{1}\cdots u_{k}\). We say that a position mapping violates the priority order at position \(i\) (for \(i\in[0,k-1]\)), if \(v[\rho(i)+1,\rho(i+1)]\) has a letter of priority higher than that of \(u[i+1]\). Note that if \(\rho\) does not violate the priority order at any position, then \(u\preccurlyeq v\). Let \(i\) be the largest position at which \(\rho\) violates the priority order, i.e. \(v[\rho(i)+1,\rho(i+1)]\) has a letter of priority higher than that of \(u[i+1]\). We show that if \(\rho\) respects the block order till position \(i\), there exists another witness position mapping \(\rho^{\prime}\) that respects the block order till position \(i-1\), and has one few position of violation (i.e. no violation at position \(i\)). We first observe that \(u[i]>u[i+1]\), which holds since \(\rho\) respects the block order till position \(i\), implying that \(v[\rho(i)+1,\rho(i+1)]\) does not have a letter of priority higher than \(min\{u[i],u[i+1]\}\), and if \(u[i]\leq u[i+1]\), \(\rho\) does not violate the priority order at \(i\). Then observe that \(v[\rho(i)+1,\rho(i+1)]\) does not have a letter with priority \(p\), where \(u[i]>p>u[i+1]\), otherwise the sub-\(u[i]\) block of \(u\) immediately after \(u[i]\), can not be embedded to that of \(v\) immediately after \(v[\rho(i)]\), since it would have to be split along \(p\), and the first sub-\(p\) block in \(v\) will not be mapped to any in \(u\). Then \(v[\rho(i)+1,\rho(i+1)]\) has letter of priority \(u[i]\) (for a violation at \(i\)). Then consider the mapping \(\rho^{\prime}\) that maps \(i\) to the last \(u[i]\) letter in \(v[\rho(i)+1,\rho(i+1)]\) (say at \(v[j]\) for some \(j\), \(\rho(i)+1\leq j\leq\rho(i+1)\)). 
This mapping respects the block order till position \(i-1\), trivially, as we do not change the mapping before \(i\). We show that there is no priority order violation at position \(i\). This holds because the only larger priority letter occurring in \(v[\rho(i)+1,\rho(i+1)]\) was \(u[i]\), and due to the definition of \(\rho^{\prime}\), \(v[\rho^{\prime}(i)+1,\rho^{\prime}(i+1)]\) has no letter of priority higher than \(u[i+1]\). Since we do not change the mapping after position \(i\), \(\rho^{\prime}\) does not introduce a violation at any position after \(i\). Hence we have a new position mapping that has one few position of priority order violation. We want to stress that the flatness assumption in Lemma 3.2 is crucial: Consider the alphabet \(\Sigma\) from the Example 3.1. Then \(1^{a}0^{a}\preccurlyeq_{\mathsf{B}}1^{a}1^{b}0^{a}\), but \(1^{a}0^{a}\noteq_{\mathsf{P}}1^{a}1^{b}0^{a}\). Here only one position mapping exists, and it is not possible to remap \(1^{a}\) to \(1^{b}\) since they are two distinct letters of same priority. Hence, we need to assume that each priority greater than zero has at most one letter. Figure 1: Here \(\Sigma=[0,2]\), \(\mathcal{P}=[0,2]\), and \(A_{i}=\{i\}\), \(w=12(01)21(121)0\) and \(w^{\prime}=12(01)^{2}21(121)^{3}0\). The repeated segments are marked in red, and the arrows denote the witness block map. ## 4 Regular Languages In this section, we show how to construct an NFA for the block downward closure of a regular language. To this end, we show that both orders are rational transductions. Rational transductionsA _finite state transducer_ is a tuple \(\mathcal{A}=(Q,X,Y,E,q_{0},F)\), where \(Q\) is a finite set of states, \(X\) and \(Y\) are _input_ and _output alphabets_, respectively, \(E\) is the set of _edges_ i.e. finite subset of \(Q\times X^{*}\times Y^{*}\times Q\), \(q_{0}\in Q\) is the _initial state_, and \(F\subseteq Q\) is the set of _final states_. A _configuration_ of \(\mathcal{A}\) is a triple \((q,u,v)\in Q\times X^{*}\times Y^{*}\). We write \((q,u,v)\rightarrow_{\mathcal{A}}(q^{\prime},u^{\prime},v^{\prime})\), if there is an edge \((q,x,y,q^{\prime})\) with \(u^{\prime}=ux\) and \(v^{\prime}=vy\). If there is an edge \((q,x,y,q^{\prime})\), we sometimes denote this fact by \(q\xrightarrow{(x,y)}_{\mathcal{A}}q^{\prime}\), and say "read \(x\) at \(q\), output \(y\), and goto \(q^{\prime}\)". The _size of a transducer_, denoted by \(|\mathcal{A}|\), is the number of its states. A _transduction_ is a subset of \(X^{*}\times Y^{*}\) for some finite alphabets \(X,Y\). The _transduction defined by \(\mathcal{A}\)_ is \(\mathcal{T}(\mathcal{A})=\{(u,v)\in X^{*}\times Y^{*}\mid(q_{0},\epsilon, \epsilon)\rightarrow^{*}_{\mathcal{A}}(f,u,v)\text{ for some }f\in F\}\). A transduction is called _rational_ if it is defined by some finite-state transducer. Sometimes we abuse the notation and output a regular language \(R\subseteq Y^{*}\) on an edge, instead of a letter. It should be noted that this abuse is equivalent to original definition of finite state transducers. We say that a language class \(\mathcal{C}\) is _closed under rational transductions_ if for each language \(L\in\mathcal{C}\), and each rational transduction \(R\subseteq X^{*}\times Y^{*}\), _the language obtained by applying the transduction \(R\)_ to \(L\), \(RL\stackrel{{ def}}{{=}}\{v\in Y^{*}\mid(u,v)\in R\text{ for some }u\in L\}\) also belongs to \(\mathcal{C}\). We call such language classes _full trio_. 
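For illustration, the sketch below (not from the paper) applies a transduction to a regular language via the usual product construction, under the simplifying assumption that every edge of the NFA and of the transducer carries a single letter or \(\epsilon\); the dictionary encoding of the machines is an assumption made only for this example.

```python
def apply_transduction(nfa, trans):
    """NFA for the image of L(nfa) under the transduction of `trans`.
    nfa: {'states', 'edges' (set of (p, a, p2)), 'init', 'final'};
    trans: {'states', 'edges' (set of (q, read, out, q2)), 'init', 'final'}."""
    EPS = ""
    edges = set()
    for (q, read, out, q2) in trans["edges"]:
        if read == EPS:                              # transducer moves without reading
            for p in nfa["states"]:
                edges.add(((p, q), out, (p, q2)))
        else:                                        # synchronize on the read letter
            for (p, a, p2) in nfa["edges"]:
                if a == read:
                    edges.add(((p, q), out, (p2, q2)))
    for (p, a, p2) in nfa["edges"]:                  # epsilon moves of the NFA alone
        if a == EPS:
            for q in trans["states"]:
                edges.add(((p, q), EPS, (p2, q)))
    return {
        "states": {(p, q) for p in nfa["states"] for q in trans["states"]},
        "edges": edges,
        "init": (nfa["init"], trans["init"]),
        "final": {(p, q) for p in nfa["final"] for q in trans["final"]},
    }
```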
Regular languages, context-free languages, recursively enumerable languages are some examples of full trios [9]. Transducers for ordersIt is well-known that the subword order is a rational transduction, i.e. the relation \(T=\{(u,v)\in X^{*}\times X^{*}\mid v\preccurlyeq u\}\) is defined by a finite-state transducer. For example, it can be defined by a one-state transducer that can non-deterministically decide to output or drop each letter. Note that on applying the transduction to any language, it gives the subword downward closure of the language. This means, for every \(L\subseteq X^{*}\), we have \(TL=L\mathord{\downarrow}\). We will now describe analogous transducers for the priority and block order. Given a priority alphabet with priorities \([0,k]\), one can construct in polynomial time a transducer for \(\preccurlyeq_{\mathsf{B}}\) and a transducer for \(\preccurlyeq_{\mathsf{P}}\), each of size \(\mathcal{O}(k)\). Proof.: The transducers for the block and priority order are similar. Intuitively, both remember the maximum of the priorities dropped or to be dropped, and keep or drop the coming letters accordingly. We show the transducer for the priority order here since it is applied in Theorem 4. The transducer for the block order is detailed in Appendix B. Let \(\Sigma\) be a finite alphabet, with priorities \(\mathcal{P}=[0,k]\). Consider the transducer that has one state for every priority, a non-final sink state, and a distinguished final state. If the transducer is in the state for priority \(r\) and reads a letter \(a\) of priority \(s\), then * if \(s<r\), then it outputs nothing and stays in state \(r\), * if \(s\geq r\), then it can output nothing, and go to state \(s\), * if \(s\geq r\), it can also output \(a\), and go to state \(0\), or the accepting state non-deterministically, * for any other scenario, goes to the sink state. The priority \(0\) state is the initial state. Intuitively, the transducer remembers the largest priority letter that has been dropped, and keeps only a letter of higher priority later. To be accepting, it has to read the last letter to go to the accepting final state. The following theorem states that the class of regular languages form a full trio. **Theorem 4.2** ([27, Corollary 3.5.5]).: _Given an NFA \(\mathcal{A}\) and a transducer \(\mathcal{B}\), we can construct in polynomial time an NFA of size \(|\mathcal{A}|\cdot|\mathcal{B}|\) for \(\mathcal{T}(\mathcal{B})(\mathcal{L}(\mathcal{A}))\)._ Theorems 4.1 and 4.2 give us a polynomial size NFA recognizing the priority and block downward closure of a regular language, which is computable in polynomial time as well. **Theorem 4.3**.: _Priority and block downward closures for regular languages are effectively computable in time polynomial in the number of states in the NFA recognizing the language._ Theorem 4.3 and Lemma 3.6 now allow us to reduce the priority downward closure computability to computability for block order. **Theorem 4.4**.: _If \(\mathcal{C}\) is a full trio and we can effectively compute block downward closures for \(\mathcal{C}\), then we can effectively compute priority downward closures._ Proof.: The key idea is to reduce priority downward closure computation to the setting where (i) all words end in the same letter and (ii) the alphabet is flat. Since by Lemma 3.6, on those languages, the block order is finer than the priority order, computing the block order will essentially be sufficient. Let us first establish (i). Let \(L\in\mathcal{C}\). 
Then for each \(a\in\Sigma\), the language \(L_{a}=L\cap\Sigma^{*}a\) belongs to \(\mathcal{C}\). Since \(L=\bigcup_{a\in\Sigma}L_{a}\cup E\), where \(E=\{\epsilon\}\) if \(\epsilon\in L\) and \(E=\emptyset\) otherwise, and thus \(L\mathord{\downarrow_{\mathsf{P}}}=\bigcup_{a\in\Sigma}L_{a}\mathord{\downarrow_{\mathsf{P}}}\cup E\), it suffices to compute priority downward closures for each \(L_{a}\). This means that it suffices to compute priority downward closures for languages where all words end in the same letter. To achieve (ii), we make the alphabet flat. We say that \((\Sigma,\mathcal{P}^{\prime})\) is the _flattening_ of \((\Sigma,\mathcal{P}=[0,d])\), if \(\mathcal{P}^{\prime}\) is obtained by choosing a total order on \(\Sigma\) such that if \(a\) has smaller priority than \(b\) in \((\Sigma,\mathcal{P})\), then \(a\) has smaller priority than \(b\) in \((\Sigma,\mathcal{P}^{\prime})\). (In other words, we pick an arbitrary linearization of the quasi-order on \(\Sigma\) that expresses "has smaller priority than".) Then, we assign priorities based on this total ordering. Let \(\prec_{\mathsf{B}}^{\mathsf{flat}}\) and \(\prec_{\mathsf{P}}^{\mathsf{flat}}\) denote the block order and priority order, resp., based on the flat priority assignment. It is a simple observation that for \(u,v\in\Sigma^{*}\), we have that \(u\prec_{\mathsf{P}}^{\mathsf{flat}}v\) implies \(u\prec_{\mathsf{P}}v\). Now observe that for \(u,v\in L_{a}\), Lemma 3.6 tells us that \(u\prec_{\mathsf{B}}^{\mathsf{flat}}v\) implies \(u\prec_{\mathsf{P}}^{\mathsf{flat}}v\) and therefore also \(u\prec_{\mathsf{P}}v\). This implies that \((L_{a}\mathord{\downarrow_{\mathsf{B}}^{\mathsf{flat}}})\mathord{\downarrow_{\mathsf{P}}}=L_{a}\mathord{\downarrow_{\mathsf{P}}}\). By assumption, we can compute a finite automaton \(\mathcal{A}\) with \(\mathcal{L}(\mathcal{A})=L_{a}\mathord{\downarrow_{\mathsf{B}}^{\mathsf{flat}}}\). Since then \(\mathcal{L}(\mathcal{A})\mathord{\downarrow_{\mathsf{P}}}=(L_{a}\mathord{\downarrow_{\mathsf{B}}^{\mathsf{flat}}})\mathord{\downarrow_{\mathsf{P}}}=L_{a}\mathord{\downarrow_{\mathsf{P}}}\), we can compute \(L_{a}\mathord{\downarrow_{\mathsf{P}}}\) by applying Theorem 4.3 to \(\mathcal{A}\). ## 5 One-counter Languages In this section, we show that for the class of languages accepted by one-counter automata, which form a full trio [9, Theorem 4.4], the block and priority downward closures can be computed in polynomial time. We prove the following theorem. **Theorem 5.1**.: _Given an OCA \(\mathcal{A}\), \(\mathcal{L}(\mathcal{A})\mathord{\downarrow_{\mathsf{B}}}\) and \(\mathcal{L}(\mathcal{A})\mathord{\downarrow_{\mathsf{P}}}\) are computable in polynomial time._ Here, the difficulty is that existing downward closure constructions exploit the fact that inserting arbitrary letters into a word yields a superword. However, for the block order, this might not be true: Introducing high-priority letters might split a block unintentionally. However, we observe that the subword closure construction from [3] can be modified so that when constructing larger runs (to show that our NFA only accepts words in the downward closure), we only repeat existing factors. Lemma 3.4 then yields that the resulting word is block-larger.
According to Theorem 4.4, it suffices to show that block downward closures are computable in polynomial time (an inspection of the proof of Theorem 4.4 shows that computing the priority downward closure only incurs a polynomial overhead). One-counter automata.One-counter automata are finite state automata with a counter that can be incremented, decremented, or tested for zero. Formally, a _one-counter automaton (OCA)_\(\mathcal{A}\) is a \(5\)-tuple \((Q,\Sigma,\delta,q_{0},F)\) where \(Q\) is a finite set of states, \(q_{0}\in Q\) is an initial state, \(F\subseteq Q\) is a set of final states, \(\Sigma\) is a finite alphabet and \(\delta\subseteq Q\times(\Sigma\cup\{\epsilon\})\times\{-1,0,+1,z\}\times Q\) is a set of transitions. Transitions \((p_{1},a,s,p_{2})\in\delta\) are classified as _incrementing_\((s=+1)\), _decrementing_\((s=-1)\), _internal_\((s=0)\), or _test for zero\((s=z)\)_. A _configuration_ of an \(OCA\) is a pair that consists of a state and a (non-negative) counter value, i.e., \((q,n)\in Q\times\mathbb{N}\). A sequence \(\pi=(p_{0},c_{0}),t_{1},(p_{1},c_{1}),t_{2},\cdots,t_{m},(p_{m},c_{m})\) where \((p_{i},c_{i})\in Q\times\mathbb{Z}\), \(t_{i}\in\delta\) and \((p_{i-1},c_{i-1})\xrightarrow{t_{i}}(p_{i},c_{i})\) is called: * a _quasi-run_, denoted \(\pi=(p_{0},c_{0})\xrightarrow[]{w}\mathcal{A}\)\((p_{m},c_{m})\), if none of \(t_{i}\) is a test for zero; * a _run_, denoted \(\pi=(p_{0},c_{0})\xrightarrow[]{w}\mathcal{A}\)\((p_{m},c_{m})\), if all \((p_{i},c_{i})\in Q\times\mathbb{N}\). For any quasi-run \(\pi\) as above, the sequence of transitions \(t_{1},\cdots,t_{m}\) is called a _walk_ from the state \(p_{0}\) to the state \(p_{m}\). A run \((p_{0},c_{0})\xrightarrow[]{w}(p_{m},c_{m})\) is called _accepting_ in \(\mathcal{A}\) if \((p_{0},c_{0})=(q_{0},0)\) where \(q_{0}\) is the initial state of \(\mathcal{A}\) and \(p_{m}\) is a final state of \(\mathcal{A}\), i.e. \(p_{m}\in F\). In such a case, the word \(w\) is _accepted_ by \(\mathcal{A}\). Simple one-counter automataAs we will show later, computing block downward closures of OCA easily reduces to the case of simple OCA. A _simple OCA (SOCA)_ is defined analogously to OCA, with the differences that (i) there are no zero tests, (ii) there is only one final state, (iii) for acceptance, the final counter value must be zero. We first show that the block downward closures can be effectively computed for the simple one-counter automata languages. Given a simple OCA \(\mathcal{A}\), we can compute \(\mathcal{L}(\mathcal{A})\downarrow_{\mathcal{B}}\) in polynomial time. We present a rough sketch of the construction, full details can be found in Appendix C. The starting point of the construction is the one for subwords in [3], but the latter needs to be modified in a non-obvious way using Lemma 3. Let \(\mathcal{A}=(Q,\Sigma,\delta,q_{0},q_{f})\) be a simple OCA, with \(|Q|=K\). We construct an NFA \(\mathcal{B}\) that can simulate \(\mathcal{A}\) in three different modes. In the first mode, it simulates \(\mathcal{A}\) until the counter value reaches \(K\), and when the value reaches \(K+1\), it switches to the second mode. The second mode simulates \(\mathcal{A}\) while the counter value stays below \(K^{2}+K+1\). Moreover, and this is where our construction differs from [3]: if \(\mathcal{B}\) is in the second mode simulating \(\mathcal{A}\) in some state \(q\), then \(\mathcal{B}\) can spontaneously execute a loop from \(q\) to \(q\) of \(\mathcal{A}\) while ignoring its counter updates. 
When the counter value in the second mode drops to \(K\) again, \(\mathcal{B}\) non-deterministically switches to the third mode to simulate \(\mathcal{A}\) while the counter value stays below \(K\). Thus, \(\mathcal{B}\) only needs to track counter values in \([0,K^{2}+K+1]\), meaning they can be stored in its state. We claim that then \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\subseteq\mathcal{L }(\mathcal{A})\downarrow_{\mathcal{B}}\). \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\). If a word in \(\mathcal{L}(\mathcal{A})\) has a run with counters bounded by \(K^{2}+K+1\), then it trivially belongs to \(\mathcal{L}(\mathcal{B})\). If the counters go beyond \(K^{2}+K+1\), then with the classical "unpumping" argument, one can extract two loops, one increasing the counter, one decreasing it. These loops can then be simulated by the spontaneous loops in the second mode of \(\mathcal{B}\). The more interesting inclusion is the following: \(\mathcal{L}(\mathcal{B})\subseteq\mathcal{L}(\mathcal{A})\downarrow_{ \mathcal{B}}\). We have to show that each spontaneous loop in \(\mathcal{B}\) can be justified by padding the run with further loop executions so as to obtain a run of \(\mathcal{A}\). This is possible because to execute such a spontaneous loop, we must have gone beyond \(K\) and later go to zero again. Thus, there exists a "pumping up" loop adding, say \(k\geq 0\) to the counter, and a "pumping down" loop, subtracting, say \(\ell\geq 0\) from the counter. We can therefore repeat all spontaneous loops so often that their effect -- when seen as transitions in \(\mathcal{A}\) -- is a (positive or negative) multiple \(M\) of \(k\cdot\ell\). Then, we execute the \(k\)- and the \(\ell\)-loop so often so as to get the counter values so high that (i) our repeated spontaneous loops never cross zero and (ii) the effect difference of the new loops is exactly \(M\). Since in our construction (in contrast to [3]), the padding only _repeated words that already exist_ in the run of \(\mathcal{B}\), Lemma 3.4 implies that the word of \(\mathcal{B}\) embeds via the block order. General OcaLet us now show how to construct the block downward closure of general OCAs. Suppose we are given an OCA \(\mathcal{A}\). For any two states \(p,q\), consider the simple OCA \(\mathcal{A}_{p,q}\) obtained from \(\mathcal{A}\) by removing all zero tests, making \(p\) initial, and \(q\) final. Then \(\mathcal{L}(\mathcal{A})\) is the set of words read from \((p,0)\) to \((q,0)\) without using zero tests. We now compute for each \(p,q\) a finite automaton \(\mathcal{B}_{p,q}\) for the block downward closure of \(\mathcal{A}_{p,q}\). Clearly, we may assume that \(\mathcal{B}_{p,q}\) has exactly one initial state and one final state. Finally, we obtain the finite automaton \(\mathcal{B}\) from \(\mathcal{A}\) as follows: We remove all transitions _except_ the zero tests. Each zero test from \(p\) to \(q\) is replaced with an edge \(p\xrightarrow{\epsilon}q\). Moreover, for any states \(p\) and \(q\) coming from \(\mathcal{A}\), we glue in the automaton \(\mathcal{B}_{p,q}\) (by connecting \(p\) with \(\mathcal{B}_{p,q}\)'s initial state and connecting \(\mathcal{B}_{p,q}\)'s final state with \(q\)). Then, since the block order is multiplicative, we have that \(L(\mathcal{B})\) accepts exactly the block downward closure of \(\mathcal{A}\). 
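As a small illustration of the core bookkeeping in the construction above, the sketch below folds counter values up to a bound into the state of an NFA, which is what allows \(\mathcal{B}\) to track values in \([0,K^{2}+K+1]\) with finitely many states. It deliberately omits the spontaneous-loop edges and the three-mode control of the full construction, and the dictionary encoding of the simple OCA is an assumption made for illustration only.

```python
def fold_counter(soca, bound):
    """NFA over states (q, c) with 0 <= c <= bound that simulates a simple OCA
    exactly as long as its counter stays within the bound.
    soca: {'states', 'delta' (set of (p, a, op, q) with op in {-1, 0, +1}),
           'init', 'final'}; a simple OCA accepts with counter value zero."""
    states = {(q, c) for q in soca["states"] for c in range(bound + 1)}
    edges = set()
    for (p, a, op, q) in soca["delta"]:
        for c in range(bound + 1):
            c2 = c + op
            if 0 <= c2 <= bound:
                edges.add(((p, c), a, (q, c2)))
    return {
        "states": states,
        "edges": edges,
        "init": (soca["init"], 0),
        "final": {(soca["final"], 0)},
    }
```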
Furthermore, note that since our construction for simple OCA is polynomial, the general case is as well: the latter applies the former to \(|Q|^{2}\) simple OCAs.

## 6 Context-free Languages

The key trick in our construction for OCA was that we could modify the subword construction so that the overapproximating NFA \(\mathcal{B}\) has the property that in any word from \(\mathcal{L}(\mathcal{B})\), we can repeat factors to obtain a word from \(\mathcal{L}(\mathcal{A})\). This was possible because in an OCA, essentially any pair of loops, one incrementing and one decrementing, could be repeated to pad a run. However, in context-free languages, the situation is more complicated. With a stack, any pumping must always ensure that stack contents match: It is not possible to compensate stack effects with just two loops.

In terms of grammars, the core idea for subword closures of context-free languages \(L\) is usually to overapproximate "pump-like" derivations \(X\xRightarrow{*}uXv\) by observing that, up to subwords, they can generate any \(u^{\prime}Xv^{\prime}\) where the letters of \(u^{\prime}\) can occur on the left and the letters of \(v^{\prime}\) can occur on the right in derivations \(X\xRightarrow{*}\cdot X\cdot\). Showing that all such words belong to the downward closure leads to derivations \(X\xRightarrow{*}u^{\prime\prime}\bar{v}Xv^{\prime\prime}\bar{u}\), where \(u^{\prime\prime},v^{\prime\prime}\) are super-words of \(u^{\prime},v^{\prime}\) such that \(X\xRightarrow{*}u^{\prime\prime}X\bar{u}\) and \(X\xRightarrow{*}\bar{v}Xv^{\prime\prime}\) can be derived. The additional infixes could introduce high-priority letters and thus split blocks unintentionally. Therefore, we provide a novel recursive approach to compute the block downward closure by decomposing derivations at high-priority letters. This is non-trivial as this decomposition might not match the decomposition given by derivation trees. Formally, we show:

Given a context-free language \(L\subseteq\Sigma^{*}_{\leq n}\), one can construct a doubly-exponential-sized automaton for \(L\downarrow_{\mathcal{B}}\), and thus also for \(L\downarrow_{\mathcal{P}}\).

We do not know if this doubly exponential upper bound is optimal. A singly-exponential lower bound follows from the subword case: It is known that subword downward closures of context-free languages can require exponentially many states [5]. However, it is not clear whether there is a singly-exponential construction for priority or block downward closures.

We again note that Theorem 4.4 (and its proof) imply that for Theorem 6.1, it suffices to compute a finite automaton for the block downward closure of the context-free language: Computing the priority downward closure then only increases the size polynomially.

#### Grammars

We present the construction using _context-free grammars_, which are tuples \(\mathcal{G}=(N,T,P,S)\), where \(N\) is a finite set of _non-terminal letters_, \(T\) is a finite set of _terminal letters_, \(P\) is a finite set of _productions_ of the form \(X\to w\) with \(X\in N\) and \(w\in(N\cup T)^{*}\), and \(S\) is the _start symbol_. For \(u,v\in(N\cup T)^{*}\), we have \(u\Rightarrow v\) if there is a production \(X\to w\) in \(P\) and \(x,y\in(N\cup T)^{*}\) with \(u=xXy\) and \(v=xwy\). The _language generated by \(\mathcal{G}\)_ is then \(\mathcal{L}(\mathcal{G}):=\{w\in T^{*}\mid S\xRightarrow{*}w\}\), where \(\xRightarrow{*}\) is the reflexive, transitive closure of \(\Rightarrow\).
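As a small illustration of the derivation relation \(\Rightarrow\), the following brute-force enumerator generates \(\mathcal{L}(\mathcal{G})\) up to a length bound. The grammar for \(\{a^{n}b^{n}\mid n\geq 1\}\) and all bounds are illustrative choices only and play no role in the construction.

```python
def derive(productions, start, max_len=6, max_rounds=50):
    """Brute-force enumeration of L(G) up to a length bound, directly following
    the definition of =>: rewrite the leftmost non-terminal using a production.
    max_rounds guards against grammars where this naive search does not converge."""
    nonterminals = set(productions)
    words, frontier, rounds = set(), {(start,)}, 0
    while frontier and rounds < max_rounds:
        rounds += 1
        new_frontier = set()
        for sent in frontier:
            positions = [i for i, x in enumerate(sent) if x in nonterminals]
            if not positions:                    # sentential form is all terminals
                if len(sent) <= max_len:
                    words.add(''.join(sent))
                continue
            i = positions[0]                     # expand the leftmost non-terminal
            for rhs in productions[sent[i]]:
                new = sent[:i] + tuple(rhs) + sent[i + 1:]
                # prune forms that already contain too many terminal letters
                if sum(1 for x in new if x not in nonterminals) <= max_len:
                    new_frontier.add(new)
        frontier = new_frontier
    return words

# Chomsky normal form grammar for {a^n b^n : n >= 1}:
# S -> AB | AT,  T -> SB,  A -> a,  B -> b
G = {'S': [('A', 'B'), ('A', 'T')], 'T': [('S', 'B')],
     'A': [('a',)], 'B': [('b',)]}
print(sorted(derive(G, 'S'), key=len))   # ['ab', 'aabb', 'aaabbb']
```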
#### Assumption on the alphabet

In order to compute block downward closures, it suffices to do this for flat alphabets (see Section 3). The argument is essentially the same as in Theorem 4.4: By flattening the alphabet as in the proof of Theorem 4.4, we obtain a finer block order, so that first computing an automaton for the flat alphabet and then applying Theorem 4.3 to the resulting finite automaton will yield a finite automaton for the original (non-flat) alphabet.

In the following, we will assume that the input grammar \(\mathcal{G}\) is in Chomsky normal form, meaning every production is of the form \(X\to YZ\) for non-terminals \(X,Y,Z\), or of the form \(X\to a\) for a non-terminal \(X\) and a terminal \(a\).

#### Kleene grammars

Suppose we are given a context-free grammar \(\mathcal{G}=(N,\Sigma,P,S)\). Roughly speaking, the idea is to construct another grammar \(\mathcal{G}^{\prime}\) whose language has the same block downward closure as \(\mathcal{L}(\mathcal{G})\), but with the additional property that every word can be generated using a derivation tree that is _acyclic_, meaning that each path contains every non-terminal at most once. Of course, if this were literally true, \(\mathcal{G}^{\prime}\) would generate a finite language. Therefore, we allow a slightly expanded syntax: We allow Kleene stars in context-free productions. This means we allow right-hand sides to contain occurrences of \(B^{*}\), where \(B\) is a non-terminal. The semantics is the obvious one: When applying such a rule, instead of inserting \(B^{*}\), we can generate any \(B^{k}\) with \(k\geq 0\). We call grammars with such productions _Kleene grammars_. A _derivation tree_ in a Kleene grammar is defined as for context-free grammars, aside from the expected modification: If some \(B^{*}\) occurs on a right-hand side, then we allow any (finite) number of \(B\)-labeled children in the respective place. Then indeed, a Kleene grammar can generate infinite sets using acyclic derivation trees. Given a Kleene grammar \(\mathcal{H}\), let \(\mathsf{acyclic}(\mathcal{H})\) be the set of words generated by \(\mathcal{H}\) using acyclic derivation trees.

Given a Kleene grammar \(\mathcal{H}\), one can construct an exponential-sized finite automaton accepting \(\mathsf{acyclic}(\mathcal{H})\).

Proof sketch. The automaton simulates a (say, preorder) traversal of an acyclic derivation tree of \(\mathcal{H}\). This means its state holds the path to the currently visited node in the derivation tree. Since every path has length at most \(|N|\), where \(N\) is the set of non-terminals of \(\mathcal{H}\), the automaton has at most exponentially many states.

Given Lemma 6.2, for Theorem 6.1, it suffices to construct a Kleene grammar \(\mathcal{G}^{\prime}\) of exponential size such that \(\mathsf{acyclic}(\mathcal{G}^{\prime})\downarrow_{\mathcal{B}}=\mathcal{L}(\mathcal{G})\downarrow_{\mathcal{B}}\).

#### Normal form and grammar size

We will ensure that in the constructed grammars, the productions are of the form (i) \(X\to w\), where \(w\) is a word of length \(\leq 3\) consisting of non-terminals \(Y\) or Kleene stars \(Y^{*}\), or (ii) \(X\to a\) where \(a\) is a terminal. This means the total size of the grammar is always polynomial in the number of non-terminals. Therefore, to analyze the complexity, it will suffice to measure the number of non-terminals.
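To make the notions of Kleene grammar and acyclic derivation tree concrete, the following sketch enumerates words derivable by acyclic derivation trees, with each Kleene star truncated to a small number of repetitions. The truncation (and the `('*', B)` encoding of starred symbols) is an illustrative restriction of our own; the automaton of the lemma handles unbounded repetition with loops instead.

```python
def acyclic_words(productions, start, star_cap=2):
    """Sample of acyclic(H) for a Kleene grammar H: on every path of the
    derivation tree, each non-terminal is used at most once. A starred
    occurrence of B on a right-hand side is written ('*', 'B')."""
    nonterminals = set(productions)

    def expand(symbol, path):
        # words derivable from `symbol` without reusing any non-terminal in `path`
        if symbol not in nonterminals:
            return {symbol}                       # terminal letter
        if symbol in path:
            return set()                          # would repeat on this path
        result = set()
        for rhs in productions[symbol]:
            combos = {''}
            for item in rhs:
                if isinstance(item, tuple) and item[0] == '*':
                    base = expand(item[1], path | {symbol})
                    piece = {''}
                    for _ in range(star_cap):     # 0..star_cap copies of item[1]
                        piece = piece | {s + b for s in piece for b in base}
                else:
                    piece = expand(item, path | {symbol})
                combos = {c + p for c in combos for p in piece}
            result |= combos
        return result

    return expand(start, frozenset())

# Example Kleene grammar: S -> A B*,  A -> a,  B -> b
H = {'S': [('A', ('*', 'B'))], 'A': [('a',)], 'B': [('b',)]}
print(sorted(acyclic_words(H, 'S')))   # ['a', 'ab', 'abb'] with star_cap=2
```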
#### Highest occurring priorities

Similar to classical downward closure constructions for context-free languages, we want to overapproximate the set of words generated by "pump derivations" of the form \(X\xRightarrow{*}uXv\). Since we are dealing with priorities, we first partition the set of such derivations according to the highest occurring priorities, on the left and on the right. Thus, for \(r,s\in[0,p]\), we will consider all derivations \(X\xRightarrow{*}uXv\) where \(r\) is the highest occurring priority in \(u\) and \(s\) is the highest occurring priority in \(v\). To ease notation, we define \(\Sigma_{\max r}\) to be the set of words in \(\Sigma^{*}_{\leq r}\) in which \(r\) is the highest occurring priority. Since \(\Sigma_{\max r}=\Sigma^{+}_{\max r}\), we will write \(\Sigma^{+}_{\max r}\) to remind us that this is not an alphabet. Notice that for \(r\in[1,p]\), we have \(\Sigma^{+}_{\max r}=\Sigma^{*}_{\leq r}r\Sigma^{*}_{\leq r}\) and \(\Sigma^{+}_{\max 0}=\Sigma^{*}_{\leq 0}\).

#### Language of ends

In order to perform an inductive construction, we need a way to transform pairs \((u,v)\in\Sigma^{+}_{\max r}\times\Sigma^{+}_{\max s}\) into words over an alphabet with fewer priorities. Part of this will be achieved by the _end maps_ \(\overleftarrow{\tau}_{r}(\cdot)\) and \(\overrightarrow{\tau}_{s}(\cdot)\) as follows. Let \(\hat{\Sigma}\) be the priority alphabet obtained from \(\Sigma\) by adding the letters \(\#\), \(\overleftarrow{\#}\), and \(\overrightarrow{\#}\) as letters with priority zero. Now for \(r\in[1,p]\), the function \(\overleftarrow{\tau}_{r}\colon\Sigma^{+}_{\max r}\to\hat{\Sigma}^{*}_{\leq r-1}\) is defined as:

\[\overleftarrow{\tau}_{r}(w)=u\overleftarrow{\#}v,\text{ where }w=urx_{1}r\cdots x_{n}rv\text{ for some }n\geq 0,\,u,v,x_{1},\ldots,x_{n}\in\Sigma^{*}_{\leq r-1}.\]

Thus, \(\overleftarrow{\tau}_{r}(w)\) is obtained from \(w\) by replacing the largest possible infix surrounded by \(r\) with \(\overleftarrow{\#}\). For \(r=0\), it will be convenient to have the constant function \(\overleftarrow{\tau}_{0}\colon\Sigma^{+}_{\max 0}\to\{\overleftarrow{\#}\}\). Analogously, we define for \(s\in[1,p]\) the function \(\overrightarrow{\tau}_{s}\colon\Sigma^{+}_{\max s}\to\hat{\Sigma}^{*}_{\leq s-1}\) by

\[\overrightarrow{\tau}_{s}(w)=u\overrightarrow{\#}v,\text{ where }w=usx_{1}s\cdots x_{n}sv\text{ for some }n\geq 0,\,u,v,x_{1},\ldots,x_{n}\in\Sigma^{*}_{\leq s-1}.\]

Moreover, we also set \(\overrightarrow{\tau}_{0}\colon\Sigma^{+}_{\max 0}\to\{\overrightarrow{\#}\}\) to be the constant function yielding \(\overrightarrow{\#}\). In particular, for \(r,s\in[1,p]\), we have \(\overleftarrow{\tau}_{r}(w),\overrightarrow{\tau}_{s}(w)\in\hat{\Sigma}^{*}_{\leq p-1}\) and thus we have reduced the number of priorities. Now consider for \(r,s\in[0,p]\) the language

\[E_{X,r,s}=\{\overleftarrow{\tau}_{r}(u)\#\overrightarrow{\tau}_{s}(v)\mid X\xRightarrow{*}uXv,\,\,u\in\Sigma^{*}_{\leq r}r\Sigma^{*}_{\leq r},\,\,v\in\Sigma^{*}_{\leq s}s\Sigma^{*}_{\leq s}\}.\]

For the language \(E_{X,r,s}\), it is easy to construct a context-free grammar: Given \(\mathcal{G}\), a non-terminal \(X\), and \(r,s\in[0,p]\), one can construct a grammar \(\mathcal{E}_{X,r,s}\) for \(E_{X,r,s}\) of linear size.

Defining the sets \(E_{X,r,s}\) with fresh zero-priority letters \(\#\), \(\overleftarrow{\#}\), \(\overrightarrow{\#}\) is a key trick in our construction: Note that each word in \(E_{X,r,s}\) is of the form \(u\overleftarrow{\#}v\#w\overrightarrow{\#}x\) for \(u,v,w,x\in\Sigma^{*}_{\leq p-1}\).
The segments \(u,v,w,x\) come from different blocks of the generated word, so applying the block downward closure construction recursively to \(E_{X,r,s}\) must guarantee that these segments embed as if they were blocks. However, there are only a bounded number of segments. Thus, we can reduce the number of priorities while retaining the block behavior by using fresh zero-priority letters. This is formalized in the following lemma: For \(u,u^{\prime},v,v^{\prime}\in\Sigma^{*}_{\leq p}\), we have \(u\#v\preccurlyeq_{\mathsf{B}}u^{\prime}\#v^{\prime}\) iff both (i) \(u\preccurlyeq_{\mathsf{B}}u^{\prime}\) and (ii) \(v\preccurlyeq_{\mathsf{B}}v^{\prime}\).

#### Language of repeated words

Roughly speaking, the language \(E_{X,r,s}\) captures the "ends" of words derived in derivations \(X\xRightarrow{*}uXv\) with \(u\in\Sigma^{+}_{\max r}\) and \(v\in\Sigma^{+}_{\max s}\): On the left, it keeps everything that is not between two occurrences of \(r\), and on the right, it keeps everything not between two occurrences of \(s\). We now need languages that capture the infixes that can occur between \(r\)'s and \(s\)'s, respectively. Intuitively, these are the words that can occur again and again in words derived from \(X\). There is a "left version" and a "right version". We set for \(r,s\in[1,p]\):

\[\overleftarrow{R}_{X,r,s}=\{yr\mid y\in\Sigma^{*}_{\leq r-1},\ \exists x,z\in\Sigma^{*}_{\leq r},\ v\in\Sigma^{+}_{\max s}\colon X\xRightarrow{*}xryrzXv\}\]
\[\overrightarrow{R}_{X,r,s}=\{ys\mid y\in\Sigma^{*}_{\leq s-1},\ \exists u\in\Sigma^{+}_{\max r},\ x,z\in\Sigma^{*}_{\leq s}\colon X\xRightarrow{*}uXxysz\}.\]

The case where one side has highest priority zero must be treated slightly differently: There are no enveloping occurrences of some \(r,s\in[1,p]\). However, we can overapproximate those words by the set of all words over a particular alphabet. Specifically, for \(r,s\in[0,p]\), we set

\[\overleftarrow{R}_{X,0,s}=\{a\in\Sigma_{\leq 0}\mid\exists u\in\Sigma^{+}_{\max 0},\ v\in\Sigma^{+}_{\max s}\colon X\xRightarrow{*}uXv,\ a\text{ occurs in }u\}\]
\[\overrightarrow{R}_{X,r,0}=\{a\in\Sigma_{\leq 0}\mid\exists u\in\Sigma^{+}_{\max r},\ v\in\Sigma^{+}_{\max 0}\colon X\xRightarrow{*}uXv,\ a\text{ occurs in }v\}\]

Given \(\mathcal{G}\), a non-terminal \(X\), and \(r,s\in[0,p]\), one can construct grammars \(\overleftarrow{\mathcal{R}}_{X,r,s}\), \(\overrightarrow{\mathcal{R}}_{X,r,s}\) for \(\overleftarrow{R}_{X,r,s}\), \(\overrightarrow{R}_{X,r,s}\), respectively, of linear size.

#### Overapproximating derivable words

The languages \(E_{X,r,s}\), \(\overleftarrow{R}_{X,r,s}\), and \(\overrightarrow{R}_{X,r,s}\) now serve to define overapproximations of the set of \((u,v)\in\Sigma^{+}_{\max r}\times\Sigma^{+}_{\max s}\) with \(X\xRightarrow{*}uXv\): One can obtain each such pair by taking a word from \(E_{X,r,s}\) and replacing \(\overleftarrow{\#}\) and \(\overrightarrow{\#}\), respectively, by words in \(r\overleftarrow{R}_{X,r,s}^{*}\) (\(\overleftarrow{R}_{X,0,s}^{*}\) if \(r=0\)) and \(s\overrightarrow{R}_{X,r,s}^{*}\) (\(\overrightarrow{R}_{X,r,0}^{*}\) if \(s=0\)). By choosing the right words from \(E_{X,r,s}\), \(\overleftarrow{R}_{X,r,s}\), and \(\overrightarrow{R}_{X,r,s}\), we can thus obtain \(u\#v\). However, this process will also yield other words that cannot be derived.
However, the key idea in our construction is that every word obtainable in this way from \(E_{X,r,s}\), \(\overleftarrow{R}_{X,r,s}\), and \(\overrightarrow{R}_{X,r,s}\) will be in the block downward closure of a pair of words derivable using \(X\xRightarrow{*}\cdot X\cdot\). Let us make this precise. To describe the set of words obtained from \(E_{X,r,s}\), \(\overleftarrow{R}_{X,r,s}\), and \(\overrightarrow{R}_{X,r,s}\), we need the notion of a substitution. For alphabets \(\Gamma_{1},\Gamma_{2}\), a _substitution_ is a map \(\sigma\colon\Gamma_{1}\to 2^{\Gamma_{2}^{*}}\) that yields a language over \(\Gamma_{2}\) for each letter in \(\Gamma_{1}\). Given a word \(w=w_{1}\cdots w_{n}\) with \(w_{1},\ldots,w_{n}\in\Gamma_{1}\), we define \(\sigma(w):=\sigma(w_{1})\cdots\sigma(w_{n})\). Then for \(K\subseteq\Gamma_{1}^{*}\), we set \(\sigma(K)=\bigcup_{w\in K}\sigma(w)\). Now let \(\sigma_{X,r,s}\colon\hat{\Sigma}_{\leq p}\to 2^{\hat{\Sigma}^{*}_{\leq p}}\) be the substitution that maps every letter in \(\Sigma_{\leq p}\cup\{\#\}\) to itself (as a singleton), maps \(\overleftarrow{\#}\) to \(r\overleftarrow{R}_{X,r,s}^{*}\), and maps \(\overrightarrow{\#}\) to \(s\overrightarrow{R}_{X,r,s}^{*}\). Now our observation from the previous paragraph can be phrased as: For every \(u\#v\in\sigma_{X,r,s}(E_{X,r,s})\), there are \(u^{\prime}\in\Sigma^{+}_{\max r}\) and \(v^{\prime}\in\Sigma^{+}_{\max s}\) with \(u\preccurlyeq_{\mathsf{B}}u^{\prime}\), \(v\preccurlyeq_{\mathsf{B}}v^{\prime}\), and \(X\xRightarrow{*}u^{\prime}Xv^{\prime}\).

#### Constructing the Kleene grammar

We now construct the Kleene grammar for \(\mathcal{L}(\mathcal{G})\downarrow_{\mathcal{B}}\) by first computing the grammars \(\mathcal{E}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}_{X,r,s}\) for each non-terminal \(X\) and each \(r,s\in[0,p]\). Then, since \(\mathcal{E}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}_{X,r,s}\) generate languages with at most \(p-1\) priorities, we can call our construction recursively to obtain grammars \(\mathcal{E}^{\prime}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}^{\prime}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}^{\prime}_{X,r,s}\), respectively. Then, we add all productions of the grammars \(\mathcal{E}^{\prime}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}^{\prime}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}^{\prime}_{X,r,s}\) to \(\mathcal{G}^{\prime}\), which initially consists of the productions of \(\mathcal{G}\). Moreover, we make the following modifications: Each production of the form \(Y\to\overleftarrow{\#}\) (resp. \(Y\to\overrightarrow{\#}\)) in \(\mathcal{E}^{\prime}_{X,r,s}\) is replaced with \(Y\to Z_{r}\overleftarrow{S}_{X,r,s}^{*}\) (resp. \(Y\to Z_{s}\overrightarrow{S}_{X,r,s}^{*}\)), where \(\overleftarrow{S}_{X,r,s}\) (resp. \(\overrightarrow{S}_{X,r,s}\)) is the start symbol of \(\overleftarrow{\mathcal{R}}^{\prime}_{X,r,s}\) (resp. \(\overrightarrow{\mathcal{R}}^{\prime}_{X,r,s}\)), and \(Z_{r}\) is a fresh non-terminal used to derive \(r\) or \(\varepsilon\): We also have \(Z_{r}\to r\) for each \(r\in[1,p]\) and \(Z_{0}\to\varepsilon\). Moreover, each production \(Y\to\#\) in \(\mathcal{E}^{\prime}_{X,r,s}\) is removed and replaced with a production \(Y\to w\) for each production \(X\to w\) in \(\mathcal{G}\). We call the resulting grammar \(\mathcal{G}^{\prime}\).
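To make the end maps and the substitution \(\sigma_{X,r,s}\) concrete, the following sketch applies them to hand-picked words over a flat alphabet whose priority-\(r\) letter is the digit \(r\). The ASCII markers `<` and `>` and the toy repeat sets are illustrative stand-ins for \(\overleftarrow{\#}\), \(\overrightarrow{\#}\) and the languages \(\overleftarrow{R}_{X,r,s}\), \(\overrightarrow{R}_{X,r,s}\); nothing here is computed from an actual grammar.

```python
from itertools import product

PRIO = {'0': 0, '1': 1, '2': 2}   # flat alphabet: the letter for priority r is the digit r

def tau_left(w, r):
    """End map for the left side (assumes r >= 1 is the highest priority in w):
    replace the maximal infix enclosed by the first and last priority-r letter
    with the marker '<' (standing in for the left-arrow #)."""
    pos = [i for i, a in enumerate(w) if PRIO[a] == r]
    return w[:pos[0]] + '<' + w[pos[-1] + 1:]

def tau_right(w, s):
    """End map for the right side, using '>' for the right-arrow # marker."""
    pos = [i for i, a in enumerate(w) if PRIO[a] == s]
    return w[:pos[0]] + '>' + w[pos[-1] + 1:]

def apply_substitution(sigma, language):
    """sigma maps each letter to a finite set of words; sigma(w1...wn) is the
    concatenation sigma(w1)...sigma(wn) and sigma(K) the union over all w in K.
    (The construction realizes the substitution by grammar productions; the
    enumeration of finite sets here is only for illustration.)"""
    out = set()
    for word in language:
        for choice in product(*(sigma[a] for a in word)):
            out.add(''.join(choice))
    return out

# A pair (u, v) from a hypothetical derivation X =>* u X v with r = 2, s = 1:
u, v = '0102101201', '01011'
e_word = tau_left(u, 2) + '#' + tau_right(v, 1)
print(e_word)                      # 010<01#0>  (shape u <# v # w #> x)

# Toy version of sigma_{X,r,s}: markers expand to r(left repeats)* and s(right repeats)*.
sigma = {a: {a} for a in '01#'}
sigma['<'] = {'2', '2102'}         # '2' alone, or '2' followed by the repeat '102'
sigma['>'] = {'1', '101'}          # '1' alone, or '1' followed by the repeat '01'
print(sorted(apply_substitution(sigma, {e_word})))   # four candidate pairs u#v
```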
#### Correctness

Let us now observe that the grammar \(\mathcal{G}^{\prime}\) does indeed satisfy \(\mathcal{L}(\mathcal{G}^{\prime})\downarrow_{\mathcal{B}}=\mathcal{L}(\mathcal{G})\downarrow_{\mathcal{B}}\). The inclusion "\(\supseteq\)" is trivial as \(\mathcal{G}^{\prime}\) is obtained by adding productions. For the converse, we need some terminology. We say that a derivation tree \(t_{1}\) in \(\mathcal{G}^{\prime}\) is obtained using an _expansion step_ from \(t_{0}\) if we take an \(X\)-labeled node \(x\) in \(t_{0}\), where \(X\) is a non-terminal from \(\mathcal{G}\), and replace this node by a derivation \(X\xRightarrow{*}uwv\) using newly added productions (i.e. using productions of \(\mathcal{E}^{\prime}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}^{\prime}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}^{\prime}_{X,r,s}\), and some \(Y\to w\) where \(X\to w\) was the production applied to \(x\) in \(t_{0}\)). Then by construction of \(\mathcal{G}^{\prime}\), any derivation in \(\mathcal{G}^{\prime}\) can be obtained from a derivation in \(\mathcal{G}\) by finitely many expansion steps. An induction on the number of expansion steps shows: We have \(\mathcal{L}(\mathcal{G}^{\prime})\downarrow_{\mathcal{B}}=\mathcal{L}(\mathcal{G})\downarrow_{\mathcal{B}}\).

#### Acyclic derivations suffice

Now that we have the grammar \(\mathcal{G}^{\prime}\) with \(\mathcal{L}(\mathcal{G}^{\prime})\downarrow_{\mathcal{B}}=\mathcal{L}(\mathcal{G})\downarrow_{\mathcal{B}}\), it remains to show that every word in \(\mathcal{L}(\mathcal{G}^{\prime})\) can be derived using an acyclic derivation: \(\mathsf{acyclic}(\mathcal{G}^{\prime})\downarrow_{\mathcal{B}}=\mathcal{L}(\mathcal{G})\downarrow_{\mathcal{B}}\). Essentially, this is due to the fact that any repetition of a non-terminal \(X\) on some path means that we can replace a corresponding derivation \(X\xRightarrow{*}uXv\) by one using the new productions from \(\mathcal{E}^{\prime}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}^{\prime}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}^{\prime}_{X,r,s}\). Since these also have the property that every derivation can be made acyclic, the lemma follows. See Appendix D for details.

#### Complexity analysis

To estimate the size of the constructed grammar, let \(f_{p}(n)\) be the maximal number of non-terminals of a constructed Kleene grammar for an input grammar with \(n\) non-terminals over \(p\) priorities. By the linear-size constructions of \(\mathcal{E}_{X,r,s}\), \(\overleftarrow{\mathcal{R}}_{X,r,s}\), and \(\overrightarrow{\mathcal{R}}_{X,r,s}\) above, there is a constant \(c\) such that each of these grammars has at most \(cn\) non-terminals. Furthermore, \(\mathcal{G}^{\prime}\) is obtained by applying our construction to \(3n(p+1)^{2}\) grammars with \(p-1\) priorities of size \(cn\), and adding \(Z_{p}\). Thus \(f_{p}(n)\leq n+3n(p+1)^{2}f_{p-1}(cn)+1\). Since \(f_{p-1}(n)\geq 1\), we can simplify to \(f_{p}(n)\leq 4n(p+1)^{2}f_{p-1}(cn)\). It is easy to check that \(f_{0}(n)\leq 4n+1\leq 5n\), because \(\mathcal{E}_{X,0,0}\), \(\overleftarrow{\mathcal{R}}_{X,0,0}\), and \(\overrightarrow{\mathcal{R}}_{X,0,0}\) each only have one non-terminal. Hence \(f_{p}(n)\leq(4n(p+1)^{2})^{p}f_{0}(c^{p}n)\leq(4n(p+1)^{2})^{p}\cdot 5c^{p}n\), which is exponential in the size of \(\mathcal{G}\).
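The recurrence can be evaluated numerically to get a feeling for the growth of the Kleene grammar (the final automaton then adds one more exponential). The constant \(c=2\) below is an arbitrary placeholder for the unspecified linear-size constant.

```python
def f(p, n, c=2):
    """Evaluate the recurrence bound from the complexity analysis:
    f_0(n) <= 5n and f_p(n) <= 4 n (p+1)^2 f_{p-1}(c n)."""
    if p == 0:
        return 5 * n
    return 4 * n * (p + 1) ** 2 * f(p - 1, c * n, c)

for p in range(4):
    print(p, f(p, 10))   # the number of non-terminals grows exponentially with p
```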
## 7 Conclusion

We have initiated the study of computing priority and block downward closures for infinite-state systems. We have shown that for one-counter automata, both closures can be computed in polynomial time. For context-free languages, we have provided a doubly exponential construction. Many questions remain. First, we leave open whether the doubly exponential bound for context-free languages can be improved to exponential. An exponential lower bound is easily inherited from the exponential lower bound for subwords [5]. Moreover, it is an intriguing question whether computability of subword downward closures for vector addition systems [18], higher-order pushdown automata [19], and higher-order recursion schemes [11] can be strengthened to block and priority downward closures.