I've been writing a little about some results on graph theory, and I want some nice examples of applying the results to some interesting finite connected graphs to show how the results might be useful. I'm therefore looking for a reference which has a collection of interesting graphs, ideally along with a picture if possible (but not the end of the world if there are no pictures) and some of the basic properties of the graph; for example, connectedness, diameter, etc., though again if the reference only has the names I'm sure I can track down their properties online, so that isn't too big a problem either. I have been using Wikipedia's gallery of named graphs, which has been quite helpful, but unfortunately a lot of those turn out to be trivial cases for the results I'm proving, and Alain Matthes had an excellent list too (altermundus.fr/downloads/documents/NamedGraphs.pdf). Some of the examples I am interested in are nontrivial graphs with moderately large girth (say 8 or above, and nontrivial meaning not a path, tree, cycle, etc.), Cayley graphs, and any interesting constructions of sequences of graphs of increasing order $n \to \infty$. Other than such sequences I would prefer to look at examples of 'small to medium' order (so probably 60 or less), and on the less dense end of the scale; essentially graphs which you can feasibly work with "by hand" because they are not overly enormous, not graphs like the Higman-Sims. I have tried to explain the sort of graphs I am looking for just in case it's relevant, but if there is an all-encompassing handbook or compendium of some sort then I am obviously very happy to sift through it myself to locate some useful examples. Any responses would be appreciated, be they books, websites, papers, or just individual suggestions of interesting graphs which weren't in the Wikipedia gallery. Thank you!

If you have access to the software Mathematica, the function GraphData has quite a number of example graphs you can peruse.
The documentation notes that the graphs (and their properties) that are implemented come from a wide range of sources, like this one. You might also want to look at MathWorld's compendium.
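In case it helps while sifting through candidates: basic invariants such as diameter and girth are quick to compute yourself for graphs of the "small to medium" order you describe. Below is a stdlib-only Python sketch (the helper names and adjacency construction are my own, with the Petersen graph as a worked example):

```python
from collections import deque

def bfs_depths(adj, root):
    """Breadth-first search returning the depth of every vertex reachable from root."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth

def diameter(adj):
    """Largest eccentricity over all vertices (graph assumed connected)."""
    return max(max(bfs_depths(adj, r).values()) for r in adj)

def girth(adj):
    """Length of a shortest cycle, found via BFS from every vertex."""
    best = float("inf")
    for r in adj:
        depth, parent = {r: 0}, {r: None}
        queue = deque([r])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in depth:
                    depth[v], parent[v] = depth[u] + 1, u
                    queue.append(v)
                elif v != parent[u]:
                    # A non-tree edge closes a cycle through the BFS tree.
                    best = min(best, depth[u] + depth[v] + 1)
    return best

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, plus spokes.
edges = ([(i, (i + 1) % 5) for i in range(5)] +
         [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +
         [(i, i + 5) for i in range(5)])
adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(diameter(adj), girth(adj))  # Petersen graph: diameter 2, girth 5
```

Swapping in any other small graph is just a matter of replacing the edge list, so this is a cheap way to filter out graphs whose girth is too small for your results before tracking down references.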
MD5
Message-digest hashing algorithm
Designers: Ronald Rivest
Series: MD2, MD4, MD5, MD6
Structure: Merkle–Damgård construction
Rounds: 4[1]
Best public cryptanalysis: A 2013 attack by Xie Tao, Fanbao Liu, and Dengguo Feng breaks MD5 collision resistance in 2^18 time. This attack runs in less than a second on a regular computer.[2] MD5 is prone to length extension attacks.

The MD5 message-digest algorithm is a widely used hash function producing a 128-bit hash value. MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4,[3] and was specified in 1992 as RFC 1321. MD5 can be used as a checksum to verify data integrity against unintentional corruption. Historically it was widely used as a cryptographic hash function; however, it has been found to suffer from extensive vulnerabilities. It remains suitable for other non-cryptographic purposes, for example for determining the partition for a particular key in a partitioned database, and may be preferred due to lower computational requirements than more recent Secure Hash Algorithms.[4]

History and cryptanalysis[edit]

MD5 is one in a series of message digest algorithms designed by Professor Ronald Rivest of MIT (Rivest, 1992). When analytic work indicated that MD5's predecessor MD4 was likely to be insecure, Rivest designed MD5 in 1991 as a secure replacement. (Hans Dobbertin did indeed later find weaknesses in MD4.) In 1993, Den Boer and Bosselaers gave an early, although limited, result of finding a "pseudo-collision" of the MD5 compression function; that is, two different initialization vectors that produce an identical digest. In 1996, Dobbertin announced a collision of the compression function of MD5 (Dobbertin, 1996).
While this was not an attack on the full MD5 hash function, it was close enough for cryptographers to recommend switching to a replacement, such as SHA-1 (also compromised) or RIPEMD-160. The size of the hash value (128 bits) is small enough to contemplate a birthday attack. MD5CRK was a distributed project started in March 2004 to demonstrate that MD5 is practically insecure by finding a collision using a birthday attack. MD5CRK ended shortly after 17 August 2004, when collisions for the full MD5 were announced by Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and Hongbo Yu.[5][6] Their analytical attack was reported to take only one hour on an IBM p690 cluster.[7] On 1 March 2005, Arjen Lenstra, Xiaoyun Wang, and Benne de Weger demonstrated construction of two X.509 certificates with different public keys and the same MD5 hash value, a demonstrably practical collision.[8] The construction included private keys for both public keys. A few days later, Vlastimil Klima described an improved algorithm, able to construct MD5 collisions in a few hours on a single notebook computer.[9] On 18 March 2006, Klima published an algorithm that could find a collision within one minute on a single notebook computer, using a method he calls tunneling.[10] Various MD5-related RFC errata have been published. In 2009, the United States Cyber Command used an MD5 hash value of their mission statement as a part of their official emblem.[11] On 24 December 2010, Tao Xie and Dengguo Feng announced the first published single-block (512-bit) MD5 collision.[12] (Previous collision discoveries had relied on multi-block attacks.) For "security reasons", Xie and Feng did not disclose the new attack method. They issued a challenge to the cryptographic community, offering a US$10,000 reward to the first finder of a different 64-byte collision before 1 January 2013. 
Marc Stevens responded to the challenge and published colliding single-block messages as well as the construction algorithm and sources.[13] In 2011 an informational RFC 6151[14] was approved to update the security considerations in MD5[15] and HMAC-MD5.[16]

Security[edit]

One basic requirement of any cryptographic hash function is that it should be computationally infeasible to find two distinct messages that hash to the same value. MD5 fails this requirement catastrophically; such collisions can be found in seconds on an ordinary home computer. On 31 December 2008, the CMU Software Engineering Institute concluded that MD5 was essentially "cryptographically broken and unsuitable for further use".[17] The weaknesses of MD5 have been exploited in the field, most infamously by the Flame malware in 2012. As of 2019, MD5 continues to be widely used, despite its well-documented weaknesses and deprecation by security experts.[18] The security of the MD5 hash function is severely compromised. A collision attack exists that can find collisions within seconds on a computer with a 2.6 GHz Pentium 4 processor (complexity of 2^24.1).[19] Further, there is also a chosen-prefix collision attack that can produce a collision for two inputs with specified prefixes within seconds, using off-the-shelf computing hardware (complexity 2^39).[20] The ability to find collisions has been greatly aided by the use of off-the-shelf GPUs. On an NVIDIA GeForce 8400GS graphics processor, 16–18 million hashes per second can be computed.
An NVIDIA GeForce 8800 Ultra can calculate more than 200 million hashes per second.[21] These hash and collision attacks have been demonstrated in public in various situations, including colliding document files[22][23] and digital certificates.[24] As of 2015, MD5 was demonstrated to be still quite widely used, most notably by security research and antivirus companies.[25] As of 2019, one quarter of widely used content management systems were reported to still use MD5 for password hashing.[18]

Overview of security issues[edit]

In 1996, a flaw was found in the design of MD5. While it was not deemed a fatal weakness at the time, cryptographers began recommending the use of other algorithms, such as SHA-1, which has since been found to be vulnerable as well.[26] In 2004 it was shown that MD5 is not collision-resistant.[27] As such, MD5 is not suitable for applications like SSL certificates or digital signatures that rely on this property for digital security. Researchers additionally discovered more serious flaws in MD5, and described a feasible collision attack: a method to create a pair of inputs for which MD5 produces identical checksums.[5][28] Further advances were made in breaking MD5 in 2005, 2006, and 2007.[29] In December 2008, a group of researchers used this technique to fake SSL certificate validity.[24][30] As of 2010, the CMU Software Engineering Institute considers MD5 "cryptographically broken and unsuitable for further use",[31] and most U.S. government applications now require the SHA-2 family of hash functions.[32] In 2012, the Flame malware exploited the weaknesses in MD5 to fake a Microsoft digital signature.[33]

Collision vulnerabilities[edit]

Further information: Collision attack

In 1996, collisions were found in the compression function of MD5, and Hans Dobbertin wrote in the RSA Laboratories technical newsletter, "The presented attack does not yet threaten practical applications of MD5, but it comes rather close ...
in the future MD5 should no longer be implemented ... where a collision-resistant hash function is required."[34] In 2005, researchers were able to create pairs of PostScript documents[35] and X.509 certificates[36] with the same hash. Later that year, MD5's designer Ron Rivest wrote that "md5 and sha1 are both clearly broken (in terms of collision-resistance)".[37] On 30 December 2008, a group of researchers announced at the 25th Chaos Communication Congress how they had used MD5 collisions to create an intermediate certificate authority certificate that appeared to be legitimate when checked by its MD5 hash.[24] The researchers used a PS3 cluster at the EPFL in Lausanne, Switzerland[38] to change a normal SSL certificate issued by RapidSSL into a working CA certificate for that issuer, which could then be used to create other certificates that would appear to be legitimate and issued by RapidSSL. VeriSign, the issuers of RapidSSL certificates, said they stopped issuing new certificates using MD5 as their checksum algorithm for RapidSSL once the vulnerability was announced.[39] Although Verisign declined to revoke existing certificates signed using MD5, their response was considered adequate by the authors of the exploit (Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger).[24] Bruce Schneier wrote of the attack that "we already knew that MD5 is a broken hash function" and that "no one should be using MD5 anymore".[40] The SSL researchers wrote, "Our desired impact is that Certification Authorities will stop using MD5 in issuing new certificates. 
We also hope that use of MD5 in other applications will be reconsidered as well."[24] In 2012, according to Microsoft, the authors of the Flame malware used an MD5 collision to forge a Windows code-signing certificate.[33] MD5 uses the Merkle–Damgård construction, so if two prefixes with the same hash can be constructed, a common suffix can be added to both to make the collision more likely to be accepted as valid data by the application using it. Furthermore, current collision-finding techniques allow specifying an arbitrary prefix: an attacker can create two colliding files that both begin with the same content. All the attacker needs to generate two colliding files is a template file with a 128-byte block of data, aligned on a 64-byte boundary, that can be changed freely by the collision-finding algorithm. An example MD5 collision, with the two messages differing in 6 bits, is:

d131dd02c5e6eec4 693d9a0698aff95c 2fcab58712467eab 4004583eb8fb7f89
55ad340609f4b302 83e488832571415a 085125e8f7cdc99f d91dbdf280373c5b
d8823e3156348f5b ae6dacd436c919c6 dd53e2b487da03fd 02396306d248cda0
e99f33420f577ee8 ce54b67080a80d1e c69821bcb6a88393 96f9652b6ff72a70

d131dd02c5e6eec4 693d9a0698aff95c 2fcab50712467eab 4004583eb8fb7f89
55ad340609f4b302 83e4888325f1415a 085125e8f7cdc99f d91dbd7280373c5b
d8823e3156348f5b ae6dacd436c919c6 dd53e23487da03fd 02396306d248cda0
e99f33420f577ee8 ce54b67080280d1e c69821bcb6a88393 96f965ab6ff72a70

Both produce the MD5 hash 79054025255fb1a26e4bc422aef54eb4.[41] The difference between the two samples is that the leading bit in each differing nibble has been flipped. For example, the 20th byte (offset 0x13) in the top sample, 0x87, is 10000111 in binary. The leading bit in the byte (also the leading bit in the first nibble) is flipped to make 00000111, which is 0x07, as shown in the lower sample. Later it was also found to be possible to construct collisions between two files with separately chosen prefixes. This technique was used in the creation of the rogue CA certificate in 2008.
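Both the collision itself and the Merkle–Damgård common-suffix property discussed above can be checked directly with Python's standard-library hashlib. The byte strings below are the published colliding pair quoted above (with whitespace removed):

```python
import hashlib

# The two colliding 128-byte messages from the example above.
m1 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080a80d1ec69821bcb6a8839396f9652b6ff72a70")
m2 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e23487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080280d1ec69821bcb6a8839396f965ab6ff72a70")

# Distinct inputs, identical digest.
print(m1 != m2)                      # True
print(hashlib.md5(m1).hexdigest())   # 79054025255fb1a26e4bc422aef54eb4
print(hashlib.md5(m1).hexdigest() == hashlib.md5(m2).hexdigest())  # True

# Merkle–Damgård: appending a common suffix preserves the collision.
suffix = b"any common suffix"
print(hashlib.md5(m1 + suffix).digest() == hashlib.md5(m2 + suffix).digest())  # True
```

The last check is exactly the property the text describes: because both 128-byte messages leave the internal state identical, any shared suffix yields another colliding pair.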
A new variant of parallelized collision searching using MPI was proposed by Anton Kuznetsov in 2014, which allowed finding a collision in 11 hours on a computing cluster.[42]

Preimage vulnerability[edit]

In April 2009, an attack against MD5 was published that breaks MD5's preimage resistance. This attack is only theoretical, with a computational complexity of 2^123.4 for a full preimage.[43][44]

Applications[edit]

MD5 digests have been widely used in the software world to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 checksum (known as the md5sum) for the files, so that a user can compare the checksum of the downloaded file to it. Most Unix-based operating systems include MD5 sum utilities in their distribution packages; Windows users may use the included PowerShell function "Get-FileHash", install a Microsoft utility,[45][46] or use third-party applications. Android ROMs also use this type of checksum. As it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. In some cases, the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files. Historically, MD5 has been used to store a one-way hash of a password, often with key stretching.[47][48] NIST does not include MD5 in their list of recommended hashes for password storage.[49] MD5 is also used in the field of electronic discovery, to provide a unique identifier for each document that is exchanged during the legal discovery process. This method can be used to replace the Bates stamp numbering system that has been used for decades during the exchange of paper documents.
As above, this usage should be discouraged due to the ease of collision attacks.

Algorithm[edit]

Figure 1. One MD5 operation. MD5 consists of 64 of these operations, grouped in four rounds of 16 operations. F is a nonlinear function; one function is used in each round. M_i denotes a 32-bit block of the message input, and K_i denotes a 32-bit constant, different for each operation. <<<_s denotes a left bit rotation by s places; s varies for each operation. ⊞ denotes addition modulo 2^32.

MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit words); the message is padded so that its length is divisible by 512. The padding works as follows: first, a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with 64 bits representing the length of the original message, modulo 2^64. The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C, and D. These are initialized to certain fixed constants. The main algorithm then uses each 512-bit message block in turn to modify the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation. Figure 1 illustrates one operation within a round.
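The padding rule just described can be sketched at the byte level in Python (md5_pad is my own illustrative helper, not a library function):

```python
import struct

def md5_pad(message: bytes) -> bytes:
    """Append 0x80, zero bytes up to 56 mod 64, then the 64-bit little-endian
    original bit length -- the byte-level form of the '1' bit plus zeros rule."""
    bit_len = (8 * len(message)) % 2**64
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)
    return padded + struct.pack("<Q", bit_len)

padded = md5_pad(b"abc")
print(len(padded))        # 64: a 3-byte message pads out to one full 512-bit block
print(padded[-8:].hex())  # 1800000000000000: the length 24 bits, little-endian
```

Note the two modular steps: the zeros bring the length to 56 mod 64 bytes (448 mod 512 bits), and the final eight bytes always hold the original length, so every padded message is an exact multiple of 64 bytes.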
There are four possible functions; a different one is used in each round:

F(B, C, D) = (B ∧ C) ∨ (¬B ∧ D)
G(B, C, D) = (B ∧ D) ∨ (C ∧ ¬D)
H(B, C, D) = B ⊕ C ⊕ D
I(B, C, D) = C ⊕ (B ∨ ¬D)

⊕, ∧, ∨, ¬ denote the XOR, AND, OR and NOT operations respectively.

Pseudocode[edit]

The MD5 hash is calculated according to this algorithm.[50] All values are in little-endian.

// : All variables are unsigned 32 bit and wrap modulo 2^32 when calculating
var int s[64], K[64]
var int i

// s specifies the per-round shift amounts
s[ 0..15] := { 7, 12, 17, 22,  7, 12, 17, 22,  7, 12, 17, 22,  7, 12, 17, 22 }
s[16..31] := { 5,  9, 14, 20,  5,  9, 14, 20,  5,  9, 14, 20,  5,  9, 14, 20 }
s[32..47] := { 4, 11, 16, 23,  4, 11, 16, 23,  4, 11, 16, 23,  4, 11, 16, 23 }
s[48..63] := { 6, 10, 15, 21,  6, 10, 15, 21,  6, 10, 15, 21,  6, 10, 15, 21 }

// Use binary integer part of the sines of integers (Radians) as constants:
for i from 0 to 63 do
    K[i] := floor(2^32 × abs(sin(i + 1)))
end for
// (Or just use the following precomputed table):
K[ 0.. 3] := { 0xd76aa478, 0xe8c7b756, 0x242070db, 0xc1bdceee }
K[ 4.. 7] := { 0xf57c0faf, 0x4787c62a, 0xa8304613, 0xfd469501 }
K[ 8..11] := { 0x698098d8, 0x8b44f7af, 0xffff5bb1, 0x895cd7be }
K[12..15] := { 0x6b901122, 0xfd987193, 0xa679438e, 0x49b40821 }
K[16..19] := { 0xf61e2562, 0xc040b340, 0x265e5a51, 0xe9b6c7aa }
K[20..23] := { 0xd62f105d, 0x02441453, 0xd8a1e681, 0xe7d3fbc8 }
K[24..27] := { 0x21e1cde6, 0xc33707d6, 0xf4d50d87, 0x455a14ed }
K[28..31] := { 0xa9e3e905, 0xfcefa3f8, 0x676f02d9, 0x8d2a4c8a }
K[32..35] := { 0xfffa3942, 0x8771f681, 0x6d9d6122, 0xfde5380c }
K[36..39] := { 0xa4beea44, 0x4bdecfa9, 0xf6bb4b60, 0xbebfbc70 }
K[40..43] := { 0x289b7ec6, 0xeaa127fa, 0xd4ef3085, 0x04881d05 }
K[44..47] := { 0xd9d4d039, 0xe6db99e5, 0x1fa27cf8, 0xc4ac5665 }
K[48..51] := { 0xf4292244, 0x432aff97, 0xab9423a7, 0xfc93a039 }
K[52..55] := { 0x655b59c3, 0x8f0ccc92, 0xffeff47d, 0x85845dd1 }
K[56..59] := { 0x6fa87e4f, 0xfe2ce6e0, 0xa3014314, 0x4e0811a1 }
K[60..63] := { 0xf7537e82, 0xbd3af235, 0x2ad7d2bb, 0xeb86d391 }

// Initialize variables:
var int a0 := 0x67452301   // A
var int b0 := 0xefcdab89   // B
var int c0 := 0x98badcfe   // C
var int d0 := 0x10325476   // D

// Pre-processing: adding a single 1 bit
append "1" bit to message
// Notice: the input bytes are considered as bit strings,
//  where the first bit is the most significant bit of the byte.[51]

// Pre-processing: padding with zeros
append "0" bit until message length in bits ≡ 448 (mod 512)

// Notice: the two padding steps above are implemented in a simpler way
//  in implementations that only work with complete bytes: append 0x80
//  and pad with 0x00 bytes so that the message length in bytes ≡ 56 (mod 64).
append original length in bits mod 2^64 to message

// Process the message in successive 512-bit chunks:
for each 512-bit chunk of padded message do
    break chunk into sixteen 32-bit words M[j], 0 ≤ j ≤ 15
    // Initialize hash value for this chunk:
    var int A := a0
    var int B := b0
    var int C := c0
    var int D := d0
    // Main loop:
    for i from 0 to 63 do
        var int F, g
        if 0 ≤ i ≤ 15 then
            F := (B and C) or ((not B) and D)
            g := i
        else if 16 ≤ i ≤ 31 then
            F := (D and B) or ((not D) and C)
            g := (5×i + 1) mod 16
        else if 32 ≤ i ≤ 47 then
            F := B xor C xor D
            g := (3×i + 5) mod 16
        else if 48 ≤ i ≤ 63 then
            F := C xor (B or (not D))
            g := (7×i) mod 16
        end if
        // Be wary of the below definitions of a,b,c,d
        F := F + A + K[i] + M[g]  // M[g] must be a 32-bit block
        A := D
        D := C
        C := B
        B := B + leftrotate(F, s[i])
    end for
    // Add this chunk's hash to result so far:
    a0 := a0 + A
    b0 := b0 + B
    c0 := c0 + C
    d0 := d0 + D
end for

var char digest[16] := a0 append b0 append c0 append d0 // (Output is in little-endian)

Instead of the formulation from the original RFC 1321 shown, the following may be used for improved efficiency (useful if assembly language is being used; otherwise, the compiler will generally optimize the above code, and since each computation is dependent on another in these formulations, this is often slower than the above method where the nand/and can be parallelised):

( 0 ≤ i ≤ 15): F := D xor (B and (C xor D))
(16 ≤ i ≤ 31): F := C xor (D and (B xor C))

MD5 hashes[edit]

The 128-bit (16-byte) MD5 hashes (also termed message digests) are typically represented as a sequence of 32 hexadecimal digits. The following demonstrates a 43-byte ASCII input and the corresponding MD5 hash:

MD5("The quick brown fox jumps over the lazy dog") = 9e107d9d372bb6826bd81d3542a419d6

Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to the avalanche effect.
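The pseudocode translates almost line for line into Python. The sketch below is my own illustrative code (not a production implementation) using the byte-oriented padding shortcut noted in the comments, and it can be cross-checked against the standard library's hashlib:

```python
import hashlib  # standard library, used only to cross-check the sketch
import math
import struct

def md5(message: bytes) -> str:
    """Toy MD5 following the pseudocode; for illustration, not production use."""
    # Per-round left-rotate amounts s and sine-derived constants K.
    s = ([7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4 +
         [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4)
    K = [int(abs(math.sin(i + 1)) * 2**32) for i in range(64)]

    # Initial state A, B, C, D.
    a0, b0, c0, d0 = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476

    # Byte-oriented padding: 0x80, zeros to 56 mod 64, 64-bit little-endian length.
    bit_len = (8 * len(message)) % 2**64
    message += b"\x80"
    message += b"\x00" * ((56 - len(message) % 64) % 64)
    message += struct.pack("<Q", bit_len)

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    for off in range(0, len(message), 64):
        M = struct.unpack("<16I", message[off:off + 64])
        A, B, C, D = a0, b0, c0, d0
        for i in range(64):
            if i < 16:
                F, g = (B & C) | (~B & D), i
            elif i < 32:
                F, g = (D & B) | (~D & C), (5 * i + 1) % 16
            elif i < 48:
                F, g = B ^ C ^ D, (3 * i + 5) % 16
            else:
                F, g = C ^ (B | ~D), (7 * i) % 16
            F = (F + A + K[i] + M[g]) & 0xFFFFFFFF
            # Simultaneous A := D, D := C, C := B, B := B + leftrotate(F, s[i]).
            A, D, C, B = D, C, B, (B + rotl(F, s[i])) & 0xFFFFFFFF
        a0 = (a0 + A) & 0xFFFFFFFF
        b0 = (b0 + B) & 0xFFFFFFFF
        c0 = (c0 + C) & 0xFFFFFFFF
        d0 = (d0 + D) & 0xFFFFFFFF

    # Output is the little-endian concatenation of the state, as hex.
    return struct.pack("<4I", a0, b0, c0, d0).hex()

# Cross-check against the standard library on the example input above:
assert md5(b"The quick brown fox jumps over the lazy dog") == \
    hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest()
```

The tuple assignment performs the four state moves from the old values simultaneously, which is exactly the point of the "Be wary of the below definitions of a,b,c,d" warning in the pseudocode.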
For example, adding a period to the end of the sentence:

MD5("The quick brown fox jumps over the lazy dog.") = e4d909c290d0fb1ca068ffaddf22cbd0

The hash of the zero-length string is:

MD5("") = d41d8cd98f00b204e9800998ecf8427e

The MD5 algorithm is specified for messages consisting of any number of bits; it is not limited to multiples of eight bits (octets, bytes). Some MD5 implementations such as md5sum might be limited to octets, or they might not support streaming for messages of an initially undetermined length.

Implementations[edit]

Below is a list of cryptography libraries that support MD5: Botan, Bouncy Castle, cryptlib, Crypto++, Libgcrypt, wolfSSL.

See also[edit]

Comparison of cryptographic hash functions, Hash function security summary, HashClash, MD5Crypt, md5deep

References[edit]

^ Rivest, R. (April 1992). "Step 4. Process Message in 16-Word Blocks". The MD5 Message-Digest Algorithm. IETF. p. 5. sec. 3.4. doi:10.17487/RFC1321. RFC 1321. Retrieved 10 October 2018.
^ Xie Tao; Fanbao Liu; Dengguo Feng (2013). "Fast Collision Attack on MD5" (PDF). Cryptology ePrint Archive.
^ Ciampa, Mark (2009). CompTIA Security+ 2008 in depth. Australia; United States: Course Technology/Cengage Learning. p. 290. ISBN 978-1-59863-913-1.
^ Kleppmann, Martin (2 April 2017). Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems (1 ed.). O'Reilly Media. p. 203. ISBN 978-1449373320.
^ a b J. Black, M. Cochran, T. Highland: A Study of the MD5 Attacks: Insights and Improvements, Archived 1 January 2015 at the Wayback Machine, 3 March 2006. Retrieved 27 July 2008.
^ Hawkes, Philip; Paddon, Michael; Rose, Gregory G. (13 October 2004). "Musings on the Wang et al. MD5 Collision". Cryptology ePrint Archive. Archived from the original on 5 November 2018. Retrieved 10 October 2018.
^ Bishop Fox (26 September 2013). "Fast MD5 and MD4 Collision Generators". BishopFox. Archived from the original on 26 April 2017. Retrieved 10 February 2014.
^ Lenstra, Arjen; Wang, Xiaoyun; Weger, Benne de (1 March 2005).
"Colliding X.509 Certificates". Cryptology ePrint Archive. Retrieved 10 October 2018. ^ Klíma, Vlastimil (5 March 2005). "Finding MD5 Collisions – a Toy For a Notebook". Cryptology ePrint Archive. Retrieved 10 October 2018. ^ Vlastimil Klima: Tunnels in Hash Functions: MD5 Collisions Within a Minute, Cryptology ePrint Archive Report 2006/105, 18 March 2006, revised 17 April 2006. Retrieved 27 July 2008. ^ "Code Cracked! Cyber Command Logo Mystery Solved". USCYBERCOM. Wired News. 8 July 2010. Retrieved 29 July 2011. ^ Tao Xie; Dengguo Feng (2010). "Construct MD5 Collisions Using Just A Single Block Of Message" (PDF). Retrieved 28 July 2011. ^ "Marc Stevens – Research – Single-block collision attack on MD5". Marc-stevens.nl. 2012. Retrieved 10 April 2014. ^ Turner, Sean (March 2011). "RFC 6151 – Updated Security Considerations for the MD5 Message-Digest and the HMAC-MD5 Algorithms". Internet Engineering Task Force. doi:10.17487/RFC6151. Retrieved 11 November 2013. ^ Rivest, Ronald L. (April 1992). "RFC 1321 – The MD5 Message-Digest Algorithm". Internet Engineering Task Force. doi:10.17487/RFC1321. Retrieved 5 October 2013. ^ Krawczyk, Hugo; Bellare, Mihir; Canetti, Ran (February 1997). "RFC 2104 – HMAC: Keyed-Hashing for Message Authentication". Internet Engineering Task Force. doi:10.17487/RFC2104. Retrieved 5 October 2013. ^ Chad R, Dougherty (31 December 2008). "Vulnerability Note VU#836068 MD5 vulnerable to collision attacks". Vulnerability notes database. CERT Carnegie Mellon University Software Engineering Institute. Retrieved 3 February 2017. ^ a b Cimpanu, Catalin. "A quarter of major CMSs use outdated MD5 as the default password hashing scheme". ZDNet. Retrieved 17 June 2019. ^ M.M.J. Stevens (June 2007). On Collisions for MD5 (PDF) (Master's thesis). ^ Marc Stevens; Arjen Lenstra; Benne de Weger (16 June 2009). "Chosen-prefix Collisions for MD5 and Applications" (PDF). École Polytechnique Fédérale de Lausanne. 
Archived from the original (PDF) on 9 November 2011. Retrieved 31 March 2010. ^ "New GPU MD5 cracker cracks more than 200 million hashes per second". ^ Magnus Daum, Stefan Lucks. "Hash Collisions (The Poisoned Message Attack)". Eurocrypt 2005 rump session. Archived from the original on 27 March 2010. ^ Max Gebhardt; Georg Illies; Werner Schindler (31 October 2005). "A Note on the Practical Value of Single Hash Collisions for Special File Formats" (PDF). National Institute of Standards and Technology. Archived from the original (PDF) on 17 September 2008. ^ a b c d e Sotirov, Alexander; Marc Stevens; Jacob Appelbaum; Arjen Lenstra; David Molnar; Dag Arne Osvik; Benne de Weger (30 December 2008). "MD5 considered harmful today". Retrieved 30 December 2008. Announced at the 25th Chaos Communication Congress. ^ "Poisonous MD5 – Wolves Among the Sheep | Silent Signal Techblog". Retrieved 10 June 2015. ^ Hans Dobbertin (Summer 1996). "The Status of MD5 After a Recent Attack". CryptoBytes. Retrieved 22 October 2013. ^ Xiaoyun Wang; Hongbo Yu (2005). "How to Break MD5 and Other Hash Functions" (PDF). Advances in Cryptology – Lecture Notes in Computer Science. pp. 19–35. Archived from the original (PDF) on 21 May 2009. Retrieved 21 December 2009. ^ Xiaoyun Wang; Dengguo Feng; Xuejia Lai; Hongbo Yu: Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD, Cryptology ePrint Archive Report 2004/199, 16 August 2004, revised 17 August 2004. Retrieved 27 July 2008. ^ Marc Stevens, Arjen Lenstra, Benne de Weger: Vulnerability of software integrity and code signing applications to chosen-prefix collisions for MD5, 30 November 2007. Retrieved 27 July 2008. ^ Stray, Jonathan (30 December 2008). "Web browser flaw could put e-commerce security at risk". CNET.com. Archived from the original on 28 August 2013. Retrieved 24 February 2009. ^ "CERT Vulnerability Note VU#836068". Kb.cert.org. Retrieved 9 August 2010. ^ "NIST.gov — Computer Security Division — Computer Security Resource Center". Csrc.nist.gov.
Archived from the original on 9 June 2011. Retrieved 9 August 2010. ^ a b "Flame malware collision attack explained". Archived from the original on 8 June 2012. Retrieved 7 June 2012. ^ Dobbertin, Hans (Summer 1996). "The Status of MD5 After a Recent Attack" (PDF). RSA Laboratories CryptoBytes. 2 (2): 1. Retrieved 10 August 2010. The presented attack does not yet threaten practical applications of MD5, but it comes rather close. .... [sic] in the future MD5 should no longer be implemented... [sic] where a collision-resistant hash function is required. [permanent dead link] ^ "Schneier on Security: More MD5 Collisions". Schneier.com. Retrieved 9 August 2010. ^ "Colliding X.509 Certificates". Win.tue.nl. Retrieved 9 August 2010. ^ "[Python-Dev] hashlib — faster md5/sha, adds sha256/512 support". Mail.python.org. Retrieved 9 August 2010. ^ "Researchers Use PlayStation Cluster to Forge a Web Skeleton Key". Wired. 31 December 2008. Retrieved 31 December 2008. ^ Callan, Tim (31 December 2008). "This morning's MD5 attack — resolved". Verisign. Archived from the original on 16 January 2009. Retrieved 31 December 2008. ^ Bruce Schneier (31 December 2008). "Forging SSL Certificates". Schneier on Security. Retrieved 10 April 2014. ^ Eric Rescorla (17 August 2004). "A real MD5 collision". Educated Guesswork (blog). Archived from the original on 15 August 2014. Retrieved 13 April 2015. ^ Anton A. Kuznetsov. "An algorithm for MD5 single-block collision attack using high performance computing cluster" (PDF). IACR. Retrieved 3 November 2014. ^ Yu Sasaki; Kazumaro Aoki (16 April 2009). "Finding Preimages in Full MD5 Faster Than Exhaustive Search". Advances in Cryptology - EUROCRYPT 2009. Lecture Notes in Computer Science. Vol. 5479. Springer Berlin Heidelberg. pp. 134–152. doi:10.1007/978-3-642-01001-9_8. ISBN 978-3-642-01000-2. ^ Ming Mao and Shaohui Chen and Jin Xu (2009). "Construction of the Initial Structure for Preimage Attack of MD5". 
2009 International Conference on Computational Intelligence and Security. International Conference on Computational Intelligence and Security. Vol. 1. IEEE Computer Society. pp. 442–445. doi:10.1109/CIS.2009.214. ISBN 978-0-7695-3931-7. S2CID 16512325. ^ "Availability and description of the File Checksum Integrity Verifier utility". Microsoft Support. 17 June 2013. Retrieved 10 April 2014. ^ "How to compute the MD5 or SHA-1 cryptographic hash values for a file". Microsoft Support. 23 January 2007. Retrieved 10 April 2014. ^ "FreeBSD Handbook, Security – DES, Blowfish, MD5, and Crypt". Retrieved 19 October 2014. ^ "Synopsis – man pages section 4: File Formats". Docs.oracle.com. 1 January 2013. Retrieved 10 April 2014. ^ NIST SP 800-132 Section 5.1 ^ "Reference Source". ^ RFC 1321, section 2, "Terminology and Notation", Page 2. Berson, Thomas A. (1992). "Differential Cryptanalysis Mod 232 with Applications to MD5". EUROCRYPT. pp. 71–80. ISBN 3-540-56413-6. Bert den Boer; Antoon Bosselaers (1993). "Collisions for the Compression Function of MD5". Advances in Cryptology — EUROCRYPT '93. EUROCRYPT. Berlin; London: Springer. pp. 293–304. ISBN 978-3-540-57600-6. Hans Dobbertin, Cryptanalysis of MD5 compress. Announcement on Internet, May 1996. "CiteSeerX". Citeseer.ist.psu.edu. Retrieved 9 August 2010. Dobbertin, Hans (1996). "The Status of MD5 After a Recent Attack". CryptoBytes. 2 (2). Xiaoyun Wang; Hongbo Yu (2005). "How to Break MD5 and Other Hash Functions" (PDF). EUROCRYPT. ISBN 3-540-25910-4. Archived from the original (PDF) on 21 May 2009. Retrieved 6 March 2008. 
CommonCrawl
Gelfand-Shilov smoothing effect for the spatially inhomogeneous Boltzmann equations without cut-off
Kinetic & Related Models, October 2020, 13(5): 1029-1046. doi: 10.3934/krm.2020036
Wei-Xi Li, School of Mathematics and Statistics, Wuhan University, and Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan 430072, China
Lvqiao Liu, School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China (* Corresponding author)
Received January 2020; Revised May 2020; Published August 2020.
Fund Project: The first author is supported by NSF grant Nos. 11871054 and 11771342, the Fok Ying Tung Education Foundation (151001) and the Natural Science Foundation of Hubei Province (2019CFA007).
In this work we consider the Cauchy problem for the spatially inhomogeneous non-cutoff Boltzmann equation. For any given solution belonging to a weighted Sobolev space, we show that at positive times it enjoys the Gelfand-Shilov smoothing effect in the velocity variable and Gevrey regularizing properties in the spatial variable. This improves the result of Lerner-Morimoto-Pravda-Starov-Xu [J. Funct. Anal. 269 (2015) 459-535] on the one-dimensional Boltzmann equation to the physical three-dimensional case. Our proof relies on elementary weighted $L^2$ estimates.
Keywords: Boltzmann equation, Gelfand-Shilov regularity, subelliptic estimate, non cut-off, weighted estimate.
Mathematics Subject Classification: Primary: 35B65; 35Q20; 35H10.
Citation: Wei-Xi Li, Lvqiao Liu. Gelfand-Shilov smoothing effect for the spatially inhomogeneous Boltzmann equations without cut-off. Kinetic & Related Models, 2020, 13 (5): 1029-1046. doi: 10.3934/krm.2020036
References:
R. Alexandre, L. Desvillettes, C. Villani and B.
Wennberg, Entropy dissipation and long-range interactions, Arch. Ration. Mech. Anal., 152 (2000), 327-355. doi: 10.1007/s002050000083.
R. Alexandre, Y. Morimoto, S. Ukai, C.-J. Xu and T. Yang, Regularizing effect and local existence for the non-cutoff Boltzmann equation, Arch. Ration. Mech. Anal., 198 (2010), 39-123. doi: 10.1007/s00205-010-0290-1.
R. Alexandre, Y. Morimoto, S. Ukai, C.-J. Xu and T. Yang, Uncertainty principle and kinetic equations, J. Funct. Anal., 255 (2008), 2013-2066. doi: 10.1016/j.jfa.2008.07.004.
R. Alexandre, Y. Morimoto, S. Ukai, C.-J. Xu and T. Yang, The Boltzmann equation without angular cutoff in the whole space: Qualitative properties of solutions, Arch. Ration. Mech. Anal., 202 (2011), 599-661. doi: 10.1007/s00205-011-0432-0.
R. Alexandre, Y. Morimoto, S. Ukai, C.-J. Xu and T. Yang, The Boltzmann equation without angular cutoff in the whole space: I, global existence for soft potential, J. Funct. Anal., 262 (2012), 915-1010. doi: 10.1016/j.jfa.2011.10.007.
R. Alexandre and M. Safadi, Littlewood-Paley theory and regularity issues in Boltzmann homogeneous equations. I. Non-cutoff case and Maxwellian molecules, Math. Models Methods Appl. Sci., 15 (2005), 907-920. doi: 10.1142/S0218202505000613.
R. Alexandre and M. Safadi, Littlewood-Paley theory and regularity issues in Boltzmann homogeneous equations. II. Non cutoff case and non Maxwellian molecules, Discrete Contin. Dyn. Syst., 24 (2009), 1-11. doi: 10.3934/dcds.2009.24.1.
R. Alexandre, F. Hérau and W.-X. Li, Global hypoelliptic and symbolic estimates for the linearized Boltzmann operator without angular cutoff, J. Math. Pures Appl. (9), 126 (2019), 1-71. doi: 10.1016/j.matpur.2019.04.013.
J.-M. Barbaroux, D. Hundertmark, T. Ried and S.
Vugalter, Gevrey smoothing for weak solutions of the fully nonlinear homogeneous Boltzmann and Kac equations without cutoff for Maxwellian molecules, Archive for Rational Mechanics and Analysis, 225 (2017), 601-661. doi: 10.1007/s00205-017-1101-8.
H. Cao, H.-G. Li, C.-J. Xu and J. Xu, The Cauchy problem for the inhomogeneous non-cutoff Kac equation in critical Besov space, arXiv E-prints, arXiv: 1902.06699. doi: 10.1016/j.jde.2019.12.025.
H. Chen, X. Hu, W.-X. Li and J. Zhan, Gevrey smoothing effect for the spatially inhomogeneous Boltzmann equations without cut-off, arXiv E-prints, arXiv: 1805.12543.
H. Chen, W.-X. Li and C.-J. Xu, Propagation of Gevrey regularity for solutions of Landau equations, Kinet. Relat. Models, 1 (2008), 355-368. doi: 10.3934/krm.2008.1.355.
H. Chen, W.-X. Li and C.-J. Xu, Analytic smoothness effect of solutions for spatially homogeneous Landau equation, J. Differential Equations, 248 (2010), 77-94. doi: 10.1016/j.jde.2009.08.006.
L. Desvillettes, About the regularizing properties of the non-cut-off Kac equation, Comm. Math. Phys., 168 (1995), 417-440. doi: 10.1007/BF02101556.
L. Desvillettes, Regularization properties of the $2$-dimensional non-radially symmetric non-cutoff spatially homogeneous Boltzmann equation for Maxwellian molecules, Transport Theory Statist. Phys., 26 (1997), 341-357. doi: 10.1080/00411459708020291.
L. Desvillettes and C. Villani, On the spatially homogeneous Landau equation for hard potentials. I. Existence, uniqueness and smoothness, Comm. Partial Differential Equations, 25 (2000), 179-259. doi: 10.1080/03605300008821512.
L. Desvillettes and B. Wennberg, Smoothness of the solution of the spatially homogeneous Boltzmann equation without cutoff, Comm. Partial Differential Equations, 29 (2004), 133-155. doi: 10.1081/PDE-120028847.
L. Glangetas, H.-G. Li and C.-J.
Xu, Sharp regularity properties for the non-cutoff spatially homogeneous Boltzmann equation, Kinet. Relat. Models, 9 (2016), 299-371. doi: 10.3934/krm.2016.9.299.
P. T. Gressman and R. M. Strain, Global classical solutions of the Boltzmann equation without angular cut-off, J. Amer. Math. Soc., 24 (2011), 771-847. doi: 10.1090/S0894-0347-2011-00697-8.
Z. Huo, Y. Morimoto, S. Ukai and T. Yang, Regularity of solutions for spatially homogeneous Boltzmann equation without angular cutoff, Kinet. Relat. Models, 1 (2008), 453-489. doi: 10.3934/krm.2008.1.453.
N. Lerner, Metrics on the Phase Space and Non-selfadjoint Pseudo-differential Operators, Birkhäuser Verlag, Basel, 2010. doi: 10.1007/978-3-7643-8510-1.
N. Lerner, Y. Morimoto, K. Pravda-Starov and C.-J. Xu, Gelfand-Shilov smoothing properties of the radially symmetric spatially homogeneous Boltzmann equation without angular cutoff, J. Differential Equations, 256 (2014), 797-831. doi: 10.1016/j.jde.2013.10.001.
N. Lerner, Y. Morimoto, K. Pravda-Starov and C.-J. Xu, Gelfand-Shilov and Gevrey smoothing effect for the spatially inhomogeneous non-cutoff Kac equation, J. Funct. Anal., 269 (2015), 459-535. doi: 10.1016/j.jfa.2015.04.017.
H.-G. Li and C.-J. Xu, The Cauchy problem for the radially symmetric homogeneous Boltzmann equation with Shubin class initial datum and Gelfand-Shilov smoothing effect, J. Differential Equations, 263 (2017), 5120-5150. doi: 10.1016/j.jde.2017.06.010.
P.-L. Lions, Régularité et compacité pour des noyaux de collision de Boltzmann sans troncature angulaire, C. R. Acad. Sci. Paris Sér. I Math., 326 (1998), 37-41. doi: 10.1016/S0764-4442(97)82709-7.
Y. Morimoto and S. Ukai, Gevrey smoothing effect of solutions for spatially homogeneous nonlinear Boltzmann equation without angular cutoff, J. Pseudo-Differ. Oper. Appl., 1 (2010), 139-159. doi: 10.1007/s11868-010-0008-z.
Y. Morimoto and T. Yang, Smoothing effect of the homogeneous Boltzmann equation with measure valued initial datum, Ann. Inst. H. Poincaré Anal. Non Linéaire, 32 (2015), 429-442. doi: 10.1016/j.anihpc.2013.12.004.
Y. Morimoto, S. Ukai, C.-J. Xu and T. Yang, Regularity of solutions to the spatially homogeneous Boltzmann equation without angular cutoff, Discrete Contin. Dyn. Syst., 24 (2009), 187-212. doi: 10.3934/dcds.2009.24.187.
Y. Morimoto and C.-J. Xu, Ultra-analytic effect of Cauchy problem for a class of kinetic equations, J. Differential Equations, 247 (2009), 596-617. doi: 10.1016/j.jde.2009.01.028.
C. Villani, Regularity estimates via the entropy dissipation for the spatially homogeneous Boltzmann equation without cut-off, Rev. Mat. Iberoam., 15 (1999), 335-352. doi: 10.4171/RMI/259.
\begin{document} \title{Dependency Pairs Termination in Dependent Type Theory Modulo Rewriting} \begin{abstract} Dependency pairs are a key concept at the core of modern automated termination provers for first-order term rewriting systems. In this paper, we introduce an extension of this technique for a large class of dependently-typed higher-order rewriting systems. This extends previous results by Wahlstedt on the one hand and the first author on the other hand to strong normalization and non-orthogonal rewriting systems. This new criterion is implemented in the type-checker \tool{Dedukti}. \end{abstract} \section{Introduction} Termination, that is, the absence of infinite computations, is an important problem in software verification, as well as in logic. In logic, it is often used to prove cut elimination and consistency. In automated theorem provers and proof assistants, it is often used (together with confluence) to check decidability of equational theories and type-checking algorithms. This paper introduces a new termination criterion for a large class of programs whose operational semantics can be described by higher-order rewriting rules \cite{terese03book} typable in the $\l\Pi$-calculus modulo rewriting ($\l\Pi/\cR$ for short). $\l\Pi/\cR$ is a system of dependent types where types are identified modulo the $\b$-reduction of $\l$-calculus and a set $\cR$ of rewriting rules given by the user to define not only functions but also types. It extends Barendregt's Pure Type System (PTS) $\l P$ \cite{barendregt92chapter}, the logical framework LF \cite{harper93jacm} and Martin-L\"of's type theory\hide{\cite{martinlof84book}}. 
It can encode any functional PTS like System F or the Calculus of Constructions \cite{cousineau07tlca}.\hide{assaf15phd} \hide{This makes $\l\Pi/\cR$ a good candidate for a logical framework and a platform for proof translation and interoperability \cite{assaf16draft}.} Dependent types, introduced by de Bruijn in \tool{Automath}\hide{\cite{debruijn68sad}}, subsume generalized algebraic data types (GADT)\hide{\cite{xi03popl}} used in some functional programming languages. They are at the core of many proof assistants and programming languages: \tool{Coq}, \tool{Twelf}, \tool{Agda}, \tool{Lean}, \tool{Idris}, \ldots Our criterion has been implemented in \href{https://deducteam.github.io/}{\tool{Dedukti}}\hide{\cite{dedukti}}, a type-checker for $\l\Pi/\cR$ that we will use in our examples. The code is available in \cite{sizechangetool} and could be easily adapted to a subset of other languages like \tool{Agda}. As far as we know, this tool is the first one to automatically check termination in $\l\Pi/\cR$, which includes both higher-order rewriting and dependent types. This criterion is based on dependency pairs, an important concept in the termination of first-order term rewriting systems. It generalizes the notion of recursive call in first-order functional programs to rewriting. Namely, the dependency pairs of a rewriting rule $f(l_1,\ldots,l_p)\a r$ are the pairs $(f(l_1,\ldots,l_p),g(m_1,\ldots,m_q))$ such that $g(m_1,\ldots,m_q)$ is a subterm of $r$ and $g$ is a function symbol defined by some rewriting rules. Dependency pairs have been introduced by Arts and Giesl \cite{arts00tcs} and have evolved into a general framework for termination \cite{giesl04lpar}. It is now at the heart of many state-of-the-art automated termination provers for first-order rewriting systems and \tool{Haskell}, \tool{Java} or \tool{C} programs. 
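In the first-order case, extracting dependency pairs from a rule set is a short program. The following Python sketch is purely illustrative (it is not the \tool{Dedukti} implementation, and the term representation is our own assumption): a term is a tuple whose first component is a function symbol and whose remaining components are subterms, and a variable is a bare string.

```python
def subterms(t):
    """Yield t together with all of its subterms."""
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def dependency_pairs(rules):
    """For each rule l -> r, pair l with every subterm of r whose head
    symbol is defined, i.e. is the head of some left-hand side."""
    defined = {lhs[0] for lhs, _ in rules}
    return [(lhs, s)
            for lhs, rhs in rules
            for s in subterms(rhs)
            if isinstance(s, tuple) and s[0] in defined]

# plus(s(x), y) -> s(plus(x, y))  and  plus(0, y) -> y
rules = [(("plus", ("s", "x"), "y"), ("s", ("plus", "x", "y"))),
         (("plus", ("0",), "y"), "y")]
pairs = dependency_pairs(rules)
```

On the addition rules above, the only dependency pair is $(plus(s(x),y),\,plus(x,y))$, matching the recursive call.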
\hide{Indeed, Arts and Giesl proved that a rewriting relation terminates if and only if there is no infinite sequence of dependency pairs interleaved with reductions in the arguments. In a first-order functional setting, it amounts to saying that there is no infinite sequence of function calls.} Dependency pairs have been extended to different simply-typed settings for higher-order rewriting: Combinatory Reduction Systems \cite{klop93tcs}\hide{kop12phd} and Higher-order Rewriting Systems \cite{mayr98tcs}, with two different approaches: dynamic dependency pairs include variable applications \cite{kop12phd}, while static dependency pairs exclude them by slightly restricting the class of systems that can be considered \cite{kusakari07aaecc}\hide{suzuki11pro}. Here, we use the static approach. In \cite{wahlstedt07phd}, Wahlstedt considered a system slightly less general than $\l\Pi/\cR$ for which he provided conditions that imply the weak normalization, that is, the existence of a finite reduction to normal form. In his system, $\cR$ uses matching on constructors only, like in the languages \tool{OCaml} or \tool{Haskell}. In this case, $\cR$ is orthogonal: rules are left-linear (no variable occurs twice in a left-hand side) and have no critical pairs (no two rule left-hand side instances overlap). Wahlstedt's proof proceeds in two modular steps. First, he proves that typable terms have a normal form if there is no infinite sequence of function calls. Second, he proves that there is no infinite sequence of function calls if $\cR$ satisfies Lee, Jones and Ben-Amram's size-change termination criterion (SCT) \cite{lee01popl}. In this paper, we extend Wahlstedt's results in two directions. First, we prove a stronger normalization property: the absence of infinite reductions. Second, we assume that $\cR$ is locally confluent, a much weaker condition than orthogonality: rules can be non-left-linear and have joinable critical pairs. 
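The size-change termination criterion itself is decidable and short to implement. The Python sketch below is our own hypothetical encoding (independent of the development in this paper): a size-change graph is a set of arcs $(i, j, strict)$ between argument positions of the caller and the callee; the set of call graphs is closed under composition, and Lee, Jones and Ben-Amram's characterization is applied: SCT holds iff every idempotent graph $f \to f$ in the closure has a strictly decreasing self-arc.

```python
def compose(g, h):
    """Compose two size-change graphs: an arc i->k exists when some j links
    i->j in g and j->k in h; it is strict when either arc is strict."""
    arcs = {}
    for (i, j1, s1) in g:
        for (j2, k, s2) in h:
            if j1 == j2:
                arcs[(i, k)] = arcs.get((i, k), False) or s1 or s2
    return frozenset((i, k, s) for (i, k), s in arcs.items())

def sct(graphs):
    """graphs maps a caller/callee pair (f, g) to a set of size-change
    graphs.  Close under composition, then check that every idempotent
    graph f->f contains a strict self-arc (i, i, True)."""
    closure = {pair: set(gs) for pair, gs in graphs.items()}
    changed = True
    while changed:
        changed = False
        for (f, g1), gs in list(closure.items()):
            for (g2, h), hs in list(closure.items()):
                if g1 != g2:
                    continue
                for G in list(gs):
                    for H in list(hs):
                        C = compose(G, H)
                        if C not in closure.setdefault((f, h), set()):
                            closure[(f, h)].add(C)
                            changed = True
    return all(any(i == j and s for (i, j, s) in G)
               for (f, g), gs in closure.items() if f == g
               for G in gs if compose(G, G) == G)

# Ackermann-style calls: each call either strictly decreases argument 0,
# or keeps argument 0 and strictly decreases argument 1.
ack = {("ack", "ack"): {frozenset({(0, 0, True)}),
                        frozenset({(0, 0, False), (1, 1, True)})}}
# f(x, y) -> f(y, x): the arguments only swap, nothing ever decreases.
swap = {("f", "f"): {frozenset({(0, 1, False), (1, 0, False)})}}
```

On these examples `sct(ack)` succeeds while `sct(swap)` fails: the swap system admits an infinite sequence of calls with no decreasing value.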
In \cite{blanqui05mscs}, the first author developed a termination criterion for a calculus slightly more general than $\l\Pi/\cR$, based on the notion of computability closure, assuming that type-level rules are orthogonal. The computability closure of a term $f(l_1,\ldots,l_p)$ is a set of terms that terminate whenever $l_1,\ldots,l_p$ terminate. It is defined inductively thanks to deduction rules preserving this property, using a precedence and a fixed well-founded ordering for dealing with function calls. Termination can then be enforced by requiring each rule right-hand side to belong to the computability closure of its corresponding left-hand side. \hide{Function calls are added in the computability closure by using a fixed well-founded quasi-ordering. In a way similar to the recursive path ordering \cite{dershowitz79focs}, a function call $g(m_1,\ldots,m_q)$ is added in the computability closure of $f(l_1,\ldots,l_p)$ if $m_1,\ldots,m_q$ are in the computability closure and, either $g$ is smaller than $f$ in a well-founded precedence, or $g$ is equivalent to $f$ but $m_1,\ldots,m_q$ is smaller than $l_1,\ldots,l_p$ in the multiset or lexicographic extension of some fixed well-founded ordering. } We extend this work as well by replacing that fixed ordering by the dependency pair relation. In \cite{blanqui05mscs}, there must be a decrease in every function call. Using dependency pairs allows one to have non-strict decreases. Then, following Wahlstedt, SCT can be used to enforce the absence of infinite sequence of dependency pairs. But other criteria have been developed for this purpose that could be adapted to $\l\Pi/\cR$. \hide{ However, because in our setting typing depends on rewriting, some dependency pairs may arise from typing. For instance, if $f$ takes an argument of type $Ft$, then we will have dependency pairs between $f$ and $F$, even though the rules of $f$ do not use $F$ itself. 
Our extended notion of dependency pair boils down to the usual one when one restricts himself to simply-typed rewriting rules though. } \subsubsection*{Outline} The main result is Theorem \ref{thm-dp} stating that, for a large class of rewriting systems $\cR$, the combination of $\b$ and $\cR$ is strongly normalizing on terms typable in $\l\Pi/\cR$ if, roughly speaking, there is no infinite sequence of dependency pairs. The proof involves two steps. First, after recalling the terms and types of $\l\Pi/\cR$ in Section \ref{sec-terms}, we introduce in Section \ref{sec-interp} a model of this calculus based on Girard's reducibility candidates \cite{girard88book}, and prove that every typable term is strongly normalizing if every symbol of the signature is in the interpretation of its type (Adequacy lemma). Second, in Section \ref{sec-dp-thm}, we introduce our notion of dependency pair and prove that every symbol of the signature is in the interpretation of its type if there is no infinite sequence of dependency pairs. In order to show the usefulness of this result, we give simple criteria for checking the conditions of the theorem. In Section \ref{sec-acc}, we show that \emph{plain function passing} systems belong to the class of systems that we consider. And in Section \ref{sec-sct}, we show how to use size-change termination to obtain the termination of the dependency pair relation. Finally, in Section \ref{sec-comp} we compare our criterion with other criteria and tools and, in Section \ref{sec-conclu}, we summarize our results and give some hints on possible extensions. For lack of space, some proofs are given in an appendix at the end of the paper. \hide{ The paper is self-contained except for a few meta-theoretical properties of $\l\Pi/\cR$ taken from \cite{blanqui01phd}. The proofs of some lemmas are given in an appendix at the end of the paper. Section \ref{sec-interp} is the most technical part of the paper and can be skipped at first. 
It introduces a new interpretation of types (Definition \ref{def-interp}), much simpler than the one of Wahlstedt \cite[Definition 3.2.3]{wahlstedt07phd}. In particular, it does not use transfinite ordinal theory but a powerful fixpoint theorem instead. } \section{Terms and types} \label{sec-terms} The set $\bT$ of terms of $\l\Pi/\cR$ is the same as those of Barendregt's $\l P$ \cite{barendregt92chapter}: \begin{center}$t\in\bT = s\in\bS\mid x\in\bV\mid f\in\bF\mid\prod xtt\mid tt\mid\abs xtt$\end{center} where $\bS=\{\type,\kind\}$ is the set of sorts\footnote{Sorts refer here to the notion of sort in Pure Type Systems, not the one used in some first-order settings.}, $\bV$ is an infinite set of variables and $\bF$ is a set of function symbols, so that $\bS$, $\bV$ and $\bF$ are pairwise disjoint. Furthermore, we assume given a set $\cR$ of rules $l\a r$ such that $\FV(r)\sle\FV(l)$ and $l$ is of the form $f\vl$. A symbol $f$ is said to be defined if there is a rule of the form $f\vl\a r$. In this paper, we are interested in the termination of \begin{center}${\a}={\ab\cup\ar}$\end{center} where $\ab$ is the $\b$-reduction of $\l$-calculus and $\ar$ is the smallest relation containing $\cR$ and closed by substitution and context: we consider rewriting with syntactic matching only. Following \cite{blanqui16tcs}, it should however be possible to extend the present results to rewriting with matching modulo $\b\eta$ or some equational theory. Let $\SN$ be the set of terminating terms and, given a term $t$, let $\red{t}=\{u\in\mb{T}\mid t\a u\}$ be the set of immediate reducts of $t$. A typing environment $\G$ is a (possibly empty) sequence $x_1:T_1,\ldots,x_n:T_n$ of pairs of variables and terms, where the variables are distinct, written $\vx:\vT$ for short. Given an environment $\G=\vx:\vT$ and a term $U$, let $\all\G,U$ be $\prod\vx\vT U$. 
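The relation $\ar$ introduced above — the closure of $\cR$ under substitution and context, with syntactic matching — can be made concrete in a few lines. The following Python sketch is an illustration under simplifying assumptions (untyped first-order terms only; no $\b$, sorts, products or abstractions; all names are ours and unrelated to the \tool{Dedukti} code): a term is a tuple headed by a symbol and a rule variable is a bare string.

```python
def match(pat, term, subst):
    """Syntactic matching: extend subst so that pat instantiated by subst
    equals term, or return None if matching fails."""
    if isinstance(pat, str):                  # rule variable
        if pat in subst:                      # non-left-linear: same value?
            return subst if subst[pat] == term else None
        extended = dict(subst)
        extended[pat] = term
        return extended
    if isinstance(term, str) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(subst, t):
    if isinstance(t, str):
        return subst.get(t, t)
    return (t[0],) + tuple(substitute(subst, a) for a in t[1:])

def step(rules, t):
    """One R-step, leftmost-outermost: closure of the rules under
    substitution (instantiate a matching rule at the root) and under
    context (otherwise recurse into the arguments)."""
    for lhs, rhs in rules:
        subst = match(lhs, t, {})
        if subst is not None:
            return substitute(subst, rhs)
    if isinstance(t, tuple):
        for i, arg in enumerate(t[1:], 1):
            reduced = step(rules, arg)
            if reduced is not None:
                return t[:i] + (reduced,) + t[i + 1:]
    return None                               # t is in normal form

# zero + q -> q  and  (s p) + q -> s (p + q), as in the addition rules above
rules = [(("plus", ("s", "p"), "q"), ("s", ("plus", "p", "q"))),
         (("plus", ("zero",), "q"), "q")]
one = ("s", ("zero",))
```

Iterating `step` on `("plus", one, one)` reaches the normal form `s (s zero)` in two steps, after which `step` returns `None`; a term is in $\SN$ exactly when every such iteration is finite.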
The product arity $\arit(T)$ of a term $T$ is the integer $n\in\bN$ such that $T=\prod{x_1}{T_1}\ldots\prod{x_n}{T_n}U$ and $U$ is not a product. Let $\vt$ denote a possibly empty sequence of terms $t_1,\ldots,t_n$ of length $|\vt|=n$, and $\FV(t)$ be the set of free variables of $t$. For each $f\in\bF$, we assume given a term $\T_f$ and a sort $s_f$, and let $\G_f$ be the environment such that $\T_f=\all\G_f,U$ and $|\G_f|=\arit(\T_f)$. The application of a substitution $\s$ to a term $t$ is written $t\s$. Given a substitution $\s$, let $\dom(\s)=\{x|x\s\neq x\}$, $\FV(\s)=\bigcup_{x\in\dom(\s)}\FV(x\s)$ and $[x\to a,\s]$ ($[x\to a]$ if $\s$ is the identity) be the substitution $\{(x,a)\}\cup\{(y,b)\in\s\mid y\neq x\}$. Given another substitution $\s'$, let $\s\a\s'$ if there is $x$ such that $x\s\a x\s'$ and, for all $y\neq x$, $y\s=y\s'$. The typing rules of $\l\Pi/\cR$, in Figure \ref{fig-typ}, add to those of $\l P$ the rule (fun) similar to (var). Moreover, (conv) uses $\ad$ instead of $\ad_\b$, where ${\ad}={\a^*{}^*\hspace{-3pt}\la}$ is the joinability relation and $\a^*$ the reflexive and transitive closure of $\a$. We say that $t$ has type $T$ in $\G$ if $\G\th t:T$ is derivable. A substitution $\s$ is well-typed from $\D$ to $\G$, written $\G\th\s:\D$, if, for all $(x:T)\in\D$, $\G\th x\s:T\s$ holds. The word ``type'' is used to denote a term occurring at the right-hand side of a colon in a typing judgment (and we usually use capital letters for types). Hence, $\kind$ is the type of $\type$, $\T_f$ is the type of $f$, and $s_f$ is the type of $\T_f$. Common data types like natural numbers $\bN$ are usually declared in $\l\Pi$ as function symbols of type $\type$: $\T_\bN=\type$ and $s_\bN=\kind$. The dependent product $\prod xAB$ generalizes the arrow type $A\A B$ of simply-typed $\l$-calculus: it is the type of functions taking an argument $x$ of type $A$ and returning a term whose type $B$ may depend on $x$. 
If $B$ does not depend on $x$, we sometimes simply write $A\A B$. Typing induces a hierarchy on terms \cite[Lemma 47]{blanqui01phd}. At the top, there is the sort $\kind$ that is not typable. Then, comes the class $\bK$ of kinds, whose type is $\kind$: $K=\type\mid\prod xtK$ where $t\in\bT$. Then, comes the class of predicates, whose types are kinds. Finally, at the bottom lie (proof) objects whose types are predicates. \begin{figure} \caption{Typing rules of $\l\Pi/\cR$} \label{fig-typ} \end{figure} \begin{example}[Filter function on dependent lists]\label{expl-list-poly} To illustrate the kind of systems we consider, we give an extensive example in the new \tool{Dedukti} syntax combining type-level rewriting rules ({\tt El} converts datatype codes into \tool{Dedukti} types), dependent types ($\bL$ is the polymorphic type of lists parameterized with their length), higher-order variables ({\tt fil} is a function filtering elements out of a list along a boolean function {\tt f}), and matching on defined function symbols ({\tt fil} can match a list defined by concatenation). Note that this example cannot be represented in \tool{Coq} or \tool{Agda} because of the rules using matching on {\tt app}. And its termination can be handled neither by \cite{wahlstedt07phd} nor by \cite{blanqui05mscs} because the system is not orthogonal and has no strict decrease in every recursive call. It can however be handled by our new termination criterion and its implementation \cite{sizechangetool}. For readability, we removed the {\tt \&} which are used to identify pattern variables in the rewriting rules. 
\begin{lstlisting}[mathescape=true] symbol Set: TYPE symbol arrow: Set $\A$ Set $\A$ Set symbol El: Set $\A$ TYPE rule El (arrow a b) ${\color{darkorange}\rightarrow}$ El a $\A$ El b symbol Bool: TYPE symbol true: Bool symbol false: Bool symbol Nat: TYPE symbol zero: Nat symbol s: Nat $\A$ Nat symbol plus: Nat $\A$ Nat $\A $Nat set infix 1 "+" $\coloneqq$ plus rule zero + q ${\color{darkorange}\rightarrow}$ q rule (s p) + q ${\color{darkorange}\rightarrow}$ s (p + q) symbol List: Set $\A$ Nat $\A$ TYPE symbol nil: $\all$a, List a zero symbol cons:$\all$a, El a $\A$ $\all$p, List a p $\A$ List a (s p) symbol app: $\all$a p, List a p $\A$ $\all$q, List a q $\A$ List a (p+q) rule app a _ (nil _) q m ${\color{darkorange}\rightarrow}$ m rule app a _ (cons _ x p l) q m ${\color{darkorange}\rightarrow}$ cons a x (p+q) (app a p l q m) symbol len_fil: $\all$a, (El a $\A$ Bool) $\A$ $\all$p, List a p $\A$ Nat symbol len_fil_aux: Bool $\A$ $\all$a, (El a $\A$ Bool) $\A$ $\all$p, List a p $\A$ Nat rule len_fil a f _ (nil _) ${\color{darkorange}\rightarrow}$ zero rule len_fil a f _ (cons _ x p l) ${\color{darkorange}\rightarrow}$ len_fil_aux (f x) a f p l rule len_fil a f _ (app _ p l q m) ${\color{darkorange}\rightarrow}$ (len_fil a f p l) + (len_fil a f q m) rule len_fil_aux true a f p l ${\color{darkorange}\rightarrow}$ s (len_fil a f p l) rule len_fil_aux false a f p l ${\color{darkorange}\rightarrow}$ len_fil a f p l symbol fil: $\all$a f p l, List a (len_fil a f p l) symbol fil_aux: $\all$b a f, El a $\A$ $\all$p l, List a (len_fil_aux b a f p l) rule fil a f _ (nil _) ${\color{darkorange}\rightarrow}$ nil a rule fil a f _ (cons _ x p l) ${\color{darkorange}\rightarrow}$ fil_aux (f x) a f x p l rule fil a f _ (app _ p l q m) ${\color{darkorange}\rightarrow}$ app a (len_fil a f p l) (fil a f p l) (len_fil a f q m) (fil a f q m) rule fil_aux false a f x p l ${\color{darkorange}\rightarrow}$ fil a f p l rule fil_aux true a f x p l ${\color{darkorange}\rightarrow}$ cons a x 
(len_fil a f p l) (fil a f p l) \end{lstlisting} \hide{Note that the rules of {\tt +} are required for the rules of {\tt app} to preserve typing.} \end{example} \noindent{\bf Assumptions:} Throughout the paper, we assume that $\a$ is locally confluent (${\la\a}\sle{\ad}$) and preserves typing (for all $\G$, $A$, $t$ and $u$, if $\G\th t:A$ and $t\a u$, then $\G\th u:A$). Note that local confluence implies that every $t\in\SN$ has a unique normal form $\nf{t}$\hide{\cite{newman42am}}. These assumptions are used in the interpretation of types (Definition \ref{def-interp}) and the adequacy lemma (Lemma \ref{lem-comp}). Both properties are undecidable in general. For confluence, \tool{Dedukti} can call confluence checkers that understand the HRS format of the \href{http://project-coco.uibk.ac.at/}{confluence competition}. For preservation of typing by reduction, it implements a heuristic \cite{saillard15phd}. \hide{See \cite{barbanera97jfp,blanqui05mscs,saillard15phd} for sufficient conditions.} \hide{ Note that, when rules contain no abstraction and no variable applications, then ${\a}={{\ab}\cup{\ar}}$ is locally confluent on $\bT$ iff $\ar$ is locally confluent on first-order terms \cite[Lemma 64]{blanqui01phd}. } \hide{ \begin{lemma} $\G\th T:\kind$ iff $T\in\bK$ and $T$ is typable. \end{lemma} \begin{proof} We first prove that, if $\G\th t:T$ and $T=\kind$, then $T\in\bK$, by induction on $\G\th t:T$. We only detail important cases: \begin{itemize} \item[(var)] Then, $A=\kind$. This case is not possible since $\G\th A:s$ and $\kind$ is not typable. \item[(app)] If $B[x\to a]=\kind$, then either $B=\kind$ or else $B=x$ and $a=\kind$. Both cases are impossible since both $B$ and $a$ are typable while $\kind$ is not typable. \item[(conv)] This case is not possible since $\G\th B:s$ and $\kind$ is not typable. \item[(fun)] Then, $\T_f=\kind$. This case is not possible since $\G\th\T_f:s_f$ and $\kind$ is not typable.
\end{itemize} Hence, $T$ can only be typed by using (ax), (weak) and (prod). \end{proof} } \section{Interpretation of types as reducibility candidates} \label{sec-interp} We aim to prove the termination of the union of two relations, $\ab$ and $\ar$,\hide{sharing some symbols, namely the abstraction and the application of $\l$-calculus,} on the set of well-typed terms (which depends on $\cR$ since $\ad$ includes $\a_\mc{R}$). As is well known, termination is not modular in general.\hide{even when the two relations share no symbols} \hide{\cite{toyama87ipl}} As a $\b$ step can generate an $\cR$ step, and vice versa, we cannot expect to prove the termination of ${\ab}\cup{\ar}$ from the termination of $\ab$ and $\ar$.\hide{, unless $\cR$ is restricted to some class (e.g. object-level first-order rewriting systems \cite{dougherty92ic,barthe98icalp}).} The termination of $\l\Pi/\cR$ cannot be reduced to the termination of the simply-typed $\l$-calculus either (as done for $\l\Pi$ alone in \cite{harper93jacm}) because of type-level rewriting rules like the ones defining {\tt El} in Example \ref{expl-list-poly}. Indeed, type-level rules enable the encoding of functional PTS like Girard's System F, whose termination cannot be reduced to the termination of the simply-typed $\l$-calculus \cite{cousineau07tlca}. So, following Girard \cite{girard88book}, to prove the termination of $\ab\cup\ar$, we build a model of our calculus by interpreting types into sets of terminating terms. To this end, we need to find an interpretation $\I~$ having the following properties: \begin{itemize} \item Because types are identified modulo conversion, we need $\I~$ to be invariant by reduction: if $T$ is typable and $T\a T'$, then we must have $\I{T}=\I{T'}$. 
\item As usual, to handle $\b$-reduction, we need a product type $\prod xAB$ to be interpreted by the set of terms $t$ such that, for all $a$ in the interpretation of $A$, $ta$ is in the interpretation of $B[x\to a]$, that is, we must have $\I{\prod xAB}=\product{a}{\I{A}}{\I{B[x\to a]}}$ where $\product{a}{P}{Q(a)}=\{t\mid\all a\in P,ta\in Q(a)\}$. \end{itemize} First, we define the interpretation of predicates (and $\type$) as the least fixpoint of a monotone function in a directed-complete (= chain-complete) partial order \cite{markowsky76au}. Second, we define the interpretation of kinds by induction on their size. \begin{definition}[Interpretation of types]\label{def-interp} Let $\bI=\cF_p(\bT,\cP(\bT))$ be the set of partial functions from $\bT$ to the powerset of $\bT$. It is directed-complete wrt inclusion, allowing us to define $\cI$ as the least fixpoint of the monotone function $F:\bI\a\bI$ such that, if $I\in\bI$, then: \begin{itemize} \item The domain of $F(I)$ is the set $D(I)$ of all the terminating terms $T$ such that, if $T$ reduces to some product term $\prod xAB$ (not necessarily in normal form), then $A\in\dom(I)$ and, for all $a\in I(A)$, $B[x\to a]\in\dom(I)$. \item If $T\in D(I)$ and the normal form\footnote{Because we assume local confluence, every terminating term $T$ has a unique normal form $\nf{T}$.} of $T$ is not a product, then $F(I)(T)=\SN$. \item If $T\in D(I)$ and $\nf{T}=\prod xAB$, then $F(I)(T)=\product{a}{I(A)}{I(B[x\to a])}$. \end{itemize} We now introduce $\cD=D(\cI)$ and define the interpretation of a term $T$ wrt a substitution $\s$, $\I{T}_\s$ (and simply $\I{T}$ if $\s$ is the identity), as follows: \begin{itemize} \item $\I{s}_\s=\cD$ if $s\in\bS$, \item $\I{\prod xAK}_\s=\product{a}{\I{A}_\s}{\I{K}_{[x\to a,\s]}}$ if $K\in\bK$ and $x\notin\dom(\s)$, \item $\I{T}_\s=\cI(T\s)$ if $T\notin\bK\cup\{\kind\}$ and $T\s\in\cD$, \item $\I{T}_\s=\SN$ otherwise.
\end{itemize} A substitution $\s$ is adequate wrt an environment $\G$, $\s\models\G$, if, for all $x:A\in\G$, $x\s\in\I{A}_\s$. A typing map $\T$ is adequate if, for all $f$, $f\in\I{\T_f}$ whenever $\th\T_f:s_f$ and $\T_f\in\I{s_f}$. Let $\bC$ be the set of terms of the form $f\vt$ such that $|\vt|=\arit(\T_f)$, $\th\T_f:s_f$, $\T_f\in\I{s_f}$ and, if $\G_f=\vx:\vA$ and $\s=[\vx\to\vt]$, then $\s\models\G_f$. (Informally, $\mb{C}$ is the set of terms obtained by fully applying some function symbol to computable arguments.) \end{definition} We can then prove that, for all terms $T$, $\I{T}$ satisfies Girard's conditions of reducibility candidates, called computability predicates here, adapted to rewriting by including in neutral terms every term of the form $f\vt$ when $f$ is applied to enough arguments wrt $\cR$ \cite{blanqui05mscs}: \begin{definition}[Computability predicates]\label{def-comp} A term is neutral if it is of the form ${(\abs xAt)u\vv}$, $x\vv$ or $f\vv$ with, for every rule $f\vl\a r\in\cR$, $|\vl|\le|\vv|$. Let $\bP$ be the set of all the sets of terms $S$ (computability predicates) such that (a) $S\sle\SN$, (b) $\red{S}\sle S$, and (c) $t\in S$ if $t$ is neutral and $\red{t}\sle S$. \end{definition} Note that neutral terms satisfy the following key property: if $t$ is neutral then, for all $u$, $tu$ is neutral and every reduct of $tu$ is either of the form $t'u$ with $t'$ a reduct of $t$, or of the form $tu'$ with $u'$ a reduct of $u$. \hide{${\red{tu}}={{\red{t}u}\cup{t\red{u}}}$} One can easily check that $\SN$ is a computability predicate. Note also that a computability predicate is never empty: it contains every neutral term in normal form. In particular, it contains every variable. We then get the following results (the proofs are given in Appendix \ref{annex-interp}): \begin{lemma}\label{lem-props} \begin{enumerate}[(a)] \item\label{lem-comp-pred-int} For all terms $T$ and substitutions $\s$, $\I{T}_\s\in\bP$. 
\item\label{lem-int-red} If $T$ is typable, $T\s\in\cD$ and $T\a T'$, then $\I{T}_\s=\I{T'}_\s$. \item\label{lem-int-red-subs} If $T$ is typable, $T\s\in\cD$ and $\s\a\s'$, then $\I{T}_\s=\I{T}_{\s'}$. \item\label{lem-int-prod} If $\prod xAB$ is typable and $\prod x{A\s}{B\s}\in\cD$\hide{ and $x\notin\dom(\s)\cup\FV(\s)$},\\ then $\I{\prod xAB}_\s=\product{a}{\I{A}_\s}{\I{B}_{[x\to a,\s]}}$. \item\label{lem-int-subs} If $\D\th U:s$, $\G\th\g:\D$ and $U\g\s\in\cD$, then $\I{U\g}_\s=\I{U}_{\g\s}$. \item\label{lem-comp-abs} Let $P\in\bP$ and, for all $a\in P$, let $Q(a)\in\bP$ be such that $Q(a')\sle Q(a)$ if $a\a a'$. Then, $\abs xAb\in\product{a}{P}{Q(a)}$ if $A\in\SN$ and, for all $a\in P$, $b[x\to a]\in Q(a)$. \end{enumerate} \end{lemma} We can finally prove that our model is adequate, that is, every term of type $T$ belongs to $\I{T}$, if the typing map $\T$ itself is adequate. This reduces the termination of well-typed terms to the computability of function symbols. \begin{lemma}[Adequacy]\label{lem-comp} If $\T$ is adequate, $\G\th t:T$ and $\s\models\G$, then $t\s\in\I{T}_\s$. \end{lemma} \begin{proof} First note that, if $\G\th t:T$, then either $T=\kind$ or $\G\th T:s$ \cite[Lemma 28]{blanqui01phd}. Moreover, if $\G\th a:A$, $A\ad B$ and $\G\th B:s$ (the premises of the (conv) rule), then $\G\th A:s$ \cite[Lemma 42]{blanqui01phd} (because $\a$ preserves typing). Hence, the relation $\th$ is unchanged if one adds the premise $\G\th A:s$ in (conv), giving the rule (conv'). Similarly, we add the premise $\G\th\prod xAB:s$ in (app), giving the rule (app'). We now prove the lemma by induction on $\G\th t:T$ using (app') and (conv'): \begin{description} \item[(ax)] It is immediate that $\type\in\I\kind_\s=\cD$. \item[(var)] By assumption on $\s$. \item[(weak)] If $\s\models\G,x:A$, then $\s\models\G$. So, the result follows by induction hypothesis. \item[(prod)] Is $(\prod xAB)\s$ in $\I{s}_\s=\cD$? Wlog we can assume $x\notin\dom(\s)\cup\FV(\s)$.
So, ${(\prod xAB)\s}={\prod x{A\s}{B\s}}$. By induction hypothesis, $A\s\in\I\type_\s=\cD$. Let now $a\in \cI(A\s)$ and $\s'=[x\to a,\s]$. Note that $\cI(A\s)=\I{A}_\s$. So, $\s'\models\G,x:A$ and, by induction hypothesis, $B\s'\in\I{s}_\s=\cD$. Since $x\notin\dom(\s)\cup\FV(\s)$, we have $B\s'=(B\s)[x\to a]$. Therefore, $(\prod xAB)\s\in\I{s}_\s$. \item[(abs)] Is $(\abs xAb)\s$ in $\I{\prod xAB}_\s$? Wlog we can assume that $x\notin\dom(\s)\cup\FV(\s)$. So, ${(\abs xAb)\s}={\abs x{A\s}{b\s}}$. By Lemma \ref{lem-props}\ref{lem-int-prod}, $\I{\prod xAB}_\s= \product{a}{\I{A}_\s}{\I{B}_{[x\to a,\s]}}$. By Lemma \ref{lem-props}\ref{lem-int-red-subs}, $\I{B}_{[x\to a,\s]}$ is an $\I{A}_\s$-indexed family of computability predicates such that $\I{B}_{[x\to a',\s]}=\I{B}_{[x\to a,\s]}$ whenever $a\a a'$. Hence, by Lemma \ref{lem-props}\ref{lem-comp-abs}, $\abs x{A\s}{b\s}\in\I{\prod xAB}_\s$ if $A\s\in\SN$ and, for all $a\in\I{A}_\s$, $(b\s)[x\to a]\in\I{B}_{\s'}$ where $\s'=[x\to a,\s]$. By induction hypothesis, $(\prod xAB)\s\in\I{s}_\s=\cD$. Since $x\notin\dom(\s)\cup\FV(\s)$, $(\prod xAB)\s=\prod x{A\s}{B\s}$ and $(b\s)[x\to a]=b\s'$. Since $\cD\sle\SN$, we have $A\s\in\SN$. Moreover, since $\s'\models\G,x:A$, we have $b\s'\in\I{B}_{\s'}$ by induction hypothesis. \item[(app')] Is $(ta)\s=(t\s)(a\s)$ in $\I{B[x\to a]}_\s$? By induction hypothesis, $t\s\in\I{\prod xAB}_\s$, $a\s\in\I{A}_\s$ and $(\prod xAB)\s\in\I{s}=\cD$. By Lemma \ref{lem-props}\ref{lem-int-prod}, $\I{\prod xAB}_\s=\product{\al}{\I{A}_\s}{\I{B}_{[x\to\al,\s]}}$. Hence, $(t\s)(a\s)\in\I{B}_{\s'}$ where $\s'=[x\to a\s,\s]$. Wlog we can assume $x\notin\dom(\s)\cup\FV(\s)$. So, $\s'=[x\to a]\s$. Hence, by Lemma \ref{lem-props}\ref{lem-int-subs}, $\I{B}_{\s'}=\I{B[x\to a]}_\s$. \item[(conv')] By induction hypothesis, $a\s\in\I{A}_\s$, $A\s\in\I{s}_\s=\cD$ and $B\s\in\I{s}_\s=\cD$. By Lemma \ref{lem-props}\ref{lem-int-red}, $\I{A}_\s=\I{B}_\s$. So, $a\s\in\I{B}_\s$. 
\item[(fun)] By induction hypothesis, $\T_f\in\I{s_f}_\s=\cD$. Therefore, $f\in\I{\T_f}_\s=\I{\T_f}$ since $\T$ is adequate.\qedhere \end{description} \end{proof} \hide{As usual, by taking the identity for $\s$ (variables are computable), we get that $\a$ terminates on terms typable in $\l\Pi/\cR$ if $\T$ is adequate and $\a$ is locally confluent and preserves typing.} \section{Dependency pairs theorem} \label{sec-dp-thm} \newcommand\thfl{\th_{\!\!\!f\vl}} \newcommand\thltf{\th_{\!\!\prec f}} \newcommand{\Rrightarrow}{\Rrightarrow} Now, we prove that the adequacy of $\T$ can be reduced to the absence of infinite sequences of dependency pairs, as shown by Arts and Giesl for first-order rewriting \cite{arts00tcs}. \begin{definition}[Dependency pairs] Let $f\vl$ > $g\vm$ iff there is a rule $f\vl \a r \in \cR$, $g$ is defined and $g\vm$ is a subterm of $r$ such that $\vm$ are all the arguments to which $g$ is applied. The relation $>$ is the set of dependency pairs. Let ${\call}={\a_\arg^*>_s}$ be the relation on the set $\bC$ (Def. \ref{def-interp}), where $f\vt\a_\arg f\vu$ iff $\vt\a_{prod}\vu$ (reduction in one argument), and $>_s$ is the closure by substitution and left-application of $>$: $f t_1\dots t_p~\call\,g u_1\dots u_q$ iff there are a dependency pair $f l_1\dots l_i>g m_1\dots m_j$ with $i\<p$ and $j\<q$ and a substitution $\s$ such that, for all $k\<i$, $t_k\a^*l_k\s$ and, for all $k\<j$, $m_k\s=u_k$. \end{definition} In our setting, we have to close $>_s$ by left-application because function symbols are curried. When a function symbol $f$ is not fully applied wrt $\arit(\T_f)$, the missing arguments must be considered as potentially being anything. 
Indeed, the following rewriting system does not terminate:
\begin{lstlisting}[mathescape=true]
app x y $\a$ x y
f x y $\a$ app (f x) y
\end{lstlisting}
Its dependency pairs are {\tt f x y > app (f x) y} and {\tt f x y > f x}, but there is no way to construct an infinite sequence of dependency pairs without adding an argument to the right-hand side of the second dependency pair. \begin{example}\label{expl-mult} The rules of Example \ref{expl-list-poly} have the following dependency pairs (the pairs whose left-hand side is headed by {\tt fil} or {\tt fil\_aux} can be found in Appendix \ref{annex-matrices}):
\begin{lstlisting}[escapeinside={*}{*}]
*\color{red} A:* El (arrow a b) > El a
*\color{red} B:* El (arrow a b) > El b
*\color{red} C:* (s p) + q > p + q
*\color{red} D:* app a _ (cons _ x p l) q m > p + q
*\color{red} E:* app a _ (cons _ x p l) q m > app a p l q m
*\color{red} F:* len_fil a f _ (cons _ x p l) > len_fil_aux (f x) a f p l
*\color{red} G:* len_fil a f _ (app _ p l q m) > (len_fil a f p l) + (len_fil a f q m)
*\color{red} H:* len_fil a f _ (app _ p l q m) > len_fil a f p l
*\color{red} I:* len_fil a f _ (app _ p l q m) > len_fil a f q m
*\color{red} J:* len_fil_aux true a f p l > len_fil a f p l
*\color{red} K:* len_fil_aux false a f p l > len_fil a f p l
\end{lstlisting}
\end{example} In \cite{arts00tcs}, a sequence of dependency pairs interleaved with $\a_\arg$ steps is called a chain. Arts and Giesl proved that, in a first-order term algebra, $\ar$ terminates if and only if there are no infinite chains, that is, if and only if $\call$ terminates. Moreover, in a first-order term algebra, $\call$ terminates if and only if, for all $f$ and $\vt$, $f\vt$ terminates wrt $\call$ whenever $\vt$ terminates wrt $\a$. In our framework, this last condition is similar to saying that $\T$ is adequate. We now introduce the class of systems to which we will extend Arts and Giesl's theorem.
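In the first-order setting, dependency-pair extraction is purely mechanical and can be sketched in a few lines of Python (a deliberately simplified model, not an actual implementation: terms are nested tuples, symbols are fully applied, and binders and types are ignored). Assuming the usual rules for {\tt +}, it recovers exactly pair {\tt C} of Example \ref{expl-mult}:

```python
# Simplified first-order dependency-pair extraction (Arts & Giesl).
# A term is a nested tuple: ("s", ("p",)) stands for `s p`, and
# 1-tuples like ("p",) stand for variables or constants.

def subterms(t):
    """Yield every subterm of t, including t itself."""
    yield t
    for arg in t[1:]:
        yield from subterms(arg)

def dependency_pairs(rules):
    """Pairs (lhs, u) where u is a subterm of a right-hand side
    headed by a defined symbol (a symbol heading some lhs)."""
    defined = {lhs[0] for lhs, _ in rules}
    return [(lhs, u)
            for lhs, rhs in rules
            for u in subterms(rhs)
            if u[0] in defined]

# The usual rules for +:  0 + q -> q   and   (s p) + q -> s (p + q).
plus_rules = [
    (("+", ("0",), ("q",)), ("q",)),
    (("+", ("s", ("p",)), ("q",)), ("s", ("+", ("p",), ("q",)))),
]

# Exactly one pair remains, namely C:  (s p) + q > p + q.
print(dependency_pairs(plus_rules))
```

In the dependently-typed setting of this paper, this naive collection must additionally be closed by substitution and left-application, as discussed above.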
\begin{definition}[Well-structured system]\label{def-wf-rule}\label{def-preced} Let $\succeq$ be the smallest quasi-order on $\bF$ such that $f\succeq g$ if $g$ occurs in $\T_f$ or if there is a rule $f\vl\a r\in\cR$ with $g$ (defined or undefined) occurring in $r$. Then, let ${\succ}={\succeq\moins\preceq}$ be the strict part of $\succeq$. A rewriting system $\cR$ is well-structured if: \begin{enumerate}[(a)] \item $\succ$ is well-founded; \item for every rule $f\vl\a r$, $|\vl|\le\arit(\T_f)$; \item for every dependency pair $f\vl>g\vm$, $|\vm|\leq \arit(\T_g)$; \item every rule $f\vl\a r$ is equipped with an environment $\D_{f\vl\a r}$ such that, if $\T_f={\prod\vx\vT U}$ and $\pi=[\vx\to\vl]$, then $\D_{f\vl\a r}\thfl r:U\pi$, where $\thfl$ is the restriction of $\th$ defined in Fig. \ref{fig-thf}. \end{enumerate} \end{definition} \begin{figure} \caption{Restricted type systems $\thfl$ and $\thltf$ } \label{fig-thf} \end{figure} Condition (a) is always satisfied when $\bF$ is finite. Condition (b) ensures that a term of the form $f\vt$ is neutral whenever $|\vt|=\arit(\T_f)$. Condition (c) ensures that $>$ is included in $\call$. The relation $\thfl\,$ corresponds to the notion of computability closure in \cite{blanqui05mscs}, with the ordering on function calls replaced by the dependency pair relation. It is similar to $\th$ except that it uses the variants of (conv) and (app) used in the proof of the adequacy lemma; (fun) is split into the rules (const) for undefined symbols and (dp) for dependency pairs whose left-hand side is $f\vl$; and every type occurring in an object term, as well as the type of every function symbol occurring in a term, is required to be typable using only symbols smaller than $f$. The environment $\D_{f\vl\a r}$ can be inferred by \tool{Dedukti} when one restricts rule left-hand sides to some well-behaved class of terms like algebraic terms or Miller patterns\hide{\cite{miller91jlc}} (in $\l$Prolog).
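The precedence $\succeq$ of Definition \ref{def-preced} is also easy to compute. The following Python sketch builds its reflexive-transitive closure; the {\tt calls} table below is our own hand-written approximation of a fragment of Example \ref{expl-list-poly} (reflexive entries are omitted, since the closure adds them):

```python
def preceq(calls):
    """Reflexive-transitive closure of `calls`, where calls[f] is the
    set of symbols occurring in the type of f or in a right-hand
    side of a rule defining f."""
    clos = {f: gs | {f} for f, gs in calls.items()}
    changed = True
    while changed:          # saturate until a fixpoint is reached
        changed = False
        for f in clos:
            new = set()
            for g in clos[f]:
                new |= clos.get(g, {g})
            if not new <= clos[f]:
                clos[f] |= new
                changed = True
    return clos

def strictly_greater(clos, f, g):
    """f > g in the precedence iff f >= g but not g >= f."""
    return g in clos[f] and f not in clos.get(g, {g})

# len_fil and len_fil_aux call each other, and len_fil also calls +.
calls = {
    "len_fil": {"len_fil_aux", "+"},
    "len_fil_aux": {"len_fil"},
    "+": set(),
}
clos = preceq(calls)
print(strictly_greater(clos, "len_fil", "+"))            # len_fil strictly above +
print(strictly_greater(clos, "len_fil", "len_fil_aux"))  # equivalent, so not strict
```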
One can check that Example \ref{expl-list-poly} is well-structured (the proof is given in Appendix \ref{annex-matrices}). Finally, we need matching to be compatible with computability, that is, if $f\vl\a r\in\cR$ and $\vl\s$ are computable, then $\s$ is computable, a condition called accessibility in \cite{blanqui05mscs}: \begin{definition}[Accessible system]\label{def-valid-rule} A well-structured system $\cR$ is accessible if, for all substitutions $\s$ and rules $f\vl\a r$ with $\T_f=\prod\vx\vT U$ and $|\vx|=|\vl|$, we have $\s\models\D_{f\vl\a r}$ whenever $\th\T_f:s_f$, $\T_f\in\I{s_f}$ and $[\vx\mapsto\vl]\s\models\vx:\vT$. \end{definition} This property is not always satisfied because the subterm relation does not preserve computability in general. Indeed, if $C$ is an undefined type constant, then $\I{C}=\SN$. However, $\I{C\A C}\neq\SN$ since $\w=\abs xC{xx}\in\SN$ and $\w\w\notin\SN$. Hence, if $c$ is an undefined function symbol of type $\T_c=(C\A C)\A C$, then $c\,\w\in\I{C}$ but $\w\notin\I{C\A C}$. We can now state the main lemma: \begin{lemma}\label{lem-comp-fun} $\T$ is adequate if $\call$ terminates and $\cR$ is well-structured and accessible. \end{lemma} \begin{proof} Since $\cR$ is well-structured, $\succ$ is well-founded by condition (a). We prove that, for all $f\in\bF$, $f\in\I{\T_f}$, by induction on $\succ$. So, let $f\in\bF$ with $\T_f=\all\G_f,U$ and $\G_f=x_1:T_1,\ldots,x_n:T_n$. By induction hypothesis, we have that, for all $g\prec f$, $g\in\I{\T_g}$. Since $\a_\arg$ and $\call$ terminate on $\bC$ and ${\a_\arg\call}\sle{\call}$, we have that ${\a_\arg}\cup{\call}$ terminates. We now prove that, for all $f\vt\in\bC$, we have $f\vt\in\I{U}_\t$ where $\t=[\vx\to\vt]$, by a second induction on ${\a_\arg}\cup{\call}$. By condition (b), $f\vt$ is neutral. Hence, by definition of computability, it suffices to prove that, for all $u\in{\red{f\vt}}$, $u\in\I{U}_\t$. There are 2 cases: \begin{itemize} \item $u=f\vv$ with $\vt\a_{prod}\vv$. 
Then, we can conclude by the first induction hypothesis. \item There are $fl_1\ldots l_k\a r\in\cR$ and $\s$ such that $u=(r\s) t_{k+1}\ldots t_n$ and, for all $i\in\{1,\ldots,k\}$, $t_i=l_i\s$. Since $f\vt\in\bC$, we have $\pi\s\models\G_f$. Since $\cR$ is accessible, we get that $\s\models\D_{f\vl\a r}$. By condition (d), we have $\D_{f\vl\a r}\thfl r:V\pi$ where $V=\prod{x_{k+1}}{T_{k+1}}\ldots\prod{x_n}{T_n}U$. Now, we prove that, for all $\G$, $t$ and $T$, if $\G\thfl t:T$ ($\G\thltf t:T$ resp.) and $\s\models\G$, then $t\s\in\I{T}_\s$, by a third induction on the structure of the derivation of $\G\thfl t:T$ ($\G\thltf t:T$ resp.), as in the proof of Lemma \ref{lem-comp} except for (fun) replaced by (fun$_{\prec f}$) in one case, and (const) and (dp) in the other case. \begin{description} \item[(fun$_{\prec f}$)] We have $g\in\I{\T_g}$ by the first induction hypothesis on $g$. \item[(const)] Since $g$ is undefined, it is neutral and normal. Therefore, it belongs to every computability predicate and in particular to $\I{\T_g}_\s$. \item[(dp)] By the third induction hypothesis, $y_i\g\s\in\I{U_i\g}_\s$. By Lemma \ref{lem-props}\ref{lem-int-subs}, $\I{U_i\g}_\s=\I{U_i}_{\g\s}$. So, $\g\s\models\S$ and $g\vy\g\s\in\bC$. Now, by condition (c), $g\vy\g\s\tilde{<}f\vl\s$ since $g\vy\g<f\vl$. Therefore, by the second induction hypothesis, $g\vy\g\s\in\I{V\g}_\s$. \end{description} So, $r\s\in\I{V\pi}_\s$ and, by Lemma \ref{lem-props}\ref{lem-int-prod}, $u\in\I{U}_{[x_n\to t_n,..,x_{k+1}\to t_{k+1},\pi\s]}=\I{U}_\t$.\qedhere \end{itemize} \end{proof} Note that the proof still works if one replaces the relation $\succeq$ of Definition \ref{def-preced} by any well-founded quasi-order such that $f\succeq g$ whenever $f\vl>g\vm$. The quasi-order of Definition \ref{def-preced}, defined syntactically, relieves the user of the burden of providing one and is sufficient in every practical case met by the authors. 
However it is possible to construct ad-hoc systems which require a quasi-order richer than the one presented here. By combining the previous lemma and the Adequacy lemma (the identity substitution is computable), we get the main result of the paper: \begin{theorem}\label{thm-dp} The relation ${\a}={{\ab}\cup{\ar}}$ terminates on terms typable in $\l\Pi/\cR$ if $\a$ is locally confluent and preserves typing, $\cR$ is well-structured and accessible, and $\call$ terminates. \end{theorem} For the sake of completeness, we are now going to give sufficient conditions for accessibility and termination of $\call$ to hold, but one could imagine many other criteria. \section{Checking accessibility} \label{sec-acc} In this section, we give a simple condition to ensure accessibility and some hints on how to modify the interpretation when this condition is not satisfied. As seen with the definition of accessibility, the main problem is to deal with subterms of higher-order type. A simple condition is to require higher-order variables to be direct subterms of the left-hand side, a condition called plain function-passing (PFP) in \cite{kusakari07aaecc}, and satisfied by Example \ref{expl-list-poly}. \begin{definition}[PFP systems]\label{def-pfp-rule} A well-structured $\cR$ is PFP if, for all $f\vl\a r\in\cR$ with $\T_f=\prod\vx\vT U$ and $|\vx|=|\vl|$, $\vl\notin\bK\cup\{\kind\}$ and, for all $y:T\in\D_{f\vl\a r}$, there is $i$ such that $y=l_i$ and $T=T_i[\vx\to\vl]$, or else $y\in\FV(l_i)$ and $T=D\vt$ with $D$ undefined and $|\vt|=\arit(D)$. \end{definition} \begin{lemma}\label{lem-pfp-rule} PFP systems are accessible. \end{lemma} \begin{proof} Let $f\vl\a r$ be a PFP rule with $\T_f=\all\G,U$, $\G=\vx:\vT$, $\pi={[\vx\to\vl]}$. Following Definition \ref{def-valid-rule}, assume that $\th\T_f:s_f$, $\T_f\in\cD$ and $\pi\s\models\G$. We have to prove that, for all $(y:T)\in\D_{f\vl\a r}$, $y\s\in\I{T}_\s$. \begin{itemize} \item Suppose $y=l_i$ and $T=T_i\pi$. 
Then, $y\s=l_i\s\in\I{T_i}_{\pi\s}$. Since $\th\T_f:s_f$, $T_i\notin\bK\cup\{\kind\}$. Since $\T_f\in\cD$ and $\pi\s\models\G$, we have $T_i\pi\s\in\cD$. So, $\I{T_i}_{\pi\s}=\cI(T_i\pi\s)$. Since $T_i\notin\bK\cup\{\kind\}$ and $\vl\notin\bK\cup\{\kind\}$, $T_i\pi\notin\bK\cup\{\kind\}$. Since $T_i\pi\s\in\cD$, $\I{T_i\pi}_\s=\cI(T_i\pi\s)$. Thus, $y\s\in\I{T}_\s$. \item Suppose $y\in\FV(l_i)$ and $T$ is of the form $D\vt$ with $|\vt|=\arit(D)$. Then, $\I{T}_\s=\SN$ and $y\s\in\SN$ since $l_i\s\in\I{T_i}_\s\sle\SN$.\qedhere \end{itemize} \end{proof} But many accessible systems are not PFP. They can be proved accessible by changing the interpretation of type constants (a complete development is left for future work). \begin{example}[Recursor on Brouwer ordinals]\label{expl-ord}
\begin{lstlisting}[mathescape=true]
symbol Ord: TYPE
symbol zero: Ord
symbol suc: Ord$\A$Ord
symbol lim: (Nat$\A$Ord)$\A$Ord
symbol ordrec: A$\A$(Ord$\A$A$\A$A)$\A$((Nat$\A$Ord)$\A$(Nat$\A$A)$\A$A)$\A$Ord$\A$A
rule ordrec u v w zero ${\color{darkorange}\rightarrow}$ u
rule ordrec u v w (suc x) ${\color{darkorange}\rightarrow}$ v x (ordrec u v w x)
rule ordrec u v w (lim f) ${\color{darkorange}\rightarrow}$ w f ($\lambda$n,ordrec u v w (f n))
\end{lstlisting}
\end{example} The above example is not PFP because {\tt f:Nat$\A$Ord} is not an argument of {\tt ordrec}. Yet, it is accessible if one takes for $\I{\tt Ord}$ the least fixpoint of the monotone function $F(S)=\{{t\in\SN\mid} \text{if }t\a^*{\tt lim}\,f\text{ then } f\in\I{\tt Nat}\A S\text{, and if } t\a^*{\tt suc}\,u\text{ then } u\in S\}$ \cite{blanqui05mscs}.
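To make the recursion scheme of {\tt ordrec} concrete, here is a minimal Python rendering (an illustration only: ordinals are tagged tuples, and {\tt lim} carries a Python function in place of a term of type {\tt Nat$\A$Ord}; the fixpoint interpretation above is precisely what justifies this recursion):

```python
# The three ordrec rules of the Brouwer-ordinal example, executably.
zero = ("zero",)
def suc(x): return ("suc", x)
def lim(f): return ("lim", f)   # f : int -> ordinal

def ordrec(u, v, w, o):
    tag = o[0]
    if tag == "zero":   # ordrec u v w zero     -> u
        return u
    if tag == "suc":    # ordrec u v w (suc x)  -> v x (ordrec u v w x)
        return v(o[1], ordrec(u, v, w, o[1]))
    if tag == "lim":    # ordrec u v w (lim f)  -> w f (fun n -> ordrec u v w (f n))
        f = o[1]
        return w(f, lambda n: ordrec(u, v, w, f(n)))

def nat_ord(n):
    """The finite ordinal n, built with n sucs."""
    return zero if n == 0 else suc(nat_ord(n - 1))

# A "depth" measure: count sucs, and peek at branch 2 under a lim.
depth = lambda o: ordrec(0, lambda x, r: r + 1, lambda f, g: g(2) + 1, o)
print(depth(suc(suc(suc(zero)))))  # 3
print(depth(lim(nat_ord)))         # g(2) recurses on f 2, so also 3
```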
Similarly, the following encoding of the simply-typed $\l$-calculus is not PFP but can be proved accessible by taking \[\I{{\tt T}~ c}=\text{if }\nf{c}={\tt arrow}\,a\,b\text{ then }\{t\in\SN\mid \text{if }t\a^*{\tt lam} f\text{ then }f\in\I{{\tt T}~a}\A\I{{\tt T}~b}\}\text{ else }\SN\] \begin{example}[Simply-typed $\l$-calculus]\label{expl-lambda} \begin{lstlisting}[mathescape=true] symbol Sort : TYPE symbol arrow : Sort $\A$ Sort $\A$ Sort symbol T : Sort $\A$ TYPE symbol lam : $\forall$ a b, (T a $\A$ T b) $\A$ T (arrow a b) symbol app : $\forall$ a b, T (arrow a b) $\A$ T a $\A$ T b rule app a b (lam _ _ f) x ${\color{darkorange}\rightarrow}$ f x \end{lstlisting} \end{example} \hide{ Note here, that erasing dependencies in types, would lead to an encoding of \emph{pure} $\lambda$-calculus, which is well-known to be non-normalizing. } \section{Size-change termination} \label{sec-sct} In this section, we give a sufficient condition for $\call$ to terminate. For first-order rewriting, many techniques have been developed for that purpose. To cite just a few, see for instance \cite{hirokawa07ic,giesl06jar}.\hide{thiemann07phd} Many of them can probably be extended to $\l\Pi/\cR$, either because the structure of terms in which they are expressed can be abstracted away, or because they can be extended to deal also with variable applications, $\l$-abstractions and $\b$-reductions. \hide{\cite{suzuki11pro,kop12phd}} As an example, following Wahlstedt \cite{wahlstedt07phd}, we are going to use Lee, Jones and Ben-Amram's size-change termination criterion (SCT) \cite{lee01popl}. It consists in following arguments along function calls and checking that, in every potential loop, one of them decreases. First introduced for first-order functional languages, it has then been extended to many other settings: untyped $\l$-calculus \cite{jones04rta}, a subset of \tool{OCaml} \cite{sereni05aplas}, Martin-L\"of's type theory \cite{wahlstedt07phd}, System F \cite{lepigre17draft}. 
We first recall Hyvernat and Raffalli's matrix-based presentation of SCT \cite{hyvernat10wst}: \newcommand{\arity}[1]{\arit(\T_#1)} \begin{definition}[Size-change termination]\label{def-sct} Let $\tgt$ be the smallest transitive relation such that $ft_1\dots t_n\tgt t_i$ when $f\in\bF$. The call graph $\cG(\cR)$ associated to $\cR$ is the directed labeled graph on the defined symbols of $\bF$ such that there is an edge between $f$ and $g$ iff there is a dependency pair $fl_1\dots l_p>gm_1\dots m_q$. This edge is labeled with the matrix $(a_{i,j})_{i\leq\arity{f},j\leq\arity{g}}$ where: \begin{itemize} \item if $l_i\tgt m_j$, then $a_{i,j}=-1$; \item if $l_i=m_j$, then $a_{i,j}=0$; \item otherwise $a_{i,j}=\infty$ (in particular if $i>p$ or $j>q$). \end{itemize} $\cR$ is size-change terminating (SCT) if, in the transitive closure of $\cG(\cR)$ (using the min-plus semi-ring to multiply the matrices labeling the edges), all idempotent matrices labeling a loop have some $-1$ on the diagonal. \end{definition} We add lines and columns of $\infty$'s in matrices associated to dependency pairs containing partially applied symbols (cases $i>p$ or $j>q$) because missing arguments cannot be compared with any other argument since they are arbitrary. The matrix associated to the dependency pair {\tt C: (s p) + q > p + q} and the call graph associated to the dependency pairs of Example \ref{expl-mult} are depicted in Figure \ref{fig-callgraph}. The full list of matrices and the extensive call graph of Example \ref{expl-list-poly} can be found in Appendix \ref{annex-matrices}. \begin{figure} \caption{Matrix of dependency pair {\tt C} and call graph of the dependency pairs of Example \ref{expl-mult}} \label{fig-callgraph} \end{figure} \begin{lemma}\label{lem-sct} $\call$ terminates if $\bF$ is finite and $\cR$ is SCT. 
\end{lemma} \begin{proof} Suppose that there is an infinite sequence $\chi=f_1\vt_1\call f_2\vt_2\call\dots$ Then, there is an infinite path in the call graph going through nodes labeled by $f_1,f_2,\dots$ Since $\bF$ is finite, there is a symbol $g$ occurring infinitely often in this path. So, there is an infinite sequence $g\vu_1,g\vu_2,\dots$ extracted from $\chi$. Hence, for every $i,j\in\bN^*$, there is a matrix in the transitive closure of the graph which labels the loops of $g$ corresponding to the relation between $\vu_i$ and $\vu_{i+j}$. By Ramsey's theorem\hide{\cite{ramsey30plms}}, there is an infinite sequence $(\phi_i)$ and a matrix $M$ such that $M$ corresponds to all the transitions $g\vu_{\phi_i},g\vu_{\phi_j}$ with $i\neq j$. $M$ is idempotent: indeed, $g\vu_{\phi_i},g\vu_{\phi_{i+2}}$ is labeled by $M^2$ by definition of the transitive closure and by $M$ due to Ramsey's theorem, so $M=M^2$. Since, by hypothesis, $\cR$ satisfies SCT, there is $j$ such that $M_{j,j}$ is $-1$. So, for all $i$, $u^{(j)}_{\phi_{i}}(\a^*\tgt)^+ u^{(j)}_{\phi_{i+1}}$. Since ${\tgt\!\a}\subseteq{\a\!\tgt}$ and $\a_{\arg}$ is well-founded on $\bC$, the existence of such an infinite sequence contradicts the fact that $\tgt$ is well-founded. \end{proof} By combining all the previous results, we get: \begin{theorem}\label{thm-sn} The relation ${\a}={{\ab}\cup{\ar}}$ terminates on terms typable in $\l\Pi/\cR$ if $\a$ is locally confluent and preserves typing, $\bF$ is finite and $\cR$ is well-structured, plain function-passing and size-change terminating. \end{theorem} The rewriting system of Example \ref{expl-list-poly} verifies all these conditions (proof in the appendix). \section{Implementation and comparison with other criteria and tools} \label{sec-comp} We implemented our criterion in a tool called \tool{SizeChangeTool} \cite{sizechangetool}. As far as we know, there is no other termination checker for $\l\Pi/\cR$.
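The matrix computations underlying Definition \ref{def-sct} — the min-plus product, the saturation of the call graph, and the idempotent-loop test — form the core of such a checker. A minimal Python sketch (a simplified model, not \tool{SizeChangeTool} itself: the call graph is given directly as labeled edges, and the extraction of matrices from terms is not modeled):

```python
INF = float("inf")

def mul(A, B):
    """Min-plus product of call matrices, entries in {-1, 0, INF},
    capped at -1 (two composed decreases still count as -1)."""
    return [[max(-1, min(A[i][k] + B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

def sct(edges):
    """edges: set of (f, g, M) with M a tuple-of-tuples matrix.
    True iff every idempotent matrix labeling a loop in the
    transitive closure has some -1 on its diagonal."""
    closure = set(edges)
    while True:
        new = {(f, h, tuple(map(tuple, mul(M, N))))
               for (f, g, M) in closure
               for (g2, h, N) in closure if g == g2}
        if new <= closure:
            return all(any(M[i][i] == -1 for i in range(len(M)))
                       for (f, g, M) in closure
                       if f == g and tuple(map(tuple, mul(M, M))) == M)
        closure |= new

# Dependency pair C:  (s p) + q > p + q
C = ((-1, INF),
     (INF, 0))
D = ((0, INF),
     (INF, 0))                # no strictly decreasing argument
print(sct({("+", "+", C)}))  # True:  C is idempotent with -1 at (0,0)
print(sct({("+", "+", D)}))  # False: idempotent loop without any -1
```

The same computation applies to the full call graph of Example \ref{expl-list-poly} given in Appendix \ref{annex-matrices}.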
If we restrict ourselves to simply-typed rewriting systems, then we can compare it with the termination checkers participating in the category ``higher-order rewriting union beta'' of the \href{http://termination-portal.org/wiki/Termination\_Competition}{termination competition}\hide{\cite{tc}}: \href{http://wandahot.sourceforge.net/}{\tool{Wanda}}\hide{\cite{wanda}} uses dependency pairs, polynomial interpretations, HORPO and many transformation techniques \cite{kop12phd}; \tool{SOL} uses the General Schema \cite{blanqui16tcs} and other techniques\hide{; \tool{THOR} uses some semantic version of HORPO \cite{blanqui07jacm}}. As these tools implement various techniques and \tool{SizeChangeTool} only one, it is difficult to compete with them. Still, there are examples that are solved by \tool{SizeChangeTool} and not by one of the other tools, demonstrating that these tools would benefit from implementing our new technique. For instance, the problem {\tt Hamana\_Kikuchi\_18/h17}\hide{\cite{tpdb}} is proved terminating by \tool{SizeChangeTool} but not by \tool{Wanda} because of the rule:
\begin{lstlisting}[mathescape=true]
rule map f (map g l) $\ra$ map (comp f g) l
\end{lstlisting}
\noindent And the problem {\tt Kop13/kop12thesis\_ex7.23} is proved terminating by \tool{SizeChangeTool} but not by \tool{SOL} because of the rules:\footnote{We renamed the function symbols for the sake of readability.}
\begin{lstlisting}[mathescape=true]
rule f h x (s y) $\ra$ g (c x (h y)) y
rule g x y $\ra$ f ($\lambda$_,s 0) x y
\end{lstlisting}
One could also imagine translating a termination problem in $\l\Pi/\cR$ into a simply-typed termination problem. Indeed, the termination of $\l\Pi$ alone (without rewriting) can be reduced to the termination of the simply-typed $\l$-calculus \cite{harper93jacm}. This has been extended to $\l\Pi/\cR$ when there are no type-level rewrite rules like the ones defining {\tt El} in Example \ref{expl-list-poly} \cite{jouannaud15tlca}.
However, this translation does not preserve termination, as shown by Example \ref{expl-lambda}, which is not terminating if all the types $\bT x$ are mapped to the same type constant. In \cite{roux11rta}, Roux also uses dependency pairs for the termination of simply-typed higher-order rewriting systems, as well as a restricted form of dependent types where a type constant $C$ is annotated by a pattern $l$ representing the set of terms matching $l$. This extends to patterns the notion of indexed or sized types \cite{hughes96popl}. Then, for proving the absence of infinite chains, he uses simple projections \cite{hirokawa07ic}, which can be seen as a particular case of SCT where strictly decreasing arguments are fixed (SCT can also handle permutations of arguments). Finally, if we restrict ourselves to orthogonal systems, it is also possible to compare our technique to the ones implemented in the proof assistants \tool{Coq} and \tool{Agda}. \tool{Coq} essentially implements a higher-order version of primitive recursion. \tool{Agda}, on the other hand, uses SCT. Because Example \ref{expl-list-poly} uses matching on defined symbols, it is not orthogonal and can be written neither in \tool{Coq} nor in \tool{Agda}. \tool{Agda} recently added the possibility of adding rewrite rules, but this feature is highly experimental and comes with no guarantee. In particular, \tool{Agda}'s termination checker does not handle rewrite rules. \tool{Coq} can handle neither inductive-recursive definitions \cite{dybjer00jsl} nor function definitions with permuted arguments in function calls, while these pose no problem for \tool{Agda} or for our criterion. \section{Conclusion and future work} \label{sec-conclu} We proved a general modularity result extending Arts and Giesl's theorem that a rewriting relation terminates if there are no infinite sequences of dependency pairs \cite{arts00tcs} from first-order rewriting to dependently-typed higher-order rewriting.
Then, following \cite{wahlstedt07phd}, we showed how to use Lee, Jones and Ben-Amram's size-change termination criterion to prove the absence of such infinite sequences \cite{lee01popl}. This extends Wahlstedt's work \cite{wahlstedt07phd} from weak to strong normalization, and from orthogonal to locally confluent rewriting systems. This extends the first author's work \cite{blanqui05mscs} from orthogonal to locally confluent systems, and from systems having a decreasing argument in each recursive call to systems with non-increasing arguments in recursive calls. Finally, this also extends previous works on static dependency pairs \cite{kusakari07aaecc}\hide{suzuki11pro}\hide{kop11rta} from simply-typed $\l$-calculus to dependent types modulo rewriting. To get this result, we assumed local confluence. However, one often uses termination to check (local) confluence. Fortunately, there are confluence criteria not based on termination. The most famous one is (weak) orthogonality, that is, when the system is left-linear and has no critical pairs (or only trivial ones) \cite{oostrom94phd}, as it is the case in functional programming languages. A more general one is when critical pairs are ``development-closed'' \cite{oostrom97tcs}. This work can be extended in various directions. First, our tool is currently limited to PFP rules, that is, to rules where higher-order variables are direct subterms of the left-hand side. To have higher-order variables in deeper subterms like in Example \ref{expl-ord}, we need to define a more complex interpretation of types, following \cite{blanqui05mscs}. Second, to handle recursive calls in such systems, we also need to use an ordering more complex than the subterm ordering when computing the matrices labeling the SCT call graph. The ordering needed for handling Example \ref{expl-ord} is the ``structural ordering'' of \tool{Coq} and \tool{Agda} \cite{coquand92types,blanqui16tcs}. 
Relations other than subterm have already been considered in SCT, but in a first-order setting only \cite{thiemann05aaecc}. We may want to go even further, because the structural ordering is not enough to handle the following system, which is not accepted by \tool{Agda}:

\begin{example}[Division]\label{expl-div}
$m${\tt/}$n$ computes $\lceil\frac{m}{n}\rceil$.
\begin{lstlisting}[mathescape=true]
symbol minus: Nat$\A$Nat$\A$Nat
set infix 1 "-" $\coloneqq$ minus
rule 0 - n $\ra$ 0
rule m - 0 $\ra$ m
rule (s m) - (s n) $\ra$ m - n

symbol div: Nat$\A$Nat$\A$Nat
set infix 1 "/" $\coloneqq$ div
rule 0 / (s n) $\ra$ 0
rule (s m) / (s n) $\ra$ s ((m - n) / (s n))
\end{lstlisting}
\end{example}

\hide{\footnote{For the sake of simplicity, we do not use the lists annotated with type and lengths of Example \ref{expl-list-poly}.}}

\hide{
\begin{example}[Map function on Rose trees]\label{expl-div}
\begin{lstlisting}[mathescape=true]
symbol tree : TYPE
symbol list_tree : TYPE
symbol node : list_tree $\A$ tree
symbol nil : list_tree
symbol cons : tree $\A$ list_tree $\A$ list_tree
symbol map_list : (tree $\A$ tree) $\A$ list_tree $\A$ list_tree
map_list f nil $\ra$ nil
map_list f (cons x l) $\ra$ cons (f x) (map_list f l)
symbol map : (tree $\A$ tree) $\A$ tree $\A$ tree
map f (node(cons x l)) $\ra$ node(cons (map f x) (map_list (map f) l))
\end{lstlisting}
\end{example}
}

\hide{This example is very different from the one with div and minus, since this time there is a partially applied recursive call, so it is not enough to modify SCT: we would even have to modify our very definition of dependency pairs to handle it. I am not sure we want to discuss this here.}

A solution to handle this system is to use argument filterings (remove the second argument of {\tt -}) or simple projections \cite{hirokawa07ic}.
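As a quick sanity check of the intended semantics of Example \ref{expl-div}, the rules for {\tt -} and {\tt /} can be transliterated into a small recursive program. This is an illustrative sketch of ours, not part of the formal development: naturals are machine integers and {\tt s} is $+1$.

```python
# Illustrative transliteration of the rules of Example "Division" (ours):
# naturals are non-negative ints, the constructor `s` is +1.

def minus(m: int, n: int) -> int:
    # rule 0 - n -> 0 ; rule m - 0 -> m ; rule (s m) - (s n) -> m - n
    if m == 0:
        return 0
    if n == 0:
        return m
    return minus(m - 1, n - 1)  # truncated subtraction

def div(m: int, n: int) -> int:
    # rule 0 / (s n) -> 0 ; rule (s m) / (s n) -> s ((m - n) / (s n))
    assert n >= 1, "the rules leave m / 0 undefined"
    if m == 0:
        return 0
    # here m = s m' with m' = m - 1, and the rule subtracts n - 1 from m'
    return 1 + div(minus(m - 1, n - 1), n)

# div(m, n) computes the ceiling of m / n, e.g. div(5, 2) == 3
```

Note that the recursive call passes {\tt m - n}, which is not a subterm of the left-hand side argument {\tt s m}; this is why a purely subterm-based comparison cannot justify the decrease, whereas a size-based one can.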
Another solution is to extend the type system with size annotations, as in \tool{Agda}, and to compute the SCT matrices by comparing the sizes of terms instead of their structure \cite{abel10par,blanqui18jfp}. In our example, the size of {\tt m - n} is smaller than or equal to the size of {\tt m}. One can deduce this by using user annotations as in \tool{Agda}, or by using heuristics \cite{chin01hosc}.

Another interesting extension would be to handle function calls with locally size-increasing arguments, as in the following example:
\begin{lstlisting}[mathescape=true]
rule f x $\ra$ g (s x)
rule g (s (s x)) $\ra$ f x
\end{lstlisting}
where the number of {\tt s}'s strictly decreases between two calls to {\tt f}, although the first rule makes the number of {\tt s}'s increase. Hyvernat enriched SCT to handle such systems \cite{hyvernat14lmcs}.

{\bf Acknowledgments.} The authors thank the anonymous referees for their comments, which have improved the quality of this article.

\renewcommand{\em}{}

\begin{thebibliography}{10}

\bibitem{abel10par}
A.~Abel.
\newblock \href{http://doi.org/10.4204/EPTCS.43.2}{{MiniAgda}: integrating sized and dependent types}.
\newblock PAR'10.\hide{EPTCS 43.}

\bibitem{arts00tcs}
T.~Arts, J.~Giesl.
\newblock \href{http://doi.org/10.1016/S0304-3975(99)00207-8}{Termination of term rewriting using dependency pairs}.
\newblock {\em TCS} 236:133--178, 2000.

\bibitem{barendregt92chapter}
H.~Barendregt.
\newblock Lambda calculi with types.
\newblock In S.~Abramsky, D.~M. Gabbay, T.~S.~E. Maibaum, editors, {\em Handbook of logic in computer science. {V}olume 2. {B}ackground: computational structures}, p. 117--309. Oxford University Press, 1992.

\hide{
\bibitem{blanqui00rta}
F.~Blanqui.
\newblock \href{http://doi.org/10.1007/10721975_4}{Termination and confluence of higher-order rewrite systems}.
\newblock RTA'00.\hide{LNCS 1833.}
}

\bibitem{blanqui01phd}
F.~Blanqui.
\newblock {\em \href{http://tel.archives-ouvertes.fr/tel-00105522}{Th\'eorie des types et r\'ecriture}}. \newblock PhD thesis, Universit\'e Paris-Sud, France, 2001. \bibitem{blanqui05mscs} F.~Blanqui. \newblock \href{http://doi.org/10.1017/S0960129504004426}{Definitions by rewriting in the calculus of constructions}. \newblock {\em MSCS} 15(1):37--92, 2005. \bibitem{blanqui16tcs} F.~Blanqui. \newblock \href{http://doi.org/10.1016/j.tcs.2015.07.045}{Termination of rewrite relations on $\lambda$-terms based on {Girard}'s notion of reducibility}. \newblock {\em TCS} 611:50--86, 2016. \bibitem{blanqui18jfp} F.~Blanqui. \newblock \href{http://doi.org/10.1017/S0956796818000072}{Size-based termination of higher-order rewriting}. \newblock {\em JFP} 28(e11), 2018. \newblock 75 pages. \bibitem{chin01hosc} W.~N. Chin, S.~C. Khoo. \newblock \href{http://doi.org/10.1023/A:1012996816178}{Calculating sized types}. \newblock {\em Higher-Order and Symbolic Computation}, 14(2-3):261--300, 2001. \bibitem{coquand92types} T.~Coquand. \newblock \href{http://www.lfcs.inf.ed.ac.uk/research/types-bra/proc/proc92.ps.gz}{Pattern matching with dependent types}. \newblock TYPES'92. \bibitem{cousineau07tlca} D.~Cousineau, G.~Dowek. \newblock \href{http://doi.org/10.1007/978-3-540-73228-0_9}{Embedding pure type systems in the $\l\Pi$-calculus modulo}. \newblock TLCA'07.\hide{LNCS 4583.} \hide{ \bibitem{dedukti} Dedukti. \newblock \url{https://deducteam.github.io/}, 2018. } \bibitem{dybjer00jsl} P.~Dybjer. \newblock \href{http://www.jstor.org/stable/2586554}{A general formulation of simultaneous inductive-recursive definitions in type theory}. \newblock {\em JSL} 65(2):525--549, 2000. \bibitem{sizechangetool} G.~Genestier. \newblock {SizeChangeTool}. \newblock \url{https://github.com/Deducteam/SizeChangeTool}, 2018. \bibitem{giesl04lpar} J.~Giesl, R.~Thiemann, P.~Schneider-Kamp. 
\newblock \href{http://doi.org/10.1007/978-3-540-32275-7_21}{The dependency pair framework: combining techniques for automated termination proofs}.
\newblock LPAR'04.\hide{LNCS 3452.}

\bibitem{giesl06jar}
J.~Giesl, R.~Thiemann, P.~Schneider-Kamp, S.~Falke.
\newblock \href{http://doi.org/10.1007/s10817-006-9057-7}{Mechanizing and improving dependency pairs}.
\newblock {\em JAR} 37(3):155--203, 2006.

\bibitem{girard88book}
J.-Y. Girard, Y.~Lafont, P.~Taylor.
\newblock {\em \href{http://www.paultaylor.eu/stable/prot.pdf}{Proofs and types}}.
\newblock Cambridge University Press, 1988.

\bibitem{harper93jacm}
R.~Harper, F.~Honsell, G.~Plotkin.
\newblock \href{http://doi.org/10.1145/138027.138060}{A framework for defining logics}.
\newblock {\em JACM} 40(1):143--184, 1993.

\hide{
\bibitem{hirokawa05ic}
N.~Hirokawa, A.~Middeldorp.
\newblock \href{http://doi.org/10.1016/j.ic.2004.10.004}{Automating the dependency pair method}.
\newblock {\em IC} 199(1-2):172--199, 2005.
}

\bibitem{hirokawa07ic}
N.~Hirokawa, A.~Middeldorp.
\newblock \href{http://doi.org/10.1016/j.ic.2006.08.010}{Tyrolean {Termination} {Tool}: techniques and features}.
\newblock {\em IC} 205(4):474--511, 2007.

\bibitem{hughes96popl}
J.~Hughes, L.~Pareto, A.~Sabry.
\newblock \href{http://doi.org/10.1145/237721.240882}{Proving the correctness of reactive systems using sized types}.
\newblock POPL'96.

\bibitem{hyvernat14lmcs}
P.~Hyvernat.
\newblock \href{http://doi.org/10.2168/LMCS-10(1:11)2014}{The size-change termination principle for constructor based languages}.
\newblock {\em LMCS} 10(1):1--30, 2014.

\bibitem{hyvernat10wst}
P.~Hyvernat, C.~Raffalli.
\newblock \href{https://lama.univ-savoie.fr/~raffalli/pdfs/wst.pdf}{Improvements on the ``size change termination principle'' in a functional language}.
\newblock WST'10.

\bibitem{jones04rta}
N.~D. Jones, N.~Bohr.
\newblock \href{http://doi.org/10.1007/978-3-540-25979-4_1}{Termination analysis of the untyped lambda-calculus}.
\newblock RTA'04.\hide{LNCS 3091.} \bibitem{jouannaud15tlca} J.-P. Jouannaud, J.~Li. \newblock \href{http://doi.org/10.4230/LIPIcs.TLCA.2015.257}{Termination of Dependently Typed Rewrite Rules}. \newblock TLCA'15.\hide{LIPIcs 38.} \bibitem{klop93tcs} J.~W. Klop, V.~van Oostrom, F.~van Raamsdonk. \newblock \href{http://doi.org/10.1016/0304-3975(93)90091-7}{Combinatory reduction systems: introduction and survey}. \newblock {\em TCS} 121:279--308, 1993. \hide{ \bibitem{kop11rta} C.~Kop. \newblock \href{http://doi.org/10.4230/LIPIcs.RTA.2011.203}{Higher order dependency pairs for algebraic functional systems}. \newblock RTA'11.\hide{LIPIcs 10.} } \bibitem{kop12phd} C.~Kop. \newblock {\em \href{http://hdl.handle.net/1871/39346}{Higher order termination}}. \newblock PhD thesis, VU University Amsterdam, 2012. \hide{ \bibitem{kop12lmcs} C.~Kop, F.~Van Raamsdonk. \newblock \href{http://doi.org/10.2168/LMCS-8(2:10)2012}{Dynamic dependency pairs for algebraic functional systems}. \newblock {\em LMCS} 8(2):1--51, 2012. } \bibitem{kusakari07aaecc} K.~Kusakari, M.~Sakai. \newblock \href{http://doi.org/10.1007/s00200-007-0046-9}{Enhancing dependency pair method using strong computability in simply-typed term rewriting systems}. \newblock {\em AAECC} 18(5):407--431, 2007. \bibitem{lee01popl} C.~S. Lee, N.~D. Jones, A.~M. Ben-Amram. \newblock \href{http://doi.org/10.1145/360204.360210}{The size-change principle for program termination}. \newblock POPL'01. \bibitem{lepigre17draft} R.~Lepigre, C.~Raffalli. \newblock \href{https://arxiv.org/abs/1604.01990}{Practical subtyping for {S}ystem {F} with sized {(co-)}induction}. \newblock 2017. \bibitem{markowsky76au} G.~Markowsky. \newblock Chain-complete posets and directed sets with applications. \newblock {\em Algebra Universalis}, 6:53--68, 1976. \bibitem{mayr98tcs} R.~Mayr, T.~Nipkow. \newblock \href{http://doi.org/10.1016/S0304-3975(97)00143-6}{Higher-order rewrite systems and their confluence}. \newblock {\em TCS} 192(2):3--29, 1998. 
\bibitem{roux11rta} C.~Roux. \newblock \href{http://doi.org/10.4230/LIPIcs.RTA.2011.299}{Refinement Types as Higher-Order Dependency Pairs}. \newblock RTA'11.\hide{LIPIcs 10.} \bibitem{saillard15phd} R.~Saillard. \newblock {\em \href{https://pastel.archives-ouvertes.fr/tel-01299180}{Type checking in the {Lambda}-{Pi}-calculus modulo: theory and practice}}. \newblock PhD thesis, Mines ParisTech, France, 2015. \bibitem{sereni05aplas} D.~Sereni, N.~D. Jones. \newblock \href{http://doi.org/10.1007/11575467_19}{Termination analysis of higher-order functional programs}. \newblock APLAS'05.\hide{LNCS 3780.} \hide{ \bibitem{suzuki11pro} S.~Suzuki, K.~Kusakari, F.~Blanqui. \newblock \href{https://hal.inria.fr/inria-00555008}{Argument filterings and usable rules in higher-order rewrite systems}. \newblock {\em IPSJ Transactions on Programming} 4(2):1--12, 2011. } \bibitem{terese03book} {TeReSe}. \newblock {\em Term rewriting systems}, volume~55 of {\em Cambridge Tracts in Theoretical Computer Science}. \newblock Cambridge University Press, 2003. \hide{ \bibitem{tc} \url{http://termination-portal.org/wiki/Termination\_Competition}, 2018. } \bibitem{thiemann07phd} R.~Thiemann. \newblock {\em \href{http://aib.informatik.rwth-aachen.de/2007/2007-17.pdf}{The {DP} framework for proving termination of term rewriting}}. \newblock PhD thesis, RWTH Aachen University, 2007. \newblock Technical Report AIB-2007-17. \bibitem{thiemann05aaecc} R.~Thiemann, J.~Giesl. \newblock \href{http://doi.org/10.1007/s00200-005-0179-7}{The size-change principle and dependency pairs for termination of term rewriting}. \newblock {\em AAECC} 16(4):229--270, 2005. \hide{ \bibitem{toyama87ipl} Y.~Toyama. \newblock \href{http://doi.org/10.1016/0020-0190(87)90122-0}{Counterexamples to termination for the direct sum of term rewriting systems}. \newblock {\em IPL} 25(3):141--143, 1987. } \hide{ \bibitem{tpdb} \href{http://cl2-informatik.uibk.ac.at/mercurial.cgi/TPDB}{Termination problem data base (TPDB)}, 2018. 
} \bibitem{oostrom94phd} V.~van Oostrom. \newblock {\em \href{http://www.phil.uu.nl/~oostrom/publication/ps/phdthesis.ps}{Confluence for abstract and higher-order rewriting}}. \newblock PhD thesis, Vrije Universiteit Amsterdam, 1994. \bibitem{oostrom97tcs} V.~van Oostrom. \newblock \href{http://doi.org/10.1016/S0304-3975(96)00173-9}{Developing developments}. \newblock {\em TCS} 175(1):159--181, 1997. \bibitem{wahlstedt07phd} D.~Wahlstedt. \newblock {\em \href{http://www.cse.chalmers.se/alumni/davidw/wdt_phd_printed_version.pdf}{Dependent type theory with first-order parameterized data types and well-founded recursion}}. \newblock PhD thesis, Chalmers University of Technology, 2007. \hide{ \bibitem{wanda} Wanda. \newblock \url{http://wandahot.sourceforge.net/}. } \end{thebibliography} \appendix \section{Proofs of lemmas on the interpretation} \label{annex-interp} \subsection{Definition of the interpretation} \begin{lemma}\label{lem-F-mon} $F$ is monotone wrt inclusion. \end{lemma} \begin{proof} We first prove that $D$ is monotone. Let $I\sle J$ and $T\in D(I)$. We have to show that $T\in D(J)$. To this end, we have to prove (1) $T\in\SN$ and (2) if $T\a^*(x:A)B$ then $A\in\dom(J)$ and, for all $a\in J(A)$, $B[x\to a]\in\dom(J)$: \begin{enumerate} \item Since $T\in D(I)$, we have $T\in\SN$. \item Since $T\in D(I)$ and $T\a^*(x:A)B$, we have $A\in\dom(I)$ and, for all $a\in I(A)$, $B[x\to a]\in\dom(I)$. Since $I\sle J$, we have $\dom(I)\sle\dom(J)$ and $J(A)=I(A)$ since $I$ and $J$ are functional relations. Therefore, $A\in\dom(J)$ and, for all $a\in I(A)$, $B[x\to a]\in\dom(J)$. \end{enumerate} We now prove that $F$ is monotone. Let $I\sle J$ and $T\in D(I)$. We have to show that $F(I)(T)=F(J)(T)$. First, $T\in D(J)$ since $D$ is monotone. If ${\nf{T}}={(x:A)B}$, then $F(I)(T)=\product{a}{I(A)}{I(B[x\to a])}$ and $F(J)(T)=\product{a}{J(A)}{J(B[x\to a])}$. Since $T\in D(I)$, we have $A\in\dom(I)$ and, for all $a\in I(A)$, $B[x\to a]\in\dom(I)$. 
Since $I\sle J$ and $I$ and $J$ are functional relations, we have $J(A)=I(A)$ and, for all $a\in I(A)$, $J(B[x\to a])=I(B[x\to a])$. Therefore, $F(I)(T)=F(J)(T)$. Now, if $\nf{T}$ is not a product, then $F(I)(T)=F(J)(T)=\SN$.
\end{proof}

\subsection{Computability predicates}

\begin{lemma}\label{lem-comp-pred-dom}
$\cD$ is a computability predicate.
\end{lemma}

\begin{proof}
Note that $\cD=D(\cI)$.
\begin{enumerate}
\item $\cD\sle\SN$ by definition of $D$.
\item Let $T\in\cD$ and $T'$ such that $T\a T'$. We have $T'\in\SN$ since $T\in\SN$. Assume now that $T'\a^*(x:A)B$. Then, $T\a^*(x:A)B$, $A\in\cD$ and, for all $a\in \cI(A)$, $B[x\to a]\in\cD$. Therefore, $T'\in\cD$.
\item Let $T$ be a neutral term such that $\red{T}\sle\cD$. Since $\cD\sle\SN$, $T\in\SN$. Assume now that $T\a^*(x:A)B$. Since $T$ is neutral, there is $U\in{\red{T}}$ such that $U\a^*(x:A)B$. Therefore, $A\in\cD$ and, for all $a\in \cI(A)$, $B[x\to a]\in\cD$.\qedhere
\end{enumerate}
\end{proof}

\begin{lemma}\label{lem-comp-pred-prod}
If $P\in\bP$ and, for all $a\in P$, $Q(a)\in\bP$, then $\product{a}{P}{Q(a)}\in\bP$.
\end{lemma}

\begin{proof}
Let $R=\product{a}{P}{Q(a)}$.
\begin{enumerate}
\item Let $t\in R$. We have to prove that $t\in\SN$. Let $x\in\bV$. Since $P\in\bP$, $x\in P$. So, $tx\in Q(x)$. Since $Q(x)\in\bP$, $Q(x)\sle\SN$. Therefore, $tx\in\SN$, and $t\in\SN$.
\item Let $t\in R$ and $t'$ such that $t\a t'$. We have to prove that $t'\in R$. Let $a\in P$. We have to prove that $t'a\in Q(a)$. By definition, $ta\in Q(a)$ and $ta\a t'a$. Since $Q(a)\in\bP$, $t'a\in Q(a)$.
\item Let $t$ be a neutral term such that $\red{t}\sle R$. We have to prove that $t\in R$. Hence, we take $a\in P$ and prove that $ta\in Q(a)$. Since $P\in\bP$, we have $a\in\SN$ and ${\a^*\!\!(a)}\sle{P}$. We now prove that, for all $b\in{\a^*\!\!(a)}$, $tb\in Q(a)$, by induction on $\a$. Since $t$ is neutral, $tb$ is neutral too and it suffices to prove that $\red{tb}\sle Q(a)$. Since $t$ is neutral, ${\red{tb}}={\red{t}b\cup t\red{b}}$.
By induction hypothesis, $t\red{b}\sle Q(a)$. By assumption, $\red{t}\sle R$. So, $\red{t}a\sle Q(a)$. Since $Q(a)\in\bP$, $\red{t}b\sle Q(a)$ too. Therefore, $ta\in Q(a)$ and $t\in R$.\qedhere
\end{enumerate}
\end{proof}

\begin{lemma}\label{lem-comp-pred-type-int}
For all $T\in\cD$, $\cI(T)$ is a computability predicate.
\end{lemma}

\begin{proof}
Since $\cF_p(\bT,\bP)$ is a chain-complete poset, it suffices to prove that $\cF_p(\bT,\bP)$ is closed under $F$. Assume that $I\in\cF_p(\bT,\bP)$. We have to prove that $F(I)\in\cF_p(\bT,\bP)$, that is, for all $T\in D(I)$, $F(I)(T)\in\bP$. There are two cases:
\begin{itemize}
\item If ${\nf{T}}={(x:A)B}$, then $F(I)(T)=\product{a}{I(A)}{I(B[x\to a])}$. By assumption, $I(A)\in\bP$ and, for all $a\in I(A)$, $I(B[x\to a])\in\bP$. Hence, by Lemma \ref{lem-comp-pred-prod}, $F(I)(T)\in\bP$.
\item Otherwise, $F(I)(T)=\SN\in\bP$.\qedhere
\end{itemize}
\end{proof}

\noindent{\textcolor{darkgray}{$\blacktriangleright$}} {\bf Lemma \ref{lem-props}\ref{lem-comp-pred-int}.} {\it For all terms $T$ and substitutions $\s$, $\I{T}_\s\in\bP$.}

\begin{proof}
By induction on $T$. If $T=s$, then $\I{T}_\s=\cD\in\bP$ by Lemma \ref{lem-comp-pred-dom}. If $T=(x:A)K\in\bK$, then $\I{T}_\s=\product{a}{\I{A}_\s}{\I{K}_{[x\to a,\s]}}$. By induction hypothesis, $\I{A}_\s\in\bP$ and, for all $a\in\I{A}_\s$, $\I{K}_{[x\to a,\s]}\in\bP$. Hence, by Lemma \ref{lem-comp-pred-prod}, $\I{T}_\s\in\bP$. If $T\notin\bK\cup\{\kind\}$ and $T\s\in\cD$, then $\I{T}_\s=\cI(T\s)\in\bP$ by Lemma \ref{lem-comp-pred-type-int}. Otherwise, $\I{T}_\s=\SN\in\bP$.
\end{proof}

\subsection{Invariance by reduction}

We now prove that the interpretation is invariant by reduction.

\begin{lemma}\label{lem-type-int-red}
If $T\in\cD$ and $T\a T'$, then $\cI(T)=\cI(T')$.
\end{lemma}

\begin{proof}
First note that $T'\in\cD$ since $\cD\in\bP$. Hence, $\cI(T')$ is well defined. Now, we have $T\in\SN$ since $\cD\sle\SN$.
So, $T'\in\SN$ and, by local confluence and Newman's lemma, $\nf{T}=\nf{T'}$. If $\nf{T}=(x:A)B$ then $\cI(T)=\product{a}{\cI(A)}{\cI(B[x\to a])}=\cI(T')$. Otherwise, $\cI(T)=\SN=\cI(T')$. \end{proof} \noindent{\textcolor{darkgray}{$\blacktriangleright$}} {\bf Lemma \ref{lem-props}\ref{lem-int-red}.} {\it If $T$ is typable, $T\s\in\cD$ and $T\a T'$, then $\I{T}_\s=\I{T'}_\s$.} \begin{proof} By assumption, there are $\G$ and $U$ such that $\G\th T:U$. Since $\a$ preserves typing, we also have $\G\th T':U$. So, $T\neq\kind$, and $T'\neq\kind$. Moreover, $T\in\bK$ iff $T'\in\bK$ since $\G\th T:\kind$ iff $T\in\bK$ and $T$ is typable. In addition, we have $T'\s\in\cD$ since $T\s\in\cD$ and $\cD\in\bP$. We now prove the result, with $T\a^=T'$ instead of $T\a T'$, by induction on $T$. If $T\notin\bK$, then $T'\notin\bK$ and, since $T\s,T'\s\in\cD$, $\I{T}_\s=\cI(T\s)=\cI(T'\s)=\I{T'}_\s$ by Lemma \ref{lem-type-int-red}. If $T=\type$, then $\I{T}_\s=\cD=\I{T'}_\s$. Otherwise, $T=(x:A)K$ and $T'=(x:A')K'$ with $A\a^=A'$ and $K\a^=K'$. By inversion, we have $\G\th A:\type$, $\G\th A':\type$, $\G,x:A\th K:\kind$ and $\G,x:A'\th K':\kind$. So, by induction hypothesis, $\I{A}_\s=\I{A'}_\s$ and, for all $a\in\I{A}_\s$, $\I{K}_{\s'}=\I{K'}_{\s'}$, where $\s'=[x\to a,\s]$. Therefore, $\I{T}_\s=\I{T'}_\s$. \end{proof} \noindent{\textcolor{darkgray}{$\blacktriangleright$}} {\bf Lemma \ref{lem-props}\ref{lem-int-red-subs}.} {\it If $T$ is typable, $T\s\in\cD$ and $\s\a\s'$, then $\I{T}_\s=\I{T}_{\s'}$.} \begin{proof} By induction on $T$. \begin{itemize} \item If $T\in\mb{S}$, then $\I{T}_\s=\cD=\I{T}_{\s'}$. \item If $T=(x:A)K$ and $K\in\bK$, then $\I{T}_\s=\product{a}{\I{A}_\s}{\I{K}_{[x\to a,\s]}}$ and $\I{T}_{\s'}=\product{a}{\I{A}_{\s'}}{\I{K}_{[x\to a,\s']}}$. By induction hypothesis, $\I{A}_\s=\I{A}_{\s'}$ and, for all $a\in\I{A}_\s$, $\I{K}_{[x\to a,\s]}=\I{K}_{[x\to a,\s']}$. Therefore, $\I{T}_\s=\I{T}_{\s'}$. 
\item If $T\s\in\cD$, then $\I{T}_\s=\cI(T\s)$ and $\I{T}_{\s'}=\cI(T\s')$. Since $T\s\a^*T\s'$, by Lemma \ref{lem-props}\ref{lem-int-red}, $\cI(T\s)=\cI(T\s')$. \item Otherwise, $\I{T}_\s=\SN=\I{T}_{\s'}$.\qedhere \end{itemize} \end{proof} \subsection{Adequacy of the interpretation} \noindent{\textcolor{darkgray}{$\blacktriangleright$}} {\bf Lemma \ref{lem-props}\ref{lem-int-prod}.} {\it If $(x:A)B$ is typable, $((x:A)B)\s\in\cD$ and $x\notin\dom(\s)\cup\FV(\s)$, then $\I{(x:A)B}_\s=\product{a}{\I{A}_\s}{\I{B}_{[x\to a,\s]}}$.} \begin{proof} If $B$ is a kind, this is immediate. Otherwise, since $((x:A)B)\s\in\cD$, $\I{(x:A)B}_\s=\cI(((x:A)B)\s)$. Since $x\notin\dom(\s)\cup\FV(\s)$, we have $((x:A)B)\s=(x:A\s)B\s$. Since $(x:A\s)B\s\in\cD$ and $\cD\sle\SN$, we have $\I{(x:A)B}_\s=\product{a}{\cI(\nf{A\s})}{\cI((\nf{B\s})[x\to a])}$. Since $(x:A)B$ is typable, $A$ is of type $\type$ and $A\notin\bK\cup\{\kind\}$. Hence, $\I{A}_\s=\cI(A\s)$ and, by Lemma \ref{lem-type-int-red}, $\cI(A\s)=\cI(\nf{A\s})$. Since $(x:A)B$ is typable and not a kind, $B$ is of type $\type$ and $B\notin\bK\cup\{\kind\}$. Hence, $\I{B}_{[x\to a,\s]}=\cI(B[x\to a,\s])$. Since $x\notin\dom(\s)\cup\FV(\s)$, $B[x\to a,\s]=(B\s)[x\to a]$. Hence, $\I{B}_{[x\to a,\s]}=\cI((B\s)[x\to a])$ and, by Lemma \ref{lem-type-int-red}, $\cI((B\s)[x\to a])=\cI((\nf{B\s})[x\to a])$. Therefore, $\I{(x:A)B}_\s=\product{a}{\I{A}_\s}{\I{B}_{[x\to a,\s]}}$. \end{proof} Note that, by iterating this lemma, we get that $v\in\I{\prod\vx\vT U}$ iff, for all $\vt$ such that $[\vx\to\vt]\models\vx:\vT$, $v\vt\in\I{U}_{[\vx\to\vt]}$. \noindent{\textcolor{darkgray}{$\blacktriangleright$}} {\bf Lemma \ref{lem-props}\ref{lem-int-subs}.} {\it If $\D\th U:s$, $\G\th\g:\D$ and $U\g\s\in\cD$, then $\I{U\g}_\s=\I{U}_{\g\s}$.} \begin{proof} We proceed by induction on $U$. Since $\D\th U:s$ and $\G\th\g:\D$, we have $\G\th U\g:s$. 
\begin{itemize} \item If $s=\type$, then $U,U\g\notin\bK\cup\{\kind\}$ and $\I{U\g}_\s=\cI(U\g\s)=\I{U}_{\g\s}$ since $U\g\s\in\cD$. \item Otherwise, $s=\kind$ and $U\in\bK$. \begin{itemize} \item If $U=\type$, then $\I{U\g}_\s=\cD=\I{U}_{\g\s}$. \item Otherwise, $U=(x:A)K$ and, by Lemma \ref{lem-props}\ref{lem-int-prod}, $\I{U\g}_\s=\product{a}{\I{A\g}_\s}{\I{K\g}_{[x\to a,\s]}}$ and $\I{U}_{\g\s}=\product{a}{\I{A}_{\g\s}}{\I{K}_{[x\to a,\g\s]}}$. By induction hypothesis, $\I{A\g}_\s=\I{A}_{\g\s}$ and, for all $a\in\I{A\g}_\s$, $\I{K\g}_{[x\to a,\s]}=\I{K}_{\g[x\to a,\s]}$. Wlog we can assume $x\notin\dom(\g)\cup\FV(\g)$. So, $\I{K}_{\g[x\to a,\s]}=\I{K}_{[x\to a,\g\s]}$.\qedhere \end{itemize} \end{itemize} \end{proof} \noindent{\textcolor{darkgray}{$\blacktriangleright$}} {\bf Lemma \ref{lem-props}\ref{lem-comp-abs}.} {\it Let $P$ be a computability predicate and $Q$ a $P$-indexed family of computability predicates such that $Q(a')\sle Q(a)$ whenever $a\a a'$. Then, $\l x:A.b\in\product{a}{P}{Q(a)}$ whenever $A\in\SN$ and, for all $a\in P$, $b[x\to a]\in Q(a)$.} \begin{proof} Let $a_0\in P$. Since $P\in\bP$, we have $a_0\in\SN$ and $x\in P$. Since $Q(x)\in\bP$ and $b=b[x\to x]\in Q(x)$, we have $b\in\SN$. Let $a\in{\a^*(a_0)}$. We can prove that $(\abs xAb)a\in Q(a_0)$ by induction on $(A,b,a)$ ordered by $(\a,\a,\a)_{\mr{prod}}$. Since $Q(a_0)\in\bP$ and $(\abs xAb)a$ is neutral, it suffices to prove that $\red{(\abs xAb)a}\sle Q(a_0)$. If the reduction takes place in $A$, $b$ or $a$, we can conclude by induction hypothesis. Otherwise, $(\abs xAb)a\a b[x\to a]\in Q(a)$ by assumption. Since $a_0\a^*a$ and $Q(a')\sle Q(a)$ whenever $a\a a'$, we have $b[x\to a]\in Q(a_0)$. 
\end{proof}

\section{Termination proof of Example \ref{expl-list-poly}}
\label{annex-matrices}

Here is the complete list of the dependency pairs of the example:

\begin{lstlisting}[escapeinside={*}{*}]
*\color{red} A:* El (arrow a b) > El a
*\color{red} B:* El (arrow a b) > El b
*\color{red} C:* (s p) + q > p + q
*\color{red} D:* app a _ (cons _ x p l) q m > p + q
*\color{red} E:* app a _ (cons _ x p l) q m > app a p l q m
*\color{red} F:* len_fil a f _ (cons _ x p l) > len_fil_aux (f x) a f p l
*\color{red} G:* len_fil a f _ (app _ p l q m) > (len_fil a f p l) + (len_fil a f q m)
*\color{red} H:* len_fil a f _ (app _ p l q m) > len_fil a f p l
*\color{red} I:* len_fil a f _ (app _ p l q m) > len_fil a f q m
*\color{red} J:* len_fil_aux true a f p l > len_fil a f p l
*\color{red} K:* len_fil_aux false a f p l > len_fil a f p l
*\color{red} L:* fil a f _ (cons _ x p l) > fil_aux (f x) a f x p l
*\color{red} M:* fil a f _ (app _ p l q m) > app a (len_fil a f p l) (fil a f p l) (len_fil a f q m) (fil a f q m)
*\color{red} N:* fil a f _ (app _ p l q m) > len_fil a f p l
*\color{red} O:* fil a f _ (app _ p l q m) > fil a f p l
*\color{red} P:* fil a f _ (app _ p l q m) > len_fil a f q m
*\color{red} Q:* fil a f _ (app _ p l q m) > fil a f q m
*\color{red} R:* fil_aux true a f x p l > len_fil a f p l
*\color{red} S:* fil_aux true a f x p l > fil a f p l
*\color{red} T:* fil_aux false a f x p l > fil a f p l
\end{lstlisting}

The whole call graph is depicted below. The letter associated with each matrix corresponds to the dependency pair presented above and in Example \ref{expl-mult}, except for the {\tt TC} matrices, which come from the computation of the transitive closure and label the dotted edges.
\begin{center}\tt \begin{tikzpicture} \node[draw] (filter) at (2,0) {fil}; \node[draw] (f_aux) at (-1,0) {fil\_aux}; \node[draw] (l_filter) at (2,-2) {len\_fil}; \node[draw] (l_f_aux) at (-3,-2) {len\_fil\_aux}; \node[draw] (el) at (-3.5,0) {El}; \node[draw] (app) at (5,0) {app}; \node[draw] (pl) at (5,-2) {+}; \draw[>=latex,->] (el) to[out=135,in=90] (-4.5,0) to[out=-90,in=-135] (el); \node[left] (ell) at (-4.5,0) {A,B}; \draw[>=latex,->] (pl) to[out=45,in=90] (6,-2) to[out=-90,in=-45] (pl); \node[right] (pll) at (6,-2) {C}; \draw[>=latex,->] (app) to[bend left=8] node[midway,right] {D} (pl); \draw[>=latex,->] (app) to[out=45,in=90] (6.2,0) to[out=-90,in=-45] (app); \node[right] (appl) at (6.2,0) {E}; \draw[>=latex,->] (l_filter) to[bend left=8] node[midway,above] {F} (l_f_aux); \draw[>=latex,->] (l_filter) to[bend left=8] node[midway,above] {G} (pl); \draw[>=latex,->] (1.5,-2.27) to[out=-45,in=0] (1.35,-2.8) to[out=180,in=-150] (1.23,-2.21); \node[below] (l_filterl) at (1.35,-2.8) {H,I}; \draw[>=latex,->] (l_f_aux) to[bend left=8] node[midway,above] {J,K} (l_filter); \draw[>=latex,->] (filter) to[bend left=10] node[midway,above] {L} (f_aux); \draw[>=latex,->] (f_aux) to[bend right=10] node[midway,above,right] {R} (l_filter); \draw[>=latex,->] (f_aux) to[bend left=10] node[midway,above] {S,T} (filter); \draw[>=latex,->] (filter) to[bend left=10] node[midway,above] {M} (app); \draw[>=latex,->] (filter) to[bend right=10] node[midway,right] {N,P} (l_filter); \draw[>=latex,->] (1.75,0.23) to[out=45,in=0] (1.6,0.8) to[out=180,in=150] (1.6,0.19); \node[above] (filterl) at (1.6,0.8) {O,Q}; \draw[>=latex,->,densely dashed] (f_aux) to[out=45,in=0] (-1,0.8) to[out=180,in=135] (f_aux); \node[above] (f_auxTC) at (-1,0.8) {TC${}_4$}; \draw[>=latex,->,densely dashed] (2.4,0.19) to[out=30,in=0] (2.4,0.8) to[out=180,in=135] (2.25,0.23); \node[above] (filterTC) at (2.4,0.8) {TC${}_3$}; \draw[>=latex,->,densely dashed] (l_f_aux) to[out=-45,in=0] (-3,-2.8) to[out=180,in=-135] 
(l_f_aux);
\node[below] (l_f_auxTC) at (-3,-2.8) {TC${}_1$};
\draw[>=latex,->,densely dashed] (2.77,-2.21) to[out=-30,in=0] (2.7,-2.8) to[out=180,in=-135] (2.55,-2.27);
\node[below] (l_filterTC) at (2.7,-2.8) {TC${}_2$};
\end{tikzpicture}
\end{center}

The argument {\tt a} is omitted everywhere in the matrices presented below:
\begin{center}\small
{\tt A,B}=$\left(\begin{smallmatrix} -1\\ \end{smallmatrix}\right)$,
{\tt C}=$\left(\begin{smallmatrix} -1 & \infty \\ \infty & 0 \\ \end{smallmatrix}\right)$,
{\tt D}=$\left(\begin{smallmatrix} \infty & \infty\\ -1 & \infty\\ \infty & 0 \\ \infty & \infty\\ \end{smallmatrix}\right)$,
{\tt E}=$\left(\begin{smallmatrix} \infty & \infty & \infty & \infty\\ -1 & -1 & \infty & \infty\\ \infty & \infty & 0 & \infty\\ \infty & \infty & \infty & 0 \\ \end{smallmatrix}\right)$,
{\tt F}=$\left(\begin{smallmatrix} \infty & 0 & \infty & \infty\\ \infty & \infty & \infty & \infty\\ \infty & \infty & -1 & -1 \\ \end{smallmatrix}\right)$,
{\tt J}={\tt K}=$\left(\begin{smallmatrix} \infty & \infty & \infty\\ 0 & \infty & \infty\\ \infty & 0 & \infty\\ \infty & \infty & 0 \\ \end{smallmatrix}\right)$,
{\tt G}=$\left(\begin{smallmatrix} \infty & \infty\\ \infty & \infty\\ \infty & \infty\\ \end{smallmatrix}\right)$,
{\tt H}={\tt I}={\tt N}={\tt O}={\tt P}={\tt Q}=$\left(\begin{smallmatrix} 0 & \infty & \infty\\ \infty & \infty & \infty\\ \infty & -1 & -1 \\ \end{smallmatrix}\right)$,
{\tt L}=$\left(\begin{smallmatrix} \infty & 0 & \infty & \infty & \infty\\ \infty & \infty & \infty & \infty & \infty\\ \infty & \infty & -1 & -1 & -1 \\ \end{smallmatrix}\right)$,
{\tt M}=$\left(\begin{smallmatrix} \infty & \infty & \infty & \infty\\ \infty & \infty & \infty & \infty\\ \infty & \infty & \infty & \infty\\ \end{smallmatrix}\right)$,
{\tt R}={\tt S}={\tt T}=$\left(\begin{smallmatrix} \infty & \infty & \infty\\ 0 & \infty & \infty\\ \infty & \infty & \infty\\ \infty & 0 & \infty\\ \infty & \infty & 0 \\ \end{smallmatrix}\right)$.
\end{center}

This leads to the following matrices labeling loops in the transitive closure:
\begin{center}
{\tt TC}${}_1$={\tt J}$\times${\tt F}=$\left(\begin{smallmatrix} \infty & \infty & \infty & \infty\\ \infty & 0 & \infty & \infty\\ \infty & \infty & \infty & \infty\\ \infty & \infty & -1 & -1 \\ \end{smallmatrix}\right)$,
{\tt TC}${}_4$={\tt S}$\times${\tt L}=$\left(\begin{smallmatrix} \infty & \infty & \infty & \infty & \infty\\ \infty & 0 & \infty & \infty & \infty\\ \infty & \infty & \infty & \infty & \infty\\ \infty & \infty & \infty & \infty & \infty\\ \infty & \infty & -1 & -1 & -1 \\ \end{smallmatrix}\right)$,
{\tt TC}${}_3$={\tt L}$\times${\tt S}={\tt TC}${}_2$={\tt F}$\times${\tt J}=$\left(\begin{smallmatrix} 0 & \infty & \infty\\ \infty & \infty & \infty\\ \infty & -1 & -1 \\ \end{smallmatrix}\right)$={\tt O}={\tt H}.
\end{center}

It would be useless to compute matrices labeling edges which are not in a strongly connected component of the call graph (like {\tt S$\times$R}), but it is necessary to compute all the products which could label a loop, in particular to verify that all loop-labeling matrices are idempotent, which is indeed the case here.

\hide{
As an example, we can detail the computation performed for {\tt TC${}_3^2$}:
\[\left(\begin{smallmatrix}
\min(0+0,\infty+\infty,\infty+\infty) & \min(0+\infty,\infty+\infty,\infty+-1) & \min(0+\infty,\infty+\infty,\infty+-1)\\
\min(\infty+0,\infty+\infty,\infty+\infty) & \min(\infty+\infty,\infty+\infty,\infty+-1) & \min(\infty+\infty,\infty+\infty,\infty+-1)\\
\min(\infty+0,-1+\infty,-1+\infty) & \min(\infty+\infty,-1+\infty,-1+-1) & \min(\infty+\infty,-1+\infty,-1+-1)\\
\end{smallmatrix}\right)\]
}

We now check that this system is well-structured. For each rule $f\vl\a r$, we take the environment $\D_{f\vl\a r}$ made of all the variables of $r$ with the following types: {\tt a:Set, b:Set, p:$\bN$, q:$\bN$, x:El a, l:$\bL$ a p, m:$\bL$ a q, f:El a$\A\bB$}.
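The products of SCT matrices above are taken in the min-plus algebra over $\{-1,0,\infty\}$, where $-1$ denotes a strict decrease, $0$ a non-strict decrease and $\infty$ no known decrease: entry $(i,j)$ of $A\times B$ is $\min_k(A_{i,k}+B_{k,j})$, with sums truncated at $-1$. The following minimal sketch (ours, for illustration only) reproduces this composition:

```python
# Min-plus composition of SCT matrices with entries in {-1, 0, INF}:
# -1 = strict decrease, 0 = non-strict decrease, INF = no known decrease.
INF = float("inf")

def compose(A, B):
    """Entry (i, j) of A*B is min over k of A[i][k] + B[k][j], truncated at -1."""
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must agree"
    return [[max(-1, min(A[i][k] + B[k][j] for k in range(m)))
             for j in range(p)] for i in range(n)]

# Matrix H (= I = N = O = P = Q = TC_2 = TC_3) from the example above:
H = [[0, INF, INF],
     [INF, INF, INF],
     [INF, -1, -1]]

# The loop-labeling matrix TC_3 is idempotent, as the criterion requires:
assert compose(H, H) == H
```

Running the composition on all products labeling a loop confirms the idempotence claim made above.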
The precedence inferred for this example is the smallest one containing:
\begin{itemize}
\item comparisons linked to the typing of symbols:
{\begin{center}\tt
\begin{tabular}{rlrl}
Set &$\preceq$ arrow & Set,$\bL$,0 &$\preceq$ nil\\
Set &$\preceq$ El& Set,El,$\bN$,$\bL$,s &$\preceq$ cons\\
$\bB$ &$\preceq$ true & Set,$\bN$,$\bL$,+ &$\preceq$ app\\
$\bB$ &$\preceq$ false & Set,El,$\bB$,$\bN$,$\bL$ &$\preceq$ len\_fil\\
$\bN$ &$\preceq$ 0 & $\bB$,Set,El,$\bN$,$\bL$ &$\preceq$ len\_fil\_aux\\
$\bN$ &$\preceq$ s& Set,El,$\bB$,$\bN$,$\bL$,len\_fil &$\preceq$ fil\\
$\bN$ &$\preceq$ + & $\bB$,Set,El,$\bN$,$\bL$,len\_fil\_aux &$\preceq$ fil\_aux \\
Set,$\bN$ &$\preceq$ $\bL$&\\
\end{tabular}
\end{center}}
\item and comparisons related to calls:
{\begin{center}\tt
\begin{tabular}{rlrl}
s &$\preceq$ + & s,len\_fil &$\preceq$ len\_fil\_aux \\
cons,+ &$\preceq$ app & nil,fil\_aux,app,len\_fil &$\preceq$ fil \\
0,len\_fil\_aux,+ &$\preceq$ len\_fil & fil,cons,len\_fil &$\preceq$ fil\_aux\\
\end{tabular}
\end{center}}
\end{itemize}
This precedence can be summed up in the following diagram, where symbols in the same box are equivalent:
\begin{center}\tt
\begin{tikzpicture}
\node[draw] (filter) at (2,6){fil,fil\_aux};
\node[draw] (l_filter) at (3,4.5) {len\_fil,len\_fil\_aux};
\node[draw] (append) at (-1,4.5) {app};
\node[draw] (true) at (4,1.5) {true};
\node[draw] (false) at (6,1.5) {false};
\node[draw] (B) at (5,0) {$\bB$};
\node[draw] (cons) at (-2,3) {cons};
\node[draw] (nil) at (0,3) {nil};
\node[draw] (plus) at (3,3) {+};
\node[draw] (L) at (-1,1.5) {$\bL$};
\node[draw] (arrow) at (-4,1.5) {arrow};
\node[draw] (El) at (-3,1.5) {El};
\node[draw] (zero) at (1,1.5) {0};
\node[draw] (succ) at (3,1.5) {s};
\node[draw] (Set) at (-3.5,0) {Set};
\node[draw] (N) at (2,0) {$\bN$};
\draw[>=latex,->] (Set) to (arrow);
\draw[>=latex,->] (Set) to (El);
\draw[>=latex,->] (Set) to (L);
\draw[>=latex,->] (N) to (L);
\draw[>=latex,->] (N) to (zero);
\draw[>=latex,->] (N) to (succ);
\draw[>=latex,->]
(B) to (true); \draw[>=latex,->] (B) to (false); \draw[>=latex,->] (El) to (cons); \draw[>=latex,->] (L) to (cons); \draw[>=latex,->] (L) to (nil); \draw[>=latex,->] (zero) to (nil); \draw[>=latex,->] (succ) to (plus); \draw[>=latex,->] (cons) to (append); \draw[>=latex,->] (plus) to (append); \draw[>=latex,->] (plus) to (l_filter); \draw[>=latex,->] (append) to (filter); \draw[>=latex,->] (l_filter) to (filter); \draw[>=latex,->] (zero) to (l_filter); \draw[>=latex,->] (L) to[bend right=5] (l_filter); \draw[>=latex,->] (B) to[bend right=35] (l_filter); \draw[>=latex,->] (nil) to[bend left=15] (filter); \draw[>=latex,->] (succ) to (cons); \draw[>=latex,->] (El) to[out=15,in=-175] (l_filter); \end{tikzpicture} \end{center} \end{document}
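The "smallest precedence containing" the comparisons above is just the reflexive-transitive closure of the generating pairs. A minimal Python sketch (symbol names are transliterated from the tables, with B, N, L standing for the bold sorts; only the pairs readable from the two tables are encoded):

```python
from itertools import product

# Generating comparisons a <= b, transcribed from the typing and call tables.
pairs = [
    ("Set", "arrow"), ("Set", "El"), ("Set", "L"), ("N", "L"),
    ("B", "true"), ("B", "false"), ("N", "0"), ("N", "s"), ("N", "+"),
    ("0", "nil"), ("L", "nil"), ("s", "cons"), ("El", "cons"), ("L", "cons"),
    ("s", "+"), ("cons", "app"), ("+", "app"),
    ("0", "len_fil"), ("+", "len_fil"), ("len_fil_aux", "len_fil"),
    ("s", "len_fil_aux"), ("len_fil", "len_fil_aux"),
    ("nil", "fil"), ("fil_aux", "fil"), ("app", "fil"), ("len_fil", "fil"),
    ("fil", "fil_aux"), ("cons", "fil_aux"), ("len_fil", "fil_aux"),
]

def precedence(pairs):
    """Smallest quasi-order containing the pairs: reflexive-transitive
    closure, computed by Warshall's algorithm (k is the outer loop)."""
    syms = sorted({s for p in pairs for s in p})
    leq = {(a, b) for a, b in pairs} | {(s, s) for s in syms}
    for k, a, b in product(syms, syms, syms):
        if (a, k) in leq and (k, b) in leq:
            leq.add((a, b))
    return leq

leq = precedence(pairs)
# fil and fil_aux end up equivalent: each is below the other.
assert ("fil", "fil_aux") in leq and ("fil_aux", "fil") in leq
# A base sort sits below the top symbols, e.g. Set <= El <= cons <= app <= fil.
assert ("Set", "fil") in leq
```

The asserts match the diagram: fil and fil\_aux share a box (they are equivalent), while Set lies at the bottom of the order.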
\begin{definition}[Definition:Multiplication/Natural Numbers/Addition] Let $\N$ be the natural numbers. Let $+$ denote addition. The binary operation $\times$ is recursively defined on $\N$ as follows: :$\forall m, n \in \N: \begin{cases} m \times 0 & = 0 \\ m \times \left({n + 1}\right) & = m \times n + m \end{cases}$ This operation is called '''multiplication'''. Equivalently, '''multiplication''' can be defined as: :$\forall m, n \in \N: m \times n := +^n m$ where $+^n m$ denotes the $n$th power of $m$ under $+$. \end{definition}
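A direct transcription of this recursion into code (a sketch; the successor case $m \times (n+1) = m \times n + m$ becomes recursion on $n - 1$, and the equivalent iterated-addition form $+^n m$ becomes an $n$-fold sum):

```python
def mul(m: int, n: int) -> int:
    """m x n by the recursive definition: m x 0 = 0, m x (n+1) = m x n + m."""
    if n == 0:
        return 0
    return mul(m, n - 1) + m

def mul_iter(m: int, n: int) -> int:
    """The equivalent definition m x n = +^n m: the n-th power of m under +."""
    total = 0
    for _ in range(n):
        total += m
    return total

assert mul(4, 3) == mul_iter(4, 3) == 12
```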
Control of Au nanoantenna emission enhancement of magnetic dipolar emitters by means of VO2 phase change layers

Emilija Petronijevic,1,* Marco Centini,1 Tiziana Cesca,2 Giovanni Mattei,2 Fabio Antonio Bovino,1 and Concita Sibilia1

1Sapienza University of Rome, Department S.B.A.I., Via A. Scarpa 14, I-00161 Rome, Italy
2University of Padova, Physics and Astronomy Department, Via Marzolo 8, I-35131 Padova, Italy
*Corresponding author: [email protected]

Opt. Express 27, 24260-24273 (2019). Original manuscript received April 11, 2019; revised June 3, 2019; accepted June 8, 2019.

Active, ultra-fast external control of the emission properties at the nanoscale is of great interest for chip-scale, tunable and efficient nanophotonics. Here we investigated the emission control of dipolar emitters coupled to a nanostructure made of an Au nanoantenna, and a thin vanadium dioxide (VO2) layer that changes from semiconductor to metallic state.
If the emitters are sandwiched between the nanoantenna and the VO2 layer, the enhancement and/or suppression of the nanostructure's magnetic dipole resonance enabled by the phase change behavior of the VO2 layer can provide a high contrast ratio of the emission efficiency. We show that a single nanoantenna can provide high magnetic field in the emission layer when VO2 is metallic, leading to high emission of the magnetic dipoles; this emission is then lowered when VO2 switches back to semiconductor. We finally optimized the contrast ratio by considering different orientation, distribution and nature of the dipoles, as well as the influence of a periodic Au nanoantenna pattern. As an example of a possible application, the design is optimized for the active control of an Er3+ doped SiO2 emission layer. The combination of the emission efficiency increase due to the plasmonic nanoantenna resonances and the ultra-fast contrast control due to the phase-changing medium can have important applications in tunable efficient light sources and their nanoscale integration. One of the main goals of modern nanophotonics is the use of optical nanocomponents as building blocks of integrated circuits for applications in future computers and information systems. However, there is still a lack of chip-scale light sources and optical amplifiers with large bandwidths and high energy efficiencies, as well as active components that allow for efficient high-speed light modulation. Rare earth-based approaches offer unique advantages with respect to semiconductor optical components such as long carrier lifetimes, and longer quantum coherence times [1]. For example, an erbium-doped waveguide amplifier has been shown to work at a modulation speed higher than 170 Gbit/s [2]. Moreover, erbium-based materials are preferred to semiconductors in quantum information systems due to their emission at telecommunication wavelengths with sharp spectral features. 
However, even though they have high quantum efficiencies, their excitation cross-sections are very low, while for nanoscale integration, erbium-based materials with much higher gain are required. Attempts to increase the optical gain by increasing the doping density encountered other negative effects, such as concentration-related quenching due to clustering or interaction of erbium atoms when above the critical density [3]. Instead, the emission efficiency can be enhanced by putting the emitters in the near-field of nanostructures designed to have resonances that match the wavelength of excitation or emission [4–9]. Another issue is to combine efficiency and modulation speed by dynamical manipulation of the local density of states. As the process of light emission depends both on the optical environment and on the intrinsic properties of the emitter, a fast modulation of the environment properties can produce a fast modulation of the local density of states and field localization resulting in a fast (sub-lifetime) emission modulation [10]. In this work, we combined the enhancement of a dipolar emitter emission due to the proximity of a resonant nanostructure, and the possibility to modulate it by means of a thin layer of a phase change material (PCM). PCMs have been used as active subwavelength elements that can switch between phases that differ in electric and optical properties. The phase change results in a modulation of amplitude or phase of transmission or reflection over nanoscale propagation lengths, and it is compatible with fast optical systems [11–18]. Vanadium dioxide (VO2) is a promising candidate for nanoscale modulation since it shows dramatic contrast in the complex refractive index as it undergoes a structural phase transition from monoclinic (semiconductor) to rutile (metallic) phase at ∼68°C [15] (Figs. 1(a)–1(b)). 
The phase change can be induced thermally, electrically or optically, and it has been shown to provide tuning of the resonances of the nanostructured materials [16–18]. Moreover, it has been shown that optically induced phase change transition in VO2 can be achieved in the fs time scale making it a perfect candidate material for ultra-fast optical switching and modulation [19,20]. The structure under examination is shown in Fig. 1(c): a thin VO2 layer is deposited on a glass substrate, and covered by a thin spacer layer of silica, which is doped by luminescent ions; above the spacer, Au nanoantennas are added to provide the plasmonic resonant enhancement. Sandwiched magnetic dipoles feel strong resonance when VO2 is metallic (hot state) due to the strong magnetic field arising from the current loops between the Au nanoantenna and the VO2 layer; this resonance blue-shifts and decreases when VO2 is in the semiconductor (cold) state. We first studied the absorption properties and magnetic field confinement effects for a single Au nanoantenna – SiO2 – VO2 structure; the geometric parameters are chosen to set the maximum absorption at the 1540 nm emission line of Er3+ ions when VO2 is metallic. We then investigated the influence of the VO2 phase change on the emission of single dipoles in the layer under the nanoantenna, considering different positions, types (i.e. magnetic or electric dipole), and orientations. We show that the emitted far-field of resonant magnetic dipoles [21,22] follows the VO2 phase change contrast. We finally optimized the structure for the highest emission contrast and provide the sensitivity analysis of the geometric parameters' margins. This study is of particular importance due to the mixed nature of Er3+ transition at 1540 nm, where both electric and magnetic dipole components are of the same order of magnitude [23–25]: a resonant magnetic field can be used to enhance the corresponding transition, and switch it by means of PCM. Fig. 1. 
Complex refractive indices [10] of VO2 in (a) semiconductor (cold), and (b) metallic (hot) state. Insets show the principle of magnetic dipole emission control (low and high emission in cold and hot state, respectively). (c) Schematic of the investigated hybrid nanostructure: the Au nanoantenna is defined by its length L, width W and thickness t, and it lies above a multilayer structure, made of tem thick Er:SiO2, and tpc thick VO2 layers upon SiO2 substrate. (d,e) x-polarized plane-wave simulations for the two VO2 states; the parameters of the nanostructure are L = 340 nm, W = 50 nm, t = 40 nm, tem=50 nm, and tpc=60 nm: (d) absorption efficiency, and (e) magnetic field intensity at 1540 nm, monitored in xz cross-section, at y = 0 nm. 2. Plane-wave excitation A single Au nanorod acts as a nanoantenna, with an electric dipole resonance when the incident electric field is parallel to its long axis [26,27]. High field localization and high local density of states at the resonance can modify the emission rates of electric dipoles [28]. However, if the dielectric substrate is interrupted by a close metallic layer, another type of resonance can occur: a current loop formed between the nanoantenna and the metal below induces a high magnetic field in the dielectric, leading to a magnetic resonance [29,30]. Our hybrid structure, Fig. 1(c), has similar features when the emitting layer is thin enough to allow for the loop formation due to the induced "mirror" charge from the nanoantenna to the metallic layer, providing enhanced metal-insulator-metal (MIM) absorption when VO2 is heated to its metallic state; this resonant behavior vanishes when VO2 is cooled down to its semiconductor state. VO2 was recently proposed for the high contrast switching of the resonant absorption in MIM metamaterials [18,31–33]. 
In the present work, we used a commercial-grade simulator based on the 3D Finite Difference Time Domain (FDTD) method in Lumerical [34] to investigate the optical response in a system consisting of an Au nanoantenna of length L, width W and thickness t, standing upon a tem thick Er-doped SiO2 layer and a tpc thick VO2 layer on a silica substrate (see Appendix A). All the simulations consider two stable states of VO2, while the study of the temporal dynamics of its switching is out of the scope of this work. It is worth underlining that the induced heating that changes the phase of the VO2 layer can also be optical [10], by means of another "control" laser that is not resonant with the Er3+ excitation or emission (e.g. at 1064 nm). The geometric parameters can be chosen to give the resonance for the hot VO2 state at the 1540 nm emission of Er3+; we thus use plane-wave excitation with an x-polarized total-field-scattered-field (TFSF) source (see Appendix B) to find the resonance of the absorption cross-section σabs. The absorption efficiency is defined as η=σabs/σAu [35], where σAu=W·L is the nanoantenna footprint. Figure 1(d) shows the values of η for the two VO2 states, cold (blue line) and hot (red line), for L = 340 nm, W = 50 nm, t = 40 nm, tem=50 nm, and tpc=60 nm. Around 1540 nm there is a strong resonance for the hot state; however, one should note that a lower, blue-shifted resonance still exists in the cold state, even though there is no metal to form the current loop. This is due to the fact that in the cold state, VO2 is a high refractive index semiconductor in this spectral range (Fig. 1(a)), providing the magnetic response due to the displacement currents. In Fig. 1(e) we show the magnetic field intensity distribution at 1540 nm determined by collecting the field from an xz cross-section monitor: as expected, a much higher magnetic field confinement and enhancement are present in the hot state.
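The normalization η = σabs/σAu and the location of the resonance can be sketched as follows; the spectrum below is a placeholder Gaussian, not simulation output, but the footprint W·L = 50 nm × 340 nm is taken from the text:

```python
import numpy as np

# Nanoantenna footprint from the geometry in the text: W = 50 nm, L = 340 nm.
W_nm, L_nm = 50.0, 340.0
sigma_au = W_nm * L_nm  # footprint area sigma_Au = W*L, in nm^2

# Hypothetical absorption cross-section spectrum (nm^2) on a wavelength grid,
# peaked at the 1540 nm Er3+ emission line for illustration only.
wavelengths = np.linspace(800.0, 2000.0, 601)  # nm
sigma_abs = 40000.0 * np.exp(-((wavelengths - 1540.0) / 120.0) ** 2)

eta = sigma_abs / sigma_au                      # absorption efficiency spectrum
peak = wavelengths[np.argmax(eta)]
print(f"peak absorption efficiency {eta.max():.2f} at {peak:.0f} nm")
```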
As the dipolar emitters are to be inserted in the region under the nanoantenna, we studied the magnetic field enhancement in the center of the simulation region, at Δz=-10 nm. In Fig. 2(a) we show the spectra of the magnetic field intensity, normalized to the one of the incident plane-wave (without the nanostructure) H0: for both states the magnetic enhancement spectrally coincides with the absorption resonant behavior previously shown. For x-polarized excitation the magnetic field is y-oriented; in Fig. 2(b) the yz cross-section (x = 0 nm) shows the enhancement of this component at 1540 nm, which reaches 100 under the nanoantenna in the hot state. Fig. 2. (a) Spectral dependence of the magnetic field intensity normalized to the intensity of the incident magnetic field H0, at the point Δx=Δy = 0 nm, Δz=-10 nm, for the two VO2 states. (b) Normalized Hy intensity for the two VO2 states at 1540 nm, monitored in yz cross-section, at x = 0 nm. 3. Dipole excitation Magnetic dipole emission enhancement of rare-earth metal ions has been investigated in resonant plasmonic [36–40] and dielectric [41,42] nanostructures. Thus, we further investigated how the enhancement due to the magnetic resonance influences a perfect dipole (without nonradiative decays). To do this, first we placed a broadband magnetic dipole source centrally, at the distance Δz=-10 nm under the nanoantenna, and monitored the power emitted in the positive z half-space, as illustrated in Fig. 3(a); θ and φ are the emission cone half-angle, and azimuthal angle, respectively. In this way, we investigated how the total power spectrum changes with the change of the VO2 phase; the results have been calculated as T·Psource (See Appendix C). As expected, only the y-oriented dipoles spectrally follow the resonances seen in the absorption and magnetic field, Fig. 3(b). 
Since the emission contrast between the two states arises from the sum of the three dipole orientations, the emission of x-oriented dipoles will eventually lower the overall contrast, while the emission of z-oriented dipoles is negligible. It is worth noting that if we change the dipole type from magnetic to electric, the resonance is expected for the dipole orientation along the electric field, i.e. in x direction: indeed, we note some resonant behavior in both states for x-oriented electric dipoles, Fig. 3(c). Fig. 3. (a) Sketch of the dipole excitation simulations. θ and φ are the emission cone half-angle, and azimuthal angle, respectively. (b-c) Spectra of the total power emitted to z+ far-field for the two VO2 states, for different dipole orientations of a dipole located at the center (Δx=Δy = 0 nm), at Δz=-10 nm, for (b) magnetic dipole; (c) electric dipole. The strongest emission max(Pem,m,y) for y-oriented magnetic dipoles is at the wavelength of the highest magnetic field in the hot state, i.e. at 1540 nm. In order to compare this optimal emission to the cases where the dipole is placed farther from the nanoantenna center, we fixed the emission wavelength at 1540 nm, and monitored the emitted power as a function of the dipole's displacement. Figure 4(a) shows results for the two states as a function of |Δx| or |Δy|, keeping |Δy|=0 nm or |Δx|=0 nm, respectively. Figure 4(b) shows the dependence on |Δz| while keeping |Δx|=|Δy|=0 nm. The normalized power follows the spatial dependence of the magnetic field shown in Figs. 1(e) and 2(b). Fig. 4. Normalized total power emitted at 1540 nm for a y-oriented dipole, positioned (a) at Δz=-10 nm, as a function of the distance from the center |Δx| or |Δy|, and (b) at Δx=Δy = 0 nm, as a function of |Δz|. The power is normalized to the maximum power emitted for the resonant dipole at 1540 nm (y-oriented dipole and the hot VO2 state). 
We then addressed the change in the far-field distribution of the emission by calculating it for a magnetic dipole at 1540 nm, placing a near-field monitor above the nanoantenna (see Appendix D). Figure 5(a) shows that a y-oriented dipole emits most efficiently at 1540 nm for the hot state. In order to account for the random orientation of the dipoles, in Fig. 5(b) we show the average far-field distribution, calculated by summing and averaging the far-field power from three simulations with x, y and z-oriented dipoles radiating at 1540 nm (the dipole is placed centrally at |Δx|=|Δy|=0 nm and Δz=-10 nm). A notably high contrast between the states was demonstrated, especially close to normal incidence. However, this contrast will be effectively lowered once the dipole is positioned farther from the center (where the designed structure enhances the magnetic response). Possible solutions to prevent this decrease are to pattern the distribution of the emitters in the SiO2 spacer volume under the nanoantenna, or to simultaneously use excitation enhancement [43] to tailor the magnetic field, which can increase the resonant magnetic contribution of the dipoles away from the center. In any case, for the out-coupling efficiency of the design, one should consider the emitters deposited over the SiO2 volume, and calculate the contrast between the two states only after averaging over many possible positions. Therefore, in the following we investigated a magnetic dipole distribution below periodically patterned Au nanoantennas, providing a path to metamaterial-PCM governed tunable emission. Fig. 5. (a) Distribution of power emitted to the z+ far-field, for a magnetic dipole at 1540 nm, in the two VO2 states and three different dipole orientations. The dipole is located at the center, at Δz=-10 nm. All the maps are normalized to the maximum value (power emitted to far-field θ=φ=0 for the y-oriented dipole in the hot state).
(b) Far-field emission distribution above the structure, averaged over the three orientations of the magnetic dipole which is positioned centrally at Δz=-10 nm, and emits at 1540 nm. 4. Periodic structure Coupling of the emitters with periodic nanostructures has recently proven to provide emission rate modification and quantum efficiency enhancement [9]. Here we investigate the coupling of a magnetic emitter with a metamaterial, comprising a 2D array of Au nanoantennas with periodicity p in both x- and y-directions, as shown in Fig. 6(a); the geometric parameters of the nanoantennas and the layers below are kept the same as in the previous section. In Appendix E we perform plane-wave simulations with periodic boundary conditions (PBCs) in the xy plane to explore the magnetic resonance in the metamaterial; as it remains governed by the behavior of the single nanoantenna, we further choose periodicities with higher absorption contrast between the two VO2 states. Next, we study the emission response of dipoles under the metamaterial. In the dipole excitation case, spontaneous emission cannot be modelled by applying PBCs as it would lead to incorrectly induced coherence of the sources. Instead, we replicate the unit cells in the FDTD volume already defined in Appendix D, which now includes more than 100 unit cells. We first consider magnetic dipoles homogeneously positioned in a unit cell at Δz=-10 nm, emitting at 1540 nm. The emitted far-field is averaged over the three dipole orientations and 72 dipole positions. The symmetry of the structure then reduces this number to 16, as indicated by the green dots in Fig. 6(a); for the dipoles shared by 2 and 4 unit cells, the emitted power is scaled by factors 0.5 and 0.25, respectively. Figure 6(b) shows the far-field emitted power for p = 900 nm and p = 800 nm, normalized to the maximum value for the hot state (i.e.
for θ=φ=0) for a homogeneous distribution of emitters in the unit cell, for the two VO2 states. The period decrease enhances the coupling between adjacent nanoantennas, especially in the x-direction; this effectively leads to a lower directivity in the hot state, and to an even lower overall contrast at θ=0. More importantly, the emitted far-field distribution differs for the two states, but the overall contrast is dramatically lowered with respect to the one of the centrally positioned dipoles in Fig. 5(b). This is due to the detrimental contribution of dipoles positioned farther from the center; namely, Fig. 4(a) shows that the dipoles positioned at |Δy|>60 nm do emit less due to a lower magnetic enhancement, but for the two states there is almost no contrast, and the same applies for the dipoles at |Δx|>160 nm. Such low far-field contrast is obtained for the dipoles positioned at the borders and corners of the structure; in Fig. 6(c) the far-field emission of such dipoles for the two states is shown. Fig. 6. (a) Simulation sketch for a periodic structure, with dipoles positioned in a unit cell (green dots). (b) Far-field averaged over the three dipole orientations with homogeneous distribution in the unit cell at Δz=-10 nm, for p = 900 nm, and p = 800 nm. (c) Far-field response of the dipoles positioned away from the center for both VO2 states. The randomly oriented magnetic dipole is positioned at (left) the border and (right) the corner of the unit cell. (d) Far-field averaged over the three dipole orientations and positions in the unit cell (xy plane) at Δz=-10 nm, with dipoles only under the nanoantennas (patterned distribution). All maps are normalized to the hot state maximum. In order to optimize the far-field contrast, in Fig. 6(d) only contributions from the emitters under the nanoantenna are averaged (|Δx|<L/2 and |Δy|<W/2).
This approach greatly enhances the difference between the far-field distribution of the two states, enabling a strong emission and its directivity around θ=0 in the hot state, which vanishes when switching to the cold state. Therefore, the smart positioning of the emitting material under the nanoantennas can provide the overlap of the dipole distribution with the points of high magnetic field enhancement in the hot state (Figs. 1(e) and 2(b)), and high magnetic field contrast between the two states; this in turn leads to a better far-field emission contrast. Such structures can be fabricated by patterning the dipole distribution under the nanoantennas, for example, by ion implantation through masks before the Au nanoantenna metamaterial fabrication step. In the following, we quantified the far-field emission contrast between the two states, and investigated its dependence on periodicity and far-field collection angle. The radiative decay rate enhancement γrad can be calculated as the ratio of the total power radiated to the far-field Prad and the power that would be radiated from the same dipole in the homogeneous medium P0. The quantum efficiency QE, instead, is usually defined as the ratio of Prad to the power radiated by the dipole. In the investigated structure, however, both γrad and QE are greater in the cold state as a large part of the power is transmitted through VO2 in the semiconductor state, while metallic VO2 introduces more absorption losses.
Thus, we define the far-field directional efficiency as: (1)$$\gamma_{ff} = \frac{P_\theta}{P_{rad}} = \frac{\sum_{|\Delta x|,|\Delta y|} \int_{ff(\theta)} P \cdot ds(\theta)}{\sum_{|\Delta x|,|\Delta y|} \left( \int_{z^+} P \cdot ds + \int_{z^-} P \cdot ds \right)},$$ which represents the part of the total radiated power Prad which is radiated into a cone of semi-aperture θ in the z+ direction (red dashed circle in Fig. 3(a)). The summation is done for 15 different positions under the nanoantenna, and for three magnetic dipole orientations, at Δz=-10 nm. In Fig. 7 we show the γff contrast (γff,hot/γff,cold) for the two states as a function of the periodicity p and collection angle θ. As expected, γff is always higher for the hot state, and, in accordance with the emission directionality of the hot state, the contrast decreases with increasing θ, except for the lowest investigated p = 800 nm, in agreement with Fig. 6(b). This is very likely due to the stronger coupling between neighboring nanoantennas for lower p, which leads to lower directionality, inverts the γff,hot/γff,cold θ dependence and finally leads to a much lower γff contrast. For p > 800 nm, there is a high directionality; for example, if a low numerical aperture (NA) is chosen for the signal collection, i.e. θ=10°, then for p = 1000 nm γff,hot/γff,cold>9. The contrast is lowered for p = 950 nm due to the increased γff,cold for this periodicity. Therefore, VO2 can be used to modulate the far-field efficiency of the designed metamaterial, and the contrast can be optimized by means of periodicity tuning. It should be noted that, for all θ, VO2 switching from the hot to the cold state leads to a γff decrease both because of the lower power radiated around θ=0°, and because of the increase of Prad, as the power emitted to the z− far-field is higher in the cold state.
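Given far-field power density sampled on an angular grid, the far-field directional efficiency of Eq. (1) reduces to summing the power inside the collection cone and dividing by the total radiated power. A sketch under assumed array shapes and synthetic test patterns (this is not the Lumerical far-field projection API):

```python
import numpy as np

def gamma_ff(P_up, P_down, theta, phi, theta_max_deg):
    """Fraction of the total radiated power (z+ and z- hemispheres) emitted
    into a z+ cone of half-angle theta_max. P_up, P_down: power densities
    [arb./sr] on a (theta, phi) grid covering one hemisphere each."""
    th = theta[:, None]
    # Solid-angle element sin(theta) dtheta dphi on the uniform grid.
    dS = np.sin(th) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    cone = th <= np.deg2rad(theta_max_deg)
    P_cone = np.sum(P_up * dS * cone)
    P_rad = np.sum(P_up * dS) + np.sum(P_down * dS)
    return P_cone / P_rad

theta = np.linspace(0.0, np.pi / 2, 91)            # polar angle, z+ hemisphere
phi = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)
# Synthetic patterns: a directive lobe around theta=0 vs a diffuse emitter.
directive = np.exp(-(theta[:, None] / 0.15) ** 2) * np.ones((91, 180))
diffuse = np.ones((91, 180))
# A directive pattern puts a larger share of its power into a 10-degree cone:
assert gamma_ff(directive, 0 * directive, theta, phi, 10) > \
       gamma_ff(diffuse, 0 * diffuse, theta, phi, 10)
```

The γff,hot/γff,cold contrast of Fig. 7 would then be the ratio of two such evaluations, one per VO2 state.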
Fig. 7. Far-field directional efficiency ratio γff,hot/γff,cold for the two states as a function of p and collection angle θ for a distribution of randomly oriented magnetic dipoles positioned under the nanoantennas, at Δz=-10 nm. As this design was optimized for the z+ far-field contrast, we further defined figures of merit of the contrast which include the emitted power that would be collected in a real experiment with collection angle θ; in Table 1 we calculated the normalized modulation depth (MD) efficiency and the contrast ratio (CR) as: (2)$$MD = 100 \cdot \frac{P_{\theta,hot} - P_{\theta,cold}}{P_{\theta,hot} + P_{\theta,cold}}, \qquad CR = \frac{P_{\theta,hot}}{P_{\theta,cold}}.$$ Again, for p=800 nm, CR and MD are the lowest for all collection angles; the nanostructure's contrast was optimized for the single nanoantenna response, so in this design lower p values should be avoided. For high NA, MD and CR drop because of the two lobes present in the cold state (Fig. 6(b)). Finally, the metamaterial with p=850 nm gives the highest directional contrast ratio for θ=10°: MD∼69, CR∼5.5. Table 1. Modulation depth (MD) and contrast ratio (CR) for different p and collection angles θ. In addition to the dipole patterning in the xy plane, we investigated the influence of positioning the set of dipoles in the z-direction; the calculations have been done for p=900 nm, which provides a high normalized absorption modulation depth (see Appendix E). In Table 2 these values are calculated for the same collection angles as before, and only a slight decrease is noted moving the dipole plane deeper in the SiO2 layer. Total MD and CR are then calculated by summing contributions from all randomly oriented dipoles where |Δx|≤L/2, |Δy|≤W/2, and –40 nm≤Δz≤-10 nm. Even though a deeper distribution emits less power (as expected from Fig. 4(b)), this does not significantly influence the overall contrast, Fig. 8.
Therefore, a distribution of dipoles in the range -40 nm<Δz<-10 nm is expected to provide a similar far-field contrast between the two states. Fig. 8. Far-field power averaged over the three orientations and dipole positions for which |Δx|≤L/2, |Δy|≤W/2, and –40 nm≤Δz≤-10 nm, for the nanoantenna metamaterial with p = 900 nm at 1540 nm. Both maps are normalized to the hot state maximum (θ=φ=0). Table 2. Modulation depth (MD) and contrast ratio (CR) for the set of dipoles under the nanoantenna for p = 900 nm, at different Δz and collection angles θ. Finally, we performed the sensitivity analysis of the design by simulating the resonant magnetic field contrast as a function of Gaussian distributions of the parameters in the (L-W-t-tpc) parameter space (see Appendix F). As the resonances are spectrally wide, we can conclude that the contrast ratio is rather stable with respect to changes in the geometrical parameters of the nanostructure and thus the designed device is expected to have a good stability with respect to fabrication tolerances. In the present work we demonstrated that the combination of a phase-change material and a plasmonic nanostructure can be effectively used to externally modulate the emission of magnetic dipoles. Particularly, we have shown that a hybrid nanostructure consisting of magnetic dipoles sandwiched between an Au nanoantenna array and a multilayer structure containing VO2 can lead to efficient modulation of the Er3+ emission at 1540 nm. This is due to the control of intensity and spectral position of the magnetic resonance of the overall structure, which change when VO2 switches from semiconductor to metallic state upon an external stimulus (which can be thermal, electrical or optical). In the present work, the nanostructure is optimized to give enhanced magnetic field in the layer between the nanoantenna and VO2 at 1540 nm when VO2 is metallic.
We showed that the high magnetic field contrast between the two states of the VO2 layer leads to a similar emission contrast of the magnetic dipoles, thus controlling the difference in the far-field collected signal between the two states. Moreover, the arrangement of the Au nanoantennas in a 2D array and the periodic patterning of the distribution of emitters lead to high values of modulation depth and contrast ratio between the two states for magnetic emission of Er3+; these values can be optimized by choosing the right periodicity and collection angle. We believe that such an approach can be of great importance for the realization of efficient light sources at the nanoscale, where the optically induced phase transition of VO2 can enable the ultrafast modulation. Appendix A: Simulation and material properties In order to simulate the absorption and emission properties of the investigated nanostructure, a single Au nanoantenna, or a 2D periodic array of nanoantennas, is designed upon two thin layers (Er:SiO2 and VO2) and a semi-infinite substrate, all of which are infinite in the xy plane. The simulation region is defined by putting perfectly matched layers (PMLs) in all six directions. PMLs are at least half the maximum wavelength away from the nanoantenna and the lowest VO2 z coordinate to prevent reflection of the evanescent fields. We assumed both the SiO2 substrate and the Er:SiO2 emitting layer to be lossless (n = 1.46), which is justified for doping with low concentrations of photoluminescent ions. The optical properties of Au are taken from the Lumerical database (Johnson & Christy), while for VO2 we used recently measured complex refractive index values [10], fitted in the wavelength range 800-2000 nm. Appendix B: Plane-wave single nanoantenna absorption In order to discriminate the total-field region from the one with the scattered field, a total-field scattered-field (TFSF) source is used to illuminate the nanoantenna along its long axis, i.e. the x-direction.
The absorption cross-section is then calculated by surrounding the nanoantenna with a box in the total-field region, "inside" the TFSF source. This built-in box in Lumerical uses six 2D monitors to calculate the power flow in the box across the spectral range. The parameters of the final design were chosen to provide high contrast between the two VO2 states at 1540 nm. Appendix C: Emitted power of dipole excitation An oscillating dipole acts as a point source of electromagnetic fields in FDTD simulations; the base amplitude is defined to give 1 fW radiated power in the simulation domain. The modulation of the emission by means of VO2 is investigated by considering the power emitted by the dipole in the positive z direction, as could be of interest in a future experiment (in the opposite direction VO2 losses are detrimental for the dipole efficiency). A perfect dipole source is put in the region under the nanoantenna, and the spectral response of the structure is monitored in the 800-2000 nm range. The simulation region is again defined by six PMLs. Below the upper z PML boundary, a transmission monitor is placed to measure the total radiation in the same range; the simulation parameters are fixed for the two VO2 states. Here special attention must be paid to the normalization of the power. Namely, the results of the transmission function T are by default normalized to the source power that would be emitted by the same dipole in a homogeneous medium. However, the actual radiated power (dipole power) strongly depends on the environment, hence the real transmitted percentage in a given direction needs to be normalized as T*Psource/Pdipole. Here we are interested in how the total power spectrum changes with the change of the VO2 phase, hence we present results calculated as T*Psource[A.U.].
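The renormalization described above can be written out explicitly (a sketch; the numeric values are placeholders, and T, Psource, Pdipole stand for the quantities named in the text):

```python
# In FDTD dipole simulations, the monitor transmission T is by default
# normalized to Psource, the power the same dipole would emit in a
# homogeneous medium. The power actually radiated in the structured
# environment (Pdipole) differs, so the true fraction of the radiated
# power crossing the monitor is T * Psource / Pdipole.
def transmitted_fraction(T: float, P_source: float, P_dipole: float) -> float:
    """Fraction of the actually radiated power crossing the monitor."""
    return T * P_source / P_dipole

# Placeholder numbers (fW), for illustration only:
T, P_source, P_dipole = 0.4, 1.0, 2.5
print(transmitted_fraction(T, P_source, P_dipole))
```

When comparing the two VO2 states spectrally, the text instead plots T·Psource in arbitrary units, which avoids dividing by a state-dependent Pdipole.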
Appendix D: Far-field of dipole excitation

For the far-field distribution calculations, a frequency-domain field-profile monitor placed 10 nm above the nanoantenna collects the near-field data and decomposes it into a basis of plane waves propagating at different angles; the result is then expressed as the far-field radiation intensity in the angles (θ, φ). For the far-field efficiency calculations, another field-profile monitor is placed 10 nm below the lowest z boundary of the VO2 layer. The total power emitted to the far-field is calculated by integrating the far-field power projected from these two monitors. The size of the FDTD domain is set to 10 µm x 10 µm, which is large enough that there is no electric field at the borders of the near-field monitor. In the far-field plots, white circles mark θ in 10° steps, while white lines mark φ in 30° steps.

Appendix E: Plane-wave nanoantenna metamaterial absorption

Here we use an x-polarized plane wave under normal incidence, Fig. 9(a), and define periodic boundary conditions (PBCs) in the xy plane, while keeping PMLs in the z direction. We calculate the total absorption by surrounding the nanoantennas with a box of monitors, calculating the absorbed power density, and integrating it over the volume. This way one obtains the total power absorbed in that part of the structure, normalized to the total power of the source; the source has an electric field amplitude of 1 V/m. In Fig. 9(b) we see the same localized resonance at around 1540 nm (its position does not depend on p). In Fig. 9(c), in the cold state the resonance linearly red-shifts with p, suggesting a surface-lattice nature of the mode. Finally, in Fig. 9(d) we investigate the normalized absorption modulation depth, defined as: (E1) $$MD_A = 100 \cdot \frac{A_{hot} - A_{cold}}{A_{hot} + A_{cold}}.$$ This figure of merit shows an enhancement with respect to the nonperiodic structure (from Fig. 1(d) in the main manuscript, where this value is ∼53).
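Equation (E1) is straightforward to evaluate from a pair of simulated absorption spectra; a small sketch (the absorption values below are made up for illustration, not taken from Fig. 9):

```python
import numpy as np

def modulation_depth(a_hot, a_cold):
    """Normalized absorption modulation depth of Eq. (E1), in percent."""
    a_hot = np.asarray(a_hot, dtype=float)
    a_cold = np.asarray(a_cold, dtype=float)
    return 100.0 * (a_hot - a_cold) / (a_hot + a_cold)

# Scalar example: A_hot = 0.9, A_cold = 0.3 gives MD_A = 50%.
md_scalar = modulation_depth(0.9, 0.3)

# The same function applied pointwise to two (made-up) spectra:
a_hot = np.array([0.85, 0.90, 0.80])
a_cold = np.array([0.30, 0.25, 0.35])
md_spectrum = modulation_depth(a_hot, a_cold)
```

The same expression, with the collected powers in place of the absorptions, gives the emission figure of merit MD used elsewhere in the paper.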
This can be understood in terms of an effective surface for the magnetic resonance: in the cold state, for periods greater than 800 nm the localized contribution due to the magnetic resonance almost disappears, as the structure has no metallic layer below and the volume of the nanoantenna relative to the unit cell becomes too small to support it. This is not the case for the hot state, where the magnetic resonance still exists at larger periods because of the metallic layer below. Therefore, in the main manuscript we investigated periods in the range 800-1000 nm.

Fig. 9. (a) Sketch of the periodic structure. (b) Absorption dependence on the period when VO2 is hot and (c) when VO2 is cold. (d) Normalized absorption modulation depth dependence on the period.

Appendix F: Sensitivity analysis

In order to evaluate the sensitivity of the design with respect to changes in the geometric parameters of the system, we suppose that the sensitivity of the emission will follow that of the absorption, and perform 800 simulations in the (L-W-t-tpc) parameter space for the two states, with p = 900 nm. We take the following Gaussian distributions: L = 340 ± 5 nm, tpc = 60 ± 5 nm, t = 40 ± 2 nm, and W = 50 ± 2 nm. From Fig. 10 we confirm that the normalized absorption MD always lies around 75%, as expected from Fig. 9(d).

Fig. 10. Scatter plot of MD_A as a function of manufacturing tolerances of the parameter L. The device parameters are taken as Gaussian distributions defined as: L = 340 ± 5 nm, tpc = 60 ± 5 nm, t = 40 ± 2 nm, and W = 50 ± 2 nm.

1. H. Sun, L. Yin, Z. Liu, Y. Zheng, F. Fan, S. Zhao, X. Feng, T. Li, and C. Z. Ning, "Giant optical gain in a single-crystal erbium chloride silicate nanowire," Nat. Photonics 11(9), 589–593 (2017). [CrossRef] 2. J. Bradley, M. Costa e Silva, M. Gay, L. Bramerie, A. Driessen, K. Wörhoff, J. Simon, and M. Pollnau, "170 GBit/s transmission in an erbium-doped waveguide amplifier on silicon," Opt. Express 17(24), 22201–22208 (2009).
[CrossRef] 3. E. Snoeks, P. G. Kik, and A. Polman, "Concentration quenching in erbium implanted alkali silicate glasses," Opt. Mater. 5(3), 159–167 (1996). [CrossRef] 4. H. Mertens and A. Polman, "Plasmon-enhanced erbium luminescence," Appl. Phys. Lett. 89(21), 211107 (2006). [CrossRef] 5. Y. Gong, S. Yerci, R. Li, L. Negro, and J. Vučković, "Enhanced light emission from erbium doped silicon nitride in plasmonic metal-insulator-metal structures," Opt. Express 17(23), 20642–20650 (2009). [CrossRef] 6. T. Cesca, B. Kalinic, N. Michieli, C. Maurizio, A. Trapananti, C. Scian, G. Battaglin, P. Mazzoldi, and G. Mattei, "Au–Ag nanoalloy molecule-like clusters for enhanced quantum efficiency emission of Er3+ ions in silica," Phys. Chem. Chem. Phys. 17(42), 28262–28269 (2015). [CrossRef] 7. T. Cesca, B. Kalinic, C. Maurizio, C. Scian, G. Battaglin, P. Mazzoldi, and G. Mattei, "Interatomic coupling of Au molecular clusters and Er3+ ions in silica," ACS Photonics 2(1), 96–104 (2015). [CrossRef] 8. B. Kalinic, T. Cesca, C. Scian, N. Michieli, I. G. Balasa, E. Trave, and G. Mattei, "Emission efficiency enhancement of Er3+ ions in silica by near-field coupling with plasmonic and pre-plasmonic nanostructures," Phys. Status Solidi A 215(3), 1700437 (2018). [CrossRef] 9. N. Michieli, B. Kalinic, C. Scian, T. Cesca, and G. Mattei, "Emission rate modification and quantum efficiency enhancement of Er3+ emitters by near-field coupling with nanohole arrays," ACS Photonics 5(6), 2189–2199 (2018). [CrossRef] 10. S. Cueff, D. Li, Y. Zhou, F. J. Wong, J. A. Kurvits, S. Ramanathan, and R. Zia, "Dynamic control of light emission faster than the lifetime limit using VO2 phase-change," Nat. Commun. 6(1), 8636 (2015). [CrossRef] 11. B. Gholipour, J. Zhang, K. F. MacDonald, D. W. Hewak, and N. I. Zheludev, "An all-optical, non-volatile, bidirectional, phase-change meta-switch," Adv. Mater. 25(22), 3050–3054 (2013). [CrossRef] 12. M. Rudé, V. Mkhitaryan, A. E. Cetin, T. A. Miller, A. Carrilero, S. 
Wall, F. J. García de Abajo, H. Altug, and V. Pruneri, "Ultrafast broadband tuning of resonant optical nanostructures using phase change materials," Adv. Opt. Mater. 4(7), 1060–1066 (2016). [CrossRef] 13. E. Petronijevic and C. Sibilia, "All-optical tuning of EIT-like dielectric metasurfaces by means of chalcogenide phase change materials," Opt. Express 24(26), 30411–30420 (2016). [CrossRef] 14. E. Petronijević, G. Leahu, V. Di Meo, A. Crescitelli, P. Dardano, E. Esposito, G. Coppola, I. Rendina, M. Miritello, M. G. Grimaldi, V. Torrisi, G. Compagnini, and C. Sibilia, "Near-infrared modulation by means of GeTe/SOI based metamaterial," Opt. Lett. 44(6), 1508–1511 (2019). [CrossRef] 15. M. Yang, Y. Yang, B. Hong, L. Wang, K. Hu, Y. Dong, H. Xu, H. Huang, J. Zhao, H. Chen, L. Song, H. Ju, J. Zhu, J. Bao, X. Li, Y. Gu, T. Yang, X. Gao, Z. Luo, and C. Gao, "Suppression of structural phase transition in VO2 by epitaxial strain in vicinity of metal-insulator transition," Sci. Rep. 6(1), 23119 (2016). [CrossRef] 16. M. Dicken, K. Aydin, I. Pryce, L. Sweatlock, E. Boyd, S. Walavalkar, J. Ma, and H. Atwater, "Frequency tunable near-infrared metamaterials based on VO2 phase transition," Opt. Express 17(20), 18330–18339 (2009). [CrossRef] 17. M. D. Goldflam, T. Driscoll, B. Chapler, O. Khatib, N. Marie Jokerst, S. Palit, D. R. Smith, B.-J. Kim, G. Seo, H.-T. Kim, M. Di Ventra, and D. N. Basov, "Reconfigurable gradient index using VO2 memory metamaterials," Appl. Phys. Lett. 99(4), 044103 (2011). [CrossRef] 18. J. Liang, X. Song, J. Li, K. Lan, and P. Li, "A visible-near infrared wavelength-tunable metamaterial absorber based on the structure of Au triangle arrays embedded in VO2 thin film," J. Alloys Compd. 708, 999–1007 (2017). [CrossRef] 19. K. Appavoo, B. Wang, N. F. Brady, M. Seo, J. Nag, R. P. Prasankumar, D. J. Hilton, S. T. Pantelides, and R. F. 
Haglund Jr., "Ultrafast phase transition via catastrophic phonon collapse driven by plasmonic hot-electron injection," Nano Lett. 14(3), 1127–1133 (2014). [CrossRef] 20. S. Lysenko, A. Rúa, V. Vikhnin, F. Fernández, and H. Liu, "Insulator-to-metal phase transition and recovery processes in VO2 thin films after femtosecond laser excitation," Phys. Rev. B 76(3), 035104 (2007). [CrossRef] 21. D. G. Baranov, R. S. Savelev, S. V. Li, A. E. Krasnok, and A. Alù, "Modifying magnetic dipole spontaneous emission with nanophotonic structures," Laser Photonics Rev. 11(3), 1600268 (2017). [CrossRef] 22. J. Li, N. Verellen, and P. V. Dorpe, "Enhancing magnetic dipole emission by a nano-doughnut-shaped silicon disk," ACS Photonics 4(8), 1893–1898 (2017). [CrossRef] 23. Q. Thommen and P. Mandel, "Left-handed properties of erbium-doped crystals," Opt. Lett. 31(12), 1803–1805 (2006). [CrossRef] 24. C. M. Dodson and R. Zia, "Magnetic dipole and electric quadrupole transitions in the trivalent lanthanide series: Calculated emission rates and oscillator strengths," Phys. Rev. B 86(12), 125102 (2012). [CrossRef] 25. B. Rolly, B. Bebey, S. Bidault, B. Stout, and N. Bonod, "Promoting magnetic dipolar transition in trivalent lanthanide ions with lossless Mie resonances," Phys. Rev. B 85(24), 245432 (2012). [CrossRef] 26. G. W. Bryant, F. J. García de Abajo, and J. Aizpurua, "Mapping the plasmon resonances of metallic nanoantennas," Nano Lett. 8(2), 631–636 (2008). [CrossRef] 27. J. Li, H. Guo, and Z. Li, "Microscopic and macroscopic manipulation of gold nanoantenna and its hybrid nanostructures," Photonics Res. 1(1), 28–41 (2013). [CrossRef] 28. L. Zhao, T. Ming, H. Chen, Y. Liang, and J. Wang, "Plasmon-induced modulation of the emission spectra of the fluorescent molecules near gold nanoantennas," Nanoscale 3(9), 3849–3859 (2011). [CrossRef] 29. X. Han, F. Zhao, K. He, Z. He, and Z. Zhang, "Near-perfect absorber of infrared radiation based on Au nanoantenna arrays," J. 
Nanophotonics 11(1), 016018 (2017). [CrossRef] 30. Y. Fan, C. Guo, Z. Zhu, W. Xu, F. Wu, X. Yuan, and S. Qin, "Monolayer-graphene-based broadband and wide-angle perfect absorption structures in the near infrared," Sci. Rep. 8(1), 13709 (2018). [CrossRef] 31. H. Kocer, S. Butun, B. Banar, K. Wang, S. Tongay, J. Wu, and K. Aydin, "Thermal tuning of infrared resonant absorbers based on hybrid gold-VO2 nanostructures," Appl. Phys. Lett. 106(16), 161104 (2015). [CrossRef] 32. J. Pradhan, S. Anantha Ramakrishna, B. Rajeswaran, A. Umarji, V. Achanta, A. Agarwal, and A. Ghosh, "High contrast switchability of VO2 based metamaterial absorbers with ITO ground plane," Opt. Express 25(8), 9116–9121 (2017). [CrossRef] 33. L. Yang, P. Zhou, T. Huang, G. Zhen, L. Zhang, L. Bi, X. Weng, J. Xie, and L. Deng, "Broadband thermal tunable infrared absorber based on the coupling between standing wave and magnetic resonance," Opt. Mater. Express 7(8), 2767–2776 (2017). [CrossRef] 34. Lumerical Solutions, Inc.http://www.lumerical.com/tcad-products/fdtd/ 35. G. Leahu, E. Petronijevic, A. Belardini, M. Centini, R. Li Voti, T. Hakkarainen, E. Koivusalo, M. Guina, and C. Sibilia, "Photo-acoustic spectroscopy revealing resonant absorption of self-assembled GaAs-based nanowires," Sci. Rep. 7(1), 2833 (2017). [CrossRef] 36. T. Feng, Y. Zhou, D. Liu, and J. Li, "Controlling magnetic dipole transition with magnetic plasmonic structures," Opt. Lett. 36(12), 2369–2371 (2011). [CrossRef] 37. S. M. Hein and H. Giessen, "Tailoring magnetic dipole emission with plasmonic split-ring resonators," Phys. Rev. Lett. 111(2), 026803 (2013). [CrossRef] 38. M. Mivelle, T. Grosjean, G. W. Burr, U. C. Fischer, and M. F. Garcia-Parajo, "Strong modification of magnetic dipole emission through diabolo nanoantennas," ACS Photonics 2(8), 1071–1076 (2015). [CrossRef] 39. R. Hussain, S. S. Kruk, C. E. Bonner, A. M. Noginov, I. Staude, Y. S. Kivshar, N. Noginova, and D. N. 
Neshev, "Enhancing Eu3+ magnetic dipole emission by resonant plasmonic nanostructures," Opt. Lett. 40(8), 1659–1662 (2015). [CrossRef] 40. B. Choi, M. Iwanaga, Y. Sugimoto, K. Sakoda, and H. T. Miyazaki, "Selective plasmonic enhancement of electric- and magnetic-dipole radiations of Er ions," Nano Lett. 16(8), 5191–5196 (2016). [CrossRef] 41. M. K. Schmidt, R. Esteban, J. J. Sáenz, I. Suárez-Lacalle, S. Mackowski, and J. Aizpurua, "Dielectric antennas - a suitable platform for controlling magnetic dipolar emission," Opt. Express 20(13), 13636–13650 (2012). [CrossRef] 42. A. Vaskin, S. Mashhadi, M. Steinert, K. E. Chong, D. Keene, S. Nanz, A. Abass, E. Rusak, D. Y. Choi, I. Fernandez-Corbaton, T. Pertsch, C. Rockstuhl, M. A. Noginov, Y. S. Kivshar, D. N. Neshev, N. Noginova, and I. Staude, "Manipulation of magnetic dipole emission from Eu3+ with Mie-resonant dielectric metasurfaces," Nano Lett. 19(2), 1015–1022 (2019). [CrossRef] 43. F. Kang, J. He, T. Sun, Z. Y. Bao, F. Wang, and D. Y. Lei, "Plasmonic dual-enhancement and precise color tuning of gold nanorod@SiO2 coupled core–shell–shell upconversion nanocrystals," Adv. Funct. Mater. 27(36), 1701842 (2017). [CrossRef]
(1) $$\gamma_{ff} = \frac{P_\theta}{P_{rad}} = \frac{\sum_{|\Delta x|,|\Delta y|} \int_{ff(\theta)} \mathbf{P} \cdot d\mathbf{s}(\theta)}{\sum_{|\Delta x|,|\Delta y|} \left( \int_{z+} \mathbf{P} \cdot d\mathbf{s} + \int_{z-} \mathbf{P} \cdot d\mathbf{s} \right)},$$

(2) $$MD = 100 \cdot \frac{P_{\theta,hot} - P_{\theta,cold}}{P_{\theta,hot} + P_{\theta,cold}}, \qquad CR = \frac{P_{\theta,hot}}{P_{\theta,cold}}.$$

(E1) $$MD_A = 100 \cdot \frac{A_{hot} - A_{cold}}{A_{hot} + A_{cold}}.$$

Modulation depth (MD) and contrast ratio (CR) for different p and collection angles θ (the first MD/CR pair is for θ = 10°):

p [nm]   MD [10°]  CR [10°]  MD   CR    MD   CR    MD   CR
800      13        1.3       21   1.5   27   1.7   28   1.8
850      69        5.5       67   5.1   64   4.6   51   3
1000     54        3.3       52   3.1   47   2.8   31   1.9

Modulation depth (MD) and contrast ratio (CR) for the set of dipoles under the nanoantenna for p = 900 nm, at different Δz and collection angles θ:

Δz [nm]  MD   CR    MD   CR    MD   CR    MD   CR
−10      56   3.5   48   2.9   40   2.3   29   1.8
−30      50   3     45   2.6   39   2.3   29   1.8
total    52   3.2   47   2.7   39   2.3   29   1.8
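The collection efficiency γ_ff of Eq. (1) amounts to the ratio between the far-field power collected within a cone of half-angle θ and the total radiated power. A minimal numerical sketch of such an angle-limited integration on a (θ, φ) grid, using a made-up radiation pattern in place of the projected FDTD data and, for simplicity, only the upward hemisphere in the denominator:

```python
import numpy as np

# Angular grid over the upper hemisphere (radians).
theta = np.linspace(0.0, np.pi / 2, 91)
phi = np.linspace(0.0, 2.0 * np.pi, 181)
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Made-up far-field power density (W/sr): a lobe peaked at normal emission.
p = np.cos(TH) ** 2

# Solid-angle element for the numerical integration.
dOmega = np.sin(TH) * np.gradient(theta)[:, None] * np.gradient(phi)[None, :]

P_total = np.sum(p * dOmega)           # power radiated into the upper hemisphere
mask = TH <= np.deg2rad(10.0)          # collection cone, θ ≤ 10°
P_cone = np.sum(p[mask] * dOmega[mask])

gamma_ff = P_cone / P_total            # collected fraction
```

For this cos²θ test lobe, the analytic value is 1 − cos³(10°) ≈ 0.045, which the grid sum reproduces; in the paper's workflow the pattern would instead come from the two far-field projection monitors of Appendix D.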
Volume 82, Numbers 3-4, 2016

Béla Szőkefalvi-Nagy Medal 2016

Homomorphisms and principal congruences of bounded lattices I. Isotone maps of principal congruences G. Grätzer Abstract. Two years ago, I characterized the order $\operatorname{Princ} L$ of principal congruences of a bounded lattice $L$ as a bounded order. If $K$ and $L$ are bounded lattices and $\varphi$ is a $\{0,1\}$-homomorphism of $K$ into $L$, then there is a natural isotone $\{0,1\}$-map $\operatorname{Princ} \varphi$ from $\operatorname{Princ} K$ into $\operatorname{Princ} L$. We prove the converse: for bounded orders $P$ and $Q$ and an isotone $\{0,1\}$-map $\psi$ of $P$ into $Q$, we represent $P$ and $Q$ as $\operatorname{Princ} K$ and $\operatorname{Princ} L$ for bounded lattices $K$ and $L$ with a $\{0,1\}$-homomorphism $\varphi$ of $K$ into $L$, so that $\psi$ is represented as $\operatorname{Princ} \varphi$. DOI: 10.14232/actasm-015-056-y AMS Subject Classification (1991): 06B10 Keyword(s): bounded lattice, congruence, principal, order Received July 20, 2015, and in revised form September 18, 2015. (Registered under 56/2015.)

Lattices embeddable in three-generated lattices Gábor Czédli Abstract. We prove that every finite lattice $L$ can be embedded in a three-generated finite lattice $K$. We also prove that every algebraic lattice with accessible cardinality is a complete sublattice of an appropriate algebraic lattice $K$ such that $K$ is completely generated by three elements. Note that ZFC has a model in which all cardinal numbers are accessible. Our results strengthen P. Crawley and R. A. Dean's 1959 results by adding finiteness, algebraicity, and completeness. DOI: 10.14232/actasm-015-586-2 AMS Subject Classification (1991): 06B99, 06B15 Keyword(s): three-generated lattice, equivalence lattice, partition lattice, complete lattice embedding, inaccessible cardinal Received December 12, 2015, and in final form September 19, 2016. (Registered under 86/2015.)

An implicational logic for orthomodular lattices Ivan Chajda, Jānis Cirulis Abstract.
Orthomodular lattices were introduced to provide an algebraic description of the propositional logic of quantum mechanics. In this paper, we set up an axiomatization of this logic as a Hilbert-style implicational logical system LOM, i.e., we present a set of axioms and derivation rules formulated in the signature $\{\to,0\}$. The other logical operations $\vee, \wedge, \neg$ are expressed in terms of implication (the so-called Dishkant implication) and falsum. We further show that the system LOM is algebraizable in the sense of Blok and Pigozzi, and that orthomodular lattices provide an equivalent algebraic semantics for it. AMS Subject Classification (1991): 06C15, 03G12 Keyword(s): algebraizable logic, axiom system, derivation rule, Dishkant implication, logic of quantum mechanics, orthomodular implication algebra, orthomodular lattice, semi-orthomodular lattice, weak BCK-algebra Received August 16, 2015, and in final form January 16, 2016. (Registered under 63/2015.)

Linear representations of regular rings and complemented modular lattices with involution Christian Herrmann, Marina Semenova Abstract. Faithful representations of regular $\ast$-rings and modular complemented lattices with involution within orthosymmetric sesquilinear spaces are studied within the framework of Universal Algebra. In particular, the correspondence between classes of spaces and classes of representable structures is analyzed; for a class $\mathcal{S}$ of spaces which is closed under ultraproducts and non-degenerate finite-dimensional subspaces, the class of representable structures is shown to be closed under complemented [regular] subalgebras, homomorphic images, and ultraproducts. Moreover, this class is generated by its members which are isomorphic to subspace lattices with involution [endomorphism $\ast$-rings, respectively] of finite-dimensional spaces from $\mathcal{S}$.
Under natural restrictions, this result is refined to a one-to-one correspondence between the two types of classes. AMS Subject Classification (1991): 06C20, 16E50, 16W10, 51D25 Keyword(s): sesquilinear space, endomorphism ring, regular ring with involution, lattice of subspaces, complemented modular lattice with involution, representation, semivariety, variety Received May 8, 2015, and in revised form May 22, 2016. (Registered under 33/2015.)

On a functional equation on the set of Gaussian integers I. Kátai, B. M. Phong Abstract. We define the analogue of $q$-additivity for canonical number systems in the ring of Gaussian integers. We characterize all those functions $f\colon \mathbb{Z}[i]\to \mathbb{C}$ which are $\theta=-A+i$-additive and completely multiplicative (Theorem 1). For $\theta=-1+i$ we give all functions which are both $\theta$- and $\overline{\theta}$-additive (Theorem 2). AMS Subject Classification (1991): 11K65, 11N37, 11N64 Keyword(s): completely additive, completely multiplicative, $q$-additive function, Gaussian integers, canonical number system Received July 4, 2015, and in revised form June 25, 2016. (Registered under 52/2015.)

Torsion units for some untwisted exceptional groups of Lie type Joe Gildea, Killian O'Brien Abstract. In this paper, we investigate the Zassenhaus conjecture for the exceptional groups of Lie type $G_2(q)$ for $q\in\{3,4\}$. Consequently, we prove that the Prime Graph Question is true for these groups. AMS Subject Classification (1991): 16S34, 20C05 Keyword(s): Zassenhaus Conjecture, torsion unit, partial augmentation, integral group ring Received July 1, 2015, and in revised form November 2, 2015. (Registered under 48/2015.)

Properties of delayed feedback and the problem of control in nonlinear difference systems Anna Khamitova Abstract. The goal of this paper is to study the stabilization of chaos in dynamical systems by adding nonlinear feedback.
We analyze what happens when two control parameters, the gain and the memory parameter, are considered. It is shown that the introduction of the additional control of the memory parameter does not extend the class of admissible maps. It appears, however, that a stabilizing control may use a variety of time shifts. In this case, one can change the nature of the decay of a chaotic regime, making it smoother, which may be of significance in the management of biological, economic and medical systems. DOI: 10.14232/actasm-014-522-z AMS Subject Classification (1991): 42A05, 39A30 Keyword(s): trigonometric polynomials, dynamical systems, optimal control of chaos Received March 26, 2014, and in revised form August 15, 2016. (Registered under 22/2014.)

Jamison sequences in countably infinite discrete Abelian groups Vincent Devinck Abstract. We extend the definition of Jamison sequences to the context of topological abelian groups. We then study these sequences when the group is discrete and countably infinite. An arithmetical characterization of such sequences is obtained, extending the result of Badea and Grivaux [BadeaGrivaux2] about Jamison sequences of integers. In particular, we prove that the sequence consisting of all the elements of the group is a Jamison sequence. By contrast, a sequence which generates a subgroup of infinite index in the group is never a Jamison sequence. We also generalize a result of Nikolskii by showing that the growth of the norms of a representation is influenced by the Haar measure of its unimodular point spectrum. AMS Subject Classification (1991): 47A10, 37C85, 43A40, 28C10 Keyword(s): unimodular point spectrum, Jamison sequences, discrete abelian groups, characters and dual group, Haar measures Received February 28, 2015, and in final form December 20, 2015. (Registered under 20/2015.)

The ubiquity of Sidon sets that are not $I_{0}$ Kathryn E. Hare, L. Thomas Ramsey Abstract.
We prove that every infinite, discrete abelian group admits a pair of $I_{0}$ sets whose union is not $I_{0}$. In particular, this implies that every such group contains a Sidon set that is not $I_{0}$. AMS Subject Classification (1991): 43A46 Keyword(s): Sidon set, $I_{0}$ set, Kronecker set Received March 24, 2016, and in revised form August 5, 2016. (Registered under 18/2016.) Inner multipliers and Rudin type invariant subspaces Arup Chattopadhyay, B. Krishna Das, Jaydeb Sarkar Abstract. Let $\mathcal{E}$ be a Hilbert space and $H^2_{\mathcal{E}}(\mathbb{D})$ be the $\mathcal{E}$-valued Hardy space over the unit disc $\mathbb{D}$ in $\mathbb{C}$. The well-known Beurling--Lax--Halmos theorem states that every shift invariant subspace of $H^2_{\mathcal{E}}(\mathbb{D})$ other than $\{0\}$ has the form $\Theta H^2_{\mathcal{E}_*}(\mathbb{D})$, where $\Theta $ is an operator-valued inner multiplier in $H^\infty_{\mathcal{B}(\mathcal{E}_*;\mathcal{E})}(\mathbb{D})$ for some Hilbert space $\mathcal{E}_*$. In this paper we identify $H^2(\mathbb{D}^n)$ with the $H^2(\mathbb{D}^{n-1})$-valued Hardy space $H^2_{H^2(\mathbb{D}^{n-1})}(\mathbb{D})$ and classify all such inner multipliers $\Theta\in H^\infty_{\mathcal{B}(H^2(\mathbb{D}^{n-1}))}(\mathbb{D})$ for which $\Theta H^2_{H^2(\mathbb{D}^{n-1})}(\mathbb{D})$ is a Rudin type invariant subspace of $H^2(\mathbb{D}^n)$. AMS Subject Classification (1991): 47A13, 47A15, 46E20, 46M05 Keyword(s): Hardy space, inner sequence, operator-valued inner function, invariant subspace, unitary equivalence Received March 17, 2015. (Registered under 23/2015.) Selfadjoint operators and symmetric operators Go Hirasawa Abstract. Our study is in the set $\mathcal{S}(H)$ of all semiclosed operators in a Hilbert space $H$. We show that the set $\mathcal{S}_{sa}(H)$ of all selfadjoint operators is relatively open in the set $\mathcal{S}_{sym}(H)$ of all semiclosed symmetric operators. We calculate the value of the radius of the minus-Laplacian $-\Delta$.
As a topological approach, we show the selfadjointness of the Schrödinger operator with a Kato--Rellich potential. Keyword(s): De Branges space, semiclosed symmetric operators, selfadjoint operators, the $q$-metric Received June 18, 2015, and in revised form January 17, 2016. (Registered under 44/2015.) On power bounded operators with holomorphic eigenvectors Maria F. Gamal' Abstract. In [17], M. Uchiyama gave necessary and sufficient conditions for contractions to be quasiaffine transforms, quasisimilar, or similar to unilateral shifts of finite multiplicity in terms of norm-estimates of complete analytic families of eigenvectors of their adjoints. In this paper, the result for contractions to be quasiaffine transforms of unilateral shifts is generalized to power bounded operators. It is shown that the result for contractions to be quasisimilar or similar to unilateral shifts can't be extended to power bounded operators: a counterexample is given. No curvature of the holomorphic vector bundle generated by eigenvectors of operators is computed. AMS Subject Classification (1991): 47A05, 47B99, 47B32, 30H10 Keyword(s): power bounded operator, unilateral shift, quasiaffine transform, quasisimilarity, contraction, analytic family of eigenvalues, similarity Received August 11, 2015, and in revised form September 6, 2016. (Registered under 60/2015.) Two invariant subspaces and spectral properties of a linear operator S. V. Djordjević, I. S. Hwang, B. P. Duggal Abstract. In this note we give conditions for the invertibility of a bounded linear operator $T$ defined on a Banach space $X$ such that $X$ decomposes into a (non direct) sum of two closed $T$-invariant subspaces. DOI: 10.14232/actasm-015-534-x AMS Subject Classification (1991): 47A10, 47A15,47A05, 15A29 Keyword(s): invariant subspace, spectrum of an operator Received May 13, 2015, and in final form April 6, 2016. (Registered under 34/2015.) 
Deddens algebras and weighted shifts of infinite multiplicity Srdjan Petrovic, Daniel Sievewright Abstract. We consider the weighted shifts of infinite multiplicity with quasi-affine weights. We obtain a necessary and sufficient condition for the Deddens algebra associated to such a shift to have a nontrivial invariant subspace, or to be dense. Our technique is based on the study of compressions of operators in the Deddens algebra to some subspaces, and the relations between such compressions. AMS Subject Classification (1991): 47A15, 47B37 Keyword(s): Deddens algebra, weighted shift, invariant subspace Received May 6, 2014, and in final form June 23, 2016. (Registered under 39/2014.) Examples of cyclic polynomially bounded operators that are not similar to contractions Abstract. The question whether a polynomially bounded operator is similar to a contraction was posed by Halmos and was answered in the negative by Pisier. His counterexample is an operator of infinite multiplicity, while all its restrictions to invariant subspaces of finite multiplicity are similar to contractions. In the paper, cyclic polynomially bounded operators which are not similar to contractions and are quasisimilar to $C_0$-contractions or to isometries are constructed. The construction is based on a perturbation of the sequence of finite dimensional operators which is uniformly polynomially bounded, but is not uniformly completely polynomially bounded, constructed by Pisier. AMS Subject Classification (1991): 47A65, 47A60, 47A16, 47A20, 47A55 Keyword(s): polynomially bounded operator, similarity, contraction, unilateral shift, isometry, $C_0$-contraction, $C_0$-operator On a preorder relation for contractions Dan Timotin Abstract. An order relation for contractions on a Hilbert space can be introduced by stating that $A\Prec B$ if and only if $A$ is unitarily equivalent to the restriction of $B$ to an invariant subspace.
We discuss the equivalence classes associated to this relation, and identify cases in which they coincide with classes of unitary equivalence. The results extend those for completely nonunitary partial isometries obtained by Garcia, Martin, and Ross. Keyword(s): contractions on Hilbert space, preorder relations, unitary equivalence Received September 6, 2015, and in final form May 23, 2016. (Registered under 68/2015.) Spectrum of class $p$-$wA(s,t)$ operators Muneo Chō, M. H. M. Rashid, Kotaro Tanahashi, Atsushi Uchiyama Abstract. Let $T=U|T|$ be the polar decomposition of a bounded linear operator on a complex Hilbert space. $T$ is called a class $p$-$wA(s,t)$ operator if $(|T^{*}|^{t}|T|^{2s}|T^{*}|^{t})^{\frac{tp}{s+t}}\geq |T^{*}|^{2tp}$ and $(|T|^{s}|T^{*}|^{2t}|T|^{s})^{\frac{sp}{s+t}}\leq |T|^{2sp}$ where $0 < s, t$ and $0 < p \leq 1$. We investigate spectral properties of a class $p$-$wA(s,t)$ operator $T$. We prove that if $s + t = 1$ and $\lambda\not = 0 $ is an isolated point of the spectrum $\sigma(T)$ then the Riesz idempotent $E$ with respect to $\lambda $ is self-adjoint and $ {\rm ran } E = \ker(T- \lambda ) = \ker((T-\lambda )^{*})$. Also, we prove related results. Keyword(s): class $A$ operator, class $p$-$wA(s, t)$ operator, Riesz idempotent Received March 27, 2015, and in revised form May 6, 2015. (Registered under 25/2015.) Adjoints of linear fractional composition operators on weighted Hardy spaces Željko Čučković, Trieu Le Abstract. It is well known that on the Hardy space $H^2(\mathbb{D})$ or weighted Bergman space $A^2_{\alpha }(\mathbb{D})$ over the unit disk, the adjoint of a linear fractional composition operator equals the product of a composition operator and two Toeplitz operators. On $S^2(\mathbb{D})$, the space of analytic functions on the disk whose first derivatives belong to $H^2(\mathbb{D})$, Heller showed that a similar formula holds modulo the ideal of compact operators.
In this paper we investigate what the situation is like on other weighted Hardy spaces. Keyword(s): composition operator, adjoint, weighted Hardy space Received July 3, 2015, and in revised form August 31, 2015. (Registered under 51/2015.) Can you see the bubbles in a foam? Árpád Kurusa Abstract. An affirmative answer to the question in the title is proved in the plane by showing that any real analytic multicurve can be uniquely determined from its generalized visual angles given at every point of an open ring around the multicurve. AMS Subject Classification (1991): 0052, 0054, 52A10; 44A12 Keyword(s): visual angle, masking function, Steinhaus, Crofton Received July 1, 2015, and in revised form September 22, 2015. (Registered under 49/2015.)
\begin{document} \title{Emulating complex networks with a single delay differential equation} \begin{abstract} A single dynamical system with time-delayed feedback can emulate networks. This property of delay systems has made them extremely useful tools for Machine Learning applications. Here we describe several possible setups, which allow emulating multilayer (deep) feed-forward networks as well as recurrent networks of coupled discrete maps with an arbitrary adjacency matrix by a single system with delayed feedback. While the network size can be arbitrary, the generating delay system can have a low number of variables, including the scalar case. \end{abstract} \section{Introduction} \label{intro} Systems with time delays, or delay-differential equations (DDEs), play an important role in modeling various natural phenomena and technological processes \cite{Stepan1989,Hale1993,Diekmann1995a,Erneux2009,Smith2010,Erneux2017,Yanchuk2017,Krisztin2008}. In optoelectronics, delays emerge due to the finite optical or electric signal propagation time between the elements \cite{Vladimirov2005,Erzgraber2006,DHuys2008,Vicente2008,Fiedler2008,Wolfrum2010,Yanchuk2010a,Soriano2013,Oliver2015,Marconi2015,Puzyrev2016,Yanchuk2019}. Similarly, in neuroscience, propagation delays of the action potentials play a crucial role in information processing in the brain \cite{Foss2000,Wu2001,Izhikevich2006,Stepan2009,Deco2009a,Perlikowski2010a,Popovych2011,Kantner2013}. Machine Learning is another rapidly developing application area of delay systems \cite{Paugam-Moisy2008,Appeltant2011,Martinenghi2012,Appeltant2012a,Larger2012,Brunner2013,SCH13l,Toutounji2014,Grigoryeva2015,Penkovsky2017,Larger2017,Harkhoe2019,Stelzer2019,Hart2019,Koster2020,Koester2020,Argyris2020,Goldmann2020,Sugano2020}.
It has been shown recently that DDEs can successfully realize a reservoir computing setup, both theoretically \cite{Hart2017,Keuninckx2017,Hart2019,Stelzer2019,Koster2020,Stelzer2020,Koester2020,Goldmann2020} and in optoelectronic hardware implementations \cite{Appeltant2011,Appeltant2012a,Larger2017}. In time-delay reservoir computing, a single DDE with one or a few variables is used for building a ring network of coupled maps with fixed internal weights and fixed input weights. In a certain sense, the network structure emerges by properly unfolding the temporal behavior of the DDE. In this paper, we explain how such an unfolding appears, not only for the ring network as in reservoir computing but also for arbitrary networks of coupled maps. In \cite{Hermans2015}, a training method is proposed to modify the input weights while the internal weights remain fixed. Among the most closely related previous publications, Hart and collaborators unfold networks with arbitrary topology from delay systems \cite{Hart2017,Hart2019}. Our work extends their results in several directions, including varying coupling weights and a broader class of delay systems. The networks constructed by our method allow for a modulation of weights. Hence, they can be employed in Machine Learning applications with weight training. In our recent paper \cite{Stelzer2020}, we show that a single DDE can emulate a deep neural network and successfully perform various computational tasks. More specifically, the work \cite{Stelzer2020} derives a multilayer neural network from a delay system with modulated feedback terms. This neural network is trained by gradient descent using back-propagation and applied to machine learning tasks. As follows from the above-mentioned machine learning applications, delay models can be effectively used for unfolding complex network structures in time. Our goal here is a general description of such networks.
While focusing on the network construction, we do not discuss details of specific machine learning applications, such as weight training by gradient descent or specific tasks. The structure of the paper is as follows. In Sec.~\ref{sec:general-case} we derive a feed-forward network from a DDE with modulated feedback terms. Section~\ref{sec:recurrent-network} describes a recurrent neural network. In Sec.~\ref{sec:semilinear-case}, we review a special but practically important case of delay systems with a linear instantaneous part and nonlinear delayed feedback containing an affine combination of the delayed variables; originally, these results have been derived in \cite{Stelzer2020}. \section{From delay systems to multilayer feed-forward networks} \label{sec:general-case} \subsection{Delay systems with modulated feedback terms} \label{subsec:delay-system} Multiple delays are required for the construction of a network with arbitrary topology by a delay system \cite{Hart2017,Hart2019,Stelzer2020}. In such a network, the connection weights are emulated by a modulation of the delayed feedback signals \cite{Stelzer2020}. Therefore, we consider a DDE of the following form \begin{align}\label{eq:general-system} \dot{x}(t) = f(x(t), z(t), \mathcal{M}_1(t)x(t-\tau_1),\ldots , \mathcal{M}_D(t)x(t-\tau_D)), \end{align} with $D$ delays $\tau_1,\dots,\tau_D$, a nonlinear function $f$, a time-dependent driving signal $z(t)$, and modulation functions $\mathcal{M}_1(t), \ldots , \mathcal{M}_D(t)$. System \eqref{eq:general-system} is a non-autonomous DDE, and the properties of the functions $\mathcal{M}_d(t)$ and $z(t)$ play an important role in unfolding a network from \eqref{eq:general-system}. To define these properties, a time quantity $T>0$ is introduced, called the \textit{clock cycle}. Further, we choose a number $N$ of grid points per $T$-interval and define $\theta := T/N$.
We define the clock cycle intervals $$I_\ell := ((\ell-1)T, \ell T],\ \ell = 1, \ldots ,L,$$ which we split into smaller sub-intervals $$I_{\ell, n} := ((\ell -1)T + (n-1)\theta, (\ell - 1)T + n\theta], \ n=1,\ldots ,N,$$ see Fig.~\ref{fig:first}. We assume the following properties for the delays and modulation functions:\\ \textbf{Property (I):} The delays satisfy $\tau_d = n_d \theta,\ d=1,\ldots ,D$ with natural numbers $0 < n_1 < \cdots < n_D < 2N$. Consequently, it holds $0 < \tau_1 < \cdots < \tau_D < 2T$. \\ \textbf{Property (II):} The functions $\mathcal{M}_d(t)$ are step-functions, which are constant on the intervals $I_{\ell,n}$. We denote these constants as $v^\ell_{d,n}$, i.e. $$ \mathcal{M}_d(t) = v^\ell_{d,n} \quad \text{for} \quad t\in I_{\ell,n}. $$ \begin{figure} \caption{ Illustration of the clock cycle intervals $I_\ell$ and sub-intervals $I_{\ell,n}$. The node $x^\ell_n$ (blue dot) is defined by the value of the solution $x(t)$ of system~\eqref{eq:general-system} (blue line) at the time point $t=(\ell -1)T+n\theta$. The modulation function $\mathcal{M}_d(t)$ is a step function with constant values $v^\ell_{d,n}$ on the intervals $I_{\ell ,n}$. } \label{fig:first} \end{figure} In the following sections, we show that one can consider the intervals $I_\ell$ as layers with $N$ nodes of a network arising from the delay system~\eqref{eq:general-system} if the modulation functions $\mathcal{M}_d(t)$ fulfill certain additional requirements. The $n$-th node of the $\ell$-th layer is defined as \begin{align} \label{eq:nodes} x^\ell_n := x((\ell - 1)T + n\theta), \quad n = 1,\ldots ,N, \ \ell =1,\ldots ,L, \end{align} which corresponds to the solution of the DDE \eqref{eq:general-system} at time point $(\ell - 1)T + n\theta$. The solution at later time points $x^{\ell'}_{n'}$ with either $\ell'>\ell$ or $n'>n$ for $\ell'=\ell$ depends, in general, on $x^\ell_n$, thus, providing the interdependence between the nodes. 
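To make the bookkeeping concrete, the sub-interval grid and a single step-function modulation $\mathcal{M}_d(t)$ can be sketched in a few lines of Python. This is an illustration only; the parameter values and the random step heights are assumptions of the example, not part of the construction.

```python
import numpy as np

# Sketch of the grid: the clock cycle T is split into N sub-intervals of
# length theta, and M_d(t) takes the constant value v^l_{d,n} on
# I_{l,n} = ((l-1)T + (n-1)*theta, (l-1)T + n*theta].
T, N, L = 1.0, 5, 3                 # clock cycle, nodes per layer, layers (assumed)
theta = T / N

rng = np.random.default_rng(0)
v = rng.standard_normal((L, N))     # hypothetical step heights v^l_{d,n}, one fixed d

def modulation(t):
    """Evaluate the theta-step function M_d at a time t in (0, L*T]."""
    k = int(np.ceil(t / theta - 1e-12)) - 1   # 0-based global sub-interval index
    l, n = divmod(k, N)                       # 0-based layer and node indices
    return v[l, n]

# the node x^l_n is the solution value at the right endpoint of I_{l,n}:
assert modulation((2 - 1) * T + 3 * theta) == v[1, 2]   # time point of node x^2_3
```

The same table lookup applies to the drive $z(t)$, which is a step function on the same grid.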
Such dependence can be found explicitly in some situations. The simplest way is to use a discretization for small $\theta$, and we consider such a case in the following Sec.~\ref{subsec:discretization}. Another case, when $\theta$ is large, can be found in \cite{Stelzer2020}. Let us comment on the initial state for DDE~\eqref{eq:general-system}. According to the general theory~\cite{Hale1993}, in order to solve an initial value problem, an initial history function $x_0(s)$ must be provided on the interval $s \in [-\tau_D, 0]$, where $\tau_D$ is the maximal delay. In terms of the nodes, one needs to specify $x_n^\ell$ for $n_D$ ``history'' nodes. However, the modulation functions $\mathcal{M}_d(t)$ can weaken this requirement. For example, if $\mathcal{M}_d(t) = 0$ for $t \leq \tau_d$ for each $d$, then it is sufficient to know the initial state $x(0) = x_0^1 = x_0$ at a single point, and we do not require a history function at all. In fact, the latter special case has been employed in \cite{Stelzer2020} for various machine learning tasks. \subsection{Disclosing network connections via discretization of the DDE} \label{subsec:discretization} Here we consider how a network of coupled maps can be derived from DDE \eqref{eq:general-system}. Since the network nodes have already been introduced in Sec.~\ref{subsec:delay-system} as $x_n^\ell$ by Eq.~\eqref{eq:nodes}, it remains to describe the connections between the nodes, i.e., to find the functional relations (maps) between them. For simplicity, we restrict ourselves to the Euler discretization scheme since the obtained network topology is independent of the chosen discretization. Similar network constructions by discretization from ordinary differential equations have been employed in \cite{Haber2017,Lu2018,Chen2018}.
We apply a combination of the forward and backward Euler method: the instantaneous system states of~\eqref{eq:general-system} are approximated by the left endpoints of the small-step intervals of length $\theta$ (forward scheme). The driving signal $z(t)$ and the delayed system states are approximated by the right endpoints of the step intervals (backward scheme). Such an approach leads to simpler expressions. We obtain \begin{align}\label{eq:euler} x^\ell_n = x^\ell_{n-1} + \theta f(x^\ell_{n-1},z(t^\ell_n), \mathcal{M}_1(t^\ell_n)x(t^\ell_n-\tau_1),\ldots ,\mathcal{M}_D(t^\ell_n)x(t^\ell_n-\tau_D)) \end{align} for $n = 2,\ldots,N$, where $t^\ell_n := (\ell - 1)T + n\theta$, and \begin{align}\label{eq:euler1} x^\ell_1 = x^{\ell-1}_N + \theta f(x^{\ell-1}_N, z(t^\ell_1), \mathcal{M}_1(t^\ell_1)x(t^\ell_1-\tau_1),\ldots ,\mathcal{M}_D(t^\ell_1)x(t^\ell_1-\tau_D)) \end{align} for the first node in the $I_\ell$-interval. According to Property (I), the delays satisfy $0<\tau_d<2T$. Therefore, the delay-induced feedback connections with target in the interval $I_\ell$ can originate from one of the following intervals: $I_\ell$, $I_{\ell-1}$, or $I_{\ell-2}$. In other words: the time points $t^\ell_n - \tau_d$ can belong to one of these intervals $I_{\ell}$, $I_{\ell -1}$, $I_{\ell -2}$. Formally, it can be written as \begin{equation} \label{eq:inIell} t_{n}^{\ell}-\tau_{d}=t_{n}^{\ell}-n_{d}\theta=\begin{cases} t_{n-n_{d}}^{\ell}\in I_{\ell}, & \text{if}\quad n_{d}<n,\\ t_{N+n-n_{d}}^{\ell-1}\in I_{\ell-1}, & \text{if}\quad n\leq n_{d}<N+n,\\ t_{2N+n-n_{d}}^{\ell-2}\in I_{\ell-2}, & \text{if}\quad N+n\leq n_{d}. \end{cases} \end{equation} We limit the class of networks to multilayer systems with connections between the neighboring layers. Such networks, see Fig.~\ref{fig:network-from-delay-system}b, are frequently employed in machine learning tasks, e.g. as deep neural networks \cite{Bishop2006,Goodfellow2016,Lecun2015,Schmidhuber2015,Stelzer2020}. 
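The case distinction of Eq.~\eqref{eq:inIell} can be transcribed directly into code. The following sketch (ours, for illustration) returns the layer and node index of the source of a delay-induced connection targeting $x^\ell_n$:

```python
# For a target node (l, n) and a delay tau_d = n_d * theta with 0 < n_d < 2N,
# determine the source node x(t^l_n - tau_d), following the three cases of
# Eq. (inIell).
def source_node(l, n, n_d, N):
    if n_d < n:              # source lies in the same interval I_l
        return (l, n - n_d)
    elif n_d < N + n:        # source lies in the previous interval I_{l-1}
        return (l - 1, N + n - n_d)
    else:                    # N + n <= n_d: source lies in I_{l-2}
        return (l - 2, 2 * N + n - n_d)

N = 4
assert source_node(3, 2, 1, N) == (3, 1)        # n_d < n
assert source_node(3, 2, N, N) == (2, 2)        # tau_d = T: same node index
assert source_node(3, 2, N + 2, N) == (1, 4)    # N + n <= n_d
```

Property (III) below then simply discards every connection whose source interval is not $I_{\ell-1}$.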
Using \eqref{eq:inIell}, we can formulate a condition for the modulation functions $\mathcal{M}_d(t)$ to ensure that the delay terms $x(t-\tau_d)$ induce only connections between subsequent layers. For this, we set the modulation functions' values to zero if the originating time point $t^\ell_n - \tau_d$ of the corresponding delay connection does not belong to the interval $I_{\ell-1}$. This leads to the following assumption on the modulation functions:\\ \textbf{Property (III):} The modulation functions $\mathcal{M}_d(t)$ vanish at the following intervals: \begin{align}\label{eq:M-condition} \mathcal{M}_d(t) = v_{d,n}^\ell = 0 \quad \text{for} \quad t\in I_{\ell,n} \quad \text{if} \quad (n_d < n) \quad \text{or} \quad (N+n \leq n_d). \end{align} In the following, we assume that condition (III) is satisfied. Expressions \eqref{eq:euler}--\eqref{eq:euler1} contain the interdependencies between $x_n^\ell$, i.e., the connections between the nodes of the network. We explain these dependencies and present them in a more explicit form in the following. Our goal is to obtain the multilayer network shown in Fig.~\ref{fig:network-from-delay-system}b. \subsection{Effect of time-delays on the network topology} \label{subsec:network-topology} \begin{figure} \caption{Network connections induced by one time-delay $\tau_d$. Panel (a): connections induced by $\tau_d<T$. Panel (b): $\tau_d = T$. Panel (c): $\tau_d > T$. Multiple delays $\tau_1, \ldots ,\tau_D$ result in a superposition of parallel patterns as shown in Fig.~\ref{fig:network-from-delay-system}b. } \label{fig:net-topo} \end{figure} Taking into account property (III), the node $x^\ell_n$ of layer $I_{\ell}$ receives a connection from a node $x^{\ell -1}_{n-n'_d}$ of layer $I_{\ell -1}$, where $n'_d:=n_d-N$. Two neighboring layers are illustrated in Fig.~\ref{fig:net-topo}, where the nodes in each layer are ordered vertically from top to bottom. Depending on the size of the delay, we can distinguish three cases. 
\begin{itemize} \item[(a)] For $\tau_d<T$, there are $n_d$ ``upward'' connections as shown in Fig.~\ref{fig:net-topo}a. \item[(b)] For $\tau_d=T$, there are $n_d=N$ ``horizontal'' delay-induced connections, i.e. connections from nodes of layer $\ell -1$ to nodes of layer $\ell$ with the same index, see Fig.~\ref{fig:net-topo}b. \item[(c)] For larger delays $\tau_d>T$, there are $2N-n_d$ ``downward'' delay-induced connections, as shown in Fig.~\ref{fig:net-topo}c. \end{itemize} In all cases, the connections induced by one delay $\tau_d$ are parallel. Since the delay system possesses multiple delays $0 < \tau_1 < \ldots < \tau_D < 2T$, the parallel connection patterns overlap, as illustrated in Fig.~\ref{fig:network-from-delay-system}b, leading to a more complex topology. In particular, a fully connected pattern appears for $D = 2N-1$ and $\tau_d = \theta d$. \subsection{Modulation of connection weights} \label{subsec:weight-modulation} With the modulation functions satisfying Property (III), the Euler scheme~\eqref{eq:euler}--\eqref{eq:euler1} simplifies to the following map \begin{align}\label{eq:general-map1} x^\ell_1 &= x^{\ell-1}_N + \theta f(x^{\ell-1}_N,z(t^\ell_1), v^\ell_{1,1} x^{\ell -1}_{1-n'_1},\ldots ,v^\ell_{D,1} x^{\ell -1}_{1-n'_D}),\\ x^\ell_n &= x^\ell_{n-1} + \theta f(x^\ell_{n-1}, z(t^\ell_n), v^\ell_{1,n} x^{\ell -1}_{n-n'_1},\ldots ,v^\ell_{D,n} x^{\ell -1}_{n-n'_D}), \quad n=2,\ldots ,N,\label{eq:general-map} \end{align} where Eq.~\eqref{eq:M-condition} implies $v^{\ell}_{d,n} = 0$ if $n-n'_d < 1$ or $n-n'_d > N$. In other words, the dependencies on the right-hand side of \eqref{eq:general-map1}--\eqref{eq:general-map} contain only the nodes from the $(\ell-1)$-th layer. Moreover, the numbers $v^\ell_{d,n}$ determine the strengths of the connections from $x^{\ell -1}_{n-n'_d}$ to $x^\ell_n$ and can be considered as network weights.
By reindexing, we can define weights $w^\ell_{nj}$ connecting node $j$ of layer $\ell -1$ to node $n$ of layer $\ell$. These weights are given by the equation \begin{align}\label{eq:weight-matrix1} w^\ell_{nj} := \sum_{d =1}^D \delta_{n-n'_d,j} v^\ell_{d,n} = \begin{cases} 0 & \text{if } \forall d \colon j \neq n - n'_d,\\ v^\ell_{d,n} & \text{if } \exists d \colon j = n - n'_d, \end{cases} \end{align} and define the entries of the weight matrix $W^\ell = (w^\ell_{nj}) \in \mathbb{R}^{N\times (N+1)}$, except for the last column, which is defined below and contains bias weights. The symbol $\delta_{nj}$ is the Kronecker delta, i.e. $\delta_{nj} = 1$ if $n=j$, and $\delta_{nj} = 0$ if $n\neq j$. \begin{figure} \caption{ Coupling matrix $W^\ell$ between the hidden layers $\ell - 1$ and $\ell$, see Eq.~\eqref{eq:weight-matrix1}--\eqref{eq:weight-matrix2}. The nonzero weights are arranged along the diagonals, and equal $v^\ell_{d,n}$. The position of the diagonals is determined by the corresponding delay $\tau_d$. If $\tau_d = T = N\theta$, then the main diagonal contains the entries $v^\ell_{d,1},\ldots ,v^\ell_{d,N}$ (shown in yellow). If $\tau_d = n_d\theta < T$, then the corresponding diagonal lies above the main diagonal and contains the values $v^\ell_{d , 1}, \ldots , v^\ell_{d , n_d}$ (red). If $\tau_d = n_d\theta > T$, then the corresponding diagonal lies below the main diagonal and contains the values $v^\ell_{d , n_d-N+1}, \ldots , v^\ell_{d, N}$ (blue). The last column of the matrix contains the bias weights (gray). } \label{fig:weight-matrix} \end{figure} The time-dependent driving function $z(t)$ can be utilized to realize a bias weight $b^\ell_n$ for each node $x^\ell_n$. For details, we refer to Sec.~\ref{subsec:mulilayer-network}. We define the last column of the weight matrix $W^\ell$ by \begin{align}\label{eq:weight-matrix2} w^\ell_{n,N+1} := b^\ell_n. \end{align} The weight matrix is illustrated in Fig.~\ref{fig:weight-matrix}. 
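As a sanity check of Eqs.~\eqref{eq:weight-matrix1}--\eqref{eq:weight-matrix2}, the assembly of $W^\ell$ from the step heights $v^\ell_{d,n}$ can be sketched as follows. The helper name and all numerical values are assumptions for illustration.

```python
import numpy as np

# Assemble W^l of shape (N, N+1): with n'_d = n_d - N, the nonzero entries are
# w_{n, n - n'_d} = v^l_{d,n}; out-of-range sources are dropped (Property III),
# and the last column holds the bias weights b^l_n.
def weight_matrix(v, n_ds, N, bias):
    """v[d, n-1] = v^l_{d,n} for the delay tau_d = n_ds[d] * theta."""
    W = np.zeros((N, N + 1))
    for d, n_d in enumerate(n_ds):
        np_d = n_d - N                    # n'_d
        for n in range(1, N + 1):
            j = n - np_d                  # source node index in layer l-1
            if 1 <= j <= N:
                W[n - 1, j - 1] = v[d, n - 1]
    W[:, N] = bias
    return W

N = 3
v = np.arange(1.0, N + 1).reshape(1, N)          # v^l_{1,n} = n (hypothetical)
W = weight_matrix(v, [N], N, bias=np.zeros(N))   # single delay tau_1 = T
assert np.allclose(W[:, :N], np.diag([1.0, 2.0, 3.0]))
```

A delay $\tau_d<T$ fills a diagonal above the main one, and $\tau_d>T$ a diagonal below it, matching Fig.~\ref{fig:weight-matrix}.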
This matrix $W^\ell$ is in general sparse, where the degree of sparsity depends on the number $D$ of delays. If $D=2N-1$ and $\tau_d = d\theta, \ d=1,\ldots ,D$, we obtain a dense connection matrix. Moreover, the positions of the nonzero entries and zero entries are the same for all matrices $W^2, \ldots , W^L$, but the values of the nonzero entries are in general different. \subsection{Interpretation as multilayer neural network} \label{subsec:mulilayer-network} The map~\eqref{eq:general-map1}--\eqref{eq:general-map} can be interpreted as the hidden layer part of a multilayer neural network provided we define suitable input and output layers. \begin{figure} \caption{Implementing a multilayer neural network by delay system~\eqref{eq:general-system}. Panel (a): The system state is considered at discrete time points $x^\ell_{n} := x((\ell - 1)T +n\theta)$. The intervals $I_\ell$ correspond to layers. Due to delayed feedback, non-local connections emerge (color lines). Panel (b) shows the resulting neural network. } \label{fig:network-from-delay-system} \end{figure} The input layer determines how a given input vector $u\in \mathbb{R}^{M+1}$ is transformed to the state of the first hidden layer $x(t),\ t\in I_1$. The input $u\in \mathbb{R}^{M+1}$ contains $M$ input values $u_1,\ldots ,u_M$ and an additional entry $u_{M+1}=1$. In order to ensure that $x(t),\ t\in I_1$ depends on $u$ and the initial state $x(0)=x_0$ exclusively, and does not depend on a history function $x(s),\ s < 0$, we set all modulation functions to zero on the first hidden layer interval. This leads to the following\\ \textbf{Property (IV):} The modulation functions satisfy \begin{align} \mathcal{M}_d(t) = 0, \quad t\in I_1, \ d=1,\ldots ,D. \end{align} The dependence on the input vector $u\in\mathbb{R}^{M+1}$ can be realized by the driving signal $z(t)$. 
\\ \textbf{Property (V):} The driving signal $z(t)$ on the interval $I_1$ is the step function given by \begin{align}\label{eq:J-1} z(t) = & J(t) \quad \text{for } \quad t\in I_1, \\ & J(t) = J_n = \left[ f^\mathrm{in}(W^{\mathrm{in}} u) \right]_n \quad \text{for} \quad t\in I_{1,n},\label{eq:J-2} \end{align} where $f^\mathrm{in}(W^{\mathrm{in}} u)\in\mathbb{R}^N$ is the preprocessed input, $W^{\mathrm{in}}\in\mathbb{R}^{N\times (M+1)}$ is an input weight matrix, and $f^\mathrm{in}$ is an element-wise input preprocessing function. For example, $f^\mathrm{in}(a)=\tanh (a)$ was used in~\cite{Stelzer2020}. As a result, the following holds for the first hidden layer \begin{align}\label{eq:general-system-first-layer} \dot{x}(t) = f(x(t), J(t), 0,\ldots ,0), \quad t\in I_1, \end{align} which is simply a system of ordinary differential equations; it requires only an initial condition at the single point $x(0)=x_0$ to be solved in positive time. This yields the coupled map representation \begin{align}\label{eq:general-map1-first-layer} x^1_1 &= x_0 + \theta f(x_0, J_{1},0,\ldots ,0),\\ x^1_n &= x^1_{n-1} + \theta f(x^1_{n-1}, J_{n},0,\ldots ,0), \quad n=2,\ldots ,N.\label{eq:general-map-first-layer} \end{align} For the hidden layers $I_2,I_3,\dots$, the driving function $z(t)$ can be used to introduce a bias as follows. \\ \textbf{Property (VI):} The driving signal $z(t)$ on the intervals $I_\ell$, $\ell\ge 2$, is the step function given by \begin{align} z(t) = & b(t) \quad \text{for } \quad t>T, \\ & b(t) = b_n^\ell \quad \text{for} \quad t\in I_{\ell,n},\quad \ell \ge 2.
\end{align} Assuming the properties (I)--(VI), Eqs.~\eqref{eq:general-map1}--\eqref{eq:general-map} imply \begin{align}\label{eq:general-map1-hidden-layer} x^\ell_1 &= x^{\ell-1}_N + \theta f(x^{\ell-1}_N,b^\ell_1, v^\ell_{1,1} x^{\ell -1}_{1-n'_1},\ldots ,v^\ell_{D,1} x^{\ell -1}_{1-n'_D}),\\ x^\ell_n &= x^\ell_{n-1} + \theta f(x^\ell_{n-1}, b^\ell_n, v^\ell_{1,n} x^{\ell -1}_{n-n'_1},\ldots ,v^\ell_{D,n} x^{\ell -1}_{n-n'_D}), \quad n=2,\ldots ,N.\label{eq:general-map-hidden-layer} \end{align} Let us finally define the output layer, which transforms the node states $x^L_1,\ldots , x^L_N$ of the last hidden layer to an output vector $\hat{y} \in \mathbb{R}^P$. For this, we define a vector $x^L := (x^L_1, \ldots , x^L_N, 1)^\mathrm{T} \in \mathbb{R}^{N+1}$, an output weight matrix $W^\mathrm{out}\in\mathbb{R}^{P\times (N+1)}$, and an output activation function $f^\mathrm{out}\colon \mathbb{R}^P \to \mathbb{R}^P$. The output vector is then defined as \begin{align}\label{eq:general-output} \hat{y} = f^\mathrm{out}(W^\mathrm{out}x^L). \end{align} Figure~\ref{fig:network-from-delay-system} illustrates the whole construction process of the coupled maps network; it is given by the equations~\eqref{eq:general-map1-first-layer}--\eqref{eq:general-output}. We summarize the main result of Sec.~\ref{sec:general-case}. \begin{tcolorbox} Under assumptions (I)--(VI) and for small $\theta$, DDE \eqref{eq:general-system} describes the multilayer network of coupled maps shown in Fig.~\ref{fig:network-from-delay-system}, with the specific dependencies given by Eqs.~\eqref{eq:general-map1-first-layer}, \eqref{eq:general-map-first-layer}, \eqref{eq:general-map1-hidden-layer}, \eqref{eq:general-map-hidden-layer}, and \eqref{eq:general-output}. \end{tcolorbox} \section{Constructing a recurrent neural network from a delay system} \label{sec:recurrent-network} System~\eqref{eq:general-system} can also be considered as a recurrent neural network.
To show this, we consider the system on the time interval $[0,KT]$, for some $K\in\mathbb{N}$, which is divided into intervals $I_k :=((k-1)T,kT], \ k = 1, \ldots ,K$. We use $k$ instead of $\ell$ as the index for the intervals to make clear that the intervals do not represent layers. The state $x(t)$ on an interval $I_k$ is interpreted as the state of the recurrent network at time $k$. More specifically, \begin{align} x^k_n := x((k - 1)T + n\theta), \quad n = 1,\ldots ,N, \ k = 1, \ldots ,K \end{align} is the state of node $n$ at the discrete time $k$. The driving function $z(t)$ can be utilized as an input signal for each $k$-time-step.\\ \textbf{Property (VII):} $z(t)$ is the $\theta$-step function with \begin{align} z(t)= z_n^k & \quad \text{for} \quad t\in I_{k,n}, \\ & (z_1^k,\dots,z_N^k)^T = f^\mathrm{in}(W^\mathrm{in} u(k)), \end{align} where $u(k),\ k = 1, \ldots ,K$ are $(M+1)$-dimensional input vectors, $W^\mathrm{in} \in \mathbb{R}^{N\times (M+1)}$ is an input weight matrix, and $f^\mathrm{in}$ is an element-wise input preprocessing function. Each input vector $u(k)$ contains $M$ input values $u_1(k),\ldots ,u_M(k)$ and a fixed entry $u_{M+1}(k):=1$, which is needed to include bias weights in the last column of $W^\mathrm{in}$. The main difference between Property (VII) and Property (VI) is that the former allows for information input through $z(t)$ in all intervals $I_k$. Another important difference is related to the modulation functions, which must be $T$-periodic in order to implement a recurrent network. This leads to the following assumption. \\ \textbf{Property (VIII):} The modulation functions $\mathcal{M}_d(t)$ are $T$-periodic $\theta$-step functions with \begin{align} \mathcal{M}_d(t)= v_{d,n}\quad \text{for} \quad t\in I_{k,n}. \end{align} Note that the value $v_{d,n}$ is independent of $k$ due to the $T$-periodicity of $\mathcal{M}_d(t)$.
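In practice, this bookkeeping amounts to reshaping the sampled solution. A brief sketch with a synthetic stand-in for $x(t)$ (all values here are assumptions for illustration):

```python
import numpy as np

# Sampling the solution at the node times (k-1)T + n*theta and reshaping
# gives the recurrent network states: states[k-1, n-1] = x^k_n.
T, N, K = 1.0, 4, 3
theta = T / N

t_grid = np.array([k * T + (n + 1) * theta for k in range(K) for n in range(N)])
x_solution = np.sin(t_grid)          # stand-in for a DDE solution x(t)

states = x_solution.reshape(K, N)
assert states.shape == (K, N)
assert states[1, 2] == np.sin(1.0 * T + 3 * theta)   # x^2_3 = x((2-1)T + 3*theta)
```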
When assuming Properties (I), (III), (IV), (VII), and (VIII), the map equations~\eqref{eq:general-map1}--\eqref{eq:general-map} become \begin{align}\label{eq:general-map1-recurrent} x^k_1 &= x^{k-1}_N + \theta f(x^{k-1}_N, z^k_1, v_{1,1} x^{k -1}_{1-n'_1},\ldots ,v_{D,1} x^{k -1}_{1-n'_D}),\\ x^k_n &= x^k_{n-1} + \theta f(x^k_{n-1}, z^k_n, v_{1,n} x^{k -1}_{n-n'_1},\ldots ,v_{D,n} x^{k -1}_{n-n'_D}), \quad n=2,\ldots ,N,\label{eq:general-map-recurrent} \end{align} and can be interpreted as a recurrent neural network with the input matrix $W^\mathrm{in}$ and the internal weight matrix $W = (w_{nj}) \in \mathbb{R}^{N\times N}$ defined by \begin{align}\label{eq:recurrent-weight-matrix} w_{nj} := \sum_{d =1}^D \delta_{n-n'_d,j} v_{d,n} = \begin{cases} 0 & \text{if } \forall d \colon j \neq n - n'_d,\\ v_{d,n} & \text{if } \exists d \colon j = n - n'_d. \end{cases} \end{align} \begin{figure} \caption{Recurrent network obtained from DDE \eqref{eq:general-system} with two delays. The delays $\tau_1<T$ and $\tau_2>T$ induce connections with opposite directions (colored arrows). Moreover, the nodes of the recurrent layer are linearly locally coupled (black arrows). All nodes of the recurrent layer are connected to the input and output layers. } \label{fig:recurrent-network} \end{figure} When we choose the number of delays to be $D=2N-1$, we can realize any given connection matrix $W\in \mathbb{R}^{N\times N}$. For that, we choose the delays $\tau_d = d\theta , \ d=1,\ldots ,2N-1$. Consequently, there are $D=2N-1$ modulation functions $\mathcal{M}_d(t)$, which are step functions with values $v_{d,n}$. In this case, Eq.~\eqref{eq:recurrent-weight-matrix} provides exactly one corresponding $v_{d,n}$ for each entry $w_{nj}$ of $W$. Therefore, an arbitrary matrix $W$ can be realized by choosing appropriate step heights for the modulation functions. In the setting of Sec.~\ref{sec:recurrent-network}, the resulting network is an arbitrary recurrent network.
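The one-to-one correspondence between the entries $w_{nj}$ and the step heights $v_{d,n}$ for the delay choice $\tau_d = d\theta$ can be checked numerically. A small sketch, assuming the offset $n'_d = d - N$ implied by $\tau_d = d\theta$ and $T = N\theta$; the target matrix is random and purely illustrative:

```python
import numpy as np

N = 5                                # nodes per interval (illustrative)
D = 2 * N - 1                        # delays tau_d = d*theta, d = 1, ..., 2N-1
rng = np.random.default_rng(1)
W = rng.normal(size=(N, N))          # arbitrary target connection matrix

# choose step heights v_{d,n} so that w_{nj} = v_{d,n} for j = n - n'_d, n'_d = d - N
v = np.zeros((D, N))
for n in range(1, N + 1):
    for j in range(1, N + 1):
        d = N + n - j                # the unique delay index with j = n - (d - N)
        v[d - 1, n - 1] = W[n - 1, j - 1]

# reconstruct W from the modulation step heights via the delta rule above
W_rec = np.zeros((N, N))
for d in range(1, D + 1):
    for n in range(1, N + 1):
        j = n - (d - N)              # j = n - n'_d
        if 1 <= j <= N:
            W_rec[n - 1, j - 1] = v[d - 1, n - 1]
```

Since $d = N + n - j$ stays in $\{1,\ldots,2N-1\}$ for all $n, j \in \{1,\ldots,N\}$, every matrix entry is hit exactly once, so the reconstruction recovers $W$ entry by entry.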
Summarizing, the main message of Sec. 3 is as follows. \begin{tcolorbox} Under assumptions (I), (III), (IV), (VII), and (VIII), and for small $\theta$, DDE \eqref{eq:general-system} describes the recurrent network shown in Fig.~\ref{fig:recurrent-network}, with the specific dependencies given by Eqs.~\eqref{eq:general-map1-recurrent}--\eqref{eq:general-map-recurrent} and an internal weight matrix $W$ given by \eqref{eq:recurrent-weight-matrix}. \end{tcolorbox} \section{Networks from delay systems with linear instantaneous part and nonlinear delayed feedback} \label{sec:semilinear-case} \label{subsec:semilinear-delay-system} Particularly suitable for the construction of neural networks are delay systems with a stable linear instantaneous part and a feedback given by a nonlinear function of an affine combination of the delay terms and a driving signal. Such DDEs are described by the equation \begin{align}\label{eq:semilinear-system} \dot{x}(t) &= -\alpha x(t) + f(a(t)), \end{align} where $\alpha > 0$ is a constant time scale, $f$ is a nonlinear function, and \begin{align}\label{eq:activation-signal} a(t) &= z(t) + \sum_{d=1}^D \mathcal{M}_d(t)x(t-\tau_d). \end{align} Ref.~\cite{Krisztin2008} studied this type of equation for the case $D=1$, i.e., for one delay. An example of \eqref{eq:semilinear-system} is the Ikeda system~\cite{Ikeda1979}, where $D=1$, i.e., $a(t)$ consists of only one scaled feedback term $x(t-\tau)$, a driving signal $z(t)$, and the nonlinear function $f(a)=\sin(a)$. This type of dynamics can be applied to reservoir computing using optoelectronic hardware~\cite{Larger2012}. Another delay dynamical system of type~\eqref{eq:semilinear-system}, which can be used for reservoir computing, is the Mackey-Glass system~\cite{Appeltant2011}, where $D=1$ and the nonlinearity is given by $f(a)=\eta a /(1+|a|^p)$ with constants $\eta , p > 0$. In the work~\cite{Stelzer2020}, system~\eqref{eq:semilinear-system} is used to implement a deep neural network.
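For concreteness, the semilinear DDE with a Mackey-Glass-type nonlinearity can be integrated directly with an Euler scheme, and the node states read off by time-multiplexing. In the following sketch, all parameter values are illustrative assumptions, and a constant drive with unit modulation stands in for the trained signals $z(t)$ and $\mathcal{M}_1(t)$:

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the cited experiments)
alpha, theta, N, L = 1.0, 0.5, 10, 3
T = N * theta                        # length of one layer interval
tau = T                              # single delay, D = 1, as in the Mackey-Glass example
eta, p = 2.0, 9.0
f = lambda a: eta * a / (1.0 + abs(a) ** p)   # Mackey-Glass-type nonlinearity

dt = theta / 50.0                    # Euler step, much smaller than the node distance
n_hist = int(round(tau / dt))        # history buffer length
n_steps = int(round(L * T / dt))
x = np.zeros(n_hist + n_steps + 1)
x[:n_hist + 1] = 0.1                 # constant initial history
z = 0.2                              # constant drive signal (stands in for z(t))

for i in range(n_hist, n_hist + n_steps):
    a = z + x[i - n_hist]            # activation signal with M_1(t) = 1
    x[i + 1] = x[i] + dt * (-alpha * x[i] + f(a))

# node states x^ell_n = x((ell - 1)T + n*theta) of the last hidden layer
nodes = [x[n_hist + int(round(((L - 1) * T + n * theta) / dt))]
         for n in range(1, N + 1)]
```

The readout at the end shows the essential idea of the construction: one scalar time trace, sampled on a $\theta$-grid, yields the $N$ node states of each layer.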
Even though the results of the previous sections are applicable to \eqref{eq:semilinear-system}--\eqref{eq:activation-signal}, the special form of these equations allows for an alternative, more precise approximation of the network dynamics. \subsection{Interpretation as multilayer neural network} \label{subsec:ddnn-network} It is shown in \cite{Stelzer2020} that one can derive a particularly simple map representation for system~\eqref{eq:semilinear-system} with activation signal~\eqref{eq:activation-signal}. We do not repeat the derivation here and only present the resulting expressions. By applying a semi-analytic Euler discretization and the variation of constants formula, the following equations connecting the nodes in the network are obtained: \begin{align}\label{eq:first-hidden-layer-first-node} x^1_1 &= e^{-\alpha \theta} x_0 + \alpha^{-1}(1-e^{-\alpha \theta}) f( a^1_1), \\ x^1_n &= e^{-\alpha \theta} x^1_{n-1} + \alpha^{-1}(1-e^{-\alpha \theta}) f( a^1_n), \quad n = 2, \ldots , N, \end{align} for the first hidden layer. The hidden layers $\ell = 2, \ldots ,L$ are given by \begin{align}\label{eq:hidden-layer-first-node} x^\ell_1 &= e^{-\alpha \theta} x^{\ell -1}_N + \alpha^{-1}(1-e^{-\alpha \theta}) f( a^\ell_1), \\ x^\ell_n &= e^{-\alpha \theta} x^\ell_{n-1} + \alpha^{-1}(1-e^{-\alpha \theta}) f( a^\ell_n), \quad n = 2, \ldots , N.\label{eq:hidden-layer} \end{align} The output layer is defined by \begin{align}\label{eq:output-layer} \hat{y}_p := f^\mathrm{out}_p (a^\mathrm{out}), \quad p=1,\ldots, P, \end{align} where $f^\mathrm{out}$ is an output activation function.
Moreover, \begin{align} a^\mathrm{in}_n &:= \sum_{m=1}^{M+1} w^\mathrm{in}_{nm} u_m, & & n = 1, \ldots , N, \label{eq:activation-in}\\ a^1_n &:= g(a^\mathrm{in}_n), & & n = 1, \ldots , N, \label{eq:activation-first}\\ a^\ell_n &:= \sum_{j=1}^{N+1} w^\ell_{nj} x^{\ell -1}_j, & & n=1, \ldots , N, \ \ell = 2, \ldots , L,\label{eq:activation}\\ a^\mathrm{out}_p &:= \sum_{n=1}^{N+1} w^\mathrm{out}_{pn} x^L_n, & & p = 1, \ldots , P,\label{eq:activation-output} \end{align} where $u_{M+1}:=1$ and $x^\ell_{N+1} := 1$, for $\ell=1,\ldots ,L$. One can also formulate the relation between the hidden layers in a matrix form. For this, we define \begin{align} A := \begin{pmatrix} 0 & \cdots & \cdots & \cdots & 0 \\ e^{-\alpha\theta} & \ddots & & & \vdots \\ 0 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots &\vdots \\ 0 & \cdots & 0 & e^{-\alpha\theta} & 0 \end{pmatrix}. \end{align} Then, for $\ell = 2,\ldots ,L$, the equations~\eqref{eq:hidden-layer-first-node}--\eqref{eq:hidden-layer} become \begin{align}\label{eq:dnn-matrix-form-1} x^\ell = A x^\ell + \begin{pmatrix}e^{-\alpha\theta}x^{\ell -1}_N\\0\\ \vdots \\ 0\end{pmatrix} + \alpha^{-1}(1-e^{-\alpha \theta})f(W^\ell x^{\ell -1}), \end{align} where $f$ is applied component-wise. By subtracting $A x^\ell$ from both sides of Eq.~\eqref{eq:dnn-matrix-form-1} and multiplying by the matrix \begin{align} E := (\mathrm{Id} - A)^{-1} = \begin{pmatrix} 1 & 0 & \cdots & \cdots & 0 \\ e^{-\alpha\theta} & 1 & \ddots & & \vdots \\ e^{-2\alpha\theta} & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ e^{-(N-1)\alpha\theta} & \cdots & e^{-2\alpha\theta} & e^{-\alpha\theta} & 1 \end{pmatrix}, \end{align} we obtain a matrix equation describing the $\ell$-th hidden layer \begin{align} x^\ell = \begin{pmatrix}e^{-\alpha\theta}x^{\ell -1}_N\\e^{-2\alpha\theta}x^{\ell -1}_N\\ \vdots \\ e^{-N\alpha\theta}x^{\ell -1}_N\end{pmatrix} + \alpha^{-1}(1-e^{-\alpha \theta}) E f(W^\ell x^{\ell -1}).
\end{align} The neural network~\eqref{eq:first-hidden-layer-first-node}--\eqref{eq:activation-output} obtained from delay system~\eqref{eq:semilinear-system}--\eqref{eq:activation-signal} can be trained by gradient descent~\cite{Stelzer2020}. The training parameters are the entries of the matrices $W^\mathrm{in}$ and $W^\mathrm{out}$, the step heights of the modulation functions $\mathcal{M}_d(t)$, and the bias signal $b(t)$. \subsection{Network for large node distance $\theta$} \label{subsec:largetheta} In contrast to the general system~\eqref{eq:general-system}, the semilinear system~\eqref{eq:semilinear-system} with activation signal~\eqref{eq:activation-signal} does not emulate a network of nodes only for small distances $\theta$. It is also possible to choose large $\theta$. In this case, we can approximate the nodes given by Eq.~\eqref{eq:nodes} by the map limit \begin{align}\label{eq:map-limit} & x^\ell = \alpha^{-1} f(a^\ell), \\ & \text{where} \quad a^\ell = W^\ell x^{\ell-1} \ \text{for} \ \ell>1 \quad \text{and} \quad a^1 = g(W^\mathrm{in} u), \end{align} up to exponentially small terms. The reason for this limit behavior lies in the nature of the local couplings. Considering Eq.~\eqref{eq:semilinear-system}, one can interpret the parameter $\alpha$ as a time scale of the system, which determines how fast information about the system state at a certain time point decays while the system evolves. This phenomenon is related to the so-called instantaneous Lyapunov exponent \cite{Heiligenthal2011,Heiligenthal2013,Kinzel2013}, which equals $-\alpha$ in this case. As a result, the local coupling between neighboring nodes emerges when only a small amount of time $\theta$ passes between the nodes. Hence, by increasing $\theta$, one can reduce the local coupling strength until it becomes negligibly small. For a rigorous derivation of Eq.~\eqref{eq:map-limit}, we refer to \cite{Stelzer2020}.
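The equivalence between the node-wise layer recursion and the closed matrix form involving $E$ can be verified numerically. A minimal sketch, in which the random weights, the choice $f=\tanh$, and all parameter values are illustrative assumptions:

```python
import numpy as np

# Illustrative sizes and parameters; f = tanh stands in for the nonlinearity
alpha, theta, N = 1.0, 0.5, 6
rng = np.random.default_rng(2)
W = 0.5 * rng.normal(size=(N, N + 1))          # layer weights, last column = bias
x_prev = np.append(rng.normal(size=N), 1.0)    # previous layer state, x_{N+1} = 1
f = np.tanh
c = (1.0 - np.exp(-alpha * theta)) / alpha
a = W @ x_prev                                  # activations a^ell_n

# sequential node recursion: each node couples to its predecessor with e^{-alpha*theta}
x_seq = np.empty(N)
x_seq[0] = np.exp(-alpha * theta) * x_prev[N - 1] + c * f(a[0])
for n in range(1, N):
    x_seq[n] = np.exp(-alpha * theta) * x_seq[n - 1] + c * f(a[n])

# closed matrix form: E is the lower-triangular Toeplitz inverse of (Id - A)
E = np.array([[np.exp(-alpha * theta * (i - j)) if i >= j else 0.0
               for j in range(N)] for i in range(N)])
decay = np.exp(-alpha * theta * np.arange(1, N + 1)) * x_prev[N - 1]
x_mat = decay + c * (E @ f(a))
```

Both routes compute the same layer state; the matrix form simply unrolls the local coupling chain into the triangular matrix $E$.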
The apparent advantage of the map limit case is that the obtained network matches a classical multilayer perceptron. Hence, known methods such as gradient descent training via the classical back-propagation algorithm \cite{Rumelhart1986} can be applied to the delay-induced network~\cite{Stelzer2020}. The downside of choosing large values for the node separation $\theta$ is that the overall processing time of the system scales linearly with $\theta$. We need a period of time $T=N\theta$ to process one hidden layer. Hence, processing a whole network with $L$ hidden layers requires the time period $LT=LN\theta$. For this reason, the work~\cite{Stelzer2020} provides a modified back-propagation algorithm for small node separations to enable gradient descent training of networks with significant local coupling. \section{Conclusions} We have shown how networks of coupled maps with arbitrary topology and arbitrary size can be emulated by a single (possibly even scalar) DDE with multiple delays. Importantly, the coupling weights can be adjusted by changing the modulations of the feedback signals. The network topology is determined by the choice of time-delays. As shown previously \cite{Appeltant2011,Larger2012,Brunner2013,Larger2017,Stelzer2020}, special cases of such networks are successfully applied to reservoir computing or deep learning. As an interesting conclusion, it follows that the temporal dynamics of DDEs can unfold arbitrary spatial complexity, which, in our case, is reflected by the topology of the unfolded network. In this respect, we shall mention previously reported spatio-temporal properties of DDEs \cite{Arecchi1992,Giacomelli1994,Giacomelli1996,Bestehorn2000,Giacomelli2012,Yanchuk2014,Yanchuk2015a,Yanchuk2015b,Kashchenko2016,Yanchuk2017}. These results show how in some limits, mainly for large delays, the DDEs can be approximated by partial differential equations.
Further, we remark that similar procedures have been used for deriving networks from systems of ordinary differential equations \cite{Haber2017,Lu2018,Chen2018}. However, in these approaches, one needs an $N$-dimensional system of equations to implement layers with $N$ nodes. This is in contrast to the DDE case, where the construction is possible with just a single-variable equation. As a possible extension, a realization of adaptive networks using a single node with delayed feedback would be an interesting open problem. In fact, the application to deep neural networks in \cite{Stelzer2020} realizes an adaptive mechanism for the adjustment of the coupling weights. However, this adaptive mechanism is specially tailored for DNN problems. Another possibility would be to emulate networks with dynamical adaptivity of connections \cite{Berner}. The presented scheme can also be extended by employing delay differential-algebraic equations \cite{Mehrmann,Unger}. \end{document}
A little bit of sex prevents mutation accumulation even in apomictic polyploid plants

Ladislav Hodač, Simone Klatt, Diego Hojsgaard, Timothy F. Sharbel & Elvira Hörandl

BMC Evolutionary Biology 19, 170 (2019)

Abstract

In the absence of sex and recombination, genomes are expected to accumulate deleterious mutations via an irreversible process known as Muller's ratchet, especially in the case of polyploidy. In contrast, no genome-wide mutation accumulation was detected in a transcriptome of facultative apomictic, hexaploid plants of the Ranunculus auricomus complex. We hypothesize that mutations cannot accumulate in flowering plants with facultative sexuality because sexual and asexual development concurrently occurs within the same generation. We assume a strong effect of purging selection on reduced gametophytes in the sexual developmental pathway because previously masked recessive deleterious mutations would be exposed to selection. We test this hypothesis by modeling mutation elimination using apomictic hexaploid plants of the R. auricomus complex. To estimate mean recombination rates, the mean number of recombinants per generation was calculated by genotyping three F1 progeny arrays with six microsatellite markers and character incompatibility analyses. We estimated the strength of purging selection in gametophytes by calculating abortion rates of sexual versus apomictic development at the female gametophyte, seed and offspring stage. Accordingly, we applied three selection coefficients by considering effects of purging selection against mutations on (1) male and female gametophytes in the sexual pathway (additive, s = 1.000), (2) female gametophytes only (s = 0.520), and (3) adult plants only (sporophytes, s = 0.212).
We implemented recombination rates into a mathematical model considering the three different selection coefficients, and a genomic mutation rate calculated from the genome size of our plants and plant-specific mutation rates. We found a mean of 6.05% recombinants per generation. This recombination rate eliminates mutations after 138, 204 or 246 generations, depending on the respective selection coefficients (s = 1.000, 0.520, and 0.212). Our results confirm that the empirically observed frequencies of facultative recombination suffice to prevent accumulation of deleterious mutations via Muller's ratchet even in a polyploid genome. In flowering plants, the efficiency of selection is strongly increased because selection acts on the haplontic (reduced) gametophyte stage.

Background

The rate of genomic deleterious mutation accumulation and its effects are fundamental to the evolution of species [1]. The evolution of the mutation rate is shaped by the presence or absence of recombination, which is an important consequence of meiosis and sex [2, 3]. Recombining genomes eliminate deleterious mutations at significantly higher rates than is possible in non-recombining genomes [4, 5] because recombination can segregate multiple deleterious mutations into single linkage groups that can be eliminated [6]. Obligate asexual lineages, in contrast, are expected to accumulate recessive deleterious mutations in an irreversible, ratchet-like manner over generations as the least loaded genotypes are lost by random drift (Muller's ratchet; [7, 8]). In theoretical models, even low recombination rates slow down mutation accumulation [7, 8]. However, empirical studies on genomic mutation accumulation are still rare, especially for polyploid genomes. Polyploidy multiplies the mutation load by cU, where c is the ploidy level and U is the mutation rate per haploid genome [9]. Polyploid genomes, however, are typical for many asexual eukaryotes and are particularly common in asexually reproducing plants [10].
A novel hypothesis by [11] for flowering plants predicted that even in high polyploids, low rates of facultative sexuality would suffice to counteract mutation accumulation. This model is based on the consideration of three features specific to flowering plants: First, a sporophyte generation (the familiar green plant) and a gametophyte generation (embryo sac and pollen) alternate during the life cycle, which can increase the efficacy of haploid selection [12]. The female gametophyte remains on the sporophyte, but nevertheless has a separate development. With sexual development, the sporophyte is diplontic (unreduced), while the gametophyte develops from a product of meiosis and hence is haplontic (reduced); in asexual development both sporophyte and gametophyte are diplontic. Diploidy serves to mask the effects of deleterious recessive mutations in heterozygous loci as a non-mutated copy of the respective gene is available on the homologous chromosome [13, 14]. Haploidy eliminates the masking effect and exposes mutations to purging selection, and selection is most efficient in haploids [12, 13]. After a prolonged diploid phase, the return to haploidy leads to the exposure of previously masked deleterious recessive alleles [12, 15]. In plants, gametophytes are multicellular 'mini'-organisms with significant nuclear gene expression [16, 17] and complex signaling pathways [18]. Consequently, purging selection can strongly act on haploid, or, more generally, haplontic sexual gametophytes, but to a lesser degree on diploid or diplontic (unreduced) apomictic ones (Fig. 1). Second, plant developmental biology allows for concurrent selection on sexual and asexual development within the same generation. Plants reproducing via apomixis (i.e., clonal reproduction via seed) develop a female gametophyte from an initial cell that has not undergone meiosis. Hence, these egg cells are unreduced, non-recombinant, and develop parthenogenetically into embryos.
In parallel (in the same or another ovule of the same plant), meiotically produced megaspores can develop into gametophytes, and the reduced, recombinant egg cells are fertilized; the zygotes develop into embryos (Fig. 1). That means sexual and apomictic gametophytes/gametes/seeds can be produced by the same mother plant in the same generation [19,20,21,22]. Occasional fertilization of egg cells is possible as apomixis affects only female development, while male gametophyte development in the pollen is more or less normal and results mostly in reduced and recombined male gametes [22]. Third, apomictic plants are almost exclusively polyploids [10], which may be selected for the masking effects of deleterious mutations by unmutated chromosome copies [9, 23]. Though polyploids would supposedly have an increased absolute number of mutations because of a higher number of possible mutation sites, recessive mutations would not become expressed in the heterozygous state [9, 12].

Fig. 1: Female gametophyte development in facultative apomictic (aposporous) allohexaploid Ranunculus auricomus. The scheme illustrates the parallel development of sexual (upper part) and apomictic (aposporous; lower part) gametophytes and seeds on the same plant (i.e., developmental pathways from megaspore/aposporous initial (a) via female gametophyte (b) to embryo (c)). The recessive deleterious mutation is indicated by a red asterisk. In the reduced phase of the sexual development, strong selection (s = 1.0) eliminates mutated gametophytes. The mutation is transmitted into embryo and offspring under weaker selection (s < 1.0). The aposporous development without a reduced phase transmits the mutation into embryo and offspring, as the mutation remains masked.

We hypothesize that two factors effectively counteract mutation accumulation in polyploid asexual plants: (1) facultative sexuality and (2) selection on reduced, recombinant sexual gametophytes.
Strikingly, a transcriptome study of facultative asexual hexaploid lineages of the Ranunculus auricomus complex revealed that dN/dS ratios did not significantly differ between sexual and asexual species. Asexuals showed no signs of genome-wide mutation accumulation despite ca. 80 kya of hexaploid genome evolution [24]. These hexaploid members of the Ranunculus auricomus species complex exhibit facultative gametophytic aposporous apomixis, where asexual development with unreduced gametophytes initially runs in parallel with the sexual development with reduced gametophytes [25] (shown in Fig. 1). Evidence of low to medium recombination rates was so far inferred only from natural populations [26]. Here we aimed to estimate (1) the recombination rate per generation using progeny arrays, (2) differential effects of purging selection on reduced/unreduced gametophytes by calculating abortion rates, and (3) the speed of mutation elimination in the hexaploid genome under the observed recombination rate and three selection scenarios via a mathematical model.

Results

The three progeny arrays exhibited similar genotype diversities [D(V − progeny) = 0.19, D(T − progeny) = 0.20, D(I − progeny) = 0.28] with 5–8 genotypes per progeny array. Pairwise genetic distances between genotypes showed that most progenies were clonal and identical to the respective mother genotype, but non-maternal offspring appeared in all three progenies. Character incompatibility methods revealed recombinants in the T and V clones. After averaging proportions of recombinant genotypes over the three progeny arrays, the mean number of recombinants as a proxy for the recombination rate per generation was computed as r = 0.061 (Table 1).

Table 1: Genotyping using microsatellite analysis of three progenies of R. carpaticola × R. cassubicifolius.
C = maternal clone, M = SSR mutant clone, R = recombinant.

Starting at time t = 0 generations after a new deleterious mutation has become fixed in a non-recombining clone, the frequency of mutant clones is κm = 1. A facultative asexual plant population with a non-zero recombination rate per generation and with one fixed deleterious mutation consists of three different offspring classes in the first generation (Additional file 3a). The frequencies of genotypes are given as: ρ0 (non-mutant recombinant from sexual seeds; 'R' genotype in Additional file 3a-b), ρm (mutant recombinant from sexual seeds; 'R' genotype marked with an asterisk in Additional file 3a-b), κm (mutant clone from apomictic seeds; 'C' genotype marked with an asterisk in Additional file 3a-b). We assume that recombination operates in all subsequent generations with the empirically estimated rate r = 0.061, or ~ 6% of recombinant genotypes per generation, and that further generations can continue to produce both recombinant and clonal offspring ('R' and 'C' genotypes in Additional file 3a-b). Hence, in the second generation, the mutant clonal and mutant recombinant mothers each can produce three offspring classes (i.e., non-mutant and mutant recombinants and mutant clones). In contrast, the non-mutant recombinant mother can produce non-mutant clonal offspring ('C' in Additional file 3a-b) with a frequency given as κ0 in the second and following generations (Additional file 3c). If selection is s = 1.000, any mutated gametophyte is eliminated. Considering a selection s = 0.520 on female sexual gametophytes only (as approximated by the abortion rate), not only non-mutant recombinants (ρ0 = r/|s−2|) but also mutant recombinants (ρm = ρ0|1−s|) might produce offspring. Over generations, the frequency of mutant recombinants decreases with ρm(t) = ρm exp(−ρ0t).
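The decline model defined by these frequencies can be written out as a short script. The following is a self-consistency sketch using only the definitions above; the function names are illustrative, and the threshold 0.001 matches the minimum frequency used later in the text:

```python
import math

def rho_0(r, s):                 # frequency of non-mutant recombinants
    return r / abs(s - 2)

def rho_m(r, s):                 # frequency of mutant recombinants
    return rho_0(r, s) * abs(1 - s)

def kappa_m(t, r, s):            # frequency of mutant clones after t generations
    return (1 - rho_m(r, s)) * math.exp(-rho_0(r, s) * t)

def t_min(k_min, r, s):          # generations until kappa_m drops to k_min
    return math.log((1 - rho_m(r, s)) / k_min) / rho_0(r, s)

r = 0.061                        # empirical recombination rate per generation
for s in (1.000, 0.520, 0.212):  # the three selection coefficients
    t = t_min(0.001, r, s)
    assert abs(kappa_m(t, r, s) - 0.001) < 1e-12   # closed form inverts kappa_m
```

The loop confirms the qualitative behavior discussed below: the weaker the selection coefficient, the more generations are needed before the mutant clone frequency falls below the threshold.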
Similarly, the frequency of mutant clones κm(t) = (1−ρm)exp(−ρ0t) declines with time: $$ {\kappa}_m(t)=\left(1-\frac{r\left|1-s\right|}{\left|s-2\right|}\right){e}^{-\frac{rt}{\left|s-2\right|}} $$ The decay steepness (Fig. 2) is more dependent on the rate of recombination than on the strength of selection because selection can act only on homozygous mutant recombinants (ρm). Based on Eq. (1), we derived an estimate of the time (t) needed to reach an arbitrary minimum frequency of mutant clones κm,min, at which mutants can be considered as eliminated or lost by genetic drift. For any value of κm,min, the time necessary to reach it corresponds to t(κm,min) = ln[(1−ρm)/κm,min]/ρ0, or expressed with recombination and selection:

Fig. 3: Abortion rates of viable reproductive units during sexual and aposporous development. Development starts from the megaspore phase (a), continues with the gametophyte (b) and the embryo phase (c), and ends with the seedling phase. The major abortion (mean 52%) happens during the reduced female gametophyte (FG) phase (b) of the sexual development (blue dashed line). In comparison, the decline is less steep during the unreduced aposporous female gametophyte phase (solid gray line). These results suggest stronger selection upon the reduced phase (n) of the sexual development than upon the unreduced phase (2n) of the aposporous development. The decline from the embryo stage to the offspring (seedling) stage is almost the same in sexual and apomictic offspring, as germination rates are not significantly different. Proportions of reproductive units are given as means with confidence intervals.
Data were adapted from [25, 26].

$$ t\ \left({\kappa}_{m,\mathit{\min}}\right)=\ln \left[\left(1-\frac{r\left|1-s\right|}{\left|s-2\right|}\ \right){\kappa_{m,\mathit{\min}}}^{-1}\right]\frac{\left|s-2\right|}{r} $$ Any generation with a non-zero recombination rate releases non-mutant clones with frequency κ0(t) = 1 − [ρ0 + κm(t)], or: $$ {\kappa}_0(t)=1-\frac{r\ \exp \left(-\frac{rt}{\left|s-2\right|}\right)}{\left|s-2\right|} $$ Even if mutation accumulation operates rapidly in small populations, a constant recombination rate of 6% restores the least-loaded genotype class within a single generation cycle. According to Eq. (2), the elimination of all mutant clones (e.g., κm,min < 0.001) takes approximately 138 generations for s = 1.000, 204 generations for s = 0.520 or 246 generations for s = 0.212 (Fig. 2). However, if obligate apomixis without any recombination is assumed, Muller's ratchet would act rapidly in our model system. Depending on the model used, the time between ratchet clicks would range from 32 to 3570 generations (Table 2).

Table 2: The speed of Muller's ratchet under obligate asexuality, assuming a fixed population size N = 10^3 individuals and deleterious mutation rate U = 1.116. n0 = number of individuals within the least-loaded class, s = selection coefficient, t(J) = interclick time estimate based on Eq. 34 in [28], t(NS) = interclick time estimate based on Eq. 33 in [29], and t(ME) = interclick time estimate based on Eq. 24 in [30]; all time estimates are given as numbers of generations per "click". Bold text = estimates of the selection coefficients of the fastest ratchets; the (J)-value was computed according to Eq.
34 in [28], corresponding to a U-shaped function with a local minimum at s = 0.212, which we considered as a critical value of the fastest ratchet acting on sporophytes.

Discussion

We present here for the first time a model of mutation elimination in facultative asexual plants under consideration of empirical estimates of the recombination rate, selection on gametophytes, and polyploid genomes in hexaploid Ranunculus auricomus. Our model provides support for the hypothesis that low rates of facultative sex prevent genome-wide mutation accumulation even in polyploids, as was actually observed in a transcriptome study of the same model system [24].

Recombination rates

We estimated recombination rates as the number of recombinants per sporophyte generation with microsatellites derived from the nuclear transcriptome dataset [24]. Directly after meiosis, the actual meiotic recombination rate is probably higher according to proportions of megaspores formed (Fig. 3). However, since female gametophytes are few-celled and deeply embedded in tissues, and because of the high chromosome numbers, it is not feasible to quantify meiotic recombination directly with cytogenetic methods. By calculating final recombinant offspring genotypes, we rather use a proxy for a minimum recombination rate that actually occurs during the whole sexual development. Sorting out SSR variants is especially important for our study, as SSR loci can show high mutational dynamics in hexaploids [31]. By using standard character incompatibility methods to discriminate SSR variants from recombinants [26, 32, 33], we avoid a critical overestimate of recombination rates. Overestimates would invalidate our model, while underestimates would just mean that the speed of mutation elimination could be even faster.

Fig. 2: Model of mutant offspring decline under a constant recombination rate per generation (r ≈ 6%) and three selection coefficients.
The strongest selection (s = 1.000) on female and male gametophytes eliminates the mutation in ~ 138 generations (dotted blue line). The medium selection coefficient (s = 0.520), approximating selection on female gametophytes only, eliminates the mutation after ~ 204 generations. The weakest selection coefficient (s = 0.212; derived from [27]) represents selection on the sporophyte only; the mutation is eliminated after ~ 246 generations.

The recombination rate mostly depends on the degree of facultative sexuality, a factor which varies considerably among flowering plants. In general, plants with alternative meiotic and apomictic development in the same ovule, as is the case in apospory (Fig. 3) or adventitious embryony, have higher degrees of facultative sexuality (i.e., higher proportions of sexually formed seed in one seed generation). These developmental pathways are found in almost 90% of all apomictic plant genera [34]. Hence, our study can serve as a model system for the great majority of apomictic plants. By using averaged data from different environmental conditions [35], we covered a broad range of naturally occurring variation in proportions of sexual versus apomictic seed formation. Indeed, natural populations of R. carpaticola × R. cassubicifolius exhibit a comparable genetic variation [26]. Some other apomictic plant genera show developmental pathways where meiosis is either restitutional or mitosis-like (e.g., diplospory [19]) or where recombination is suppressed (e.g., permanent translocation heterozygosity; [36]). In these cases, alternative sexual pathways cannot be easily realized, because meiosis itself is altered, and there is no developmental phase where sexual and asexual gametophyte development could exist in parallel within one ovule. Hence, these forms of apomixis tend to be more obligate [19].
Indeed, transcriptomes of Oenothera (a rare case of obligate asexuality due to translocation heterozygosity; [36]) and genomes of diplosporous Boechera (except for conserved coding sites; [37]) showed signatures of mutation accumulation. These cases are in accordance with our results in that the maintenance of high levels of facultative sexuality, as in Ranunculus, and certain levels of recombination are crucial for the suppression of mutation accumulation in apomictic plants.

Purging selection during reduced phases of the plant life cycle

Our results support the hypothesis that purifying selection at haplontic stages of the life cycle is an important benefit of sexuality [12]. For plants, the prolonged gametophyte phase, with many genes expressed, appears to be the main target of purifying selection. This has been shown for pollen [38] and hypothesized for female gametophytes based on previous developmental studies [39]. Here we demonstrate the same mechanism for female development in R. carpaticola × R. cassubicifolius. We used three selection coefficients to estimate mutation elimination via purging selection. The medium selection coefficient s = 0.520 was taken as an approximation to the fitness decline of ca. 52% during the female sexual pathway, inferred from the observed proportions of functional megaspores to sexually formed seeds (Fig. 3). However, considering that recombination and purging selection also act on male gametophytes (e.g., [40]), and that effects are additive after the fusion of male and female gametes, the efficacy of purifying selection in the sexual pathway can be increased by various mechanisms. Self-pollination is frequent in apomictic plants, probably as a consequence of a breakdown of self-incompatibility systems in polyploids; selection favors self-fertility especially in pseudogamous apomictic plants [41].
After self-fertilization of the egg cell with mutation-carrying pollen, the mutation attains double dosage in the zygote and consequently in the embryo, and hence will be exposed to selection during the sporophytic phase [11]. This happens not only after self-pollination within the same plant (i.e., within the same flower or between flowers of the same individual) but also after "cross-pollination" between clone-mates of a population, i.e., between genetically identical individuals. Furthermore, gene conversion during meiosis could render the mutation homozygous as well [11]. Gene conversion was considered an important mechanism preventing mutation accumulation in ancient asexual animals [42]. Although we have no data on these mechanisms, we regard a strong additive selection coefficient of s = 1.000 as possible.

Mutation rate estimates and Muller's ratchet

Multiple factors may act against Muller's ratchet, e.g., epistatic interactions, beneficial mutations, compensatory mutations and gene conversion. However, the purpose of our model is to illustrate the effect of deleterious mutations. We show that even in the absence of all of the above corrective factors, a small amount of recombination suffices to prevent facultative asexuals from mutational meltdown. Concerning beneficial mutation rates, there are two lines of evidence, theoretical and experimental. Theoretical models suggest that epistatic effects can effectively halt Muller's ratchet [28] or that even a low rate of beneficial mutations might have the same effect [43]. However, in the fast ratchet regime, only a considerable rate of beneficial mutations might prevent an asexual population from extinction [43]. The authors confirmed that the combined effect of beneficial mutations and facultative recombination has not been explored even at the theoretical level, and a synthesis in a common model including deleterious and beneficial mutations is missing.
A full analysis of this complex situation is clearly beyond the scope of our study. Transcriptome data from R. auricomus revealed only a small number of outlier genes under positive selection [24], and hence we assume the effects of potentially beneficial mutations to be negligible. The other line of evidence, experiments on model organisms, shows considerable discrepancies not only among studied organisms (yeasts, viruses, Arabidopsis, Daphnia) but also among experimental approaches, with differences between laboratory and field conditions [44]. Some studies suggest negligible rates of beneficial mutations [45, 46]; others postulate that a considerable proportion of genomic mutations is beneficial [47, 48]. Observations of extremely high beneficial mutation rates are considered to be experimental artifacts [44]; such surprisingly high rates might reflect specific environmental conditions, the experimental setup, or the specific fitness of founder genotypes [44]. An unexpectedly high ratio of beneficial to deleterious mutations might also result from low detectability of phenotypic variation due to slightly deleterious mutations [49]. Our model suggests that facultative sexuality in polyploid Ranunculus auricomus halts Muller's ratchet, because (1) the least-loaded class is restored in every generation cycle, and (2) mutant frequency decreases under persistently low recombination rates combined with selection on gametophytes. However, in the obligatory absence of sex, Muller's ratchet might operate quickly, particularly in very small populations [50, 51]. Considering the small effective population size of hexaploid R. carpaticola × R. cassubicifolius and a relatively high deleterious mutation rate adapted from [52], we estimated the critical selection coefficient, which is responsible for the fastest ratchet 'clicks'.
This critical value of s approaches 0.212, 0.218 or 0.340, depending on whether it is computed according to [28], [29], or [30], respectively. Under these selection scenarios, without haplontic phases and without any recombination, the ratchet could click in the worst-case scenario within 32 generations (Table 2). However, this scenario is unrealistic, as facultative sexuality is a persistent mechanism in plants. The genetic and epigenetic control mechanisms of apomixis mostly represent just ectopic or asynchronous mis-expressions of the same genes regulating the sexual pathway [53]. The (epi)genetic control factors of apomixis remain in the heterozygous or hemizygous state with the wild-type (epi)alleles [53, 54]. The apomixis-controlling loci represent large, non-recombinant genomic regions and are inherited as dominant factors [54]. Therefore, apomixis-controlling alleles or epialleles cannot become homozygous. Since the expression of apomixis is dependent on allele dosage [55], apomixis can never become obligate; it is just superimposed on the sexual pathway [53]. The option of an alternative sexual reproductive pathway, or even a return to obligate sexuality, remains present. Interestingly, plant species with a lower degree of asexuality and signatures of mutation accumulation are diploids [36]. The lower mutation rate of diploid genomes might help these lineages to survive with "almost no sex" for some time. For polyploid apomicts, the typical condition for most apomictic plants, maintenance of low levels of facultative sexuality prevents mutation accumulation. Hence, long-term evolution of facultatively apomictic plant lineages is probably not limited by the threat of Muller's ratchet, despite polyploidy. We tested here for mutation accumulation in a typical apomictic plant model system with a large polyploid hybrid genome, but one still exhibiting facultative sexuality in both female and male development.
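The 'worst scenario' regime behind these critical selection coefficients can be sketched with Haigh's classical approximation for the expected size of the least-loaded (mutation-free) class, n0 = N·e^(−U/s). This is a textbook approximation for illustration, not the exact formulas of [28–30]; the ratchet clicks fastest when n0 is small.

```python
import math

N = 1_000              # effective population size (estimated in the paper)
U = 0.100 * 11.16      # hexaploid deleterious mutation rate, U_6x = 1.116

# Haigh's approximation: expected size of the least-loaded class is
# n0 = N * exp(-U/s); a small n0 means frequent ratchet 'clicks'.
for s in (0.212, 0.218, 0.340):
    lam = U / s
    n0 = N * math.exp(-lam)
    print(f"s = {s:.3f}  lambda = U/s = {lam:.2f}  n0 ~ {n0:.1f}")
```

For s = 0.212 this gives λ ≈ 5.3, matching the paper's 'worst scenario' regime, and a least-loaded class of only a handful of individuals out of 1000, which is why the ratchet is expected to click so quickly without recombination.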
We used here a novel approach combining empirical data and mathematical modelling. Our results confirmed that even in a hexaploid genome, the empirically observed low frequencies of facultative sexuality (c. 6% of recombinants per generation) suffice to prevent accumulation of deleterious mutations via Muller's ratchet. The efficiency of purging selection in the sexual pathway of flowering plants is strongly increased by its acting on the haplontic (meiotically reduced) gametophyte stage, in which many genes are expressed. Hence, previously masked recessive mutations are exposed to purging selection in the gametophyte. These results provide a general explanation for the maintenance of a little bit of sex in polyploid asexual flowering plants.

Our model is based on empirical estimates of recombination rate, selection coefficients, and genomic mutation rate for the hexaploid ancient hybrid Ranunculus carpaticola × Ranunculus cassubicifolius, a member of the Eurasian R. auricomus complex [24, 27, 35]. Voucher specimens were deposited in the herbarium of the University of Goettingen (GOET).

Recombination rate per generation

As a proxy for recombination rates, we determined the mean proportion of recombinant genotypes per offspring generation using microsatellite markers. This approach is most feasible when phenotypic markers for studying segregation are not available [56]. Cytogenetic meiosis studies for assessing recombination rates are so far not possible in our model system because of the difficulty of observing female plant meiosis directly and the high chromosome numbers (2n = 48). To estimate a minimum recombination rate, we produced progeny arrays of three allohexaploid mother plants from geographically separated populations belonging to the ancient hybrid Ranunculus carpaticola × Ranunculus cassubicifolius [24, 27, 35]. Ranunculus auricomus is a long-lived plant with a long generation turnover (2–3 years), and hence we could analyze only one generation.
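The progeny-array logic can be sketched as follows: offspring multilocus genotypes (binary presence/absence of SSR alleles) are compared with the mother, and the proportion of recombinant offspring, averaged over arrays, approximates the recombination rate r. All genotypes and counts below are hypothetical placeholders, not the published data (the paper reports c. 6% recombinants overall).

```python
# Hypothetical binary presence/absence genotypes (columns = SSR alleles).
mother = (1, 1, 0, 1, 0, 0)
offspring = {
    "F1_01": (1, 1, 0, 1, 0, 0),   # identical to mother -> clonal
    "F1_02": (1, 0, 0, 1, 1, 0),   # two allele differences -> recombinant candidate
    "F1_03": (1, 1, 0, 1, 0, 1),   # one novel allele -> possible SSR mutation
}

def diff_alleles(m, child):
    """Indices at which an offspring genotype differs from the mother."""
    return [i for i, (a, b) in enumerate(zip(m, child)) if a != b]

non_maternal = {name: diff_alleles(mother, g) for name, g in offspring.items()}
for name, d in non_maternal.items():
    print(name, "maternal (clonal)" if not d else f"non-maternal at alleles {d}")

# Averaging recombinant proportions over the three progeny arrays gives r
# (placeholder counts; array sizes 30, 38, 39 as in the paper):
recombinant_fraction = {"I": 2 / 30, "T": 2 / 38, "V": 3 / 39}
r = sum(recombinant_fraction.values()) / len(recombinant_fraction)
print(f"mean recombination rate per generation r ~ {r:.3f}")
```

Note that a non-maternal genotype is only a *candidate* recombinant at this point; the character incompatibility tests described below are what separate true recombinants from SSR mutations.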
We selfed the three mother plants under controlled garden conditions, following the rationale of [41], and generated 30, 38 and 39 F1-offspring per mother plant ("I", "T" and "V", respectively; Table 1). We genotyped 107 plants with six nuclear microsatellite loci previously developed from the RNA-Seq data of [24]. We used the five SSR loci and protocols from [35] and one additional locus, R2252 (forward primer: TCGGGTTCACCCACTAAATC, annealing temperature = 60 °C; reverse primer: CATGGACTAGTTTCCGCCAT, annealing temperature = 60 °C). The plants are hexaploid with up to six alleles per locus, and allele copy number in a genotype cannot be reliably scored from electropherograms. Hence, we scored the fragments as dominant and recorded presences and absences of each allele across all analyzed individuals in a binary matrix (Additional file 1). In the next step, we compared the F1-offspring multilocus genotypes with their respective mother-plant genotypes and estimated the total number of genotypes and the genotype diversity D (Nei's corrected genetic diversity) in the software GENOTYPE/GENODIVE [57]. In the same program, we computed a matrix of pairwise genetic distances between all genotypes assuming the infinite allele model (IAM; [58]) and plotted them as histograms (Additional file 2). Since the recombination rate has the strongest effect on mutation elimination, overestimating the number of recombinants per generation would invalidate the model. In SSR studies, this could happen if non-maternal SSR variants that have just arisen from replication slippage [59] were scored as recombinants. In fact, hexaploid plants can show highly dynamic SSR variation [31]. To discriminate true recombinants from non-maternal SSR variants, we used three character incompatibility methods. First, we used character incompatibility analysis as implemented in the module Jactax of the program PICA 4.0 [60].
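The character incompatibility criterion behind this analysis amounts to a four-state test on pairs of binary characters: if all four combinations 0/0, 0/1, 1/0, 1/1 occur among the genotypes, the pair is incompatible with a tree-like pattern and signals recombination. A minimal sketch (the real PICA implementation is more elaborate):

```python
from itertools import combinations

def incompatible(col_a, col_b):
    """Two binary characters are incompatible with a hierarchical,
    tree-like divergence pattern if all four state combinations occur
    among the genotypes -- a signal of recombination, not mutation."""
    return {(a, b) for a, b in zip(col_a, col_b)} == {(0, 0), (0, 1), (1, 0), (1, 1)}

def matrix_incompatibility(matrix):
    """Number of incompatible character pairs in a genotypes-by-alleles
    binary matrix (analogous to PICA's matrix incompatibility, MI)."""
    columns = list(zip(*matrix))
    return sum(incompatible(a, b) for a, b in combinations(columns, 2))

# Four genotypes A-D showing all four combinations at two loci -> recombination:
recombinant_pattern = [(1, 0), (1, 1), (0, 1), (0, 0)]
# A hierarchical (tree-compatible) pattern -> marker mutation, MI = 0:
mutational_pattern = [(1, 0), (1, 0), (0, 1), (0, 1)]
print(matrix_incompatibility(recombinant_pattern))   # 1 incompatible pair
print(matrix_incompatibility(mutational_pattern))    # 0
```

The two example matrices mirror the A–D genotype examples given in the text.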
Any two characters (alleles of the binary matrix) are incompatible with a hierarchical, tree-like divergence pattern if they show all four character-state combinations in at least four different genotypes A, B, C, D (e.g., A: 1/0, B: 1/1, C: 0/1, D: 0/0), which is a strong signal for recombination [61]. Single novel character states, or combinations that are compatible with a hierarchical pattern (e.g., A: 1/0, B: 1/0, C: 0/1, D: 0/1), are a signal for mutations in the marker system. The requirement of at least four different genotypes and two different loci was met in all three progeny arrays (Additional file 2). Character compatibility analysis calculates an initial matrix incompatibility (MI), and subsequent stepwise removal of recombinant genotypes contributing to MI finally identifies mutant genotypes with MI = 0 (Additional file 2). Second, the same F1-genotypes were concordantly depicted by a recombination network analysis and visualized within a recombination cycle using the software SplitsTree 4.0 [62]. Here, distance-based incompatible splits are visualized as rectangles, in which recombinants are placed at nodes while mutant genotypes appear as terminal branch-offs (Additional file 2: Figure S2). This network topology was further confirmed with a NeighborNet analysis [62], for which a statistical test (1000 bootstrap replicates) was conducted (Additional file 2). Finally, we considered genotypes as recombinants only if they were identified by all three character incompatibility methods. We averaged the observed proportions of recombinant F1-offspring within each of the three progeny arrays (i.e., the recombination rate per generation, r).

Selection coefficients

Selection on gametophytes during development is an essential part of our plant-specific model [11]. Previous histological observations of gametophyte development in different apomictic plant species showed various forms and stages of abortive phenotypes and indicated different proportions of sexual vs.
apomictic gametophytes during developmental stages [25, 39, 63]. Sexual and apomictic gametophytes develop concurrently in adult plants; they do not differ morphologically, but they have allele dosage-dependent gene expression levels [64]. Hence, we assume that the abortion rates of reduced (sexual) vs. unreduced (apomictic) gametophytes result from the different strength of selection on expressed versus masked deleterious mutations. We re-analyzed published developmental data for Ranunculus carpaticola × Ranunculus cassubicifolius progenies [35, 65] to estimate abortion rates during the female gametophyte phase as an indirect indicator of the strength of selection. First, for the stage directly after meiosis, we averaged the proportions of unreduced aposporous initial cells (AIC; see Fig. 1) and reduced functional meiotic megaspores from microscopic observations of ovules (data from [35]). These data average the observed variation under both stressed and unstressed conditions, as they may occur in nature. Second, apomictic versus sexual pathways of seed formation were determined using flow cytometric seed screening, and the averaged proportion of well-developed sexual versus asexual seeds was calculated from mature fruits (data from [35]). Third, averaged seed germination rates were derived from [25] and, finally, we assessed the proportion of viable recombinant and clonal offspring based on our estimate of the average recombination rate (see Table 1). Proportions of sexual vs. apomictic development were calculated from the proportions of surviving initial cells/seeds/progenies at each developmental step (as only these can continue development) to reveal the actual fitness differences between sexual and apomictic development during the gametophytic and sporophytic phases (see Fig. 1). We compared the proportions of abortions between different developmental phases, i.e., megaspores, embryos and offspring/seedlings (Fig. 3).
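How a selection coefficient falls out of such stage-wise proportions can be sketched with placeholder counts; the published value s = 0.520 comes from the observed data in Fig. 3, not from the numbers below.

```python
# Placeholder counts along the sexual pathway (hypothetical, for illustration
# only): survival from functional megaspores to sexually formed seeds defines
# the abortion rate, which is read as the strength of purging selection on
# the reduced (haplontic) female gametophyte.
functional_megaspores = 100    # hypothetical count
sexual_seeds = 48              # hypothetical count

survival = sexual_seeds / functional_megaspores
s_medium = 1.0 - survival      # abortion rate ~ medium selection coefficient
print(f"survival = {survival:.2f}, medium selection coefficient s = {s_medium:.3f}")
```

With these placeholder counts the abortion rate is 52%, mirroring how the paper's medium coefficient s = 0.520 was obtained from the observed megaspore-to-seed proportions.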
As expected, we observed higher abortion rates in the reduced (sexual) than in the unreduced (aposporous apomictic) female gametophytes (Fig. 3). The observed mean abortion rate of ca. 52% in reduced/sexual female gametophytes (Fig. 3) was taken as the basis for a medium selection coefficient (s = 0.520). However, purifying selection is possible in male, reduced gametophytes as well [38]. In apomictic R. auricomus, not only female gametophytes are partly aborted but also pollen [25, 28, 52], indicating that selection against deleterious mutations acts on the male gametophyte as well. Hence, we also take s = c. 0.5 as a proxy for male development. Since sexual development involves the fusion of male and female gametes that have both been purged of mutations, the effects of selection will be additive. This also justifies testing a strong selection coefficient (s = 1.000). For an estimate of selection on adult plants (i.e., sporophytes only), we computed a theoretical model based on mutation fixation (with estimated finite population size) following [28], which is outlined below.

Modeling of mutation elimination for polyploids

Deleterious mutation rates in plants are U ≈ 0.1–2.0 per genome and generation (for mutations with mean selection coefficients in the range s ≈ 0.1–0.2) [1]. To estimate the deleterious mutation rate in hexaploid Ranunculus auricomus, we multiplied the estimate for diploid Arabidopsis, U ≈ 0.1 [1, 52, 66], by the factor of 11.16 by which the hexaploid Ranunculus genome is larger (G6x = 15.69 pg; 2C) [26]. By comparison, the genome of another small-statured plant, Amsinckia spectabilis, is four times as large as an average Arabidopsis genome [67], and it exhibits an approximately four times higher U than Arabidopsis [68]. The correction factor of 11.16 we use for Ranunculus already accounts for a diploid versus a polyploid chromosome complement.
We prefer a correction factor based on absolute genome size rather than on ploidy level, because Ranunculus auricomus shows 'genome downsizing' in polyploids [69], i.e., the increase in hexaploid genome size is lower than expected from multiplying the haploid genome size by the ploidy level. Genome downsizing is a frequent phenomenon in polyploid flowering plants [70]. A more detailed estimate of polyploid genome evolution is not yet feasible, as no completely sequenced reference genome is available for Ranunculus. We did not include in our model rare beneficial mutations, which can also halt Muller's ratchet [43], because the Ranunculus transcriptome dataset [24] included only a small number of outliers with non-synonymous mutations. Since even these mutations are not necessarily beneficial [24], we regard the effect of beneficial mutations as negligible. We further did not consider negative epistatic interactions of mutations, which may theoretically increase the effects of deleterious mutations but are relatively uncommon [71]. We assume a mutation (A) with dominance at 1/3 allele dose (i.e., expressed as Aaa in sexual triploid gametophytes but not as Aaaaaa in hexaploid asexual gametophytes or in sporophytes) due to dosage effects. For sexual development we assume predominant bivalent formation at meiosis, regular segregation, and disomic inheritance, as is typical for allopolyploids in which doubled homologs can pair [72], because Ranunculus carpaticola × Ranunculus cassubicifolius is an allopolyploid hybrid [24, 73]. We further assume non-overlapping generations and that mutation-carrying and mutation-free offspring are released in constant proportions every generation, resulting in an exponential decay of mutant frequency over time (Additional file 3). We assume a low but constant average recombination rate, exposing mutation-carrying gametes to purging selection. Based on these parameters, we investigated the influence of three selection coefficients.
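The genome-size correction itself is simple arithmetic; the factor 11.16 and the Arabidopsis rate U ≈ 0.1 are the values quoted in the text.

```python
# Scale the diploid Arabidopsis deleterious mutation rate to the hexaploid
# Ranunculus genome by relative holoploid genome size (values from the text).
U_2x = 0.100          # deleterious mutations per diploid genome per generation
size_factor = 11.16   # hexaploid Ranunculus genome / Arabidopsis genome size
U_6x = U_2x * size_factor
print(f"U_6x = {U_6x:.3f} deleterious mutations per genome per generation")
```

The Amsinckia comparison cited above (a genome four times larger with roughly four times higher U) is what motivates this linear scaling by genome size.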
Mutation fixation

We estimated the time needed for a slightly deleterious mutation to become fixed in a finite, obligately asexual population from the approximation of Muller's ratchet inferred by [28] and compared it to other approaches [29, 30]. We consider a small effective population size N = 10³, estimated from population genetic studies on natural 6x R. carpaticola × R. cassubicifolius, which revealed a high Fst value (0.82) [26], and from field observations of population size by the authors. The diploid mutation rate U2x = 0.100 [52] was adapted for hexaploid Ranunculus (i.e., U6x = 1.116) using the correction factor of 11.16, based on the ratio between the averaged DNA content of non-replicated holoploid genomes of small-statured model plants (e.g., A. thaliana) and hexaploid R. auricomus genotypes [26]. We calculated a theoretical selection coefficient with presumably the highest effect on the "mutational meltdown" of the population [13, 50, 74], inferred from the critical range of selection coefficients that favor the fastest 'clicking' of Muller's ratchet. With the parameter set above, the critical selection coefficient was computed as s = 0.212. We consider it an approximation of selection against deleterious mutations in hexaploid sporophytes only. We regarded the parameter regime of λ = U/s (= 1.116/0.212) ≈ 5.3 as the 'worst scenario', where Muller's ratchet clicks very frequently in a small, obligately asexual population (N = 10³ plant individuals). We inferred the ratchet speed based on several published approximations (Table 2).

All data generated or analyzed during this study are included in this published article [and its supplementary information files].

Abbreviations
dN/dS: The ratio of substitution rates at non-synonymous and synonymous sites
F1: The first filial generation of offspring of different parental species
Fst: The fixation index
kya: Thousand years ago
RNA-Seq: Whole transcriptome shotgun sequencing
SSR: Simple sequence repeats

References
Schultz ST, Scofield DG.
Mutation accumulation in real branches: fitness assays for genomic deleterious mutation rate and effect in large-statured plants. Am Nat. 2009;174(2):163–75.
Baer CF, Miyamoto MM, Denver DR. Mutation rate variation in multicellular eukaryotes: causes and consequences. Nat Rev Genet. 2007;8(8):619–31.
Agrawal AF. Evolution of sex: why do organisms shuffle their genotypes? Curr Biol. 2006;16(17):696–704.
Kimura M, Maruyama T. The mutational load with epistatic gene interactions in fitness. Genetics. 1966;54(6):1337.
Charlesworth D, Morgan M, Charlesworth B. Mutation accumulation in finite outbreeding and inbreeding populations. Genet Res. 1993;61(01):39–56.
Crow JF. The high spontaneous mutation rate: is it a health risk? Proc Natl Acad Sci U S A. 1997;94(16):8380–6.
Muller HJ. The relation of recombination to mutational advance. Mutat Res. 1964;1(1):2–9.
Felsenstein J. The evolutionary advantage of recombination. Genetics. 1974;78(2):737–56.
Gerstein AC, Otto SP. Ploidy and the causes of genomic evolution. J Hered. 2009;100(5):571–81.
Carman JG. Asynchronous expression of duplicate genes in angiosperms may cause apomixis, bispory, tetraspory, and polyembryony. Biol J Linnean Soc. 1997;61(1):51–94.
Hojsgaard D, Hörandl E. A little bit of sex matters for genome evolution in asexual plants. Front Plant Sci. 2015;6(82).
Hörandl E. A combinational theory for maintenance of sex. Heredity. 2009;103(6):445–57.
Lynch M, Bürger R, Butcher D, Gabriel W. The mutational meltdown in asexual populations. J Hered. 1993;84(5):339–44.
Otto S, Goldstein D. Recombination and the evolution of diploidy. Genetics. 1992;131(3):745–51.
Crow JF, Kimura M. Introduction to population genetics theory. New York: Harper & Row; 1970.
Joseph SB, Kirkpatrick M. Haploid selection in animals. Trends Ecol Evol. 2004;19(11):592–7.
Schmid MW, Schmidt A, Grossniklaus U. The female gametophyte: an emerging model for cell type-specific systems biology in plant development. Front Plant Sci. 2015;6.
Chevalier É, Loubert-Hudon A, Zimmerman EL, Matton DP. Cell–cell communication and signalling pathways within the ovule: from its inception to fertilization. New Phytol. 2011;192(1):13–28.
Asker S, Jerling L. Apomixis in plants. Boca Raton: CRC Press; 1992.
Aliyu OM, Schranz ME, Sharbel TF. Quantitative variation for apomictic reproduction in the genus Boechera (Brassicaceae). Am J Bot. 2010;97(10):1719–31.
Mirzaghaderi G, Hörandl E. The evolution of meiotic sex and its alternatives. Proc R Soc B-Biol Sci. 2016;283(1838).
Grimanelli D, Leblanc O, Perotti E, Grossniklaus U. Developmental genetics of gametophytic apomixis. Trends Genet. 2001;17(10):597–604.
Otto SP, Gerstein AC. The evolution of haploidy and diploidy. Curr Biol. 2008;18(24):1121–4.
Pellino M, Hojsgaard D, Schmutzer T, Scholz U, Hörandl E, Vogel H, Sharbel TF. Asexual genome evolution in the apomictic Ranunculus auricomus complex: examining the effects of hybridization and mutation accumulation. Mol Ecol. 2013;22(23):5908–21.
Hojsgaard D, Greilhuber J, Pellino M, Paun O, Sharbel TF, Hörandl E. Emergence of apospory and bypass of meiosis via apomixis after sexual hybridisation and polyploidisation. New Phytol. 2014;204(4):1000–12.
Paun O, Greilhuber J, Temsch EM, Hörandl E. Patterns, sources and ecological implications of clonal diversity in apomictic Ranunculus carpaticola (Ranunculus auricomus complex, Ranunculaceae). Mol Ecol. 2006;15(4):897–910.
Hörandl E, Greilhuber J, Klimova K, Paun O, Temsch E, Emadzade K, Hodalova I. Reticulate evolution and taxonomic concepts in the Ranunculus auricomus complex (Ranunculaceae): insights from analysis of morphological, karyological and molecular data. Taxon. 2009;58(4):1194–215.
Jain K. Loss of least-loaded class in asexual populations due to drift and epistasis. Genetics. 2008;179(4):2125–34.
Neher RA, Shraiman BI. Fluctuations of fitness distributions and the rate of Muller's ratchet. Genetics. 2012;191(4):1283–93.
Metzger JJ, Eule S.
Distribution of the fittest individuals and the rate of Muller's ratchet in a model with overlapping generations. PLoS Comput Biol. 2013;9(11):e1003303.
Paun O, Hörandl E. Evolution of hypervariable microsatellites in apomictic polyploid lineages of Ranunculus carpaticola: directional bias at dinucleotide loci. Genetics. 2006;174(1):387–98.
Van der Hulst RGM, Mes THM, den Nijs JCM, Bachmann C. Amplified fragment length polymorphism (AFLP) markers reveal that population structure of triploid dandelions (Taraxacum officinale) exhibits both clonality and recombination. Mol Ecol. 2000;9:1–8.
Paule J, Sharbel TF, Dobes C. Apomictic and sexual lineages of the Potentilla argentea L. group (Rosaceae): cytotype and molecular genetic differentiation. Taxon. 2011;60(3):721–32.
Hojsgaard D, Klatt S, Baier R, Carman JG, Hörandl E. Taxonomy and biogeography of apomixis in angiosperms and associated biodiversity characteristics. Crit Rev Plant Sci. 2014;33(5):414–27.
Klatt S, Hadacek F, Hodac L, Brinkmann G, Eilerts M, Hojsgaard D, Hörandl E. Photoperiod extension enhances sexual megaspore formation and triggers metabolic reprogramming in facultative apomictic Ranunculus auricomus. Front Plant Sci. 2016;7.
Hollister JD, Greiner S, Wang W, Wang J, Zhang Y, Wong GK-S, Wright SI, Johnson MT. Recurrent loss of sex is associated with accumulation of deleterious mutations in Oenothera. Mol Biol Evol. 2015;32(4):896–905.
Lovell JT, Williamson RJ, Wright SI, McKay JK, Sharbel TF. Mutation accumulation in an asexual relative of Arabidopsis. PLoS Genet. 2017;13(1):e1006550.
Otto SP, Scott MF, Immler S. Evolution of haploid selection in predominantly diploid organisms. Proc Natl Acad Sci U S A. 2015;112(52):15952–7.
Hojsgaard DH, Martinez EJ, Quarin CL. Competition between meiotic and apomictic pathways during ovule and seed development results in clonality. New Phytol. 2013;197(1):336–47.
Arunkumar R, Josephs EB, Williamson RJ, Wright SI.
Pollen-specific, but not sperm-specific, genes show stronger purifying selection and higher rates of positive selection than sporophytic genes in Capsella grandiflora. Mol Biol Evol. 2013;30(11):2475–86.
Hörandl E. The evolution of self-fertility in apomictic plants. Sex Plant Reprod. 2010;23(1):73–86.
Flot J-F, Hespeels B, Li X, Noel B, Arkhipova I, Danchin EG, Hejnol A, Henrissat B, Koszul R, Aury J-M. Genomic evidence for ameiotic evolution in the bdelloid rotifer Adineta vaga. Nature. 2013;500(7463):453–7.
Goyal S, Balick DJ, Jerison ER, Neher RA, Shraiman BI, Desai MM. Dynamic mutation–selection balance as an evolutionary attractor. Genetics. 2012;191(4):1309–19.
Rutter MT, Shaw FH, Fenster CB. Spontaneous mutation parameters for Arabidopsis thaliana measured in the wild. Evolution. 2010;64(6):1825–35.
Eyre-Walker A, Keightley PD, Smith NGC, Gaffney D. Quantifying the slightly deleterious mutation model of molecular evolution. Mol Biol Evol. 2002;19(12):2142–9.
Eyre-Walker A, Keightley PD. The distribution of fitness effects of new mutations. Nat Rev Genet. 2007;8(8):610–8.
Silander OK, Tenaillon O, Chao L. Understanding the evolutionary fate of finite populations: the dynamics of mutational effects. PLoS Biol. 2007;5(4):922–31.
Dickinson WJ. Synergistic fitness interactions and a high frequency of beneficial changes among mutations accumulated under relaxed selection in Saccharomyces cerevisiae. Genetics. 2008;178(3):1571–8.
Schoen DJ. Deleterious mutation in related species of the plant genus Amsinckia with contrasting mating systems. Evolution. 2005;59(11):2370–7.
Loewe L, Lamatsch DK. Quantifying the threat of extinction from Muller's ratchet in the diploid Amazon molly (Poecilia formosa). BMC Evol Biol. 2008;8(88):20.
Lynch M, Conery J, Burger R. Mutation accumulation and the extinction of small populations. Am Nat. 1995;146(4):489–518.
Schultz ST, Lynch M, Willis JH. Spontaneous deleterious mutation in Arabidopsis thaliana. PNAS. 1999;96(20):11393–8.
Hand ML, Koltunow AM. The genetic control of apomixis: asexual seed formation. Genetics. 2014;197(2):441–50.
Ozias-Akins P, van Dijk PJ. Mendelian genetics of apomixis in plants. Annu Rev Genet. 2007;41:509–37.
Nogler GA. Genetics of apospory in apomictic Ranunculus auricomus. 5. Conclusion. Bot Helv. 1984;94(2):411–22.
Stevison LS. Male-mediated effects on female meiotic recombination. Evolution. 2012;66(3):905–11.
Meirmans PG, Van Tienderen PH. GENOTYPE and GENODIVE: two programs for the analysis of genetic diversity of asexual organisms. Mol Ecol Notes. 2004;4(4):792–4.
Nei M, Chakraborty R, Fuerst P. Infinite allele model with varying mutation rate. PNAS. 1976;73(11):4164.
Goldstein DB, Schlötterer C. Microsatellites: evolution and applications. Oxford: Oxford University Press; 1999.
Wilkinson M. PICA 4.0: software and documentation. Department of Zoology. London: The Natural History Museum; 2001.
Van der Hulst R, Mes T, Falque M, Stam P, Den Nijs J, Bachmann K. Genetic structure of a population sample of apomictic dandelions. Heredity. 2003;90(4):326–35.
Huson DH. SplitsTree: analyzing and visualizing evolutionary data. Bioinformatics. 1998;14(1):68–73.
Quarin CL. Seasonal changes in the incidence of apomixis of diploid, triploid, and tetraploid plants of Paspalum cromyorrhizon. Euphytica. 1986;35(2):515–22.
Sharbel TF, Voigt M-L, Corral JM, Galla G, Kumlehn J, Klukas C, Schreiber F, Vogel H, Rotter B. Apomictic and sexual ovules of Boechera display heterochronic global gene expression patterns. Plant Cell. 2010;22(3):655–71.
Hörandl E. Evolutionary implications of self-compatibility and reproductive fitness in the apomictic Ranunculus auricomus polyploid complex (Ranunculaceae). Int J Biol Sci. 2008;169(9):1219–28.
Halligan DL, Keightley PD. Spontaneous mutation accumulation studies in evolutionary genetics. Annu Rev Ecol Evol Syst. 2009;40:151–72.
Bennett MD, Bhandol P, Leitch IJ.
Nuclear DNA amounts in angiosperms and their modern uses - 807 new estimates. Ann Bot. 2000;86(4):859–909.
Johnston MO, Schoen DJ. Mutation-rates and dominance levels of genes affecting total fitness in 2 angiosperm species. Science. 1995;267(5195):226–9.
Hörandl E, Greilhuber J. Diploid and autotetraploid sexuals and their relationships to apomicts in the Ranunculus cassubicus group: insights from DNA content and isozyme variation. Plant Syst Evol. 2002;234(1–4):85–100.
Leitch IJ, Bennett MD. Genome downsizing in polyploid plants. Biol J Linnean Soc. 2004;82(4):651–63.
Kouyos RD, Silander OK, Bonhoeffer S. Epistasis between deleterious mutations and the evolution of recombination. Trends Ecol Evol. 2007;22(6):308–15.
Comai L. The advantages and disadvantages of being polyploid. Nat Rev Genet. 2005;6(11):836–46.
Paun O, Stuessy TF, Hörandl E. The role of hybridization, polyploidization and glaciation in the origin and evolution of the apomictic Ranunculus cassubicus complex. New Phytol. 2006;171(1):223–36.
Gabriel W, Lynch M, Burger R. Muller's ratchet and mutational meltdowns. Evolution. 1993;47(6):1744–57.

Acknowledgements
We thank two referees for valuable comments; Jennifer Krüger, Natalia Woźniak, and Silvia Friedrichs for technical help; and the German Research Foundation for funding (DFG project Ho 4395/4-1 to E.H.). We acknowledge support by the Open Access Publication Funds of the Goettingen University.

Funding
This work was funded by the German Research Foundation "Deutsche Forschungsgemeinschaft" (DFG project Ho 4395/4-1) to E.H. The funders had no role in study design, data collection and analysis, or preparation of the manuscript.

Author affiliations
Department of Systematics, Biodiversity and Evolution of Plants (with Herbarium), University of Goettingen, Goettingen, Germany: Ladislav Hodač, Simone Klatt, Diego Hojsgaard & Elvira Hörandl
Global Institute for Food Security, University of Saskatchewan, Saskatoon, Canada: Timothy F.
Sharbel

Author contributions
EH, LH, TS: study design. LH, EH, SK, DH: data collection and analysis. All authors: writing. All authors have read and approved the manuscript. Correspondence to Elvira Hörandl.

Additional file 1: Microsatellite data matrix. (XLSX 28 kb)
Additional file 2: Graphical representation of results of the three character incompatibility methods for identification of recombinants. The graphs visualize results of calculations. (DOCX 249 kb)
Additional file 3: Model of mutation elimination under constant recombination rate and selection on female gametophytes. The graph visualizes the model for the first generations. (DOCX 264 kb)

Hodač, L., Klatt, S., Hojsgaard, D. et al. A little bit of sex prevents mutation accumulation even in apomictic polyploid plants. BMC Evol Biol 19, 170 (2019). https://doi.org/10.1186/s12862-019-1495-z

Keywords: Apomixis, Haploid selection, Mutation accumulation, Polyploidy
Complex inverse Wishart distribution

The complex inverse Wishart distribution is a matrix probability distribution defined on complex-valued positive-definite matrices; it is the complex analog of the real inverse Wishart distribution. The complex Wishart distribution was extensively investigated by Goodman[1], while the derivation of the inverse is given by Shaman[2] and others. Its greatest application is in least-squares optimization theory applied to complex-valued data samples in digital radio communications systems, often in connection with Fourier-domain complex filtering.

Notation: ${\mathcal {CW}}^{-1}({\mathbf {\Psi } },\nu ,p)$
Parameters: $\nu >p-1$ degrees of freedom (real); $\mathbf {\Psi } >0$, a $p\times p$ scale matrix (positive definite)
Support: $\mathbf {X} $ is $p\times p$ positive definite Hermitian
PDF: ${\frac {\left|\mathbf {\Psi } \right|^{\nu }}{{\mathcal {C}}\Gamma _{p}(\nu )}}\left|\mathbf {x} \right|^{-(\nu +p)}e^{-\operatorname {tr} (\mathbf {\Psi } \mathbf {x} ^{-1})}$, where ${\mathcal {C}}\Gamma _{p}(\nu )=\pi ^{{\tfrac {1}{2}}p(p-1)}\prod _{j=1}^{p}\Gamma (\nu -j+1)$ is the complex multivariate gamma function and $\operatorname {tr} $ is the trace function
Mean: ${\frac {\mathbf {\Psi } }{\nu -p}}$ for $\nu >p+1$
Variance: see below

If $\mathbf {S} _{p\times p}=\sum _{j=1}^{\nu }G_{j}G_{j}^{H}$ is the sample covariance of $\nu $ independent complex p-vectors $G_{j}$, so that $\mathbf {S} $ has the complex Wishart distribution $\mathbf {S} \sim {\mathcal {CW}}(\mathbf {\Sigma } ,\nu ,p)$ with mean $\mathbf {\Sigma } $ and $\nu $ degrees of freedom, then $\mathbf {X} =\mathbf {S} ^{-1}$ follows the complex inverse Wishart distribution.
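As a quick numerical illustration of this definition (not part of the article; the dimension, degrees of freedom, and scale matrix below are arbitrary choices), one can draw complex Wishart samples directly from their Gaussian construction and check the stated mean of the inverse, $\mathrm{E}[\mathbf{X}] = \mathbf{\Psi}/(\nu - p)$ with $\mathbf{\Psi} = \mathbf{\Sigma}^{-1}$, by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
p, nu = 3, 20                      # dimension and degrees of freedom (nu > p + 1, so the mean exists)
Sigma = np.diag([2.0, 1.0, 0.5]).astype(complex)   # Hermitian positive-definite covariance
L = np.linalg.cholesky(Sigma)

def sample_inverse_complex_wishart():
    # nu i.i.d. circularly-symmetric complex normal p-vectors with covariance Sigma,
    # stacked as columns of G; each entry of Z has unit variance (1/2 per component).
    Z = (rng.standard_normal((p, nu)) + 1j * rng.standard_normal((p, nu))) / np.sqrt(2)
    G = L @ Z
    S = G @ G.conj().T             # S ~ CW(Sigma, nu, p)
    return np.linalg.inv(S)        # X = S^{-1} ~ CW^{-1}(Psi, nu, p), Psi = Sigma^{-1}

# Monte Carlo estimate of E[X], compared against Psi / (nu - p)
n_samples = 20000
X_mean = sum(sample_inverse_complex_wishart() for _ in range(n_samples)) / n_samples
Psi = np.linalg.inv(Sigma)
print(np.max(np.abs(X_mean - Psi / (nu - p))))
```

With 20000 samples the maximum entrywise deviation from the theoretical mean is on the order of a few times 1e-4 for these parameters.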
Density

If $\mathbf {S} _{p\times p}$ is a sample from the complex Wishart distribution ${\mathcal {CW}}({\mathbf {\Sigma } },\nu ,p)$ such that, in the simplest case, $\nu \geq p{\text{ and }}\left|\mathbf {S} \right|>0$, then $\mathbf {X} =\mathbf {S} ^{-1}$ is sampled from the inverse complex Wishart distribution ${\mathcal {CW}}^{-1}({\mathbf {\Psi } },\nu ,p){\text{ where }}\mathbf {\Psi } =\mathbf {\Sigma } ^{-1}$. The density function of $\mathbf {X} $ is

$f_{\mathbf {x} }(\mathbf {x} )={\frac {\left|\mathbf {\Psi } \right|^{\nu }}{{\mathcal {C}}\Gamma _{p}(\nu )}}\left|\mathbf {x} \right|^{-(\nu +p)}e^{-\operatorname {tr} (\mathbf {\Psi } \mathbf {x} ^{-1})}$

where ${\mathcal {C}}\Gamma _{p}(\nu )$ is the complex multivariate gamma function

${\mathcal {C}}\Gamma _{p}(\nu )=\pi ^{{\tfrac {1}{2}}p(p-1)}\prod _{j=1}^{p}\Gamma (\nu -j+1)$

Moments

The variances and covariances of the elements of the inverse complex Wishart distribution are given in Shaman's paper above, while Maiwald and Kraus[3] determine the first through fourth moments. Shaman finds the first moment to be

$\mathbf {E} [{\mathcal {C}}\mathbf {W^{-1}} ]={\frac {1}{n-p}}\mathbf {\Psi ^{-1}} ,\;n>p$

and, in the simplest case $\mathbf {\Psi } =\mathbf {I} _{p\times p}$, given $d={\frac {1}{n-p}}$,

$\mathbf {\mathbf {E} \left[vec({\mathcal {C}}W_{3}^{-1})\right]} ={\begin{bmatrix}d&0&0&0&d&0&0&0&d\\\end{bmatrix}}$

The vectorised covariance is

$\mathbf {Cov\left[vec({\mathcal {C}}W_{p}^{-1})\right]} =b\left(\mathbf {I} _{p}\otimes I_{p}\right)+c\,\mathbf {vecI_{p}} \left(\mathbf {vecI_{p}} \right)^{T}+(a-b-c)\mathbf {J} $

where $\mathbf {J} $ is a $p^{2}\times p^{2}$ diagonal matrix with ones in diagonal positions $1+(p+1)j,\;j=0,1,\dots p-1$ and zeros elsewhere, and $a,b,c$ are real constants such that, for $n>p+1$,

$a={\frac {1}{(n-p)^{2}(n-p-1)}}$, the marginal diagonal variances,

$b={\frac {1}{(n-p+1)(n-p)(n-p-1)}}$, the off-diagonal variances, and
$c={\frac {1}{(n-p+1)(n-p)^{2}(n-p-1)}}$, the intra-diagonal covariances.

For $\mathbf {\Psi } =\mathbf {I} _{3}$, we get the sparse matrix:

$\mathbf {Cov\left[vec({\mathcal {C}}W_{3}^{-1})\right]} ={\begin{bmatrix}a&\cdot &\cdot &\cdot &c&\cdot &\cdot &\cdot &c\\\cdot &b&\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\\cdot &\cdot &b&\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\\cdot &\cdot &\cdot &b&\cdot &\cdot &\cdot &\cdot &\cdot \\c&\cdot &\cdot &\cdot &a&\cdot &\cdot &\cdot &c\\\cdot &\cdot &\cdot &\cdot &\cdot &b&\cdot &\cdot &\cdot \\\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &b&\cdot &\cdot \\\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &b&\cdot \\c&\cdot &\cdot &\cdot &c&\cdot &\cdot &\cdot &a\\\end{bmatrix}}$

Eigenvalue distributions

The joint distribution of the real eigenvalues of the inverse complex (and real) Wishart is found in Edelman's paper[4], which refers back to an earlier paper by James.[5] In the non-singular case, the eigenvalues of the inverse Wishart are simply the inverted values of those of the Wishart. Edelman also characterises the marginal distributions of the smallest and largest eigenvalues of complex and real Wishart matrices.

References

1. Goodman, N. R. (1963). "Statistical Analysis Based on a Certain Multivariate Complex Gaussian Distribution: an Introduction". Ann. Math. Statist. 34 (1): 152–177. doi:10.1214/aoms/1177704250.
2. Shaman, Paul (1980). "The Inverted Complex Wishart Distribution and its Application to Spectral Estimation". Journal of Multivariate Analysis. 10: 51–59. doi:10.1016/0047-259X(80)90081-0.
3. Maiwald, Dirk; Kraus, Dieter (1997). "On Moments of Complex Wishart and Complex Inverse Wishart Distributed Matrices". 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing. pp. 3817–3820. doi:10.1109/ICASSP.1997.604712. ISBN 0-8186-7919-0.
4. Edelman, Alan (1988). "Eigenvalues and Condition Numbers of Random Matrices". SIAM J. Matrix Anal. Appl. 9 (4): 543–560. doi:10.1137/0609045. hdl:1721.1/14322.
5. James, A. T. (1964). "Distributions of Matrix Variates and Latent Roots Derived from Normal Samples". Ann. Math. Statist. 35 (2): 475–501. doi:10.1214/aoms/1177703550.
Let $P$ be a point on the curve $xyz^2 = 2$ in three-dimensional space. Find the minimum distance between $P$ and the origin. We want to minimize $x^2 + y^2 + z^2.$ We know that $xyz^2 = 2.$ Note that flipping the sign of $z$ does not change $x^2 + y^2 + z^2$ or $xyz^2,$ so we may assume that $z$ is positive. Also, from the condition $xyz^2 = 2,$ both $x$ and $y$ are positive, or both are negative. If they are both negative, we can flip the sign of both $x$ and $y.$ Thus, we may assume that $x,$ $y,$ and $z$ are all positive. Then by AM-GM, \begin{align*} x^2 + y^2 + z^2 &= x^2 + y^2 + \frac{z^2}{2} + \frac{z^2}{2} \\ &\ge 4 \sqrt[4]{x^2 \cdot y^2 \cdot \frac{z^2}{2} \cdot \frac{z^2}{2}} \\ &= 4 \sqrt[4]{\frac{x^2 y^2 z^4}{4}} \\ &= 4 \sqrt{\frac{xyz^2}{2}} \\ &= 4. \end{align*}Hence, $\sqrt{x^2 + y^2 + z^2} \ge 2.$ Equality occurs when $x = y = \frac{z}{\sqrt{2}}.$ Along with the condition $xyz^2 = 2,$ we can solve to get $x = 1,$ $y = 1,$ and $z = \sqrt{2}.$ Thus, the minimum distance is $\boxed{2}.$
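As a quick numerical sanity check of the AM-GM bound above (a sketch only: it samples feasible points rather than proving anything), one can verify that the claimed minimizer $(1, 1, \sqrt{2})$ is feasible with distance exactly 2, and that random points on the surface never beat the bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# The claimed minimizer (1, 1, sqrt(2)) satisfies xyz^2 = 2 and has distance 2.
x, y, z = 1.0, 1.0, np.sqrt(2.0)
assert abs(x * y * z**2 - 2) < 1e-12
assert abs(np.sqrt(x * x + y * y + z * z) - 2) < 1e-12

# Random feasible points (choose x, y > 0, then solve the constraint for z)
# never violate the AM-GM bound x^2 + y^2 + z^2 >= 4.
for _ in range(10000):
    x, y = rng.uniform(0.1, 5.0, 2)
    z = np.sqrt(2.0 / (x * y))
    assert x * x + y * y + z * z >= 4.0 - 1e-9
print("bound verified on 10000 feasible points")
```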
The shortest distance from the circle $x^2 + y^2 = 4x + 8y$ to the point $(5,-2)$ can be written in the form $\sqrt{m}$, where $m$ is an integer. Find $m$. Completing the square gives $(x-2)^2 + (y-4)^2 = 20$, so the circle has radius $\sqrt{20} = 2\sqrt{5}$ and center $(2,4)$. The distance between $(2,4)$ and $(5,-2)$ is given by $\sqrt{(2-5)^2 + (4-(-2))^2} = \sqrt{9 + 36} = \sqrt{45} = 3\sqrt{5}$. Hence, the shortest distance is the difference of the distance between the center and the point and the radius, yielding $3\sqrt{5} - 2\sqrt{5} = \sqrt{5}$. Thus, $m = \boxed{5}$. [asy] import graph; size(8.33cm); real lsf=0.5; pen dps=linewidth(0.7)+fontsize(10); defaultpen(dps); pen ds=black; real xmin=-3.5,xmax=8.83,ymin=-4.5,ymax=9.58; pen ttzzqq=rgb(0.2,0.6,0); Label laxis; laxis.p=fontsize(10); xaxis(-3.5,8.83,defaultpen+black,Ticks(laxis,Step=2.0,Size=2),Arrows(6),above=true); yaxis(-4.5,9.58,defaultpen+black,Ticks(laxis,Step=2.0,Size=2),Arrows(6),above=true); draw(circle((2,4),4.47)); draw((2,4)--(5,-2)); draw((4,0)--(5,-2),linewidth(1.6)+ttzzqq); label("$(x - 2)^2 + (y - 4)^2 = 20$",(0.91,5.41),NE*lsf); dot((5,-2),ds); label("$(5, -2)$",(5.15,-1.75),NE*lsf); dot((2,4),ds); dot((4,0),ds); clip((xmin,ymin)--(xmin,ymax)--(xmax,ymax)--(xmax,ymin)--cycle); [/asy]
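The arithmetic above is easy to confirm numerically (a small check, using only the completed-square form of the circle):

```python
import math

# Complete the square: x^2 + y^2 = 4x + 8y  <=>  (x - 2)^2 + (y - 4)^2 = 20.
cx, cy, r = 2.0, 4.0, math.sqrt(20.0)
d = math.hypot(5.0 - cx, -2.0 - cy)    # distance from the center to (5, -2)
shortest = d - r                       # the point lies outside the circle (d > r)
m = shortest**2                        # shortest distance = sqrt(m)
print(round(m, 6))
```

The printed value agrees with $m = 5$, since $(3\sqrt{5} - 2\sqrt{5})^2 = 5$.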
Microscopic mechanisms of deformation transfer in high dynamic range branched nanoparticle deformation sensors

Shilpa N. Raja, Xingchen Ye, Matthew R. Jones, Liwei Lin, Sanjay Govindjee & Robert O. Ritchie

Nanoscale stress sensing is of crucial importance to biomechanics and other fields. An ideal stress sensor would have a large dynamic range to function in a variety of materials spanning orders of magnitude of local stresses. Here we show that tetrapod quantum dots (tQDs) exhibit excellent sensing versatility with stress-correlated signatures in a multitude of polymers.
We further show that tQDs exhibit pressure coefficients that increase with decreasing polymer stiffness and vary by more than three orders of magnitude. This high dynamic range allows tQDs to sense in matrices spanning >4 orders of magnitude in Young's modulus, ranging from compliant biological levels (~100 kPa) to stiffer structural polymers (~5 GPa). We use ligand exchange to tune filler-matrix interfaces, revealing that inverse sensor response scaling is maintained upon significant changes to polymer-tQD interface chemistry. We quantify and explore mechanisms of polymer-tQD strain transfer. An analytical model based on Mori-Tanaka theory shows agreement with the observed trends. Nanoscale stresses play a crucial role in a wide variety of fields and processes, such as polymer dynamics and deformation1, crack initiation and propagation2, and biological processes such as stem cell differentiation. In filler-containing polymers, sensing of nanoscale stresses is of key importance to understanding nanoscale mechanisms of mechanical reinforcement3,4 and assessing nanoscale filler-matrix interfacial load transfer4,5. Further, it is important for ensuring reproducible, tailored material synthesis4. In order to study such nanoscale stresses, an appropriate tool is needed that is sensitive, versatile, and does not alter the properties of the host matrix4,6. Further, the ideal nanoscale stress sensor should exhibit stress sensitivity over a large dynamic range, enabling it to detect equally well kilopascal (kPa) stresses in biological systems as well as larger megapascal (MPa) stresses in structural materials. Current techniques7,8 for examining such nanoscale stresses such as Raman spectroscopy5, mechanochromic gels9, atomic force microscopy (AFM)10, electronic skins11, metal nanoparticle chains12, stress-sensitive small molecules13, and others9 have limitations, which constrain their utility in practical situations4,6,7,8,14.
These include being invasive, having low signal-to-noise ratio, or being limited to specific laboratory settings, material systems, or geometries. Further limitations are that they generally do not exhibit tunable stress sensitivity or survive multiple cycles of stress sensing. A nanoscale sensor that could report nanoscale stresses in a variety of materials, without the abovementioned limitations, would be highly desirable4,6,8. One nanoscale sensor that could serve this purpose is the tetrapod quantum dot (tQD)15. tQDs are core-shell cadmium selenide-cadmium sulfide (CdSe-CdS) quantum dots in which the ~4 nm CdSe core has four ~25 nm tetrahedrally branched CdS arms, and exhibits type-I band alignment15. The tQD has been shown to be a sensor of local deformation states due to its unique morphology, in which the four tQD arms act as antennae that transmit local deformation to the CdSe core4,6,8,16,17. Its branched geometry also makes it an optimal mechanical filler18. The tQD response to certain types of stresses consists of a reduction in the bandgap, or a photoluminescence (PL) red-shift, arising from the widening of bonds in the CdSe core6,17. Previous work on tQDs has demonstrated sensing of both extension and contraction6,17, sensing of complex behavior such as stress relaxation and hysteresis4, sensing of direct contact between adjacent tQDs6, and the ability to survive many sensing cycles6. However, sensor versatility has not been specifically assessed, as prior studies were limited to only a few polymer host matrices4,6,8. Furthermore, studies were only performed with native ligands, i.e., no ligand exchange was performed to change tQD surface chemistry and alter the interfacial strength between the tQD and polymer matrix4,6,8. To rectify this situation, we report here on a study using a very wide selection of host materials with varied interfacial conditions. 
An assessment of load or strain transfer from the matrix to the filler phase at the nanoscale is also critically important for material systems with embedded sensors19,20. However, quantification of strain transfer efficiency in such systems poses a formidable experimental challenge, especially at the nanoscale5,21. Probing strain transfer efficiency with tQDs could potentially provide a new unique experimental way to obtain and quantify interfacial dynamics and strain transfer. Here we build on previous studies by systematically assessing the tQD sensing behavior in a variety of materials with embedded tQDs under tensile stress. The diverse materials studied here comprise eight host matrices and multiple tQD-polymer interfacial chemistries, including tQDs coated with native ligands15 and thiol-terminated polymers22. We show the ability of the tQD to provide useful response even when embedded in material systems whose Young's moduli vary by over three orders of magnitude. Intriguingly, more compliant, or lower stiffness, polymers result in a monotonically higher stress response sensitivity, or pressure coefficient (defined as change in tQD PL bandgap in meV per GPa of applied stress23). In the most compliant polymers, the tQD pressure coefficients are orders of magnitude higher than in bulk CdSe23. The tQD is a highly versatile sensor with a large dynamic range, enabling it to detect low kPa stresses for biomechanical applications as well as orders of magnitude higher stresses (MPa level) in stiffer structural materials. We propose that this high dynamic range originates from varying degrees of strain transfer at the critical polymer-tQD interface due to the tQD's branched geometry. Further, in a unique use of visible light experiments to elucidate polymer-filler strain transfer dynamics5,19,24,25, we determine the strain transfer efficiency across the tQD-polymer interface from our in situ stress measurements. 
A unique corollary of this feature is that it allows us to assess the validity of classical self-consistent micromechanical theories on strain concentration tensors26,27,28.

Material system preparation

In this work, we studied 17 material systems, which included fibers and films, as well as a variety of different tQD concentrations and dispersions in multiple tQD-ligand-polymer systems. The systems in this work vary widely in terms of interfacial chemistry between polymer and tQD, polymer composition and hydrophobicity, and mechanical properties, with Young's moduli varying by more than four orders of magnitude across all host matrices in this work. The tQDs were synthesized using established methods15. As-synthesized tQDs were incorporated into polymer fibers or films with either their native octadecylphosphonic acid (ODPA) ligands or after ligand exchange to coatings of thiol-terminated polymers22, such as poly-l-lactide (PLLA), to create a stronger polymer-tQD interfacial bond. The polymers used in this work included poly(ethylene oxide) (PEO), polycaprolactone (PCL), poly(styrene-ethylene-butylene-styrene) (SEBS), PLLA, polydimethylsiloxane (PDMS), and polybutadiene (PBD)29,30,31. Electrospinning was used for PLLA, PEO, SEBS, and PCL: a viscous polymer chloroform solution was mixed with a chloroform solution of nanoparticles to create viscous solutions of 4–12% by weight polymer and 0.05–20% by weight (0.01–5% by volume) tQDs; droplets of the highly viscous solution were subjected to high electric fields (15 kV/cm) to form aligned arrays of fibers using the dual-rod geometry of Li et al.4,32. Single fibers were collected from aligned arrays for optical and mechanical tests. Electrospun fibers were examined with an optical microscope to measure their diameters and morphologies, and were seen to generally have a smooth, uniform appearance (Supplementary Fig. 1).
Films were prepared by mixing solutions of polymer, nanoparticles, and chloroform into glass vials and then drying them in air or under streams of nitrogen6. PBD fibers were hand-drawn from viscous PBD-chloroform solutions. Figure 1 shows transmission electron microscopy (TEM) images of some of the material systems with embedded tQDs studied in this work. It illustrates the clustering seen in the native ligand-tQD systems due to the chemical incompatibility between the host matrix and the hydrophobic ODPA tQD native ligands18,20,30,33. On increasing concentration in these systems, the tQD cluster size grew slightly while the cluster spacing reduced (Fig. 1). Images of evenly or singly dispersed tQD-polymer systems can be found in Fig. 2, and additional TEM images of more of the polymers can be found in Supplementary Fig. 1.

Fig. 1: tQD cluster dispersion and concentration in polymers and tQD-polymer interfaces studied in this work (see Fig. 2 for evenly dispersed tQDs in polymers). a, b Schematics of high (evenly dispersed) and low (clustered) dispersions are shown, as well as high and low concentrations in polymers wherein tQDs form clusters, and varying tQD-polymer interface chemistries. c TEM images of tQDs before polymer incorporation. d TEM image of tQD clusters in PLLA (10% by weight). e TEM image of tQD clusters in SEBS (5% by weight). f TEM image of tQD clusters in SEBS (20% by weight). All scale bars shown represent 200 nm, except for the inset to c, which is 50 nm.

Fig. 2: Singly dispersing tQDs into polymers. a Schematic of the two-step process to coat tQDs with thiol-terminated polymers. b, c TEM images of PLLA-coated tQDs in PLLA polymer. c Closer view. d TEM image of PLLA-coated tQDs in PEO fiber. Scale bars are b 200 nm; c 40 nm; and d 80 nm.

Note that tQDs do not exhibit a preferred orientation in the polymers as a function of drawing. While drawing at high strains can result in orientation of nanorods31, it has no effect on the orientation of tQD fillers.
This is due to the tetrahedral symmetry of the tQD15, e.g., the tQDs in Fig. 2b–d of the main text were drawn to over a thousand percent via hand-drawing or electrospinning, but no preferred orientation was seen. Nanoparticle surface ligand density, chemistry, and molecular weight are known to affect polymer matrix-nanoparticle interactions. Such factors can affect nanoparticle dispersion in a polymer matrix, as well as other properties22,34,35. In particular, ligands with similar composition to the polymer host matrix, as well as longer lengths and intermediate grafting density, have been shown to result in better dispersions. Motivated by these findings and the fact that tQDs with native short alkyl chain ligands aggregate in all polymer matrices studied (Fig. 1, Supplementary Fig. 1), we coated them with polymer ligands at intermediate grafting density. We did this via ligand exchange of tQDs to long-chain (~2.5 kDa) polymeric coatings. Our goal was to improve tQD dispersion to look at its effect on sensing behavior. The exchange was done using a two-step process, from the tQD's native octadecylphosphonic acid15 to pyridine and then to a polymer ligand layer22, in order to study the effect of interfacial polymer-tQD chemical bond strength on the embedded tQD pressure coefficient20. By introducing thiol-terminated PLLA (SH-PLLA) and SH-PEO onto tQD surfaces, we achieved a good dispersion, free of aggregation, into these polymers (Fig. 2). tQDs coated with SH-PLLA dispersed well into both PEO and PLLA, with individual tQDs seen in the polymer matrix in TEM images (Fig. 2). This significantly improved dispersion is indicative of a stronger like-like interface between the tetrapod and polymer22. Nuclear magnetic resonance (NMR) showed that ~60% of the ligands on the tQD surface were thiol-terminated polymeric ligands, with a surface ligand density40 of ~2/nm2, within the range of previous findings for CdSe quantum dots41 (Supplementary Fig. 2). 
Despite this notably improved interfacial interaction, differential scanning calorimetry measurements did not reveal any change in glass transition temperature, Tg4, likely because the tQD concentration was too small to impact Tg (0.2% or less by volume).

tQD sensing versatility and high dynamic range

For a nanosensor probe to be of use for a broad spectrum of applications, versatility of sensing response in a wide variety of host matrices is essential. Additionally, a large dynamic range of pressure coefficient is highly desirable23. For example, in biological settings, stresses between cells or between cell and substrate are often on the order of kPa42, while in structural applications, stresses can be on the order of MPa43. These applications require very different pressure coefficients: if the same pressure coefficient were used in biological applications as for structural parts, the optical shifts corresponding to the stresses applied to tQDs would be two orders of magnitude too low to resolve on any commercially available charge-coupled device (CCD) detector. Thus, the tQD sensor response in both relatively stiff and compliant host materials needs to be assessed. Accordingly, after sample preparation, the tQD sensor response was assessed via optical spectroscopy while applying tensile strain to the polymers (see Methods). This was done using a mechanical stretcher with a hole for laser passage4,6. Raw spectra as a function of stretching were collected at quasi-static strain rates, and then fit to single Gaussians to determine the PL emission maximum (i.e., the peak of the PL emission spectrum) as a function of stretching (Supplementary Fig. 3). Traditional mechanical testing was conducted using a spring-based load cell (Mark-10) for films or with an Agilent T-150 nanomechanical tensile testing machine (see Methods). All material systems studied in this work exhibited clear tQD spectral red-shifts upon tensile extension (Fig. 3, Supplementary Fig.
3), demonstrating clear versatility of the tQD sensor response in a wide variety of material systems. Typical optical and mechanical measurements for tQD-PCL and tQD-PBD fibers are shown in Fig. 4. For fibers, mechanical and opto-mechanical tests were performed separately, while for films they were performed simultaneously (see Methods). Examples of opto-mechanical and mechanical tests on tQD-PBD and tQD-PCL polymer fibers. Opto-mechanical and mechanical data were acquired separately. a Optically sensed fluorescence tensile curve of tQD-PBD fibers. The x-axis is tensile strain, while the y-axis is the magnitude of the PL emission maximum red-shift. b Corresponding tensile stress–strain curve of tQD-PBD fibers measured using a typical uniaxial mechanical tensile tester. c Optically sensed fluorescence tensile curve of tQD-PCL fibers. The y-axis is the magnitude of the PL emission maximum red-shift. d Corresponding tensile stress–strain curve of tQD-PCL fibers measured using a typical uniaxial mechanical tensile tester. Engineering stress and engineering strain are plotted High dynamic range of tQD pressure coefficients. a Monotonic scaling of the tQD pressure coefficient over three orders of magnitude with the polymer inverse Young's modulus. b Plot of the initial region shown in a (inverse Young's modulus <200/GPa). c Plot of pressure coefficient as a function of tQD cluster size for three concentrations of tQDs in SEBS. Error bars represent standard error of the mean (SEM) and each mean is the average of 10–15 measurements We examined trends in the tQD pressure coefficient as a function of several variables. These included polymer modulus, tQD aggregation, tQD concentration, and tQD-polymer interfacial interactions. We uniquely found an enhancement of up to more than three orders of magnitude in the tQD pressure coefficient with decreasing polymer Young's modulus (Fig. 5). 
All materials exhibited tQD PL energy gap red-shifts of 2–15 meV upon uniaxial stretching, even those with orders of magnitude lower stresses. In the most compliant polymers, this corresponds to an amplification of the tQD pressure coefficient over bulk CdSe by 100–300 times23,44. This means that tQDs are able to report stresses effectively both in stiff structural and compliant polymer materials, implying that tQDs are ideal nanoscale probes with high dynamic range. Polymer-tQD strain transfer efficiency. a Schematic of tQD-polymer interfacial strain transfer. a, b Lower strain transfer efficiency from relatively low Young's modulus polymers to the tQDs, while c and d depict higher strain transfer efficiency from relatively high Young's modulus (higher stiffness depicted as analogous to a braid) polymers to tQDs. e Red circles and blue Xs indicate experimental values. Blue Xs represent systems with varied ligand-tQD surface chemistry. Polymer-tQD strain transfer efficiency as a function of inverse Young's modulus of the polymer material. The plot shows lines that indicate our theoretical Mori-Tanaka model predictions; the yellow line represents the theoretical model with a polymer Poisson's ratio of 0.3, while the purple line represents the theoretical model with a polymer Poisson's ratio of 0.5. Error bars represent SEM and each mean is the average of 10–15 measurements In Fig. 5, the standard error in the calculation of the pressure coefficient from experimentally measured red-shift and stress is of similar percentage for each system. We also included pressure coefficients from previous studies utilizing preformed polyester fibers in which tQDs were incorporated via diffusion, and tQDs in frozen toluene in diamond anvil cells8,17. 
Possible origins of large tQD pressure coefficient range

Potential factors responsible for the observed tQD pressure coefficient amplification and wide dynamic range include the following: (i) changes in tQD dispersion in the host matrix; (ii) changes in the tQD-polymer interface chemistry; (iii) amplification of stress around the tQD in the polymer due to the tQD arms' nanoscale size and thus relative sharpness45; and (iv) varying degrees of strain transfer from the polymer to the tQD. As described below, we conclude that a potential explanation for this phenomenon is a molecular-scale polymer chain-tQD interaction leading to a strain transfer efficiency that varies monotonically with the polymer Young's modulus, as has been observed previously46. Regarding changes in tQD dispersion, we have determined that this is an unlikely cause for the observed phenomena by using TEM image analysis to compute average aggregate sizes and aggregate packing fractions for polymers that contained tQD clusters (Fig. 4c, Supplementary Fig. 4). We found no trend in the pressure coefficient with varying tQD aggregate size, tQD aggregate packing density, or tQD concentration in a given polymer matrix, except for one system, tQD-SEBS. In this case the pressure coefficient varied by a factor of only two, as compared to more than three orders of magnitude through varying the host matrix Young's modulus (Fig. 4). In addition, statistical analyses showed much higher correlations for Young's modulus and pressure coefficient than for any of the other abovementioned variables (see Methods and Supplementary Fig. 4). Thus, changes to tQD dispersion are not likely to be major causal factors for the observed pressure coefficient trends (see Methods for adjusted R-squared values). The second hypothesis was that interface strength and pressure coefficient would scale monotonically because of greater strain transfer to tQDs at stronger interfaces.
Interestingly, however, this also fails to explain the observed results, because the tQD pressure coefficient is not seen to scale with increasing tQD-polymer interfacial strength. While the vast majority of systems studied in this work showed little to no change between the Young's modulus of the pure polymer and that of the tQD-polymer composite (Supplementary Fig. 5), evenly dispersing tQDs by coating them with PLLA (Fig. 3) increased the polymer elastic modulus by 25–75% (Supplementary Fig. 6). However, the pressure coefficient simultaneously decreased by 50–75%. This is consistent with our finding that stiffer polymers have decreased tQD pressure coefficients (Fig. 4). Stress concentration at the tQD arm tips and edges due to their sharpness47,48 was eliminated as a mechanism for the observed pressure coefficient amplification in more compliant polymers, because such amplification is localized to the tip of the arm, far from the tQD core47,48. We suggest that a likely explanation is that the tQD fillers in different polymer matrices experience different degrees of strain transfer from the polymer matrix46. This has a literature precedent, because stress or strain transfer efficiencies from polymers to embedded fibers have been found to vary monotonically with the ratio of the Young's moduli of the stiff fiber and the more compliant polymer46. This leads to pressure coefficient amplification in lower-stiffness materials because of the monotonically varying Young's modulus difference between the stiff tQD and the increasingly compliant host polymer, and the lower stress in lower-stiffness polymers46,49. Stress amplification is known to occur in biological materials when a much more compliant material is bound to a very stiff material and the compliant material is strained, causing the stiffer material to bear an equal (or partially transferred) strain and thus a higher stress50.
Such a situation is often encountered in bone implants, when strain transfer from the surrounding compliant bone to the much stiffer implant problematically leads to stress amplification in the implant phase (stress shielding)49. A very simple, qualitative, elastic model can be used to evaluate tQD-filler strain transfer efficiencies. An elastic model is appropriate because owing to its branched shape and partial strain transfer from the polymer, it is likely that the tQD deformation during mechano-optical stress sensing is fully elastic4,6,8,17. Previous studies have shown that due to their unique core-arm bending modes, tQDs remain elastic to more than 30% bending strain as imposed by an AFM tip16. Furthermore, upon the removal of stress to the polymers, the tQDs always return to the same baseline fluorescence peak position, and have excellent cyclability of the sensing upon repeated deformation4,6, a strong indicator that tQD deformation remains elastic throughout the polymer tensile test17. As a very basic model, one can consider the load on the tQD-polymer system to be carried by the polymer and tQDs acting in parallel. In the elastic regime and in the limit of very strong binding, the strains will be equal in both the polymer matrix and the tQD-filler phase of the material, εt = εp. Further assuming elastic response of the components gives the tQD stress as \(\sigma _{\mathrm{t}} = \frac{{\sigma _{\mathrm{p}}E_{\mathrm{t}}}}{{E_{\mathrm{p}}}}\), where Et and Ep are the Young's moduli of the tQD and polymer, respectively, and σp is the polymer stress. The amplification effect is thus seen, in this very simple model, to be proportional to the ratio of the Young's moduli50. However, the geometry of the tQDs in the polymer matrix makes this very simple parallel model rather unrealistic; additionally, the binding between the two is unlikely to be as perfect as the model assumes. 
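To put a rough scale on the equal-strain ("parallel") amplification just described, here is a small illustrative calculation. The ~50 GPa tQD modulus and the polymer moduli are assumed round numbers chosen for illustration, not values taken from this article:

```python
# Equal-strain (parallel) limit: sigma_t = sigma_p * E_t / E_p, so the stiff
# tQD filler sees stress amplified by the modulus ratio E_t / E_p.
E_t = 50.0                                    # GPa, assumed bulk-CdS-like tQD modulus
polymers = {"rubbery (1 MPa)": 1e-3,          # GPa, assumed illustrative moduli
            "semicrystalline (0.3 GPa)": 0.3,
            "glassy (3 GPa)": 3.0}
for name, E_p in polymers.items():
    amplification = E_t / E_p                 # sigma_t / sigma_p in this limit
    print(f"{name}: sigma_t / sigma_p ~ {amplification:.0f}")
```

The ratio spans more than four orders of magnitude across these assumed matrices, which is why partial (rather than perfect) strain transfer must be invoked to match the measured pressure coefficients.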
Molecular dynamics studies have shown that polymer chains tend to wrap conformally around spherical and elongated nanoparticles51. In the case of tQDs and polymers, the interface likely consists of polymer chains wrapped around individual and/or multiple tQD arms, likely forming multiple loops51 (Fig. 5a–d). Such molecular-level wrapping between the polymer and tQDs is an important effect that contributes to tQD-polymer strain transfer efficiency, and thus to the observed results, but this is impossible to capture in numerical models using present-day computers. Further, this wrapping is again unlikely to create a strong enough interface to achieve equal strain in the tQD and polymer phases of the material. Accordingly, our simple parallel element model will likely overpredict the experimentally observed tQD stresses and pressure coefficients. However, due to the branched morphology and multiple bending modes of the tetrapod18, it is perhaps conceivable that this wrapping could lead to bending of tQD arms and partial strain transfer from the polymer to the tQD core at the arm-core junctions16. Its branching could give rise to a knotting of multiple chains around the tQD (Fig. 5a–d), so that entangled chains as a collective could transfer deformation and hence strain to the tQD. We note that while we did observe stress relaxation in most systems (Supplementary Fig. 3)4,6, such stress relaxation does not lead to any changes in pressure coefficient. This is because the mechanical stress and optical shift decay similarly during stress relaxation4,6.
Polymer matrix-tQD-filler strain transfer efficiency
To investigate the partial strain transfer in our tQD-polymer materials, we introduced a phenomenological parameter x for the degree of strain transfer, εt = xεp.
Assuming elasticity as before allows us to express the degree of strain transfer as: $$x = \frac{{\sigma _{\mathrm{t}}E_{\mathrm{p}}}}{{\sigma _{\mathrm{p}}E_{\mathrm{t}}}}.$$ The stress in the tQD, σt, can be determined by dividing the optical stress shifts for each system by the tQD pressure coefficient, as was assessed in prior work using diamond anvil cells17. For the tQD Young's modulus, Et, we used the value for bulk CdS52. The polymer stress, σp, and Young's modulus, Ep, were determined using uniaxial tensile testing. Figure 5e shows a plot of the fraction of strain transfer from the polymer matrix to the tQD across the tQD-polymer interface as a function of host matrices' Young's moduli. Data are shown for all systems that we studied experimentally. The plot indicates that tQDs experience increasing strain transfer as the ratio of tQD Young's modulus to host matrix modulus is reduced; i.e., the more compliant materials have lower strain transfer efficiency. As in previous studies4,6,8,17, we did not see any degradation in the mechanical properties upon incorporation of tQDs at any of the concentrations used in this work, which ranged from 0.05 to 20% by weight, or 0.01 to 5% by volume. This is true in cases where tQDs did not affect the mechanical properties4, as well as in cases of like-like interfaces where tQDs enhanced the Young's modulus (Supplementary Fig. 6). This indicates that tQDs are not acting as damage initiators. We note that the above assumption of elasticity is reasonable even for the more rubbery polymers considered in this work, PDMS and high-molecular-weight SEBS and PBD. Various types of PDMS, for instance, have been reported to be highly elastic rather than viscoelastic53. Similarly, PBD has been shown to exhibit high elasticity54. Also, our previous work showed that SEBS is highly elastic29.
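The strain-transfer estimate above lends itself to a short numerical sketch: the tQD stress is inferred from the optical shift divided by the pressure coefficient, and x follows from the moduli ratio. All numeric inputs below are illustrative placeholders, not measurements from this work.

```python
# Hedged sketch of the strain-transfer estimate x = (sigma_t * E_p) / (sigma_p * E_t),
# with sigma_t inferred from the optical shift and the tQD pressure
# coefficient as described in the text. All inputs are made-up placeholders.

def tqd_stress_from_shift(shift_meV, pressure_coeff_meV_per_GPa):
    """Infer tQD stress (GPa) from a fluorescence shift and pressure coefficient."""
    return shift_meV / pressure_coeff_meV_per_GPa

def strain_transfer_fraction(sigma_t, sigma_p, E_t, E_p):
    """x = (sigma_t / E_t) / (sigma_p / E_p): ratio of tQD strain to polymer strain."""
    return (sigma_t * E_p) / (sigma_p * E_t)

# Illustrative: a 5 meV shift at 50 meV/GPa implies 0.1 GPa of tQD stress.
sigma_t = tqd_stress_from_shift(5.0, 50.0)
x = strain_transfer_fraction(sigma_t=sigma_t, sigma_p=0.01, E_t=50.0, E_p=1.0)
print(x)  # 0.2, i.e., 20% strain transfer for these made-up inputs
```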
In spite of their high elasticity, the systems do have a relatively small component of viscoelasticity, which leads to relaxation observed in the first 100 s or so in the stress relaxation modulus for PDMS55. This is consistent with our observation of stress relaxation for most of the systems (Supplementary Fig. 3)4,6. However, all of our pressure coefficient measurements were conducted after 200 s, meaning that the systems had already relaxed and that stress relaxation is not affecting our pressure coefficient measurement. We modeled the stress response and dynamic range of the tQD material systems utilizing a model that we developed from the Mori-Tanaka theory for the mechanics of heterogeneous material systems, a field approximation method based on the Eshelby model (details are described in the Mori-Tanaka analysis section in Methods below)26,27,28. This second model was developed in order to validate the new simple uniaxial model (Eq. 1). We present our Mori-Tanaka theoretical model for two host matrix Poisson's ratios that fall within the boundaries of the material systems in this work. Despite the fact that the Mori-Tanaka theory makes a number of simplifying assumptions, such as linear elasticity of the system, and uniform dispersion of the tQDs in all polymers, it is seen to qualitatively match the experimental results. In addition to validating existing micromechanical theories26,27,28, this finding corroborates our simple uniaxial model (Eq. 1), showing that despite the complexity of our tQD-polymer systems, their behavior can be qualitatively captured using simple models. This result further indicates that the tQD response remains elastic even at large deformations in the lowest stiffness materials studied here. We note that despite the fact that Mori-Tanaka theory was originally developed for non-interacting fillers, multiple studies have successfully applied it for non-dilute cases, even as high as 10–50% by volume36,37,38,56. 
In this work, as evidenced by the TEM images in Figs. 1 and 2 and Supplementary Fig. 1, tQDs are interacting, with one exception being the evenly dispersed system in Fig. 2b. However, in all cases, we are below the 10–50% limit since the maximum volume fraction that we consider is 5%. Regardless, the fact that the theory was originally developed for non-interacting systems may explain why only qualitative agreement is seen. Figure 5a–d schematically illustrate our finding that tQDs wrapped with lower Young's modulus materials shift less than tQDs wrapped with stiffer polymers. Strain transfer efficiencies range from 0.2% for the most compliant to 45% for the stiffest polymers that we tested. The findings of this work are consistent with previous reports indicating that filler-polymer load transfer efficiency decreases as the Young's modulus of the host matrix is decreased46. In spite of this lower strain transfer in lower Young's modulus polymers, we find them to have a higher tQD pressure coefficient. This is because even though there is less strain transferred to tQDs in relatively compliant matrix materials, the stress in the polymer is far lower, which results in an overall higher pressure coefficient (see Supplementary Figs. 7 and 8 for plots of stress transfer efficiencies with respect to changes in host material Young's modulus, according to our Mori-Tanaka model). In this work, we have employed experimental opto-mechanical and mechanical characterization to demonstrate that branched tQDs exhibit excellent stress sensing capabilities to track mechanical stress–strain behavior in a wide spectrum of polymeric systems. Specifically, we have studied 17 tQD-polymer systems covering over eight polymer host matrices, multiple tQD-polymer interfacial chemistries, several tQD concentrations and dispersions, and more than four orders of magnitude variation in host matrix Young's modulus.
We find that changing the Young's modulus of the host matrix varies the tQD stress response, or pressure coefficient, by over three orders of magnitude. We present a method to functionalize tQDs with different polymeric ligands, achieving excellent dispersion of tQDs in multiple polymer matrices. Clear, cyclable stress sensing is observed in all polymers, with the mechanical properties of the polymers not degraded at all by the tQD additions. We further determine the strain transfer efficiency from the polymer to the tQD, finding that the efficiency increases for stiffer host matrix materials. Our results also represent a validation of the Mori-Tanaka theory for such systems. These findings indicate the high versatility of tQD stress sensors to a wide range of structural and biomedical applications as well as to fundamental polymer dynamics studies.
All chemicals were used as received. Chloroform, pyridine, and tetrahydrofuran were purchased from Sigma-Aldrich. PLLA (100 kDa molecular weight) was purchased from ShenZhen ESUN Industrial Co Ltd. SH-PLLA (2.5 kDa molecular weight) was purchased from Polymer Source. SEBS (117 kDa molecular weight, MD-1537) was purchased from Kraton Corporation. Polyethylene oxide (300 kDa molecular weight), PDMS (purchased in monomer form), PCL (80 kDa molecular weight), and PBD (cis, 200–300 kDa molecular weight) were purchased from Sigma-Aldrich.
tQD synthesis
Chemical precursors used were purchased from Sigma-Aldrich. CdSe-CdS core-shell tQDs were synthesized in the absence of moisture and oxygen, using a two-step seeded synthesis method. To start, zinc-blende CdSe seeds were prepared by mixing cadmium myristate with selenium dissolved in octadecene. This mixture was heated to 170 °C, causing the nucleation of CdSe seeds, followed by injection of oleic acid and oleylamine ligands and growth of the CdSe seeds at 240 °C. Next, seeds were cleaned by repeated centrifugation in polar solvents (isopropanol and acetone).
Next, wurtzite CdS arms were grown on the cleaned CdSe seeds by syringe-injecting them via air-free transfer into a heated mixture of n-propylphosphonic acid, trioctylphosphine, trioctylphosphine oxide, and n-octadecylphosphonic acid. Growth of arms then occurred at 320 °C. The tQDs were then transferred to a glovebox and cleaned via repeated centrifugation in polar solvents. tQD samples in this work had arm lengths ranging from 22 to 29 nm, arm diameters ranging from 4 to 6 nm, and core sizes of 3.5 to 4.5 nm, analyzed from TEM images using ImageJ. Within these tQD size ranges, no statistically significant difference in the optical sensing response was seen in any given nanocomposite system.
A Bruker Avance500 II NMR spectrometer system (Bruker, Billerica, MA) was used to conduct NMR spectroscopy. The δ scale is utilized to present shifts, and the unit of hertz is used for coupling constants. Deuterated chloroform was used as solvent, while the standard was tetramethylsilane.
Electrospinning and hand-drawing precursor solutions
SEBS, PLLA, PCL, PBD, and PEO were dissolved in chloroform to create solutions of 12%, 12%, 10%, 12%, and 5% by weight, respectively. tQDs were then added in a chloroform solution at a variety of concentrations ranging from 0.05 to 20% by weight, or 0.01 to 5% by volume. SEBS, PLLA, PCL, and PEO were electrospun using the procedure described below, while PBD was hand-drawn with a syringe needle dipped into the viscous polymer solution directly onto mechanical tabs for mechanical tests or directly onto the piezo-drive for optical tests.
Fiber synthesis
Polymer-tQD fibers were prepared using either electrospinning or hand-drawing. For electrospinning, a droplet of polymer-tQD solution was placed on the end of a syringe needle (Nordson, 38.1 mm/0.51 mm gauge length/inner diameter, part number 7018225) before application of an electric field between the needle and collector, which resulted in fiber formation.
As-formed fiber diameters ranged from 1 to 10 μm, and single fibers were formed by employing a dual-rod collector geometry. Eight-millimeter-diameter stainless steel rods were placed 95 mm apart during electrospinning. Fibers were used as-formed. A bias of 15 kV between the needle and collector was used, with a 150 mm separation between needle and collector, resulting in an electric field of 1 kV/cm. For hand-drawing of fibers, highly viscous solutions of polymer in chloroform were used; these were of similar viscosity to the electrospinning solutions. Fibers were manually pulled from the highly viscous solution using a pipette tip before direct deposition onto a tensile testing tab.
Tensile mechanical testing
Samples were prepared for mechanical tests by directly gluing electrospun fibers to small 5 mm × 10 mm cardboard tabs with diamond central cut-outs for stability. Specialized electrospun fiber transfer tools made of twisted pipe cleaners and carbon tape were used to transfer as-spun fibers to tensile testing tabs. Epoxy glue was used to secure fibers. Tensile mechanical testing using an Agilent T-150 nanomechanical tensile tester was performed at quasi-static strain rates using standard pivot grips, or in a custom-built tensile tester with a hole for laser passage using a Mark-10 0.5 N load cell for SEBS films. PDMS-tQD composites were tested using a custom load frame (Psylotech) in a confocal microscope (WiTEC). Young's moduli were assessed in the initial linear elastic region of uniaxial tensile stress–strain curves. Each averaged data point for the value of the Young's modulus for the different polymers and composites represents tests from 5–15 trials. All mechanical tests in this work were conducted at room temperature. All stresses and strains presented are engineering stresses and strains.
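The modulus-extraction step described above (the slope of the initial linear elastic region of the engineering stress–strain curve) can be sketched as a simple least-squares fit; the stress–strain data below are synthetic, not measured.

```python
# Sketch of extracting a Young's modulus as the least-squares slope of the
# initial linear region of an engineering stress-strain curve, as described
# above. Data values are synthetic placeholders.

def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic curve: linear at 2 GPa modulus, sampled below 1% strain.
strain = [0.000, 0.002, 0.004, 0.006, 0.008, 0.010]
stress = [2.0 * e for e in strain]  # GPa
E = linear_slope(strain, stress)
print(E)  # ≈ 2.0 GPa
```

In practice one would first truncate the curve to the linear region (e.g., by inspecting residuals) before fitting.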
Inverted fluorescence spectroscopy system
The nanocrystal fluorescence was excited with a 488-nm Ar+ laser (Lexel Laser, Inc., 95) with 1 W power and 250 μm spot size at the sample. Bright-field and fluorescence images were taken with a digital microscope camera (Paxcam 2+). The fluorescence spectra were monitored using a custom-built inverted fluorescence microscope with a spectrometer (Acton Research Corporation, SpectraPro-3001) and CCD detector (Princeton Instruments, Model 7509-0001). Exposure times of 1 s were used to collect spectra with a 0.6 s lag time between frames. Spectra were collected and binned over the area of the laser spot and fit to single Gaussians; for local mapping, rows of the camera in groups of threes or fours were binned, rather than binning over the entire laser excitation area on the CCD camera. Change in emission was defined as the difference between the peak position at time t and the peak position at zero strain. Stress relaxation rates were determined by fitting the emission shift versus time to a single exponential decay. For mechanical tests, stress was substituted for emission shift.
Monitoring nanocomposite fluorescence during deformation
To monitor fluorescence while stretching fibers in tension, a piezo-stretcher mounted via screws on a metal platform was used; the platform had a hole to allow the laser to reach the sample. The piezo-drive was controlled with LabVIEW. The gauge length for optical tests was 1.8 mm for PLLA and 330 or 500 μm for SEBS, PCL, PEO, and PBD. Fibers were mounted directly onto pieces of tape put onto the arms of the piezo-drive, over which epoxy glue was applied and tests were started 15–30 min later to allow the glue sufficient time to dry. All opto-mechanical tests in this work were conducted at room temperature.
Characterization of fiber morphology and size
The diameters of tQD-polymer fibers were imaged and photographed using a 63X objective lens on a standard optical microscope (QCapture camera and QImaging software), which was calibrated using a transmission electron microscope grid (11.85 pixels/μm). Fiber diameters ranged from 1 to 10 μm and were analyzed from digital camera images using ImageJ. Film thicknesses ranged from ~150 μm to 1 mm and were assessed using digital calipers with a resolution of 1 μm (Mitutoyo).
Determination of pressure coefficient
The pressure coefficient was determined by taking the average stress-induced fluorescence red-shift in units of milli-electron volts (meV) across 5–15 optical tests of the nanocomposites at a given engineering strain, and dividing this quantity by the average uniaxial stress in the nanocomposite at the same engineering strain in gigapascals (GPa). In the literature, this is a conventional unit for the pressure coefficient23. The point of maximum optical red-shift achievable in our home-built fluorescence tensile stretcher, 2–10 meV depending on the system, was used to determine pressure coefficient. Then to obtain stresses for pressure coefficient determination, the engineering stress at the same engineering strain from mechanical tests was used. This strain ranged from 0.7 to 2. No difference was seen within error in the pressure coefficient for a particular system when evaluated at higher or lower engineering strains. For all systems, the pressure coefficient was evaluated after a minimum of 200 s had passed in the mechanical or opto-mechanical tensile tests. For films, optical and mechanical tests were performed simultaneously, while for fibers, due to their low strength, a highly specialized load cell was used which required doing the two sets of tests on separate instruments. Each pressure coefficient value represents averages from 5 to 15 different trials.
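The pressure-coefficient calculation described above reduces to a ratio of averages; a minimal sketch follows, with made-up shift and stress values standing in for real test data.

```python
# Sketch of the pressure-coefficient calculation described above: the average
# stress-induced fluorescence red-shift (meV) divided by the average
# engineering stress (GPa) at the same engineering strain. The values below
# are illustrative, not the paper's data.

def pressure_coefficient(shifts_meV, stresses_GPa):
    """Mean optical shift over mean stress, in meV/GPa."""
    mean_shift = sum(shifts_meV) / len(shifts_meV)
    mean_stress = sum(stresses_GPa) / len(stresses_GPa)
    return mean_shift / mean_stress

shifts = [4.0, 5.0, 6.0]       # meV, from repeated optical tests
stresses = [0.09, 0.10, 0.11]  # GPa, from mechanical tests at the same strain
print(pressure_coefficient(shifts, stresses))  # ≈ 50 meV/GPa
```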
The percent covariance (standard deviation divided by the mean) is fairly similar for all composite Young's moduli and pressure coefficients, which is why the error bar increases with increasing pressure coefficient and composite modulus in Fig. 2 in the main text. Note that while we observed stress relaxation in nearly all nanocomposite systems4,6, such stress relaxation did not affect the pressure coefficient measurement due to similar decay rates for mechanical stress and optical shift during stress relaxation.
Preparation of nanocomposite films
To prepare SEBS-tQD nanocomposite films, 25 mg of SEBS was dissolved in 2 mL of a tQD-chloroform solution at appropriate concentration to create films of 20% by weight tQDs (5% by volume). These precursor solutions were put into glass vials and dried using a vigorous stream of nitrogen, resulting in film drying occurring within 1–2 min. PDMS films were prepared using a kit with two-part composition (prepolymer base and curing agent) and a 10 to 1 ratio of prepolymer base to curing agent. After mixing, the PDMS was cured at room temperature.
TEM and sample preparation
For TEM, single fibers or a random fiber network were either deposited directly onto copper TEM grids, or fibers or films were embedded in epoxy and then microtomed at cryogenic temperatures using an ultramicrotome (Boeckler, RMC MT-X). Sections were imaged using a 200 kV Tecnai G2 transmission electron microscope.
tQD-ligand exchange
tQD-ligand exchange was performed using a two-step procedure. First, tQDs were dissolved in pyridine and then centrifuged using hexane. This process was repeated thrice to replace the native octadecylphosphonic acid coating with pyridine to the greatest extent possible22. Next, SH-PLLA dissolved in tetrahydrofuran was mixed with pyridine-coated tQDs, until tQDs solubilized in the tetrahydrofuran, which was used as an indication that the exchange had completed.
NMR was then employed to determine the degree of exchange, which was found to be 60% exchange to SH-PLLA.
In order to more quantitatively determine that the Young's modulus was the main variable correlative with the tQD pressure coefficient, we performed linear fitting to the pressure coefficient as a function of several variables. For both linear regimes shown in Fig. 5, the Young's modulus is the main correlative variable with the tQD pressure coefficient, with adjusted R-squared values of 0.99 and 0.93 for the first and second regimes, respectively. Other dispersion-related nanocomposite variables showed little to no correlation, such as tQD concentration (−0.03 adjusted R-squared), tQD aggregate cross-sectional area, as determined by TEM image analysis (0.3788 adjusted R-squared), and tQD aggregate packing fraction, also assessed by TEM image analysis (0.3274 adjusted R-squared). Respective plots are shown in Supplementary Fig. 4.
Mori-Tanaka analysis
When the tQDs are well dispersed in a host matrix at a low to modest volume fraction, the average state of stress and strain in the tQD can be understood using the theory of Mori and Tanaka28. While this theory is developed for the linear elastic regime26,27, it can still be used to qualitatively understand the trends observed in our experiment. From an experimental point of view, we measure several basic quantities: the stress and strain in the composite polymer-tQD system (which we equate to the host polymer stress and strain due to the low loading fractions) and the PL shift in the tQDs. Knowing the pressure coefficient of the tQD, this latter quantity allows us to infer the stress in the tQD. Note that this inference assumes that the pressure coefficient is known for the exact state of stress in the tQD in the host matrix.
The essential result from the theory of Mori and Tanaka28 that we use is that the volume average strains in the tQD (t) and the matrix (m) are related by: $$< \varepsilon > _{\mathrm{t}} = {\cal A}: < \varepsilon > _{\mathrm{m}},$$ where the strain concentration tensor for the tQD is given by: $${\cal A} = \left[ {{\Bbb I} + {\Bbb P}:\left( {{\Bbb C}_{\mathrm{t}} - {\Bbb C}_{\mathrm{m}}} \right)} \right]^{ - 1}.$$ In Eq. (3), \({\Bbb I}\) is the symmetric fourth-order identity tensor, \({\Bbb P} = {\Bbb S}:{\Bbb C}_{\mathrm{m}}^{ - 1}\), \({\Bbb C}_{\mathrm{m}}\) is the fourth-order modulus tensor for the host matrix, \({\Bbb C}_{\mathrm{t}}\) is the fourth-order modulus tensor for the tQD, and \({\Bbb S}\) is the interior Eshelby tensor (for which we utilize the solution for a spherical inclusion in an infinite matrix). The CdSe core has a zinc-blende structure (space group \(F\bar 43m\)) with anisotropic elastic constants39. Given the limitations of our actual knowledge of the precise state of the system, we employ isotropic elastic constants for the CdSe by projecting the full fourth-order elasticity tensor onto the space of isotropic elasticity tensors, viz., \(\left\| {{\Bbb C} - {\Bbb C}^{{\mathrm{iso}}}} \right\| \to \min\). This results in a Young's modulus of ECdSe = 44.6 GPa and a Poisson's ratio of νCdSe = 0.334. The CdS has a wurtzite structure (space group P63mc) with anisotropic elastic constants39. The isotropic projection of these properties results in a Young's modulus of ECdS = 48.3 GPa and a Poisson's ratio of νCdS = 0.349. Given these projections, it is reasonable to take the Eshelby tensor to be \({\Bbb S} = \frac{2}{3}{\Bbb I}^{{\mathrm{vol}}} + \frac{7}{{15}}{\Bbb I}^{{\mathrm{dev}}}\), where \({\Bbb I}^{{\mathrm{vol}}} = \frac{1}{3}{\mathbf{1}} \otimes {\mathbf{1}}\) is the volumetric fourth-order identity tensor, and \({\Bbb I}^{{\mathrm{dev}}} = {\Bbb I} - {\Bbb I}^{{\mathrm{vol}}}\) is the fourth-order deviatoric identity tensor.
Evaluation of Eq. (3) results in: $${\cal A} = \frac{1}{{\frac{1}{3} + \frac{{2K_{\mathrm{t}}}}{{3K_{\mathrm{m}}}}}}{\Bbb I}^{{\mathrm{vol}}} + \frac{1}{{\frac{8}{{15}} + \frac{{7\mu _{\mathrm{t}}}}{{15\mu _{\mathrm{m}}}}}}{\Bbb I}^{{\mathrm{dev}}},$$ where \(K = \frac{E}{{3\left( {1 - 2v} \right)}}\) is the isotropic bulk modulus and \(\mu = \frac{E}{{2\left( {1 + v} \right)}}\) is the isotropic shear modulus. If we concern ourselves with the axial strain transfer to the tQD and assume that the matrix deforms in a near incompressible fashion, then the relevant expression for the strain transfer coefficient is: $$< \varepsilon _{11} > _{\mathrm{t}} = x < \varepsilon _{11} > _{\mathrm{m}},$$ $$x = {\cal A}_{1111} - \frac{1}{2}{\cal A}_{1122} - \frac{1}{2}{\cal A}_{1133} = \frac{1}{{\frac{8}{{15}} + \frac{{7\mu _{\mathrm{t}}}}{{15\mu _{\mathrm{m}}}}}}.$$ Equation (6) is plotted in Supplementary Fig. 9 over the experimental range of host material compliances (inverse Young's moduli) at two representative values for the Poisson's ratio of the host material. The trend qualitatively matches the experimental data and is quantitatively close too, despite being a linear elastic theory with a number of simplifying assumptions. (Note a mean value of μt = 17.5 GPa was used for the plot.) The low values of the strain transfer coefficient for soft host materials still allow for nontrivial values for the stress transfer coefficient. 
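The closed-form transfer coefficients above can be evaluated directly. Below is a hedged numerical sketch of the strain-transfer coefficient x of Eq. (6), together with the axial and pressure stress-transfer coefficients that follow from the Mori-Tanaka stress concentration tensor (Eqs. (10) and (12) in the text). The swept host moduli and the host Poisson's ratio of 0.45 are illustrative assumptions; μt = 17.5 GPa and the isotropic CdS projection (E = 48.3 GPa, ν = 0.349) are taken from the text.

```python
# Hedged numerical sketch of the Mori-Tanaka transfer coefficients derived
# above. Host-material values are illustrative; mu_t = 17.5 GPa is the mean
# tQD shear modulus quoted in the text.

def bulk_modulus(E, nu):
    """Isotropic bulk modulus K = E / (3(1 - 2*nu))."""
    return E / (3.0 * (1.0 - 2.0 * nu))

def shear_modulus(E, nu):
    """Isotropic shear modulus mu = E / (2(1 + nu))."""
    return E / (2.0 * (1.0 + nu))

def strain_transfer(mu_t, mu_m):
    """Eq. (6): x = 1 / (8/15 + 7*mu_t/(15*mu_m))."""
    return 1.0 / (8.0 / 15.0 + 7.0 * mu_t / (15.0 * mu_m))

def stress_transfer_axial(K_t, K_m, mu_t, mu_m):
    """Eq. (10): y = B_1111 under uniaxial host stress."""
    vol = (K_t / K_m) / (1.0 / 3.0 + 2.0 * K_t / (3.0 * K_m))
    dev = (mu_t / mu_m) / (8.0 / 15.0 + 7.0 * mu_t / (15.0 * mu_m))
    return vol / 3.0 + 2.0 * dev / 3.0

def stress_transfer_pressure(K_t, K_m):
    """Eq. (12): y_v, the tQD pressure per unit uniaxial host stress."""
    return (K_t / K_m) / (3.0 * (1.0 / 3.0 + 2.0 * K_t / (3.0 * K_m)))

mu_t = 17.5                      # GPa, from the text
K_t = bulk_modulus(48.3, 0.349)  # GPa, isotropic CdS projection from the text
nu_m = 0.45                      # assumed near-incompressible host
for E_m in (0.001, 0.01, 0.1, 1.0, 10.0):  # illustrative host Young's moduli, GPa
    mu_m, K_m = shear_modulus(E_m, nu_m), bulk_modulus(E_m, nu_m)
    x = strain_transfer(mu_t, mu_m)
    y = stress_transfer_axial(K_t, K_m, mu_t, mu_m)
    print(f"E_m = {E_m:7.3f} GPa: x = {x:.2e}, y = {y:.2e}")
```

As a sanity check, setting the tQD and matrix moduli equal gives x = y = 1 and y_v = 1/3, as expected for a homogeneous material under uniaxial stress.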
Within the Mori-Tanaka theory, the stress transfer relation is given by: $$< \sigma > _{\mathrm{t}} = {\cal B}: < \sigma > _{\mathrm{m}},$$ where the stress concentration tensor for the tQD is given by: $${\cal B} = \left[ {{\Bbb I} + {\Bbb Q}:\left( {{\Bbb C}_{\mathrm{t}}^{ - 1} - {\Bbb C}_{\mathrm{m}}^{ - 1}} \right)} \right]^{ - 1} = \frac{{K_{\mathrm{t}}}}{{K_{\mathrm{m}}}}\frac{1}{{\frac{1}{3} + \frac{{2K_{\mathrm{t}}}}{{3K_{\mathrm{m}}}}}}{\Bbb I}^{{\mathrm{vol}}} + \frac{{\mu _{\mathrm{t}}}}{{\mu _{\mathrm{m}}}}\frac{1}{{\frac{8}{{15}} + \frac{{7\mu _{\mathrm{t}}}}{{15\mu _{\mathrm{m}}}}}}{\Bbb I}^{{\mathrm{dev}}}.$$ In Eq. (8), \({\Bbb Q} = {\Bbb T}:{\Bbb C}_{\mathrm{m}}\), and the conjugate Eshelby tensor \({\Bbb T} = {\Bbb I} - {\Bbb C}_{\mathrm{t}}:{\Bbb S}:{\Bbb C}_{\mathrm{t}}^{ - 1}\). Assuming a uniaxial state of stress in the host material, the stress transfer coefficient for the stress component in the tQD in the direction of the load is given by: $$< \sigma _{11} > _{\mathrm{t}} = y < \sigma _{11} > _{\mathrm{m}},$$ $$y = {\cal B}_{1111} = \frac{1}{3}\frac{{K_{\mathrm{t}}}}{{K_{\mathrm{m}}}}\frac{1}{{\frac{1}{3} + \frac{{2K_{\mathrm{t}}}}{{3K_{\mathrm{m}}}}}} + \frac{2}{3}\frac{{\mu _{\mathrm{t}}}}{{\mu _{\mathrm{m}}}}\frac{1}{{\frac{8}{{15}} + \frac{{7\mu _{\mathrm{t}}}}{{15\mu _{\mathrm{m}}}}}}.$$ The stress transfer coefficient for the pressure in the tQD is given by: $$< p > _{\mathrm{t}} = y_v < \sigma _{11} > _{\mathrm{m}},$$ $$y_v = \frac{1}{3}\left( {{\cal B}_{1111} + {\cal B}_{2211} + {\cal B}_{3311}} \right) = \frac{1}{3}\frac{{K_{\mathrm{t}}}}{{K_{\mathrm{m}}}}\frac{1}{{\frac{1}{3} + \frac{{2K_{\mathrm{t}}}}{{3K_{\mathrm{m}}}}}}.$$ Equation (10) is plotted in Supplementary Fig. 7 over the experimental range of host material compliances (inverse Young's moduli) at two representative values for the Poisson's ratio of the host material. Equation (12) is plotted in Supplementary Fig. 
8 over the experimental range of host compliances (inverse Young's moduli) at two representative values for the Poisson's ratio of the host material.
The authors confirm that all relevant data are available upon reasonable request.
References
1. Narayanan, R. A. et al. Dynamics and internal stress at the nanoscale related to unique thermomechanical behavior in polymer nanocomposites. Phys. Rev. Lett. 97, 075505 (2006).
2. Shiari, B. & Miller, R. E. Multiscale modeling of crack initiation and propagation at the nanoscale. J. Mech. Phys. Solids 88, 35–49 (2016).
3. Cheng, S. et al. Unraveling the mechanism of nanoscale mechanical reinforcement in glassy polymer nanocomposites. Nano Lett. 16, 3630–3637 (2016).
4. Raja, S. N. et al. Tetrapod nanocrystals as fluorescent stress probes of electrospun nanocomposites. Nano Lett. 13, 3915–3922 (2013).
5. Mu, M., Osswald, S., Gogotsi, Y. & Winey, K. I. An in situ Raman spectroscopy study of stress transfer between carbon nanotubes and polymer. Nanotechnology 20, 335703 (2009).
6. Raja, S. N., Zherebetskyy, D., Wu, S., Ercius, P. & Powers, A. Mechanisms of local stress sensing in multifunctional polymer films using fluorescent tetrapod nanocrystals. Nano Lett. 16, 5060–5067 (2016).
7. Jin, X. et al. A novel concept for self-reporting materials: stress sensitive photoluminescence in ZnO tetrapod filled elastomers. Adv. Mater. 25, 1342–1347 (2013).
8. Choi, C. L., Koski, K. J., Olson, A. C. K. & Alivisatos, A. P. Luminescent nanocrystal stress gauge. Proc. Natl Acad. Sci. USA 107, 21306–21310 (2010).
9. Chan, E. P., Walish, J. J., Urbas, A. M. & Thomas, E. L. Mechanochromic photonic gels. Adv. Mater. 25, 3934–3947 (2013).
10. Butt, H.-J., Cappella, B. & Kappl, M. Force measurements with the atomic force microscope: technique, interpretation and applications. Surf. Sci. Rep. 59, 1–152 (2005).
11. Hammock, M. L., Chortos, A., Tee, B. C. K., Tok, J. B.
H. & Bao, Z. 25th anniversary article: the evolution of electronic skin (E-skin): a brief history, design considerations, and recent progress. Adv. Mater. 25, 5997–6038 (2013).
12. Han, X., Liu, Y. & Yin, Y. Colorimetric stress memory sensor based on disassembly of gold nanoparticle chains. Nano Lett. 14, 2466–2470 (2014).
13. Kamat, N. P. et al. Sensing membrane stress with near IR-emissive porphyrins. Proc. Natl. Acad. Sci. USA 108, 13984–13989 (2011).
14. Koc, M. A. et al. Characterizing photon reabsorption in quantum dot-polymer composites for use as displacement sensors. ACS Nano 11, 2075–2084 (2017).
15. Talapin, D. V. et al. Seeded growth of highly luminescent CdSe/CdS nanoheterostructures with rod and tetrapod morphologies. Nano Lett. 7, 2951–2959 (2007).
16. Fang, L. et al. Mechanical and electrical properties of CdTe tetrapods studied by atomic force microscopy. J. Chem. Phys. 127, 184704 (2007).
17. Choi, C. L., Koski, K. J., Sivasankar, S. & Alivisatos, A. P. Strain-dependent photoluminescence behavior of CdSe/CdS nanocrystals with spherical, linear, and branched topologies. Nano Lett. 9, 3544–3549 (2009).
18. Raja, S. N. et al. Influence of three-dimensional nanoparticle branching on the Young's modulus of nanocomposites: Effect of interface orientation. Proc. Natl Acad. Sci. USA 112, 6533–6538 (2015).
19. Thomas, S., Joseph, K., Malhotra, S. K., Goda, K. & Sreekala, M. S. Polymer Composites, Nanocomposites (John Wiley & Sons, 2013).
20. Bockstaller, M. R., Mickiewicz, R. A. & Thomas, E. L. Block copolymer nanocomposites: perspectives for tailored functional materials. Adv. Mater. 17, 1331–1349 (2005).
21. Young, R. J., Deng, L., Wafy, T. Z. & Kinloch, I. A. Interfacial and internal stress transfer in carbon nanotube based nanocomposites. J. Mater. Sci. 51, 344–352 (2015).
22. Gupta, S., Zhang, Q., Emrick, T., Balazs, A. C. & Russell, T. P. Entropy-driven segregation of nanoparticles to cracks in multilayered composite polymer structures. Nat. Mater. 5, 229–233 (2006).
23. Li, J. & Wang, L.-W.
Deformation potentials of CdSe quantum dots. Appl. Phys. Lett. 85, 2929 (2004).
24. Eitan, A., Fisher, F. T., Andrews, R., Brinson, L. C. & Schadler, L. S. Reinforcement mechanisms in MWCNT-filled polycarbonate. Compos. Sci. Technol. 66, 1162–1173 (2006).
25. Mol Menamparambath, M., Arabale, G., Nikolaev, P., Baik, S. & Arepalli, S. Near-infrared fluorescent single walled carbon nanotube-chitosan composite: interfacial strain transfer efficiency assessment. Appl. Phys. Lett. 102, 171903 (2013).
26. Li, S. & Wang, G. Introduction to Micromechanics and Nanomechanics (World Scientific, 2008).
27. Nemat-Nasser, S. & Hori, M. Micromechanics (Elsevier, 2013).
28. Mori, T. & Tanaka, K. Average stress in matrix and average elastic energy of materials with misfitting inclusions. Acta Metall. 21, 571–574 (1973).
29. Raja, S. N. et al. Strain-dependent dynamic mechanical properties of Kevlar to failure: Structural correlations and comparisons to other polymers. Mater. Today Commun. 2, e33–e37 (2015).
30. Raja, S. N. et al. Cavitation-induced stiffness reductions in quantum dot–polymer nanocomposites. Chem. Mater. 28, 2540–2549 (2016).
31. Raja, S. N. et al. Encapsulation of perovskite nanocrystals into macroscale polymer matrices: enhanced stability and polarization. ACS Appl. Mater. Interfaces 8, 35523–35533 (2016).
32. Li, D., Wang, Y. & Xia, Y. Electrospinning of polymeric and ceramic nanofibers as uniaxially aligned arrays. Nano Lett. 3, 1167–1171 (2003).
33. Powers, A. S., Liao, H. G., Raja, S. N. & Bronstein, N. D. Tracking nanoparticle diffusion and interaction during self-assembly in a liquid cell. Nano Lett. 17, 15–20 (2016).
34. Chiu, J. J., Kim, B. J., Kramer, E. J. & Pine, D. J. Control of nanoparticle location in block copolymers. J. Am. Chem. Soc. 127, 5036–5037 (2005).
35. Corbierre, M. K. et al. Polymer-stabilized gold nanoparticles and their incorporation into polymer matrices. J. Am. Chem. Soc. 123, 10411–10412 (2001).
36. Mancarella, F., Style, R. W. & Wettlaufer, J. S.
Surface tension and the Mori–Tanaka theory of non-dilute soft composite solids. Proc. R. Soc. A 472, 20150853 (2016). Article ADS MathSciNet PubMed PubMed Central MATH Google Scholar Christensen, R. M. A critical evaluation for a class of micro-mechanics models. J. Mech. Phys. Solids 38, 379–404 (1990). Benveniste, Y. A new approach to the application of Mori-Tanaka's theory in composite materials. Mech. Mater. 6, 147–157 (1987). de Jong, M. et al. Charting the complete elastic properties of inorganic crystalline compounds. Sci. Data 2, 150009 (2015). Ye, X. et al. Structural diversity in binary superlattices self-assembled from polymer-grafted nanocrystals. Nat. Commun. 6, 10052 (2015). Article ADS PubMed PubMed Central Google Scholar Anderson, N. C., Hendricks, M. P., Choi, J. J. & Owen, J. S. Ligand exchange and the stoichiometry of metal chalcogenide nanocrystals: spectroscopic observation of facile metal-carboxylate displacement and binding. J. Am. Chem. Soc. 135, 18536–18548 (2013). Li, Z.-Y. et al. Stress analysis of carotid plaque rupture based on in vivo high resolution MRI. J. Biomech. 39, 2611–2622 (2006). Rana, S. & Fangueiro, R. Advanced Composite Materials for Aerospace Engineering. (Woodhead Publishing, 2016). Shan, W. et al. Pressure dependence of the fundamental band-gap energy of CdSe. Appl. Phys. Lett. 84, 67 (2004). Dharan, C. K. H., Kang, B. S. & Finnie, I. Finnie's Notes on Fracture Mechanics (Springer, New York, 2016). Asloun, E. M., Nardin, M. & Schultz, J. Stress transfer in single-fibre composites: effect of adhesion, elastic modulus of fibre and matrix, and polymer chain mobility. J. Mater. Sci. 24, 1835–1844 (1989). Roatta, A. & Bolmaro, R. E. An Eshelby inclusion based model for the study of stresses and plastic strain localization in metal matrix composites II. Fiber reinforcement and lamellar inclusions. Mat. Sci. Eng. A 229, 192–202 (1997). Ru, C. Q. 
Analytic solution for Eshelby's problem of an inclusion of arbitrary shape in a plane or half-plane. J. Appl. Mech. 66, 315–523 (1999). Rangert, B., Krogh, P., Langer, B. & Van Roekel, N. Bending overload and implant fracture: a retrospective clinical analysis. Int. J. Oral. Maxillofac. Implants 10, 136–151 (2009). Callister, W. D. & Rethwisch, D. G. Fundamentals of Materials Science and Engineering (John Wiley & Sons, 2012). Tallury, S. S. & Pasquinelli, M. A. Molecular dynamics simulations of flexible polymer chains wrapping single-walled carbon nanotubes. J. Phys. Chem. B 114, 4122–4129 (2010). Group III Condensed Matter 41B (II-VI and I-VII Compounds; Semimagnetic Compounds), O. Madelung, U. Rössler, M. Schulz (ed.), Springer-Verlag Berlin Heidelberg, 1999. Deguchi, S., Hotta, J., Yokoyama, S. & Matsui, T. S. Viscoelastic and optical properties of four different PDMS polymers. J. Micromech. Microeng. 25, 097002 (2015). Malkin, A. Y., Kulichikhin, V. G., Zabugina, M. P. & Vinogradov, G. V. The high elasticity of polybutadienes with different micro-structure. Polym. Sci. U. S. S. R. 12, 138–148 (1970). Du, P., Lu, H. & Zhang, X. Measuring the Young's relaxation modulus of PDMS using stress relaxation nanoindentation. MRS Proc. 1222, 222 (2011). Christensen, R., Schantz, H. & Shapiro, J. On the range of validity of the Mori-Tanaka method. J. Mech. Phys. Solids 40, 69–73 (1992). The authors first and foremost wish to thank Prof. A. Paul Alivisatos for his overall guidance in this program. Work on tQD nanocrystal-polymer sample preparation and optical, mechanical, and structural characterization was supported by the Director, Office of Science, Office of Basic Energy Sciences, Division of Materials Science and Engineering, of the U.S. Department of Energy under contract DE-AC02-05CH11231, specifically on the Inorganic/Organic Nanocomposites NSET Program (for S.N.R. and R.O.R.). L.L. 
was supported by National Science Foundation NSF Grant ECCS-0901864 for mechanical characterization. M.R.J. acknowledges the Arnold and Mabel Beckman Foundation for a postdoctoral fellowship. The authors would also like to thank Prof. Ting Xu, Dr. Lindsey Hanson and Prof. Yi Liu for helpful discussions. They further thank Siva Wu, Dr. Kari Thorkelsson, Dr. Bo He, David Mrdjenovich, Dr. Wendy Gu, Matthew Koc, Giulio Zhou, Christina Hyland, Turner Anderson, Dr. Lindsey Hanson, and Lillian Hsueh for experimental assistance. Andrew Luong is thanked for artwork design. Shilpa N. Raja Present address: Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA Xingchen Ye Present address: Department of Chemistry, Indiana University-Bloomington, Bloomington, IN, 47405, USA Matthew R. Jones Present address: Department of Chemistry, Rice University, Houston, TX, 77251, USA Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA Shilpa N. Raja & Robert O. Ritchie Department of Materials, Science and Engineering, University of California at Berkeley, Berkeley, CA, 94720, USA Department of Chemistry, University of California at Berkeley, Berkeley, CA, 94720, USA Xingchen Ye & Matthew R. Jones Department of Mechanical Engineering, University of California at Berkeley, Berkeley, CA, 94720, USA Liwei Lin & Robert O. Ritchie Department of Civil and Environmental Engineering, University of California at Berkeley, Berkeley, CA, 94720, USA Sanjay Govindjee Liwei Lin Robert O. Ritchie S.N.R. and R.O.R. conceived the project; S.N.R. performed the experiments; S.G. conducted analytical calculations; R.O.R., S.G., and L.L. supervised the research; and X.Y. and M.R.J. assisted in data analysis. Correspondence to Sanjay Govindjee or Robert O. Ritchie. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Raja, S.N., Ye, X., Jones, M.R. et al. Microscopic mechanisms of deformation transfer in high dynamic range branched nanoparticle deformation sensors. Nat Commun 9, 1155 (2018). https://doi.org/10.1038/s41467-018-03396-5
\begin{document} \newcommand{ $\Box $}{ $\Box $} \newtheorem{De}{Definition}[section] \newtheorem{Th}[De]{Theorem} \newtheorem{Pro}[De]{Proposition} \newtheorem{Le}[De]{Lemma} \newtheorem{Co}[De]{Corollary} \newtheorem{Rem}[De]{Remark} \newtheorem{Ex}[De]{Example} \newtheorem{Exo}[De]{Exercises} \newcommand{\otimes}{\otimes} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\oplus}{\oplus} \newcommand{\underline n}{\underline n} \newcommand{{\frak S}}{{\frak S}} \newcommand{\frak F}{\frak F} \newcommand{\qu}{\frak Q} \newcommand{\ga}{\frak g} \newcommand{\lambda}{\lambda} \newcommand{\frak Y}{\frak Y} \newcommand{\frak T}{\frak T} \newcommand{{\sf Coker}}{{\sf Coker}} \newcommand{{\sf Hom}}{{\sf Hom}} \newcommand{{\sf Im}}{{\sf Im}} \newcommand{{\sf Ext}}{{\sf Ext}} \newcommand{{\sf H_{AWB}}}{{\sf H_{AWB}}} \newcommand{{\sf Hoch}}{{\sf Hoch}} \newcommand{{\rm AWB}^!}{{\rm AWB}^!} \newcommand{\ele}{\cal L} \newcommand{\as}{\cal A} \newcommand{\ka}{\cal K}\newcommand{\eme}{\cal M} \newcommand{\pe}{\cal P} \newcommand{\par \noindent}{\par \noindent} \newcommand{\par \noindent}{\par \noindent} \centerline {\Large {\bf On the universal $\alpha$-central extension of the}} \centerline {\Large {\bf semi-direct product of Hom-Leibniz algebras}} \ \centerline {\bf J. M. Casas$^{(1)}$ and N. Pacheco Rego$^{(2)}$} \centerline{$^{(1)}$Dpto. Matem\'atica Aplicada I, Univ. de Vigo, 36005 Pontevedra, Spain} \centerline{e-mail address: [email protected]} \centerline{$^{(2)}$IPCA, Dpto. de Ciências, Campus do IPCA, Lugar do Aldão} \centerline{4750-810 Vila Frescainha, S. Martinho, Barcelos, Portugal} \centerline{e-mail address: [email protected]} \par {\bf Abstract} We introduce Hom-actions and the semi-direct product of Hom-Leibniz algebras, and we establish the equivalence between split extensions and the semi-direct product extension.
We analyze the functorial properties of the universal ($\alpha$)-central extensions of ($\alpha$)-perfect Hom-Leibniz algebras. We establish under what conditions an automorphism or a derivation can be lifted to an $\alpha$-cover and we analyze the universal $\alpha$-central extension of the semi-direct product of two $\alpha$-perfect Hom-Leibniz algebras. {\it Key words:} universal ($\alpha$)-central extension, Hom-action, semi-direct product, derivation. {\it A. M. S. Subject Class. (2010):} 17A32, 16E40, 17A30 \section{Introduction} Hom-Lie algebras were introduced in \cite{HLS} as Lie algebras whose Jacobi identity is twisted by means of a map. Such twisted structures arise in various applications, for instance in models of quantum phenomena or in the analysis of complex systems and processes exhibiting complete or partial scaling invariance. Since that introductory paper, the investigation of several kinds of Hom-structures has been in progress (for instance, see \cite{AM, AMM, AMS,AMS1, Ma, MS, Yau4, Yu} and references given therein). Naturally, the non-skew-symmetric version of Hom-Lie algebras, the so-called Hom-Leibniz algebras, was considered as well (see \cite{AMM, CIP, Cheng, Is, MS2, MS, Yau1}). A Hom-Leibniz algebra is a triple $(L,[-,-],\alpha_L)$ consisting of a $\mathbb{K}$-vector space $L$, a bilinear map $[-,-] : L \times L \to L$ and a homomorphism of $\mathbb{K}$-vector spaces $\alpha_L : L \to L$ satisfying the Hom-Leibniz identity: $$[\alpha_L(x),[y,z]]=[[x,y], \alpha_L(z)]-[[x,z], \alpha_L(y)]$$ for all $x, y, z \in L$. When $\alpha_L=Id$, the definition of Leibniz algebra \cite{Lo} is recovered. If the bracket is skew-symmetric, then we recover the definition of Hom-Lie algebra \cite{HLS}. The goal of the present paper is to continue the investigation of universal ($\alpha$)-central extensions of ($\alpha$)-perfect Hom-Leibniz algebras initiated in \cite{CIP}.
Concretely, we extend the results on the universal central extension of the semi-direct product of Leibniz algebras in \cite{CC} to the framework of Hom-Leibniz algebras. To do so, we organize the paper as follows: an initial section recalls the background material on Hom-Leibniz algebras. We introduce the concepts of Hom-action and semi-direct product and we prove a new result (Lemma \ref{ext rota}) that establishes the equivalence between split extensions and the semi-direct product extension. Section 3 is devoted to analyzing the functorial properties of the universal ($\alpha$)-central extensions of ($\alpha$)-perfect Hom-Leibniz algebras. In Section 4 we establish under what conditions an automorphism or a derivation can be lifted to an $\alpha$-cover (a central extension $f:(L', \alpha_{L'}) \to (L, \alpha_L)$ where $(L', \alpha_{L'})$ is $\alpha$-perfect, i.e. $L' = [\alpha_{L'}(L'), \alpha_{L'}(L')]$). The final section analyzes the relationship between the universal $\alpha$-central extension of the semi-direct product of two $\alpha$-perfect Hom-Leibniz algebras, one of which Hom-acts on the other, and the semi-direct product of the universal $\alpha$-central extensions of both of them. \section{Preliminaries on Hom-Leibniz algebras} In this section we introduce the necessary material on Hom-Leibniz algebras which will be used in subsequent sections. \begin{De}\label{HomLeib} \cite{MS} A Hom-Leibniz algebra is a triple $(L,[-,-],\alpha_L)$ consisting of a $\mathbb{K}$-vector space $L$, a bilinear map $[-,-] : L \times L \to L$ and a $\mathbb{K}$-linear map $\alpha_L : L \to L$ satisfying: \begin{equation} \label{def} [\alpha_L(x),[y,z]]=[[x,y],\alpha_L(z)]-[[x,z],\alpha_L(y)] \ \ ({\rm Hom-Leibniz\ identity}) \end{equation} for all $x, y, z \in L$.
A Hom-Leibniz algebra $(L,[-,-],\alpha_L)$ is said to be multiplicative \cite{Yau1} if the $\mathbb{K}$-linear map $\alpha_L$ preserves the bracket, that is, if $\alpha_L [x,y] = [\alpha_L(x),\alpha_L(y)]$, for all $x, y \in L$. \end{De} \begin{Ex}\label{ejemplo 1} \ \begin{enumerate} \item[a)] Taking $\alpha = Id$ in Definition \ref{HomLeib} we obtain the definition of Leibniz algebra \cite{Lo}. Hence Hom-Leibniz algebras include Leibniz algebras as a full subcategory, thereby motivating the name ``Hom-Leibniz algebras'' as a deformation of Leibniz algebras twisted by a homomorphism. Moreover, it is a multiplicative Hom-Leibniz algebra. \item[b)] Hom-Lie algebras \cite{HLS} are Hom-Leibniz algebras whose bracket satisfies the condition $[x,x]=0$, for all $x$. So Hom-Lie algebras can be considered as a full subcategory of the category of Hom-Leibniz algebras. For any multiplicative Hom-Leibniz algebra $(L,[-,-],\alpha_L)$ there is an associated Hom-Lie algebra $(L_{\rm Lie},[-,-],\widetilde{\alpha})$, where $L_{\rm Lie} = L/L^{\rm ann}$, the bracket is the canonical bracket induced on the quotient and $\widetilde{\alpha}$ is the homomorphism naturally induced by $\alpha$. Here $L^{\rm ann} = \langle \{[x,x] : x \in L \} \rangle$. \item[c)] Let $(D,\dashv,\vdash, \alpha_D)$ be a Hom-dialgebra. Then $(D,\dashv,\vdash, \alpha_D)$ is a Hom-Leibniz algebra with respect to the bracket $[x,y]=x \dashv y - y \vdash x$, for all $x,y \in D$ \cite{Yau}. \item[d)] Let $(L,[-,-])$ be a Leibniz algebra and $\alpha_L:L \to L$ a Leibniz algebra endomorphism. Define $[-,-]_{\alpha} : L \otimes L \to L$ by $[x,y]_{\alpha} = [\alpha(x),\alpha(y)]$, for all $x, y \in L$. Then $(L,[-,-]_{\alpha}, \alpha_L)$ is a multiplicative Hom-Leibniz algebra. \item[e)] Abelian or commutative Hom-Leibniz algebras are $\mathbb{K}$-vector spaces $L$ with trivial bracket and any linear map $\alpha_L :L \to L$.
\end{enumerate} \end{Ex} \begin{De}\label{homo} A homomorphism of Hom-Leibniz algebras $f:(L,[-,-],\alpha_L) \to (L',[-,-]',\alpha_{L'})$ is a $\mathbb{K}$-linear map $f : L \to L'$ such that \begin{enumerate} \item[a)] $f([x,y]) =[f(x),f(y)]'$ \item [b)] $f \cdot \alpha_L(x) = \alpha_{L'} \cdot f(x)$ \end{enumerate} for all $x, y \in L$. A homomorphism of multiplicative Hom-Leibniz algebras is a homomorphism of the underlying Hom-Leibniz algebras. \end{De} In the sequel, by a Hom-Leibniz algebra we always mean a multiplicative Hom-Leibniz algebra, and we shall use the shortened notation $(L,\alpha_L)$ when there is no confusion about the bracket operation. \begin{De} Let $(L,[-,-],\alpha_L)$ be a Hom-Leibniz algebra. A Hom-Leibniz subalgebra $(H, \alpha_H)$ is a linear subspace $H$ of $L$ which is closed under the bracket and invariant under $\alpha_L$, that is, \begin{enumerate} \item [a)] $[x,y] \in H,$ for all $x, y \in H$, \item [b)] $\alpha_L(x) \in H$, for all $x \in H$ ($\alpha_H = \alpha_{L \mid}$). \end{enumerate} A Hom-Leibniz subalgebra $(H, \alpha_H)$ of $(L, \alpha_L)$ is said to be a two-sided Hom-ideal if $[x,y], [y,x] \in H$, for all $x \in H, y \in L$. If $(H,\alpha_H)$ is a two-sided Hom-ideal of $(L,\alpha_L)$, then the quotient $L/H$ naturally inherits a structure of Hom-Leibniz algebra with respect to the endomorphism $\widetilde{\alpha} : L/H \to L/H, \widetilde{\alpha}(\overline{l}) =\overline{\alpha_L(l)}$, which is said to be the quotient Hom-Leibniz algebra. \end{De} So we have defined the category ${\sf Hom-Leib}$ (respectively, ${\sf Hom-Leib_{\rm mult}})$ whose objects are Hom-Leibniz (respectively, multiplicative Hom-Leibniz) algebras and whose morphisms are the homomorphisms of Hom-Leibniz (respectively, multiplicative Hom-Leibniz) algebras. There is an obvious inclusion functor $inc : {\sf Hom-Leib_{\rm mult}} \to {\sf Hom-Leib}$.
This functor has as left adjoint the multiplicative functor $(-)_{\rm mult} : {\sf Hom-Leib} \to {\sf Hom-Leib_{\rm mult}}$ which assigns to a Hom-Leibniz algebra $(L,[-,-],\alpha_L)$ the multiplicative Hom-Leibniz algebra $(L/I,[-,-],\tilde{\alpha})$, where $I$ is the two-sided ideal of $L$ spanned by the elements $\alpha_L[x,y]-[\alpha_L(x),\alpha_L(y)]$, for all $x, y \in L$. \begin{De} Let $(H, \alpha_H)$ and $(K, \alpha_K)$ be two-sided Hom-ideals of a Hom-Leibniz algebra $(L,[-,-],\alpha_L)$. The commutator of $(H, \alpha_H)$ and $(K, \alpha_K)$, denoted by $([H,K],\alpha_{[H,K]})$, is the Hom-Leibniz subalgebra of $(L,\alpha_L)$ spanned by the brackets $[h,k], h \in H, k \in K$. \end{De} Obviously, $[H,K] \subseteq H \cap K$ and $[K,H] \subseteq H \cap K$. When $H = K =L$, we obtain the definition of derived Hom-Leibniz subalgebra. Let us observe that, in general, $([H,K],\alpha_{[H,K]})$ is not a Hom-ideal, but if $H, K \subseteq \alpha_L(L)$, then $([H,K],\alpha_{[H,K]})$ is a two-sided ideal of $(\alpha_L(L), \alpha_{L \mid})$. When $\alpha = Id$, the classical notions are recovered. \begin{De} Let $(L,[-,-],\alpha_L)$ be a Hom-Leibniz algebra. The subspace $Z(L) = \{ x \in L \mid [x, y] =0 = [y,x], \text{for\ all}\ y \in L \}$ is said to be the center of $(L,[-,-],\alpha_L)$. When $\alpha_L : L \to L$ is a surjective homomorphism, then $Z(L)$ is a Hom-ideal of $L$. \end{De} \subsection{Hom-Leibniz actions} \begin{De} \label{Hom accion} Let $\left( L ,\alpha_{L}\right)$ and $\left( M,\alpha_{M}\right)$ be Hom-Leibniz algebras.
A (right) Hom-action of $\left( L, \alpha_{L}\right)$ over $\left( M, \alpha_{M}\right)$ consists of two bilinear maps, $\lambda:L\otimes M\to M$, $\lambda\left( l\otimes m\right) =l \centerdot m$, and $\rho:M\otimes L\to M$, $\rho \left( m\otimes l\right) =m \centerdot l$, satisfying the following identities: \begin{enumerate} \item[a)] $\alpha_{M}\left( m\right) \centerdot \left[ x,y\right] =\left( m \centerdot x\right) \centerdot \alpha_{L}\left( y\right) -\left( m \centerdot y\right) \centerdot \alpha_{L}\left( x\right);$ \item[b)] $\alpha_{L}\left( x\right) \centerdot \left( m \centerdot y\right) =\left( x \centerdot m\right) \centerdot \alpha_{L}\left( y\right) -\left[ x,y\right] \centerdot \alpha _{M}\left( m\right);$ \item[c)] $\alpha_{L}\left( x\right) \centerdot \left( y \centerdot m\right) =\left[ x,y\right] \centerdot \alpha_{M}\left( m\right) -\left( x \centerdot m\right) \centerdot \alpha _{L}\left( y\right);$ \item[d)] $\alpha_{L}\left( x\right) \centerdot \left[ m,m^{\prime}\right] =\left[ x \centerdot m,\alpha_{M}\left( m^{\prime}\right) \right] -\left[ x \centerdot m^{\prime},\alpha_{M}\left( m\right) \right];$ \item[e)] $\left[ \alpha_{M}\left( m\right) , m^{\prime} \centerdot x \right] =\left[ m,m^{\prime}\right] \centerdot \alpha_{L}\left( x\right) -\left[ m \centerdot x,\alpha_{M}\left( m^{\prime}\right) \right];$ \item[f)] $\left[ \alpha_{M}\left( m\right) , x \centerdot m^{\prime} \right] =\left[ m \centerdot x,\alpha_{M}\left( m^{\prime}\right) \right] -\left[ m,m^{\prime}\right] \centerdot \alpha_{L}\left( x\right);$ \item[g)] $\alpha_{M}\left( x \centerdot m\right) =\alpha_{L}\left( x\right) \centerdot \alpha_{M}\left( m\right);$ \item[h)] $\alpha_{M}\left( m \centerdot x\right) =\alpha_{M}\left( m\right) \centerdot \alpha_{L}\left( x\right);$ \end{enumerate} for all $x,y\in L$ and $m,m^{\prime}\in M$. 
When $(M, \alpha_M)$ is an abelian Hom-Leibniz algebra, that is, the bracket on $M$ is trivial, then the Hom-action is called a Hom-representation. \end{De} \begin{Ex}\label{Hom accion Leib} \ \begin{enumerate} \item[a)] Let $M$ be a representation of a Leibniz algebra $L$ \cite{LP}. Then $(M,Id_M)$ is a Hom-representation of the Hom-Leibniz algebra $(L,Id_L)$. \item[b)] Let $\left( K, \alpha_{K}\right)$ be a Hom-Leibniz subalgebra of a Hom-Leibniz algebra $\left( L, \alpha_{L}\right)$ (even $\left( K, \alpha_{K}\right) =\left( L, \alpha_{L}\right) $) and $\left( H, \alpha_{H}\right)$ a two-sided Hom-ideal of $\left( L, \alpha_{L}\right)$. There exists a Hom-action of $\left( K,\alpha_{K}\right)$ over $\left( H, \alpha_{H}\right)$ given by the bracket in $\left( L,\alpha_{L}\right).$ \item[c)] An abelian sequence of Hom-Leibniz algebras is an exact sequence of Hom-Leibniz algebras $0 \to (M,\alpha_M) \stackrel{i}\to (K,\alpha_K) \stackrel{\pi}\to (L,\alpha_L) \to 0$, where $(M,\alpha_M)$ is an abelian Hom-Leibniz algebra, $\alpha_K \cdot i = i \cdot \alpha_M$ and $\pi \cdot \alpha_K = \alpha_L \cdot \pi$. An abelian sequence induces a Hom-representation structure of $(L,\alpha_L)$ over $(M,\alpha_M)$ by means of the actions given by $\lambda : L \otimes M \to M, \lambda(l,m)=[k,m]$, and $\rho : M \otimes L \to M, \rho(m,l)=[m,k]$, where $k \in K$ is any element such that $\pi(k)=l$. \end{enumerate} \end{Ex} \begin{De} \label{producto semidirecto} Let $\left( M,\alpha_{M}\right)$ and $\left( L,\alpha_{L}\right)$ be Hom-Leibniz algebras together with a Hom-action of $\left( L,\alpha_{L}\right)$ over $\left( M,\alpha_{M}\right)$.
Its semi-direct product $\left( M\rtimes L,\widetilde{\alpha}\right)$ is the Hom-Leibniz algebra with underlying $\mathbb{K}$-vector space $M\oplus L$, endomorphism $\widetilde{\alpha}:M\rtimes L\to M\rtimes L$ given by $\widetilde{\alpha} \left( m,l\right) = \left( \alpha_{M}\left(m\right) ,\alpha_{L}\left( l\right) \right)$ and bracket $$\left[ \left( m_{1},l_{1}\right) ,\left( m_{2},l_{2}\right) \right] =\left( [m_1,m_2] + \alpha_{L}\left( l_{1}\right) \centerdot m_{2}+m_{1} \centerdot \alpha_{L}\left( l_{2}\right) ,\left[ l_{1},l_{2}\right] \right).$$ \end{De} Let $\left( M,\alpha_{M}\right)$ and $\left( L,\alpha_{L}\right)$ be Hom-Leibniz algebras with a Hom-action of $\left( L,\alpha_{L}\right)$ over $\left( M,\alpha_{M}\right)$, then we can construct the sequence \begin{equation} \label{extension semidirectoLeib} 0\to \left( M,\alpha_{M}\right) \overset{i}{\to}\left( M\rtimes L,\widetilde{\alpha}\right) \overset{\pi}{\to}\left( L,\alpha_{L}\right) \to 0 \end{equation} where $i: M \to M\rtimes L, i(m) = \left( m,0\right),$ and $\pi: M\rtimes L \to L, \pi \left( m,l\right) = l$. Moreover, this sequence splits by $\sigma: L \to M\rtimes L, \sigma(l) = \left( 0,l\right),$ that is, $\sigma$ satisfies $\pi \cdot \sigma=Id_{L}$ and $\widetilde{\alpha} \cdot \sigma =\sigma \cdot \alpha_{L}$. \begin{De} Let $(M, \alpha_M)$ and $(L, \alpha_L)$ be Hom-Leibniz algebras such that there is a Hom-action of $(L, \alpha_L)$ over $(M, \alpha_M)$. 
Two extensions of $(L, \alpha_L)$ by $(M, \alpha_M)$, $0 \to (M, \alpha_M) \stackrel{i} \to (K,\alpha_K) \stackrel{\pi} \to (L, \alpha_L) \to 0$ and $0 \to (M, \alpha_M) \stackrel{i'} \to (K',\alpha_{K'}) \stackrel{\pi'} \to (L, \alpha_L) \to 0$, are said to be equivalent if there exists a homomorphism of Hom-Leibniz algebras $\varphi : (K,\alpha_K) \to (K',\alpha_{K'})$ making the following diagram commutative: \[ \xymatrix{ 0 \ar[r] & (M, \alpha_M) \ar@{=}[d] \ar[r]^{i} & (K,\alpha_K) \ar[d]_{\varphi} \ar[r]^{\pi} & (L, \alpha_L) \ar@{=}[d] \ar[r] & 0 \\ 0 \ar[r] & (M, \alpha_M) \ar[r]^{i'} & (K',\alpha_{K'}) \ar[r]^{\pi'} & (L, \alpha_L) \ar[r] & 0 } \] \end{De} \begin{Le} \label{ext rota} Let $\left( C,Id_{C}\right)$ and $\left( A,\alpha_{A}\right)$ be Hom-Leibniz algebras together with a Hom-action of $\left( C,Id_{C}\right)$ over $\left(A,\alpha_{A}\right)$. A sequence of Hom-Leibniz algebras $0\to \left(A,\alpha_{A}\right) \overset{i}{\to}\left( B,\alpha_{B}\right) \overset{\pi}{\to}\left( C,Id_{C}\right) \to 0$ is split if and only if it is equivalent to the semi-direct sequence $0\to \left( A,\alpha_{A}\right) \overset{j}{\to}\left( A\rtimes C,\widetilde{\alpha}\right) \overset{p}{\to}\left( C,Id_{C}\right) \to 0.$ \end{Le} {\it Proof.} If $0\to \left(A,\alpha_{A}\right) \overset{i}{\to}\left( B,\alpha_{B}\right) \overset{\pi}{\to}\left( C,Id_{C}\right) \to 0$ is split by $s: \left( C,Id_{C}\right) \to \left( B,\alpha_{B}\right)$, then the Hom-action of $\left(C,Id_{C}\right)$ over $\left( A,\alpha_{A}\right)$ is given by $$c \centerdot a=\left[ s(c),i(a)\right]; \quad a \centerdot c=\left[i(a),s(c)\right]$$ With this Hom-action of $\left( C, Id_{C}\right)$ over $\left( A,\alpha_{A}\right)$ we can construct the following split extension: \[\xymatrix{ 0 \ar[r] & \left( A,\alpha_{A}\right) \ar[r]^{j} & \left( A\rtimes C,\widetilde{\alpha}\right) \ar@<0.5ex>[r]^p & \left( C,Id_{C}\right) \ar[r] \ar@<0.5ex>[l]^{\sigma} &0} \] where $j:A\to A\rtimes C$, 
$j(a)=(a,0)$, $p:A\rtimes C \to C,$ $p(a,c)=c$ and $\sigma:C\to A\rtimes C$, $\sigma(c)=(0,c)$. Moreover the Hom-action of $\left( C,Id_{C}\right)$ over $\left( A,\alpha_{A}\right)$ induced by this extension coincides with the initial one: $$c\star a=\left[ \sigma\left( c\right) ,j(a)\right] =\left[ \left( 0,c\right) ,\left( a,0\right) \right] =\left( \left[ 0,a\right] +Id_{C}\left( c\right) \centerdot a+0 \centerdot 0,\left[ c,0\right] \right) =( c \centerdot a, 0) \equiv c \centerdot a$$ Finally, both extensions are equivalent since the homomorphism of Hom-Leibniz algebras $\varphi:\left( A\rtimes C,\widetilde{\alpha}\right) \to \left( B,\alpha_{B}\right)$, $\varphi\left(a,c\right) =i(a)+s(c)$, makes the following diagram commutative: \begin{equation} \label{equiv} \xymatrix{ 0 \ar[r] & \left( A,\alpha_{A}\right) \ar[r]^{j} \ar@{=}[d] & \left( A\rtimes C,\widetilde{\alpha}\right) \ar@<0.5ex>[r]^{p} \ar@{-->}[d]^{\varphi} & \left( C,Id_{C}\right) \ar[r] \ar@{=}[d] \ar@<0.5ex>[l]^{\sigma} & 0\\ 0 \ar[r] & \left( A,\alpha_{A}\right) \ar[r]^{i} & \left( B,\alpha_{B}\right) \ar@<0.5ex>[r]^{\pi} & \left( C,Id_{C}\right) \ar@<0.5ex>[l]^s \ar[r]& 0 } \end{equation} For the converse, if both extensions are equivalent, i.e. there exists a homomorphism of Hom-Leibniz algebras $\varphi:\left( A\rtimes C,\widetilde{\alpha}\right) \to \left( B,\alpha_{B}\right)$ making diagram (\ref{equiv}) commutative, then $s:\left( C,Id_{C}\right) \to \left( B,\alpha_{B}\right)$, given by $s(c)=\varphi\left(0,c\right)$, is a homomorphism that splits the extension. $\Box $ \begin{De} Let $\left( M,\alpha_{M}\right)$ be a Hom-representation of a Hom-Leibniz algebra $\left( L, \alpha_{L}\right)$.
A derivation of $\left( L, \alpha_{L}\right)$ over $\left( M,\alpha_{M}\right)$ is a $\mathbb{K}$-linear map $d:L\to M$ satisfying: \begin{enumerate} \item[a)] $d\left[ l_{1},l_{2}\right] =\alpha_{L} \left( l_{1}\right) \centerdot d\left( l_{2}\right) + d\left( l_{1}\right) \centerdot \alpha _{L}\left( l_{2}\right) $ \item[b)] $d \cdot \alpha_{L}=\alpha_{M} \cdot d$ \end{enumerate} for all $l_1, l_2 \in L$. \end{De} \begin{Ex}\ \begin{enumerate} \item[a)] The $\mathbb{K}$-linear map $\theta : M \rtimes L\to M, \theta(m,l)=m,$ is a derivation, where $\left(M,\alpha_{M}\right)$ is a Hom-representation of $\left( M \rtimes L,\widetilde{\alpha}\right)$ via $\pi$. \item[b)] When $(M,\alpha_M)=(L,\alpha_L)$ is considered as a representation following Example \ref{Hom accion Leib} {\it b)}, then a derivation consists of a $\mathbb{K}$-linear map $d:L \to L$ such that $d[l_1,l_2]=[\alpha_L(l_1), d(l_2)]+[d(l_1),\alpha_L(l_2)]$ and $d \cdot \alpha_L = \alpha_L \cdot d$. \end{enumerate} \end{Ex} \begin{Pro} Let $\left( M,\alpha_{M}\right)$ be a Hom-representation of a Hom-Leibniz algebra $\left( L,\alpha_{L}\right)$. 
For every homomorphism of Hom-Leibniz algebras $f:\left( X,\alpha_{X}\right) \to \left( L,\alpha_{L}\right)$ and every $f$-derivation $d:\left( X,\alpha_{X}\right) \to \left( M,\alpha_{M}\right)$ there exists a unique homomorphism of Hom-Leibniz algebras $h:\left( X,\alpha_{X}\right) \to \left( M\rtimes L,\widetilde{\alpha}\right)$, such that the following diagram is commutative \[ \xymatrix{ & \left( X,\alpha_{X} \right) \ar[dr]^{f} \ar[d]^{h} \ar[ld]^{d} & \\ \left( M,\alpha_{M}\right) \ar[r]^i & \ar@/^0.3pc/[l]^{\theta} \left( M\rtimes L,\widetilde{\alpha}\right) \ar[r]^{\pi} & \left( L,\alpha_{L}\right) }\] Conversely, every homomorphism of Hom-Leibniz algebras $h:\left( X,\alpha_{X}\right) \to \left( M\rtimes L,\widetilde{\alpha }\right)$ determines a homomorphism of Hom-Leibniz algebras $f=\pi \cdot h: \left(X,\alpha_{X}\right)$ $\to \left( L,\alpha_{L}\right)$ and an $f$-derivation $d=\theta \cdot h:\left( X,\alpha_{X}\right) \to \left( M,\alpha_{M}\right)$. \end{Pro} {\it Proof.} The homomorphism $h: X \to M\rtimes L, h(x) = \left( d\left( x\right) ,f\left( x\right) \right)$ satisfies all the conditions. $\Box $ \begin{Co} The set of all derivations from $\left( L,\alpha_{L}\right)$ to $\left( M,\alpha_{M}\right)$ is in one-to-one correspondence with the set of Hom-Leibniz algebra homomorphisms $h:\left( L,\alpha_{L}\right) \to \left( M\rtimes L,\widetilde{\alpha}\right)$ such that $\pi \cdot h=Id_L$. \end{Co} \section{Functorial properties} In this section we analyze functorial properties of the universal ($\alpha$)-central extensions of ($\alpha$)-perfect Hom-Leibniz algebras. For detailed motivation, constructions and characterizations we refer to \cite{CIP}. \begin{De} \label{alfacentral} A short exact sequence of Hom-Leibniz algebras $(K) : 0 \to (M, \alpha_M) \stackrel{i} \to (K,\alpha_K) \stackrel{\pi} \to (L, \alpha_L) \to 0$ is said to be central if $[M, K] = 0 = [K, M]$. Equivalently, $M \subseteq Z(K)$.
We say that $(K)$ is $\alpha$-central if $[\alpha_M(M), K] = 0 = [K,\alpha_M(M)]$. Equivalently, $\alpha_M(M) \subseteq Z(K)$. A central extension $(K) : 0 \to (M, \alpha_M) \stackrel{i} \to (K,\alpha_K) \stackrel{\pi} \to (L, \alpha_L) \to 0$ is said to be universal if for every central extension $(K') : 0 \to (M', \alpha_{M'}) \stackrel{i'} \to (K',\alpha_{K'}) \stackrel{\pi'} \to (L, \alpha_L) \to 0$ there exists a unique homomorphism of Hom-Leibniz algebras $h : (K,\alpha_K) \to (K',\alpha_{K'})$ such that $\pi'\cdot h = \pi$. We say that the central extension $(K) : 0 \to (M, \alpha_M) \stackrel{i} \to (K,\alpha_K) \stackrel{\pi} \to (L, \alpha_L) \to 0$ is universal $\alpha$-central if for every $\alpha$-central extension $(K') : 0 \to (M', \alpha_{M'}) \stackrel{i'} \to (K',\alpha_{K'}) \stackrel{\pi'} \to (L, \alpha_L) \to 0$ there exists a unique homomorphism of Hom-Leibniz algebras $h : (K,\alpha_K) \to (K',\alpha_{K'})$ such that $\pi'\cdot h = \pi$. \end{De} \begin{Rem} \label{rem} Obviously, every universal $\alpha$-central extension is a universal central extension. Note that in the case $\alpha_M = Id_M$, both notions coincide. \end{Rem} A perfect ($L = [L,L]$) Hom-Leibniz algebra $(L, \alpha_L)$ admits a universal central extension, which is $(\frak{uce}(L), \widetilde{\alpha})$, where $\frak{uce}(L)=\frac{L \otimes L}{I_L}$ and $I_L$ is the subspace of $L \otimes L$ spanned by the elements of the form $-[x_1,x_2] \otimes \alpha_L(x_3) + [x_1,x_3] \otimes \alpha_L(x_2) + \alpha_L(x_1) \otimes [x_2,x_3], x_1, x_2, x_3 \in L$; every class $x_1 \otimes x_2 + I_L$ is denoted by $\{x_1,x_2\}$, for all $x_1, x_2 \in L$. $\frak{uce}(L)$ is endowed with a structure of Hom-Leibniz algebra with respect to the bracket $[\{x_1,x_2\},\{y_1,y_2\}]=\{[x_1,x_2],[y_1,y_2]\}$ and the endomorphism $\widetilde{\alpha} : \frak{uce}(L) \to \frak{uce}(L)$ defined by $\widetilde{\alpha}(\{x_1,x_2\}) = \{\alpha_L(x_1), \alpha_L(x_2) \}$.
By construction, $u_L : (\frak{uce}(L), \widetilde{\alpha}) \to (L,\alpha_L)$, given by $u_L\{x_1,x_2\}=[x_1,x_2]$, gives rise to the universal central extension $0 \to (HL_2^{\alpha}(L), \widetilde{\alpha}_{\mid}) \to (\frak{uce}(L), \widetilde{\alpha}) \stackrel{u_L}\to (L,\alpha_L) \to 0$. A Hom-Leibniz algebra $\left( L, \alpha_{L}\right)$ is said to be $\alpha$-perfect if $L = [\alpha_L(L), \alpha_L(L)]$. Theorem 5.5 in \cite{CIP} shows that a Hom-Leibniz algebra $\left( L, \alpha_{L}\right)$ is $\alpha$-perfect if and only if it admits a universal $\alpha$-central extension, which is $(\frak{uce}^{\rm Leib}_{\alpha}(L), \overline{\alpha})$, where $\frak{uce}^{\rm Leib}_{\alpha}(L)= \frac{\alpha_L(L) \otimes \alpha_L(L)}{I_L}$ and $I_L$ is the vector subspace spanned by the elements of the form $-[x_1,x_2] \otimes \alpha_L(x_3) + [x_1,x_3] \otimes \alpha_L(x_2) + \alpha_L(x_1) \otimes [x_2,x_3]$, for all $x_1, x_2, x_3 \in L$. We denote by $\{\alpha_L(x_1), \alpha_L(x_2)\}$ the equivalence class of $\alpha_L(x_1) \otimes \alpha_L(x_2) + I_L$. $\frak{uce}^{\rm Leib}_{\alpha}(L)$ is endowed with a structure of Hom-Leibniz algebra with respect to the bracket $[\{\alpha_L(x_1),\alpha_L(x_2)\}, \{\alpha_L(y_1),\alpha_L(y_2)\}] = \{[\alpha_L(x_1),\alpha_L(x_2)], [\alpha_L(y_1),\alpha_L(y_2)]\}$ and the endomorphism $\overline{\alpha} : \frak{uce}^{\rm Leib}_{\alpha}(L) \to \frak{uce}^{\rm Leib}_{\alpha}(L)$ defined by $\overline{\alpha}(\{\alpha_L(x_1),\alpha_L(x_2)\})$ $= \{\alpha_L^2(x_1),\alpha_L^2(x_2)\}$. The homomorphism of Hom-Leibniz algebras $U_{\alpha} : \frak{uce}^{\rm Leib}_{\alpha}(L)$ $\to L$ given by $U_{\alpha}(\{\alpha_L(x_1), \alpha_L(x_2)\})= [\alpha_L(x_1), \alpha_L(x_2)]$ gives rise to the universal $\alpha$-central extension $0 \to (Ker (U_{\alpha}), \overline{\alpha}_{\mid}) \to (\frak{uce}_{\alpha}^{\rm Leib}(L), \overline{\alpha}) \stackrel{U_{\alpha}} \to (L, \alpha_L) \to 0$. See \cite{CIP} for details. 
\begin{De} A perfect Hom-Leibniz algebra $(L, \alpha_L)$ is said to be centrally closed if its universal central extension is $$0\to 0\to\left( L,\alpha_{L}\right) \overset{\sim}{\to}\left( L,\alpha_{L}\right) \to 0$$ i.e. $HL_{2}^{\alpha}\left( L\right) =0$ and $\left( \frak{uce}\left( L\right) ,\widetilde{\alpha}\right) \cong\left( L,\alpha_{L}\right)$. A Hom-Leibniz algebra $(L, \alpha_L)$ is said to be superperfect if $HL_{1}^{\alpha}\left( L\right) = HL_{2}^{\alpha }\left( L\right) =0.$ \end{De} \begin{Co} \label{centralmente cerrada} If $0\to ( Ker(U_{\alpha}), \alpha_{K_{\mid}}) {\to}\left( K,\alpha_{K}\right) \stackrel{U_{\alpha}}\to\left( L,\alpha_{L}\right) \to 0$ is the universal $\alpha$-central extension of an $\alpha$-perfect Hom-Leibniz algebra $\left( L,\alpha_{L}\right)$, then $\left( K,\alpha_{K}\right)$ is centrally closed. \end{Co} {\it Proof.} By Corollary 4.12 {\it a)} in \cite{CIP}, $HL_{1}^{\alpha}\left( K\right) = HL_{2}^{\alpha}\left( K\right)=0.$ Since $HL_{1}^{\alpha}\left( K\right) =0$ if and only if $(K,\alpha_K)$ is perfect, by Theorem 4.11 {\it c)} in \cite{CIP} $(K,\alpha_K)$ admits a universal central extension $0\to ( HL_2^{\alpha}(K), \widetilde{\alpha}_{{\mid}}) {\to}\left( \frak{uce}(K),\widetilde{\alpha} \right) \stackrel{u}\to\left( K,\alpha_{K}\right) \to 0$. Since $HL_{2}^{\alpha}\left( K\right)=0$, $u$ is an isomorphism. $\Box $ \begin{Le} Let $\pi:\left( K,\alpha_{K}\right) \twoheadrightarrow \left( L,\alpha_{L}\right)$ be a central extension, where $\left( L,\alpha_{L}\right)$ is a perfect Hom-Leibniz algebra. Then the following statements hold: \begin{enumerate} \item[{\it a)}] $K=\left[ K,K\right] + Ker (\pi)$ and $\overline{\pi}:\left( \left[ K,K\right],\alpha_{_K{\mid}}\right) \twoheadrightarrow\left( L,\alpha_{L}\right)$ is an epimorphism, where $\left( \left[ K,K\right],\alpha_{\left[K,K\right]}\right)$ is a perfect Hom-Leibniz algebra.
\item[{\it b)}] $\pi\left( Z(K)\right) \subseteq Z(L)$ and $\alpha_L(Z(L)) \subseteq \pi(Z(K))$. \end{enumerate} \end{Le} {\it Proof.} \noindent {\it a)} It suffices to consider the following commutative diagram: \[ \xymatrix{ \left( Ker (\pi) \cap \left[ K,K\right] ,\alpha_{Ker(\pi) \cap\left[ K,K\right] }\right)\ \ar@{>->}[r] \ar@{>->}[d]& \left( \left[ K,K\right] ,\alpha_{\left[K,K\right]}\right) \ar@{>>}[r]^{\overline{\pi}} \ar@{>->}[d]& \left(\left[ L,L\right],\alpha_{\left[ L,L\right] }\right) \ar@{=}[d] \\ \left( Ker (\pi),\alpha_{K\mid}\right)\ \ar@{>>}[d] \ar@{>->}[r] & \left( K,\alpha_{K}\right) \ar@{>>}[r]^{\pi} \ar@{>>}[d] & \left( L,\alpha_{L}\right) \ar@{>>}[d]\\ \ast \ \ar@{>->}[r] & \left( K/\left[ K,K\right] ,\overline{\alpha_{K}}\right) \ar@{>>}[r] & \left( L/\left[ L,L\right] ,\overline{\alpha_{L}}\right) } \] \noindent {\it b)}\ Direct checking; for instance, if $z \in Z(K)$ and $l = \pi(k) \in L$, then $[\pi(z),l] = \pi[z,k] = 0$ and $[l,\pi(z)] = \pi[k,z] = 0$, whence $\pi(Z(K)) \subseteq Z(L)$. $\Box $ \begin{De} A Hom-Leibniz algebra $(L, \alpha_L)$ is said to be simply connected if every central extension $\tau:\left( F,\alpha_{F}\right) \twoheadrightarrow \left(L,\alpha_{L}\right)$ splits uniquely as the product of Hom-Leibniz algebras $\left( F,\alpha_{F}\right) =\left( Ker\left( \tau\right),\alpha_{F\mid}\right) \times\left( L,\alpha_{L}\right)$. \end{De} \begin{Pro} For a perfect Hom-Leibniz algebra $\left( L,\alpha_{L}\right)$, the following statements are equivalent: \begin{enumerate} \item[a)] $\left( L,\alpha_{L}\right)$ is simply connected. \item[b)] $\left( L,\alpha_{L}\right)$ is centrally closed. \end{enumerate} If $u:\left( L,\alpha_{L}\right) \twoheadrightarrow\left( M,\alpha_{M}\right)$ is a central extension, then: \begin{enumerate} \item[c)] Statement {\it a)} (respectively, statement {\it b)}) implies that $u:\left( L,\alpha_{L}\right)\twoheadrightarrow\left( M,\alpha_{M}\right)$ is a universal central extension.
\item[d)] If in addition $u:\left( L,\alpha_{L}\right) \twoheadrightarrow\left( M,\alpha_{M}\right)$ is a universal $\alpha$-central extension, then statements {\it a)} and {\it b)} hold. \end{enumerate} \end{Pro} {\it Proof}. {\it a)} $\Rightarrow$ {\it b)} Let $0 \to \left( HL_{2}^{\alpha}\left( L\right), \widetilde{\alpha}_{\mid}\right) \to \left( \frak{uce}\left( L\right), \widetilde{\alpha}\right) \stackrel{u_{L}}\to \left( L,\alpha_{L}\right) \to 0$ be the universal central extension of $(L, \alpha_L)$; since $(L,\alpha_L)$ is simply connected, it splits. Consequently there exists an isomorphism $\frak{uce}\left( L\right) \cong L$ and $HL_{2}^{\alpha}\left( L\right) =0$. {\it b)} $\Rightarrow$ {\it a)} The universal central extension of $\left( L,\alpha_{L}\right)$ is $0\to 0\to \left( L,\alpha_{L}\right) \overset{\sim}{\to}\left( L,\alpha_{L}\right) \to 0$. Consequently every central extension splits uniquely thanks to the universal property. {\it c)} Let $u:\left( L,\alpha_{L}\right) \twoheadrightarrow\left( M,\alpha_{M}\right)$ be a central extension. By Theorem 4.11 {\it b)} in \cite{CIP}, it is universal if $\left( L,\alpha_{L}\right)$ is perfect and every central extension of $\left( L,\alpha_{L}\right)$ splits. Now $\left( L,\alpha_{L}\right)$ is perfect by hypothesis and, by statement {\it a)}, it is simply connected, which means that every central extension splits. {\it d)} If $u:\left( L,\alpha_{L}\right) \twoheadrightarrow\left( M,\alpha_{M}\right)$ is a universal $\alpha$-central extension, then by Theorem 4.1 {\it a)} in \cite{CIP} every central extension of $\left( L,\alpha_{L}\right)$ splits. Consequently $\left( L,\alpha_{L}\right)$ is simply connected or, equivalently, centrally closed. $\Box $ Now we study the functorial properties of universal central extensions. Consider a homomorphism of perfect Hom-Leibniz algebras $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \to \left( L,\alpha_{L}\right)$.
This homomorphism induces a $\mathbb{K}$-linear map $f\otimes f:L^{\prime}\otimes L^{\prime}\to L\otimes L$, given by $\left( f\otimes f\right) \left( x_{1}\otimes x_{2}\right) = f\left( x_{1}\right) \otimes f\left( x_{2}\right)$, that maps the submodule $I_{L'}$ into the submodule $I_L$; hence $f \otimes f$ induces a $\mathbb{K}$-linear map $\frak{uce}(f): \frak{uce}(L^{\prime})\to \frak{uce}(L)$, given by $\frak{uce}(f)\left\{x_{1},x_{2}\right\} =\left\{ f(x_{1}),f(x_{2})\right\}$, which is a homomorphism of Hom-Leibniz algebras as well. Moreover, the following diagram is commutative: \begin{equation} \label{diagrama uce} \vcenter{ \xymatrix{ HL_{2}^{\alpha}\left( L^{\prime}\right) \ar@{>->}[d] & HL_{2}^{\alpha}\left( L\right) \ar@{>->}[d] \\ ( \frak{uce}(L^{\prime}),\widetilde{\alpha^{\prime}}) \ar[r]^{\frak{uce}(f)} \ar@{>>}[d]_{u_{L'}} & ( \frak{uce}(L),\widetilde{\alpha}) \ar@{>>}[d]^{u_L}\\ ( L^{\prime},\alpha_{L^{\prime}}) \ar[r]^f & \left( L,\alpha_{L}\right) } }\end{equation} From diagram (\ref{diagrama uce}) one derives the existence of a covariant right exact functor $\frak{uce} :{\sf Hom-Leib^{\rm perf}} \to {\sf Hom-Leib^{\rm perf}}$ on the category of perfect Hom-Leibniz algebras. Consequently, an automorphism $f$ of $(L,\alpha_L)$ gives rise to an automorphism $\frak{uce}(f)$ of $(\frak{uce}(L), \widetilde{\alpha})$. Commutativity of diagram (\ref{diagrama uce}) implies that $\frak{uce}(f)$ leaves $HL_{2}^{\alpha}(L)$ invariant. Hence the group homomorphism $$ \begin{array}{rcl} {\rm Aut}(L,\alpha_L) & \to & \{ g \in {\rm Aut}(\frak{uce}(L),\widetilde{\alpha}) : g(HL_2^{\alpha}(L))=HL_2^{\alpha}(L) \} \\ f & \mapsto &\frak{uce}(f) \end{array} $$ is obtained. By similar considerations, an analogous analysis of the functorial properties of $\alpha$-perfect Hom-Leibniz algebras can be carried out.
Namely, consider a homomorphism of $\alpha$-perfect Hom-Leibniz algebras $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \to \left( L,\alpha_{L}\right)$. Let $I_{L}$ be the vector subspace of $\alpha_L(L) \otimes \alpha_L(L)$ spanned by the elements of the form $-[x_1,x_2] \otimes \alpha_L(x_3) + [x_1,x_3] \otimes \alpha_L(x_2) + \alpha_L(x_1) \otimes [x_2,x_3]$, $x_1, x_2, x_3 \in L$, and let $I_{L^{\prime}}$ be defined analogously. Then $f$ induces a $\mathbb{K}$-linear map $f\otimes f: \alpha_{L^{\prime}}\left( L^{\prime}\right) \otimes\alpha_{L^{\prime}}\left( L^{\prime}\right) \to \alpha_{L}\left(L\right) \otimes\alpha_{L}\left( L\right)$, given by $\left( f\otimes f\right) \left( \alpha_{L^{\prime}}\left(x_{1}^{\prime}\right) \otimes\alpha_{L^{\prime}}\left( x_{2}^{\prime}\right) \right) =\alpha_L(f\left( x_{1}^{\prime}\right)) \otimes \alpha_L(f\left(x_{2}^{\prime}\right))$, which satisfies $\left( f\otimes f\right) \left(I_{L^{\prime}}\right) \subseteq I_{L}$.
Consequently, it induces a homomorphism of Hom-Leibniz algebras $\frak{uce}_{\alpha}(f): (\frak{uce}_{\alpha}(L^{\prime}), \overline{\alpha'})\to ( \frak{uce}_{\alpha}(L), \overline{\alpha})$, given by $\frak{uce}_{\alpha}(f)\left\{ \alpha_{L^{\prime}}\left( x_{1}^{\prime}\right),\alpha_{L^{\prime}}\left( x_{2}^{\prime}\right) \right\} =\left\{ \alpha_{L}\left( f(x'_{1})\right),\alpha_{L}\left( f(x'_{2})\right) \right\}$, making the following diagram commutative: \begin{equation} \label{diagrama alfa uce} \vcenter{\xymatrix{ Ker\left( U_{\alpha^{\prime}}\right) \ \ar@{>->}[d] & Ker\left(U_{\alpha}\right) \ \ar@{>->}[d] \\ ( \frak{uce}_{\alpha}(L^{\prime}),\overline{\alpha^{\prime}}) \ar[r]^{\frak{uce}_{\alpha}(f)} \ar@{>>}[d]_{U_{\alpha'}} & ( \frak{uce}_{\alpha}(L),\overline{\alpha} ) \ar@{>>}[d]^{U_{\alpha}}\\ ( L^{\prime},\alpha_{L^{\prime}}) \ar[r]^{f} & ( L,\alpha_{L} ) }} \end{equation} From diagram (\ref{diagrama alfa uce}) one derives the existence of a covariant right exact functor $\frak{uce}_{\alpha} : {\sf Hom-Leib^{\alpha-\rm perf}} \to {\sf Hom-Leib^{\alpha-\rm perf}}$ on the category of $\alpha$-perfect Hom-Leibniz algebras. Consequently, an automorphism $f$ of $(L,\alpha_L)$ gives rise to an automorphism $\frak{uce}_{\alpha}(f)$ of $(\frak{uce}_{\alpha}(L), \overline{\alpha})$. Commutativity of diagram (\ref{diagrama alfa uce}) implies that $\frak{uce}_{\alpha}(f)$ leaves $Ker(U_{\alpha})$ invariant. Hence the group homomorphism $$ \begin{array}{rcl} {\rm Aut}(L,\alpha_L) & \to & \{ g \in {\rm Aut}(\frak{uce}_{\alpha}(L),\overline{\alpha}) : g(Ker(U_{\alpha}))=Ker(U_{\alpha}) \} \\ f & \mapsto &\frak{uce}_{\alpha}(f) \end{array} $$ is obtained. Now we consider a derivation $d$ of the $\alpha$-perfect Hom-Leibniz algebra $\left( L,\alpha_{L}\right)$.
The linear map $\varphi:\alpha_L(L)\otimes \alpha_L(L) \to \alpha_L(L)\otimes \alpha_L(L)$ given by $\varphi\left( \alpha_L(x_{1})\otimes \alpha_L(x_{2})\right) =d(\alpha_L(x_{1}))\otimes\alpha^2 _{L}\left( x_{2}\right) +\alpha^2_{L}\left( x_{1}\right) \otimes d\left(\alpha_L(x_{2})\right)$ leaves invariant the vector subspace $I_{L}$ of $\alpha_L(L) \otimes \alpha_L(L)$ spanned by the elements of the form $-[x_1,x_2] \otimes \alpha_L(x_3) + [x_1,x_3] \otimes \alpha_L(x_2) + \alpha_L(x_1) \otimes [x_2,x_3]$, $x_1, x_2, x_3 \in L$. Hence it induces a linear map $\frak{uce}_{\alpha}(d):\left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha}\right) \to \left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha}\right)$, given by $\frak{uce}_{\alpha}(d)( \left\{ \alpha_L(x_{1}),\alpha_L(x_{2}) \right\}) = \left\{ d(\alpha_L(x_{1})),\alpha^2_{L}\left( x_{2}\right) \right\} +\left\{ \alpha^2_{L}\left( x_{1}\right) ,d\left( \alpha_L(x_{2})\right) \right\}$, which makes the following diagram commutative: \begin{equation} \label{diagrama derivacion} \vcenter{ \xymatrix{ \left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha} \right) \ar[r]^{\frak{uce}_{\alpha}(d)} \ar@{>>}[d]_{U_{\alpha}} & \left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha} \right) \ar@{>>}[d]^{U_{\alpha}} \\ \left( L,\alpha_{L}\right) \ar[r]^d & \left( L,\alpha_{L}\right) } } \end{equation} Consequently, a derivation $d$ of $(L,\alpha_L)$ gives rise to a derivation $\frak{uce}_{\alpha}(d)$ of $(\frak{uce}_{\alpha}(L), \overline{\alpha})$. The commutativity of diagram (\ref{diagrama derivacion}) implies that $\frak{uce}_{\alpha}(d)$ maps $Ker(U_{\alpha})$ into itself.
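On generators, the commutativity of diagram (\ref{diagrama derivacion}) amounts to the following computation; this is a sketch, assuming that derivations satisfy $d[x,y] = [d(x),\alpha_L(y)] + [\alpha_L(x),d(y)]$ (the reader should adapt it to the precise notion of derivation used in \cite{CIP}):

```latex
\begin{align*}
U_{\alpha} \cdot \frak{uce}_{\alpha}(d)\left( \{\alpha_L(x_1),\alpha_L(x_2)\}\right)
  &= [d(\alpha_L(x_1)),\alpha_L^2(x_2)] + [\alpha_L^2(x_1),d(\alpha_L(x_2))] \\
  &= d[\alpha_L(x_1),\alpha_L(x_2)]
   = d \cdot U_{\alpha}\left( \{\alpha_L(x_1),\alpha_L(x_2)\}\right),
\end{align*}
% where the second equality is the derivation property applied to the pair
% (\alpha_L(x_1), \alpha_L(x_2)).
```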
Hence one obtains the homomorphism of Hom-$\mathbb{K}$-vector spaces $$ \begin{array}{rcl} \frak{uce}_{\alpha} : {\rm Der}(L,\alpha_L) & \to & \{ \delta \in {\rm Der}(\frak{uce}_{\alpha}(L),\overline{\alpha}) : \delta (Ker(U_{\alpha})) \subseteq Ker(U_{\alpha}) \} \\ d & \mapsto &\frak{uce}_{\alpha}(d) \end{array} $$ whose kernel is contained in the subalgebra of derivations of $(L,\alpha_L)$ that vanish on $[\alpha_L(L),\alpha_L(L)]$. The functorial properties of $\frak{uce}_{\alpha}(-)$ relative to derivations are described by the following result. \begin{Le} \label{uce derivacion} Let $f:(L^{\prime},\alpha_{L^{\prime}})\to \left( L,\alpha_{L}\right)$ be a homomorphism of $\alpha$-perfect Hom-Leibniz algebras. Consider $d\in Der(L)$ and $d^{\prime}\in Der(L^{\prime})$ such that $f \cdot d^{\prime}=d \cdot f$. Then $\frak{uce}_{\alpha}(f) \cdot \frak{uce}_{\alpha}(d^{\prime})= \frak{uce}_{\alpha}(d) \cdot \frak{uce}_{\alpha}(f).$ \end{Le} {\it Proof.} Routine checking. $\Box $ \section{Lifting automorphisms and derivations} In this section we analyze under what conditions an automorphism or a derivation can be lifted to an $\alpha$-cover. We restrict the study to $\alpha$-covers since we must compose central extensions in the constructions; this fact does not allow us to obtain more general results, mainly due to Lemma 4.10 in \cite{CIP}. \begin{De} A central extension of Hom-Leibniz algebras $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \twoheadrightarrow\left( L,\alpha_{L}\right)$, where $\left( L^{\prime},\alpha_{L^{\prime}}\right)$ is an $\alpha$-perfect Hom-Leibniz algebra, is said to be an $\alpha$-cover. \end{De} \begin{Le} \label{alfa perfecta sobre} If $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \to \left( L,\alpha_{L}\right)$ is a surjective homomorphism of Hom-Leibniz algebras and $\left( L^{\prime},\alpha_{L^{\prime}}\right)$ is $\alpha$-perfect, then $\left( L,\alpha_{L}\right)$ is $\alpha$-perfect as well.
\end{Le} Let $f:\left( L^{\prime},\alpha_{L^{\prime}}\right)\twoheadrightarrow \left( L,\alpha_{L}\right)$ be an $\alpha$-cover. Thanks to Lemma \ref{alfa perfecta sobre}, $\left( L,\alpha_{L}\right)$ is an $\alpha$-perfect Hom-Leibniz algebra as well. By Theorem 5.5 in \cite{CIP}, each of them admits a universal $\alpha$-central extension. Bearing in mind the functorial properties given in diagram (\ref{diagrama alfa uce}), we can construct the following diagram: \[{\xymatrix{ Ker\left( U_{\alpha^{\prime}}\right) \ \ar@{>->}[d] & Ker\left(U_{\alpha}\right) \ \ar@{>->}[d] \\ ( \frak{uce}_{\alpha}(L^{\prime}),\overline{\alpha^{\prime}}) \ar[r]^{\frak{uce}_{\alpha}(f)} \ar@{>>}[d]_{U_{\alpha'}} & ( \frak{uce}_{\alpha}(L),\overline{\alpha} ) \ar@{>>}[d]^{U_{\alpha}}\\ ( L^{\prime},\alpha_{L^{\prime}}) \ar[r]^{f} & ( L,\alpha_{L} ) }}\] Since $U_{\alpha^{\prime}}:\left( \frak{uce}_{\alpha^{\prime}}\left( L^{\prime}\right),\overline{\alpha^{\prime}}\right) \twoheadrightarrow \left( L^{\prime},\alpha_{L^{\prime}}\right)$ is a universal $\alpha$-central extension, by Remark \ref{rem} it is a universal central extension as well. Since $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \twoheadrightarrow \left( L,\alpha_{L}\right)$ is a central extension and $U_{\alpha^{\prime}}: \left( \frak{uce}_{\alpha^{\prime}}\left( L^{\prime}\right),\overline{\alpha^{\prime}}\right) \twoheadrightarrow \left( L',\alpha_{L'}\right)$ is a universal central extension, by Proposition 4.15 in \cite{CIP} the extension $f \cdot U_{\alpha^{\prime}}:\left( \frak{uce}_{\alpha^{\prime}}\left( L^{\prime}\right) ,\overline{\alpha^{\prime}}\right) \to\left( L,\alpha_{L}\right)$ is an $\alpha$-central extension which is universal in the sense of Definition 4.13 in \cite{CIP}.
On the other hand, since $U_{\alpha} : \left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha} \right) \twoheadrightarrow (L,\alpha_L)$ is a universal $\alpha$-central extension, there exists a unique homomorphism $\varphi:\left( \frak{uce}_{\alpha}\left( L\right), \overline{\alpha} \right) \to \left( \frak{uce}_{\alpha^{\prime}}\left(L^{\prime}\right),\overline{\alpha^{\prime}}\right)$ such that $f \cdot U_{\alpha^{\prime}} \cdot \varphi=U_{\alpha}$. Moreover $\varphi \cdot \frak{uce}_{\alpha}(f) = Id$, since the following diagram is commutative \[\xymatrix{ 0 \ar[r] & \left( Ker(f \cdot U_{\alpha^{\prime}}),\overline{\alpha'_{\mid}}\right) \ar[r]& \left( \frak{uce}_{\alpha^{\prime}}\left( L^{\prime}\right),\overline{\alpha'}\right) \ar[r]^{\quad f \cdot U_{\alpha^{\prime}}} \ar@<2ex>[d]_{\varphi \cdot \frak{uce}_{\alpha}(f)\quad} \ar[d]^{\quad Id}& \left( L,\alpha_{L}\right) \ar[r] \ar@{=}[d]& 0 \\ 0 \ar[r] & \left( Ker(f \cdot U_{\alpha^{\prime}}),\overline{\alpha'_{\mid}}\right) \ar[r] & \left( \frak{uce}_{\alpha^{\prime}}\left( L^{\prime}\right),\overline{\alpha^{\prime}}\right) \ar[r]^{\quad f \cdot U_{\alpha^{\prime}}} & \left( L,\alpha_{L}\right) \ar[r] & 0 }\] and $f \cdot U_{\alpha'}$ is an $\alpha$-central extension which is universal in the sense of Definition 4.13 in \cite{CIP}.
Conversely, $\frak{uce}_{\alpha}(f) \cdot \varphi = Id$, since the following diagram is commutative: \[\xymatrix{ 0 \ar[r] & \left( Ker( U_{\alpha}),\overline{\alpha_{\mid}}\right) \ar[r]& \left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha}\right) \ar[r]^{\quad U_{\alpha}} \ar@<2ex>[d]_{\frak{uce}_{\alpha}(f) \cdot \varphi\quad} \ar[d]^{\quad Id}& \left( L,\alpha_{L}\right) \ar[r] \ar@{=}[d]& 0 \\ 0 \ar[r] & \left( Ker(U_{\alpha}),\overline{\alpha_{\mid}}\right) \ar[r] & \left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha}\right) \ar[r]^{\quad U_{\alpha}} & \left( L,\alpha_{L}\right) \ar[r] & 0 }\] Its horizontal rows are central extensions and $\left( \frak{uce}_{\alpha}\left( L\right),\overline{\alpha}\right)$ is $\alpha$-perfect, so Lemma 5.4 in \cite{CIP} guarantees the uniqueness of the vertical homomorphism. Consequently $\frak{uce}_{\alpha}(f)$ is an isomorphism, and from now on we will use the notation $\frak{uce}_{\alpha}(f)^{-1}$ instead of $\varphi$. On the other hand, $U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}:\left( \frak{uce}_{\alpha}\left( L\right) ,\overline{\alpha} \right) \twoheadrightarrow \left( L^{\prime},\alpha_{L^{\prime}}\right)$ is an $\alpha$-cover. In the sequel, we will denote its kernel by $$C:=Ker(U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1})= \frak{uce}_{\alpha}(f)\left( Ker\left( U_{\alpha^{\prime}}\right) \right).$$ \begin{Th} \label{levantamiento automorfismo} Let $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \twoheadrightarrow \left( L,\alpha_{L}\right)$ be an $\alpha$-cover.
For any $h\in Aut\left( L,\alpha_{L}\right)$, there exists a unique $\theta_{h}\in Aut\left( L^{\prime},\alpha_{L^{\prime}}\right)$ such that the following diagram is commutative: \begin{equation} \label{automorfismo} \vcenter{ \xymatrix{ \left( L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} \ar[d]_{\theta_h}& \left( L,\alpha_{L}\right) \ar[d]^h \\ \left( L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} & \left( L,\alpha_{L}\right) }} \end{equation} if and only if the automorphism $\frak{uce}_{\alpha}(h)$ of $(\frak{uce}_{\alpha}\left( L \right), \overline{\alpha})$ satisfies $\frak{uce}_{\alpha}(h)\left( C\right) =C$. In this case, $\theta_{h}$ is uniquely determined by diagram (\ref{automorfismo}) and $\theta_{h}\left( Ker(f)\right) =Ker(f)$. Moreover, the map $$\begin{array}{rcl}\Theta:\left\{ h\in Aut\left( L,\alpha_{L}\right) : \frak{uce}_{\alpha}(h)\left( C\right) =C\right\} &\to & \left\{ g\in Aut\left( L^{\prime},\alpha_{L^{\prime}}\right) :g\left( Ker(f)\right) =Ker(f)\right\}\\ h &\mapsto & \theta_{h} \end{array}$$ is a group isomorphism. \end{Th} {\it Proof.} Let $h\in Aut\left( L,\alpha_{L}\right)$ and assume that there exists $\theta_{h}\in Aut\left( L^{\prime},\alpha_{L^{\prime}}\right)$ such that diagram (\ref{automorfismo}) is commutative. Then $h \cdot f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \to\left( L,\alpha_{L}\right)$ is an $\alpha$-cover, hence $\theta_h$ is a homomorphism from the $\alpha$-cover $h \cdot f$ to the $\alpha$-cover $f$, which is unique by Remark 5.3 {\it b)} and Lemma 4.7 in \cite{CIP}.
By application of the functor $\frak{uce}_{\alpha}(-)$ to diagram (\ref{automorfismo}), one obtains the following commutative diagram: \[\xymatrix{ \left( \frak{uce}_{\alpha}\left( L^{\prime}\right),\overline{\alpha_{L^{\prime}}}\right) \ar@{>>}[r]^{\frak{uce}_{\alpha}(f)} \ar@{>->>}[d]_{\frak{uce}_{\alpha}(\theta_{h})}& \left( \frak{uce}_{\alpha}(L),\overline{\alpha_L} \right) \ar@{>->>}[d]^{\frak{uce}_{\alpha}(h)}\\ \left( \frak{uce}_{\alpha}\left( L^{\prime}\right),\overline{\alpha_{L^{\prime}}}\right) \ar@{>>}[r]^{\frak{uce}_{\alpha}(f)} & \left( \frak{uce}_{\alpha}(L),\overline{\alpha_L} \right) }\] Hence $\frak{uce}_{\alpha}(h)\left( C\right) = \frak{uce}_{\alpha}(h)\cdot \frak{uce}_{\alpha}(f)\left( Ker\left( U_{\alpha'}\right) \right) =\frak{uce}_{\alpha}(f) \cdot \frak{uce}_{\alpha}(\theta_{h})\left( Ker\left( U_{\alpha'}\right) \right) = \frak{uce}_{\alpha}(f) \left( Ker\left( U_{\alpha'}\right) \right) = C$. Conversely, from diagram (\ref{diagrama alfa uce}), we have that $U_{\alpha}=f \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}$, hence we obtain the following diagram: \[\xymatrix{ C \ar@{>->}[r] \ar[d] & \left( \frak{uce}_{\alpha}(L),\overline{\alpha} \right) \ar@{>>}[rr]^{U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1}} \ar[d]^{\frak{uce}_{\alpha}(h)}& \ & \left(L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} \ar@{-->}[d]^{\theta_h} & \left( L,\alpha_{L}\right) \ar[d]^h\\ C \ar@{>->}[r] & \left( \frak{uce}_{\alpha}(L),\overline{\alpha} \right) \ar@{>>}[rr]^{U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1}} & \ & \left(L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} & \left( L,\alpha_{L}\right) }\] If $\frak{uce}_{\alpha}(h)\left( C\right) =C$, then $U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1}\cdot \frak{uce}_{\alpha}(h)\left( C\right) = U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1}(C)=0$, so there exists a unique $\theta_{h}:\left( L^{\prime},\alpha_{L^{\prime}}\right) \to \left(
L^{\prime},\alpha_{L^{\prime}}\right)$ such that $\theta_{h} \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}=U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1} \cdot \frak{uce}_{\alpha}(h)$. On the other hand, $h\cdot f \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}=f \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1} \cdot \frak{uce}_{\alpha}(h)= f \cdot \theta_{h} \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}$, and since $U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}$ is surjective, $h\cdot f=f\cdot \theta_{h}.$ In conclusion, $\theta_{h}$ is uniquely determined by diagram (\ref{automorfismo}) and moreover $\theta_{h}( Ker(f))$ $=Ker(f)$. By the previous arguments, it is easy to check that $\Theta$ is a well-defined map; it is a monomorphism thanks to the uniqueness of $\theta_{h}$, and it is an epimorphism since every $g\in Aut\left( L',\alpha_{L'}\right)$ with $g\left( Ker(f)\right) =Ker(f)$ induces a unique homomorphism $h:\left( L,\alpha_{L}\right) \to \left( L,\alpha_{L}\right)$ such that $h \cdot f=f \cdot g$. Then $g=\theta_{h}$ and $\frak{uce}_{\alpha}\left(h\right) \left( C\right) =C$. $\Box $ \begin{Co} If $\left( L,\alpha_{L}\right)$ is an $\alpha$-perfect Hom-Leibniz algebra, then the map $$\begin{array}{rcl} Aut(L,\alpha_{L}) & \to &\left\{ g\in Aut(\frak{uce}_{\alpha}(L),\overline{\alpha}):g(Ker(U_{\alpha}))=Ker(U_{\alpha})\right\}\\ h &\mapsto & \frak{uce}_{\alpha}(h)\end{array}$$ is a group isomorphism. \end{Co} {\it Proof}. By application of Theorem \ref{levantamiento automorfismo} to the $\alpha$-cover $U_{\alpha}:\left( \frak{uce}_{\alpha }\left( L\right),\overline{\alpha}\right) \twoheadrightarrow\left(L,\alpha_{L}\right)$, it suffices to bear in mind that under these conditions $C=0$ and $\frak{uce}_{\alpha}(f)(0)=0$. $\Box $ Now we analyze under what conditions a derivation of an $\alpha$-perfect Hom-Leibniz algebra can be lifted to an $\alpha$-cover.
\begin{Th} Let $f:\left( L^{\prime},\alpha_{L^{\prime}}\right) \twoheadrightarrow\left( L,\alpha_{L}\right)$ be an $\alpha$-cover. Denote by $C=\frak{uce}_{\alpha}(f)\left( Ker\left( U_{\alpha^{\prime}}\right) \right) \subseteq Ker(U_{\alpha})$. Then the following statements hold: \begin{enumerate} \item[a)] For any $d\in Der\left( L,\alpha_{L}\right)$ there exists a $\delta_{d}\in Der\left( L^{\prime},\alpha_{L^{\prime}}\right)$ such that the following diagram is commutative \begin{equation} \label{derivacion} \vcenter{ \xymatrix{ \left( L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} \ar[d]_{\delta_d}& \left( L,\alpha_{L}\right) \ar[d]^d \\ \left( L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} & \left( L,\alpha_{L}\right) }} \end{equation} if and only if the derivation $\frak{uce}_{\alpha}(d)$ of $(\frak{uce}_{\alpha}\left(L\right),\overline{\alpha_L})$ satisfies $\frak{uce}_{\alpha}(d)\left( C\right) \subseteq C$. In this case, $\delta_{d}$ is uniquely determined by (\ref{derivacion}) and $\delta_{d}\left( Ker(f)\right)$ $\subseteq Ker(f)$. \item[b)] The map $$ \begin{array}{rcl} \Delta:\left\{ d\in Der\left( L,\alpha_{L}\right) :\frak{uce}_{\alpha}(d)\left( C\right) \subseteq C\right\} & \to &\left\{\rho\in Der\left( L^{\prime}, \alpha_{L^{\prime}}\right) :\rho\left(Ker(f)\right) \subseteq Ker(f)\right\}\\ d & \mapsto & \delta_{d}\end{array}$$ is an isomorphism of Hom-vector spaces. \item[c)] For the $\alpha$-cover $U_{\alpha} : (\frak{uce}_{\alpha}(L),\overline{\alpha_L}) \twoheadrightarrow (L, \alpha_L)$, the map $$\frak{uce}_{\alpha} : Der(L,\alpha_L) \to \{\delta \in Der (\frak{uce}_{\alpha}(L),\overline{\alpha_L}) : \delta(Ker(U_{\alpha})) \subseteq Ker(U_{\alpha}) \}$$ is an isomorphism of Hom-vector spaces. \end{enumerate} \end{Th} {\it Proof}. {\it a)} Let $d\in Der\left( L, \alpha_L \right)$ and assume the existence of a $\delta_{d}\in Der\left( L', \alpha_{L'} \right)$ such that diagram (\ref{derivacion}) is commutative.
Then, by Lemma \ref{uce derivacion}, we obtain the following commutative diagram: \[\xymatrix{ \left( \frak{uce}_{\alpha}\left( L^{\prime}\right),\overline{\alpha_{L^{\prime}}}\right) \ar[r]^{\frak{uce}_{\alpha}(f)} \ar[d]_{\frak{uce}_{\alpha}(\delta_d)} & \left(\frak{uce}_{\alpha}(L),\overline{\alpha_L} \right) \ar[d]^{\frak{uce}_{\alpha}(d)}\\ \left( \frak{uce}_{\alpha}\left( L^{\prime}\right), \overline{\alpha_{L^{\prime}}} \right) \ar[r]^{\frak{uce}_{\alpha}(f)} & \left( \frak{uce}_{\alpha}(L),\overline{\alpha_L} \right) }\] Hence, bearing in mind the properties derived from diagram (\ref{diagrama derivacion}), we obtain: \noindent $\frak{uce}_{\alpha}(d)\left( C\right) = \frak{uce}_{\alpha}(d) \cdot \frak{uce}_{\alpha}(f) \left( Ker(U_{\alpha^{\prime}})\right) =\frak{uce}_{\alpha}(f) \cdot \frak{uce}_{\alpha}(\delta_d)\left( Ker(U_{\alpha^{\prime}})\right) \subseteq \frak{uce}_{\alpha}(f)\left( Ker(U_{\alpha^{\prime}})\right) = C$. Conversely, from diagram (\ref{diagrama alfa uce}) we have that $U_{\alpha}=f \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}$; consider the following diagram: \[\xymatrix{ C \ar@{>->}[r] \ar[d] & \left( \frak{uce}_{\alpha}(L),\overline{\alpha_L} \right) \ar@{>>}[rr]^{U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1}} \ar[d]^{\frak{uce}_{\alpha}(d)}& \ & \left(L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} \ar@{-->}[d]^{\delta_d} & \left( L,\alpha_{L}\right) \ar[d]^d\\ C \ar@{>->}[r] & \left( \frak{uce}_{\alpha}(L),\overline{\alpha_L} \right) \ar@{>>}[rr]^{U_{\alpha^{\prime}}\cdot \frak{uce}_{\alpha}(f)^{-1}} & \ & \left(L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^{f} & \left( L,\alpha_{L}\right) }\] Since $\frak{uce}_{\alpha}(d)\left( C\right) \subseteq C$, we have $U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1} \cdot \frak{uce}_{\alpha}(d)\left( C\right) \subseteq U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}(C)= U_{\alpha'} (Ker(U_{\alpha'})) = 0$.
Hence there exists a unique $\mathbb{K}$-linear map $\delta_{d}:\left( L^{\prime},\alpha_{L^{\prime}}\right) \to \left( L^{\prime},\alpha_{L^{\prime}}\right)$ such that $\delta_{d} \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}=U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1} \cdot \frak{uce}_{\alpha}(d).$ On the other hand, $d \cdot f \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1} = d \cdot U_{\alpha} \cdot \frak{uce}_{\alpha}(f) \cdot \frak{uce}_{\alpha}(f)^{-1} = U_{\alpha} \cdot \frak{uce}_{\alpha}(d) =f \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1} \cdot \frak{uce}_{\alpha}(d) = f \cdot \delta_{d} \cdot U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}$; since $U_{\alpha^{\prime}} \cdot \frak{uce}_{\alpha}(f)^{-1}$ is surjective, $d \cdot f=f \cdot \delta_{d}$. Finally, a direct checking shows that $\delta_{d}$ is a derivation of $L^{\prime}$, which is uniquely determined by diagram (\ref{derivacion}), and that $\delta_{d}\left( Ker(f)\right) \subseteq Ker(f)$.
{\it b)} The map $\Delta$ is a homomorphism of Hom-vector spaces by construction; it is injective by the uniqueness of $\delta_{d}$, and surjective since, for every $\rho\in Der\left( L^{\prime},\alpha_{L^{\prime}}\right)$ such that $\rho\left( Ker(f)\right) \subseteq Ker(f)$, one has the following commutative diagram: \[ \xymatrix{ Ker(f)\ \ar@{>->}[r] \ar[d] & \left( L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^f \ar[d]^{\rho} & \left( L,\alpha_{L}\right) \ar@{-->}[d]^d\\ Ker(f)\ \ar@{>->}[r] & \left( L^{\prime},\alpha_{L^{\prime}}\right) \ar@{>>}[r]^f & \left( L,\alpha_{L}\right) } \] where $d:\left( L,\alpha_{L}\right) \to \left( L,\alpha_{L}\right)$ is a derivation satisfying $\frak{uce}_{\alpha}\left( d\right) \left( C\right) = \frak{uce}_{\alpha}\left( d\right) \cdot \frak{uce}_{\alpha}(f)\left( Ker(U_{\alpha^{\prime}})\right) =\frak{uce}_{\alpha}\left( f\right) \cdot \frak{uce}_{\alpha}(\rho)\left( Ker(U_{\alpha^{\prime}})\right) \subseteq \frak{uce}_{\alpha}(f)\left( Ker(U_{\alpha^{\prime}})\right) =C.$ Finally, the uniqueness of $\delta_d$ implies that $\Delta\left( d\right) = \delta_{d}= \rho$. {\it c)} It suffices to apply statement {\it b)} to the $\alpha$-cover $U_{\alpha}:\left( \frak{uce}_{\alpha}(L),\overline{\alpha_{L}}\right)\twoheadrightarrow\left( L,\alpha_{L}\right)$. Now $C= \frak{uce}_{\alpha}(U_{\alpha})\left( Ker\left( U_{\alpha}\right) \right) =0$, and $\Delta$ is the map $\frak{uce}_{\alpha}$ derived from diagram (\ref{diagrama derivacion}).
$\Box $ \section{Universal $\alpha$-central extension of the semi-direct product} Consider a split extension of $\alpha$-perfect Hom-Leibniz algebras \[\xymatrix{ 0 \ar[r]& (M,\alpha_M) \ar[r]^t& (G,\alpha_G) \ar@<0.5ex>[r]^p &(Q,Id_Q) \ar[r] \ar@<0.5ex>[l]^s &0} \] where, by Lemma \ref{ext rota}, $(G,\alpha_G) \cong (M,\alpha_M) \rtimes (Q,Id_Q)$, and the Hom-action of $(Q,Id_Q)$ on $(M,\alpha_M)$ is given by $q \centerdot m = [s(q), t(m)]$ and $m \centerdot q = [t(m),s(q)]$, for $q \in Q, m \in M$. Moreover we will assume, when needed, that this action is symmetric, i.e. $q \centerdot m + m \centerdot q =0$, for $q \in Q, m \in M$. An example of the above situation arises when $(M,\alpha_M)$ is an $\alpha$-perfect Hom-Leibniz algebra, $Q$ is a perfect Leibniz algebra considered as the Hom-Leibniz algebra $(Q, Id_Q)$ and $(G,\alpha_G)=(M,\alpha_M) \times (Q, Id_Q)=(M \times Q, \alpha_M \times Id_Q)$. Applying the functorial properties of $\frak{uce}_{\alpha}(-)$ given in diagram (\ref{diagrama alfa uce}) and bearing in mind that $(Q,Id_Q)$ is perfect if and only if $Q$ is perfect, we have the following commutative diagram: \[ \xymatrix{ & Ker(U_{\alpha}^M) \ar@{>->}[d]& Ker(U_{\alpha}^G) \ar@{>->}[d]& HL_2(Q) \ar@{>->}[d]& \\ & (\frak{uce}_{\alpha}(M),\overline{\alpha_M}) \ar@{>>}[d]^{U_{\alpha}^M}\ar[r]^{\tau}& (\frak{uce}_{\alpha}(G),\overline{\alpha_G}) \ar@{>>}[d]^{U_{\alpha}^G} \ar@<0.5ex>[r]^{\pi} &(\frak{uce}(Q),Id_{\frak{uce}(Q)}) \ar@{>>}[d]^{u_Q} \ar@<0.5ex>[l]^{\sigma} &\\ 0 \ar[r]& (M,\alpha_M) \ar[r]^t& (G,\alpha_G) \ar@<0.5ex>[r]^p &(Q,Id_Q) \ar[r] \ar@<0.5ex>[l]^s & 0 } \] Here $\tau = \frak{uce}_{\alpha}(t)$, $\pi = \frak{uce}_{\alpha}(p)$ and $\sigma = \frak{uce}_{\alpha}(s)$.
The sequence \[\xymatrix{ & (\frak{uce}_{\alpha}(M),\overline{\alpha_M}) \ar[r]^{\tau}& (\frak{uce}_{\alpha}(G),\overline{\alpha_G}) \ar@<0.5ex>[r]^{\pi} &(\frak{uce}(Q),Id_{\frak{uce}(Q)}) \ar@<0.5ex>[l]^{\sigma} &} \] is split: since $p \cdot s = Id_Q$, we have $\frak{uce}_{\alpha}(p) \cdot \frak{uce}_{\alpha}(s) = \frak{uce}_{\alpha}(Id_Q)$, i.e. $\pi \cdot \sigma = Id_{\frak{uce}(Q)}$. Obviously $\pi$ is surjective, and the section $\sigma$ induces a Hom-action of $(\frak{uce}(Q),Id_{\frak{uce}(Q)})$ on $(Ker(\pi),\overline{\alpha_{G}}_{\mid})$, given by: \noindent $\lambda:\frak{uce}(Q)\otimes Ker(\pi) \to Ker(\pi)$, $\lambda\left( \left\{ q_{1}, q_{2} \right\} \otimes \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} \right) = \left\{ q_{1}, q_{2}\right\} \centerdot \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} =$ $\left[ \sigma\left\{ q_{1}, q_{2} \right\},i \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} \right] = \left[ \left\{ s\left( q_{1}\right), s\left( q_{2}\right) \right\}, \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} \right] =$ $\left\{ s\left[ q_{1},q_{2}\right] , \alpha_G\left[g_{1},g_{2}\right] \right\}$ \noindent $\rho:Ker(\pi) \otimes \frak{uce}(Q)\to Ker(\pi)$, $\rho\left( \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} \otimes\left\{ q_{1}, q_{2} \right\} \right) = \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} \centerdot \left\{ q_{1}, q_{2}\right\} =$ $\left[ i \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\} ,\sigma\left\{ q_{1}, q_{2} \right\} \right] = \left[ \left\{ \alpha_G(g_{1}), \alpha_G(g_{2}) \right\},\left\{ s\left( q_{1}\right) , s\left( q_{2}\right) \right\} \right] =$ $\left\{ \alpha_G \left[ g_{1},g_{2}\right] ,s\left[q_{1},q_{2}\right] \right\}$ By Lemma \ref{ext rota}, the split exact sequence \[\xymatrix{ 0 \ar[r]& (Ker(\pi), \overline{\alpha_{G }}_{\mid}) \ar[r]^{i}& (\frak{uce}_{\alpha}(G),\overline{\alpha_G}) \ar@<0.5ex>[r]^{\pi \quad} &(\frak{uce}(Q),Id_{\frak{uce}(Q)})
\ar@<0.5ex>[l]^{\sigma \quad} \ar[r]& 0} \] is equivalent to the semi-direct product sequence, i.e. $$\left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}}\right) \cong \left( Ker(\pi), \overline{\alpha_{G}}_{\mid} \right) \rtimes \left(\frak{uce}(Q),Id_{\frak{uce}(Q)}\right)$$ Let $q\in Q$ and $\alpha_M(m_{1}),\alpha_M(m_{2})\in \alpha_M(M)$. Then the following identities hold in $(\frak{uce}_{\alpha}(G), \overline{\alpha_G})$: \noindent $\left\{ \alpha_{G}\left( s\left(q\right) \right) ,\left[ t\left( \alpha_M(m_{1})\right) ,t\left( \alpha_M(m_{2})\right)\right] \right\} = \left\{ \left[ s(q),t\left( \alpha_M(m_{1})\right) \right] ,\alpha_{G}\left(t\left( \alpha_M(m_{2})\right) \right) \right\}$ \noindent $- \left\{ \left[ s(q),t( \alpha_M(m_{2}))\right] ,\alpha_{G}\left( t\left( \alpha_M(m_{1})\right) \right) \right\}$ and \noindent$\left\{ \left[ t\left( \alpha_M(m_{1})\right) ,t\left( \alpha_M(m_{2})\right) \right] ,\alpha_{G}\left( s\left( q\right) \right) \right\} = \left\{ \alpha_{G}\left( t\left( \alpha_M(m_{1})\right) \right) ,\left[ t\left( \alpha_M(m_{2})\right) ,s\left( q\right) \right] \right\}$ \noindent $+ \left\{ \left[ t(\alpha_M(m_{1})),s(q)\right] ,\alpha_{G}\left( t\left( \alpha_M(m_{2})\right) \right) \right\}.$ These equalities, together with the $\alpha$-perfection of $\left( M,\alpha_{M}\right)$, imply: \noindent $\{s(Q),M\} =\left\{ \alpha_{G}\left( s\left( Q\right) \right) , [\alpha_M(M), \alpha_M(M)] \right\} \subseteq \left\{ \alpha_M(M), \alpha_{M}^2\left( M\right) \right\} \subseteq$ \noindent $\left\{\alpha_M(M), \alpha_M(M) \right\}$ and \noindent $\{M,s(Q)\} = \left\{ [\alpha_M(M), \alpha_M(M)], \alpha_{G}\left( s\left( Q\right) \right) \right\} \subseteq \left\{ \alpha_M^2(M), \alpha_M(M) \right\}+$ \noindent $\left\{ \alpha_M(M), \alpha_{M}^2\left( M\right) \right\}\subseteq \left\{ \alpha_M(M), \alpha_M(M) \right\}$.
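Both displayed identities are instances of the relations imposed in the construction of $\left( \frak{uce}_{\alpha}(G),\overline{\alpha_G}\right)$, which we record here only as a reading aid (for $x,y,z \in G$ with the relevant elements lying in $\alpha_G(G)$): $$\left\{ \alpha_{G}(x),\left[ y,z\right] \right\} = \left\{ \left[ x,y\right] ,\alpha_{G}(z)\right\} - \left\{ \left[ x,z\right] ,\alpha_{G}(y)\right\},$$ $$\left\{ \left[ y,z\right] ,\alpha_{G}(x)\right\} = \left\{ \alpha_{G}(y),\left[ z,x\right] \right\} + \left\{ \left[ y,x\right] ,\alpha_{G}(z)\right\},$$ evaluated at $x=s(q)$, $y=t\left( \alpha_M(m_{1})\right)$ and $z=t\left( \alpha_M(m_{2})\right)$.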
Moreover \begin{equation} \label{Eq 1} \tau\left( \frak{uce}_{\alpha}(M), \overline{\alpha_M} \right) \equiv \left( \left\{ \alpha_M(M), \alpha_M(M) \right\}, \overline{\alpha_G}_{\mid} \right) \end{equation} since $\tau\left\{ \alpha_M(m_{1}) , \alpha_M(m_{2}) \right\} = \left\{ t\left( \alpha_M(m_{1}) \right) ,t\left( \alpha_M(m_{2})\right) \right\} \equiv\left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\}$, and $$\sigma(\frak{uce}(Q))=\{s(Q),s(Q)\}=\{\alpha_G (s(Q)), \alpha_G (s(Q)) \}$$ since $\sigma(\{q_1,q_2\}) = \{s(q_1),s(q_2) \} = \{\alpha_G(s(q_1)),\alpha_G(s(q_2)) \}$. On the other hand, for every $\alpha_G(g) \in \alpha_G(G)$ there exists an $\alpha_M(m) \in \alpha_M(M)$ such that $\alpha_G(g) =s \left( p\left( \alpha_G(g) \right) \right)+ \alpha_M(m)$. Hence \begin{equation} \label{Eq 2} \left( \frak{uce}_{\alpha} \left( G\right), \overline{\alpha_G} \right) = \left( \left\{ s\left( Q\right),s\left( Q\right) \right\} +\left\{ \alpha_M(M), \alpha_M(M) \right\}, \overline{\alpha_G} \right) \end{equation} \begin{Pro}\ $$\left( Ker(\pi), \overline{\alpha_G}_{\mid} \right)= \left( \left \{ \alpha_M(M), \alpha_M(M) \right \}, \overline{\alpha_G}_{\mid} \right) = \tau\left( \frak{uce}_{\alpha}\left(M \right),\overline{\alpha_{M}}\right).$$ \end{Pro} {\it Proof.} Let $\{g_1,g_2\} \in Ker(\pi)$. By (\ref{Eq 2}), $\{g_1,g_2\} = \left\{ s\left( q_{1} \right) ,s\left( q_{2} \right) \right\} +\left\{ \alpha_M(m_{1}),\right.$ $\left. \alpha_M(m_{2}) \right\} \in \frak{uce}_{\alpha} \left( G\right)$. Then $\overline{0} = \pi \{g_1,g_2\} = \left\{ p\left( s \left( q_{1}\right) \right) ,p\left( s\left( q_{2}\right) \right) \right\} + \left\{ p\left( \alpha_M(m_{1}) \right),p\left( \alpha_M(m_{2}) \right) \right\} = \{q_1,q_2\}$, i.e. $q_{1} \otimes q_{2} \in I_{Q}$. Consequently, $\sigma\left\{ q_{1}, q_{2} \right\} = \left\{ s\left( q_{1} \right) , s\left( q_{2} \right) \right\}=0$ since $s\left( q_{1} \right) \otimes s\left( q_{2} \right) \in \sigma(I_Q) \subseteq I_{G}$.
So any element in the kernel has the form $\left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\}$. The reverse inclusion is obvious. The second equality was proved in (\ref{Eq 1}). $\Box $ On the other hand, $\sigma\left( \frak{uce}(Q), Id_{\frak{uce}(Q)} \right) = \left( \left\{ s(Q) ,s(Q) \right\}, \overline{\alpha_G} \right)$. Since $\pi \cdot \sigma=Id_{\frak{uce}(Q)}$, we have $\left( \frak{uce}_{\alpha}(G), \overline{\alpha_G} \right) = \left( Ker(\pi), \overline{\alpha_G}_{\mid} \right) \rtimes \sigma \left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)} \right)$. Moreover $\sigma$ is an isomorphism between $\left( \frak{uce}(Q), Id_{\frak{uce}(Q)} \right)$ and $\sigma\left( \frak{uce}(Q), Id_{\frak{uce}(Q)} \right)$. These facts imply: \begin{enumerate} \item[{\bf 1.}] $\left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}} \right) =\tau\left( \frak{uce}_{\alpha} \left( M\right), \overline{\alpha_{M}} \right) \rtimes \sigma\left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)}\right).$ \item[{\bf 2.}] $\sigma\left( \frak{uce} \left( Q\right),Id_{\frak{uce}(Q)} \right) \cong\left( \frak{uce} \left( Q\right), Id_{\frak{uce}(Q)}\right).$ \end{enumerate} By {\bf 1.}, every element of $\left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}} \right)$ can be written as $\left( \tau\left( m\right) ,\sigma\left( q\right) \right)$ for suitable $m\in\left( \frak{uce}_{\alpha}\left( M\right),\overline{\alpha_{M}}\right)$ and $q\in\left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)} \right)$. Such an element belongs to $Ker(U_{\alpha}^G)$ if and only if $U_{\alpha}^{G}\left( \tau\left( m\right) ,\sigma\left( q\right) \right) =0$, i.e. $m\in Ker(U_{\alpha}^M)$ and $q\in HL_2(Q)$.
From these facts we can derive that \begin{enumerate} \item[{\bf 3.}] $\left( Ker(U_{\alpha}^G), \overline{\alpha_G}_{\mid} \right) \cong \tau\left( Ker(U_{\alpha}^M) , \overline{\alpha_{M}}_{\mid} \right) \oplus \sigma \left( HL_2(Q), {Id_{\frak{uce}(Q)}}_{\mid} \right).$ \end{enumerate} Since there exists a symmetric Hom-action of $(Q,Id_Q)$ on $(M, \alpha_M)$, there is a Hom-action of $\left( \frak{uce}(Q), Id_{\frak{uce}(Q)} \right)$ on $\left( \frak{uce}_{\alpha}(M),\overline{\alpha_{M}}\right)$ given by: $\begin{array}{rcl} \lambda : \frak{uce}(Q) \otimes \frak{uce}_{\alpha}(M)& \to & \frak{uce}_{\alpha}(M)\\ \left\{ q_{1}, q_{2} \right\} \otimes \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\} & \mapsto & \left\{ q_{1}, q_{2}\right\} \centerdot \left\{ \alpha_M(m_1), \alpha_M(m_{2}) \right\} =\\ & & \left\{ \left[ q_{1},q_{2}\right] \centerdot \alpha_M(m_{1}),\alpha_{M}^2 \left( m_{2}\right) \right\} -\\ & & \left\{ \left[ q_{1},q_{2}\right] \centerdot \alpha_M(m_{2}),\alpha_{M}^2\left( m_{1}\right) \right\} \end{array}$ and $\begin{array}{rcl} \rho: \frak{uce}_{\alpha}(M) \otimes \frak{uce}(Q) & \to & \frak{uce}_{\alpha}(M)\\ \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\} \otimes \left\{ q_{1}, q_{2} \right\} &\mapsto & \left\{ \alpha_M(m_{1}), \alpha_M(m_{2})\right\} \centerdot \left\{ q_{1}, q_{2} \right\} = \\ & & \left\{ \alpha_M(m_{1}) \centerdot \left[ q_{1},q_{2}\right],\alpha_{M}^2\left( m_{2}\right) \right\} -\\ &&\left\{ \alpha_{M}^2\left( m_{1}\right) ,\left[ q_{1},q_{2}\right] \centerdot \alpha_M(m_{2}) \right\} \end{array}$ Then we can define the following homomorphism of Hom-Leibniz algebras: $$\tau \rtimes \sigma : \left( \frak{uce}_{\alpha}\left( M\right), \overline{\alpha_{M}}\right) \rtimes \left( \frak{uce} \left( Q\right), Id_{\frak{uce}(Q)}\right) \to \left( \frak{uce}_{\alpha}\left( G\right) , \overline {\alpha_{G}}\right) \cong \quad \quad \quad \quad \quad $$ $$\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad
\quad \quad \quad \quad \quad \quad \quad \quad \tau \left( \frak{uce}_{\alpha}\left( M\right), \overline{\alpha_{M}}\right) \rtimes \sigma \left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)}\right)$$ $$\left( \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\},\left\{ q_{1}, q_{2} \right\} \right) \mapsto \left( \left\{ t\left( \alpha_M(m_{1}) \right) ,t\left( \alpha_M(m_{2}) \right) \right\},\left\{ s\left( q_{1} \right),s\left( q_{2} \right) \right\} \right)$$ Moreover $\tau\rtimes\sigma$ is an epimorphism since $$\left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}}\right) \cong\tau\left( \frak{uce}_{\alpha}\left( M\right),\overline{\alpha_{M}}\right) \rtimes\sigma\left( \frak{uce}\left( Q\right),Id_{\frak{uce}(Q)}\right).$$ By the relations coming from the action induced by the split extension, \noindent $\tau\left( \left\{ q_{1},q_{2}\right\} \centerdot \left\{\alpha_M(m_{1}), \alpha_M(m_{2})\right\} \right) =\left[ \left\{ s\left( q_{1}\right) ,s\left( q_{2}\right) \right\}, \left\{ t\left( \alpha_M(m_{1})\right) ,t\left( \alpha_M(m_{2})\right) \right\} \right]$ and \noindent $\tau\left( \left\{ \alpha_M(m_{1}), \alpha_M(m_{2})\right\} \centerdot \left\{ q_{1},q_{2}\right\} \right) =$ $\left[ \left\{ t\left( \alpha_M(m_{1})\right), t\left( \alpha_M(m_{2})\right) \right\} ,\left\{ s\left( q_{1}\right) ,s\left( q_{2}\right) \right\} \right]$, one derives that: \noindent $t \cdot U_{\alpha}^{M}\left( \left\{ q_{1},q_{2}\right\} \centerdot \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\} \right)= \left[q_1,q_2 \right] \centerdot [\alpha_M(m_1), \alpha_M(m_2)],$ and \noindent $t \cdot U_{\alpha}^{M}\left( \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\} \centerdot \left\{ q_{1},q_{2}\right\} \right) = \left[\alpha_M(m_1), \alpha_M(m_2) \right] \centerdot [q_1,q_2].$ \begin{enumerate} \item[{\bf 4.}] Now we define the surjective homomorphism of Hom-Leibniz algebras \end{enumerate} \begin{center} $\begin{array}{rcl} \Phi:= \left( t \cdot U_{\alpha}^{M}\right)
\rtimes\left( s \cdot u_{Q}\right):\left( \frak{uce}_{\alpha} \left( M\right) \rtimes \frak{uce} \left( Q\right) ,\overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right) & \to & \left( G,\alpha_{G}\right) \\ \left( \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\},\left\{ q_{1},q_{2}\right\} \right) & \mapsto & \left( t\left[ \alpha_M(m_{1}), \alpha_M(m_{2})\right] ,s\left[ q_{1},q_{2}\right] \right) \end{array}$ \end{center} that makes the following diagram commutative: \begin{equation} \label{diagrama Fi} \vcenter{ \xymatrix{ \left( \frak{uce}_{\alpha} \left( M\right) \rtimes \frak{uce} \left( Q\right),\overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right) \ar@{>>}[rr]^{\quad \tau \rtimes \sigma} \ar@{-->}[dr]_{\Phi}& & \left( \frak{uce}_{\alpha} \left( G\right),\overline{\alpha_{G}} \right)\ar@{>>}[ld]^{U_{\alpha}^{G}}\\ & \left( G,\alpha_{G}\right)& }} \end{equation} Now we prove that $$\frak{uce} \left( Q\right) \centerdot Ker(U_{\alpha}^M) \oplus Ker(U_{\alpha}^M) \centerdot \frak{uce}\left( Q\right) \subseteq Ker(\tau) \subseteq Ker(U_{\alpha}^M)$$ The second inclusion is obvious since $t \cdot U_{\alpha}^M = U_{\alpha}^G \cdot \tau$ and $t$ is injective.
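In detail, the second inclusion follows from the compatibility $t \cdot U_{\alpha}^{M} = U_{\alpha}^{G} \cdot \tau$: for every $x \in Ker(\tau)$, $$t\left( U_{\alpha}^{M}\left( x\right) \right) = U_{\alpha}^{G}\left( \tau\left( x\right) \right) = U_{\alpha}^{G}\left( 0\right) = 0,$$ so $U_{\alpha}^{M}(x)=0$ by the injectivity of $t$, i.e. $Ker(\tau) \subseteq Ker(U_{\alpha}^{M})$.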
From the commutativity of the following diagram \[ \xymatrix{ & \left( Ker(U_{\alpha}^M),\overline{\alpha_{M}}_{\mid}\right) \ar@{-->}[r] \ar@{>->}[d] & \left( Ker(U_{\alpha}^G), \overline{\alpha_{G}}_{\mid}\right) \ar@{>->}[d]\\ Ker(\tau)\ \ar@{-->}[ru] \ar@{>->}[r] & \left( \frak{uce}_{\alpha}(M),\overline{\alpha_{M}} \right) \ar[r]^{\tau} \ar@{>>}[d]^{U_{\alpha}^M}& \left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}} \right) \ar@{>>}[d]^{U_{\alpha}^G}\\ & \left( M,\alpha_{M}\right) \ar[r]^t & \left(G,\alpha_{G}\right) } \] we have that $U_{\alpha}^{G} \cdot \tau\left( Ker(U_{\alpha}^M) \right) = t \cdot U_{\alpha}^{M}\left( Ker(U_{\alpha}^M)\right) =0$, hence $\tau\left( Ker(U_{\alpha}^M) \right)$ $\subseteq Ker(U_{\alpha}^{G}) \subseteq Z\left( \frak{uce}_{\alpha}(G)\right)$, so $\tau\left( \frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^M) \right) =\left[ \sigma\left( \frak{uce} \left( Q\right) \right) ,\tau\left( Ker(U_{\alpha}^M) \right) \right] =0$ \noindent and $\tau\left( Ker(U_{\alpha}^M) \centerdot \frak{uce}\left( Q\right)\right) =\left[ \tau\left( Ker(U_{\alpha}^M) \right),\sigma\left( \frak{uce}\left( Q\right) \right) \right] =0$. Consequently, $\frak{uce} \left( Q\right) \centerdot Ker(U_{\alpha}^M) \oplus Ker(U_{\alpha}^M) \centerdot \frak{uce}\left( Q\right) \subseteq Ker(\tau).$ On the other hand, we observe that $\left( \frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^M) \oplus Ker(U_{\alpha}^M) \centerdot \frak{uce}\left( Q\right), \right.$ $\left. \overline{\alpha_M}_{\mid} \right)$ is a two-sided ideal of $\left( \frak{uce}_{\alpha}\left( M\right), \overline{\alpha_M} \right)$.
Then the Hom-action of $\left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)} \right)$ on $\left( \frak{uce}_{\alpha}\left( M\right), \overline{\alpha_M} \right)$ induces a Hom-action of $\left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)} \right)$ on $$\left( \overline{\frak{uce}_{\alpha}\left( M\right) }, \overline{\overline{\alpha_M}} \right)= \left( \frac{\frak{uce}_{\alpha}\left( M\right)}{\frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^{M}) \oplus Ker(U_{\alpha}^{M}) \centerdot \frak{uce}\left( Q\right)}, \overline{\overline{\alpha_M}} \right).$$ Since $\tau$ vanishes on $\frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^{M}) \oplus Ker(U_{\alpha}^{M}) \centerdot \frak{uce} \left( Q\right)$, it induces $\overline{\tau}:\overline{\frak{uce}_{\alpha}\left( M\right)}\to \tau\left( \frak{uce}_{\alpha}\left( M\right) \right)$. This fact is illustrated in the following diagram, where the notation $I= \frak{uce} \left( Q\right) \centerdot Ker(U_{\alpha}^{M}) \oplus Ker(U_{\alpha}^{M}) \centerdot \frak{uce}\left(Q\right)$ is employed: \[ \xymatrix{ \left( I, \overline{\alpha_M}_{\mid} \right) \ar@{>->}[d] \ar@/^1pc/[ddr]^0& &\\ \left( \frak{uce}_{\alpha}(M),\overline{\alpha_{M}}\right) \ar[rr]^{\tau} \ar@{>>}[dd] \ar@{>>}[dr]& & \left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}}\right) \\ & \tau \left( \frak{uce}_{\alpha}(M), \overline{\alpha_M} \right) \ar@{>->}[ur]&\\ \left( \overline{\frak{uce}_{\alpha}\left( M\right)}, \overline{\overline{\alpha_M}} \right)\ar@{-->}[ur]^{\overline{\tau}} & & } \] Now we can construct the following commutative diagram: \[ \xymatrix{ I\ \ar@{>->}[r] \ar@{>->}[d] & I\rtimes 0 \ar@{>>}[r] \ar@{>->}[d] & 0 \ar@{>->}[d] \\ Ker\left( \tau\rtimes\sigma\right)\ \ar@{>>}[d] \ar@{>->}[r] & \left( \frak{uce}_{\alpha }\left( M\right) \rtimes \frak{uce}\left( Q\right) ,\overline{\alpha _{M}}\rtimes Id_{\frak{uce}(Q)}\right) \ar@{>>}[r]^{\quad \quad \quad \quad \quad \tau\rtimes\sigma} \ar@{>>}[d]& \left( \frak{uce}_{\alpha}\left( G\right)
,\overline{\alpha_{G}}\right) \ar@{=}[d]\\ \frac{Ker\left( \tau\rtimes\sigma\right)}{I}\ \ar@{>->}[r] & \left( \overline{\frak{uce}_{\alpha}\left( M\right) }\rtimes \frak{uce}\left( Q\right),\overline{\overline{\alpha_{M}}}\rtimes Id_{\frak{uce}(Q)}\right) \ar@{>>}[r]^{\quad \quad \quad \quad \quad \Psi} & \left( \frak{uce}_{\alpha}\left( G\right) ,\overline{\alpha_{G}}\right) }\] whose bottom row is a central extension. Moreover, $\left( \frak{uce}_{\alpha}\left( G\right),\overline{\alpha_{G}}\right)$ is an $\alpha$-perfect Hom-Leibniz algebra, so, by Theorem 5.5 in \cite{CIP}, it admits a universal $\alpha$-central extension and, by Corollary \ref{centralmente cerrada}, $\frak{uce}_{\alpha}\left( G\right)$ is centrally closed, i.e. $\frak{uce} \left( \frak{uce}_{\alpha}\left( G\right) \right) \cong \frak{uce}_{\alpha}\left( G\right)$. Having in mind the following diagram, \[ \xymatrix{ \left( \overline{\frak{uce}_{\alpha}\left( M\right) }\rtimes \frak{uce}\left( Q\right),\overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right) \ar@{>>}[r]^{\quad \quad \quad \quad \Psi} \ar@{>>}[d]^{\Psi} \ar@/_4pc/[dd]_{Id} & \left( \frak{uce}_{\alpha}\left( G\right),\overline{\alpha_{G}}\right) \ar@{=}[d]\\ \left( \frak{uce}_{\alpha} \left( G\right) ,\overline{\alpha_{G}}\right) \ar@{>->>}[r]^{\quad Id} \ar[d]^{\mu} & \left( \frak{uce}_{\alpha}\left( G\right) ,\overline{\alpha_{G}}\right) \ar@{=}[d] \\ \left( \overline{\frak{uce}_{\alpha}\left( M\right) }\rtimes \frak{uce}\left(Q\right) ,\overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right) \ar@{>>}[r]^{\quad \quad \quad \quad \Psi} & \left( \frak{uce}_{\alpha}\left( G\right) ,\overline{\alpha_{G}}\right) } \] where $Id:\left( \frak{uce}_{\alpha}\left( G\right),\overline{\alpha_{G}}\right) \to \left( \frak{uce}_{\alpha}\left( G\right) ,\overline{\alpha_{G}}\right)$ is a universal central extension since $\left( \frak{uce}_{\alpha}\left( G\right) ,\overline{\alpha_{G}}\right)$ is centrally closed and $\Psi$ is a central extension,
we deduce that there exists a unique homomorphism of Hom-Leibniz algebras $\mu: \left( \frak{uce}_{\alpha}\left( G\right), \overline{\alpha_G} \right) \to \left( \overline{\frak{uce}_{\alpha}\left( M\right) }\rtimes \frak{uce}\left( Q\right), \overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right)$ such that $\Psi \cdot \mu=Id$. Since $\Psi \cdot \mu \cdot \Psi=Id \cdot \Psi=\Psi=\Psi \cdot Id$ and $\overline{\frak{uce}_{\alpha} \left( M\right) }\rtimes \frak{uce}\left( Q\right)$ is $\alpha$-perfect, Lemma 5.4 in \cite{CIP} implies that $\mu \cdot \Psi=Id$. Consequently, $\Psi$ is an isomorphism, hence $Ker(\Psi)= \frac{Ker\left( \tau\rtimes\sigma\right)}{I}=0$, so $Ker\left( \tau\rtimes\sigma\right) \subseteq I$. The above discussion can be summarized as follows: \begin{enumerate} \item[{\bf 5.}] $Ker\left( \tau\rtimes\sigma\right) \cong \frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^{M}) \oplus Ker(U_{\alpha}^{M}) \centerdot \frak{uce}\left( Q\right)$ \end{enumerate} We summarize the above results in the following theorem. \begin{Th} \label{5 puntos} Consider a split extension of $\alpha$-perfect Hom-Leibniz algebras \[\xymatrix{ 0 \ar[r]& (M,\alpha_M) \ar[r]^t& (G,\alpha_G) \ar@<0.5ex>[r]^p &(Q,Id_Q) \ar[r] \ar@<0.5ex>[l]^s &0} \] where the induced Hom-action of $(Q, Id_Q)$ on $(M, \alpha_M)$ is symmetric.
Then the following statements hold: \begin{enumerate} \item[{\bf 1.}] $\left( \frak{uce}_{\alpha}(G),\overline{\alpha_{G}} \right) =\tau\left( \frak{uce}_{\alpha} \left( M\right), \overline{\alpha_{M}} \right) \rtimes \sigma\left( \frak{uce}\left( Q\right), Id_{\frak{uce}(Q)}\right).$ \item[{\bf 2.}] $\sigma\left( \frak{uce} \left( Q\right),Id_{\frak{uce}(Q)} \right) \cong\left( \frak{uce} \left( Q\right), Id_{\frak{uce}(Q)}\right).$ \item[{\bf 3.}] $\left( Ker(U_{\alpha}^G), \overline{\alpha_G}_{\mid} \right) \cong \tau\left( Ker(U_{\alpha}^M) , \overline{\alpha_{M}}_{\mid} \right) \oplus \sigma \left( HL_2(Q), {Id_{\frak{uce}(Q)}}_{\mid} \right).$ \item[{\bf 4.}] The homomorphism of Hom-Leibniz algebras $$\Phi:\left(\frak{uce}_{\alpha} \left( M\right) \rtimes \frak{uce} \left( Q\right), \overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right)\to \left( G,\alpha_{G}\right)$$ given by $\Phi \left( \left\{ \alpha_M(m_{1}), \alpha_M(m_{2}) \right\},\left\{ q_{1},q_{2}\right\} \right) = \left( t\left[ \alpha_M(m_{1}), \alpha_M(m_{2})\right] ,s\left[ q_{1},q_{2}\right] \right)$ is an epimorphism that makes diagram (\ref{diagrama Fi}) commutative, and its kernel is $Ker(U_{\alpha}^{M})\oplus HL_2(Q)$. \item[{\bf 5.}] $Ker\left( \tau\rtimes\sigma\right) \cong \frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^{M}) \oplus Ker(U_{\alpha}^{M}) \centerdot \frak{uce}\left( Q\right).$ \end{enumerate} \end{Th} \begin{Rem} Let us observe that statements {\bf 1., 2.} and {\bf 3.} in Theorem \ref{5 puntos} hold in the general case; they do not require the hypothesis of a symmetric Hom-action. \end{Rem} \begin{Th} The following statements are equivalent: \begin{enumerate} \item[a)] $\Phi = (t \cdot U_{\alpha}^M) \rtimes (s \cdot u_Q) :\left( \frak{uce}_{\alpha}\left( M\right) \rtimes \frak{uce}\left( Q\right) ,\overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right) \to \left( G,\alpha_{G}\right)$ is a central extension, hence an $\alpha$-cover.
\item[b)] The Hom-action of $(\frak{uce}(Q), Id_{\frak{uce}(Q)})$ on $(Ker(U_{\alpha}^{M}), \overline{\alpha_M}_{\mid})$ is trivial. \item[c)] $\tau \rtimes \sigma$ is an isomorphism. Consequently $\frak{uce}_{\alpha}\left( M\right) \rtimes \frak{uce}\left( Q\right)$ is the universal $\alpha$-central extension of $(G, \alpha_G)$. \item[d)] $\tau$ is injective. \end{enumerate} In particular, for the direct product $\left( G,\alpha_{G}\right) = \left( M,\alpha_{M} \right) \times \left(Q, Id_{Q}\right)$ the following isomorphism holds: $$\left( \frak{uce}_{\alpha}\left( M\times Q\right),\overline{\alpha_{M}\times Id_{Q}}\right) \cong \left( \frak{uce}_{\alpha}\left( M\right) \times \frak{uce}\left( Q\right) ,\overline{\alpha_{M}}\times Id_{\frak{uce}(Q)}\right)$$ \end{Th} {\it Proof.} {\it a)} $\Longleftrightarrow$ {\it b)} If $\Phi:\left( \frak{uce}_{\alpha}\left( M\right) \rtimes \frak{uce}\left( Q\right) ,\overline{\alpha_{M}}\rtimes Id_{\frak{uce}(Q)}\right) \to \left( G,\alpha_{G}\right)$ is a central extension then, having in mind that $Ker(\Phi) = Ker(U_{\alpha}^{M})\oplus HL_2(Q)$, the Hom-action of $(\frak{uce}(Q), Id_{\frak{uce}(Q)})$ on $(Ker(U_{\alpha}^M), \overline{\alpha_M}_{\mid})$ is trivial, and conversely. Moreover $(\frak{uce}_{\alpha}\left( M\right) \rtimes \frak{uce}\left(Q\right), \overline{\alpha_M} \rtimes Id_{\frak{uce}(Q)})$ is $\alpha$-perfect since the Hom-action is trivial. {\it b)} $\Longleftrightarrow$ {\it c)} By statement {\bf 5.} in Theorem \ref{5 puntos} we know that $Ker\left( \tau\rtimes\sigma\right) \cong \frak{uce}\left( Q\right) \centerdot Ker(U_{\alpha}^{M}) \oplus Ker(U_{\alpha}^{M}) \centerdot \frak{uce}\left( Q\right)$, so $\tau \rtimes \sigma$ is injective if and only if the Hom-action is trivial. Hence, having in mind diagram (\ref{diagrama Fi}), it follows immediately that $\frak{uce}_{\alpha}\left( M\right) \rtimes \frak{uce}\left( Q\right)$ is the universal $\alpha$-central extension of $(G, \alpha_G)$.
{\it c)} $\Longleftrightarrow$ {\it d)} It suffices to bear in mind the identification of $\tau$ with $\tau \rtimes \sigma$ given by $\tau\left\{ \alpha_{M}\left( m_{1}\right), \alpha_{M}\left( m_{2}\right) \right\} =\left( \tau\rtimes\sigma\right) \left( \left\{ \alpha_{M}\left( m_{1}\right),\alpha_{M}\left( m_{2}\right) \right\} ,0\right)$; since $Ker(\tau) \cong Ker\left( \tau\rtimes\sigma\right)$, the equivalence is obvious. Finally, in the case of the direct product $\left( G,\alpha_{G}\right) =\left(M, \alpha_{M}\right) \times \left(Q, Id_{Q}\right)$ the Hom-action of $\left(Q,Id_{Q}\right)$ on $\left( M,\alpha_{M}\right)$ is trivial, hence the Hom-action of $(\frak{uce}(Q), Id_{\frak{uce}(Q)})$ on $(\frak{uce}_{\alpha}(M), \overline{\alpha_M})$ is trivial as well and, consequently, $(\frak{uce}_{\alpha}(M)\rtimes \frak{uce}(Q), \overline{\alpha_M} \rtimes Id_{\frak{uce}(Q)})=(\frak{uce}_{\alpha}(M)\times \frak{uce}(Q), \overline{\alpha_M} \times Id_{\frak{uce}(Q)})$. Statement {\it c)} ends the proof. $\Box $ \begin{Rem} Note that when the Hom-Leibniz algebras are considered as Leibniz algebras, i.e. when the endomorphisms $\alpha$ are identities, the results in this section recover the corresponding results for Leibniz algebras given in \cite{CC}. \end{Rem} \end{document}
arXiv
Architecture and Architectonics (3) Biology and Life Sciences (17) Chemistry and Chemical Engineering (35) Materials and Applied Sciences (3) Medical and Health Sciences (1) From 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2000 1999 1998 1997 1996 1995 1994 1993 1992 1991 1990 1989 1988 1987 1986 1985 1984 1983 1982 1981 1980 1979 1978 1977 1976 1975 1974 1973 1972 1971 1970 1969 1968 1967 1966 1965 1964 1963 1962 1961 1960 1959 1958 1957 1956 1955 1954 1953 1952 1951 1950 1949 1948 1944 1943 1942 1941 1940 1939 1938 1937 1936 1935 1934 1933 1932 1931 1930 1929 1928 1927 1926 1925 1924 1923 1922 1921 1920 1919 1918 1917 1916 1915 1914 1913 1912 1911 1910 1909 1908 1907 1906 1905 1904 1903 1902 1901 1900 1899 1898 1897 1896 1895 1894 1893 1892 1891 1890 1889 1888 1887 1886 1885 1884 1883 1882 1881 1880 1879 1878 1877 1876 1875 1874 1873 1872 1871 1870 1869 1868 1867 1866 1865 1864 1863 1862 1861 1860 1859 1858 1857 1-01 — To 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2000 1999 1998 1997 1996 1995 1994 1993 1992 1991 1990 1989 1988 1987 1986 1985 1984 1983 1982 1981 1980 1979 1978 1977 1976 1975 1974 1973 1972 1971 1970 1969 1968 1967 1966 1965 1964 1963 1962 1961 1960 1959 1958 1957 1956 1955 1954 1953 1952 1951 1950 1949 1948 1944 1943 1942 1941 1940 1939 1938 1937 1936 1935 1934 1933 1932 1931 1930 1929 1928 1927 1926 1925 1924 1923 1922 1921 1920 1919 1918 1917 1916 1915 1914 1913 1912 1911 1910 1909 1908 1907 1906 1905 1904 1903 1902 1901 1900 1899 1898 1897 1896 1895 1894 1893 1892 1891 1890 1889 1888 1887 1886 1885 1884 1883 1882 1881 1880 1879 1878 1877 1876 1875 1874 1873 1872 1871 1870 1869 1868 1867 1866 1865 1864 1863 1862 1861 1860 1859 1858 1857 1-01 Author or Editor: Z. L. 
Li x Sort by RelevanceArticle A-ZArticle Z-ADate - Old to RecentDate - Recent to OldAuthor A-ZAuthor Z-AJournal A-ZJournal Z-A Page:123456 A kinetic analysis of thermal decomposition of polyaniline/ZrO2 composite Authors: S. Wang, Z. Tan, Y. Li, L. Sun, and Y. Li Synthesis, characterization and thermal analysis of polyaniline (PANI)/ZrO2 composite and PANI was reported in our early work. In this present, the kinetic analysis of decomposition process for these two materials was performed under non-isothermal conditions. The activation energies were calculated through Friedman and Ozawa-Flynn-Wall methods, and the possible kinetic model functions have been estimated through the multiple linear regression method. The results show that the kinetic models for the decomposition process of PANI/ZrO2 composite and PANI are all D3, and the corresponding function is ƒ(α)=1.5(1−α)2/3[1−(1-α)1/3]−1. The correlated kinetic parameters are E a=112.7±9.2 kJ mol−1, lnA=13.9 and E a=81.8±5.6 kJ mol−1, lnA=8.8 for PANI/ZrO2 composite and PANI, respectively. Thermodynamic investigation of room temperature ionic liquid Heat capacity and thermodynamic functions of BMIBF4 Authors: Z. Zhang, Z. Tan, Y. Li, and L. Sun The molar heat capacities of the room temperature ionic liquid 1-butyl-3-methylimidazolium tetrafluoroborate (BMIBF4) were measured by an adiabatic calorimeter in temperature range from 80 to 390 K. The dependence of the molar heat capacity on temperature is given as a function of the reduced temperature X by polynomial equations, C P,m (J K–1 mol–1)= 195.55+47.230 X–3.1533 X 2+4.0733 X 3+3.9126 X 4 [X=(T–125.5)/45.5] for the solid phase (80~171 K), and C P,m (J K–1 mol–1)= 378.62+43.929 X+16.456 X 2–4.6684 X 3–5.5876 X 4 [X=(T–285.5)/104.5] for the liquid phase (181~390 K), respectively. 
According to the polynomial equations and thermodynamic relationship, the values of thermodynamic function of the BMIBF4 relative to 298.15 K were calculated in temperature range from 80 to 390 K with an interval of 5 K. The glass translation of BMIBF4 was observed at 176.24 K. Using oxygen-bomb combustion calorimeter, the molar enthalpy of combustion of BMIBF4 was determined to be Δc H m o= – 5335±17 kJ mol–1. The standard molar enthalpy of formation of BMIBF4 was evaluated to be Δf H m o= –1221.8±4.0 kJ mol–1 at T=298.150±0.001 K. Heterogeneous interaction between zwitterions of amino acids and glycerol in aqueous solutions at 298.15 K Authors: Y. Li, Z. Yingyuan, L. Yonghui, J. Jing, and W. Xiaoqing The enthalpies of mixing of six kinds of amino acid (glycine, L-alanine, L-valine, L-serine, L-threonine, and L-proline) with glycerol in aqueous solutions and the enthalpies of diluting of amino acid and glycerol aqueous solutions have been determined by flow microcalorimetry at 298.15 K. Employing McMillan–Mayer theory, the enthalpies of mixing and diluting have been used to calculate heterogeneous enthalpic pairwise interaction coefficients (h xy) between amino acids and glycerol in aqueous solutions. Combining h xy values of amino acids with glycol in the previous study, the variations of the h xy values between amino acids and glycerol have been interpreted from the point of view of solute–solute interactions. Dynamics in simultaneous bio-electro-generative leaching for pyrite-MnO2 Authors: L. Xiao, Y. Li, S. Wang, Z. Fang, and G. Qiu The principle for the electro-generative simultaneous leaching (EGSL) is applied to simultaneous leaching of pyrite-MnO2 in this paper. A galvanic system for the bio-electro-generative simultaneous leaching (BEGSL) has been set up. The equation of electric quantity vs. time is used to study the effect of produced sulfur on electro-generative efficiency and quantity. 
It has been shown that the resistance decreased in the presence of Acidithiobacillus thiooxidans (A. thiooxidans) with the increase of electro-generative efficiency. The effects of temperature and grain size on rate of ferrous extraction from pyrite under the conditions of presence and absence of A. thiooxidans were studied, respectively. The changes in the extraction rate of Fe2+ as particle size in presence of A. thiooxidans were more evident than that in the absence, which indicated that the extraction in bio-electro-generative leaching was affected by particle size remarkably. Around the optimum culture temperature for A. thiooxidans, the bigger change in the conversion rate of Fe2+ was depending on temperature. The transferred charge in BEGSL including part of S0 to sulfate group in the presence of (A. thiooxidans) which is called as biologic electric quantity, and the ratio of biologic electric quantity reached to 58.10% in 72 h among the all-transferred charge. Molar heat capacity and thermodynamic properties of 1,2-cyclohexane dicarboxylic anhydride [C8H10O3] Authors: X. Lv, X. Gao, Z. Tan, Y. Li, and L. Sun The molar heat capacity C p,m of 1,2-cyclohexane dicarboxylic anhydride was measured in the temperature range from T=80 to 390 K with a small sample automated adiabatic calorimeter. The melting point T m, the molar enthalpy Δfus H m and the entropy Δfus S m of fusion for the compound were determined to be 303.80 K, 14.71 kJ mol−1 and 48.43 J K−1 mol−1, respectively. The thermodynamic functions [H T-H 273.15] and [S T-S 273.15] were derived in the temperature range from T=80 to 385 K with temperature interval of 5 K. The thermal stability of the compound was investigated by differential scanning calorimeter (DSC) and thermogravimetry (TG), when the process of the mass-loss was due to the evaporation, instead of its thermal decomposition. Synthesis, characterization and thermal analysis of polyaniline (PANI)/Co3O4 composites Authors: S. Wang, L. Sun, Z. 
Tan, F. Xu, and Y. Li Conducting polyaniline/cobaltosic oxide (PANI/Co3O4) composites were synthesized for the first time by an in situ deposition technique, in the presence of hydrochloric acid (HCl) as a dopant, by adding fine-grade Co3O4 powder (average particle size approximately 80 nm) to the polymerization reaction mixture of aniline. The composites obtained were characterized by infrared spectroscopy (IR) and X-ray diffraction (XRD). The composition and thermal stability of the composites were investigated by TG-DTG. The results suggest that the thermal stability of the composites is higher than that of pure PANI. The improvement in the thermal stability of the composites is attributed to the interaction between PANI and nano-Co3O4. Hypoxia-inducible factor 1 mediates the anti-apoptosis of berberine in neurons during hypoxia/ischemia Acta Physiologica Hungarica https://doi.org/10.1556/aphysiol.99.2012.3.8 Authors: Q. Zhang, Z. Qian, L. Pan, H. Li, and H. Zhu Berberine, a primary pharmacologically active constituent of Coptidis Rhizoma, can inhibit neuronal apoptosis in cerebral ischemia. Here, we aimed to investigate whether and how HIF-1 is implicated in the anti-apoptotic effect of berberine on neurons under hypoxia/ischemia. The viability of PC12 cells treated with berberine prior to or following CoCl2-induced hypoxia was evaluated. Annexin V-PI staining was employed to analyse the cell apoptosis ratio. HIF-1α and apoptosis-associated molecules were detected via Western blotting. TUNEL and immunohistochemistry were used to demonstrate apoptosis, HIF-1α and p53 levels in the cerebral tissue of middle cerebral artery occlusion (MCAO) rats. Berberine pretreatment promoted PC12 cell survival and inhibited apoptosis under hypoxic conditions. In contrast, decreased cell viability and enhanced apoptosis were observed with berberine treatment applied under hypoxia. 
Decreased HIF-1α, caspase 9 and caspase 3, and an increased Bcl-2/Bax ratio, were responsible for the anti-apoptotic effect of berberine pretreatment. However, pro-apoptosis by berberine under hypoxia was indicated by the opposing regulation of those molecules. Significant reductions in apoptosis, HIF-1α and p53 were found in the cerebral tissue of MCAO rats treated with berberine. The present study suggests that berberine regulates neuronal apoptosis in cerebral ischemia, which might be dependent on the degree of cell injury. HIF-1 and the downstream apoptotic pathway are involved in those effects of berberine. Effects of acute lanthanum exposure on calcium absorption in rats Journal of Radioanalytical and Nuclear Chemistry Authors: X. He, Z. Zhang, L. Feng, Z. Li, J. Yang, Y. Zhao, and Z. Chai After an acute exposure to lanthanum chloride, the pharmacokinetics of calcium uptake in rats was studied using the radioactive tracer 47Ca. The accumulated doses of calcium in the left femurs over 24 hours were determined. The results showed that the area under the curve (AUC), the specific activity of the maximal blood 47Ca concentration (Cmax), the distribution rate constant (Ka) and the accumulated dose of calcium in the left femur decreased, while the time to Cmax (Tpeak) increased, with rising dosage of lanthanum exposure. This indicates that lanthanum exposure had a negative effect on calcium absorption. Effects of long-chain alcohols on the micellar properties of anionic surfactants in non-aqueous solutions by titration microcalorimetry Authors: Y. Li, G. Fei, Z. Honglin, L. Zhen, Z. Liqiang, and L. Ganzuo The power–time curves of micelle formation of two anionic surfactants, sodium laurate (SLA) and sodium dodecyl sulfate (SDS), in N,N-dimethyl acetamide (DMA) in the presence of various long-chain alcohols (1-heptanol, 1-octanol, 1-nonanol and 1-decanol) were measured by titration microcalorimetry at 298 K. 
The critical micelle concentrations (CMCs) of SLA and SDS under various conditions at 298 K were obtained from the power–time curves. Thermodynamic parameters (ΔH°mic, ΔS°mic and ΔG°mic) for the micellar systems at 298 K were evaluated according to the power–time curves and the mass action model. The influences of the carbon-atom number and the concentration of the alcohol were investigated. Moreover, combining the thermodynamic parameters at 303, 308 and 313 K from our previous work with those at 298 K in the present work, for SLA and SDS in DMA in the presence of long-chain alcohols, an enthalpy–entropy compensation effect was observed. The values of the enthalpy of micellization calculated by direct and indirect methods were compared. 
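The thermodynamic quantities named in this abstract are linked by standard micellization relations. As a rough illustration only (not the paper's actual procedure or data), the sketch below assumes the commonly used pseudo-phase relation for ionic surfactants, ΔG°mic = (2 − α)RT ln x_cmc, and obtains ΔS°mic from ΔG = ΔH − TΔS; the CMC, solvent molarity, dissociation degree α and ΔH°mic values are invented placeholders.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1


def micellization_thermodynamics(cmc_mol_per_L, solvent_molarity, dH_mic,
                                 T=298.0, alpha=0.3):
    """Estimate Gibbs energy and entropy of micellization.

    Uses the pseudo-phase relation for a 1:1 ionic surfactant,
    dG_mic = (2 - alpha) * R * T * ln(x_cmc), where alpha (degree of
    counterion dissociation) is an assumed illustrative value.
    dS_mic then follows from dG = dH - T * dS.
    """
    x_cmc = cmc_mol_per_L / solvent_molarity  # CMC as a mole fraction
    dG = (2.0 - alpha) * R * T * math.log(x_cmc)
    dS = (dH_mic - dG) / T
    return dG, dS


# Illustrative numbers only (not taken from the paper); 10.8 mol/L is
# roughly the molarity of pure DMA.
dG, dS = micellization_thermodynamics(
    cmc_mol_per_L=0.008, solvent_molarity=10.8, dH_mic=-2000.0)
```

With these placeholder inputs, ΔG°mic comes out strongly negative and ΔS°mic positive, the usual signature of entropy-driven micellization at room temperature.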
Identification of a novel wheat-Thinopyrum ponticum addition line revealed with cytology, SSR, EST-SSR, EST-STS and PLUG markers Cereal Research Communications https://doi.org/10.1556/0806.43.2015.021 Authors: X. J. Li, X. G. Hu, T. Z. Hu, G. Li, Z. G. Ru, L. L. Zhang, and Y. M. Lang Thinopyrum ponticum is a particularly valuable source of genes for wheat improvement. A novel wheat-Th. ponticum addition line, 1–27, was identified using cytology, SSR, EST-SSR, EST-STS and PCR-based landmark unique gene (PLUG) markers in this study. Cytological studies showed that 1–27 contained 44 chromosomes and formed 22 bivalents at meiotic metaphase I. Genomic in situ hybridization (GISH) analysis indicated that two chromosomes from Th. ponticum had been introduced into 1–27 and that these two chromosomes could form a bivalent in the wheat background. These results demonstrated that 1–27 is a disomic addition line with 42 wheat chromosomes and a pair of Th. ponticum chromosomes. One SSR marker (BARC235), one EST-STS marker (MAG3284) and 8 PLUG markers (TNAC1210, TNAC1787, TNAC1803, TNAC1805, TNAC1806, TNAC1821, TNAC1867 and TNAC1957), all from wheat chromosome group 7, produced the specific band in Th. ponticum and 1–27, indicating that the introduced Th. ponticum chromosomes belong to group 7 of wheat. Sequence analysis of the specific bands from Th. ponticum and 1–27 amplified with the PLUG marker TNAC1867 further confirmed this result. The 1–27 addition line was also observed to be highly resistant to powdery mildew, though it is not clear whether the resistance of 1–27 was inherited from Th. ponticum. This study provides useful information for the effective exploitation of this source of genetic variability in wheat breeding.
General Relativity and Quantum Cosmology Title: Making a Quantum Universe: Symmetry and Gravity Authors: Houri Ziaeepour (Submitted on 7 Sep 2020 (v1), last revised 25 Nov 2021 (this version, v3)) Abstract: So far, none of the attempts to quantize gravity has led to a satisfactory model that not only describes gravity in the realm of the quantum world, but also its relation to elementary particles and other fundamental forces. Here, we outline the preliminary results for a model of the quantum Universe, in which gravity is fundamentally and by construction quantic. The model is based on three well-motivated assumptions with compelling observational and theoretical evidence: quantum mechanics is valid at all scales; quantum systems are described by their symmetries; the Universe has infinitely many independent degrees of freedom. The last assumption means that the Hilbert space of the Universe has $SU(N\rightarrow \infty) \cong \text{area preserving Diff.} (S_2)$ symmetry, which is parameterized by two angular variables. We show that, in the absence of a background spacetime, this Universe is trivial and static. Nonetheless, quantum fluctuations break the symmetry and divide the Universe into subsystems. When a subsystem is singled out as reference -- observer -- and another as clock, two more continuous parameters arise, which can be interpreted as distance and time. We identify the classical spacetime with the parameter space of the Hilbert space of the Universe. Therefore, its quantization is meaningless. In this view, the Einstein equation presents the projection of quantum dynamics in the Hilbert space onto its parameter space. Finite-dimensional symmetries of elementary particles emerge as a consequence of symmetry breaking when the Universe is divided into subsystems/particles, without having any implication for the infinite-dimensional symmetry and its associated interaction - perceived as gravity. 
This explains why gravity is a universal force. Comments: 30 pages; No figure; v3: published version Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph) Journal reference: Universe 2020, 6(11), 194 DOI: 10.3390/universe6110194 Cite as: arXiv:2009.03428 [gr-qc] (or arXiv:2009.03428v3 [gr-qc] for this version) From: Houri Ziaeepour [v1] Mon, 7 Sep 2020 21:15:45 GMT (135kb,D) [v2] Mon, 12 Oct 2020 19:38:04 GMT (100kb,D) [v3] Thu, 25 Nov 2021 11:09:23 GMT (45kb)
Association between heavy metals and colon cancer: an ecological study based on geographical information systems in North-Eastern Iran Behzad Kiani ORCID: orcid.org/0000-0002-8816-328X1, Fatemeh Hashemi Amin1, Nasser Bagheri ORCID: orcid.org/0000-0003-1097-27972, Robert Bergquist ORCID: orcid.org/0000-0002-0190-10843, Ali Akbar Mohammadi4, Mahmood Yousefi5, Hossein Faraji6, Gholamreza Roshandel7, Somayeh Beirami8, Hadi Rahimzadeh8 & Benyamin Hoseini ORCID: orcid.org/0000-0002-0355-61819,10 Colorectal cancer has increased in Middle Eastern countries, and exposure to environmental pollutants such as heavy metals has been implicated. However, data linking them to this disease are generally lacking. This study aimed to explore the spatial pattern of the age-standardized incidence rate (ASR) of colon cancer and its potential association with the level of exposure to heavy metals in rice produced in north-eastern Iran. Cancer data were drawn from the Iranian population-based cancer registry of Golestan Province, north-eastern Iran. Samples from 69 rice milling factories were analysed for the concentration levels of cadmium, nickel, cobalt, copper, selenium, lead and zinc. The inverse distance weighting (IDW) algorithm was used to interpolate the concentrations of these heavy metals over the surface of the study area. Exploratory regression analysis was conducted to build ordinary least squares (OLS) models including every possible combination of the candidate explanatory variables and to choose the most useful ones to show the association between heavy metals and the ASR of colon cancer. The highest concentrations of heavy metals were found in the central part of the province, and counties with a higher amount of cobalt in particular were shown to be associated with a higher ASR of colon cancer in men. In contrast, selenium concentrations were higher in areas with a lower ASR of colon cancer in men. 
A significant regression equation for men with colon cancer was found (F(4,137) = 38.304, P < .000) with an adjusted R2 of 0.77. The predicted ASR of colon cancer in men had an intercept of −58.36, with coefficients of 120.33 for cobalt, 80.60 for cadmium, −6.07 for selenium, −3.09 for nickel and −0.41 for zinc. The associations of copper and lead with colon cancer in men were not significant. We did not find a significant outcome for colon cancer in women. Increased amounts of heavy metals in consumed rice may impact colon cancer incidence, both positively and negatively. While there were indications of an association between high cobalt concentrations and an increased risk of colon cancer, we found that high selenium concentrations might instead decrease the risk. Further investigations are needed to clarify whether there are ecological or other reasons for these discrepancies. Regular monitoring of the amount of heavy metals in consumed rice is recommended. With around 18 million new cases and 9 million deaths annually [1], colon cancer ranks as the third most frequent malignancy in the world and the fourth most common in Iran [2, 3]. These figures may be skewed in the direction of a higher percentage in low-income countries, and the scientific literature reveals that many causes can be involved in its development [4,5,6,7,8]. Some factors (obesity, physical inactivity, diet, smoking, and access to medical services) are modifiable [5, 8], while others (age, gender, family history and genetics) are not [4, 6, 7]. Exposure to environmental pollutants, such as heavy metals, constitutes a potential risk that falls somewhere between these two categories [9,10,11,12]. Heavy metals, defined as such due to their relatively high atomic weights (and thus also densities), constitute one of the main environmental pollutants and can cause critical problems for all organisms [13,14,15,16]. 
They exist in water, air, soil and consequently also in food, making it essential to assess the amount of toxic trace elements in these compartments, with particular emphasis on the food chain. Cadmium, lead, arsenic and mercury are generally toxic to soil microbial populations, plants and humans [14, 17, 18], while copper, nickel and zinc act as micronutrients in low concentrations but can still be toxic when contents are high [19, 20]. They enter the food chain when accumulated in plant tissues, resulting in further accumulation in human organs over time [21] that eventually can cause adverse health effects [22]. Soil, growth media, pesticides, fertilizers and nutrient solutions are the main sources of these trace elements in plants [23]. Rice can be harmful if the soil where it is produced is contaminated by these metals [17, 24], and this amounts to a potentially major worldwide problem as rice is a common staple food. Many recent studies have highlighted the association between heavy metals and some forms of cancer [25,26,27,28,29]. For example, Adimalla et al. [25] investigated the association between heavy metals in soil and health risks for adults and children in India, and found high concentrations of arsenic and chromium, which are potentially associated with increased cancer risk for both adults and children; Fei et al. [26] assessed the association between food contamination with heavy metals and the incidence and spatial distribution of stomach cancer in Hangzhou, China, finding a significant association between multiple heavy metals and stomach cancer risk, despite the fact that each metal contamination on its own did not reach statistical significance; and Sohrabi et al. [28] found evidence for the involvement of heavy metals in the development of colorectal cancer based on a cross-sectional study of tissue levels of trace elements performed in Tehran, the capital of Iran. 
On the other hand, a 20-year-old study in the United States on the potential association between the level of lead in the blood and cancer mortality in general found no significant association between lead levels and increased risks of cancer mortality [27]. Focussing on rice and the potential cancer risks related to heavy metal contamination, Kukusamude et al. [30] recently measured the contents of chromium, zinc, nickel, copper, manganese, cobalt, arsenic and cadmium in 55 Thai local rice products. Assessing the potential impact of this on Thai populations with lifetime consumption of locally produced rice, they arrived at risks of developing cancer varying from 5 to 30 per 10,000 population; Al-Saleh et al. [31] determined the levels of lead, cadmium, methyl mercury and arsenic in 37 brands of imported rice commonly consumed in Saudi Arabia and found that long-term consumption of rice contaminated with heavy metals, particularly arsenic, poses potential cancer risks; and Rahimzadeh et al. (2017) measured the levels of heavy metals in rice harvested in Golestan Province, Iran, revealing that these elements in rice may act as possible risk factors for oesophagus cancer. People living close to each other should have more similar exposure to heavy metals in food. Spatial analysis can help scientists explore how heavy metals and colon cancer incidence rates are associated. Such studies are conducted with Geographical Information Systems (GIS) [32], which consider the geographical distribution patterns of climate, diseases, contaminants, etc., and based on such data establish statistically significant links of potential hazard. Ordinary least squares (OLS) is a practical approach for estimating the relationship between a response variable and explanatory variables [33]. 
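To make the OLS idea concrete, here is a minimal sketch using only NumPy. The data are synthetic placeholders standing in for county-level metal concentrations (X) and ASR values (y), not measurements from this study; the fit solves for the coefficients by least squares and reports an adjusted R².

```python
import numpy as np


def ols_fit(X, y):
    """Ordinary least squares: find beta minimizing ||y - Xb @ beta||^2.

    X: (n, p) matrix of explanatory variables (e.g. metal concentrations);
    y: (n,) response (e.g. ASR of colon cancer).
    Returns the coefficient vector (intercept first) and adjusted R^2.
    """
    n, p = X.shape
    Xb = np.column_stack([np.ones(n), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # least-squares solve
    resid = y - Xb @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return beta, adj_r2


# Hypothetical data: y generated from a known linear rule plus small noise,
# so the fit should recover intercept 3.0 and slopes 2.0 and -1.0.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.01, size=40)
beta, adj_r2 = ols_fit(X, y)
```

Because the synthetic noise is small, the recovered coefficients sit close to the generating values and the adjusted R² is near 1; with real county-level data the diagnostics discussed below (VIF, residual normality, spatial autocorrelation) would also need checking.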
Using OLS in a study in north-eastern Iran to explore the relationship between colorectal cancer incidence and various explanatory variables, including Body Mass Index (BMI), daily fibre intake and red meat consumption, Goshayeshi et al. [34] found that these explanatory variables, except red meat consumption, were associated with colorectal cancer incidence in some way. Colorectal cancer has increased in Iran and elsewhere [1,2,3]. Although many studies implicate trace elements such as heavy metals [9,10,11,12], there is still a lack of data to unequivocally link consumption of heavy metals to colorectal cancer. In search of aetiology, there is a need to investigate how strong this association is. Since the majority of the population in Iran is stationary and also regularly consumes rice, which generally is their main meal, we felt that it would be useful to measure the association between colon cancer and locally grown rice in areas known to be contaminated by heavy metals. This study was conducted in 11 counties of the province of Golestan in Iran in 2018, which covers an area of 20,367 km2 with a population of 1,868,819 people and also includes 69 of the 90 milling factories in the province (Fig. 1). Distribution of milling factories in the study area in 2018 We used two different data sources: 1) the Iranian Golestan population-based cancer registry, to extract individuals with colon cancer; and 2) polarographic assessment of the amount of heavy metals in rice samples, including cadmium (Cd), cobalt (Co), copper (Cu), lead (Pb), nickel (Ni), selenium (Se) and zinc (Zn). We calculated the age-standardized rate (ASR) of colon cancer cases and used it as the dependent variable for statistical modelling. The number of rice milling factories required for the study was estimated based on a study conducted by Zazouli et al. [35] on cultured rice in the province of Mazandaran, whose climate is similar to that of Golestan. 
As lead comprised a large proportion of the heavy metals in our testing, the number of factories (sample size) was calculated based on this metal. Using the formula below, the confidence level was set at α = 0.05 and the error rate was assumed not to exceed 0.05%. $$ N = z_{(1-\alpha/2)}^{2}\,\sigma^{2} / d^{2} $$ where N is the number of factories, z the value of the normal variable at confidence level α, σ the standard deviation (SD) of the amount of lead, and d the error rate. The required number of milling factories was determined to be 62. Considering the adequate allocation of the area under cultivation, for some counties the sample size was estimated at 1 or 2, which was very small. As a result, at least 4 cases were considered for each county and, ultimately, a total of 69 rice milling factories were investigated in this study. The number of samples for each county in the study area is shown in Table 1. Table 1 The number of samples in the study area Directly after delivery of harvested rice to the milling factories from the fields, rice samples were drawn from the factories in each county. The samples, in plastic bags, were transported to the environmental chemistry laboratory of the school of public health in Gorgan and kept in a refrigerator until analysis. All seven heavy metals were measured in each sample. Visualising the spatial pattern of heavy metals We used spatial interpolation to visualise the spatial pattern of heavy metals across the study area through the generation of a so-called heat map in GIS software, where the quantitative value of each metal is represented by the pixels of a raster layer. This approach predicts the values of unobserved areas based on known point values, thereby creating a "statistical surface". There are different spatial interpolation methods, and the most important issue is to generate a map that is as accurate as possible, i.e. 
based on the method with the least amount of error for the specific purpose at hand [36]. In our case, we performed the interpolation by inverse distance weighting (IDW) [37], Spline [38] and Kriging [34]. After that, we evaluated the three methods by measuring the difference between the predicted and the observed amounts of heavy metals at points where the observed values were available, i.e. at the locations of the rice milling factories. The average square error (R2) was measured, and the method with the lowest amount of error was chosen for the visualisation. Exploratory regression mining We used this approach to develop several regression models. Here, the explanatory variables comprised the spatial presence of the heavy metals, and the dependent one was the ASR of men and women with colon cancer. We used ArcGIS 10.5 (ESRI, Redlands, CA, USA) to conduct the exploratory regression analyses. This tool runs many ordinary least squares (OLS) models using all possible combinations of a list of independent variables and evaluates which model has the best fit. According to ESRI [39], the six items addressed in assessing an appropriate regression model are: The expected coefficient signs: they should be consistent with the scientific literature, e.g., in recent literature lead has had a positive correlation with cancer occurrence and we expected to find a consistent result in our study. Lack of redundancy: the variance inflation factor (VIF) should be smaller than 7.5 for any variable, otherwise there may be collinearity among the explanatory variables. Significance of coefficients: the Probability and Robust Probability in the results should be checked to assess whether coefficients are statistically significant. Normal distribution of residuals: the sum of the residuals should be zero with an SD of 1, and a statistically significant Jarque-Bera test [40] must be avoided. Strong adjusted R2: this value must be > 0.5. 
Lack of spatial autocorrelation among OLS residuals: the Global Moran's I [41] is used to confirm this. Overall, 1184 colon cancer cases (656 males vs. 528 females) were identified in Golestan Province, Iran in 2018. The ASR ranged between 3.1 and 18.6 for males among the counties, with Ramian having the lowest rate and Gorgan the highest. For females, it ranged between 1.4 and 14, with Minodasht having the lowest rate and Gorgan again the highest. The average concentrations and ranges for each of the seven heavy metals investigated are shown in Table 2. Gonbad had unusually high concentrations of all the heavy metals except zinc and selenium. Table 2 The average amount of heavy metals in the counties of Golestan province, Iran, in 2018 The geographical distribution of the ASR of colon cancer is shown in Fig. 2. Gorgan County demonstrates the highest ASR, while Azadshahr and Ramian were at the lower end of the range. The geographical distribution of age-standardized colon cancer rate in the study area Figure 3 shows the residuals and the differences between observed and predicted values for the different spatial interpolation methods. We found that the IDW method resulted in the lowest amount of error. Comparison of different methods to interpolate the amount of heavy metals Figure 4 shows the spatial distribution of the explanatory variables used to build the regression model predicting the ASR of colon cancer, with results expressed as a set of heat maps, one for each metal. The figure reveals a high level of heavy metal concentration in the central part of the study area. However, nickel and selenium had higher concentrations in the north-east of the study area. Concentrations of the different heavy metals in the study expressed as heat maps The exploratory regression model was run for men and women separately, considering seven independent variables. 
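The exploratory-regression procedure described in the Methods above (fit an OLS model for every combination of candidate explanatory variables and rank the candidate models) can be sketched as follows. This is a simplified stand-in for ArcGIS's Exploratory Regression tool: it scores models by adjusted R² only, omitting the VIF, Jarque-Bera and Moran's I checks, and the metal names and data are synthetic illustrations, not the study's measurements.

```python
import itertools

import numpy as np


def adjusted_r2(X, y):
    """Fit OLS with an intercept and return the adjusted R^2."""
    n, p = X.shape
    Xb = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    r2 = 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)


def explore_models(data, names, y):
    """Fit every non-empty combination of candidate predictors and
    return the models ranked by adjusted R^2, highest first."""
    results = []
    for k in range(1, len(names) + 1):
        for combo in itertools.combinations(range(len(names)), k):
            score = adjusted_r2(data[:, list(combo)], y)
            results.append((score, [names[i] for i in combo]))
    results.sort(key=lambda t: t[0], reverse=True)
    return results


# Synthetic county-level data: the ASR is driven by the first two
# "metals" only, so the best-ranked model should include both of them.
rng = np.random.default_rng(1)
names = ["Co", "Se", "Ni", "Zn"]
data = rng.uniform(0.0, 1.0, size=(60, 4))
asr = 5.0 + 10.0 * data[:, 0] - 6.0 * data[:, 1] + rng.normal(0.0, 0.1, size=60)
best_score, best_vars = explore_models(data, names, asr)[0]
```

With four candidates this screens 15 models; with the study's seven metals it would be 127, which is why an automated exploratory tool is convenient for this kind of analysis.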
Among these variables, selenium and cobalt were significant in 20.3% and 12.5% of the created models for men with colon cancer, respectively. However, lead was not significant in any model, and only selenium and cobalt had a clear direction in all models. In fact, selenium had an inverse association with the ASR of colon cancer in men, while cobalt had a positive association with the dependent variable in all models. We did not find a significant outcome for colon cancer in women. Tables 3 and 4 give the sign and significance level of the independent variables in all regression models for men and women with colon cancer, respectively. Table 3 Summary of variable significance for men with colon cancer Table 4 Summary of variable significance for women with colon cancer Two models for men (Model 1 and Model 2) were identified using the exploratory regression that met all six requirements mentioned in the methods section. As Table 5 shows, their adjusted R2 values were above 0.7, which illustrates the high performance of the models, and they included five significant variables (p-value < 0.05). Model 1 showed a better fit, with a higher adjusted R2 of 0.77 and a lower Akaike's Information Criterion (AICc) of 89.14, for men with colon cancer (Table 5). Table 5 Highest adjusted R2 results for male colon cancer None of the models met all six requirements mentioned in the methods section for women with colon cancer. However, Table 6 shows the two best models (Model 1 and Model 2), with the highest adjusted R2 and lowest AICc. Table 6 Highest adjusted R2 results for female colon cancer To our knowledge, this is the first study in Golestan Province, Iran to identify the association of the spatial patterns of the ASR of colon cancer with exposure to heavy metals in rice. 
Our findings show that the exposure level of cobalt was positively associated with the ASR of colon cancer in men at the county level, while a higher exposure level of selenium was associated with a lower ASR of colon cancer in this group. However, the study did not find any significant association between the exposure levels of the heavy metals and the ASR of colon cancer in women residing in a county. These findings are in line with previous studies [42,43,44,45,46]. While high concentrations of the heavy metals were observed in the central part of the study area, nickel and selenium had higher concentrations in the north-east. There is no specific industry in these areas, so the high levels of these two metals in these regions may depend on the soil's natural mineral content. Although selenium is a micronutrient required for the functioning of a number of enzymes in both humans and animals, it may cause adverse health effects in high concentrations [15]. A study by Rahimzadeh et al. [47] in Golestan Province reported that the selenium concentration in high-risk areas for oesophageal cancer was significantly higher than in low-risk areas. The results of our study are in line with two previous meta-analyses [42, 43] that revealed an inverse relationship for colon cancer incidence in these areas, at least for men. Another study [46] found that lower concentrations of selenium, with thresholds of 55 μg/l and 65 μg/l in Poland and Estonia, respectively, were associated with a higher risk of colorectal cancer. According to Fernandez-Banares et al. [48], a high level of selenium (≥82.11 μg/L) decreases the risk of colorectal adenomas in those aged < 60 years. Although they found no significant association for this in groups aged ≥60 years, another study [44] reported a low risk of colorectal adenomas for those aged ≥67 years in areas with a high level of selenium. Our study is also in line with the study by Peters et al. 
[44], which found no significant association between selenium concentrations and colon cancer in women. It is recommended that the impact of age and gender be considered further when assessing the association between selenium concentration and risk of colon cancer, which thus remains for future studies. As a part of vitamin B-12, cobalt is beneficial. However, excessive concentrations of cobalt may damage human health [49]. Previous studies have investigated the association between exposure levels of cobalt and cancer incidence and reported conflicting findings [50,51,52]. While some of these studies [50, 51] reported that exposure to cobalt might increase the risk of cancer of the upper gastrointestinal tract and lungs, Sauni et al. [52] suggest that occupational exposure to cobalt may not be associated with an increased overall cancer risk. Our study revealed that the exposure level of cobalt is significantly associated with colon cancer incidence in men. Contaminated soil and water may be the sources of environmental exposure to heavy metals [53], and contamination of agricultural soil and water with these elements has been reported previously in some areas of Iran [54]. Further studies are needed to find the source of the high level of cobalt concentration in Golestan, especially in the central region of this province. Furthermore, future studies are needed to assess the association between cobalt exposure levels and colon cancer incidence at the individual level. The environmental presence of heavy metals found in this study and also by many other authors [55,56,57,58] may explain the spatial variation in the colon cancer pattern. Lead, for example, is increasingly recognized as a major health concern, and exposure to this element may enhance cancer risk in general [55, 56]. A study by Halimi et al. 
[59] performed in Hamedan Province, Iran, strongly brought forth the hypothesis that exposure to heavy metals, especially lead, might result in a high incidence of colorectal cancer. Although our study reinforced their hypothesis regarding selenium and cobalt, the results could not confirm that the exposure level of lead enhances the risk of colon cancer. Genetically inherited autosomal disorders, such as the Lynch syndrome, increase the risk of colon cancer [60], and an estimated 10–14% of cases in Iran are at high risk of this type of cancer [6, 61,62,63]. However, we found no study addressing this subject in Golestan Province. As the high ASR of colon cancer in some regions of the study area, such as Gonbad and Gorgan, may be due to a high prevalence of Lynch syndrome, screening for this disorder among patients with colon cancer is strongly suggested. Limitation(s) The population census in Iran is carried out every 5 years, so we used the census data of 2015 for this research because annual census data were not available. The GIS-based approach used in this study introduced an opportunity to assess the geographical distribution of different cancers in relation to environmental risk factors. The results of this first study in Iran to ecologically investigate the potential association between colon cancer and locally grown rice in areas contaminated by heavy metals indicate that increased exposure to heavy metals in consumed rice could well impact colon cancer incidence. The inverse association between high selenium concentrations and colon cancer incidence in men reinforces previous findings indicating that selenium may decrease the risk, while there might be an association between high cobalt concentrations and an increased risk of colon cancer. Although further studies beyond the ecological approach applied here are needed to confirm these findings, regular monitoring of the amount of heavy metals in consumed rice is recommended. 
Data will be provided upon request with the permission of the corresponding authors.
Abbreviations:
OLS model: Ordinary Least Squares model
AdjR2: Adjusted R-Squared
AIC: Akaike's Information Criterion
JB: Jarque-Bera p-value
K (BP): Koenker's studentized Breusch-Pagan p-value
VIF Maximum: Variance Inflation Factor
SA Global Moran's I p-value: a measure of residual spatial autocorrelation
Model Variable direction: (+/−)
Model Variable significance: (* = 0.10, ** = 0.05, *** = 0.01)
Ferlay J, Colombet M, Soerjomataram I, Mathers C, Parkin D, Piñeros M, et al. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int J Cancer. 2019;144(8):1941–53. https://doi.org/10.1002/ijc.31937. West NP. Complete mesocolic excision for colon cancer: is now the time for a change in practice? Lancet Oncol. 2019;20(11):1474–6. https://doi.org/10.1016/S1470-2045(19)30573-X. Setareh S, Zahiri Esfahani M, Zare Bandamiri M, Raeesi A, Abbasi R. Using data mining for survival prediction in patients with colon cancer. Iran J Epidemiol. 2018;14(1):19–29. Chow WH, Devesa SS, Blot WJ. Colon cancer incidence: recent trends in the United States. Cancer Causes Control. 1991;2(6):419–25. https://doi.org/10.1007/BF00054303. Frezza EE, Wachtel MS, Chiriva-Internati M. Influence of obesity on the risk of developing colon cancer. Gut. 2006;55(2):285–91. https://doi.org/10.1136/gut.2005.073163. Goshayeshi L, Ghaffarzadegan K, Khooei A, et al. Prevalence and clinicopathological characteristics of mismatch repair-deficient colorectal carcinoma in early onset cases as compared with late-onset cases: a retrospective cross-sectional study in Northeastern Iran. BMJ Open. 2018. https://doi.org/10.1136/bmjopen-2018-023102. Khorram MR, Goshayeshi L, Maghool F, Bergquist R, Ghaffarzadegan K, Eslami S, et al. Prevalence of mismatch repair-deficient colorectal adenoma/polyp in early-onset, advanced cases: a cross-sectional study based on Iranian hereditary colorectal cancer registry. 
J Gastrointest Cancer. 2021;52(1):263–8. https://doi.org/10.1007/s12029-020-00395-y. Tarasiuk A, Mosinska P, Fichna J. The mechanisms linking obesity to colon cancer: an overview. Obes Res Clin Pract. 2018;12(3):251–9. https://doi.org/10.1016/j.orcp.2018.01.005. Antwi SO, Eckert EC, Sabaque CV, Leof ER, Hawthorne KM, Bamlet WR, et al. Exposure to environmental chemicals and heavy metals, and risk of pancreatic cancer. Cancer Causes Control. 2015;26(11):1583–91. https://doi.org/10.1007/s10552-015-0652-y. Lim JT, Tan YQ, Valeri L, Lee J, Geok PP, Chia SE, et al. Association between serum heavy metals and prostate cancer risk - a multiple metal analysis. Environ Int. 2019;132:105109. https://doi.org/10.1016/j.envint.2019.105109. Tang WR, Chen ZJ, Lin K, Su M, Au WW. Development of esophageal cancer in Chaoshan region, China: association with environmental, genetic and cultural factors. Int J Hyg Environ Health. 2015;218(1):12–8. https://doi.org/10.1016/j.ijheh.2014.10.004. Nawrot TS, Martens DS, Hara A, Plusquin M, Vangronsveld J, Roels HA, et al. Association of total cancer and lung cancer with environmental exposure to cadmium: the meta-analytical evidence. Cancer Causes Control. 2015;26(9):1281–8. https://doi.org/10.1007/s10552-015-0621-5. Nasreddine L, Rehaime M, Kassaify Z, Rechmany R, Jaber F. Dietary exposure to pesticide residues from foods of plant origin and drinks in Lebanon. Environ Monit Assess. 2016;188(8):485. Mohammadi MJ, Yari AR, Saghazadeh M, Sobhanardakani S, Geravandi S, Afkar A, et al. A health risk assessment of heavy metals in people consuming Sohan in Qom, Iran. Toxin Reviews. 2018;37(4):278–86. https://doi.org/10.1080/15569543.2017.1362655. Sobhanardakani S. Tuna fish and common kilka: health risk assessment of metal pollution through consumption of canned fish in Iran. J Consumer Protection Food Safety. 2017;12(2):157–63. https://doi.org/10.1007/s00003-017-1107-z. Sobhanardakani S, Tayebi L, Hosseini SV. 
Health risk assessment of arsenic and heavy metals (Cd, Cu, Co, Pb, and Sn) through consumption of caviar of Acipenser persicus from southern Caspian Sea. Environ Sci Pollut Res Int. 2018;25(3):2664–71. https://doi.org/10.1007/s11356-017-0705-8. Liu WX, Shen LF, Liu JW, Wang YW, Li SR. Uptake of toxic heavy metals by rice (Oryza sativa L.) cultivated in the agricultural soil near Zhengzhou city, People's Republic of China. Bull Environ Contam Toxicol. 2007;79(2):209–13. https://doi.org/10.1007/s00128-007-9164-0. Lenart A, Wolny-Koładka K. The effect of heavy metal concentration and soil pH on the abundance of selected microbial groups within ArcelorMittal Poland steelworks in Cracow. Bull Environ Contam Toxicol. 2013;90(1):85–90. https://doi.org/10.1007/s00128-012-0869-3. Akbari B, Gharanfoli F, Khayyat MH, Khashyarmanesh Z, Rezaee R, Karimi G. Determination of heavy metals in different honey brands from Iranian markets. Food Addit Contam Part B Surveill. 2012;5(2):105–11. https://doi.org/10.1080/19393210.2012.664173. Sobhan AS. Health risk assessment of As and Zn in canola and soybean oils consumed in Kermanshah, Iran. J Adv Environ Health Res. 2016;4(2):62–7. Seenivasan S, Manikandan N, Muraleedharan NN, Selvasundaram R. Heavy metal content of black teas from South India. Food Control. 2008;19(8):746–9. https://doi.org/10.1016/j.foodcont.2007.07.012. Lai Y-L, Thirumavalavan M, Lee J-F. Effective adsorption of heavy metal ions (Cu2+, Pb2+, Zn2+) from aqueous solution by immobilization of adsorbents on Ca-alginate beads. Toxicol Environ Chem. 2010;92(4):697–705. https://doi.org/10.1080/02772240903057382. Shen FM, Chen HW. Element composition of tea leaves and tea infusions and its impact on health. Bull Environ Contam Toxicol. 2008;80(3):300–4. https://doi.org/10.1007/s00128-008-9367-z. Zarcinas BA, Pongsakul P, McLaughlin MJ, Cozens G. Heavy metals in soils and crops in Southeast Asia. 2. Thailand. Environ Geochem Health. 2004;26(4):359–71. 
https://doi.org/10.1007/s10653-005-4670-7. Adimalla N. Heavy metals pollution assessment and its associated human health risk evaluation of urban soils from Indian cities: a review. Environ Geochem Health. 2020;42:173–90. https://doi.org/10.1007/s10653-019-00324-4. Fei X, Lou Z, Christakos G, Ren Z, Liu Q, Lv X. The association between heavy metal soil pollution and stomach cancer: a case study in Hangzhou City, China. Environ Geochem Health. 2018;40(6):2481–90. https://doi.org/10.1007/s10653-018-0113-0. Jemal A, Graubard BI, Devesa SS, Flegal KM. The association of blood lead level and cancer mortality among whites in the United States. Environ Health Perspect. 2002;110(4):325–9. https://doi.org/10.1289/ehp.02110325. Sohrabi M, Gholami A, Azar MH, Yaghoobi M, Shahi MM, Shirmardi S, et al. Trace element and heavy metal levels in colorectal cancer: comparison between cancerous and non-cancerous tissues. Biol Trace Elem Res. 2018;183(1):1–8. https://doi.org/10.1007/s12011-017-1099-7. Qiao L, Feng Y. Intakes of heme iron and zinc and colorectal cancer incidence: a meta-analysis of prospective studies. Cancer Causes Control. 2013;24(6):1175–83. https://doi.org/10.1007/s10552-013-0197-x. Kukusamude C, Sricharoen P, Limchoowong N, Kongsri S. Heavy metals and probabilistic risk assessment via rice consumption in Thailand. Food Chem. 2020;334:127402. Al-Saleh I, Abduljabbar M. Heavy metals (lead, cadmium, methylmercury, arsenic) in commonly imported rice grains (Oryza sativa) sold in Saudi Arabia and their potential health risk. Int J Hyg Environ Health. 2017;220(7):1168–78. https://doi.org/10.1016/j.ijheh.2017.07.007. Kiani B, Bagheri N, Tara A, Hoseini B, Tabesh H, Tara M. Revealed access to haemodialysis facilities in northeastern Iran: factors that matter in rural and urban areas. Geospat Health. 2017;12(2):584. https://doi.org/10.4081/gh.2017.584. Lemenkova P. Testing linear regressions by StatsModel library of Python for Oceanological data interpretation. 
Aquatic Sci Eng. 2019;34(2):51–60. https://doi.org/10.26650/ASE2019547010. Goshayeshi L, Pourahmadi A, Ghayour-Mobarhan M, Hashtarkhani S, Karimian S, Shahhosein Dastjerdi R, et al. Colorectal cancer risk factors in north-eastern Iran: A retrospective cross-sectional study based on geographical information systems, spatial autocorrelation and regression analysis. Geospatial Health. 2019;14(2):219–28. https://doi.org/10.4081/gh.2019.793. Zazouli MA, Bandpei AM, Ebrahimi M, Izanloo H. Investigation of cadmium and lead contents in Iranian rice cultivated in Babol region. Asian J Chem. 2010;22(2):1369. Ikechukwu MN, Ebinne E, Idorenyin U, Raphael NI. Accuracy assessment and comparative analysis of IDW, spline and kriging in spatial interpolation of landform (topography): an experimental study. J Geogr Inf Syst. 2017;9(03):354–71. https://doi.org/10.4236/jgis.2017.93022. Ouabo RE, Sangodoyin AY, Ogundiran MB. Assessment of ordinary Kriging and inverse distance weighting methods for modeling chromium and cadmium soil pollution in E-waste sites in Douala, Cameroon. J Health Pollut. 2020;10(26):200605. https://doi.org/10.5696/2156-9614-10.26.200605. Xie Y, Chen T-B, Lei M, Yang J, Guo Q-J, Song B, et al. Spatial distribution of soil heavy metal pollution estimated by different interpolation methods: accuracy and uncertainty analysis. Chemosphere. 2011;82(3):468–76. https://doi.org/10.1016/j.chemosphere.2010.09.053. Rosenshein L, Scott L, Pratt M. Finding a Meaningful Model. 2011. Available from: https://www.esri.com/news/arcuser/0111/findmodel.html. Accessed 14 Jan 2021. Thadewald T, Büning H. Jarque–Bera test and its competitors for testing normality–a power comparison. J Appl Stat. 2007;34(1):87–105. https://doi.org/10.1080/02664760600994539. Ramezankhani R, Sajjadi N, Nezakati Esmaeilzadeh R, Jozi SA, Shirzadi MR. Spatial analysis of cutaneous leishmaniasis in an endemic area of Iran based on environmental factors. Geospat Health. 2017;12(2):578. 
https://doi.org/10.4081/gh.2017.578. Jacobs ET, Jiang R, Alberts DS, Greenberg ER, Gunter EW, Karagas MR, et al. Selenium and colorectal adenoma: results of a pooled analysis. J Natl Cancer Inst. 2004;96(22):1669–75. https://doi.org/10.1093/jnci/djh310. Ou Y, Jiang B, Wang X, Ma W, Guo J. Selenium and colorectal adenomas risk: a meta-analysis. Nutr Cancer. 2012;64(8):1153–9. https://doi.org/10.1080/01635581.2012.722248. Peters U, Chatterjee N, Church TR, Mayo C, Sturup S, Foster CB, et al. High serum selenium and reduced risk of advanced colorectal adenoma in a colorectal cancer early detection program. Cancer Epidemiol Biomarkers Prev. 2006;15(2):315–20. https://doi.org/10.1158/1055-9965.EPI-05-0471. Chen K, Liao QL, Ma ZW, Jin Y, Hua M, Bi J, et al. Association of soil arsenic and nickel exposure with cancer mortality rates, a town-scale ecological study in Suzhou, China. Environ Sci Pollut Res Int. 2015;22(7):5395-404. https://doi.org/10.1007/s11356-014-3790-y. Epub 2014/11/21. Lener MR, Gupta S, Scott RJ, Tootsi M, Kulp M, Tammesoo ML, et al. Can selenium levels act as a marker of colorectal cancer risk? BMC Cancer. 2013;13(1):214. https://doi.org/10.1186/1471-2407-13-214. Rahimzadeh H, Sadeghi M, Beirami S, Bay A, Mansurian M, Roshandel G. Association of heavy metals and selenium content in Rice with incidence of esophageal Cancer in Golestan Province, Iran. J Clin Basic Res. 2017;1(1):27–32. Fernandez-Banares F, Cabre E, Esteve M, Mingorance MD, Abad-Lacruz A, Lachica M, et al. Serum selenium and risk of large size colorectal adenomas in a geographical area with a low selenium status. Am J Gastroenterol. 2002;97(8):2103–8. https://doi.org/10.1111/j.1572-0241.2002.05930.x. Abdolmohammad-Zadeh H, Ebrahimzadeh E. Determination of cobalt in water samples by atomic absorption spectrometry after pre-concentration with a simple ionic liquid-based dispersive liquid-liquid micro-extraction methodology. Cent Eur J Chem. 2010;8(3):617–25. 
Turkdogan MK, Kilicel F, Kara K, Tuncer I, Uygan I. Heavy metals in soil, vegetables and fruits in the endemic upper gastrointestinal cancer region of Turkey. Environ Toxicol Pharmacol. 2003;13(3):175–9. https://doi.org/10.1016/S1382-6689(02)00156-4. Lauwerys R, Lison D. Health risks associated with cobalt exposure--an overview. Sci Total Environ. 1994;150(1–3):1–6. https://doi.org/10.1016/0048-9697(94)90125-2. Sauni R, Oksa P, Uitti J, Linna A, Kerttula R, Pukkala E. Cancer incidence among Finnish male cobalt production workers in 1969-2013: a cohort study. BMC Cancer. 2017;17(1):340. https://doi.org/10.1186/s12885-017-3333-2. Chen K, Liao QL, Ma ZW, Jin Y, Hua M, Bi J, et al. Association of soil arsenic and nickel exposure with cancer mortality rates, a town-scale ecological study in Suzhou, China. Environ Sci Pollut Res Int. 2015;22(7):5395–404. https://doi.org/10.1007/s11356-014-3790-y. Peiravi R, Alidadi H, Dehghan AA, Vahedian M. Heavy metals concentrations in Mashhad drinking water network. Zahedan J Res Med Sci. 2013;15(9):74–6. Kim MG, Ryoo JH, Chang SJ, Kim CB, Park JK, Koh SB, et al. Blood Lead levels and cause-specific mortality of inorganic Lead-exposed Workers in South Korea. PLoS One. 2015;10(10):e0140360. https://doi.org/10.1371/journal.pone.0140360. Rehman K, Fatima F, Waheed I, Akash MSH. Prevalence of exposure of heavy metals and their impact on health consequences. J Cell Biochem. 2018;119(1):157–84. https://doi.org/10.1002/jcb.26234. Whanger PD. Selenium in the treatment of heavy metal poisoning and chemical carcinogenesis. J Trace Elem Electrolytes Health Dis. 1992;6(4):209–21. Gatto NM, Kelsh MA, Mai DH, Suh M, Proctor DM. Occupational exposure to hexavalent chromium and cancers of the gastrointestinal tract: a meta-analysis. Cancer Epidemiol. 2010;34(4):388–99. https://doi.org/10.1016/j.canep.2010.03.013. Halimi L, Bagheri N, Hoseini B, Hashtarkhani S, Goshayeshi L, Kiani B. 
Spatial analysis of colorectal cancer incidence in Hamadan Province, Iran: a retrospective cross-sectional study. Appl Spatial Anal Policy. 2020;13:293–303. https://doi.org/10.1007/s12061-019-09303-9. Sinicrope FA. Lynch syndrome-associated colorectal cancer. N Engl J Med. 2018;379(8):764–73. https://doi.org/10.1056/NEJMcp1714533. Fakheri H, Bari Z, Merat S. Familial aspects of colorectal cancers in southern littoral of Caspian Sea. Arch Iran Med. 2011;14(3):175–8 DOI: 011143/AIM.006. Molaei M, Mansoori BK, Ghiasi S, Khatami F, Attarian H, Zali M. Colorectal cancer in Iran: immunohistochemical profiles of four mismatch repair proteins. Int J Color Dis. 2010;25(1):63–9. https://doi.org/10.1007/s00384-009-0784-1. Zeinalian M, Hashemzadeh-Chaleshtori M, Akbarpour MJ, Emami MH. Epidemioclinical feature of early-onset colorectal cancer at-risk for Lynch syndrome in Central Iran. Asian Pac J Cancer Prev. 2015;16(11):4647–52. https://doi.org/10.7314/APJCP.2015.16.11.4647. We would like to thank Golestan University of Medical Sciences for its support. This study was supported by Golestan University of Medical Sciences (grant number 90–10–1-30209). Hadi Rahimzadeh and Benyamin Hoseini contributed equally to this work. 
Department of Medical Informatics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran Behzad Kiani & Fatemeh Hashemi Amin Visualization and Decision Analytics (VIDEA) lab, Centre for Mental Health Research, Research School of Population Health, College of Health and Medicine, The Australian National University, Canberra, Australia Nasser Bagheri Ingerod, Brastad, Sweden (formerly with the UNICEF/UNDP/World Bank/WHO Special Programme for Research and Training in Tropical Diseases, World Health Organization), Geneva, Switzerland Robert Bergquist Department of Environmental Health Engineering, Neyshabur University of Medical Sciences, Neyshabur, Iran Ali Akbar Mohammadi Department of Environmental Health Engineering, School of Public Health, Iran University of Medical Sciences, Tehran, Iran Mahmood Yousefi Department of Environmental Health Engineering, Health Center, Babol University of Medical Sciences, Babol, Iran Hossein Faraji Golestan Research Center of Gastroenterology and Hepatology, Golestan University of Medical Sciences, Gorgan, Iran Gholamreza Roshandel Department of Environmental Health Engineering, Faculty of Health and Environmental Health Research Center, Golestan University of Medical Sciences, Gorgan, Iran Somayeh Beirami & Hadi Rahimzadeh Pharmaceutical Research Center, Mashhad University of Medical Sciences, Mashhad, Iran Benyamin Hoseini Department of Health Information Technology, Neyshabur University of Medical Sciences, Neyshabur, Iran Behzad Kiani Fatemeh Hashemi Amin Somayeh Beirami Hadi Rahimzadeh BH, BK, and HR contributed to the study design. All authors (BK, FHA, NB, RB, AAM, MY, HF, GR, SB, HR, and BH) contributed to data gathering and interpretation of the results. BK, BH, and FHA performed analyses and wrote the first draft of the manuscript. NB and RB edited the final version of the manuscript. All authors (BK, FHA, NB, RB, AAM, MY, HF, GR, SB, HR, and BH) read, commented, and approved the final manuscript. 
B.K is an assistant professor at Mashhad University of Medical Sciences. He has been working on geo-informatics and different aspects of geo-spatial data science. N.B is a senior research fellow and spatial epidemiologist at the Australian National University (ANU) with a particular interest in geo-spatial analysis and modelling. R.B is the editor-in-chief of the journal Geospatial Health and has extensive experience in spatial epidemiology. B.H is a faculty member at Neyshabur University of Medical Sciences and a junior researcher at Mashhad University of Medical Sciences. He works on health informatics projects and has extensive experience with health outcome registries and health GIS. H.R is a faculty member at Golestan University of Medical Sciences and works on environmental health projects. Correspondence to Hadi Rahimzadeh or Benyamin Hoseini. The study used only aggregated data drawn from the population-based cancer registry of Golestan Province, Iran, and did not involve human subjects. Kiani, B., Hashemi Amin, F., Bagheri, N. et al. Association between heavy metals and colon cancer: an ecological study based on geographical information systems in North-Eastern Iran. BMC Cancer 21, 414 (2021). https://doi.org/10.1186/s12885-021-08148-1 Ordinary least squares regression model
Seppo Linnainmaa
Seppo Ilmari Linnainmaa (born 28 September 1945) is a Finnish mathematician and computer scientist known for creating the modern version of backpropagation.
Biography
He was born in Pori.[1] In 1974 he obtained the first doctorate ever awarded in computer science at the University of Helsinki.[2] In 1976, he became Assistant Professor. From 1984 to 1985 he was Visiting Professor at the University of Maryland, USA. From 1986 to 1989 he was Chairman of the Finnish Artificial Intelligence Society. From 1989 to 2007, he was Research Professor at the VTT Technical Research Centre of Finland. He retired in 2007.
Backpropagation
Explicit, efficient error backpropagation in arbitrary, discrete, possibly sparsely connected, neural-network-like networks was first described in Linnainmaa's 1970 master's thesis,[3][4] albeit without reference to neural networks,[5] when he introduced the reverse mode of automatic differentiation (AD) in order to efficiently compute the derivative of a differentiable composite function that can be represented as a graph, by recursively applying the chain rule to the building blocks of the function.[2][3][4][6] Linnainmaa was the first to publish the method; Gerardi Ostrowski had used it in the context of certain process models in chemical engineering some five years earlier but did not publish it. With faster computers emerging, the method has become heavily used in numerous applications. For example, backpropagation of errors in multi-layer perceptrons, a technique used in machine learning, is a special case of reverse-mode AD.
Notes
1. Ellonen, Leena, ed. (2008). Suomen professorit 1640–2007 (in Finnish). Helsinki: Professoriliitto. p. 405. ISBN 978-952-99281-1-8.
2. Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation?" (PDF). Documenta Mathematica, Extra Volume ISMP. pp. 389–400. S2CID 15568746.
3. Linnainmaa, Seppo (1970). 
Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors] (PDF) (Thesis) (in Finnish). pp. 6–7.
4. Linnainmaa, Seppo (1976). "Taylor expansion of the accumulated rounding error". BIT Numerical Mathematics. 16 (2): 146–160. doi:10.1007/BF01931367. S2CID 122357351.
5. Jürgen Schmidhuber (2015). Who Invented Backpropagation?
6. Griewank, Andreas and Walther, Andrea. Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM, 2008.
External links
• Seppo Linnainmaa on LinkedIn
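As an illustration of the reverse-mode idea described above (a minimal modern sketch, not Linnainmaa's original formulation), each node of the computation graph can record its parents together with the local partial derivatives, and a backward pass applies the chain rule from the output toward the inputs. The class and variable names below are hypothetical.

```python
class Var:
    """Minimal reverse-mode AD node: a value plus a record of how
    gradients propagate back to its parents via the chain rule."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # sequence of (parent_node, local_gradient)
        self.grad = 0.0

    def __mul__(self, other):
        # d(uv)/du = v,  d(uv)/dv = u
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        # d(u+v)/du = d(u+v)/dv = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self):
        # Seed the output with gradient 1 and push gradients backward.
        # This naive traversal is adequate for the tree-shaped expression
        # below; a full implementation would topologically sort the graph.
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, local in node.parents:
                parent.grad += local * node.grad
                stack.append(parent)

x, y = Var(3.0), Var(4.0)
f = x * y + x      # f = xy + x, so df/dx = y + 1 and df/dy = x
f.backward()       # x.grad -> 5.0, y.grad -> 3.0
```

The cost of the backward pass is proportional to the cost of evaluating the function itself, which is exactly the efficiency property that makes reverse mode attractive for functions with many inputs and one output.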
Shapes and Volume
Students explore the volume of three-dimensional shapes, connecting it to the operations of multiplication and addition, as well as classify two-dimensional shapes hierarchically. In Unit 3, students will explore volume of three-dimensional shapes (5.MD.3—5), connecting it to the operations of multiplication and addition (5.NBT.5, 4.NBT.4). They also use their understanding that they gradually built in prior grade levels to classify shapes in a hierarchy, seeing that attributes of shapes in one category belong to shapes in all subcategories of that category (5.G.3—4). In prior grade levels, students explored the idea of volume informally, comparing the capacity of various containers as being able to "hold more" or "hold less" (K.MD.2). Students have also explored one-dimensional and two-dimensional measurements of figures, developing a deep understanding of length in Grade 2 and of area in Grade 3. In their exploration of area in Grade 3, students come to understand area as an attribute of plane figures (3.MD.5) and measure it by counting unit squares (3.MD.6), and they connect area to the operations of multiplication and addition (3.MD.7). Students have also explored two-dimensional shapes and their attributes extensively in previous grades. "From Kindergarten on, students experience all of the properties of shapes that they will study in Grades K–7, recognizing and working with these properties in increasingly sophisticated ways" (Geometry Progression, p. 3). In Kindergarten through Grade 2, students focused on building understanding of shapes and their properties. In Grade 3, students started to conceptualize shape categories, in particular quadrilaterals. In Grade 4, work with angle measure (4.MD.5—7) lent itself to classifying figures based on the presence or absence of parallel and perpendicular sides. 
Thus, this unit builds on students' well-established understanding of geometry and geometric measurement. Similar to students' work with area, students develop an understanding of volume as an attribute of solid figures (5.MD.3) and measure it by counting unit cubes (5.MD.4). Students then connect volume to the operation of multiplication of length, width, and height or of the area of the base and the height and to the operation of addition to find the volume of composite figures (5.MD.5). Throughout Topic A, students have an opportunity to use appropriate tools strategically (MP.5) and make use of the structure of three-dimensional figures (MP.7) to draw conclusions about how to find the volume of a figure. Students then move on to classifying shapes into categories and see that attributes belonging to shapes in one category are shared by all subcategories of that category (5.G.3). This allows students to create a hierarchy of shapes over the course of many days (5.G.4). Throughout this topic, students use appropriate tools strategically (MP.5) to verify various attributes of shapes including their angle measure and presence of parallel or perpendicular lines, as well as attend to precision in their use of language when referring to geometric figures (MP.6). They also look for and make use of structure to construct a hierarchy based on properties (MP.7). In Grade 6, students will explore concepts of length, area, and volume with more complex figures, such as finding the area of right triangles or finding the volume of right rectangular prisms with non-whole-number measurements (6.G.1, 6.G.2). Students will even rely on their understanding of shapes and their attributes to prove various geometric theorems in high school (GEO.G-CO.9—11). Thus, this unit provides a nice foundation for connections in many grades to come. 
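As a small worked illustration of how the two volume strategies agree (using a hypothetical 4-by-3-by-2 unit prism):

```latex
\begin{aligned}
b &= l \times w = 4 \times 3 = 12 \ \text{square units (one layer of unit cubes)} \\
v &= b \times h = 12 \times 2 = 24 \ \text{cubic units (two layers)} \\
v &= l \times w \times h = 4 \times 3 \times 2 = 24 \ \text{cubic units (directly)}
\end{aligned}
```

The two results agree because of the associative property: grouping $$l \times w$$ first and then multiplying by $$h$$ gives the same product as multiplying the three edge lengths in any order.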
Pacing: 16 instructional days (14 lessons, 1 flex day, 1 assessment day) For guidance on adjusting the pacing for the 2020-2021 school year due to school closures, see our 5th Grade Scope and Sequence Recommended Adjustments.
Intellectual Prep for All Units
Read and annotate the "Unit Summary" and "Essential Understandings" portion of the unit plan. Do all the Target Tasks and annotate them with the "Unit Summary" and "Essential Understandings" in mind. Take the unit assessment.
Essential Understandings
Volume refers to the amount of space a three-dimensional figure takes up. Two-dimensional figures have no volume. You can find the volume of a rectangular prism by counting individual cubic units; counting the number of cubic units in a "layer" and multiplying by the number of layers; or multiplying the length, width, and height of the figure. The latter two strategies correspond to the formulas $${v = b \times h}$$ and $${v = l \times w \times h}$$. You can calculate the volume of a rectangular prism by multiplying edge lengths in any order because of the associative property. Two-dimensional figures are classified by their properties into categories but can fit into more than one category at the same time.
Vocabulary: unit cube, rectangular prism, regular polygon, cubic units (see the 5th Grade Vocabulary Glossary)
Unit Materials, Representations and Tools: centimeter cubes; markers or crayons; templates (Nets A-C, Net D, Net E, Polygons, Parallelograms, Triangles)
Topic A: Volume of Three-Dimensional Figures
5.MD.C.3 Understand volume as an attribute of solid figures that is measured in cubic units. 
Find the volume of concrete three-dimensional figures.
Find the volume of pictorial three-dimensional figures.
Find the volume of a rectangular prism by thinking about its layers, applying the formula $$v=b\times h$$.
Solve more complex problems involving volume by applying the formula $$v = b \times h$$.
Find the volume of a rectangular prism by multiplying all of its dimensions, applying the formula $$v= l \times w \times h$$.
Solve more complex problems involving volume by applying the formula $$v=l\times w \times h$$.
5.MD.C.5.C Understand that volume is additive.
Find the volume of composite solid figures when all dimensions are given and their decomposition is already shown.
Understand that volume is additive. Find the volume of composite solid figures when not all dimensions are given and/or they must be decomposed.
Topic B: Classification of Two-Dimensional Shapes
Classify shapes as polygons versus non-polygons and classify polygons according to their number of sides.
Classify quadrilaterals based on the presence or absence of one pair of parallel sides.
Define trapezoids as quadrilaterals with at least one pair of parallel sides. Classify trapezoids based on the presence of one or two sets of parallel sides.
Define parallelograms as trapezoids with two sets of parallel sides. Classify parallelograms based on the presence or absence of right angles or based on the presence or absence of sides of equal length.
Define rectangles as parallelograms with four right angles and rhombuses as parallelograms with four equal sides. Classify rectangles based on the presence or absence of sides of equal length, and classify rhombuses based on the presence or absence of right angles.
Define squares as quadrilaterals with sides of equal length and all right angles.
Classify triangles based on side and angle measures. 
Key: Major Cluster Supporting Cluster Additional Cluster Core Standards 5.G.B.3 — Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and squares are rectangles, so all squares have four right angles. 5.G.B.4 — Classify two-dimensional figures in a hierarchy based on properties. 5.MD.C.3 — Recognize volume as an attribute of solid figures and understand concepts of volume measurement. 5.MD.C.3.A — A cube with side length 1 unit, called a "unit cube," is said to have "one cubic unit" of volume, and can be used to measure volume. 5.MD.C.3.B — A solid figure which can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units. 5.MD.C.4 — Measure volumes by counting unit cubes, using cubic cm, cubic in, cubic ft, and improvised units. 5.MD.C.5 — Relate volume to the operations of multiplication and addition and solve real world and mathematical problems involving volume. 5.MD.C.5.A — Find the volume of a right rectangular prism with whole-number side lengths by packing it with unit cubes, and show that the volume is the same as would be found by multiplying the edge lengths, equivalently by multiplying the height by the area of the base. Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication. 5.MD.C.5.B — Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems. 5.MD.C.5.C — Recognize volume as additive. Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real world problems. 
2.G.A.1 — Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces. Identify triangles, quadrilaterals, pentagons, hexagons, and cubes. Sizes are compared directly or visually, not compared by measuring. 3.G.A.1 — Understand that shapes in different categories (e.g., rhombuses, rectangles, and others) may share attributes (e.g., having four sides), and that the shared attributes can define a larger category (e.g., quadrilaterals). Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. 4.G.A.2 — Classify two-dimensional figures based on the presence or absence of parallel or perpendicular lines, or the presence or absence of angles of a specified size. Recognize right triangles as a category, and identify right triangles. 3.MD.C.5 — Recognize area as an attribute of plane figures and understand concepts of area measurement. 3.MD.C.6 — Measure areas by counting unit squares (square cm, square m, square in, square ft, and improvised units). 3.MD.C.7 — Relate area to the operations of multiplication and addition. 4.MD.A.3 — Apply the area and perimeter formulas for rectangles in real world and mathematical problems. For example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor. Number and Operations in Base Ten 5.NBT.B.5 — Fluently multiply multi-digit whole numbers using the standard algorithm. Operations and Algebraic Thinking 3.OA.B.5 — Apply properties of operations as strategies to multiply and divide. Students need not use formal terms for these properties. Example: Knowing that 8 × 5 = 40 and 8 × 2 = 16, one can find 8 × 7 as 8 × (5 + 2) = (8 × 5) + (8 × 2) = 40 + 16 = 56. (Distributive property.) 
Example: If 6 × 4 = 24 is known, then 4 × 6 = 24 is also known. (Commutative property of multiplication.) 3 × 5 × 2 can be found by 3 × 5 = 15, then 15 × 2 = 30, or by 5 × 2 = 10, then 3 × 10 = 30. (Associative property of multiplication.)

Future Standards

6.G.A.1 — Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in the context of solving real-world and mathematical problems.
6.G.A.2 — Find the volume of a right rectangular prism with fractional edge lengths by packing it with unit cubes of the appropriate unit fraction edge lengths, and show that the volume is the same as would be found by multiplying the edge lengths of the prism. Apply the formulas V = l w h and V = b h to find volumes of right rectangular prisms with fractional edge lengths in the context of solving real-world and mathematical problems.

Standards for Mathematical Practice

CCSS.MATH.PRACTICE.MP1 — Make sense of problems and persevere in solving them.
CCSS.MATH.PRACTICE.MP2 — Reason abstractly and quantitatively.
CCSS.MATH.PRACTICE.MP3 — Construct viable arguments and critique the reasoning of others.
CCSS.MATH.PRACTICE.MP4 — Model with mathematics.
CCSS.MATH.PRACTICE.MP5 — Use appropriate tools strategically.
CCSS.MATH.PRACTICE.MP6 — Attend to precision.
CCSS.MATH.PRACTICE.MP7 — Look for and make use of structure.
CCSS.MATH.PRACTICE.MP8 — Look for and express regularity in repeated reasoning.

Multiplication and Division of Whole Numbers
Addition and Subtraction of Fractions/Decimals
How to get the sensor covariance matrix from CSI values? I am trying to use the musicdoa() function in MATLAB to estimate the direction of arrival of the paths of signals received by the receiver in a wireless link. The function requires a sensor covariance matrix as one of its arguments. I have a matrix of Channel State Information (CSI) values obtained from a WiFi card. The CSI values come in a matrix as shown below: $C=\begin{bmatrix}c_{1,1} & c_{1,2} & ... & c_{1,30}\\ c_{2,1} & c_{2,2} & ... & c_{2,30}\\ c_{3,1} & c_{3,2} & ... & c_{3,30}\end{bmatrix}$ where $c_{i,j}$ is the CSI value for the $i^{th}$ antenna on the $j^{th}$ subcarrier. How can I get the sensor covariance matrix from the C matrix? digital-communications music Masoud

The Channel State Information (CSI) matrix is calculated over a packet of known signals that is sent as part of the communications protocol. The CSI matrix contains two things (per carrier): the bearing of the "target" and the "strength" of the transmission along the line of that bearing. So basically, this can tell you where the transmitter is with respect to the receiver array. The CSI matrix is calculated via cross-correlation between the signals received by each antenna, and correlation is normalised covariance. So, strictly speaking, it is impossible to derive the covariance from the CSI because you would need access to the received signals themselves. But the angle (bearing) component of the CSI could be considered as being proportional to the real covariance for the pilot tones that are used to generate it. You could even obtain some theoretical covariance from the pilot tones by delaying them progressively, but that would not include the covariance introduced by noise. For more information please see here and here. A_A

If you relax the requirement of using a MUSIC beamformer, using this as a reference, you could use the Bartlett (Fourier) beamformer for each frequency. Robert L.
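The answer above notes that a true sensor covariance cannot be recovered from the CSI alone. A common practical workaround (an assumption on my part, not something the answer prescribes) is to treat the 30 subcarrier columns of C as snapshots and form the sample covariance $R = \frac{1}{N} C C^H$; a sketch in Python with NumPy:

```python
import numpy as np

def sample_covariance(C):
    """Estimate a sensor covariance from a CSI matrix C (antennas x subcarriers)
    by treating each subcarrier column as one snapshot: R = (1/N) C C^H.
    This ignores the frequency dependence across subcarriers, so it is
    only a rough substitute for a true time-snapshot covariance."""
    C = np.asarray(C)
    n_snapshots = C.shape[1]
    return (C @ C.conj().T) / n_snapshots

# Placeholder CSI data: 3 antennas x 30 subcarriers of random complex values
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 30)) + 1j * rng.standard_normal((3, 30))
R = sample_covariance(C)
print(R.shape)                     # (3, 3)
print(np.allclose(R, R.conj().T))  # True: the estimate is Hermitian
```

The resulting R is Hermitian and positive semidefinite, which is the form a covariance-matrix argument should take.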
P-matrix

In mathematics, a P-matrix is a complex square matrix whose principal minors are all positive. A closely related class is that of $P_{0}$-matrices, the closure of the class of P-matrices, in which every principal minor is $\geq 0$.

Spectra of P-matrices

By a theorem of Kellogg,[1][2] the eigenvalues of P- and $P_{0}$-matrices are bounded away from a wedge about the negative real axis as follows: If $\{u_{1},...,u_{n}\}$ are the eigenvalues of an n-dimensional P-matrix, where $n>1$, then $|\arg(u_{i})|<\pi -{\frac {\pi }{n}},\ i=1,...,n$ If $\{u_{1},...,u_{n}\}$, $u_{i}\neq 0$, $i=1,...,n$ are the eigenvalues of an n-dimensional $P_{0}$-matrix, then $|\arg(u_{i})|\leq \pi -{\frac {\pi }{n}},\ i=1,...,n$

Remarks

The class of nonsingular M-matrices is a subset of the class of P-matrices. More precisely, all matrices that are both P-matrices and Z-matrices are nonsingular M-matrices. The class of sufficient matrices is another generalization of P-matrices.[3] The linear complementarity problem $\mathrm {LCP} (M,q)$ has a unique solution for every vector q if and only if M is a P-matrix.[4] This implies that if M is a P-matrix, then M is a Q-matrix. If the Jacobian of a function is a P-matrix, then the function is injective on any rectangular region of $\mathbb {R} ^{n}$.[5] A related class of interest, particularly with reference to stability, is that of $P^{(-)}$-matrices, sometimes also referred to as $N-P$-matrices. A matrix A is a $P^{(-)}$-matrix if and only if $(-A)$ is a P-matrix (similarly for $P_{0}$-matrices). Since $\sigma (A)=-\sigma (-A)$, the eigenvalues of these matrices are bounded away from the positive real axis.

See also
• Hurwitz matrix
• Linear complementarity problem
• M-matrix
• Q-matrix
• Z-matrix
• Perron–Frobenius theorem

Notes
1. Kellogg, R. B. (April 1972). "On complex eigenvalues of M and P matrices". Numerische Mathematik. 19 (2): 170–175. doi:10.1007/BF01402527.
2. Fang, Li (July 1989). "On the spectra of P- and P0-matrices".
Linear Algebra and its Applications. 119: 1–25. doi:10.1016/0024-3795(89)90065-7. 3. Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (pdf). Optimization Methods and Software. 21 (2): 247–266. doi:10.1080/10556780500095009. MR 2195759. 4. Murty, Katta G. (January 1972). "On the number of solutions to the complementarity problem and spanning properties of complementary cones" (PDF). Linear Algebra and its Applications. 5 (1): 65–108. doi:10.1016/0024-3795(72)90019-5. hdl:2027.42/34188. 5. Gale, David; Nikaido, Hukukane (10 December 2013). "The Jacobian matrix and global univalence of mappings". Mathematische Annalen. 159 (2): 81–93. doi:10.1007/BF01360282. References • Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (pdf). Optimization Methods and Software. 21 (2): 247–266. doi:10.1080/10556780500095009. MR 2195759. • David Gale and Hukukane Nikaido, The Jacobian matrix and global univalence of mappings, Math. Ann. 159:81-93 (1965) doi:10.1007/BF01360282 • Li Fang, On the Spectra of P- and $P_{0}$-Matrices, Linear Algebra and its Applications 119:1-25 (1989) • R. B. Kellogg, On complex eigenvalues of M and P matrices, Numer. Math. 19:170-175 (1972)
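The definition can be checked directly for small matrices. The following sketch (illustrative, not from the article; the function name is my own) enumerates all principal minors with NumPy, assuming a real matrix for simplicity:

```python
from itertools import combinations

import numpy as np

def is_P_matrix(A):
    """Return True if every principal minor of the square matrix A is positive.

    Brute force over all non-empty index subsets, so only practical for
    small n (there are 2^n - 1 principal minors). Real matrices assumed
    for simplicity."""
    A = np.asarray(A)
    n = A.shape[0]
    assert A.shape == (n, n), "A must be square"
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            # Principal submatrix on rows and columns idx
            if np.linalg.det(A[np.ix_(idx, idx)]) <= 0:
                return False
    return True

print(is_P_matrix(np.eye(3)))          # True: every principal minor is 1
print(is_P_matrix([[2, 1], [1, 2]]))   # True: minors 2, 2, and det = 3
print(is_P_matrix([[1, 2], [3, 1]]))   # False: det = 1 - 6 = -5
```

The brute-force cost grows as $2^n$, consistent with the article's remark that these are definitional conditions rather than practical tests for large matrices.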
Brigitte Servatius

Brigitte Irma Servatius (born 1954)[1] is a mathematician specializing in matroids and structural rigidity. She is a professor of mathematics at Worcester Polytechnic Institute,[2] and has been the editor-in-chief of the Pi Mu Epsilon Journal since 1999.[3]

Education and career

Servatius is originally from Graz in Austria.[4] As a student at an all-girls gymnasium in Graz that specialized in language studies rather than mathematics, her interest in mathematics was sparked by her participation in a national mathematical olympiad,[2] and she went on to earn master's degrees in mathematics and physics at the University of Graz.[3] She became a high school mathematics and science teacher in Leibnitz.[4] She moved to the US in 1981 to begin doctoral studies at Syracuse University.[2] She completed her Ph.D. in 1987,[5] and joined the Worcester Polytechnic Institute faculty in the same year.[2][4] Her dissertation, Planar Rigidity, was supervised by Jack Graver.[5]

Contributions

While still in Austria, Servatius began working on combinatorial group theory, and her first publication (appearing while she was a graduate student) is in that subject.[2][Z] She switched to the theory of structural rigidity for her doctoral research, and later became the author (with Jack Graver and Herman Servatius) of the book Combinatorial Rigidity (1993).[6][G] Another well-cited paper of hers in this area characterizes the planar Laman graphs, the minimally rigid graphs that can be embedded without crossings in the plane, as the graphs of pseudotriangulations, partitions of a plane region into subregions with three convex corners studied in computational geometry.[H] Servatius is also the co-editor of a book on matroid theory.[B] With Tomaž Pisanski she wrote the book Configurations from a Graphical Viewpoint (2013), on configurations of points and lines in the plane with the same number of points touching each two lines and the same number of lines touching each two
points.[7][P] Other topics in her research include graph duality[S] and the triconnected components of infinite graphs.[D] Selected publications Z. Servatius, Brigitte (1983), "A short proof of a theorem of Burns", Mathematische Zeitschrift, 184 (1): 133–137, doi:10.1007/BF01162012, MR 0711734, S2CID 120011455 G. Graver, Jack; Servatius, Brigitte; Servatius, Herman (1993), Combinatorial rigidity, Graduate Studies in Mathematics, vol. 2, Providence, RI: American Mathematical Society, doi:10.1090/gsm/002, ISBN 0-8218-3801-6, MR 1251062 D. Droms, Carl; Servatius, Brigitte; Servatius, Herman (1995), "The structure of locally finite two-connected graphs", Electronic Journal of Combinatorics, 2: R17, doi:10.37236/1211, MR 1346878 S. Servatius, Brigitte; Servatius, Herman (1996), "Self-dual graphs", Discrete Mathematics, 149 (1–3): 223–232, doi:10.1016/0012-365X(94)00351-I, MR 1375109 B. Bonin, Joseph E.; Oxley, James G.; Servatius, Brigitte, eds. (1996), Matroid Theory: Proceedings of the AMS-IMS-SIAM Joint Summer Research Conference held at the University of Washington, Seattle, Washington, July 2–6, 1995, Contemporary Mathematics, vol. 197, Providence, RI: American Mathematical Society, doi:10.1090/conm/197, ISBN 0-8218-0508-8, MR 1411689 H. Haas, Ruth; Orden, David; Rote, Günter; Santos, Francisco; Servatius, Brigitte; Servatius, Herman; Souvaine, Diane; Streinu, Ileana; Whiteley, Walter (2005), "Planar minimally rigid graphs and pseudo-triangulations", Computational Geometry: Theory and Applications, 31 (1–2): 31–61, arXiv:math/0307347, doi:10.1016/j.comgeo.2004.07.003, MR 2131802, S2CID 38637747 P. Pisanski, Tomaž; Servatius, Brigitte (2013), Configurations from a graphical viewpoint, Birkhäuser Advanced Texts: Basler Lehrbücher. [Birkhäuser Advanced Texts: Basel Textbooks], New York: Birkhäuser/Springer, doi:10.1007/978-0-8176-8364-1, ISBN 978-0-8176-8363-4, MR 2978043 References 1. Birth date from Worldcat 2. Adenberger, Caroline. 
"Introducing Brigitte Servatius: There's No Such Thing as Can't". Office of Science and Technology Austria. Archived from the original on 2018-02-06. 3. 2005 Council Candidates, Pi Mu Epsilon, retrieved 2018-02-05 4. Illetschko, Peter (August 27, 2008), "Geistesblitz: Zurück zur Natur", Der Standard (in German) 5. Brigitte Servatius at the Mathematics Genealogy Project 6. Reviews of Combinatorial Rigidity: • Recski, A., zbMATH, Zbl 0788.05001 • Connelly, Robert (1995), Mathematical Reviews, MR 1251062 • Connelly, Robert (1996), Bulletin of the American Mathematical Society, 33 (3): 399–401, doi:10.1090/S0273-0979-96-00670-2 7. Reviews of Configurations from a Graphical Viewpoint: • Tucker, Thomas W., Mathematical Reviews, MR 2978043 • Nishimura, Hirokazu, zbMATH, Zbl 1277.05001 External links • Home page • Brigitte Servatius publications indexed by Google Scholar
Normal closure (group theory)

In group theory, the normal closure of a subset $S$ of a group $G$ is the smallest normal subgroup of $G$ containing $S.$ This article is about the normal closure of a subset of a group. For the normal closure of a field extension, see Normal closure (field theory).

Properties and description

Formally, if $G$ is a group and $S$ is a subset of $G,$ the normal closure $\operatorname {ncl} _{G}(S)$ of $S$ is the intersection of all normal subgroups of $G$ containing $S$:[1] $\operatorname {ncl} _{G}(S)=\bigcap _{S\subseteq N\triangleleft G}N.$ The normal closure $\operatorname {ncl} _{G}(S)$ is the
smallest normal subgroup of $G$ containing $S,$[1] in the sense that $\operatorname {ncl} _{G}(S)$ is a subset of every normal subgroup of $G$ that contains $S.$ The subgroup $\operatorname {ncl} _{G}(S)$ is generated by the set $S^{G}=\{s^{g}:g\in G\}=\{g^{-1}sg:g\in G\}$ of all conjugates of elements of $S$ in $G.$ Therefore one can also write $\operatorname {ncl} _{G}(S)=\{g_{1}^{-1}s_{1}^{\epsilon _{1}}g_{1}\dots g_{n}^{-1}s_{n}^{\epsilon _{n}}g_{n}:n\geq 0,\epsilon _{i}=\pm 1,s_{i}\in S,g_{i}\in G\}.$ Any normal subgroup is equal to its normal closure. The conjugate closure of the empty set $\varnothing $ is the trivial subgroup.[2] A variety of other notations are used for the normal closure in the literature, including $\langle S^{G}\rangle ,$ $\langle S\rangle ^{G},$ $\langle \langle S\rangle \rangle _{G},$ and $\langle \langle S\rangle \rangle ^{G}.$ Dual to the concept of normal closure is that of normal interior or normal core, defined as the join of all normal subgroups contained in $S.$[3] Group presentations For a group $G$ given by a presentation $G=\langle S\mid R\rangle $ with generators $S$ and defining relators $R,$ the presentation notation means that $G$ is the quotient group $G=F(S)/\operatorname {ncl} _{F(S)}(R),$ where $F(S)$ is a free group on $S.$[4] References 1. Derek F. Holt; Bettina Eick; Eamonn A. O'Brien (2005). Handbook of Computational Group Theory. CRC Press. p. 14. ISBN 1-58488-372-3. 2. Rotman, Joseph J. (1995). An introduction to the theory of groups. Graduate Texts in Mathematics. Vol. 148 (Fourth ed.). New York: Springer-Verlag. p. 32. doi:10.1007/978-1-4612-4176-8. ISBN 0-387-94285-8. MR 1307623. 3. Robinson, Derek J. S. (1996). A Course in the Theory of Groups. Graduate Texts in Mathematics. Vol. 80 (2nd ed.). Springer-Verlag. p. 16. ISBN 0-387-94461-3. Zbl 0836.20001. 4. Lyndon, Roger C.; Schupp, Paul E. (2001). Combinatorial group theory. Classics in Mathematics. Springer-Verlag, Berlin. p. 87. ISBN 3-540-41158-5. 
MR 1812024.
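The description of the normal closure as products of conjugates translates directly into a computation for small finite groups. The sketch below is my own illustration, using plain Python with permutations of {0, 1, 2} as tuples; it closes the set of all conjugates under products, which suffices in a finite group because inverses are then powers:

```python
from itertools import permutations

def compose(p, q):
    """Function composition (p o q)(i) = p[q[i]]; permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def normal_closure(G, S):
    """Smallest normal subgroup of G containing S: start from all
    conjugates g^-1 s g and repeatedly close under products."""
    closure = {compose(inverse(g), compose(s, g)) for g in G for s in S}
    frontier = set(closure)
    while frontier:
        new = set()
        for a in closure:
            for b in frontier:
                for c in (compose(a, b), compose(b, a)):
                    if c not in closure:
                        new.add(c)
        closure |= new
        frontier = new
    return closure

S3 = list(permutations(range(3)))     # the symmetric group S_3
three_cycle = (1, 2, 0)               # the 3-cycle (0 1 2)
transposition = (1, 0, 2)             # the transposition (0 1)
print(len(normal_closure(S3, [three_cycle])))    # 3: the closure is A_3
print(len(normal_closure(S3, [transposition])))  # 6: the closure is all of S_3
```

The two examples match the general fact from the article: the conjugates of a 3-cycle in $S_3$ are again 3-cycles, so its normal closure is $A_3$, while conjugating a transposition yields all transpositions, which generate the whole group.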
$10001000_2 = 8 + 128 = 136$
$0001101100_2 = 4 + 8 + 32 + 64 = 108$
$136 - 108 = 28$
$28 = 11100_2$
Out of 300 students taking discrete mathematics, 60 take coffee, 27 take cocoa, 36 take tea, 17 take tea only, 47 take chocolate only, 7 take chocolate and cocoa, 3 take chocolate, tea and cocoa, 20 take cocoa only, 2 take tea, coffee…
Let W = {1,2,…,8}, Q = {2,4,6,8,10}, Y = {1,2,4,5,6,8,9}. Evaluate: W union Y (2 marks) Q intersection Y (2 marks) Set Difference P minus Y (2 marks) (W intersection Q) union Y
Let p, q, and r be the propositions: p = "the flag is set" q = "I = 0" r = "subroutine S is completed" Translate each of the following propositions into symbols, using the letters p, q, r and logical conn…
Develop a digital circuit diagram that produces the output for the following logical expression when the input bits are A, B and C: i. (A ∧ B) ∨ ((B ∧ C) ∧ (B ∨ C)) [4 marks] ii. (A ∧ B ∧ C) ∨ A ∧ (¬B ∨ ¬C) [4 marks]
Let U = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} be a universal set. Let A, B, C be such that A = {1, 3, 4, 8}, B = {2, 3, 4, 5, 9, 10}, and C = {3, 5, 7, 9, 10}. Use bit representations (computer representation) for A, B, and C together with …
A number of students prepared for an examination in physics, chemistry and mathematics. Out of this number, 15 took physics, 20 took chemistry and 23 took mathematics, 9 students took both chemistry and mathematics, 6 students took both…
Determine the truth value of this statement if the domain consists of all integers: ∀n (n+1 > n)
Q1. Explain complement arithmetic and its significance in computation. Q2. Compute the value of AFD4₁₆ − BECE₁₆ + 6775₈ using signed magnitude representation Q3. Evaluate the expression in Q2 above using 1's complement arithmetic…
(a) Suppose f and g are functions whose domains are subsets of Z⁺, the set of positive integers.
Give the definition of 'f is O(g)'. (b) Use the definition of 'f is O(g)' to show that: (I) 2n+27 is O…
If a simple graph G has p vertices and any two distinct vertices u and v of G have the property that deg_G(u) + deg_G(v) ≥ p−1, then prove that G is connected.
Solve the recurrence relation: a_{n+2} − 10a_{n+1} + 25a_n = 5^n (n ≥ 0)
Show whether x^5 + 10x^3 + x + 1 is O(x^4) or not.
Write the smallest cutset of K_{3,5}.
Let f: A → B and g: B → C be functions. Show that if g ∘ f is onto, then g is onto.
Let f: R → R be f(x) = x/(1+|x|). a) Is f everywhere defined? If not, give the domain. b) Is f onto? If not, give the range. c) Is f one-to-one? Explain. d) Is f invertible? If so, what is f^{-1}?
Consider the following relations on the set A = {1,2,3,4,5}. a) R1 = {(1,1), (1,2), (3,3), (3,2)} b) R2 = {(1,1), (2,2), (3,2), (4,5), (5,2)} c) R3 = {(1,4), (2,5), (3,3), (4,2), (5,2)} State, with reasons, for a), b), c) whether the rela…
Question #221673 Say for each of the posets represented by the given Hasse diagram whether the poset is i) a lattice ii) a complemented lattice iii) a Boolean algebra. Give reasons for your answers.
Describe the Königsberg Bridge problem.
"Branches and chords may change according to the spanning tree." -- Do you agree? Justify your answer.
Given the following recurrence relation (M): a_n = −4a_{n−1} + 5a_{n−2}, a_0 = 2, a_1 = 8. The solution of (M) is: a. a_n = 3 − (−5)^n b. a_n = 3 + 5^n c. a_n = 3^n − 5 d. None of these
ACTIVITIES/ASSESSMENT: A. Tell whether the following statements are propositions or not. 1. Study hard! 2. The Apple Macintosh is a 16-bit computer. 3. 1 is an even number. 4. Why are we here? 5. 8+7=13 B. p is "x …
Question #222539 1. Let p and q be the propositions "Swimming at the New Jersey shore is allowed" and "Sharks have been spotted near the shore," respectively. Express each of these compound propositions as an English sentence.
a) ¬…
Let Z be the set of integers and R be the relation on Z defined as: aRb if and only if 1+ab > 0. Then…
What is the degree of a vertex in a graph?
Express the negations of each of these statements so that all negation symbols immediately precede predicates. ∃x∃y(Q(x, y) ↔ Q(y, x))
Draw the Venn diagrams for each of these combinations of the sets A, B, C, and D. A̅ ∪ B̅ ∪ C̅ ∪ D̅
Express each of these statements using quantifiers: a) every student in this class has taken exactly two mathematics classes at this school. b) someone has visited every country in the world except Libya.
Describe the Hasse diagram formed by the relation "x is a divisor of y" for the set A = {1, 3, 6, 12, 24, 48}.
Express each of these statements using quantifiers: No one has climbed every mountain in the Himalayas.
Express each of these statements using quantifiers: every movie actor has either been in a movie with Kevin Bacon or has been in a movie with someone who has been in a movie with Kevin Bacon.
3.3 In a class of 100 learners, 80 do Mathematics and 35 do Physics. If x learners do Mathematics and Physics, and y learners do neither Mathematics nor Physics, find the greatest possible value of y.
In the following statement: "If the BIOS test runs fine, the CPU and motherboard must be OK. If the CPU and motherboard and memory are all OK, then there must be a flaw in the OS. The BIOS test runs fine and the memory is OK. …
Express (2x+1)/(x(x+1)) in partial fractions and hence find the general solution of the differential equation x(x+1) dy/dx = y(2x+1), expressing y explicitly in terms of x.
A Hasse diagram is a graphical rendering of a partially ordered set displayed via the cover relation of the partially ordered set with an implied upward orientation. A point is drawn for each element of the poset, and line segments ar…
For the following relation on the set {1,2,3,4,5,6,7,8,9,10,11,12}.
Find the number of distinct permutations of the word 'programmer'.
Draw the Venn diagrams for each of these combinations of the sets A, B, and C. a) A ∩ (B − C) b) (A ∩ B) ∪ (A ∩ C) c) (A ∩ B) ∪ (A ∩ C)
The number of elements in the power set P(S) of the set S = {{Φ}, 1, {2, 3}} is?
Complete question is: Let f: R → R be defined by f(x) = x/(1+|x|). Then …
Let f: R → R be f(x) = x/(1+|x|).
2) Translate the given statements into propositional logic using the propositions provided. You are eligible to be the President of the USA if and only if you are at least 35 years old, were born in the USA, or at the time of your …
Suppose the city of MISSISSAUGA has a contest where they arrange the letters of their city name in a particular order. The city residents have to guess which order the city has chosen, and the winner gets $10 000. The rules allow peop…
Determine whether the function f(x) = x^2 + 3x + 2 from Z to Z is a bijective function.
Solve the recurrence by the master method. a) T(n) = 3T(n/4) + cn^2 b) T(n) = T(2n/3) + 1
Solve the recurrence by the substitution method. T(n) = 2T(n/2) + n − 1 and T(1) = 1.
Translate the following English sentences into propositional logic. Let: C = A bird can fly, F = A bird has wings. a) If a bird has wings, then the bird can fly. b) If a bird has no wings, then it can't fly.
Translate the following propositional logic to English sentences. Let: E = Lion is eating, H = Lion is hungry. a) E → H b) E ∧ ¬H c) ¬(H → E)
Is (p ∧ q) → [(p → q) → q] a tautology? Why or why not?
Draw the Venn diagrams for each of these combinations of the sets A, B, C, and D. a) (A ∩ B) ∪ (C ∩ D) b) Aᶜ ∪ Bᶜ ∪ Cᶜ ∪ Dᶜ c) A − (B ∩ C ∩ D) d) A ⊕ B
How many strings of six lowercase letters from the English alphabet contain: i. The letter a? ii. The letters a and b? iii. The letters a and b in consecutive positions with a preceding b, with all the letters distinct? iv.
The letters…
Let A = {0, 1, 2, 3} and define relations R, S and T on A as follows: R = {(0,0),(0,1),(0,3),(1,0),(1,1),(2,2),(3,0),(3,3)} S = {(0,0),(2,2),(1,1),(0,2),(0,3),(2,3),(2,2)} T = {(0,1),(2,3),(0,0),(2,2),(1,0),(3,3),(3,2)} i. Is R…
Determine whether the relation R on the set of all people is irreflexive, where (a, b) ∈ R if and only if: i) a and b were born on the same day. ii) a has the same first name as b. iii) a and b have a common grandparent.
Question No 05: [07 marks] Let f(x) = ax + b and g(x) = cx + d, where a, b, c, and d are constants. Determine necessary and sufficient conditions on the constants a, b, c, and d so that f ∘ g = g ∘ f.
Find the domain and range of these functions. i. The function that assigns to each pair of positive integers the first integer of the pair b) the function that assigns to each positive integer its largest decimal digit ii. The functio…
Briefly answer the following short questions. 1) Determine if the following argument is valid and explain why. Every CS major takes CMSC203. Mr. Ali is taking CMSC203, therefore he is a CS major. 2) What is the time complexity (in …
Prove that for every positive integer n, 1·2·3 + 2·3·4 + ⋯ + n(n+1)(n+2) = n(n+1)(n+2)(n+3)/4.
How many strings of six lowercase letters from the English alphabet contain: i. The letter a? ii. The letters a and b? iii. The letters a and b in consecutive positions with a preceding b, with all the letters distinct? iv. The lett…
Determine whether the function f(x) = |4x| from Z to Z is a bijective function.
A boy has 10 red balls, 20 blue balls, 25 black balls, and 30 pink balls. He selects balls at random without looking at them. Calculate the minimum number of balls he must select to be sure that at least 6 balls are of the same color.
1) Draw the Hasse diagram for inclusion on the set P(S), where S = {a, b, c, d}. 2) Let S = {1,2,3,4} with the lexicographic order "<=" relation. a. Find all pairs in S x S less than (2, 3) b.
Find all pairs in S x S greater than…
Determine how many bit strings of length 5 can be formed, where three consecutive 0s are not allowed.
Determine whether the given relation is reflexive, symmetric, transitive, or none of these. R is the "greater than or equal to" relation on the set of real numbers: For all x, y ∈ R, xRy ⇐⇒ x ≥ y.
Determine how many bit strings of length can be formed, where three consecutive 0s are not allowed.
Use a Venn diagram to illustrate the set of all months of the year whose names do not contain the letter R in the set of all months of the year.
Let p and q be the propositions defined as below. p: It is below freezing. q: It is snowing.
Show that a complete graph with n vertices has n(n−1)/2 edges.
Show that for any real number x, if x^2 is odd, then x is odd.
Let p, q and r be statements. Use the Laws of Logical Equivalence and the equivalence of → to a disjunction to show that: ∼((p ∨ (q → ∼r)) ∧ (r → (p ∨ ∼q))) ≡ (∼p ∧ q) ∧ r.
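The worked binary arithmetic at the top of this problem list is easy to verify mechanically; a quick sketch:

```python
# Verify the binary/decimal conversions from the first problem above.
a = int("10001000", 2)       # 8 + 128
b = int("0001101100", 2)     # 4 + 8 + 32 + 64
print(a, b)                  # 136 108
diff = a - b
print(diff, bin(diff))       # 28 0b11100
```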
Is veal white, pink, or red? This is the big question of the week in Switzerland, as on the first of September the new law for the humane treatment of calves enters into effect. When I was in the third school year, our teacher Egidio Bernasconi took us to visit the stable of our classmate Rita Spizzi. Her father was in charge of food at the private clinic Moncucco in Coremmo, just behind our school building. Besides the large vegetable garden, he also had half a dozen cows, and that was the topic of the lesson. We learned about agribusiness. In the days of subsistence farming, a farmer in the Prealps would have one or two cows to provide the proteins for his family. Spizzi's operation was larger, because it had to sustain a clinic instead of just a family. A cow's main product was its milk, which had a shelf life of only a couple of days. The cream would be skimmed off and cultured for a few days to make butter, which had about a week of shelf life. The excess milk would be used to make yogurt, which had a longer shelf life. When there was a lot of excess milk, it would be used to make cheese, which depending on the type could last for a whole season. When a cow would no longer produce milk due to her age, Mr. Spizzi would sell her to the slaughterhouse. This explains why the typical beef dish of the region is brasato: the meat had to be stewed for hours in a red wine sauce because it was very tough, coming from old cows. Mr. Spizzi always needed enough cows to feed the patients and staff. Every spring he would walk his cows to the outskirts of town to visit a bull. Half of the offspring would be female, and that was good, because Mr. Spizzi could select one to raise to replace the next old cow; the other cows would be raised to be sold at the fair. This is why Mr. Spizzi spent a little more money to use a select bull, as its offspring would fetch a higher price at the market. For the male offspring there was not much use, because only a few bulls are required. Mr.
Spizzi would keep them as long as they could live on their mother's milk, then sell them to the slaughterhouse. Because these calves were young and milk-fed, their meat was whitish. By the way, this is why in the Insubrian culture the fancy meat is veal scaloppine. This was a long time ago; in modern agribusiness a farmer has an order of magnitude more cows. Also, much progress has been made in cattle feed, so the farmer can make more money by feeding his calves for a longer time, yielding more meat. This is where the animal protection groups come in and the new law for veal comes into play. When the calves are kept alive for a longer time, they naturally eat hay and grass, roaming on the Alps, and their meat becomes reddish. Although taste and nutritional value are the same, for centuries people have known that the whiter the veal, the more tender it is. Before the new law, a farmer would have been paid less per kilo if the veal was redder. To keep the veal whiter, the contemporary farmer would keep his animals on milk and indoors, but this means that the calves are anemic and therefore tortured. The current debate is on whether veal should be red, pink, or white. This is where color science comes into play. The experts want to sound more authoritative by using numbers rather than color terms: instead of red, pink, white, they use 38, 42, 48. They never mention a unit, so what are these numbers? Is there a new redness scale? It turns out that the new law also introduces a standardized method to determine the color of veal. The carcass is measured at a specified location near the shoulder with a Minolta colorimeter. The first number on the colorimeter is the color number for the carcass. Zooming in on the pictures reveals that the colorimeter is displaying CIELAB data, so the first number is L*. Therefore, what the gastronome takes for red, pink, white, a color scientist would take for dark, medium, light.
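The mapping from the colorimeter's L* reading to the debated color terms can be sketched in a few lines. The thresholds 38, 42, and 48 are the numbers quoted above; treating them as category cut-offs is my own assumption for illustration:

```python
def veal_category(L_star):
    """Map a CIELAB L* reading to the color terms used in the veal debate.

    Higher L* means lighter meat; the cut-offs 38/42/48 are assumed from
    the numbers mentioned in the post, not taken from the law's text."""
    if L_star >= 48:
        return "white"
    if L_star >= 42:
        return "pink"
    if L_star >= 38:
        return "red"
    return "darker than the debated range"

for reading in (50, 43, 39, 30):
    print(reading, veal_category(reading))
```

This makes the post's point concrete: the "color" in the regulation is really a lightness measurement, not a hue.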
Newspaper article on the debate (in German): Kalbfleisch-Knatsch in der Fleischbranche.

Posted by Giordano Beretta at 11:39
Labels: color terms

Generating your publication list

It is often necessary to compile one's bibliography, for example to apply for a grant or a job. One approach is to keep a text file and update it as you publish. However, unstructured data is a pain to update when you fall behind, and you already have your publications in your bibliography database anyway. Is there a quick and simple way to generate a publication list?

For those using BibDesk to manage their bibliography, the answer is Jürgen Spitzmüller's biblatex-publist package. It generates correct citations leaving out your name, sorts them by date, and allows grouping by publication type. In the preamble you just add three items:

\usepackage[bibstyle=publist]{biblatex}
\omitname[your first name]{your last name}
\addbibresource{biblio.bib}

In the document part you just add a block like the one below for each publication type:

\section{Patents}
\begin{refsection}[biblio]
\nocite{*}
\printbibliography[heading=none, filter=mine, type=patent]
\end{refsection}

The citation result will look like this when set into type:

Aug. 2006 (with Audrius J. Budrys). "Multi-component iconic representation of file characteristics". 7,086,011. Hewlett-Packard Development Company.

If you are still using BibTeX, this is a good time to update your engine. BibTeX has been obsolete for many years; people now use biblatex, and publist is just a style file for biblatex. Actually, biblatex still uses BibTeX, so you want to switch your engine to Biber. To make the switch, go to the Engine tab in the TeXShop preferences and replace the BibTeX engine bibtex with biber. It may be necessary to run the TeX Live utility to update all the packages, as there has been a bug fix in the last week.
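Putting the preamble and document pieces together, a minimal complete file might look like this (the personal names and the bibliography file name are placeholders):

```latex
\documentclass{article}
% publist is a biblatex style; typeset with biber, not the old bibtex
\usepackage[bibstyle=publist]{biblatex}
\omitname[Jane]{Doe}        % your own name, omitted from each entry
\addbibresource{biblio.bib} % your existing bibliography database

\begin{document}

\section{Patents}
\begin{refsection}[biblio]
\nocite{*}
\printbibliography[heading=none, filter=mine, type=patent]
\end{refsection}

\end{document}
```

Additional \section/refsection blocks with other type= values (article, inproceedings, and so on) extend the list to the remaining publication types.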
Extra tip: Option-Go to the folder ∼/Library/TeXShop/Engines/Inactive/Latexmk/ where you find the file Latexmk For TeXShop.pdf with the instructions on how to typeset your documents with a single mouse click. Labels: tools

Back in 2009 we looked at the carbon footprint of ripping color documents for digital presses and published the result in the EI 2010 paper "Font rendering on a GPU-based raster image processor." Assuming the raster image processor (RIP) is run at maximum capacity, the state-of-the-art system at the time consumed 38,723 kWh and generated 23,234 kg of CO2. By using GPUs, we were able to rip the same data with 10,804 kWh and 6,483 kg of CO2, respectively. At the time we thought saving 16,751 kg of CO2 per year per RIP was a pretty cool result, but in the end the product never shipped, despite — or maybe because — it was much lower cost. (See the paper for the details of the calculations.) This month the Digital Power Group published the white paper "The cloud begins with coal: big data, big networks, big infrastructure, and big power." The work was sponsored by the National Mining Association and the American Coalition for Clean Coal Electricity, which explains why some of the numbers appear a little optimistic in terms of the coal needed to keep the smart phones running and serving content, but even if we divide the numbers by 5 to make them a little more realistic, they are quite staggering when we add everything up. It turns out that a smart phone requires as much coal as a small refrigerator. Cloud computing will consume an ever increasing fraction of our total energy consumption. This is a good reason to work on more efficient and greener storage systems. Labels: cloud computing, storage, sustainability

During the ascent of Nazism in Europe in the decade before World War II, the Swiss banks introduced secret bank accounts to hide the identity of persecuted customers from infiltrated spies working at the banks or loitering in their lobbies.
After the war, this mechanism was abused by the banks to assist tax evaders and has consequently been largely dismantled. Apparently, the Swiss have been able to maintain their reputation as a discreet country. One echo effect of the leak revealing an American agency's penchant for snooping on everybody's data is that more entities are now storing their data in Switzerland, although data storage in Switzerland is about 25% more expensive than in the neighboring EU countries. "Our customers know that money can be replaced — but sensitive data cannot," says Mateo Meier of Artmotion, a data center in Zürich. Switzerland's know-how, political stability and adequate infrastructure are ideal conditions for storing data securely, he says. History does not repeat itself, and the Swiss have learned from the mistakes related to bank secrecy: there are no privacy rights for suspected felons and their data. Newspaper article: Schweizer Datentresore sind nicht sicher Labels: storage

Color can prevent shark attack

A Western Australian company has used pioneering research by leading University of WA shark experts to develop wetsuits designed to confuse sharks or render surfers invisible to the predators. The world-first shark-repellent suits are based on discoveries by Associate Professor Nathan Hart and Winthrop Professor Shaun Collin, from UWA's Oceans Institute and School of Animal Biology, about how predatory sharks see and detect prey. The suits use a specific combination of colours and patterns to deter the creatures. One design — known as the 'cryptic' wetsuit — allows the wearer to effectively blend with background colours in the water, making it difficult for a shark to detect or focus on the wearer. The other design — the 'warning' wetsuit — makes the user appear highly visible by using disruptive and high-contrast banding patterns to make them appear totally unlike any normal prey, or even as an unpalatable or dangerous option.
The designs also come in the form of stickers for the undersides of surfboards. While the company Shark Attack Mitigation Systems could not claim the suits were a failsafe protection against shark attacks, results from initial testing of the wetsuits in the ocean with wild sharks had been 'extraordinary'. Original article: UWA science leads to world-first anti-shark suits See also Why are animals colourful? Sex and violence, seeing and signals Labels: color science, perception, science

Skin-Whitening

In Japanese culture, white skin is an important component of female aesthetics. In the old days, maikos applied a thick white base mask made with lead, but after the discovery that it poisoned the skin and caused terrible skin and back problems for older geisha towards the end of the Meiji Era, it was replaced with rice powder. At the end of 2007 it appeared that the cosmetics industry had finally invented a skin-whitening product that is both safe and convenient, as we reported in this blog in a whiter shade of pale. Unfortunately it now appears this Rhododenol product is unsafe after all. 2,250 users of Kanebo Cosmetics Inc. skin-whitening cosmetics have reported developing serious symptoms such as white blotches on their skin. Serious effects of the products include depigmentation in an area of at least 5 cm and depigmentation in three or more areas of the body, as well as clearly visible depigmentation in parts of the face. In total, the company has received more than 100,000 inquiries in connection with the recall. The ingredient in question is named "Rhododenol." Kanebo earlier said it has been marketing cosmetics that use the ingredient as an active substance since 2008 via various retail outlets. The company said it has secured the cooperation of the Japanese Dermatological Association in getting a list of medical facilities which will treat the symptoms posted on the association's website.
Kanebo Cosmetics Inc.'s voluntary recall of its skin-whitening line is likely to deal a crippling blow not only to its brand image but also to its parent Kao Corp. It will not be easy to restore the image of the damaged brand. The recall became more damaging due to Kanebo's delayed response. Labels: color reproduction, color terms, etymology, photography NIH To Fund Big Data Projects Biomedical research is increasingly data-intensive, with researchers routinely generating and using large, diverse datasets. Yet the ability to manage, integrate and analyze such data, and to locate and use data generated by others, is often limited due to a lack of tools, accessibility, and training. The National Institutes of Health announced that it will provide up to $24 million per year for the new Big Data to Knowledge (BD2K) Centers of Excellence. This initiative supports research, implementation, and training in data science that will enable biomedical scientists to capitalize on the transformative opportunities that large datasets provide. Press release: NIH commits $24 million annually for Big Data Centers of Excellence Web site: NIH Big Data to Knowledge (BD2K) Labels: research process, tools
Preprocess the data Let's return to the tragic titanic dataset. This time, you'll classify its observations differently, with k-Nearest Neighbors (k-NN). However, there is one problem you'll have to tackle first: scale. As you've seen in the video, the scale of your input variables may have a great influence on the outcome of the k-NN algorithm. In your case, the Age is on an entirely different scale than Sex and Pclass, hence it's best to rescale first! For example, to normalize a vector \(\mathbf{x}\), you could do the following: $$\frac{\mathbf{x} - \min(\mathbf{x})}{\max(\mathbf{x}) - \min(\mathbf{x})}$$ Head over to the instructions to normalize Age and Pclass for both the training and the test set. Assign the class label, Survived, of both train and test to separate vectors: train_labels and test_labels. Copy the train and test set to knn_train and knn_test. You can just use the assignment operator (<-) to do this. Drop the Survived column from knn_train and knn_test. Tip: dropping a column named column in a data frame named df can be done as follows: df$column <- NULL. For this instruction, you don't have to write any code. Pclass is an ordinal value between 1 and 3. Have a look at the code that normalizes this variable in both the training and the test set. To define the minimum and maximum, only the training set is used; we can't use information on the test set (like the minimums or maximums) to normalize the data. In a similar fashion, normalize the Age column of knn_train as well as knn_test. Fill in the ___ in the code. Again, you should only use features from the train set to decide on the normalization! You should use the intermediate variables min_age and max_age.
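The exercise itself is in R, but the train-only min-max logic is easy to sketch in another language. Below is a minimal Python illustration; the Age values are made-up stand-ins for the DataCamp data frames, and the helper name is hypothetical. The key point is that min_age and max_age come from the training data only, so test-set values may fall outside [0, 1].

```python
# Hypothetical Age columns standing in for the DataCamp data frames.
train_age = [22.0, 38.0, 26.0, 35.0, 54.0]
test_age = [34.0, 47.0, 62.0]

# Compute the minimum and maximum on the TRAINING set only: using test-set
# statistics would leak information into the preprocessing step.
min_age, max_age = min(train_age), max(train_age)

def normalize(values, lo, hi):
    # min-max normalization: (x - min) / (max - min)
    return [(v - lo) / (hi - lo) for v in values]

knn_train_age = normalize(train_age, min_age, max_age)
knn_test_age = normalize(test_age, min_age, max_age)
```

Note how a test-set age larger than any training age (62 here) normalizes to a value above 1, which is expected and harmless for k-NN distance computations.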
Numerical differentiation

In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.

Finite differences

Further information: Finite differences

The simplest method is to use finite difference approximations. A simple two-point estimation is to compute the slope of a nearby secant line through the points (x, f(x)) and (x + h, f(x + h)).[1] Here h is a small number representing a small change in x; it can be either positive or negative. The slope of this line is ${\frac {f(x+h)-f(x)}{h}}.$ This expression is Newton's difference quotient (also known as a first-order divided difference). The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to h. As h approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of f at x is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line: $f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.$ Since substituting 0 for h immediately yields the indeterminate form ${\frac {0}{0}}$, the derivative cannot be computed by direct substitution. Equivalently, the slope could be estimated by employing the positions (x − h) and x. Another two-point formula is to compute the slope of a nearby secant line through the points (x − h, f(x − h)) and (x + h, f(x + h)). The slope of this line is ${\frac {f(x+h)-f(x-h)}{2h}}.$ This formula is known as the symmetric difference quotient. In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to $h^{2}$. Hence for small values of h this is a more accurate approximation to the tangent line than the one-sided estimation.
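As an illustration (a sketch, not part of the article), the two quotients can be compared numerically; with sin as an arbitrary test function, the symmetric quotient's error shrinks like h² while the one-sided error shrinks only like h:

```python
import math

def forward_diff(f, x, h):
    # Newton's difference quotient: truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # symmetric difference quotient: truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

h = 1e-4
exact = math.cos(1.0)  # true derivative of sin at x = 1
err_fwd = abs(forward_diff(math.sin, 1.0, h) - exact)
err_cen = abs(central_diff(math.sin, 1.0, h) - exact)
```

With h = 1e-4 the one-sided error is on the order of 1e-5, while the symmetric error is on the order of 1e-9, consistent with the O(h) versus O(h²) behavior described above.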
However, although the slope is being computed at x, the value of the function at x is not involved. The estimation error is given by $R={\frac {-f^{(3)}(c)}{6}}h^{2}$, where $c$ is some point between $x-h$ and $x+h$. This error does not include the rounding error due to numbers being represented and calculations being performed in limited precision. The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including the TI-82, TI-83, TI-84, and TI-85, all of which use this method with h = 0.001.[2][3]

Step size

See also: Adaptive step size

An important consideration in practice, when the function is calculated using floating-point arithmetic of finite precision, is the choice of step size, h. If chosen too small, the subtraction will yield a large rounding error. In fact, all the finite-difference formulae are ill-conditioned[4] and due to cancellation will produce a value of zero if h is small enough.[5] If chosen too large, the slope of the secant line is computed with little rounding error, but the secant becomes a poor approximation of the tangent, i.e., the truncation error grows.[6] For basic central differences, the optimal step is the cube root of machine epsilon.[7] For the numerical derivative formula evaluated at x and x + h, a choice for h that is small without producing a large rounding error is ${\sqrt {\varepsilon }}x$ (though not when x = 0), where the machine epsilon ε is typically of the order of 2.2×10−16 for double precision.[8] A formula for h that balances the rounding error against the secant error for optimum accuracy is[9] $h=2{\sqrt {\varepsilon \left|{\frac {f(x)}{f''(x)}}\right|}}$ (though not when $f''(x)=0$), and employing it requires knowledge of the function.
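The trade-off can be observed directly. In the Python sketch below (the test function and the two comparison step sizes are arbitrary choices), the central-difference error near the cube-root-of-epsilon step is far smaller than with a much larger step, where truncation dominates, and with a much smaller step, where rounding dominates:

```python
import math
import sys

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

eps = sys.float_info.epsilon   # ~2.2e-16 for double precision
h_opt = eps ** (1.0 / 3.0)     # near-optimal step for central differences

exact = math.cos(1.0)          # true derivative of sin at x = 1
err_opt = abs(central_diff(math.sin, 1.0, h_opt) - exact)
err_small = abs(central_diff(math.sin, 1.0, 1e-12) - exact)  # rounding dominates
err_large = abs(central_diff(math.sin, 1.0, 1e-2) - exact)   # truncation dominates
```

Here h_opt is about 6e-6, and err_opt lands around 1e-11, several orders of magnitude better than the truncation-dominated err_large.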
For computer calculations the problems are exacerbated because, although x necessarily holds a representable floating-point number in some precision (32 or 64-bit, etc.), x + h almost certainly will not be exactly representable in that precision. This means that x + h will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that (x + h) − x will not equal h; the two function evaluations will not be exactly h apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as h = 0.1 will not be a round number in binary; it is 0.000110011001100...₂ A possible approach is as follows:

    h := sqrt(eps) * x
    xph := x + h
    dx := xph - x
    slope := (F(xph) - F(x)) / dx

However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that dx and h are the same. With C and similar languages, a directive that xph is a volatile variable will prevent this.

Other methods

Higher-order methods

Further information: Finite difference coefficient

Higher-order methods for approximating the derivative, as well as methods for higher derivatives, exist. Given below is the five-point method for the first derivative (five-point stencil in one dimension):[10] $f'(x)={\frac {-f(x+2h)+8f(x+h)-8f(x-h)+f(x-2h)}{12h}}+{\frac {h^{4}}{30}}f^{(5)}(c),$ where $c\in [x-2h,x+2h]$. For other stencil configurations and derivative orders, the Finite Difference Coefficients Calculator is a tool that can be used to generate derivative approximation methods for any stencil with any derivative order (provided a solution exists).
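The five-point formula above can be sketched directly (again as an illustration; the test function and step size are arbitrary choices):

```python
import math

def five_point_diff(f, x, h):
    # five-point stencil in one dimension; truncation error is O(h^4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

# Derivative of sin at x = 1; the true value is cos(1).
approx = five_point_diff(math.sin, 1.0, 1e-2)
```

Even with the fairly coarse step h = 0.01 the error is below 1e-8, since the truncation term scales with h⁴.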
Higher derivatives

Using Newton's difference quotient, $f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}$ the following can be shown[11] (for n > 0): $f^{(n)}(x)=\lim _{h\to 0}{\frac {1}{h^{n}}}\sum _{k=0}^{n}(-1)^{k+n}{\binom {n}{k}}f(x+kh)$

Complex-variable methods

The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if $f$ is a holomorphic function, real-valued on the real line, which can be evaluated at points in the complex plane near $x$, then there are stable methods. For example,[5] the first derivative can be calculated by the complex-step derivative formula:[12][13][14] $f^{\prime }(x)={\frac {\Im (f(x+\mathrm {i} h))}{h}}+O(h^{2}),\quad \mathrm {i} ^{2}:=-1.$ The recommended step size to obtain accurate derivatives for a range of conditions is $h=10^{-200}$.[6] This formula can be obtained by Taylor series expansion: $f(x+\mathrm {i} h)=f(x)+\mathrm {i} hf^{\prime }(x)-h^{2}f''(x)/2!-\mathrm {i} h^{3}f^{(3)}(x)/3!+\cdots .$ The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs multicomplex numbers, resulting in multicomplex derivatives.[15][16][17] $f^{(n)}(x)\approx {\frac {{\mathcal {C}}_{n^{2}-1}^{(n)}(f(x+\mathrm {i} ^{(1)}h+\ldots +\mathrm {i} ^{(n)}h))}{h^{n}}}$ where the $\mathrm {i} ^{(k)}$ denote the multicomplex imaginary units; $\mathrm {i} ^{(1)}\equiv \mathrm {i} $. The ${\mathcal {C}}_{k}^{(n)}$ operator extracts the $k$th component of a multicomplex number of level $n$; e.g., ${\mathcal {C}}_{0}^{(n)}$ extracts the real component and ${\mathcal {C}}_{n^{2}-1}^{(n)}$ extracts the last, “most imaginary” component. The method can be applied to mixed derivatives, e.g.
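A Python sketch of the complex-step formula (the test functions are arbitrary; note that f must accept complex arguments, hence cmath rather than math):

```python
import cmath
import math

def complex_step(f, x, h=1e-200):
    # Complex-step derivative: there is no subtraction of nearby function
    # values, hence no catastrophic cancellation even for tiny h.
    return f(x + 1j * h).imag / h

d_sin = complex_step(cmath.sin, 1.0)              # ~cos(1), machine precision
d_cube = complex_step(lambda z: z * z * z, 2.0)   # ~12, the derivative of z^3 at 2
```

Unlike the finite-difference quotients, the step can be taken absurdly small (the 1e-200 default follows the recommendation quoted above) without any loss of accuracy.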
for a second-order derivative ${\frac {\partial ^{2}f(x,y)}{\partial x\,\partial y}}\approx {\frac {{\mathcal {C}}_{3}^{(2)}(f(x+\mathrm {i} ^{(1)}h,y+\mathrm {i} ^{(2)}h))}{h^{2}}}$ A C++ implementation of multicomplex arithmetic is available.[18] In general, derivatives of any order can be calculated using Cauchy's integral formula:[19] $f^{(n)}(a)={\frac {n!}{2\pi i}}\oint _{\gamma }{\frac {f(z)}{(z-a)^{n+1}}}\,\mathrm {d} z,$ where the integration is done numerically. Using complex variables for numerical differentiation was started by Lyness and Moler in 1967.[20] Their algorithm is applicable to higher-order derivatives. A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner.[21] An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.[4]

Differential quadrature

Differential quadrature is the approximation of derivatives by using weighted sums of function values.[22][23] Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with quadrature, meaning numerical integration, where weighted sums are used in methods such as Simpson's method or the trapezoidal rule. There are various methods for determining the weight coefficients, for example the Savitzky–Golay filter. Differential quadrature is used to solve partial differential equations.
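The Cauchy-integral approach can be sketched by applying the trapezoidal rule on a circle around the point of interest; the radius and sample count below are arbitrary choices, and f must be analytic on and inside the contour:

```python
import cmath
import math

def cauchy_derivative(f, a, n, radius=0.5, samples=64):
    # Parametrize the contour as z = a + r*exp(i*theta); the integral
    # reduces to n!/(N r^n) * sum_k f(z_k) * exp(-i*n*theta_k), i.e. the
    # trapezoidal rule, which is spectrally accurate for analytic f.
    total = 0j
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        total += f(a + radius * cmath.exp(1j * theta)) * cmath.exp(-1j * n * theta)
    return (math.factorial(n) * total / (samples * radius**n)).real

d5 = cauchy_derivative(cmath.exp, 0.0, 5)   # every derivative of exp at 0 is 1
```

Because the trapezoidal rule converges geometrically on periodic analytic integrands, even a fifth derivative comes out accurate to roughly machine precision here, something no real-valued finite-difference stencil can match.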
There are further methods for computing derivatives from noisy data.[24]

See also

• Automatic differentiation – Techniques to evaluate the derivative of a function specified by a computer program
• Five-point stencil
• Savitzky–Golay filter – Algorithm to smooth data points
• Numerical integration – Methods of calculating definite integrals
• Numerical ordinary differential equations – Methods used to find numerical solutions of ordinary differential equations
• Numerical smoothing and differentiation – Algorithm to smooth data points
• List of numerical-analysis software

References

1. Richard L. Burden, J. Douglas Faires (2000), Numerical Analysis, (7th Ed), Brooks/Cole. ISBN 0-534-38216-9.
2. Katherine Klippert Merseth (2003). Windows on Teaching Math: Cases of Middle and Secondary Classrooms. Teachers College Press. p. 34. ISBN 978-0-8077-4279-2.
3. Tamara Lefcourt Ruby; James Sellers; Lisa Korf; Jeremy Van Horn; Mike Munn (2014). Kaplan AP Calculus AB & BC 2015. Kaplan Publishing. p. 299. ISBN 978-1-61865-686-5.
4. Numerical Differentiation of Analytic Functions, B. Fornberg – ACM Transactions on Mathematical Software (TOMS), 1981.
5. Using Complex Variables to Estimate Derivatives of Real Functions, W. Squire, G. Trapp – SIAM Review, 1998.
6. Martins, Joaquim R. R. A.; Ning, Andrew (2021-10-01). Engineering Design Optimization (PDF). Cambridge University Press. ISBN 978-1108833417.
7. Sauer, Timothy (2012). Numerical Analysis. Pearson. p. 248.
8. Following Numerical Recipes in C, Chapter 5.7.
9. p. 263.
10. Abramowitz & Stegun, Table 25.2.
11. Shilov, George. Elementary Real and Complex Analysis.
12. Martins, J. R. R. A.; Sturdza, P.; Alonso, J. J. (2003). "The Complex-Step Derivative Approximation". ACM Transactions on Mathematical Software. 29 (3): 245–262. CiteSeerX 10.1.1.141.8002.
doi:10.1145/838250.838251. S2CID 7022422.
13. Differentiation With(out) a Difference by Nicholas Higham
14. Article from the MathWorks blog, posted by Cleve Moler
15. "Archived copy" (PDF). Archived from the original (PDF) on 2014-01-09. Retrieved 2012-11-24.
16. Lantoine, G.; Russell, R. P.; Dargent, Th. (2012). "Using multicomplex variables for automatic computation of high-order derivatives". ACM Trans. Math. Softw. 38 (3): 1–21. doi:10.1145/2168773.2168774. S2CID 16253562.
17. Verheyleweghen, A. (2014). "Computation of higher-order derivatives using the multi-complex step method" (PDF).
18. Bell, I. H. (2019). "mcx (multicomplex algebra library)". GitHub.
19. Ablowitz, M. J., Fokas, A. S. (2003). Complex Variables: Introduction and Applications. Cambridge University Press. See Theorem 2.6.2.
20. Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4 (2): 202–210. Bibcode:1967SJNA....4..202L. doi:10.1137/0704019.
21. Abate, J; Dubner, H (March 1968). "A New Method for Generating Power Series Expansions of Functions". SIAM J. Numer. Anal. 5 (1): 102–112. Bibcode:1968SJNA....5..102A. doi:10.1137/0705008.
22. Differential Quadrature and Its Application in Engineering: Engineering Applications, Chang Shu, Springer, 2000, ISBN 978-1-85233-209-9.
23. Advanced Differential Quadrature Methods, Yingyan Zhang, CRC Press, 2009, ISBN 978-1-4200-8248-7.
24. Ahnert, Karsten; Abel, Markus (2007). "Numerical differentiation of experimental data: local versus global methods". Computer Physics Communications. 177 (10): 764–774. Bibcode:2007CoPhC.177..764A. doi:10.1016/j.cpc.2007.03.009. ISSN 0010-4655.
External links

Wikibooks has a book on the topic of: Numerical Methods
• Numerical Differentiation from wolfram.com
• Numerical Differentiation Resources: Textbook notes, PPT, Worksheets, Audiovisual YouTube Lectures at Numerical Methods for STEM Undergraduate
• Fortran code for the numerical differentiation of a function using Neville's process to extrapolate from a sequence of simple polynomial approximations.
• NAG Library numerical differentiation routines
• http://graphulator.com Online numerical graphing calculator with calculus function.
• Boost.Math numerical differentiation, including finite differencing and the complex step derivative
• Complex Step Differentiation
• Differentiation With(out) a Difference by Nicholas Higham, SIAM News.
• findiff Python project
\begin{document} \title{Three Edge-disjoint Plane Spanning Paths \\ in a Point Set} \author{P. Kindermann\inst{1}\orcidID{0000-0001-5764-7719} \and J. Kratochvíl\inst{2}\orcidID{0000-0002-2620-6133} \and G. Liotta\inst{3}\orcidID{---} \and P.~Valtr\inst{2}\orcidID{0000-0002-3102-4166} } \authorrunning{Kindermann et al.} \institute{Universität Trier, Trier, Germany. \email{[email protected]} \and Charles University, Prague, Czech Republic, \email{{honza,valtr}@kam.mff.cuni.cz} \and Università degli Studi di Perugia, Perugia, Italy, \email{[email protected]} } \maketitle \begin{abstract} We study the following problem: Given a set $S$ of $n$ points in the plane, how many edge-disjoint plane straight-line spanning paths of $S$ can one draw? A well known result is that when the $n$ points are in convex position, $\lfloor n/2\rfloor$ such paths always exist, but when the points of $S$ are in general position the only known construction gives rise to two edge-disjoint plane straight-line spanning paths. In this paper, we show that for any set $S$ of at least ten points, no three of which are collinear, one can draw at least three edge-disjoint plane straight-line spanning paths of~$S$. Our proof is based on a structural theorem on halving lines of point configurations and a strengthening of the theorem about two spanning paths, which we find interesting in its own right: if $S$ has at least six points, and we prescribe any two points on the boundary of its convex hull, then the set contains two edge-disjoint plane spanning paths starting at the prescribed points. \keywords{Plane Spanning Paths, Point Sets, Geometric Graph Theory} \end{abstract} \section{Introduction}\label{sec:Intro} A set of locations of an industrial plant is visited by three different groups of mobile robots, the red, the blue, and the green robots. Robots are anonymous, autonomous, asynchronous, and oblivious. 
Every location can be simultaneously occupied by a red, a green, and a blue robot, each executing a different task in the production chain. Robots of the same color follow the same colored trajectory, which visits all locations of the industrial plant by connecting consecutive locations with straight-line segments. Each robot operates in standard Look-Compute-Move cycles (see, e.g.~\cite{CDN16,DN17,FPS12}), i.e., it perceives its location (Look), it decides where to move (Compute), and it eventually moves to a new location by following its trajectory (Move). Since robots of different color may be programmed to visit the locations in a different order, any two trajectories of different color are edge-disjoint, so as to avoid collisions. Also, since trajectories of different color may intersect, the programs of the robots encode a rule of color-precedence (e.g. red $>$ blue $>$ green). Finally, collisions between robots of the same color are avoided by making each colored trajectory non-self-intersecting. The above robot-motion-planning problem of computing the three colored trajectories can be expressed as the following graph drawing question: Let $S$ be a set of distinct points (locations) in the plane and let $\pi_1$, $\pi_2$, and $\pi_3$ be three simple paths, each having $|S|$ vertices; we want to compute three plane (i.e., crossing-free) straight-line drawings of $\pi_1$, $\pi_2$, and $\pi_3$ such that the vertex set in each drawing coincides with $S$ and no two drawings share an edge. In other words: does $S$ admit three edge-disjoint plane straight-line spanning paths? The question appears almost innocent and, at first glance, one would be tempted to ask it for any $k$ paths with $k\geq 2$.
Namely, the proof of Bernhart and Kainen about the book thickness of a complete graph (Theorem 3.4 of~\cite{DBLP:journals/jct/BernhartK79}) already gives a partial answer: if the $n$ points are in convex position, then it is possible to draw $\lfloor \frac{n}{2} \rfloor$ edge-disjoint plane straight-line spanning paths of the point set which is also a tight upper bound for even values of $n$ (the complete graph has $\frac{n(n-1)}{2}$ edges). However, little is known when the $n$ points are not in convex position: The only result we are aware of is by Aichholzer et al.~\cite{ahkklpsw-ppstpcgg-17}, who show the existence of two edge-disjoint plane straight-line spanning paths for any set of $n\geq 4$ points in general position (no three collinear). Aichholzer et al.\ leave open the problem of proving whether three or more paths always exist. Our main result is as follows. \begin{restatable}{theorem}{ThmMain} \label{thm:main} Let $S$ be any set of at least ten points in general position in the plane. There are three edge-disjoint plane straight-line spanning paths of $S$. \end{restatable} Besides addressing an open problem by Aichholzer et al.~\cite{ahkklpsw-ppstpcgg-17}, \cref{thm:main} relates with some classical topics in the graph drawing literature. Among them, the \emph{graph packing problem} asks whether it is possible to map a set of smaller graphs into a larger graph, called the \emph{host graph}, without using the same edge of the host graph twice. A rich body of literature is devoted to this problem, both when the host graph is the complete graph and when the smaller graph is either planar or near-planar (see, e.g.~\cite{Bollobas1978,DBLP:journals/jocg/GeyerHKKT17,DBLP:journals/ajc/HalerW14,Hedetniemi1982,DBLP:journals/jgaa/LucaGHKLLMTW21,Teo1990}). 
While most papers devoted to graph packing do not assume that a drawing of the host graph is given as part of the input, our study considers a \emph{geometric graph packing problem}, as we want to map three plane geometric paths with $n$ vertices into a complete geometric graph $K_n$. Bose et al.~\cite{DBLP:journals/comgeo/BoseHRW06} give a characterization of those plane trees that can be packed in a complete geometric graph $K_n$ in the special case that the vertices of $K_n$ are in convex position. Aichholzer et al.~\cite{ahkklpsw-ppstpcgg-17} show that $\Omega(\sqrt{n})$ edge-disjoint plane trees can be packed into a complete geometric graph with $n$ vertices, but it is not known whether this lower bound extends to paths. From the perspective of geometric graph packing problems, \cref{thm:main} directly implies the following. \begin{corollary}\label{thm:main-alt} Three edge-disjoint plane Hamiltonian paths can be packed into every complete geometric graph with at least ten vertices. \end{corollary} \paragraph{Paper organization.} The rest of the paper is organized as follows. In \cref{sec:prelim}, we introduce technical notions and briefly recall the concept of zig-zag paths of~\cite{DBLP:journals/dam/AbellanasGHNR99}; then, in \cref{sec:Overview}, we present an overview of the proof of our main theorem. \cref{sec:Twopath} is devoted to a strengthening of the result of~\cite{ahkklpsw-ppstpcgg-17} on the existence of two paths even when the starting points are prescribed, which we believe is also interesting on its own. \cref{sec:Main} then presents the proof of our main theorem in detail. \section{Preliminaries}\label{sec:prelim} We denote a path by its sequence of vertices and edges. For a path $P$ starting at $v_1$, ending at $v_h$, and visiting its vertices in increasing order from $1$ to $h$, we represent the sequence as $P = v_1 \circ v_1v_2 \circ v_2\circ \cdots \circ v_{h-1}v_h \circ v_h$.
Thus, the concatenation of vertex-disjoint paths $P$ (ending with vertex $x$) and $Q$ (starting with vertex $y$ adjacent to $x$) is the path $P\circ Q=P\circ xy \circ Q$. By $P^{-1}$ we denote the path $P$ traversed in the reversed order. Let $S$ be a set of points in general position (i.e., no three collinear) in the plane and let $p,q$ be two points of $S$. We denote by $\overline{pq}$ the line passing through them, and by ${pq}$ the line segment with endpoints $p$ and $q$. A path on $S$ is a crossing-free straight-line drawing of a path whose set of vertices is a subset of $S$. For the sake of brevity we shall omit the term ``straight-line''. A path on $S$ is \emph{spanning} if its set of vertices coincides with~$S$. We shall say \emph{plane (spanning) path} to mean that a (spanning) path is crossing-free. We denote by $\ch(S)$ the convex hull of $S$ and by $\partial\ch(S)$ the boundary of its convex hull. Points of $S\cap\partial\ch(S)$ are the {\emph{extreme points}} of $S$. We will call a partition $S=S_1\cup S_2$ a \emph{balanced separated partition\xspace} if the two sets are almost equal in size (i.e., $||S_1|-|S_2||\le 1$) and $\ch(S_1)\cap \ch(S_2)=\emptyset$. In such a case we denote the partition as $(S_1,S_2)$. The boundary of $\ch(S)$ contains two edges with one end-point in $S_1$ and the other one in $S_2$; such edges are called \emph{bridges} of the partition, and each vertex incident with a bridge is called \emph{bridged}. A line $\ell$ is a \emph{balancing line} for a set $S$ if it induces a balanced separated partition\xspace\ of $S\setminus\ell$. Note that a balancing line for $S$ may contain 0, 1 or 2 points of $S$. Every point of $S$ belongs to at least one balancing line passing through this point, but not every two points of $S$ belong to the same balancing line. However, every set (of size at least 2) contains two points which determine a balancing line.
If $(S_1,S_2)$ is a balanced separated partition\xspace of $S$, a \emph{zig-zag $(S_1,S_2)$-path} is a plane spanning path in $S$ on which the points from $S_1$ and $S_2$ alternate. When $S_1$ and $S_2$ are clear from the context, we just call it a zig-zag path. It is well known that a zig-zag path exists for every balanced separated partition\xspace{} of $S$~\cite{DBLP:journals/bit/HershbergerS92,DBLP:journals/dam/AbellanasGHNR99}. \begin{lemma}[\cite{DBLP:journals/dam/AbellanasGHNR99}]\label{lem:zigzag} Every balanced separated partition\xspace admits a zig-zag path. \end{lemma} The algorithm by Abellanas et al.~\cite{DBLP:journals/dam/AbellanasGHNR99} works roughly as follows. Assume that $(S_1,S_2)$ is a partition of $S$, that $S_1$ and $S_2$ are separated by a horizontal line, and that $|S_1|\ge |S_2|$. Let $(p_1,q_1)$ be the \emph{left bridge} of $S$, i.e., the left edge of $\ch(S)$ crossing the separating line, with $p_1\in S_1$ and $q_1\in S_2$. Starting with $P=p_1$, compute inductively the left bridge $pq$ of $S\setminus V(P)$, and set $P=P\circ p$ if the last point of $P$ was in $S_2$, or set $P=P\circ q$ if the last point of $P$ was in $S_1$. Continue this process until all vertices of $S$ are added to $P$. Then $P$ is a zig-zag path. Note here that if $|S_1|=|S_2|$, we may choose whether the zig-zag path starts in $S_1$ or in $S_2$, but when the sets $S_1$ and $S_2$ are not equal in size, every zig-zag path must start in the bigger one of them. Note also that on a zig-zag path constructed in the way sketched above, the crossing points of the edges of the path with the separating horizontal line have the same linear order along the path and along the separating line. \section{Approach overview}\label{sec:Overview} The main idea of our approach is to construct three edge-disjoint plane spanning paths $Z,P,Q$ on a point set $S$ with $|S|\ge 10$ as follows. We find a suitable balanced separated partition\xspace $(S_1,S_2)$ of $S$.
The first path $Z$ will be the zig-zag path obtained by \cref{lem:zigzag}. For $P$ and $Q$, we seek to find two edge-disjoint plane spanning paths $P_1,Q_1$ (ending in $p_1$ and $q_1$, respectively) in $S_1$ and two edge-disjoint plane spanning paths $P_2,Q_2$ (starting in $p_2$ and $q_2$, respectively) in $S_2$; these are obviously edge-disjoint with $Z$. If $p_1p_2$ and $q_1q_2$ do not belong to $Z$ and their interiors are disjoint from $\ch(S_1)\cup\ch(S_2)$, we combine these four paths into two edge-disjoint plane spanning paths $P=P_1\circ p_1p_2 \circ P_2$ and $Q=Q_1\circ q_1q_2\circ Q_2$ in $S$. To this end, we reverse the strategy and try to find two pairs of vertices on $\ch(S_1)$ and $\ch(S_2)$ that see each other (i.e., the connecting segments do not pass through $\ch(S_1)\cup\ch(S_2)$) and that are not connected by an edge in $Z$. See also \cref{fig:overview} for a schematic description of our approach. \begin{figure} \caption{Schematic illustration of the approach behind the proof of \cref{thm:main}.} \label{fig:overview-P1} \label{fig:overview-P2} \label{fig:overview-P3} \label{fig:overview} \end{figure} There are several difficulties: First, we cannot use the algorithm by Aichholzer et al.~\cite{ahkklpsw-ppstpcgg-17} to find the two paths in $S_1$ and $S_2$, as that does not control the starting and ending points of the paths. Hence, we strengthen their theorem and prove that one can always find two edge-disjoint plane spanning paths even if their starting points are prescribed (\cref{thm:pq-paths}). Second, it might not be possible to find two pairs of vertices on the convex hulls with our desired properties. However, we prove that in most such situations, the zig-zag path can be slightly modified so that the connection is possible.
Lastly, we show that all of the previously described moves fail in one and only one very specific configuration, which allows three edge-disjoint plane spanning paths to be constructed easily in an ad hoc way, thus establishing \cref{thm:main}. \section{Two edge-disjoint plane spanning paths with prescribed starting points}\label{sec:Twopath} Let $S$ be a set of at least five distinct points in the plane in general position and let $s$ and $t$ be two distinguished elements of $S$, possibly coincident. In this section we show that there exist two edge-disjoint plane spanning paths of $S$, one starting at $s$ and the other starting at $t$, such that $st$ is not an edge of either path. To this end, we start with some basic properties of planar point sets. Let $p$ be a point outside $\ch(S)$. We say that $p$ \emph{sees} a point $q\in S$ if ${pq}\cap \ch(S) = \{q\}$. We denote by $S(p)$ the set of (extreme) points of $S$ that are seen from $p$. \begin{restatable}{lemma}{SeesTwo} \label{lem:seestwo} Let $|S|\ge 3$ and let $p$ be a point outside $\ch(S)$. Then $p$ sees at least two points of $S$. Moreover, $S(p)$ forms a continuous interval on $S\cap\partial\ch(S)$ (along $\partial\ch(S)$). \end{restatable} \begin{proof} We radially sort the points of $S$ in counterclockwise order from $p$; see \cref{fig:point-visibility}. The first and the last points in the order belong to $\partial\ch(S)$ and obviously they belong to $S(p)$. The set $S(p)$ is a continuous interval of points of $S$ along $\partial\ch(S)$ which starts and ends in these two points.
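Computationally, the visibility predicate can be decided by an orientation test that is equivalent to the radial-sort argument: an exterior point $p$ sees a hull vertex $q$ if and only if $p$ lies strictly outside the supporting line of one of the two hull edges incident to $q$. A small sketch (the function names are ours; general position is assumed):

```python
def cross(o, a, b):
    # Twice the signed area of the triangle o, a, b (> 0 iff ccw turn).
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in ccw order.
    pts = sorted(pts)
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, list(reversed(pts)))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def sees(p, q, hull):
    # hull: vertices of ch(S) in ccw order; q must be one of them.  For a ccw
    # hull the interior lies to the left of every directed edge, so p is
    # strictly outside an incident edge's supporting line iff the turn is cw.
    i = hull.index(q)
    prev, nxt = hull[i - 1], hull[(i + 1) % len(hull)]
    return cross(prev, q, p) < 0 or cross(q, nxt, p) < 0

def visible_points(p, S):
    # S(p): the extreme points of S seen from the exterior point p.
    hull = convex_hull(S)
    return [q for q in hull if sees(p, q, hull)]
```

For a square with $p$ to its right, `visible_points` returns the two right-hand corners: at least two points, forming a contiguous interval of $\partial\ch(S)$, as the lemma predicts.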
\end{proof} \begin{figure} \caption{Illustration for the proof of (\subref{fig:point-visibility}) \cref{lem:seestwo} and (\subref{fig:bad2case}) \cref{lem:claimbad2case}.} \label{fig:point-visibility} \label{fig:bad2case} \end{figure} \begin{restatable}{lemma}{ClaimBadTwoCase} \label{lem:claimbad2case} Let $|S|\ge 3$ and let $p,q$ be two distinct points outside $\ch(S)$ such that $S\cup\{p,q\}$ is in general position; let $x$ and $y$ be two extreme points of $S$. If $x\in S(p), y\in S(q)$ and the (visibility) segments $px$ and $qy$ cross in an interior point, then $\{x,y\}\subseteq S(p)\cap S(q)$ and $py\cap qx=\emptyset$. \end{restatable} \begin{proof} Let $\ell=\overline{xy}$, let $\pi^+_{xy}$ be the half-plane determined by $\ell$ that contains the crossing point $t$ of $px$ and $qy$, and let $\pi^-_{xy}$ be the opposite half-plane determined by $\ell$; see \cref{fig:bad2case}. Clearly $p,q\in \pi^+_{xy}$. Then $S\subseteq \pi^-_{xy}\cup \Delta xyt$, hence also $\ch(S)\subseteq \pi^-_{xy}\cup \Delta xyt$ (here we are using the fact that both $x$ and $y$ lie on $\partial\ch(S)$), and thus $py\cap \ch(S)=\{y\}$ and $qx\cap \ch(S)=\{x\}$. The segments $py$ and $qx$ are non-crossing, since the edges of a complete graph on 4 vertices ($x,y,p,q$) may cross in at most one point, and this crossing is already realized at $t$. \end{proof} \begin{restatable}{lemma}{BadTwoCase} \label{lem:bad2case} Let $|S|\ge 3$ and let $p,q$ be two distinct points outside $\ch(S)$ such that $S\cup\{p,q\}$ is in general position. Let $|S(p)\cup S(q)|\ge 3$. Then for any point $c\in S(q)$, there exist points $a\in S(p)$ and $b\in S(q)\setminus\{c\}$ such that $ap\cap bq = \emptyset$. \end{restatable} \begin{figure} \caption{Illustration for the proof of \cref{lem:bad2case}.} \label{fig:bad2case2-22} \label{fig:bad2case2-23} \label{fig:bad2case2-33} \label{fig:bad2case2} \end{figure} \begin{lemma}\label{lem:pq-path} Let $s$ and $t$ be two distinct points of $S$.
Then $S$ contains a plane spanning path which starts at $s$ and ends at $t$. \end{lemma} \begin{proof} Suppose first that both $s$ and $t$ lie on $\partial\ch(S)$; see \cref{fig:pq-path-boundary}. Let $\ell_s$ ($\ell_t$) be a supporting line of $S$ passing through $s$ (through $t$) and let $x$ be the crossing point of $\ell_s$ and $\ell_t$ (since the points of $S$ are in general position, we may assume without loss of generality that $\ell_s$ and $\ell_t$ are not parallel and that $S\cup\{x\}$ is in general position). Consider the lines $\overline{xy}$, for $y\in S$, and order them $\ell_1, \ell_2, \ldots, \ell_{|S|}$ as they form a rotation scheme around $x$ from $\ell_1=\ell_s$ to $\ell_{|S|}=\ell_t$. Rename the points of $S$ as $y_i\in \ell_i$, $i=1,2,\ldots,|S|$. Then $s{=}y_1 \circ y_1y_2 \circ y_2 \circ \ldots \circ y_{|S|}{=}t$ is a plane spanning path starting at $s$ and ending at $t$. Now assume that at least one of $s,t$ is an interior point of $\ch(S)$, say $s$; see \cref{fig:pq-path-inside}. The line $\overline{st}$ separates $S\setminus\{s,t\}$ into two disjoint nonempty sets $A,B$. Also, this line intersects the relative interior of an edge of $\partial\ch(S)$, say $ab$ with $a\in A$ and $b\in B$. Now both $s$ and $a$ lie on $\partial\ch(A\cup \{s\})$, and the previously proven case implies the existence of a plane spanning path $P_A$ in $A\cup\{s\}$ which starts in $s$ and ends in $a$. Similarly, $B\cup\{t\}$ contains a plane spanning path $P_B$ which starts in $b$ and ends in $t$. Then $P_A \circ ab \circ P_B$ is the desired path. \end{proof} \begin{figure} \caption{Illustration for the proof of \cref{lem:pq-path}.} \label{fig:pq-path-boundary} \label{fig:pq-path-inside} \label{fig:pq-path} \end{figure} The following result will be used in the proof of \cref{thm:main}, but at the same time it provides a strengthening of the result by Aichholzer et al.
on the existence of two edge-disjoint plane straight-line spanning paths in a point set~\cite{ahkklpsw-ppstpcgg-17}. \begin{restatable}{theorem}{PQPaths} \label{thm:pq-paths} Let $|S|\ge 5$ and let $s$ and $t$ be two (not necessarily distinct) points of $\partial \ch(S)$. Then $S$ contains two edge-disjoint plane spanning paths, one starting at $s$ and the other one at $t$. Moreover, if the points $s$ and $t$ are distinct, then the paths can be chosen so that none of them contains the edge ${st}$. \end{restatable} \begin{proof} We distinguish between two cases, $s\neq t$ and $s=t$. \ccase{c:unequal}\textit{$s\neq t$.} \subcase{sc:odd-not-halfing}\textit{$|S|$ is odd or $\overline{st}$ is not a balancing line; see \cref{fig:pq-paths-odd}.} Let $\ell$ be a balancing line passing through $s$ and no other point of $S$, chosen so that $t$ belongs to the smaller part in case $|S|$ is even. This line defines a balanced separated partition\xspace $S\setminus\{s\}=(S_1, S_2)$. If $|S|$ is odd, the choice of the partition is unique. Without loss of generality assume that $t\in S_2$. Since $|S|\ge 5$, we have $|S_1|\ge |S_2|\ge 2$. Let $s_0t_0$ be the edge of $\partial\ch(S_1\cup S_2)$ which intersects $\ell$ and is seen from the point $s$, with $s_0\in S_1$ and $t_0\in S_2$. Let $Z$ be the zig-zag path for $S_1\cup S_2$ starting in point $s_0$. Set $P=s \circ ss_0 \circ Z$. Since the zig-zag path $Z$ lies inside $\ch(S_1\cup S_2)$ and $ss_0$ is outside it, $P$ is a non-crossing path, and it visits all points of $S$. Let $s_1\neq s_0$ be a point on $\partial\ch(S_1)$ which is seen from $s$ (since $|S_1|\ge 2$ and $\{s\}\cup S_1$ is in general position, it follows from \cref{lem:seestwo} that $s$ sees at least two points of $\partial\ch(S_1)$, and thus $s_1$ exists). Similarly, let $t_1\neq t$ be a point on $\partial\ch(S_2)$ which is seen from $s$.
Let $Q_1$ be a plane spanning path for $S_2$ that starts in $t$ and ends in $t_1$ (its existence is guaranteed by \cref{lem:pq-path}), and let $P_1$ be a plane spanning path for $S_1$ starting in $s_1$ (again, such a path exists by \cref{lem:pq-path}; we could even prescribe its ending point, but we do not need to). Set $Q= Q_1 \circ t_1s \circ s \circ ss_1 \circ P_1$. This path is plane and it visits all points of $S$. The paths $P$ and $Q$ are edge-disjoint, since we made sure that they use different edges incident with $s$, and among the remaining edges, $P$ uses only edges with one end-point in $S_1$ and the other one in $S_2$, while $Q$ uses only edges with both end-points in $S_1$, or both in $S_2$. Neither path uses the edge $st$. \begin{figure} \caption{Illustration for \cref{sc:odd-not-halfing,ssc:exactly-two} of the proof of \cref{thm:pq-paths}.} \label{fig:pq-paths-odd} \label{fig:pq-paths-even} \label{fig:pq-paths} \end{figure} \subcase{sc:even-halfing}\textit{$|S|$ is even and $\overline{st}$ is a balancing line.} Then $\ell=\overline{st}$ defines a balanced separated partition\xspace $S\setminus\{s,t\}=(S_1,S_2)$. For the illustrative figures, suppose that $\ell$ is horizontal, $s$ is the leftmost point of $\ch(S)\cap\ell$ and $S_1$ is above and $S_2$ below $\ell$. Since $t$ is also on $\partial\ch(S)$, $t$ is the rightmost point of $\ch(S)\cap\ell$. Let $s_0t_0$ be the edge of $\ch(S_1\cup S_2)$ which intersects $\ell$ and whose intersection with $\ell$ is leftmost. Suppose without loss of generality that $s_0\in S_1$ and $t_0\in S_2$. \subsubcase{ssc:at-least-three}\textit{$|S_2(s)\cup S_2(t)|\ge 3$.} Let $Z$ be the zig-zag path for $S_1\cup S_2$ starting in point $s_0$. Denote its vertices by $z_0,z_1,\ldots,z_h$, with $z_0=s_0,z_2,\ldots,z_{h-1}\in S_1$, $z_1,z_3,\ldots,z_h\in S_2$, and the crossing points of $Z$ with the line $\ell$ by $x_1, x_2, \ldots, x_h$, with $x_i=z_{i-1}z_i\cap\ell$ for $i=1,2,\ldots,h$.
We know that the crossing points $x_1,\ldots, x_h$ are ordered from left to right. Since $t$ is on $\partial\ch(S)$, it lies to the right of $x_h$. The interior of the triangle $x_hz_ht$ does not contain any point of $S$, and hence the edge $z_ht$ does not intersect any edge of the zig-zag path $Z$. Therefore the path $P=s \circ ss_0 \circ Z \circ z_ht \circ t$ is a plane spanning path of~$S$. The assumption $|S_2(s)\cup S_2(t)|\ge 3$ implies, via \cref{lem:bad2case}, that there exist points $b\in S_2(t)\setminus\{z_h\}$ and $a\in S_2(s)$ with $a\neq b$ such that $sa$ and $bt$ are non-crossing. \cref{lem:seestwo} implies that $s$ sees a point $c\neq s_0$ on $\partial\ch(S_1)$. Now let $Q_2$ be a plane spanning path in $S_2$ starting at $b$ and ending at $a$ (guaranteed by \cref{lem:pq-path}) and $Q_1$ be a plane spanning path in $S_1$ starting at $c$. Then $Q=t \circ tb \circ Q_2 \circ as \circ s \circ sc \circ Q_1$ is a plane spanning path for $S$, and $P$ and $Q$ are edge-disjoint. \subsubcase{ssc:other-at-least-three}\textit{$|S_1(s)\cup S_1(t)|\ge 3$.} This case is symmetric to \cref{ssc:at-least-three}; we just start the zig-zag path $Z$ in $t_0$. \subsubcase{ssc:exactly-two}\textit{$|S_1(s)\cup S_1(t)| = |S_2(s)\cup S_2(t)| = 2$.} Consider again the path $P=s \circ ss_0 \circ Z \circ z_ht \circ t$ as in \cref{ssc:at-least-three} and note that, due to the assumption that both $s$ and $t$ see the same two points on $\partial\ch(S_1)$ (and $\partial\ch(S_2)$), it follows that $z_1=t_0$ and that $z_{h-1}z_h$ is an edge of $\partial\ch(S_1\cup S_2)$. Further, the edge $z_0z_h$ does not belong to $Z$ (since $h\ge 3$, and thus $z_h\neq z_1$). Let $Q_1$ be a plane spanning path for $S_1$ starting at $z_{h-1}$ and ending at $s_0$, and let $Q_2$ be a plane spanning path for $S_2$ starting at $z_h$ and ending at $t_0$ (the existence of such paths follows from \cref{lem:pq-path}).
Then $Q=t \circ tz_{h-1} \circ Q_1 \circ s_0z_h \circ Q_2 \circ t_0s \circ s$ is a plane spanning path which is edge-disjoint with $P$. Neither $P$ nor $Q$ uses the edge $st$. \ccase{c:equal}{$s=t$.} Let $S'=S\setminus\{s\}$. Then $|S'|\ge 4$ and \cref{lem:seestwo} implies that $|S'(s)|\ge 2$. Let $a\neq b$ be two consecutive points on $\partial\ch(S')$ which are seen from $s$. If $|S|\ge 6$, then $|S'|\ge 5$ and, by the already proven \cref{c:unequal}, $S'$ contains two edge-disjoint plane spanning paths $P'$ (starting at $a$) and $Q'$ (starting at $b$). It follows that $P=s \circ sa \circ P'$ and $Q=t \circ tb \circ Q'$ are edge-disjoint plane spanning paths for $S$ both starting in $s=t$; see \cref{fig:caseanalysis-ge6}. For $|S|=5$, we have $|S'|=4$. It is easy to see that for any two consecutive points on $\partial\ch(S')$, there exist two edge-disjoint plane spanning paths starting in these points; see \cref{fig:caseanalysis-4}. The paths starting at point $s$ are then constructed in the same way as in the case of $|S|\ge 6$. \end{proof} \begin{figure} \caption{Illustration for the case $s=t$ of the proof of \cref{thm:pq-paths}. } \label{fig:caseanalysis-ge6} \label{fig:caseanalysis-4} \label{fig:caseanalysis} \end{figure} \section{Three edge-disjoint plane spanning paths}\label{sec:Main} We first introduce a few technical notions. For two points $u,v$ in the plane, we denote by $(uv)^+$ the open halfplane to the right of the line $\overline{uv}$, when the line is traversed so that $u$ precedes $v$. The opposite open halfplane is denoted by $(uv)^-$. Note that $(uv)^-=(vu)^+$. Let $Q$ be a convex polygon and let $u,v$ be two adjacent vertices of $Q$ such that $Q\subseteq (uv)^+$. Then we say that $v$ is the \emph{clockwise neighbor} of $u$ along (the boundary of) $Q$ and $u$ is the \emph{counterclockwise neighbor} of $v$ along (the boundary of) $Q$.
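These conventions translate directly into orientation predicates. The following sketch (our names; the polygon is assumed to be given as a counterclockwise vertex list) encodes membership in $(uv)^+$ and the two neighbor notions:

```python
def cross(o, a, b):
    # Twice the signed area of the triangle o, a, b (> 0 iff ccw turn).
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def in_plus(u, v, p):
    # p lies in the open halfplane (uv)^+, i.e. strictly to the right of the
    # line through u and v, traversed so that u precedes v.
    return cross(u, v, p) < 0

def cw_neighbor(poly, u):
    # Clockwise neighbor of vertex u on a convex polygon in ccw order: the
    # polygon lies to the right of the edge from u to its ccw-list predecessor.
    return poly[poly.index(u) - 1]

def ccw_neighbor(poly, u):
    # Counterclockwise neighbor: the successor in the ccw vertex list.
    return poly[(poly.index(u) + 1) % len(poly)]
```

The identity $(uv)^-=(vu)^+$ corresponds to the antisymmetry of the `cross` predicate under swapping `u` and `v`.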
We shall omit the words ``the boundary of'' when talking about the (counter-)clockwise neighbor along $Q$. Let $(S_1,S_2)$ be a balanced separated partition\xspace of $S$. The \emph{visibility graph} of the partition is \[\ensuremath{\mathcal{V}}(S_1,S_2)=(S,\{ab:a\in S_1, b\in S_2, {ab}\cap (\ch(S_1)\cup \ch(S_2))=\{a,b\} \}),\] i.e., $ab\in E(\ensuremath{\mathcal{V}}(S_1,S_2))$ if and only if $a\in S_1(b)$ and $b\in S_2(a)$; see \cref{fig:visibility}. A path $v_1\circ v_1v_2\circ \ldots \circ v_k$ of $\ensuremath{\mathcal{V}}(S_1,S_2)$ is called \emph{switchable} if its edges $v_1v_2, v_2v_3, \ldots, v_{k-1}v_k$ cross the separating line of the partition $(S_1,S_2)$ in this order and for each $i=1,2,\ldots,k-2$, the open region bounded by the triangle $v_i v_{i+1} v_{i+2}$ contains no point of $S$. Observe that every switchable path is non-crossing. \begin{figure} \caption{Illustration for the definition of the visibility graph of a balanced separated partition\xspace. The edges of the visibility graph are drawn in purple.} \label{fig:visibility} \end{figure} The \emph{$n$-wheel configuration} $W_n$ of points (in general position) is a set of $n-1$ points in convex position, augmented with one point lying inside the convex hull of these $n-1$ points in such a position that every line that passes through the augmenting point and any other point is a balancing line of $W_n$. This point configuration plays an important role in the proof below, and we need to show that it contains three edge-disjoint plane spanning paths by an ad hoc construction, at least for the case of even $n$. This has already been sketched by Aichholzer et al.~\cite{ahkklpsw-ppstpcgg-17}; for completeness, we include a proof here. \begin{restatable}{proposition}{EvenWheel} \label{prop:even-wheel} For even $n\geq6$, the maximum number of edge-disjoint plane spanning paths in the wheel configuration $W_n$ is $\frac{n}{2}-1$.
\end{restatable} \begin{figure} \caption{Illustration for the proof of \cref{prop:even-wheel} for $W_8$.} \label{fig:evenwheel-1} \label{fig:evenwheel-2} \label{fig:evenwheel-3} \label{fig:evenwheel-4} \label{fig:evenwheel} \end{figure}\begin{proof} Let $y$ be the augmenting point lying inside $\ch(W_n)$ and let $x_1,x_2, \ldots,x_{n-1}$ be the points on $\partial\ch(W_n)$ listed in clockwise order; see \cref{fig:evenwheel}. First construct $\frac{n}{2}-1$ plane paths $P'_k = \ldots \circ x_{k-1} \circ x_{k-1}x_{k+\frac{n}{2}} \circ x_{k+\frac{n}{2}} \circ x_{k+\frac{n}{2}}x_k \circ x_k \circ x_kx_{k+\frac{n}{2}-1} \circ x_{k+\frac{n}{2}-1} \circ x_{k+\frac{n}{2}-1}x_{k+1} \circ x_{k+1} \circ \ldots$ for $k=1,2,\ldots,\frac{n}{2}-1$, with subscripts taken modulo $n-1$. Each of the paths is non-crossing and spans the points $W_n\setminus\{y\}$. They are pairwise edge-disjoint, because for every $k$, if $x_i$ and $x_j$ are consecutive on $P'_k$, then $i+j=2k+\frac{n}{2}-1$ or $2k+\frac{n}{2}$ (modulo $n-1$). Then, again for each $k$, create a path $P_k$ from $P'_k$ by replacing the edge $x_kx_{k+\frac{n}{2}-1}$ by the path $ x_ky \circ y \circ yx_{k+\frac{n}{2}-1}$. Again, the edges of these subpaths appear in no other path, and thus the paths $P_k, k=1,2,\ldots,\frac{n}{2}-1$ are edge-disjoint. Each of them is non-crossing, because for each $k$, $\overline{x_ky}$ is a balancing line, and they clearly span all points of $W_n$. It remains to prove that $W_n$ does not admit $\frac{n}{2}$ edge-disjoint plane spanning paths for $n\geq 6$. Suppose for a contradiction that $W_n$ admits $\frac{n}{2}$ edge-disjoint plane spanning paths. Since the number ${n\choose 2}$ of edges in the complete graph on $n$ vertices is exactly $\frac{n}{2}$ times the number $n-1$ of edges in a spanning path, each edge of the complete geometric graph with the vertex set $W_n$ lies in one of the $\frac{n}{2}$ edge-disjoint paths.
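The construction of the paths $P_k$ above can be checked mechanically. The sketch below is our own encoding (it realizes $W_n$ as a regular $(n-1)$-gon with its center, which for odd $n-1$ has the required balancing property, and labels the hull vertices $0,\ldots,n-2$): it rebuilds each zig-zag path from the two edge-sum classes $2k+\frac{n}{2}-1$ and $2k+\frac{n}{2}$ modulo $n-1$ and inserts the center point, encoded by the sentinel `'y'`.

```python
import math

def cross(o, a, b):
    # Twice the signed area of the triangle o, a, b (> 0 iff ccw turn).
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def wheel_path_indices(n, k):
    # Zig-zag path P'_k on hull vertices 0..n-2: consecutive indices sum
    # alternately to B = 2k + n/2 and A = 2k + n/2 - 1 (mod n-1).  The path
    # starts at the unique vertex unmatched in the A-sum matching.
    m = n - 1
    A, B = (2*k + n//2 - 1) % m, (2*k + n//2) % m
    seq = [A * ((m + 1) // 2) % m]
    use_B = True
    while len(seq) < m:
        s = B if use_B else A
        seq.append((s - seq[-1]) % m)
        use_B = not use_B
    assert len(set(seq)) == m
    return seq

def wheel_path(n, k):
    # P_k: replace the chord {k, k + n/2 - 1} of P'_k by the detour via 'y'.
    m = n - 1
    chord = {k % m, (k + n//2 - 1) % m}
    seq = wheel_path_indices(n, k)
    out = []
    for i, v in enumerate(seq):
        out.append(v)
        if i + 1 < m and {v, seq[i + 1]} == chord:
            out.append('y')
    return out

def embed(n, path):
    # Hull vertices on a unit circle in clockwise order, 'y' at the center.
    m = n - 1
    pts = [(math.cos(-2*math.pi*i/m), math.sin(-2*math.pi*i/m)) for i in range(m)]
    return [(0.0, 0.0) if v == 'y' else pts[v] for v in path]

def properly_cross(p, q, r, s):
    # Open segments pq and rs cross (general position assumed).
    return (cross(p, q, r) * cross(p, q, s) < 0
            and cross(r, s, p) * cross(r, s, q) < 0)
```

For $n=8$ this yields three spanning paths of $W_8$ that are pairwise edge-disjoint and individually non-crossing, matching the construction in the proof.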
Since the degree of each vertex is $n-1$ in the complete graph, it follows that each vertex is the endpoint of exactly one of the paths. Consider first the beginning of the path $P_A$ starting in the single interior point $A$ of $W_n$. It continues to some extreme point $B$ and then to one of the neighbors of $B$ along the boundary of the convex hull of $W_n$. We denote this point by $C$ and suppose without loss of generality that it is the clockwise neighbor of $B$. Denote the clockwise neighbor of $C$ by $D$ and the clockwise neighbor of $D$ by $E$. The path starting in $A$ must continue from $C$ to $D$, since any other edge from $C$ would either intersect the edge $AB$ or separate the point $D$ from the counterclockwise neighbor of $B$; since the path is assumed to be plane, it could not then contain both of these points. Consider now the path $P_C$ starting in the point $C$. The two edges from $C$ to the neighboring extreme points $B$ and $D$ of $W_n$ are used in the path $P_A$. It follows that the path $P_C$ goes from $C$ to $A$, since any other starting edge from $C$ would separate $B$ and $D$ which would make it impossible to include both $B$ and $D$ in $P_C$. For a similar reason, the path $P_C$ continues from $A$ to one of the two neighbors of $C$. Since the edge $AB$ is present in $P_A$, the path $P_C$ continues from $A$ to $D$ and then, again by a similar argument, from $D$ to $E$. Consider now the path $P_D$ starting in $D$. By the above analysis, the three edges $DA$, $DC$ and $DE$ are present in the union of the paths $P_A$ and $P_C$. Thus, $P_D$ starts with some other edge incident to $D$. Any such edge separates $C$ and $E$ and therefore these two points cannot be both in $P_D$, a contradiction. \end{proof} Our later proof of \cref{thm:main} is based on the following structural result. \begin{theorem}\label{thm:crucial} Let $S$ be a set of $n\ge 5$ points in general position in the plane.
Then at least one of the following holds \begin{enumerate} \item\label[condition]{thm:crucial-1} $S$ has a balanced separated partition\xspace $(S_1,S_2)$ such that $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains two crossing edges, or \item\label[condition]{thm:crucial-2} $S$ has a balanced separated partition\xspace $(S_1,S_2)$ such that $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains a switchable path of length 3 and a bridged vertex that is not included in the path and is incident with at least 2 edges of $\ensuremath{\mathcal{V}}(S_1,S_2)$, or \item\label[condition]{thm:crucial-3} $n$ is even and $S$ is the wheel configuration $W_n$. \end{enumerate} \end{theorem} \begin{proof} {\bf Case 1: $n$ is odd}. A line $\overline{xy}$ passing through two points $x,y\in S$ is an \emph{almost-balancing line} of $S$, if exactly $(n-1)/2$ points of $S\setminus\{x,y\}$ lie on one of the sides of $\overline{xy}$ and the remaining $(n-3)/2$ points of $S\setminus\{x,y\}$ lie on the other side of $\overline{xy}$. We fix an extreme point $u$ of $S$. Let $\overline{ua}$, $\overline{ub}$ be the two almost-balancing lines passing through $u$; see \cref{fig:crucial-partition}. Suppose $b\in (ua)^+$. Note that the interior of the convex wedge bounded by the two rays emanating from $u$, one passing through $a$ and the other passing through $b$, contains no point of $S$. Each of the lines $\overline{ua}$ and $\overline{ub}$ partitions the set $S\setminus\{u,a,b\}$ into two sets $A$ and $B$ of equal size $(n-3)/2$, such that $A$ lies to the left of the lines $\overline{ua}$ and $\overline{ub}$ and $B$ lies to the right of them. The line $\overline{ab}$ partitions $A$ into $A_1$ and $A_2$, and $B$ into $B_1$ and $B_2$, such that $A_1$ and $B_1$ lie in $({ab})^+$, and $A_2$ and $B_2$ lie in $({ab})^-$.
\begin{figure} \caption{Illustration for Case~1 of the proof of \cref{thm:crucial}.} \label{fig:crucial-partition} \label{fig:crucial-nonempty} \label{fig:crucial-empty} \label{fig:crucial} \end{figure} Suppose first that $A_1\neq\emptyset$, and let $a_{\circlearrowright}$ be the clockwise neighbor of $a$ along $\ch(A\cup\{a\})$; see \cref{fig:crucial-nonempty}. Consider the balanced separated partition\xspace $(S_1,S_2)$ with $S_1=A\cup\{a\}$ and $S_2=B\cup\{b,u\}$ of $S$. Its visibility graph $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains the crossing edges $au$ and $a_{\circlearrowright}b$, which proves the result in this case. If $B_1\ne\emptyset$, then we can analogously find a crossing pair of edges in $\ensuremath{\mathcal{V}}(A\cup\{a,u\},B\cup\{b\})$. Thus, we may further assume that $A_1=B_1=\emptyset$. Then we have $|A_2|=|B_2|=(n-3)/2>0$. Let $a_\circlearrowleft\in A_2$ be the counterclockwise neighbor of $a$ along $\ch(A\cup\{a\})$, and let $b_\circlearrowright$ be the clockwise neighbor of $b$ along $\ch(B\cup\{u,b\})$. Then $a_\circlearrowleft b$, $ab_\circlearrowright$ is a crossing pair of edges of $\ensuremath{\mathcal{V}}(A\cup\{a\},B\cup\{b,u\})$. \noindent {\bf Case 2: $n$ is even}. A line passing through two points $x,y\in S$ is a \emph{halving line} of $S$, if exactly $\frac{n-2}{2}$ points of $S$ lie on each of its two sides. If $\overline{xy}$ is a halving line of $S$, where $x,y\in S$, then the segment $xy$ is called a \emph{halving segment} of $S$. \begin{claim}\label{clm:halving} Let $uv$ be a halving segment of a set $S$ of $n$ points in general position in the plane such that $u\in\partial\ch(S)$.
Then there is another halving segment $pq$ of $S$ such that the following three conditions hold: \begin{enumerate*}[label=(\arabic*)] \item an unbounded part of the ray emanating from $p$ and passing through $q$ lies in $(uv)^+$; \item no point of $S\setminus\{u,v,p,q\}$ lies in the double-wedge $((uv)^+\cap (pq)^-)\cup((uv)^-\cap (pq)^+)$; \item $p=v$, or the two open segments $uv$ and $pq$ cross. \end{enumerate*} \end{claim} \begin{proof} Suppose without loss of generality that $uv$ is a vertical line and $u$ lies below $v$; see \cref{fig:halving}. Let $X:=\ch(S\cap (uv)^-)$ and $Y:=\ch(S\cap (uv)^+)$. Let $x\in X$ and $y\in Y$ be the extreme points of $X$ and $Y$, respectively, such that $\overline{xy}$ avoids the interiors of $X$ and $Y$, the set $X$ lies above $\overline{xy}$, and $Y$ lies below $\overline{xy}$. If the open segments $uv$ and $xy$ cross, then setting $p=x$ and $q=y$ yields the claim; see \cref{fig:halving-cross}. Otherwise, $xy$ crosses the line $\overline{uv}$ above $v$; see \cref{fig:halving-nocross}. Let $p=v$ and $q\in Y$ be the extreme point of $Y$ seen from $p$ as the highest point of $Y$. (That is, all points of $Y\setminus\{q\}$ lie below the line $\overline{pq}$.) Then $pq$ is a halving segment satisfying the claim. \end{proof} \begin{figure} \caption{Illustration for \cref{clm:halving}.} \label{fig:halving-cross} \label{fig:halving-nocross} \label{fig:halving} \end{figure} Fix an extreme point $u$ of $S$. Let $uv$ be the unique halving segment incident to $u$. Let $A:=S\cap (uv)^-$ and $B:=S\cap (uv)^+$. Since $uv$ is a halving segment, we have $|A|=|B|=\frac{n-2}{2}$. Let $pq$ be the halving segment guaranteed by \cref{clm:halving}. If the halving segments $uv$ and $pq$ cross each other, then they both belong to the visibility graph $\ensuremath{\mathcal{V}}(A',B')$, where $A':=A\cup\{v\}$ and $B':=B\cup\{u\}$, and \cref{thm:crucial-1} applies; see \cref{fig:crucial-cross}.
Thus we may assume that $p=v$ and there are no points of $S$ inside the double wedge $((uv)^+\cap (vq)^-)\cup((uv)^-\cap (vq)^+)$. By a mirror argument, we can also assume that there is a point $r\in A$ such that there are no points of $S$ inside the double wedge $((uv)^-\cap (vr)^+)\cup((uv)^+\cap (vr)^-)$. The line $\overline{qr}$ partitions $A\setminus\{r\}$ ($B\setminus\{q\}$, respectively) into two sets $A_1$ and $A_2$ ($B_1$ and $B_2$, respectively) such that $A_1$, $B_1$ lie in $(qr)^-$ and $A_2$, $B_2$ lie in $(qr)^+$. We now distinguish three cases. Consider first the case $A_2\neq\emptyset$ and $B_2\neq\emptyset$; see \cref{fig:crucial-bothempty}. Then the counterclockwise neighbor $r_\circlearrowleft $ of $r$ along $\ch(A\cup\{u\})$ lies in $A_2$. Similarly, the clockwise neighbor $q_\circlearrowright$ of $q$ along $\ch(B\cup\{v\})$ lies in $B_2$. It follows that the edges $qr_\circlearrowleft $ and $rq_\circlearrowright$ form a crossing pair in $\ensuremath{\mathcal{V}}((A\cup\{u\}),(B\cup\{v\}))$, and \cref{thm:crucial-1} applies. \begin{figure} \caption{Illustration for Case 2 of \cref{thm:crucial}.} \label{fig:crucial-cross} \label{fig:crucial-bothempty} \label{fig:crucial-oneempty} \label{fig:crucial-bothempty-1} \label{fig:crucial-bothempty-2} \label{fig:crucial-bothempty-3} \label{fig:crucial-case2} \end{figure} Consider now the case that one of the sets $A_2$ and $B_2$ is empty and the other one is non-empty; see \cref{fig:crucial-oneempty}. By symmetry, we may assume that $A_2\neq\emptyset$ and $B_2=\emptyset$. Then $B_1\neq\emptyset$. We consider the balanced separated partition\xspace $(A',B')$, where $A':=A\cup\{u\}$ and $B':=B\cup\{v\}$. Let $r_\circlearrowleft $ be the counterclockwise neighbor of $r$ along $\ch(A')$. Since $A_2\neq\emptyset$, $r_\circlearrowleft $ lies in $A_2$.
Then (1) $u$ is a bridged vertex for the partition $(A',B')$, (2) $u$ is incident with at least 2 edges of $\ensuremath{\mathcal{V}}(A',B')$ -- the edge $uv$ and the edge $uv_{\circlearrowleft}$ for the counter-clockwise neighbor $v_{\circlearrowleft}$ of $v$ along $\ch(B')$, and (3) $v\circ vr\circ r\circ rq\circ q\circ qr_\circlearrowleft \circ r_\circlearrowleft $ is a switchable path in $\ensuremath{\mathcal{V}}(A',B')$, and \cref{thm:crucial-2} applies. Finally, consider that $A_2=\emptyset$ and $B_2=\emptyset$; see \cref{fig:crucial-bothempty-1}. Then $q$ and $r$ are neighbors along $\ch(S)$, and we again consider the whole analysis which started with fixing an extreme point of $S$, but now we fix the point $u':=r$ instead of $u$; see \cref{fig:crucial-bothempty-2}. Either we find a balanced separated partition\xspace satisfying \cref{thm:crucial-1} or \cref{thm:crucial-2}, or we find two neighbors $q'$ and $r'$ along $\ch(S)$. In the first case we are done. In the latter case, the point $q'$ is actually equal to $u$ and it is clockwise of $r'$ along $\ch(S)$. We then again consider the analysis which started with fixing an extreme point of $S$, but now we fix the point $u'':=r'$ instead of $u$; see \cref{fig:crucial-bothempty-3}. Continuing this process, at some point we find a balanced separated partition\xspace satisfying \cref{thm:crucial-1} or \cref{thm:crucial-2}, or otherwise after $n/2$ repetitions of the analysis starting with fixing an extreme point of $S$ we would conclude that the set $S$ is necessarily the wheel configuration $W_n$. \end{proof} Let $Z$ be a zig-zag path for a partition $(S_1,S_2)$ of $S$. An edge $ab$ of $S$ is called \emph{free} (with respect to $Z$) if $ab\in E(\ensuremath{\mathcal{V}}(S_1,S_2))$ and $ab\not\in E(Z)$. \begin{lemma}\label{lem:2freeedges} Let $|S|\ge 10$ and let $Z$ be a zig-zag path for a balanced separated partition\xspace {} $(S_1,S_2)$ of $S$ which leaves at least two free edges.
Then $S$ allows three edge-disjoint plane spanning paths. \end{lemma} \begin{figure} \caption{Illustration for the proof of \cref{lem:2freeedges}} \label{fig:2freeedges-Z} \label{fig:2freeedges-PQ} \label{fig:2freeedges-all3} \label{fig:2freeedges} \end{figure} \begin{proof} Let $ab$ and $cd$ be two free edges with respect to a zig-zag path $Z$ and a balanced separated partition\xspace $(S_1,S_2)$; see \cref{fig:2freeedges-Z}. Since $|S|\ge 10$, we have $|S_1|\ge 5$ and $|S_2|\ge 5$. Suppose $a,c\in \partial\ch(S_1)$ and $b,d\in \partial\ch(S_2)$. We may have $a=c$ or $b=d$, but not both. Let $P_1$ and $Q_1$ be edge-disjoint plane spanning paths for $S_1$, with $P_1$ starting at $a$ and $Q_1$ starting at $c$. Similarly, let $P_2$ and $Q_2$ be edge-disjoint plane spanning paths for $S_2$, with $P_2$ starting at $b$ and $Q_2$ starting at $d$. The existence of such paths is guaranteed by \cref{thm:pq-paths}. Then $ P=P_2^{-1} \circ ba \circ P_1$ and $Q=Q_2^{-1} \circ dc \circ Q_1$ are edge-disjoint plane spanning paths for $S$; see \cref{fig:2freeedges-PQ}. Each of them is plane because the edge $ab$ ($cd$, respectively) contains no point in the interior of $\ch(S_1)$ (of $\ch(S_2)$, respectively). Both of them are edge-disjoint with $Z$, because their only two edges that are incident with vertices from both $S_1$ and $S_2$ are $ab$ and $cd$, and these are by assumption free w.r.t.\ $Z$; see \cref{fig:2freeedges-all3}. \end{proof} The following lemmas can be proved similarly, using \cref{lem:seestwo,lem:claimbad2case}. \begin{restatable}{lemma}{BigN} \label{lem:bigN} Let $|S|\ge 10$ and let $Z$ be a zig-zag path for a balanced separated partition\xspace $(S_1,S_2)$ of $S$ which contains all three edges of a switchable path of length~3 in $\ensuremath{\mathcal{V}}(S_1,S_2)$. Then $S$ contains three edge-disjoint plane spanning paths.
\end{restatable} \begin{figure} \caption{Illustration for the proof of \cref{lem:bigN}} \label{fig:bigN-Z} \label{fig:bigN-Z'} \label{fig:bigN-PQ} \label{fig:bigN} \end{figure} \begin{proof} Note that $|S_i|\ge 5$ for $i=1,2$. Let $ab,bc,cd$ be the edges of a switchable path $P\subseteq Z$, and assume they occur on $Z$ in this order, i.e., $Z=Z_1 \circ ab \circ b \circ bc \circ c \circ cd \circ Z_2$. Since $a,c$ ($b,d$) are neighbors along $\partial\ch(S_1)$ ($\partial\ch(S_2)$, respectively), the edges $ac$ and $bd$ are not crossed by any edge of $Z$, and hence $Z'=Z_1 \circ ac \circ c \circ cb \circ b \circ bd \circ Z_2$ is a plane spanning path for $S$; see \cref{fig:bigN-Z'}. Now \cref{thm:pq-paths} implies that there exist edge-disjoint plane spanning paths $P_1, Q_1$ of $S_1$, with $P_1$ starting at $a$ and $Q_1$ starting at $c$, neither of them containing the edge $ac$. Similarly, there exist edge-disjoint plane spanning paths $P_2, Q_2$ for $S_2$, with $P_2$ starting at $b$ and $Q_2$ starting at $d$, neither of them containing the edge $bd$; see \cref{fig:bigN-PQ}. It follows that $P=P^{-1}_1 \circ ab \circ P_2$ and $Q=Q^{-1}_1 \circ cd \circ Q_2$ are edge-disjoint plane spanning paths of $S$, and they are both edge-disjoint with $Z'$, since $Z'\cap \{ab,cd\}=\emptyset$ and $(P\cup Q)\cap \{ac,bd\}=\emptyset$; see \cref{fig:bigN-PQ}. \end{proof} \begin{restatable}{lemma}{Diamond} \label{lem:diamond} Let $|S|\ge 10$ and let $(S_1,S_2)$ be a balanced separated partition\xspace of $S$ such that $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains two crossing edges. Then $S$ contains three edge-disjoint plane spanning paths. \end{restatable} \begin{proof} Note that $|S_i|\ge 5$ for $i=1,2$. We first argue that $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains two crossing edges $ad$ and $bc$, with $a,c\in S_1$ and $b,d\in S_2$, such that $a$ and $c$ are consecutive on $\partial\ch(S_1)$ and $b$ and $d$ are consecutive on $\partial\ch(S_2)$; see \cref{fig:diamond-abcd}.
To see this, suppose $a'd', b'c'$ are crossing edges of $\ensuremath{\mathcal{V}}(S_1,S_2)$, with $a',c'\in S_1$ and $b',d'\in S_2$. \cref{lem:claimbad2case} implies that $a'b',c'd'\in E(\ensuremath{\mathcal{V}}(S_1,S_2))$. \cref{lem:seestwo} states that every point on the boundary path of $\ch(S_1)$ between $a'$ and $c'$ sees both $b'$ and $d'$, and any point on $\partial\ch(S_2)$ between $b'$ and $d'$ sees both $a'$ and $c'$; consequently, these points induce a complete bipartite subgraph of $\ensuremath{\mathcal{V}}(S_1,S_2)$. Hence it suffices to take $c=c'$ and, as $a$, its neighbor along $\partial\ch(S_1)$ in the direction of $a'$, and to take $d=d'$ and, as $b$, its neighbor along $\partial\ch(S_2)$ in the direction of $b'$. \begin{figure} \caption{Illustration for the proof of \cref{lem:diamond}} \label{fig:diamond-abcd} \label{fig:diamond-twofree} \label{fig:diamond-switchable} \label{fig:diamond} \end{figure} Now consider the four points $a,b,c,d$, and consider a zig-zag path $Z$ with respect to the partition $(S_1,S_2)$. If $Z$ contains at most 2 of the edges $ab,ad,bc,cd$, then $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains 2 free edges w.r.t.\ $Z$ and $S$ contains 3 edge-disjoint plane spanning paths according to \cref{lem:2freeedges}; see \cref{fig:diamond-twofree}. Obviously, $Z$ cannot contain all four of these edges, since $Z$ is non-crossing. In the remaining case, when $Z$ contains exactly three of the edges $ab,bc,ad,cd$, suppose w.l.o.g. that $Z$ contains $ab,bc,cd$. Then \cref{lem:bigN} implies that $S$ allows three edge-disjoint plane spanning paths; see \cref{fig:diamond-switchable}.
\end{proof} \begin{restatable}{lemma}{BigNPlus} \label{lem:bigN+} Let $|S|\ge 10$ and let $(S_1,S_2)$ be a balanced separated partition\xspace of $S$ with $|S_1|\ge |S_2|$ such that $\ensuremath{\mathcal{V}}(S_1,S_2)$ contains a switchable path of length~3 and a bridged vertex in $S_1$ which does not belong to the switchable path and which is incident with at least two edges of $\ensuremath{\mathcal{V}}(S_1,S_2)$. Then $S$ contains three edge-disjoint plane spanning paths. \end{restatable} \begin{figure} \caption{Illustration for the proof of \cref{lem:bigN+}} \label{fig:bigNPlus-miss} \label{fig:bigNPlus-hit} \label{fig:bigNPlus} \end{figure} \begin{proof} Let $u\in S_1$ be a bridged vertex of $S_1$ which does not belong to a switchable path $P$ and is incident with at least two edges of $\ensuremath{\mathcal{V}}(S_1,S_2)$; see \cref{fig:bigNPlus}. Let $Z$ be a zig-zag path starting at point $u$ (since $u$ is incident with a bridge of the partition $(S_1,S_2)$, such a path always exists, as argued in the proof sketch of \cref{lem:zigzag}). Then the degree of $u$ in $Z$ is 1, and since $u$ is incident with at least 2 edges of $\ensuremath{\mathcal{V}}(S_1,S_2)$ by the assumption, $u$ is incident with at least one free edge w.r.t.\ $Z$. If $Z$ misses at least one of the edges of $P$, it leaves at least two free edges and $S$ contains three edge-disjoint plane spanning paths by \cref{lem:2freeedges}; see \cref{fig:bigNPlus-miss}. Otherwise, $Z$ contains all three edges of the switchable path $P$, and three edge-disjoint plane spanning paths exist according to \cref{lem:bigN}; see \cref{fig:bigNPlus-hit}. \end{proof} We are now ready to prove our main result, \cref{thm:main}. \ThmMain* \begin{proof} Given a set of points $S$, apply \cref{thm:crucial}. If $S$ allows a balanced separated partition\xspace with two crossing edges in its visibility graph, $S$ has three edge-disjoint plane spanning paths according to \cref{lem:diamond}.
If $S$ allows a balanced separated partition\xspace whose visibility graph contains a switchable path of length three and a bridged vertex, then $S$ has three edge-disjoint plane spanning paths according to \cref{lem:bigN+}. If none of these cases apply, then by \cref{thm:crucial} $S$ is the wheel configuration $W_n$ and $n$ is even, in which case $S$ has $n/2-1\ge 4$ edge-disjoint plane spanning paths according to \cref{prop:even-wheel}. \end{proof} \section{Conclusion} In this paper, we showed that every set of at least 10 points in general position admits three edge-disjoint plane spanning paths. While we mostly focused on the combinatorial part, it is easy to see that our constructive arguments give rise to a polynomial time algorithm. We note that it is a simple exercise to verify that the 6-wheel configuration does not contain three edge-disjoint plane spanning paths. On the other hand, it was verified by a computer program that all point sets of 7, 8, or 9 points contain three edge-disjoint plane spanning paths~\cite{Scheucher}. Can \cref{thm:pq-paths} be strengthened? Does any set of $n$ points (for large enough $n$) in general position contain, for any choice of two distinct points $s,t$ (not necessarily lying on the boundary of the convex hull of the set), edge-disjoint plane spanning paths starting at these points and not containing the edge $st$? Let us mention in this connection that our main result, \cref{thm:main}, cannot be strengthened in the manner of \cref{thm:pq-paths}. If the points of $S$ are in convex position and the starting points of the three paths are prescribed to be the same point of $S$, then three edge-disjoint plane spanning paths do not exist (for a convex position, every path must start with an edge of $\partial\ch(S)$, and for a single point, there are only two such edges). The question is currently open to us whether three such paths exist if the starting points are required to be distinct.
\section{Acknowledgment} The second and fourth author gratefully acknowledge the support of the Czech Science Foundation through research grant GA\v{C}R 23-04949X. All authors acknowledge the working atmosphere of the Homonolo meetings, where the research was initiated and part of the results were obtained, as well as of the Bertinoro Workshops on Graph Drawing, during which we could meet and informally work on the project. Our special thanks go to Manfred Scheucher, whose experimental results encouraged us to keep working on the problem at a time when all hopes for a solution seemed far out of sight. \end{document}
Richard's paradox In logic, Richard's paradox is a semantical antinomy of set theory and natural language first described by the French mathematician Jules Richard in 1905. The paradox is ordinarily used to motivate the importance of distinguishing carefully between mathematics and metamathematics. Kurt Gödel specifically cites Richard's antinomy as a semantical analogue to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The paradox was also a motivation for the development of predicative mathematics. Description The original statement of the paradox, due to Richard (1905), is strongly related to Cantor's diagonal argument on the uncountability of the set of real numbers. The paradox begins with the observation that certain expressions of natural language define real numbers unambiguously, while other expressions of natural language do not. For example, "The real number the integer part of which is 17 and the nth decimal place of which is 0 if n is even and 1 if n is odd" defines the real number 17.1010101... = 1693/99, whereas the phrase "the capital of England" does not define a real number, nor the phrase "the smallest positive integer not definable in under sixty letters" (see Berry's paradox). There is an infinite list of English phrases (such that each phrase is of finite length, but the list itself is of infinite length) that define real numbers unambiguously. We first arrange this list of phrases by increasing length, then order all phrases of equal length lexicographically, so that the ordering is canonical. This yields an infinite list of the corresponding real numbers: r1, r2, ... . Now define a new real number r as follows. The integer part of r is 0, the nth decimal place of r is 1 if the nth decimal place of rn is not 1, and the nth decimal place of r is 2 if the nth decimal place of rn is 1. 
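The diagonal rule in the preceding paragraph is easy to make concrete. The sketch below is an illustration only (the full list of defining phrases is, of course, not computable, which is exactly the point of the paradox): it applies the rule to a hypothetical finite prefix of the list, with each rn represented by its string of decimal digits. The function name `diagonal_digits` is introduced here for illustration, not taken from the article.

```python
def diagonal_digits(decimal_expansions):
    """Given, for each n, the decimal digits of r_n (0-indexed),
    return the first digits of the diagonal number r:
    digit n of r is 1 if digit n of r_n is not 1, and 2 otherwise."""
    digits = []
    for n, expansion in enumerate(decimal_expansions):
        d = expansion[n]  # the n-th decimal place of r_n
        digits.append('1' if d != '1' else '2')
    return '0.' + ''.join(digits)

# A hypothetical prefix of the list; only the decimal parts matter.
prefix = ['1010101', '3333333', '1415926']
print(diagonal_digits(prefix))  # 0.212
```

By construction the result differs from every rn in the nth decimal place, so r cannot appear anywhere in the enumerated list.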
The preceding paragraph is an expression in English that unambiguously defines a real number r. Thus r must be one of the numbers rn. However, r was constructed so that it cannot equal any of the rn (thus, r is an undefinable number). This is the paradoxical contradiction. Analysis and relationship with metamathematics Richard's paradox results in an untenable contradiction, which must be analyzed to find an error. The proposed definition of the new real number r clearly includes a finite sequence of characters, and hence it seems at first to be a definition of a real number. However, the definition refers to definability-in-English itself. If it were possible to determine which English expressions actually do define a real number, and which do not, then the paradox would go through. Thus the resolution of Richard's paradox is that there is not any way to unambiguously determine exactly which English sentences are definitions of real numbers (see Good 1966). That is, there is not any way to describe in a finite number of words how to tell whether an arbitrary English expression is a definition of a real number. This is not surprising, as the ability to make this determination would also imply the ability to solve the halting problem and perform any other non-algorithmic calculation that can be described in English. A similar phenomenon occurs in formalized theories that are able to refer to their own syntax, such as Zermelo–Fraenkel set theory (ZFC). Say that a formula φ(x) defines a real number if there is exactly one real number r such that φ(r) holds. Then it is not possible to define, by ZFC, the set of all (Gödel numbers of) formulas that define real numbers. For, if it were possible to define this set, it would be possible to diagonalize over it to produce a new definition of a real number, following the outline of Richard's paradox above. 
Note that the set of formulas that define real numbers may exist, as a set F; the limitation of ZFC is that there is not any formula that defines F without reference to other sets. This is related to Tarski's undefinability theorem. The example of ZFC illustrates the importance of distinguishing the metamathematics of a formal system from the statements of the formal system itself. The property D(φ) that a formula φ of ZFC defines a unique real number is not itself expressible by ZFC, but must be considered as part of the metatheory used to formalize ZFC. From this viewpoint, Richard's paradox results from treating a construction of the metatheory (the enumeration of all statements in the original system that define real numbers) as if that construction could be performed in the original system. Variation: Richardian numbers A variation of the paradox uses integers instead of real numbers, while preserving the self-referential character of the original. Consider a language (such as English) in which the arithmetical properties of integers are defined. For example, "the first natural number" defines the property of being the first natural number, one; and "divisible by exactly two natural numbers" defines the property of being a prime number (It is clear that some properties cannot be defined explicitly, since every deductive system must start with some axioms. But for the purposes of this argument, it is assumed that phrases such as "an integer is the sum of two integers" are already understood). While the list of all such possible definitions is itself infinite, it is easily seen that each individual definition is composed of a finite number of words, and therefore also a finite number of characters. Since this is true, we can order the definitions, first by length and then lexicographically. 
Now, we may map each definition to a natural number, such that the first definition in this ordering corresponds to the number 1, the next definition in the series corresponds to 2, and so on. Since each definition is associated with a unique integer, it may occasionally happen that the integer assigned to a definition fits that definition. If, for example, the definition "not divisible by any integer other than 1 and itself" happened to be 43rd, then, since 43 is itself not divisible by any integer other than 1 and itself, the number of this definition would have the property of the definition itself. However, this may not always be the case. If the definition "divisible by 3" were assigned to the number 58, then the number of the definition would not have the property of the definition itself, since 58 is not divisible by 3. Numbers of this latter kind are said to be Richardian. Thus, if a number is Richardian, then the definition corresponding to that number is a property that the number itself does not have. (More formally, "x is Richardian" is equivalent to "x does not have the property designated by the defining expression with which x is correlated in the serially ordered set of definitions".) Thus in this example, 58 is Richardian, but 43 is not. Now, since the property of being Richardian is itself a numerical property of integers, it belongs in the list of all definitions of properties. Therefore, the property of being Richardian is assigned some integer, n. For example, the definition "being Richardian" might be assigned to the number 92. Finally, the paradox becomes: Is 92 Richardian? Suppose 92 is Richardian. This is only possible if 92 does not have the property designated by the defining expression with which it is correlated. In other words, this means 92 is not Richardian, contradicting our assumption.
However, if we suppose 92 is not Richardian, then it does have the defining property to which it corresponds. This, by definition, means that it is Richardian, again contrary to assumption. Thus, the statement "92 is Richardian" cannot consistently be designated as either true or false. Relation to predicativism Another opinion concerning Richard's paradox relates to mathematical predicativism. By this view, the real numbers are defined in stages, with each stage only making reference to previous stages and other things that have already been defined. From a predicative viewpoint it is not valid to quantify over all real numbers in the process of generating a new real number, because this is believed to result in a circularity problem in the definitions. Set theories such as ZFC are not based on this sort of predicative framework, and allow impredicative definitions. Richard (1905) presented a solution to the paradox from the viewpoint of predicativism. Richard claimed that the flaw of the paradoxical construction was that the expression for the construction of the real number r does not actually define a real number unambiguously, because the statement refers to the construction of an infinite set of real numbers, of which r itself is a part. Thus, Richard says, the real number r will not be included as any rn, because the definition of r does not meet the criteria for being included in the sequence of definitions used to construct the sequence rn. Contemporary mathematicians agree that the definition of r is invalid, but for a different reason. They believe the definition of r is invalid because there is no well-defined notion of when an English phrase defines a real number, and so there is no unambiguous way to construct the sequence rn. Although Richard's solution to the paradox did not gain favor with mathematicians, predicativism is an important part of the study of the foundations of mathematics.
Predicativism was first studied in detail by Hermann Weyl in Das Kontinuum, wherein he showed that much of elementary real analysis can be conducted in a predicative manner starting with only the natural numbers. More recently, predicativism has been studied by Solomon Feferman, who has used proof theory to explore the relationship between predicative and impredicative systems.[1] See also • Algorithmic information theory • Berry paradox, which also uses numbers definable by language. • Curry's paradox • Grelling–Nelson paradox • Kleene–Rosser paradox • List of paradoxes • Löb's theorem • Ordinal definable set, a set-theoretic concept of definability that is itself definable in the language of set theory • Paradoxes of set theory • Russell's paradox: Does the set of all those sets that do not contain themselves contain itself? References 1. Solomon Feferman, "Predicativity" (2002) • Fraenkel, Abraham; Bar-Hillel, Yehoshua & Levy, Azriel (1973). Foundations of Set Theory. With the collaboration of Dirk van Dalen (Second ed.). Amsterdam: Noord-Hollandsche. ISBN 0-7204-2270-1. • Good, I. J. (1966). "A Note on Richard's Paradox". Mind. 75 (299): 431. doi:10.1093/mind/LXXV.299.431. • Richard, Jules (1905). Les Principes des Mathématiques et le Problème des Ensembles. Revue Générale des Sciences Pures et Appliquées. Translated in Heijenoort, J. van, ed. (1964). Source Book in Mathematical Logic 1879-1931. Cambridge, MA: Harvard University Press. 
External links • "Paradoxes and contemporary logic", Stanford Encyclopedia of Philosophy
If \begin{align*} 3x+4y-12z&=10,\\ -2x-3y+9z&=-4, \end{align*} compute $x$. Let $w=y-3z$. The equations become \begin{align*} 3x+4w&=10,\\ -2x-3w&=-4. \end{align*} Adding four times the second equation to three times the first equation, $$9x+12w-8x-12w=30-16\Rightarrow x=\boxed{14}.$$
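The elimination above can be double-checked mechanically. The following sketch (plain Python with exact fractions) cross-multiplies the reduced $2\times 2$ system in $x$ and $w$, mirroring the computation above; `solve_x` is a helper name introduced here for illustration, not part of the original solution.

```python
from fractions import Fraction

# Solve a1*x + b1*w = c1 and a2*x + b2*w = c2 for x by elimination:
# b2*(eq1) - b1*(eq2) cancels w, leaving (b2*a1 - b1*a2)*x = b2*c1 - b1*c2.
def solve_x(a1, b1, c1, a2, b2, c2):
    num = b2 * c1 - b1 * c2
    den = b2 * a1 - b1 * a2
    return Fraction(num, den)

# The reduced system 3x + 4w = 10, -2x - 3w = -4 with w = y - 3z:
print(solve_x(3, 4, 10, -2, -3, -4))  # 14
```

This agrees with the hand computation $9x + 12w - 8x - 12w = 30 - 16$, i.e. $x = 14$.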
\begin{definition}[Definition:Field of Quotients/Definition 3] Let $D$ be an integral domain. A '''field of quotients''' of $D$ is a pair $\struct {F, \iota}$ where: :$(1): \quad$ $F$ is a field :$(2): \quad$ $\iota : D \to F$ is a ring monomorphism :$(3): \quad$ it satisfies the following universal property: ::::For every field $E$ and for every ring monomorphism $\varphi: D \to E$, there exists a unique field homomorphism $\bar \varphi: F \to E$ such that $\varphi = \bar \varphi \circ \iota$ \end{definition}
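As a standard concrete instance of this definition, $\struct {\Q, \iota}$ is a field of quotients of the integral domain $\Z$, with $\iota$ the canonical inclusion $n \mapsto n/1$. The Python sketch below spot-checks the ring-monomorphism requirements of clause $(2)$ using `fractions.Fraction`; the universal property of clause $(3)$ is a statement about all fields and homomorphisms, which finite tests of course cannot verify.

```python
from fractions import Fraction

# iota : Z -> Q, the canonical ring monomorphism n |-> n/1.
def iota(n):
    return Fraction(n, 1)

# Ring-homomorphism properties, spot-checked on sample values:
assert iota(2) + iota(3) == iota(2 + 3)   # preserves addition
assert iota(2) * iota(3) == iota(2 * 3)   # preserves multiplication
assert iota(1) == 1                        # preserves the unity

# Injectivity on a sample: distinct integers map to distinct fractions.
assert len({iota(n) for n in range(-5, 6)}) == 11

# Every element of Q is a quotient iota(a) / iota(b) with b != 0,
# which is what makes Q minimal among fields containing Z.
assert Fraction(3, 7) == iota(3) / iota(7)
print("all checks passed")
```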
What is the greatest common divisor of $1729$ and $1768$? By the Euclidean algorithm, \begin{align*} \text{gcd}\,(1729, 1768) &= \text{gcd}\,(1729, 1768 - 1729) \\ &= \text{gcd}\,(1729, 39). \end{align*}Since the sum of the digits of $1729$ is $19$, which is not divisible by $3$, it suffices to check whether or not $1729$ is divisible by $13$. We can find that it is by long division or noting that $12+1 = \boxed{13}$ divides into $1729 = 12^3 + 1^3$ using the sum of cubes factorization.
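For completeness, the same Euclidean computation can be replayed in a few lines of Python as an independent check (not part of the original solution):

```python
import math

def euclid(a, b):
    """Euclidean algorithm by repeated remainders."""
    while b:
        a, b = b, a % b
    return a

print(euclid(1729, 1768))    # 13
print(math.gcd(1729, 1768))  # 13
print(1729 % 13, 1768 % 13)  # 0 0  -- 13 divides both numbers
```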
Pullback attractor in $H^{1}$ for nonautonomous stochastic reaction-diffusion equations on $\mathbb{R}^n$

Linfang Liu 1, Xianlong Fu 1 and Yuncheng You 2
1 Department of Mathematics, Shanghai Key Laboratory of PMMP, East China Normal University, Shanghai 200241, China
2 Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620, USA
* Corresponding author: Yuncheng You

Discrete & Continuous Dynamical Systems - B, December 2017, 22(10): 3629-3651. doi: 10.3934/dcdsb.2017143
Received October 2016; Revised January 2017; Published April 2017

Fund Project: The second author is supported by an NSF grant of China (Nos. 11671142 and 11371087), the Science and Technology Commission of Shanghai Municipality (STCSM) (grant No. 13dz2260400) and the Shanghai Leading Academic Discipline Project (No. B407).

Abstract: In this paper we study the asymptotic dynamics of the weak solutions of nonautonomous stochastic reaction-diffusion equations driven by a time-dependent forcing term and multiplicative noise. By conducting uniform estimates, we show that the cocycle generated by this SRDE has a pullback $(L^2, H^1)$ absorbing set and that it is pullback asymptotically compact via the pullback flattening approach. The existence of a pullback $(L^2, H^1)$ random attractor for this random dynamical system in the space $H^{1}(\mathbb{R}^{n})$ is proved.

Keywords: Pullback random attractor, stochastic reaction-diffusion equation, pullback asymptotic compactness, pullback flattening property.
Mathematics Subject Classification: 35B40, 35B41, 35R60, 37L30.
Citation: Linfang Liu, Xianlong Fu, Yuncheng You. Pullback attractor in $H^{1}$ for nonautonomous stochastic reaction-diffusion equations on $\mathbb{R}^n$. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10): 3629-3651. doi: 10.3934/dcdsb.2017143
doi: 10.3934/dcds.2006.16.587 Alexey Cheskidov, Landon Kavlie. Pullback attractors for generalized evolutionary systems. Discrete & Continuous Dynamical Systems - B, 2015, 20 (3) : 749-779. doi: 10.3934/dcdsb.2015.20.749 Perla El Kettani, Danielle Hilhorst, Kai Lee. A stochastic mass conserved reaction-diffusion equation with nonlinear diffusion. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5615-5648. doi: 10.3934/dcds.2018246 Linfang Liu Xianlong Fu Yuncheng You
Mat. Sb. (N.S.), 1988, Volume 136(178), Number 3(7), Pages 361–376

The fundamental theorem of Galois theory
G. Z. Dzhanelidze

Abstract: For arbitrary categories $C$ and $X$ and an arbitrary functor $I\colon C\to X$ the author introduces the notion of an $I$-normal object and proves a general type of fundamental theorem of Galois theory for such objects. It is shown that the normal extensions of commutative rings and central extensions of multi-operator groups are special cases of $I$-normal objects. Bibliography: 14 titles.

English version: Mathematics of the USSR-Sbornik, 1989, 64:2, 359–374
UDC: 512.58+512.7+512.66
MSC: Primary 13B05, 16A74; Secondary 12F10, 18A25, 18B40
This article is cited in 12 scientific papers.

Citation: G. Z. Dzhanelidze, "The fundamental theorem of Galois theory", Mat. Sb. (N.S.), 136(178):3(7) (1988), 361–376; Math. USSR-Sb., 64:2 (1989), 359–374.
After Thought

Sue Liu, S6, Madras College St Andrews sent one of her super solutions in which she wrote $\sin(\cos x)$ as $\cos(\cos x - \pi/2)$ and then used the formula which gives the difference of two cosines as minus twice the product of two sines. Another triumph for Sue! After a lot more work with trig formulae Sue proves that $\cos(\sin x)$ is greater than $\sin(\cos x)$ for all $x$, and you can try this for yourselves.

There is another way of looking at this. You may like to sketch some graphs. First, for $x$ between $0$ and $\pi/2$ the cosine function is decreasing and \[ 0 \leq \sin x \leq x \leq \pi/2 \] so it follows that \[1 = \cos 0 \geq \cos(\sin x) \geq \cos x \geq \cos(\pi/2) = 0. \ \ \ [1]\] Also, as for all $x$ in this interval $\cos x \geq 0$, and we know that, for $y \geq 0$, $\sin y \leq y$, we can put $y = \cos x$, which gives \[ \sin (\cos x) \leq \cos x. \ \ \ [2]\] From [1] and [2] we see that, for $x$ between $0$ and $\pi/2$, \[ \cos(\sin x) \geq \cos x \geq \sin (\cos x). \] For $x$ between $ \pi/2 \mbox{ and } \pi$ it is even easier because in this interval $ \cos(\sin x) > 0\ \mbox{ and }\ \sin (\cos x) < 0.$ So far we have $\cos (\sin x) \geq \sin (\cos x)$ for $x$ between $0$ and $ \pi $.
For the interval $[-\pi, 0]$ put $y = - x$; then $y$ is in the interval $[-\pi, 0]$ and $x$ is in the interval $[0, \pi]$ so, using what we have already proved and the fact that sine is an odd function and cosine is an even function, we have $$\begin{eqnarray} \cos (\sin y) = \cos (\sin (-x)) = \cos (- \sin x) \\ = \cos (\sin x) \\ \geq \sin (\cos x) = \sin (\cos (-x)) \\ = \sin (\cos y). \end{eqnarray}$$ We have proved that $\cos (\sin x) \geq \sin (\cos x)$ for all $x$ between $-\pi \mbox{ and } \pi$ and hence everywhere by periodicity.
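The inequality can also be checked numerically. The short Python script below (my addition, not part of Sue's solution) samples the difference $\cos(\sin x) - \sin(\cos x)$ over one period and confirms that it stays positive:

```python
import math

def gap(x):
    """The difference cos(sin x) - sin(cos x), shown above to be positive."""
    return math.cos(math.sin(x)) - math.sin(math.cos(x))

# Sample the 2*pi-periodic difference on a fine grid over [-pi, pi].
samples = [gap(-math.pi + 2 * math.pi * k / 10000) for k in range(10001)]
smallest = min(samples)  # about 0.11, so the gap never closes
```

Sampling proves nothing by itself, of course, but it is a quick sanity check on the argument above.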
Evaluation of biomass-based production of below zero emission reducing gas for the iron and steel industry

Martin Hammerschmid, Stefan Müller, Josef Fuchs & Hermann Hofbauer
Biomass Conversion and Biorefinery, volume 11, pages 169–187 (2021)

The present paper focuses on the production of a below zero emission reducing gas for use in raw iron production. The biomass-based concept of sorption-enhanced reforming combined with oxyfuel combustion constitutes an additional opportunity for the selective separation of CO2. First experimental results from the test plant at TU Wien (100 kW) have been implemented. Based on these results, it could be demonstrated that the biomass-based product gas fulfills all requirements for use in direct reduction plants, and a concept for commercial-scale use was developed. Additionally, the profitability of the below zero emission reducing gas concept is investigated within a techno-economic assessment. The results of the techno-economic assessment show that the production of biomass-based reducing gas can compete with the conventional natural gas route if the required oxygen is delivered by an existing air separation unit and utilization of the separated CO2 is possible. The production costs of the biomass-based reducing gas are in the range of natural gas-based reducing gas and twice as high as the production of fossil coke in a coke oven plant. The CO2 footprint of a direct reduction plant fed with biomass-based reducing gas is more than 80% lower compared with the conventional blast furnace route and could be reduced even further if carbon capture and utilization is applied. Therefore, the biomass-based production of reducing gas could definitely make a reasonable contribution to a reduction of fossil CO2 emissions within the iron and steel sector in Austria.
Today the iron and steel industry in the EU-28 is responsible for 200 million tons of carbon dioxide [1], which amounts to a share of 5% of the total emissions in carbon dioxide equivalents (CO2e) [2, 3]. These numbers show that especially the transformation of heavy industries like the iron and steel industry towards low-carbon technologies will be challenging. In Austria the iron and steel industry also contributes a significant share of greenhouse gas emissions. In 2017, 8.1 million tons of crude steel were produced in Austria [4], responsible for around 16% of the total greenhouse gas emissions [5]. Technological development has made it possible to improve the energy efficiency and to reduce CO2 emissions in this sector. However, the principles of steelmaking have not changed fundamentally over the years. In 2017, over 91% of the Austrian crude steel was produced within oxygen-blown converters, which were fed with hot metal from blast furnaces. The remaining share was produced within electric arc furnaces [4]. According to the EU Roadmap 2050 [6], the CO2 emissions within the iron and steel industry must be reduced by around 85%. To accomplish this major goal, a complete conversion towards low-carbon steelmaking technologies is necessary. Numerous researchers and international institutions are investigating alternative low-carbon steelmaking routes. In particular, the ULCOS program [7, 8] has evaluated the CO2 reduction potential of over 80 existing and potential technologies. Several investigations aim at further optimization of fossil fuel-based state-of-the-art processes like the coke and pulverized coal-based integrated blast furnace route [9,10,11]. All these optimization steps to reduce the consumption of fossil fuels are limited [12]. To reach the climate goals described above within the iron and steel sector, a fundamental change of steelmaking is necessary.
The ULCOS program [7, 8] identified four technologies with CO2 emission reduction potentials of more than 50%. The technologies within this program which are based on carbon capture and storage (CCS) or utilization (CCU) are top-gas recycling within the blast furnace (BF-TGR-CCS/U), a novel bath-smelting technology (HISARNA-CCS/U) [13, 14], and a novel direct reduction process (ULCORED-CCS/U). Only the novel ULCOLYSIS [15] process, which is characterized by melting iron ore through electric direct reduction, is not based on CCS or CCU. In addition to the research activities in Europe, the COURSE50 program in Japan, POSCO in Korea, AISI in the USA, and the Australian program are some international examples of investigations regarding CO2 reduction in the iron and steel industry [16]. The COURSE50 program [8, 16, 17] is focused on H2-based reducing agents in the blast furnace (BF) for decreasing the fossil coke consumption and on technologies for capturing, separating, and recovering CO2 from the BF gas. POSCO [8, 16, 18] in Korea is working on the adaptation of CCS and CCU to smelting reduction processes, like the FINEX and COREX process. Furthermore, POSCO is researching bio-slag utilization, pre-reduction and heat recovery of hot sinter, CO2 absorption using ammonia scrubbers, hydrogen production out of coke-oven gas (COG), and iron ore reduction using hydrogen-enriched syngas. AISI [8, 16] is working on molten oxide electrolysis, which is similar to the ULCOLYSIS concept, and on ironmaking by hydrogen flash smelting. The research programs regarding breakthrough iron- and steelmaking technologies in Brazil, Canada, and Australia [19] are all strongly focused on biomass-based iron and steel production routes, replacing fossil coal and coke by biomass-derived chars as substitutes [8, 16, 20]. Summing up, numerous investigations are going on around the world to reduce the CO2 footprint of the iron and steel industry.
Most of the previously described concepts apply CCS or CCU to reach a CO2 reduction potential of over 50% in comparison with the conventional integrated BF route. Nevertheless, the implementation of CCS requires fundamental investigation of storage sites and of the long-term response of the environment. Besides the CCS- or CCU-based approaches, the replacement of fossil fuel-based reducing agents by biomass-based substitutes and the use of hydrogen as reducing agent are promising approaches for reaching the climate targets within the iron and steel sector. Furthermore, some electric direct reduction processes like ULCOWIN, MOE, and ULCOLYSIS are under investigation. One possible CO2 reduction path could also be to raise the share of steel production through electric arc furnaces; for this, enough high-quality scrap must be available. With respect to the estimates regarding the biomass potential in the next decades [20, 21], another possible synergetic transition option for Austria, besides raising the share of scrap-based electric arc furnace steel, seems to be the replacement of the integrated blast furnace route by the direct reduction of iron ore based on biomass-derived reducing gas. The Austrian steel manufacturing and processing group voestalpine AG is already operating one of the biggest direct reduction plants, based on the MIDREX concept and reformed natural gas as reducing agent, in Texas [22]. This approach would combine the expertise gained in the field of direct reduction with the concept of dual fluidized bed steam gasification developed in Austria [23]. Within the present work, the biomass-based production of a biogenic reducing gas through dual fluidized bed steam gasification, which allows the replacement of steam-reformed natural gas, is investigated. At this stage, it remains unclear if the investigated process is competitive with respect to other production routes for the supply of reducing gas for iron ore reduction.
So far, the following question has not been answered sufficiently: How can the production of biomass-based reducing gas via dual fluidized bed steam gasification enable a reasonable contribution to a reduction of fossil CO2 emissions within the iron and steel sector? The following paper describes the results of the investigated process enabling the production of a below zero emission reducing gas by applying the biomass-based dual fluidized bed steam gasification technology in combination with carbon capture and utilization. The investigations are based on experimental results combined with simulation work. The present paper discusses:

The comparison of different iron- and steelmaking routes regarding their CO2 footprint
The proposed process concept for the production of biomass-based reducing gas
Experimental and simulation results achieved
The results of a techno-economic assessment

With regard to the techno-economic assessment of the OxySER technology for the selective separation of CO2, a plant concept for the integration in a direct reduction process has been developed. Beforehand, a short overview and comparison of primary and secondary iron- and steelmaking routes regarding their CO2 footprints will be given. Furthermore, the application of dual fluidized bed steam gasification with respect to the combination of sorption-enhanced reforming and oxyfuel combustion will be explained.

Comparison of iron and steelmaking routes regarding their CO2 footprint

Two main steelmaking processes can be distinguished. The primary steelmaking route converts virgin iron ores into crude steel (CS). Secondary steelmaking is characterized by the recycling of iron and steel scrap in an electric arc furnace [8, 24]. Table 1 gives an overview of selected iron- and steelmaking routes and compares them regarding their CO2 footprint. The first is the primary steelmaking integrated blast furnace (BF) route, which is predominant in Austria.
Thereby, steel production takes place at an integrated steel plant, where iron ores are reduced to hot metal through the use of reducing agents such as coke or coal. Afterwards, the hot metal is converted into steel by oxygen injection in a basic oxygen furnace (BOF). As a result of the high demand of 11.4 GJ/tCS for fossil reducing agents, the CO2 footprint of the BF-BOF route is very high at 1.694 t CO2e/tCS [25]. Furthermore, the secondary steelmaking electric arc furnace (EAF) route is used in Austria. Therein, the major feedstock is ferrous scrap, which is melted mainly through the use of electricity. However, increasing the share of EAF steel is constrained by the availability of scrap, and the quality requirements for steel grades have to be met [8]. The smelting reduction route also belongs to the state-of-the-art iron- and steelmaking routes. Within this route, iron ores are heated and pre-reduced by the off-gas coming from the smelter-gasifier. The pre-reduction step can be realized in a shaft kiln (COREX) or a fluidized bed reactor (FINEX). Pre-reduced iron ores are then melted in the smelter-gasifier, which uses oxygen and coal as reducing agent. Afterwards, the hot metal is also fed to the BOF for steelmaking. Another possibility of steelmaking is the primary direct reduction (DR) route. MIDREX is one of the used direct reduction technologies. It is characterized by the reduction of iron ores into solid direct reduced iron (DRI) within a shaft kiln. Direct reduction can also take place in a fluidized bed reactor; examples include the FINMET and CIRCORED processes [38]. The direct reduction is driven by the feed of a reducing gas. Currently, the commercially used reducing gas is based on the reforming of natural gas. For extended information regarding the fundamentals of iron- and steelmaking routes, a reference is made to [8, 24, 39].
Table 1 Overview of different iron and steelmaking routes including their energy demands and CO2 emissions [16]

Besides the previously described state-of-the-art iron- and steelmaking routes, some innovative developments and investigations are compared with the conventional routes regarding their energy demand, CO2 footprint, merits, and demerits in Table 1. Therein, the integrated blast furnace route (BF and BOF), which is predominant in Austria, is set as reference regarding CO2 emissions. Recycling of the blast furnace top-gas in combination with CCS or CCU (BF-TGR-CCS/U and BOF) or the replacement of fossil coal by biogenic substitutes reduces the fossil reducing agent demand and decreases the CO2 footprint of integrated blast furnace routes by up to 50% [7, 16, 26, 30, 31]. The replacement of the BF by smelting reduction processes like the COREX or FINEX process would slightly raise the CO2 footprint due to the high consumption of fossil coal. An ecologically favorable operation of smelting reduction processes could only be realized by the use of CCS or CCU [8, 16, 18]. The use of a smelting reduction technology based on bath-smelting (HISARNA-CCS/U and EAF) in combination with CCS would reduce the CO2 emissions by up to 80% [7, 16]. Direct reduction plants offer a large CO2 emission saving potential in comparison with the integrated BF route due to the reformed natural gas presently used as reducing agent. Reformed natural gas consists to a large extent of hydrogen, which results in lower CO2 emissions due to the oxidation of hydrogen to steam within the reduction process [12]. The replacement of the integrated BF route by the state-of-the-art MIDREX plant, which is based on the reduction of iron ore within a shaft kiln by the use of reformed natural gas, would decrease the CO2 emissions by 50% in comparison with the reference route [12, 32, 33].
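To make the percentage figures tangible, the stated reduction potentials can be converted into absolute footprints relative to the BF-BOF reference of 1.694 t CO2e/tCS. The rounded percentages below are taken from the text; the conversion itself is only illustrative arithmetic:

```python
# Reference footprint of the integrated BF-BOF route (see text, [25])
BF_BOF = 1.694  # t CO2e per t crude steel

# Approximate reduction potentials relative to BF-BOF, as stated in the text
reduction = {
    "BF-TGR-CCS/U": 0.50,
    "HISARNA-CCS/U": 0.80,
    "MIDREX (reformed natural gas)": 0.50,
    "ULCORED-CCS/U": 0.65,
    "MIDREX-BG-SER (biomass)": 0.80,
}

# Resulting absolute footprints in t CO2e per t crude steel
footprint = {route: round(BF_BOF * (1.0 - r), 3) for route, r in reduction.items()}
# e.g. the biomass-based MIDREX-BG-SER route comes out at roughly 0.34 t CO2e/tCS
```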
The economic viability of direct reduction-based routes using reformed natural gas strongly depends on the natural gas price, which is much higher in Europe than in North America [33]. Within the ULCOS project, a novel direct reduction process (ULCORED-CCS/U) based on partially oxidized natural gas is investigated [7, 8]. By reducing the required amount of natural gas and applying CCS or CCU, the CO2 emissions could be decreased by up to 65% compared with the reference route. The dual fluidized bed steam gasification process based on the bed material limestone, which is called sorption-enhanced reforming (SER), produces a biomass-based hydrogen-rich gas, which allows the replacement of the steam reforming unit for reforming of natural gas. The application of SER to produce a biomass-based reducing gas for the MIDREX process (MIDREX-BG-SER) reduces the CO2 footprint by up to 80% compared with the integrated BF route. The combination of SER with oxyfuel combustion (OxySER) enables in situ CO2 sorption within the reducing gas production process. Besides the production of biomass-based reducing gas, a CCU- or CCS-ready CO2 stream is released. Therefore, a below zero emission reducing gas is generated through the application of CCU or CCS. Another direct reduction breakthrough technology could be the HYBRIT process, which uses hydrogen produced by electrolysis as reducing agent [16, 26, 34, 35]. Therefore, the emissions within the HYBRIT process are mostly caused by the CO2 footprint of the electricity mix. With regard to the Austrian electricity mix, with a CO2 footprint of 0.218 kg CO2e/kWhel [36], a CO2 emission saving potential of up to 50% could be reached with the HYBRIT process. A further possibility is to raise the share of steel production through scrap-based electric arc furnaces. This steelmaking route enables CO2 reduction potentials of up to 90%, because ironmaking processes are replaced by the use of scrap.
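The advantage of hydrogen-rich reducing agents mentioned above can be illustrated with the overall hematite reduction Fe2O3 + 3 H2 -> 2 Fe + 3 H2O. The back-of-envelope stoichiometry below uses textbook molar masses and is my illustration, not a figure from the paper:

```python
# Molar masses in g/mol
M_FE = 55.85
M_H2 = 2.016

# Fe2O3 + 3 H2 -> 2 Fe + 3 H2O: 3 mol H2 reduce 2 mol Fe,
# and the oxygen leaves the process as water instead of CO2.
h2_per_t_fe = (3 * M_H2) / (2 * M_FE) * 1000  # kg H2 per tonne of iron
# roughly 54 kg of hydrogen per tonne of iron at full stoichiometric use
```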
The EAF-based routes strongly depend on the availability of high-quality scrap [12, 26]. Furthermore, some novel electric direct reduction processes, like the ULCOLYSIS project, are under investigation [7, 16]. Similar to the HYBRIT process, the electric direct reduction processes strongly depend on the CO2 footprint of the national electricity mix because of their high net power demands. Several technologies provide the possibility of additional carbon-emission reduction by sequestration of CO2. The use of post-combustion capture technologies, like pressure swing adsorption or amine scrubbing, is one possibility for the sequestration of CO2 within iron- and steelmaking routes [40]. Within the OxySER process, a CCU- or CCS-ready CO2 stream is produced through the in situ CO2 sorption. Further explanations regarding CO2 sequestration can be found in [41,42,43]. The selectively separated and purified CO2 could be used in further process steps as raw material (carbon capture and utilization) or stored in underground deposits (carbon capture and storage) [43, 44]. Today, around 230 million tons of carbon dioxide per year are utilized materially worldwide: 130 million tons are used in urea manufacturing and 80 million tons for enhanced oil recovery [45]. Under the assumption that hydrogen for the ammonia production is produced by water electrolysis, hydrogen being, besides CO2, the primary feedstock for urea production, external CO2 is necessary for the urea synthesis. In Linz, near one of the main sites for iron and steel production, a urea synthesis plant with a production rate of around 400,000 t per year of urea is located [46]. Therein, around 300,000 t CO2 per year are required for the production of the given amount of urea [46]. Further utilization possibilities could be CO2-derived fuels, like methanol or FT-synthesis products, and power-to-gas.
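The quoted CO2 demand of the Linz urea plant can be cross-checked against the overall synthesis stoichiometry CO2 + 2 NH3 -> CO(NH2)2 + H2O. This check is my own illustration and not taken from [46]:

```python
M_CO2 = 44.01   # g/mol
M_UREA = 60.06  # g/mol, urea CO(NH2)2

urea_per_year = 400_000  # t/a, production rate given in the text

# One mole of CO2 is bound per mole of urea produced
co2_demand = urea_per_year * M_CO2 / M_UREA  # t CO2 per year
# about 293,000 t/a, consistent with the ~300,000 t/a quoted in the text
```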
Furthermore, utilization within CO2-derived chemicals besides urea, like formic acid synthesis, or CO2-derived building materials, like the production of concrete, could be promising alternatives [45]. Besides the CCU technologies, CO2 can also be stored in underground deposits. CCS is banned in Austria except for research projects up to a storage volume of 100,000 t of CO2 [44]. For further information regarding CCU and CCS, a reference is made to [40, 45, 47,48,49]. Since biomass releases the same amount of CO2 as it absorbs during its growth, the utilization of biogenic fuels can contribute significantly to a reduction of CO2 emissions. Therefore, the main focus of the paper lies on the production of a below zero emission reducing gas by the use of oxyfuel combustion in combination with sorption-enhanced reforming. This technology for the selective separation of CO2 uses a mix of pure oxygen and recirculated flue gas as fluidization agent. Therefore, the nitrogen from the air is excluded from the combustion system [42].

Combination of oxyfuel combustion and sorption-enhanced reforming

A promising option for the selective separation of CO2 from biomass and the generation of a hydrogen-rich product gas at the same time is the sorption-enhanced reforming process in combination with oxyfuel combustion (OxySER). Sorption-enhanced reforming (SER) is based on the dual fluidized bed steam gasification process. The main carbon-related (gas-solid) and gas-gas reactions are shown in Table 2. Test runs at the 100 kW pilot plant at TU Wien showed calculated overall cold gas efficiencies of around 70% [51, 52]. Detailed information regarding the dual fluidized bed steam gasification process can be found in the literature [37, 51,52,53,54].

Table 2 Important gas-solid and gas-gas reactions during thermochemical fuel conversion [50]

The combination of oxyfuel combustion and sorption-enhanced reforming combines the advantages of both technologies.
Figure 1 represents the concept of the combined technology [44]. First of all, biomass, residues, or waste materials are introduced into the gasification reactor. Limestone is used as bed material, which serves as transport medium for heat but also as carrier for CO2 from the gasification reactor (GR) to the combustion reactor (CR) if the temperature levels in the reactors are adjusted correctly. Within the OxySER process, steam serves as fluidization and gasification agent in the GR. Therein, several endothermic gasification reactions take place in a temperature range between 600 and 700 °C [37]. Residual char is transferred with the bed material from the GR to the CR. Due to the combination of SER with oxyfuel combustion, pure oxygen instead of air is used as fluidization agent in the CR, which is operated within a temperature range between 900 and 950 °C. By combustion of residual char in the CR, heat is released. These suitable temperature profiles in the GR and CR ensure that the bed material (limestone) is first calcined to calcium oxide (CaO) at high temperatures in the CR (13). Then the CaO is carbonated in the GR with the carbon dioxide from the product gas (12). Thus, in this cyclic process, a transport of CO2 from the product gas to the flue gas occurs [52]. The use of steam in the gasification reactor and the water gas shift reaction (8) in combination with in situ CO2 sorption via the bed material system CaO/CaCO3 enable the production of a nitrogen-free and hydrogen-enriched product gas [37, 56]. Due to the combination of SER with oxyfuel combustion, a CO2-enriched flue gas is generated in addition to the nitrogen-free and hydrogen-enriched product gas, caused by the use of pure oxygen as fluidization agent in the CR instead of air [57].

Fig. 1 Concept of OxySER [55]

The CO2 equilibrium partial pressure in the CaO/CaCO3 system and the associated operating conditions for gasification and combustion can be found in [52].
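The driving force behind the in situ CO2 sorption can be sketched with a toy equilibrium calculation for the water gas shift reaction (8). The temperature dependence of the equilibrium constant is approximated here with the Moe correlation, and the feed of 1 mol CO with 3 mol steam is an arbitrary assumption, so the numbers are purely illustrative and not taken from the paper:

```python
import math

def k_wgs(T_kelvin):
    """Moe approximation of the water gas shift equilibrium constant."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def equilibrium_co_conversion(T, n_co, n_h2o, co2_removed_fraction):
    """Solve K = (n_CO2 * n_H2) / (n_CO * n_H2O) for the shift extent x.

    Toy basis: the feed contains only CO and steam. co2_removed_fraction
    models the CaO carbonation: that share of the produced CO2 is bound
    as CaCO3 and no longer takes part in the gas-phase equilibrium.
    """
    K = k_wgs(T)
    lo, hi = 0.0, min(n_co, n_h2o) - 1e-9
    for _ in range(200):  # bisection on the equilibrium condition
        x = 0.5 * (lo + hi)
        co2_gas = x * (1.0 - co2_removed_fraction)
        q = (co2_gas * x) / ((n_co - x) * (n_h2o - x))
        if q < K:
            lo = x
        else:
            hi = x
    return x / n_co

# 650 C gasifier, 1 mol CO with excess steam (3 mol H2O)
base = equilibrium_co_conversion(923.15, 1.0, 3.0, 0.0)
with_sorption = equilibrium_co_conversion(923.15, 1.0, 3.0, 0.9)
# removing CO2 pulls the equilibrium to the right: CO conversion increases
```

The qualitative outcome, higher CO conversion and thus more hydrogen when CO2 is continuously removed, is exactly the effect the CaO/CaCO3 loop exploits.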
By the use of renewable fuels and a continuous selective separation and storage or utilization of CO2, an improved CO2 balance can be achieved [44, 57]. Table 3 presents a comparison between the product and flue gas compositions of conventional gasification, SER, and OxySER. The results are based on test runs with the 100 kW pilot plant at TU Wien and the 200 kW pilot plant at the University of Stuttgart [37, 57]. As mentioned above, the carbon dioxide content of the product gas could be reduced through the SER method. Furthermore, the hydrogen content is higher in comparison with conventional gasification. The possibility of adjusting the H2/CO ratio over a wide range makes the SER process very flexible according to product gas applications [52]. The catalytic activity of limestone enables a reduction of tar at the same time [37, 44, 58]. The comparison between the SER and OxySER process illustrates that a CO2-enriched flue gas was obtained in the OxySER test rig in Stuttgart [57]. In Table 4 the proximate and ultimate analyses of the wood pellets used for gasification test runs with the 100 kW pilot plant at TU Wien are listed.

Table 3 Comparison of product and flue gas compositions of conventional gasification, SER, and OxySER [37, 57]
Table 4 Proximate and ultimate analyses of used wood pellets for gasification test runs [51]

However, OxySER implies the following advantages in comparison to conventional gasification:

Selective CO2 transport to the flue gas
Decrease of the tar content in the product gas
High CO2 content in the flue gas > 90 vol.-%dry [57]
Smaller flue gas stream because of flue gas recirculation
Nitrogen-free flue gas

These assumptions according to experimental results serve as a basis for the conception of an industrial application.

Integrated OxySER concept for the production of below zero emission reducing gas

The OxySER plant concept for integration in a direct reduction plant is illustrated in Fig. 2.
The plant concept is designed for a product gas power of 100 MW. For the production of 100 MW of product gas, 50,400 kg/h of wood chips with a water content of 40 wt.-% are required [37]. The wood chips are treated in a biomass dryer. Afterwards, the biomass is fed into the gasification reactor. The bed material inventory (limestone) of the system amounts to 25,000 kg. In the gasification reactor, a H2-enriched product gas with a temperature of 680 °C is produced. Subsequently, the dust particles are removed from the product gas by a cyclone. Besides ash, these dust particles still contain carbon, which is why they are recirculated to the combustion reactor. Afterwards, the product gas is cooled down to 180 °C. The released heat can be used for preheating the air of the biomass dryer [44]. Furthermore, the product gas filter separates further fine dust particles from the product gas stream and conveys them back to the combustion reactor. After that, tar is separated in a scrubber, and water is condensed; biodiesel (RME) is used as solvent. The product gas exits the scrubber with a temperature of 40 °C. Afterwards, it is compressed in a blower before it is dried to a water content of 1.5% and fed to the compression and preheating section of the direct reduction plant. The CO2-enriched flue gas leaves the combustion reactor with a temperature of 900 °C. The flue gas is cooled down to 180 °C by the steam superheater and a flue gas cooler. Steam is heated up to 450 °C in a countercurrent heat exchanger. Fly ash is removed from the system by a flue gas filter. A partial flow of the flue gas is recirculated and mixed with pure oxygen, which is produced by an air separation unit. The remaining flue gas stream is compressed in the flue gas blower, and water is condensed in a flue gas dryer. The cleaned CO2-rich gas can be used in different CCU processes, such as urea or methanol synthesis [44].
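The stated fuel demand implies a cold gas efficiency of roughly 70%, which can be checked quickly. The lower heating value of dry wood assumed below is a typical literature value, not a number from the paper:

```python
m_wet = 50400.0   # kg/h wood chips (from the text)
w = 0.40          # water content, 40 wt.-% (from the text)
lhv_dry = 18.5    # MJ/kg, assumed LHV of dry wood (typical literature value)
h_evap = 2.443    # MJ/kg, heating value penalty per kg of fuel water

# LHV of the wet fuel and the corresponding fuel power
lhv_wet = lhv_dry * (1.0 - w) - h_evap * w    # MJ/kg
fuel_power_MW = m_wet / 3600.0 * lhv_wet      # MJ/s = MW

cold_gas_efficiency = 100.0 / fuel_power_MW   # 100 MW product gas power
print(f"fuel power ~ {fuel_power_MW:.0f} MW, CGE ~ {cold_gas_efficiency:.0%}")
```

A fuel power of roughly 140 MW and a cold gas efficiency around 70% are in line with published dual fluidized bed steam gasification data.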
Fig. 2 OxySER plant concept with 100 MW product gas power for the production of reducing gas as feedstock for a direct reduction plant
The integration approach offers the advantage of using existing equipment, such as the air separation unit of the steelmaking facility. Furthermore, the generated product gas can be used directly as reducing gas in the direct reduction plant [44]. For this application, a compression up to approx. 2.5 bar and preheating of the product gas up to 900 °C are necessary.
Simulation of mass and energy balances with IPSEpro
The calculation of mass and energy balances for different operation points with the stationary, equation-oriented flow sheet simulation software IPSEpro enables the validation of process data. All data which cannot be measured during experimental test runs can be determined by the calculation of closed mass and energy balances. The resulting equations are solved with the numerical Newton-Raphson algorithm [59, 60]; no kinetic or fluid dynamic models are considered. The simulation models used within the software IPSEpro are based on model libraries which have been developed at TU Wien over many years [61]. All experimental results from the pilot plant at TU Wien presented within this publication were validated with IPSEpro. Uncertainties are given by the accuracy of the measurement data, which relies on the analysis methods used. The measurement accuracy of the ultimate and proximate analysis is listed in Table 4. The validation percentage error of the gasification model is covered by the range of values listed in Table 3. For further information regarding IPSEpro, reference is made to [61, 62]; for the validation of the results from the pilot plant at the University of Stuttgart, reference is made to [57]. The simulation results for the OxySER concept for the production of below zero emission reducing gas presented in Section 2.3 are based on a scale-up of the experimental results of the pilot plants.
The simulation model of the dual fluidized bed steam gasification system is based on an exergy study by T. Pröll [63].
Techno-economic assessment with net present value calculation
The techno-economic assessment by net present value (NPV) calculation serves as a decision-making tool for the valuation of upcoming investments. The NPV is a function of the investment and operating costs. The operating costs are multiplied by the cumulative present value factor, which includes the interest rate and the plant lifetime. The NPV calculation therefore allows expected future payments to be compared with current payments. Further information can be found in [54, 64]. Cost rates have been updated to the year 2019 using data from a chemical engineering plant cost index (CEPCI) database [65]. For the calculation of the investment costs, the cost-scaling method was used [66]. The techno-economic analysis is based on the following business case: an operator of a direct reduced iron plant would like to build a new reducing gas supply unit driven by a biogenic feedstock. The goal of producing 100 MW of reducing gas should be achieved with regard to CO2 emissions. The reference option (option 0) is the production of reducing gas by steam reforming of natural gas. Furthermore, three biogenic alternative options (options 1–3) are compared with the reference option:
- Option 0 (reference case): production of 100 MW reducing gas through steam reforming of natural gas
- Option 1: production of 100 MW reducing gas through gasification of wood chips by SER
- Option 2: production of 100 MW reducing gas through gasification of wood chips by an integrated OxySER plant
- Option 3: production of 100 MW reducing gas through gasification of wood chips by a greenfield OxySER plant
The SER process in option 1 requires no pure oxygen and consequently no ASU for operation.
However, the flue gas of the SER process cannot be exploited in further utilization steps because of its high nitrogen content. The alternative option 2 is based on the SER process in combination with oxyfuel combustion implemented in an existing iron and steel plant facility. The process heat is used for preheating of the reducing gas. The required oxygen is delivered by an existing ASU within the iron and steel plant facility. Furthermore, the OxySER process is based on the assumption that the CO2 is sold as a product for utilization to a urea synthesis plant. Option 3 is based on the OxySER process without the benefits of option 2. This means that, in option 3, the costs for pure oxygen are higher due to the use of a greenfield ASU, and no earnings through CO2 utilization are considered. Furthermore, a payback analysis has been done by solving the following equation, where A denotes the annual savings minus the operation and maintenance costs, P the present worth of the capital costs, and IR the interest rate. The variable n represents the number of years needed to return the investment in comparison with the reference case [67].
$$ A=P\cdot \frac{\mathrm{IR}\,{\left(1+\mathrm{IR}\right)}^n}{{\left(1+\mathrm{IR}\right)}^n-1} $$
Based on the experience with the pilot plants at TU Wien and the University of Stuttgart, combined with the previously described concept, mass and energy balances for the OxySER plant concept for integration in a direct reduction plant were calculated. These mass and energy balances are also the basis for the techno-economic assessment. Table 5 shows the most important stream data of selected flow streams marked in Fig. 2. Table 6 and Table 7 represent the input and output data and the operating parameters of an OxySER plant.
Table 5 Stream data of the OxySER concept according to Fig. 2
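The payback equation above can be inverted analytically to obtain the number of years n. A small sketch; the plant figures in the example are hypothetical, not the paper's values:

```python
import math

def payback_years(P, A, IR):
    """Solve A = P*IR*(1+IR)**n / ((1+IR)**n - 1) for n.

    P  : present worth of the capital costs
    A  : annual savings minus operation and maintenance costs
    IR : interest rate
    """
    if A <= P * IR:
        # Annual savings do not even cover the interest: never pays back.
        return math.inf
    return -math.log(1.0 - P * IR / A) / math.log(1.0 + IR)

# Hypothetical example: 100 M EUR capital, 5% interest, 8 M EUR/a net savings
print(payback_years(100e6, 8e6, 0.05))
```

The guard clause reflects the economic intuition directly: if the annual savings are below the interest on the capital, the annuity equation has no positive solution for n.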
Table 6 Input and output data of an OxySER plant with 100 MW product gas energy
Table 7 Operating parameters of an OxySER plant with 100 MW product gas energy
Table 6 shows the input and output flows of an OxySER plant with 100 MW product gas energy. It can be seen that 50,400 kg/h of wood chips and 11,020 Nm3/h of pure oxygen are required for the generation of 28,800 Nm3/h of product gas. The product gas is used as reducing gas in the direct reduction route. Furthermore, 36,100 kg/h of CO2 can be recovered for further utilization. The costs for the final disposal of 1050 kg/h of ash and dust have been taken into account. Table 8 lists the main requirements on the product gas for utilization in the direct reduction plant. The comparison illustrates that the below zero emission product gas generated by the OxySER plant meets all requirements except for temperature and pressure. The concept is based on the assumption that the reducing gas is compressed and preheated before it is fed to the direct reduction plant; the required temperature and pressure are therefore reached after compression and preheating of the product gas.
Table 8 Requirements on product gas for the utilization in the direct reduction plant [22, 70]
The techno-economic assessment relies on the results of the IPSEpro simulation. Table 9 presents the fuel prices for chosen fuel types and the cost rates for utilities. Notably, the European natural gas price of 25 €/MWh is higher than on other continents. The costs per employee are assumed to be 70,000 €/a, and the expected plant lifetime of an OxySER plant is 20 years.
Table 9 Cost rates for utilities and NPV calculation
Table 10 presents the investment cost rates for the NPV calculation.
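The investment cost rates are derived from realized reference plants via the cost-scaling method and a CEPCI price-level update, as described in the methodology. A minimal sketch; the scaling exponent of 0.6 (the "six-tenths rule") and the CEPCI values are illustrative assumptions, not numbers from the paper:

```python
def scaled_cost(c_ref, s_ref, s_new, exponent=0.6,
                cepci_ref=541.7, cepci_new=607.5):
    """Scale a reference investment cost to a new capacity and price level.

    c_ref       : investment cost of the reference plant
    s_ref/s_new : capacity of reference and new plant (same unit)
    exponent    : capacity exponent (0.6 = six-tenths rule, assumed)
    cepci_*     : plant cost index in reference and target year (assumed)
    """
    return c_ref * (s_new / s_ref) ** exponent * (cepci_new / cepci_ref)

# Hypothetical: 40 M EUR reference gasifier at 50 MW, scaled to 100 MW
print(scaled_cost(40e6, 50.0, 100.0))
```

Doubling the capacity raises the cost only by a factor of about 2^0.6 ≈ 1.52 rather than 2, which is the economy of scale exploited by the 100 MW concept.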
The presented investment costs are based on the total capital investment costs of realized fluidized bed steam gasification plants operated as combined heat and power plants, reduced by the costs of the gas engine. Furthermore, these investment costs were updated via the CEPCI and scaled with the cost-scaling method. For the integrated OxySER plant, the assumption was made that the oxygen from the air separation unit (ASU) of the iron and steel plant is used. For the greenfield OxySER plant, the whole investment costs for an ASU were added.
Table 10 Investment costs for NPV calculation
The techno-economic analysis is based on the business case described in Section 2.5, wherein an operator of a direct reduced iron plant would like to build a new reducing gas supply unit driven by a biogenic feedstock. The NPV calculation, which is shown in Table 11, serves as a decision-making tool. The goal of producing 100 MW of reducing gas should be achieved with regard to CO2 emissions. The reference option (option 0) is the production of reducing gas by steam reforming of natural gas. Furthermore, three biogenic alternative options (options 1–3), which are described in Section 2.5, are compared with the reference option.
Table 11 Net present value calculation for the production of 100 MW reducing gas
Table 11 presents the net present value calculation for the production of 100 MW of reducing gas. Therein, the fuel energy per year, the investment costs including interest, and the fuel costs per year are listed. Besides the fuel costs, Table 11 also shows all other consumption-related costs. Costs for CO2 emission certificates are paid only for the use of fossil fuels (reference case). The relative NPV represents the profitability of the alternative production routes in comparison with the reference case, together with the payback period for the return of investment. The NPVs of all alternative options (1–3) are negative.
This means that the operation of SER and OxySER with wood chips over the expected plant lifetime of 20 years is less profitable than the reference option. The techno-economic comparison between SER and OxySER shows that, in option 2, the earnings through carbon dioxide are higher than the oxygen costs. In option 3, no earnings through CO2 utilization and no benefits regarding oxygen costs have been considered, which results in a strongly negative NPV. The payback analysis shows that only option 2 could return the investment at the expected interest rate in comparison with the reference case. However, the payback time of 24 years is very long and would not be profitable. Options 1 and 3 could not return the investment in comparison with the reference case. Furthermore, the reducing gas production costs of the four different routes were calculated. As can be seen from Table 11, the levelized costs of products (LCOP) of the reference case are the lowest at 39.0 €/MWh, followed by the integrated OxySER process at 39.4 €/MWh. Figure 3 presents the discounted expenses and revenues, divided into the main cost categories. It can be seen that the fuel costs are the main cost driver of the process. The techno-economic comparison points out that the production costs of a below zero emission reducing gas can only be in the range of steam-reformed natural gas if the generated CO2 can be utilized and the pure oxygen is delivered by an integrated ASU. Otherwise, the production of biomass-based reducing gas via the SER process is preferable. A further reduction of the production costs of the biomass-based reducing gas could be reached by the use of cheaper fuels.
Fig. 3 Relative net present value
Additionally, a sensitivity analysis of the NPV calculation has been carried out. The results of the sensitivity analysis based on the NPV of option 2 are shown in Fig. 4.
The sensitivity analysis shows that the fuel prices of natural gas and wood chips are the most sensitive cost rates. The fuel cost rates depend very much on the plant location. Furthermore, the NPV in this techno-economic comparison is also sensitive to the investment costs of the reducing agent production route, the revenues through CCU, the price of CO2 emission certificates, the plant lifetime, the operating hours, and the interest rate. The revenues through CCU depend on the availability of consumers. The sensitivity to operating hours and plant lifetime underlines the high importance of a high plant availability during the whole plant life cycle. Cost rates for operating utilities, maintenance, and employees have less influence on the results.
Fig. 4 Sensitivity analysis of the NPV calculation
Finally, the production costs of the biomass-based reducing gas were compared with those of other reducing agents like reformed natural gas, hydrogen, or coke. The comparison in Fig. 5 shows that the production of biomass-based reducing gas via OxySER (option 2) and SER is more than twice as expensive as the production of coke in a coking plant, but it is in the same range as the production of reducing gas via steam reforming of natural gas. All fuel costs are based on European price levels. The natural gas price, especially, depends strongly on the plant site; in Europe it is four to five times higher than in North America [33]. This is the reason why most of the existing direct reduction plants are built in oil-rich countries [33]. The production of hydrogen by water electrolysis is currently not economically competitive. From an ecological point of view, the use of biomass-based reducing gas without CCU decreases the CO2 emissions of the whole process chain for the production of crude steel down to 0.28 t CO2e/tCS. This amounts to a reduction of CO2 emissions by more than 80% in comparison with the integrated BF-BOF route.
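The quoted reduction of more than 80% can be cross-checked from the specific emission given in the text, assuming a specific emission of the integrated BF-BOF route of about 1.7 t CO2e/tCS (an assumed, typical literature value, not stated at this point in the text):

```python
ef_biomass_dr = 0.28   # t CO2e per t crude steel, biomass-based DR route (from the text)
ef_bf_bof = 1.7        # t CO2e per t crude steel, assumed BF-BOF reference value

reduction = 1.0 - ef_biomass_dr / ef_bf_bof
print(f"CO2 reduction vs. BF-BOF: {reduction:.0%}")
```

With this reference value the reduction comes out slightly above 80%, consistent with the "more than 80%" statement above and with the 83% figure quoted in the conclusions.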
Further on, the use of CCU within an OxySER plant could create a CO2 sink, since biomass releases the same amount of CO2 as it absorbs during its growth.
Fig. 5 Economic and ecologic comparison of different iron and steelmaking routes [12, 20, 25, 34, 80]
With regard to the 8.1 million tons of crude steel produced in Austria in the year 2017 [4] and an estimated woody biomass potential of around 50 PJ in the year 2030 [21], 13 biomass-based reducing gas plants (OxySER or SER) with a reducing gas power of 100 MW could be implemented. This would result in the production of around 35 Mio. GJ of biomass-based reducing gas for the direct reduction process, which is sufficient for the production of 3.5 Mio. tons of crude steel. One of the biomass-based reducing gas plants could be operated via the OxySER process with regard to the CCU potential of 300,000 t CO2 per year from the nearby urea synthesis plant [46]. Further CCU potential could arise through the production of CO2-derived fuels or chemicals [41].
Conclusion and outlook
The scope of this publication was the investigation of a concept for the production of a below zero emission reducing gas for use in a direct reduction plant, and whether it can reasonably contribute to a reduction of fossil CO2 emissions within the iron and steel sector in Austria. The gasification via SER allows the in situ CO2 sorption via the bed material system CaO/CaCO3. Therefore, a selective transport of carbon dioxide from the product gas to the flue gas stream is achieved. The use of a mix of pure oxygen and recirculated flue gas as fluidization agent in the CR results in a nearly pure CO2 flue gas stream. Through the in situ CO2 sorption, CO2 recovery rates of up to 95% can be reached. The CO2 could be used for further synthesis processes such as urea synthesis. Therefore, a below zero emission reducing gas could be produced.
The experimental and simulation results show that the produced below zero emission OxySER product gas meets all requirements for the use in a direct reduction plant. The use of the biomass-based reducing gas from the SER process within a MIDREX plant would decrease the CO2 emissions by 83% in comparison to the blast furnace route. The use of a below zero emission reducing gas from the OxySER process together with CCU would create a CO2 sink. The results of the techno-economic assessment show that the production of reducing gas via sorption-enhanced reforming in combination with oxyfuel combustion can compete with the natural gas route if the required pure oxygen is delivered by an available ASU and if CCU is possible. Otherwise, the SER process is more profitable. Furthermore, the sensitivity analysis of the cost rates showed that the profitability of the OxySER plant, and in consequence of the direct reduction plant, depends strongly on the fuel and investment costs. The production costs of the biomass-based reducing gas are more than twice as high as those of fossil coke, which is used mainly in the blast furnace route. Summing up, the presented integrated concept and the calculated results provide valuable data for the further design of the proposed concept. Beforehand, a demonstration at a significant scale is recommended. Further on, the implementation of the energy flows of an iron and steel plant within the simulation model could improve the current model with regard to efficiency. The profitability of the direct reduction with a biomass-based reducing gas or natural gas is strongly dependent on the availability of sufficient fuel. With regard to the woody biomass potentials in Austria in the year 2030, the production of 3.5 Mio. tons of crude steel by the use of biomass-based reducing gas could be reached. Due to the substitution of the integrated BF and BOF route by the MIDREX-BG-SER and EAF route, a reduction of 6.8 Mio.
tons of CO2e could be reached. This amount would decrease the CO2 emissions within the iron and steel sector in Austria by 50%. In conclusion, the production of biomass-based reducing gas could definitely contribute to the defossilization of the iron and steelmaking industry in Austria.
The data that support the findings of this study are available from the corresponding author, M. Hammerschmid, upon reasonable request.
AISI: American Iron and Steel Institute
Asm.:
ASU: air separation unit
BF: blast furnace
BG: biomass-based reducing gas
BOF: basic oxygen furnace
CaCO3: calcium carbonate
CaO: calcium oxide
CCS: carbon capture and storage
CCS/U: carbon capture and storage or utilization
CCU: carbon capture and utilization
CEPCI: chemical engineering plant cost index
CH3OH: methanol
CH4: methane
CH4N2O: urea
CHP: combined heat and power
CIRCORED: novel direct reduction technology
CO2: carbon dioxide
CO2e: carbon dioxide equivalent
COG: coke oven gas
COREX: smelting reduction technology
COURSE50: CO2 ultimate reduction steelmaking process by innovative technology for cool Earth 50 located in Japan
CR: combustion reactor
CS: crude steel
CxHy: non-condensable hydrocarbons
DR: direct reduction
DRI: direct reduced iron
dry basis
EAF: electric arc furnace
EU-28: member states of the European Union (until January 2020)
FG: flue gas
FINEX: smelting reduction technology
FINMET: direct reduction technology
GR: gasification reactor
H2: hydrogen
H2O: water
H2S: hydrogen sulfide
HCOOH: formic acid
HISARNA: novel bath-smelting technology
HM: hot metal
HYBRIT: Hydrogen Breakthrough Ironmaking Technology
IPSEpro: software tool for process simulation
LCOP: levelized costs of products
MIDREX: state-of-the-art direct reduction technology
MOE: molten oxide electrolysis
N2: nitrogen
NH3: ammonia
NPV: net present value
O2: oxygen
OPEX: operating expenditures
OxySER: sorption-enhanced reforming in combination with oxyfuel combustion
PG: product gas
POSCO: iron and steelmaking company located in Korea
RME: rapeseed methyl ester
SER: sorption-enhanced reforming
tCS: tons of crude steel
TGR: top-gas recycling
ULCOLYSIS: novel electric direct reduction technology
ULCORED:
ULCOS: ultra-low CO2 steelmaking
ULCOWIN:
vol.-%: volumetric percent wet
vol.-%dry: volumetric percent dry
wt.-%: weight percent wet
wt.-%daf: weight percent dry and ash free
wt.-%dry: weight percent dry
%CO: volume percent of carbon monoxide within reducing gas
%CO2: volume percent of carbon dioxide within reducing gas
%H2: volume percent of hydrogen within reducing gas
%H2O: volume percent of water within reducing gas
A: annual savings minus the operation and maintenance costs
IR: interest rate
n: number of years to return the investment
P: present worth of capital costs
Borkent B, De Beer J (2016) Carbon costs for the steel sector in Europe post-2020 - impact assessment of the proposed ETS revision. Utrecht
Metz B, Davidson O, Meyer L, Bosch P, Dave R (2007) Climate change 2007 - mitigation. Cambridge University Press, Cambridge, United Kingdom and New York, USA
Eurostat (2017) Greenhouse gas emissions by source sector. Statistical office of the European Union. https://ec.europa.eu/eurostat/data/database. Accessed 25 Mar 2020
World Steel Association (2018) Steel statistical yearbook 2018. Brussels
Zechmeister A, Anderl M, Geiger K, et al (2019) Klimaschutzbericht 2019 - Analyse der Treibhausgas-Emissionen bis 2017. Wien
Anderl M, Burgstaller J, Gugele B, et al (2018) Klimaschutzbericht 2018. Wien
Quader MA, Ahmed S, Dawal SZ, Nukman Y (2016) Present needs, recent progress and future trends of energy-efficient ultra-low carbon dioxide (CO2) steelmaking (ULCOS) program. Renew Sust Energ Rev 55:537–549. https://doi.org/10.1016/j.rser.2015.10.101
Eder W, Moffat G (2013) A steel roadmap for a low carbon Europe 2050. Brussels
Wang H, Sheng C, Lu X (2017) Knowledge-based control and optimization of blast furnace gas system in steel industry.
IEEE Access 5:25034–25045. https://doi.org/10.1109/ACCESS.2017.2763630
Shen X, Chen L, Xia S, Xie Z, Qin X (2018) Burdening proportion and new energy-saving technologies analysis and optimization for iron and steel production system. J Clean Prod 172:2153–2166. https://doi.org/10.1016/j.jclepro.2017.11.204
Sato M, Takahashi K, Nouchi T, Ariyama T (2015) Prediction of next-generation ironmaking process based on oxygen blast furnace suitable for CO2 mitigation and energy flexibility. ISIJ Int 55:2105–2114
Rammer B, Millner R, Boehm C (2017) Comparing the CO2 emissions of different steelmaking routes. BHM Berg- und Hüttenmännische Monatshefte 162:7–13. https://doi.org/10.1007/s00501-016-0561-8
Zhang H, Wang G, Wang J, Xue Q (2019) Recent development of energy-saving technologies in ironmaking industry. IOP Conf Series Earth Environ Sci 233:052016. https://doi.org/10.1088/1755-1315/233/5/052016
Buergler T, Kofler I (2016) Direct reduction technology as a flexible tool to reduce the CO2 intensity of iron and steelmaking. BHM Berg- und Hüttenmännische Monatshefte 162:14–19. https://doi.org/10.1007/s00501-016-0567-2
Junjie Y (2018) Progress and future of breakthrough low-carbon steelmaking technology (ULCOS) of EU technologies in global steel industry. 3:15–22. https://doi.org/10.11648/j.ijmpem.20180302.11
Suopajärvi H, Umeki K, Mousa E, Hedayati A, Romar H, Kemppainen A, Wang C, Phounglamcheik A, Tuomikoski S, Norberg N, Andefors A, Öhman M, Lassi U, Fabritius T (2018) Use of biomass in integrated steelmaking - status quo, future needs and comparison to other low-CO2 steel production technologies. Appl Energy 213:384–407. https://doi.org/10.1016/j.apenergy.2018.01.060
Tonomura S, Kikuchi N, Ishiwata N, Tomisaki S (2016) Concept and current state of CO2 ultimate reduction in the steelmaking process (COURSE50) aimed at sustainability in the Japanese steel industry. J Sustain Metall 2:191–199.
https://doi.org/10.1007/s40831-016-0066-4
Zhao J, Zuo H, Wang Y, Wang J, Xue Q (2020) Review of green and low-carbon ironmaking technology. Ironmak Steelmak 47:296–306. https://doi.org/10.1080/03019233.2019.1639029
Jahanshahi S, Mathieson JG, Reimink H (2016) Low emission steelmaking. J Sustain Metall 2:185–190. https://doi.org/10.1007/s40831-016-0065-5
Mandova H, Leduc S, Wang C, Wetterlund E, Patrizio P, Gale W, Kraxner F (2018) Possibilities for CO2 emission reduction using biomass in European integrated steel plants. Biomass Bioenergy 115:231–243
Titschenbacher F, Pfemeter C (2019) Basisdaten 2019 - Bioenergie. Graz
Lorraine L (2019) Direct from MIDREX - 3rd quarter 2019. North Carolina
Hofbauer H (2013) Biomass gasification for electricity and fuels, large scale. Renew Energy Syst:459–478. https://doi.org/10.1007/978-1-4614-5820-3
Wörtler M, Schuler F, Voigt N, et al (2013) Steel's contribution to a low-carbon Europe 2050. Boston Consulting Group, Steel Institute VDEh, Boston
Prammer J, Schubert M (2019) Umwelterklärung Voestalpine 2019. Voestalpine AG, Linz
Otto A, Robinius M, Grube T, Schiebahn S, Praktiknjo A, Stolten D (2017) Power-to-steel: reducing CO2 through the integration of renewable energy and hydrogen into the German steel industry. Energies 10:451. https://doi.org/10.3390/en10040451
Worrell E, Price L, Neelis M, et al (2008) World best practice energy intensity values for selected industrial sectors. Ernest Orlando Lawrence Berkeley National Laboratory LBNL-62806
Liu L, Jiang Z, Zhang X, Lu Y, He J, Wang J, Zhang X (2018) Effects of top gas recycling on in-furnace status, productivity, and energy consumption of oxygen blast furnace. Energy 163:144–150. https://doi.org/10.1016/j.energy.2018.08.114
Hooey L, Tobiesen A, Johns J, Santos S (2013) Techno-economic study of an integrated steelworks equipped with oxygen blast furnace and CO2 capture. Energy Procedia 37:7139–7151.
https://doi.org/10.1016/j.egypro.2013.06.651
Suopajärvi H, Kemppainen A, Haapakangas J, Fabritius T (2017) Extensive review of the opportunities to use biomass-based fuels in iron and steelmaking processes. J Clean Prod 148:–734. https://doi.org/10.1016/j.jclepro.2017.02.029
Wang C, Mellin P, Lövgren J, Nilsson L, Yang W, Salman H, Hultgren A, Larsson M (2015) Biomass as blast furnace injectant - considering availability, pretreatment and deployment in the Swedish steel industry. Energy Convers Manag 102:217–226. https://doi.org/10.1016/j.enconman.2015.04.013
Kopfle JT, Mcclelland JM, Metius GE (2008) Green(er) steelmaking with the Midrex direct reduction process. MIDREX Technologies
Ravenscroft C (2017) Direct from MIDREX - 2nd quarter 2017. MIDREX Technologies, North Carolina
SSAB (2017) HYBRIT - fossil-free steel. Summary of findings from pre-feasibility study 2016–2017. Sweden
Hölling M, Weng M, Gellert S (2018) Bewertung der Herstellung von Eisenschwamm unter Verwendung von Wasserstoff. Hamburg
Corradi O, Hinkle T, Collignon M, et al (2020) Electricity map - CO2 Emissionen. Tomorrow. https://www.electricitymap.org/?countryCode=AT&page=country. Accessed 26 Mar 2020
Müller S (2013) Hydrogen from biomass for industry - industrial application of hydrogen production based on dual fluid gasification. Dissertation, TU Wien
Spreitzer D, Schenk J (2019) Reduction of iron oxides with hydrogen - a review. Steel Res Int 90:1900108. https://doi.org/10.1002/srin.201900108
VDEh (2020) Hot metal and crude steel production. Stahl Online. https://www.vdeh.de/en/technology/steelmaking/. Accessed 25 Jun 2020
Ramírez-Santos ÁA, Castel C, Favre E (2018) A review of gas separation technologies within emission reduction programs in the iron and steel sector: current application and development perspectives. Sep Purif Technol 194:425–442.
https://doi.org/10.1016/j.seppur.2017.11.063
Markewitz P, Zhao L, Robinius M (2017) Technologiebericht 2.3 CO2-Abscheidung und Speicherung (CCS). Wuppertal, Karlsruhe, Saarbrücken
Tondl G (2013) Oxyfuel Verbrennung von Klärschlamm. Dissertation, Institute of Chemical, Environmental and Bioscience Engineering, TU Wien
Kuckshinrichs W, Markewitz P, Linssen J, et al (2010) Weltweite Innovationen bei der Entwicklung von CCS-Technologien und Möglichkeiten der Nutzung und des Recyclings von CO2. Forschungszentrum Jülich, ISBN 978-3-89336-617-0, Berlin
Hammerschmid M (2016) Evaluierung von Sorption Enhanced Reforming in Kombination mit Oxyfuel-Combustion für die Abscheidung von CO2. Bachelor Thesis, TU Wien
Berghout N, McCulloch S (2019) Putting CO2 to use. Technology report, International Energy Agency, France
Oktawiec D (2009) Erarbeitung eines Konzeptes zur Einhaltung der neu zu erwartenden Abwassergrenzwerte für die Harnstoff- und Melaminanlagen. Master Thesis, Montanuniversität Leoben
Leeson D, Mac Dowell N, Shah N, Petit C, Fennell PS (2017) A techno-economic analysis and systematic review of carbon capture and storage (CCS) applied to the iron and steel, cement, oil refining and pulp and paper industries, as well as other high purity sources. Int J Greenh Gas Control 61:71–84. https://doi.org/10.1016/j.ijggc.2017.03.020
Koch T, Scheelhaase T, Jonas N, et al (2016) Evaluation zur Nutzung von Kohlendioxid (CO2) als Rohstoff in der Emscher-Lippe-Region - Erstellung einer Potentialanalyse. Hamburg
Werpy T, Petersen G (2004) Top value added chemicals from biomass. US Dep Energy 1:76. https://doi.org/10.2172/926125
Schmid JC (2014) Development of a novel dual fluidized bed gasification system. Dissertation, TU Wien
Schmid JC, Benedikt F, Fuchs J, et al (2019) Syngas for biorefineries from thermochemical gasification of lignocellulosic fuels and residues - 5 years' experience with an advanced dual fluidized bed gasifier design. Biomass Conversion and Biorefinery.
TU Wien
Fuchs J, Schmid JC, Müller S, Hofbauer H (2019) Dual fluidized bed gasification of biomass with selective carbon dioxide removal and limestone as bed material: a review. Renew Sust Energ Rev 107:212–231. https://doi.org/10.1016/j.rser.2019.03.013
Schmid JC, Kolbitsch M, Fuchs J, et al (2016) Steam gasification of exhausted olive pomace with a dual fluidized bed pilot plant at TU Wien. Technical report, TU Wien
Hammerschmid M (2019) Entwicklung eines virtuellen Planungsraums anhand des Basic Engineering einer Zweibettwirbelschichtanlage. Diploma Thesis, TU Wien
Fuchs J, Wagner K, Kuba M, et al (2017) Thermische Vergasung minderwertiger Reststoffe zur Produktion von Wertstoffen und Energie. Blickpunkt Forschung, Vienna
Koppatz S (2008) In-situ Produktgaskonditionierung durch selektive CO2-Abscheidung bei Wirbelschicht-Dampfvergasung von Biomasse: Machbarkeitsnachweis im industriellen Maßstab. Diploma Thesis, TU Wien
Schweitzer D, Beirow M, Gredinger A, Armbrust N, Waizmann G, Dieter H, Scheffknecht G (2016) Pilot-scale demonstration of oxy-SER steam gasification: production of syngas with pre-combustion CO2 capture. Energy Procedia 86:56–68. https://doi.org/10.1016/j.egypro.2016.01.007
Soukup G (2009) Der AER-Prozess, Weiterentwicklung in einer Technikumsanlage und Demonstration an einer Großanlage. Dissertation, TU Wien
SimTech Simulation Technology (2011) IPSEpro process simulator - model development kit (manual). Graz
SimTech Simulation Technology (2011) IPSEpro process simulator - process simulation environment (manual). Graz
Pröll T, Hofbauer H (2008) Development and application of a simulation tool for biomass gasification based processes. Int J Chem React Eng 6:A89. https://doi.org/10.2202/1542-6580.1769
Müller S, Fuchs J, Schmid JC, Benedikt F, Hofbauer H (2017) Experimental development of sorption enhanced reforming by the use of an advanced gasification test plant. Int J Hydrog Energy 42:29694–29707.
https://doi.org/10.1016/j.ijhydene.2017.10.119 Pröll T (2004) Potenziale der Wirbelschichtdampfvergasung fester Biomasse – Modellierung und Simulation auf Basis der Betriebserfahrungen am Biomassekraftwerk Güssing. Dissertation. Technische Universität Wien Schmid JC (2016) Technoökonomische Fallstudien als Entscheidungsunterstützung für das strategische Management. Masterarbeit. Fachhochschule Burgenland Lozowski D (2020) Chemical Engineering Plant Cost Index. https://www.chemengonline.com/pci. Accessed 25 May 2020 Neuling U, Kaltschmitt M (2018) Techno-economic and environmental analysis of aviation biofuels. Fuel Process Technol 171:54–69. https://doi.org/10.1016/j.fuproc.2017.09.022 Piazzi S, Zhang X, Patuzzi F, Baratieri M (2020) Techno-economic assessment of turning gasification-based waste char into energy: a case study in South-Tyrol. Waste Manag 105:550–559. https://doi.org/10.1016/j.wasman.2020.02.038 Jentsch R (2015) Modellierung des SER-Prozesses in einem neuen Zweibettwirbelschicht-Dampfvergaser-System. Diplomarbeit. TU Wien Fuchs J, Schmid JC, Benedikt F, Müller S, Hofbauer H, Stocker H, Kieberger N, Bürgler T (2018) The impact of bed material cycle rate on in-situ CO2 removal for sorption enhanced reforming of different fuel types. Energy 162:35–44. https://doi.org/10.1016/j.energy.2018.07.199 Cheeley R (1999) Gasification and the MIDREX® direct reduction process. In: AISTech - Iron and Steel Technology Conference Proceedings. San Francisco, pp. 633–639 Horvath E (2019) Holzmarktbericht Jänner-Dezember 2019. LK Österreich. Wien E-Control (2020) Preisentwicklungen Strom und Erdgas. https://www.e-control.at/statistik/gas/marktstatistik/preisentwicklung. Accessed 27 Mar 2020 Thrän D, Pfeiffer D (2013) Methodenhandbuch - Stoffstromorientierte Bilanzierung der Klimagaseffekte, Ed. 4. Energetische Biomassenutzung. BMU. Leipzig Damodaran A (2019) Cost of equity and capital. http://people.stern.nyu.edu/adamodar/New_Home_Page/dataarchived.html#discrate. 
Accessed 6 Jan 2020 EEX (2019) European Emission Allowances (EUA). https://www.eex.com/en/market-data/emission-allowances/spot-market/european-emission-allowances#!/2016/05/26. Accessed 4 Jul 2019 Zimmermann AW, Wunderlich J, Müller L, Buchner GA, Marxen A, Michailos S, Armstrong K, Naims H, McCord S, Styring P, Sick V, Schomäcker R (2020) Techno-economic assessment guidelines for CO2 utilization. Front Energy Res 8:5. https://doi.org/10.3389/fenrg.2020.00005 Körner A, Tam C, Bennett S (2015) Technology roadmap - hydrogen and fuel cells. IEA. Paris Kost C, Shammugam S, Jülch V, et al (2018) Stromgestehungskosten Erneuerbare Energien. Frauenhofer Institut. Freiburg Konstantin P (2017) Praxisbuch Energiewirtschaft: Energieumwandlung, −transport und -beschaffung, Übertragungsnetzausbau und Kernenergieausstieg, 4. Auflage Godula-Jopek A, Stolten D (2015) Hydrogen production by eletrolysis, 1st ed. Wiley-VCH, Weinheim Open access funding provided by TU Wien (TUW). The present work contains results of the project ERBA II which is being conducted within the "Energieforschung" research program funded by the Austrian Climate and Energy Fund and processed by the Austrian Research Promotion Agency (FFG). The work has been accomplished in cooperation with voestalpine Stahl GmbH and voestalpine Stahl Donawitz GmbH. Institute of Chemical Engineering, Environmental and Bioscience Engineering, TU WIEN, Getreidemarkt 9/166, 1060, Vienna, Austria Martin Hammerschmid, Stefan Müller, Josef Fuchs & Hermann Hofbauer Martin Hammerschmid Stefan Müller Josef Fuchs Hermann Hofbauer Correspondence to Martin Hammerschmid. The authors declare that they have no conflict of interest. Code availability Hammerschmid, M., Müller, S., Fuchs, J. et al. Evaluation of biomass-based production of below zero emission reducing gas for the iron and steel industry. Biomass Conv. Bioref. 11, 169–187 (2021). 
https://doi.org/10.1007/s13399-020-00939-z Issue Date: February 2021 DOI: https://doi.org/10.1007/s13399-020-00939-z Low-carbon steelmaking Oxyfuel combustion
CommonCrawl
Isolated singularities for elliptic equations with Hardy operator and source nonlinearity

Huyuan Chen (Department of Mathematics, Jiangxi Normal University, Nanchang, Jiangxi 330022, China) and Feng Zhou (Center for PDEs and Department of Mathematics, East China Normal University, Shanghai 200241, China; corresponding author)

Discrete & Continuous Dynamical Systems, June 2018, 38(6): 2945-2964. doi: 10.3934/dcds.2018126

Received August 2017; Revised December 2017; Published April 2018

Fund Project: H. Chen is supported by NNSF of China, No: 11726614, 11661045, by the Jiangxi Provincial Natural Science Foundation, No: 20161ACB20007, and by the Science and Technology Research Project of Jiangxi Provincial Department of Education, No: GJJ160297. F. Zhou is supported by NNSF of China, No: 11726613, 11271133 and 11431005, and STCSM No: 13dZ2260400.

Abstract. In this paper, we are concerned with isolated singular solutions of semilinear elliptic equations involving the Hardy-Leray potential,
$$-\Delta u + \frac{\mu}{|x|^2}\, u = u^p \quad \text{in } \Omega \setminus \{0\}, \qquad u = 0 \quad \text{on } \partial\Omega. \tag{1}$$
We classify the isolated singularities and obtain the existence and stability of positive solutions of (1). Our results are based on the study of the nonhomogeneous Hardy problem in a new distributional sense.

Keywords: Hardy potential, isolated singularity, classification.
Mathematics Subject Classification: 35B44, 35J75.

Citation: Huyuan Chen, Feng Zhou. Isolated singularities for elliptic equations with Hardy operator and source nonlinearity. Discrete & Continuous Dynamical Systems, 2018, 38(6): 2945-2964. doi: 10.3934/dcds.2018126
Bioaccessibility and risk assessment of heavy metals, and analysis of arsenic speciation in Cordyceps sinensis

Li Zhou (ORCID: orcid.org/0000-0003-3222-2188), Sheng Wang, Qingxiu Hao, Liping Kang, Chuanzhi Kang, Jian Yang, Wanzhen Yang, Jingyi Jiang, Lu-Qi Huang & Lanping Guo

Cordyceps sinensis (C. sinensis) is a famous and precious Traditional Chinese Medicine (TCM), but frequent reports in recent years of heavy metals, especially arsenic, exceeding standards in C. sinensis have raised concerns about its safety. Research is therefore urgently needed on the heavy metals (Cu, Pb, As, Cd and Hg) in C. sinensis, covering their bioaccessibility, dietary exposure, arsenic speciation and health risks to the human body. Three 30 g batches of mixed wild-grown C. sinensis samples were collected from Qinghai Province, and each batch was divided into three parts: the whole C. sinensis, the stroma and the caterpillar body. An in vitro gastrointestinal method was used to evaluate the bioaccessibility of the heavy metals in the samples. Arsenic speciation analysis of the in vitro gastrointestinal solutions and the dilute nitric acid extracts was conducted by high performance liquid chromatography–inductively coupled plasma mass spectrometry (HPLC–ICP-MS). Finally, the target hazard quotient (THQ) developed by the US EPA (1989) was used to assess the health risks of the heavy metals in C. sinensis. The contents of Cu, Pb, Cd and Hg in the stroma were higher than those in the caterpillar body. In contrast, As was mainly found in the caterpillar body. In the whole C. sinensis, the average bioaccessibilities of Cu, Pb, As, Hg and Cd were 41.29, 40.11, 64.46, 18.91, and 81.14%, respectively; the corresponding values were 48.26, 42.92, 66.15, 12.86, and 87.07% in the caterpillar body and 38.30, 30.53, 30.18, 7.46, and 82.30% in the stroma.
The arsenic species detected were arsenite [As(III)], arsenate [As(V)] and trace amounts of methylarsonic acid (MMA). Of the total As, 8.69% was in inorganic form, which was also the major form of dissolved As. Among the extracted inorganic species, the concentrations of As(III) and As(V) were 0.56 ± 0.16 and 0.29 ± 0.06 mg kg−1, respectively. In the gastrointestinal solutions, only As(III) and As(V) could be detected; together the two species accounted for 2.00–2.73% of the total As. The bioaccessibility target hazard quotient (BTHQ) values for Cu, Pb, As, Cd and Hg in C. sinensis were 0.0041, 0.0040, 0.5334, 0.0020 and 0.0005, respectively, all less than 1. None of the five heavy metals in C. sinensis can be 100% absorbed by the human body. The arsenic content of C. sinensis is high, but the highly toxic inorganic arsenic accounted for only 8.69% of it. The heavy metals in C. sinensis therefore present no obvious risks to human health when it is consumed at reasonable doses. With the rapid development of modern industry, large amounts of heavy metals are discharged into the environment with the waste water, waste residue and waste gas from industrial and agricultural activities, resulting in environmental heavy metal contamination [1]. Heavy metals in the natural environment can enter the human body via a variety of routes and accumulate there [2,3,4], affecting normal physiological functions and causing inflammation and a variety of diseases, including cancer [5]. Cordyceps sinensis (Berk.) Sacc. is a parasitic fungus that grows on the larvae of Lepidoptera. It is a Traditional Chinese Medicine known as DongChong-XiaCao (winter worm-summer grass) [6, 7]. It can effectively regulate the immune system and protect the nervous system, and has anti-tumour properties [7, 8]. In recent years, the price of C. sinensis has risen dramatically [9], owing to its efficacy and the scarcity of the resource, and at present, the price of C.
sinensis in different specifications ranges from $30,000 to $100,000 per kilogram. At the same time, concerns about the heavy metal As in C. sinensis have grown with reports of its high As content. Our previous study found that the arsenic contents of 20 batches of C. sinensis were 2.59–12.56 mg kg−1 [10], giving an exceedance rate of 88.24% against the international standard of the Chinese Medicine-Chinese Herbal Medicine Heavy Metal Limit (4 mg kg−1 for As) [11]. The chemical species of arsenic found in nature include arsenate [As(V)], arsenite [As(III)], methylarsonic acid (MMA), dimethylarsinic acid (DMA), arsenobetaine (AsB), arsenocholine (AsC), and others [12]. Studies have shown that the inorganic arsenic species are the most dangerous, and that As(III) is more toxic than As(V), with a toxicity almost 60 times that of As(V). MMA and DMA are considered to be of low toxicity, while arsenosugars and arsenolipids are believed to be non-toxic [13, 14]. As(III) has a strong affinity for enzymes and for the SH and NH groups of proteins in the body, and it can combine with thiols to form stable complexes that hinder normal cell metabolism and inhibit some enzymes [15, 16]. The heavy metal problem of C. sinensis has now received extensive attention. Previous research on the heavy metals of C. sinensis has mainly focused on determination methods and contents, whereas studies on the As species present and on the bioaccessibility and health risks of the heavy metals, taking its actual usage into account, have rarely been reported. In this study, we focused on the dietary exposure and bioaccessibility of As, Cu, Pb, Cd and Hg and the As species present in C. sinensis [17, 18], aiming to assess the risks of these heavy metals and to supplement research on the safety of C. sinensis.
The Minimum Standards of Reporting Checklist contain details of the experimental design, and statistics, and resources used in this study (Additional file 1). An ICP-MS (ICAP-Q, Thermo Fisher Scientific, Analytical System, United States, Massachusetts) operating under normal multi-element tuning conditions was used to determine the total levels of As, Cu, Pb, Cd and Hg. The main analytical parameters were as follows: RF forward power, 1550 W; plasma argon flow rate, 13.98 L min−1; auxiliary argon flow rate, 0.80 L min−1; nebulizer argon flow rate, 1.10 L min−1; integration time, 0.02 s per point; and points per peak, 3. An ICP-MS was used as the detector system following species separation by HPLC for arsenic speciation analysis, and the column effluent was directly introduced into a nebulizer and spray chamber. Data were collected by single ion monitoring at m/z 75. For chromatographic separation, a high-pressure pump (Ultimate 3000 RSLCnano, Thermo Fisher Scientific, United States, Massachusetts) was used as the sample delivery system. A Dionex IonPacTM AS7 column (2 × 250 mm Thermo Fisher Scientific, United States, Massachusetts) was used for arsenic speciation analysis. The mobile phases were 0.1 mol L−1 ammonium carbonate (A) and 4 nmol L−1 ammonium carbonate (B). The elution gradient system was 0% B (0–2.5 min), 0–100% B (2.5–8 min), 100% B (8–10 min), 100–0% B (10–10.01 min), 0% B (10.01–20 min). The flow rate was 0.25 mL min−1. The injection volume was 20 μL. Calibration standards and reagents The single-element standard solutions (1000 mg L−1) of Cu, Pb, As, Cd, and Hg, purchased from the National Standard Material Center (Beijing, China), were used as the standard calibration solutions. Standard solutions of As As(III), As(V), MMA, AsC and AsB, purchased from the National Standard Material Center (Beijing, China) were used as the standard source solutions, with their concentrations guaranteed. 
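For clarity, the elution gradient program given above can be written as a time/%B table. The short Python sketch below is illustrative only: the table encoding and the linear-interpolation helper are our own construction, not part of the published method.

```python
# Hypothetical encoding of the gradient program described above: (time in min, %B).
GRADIENT = [
    (0.0, 0.0),     # 0-2.5 min: hold 0% B
    (2.5, 0.0),
    (8.0, 100.0),   # 2.5-8 min: ramp 0 -> 100% B
    (10.0, 100.0),  # 8-10 min: hold 100% B
    (10.01, 0.0),   # 10-10.01 min: step back to 0% B
    (20.0, 0.0),    # re-equilibrate at 0% B until 20 min
]

def percent_b(t):
    """Linearly interpolate the programmed %B at time t (minutes)."""
    if not GRADIENT[0][0] <= t <= GRADIENT[-1][0]:
        raise ValueError("time outside the gradient program")
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
```

Evaluating the helper at intermediate times (for instance, halfway through the ramp at 5.25 min it returns 50% B) is a quick way to sanity-check that a transcribed gradient matches the intended program.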
DMA certified reference material was prepared from commercially available reagents, with its purity evaluated before use. Working mixture standard solutions were prepared daily by diluting the source standard solutions to proper concentrations with pure water. Deionized water (18.2 MΩ) purified with a Milli-Q system (Millipore, Inc, Spain) was used for all solution preparation. Plasticware and glassware were maintained in 15% HNO3 for 24 h before use. Simulated gastrointestinal juices Simulated gastric juice (pH 1.8 ± 0.1) was prepared by dissolving 10.0 g of pepsin in 800 mL deionized water, then 16.4 mL of dilute hydrochloric acid (prepared by diluting 234 mL of concentrated hydrochloric acid to 1000 mL with deionized water) was added, and finally, additional deionized water was added to bring the total volume up to 1000 mL. Simulated intestinal juice was prepared just before use by mixing equal volumes of A and B solutions. (Solution A was 6.8 g of potassium dihydrogen phosphate dissolved in 500 mL of water, and 0.1 mol L−1 NaOH was used to adjust the pH to 6.8. Solution B was 10 g of pancreatin dissolved in water up to 500 mL) [19]. Cordyceps sinensis samples Three 30 g batches of C. sinensis samples were collected from Qinghai Province and were numbered 1, 2, and 3. All samples were dried at 40 °C and stored at 4 °C before analysis [20]. Sample preparation for contents determination of Cu, Pb, As, Cd, and Hg by ICP-MS Each batch of C. sinensis samples were equally divided into two parts. One part, the whole C. sinensis (W), was ground with a mortar and passed through a 50-mesh sieve and numbered W1, W2, and W3, respectively. The other part was separated into stroma (S) and caterpillar body (C), and those two portions were separately ground with a mortar and passed through a 50-mesh sieve and numbered S1, S2, S3 and C1, C2, C3, respectively. The whole C. 
sinensis samples, stroma samples and caterpillar body samples were digested with a microwave digestion system. Approximately 0.2 g of each sample was precisely weighed and transferred into PTFE vessels with 5 mL of concentrated HNO3 and 1 mL of H2O2. The operating procedure of the microwave system was as follows: the samples were heated at 100 °C for 3 min in the 1st step, 160 °C for 8 min in the 2nd step, and 190 °C for 30 min in the 3rd step, and then cooled to room temperature. The sample solutions were diluted to 50 mL with pure water. The blank and reference materials (carrots) were prepared at the same time under the same procedure. The method was fully validated by characteristic indices including linearity, limit of detection (LOD), limit of quantitation (LOQ), accuracy, and stability.

Extraction procedures for arsenic speciation analysis in the whole C. sinensis, stroma and caterpillar body by HPLC–ICP-MS

Dilute nitric acid solution extraction

5 mL of dilute nitric acid solution (HNO3: water = 2: 98) was added to precisely weighed 0.2 g samples of whole C. sinensis, stroma and caterpillar body, respectively. The mixtures were soaked overnight, then heated in a thermostatic water bath for 120 min at 90 °C and shaken for 1 min every 30 min. The mixture was cooled to room temperature and centrifuged at 11.5g for 10 min after extraction. The supernatants were removed, and 5 mL of dilute nitric acid solution was added to the remaining pellets. The extraction was repeated using the procedure described above. The combined supernatants from the two rounds of extraction were analysed immediately after filtration. The procedure was performed in triplicate [21].

In vitro gastrointestinal digestion of whole C. sinensis, stroma and caterpillar body for arsenic speciation analysis

Gastric digestion

The prepared gastric juice (5 mL) was added to 0.5 g of precisely weighed whole C. sinensis, stroma and caterpillar body samples, respectively.
Then the samples were heated for 1 h in a water bath, shaken for 30 min in a shaker (250 r min−1), and heated for an additional 3 h at 38 °C. The mixture was centrifuged at 11.5g for 10 min, and the collected supernatant was used as the gastric digestion extract.

Intestinal digestion

The prepared intestinal juice (5 mL) was added to the solid fraction obtained from the gastric digestion, following the same digestion procedure as above. The residue was discarded. The supernatant was passed through a 0.45 µm PVDF syringe-type filter before analysis. The procedure was performed in triplicate [22].

Preparation of in vitro gastrointestinal solution samples for determination of Cu, Pb, As, Cd and Hg content by ICP-MS

After digestion, the two liquid extracts were mineralized in a microwave oven. The samples were appropriately diluted before analysis.

Health risk assessment of heavy metals

To evaluate the potential acute or long-term hazards from exposure to heavy metals in C. sinensis, the DI, DE, and THQ of the heavy metals were calculated using the following formulas [23].

$$\text{DI} = C \times \text{FIR}$$

where DI is the dietary intake of the heavy metal (µg person−1 day−1), C is the measured concentration of the heavy metal in the samples and FIR is the daily dose (g), taken as the maximum daily dose (9 g) recommended by the "Chinese Pharmacopoeia" standard.

$$\text{DE} = \text{DI} / \text{WAB}$$

where DE is the dietary exposure (µg kg−1 body weight day−1) and WAB is the average individual body weight (62 kg). For non-carcinogenic effects, the target hazard quotient (THQ) developed by the US EPA (1989) has been used to evaluate potential health risks associated with long-term exposure to chemical pollutants in foodstuffs, and was therefore used here to assess the health risks from heavy metals in C. sinensis over a lifetime. A THQ value higher than 1 indicates that ingestion of C. sinensis adversely affects health to some extent.
If the THQ value is less than 1, the heavy metal exposure has no significant effect on human health.

$$\text{THQ} = \frac{C \times \text{EF} \times \text{ED} \times \text{FIR}}{\text{WAB} \times \text{AT} \times \text{RFD} \times 1000}$$

$$\text{BTHQ} = \text{THQ} \times \text{Bioaccessibility}_{\text{HM}}$$

where C is the measured concentration of the heavy metal in C. sinensis; EF is the exposure frequency (90 days per year); ED is the exposure duration (50 years); FIR is the daily dose (9 g); AT is the averaging time for non-carcinogens (70 × 365 days); RFD is the oral reference dose provided by the US EPA (Cu = 0.04 µg g−1 day−1 body weight, Pb = 0.0035 µg g−1 day−1 body weight, As = 0.0003 µg g−1 day−1 body weight, Cd = 0.001 µg g−1 day−1 body weight, Hg = 0.0005 µg g−1 day−1 body weight); the factor of 1000 in the denominator converts the units; and WAB is the average individual body weight (62 kg). Using the total concentrations of heavy metals may overestimate the amount absorbed through oral ingestion; the oral bioaccessibility is defined as the fraction of a compound that is released from its matrix in the gastrointestinal tract and thus becomes available for intestinal absorption, i.e., enters the blood stream [24]. The bioaccessibility value is used to avoid overestimating risk.

Method validation for quantitative analysis of Cu, Pb, As, Cd and Hg

Performance parameters of the analytical method were evaluated. The detection limits (DL) for the elements (expressed in µg L−1) were as follows: Cu: 0.007; Pb: 0.026; As: 0.010; Cd: 0.005; and Hg: 0.006. The quantification limits (QL) for the elements (expressed in µg L−1) were Cu: 0.020; Pb: 0.100; As: 0.030; Cd: 0.020; and Hg: 0.020. The standard curve prepared for each element was linear. The relative standard deviations (RSDs) were less than 3% (Table 1).
The results for the reference material were in good agreement with the certified values (Table 2).

Table 1 Regression equations, correlation coefficients, linear ranges and detection limits

Table 2 The content of heavy metals in standard reference material for carrot

The bioaccessibility of heavy metals in different parts of C. sinensis

The average bioaccessibilities of Cu, Pb, As, Hg and Cd were 41.29, 40.11, 64.46, 18.91, and 81.14% in whole C. sinensis, respectively; 48.26, 42.92, 66.15, 12.86, and 87.07% in the caterpillar body; and 38.30, 30.53, 30.18, 7.46, and 82.30% in the stroma (Table 3). For Cu, As, Hg and Cd, the bioaccessibilities in each part were higher in the gastric extract than in the intestinal extract, and the vast majority of the Cu and Cd was dissolved by the gastric juice. In contrast, the bioaccessibility of Pb in gastric juice was lower than that in intestinal juice. The dissolved quantity of each heavy metal was proportional to its total content in the sample; however, the bioaccessibilities of different heavy metals sometimes varied according to their binding forms and properties. The average contents of Cu, Pb, As, Hg and Cd were 15.02, 1.39, 13.47, 0.06, and 0.08 mg kg−1 in the caterpillar body and 18.63, 2.06, 1.49, 0.07, and 0.22 mg kg−1 in the stroma, respectively. The contents of Cu, Pb, Hg, and Cd in the stroma were greater than those in the caterpillar body, whereas the As content was substantially higher in the caterpillar body than in the stroma.

Table 3 Bioaccessibility of Cu, Pb, As, Cd and Hg in whole C. sinensis, stroma and caterpillar body

Arsenic speciation analysis in whole C. sinensis, stroma and caterpillar body

Across the sample parts, only As(III), As(V) and MMA were found, with MMA present only in trace amounts (Table 4). The concentration of arsenic in C. sinensis was between 9.00 and 10.18 mg kg−1; almost all the dissolved arsenic was inorganic, making up approximately 8.69% of the total arsenic content.
Among the extracted inorganic species, the concentrations of As(III) were 0.41–0.73 mg kg−1, and the concentrations of As(V) were 0.25–0.35 mg kg−1. The As species that made up approximately 91.31% of the total arsenic were not identified. When the whole C. sinensis was divided into stroma and caterpillar body, the concentrations in the stroma were 0.07–0.08 mg kg−1 for As(III) and 0.39–0.50 mg kg−1 for As(V), corresponding to a mean value of 34.20% of the total arsenic in the stroma. In the caterpillar body, the concentrations were 0.48–0.64 mg kg−1 for As(III) and 0.28–0.32 mg kg−1 for As(V); the mean value of these two inorganic arsenic forms made up 6.45% of the total arsenic content in the caterpillar body. Studies have shown that in the stroma of C. sinensis, the As(V) content is greater than that of As(III); in contrast, the As(III) content is greater than that of As(V) in the caterpillar body.

Table 4 The content of arsenic species in the dilute nitric acid solution (HNO3:H2O = 2:98) extract

As speciation analysis following in vitro gastrointestinal digestion

In the different parts of C. sinensis, only As(III) and As(V) were detected in the in vitro digestion solution (Table 5). Inorganic arsenic was mainly dissolved in the gastric juice; the average dissolved contents were 0.20 mg kg−1 in gastric juice and 0.02 mg kg−1 in intestinal juice. The inorganic arsenic in the in vitro digestion solution accounted for 2.33% of the total arsenic content, of which As(III) accounted for 1.37% and As(V) for 0.96%. The quantities of inorganic arsenic from the caterpillar body and stroma dissolved in the gastric juice and intestinal juice were 0.21 and 0.09 mg kg−1 and 0.02 and 0.02 mg kg−1, respectively, in which As(III) and As(V) accounted for 1.15 and 2.18% and 0.54 and 5.48%, respectively. Only part of the total As was dissolved. These results are consistent with those in Table 4.
Table 5 The content of As species in the in vitro gastrointestinal digestion juice

Estimation of health risks of heavy metals in C. sinensis

The DEs for Cu, Pb, As, Cd and Hg were 2.21, 0.21, 1.41, 0.02 and 0.01, respectively, all far below the corresponding tolerable limits set by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) (Table 6). The BTHQ values of these heavy metals were 0.0041, 0.0040, 0.5334, 0.0020 and 0.0005, respectively, all less than 1. The total BTHQ of the five heavy metals was 0.5440.

Table 6 Estimated dietary exposures (DE) and BTHQ values of Cu, Pb, As, Cd and Hg in C. sinensis

Based on the risk assessment results, As was the greatest contributor to heavy metal-related risks among the five elements, in agreement with previous findings. The other heavy metals in C. sinensis had no significant influence on human health.

Bioaccessibility of heavy metals in C. sinensis

Bioaccessibility is defined as the proportion of a pollutant that can be dissolved in the gastrointestinal environment, indicating the relative amount of the pollutant in the matrix that can be absorbed by the body; a higher bioaccessibility indicates a greater potential for absorption. The heavy metals in Chinese herbal medicines enter the body through the gastrointestinal tract and cannot be 100% absorbed by the digestive system. Therefore, the health risk of a heavy metal could be overestimated if the total content, rather than the absorbed content, is used to carry out the risk assessment. Thus, bioaccessibility may be more accurate for assessing the risks of heavy metals in Chinese herbal medicine. Bioaccessibility of heavy metals is commonly determined by in vitro experiments; in this study, a simulated gastrointestinal method was used for its simplicity, short experimental period, and easily controlled experimental conditions.
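The reported hazard quotients can be cross-checked with a few lines of arithmetic (values transcribed from the text above; the variable name is our own):

```python
# Reported BTHQ values for Cu, Pb, As, Cd and Hg in C. sinensis
bthq_values = {"Cu": 0.0041, "Pb": 0.0040, "As": 0.5334, "Cd": 0.0020, "Hg": 0.0005}

total = sum(bthq_values.values())
assert all(v < 1 for v in bthq_values.values())  # each element below the threshold
print(round(total, 4))                           # 0.544, the reported total BTHQ
print(max(bthq_values, key=bthq_values.get))     # As, the dominant contributor
```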
Dynamic in vitro gastrointestinal models mimic the gradual transit of ingested compounds through the digestive tract. In recent years, this method has been widely used to assess the risks of heavy metals in foods [20, 25, 26]. Studies have shown that the biochemical behavior of heavy metals at the various digestive stages is related to the characteristics of the heavy metals, their chemical forms, the type of matrix, and so on [27]. In this study, the bioaccessibility of heavy metals in C. sinensis and the speciation of arsenic in artificial gastrointestinal fluid were evaluated by in vitro experiments, so that the risks of heavy metals in C. sinensis could be evaluated scientifically.

As speciation in in vitro gastrointestinal digestion

Based on the results of the in vitro experiments, the bioaccessibility of arsenic in C. sinensis was 64.46%, indicating that the arsenic in C. sinensis could not be 100% dissolved by the artificial gastrointestinal fluid, and a large number of arsenic species in the artificial gastrointestinal extract remained unidentified.

The health risks of heavy metals in C. sinensis

Cordyceps sinensis is usually consumed over a long period of time to improve health; it is generally recommended to take C. sinensis for 2–3 months each year. The target hazard quotient (THQ) was used in this research, taking into consideration the daily dose and factors including the most commonly consumed dosage forms and the duration of consumption, to evaluate the potential long-term risk of heavy metals in C. sinensis to the human body. In the calculation of the THQ value, the duration of consumption was assumed to be 90 days, and the dosage was the maximum dosage of 9 g recommended in the Chinese Pharmacopoeia (2015 version). However, under actual conditions, the dosage is usually much less than 9 g because of the extremely high cost of C. sinensis, so the calculated THQ value is greater than its actual value and represents an inflated risk.
The experimental results suggest that there is no obvious risk to the human body from the heavy metals in C. sinensis. However, it is worth noting that the BTHQ value of arsenic is 0.5334 and the total BTHQ is 0.5440. If the required course of C. sinensis is longer than 90 days, the dose needs to be reduced to avoid damage to the body from heavy metals.

The standards for heavy metals in C. sinensis

Different medicinal materials grow in different environments and have different life cycles and properties, causing variations in heavy metal contents. Unified standards for heavy metals in Chinese herbal medicines ignore some special properties of individual medicinal herbs. Chinese medicine differs from food in usage and dosage; the mode and frequency of use should be taken into consideration to obtain more equitable evaluations when assessing the risks of heavy metals and setting standards for heavy metals in Chinese herbal medicines. This study showed that the heavy metals in C. sinensis present no obvious risk to human health when taken in a reasonable manner, suggesting that the apparent exceedance of heavy metal limits in C. sinensis may be attributed to unduly strict standards, which may need to be relaxed appropriately. The five heavy metals cannot be 100% absorbed by the human body. The arsenic content of C. sinensis is high, but the strongly toxic inorganic arsenic accounted for only 8.69% of the total in the dilute nitric acid extract and for merely 2.33% of the total in the in vitro gastrointestinal solutions. The dietary exposure values of Cu, Pb, As, Cd and Hg were lower than the safety limits set by the Joint FAO/WHO Expert Committee on Food Additives (JECFA), and the BTHQs of these heavy metals were all less than 1. The heavy metals in C. sinensis therefore present no obvious risk to human health when taken in a reasonable manner, and it may be time to relax the heavy metal standards for C. sinensis appropriately.

References

Luo C, Shen ZG.
The mechanisms of heavy metal uptake and accumulation in plants. Chin Bull Bot. 2003;20:59–66.
Kampa M, Castanas E. Human health effects of air pollution. Environ Pollut. 2008;151:362–7.
Robinson BH. E-waste: an assessment of global production and environmental impacts. Sci Total Environ. 2009;408:183–91.
Zeng X, Xu X, Boezen HM, Huo X. Children with health impairments by heavy metals in an e-waste recycling area. Chemosphere. 2016;148:408–15.
Koedrith P, Kim H, Weon JI, Seo YR. Toxicogenomic approaches for understanding molecular mechanisms of heavy metal mutagenicity and carcinogenicity. Int J Hyg Environ Health. 2013;216:587.
Yoshikawa N, Nishiuchi A, Kubo E, Yu Y, Kunitomo M, Kagota S. Cordyceps sinensis acts as an adenosine A3 receptor agonist on mouse melanoma and lung carcinoma cells, and human fibrosarcoma and colon carcinoma cells. Pharmacol Pharm. 2011;02:266–70.
Zheng LP, Gao LW, Zhou JQ, Sima YH, Wang JW. Antioxidant activity of aqueous extract of a Tolypocladium sp. fungus isolated from wild Cordyceps sinensis. Afr J Biotech. 2008;7:3004–10.
Guo JY. A contemporary treatment approach to both diabetes and depression by Cordyceps sinensis, rich in vanadium. Evid Based Complement Alternat Med eCAM. 2010;7:387.
Li SP, Yang FQ, Tsim KWK. Quality control of Cordyceps sinensis, a valued traditional Chinese medicine. J Pharm Biomed Anal. 2006;41:1571–84.
Zhou L, Hao QX, Wang S, Yang Q, Kang CZ, Yang WZ, Guo LP. Study on distribution of five heavy metal elements in different parts of Cordyceps sinensis by microwave digestion ICP-MS. China J Chin Materia Med. 2017;42:2934–8.
Traditional Chinese medicine—determination of heavy metals in herbal medicines used in traditional Chinese medicine. ISO 18664, 2015.8.1.
Lu T, Zhang LL, Zhang GW, Yi WL, Shen Z. Arsenic speciation of dried Lentinus edodes by HPLC–ICP-MS. Food Ind. 2015;12:275–9.
Chen SZ, Liu LP, Zhen-Xia DU. The presence form and analytic technology of arsenic in food. Chin J Food Hyg. 2014;3:296–303.
Li H, Zhang LS. Toxicity and biological function of arsenic. Mod Pre Med. 2000;27:39–40.
Li YH, Dang RH, Xia YJ. The damage of arsenic poisoning to the nervous system. Chin J Control Endemic Diseases. 2001;16:354–6.
Gong ZL, Lu XF, Ma MS, Watt C, Li CX. Arsenic speciation analysis. Talanta. 2002;58:77–96.
Wang GL, Jin HY, Han XP, Shi Y, Tian JG, Lin RC. Quality study and problems of Cordyceps sinensis. Chin Tradit Herbal Drugs. 2008;39:115–8.
Larsen EH, Hansen M, Gössler W. Speciation and health risk considerations of arsenic in the edible mushroom Laccaria amethystina collected from contaminated and uncontaminated locations. Appl Organomet Chem. 1998;2:285–91.
Pharmacopoeia Commission of the People's Republic of China. Chinese pharmacopoeia, vol. 3. Beijing: China Medical Science and Technology Press; 2015.
Jin FU, Cui YS. In vitro model system to evaluate the influence of pH and soil–gastric/intestinal juice ratio on bioaccessibility of Pb, Cd and As in two typical contaminated soils. J Agro Environ Sci. 2012;31:245–51.
Chen SB, Wen-Jing YU, Zhao YL. Speciation analysis of arsenic and determination of abio-arsenic in food. Acad Period Farm Products Process. 2013;8:86–8.
Pizarro I, Gómez-Gómez M, León J, Román D, Palacios MA. Bioaccessibility and arsenic speciation in carrots, beets and quinoa from a contaminated area of Chile. Sci Total Environ. 2016;565:557–63.
Chien LC, Hung TC, Choang KY, Yeh CY, Meng PJ, Shieh MJ, Ha BC. Daily intake of TBT, Cu, Zn, Cd and As for fishermen in Taiwan. Sci Total Environ. 2002;285:177–85.
Ramesh A, Walker SA, Hood DB, Guillén MD, Schneider K, Weyand EH. Bioavailability and risk assessment of orally ingested polycyclic aromatic hydrocarbons. Int J Toxicol. 2004;23:301.
Lan DZ, Lei M, Zhou S, Liao BH, Cui YS, Yin NY, Shen Y. Health risk assessment of heavy metals in rice grains from a mining-impacted area in South Hunan by an in vitro simulation method. J Agro Environ Sci. 2014;33:1897–903.
Huang L, Zhou CY, Chen ZL, Zhang JS, Su YM, Peng XC. Advances in the study of in vitro simulation of bioavailability of heavy metals in soil and crops. J Yangtze Univ (Natural Science Edition). 2016;3:42–7.
Oomen AG, Hack A, Minekus M, Zeijdner E, Cornelis C, Schoeters G, Verstraete W, Vande WT, Wragg J, Rompelberg CJ, Sips AJ, Van Wijnen JH. Comparison of five in vitro digestion models to study the bioaccessibility of soil contaminants. Environ Sci Technol. 2002;36:3326–34.
JECFA. Evaluation of certain food additives and contaminants. Thirty-third Report of the Joint FAO/WHO Expert Committee on Food Additives (JECFA). Technical Report Series 776, Geneva. 1989.

HLQ and GLP made substantial contributions to the conception and design of the work. ZL and WS collected the samples. ZL, HQX and KLP performed the ICP-MS and HPLC–ICP-MS analyses and collected the data. YWZ and JJY prepared the Cordyceps sinensis samples. YJ and KCZ provided great help and careful guidance in revising the manuscript, including the data analysis and the grammar. All authors read and approved the final manuscript.

The authors wish to acknowledge Guang Yang and Wen Sun, National Resource Center for Chinese Materia Medica, China Academy of Chinese Medical Sciences, State Key Laboratory Breeding Base of Dao-di Herbs. Special thanks to Ge Mo for the language help with this article.

All data generated or analyzed during this study are included in this published article. All authors consent to publication of this study in the journal Chinese Medicine.

This work was financially supported by the Reform Project of the Review and Approval System of Pharmaceutical Medical Instruments of the Chinese Pharmacopoeia Commission (ZG2016-1), the National Key Research and Development Program of China (2017YFC1700701) and Key Fields of the Chinese Academy of Traditional Chinese Medicine (ZZ10-27).
National Resource Center for Chinese Materia Medica, State Key Laboratory Breeding Base of Dao-di Herbs, China Academy of Chinese Medical Sciences, Beijing, 100700, People's Republic of China: Li Zhou, Sheng Wang, Qingxiu Hao, Liping Kang, Chuanzhi Kang, Jian Yang, Wanzhen Yang, Jingyi Jiang, Lu-Qi Huang & Lanping Guo
College of Traditional Chinese Medicine, Guangdong Pharmaceutical University, Guangdong, 510006, China: Li Zhou & Lanping Guo
Correspondence to Lu-Qi Huang or Lanping Guo.
Additional file: The minimum standards of reporting checklist.
Zhou, L., Wang, S., Hao, Q. et al. Bioaccessibility and risk assessment of heavy metals, and analysis of arsenic speciation in Cordyceps sinensis. Chin Med 13, 40 (2018). https://doi.org/10.1186/s13020-018-0196-7
Representation and approximation of the polar factor of an operator on a Hilbert space
Mostafa Mbekhta, Université de Lille, Département de Mathématiques, UMR-CNRS 8524, Laboratoire P. Painlevé, 59655 Villeneuve d'Ascq Cedex, France
In memory of our friend Ezzeddine Zahrouni, who left us very early
Received February 2020; Revised August 2020; Published November 2020
Fund Project: This work was supported in part by the Labex CEMPI (ANR-11-LABX-0007-01)
Let $ H $ be a complex Hilbert space and let $ \mathcal{B}(H) $ be the algebra of all bounded linear operators on $ H $. The polar decomposition theorem asserts that every operator $ T \in \mathcal{B}(H) $ can be written as the product $ T = V P $ of a partial isometry $ V\in \mathcal{B}(H) $ and a positive operator $ P \in \mathcal{B}(H) $ such that the kernels of $ V $ and $ P $ coincide; this decomposition is then unique. $ V $ is called the polar factor of $ T $. Moreover, we automatically have $ P = \vert T\vert = (T^*T)^{\frac{1}{2}} $. Unlike for $ P $, no such representation formula is available for $ V $. In this paper, we introduce, for $ T\in \mathcal{B}(H) $, a family of functions called a "polar function" for $ T $, such that the polar factor of $ T $ is obtained as a limit of a net built via continuous functional calculus from this family of functions. We derive several explicit formulas representing different polar factors. These formulas allow for new methods of approximation of the polar factor of $ T $.
Keywords: Polar decomposition, polar factor, partial isometries, approximations.
Mathematics Subject Classification: Primary: 47A58; Secondary: 41A35, 47B02.
Citation: Mostafa Mbekhta. Representation and approximation of the polar factor of an operator on a Hilbert space.
Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020463
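As a finite-dimensional illustration of polar factors (our own sketch, not taken from the paper), the factor can be computed from the singular value decomposition, and for invertible matrices it can also be approximated iteratively by Higham's Newton iteration; the function names and the tolerance are our own illustrative choices:

```python
import numpy as np

def polar_factor(T, tol=1e-12):
    """Polar decomposition T = V P via SVD.

    V is a partial isometry with ker(V) = ker(P), and P = (T* T)^{1/2}.
    """
    U, s, Vh = np.linalg.svd(T)
    r = int(np.sum(s > tol))            # numerical rank
    V = U[:, :r] @ Vh[:r, :]            # partial isometry on (ker T)^perp
    P = Vh.conj().T @ np.diag(s) @ Vh   # positive factor (T* T)^{1/2}
    return V, P

def polar_factor_newton(T, iters=30):
    """Newton iteration X <- (X + X^{-*})/2; for invertible T it
    converges to the (unitary) polar factor."""
    X = np.array(T, dtype=float)
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X).conj().T)
    return X

A = np.array([[2.0, 1.0], [0.0, 1.0]])          # invertible example
V, P = polar_factor(A)
assert np.allclose(V @ P, A)                    # T = V P
assert np.allclose(polar_factor_newton(A), V)   # both methods agree

B = np.diag([2.0, 0.0])                         # rank-deficient example
Vb, Pb = polar_factor(B)
assert np.allclose(Vb @ Pb, B)
assert np.allclose(Vb @ Vb.T @ Vb, Vb)          # Vb is a partial isometry
```

The rank cutoff is what makes `V` a partial isometry rather than a full isometry when `T` is not injective, mirroring the kernel condition ker(V) = ker(P) in the theorem.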
\begin{document} \title{A localized criterion for the regularity of solutions to Navier-Stokes equations} \begin{abstract} The Serrin-Prodi-Ladyzhenskaya type $L^{p,q}$ criteria for the regularity of solutions to the incompressible Navier-Stokes equations are fundamental in the study of the millennium problem posed by the Clay Mathematics Institute about the incompressible N-S equations. In this article, we establish some localized $L^{p,q}$ criteria for the regularity of solutions to the equations. In fact, we obtain some a priori estimates of solutions to the equations that depend only on some local $L^{p,q}$ type norms. These local $L^{p,q}$ type norms are small for reasonable initial values and remain small for global regular solutions. Thus, deriving the smallness or even the boundedness of the local $L^{p,q}$ type norms is necessary and sufficient to affirmatively answer the millennium problem. Our work provides an interesting and plausible approach to the study of the millennium problem. Keywords: {Incompressible Navier-Stokes equations $|$ Regularity $|$ A priori estimates $|$ Millennium problem} \end{abstract} \subsection*{Significance} The Serrin-Prodi-Ladyzhenskaya type regularity criteria are fundamental in the study of the millennium problem posed by the Clay Mathematics Institute about the incompressible Navier-Stokes equations. These criteria depend on norms of the solutions over the whole space. Such global norms can be very large and are very hard, if not impossible, to control. We derive criteria based on localized norms of the same type. Hopefully one can derive a priori estimates of these localized norms and establish the global regularity of solutions with compactly supported smooth initial values. We think this is a plausible approach to solving the millennium problem about the Navier-Stokes equations.
\section{Introduction} Consider the Cauchy problem for the 3-dimensional incompressible Navier-Stokes equations: \begin{equation} \label{eq:ns} \begin{cases} \begin{array}{l} \displaystyle u_t-\nu\Delta u+ (u \cdot \nabla ) u + \nabla p=0,\\ \displaystyle\operatorname{div} u = 0, \end{array}&\quad\text{in}\quad\mathbb{R}^3\times(0,T]\\ \ u(x,0)= u_0 ( x), &\quad \text{for any}\quad x \in \mathbb{R}^3. \end{cases} \end{equation} We naturally assume $u_0$ to be a solenoidal field. The existence of a smooth solution to \eqref{eq:ns} is one of the famous millennium problems, posed by the Clay Mathematics Institute (C. L. Fefferman \cite{MR2238274}). Leray \cite{MR1555394} established the existence of weak solutions to \eqref{eq:ns}. Hopf \cite{MR50423} continued this line of work, and the weak solution is today known as the Leray-Hopf weak solution (see also \cite{MR0254401}): \begin{theorem}[Leray-Hopf] Suppose $u_0\in L^2(\mathbb{R}^3)$ is solenoidal, then there exists a weak solution $u\in L^\infty(0,T;L^2(\mathbb{R}^3))\cap L^2(0,T;\dot{H}^1(\mathbb{R}^3))$ of \eqref{eq:ns}. \end{theorem} To solve the millennium problem, it suffices to establish a suitable integrability for the Leray-Hopf solution. Indeed, one has the following smoothness and uniqueness criterion. \begin{theorem} \label{thm:LadyProSer} Suppose $u_0\in L^2(\mathbb{R}^3)$, and $u$ is a Leray-Hopf solution of \eqref{eq:ns}. Moreover, suppose $u$ satisfies the following condition: for some $T_0>0$, \begin{equation} \label{cond:LadyProSer} u\in L^r(0,T_0;L^s(\mathbb{R}^3))\qquad\text{with}\quad \frac{3}{s}+\frac{2}{r}=1,\quad3\leqslant s\leqslant +\infty. \end{equation} Then $u$ is smooth in $\mathbb{R}^3\times (0,T_0]$ and any Leray-Hopf weak solution of \eqref{eq:ns} must coincide with $u$ in $\mathbb{R}^3\times (0,T_0]$. \end{theorem} Condition \eqref{cond:LadyProSer} is now called the Serrin-Prodi-Ladyzhenskaya condition.
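The exponents in \eqref{cond:LadyProSer} are exactly the scaling-critical ones. Indeed (a standard computation, recalled here for convenience), under the Navier-Stokes scaling $u_\lambda(x,t)=\lambda u(\lambda x,\lambda^2 t)$, for finite $r,s$ one has
\begin{equation*}
\|u_\lambda\|_{L^r(0,T/\lambda^2;L^s(\mathbb{R}^3))}^r=\lambda^{r\left(1-\frac{3}{s}\right)-2}\|u\|_{L^r(0,T;L^s(\mathbb{R}^3))}^r,
\end{equation*}
so the norm in \eqref{cond:LadyProSer} is invariant under this scaling precisely when $\frac{3}{s}+\frac{2}{r}=1$.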
For $3<s\leqslant +\infty$, the uniqueness was established by Serrin \cite{MR0150444} and Prodi \cite{MR126088} and the smoothness was proved by Ladyzhenskaya \cite{MR0236541}. L. Escauriaza, G. Seregin and V. \v{S}ver\'{a}k \cite{MR1992563} considered the limiting case $s=3$; they proved that any $L^\infty(0,T_0;L^3(\mathbb{R}^3))$ solution must be regular and can be extended further (see also the quantitative version by Terence Tao \cite{MR4337421}). On the other hand, one can check that the Leray-Hopf weak solution must satisfy: \begin{equation} \label{cond:LerayHopf} u\in L^r(0,T_0;L^s(\mathbb{R}^3))\qquad\text{for}\quad \frac{3}{s}+\frac{2}{r}=\frac{3}{2}\qquad\text{and}\quad 2\leqslant r\leqslant +\infty. \end{equation} There is an integrability gap between \eqref{cond:LadyProSer} and \eqref{cond:LerayHopf}. As a step forward, Scheffer \cite{MR454426,MR510154,MR573611,MR676002} and Caffarelli-Kohn-Nirenberg \cite{MR673830} established the partial regularity of suitable weak solutions to \eqref{eq:ns}. Lin \cite{MR1488514} simplified the proof (see also \cite{MR1738171}). Indeed, the following result is obtained. \begin{proposition} \label{prop:Partial} There exist universal constants $\epsilon_0, C_0>0$, with the following property. Suppose $(u,p)$ is a suitable weak solution of \eqref{eq:ns} \begin{equation} u_t+(u\cdot\nabla) u-\Delta u+\nabla p=0\qquad\text{in}\quad K_R(z_0,t_0):=(t_0-R^2,t_0)\times B_{R}(x_0). \end{equation} Let \begin{equation} C(z_0,t_0,R):=R^{-2}\int_{K_R(z_0,t_0)}|u|^3\ \mathrm{d} x\ \mathrm{d} t\quad\text{and}\quad D(z_0,t_0,R):=R^{-2}\int_{K_R(z_0,t_0)}|p-\bar{p}|^{\frac{3}{2}}\ \mathrm{d} x\ \mathrm{d} t. \end{equation} Assume $C(z_0,t_0,R)$ and $D(z_0,t_0,R)$ satisfy \begin{equation} C(z_0,t_0,R)+D(z_0,t_0,R)\leqslant \epsilon_0. \end{equation} Then \begin{equation} R|u|\leqslant C_0\qquad\text{in}\quad K_{R/2}(z_0,t_0), \end{equation} and $|\nabla^k u|$ is H\"{o}lder continuous in $K_{R/2}(z_0,t_0)$ for any $k\in\mathbb{N}$.
\end{proposition} In 1972, Fabes, Jones and Riviere \cite{MR316915} established the short time existence of mild solutions to \eqref{eq:ns} satisfying condition \eqref{cond:LadyProSer} for initial data $u_0\in L^q(\mathbb{R}^3)$ with $q>3$, where the existence time $T_0$ depends on the initial data $u_0$. Moreover, they obtained that if $u_0\in L^2(\mathbb{R}^3)\cap L^q(\mathbb{R}^3)$, then the mild solution is a Leray-Hopf weak solution, hence by Theorem~\ref{thm:LadyProSer} a smooth solution. In 1984, Kato \cite{MR760047} considered the limiting case $u_0\in L^3(\mathbb{R}^3)$ and established the short time existence of mild solutions. In his work, the existence time $T_0$ depends only on the $L^3$-norm of $u_0$, and for sufficiently small $\|u_0\|_{L^3(\mathbb{R}^3)}$ it holds that $T_0=+\infty$. The following is a simple case in which small initial data implies global existence: \begin{proposition} \label{prop:1} Suppose $u_0$ satisfies: \begin{equation} \|u_0\|_{L^3(\mathbb{R}^3)}\leqslant C_\ast \nu. \end{equation} Then problem \eqref{eq:ns} has a unique smooth solution. \end{proposition} By the well known energy inequality, one has the following: \begin{proposition} \label{prop:2} For sufficiently large $T$, for example $T=C\|u_0\|_{L^2(\mathbb{R}^3)}^4\nu^{-4}$, there exists $t_0<T$ such that \begin{equation} \|u(t_0)\|_{L^2(\mathbb{R}^3)}\|\nabla u(t_0)\|_{L^2(\mathbb{R}^3)}\leqslant C\nu^2. \end{equation} As a corollary of Proposition \ref{prop:1}, the solution of \eqref{eq:ns} is smooth for $t\geqslant t_0$. \end{proposition} In order to solve the millennium problem, we need to show global existence of smooth solutions for large initial data. In \cite{MR2388660}, T. Hou and C. Li established the global regularity of solutions to the incompressible axisymmetric Navier-Stokes equations in 3D for some large initial data of special types.
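We remark (a standard interpolation, included for completeness) how the smallness in Proposition \ref{prop:2} feeds into Proposition \ref{prop:1}: by the Ladyzhenskaya-Gagliardo-Nirenberg inequality in $\mathbb{R}^3$,
\begin{equation*}
\|u(t_0)\|_{L^3(\mathbb{R}^3)}^2\leqslant C\|u(t_0)\|_{L^2(\mathbb{R}^3)}\|\nabla u(t_0)\|_{L^2(\mathbb{R}^3)},
\end{equation*}
so the bound $\|u(t_0)\|_{L^2(\mathbb{R}^3)}\|\nabla u(t_0)\|_{L^2(\mathbb{R}^3)}\leqslant C\nu^2$ gives $\|u(t_0)\|_{L^3(\mathbb{R}^3)}\leqslant C'\nu$, and Proposition \ref{prop:1} applies with $t_0$ as the initial time.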
Inspired by the existence of global smooth solutions for globally small initial data, we aim at utilizing local smallness to develop global existence for general compactly supported smooth initial data. An interesting counterpart to the millennium problem can be found in \cite{hou2022potentially} by Thomas Hou and the references therein, where possible finite time blow-up of Leray-Hopf solutions with certain regular initial data is sought. In this article, we derive the regularity of solutions to \eqref{eq:ns} under the local smallness assumption \eqref{eq:asu}. Indeed, we define a localized $L^p$-norm of $u$ as: \begin{equation} \|u\|_{L^p_R}=\sup \{\|u\|_{L^p({B_{R}(x)})} \mid x\in\mathbb{R}^3\} \end{equation} Our result states as follows: \begin{theorem} \label{thm:main} Suppose $u$ is a solution of \eqref{eq:ns} satisfying \begin{equation} \label{eq:asu} u_0\in H^1(\mathbb{R}^3)\quad\text{and}\quad \int_0^T\|u(t)\|_{L^p_{R(t)}}^q\,dt<\infty\quad\text{for}\quad \frac{3}{p}+\frac{2}{q}=1,\quad q<+\infty, \end{equation} where $R(t)$ is a positive function satisfying: \begin{equation} \label{eq:R} \int_0^T(R(t))^{-2}dt<\infty. \end{equation} Then $u\in L^\infty[0,T;C^\infty(\mathbb{R}^3)]$ with \begin{equation} \label{eq3} \|\nabla u(t)\|_{L^2(\mathbb{R}^3)}\leqslant \|\nabla u(0)\|_{L^2(\mathbb{R}^3)}\exp\left\{\frac{C_1}{\nu^{q-1}}\int_0^t \|u(\tau)\|_{L^p_{R(\tau)}}^{q}d\tau +C_2\nu\int_{0}^tR(\tau)^{-2}d\tau\right\}. \end{equation} \end{theorem} \begin{remark} Combining our result with Proposition \ref{prop:2}, we know that for given $u_0$, it suffices to require \eqref{eq:asu} and \eqref{eq:R} for $T=C_*^{-2}\|u_0\|_{L^2(\mathbb{R}^3)}^4\nu^{-4}$ to derive the global smoothness: $u\in L^\infty[0,T;C^\infty(\mathbb{R}^3)]$. \end{remark} \begin{remark} Our estimate appears to be similar to Theorem~\ref{thm:LadyProSer}, but requires only a local integrability of $u$.
\end{remark} The following is a special case of the Gagliardo-Nirenberg inequalities on bounded domains (see \cite{MR4513000} by C. Li and K. Zhang). \begin{lemma} \label{lem:GN} If $w \in H^2(Q)$ then \begin{equation} \int_{Q}|\nabla w-\overline{\nabla w}|^3dx\leqslant C\|w\|_{L^3(Q)}\|\nabla^2 w\|^2_{L^2(Q)}. \end{equation} Here $\overline{\nabla w}$ is the average of $\nabla w$ over $Q$. \end{lemma} Next is the main estimate of this article. It is interesting in its own right, and the method of derivation might be used to deal with other nonlinear problems. \begin{lemma}[Main estimate] \label{lem:est} There is a universal constant $C_0$, such that for any $r >0$ and $p>3$: \begin{equation}\label{eq15} \left|\int_{\mathbb{R}^3}\frac{\partial u_k}{\partial x_i}\frac{\partial u_j}{\partial x_k}\frac{\partial u_j}{\partial x_i}dx\right| \leqslant C_0\|u\|_{L^p_{r}} \left(r^{-\frac{3}{p}-1}\|\nabla u\|^2_{L^2(\mathbb{R}^3)}+r^{1-\frac{3}{p}}\|\nabla^2 u\|^2_{L^2(\mathbb{R}^3)}\right). \end{equation} \end{lemma} \section*{Proof of theorems} \begin{proof}[Proof of Theorem \ref{thm:main}] Without loss of generality, let $\nu=1$. Define $$H(t)=\int_{\mathbb{R}^3} |\nabla u(x,t)|^2dx=\|\nabla u(t)\|^2_{L^2(\mathbb{R}^3)},$$ then we have the basic estimate: \begin{equation}\label{eq5} \int_0^{t}H(\tau)d\tau=\int_0^{t}\int_{\mathbb{R}^3} |\nabla u(x,\tau)|^2dxd\tau =\frac{1}{2}(\|u_0\|^2_{L^2(\mathbb{R}^3)}-\|u(t)\|^2_{L^2(\mathbb{R}^3)}). \end{equation} Therefore \begin{equation} \int_0^{T}H(t)dt=\int_{0}^T\|\nabla u(t)\|_{L^2(\mathbb{R}^3)}^2dt\leqslant\|u_0\|_{L^2(\mathbb{R}^3)}^2\qquad\text{and}\qquad \|u(t)\|_{L^2(\mathbb{R}^3)}\leqslant\|u_0\|_{L^2(\mathbb{R}^3)}. \end{equation} By classical regularity and approximation theory (approximating $u_0$ by $C_0^{\infty}(\mathbb{R}^3)$ functions), we may assume that $u(x,t) \in C^k((0,T], H^k(\mathbb{R}^3))$ as long as we have a priori bounds on $\|u(\cdot, t)\|_{H^1(\mathbb{R}^3)}$; see \cite{MR1555394} and \cite{MR0150444}.
Next, we apply the partial derivative $\frac{\partial}{\partial x_i}$ to equation \eqref{eq:ns}, take the inner product with $\frac{\partial u}{\partial x_i}$, integrate over $\mathbb{R}^3$, and then sum over $i$ from 1 to 3 to get: \begin{equation}\label{eq10} \frac{d}{dt}H(t)=\frac{d}{dt}\int_{\mathbb{R}^3} |\nabla u(x,t)|^2dx =-2\|\nabla^2 u(t)\|^2_{L^2(\mathbb{R}^3)} -2\sum_{i,j,k=1,2,3}\int_{\mathbb{R}^3}\left(\frac{\partial u_k}{\partial x_i}\frac{\partial u_j}{\partial x_k}\frac{\partial u_j}{\partial x_i}\right)_{(x,t)}dx. \end{equation} We apply the estimate in Lemma \ref{lem:est} to derive: \begin{equation}\label{eq20} H'(t)\leqslant-2\|\nabla^2 u(t)\|^2_{L^2(\mathbb{R}^3)}+2C_0\|u(t)\|_{L^p_{r}} \left(r^{-\frac{3}{p}-1}\|\nabla u(t)\|^2_{L^2(\mathbb{R}^3)}+r^{1-\frac{3}{p}}\|\nabla^2 u(t)\|^2_{L^2(\mathbb{R}^3)}\right). \end{equation} Take \begin{equation} r=\min\left\{R(t),(C_0\|u(t)\|_{L^p_{R(t)}})^{-\frac{p}{p-3}}\right\}. \end{equation} Then \begin{equation} \|u(t)\|_{L^p_{r}}\leqslant\|u(t)\|_{L^p_{R(t)}},\quad\text{since}\quad r\leqslant R(t). \end{equation} Therefore, \begin{equation} 2C_0r^{1-\frac{3}{p}}\|u(t)\|_{L^p_{r}}\leqslant2C_0r^{1-\frac{3}{p}}\|u(t)\|_{L^p_{R(t)}}\leqslant 2. \end{equation} Then \begin{equation}\label{eq30} \begin{aligned} H'(t)&\leqslant2C_0r^{-1-\frac{3}{p}}\|u(t)\|_{L^p_{r}}H(t)\\ &\leqslant2C_0\max\left\{(C_0\|u(t)\|_{L^p_{R(t)}})^{\frac{p+3}{p-3}},R(t)^{-1-\frac{3}{p}}\right\}\|u(t)\|_{L^p_{R(t)}}H(t)\\ &\leqslant\left(2C_0^{\frac{2p}{p-3}}\|u(t)\|_{L^p_{R(t)}}^{\frac{2p}{p-3}} +2C_0\|u(t)\|_{L^p_{R(t)}}R(t)^{-1-\frac{3}{p}}\right)H(t)\\ &\leqslant\left(2C_0^{\frac{2p}{p-3}}\|u(t)\|_{L^p_{R(t)}}^{\frac{2p}{p-3}} +\frac{p-3}{2p}(2C_0)^{\frac{2p}{p-3}}\|u(t)\|_{L^p_{R(t)}}^{\frac{2p}{p-3}}+\frac{p+3}{2p}R(t)^{-2}\right)H(t)\\ &=\left(2C_1\|u(t)\|_{L^p_{R(t)}}^{\frac{2p}{p-3}}+2C_2R(t)^{-2}\right)H(t).
\end{aligned} \end{equation} Applying the Gronwall inequality, one obtains: $$H(t)\leqslant H(0)\exp\left\{2C_1\int_0^t \|u(\tau)\|_{L^p_{R(\tau)}}^{\frac{2p}{p-3}}d\tau +2C_2\int_{0}^tR(\tau)^{-2}d\tau\right\}.$$ \eqref{eq3} then follows. \end{proof} Now, we come back to prove the main estimate \eqref{eq15}. \begin{proof}[Proof of Lemma~\ref{lem:est}] We present our proof for the case $i=k=j$ and $u_i=u_j=u_k=w$; all other cases can be proved similarly. We will show the following: \begin{equation} \left|\int_{\mathbb{R}^3}(Dw)^3dx\right| \leqslant C\sup_{x\in\mathbb{R}^3}\|w\|_{L^p(Q_{r}(x))}\left(r^{-\frac{3}{p}-1}\|\nabla w\|^2_{L^2(\mathbb{R}^3)} +r^{1-\frac{3}{p}}\|\nabla^2 w\|^2_{L^2(\mathbb{R}^3)}\right). \end{equation} Here $D$ denotes a first order directional derivative. We decompose $\mathbb{R}^3$ into non-overlapping standard cubes $Q_{r/2}(x_i)$ of side length $r$. By slightly shifting the edges of the cubes $Q_{r/2}(x_i)$, one can make a re-decomposition \begin{equation} \mathbb{R}^3=\bigcup_{i}\tilde{Q}^i, \end{equation} such that for each $\tilde{Q}^i$ coming from $Q_{r/2}(x_i)$ the following inequality holds: \begin{equation} \frac{1}{r^2}\int_{\partial \tilde{Q}^i}|w|d\sigma_x\leqslant \frac{C}{r^3}\int_{Q_r(x_i)}|w|dx. \end{equation} Now, let $$\overline{Dw}|_i=\frac{1}{r^3}\int_{\tilde{Q}^i}Dwdx$$ be the average of $Dw$ on $\tilde{Q}^i$. We derive: \begin{equation}\label{eq43} \begin{aligned} \int_{\mathbb{R}^3}(Dw)^3dx&=\sum_{i}\int_{\tilde{Q}^i}(Dw)^3dx\\ &=\sum_{i}\left[\int_{\tilde{Q}^i}(Dw-\overline{Dw}|_i)^3dx+3\overline{Dw}|_i\int_{\tilde{Q}^i} (Dw-\overline{Dw}|_i)^2dx+r^3\overline{Dw}|_i^3\right].
\end{aligned} \end{equation} We apply Lemma \ref{lem:GN} to estimate: \begin{equation} \label{eq45} \begin{aligned} \int_{\tilde{Q}^i}\left|\nabla w-\overline{\nabla w}|_i\right|^3dx &\leqslant C\|w\|_{L^3(\tilde{Q}^i)}\|\nabla^2 w\|^2_{L^2(\tilde{Q}^i)} \\ &\leqslant C r^{\frac{p-3}{p}}\|w\|_{L^p(\tilde{Q}^i)}\|\nabla^2 w\|^2_{L^2(\tilde{Q}^i)}. \end{aligned} \end{equation} Therefore \begin{equation} \label{eq46} \begin{aligned} \left|\sum_{i}\int_{\tilde{Q}^i}(Dw-\overline{Dw}|_i)^3dx\right| &\leqslant \sum_{i}\int_{\tilde{Q}^i}\left|\nabla w-\overline{\nabla w}|_i\right|^3dx \\ &\leqslant Cr^{\frac{p-3}{p}}\sup_{i}{\|w\|_{L^p(\tilde{Q}^i)}} \sum_{i} \|\nabla^2 w\|^2_{L^2(\tilde{Q}^i)}\\ &=Cr^{\frac{p-3}{p}}\sup_{i}{\|w\|_{L^p(\tilde{Q}^i)}} \|\nabla^2 w\|^2_{L^2(\mathbb{R}^3)}. \end{aligned} \end{equation} On the other hand, \begin{equation} \begin{aligned} \left|3\overline{Dw}|_i\int_{\tilde{Q}^i}(Dw-\overline{Dw}|_i)^2dx +r^3\overline{Dw}|_i^3\right| &\leqslant Cr^{-3}\|\nabla w\|^2_{L^2(\tilde{Q}^i)}\left|\int_{\tilde{Q}^i}Dwdx\right|\\ &\leqslant Cr^{-3} \|\nabla w\|^2_{L^2(\tilde{Q}^i)}\int_{\partial \tilde{Q}^i}|w|d\sigma_x\\ &\leqslant Cr^{-4} \|\nabla w\|^2_{L^2(\tilde{Q}^i)}\int_{Q_r(x_i)}|w|dx\\ &\leqslant Cr^{-\frac{3}{p}-1} \|\nabla w\|^2_{L^2(Q_r(x_i))}\|w\|_{L^p(Q_r(x_i))}. \end{aligned} \end{equation} Then \begin{equation} \label{eq47} \begin{aligned} \sum_i\left|3\overline{Dw}|_i\int_{\tilde{Q}^i}(Dw-\overline{Dw}|_i)^2dx +r^3\overline{Dw}|_i^3\right| &\leqslant Cr^{-\frac{3}{p}-1}\sup_i\|w\|_{L^p(Q_r(x_i))} \sum_i\|\nabla w\|^2_{L^2(Q_r(x_i))}\\ &=Cr^{-\frac{3}{p}-1}\sup_i\|w\|_{L^p(Q_r(x_i))}\|\nabla w\|^2_{L^2(\mathbb{R}^3)}. \end{aligned} \end{equation} Substituting \eqref{eq46} and \eqref{eq47} into \eqref{eq43} and noting that \begin{equation} \sup_i\|w\|_{L^p(\tilde{Q}^i)}\leqslant\sup_{x\in\mathbb{R}^3}\|w\|_{L^p(Q_{r}(x))}=\|w\|_{L^p_r}, \end{equation} we finish the proof. \end{proof} \end{document}
\begin{document} \title[Moduli of Low Genus Pointed Curves]{The Dimension of the Moduli Space of Pointed Algebraic Curves of Low Genus} \author{Jan Stevens} \address{\scriptsize Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg. SE 412 96 Gothenburg, Sweden} \email{[email protected]} \begin{abstract} We explicitly compute the moduli space of pointed algebraic curves with a given numerical semigroup as Weierstrass semigroup in many cases of genus at most seven and determine the dimension for all semigroups of genus seven. \end{abstract} \subjclass[2020]{14H55 14H45 14H10} \keywords{Weierstrass point; numerical semigroup; pointed curve; versal deformation} \thanks{} \maketitle \section*{Introduction} On a smooth projective curve $C$ the pole orders of rational functions with poles only at a given point $P$ form a numerical semigroup, the Weierstrass semigroup. The space ${\mathcal{M}_{g,1}^{\NN}}$ parametrising pointed smooth curves with Weierstrass semigroup at the marked point equal to $\Gamma$ is a locally closed subspace of the moduli space $\mathcal{M}_{g,1}$. In this paper we compute the dimension of ${\mathcal{M}_{g,1}^{\NN}}$ for all semigroups of genus at most seven. By the famous result of Pinkham \cite{Pi74} the space ${\mathcal{M}_{g,1}^{\NN}}$ is closely related to the negative weight part of the versal deformation of the monomial curve singularity $C_\Gamma$ with semigroup $\Gamma$. This connection has been used in a series of papers by Nakano--Mori \cite{NM04} and Nakano \cite{Na08, Na16} to explicitly determine ${\mathcal{M}_{g,1}^{\NN}}$ for many semigroups of genus at most six, using the \textsc{Singular} \cite{DGPS} package \texttt{deform.lib} \cite{Ma}. In all these cases ${\mathcal{M}_{g,1}^{\NN}}$ is irreducible and rational. For the remaining cases (with two exceptions) irreducibility and stable rationality were shown by Bullock \cite{Bul14}, with different methods.
We extend the computations of Nakano \cite{Na08}. One quickly runs into the limits of what can be computed in reasonable time. Therefore we also use other approaches to compute deformations. One method is to use Hauser's algorithm \cite{Ha83}; the method of Contiero-Stöhr \cite{CoSt} to compute ${\mathcal{M}_{g,1}^{\NN}}$ is closely related. In this method one first perturbs the equations in all possible ways, and takes care of flatness only later. This means introducing many new variables, most of which can be eliminated. In a number of cases this approach is successful. In one case it is more convenient to use the projection method developed by De Jong and Van Straten \cite{td2}, as applied to curves in \cite{Ste93}. We list the semigroups of genus at most 7 in Tables \ref{tab1} and \ref{tab2}. For $g\leq 6$ we follow the notation of \cite{Na08}. The corresponding gap sequences were already listed by Haure \cite{Hau96}, in the first published paper containing the term Weierstrass points. Haure also gives the number of moduli on which curves with given Weierstrass semigroup depend. Our computations show that his results are correct except in one case. The non-emptiness of ${\mathcal{M}_{g,1}^{\NN}}$ for all semigroups with $g\leq7$ was established by Komeda \cite{Kom94}. Our tables also contain the structure of ${\mathcal{M}_{g,1}^{\NN}}$ in the cases we have been able to determine it. In many cases, e.g. if the monomial curve $C_\Gamma$ is a complete intersection, the space ${\mathcal{M}_{g,1}^{\NN}}$ is smooth. The next most common case is that ${\mathcal{M}_{g,1}^{\NN}}$ is a weighted cone over the Segre embedding of $\mathbb{P}^1\times \mathbb{P}^3$; the curve $C_\Gamma$ then has codimension 3 and is given by 6 equations. For codimension 4 and 10 equations the base space is typically given by 20 equations and the exact structure depends on the curve.
Except for the curve already studied in \cite{Ste93} these equations are too complicated, with too many monomials, to be useful. As to the dimension of ${\mathcal{M}_{g,1}^{\NN}}$, in general the following bounds are known \cite{RV77, Co21}: \[ 2g-2+t-\dim\mathrm{T}^{1,+}\leq \dim {\mathcal{M}_{g,1}^{\NN}}\leq 2g-2+t\;, \] where $t$ is the rank of the highest syzygy module of the ideal of $C_\Gamma$, and $\dim\mathrm{T}^{1,+}$ the number of deformations of positive weight, both easily computable with \textsc{Singular} \cite{DGPS} or \emph{Macaulay2} \cite{GrSt}. The result of our computations is that for all semigroups with $g\leq 7$ the dimension is given by the lower bound. In the first section we recall the relation between the moduli space ${\mathcal{M}_{g,1}^{\NN}}$ and deformations of the monomial curve with semigroup $\Gamma$. The next section describes the computation methods used in this paper. The main part of the paper discusses the computation of the moduli space or of its dimension for the different types of semigroup. \section{The moduli space ${\mathcal{M}_{g,1}^{\NN}}$} \label{Pinksection} Let $P$ be a smooth point on a possibly singular integral complete curve $C$ of arithmetic genus $g>1$, defined over an algebraically closed field $\mathbf{k}$ of characteristic zero. An integer $n\in \mathbb{N}$ is a \textit{gap} if there does not exist a rational function on $C$ with pole divisor $nP$, or equivalently $H^0(C, \mathcal{O}_{C}((n-1)P))=H^{0}(C, \mathcal{O}_{C}(nP))$. There are exactly $g$ gaps by the Weierstrass gap theorem, an easy consequence of Riemann-Roch. The nongaps form a numerical semigroup $\Gamma$, the \textit{Weierstrass semigroup} of $C$ at $P$; this is the set of nonnegative integers $n\in\mathbb{N}$ such that there is a rational function on $C$ with pole divisor $nP$. For any numerical semigroup the \textit{genus} is defined as the number of gaps.
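For instance (matching the entries $N(2)_2$ and $N(3)_1$ in Table \ref{tab1}):
\begin{equation*}
\Gamma=\langle 3,4,5\rangle:\ \mathbb{N}\setminus\Gamma=\{1,2\},\quad g=2;\qquad
\Gamma=\langle 2,7\rangle:\ \mathbb{N}\setminus\Gamma=\{1,3,5\},\quad g=3.
\end{equation*}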
Given a numerical semigroup $\Gamma$ of genus $g>1$, let ${\mathcal{M}_{g,1}^{\NN}}$ be the space parametrising pointed smooth curves with $\Gamma$ as Weierstrass semigroup at the marked point. It is a locally closed subspace of the moduli space $\mathcal{M}_{g,1}$ of pointed smooth curves of genus $g$. Note that ${\mathcal{M}_{g,1}^{\NN}}$ can be empty. The connection between the moduli space ${\mathcal{M}_{g,1}^{\NN}}$ and deformations of negative weight of monomial curves was first observed by Pinkham \cite[Ch. 13]{Pi74}. Given a numerical semigroup $\Gamma=\langle n_1,\dots,n_r\rangle$ we form the semigroup ring $\mathbf{k}[\Gamma]:=\oplus_{n\in\Gamma}\mathbf{k}\,t^{n}$ and denote by $C_{\Gamma}:=\Spec\mathbf{k}[\Gamma]$ its associated affine monomial curve. Consider the versal deformation of $C_{\Gamma}$ \begin{equation*} \begin{matrix} \mathcal{X}_{t_0}\cong C_{\Gamma} & \longrightarrow & \mathcal{X} \\[3pt] \Big\downarrow & & \Big\downarrow\\[7pt] \{t_0\}=\Spec\,\mathbf{k} & \longrightarrow & B \end{matrix} \end{equation*} where $B=\Spec A$ is the spectrum of a local, complete noetherian $\mathbf{k}$-algebra. Pinkham \cite{Pi74} showed that the natural $\mathbb{G}_{m}$-action on $C_{\Gamma}$ can be extended to the total and parameter spaces. This induces a grading on the tangent space $T^1_{C_{\Gamma}}$ to $B$. The convention here is that a deformation has negative weight $-e$ if it decreases the weights of the equations of the curve by $e$; the corresponding deformation variable then has (positive) weight $e$. A numerical semigroup $\Gamma$ is called \textit{negatively graded} if $T^{1}_{C_{\Gamma}}$ has no positive graded part. Let $B^-$ be the subspace of $B$ with negative weights. Then the restriction $\mathcal{X}^- \to B^-$ is versal for deformations with good $\mathbb{G}_{m}$-action. Both $\mathcal{X}^- $ and $ B^-$ are defined by polynomials and we use the same symbols for the corresponding affine varieties.
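The classical example, going back to Pinkham, illustrates these conventions. For $\Gamma=\langle 2,3\rangle$ the monomial curve is the cusp $C_\Gamma\subset\mathbb{A}^2$ given by $y^2=x^3$, with $\deg x=2$, $\deg y=3$, and the versal deformation of negative weight is
\begin{equation*}
y^2=x^3+\alpha x+\beta,
\end{equation*}
with deformation variables of weights $\deg\alpha=4$ and $\deg\beta=6$; in particular $\Gamma$ is negatively graded, and the one-dimensional quotient of the smooth locus by $\mathbb{G}_m$ matches the entry $d=1$ for $N(1)_1$ in Table \ref{tab1}.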
The deformation $\mathcal{X}^- \to B^-$ can be fiberwise compactified to $\smash{\overline{\mathcal{X}}}^- \to B^-$; each fibre is an integral curve in a weighted projective space with one point $P$ at infinity, and this point has semigroup $\Gamma$. All the fibres over a given $\mathbb{G}_m$-orbit in $B^-$ are isomorphic, and two fibres are isomorphic if and only if they lie in the same orbit. This is proved in \cite{Pi74} for smooth fibres and in general in the Appendix of \cite{Lo84}. Each pointed curve from ${\mathcal{M}_{g,1}^{\NN}}$ occurs as a fibre by the following construction. Consider the section ring $\mathcal{R} = \oplus_{n=0}^\infty H^0(C,\mathcal{O}(nP))$. It gives an embedding of $C= \Proj \mathcal R$ in a weighted projective space, with coordinates $X_0, \dots, X_r$ where $\deg X_0 = 1$. The space $\Spec \mathcal R$ is the corresponding quasi-cone in affine space. Setting $X_0 = 0$ defines the monomial curve $C_\Gamma$; all other fibres are isomorphic to $C\setminus P$. In particular, if $C$ is smooth, this construction defines a smoothing of $C_\Gamma$. \begin{thm}[\null{\cite[Thm. 13.9]{Pi74}}]\label{pinkhamthm} Let $\mathcal{X}^- \to B^-$ be the equivariant negative weight miniversal deformation of the monomial curve $C_\Gamma$ for a given semigroup $\Gamma$ and denote by $B^-_s$ the open subset of $B^-$ given by the points with smooth fibers. Then the moduli space ${\mathcal{M}_{g,1}^{\NN}}$ is isomorphic to the quotient ${\mathcal{M}_{g,1}^{\NN}}=B^-_s/\mathbb{G}_{m}$ of $B^-_s$ by the $\mathbb{G}_{m}$-action. \end{thm} The closure of a component of $B^-_s$ is a smoothing component and is itself contained in a smoothing component in $B$. For quasihomogeneous curve singularities there is a simple formula for the dimension of smoothing components: it is $\mu+t-1$ \cite{Gr82}, with $\mu=2\delta-r+1$ the Milnor number and $t= \dim_\mathbf{k} \Ext^1_\mathcal{O}(\mathbf{k},\mathcal{O})$ the type.
For monomial curves $\delta=g$ and $r=1$; the type can be computed from the semigroup \cite[4.1.2]{Bu80}: $t=\lambda(\Gamma)$, the number of gaps $\ell$ of $\Gamma$ such that $\ell+n\in\Gamma$ whenever $n$ is a nongap. Given the equations of $C_\Gamma$ (needed anyway for deformation computations) the type is easily found as the rank of the highest syzygy module. Let $\dim\mathrm{T}^{1,+}$ be the dimension of the space of infinitesimal deformations of $C_\Gamma$ of positive weight. Then we have the following bounds for the dimension of components of ${\mathcal{M}_{g,1}^{\NN}}$ \cite{Co21}; the upper bound is due to Rim--Vitulli \cite{RV77}. \begin{thm}\label{Stevens} Let $\mathbb{N}$ be a numerical semigroup of genus greater than $1$. If ${\mathcal{M}_{g,1}^{\NN}}$ is nonempty, then for any irreducible component $E$ of ${\mathcal{M}_{g,1}^{\NN}}$ \[ 2g-2+t-\dim\mathrm{T}^{1,+}\leq \dim E\leq 2g-2+t\;. \] \end{thm} \section{Computing deformation spaces in negative weight} By Pinkham's theorem (Theorem \ref{pinkhamthm}), to explicitly describe the moduli space ${\mathcal{M}_{g,1}^{\NN}}$ one can compute the negative weight part of the versal deformation of the monomial curve $C_\Gamma$. For many semigroups of low genus this was done by Nakano-Mori \cite{NM04} and Nakano \cite{Na08,Na16}, using the computer algebra system \textsc{Singular} \cite{DGPS}. The main obstacle in the remaining cases is that the computations take too long, and result in long formulas without apparent structure. In this section we describe several methods to determine versal deformations, with comments on computational matters. \subsection{The standard approach} We recall the main steps; see also \cite[Ch. 3]{Ste03}. Let $X$ be a variety with $\mathbb{G}_{m}$-action with isolated singularity at the origin in $\mathbb{A}^n$. Let $S=\mathbf{k}[X_1,\dots,X_n]$ be the polynomial ring in $n$ variables. Let $f=(f_1,\dots,f_r)$ generate the ideal $I(X)$ of $X$.
The first few terms of the resolution of $\mathbf{k}[X]=S/I(X)$ are $$ 0\longleftarrow \mathbf{k}[X]\longleftarrow S \mapleft f S^k \mapleft r S^l\; $$ where the columns of the matrix $r$ generate the module of relations. Let $X_B\to B$ be a deformation of $X$ over $ B=\Spec A$. The flatness of the map $X_B\to B$ translates into the existence of a lifting of the resolution to $$ 0\longleftarrow \mathbf{k}[X_B]\longleftarrow S\otimes A \mapleft F (S\otimes A)^k \mapleft R (S\otimes A)^l\;. $$ To find the versal deformation we must find a lift with $FR=0$ in the most general way. The first step is to compute infinitesimal deformations. We write $F=f+\varepsilon f'$ and $R=r+\varepsilon r'$. As $\varepsilon^2=0$, the condition $FR=0$ gives \[ FR=(f+\varepsilon f')(r+\varepsilon r')=fr+\varepsilon(fr'+f'r)=0\;. \] Because $fr=0$, we obtain the equation $fr'+f'r=0$ in $S$. We first solve the equation $f'r=0$, or rather its transpose $r^t(f')^t=0$, in $\mathbf{k}[X]$. This means finding syzygies between the columns of the matrix $r^t$; then we find $r'$ by lifting $f'r$ with $f$. After this we lift order by order. Obstructions may come up, leading to equations in the deformation parameters. \comment We consider now $\varepsilon$ as a parameter marking the order and suppose that $f'$ depends on deformation parameters. Suppose we have found $F_{n-1}$ and $R_{n-1}$ with $F_{n-1}R_{n-1}\equiv 0 \pmod {\varepsilon^n}$. We want a solution modulo $\varepsilon^{n+1}$, so we put $F_n=F_{n-1}+\varepsilon^nf^{(n)}$, $R_n=R_{n-1}+\varepsilon^nr^{(n)}$. Then $$ \displaylines{\qquad F_nR_n\equiv F_{n-1}R_{n-1}+ \varepsilon^n\big(f^{(n)}R_{n-1}+F_{n-1}r^{(n)}\big) \cr {} \equiv F_{n-1}R_{n-1}+\varepsilon^n\big(f^{(n)}r+fr^{(n)}\big) \equiv 0 \pmod {\varepsilon^{n+1}}\;.\qquad\cr} $$ Modulo $f$ we have to find $f^{(n)}$ from $$ \varepsilon^{-n} F_{n-1}R_{n-1}+f^{(n)}r\equiv 0 \pmod {\varepsilon}\;.
$$ This is possible if the normal form of the column vector $\varepsilon^{-n} F_{n-1}R_{n-1}$ with respect to the module generated by the columns of the matrix $r$ is zero. Here obstructions may come up, which lead to equations in the deformation parameters: the expressions depending on the deformation parameters in the normal form have to vanish. The step is concluded by finding $r^{(n)}$. \endcomment All these computations can be done with a computer algebra system. Indeed, they are implemented \cite{Il,Ma} in \emph{Macaulay2} \cite{GrSt} and \textsc{Singular} \cite{DGPS}. The specific outcome of a computation, which depends on Groebner basis calculations, is governed by the chosen monomial ordering and also by the choice of the generators $(f_1,\dots,f_k)$ of the ideal $I(X)$. The algorithm tries to find the row vector $F$; equations of the base space come from obstructions to doing so. Typically a computer computation will not choose the easiest form of the base equations. When restricting to deformations of negative weight all resulting equations are polynomial and the computation is finite; it might still be undoable in practice, even with a powerful computer. \subsection{Hauser's algorithm}\label{hauser} An alternative method was developed in the complex analytic setting by Hauser \cite{Ha83,Ha85}. One can see the method of Contiero-Stöhr \cite{CoSt} to compute a compactification of the moduli space ${\mathcal{M}_{g,1}^{\NN}}$ as a variant. It has been used in \cite{Co21} to compute the base space for several families of Gorenstein monomial curves. We start again from the generators $f$ of the ideal $I(X)$, but now we perturb $f$ in the most general way, modulo trivial perturbations; that is, we take a semi-universal unfolding of the associated map $f\colon \mathbb{A}^n \to \mathbb{A}^k$. Except when $X$ is zero dimensional, the base space of this unfolding will be infinite dimensional; this problem is handled carefully by Hauser \cite{Ha85}.
In our situation $f$ is weighted homogeneous and we restrict ourselves to an unfolding with terms of lower degree. Therefore we are back in a finite dimensional situation, and we can work over any field $\mathbf{k}$. So we have an unfolding $F\colon \mathbb{A}^n\times \mathbb{A}^s \to \mathbb{A}^k$ of $f$. Now we determine the locus $B$ containing $0\in \mathbb{A}^s$ over which $F$ is flat. The restriction of $F$ to $B$ is then the versal deformation of $X$ of negative weight. In our situation we have a monomial curve $X$ of multiplicity $m$ and embedding dimension $e+1$. We take coordinates $x,y_1,\dots,y_e$. An Apéry basis of the semigroup leads to an additive realisation of $\mathbf{k}[X]$ as $\sum_{i=0}^{m-1} y^{(i)}\mathbf{k}[x]$, where $y^{(0)}=1$ and $y^{(1)},\dots,y^{(m-1)}$ are expressions in the variables $y_1,\dots,y_e$. The equations of $X$ are then (in multi-index notation) of the form $y^\alpha=\varphi$ with $\varphi \in \sum_{i=0}^{m-1} y^{(i)}\mathbf{k}[x]$. The unfolding is also done only with terms from $\sum_{i=0}^{m-1} y^{(i)}\mathbf{k}[x]$. We start from generators $f$ of the said form, compute the relation matrix $r$ and write down the unfolding $F$. We have to lift $fr=0$ to $FR=0$. To this end we compute $Fr$ and reduce this column vector to normal form with respect to the list $F$. It is important that we do not compute a Groebner basis of the ideal generated by $F$, as this would take too long. But reducing with respect to $F$ will result in a vector with entries of bounded degree lying in $\sum_{i=0}^{m-1} y^{(i)}R[x]$ with coefficients from $R=\mathbf{k}[t_1,\dots,t_s]$, where the $t_j$ are coordinates on the base $\mathbb{A}^s$. The vanishing of these coefficients defines the locus where $FR=0$, so where $F$ is flat. This procedure leads to a rather large number of relatively simple equations in a large number of variables, most of which occur linearly and can be eliminated.
It is this process of elimination which can lead to few equations in a limited number of variables, but with many monomials; see the proof of Proposition \ref{propdim} for an example. Also here most computations are easily done with a computer algebra system. The first step, finding the unfolding, can be automatised, but for the not too complicated cases relevant for this paper it seems preferable to do it by hand, choosing names for the deformation variables that reflect their weights. \subsection{The projection method} Computing deformations using projections onto a hypersurface is a method developed in a series of papers by Theo de Jong and Duco van Straten, see \cite{td1,td2}. The application to curves is in \cite{Ste93}, see also \cite[Ch. 11]{Ste03}. Let again $X$ be a monomial curve and $X\to Y$ a projection onto a plane curve, which is a finite generically injective map. Let $\Sigma$ be the subspace of $Y$ defined by the conductor ideal $I={\Hom}_Y({\mathcal O}_{X}, {\mathcal O}_Y)$ in ${\mathcal O}_Y$. This makes it possible to reconstruct $X$, as ${\mathcal O}_{X}={\Hom}_Y(I, {\mathcal O}_Y)$. Because we use this method only once, we refer to \cite[Chapter 11]{Ste03} for a description of how to use deformations of the plane curve $Y$ together with $\Sigma$ to get deformations of the original curve $X$. \comment The fact that ${\mathcal O}_{X}$ has a ring structure, is equivalent to the {\sl Ring Condition\/} $$ {\rm (R.C.):}\qquad {\Hom}_Y({\mathfrak c},{\mathfrak c})\buildrel \approx \over {\lhook\joinrel\longrightarrow}{\Hom}_Y({\mathfrak c}, {\mathcal O}_Y)\;. $$ This condition makes sense over any basis $B$, so we can set up a deformation theory with ${\rm Def}(\Sigma\hookrightarrow Y, {\rm R.C.})$ the functor of deformations for which the ideal of $\Sigma_B$ in $Y_B$ satisfies the condition $({\rm R.C.})$.
In our situation, but also under more general assumptions \cite{td2}, this functor is equivalent to the functor ${\rm Def}(X\to Y)$ of deformations of the map $X\to Y$, and the natural transformation ${\rm Def}(X\to Y)\longrightarrow{\rm Def}(X)$ is smooth. \endcomment The space $\Sigma$ is a fat point, so in particular Cohen-Macaulay of codimension 2. Therefore the ideal $I$ defining $\Sigma$ in $\mathbb{A}^2$ is generated by the maximal minors ${\Delta_1},\dots,{\Delta_k}$ of a $k\times(k-1)$ matrix $M$. We write these generators as a row vector $\Delta$. The curve $Y$ is defined by a function of the form $f=\Delta \alpha$ with $\alpha$ a column vector, or equivalently by the determinant of the matrix $(M,\alpha)$. We write an element $n\colon \Delta_i\mapsto n_i$ of the normal module $N:= {\rm Hom}_{\Sigma}(I/I^2,{\mathcal O}_{\Sigma})$ as a row vector $n$. \comment Let $N:= {\rm Hom}_{\Sigma}(I/I^2,{\mathcal O}_{\Sigma})$ be the normal module of $\Sigma$. We write an element $n\colon \Delta_i\mapsto n_i$ of $N$ as row vector $n$. There is an {\sl evaluation map\/} $\ev_f\colon N\to {\mathcal O}_{\Sigma}$, given by $n\mapsto n(f)$. In terms of $\alpha$ we can write $\ev_f(n)=n\alpha$. This map descends to a map $\ev_f\colon T^1_\Sigma\to {\mathcal O}_{\Sigma}$. Let now $\Sigma_B\to Y_B$ be any deformation over a base $(B,0)$, with $Y_B$ defined by a function $F$. By \cite[(1.12)]{td2} the ideal $I_B$ of $\Sigma_B$ satisfies the condition \/{\rm (R.C.)} if and only if the map $ev_F$ is the zero map. This means that for every normal vector $n_B$ there exists a vector $\gamma_B$ on the ambient space, satisfying \endcomment A deformation $\Sigma_B\to Y_B$ comes from a deformation of the curve $X$ if for every normal vector $n_B$ there exists a vector $\gamma_B$ on the ambient space, satisfying \begin{equation}\label{basic} n_B\alpha_B+\Delta_B\gamma_B=0\;. \end{equation} This is the basic deformation equation, which can be solved step by step.
\comment Infinitesimal deformations are given by perturbing $f$ to $f+\varepsilon g$ with $g\in I$ satisfying $ ev_g=0$, and perturbing $\Delta$ such that the equation \eqref{basic} is satisfied. Again the lift to higher order is done step by step, with possible obstructions. \endcomment When restricting to deformations of negative weight the result of the computation is again given by quasihomogeneous matrices with polynomial entries. Once set up correctly, the computation is easily done with a computer algebra system. An important concept here is that of $I^2$-equivalence \cite[Def.~1.14]{td1}: two functions $f$ and $g$ are $I^2$-equivalent if and only if $f-g\in I^2$. Suppose $f=\Delta\alpha$ and $g=\Delta\beta$ are $I^2$-equivalent. Then $\alpha-\beta=A \Delta^t$ for some matrix $A$. Suppose $n_B\alpha_B+\Delta_B\gamma_B$ is a lift of $n\alpha+\Delta\gamma$ over a base space $B$. Choose any lift $A_B$ of $A$. Then \begin{equation}\label{ikweq} n_B(\alpha_B-A_B\Delta_B^t)+\Delta_B(\gamma_B+A_B^tn_B^t) \end{equation} is a lift of $n\beta+\Delta(\gamma+A^tn^t)$. \comment Suppose $f_B=\Delta_B\alpha_B$ and $g_B=\Delta_B\beta_B$ are $I^2$-equivalent. Then $\alpha_B-\beta_B=A_B\Delta_B^t$ for some matrix $A_B$. Suppose $n_C\alpha_C+\Delta_C\gamma_C$ is a lift of $n_B\alpha_B+\Delta_B\gamma_B$ over a larger base space $C$. Choose any lift $A_C$ of $A_B$. Then $n_C(\alpha_C-A_C\Delta_C^t)+\Delta_C(\gamma_C+A_C^tn_C^t)$ is a lift of $n_B\beta_B+\Delta_B(\gamma_B+A_B^tn_B^t)$. \endcomment In particular, for curves with projections defined by $I^2$-equivalent functions, the base spaces of the versal deformation are the same up to a smooth factor. \section{Semigroups of genus $g\leq7$} In Tables \ref{tab1} and \ref{tab2} we list the semigroups of genus at most 7. For $g\leq 6$ we follow the notation of \cite{Na08}. The tables also contain the dimension $d$ of ${\mathcal{M}_{g,1}^{\NN}}$ and the type $t$ of the semigroup.
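As a worked check of the bounds of Theorem \ref{Stevens} against the tables: for $N(3)_4=\langle 4,5,6,7\rangle$ we have $g=3$ and $t=3$, so
\begin{equation*}
2g-2+t=2\cdot3-2+3=7,
\end{equation*}
which agrees with the entry $d=7$ in Table \ref{tab1}; since the dimension also equals the lower bound $2g-2+t-\dim\mathrm{T}^{1,+}$, this forces $\dim\mathrm{T}^{1,+}=0$, i.e. the semigroup is negatively graded.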
Furthermore, under the heading \lq base\rq\ they give the structure of ${\mathcal{M}_{g,1}^{\NN}}$ in the cases where we have been able to determine it; the entries indicating the different possibilities are discussed below. Inspection of the tables shows that the main parameters governing the structure of ${\mathcal{M}_{g,1}^{\NN}}$ are the number of generators of $\Gamma$ and the type $t$. The first step in our computations is always to find equations for the monomial curve $C_\Gamma$, followed by the free resolution. This gives the type $t$. The next step is to find the graded parts of the vector space $T^1$ of infinitesimal deformations. \begin{prop}\label{dimprop} For all semigroups with $g\leq7$ the dimension of ${\mathcal{M}_{g,1}^{\NN}}$ is given by the lower bound $2g-2+t-\dim\mathrm{T}^{1,+}$ of Theorem {\rm \ref{Stevens}}. \end{prop} \noindent This result follows from the computations discussed in the rest of this section. \begingroup \small \begin{table}[htb] \begin{center} \begin{tabular}{llccc||llccc} \cite{Na08}& semigroup& $d$& $t$ & base&\cite{Na08}& semigroup& $d$& $t$ & base\\ \hline \hline $N(1)_{1} $ & 2,3 & 1 & 1 & sm & $N(6)_{1} $ & 2,13 &11 & 1 & sm\\ $N(2)_{1} $ & 2,5 & 3 & 1 & sm & $N(6)_{2} $ & 3,10,11 & 12 & 2 &sm \\ $N(2)_{2} $ & 3,4,5 & 4 & 2 & sm & $N(6)_{3} $ & 3,8,13 & 11 & 2 & sm \\ $N(3)_{1} $ & 2,7 & 5 & 1 & sm & $N(6)_{4} $ & 3,7 & 10 & 1 & sm \\ $N(3)_{2} $ & 3,5,7 & 6& 2 & sm& $N(6)_{5} $ & 4,9,10,11& 13 & 3& $B_2$\\ $N(3)_{3} $ & 3,4 & 5 & 1 &sm & $N(6)_{6} $ & 4,7,10,13 & 12 & 3 & $B_1$ \\ $N(3)_{4} $ & 4,5,6,7& 7 & 3 & $B_1$& $N(6)_{7} $ & 4,7,9 & 11 &2 & sm \\ $N(4)_{1} $ & 2,9 & 7 & 1 & sm& $N(6)_{8} $ & 4,6,11,13& 11 & 3 &$B_1^*$ \\ $N(4)_{2} $ & 3,7,8 &8 & 2 & sm& $N(6)_{9} $ & 4,6,9 & 10 & 1 &sm \\ $N(4)_{3} $ & 3,5& 7 & 1 &sm & $N(6)_{10} $ & 4,5 & 10 & 1 & sm \\ $N(4)_{4} $ & 4,6,7,9 & 9 &3 & $B_1$ & $N(6)_{11} $ & 5,8,9,11,12 & 14 & 4 & ? \\ $N(4)_{5} $ & 4,5,7 & 8 & 2 &sm & $N(6)_{12} $ & 5,7,9,11,13 & 13 & 4 & !
\\ $N(4)_{6} $ & 4,5,6 & 7 & 1 &sm & $N(6)_{13} $ & 5,7,8,11 & 12 & 3 & $B_1$ \\ $N(4)_{7} $ & 5,6,7,8,9 & 10 & 4 & !& $N(6)_{14} $ & 5,7,8,9 & 11 & 1 & sm \\ \cline{1-5} $N(5)_{1} $ & 2,11 & 9 & 1 & sm& $N(6)_{15} $ & 5,6,9,13 & 12 &3 & $B_1$ \\ $N(5)_{2} $ & 3,8,10 & 10 & 2 & sm & $N(6)_{16} $ & 5,6,8 & 11 & 2 & sm \\ $N(5)_{3} $ & $3,7,11$ & 9 & 1 & sm & $N(6)_{17} $ & 5,6,7 & 10 & 2 & sm \\ $N(5)_{4} $ & 4,7,9,10 & 11 & 3 & $B_1$ & $N(6)_{18} $ & 6,8,9,10,11,13& 15 & 5 &? \\ $N(5)_{5} $ & 4,6,9,11 & 10 & 3 & $B_1$ & $N(6)_{19} $ & 6,7,9,10,11 & 14 & 4 & ? \\ $N(5)_{6} $ & 4,6,7 & 9& 1 & sm & $N(6)_{20} $ & 6,7,8,10,11& 13 & 3 & $G'$ \\ $N(5)_{7} $ & 4,5,11 & 9 & 2 & sm & $N(6)_{21} $ & 6,7,8,9,11& 12 & 2 & $G$ \\ $N(5)_{8} $ & 5,7,8,9,11 & 12 & 4& ?& $N(6)_{22} $ & 6,7,8,9,10& 11 & 1 & $G$\\ $N(5)_{9} $ & 5,6,8,9 &11 & 3 & $B_1$ & $N(6)_{23} $ & $7, \dots,13$ &16 & 6 & ? \\ $N(5)_{10} $ & 5,6,7,9 & 10 & 2& sm \\ $N(5)_{11} $ & 5,6,7,8 & 9 & 1 & sm \\ $N(5)_{12} $ & $6, \dots,\!11$ & 13 &5 & ? \\ \hline \end{tabular} \caption{semigroups of genus $\leq 6$} \label{tab1} \end{center} \end{table} \endgroup \subsection{Smooth base space} The base space of the versal deformation of $C_\Gamma$ is smooth (indicated with \lq sm\rq\ in the tables) if the obstruction space $T^2$ vanishes. This happens if the curve is a complete intersection, or Gorenstein of codimension three, or Cohen-Macaulay of codimension two. In the latter case the equations are the vanishing of the minors of a $2\times 3$ matrix, and $t=2$. Also $T^2=0$ for codimension 3 curves with $t=2$; two of these curves, $N(7)_{31}$ and $N(7)_{32}$, are almost complete intersections.
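For example, in the Cohen-Macaulay codimension two case $N(2)_2=\langle 3,4,5\rangle$, writing $x,y,z$ for coordinates of weights $3,4,5$ (so that $x=t^3$, $y=t^4$, $z=t^5$ on $C_\Gamma$), the monomial curve is cut out by the $2\times2$ minors of
\[
\begin{bmatrix} x & y & z \\ y & z & x^2 \end{bmatrix},
\]
that is, by $xz-y^2$, $x^3-yz$ and $x^2y-z^2$, each of which indeed vanishes identically in $t$.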
\begingroup \small \begin{table}[htb] \begin{center} \begin{tabular}{l>{\footnotesize}lccc||l>{\footnotesize}lccc} name& \small semigroup& $d$& $t$ & base&name& \small semigroup& $d$& $t$ & base\\ \hline \hline $N(7)_{1} $ & 2,15 &13 & 1 & sm & $N(7)_{21} $ & 5,6,9 &12 & 2 & sm\\ $N(7)_{2} $ & 3,11,13 &14 & 2 & sm& $N(7)_{22} $ & 6,9,10,11,13,14&17 & 5 & ?\\ $N(7)_{3} $ & 3,10,14 &13 & 2 & sm& $N(7)_{23} $ & 6,8,10,11,13,15 &16 & 5 & ?\\ $N(7)_{4} $ & 3,8 &12 & 1 & sm& $N(7)_{24} $ & 6,8,9,11,13 &15 & 4 & ?\\ $N(7)_{5} $ & 4,10,11,13 &15 & 3 & $B_2$& $N(7)_{25} $ & 6,8,9,10,13 &14 & 2 & $G'$\\ $N(7)_{6} $ & 4,9,11,14 &14 & 3 & $B_1$& $N(7)_{26} $ & 6,8,9,10,11 &13 & 1 & $G$\\ $N(7)_{7} $ & 4,9,10,15 &13 & 3 & $B_1$& $N(7)_{27} $ & 6,7,10,11,15&15 & 4 & ?\\ $N(7)_{8} $ & 4,7,13 &13 & 2 & sm& $N(7)_{28} $ & 6,7,9,11 &14 & 3 & $B_1$\\ $N(7)_{9} $ & 4,7,10 &12 & 1 & sm& $N(7)_{29} $ & 6,7,9,10 &13 & 3 & $B_1$\\ $N(7)_{10} $ & 4,6,13,15 &12 & 3 & $B_1^*$& $N(7)_{30} $ & 6,7,8,11 &13 & 3 & $B_1$\\ $N(7)_{11} $ & 4,6,11 &10 & 1 & sm& $N(7)_{31} $ & 6,7,8,10 &12 & 2 & sm \\ $N(7)_{12} $ & 5,9,11,12,13 &16 & 4 & ?& $N(7)_{32} $ & 6,7,8,9 &11 & 2 & sm\\ $N(7)_{13} $ & 5,8,11,12,14 &15 & 4 & ?& $N(7)_{33} $ & $7,9,\dots,13,15$&18 & 6 & ?\\ $N(7)_{14} $ & 5,8,9,12 &14 & 3 & $B_1$& $N(7)_{34} $ & 7,8,10,11,12,13 &17 & 5 & ?\\ $N(7)_{15} $ & 5,8,9,11 &13 & 2 & sm& $N(7)_{35} $ & 7,8,9,11,12,13 &16 & 4 & ?\\ $N(7)_{16} $ & 5,7,11,13 &14 & 3 & $B_1$& $N(7)_{36} $ & 7,8,9,10,12,13 &15 & 3 & ?\\ $N(7)_{17} $ & 5,7,9,13 &13 & 2 & sm& $N(7)_{37} $ & 7,8,9,10,11,13 &14 & 2 & ?
\\ $N(7)_{18} $ & 5,7,9,11 &12 & 1 & sm& $N(7)_{38} $ & 7,8,9,10,11,12 &13 & 1 & ?\\ $N(7)_{19} $ & 5,7,8 &12 & 2 & sm& $N(7)_{39} $ & $8,\dots,15$ &19 & 7 & ?\\ $N(7)_{20} $ & 5,6,13,14 &13 & 3 & $B_1$ \\ \hline \end{tabular} \caption{semigroups of genus $7$} \label{tab2} \end{center} \end{table} \endgroup \subsection{Cone over a Segre embedding} For codimension 3 curves $C_\Gamma$ with $t=3$ and $g\leq7$ it is possible to explicitly determine the structure of the base space. Most cases with $g\leq 6$ were computed by Nakano \cite{Na08}. The result for $N(6)_{6}=\langle 4,7,10,13\rangle $ was not given in \cite{Na08}. We computed the deformation using Bernd Martin's \textsc{Singular} \cite{DGPS} package \texttt{deform.lib} \cite{Ma}. The equations for the curve are determinantal: \[ \begin{bmatrix} x & \y(7) & \y(10) & \y(13) \\ \y(7) & \y(10) & \y(13) & x^4 \end{bmatrix} \] The speed of the computation in \textsc{Singular} depends very much on the chosen ordering. A good choice is to use the variables $( \y(13) , \y(10) , \y(7), x)$ in this order with the graded reverse lexicographic order, but with the weights of the variables all equal to 1, not the weights $13,10,7,4$. \textsc{Singular} returns an ideal $Js$ in 16 variables $A,\dots,P$ of weight $2$, $6$, $10$, $1$, $5$, $9$, $13$, $8$, $12$, $16$, $7$, $10$, $4$, $1$, $4$, $7$. It is generated by the minors of \begin{equation} \label{matrixeq} \begin{bmatrix} N & -K+P\\ O & C-L-BM+AM^2\\ P & 2G-2FM+2EM^2-2DM^3-M^3N\\ C-BM+AM^2 &-J+IM-HM^2-M^4+M^3O \end{bmatrix} \end{equation} We are allowed to simplify the equations of the base space by a coordinate transformation. An obvious transformation gives the matrix of the cone over the Segre embedding of $\mathbb{P}^1\times\mathbb{P}^3$.
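As a quick sanity check on this determinantal description, the six $2\times2$ minors vanish identically on the monomial parametrisation $(t^4,t^7,t^{10},t^{13})$; a small sympy sketch (variable names illustrative):

```python
import sympy as sp
from itertools import combinations

t = sp.symbols('t')
x, y7, y10, y13 = sp.symbols('x y7 y10 y13')

# The 2x4 matrix whose 2x2 minors generate the ideal of C_Gamma for <4,7,10,13>
M = sp.Matrix([[x,  y7,  y10, y13],
               [y7, y10, y13, x**4]])
minors = [M.extract([0, 1], [i, j]).det() for i, j in combinations(range(4), 2)]

# Every minor vanishes on the parametrisation (t^4, t^7, t^10, t^13)
subs = {x: t**4, y7: t**7, y10: t**10, y13: t**13}
print([sp.expand(m.subs(subs)) for m in minors])  # [0, 0, 0, 0, 0, 0]
```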
Observe that the coordinate ring of the Segre cone has a resolution of the form $$ 0\longleftarrow S/I \longleftarrow S \mapleft f S^6 \mapleft r S^8 \mapleft s S^3 \longleftarrow 0 $$ where $f$ is the row vector of minors of the matrix \[ \begin{bmatrix} X_1 & X_2 & X_3 & X_4 \\ Y_1 & Y_2 & Y_3 & Y_4 \end{bmatrix} \] and the $8\times3$ matrix $s$ is the transpose of a matrix of the form \[ \begin{bmatrix} X_1 & X_2 & X_3 & X_4 & 0 &0 & 0 & 0\\ Y_1 & Y_2 & Y_3 & Y_4 & X_1 & X_2 & X_3 & X_4\\ 0 &0 & 0 & 0& Y_1 & Y_2 & Y_3 &Y_4 \end{bmatrix} \] Computing the resolution of the ideal $Js$ with \textsc{Singular} indeed gives an $8\times3$ matrix with some zeroes, but of course not exactly in the form above. This form can be achieved by column and row operations; in this way the matrix \eqref{matrixeq} was found. The Segre cone occurs as base space for many curves. A necessary condition is that $\dim T^2=6$. For some curves $\dim T^2 = 12$. This happens for the semigroups $\langle n_1,n_2,n_3,n_4\rangle$ in the tables with $n_2>2 n_1$. Then the base space has a more complicated structure. \begin{prop} For monomial curve singularities of genus at most $7$ and codimension $3$, with type $t=3$, such that the first blow-up has lower embedding dimension, the base space of negative weight is up to a smooth factor the cone over the Segre embedding of\/ $\mathbb{P}^1\times\mathbb{P}^3$, except in the cases $N(6)_{8} $ and $N(7)_{10} $, where it has two components, being the intersection of the Segre cone with coordinate hyperplanes {\rm(}$B_1^*$ in Tables {\rm \ref{tab1}} and {\rm\ref{tab2}}{\rm)}. \end{prop} \begin{proof} By the assumption in the statement $\dim T^2=6$. Most cases with $g\leq 6$ were computed by Nakano \cite{Na08}. It can be checked that the systems of generators given in \cite{Na08} are minors of $2\times4$ matrices. To identify the base space as a Segre cone it in fact suffices to show that the quadratic part of the equations defines a Segre cone.
If it does, the Segre cone has to be the tangent cone of the base space, for otherwise the dimension of the tangent cone would be less than the dimension of the Segre cone, but this dimension is in all cases equal to the lower bound of Theorem \ref{Stevens}. Because the Segre cone is rigid and the base space itself is a deformation of its tangent cone, they are isomorphic. For the semigroups with $4$ generators and $t=3$ of genus $g=7$ (see Table \ref{tab2}) the versal deformation up to degree 2 is easily computed with \textsc{Singular}, and the cases where the base space is $B_1$ are identified. For $N(6)_8=\langle 4,6,11,13\rangle$ the versal deformation in all degrees can be computed. The generators of the ideal of the curve are the $2\times2$ minors of the symmetric matrix \[ \begin{bmatrix} x & \y(6) & \y(11) \\ \y(6) & x^2 & \y(13) \\ \y(11) & \y(13) & x^3\y(6) \end{bmatrix} \] With these generators and the graded reverse lexicographic order with variables $(\y(13),\y(11),\y(6),x)$ of weights $(13,11,6,4)$ \textsc{Singular} succeeds rather quickly in computing the versal deformation in all degrees. Replacing the generator $\y(11)\y(13)-x^3\y(6)^2$ by $\y(11)\y(13)-x^6$ results in a computation which does not finish in reasonable time. After a coordinate transformation the base space is given by the minors of \[ \begin{bmatrix} T_{-1} & T_1 & T_6 & T_8 \\ T_{9} & T_{11} & T_{16} & T_{18} \end{bmatrix} \] where the indices indicate the weight of the deformation variables. The base space is again a Segre cone, but the base space in negative weight lies in the hyperplane $T_{-1}=0$ and consists of two components, one smooth, given by $T_1 = T_6 = T_8=0$, and the other given by $T_9=0$ and the vanishing of the three minors of the matrix not involving $T_9$. Note that the last generator of the ideal of the second component given in \cite[p.159]{Na08} can be expressed in terms of the previous ones.
For $N(7)_{10}=\langle 4,6,13,15\rangle$ the quadratic part of the equations for the base space is given by the minors of \[ \begin{bmatrix} T_{-3} & T_{-1} & T_6 & T_8 \\ T_{11} & T_{13} & T_{20} & T_{22} \end{bmatrix} \] and in negative weight there are two components of different dimension, given by $T_{11} =T_{13} =T_6T_{22}-T_8T_{20}=0$ and $T_6=T_8=0$. Over the larger component the equations of the total space can be written in rolling factors format (see e.g. \cite[p. 95]{Ste03}): three equations are the minors of the matrix \[ \begin{bmatrix} x & \y(6) + T_2 x & \y(13) \\ \y(6) & x^2 +T_4 x& \y(15) \end{bmatrix} \] while the remaining equations are obtained by replacing in each monomial a factor occurring in the top row of the matrix by one of the bottom row. From the equations in the matrix one finds that $x(x\y(13) +T_4 \y(13))=\y(6)(\y(15)+T_2\y(13))$, so $\y(15)+T_2\y(13)$ rolls to $x\y(13) +T_4 \y(13)$. This gives: \begin{alignat*}{2} x^2\y(6)^3&-\y(13)^2&&+P_{22}x+P_{20}\y(6)+T_{13}\y(13)+T_{11}\y(15)\\ x\y(6)^4&-\y(13)\y(15)&&+P_{22}\y(6) +\dots\\ \y(6)^5&-\y(15)^2&&+P_{22}(x^2 +T_4 x-T_2\y(6))+\dots \end{alignat*} Here $P_{22}$ and $P_{20}$ are polynomials containing deformation variables of degree $4,6,\dots, 22$. It follows in particular that the origin is a singular point of all fibres, in general an ordinary double point. Only the smaller component is a smoothing component. \end{proof} \begin{remark} For $N(7)_{10}$ the gap sequence is $1,2,3,5,7,9,11$ and Haure \cite{Hau96} gives a plane model of degree 13 with 11 moduli, whereas $\dim {\mathcal{M}_{g,1}^{\NN}} = 12$. This is the only case where Haure's result differs from our result. \end{remark} \subsection{Curves with first blow-up of multiplicity four} For the cases $N(6)_{5} =\langle 4, 9, 10, 11 \rangle$ and $N(7)_{5} =\langle 4, 10, 11,13 \rangle$ the first blow-up is $N(3)_{4} =\langle 4, 5, 6, 7 \rangle$ and $N(4)_{4} =\langle 4, 6, 7,9 \rangle$, respectively.
For the first curve we compute the base space with Hauser's algorithm; we do it in fact for all semigroups $\langle 4,1+4\tau ,2+4\tau ,3+4\tau \rangle$. A similar, but more complicated computation is in \cite{Co21}. The equations of the curve are given by the minors of the matrix \[ \begin{bmatrix} x^\tau & \y(1)& \y(2)& \y(3)\\ \y(1)& \y(2)& \y(3)& x^{\tau +1} \end{bmatrix} \] We write the unfolding with variables which are polynomials in $x$, where $\f(i)(j)$ with $i\neq j$ has degree $4\tau +j$. We use coordinate transformations to remove as many terms as possible. The result is \begin{align*} \y(1)^2-\y(2)x^\tau &+\f(2)(1)\y(1)+\f(2)(2)+\f(2)(-1)\y(3)+\f(2)(0)\y(2)\\ \y(1)\y(2)-\y(3)x^\tau &+\f(3)(1)\y(2)+\f(3)(2)\y(1)+\f(3)(3)+\f(3)(0)\y(3)\\ \y(1)\y(3)-x^{2\tau +1} &+\f(4)(2)\y(2)+\f(4)(3)\y(1)+\f(4)(4)\\ \y(2)^2-x^{2\tau +1} & +\g(4)(1)\y(3)+\g(4)(2)\y(2)+\g(4)(3)\y(1)+\g(4)(4)\\ \y(2)\y(3)-\y(1)x^{\tau +1} &+\f(5)(5) +\f(5)(4)\y(1)\\ \y(3)^2-\y(2)x^{\tau +1} & +\f(6)(5)\y(1)+\f(6)(6)+\f(6)(3)\y(3)+\f(6)(4)\y(2) \end{align*} We have four transformations left, which we cannot show in the above notation. They act on the unfolding as $\f(3)(1)-x^\tau a_{3,1}$, $\f(2)(1)-x^\tau a_{2,1}$, $\f(4)(2)+x^\tau a_{3,2}$ and $\f(2)(0)+\tau a_0x^{\tau -1}$; we use them to remove the lowest weight variables from $\f(2)(0)$, $\f(2)(1)$, $\f(3)(1)$ and $\f(4)(2)$. We proceed as explained in section \ref{hauser}: we compute the relation matrix for the unperturbed generators of the ideal, multiply with the perturbed generators and reduce the result with them. The result does not contain quadratic monomials in the $\y(i)$ and for flatness it has to vanish identically, giving conditions on the coefficients. We write these as equations for the polynomials $\f(i)(j)$, $\g(i)(j)$. The polynomials $\f(i)(i)$ and $\g(4)(4)$ can be eliminated. We obtain fifteen equations. The first one is $(x^\tau -\f(3)(0))(\f(2)(1)-\f(3)(1))+ (x^\tau -\f(2)(0))\g(4)(1)+\f(2)(-1)\f(3)(2)=0$. 
We will use this equation to eliminate $\g(4)(1)$. To this end we rewrite it, and do the same with five other equations containing $x^\tau -\f(2)(0)$. We obtain \begin{multline*} (x^\tau -\f(2)(0))(\g(4)(1)+\f(2)(1)-\f(3)(1))= { -(\f(2)(0)-\f(3)(0))(\f(2)(1)-\f(3)(1))-\f(2)(-1)\f(3)(2)}\\ \shoveleft{(x^\tau -\f(2)(0))(\f(3)(2)-\g(4)(2))= -(\f(2)(1)-\f(3)(1))\f(3)(1)-\f(2)(-1)(\f(4)(3)-\f(6)(3))}\\ \shoveleft{(x^\tau -\f(2)(0))(\g(4)(3)-\f(4)(3)+\f(6)(3)) = (\f(2)(0)-\f(3)(0))(\f(4)(3)-\f(6)(3))-\f(3)(1)\f(3)(2)}\\ \shoveleft{(x^\tau -\f(2)(0))(\f(4)(3)-x\f(2)(-1)) = (x\f(2)(0)-\f(6)(4))\f(2)(-1)-(\f(2)(1)-\f(3)(1))\f(4)(2)}\\ \shoveleft{(x^\tau -\f(2)(0))(\f(6)(4)-\f(5)(4)-x\f(2)(0)+x\f(3)(0)) }\\ \shoveright{ =\f(3)(2)\f(4)(2)+(\f(2)(0)-\f(3)(0))(x\f(2)(0)-\f(6)(4))}\\ \shoveleft{(x^\tau -\f(2)(0))(\f(6)(5)+x\f(3)(1))= - (x\f(2)(0)-\f(6)(4))\f(3)(1)-\f(4)(2)(\f(4)(3)-\f(6)(3))} \end{multline*} It can be checked that the remaining equations are consequences of these. All the above equations are of the form \[ L\cdot (x^\tau - \f(2)(0))=R \] with $L$ and $R$ polynomials in $x$ satisfying $\deg_x(R)\leq \deg_x(L)+\tau$. Division with remainder gives $R=Q (x^\tau - \f(2)(0))+\overline R$, and therefore we can solve $L=Q$ and find the coefficients of $\overline R$ as equations for the base space. In other words, the condition leading to the equations of the base space is that the right-hand side of the above equations is divisible by $x^\tau -\f(2)(0)$. A similar structure first appeared for the base spaces of rational surface singularities of multiplicity four \cite{td3}. The right-hand sides of the equations are the minors of the matrix \[ \begin{bmatrix} -\f(2)(-1) & (\f(2)(0)-\f(3)(0)) &\f(3)(1) & \f(4)(2)\\ (\f(2)(1)-\f(3)(1)) & \f(3)(2) & (\f(4)(3)-\f(6)(3)) & -(x\f(2)(0)-\f(6)(4)) \end{bmatrix} \] It seems that the eliminated variable $\f(4)(3)$ occurs in the matrix, but we can take $\f(4)(3)-\f(6)(3)$ as an independent variable.
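The division-with-remainder step admits a small computational illustration with sympy's polynomial division; the sample polynomial $R$ below is invented for illustration and is not one of the actual equations:

```python
import sympy as sp

x, f = sp.symbols('x f')  # f stands in for the polynomial f^(2)_(0)
tau = 2

# An illustrative right-hand side R with deg_x(R) <= deg_x(L) + tau
R = sp.Poly(x**3 + 3*x**2 + f*x + 1, x)
Q, Rbar = sp.div(R, sp.Poly(x**tau - f, x))

# Solving L = Q, the coefficients of the remainder Rbar become
# equations for the base space.
print(Q.as_expr())     # the quotient, here x + 3
print(Rbar.as_expr())  # the remainder, here 2*f*x + 3*f + 1
```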
We make the divisibility conditions explicit for $\tau =1$ ($N(3)_4$) and $\tau =2$ ($N(6)_5$). The $\f(i)(j)$ are polynomials in $x$, of the form $\f(i)(j)= \ff(i)(j+4\tau )+\ff(i)(j+4\tau -4)x + \dots + \ff(i)(r)x^{k}$ if $j+4\tau =4k+r$ with $1\leq r\leq 4$. Recall that we removed the variables of lowest weight in $\f(2)(0)$, $\f(2)(1)$, $\f(3)(1)$ and $\f(4)(2)$. For $\tau =1$ the matrix becomes \[ \begin{bmatrix} -\ff(2)(3) & -\ff(3)(4) & \ff(3)(5) & \ff(4)(6) \\ \ff(2)(5)-\ff(3)(5) & \ff(3)(6)+\ff(3)(2)x & \ff(4)(7)-\ff(6)(7)+ (\ff(4)(3)-\ff(6)(3))x & \ff(6)(8)+\ff(6)(4)x \end{bmatrix} \] and the condition that the minors are divisible by $x$ is obviously that they vanish when $x=0$ is substituted. Therefore the base space is given by the vanishing of the minors of \[ \begin{bmatrix} -\ff(2)(3) & -\ff(3)(4) & \ff(3)(5) & \ff(4)(6) \\ \ff(2)(5)-\ff(3)(5) & \ff(3)(6) & -\ff(6)(4)\ff(2)(3)-\ff(6)(7) & \ff(6)(8) \end{bmatrix} \] where we substituted the value for $\ff(4)(7)$. Note that the variables $\ff(6)(4)$, $\ff(6)(3)$ and $\ff(3)(2)$ do not occur in the equations. We recover the result that the base space for $N(3)_{4} $ is the Segre cone. For $\tau =2$ the last entry of the matrix becomes $\ff(6)(12)+\ff(6)(8)x+\ff(6)(4)x^2-\ff(2)(8)x$. We apply division with remainder by $x^2-\ff(2)(8)$, leading to $\ff(6)(12)+\ff(6)(4)\ff(2)(8)+(\ff(6)(8)-\ff(2)(8))x$.
Doing the same for other entries we obtain the transpose of the matrix \[ \begin{bmatrix} -\ff(2)(7)-\ff(2)(3)x & \ff(2)(9)-\ff(3)(9)+(\ff(2)(5)-\ff(3)(5))x \\ \ff(2)(8)-\ff(3)(8) -\ff(3)(4) x & \ff(3)(10)+\ff(3)(2)\ff(2)(8)+\ff(3)(6)x\\ \ff(3)(9)+\ff(3)(5)x & \ff(4)(11)-\ff(6)(11)+ (\ff(4)(3)-\ff(6)(3))\ff(2)(8)+(\ff(4)(7)-\ff(6)(7))x\\ \ff(4)(10)+\ff(4)(6)x & \ff(6)(12)+\ff(6)(4)\ff(2)(8)+(\ff(6)(8)-\ff(2)(8))x \end{bmatrix} \] Making a coordinate transformation and renaming the variables gives a matrix of the form \[ \begin{bmatrix} \a(7)+\a(3)x & \a(8)+\a(4) x & \a(9)+\a(5)x & \a(10)+\a(6)x \\ \b(9)+\b(5)x & \b(10)+\b(6)x & \b(11)+ \b(7)x & \b(12)+\b(8)x \end{bmatrix} \] Division with remainder and taking the $x$-coefficient leads to two equations from each minor: \begin{gather*} \a(i+4)\b(j+2)+\a(i)\b(j+6)- \a(j+4)\b(i+2)-\a(j)\b(i+6)\\ \a(i+4)\b(j+6)+\a(i)\b(j+2)\ff(2)(8)- \a(j+4)\b(i+6)-\a(j)\b(i+2)\ff(2)(8) \end{gather*} For $N(7)_5$ a computation of the versal deformation up to order 3 allows one to recognise the base as $B_2$ in this case as well. \subsection{The cone over a Grassmannian} The semigroup $N(6)_{22}=\langle 6, 7, 8, 9, 10\rangle$ is the first of the second family of curves studied in \cite{Co21}. The computation can also easily be done with \textsc{Singular}. The result is that the base space is the cone over the Grassmannian $G(2,5)$. In Tables \ref{tab1} and \ref{tab2} this base space is denoted by $G$. This shows that $\mathcal{M}_{6,1}^{\scriptscriptstyle N(6)_{22}}$ is rational. Equations for the base space can be recognised because they are the Pfaffians of a skew-symmetric $5\times5$ matrix, which is the relation matrix between the equations. Again, a computer computation will in general not lead to a skew matrix, but one can obtain that form by row and column operations. The curve $N(7)_{26}=\langle 6, 8, 9, 10,11\rangle$ is also Gorenstein and has as base space a cone over $G(2,5)$.
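The Pfaffian structure is easy to reproduce in generic coordinates: the five $4\times4$ principal sub-Pfaffians of a skew-symmetric $5\times5$ matrix are quadrics, and they cut out the cone over $G(2,5)$ in Pl\"ucker coordinates. A sympy sketch (the coordinates $p_{ij}$ are illustrative, not the actual deformation variables):

```python
import sympy as sp

n = 5
p = {(i, j): sp.Symbol(f'p{i}{j}') for i in range(n) for j in range(i + 1, n)}
# Generic skew-symmetric 5x5 matrix, as for the relation matrix of the equations
M = sp.Matrix(n, n, lambda i, j: p[(i, j)] if i < j
              else (-p[(j, i)] if i > j else 0))

def pf4(A):
    """Pfaffian of a 4x4 skew-symmetric matrix."""
    return A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

# Deleting row and column k gives one quadric for each k: five Pfaffians in all
quadrics = []
for k in range(n):
    idx = [i for i in range(n) if i != k]
    sub = M.extract(idx, idx)
    assert sp.expand(pf4(sub)**2 - sub.det()) == 0  # Pf^2 = det, a consistency check
    quadrics.append(pf4(sub))
print(len(quadrics))  # 5
```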
While $N(6)_{21}=\langle 6, 7, 8, 9, 11\rangle$, which deforms into $N(6)_{22}$, is not Gorenstein, but has type $t=2$, the dimension of $T^2$ is also five, and a computation with \textsc{Singular} shows that the base space has the same structure: it is a cone over the Grassmannian. \subsection{A codimension four base space} For $N(6)_{20}=\langle 6, 7, 8, 10, 11\rangle$ ($t=3$) and $N(7)_{25}=\langle 6, 8,9,10,13\rangle$ ($t=2$) one has $\dim T^2=9$. We compute the base spaces with Hauser's algorithm. It turns out that they have the same structure, called $G'$ in the tables. We give here the details for the first curve. An additive basis over $\mathbf{k}[x]$ of the coordinate ring is $(1,\y(7),\y(8),\y(10),\y(11), \y(7)\y(8))$. We take the following unfolding of the generators of the ideal: {\small \begin{multline*} \y(7)^2-\y(8)x+(\ff(14)(1)x+\ff(14)(7))\y(7)+\ff(14)(14)+\ff(14)(3)\y(11)+\ff(14)(4)\y(10)+\ff(14)(6)\y(8)\\ \shoveleft{ \y(8)^2-\y(10)x+(\ff(16)(2)x+\ff(16)(8))\y(8)+(\ff(16)(3)x+\ff(16)(9))\y(7)} \\ \shoveright{ + \ff(16)(16)+\ff(16)(5)\y(11)+\ff(16)(6)\y(10)}\\ \shoveleft{ \y(7)\y(10)-\y(11)x+(\ff(17)(1)x+\ff(17)(7))\y(10)+(\ff(17)(3)x+\ff(17)(9))\y(8)}\\ \shoveright{ +(\ff(17)(4)x+\ff(17)(10))\y(7)+\ff(17)(17)+\ff(17)(6)\y(11)}\\ \shoveleft{ \y(11)\y(7)-x^3+(\ff(18)(2)x+\ff(18)(8))\y(10)+(\ff(18)(4)x+\ff(18)(10))\y(8)+\ff(18)(18)}\\ \shoveleft{ \y(8)\y(10)-x^3+(\gg(18)(1)x+\gg(18)(7))\y(11)+(\gg(18)(5)x+\gg(18)(11))\y(7)+\gg(18)(18)}\\ \shoveleft{ \y(11)\y(8)-\y(7)x^2+\ff(19)(19)+(\ff(19)(2)x+\ff(19)(8))\y(11)+(\ff(19)(3)x+\ff(19)(9))\y(10)}\\ \shoveright{ + (\ff(19)(5)x+\ff(19)(11))\y(8)+(\ff(19)(6)x+\ff(19)(12))\y(7)}\\ \shoveleft{ \y(10)^2-\y(8)x^2+(\ff(20)(1)x^2+\ff(20)(7)x+\ff(20)(13))\y(7)+\ff(20)(20)+(\ff(20)(3)x+\ff(20)(9))\y(11)} \\ \shoveright{+ (\ff(20)(4)x+\ff(20)(10))\y(10)+\ff(20)(5)\y(7)\y(8)+(\ff(20)(6)x+\ff(20)(12))\y(8)}\\ \shoveleft{ \y(11)\y(10)-\y(8)\y(7)x+(\ff(21)(1)x^2+\ff(21)(7)x+\ff(21)(13))\y(8) 
+(\ff(21)(5)x+\ff(21)(11))\y(10)}\\ \shoveright{+(\ff(21)(2)x^2+\ff(21)(8)x+ \ff(21)(14))\y(7)+ (\ff(21)(4)x+\ff(21)(10))\y(11)+\ff(21)(21)}\\ \shoveleft{ \y(11)^2-\y(10)x^2+\ff(22)(7)\y(7)\y(8)+(\ff(22)(6)x+\ff(22)(12))\y(10)+(\ff(22)(5)x+\ff(22)(11))\y(11)}\\ +(\ff(22)(3)x^2+\ff(22)(9)x+\ff(22)(15))\y(7)+(\ff(22)(2)x^2+\ff(22)(8)x+\ff(22)(14))\y(8) +\ff(22)(22) \end{multline*}} \noindent This shows the variables involved, except that the $\ff(i)(i)$ and $\gg(18)(18)$ are polynomials in $x$. In practice it is easier to first work with the coefficients of the $\y(i)$ as polynomials. On this level the variables $\ff(i)(i)$ and $\gg(18)(18)$ can be eliminated. After that step the coefficients of $x$ can be taken. Most variables can be eliminated. What is left are nine rather long polynomials with 134 monomials in total, but on closer inspection a coordinate transformation can be found, leading to the following generators of the ideal of the base space: \begin{gather*} \ff(18)(8)\ff(20)(5)-\ff(17)(6)\ff(22)(7)\\ \ff(14)(4)\ff(20)(9)-\ff(19)(3)\ff(21)(10)+\gg(18)(7)\ff(22)(6)\\ -\ff(16)(6)\ff(19)(3)\ff(20)(5)+\ff(14)(4)\ff(20)(5)\ff(21)(5)-\gg(18)(7)\ff(22)(7)\\ \ff(18)(8)\gg(18)(7)+\ff(16)(6)\ff(17)(6)\ff(19)(3)-\ff(14)(4)\ff(17)(6)\ff(21)(5)\\ -\ff(16)(8)\gg(18)(7)-\ff(16)(6)\ff(20)(9)+\ff(21)(10)\ff(21)(5)\\ -\ff(16)(8)\ff(19)(3)\ff(20)(5)+\ff(20)(9)\ff(22)(7)+\ff(20)(5)\ff(21)(5)\ff(22)(6)\\ \ff(14)(4)\ff(16)(8)\ff(20)(5)-\ff(21)(10)\ff(22)(7)-\ff(16)(6)\ff(20)(5)\ff(22)(6)\\ -\ff(16)(8)\ff(17)(6)\ff(19)(3)+\ff(18)(8)\ff(20)(9)+\ff(17)(6)\ff(21)(5)\ff(22)(6)\\ \ff(14)(4)\ff(16)(8)\ff(17)(6)-\ff(18)(8)\ff(21)(10)-\ff(16)(6)\ff(17)(6)\ff(22)(6) \end{gather*} The singular locus consists of two components, the $(\ff(20)(5),\ff(17)(6))$-plane and the Segre cone given by the minors of \[ \begin{bmatrix} \ff(21)(5)& \ff(16)(6)& \ff(16)(8) \\ \ff(19)(3) &\ff(14)(4)& \ff(22)(6) \end{bmatrix} \] with the other variables being zero.
If $\ff(20)(5)=1$ then $\ff(18)(8)=\ff(17)(6)\ff(22)(7)$ and the generators reduce to the Pfaffians of the matrix \[ \begin{bmatrix} 0& \ff(19)(3)& \ff(14)(4)& \ff(22)(6)& \ff(22)(7) \\ -\ff(19)(3)&0& -\gg(18)(7)& \ff(20)(9)& -\ff(21)(5)\\ -\ff(14)(4)&\gg(18)(7)& 0& \ff(21)(10)&-\ff(16)(6)\\ -\ff(22)(6)&-\ff(20)(9)&-\ff(21)(10)&0& -\ff(16)(8)\\ -\ff(22)(7)&\ff(21)(5)& \ff(16)(6)& \ff(16)(8)& 0 \end{bmatrix} \] in accordance with the fact that the curve deforms into $N(6)_{21}$ and $N(6)_{22}$. On the other component of the singular locus we take $\ff(19)(3)=1$ and find $\ff(16)(6)=\ff(14)(4)\ff(21)(5)$, $\ff(16)(8)=\ff(21)(5)\ff(22)(6)$, while the generators reduce to the minors of \[ \begin{bmatrix} \ff(22)(7) & \ff(18)(8) &\ff(16)(6)-\ff(14)(4)\ff(21)(5) &\ff(16)(8)-\ff(21)(5)\ff(22)(6)\\ \ff(20)(5) & \ff(17)(6) & \gg(18)(7) & \ff(20)(9) \end{bmatrix} \] \subsection{Codimension 4 and type 4} For most of the semigroups with 5 generators and type 4 in the list the associated monomial curve has $\dim T^2=20$. Only for $N(6)_{19} =\langle 6, 7, 9, 10, 11 \rangle$ and $N(7)_{24} =\langle 6, 8, 9, 11, 13 \rangle$ does one have $\dim T^2=21$, while $\dim T^2=26$ for $N(7)_{12} =\langle5,9,11,12,13 \rangle$. The first two curves deform into $N(6)_{20}$ and $N(7)_{25}$, respectively, which are curves with base space $G'$. We have not been able to determine the exact structure of the base space; in the tables this is marked by a question mark (?). Only for two cases ($N(4)_{7}$ and $N(6)_{12}$, marked !) do we give explicit equations here. For $N(4)_7$ Nakano computed the base by computing in characteristic $7$ \cite{Na16}. The versal deformation of the monomial curve with semigroup $N(4)_{7} =\langle 5, 6, 7, 8, 9 \rangle$ was computed with the projection method in \cite{Ste93}. This computation also takes care of $N(6)_{12}$ by the following result. \begin{lemma} The curves $N(4)_{7}$ and $N(6)_{12}$ have $I^2$-equivalent plane projections.
\end{lemma} \begin{proof} The projection onto the plane of the first two coordinates has equation $\y(6)^5-x^6=0$ for $N(4)_{7}$ and $\y(7)^5-x^7=0$ for $N(6)_{12}$. The conductor ideal $I$ has in both cases length 6, being the difference of the $\delta$-invariant of the plane curve and $\delta=g$ of the monomial curve. An easy computation gives that $I=\mathfrak{m}^3$, and therefore $x^7-x^6 \in I^2$. \end{proof} We slightly modify the computation given in \cite{Ste93} by disregarding all terms in $I^2$. We start by describing the deformation of the matrix defining $\Sigma$: \[ \begin{bmatrix} y&e_{01}&e_{02}\\ -(x+e_{10})&y+e_{11}&e_{12}\\ -e_{20}&-(x+e_{21})&y+e_{22}\\ -e_{30}&-e_{31}&-x \end{bmatrix} \] We only consider deformations of negative weight, so we deform the equation of the plane curve in the following way: {\small \[ y^5+c_0x^5+c_1x^4y+c_2x^3y^2+c_{3}x^2y^{3}+d_0x^{4}+d_1x^{3}y+d_2x^{2}y^2+ d_3xy^3+d_{4}y^{4} \] } \noindent With the help of \textsc{Singular} \cite{DGPS} the deformation equation \eqref{basic} was solved for all generators of $N$. The equation holds modulo the ideal $J$ of the base space, described below. We give here the vector $\alpha$ as a direct result of the computation.
{\small \begin{itemize}[rightmargin=0.7\parindent] \item[$\alpha_1=$] $x^2c_{0}+{\textstyle \frac12} xc_{2}e_{01}+xc_{3}e_{02}+xd_{0}-yc_{3}e_{01}-yc_{1}e_{10}+yc_{2}e_{11}+yc_{3}e_{12}\linebreak -c_{3}e_{02}e_{10} +c_{2}e_{02}e_{20}+c_{3}e_{02}e_{21}-c_{0}e_{01}e_{30}-c_{1}e_{02}e_{30} +c_{0}e_{12}e_{30}\linebreak -e_{10}e_{12}e_{30}-c_{1}e_{01}e_{31}-c_{2}e_{02}e_{31}+c_{0}e_{22}e_{31}+{\textstyle \frac12} d_{2}e_{01}+d_{3}e_{02}$ \item[$\alpha_2=$] $x^2c_{1}+{\textstyle \frac12} xyc_{2}+xc_{3}e_{01}+xc_{1}e_{10}-{\textstyle \frac12} xc_{2}e_{11}+xd_{1}+{\textstyle \frac12} yd_{2} +d_{3}e_{01}\linebreak-e_{01}^2+d_{4}e_{02}+d_{1}e_{10}-{\textstyle \frac12} d_{2}e_{11}$ \item[$\alpha_3=$] ${\textstyle \frac12} x^2c_{2}+xyc_{3}+xc_{1}e_{20}+{\textstyle \frac12} xc_{2}e_{21}+{\textstyle \frac12} xd_{2}-yc_{3}e_{10} +yc_{2}e_{20}+yc_{3}e_{21}\linebreak+yd_{3}-c_{3}e_{02}e_{30}+d_{4}e_{01} +e_{02}e_{10}-e_{01}e_{11} +d_{1}e_{20}+{\textstyle \frac12} d_{2}e_{21}+e_{01}e_{22}$ \item[$\alpha_4=$] $y^2+xc_{1}e_{30}+{\textstyle \frac12} xc_{2}e_{31}+yc_{2}e_{30}+yc_{3}e_{31}+yd_{4} +e_{01}e_{10} +e_{02}e_{20}\linebreak+d_{1}e_{30}+{\textstyle \frac12} d_{2}e_{31}-e_{02}e_{31}$ \end{itemize} } \noindent To describe the base space it is useful to apply a coordinate transformation, given by: {\small \begin{align*} d_{0}&\mapsto d_{0}+c_{0}e_{21} \\ d_{1}&\mapsto d_{1}+c_{1}e_{21}+c_{0}e_{31} \\ d_{2} &\mapsto d_{2}+c_{1}e_{31} \\ d_{3}&\mapsto d_{3}+e_{01}+c_{3}e_{10}\\ d_{4} &\mapsto d_{4}+e_{11} \end{align*} } \noindent In the new coordinates the ideal of the base is given by the following 20 generators, which we write as sums of corresponding minors: {\tiny \begin{multline*} \begin{bmatrix} d_{0}+c_{3}e_{02}-c_{0}e_{10}& d_{1}-c_{1}e_{10}+c_{3}e_{12}-c_{0}e_{20} & d_{2}-c_{2}e_{10}-c_{0}e_{30} & d_{3} & d_{4}\\ e_{01} & e_{11} & e_{10}-e_{21} & e_{20}-e_{31} & e_{30} \end{bmatrix}\\ {}+\begin{bmatrix} -c_{0}e_{31}& e_{02}-c_{1}e_{31}&-e_{01}+e_{12}-c_{2}e_{31} &
-e_{11}+e_{22}-c_{3}e_{31} &e_{21} \\ e_{02}-c_{0}e_{30}&e_{12}-c_{1}e_{30} & e_{22}-c_{2}e_{30} &e_{10}-c_{3}e_{30} & e_{20} \end{bmatrix} \end{multline*} \setlength{\multlinegap}{3pt} \begin{multline*} \begin{bmatrix} d_{0} & d_{1} & d_{2}-e_{02}+c_{1}e_{20}-c_{3}e_{22} &d_{3}-e_{12}+c_{2}e_{20}+c_{1}e_{30} &d_{4}-e_{22}+c_{2}e_{30} \\ e_{02}& -e_{01}+e_{12} &-e_{11}+e_{22} & e_{21} &e_{31} \end{bmatrix} \\%\qquad\\ {} + {1\over c_0} \begin{bmatrix} c_{0}e_{12}-c_{1}e_{02} &c_{0}e_{22}-c_{2}e_{02} &c_{0}e_{10}-c_{3}e_{02} &c_{0}e_{20}& c_{0}e_{30}-e_{02} \\ c_{1}e_{01}-c_{0}e_{11} &c_{2}e_{01}-c_{0}e_{10}+c_{0}e_{21} &c_{3}e_{01}-c_{0}e_{20}+c_{0}e_{31}&-c_{0}e_{30}&e_{01} \end{bmatrix} \end{multline*}} \noindent The $ \frac1{c_0}$ in front of the last matrix means that each minor has to be divided by $c_0$. The structure of this base space is discussed in \cite{Ste93}. According to the formula \eqref{ikweq} we obtain the deformation for $N(4)_7$ by adding $\Delta_1$ to $\alpha_1$. For $N(6)_{12}$ the terms in $I^2$ are $x^7+b_0x^6+b_1x^5y+b_2x^4y^2$, so we add the vector $\Delta_1(x+b_0,b_1,b_2,0)^t$ to $\alpha$. In particular we find that the codimension of the base space is the same for $N(4)_7$ and $N(6)_{12}$. The curve $N(6)_{11} =\langle 5, 8, 9, 11, 12 \rangle$ deforms with the deformation $(t^5,t^8+s t^7,t^9,t^{11},t^{12})$ of the parametrisation to a curve with semigroup $\langle 5, 7, 9, 11, 13 \rangle$, but not to the monomial curve $N(6)_{12}$ with this semigroup: the deformation $(t^5,t^7+s' t^8,t^9,t^{11},t^{13})$ of $N(6)_{12}$ is non-trivial of positive weight. We did not compute the base space for $N(6)_{11}$, but we determined the quadratic part of the equations. It contains quadrics of rank two, so the base space is definitely more complicated. It is feasible to compute the deformation with Hauser's algorithm, but the problem is to simplify the resulting equations and write them in a systematic way.
Even for $N(4)_7$ it is very hard to see that the equations from Hauser's algorithm give the same base space as the one above from the projection method. \begin{prop}\label{propdim} For all semigroups of genus $g\leq7$ with $5$ generators and type $4$ one has $\dim M = 2g +2-\dim T^{1,+}$. \end{prop} \begin{proof} If the monomial curve is negatively graded the result follows directly from the Rim--Vitulli formula in Theorem \ref{Stevens}. For $N(7)_{13}$ and $N(7)_{27}$ a computation of the deformation up to order two yields 20 quadratic equations which are among the equations for the tangent cone to the base space (and probably give the tangent cone exactly). These 20 equations define a projective scheme of dimension 15, which is therefore an upper bound for the dimension of ${\mathcal{M}_{g,1}^{\NN}}$. At the same time Theorem \ref{Stevens} gives 15 as a lower bound. For $N(7)_{24}$ the situation is more complicated. One of the 21 equations starts with cubic terms. We did compute the base space with Hauser's method. It leads to 256 equations in 63 variables, with 6923 monomials in total. These equations are not independent; in fact they can be reduced to 59 equations. Thirty-eight variables occur linearly and can be eliminated. The result consists of 21 equations in 25 variables, with 24829 monomials in total; the equation starting with cubic terms has 3196 monomials. Taking the lowest degree part of each equation gives a manageable system with 163 monomials defining a scheme of dimension 15. \end{proof} \subsection{Higher codimension} For the remaining curves we did not determine the base space. For the curves $N(5)_{12}$, $N(6)_{18}$, $N(7)_{22}$ and $N(7)_{23}$ with type 5 the dimension of $T^2$ is 45, for $N(7)_{34}$ it is 46. The curves $N(6)_{23}$ and $N(7)_{33}$ have type 6 and $\dim T^2=84$, while for $N(7)_{39}$, the only type 7 curve, $\dim T^2=140$.
With decreasing type the dimension of $T^2$ also decreases: for $N(7)_{35}$, $N(7)_{36}$, $N(7)_{37}$ and $N(7)_{38}$ the dimensions are 28, 19, 14, 14, respectively. All the curves discussed here are negatively graded, except $N(7)_{23}$. For this case the base space was computed up to order two, and a standard basis of the resulting ideal was computed in finite characteristic, to speed up the computation. The resulting upper bound for the dimension of $\mathcal{M}_{7,1}^{\scriptscriptstyle N(7)_{23}}$ again coincides with the lower bound of Theorem \ref{Stevens}. Herewith Proposition \ref{dimprop} is completely established. \end{document}
Why collide a moving particle with a particle at rest, rather than two moving particles?

I was just reading some lecture notes about relativistic and quantum mechanics, and in the later part of this page the author demonstrates that any relativistic particle collision in the "lab" reference frame (where one particle is at rest and the other is accelerated) requires significantly more initial kinetic energy than it would in the "center of mass" reference frame (where both particles are accelerated equally). So I was wondering: Why don't we use the center of mass frame, i.e. accelerate both particles in a collision? I would guess the answer is something simple like "aiming a 0.5c proton directly at another 0.5c proton is much harder than aiming one 0.9c proton into a big block of stationary protons", but I don't know how accurate accelerators are these days or how many particles are actually involved on each side of a typical high-energy collision.

special-relativity kinematics reference-frames collision particle-accelerators

Ixrec

Actually two protons each travelling at 0.5c in opposite directions will hit at a much lower speed than 0.9c. – kasperd Mar 24 '15 at 22:47

@kasperd to be precise: $(u+v)/(1+uv) = 1/(1+\frac{1}{4}) = 0.8$. – Nathaniel Oct 17 '15 at 0:48

Can either of you provide a link that explains why the collision speed is so much lower than the sum of the individual particles' speeds? – Kelly S. French Nov 16 '17 at 19:35

We do. The LHC accelerates two protons, each with 3.5 TeV of energy, giving a total of 7 TeV in the CoM frame. (The energies are from the initial phase of the previous LHC run. Later in the run this was increased to 8 TeV, and the combination of the two datasets was what discovered the Higgs boson. The energies are roughly doubling now for Run II, to 13 TeV.) The main reason for this is, as you mentioned, the energy involved.
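As an aside, the closing speed quoted in the comments above comes from relativistic velocity addition, $w=(u+v)/(1+uv)$ in units of $c$; a quick numerical check in Python:

```python
def relativistic_add(u, v):
    """Relativistic velocity addition in units of c: w = (u + v) / (1 + u*v)."""
    return (u + v) / (1 + u * v)

# Two 0.5c protons approaching head-on close at 0.8c, not 1.0c
print(relativistic_add(0.5, 0.5))  # 0.8
# The result never exceeds c, even for two 0.9c beams
print(relativistic_add(0.9, 0.9))  # ~0.9945
```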
In any frame, we have the following invariant quantity, $s = (p_1 + p_2)^2$, whose square root, $\sqrt{s}$, gives the Centre of Mass (CoM) energy for the experiment; here $p_i$ represents the momentum four-vector of particle $i$. In a collision where two particles are moving in opposite directions with equal energy (so $\mathbf{p}_2 = -\mathbf{p}_1$) we have the following: $$ s \equiv (p_1 + p_2) \cdot (p_1 + p_2) = (E + E, \mathbf{p}_1 + \mathbf{p}_2 ) \cdot (E + E, \mathbf{p}_1 + \mathbf{p}_2 ) = (2E , \mathbf{0} ) \cdot (2E, \mathbf{0}) = 4E^2$$ and now the CoM energy is given by the square root of this quantity, $$\to E_{CoM} = \sqrt{s} = 2E$$ In an experiment where one of the particles is at rest (with mass $m_t$) and the other is travelling with momentum $\mathbf{p}_b$ (and has mass $m_b$) we have the following: $$ s \equiv (p_1 + p_2) \cdot (p_1 + p_2) = (E_b + m_t, \mathbf{p}_b) \cdot (E_b + m_t, \mathbf{p}_b) = E_b^2 + m_t^2 + 2E_b m_t - p_b^2 = m_t^2 + m_b^2 + 2E_b m_t $$ Assuming the masses are negligible, we have the fixed target (FT) CoM energy, $$\to E^{\text{FT}}_{CoM} = \sqrt{s} = \sqrt{2E_b m_t}$$ Thus we would need much more energy input in a fixed target experiment to achieve the same CoM energy as in the case of two beams colliding head-on. EDIT: Regarding a comment below which I think arises from confusion about what the CoM frame is. $\sqrt{s}$ gives the CoM energy in both cases. This is useful because we can now compare between a fixed target experiment and an experiment where both particles are accelerated to the same speed but in different directions. So, say my collider has the capability to produce a magnetic field which at its maximum can accelerate a charged particle so as to have energy of 3.5 TeV. Now in the case that we have two particles with the same energy going in opposite directions, we will get a total CoM energy of 7 TeV, following the result above.
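The two formulas are easy to compare numerically. A quick sketch (plain Python; the 3.5 TeV beam energy and the proton mass are the only inputs, and the masses are kept exact rather than dropped):

```python
import math

M_P = 0.938272  # proton mass in GeV/c^2

def sqrt_s_collider(e_beam):
    """CoM energy for two identical beams of energy e_beam colliding head-on:
    s = (2E, 0).(2E, 0) = 4E^2, so sqrt(s) = 2E."""
    return 2.0 * e_beam

def sqrt_s_fixed_target(e_beam, m_beam=M_P, m_target=M_P):
    """CoM energy for a beam of energy e_beam hitting a target at rest:
    s = m_b^2 + m_t^2 + 2 E_b m_t (exact, no massless approximation)."""
    return math.sqrt(m_beam**2 + m_target**2 + 2.0 * e_beam * m_target)

e_beam = 3500.0  # GeV, one LHC Run-1 proton beam
print(sqrt_s_collider(e_beam))      # 7000 GeV = 7 TeV head-on
print(sqrt_s_fixed_target(e_beam))  # only ~81 GeV on a fixed proton target
```

For a 3.5 TeV proton beam this gives 7 TeV head-on but only about 81 GeV on a fixed proton target, which is the whole point of colliding beams.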
In the second case though, there's only one accelerated particle, hence $\sqrt{s} = \sqrt{2 \times 3.5 \times m_t}$ and since $m_t \ll E$, this is always less than in the first case. So be careful, because both experiments can be transformed into a CoM frame. In the CoM frame $\mathbf{p}_1 = -\mathbf{p}_2$. Note this is true in both experiments, even in the second case where one of the particles is stationary. Well, the whole point is that we can use the above formulas so we can skip transforming to the CoM frame; we can compute this quantity directly. andybuckley Constandinos Damalas $\begingroup$ It occurs to me. You assume at the end that masses are negligible, or at least that $m\ll2E_b$ (assuming both are protons, the rest mass can be called equal between them). Actually, the approximation is better stated as $m\ll p$ since $E$ is composed of $m$ and $p$ and $m\not\ll m$. Anyway, in the first case, we have $s\sim4(m^2+p^2)\sim4p^2$, but in the second case, you have $s\sim2m(m^2+p_b^2)^{1/2}$, which also must approximate as $s\sim2mp_b$. Looking at that, how often would we say it's really the case that $4p^2<2mp_b$? It seems like the second case has low energies $\endgroup$ – Jim Mar 24 '15 at 19:19 $\begingroup$ @Jimnosperm, I sensed some confusion about what the CoM frame means so I edited the question to address this. The answer to your question is: always, btw. $\endgroup$ – Constandinos Damalas Mar 24 '15 at 19:54 $\begingroup$ if each particle was moving at 0.6 the speed of light wouldn't the other particle be going 1.2 the speed of light from ones reference frame? $\endgroup$ – easymoden00b Mar 24 '15 at 20:49 $\begingroup$ @easymoden00b : no, velocities do not add this way in special relativity $\endgroup$ – zeldredge Mar 24 '15 at 21:15 $\begingroup$ For completeness, the LHC Run 1 actually finished (and took most data) with CoM energy of 8 TeV, not 7.
The original question also asked about number of particles in a high-energy collision and the accuracy of the beam control: the LHC had roughly 10^11 protons in each of 2800 colliding bunches, each proton with 4 TeV of energy. The beams are 16 micrometres wide: colliding them is a technical feat! In each typical bunch crossing there are up to 60 pp interactions, to increase the luminosity; the downside is that overlaid uninteresting collisions (called pile-up) make event reconstruction hard. $\endgroup$ – andybuckley Mar 31 '15 at 10:31 Many modern particle accelerators do accelerate both particles towards each other. LEP accelerated electrons and positrons in opposite directions in the same chamber, and the Tevatron did the same for protons and antiprotons. The LHC is a proton-proton collider, and so it has two stacked rings that accelerate protons in different directions. For the BaBar experiment, SLAC accelerated electron and positron beams toward each other (though with a slight difference in energy so that the B mesons had drift, making them easier to separate). There are also still accelerators that use fixed targets. Some of these are earlier stages for the above accelerators. This is one way to make anti-protons, for example. Usually the experiments being done are different. jwimberley The inverse question ("What's advantageous about colliding beams?") has been conclusively addressed already in the answer by Constandinos Damalas. Why don't we [... always] accelerate both particles in a collision? We also like to study collisions involving neutral particles, of course, which by themselves would be quite difficult to accelerate. But of course we can for instance accelerate neutrons as constituents of nuclei or (heavy) ions; and we can produce for instance neutral pions or neutrinos at fairly large speeds wrt. our labs based on first having accelerated certain suitable charged particles.
I would guess the answer is something simple like "aiming a 0.5c proton directly at another 0.5c proton is much harder than aiming one 0.9c proton into a big block of stationary protons" Rather than aiming one individual proton at another, at the Large Hadron Collider (LHC), there are beams of protons with cross-section $\approx 1~\text{mm}^2$ being aimed at each other quite accurately. These beams are structured in so-called bunches (of $\approx 100~\text{mm}$ "instantaneous lab length"), where each bunch contains about $10^{11}$ protons; cmp. https://lhc-data-exchange.web.cern.ch/lhc-data-exchange/ruggiero.pdf This gives a proton density (with respect to the lab) of $10^9~\text{p}^{+} / \text{mm}^3$. By comparison, the proton density in water, which contains 10 protons and 8 neutrons per water molecule, is roughly $$10~\text{p}^{+}~\left(\frac{6 \times 10^{23}/\text{mol}}{18~\text{g}/\text{mol}}\right) ~10^{-3}~\text{g}/\text{mm}^3 \approx 3 \times 10^{20}~\text{p}^{+} / \text{mm}^3,$$ where $10^{-3}~\text{g}/\text{mm}^3 = 1~\text{g}/\text{cm}^3$ is of course the density of water, $18~\text{g}/\text{mol}$ is its approximate molar mass, and the bracketed factor counts water molecules per gram. Now, there are some experimental tasks where having a target of such high density (comparable to water), sitting in the lab, is obviously much more sensible than creating a comparatively sparse beam of targets; especially neutrino observatories, or detectors searching for some proposed types of dark matter.
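The two densities are a quick back-of-envelope computation (plain Python; the bunch length, cross-section and proton count are the figures quoted above, and the water arithmetic comes out near $3\times 10^{20}~\text{p}^{+}/\text{mm}^3$):

```python
# Rough cross-check of the beam-vs-water proton densities discussed above.
N_A = 6.022e23                    # Avogadro's number, per mol
protons_per_bunch = 1e11
bunch_volume_mm3 = 100.0 * 1.0    # ~100 mm bunch length x ~1 mm^2 cross-section

beam_density = protons_per_bunch / bunch_volume_mm3   # p+ per mm^3

# Water: 10 protons per H2O molecule, ~18 g/mol, density 1 g/cm^3 = 1e-3 g/mm^3.
water_density = 10 * (N_A / 18.0) * 1e-3              # p+ per mm^3

print(f"beam : {beam_density:.1e} p+/mm^3")   # ~1e9
print(f"water: {water_density:.1e} p+/mm^3")  # ~3e20, some 11 orders denser
```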
If $(x,y) = (3,9)$, what is $y^2 - 3xy + 8$? We have $y^2 -3xy + 8 = 9^2 - 3(3)(9) + 8 = 81 - 81 + 8 = \boxed{8}$.
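The substitution can be checked in one line of Python:

```python
x, y = 3, 9
print(y**2 - 3*x*y + 8)  # 81 - 81 + 8 = 8
```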
\begin{document} \title{Galois theory, graphs and free groups} \begin{abstract} A self-contained exposition is given of the topological and Galois-theoretic properties of the category of combinatorial $1$-complexes, or graphs, very much in the spirit of Stallings \cite{Stallings83}. A number of classical, as well as some new results about free groups are derived. \end{abstract} \section*{Introduction} This paper is about the interplay between graphs, free groups, and their subgroups, a subject with a long history that can be broadly divided into two schools. The first is combinatorial, where graphs, and particularly finite graphs, provide an intuitively convenient way of picturing some aspects of the theory of free groups, as in for example \cites{Imrich77,Imrich76,Servatius83,Tardos96,Tardos92}. The other approach is to treat graphs and their mappings as topological objects, a point of view with its origins in the very beginnings of combinatorial group theory, and resurrected in \cite{Stallings83} (see also \cites{Cohen89, Gersten83,Neumann90}). This is the philosophy we take, but we differ from these earlier papers in that we place centre stage the theory of coverings of arbitrary graphs, rather than coverings being merely a prelude to immersions of finite graphs. The first section sets up the topological preliminaries, \S \ref{topology:galoistheory} formulates the well known connection between subgroups of free groups and coverings of graphs in a Galois-theoretic setting, while \S\S \ref{section:invariants}-\ref{section:pullbacks} focus on the graph-theoretic implications of finitely generated subgroups. \section{The topology of graphs}\label{section:topological} This section is all very ``Stallings-esque'' \cite{Stallings83}, with much of the material in \S\S \ref{topological:graphs}-\ref{topological:pullbacksection} well known. General references are \cites{Cohen89,Collins98,Gersten83, Serre03,Scott79,Stallings83}.
In \S \ref{topological:coverings} we deal with coverings, with a mixture of well known and some (minor) new results; \S \ref{topology:lattice} introduces the lattice of intermediate coverings of a cover. \subsection{Graphs}\label{topological:graphs} A {\em combinatorial $1$-complex\/} or {\em graph\/} \cite{Gersten83}*{\S 1.1} is an arbitrary set $\Gamma$ with an involutory map $^{-1}:\Gamma\rightarrow\Gamma$ and an idempotent map $s:\Gamma\rightarrow V_\Gamma$, (ie: $s^2=s$) where $V_\Gamma$ is the set of fixed points of $^{-1}$. Thus a graph has {\em vertices\/} $V_\Gamma$, and {\em edges\/} $E_\Gamma:=\Gamma\setminus V_\Gamma$ with (i). $s(v)=v$ for all $v\in V_\Gamma$; (ii). $v^{-1}=v$ for all $v\in V_\Gamma$, $e^{-1}\in E_\Gamma$ and $e^{-1}\not= e=(e^{-1})^{-1}$ for all $e\in E_\Gamma$. Indeed, these two can be taken as a more transparent, but less elegant, definition. We will use both interchangeably. The edge $e$ has {\em start vertex\/} $s(e)$ and {\em terminal vertex\/} $t(e):=s(e^{-1})$; an {\em arc\/} is an edge/inverse edge pair, and an {\em orientation\/} for $\Gamma$ is a set $\mathcal{O}$ of edges containing exactly one edge from each arc. Write $\overline{e}$ for the arc containing the edge $e$ (so that $\overline{e^{-1}}=\overline{e}$). A pointed graph is a pair $\Gamma_v:=(\Gamma,v)$ for $v\in\Gamma$ a vertex. The graph $\Gamma$ is {\em finite\/} when $V_\Gamma$ is finite and {\em locally finite\/} when the set $s^{-1}(v)$ is finite for every $v\in V_\Gamma$. The cardinality of the set $s^{-1}(v)$ is the {\em valency\/} $\partial v$ of the vertex $v$. A {\em path\/} is a finite sequence of edges, mutually incident in the obvious sense; similarly we have {\em closed\/} paths and {\em trivial\/} paths (consisting of a single vertex). $\Gamma$ is {\em connected\/} if any two vertices can be joined by a path.
The connected component of $\Gamma$ containing the vertex $v$ consists of those vertices for which there is a path connecting them to $v$, together with all their incident edges. \parshape=7 0pt\hsize 0pt.81\hsize 0pt.81\hsize 0pt.81\hsize 0pt.81\hsize 0pt.81\hsize 0pt\hsize A {\em map\/} of graphs is a set map $g :\Gamma\rightarrow \Lambda$ with $g(V_\Gamma)\subseteq V_\Lambda$, such that the diagram \vadjust{ \smash{\lower 62pt \llap{ \begin{pspicture}(0,0)(2,2) \rput(-5.75,0.3){ \rput(0.7,-0.55){ \rput(5,2){$\Gamma$}\rput(6.55,2){$\Lambda$} \rput(5,0.45){$\Gamma$}\rput(6.55,0.45){$\Lambda$} \psline[linewidth=.1mm]{->}(5.3,2)(6.3,2) \psline[linewidth=.1mm]{->}(5.3,0.45)(6.3,0.45) \psline[linewidth=.1mm]{->}(5,1.7)(5,.7) \psline[linewidth=.1mm]{->}(6.55,1.7)(6.55,.7) \rput(4.7,1.2){$\sigma_\Gamma$}\rput(6.85,1.2){$\sigma_\Lambda$} \rput(5.8,2.2){$g$}\rput(5.8,.65){$g$} }} \end{pspicture}}}}\ignorespaces commutes, where $\sigma_\Gamma$ is one of the $s$ or $^{-1}$ maps for $\Gamma$, and $\sigma_\Lambda$ similarly, ie: $g s_\Gamma(x)=s_\Lambda g(x)$ and $g(x^{-1})=g(x)^{-1}$. These are combinatorial versions of continuity: if $\Gamma$ is connected then $g(\Gamma)\subset\Lambda$ is connected. A map is {\em dimension preserving\/} if $g(E_\Gamma)\subseteq E_\Lambda$. These maps of graphs allow one to squash edges down to vertices as in \cite{Gersten83}, rather than the more rigid maps of \cite{Serre03,Stallings83}. The pay off is that the quotient construction below is more useful. A map $g:\Gamma_v\rightarrow\Lambda_u$ of pointed graphs is a map $g:\Gamma\rightarrow\Lambda$ with $g(v)=u$. A map $g:\Gamma\rightarrow \Lambda$ is a {\em homeomorphism\/} if it is dimension preserving and is a bijection on the vertex and edge sets, in which case the inverse set map is a dimension preserving map of graphs $g^{-1}:\Lambda\rightarrow \Gamma$, and hence a homeomorphism. 
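The involution/idempotent description of a graph translates directly into a data structure. The following sketch (plain Python, not part of the paper's formalism; the class design and the convention of naming the inverse edge `(e, -1)` are ours) stores exactly the data of \S 1.1:

```python
class Graph:
    """A combinatorial 1-complex: a set with an involution x -> x^-1 and an
    idempotent start map s whose fixed points are the vertices."""

    def __init__(self):
        self.vertices = set()
        self.start = {}   # edge -> start vertex: the map s on edges
        self.inv = {}     # edge -> inverse edge: the involution on edges

    def add_arc(self, e, u, v):
        """Add the arc {e, e^-1} with s(e) = u and s(e^-1) = v; the inverse
        edge is named (e, -1) by convention."""
        self.vertices.update((u, v))
        self.start[e] = u
        self.start[(e, -1)] = v
        self.inv[e] = (e, -1)
        self.inv[(e, -1)] = e

    def s(self, x):
        """The idempotent map s: fixes vertices, sends an edge to its start."""
        return x if x in self.vertices else self.start[x]

    def t(self, e):
        """Terminal vertex: t(e) = s(e^-1)."""
        return self.s(self.inv[e])

G = Graph()
G.add_arc("a", 0, 1)
G.add_arc("b", 1, 1)                 # a loop at vertex 1
assert G.s(G.s("a")) == G.s("a")     # s is idempotent: s^2 = s
assert G.inv[G.inv["a"]] == "a"      # ^-1 is an involution on edges
assert G.t("a") == 1 and G.t(("a", -1)) == 0
```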
The set of self homeomorphisms $\Gamma\rightarrow \Gamma$ forms a group $\text{Homeo}(\Gamma)$, and a group action $G\stackrel{\varphi}{\rightarrow}\text{Homeo}(\Gamma)$ is said to {\em preserve orientation\/} iff there is an orientation $\mathcal{O}$ for $\Gamma$ with $\varphi(g)(\mathcal{O})=\mathcal{O}$ for all $g\in G$. The action of $G$ is said to be {\em without inversions\/} iff $\varphi(g)(e)\not= e^{-1}$ for all edges $e$ and for all $g\in G$. It is easy to see that $G$ preserves orientation if and only if it acts without inversions. $G$ acts {\em freely\/} iff the action is free on the vertices, ie: for any $g\in G$ and $v$ a vertex, $\varphi(g)(v)=v$ implies that $g$ is the identity element. If $G$ acts freely and preserves orientation, then the action is free on the edges too. A {\em subgraph\/} is a subset $\Lambda\subset\Gamma$, such that the maps $s$ and $^{-1}$ give a graph when restricted to $\Lambda$. Equivalently, it is a graph mapping $\Lambda\hookrightarrow\Gamma$ that is a homeomorphism onto its image. The {\em coboundary\/} $\delta\Lambda$ of a subgraph consists of those edges $e\in\Gamma$ with $s(e)\in\Lambda$ and $t(e)\not\in\Lambda$ (equivalently, it is those edges $e\in\Gamma$ with $sq(e)$ the vertex $q(\Lambda)$ in the quotient complex $\Gamma/\Lambda$, where $q:\Gamma\rightarrow\Gamma/\Lambda$ is the quotient mapping as below). An {\em elementary homotopy\/} of a path, $e_1\ldots e_ie_{i+1}\ldots e_k\leftrightarrow e_1\ldots e_i(ee^{-1})e_{i+1}\ldots e_k$ inserts or deletes a {\em spur\/}: a path that consecutively traverses both edges of an arc $\overline{e}$. Two paths are (freely) {\em homotopic\/} iff there is a finite sequence of elementary homotopies taking one to the other. Paths homotopic to a trivial path are said to be {\em homotopically trivial\/}.
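Deleting spurs is mechanical: it is the same stack-based cancellation used for free reduction of words in a free group. A minimal sketch (plain Python; the `(e, -1)` encoding of inverse edges is our own convention, and edge names are assumed not to be tuples themselves):

```python
def inverse(e):
    """Formal inverse: e <-> (e, -1)."""
    return e[0] if isinstance(e, tuple) and e[1] == -1 else (e, -1)

def reduce_path(path):
    """Delete spurs e e^-1 until none remain.  For a genuine path in a graph
    the result is the unique reduced path in its homotopy class
    (relative to the endpoints)."""
    out = []
    for e in path:
        if out and out[-1] == inverse(e):
            out.pop()        # an elementary homotopy: delete the spur
        else:
            out.append(e)
    return out

# a b b^-1 a^-1 c reduces to c; the cancellations cascade correctly
assert reduce_path(["a", "b", ("b", -1), ("a", -1), "c"]) == ["c"]
# a homotopically trivial closed path reduces to the trivial path
assert reduce_path(["a", ("a", -1)]) == []
```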
It is easy to see that two homotopic paths have the same start and terminal vertices (and thus homotopically trivial paths are necessarily closed) and that homotopy is an equivalence relation on the paths with common endpoints. The {\em trivial graph\/} has a single vertex and no edges. The {\em real line\/} graph $\mathcal R$ has vertices $V_\mathcal R=\{v_k\}_{k\in\ams{Z}}$, edges $E_\mathcal R=\{e_k^{\pm 1}\}_{k\in\ams{Z}}$ and $s(e_k)=v_k,s(e_k^{-1})=v_{k+1}$. \subsection{Quotients} A {\em quotient relation\/} is an equivalence relation $\sim$ on $\Gamma$ such that $$ \text{(i)}.\, x\sim y \Rightarrow s(x)\sim s(y)\text{ and } x^{-1}\sim y^{-1},\hspace{2em} \text{(ii)}.\, x\sim x^{-1}\Rightarrow[x]\cap V_\Gamma\not=\varnothing, $$ where $[x]$ is the equivalence class under $\sim$ of $x$. \begin{proposition} If $\sim$ is a quotient relation on a graph $\Gamma$ then the quotient graph $\Gamma/\kern -.45em\sim$ has vertices the equivalence classes $[v]$ for $v\in V_\Gamma$, edges the classes $[e]$ for $e\in E_\Gamma$ with $[e]\cap V_\Gamma=\varnothing$, $[x]^{-1}=[x^{-1}]$, and $s_{\Gamma/\sim}[x]=[s_\Gamma(x)]$. Moreover, the quotient map $q:\Gamma\rightarrow \Gamma/\kern -.45em\sim$ given by $q(x)=[x]$ is a map of graphs (and so in particular, if $\Gamma$ is connected then $\Gamma/\kern -.45em\sim$ is connected). \end{proposition} Let $\Lambda_i\hookrightarrow\Gamma, (i\in I),$ be a set of mutually disjoint subgraphs and define $\sim$ on $\Gamma$ by $x\sim y$ iff $x=y$ or both $x$ and $y$ lie in the same $\Lambda_i$. Write $\Gamma/\Lambda_i$ for $\Gamma/\kern -.45em\sim$, the {\em quotient of $\Gamma$ by the family of subgraphs $\Lambda_i$}. It is what results by squashing each $\Lambda_i$ to a distinct vertex. In particular, if the family consists of a single subgraph $\Lambda\hookrightarrow\Gamma$, we have the quotient $\Gamma/\Lambda$.
The reader should be wary of the difference between the quotients $\Gamma/\Lambda_i$ and $\Gamma/\Lambda$, for $\Lambda=\amalg\Lambda_i$ the union of the disjoint subgraphs. If $\sim$ is the equivalence relation on $\Gamma$ consisting of the orbits of an action by a group $G$, then $\sim$ is a quotient relation on $\Gamma$ if and only if the group action is orientation preserving. In this case we may form the quotient complex $\Gamma/G:=\Gamma/\kern -.45em\sim$. \subsection{Trees}\label{topological:trees} A path in a graph is {\em reduced\/} when it contains no spurs; by removing spurs, any two vertices in the same component can be joined by a reduced path. It is easily proved that for any vertices $u$ and $v$ of a graph $\Gamma$, there are $\leq 1$ reduced paths between them if and only if any closed non-trivial path in $\Gamma$ contains a spur (equivalently, any closed path is homotopic to the trivial path based at one of its vertices). A graph satisfying any of these equivalent conditions is called a {\em forest\/}; a connected forest is a {\em tree\/}. If $\Gamma$ is a finite graph with $\partial v\geq 2$ for every vertex $v$, then it can be shown that $\Gamma$ contains a homotopically non-trivial closed path. Hence if $T$ is a finite tree, then $|E_T|=2(|V_T|-1)$. A {\em spanning forest\/} is a subgraph $\Phi\hookrightarrow\Gamma$ that is a forest and contains all the vertices of $\Gamma$ (ie: $V_\Phi=V_\Gamma$). It is well known that spanning trees can always be constructed for connected $\Gamma$. \begin{proposition}\label{topological:trees:result100} Let $T_i\hookrightarrow\Gamma$ be a family of mutually disjoint trees in a connected graph $\Gamma$. Then there is a spanning tree $T\hookrightarrow \Gamma$ containing the $T_i$ as subgraphs, and such that if $q:\Gamma\rightarrow \Gamma/T_i$ is the quotient map, then $q(T)$ is a spanning tree for $\Gamma/T_i$. \end{proposition} In particular, any spanning forest for $\Gamma$ can be extended to a spanning tree. 
For the proof, take $T$ to be $q^{-1}(T')$ for some spanning tree $T'$ of the (connected) graph $\Gamma/T_i$. \subsection{The fundamental group} The {\em fundamental group\/} $\pi_1(\Gamma,v)$ is the usual group of homotopy classes $[\gamma]$ of closed paths $\gamma$ at the vertex $v\in\Gamma$ (ie: equivalence classes under the homotopy relation) with product $[\gamma_1][\gamma_2]=[\gamma_1\gamma_2]$. If $\Phi$ is a forest, then $\pi_1(\Phi,v)$ is trivial for any vertex $v$ and conversely, if $\Gamma$ connected has $\pi_1(\Gamma,v)$ trivial for some (hence every) vertex $v$, then $\Gamma$ is a tree. A connected graph with trivial fundamental group is {\em simply connected\/}. A map $g:\Gamma_v\rightarrow \Lambda_u$ of graphs induces a group homomorphism $g^*:\pi_1(\Gamma,v)\rightarrow\pi_1(\Lambda,u)$ by $g^*[\gamma]=[g(\gamma)]$ and this satisfies the usual functorality properties: $(\text{id})^*=\text{id}$ and $(gf)^*=g^*f^*$. \begin{proposition} \label{homotopy:excision} If $T_i\hookrightarrow\Lambda$ is a family of mutually disjoint trees, $v\in T\in\{T_i\}$ a vertex, and $q:\Lambda\rightarrow\Lambda/T_i$ the quotient map, then $q^*:\pi_1(\Lambda,v)\rightarrow \pi_1(\Lambda/T_i,q(v))$ is an isomorphism. \end{proposition} \begin{proof} \parshape=8 0pt\hsize 0pt.8\hsize 0pt.8\hsize 0pt.8\hsize 0pt.8\hsize 0pt.8\hsize 0pt.8\hsize 0pt\hsize The key to the proof is that the quotient map $q$ is essentially just the identity map outside of the $T_i$. \vadjust{ \smash{\lower 20pt \llap{\begin{pspicture}(0,0)(3,3) \rput(-2.5,-2.3){ \rput(4,2){\BoxedEPSF{fig144a3.eps scaled 250}} \rput(3.25,1.15){$T_{j-1}$}\rput(4.75,1.15){$T_{j}$} \rput(5.35,2.6){{\red $\gamma_{j}'$}} } \end{pspicture}}}}\ignorespaces To see the surjectivity of $q^*$, suppose that $\gamma$ is a closed path in $\Lambda/T_i$ based at $q(v)$ and having edges $e_1\ldots e_k$. 
Then there are (unique) edges $e'_1,\ldots, e'_k$ in $\Lambda$ with $q(e'_i)=e_i$ and $\gamma'=e'_1\ldots e'_k=\gamma'_1\gamma'_2\ldots\gamma'_k$ a sequence of paths with the terminal vertex of $\gamma_{j}'$ and the start vertex of $\gamma'_{j+1}$ in the same tree $T_j$. Use the connectedness of the $T_i$ to connect these up into a path in $\Lambda$ mapping via $q$ to $\gamma$. For injectivity, suppose that $\gamma'$ is a closed path in $\Lambda$ based at $v$ and mapping via $q$ to a homotopically trivial path $\gamma$ in $\Lambda/T_i$. If $\gamma$ contains a spur, then the section of $\gamma'$ mapping to it looks like below: $$ \begin{pspicture}(0,0)(14,1.5) \rput(0,0){ \rput(5,.8){\BoxedEPSF{fig141a2.eps scaled 375}} \rput(7.4,.5){{\red $\gamma'$}} \rput(4.1,.05){$T_i$}\rput(5.3,1.5){$T_j$} } \psline[linewidth=.1mm]{->}(7.8,1)(9.2,1) \rput(8.5,1.2){$q$} \rput(0,0){ \rput(11,1){\BoxedEPSF{fig141c.eps scaled 375}} \rput(12.2,.5){{\red $\gamma$}} } \end{pspicture} $$ Thus a sequence of elementary homotopies reducing $\gamma$ to the trivial path in $\Lambda/T_i$ can be mirrored by homotopies in $\Lambda$ that reduce $\gamma'$ to a closed path completely contained in $T$. As $T$ is simply connected, this path can in turn be homotoped to the trivial path. Thus, only homotopically trivial paths can be sent by $q$ to homotopically trivial paths, so $q^*$ is injective. \qed \end{proof} Fix a spanning tree $T\hookrightarrow\Gamma$, choose an edge $e$ from each arc of $\Gamma$, and consider the homotopy class of the path that travels through $T$ from $v$ to $s(e)$, traverses $e$, and travels back through $T$ to $v$. Then {\em Schreier generators\/} for $\pi_1(\Gamma,v)$ are the homotopy classes of such paths arising from the arcs omitted by $T$.
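For a finite connected graph, a spanning tree and the resulting Schreier generators can be produced by a single breadth-first search. A sketch (plain Python; the `(name, u, v)` arc encoding and the function names are our own, not the paper's):

```python
from collections import deque

def spanning_tree_and_generators(vertices, arcs, base):
    """BFS spanning tree rooted at `base`; each arc omitted by the tree gives
    one Schreier generator of pi_1: tree path out, the arc, tree path back.
    `arcs` is a list of (name, u, v) triples, one triple per arc."""
    adj = {v: [] for v in vertices}
    for name, u, v in arcs:
        adj[u].append((name, 1, v))     # traverse the arc forwards...
        adj[v].append((name, -1, u))    # ...or backwards
    path_to = {base: []}                # reduced tree path from base
    tree = set()
    queue = deque([base])
    while queue:
        u = queue.popleft()
        for name, sign, w in adj[u]:
            if w not in path_to:
                path_to[w] = path_to[u] + [(name, sign)]
                tree.add(name)
                queue.append(w)
    gens = []
    for name, u, v in arcs:
        if name not in tree:            # an omitted arc: one generator
            back = [(n, -s) for n, s in reversed(path_to[v])]
            gens.append(path_to[u] + [(name, 1)] + back)
    return tree, gens

# theta graph: two vertices joined by three arcs; pi_1 is free of rank 2
tree, gens = spanning_tree_and_generators(
    [0, 1], [("a", 0, 1), ("b", 0, 1), ("c", 0, 1)], base=0)
assert len(tree) == 1 and len(gens) == 2   # |V|-1 tree arcs, 3-1 generators
```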
\subsection{Homology} Fix an orientation $\mathcal{O}$ for $\Gamma$, and always write arcs in the form $\overline{e}$ for $e\in \mathcal{O}$, and paths in the form $\gamma=e_1^{\varepsilon_1}\ldots e_k^{\varepsilon_k}$ with $e_i\in \mathcal{O}$ and $\varepsilon_i=\pm 1$. Let $\ams{Z}[V_\Gamma]$ and $\ams{Z}[\text{arcs}]$ be the free abelian groups on the vertices and arcs of $\Gamma$ (alternatively, one can take $\ams{Z}[E_\Gamma]$ and then pass to the quotient $\ams{Z}[E_\Gamma]/\langle e+e^{-1}=0\rangle$; we prefer the more concrete version). Define the boundary of an arc $\overline{e}$ to be $\partial(\overline{e})=t(e)-s(e)\in\ams{Z}[V_\Gamma]$, and for $\sum n_i\overline{e}_i\in\ams{Z}[\text{arcs}]$, let $\partial(\sum n_i\overline{e}_i)=\sum n_i\partial(\overline{e}_i)$. Then $\partial$ is a group homomorphism $\partial:\ams{Z}[\text{arcs}]\rightarrow\ams{Z}[V_\Gamma]$, and the {\em homology\/} of $\Gamma$ is the pair of abelian groups $$ H_1(\Gamma)=\ker\partial\text{ and }H_0(\Gamma)=\text{coker}\,\partial, $$ (ie: $H_0(\Gamma)=\ams{Z}[V_\Gamma]/\text{im}\,\partial$). By following the proofs in the topological category, one can show the standard homological facts: $H_0(\Gamma)$ is free abelian on the connected components of $\Gamma$; if $\Gamma$ is single vertexed, then $H_1$ is free abelian on the arcs. If $\gamma=e_1^{\varepsilon_1}e_2^{\varepsilon_2}\ldots e_k^{\varepsilon_k}$ is a closed path at $v$ then $\partial(\sum \varepsilon_i\overline{e}_i)=0$, and the Hurewicz map sending $\gamma$ to $\sum \varepsilon_i\overline{e}_i$ is well defined upto homotopy, thus, for $\Gamma$ connected, a surjective homomorphism $\pi_1(\Gamma,v)\rightarrow H_1(\Gamma)$ with kernel the commutator subgroup of $\pi_1(\Gamma,v)$.
In particular, $H_1(\Gamma)$ is the abelianisation of $\pi_1(\Gamma,v)$, so that if $\Gamma_1,\Gamma_2$ are connected graphs with $\pi_1(\Gamma_1,v_1)\cong\pi_1(\Gamma_2,v_2)$ then $H_1(\Gamma_1)\cong H_1(\Gamma_2)$. \subsection{Rank and spines}\label{topological:rank} Graph homology provides an important invariant for graphs: \begin{proposition}[rank an invariant] \label{topological:rank:result100} Let $T\hookrightarrow\Gamma$ be a spanning tree for $\Gamma$ connected. Then $H_1(\Gamma)$ is free abelian with basis the set of arcs of $\Gamma$ omitted by $T$. \end{proposition} Thus the cardinality of the set of omitted arcs is equal to $\text{rk}\,_{\ams{Z}} H_1(\Gamma)$ and independent of $T$ (this can also be shown directly without recourse to homology). Define the {\em rank\/} $\text{rk}\,\Gamma$ of $\Gamma$ connected to be $\text{rk}\,_{\ams{Z}} H_1(\Gamma)$, or the cardinality of the set of arcs omitted by a spanning tree. \begin{proof}[of Proposition \ref{topological:rank:result100}] We have $\pi_1(\Gamma,v)\cong\pi_1(\Gamma/T,q(v))$ by Proposition \ref{homotopy:excision}, hence $H_1(\Gamma)\cong H_1(\Gamma/T)$, with $\Gamma/T$ single vertexed, hence this last homology group free abelian on its arcs, ie: free abelian on the arcs of $\Gamma$ omitted by $T$. \qed \end{proof} If $\Gamma$ is finite, locally finite, connected, then $2(\text{rk}\,\Gamma-1)= |E_\Gamma|-2|V_\Gamma|$ by \S \ref{topological:trees}; clearly, $\text{rk}\,\Gamma=0$ if and only if $\Gamma$ is a tree. If $\Gamma$ is a connected graph and $T_i\hookrightarrow\Gamma$ a set of mutually disjoint trees, then $\text{rk}\,\Gamma=\text{rk}\,\Gamma/T_i$ (this follows either from Proposition \ref{homotopy:excision} using $\text{rk}\,=\text{rk}\, H_1$ or by Proposition \ref{topological:trees:result100} using that the rank is the number of arcs omitted by a spanning tree).
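For a finite connected graph the rank can be computed exactly as the proposition suggests: grow a spanning tree and count the omitted arcs. A small sketch (plain Python, outside the paper's formalism; arcs are encoded as `(start, end)` pairs of vertex indices, loops allowed):

```python
def rank(num_vertices, arcs):
    """rk(Gamma) = (#arcs) - (|V| - 1) for a finite connected graph: grow a
    spanning tree with union-find and count the arcs it omits."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    omitted = 0
    for u, v in arcs:
        ru, rv = find(u), find(v)
        if ru == rv:
            omitted += 1        # arc closes a cycle: omitted by the tree
        else:
            parent[ru] = rv     # arc joins two components: put it in the tree
    return omitted

# wedge of two circles at one vertex: rank 2
assert rank(1, [(0, 0), (0, 0)]) == 2
# a square (4-cycle): a spanning tree plus one extra arc, rank 1
assert rank(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 1
```

For the square this agrees with the edge count identity: $2(\text{rk}-1) = 0 = 8 - 2\cdot 4 = |E| - 2|V|$, remembering that $|E|$ counts edges together with their inverses.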
If $\Lambda$ is a connected graph and $v$ a vertex, then the {\em spine $\widehat{\Lambda}_v$ of $\Lambda$ at $v$\/} is defined to be the union in $\Lambda$ of all closed reduced paths starting at $v$. Stallings and others use core graphs; we have followed \cite{Neumann90}. \begin{lemma}\label{topological:spines:result100} (i). $\widehat{\Lambda}_v$ is connected with $\text{rk}\,\widehat{\Lambda}_v=\text{rk}\,\Lambda$. (ii). If $u\in\widehat{\Lambda}_v$, then every closed reduced path starting at $u$ is contained in $\widehat{\Lambda}_v$. (iii). Spines are topological invariants, ie: a homeomorphism $f:\Lambda_u\rightarrow\Delta_v$ restricts to a homeomorphism $\widehat{\Lambda}_u\rightarrow\widehat{\Delta}_v$. \end{lemma} \begin{proof} The connectedness is immediate. If $T$ is a spanning tree for $\Lambda$ and $e$ an edge not in $T$, then $e$ is contained in the spine $\widehat{\Lambda}_v$, for, the closed path obtained by traversing the reduced path through $T$ from $v$ to $s(e)$, across $e$ and back via the reduced path through $T$ is reduced. The rank assertion follows. For part (ii), let $\mu$ be a closed reduced path at $u$ and $\gamma$ a reduced path in the spine from $v$ to $u$. Then $\gamma=\gamma'_1\gamma_1= \gamma'_2\gamma_2$ and $\mu=\gamma_1^{-1}\mu'\gamma_2$ with $\gamma'_1\mu'(\gamma'_2)^{-1}$ reduced at $v$, hence in the spine. A homeomorphism sends closed reduced paths to closed reduced paths (compare with Proposition \ref{topological:coverings:result100} and the maps of \S \ref{topological:coverings}), hence $f(\widehat{\Lambda}_u)\subset\widehat{\Delta}_v$, and the converse similarly using $f^{-1}$. \qed \end{proof} \subsection{Pushouts}\label{topological:pushouts} These are important examples of quotients. Let $\Lambda_1,\Lambda_2$ and $\Delta$ be graphs and $g_i:\Delta\rightarrow \Lambda_i$ maps of graphs.
Let $\sim$ on the disjoint union $\Lambda_1\amalg\Lambda_2$ be the equivalence relation {\em generated by\/} the $x\sim y$ iff there is a $z\in\Delta$ with $x=g_1(z)$ and $y=g_2(z)$. Thus, $x\sim y$ iff there are $x_0,x_1,\ldots,x_k$ with $x_0=x$ and $x_k=y$, and for each $j$, there is $z\in \Delta$ with $x_j=g_i(z)$, $x_{j+1}=g_{i+1\text{ mod }2}(z)$. If $\sim$ is a quotient relation then call the quotient $\Lambda_1\coprod \Lambda_2/\kern -.45em\sim$ the {\em pushout\/} of the data $g_i:\Delta\rightarrow \Lambda_i$, denoted $\Lambda_1\coprod_\Delta \Lambda_2$. Given graphs and maps as above, the pushout cannot always be formed, precisely because the quotient cannot always be formed. Stallings \cite{Stallings83}*{page 552} shows that if the $g_i$ are dimension preserving, then the pushout exists if and only if there are orientations $\mathcal{O},\mathcal{O}_i$ for $\Delta,\Lambda_i$ with $g_i(\mathcal{O})\subseteq \mathcal{O}_i$. Thus in particular, if the graphs $g_1(\Delta)$ and $g_2(\Delta)$ are disjoint, then the pushout can always be formed. Define $t_i:\Lambda_i\rightarrow\Lambda_1\coprod_\Delta\Lambda_2$ to be the compositions $\Lambda_i\hookrightarrow \Lambda_1\coprod \Lambda_2 \rightarrow \Lambda_1\coprod \Lambda_2/\kern -.45em\sim$ of the inclusion of $\Lambda_i$ in the disjoint union and the quotient map. \begin{proposition}\label{pushout:categorical} If $\Delta\not=\varnothing$ and the $\Lambda_i$ are connected then the pushout is connected, and the maps $t_i$ make the diagram on the left commute. 
$$ \begin{pspicture}(6,3) \rput(0,-.5){ \rput(-1.5,1){ \rput(0,2){$\Delta$} \rput(0,0){$\Lambda_1$} \rput(2,2){$\Lambda_2$} \rput(2.1,0){$\Lambda_1\coprod_\Delta \Lambda_2$} \psline[linewidth=.1mm]{->}(0,1.7)(0,.3) \psline[linewidth=.1mm]{->}(.3,2)(1.7,2) \psline[linewidth=.1mm]{->}(.3,0)(1.2,0) \psline[linewidth=.1mm]{->}(2,1.7)(2,.3) \rput(.25,1.05){$g_1$} \rput(1,1.8){$g_2$} \rput(.75,.2){$t_1$} \rput(1.8,1){$t_2$} } \rput(0,.5){ \rput(5,1){ \rput(0,2){$\Delta$} \rput(0,0){$\Lambda_1$} \rput(2,2){$\Lambda_2$} \rput(2.1,0){$\Lambda_1\coprod_\Delta \Lambda_2$} \psline[linewidth=.1mm]{->}(0,1.7)(0,.3) \psline[linewidth=.1mm]{->}(.3,2)(1.7,2) \psline[linewidth=.1mm]{->}(.3,0)(1.2,0) \psline[linewidth=.1mm]{->}(2,1.7)(2,.3) \rput(.25,1.05){$g_1$} \rput(1,1.8){$g_2$} \rput(.75,.2){$t_1$} \rput(1.8,1){$t_2$} } \rput(8,0){$B$} \psbezier[linewidth=.1mm]{->}(5.2,.8)(6,0)(7,0)(7.8,0) \psbezier[linewidth=.1mm]{->}(7.2,2.8)(8,2)(8,1)(8,.2) \psline[linewidth=.1mm]{->}(7.3,.7)(7.8,.2) \rput(6.35,.4){$t'_1$}\rput(8,2){$t'_2$} }} \end{pspicture} $$ Moreover the pushout is universal in that if $B$, $t'_1$, $t'_2$ are a graph and maps making such a square commute, then there is a map $\Lambda_1\coprod_\Delta \Lambda_2\rightarrow B$ making the diagram above right commute. \end{proposition} These properties can of course be taken as an alternative, categorical definition of the pushout, with uniqueness following from the universality and the usual formal nonsense. If the $g_i:\Delta_v\rightarrow\Lambda_{u_i}$ are pointed maps, and $q:\Lambda_1\coprod \Lambda_2\rightarrow \Lambda_1\coprod \Lambda_2/\kern -.45em\sim$ the quotient map, then $q(u_1)=q(u_2)=u$ (say), and we have a pointed version of Proposition \ref{pushout:categorical}, involving the pointed pushout $(\Lambda_1\coprod_\Delta\Lambda_2)_u$. 
Many of the quotient constructions from topology (eg: cone, suspension, $\ldots$) can be expressed as some kind of pushout or other, but we content ourselves with the following: let $\Delta$ be a graph with $E_\Delta=\varnothing$ and the $g_i:\Delta\hookrightarrow\Lambda_i$ homeomorphisms onto their images (ie: injections on the vertices of $\Delta$). The resulting pushout (which always exists), the {\em wedge sum\/} $\Lambda_1\bigvee_{\Delta}\Lambda_2$, is the result of identifying the vertices of copies of $\Delta$ in the $\Lambda_i$. If the $\Lambda_i$ coincide ($=\Lambda$ say) with maps $\Delta\rightrightarrows\Lambda$, then we write $\bigvee_\Delta\Lambda$. If $\Delta$ is the trivial graph, $T$ a tree and $g_i:\Delta\rightrightarrows T$ distinct maps, then the wedge sum $\bigvee_{\Delta}T$ has a non-trivial reduced closed path that is unique up to cyclic reordering. Thus, by removing a single arc from $\bigvee_{\Delta}T$ we obtain a new tree. \subsection{Pullbacks}\label{topological:pullbacksection} The categorical nature of the pushout construction (ie: Proposition \ref{pushout:categorical}) suggests a ``co-'' version: let $\Lambda_1,\Lambda_2$ and $\Delta$ be graphs and $g_i:\Lambda_i\rightarrow \Delta$ maps of graphs. The {\em pullback\/} $\Lambda_1\prod_\Delta \Lambda_2$ has vertices (resp. edges) the $x_1\times x_2$, $x_i\in V_{\Lambda_i}$ (resp. $x_i\in E_{\Lambda_i}$) such that $g_1(x_1)=g_2(x_2)$, and $s(x_1\times x_2)=s(x_1)\times s(x_2)$, $(x_1\times x_2)^{-1} =x_1^{-1}\times x_2^{-1}$. Taking $\Delta$ to be the trivial graph has the effect of removing the $g_1(x_1)=g_2(x_2)$ conditions and the result is the {\em product\/} $\Lambda_1\prod \Lambda_2$. Thus the pullback $\Lambda_1\prod_\Delta \Lambda_2$ is a subgraph of the product $\Lambda_1\prod\Lambda_2$, but the product will have many more vertices and edges.
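Unlike the pushout, the pullback requires no quotient and can be read off directly from the definition. A Python sketch under an illustrative dict encoding of our own ($s_i$ the start-vertex maps; the remaining dicts are the vertex and edge components of the maps $g_i$ to $\Delta$):

```python
def pullback(V1, E1, s1, V2, E2, s2, gV1, gE1, gV2, gE2):
    """Vertices and edges of the pullback Lambda_1 ∏_Delta Lambda_2: pairs
    of cells with the same image in Delta, with componentwise start map."""
    V = {(x1, x2) for x1 in V1 for x2 in V2 if gV1[x1] == gV2[x2]}
    E = {(e1, e2) for e1 in E1 for e2 in E2 if gE1[e1] == gE2[e2]}
    s = {(e1, e2): (s1[e1], s2[e2]) for (e1, e2) in E}
    return V, E, s
```

With $\Delta$ single vertexed the vertex-pair condition is vacuous and the vertex set is all of $V_{\Lambda_1}\times V_{\Lambda_2}$, as for the product.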
Define maps $t_i:\Lambda_1\prod_\Delta\Lambda_2\rightarrow\Lambda_i$ to be the compositions $\Lambda_1\prod_\Delta\Lambda_2\hookrightarrow \Lambda_1\prod\Lambda_2\rightarrow\Lambda_i$, with the second map the projection $x_1\times x_2\mapsto x_i$. \begin{proposition}\label{topological:pullbacks} The $t_i$ are dimension preserving maps making the diagram below left commute, $$ \begin{pspicture}(6,3) \rput(0,-1){ \rput(-1.5,1.5){ \rput(0,2){$\Lambda_1\prod_\Delta \Lambda_2$} \rput(0,0){$\Lambda_1$} \rput(2,2){$\Lambda_2$} \rput(2,0){$\Delta$} \psline[linewidth=.1mm]{->}(0,1.7)(0,.3) \psline[linewidth=.1mm]{->}(.9,2)(1.7,2) \psline[linewidth=.1mm]{->}(.3,0)(1.7,0) \psline[linewidth=.1mm]{->}(2,1.7)(2,.3) \rput(.25,1){$t_1$} \rput(1.3,1.8){$t_2$} \rput(1,.2){$g_1$} \rput(1.75,1){$g_2$} } \rput(5,1){ \rput(0,2){$\Lambda_1\prod_\Delta \Lambda_2$} \rput(0,0){$\Lambda_1$} \rput(2,2){$\Lambda_2$} \rput(2,0){$\Delta$} \psline[linewidth=.1mm]{->}(0,1.7)(0,.3) \psline[linewidth=.1mm]{->}(.9,2)(1.7,2) \psline[linewidth=.1mm]{->}(.3,0)(1.7,0) \psline[linewidth=.1mm]{->}(2,1.7)(2,.3) \rput(.25,1){$t_1$} \rput(1.3,1.8){$t_2$} \rput(1,.2){$g_1$} \rput(1.75,1){$g_2$} } \rput(4,4){$B$} \psbezier[linewidth=.1mm]{->}(4,3.8)(4,3)(4,2)(4.8,1.2) \psbezier[linewidth=.1mm]{->}(4.2,4)(5,4)(6,4)(6.8,3.2) \psline[linewidth=.1mm]{->}(4.2,3.8)(4.7,3.3) \rput(3.8,2.6){$t'_1$}\rput(5.6,4.1){$t'_2$} } \end{pspicture} $$ Moreover, the pullback is universal in that if $B$, $t'_1,t'_2$ are a graph and maps making such a square commute, then there is a map $B\rightarrow \Lambda_1\prod_\Delta \Lambda_2$ making the diagram above right commute. \end{proposition} In general the pullback need not be connected. If the $g_i:\Lambda_{u_i}\rightarrow\Delta_v$ are pointed maps then $u_1\times u_2$ is a vertex of the pullback, and we may consider the connected component containing $u_1\times u_2$. 
Call this the {\em pointed pullback\/} $(\Lambda_1\prod_\Delta\Lambda_2)_{u_1\times u_2}$, and we then have a pointed version of Proposition \ref{topological:pullbacks}. In most of our usages of the pullback construction, the graph $\Delta$ will be single vertexed, and so the vertex set will just be $V_{\Lambda_1}\times V_{\Lambda_2}$. \subsection{Coverings}\label{topological:coverings} A map $p:\Lambda\rightarrow\Delta$ of graphs is a {\em covering\/} iff (i). $p$ preserves dimension; and (ii). for every vertex $v\in \Lambda$, $p$ is a bijection from the set of edges in $\Lambda$ with start vertex $v$ to the set of edges in $\Delta$ with start vertex $p(v)$. If $p(x)=y$, then one says that $x$ {\em covers\/} $y$, and $y$ {\em lifts\/} to $x$. The set of all lifts of the cell $y$, or the set of all cells covering $y$, is its {\em fiber\/} $\text{fib}_{\Lambda\rightarrow\Delta}(y)$. \begin{proposition}[\cite{Stallings83}*{\S 4.1}] \label{topological:coverings:result100} Let $p:\Lambda_u\rightarrow\Delta_v$ be a covering. \noindent(i). If $\gamma$ is a path in $\Delta$ starting at $v$ then there is a path $\gamma'$ in $\Lambda$ starting at $u$ and covering $\gamma$. Moreover, if $\gamma_1,\gamma_2$ are paths in $\Lambda$ starting at $u$ and covering the same path, then $\gamma_1=\gamma_2$. \noindent(ii). A path in $\Lambda$ covering a spur is itself a spur. Consequently, two paths in $\Lambda$ covering homotopic paths are homotopic. \noindent(iii). If $g:\Gamma_w\rightarrow\Delta_v$ is a map then there is a map $f:\Gamma_w\rightarrow\Lambda_u$ with $g=pf$ if and only if $g^*\pi_1(\Gamma,w)\subset p^*\pi_1(\Lambda,u)$. \noindent(iv). $p^*:\pi_1(\Lambda,u)\rightarrow\pi_1(\Delta,v)$ is injective, and if $u'$ is the terminal vertex of a path $\mu$ starting at $u$, then $p^*\pi_1(\Lambda,u)=gp^*\pi_1(\Lambda,u')g^{-1}$, where $g$ is the homotopy class of $p(\mu)$. \end{proposition} The path $\gamma'$ in (i) is a lift of $\gamma$ to $u$, such lifts being unique by the second part of (i).
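The covering condition is local and finite, so for finite graphs it can be verified mechanically: at each vertex, compare the star with the star of its image. A Python sketch (dict encoding of graphs and of the vertex and edge components of $p$, an illustrative choice of ours):

```python
def is_covering(Vl, El, sl, Vd, Ed, sd, pV, pE):
    """Condition (ii) of the covering definition: at each vertex v of
    Lambda, p restricts to a bijection from the edges starting at v to the
    edges starting at p(v).  (Condition (i), dimension preservation, is
    built into the encoding: pV maps vertices, pE maps edges.)"""
    for v in Vl:
        star_v = [e for e in El if sl[e] == v]
        image = [pE[e] for e in star_v]
        target = {e for e in Ed if sd[e] == pV[v]}
        # injectivity and surjectivity of p on the star of v
        if len(set(image)) != len(image) or set(image) != target:
            return False
    return True
```

For example, the double cover of a single loop by a two-cycle passes this check, while deleting a pair of inverse edges upstairs breaks the bijection at a vertex.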
The combination of these two is called {\em path lifting\/}, while (ii) is {\em spur-lifting\/} and {\em homotopy lifting\/}. In particular, the image under a covering of a reduced path is reduced (whereas, as spurs always map to spurs, the pre-image of a reduced path is reduced under any mapping). Part (iii) is a general lifting criterion that implies in particular that if $\gamma$ is a closed path at $v$ then its homotopy class lies in $p^*\pi_1(\Lambda,u)$ if and only if there is a closed path $\mu$ at $u$ with $p(\mu)=\gamma$. Part (iv) follows immediately from this and homotopy lifting. \begin{lemma} \label{topological:coverings:result200} Let $p:\Lambda\rightarrow\Delta$ be a covering. \noindent(i). If $\Delta$ is connected then $p$ maps the cells of $\Lambda$ surjectively onto the cells of $\Delta$. \noindent(ii). If $\Lambda$ is connected then the fibers of any two cells of $\Delta$ have the same cardinality, called the degree, $\deg(\Lambda\rightarrow\Delta)$, of the covering. \noindent(iii). If $\Lambda,\Delta$ are connected and $\deg(\Lambda\rightarrow\Delta)=1$, then the covering $\Lambda\rightarrow\Delta$ is a homeomorphism. \end{lemma} \begin{proof} In (i), surjectivity on the vertices follows by path lifting and on the edges by definition. Path lifting gives a bijection in (ii) between the fibers of two vertices, and between the fiber of an edge and its start vertex. Part (iii) follows immediately from (i) and (ii). \qed \end{proof} From now on, {\em all\/} coverings will be maps between connected complexes unless stated otherwise. \begin{lemma}\label{topology:coverings:result300} \noindent(i). Let $\Lambda\stackrel{q}{\rightarrow}\Gamma\stackrel{r}{\rightarrow}\Delta$ be maps with $p=rq$. If any two of $p,q$ and $r$ are coverings, then so is the third. \noindent(ii). If a group $G$ acts orientation preservingly and freely on $\Lambda$ then the quotient map $q:\Lambda\rightarrow\Lambda/G$ is a covering.
\end{lemma} Call the coverings $\Lambda\rightarrow\Gamma\rightarrow\Delta$ in (i) {\em intermediate\/} to the covering $\Lambda\rightarrow\Delta$. It follows from the comments following Proposition \ref{topological:coverings:result100} that if $\Gamma_x$ is intermediate to $p:\Lambda_u\rightarrow\Delta_v$ via the covering $r:\Gamma_x\rightarrow\Delta_v$, then $p^*\pi_1(\Lambda,u)\subset r^*\pi_1(\Gamma,x)$. \begin{proof}[of Lemma \ref{topology:coverings:result300}] The freeness of the action in (ii) ensures the injectivity of $q$ on the edges starting at a vertex of $\Lambda$. Part (i) is an easy exercise. \qed \end{proof} \begin{proposition}\label{topology:coverings:result400} Let $\Lambda$ be a graph and $\Upsilon_1,\Upsilon_2\hookrightarrow\Lambda$ subgraphs of the form, $$ \begin{pspicture}(14,1) \rput(4,0){ \rput(3,.5){\BoxedEPSF{free35.eps scaled 750}} \rput(.5,.5){$\Lambda=$} \rput(1.6,.5){$\Upsilon_1$}\rput(4.4,.5){$\Upsilon_2$} \rput(3,.7){$e$} } \rput(14,0.5){$(\dag)$} \end{pspicture} $$ \noindent(i). If $p:\Lambda\rightarrow\Delta$ is a covering with $\Delta$ single vertexed, then the real line is a subgraph $\alpha:\mathcal R\hookrightarrow\Lambda$, with $\alpha(e_0)=e$ and $p\alpha(e_k)=p(e)$ for all $k\in\ams{Z}$. \noindent(ii). If $\Upsilon_1$ is a tree, $p:\Lambda\rightarrow\Delta$, $r:\Gamma\rightarrow\Delta$ coverings, and $\alpha:\Upsilon_2\hookrightarrow\Gamma$ a homeomorphism onto its image, then there is an intermediate covering $\Lambda\stackrel{q}{\rightarrow}\Gamma\stackrel{r}{\rightarrow}\Delta$. \noindent(iii). If $\Psi\rightarrow\Lambda$ is a covering and $\Upsilon_1$ a tree, then $\Psi$ also has the form $(\dag)$ for some subgraphs $\Upsilon'_1,\Upsilon'_2\hookrightarrow\Psi$, with $\Upsilon'_1$ a tree. \end{proposition} \begin{proof} (i). Lift the edge $p(e)$ to the vertex $t(e)$ to get an edge $e_1$ of $\Lambda$. The form of $\Lambda$ prohibits $t(e_1)$ from being any vertex of $\Upsilon_1$, except possibly $s(e)$, in which case $e_1=e^{-1}$.
But then $p(e)^{-1}=p(e^{-1})=p(e_1)=p(e)$, a contradiction. Thus $e_1$ is an edge and $t(e_1)$ a vertex of $\Upsilon_2$, and if $t(e_1)=t(e)$, then the injectivity of $p$ fails at this common vertex (as then both $e^{-1}$ and $e_1^{-1}$ start at $t(e_1)$ and cover $p(e)^{-1}$). We therefore have $t(e_1)\not=t(e)$ and this process can be continued inductively, giving the ``positive'' half of $\mathcal R$ as a subgraph of $\Upsilon_2$. The symmetry of $\Lambda$ gives the negative half as a subgraph of $\Upsilon_1$. For (ii), it suffices, by Lemma \ref{topology:coverings:result300}(i), to find a map $q:\Lambda\rightarrow\Gamma$ with $p=rq$. Let $q$ coincide with $\alpha$ on $\Upsilon_2$. For any vertex of $\Upsilon_1$, take the reduced path to it from $t(e)$, project via $p$ to $\Delta$, and lift to $\alpha t(e)\in\Gamma$. The edges of $\Upsilon_1$ (and $e$) are similar. (iii). Let $v=t(e)$ and $u$ be in the fiber of $v$ via the covering $\Psi\rightarrow\Lambda$. Take a reduced path in $\Upsilon_1\cup\{e\}$ from $v$ to each vertex of this tree and lift to a path at $u$. Let $\Upsilon'_1$ be the union in $\Psi$ of these lifted paths. A closed path in $\Upsilon'_1$ at $u$ covers a closed path at $v$ in $\Upsilon_1$, a tree, hence by spur-lifting, $\Upsilon'_1$ is a tree. If $e_1,e_2$ are edges in the coboundary $\delta\Upsilon'_1$ then they cover edges in the coboundary $\delta\Upsilon_1$, ie: they cover $e$. A reduced path in $\Upsilon'_1$ from $s(e_1)$ to $s(e_2)$ covers a reduced closed path in $\Upsilon_1$ at $s(e)$. As this covered path must be trivial we get $s(e_1)=s(e_2)$, hence $e_1=e_2$. Thus $\Upsilon'_1$ has a single coboundary edge as required. \qed \end{proof} \begin{proposition}\label{topology:coverings:result500} Let $p:\Lambda\rightarrow\Delta$ be a covering and $T\hookrightarrow\Delta$ a tree. Then (i). $p^{-1}(T)$ is a forest. (ii).
If $T_i\hookrightarrow\Lambda, (i\in I)$ are the connected components of $p^{-1}(T)$, then $p$ maps each $T_i$ homeomorphically onto $T$. (iii). There is an induced covering $\Lambda/T_i\rightarrow\Delta/T$ making the diagram, $$ \begin{pspicture}(2,2) \rput(0,0){ \rput(0,2){$\Lambda$} \rput(0,0){$\Delta$} \rput(2,2){$\Lambda/T_i$} \rput(2,0){$\Delta/T$} \psline[linewidth=.1mm]{->}(0,1.7)(0,.3) \psline[linewidth=.1mm]{->}(.3,2)(1.55,2) \psline[linewidth=.1mm]{->}(.3,0)(1.55,0) \psline[linewidth=.1mm]{->}(2,1.7)(2,.3) } \end{pspicture} $$ commute (where the horizontal maps are the quotients) and such that $\deg(\Lambda/T_i\rightarrow\Delta/T)=\deg(\Lambda\rightarrow\Delta)$. \end{proposition} This procedure is independent of the tree $T$: if $T'\hookrightarrow\Delta$ is another tree such that there is a homeomorphism $\alpha:\Delta/T\rightarrow\Delta/T'$ with $\alpha q=q'$ for $q,q':\Delta\rightarrow\Delta/T,\Delta/T'$ the quotient maps, then by Proposition \ref{topological:coverings:result100}(iv), there is a homeomorphism $\Lambda/T_i\rightarrow\Lambda/T'_i$. Typically we will take $T,T'$ to be spanning trees, so that $\Delta/T,\Delta/T'$ are single vertexed with $\text{rk}\,\Delta$ loops, and such an $\alpha$ is easily found. \begin{proof}[of Proposition \ref{topology:coverings:result500}] That $p^{-1}(T)$ is a forest follows by spur-lifting. For (ii), $p$ is injective on the vertices (and hence edges) of $T_i$, as $T$ is a tree and by spur-lifting; surjectivity follows by path lifting. If $q',q$ are the top and bottom quotient maps, define the map $p'$ by $p'q'(x) =qp(x)$ for any cell $q'(x)\in\Lambda/T_i$. Taking a vertex $v\in T$, the degree assertion follows immediately from (ii). \qed \end{proof} A covering $p:\widetilde{\Delta}_u\rightarrow\Delta_v$ is {\em universal\/} iff for any covering $r:\Gamma_w\rightarrow\Delta_v$ there is a covering $q:\widetilde{\Delta}_u\rightarrow\Gamma_w$ with $p=rq$.
Equivalently, $p$ is universal when any other covering of $\Delta$ is intermediate to it. To construct a universal covering, one mimics a standard construction in topology, taking as the vertices the homotopy classes of paths in $\Delta$ starting at $v$. There is an edge $\widetilde{e}$ of $\widetilde{\Delta}$ with start vertex the class of $\gamma_1$ and finish vertex the class of $\gamma_2$ if and only if there is an edge $e$ of $\Delta$ with $\gamma_1e$ homotopic to $\gamma_2$. Define $\widetilde{\Delta}_{[v]}\rightarrow\Delta_v$ by sending the class of $\gamma$ to $t(\gamma)$ and the edge $\widetilde{e}$ described above to $e$. \begin{proposition} \label{topology:coverings:result600} $\widetilde{\Delta}_{[v]}$ is connected, simply connected, and the map $\widetilde{\Delta}_{[v]}\rightarrow\Delta_v$ is a universal covering. \end{proposition} \begin{proof} If $[\gamma]$ is a vertex of $\widetilde{\Delta}_{[v]}$ with $\gamma=e_1e_2\ldots e_k$, then $\widetilde{e}_1\widetilde{e}_2\ldots\widetilde{e}_k$ is a path from $[v]$ to $[\gamma]$ and so $\widetilde{\Delta}_{[v]}$ is connected. That $p$ is a covering is straightforward, and hence $\widetilde{\Delta}_{[v]}$ is simply connected, for $\widetilde{\gamma}=\widetilde{e}_1\widetilde{e}_2\ldots\widetilde{e}_k$ is a closed path at $[v]$ if and only if $e_1e_2\ldots e_k$ is homotopic to $v$, ie: $\gamma= e_1e_2\ldots e_k$ is homotopically trivial in $\Delta$, giving $\widetilde{\gamma}$ homotopically trivial as $p(\widetilde{\gamma})=\gamma$ and by homotopy lifting. If $r:\Gamma_w\rightarrow\Delta_v$ is a covering then define $q:\widetilde{\Delta}_{[v]}\rightarrow\Gamma_w$ by $q[\gamma]=t(\gamma')$ where $\gamma'$ is the lift via $r$ of $\gamma$ to the vertex $w$; if $\widetilde{e}$ is an edge with start vertex $[\gamma]$ then let $q(\widetilde{e})$ be the lift via $r$ of $p(\widetilde{e})$ to the vertex $t(\gamma')$.
It is easy to see that $p=rq$ and hence $q$ is a covering by Lemma \ref{topology:coverings:result300}(i). \qed \end{proof} Many authors, anticipating the Galois correspondence, define a covering to be universal iff the covering graph is simply connected. \subsection{The lattice of intermediate coverings} \label{topology:lattice} Throughout this section $\Lambda,\Delta$ are connected graphs and $p:\Lambda_u\rightarrow\Delta_v$ is some fixed pointed covering. A connected pointed intermediate covering $\Lambda_u{\rightarrow}\Gamma_x{\rightarrow}\Delta_v$ is {\em equivalent\/} to another such, $\Lambda_u{\rightarrow}\Upsilon_{y}{\rightarrow}\Delta_v$, if and only if there is a homeomorphism $\beta:\Gamma_x\rightarrow\Upsilon_{y}$ such that $$ \begin{pspicture}(0,0)(6,1.5) \rput(3,1.5){$\Gamma_x$}\rput(3,0){$\Upsilon_{y}$} \psline[linewidth=.1mm]{->}(3,1.2)(3,.4) \rput(1.8,.75){$\Lambda_u$}\rput(4.2,.75){$\Delta_v$} \psline[linewidth=.1mm]{->}(2.1,.5)(2.7,.14)\psline[linewidth=.1mm]{->}(2.1,.9)(2.7,1.26) \psline[linewidth=.1mm]{->}(3.3,.14)(3.9,.5)\psline[linewidth=.1mm]{->}(3.3,1.26)(3.9,.9) \end{pspicture} $$ commutes. Let $\mathcal{L}(\Lambda_u,\Delta_v)$ be the set of equivalence classes of such connected intermediate coverings. Define $(\Lambda_u{\rightarrow}\Gamma_{x_1}{\rightarrow}\Delta_v)\leq (\Lambda_u{\rightarrow}\Upsilon_{x_2}{\rightarrow}\Delta_v)$, or just $\Gamma_{x_1}\leq\Upsilon_{x_2}$, if and only if there is a covering $s:\Upsilon_{x_2}{\rightarrow}\Gamma_{x_1}$ with $p=r_1sq_2$, where $r_1$ is the covering $\Gamma_{x_1}\rightarrow\Delta_v$ and $q_2$ is $\Lambda_u\rightarrow\Upsilon_{x_2}$. If $\beta_1:\Gamma_{x_1}\rightarrow\Psi_{y_1}$ and $\beta_2:\Upsilon_{x_2}\rightarrow\Phi_{y_2}$ are homeomorphisms realizing equivalent coverings, then $\beta_1s\beta_2^{-1}:\Phi_{y_2}\rightarrow\Psi_{y_1}$ is a covering with $p=r'_1(\beta_1s\beta_2^{-1})q'_2$. Thus, $\leq$ is well defined up to equivalence, giving $\mathcal{L}(\Lambda_u,\Delta_v)$ the structure of a poset.
We will also write $\Gamma_w\in\mathcal{L}(\Lambda_u,\Delta_v)$ for an equivalence class of intermediate coverings, without reference to the intermediate covering maps. Recall that a poset $(\mathcal{L},\leq)$ equipped with a join $\vee$ (or supremum) and meet $\wedge$ (or infimum) is a lattice. A $\hat{0}$ (resp. $\hat{1}$) is an element such that $\hat{0}\leq x$ (resp. $x\leq\hat{1}$) for all $x$, and a lattice isomorphism (resp. anti-isomorphism) $\Theta:\mathcal{L}_1\rightarrow\mathcal{L}_2$ is an order-preserving (resp. order-reversing) bijection whose inverse is also order-preserving (resp. order-reversing). In particular, an isomorphism sends joins to joins (and meets to meets) and an anti-isomorphism sends joins to meets (and meets to joins). A canonical example is the subgroups of a group $G$, ordered by inclusion, and with $A\vee B=\langle A,B\rangle$, $A\wedge B=A\cap B$, $\hat{0}$ the trivial subgroup and $\hat{1}=G$. The remainder of this section is devoted to showing that $\mathcal{L}(\Lambda_u,\Delta_v)$ is a lattice. Let $\Lambda_u{\rightarrow}\Gamma_{x_1}{\rightarrow}\Delta_v$ and $\Lambda_u{\rightarrow}\Upsilon_{x_2}{\rightarrow}\Delta_v$ be intermediate to $p$, and $q$ the quotient map used in the construction of the pushout of the coverings $q_1:\Lambda_u{\rightarrow}\Gamma_{x_1}$ and $q_2:\Lambda_u{\rightarrow}\Upsilon_{x_2}$. Let $x=q(x_1)=q(x_2)$ and $(\Gamma\coprod_\Lambda\Upsilon)_x$ the resulting pointed pushout. \begin{proposition}\label{topological:lattice:result100} (i). We have the intermediate covering $\Lambda_u\stackrel{t_iq_i}{\longrightarrow} (\Gamma\coprod_\Lambda\Upsilon)_x\stackrel{r}{\rightarrow}\Delta_v$, where $r$ is provided by the universality of the pushout. \noindent(ii).
Let $\Psi_{y_1},\Phi_{y_2}\in\mathcal{L}(\Lambda_u,\Delta_v)$ be equivalent to $\Gamma_{x_1},\Upsilon_{x_2}$ with $\beta_1:\Gamma_{x_1}\rightarrow\Psi_{y_1}$, $\beta_2:\Upsilon_{x_2}\rightarrow\Phi_{y_2}$ the corresponding homeomorphisms and $\beta_1\amalg\beta_2:\Gamma\amalg\Upsilon\rightarrow\Psi\amalg\Phi$ (disjoint unions) defined by $\beta_1\amalg\beta_2|_{\Gamma}=\beta_1$ and $\beta_1\amalg\beta_2|_\Upsilon=\beta_2$. Then the map $\beta:(\Gamma\coprod_\Lambda\Upsilon)_x\rightarrow(\Psi\coprod_\Lambda\Phi)_{y}$ defined by $\beta q=q'(\beta_1\amalg\beta_2)$ is a homeomorphism making these pointed pushouts equivalent. \end{proposition} Thus there is a well defined pushout of two elements of $\mathcal{L}(\Lambda_u,\Delta_v)$. As the proof will show, the maps $t_1,t_2:\Gamma_{x_1},\Upsilon_{x_2}\rightarrow (\Gamma\coprod_\Lambda\Upsilon)_x$ are coverings, and so the pushout is a lower bound for $\Gamma_{x_1},\Upsilon_{x_2}\in(\mathcal{L}(\Lambda_u,\Delta_v),\leq)$, and the universality implies that it is an infimum. \begin{proof}[of Proposition \ref{topological:lattice:result100}] (i). If $v,v'\in\Gamma\coprod\Upsilon$ are vertices with $v\sim v'$ and $e$ an edge with start $v$ then, by successively lifting and covering, one can show that there is an edge $e'$ with start $v'$ such that $e\sim e'$. Now, $v_1$ maps via $t_1$ to $[v_1]$ and if $[e']$ an edge starting at this vertex then $s(e')\sim v_1$, and so by the above there is an edge $e$ starting at $v_1$ with $[e']=[e]$. Thus $t_1(e)=[e']$, and so $t_1$ maps the edges starting at $v_1$ surjectively onto those starting at $t_1(v_1)$. If $e,e'$ are edges starting at $v_1$ with $t_1(e)=t_1(e')$, then one gets by induction that $q_1(e)=q_1(e')$, and $q_1$ a covering forces $e=e'$, and thus $t_1$ (similarly $t_2$) is a covering, hence the $t_iq_i: \Lambda_u\rightarrow(\Gamma\coprod_\Lambda\Upsilon)_x$ are too.
The map $r$ is provided by the universality and is a covering by Lemma \ref{topology:coverings:result300}(i). Part (ii) is a tedious but routine diagram chase. \qed \end{proof} Now to pullbacks. With $\Lambda_u{\rightarrow}\Gamma_{x_1}{\rightarrow}\Delta_v$ and $\Lambda_u{\rightarrow}\Upsilon_{x_2}{\rightarrow}\Delta_v$ intermediate to $p$, $x=x_1\times x_2$ is a vertex of the pullback of the coverings $r_1:\Gamma_{x_1}{\rightarrow}\Delta_v$ and $r_2:\Upsilon_{x_2}{\rightarrow}\Delta_v$. Let $(\Gamma\prod_{\Delta}\Upsilon)_x$ be the pointed pullback consisting of the component containing the vertex $x$. \begin{proposition}\label{topological:lattice:result200} (i). We have the intermediate covering $\Lambda_u\stackrel{q}{\rightarrow} (\Gamma\prod_\Delta\Upsilon)_x\stackrel{r_it_i}{\longrightarrow}\Delta_v$, where $q$ is provided by the universality of the pullback. \noindent(ii). Let $\Psi_{y_1},\Phi_{y_2}\in\mathcal{L}(\Lambda_u,\Delta_v)$ be equivalent to $\Gamma_{x_1},\Upsilon_{x_2}$ with $\beta_1,\beta_2$ the corresponding homeomorphisms and $\beta:(\Gamma\prod_\Delta\Upsilon)_x\rightarrow(\Psi\prod_\Delta\Phi)_{y}$ defined by $\beta(x\times y)=\beta_1(x)\times\beta_2(y)$. Then $\beta$ is a homeomorphism making the pointed pullbacks equivalent. \end{proposition} Thus there is a well defined pullback of two elements of $\mathcal{L}(\Lambda_u,\Delta_v)$. Again the proof shows that the maps $t_1,t_2:(\Gamma\prod_\Delta\Upsilon)_x \rightarrow\Gamma_{x_1},\Upsilon_{x_2}$ are coverings, and so the pullback is an upper bound for $\Gamma_{x_1},\Upsilon_{x_2}\in(\mathcal{L}(\Lambda_u,\Delta_v),\leq)$, and the universality implies that it is a supremum. \begin{proof}[of Proposition \ref{topological:lattice:result200}] (i). We show that $t_1$ is a covering; $t_2$ is similar. From $t_1(e_1\times e_2)=e_1$ it is clear that $t_1$ is dimension preserving. For $t_1(u_1\times u_2)=u_1$, let $e_1$ be an edge of $\Gamma$ with $s(e_1)=u_1$.
Then $e_1$ covers $r_1(e_1)$, which lifts via the covering $r_2$ to an edge $e_2$ at $u_2$ in $\Upsilon$, ie: with $r_2(e_2)=r_1(e_1)$. Thus there is an edge $e_1\times e_2$ of the pullback with $s(e_1\times e_2)=u_1\times u_2$ and $t_1(e_1\times e_2)=e_1$, giving the surjectivity of $t_1$ on the edges starting at $u_1$. If $e'_1\times e'_2$ starts at $u_1\times u_2$ and $t_1(e'_1\times e'_2)=e_1$ then $e'_1=e_1$. We have $t_2(e'_1\times e'_2)=e'_2$ starting at $u_2$, and $r_2(e'_2)=r_1t_1(e'_1\times e'_2)=r_1(e_1)=r_2(e_2)$. Thus, as $r_2$ is a cover, we have $e'_2=e_2$ and so $e'_1\times e'_2=e_1\times e_2$, and $t_1$ is indeed a covering. Hence the $r_it_i$ are coverings too, as is $q$ by Lemma \ref{topology:coverings:result300}(i). Part (ii) is analogous to that of Proposition \ref{topological:lattice:result100}. \qed \end{proof} The proof of Proposition \ref{topological:lattice:result200} also shows that the $t_1,t_2:\Gamma\prod_\Delta\Upsilon\rightarrow\Gamma,\Upsilon$ are coverings in the unpointed case. We pause to observe a slight asymmetry to the duality between pushouts and pullbacks: given coverings $r_1,r_2:\Gamma,\Upsilon\rightarrow\Delta$, the $t_1,t_2:\Gamma\prod_\Delta\Upsilon\rightarrow\Gamma,\Upsilon$ are coverings, whereas coverings $q_1,q_2:\Lambda\rightarrow\Gamma,\Upsilon$ do not necessarily give coverings $t_1,t_2:\Gamma,\Upsilon\rightarrow \Gamma\coprod_\Lambda\Upsilon$, unless the $q_i$ are intermediate $\Lambda\rightarrow(\Gamma\text{ or }\Upsilon)\rightarrow\Delta$.
Indeed, take $\Gamma=\Upsilon$ to be two copies of the left-hand graph, $$ \begin{pspicture}(0,0)(13,1) \rput(.4,0){ \rput(-.3,.5){$\Gamma=\Upsilon=$} \rput(1.5,.5){\BoxedEPSF{free31.eps scaled 500}} \rput(8.45,.5){$=\Lambda=$} \rput(6,.5){\BoxedEPSF{free31a.eps scaled 500}} \rput(11,.5){\BoxedEPSF{free31b.eps scaled 500}} \rput(-.7,0){ \psline[linewidth=.1mm]{->}(4.5,.5)(3.5,.5) \rput(4,.7){$q_i$} }} \end{pspicture} $$ with the coverings $q_i:\Lambda{\rightarrow}\Gamma\text{ or }\Upsilon$ described here by drawing the fibers of the vertices; then the $t_i$ provided by the pushout construction are not coverings of the pushout. Summarising the results of this section: \begin{theorem}[lattice of intermediate coverings] $\mathcal{L}(\Lambda_u,\Delta_v)$ is a lattice with join $\Gamma_{x_1}\vee\Upsilon_{x_2}$ the pullback $(\Gamma\prod_\Delta\Upsilon)_{x_1\times x_2}$, meet $\Gamma_{x_1}\wedge\Upsilon_{x_2}$ the pushout $(\Gamma\coprod_\Lambda\Upsilon)_{q(x_i)}$, $\widehat{0}=\Delta_v$ and $\widehat{1}=\Lambda_u$. \end{theorem} The pointing of the covers in this section is essential if one wishes to work with {\em connected\/} intermediate coverings and also have a lattice structure (both of which we do). The problem is the pullback: because it is not in general connected, we need the pointing to tell us which component to choose. \section{The Galois theory of graphs}\label{topology:galoistheory} The ``Galois correspondence'' between coverings of graphs and subgroups of the fundamental group goes back to Reidemeister \cite{Reidemeister28} (see eg: \cite{Collins98}). We provide a slightly alternative formulation that exploits the lattice structure of \S \ref{topology:lattice} and is more in the spirit of classical Galois theory. Throughout this section $p:\Lambda_u\rightarrow\Delta_v$ is a fixed covering with $\Lambda,\Delta$ connected.
An {\em automorphism\/} (or {\em deck transformation\/}) of $p$ is a graph homeomorphism $\alpha:\Lambda_u\rightarrow\Lambda_{u'}$ making the diagram, $$ \begin{pspicture}(0,0)(4,1.5) \rput(0,-.25){ \rput(0.8,1.5){$\Lambda_u$} \rput(3.2,1.5){$\Lambda_{u'}$} \rput(1.95,0.4){$\Delta_v$} \rput(2,1.7){$\alpha$} \rput(1.195,.825){$p$} \rput(2.78,.825){$p$} \psline[linewidth=.1mm]{->}(1.2,1.5)(2.8,1.5) \psline[linewidth=.1mm]{->}(1,1.25)(1.75,.55) \psline[linewidth=.1mm]{->}(3,1.25)(2.25,.55) } \end{pspicture} $$ commute. The automorphisms form a group $\text{Gal}(\Lambda_u,\Delta_v)=\text{Gal}(\Lambda_u\stackrel{p}{\rightarrow}\Delta_v)$, the {\em Galois group\/} of the covering. \begin{lemma}\label{topological:galois:result100} (i). The action of $\text{Gal}(\Lambda_u,\Delta_v)$ on $\Lambda$ is orientation preserving. (ii). The effect of an automorphism $\alpha:\Lambda_u\rightarrow\Lambda_{u'}$ is completely determined by $\alpha(u)=u'$. In particular, the Galois group acts freely on $\Lambda$. \end{lemma} \begin{proof} (i). Both the edge $e$ and $\alpha(e)$ lie in the same fiber of the covering, so that if $\alpha(e)=e^{-1}$ then $p(e)=p(e)^{-1}$, a contradiction, so the Galois group acts without inversions. (ii). If $x$ is a vertex of $\Lambda$ and $\gamma$ a path from $u$ to $x$, then $\alpha(x)$ is the terminal vertex of the lift to $u'$ of the path $p(\gamma)$. The images of the edges are handled similarly. \qed \end{proof} The explicit construction of automorphisms is achieved by the following technical result: \begin{proposition}\label{topological:galois:result200} Let $p:\Lambda_{u}\rightarrow\Delta_v$ be a covering and $u'$ another vertex in the fiber of $v$ such that for any closed path $\gamma$ at $v$ with lifts $\gamma_i$ at $u,u'$, we have $\gamma_1$ closed if and only if $\gamma_2$ closed. For any vertex $x\in\Lambda$ and path $\mu$ from $u$ to $x$, let $\alpha(x)$ be the terminal vertex of the lift at $u'$ of $p(\mu)$. 
Then $x\mapsto\alpha(x)$ extends to an automorphism $\alpha\in\text{Gal}(\Lambda_{u},\Delta_v)$. \end{proposition} In particular, for a covering satisfying the hypotheses of the Proposition, there is an element of the Galois group sending the vertex $u$ to the vertex $u'$. \begin{proof} If $\mu'$ is another path from $u$ to $x$, then $p(\mu)p(\mu')^{-1}$ is a closed path at $v$ that lifts to a closed path (ie: $\mu(\mu')^{-1}$) at $u$, hence to a closed path at $u'$. Thus $\alpha(x)$ is also the terminal vertex of the lift at $u'$ of $p(\mu')$ and $\alpha$ is a well defined map $V_\Lambda\rightarrow V_\Lambda$. To extend $\alpha$ to the edges, let $\mu$ be a path from $u$ to the vertex $s(e)$, and lift the path $p(\mu e)$ to $u'$. Define $\alpha(e)$ to be the lift of $p(e)$ at the terminal vertex of the lift at $u'$ of $p(\mu)$, ie: at the vertex $\alpha s(e)$. It is easy to see that $\alpha:\Lambda\rightarrow\Lambda$ is a surjective dimension-preserving map of graphs, and that $x$ and $\alpha(x)$ lie in the same fiber of the covering, for any cell $x$, whence $p\alpha=p$. It remains to show that $\alpha$ is injective. For vertices $x$ and $x'$, choose paths $\mu,\mu'$ from $u$ to $x$ and $x'$. Then if $\alpha(x)=\alpha(x')$, the lifts at $u'$ of $p(\mu)$ and $p(\mu')$ finish at the same vertex, and so therefore must $p(\mu)$ and $p(\mu')$, as $p$ is well defined at the vertex $\alpha(x)=\alpha(x')$. Thus $\mu,\mu'$ finish at the same vertex and so $x=x'$. For edges $e,e'$ with $\alpha(e)=\alpha(e')$, the injectivity of $\alpha$ on the vertices gives that they must have the same start vertex, and moreover must lie in the same fiber, hence $e=e'$, by the injectivity of coverings on the edges with start a given vertex.
\qed \end{proof} Let $\Lambda_u{\rightarrow}\Gamma_x{\rightarrow}\Delta_v$ be a covering intermediate to $p$ and consider those $\alpha\in\text{Gal}(\Lambda_u,\Delta_v)$ such that $$ \begin{pspicture}(0,0)(4,1.5) \rput(0,-.25){ \rput(0.8,1.5){$\Lambda_u$} \rput(3.2,1.5){$\Lambda_{u'}$} \rput(2,0.4){$\Gamma_x$} \rput(2,1.7){$\alpha$} \rput(1.195,.825){$q$} \rput(2.785,.825){$q$} \psline[linewidth=.1mm]{->}(1.2,1.5)(2.8,1.5) \psline[linewidth=.1mm]{->}(1,1.25)(1.75,.55) \psline[linewidth=.1mm]{->}(3,1.25)(2.25,.55) } \end{pspicture} $$ commutes. This gives a subgroup that can be identified with $\text{Gal}(\Lambda_u\stackrel{q}{\rightarrow}\Gamma_x)$. If $\Lambda_u{\rightarrow}\Upsilon_{y}{\rightarrow}\Delta_v$ is an equivalent covering with homeomorphism $\beta: \Gamma_x\rightarrow\Upsilon_{y}$, then $$ \begin{pspicture}(0,0)(12,1.5) \rput(0,-.25){ \rput(0.8,1.5){$\Lambda_u$} \rput(3.2,1.5){$\Lambda_{u'}$} \rput(2,0.4){$\Gamma_x$} \rput(2,1.7){$\alpha$} \rput(1.195,.825){$q$} \rput(2.785,.825){$q$} \psline[linewidth=.1mm]{->}(1.2,1.5)(2.8,1.5) \psline[linewidth=.1mm]{->}(1,1.25)(1.75,.55) \psline[linewidth=.1mm]{->}(3,1.25)(2.25,.55) } \rput(4.5,.75){commutes} \rput(5.75,.7){$\Leftrightarrow$} \rput(5.75,-.25){ \rput(0.8,1.5){$\Lambda_u$} \rput(3.2,1.5){$\Lambda_{u'}$} \rput(2,0.4){$\Upsilon_y$} \rput(2,1.7){$\alpha$} \rput(1.05,.825){$q'$} \rput(2.785,.825){$q'$} \psline[linewidth=.1mm]{->}(1.2,1.5)(2.8,1.5) \psline[linewidth=.1mm]{->}(1,1.25)(1.75,.55) \psline[linewidth=.1mm]{->}(3,1.25)(2.25,.55) } \rput(10,.75){commutes.} \end{pspicture} $$ Thus $\text{Gal}(\Lambda_u\stackrel{q}{\rightarrow}\Gamma_x) =\text{Gal}(\Lambda_u\stackrel{q'}{\rightarrow}\Upsilon_y)$, and we can associate in a well defined manner a subgroup of the Galois group to an element of the lattice $\mathcal{L}(\Lambda_u,\Delta_v)$. 
On the other hand, if $H\subset\text{Gal}(\Lambda_u,\Delta_v)$, then by Lemma \ref{topological:galois:result100} we may form the quotient $\Lambda/H$, and indeed, \begin{lemma}\label{topological:galois:result400} If $H_1\subset H_2\subset\text{Gal}(\Lambda_u,\Delta_v)$ then, $$ \Lambda_u\stackrel{q_1}{\rightarrow}(\Lambda/H_1)_{q_1(u)} \stackrel{s}{\rightarrow}(\Lambda/H_2)_{q_2(u)} \stackrel{r}{\rightarrow}\Delta_v $$ are all coverings, where $q_i:\Lambda\rightarrow\Lambda/H_i$ are the quotient maps and $s,r$ are defined by $sq_1=q_2$ and $rq_2=p$. \end{lemma} \begin{proof} All the complexes are connected, and the $q_i$ coverings by Lemmas \ref{topological:galois:result100} and \ref{topology:coverings:result300}(ii). Thus $s$ is a covering by Lemma \ref{topology:coverings:result300}(i), and another application, this time to $p=r(sq_1)$, gives that $r$ is a covering. \qed \end{proof} Thus, letting $H_1=H_2=H$, we can associate to $H\subset\text{Gal}(\Lambda_u,\Delta_v)$ the intermediate covering $\Lambda_u{\rightarrow}(\Lambda/H)_{q(u)}{\rightarrow}\Delta_v$, and by passing to its equivalence class, we get an element of the lattice $\mathcal{L}(\Lambda_u,\Delta_v)$ associated to $H$. \begin{proposition}\label{topological:galois:result500} The following are equivalent for a covering $p:\Lambda_u\rightarrow\Delta_v$: \begin{enumerate} \item For all closed paths $\gamma$ at $v$, the lifts of $\gamma$ to each vertex of $\text{fib}_{\Lambda\rightarrow\Delta}(v)$ are either all closed or all non-closed; \item $\text{Gal}(\Lambda_u\stackrel{p}{\rightarrow}\Delta_v)$ acts regularly on $\text{fib}_{\Lambda\rightarrow\Delta}(v)$. \end{enumerate} \end{proposition} When these conditions hold, we call the covering $\Lambda_u\rightarrow\Delta_v$ {\em Galois}, with {\em regular\/} a common alternative, as the second condition of the Proposition makes clear.
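Condition (1) is directly testable in small examples. A degree-$n$ covering of a single-vertexed graph can be encoded (a standard device, though not one used in the text) by a permutation of $\{0,\ldots,n-1\}$ for each loop, the lift of a loop at fiber vertex $i$ ending at the image of $i$; a closed path then lifts to a closed path at $i$ iff the corresponding product of permutations fixes $i$. The Python sketch below checks the resulting all-or-nothing condition on fixed points over words up to a cutoff length, so it is a finite approximation, adequate for small examples (since permutations have finite order, positive words eventually exhaust the group they generate):

```python
from itertools import product

def is_galois(perms, n, max_len=6):
    """All-or-nothing fixed-point test: for every positive word in the
    loop labels, the permutation it induces on the n-point fiber of the
    base vertex must fix either no vertex or all of them.  perms maps each
    loop label to a list giving the endpoint of the lift at each vertex."""
    for k in range(1, max_len + 1):
        for word in product(list(perms), repeat=k):
            image = list(range(n))
            for c in word:
                image = [perms[c][i] for i in image]
            fixed = sum(1 for i in range(n) if image[i] == i)
            if fixed not in (0, n):
                return False
    return True
```

For instance, the cyclic triple cover of a single loop (one 3-cycle) passes, while a cover of a two-loop bouquet whose permutations are two transpositions generating $S_3$ fails, as a transposition fixes exactly one of the three fiber vertices.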
It is clear that if $\Lambda_u\rightarrow\Delta_v$ is Galois then so is $\Lambda_{u'}\rightarrow\Delta_v$ for any other $u'$ in the fiber of $v$; if $\Lambda_u{\rightarrow}\Gamma_w{\rightarrow}\Delta_v$ is intermediate with $\Lambda_u{\rightarrow}\Delta_v$ Galois, then $\Lambda_u{\rightarrow}\Gamma_w$ is Galois. \begin{proof} The equivalence follows immediately from Proposition \ref{topological:galois:result200} and the fact that automorphisms send closed paths to closed paths and non-closed paths to non-closed paths. \qed \end{proof} If the covering $p:\Lambda_u\rightarrow\Delta_v$ is Galois, let $g\in\pi_1(\Delta,v)$ with representative path $\gamma$ and $\alpha_g$ an automorphism in $\text{Gal}(\Lambda_u,\Delta_v)$ that sends $u$ to the terminal vertex $u'$ of the lift of $\gamma$ to $u$. By homotopy lifting, $\alpha_g$ depends only on the vertices $u,u'$ and not on the chosen representative path $\gamma$, and so the map $\pi_1(\Delta,v)\rightarrow\text{Gal}(\Lambda_u,\Delta_v)$ given by $g\mapsto\alpha_g$ is well defined. \begin{proposition}\label{galois:result500} If $p:\Lambda_u\rightarrow\Delta_v$ is Galois then $g\mapsto \alpha_g$ is a surjective homomorphism with kernel $p^*\pi_1(\Lambda,u)$, such that under the induced isomorphism $$ \pi_1(\Delta,v)/p^*\pi_1(\Lambda,u)\rightarrow \text{Gal}(\Lambda_u,\Delta_v), $$ if $\Lambda_u\stackrel{q}{\rightarrow}\Gamma_x\stackrel{r}{\rightarrow}\Delta_v$ is intermediate, then the subgroup $r^*\pi_1(\Gamma,x)/p^*\pi_1(\Lambda,u)$ has image $\text{Gal}(\Lambda_u\stackrel{q}{\rightarrow}\Gamma_x)$. \end{proposition} \begin{proof} It is easy to check that $\alpha_{g_1g_2}=\alpha_{g_1}\alpha_{g_2}$ and so we have a homomorphism. If $\beta\in \text{Gal}(\Lambda_u,\Delta_v)$ then $\beta$ is the unique automorphism sending $u$ to $\beta(u)$. Taking a path $\mu$ in $\Lambda$ from $u$ to $\beta(u)$ thus gives $\beta=\alpha_g$ for $g$ the homotopy class of $p(\mu)$, and hence the homomorphism is surjective. 
Because automorphisms act freely, an element $g$ is in the kernel iff $\alpha_g$ fixes the vertex $u$, and this happens precisely when $g$ can be represented by a path lifting to a closed path at $u$, ie: when $g\in p^*\pi_1(\Lambda,u)$. It is easy to check that this homomorphism maps $r^*\pi_1(\Gamma,x)$ onto $\text{Gal}(\Lambda_u,\Gamma_x)$. \qed \end{proof} \begin{corollary}\label{galois:result600} A covering $\Lambda_u\stackrel{p}{\rightarrow}\Delta_v$ is Galois if and only if $p^*\pi_1(\Lambda,u)$ is a normal subgroup of $\pi_1(\Delta,v)$. \end{corollary} \begin{proof} It remains to show the ``if'' part. Let $u'$ be a vertex in the fiber of $v$, $\gamma$ a closed path at $v$ with lifts $\gamma_1,\gamma_2$ at $u$ and $u'$, and $\mu$ a path from $u$ to $u'$. Let $g$ and $h$ be the homotopy classes of $\gamma$ and $p(\mu)$. Then $\gamma_1$ is closed iff $g\in p^*\pi_1(\Lambda,u)$ $\Leftrightarrow$ $hgh^{-1}\in p^*\pi_1(\Lambda,u)$ by normality, and this in turn happens precisely when $\mu\gamma_2\mu^{-1}$ is closed at $u$, ie: when $\gamma_2$ is closed at $u'$. Thus the covering is Galois. \qed \end{proof} \begin{proposition}\label{topological:galois:result700} Let $p:\Lambda_u\rightarrow\Delta_v$ be a Galois covering. If $H\subset\text{Gal}(\Lambda_u,\Delta_v)$ then, $$ [\text{Gal}(\Lambda_u,\Delta_v):H] =\deg(\Lambda/H_{q(u)}\stackrel{r}{\rightarrow}\Delta_v). $$ \end{proposition} \begin{proof} If a group $G$ acts regularly on a set and $H$ is a subgroup, then the number of $H$-orbits is the index $[G:H]$. The result follows as the $H$-orbits on the fiber (via $p$) of $v$ are precisely the vertices of $\Lambda/H$ covering $v$ (via $r$). \qed \end{proof} In particular the Galois group of a Galois covering has order the degree of the covering. 
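For example, let $\Delta$ be single vertexed with one edge loop $x$ and $\Lambda=C_n$ the $n$-cycle covering $\Delta$. Here $p^*\pi_1(\Lambda,u)$ is the subgroup of $\pi_1(\Delta,v)$ generated by the class of $x^n$, which is normal as $\pi_1(\Delta,v)$ is infinite cyclic, so the covering is Galois; the lift of $x^k$ to any vertex of the fiber is closed precisely when $n$ divides $k$, the rotations of the cycle give a Galois group that is cyclic of order $n$ acting regularly on the fiber, and the order of the Galois group is indeed the degree $n$ of the covering.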
We have now assembled sufficient machinery to prove, \begin{theorem}[Galois correspondence] \label{topological:galois:result800} Let $\Lambda_u{\rightarrow}\Delta_v$ be a Galois covering with $\mathcal{L}(\Lambda_u,\Delta_v)$ the lattice of equivalence classes of intermediate coverings and $\text{Gal}(\Lambda_u,\Delta_v)$ the Galois group. Then the map that associates to $\Lambda_u{\rightarrow}\Gamma_x{\rightarrow}\Delta_v \in \mathcal{L}(\Lambda_u,\Delta_v)$ the subgroup $\text{Gal}(\Lambda_u,\Gamma_x)$ is a lattice anti-isomorphism from $\mathcal{L}(\Lambda_u,\Delta_v)$ to the lattice of subgroups of $\text{Gal}(\Lambda_u,\Delta_v)$. Its inverse is the map associating to $H\subset\text{Gal}(\Lambda_u,\Delta_v)$ the element $\Lambda_u{\rightarrow}\Lambda/H_{q(u)} {\rightarrow}\Delta_v\in \mathcal{L}(\Lambda_u,\Delta_v)$. \end{theorem} \begin{proof} \parshape=9 0pt\hsize 0pt\hsize 0pt\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt\hsize Let $f$ and $g$ be the two maps described in the theorem. It is easiest to work from the point of view of $g$: if $H_1\leq H_2$ in the lattice of subgroups, then the covering $s$ of Lemma \ref{topological:galois:result400} gives $g(H_2)\leq g(H_1)$, so $g$ is an anti-morphism of lattices. If $\Lambda_u\rightarrow\Gamma_x\rightarrow\Delta_v$ is intermediate, then we also have the intermediate covering $\Lambda_u\rightarrow\Lambda_u/\text{Gal}(\Lambda_u,\Gamma_x)\rightarrow\Gamma_x$ with $\Lambda_u\rightarrow\Gamma_x$ Galois. 
By Proposition \ref{topological:galois:result700}, the covering $\Lambda_u/\text{Gal}(\Lambda_u,\Gamma_x)\rightarrow\Gamma_x$ has degree $1$, hence is a homeomorphism (Lemma \ref{topological:coverings:result200}(iii)) and we have the diagram at right, \vadjust{ \smash{\lower -2pt \llap{ \begin{pspicture}(0,0)(4,1.5) \rput(-1,0.2){ \rput(3,1.5){$\Lambda/\text{Gal}(\Lambda_u,\Gamma_x)$}\rput(3,0){$\Gamma_{x}$} \psline[linewidth=.1mm]{->}(3,1.2)(3,.4) \rput(1.3,.75){$\Lambda_u$}\rput(4.7,.75){$\Delta_v$} \psline[linewidth=.1mm]{->}(1.6,.5)(2.7,.14) \rput(-.5,0){\psline[linewidth=.1mm]{->}(2.1,.9)(2.7,1.26)} \psline[linewidth=.1mm]{->}(3.3,.14)(4.4,.5) \rput(.5,0){\psline[linewidth=.1mm]{->}(3.3,1.26)(3.9,.9)} } \end{pspicture} }}}\ignorespaces with the whole square and the left triangle commuting by intermediacy, hence the right triangle commuting as well. Thus, the intermediate coverings $\Lambda_u\rightarrow\Gamma_x\rightarrow\Delta_v$ and $\Lambda_u\rightarrow\Lambda_u/\text{Gal}(\Lambda_u,\Gamma_x)\rightarrow\Delta_v$ are equivalent, and we have $gf=\text{id}$. If $H\subset\text{Gal}(\Lambda_u,\Delta_v)$ and $q:\Lambda\rightarrow\Lambda/H$ the quotient map, then $q\alpha=q$ for any $\alpha\in H$ and so $H\subset\text{Gal}(\Lambda_u,(\Lambda/H)_{q(u)})$ with the covering $\Lambda_u\rightarrow\Lambda/H_{q(u)}$ intermediate, hence Galois. Proposition \ref{topological:galois:result700} gives the index of $H$ in $\text{Gal}(\Lambda_u,(\Lambda/H)_{q(u)})$ to be the degree of the covering $\Lambda/H_{q(u)}\rightarrow\Lambda/H_{q(u)}$, which is $1$, ie: $H= \text{Gal}(\Lambda_u,(\Lambda/H)_{q(u)})$, and we have $fg=\text{id}$. 
\qed \end{proof} As lattice anti-isomorphisms send joins to meets and meets to joins we have as an immediate corollary that, \begin{corollary}\label{topological:galois:result900} Let $\Lambda_u{\rightarrow}\Delta_v$ be Galois with $\Lambda_u\rightarrow\Gamma_{x}\rightarrow\Delta_v$ and $\Lambda_u\rightarrow\Upsilon_{y}\rightarrow\Delta_v$ in the lattice $\mathcal{L}(\Lambda_u,\Delta_v)$ and $H_1,H_2\subset \text{Gal}(\Lambda_u,\Delta_v)$. Then, \begin{align*} \text{Gal}(\Lambda_u,(\Gamma\prod_\Delta\Upsilon)_z) &=\text{Gal}(\Lambda_u,\Gamma_{x})\cap\text{Gal}(\Lambda_u,\Upsilon_{y}),\\ \text{Gal}(\Lambda_u,(\Gamma\coprod_\Lambda\Upsilon)_z)&= \langle\text{Gal}(\Lambda_u,\Gamma_{x}),\text{Gal}(\Lambda_u,\Upsilon_{y})\rangle, \end{align*} and the intermediate coverings, \begin{align*} \Lambda_u\rightarrow\Lambda/\langle H_1,H_2\rangle_{t(u)}\rightarrow\Delta_v \text{ and } \Lambda_u\rightarrow(\Lambda/H_1\coprod_\Lambda \Lambda/H_2)_{qq_i(u)} \rightarrow\Delta_v,\\ \Lambda_u\rightarrow\Lambda/(H_1\cap H_2)_{t(u)}\rightarrow\Delta_v\text{ and } \Lambda_u\rightarrow(\Lambda/H_1\prod_\Delta\Lambda/H_2)_w\rightarrow\Delta_v \end{align*} are equivalent (where $t:\Lambda\rightarrow\Lambda/\langle H_1,H_2\rangle$ or $\Lambda/(H_1\cap H_2)$, $q_i:\Lambda\rightarrow\Lambda/H_i$ and $q$ the quotient from the pushout). \end{corollary} (This result is essentially Theorems 4.3 and 5.5 of \cite{Stallings83}, restated in our terms.) The universal cover $\widetilde{\Delta}_{u}{\rightarrow}\Delta_v$ ($u=[v]$) is Galois by Proposition \ref{topology:coverings:result600} and Corollary \ref{galois:result600}, and by Proposition \ref{galois:result500} there is an isomorphism $$ \varphi:\pi_1(\Delta,v)\rightarrow\text{Gal}(\widetilde{\Delta}_{u},\Delta_v) $$ such that if $\widetilde{\Delta}_{u}{\rightarrow}\Gamma_x \stackrel{r}{\rightarrow}\Delta_v$ is intermediate, then the subgroup $r^*\pi_1(\Gamma,x)$ has image $\text{Gal}(\widetilde{\Delta}_{u},\Gamma_x)$. 
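In the simplest case, if $\Delta$ is single vertexed with one edge loop then $\widetilde{\Delta}$ is the real line $\mathcal R$ and $\pi_1(\Delta,v)$ is infinite cyclic, generated by the class $g$ of the loop. The intermediate coverings $\mathcal R\rightarrow C_n\stackrel{r}{\rightarrow}\Delta$, with $C_n$ the $n$-cycle, satisfy $r^*\pi_1(C_n,x)=\langle g^n\rangle$, and $C_m$ lies above $C_n$ in the lattice $\mathcal{L}(\widetilde{\Delta}_u,\Delta_v)$ precisely when $n$ divides $m$, ie: when $\langle g^m\rangle\subset\langle g^n\rangle$.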
Moreover, two intermediate coverings, $$ \widetilde{\Delta}_{u}{\rightarrow}\Gamma_x \stackrel{r}{\rightarrow}\Delta_v \text{ and } \widetilde{\Delta}_{u}{\rightarrow}\Upsilon_{y}\stackrel{r'}{\rightarrow} \Delta_v, $$ are equivalent if and only if there is a homeomorphism $\beta:\Gamma_x\rightarrow\Upsilon_{y}$ with $r=r'\beta$. We thus obtain the more familiar version of the Galois correspondence, as a special case of Theorem \ref{topological:galois:result800}: \begin{corollary}[Galois correspondence for the universal cover] \label{topological:galois:result1000} The map that associates to a covering $r:\Gamma_x\rightarrow\Delta_v$ the subgroup $r^*\pi_1(\Gamma,x)$ is a lattice anti-isomorphism from $\mathcal{L}(\widetilde{\Delta}_{u},\Delta_v)$ to the lattice of subgroups of $\pi_1(\Delta,v)$ that sends Galois covers to normal subgroups. Its inverse associates to $H\subset\pi_1(\Delta,v)$ the covering $\widetilde{\Delta}/\varphi(H)_{q(u)}\rightarrow\Delta_v$. \end{corollary} We end this section by showing that the excision of trees has little effect on the lattice $\mathcal{L}(\Lambda,\Delta)$. Let $p:\Lambda_u\rightarrow\Delta_v$ be a covering, $T\hookrightarrow\Delta$ a spanning tree, $T_i\hookrightarrow\Lambda$ the components of $p^{-1}(T)$ and $p:(\Lambda/T_i)_{q(u)}\rightarrow(\Delta/T)_{q(v)}$ the induced covering (where we have (ab)used $q$ for both quotients and $p$ for both coverings). \begin{theorem}[lattice excision] \label{galois:lattice:excisetrees} There is a degree and rank preserving isomorphism of lattices $$ \mathcal L(\Lambda, \Delta)\rightarrow \mathcal L(\Lambda/T_i, \Delta/T), $$ that sends the equivalence class of $\Lambda_u\rightarrow\Gamma_{x}\stackrel{r}{\rightarrow}\Delta_v$ to the equivalence class of $\Lambda/T_i\rightarrow\Gamma/T'_i\rightarrow\Delta/T$ (with $T'_i\hookrightarrow\Gamma$ the components of $r^{-1}(T)$) and Galois coverings to Galois coverings. 
\end{theorem} This result could have been proved directly, and messily, at the end of \S \ref{topology:lattice}; instead we use the Galois correspondence. \begin{proof} The quotient $q:\Delta\rightarrow\Delta/T$ induces an isomorphism $q^*:\pi_1(\Delta,v)\rightarrow\pi_1(\Delta/T,q(v))$ where $(qp)^*=(pq)^*$ by the commutativity of the diagram in Proposition \ref{topology:coverings:result500}. Thus $q^*p^*\pi_1(\Lambda,u)=p^*\pi_1(\Lambda/T_i,q(u))$ giving that $\Lambda\rightarrow\Delta$ is Galois iff $\Lambda/T_i\rightarrow\Delta/T$ is Galois by Corollary \ref{galois:result600}, and an isomorphism $\text{Gal}(\Lambda,\Delta)\rightarrow\text{Gal}(\Lambda/T_i,\Delta/T)$ by Proposition \ref{galois:result500} (leaving off the pointings for clarity). This in turn induces an isomorphism $\mathcal{L}_1\rightarrow\mathcal{L}_2$ between the subgroup lattices of these two groups, so that two applications of the Galois correspondence give $$ \mathcal L(\Lambda,\Delta)\rightarrow\mathcal L_1\rightarrow\mathcal L_2\rightarrow\mathcal L(\Lambda/T_i,\Delta/T), $$ a composition of an isomorphism and two anti-isomorphisms, hence the result in the case that $\Lambda\rightarrow\Delta$ is Galois. If $\Lambda_u\rightarrow\Gamma_{x}\stackrel{r}{\rightarrow}\Delta_v$ is intermediate then $q^*$ sends the subgroup $r^*\pi_1(\Gamma,x)$ to $r^*\pi_1(\Gamma/T'_i,q(x))$ and so the isomorphism of Galois groups sends $\text{Gal}(\Lambda,\Gamma)$ to $\text{Gal}(\Lambda/T_i,\Gamma/T'_i)$. This gives the desired image of the intermediate covering, but also, if $\Lambda\rightarrow\Delta$ is not Galois, then $\mathcal L(\Lambda,\Delta)$ embeds as a sublattice of $\mathcal L(\widetilde{\Delta},\Delta)$, sent to the sublattice $\mathcal L(\Lambda/T_i,\Delta/T)$ via the result applied to the Galois covering $\widetilde{\Delta}\rightarrow\Delta$. 
\qed \end{proof} Thus in particular, there are homeomorphisms \begin{align*} (\Gamma\prod_\Delta \Upsilon)/T_k&\rightarrow (\Gamma/T_{1j})\prod_{\Delta/T}(\Upsilon/T_{2j}),\\ (\Gamma\coprod_\Lambda \Upsilon)/T_k&\rightarrow (\Gamma/T_{1j})\coprod_{\Lambda/T_i}(\Upsilon/T_{2j}), \end{align*} where $\Gamma,\Upsilon$ are intermediate to $\Lambda\rightarrow\Delta$, and the trees $T_{1j},T_{2j},T_k$ are the components of the preimages of $T$ via the various coverings. \section{Graphs of finite rank} \label{section:invariants} This section is devoted to a more detailed study of the form of those covering graphs $\Lambda\rightarrow\Delta$ where $\text{rk}\,\Lambda<\infty$. \begin{lemma}\label{ranks100} Let $\Gamma$ be a connected graph. (i). If $\text{rk}\,\Gamma<\infty$ and $\Theta$ the trivial graph, then $\text{rk}\,\bigvee_\Theta\Gamma=\text{rk}\,\Gamma+1$. (ii). If $\Gamma_1,\Gamma_2$ are connected of finite rank, and $\Theta$ finite, then $$ \text{rk}\,\biggl(\Gamma_1\bigvee_{\Theta}\Gamma_2\biggr)=|V_\Theta|-1+\sum\text{rk}\,\Gamma_i. $$ \end{lemma} \begin{proof} Part (i) follows from the comments at the end of \S \ref{topological:pushouts}, and (ii) by induction on $|\Theta|$ and (i). \qed \end{proof} \begin{proposition}\label{finite_rank_characterisation100} A connected graph $\Lambda$ has finite rank if and only if $\Lambda$ decomposes as a wedge sum $\Lambda=\Gamma\bigvee_\Theta\Phi$ with $\Gamma$ finite, locally finite, connected, $\Theta$ finite, $\Phi$ a forest, and no two vertices of the image of $\Theta\hookrightarrow\Phi$ lying in the same component. \end{proposition} \begin{proof} If $\Lambda$ has such a decomposition then $\Phi$ necessarily has finitely many components, and that $\Lambda$ has finite rank follows by inductively applying Lemma \ref{ranks100}. For the converse, fix a basepoint vertex $v$ and spanning tree $T$ so that there is a finite set $P$ of arcs of $\Lambda$ not in $T$. Take paths in $T$ from $v$ to the start and terminal vertices of the arcs of $P$. 
Let $\Gamma$ be the union of the edges in $P$ and these paths and let $\Phi$ be the result of removing from $T$ the edges in the paths. \qed \end{proof} \begin{lemma}\label{invariants:result400} A connected graph $\Lambda$ is of finite rank if and only if for any vertex $v$, the spine $\widehat{\Lambda}_v$ is finite, locally finite. \end{lemma} \begin{proof} If $\Lambda$ has finite rank then we have the wedge sum decomposition of Proposition \ref{finite_rank_characterisation100}. If $e$ is an edge not contained in $\Gamma$ and $\gamma$ a closed path at $v$ containing $e$, then $e$ is contained in a tree component $T$ of $\Phi$. As this component is wedged onto $\Gamma$ at a single vertex, that part of $\gamma$ contained in $T$ is a closed path, hence contains a spur. Thus $e$ is contained in no closed reduced path at $v$ and so the spine $\widehat{\Lambda}$ is a subgraph of $\Gamma$, hence finite, locally finite. Conversely, a finite spine has finite rank, hence so does $\Lambda$ by Lemma \ref{topological:spines:result100}(i). \qed \end{proof} \begin{proposition} \label{finite_rank_characterisation200} Let $\Lambda$ be a connected graph, $\Gamma\hookrightarrow\Lambda$ a connected subgraph and $v\in\Gamma$ a vertex such that every closed reduced path at $v$ in $\Lambda$ is contained in $\Gamma$. Then $\Lambda$ has a wedge sum decomposition $\Lambda=\Gamma\bigvee_\Theta\Phi$ with $\Phi$ a forest and no two vertices of the image of $\Theta\hookrightarrow\Phi$ lying in the same component. \end{proposition} \begin{proof} Consider an edge $e$ of $\Lambda\setminus\Gamma$ having at least one of its end vertices $s(e)$ or $t(e)$, in $\Gamma$. For definiteness we can assume, by relabeling the edges in the arc $\overline{e}$, that it is $s(e)$ that is a vertex of $\Gamma$. 
If $t(e)\in\Gamma$ then by traversing a reduced path in $\Gamma$ from $v$ to $s(e)$, crossing $e$ and a reduced path in $\Gamma$ from $t(e)$ to $v$, we get a closed reduced path not contained in $\Gamma$, a contradiction. Thus $t(e)\not\in\Gamma$. Let $T_e$ be the union of all the reduced paths in $\Lambda\setminus\{e\}$ starting at $t(e)$, so we have the situation as in (a): $$ \begin{pspicture}(0,0)(13,3) \rput(1.5,1.7){\BoxedEPSF{free23a.eps scaled 650}} \rput(0.1,1.7){$\Gamma$} \rput(2.35,1.7){$T_e$}\rput(1.1,1.95){$e$} \rput(1.5,0){(a)} \rput(6.5,1.7){\BoxedEPSF{free23b.eps scaled 650}} \rput(5.1,1.7){$\Gamma$} \rput(7.4,1.7){$T_e$} \rput(6.1,2.1){$e$}\rput(6.1,1.3){$e'$} \rput(6.5,0){(b)} \rput(11.5,1.7){\BoxedEPSF{free23c.eps scaled 650}} \rput(10.1,1.7){$\Gamma$} \rput(12.4,1.7){$T_e$} \rput(11.1,2.3){$e$}\rput(11.1,1.1){$e'$} \rput(11.5,0){(c)} \end{pspicture} $$ If $\gamma$ is a non-trivial closed path in $T_e$ starting at $t(e)$, then a path from $v$ to $t(e)$, traversing $\gamma$, and going the same way back to $v$ cannot be reduced. But the only place a spur can occur is in $\gamma$ and so $T_e$ is a tree. If $e'$ is another edge of $\Lambda\setminus\Gamma$ with $s(e')\in\Gamma$ then we claim that neither of the two situations (b) and (c) above can occur, ie: $t(e')$ is not a vertex of $T_e$. For otherwise, a reduced path in $T_e$ from $t(e)$ to $t(e')$ will give a reduced closed path at $v$ not in $\Gamma$. Thus, another edge $e'$ yields a tree $T_{e'}$ defined like $T_e$, but disjoint from it. Each component of $\Phi$ is thus obtained this way. \qed \end{proof} In particular we have such a decomposition involving a spine, and so $\Lambda$ is made up of its spine at some vertex, together with a collection of trees, each connected to $\widehat{\Lambda}_v$ by a single edge. 
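For example, if $\Lambda$ is a cycle with a finite path wedged onto one of its vertices, then any closed reduced path at a vertex $v$ of the cycle must remain on the cycle, as any excursion into the attached path returns the way it came and so creates a spur. The spine $\widehat{\Lambda}_v$ is thus the cycle itself, and the decomposition wedges the attached path, a tree, onto the spine at a single vertex.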
If $\Lambda$ has finite rank then $\widehat{\Lambda}$ and $\Theta$ are finite, and we have \begin{equation}\label{finite:rank200} \Lambda=\biggl(\cdots\biggl(\biggl(\widehat{\Lambda}_v \bigvee_{\Theta_1} T_1\biggr) \bigvee_{\Theta_2} T_2\biggr) \cdots\biggr)\bigvee_{\Theta_k} T_k, \end{equation} with the $\Theta_i$ single vertices, embedded in the spine via $\Theta_i\hookrightarrow\widehat{\Lambda}_v$, and the images of the $\Theta_i\hookrightarrow T_i$ having valency one. Moreover, if $\Lambda\rightarrow\Delta$ is a covering with $\Delta$ single vertexed and $\Lambda$ of finite rank, then by Proposition \ref{topology:coverings:result400}(i), each tree $T_i$ realizes an embedding $\mathcal R\hookrightarrow\Lambda$ of the real line in $\Lambda$, and as the spine is finite, the trees are thus paired $$ \begin{pspicture}(0,0)(14,2) \rput(3,0){ \rput(4,1){\BoxedEPSF{free11a.eps scaled 650}} \rput(4,1.7){$\widehat{\Lambda}$} \rput(3.75,.9){$\gamma$} \rput(2.2,1.3){$e_1$}\rput(5.6,1.3){$e_2$} \rput(1,1){$T_{e_1}$}\rput(7,1){$T_{e_2}$} } \rput(14,1){$(\ddag)$} \end{pspicture} $$ with the $e_i$ (and indeed all the edges in the path $\mathcal{R}\hookrightarrow\Lambda$) in the same fiber of the covering. This pairing will play an important role in \S \ref{section:pullbacks}. \begin{corollary}\label{finiterank:result500} Let $\Lambda\rightarrow\Delta$ be a covering with $\Delta$ non-trivial, single vertexed and $\text{rk}\,\Lambda<\infty$. Then $\deg(\Lambda\rightarrow\Delta)<\infty$ if and only if $\Lambda=\widehat{\Lambda}_v$. \end{corollary} \begin{proof} If $\Lambda$ is strictly larger than $\widehat{\Lambda}_v$ then one of the trees $T_i$ in the decomposition (\ref{finite:rank200}) is non-trivial and by Proposition \ref{topology:coverings:result400}(i) we get the real line $\mathcal R\hookrightarrow\Lambda$, with image in the fiber of an edge, contradicting the finiteness of the degree. The converse follows from Lemma \ref{invariants:result400}. 
\qed \end{proof} \begin{proposition} \label{finiterank:result600} Let $\Lambda\rightarrow\Delta$ be a covering with (i). $\text{rk}\,\Delta>1$, (ii). $\text{rk}\,\Lambda<\infty$, and (iii). for any intermediate covering $\Lambda\rightarrow\Gamma\rightarrow\Delta$ we have $\text{rk}\,\Gamma<\infty$. Then $\deg(\Lambda\rightarrow\Delta)<\infty$. \end{proposition} The covering $\mathcal R\rightarrow\Delta$ of a single vertexed $\Delta$ of rank $1$ by the real line shows why the $\text{rk}\,\Delta>1$ condition cannot be dropped. \begin{proof} By lattice excision, Theorem \ref{galois:lattice:excisetrees}, we may pass to the $\Delta$ single vertexed case while preserving (i)-(iii). Establishing finiteness of the degree here and passing back to the general $\Delta$ will give the result. If the degree of the covering $\Lambda\rightarrow\Delta$ is infinite for $\Delta$ single vertexed, then by Corollary \ref{finiterank:result500}, in the decomposition (\ref{finite:rank200}) for $\Lambda$, one of the trees is non-empty and $\Lambda$ has the form of the graph in Proposition \ref{topology:coverings:result400} with this non-empty tree the union of the edge $e$ and $\Upsilon_2$. Let $\Gamma$ be the graph defined as follows: take the union of $\Upsilon_1$, the edge $e$ and $\alpha(\mathcal R)\cap\Upsilon_2$, where $\alpha(\mathcal{R})$ is the embedding of the real line given by Proposition \ref{topology:coverings:result400}(i). At each vertex of $\alpha(\mathcal R)\cap\Upsilon_2$ place $\text{rk}\,\Delta-1$ edge loops: $$ \begin{pspicture}(0,0)(14,2) \rput(2,0){ \rput(2,1){\BoxedEPSF{free37.eps scaled 650}} \rput(8,1){\BoxedEPSF{free37a.eps scaled 650}} \rput(.8,1){$\Upsilon_1$}\rput(6.75,1){$\Upsilon_1$} \rput(-.2,1.5){$\Lambda$}\rput(5.8,1.5){$\Gamma$} \rput(9,.7){$\alpha(\mathcal{R})\cap\Upsilon_2$} \rput(10.2,1.3){$\cdots$} } \end{pspicture} $$ (this picture depicting the $\text{rk}\,\Delta=2$ case). 
Then there is an obvious covering $\Gamma\rightarrow\Delta$ so that by Proposition \ref{topology:coverings:result400}(ii) we have an intermediate covering $\Lambda\rightarrow\Gamma\rightarrow\Delta$. Equally obviously, $\Gamma$ has infinite rank, contradicting (iii). Thus, $\deg(\Lambda\rightarrow\Delta)<\infty$. \qed \end{proof} \begin{proposition} \label{finiterank:result700} Let $\Psi\rightarrow\Lambda\rightarrow\Delta$ be coverings with $\text{rk}\,\Lambda<\infty$, $\Psi\rightarrow\Delta$ Galois, and $\Psi$ not simply connected. Then $\deg(\Lambda\rightarrow\Delta)<\infty$. \end{proposition} The idea of the proof is that if the degree is infinite, then $\Lambda$ has a hanging tree in its spine decomposition, and so $\Psi$ does too. But $\Psi$ should look the same at every point, hence {\em is\/} a tree. \begin{proof} Apply lattice excision to $\mathcal L(\Psi,\Delta)$, and as $\pi_1(\Psi,u)$ is unaffected by the excision of trees, we may assume that $\Delta$ is single vertexed. Suppose that $\deg(\Lambda\rightarrow\Delta)$ is infinite. Then the spine decomposition for $\Lambda$ has an infinite tree, and $\Lambda$ has the form of Proposition \ref{topology:coverings:result400}. Thus $\Psi$ does too, by part (iii) of this Proposition, with subgraphs $\Upsilon'_i \hookrightarrow\Psi$, edge $e'$ and $\Upsilon'_1$ a tree. Take a closed reduced path $\gamma$ in $\Upsilon'_2$, and choose a vertex $u_1$ of $\Upsilon'_1$ such that the reduced path from $u_1$ to $s(e')$ has at least as many edges as $\gamma$. Project $\gamma$ via the covering $\Psi\rightarrow \Delta$ to a closed reduced path, and then lift to $u_1$. The result is reduced, closed by Proposition \ref{topological:galois:result500}, and entirely contained in the tree $\Upsilon'_1$, hence trivial. Thus $\gamma$ is also trivial so that $\Upsilon'_2$ is a tree and $\Psi$ is simply connected, contradicting the hypothesis. 
\qed \end{proof} \begin{proposition} \label{finiterank:result800} Let $\Lambda_u\rightarrow\Delta_v$ be a covering with $\text{rk}\,\Lambda<\infty$ and $\gamma$ a non-trivial reduced closed path at $v$ lifting to a non-closed path at $u$. Then there is an intermediate covering $\Lambda_u\rightarrow\Gamma_w\rightarrow\Delta_v$ with $\deg(\Gamma\rightarrow\Delta)$ finite and $\gamma$ lifting to a non-closed path at $w$. \end{proposition} Stallings shows something very similar \cite{Stallings83}*{Theorem 6.1} starting from a finite immersion rather than a covering. As the proof shows, the path $\gamma$ in Proposition \ref{finiterank:result800} can be replaced by finitely many such paths. Moreover, the intermediate $\Gamma$ constructed has the property that any set of Schreier generators for $\pi_1(\Lambda,u)$ can be extended to a set of Schreier generators for $\pi_1(\Gamma,w)$. \begin{proof} If $T\hookrightarrow\Delta$ is a spanning tree and $q:\Delta\rightarrow\Delta/T$ then $\gamma$ cannot be contained in $T$, and so $q(\gamma)$ is non-trivial, closed and reduced. If the lift of $q(\gamma)$ to $\Lambda/T_i$ is closed then the lift of $\gamma$ to $\Lambda$ has start and finish vertices that lie in the same component $T_i$ of $p^{-1}(T)$; as this component is mapped homeomorphically onto $T$ by the covering, these two vertices coincide, making the lift of $\gamma$ closed, a contradiction. Thus we may apply lattice excision and pass to the single vertexed case while maintaining $\gamma$ and its properties. Moreover, the conclusion in this case gives the result in general as closed paths go to closed paths when excising trees. If the lift $\gamma_1$ of $\gamma$ at $u$ is not contained in the spine $\widehat{\Lambda}_u$, then its terminal vertex lies in a tree $T_{e_i}$ of the spine decomposition $(\ddag)$. 
By adding an edge if necessary to $\widehat{\Lambda}_u\cup\gamma_1$, we obtain a finite subgraph whose coboundary edges are paired, with the edges in each pair covering the same edge in $\Delta$, as below left: $$ \begin{pspicture}(0,0)(14,1) \rput(4.05,.5){$\widehat{\Lambda}_u\cup\gamma_1$} \rput(10,.2){$\Gamma$} \rput(4,.5){\BoxedEPSF{free33b.eps scaled 740}} \rput(10,.5){\BoxedEPSF{free33a.eps scaled 740}} \end{pspicture} $$ (if the lift is contained in the spine, take $\widehat{\Lambda}_u$ itself). In any case, let $\Gamma$ be $\widehat{\Lambda}_u\cup\gamma_1$ together with a single edge replacing each pair as above right. Restricting the covering $\Lambda\rightarrow\Delta$ to $\widehat{\Lambda}_u\cup\gamma_1$ and mapping the new edges to the common image of the old edge pairs gives a finite covering $\Gamma\rightarrow\Delta$ and hence by Lemma \ref{topology:coverings:result300}(i) an intermediate covering $\Lambda\stackrel{q}{\rightarrow}\Gamma{\rightarrow}\Delta$, with $q(\gamma_1)$ non-closed at $q(u)$. \qed \end{proof} For the rest of this section we investigate the rank implications of the decomposition (\ref{finite:rank200}) and the pairing $(\ddag)$ in a special case. Suppose $\Lambda\rightarrow\Delta$ is a covering with $\Delta$ single vertexed, $\text{rk}\,\Delta=2$, $\Lambda$ non-simply connected and $\text{rk}\,\Lambda<\infty$. Let $x_i^{\pm 1}, (1\leq i\leq 2)$ be the edge loops of $\Delta$ and fix a spine so we have the decomposition (\ref{finite:rank200}). An {\em extended spine\/} for such a $\Lambda$ is a connected subgraph $\Gamma\hookrightarrow\Lambda$ obtained by adding finitely many edges to a spine, so that every vertex of $\Gamma$ is incident with either zero or three edges in its coboundary $\delta\Gamma$. It is always possible to find an extended spine: take the union of the spine $\widehat{\Lambda}_u$ and each edge $e\in\delta\widehat{\Lambda}_u$ in its coboundary. 
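For example, take $\Delta$ single vertexed with edge loops $x_1,x_2$ and $\Lambda$ the covering corresponding to the infinite cyclic subgroup of $\pi_1(\Delta,v)$ generated by the class of $x_1x_2$. The spine $\widehat{\Lambda}_u$ is a $2$-cycle with vertices $w_0,w_1$, each incident with two edges of the coboundary $\delta\widehat{\Lambda}_u$, so the spine itself is not an extended spine. Adding these four coboundary edges gives an extended spine $\Gamma$ with six vertices: $w_0$ and $w_1$ are now incident with zero edges of $\delta\Gamma$, while each of the four new vertices has valency one in $\Gamma$, hence is incident with three edges of $\delta\Gamma$.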
Observe that $\Gamma$ is finite and the decomposition (\ref{finite:rank200}) gives $\text{rk}\,\Gamma=\text{rk}\,\widehat{\Lambda}_u=\text{rk}\,\Lambda$. Call a vertex of the extended spine $\Gamma$ {\em interior\/} (respectively {\em boundary\/}) when it is incident with zero (resp. three) edges in $\delta\Gamma$. We have the pairing of trees $(\ddag)$ for an extended spine, so that each boundary vertex $v_1$ is paired with another $v_2$, $$ \begin{pspicture}(0,0)(14,2) \rput(3,0){ \rput(4,1){\BoxedEPSF{free11b.eps scaled 650}} \rput(4,1.7){$\Gamma$} \rput(3.8,.9){$\gamma$} \rput(2.2,1.25){$e_1$}\rput(5.8,1.25){$e_2$} \rput(3.1,1.25){$v_1$}\rput(4.9,1.25){$v_2$} \rput(1,1){$T_{e_1}$}\rput(7,1){$T_{e_2}$} } \rput(14,1){$(*)$} \end{pspicture} $$ with $e_1,e_2$ and all the edges in the path $\gamma=\alpha(\mathcal R)\cap\Gamma$ covering an edge loop $x_i\in\Delta$. Call this an {\em $x_i$-pair\/}, ($i=1,2$). For two $x_i$-pairs (fixed $i$), the respective $\gamma$ paths have no vertices in common, for otherwise there would be two distinct edges covering the same $x_i\in\Delta$ starting at such a common vertex. Moreover, $\gamma$ must contain vertices of $\Gamma$ apart from the two boundary vertices $v_1,v_2$, otherwise $\Lambda$ would be simply connected. These other vertices are incident with at least two edges of $\gamma$, hence at most $2$ edges of the coboundary $\delta\Gamma$, and thus must be interior. \begin{lemma}\label{finiterank:result1000} If $n_i, (i=1,2)$, is the number of $x_i$-pairs in an extended spine $\Gamma$, then the number of interior vertices is at least $\sum n_i$. \end{lemma} (Lemma \ref{finiterank:result1000} is not true in the case $\text{rk}\,\Delta>2$). \begin{proof} The number of interior vertices is $|V_\Gamma|-2\sum n_i$ and the number of edges of $\Gamma$ is $4(|V_\Gamma|-2\sum n_i)+2\sum n_i$, hence $\text{rk}\,\Gamma-1=|V_\Gamma|-3\sum n_i$ by \S \ref{topological:rank}. 
As $\Lambda$ is not simply connected, $\text{rk}\,\Lambda-1=\text{rk}\,\Gamma-1\geq 0$, thus $|V_\Gamma|-2\sum n_i\geq \sum n_i$ as required. \qed \end{proof} It will be helpful in \S \ref{section:pullbacks} to have a pictorial description of the quantity $\text{rk}\,-1$ for our graphs. To this end, a {\em checker\/} is a small plastic disk, as used in the eponymous boardgame (called {\em draughts\/} in British English). We place black checkers on some of the vertices of an extended spine $\Gamma$ according to the following scheme: place black checkers on all the interior vertices of $\Gamma$; for each $x_1$-pair in (*), take the interior vertex on the path $\gamma$ that is closest to $v_1$ (ie: is the terminal vertex of the edge of $\gamma$ whose start vertex is $v_1$) and {\em remove\/} its checker; for each $x_2$-pair, we can find, by Lemma \ref{finiterank:result1000}, an interior vertex with a checker still on it. Choose such a vertex and remove its checker also. \begin{lemma}\label{finiterank:whitevertices} With black checkers placed on the vertices of an extended spine for $\Lambda$ as above, the number of black checkers is $\text{rk}\,\Lambda-1$. \end{lemma} \begin{proof} We saw in the proof of Lemma \ref{finiterank:result1000} that $\text{rk}\,\Lambda-1= \text{rk}\,\Gamma-1$ is equal to the number of interior vertices of $\Gamma$ less the number of $x_i$-pairs $(i=1,2)$. \qed \end{proof} From now on we will only use the extended spine obtained by adding the coboundary edges to some fixed spine $\widehat{\Lambda}_u$. Let $p:\Lambda_u\rightarrow\Delta_v$ be a covering with $\text{rk}\,\Delta=2$, $\text{rk}\,\Lambda<\infty$ and $\Lambda$ not simply connected. A spanning tree $T\hookrightarrow\Delta$ induces a covering $\Lambda/T_i\rightarrow\Delta/T$ with $\Delta/T$ single vertexed. 
Let $\mathcal H(\Lambda_u\rightarrow\Delta_v)$ be the number of vertices of the spine of $\Lambda/T_i$ at $q(u)$ and $n_i(\Lambda_u\rightarrow\Delta_v)$ the number of $x_i$-pairs in the extended spine. The homeomorphism class of $\Lambda/T_i$ and the spine are independent of the spanning tree $T$, hence the quantities $\mathcal H(\Lambda_u\rightarrow\Delta_v)$ and $n_i(\Lambda_u\rightarrow\Delta_v)$ are too. \section{Pullbacks}\label{section:pullbacks} Let $p_i:\Lambda_i:=\Lambda_{u_i}\rightarrow\Delta_v, (i=1,2)$ be coverings and $(\Lambda_{1}\prod_\Delta\Lambda_{2})$ their (unpointed) pullback. If $\widehat{\Lambda}_{u_i}$ is the spine at $u_i$ then we can restrict the coverings to maps $p_i:\widehat{\Lambda}_{u_i}\rightarrow\Delta_v$ and form the pullback $\widehat{\Lambda}_{u_1}\prod_\Delta\widehat{\Lambda}_{u_2}$. \begin{proposition}[spine decomposition of pullbacks] \label{pullbacks:spinedecomposition} The pullback $\Lambda=(\Lambda_{1}\prod_\Delta\Lambda_{2})$ has a wedge sum decomposition $\Lambda=(\widehat{\Lambda}_{u_1}\prod_\Delta\widehat{\Lambda}_{u_2}) \bigvee_\Theta\Phi$ with $\Phi$ a forest and no two vertices of the image of $\Theta\hookrightarrow\Phi$ lying in the same component. \end{proposition} \begin{proof} Let $\Lambda_i=\widehat{\Lambda}_{u_i}\bigvee_{\Theta_i}\Phi_i, (i=1,2)$ be the spine decomposition, $t_i:\Lambda_1\prod_\Delta\Lambda_2\rightarrow\Lambda_i, (i=1,2)$ the coverings provided by the pullback and $\Omega$ a connected component of the pullback. If $\Omega\cap(\widehat{\Lambda}_{u_1}\prod_\Delta\widehat{\Lambda}_{u_2}) =\varnothing$, then a reduced closed path $\gamma\in\Omega$ must map via one of the $t_i$ to a closed path in the forest $\Phi_i$. As a closed path in a forest, $t_i(\gamma)$ must contain a spur, and spurs lift through coverings to spurs, contradicting $\gamma$ reduced. Thus $\Omega$ is a tree. 
Otherwise choose a vertex $w_1\times w_2$ in $\Omega\cap(\widehat{\Lambda}_{u_1}\prod_\Delta\widehat{\Lambda}_{u_2})$ and let $\Gamma$ be the connected component of this intersection containing $w_1\times w_2$. If $\gamma$ is a reduced closed path at $w_1\times w_2$ then $t_i(\gamma), (i=1,2)$ is a reduced closed path at $w_i\in\widehat{\Lambda}_{u_i}$, hence by Lemma \ref{topological:spines:result100}(ii), $t_i(\gamma)\in\widehat{\Lambda}_{u_i}$ and thus $\gamma\in\widehat{\Lambda}_{u_1}\prod_\Delta\widehat{\Lambda}_{u_2}$. Applying Proposition \ref{finite_rank_characterisation200}, we have $\Omega$ a wedge sum of $\Gamma$ and a forest of the required form. \qed \end{proof} \begin{corollary}[Howson-Stallings] \label{pullbacks:result200} Let $p_i:\Lambda_i\rightarrow\Delta, (i=1,2),$ be coverings with $\text{rk}\,\Lambda_i<\infty$ and $u_1\times u_2$ a vertex of their pullback. Then $\text{rk}\,(\Lambda_1\prod_\Delta\Lambda_2)_{u_1\times u_2}<\infty.$ \end{corollary} \begin{proof} The component $\Omega$ of the pullback containing $u_1\times u_2$ is either a tree or the wedge sum of a finite graph and a forest as described in Proposition \ref{pullbacks:spinedecomposition}. Either case gives the result. \qed \end{proof} The remainder of this section is devoted to a proof of an estimate for the rank of the pullback of finite rank graphs in a special case. Let $p_j:\Lambda_j:=\Lambda_{u_j}\rightarrow\Delta_v, (j=1,2)$ be coverings with $\text{rk}\,\Delta=2$, $\text{rk}\,\Lambda_j<\infty$ and the $\Lambda_j$ not simply connected. Let $\mathcal H_j:=\mathcal H(\Lambda_{u_j}\rightarrow\Delta_v)$ and $n_{ji}:=n_i(\Lambda_{u_j}\rightarrow\Delta_v)$ be as at the end of \S \ref{section:invariants}.
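The finiteness asserted by the Howson-Stallings corollary is easy to explore by machine on small examples. The following Python sketch (our own illustrative aside, not part of the paper) encodes a finite covering of the single-vertexed rank-two graph $\Delta$ by recording, for each loop label, where the lift starting at each vertex ends; it then forms the fibre product, finds its components, and computes $\text{rk}\,\Omega-1$ for each as the cycle rank $|E|-|V|+1$.

```python
from itertools import product

def pullback(cov1, cov2):
    """Fibre product of two coverings of a bouquet of two circles.

    A covering is a dict: loop label -> permutation (dict vertex -> vertex)
    recording the terminal vertex of the lift of that loop.
    """
    verts = list(product(cov1['a'].keys(), cov2['a'].keys()))
    edges = []
    for lab in cov1:  # both coverings use the same labels
        for (u1, u2) in verts:
            edges.append(((u1, u2), lab, (cov1[lab][u1], cov2[lab][u2])))
    return verts, edges

def components(verts, edges):
    """Connected components, ignoring edge directions."""
    adj = {v: set() for v in verts}
    for u, _, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, comps = set(), []
    for v in verts:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w); stack.extend(adj[w])
        seen |= comp; comps.append(comp)
    return comps

def rank_minus_one(comp, edges):
    """rk - 1 for a connected component: cycle rank E - V + 1, minus one."""
    e = sum(1 for u, _, v in edges if u in comp)
    return e - len(comp)

# a degree-2 and a degree-3 covering of the wedge of two circles a, b:
cov1 = {'a': {0: 1, 1: 0}, 'b': {0: 0, 1: 1}}              # rk = 4-2+1 = 3
cov2 = {'a': {0: 1, 1: 2, 2: 0}, 'b': {0: 0, 1: 1, 2: 2}}  # rk = 6-3+1 = 4

verts, edges = pullback(cov1, cov2)
total = sum(rank_minus_one(c, edges) for c in components(verts, edges))
print(total, (3 - 1) * (4 - 1))
```

Here the pullback is connected (the $a$-edges form a single $6$-cycle since $\gcd(2,3)=1$) and $\sum_\Omega(\text{rk}\,\Omega-1)=6=(\text{rk}\,\Lambda_1-1)(\text{rk}\,\Lambda_2-1)$, a case of equality in the Hanna Neumann-type bounds discussed below.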
\begin{theorem}\label{pullback:rankestimate} For $i=1,2$, $$ \sum_\Omega (\text{rk}\,\Omega-1)\leq \prod_j(\text{rk}\,\Lambda_j-1) +\mathcal H_1\mathcal H_2-(\mathcal H_1-n_{1i})(\mathcal H_2-n_{2i}), $$ the sum over all non simply connected components $\Omega$ of the pullback $\Lambda_1\prod_\Delta\Lambda_2$. \end{theorem} \begin{proof} Lattice excision and the definition of the $\mathcal H_j$ and $n_{ji}$ allow us to pass to the $\Delta$ single vertexed case. Suppose then that $\Delta$ has edge loops $x_i^{\pm 1}, (1\leq i\leq 2)$ at the vertex $v$, extended spines $\widehat{\Lambda}_{u_j}\hookrightarrow\Gamma_j\hookrightarrow\Lambda_j$, and by restricting the covering maps $p_j$ appropriately, the pullbacks $\widehat{\Lambda}_{u_1}\prod_\Delta\widehat{\Lambda}_{u_2}\hookrightarrow \Gamma_1\prod_\Delta\Gamma_2\hookrightarrow \Lambda_1\prod_\Delta\Lambda_2$ with $t_j:\Lambda_1\prod_\Delta\Lambda_2 \rightarrow\Lambda_j$ the resulting covering maps. Place black checkers on the vertices of the extended spines $\Gamma_j$ as in \S \ref{section:invariants} and place a black checker on a vertex $v_1\times v_2$ of $\Gamma_1\prod_\Delta\Gamma_2$ precisely when both $t_j(v_j)\in\Gamma_j, (j=1,2)$ have black checkers on them. By Lemma \ref{finiterank:whitevertices}, and the construction of the pullback for $\Delta$ single vertexed, we get the number of vertices in $\Gamma_1\prod_\Delta\Gamma_2$ with black checkers is equal to $\prod(\text{rk}\,\Lambda_j-1)$. Let $\Omega$ be a non simply connected component of the pullback $\Lambda_1\prod_\Delta\Lambda_2$ and $\Upsilon=\Omega\cap(\Gamma_1\prod_\Delta\Gamma_2)$. If $v_1\times v_2$ is the start vertex of at least one edge in the coboundary $\delta\Upsilon$, then at least one of the $v_j$ must be incident with at least one, hence three, edges of the coboundary $\delta\Gamma_j$. Lifting these three via the covering $t_j$ to $v_1\times v_2$ gives at least three edges starting at $v_1\times v_2$ in the coboundary $\delta\Upsilon$. 
Four coboundary edges starting here would mean that $\Omega$ was simply connected, hence every vertex of $\Upsilon$ is incident with either zero or three coboundary edges. We can thus extend the interior/boundary terminology of \S \ref{section:invariants} to the vertices of $\Upsilon$, and observe that a vertex of $\Upsilon$ covering, via either of the $t_j$, a boundary vertex $\in\Gamma_j$, must itself be a boundary vertex. The upshot is that $\Upsilon$ is an extended spine in $\Omega$ and by Proposition \ref{pullbacks:spinedecomposition}, $\text{rk}\,\Omega-1= \text{rk}\,\Upsilon-1$. Now place {\em red\/} checkers on the vertices of $\Upsilon$ as in \S \ref{section:invariants} and do this for each non-simply connected component $\Omega$. The number of red checkered vertices is $\sum_\Omega (\text{rk}\,\Omega-1)$. The result is that $\Gamma_1\prod_\Delta\Gamma_2$ has vertices with black checkers, vertices with red checkers, vertices with red checkers sitting on top of black checkers, and vertices that are completely uncheckered. Thus, $$ \sum_\Omega (\text{rk}\,\Omega-1)\leq \prod(\text{rk}\,\Lambda_j-1)+N, $$ where $N$ is the number of vertices of $\Gamma_1\prod_\Delta\Gamma_2$ that have a red checker but no black checker. It remains then to estimate the number of these ``isolated'' red checkers. Observe that a vertex of $\Gamma_1\prod_\Delta\Gamma_2$ has no black checker precisely when it lies in the fiber, via at least one of the $t_j$, of a checkerless vertex in $\Gamma_j$. Turning it around, we investigate the fibers of the checkerless vertices of both $\Gamma_j$. Indeed, in an $x_1$-pair, $$ \begin{pspicture}(0,0)(14,2) \rput(3,0){ \rput(4,1){\BoxedEPSF{free11c.eps scaled 650}} \rput(4,1.7){$\Gamma_j$} \rput(2.8,.85){$e$} \rput(2.55,1.25){$v_1$}\rput(5.4,1.25){$v_2$} \rput(3.6,1.25){$u$} } \end{pspicture} $$ the vertices $v_1,v_2$ and $u$ are checkerless, while $v_1,v_2$ are also checkerless in an $x_2$-pair.
We claim that no vertex in the fiber, via $t_j$, of these five has a red checker. A vertex of $\Upsilon$ in the fiber of the boundary vertices $v_1,v_2$ is itself a boundary vertex, hence contains no red checker. If $v_1\times v_2\in\Upsilon$ is in the fiber of $u$ and is a boundary vertex of $\Upsilon$ then it carries no red checker either. If instead $v_1\times v_2$ is an interior vertex then the lift to $v_1\times v_2$ of $e^{-1}$ cannot be in the coboundary $\delta\Upsilon$, hence the terminal vertex of this lift is $\in\Upsilon$ also and covers $v_1$. Thus, this terminal vertex is a boundary vertex for an $x_1$-pair of $\Upsilon$, and $v_1\times v_2$ is the interior vertex from which a red checker is removed for this pair. The only remaining checkerless vertices of the $\Gamma_j$ unaccounted for are those interior vertices chosen for each $x_2$-pair, and thus $N\leq$ the number of vertices of $\Gamma_1\prod_\Delta\Gamma_2$ contained in the fibers of these. If $u\in\Gamma_1$ is one of these interior vertices, then $u\times V_{\Gamma_2}$ are the vertices of $\Gamma_1\prod_\Delta\Gamma_2$ in the fiber. As the boundary vertices in this fiber do not have red checkers we need only consider the $u\times\{\text{interior vertices of }\Gamma_2\}$ with these interior vertices precisely those of the spine $\widehat{\Lambda}_{u_2}$. Thus our fiber is $u\times\{\text{vertices of }\widehat{\Lambda}_{u_2}\}$, of which there are $\mathcal H_2$, and a total of $n_{12}\mathcal H_2$ vertices of $\Gamma_1\prod_\Delta\Gamma_2$ arising this way. There are also $n_{22}\mathcal H_1$ vertices arising in this way from $u\in\Gamma_2$, and $n_{12}n_{22}$ vertices counted twice. Thus $N\leq n_{12}\mathcal H_2+n_{22}\mathcal H_1-n_{12}n_{22}$, hence the result for $i=2$. Interchanging the checkering scheme for the $x_i$-pairs gives the result for $i=1$. 
\qed \end{proof} \section{Free groups and the topological dictionary}\label{free} A group $F$ is {\em free of rank $\text{rk}\, F$\/} if and only if it is isomorphic to the fundamental group of a connected graph of rank $\text{rk}\, F$. If\/ $\Gamma_1,\Gamma_2$ are connected graphs with $\pi_1(\Gamma_1,v_1)\cong\pi_1(\Gamma_2,v_2)$, then we have $H_1(\Gamma_1)\cong H_1(\Gamma_2)$ and thus $\text{rk}\,\Gamma_1=\text{rk}\,\Gamma_2$. The free groups so defined are of course the standard free groups and the rank is the usual rank of a free group. At this stage we appeal to the existing (algebraic) theory of free groups, and in particular, that by applying Nielsen transformations, a set of generators for a free group can be transformed into a set of free generators whose cardinality is no greater. Thus, a finitely generated free group has finite rank (the converse being obvious). From now on we use the (topologically more tractable) notion of finite rank as a synonym for finitely generated. Let $F$ be a free group with representation $\varphi:F\rightarrow\pi_1(\Delta,v)$ for $\Delta$ connected. The topological dictionary is the loose term used to describe the correspondence between algebraic properties of $F$ and topological properties of $\Delta$ as described in \S\S \ref{section:topological}-\ref{topology:galoistheory}. The non-abelian $F$ correspond to the $\Delta$ with $\text{rk}\,\Delta>1$. A subgroup $A\subset F$ corresponds to a covering $p:\Lambda_u\rightarrow\Delta_v$ with $p^*\pi_1(\Lambda,u)=\varphi(A)$, and hence $\text{rk}\, A=\text{rk}\,\Lambda$. Thus finitely generated subgroups correspond to finite rank $\Lambda$ and normal subgroups to Galois coverings. Inclusion relations between subgroups correspond to covering relations, indices of subgroups to degrees of coverings, trivial subgroups to simply connected coverings, conjugation to change of basepoint.
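As a small numerical aside of our own (not part of the paper's formal development), the rank entry of the dictionary is the cycle rank $|E|-|V|+1$ of a connected graph, and combining it with the degree entry recovers the classical Nielsen-Schreier index formula $\text{rk}\,A=n(\text{rk}\,F-1)+1$ for an index-$n$ subgroup $A$ of $F$:

```python
def graph_rank(num_vertices, num_edges):
    """Cycle rank E - V + 1 of a connected graph: the rank of its
    fundamental (free) group."""
    return num_edges - num_vertices + 1

# the wedge of two circles has one vertex and two edge loops, so rk F = 2
assert graph_rank(1, 2) == 2

# a degree-n covering of it has n vertices and 2n edges, giving the
# Nielsen-Schreier formula rk A = n(rk F - 1) + 1 for an index-n subgroup
for n in range(1, 6):
    assert graph_rank(n, 2 * n) == n * (2 - 1) + 1
print("index formula verified")
```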
Applying the topological dictionary to the italicised results below we recover some classical facts (see also \cites{Servatius83,Stallings83}). \begin{enumerate} \item \cite{Greenberg60,Karrass69}: If a finitely generated subgroup $A$ of a non-abelian free group $F$ is contained in no subgroup of infinite rank, then $A$ has finite index in $F$; {\em Proposition \ref{finiterank:result600}}. \item \cites{Greenberg60}: If a finitely generated subgroup $A$ of a free group $F$ contains a non-trivial normal subgroup of $F$, then it has finite index in $F$; {\em Proposition \ref{finiterank:result700}}. \item \cites{Burns69,Hall49}: Let $F$ be a free group, $X$ a finite subset of $F$, and $A$ a finitely generated subgroup of $F$ disjoint from $X$. Then $A$ is a free factor of a group $G$, of finite index in $F$ and disjoint from $X$; {\em Proposition \ref{finiterank:result800}} (and the comments following it). \item \cite{Howsen54}: If $A_1,A_2$ are finitely generated subgroups of a free group $F$, then the intersection of conjugates $A_1^{g_1}\cap A_2^{g_2}$ is finitely generated for any $g_1,g_2\in F$; {\em Corollary \ref{pullbacks:result200}}. \end{enumerate} If $\Delta$ is a graph, $\text{rk}\,\Delta=2$, and $A\subset F=\pi_1(\Delta,v)$, then we define $\mathcal H(F,A):=\mathcal H(\Lambda_u\rightarrow\Delta_v)$ and $n_i(F,A):= n_i(\Lambda_u\rightarrow\Delta_v)$, where $p:\Lambda_u\rightarrow\Delta_v$ is the covering with $p^*\pi_1(\Lambda,u)=A$. For an arbitrary free group $F$ with representation $\varphi:F\rightarrow\pi_1(\Delta,v)$, define $\mathcal H^\varphi(F,A)$ and $n^\varphi_i(F,A)$ to be $\mathcal H(\varphi(F),\varphi(A))$ and $n_i(\varphi(F),\varphi(A))$. The appearance of $\varphi$ in the notation is meant to indicate that these quantities, unlike rank, are representation dependent. This can be both a strength and a weakness. 
A weakness because it seems desirable for algebraic statements to involve only algebraic invariants, and a strength if we have the freedom to choose the representation, especially if the most interesting results are obtained when this representation is not the ``obvious'' one. For example, if $F$ is a free group with free generators $x$ and $y$, and $\Delta$ is single vertexed with two edge loops whose homotopy classes are $a$ and $b$, then the subgroup $A=\langle xy\rangle\subset F$ corresponds to the $\Lambda$ below left under the obvious representation $\varphi_1(x)=a,\varphi_1(y)=b$, and to the right-hand graph via $\varphi_2(x)=a,\varphi_2(y)=a^{-1}b$: $$ \begin{pspicture}(0,0)(12,3) \rput(9.5,1.5){\BoxedEPSF{fig18e.eps scaled 500}} \rput(3.5,1.5){\BoxedEPSF{fig18g.eps scaled 500}} \end{pspicture} $$ Thus, $\mathcal H^{\varphi_1}(F,A)=2,n^{\varphi_1}_{i}(F,A)=1, (i=1,2)$, whereas $\mathcal H^{\varphi_2}(F,A)=1,n^{\varphi_2}_{1}(F,A)=1,n^{\varphi_2}_{2}(F,A)=0$. We now apply the topological dictionary to Theorem \ref{pullback:rankestimate}. Let $\varphi:F\rightarrow\pi_1(\Delta,v)$ be a representation, $A_j\subset F, (j=1,2)$, finitely generated non-trivial subgroups, and $p_j:\Lambda_{u_j}\rightarrow\Delta_v, (j=1,2)$ coverings with $\varphi(A_j)=p_j^*\pi_1(\Lambda,u_j)$. Each non simply-connected component $\Omega$ of the pullback corresponds to some non-trivial intersection of conjugates $A_1^{g_1}\cap A_2^{g_2}$. As observed in \cite{Neumann90}, these in turn correspond to the conjugates $A_1\cap A_2^g$ for $g$ from a set of double coset representatives for $A_2\backslash F/ A_1$. \begin{theorem} \label{algebraic:shn} Let $F$ be a free group of rank two and $A_j\subset F, (j=1,2)$, finitely generated non-trivial subgroups.
Then for any representation $\varphi:F\rightarrow\pi_1(\Delta,v)$ and $i=1,2$, $$ \sum_g (\text{rk}\,(A_1\cap A_2^g)-1)\leq \prod_j(\text{rk}\, A_j-1) +\mathcal H_1\mathcal H_2-(\mathcal H_1-n_{1i})(\mathcal H_2-n_{2i}), $$ the sum over all double coset representatives $g$ for $A_2\backslash F/ A_1$ with $A_1\cap A_2^g$ non-trivial, and where $\mathcal H_j=\mathcal H^\varphi(F,A_j)$ and $n_{ji}=n^\varphi_i(F,A_j)$. \end{theorem} This theorem should be viewed in the context of attempts to prove the so-called {\em strengthened Hanna Neumann conjecture\/}: namely, if $A_j, (j=1,2)$ are finitely generated, non-trivial, subgroups of an arbitrary free group $F$, then $$ \sum_g (\text{rk}\,(A_1\cap A_2^g)-1)\leq \prod_j(\text{rk}\, A_j-1)+\varepsilon, $$ the sum over all double coset representatives $g$ for $A_2\backslash F/ A_1$ with $A_1\cap A_2^g$ non-trivial, where the conjecture is that $\varepsilon$ is zero, while in the existing results, it is an error term having a long history. We provide a very partial, and chronological, summary of these estimates for $\varepsilon$ in the table: $$ \begin{tabular}{ll} \hline\\ $(\text{rk}\, A_1-1)(\text{rk}\, A_2-1)$&H. Neumann \cite{Neumann56}\\ $\max\{(\text{rk}\, A_1-2)(\text{rk}\, A_2-1),(\text{rk}\, A_1-1)(\text{rk}\, A_2-2)\}$&Burns \cite{Burns69}\\ $\max\{(\text{rk}\, A_1-2)(\text{rk}\, A_2-2)-1,0\}$&Tardos \cite{Tardos96}\\ $(\text{rk}\, A_1-3)(\text{rk}\, A_2-3)$&Dicks-Formanek \cite{Dicks01}\\ \\\hline \end{tabular} $$ (the original, unstrengthened, conjecture \cite{Neumann56} involved just the intersection of the two subgroups, rather than their conjugates, and the first two expressions for $\varepsilon$ were proved in this restricted sense; the strengthened version was formulated in \cite{Neumann90}, and the H. Neumann and Burns estimates for $\varepsilon$ were improved to the strengthened case there).
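For concreteness (a numerical aside of our own, not from the paper), the four error terms in the table are easily tabulated; already at $\text{rk}\,A_1=\text{rk}\,A_2=4$ they give $9$, $6$, $3$ and $1$ respectively:

```python
def eps_neumann(r1, r2):
    """H. Neumann 1956 error term."""
    return (r1 - 1) * (r2 - 1)

def eps_burns(r1, r2):
    """Burns 1969 error term."""
    return max((r1 - 2) * (r2 - 1), (r1 - 1) * (r2 - 2))

def eps_tardos(r1, r2):
    """Tardos 1996 error term."""
    return max((r1 - 2) * (r2 - 2) - 1, 0)

def eps_dicks_formanek(r1, r2):
    """Dicks-Formanek 2001 error term."""
    return (r1 - 3) * (r2 - 3)

for f in (eps_neumann, eps_burns, eps_tardos, eps_dicks_formanek):
    print(f.__name__, f(4, 4))
```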
Observe that as the join $\langle A_1,A_2\rangle$ of two finitely generated subgroups is finitely generated, and every finitely generated free group can be embedded as a subgroup of the free group of rank two, we may replace the ambient free group in the conjecture with the free group of rank two. It is hard to make a precise comparison between the $\varepsilon$ provided by Theorem \ref{algebraic:shn} and those in the table. Observe that if $A_j\subset F$, with $F$ free of rank two, then with respect to a topological representation we have $\text{rk}\, A_j=\mathcal H_j-(n_{j1}+n_{j2})+1$. It is straightforward to find infinite families $A_{1k},A_{2k}\subset\pi_1(\Delta,v), (k\in\ams{Z}^{>0})$, for which the error term in Theorem \ref{algebraic:shn} is less than those in the table above for all but finitely many $k$, or even for which the strengthened Hanna Neumann conjecture is true by Theorem \ref{algebraic:shn}, for instance, $$ \begin{pspicture}(0,0)(12,2) \rput(6,1){\BoxedEPSF{fig18i.eps scaled 500}} \rput{270}(5.9,.8){$\left.\begin{array}{c} \vrule width 0 mm height 22 mm depth 0 pt\end{array}\right\}$} \rput(5.9,.4){$k$ edge loops} \rput(1.8,1.5){$A_{1k}=A_{2k}=$}\rput(6.2,1.5){$\ldots$} \end{pspicture} $$ but where the error terms in the table are quadratic in $k$. \section*{References} \begin{biblist} \bib{Burns69}{article}{ author={Burns, R. G.}, title={A note on free groups}, journal={Proc. Amer. Math. Soc.}, volume={23}, date={1969}, pages={14--17}, issn={0002-9939}, } \bib{Cohen89}{book}{ author={Cohen, Daniel E.}, title={Combinatorial group theory: a topological approach}, series={London Mathematical Society Student Texts}, volume={14}, publisher={Cambridge University Press}, place={Cambridge}, date={1989}, pages={x+310}, isbn={0-521-34133-7}, isbn={0-521-34936-2}, } \bib{Collins98}{article}{ author={Collins, D.
J.}, author={Zieschang, H.}, title={Combinatorial group theory and fundamental groups}, conference={ title={Algebra, VII}, }, book={ series={Encyclopaedia Math. Sci.}, volume={58}, publisher={Springer}, place={Berlin}, }, date={1993}, pages={1--166, 233--240}, } \bib{Dicks01}{article}{ author={Dicks, Warren}, author={Formanek, Edward}, title={The rank three case of the Hanna Neumann conjecture}, journal={J. Group Theory}, volume={4}, date={2001}, number={2}, pages={113--151}, issn={1433-5883}, } \bib{Dicks94}{article}{ author={Dicks, Warren}, title={Equivalence of the strengthened Hanna Neumann conjecture and the amalgamated graph conjecture}, journal={Invent. Math.}, volume={117}, date={1994}, number={3}, pages={373--389}, issn={0020-9910}, } \bib{Gersten83}{article}{ author={Gersten, S. M.}, title={Intersections of finitely generated subgroups of free groups and resolutions of graphs}, journal={Invent. Math.}, volume={71}, date={1983}, number={3}, pages={567\ndash 591}, issn={0020-9910}, } \bib{Greenberg60}{article}{ author={Greenberg, Leon}, title={Discrete groups of motions}, journal={Canad. J. Math.}, volume={12}, date={1960}, pages={415--426}, issn={0008-414X}, } \bib{Hall49}{article}{ author={Hall, Marshall, Jr.}, title={Subgroups of finite index in free groups}, journal={Canadian J. Math.}, volume={1}, date={1949}, pages={187\ndash 190}, } \bib{Howsen54}{article}{ author={Howson, A. G.}, title={On the intersection of finitely generated free groups}, journal={J. London Math. Soc.}, volume={29}, date={1954}, pages={428-434}, issn={0024-6107}, } \bib{Imrich77}{article}{ author={Imrich, Wilfried}, title={On finitely generated subgroups of free groups}, journal={Arch. Math. (Basel)}, volume={28}, date={1977}, number={1}, pages={21--24}, issn={0003-889X}, } \bib{Imrich76}{article}{ author={Imrich, Wilfried}, title={Subgroup theorems and graphs}, conference={ title={Combinatorial mathematics, V}, address={Proc. Fifth Austral. Conf., Roy. Melbourne Inst. 
Tech., Melbourne}, date={1976}, }, book={ publisher={Springer}, place={Berlin}, }, date={1977}, pages={1--27. Lecture Notes in Math., Vol. 622}, } \bib{Karrass69}{article}{ author={Karrass, Abraham}, author={Solitar, Donald}, title={On finitely generated subgroups of a free group}, journal={Proc. Amer. Math. Soc.}, volume={22}, date={1969}, pages={209--213}, issn={0002-9939}, } \bib{Neumann90}{article}{ author={Neumann, Walter D.}, title={On intersections of finitely generated subgroups of free groups}, conference={ title={Groups---Canberra 1989}, }, book={ series={Lecture Notes in Math.}, volume={1456}, publisher={Springer}, place={Berlin}, }, date={1990}, pages={161--170}, } \bib{Neumann56}{article}{ author={Neumann, Hanna}, title={On the intersection of finitely generated free groups}, journal={Publ. Math. Debrecen}, volume={4}, date={1956}, pages={186--189}, issn={0033-3883}, } \bib{Nickolas85}{article}{ author={Nickolas, Peter}, title={Intersections of finitely generated free groups}, journal={Bull. Austral. Math. Soc.}, volume={31}, date={1985}, number={3}, pages={339--348}, issn={0004-9727}, } \bib{Reidemeister28}{article}{ author={Reidemeister, K.}, title={Fundamentalgruppen und \"{U}berlagerungsr\"{a}ume}, language={German}, journal={Nachr. Ges. Wiss. G\"{o}ttingen, Math. Phys. Kl.}, date={1928}, pages={69\ndash 76}, } \bib{Scott79}{article}{ author={Scott, Peter}, author={Wall, Terry}, title={Topological methods in group theory}, booktitle={Homological group theory (Proc. Sympos., Durham, 1977)}, series={London Math. Soc. Lecture Note Ser.}, volume={36}, pages={137\ndash 203}, publisher={Cambridge Univ.
Press}, place={Cambridge}, date={1979}, } \bib{Serre03}{book}{ author={Serre, Jean-Pierre}, title={Trees}, series={Springer Monographs in Mathematics}, note={Translated from the French original by John Stillwell; Corrected 2nd printing of the 1980 English translation}, publisher={Springer-Verlag}, place={Berlin}, date={2003}, pages={x+142}, isbn={3-540-44237-5}, } \bib{Servatius83}{article}{ author={Servatius, Brigitte}, title={A short proof of a theorem of Burns}, journal={Math. Z.}, volume={184}, date={1983}, number={1}, pages={133--137}, issn={0025-5874}, } \bib{Stallings83}{article}{ author={Stallings, John R.}, title={Topology of finite graphs}, journal={Invent. Math.}, volume={71}, date={1983}, number={3}, pages={551\ndash 565}, issn={0020-9910}, } \bib{Tardos96}{article}{ author={Tardos, G{\'a}bor}, title={Towards the Hanna Neumann conjecture using Dicks' method}, journal={Invent. Math.}, volume={123}, date={1996}, number={1}, pages={95--104}, issn={0020-9910}, } \bib{Tardos92}{article}{ author={Tardos, G{\'a}bor}, title={On the intersection of subgroups of a free group}, journal={Invent. Math.}, volume={108}, date={1992}, number={1}, pages={29--36}, issn={0020-9910}, } \end{biblist} \end{document}
\begin{document} \title{Quantum trajectories for environment in superposition of coherent states} \author{Anita Magdalena D\k{a}browska} \affiliation{Nicolaus Copernicus University in Toru\'{n},\\Collegium Medicum Bydgoszcz, ul. Jagiello\'{n}ska 15, 85-067 Bydgoszcz, Poland} \begin{abstract} We derive stochastic master equations for a quantum system interacting with a Bose field prepared in a superposition of continuous-mode coherent states. To determine the conditional evolution of the quantum system we use a collision model with the environment given as an infinite chain of qubits which do not interact among themselves and which are prepared initially in an entangled state that is a discrete analogue of a superposition of coherent states of the Bose field. The elements of the environment chain interact with the quantum system in turn, one by one, and are subsequently measured. We determine the conditional evolution of the quantum system for continuous-in-time observations of the output field as a limit of discrete recurrence equations. We consider the stochastic master equations for a counting as well as for a diffusive stochastic process. \end{abstract} \maketitle \section{Introduction} Quantum filtering theory \cite{Bel89, Bel90, BB91, Car93, BP02, Bar06, GZ10, WM10}, formulated within the framework of quantum stochastic It\^{o} calculus (QSC) \cite{HP84, Par92}, gives the best state estimate of an open quantum system on the basis of a continuous-in-time measurement performed on the Bose field interacting with the system. The filtering theory is formulated making use of the input-output formalism \cite{GarCol85}, wherein the input field is interpreted as the field before the interaction with the system and the output field as the field after this interaction. Information about the quantum system is gained in an indirect way by performing measurements on the output field.
In general, two types of measurement are considered in the filtering theory, namely photon counting and homodyne/heterodyne measurements, which correspond respectively to counting and diffusion stochastic processes \cite{Bar06}. The evolution of an open quantum system conditioned on the results of a continuous-in-time measurement of the output field is given by the stochastic master equation, also called in the literature the quantum filtering equation. The conditional state, which depends on all past results of the measurement, forms a quantum trajectory. By taking the average over all possible outcomes of the measurements we pass from the {\it a posteriori} evolution to the {\it a priori} evolution given by the master equation. Clearly, the form of the filtering equation depends on the initial state of the environment and on the type of measurement performed on the output field. There exist many derivations of the filtering equations (see, for example, \cite{Bel89, Bel90, BB91, Car93, GG94, BP96, B02, GS04, WM10}). Rigorous derivations of the conditional evolution for the case when the Bose field is prepared in a Gaussian state can be found, for instance, in \cite{BGM04, GK10, DS11, DS12, N14, DG16}. The standard methods of determining the filtering equation stop working when the Bose field is prepared in a non-classical state. The initial temporal correlations in the Bose field then make the evolution of the open system non-Markovian. The system becomes entangled with the environment and its evolution is no longer given by a single equation but by a set of equations. In this case, to determine the conditional evolution of the system, one can apply a cascaded approach \cite{GZ10} with an ancilla system serving as the source of the non-classical signal.
The methods of determining the filtering equation based on the idea of enlarging the Hilbert space of the compound system by the Hilbert space of an ancilla were used for a single-photon state in \cite{ GJNC12a, GJN12b, GJN13, D18}, for a Fock state in \cite{GJN14,SZX16, BC17}, and for a superposition of coherent states in \cite{GJNC12a, GJN13}. Note, however, that the ancilla system serves here only as a convenient theoretical and mathematical device allowing one to solve the problem of determining the conditional evolution. Unfortunately, by introducing such an auxiliary system we lose some physical intuition, and the interpretation of the quantum trajectories thereby becomes more difficult. In this paper we present a derivation of the filtering equations for an environment prepared in a superposition of coherent states. Instead of the methods based on the concept of an ancilla and QSC, we use the model of repeated quantum interactions and measurements \cite{AP06, P08, PP09, P10}, known also in the physical literature as a collision model \cite{C17}. We consider an environment modeled by an infinite chain of qubits which interact in turn, one by one, with a quantum system. After each interaction a measurement is performed on the last qubit that interacted with the system. The essential properties of our model are that each qubit interacts with the system only once and that the environment qubits do not interact among themselves. Thus in the paper we use the toy Fock space as an approximation of the symmetric Fock space \cite{M93, At03, G04, GS04, AP06, BHJ09, P05}. The idea of obtaining the differential filtering equations from their difference versions was implemented for the Markovian case in \cite{B02, GS04, BHJ09, GCMC18}. As shown in \cite{DSC17, DSC18}, it can be successfully applied also in the non-Markovian case. The paper is organized as follows. In Sec. II, we introduce a description of the environment and its interaction with the quantum system. Sec.
III is devoted to the derivation of the conditional evolution of the open system for the case when the environment is prepared in a coherent state. In Sec. IV the conditional evolution of the open system for a bath in a superposition of coherent states is investigated. As an example, we present the {\it a priori} and the {\it a posteriori} dynamics of a single-mode cavity in Sec. V. Our results are briefly summarized in Sec. VI. \section{The unitary system and environment evolution} Let us consider a quantum system $\mathcal{S}$ with Hilbert space $\mathcal{H}_{\mathcal{S}}$ interacting with an environment consisting of a sequence of qubits. We assume that the environment qubits do not interact among themselves but interact successively with the system $\mathcal{S}$, each during a time interval of length $\tau$. At a given moment $\mathcal{S}$ interacts with only one of the environment qubits. The Hilbert space of the environment is \begin{equation} \mathcal{H}_{\mathcal{E}}=\bigotimes_{k=0}^{+\infty}\mathcal{H}_{\mathcal{E},k}, \end{equation} where $\mathcal{H}_{\mathcal{E},k}$ stands for the Hilbert space of the $k$-th qubit, which interacts with $\mathcal{S}$ in the time interval $[k\tau, (k+1)\tau)$. We start from a discrete-in-time model of repeated interactions (collisions) and finally exhibit its limit with time treated as a continuous variable. We treat $\tau$ as a small time and work to linear order in $\tau$ (we neglect all higher-order terms in $\tau$).
We assume that the unitary evolution of the compound $\mathcal{E}+\mathcal{S}$ system is governed by \cite{AP06,PP09} \begin{equation} U_{j}=\mathbb{V}_{j-1}\mathbb{V}_{j-2}\ldots \mathbb{V}_{0}\;\; \mathrm{for}\;\; j\geq 1, \;\;\;\;\;U_{0}=\mathbbm{1}, \end{equation} where $\mathbb{V}_{k}$ is the unitary operator acting non-trivially only in the Hilbert space $\mathcal{H}_{\mathcal{E},k}\otimes \mathcal{H}_{S}$, that is, \begin{equation}\label{intermat} \mathbb{V}_{k}= \bigotimes_{i=0}^{k-1}\mathbbm{1}_{i} \otimes {V}_{k}, \end{equation} and \begin{equation} {V}_{k} = \exp\left(-i\tau H_{k}\right), \end{equation} with \begin{eqnarray}\label{hamint} H_{k} = \bigotimes_{i=k}^{+\infty}\mathbbm{1}_{i}\otimes H_{\mathcal{S}}+\frac{i}{\sqrt{\tau}}\left(\sigma_{k}^{+}\otimes \bigotimes_{i=k+1}^{+\infty}\mathbbm{1}_{i}\otimes L-\sigma_{k}^{-}\otimes \bigotimes_{i=k+1}^{+\infty}\mathbbm{1}_{i}\otimes L^{\dagger}\right), \end{eqnarray} where $H_\mathcal{S}$ is the Hamiltonian of $\mathcal{S}$, $L$ is a bounded operator of $\mathcal{S}$, and $\sigma_{k}^{+}=|1\rangle_{k}\langle 0|$, $\sigma_{k}^{-}=|0\rangle_{k}\langle 1|$, where $|0\rangle_{k}$ and $|1\rangle_{k}$ denote respectively the ground and excited states of the $k$-th qubit. The Hamiltonian $H_{k}$ is written in the interaction picture eliminating the free evolution of the bath. A detailed discussion of the physical assumptions leading to (\ref{hamint}) can be found, for instance, in \cite{C17,GCMC18,FTVRS18}. For simplicity, we set the Planck constant $\hbar=1$. Note that $U_{j}$ describes the first $j$ interactions and has trivial action on $\bigotimes_{k=j}^{+\infty}\mathcal{H}_{\mathcal{E},k}$. Let us define in $\mathcal{H}_{\mathcal{E},k}$ the vector $|\alpha_{k}\rangle_{k}$ by the formula \cite{GCMC18} \begin{equation} |\alpha_{k}\rangle_{k}=e^{\sqrt{\tau}\left(\alpha_{k}\sigma_{k}^{+}-\alpha_{k}^{\ast}\sigma_{k}^{-}\right)}|0\rangle_{k}, \end{equation} where $\alpha_{k}\in\mathbb{C}$.
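The state $|\alpha_{k}\rangle_{k}$ has a simple closed form: the generator squares to $-\tau|\alpha_{k}|^2$ on $|0\rangle_{k}$, so the exponential acts as a rotation by the angle $\sqrt{\tau}|\alpha_{k}|$. The following pure-Python sketch (our own aside, with arbitrarily chosen numerical values) verifies the first-moment and excitation-number expansions quoted in the text to the stated orders in $\tau$:

```python
import cmath
import math

def qubit_coherent(alpha, tau):
    """Amplitudes of exp(sqrt(tau)(alpha s+ - alpha* s-))|0> in the
    basis {|0>, |1>}: a rotation by theta = sqrt(tau)|alpha|."""
    theta = math.sqrt(tau) * abs(alpha)
    c0 = cmath.cos(theta)                          # amplitude on |0>
    c1 = (alpha / abs(alpha)) * cmath.sin(theta)   # amplitude on |1>
    return c0, c1

alpha, tau = 0.7 + 0.3j, 1e-6   # arbitrary test values
c0, c1 = qubit_coherent(alpha, tau)

# <sigma-> = c0* c1, which should equal sqrt(tau) alpha + O(tau^{3/2})
sm = c0.conjugate() * c1
assert abs(sm - math.sqrt(tau) * alpha) < 10 * tau ** 1.5

# <sigma+ sigma-> = |c1|^2, which should equal tau |alpha|^2 + O(tau^2)
assert abs(abs(c1) ** 2 - tau * abs(alpha) ** 2) < 10 * tau ** 2
print("moment expansions agree to the stated orders")
```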
One can check that \begin{equation} |\alpha_{k}\rangle_{k}=\left(1-\frac{|\alpha_{k}|^2}{2}\tau\right)|0\rangle_{k}+\alpha_{k}\sqrt{\tau}|1\rangle_{k}+O(\tau^{3/2}) \end{equation} and \begin{equation} \langle \alpha_{k}|\sigma_{k}^{-}|\alpha_{k}\rangle=\sqrt{\tau}\alpha_{k}+O(\tau^{3/2}),\;\; \langle \alpha_{k}|\sigma_{k}^{+}\sigma_{k}^{-}| \alpha_{k}\rangle=\tau|\alpha_{k}|^2+O(\tau^2). \end{equation} We define the coherent state in $\mathcal{H}_{\mathcal{E}}$ as \begin{equation} |\alpha\rangle= \displaystyle{\bigotimes_{k=0}^{+\infty}}|\alpha_{k}\rangle_{k} \end{equation} with the condition $\displaystyle{\sum_{k=0}^{+\infty}}|\alpha_{k}|^2\tau<\infty$. Note that the vector state $|\alpha\rangle$ is a discrete analogue of the coherent state defined in the symmetric Fock space considered in QSC. We will show that, in the continuous-time limit, it reproduces all the results for the coherent state obtained within QSC. \section{Quantum trajectories for the coherent state} In this section we consider the case when the compound $\mathcal{E}+\mathcal{S}$ system is prepared initially in the pure product state \begin{equation}\label{ini1} |\alpha\rangle\otimes|\psi\rangle, \end{equation} where $|\alpha\rangle$ is the coherent state of the environment. \subsection{Photon counting} We assume that a measurement is performed on each element of the environment chain just after its interaction with $\mathcal{S}$. The goal of this subsection is to provide a description of the state of $\mathcal{S}$ conditioned on the results of the measurements of the observables \begin{equation}\label{obser} \sigma_{k}^{+}\sigma_{k}^{-}=|1\rangle_{k}\langle 1|,\;\;\;k=0,1,2,\ldots .
\end{equation} \begin{Theorem}\label{TH-1} The conditional state of $\mathcal{S}$ and the part of the environment which has not interacted with $\mathcal{S}$ up to $j\tau$ for the initial state (\ref{ini1}) and the measurement of (\ref{obser}) at the moment $j\tau$ is given by \begin{equation}\label{cond} |\tilde{\Psi}_{j}\rangle = \frac{|\Psi_{j}\rangle}{\sqrt{\langle\Psi_{j}|\Psi_{j}\rangle}}, \end{equation} where \begin{equation}\label{cond1} |\Psi_{j}\rangle=\bigotimes_{k=j}^{+\infty}|\alpha_{k}\rangle_{k}\otimes|\psi_{j}\rangle \end{equation} and the conditional vector $|\psi_{j}\rangle$ from $\mathcal{H}_{S}$ satisfies the recurrence formula \begin{equation}\label{recur} |\psi_{j+1}\rangle=M_{\eta_{j+1}}^{j}|\psi_{j}\rangle, \end{equation} where $\eta_{j+1}$ stands for a random variable describing the $(j+1)$-th output of (\ref{obser}), and $M_{\eta_{j+1}}^{j}$ has the form \begin{eqnarray} M_{0}^{j}&=&\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j}+\frac{|\alpha_{j}|^2}{2}\right)\tau+O(\tau^2),\\ M_{1}^{j}&=&\left(L+\alpha_{j}\right)\sqrt{\tau}+O(\tau^{3/2}). \end{eqnarray} Initially $|\psi_{j=0}\rangle=|\psi\rangle$ such that $|\tilde{\Psi}_{j=0}\rangle=|\alpha\rangle\otimes|\psi\rangle$. \end{Theorem} \noindent It is clear that $|\tilde{\Psi}_{j}\rangle$ is a product state vector belonging to the Hilbert space $\displaystyle{\bigotimes_{k=j}^{+\infty}}\mathcal{H}_{\mathcal{E},k}\otimes \mathcal{H}_{S}$. Note also that the conditional vector $|\psi_{j}\rangle$ depends on all results of the measurements performed on the bath qubits up to time $j\tau$. {\it Proof.} We prove the theorem by induction.
So we start from the assumption that (\ref{cond1}) holds and then check that \begin{eqnarray}\label{action} V_{j}|\Psi_{j}\rangle&=&|0\rangle_{j}\otimes\bigotimes_{k=j+1}^{+\infty}|\alpha_{k}\rangle_{k}\otimes \left(\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j} +\frac{|\alpha_{j}|^2}{2}\right)\tau+O(\tau^2)\right)|\psi_{j}\rangle\nonumber\\ &&+|1\rangle_{j}\otimes\bigotimes_{k=j+1}^{+\infty}|\alpha_{k}\rangle_{k}\otimes \left[\left(L+\alpha_{j}\right)\sqrt{\tau}+O(\tau^{3/2})\right]|\psi_{j}\rangle. \end{eqnarray} Now using the fact that the conditional vector $|\Psi_{j+1}\rangle$ from the Hilbert space $\displaystyle{\bigotimes_{k=j+1}^{+\infty}}\mathcal{H}_{\mathcal{E},k}\otimes \mathcal{H}_{S}$ is defined by \begin{equation} \left(\Pi_{\eta_{j+1}}^{j}\otimes\displaystyle{\bigotimes_{k=j+1}^{+\infty}}\mathbbm{1}_{k}\otimes \mathbbm{1}_{S}\right)V_{j}|\Psi_j\rangle=|\eta_{j+1}\rangle_{j}\otimes|\Psi_{j+1}\rangle, \end{equation} where \begin{equation} \Pi_{0}^{j}=|0\rangle_{j}\langle 0|,\;\;\;\; \Pi_{1}^{j}=|1\rangle_{j}\langle 1|, \end{equation} we readily find that $|\Psi_{j+1}\rangle$ has the form \begin{equation} |\Psi_{j+1}\rangle=\bigotimes_{k=j+1}^{+\infty}|\alpha_{k}\rangle_{k}\otimes|\psi_{j+1}\rangle \end{equation} with $|\psi_{j+1}\rangle$ given by (\ref{recur}), which ends the proof. \subsection{Homodyne detection} Now we describe the evolution conditioned on the results of the measurements of the observables \begin{equation}\label{obs2} \sigma_{k}^{x}= \sigma_{k}^{+}+\sigma_{k}^{-}=|+\rangle_{k}\langle+|-|-\rangle_{k}\langle-| ,\;\;\;k=0,1,2,\ldots, \end{equation} where \begin{eqnarray}\label{xbase} |+\rangle_{k} &=&\frac{1}{\sqrt{2}}\left(|0\rangle_{k}+|1\rangle_{k}\right),\\ |-\rangle_{k} &=&\frac{1}{\sqrt{2}}\left(|0\rangle_{k}-|1\rangle_{k}\right), \end{eqnarray} are vectors from the Hilbert space $\mathcal{H}_{\mathcal{E},k}$. 
\begin{Theorem}\label{TH-2} The conditional state of $\mathcal{S}$ and the part of the environment which has not interacted with $\mathcal{S}$ up to $j\tau$ for the initial state (\ref{ini1}) and the measurement of (\ref{obs2}) at the moment $j\tau$ is given by \begin{equation}\label{condb1} |\tilde{\Psi}_{j}\rangle = \frac{|\Psi_{j}\rangle}{\sqrt{\langle\Psi_{j}|\Psi_{j}\rangle}}, \end{equation} where \begin{equation}\label{condb2} |\Psi_{j}\rangle=\bigotimes_{k=j}^{+\infty}|\alpha_{k}\rangle_{k}\otimes|\psi_{j}\rangle \end{equation} and the conditional vector $|\psi_{j}\rangle$ from $\mathcal{H}_{S}$ satisfies the recurrence formula \begin{equation}\label{recurb} |\psi_{j+1}\rangle=R_{\zeta_{j+1}}^{j}|\psi_{j}\rangle, \end{equation} where $\zeta_{j+1}=\pm 1$ stands for a random variable describing the $(j+1)$-th output of (\ref{obs2}), and \begin{eqnarray}\label{Mq} R_{\zeta_{j+1}}^{j}=\frac{1}{\sqrt{2}}\left[\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j}+\frac{|\alpha_{j}|^2}{2}\right)\tau +(L+\alpha_{j})\zeta_{j+1}\sqrt{\tau}+O(\tau^{3/2})\right]. \end{eqnarray} Initially $|\psi_{j=0}\rangle=|\psi\rangle$ such that $|\tilde{\Psi}_{j=0}\rangle=|\alpha\rangle\otimes|\psi\rangle$. \end{Theorem} {\it Proof}.
Assuming that (\ref{condb2}) holds we get \begin{eqnarray}\label{action2} V_{j}|\Psi_{j}\rangle&=&\frac{1}{\sqrt{2}}|+\rangle_{j}\otimes\bigotimes_{k=j+1}^{+\infty}|\alpha_{k}\rangle_{k}\otimes \left\{\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j} +\frac{|\alpha_{j}|^2}{2}\right)\tau\right.\nonumber\\&&\left.+\left(L+\alpha_{j}\right)\sqrt{\tau}+O(\tau^{3/2})\right\}|\psi_{j}\rangle\nonumber\\ &&+\frac{1}{\sqrt{2}}|-\rangle_{j}\otimes\bigotimes_{k=j+1}^{+\infty}|\alpha_{k}\rangle_{k}\otimes \left\{\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j} +\frac{|\alpha_{j}|^2}{2}\right)\tau\right.\nonumber\\&&\left.-\left(L+\alpha_{j}\right)\sqrt{\tau}+O(\tau^{3/2})\right\} |\psi_{j}\rangle. \end{eqnarray} The conditional vector $|\Psi_{j+1}\rangle$ from the Hilbert space $\displaystyle{\bigotimes_{k=j+1}^{+\infty}}\mathcal{H}_{\mathcal{E},k}\otimes \mathcal{H}_{S}$ satisfies for the measurement of (\ref{obs2}) the equation \begin{equation} \left(\Pi_{\zeta_{j+1}}^{j}\otimes\displaystyle{\bigotimes_{k=j+1}^{+\infty}}\mathbbm{1}_{k}\otimes \mathbbm{1}_{S}\right)V_{j}|\Psi_j\rangle=| \zeta_{j+1}\rangle_{j}\otimes|\Psi_{j+1}\rangle, \end{equation} where $\zeta_{j+1}$ has two possible values $\pm 1$, and \begin{equation} \Pi_{+1}^{j}=|+\rangle_{j}\langle +|,\;\;\;\; \Pi_{-1}^{j}=|-\rangle_{j}\langle -|. \end{equation} It is seen that $|\Psi_{j+1}\rangle$ has the form of (\ref{condb2}) and the vector $|\psi_{j+1}\rangle$ from $\mathcal{H}_{\mathcal{S}}$ satisfies the recurrence formula (\ref{recurb}) with the operator (\ref{Mq}), which ends the proof.
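Both recurrences are straightforward to iterate numerically. The following minimal sketch (the choices of $H_{\mathcal{S}}$, $L$ and the constant amplitude $\alpha_{j}$ are illustrative and not taken from the paper) simulates a photon-counting trajectory of Theorem \ref{TH-1} for a driven two-level system, detecting a photon at each step with probability $\|M_{1}^{j}\psi_{j}\|^{2}$ and renormalizing the conditional vector:

```python
import numpy as np

# Minimal sketch of the photon-counting recurrence for a driven two-level
# system; H_S, L and the constant amplitude alpha are illustrative choices.
rng = np.random.default_rng(0)
tau, n_steps = 1e-3, 2000
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)          # H_S
L = np.sqrt(0.8) * np.array([[0, 1], [0, 0]], dtype=complex)  # sqrt(kappa) sigma_-
alpha = 0.5                                                   # alpha_j, constant
I2 = np.eye(2, dtype=complex)

M0 = I2 - (1j * H + 0.5 * L.conj().T @ L
           + alpha * L.conj().T + 0.5 * abs(alpha)**2 * I2) * tau
M1 = (L + alpha * I2) * np.sqrt(tau)

psi = np.array([1, 0], dtype=complex)      # |psi_0> = ground state
counts = 0
for _ in range(n_steps):
    p1 = np.linalg.norm(M1 @ psi)**2       # detection probability at this step
    if rng.random() < p1:
        psi, counts = M1 @ psi, counts + 1
    else:
        psi = M0 @ psi
    psi = psi / np.linalg.norm(psi)        # conditional state, renormalized

print(counts, np.linalg.norm(psi))
```

The homodyne recurrence of Theorem \ref{TH-2} can be simulated in the same way, sampling $\zeta_{j+1}=\pm 1$ with the weights $\|R_{\pm 1}^{j}\psi_{j}\|^{2}$.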
\section{Quantum trajectories for a superposition of coherent states} Let us assume that the initial state of the compound $\mathcal{E}+\mathcal{S}$ system is given by \begin{equation}\label{ini2} \left(c_{\alpha}|\alpha\rangle+c_{\beta}|\beta\rangle\right)\otimes|\psi\rangle, \end{equation} where $|\alpha\rangle$ and $|\beta\rangle$ are coherent states of $\mathcal{H}_{\mathcal{E}}$, and \begin{equation} |c_{\alpha}|^2+c_{\alpha}c_{\beta}^{\ast}\langle\beta|\alpha\rangle+c_{\alpha}^{\ast}c_{\beta}\langle\alpha|\beta\rangle+|c_{\beta}|^2=1. \end{equation} Note that in this case the bath qubits are prepared in an entangled state. \subsection{Photon counting} \begin{Theorem}\label{TH-3} The conditional state of $\mathcal{S}$ and the part of the environment which has not interacted with $\mathcal{S}$ up to $j\tau$ for the initial state (\ref{ini2}) and the measurement of (\ref{obser}) at the moment $j\tau$ is given by \begin{equation}\label{cond2} |\tilde{\Psi}_{j}\rangle = \frac{|\Psi_{j}\rangle}{\sqrt{\langle\Psi_{j}|\Psi_{j}\rangle}}, \end{equation} where \begin{equation}\label{cond4} |\Psi_{j}\rangle=c_{\alpha}\bigotimes_{k=j}^{+\infty}|\alpha_{k}\rangle_{k}\otimes|\psi_{j}\rangle+ c_{\beta}\bigotimes_{k=j}^{+\infty}|\beta_{k}\rangle_{k}\otimes|\varphi_{j}\rangle.
\end{equation} The conditional vectors $|\psi_{j}\rangle$, $|\varphi_{j}\rangle$ from $\mathcal{H}_{\mathcal{S}}$ in (\ref{cond4}) are given by the recurrence formulas \begin{equation}\label{rec1} |\psi_{j+1}\rangle=M_{\eta_{j+1}}^{\alpha_{j}}|\psi_{j}\rangle, \end{equation} \begin{equation}\label{rec2} |\varphi_{j+1}\rangle=M_{\eta_{j+1}}^{\beta_{j}}|\varphi_{j}\rangle, \end{equation} where $\eta_{j+1}=0,1$ stands for a random variable describing the $(j+1)$-th output of (\ref{obser}), and \begin{equation} M_{0}^{\alpha_{j}}=\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j}+\frac{|\alpha_{j}|^2}{2}\right)\tau +O(\tau^2), \end{equation} \begin{equation} M_{0}^{\beta_{j}}=\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\beta_{j}+\frac{|\beta_{j}|^2}{2}\right)\tau +O(\tau^2), \end{equation} \begin{equation} M_{1}^{\alpha_{j}}=\left(L+\alpha_{j}\right)\sqrt{\tau}+O(\tau^{3/2}), \end{equation} \begin{equation} M_{1}^{\beta_{j}}=\left(L+\beta_{j}\right)\sqrt{\tau}+O(\tau^{3/2}), \end{equation} and initially we have $|\psi_{0}\rangle=|\varphi_{0}\rangle=|\psi\rangle$. \end{Theorem} \noindent {\it Proof}. The proof is straightforward. We simply refer to the results of the previous Section and the linearity of the evolution of the total system. Let us notice that the form of $|\Psi_{j}\rangle$ indicates that the system $\mathcal{S}$ becomes entangled with the part of the environment which has not interacted with $\mathcal{S}$ yet.
Taking the partial trace of the operator $|{\Psi}_{j}\rangle\langle{\Psi}_{j}|$ over $\mathcal{S}$ we get the unnormalized state of the environment of the form \begin{eqnarray} \rho^{field}_{j}&=&|c_{\alpha}|^2\bigotimes_{k=j}^{+\infty}|\alpha_{k}\rangle_{k}\langle\alpha_{k}|\langle\psi_{j}|\psi_{j}\rangle +c_{\alpha}c_{\beta}^{\ast}\bigotimes_{k=j}^{+\infty}|\alpha_{k}\rangle_{k}\langle\beta_{k}|\langle \varphi_{j}|\psi_{j}\rangle\nonumber\\ &&+c_{\alpha}^{\ast}c_{\beta} \bigotimes_{k=j}^{+\infty}|\beta_{k}\rangle_{k}\langle\alpha_{k}|\langle \psi_{j}|\varphi_{j}\rangle +|c_{\beta}|^2\bigotimes_{k=j}^{+\infty}|\beta_{k}\rangle_{k} \langle\beta_{k}|\langle\varphi_{j}|\varphi_{j}\rangle. \end{eqnarray} The operator $\rho^{field}_{j}$ describes the conditional state of the part of the environment which has not interacted with $\mathcal{S}$ yet. It depends on all results of the measurements performed on the bath qubits up to $j\tau$. Therefore, we can say that the results of the measurements change our knowledge about the state of the future part of the environment. In order to obtain the conditional state of $\mathcal{S}$ one has to take the partial trace of $|\tilde{\Psi}_{j}\rangle\langle\tilde{\Psi}_{j}|$ over the environment. One can check that the {\it a posteriori} state of $\mathcal{S}$ at the time $j\tau$ has the form \begin{equation} \tilde{\rho}_{j} =\frac{\rho_{j}}{\mathrm{Tr}\rho_{j}}, \end{equation} where \begin{eqnarray} \rho_{j}&=&|c_{\alpha}|^2|\psi_{j}\rangle\langle\psi_{j}|+c_{\alpha}c_{\beta}^{\ast}\prod_{k=j}^{+\infty}\langle \beta_{k}|\alpha_{k}\rangle|\psi_{j}\rangle\langle\varphi_{j}|+c_{\alpha}^{\ast}c_{\beta}\prod_{k=j}^{+\infty}\langle \alpha_{k}|\beta_{k}\rangle|\varphi_{j}\rangle\langle\psi_{j}|+|c_{\beta}|^2|\varphi_{j}\rangle\langle\varphi_{j}| \end{eqnarray} and $\mathrm{Tr}\rho_{j}$ is the probability of a particular trajectory.
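A short numerical sketch (with arbitrary illustrative branch vectors and overlap, not taken from the paper) confirms that the operator $\rho_{j}$ assembled from the two branches is a legitimate state of $\mathcal{S}$ after normalization, i.e. positive and of unit trace:

```python
import numpy as np

# Sketch with arbitrary illustrative data: the operator rho_j built from the
# two branch vectors and the residual overlap prod_{k>=j} <beta_k|alpha_k>
# is positive and has unit trace after normalization.
c_a, c_b = 0.6, 0.8                          # c_alpha, c_beta taken real here
overlap = 0.9                                # stands for prod_{k>=j} <beta_k|alpha_k>
psi = np.array([0.8, 0.6], dtype=complex)    # |psi_j>
phi = np.array([0.6, 0.8j], dtype=complex)   # |varphi_j>

rho = (c_a**2 * np.outer(psi, psi.conj())
       + c_a * c_b * overlap * np.outer(psi, phi.conj())
       + c_a * c_b * np.conj(overlap) * np.outer(phi, psi.conj())
       + c_b**2 * np.outer(phi, phi.conj()))

rho_tilde = rho / np.trace(rho)              # a posteriori state of S
print(np.trace(rho_tilde).real)                       # equals 1 up to rounding
print(np.linalg.eigvalsh(rho_tilde).min() >= -1e-12)  # positive semidefinite
```

Positivity holds for any $|\mathrm{overlap}|\leq 1$, since $\rho_{j}$ is then the partial trace of a rank-one projector.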
To derive the set of recurrence equations describing the stochastic evolution of $\mathcal{S}$ it is convenient to write down the conditional state of $\mathcal{S}$ at $j\tau$ in the form \begin{equation}\label{Filter} \tilde{\rho}_{j}= |c_{\alpha}|^2\tilde{\rho}_{j}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}\tilde{\rho}_{j}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}\tilde{\rho}_{j}^{\beta\alpha} +|c_{\beta}|^2\tilde{\rho}_{j}^{\beta\beta}, \end{equation} where \begin{equation}\label{oper1} \tilde{\rho}_{j}^{\alpha\alpha}=\frac{1}{\mathrm{Tr}\rho_{j}}|\psi_{j}\rangle\langle\psi_{j}|, \end{equation} \begin{equation}\label{oper2} \tilde{\rho}_{j}^{\alpha\beta}=\frac{\displaystyle{\prod_{k=j}^{+\infty}} {}_{k}\langle \beta_{k}|\alpha_{k}\rangle_{k}}{\mathrm{Tr}\rho_{j}}|\psi_{j}\rangle\langle\varphi_{j}|, \end{equation} \begin{equation}\label{oper3} \tilde{\rho}_{j}^{\beta\alpha}=\frac{\displaystyle{\prod_{k=j}^{+\infty}} {}_{k}\langle \alpha_{k}|\beta_{k}\rangle_{k}}{\mathrm{Tr}\rho_{j}}|\varphi_{j}\rangle\langle\psi_{j}|, \end{equation} \begin{equation}\label{oper4} \tilde{\rho}_{j}^{\beta\beta}=\frac{1}{\mathrm{Tr}\rho_{j}}|\varphi_{j}\rangle\langle\varphi_{j}|. \end{equation} In our derivation we will repeatedly use the formula \begin{equation} \prod_{k=j}^{+\infty} {}_{k}\langle \beta_{k}|\alpha_{k}\rangle_{k}= \prod_{k=j+1}^{+\infty} {}_{k}\langle \beta_{k}|\alpha_{k}\rangle_{k}\left(1-\frac{1}{2}\left(|\alpha_{j}|^2+|\beta_{j}|^2-2\alpha_{j}\beta_{j}^{\ast}\right)\tau+O(\tau^2)\right) \end{equation} following from \begin{equation} _{k}\langle \beta_{k}|\alpha_{k}\rangle_{k}=1-\frac{1}{2}\left(|\alpha_{k}|^2+|\beta_{k}|^2-2\alpha_{k}\beta_{k}^{\ast}\right)\tau+O(\tau^2).
\end{equation} Let us notice first that the conditional operator $\rho_{j+1}$ is given by the recurrence formula \begin{eqnarray}\label{filter2} \rho_{j+1}&=&|c_{\alpha}|^2M_{\eta_{j+1}}^{\alpha_{j}}|\psi_{j}\rangle\langle\psi_{j}|M_{\eta_{j+1}}^{\alpha_{j} \dagger}+c_{\alpha}c_{\beta}^{\ast}\prod_{k=j+1}^{+\infty}\langle \beta_{k}|\alpha_{k}\rangle M_{\eta_{j+1}}^{\alpha_{j}}|\psi_{j}\rangle\langle\varphi_{j}|M_{\eta_{j+1}}^{\beta_{j}\dagger} \nonumber\\ &&+c_{\alpha}^{\ast}c_{\beta}\prod_{k=j+1}^{+\infty}\langle \alpha_{k}|\beta_{k}\rangle M_{\eta_{j+1}}^{\beta_{j}}|\varphi_{j}\rangle\langle\psi_{j}|M_{\eta_{j+1}}^{\alpha_{j}\dagger} +|c_{\beta}|^2M_{\eta_{j+1}}^{\beta_{j}}|\varphi_{j}\rangle\langle\varphi_{j}|M_{\eta_{j+1}}^{\beta_{j}\dagger}, \end{eqnarray} where $\eta_{j+1}$ stands for the random variable having two possible values: $0$, $1$. Let us note that in order to determine $\tilde{\rho}_{j+1}$ we need to know the operators (\ref{oper1})--(\ref{oper4}) at the moment $j\tau$ and the result of the next measurement. When the result of the measurement is $0$, then we obtain from Eqs. 
(\ref{rec1}) and (\ref{rec2}) the following set of discrete equations \begin{eqnarray} |\psi_{j+1}\rangle\langle\psi_{j+1}|&=&|\psi_{j}\rangle\langle\psi_{j}| -i[H_{\mathcal{S}},|\psi_{j}\rangle\langle\psi_{j}|]\tau-\frac{1}{2}\left\{L^{\dagger}L,|\psi_{j}\rangle\langle\psi_{j}|\right\}\tau\nonumber\\ &&-|\psi_{j}\rangle\langle\psi_{j}|L\alpha_{j}^{\ast}\tau -L^{\dagger}|\psi_{j}\rangle\langle\psi_{j}|\alpha_{j}\tau-|\psi_{j}\rangle\langle\psi_{j}||\alpha_{j}|^2\tau+O(\tau^2), \end{eqnarray} \begin{eqnarray} |\psi_{j+1}\rangle\langle\varphi_{j+1}|&=&|\psi_{j}\rangle\langle\varphi_{j}| \left(1-\frac{1}{2}\left(|\alpha_{j}|^2+|\beta_{j}|^2\right)\tau\right)\nonumber\\ &&-i[H_{\mathcal{S}},|\psi_{j}\rangle\langle\varphi_{j}|]\tau-\frac{1}{2}\left\{L^{\dagger}L,|\psi_{j}\rangle\langle\varphi_{j}|\right\}\tau\nonumber\\ &&-|\psi_{j}\rangle\langle\varphi_{j}|L\beta_{j}^{\ast}\tau -L^{\dagger}|\psi_{j}\rangle\langle\varphi_{j}|\alpha_{j}\tau+O(\tau^2), \end{eqnarray} \begin{eqnarray} |\varphi_{j+1}\rangle\langle\varphi_{j+1}|&=&|\varphi_{j}\rangle\langle\varphi_{j}| -i[H_{\mathcal{S}},|\varphi_{j}\rangle\langle\varphi_{j}|]\tau-\frac{1}{2}\left\{L^{\dagger}L,|\varphi_{j}\rangle\langle\varphi_{j}|\right\}\tau\nonumber\\ &&-|\varphi_{j}\rangle\langle\varphi_{j}|L\beta_{j}^{\ast}\tau-L^{\dagger}|\varphi_{j}\rangle\langle\varphi_{j}|\beta_{j}\tau-|\varphi_{j}\rangle\langle\varphi_{j}||\beta_{j}|^2\tau+O(\tau^2). \end{eqnarray} The conditional probability of the outcome $0$ at the moment $(j+1)\tau$ when the {\it a posteriori} state of $\mathcal{S}$ at $j\tau$ was $\tilde{\rho}_{j}$ is defined as \begin{equation} p_{j+1}(0|\tilde{\rho}_{j})=\frac{\mathrm{Tr}{\rho}_{j+1}}{\mathrm{Tr}{\rho}_{j}}, \end{equation} where $\rho_{j+1}$ is given by (\ref{filter2}) for $\eta_{j+1}=0$.
Hence, we obtain the formula \begin{equation} p_{j+1}(0|\tilde{\rho}_{j})=1-\nu_{j}\tau+O(\tau^2), \end{equation} where \begin{equation}\label{intensity} \nu_{j}=|c_{\alpha}|^2\nu_{j}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}\nu_{j}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}\nu_{j}^{\beta\alpha}+|c_{\beta}|^2\nu_{j}^{\beta\beta}, \end{equation} \begin{equation} \nu_{j}^{\alpha\alpha}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\alpha_{j}^{\ast}+L^{\dagger}\alpha_{j} +|\alpha_{j}|^2\right)\tilde{\rho}_{j}^{\alpha\alpha}\right], \end{equation} \begin{equation} \nu_{j}^{\alpha\beta}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\beta_{j}^{\ast}+L^{\dagger}\alpha_{j} +\alpha_{j}\beta_{j}^{\ast}\right)\tilde{\rho}_{j}^{\alpha\beta}\right], \end{equation} \begin{equation} \nu_{j}^{\beta\alpha}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\alpha_{j}^{\ast}+L^{\dagger}\beta_{j} +\alpha_{j}^{\ast}\beta_{j}\right)\tilde{\rho}_{j}^{\beta\alpha}\right], \end{equation} \begin{equation} \nu_{j}^{\beta\beta}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\beta_{j}^{\ast}+L^{\dagger}\beta_{j} +|\beta_{j}|^2\right)\tilde{\rho}_{j}^{\beta\beta}\right]. 
\end{equation} Now, making use of the fact that \begin{equation} \frac{1}{\mathrm{Tr}\rho_{j+1}}=\frac{1}{\mathrm{Tr}\rho_{j}} \left(1+\nu_{j} \tau+O(\tau^2)\right), \end{equation} we obtain the set of difference equations \begin{eqnarray}\label{rec0a} \tilde{\rho}_{j+1}^{\alpha\alpha}-\tilde{\rho}_{j}^{\alpha\alpha}&=&\tilde{\rho}_{j}^{\alpha\alpha}\nu_{j}\tau -i[H_{\mathcal{S}},\tilde{\rho}_{j}^{\alpha\alpha}]\tau-\frac{1}{2}\left\{L^{\dagger}L,\tilde{\rho}_{j}^{\alpha\alpha}\right\}\tau -\tilde{\rho}_{j}^{\alpha\alpha}L\alpha_{j}^{\ast}\tau \nonumber\\&& -L^{\dagger}\tilde{\rho}_{j}^{\alpha\alpha}\alpha_{j}\tau-\tilde{\rho}_{j}^{\alpha\alpha}|\alpha_{j}|^2\tau+O(\tau^2), \end{eqnarray} \begin{eqnarray}\label{rec0b} \tilde{\rho}_{j+1}^{\alpha\beta}-\tilde{\rho}_{j}^{\alpha\beta}&=&\tilde{\rho}_{j}^{\alpha\beta}\nu_{j}\tau -i[H_{\mathcal{S}},\tilde{\rho}_{j}^{\alpha\beta}]\tau -\frac{1}{2}\left\{L^{\dagger}L,\tilde{\rho}_{j}^{\alpha\beta}\right\}\tau -\tilde{\rho}_{j}^{\alpha\beta}L\beta_{j}^{\ast}\tau \nonumber\\&&-L^{\dagger}\tilde{\rho}_{j}^{\alpha\beta}\alpha_{j}\tau-\tilde{\rho}_{j}^{\alpha\beta}\beta_{j}^{\ast}\alpha_{j}\tau+O(\tau^2), \end{eqnarray} \begin{eqnarray}\label{rec0c} \tilde{\rho}_{j+1}^{\beta\beta}-\tilde{\rho}_{j}^{\beta\beta}&=&\tilde{\rho}_{j}^{\beta\beta}\nu_{j}\tau -i[H_{\mathcal{S}},\tilde{\rho}_{j}^{\beta\beta}]\tau -\frac{1}{2}\left\{L^{\dagger}L,\tilde{\rho}_{j}^{\beta\beta}\right\}\tau -\tilde{\rho}_{j}^{\beta\beta}L\beta_{j}^{\ast}\tau \nonumber\\&&-L^{\dagger}\tilde{\rho}_{j}^{\beta\beta}\beta_{j}\tau -\tilde{\rho}_{j}^{\beta\beta}|\beta_{j}|^2\tau+O(\tau^2). \end{eqnarray} The equation for the operator $\tilde{\rho}_{j}^{\beta\alpha}$ can be obtained using the fact that $\tilde{\rho}_{j}^{\beta\alpha}=\left(\tilde{\rho}_{j}^{\alpha\beta}\right)^{\dagger}$.
When the result of the measurement at the moment $(j+1)\tau$ is $1$, we get the following recurrence formulas \begin{eqnarray} \lefteqn{|\psi_{j+1}\rangle\langle\psi_{j+1}|=\left(L|\psi_{j}\rangle\langle\psi_{j}|L^{\dagger}+L|\psi_{j}\rangle\langle\psi_{j}| \alpha_{j}^{\ast}\right.}\nonumber\\ &&\left.+|\psi_{j}\rangle\langle\psi_{j}|L^{\dagger}\alpha_{j} +|\psi_{j}\rangle\langle\psi_{j}||\alpha_{j}|^2\right)\tau+O(\tau^2), \end{eqnarray} \begin{eqnarray} \lefteqn{|\psi_{j+1}\rangle\langle\varphi_{j+1}|=\left(L|\psi_{j}\rangle\langle\varphi_{j}|L^{\dagger}+ L|\psi_{j}\rangle\langle\varphi_{j}|\beta_{j}^{\ast}\right.}\nonumber\\ &&\left.+|\psi_{j}\rangle\langle\varphi_{j}| L^{\dagger}\alpha_{j} +|\psi_{j}\rangle\langle\varphi_{j}|\alpha_{j}\beta_{j}^{\ast}\right)\tau+O(\tau^2), \end{eqnarray} \begin{eqnarray} \lefteqn{|\varphi_{j+1}\rangle\langle\varphi_{j+1}|=\left(L|\varphi_{j}\rangle\langle\varphi_{j}|L^{\dagger}+ L|\varphi_{j}\rangle\langle\varphi_{j}|\beta_{j}^{\ast}\right.}\nonumber\\ &&\left.+|\varphi_{j}\rangle\langle\varphi_{j}| L^{\dagger}\beta_{j} +|\varphi_{j}\rangle\langle\varphi_{j}||\beta_{j}|^2\right)\tau+O(\tau^2). \end{eqnarray} The conditional probability of the outcome $1$ at the moment $(j+1)\tau$ when the {\it a posteriori} state of $\mathcal{S}$ at the moment $j\tau$ was $\tilde{\rho}_{j}$ is defined by \begin{equation} p_{j+1}(1|\tilde{\rho}_{j})=\frac{\mathrm{Tr}{\rho}_{j+1}}{\mathrm{Tr}{\rho}_{j}}, \end{equation} where $\rho_{j+1}$ is given by (\ref{filter2}) with $\eta_{j+1}=1$. One can check that \begin{equation} p_{j+1}(1|\tilde{\rho}_{j})=\nu_{j}\tau+O(\tau^2), \end{equation} where the conditional intensity $\nu_{j}$ is defined by (\ref{intensity}).
So for the result $1$ we find that \begin{eqnarray}\label{rec1a} \tilde{\rho}_{j+1}^{\alpha\alpha}=\frac{1}{\nu_{j}}\left(L\tilde{\rho}_{j}^{\alpha\alpha}L^{\dagger}+L\tilde{\rho}_{j}^{\alpha\alpha} \alpha_{j}^{\ast}+\tilde{\rho}_{j}^{\alpha\alpha}L^{\dagger}\alpha_{j} +\tilde{\rho}_{j}^{\alpha\alpha}|\alpha_{j}|^2\right)+O(\tau), \end{eqnarray} \begin{eqnarray}\label{rec1b} \tilde{\rho}_{j+1}^{\alpha\beta}=\frac{1}{\nu_{j}}\left(L\tilde{\rho}_{j}^{\alpha\beta}L^{\dagger}+ L\tilde{\rho}_{j}^{\alpha\beta}\beta_{j}^{\ast}+\tilde{\rho}_{j}^{\alpha\beta} L^{\dagger}\alpha_{j} +\tilde{\rho}_{j}^{\alpha\beta}\alpha_{j}\beta_{j}^{\ast}\right)+O(\tau), \end{eqnarray} \begin{eqnarray}\label{rec1c} \tilde{\rho}_{j+1}^{\beta\beta}=\frac{1}{\nu_{j}}\left(L\tilde{\rho}_{j}^{\beta\beta}L^{\dagger}+ L\tilde{\rho}_{j}^{\beta\beta}\beta_{j}^{\ast}+\tilde{\rho}_{j}^{\beta\beta} L^{\dagger}\beta_{j} +\tilde{\rho}_{j}^{\beta\beta}|\beta_{j}|^2\right)+O(\tau). \end{eqnarray} Let us now introduce the discrete stochastic process \begin{equation} n_{j}=\sum_{k=1}^{j}\eta_{k}, \end{equation} with the increment \begin{equation} \Delta n_{j}=n_{j+1}-n_{j}. \end{equation} One can check that the conditional expectation satisfies \begin{equation} \mathbbm{E}[\Delta n_{j}|\tilde{\rho}_{j}]=\nu_{j}\tau+O(\tau^2). \end{equation} Finally, by combining Eqs. (\ref{rec0a})--(\ref{rec0c}) with Eqs.
(\ref{rec1a})--(\ref{rec1c}), we obtain the set of stochastic difference equations \begin{eqnarray}\label{filter3} \tilde{\rho}_{j+1}^{\alpha\alpha}-\tilde{\rho}_{j}^{\alpha\alpha}&=&\mathcal{L}\tilde{\rho}_{j}^{\alpha\alpha}\tau+[\tilde{\rho}_{j}^{\alpha\alpha},L^{\dagger}]\alpha_{j}\tau +[L,\tilde{\rho}_{j}^{\alpha\alpha}]\alpha^{\ast}_{j}\tau\nonumber\\ &&+\bigg\{\frac{1}{\nu_{j}}\left( L\tilde{\rho}_{j}^{\alpha\alpha}L^{\dagger}+\tilde{\rho}_{j}^{\alpha\alpha}L^{\dagger}\alpha_{j} +L\tilde{\rho}_{j}^{\alpha\alpha}\alpha^{\ast}_{j} \right.\nonumber\\ &&\left.+\tilde{\rho}_{j}^{\alpha\alpha}|\alpha_{j}|^2\right) -\tilde{\rho}_{j}^{\alpha\alpha}\bigg\}\left(\Delta n_{j}-\nu_{j}\tau\right), \end{eqnarray} \begin{eqnarray}\label{filter4} \tilde{\rho}_{j+1}^{\alpha\beta}-\tilde{\rho}_{j}^{\alpha\beta}&=&\mathcal{L}\tilde{\rho}_{j}^{\alpha\beta}\tau+[\tilde{\rho}_{j}^{\alpha\beta},L^{\dagger}]\alpha_{j}\tau +[L,\tilde{\rho}_{j}^{\alpha\beta}]\beta^{\ast}_{j}\tau \nonumber\\ &&+\bigg\{\frac{1}{\nu_{j}}\left( L\tilde{\rho}_{j}^{\alpha\beta}L^{\dagger}+\tilde{\rho}_{j}^{\alpha\beta}L^{\dagger}\alpha_{j} +L\tilde{\rho}_{j}^{\alpha\beta}\beta^{\ast}_{j}\right.\nonumber\\ &&\left. +\tilde{\rho}_{j}^{\alpha\beta}\beta_{j}^{\ast}\alpha_{j}\right) -\tilde{\rho}_{j}^{\alpha\beta}\bigg\}\left(\Delta n_{j}-\nu_{j}\tau\right), \end{eqnarray} \begin{eqnarray}\label{filter5} \tilde{\rho}_{j+1}^{\beta\beta}-\tilde{\rho}_{j}^{\beta\beta}&=& \mathcal{L}\tilde{\rho}_{j}^{\beta\beta}\tau+[\tilde{\rho}_{j}^{\beta\beta},L^{\dagger}]\beta_{j}\tau +[L,\tilde{\rho}_{j}^{\beta\beta}]\beta^{\ast}_{j}\tau\nonumber\\ &&+\bigg\{\frac{1}{\nu_{j}}\left( L\tilde{\rho}_{j}^{\beta\beta}L^{\dagger}+\tilde{\rho}_{j}^{\beta\beta} L^{\dagger}\beta_{j}+L\tilde{\rho}_{j}^{\beta\beta}\beta^{\ast}_{j}\right.\nonumber\\ &&\left.
+\tilde{\rho}_{j}^{\beta\beta}|\beta_{j}|^2\right) -\tilde{\rho}_{j}^{\beta\beta}\bigg\}\left(\Delta n_{j}-\nu_{j}\tau\right), \end{eqnarray} where \begin{equation}\label{superop} \mathcal{L}\rho=-i[H_{\mathcal{S}},\rho] -\frac{1}{2}\left\{L^{\dagger}L,\rho\right\} +L\rho L^{\dagger} \end{equation} and the initial condition $\tilde{\rho}_{0}^{\alpha\alpha}=\tilde{\rho}_{0}^{\beta\beta}=|\psi\rangle\langle\psi|$, $\tilde{\rho}_{0}^{\alpha\beta}=\langle\beta|\alpha\rangle|\psi\rangle\langle\psi|$. We dropped here all terms that do not contribute to the continuous time limit when $\tau\to dt$. Note that when $\Delta n_{j}$ is equal to $0$, then Eqs. (\ref{filter3})--(\ref{filter5}) reduce to Eqs. (\ref{rec0a})--(\ref{rec0c}), and when $\Delta n_{j}$ is equal to $1$, then all the terms proportional to $\tau$ in Eqs. (\ref{filter3})--(\ref{filter5}) are negligible and we obtain the formulas (\ref{rec1a})--(\ref{rec1c}). To obtain the continuous-time evolution of $\mathcal{S}$ we fix an arbitrary time $t=j\tau$ and take the limit $j\to+\infty$, $\tau\to 0$, with $t$ held fixed.
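Before passing to the limit, the discrete scheme can be iterated directly. The following sketch (all parameters are illustrative and not taken from the paper; the pulse amplitudes are assumed constant on a finite horizon) propagates the pair of branch vectors of Theorem \ref{TH-3} together with the residual overlap, sampling a count at each step with probability $\nu_{j}\tau$ computed from the full conditional state:

```python
import numpy as np

# Illustrative sketch: a photon-counting trajectory for c_a|alpha> + c_b|beta>,
# iterating branch vectors |psi_j>, |phi_j> with the residual overlap
# o_j = prod_{k>=j} <beta_k|alpha_k> (amplitudes supported on n_steps qubits)
# and renormalizing the total trace at each step.
rng = np.random.default_rng(2)
tau, n_steps = 1e-3, 1000
H = np.zeros((2, 2), dtype=complex)            # H_S = 0 for simplicity
L = np.array([[0, 1], [0, 0]], dtype=complex)  # L = sigma_-
a, b = 0.6, -0.4                               # constant real alpha_j, beta_j
c_a = c_b = 1 / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

M0a = I2 - (1j * H + 0.5 * L.conj().T @ L + a * L.conj().T + 0.5 * a**2 * I2) * tau
M0b = I2 - (1j * H + 0.5 * L.conj().T @ L + b * L.conj().T + 0.5 * b**2 * I2) * tau
M1a, M1b = (L + a * I2) * np.sqrt(tau), (L + b * I2) * np.sqrt(tau)

s = 1 - 0.5 * (a**2 + b**2 - 2 * a * b) * tau  # one-qubit overlap <beta_j|alpha_j>

def trace(psi, phi, o):
    # Tr rho_j for the two-branch conditional operator
    return (c_a**2 * np.vdot(psi, psi) + c_b**2 * np.vdot(phi, phi)
            + 2 * (c_a * c_b * o * np.vdot(phi, psi)).real).real

psi = np.array([1, 0], dtype=complex)
phi = psi.copy()
o = s**n_steps                                 # initial overlap prod_{k>=0}
t0 = np.sqrt(trace(psi, phi, o))
psi, phi = psi / t0, phi / t0

counts = 0
for _ in range(n_steps):
    o_next = o / s
    p1 = trace(M1a @ psi, M1b @ phi, o_next)   # count probability nu_j * tau
    if rng.random() < p1:
        psi, phi, counts = M1a @ psi, M1b @ phi, counts + 1
    else:
        psi, phi = M0a @ psi, M0b @ phi
    o = o_next
    t = np.sqrt(trace(psi, phi, o))
    psi, phi = psi / t, phi / t

print(counts, trace(psi, phi, o))              # trace stays 1 by construction
```

Both branches are propagated by the same measurement record, which is what generates the interference terms in the filtering equations.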
Thus in the continuous time limit we get from (\ref{filter3})--(\ref{filter5}) the set of the stochastic differential equations of the form \begin{eqnarray}\label{filter6} d\tilde{\rho}_{t}^{\alpha\alpha}&=&\mathcal{L}\tilde{\rho}_{t}^{\alpha\alpha}dt+[\tilde{\rho}_{t}^{\alpha\alpha},L^{\dagger}]\alpha_{t}dt +[L,\tilde{\rho}_{t}^{\alpha\alpha}]\alpha^{\ast}_{t}dt \nonumber\\ &&+\bigg\{\frac{1}{\nu_{t}}\left( L\tilde{\rho}_{t}^{\alpha\alpha}L^{\dagger}+\tilde{\rho}_{t}^{\alpha\alpha}L^{\dagger}\alpha_{t} +L\tilde{\rho}_{t}^{\alpha\alpha}\alpha^{\ast}_{t} \right.\nonumber\\ &&\left.+\tilde{\rho}_{t}^{\alpha\alpha}|\alpha_{t}|^2\right) -\tilde{\rho}_{t}^{\alpha\alpha}\bigg\}\left(dn_{t}-\nu_{t}dt\right), \end{eqnarray} \begin{eqnarray}\label{filter7} d\tilde{\rho}_{t}^{\alpha\beta}&=&\mathcal{L}\tilde{\rho}_{t}^{\alpha\beta}dt +[\tilde{\rho}_{t}^{\alpha\beta},L^{\dagger}]\alpha_{t}dt +[L,\tilde{\rho}_{t}^{\alpha\beta}]\beta^{\ast}_{t}dt \nonumber\\ &&+\bigg\{\frac{1}{\nu_{t}}\left( L\tilde{\rho}_{t}^{\alpha\beta}L^{\dagger} +\tilde{\rho}_{t}^{\alpha\beta}L^{\dagger}\alpha_{t} +L\tilde{\rho}_{t}^{\alpha\beta}\beta^{\ast}_{t} \right.\nonumber\\ &&\left.+\tilde{\rho}_{t}^{\alpha\beta}\beta_{t}^{\ast}\alpha_{t}\right) -\tilde{\rho}_{t}^{\alpha\beta}\bigg\}\left(dn_{t}-\nu_{t}dt\right), \end{eqnarray} \begin{eqnarray}\label{filter8} d\tilde{\rho}_{t}^{\beta\beta}&=& \mathcal{L}\tilde{\rho}_{t}^{\beta\beta}dt+[\tilde{\rho}_{t}^{\beta\beta},L^{\dagger}]\beta_{t}dt +[L,\tilde{\rho}_{t}^{\beta\beta}]\beta^{\ast}_{t}dt \nonumber\\ &&+\bigg\{\frac{1}{\nu_{t}}\left( L\tilde{\rho}_{t}^{\beta\beta}L^{\dagger}+\tilde{\rho}_{t}^{\beta\beta} L^{\dagger}\beta_{t}+L\tilde{\rho}_{t}^{\beta\beta}\beta^{\ast}_{t} \right.\nonumber\\ &&\left.+\tilde{\rho}_{t}^{\beta\beta}|\beta_{t}|^2\right) -\tilde{\rho}_{t}^{\beta\beta}\bigg\}\left(dn_{t}-\nu_{t}dt\right), \end{eqnarray} and initially $\tilde{\rho}_{0}^{\alpha\alpha}=\tilde{\rho}_{0}^{\beta\beta}=|\psi\rangle\langle\psi|$, $\tilde{\rho}_{0}^{\alpha\beta}=\langle
\beta|\alpha\rangle|\psi\rangle\langle\psi|$. The stochastic process $n_{t}$ is defined as the continuous limit of the discrete process $n_{j}$. The It\^{o} table for $dn_{t}$ is $\left(dn_{t}\right)^2=dn_{t}$ (we can measure at most one photon in the interval of length $dt$) and $\mathbbm{E}\left[dn_{t}|\tilde{\rho}_{t}\right]=\nu_{t}dt$, where \begin{equation} \nu_{t}=|c_{\alpha}|^2\nu_{t}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}\nu_{t}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}\nu_{t}^{\beta\alpha}+|c_{\beta}|^2\nu_{t}^{\beta\beta}, \end{equation} \begin{equation} \nu_{t}^{\alpha\alpha}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\alpha_{t}^{\ast}+L^{\dagger}\alpha_{t} +|\alpha_{t}|^2\right)\tilde{\rho}_{t}^{\alpha\alpha}\right], \end{equation} \begin{equation} \nu_{t}^{\alpha\beta}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\beta_{t}^{\ast}+L^{\dagger}\alpha_{t} +\alpha_{t}\beta_{t}^{\ast}\right)\tilde{\rho}_{t}^{\alpha\beta}\right], \end{equation} \begin{equation} \nu_{t}^{\beta\alpha}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\alpha_{t}^{\ast}+L^{\dagger}\beta_{t} +\alpha_{t}^{\ast}\beta_{t}\right)\tilde{\rho}_{t}^{\beta\alpha}\right], \end{equation} \begin{equation} \nu_{t}^{\beta\beta}=\mathrm{Tr}\left[\left(L^{\dagger}L +L\beta_{t}^{\ast}+L^{\dagger}\beta_{t} +|\beta_{t}|^2\right)\tilde{\rho}_{t}^{\beta\beta}\right]. \end{equation} Moreover, the complex functions $\alpha_{t}$ and $\beta_{t}$ satisfy the conditions \begin{eqnarray} \int_{0}^{+\infty}|\alpha_{t}|^2dt<+\infty,\;\;\;\;\int_{0}^{+\infty}|\beta_{t}|^2dt<+\infty, \end{eqnarray} and \begin{equation} \langle \beta|\alpha\rangle=\exp\left\{-\frac{1}{2}\int_{0}^{+\infty}\left(|\alpha_{t}|^2+|\beta_{t}|^2-2\alpha_{t}\beta^{\ast}_{t}\right)dt\right\}. 
\end{equation} Thus, the {\it a posteriori} state of $\mathcal{S}$ is given by \begin{equation} \tilde{\rho}_{t}= |c_{\alpha}|^2\tilde{\rho}_{t}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}\tilde{\rho}_{t}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}\tilde{\rho}_{t}^{\beta\alpha} +|c_{\beta}|^2\tilde{\rho}_{t}^{\beta\beta}, \end{equation} where the conditional operators $\tilde{\rho}_{t}^{\alpha\alpha}$, $\tilde{\rho}_{t}^{\alpha\beta}$, $\tilde{\rho}_{t}^{\beta\beta}$ satisfy Eqs. (\ref{filter6})--(\ref{filter8}), and $\tilde{\rho}_{t}^{\beta\alpha}=\left(\tilde{\rho}_{t}^{\alpha\beta}\right)^{\dagger}$. One can check that $\mathrm{Tr}\tilde{\rho}_{t}=1$ for any $t\geq 0$. Equations (\ref{filter6})--(\ref{filter8}) agree with the stochastic master equations derived in \cite{GJN12b} (see Sec. IV in \cite{GJN12b}). When we average $\tilde{\rho}_{t}$ over all realizations of the stochastic process $n_{t}$ (all possible outcomes), we obtain the {\it a priori} evolution of the system $\mathcal{S}$.
One can check that the {\it a priori} state of $\mathcal{S}$ is described by \begin{equation} {\varrho}_{t}= |c_{\alpha}|^2{\varrho}_{t}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}{\varrho}_{t}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}{\varrho}_{t}^{\beta\alpha} +|c_{\beta}|^2{\varrho}_{t}^{\beta\beta}, \end{equation} where the operators ${\varrho}_{t}^{\alpha\alpha}$, ${\varrho}_{t}^{\alpha\beta}$, ${\varrho}_{t}^{\beta\beta}$ satisfy the differential equations \begin{eqnarray}\label{master1} \dot{{\varrho}}_{t}^{\alpha\alpha}&=&\mathcal{L}\varrho_{t}^{\alpha\alpha}+[{\varrho}_{t}^{\alpha\alpha},L^{\dagger}]\alpha_{t} +[L,{\varrho}_{t}^{\alpha\alpha}]\alpha^{\ast}_{t}, \end{eqnarray} \begin{eqnarray} \dot{{\varrho}}_{t}^{\alpha\beta}&=&\mathcal{L}\varrho_{t}^{\alpha\beta}+[{\varrho}_{t}^{\alpha\beta},L^{\dagger}]\alpha_{t} +[L,{\varrho}_{t}^{\alpha\beta}]\beta^{\ast}_{t}, \end{eqnarray} \begin{eqnarray}\label{master3} \dot{{\varrho}}_{t}^{\beta\beta}&=& \mathcal{L}\varrho_{t}^{\beta\beta}+[{\varrho}_{t}^{\beta\beta},L^{\dagger}]\beta_{t} +[L,{\varrho}_{t}^{\beta\beta}]\beta^{\ast}_{t}, \end{eqnarray} where $\mathcal{L}$ acts as (\ref{superop}). The initial condition is ${\varrho}_{0}^{\alpha\alpha}={\varrho}_{0}^{\beta\beta}=|\psi\rangle\langle\psi|$, ${\varrho}_{0}^{\alpha\beta}=\langle \beta|\alpha\rangle|\psi\rangle\langle\psi|$, and ${\varrho}_{t}^{\beta\alpha}=\left({\varrho}_{t}^{\alpha\beta}\right)^{\dagger}$. One can easily check that $\mathrm{Tr}\varrho_{t}^{\alpha\alpha}=\mathrm{Tr}\varrho_{t}^{\beta\beta}=1$, $\mathrm{Tr}\varrho_{t}^{\alpha\beta}=\langle \beta|\alpha\rangle$, and $\mathrm{Tr}\varrho_{t}^{\beta\alpha}=\langle \alpha|\beta\rangle$ for any $t\geq 0$.
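The trace relations hold because the traces of the commutator and Lindblad terms vanish identically, so they survive even a simple Euler discretization. A sketch for the $\alpha\alpha$ block (illustrative $H_{\mathcal{S}}$, $L$ and a real constant amplitude, not taken from the paper):

```python
import numpy as np

# Sketch: Euler integration of the a priori equation for the alpha-alpha block,
# checking that the trace is preserved.  H_S, L and the real constant
# amplitude alpha are illustrative choices.
dt, n_steps = 1e-3, 1000
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)   # H_S
L = np.array([[0, 1], [0, 0]], dtype=complex)          # L = sigma_-
alpha = 0.3                                            # real, so alpha* = alpha

def lindblad(rho):
    # L rho = -i[H, rho] - {L^dag L, rho}/2 + L rho L^dag
    comm = H @ rho - rho @ H
    anti = L.conj().T @ L @ rho + rho @ L.conj().T @ L
    return -1j * comm - 0.5 * anti + L @ rho @ L.conj().T

psi0 = np.array([1, 0], dtype=complex)
rho = np.outer(psi0, psi0.conj())                      # |psi><psi|
for _ in range(n_steps):
    drho = (lindblad(rho)
            + alpha * (rho @ L.conj().T - L.conj().T @ rho)   # [rho, L^dag] alpha
            + alpha * (L @ rho - rho @ L))                    # [L, rho] alpha*
    rho = rho + dt * drho

print(np.trace(rho).real)   # remains 1: every term on the right is traceless
```

The same check applies verbatim to the $\beta\beta$ block, while the trace of the $\alpha\beta$ block is conserved at the value $\langle\beta|\alpha\rangle$.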
\subsection{Homodyne detection} \begin{Theorem}\label{TH-4} The conditional state of $\mathcal{S}$ and the part of the environment which has not interacted with $\mathcal{S}$ up to $j\tau$ for the initial state (\ref{ini2}) and the measurement of (\ref{obs2}) at the moment $j\tau$ is given by \begin{equation}\label{cond5} |\tilde{\Psi}_{j}\rangle = \frac{|\Psi_{j}\rangle}{\sqrt{\langle\Psi_{j}|\Psi_{j}\rangle}}, \end{equation} where \begin{equation}\label{cond6} |\Psi_{j}\rangle=c_{\alpha}\bigotimes_{k=j}^{+\infty}|\alpha_{k}\rangle_{k}\otimes|\psi_{j}\rangle+ c_{\beta}\bigotimes_{k=j}^{+\infty}|\beta_{k}\rangle_{k}\otimes|\varphi_{j}\rangle. \end{equation} The conditional vectors $|\psi_{j}\rangle$, $|\varphi_{j}\rangle$ from $\mathcal{H}_{\mathcal{S}}$ in (\ref{cond6}) are given by the recurrence formulas \begin{equation}\label{rec3} |\psi_{j+1}\rangle=R_{\zeta_{j+1}}^{\alpha_{j}}|\psi_{j}\rangle, \end{equation} \begin{equation}\label{rec4} |\varphi_{j+1}\rangle=R_{\zeta_{j+1}}^{\beta_{j}}|\varphi_{j}\rangle, \end{equation} where $\zeta_{j+1}$ stands for a random variable describing the $(j+1)$-th output of (\ref{obs2}), and \begin{eqnarray} R_{\zeta_{j+1}}^{\alpha_{j}}&=&\frac{1}{\sqrt{2}}\bigg[\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\alpha_{j}+\frac{|\alpha_{j}|^2}{2}\right)\tau\nonumber\\&&+(L+\alpha_{j}) \zeta_{j+1}\sqrt{\tau}+O\left(\tau^{3/2}\right)\bigg], \end{eqnarray} \begin{eqnarray} R_{\zeta_{j+1}}^{\beta_{j}}&=&\frac{1}{\sqrt{2}}\bigg[\mathbbm{1}_{S}-\left(iH_{\mathcal{S}}+\frac{1}{2}L^{\dagger}L+L^{\dagger}\beta_{j}+\frac{|\beta_{j}|^2}{2}\right)\tau\nonumber\\&&+(L+\beta_{j}) \zeta_{j+1}\sqrt{\tau}+O\left(\tau^{3/2}\right)\bigg], \end{eqnarray} and initially we have $|\psi_{0}\rangle=|\varphi_{0}\rangle=|\psi\rangle$. \end{Theorem} {\it Proof.} To prove Theorem \ref{TH-4} we use the result of Sec. III.B and the linearity of the evolution equation for the total system.
Clearly, the conditional state of $\mathcal{S}$ at the moment $j\tau$ has the form (\ref{Filter}). We start the derivation of the filtering equations for the stochastic operators (\ref{oper1})--(\ref{oper4}) by writing down the recursive formulas \begin{eqnarray} 2|\psi_{j+1}\rangle\langle\psi_{j+1}|&=&|\psi_{j}\rangle\langle\psi_{j}| +\mathcal{L}|\psi_{j}\rangle\langle\psi_{j}|\tau\nonumber\\&& +[|\psi_{j}\rangle\langle\psi_{j}|,L^{\dagger}]\alpha_{j}\tau +[L,|\psi_{j}\rangle\langle\psi_{j}|]\alpha_{j}^{\ast}\tau\nonumber\\ &&\!+\!\left[\left(L+\alpha_{j}\right)|\psi_{j}\rangle\langle\psi_{j}| +|\psi_{j}\rangle\langle\psi_{j}|\left(L^{\dagger}+\alpha_{j}^{\ast}\right)\right]\zeta_{j+1}\sqrt{\tau}, \end{eqnarray} \begin{eqnarray} 2|\psi_{j+1}\rangle\langle\varphi_{j+1}|&=&|\psi_{j}\rangle\langle\varphi_{j}| \left(1-\frac{1}{2}\left(|\alpha_{j}|^2+|\beta_{j}|^2-2\beta_{j}^{\ast}\alpha_{j}\right)\tau\right)\nonumber\\ &&+\mathcal{L}|\psi_{j}\rangle\langle\varphi_{j}|\tau\nonumber\\ &&+[|\psi_{j}\rangle\langle\varphi_{j}|,L^{\dagger}]\alpha_{j}\tau +[L,|\psi_{j}\rangle\langle\varphi_{j}|]\beta_{j}^{\ast}\tau \nonumber\\ &&+\left[\left(L+\alpha_{j}\right)|\psi_{j}\rangle\langle\varphi_{j}|+|\psi_{j}\rangle\langle\varphi_{j}|\left(L^{\dagger}+\beta_{j}^{\ast}\right)\right] \zeta_{j+1}\sqrt{\tau}, \end{eqnarray} \begin{eqnarray} 2|\varphi_{j+1}\rangle\langle\varphi_{j+1}|&=&|\varphi_{j}\rangle\langle\varphi_{j}|+\mathcal{L}|\varphi_{j}\rangle\langle\varphi_{j}|\tau \nonumber\\&&+[|\varphi_{j}\rangle\langle\varphi_{j}|,L^{\dagger}]\beta_{j}\tau+[L,|\varphi_{j}\rangle\langle\varphi_{j}|]\beta_{j}^{\ast}\tau \nonumber\\ &&+\left[\left(L+\beta_{j}\right)|\varphi_{j}\rangle\langle\varphi_{j}| +|\varphi_{j}\rangle\langle\varphi_{j}|\left(L^{\dagger}+\beta_{j}^{\ast}\right)\right] \zeta_{j+1}\sqrt{\tau}. \end{eqnarray} We can readily deduce that the conditional probability of the result $\zeta_{j+1}$ at the moment $(j+1)\tau$ when the conditional state of $\mathcal{S}$ is
$\tilde{\rho}_{j}$ at the time $j\tau$ is given by \begin{equation} p_{j+1}(\zeta_{j+1}|\tilde{\rho}_{j})= \frac{1}{2}\left(1+\mu_{j}\zeta_{j+1}\sqrt{\tau}\right)+O(\tau^{3/2}), \end{equation} where \begin{equation} \mu_{j}=|c_{\alpha}|^2\mu_{j}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}\mu_{j}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}\mu_{j}^{\beta\alpha}+|c_{\beta}|^2\mu_{j}^{\beta\beta} \end{equation} and \begin{equation}\label{intensity21} \mu_{j}^{\alpha\alpha}= \mathrm{Tr}\left[\left(L+L^{\dagger}+\alpha_{j}+\alpha_{j}^{\ast}\right)\tilde{\rho}_{j}^{\alpha\alpha}\right], \end{equation} \begin{equation}\label{intensity22} \mu_{j}^{\alpha\beta}=\mathrm{Tr}\left[\left(L+L^{\dagger} +\alpha_{j}+\beta_{j}^{\ast}\right)\tilde{\rho}_{j}^{\alpha\beta}\right], \end{equation} \begin{equation}\label{intensity23} \mu_{j}^{\beta\alpha}=\mathrm{Tr}\left[\left(L+L^{\dagger} +\beta_{j}+\alpha_{j}^{\ast}\right)\tilde{\rho}_{j}^{\beta\alpha}\right], \end{equation} \begin{equation}\label{intensity24} \mu_{j}^{\beta\beta}=\mathrm{Tr}\left[\left(L+L^{\dagger} +\beta_{j}+\beta_{j}^{\ast}\right)\tilde{\rho}_{j}^{\beta\beta}\right]. \end{equation} Thus for the discrete stochastic process $\zeta_{j}$ we obtain the conditional mean values \begin{equation} \mathbbm{E}[\zeta_{j+1}|\tilde{\rho}_{j}]=\mu_{j}\sqrt{\tau}+O(\tau^{3/2}), \end{equation} \begin{equation} \mathbbm{E}[\zeta_{j+1}^2|\tilde{\rho}_{j}]=1+O(\tau^{3/2}). \end{equation} Let us now introduce the stochastic process \begin{equation} q_{j}=\sqrt{\tau}\sum_{k=1}^{j}\zeta_{k}. \end{equation} One can easily check that $\mathbbm{E}[\Delta q_{j}|\tilde{\rho}_{j}]=\mu_{j}\tau+O(\tau^{3/2})$, where $\Delta q_{j}=q_{j+1}-q_{j}$.
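These moment formulas can be illustrated by a direct simulation of the two-point variable $\zeta=\pm 1$ with $p(\zeta=+1)=\frac{1}{2}(1+\mu\sqrt{\tau})$. The sketch below uses arbitrary illustrative values of $\mu$ and $\tau$ (not tied to any particular system) and checks the conditional mean and second moment:

```python
import random

# Simulate the measurement outcomes zeta = +/-1 with
# p(zeta = +1) = (1 + mu*sqrt(tau))/2.  The values of mu and tau
# below are arbitrary, chosen only for illustration.
mu, tau = 0.4, 1e-4
p_plus = 0.5 * (1 + mu * tau ** 0.5)

random.seed(0)
n = 1_000_000
samples = [1 if random.random() < p_plus else -1 for _ in range(n)]

mean = sum(samples) / n                     # should be close to mu*sqrt(tau)
second = sum(s * s for s in samples) / n    # exactly 1, since zeta**2 = 1

assert abs(mean - mu * tau ** 0.5) < 5e-3
assert second == 1.0
```

The second moment is exactly one because $\zeta^{2}=1$ for a two-point variable, in agreement with the expansion above.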
Now, taking into account that \begin{equation} \frac{1}{\mathrm{Tr}\rho_{j+1}}=\frac{2}{\mathrm{Tr}\rho_{j}} \left(1-\mu_{j}\zeta_{j+1}\sqrt{\tau}+\mu_{j}^2\tau\right) \end{equation} after some algebra we find the set of the stochastic difference equations \begin{eqnarray} \tilde{\rho}_{j+1}^{\alpha\alpha}-\tilde{\rho}_{j}^{\alpha\alpha}&=&\mathcal{L}\tilde{\rho}_{j}^{\alpha\alpha}\tau+[\tilde{\rho}_{j}^{\alpha\alpha},L^{\dagger}]\alpha_{j}\tau +[L,\tilde{\rho}_{j}^{\alpha\alpha}]\alpha^{\ast}_{j}\tau \nonumber\\ &&+\left[\left(L+\alpha_{j}\right)\tilde{\rho}_{j}^{\alpha\alpha}+ \tilde{\rho}_{j}^{\alpha\alpha}(L^{\dagger}+\alpha_{j}^{\ast}) -\mu_{j}\tilde{\rho}_{j}^{\alpha\alpha}\right] \left(\Delta q_{j}-\mu_{j}\tau\right), \end{eqnarray} \begin{eqnarray} \tilde{\rho}_{j+1}^{\alpha\beta}-\tilde{\rho}_{j}^{\alpha\beta}&=&\mathcal{L}\tilde{\rho}_{j}^{\alpha\beta}\tau+[\tilde{\rho}_{j}^{\alpha\beta},L^{\dagger}]\alpha_{j}\tau +[L,\tilde{\rho}_{j}^{\alpha\beta}]\beta^{\ast}_{j}\tau \nonumber\\ &&+\left[\left(L+\alpha_{j}\right)\tilde{\rho}_{j}^{\alpha\beta} +\tilde{\rho}_{j}^{\alpha\beta}(L^{\dagger}+\beta_{j}^{\ast}) -\mu_{j}\tilde{\rho}_{j}^{\alpha\beta}\right]\left(\Delta q_{j}-\mu_{j}\tau\right), \end{eqnarray} \begin{eqnarray} \tilde{\rho}_{j+1}^{\beta\beta}-\tilde{\rho}_{j}^{\beta\beta}&=& \mathcal{L}\tilde{\rho}_{j}^{\beta\beta}\tau+[\tilde{\rho}_{j}^{\beta\beta},L^{\dagger}]\beta_{j}\tau +[L,\tilde{\rho}_{j}^{\beta\beta}]\beta^{\ast}_{j}\tau \nonumber\\ &&+\left[\left(L+\beta_{j}\right)\tilde{\rho}_{j}^{\beta\beta} +\tilde{\rho}_{j}^{\beta\beta}\left(L^{\dagger}+\beta_{j}^{\ast}\right) -\mu_{j}\tilde{\rho}_{j}^{\beta\beta}\right]\left(\Delta q_{j}-\mu_{j}\tau\right) \end{eqnarray} with the initial conditions $\tilde{\rho}_{0}^{\alpha\alpha}=|\psi\rangle\langle\psi|$, $\tilde{\rho}_{0}^{\beta\beta}=|\psi\rangle\langle\psi|$, $\tilde{\rho}_{0}^{\alpha\beta}=\langle\beta|\alpha\rangle|\psi\rangle\langle\psi|$. 
Then in the continuous time limit we have \begin{eqnarray} d\tilde{\rho}_{t}^{\alpha\alpha}&=&\mathcal{L}\tilde{\rho}_{t}^{\alpha\alpha}dt+[\tilde{\rho}_{t}^{\alpha\alpha},L^{\dagger}]\alpha_{t}dt +[L,\tilde{\rho}_{t}^{\alpha\alpha}]\alpha^{\ast}_{t}dt \nonumber\\ &&+\left[\left(L+\alpha_{t}\right)\tilde{\rho}_{t}^{\alpha\alpha} +\tilde{\rho}_{t}^{\alpha\alpha}\left(L^{\dagger}+\alpha_{t}^{\ast}\right) -\mu_{t}\tilde{\rho}_{t}^{\alpha\alpha}\right]\left(dq_{t}-\mu_{t}dt\right), \end{eqnarray} \begin{eqnarray} d\tilde{\rho}_{t}^{\alpha\beta}&=&\mathcal{L}\tilde{\rho}_{t}^{\alpha\beta}dt +[\tilde{\rho}_{t}^{\alpha\beta},L^{\dagger}]\alpha_{t}dt +[L,\tilde{\rho}_{t}^{\alpha\beta}]\beta^{\ast}_{t}dt \nonumber\\ &&+\left[\left(L+\alpha_{t}\right)\tilde{\rho}_{t}^{\alpha\beta} +\tilde{\rho}_{t}^{\alpha\beta}\left(L^{\dagger}+\beta_{t}^{\ast}\right) -\mu_{t}\tilde{\rho}_{t}^{\alpha\beta}\right]\left(dq_{t}-\mu_{t}dt\right), \end{eqnarray} \begin{eqnarray} d\tilde{\rho}_{t}^{\beta\beta}&=& \mathcal{L}\tilde{\rho}_{t}^{\beta\beta}dt +[\tilde{\rho}_{t}^{\beta\beta},L^{\dagger}]\beta_{t}dt +[L,\tilde{\rho}_{t}^{\beta\beta}]\beta^{\ast}_{t}dt \nonumber\\ &&+\left[\left(L+\beta_{t}\right)\tilde{\rho}_{t}^{\beta\beta} +\tilde{\rho}_{t}^{\beta\beta}\left(L^{\dagger}+\beta_{t}^{\ast}\right) -\mu_{t}\tilde{\rho}_{t}^{\beta\beta}\right]\left(dq_{t}-\mu_{t}dt\right), \end{eqnarray} where \begin{equation} \mu_{t}=|c_{\alpha}|^2\mu_{t}^{\alpha\alpha}+c_{\alpha}c_{\beta}^{\ast}\mu_{t}^{\alpha\beta} +c_{\alpha}^{\ast}c_{\beta}\mu_{t}^{\beta\alpha}+|c_{\beta}|^2\mu_{t}^{\beta\beta} \end{equation} and \begin{equation} \mu_{t}^{\alpha\alpha}= \mathrm{Tr}\left[\left(L+L^{\dagger}+\alpha_{t}+\alpha_{t}^{\ast}\right)\tilde{\rho}_{t}^{\alpha\alpha}\right], \end{equation} \begin{equation} \mu_{t}^{\alpha\beta}=\mathrm{Tr}\left[\left(L+L^{\dagger} +\alpha_{t}+\beta_{t}^{\ast}\right)\tilde{\rho}_{t}^{\alpha\beta}\right], \end{equation} \begin{equation} \mu_{t}^{\beta\alpha}=\mathrm{Tr}\left[\left(L+L^{\dagger}
+\beta_{t}+\alpha_{t}^{\ast}\right)\tilde{\rho}_{t}^{\beta\alpha}\right], \end{equation} \begin{equation} \mu_{t}^{\beta\beta}=\mathrm{Tr}\left[\left(L+L^{\dagger} +\beta_{t}+\beta_{t}^{\ast}\right)\tilde{\rho}_{t}^{\beta\beta}\right]. \end{equation} In the limit $\tau\to 0$ the process $q_{j}$ converges to the stochastic process $q_{t}$ with the conditional mean $\mathbbm{E}[dq_{t}|\tilde{\rho}_{t}]=\mu_{t}dt$, where $dq_{t}=q_{t+dt}-q_{t}$. \section{An example: a cavity mode} For the emergence of collision models in quantum optics one can consult, for instance, \cite{C17, FTVRS18}. To derive the discrete model of repeated interactions and measurements, one starts from a description of the interaction of a quantum system with a Bose field propagating in one direction only, making the rotating-wave approximation and assuming a flat spectrum of the field. One then passes to the interaction picture with respect to the free dynamics of the field and takes the Hamiltonian of the field in the frequency domain with the lower limit of integration extended to $-\infty$. The time coarse-graining model arises from dividing the field into probe segments. It follows from the form of the interaction Hamiltonian (\ref{hamint}) that the system (an atom or a cavity mode) can absorb at most one photon at a given moment. The absence of interaction between the system and the output field means that photons emitted by the system leave the interaction region and cannot be reabsorbed. We describe here briefly the {\it a priori} and the {\it a posteriori} evolution of a cavity mode coupled to a propagating one-dimensional Bose field prepared in a superposition of two coherent states. Thus, we have \begin{equation} H_{\mathcal{S}}=\omega_{0}a^{\dagger}a, \end{equation} \begin{equation} L=\sqrt{\Gamma} a, \end{equation} where $a$ stands for the annihilation operator, $\omega_{0}>0$, and $\Gamma$ is a positive coupling constant.
We consider here the case when the harmonic oscillator is initially in the coherent state $|u\rangle$, \begin{equation} a|u\rangle=u|u\rangle. \end{equation} Then the solution to the set of master equations can be written in the form \begin{equation}\label{apriori} \varrho_{t}=|c_{\alpha}|^2|f_{t}\rangle\langle f_{t}|+c_{\alpha}c_{\beta}^{\ast}\frac{\langle \beta|\alpha\rangle}{\langle g_{t}| f_{t}\rangle} |f_{t}\rangle\langle g_{t}|+ c_{\alpha}^{\ast}c_{\beta}\frac{\langle \alpha|\beta\rangle}{\langle f_{t}| g_{t}\rangle} |g_{t}\rangle\langle f_{t}|+|c_{\beta}|^2|g_{t}\rangle\langle g_{t}|, \end{equation} where \begin{equation} \langle g_{t}| f_{t}\rangle=\exp\left\{-\frac{1}{2}\left(|g_{t}|^2+|f_{t}|^2-2g_{t}^{\ast}f_{t}\right)\right\}, \end{equation} and $|f_{t}\rangle$, $|g_{t}\rangle$ are coherent states of the harmonic oscillator with the amplitudes satisfying the equations \begin{equation} \dot{f}_{t}=-\left(\mathrm{i}\omega_{0} +{\Gamma \over 2}\right)f_{t} -\sqrt {\Gamma }\alpha_{t}, \end{equation} \begin{equation} \dot{g}_{t}=-\left(\mathrm{i}\omega_{0} +{\Gamma \over 2}\right)g_{t} -\sqrt {\Gamma }\beta_{t}, \end{equation} where one can easily recognize the damping and driving terms. Hence, we obtain \begin{equation}\label{f} f_{t}=e^{-\left(\mathrm{i}\omega_{0} +{\Gamma \over 2}\right)t} \left( u -\sqrt {\Gamma }\int_{0}^{t} e^{\left(\mathrm{i}\omega_{0} +{\Gamma \over 2}\right)s} \alpha_{s} ds \right), \end{equation} \begin{equation}\label{g} g_{t}=e^{-\left(\mathrm{i}\omega_{0} +{\Gamma \over 2}\right)t} \left( u -\sqrt {\Gamma }\int_{0}^{t} e^{\left(\mathrm{i}\omega_{0} +{\Gamma \over 2}\right)s} \beta_{s} ds \right). \end{equation} One can check the solution (\ref{apriori}) simply by inserting it into Eqs. (\ref{master1})--(\ref{master3}).
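As a numerical aside (not part of the original derivation), the closed-form amplitude (\ref{f}) can be checked against direct Euler integration of the damping-and-driving equation for a constant drive $\alpha_{s}\equiv\alpha$; all parameter values below are arbitrary illustrations:

```python
import cmath

# Check the closed-form amplitude against Euler integration of
# f' = -(i*omega0 + Gamma/2) f - sqrt(Gamma)*alpha  for constant alpha.
# All parameter values here are arbitrary.
omega0, Gamma, u, alpha = 1.0, 0.5, 1.0 + 0.0j, 0.3 + 0.1j
lam = 1j * omega0 + Gamma / 2

def f_closed(t):
    # For alpha_s = alpha the integral in the closed form evaluates to
    # f_t = e^{-lam t} u - sqrt(Gamma) * alpha * (1 - e^{-lam t}) / lam
    e = cmath.exp(-lam * t)
    return e * u - (Gamma ** 0.5) * alpha * (1 - e) / lam

# Euler integration of the amplitude equation
T, n = 2.0, 200_000
dt = T / n
f = u
for _ in range(n):
    f += dt * (-lam * f - (Gamma ** 0.5) * alpha)

assert abs(f - f_closed(T)) < 1e-4
```

The same check applies verbatim to $g_{t}$ with $\alpha$ replaced by $\beta$.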
The conditional state of the system can be written as \begin{equation}\label{posteriori} \tilde{\rho}_{t}=|c_{\alpha}|^2G^{\alpha\alpha}_{t}|f_{t}\rangle\langle f_{t}|+c_{\alpha}c_{\beta}^{\ast}\frac{\langle \beta|\alpha\rangle}{\langle g_{t}| f_{t}\rangle} G^{\alpha\beta}_{t}|f_{t}\rangle\langle g_{t}|+ c_{\alpha}^{\ast}c_{\beta}\frac{\langle \alpha|\beta\rangle}{\langle f_{t}| g_{t}\rangle} G^{\beta\alpha}_{t}|g_{t}\rangle\langle f_{t}|+|c_{\beta}|^2G^{\beta\beta}_{t}|g_{t}\rangle\langle g_{t}|, \end{equation} where the conditional amplitudes $f_{t}$ and $g_{t}$ coincide with the {\it a priori} ones given by (\ref{f}) and (\ref{g}), and the stochastic coefficients $G^{\alpha\alpha}_{t}$, $G^{\alpha\beta}_{t}$, $G^{\beta\alpha}_{t}$, and $G^{\beta\beta}_{t}$ for the counting stochastic process satisfy the equations \begin{equation} dG^{\alpha\alpha}_{t}=\left(\nu_{t}^{\alpha\alpha}-G^{\alpha\alpha}_{t}\nu_{t}\right)\left(dn_{t}-\nu_{t}dt\right), \end{equation} \begin{equation} dG^{\alpha\beta}_{t}=\left(\frac{\nu_{t}^{\alpha\beta}}{\langle \beta|\alpha\rangle }-G^{\alpha\beta}_{t}\nu_{t}\right)\left(dn_{t}-\nu_{t}dt\right), \end{equation} \begin{equation} dG^{\beta\alpha}_{t}=\left(\frac{\nu_{t}^{\beta\alpha}}{\langle \alpha|\beta\rangle }-G^{\beta\alpha}_{t}\nu_{t}\right)\left(dn_{t}-\nu_{t}dt\right), \end{equation} \begin{equation} dG^{\beta\beta}_{t}=\left(\nu_{t}^{\beta\beta}-G^{\beta\beta}_{t}\nu_{t}\right)\left(dn_{t}-\nu_{t}dt\right), \end{equation} where the intensities have the form \begin{equation} \nu_{t}^{\alpha\alpha}=\left|\sqrt{\Gamma}f_{t}+\alpha_{t}\right|^2G^{\alpha\alpha}_{t}, \end{equation} \begin{equation} \nu_{t}^{\alpha\beta}=\left(\sqrt{\Gamma}f_{t}+\alpha_{t}\right)\left(\sqrt{\Gamma}g_{t}^{\ast}+\beta_{t}^{\ast}\right)G^{\alpha\beta}_{t}\langle\beta|\alpha \rangle, \end{equation} \begin{equation} 
\nu_{t}^{\beta\alpha}=\left(\sqrt{\Gamma}f_{t}^{\ast}+\alpha_{t}^{\ast}\right)\left(\sqrt{\Gamma}g_{t}+\beta_{t}\right)G^{\beta\alpha}_{t}\langle\alpha|\beta \rangle, \end{equation} \begin{equation} \nu_{t}^{\beta\beta}=\left|\sqrt{\Gamma}g_{t}+\beta_{t}\right|^2G^{\beta\beta}_{t}. \end{equation} For the homodyne observation, we get \begin{equation} dG^{\alpha\alpha}_{t}=\left(\mu_{t}^{\alpha\alpha}-G^{\alpha\alpha}_{t}\mu_{t}\right)\left(dq_{t}-\mu_{t}dt\right), \end{equation} \begin{equation} dG^{\alpha\beta}_{t}=\left(\frac{\mu_{t}^{\alpha\beta}}{\langle \beta|\alpha\rangle }-G^{\alpha\beta}_{t}\mu_{t}\right)\left(dq_{t}-\mu_{t}dt\right), \end{equation} \begin{equation} dG^{\beta\alpha}_{t}=\left(\frac{\mu_{t}^{\beta\alpha}}{\langle \alpha|\beta\rangle }-G^{\beta\alpha}_{t}\mu_{t}\right)\left(dq_{t}-\mu_{t}dt\right), \end{equation} \begin{equation} dG^{\beta\beta}_{t}=\left(\mu_{t}^{\beta\beta}-G^{\beta\beta}_{t}\mu_{t}\right)\left(dq_{t}-\mu_{t}dt\right), \end{equation} where \begin{equation} \mu_{t}^{\alpha\alpha}=\left(\sqrt{\Gamma}\left(f_{t}+f_{t}^{\ast}\right)+\alpha_{t}+\alpha_{t}^{\ast}\right)G^{\alpha\alpha}_{t}, \end{equation} \begin{equation} \mu_{t}^{\alpha\beta}=\left(\sqrt{\Gamma}\left(f_{t}+g_{t}^{\ast}\right)+\alpha_{t}+\beta_{t}^{\ast}\right)G^{\alpha\beta}_{t}\langle\beta|\alpha \rangle, \end{equation} \begin{equation} \mu_{t}^{\beta\alpha}=\left(\sqrt{\Gamma}\left(f_{t}^{\ast}+g_{t}\right)+\alpha_{t}^{\ast}+\beta_{t}\right)G^{\beta\alpha}_{t}\langle\alpha|\beta \rangle, \end{equation} \begin{equation} \mu_{t}^{\beta\beta}=\left(\sqrt{\Gamma}\left(g_{t}+g_{t}^{\ast}\right)+\beta_{t}+\beta_{t}^{\ast}\right)G^{\beta\beta}_{t}. \end{equation} One can prove (\ref{posteriori}) by inserting the conditional operators $\tilde{\rho}_{t}^{\alpha\alpha}$, $\tilde{\rho}_{t}^{\alpha\beta}$, $\tilde{\rho}_{t}^{\beta\alpha}$, and $\tilde{\rho}_{t}^{\beta\beta}$ of the proposed forms into the relevant filtering equations. 
One can check that this leads to the given differential equations for the amplitudes $f_{t}$ and $g_{t}$, and the coefficients $G^{\alpha\alpha}_{t}$, $G^{\alpha\beta}_{t}$, $G^{\beta\alpha}_{t}$, and $G^{\beta\beta}_{t}$. For an environment taken in a coherent state our formulas reduce to the known results (see, for instance, \cite{BB91,Car02}). \section{Conclusion} We derived the stochastic equations describing the conditional evolution of an open quantum system interacting with a Bose field prepared in a superposition of coherent states. We considered two schemes of measurement of the output field: photon counting and homodyne detection. Instead of methods based on quantum stochastic calculus and the cascaded systems model \cite{GJN12b}, we used a collision model with the environment given by an infinite chain of qubits. We assumed that the bath qubits do not interact with one another and that they are initially prepared in an entangled state which is a discrete analogue of a superposition of continuous-mode coherent states. The initial state of the compound system was factorisable. Because of the temporal correlations present in the environment, the evolution of the open quantum system becomes non-Markovian. We started from the discrete-in-time description of the problem and obtained in the continuous time limit differential filtering equations consistent with the results published in \cite{GJNC12a, GJN13}. We would like to stress that the presented method is more direct and intuitive than the methods described in \cite{GJNC12a, GJN13}. It not only allows one to derive the equations describing the conditional evolution of the system but also enables one to find the general structure of the quantum trajectories. \section{Bibliography} \end{document}
Statistics and the law: the prosecutor's fallacy

This post arose from a recent meeting at the Royal Society. It was organised by Julie Maxton to discuss the application of statistical methods to legal problems. I found myself sitting next to an Appeal Court Judge who wanted more explanation of the ideas. Here it is.

Some preliminaries

The papers that I wrote recently were about the problems associated with the interpretation of screening tests and tests of significance. They don't allude to legal problems explicitly, though the problems are the same in principle. They are all open access. The first appeared in 2014: http://rsos.royalsocietypublishing.org/content/1/3/140216 Since the first version of this post, March 2016, I've written two more papers and some popular pieces on the same topic. There's a list of them at http://www.onemol.org.uk/?page_id=456. I also made a video for YouTube of a recent talk.

In these papers I was interested in the false positive risk (also known as the false discovery rate) in tests of significance. It turned out to be alarmingly large. That has serious consequences for the credibility of the scientific literature. In legal terms, the false positive risk means the proportion of cases in which, on the basis of the evidence, a suspect is found guilty when in fact they are innocent. That has even more serious consequences.

Although most of what I want to say can be said without much algebra, it would perhaps be worth getting two things clear before we start.

The rules of probability

(1) To get any understanding, it's essential to understand the rules of probabilities and, in particular, the idea of conditional probabilities. One source would be my old book, Lectures on Biostatistics (now free); the account on pages 19 to 24 gives a pretty simple (I hope) description of what's needed.
Briefly, a vertical line is read as "given", so Prob(evidence | not guilty) means the probability that the evidence would be observed given that the suspect was not guilty.

(2) Another potential confusion in this area is the relationship between odds and probability. The relationship between the probability of an event occurring and the odds on the event can be illustrated by an example. If the probability of being right-handed is 0.9, then the probability of not being right-handed is 0.1. That means that 9 people out of 10 are right-handed and one person in 10 is not. In other words, for every person who is not right-handed there are 9 who are right-handed. Thus the odds that a randomly-selected person is right-handed are 9 to 1. In symbols this can be written \[ \text{probability} =\frac{\text{odds}}{1 + \text{odds}} \] In the example, the odds on being right-handed are 9 to 1, so the probability of being right-handed is 9 / (1 + 9) = 0.9. Conversely, \[ \text{odds} =\frac{\text{probability}}{1 - \text{probability}} \] In the example, the probability of being right-handed is 0.9, so the odds of being right-handed are 0.9 / (1 - 0.9) = 0.9 / 0.1 = 9 (to 1). With these preliminaries out of the way, we can proceed to the problem.

The legal problem

The first problem lies in the fact that the answer depends on Bayes' theorem. Although that was published in 1763, statisticians are still arguing about how it should be used to this day. In fact, whenever it's mentioned, statisticians tend to revert to internecine warfare and forget about the user. Bayes' theorem can be stated in words as follows: \[ \text{posterior odds ratio} = \text{prior odds ratio} \times \text{likelihood ratio} \] "Posterior odds ratio" means the odds that the person is guilty, relative to the odds that they are innocent, in the light of the evidence, and that's clearly what one wants to know. The "prior odds" are the odds that the person was guilty before any evidence was produced, and that is the really contentious bit.
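For readers who prefer code, the odds–probability conversions and the odds form of Bayes' theorem can be collected into a few lines (a sketch; the function names are mine, not standard):

```python
def prob_from_odds(odds):
    """Probability corresponding to odds of 'odds' to 1."""
    return odds / (1 + odds)

def odds_from_prob(p):
    """Odds (to 1) corresponding to probability p."""
    return p / (1 - p)

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form."""
    return prior_odds * likelihood_ratio

# Right-handedness example from the text: probability 0.9 <-> odds 9 to 1.
assert abs(odds_from_prob(0.9) - 9) < 1e-9
assert prob_from_odds(9) == 0.9
# With even prior odds, the posterior odds equal the likelihood ratio.
assert posterior_odds(1, 250) == 250
```
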
Sometimes the need to specify the prior odds has been circumvented by using the likelihood ratio alone but, as shown below, that isn't a good solution. The analogy with the use of screening tests to detect disease is illuminating. A particularly straightforward application of Bayes' theorem is in screening people to see whether or not they have a disease. It turns out, in many cases, that screening gives a lot more wrong results (false positives) than right ones. That's especially true when the condition is rare (the prior odds that an individual suffers from the condition are small). The process of screening for disease has a lot in common with the screening of suspects for guilt. It matters because false positives in court are disastrous. The screening problem is dealt with in sections 1 and 2 of my paper, or on this blog (and here). A bit of animation helps the slides, so you may prefer the YouTube version. The rest of my paper applies similar ideas to tests of significance. In that case the prior probability is the probability that there is in fact a real effect or, in the legal case, the probability that the suspect is guilty before any evidence has been presented. This is the slippery bit of the problem, both conceptually and because it's hard to put a number on it. But the examples below show that to ignore it, and to use the likelihood ratio alone, could result in many miscarriages of justice. In the discussion of tests of significance, I took the view that it is not legitimate (in the absence of good data to the contrary) to assume any prior probability greater than 0.5. To do so would presume that you know the answer before any evidence was presented. In the legal case a prior probability of 0.5 would mean assuming that there was a 50:50 chance that the suspect was guilty before any evidence was presented.
A 50:50 probability of guilt before the evidence is known corresponds to a prior odds ratio of 1 (to 1). If that were true, the likelihood ratio would be a good way to represent the evidence, because the posterior odds ratio would be equal to the likelihood ratio. It could be argued that 50:50 represents some sort of equipoise, but in the example below it is clearly too high, and if it is less than 50:50, use of the likelihood ratio runs a real risk of convicting an innocent person. The following example is modified slightly from section 3 of a book chapter by Mortera and Dawid (2008). Philip Dawid is an eminent statistician who has written a lot about probability and the law, and he's a member of the legal group of the Royal Statistical Society. My version of the example removes most of the algebra and uses different numbers.

Example: The island problem

The "island problem" (Eggleston 1983, Appendix 3) is an imaginary example that provides a good illustration of the uses and misuses of statistical logic in forensic identification. A murder has been committed on an island, cut off from the outside world, on which 1001 (= N + 1) inhabitants remain. The forensic evidence at the scene consists of a measurement, x, on a "crime trace" characteristic, which can be assumed to come from the criminal. It might, for example, be a bit of the DNA sequence from the crime scene. Say, for the sake of example, that the probability of a random member of the population having characteristic x is P = 0.004 (i.e. 0.4%), so the probability that a random member of the population does not have the characteristic is 1 – P = 0.996. The mainland police arrive and arrest a random islander, Jack. It is found that Jack matches the crime trace. There is no other relevant evidence. How should this match evidence be used to assess the claim that Jack is the murderer? We shall consider three arguments that have been used to address this question. The first is wrong.
The second and third are right. (For illustration, we have taken N = 1000, P = 0.004.)

(1) Prosecutor's fallacy

Prosecuting counsel, arguing according to his favourite fallacy, asserts that the probability that Jack is guilty is 1 – P, or 0.996, and that this proves guilt "beyond a reasonable doubt". The probability that Jack would show characteristic x if he were not guilty would be 0.4%, i.e. Prob(Jack has x | not guilty) = 0.004. Therefore the probability of the evidence, given that Jack is guilty, Prob(Jack has x | Jack is guilty), is 1 – 0.004 = 0.996. But this is Prob(evidence | guilty), which is not what we want. What we need is the probability that Jack is guilty, given the evidence, Prob(Jack is guilty | Jack has characteristic x). To mistake the latter for the former is the prosecutor's fallacy, or the error of the transposed conditional. Dawid gives an example that makes the distinction clear. "As an analogy to help clarify and escape this common and seductive confusion, consider the difference between "the probability of having spots, if you have measles" – which is close to 1 – and "the probability of having measles, if you have spots" – which, in the light of the many alternative possible explanations for spots, is much smaller."

(2) Defence counter-argument

Counsel for the defence points out that, while the guilty party must have characteristic x, he isn't the only person on the island to have this characteristic. Among the remaining N = 1000 innocent islanders, 0.4% have characteristic x, so the number who have it will be NP = 1000 × 0.004 = 4. Hence the total number of islanders that have this characteristic must be 1 + NP = 5. The match evidence means that Jack must be one of these 5 people, but does not otherwise distinguish him from the other members of this group. Since just one of these is guilty, the probability that this is Jack is thus 1/5, or 0.2 – very far from being "beyond all reasonable doubt".
(3) Bayesian argument

The probability of having characteristic x (the evidence) would be Prob(evidence | guilty) = 1 if Jack were guilty, but if Jack were not guilty it would be 0.4%, i.e. Prob(evidence | not guilty) = P. Hence the likelihood ratio in favour of guilt, on the basis of the evidence, is \[ LR=\frac{\text{Prob(evidence } | \text{ guilty})}{\text{Prob(evidence }|\text{ not guilty})} = \frac{1}{P}=250 \] In words, the evidence would be 250 times more probable if Jack were guilty than if he were innocent. While this seems strong evidence in favour of guilt, it still does not tell us what we want to know, namely the probability that Jack is guilty in the light of the evidence: Prob(guilty | evidence), or, equivalently, the odds ratio – the odds of guilt relative to the odds of innocence, given the evidence. To get that we must multiply the likelihood ratio by the prior odds on guilt, i.e. the odds on guilt before any evidence is presented. It's often hard to get a numerical value for this, but in our artificial example it is possible. We can argue that, in the absence of any other evidence, Jack is no more nor less likely to be the culprit than any other islander, so that the prior probability of guilt is 1/(N + 1), corresponding to prior odds on guilt of 1/N. We can now apply Bayes' theorem to obtain the posterior odds on guilt: \[ \text{posterior odds} = \text{prior odds} \times LR = \left ( \frac{1}{N}\right ) \times \left ( \frac{1}{P} \right )= 0.25 \] Thus the odds of guilt in the light of the evidence are 4 to 1 against. The corresponding posterior probability of guilt is \[ \text{Prob}( \text{guilty } | \text{ evidence})= \frac{1}{1+NP}= \frac{1}{1+4}=0.2 \] This is quite small – certainly no basis for a conviction. This result is exactly the same as that given by the defence counter-argument (see above). That argument was simpler than the Bayesian argument: it didn't explicitly use Bayes' theorem, though it was implicit in the argument.
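The three arguments can be compared in a few lines of code (an illustrative sketch; the variable names are mine). It reproduces the numbers above and also checks that the probability form and the odds form of Bayes' theorem give the same answer:

```python
# Island problem: N = 1000 innocent islanders, trace frequency P = 0.004.
N, P = 1000, 0.004

# Prosecutor's fallacy: quotes a probability of the evidence, not of guilt.
prosecutor = 1 - P                         # 0.996

# Likelihood ratio in favour of guilt.
LR = 1 / P                                 # 250

# Odds form of Bayes' theorem: prior odds 1/N, posterior odds = prior * LR.
post_odds = (1 / N) * LR                   # 0.25, i.e. 4 to 1 against
post_prob = post_odds / (1 + post_odds)

# Probability form of Bayes' theorem gives the same answer.
p_guilty = 1 / (N + 1)
p_evidence = 1.0 * p_guilty + P * (1 - p_guilty)
post_prob2 = 1.0 * p_guilty / p_evidence

# Defence counter-argument: 1 / (1 + NP), again the same.
defence = 1 / (1 + N * P)

assert abs(post_prob - 0.2) < 1e-9
assert abs(post_prob2 - post_prob) < 1e-12
assert abs(defence - post_prob) < 1e-9
```
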
The advantage of using the former is that it looks simpler. The advantage of the explicitly Bayesian argument is that it makes the assumptions clearer.

In summary

The prosecutor's fallacy suggested, quite wrongly, that the probability that Jack was guilty was 0.996. The likelihood ratio was 250, which also seems to suggest guilt, but it doesn't give us the probability that we need. In stark contrast, the defence counsel's argument and, equivalently, the Bayesian argument suggested that the probability of Jack's guilt was 0.2, or odds of 4 to 1 against guilt. The potential for wrong conviction is obvious. Although this argument uses an artificial example that is simpler than most real cases, it illustrates some important principles. (1) The likelihood ratio is not a good way to evaluate evidence, unless there is good reason to believe that there is a 50:50 chance that the suspect is guilty before any evidence is presented. (2) In order to calculate what we need, Prob(guilty | evidence), you need to give numerical values of how common the possession of characteristic x (the evidence) is in the whole population of possible suspects (a reasonable value might be estimated in the case of DNA evidence). We also need to know the size of the population. In the case of the island example, this was 1000, but in general that would be hard to answer and any answer might well be contested by an advocate who understood the problem. These arguments lead to four conclusions. (1) If a lawyer uses the prosecutor's fallacy, (s)he should be told that it's nonsense. (2) If a lawyer advocates conviction on the basis of the likelihood ratio alone, (s)he should be asked to justify the implicit assumption that there was a 50:50 chance that the suspect was guilty before any evidence was presented.
(3) If a lawyer uses the defence counter-argument or, equivalently, the version of the Bayesian argument given here, (s)he should be asked to justify the estimates of the numerical value given to the prevalence of x in the population (P) and the numerical value of the size of this population (N). A range of values of P and N could be used, to provide a range of possible values of the final result, the probability that the suspect is guilty in the light of the evidence. (4) The example that was used is the simplest possible case. For more complex cases it would be advisable to ask a professional statistician. Some reliable people can be found at the Royal Statistical Society's section on Statistics and the Law. If you do ask a professional statistician, and they present you with a lot of mathematics, you should still ask these questions about precisely what assumptions were made, and ask for an estimate of the range of uncertainty in the value of Prob(guilty | evidence) which they produce.

Postscript: real cases

Another paper by Philip Dawid, Statistics and the Law, is interesting because it discusses some recent real cases: for example, the wrongful conviction of Sally Clark because of the wrong calculation of the statistics for Sudden Infant Death Syndrome. On Monday 21 March 2016, Dr Waney Squier was struck off the medical register by the General Medical Council because they claimed that she misrepresented the evidence in cases of Shaken Baby Syndrome (SBS). This verdict was questioned by many lawyers, including Michael Mansfield QC and Clive Stafford Smith, in a letter, "General Medical Council behaving like a modern inquisition". The latter has already written "This shaken baby syndrome case is a dark day for science – and for justice". The evidence for SBS is based on the existence of a triad of signs (retinal bleeding, subdural bleeding and encephalopathy). It seems likely that these signs will be present if a baby has been shaken, i.e. Prob(triad | shaken) is high.
But this is irrelevant to the question of guilt. For that we need Prob(shaken | triad). As far as I know, the data to calculate what matters are just not available. It seems that the GMC may have fallen for the prosecutor's fallacy. Or perhaps the establishment won't tolerate arguments. One is reminded, once again, of the definition of clinical experience: "Making the same mistakes with increasing confidence over an impressive number of years." (from A Sceptic's Medical Dictionary by Michael O'Donnell, BMJ Publishing, 1997). Appendix (for nerds). Two forms of Bayes' theorem The form of Bayes' theorem given at the start is expressed in terms of odds ratios. The same rule can be written in terms of probabilities. (This was the form used in the appendix of my paper.) For those interested in the details, it may help to define explicitly these two forms. In terms of probabilities, the probability of guilt in the light of the evidence (what we want) is \[ \text{Prob(guilty} \mid \text{evidence)} = \text{Prob(evidence} \mid \text{guilty)} \, \frac{\text{Prob(guilty)}}{\text{Prob(evidence)}} \] In terms of odds ratios, the odds ratio on guilt, given the evidence (which is what we want) is \[ \frac{\text{Prob(guilty} \mid \text{evidence)}}{\text{Prob(not guilty} \mid \text{evidence)}} = \left( \frac{\text{Prob(guilty)}}{\text{Prob(not guilty)}} \right) \left( \frac{\text{Prob(evidence} \mid \text{guilty)}}{\text{Prob(evidence} \mid \text{not guilty)}} \right) \] or, in words, \[ \text{posterior odds of guilt} = \text{prior odds of guilt} \times \text{likelihood ratio} \] This is the precise form of the equation that was given in words at the beginning. A derivation of the equivalence of these two forms is sketched in a document which you can download. It's worth pointing out the following connection between the legal argument (above) and tests of significance.
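Both forms, along with the island numbers quoted in the summary, can be checked with a short script. This is only a sketch: the variable names are mine, and the power value in the significance-testing part is an assumed illustration, not a figure from the text.

```python
# Check that the probability and odds forms of Bayes' theorem agree,
# using the island example's numbers (N = 1000 suspects, likelihood ratio 250).
N = 1000
p_g = 1 / N            # prior Prob(guilty): one guilty person among N
p_e_g = 1.0            # Prob(evidence | guilty)
p_e_ng = 1 / 250       # Prob(evidence | not guilty)

# The prosecutor's fallacy mistakes Prob(evidence | innocent) for Prob(innocent | evidence)
fallacy = 1 - p_e_ng                      # 0.996, as quoted in the summary

# Probability form, with Prob(evidence) expanded by the law of total probability
p_e = p_e_g * p_g + p_e_ng * (1 - p_g)
post_prob = p_e_g * p_g / p_e

# Odds form: posterior odds = prior odds * likelihood ratio
post_odds = (p_g / (1 - p_g)) * (p_e_g / p_e_ng)  # ~0.25, about 4 to 1 against guilt
assert abs(post_prob - post_odds / (1 + post_odds)) < 1e-12

# The analogous significance-testing point: with a prior of 0.5, the false
# positive rate (p-less-than interpretation) is close to the P value
alpha, power, prior = 0.05, 0.8, 0.5      # power = 0.8 is an assumed value
fpr = alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)
print(round(post_prob, 3), round(fpr, 3))  # ~0.2 and ~0.059
```

The two posterior probabilities agree to machine precision, which is the point of the appendix: the odds form is simply an algebraic rearrangement of the probability form.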
(1) The likelihood ratio works only when there is a 50:50 chance that the suspect is guilty before any evidence is presented (so the prior probability of guilt is 0.5, or, equivalently, the prior odds ratio is 1). (2) The false positive rate in significance testing is close to the P value only when the prior probability of a real effect is 0.5, as shown in section 6 of the P value paper. However, there is another twist in the significance testing argument. The statement above is right if we take as a positive result any P < 0.05. If we want to interpret a value of P = 0.047 in a single test, then, as explained in section 10 of the P value paper, we should restrict attention to only those tests that give P close to 0.047. When that is done, the false positive rate is 26% even when the prior is 0.5 (and much bigger than 30% if the prior is smaller – see extra Figure). That justifies the assertion that if you claim to have discovered something because you have observed P = 0.047 in a single test, then there is a chance of at least 30% that you'll be wrong. Is there, I wonder, any legal equivalent of this argument? Tagged Clive Stafford Smith, false conviction, false discovery rate, False positive risk, false positives, FPR, Law, lawyers, Michael Mansfield, Philip Dawid, Squier, statistics, Waney Squier | 10 Comments Most alternative medicine is illegal I'm perfectly happy to think of alternative medicine as being a voluntary, self-imposed tax on the gullible (to paraphrase Goldacre again). But only as long as its practitioners do no harm and only as long as they obey the law of the land. Only too often, though, they do neither. When I talk about law, I don't mean lawsuits for defamation. Defamation suits are what homeopaths and chiropractors like to use to silence critics. Heaven knows, I've become accustomed to being defamed by people who are, in my view, fraudsters, but lawsuits are not the way to deal with it.
I'm talking about the Trading Standards laws. Everyone has to obey them, and in May 2008 the law changed in a way that puts the whole health fraud industry in jeopardy. The gist of the matter is that it is now illegal to claim that a product will benefit your health if you can't produce evidence to justify the claim. I'm not a lawyer, but with the help of two lawyers and a trading standards officer I've attempted a summary. The machinery for enforcing the law does not yet work well, but when it does, there should be some very interesting cases. The obvious targets are homeopaths who claim to cure malaria and AIDS, and traditional Chinese Medicine people who claim to cure cancer. But there are some less obvious targets for prosecution too. Here is a selection of possibilities to savour.
Universities such as Westminster, Central Lancashire and the rest, which promote the spreading of false health claims
Hospitals, like the Royal London Homeopathic Hospital, that treat patients with mistletoe and marigold paste. Can they produce any real evidence that they work?
Edexcel, which sets examinations in alternative medicine (and charges for them)
Ofsted and the QCA, which validate these exams
Skills for Health and a whole maze of other unelected and unaccountable quangos which offer "national occupational standards" in everything from distant healing to hot stone therapy, thereby giving official sanction to all manner of treatments for which no plausible evidence can be offered
The Prince of Wales Foundation for Integrated Health, which notoriously offers health advice for which it cannot produce good evidence
Perhaps even the Department of Health itself, which notoriously referred to "psychic surgery" as a profession, and which has consistently refused to refer dubious therapies to NICE for assessment.
The law, insofar as I've understood it, is probably such that only the first three or four of these have sufficient commercial elements for there to be any chance of a successful prosecution. That is something that will eventually have to be argued in court. But lecanardnoir points out in his comment below that the Prince of Wales is intending to sell herbal concoctions, so perhaps he could end up in court too. We are talking about The Consumer Protection from Unfair Trading Regulations 2008. The regulations came into force on 26 May 2008. The full regulations can be seen here, or download the pdf file. They can also be seen on the UK Statute Law Database. The Office of Fair Trading and the Department for Business, Enterprise & Regulatory Reform (BERR) published Guidance on the Consumer Protection from Unfair Trading Regulations 2008 (pdf file), Statement of consumer protection enforcement principles (pdf file), and The Consumer Protection from Unfair Trading Regulations: a basic guide for business (pdf file). Has The UK Quietly Outlawed "Alternative" Medicine? On 26 September 2008, Mondaq Business Briefing published this article by a Glasgow lawyer, Douglas McLachlan. (Oddly enough, this article was reproduced on the National Center for Homeopathy web site.) "Proponents of the myriad of forms of alternative medicine argue that it is in some way "outside science" or that "science doesn't understand why it works". Critical thinking scientists disagree. The best available scientific data shows that alternative medicine simply doesn't work, they say: studies repeatedly show that the effect of some of these alternative medical therapies is indistinguishable from the well documented, but very strange, "placebo effect"." "Enter The Consumer Protection from Unfair Trading Regulations 2008 (the "Regulations").
The Regulations came into force on 26 May 2008 to surprisingly little fanfare, despite the fact that they represent the most extensive modernisation and simplification of the consumer protection framework for 20 years." The Regulations prohibit unfair commercial practices between traders and consumers through five prohibitions:
General Prohibition on Unfair Commercial Practices (Regulation 3)
Prohibition on Misleading Actions (Regulation 5)
Prohibition on Misleading Omissions (Regulation 6)
Prohibition on Aggressive Commercial Practices (Regulation 7)
Prohibition on 31 Specific Commercial Practices that are in all Circumstances Unfair (Schedule 1)
One of the 31 commercial practices which are in all circumstances considered unfair is "falsely claiming that a product is able to cure illnesses, dysfunction or malformations". The definition of "product" in the Regulations includes services, so it does appear that all forms of medical products and treatments will be covered. Just look at that! One of the 31 commercial practices which are in all circumstances considered unfair is "falsely claiming that a product is able to cure illnesses, dysfunction or malformations". Section 5 is equally powerful, and also does not contain the contentious word "cure" (see note below).
Misleading actions 5.—(1) A commercial practice is a misleading action if it satisfies the conditions in either paragraph (2) or paragraph (3). (2) A commercial practice satisfies the conditions of this paragraph— (a) if it contains false information and is therefore untruthful in relation to any of the matters in paragraph (4) or if it or its overall presentation in any way deceives or is likely to deceive the average consumer in relation to any of the matters in that paragraph, even if the information is factually correct; and (b) it causes or is likely to cause the average consumer to take a transactional decision he would not have taken otherwise.
These laws are very powerful in principle, but there are two complications in practice. One complication concerns the extent to which the onus has been moved on to the seller to prove the claims are true, rather than the accuser having to prove they are false. That is a lot more favourable to the accuser than before, but it's complicated. The other complication concerns enforcement of the new laws, and at the moment that is bad. Who has to prove what? That is still not entirely clear. McLachlan says "If we accept that mainstream evidence based medicine is in some way accepted by mainstream science, and alternative medicine bears the "alternative" qualifier simply because it is not supported by mainstream science, then where does that leave a trader who seeks to refute any allegation that his claim is false? Of course it is always open to the trader to show that his alternative therapy actually works, but the weight of scientific evidence is likely to be against him." On the other hand, I'm advised by a Trading Standards Officer that "He doesn't have to refute anything! The prosecution have to prove the claims are false". This has been confirmed by another Trading Standards Officer who said "It is not clear (though it seems to be) what difference is implied between "cure" and "treat", or what evidence is required to demonstrate that such a cure is false "beyond reasonable doubt" in court. The regulations do not provide that the maker of claims must show that the claims are true, or set a standard indicating how such a proof may be shown." The main defence against prosecution seems to be the "Due diligence defence", in paragraph 17. Due diligence defence 17.
—(1) In any proceedings against a person for an offence under regulation 9, 10, 11 or 12 it is a defence for that person to prove—
(a) that the commission of the offence was due to—
(i) a mistake;
(ii) reliance on information supplied to him by another person;
(iii) the act or default of another person;
(iv) an accident; or
(v) another cause beyond his control; and
(b) that he took all reasonable precautions and exercised all due diligence to avoid the commission of such an offence by himself or any person under his control.
If "taking all reasonable precautions" includes being aware of the lack of any good evidence that what you are selling is effective, then this defence should not be much use for most quacks. Douglas McLachlan has clarified this difficult question below. False claims for health benefits of foods A separate bit of legislation, the European regulation on nutrition and health claims made on food, ref 1924/2006, in Article 6, seems clearer in specifying that the seller has to prove any claims they make. Scientific substantiation for claims
1. Nutrition and health claims shall be based on and substantiated by generally accepted scientific evidence.
2. A food business operator making a nutrition or health claim shall justify the use of the claim.
3. The competent authorities of the Member States may request a food business operator or a person placing a product on the market to produce all relevant elements and data establishing compliance with this Regulation.
That clearly places the onus on the seller to provide evidence for claims that are made, rather than the complainant having to 'prove' that the claims are false. On the problem of "health foods" the two bits of legislation seem to overlap. Both have been discussed in "Trading regulations and health foods", an editorial in the BMJ by M. E. J. Lean (Professor of Human Nutrition in Glasgow).
"It is already illegal under food labelling regulations (1996) to claim that food products can treat or prevent disease. However, huge numbers of such claims are still made, particularly for obesity." "The new regulations provide good legislation to protect vulnerable consumers from misleading "health food" claims. They now need to be enforced proactively to help direct doctors and consumers towards safe, cost effective, and evidence based management of diseases." In fact the European Food Safety Authority (EFSA) seems to be doing a rather good job at imposing the rules. This, predictably, provoked howls of anguish from the food industry. There is a synopsis here. "Of eight assessed claims, EFSA's Panel on Dietetic Products, Nutrition and Allergies (NDA) rejected seven for failing to demonstrate causality between consumption of specific nutrients or foods and intended health benefits. EFSA has subsequently issued opinions on about 30 claims with seven drawing positive opinions." ". . . EFSA in disgust threw out 120 dossiers supposedly in support of nutrients seeking addition to the FSD's positive list. If EFSA was bewildered by the lack of data in the dossiers, it needn't have been, as industry freely admitted it had in many cases submitted such hollow documents to temporarily keep nutrients on-market." Or, on another industry site, "EFSA's harsh health claim regime": "By setting an unworkably high standard for claims substantiation, EFSA is threatening R&D, not to mention health claims that have long been officially approved in many jurisdictions." Here, of course, "unworkably high standard" just means real genuine evidence. How dare they ask for that! Enforcement of the law Article 19 of the Unfair Trading Regulations says 19.—(1) It shall be the duty of every enforcement authority to enforce these Regulations.
(2) Where the enforcement authority is a local weights and measures authority the duty referred to in paragraph (1) shall apply to the enforcement of these Regulations within the authority's area. Nevertheless, enforcement is undoubtedly a weak point at the moment. The UK is obliged to enforce these laws, but at the moment it is not doing so effectively. A letter in the BMJ from Rose & Garrow describes two complaints under the legislation in which it appears that a Trading Standards office failed to enforce the law. They comment " . . . member states are obliged not only to enact it as national legislation but to enforce it. The evidence that the government has provided adequate resources for enforcement, in the form of staff and their proper training, is not convincing. The media, and especially the internet, are replete with false claims about health care, and sick people need protection. All EU citizens have the right to complain to the EU Commission if their government fails to provide that protection." This is not a good start. A lawyer has pointed out to me "that it can sometimes be very difficult to get Trading Standards or the OFT to take an interest in something that they don't fully understand. I think that if it doesn't immediately leap out at them as being false (e.g. "these pills cure all forms of cancer") then it's going to be extremely difficult. To be fair, neither Trading Standards nor the OFT were ever intended to be medical regulators and they have limited resources available to them. The new Regulations are a useful new weapon in the fight against quackery, but they are no substitute for proper regulation." Trading Standards originated in Weights and Measures. It was their job to check that your pint of beer was really a pint. Now they are being expected to judge medical controversies.
Either they will need more people and more training, or responsibility for enforcement of the law should be transferred to some more appropriate agency (though one hesitates to suggest the MHRA after their recent pathetic performance in this area). Who can be prosecuted? Any "trader", a person or a company. There is no need to have actually bought anything, and no need to have suffered actual harm. In fact there is no need for there to be a complainant at all. Trading standards officers can act on their own. But there must be a commercial element. It's unlikely that simply preaching nonsense would be sufficient to get you prosecuted, so the Prince of Wales is, sadly, probably safe. Universities who teach that "Amethysts emit high Yin energy" make an interesting case. They charge fees and in return they are "falsely claiming that a product is able to cure illnesses". In my view they are behaving illegally, but we shan't know until a university is taken to court. Watch this space. The fact remains that the UK is obliged to enforce the law and presumably it will do so eventually. When it does, alternative medicine will have to change very radically. If it were prevented from making false claims, there would be very little of it left apart from tea and sympathy. New Zealand must have similar laws. Just as I was about to post this, I found that in New Zealand a "couple who sold homeopathic remedies claiming to cure bird flu, herpes and Sars (severe acute respiratory syndrome) have been convicted of breaching the Fair Trading Act." They were ordered to pay fines and court costs totalling $23,400. A clarification from Douglas McLachlan On the difficult question of who must prove what, Douglas McLachlan, who wrote Has The UK Quietly Outlawed "Alternative" Medicine?, has kindly sent the following clarification.
"I would agree that it is still for the prosecution to prove that the trader committed the offence beyond a reasonable doubt, and that burden of proof is always on the prosecution at the outset, but I think if a trader makes a claim regarding his product and the best scientific evidence available indicates that that claim is false, then it will be on the trader to substantiate the claim in order to defend himself. How will the trader do so? Perhaps the trader might call witness after witness in court to provide anecdotal evidence of their experiences, or "experts" that support their claim – in which case it will be for the prosecution to explain the scientific method to the Judge and to convince the Judge that its study evidence is to be preferred. Unfortunately, once human personalities get involved things could get clouded – I could imagine a small-time seller of snake oil having serious difficulty, but a well funded homeopathy company engaging smart lawyers to quote flawed studies and lead anecdotal evidence to muddy the waters just enough for a Judge to give the trader the benefit of the doubt. That seems to be what happens in the wider public debate, so it's easy to envisage it happening in a courtroom." The "average consumer". The regulations state (3) A commercial practice is unfair if— (a) it contravenes the requirements of professional diligence; and (b) it materially distorts or is likely to materially distort the economic behaviour of the average consumer with regard to the product. It seems, therefore, that what matters is whether the "average consumer" would infer from what is said that a claim was being made to cure a disease. The legal view cited by Mojo (comment #2, below) is that expressions such as "can be used to treat" or "can help with" would be considered by the average consumer as implying successful treatment or cure. The drugstore detox delusion.
A nice analysis of "detox" at Science-based Pharmacy. Tagged Academia, alternative medicine, Anti-science, antiscience, CAM, Central Lancashire, chiropractic, Fair trading, herbalism, homeopathy, Law, New Zealand, nutribollocks, nutrition, nutritional therapy, Prince's Foundation, Trading Standards, Unfair Trading, Universities, Westminster university | 30 Comments
Characterization of cardiac- and respiratory-driven cerebrospinal fluid motion based on asynchronous phase-contrast magnetic resonance imaging in volunteers Ken Takizawa, Mitsunori Matsumae (ORCID: orcid.org/0000-0002-9239-9762), Saeko Sunohara, Satoshi Yatsushiro & Kagayaki Kuroda A classification of cardiac- and respiratory-driven components of cerebrospinal fluid (CSF) motion has been demonstrated using echo planar imaging and time-spatial labeling inversion pulse techniques of magnetic resonance imaging (MRI). However, quantitative characterization of the two motion components has not been performed to date. Thus, in this study, the velocities and displacements of the waveforms of the two motions were quantitatively evaluated based on an asynchronous two-dimensional (2D) phase-contrast (PC) method followed by frequency component analysis. The effects of respiration and cardiac pulsation on CSF motion were investigated in 7 healthy subjects under guided respiration using asynchronous 2D-PC 3-T MRI. The respiratory and cardiac components in the foramen magnum and aqueduct were separated, and their respective fractions of velocity and amount of displacement were compared. For velocity in the Sylvian aqueduct and foramen magnum, the fraction attributable to the cardiac component was significantly greater than that of the respiratory component throughout the respiratory cycle. As for displacement, the fraction of the respiratory component was significantly greater than that of the cardiac component in the aqueduct regardless of the respiratory cycle and in the foramen magnum in the 6- and 10-s respiratory cycles. There was no significant difference between the fractions in the 16-s respiratory cycle in the foramen magnum. To separate cardiac- and respiratory-driven CSF motions, asynchronous 2D-PC MRI was performed under respiratory guidance. For velocity, the cardiac component was greater than the respiratory component.
In contrast, for the amount of displacement, the respiratory component was greater. Intracranial cerebrospinal fluid (CSF) motion changes with cardiac and respiratory rhythms [1]. In clinical practice, most clinicians accept that the motion of the CSF has two elements, a fast movement synchronized with the heartbeat and a somewhat slower movement synchronized with respiratory movements, on the basis of observations of the fluid surface during surgery or CSF drainage. When discussing the physiological role of CSF, analyzing its motion in terms of its separate cardiac and respiratory components is valuable for elucidating the pathologies of diseases that cause abnormal movement of the CSF, such as hydrocephalus. Magnetic resonance imaging (MRI) provides a noninvasive technique for studying CSF dynamics in human subjects [2,3,4,5,6]. Numerous researchers have investigated cardiac modulation of CSF using various MRI techniques [2, 6, 7]. On the other hand, only a few studies of the modulation of CSF motion induced by respiration have been performed [8,9,10]. To visualize the cardiac- and respiratory-driven CSF motions separately, Yamada et al. [8] used a spin-labeling technique called time-spatial labeling inversion pulse (Time-SLIP). Chen et al. [11] used the MRI-based simultaneous multi-slice (SMS) echo planar imaging (EPI) technique. A new approach using frequency analysis has recently also come into use. Yatsushiro et al. [12] used the 2-dimensional phase-contrast (2D-PC) technique to classify intracranial CSF motion into cardiac and respiratory components and expressed these by means of correlation mapping. We consider that quantitative analysis of velocity and displacement, the integral of velocity over time, is required to ascertain the dynamics of CSF motion as water, and this study was conceived on the assumption that quantitative analysis of CSF motion by 2D-PC, a development building on previous techniques, is appropriate for this purpose.
To separate the cardiac and respiratory components of CSF motion, the asynchronous real-time 2D-PC technique was used in seven healthy volunteers under controlled respiration. The velocity and the amount of displacement of the cardiac and respiratory components of CSF motion were quantified. The velocity and displacement were then compared in each respiratory cycle, and the effects of respiratory and cardiac components on CSF motion were quantitatively investigated. Our institutional review board approved this research. All volunteers were examined after providing appropriate informed consent, consistent with the terms of approval from the institutional review board of our institution. The asynchronous 2D-PC technique under controlled respiration was performed in 7 healthy volunteers (6 male and 1 female) aged 21–31 years. The respiratory cycle was set to 6, 10, and 16 s, to cover the range of the normal respiratory cycle. Volunteers were requested to control their respiration according to audio guidance for inhalation and exhalation timing. To monitor respiration, a bellows-type pressure sensor was placed around the abdomen of the subject, and an electrocardiogram (ECG) was monitored to identify the frequency distribution of individual cardiac motion. Asynchronous 2D-PC steady-state-free precession (SSFP) was performed on a 3-T MR scanner with the following conditions: flow encode direction foot–head (FH); data points 256; repetition time (TR) 6.0 ms; echo time (TE) 3.9 ms; flip angle (FA) 10°; field of view (FOV) 28 × 28 cm²; velocity encoding (VENC) 10 cm/s; acquisition matrix 89 × 128 (half-Fourier); reconstruction matrix 256 × 256; and slice thickness 7 mm. These conditions yielded a frame rate of 4.6 images/s (temporal resolution of 217 ms). The total duration of data acquisition for each subject was 55 s. After obtaining the color-coded velocity vector images, rough outlines of the ROIs were specified around the Sylvian aqueduct and the foramen magnum.
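As a quick sanity check on the acquisition (my own sketch, not part of the paper), the 4.6 images/s frame rate gives a Nyquist frequency of 2.3 Hz, which comfortably covers both the cardiac frequency range (up to about 1.6 Hz, i.e. roughly 96 bpm) and the guided respiratory periods:

```python
# Sampling sanity check for the acquisition parameters described above.
frame_rate = 4.6                  # images/s (temporal resolution 217 ms)
nyquist = frame_rate / 2          # highest frequency observable without aliasing

cardiac_band = (1.0, 1.6)         # Hz; roughly 60-96 bpm at rest
respiratory_cycles = [6, 10, 16]  # s; the guided respiratory periods
resp_freqs = [1 / T for T in respiratory_cycles]

assert cardiac_band[1] < nyquist            # cardiac pulsation is resolvable
assert all(f < nyquist for f in resp_freqs) # so is respiration
print(nyquist, [round(f, 3) for f in resp_freqs])
```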
The partial volume effect arising from the relatively large voxel size (approximately 2 mm) used in the present experiment made a simple threshold-based segmentation of the T2-weighted image difficult. To segment the CSF regions on the images with a reduced partial volume effect and to apply these images to the velocity and pressure images as masks for the quantitative analyses, a novel segmentation technique, called spatial-based fuzzy clustering, was applied. The details of this technique are explained elsewhere [13]. The waveform in the individual voxels was separated into respiratory and cardiac components based on frequency range, and the maximum velocity was determined for the respective components. The technical details of the procedure were explained in our previous study [12, 14]. The fraction of the respiratory or cardiac component relative to the sum of the two components was calculated for both velocity and displacement. The results of the above calculations for the cerebral aqueduct and the foramen magnum were compared statistically. Equation 1 shows the formula for the calculation of the fraction, F_r, of the velocity of the respiratory component relative to the sum of the velocities of the respiratory and cardiac components. $$F_{\text{r}} = \frac{v_{\text{r}}}{v_{\text{r}} + v_{\text{c}}}$$ where v_r is the respiratory component of the velocity, while v_c is the cardiac component. The mean CSF displacement of each component in the cranial and caudal directions was calculated from the velocity waveform based on the following equation, $$D = \frac{1}{N}\sum_{n = 1}^{N} \left( \Delta t \sum_{m = 1}^{M} v\left( m \cdot \Delta t \right) \right)$$ where v(m·Δt) is the velocity at the mth time point of the observation with a sampling period of Δt, and M is the number of time points in the cranial or caudal direction.
For example, when the velocity was positive, its direction was regarded as cranial, and the number of corresponding data points was set to M. N is the number of voxels in a region of interest (ROI) for the displacement measurement. Fractions of cardiac- and respiratory-induced displacements were calculated in a similar manner to Eq. 1, but separately for the cranial and caudal directions. The Kolmogorov–Smirnov test and the Mann–Whitney U test were used to compare the respiratory and cardiac components of the velocity and the amount of displacement. Figure 1b presents a CSF velocity waveform obtained with a 6-s respiratory cycle by the asynchronous time-resolved 2D-PC technique at ROI #1, placed at the foramen magnum, as depicted in Fig. 1a. Summaries of the velocities and displacements of the respiratory and cardiac components of the CSF at the Sylvian aqueduct and the foramen magnum are shown in Tables 1 and 2. The fractions of the respiratory and cardiac components of the CSF velocity at the Sylvian aqueduct are shown in Fig. 2. The cardiac component was significantly greater than the respiratory component (p = 0.002) regardless of the respiratory period. A similar plot for the fractions at the foramen magnum is shown in Fig. 3. In the results for both the Sylvian aqueduct and the foramen magnum, the cardiac component was significantly greater than the respiratory component (p = 0.002) throughout the three different respiratory cycles. There was no significant difference between the fractions of the different respiratory periods for both the respiratory and cardiac components. A T2-weighted image (a) of a healthy subject with 2 ROIs (red rectangles) placed in the foramen magnum (#1) and the Sylvian aqueduct (#2).
The temporal changes of the total velocity waveform of the CSF and the separated cardiac and respiratory velocity components at ROI #1 are shown in (b). Table 1 Summary of the cardiac- and respiratory-driven CSF velocities (cm/s) in the cranial and caudal directions for the three different respiratory periods. Table 2 Summary of the cardiac- and respiratory-driven CSF displacements (cm) in the cranial and caudal directions for the three different respiratory periods. Box plots of the fractions of the respiratory and cardiac components of the CSF velocity in the three different respiratory cycles (6, 10, and 16 s) at the aqueduct. The cranial and caudal directions are plotted separately. Outlying values are indicated by "o". Similar box plots of the fractions of the CSF velocity components as in Fig. 2, but at the foramen magnum. Outlying values are indicated by "o", and far-outlying values are indicated by an asterisk. The fraction of the displacement of the CSF for the respiratory and cardiac components at the Sylvian aqueduct is shown in Fig. 4. Throughout the respiratory cycle, the respiratory component was significantly greater than the cardiac component (p = 0.002). No significant difference was found between the fractions of the different respiratory periods. A similar plot for the displacement fraction at the foramen magnum is shown in Fig. 5. In this region, the displacement fraction of the respiratory component was significantly greater than that of the cardiac component in the respiratory cycles of 6 and 10 s (p = 0.02). However, no significant difference was observed at 16 s (p = 0.85). Significant differences between the respiratory cycles of 6 and 16 s were observed in both the respiratory and cardiac components (p = 0.004). No differences were observed in the other respiratory cycles. Box plots of the fractions of the respiratory component and the cardiac component of the cranial and caudal displacements at the aqueduct.
The cranial and caudal directions are plotted separately. Similar box plots as in Fig. 4 for the displacement fractions at the foramen magnum. Outlying values are indicated by "o", and far-outlying values are indicated by an asterisk. To understand the driving force of CSF motion, researchers have investigated animals and humans using a variety of techniques [1]. Many concluded that CSF pulsations are mainly arterial in origin. On the other hand, CSF flow changes due to respiration have been the subject of only a few MRI studies; some researchers have investigated the effects of respiratory motion on CSF flow using MRI techniques [8, 10, 11, 15]. Beckett et al. [15] used simultaneous multi-slice (SMS) velocity imaging to investigate spinal and brain CSF motion. They reported that the CSF motion in the spine and brain is modulated not only by cardiac motion, but also by respiratory motion. Chen et al. [11] used an SMS EPI technique under respiratory guidance to measure respiratory- and cardiac-modulated CSF velocity and direction. They concluded that, during the inspiratory phase, there is upward (inferior to superior) CSF movement into the cranial cavity and lateral ventricles, with a reversal of direction in the expiratory phase. Yamada et al. [8] investigated the effect of respiration on CSF movement by using a non-contrast Time-SLIP technique with balanced steady-state-free precession (bSSFP) readout. Their results demonstrated that a substantially greater amount of CSF movement occurs with deep respiration than with cardiac pulsations. Later, Dreha-Kulaczewski et al. [10] concluded that inspiration is the major regulator of CSF motion. They used a highly under-sampled radial gradient–echo sequence with image reconstruction by regularized nonlinear inversion (NLINV) to observe the effect of respiration on CSF motion. 
Since their work relied on signal intensity modulation due to the inflow effect, separate and direct quantification of the CSF velocities due to cardiac pulsation and respiration was not performed. In a recent publication, Yildiz et al. [9] used a technique very similar to that of our present work to quantify and characterize the cardiac- and respiratory-induced CSF motions at the level of the foramen magnum. Assessment of intracranial CSF motions was, however, not made in their work. Thus we believe our present work adds new insights concerning the cardiac- and respiratory-induced CSF motions in the intracranial space. In the present study, we differentiated the cardiac and respiratory components to evaluate CSF motion. One of the simplest ways to separate cardiac and respiratory motions is frequency analysis. Sunohara et al. [14] developed a method using 2D-PC to analyze the driving force of CSF in terms of power and frequency mapping and successfully analyzed the cardiac and respiratory components of CSF motion, albeit obtaining their images from volunteers engaged in controlled respiration. This frequency technique was taken further here for the quantitative analysis of CSF motion related to the cardiac and respiratory components. The mathematical algorithm for separating the cardiac and respiratory components of the CSF motion is described in our previous work [12]. Briefly, a Fourier transform was applied to the time series of the obtained velocity data at each voxel. The components of CSF motion were extracted from the frequency spectrum by selecting the particular frequency bands corresponding to the cardiac and respiratory frequencies. In this particular work, the frequency band for the cardiac component was set as 1.0–1.6 Hz, while that for respiration was 0.018–0.3 Hz. In the present study, CSF motion was separated into respiratory and cardiac components. 
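The frequency-band selection described above can be sketched as follows. This is a minimal illustration only, assuming a NumPy implementation with the stated bands (cardiac 1.0–1.6 Hz, respiratory 0.018–0.3 Hz) and the 217 ms/frame sampling interval; the function name and masking details are illustrative, not the code used in the study:

```python
import numpy as np

def separate_components(velocity, dt=0.217,
                        resp_band=(0.018, 0.3), card_band=(1.0, 1.6)):
    """Split a CSF velocity time series into respiratory and cardiac
    components by zeroing all spectral content outside each band."""
    n = len(velocity)
    spectrum = np.fft.rfft(velocity)          # one-sided spectrum
    freqs = np.fft.rfftfreq(n, d=dt)          # bin frequencies in Hz

    def band_pass(lo, hi):
        masked = np.where((freqs >= lo) & (freqs <= hi), spectrum, 0)
        return np.fft.irfft(masked, n)        # back to the time domain

    return band_pass(*resp_band), band_pass(*card_band)
```

With dt = 0.217 s the Nyquist frequency is about 2.3 Hz, so the 1.0–1.6 Hz cardiac band is representable, although only 4–5 samples cover each cardiac cycle.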
The amount of CSF displacement was found to be larger in the respiratory component than in the cardiac component in both cranial and caudal directions. At the same time, while the cardiac component showed a smaller displacement, its velocity was higher than that of the respiratory component. In other words, the movement of CSF due to the cardiac component was rapid and small, and that due to the respiratory component was slow and large. These results are consistent with those of the visual analysis of CSF reported by Yamada et al. [8], demonstrating that the influence of the respiratory component on the amount of displacement per unit of time was greater than that of the cardiac component. These findings provide quantitative values for results that will be readily understandable to clinicians who, in clinical practice, have observed the rapid, short-period, powerful CSF motion synchronized with the heartbeat and the slowly pulsing, long-period CSF motion. The difference in the displacement was significant (p < 0.001) and clear in the Sylvian aqueduct for all respiratory periods. The difference became slightly less clear in the foramen magnum, particularly for longer respiratory periods (p < 0.05 for the 16-s cycle). This may be attributed to the fact that the respiratory process tended to be unstable in the longer period (16 s), and, thus, the individual variation among the volunteers became larger than that in the shorter period. The Time-SLIP technique enables changes in labeled spins to be visualized. This is in approximate agreement with the displacement results of the present study, showing that CSF moves long distances in accordance with respiratory variations. In the present results, the velocity indicated the rapid movement of CSF with a short period associated with the heartbeat. 
To summarize CSF motion on the basis of these results: CSF moves fast as it spreads around the vessels with the heartbeat, but it moves over comparatively long distances in accordance with the slower movements of breathing, and this fast movement and this movement over long distances may be responsible for physical exchanges in the brain and spinal cord. However, the physical quantity measured in the present study is the displacement calculated by integrating the CSF velocity in the cranial or caudal direction, unlike the spin traveling distance, which the spin-labeling technique measures. Another important point is that the temporal resolution for data sampling (217 ms/frame) was not high enough to sample the cardiac-driven motion. Assuming a heart rate of 1 Hz, only 4–5 points can cover a cycle of cardiac-driven CSF motion, resulting in a lack of waveform sampling accuracy, although the present technique is a quantitative measurement based on the 2D-PC technique, which can measure the fluid velocity with 10% accuracy [16]. In this study, the asynchronous 2D-PC method was used under respiratory guidance, which also enabled the evaluation of the respiratory movement element. This was done by performing 2D-PC scanning continuously without a trigger in order to evaluate the slow, long-period motion of CSF and then carrying out quantitative analysis. Because the PC method combines temporal information with velocity and direction, it makes it possible to observe the complex motion of the CSF, providing the next step forward in elucidating the physiological functions of the CSF in vivo. The cardiac-related CSF motion is predominant over the respiratory-related motion, which maintains CSF pressure in the CSF cavity. However, the CSF moves a long distance, as shown by our analysis of displacement. The displacement of CSF in different cavities is important for the exchange of substances between the parenchyma and the CSF space. 
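The displacement calculation described above, integrating the velocity separately over samples of cranial (positive) and caudal (negative) flow, can be sketched as follows. The function name and the simple Riemann-sum integration are illustrative assumptions; the paper's exact normalization (equation [1]) is not reproduced here:

```python
import numpy as np

def directional_displacements(velocity, dt=0.217):
    """Integrate a CSF velocity time series (cm/s) separately over the
    samples where flow is cranial (v > 0) and caudal (v < 0).

    dt is the sampling interval in seconds (217 ms/frame in the study).
    Returns (cranial, caudal) displacements in cm, both as positive
    magnitudes."""
    v = np.asarray(velocity, dtype=float)
    cranial = v[v > 0].sum() * dt    # Riemann sum of the positive part
    caudal = -v[v < 0].sum() * dt    # magnitude of the negative part
    return cranial, caudal
```

Applied to the separated cardiac and respiratory velocity components, these directional displacements could then be normalized, for example by their sum, to obtain fractions comparable to those in Figs. 4 and 5.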
During surgery, neurosurgeons frequently see powerful short-range cardiac-related CSF waves and long-range, large-wave rhythmical pulsations related to the ventilator. Furthermore, at the tip of an external ventricular drain, clinicians always see the short-range, short-distance CSF pulsation and the long-range, long-distance CSF pulsation, and this alternating CSF pulsation can be identified non-invasively using the present technique. Our final goal is to identify the pathogenesis of CSF circulatory disturbances, as in hydrocephalus and Alzheimer dementia. Using quantitative analysis, we would be able to differentiate disease subgroups or to perform pre- and post-treatment analyses. One of the limitations is that the present MR technique is vulnerable to changes in the position of the human body. Such a position change makes the CSF motion more complex, resulting in failure to assess the association between human movements and CSF motion in daily life. Abbreviations: CSF: cerebrospinal fluid; Time-SLIP: time-spatial labeling inversion pulse; MRI: magnetic resonance imaging; 2D: 2-dimensional; PC: phase-contrast; 2D-PC: 2-dimensional phase-contrast; EEG: electroencephalogram; SSFP: steady-state-free precession; FH: foot-head; TR: repetition time; TE: echo time; FA: flip angle; FOV: field of view; VENC: velocity encoding; ROI: region of interest; SMS: simultaneous multi-slice; EPI: echo planar imaging; bSSFP: balanced steady-state-free precession. Matsumae M, Sato O, Hirayama A, Hayashi N, Takizawa K, Atsumi H, Sorimachi T. Research into the physiology of cerebrospinal fluid reaches a new horizon: intimate exchange between cerebrospinal fluid and interstitial fluid may contribute to maintenance of homeostasis in the central nervous system. Neurol Med Chir (Tokyo). 2016;56:416–41. Matsumae M, Hirayama A, Atsumi H, Yatsushiro S, Kuroda K. Velocity and pressure gradients of cerebrospinal fluid assessed with magnetic resonance imaging. J Neurosurg. 2014;120:218–27. Atsumi H, Matsumae M, Hirayama A, Kuroda K. 
Measurements of intracranial pressure and compliance index using 1.5-T clinical MRI machine. Tokai J Exp Clin Med. 2014;39:34–43. Yatsushiro S, Hirayama A, Matsumae M, Kuroda K. Visualization of pulsatile CSF motion separated by membrane-like structure based on four-dimensional phase-contrast (4D-PC) velocity mapping. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:6470–3. Horie T, Kajihara N, Matsumae M, Obara M, Hayashi N, Hirayama A, Takizawa K, Takahara T, Yatsushiro S, Kuroda K. Magnetic resonance imaging technique for visualization of irregular cerebrospinal fluid motion in the ventricular system and subarachnoid space. World Neurosurg. 2017;97:523–31. Hayashi N, Matsumae M, Yatsushiro S, Hirayama A, Abdullah A, Kuroda K. Quantitative analysis of cerebrospinal fluid pressure gradients in healthy volunteers and patients with normal pressure hydrocephalus. Neurol Med Chir (Tokyo). 2015;55:657–62. Hirayama A, Matsumae M, Yatsushiro S, Abdulla A, Atsumi H, Kuroda K. Visualization of pulsatile CSF motion around membrane-like structures with both 4D velocity mapping and Time-SLIP technique. Magn Reson Med Sci. 2015;14:263–73. Yamada S, Miyazaki M, Yamashita Y, Ouyang C, Yui M, Nakahashi M, Shimizu S, Aoki I, Morohoshi Y, McComb JG. Influence of respiration on cerebrospinal fluid movement using magnetic resonance spin labeling. Fluids Barriers CNS. 2013;10:36. Yildiz S, Thyagaraj S, Jin N, Zhong X, Heidari Pahlavian S, Martin BA, Loth F, Oshinski J, Sabra KG. Quantifying the influence of respiration and cardiac pulsations on cerebrospinal fluid dynamics using real-time phase-contrast MRI. J Magn Reson Imaging. 2017;46:431–9. Dreha-Kulaczewski S, Joseph AA, Merboldt KD, Ludwig HC, Gartner J, Frahm J. Inspiration is the major regulator of human CSF flow. J Neurosci. 2015;35:2485–91. Chen L, Beckett A, Verma A, Feinberg DA. Dynamics of respiratory and cardiac CSF motion revealed with real-time simultaneous multi-slice EPI velocity phase contrast imaging. Neuroimage. 
2015;122:281–7. Yatsushiro S, Sunohara S, Takizawa K, Matsumae M, Kajihara N, Kuroda K. Characterization of cardiac- and respiratory-driven cerebrospinal fluid motions using correlation mapping with asynchronous 2-dimensional phase contrast technique. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:3867–70. Abdullah A, Hirayama A, Yatsushiro S, Matsumae M, Kuroda K. Cerebrospinal fluid image segmentation using spatial fuzzy clustering method with improved evolutionary expectation maximization. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:3359–62. Sunohara S, Yatsushiro S, Takizawa K, Matsumae M, Kajihara N, Kuroda K. Investigation of driving forces of cerebrospinal fluid motion by power and frequency mapping based on asynchronous phase contrast technique. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:1232–5. Beckett A, Chen L, Verma A, Feinberg DA. Velocity phase imaging with simultaneous multi-slice EPI reveals respiration driven motion in spinal CSF. Proc Intl Soc Mag Reson Med. 2015;23:4445. Tang C, Blatter DD, Parker DL. Accuracy of phase-contrast flow measurements in the presence of partial-volume effects. J Magn Reson Imaging. 1993;3:377–85. KT, SY, and SS carried out the volunteers' data collection and statistical analysis, and drafted the manuscript. KT, MM, and KK drafted the manuscript. KK participated in the coordination of the study. All authors read and approved the final manuscript. The authors are grateful to Mr. Nao Kajiwara, Department of Radiology, Tokai University School of Medicine for his technical assistance with MR imaging. Kagayaki Kuroda is an employee of Bioview Inc. The other authors declare that they have no competing interests. All authors agree with publication of this manuscript. This research was approved by our institution's Institutional Review Board for Clinical Research, Tokai University Hospital (http://irb.med.u-tokai.ac.jp/) IRB No. 13R-066 (Flow dynamic study of cerebrospinal fluid using MRI). 
All volunteers were examined after providing appropriate informed consent, consistent with the terms of approval from the internal review board of Tokai University Hospital, Isehara, Kanagawa, Japan. Written, informed consent concerning diagnostic procedures and image analysis was obtained from all volunteers and patients. This study was supported in part by the Research and Study Project of Tokai University Educational System General Research Organization, and by Health and Labor Sciences Research grants from the Japanese government for research on rare and intractable diseases. The authors have no financial or competing interest in any commercial product used in this study, nor any substantial relationship with an entity that may impact or benefit from the conclusions of this research. Department of Neurosurgery, Tokai University School of Medicine, 143 Shimokasuya, Isehara, Kanagawa, 2591193, Japan Ken Takizawa & Mitsunori Matsumae Course of Science and Technology, Graduate School of Science and Technology, Tokai University, 4-1-1 Kitakaname, Hiratsuka, Kanagawa, 2591292, Japan Saeko Sunohara, Satoshi Yatsushiro & Kagayaki Kuroda Ken Takizawa Mitsunori Matsumae Saeko Sunohara Satoshi Yatsushiro Kagayaki Kuroda Correspondence to Mitsunori Matsumae. Takizawa, K., Matsumae, M., Sunohara, S. et al. Characterization of cardiac- and respiratory-driven cerebrospinal fluid motion based on asynchronous phase-contrast magnetic resonance imaging in volunteers. Fluids Barriers CNS 14, 25 (2017). https://doi.org/10.1186/s12987-017-0074-1 Phase-contrast image CNS Fluid and Solute Movement: Physiology, Modelling and Imaging
\begin{document} \title{\LARGE \bf Lipschitz Classification of Almost-Riemannian Distances on Compact Oriented Surfaces } \begin{abstract} Two-dimensional almost-Riemannian structures are generalized Riemannian structures on surfaces for which a local orthonormal frame is given by a Lie bracket generating pair of vector fields that can become collinear. We consider the Carnot--Caratheodory distance canonically associated with an almost-Riemannian structure and study the problem of Lipschitz equivalence between two such distances on the same compact oriented surface. We analyse the generic case, allowing in particular for the presence of tangency points, i.e., points where two generators of the distribution and their Lie bracket are linearly dependent. The main result of the paper provides a characterization of the Lipschitz equivalence class of an almost-Riemannian distance in terms of a labelled graph associated with it. \end{abstract} \section{Introduction} Consider a pair of smooth vector fields $X$ and $Y$ on a two-dimensional smooth manifold $M$. If the pair $(X,Y)$ is Lie bracket generating, i.e., if $\mathrm{span}\{X(q),Y(q),[X,Y](q),[X,[X,Y]](q),\ldots\}$ is full-dimensional at every $q\in M$, then the control system \begin{eqnarray} \label{ff} \dot q=u X(q)+v Y(q)\,,~~~u^2+v^2\leq 1\,,~~~q\in M\,, \end{eqnarray} is completely controllable and the minimum-time function defines a continuous distance $d$ on $M$. When $X$ and $Y$ are everywhere linearly independent (the only possibility for this to happen is that $M$ is parallelizable), such a distance is Riemannian and it corresponds to the metric for which $(X,Y)$ is an orthonormal frame. Our aim is to study the geometry obtained starting from a pair of vector fields which may become collinear. Under generic hypotheses, the set $\mathcal{Z}$ (called {\it singular locus}) of points of $M$ at which $X$ and $Y$ are parallel is a one-dimensional embedded submanifold of $M$ (possibly disconnected). 
Metric structures that can be defined {\it locally} by a pair of vector fields $(X,Y)$ through \r{ff} are called almost-Riemannian structures. Equivalently, an almost-Riemannian structure ${\cal S}$ can be defined as an Euclidean bundle $E$ of rank two over $M$ (i.e. a vector bundle whose fibre is equipped with a smoothly-varying scalar product $\langle\cdot,\cdot\rangle_q$) and a morphism of vector bundles $f:E\rightarrow TM$ such that the evaluation at $q$ of the Lie algebra generated by the submodule \begin{eqnarray} \Delta:=\{f\circ\sigma\mid\sigma\mathrm{ \, section\, of\, } E\}\label{delta} \end{eqnarray} of the algebra of vector fields on $M$ is equal to $T_{q}M$ for every $q\in M$. If $E$ is orientable, we say that ${\mathcal S}$ is {\it orientable}. The singular locus $\mathcal{Z}$ is the set of points $q$ of $M$ at which $f(E_{q})$ is one-dimensional. An almost-Riemannian structure is Riemannian if and only if $\mathcal{Z}=\emptyset$, i.e. $f$ is an isomorphism of vector bundles. The first example of a genuinely almost-Riemannian structure is provided by the Grushin plane, which is the almost-Riemannian structure on $M=\mathbb{R}^2$ with $E=\mathbb{R}^2\times\mathbb{R}^2$, $f((x,y),(a,b))=((x,y),(a,b x))$ and $\langle\cdot,\cdot\rangle$ the canonical Euclidean structure on $\mathbb{R}^2$. The model was originally introduced in the context of hypoelliptic operator theory \cite{FL1,grusin1} (see also \cite{bellaiche,algeria}). Notice that the singular locus is indeed nonempty, being equal to the $y$-axis. Another example of an almost-Riemannian structure appeared in problems of control of quantum mechanical systems (see \cite{q4,q1}). Almost-Riemannian structures present very interesting phenomena. For instance, even in the case where the Gaussian curvature is everywhere negative (where it is defined, i.e., on $M\setminus\mathcal{Z}$), geodesics may have conjugate points. 
This happens for instance on the Grushin plane (see \cite{ABS} and also \cite{tannaka,rigge} in the case of surfaces of revolution). The structure of the cut and conjugate loci is described in \cite{bcgj} under generic assumptions. In \cite{euler}, we provided an extension of the Gauss--Bonnet theorem to almost-Riemannian structures, linking the Euler number of the vector bundle $E$ to a suitable principal part of the integral of the curvature on $M$. For generalizations of the Gauss-Bonnet formula in related context see also \cite{pelletier}. The results in \cite{euler} have been obtained under a set of generic hypotheses called {\bf (H0)}. To introduce it, let us define the {\it flag} of the submodule $\Delta$ defined in \r{delta} as the sequence of submodules $\Delta=\Delta_1\subset \Delta_2\subset\cdots \subset\Delta_m \subset \cdots$ defined through the recursive formula $$ \Delta_{k+1}=\Delta_k+[\Delta,\Delta_k]. $$ Under generic assumptions, the singular locus $\mathcal{Z}$ has the following properties: {\bf (i)} $\mathcal{Z}$ is an embedded one-dimensional submanifold of $M$; {\bf (ii)} the points $q\in M$ at which $\Delta_2(q)$ is one-dimensional are isolated; {\bf (iii)} $\Delta_3(q)=T_qM$ for every $q\in M$. We say that $\cal S$ satisfies {\bf (H0)}\ if properties {\bf (i)},{\bf (ii)},{\bf (iii)} hold true. If this is the case, a point $q$ of $M$ is called {\it ordinary} if $\Delta(q)=T_qM$, {\it Grushin point} if $\Delta(q)$ is one-dimensional and $\Delta_2(q)=T_qM$, i.e. the distribution is transversal to $\mathcal{Z}$, and {\it tangency point} if $\Delta_2(q)$ is one-dimensional, i.e. the distribution is tangent to $\mathcal{Z}$. Local normal forms around ordinary, Grushin and tangency points have been provided in \cite{ABS}. 
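For instance, these notions can be checked directly on the Grushin plane introduced above, for which an orthonormal frame is given by $X=\partial_x$ and $Y=x\,\partial_y$. One computes $$ [X,Y]=[\partial_x,\,x\,\partial_y]=\partial_y, $$ so that $\Delta_2(q)=\mathrm{span}\{X(q),Y(q),[X,Y](q)\}=T_qM$ for every $q\in\mathbb{R}^2$. In particular, $\Delta_2$ is two-dimensional at every point of the singular locus $\mathcal{Z}=\{x=0\}$: every point of $\mathcal{Z}$ is a Grushin point and the Grushin plane has no tangency points. 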
When an ARS ${\cal S}=(E,f,\langle\cdot,\cdot\rangle)$ satisfying {\bf (H0)}\ is oriented and the surface itself is oriented, $M$ is split into two open sets $M^+$, $M^-$ such that $\mathcal{Z}=\partial M^+=\partial M^-$, $f:E|_{M^+}\rightarrow TM^+$ is an orientation preserving isomorphism and $f:E|_{M^-}\rightarrow TM^-$ is an orientation reversing isomorphism. Moreover, in this case it is possible to associate with each tangency point $q$ an integer $\tau_q$ in the following way. Choosing on $\mathcal{Z}$ the orientation induced by $M^+$, $\tau_q= 1$ if walking along the oriented curve $\mathcal{Z}$ in a neighborhood of $q$ the angle between the distribution and the tangent space to $\mathcal{Z}$ increases, $\tau_q= -1$ if the angle decreases. In this paper we provide a classification of orientable two-dimensional almost-Riemannian structures in terms of graphs. With an oriented almost-Riemannian structure, we associate a graph whose vertices correspond to connected components of $M\setminus\mathcal{Z}$ and whose edges correspond to connected components of $\mathcal{Z}$. The edge corresponding to a connected component $W$ of $\mathcal{Z}$ joins the two vertices corresponding to the connected components of $M\setminus\mathcal{Z}$ adjacent to $W$. Every vertex is labelled with its orientation ($\pm 1$ if it is a subset of $M^\pm$) and its Euler characteristic. Every edge is labelled with the ordered sequence of signs (modulo cyclic permutations) given by the contributions at the tangency points belonging to $W$. See Figure \ref{figintro} for an example of an almost-Riemannian structure and its corresponding graph. We say that two labelled graphs are equivalent if they are equal or they can be obtained from the same almost-Riemannian structure by reversing the orientation of the vector bundle. \begin{figure} \caption{Example of ARS on a surface of genus $4$ and corresponding labelled graph} \label{figintro} \end{figure} The main result of the paper is the following. 
\begin{theorem}\label{lip-eq} Two oriented almost-Riemannian structures, defined on compact oriented surfaces and satisfying {\bf (H0)}, are Lipschitz equivalent if and only if they have equivalent graphs. \end{theorem} In the statement above, two almost-Riemannian structures are said to be Lipschitz equivalent if there exists a diffeomorphism between their base surfaces which is bi-Lipschitz with respect to the two almost-Riemannian distances. This theorem shows another interesting difference between Riemannian manifolds and almost-Riemannian ones: in the Riemannian context, Lipschitz equivalence coincides with the equivalence as differentiable manifolds; in the almost-Riemannian context, Lipschitz equivalence is a stronger condition. Notice, however, that in general Lipschitz equivalence does not imply isometry. Indeed, the Lipschitz equivalence between two structures does not depend on the metric structure but only on the submodule $\Delta$. This is highlighted by the fact that the graph itself depends only on $\Delta$. The structure of the paper is the following. In section~\ref{basdef}, we recall some basic notions of sub-Riemannian geometry. Section~\ref{deftau} introduces the definitions of the number of revolutions of a one-dimensional distribution along a closed oriented curve and of the graph associated with an almost-Riemannian structure. In section~\ref{proof} we prove Theorem~\ref{lip-eq}. Section~\ref{necessity} provides the proof of the fact that having equivalent graphs is a necessary condition for two structures to be Lipschitz equivalent. Finally, in section~\ref{sufficiency} we show this condition to be sufficient. \section{Preliminaries}\label{basdef} This section is devoted to recalling some basic definitions in the framework of sub-Riemannian geometry following \cite{euler,ABS}, see also \cite{bellaiche,montgomery}. Let $M$ be an $n$-dimensional manifold. 
Throughout the paper, unless specified, manifolds are smooth (i.e., ${\mathcal C}^{\infty}$) and without boundary; vector fields and differential forms are smooth. Given a vector bundle $E$ over $M$, the ${\mathcal C}^\infty(M)$-module of smooth sections of $E$ is denoted by $\Gamma(E)$. For the particular case $E=TM$, the set of smooth vector fields on $M$ is denoted by $\mathrm{Vec}(M)$. \begin{definition}\label{fiberrvd} An {\it $(n,k)$-rank-varying distribution} on an $n$-dimensional manifold $M$ is a pair $(E,f)$ where $E$ is a vector bundle of rank $k$ over $M$ and $f:E\rightarrow TM$ is a morphism of vector bundles, i.e., {\bf (i)} the diagram $$ \xymatrix{ E \ar[r]^{f} \ar[dr]_{\pi_E} & TM \ar[d]^{\pi} \\ & M } $$ commutes, where $\pi:TM\rightarrow M$ and $\pi_E:E\rightarrow M$ denote the canonical projections and {\bf (ii)} $f$ is linear on fibers. Moreover, we require the map $\sigma\mapsto f\circ\sigma$ from $\Gamma(E)$ to $\mathrm{Vec}(M)$ to be injective. \end{definition} Given an $(n,k)$-rank-varying distribution, we denote by $f_*:\Gamma(E)\rightarrow \mathrm{Vec}(M)$ the morphism of ${\mathcal C}^\infty(M)$-modules that maps $\sigma\in\Gamma(E)$ to $f\circ\sigma\in\mathrm{Vec}(M)$. The following proposition shows that all the information about a rank-varying distribution is carried by the submodule $f_*(\Gamma(E))$. \begin{proposition}\label{stessomodulo} Given two $(n,k)$-rank-varying distributions $(E_i,f_i), i=1,2$, assume that they define the same submodule of $\mathrm{Vec}(M)$, i.e., $(f_1)_*(\Gamma(E_1))=(f_2)_*(\Gamma(E_2))=\Delta\subseteq\mathrm{Vec}(M)$. Then, there exists an isomorphism of vector bundles $h:E_1\rightarrow E_2$ such that $f_2\circ h=f_1$. \end{proposition} {\bf Proof.} Since $(f_i)_*:\Gamma(E_i)\rightarrow \Delta, i=1,2$, are isomorphisms of ${\mathcal C}^\infty(M)$-modules, $(f_2)_*^{-1}\circ(f_1)_*:\Gamma(E_1)\rightarrow\Gamma(E_2)$ is an isomorphism. 
A classical result given in \cite[Proposition XIII p.78]{greub} states that the map $f\mapsto f_*$ is an isomorphism of ${\mathcal C}^\infty(M)$-modules from the set of morphisms from $E_1$ to $E_2$ to the set of morphisms from $\Gamma(E_1)$ to $\Gamma(E_2)$. Applying this result, there exists a unique isomorphism $h:E_1\rightarrow E_2$ such that $h_*=(f_2)_*^{-1}\circ(f_1)_*$. By construction, $(f_2)_*\circ h_*=(f_1)_*$ and applying again \cite[Proposition XIII p.78]{greub} we get $f_2\circ h=f_1$. $\blacksquare$ Let $(E,f)$ be an $(n,k)$-rank-varying distribution, $\Delta=f_*(\Gamma(E))=\{f\circ\sigma\mid\sigma\in\Gamma(E)\}$ be its associated submodule and denote by $\Delta(q)$ the linear subspace $\{V(q)\mid V\in \Delta\}=f(E_q)\subseteq T_q M$. Let $\mathrm{Lie}(\Delta)$ be the smallest Lie subalgebra of $\mathrm{Vec}(M)$ containing $\Delta$ and, for every $q\in M$, let $\mathrm{Lie}_q(\Delta)$ be the linear subspace of $T_qM$ whose elements are evaluations at $q$ of elements belonging to $\mathrm{Lie}(\Delta)$. We say that $(E,f)$ satisfies the {\it Lie bracket generating condition} if $\mathrm{Lie}_q(\Delta)=T_q M$ for every $q\in M$. A property $(P)$ defined for $(n,k)$-rank-varying distributions is said to be {\it generic} if for every vector bundle $E$ of rank $k$ over $M$, $(P)$ holds for every $f$ in an open and dense subset of the set of morphisms of vector bundles from $E$ to $TM$, endowed with the ${\mathcal C}^\infty$-Whitney topology. E.g., the Lie bracket generating condition is a generic property among $(n,k)$-rank-varying distributions with $k>1$. We say that an $(n,k)$-rank-varying distribution $(E,f)$ is {\it orientable} if $E$ is orientable as a vector bundle. A rank-varying sub-Riemannian structure is defined by requiring that $E$ is an Euclidean bundle. 
\begin{definition}\label{gensrs} An {\it $(n,k)$-rank-varying sub-Riemannian structure} is a triple ${\cal S}=(E,f,\langle\cdot,\cdot\rangle)$ where $(E,f)$ is a Lie bracket generating $(n,k)$-rank-varying distribution on a manifold $M$ and $\langle\cdot,\cdot\rangle_q$ is a scalar product on $E_q$ smoothly depending on $q$. \end{definition} Several classical structures can be seen as particular cases of rank-varying sub-Riemannian structures, e.g., Riemannian structures and classical (constant-rank) sub-Riemannian structures (see \cite{book2,montgomery}). An $(n,n)$-rank-varying sub-Riemannian structure is called an {\it $n$-dimensional almost-Riemannian structure}. In this paper, we focus on $2$-dimensional almost-Riemannian structures ($2$-ARSs for short). Let ${\cal S}=(E,f,\langle\cdot,\cdot\rangle)$ be an $(n,k)$-rank-varying sub-Riemannian structure. The Euclidean structure on $E$ and the injectivity of the morphism $f_*$ allow one to define a symmetric positive definite ${\mathcal C}^\infty(M)$-bilinear form on the submodule $\Delta$ by \begin{eqnarray*} G:\Delta\times\Delta&\to&{\mathcal C}^\infty(M)\\ (V,W)&\mapsto&\langle\sigma_V,\sigma_W\rangle, \end{eqnarray*} where $\sigma_V,\sigma_W$ are the unique sections of $E$ such that $f\circ\sigma_V=V, f\circ\sigma_W=W$. If $\sigma_1,\dots,\sigma_k$ is an orthonormal frame for $\langle\cdot,\cdot\rangle$ on an open subset $\Omega$ of $M$, an {\it orthonormal frame for $G$} on $\Omega$ is given by $f\circ\sigma_1,\dots,f\circ\sigma_k$. Orthonormal frames are systems of local generators of $\Delta$. For every $q\in M$ and every $v\in\Delta(q)$ define \begin{eqnarray} {{\bf G}}_q(v)=\inf\{\langle u, u\rangle_q \mid u\in E_q,f(u)=v\}. 
\nonumber\end{eqnarray} In this paper, a curve $\gamma:[0,T]\to M$ absolutely continuous with respect to the differential structure is said to be {\it admissible} for ${\cal S}$ if there exists a measurable essentially bounded function \begin{eqnarray} [0,T]\ni t\mapsto u(t)\in E_{\gamma(t)} \nonumber\end{eqnarray} called {\it control function}, such that $\dot \gamma(t)=f(u(t))$ for almost every $t\in[0,T]$. Given an admissible curve $\gamma:[0,T]\to M$, the {\it length of $\gamma$} is \begin{eqnarray} \ell(\gamma)= \int_{0}^{T} \sqrt{ {\bf G}_{\gamma(t)}(\dot \gamma(t))}~dt.\nonumber\end{eqnarray} The {\it Carnot--Caratheodory distance} (or sub-Riemannian distance) on $M$ associated with ${\cal S}$ is defined as \begin{eqnarray}\nonumber d(q_0,q_1)=\inf \{\ell(\gamma)\mid \gamma(0)=q_0,\gamma(T)=q_1, \gamma\ \mathrm{admissible}\}. \end{eqnarray} The finiteness and the continuity of $d(\cdot,\cdot)$ with respect to the topology of $M$ are guaranteed by the Lie bracket generating assumption on the rank-varying sub-Riemannian structure (see \cite{book2}). The Carnot--Caratheodory distance associated with ${\cal S}$ endows $M$ with the structure of a metric space compatible with the topology of $M$ as a differential manifold. We now give a characterization of admissible curves. \begin{proposition}\label{lipadm} Let $(E,f,\langle\cdot,\cdot\rangle)$ be a rank-varying sub-Riemannian structure on a manifold $M$. Let $\gamma:[0,T]\rightarrow M$ be an absolutely continuous curve. Then $\gamma$ is admissible if and only if it is Lipschitz continuous with respect to the sub-Riemannian distance. \end{proposition} {\bf Proof.} First we prove that if the curve is admissible then it is Lipschitz with respect to $d$ ({\it $d$-Lipschitz} for short). This is a direct consequence of the definition of the sub-Riemannian distance. 
Indeed, let \begin{eqnarray} [0,T]\ni t\mapsto u(t)\in E_{\gamma(t)} \nonumber\end{eqnarray} be a control function for $\gamma$ and let $L>0$ be the essential supremum of $\sqrt{\langle u,u\rangle}$. Then, for every subinterval $[t_0,t_1]\subset[0,T]$ one has $$ d(\gamma(t_0),\gamma(t_1)) \leq \int_{t_0}^{t_1} \sqrt{ {\bf G}_{\gamma(t)}(\dot \gamma(t))} dt \leq \int_{t_0}^{t_1} \sqrt{\langle u(t),u(t)\rangle} dt \leq L(t_1-t_0). $$ Hence $\gamma$ is $d$-Lipschitz. Vice versa, assume that $\gamma$ is $d$-Lipschitz with Lipschitz constant $L$. Since $\gamma$ is absolutely continuous, it is differentiable almost everywhere on $[0,T]$. Thanks to the Ball-Box Theorem (see \cite{bellaiche}), for every $t\in[0,T]$ such that the tangent vector $\dot \gamma(t)$ exists, $\dot\gamma(t)$ belongs to the distribution $\Delta(\gamma(t))$ (if not, the curve would fail to be $d$-Lipschitz). Hence for almost every $t\in[0,T]$ there exists $u_t\in E_{\gamma(t)}$ such that $\dot\gamma(t)=f(u_t)$. Moreover, since the curve is $d$-Lipschitz, one has that ${\bf G}_{\gamma(t)}(\dot\gamma(t))\leq L^2$ for almost every $t\in [0,T]$. This can be seen by computing lengths in privileged coordinates (see \cite{bellaiche} for the definition of this system of coordinates). Hence, we can assume that $\langle u_t,u_t\rangle\leq L^2$ almost everywhere. Finally, we apply the Filippov Theorem (see \cite[Theorem 3.1.1 p.36]{bressan}) to the differential inclusion $$ \dot\gamma(t)\in\{f(u)\mid \pi_E(u)=\gamma(t)\textrm{ and } \langle u,u\rangle \leq L^2\}, $$ which assures the existence of a measurable choice of the control function corresponding to $\gamma$. Thus $\gamma$ is admissible. 
$\blacksquare$ Given a 2-ARS ${\mathcal S}$, we define its {\it singular locus} as the set $$\mathcal{Z}=\{q\in M\mid \Delta(q) \subsetneq T_qM\}.$$ Since $\Delta$ is bracket generating, the subspace $\Delta(q)$ is nontrivial for every $q$ and $\mathcal{Z}$ coincides with the set of points $q$ where $\Delta(q)$ is one-dimensional. We say that $\cal S$ {\it satisfies condition} {\bf (H0)}\ if the following properties hold: {\bf (i)} $\mathcal{Z}$ is an embedded one-dimensional submanifold of $M$; {\bf (ii)} the points $q\in M$ at which $\Delta_2(q)$ is one-dimensional are isolated; {\bf (iii)} $\Delta_3(q)=T_qM$ for every $q\in M$, where $\Delta_1 = \Delta$ and $\Delta_{k+1}=\Delta_k+[\Delta,\Delta_k]$. It is not difficult to prove that property {\bf (H0)}\ is generic among 2-ARSs (see \cite{ABS}). This hypothesis was essential to show Gauss--Bonnet type results for ARSs in \cite{euler,ABS,high-order}. The following theorem recalls the local normal forms for ARSs satisfying hypothesis {\bf (H0)}. \begin{theorem}[\cite{ABS}] \label{t-normal} Given a 2-ARS ${\mathcal S}$ satisfying {\bf (H0)}, for every point $q\in M$ there exist a neighborhood $U$ of $q$, an orthonormal frame $(X,Y)$ for $G$ on $U$ and smooth coordinates defined on $U$ such that $q=(0,0)$ and $(X,Y)$ has one of the forms \begin{eqnarray} \mathrm{(F1)}&& ~~X(x,y)=(1,0),~~~Y(x,y)=(0,e^{\phi(x,y)}), \nonumber \\ \mathrm{(F2)}&& ~~X(x,y)=(1,0),~~~Y(x,y)=(0,x e^{\phi(x,y)}),\nonumber \\ \mathrm{(F3)}&& ~~X(x,y)=(1,0),~~~Y(x,y)=(0,(y -x^2 \psi(x))e^{\xi(x,y)}), \nonumber \end{eqnarray} where $\phi$, $\xi$ and $\psi$ are smooth real-valued functions such that $\phi(0,y)=0$ and $\psi(0)>0$. \end{theorem} Let ${\mathcal S}$ be a 2-ARS satisfying {\bf (H0)}. A point $q\in M$ is said to be an {\it ordinary point} if $\Delta(q)=T_q M$, hence, if ${\mathcal S}$ is locally described by (F1).
We call $q$ a {\it Grushin point} if $\Delta(q)$ is one-dimensional and $\Delta_2(q)=T_q M$, i.e., if the local description (F2) applies. Finally, if $\Delta(q)=\Delta_2(q)$ has dimension one and $\Delta_3(q)=T_q M$ then we say that $q$ is a {\it tangency point} and ${\mathcal S}$ can be described near $q$ by the normal form (F3). We define $${\cal T}=\{q\in \mathcal{Z}\mid q \mbox{ tangency point of } {\cal S}\}.$$ Assume ${\cal S}$ and $M$ to be oriented. Thanks to the hypothesis {\bf (H0)}, $M\setminus \mathcal{Z}$ splits into two open sets $M^+$ and $M^-$ such that $f:E|_{M^+}\rightarrow TM^+$ is an orientation-preserving isomorphism and $f:E|_{M^-}\rightarrow TM^-$ is an orientation-reversing isomorphism. \section{Number of revolutions and graph of a $2$-ARS}\label{deftau} From now on $M$ is a compact oriented surface and ${\cal S}=(E,f,\langle\cdot,\cdot\rangle)$ is an oriented ARS on $M$ satisfying {\bf (H0)}. Fix on $\mathcal{Z}$ the orientation induced by $M^+$ and consider a connected component $W$ of $\mathcal{Z}$. Let $V\in\Gamma(TW)$ be a never-vanishing vector field whose duality product with the fixed orientation on $W$ is positive. Since $M$ is oriented, $TM|_{W}$ is isomorphic to the trivial bundle of rank $2$ over $W$. We choose an isomorphism $t:TM|_W\rightarrow W\times\mathbb{R}^2$ such that $t$ is orientation-preserving and for every $q\in W$, $t\circ V(q)=(q,(1,0))$. This trivialization induces an orientation-preserving isomorphism between the projectivization of $TM|_W$ and $W\times S^1$. For the sake of readability, in what follows we omit the isomorphism $t$ and identify $TM|_W$ (respectively, its projectivization) with $W\times \mathbb{R}^2$ (respectively, $W\times S^1$). 
Since $\Delta|_W$ is a subbundle of rank one of $TM|_W$, $\Delta|_W$ can be seen as a section of the projectivization of $TM|_W$, i.e., a smooth map (still denoted by $\Delta$) $\Delta:W\rightarrow W\times S^1$ such that $\pi_1\circ\Delta=\mathrm{Id}_W$, where $\pi_1:W\times S^1\rightarrow W$ denotes the projection on the first component. We define $\tau(\Delta,W)$, the {\it number of revolutions\ } of $\Delta$ along $W$, to be the degree of the map $\pi_2\circ\Delta:W\rightarrow S^1$, where $\pi_2:W\times S^1\rightarrow S^1$ is the projection on the second component. Notice that $\tau(\Delta,W)$ changes sign if we reverse the orientation of $W$. Let us show how to compute $\tau(\Delta,W)$. By construction, $\pi_2\circ V:W\rightarrow S^1$ is constant. Let $\pi_2\circ V(q)\equiv \theta_0$. Since $\Delta_3(q)=T_qM$ for every $q\in M$, $\theta_0$ is a regular value of $\pi_2\circ \Delta$. By definition, \begin{equation}\label{mario} \tau(\Delta,W)=\sum_{q\mid\pi_2\circ\Delta(q)=\theta_0}\mathrm{sign}( d_q(\pi_2\circ\Delta))=\sum_{q\in W\cap {\cal T}}\mathrm{sign}(d_q(\pi_2\circ\Delta)), \end{equation} where $d_q$ denotes the differential at $q$ of a smooth map and $\mathrm{sign}( d_q(\pi_2\circ\Delta))=1,$ resp. $-1$, if $d_q(\pi_2\circ\Delta)$ preserves, resp. reverses, the orientation. The equality in \r{mario} follows from the fact that a point $q$ satisfies $\pi_2\circ\Delta(q)=\theta_0$ if and only if $\Delta(q)$ is tangent to $W$ at $q$, i.e., $q\in {\cal T}$. Define the {\it contribution at a tangency point $q$} as $\tau_q=\mathrm{sign}(d_q(\pi_2\circ\Delta))$ (see Figure~\ref{tauu}). Moreover, we define $$ \tau({\cal S})=\sum_{W\in{\mathfrak C}(\mathcal{Z})}\tau(\Delta,W), $$ where ${\mathfrak C}(\mathcal{Z})=\{W\mid W \textrm{ connected component of } \mathcal{Z}\}$. Clearly, $\tau({\cal S})=\sum_{q\in{\cal T}}\tau_q$. 
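As an immediate sanity check (an added remark, not part of the original argument), the degree formula above shows that connected components of the singular locus without tangency points contribute nothing:

```latex
% Added remark: if a connected component W of \mathcal{Z} contains no
% tangency points, the regular value \theta_0 has empty preimage under
% \pi_2\circ\Delta, so the sum in the degree formula is empty:
\begin{eqnarray}
W\cap{\cal T}=\emptyset \quad\Longrightarrow\quad
\tau(\Delta,W)=\sum_{q\in W\cap {\cal T}}\mathrm{sign}(d_q(\pi_2\circ\Delta))=0.
\nonumber\end{eqnarray}
% In particular, for a structure with {\cal T}=\emptyset every component of
% \mathcal{Z} has vanishing number of revolutions, and \tau({\cal S})=0.
```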
\begin{figure} \caption{Tangency points with opposite contributions} \label{tauu} \end{figure} Let us associate with the 2-ARS ${\cal S}$ the graph ${\cal G}=({\cal V}({\cal G}),{\cal E}({\cal G}))$ where \begin{itemize} \item each vertex in ${\cal V}({\cal G})$ represents a connected component of $M\setminus \mathcal{Z}$; \item each edge in ${\cal E}({\cal G})$ represents a connected component of $\mathcal{Z}$; \item the edge corresponding to a connected component $W$ connects the two vertices corresponding to the connected components $M_1$ and $M_2$ of $M\setminus \mathcal{Z}$ such that $W\subset \partial{M}_1 \cap \partial{M}_2$. \end{itemize} Thanks to the hypothesis {\bf (H0)}, every connected component of $\mathcal{Z}$ joins a connected component of $M^+$ and one of $M^-$. Thus the graph ${\cal G}$ turns out to be {\it bipartite}, i.e., there exists a partition of the set of vertices into two subsets $V^+$ and $V^-$ such that each edge of ${\cal G}$ joins a vertex of $V^+$ to a vertex of $V^-$. Conversely, it is not difficult to see that every finite bipartite graph can be obtained from an oriented 2-ARS (satisfying {\bf (H0)}) on a compact oriented surface. Using the bipartite nature of ${\cal G}$ we introduce an orientation on ${\cal G}$ given by two functions $\alpha,\omega:{\cal E}({\cal G})\to {\cal V}({\cal G})$ defined as follows. If $e$ corresponds to $W$ then $\alpha(e)=v$ and $\omega(e)=w$, where $v$ and $w$ correspond respectively to the connected components $M_v\subset M^-$ and $M_w\subset M^+$ such that $W\subseteq \partial M_v\cap \partial M_w$. We label each vertex $v$ corresponding to a connected component $\hat M$ of $M\setminus \mathcal{Z}$ with a pair $(\mathrm{sign}(v),\chi(v))$ where sign$(v)=\pm1$ if $\hat M\subset M^\pm$ and $\chi(v)$ is the Euler characteristic of $\hat M$.
We define for every $e\in {\cal E}({\cal G})$ the number $\tau(e)=\sum_{q\in W\cap {\cal T}}\tau_q$, where $W$ is the connected component of $\mathcal{Z}$ corresponding to $e$. Finally, we define a label for each edge $e$ corresponding to a connected component $W$ of $ \mathcal{Z}$ containing tangency points. Let $s\geq1$ be the cardinality of the set $W\cap {\cal T}$. The label of $e$ is an equivalence class of $s$-uples with entries in $\{\pm 1\}$ defined as follows. Fix on $W$ the orientation induced by $M^+$ and choose a point $q\in W\cap{\cal T}$. Let $q_1=q$ and for every $i=1,\dots, s-1$ let $q_{i+1}$ be the first element in $W\cap {\cal T}$ that we meet after $q_i$ walking along $W$ in the fixed orientation. We associate with $e$ the equivalence class of $(\tau_{q_1},\tau_{q_2},\dots,\tau_{q_s})$ in the set of $s$-uples with entries in $\{\pm 1\}$ modulo cyclic permutations. In figure \ref{figsez3} an ARS on a surface of genus 4 and its labelled graph (figure \ref{figsez3}(a)) are portrayed. According to our definition of labels on edges, figures \ref{figsez3}(a) and \ref{figsez3}(b) represent equal graphs associated with the same ARS. On the other hand, the graph in figure \ref{figsez3}(c) is not the graph associated with the ARS of figure \ref{figsez3}. In figure \ref{come} two steps in the construction of the labelled graph associated with the ARS in figure \ref{figintro} are shown. \begin{remark} Once an orientation on $E$ is fixed, the labelled graph associated with ${\cal S}$ is unique. \end{remark} \begin{figure} \caption{Example of ARS on a surface of genus 4. Figures (a) and (b) illustrate equal labelled graphs associated with the ARS. Figure (c) gives an example of a labelled graph different from the graph in figure (a)} \label{figsez3} \end{figure} \begin{figure} \caption{Algorithm to build the graph} \label{come} \end{figure} We define an equivalence relation on the set of graphs associated with oriented ARSs on $M$ satisfying hypothesis {\bf (H0)}.
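Before passing to the equivalence relation, here is a minimal worked case (a standard example added for illustration, distinct from the figures above): a Grushin-type structure on the sphere $S^2$ whose singular locus is the equator and which has no tangency points.

```latex
% Added example: Grushin-like 2-ARS on S^2 with \mathcal{Z} = equator and
% W \cap {\cal T} = \emptyset. The open hemispheres are M^+ and M^-, each
% with Euler characteristic 1, so the associated graph has a single edge:
\begin{eqnarray}
{\cal V}({\cal G})=\{v^+,v^-\},\qquad {\cal E}({\cal G})=\{e\},\qquad
\tau(e)=\sum_{q\in W\cap{\cal T}}\tau_q=0,\nonumber
\end{eqnarray}
% with vertex labels (\mathrm{sign}(v^\pm),\chi(v^\pm))=(\pm 1,1),
% \alpha(e)=v^-, \omega(e)=v^+, and no tangency label on e since s=0.
```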
\begin{definition}\label{samegraph} Let ${\cal S}_i=(E_i,f_i, \langle\cdot,\cdot\rangle_i)$, $i=1,2$, be two oriented almost-Riemannian structures on a compact oriented surface $M$ satisfying hypothesis {\bf (H0)}. Let ${\cal G}_i$ be the labelled graph associated with ${\cal S}_i$ and denote by $\alpha_i,\omega_i:{\cal E}({\cal G}_i)\rightarrow{\cal V}({\cal G}_i)$ the functions defined as above. We say that ${\cal S}_1$ and ${\cal S}_2$ have \emph{{equivalent}\ graphs} if, after possibly changing the orientation on $E_2$, they have the same labelled graph. In other words, after possibly changing the orientation on $E_2$ and still denoting by ${\cal G}_2$ the associated graph, there exist bijections $u:{\cal V}({\cal G}_1)\rightarrow {\cal V}({\cal G}_2)$, $k:{\cal E}({\cal G}_1)\rightarrow {\cal E}({\cal G}_2)$ such that the diagram \begin{equation}\label{d1} \xymatrix{ {\cal V}({\cal G}_1) \ar[r]_u & {\cal V}({\cal G}_2) \\ {\cal E}({\cal G}_1) \ar[u]^{\alpha_1}\ar[r]^k &{\cal E}({\cal G}_2)\ar[u]_{\alpha_2} } \end{equation} commutes and $u$ and $k$ preserve labels. \end{definition} Figure \ref{grafoequivalente} illustrates the graph associated with the ARS obtained by reversing the orientation of the ARS in figure \ref{figintro}. \begin{figure} \caption{Equivalent graph to the one in figure \ref{figintro}} \label{grafoequivalente} \end{figure} \section{Lipschitz equivalence}\label{proof} This section is devoted to the proof of Theorem~\ref{lip-eq}, which is a generalization to ARSs of the well-known fact that all Riemannian structures on a compact oriented surface are Lipschitz equivalent. Let $M_1, M_2$ be two manifolds. For $i=1,2$, let ${\cal S}_i=(E_i,f_i,\langle\cdot,\cdot\rangle_i)$ be a sub-Riemannian structure on $M_i$. Denote by $d_i$ the Carnot--Caratheodory distance on $M_i$ associated with ${\cal S}_i$.
\begin{definition}\label{deflipeq} We say that a diffeomorphism $\varphi:M_1\rightarrow M_2$ is a \emph{Lipschitz equivalence} if it is bi-Lipschitz as a map from $(M_1,d_1)$ to $(M_2,d_2)$. \end{definition} Notice that in Theorem~\ref{lip-eq} we can assume $M_1=M_2=M$. Indeed, if two ARSs are Lipschitz equivalent, then by definition there exists a diffeomorphism $\varphi:M_1\rightarrow M_2$. On the other hand, if the associated graphs are equivalent, by \cite[Theorem 1]{euler} it follows that $E_1$ and $E_2$ are isomorphic vector bundles. Hence the underlying surfaces are diffeomorphic. \subsection{Necessity}\label{necessity} Denote by $M_{i}^+$, respectively $M_i^-$, the set where $f_i$ is an orientation-preserving, respectively orienta\-tion-reversing, isomorphism of vector bundles, and by $\Delta^i$ the submodule $\{f_i\circ\sigma\mid\sigma\in\Gamma(E_i)\}$. Let $\mathcal{Z}_i$ be the singular locus of ${\cal S}_i$ and ${\cal T}_i$ the set of tangency points of ${\cal S}_i$. Finally, for every $q\in {\cal T}_i$, denote by $\tau^i_q$ the contribution at the tangency point defined in Section~\ref{deftau} with $\Delta=\Delta^i$. In this section we assume $\varphi:(M,d_1)\rightarrow (M,d_2)$ to be a Lipschitz equivalence and we show that ${\cal S}_1$ and ${\cal S}_2$ have {equivalent}\ graphs. As a consequence of the Ball-Box Theorem (see, for instance, \cite{bellaiche}) one can prove the following result. \begin{lemma}\label{samepoint} If $p$ is an ordinary, Grushin or tangency point for ${\cal S}_1$, then $\varphi(p)$ is an ordinary, Grushin or tangency point for ${\cal S}_2$, respectively. \end{lemma} Thanks to Lemma~\ref{samepoint}, for every connected component $\hat M$ of $M\setminus \mathcal{Z}_1$, $\varphi(\hat M)$ is a connected component of $M\setminus \mathcal{Z}_2$ and for every connected component $W$ of $\mathcal{Z}_1\cap\partial\hat M$, $\varphi(W)$ is a connected component of $\mathcal{Z}_2\cap \partial\varphi(\hat M)$. 
Moreover, since $\varphi|_{\overline{\hat M}}$ is a diffeomorphism, it follows that $\chi(\hat M)=\chi(\varphi(\hat M))$. After possibly changing the orientation on $E_2$, we may assume $\varphi(M_1^\pm)=M_{2}^\pm$. We will prove that, in this case, the labelled graphs are equal. Indeed, if $v\in {\cal V}({\cal G}_1)$ corresponds to $\hat M$, define $u(v)\in {\cal V}({\cal G}_2)$ as the vertex corresponding to $\varphi(\hat M)$. If $e\in {\cal E}({\cal G}_1)$ corresponds to $W$ define $k(e)\in {\cal E}({\cal G}_2)$ as the edge corresponding to $\varphi(W)$. Then $\chi(u(v))=\chi(v)$, $\mathrm{sign}(u(v))=\mathrm{sign}(v)$, and, by construction, the diagram \r{d1} commutes. Let us compute the contribution at a tangency point $q$ of an ARS $(E,f,\langle\cdot,\cdot\rangle)$ using the corresponding normal form given in Theorem \ref{t-normal}. \begin{lemma}\label{calcotau} Let $\gamma:[0,T]\rightarrow M$ be a smooth curve such that $\gamma(0)=q\in {\cal T}$ and $\dot\gamma(0)\in\Delta(q)\setminus\{ 0\}$. Assume moreover that $\gamma$ is $d$-Lipschitz, where $d$ is the almost-Riemannian distance, and that $\gamma((0,T))$ is contained in one of the two connected components of $M\setminus\mathcal{Z}$. Let $(x,y)$ be a coordinate system centered at $q$ such that the form (F3) of Theorem \ref{t-normal} applies. Then $\gamma((0,T))\subset \{(x,y)\mid y-x^2\psi(x)<0\}$. Moreover, if $\{(x,y)\mid y-x^2\psi(x)<0\}\subseteq M^+$, resp. $M^-$, then $\tau_q=1$, resp. $-1$. \end{lemma} {\bf Proof.} Since $\gamma(0)=(0,0)$ and $\dot\gamma(0)\in\mathrm{span}\{(1,0)\}\setminus\{0\}$, there exist two smooth functions $\overline{ x}(t),\overline{y}(t)$ such that $\gamma(t)=(t\overline{x}(t),t^2\overline{y}(t))$ and $\overline{x}(0)\neq 0$. Assume by contradiction that $\gamma((0,T))\subset \{(x,y)\mid y-x^2\psi(x)>0\}$, i.e., for $t\in(0,T)$, $\overline{y}(t)>\psi(t \overline{x}(t))\overline{x}(t)^2$. 
Since $\psi(0)>0$, for $t$ sufficiently small $\psi(t\overline{x}(t))>0$ and $\overline{y}(t)^{1/3}>\psi(t\overline{x}(t))^{1/3}|\overline{x}(t)|^{2/3}$. By the Ball-Box Theorem (see \cite{bellaiche}) there exist positive constants $c_1,c_2$ such that, for $t$ sufficiently small, we have $$ c_1(|t \overline{x}(t)|+|t^2\overline{y}(t)|^{1/3})\leq d(\gamma(t),(0,0))\leq c_2(|t \overline{x}(t)|+|t^2\overline{y}(t)|^{1/3}). $$ On the other hand, for $t$ sufficiently small, $$ |t \overline{x}(t)|+|t^2\overline{y}(t)|^{1/3}>t^{2/3}|\overline{x}(t)|^{2/3}\psi(t \overline{x}(t))^{1/3}. $$ Hence, for $t$ sufficiently small, $d(\gamma(t),(0,0))>c_3t^{2/3}$, with $c_3>0$. This implies that $\gamma$ is not Lipschitz with respect to the almost-Riemannian distance. Finally, a direct computation proves the assertion concerning $\tau_q$; see Figure \ref{tauu}. $\blacksquare$ The next lemma, together with Lemma~\ref{samepoint}, guarantees that the two bijections $u$ and $k$ preserve labels. \begin{lemma}\label{prod} Let $q\in{\cal T}_1$. Then $\tau^{1}_q=\tau^{2}_{\varphi(q)}$. \end{lemma} {\bf Proof.} Apply Theorem~\ref{t-normal} to ${\cal S}_1$ and find a neighborhood $U$ of $q$ and a coordinate system $(x,y)$ on $U$ such that $q=(0,0)$ and $\mathcal{Z}_1\cap U=\{(x,y)\mid y=x^2\psi(x)\}$. Let $\sigma,\rho\in\Gamma(E|_U)$ be the local orthonormal frame such that $f_1\circ\sigma=X$ and $f_1\circ\rho=Y$. Assume that $U_1^+=M_1^+\cap U=\{(x,y)\mid y-x^2\psi(x) >0\}$. Fix $T>0$ and consider the smooth curve $\gamma:[0,T]\rightarrow U$ defined by $\gamma(t)=(t,0)$. Then $\gamma$ is admissible for ${\cal S}_1$ with control function $u(t)=\sigma(t,0)$. By definition, for $T$ sufficiently small $\gamma((0, T))$ lies in a single connected component of $U\setminus \mathcal{Z}_1$. Moreover, by Proposition~\ref{lipadm}, $\gamma$ is a $d_1$-Lipschitz map with Lipschitz constant less than or equal to $1$. Hence, according to Lemma~\ref{calcotau}, $\tau^1_q=-1$.
Consider the curve $\tilde\gamma=\varphi\circ\gamma:[0,T]\rightarrow \varphi(U)$. Since $\varphi$ is Lipschitz, $\tilde\gamma$ is $d_2$-Lipschitz as a map from the interval $[0,T]$ to the metric space $(\varphi(U),d_2)$. Moreover, $\tilde\gamma$ is smooth and $\dot{\tilde{\gamma}}(0)\in\Delta^2(\varphi(q))\setminus\{ 0\}$, $\varphi$ being a diffeomorphism mapping $\mathcal{Z}_1$ to $\mathcal{Z}_2$. Finally, since $\varphi(M_1^-)=M_2^-$, we have $\tilde{\gamma}((0,T))\subset U_2^-=\varphi(U)\cap M_2^-$. Thus, by Lemma~\ref{calcotau}, $\tau^2_{\varphi(q)}=-1$. Analogously, one can prove the statement in the case $U_1^+=\{(x,y)\mid y-x^2\psi(x) <0\}$ (for which $\tau_q^1=\tau_q^2=1$). $\blacksquare$ Lemma~\ref{prod} implies that ${\cal S}_{1}$ and ${\cal S}_2$ have equal labelled graphs. This concludes the proof that having {equivalent}\ graphs is a necessary condition for two ARSs to be Lipschitz equivalent. \subsection{Sufficiency}\label{sufficiency} In this section we prove that if ${\cal S}_{1}$ and ${\cal S}_2$ have {equivalent}\ graphs then there exists a Lipschitz equivalence between $(M,d_{1})$ and $(M,d_2)$. After possibly changing the orientation on $E_2$, we assume the associated labelled graphs to be equal, i.e., there exist two bijections $u$, $k$ as in Definition~\ref{samegraph} such that diagram \r{d1} commutes. The proof is in five steps. The first step consists in proving that we may assume $E_1=E_2$. The second step shows that we can restrict to the case $\mathcal{Z}_1=\mathcal{Z}_2$ and ${\cal T}_1={\cal T}_2$. In the third step we prove that we can assume that $\Delta^1(q)=\Delta^2(q)$ at each point $q\in M$. In the fourth step, we demonstrate that the submodules $ \Delta^1$ and $\Delta^2$ coincide. In the fifth and final step we remark that we can assume $f_1=f_2$ and conclude. The Lipschitz equivalence between the two structures will be the composition of the diffeomorphisms singled out in steps 1, 2, 3, 5.
By construction, the push-forward of ${\cal S}_1$ along a diffeomorphism $\psi$ of $M$, denoted by $\psi_*{\cal S}_1$, is Lipschitz equivalent to ${\cal S}_1$ and has the same labelled graph as ${\cal S}_{1}$. Notice, moreover, that the singular locus of $\psi_*{\cal S}_1$ coincides with $\psi(\mathcal{Z}_1)$ and the set of tangency points coincides with $\psi({\cal T}_1)$. {\bf Step 1.} Having the same labelled graph implies $$ \sum_{v\in {\cal V}({\cal G}_{1})}\mathrm{sign}(v)\chi(v)+\sum_{e\in {\cal E}({\cal G}_{1})}\tau(e)=\sum_{v\in {\cal V}({\cal G}_2)}\mathrm{sign}(v)\chi(v)+\sum_{e\in {\cal E}({\cal G}_2)}\tau(e). $$ By \cite[Theorem 1]{euler}, this is equivalent to saying that the Euler numbers of $E_{1}$ and $E_2$ are equal. Since $E_{1}$ and $E_2$ are oriented vector bundles of rank $2$ over a compact oriented surface with the same Euler number, they are isomorphic. Hence, we assume $E_1=E_2= E$. {\bf Step 2.} Using the bijections $u,k$ and the classification of compact oriented surfaces with boundary (see, for instance, \cite{hirsch}), one can prove the following lemma. \begin{lemma} There exists a diffeomorphism $\tilde{\varphi}:M\rightarrow M$ such that $\tilde\varphi(M_{1}^+)=M_2^+$, $\tilde\varphi(M_{1}^-)=M_2^-$, $\tilde\varphi|_{\mathcal{Z}_{1}}:\mathcal{Z}_{1}\rightarrow \mathcal{Z}_2$ is a diffeomorphism that maps ${\cal T}_{1}$ into ${\cal T}_2$, and, for every $q\in{\cal T}_{1}$, $\tau^2_{\tilde\varphi(q)}=\tau^1_q$. Moreover, if $v\in{\cal V}({\cal G}_1)$ corresponds to $\hat M\subset M\setminus \mathcal{Z}_1$, then $\tilde\varphi(\hat M)$ is the connected component of $M\setminus \mathcal{Z}_2$ corresponding to $u(v)\in{\cal V}({\cal G}_2)$; if $e\in{\cal E}({\cal G}_1)$ corresponds to $W\subset \mathcal{Z}_1$, then $\tilde\varphi(W)$ is the connected component of $\mathcal{Z}_2$ corresponding to $k(e)\in{\cal E}({\cal G}_2)$.
\end{lemma} The lemma implies that the singular locus of $\tilde{\varphi}_*{\cal S}_1$ coincides with $\mathcal{Z}_2$ and the set of tangency points coincides with ${\cal T}_2$. For the sake of readability, in the following we rename $\tilde{\varphi}_*{\cal S}_1$ simply by ${\cal S}_1$ and we will denote by $\mathcal{Z}$ the singular locus of the two structures, by ${\cal T}$ the set of their tangency points, and by $M^\pm$ the set $M_i^\pm$. {\bf Step 3.} Note that the subspaces $ \Delta^{1}(q)$ and $\Delta^2(q)$ coincide at every ordinary and tangency point $q$. We are going to show that there exists a diffeomorphism of $M$ that carries $\Delta^1(q)$ into $\Delta^2(q)$ at every point $q$ of the manifold. \begin{lemma}\label{matrice} Let $W$ be a connected component of $\mathcal{Z}$. There exist a tubular neighborhood ${\bf W}$ of $W$ and a diffeomorphism $\varphi_W:{\bf W}\rightarrow \varphi_W({\bf W})$ such that $d_q\varphi_W(\Delta^1(q))=\Delta^2(\varphi_W(q))$ for every $q\in{\bf W}$, $\varphi_W|_W=\mathrm{Id}|_W$ and $\varphi_W({\bf W}\cap M^\pm)\subset M^\pm$. \end{lemma} {\bf Proof.} The idea of the proof is first to consider a smooth section $A$ of Hom$(TM|_W;TM|_W)$ such that for every $q\in W$, $A_q:T_qM\rightarrow T_qM$ is an isomorphism and $A_q(\Delta^1(q))=\Delta^2(q)$. Secondly, we build a diffeomorphism $\varphi_W$ of a tubular neighborhood of $W$ such that $d_q\varphi_W= A_q$ for every point $q\in W$. Choose on a tubular neighborhood ${\bf W}$ of $W$ a parameterization $(\theta,t)$ such that $W=\{(\theta,t)\mid t=0\}$, $M^+\cap {\bf W}=\{(\theta,t)\mid t>0\}$ and $\frac{\partial}{\partial \theta}\left|_{(\theta,0)}\right.$ induces on $W$ the same orientation as $M^+$. We are going to show the existence of two smooth functions $a,b:W\rightarrow \mathbb{R}$ such that $b$ is positive and for every $(\theta,0)\in W$, \begin{equation}\label{endom} \left(\ba{cc}1 & a(\theta)\\0&b(\theta)\end{array}\right)(\Delta^1(\theta,0))=\Delta^2(\theta,0).
\end{equation} Then, for every $q=(\theta,0)\in W$ defining $A_q:T_qM\rightarrow T_qM$ by \begin{equation}\label{defisom} A_{(\theta,0)}=\left(\ba{cc}1 & a(\theta)\\0&b(\theta)\end{array}\right), \end{equation} we will get an isomorphism smoothly depending on the point $q$ and carrying $\Delta^{1}(q)$ into $\Delta^{2}(q)$. Let $W\cap{\cal T}=\{(\theta_1,0),\dots,(\theta_s,0)\}$, with $s\geq 0$. Using the chosen parametrization, there exist two smooth functions $\beta_1,\beta_2:W\setminus\{(\theta_1,0),\dots,(\theta_s,0)\}\rightarrow \mathbb{R}$ such that $\Delta^i(\theta,0)=\mathrm{span}\{(\beta_i(\theta),1)\}$. For every $j=1,\dots,s$, there exists a smooth function $g^i_j$ defined on a neighborhood of $(\theta_j,0)$ in $W$ such that $g^i_j(\theta_j)\neq 0$, $\tau^i_{(\theta_j,0)}=\mathrm{sign}(g^i_j(\theta_j))$ and $$ \beta_i(\theta)=\frac{1}{(\theta-\theta_j)g^i_j(\theta)}, \quad\theta\sim\theta_j. $$ Since the graphs associated with ${\cal S}_1, {\cal S}_2$ are equivalent, for every $j=1,\dots,s$ we have $\tau^1_{(\theta_j,0)}=\tau^2_{(\theta_j,0)}$. Hence $\frac{g^2_j(\theta_j)}{g^1_j(\theta_j)}>0$ for every $j$. Let $b:W\rightarrow \mathbb{R}$ be a positive smooth function such that for each $j\in\{1,\dots,s\},\,b(\theta_j)=\frac{g^2_j(\theta_j)}{g^1_j(\theta_j)}$. Define $a:W\rightarrow\mathbb{R}$ by $$ a(\theta)=b(\theta)\beta_2(\theta)-\beta_1(\theta). $$ Clearly $a$ is smooth on $W\setminus\{(\theta_1,0),\dots,(\theta_s,0)\}$. Moreover, thanks to our choice of $b$, $a$ is smooth at $\theta_j$, and, by construction, we have \r{endom}. The existence of $a,b$ is established. Define $A_q$ as in \r{defisom}. Let us extend the isomorphism $A_q$ defined for $q\in W$ to a tubular neighborhood. Define $\varphi_W:{\bf W}\rightarrow {\bf W}$ by $$ \varphi_W(\theta,t)=(a(\theta)t+\theta,b(\theta)t). $$ By construction, $d_{(\theta,0)}\varphi_W$ is an isomorphism.
Hence, reducing ${\bf W}$ if necessary, $\varphi_W:{\bf W}\rightarrow \varphi_W({\bf W})$ turns out to be a diffeomorphism. Finally, by definition, $\varphi_W(\theta,0)=(\theta,0)$ and, since $b$ is positive, $\varphi_W({\bf W}\cap M^\pm)\subset M^\pm$. $\blacksquare$ We apply Lemma~\ref{matrice} to every connected component $W$ of $\mathcal{Z}$. We reduce, if necessary, the tubular neighborhood ${\bf W}$ of $W$ in such a way that any two distinct connected components of $\mathcal{Z}$ have disjoint corresponding tubular neighborhoods built as in Lemma~\ref{matrice}. We claim that there exists a diffeomorphism $\varphi:M\rightarrow M$ such that $\varphi|_{\bf W}=\varphi_W$ for every connected component $W$ of $\mathcal{Z}$. This is a direct consequence of the fact that the labels on vertices of ${\cal G}_1$ and ${\cal G}_2$ are equal and of the classification of compact oriented surfaces with boundary (see \cite{hirsch}). By construction, the push-forward of ${\cal S}_1$ along $\varphi$ is Lipschitz equivalent to ${\cal S}_1$ and has the same labelled graph as ${\cal S}_1$. To simplify notation, we denote $\varphi_*{\cal S}_1$ by ${\cal S}_1$. By Lemma~\ref{matrice}, $\Delta^1(q)=\Delta^2(q)$ at every point $q$. {\bf Step 4.} The next point is to prove that $ \Delta^1$ and $\Delta^2$ coincide as ${\mathcal C}^\infty(M)$-submodules. \begin{lemma}\label{coin} The submodules $\Delta^1$ and $\Delta^2$ associated with ${\cal S}_1$ and ${\cal S}_2$ coincide. \end{lemma} {\bf Proof.} It is sufficient to show that for every $p\in M$ there exists a neighborhood $U$ of $p$ such that $\Delta^1|_U$ and $\Delta^2|_U$ are generated as ${\mathcal C}^\infty(M)$-submodules by the same pair of vector fields. If $p$ is an ordinary point, then taking $U=M\setminus \mathcal{Z}$, we have $\Delta^1|_U=\Delta^2|_U=\mathrm{Vec}(U)$.
Let $p$ be a Grushin point and apply Theorem~\ref{t-normal} to ${\cal S}_1$ to find a neighborhood $U$ of $p$ such that $$ \Delta^1|_U=\mathrm{span}_{{\mathcal C}^\infty(M)}\{F_1,F_2\}, ~\mathrm{ where }~~ F_1(x,y)=(1,0),~F_2(x,y)=(0,x e^{\phi(x,y)}). $$ Up to reducing $U$ we assume the existence of a frame $$ G_1(x,y)=(a_1(x,y),a_2(x,y)),~~~G_2(x,y)=(b_1(x,y),b_2(x,y)) $$ such that $\Delta^2|_U=\mathrm{span}_{{\mathcal C}^\infty(M)}\{G_1,G_2\}$. Since $\Delta^1(q)=\Delta^2(q)$ at every point $q\in M$, $a_2(0,y)\equiv 0$ and $b_2(0,y)\equiv 0$. Since $\Delta^2(0,y)$ is one-dimensional, let us assume $a_1(0,y)\neq 0$ for every $y$. Moreover, after possibly further reducing $U$, $\Delta^2|_U=\mathrm{span}_{{\mathcal C}^\infty(M)}\{(1/a_1)G_1, G_2-(b_1/a_1)G_1\}$ hence we may assume $a_1(x,y)\equiv 1$ and $b_1(x,y)\equiv 0$. The conditions $a_2(0,y)\equiv 0$ and $b_2(0,y)\equiv 0$ imply $a_2(x,y)=x \overline{a}_2(x,y)$ and $b_2(x,y)=x \overline{b}_2(x,y)$ respectively, with $\overline{a}_2,\overline{b}_2$ smooth functions. Since $[G_1,G_2]|_{(0,y)}=(0,\overline{b}_2(0,y))$, thanks to hypothesis {\bf (H0)}\ on ${\cal S}_2$, we have $\overline{b}_2(0,y)\neq 0$. Hence, reducing $U$ if necessary, \begin{eqnarray} \Delta^2|_U&=&\mathrm{span}_{{\mathcal C}^\infty(M)}\{G_1-(\overline{a}_2(x,y)/\overline{b}_2(x,y))G_2, (e^{\phi(x,y)}/\overline{b}_2(x,y)) G_2\}\nonumber\\ &=&\mathrm{span}_{{\mathcal C}^\infty(M)}\{F_1,F_2\}=\Delta^1|_U.\nonumber \end{eqnarray} Finally, let $p$ be a tangency point. Apply Theorem~\ref{t-normal} to ${\cal S}_1$, i.e., choose a neighborhood $U$ of $p$ and a system of coordinates $(x,y)$ such that $p=(0,0)$, $$ \Delta^1|_U=\mathrm{span}_{{\mathcal C}^\infty(M)}\{F_1,F_2\}, ~\mathrm{ where }~~ F_1(x,y)=(1,0),~F_2(x,y)=(0,(y-x^2\psi(x))e^{\xi(x,y)}), $$ and $\psi,\xi$ are smooth functions such that $\psi(0)>0$. Consider the change of coordinates $$ \tilde x=x,~~~\tilde y=y-x^2\psi(x). 
$$ Then $$ F_1(\tilde x,\tilde y)=(1,\tilde x a(\tilde x)),~~~F_2(\tilde x,\tilde y)=(0, \tilde y e^{\xi(\tilde x,\tilde y+\tilde x^2\psi(\tilde x))}), $$ where $a(\tilde x)=-2\psi(\tilde x)-\tilde x\psi'(\tilde x)$. To simplify notation, in the following we rename $\tilde x, \tilde y$ by $x,y$ respectively and we still denote by $\xi(x,y)$ the function $\xi(x,y+x^2\psi(x))$. In the new coordinate system we have $p=(0,0)$, $\mathcal{Z}\cap U=\{(x,y)\mid y=0\}$, $F_1(x,y)=(1,x a(x))$ and $F_2(x,y)=(0,ye^{\xi(x,y)})$. Reducing $U$, if necessary, let $G_1(x,y)=(a_1(x,y),a_2(x,y)), G_2(x,y)=(b_1(x,y),b_2(x,y))$ be a frame for $\Delta^2|_U$. Since $\Delta^1(q)=\Delta^2(q)$ at every point, we have $a_2(0,0)=b_2(0,0)=0$. Since $\Delta^2(0,0)$ is one-dimensional, we may assume $a_1(0,0)\neq 0$. After possibly further reducing $U$, $\Delta^2|_U=\mathrm{span}_{{\mathcal C}^\infty(M)}\{(1/a_1)G_1,G_2-(b_1/a_1)G_1\}$ and we can assume $a_1(x,y)\equiv 1$ and $b_1(x,y)\equiv 0$. Moreover, by $\Delta^1(x,0)=\Delta^2(x,0)$ we get $a_2(x,0)=x a(x)$ and $b_2(x,0)\equiv 0$, whence $a_2(x,y)=x a(x)+y \overline{a}_2(x,y)$ and $b_2(x,y)=y \overline{b}_2(x,y)$, with $\overline{a}_2,\overline{b}_2$ smooth functions. Computing the Lie brackets we get $$ [G_1,G_2]|_{(x,0)}=(0, x a \overline{b}_2)|_{(x,0)},~~~[G_1,[G_1,G_2]]|_{(0,0)}=(0, a \overline{b}_2)|_{(0,0)}. $$ Applying hypothesis {\bf (H0)}\ to ${\cal S}_2$ we have $\overline{b}_2(x,0)\neq 0$ for all $x$ in a neighborhood of $0$. Hence, up to reducing $U$, \begin{eqnarray} \Delta^2|_U&=&\mathrm{span}_{{\mathcal C}^\infty(M)}\{G_1-(\overline{a}_2(x,y)/\overline{b}_2(x,y))G_2,(e^{\xi(x,y)}/\overline{b}_2(x,y))G_2\} \nonumber\\ &=&\mathrm{span}_{{\mathcal C}^\infty(M)}\{F_1,F_2\}=\Delta^1|_U.\nonumber \end{eqnarray} $\blacksquare$ {\bf Step 5.} Thanks to Lemma \ref{coin} and Proposition \ref{stessomodulo} we can assume $f_1=f_2=f$.
In other words, we reduce to the case ${\cal S}_1=(E,f,\langle\cdot,\cdot\rangle_1)$ and ${\cal S}_2=(E,f,\langle\cdot,\cdot\rangle_2)$. By compactness of $M$, there exists a constant $k\geq 1$ such that \begin{equation}\label{compat} \frac{1}{k}\langle u,u\rangle_2\leq \langle u,u\rangle_1\leq k \langle u,u\rangle_2,\,\,\forall\,u\in E. \end{equation} For every $q\in M$ and $v\in\Delta(q)$ let ${\bf G}^i_q(v)=\inf\{\langle u, u\rangle_i \mid u\in E_q,f(u)=v\}$ (see section \ref{basdef}). Clearly, \begin{equation}\label{confronto} \frac{1}{k}{\bf G}^2_q(v)\leq {\bf G}^1_q(v)\leq k {\bf G}^2_q(v),\,\,\forall\,v\in f(E_q). \end{equation} By \r{compat}, admissible curves for ${\cal S}_1$ and ${\cal S}_2$ coincide. Moreover, given an admissible curve $\gamma:[0,T]\rightarrow M$, we can compare its length with respect to ${\cal S}_1$ and ${\cal S}_2$ using \r{confronto}. Namely, $$ \frac{1}{\sqrt{k}}\int_0^T\sqrt{{\bf G}^2_{\gamma(s)}(\dot\gamma(s))}ds\leq \int_0^T\sqrt{{\bf G}^1_{\gamma(s)}(\dot\gamma(s))}ds\leq \sqrt{k} \int_0^T\sqrt{{\bf G}^2_{\gamma(s)}(\dot \gamma(s))}ds. $$ Since the Carnot-Caratheodory distance between two points is defined as the infimum of the lengths of the admissible curves joining them, we get $$ \frac{1}{\sqrt{k}}d_2(p,q)\leq d_1(p,q)\leq \sqrt{k} d_2(p,q),\,\,\forall\,p,q\in M. $$ This is equivalent to saying that the identity map is a Lipschitz equivalence between ${\cal S}_1$ and ${\cal S}_2$. $\blacksquare$ \noindent{\bf Acknowledgements.} The authors are grateful to Andrei Agrachev for very helpful discussions. \end{document}
Linear relation

In linear algebra, a linear relation, or simply relation, between elements of a vector space or a module is a linear equation that has these elements as a solution. More precisely, if $e_{1},\dots ,e_{n}$ are elements of a (left) module M over a ring R (the case of a vector space over a field is a special case), a relation between $e_{1},\dots ,e_{n}$ is a sequence $(f_{1},\dots ,f_{n})$ of elements of R such that $f_{1}e_{1}+\dots +f_{n}e_{n}=0.$

The relations between $e_{1},\dots ,e_{n}$ form a module. One is generally interested in the case where $e_{1},\dots ,e_{n}$ is a generating set of a finitely generated module M, in which case the module of the relations is often called a syzygy module of M. The syzygy module depends on the choice of a generating set, but it is unique up to the direct sum with a free module. That is, if $S_{1}$ and $S_{2}$ are syzygy modules corresponding to two generating sets of the same module, then they are stably isomorphic, which means that there exist two free modules $L_{1}$ and $L_{2}$ such that $S_{1}\oplus L_{1}$ and $S_{2}\oplus L_{2}$ are isomorphic.

Higher order syzygy modules are defined recursively: a first syzygy module of a module M is simply its syzygy module. For k > 1, a kth syzygy module of M is a syzygy module of a (k – 1)-th syzygy module. Hilbert's syzygy theorem states that, if $R=K[x_{1},\dots ,x_{n}]$ is a polynomial ring in n indeterminates over a field, then every nth syzygy module is free. The case n = 0 is the fact that every finite dimensional vector space has a basis, and the case n = 1 is the fact that K[x] is a principal ideal domain and that every submodule of a finitely generated free K[x] module is also free. The construction of higher order syzygy modules is generalized as the definition of free resolutions, which allows restating Hilbert's syzygy theorem as the assertion that a polynomial ring in n indeterminates over a field has global homological dimension n.
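As a small computational sketch (an illustration of the definition, not part of the article; it assumes the SymPy library is available), one can verify that given tuples of polynomials are relations between the generators $x^2$, $xy$, $y^2$ of an ideal of $K[x,y]$:

```python
# Illustrative sketch (assumes SymPy): checking that coefficient tuples are
# relations (syzygies) between the generators x^2, x*y, y^2 of an ideal.
import sympy as sp

x, y = sp.symbols("x y")
gens = [x**2, x*y, y**2]

def is_relation(coeffs, gens):
    # (f_1, ..., f_n) is a relation iff f_1*e_1 + ... + f_n*e_n = 0.
    return sp.expand(sum(f * e for f, e in zip(coeffs, gens))) == 0

# Two classical syzygies of (x^2, xy, y^2):
#   y*(x^2) - x*(x*y) = 0   and   y*(x*y) - x*(y^2) = 0
syzygy_1 = (y, -x, 0)
syzygy_2 = (0, y, -x)

print(is_relation(syzygy_1, gens))   # True
print(is_relation(syzygy_2, gens))   # True
print(is_relation((1, 1, 1), gens))  # False: x^2 + xy + y^2 != 0
```

These two syzygies are in fact non-trivial relations, in the sense discussed below.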
If a and b are two elements of the commutative ring R, then (b, –a) is a relation between a and b that is said to be trivial. The module of trivial relations of an ideal is the submodule of the first syzygy module of the ideal that is generated by the trivial relations between the elements of a generating set of the ideal. The concept of trivial relations can be generalized to higher order syzygy modules, and this leads to the concept of the Koszul complex of an ideal, which provides information on the non-trivial relations between the generators of an ideal. Basic definitions Let R be a ring, and M be a left R-module. A linear relation, or simply a relation, between k elements $x_{1},\dots ,x_{k}$ of M is a sequence $(a_{1},\dots ,a_{k})$ of elements of R such that $a_{1}x_{1}+\dots +a_{k}x_{k}=0.$ If $x_{1},\dots ,x_{k}$ is a generating set of M, the relation is often called a syzygy of M. It makes sense to call it a syzygy of $M$ without regard to $x_{1},\dots ,x_{k}$ because, although the syzygy module depends on the chosen generating set, most of its properties are independent of that choice; see § Stable properties, below. If the ring R is Noetherian, or at least coherent, and if M is finitely generated, then the syzygy module is also finitely generated. A syzygy module of this syzygy module is a second syzygy module of M. Continuing this way, one can define a kth syzygy module for every positive integer k. Hilbert's syzygy theorem asserts that, if M is a finitely generated module over a polynomial ring $K[x_{1},\dots ,x_{n}]$ over a field, then any nth syzygy module is a free module. Stable properties In this section, all modules are supposed to be finitely generated. That is, the ring R is supposed to be Noetherian or, at least, coherent. Generally speaking, in the language of K-theory, a property is stable if it becomes true by making a direct sum with a sufficiently large free module.
A fundamental property of syzygy modules is that they are "stably independent" of the choices of generating sets for the modules involved. The following result is the basis of these stable properties. Proposition — Let $\{x_{1},\dots ,x_{m}\}$ be a generating set of an R-module M, and $y_{1},\dots ,y_{n}$ be other elements of M. The module of the relations between $x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}$ is the direct sum of the module of the relations between $x_{1},\dots ,x_{m}$ and a free module of rank n. Proof. As $\{x_{1},\dots ,x_{m}\}$ is a generating set, each $y_{i}$ can be written $\textstyle y_{i}=\sum \alpha _{i,j}x_{j}.$ This provides a relation $r_{i}$ between $x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}.$ Now, if $r=(a_{1},\dots ,a_{m},b_{1},\dots ,b_{n})$ is any relation, then $\textstyle r-\sum b_{i}r_{i}$ is a relation between the $x_{1},\dots ,x_{m}$ only. In other words, every relation between $x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}$ is the sum of a relation between $x_{1},\dots ,x_{m}$ and a linear combination of the $r_{i}$s. It is straightforward to prove that this decomposition is unique, and this proves the result. $\blacksquare $ This proves that the first syzygy module is "stably unique". More precisely, given two generating sets $S_{1}$ and $S_{2}$ of a module M, if $S_{1}$ and $S_{2}$ are the corresponding modules of relations, then there exist two free modules $L_{1}$ and $L_{2}$ such that $S_{1}\oplus L_{1}$ and $S_{2}\oplus L_{2}$ are isomorphic. For proving this, it suffices to apply the preceding proposition twice, obtaining two decompositions of the module of the relations between the union of the two generating sets. For obtaining a similar result for higher syzygy modules, it remains to prove that, if M is any module and L is a free module, then M and M ⊕ L have isomorphic syzygy modules. It suffices to consider a generating set of M ⊕ L that consists of a generating set of M together with a basis of L.
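A concrete instance of the proposition may help (our illustration, with $R=K[x]$): take $M=R$ with generating set $\{x_1\}$, $x_1=1$, and the extra element $y_1=x$.

```latex
% Writing y_1 = x \cdot x_1 yields the relation r_1 = (x, -1), since
% x \cdot x_1 + (-1) \cdot y_1 = x - x = 0.  Any relation (a, b) between
% x_1 = 1 and y_1 = x satisfies a + b x = 0, hence a = -b x and
\[
  (a, b) = -b\,(x,\,-1) = -b\,r_1 .
\]
% The module of relations between \{1, x\} is R r_1, free of rank 1: the
% direct sum of the (zero) module of relations of \{1\} and a free module
% of rank n = 1, exactly as the proposition states.
```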
For every relation between the elements of this generating set, the coefficients of the basis elements of L are all zero, and the syzygies of M ⊕ L are exactly the syzygies of M extended with zero coefficients. This completes the proof of the following theorem. Theorem — For every positive integer k, the kth syzygy module of a given module depends on the choices of generating sets, but is unique up to the direct sum with a free module. More precisely, if $S_{1}$ and $S_{2}$ are kth syzygy modules that are obtained by different choices of generating sets, then there are free modules $L_{1}$ and $L_{2}$ such that $S_{1}\oplus L_{1}$ and $S_{2}\oplus L_{2}$ are isomorphic. Relationship with free resolutions Given a generating set $g_{1},\dots ,g_{n}$ of an R-module, one can consider a free module L with basis $G_{1},\dots ,G_{n},$ where $G_{1},\dots ,G_{n}$ are new indeterminates. This defines an exact sequence $L\longrightarrow M\longrightarrow 0,$ where the left arrow is the linear map that maps each $G_{i}$ to the corresponding $g_{i}.$ The kernel of this left arrow is a first syzygy module of M. One can repeat this construction with this kernel in place of M. Iterating this construction, one gets a long exact sequence $\cdots \longrightarrow L_{k}\longrightarrow L_{k-1}\longrightarrow \cdots \longrightarrow L_{0}\longrightarrow M\longrightarrow 0,$ where all the $L_{i}$ are free modules. By definition, such a long exact sequence is a free resolution of M. For every k ≥ 1, the kernel $S_{k}$ of the arrow starting from $L_{k-1}$ is a kth syzygy module of M. It follows that the study of free resolutions is the same as the study of syzygy modules. A free resolution is finite of length ≤ n if $S_{n}$ is free. In this case, one can take $L_{n}=S_{n},$ and $L_{k}=0$ (the zero module) for every k > n.
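A standard example (not spelled out in the article) makes the correspondence concrete: the Koszul resolution of $M=K[x,y]/(x,y)$ over $R=K[x,y]$.

```latex
% The generators x, y of the ideal (x, y) give the map R^2 -> R; its kernel,
% the first syzygy module, is generated by the single relation (y, -x) and
% is free, so the resolution terminates:
\[
  0 \longrightarrow R
    \xrightarrow{\begin{pmatrix} y \\ -x \end{pmatrix}} R^{2}
    \xrightarrow{\begin{pmatrix} x & y \end{pmatrix}} R
    \longrightarrow M \longrightarrow 0 .
\]
% Here S_1 = \ker(R^2 \to R) is the first syzygy module, and the resolution
% has length 2, in agreement with Hilbert's bound for n = 2 indeterminates.
```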
This allows restating Hilbert's syzygy theorem: If $R=K[x_{1},\dots ,x_{n}]$ is a polynomial ring in n indeterminates over a field K, then every free resolution is finite of length at most n. The global dimension of a commutative Noetherian ring is either infinite, or the minimal n such that every free resolution is finite of length at most n. A commutative Noetherian ring is regular if its global dimension is finite. In this case, the global dimension equals its Krull dimension. So, Hilbert's syzygy theorem may be restated in a very short sentence that hides much mathematics: A polynomial ring over a field is a regular ring. Trivial relations In a commutative ring R, one always has ab – ba = 0. This implies trivially that (b, –a) is a linear relation between a and b. Therefore, given a generating set $g_{1},\dots ,g_{k}$ of an ideal I, one calls trivial relation or trivial syzygy every element of the submodule of the syzygy module that is generated by these trivial relations between two generating elements. More precisely, the module of trivial syzygies is generated by the relations $r_{i,j}=(x_{1},\dots ,x_{k})$ such that $x_{i}=g_{j},$ $x_{j}=-g_{i},$ and $x_{h}=0$ otherwise. History The word syzygy came into mathematics with the work of Arthur Cayley.[1] In that paper, Cayley used it in the theory of resultants and discriminants.[2] As the word syzygy was used in astronomy to denote a linear relation between planets, Cayley used it to denote linear relations between minors of a matrix, such as, in the case of a 2×3 matrix: $a\,{\begin{vmatrix}b&c\\e&f\end{vmatrix}}-b\,{\begin{vmatrix}a&c\\d&f\end{vmatrix}}+c\,{\begin{vmatrix}a&b\\d&e\end{vmatrix}}=0.$ Then, the word syzygy was popularized (among mathematicians) by David Hilbert in his 1890 article, which contains three fundamental theorems on polynomials: Hilbert's syzygy theorem, Hilbert's basis theorem and Hilbert's Nullstellensatz.
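The trivial relations $r_{i,j}$ defined above can be checked mechanically. Below is a stdlib-only Python sketch (the generators $g_1=xy$, $g_2=yz$, $g_3=zx$ are our choice, purely for illustration):

```python
from collections import defaultdict

def pmul(p, q):
    """Product of polynomials in K[x,y,z], stored as {exponent-tuple: coeff}."""
    out = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return {m: c for m, c in out.items() if c}

def padd(p, q):
    """Sum of two polynomials, dropping zero coefficients."""
    out = defaultdict(int, p)
    for m, c in q.items():
        out[m] += c
    return {m: c for m, c in out.items() if c}

def scale(p, c0):
    """Multiply a polynomial by a constant."""
    return {m: c0 * c for m, c in p.items()}

# Generators (illustrative choice): g1 = xy, g2 = yz, g3 = zx
g = [{(1, 1, 0): 1}, {(0, 1, 1): 1}, {(1, 0, 1): 1}]

def trivial_relation(i, j):
    """Trivial (Koszul) relation r_{i,j}: g_j in slot i, -g_i in slot j, 0 elsewhere."""
    r = [{} for _ in g]
    r[i], r[j] = g[j], scale(g[i], -1)
    return r

# Every trivial relation is indeed a syzygy: sum_i r_i g_i = 0
for i in range(len(g)):
    for j in range(i + 1, len(g)):
        total = {}
        for ri, gi in zip(trivial_relation(i, j), g):
            total = padd(total, pmul(ri, gi))
        assert total == {}
```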
In his article, Cayley makes use, in a special case, of what was later[3] called the Koszul complex, after a similar construction in differential geometry by the mathematician Jean-Louis Koszul. Notes 1. [Cayley 1847] A. Cayley, "On the theory of involution in geometry", Cambridge Math. J. 11 (1847), 52–61. See also Collected Papers, Vol. 1 (1889), 80–94, Cambridge Univ. Press, Cambridge. 2. [Gel'fand et al. 1994] I. M. Gel'fand, M. M. Kapranov, and A. V. Zelevinsky, Discriminants, resultants, and multidimensional determinants, Mathematics: Theory & Applications, Birkhäuser, Boston, 1994. 3. [Serre 1965] J.-P. Serre, Algèbre locale. Multiplicités. Cours au Collège de France, 1957–1958, rédigé par Pierre Gabriel. Seconde édition, Lecture Notes in Mathematics 11, Springer-Verlag, Berlin–New York, 1965; the published form of mimeographed notes from Serre's lectures at the Collège de France in 1958.
\begin{document} \title{Quantum Computing with Two-dimensional Conformal Field Theories} \author{Elias Kokkas} \email{[email protected]} \affiliation{Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200, USA} \author{Aaron Bagheri} \email{[email protected]} \affiliation{Department of Mathematics, University of California, Santa Barbara, CA 93106, USA} \author{Zhenghan Wang} \email{[email protected]} \affiliation{Microsoft Station Q and Department of Mathematics, University of California, Santa Barbara, CA 93106, USA} \author{George Siopsis} \email{[email protected]} \affiliation{Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200, USA} \date{\today} \begin{abstract} Conformal field theories have been extremely useful in our quest to understand physical phenomena in many different branches of physics, starting from condensed matter all the way up to high energy. Here we discuss applications of two-dimensional conformal field theories to fault-tolerant quantum computation based on the coset $ SU(2)_1^{\otimes k} / SU(2)_{k}$. We calculate higher-dimensional braiding matrices by considering conformal blocks involving $2N$ anyons, and search for gapped states that can be built from these conformal blocks. We introduce a gapped wavefunction that generalizes the Moore-Read state which is based on the critical Ising model, and show that our generalization leads to universal quantum computing. \end{abstract} \maketitle \section{Introduction} Topological quantum computation was first introduced \cite{bib:1,bib:2,bib:2a} and further developed \cite{bib:3,bib:4,bib:5} as an elegant approach to fault-tolerant quantum computation which utilizes certain quasi-particles called anyons. These are exotic quasi-particles that live in two spatial dimensions and exhibit quantum statistics that are neither fermionic nor bosonic. We can distinguish between two different types of anyons, Abelian and non-Abelian. 
Abelian anyons are associated with one-dimensional representations of the braid group, and were first studied in Ref.\ \cite{bib:6}. Their quantum state acquires a global phase under the exchange of two identical particles, whereas the exchange of two identical non-Abelian anyons changes their quantum state via a unitary matrix. Another important difference between Abelian and non-Abelian anyons concerns their fusion rules. Abelian anyons fuse into a single Abelian anyon, whereas non-Abelian anyons have multiple fusion outcomes (channels). We can store and process information using the fusion rules and braiding statistics of these anyons, respectively. However, only non-Abelian anyons can be used to implement complicated quantum gates that are not proportional to the identity. Ising and Fibonacci anyons are the simplest candidates for a topological quantum computer \cite{bib:7,bib:8}, but only the latter offers a universal set of quantum gates via braiding. Nevertheless, Abelian anyons are still useful for quantum computing tasks, such as quantum memory. We achieve fault tolerance by encoding information non-locally and processing it using braidings that depend only on the topology of the anyons. Anyons emerge as localized excitations in a topological phase of matter, provided there exists an energy gap and a ground state topological degeneracy which is robust against external interactions \cite{bib:9,bib:10}. Of course, errors associated with wrong braidings can still occur. Despite the enormous theoretical success of anyons, their physical realization remains a challenge to this day. Superconductor-semiconductor nanowires are promising candidates for Majorana zero modes \cite{bib:11,bib:12}, which are quasi-particles that obey the same fusion and braiding rules as the Ising anyons. Another approach is to study systems exhibiting the Fractional Quantum Hall Effect (FQHE).
Experimental data \cite{bib:13} support the emergence of Abelian anyonic excitations in the quantum Hall state at $\nu=\frac{1}{3}$. There is some evidence that Ising anyons can be found in the $\nu=\frac{5}{2}$ quantum Hall state and that Fibonacci anyons exist in the $\nu=\frac{12}{5}$ quantum Hall state. However, experimental results are not conclusive. To better understand the FQHE and its quantum statistics, one needs to understand the wavefunction of the ground state and its quasi-hole excitations. Na\"\i vely, one would have to determine a many-body Hamiltonian that could be diagonalized to obtain its eigenstates. Alternatively, following the pioneering work of Moore and Read \cite{bib:14}, one can construct wavefunctions for these states using conformal blocks of certain conformal field theories (CFTs). For example, one can construct the Laughlin wavefunction \cite{bib:15} that describes the FQHE at filling $\nu=\frac{1}{q}$, with $q$ an odd integer, and supports Abelian anyons, using a CFT with central charge $c=1$ consisting of a free massless boson. Additionally, Moore and Read \cite{bib:14} constructed the Pfaffian wavefunction, which obeys non-Abelian statistics, using the critical Ising CFT minimal model $\mathcal{M}(4,3)$ to describe the $\nu=\frac{5}{2}$ FQHE. This spurred a lot of activity in the subject \cite{bib:16,bib:17,bib:18,bib:19,bib:20}. Similar proposals suggest that the $\nu=\frac{5}{2}$ state is described by the anti-Pfaffian state \cite{bib:21,bib:22}. In order to build a wavefunction describing a number of non-Abelian quasi-holes from CFT correlators, there are a few caveats to address. Since the conformal blocks for non-Abelian anyons are multi-valued functions, the position coordinates of the non-Abelian quasi-holes can only play the role of parameters in the wavefunction of the system. The role of coordinates is assumed by the positions of Abelian particles which must be present in the system.
Their presence induces singularities, which are removed by the inclusion of the Jastrow factor, which describes an independent system defined by a different CFT. Moreover, as was emphasized in \cite{bib:18}, for the braiding statistics of the wavefunction to match the monodromy around the branch points of the multi-valued part of the function, we need to ensure that the Berry holonomy vanishes. Fortunately, it was later demonstrated in \cite{bib:20} using the plasma analogy that the Berry holonomy indeed vanishes for the Moore-Read (MR) wavefunction. Attempts to construct MR-like wavefunctions for minimal models $\mathcal{M}(m+1,m)$ with $m > 3$ were also made in \cite{bib:20}, but it was realized that they cannot describe gapped states because the plasma is not screened. In this work, we propose an alternative generalization of the MR wavefunction based on the coset $SU(2)_1^{\otimes k} /SU(2)_k $, where $SU(2)_k$ is the Wess-Zumino-Witten (WZW) model based on the gauge group $SU(2)$ at level $k$. For $k=2$, it reduces to the MR wavefunction, because the critical Ising model $\mathcal{M}(4,3)$ is isomorphic to the coset $SU(2)_1\otimes SU(2)_1 /SU(2)_2 $ \cite{bib:23,bib:24,bib:25}. Unlike in the $k=2$ case, higher values of $k$ lead to universal quantum computing involving only braiding. Using the plasma analogy, we show that our wavefunction is gapped, ensuring fault-tolerant quantum computing. It should be pointed out that the minimal model $\mathcal{M} (k+2,k+1)$ can be constructed from a similar coset, $SU(2)_{k-1}\otimes SU(2)_1/SU(2)_{k}$. However, even though braiding alone can be shown to lead to universal quantum computation for $k>2$, no gapped state has been constructed. In our coset construction based on $SU(2)_1^{\otimes k} /SU(2)_k $, there is a primary field of conformal dimension $\frac{1}{2}$ for all $k$, leading to a gapped state. It would be interesting to identify a physical realization of our theoretical construct.
Our discussion is organized as follows. In Section \ref{sec:II}, we review the pertinent features of Virasoro minimal models using the Coulomb gas formalism. In Section \ref{sec:III}, we calculate the braiding and fusion matrices for four-point and six-point amplitudes. In Section \ref{sec:coamp}, we calculate amplitudes in the coset CFT. In Section \ref{sec:IV}, we discuss braiding from the point of view of anyon models. In Section \ref{sec:V}, we construct a wavefunction that generalizes the MR wavefunction and leads to universal fault-tolerant quantum computing. Finally, in Section \ref{sec:con} we present our conclusions. Details of our calculations can be found in Appendices \ref{app:A} (exchange matrices) and \ref{app:B} (amplitudes in the $SU(2)_q$ WZW model). \section{Virasoro minimal models} \label{sec:II} In this Section, we review the salient features of the Virasoro minimal models. The minimal model $\mathcal{M} (k+2,k+1)$ shares common features with the coset CFT $SU(2)_1^{\otimes k} /SU(2)_k $ that we are interested in, such as a set of primary fields ($\Phi_{(1,s)}$ with $1 \leq s \leq k+1$). These fields have the same conformal dimensions, fusion rules, and braiding statistics in both CFTs. For $k=2$, the coset CFT coincides with the critical Ising model. For all $k$, the coset CFT contains a primary field $\psi$ of conformal dimension $h_\psi = \frac{1}{2}$, which is not present in minimal models, except for $k=2$. The absence of a similar field in minimal models with $k >2$ prevents us from using them for fault-tolerant universal quantum computation. As we will show in Section \ref{sec:V}, the field $\psi$ obeys Abelian fusion rules, which is crucial for the construction of a gapped wavefunction based on the coset CFT $SU(2)_1^{\otimes k} /SU(2)_k $. 
\begin{table}[b] \caption{\label{table:1} Charge and dimension of primary fields in the minimal model $\mathcal{M}(k+2,k+1)$ and the coset CFT $SU(2)_1^{\otimes k} /SU(2)_k $.} \begin{ruledtabular} \begin{tabular}{lccc} \textrm{Primary field}& \textrm{Symbol}& \textrm{Dimension}& \textrm{Charge}\\ \colrule $\Phi_{(1,1)}$ & $\mathbb{I}$ & 0 & 0 \\ $\Phi_{(1,2)}$ & $\sigma $& $\frac{k-1}{4(k+2)}$ & $\frac{k+1}{2\sqrt{(k+1)(k+2)}}$ \\ $\Phi_{(1,3)}$ & $\varepsilon$ & $\frac{k}{k+2}$ & $\frac{k+1}{\sqrt{(k+1)(k+2)}}$ \\ $\Phi_{(1,4)}$ & $\varepsilon'$ & $\frac{3(3k+1)}{4(k+2)}$ & $\frac{3(k+1)}{2\sqrt{(k+1)(k+2)}}$ \\ \vdots & & \vdots & \vdots \\ $\Phi_{(1,k+1)}$ & & $\frac{k(k-1)}{4}$ & $\frac{k(k+1)}{2\sqrt{(k+1)(k+2)}}$ \end{tabular} \end{ruledtabular} \end{table} In the Coulomb gas formalism \cite{bib:26,bib:27,bib:28}, one starts with a massless scalar field $\varphi$ in two spacetime dimensions, and adds a background charge $\alpha_0$ at infinity, which shifts the central charge to $c=1-24\alpha_0^2$. In terms of the integer $k$ the background charge is given by \begin{equation} \label{eq:II-1} \alpha_0 = \frac{1}{2\sqrt{(k+1)(k+2)}} \ . \end{equation} Physical observables, such as spin and energy density, are represented by primary fields expressed as vertex operators $\Phi_\alpha(\eta,\bar{\eta})= e^{i \sqrt{2}\alpha \varphi (\eta,\bar{\eta})}$, including both a holomorphic and an anti-holomorphic part, where $\alpha$ is the charge. The primary field $\Phi_\alpha$ has conformal dimension $h_\alpha=\alpha^2-2 \alpha_0 \alpha$. The corresponding observable may also be represented by the conjugate vertex operator $\tilde{\Phi}_\alpha \equiv \Phi_{2\alpha_0-\alpha}$, which has the same conformal dimension as $\Phi_\alpha$. The minimal model $\mathcal{M} (k+2,k+1)$ possesses a finite number of primary fields labeled by a pair of integers $(r,s)$, where $r=1,\dots,k$, and $s=1,\dots,k+1$. 
The charge and conformal dimension of the primary field $\Phi_{(r,s)}$ are given, respectively, by \begin{equation} \label{eq:II-2} \alpha_{(r,s)} = \frac{(1-r)(k+2)-(1-s)(k+1)}{2\sqrt{(k+1)(k+2)}} \ , \end{equation} \begin{equation} \label{eq:II-3} h_{(r,s)} = \frac{[r(k+2) - s(k+1)]^2-1}{4(k+1)(k+2)} \ . \end{equation} The conjugate field is $\tilde\Phi_{(r,s)}=\Phi_{(k+1-r,k+2-s)}$. Correlators of primary fields can be split into holomorphic and antiholomorphic parts by splitting the scalar field $\varphi (\eta,\bar{\eta}) = \varphi (\eta) + \bar{\varphi} (\bar{\eta})$. For quantum computing, we are interested in chiral CFTs. We will concentrate on the holomorphic part of correlators. Of particular importance are the primary fields with $r=1$. They are common to the minimal model $\mathcal{M} (k+2,k+1)$ and the coset CFT $SU(2)_1^{\otimes k} / SU(2)_k$, and they form a closed algebra thanks to the fusion rules \begin{equation}\label{eq:II-4} \Phi_{(1,s)} \otimes \Phi_{(1,s')} = \sum_{ s''\stackrel{2}{=} |s'-s|+1 }^{\text{min} (s+s'-1,2k+3-s-s')} \Phi_{(1,s'')} \ , \end{equation} where $\stackrel{2}{=}$ denotes incrementing the summation variable by $2$. More generally, the fusion rules are \begin{eqnarray}\label{eq:fusion} && \Phi_{(r,s)} \otimes \Phi_{(r',s')} = \nonumber\\ &&\ \ \ \ \sum_{r'' \stackrel{2}{=} |r'-r|+1}^{\min(r+r'-1, 2q-1-r-r')} \sum_{s'' \stackrel{2}{=} |s'-s|+1}^{\min(s+s'-1, 2p-1-s-s')} \Phi_{(r'',s'')} \ , \nonumber\\ \end{eqnarray} where $p=k+2$ and $q=k+1$. For $k \geq 2$, the charge and dimension of these primary fields are summarized in Table \ref{table:1}. We will build Hilbert spaces of qubits based on correlators of the holomorphic part of the primary field $\Phi_{(1,2)}$, which we denote by $\sigma (z)$. Other fields $\Phi_{(1,s)}$ contribute to correlators of $\sigma$ as intermediate states. We denote their holomorphic part by $\varepsilon, \varepsilon', \dots$, for $s=3,4,\dots$, respectively.
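Eq.\ \eqref{eq:II-3} is easy to sanity-check against Table \ref{table:1}; the short Python sketch below (ours, using exact rational arithmetic) reproduces the critical Ising dimensions at $k=2$:

```python
from fractions import Fraction

def h(r, s, k):
    """Conformal dimension h_{(r,s)} of Eq. (II-3) for the model M(k+2, k+1)."""
    return Fraction((r * (k + 2) - s * (k + 1)) ** 2 - 1,
                    4 * (k + 1) * (k + 2))

# k = 2 reproduces the critical Ising dimensions
assert h(1, 1, 2) == 0                 # identity
assert h(1, 2, 2) == Fraction(1, 16)   # sigma
assert h(1, 3, 2) == Fraction(1, 2)    # epsilon

# The Table I entries hold for every k >= 2
for k in range(2, 12):
    assert h(1, 2, k) == Fraction(k - 1, 4 * (k + 2))
    assert h(1, 3, k) == Fraction(k, k + 2)
    assert h(1, 4, k) == Fraction(3 * (3 * k + 1), 4 * (k + 2))
```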
Conjugate fields are denoted by $\tilde{\sigma}, \tilde{\varepsilon}, \tilde{\varepsilon'}, \dots$, and their anti-holomorphic counterparts by $\bar{\sigma}, \bar{\varepsilon}, \bar{\varepsilon'}, \dots$. Thus, e.g., $\Phi_{(1,2)} (\eta,\bar{\eta}) = \sigma (\eta) \bar{\sigma} (\bar{\eta})$. The anomalous $U(1)$ symmetry of the massless scalar field coupled to the background charge leads to a charge neutrality condition which states that the total charge in a correlator has to be equal to twice the background charge, \begin{equation} \label{eq:II-5} \sum_i \alpha_i= 2\alpha_0 \ .\end{equation} To define non-vanishing correlators of physical observables that obey the neutrality condition \eqref{eq:II-5}, one introduces screening operators $Q_\pm$ of zero conformal dimension, \begin{equation} \label{eq:II-6} Q_\pm = \int d^2 w V_\pm (w) \bar{V}_{\pm} (\bar{w}) \ , \ \ V_\pm (w) = e^{i\sqrt{2} \alpha_\pm \varphi (w)} \ ,\end{equation} of charge $\alpha_+ = \sqrt{\frac{k+2}{k+1}}$ and $\alpha_- =- \sqrt{\frac{k+1}{k+2}}$, respectively. The correlation function of $2N$ primary fields $\Phi_{(1,2)}$ is given by \begin{equation} \label{eq:II-7} G^{(2N)}(\bm{\eta}, \bar{\bm{\eta}} ) = \left\langle \bm{\sigma}_1 \cdots \bm{\sigma}_{2N-1} \tilde{\bm{\sigma}}_{2N} Q_-^{N-1} \right\rangle \end{equation} where $\bm{\eta} = (\eta_1,\dots, \eta_{2N})$, $\bm{\sigma} = \Phi_{(1,2)}$, and $\bm{\sigma}_j = \bm{\sigma} (\eta_j,\bar{\eta}_j)$. To define this correlator, we inserted $N-1$ screening operators $Q_-$ and used the conjugate field for one of the primary fields. The non-chiral correlator \eqref{eq:II-7} can be split into holomorphic and antiholomorphic parts as \cite{bib:29} \begin{equation} \label{eq:II-8} G^{(2N)}(\bm{\eta}, \bar{\bm{\eta}} ) = \sum_\mu | \mathcal{F}_\mu^{(2N)} (\bm{\eta} ) |^2 \end{equation} where we sum over the conformal blocks of the chiral model labeled by $\mu$.
The chiral correlator (conformal block) is given by \begin{equation} \label{eq:II-9} \mathcal{F}_\mu^{(2N)} (\bm{\eta} ) = \sqrt{N_\mu} \oint_\mu d^{N-1}\bm{w} \, \mathcal{I}^{(2N)} ( \bm{\eta}; \bm{w}) \end{equation} where \begin{equation} \label{eq:II-10} \mathcal{I}^{(2N)} ( \bm{\eta}; \bm{w}) = \left\langle \sigma_1 \cdots \sigma_{2N-1} \tilde{\sigma}_{2N} \prod_{j=1}^{N-1} V_- (w_j) \right\rangle \end{equation} where $\bm{w} = (w_1, \dots,w_{N-1})$, $\sigma_j = \sigma(\eta_j)$, and $N_\mu$ are normalization constants determined by matching the expressions in Eqs.\ \eqref{eq:II-7} and \eqref{eq:II-8}. The conformal block \eqref{eq:II-9} is obtained by performing $N-1$ contour integrals. One distinguishes between different conformal blocks by the position of the contours of integration; $\mu$ labels the collective choice. \section{Braiding and Fusion Matrices}\label{sec:III} As discussed in the previous section, chiral amplitudes are not single-valued functions, since they depend on the choice of contours of integration. Conformal blocks form a basis for these amplitudes, whose dimensionality depends on the number of primary fields and therefore on the integer $k$ labeling the CFT. This basis is mapped onto the basis for the Hilbert space of qubits in quantum computation. One can deduce the number of independent conformal blocks directly from the fusion rules of the model. The exchange of two primary fields $\sigma$ at positions $\eta_i$ and $\eta_j$ is equivalent to a change of basis from $\mathcal{F}_\mu$ to $\mathcal{F}'_\mu$ via an exchange matrix, \begin{equation}\label{eq:III-1} \mathcal{F}'_\mu =\sum_\nu (R_{ij})_{\mu \nu} \mathcal{F}_\nu \ .\end{equation} These exchange matrices result in braiding and fusion matrices \cite{bib:28,bib:30,bib:31} that can be mapped onto quantum gates. We discuss how this is done in detail for four- and six-point amplitudes.
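To illustrate the counting, here is a short Python sketch (ours, not part of the paper) that implements the fusion rule \eqref{eq:II-4} for the $\Phi_{(1,s)}$ fields and counts the fusion paths of $\sigma$ fields to the identity:

```python
def fuse(s1, s2, k):
    """Allowed fusion channels of Phi_(1,s1) x Phi_(1,s2), Eq. (II-4)."""
    return range(abs(s1 - s2) + 1,
                 min(s1 + s2 - 1, 2 * k + 3 - s1 - s2) + 1, 2)

def blocks(n_sigma, k):
    """Number of fusion paths of n_sigma sigma fields (s = 2) to the identity,
    i.e. the number of independent conformal blocks of the chiral correlator."""
    counts = {1: 1}                      # start from the identity, s = 1
    for _ in range(n_sigma):
        new = {}
        for s, c in counts.items():
            for s2 in fuse(s, 2, k):
                new[s2] = new.get(s2, 0) + c
        counts = new
    return counts.get(1, 0)

assert blocks(4, 2) == 2    # four-point function: a single qubit
assert blocks(4, 3) == 2    # ... for any k >= 2
assert blocks(6, 2) == 4    # six sigma fields in the Ising model
```

For four $\sigma$ fields the count is 2 for every $k\geq 2$, matching the single-qubit Hilbert space of the four-point amplitude.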
\subsection{Four-point amplitudes} The simplest nontrivial amplitude is the four-point correlation function involving $\sigma$ fields. There are two conformal blocks associated with this four-point function for all values of the integer $k$, as can be easily deduced from the fusion rules \eqref{eq:II-4}. They form a two-dimensional Hilbert space corresponding to a single qubit. The amplitude \eqref{eq:II-10} with $N=2$ can be simplified using conformal invariance, which allows one to fix $\eta_1 \rightarrow 0$, $\eta_2 \rightarrow x$, $\eta_3 \rightarrow 1$, and $\eta_4 \rightarrow \infty$, where $x = \frac{\eta_{12} \eta_{34}}{\eta_{13} \eta_{24}}$ is the anharmonic ratio with $\eta_{ij} = \eta_i - \eta_j$. Omitting a factor $\left( \frac{\eta_{13} \eta_{24} }{\eta_{12} \eta_{23} \eta_{34} \eta_{41} } \right)^{2h_\sigma}$, we may write \begin{equation} \label{eq:III-3} \mathcal{I}^{(4)} (x; w) = \left\langle \sigma (0) \sigma (x) \sigma (1) \tilde{\sigma} (\infty) V_- (w) \right\rangle \ .\end{equation} The non-chiral four-point function can be written in terms of conformal blocks as \begin{equation} \label{eq:III-6} G^{(4)} (x,\bar{x}) = |\mathcal{F}^{(4)}_1 (x)|^2 + |\mathcal{F}^{(4)}_2 (x)|^2 \ . \end{equation} To define the two conformal blocks, we need to carefully choose the contours of integration in order to avoid the branch points and singularities at $0, x, 1, \infty$. This can be done by choosing two branch cuts along the real axis, one that goes from $0$ to $x$ and another one from $1$ to $\infty$. We obtain two different contours which encircle $(0,x)$ and $(1,\infty)$, respectively.
After shrinking these contours, the two conformal blocks are defined as \begin{eqnarray} \label{eq:III-7} \mathcal{F}^{(4)}_1 (x) &=& \sqrt{N_1} [x (1-x)]^{\frac{k+1}{2(k+2)}} \nonumber\\ && \times \int_0^x dw\, [w (x-w) (1-w)]^{-\frac{k+1}{k+2}} \nonumber \\ \mathcal{F}^{(4)}_2 (x) &=& \sqrt{N_2} [x (1-x)]^{\frac{k+1}{2(k+2)}} \nonumber \\ && \times \int_1^\infty dw \, [w (w-x) (w-1)]^{-\frac{k+1}{k+2}} \ .\end{eqnarray} After some algebra, we obtain these conformal blocks in terms of hypergeometric functions \begin{eqnarray} \label{eq:III-8} \mathcal{F}^{(4)}_1 (x) &=& \sqrt{N_1} \frac{\Gamma^2 ( \frac{1}{k+2} )}{\Gamma ( \frac{2}{k+2})} x^{\frac{1-k}{2(k+2)}} (1-x)^{\frac{k+1}{2(k+2)}} \nonumber\\ &&\times \;_2F_1 \left( \frac{k+1}{k+2}, \frac{1}{k+2}; \frac{2}{k+2}; x \right) \nonumber\\ \mathcal{F}^{(4)}_2 (x) &=& \sqrt{N_2} \frac{\Gamma ( \frac{1}{k+2} )\Gamma ( \frac{2k+1}{k+2} )}{\Gamma (\frac{2k+2}{k+2})} [x (1-x)]^{\frac{k+1}{2(k+2)}} \nonumber\\ &&\times \;_2F_1 \left( \frac{k+1}{k+2}, \frac{2k+1}{k+2}; \frac{2(k+1)}{k+2}; x \right) \ .\end{eqnarray} To understand the physical content of these conformal blocks, we consider the $x\to 0$ limit. We observe that $\mathcal{F}^{(4)}_1 (x) \sim x^{-\frac{k-1}{2(k+2)}} [1 + \mathcal{O} (x)]$, whereas $\mathcal{F}^{(4)}_2 (x) \sim x^{\frac{k+1}{2(k+2)}} [1 + \mathcal{O} (x)]$. Comparing with the operator product expansion (OPE) $ \sigma (x) \sigma(0) \sim x^{-\frac{k-1}{2(k+2)}} \mathbb{I} + x^{ \frac{k+1}{2(k+2)}} \varepsilon (0) $ where $\varepsilon$ is defined in Table \ref{table:1}, it is evident that $\mathcal{F}^{(4)}_1$ and $\mathcal{F}^{(4)}_2$ have intermediate state $\mathbb{I}$ and $\varepsilon$, respectively. Schematically, they are given by the two diagrams shown in Figure \ref{fig:III-1}.
\begin{figure} \caption{Conformal blocks of the four-point function.} \label{fig:III-1} \end{figure} The normalization constants $N_\mu$ ($\mu =1,2$) are determined by comparing the expressions \eqref{eq:II-7} for $N=2$ and \eqref{eq:III-6} for the non-chiral amplitude $G^{(4)}$. Calculating the non-chiral amplitude can be avoided by using an argument based on monodromy transformations around $0$ and $1$. Under a monodromy transformation, we change bases. The conformal blocks in the new basis must provide a decomposition of the non-chiral amplitude of the same form \eqref{eq:III-6}. This leads to linear constraints that determine the normalization constants up to an overall multiplicative factor, which suffices for our application to quantum computation. After some algebra, we obtain \begin{equation} N_1 = \mathcal{N} \sin \frac{\pi }{k+2} \ , \ \ N_2 = \mathcal{N} \sin \frac{3 \pi }{k+2}\ . \end{equation} where $\mathcal{N}$ can be determined using \eqref{eq:II-7}, but is not needed for our purposes. For the four-point chiral amplitude, we derive two braiding matrices, $R_{12}$ and $R_{23}$, and a fusion matrix $R_{13}$, where $R_{ij}$ corresponds to the exchange of positions $\eta_i \leftrightarrow \eta_j$. These matrices are defined diagrammatically in Figure \ref{fig:III-2}. \begin{figure} \caption{ Exchange matrices of the four-point correlator. } \label{fig:III-2} \end{figure} The braiding matrix $R_{12}$ is diagonal because the two fields that we exchange fuse together. From the OPE, we deduce \begin{equation} \label{eq:III-10} R_{12}^{(4)} = \begin{pmatrix} e^{-i\pi \frac{k-1}{2(k+2)}} & 0 \\ 0 & e^{i\pi \frac{k+1}{2(k+2)}} \end{pmatrix} \;.\end{equation} The other two exchange matrices can be found using standard Hypergeometric and Gamma function identities. 
After some algebra, we obtain \begin{equation} \label{eq:III-11} R_{13}^{(4)} = \begin{pmatrix} \cos \theta_k & \sin \theta_k \\ \sin \theta_k & -\cos \theta_k \end{pmatrix} \;, \end{equation} where $\cos\theta_k = \frac{1}{2}\sec \frac{\pi}{k+2}$. The matrix $R_{23}$ can be deduced from \begin{equation} \label{eq:III-21} R_{23} = R_{13} R_{12} R^{-1}_{13} \ .\end{equation} We obtain \begin{equation} \label{eq:III-11a} R_{23}^{(4)} = \begin{pmatrix} e^{i \pi \frac{k-1}{2(k+2)}}\cos \theta_k & e^{-i \pi \frac{k+1}{2(k+2)}}\sin \theta_k \\ e^{-i \pi \frac{k+1}{2(k+2)}} \sin \theta_k & -e^{-i \pi \frac{3k+1}{2(k+2)}} \cos \theta_k \end{pmatrix} \;. \end{equation} As an example, consider the $k=2$ case, which corresponds to the critical Ising model. The diagonal braiding matrix becomes the phase $S$ gate (up to a phase), while the fusion matrix reduces to the Hadamard gate, \begin{equation} \label{eq:III-13} R_{12}^{(4)} = e^{-i\frac{\pi}{8}} \begin{pmatrix} {1} & {0} \\ {0} & i \end{pmatrix} \;\; , \;\; R_{13}^{(4)} = \frac{1}{\sqrt{2}} \begin{pmatrix} {1} & {1} \\ {1} & -{1} \end{pmatrix} \ . \end{equation} These matrices are not enough to achieve universal quantum computation \cite{bib:32} because we have no way to construct the phase $T$ gate using braidings. Universal quantum computation can be achieved for $k=3$, which corresponds to the tri-critical Ising model. We obtain the matrices that appear in the Fibonacci anyon model \begin{equation} \label{eq:III-14} R_{12}^{(4)} = \begin{pmatrix} e^{-i\frac{\pi}{5}} & 0 \\ 0 & e^{i\frac{2\pi}{5}} \end{pmatrix} \ , \; R_{13}^{(4)} = \begin{pmatrix} \frac{1}{\gamma} & \frac{1}{\sqrt{\gamma}} \\ \frac{1}{\sqrt{\gamma}} & -\frac{1}{\gamma} \end{pmatrix} \ , \end{equation} where $\gamma = \frac{\sqrt{5} +1}{2}$ is the golden ratio. The set \eqref{eq:III-14} is dense in $SU(2)$ \cite{bib:33}, leading to universal quantum computation.
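The matrices \eqref{eq:III-10} and \eqref{eq:III-11} lend themselves to a quick numerical check; the following NumPy sketch (ours) reproduces the Hadamard gate at $k=2$ and the golden-ratio entries at $k=3$:

```python
import numpy as np

def R12(k):
    """Diagonal braiding matrix of Eq. (III-10)."""
    return np.diag([np.exp(-1j * np.pi * (k - 1) / (2 * (k + 2))),
                    np.exp( 1j * np.pi * (k + 1) / (2 * (k + 2)))])

def R13(k):
    """Fusion matrix of Eq. (III-11), with cos(theta_k) = sec(pi/(k+2))/2."""
    c = 0.5 / np.cos(np.pi / (k + 2))
    s = np.sqrt(1.0 - c * c)
    return np.array([[c, s], [s, -c]])

# k = 2 (Ising): R12 is the S gate up to a phase, R13 is the Hadamard gate
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
assert np.allclose(R13(2), H)
assert np.allclose(R12(2), np.exp(-1j * np.pi / 8) * np.diag([1, 1j]))

# k = 3 (Fibonacci): R13 entries involve the golden ratio
gamma = (np.sqrt(5.0) + 1.0) / 2.0
assert np.allclose(R13(3), np.array([[1 / gamma, 1 / np.sqrt(gamma)],
                                     [1 / np.sqrt(gamma), -1 / gamma]]))

# Consistency of Eq. (III-21): R23 = R13 R12 R13^{-1} is unitary
R23 = R13(3) @ R12(3) @ np.linalg.inv(R13(3))
assert np.allclose(R23 @ R23.conj().T, np.eye(2))
```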
However, the minimal model $\mathcal{M} (5,4)$ cannot be used as a foundation for fault-tolerant quantum computation because it lacks a gapped state. In Section \ref{sec:V}, we will consider an alternative proposal using the coset $SU(2)_1^{\otimes 3}/ SU(2)_3$ that leads to universal quantum computation based on the braiding matrices \eqref{eq:III-14}, as well as to fault-tolerant quantum computation, since $SU(2)_1^{\otimes 3}/ SU(2)_3$ possesses a gapped state. \subsection{Five-point amplitudes} \begin{figure} \caption{Conformal blocks of the five-point function. In the case of the critical Ising model ($k=2$), the third conformal block vanishes since $\varepsilon \otimes \varepsilon = \mathbb{I}$.} \label{fig:III-3} \end{figure} Next, we consider the five-point chiral amplitude of four $\sigma$ fields and one $\varepsilon$ field. This is not an amplitude of the type \eqref{eq:II-10} that we use for quantum computation. However, it is needed for the six-point chiral amplitude of $\sigma$ fields. The correlator needs a single negative screening charge in order to obey the charge neutrality condition, \begin{equation} \label{eq:III-16} \mathcal{I}^{(5)}(\bm{\eta};w)= \left\langle \sigma_1 \sigma_2 \sigma_3 \sigma_4 \tilde{\varepsilon}_5 V_- (w) \right\rangle \ .
\end{equation} From the fusion rules \eqref{eq:II-4} we deduce that there are two (three) conformal blocks for $k=2$ ($ k \geq 3$), defined diagrammatically in Figure \ref{fig:III-3}, and in terms of contour integrals by \begin{equation} \label{eq:III-18} \mathcal{F}_\mu^{(5)} (\bm{\eta}) = \sqrt{N_\mu} \oint_{\mu} dw \mathcal{I}^{(5)}(\bm{\eta};w) \ .\end{equation} The normalization constants $N_\mu$ are evaluated using a monodromy argument as before, \begin{eqnarray} N_1 &=& \mathcal{N} \sin ^2\frac{2 \pi }{k+2} \ ,\nonumber \\ N_2 &=& \mathcal{N} \sin ^2 \frac{3 \pi }{k+2} \ ,\nonumber \\ N_3 &=& 8 \mathcal{N}\cos ^2 \frac{\pi }{k+2} \cos \frac{2 \pi }{k+2} \sin ^2 \frac{3 \pi }{k+2} \ , \end{eqnarray} up to an overall multiplicative constant $\mathcal{N}$ which is not needed for our purposes. \begin{figure} \caption{Exchange matrices of the five-point correlator. In our notation each channel $\mu$ is represented by a pair $(\mu_1, \mu_2)$. For the first channel we have $(\mathbb{I},\varepsilon)$, for the second $(\varepsilon, \mathbb{I})$, and the third $(\varepsilon,\varepsilon)$.} \label{fig:III-4} \end{figure} The braiding and fusion matrices for the five-point amplitude are depicted in Figure \ref{fig:III-4}. The braiding matrix $R_{12}$ is easily obtained from the OPE $ \sigma (\eta) \sigma(0) \sim \eta^{-\frac{k-1}{2(k+2)}} \mathbb{I} + \eta^{ \frac{k+1}{2(k+2)}} \varepsilon (0) $, \begin{equation} \label{eq:III-19} R_{12}^{(5)} = e^{-i\pi \frac{k-1}{2(k+2)}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{i\pi \frac{k}{k+2}} & 0 \\ 0 & 0& e^{i\pi \frac{k}{k+2}} \end{pmatrix} \;.\end{equation} The fusion matrix $R_{13}$ is found by converting five-point functions into four-point functions (see Appendix \ref{app:A} for details). 
We obtain \begin{equation} \label{eq:III-20} R_{13}^{(5)} = \begin{pmatrix} c_k & c_k& -\sqrt{d_k} \\ c_k & \frac{d_k \omega_k -c_k^3}{s_k^2} & \frac{ (\omega_k+c_k) \sqrt{d_k}}{s_k t_k} \\ -\sqrt{d_k} & \frac{ (\omega_k+c_k) \sqrt{d_k}}{s_k t_k} &\frac{\omega_k c_k -d_k}{s_k t_k} \end{pmatrix} \ , \end{equation} where $c_k = \cos\theta_k$, $d_k= -\cos 2\theta_k$, $s_k = \sin\theta_k$, $t_k = \tan\theta_k$, and $\omega_k = e^{i\pi \frac{3(k+1)}{2(k+2)}}$. The braiding matrix $R_{23}$ is deduced from \eqref{eq:III-21}. As an example, for the tricritical Ising model ($k=3$), we obtain \begin{equation} \label{eq:III-19f} R_{12}^{(5)} = \begin{pmatrix} e^{-i \frac{\pi}{5}} & 0 & 0 \\ 0 & e^{i \frac{2\pi}{5}} & 0 \\ 0 & 0& e^{i \frac{2\pi}{5}} \end{pmatrix} \;,\end{equation} \begin{equation} \label{eq:III-22} R_{13}^{(5)} = \begin{pmatrix} \gamma^{-1} & \gamma^{-1} & -\gamma^{-3/2} \\ \gamma^{-1} & -\frac{ \left({1+e^{i\frac{\pi}{5}}}\right)}{\gamma^2}& -\gamma^{-5/2} e^{i\frac{2\pi}{5}}\\ -\gamma^{-3/2} & -\gamma^{-5/2} e^{i\frac{2\pi}{5}}& - \frac{\left({\gamma^{-1} +\gamma e^{i\frac{\pi}{5}}}\right) }{\gamma^2} \end{pmatrix}\ ,\end{equation} and \begin{equation} \label{eq:III-23} R_{23}^{(5)} = \begin{pmatrix} e^{i\frac{\pi}{5}} \gamma^{-1} & e^{-i\frac{2\pi}{5}}\gamma^{-1} & -e^{-i\frac{2\pi}{5}} \gamma^{-3/2} \\ e^{-i\frac{2\pi}{5}}\gamma^{-1} & e^{i\frac{\pi}{5}} \gamma^{-1} & -e^{-i\frac{2\pi}{5}} \gamma^{-3/2} \\ -e^{-i\frac{2\pi}{5}} \gamma^{-3/2} & -e^{-i\frac{2\pi}{5}} \gamma^{-3/2} & e^{i\frac{\pi}{5}}-\gamma^{-2} \end{pmatrix}\ .\end{equation} These expressions are needed for the six-point amplitudes discussed next. \subsection{Six-point amplitudes} Next, we consider the amplitude involving six $\sigma$ fields. From the fusion rules \eqref{eq:II-4}, we know that there are four conformal blocks for $k=2$ and five conformal blocks for $k \geq 3$, as shown in Figure \ref{fig:III-5}. \begin{figure} \caption{Conformal blocks of the six-point function.
For $k=2$ the last conformal block vanishes.} \label{fig:III-5} \end{figure} Using the OPE $ \sigma (\eta_5) \sigma(\eta_6) \sim \eta_{56}^{-\frac{k-1}{2(k+2)}} \mathbb{I} + \eta_{56}^{ \frac{k+1}{2(k+2)}} \varepsilon (\eta_5) $ to expand near $\eta_6=\eta_5$, we notice two different subspaces, one for the $\frac{1-k}{2(k+2)}$ and one for the $\frac{k+1}{2(k+2)}$ powers of $\eta_{56}$. The first one contains $\mathcal{F}^{(6)}_1$ and $\mathcal{F}^{(6)}_2$ and is similar to the four-point amplitude, whereas the second one contains $\mathcal{F}^{(6)}_3$, $\mathcal{F}^{(6)}_4$ and $\mathcal{F}^{(6)}_5$ and is similar to the five-point amplitude. The exchange matrices corresponding to the exchanges $ \eta_1 \leftrightarrow \eta_2$, $ \eta_1 \leftrightarrow \eta_3$ and $ \eta_2 \leftrightarrow \eta_3$ can be found using the four-point and five-point matrices, \begin{equation} R^{(6)} = \left(\begin{array}{@{}c|c@{}} R^{(4)} & \bm{0} \\ \hline \bm{0} & R^{(5)} \end{array}\right) \ , \ \ R \in \{ R_{12}, R_{13}, R_{23} \} \ . \end{equation} For the critical Ising model $(k=2)$, the $4\times4$ exchange matrices have been studied in \cite{bib:34,bib:35,bib:36}. Confirming these results, we observe that the last conformal block decouples and the remaining $2\times2$ blocks correspond to a system of two qubits, \begin{equation} R^{(6)} = \left(\begin{array}{@{}c|c|c@{}} R^{(4)} & \bm{0} & \bm{0} \\ \hline \bm{0} & R^{(4)} & \bm{0} \\ \hline \bm{0} & \bm{0} & r \end{array}\right) \ , \ \ R \in \{ R_{12}, R_{13}, R_{23} \} \ , \end{equation} where $r$ is an irrelevant phase. These matrices are gates acting on two qubits. However, they do not lead to universal quantum computation. For $k\ge 3$, the exchange matrices form a sufficient set of gates for universal quantum computation on the five-dimensional space of conformal blocks. Although we focused the discussion on exchange matrices $R_{ij}$, $i,j=1,2,3$, the above method can be straightforwardly extended to include the point $\eta_4$.
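Returning to the five-point fusion matrix, Eq.\ \eqref{eq:III-20} can be cross-checked numerically against its tricritical specialization \eqref{eq:III-22} (an illustrative sketch added here, not part of the derivation; the helper names are ours, and the construction assumes $k \geq 3$ so that $d_k > 0$):

```python
import numpy as np

gamma = (np.sqrt(5) + 1) / 2  # golden ratio

def R13_5(k):
    """Five-point fusion matrix, Eq. (III-20), for k >= 3."""
    c = 0.5 / np.cos(np.pi / (k + 2))   # cos(theta_k)
    s = np.sqrt(1 - c**2)               # sin(theta_k)
    d = 1 - 2 * c**2                    # -cos(2 theta_k), positive for k >= 3
    t = s / c                           # tan(theta_k)
    w = np.exp(3j * np.pi * (k + 1) / (2 * (k + 2)))   # omega_k
    return np.array([
        [c, c, -np.sqrt(d)],
        [c, (d * w - c**3) / s**2, (w + c) * np.sqrt(d) / (s * t)],
        [-np.sqrt(d), (w + c) * np.sqrt(d) / (s * t), (w * c - d) / (s * t)],
    ])

# k = 3: golden-ratio form of Eq. (III-22)
e = np.exp
R13_tricritical = np.array([
    [1 / gamma, 1 / gamma, -gamma**-1.5],
    [1 / gamma, -(1 + e(1j * np.pi / 5)) / gamma**2,
     -gamma**-2.5 * e(2j * np.pi / 5)],
    [-gamma**-1.5, -gamma**-2.5 * e(2j * np.pi / 5),
     -(1 / gamma + gamma * e(1j * np.pi / 5)) / gamma**2],
])
assert np.allclose(R13_5(3), R13_tricritical)
```

In particular, $\cos\theta_3 = \gamma^{-1}$ and $-\cos 2\theta_3 = \gamma^{-3}$, which is where the golden-ratio entries originate.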
To obtain exchange matrices involving the points $\eta_5$ or $\eta_6$, we need to consider different limits that reduce the six-point amplitude to different four- and five-point amplitudes. For example, to calculate the exchange matrix $R_{15}$, we can expand near $\eta_4=\eta_3$ using the OPE $ \sigma (\eta_4) \sigma(\eta_3) \sim \eta_{34}^{-\frac{k-1}{2(k+2)}} \mathbb{I} + \eta_{34}^{ \frac{k+1}{2(k+2)}} \varepsilon (\eta_3) $. We obtain two distinct subspaces, one corresponding to the four-point results obtained earlier, but with conformal blocks $\mathcal{F}_1^{(6)}$ and $\mathcal{F}_4^{(6)}$, and the other corresponding to a five-point amplitude with conformal blocks $\mathcal{F}_2^{(6)}$, $\mathcal{F}_3^{(6)}$, and $\mathcal{F}_5^{(6)}$. All other exchange matrices are constructed similarly. \subsection{Higher-point amplitudes} The dimensionality of the Hilbert space (number of conformal blocks) depends on both $N$ and $k$. For the four-point amplitude $(N=2)$, we have two conformal blocks for all $k$, due to the fusion rule $\sigma\times \sigma \sim \varepsilon$ (Eq.\ \eqref{eq:II-4} with $s=s'=2$). For the six-point amplitude $(N=3)$, we have four conformal blocks for $k=2$ and five conformal blocks for all other cases $(k\geq 3)$. Using the fusion rules \eqref{eq:II-4}, we can find the number of conformal blocks for higher-point amplitudes. In particular, for $N=4$, we have 8 conformal blocks for $k=2$, 13 for $k=3$, and 14 for all other cases $(k\geq 4)$. For $k=2$ (critical Ising model), the dimensionality of the Hilbert space is $2^{N-1}$, whereas for $k=3$ (tricritical Ising model), it follows the Fibonacci sequence. General expressions for other $k\ge 4$ can also be found using the fusion rules. Although higher-point amplitudes cannot be explicitly calculated, we can still obtain the exchange matrices by following the procedure discussed above for the six-point amplitude.
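The block counts quoted above follow from a fusion-path count that is easy to automate (our own illustrative sketch, not part of the text): each fusion with $\sigma = \Phi_{(1,2)}$ shifts the Kac label $s$ of the intermediate field $\Phi_{(1,s)}$ by $\pm 1$ within $1 \leq s \leq k+1$, so the number of conformal blocks of a $2N$-point $\sigma$ amplitude equals the number of such walks of length $2N$ that start and end at the identity, $s=1$:

```python
def num_blocks(k, N):
    """Conformal blocks of a 2N-point sigma amplitude in M(k+2, k+1):
    walks of length 2N on the labels s = 1, ..., k+1, starting and
    ending at s = 1, with steps s -> s +- 1."""
    v = [0] * (k + 1)
    v[0] = 1                      # start at the identity, s = 1
    for _ in range(2 * N):
        w = [0] * (k + 1)
        for i, x in enumerate(v):
            if i > 0:
                w[i - 1] += x     # step s -> s - 1
            if i < k:
                w[i + 1] += x     # step s -> s + 1
        v = w
    return v[0]

assert num_blocks(2, 2) == 2 and num_blocks(3, 2) == 2          # four-point
assert num_blocks(2, 3) == 4 and num_blocks(3, 3) == 5          # six-point
assert (num_blocks(2, 4), num_blocks(3, 4), num_blocks(4, 4),
        num_blocks(5, 4)) == (8, 13, 14, 14)                    # eight-point
assert all(num_blocks(2, N) == 2**(N - 1) for N in range(2, 8))     # Ising
assert [num_blocks(3, N) for N in range(2, 7)] == [2, 5, 13, 34, 89]  # Fibonacci
```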
For example, to find the matrices $R_{12}$, $R_{13}$ and $R_{23}$ for the eight-point amplitude, we will work in the limit $\eta_8\to \eta_7$ and $\eta_6\to \eta_5$. We obtain the exchange matrix for the eight-point function as a block-diagonal matrix, with each block corresponding to a four- or five-point function. This procedure can be generalized to arbitrary $N$. \section{Coset Amplitudes} \label{sec:coamp} Correlators in the coset CFT $SU(2)_1^{\otimes k}/SU(2)_k$ can be factorized \cite{bib:37,bib:37a} into products of correlators of the $SU(2)_1$ and $\overline{SU(2)_k}$ WZW models. The primary fields in these WZW models are $\chi_m^{[i]j}$ ($i=1,\dots, k$) in $SU(2)_1$ and $\bar{\tau}_m^j$ in $\overline{SU(2)_k}$. We are interested in the case $j=\frac{1}{2}$. The conformal weights of these primary fields are \begin{equation}\label{eq:35} h_\chi = \frac{1}{4}\ , \ \ h_{\bar{\tau}} = -\frac{3}{4(k+2)} \ . \end{equation} To simplify the notation, we will drop the index $j$ and use $m=\pm$ to denote $m= \pm\frac{1}{2}$. Then correlators are of the general form \begin{eqnarray} \label{eq:V-6} X^{[i]}_{m_1 \dots m_N}&=& \braket{\chi^{[i]}_{m_1}(\eta_1) \cdots \chi^{[i]}_{m_N}(\eta_N)} \ , \ \ i=1,\dots, k \nonumber \\ Y_{\mu ;m_1 \dots m_N}&=& \braket{\bar{\tau}_{m_1}(\eta_1)\cdots \bar{\tau}_{m_N}(\eta_N)} \ , \end{eqnarray} where $\mu$ labels the corresponding conformal block. The primary field $\sigma = \Phi_{(1,2)}$ can be constructed using \begin{equation} \label{eq:V-3a} \sigma^{[i]} = \chi^{[i]}_+ \bar{\tau}_+ + \chi^{[i]}_- \bar{\tau}_- \ . \end{equation} Evidently, this does not lead to a unique definition since we can consider any of the $SU(2)_1$ factors in the coset to construct $\sigma$. We will identify $\sigma \equiv \sigma^{[k]}$. Using \eqref{eq:35}, we obtain its conformal weight $h_\sigma = h_\chi + h_{\bar\tau} = \frac{k-1}{4(k+2)}$, in agreement with the minimal model result in Table \ref{table:1}.
Agreement with the minimal model $\mathcal{M} (k+2,k+1)$ is expected, because the latter can be constructed from the coset $SU(2)_{k-1}\times SU(2)_1/SU(2)_k$. The field $\sigma$ in the minimal model is also given by \eqref{eq:V-3a} with $\chi_\pm$ in the (single) $SU(2)_1$ factor in the coset $SU(2)_{k-1}\times SU(2)_1/SU(2)_k$. Therefore, correlators of the $\sigma$ field agree in the two CFTs (minimal model and coset $SU(2)^{\otimes k}/SU(2)_k$). The two chiral conformal blocks for the four-point amplitude $\braket{\sigma_1 \sigma_2\sigma_3 \sigma_4 }$ are found from \begin{eqnarray} \label{eq:V-5x} \mathcal{F}^{(4)}_\mu(\bm{\eta})&=& \sqrt{N_{\mu}} \sum_{m_1 \dots m_4} X_{m_1 \dots m_4} Y_{\mu;m_1 \dots m_4} \ , \end{eqnarray} where we dropped the index $[k]$ that does not affect the calculation. To evaluate these correlators, we will use the free field representation of $SU(2)_k$ \cite{bib:38,bib:39,bib:40}. The primary fields are defined in terms of a massless free boson $\varphi$ and a $( \beta, \gamma)$ bosonic ghost system. Correlators are evaluated using the Coulomb gas formalism. Charge neutrality is enforced using screening charges and conjugate fields, as needed. It leads to the constraint that the total magnetic number vanishes ($\sum m_i = 0$). The sum in \eqref{eq:V-5x} reduces to \begin{eqnarray} \label{eq:V-7} \mathcal{F}^{(4)}_\mu(\bm{\eta})=&& 2 \sqrt{N_{\mu}} [ X_{+--+} Y_{\mu;+--+} + X_{--++} Y_{\mu;--++} \nonumber\\ && + X_{-+-+} Y_{\mu;-+-+} ] \ . \end{eqnarray} As before, a global conformal transformation fixes three points, $\eta_1 \rightarrow 0$, $\eta_2 \rightarrow x$, $\eta_3 \rightarrow 1$ and $\eta_4 \rightarrow \infty$. See Appendix \ref{app:B} for a detailed calculation of $SU(2)_k$ correlators. 
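The reduction of the full sum \eqref{eq:V-5x} to the three terms in \eqref{eq:V-7} is a small combinatorial statement that can be checked by enumeration (an illustrative sketch, not part of the derivation; we assume, as the overall factor of $2$ in \eqref{eq:V-7} indicates, that configurations related by the global sign flip $m_i \to -m_i$ contribute equally):

```python
from itertools import product

# Charge neutrality keeps only sign patterns (m_1, ..., m_4) with zero sum.
neutral = [m for m in product((+1, -1), repeat=4) if sum(m) == 0]
assert len(neutral) == 6

# The global flip m -> -m pairs the six patterns into three orbits, matching
# the three representatives +--+, --++, -+-+ kept in Eq. (V-7).
orbits = {frozenset([m, tuple(-x for x in m)]) for m in neutral}
assert len(orbits) == 3
reps = [(+1, -1, -1, +1), (-1, -1, +1, +1), (-1, +1, -1, +1)]
assert all(any(r in orbit for orbit in orbits) for r in reps)
```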
For $SU(2)_1$ we obtain the functions \begin{equation} \label{eq:V-8} X_{+--+} = \mathcal{C} \sqrt{\frac{1-x}{x}} \ , \ X_{--++} = \mathcal{C} \sqrt{\frac{x}{1-x}} \ , \end{equation} where $\mathcal{C} = \frac{4 \pi^2 }{\Gamma \left(\frac{1}{3}\right)^3}$, and \begin{equation}\label{eq:39} X_{-+-+} = - X_{+--+} - X_{--++} \ , \end{equation} by setting $q=1$ in the corresponding expressions \eqref{eq:B5} - \eqref{eq:B7} \cite{bib:25}. Similarly, we obtain the functions for the two $\overline{SU(2)_k}$ conformal blocks by setting $q=-k-4$ in the corresponding expressions \eqref{eq:B5} - \eqref{eq:B10} \cite{bib:25}, \begin{widetext} \begin{eqnarray} \label{eq:V-10} Y_{1;+--+} &=& -\frac{\Gamma \left(\frac{1}{k+2}\right) \Gamma \left(\frac{k+3}{k+2}\right)}{\Gamma \left(\frac{k+4}{k+2}\right)} (1-x)^{-\frac{1}{2 k+4}} x^{\frac{3}{2 k+4}} \;_2F_1\left(-\frac{1}{k+2},\frac{1}{k+2};\frac{k+4}{k+2};x\right) \nonumber \ , \\ Y_{1;--++} &=& \frac{\Gamma \left(\frac{1}{k+2}\right) \Gamma \left(\frac{k+3}{k+2}\right) }{(k+4) \Gamma \left(\frac{k+4}{k+2}\right)} (1-x)^{-\frac{1}{2 k+4}} x^{\frac{2 k+7}{2 k+4}} \;_2F_1\left(\frac{k+1}{k+2},\frac{k+3}{k+2};\frac{2 (k+3)}{k+2};x\right) \nonumber \ ,\\ Y_{2;+--+} &=& \frac{ \Gamma \left(\frac{1}{k+2}\right) \Gamma \left(-\frac{3}{k+2}\right) }{2 \Gamma \left(-\frac{2}{k+2}\right)} ((1-x) x)^{-\frac{1}{2 k+4}} \;_2F_1\left(-\frac{3}{k+2},-\frac{1}{k+2};\frac{k}{k+2};x\right) \ , \nonumber \\ Y_{2;--++} &=&-\frac{ \Gamma \left(-\frac{3}{k+2}\right) \Gamma \left(\frac{1}{k+2}\right) }{\Gamma \left(-\frac{2}{k+2}\right)} ((1-x) x)^{-\frac{1}{2 k+4}} \;_2F_1\left(-\frac{3}{k+2},-\frac{1}{k+2};-\frac{2}{k+2};x\right) \ , \nonumber\\ Y_{\mu;-+-+} &=& - Y_{\mu;+--+} - Y_{\mu;--++} \ , \ \ \mu=1,2 \ . 
\end{eqnarray} \end{widetext} After some algebra involving Hypergeometric function identities, we arrive at compact explicit expressions for the conformal blocks \eqref{eq:V-5x}, \begin{eqnarray} \label{eq:V-12} \mathcal{F}^{(4)}_1 &=& - \sqrt{N_1} \frac{32 \pi ^2 \Gamma^2 \left(\frac{1}{k+2}\right) }{\Gamma^3 \left(\frac{1}{3}\right) \Gamma \left(\frac{2}{k+2}\right)} x^{\frac{1-k}{2 k+4}} (1-x)^{\frac{k+1}{2 k+4}} \nonumber \\ && \times \, _2F_1\left(\frac{1}{k+2},\frac{k+1}{k+2};\frac{2}{k+2};x\right) \ , \nonumber \\ \mathcal{F}^{(4)}_2 &=& \sqrt{N_2} \frac{12 \pi ^2 (1-k) \Gamma \left(-\frac{3}{k+2}\right) \Gamma \left(\frac{1}{k+2}\right) }{k \Gamma^3 \left(\frac{1}{3}\right) \Gamma \left(-\frac{2}{k+2}\right)} x^{\frac{k+1}{2 k+4}} \nonumber \\ && \times \, (1-x)^{\frac{k+1}{2 k+4}} _2F_1\left(\frac{k+1}{k+2},\frac{2 k+1}{k+2};\frac{2 (k+1)}{k+2};x\right)\ , \nonumber\\ \end{eqnarray} in agreement with our earlier result \eqref{eq:III-8}. It follows that the exchange matrices one obtains from the coset construction coincide with their counterparts in the corresponding minimal model. It is instructive to confirm this using Eq.\ \eqref{eq:V-7} in order to obtain exchange matrices for general correlators in the coset CFT $SU(2)_1^{\otimes k}/SU(2)_k$. As an example, under the transformation $x\to 1-x$, it is easy to see that $X_{\bm{m}} \leftrightarrow X_{\bm{m}'}$ and $X_{\bm{m}''} \to X_{\bm{m}''}$, where $\bm{m} = +--+$, $\bm{m}' = --++$, and $\bm{m}'' = -+-+$. Also, using Hypergeometric identities, we obtain \begin{equation}\label{eq:42} \bm{Y}_{\bm{m}} \to R_{13}^{(4)} \bm{Y}_{\bm{m}'} \ , \ \bm{Y}_{\bm{m}'} \to R_{13}^{(4)} \bm{Y}_{\bm{m}} \ , \ \bm{Y}_{\bm{m}''} \to R_{13}^{(4)} \bm{Y}_{\bm{m}''} \end{equation} where $\bm{Y}_{\bm{m}} = \left( \begin{array}{c} \sqrt{N_1} Y_{1;\bm{m}} \\ \sqrt{N_2} Y_{2;\bm{m}} \end{array} \right)$, and $R_{13}^{(4)}$ is defined in \eqref{eq:III-11}. 
It follows from \eqref{eq:V-7} that the conformal blocks transform under \begin{equation}\label{eq:43} \bm{\mathcal{F}}^{(4)} \to R_{13}^{(4)} \bm{\mathcal{F}}^{(4)} \ , \ \ \bm{\mathcal{F}}^{(4)} = \left( \begin{array}{c} \mathcal{F}_1^{(4)} \\ \mathcal{F}_2^{(4)} \end{array} \right) \end{equation} as expected. \section{Braiding conformal blocks via anyon models} \label{sec:IV} The monodromy representations from braiding conformal blocks can also be computed using the corresponding anyon models of chiral minimal models, which are the representation categories of the chiral algebras of minimal models. In this Section, we recompute some of the representations obtained in earlier sections using the graphical calculus of anyon models. \subsection{Anyon models of minimal models} As discussed above, chiral minimal models $\mathcal{M}(k+2,k+1)$ ($k\geq 2$) can be constructed as the coset $\frac{SU(2)_{k-1}\times SU(2)_1}{SU(2)_{k}}$, where $SU(2)_q$ is the $SU(2)$ WZW model at level $q$. We will also use $SU(2)_q$ to denote the corresponding anyon models of chiral WZW $SU(2)_q$ CFTs. The anyons are sometimes labeled by integers $0, 1, \ldots, q$, which are twice the spin, and fusion rules in these labels are \begin{equation} j_1 \otimes j_2 = \sum_{j \stackrel{2}{=} |j_1 - j_2|}^{\min(j_1+j_2, 2q-j_1-j_2)} j, \end{equation} where $\stackrel{2}{=}$ denotes incrementing the summation variable by $2$, and twists \begin{equation} \theta_j = e^{\pi i \frac{j\pn{j+2}}{2(q+2)}}. \end{equation} As discussed in Section \ref{sec:II}, the minimal model $\M(k+2,k+1)$ has primary fields $\Phi_{(r,s)}$ with fusion rules given by Eq.\ \eqref{eq:fusion}. \begin{lemma}\label{lemma:k-r} In $SU(2)_q$, $q \otimes s = r$ if and only if $s = q - r$. Moreover, when $s \neq q - r$, the product $q \otimes s$ contains no $r$ term.
\end{lemma} \begin{proof} \begin{enumerate} \item[($\Leftarrow$)] Observe that \[ q \otimes (q-r) = \sum_{j \stackrel{2}{=} r}^{\min(2q-r, r)} j = r \ , \] since $\min(2q-r,r) = r$, for all $r = 0, \ldots, q$. \item[($\Rightarrow$)] We have \[ q \otimes s = \sum_{j \stackrel{2}{=} q - s}^{\min(q+s, q-s)} j. \] If $s < q - r$, then $q - s > r$, and there is no $r$ term in the product $q \otimes s$. If $s > q - r$, then $q - s < r$, so $\min(q+s,q-s) < r$, and there is no $r$ term in the product $q \otimes s$. \end{enumerate} \end{proof} \begin{prop}\label{prop:fusionRules} If $\B$ is the anyon model $\B= SU(2)_{k-1} \times SU(2)_1 \times \overline{SU(2)_{k}}$ and $\A = 000 + (k-1)1k$, then $\A$ has a condensable algebra structure. The condensed category is $\B_{\A} = \B_0 \oplus \B_1$, where the deconfined part $\B_0$ has the same fusion rules as the minimal model $\M(k+2,k+1)$. The twists of the anyons of $\B_0$ agree with those of the corresponding ones in the minimal model $\M(k+2,k+1)$. \end{prop} \begin{proof} For each $r = 0, \ldots, k-1$ and $t = 0, \ldots, k$, we may uniquely choose $s = 0$ or $s = 1$ so that $r+s+t$ is even. Then, we may identify $rst \sim \Phi_{(r,t)}$ in $\M(k+2,k+1)$. We have \[ rst \otimes mnp = \sum_{j \stackrel{2}{=} |r-m|}^{\min(r+m, 2k-2-r-m)} \sum_{l \stackrel{2}{=} |t-p|}^{\min(t+p, 2k-t-p)} jsl, \] where $s$ is chosen to make $j+s+l$ even, and \[ \Phi_{(r,t)} \otimes \Phi_{(m,p)} = \sum_{j \stackrel{2}{=} |m-r|}^{\min(m+r, 2k-2-m-r)} \sum_{l \stackrel{2}{=} |p-t|}^{\min(p+t, 2k-p-t)} \Phi_{(j,l)}. \] \end{proof} The conformal weights are given by Eq.\ \eqref{eq:II-3}. \footnote{Page 240 of \cite{bib:28} gives another formula for $h$ that seems to suggest $k+1$ and $k+2$ should be switched. The choice we made here makes the twists agree.
Possibly relevant is Eq.\ (7.36) on page 209.} \subsection{Braiding universality of minimal models} The anyon model of the tricritical Ising model $\mathcal{M}(5,4)$ is the direct product of the Ising anyon model with the complex conjugate of the Fibonacci anyon model. The anyon types of the Ising theory are usually denoted as $1,\sigma,\psi$, where $\sigma$ is the non-abelian anyon, while the anyon types of the Fibonacci model are $1,\tau$. Using the conformal weights $h$, we can identify the corresponding anyon of the primary field $\Phi_{(1,2)}$ with $h=\frac{1}{10}$ as $\psi \boxtimes \bar{\tau}$, and $\Phi_{(1,3)}$ as $1\boxtimes \bar{\tau}$. Because $\psi$ is a fermion, the monodromy representations from braiding $\psi \boxtimes \bar{\tau}$ are equivalent to the braidings of $\bar{\tau}$ up to phases. The Fibonacci anyon $\tau$ is universal for quantum computation by braiding alone \cite{bib:2}. Since the monodromy representations obtained by braiding its complex conjugate $\bar{\tau}$ are simply the complex conjugates, $\bar{\tau}$ is also universal; it follows that $\psi \boxtimes \bar{\tau}$ is universal for quantum computation by braiding alone as well. From our identification above, we can compute the monodromy representations of conformal blocks from braid group representations of Fibonacci anyons up to overall phase factors. In the graphical calculus of anyon models, conformal blocks are represented as fusion trees labeled by anyons. The labeled trees in earlier sections can be directly translated into the fusion trees in anyon language. To compute braid group representations, we select a basis of fusion trees and compute the representatives of the braid group generators. For example, we may compute the representation of $B_4$, the braid group on four strands, in the following orthonormal basis of Fibonacci fusion trees.
\[ \bigbrace{ \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {\small $1$}; \node[right] at (1,-1.3) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; }, \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {$\tau$}; \node[right] at (1,-1.3) {\small $1$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; }, \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {$\tau$}; \node[right] at (1,-1.3) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; } } \] Given a braid, we use the Fibonacci $F$-symbols and $R$-symbols to write the result of braiding each of these basis vectors as a linear combination of the basis itself. Thus each braid is assigned a matrix which is the change of basis matrix from a braided basis to this unbraided one. The two generators $\sigma_1,\sigma_3$ of $B_4$ require only a single $R$ move to untangle. 
The computation for the generator $\sigma_2$ goes as follows. {\allowdisplaybreaks \begin{align*} \stikz{ \node (X) at (0,-0.875) {}; \pic[braid/number of strands=4,braid/anchor=1-1,braid/width=0.5cm,braid/height=0.5cm] {braid={s_2^{-1}}}; \node[left] at (0,-0.75) {$\tau$}; \node[right] at (0.5,-0.75) {$\tau$}; \node[right] at (1,-0.75) {$\tau$}; \node[right] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {\small $1$}; \node[right] at (1,-1.3) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[right] at (0.75,-1.75) {$\tau$}; } &= F_{\tau;\tau\tau}^{1\tau\tau} \stikz{ \node (X) at (0,-0.875) {}; \pic[braid/number of strands=4,braid/anchor=1-1,braid/width=0.5cm,braid/height=0.5cm] {braid={s_2^{-1}}}; \node[left] at (0,-0.75) {$\tau$}; \node[right] at (0.5,-0.75) {$\tau$}; \node[right] at (1,-0.75) {$\tau$}; \node[right] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (0.5,-1.25); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.375,-1.15375) {\small $1$}; \node[left] at (0.625,-1.4375) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[right] at (0.75,-1.75) {$\tau$}; } \\ = F_{\tau;11}^{\tau\tau\tau} &\stikz{ \node (X) at (0,-0.875) {}; \pic[braid/number of strands=4,braid/anchor=1-1,braid/width=0.5cm,braid/height=0.5cm] {braid={s_2^{-1}}}; \node[left] at (0,-0.75) {$\tau$}; \node[left] at (0.5,-0.75) {$\tau$}; \node[right] at (1,-0.75) {$\tau$}; \node[right] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.75,-1); \draw (1,-0.75) -- (0.5,-1.25); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[right] at (0.625,-1.15375) {\small $1$}; \node[left] at (0.625,-1.4375) {$\tau$}; 
\draw (0.75,-1.5) -- (0.75,-1.75); \node[right] at (0.75,-1.75) {$\tau$}; } + F_{\tau;\tau1}^{\tau\tau\tau} \stikz{ \node (X) at (0,-0.875) {}; \pic[braid/number of strands=4,braid/anchor=1-1,braid/width=0.5cm,braid/height=0.5cm] {braid={s_2^{-1}}}; \node[left] at (0,-0.75) {$\tau$}; \node[left] at (0.5,-0.75) {$\tau$}; \node[right] at (1,-0.75) {$\tau$}; \node[right] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.75,-1); \draw (1,-0.75) -- (0.5,-1.25); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[right] at (0.625,-1.15375) {$\tau$}; \node[left] at (0.625,-1.4375) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[right] at (0.75,-1.75) {$\tau$}; } \\ = F_{\tau;11}^{\tau\tau\tau} &R_1^{\tau\tau} \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.75,-1); \draw (1,-0.75) -- (0.5,-1.25); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[right] at (0.625,-1.15375) {\small $1$}; \node[left] at (0.625,-1.4375) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; } + F_{\tau;\tau1}^{\tau\tau\tau} R_{\tau}^{\tau\tau} \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.75,-1); \draw (1,-0.75) -- (0.5,-1.25); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[right] at (0.625,-1.15375) {$\tau$}; \node[left] at (0.625,-1.4375) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; } \\ &\hspace{-2.5cm}= \Big( F_{\tau;11}^{\tau\tau\tau} R_1^{\tau\tau} 
\pn{F_{\tau}^{\tau\tau\tau}}^{-1}_{11} + F_{\tau;\tau1}^{\tau\tau\tau} R_{\tau}^{\tau\tau} \pn{F_{\tau}^{\tau\tau\tau}}^{-1}_{1\tau} \Big) \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {\small $1$}; \node[right] at (1,-1.3) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; } \\ \qquad + \Big( F_{\tau;11}^{\tau\tau\tau} &R_1^{\tau\tau} \pn{F_{\tau}^{\tau\tau\tau}}^{-1}_{\tau1} + F_{\tau;\tau1}^{\tau\tau\tau} R_{\tau}^{\tau\tau} \pn{F_{\tau}^{\tau\tau\tau}}^{-1}_{\tau\tau} \Big) \\ &\hspace{-1.75cm}\left( \pn{F_{\tau}^{\tau\tau\tau}}^{-1}_{1\tau} \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {$\tau$}; \node[right] at (1,-1.3) {\small $1$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; }\right. 
\left.+ \pn{F_{\tau}^{\tau\tau\tau}}^{-1}_{\tau\tau} \stikz{ \node (X) at (0,-1.25) {}; \node[above] at (0,-0.75) {$\tau$}; \node[above] at (0.5,-0.75) {$\tau$}; \node[above] at (1,-0.75) {$\tau$}; \node[above] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {$\tau$}; \node[right] at (1,-1.3) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[below] at (0.75,-1.75) {$\tau$}; } \right) \\ &= \mat{\gamma^{-1}e^{4\pi i/5} \\ \gamma^{-1}e^{-3\pi i/5} \\ -\gamma^{-3/2}e^{-3\pi i/5}}, \\ \stikz{ \node (X) at (0,-0.875) {}; \pic[braid/number of strands=4,braid/anchor=1-1,braid/width=0.5cm,braid/height=0.5cm] {braid={s_2^{-1}}}; \node[left] at (0,-0.75) {$\tau$}; \node[right] at (0.5,-0.75) {$\tau$}; \node[right] at (1,-0.75) {$\tau$}; \node[right] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {$\tau$}; \node[right] at (1,-1.3) {$1$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[right] at (0.75,-1.75) {$\tau$}; } &= \mat{\gamma^{-1}e^{-3\pi i/5} \\ \gamma^{-1}e^{4\pi i/5} \\ -\gamma^{-3/2}e^{-3\pi i/5}}, \\ \stikz{ \node (X) at (0,-0.875) {}; \pic[braid/number of strands=4,braid/anchor=1-1,braid/width=0.5cm,braid/height=0.5cm] {braid={s_2^{-1}}}; \node[left] at (0,-0.75) {$\tau$}; \node[right] at (0.5,-0.75) {$\tau$}; \node[right] at (1,-0.75) {$\tau$}; \node[right] at (1.5,-0.75) {$\tau$}; \draw (0,-0.75) -- (0.25,-1); \draw (0.5,-0.75) -- (0.25,-1); \draw (1,-0.75) -- (1.25,-1); \draw (1.5,-0.75) -- (1.25,-1); \draw (0.25,-1) -- (0.75,-1.5); \draw (1.25,-1) -- (0.75,-1.5); \node[left] at (0.5,-1.3) {$\tau$}; \node[right] at (1,-1.3) {$\tau$}; \draw (0.75,-1.5) -- (0.75,-1.75); \node[right] at 
(0.75,-1.75) {$\tau$}; } &= \mat{-\gamma^{-3/2}e^{-3\pi i/5} \\ -\gamma^{-3/2}e^{-3\pi i/5} \\ \gamma^{-1}e^{3\pi i/5} - \gamma^{-3}}. \end{align*}} Thus, the generator $\sigma_2$ has a representation \begin{equation} \mat{\gamma^{-1}e^{4\pi i/5} & \gamma^{-1}e^{-3\pi i/5} & -\gamma^{-3/2}e^{-3\pi i/5} \\ \gamma^{-1}e^{-3\pi i/5} & \gamma^{-1}e^{4\pi i/5} & -\gamma^{-3/2}e^{-3\pi i/5} \\ -\gamma^{-3/2}e^{-3\pi i/5} & -\gamma^{-3/2}e^{-3\pi i/5} & \gamma^{-1}e^{3\pi i/5} - \gamma^{-3}} \ , \end{equation} in agreement with our earlier result \eqref{eq:III-23}. \section{Fault-tolerant Quantum Computation} \label{sec:V} For quantum computation, we need to map the conformal blocks to quantum states. Then the braiding matrices become quantum gates. The conformal blocks $\mathcal{F}_\mu (\bm{\eta})$ involving $2N$ primary fields $ \sigma = \Phi_{(1,2)}$ (Eq.\ \eqref{eq:II-9}) cannot be interpreted as wavefunctions because of their singularities. There are two types of singularities: branch cuts and poles. To define a wavefunction, we insert into the correlator $2M$ fields $\psi$ that obey abelian fusion rules, at positions $\bm{z} = (z_1, \dots, z_{2M})$. These are the coordinates of the wavefunction, whereas $\bm{\eta} = (\eta_1, \dots, \eta_{2N})$ are treated as parameters. Thus, we eliminate branch cuts associated with $\bm{\eta}$. The correlator still has poles at $\bm{z}$. To eliminate them, we introduce the Jastrow factor $\mathcal{J}$ that has zeroes at the positions of these poles, canceling the remaining singularities.
Therefore, we are led to consider the wavefunction \begin{equation}\label{eq:V-1} \Psi_{\mu ; \bm{\eta}} (\bm{z}) \propto \mathcal{J} (\bm{\eta} ; \bm{z}) \mathcal{F}_\mu (\bm{\eta} ; \bm{z}) , \end{equation} defined as a product of two chiral $(2N+2M)$-point amplitudes, one (to be specified) determining $\mathcal{J}$, and another one involving the $\sigma$ and $\psi$ fields, \begin{equation}\label{eq:42} \mathcal{F}_\mu^{(2N,2M)} (\bm{\eta} ; \bm{z}) = \braket{\sigma_1 \cdots \sigma_{2N} \psi_1 \cdots \psi_{2M}} \ , \end{equation} where $\sigma_j = \sigma (\eta_j)$ and $\psi_j = \psi (z_j)$. Braiding matrices act as unitary transformations mixing the states $\Psi_{\mu ; \bm{\eta}}$, as long as all conformal blocks yield states in the degenerate vacuum of the system. This leads to universal quantum computation for $k>2$. Since $\psi$ obeys Abelian fusion rules, the amplitudes $\braket{\sigma_1 \cdots \sigma_{2N} \psi_1 \cdots \psi_{2M}}$ and $\braket{\sigma_1 \cdots \sigma_{2N}}$ have the same number of conformal blocks. Diagrammatically, they are shown in Figure \ref{fig:VI-1} for four insertions of $\sigma$ ($N=2$). Moving the $\psi$ insertions to different positions does not affect the conformal blocks. \begin{figure} \caption{The two conformal blocks for $N=2$ and arbitrary $M$.} \label{fig:VI-1} \end{figure} As shown in \cite{bib:20} using the plasma analogy, to construct a gapped state that will lead to fault-tolerant quantum computation, the dimension of $\psi$ must be less than $1$. In the critical Ising model $\mathcal{M}(4,3)$, this requirement is satisfied because $\psi$ can be chosen as the primary field $\Phi_{(1,3)}$ which has conformal dimension $h_{(1,3)} = \frac{1}{2} < 1$. The wavefunction \eqref{eq:V-1} is the MR wavefunction \cite{bib:14}. Unfortunately, braiding alone does not lead to universal quantum computation.
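As a quick numerical sanity check (not part of the derivation), the explicit $\sigma_2$ matrix quoted at the end of the previous section can be verified to be unitary, as required for it to act as a quantum gate. The sketch below assumes $\gamma$ denotes the golden ratio $(1+\sqrt{5})/2$, as the Fibonacci fusion structure suggests:

```python
import numpy as np

# Illustrative check: the 3x3 sigma_2 braid representation should be unitary.
# Assumption: gamma is the golden ratio (1 + sqrt(5)) / 2.
g = (1 + np.sqrt(5)) / 2

def ph(x):
    """Phase e^{i*pi*x}."""
    return np.exp(1j * np.pi * x)

B = np.array([
    [g**-1 * ph(4/5),     g**-1 * ph(-3/5),     -g**-1.5 * ph(-3/5)],
    [g**-1 * ph(-3/5),    g**-1 * ph(4/5),      -g**-1.5 * ph(-3/5)],
    [-g**-1.5 * ph(-3/5), -g**-1.5 * ph(-3/5),  g**-1 * ph(3/5) - g**-3],
])

# Maximum deviation of B B^dagger from the identity
err = np.abs(B @ B.conj().T - np.eye(3)).max()
print(err < 1e-12)  # True
```

The check passes to machine precision, consistent with the braiding acting unitarily on the space of conformal blocks.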
In the CFT minimal models $\mathcal{M}(k+2,k+1)$ with $k \geq 3$, we cannot identify $\psi$ with any of their primary fields; therefore, we cannot construct a gapped wavefunction of the form \eqref{eq:V-1}. A wavefunction of the form \eqref{eq:V-1} can be created from the coset CFT $SU(2)_1^{\otimes k} / SU(2)_{k}$ for all $k$, generalizing the MR wavefunction to which it reduces for $k=2$ (since the critical Ising model can be constructed from the coset CFT for $k=2$). As shown in Section \ref{sec:coamp}, the $\Phi_{(1,s)}$ primary fields with $1 \leq s \leq k+1$ in $\mathcal{M} (k+2,k+1)$ can be mapped onto primary fields in $SU(2)_1^{\otimes k} / SU(2)_{k}$ with the same fusion rules and conformal dimensions. Thus, correlators of the field $\sigma$ computed in $SU(2)_1^{\otimes k} / SU(2)_{k}$ are equivalent to those computed in the minimal model $\mathcal{M} (k+2,k+1)$. On the other hand, correlators involving the $\psi$ field must be computed in $SU(2)_1^{\otimes k} / SU(2)_{k}$. We define the $\psi$ field using \begin{equation} \label{eq:V-4} \psi^{[ij]} = \chi^{[i]}_+ \chi^{[j]}_{-} +\chi^{[i]}_- \chi^{[j]}_{+}\ . \end{equation} Evidently, \eqref{eq:V-4} does not lead to a unique definition of $\psi$ since we can consider any pair of $SU(2)_1$ factors in the coset to construct $\psi$. To obtain the desired results, one of the $SU(2)_1$ factors must be shared with the one in the definition of $\sigma$ (Eq.\ \eqref{eq:V-3a}). Since we identified $\sigma \equiv \sigma^{[k]}$, we will identify $\psi \equiv \psi^{[1k]}$. We also identify the conjugate field $\tilde{\sigma} = \Phi_{(k,k)} \equiv \sigma^{[1]}$, generalizing the critical Ising model result \cite{bib:41}. Then we obtain the fusion rules \begin{equation} \sigma \times \psi \sim \sigma \ , \ \ \psi\times\psi \sim \mathbb{I} \ , \end{equation} for all $k$, the same as in the critical Ising model.
However, the fusion rule for $\sigma \times \sigma$ does not involve $\psi$ for $k>2$; it involves $\mathbb{I}, \varepsilon, \varepsilon' , \dots$, instead (whereas for $k=2$, $\psi = \varepsilon = \Phi_{(1,3)}$). Using \eqref{eq:35}, we obtain its conformal weight $h_\psi = 2h_\chi = \frac{1}{2} <1$, satisfying the requirement for a gapped wavefunction \eqref{eq:V-1} that leads to fault-tolerant quantum computing. Correlators of $\psi$ are constructed as products of $SU(2)_1$ correlators, similar to the factors of correlators of $\sigma$ (Eq.\ \eqref{eq:V-6}), \begin{equation} \label{eq:VI-6} X^{[i]}_{\bm{m}}= \braket{\chi^{[i]}_{m_1}(z_1) \cdots \chi^{[i]}_{m_{2M}}(z_{2M})} \ , \ \ i=1,\dots, k \ , \end{equation} where $\bm{m} = (m_1,\dots, m_{2M})$. The $2M$-point $\psi$ correlator is found in terms of the Pfaffian, as in the case of the critical Ising model, \begin{equation} \label{eq:V-13} \braket{\psi_1 \cdots \psi_{2M}} = \sum_{ \bm{m}} X^{[1]}_{\bm{m} } X^{[k]}_{\bar{\bm{m}}} = 2^{M} \text{Pf}\left({\frac{1}{z_i-z_j}}\right) \ , \end{equation} where $\bar{\bm{m}} = - \bm{m}$. More generally, we construct the correlator $\mathcal{F}_{\mu}(\bm{\eta}; \bm{z})$ involving $2N$ $\sigma$ and $2M$ $\psi$ insertions (Eq.\ \eqref{eq:42}) as \begin{equation} \label{eq:V-14a} \mathcal{F}_{\mu}(\bm{\eta};\bm{z}) \propto \sum_{\bm{n} , \bm{m}} X_{\bm{m}}^{[1]} X_{\bar{\bm{m}},\bm{n}}^{[k]} Y_{\mu;\bm{n}} \ , \end{equation} in terms of three factors, similar to those in Eq.\ \eqref{eq:V-6}, \begin{eqnarray} \label{eq:V-15} X_{\bm{m}}&=&\braket{\chi_{m_1}(z_1) \cdots \chi_{m_{2M}}(z_{2M})} \nonumber \\ X_{\bar{\bm{m}},\bm{n}}&=&\braket{\chi_{n_1}(\eta_1) \cdots \chi_{n_{2N}}(\eta_{2N})\chi_{\bar{m}_1}(z_1) \cdots \chi_{\bar{m}_{2M}}(z_{2M}) } \nonumber \\ Y_{\mu ;\bm{n} } &=& \braket{\bar{\tau}_{n_1}(\eta_1) \cdots \bar{\tau}_{n_{2N}}(\eta_{2N})} \ .
\end{eqnarray} Notice that the conformal block is only specified in the last factor because the fields in $SU(2)_1$ obey Abelian fusion rules. Also, we dropped the superscripts indicating which $SU(2)_1$ factor is used because it does not affect the calculation. The remaining ingredient in the wavefunction \eqref{eq:V-1} is the Jastrow factor $\mathcal{J}$. We need a factor that cancels the poles of the conformal block $\mathcal{F}_\mu$ due to insertions of both $\sigma$ and $\psi$ fields. Following \cite{bib:20}, we define $\mathcal{J}$ as the correlation function for a free boson $\phi$, \begin{equation} \label{eq:V-16} \mathcal{J} = \langle \mathcal{V}_1 \dots \mathcal{V}_{2N} \mathcal{W}_1 \dots \mathcal{W}_{2M} \mathcal{Q} \rangle \ ,\end{equation} consisting of holomorphic vertices $\mathcal{V}$, $\mathcal{W}$, and a screening charge $\mathcal{Q}$, defined by \begin{equation} \label{eq:V-17} \mathcal{V}_j = e^{i \frac{ 1}{2\sqrt{\Lambda}} \phi (\eta_j)} \ , \ \ \mathcal{W}_j = e^{i\sqrt{\Lambda} \phi (z_j)} \ , \nonumber \end{equation} \begin{equation} \label{eq:54} \mathcal{Q} = e^{-\frac{i}{\sqrt{\Lambda}} \int \frac{d^2w}{2\pi} \phi(w,\bar{w})}\ , \end{equation} Explicitly, \begin{eqnarray} \mathcal{J}=&&\prod_{i<j}^{2M} z_{ij}^{\Lambda} \prod_{a< b}^{2N} \eta_{ab}^{\frac{1}{4\Lambda}} \prod_{a=1}^{2N} \prod_{i=1}^{2M} (\eta_a-z_i)^{\frac{1}{2}} \nonumber \\ &&\times e^{-\frac{1}{4} \sum_{i=1}^{2M} |z_i|^2} e^{-\frac{1}{8\Lambda} \sum_{a=1}^{2N} |\eta_a|^2} \end{eqnarray} where $\Lambda$ is a positive integer that represents the inverse filling of FQHE \cite{bib:42}. For $N=M=1$, we have a single conformal block. We obtain \begin{equation} \mathcal{F}^{(2,2)} = \mathcal{X}_{+-} Y_{+-} + \mathcal{X}_{-+} Y_{-+} \end{equation} where \begin{equation} \mathcal{X}_{m_1m_2} = \sum_{m_3,m_4} X_{m_3m_4} X_{m_1m_2\bar{m}_3 \bar{m}_4} \ . \end{equation} We have $Y_{+-} = Y_{-+} = \eta_{12}^{\frac{3}{2(k+2)}}$, and all $SU(2)_1$ correlators are easily computed. 
We obtain \begin{equation} \mathcal{F}^{(2,2)} = 2 \frac{ \eta_{12}^{\frac{1-k}{2k+4}}}{z_{12}} \frac{(\eta_1-z_1) (\eta_2-z_2) + (\eta_1-z_2) (\eta_2-z_1)}{ \sqrt{(\eta_1-z_1) (\eta_1-z_2) (\eta_2-z_1) (\eta_2-z_2)}} \end{equation} Notice that the exchange $\eta_1 \leftrightarrow \eta_2$ leads to the same factor as the one obtained for the propagator $\braket{\sigma_1\sigma_2}$. The corresponding Jastrow factor is found to be \begin{eqnarray} \mathcal{J}^{(2,2)} &=& \sqrt{(\eta_1-z_1) (\eta_1-z_2) (\eta_2-z_1) (\eta_2-z_2)} \nonumber \\ && \times z_{12}^{\Lambda} \eta_{12}^{\frac{1}{4\Lambda}} e^{-\frac{|z_1|^2+|z_2|^2}{4} } e^{-\frac{ |\eta_1|^2+|\eta_2|^2}{8\Lambda}} \end{eqnarray} and the wavefunction is \begin{equation} \Psi_{\eta_1,\eta_2} (z_1,z_2) \propto \eta_{12}^{\frac{1}{4\Lambda} - \frac{k-1}{2(k+2)}} z_{12}^{\Lambda -1 } \xi e^{-\frac{|z_1|^2+|z_2|^2}{4} } e^{-\frac{ |\eta_1|^2+|\eta_2|^2}{8\Lambda}} \end{equation} where $\xi = (\eta_1-z_1) (\eta_2-z_2) + (\eta_1-z_2) (\eta_2-z_1)$ is a polynomial in $(z_1,z_2)$. Thus, $\Psi$ has no singularities in $(z_1,z_2)$. For $N=2$ and $M=1$, we have two conformal blocks. Omitting overall normalization constants (\textit{cf}.\ with Eq.\ \eqref{eq:V-7}), after some algebra we obtain \begin{equation} \mathcal{F}^{(4,2)}_\mu = \frac{2}{z_{12}} \prod_{i=1}^2 \prod_{a=1}^4 (\eta_{a}-z_{i})^{-\frac{1}{2}} \Xi \ , \end{equation} where $\Xi$ is a polynomial in $(z_1,z_2)$, \begin{eqnarray} \Xi &=& \mathcal{X}_{+--+} Y_{\mu;+--+} + \mathcal{X}_{--++} Y_{\mu;--++} \nonumber\\ &&+ \mathcal{X}_{-+-+} Y_{\mu;-+-+} \ , \end{eqnarray} and we defined \begin{eqnarray} \mathcal{X}_{+--+} &=& \xi_{(14)(23)} X_{+--+} \nonumber\\ \mathcal{X}_{--++} &=& \xi_{(12)(34)} X_{--++} \nonumber\\ \mathcal{X}_{-+-+} &=& \xi_{(13)(24)} X_{-+-+} \ , \label{eq:62}\end{eqnarray} in terms of the polynomials \begin{equation} \xi_{(ab)(cd)}= (\eta_a-z_1) (\eta_b-z_1) (\eta_c-z_2) (\eta_d-z_2) + (z_1 \leftrightarrow z_2) \ . 
\end{equation} Notice that under the exchange $\eta_1 \leftrightarrow \eta_3$, we have $\mathcal{X}_{+--+} \leftrightarrow \mathcal{X}_{--++}$, and $\mathcal{X}_{-+-+}$ does not change, showing that $\mathcal{X}_{m_1m_2m_3m_4}$ (Eq.\ \eqref{eq:62}) have the same transformation properties as $X_{m_1m_2m_3m_4}$ (Eqs.\ \eqref{eq:V-8} and \eqref{eq:39}). Moreover, the correlators $Y_{\mu; m_1m_2m_3m_4}$ are the same as in the case of four-point amplitudes (Eq.\ \eqref{eq:42}). Therefore, the braiding rules for $\mathcal{F}_\mu^{(4,2)}$ are the same as in the absence of the field $\psi$ ($\mathcal{F}_\mu^{(4)}$, given by Eq.\ \eqref{eq:43}). The fact that braiding rules are not affected by $\psi$ is easily generalized to an arbitrary number of insertions of $\psi$ ($M > 1$). The corresponding Jastrow factor is found to be \begin{eqnarray} \mathcal{J}^{(4,2)} &=& z_{12}^{\Lambda} \prod_{a<b}^{4}\eta_{ab}^{\frac{1}{4\Lambda}} \prod_{a=1}^4 \prod_{i=1}^2 (\eta_a-z_i)^{\frac{1}{2}} \nonumber \\ && \times e^{-\frac{|z_1|^2+|z_2|^2}{4} } e^{-\frac{|\eta_1|^2 +\cdots+|\eta_4|^2}{8\Lambda}} \end{eqnarray} and the wavefunctions for the two conformal blocks are \begin{equation} \Psi_{\mu; \bm{\eta}} (z_1,z_2) \propto \prod_{a<b}^{4}\eta_{ab}^{\frac{1}{4\Lambda}} z_{12}^{\Lambda-1} \Xi\, e^{-\frac{|z_1|^2+|z_2|^2}{4} } e^{-\frac{|\eta_1|^2 +\cdots+|\eta_4|^2}{8\Lambda}} \end{equation} They have no singularities for $\Lambda \geq 1$. In the case of the critical Ising model $(k=2)$, they reduce to the wavefunctions derived in Ref.\ \cite{bib:20}. It is straightforward, albeit cumbersome, to generalize the above results to arbitrary $N,M$. \section{Conclusion}\label{sec:con} In this work, we developed a method to generalize the Moore-Read Pfaffian wavefunction \cite{bib:14} in a way that leads to fault-tolerant universal quantum computation. The MR wavefunction can be written in terms of correlators of the critical Ising model which is the minimal model CFT $\mathcal{M} (4,3)$. 
The correlators contain insertions of the field $\psi$ which has conformal weight $h = \frac{1}{2}$. Fault-tolerant quantum computing follows from the fact that the wavefunction is gapped due to the conformal weight of $\psi$ being less than one. Unfortunately, universal quantum computing cannot be achieved with braiding alone. Generalizations involving correlators of minimal models $\mathcal{M} (m+1,m)$ with $m>3$ lead to universal quantum computing; however, no field of conformal weight less than one has been identified, and therefore the resulting wavefunction is not gapped. Instead of relying on minimal models, we constructed the wavefunction using conformal blocks of the coset $SU(2)_1^{\otimes k}/SU(2)_k$, which contains an Abelian primary field $\psi$ of conformal dimension less than one. We showed that the coset CFT $SU(2)_1^{\otimes k}/SU(2)_k$ contains a primary field with the same conformal weight, fusion rules, and correlators as $\sigma \equiv \Phi_{(1,2)}$ in the minimal model for $m=k+1$. Additionally, the coset CFT contains a field $\psi$ of conformal weight $h= \frac{1}{2}$ and Abelian fusion rules. These properties allowed us to use correlators of the coset CFT to generalize the MR wavefunction in a way that leads to fault-tolerant universal quantum computing. For $k=2$ $(k=3)$ we recover the Ising (Fibonacci) anyons. It would be interesting to find a system that provides a physical realization of our wavefunction, similar to the realization of the MR wavefunction by the fractional quantum Hall effect at filling fraction $\nu=\frac{5}{2}$. In this respect, a comparison with the Read-Rezayi wavefunction \cite{bib:17}, which supports Fibonacci anyons (similar to our construction with $k=3$) and is realized by the FQHE at $\nu=\frac{12}{5}$, might be helpful. Work in this direction is in progress. \acknowledgements Research funded by ARO under grant W911NF-19-1-0397 and MURI contract W911NF-20-1-0082. G.S.\ is also partially supported by NSF grant OMA-1937008.
Z.W.\ is also partially supported by NSF grants FRG-1664351 and CCF-2006463. \appendix \section{Details of calculations for the five-point amplitude}\label{app:A} In the minimal model $\mathcal{M}(k+2,k+1)$ with $k \geq 3$, the five-point correlator $\braket{\sigma_1 \sigma_2 \sigma_3 \sigma_4 \varepsilon_5}$ (Eq.\ \eqref{eq:III-16}) has three conformal blocks, shown in Figure \ref{fig:III-3}. The calculation of the exchange matrices involving the points $\eta_1$, $\eta_2$ and $\eta_3$ can be simplified by working in the $\eta_5\to \eta_4$ limit. To this end, we need to change bases so that $\sigma_4$ and $\varepsilon_5$ fuse together. A suitable change of basis is shown in Figure \ref{fig:A-1}. It involves four-point amplitudes that can be found explicitly. \begin{figure} \caption{Basis change for the mixed four-point correlators. The LHS of the first, second and third lines are denoted as $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$, respectively. On the RHS we have the correlators $\mathcal{K}'_1$, $\mathcal{K}'_2$ and $\mathcal{K}'_3$. } \label{fig:A-1} \end{figure} The first correlator $\mathcal{K}_1$ transforms trivially. The other two correlators, $\mathcal{K}_2$ and $\mathcal{K}_3$, transform to $\mathcal{K}_2'$ and $\mathcal{K}_3'$ via a matrix $D$, as shown.
Working in the Coulomb gas formalism, after fixing three points, we obtain the four-point correlators in terms of Hypergeometric functions, \begin{widetext} \begin{eqnarray} \mathcal{K}_2(x)&=& x^{-\frac{2k}{k+2}} (1-x)^{\frac{k+1}{k+2}} \int_1^\infty dw w^{\frac{2k}{k+2}} (w-x)^{-\frac{2k+2}{k+2}} (w-1)^{-\frac{k+1}{k+2}} \nonumber\\ &=& \frac{ \Gamma \left(\frac{1}{k+2}\right)^2}{\Gamma \left(\frac{2}{k+2}\right)} (1-x)^{-\frac{k}{k+2}} x^{-\frac{2 k}{k+2}} {}_2F_1\left(\frac{1}{k+2},-\frac{2 k}{k+2};\frac{2}{k+2};x\right) \ , \end{eqnarray} \begin{eqnarray}\mathcal{K}_3(x) &=& x^{-\frac{2k}{k+2}} (1-x)^{\frac{k+1}{k+2}} \int_0^x dw w^{\frac{2k}{k+2}} (x-w)^{-\frac{2k+2}{k+2}} (1-w)^{-\frac{k+1}{k+2}} \nonumber\\ &=& \frac{ \Gamma \left(-\frac{k}{k+2}\right) \Gamma \left(\frac{3 k+2}{k+2}\right) }{\Gamma \left(\frac{2 (k+1)}{k+2}\right)} (1-x)^{\frac{k+1}{k+2}} x^{-\frac{k}{k+2}} {}_2F_1\left(\frac{k+1}{k+2},\frac{3 k+2}{k+2};\frac{2 (k+1)}{k+2};x\right) \ , \end{eqnarray} \begin{eqnarray} \mathcal{K}'_2(x) &=& x^{-\frac{2k}{k+2}} (1-x)^{\frac{k+1}{k+2}} \int_x^1 dw w^{\frac{2k}{k+2}} (w-x)^{-\frac{2k+2}{k+2}} (1-w)^{-\frac{k+1}{k+2}} \nonumber\\ &=& \frac{ \Gamma \left(\frac{1}{k+2}\right) \Gamma \left(-\frac{k}{k+2}\right) }{\Gamma \left(\frac{1-k}{k+2}\right)} (1-x)^{-\frac{k}{k+2}} x^{-\frac{k}{k+2}} {}_2F_1\left(-\frac{k}{k+2},\frac{k+1}{k+2};\frac{1-k}{k+2};1-x\right) \ , \end{eqnarray} \begin{eqnarray}\mathcal{K}'_3(x) &=& x^{-\frac{2k}{k+2}} (1-x)^{-\frac{k}{k+2}} \int_x^1 dw w^{-\frac{2k+2}{k+2}} (w-x)^{\frac{2k}{k+2}} (1-w)^{-\frac{k+1}{k+2}} \nonumber\\ &=& \frac{ \Gamma \left(\frac{1}{k+2}\right) \Gamma \left(\frac{3 k+2}{k+2}\right) }{\Gamma \left(\frac{3 (k+1)}{k+2}\right)} (1-x)^{\frac{k+1}{k+2}} x^{-\frac{2 k}{k+2}} {}_2F_1\left(\frac{1}{k+2},\frac{2 (k+1)}{k+2};\frac{3 (k+1)}{k+2};1-x\right) \ . 
\end{eqnarray} \end{widetext} After some algebra we find \begin{equation} D_{11}= \frac{\sin\frac{2\pi}{k+2}}{\sin \frac{3\pi}{k+2} } ,\ D_{12} = \frac{\sin\frac{4\pi}{k+2}}{\sin \frac{3\pi}{k+2} } ,\ D_{21} = D_{22} = -\frac{\sin\frac{\pi}{k+2}}{\sin \frac{3\pi}{k+2} } \ . \end{equation} After applying this basis change to the five-point correlators depicted in Figure \ref{fig:III-3}, and using the OPE $ \sigma (\eta_4) \varepsilon(\eta_5) \sim \eta_{45}^{-\frac{k}{(k+2)}} \sigma(\eta_4) + \eta_{45}^{ \frac{k+1}{k+2}} \varepsilon' (\eta_4) $, we deduce in the limit $\eta_5\to \eta_4$ \begin{eqnarray}\label{eq:A4} \mathcal{F}_1^{(5)} &\approx &\eta_{45}^{-\frac{k}{k+2}} \mathcal{F}_1^{(4)} \ , \nonumber \\ \mathcal{F}_2^{(5)} &\approx & \eta_{45}^{-\frac{k}{k+2}} D_{11} \mathcal{F}_2^{(4)} + \eta_{45}^{\frac{k+1}{k+2}} D_{12} \mathcal{F}_3^{(4)} \ ,\nonumber \\ \mathcal{F}_3^{(5)} &\approx & \eta_{45}^{-\frac{k}{k+2}} D_{21} \mathcal{F}_2^{(4)} + \eta_{45}^{\frac{k+1}{k+2}} D_{22} \mathcal{F}_3^{(4)} \end{eqnarray} where the four-point correlators $\mathcal{F}_1^{(4)} $, $\mathcal{F}_2^{(4)} $ are depicted in Figure \ref{fig:III-1} and $\mathcal{F}_3^{(4)} $ is depicted in Figure \ref{fig:F0}. \begin{figure} \caption{The four-point function $\braket{\sigma_1\sigma_2\sigma_3\varepsilon'_4}$} \label{fig:F0} \end{figure} Since this correlator does not require screening charges, we readily deduce the algebraic expression \begin{equation} \mathcal{F}_3^{(4)} = \eta_{12}^{\frac{k}{2(k+2)}} (\eta_{13} \eta_{23})^{\frac{k+1}{2(k+2)}} (\eta_{14} \eta_{24} \eta_{34})^{-\frac{3k+1}{2(k+2)}} \ . \end{equation} Using the explicit expressions \eqref{eq:A4} involving four-point amplitudes, we easily obtain the exchange matrices of the five-point amplitudes depicted in Figure \ref{fig:III-3} corresponding to exchanges between the positions $\eta_1$, $\eta_2$, and $\eta_3$. 
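As a spot check (not part of the paper's derivation), the closed hypergeometric forms above can be compared numerically with the defining screening integrals. The sketch below does this for $\mathcal{K}_2$ at the sample values $k=3$, $x=0.3$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

# Compare the screening-integral definition of K_2(x) with its closed
# hypergeometric expression (illustrative check at one sample point).
def K2_integral(k, x):
    p = k + 2
    f = lambda w: w**(2*k/p) * (w - x)**(-(2*k + 2)/p) * (w - 1)**(-(k + 1)/p)
    val, _ = quad(f, 1, np.inf)  # integrable singularity at w = 1
    return x**(-2*k/p) * (1 - x)**((k + 1)/p) * val

def K2_closed(k, x):
    p = k + 2
    pref = gamma(1/p)**2 / gamma(2/p)
    return pref * (1 - x)**(-k/p) * x**(-2*k/p) * hyp2f1(1/p, -2*k/p, 2/p, x)

k, x = 3, 0.3
rel_err = abs(K2_integral(k, x) / K2_closed(k, x) - 1)
print(rel_err < 1e-5)  # True
```

The agreement follows from Euler's integral representation of ${}_2F_1$ after the substitution $w \to 1/t$, together with the Euler transformation of the hypergeometric function.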
\section{Free-field representation of WZW models}\label{app:B} A straightforward way to evaluate correlators in the $SU(2)_q$ WZW model is through the Wakimoto free-field representation in which the WZW model is expressed in terms of a free boson field $\varphi$ and a ghost system consisting of boson $\beta$ and $\gamma$ fields. The central charge $c$ of the theory and background charge $\alpha_0$ are given in terms of the level $q$ of the WZW model as \begin{equation} c=3-12 \alpha_0^2= \frac{3q}{q+2} \end{equation} The primary fields $\Phi_m^j(z)$ depend on two parameters taking the values $j=0,\frac{1}{2},\dots,\frac{q}{2}$ and $m=-j,\dots,j$. In the free-field representation, \begin{eqnarray} \Phi_m^j(z) = \gamma^{j-m}(z) e^{-2ij \alpha_0 \varphi(z)} \ . \end{eqnarray} To compute correlators, we also need the conjugate fields $\tilde{\Phi}_m^j(z) $ and screening charges $Q_+$. The conjugate of the highest-weight field is \cite{bib:39} \begin{equation} \tilde{\Phi}_j^j(z) = \beta^{2j-q-1}(z) e^{2i (j-q-1) \alpha_0 \varphi(z)} \end{equation} The other fields can also be expressed in terms of boson fields, but we will not need explicit expressions. There are two possible screening charges. For our purposes, we need one of them, \begin{equation} Q_+= \int dw \beta(w) e^{2i \alpha_0 \varphi(w)} \end{equation} Concentrating on the primary fields with $j = \frac{1}{2}$, we simplify the notation by defining $\Phi_\pm \equiv \Phi_{\pm\frac{1}{2}}^{\frac{1}{2}}$. In general, there are two conformal blocks in four-point correlators of fields with $j=\frac{1}{2}$, $X_{\mu, m_1m_2m_3m_4}$, where $\mu =1,2$ and $m_i=\pm$ ($i=1,2,3,4$). 
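For reference (a step the text leaves implicit), solving the quoted central-charge relation for the background charge gives:

```latex
% Background charge implied by c = 3 - 12\alpha_0^2 = 3q/(q+2):
12\alpha_0^2 \;=\; 3 - \frac{3q}{q+2} \;=\; \frac{6}{q+2}
\qquad\Longrightarrow\qquad
\alpha_0 \;=\; \frac{1}{\sqrt{2(q+2)}}\, .
```

This is the value of $\alpha_0$ entering the vertex operators and screening charge below.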
We obtain the non-vanishing amplitudes \begin{widetext} \begin{eqnarray}\label{eq:B5} X_{1,+--+} &=& \braket{\Phi_+(\eta_1) Q_+ \Phi_-(\eta_2) \Phi_-(\eta_3) \tilde\Phi_+(\eta_4)} \nonumber \\ &=& - \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_0^x dw \frac{\left[{w (x-w)(1-w)}\right]^{-\frac{1}{q+2}}}{x-w} - \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_0^x dw \frac{\left[{w (x-w)(1-w)}\right]^{-\frac{1}{q+2}}}{1-w}\nonumber \\ &=& -\frac{ \Gamma \left(-\frac{1}{q+2}\right) \Gamma \left(\frac{q+1}{q+2}\right) }{\Gamma \left(\frac{q}{q+2}\right)} (1-x)^{\frac{1}{2 q+4}} x^{-\frac{3}{2 q+4}} {}_2F_1\left(-\frac{1}{q+2},\frac{1}{q+2};\frac{q}{q+2};x\right) \ , \end{eqnarray} \begin{eqnarray}\label{eq:B6} X_{1,--++} &=& \braket{\Phi_-(\eta_1) Q_+ \Phi_-(\eta_2) \Phi_+(\eta_3) \tilde\Phi_+(\eta_4)} \nonumber \\ &=& \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_0^x dw \frac{\left[{w (x-w)(1-w)}\right]^{-\frac{1}{q+2}}}{w} - \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_0^x dw \frac{\left[{w (x-w)(1-w)}\right]^{-\frac{1}{q+2}}}{x-w}\nonumber \\ &=& -\frac{ \Gamma \left(-\frac{1}{q+2}\right) \Gamma \left(\frac{q+1}{q+2}\right) }{q\Gamma \left(\frac{q}{q+2}\right)} (1-x)^{\frac{1}{2 q+4}} x^{\frac{2q+1}{2 q+4}} {}_2F_1\left(\frac{q+1}{q+2},\frac{q+3}{q+2};\frac{2 q+2}{q+2};x\right) \ , \end{eqnarray} \begin{eqnarray}\label{eq:B7} X_{1,-+-+} &=& \braket{\Phi_-(\eta_1) Q_+ \Phi_+(\eta_2) \Phi_-(\eta_3) \tilde\Phi_+(\eta_4)} \nonumber \\ &=& \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_0^x dw \frac{\left[{w (x-w)(1-w)}\right]^{-\frac{1}{q+2}}}{w} - \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_0^x dw \frac{\left[{w (x-w)(1-w)}\right]^{-\frac{1}{q+2}}}{1-w}\nonumber \\ &=& \frac{ \Gamma \left(-\frac{1}{q+2}\right) \Gamma \left(\frac{q+1}{q+2}\right) }{\Gamma \left(\frac{q}{q+2}\right)} (1-x)^{\frac{1}{2 q+4}} x^{-\frac{3}{2 q+4}} {}_2F_1\left(\frac{1}{q+2},\frac{q+1}{q+2};\frac{q}{q+2};x\right) \ , \end{eqnarray} \begin{eqnarray}\label{eq:B8} X_{2,+--+} &=& 
\braket{\Phi_+(\eta_1) \Phi_-(\eta_2) \Phi_-(\eta_3) Q_+ \tilde\Phi_+(\eta_4)} \nonumber \\ &=& \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_1^\infty dw \frac{\left[{w (w-x) (w-1)}\right]^{-\frac{1}{q+2}} }{w-x} + \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_1^\infty dw \frac{\left[{w (w-x) (w-1)}\right]^{-\frac{1}{q+2}} }{w-1}\nonumber \\ &=& \frac{ \Gamma \left(-\frac{1}{q+2}\right) \Gamma \left(\frac{3}{q+2}\right) }{2 \Gamma \left(\frac{2}{q+2}\right)} (1-x)^{\frac{1}{2 q+4}} x^{\frac{1}{2 q+4}} {}_2F_1\left(\frac{1}{q+2},\frac{3}{q+2};\frac{q+4}{q+2};x\right) \ , \end{eqnarray} \begin{eqnarray}\label{eq:B9} X_{2,--++} &=& \braket{\Phi_-(\eta_1) \Phi_-(\eta_2) \Phi_+(\eta_3) Q_+ \tilde\Phi_+(\eta_4)} \nonumber \\ &=& \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_1^\infty dw \frac{\left[{w (w-x) (w-1)}\right]^{-\frac{1}{q+2}} }{w} + \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_1^\infty dw \frac{\left[{w (w-x) (w-1)}\right]^{-\frac{1}{q+2}} }{w-x}\nonumber \\ &=& -\frac{\Gamma \left(-\frac{1}{q+2}\right) \Gamma \left(\frac{3}{q+2}\right)}{\Gamma \left(\frac{2}{q+2}\right)} (1-x)^{\frac{1}{2 q+4}} x^{\frac{1}{2 q+4}} {}_2F_1\left(\frac{1}{q+2},\frac{3}{q+2};\frac{2}{q+2};x\right) \ , \end{eqnarray} \begin{eqnarray}\label{eq:B10} X_{2,-+-+} &=& \braket{\Phi_-(\eta_1) \Phi_+(\eta_2) \Phi_-(\eta_3) Q_+ \tilde\Phi_+(\eta_4)} \nonumber \\ &=& \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_1^\infty dw \frac{\left[{w (w-x) (w-1)}\right]^{-\frac{1}{q+2}} }{w} + \left[{x(1-x)}\right]^{\frac{1}{2(q+2)}} \int_1^\infty dw \frac{\left[{w (w-x) (w-1)}\right]^{-\frac{1}{q+2}} }{w-1}\nonumber \\ &=& \frac{\Gamma \left(-\frac{1}{q+2}\right) \Gamma \left(\frac{3}{q+2}\right) }{2 \Gamma \left(\frac{2}{q+2}\right)} (1-x)^{-\frac{3}{2 (q+2)}} x^{\frac{1}{2 q+4}} {}_2F_1\left(\frac{1}{q+2},\frac{q+1}{q+2};\frac{q+4}{q+2};x\right) \ . 
\end{eqnarray} \end{widetext} Notice that only two of these functions are independent for each conformal block, because of the constraints \begin{equation} X_{\mu,+--+} +X_{\mu,--++} +X_{\mu, -+-+} = 0 \ , \ \ \mu = 1,2 \ . \end{equation} We obtain the expressions \eqref{eq:V-8} from the corresponding $X_{1,m_1m_2m_3m_4}$ for $q=1$. The second conformal block does not contribute in $SU(2)_1$ \cite{bib:25,bib:28}. \end{document}
Feed additives decrease survival of delta coronavirus in nursery pig diets

Katie M. Cottingim, Harsha Verma, Pedro E. Urriola, Fernando Sampedro, Gerald C. Shurson & Sagar M. Goyal

Feed contaminated with feces from infected pigs is believed to be a potential route of transmission of porcine delta coronavirus (PDCoV). The objective of this study was to determine whether the addition of commercial feed additives (i.e., acids, salt, and sugar) to swine feed can be an effective strategy to inactivate PDCoV. Six commercial feed acids (UltraAcid P, Activate DA, KEMGEST, Acid Booster, Luprosil, and Amasil), salt, and sugar were evaluated. The additives were added at the recommended concentrations to 5 g aliquots of complete feed, which were inoculated with 1 mL of PDCoV, stored at room temperature (~25 °C), and sampled at 0, 7, 14, 21, 28, and 35 days. In a second experiment, double the recommended concentrations of these additives were added to the feed samples, which were sampled at 0, 1, 3, 7, and 10 days. Any surviving virus was eluted in a buffer solution and then titrated in swine testicular cells. Feed samples without any additive were used as controls. Both Weibull and log-linear kinetic models were used to analyze the virus survival curves. The presence of a tail in the virus inactivation curves indicated deviations from linear behavior; hence, the Weibull model was chosen to characterize the inactivation responses because of its better fit. At the manufacturers' recommended concentrations, delta values (days to decrease virus concentration by 1 log) ranged from 0.62 to 1.72 days, but there were no differences in virus survival between feed samples with and without additives.
Doubling the concentration of the additives reduced the delta value to ≤ 0.28 days (P < 0.05) for all the additives except Amasil (delta values of 0.86 vs. 4.95 days). Feed additives that contained phosphoric acid, citric acid, or fumaric acid were the most effective in reducing virus survival, although none of the additives completely inactivated the virus by 10 days post-inoculation. Commercial feed additives (acidifiers and salt) may be used as a strategy to decrease the risk of PDCoV in feed; in particular, commercial feed acidifiers at double the recommended concentrations reduced PDCoV survival in complete feed during storage at room temperature. However, none of these additives completely inactivated the virus. There are three enteric coronaviruses that can cause gastrointestinal illness in young pigs: transmissible gastroenteritis virus (TGEV), porcine epidemic diarrhea virus (PEDV), and porcine delta coronavirus (PDCoV) [1]. Transmissible gastroenteritis virus has been present in the United States since 1946, but PEDV and PDCoV were introduced more recently, in 2013 and 2014, respectively. The spread of PEDV among swine herds was rapid, and strict biosecurity measures known to prevent transmission of other viruses, such as porcine reproductive and respiratory syndrome virus, were ineffective; contaminated complete feed was later demonstrated to be a route of PEDV transmission that had been overlooked in previous biosecurity protocols [2]. Therefore, for disease prevention purposes, it is essential to understand proper feed handling procedures that minimize the risk of transmission, and to identify methods that can rapidly inactivate these viruses if present in feed. Commercial swine feed is often fortified with various additives, including acidifiers such as organic and/or inorganic acids, to control bacterial and mold growth in feed, increase animal growth performance, improve nutrient digestibility, and control harmful bacteria in the animal gut [3].
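For illustration only: the delta values quoted above come from fitting a Weibull survival model, log10 N(t) = log10 N0 - (t/delta)^p, so that the titer drops by exactly 1 log10 at t = delta regardless of the shape parameter p. A minimal sketch of such a fit, with made-up survival data rather than the study's measurements, is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical log10 TCID50/mL readings over days of storage (made-up data).
t = np.array([0.0, 1.0, 3.0, 7.0, 10.0])
logN = np.array([5.5, 4.0, 3.2, 2.6, 2.4])

# Weibull survival model: N drops by exactly 1 log10 at t = delta.
def weibull(t, logN0, delta, p):
    return logN0 - (t / delta) ** p

(logN0, delta, p), _ = curve_fit(weibull, t, logN, p0=[5.5, 1.0, 0.5],
                                 bounds=([0, 1e-3, 1e-3], [10, 50, 5]))
print(delta > 0)  # True; delta is the fitted days to a 1-log reduction
```

A pronounced tail in the data (slow late-stage decay) shows up as a shape parameter p < 1, which is why the Weibull model outperformed the log-linear model here.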
Acidifiers are often added to feed as an alternative to antibiotic growth promoters and to control pathogens such as Salmonella spp. [4, 5]. Nursery pigs are believed to obtain the greatest benefit from acidifiers, which have been shown to increase growth rate by 12% [6]. Acidifiers are also effective in reducing diarrhea and mortality while maintaining adequate growth of nursery pigs [6]. This study was conducted to determine if the addition of commercially available feed additives (salt, sugar, and acidifiers), at recommended or double the recommended concentrations, is effective in reducing the survival of PDCoV in feed.

Virus propagation

The strain of PDCoV was obtained from the National Veterinary Services Laboratory (NVSL; Ames, IA). Stock virus was propagated in swine testicular cells. The cells were grown in Minimum Essential Medium with Earle's salts supplemented with L-glutamine (Mediatech, Herndon, VA), 8% fetal bovine serum (Hyclone, South Logan, UT), 50 μg/mL gentamicin (Mediatech), 150 μg/mL neomycin sulfate (Sigma, St. Louis, MO), 1.5 μg/mL fungizone (Sigma), and 455 μg/mL streptomycin (Sigma). The maintenance medium included 5 μg/mL of trypsin (Gibco, Life Technologies, Grand Island, NY) and the same antibiotics as previously described. Cells inoculated with the virus were incubated at 37 °C under 5% CO2 and were observed for the appearance of virus-induced cytopathic effects (CPE) for up to 6 days post-infection. The infected cells were subjected to 3 freeze-thaw cycles (−80 °C/25 °C) followed by centrifugation at 2500 × g for 15 min at 4 °C. The supernatant was collected, aliquoted, and stored at −80 °C until use.

Virus titration

Serial 10-fold dilutions of all samples were prepared in maintenance medium followed by inoculation in monolayers of swine testicular cells contained in 96-well microtiter plates (Nunc, NY, USA) using 100 μL/well and 3 wells per dilution.
Inoculated cells were incubated at 37 °C under 5% CO2 for up to 6 days and examined daily under an inverted microscope for the appearance of CPE. The highest dilution showing CPE was considered the end point. Virus titers were calculated as the 50% tissue culture infectious dose (TCID50) per mL by the Kärber method [7].

Feed matrix and laboratory analysis

The CGI Enhance ground commercial starter feed used in this experiment was obtained from VitaPlus (Madison, WI). This feed is designed for feeding pigs from 5–10 days post-weaning and does not contain any animal-derived by-products. The feed was confirmed to be negative for PDCoV by real-time reverse transcription-polymerase chain reaction (RT-PCR). A sample of the feed was submitted to Minnesota Valley Testing Laboratories (New Ulm, MN), where dry matter (DM; method 930.15), ether extract (EE; method 2003.05), crude protein (CP; method 990.03), crude fiber (method 920.39), and ash (method 942.05) were analyzed following standard procedures [8]. The chemical analysis results of the feed were 91.43% DM, 4.47% EE, 24.2% CP, 2.02% crude fiber, and 9.45% ash on an as-is basis. Six commercial feed acidifiers, UltraAcid P (Nutriad, Dendermonde, Belgium), Activate DA (Novus International, St. Charles, MO), Acid Booster (Agri-Nutrition, DeForest, WI), Kemgest (Kemin Agrifoods, Des Moines, IA), Luprosil (BASF, Florham Park, NJ), and Amasil (BASF, Florham Park, NJ), were evaluated when added at their manufacturers' recommended concentrations (Table 1). In addition, the effect of sodium chloride and sucrose on virus survival was evaluated. In a second experiment, PDCoV survival was evaluated by adding double the recommended amounts of these feed additives.
Table 1 Commercial name of feed additives, active ingredients, concentration when mixed with complete feed at the manufacturers' recommended doses (1×) and twice the manufacturers' recommended doses (2×) along with pH of the diet and additive mixture

Virus inoculation procedure

Forty-eight aliquots of feed (5 g/aliquot) were placed in plastic scintillation vials and the recommended concentrations of each feed additive were added. There were a total of 8 observations at each of the 6 time points for each of the 9 dietary combinations (control and 8 additives). Another set of 40 aliquots of feed was used at double the recommended concentrations of the additives, for a total of 8 replications for each of the 5 time points and 9 dietary combinations (Table 1). Subsequently, 1 mL of PDCoV (initial titer 3.2 × 105 TCID50/mL) was added to all vials. The control treatment consisted of vials containing feed and virus but no feed additive. The samples were thoroughly mixed using a vortex mixer and stored at room temperature (~25 °C). An individual vial served as the experimental unit, and one vial from each set was removed at 0, 7, 14, 21, 28, and 35 days to determine the degree of virus inactivation. In the experiment involving double the recommended concentrations of additives, samples were removed and evaluated for virus inactivation at 0, 1, 3, 7, and 10 days. Different time points were selected to account for greater virus inactivation in the early stages of inoculation. To determine the amount of virus inactivation at each time point, the surviving virus in each vial was eluted by adding 10 mL of 3% beef extract-0.05 M glycine solution at pH 7.2. After thorough mixing by vortexing, the vials were centrifuged at 2500 × g for 15 min. Serial 10-fold dilutions of the supernatants (eluates) were inoculated in swine testicular cells as previously described for virus titration.
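The degree of inactivation at each time point can then be expressed as a log10 reduction relative to the initial titer; for instance (with hypothetical titers, not the study's data):

```python
import math

def log10_reduction(initial_titer, surviving_titer):
    """Log10 reduction in virus titer (e.g. TCID50/mL) relative to time zero."""
    return math.log10(initial_titer) - math.log10(surviving_titer)

# Hypothetical example: initial titer 3.2e5 TCID50/mL and a surviving
# titer of 3.2e2 TCID50/mL after storage correspond to a 3 log reduction.
reduction = log10_reduction(3.2e5, 3.2e2)
```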
The amount of surviving virus was calculated and compared with that in control vials (no additive) and was expressed as log10 TCID50/mL. All treatments were applied and analyzed in triplicate.

Measurement of pH

Fifty mL of distilled water was added to 5 g of feed contained in a 100 mL glass flask. The feed suspension was stirred at room temperature for 2 h using a magnetic stirrer. The pH was measured using a pH probe (Fisher Scientific, Waltham, MA) at 0, 15, 30, 60, and 120 min. The final pH value was calculated as the average of the values at different time intervals. The average pH for feed was 5.82 ± 0.02 and this value was used to compare the pH values after the addition of feed additives. Inactivation kinetics data (log TCID50/mL) were analyzed by using GInaFIT software, a freeware add-on for Microsoft Excel (Microsoft, Redmond, WA) [9]. The traditional log-linear model developed by Bigelow and Esty (1920) was used to characterize the survival curves of PDCoV by using the following equation [10]:

$$ \log N = \log N_0 - (k \times t) $$

where N is the amount of surviving virus after treatment, N0 is the initial virus titer, k is the kinetic parameter (day−1), and t is the treatment time (d). The kinetic parameter k is usually expressed as D, also known as the 'decimal reduction time' (the time required to reduce the initial virus titer by 90%, or 1 log, at a certain temperature), which was calculated as:

$$ D = \frac{2.3}{k} $$

The Weibull distribution function has been used to describe non-linear inactivation patterns of different microorganisms after thermal and non-thermal processing. Assuming that the temperature resistance of the virus is governed by a Weibull distribution, Mafart et al.
[11] developed the following equation [12]:

$$ \log N = \log N_0 - \left(\frac{t}{\delta}\right)^n $$

where N is the surviving virus titer after treatment, N0 is the initial virus titer, δ is the time (min or days) of the first logarithmic decline in virus titer, and n is the shape parameter. The n value provides an indication of the shape of the response curve. If n > 1, the curve is convex (it forms a shoulder-shaped response); if n < 1, the curve is concave (it forms a tail-shaped response); and if n = 1, the curve is a straight line and can be described by a linear model. Three replicates per treatment were used to determine how well the model fit the experimental data by calculating the Adj. R2, defined as follows:

$$ \mathrm{Adj.}\ R^2 = 1 - \frac{(m-1)\left(1 - \frac{SSQ_{regression}}{SSQ_{total}}\right)}{m-j} $$

where m is the number of observations, j is the number of model parameters, and SSQ is the sum of squares. The effect of different additives on the kinetic parameters and survival of virus was assessed by using a mixed model (SAS, v9.3; SAS Inst. Inc., Cary, NC) that included the effects of additives and time as fixed effects and replicate/batch as random effects. Each vial was considered the experimental unit. Data were analyzed for outliers and the presence of a normal distribution using the UNIVARIATE procedure of SAS, which calculates sample moments, measures of location and variability, standard deviation, tests for normality, robust estimates of scale, and missing values, among others. The LSMEANS statement in SAS was used to calculate treatment means adjusted for model effects, while Tukey's test was used to determine differences among treatments. For this study, significance was considered when P < 0.05.
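Both models above can be fitted by ordinary least squares after a suitable transformation. The sketch below uses synthetic, noiseless data, not the study's measurements; note that for the relation D = 2.3/k to hold, k must be the natural-log rate constant, so a base-10 slope must be rescaled by ln 10.

```python
import math

def ols(xs, ys):
    """Slope and intercept of a simple least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic survival curve generated from a Weibull model (hypothetical data)
t = [1.0, 2.0, 4.0, 8.0]                        # days
log_n0 = 5.0                                    # log10 initial titer
delta_true, n_true = 2.0, 0.5                   # Weibull parameters
log_n = [log_n0 - (ti / delta_true) ** n_true for ti in t]

# Log-linear model: log10 N = log10 N0 - (k / ln 10) * t
slope, _ = ols(t, log_n)
k = -slope * math.log(10)       # natural-log rate constant (day^-1)
D = 2.303 / k                   # decimal reduction time (days)

# Weibull model, linearized:
# log10(log10 N0 - log10 N) = n * log10 t - n * log10 delta
xs = [math.log10(ti) for ti in t]
ys = [math.log10(log_n0 - ln) for ln in log_n]
n_hat, c = ols(xs, ys)
delta_hat = 10 ** (-c / n_hat)

# Adjusted R^2 for the Weibull fit (m observations, j parameters)
m, j = len(t), 2
mean_y = sum(ys) / m
ss_res = sum((y - (n_hat * x + c)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
adj_r2 = 1 - (m - 1) * (ss_res / ss_tot) / (m - j)
```

With this noiseless Weibull input, the fit recovers n = 0.5 and delta = 2 days with an adjusted R2 of essentially 1, while the single log-linear D-value is a poor summary of the curved data.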
Effect of additives on the survival of PDCoV in feed at their recommended concentrations

The goodness of model fit was analyzed by comparing the Adj. R2 values from the log-linear and Weibull models. The Adj. R2 values for the log-linear model (0.48–0.57) were less than those obtained for the Weibull model (0.86–0.93), indicating that the Weibull model provided a better fit to the experimental data (Table 2). This is explained mainly by the appearance of a resistant fraction of the virus that was able to survive longer than the length of the experiment (35 days). This residual survival produced long tails in the survival curves characterized by shape parameters (n) less than 1. This nonlinear behavior resulted in D-values that overestimated virus survival (14.13–15.52 days), while the delta values obtained with the Weibull model were between 0.86 and 1.72 days. Weibull prediction values showed much faster inactivation kinetics and thus better characterized the virus survival curves.

Table 2 Kinetic parameters and correlation coefficients corresponding to the log-linear and Weibull models fitted to survival curves of Porcine Delta coronavirus (PDCoV) in complete feed and feed additives included at the manufacturers' recommended concentrations

Despite differences in virus inactivation kinetics, none of the additives appeared to be effective in completely inactivating the virus. The total amount of virus inactivation over the sampling period of 35 days was a 3 log reduction for the control sample and all the additives evaluated, indicating that none of the additives added at the manufacturers' recommended doses was effective in reducing PDCoV survival.

Effect of additives on the survival of PDCoV in feed at twice the recommended concentration

Doubling the concentrations of feed additives resulted in faster PDCoV inactivation kinetics (0.0004–0.28 days) for all additives, except for sucrose and formic acid (Table 3).
UltraAcid P and KEMGEST provided faster initial virus inactivation kinetics than the other additives, and the delta values were estimated to be 35 s. However, most of the survival curves suggested that a large fraction of the virus remained resistant to the treatment, with the appearance of tails (n values < 1) and a maximum inactivation of 2 log achieved after 10 days of storage. The addition of Luprosil (0.06 days), Acid Booster (0.28 days), and sodium chloride (0.09 days) resulted in the greatest virus inactivation, with a 2.3-3.0 log reduction after 10 days of storage at room temperature.

Table 3 Kinetic parameters and correlation coefficients corresponding to the Weibull model fitted to PDCoV survival curves in complete feed and feed additives that were added at twice the manufacturers' recommended concentrations

The pH of the complete feed without addition of acidifiers was greater than the pH of the same complete feed with the addition of Luprosil, Activate DA, KEMGEST, Acid Booster, and Amasil. The pH of the complete feed with the addition of UltraAcid P was not different from that of the complete feed. There was no correlation between the pH values of the diet with the addition of acidifiers and the inactivation kinetics of PDCoV (delta values; Fig. 1). Interestingly, the virus appeared to survive better at pH values lower than 3 and at pH 7 to 8. Organic, inorganic, or blends of acids are commonly added to swine feeds to control pathogens such as Salmonella spp. [13]. To our knowledge, this is the first study that has evaluated the impact of commercially available acids, sodium chloride, and sucrose on the survival of PDCoV in swine feed. When these commercial additives were added at the manufacturers' recommended doses, none of them was effective in decreasing survival of PDCoV; all acidifiers had to be added at twice the manufacturers' recommended concentrations to observe inactivation of PDCoV in complete swine feed.
In contrast, PEDV is inactivated by similar acidifiers at the manufacturers' recommended concentration; Activate DA (0.81 d) and KEMGEST (3.28 d) produced inactivation of PEDV that was faster than inactivation in the control diet [14]. The current experiment focused on determining inactivation kinetics of commercial additives available to the United States feed industry, and did not focus on evaluating the specific active ingredients present in these additives that may inactivate PDCoV. However, based on the description and order of the active ingredients listed for each commercial additive, it appears that some form of phosphoric acid (Ka = 6.9 × 10−3) was present in UltraAcid P and KEMGEST, which suggests that this acid may be potentially responsible for inactivation of PDCoV. Phosphoric acid has been shown to inactivate pathogens such as Salmonella spp. on stainless steel surfaces, but there are no data available on inactivation of viruses in animal feed [15]. Inactivation of PDCoV was greater in the presence of KEMGEST than Acid Booster, but the active ingredients in these two feed additives are similar, with the exception of fumaric acid present in KEMGEST. Furthermore, fumaric acid was also present in UltraAcid P, which was also effective in rapidly inactivating PDCoV. Therefore, it is possible that fumaric acid in KEMGEST and UltraAcid P may be the primary component that causes PDCoV inactivation. Studies have shown that fumaric acid is an effective antimicrobial that reduces survivability of E. coli [16] and Salmonella spp. [17]. It is believed that changes in pH affect viruses by increasing sensitivity to deoxyribonuclease [18] and by altering the virus capsid through the loss of structural proteins [18]. The RNA of RNA-containing viruses (such as PDCoV) is sensitive to ribonuclease at all pH levels tested (pH 3–9) [19]. At pH levels of 5 and 7, RNA was hydrolyzed even in the absence of ribonuclease.
There is no clear pattern or indication of a specific acid that inactivates PDCoV, and more research is needed to identify the acid or combination of acids that can completely inactivate the virus.

Correlation of pH and delta value on virus inactivation at double the recommended concentration of feed additives

Comparing data from this experiment with data on inactivation of PEDV, it appears that PDCoV is more labile than PEDV to environmental temperature and storage conditions because the delta values for PDCoV were, in general, much lower (<2 d) than the 17 days observed for PEDV [20]. Comparison of inactivation kinetics suggests that PEDV resists inactivation during feed storage to a greater extent than does PDCoV. There are limited data comparing the survival of enteric coronaviruses in the environment, but after the initial outbreak of each virus, PEDV infected more herds than PDCoV. These epidemiological and geographic distribution data suggest that PEDV survives longer than PDCoV, in agreement with observations from the current experiment [21, 22]. Addition of salt, but not sugar, to the control diet caused a decrease in delta values for inactivation of PDCoV. This observation is in agreement with inactivation of PEDV in complete swine feed, where adding both salt and sugar increased inactivation of PEDV [20]. Likewise, this observation is in agreement with results from an experiment suggesting that addition of a phosphate supplemented salt mix to casings for sausage manufacturing increases inactivation of several viruses affecting swine, such as Foot and Mouth Disease Virus, Classical Swine Fever Virus, Swine Vesicular Disease Virus, and African Swine Fever Virus [23]. Using feed acidifiers could be an effective strategy to decrease the concentration of PDCoV in swine feed, but double the manufacturer's recommended concentration was required to observe an effect.
Despite the observed results on inactivation of PDCoV, more experiments are needed to demonstrate the effectiveness of these treatments as a means of preventing PDCoV transmission in feed in more applied settings. None of the treatments applied in this experiment were completely effective in inactivating PDCoV. Therefore, the strategy proposed in this research should be used in combination with other virus inactivation procedures within the processing and distribution steps for swine feed rather than as a single kill step for virus inactivation.

Adj. R2: Adjusted coefficient of determination
CP: Crude protein
DM: Dry matter
EE: Ether extract
PDCoV: Porcine delta coronavirus
PEDV: Porcine epidemic diarrhea virus
TCID: Tissue culture infectious dose

Saif LJ, Pensaert MB, Sestak K, Yeo S, Jung K. Coronaviruses. In: Zimmerman JJ, Karriker LA, Ramirez A, Schwartz KJ, Stevenson GW, editors. Diseases of Swine. Ames, IA: Wiley and Sons; 2012. p. 501–24.
Dee S, Clement T, Schelkopf A, Nerem J, Knudsen D, Christopher-Hennings J, Nelson E. An evaluation of contaminated complete feed as a vehicle for porcine epidemic diarrhea virus infection of naive pigs following consumption via natural feeding behavior: proof of concept. BMC Vet Res. 2014;10:176.
Jacela JY, DeRouchey JM, Tokach MD, Goodband RD, Nelssen JL, Renter DG, Dritz SS. Feed additives for swine: fact sheets - acidifiers and antibiotics. J Swine Health Prod. 2009;17:270–5.
Van Immerseel F, Cauwerts K, Devriese LA, Haesebrouck F, Ducatelle R. Feed additives to control Salmonella in poultry. Worlds Poult Sci J. 2002;58:501–13.
Koyuncu S, Andersson MG, Löfström C, Skandamis PN, Gounadaki A, Zentek J, Häggblom P. Organic acids for control of Salmonella in different feed materials. BMC Vet Res. 2013;9:81.
Tung CM, Pettigrew JE. Critical review of acidifiers. Rep NPB. 2008;5–169.
Karber G.
50% end-point calculation. Arch Exp Pathol Pharmak. 1931;162:480–3.
AOAC. Official Methods of Analysis of AOAC International. 2007.
Geeraerd AH, Valdramidis VP, Van Impe JF. GInaFiT, a freeware tool to assess non-log-linear microbial survivor curves. Int J Food Microbiol. 2005;102:95–105.
Bigelow WD. The logarithmic nature of thermal death time curves. J Infect Dis. 1921;29:528–36.
Mafart P, Couvert O, Gaillard S, Leguérinel I. On calculating sterility in thermal preservation methods: application of the Weibull frequency distribution model. Int J Food Microbiol. 2002;72:107–13.
Albert I, Mafart P. A modified Weibull model for bacterial inactivation. Int J Food Microbiol. 2005;100:197–211.
Juven BJ, Cox NA, Bailey JS, Thomson JE, Charles OW, Shutze JV. Survival of Salmonella in dry food and feed. J Food Prot. 1984;47:445–8.
Trudeau M, Verma H, Sampedro F, Urriola PE, Shurson GC, Goyal S. Survival and mitigation strategies of porcine epidemic diarrhea virus (PEDV) in complete feed. In: ADSA-ASAS 2015 Midwest Meeting. ASAS. 2015.
Shen C, Luo Y, Nou X, Bauchan G, Zhou B, Wang Q, Millner P. Enhanced inactivation of Salmonella and Pseudomonas biofilms on stainless steel by use of T-128, a fresh-produce washing aid, in chlorinated wash solutions. Appl Environ Microbiol. 2012;78:6789–98.
Comes JE, Beelman RB. Addition of fumaric acid and sodium benzoate as an alternative method to achieve a 5-log reduction of Escherichia coli O157:H7 populations in apple cider. J Food Prot. 2002;65:476–83.
Kondo N, Murata M, Isshiki K. Efficiency of sodium hypochlorite, fumaric acid, and mild heat in killing native microflora and Escherichia coli O157:H7, Salmonella Typhimurium DT104, and Staphylococcus aureus attached to fresh-cut lettuce. J Food Prot. 2006;69:323–9.
Prage L, Pettersson U, Höglund S, Lonberg-Holm K, Philipson L. Structural proteins of adenoviruses: IV. Sequential degradation of the adenovirus type 2 virion. Virology. 1970;42:341–58.
Salo RJ, Cliver DO.
Inactivation of enteroviruses by ascorbic acid and sodium bisulfite. Appl Environ Microbiol. 1978;36:68–75.
Trudeau MP, Verma H, Sampedro F, Urriola PE, Shurson GC, McKelvey J, Pillai SD, Goyal SM. Comparison of Thermal and Non-Thermal Processing of Swine Feed and the Use of Selected Feed Additives on Inactivation of Porcine Epidemic Diarrhea Virus (PEDV). PLoS One. 2016;11:e0158128.
USDA. Swine Enteric Coronavirus Disease (SECD) Situation Report - Oct 22. 2015. p. 1–18.
McCluskey BJ, Haley C, Rovira A, Main R, Zhang Y, Barder S. Retrospective testing and case series study of porcine delta coronavirus in U.S. swine herds. Prev Vet Med. 2016;123:185–91.
Wieringa-Jelsma T, Wijnker JJ, Zijlstra-Willems EM, Dekker A, Stockhofe-Zurwieden N, Maas R, Wisselink HJ. Virus inactivation by salt (NaCl) and phosphate supplemented salt in a 3D collagen matrix model for natural sausage casings. Int J Food Microbiol. 2011;148:128–34.

We thank Nhungoc Ti Luong for technical assistance in conducting this study. We thank the National Pork Board for partial funding of this project and Cenex Harvest States for the fellowship provided to K.M. Cottingim. Please contact the author for data requests. KMC collected the data and wrote the manuscript, HV collected data and revised the manuscript, PEU designed the experiments, analyzed data, and revised the manuscript, FS analyzed the data and revised the manuscript, GCS revised the manuscript, and SMG designed the experiments, collected data, analyzed data, and revised the manuscript. All authors read and approved the manuscript. SMG, HV, and FS are in the College of Veterinary Medicine, and PEU, GCS, and KMC are in the College of Food, Agricultural and Natural Resource Sciences at the University of Minnesota. The authors declare that they have no competing interests. Department of Animal Science, University of Minnesota, St. Paul, MN, 55108, USA Katie M. Cottingim, Pedro E. Urriola & Gerald C. Shurson Department of Veterinary Population Medicine, University of Minnesota, St.
Paul, MN, 55108, USA Harsha Verma, Fernando Sampedro & Sagar M. Goyal

Correspondence to Sagar M. Goyal.

Cottingim, K.M., Verma, H., Urriola, P.E. et al. Feed additives decrease survival of delta coronavirus in nursery pig diets. Porc Health Manag 3, 5 (2017). https://doi.org/10.1186/s40813-016-0048-8
What is the slope of the line containing the midpoint of the segment with endpoints at (0, 0) and (2, 3) and the midpoint of the segment with endpoints at (5, 0) and (6, 3)? Express your answer in simplest form. The midpoint of a line segment with endpoints $(x_1, y_1), (x_2, y_2)$ is $\left(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2}\right)$, which should make sense since the midpoint is halfway between the endpoints. So the midpoint of the first segment is $\left(\frac{0+2}{2}, \frac{0+3}{2}\right) = (1,1.5)$ and the midpoint of the second segment is $\left(\frac{5+6}{2}, \frac{0+3}{2}\right) = (5.5,1.5)$. Note that the slope of the desired line must then be $\boxed{0}$, since the $y$-coordinates of the two points are the same.
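As a quick numerical check of the solution:

```python
def midpoint(p, q):
    """Midpoint of the segment with endpoints p and q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

m1 = midpoint((0, 0), (2, 3))   # (1.0, 1.5)
m2 = midpoint((5, 0), (6, 3))   # (5.5, 1.5)

# Slope of the line through the two midpoints
slope = (m2[1] - m1[1]) / (m2[0] - m1[0])
print(slope)  # 0.0
```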
# Understanding the Fundamental Theorem of Calculus

The Fundamental Theorem of Calculus is a central concept in calculus that connects integration and differentiation. It states that the integral of a function's derivative is equal to the function itself, plus a constant. Mathematically, it is represented as:

$$\int f'(x) dx = f(x) + C$$

This theorem is crucial in understanding the relationship between differentiation and integration. It allows us to find the antiderivative (indefinite integral) of a function and to evaluate definite integrals from any antiderivative of the integrand.

In SymPy, the fundamental theorem of calculus can be applied using the `integrate` function. This function can be used to compute both definite and indefinite integrals of expressions. To compute an indefinite integral, pass the variable after the expression. For example:

```python
from sympy import symbols, integrate

x = symbols('x')
expr = x**2 + x + 1
integrate(expr, x)
```

This code snippet will output the indefinite integral of the expression `x**2 + x + 1`:

$$\frac{1}{3}x^3 + \frac{1}{2}x^2 + x$$

To compute a definite integral, pass the argument as follows: `(integration_variable, lower_limit, upper_limit)`. For example:

```python
from sympy import symbols, exp, integrate, oo

x = symbols('x')
expr = exp(-x**2)
integrate(expr, (x, 0, oo))
```

This code snippet will output the definite integral of the expression `exp(-x**2)` from 0 to infinity:

$$\frac{\sqrt{\pi}}{2}$$

## Exercise

Compute the indefinite integral of the expression `sin(x)*tan(x)` using SymPy.

# Calculus and limits in SymPy

In SymPy, you can perform various calculus operations, such as finding derivatives, limits, and integrals. Let's start by finding the derivative of a function. To find the derivative of a function, use the `Derivative` class.
For example, to find the derivative of the function `f(x) = x**2 + x + 1` with respect to `x`, you can use the following code:

```python
from sympy import symbols, Derivative

x = symbols('x')
f = x**2 + x + 1
d = Derivative(f, x)
d.doit()
```

This code snippet will output the derivative of the function `f(x) = x**2 + x + 1`:

$$2x + 1$$

To find the limit of a function as `x` approaches a certain value, use the `limit` function. For example, to find the limit of the function `f(x) = x**2 + x + 1` as `x` approaches infinity, you can use the following code:

```python
from sympy import symbols, limit, oo

x = symbols('x')
f = x**2 + x + 1
limit(f, x, oo)
```

This code snippet will output the limit of the function `f(x) = x**2 + x + 1` as `x` approaches infinity:

$$\infty$$

## Exercise

Find the limit of the function `f(x) = x**2 + x + 1` as `x` approaches 2 using SymPy.

# Applications of the Fundamental Theorem in SymPy

The fundamental theorem of calculus can be applied in various ways in SymPy. Let's start by finding the antiderivative of a function. To find the antiderivative (indefinite integral) of a function, use the `integrate` function.
For example, to find the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, (x, 0, 2)) ``` This code snippet will output the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2: $$\frac{9}{3}$$ ## Exercise Find the definite integral of the function `f(x) = sin(x)*tan(x)` from 0 to pi/2 using SymPy. # Integration and its properties in SymPy SymPy provides various functions and techniques to compute integrals. Let's start by finding the indefinite integral of a function. To find the indefinite integral of a function, use the `integrate` function. For example, to find the indefinite integral of the function `f(x) = x**2 + x + 1`, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, x) ``` This code snippet will output the indefinite integral of the function `f(x) = x**2 + x + 1`: $$\frac{1}{3}x^3 + \frac{1}{2}x^2 + x$$ You can also find the definite integral of a function using the `integrate` function. For example, to find the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, (x, 0, 2)) ``` This code snippet will output the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2: $$\frac{9}{3}$$ ## Exercise Find the definite integral of the function `f(x) = sin(x)*tan(x)` from 0 to pi/2 using SymPy. # SymPy's integration functions and techniques SymPy provides various functions and techniques to compute integrals. Let's start by finding the indefinite integral of a function. To find the indefinite integral of a function, use the `integrate` function. 
For example, to find the indefinite integral of the function `f(x) = x**2 + x + 1`, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, x) ``` This code snippet will output the indefinite integral of the function `f(x) = x**2 + x + 1`: $$\frac{1}{3}x^3 + \frac{1}{2}x^2 + x$$ You can also find the definite integral of a function using the `integrate` function. For example, to find the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, (x, 0, 2)) ``` This code snippet will output the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2: $$\frac{9}{3}$$ ## Exercise Find the definite integral of the function `f(x) = sin(x)*tan(x)` from 0 to pi/2 using SymPy. # Advanced integration techniques using SymPy SymPy provides various functions and techniques to compute integrals. Let's start by finding the indefinite integral of a function. To find the indefinite integral of a function, use the `integrate` function. For example, to find the indefinite integral of the function `f(x) = x**2 + x + 1`, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, x) ``` This code snippet will output the indefinite integral of the function `f(x) = x**2 + x + 1`: $$\frac{1}{3}x^3 + \frac{1}{2}x^2 + x$$ You can also find the definite integral of a function using the `integrate` function. 
For example, to find the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, (x, 0, 2)) ``` This code snippet will output the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2: $$\frac{9}{3}$$ ## Exercise Find the definite integral of the function `f(x) = sin(x)*tan(x)` from 0 to pi/2 using SymPy. # Solving complex problems with SymPy SymPy can be used to solve complex problems in calculus, such as finding the antiderivative of a function, computing definite integrals, and applying the fundamental theorem of calculus. Let's start by finding the antiderivative of a function. To find the antiderivative (indefinite integral) of a function, use the `integrate` function. For example, to find the antiderivative of the function `f(x) = x**2 + x + 1`, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, x) ``` This code snippet will output the antiderivative of the function `f(x) = x**2 + x + 1`: $$\frac{1}{3}x^3 + \frac{1}{2}x^2 + x$$ You can also use SymPy to compute the definite integral of a function. For example, to find the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2, you can use the following code: ```python from sympy import symbols, integrate x = symbols('x') f = x**2 + x + 1 integrate(f, (x, 0, 2)) ``` This code snippet will output the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2: $$\frac{9}{3}$$ ## Exercise Find the definite integral of the function `f(x) = sin(x)*tan(x)` from 0 to pi/2 using SymPy. # Comparing results with traditional methods SymPy can be used to compare the results of calculations with traditional methods. Let's start by finding the antiderivative of a function. To find the antiderivative (indefinite integral) of a function, use the `integrate` function. 
For example, to find the antiderivative of the function `f(x) = x**2 + x + 1`, you can use the following code:

```python
from sympy import symbols, integrate

x = symbols('x')
f = x**2 + x + 1
integrate(f, x)
```

This code snippet will output the antiderivative of the function `f(x) = x**2 + x + 1`:

$$\frac{1}{3}x^3 + \frac{1}{2}x^2 + x$$

You can also use SymPy to compute the definite integral of a function. For example, to find the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2, you can use the following code:

```python
from sympy import symbols, integrate

x = symbols('x')
f = x**2 + x + 1
integrate(f, (x, 0, 2))
```

This code snippet will output the definite integral of the function `f(x) = x**2 + x + 1` from 0 to 2:

$$\frac{20}{3}$$

## Exercise

Find the definite integral of the function `f(x) = sin(x)*tan(x)` from 0 to pi/2 using SymPy.

# Conclusion and future developments

In this textbook, we have explored the power of SymPy in understanding the Fundamental Theorem of Calculus. We have learned how to use SymPy to find the antiderivative of a function, compute definite integrals, and apply the fundamental theorem of calculus.

Future developments in SymPy will likely involve improvements in integration algorithms and support for more advanced integration techniques. This will allow users to solve even more complex problems in calculus using SymPy.

In conclusion, SymPy is a powerful tool for understanding and solving problems in calculus. By leveraging the power of SymPy, we can perform various calculus operations, apply the fundamental theorem of calculus, and solve complex problems with ease.
Andrew Putman

Andrew Putman (born October 22, 1979) is an American mathematician at the University of Notre Dame. His research fields include geometric group theory and low-dimensional topology.

Born: October 22, 1979
Nationality: American
Alma mater: Rice University; University of Chicago
Fields: Mathematics
Institutions: Rice University; University of Notre Dame
Doctoral advisor: Benson Farb

Putman earned his bachelor's degree from Rice University. In 2007, he obtained his doctorate from the University of Chicago, under the supervision of Benson Farb. He was a C. L. E. Moore Instructor at MIT from 2007 to 2010, and then served on the faculty at Rice from 2010 to 2016. He then moved to Notre Dame, where he is currently the Notre Dame Professor of Topology. In 2018, he became a fellow of the American Mathematical Society.[1]

In 2014, there was a Séminaire Bourbaki talk by Aurélien Djament on Putman's work.[2] Further, in 2013, Putman received the Sloan Research Fellowship and a National Science Foundation CAREER Award.[3]

References

1. List of Fellows of the American Mathematical Society, retrieved 2018.
2. Djament, Aurélien (2016), "La propriété noethérienne pour les foncteurs entre espaces vectoriels [d'après A. Putman, S. Sam et A. Snowden]" (PDF), Astérisque, 380 (Séminaire Bourbaki, Vol. 2014/2015): Exp. No. 1090, 35–60, ISBN 978-2-85629-836-7, MR 3522170
3. Boyd, Jade (February 18, 2013). "Doubly honored: Two prestigious awards for Rice's Putman". Rice University.

External links

• Andrew Putman at the Mathematics Genealogy Project
• Website at University of Notre Dame
\begin{definition}[Definition:Stationary Model] A '''stationary model''' is a stochastic model for describing a time series which assumes that the underlying stochastic process remains in equilibrium about a constant mean level. That is, it is a stochastic model with an underlying stochastic process which is itself stationary. \end{definition}
A 35-gene signature discriminates between rapidly- and slowly-progressing glioblastoma multiforme and predicts survival in known subtypes of the cancer

Azeez A. Fatai & Junaid Gamieldien

Gene expression can be employed for the discovery of prognostic gene or multigene signatures in cancer. In this study, we assessed the prognostic value of a 35-gene expression signature selected by pathway- and machine learning-based methods in adjuvant therapy-linked glioblastoma multiforme (GBM) patients from the Cancer Genome Atlas. Genes with high expression variance were subjected to pathway enrichment analysis, and those having roles in chemoradioresistance pathways were used in expression-based feature selection. A modified Support Vector Machine Recursive Feature Elimination algorithm was employed to select a subset of these genes that discriminated between rapidly-progressing and slowly-progressing patients. Survival analysis on TCGA samples not used in feature selection and samples from four GBM subclasses, as well as from an entirely independent study, showed that the 35-gene signature discriminated between the survival groups in all cases (p<0.05) and could accurately predict survival irrespective of the subtype. In a multivariate analysis, the signature predicted progression-free and overall survival independently of other factors considered. We propose that the performance of the signature makes it an attractive candidate for further studies to assess its utility as a clinical prognostic and predictive biomarker in GBM patients. Additionally, the signature genes may also be useful therapeutic targets to improve both progression-free and overall survival in GBM patients.

Glioblastoma multiforme (GBM) is the most common and highly aggressive brain tumour. Patients with GBM have a very poor prognosis, with a median OS time of 14.5 months [1].
Chemotherapy and radiotherapy are intended to improve patient survival, but are, however, hampered by the development of resistance. Methylation of the promoter of the MGMT gene, which encodes O-6-methylguanine-DNA methyltransferase, a DNA-repair enzyme that removes alkylating groups at the O6 of guanine residues, is a predictor of treatment response in GBM. Most studies that considered progression-free survival assessed only the prognostic value of MGMT promoter methylation [2–4]. Tumours with hypermethylated MGMT promoters are expected to benefit from temozolomide, an alkylating agent used for treating GBM, but reports regarding the prognostic value of this biomarker have been conflicting [5, 6].

Several gene expression prognostic and predictive signatures have been translated into clinical applications for cancer treatment. Oncotype DX is a 21-gene qRT-PCR assay used to predict the likelihood of recurrence in women with estrogen receptor positive breast cancer [7, 8]. Mammostrat is a prognostic immunohistochemical test that uses antibodies specific for SLC7A5, p53, HTF9C, NDRG1, and CEACAM5 to classify ER-positive, lymph node negative breast cancer cases into low-, moderate- or high-risk groups [9, 10]. Mammaprint is a 70-gene microarray-based test for predicting risk of metastasis in breast cancer [11].

In light of the lack of standardised prognostic biomarkers for GBM, we aimed to identify an mRNA expression derived prognostic signature using data from the Cancer Genome Atlas (TCGA - http://cancergenome.nih.gov/). As current prognostic feature selection approaches lack reproducibility and do not take chemoradioresistant pathways into consideration, we used a combination of pathway enrichment analysis and Support Vector Machine based Recursive Feature Elimination (SVM-RFE) to ensure that the genes selected as having predictive potential would also be biologically relevant to the phenotype.
We here describe a multigene signature that successfully predicts both progression-free and overall survival in glioblastoma multiforme.

Gene-centric expression data

Five hundred fifty-eight GBM gene expression profiles generated by the Cancer Genome Atlas (TCGA) were downloaded from the NCI Genomic Data Commons Data Portal (https://portal.gdc.cancer.gov/projects/TCGA-GBM). Five hundred forty-eight of these profiles were obtained from GBM patients, and ten were from non-neoplastic patients. One profile was selected for each of the samples profiled two or more times. The five hundred twenty-nine profiles left after removing those of non-neoplastic samples were used in this study (Additional file 1). The expression profiles were generated on the Affymetrix HT HG-U133A platform. As gene expression of the TCGA samples was profiled in batches, which could introduce bias in classification analysis [12], the statistical significance of the batch effect was assessed as a function of the selected genes using guided Principal Component Analysis (gPCA) from the R package gPCA [13].

The approach used by TCGA (2008) [14] and Verhaak et al. (2011) [15] was employed to generate gene-centric expression data. The probe sequences of HT HG-U133A downloaded from Affymetrix were mapped against a database composed of RefSeq version 41 and GenBank 178 complete coding sequences using SpliceMiner [16]. Only perfect matches were considered and probes mapping to more than one gene were excluded. The output file from SpliceMiner and the HT HG-U133A chip definition file (cdf) were passed to the alternate cdf-generating function makealtcdf of AffyProbeMiner [17]. Probe sets with fewer than five probes were excluded from the resulting alternative cdf, which was then converted to an R package using makecdfenv. The cdf was used to perform Robust Multi-array Average normalization and summarization of the gene expression data, resulting in gene-centric data for 12161 genes.
An independent validation data set (GSE7696), profiled on the HG-U133 Plus 2 Affymetrix platform and downloaded from the NCBI Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE7696), was treated equivalently. This data set contained gene expression data for 80 GBM and four non-neoplastic samples, and was chosen because of the availability of patients' treatment information.

Sample selection

To ensure that treatment did not introduce confounding effects, samples from patients that received adjuvant chemotherapy and radiation and had uncensored days to death or progression were selected. Figure 1 shows the sample selection for the identification of genes with prognostic value. Four hundred fifteen patients received the standard GBM treatment. By definition, tumour progression is a radiologically documented increase in tumour size after a subtotal surgical excision [18]. The time for this to occur is known as the time to progression, which is the same as uncensored progression-free survival (PFS) [19]. Two hundred one patients had associated uncensored progression-free survival (PFS) times, and 380 had overall survival (OS) times (censored or uncensored).

Sample selection for the identification of prognostic genes in glioblastoma multiforme. PFS: progression-free survival (days); OS: overall survival (days); adjuvant treatment: chemotherapy and radiation

Clinical data for all the patients used in this study were obtained from TCGA. PFS times for patients who experienced tumour progression within the follow-up period were obtained from the TCGA file for new tumour events. The GBM subtypes of samples used in our study were obtained from the supplementary clinical file provided by Brennan et al. (2013) [20]. There is no standard for classifying patients as rapid and slow GBM progressors after standard treatment.
While the median PFS after treatment could be used as a separation point, it does not provide a 'buffer zone' to filter out borderline samples close to the median that may fall in the incorrect group due to unknown confounding factors. Rather than defining an arbitrary exclusion range, we used the first (Q1) and third (Q3) quartiles, 120 and 341 days respectively, as boundaries to divide patients into three classes, since they are still dependent on the median and not influenced by extreme outliers. Class 1 contained 48 patients having PFS times between 6 and 120 days (rapidly-progressing) and class 2 contained 35 patients having PFS times between 358 and 720 days (slowly-progressing). Classes 1 and 2 were used in feature selection, and the 118 remaining samples (Class 3) that fell within the inter-quartile range were used in PFS and OS analysis.

Selection of genes discriminating between rapidly- and slowly-progressing GBM patients

In the present study, genes in cancer-related pathways were considered in our feature selection because of their known roles in chemoradiation resistance, and to reduce the likelihood of selecting genes related to survival by chance. Studies have identified pathways and processes that drive resistance to chemotherapy and radiotherapy in cancer. Several of these genes are found in known cancer pathways [21–28]. Several genes in the NF-κB and PI3K/Akt signaling pathways are associated with chemoresistance development in cancer [29, 30]. Also, genes involved in drug inactivation and efflux, DNA repair, and epithelial-mesenchymal transition have been shown to enhance drug resistance mechanisms [26, 31]. Pathway enrichment analysis was performed on the genes with high expression variance (median absolute deviation ≥ 0.5) across the 529 samples using the Set Analyser web service provided by the Comparative Toxicogenomics Database [32].
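The expression-variance filter just described (median absolute deviation ≥ 0.5 across samples) is simple to sketch in code. The gene names and expression values below are invented purely for illustration:

```python
# Sketch of the MAD-based expression-variance filter: genes whose median
# absolute deviation across samples is >= 0.5 are retained for pathway
# enrichment. Gene names and values here are hypothetical.
from statistics import median

def mad(values):
    """Median absolute deviation: median of |x - median(x)|."""
    m = median(values)
    return median(abs(x - m) for x in values)

genes = {
    "GENE_A": [1.0, 1.1, 0.9, 1.0],   # hypothetical, nearly constant expression
    "GENE_B": [0.2, 2.5, 0.4, 3.1],   # hypothetical, highly variable expression
}
kept = [g for g, expr in genes.items() if mad(expr) >= 0.5]
print(kept)  # ['GENE_B']
```

Note that MAD, unlike variance, is robust to outlying samples, which matters when a handful of profiles are noisy.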
Genes were selected from the pathway categories related to cancer signaling pathways, reactive oxygen species metabolism, DNA repair, and drug transport and metabolism. A set of genes that discriminated between the rapidly-progressing and slowly-progressing groups was selected using a modified Support Vector Machine-Recursive Feature Elimination (SVM-RFE). SVM-RFE, proposed by [33], was modified by introducing 5-fold cross-validation into the SVM classifier step and capturing the error rate generated at this step (the figure showing the workflow for SVM-RFE is attached as Additional file 2).

The 118 Class 3 patients not used in the feature selection step were used to calculate regression coefficients (β) for the selected genes using univariate Cox proportional hazards analysis. The β's were computed for the genes using coxph from the R survival package. A prognostic index, PI, was then calculated for each of the patients who received adjuvant chemotherapy and radiation and had PFS and/or OS data using the equation

$$PI = \beta_{1} \cdot gene_{1} + \beta_{2} \cdot gene_{2} + \ldots + \beta_{g} \cdot gene_{g}$$

where $\beta_{g}$ and $gene_{g}$ are the regression coefficient and the gene expression value for gene g, respectively. Patients in Class 3 were classified into low-risk and high-risk groups by choosing a value between the highest and lowest PI that ensured a proper distribution of patients based on PI. Patients with PI scores greater than or equal to the chosen value were assigned to the high-risk group, whereas those with PI scores less than the value were assigned to the low-risk group. The 380 patients with OS times were also classified into low-risk and high-risk groups in the same way.

Assessment of signature prognostic value in GBM subtypes

Verhaak et al. (2010) [15] identified four subtypes of GBM, namely proneural, neural, classical and mesenchymal, using gene expression data from 200 GBM samples. Brennan et al.
(2013) [20] assigned an additional 342 TCGA samples to the four subtypes using single-sample gene set enrichment analysis. A summarised clinical file provided by the authors was used in our study to assign patients to GBM subtypes. 95, 60, 105 and 120 of the 380 patients with available OS times were assigned to the proneural, neural, classical and mesenchymal subtypes, respectively. 51, 33, 51 and 66 of the 201 patient group having associated PFS times were assigned to the proneural, neural, classical and mesenchymal subtypes, respectively. We further categorised patients in each subtype into low-risk and high-risk groups.

Assessment of signature prognostic value in an independent dataset

The prognostic value of the selected gene signature was validated with the data from patients in the Murat et al. [34] validation dataset who had primary tumours and received adjuvant chemo- and radiotherapy. PI was calculated for the patients using the β's obtained from the TCGA training set and the expression values of the selected genes in the samples from the patients. They were classified into low-risk and high-risk groups in such a manner as to ensure a proper patient distribution between the two groups. Survival of the low-risk and high-risk groups was determined for both the TCGA and validation cohorts using the Kaplan-Meier method. Differences in survival between the risk groups were estimated statistically by the log-rank test. Survival differences between groups were considered statistically significant if p<0.05. Hazard ratios (HR) between risk groups were determined by the Cox proportional hazards regression model.

Multivariate survival analysis to assess independent prognostic value

A multivariate Cox survival model was built using three variables: our prognostic index, MGMT promoter methylation, and age. Ages of patients at diagnosis were obtained from the clinical file provided by TCGA.
MGMT promoter methylation status data were obtained from the clinical file provided by Brennan et al. [20] The univariate Cox analysis was first carried out on each variable, followed by multivariate Cox analysis on all the variables. The coxph function in the R survival package was used for the analysis. Using the median PI value, the patients were assigned to low-risk or high-risk groups: those with PI values lower than the median were assigned to the low-risk group, and those with higher PI values to the high-risk group. The low-risk and the MGMT methylated promoter groups were used as references for the prognostic index and MGMT promoter methylation status, respectively. Correlation of variables with PFS and OS was considered statistically significant at p(Wald)<0.05.

Identifying functional interactions between signature genes

We used the STRING database of known and predicted protein-protein interactions (https://string-db.org/) [35] to construct an interaction network for the signature genes and to perform KEGG pathway enrichment analysis on the derived subnetwork.

In the present study, pathway-based and modified SVM-RFE-based methods were used to select a set of genes that discriminated between rapidly- and slowly-progressing GBM patients, which were combined into a signature. The prognostic value of the signature in predicting PFS and OS was assessed in the risk groups of GBM patients and validated on a data set from an independent study. The independence of the signature in predicting PFS and OS was assessed by a multivariate Cox proportional hazards analysis.

Studies on the identification of protein-coding multigene prognostic signatures in GBM have focused on OS [7–9]. Overall survival (OS) is dependent on other factors besides gene expression. Progression-free survival, on the other hand, is expected to be a function of the expression of certain key genes. Genes whose expression across a cohort of patients correlated with OS were selected for survival analysis in these previous studies.
This method has been shown to produce inconsistent signature genes in different data sets [36, 37].

35 genes discriminate between rapidly- and slowly-progressing GBM patients

GBM is a highly aggressive brain tumour, and the median survival of patients with GBM is 14.6 months [38]. We hypothesized that the tumour's pre-treatment expression of genes in pathways associated with chemoradioresistance in cancer would be predictive of how rapidly a GBM patient would experience progression after standard treatment. Signaling pathways (MAPK, JAK/STAT, WNT, NOTCH, Hedgehog, PI3K/AKT), the cell cycle, drug transporters, reactive oxygen species metabolism and the DNA repair system are known to be involved in chemoradioresistance in cancer [29, 39–41]. We also reasoned that PFS times were more appropriate than OS times for grouping patients. PFS times were expected to be more closely related to the expression of key genes, while other factors including age and treatment after disease progression are also associated with OS.

Pathway enrichment analysis was performed on 3899 genes (Additional file 2) that had varied expression (MAD≥0.5) across the 529 GBM samples. 18 of the 159 gene sets from the enrichment analysis were annotated for the known chemoradioresistance-associated pathways (Table 1). Assessment of batch effect in the TCGA expression data set from the 529 GBM samples as a function of the 356 genes extracted from the pathways (Additional file 3) showed that the data set did not have a significant batch effect (p=0.118). Inspection of the unguided principal component analysis plot of the first two principal components also showed that no batch effect was present (Additional file 4). The extracted genes were used in gene selection by the modified SVM-RFE.

Our modified SVM-RFE was used to identify genes that discriminated between 48 rapidly-progressing patients (between 6 and 120 days PFS) and 35 slowly-progressing patients (between 358 and 720 days PFS).
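The quartile-based assignment of patients to the two training classes and the held-out buffer class can be sketched as below. This is a simplification: the cut-offs are the reported Q1/Q3 values (120 and 341 days), while the paper gives the observed class ranges as 6–120 and 358–720 days, and the helper name is our own:

```python
# Illustrative sketch of the three-class split by PFS quartiles.
# Q1/Q3 are the values reported in the text; assign_class is a
# hypothetical helper, not from the original study's code.

Q1, Q3 = 120, 341  # first and third quartiles of PFS, in days

def assign_class(pfs_days):
    """Class 1: rapid progressors (PFS <= Q1).
    Class 2: slow progressors (PFS > Q3).
    Class 3: inter-quartile 'buffer zone', held out for survival analysis."""
    if pfs_days <= Q1:
        return 1
    if pfs_days > Q3:
        return 2
    return 3

print(assign_class(90), assign_class(400), assign_class(200))  # 1 2 3
```

Classes 1 and 2 feed the feature-selection step; Class 3 is reserved so that survival analysis is not performed on the same samples used to pick the genes.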
Figure 2 shows the plot of the 5-fold cross-validation error rate against the number of genes at each recursive step, starting with the 356 genes extracted from the pathways. The CV error rate decreased with a decreasing number of genes until it reached 35 genes, which discriminated between rapidly- and slowly-progressing GBM patients at 100% accuracy. Further decreases in the number of genes resulted in an increasing error rate.

Cross-validated error rates of R-SVM at each recursive step. *The number of features used for SVM classification in each step. Parameters for SVM: kernel = linear, cost = 10, and 5-fold cross-validation. The red star represents the level at which the minimal cross-validation error was achieved

Table 1 Selected pathway categories associated with chemoradiation resistance by pathway enrichment analysis on genes with high expression variance

The PFS times and expression levels of the selected genes in the 118 Class 3 patients were used in univariate Cox regression analysis to compute the β's for the genes. Table 2 shows the β's calculated for the 35 selected genes. PI scores were calculated for all 380 patients who received adjuvant chemotherapy and radiation by substituting the β's and expression levels of the selected genes into the prognostic index formula. The scores were then used to classify samples into low- and high-risk groups in survival analysis.

Table 2 Correlation of the expression of the 35 signature genes with progression-free survival using a univariate Cox model

All the seed pathways in Table 1 except mismatch repair had at least one representative in the signature. The cell cycle had the highest number of genes (eight), followed by the WNT pathway, which had five. The expression of four of the selected genes was significantly correlated with PFS (p<0.05): DKK1, FZD7, and PPARGC1A showed positive correlation (β>0), and CCNE1 displayed negative correlation (β<0) (Table 2).
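The shape of the recursive elimination loop behind Figure 2 can be sketched generically: score the current feature set, drop the lowest-ranked feature, and keep whichever set achieved the lowest recorded error. In the paper the score is the 5-fold cross-validated SVM error and the ranking comes from SVM weights; both are replaced below by illustrative stand-ins so the skeleton runs standalone:

```python
# Generic skeleton of a recursive feature elimination loop with
# cross-validation error tracking. cv_error and the per-feature weights
# are hypothetical stand-ins for the paper's 5-fold CV SVM error and
# SVM-derived feature ranks.

def rfe(features, cv_error, weight):
    """Repeatedly drop the lowest-weight feature; return the feature
    set with the lowest cv_error seen, plus that error."""
    best_set, best_err = list(features), cv_error(features)
    current = list(features)
    while len(current) > 1:
        current.remove(min(current, key=weight))  # eliminate lowest-ranked feature
        err = cv_error(current)
        if err < best_err:
            best_set, best_err = list(current), err
    return best_set, best_err

# Illustrative stand-ins: error is minimised at a 3-feature subset.
weights = {"g1": 0.9, "g2": 0.8, "g3": 0.7, "g4": 0.1, "g5": 0.05}
cv_error = lambda fs: abs(len(fs) - 3) / 10
best, err = rfe(list(weights), cv_error, weights.get)
print(sorted(best), err)  # ['g1', 'g2', 'g3'] 0.0
```

In the study the same pattern yields the error curve of Figure 2, with the minimum reached at 35 genes.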
Several signature genes are linked to survival in other cancers

Several genes in the signature have been reported to be associated with progression-free and/or overall survival in other cancers. DKK1, FZD3, FZD7, SFRP1, and SFRP4 are regulators of the Wnt/β-catenin pathway. Overexpression of DKK1 is predictive of unfavourable overall survival and time to recurrence in intrahepatic cholangiocarcinoma patients [42]. Overexpression of FZD3 in colorectal cancer patients was correlated with poor survival [43]. Underexpression of SFRP1 is associated with poor survival and may be an independent predictive and prognostic factor for prostate cancer [44]. SFRP4 increased the sensitivity of ovarian cancer cell lines to cisplatin, suggesting it is a predictive marker of chemoresistance in the cancer [45].

CCNA1, CCND1, CCNE1, CDC6, CDK2, CDKN1C and CDKN2A regulate the cell cycle. CCND1 amplification was associated with poor prognosis in estrogen receptor positive breast cancer [46], and [47] found it to be an independent prognostic factor in both primary tumours and metastases. CDC6 expression was correlated with overall and recurrence survival in non-small cell lung cancer patients [48]. CDKN2A promoter methylation was correlated with poor prognosis in colorectal cancer patients [49, 50]. CDK2, regulated by CDKN2A, is a known oncogene and regulator of the cell cycle. Its regression coefficient (β<0) in our study, however, showed that it was positively associated with progression-free survival. Its overexpression was associated with shorter survival in oral cancer [51]. GADD45G is implicated in stress signaling responses to physiological or environmental stressors, resulting in cell cycle arrest, DNA repair, cell survival and senescence, or apoptosis [52, 53].
GADD45G methylation and protein expression were independently associated with survival of gastric cardia adenocarcinoma patients [54] and esophageal squamous cell carcinoma patients [55].

The 35-gene signature predicts progression-free and overall survival in both TCGA and an independent dataset

The 35 genes that discriminated between rapidly- and slowly-progressing patients were combined into a signature, and its prognostic value was first assessed in the patients that were not used in the feature selection step (Class 3). The prognostic index (PI) scores of these patients were standardized and used to split the patients into low- and high-risk groups. Figures 3a and 3b show the PFS and OS Kaplan-Meier plots, respectively, for the two prognostic groups. The median PFS and OS times for the low-risk group (256 days, 95% CI = 232 - 299 days and 635 days, 95% CI = 502 - 1024 days) were significantly higher than those of the high-risk group (175 days, 95% CI = 158 - 204 days and 393 days, 95% CI = 345 - 454 days) (p<0.05).

Kaplan-Meier plots for low-risk and high-risk groups of GBM patients that received adjuvant chemotherapy and radiotherapy. The patients were classified based on PI score. a PFS plots and b OS plots of risk groups from the 118 TCGA patients not used in the feature selection. c OS plots of risk groups from the 380 TCGA patients with OS times. d OS plots of risk groups from the Murat et al. data set used for validation. The two numbers in the top right corner of each plot represent the total number of patients in each risk group and the number of patients who experienced progression or death within the follow-up period, respectively

Two hundred seventy-nine of the 380 patients who received adjuvant chemotherapy and radiotherapy died before the end of the follow-up period. The remaining 101 patients were alive at the end of follow-up or were lost to follow-up. The 380 patients were split into low- and high-risk groups.
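The prognostic-index stratification behind these risk groups is a weighted sum followed by a threshold. A minimal sketch, with Cox coefficients, expression values and the threshold all invented for illustration:

```python
# Sketch of prognostic-index (PI) risk stratification:
# PI = beta_1*gene_1 + beta_2*gene_2 + ... + beta_g*gene_g, then patients
# at or above a chosen threshold are called high-risk. All numbers here
# are hypothetical, not the study's fitted coefficients.

def prognostic_index(betas, expression):
    """Weighted sum of expression values by Cox regression coefficients."""
    return sum(b * e for b, e in zip(betas, expression))

def risk_group(pi, threshold):
    return "high-risk" if pi >= threshold else "low-risk"

betas = [0.8, -0.5, 0.3]        # hypothetical Cox coefficients (3 genes)
patient = [1.2, 2.0, 0.5]       # hypothetical expression values
pi = prognostic_index(betas, patient)
print(round(pi, 2), risk_group(pi, 0.0))  # 0.11 high-risk
```

A positive coefficient means higher expression pushes the patient toward the high-risk group; a negative coefficient is protective, mirroring the signs of β in Table 2.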
Figure 3c shows the OS plots for these prognostic groups. There was a statistically significant difference in OS between the groups (p<0.05). The median OS time (548 days, 95% CI = 486 - 646) of the low-risk group was significantly higher than that (442 days, 95% CI = 394 - 476) of the high-risk group (p<0.05).

Thirty-nine patients in the validation cohort received adjuvant chemotherapy and radiation. The β's computed with the TCGA cohort and the expression levels of the signature genes in the validation cohort were used to calculate PI scores for the patients in the validation cohort. The patients were then split into low- and high-risk groups. The median OS of the low-risk group was higher than that of the high-risk group, and the difference in OS between the groups was statistically significant (p<0.05) (Fig. 3d). The results show that the 35-gene signature identified from the TCGA dataset may be a generically applicable predictor of progression-free and overall survival in GBM, since its prognostic value in the prediction of overall survival was validated in an independent cohort.

The 35-gene signature predicts progression-free and overall survival in four GBM subtypes

The prognostic value of the signature in predicting PFS and OS in subtypes of GBM was assessed. 51, 51, 33 and 66 patients belonged to the classical, proneural, neural, and mesenchymal subtypes, respectively. Figure 4 shows the results of the PFS survival analysis in the subtypes. There was a statistically significant difference in survival between low- and high-risk groups in all the subtypes (p<0.05). In the classical subtype, the median PFS times of the low- and high-risk groups were 256 and 186 days, respectively. In the mesenchymal subtype, the median PFS times were 269 and 146 days, respectively. In the neural subtype, the median PFS times were 358 and 172 days, respectively. In the proneural subtype, the median PFS times were 304 and 172 days, respectively.
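The Kaplan-Meier curves compared throughout can be reproduced with the standard product-limit formula, S(t) = Π (1 − dᵢ/nᵢ) over event times. A minimal pure-Python sketch with invented survival data (1 = event, 0 = censored):

```python
# Minimal Kaplan-Meier (product-limit) estimator. The survival times and
# event indicators below are made up for illustration only.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time t.
    events[i] is 1 if subject i had the event at times[i], 0 if censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data if tt == t)       # subjects leaving at t
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            surv *= 1 - deaths / n_at_risk               # product-limit update
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve

times = [120, 175, 175, 256, 300, 393]   # hypothetical survival times (days)
events = [1, 1, 0, 1, 0, 1]              # 1 = progression/death, 0 = censored
print(kaplan_meier(times, events))
```

The log-rank test used in the paper then compares two such curves; censored subjects contribute to the at-risk counts without ever triggering a step down.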
Kaplan-Meier progression-free survival plots for risk groups of patients in each subtype of GBM. The patients were classified based on PI score. The two numbers in the top right corner of each plot represent the total number of patients in each risk group and the number of patients who experienced progression or death within the follow-up period, respectively

One hundred five classical, 95 proneural, 60 neural and 120 mesenchymal subtype patients were used for subtype-specific OS analysis. Figure 5 shows the Kaplan-Meier OS plots for high-risk and low-risk groups in each subtype. The low- and high-risk groups differed significantly in OS in all the subtypes (p<0.05). In the classical subtype, the median OS times of the low- and high-risk groups were 544 and 452 days, respectively. In the mesenchymal subtype, the median OS times were 485 and 394 days, respectively. In the neural subtype, the median OS times were 476 and 435 days, respectively. In the proneural subtype, the median OS times were 748 and 395 days, respectively.

Reports from previous studies show that the prognostic value of MGMT promoter methylation in GBM patients is controversial. Zhang et al. [56] showed that MGMT promoter methylation was associated with better PFS and OS in patients with GBM regardless of therapeutic intervention, and associated with longer OS in GBM patients treated with alkylating agents. Costa et al. [5] did not find a significant association between MGMT promoter methylation and the outcome of Portuguese GBM patients treated with temozolomide. Brennan et al. [20], however, reported that MGMT promoter methylation was only correlated with OS in the GBM classical subtype. A possible explanation for these conflicting reports on the prognostic value of MGMT promoter methylation could thus be differences in GBM subtype distribution, which was not considered in most previous studies.
Our 35-gene signature, however, predicted PFS and OS regardless of the subtype, suggesting that it may be a more effective predictor of overall and progression-free survival in GBM.

Kaplan-Meier overall survival plots for risk groups of patients in each subtype of GBM. The patients were classified based on PI score. The two numbers in the top right corner of each plot represent the total number of patients in each risk group and the number of patients who experienced progression or death within the follow-up period, respectively

The 35-gene signature is an independent predictor of PFS and OS in GBM patients

A multivariate Cox regression model analysis involving the prognostic index, age and MGMT promoter methylation was carried out to assess the independence of the gene signature in predicting PFS and OS. 79 TCGA GBM patients had associated days to progression, and age and MGMT promoter methylation status (38 methylated and 41 unmethylated) data. Two hundred sixty-nine patients had days to death and age and MGMT promoter methylation (135 methylated and 134 unmethylated) data. The results from the univariate and multivariate analyses on the three variables are shown in Table 3. MGMT promoter methylation was not correlated with PFS in either the univariate or the multivariate Cox analysis (p>0.05). Prognostic index, age and MGMT promoter methylation were significantly correlated with OS in the univariate and multivariate analyses (p<0.05). The univariate Cox proportional hazards analysis showed that age and the prognostic index based on the 35-gene signature were both significantly correlated with PFS (p<0.05), but only the prognostic index was significantly correlated with PFS in the multivariate analysis (p<0.05). This showed that the expression signature is an independent predictor of PFS and OS in GBM patients.
Table 3 Univariate and multivariate Cox proportional hazards model analyses of prognostic factors for progression-free and overall survival

Post-treatment tumour progression depends largely on alterations in classical cancer and chemotherapy/radiation resistance-related pathways. This is supported by the findings from the multivariate Cox proportional hazards analysis, as only the 35-gene prognostic index was significantly associated with PFS and was an independent predictor of PFS. Overall survival, on the other hand, is determined by many factors. Age at diagnosis is one of the most important factors associated with overall survival in cancer, as has been demonstrated in GBM [57–59]. While the prognostic value of MGMT promoter methylation in GBM remains controversial, our findings showed that the prognostic index, age and MGMT promoter methylation are all independent prognostic factors for overall survival.

Signature genes belong to a functional interaction subnetwork enriched for known cancer pathways

A subnetwork generated from the interactions between the signature genes had significantly more interactions than would be expected for a random set of proteins of similar size (PPI enrichment p=1.11×10−16) (Fig. 6). The network was also significantly enriched (p<0.01) for KEGG cancer pathways and pathways known to drive tumour initiation and progression, such as the cell cycle and PI3K-Akt, Wnt, p53 and Ras signaling [60, 61].

Analysis of the subnetwork formed from the interaction between signature genes. a The subnetwork from the STRING database. b Enriched pathways in the subnetwork

A subset of the signature genes may be relevant to GBM biology and may have utility in drug discovery

Combinatorial medicine has been proposed for the treatment of tumour recurrence.
It involves therapeutically targeting as many of the genomic alterations responsible for a disease in a patient as possible and has strong implications for overcoming the challenge of tumour progression and drug resistance [62, 63]. One way to overcome this challenge is to prioritise combinations of genes to be targeted based on their unique roles in tumour progression. Of the signature genes, only ABL1, CCND1, CCNE1, PDGFRA and PIK3CA were found to be linked to predisposition to at least one cancer by the Online Mendelian Inheritance in Man (OMIM) database [64]. However, CCNA1, CDK2, CDKN1C, CDKN2A, FZD3, HSPA1B, IGFBP3, PDGFRA, PIK3CA, PLA2G5, THBS2 and VEGFA all have gene ontology annotations related to apoptosis, while ABL1, FZD7, PDGFRA, PIK3CA, SFRP1, THBS2, and VEGFA are annotated as being involved in angiogenesis (data not shown). Collectively, this may indicate differential gene expression explicitly directed towards resisting induced cell death by both intrinsic and extrinsic factors and optimising the tumour microenvironment for maximum fitness. This, combined with the knowledge that the signature genes are involved in classical pathways implicated in cancer drug resistance, suggests that the highlighted genes should be further validated and assessed as drug targets in designing novel combinatorial therapies for GBM in future studies.

We propose that the performance of the signature makes it an attractive candidate for further studies to assess its utility as a clinical prognostic and predictive biomarker in GBM patients, and that its component genes may also have utility as therapeutic targets for improving both progression-free and overall survival.

References

Hegi ME, Diserens AC, Gorlia T, Hamou MF, de Tribolet N, Weller M, Kros JM, Hainfellner JA, Mason W, Mariani L, Bromberg JEC, Hau P, Mirimanoff RO, Cairncross JG, Janzer RC, Stupp R. MGMT Gene Silencing and Benefit from Temozolomide in Glioblastoma. N Engl J Med. 2005; 352(10):997–1003.
https://doi.org/10.1056/NEJMoa043331. Kim YS, Kim SH, Cho J, Kim JW, Chang JH, Kim DS, Lee KS, Suh CO. MGMT gene promoter methylation as a potent prognostic factor in glioblastoma treated with temozolomide-based chemoradiotherapy: a single-institution study. Int J Radiat Oncol Biol Phys. 2012; 84(3):661–7. https://doi.org/10.1016/j.ijrobp.2011.12.086. Shen D, Liu T, Lin Q, Lu X, Wang Q, Lin F, Mao W. MGMT Promoter Methylation Correlates with an Overall Survival Benefit in Chinese High-Grade Glioblastoma Patients Treated with Radiotherapy and Alkylating Agent-Based Chemotherapy: A Single-Institution Study. PLoS ONE. 2014; 9(9):107558. https://doi.org/10.1371/journal.pone.0107558. Melguizo C, Prados J, González B, Ortiz R, Concha A, Alvarez PJ, Madeddu R, Perazzoli G, Oliver JA, López R, Rodríguez-Serrano F, Aránega A. MGMT promoter methylation status and MGMT and CD133 immunohistochemical expression as prognostic markers in glioblastoma patients treated with temozolomide plus radiotherapy. J Transl Med. 2012; 10(1):250. https://doi.org/10.1186/1479-5876-10-250. Costa BM, Caeiro C, Guimarães I, Martinho O, Jaraquemada T, Augusto I, Castro L, Osório L, Linhares P, Honavar M, Resende M, Braga F, Silva A, Pardal F, Amorim J, Nabiço R, Almeida R, Alegria C, Pires M, Pinheiro C, Carvalho E, Lopes JM, Costa P, Damasceno M, Reis RM. Prognostic value of MGMT promoter methylation in glioblastoma patients treated with temozolomide-based chemoradiation: a Portuguese multicentre study. Oncol Rep. 2010; 23(6):1655–62. Yin A-a, Zhang L-h, Cheng J-x, Dong Y, Liu B-l, Han N, Zhang X. The Predictive but Not Prognostic Value of MGMT Promoter Methylation Status in Elderly Glioblastoma Patients: A Meta-Analysis. PLoS ONE. 2014; 9(1):85102. https://doi.org/10.1371/journal.pone.0085102. Goldstein LJ, Gray R, Badve S, Childs BH, Yoshizawa C, Rowley S, Shak S, Baehner FL, Ravdin PM, Davidson NE, Sledge GW, Perez EA, Shulman LN, Martino S, Sparano JA. 
Prognostic Utility of the 21-Gene Assay in Hormone Receptor–Positive Operable Breast Cancer Compared With Classical Clinicopathologic Features. J Clin Oncol. 2008; 26(25):4063–71. https://doi.org/10.1200/JCO.2007.14.4501. Paik S, Shak S, Tang G, Kim C, Baker J, Cronin M, Baehner FL, Walker MG, Watson D, Park T, Hiller W, Fisher ER, Wickerham DL, Bryant J, Wolmark N. A Multigene Assay to Predict Recurrence of Tamoxifen-Treated, Node-Negative Breast Cancer. N Engl J Med. 2004; 351(27):2817–26. https://doi.org/10.1056/NEJMoa041588. Acs G, Kiluk J, Loftus L, Laronga C. Comparison of Oncotype DX and Mammostrat risk estimations and correlations with histologic tumor features in low-grade, estrogen receptor-positive invasive breast carcinomas. Mod Pathol. 2013; 26(11):1451–60. https://doi.org/10.1038/modpathol.2013.88. Ring BZ, Seitz RS, Beck R, Shasteen WJ, Tarr SM, Cheang MCU, Yoder BJ, Budd GT, Nielsen TO, Hicks DG, Estopinal NC, Ross DT. Novel prognostic immunohistochemical biomarker panel for estrogen receptor-positive breast cancer. J Clin Oncol Off J Am Soc Clin Oncol. 2006; 24(19):3039–47. https://doi.org/10.1200/JCO.2006.05.6564. van 't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH. Gene expression profiling predicts clinical outcome of breast cancer. Nature. 2002; 415(6871):530–6. https://doi.org/10.1038/415530a. Soneson C, Gerster S, Delorenzi M. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation. PLoS ONE. 2014; 9(6):100335. https://doi.org/10.1371/journal.pone.0100335. Reese SE, Archer KJ, Therneau TM, Atkinson EJ, Vachon CM, de Andrade M, Kocher J-PA, Eckel-Passow JE. A new statistic for identifying batch effects in high-throughput genomic data that uses guided principal component analysis. Bioinformatics. 2013; 29(22):2877–83. https://doi.org/10.1093/bioinformatics/btt480. 
TCGA: Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature. 2008; 455(7216):1061–8. https://doi.org/10.1038/nature07385. Verhaak RGW, Hoadley KA, Purdom E, Wang V, Qi Y, Wilkerson MD, Miller CR, Ding L, Golub T, Mesirov JP, Alexe G, Lawrence M, O'Kelly M, Tamayo P, Weir BA, Gabriel S, Winckler W, Gupta S, Jakkula L, Feiler HS, Hodgson JG, James CD, Sarkaria JN, Brennan C, Kahn A, Spellman PT, Wilson RK, Speed TP, Gray JW, Meyerson M, Getz G, Perou CM, Hayes DN. An integrated genomic analysis identifies clinically relevant subtypes of glioblastoma characterized by abnormalities in PDGFRA, IDH1, EGFR and NF1. Cancer Cell. 2010; 17(1):98. https://doi.org/10.1016/j.ccr.2009.12.020. Kahn AB, Ryan MC, Liu H, Zeeberg BR, Jamison DC, Weinstein JN. SpliceMiner: a high-throughput database implementation of the NCBI Evidence Viewer for microarray splice variant analysis. BMC Bioinformatics. 2007; 8(1):75. https://doi.org/10.1186/1471-2105-8-75. Liu H, Zeeberg BR, Qu G, Koru AG, Ferrucci A, Kahn A, Ryan MC, Nuhanovic A, Munson PJ, Reinhold WC, Kane DW, Weinstein JN. AffyProbeMiner: a web resource for computing or retrieving accurately redefined Affymetrix probe sets. Bioinformatics. 2007; 23(18):2385–90. https://doi.org/10.1093/bioinformatics/btm360. Iacob G, Dinca E. Current data and strategy in glioblastoma multiforme. J Med Life. 2009; 2(4):386–93. Tang PA, Bentzen SM, Chen EX, Siu LL. Surrogate end points for median overall survival in metastatic colorectal cancer: Literature-based analysis from 39 randomized controlled trials of first-line chemotherapy. J Clin Oncol. 2007; 25(29):4562–8. https://doi.org/10.1200/JCO.2006.08.1935.
Brennan CW, Verhaak RGW, McKenna A, Campos B, Noushmehr H, Salama SR, Zheng S, Chakravarty D, Sanborn JZ, Berman SH, Beroukhim R, Bernard B, Wu CJ, Genovese G, Shmulevich I, Barnholtz-Sloan J, Zou L, Vegesna R, Shukla SA, Ciriello G, Yung W, Zhang W, Sougnez C, Mikkelsen T, Aldape K, Bigner DD, Van Meir EG, Prados M, Sloan A, Black KL, Eschbacher J, Finocchiaro G, Friedman W, Andrews DW, Guha A, Iacocca M, O'Neill BP, Foltz G, Myers J, Weisenberger DJ, Penny R, Kucherlapati R, Perou CM, Hayes DN, Gibbs R, Marra M, Mills GB, Lander E, Spellman P, Wilson R, Sander C, Weinstein J, Meyerson M, Gabriel S, Laird PW, Haussler D, Getz G, Chin L. The Somatic Genomic Landscape of Glioblastoma. Cell. 2013; 155(2):462–77. https://doi.org/10.1016/j.cell.2013.09.034. Shtivelman E, Hensing T, Simon GR, Dennis PA, Otterson GA, Bueno R, Salgia R. Molecular pathways and therapeutic targets in lung cancer. Oncotarget. 2014; 5(6):1392–433. https://doi.org/10.18632/oncotarget.1891. Bagnyukova TV, Serebriiskii IG, Zhou Y, Hopper-Borge EA, Golemis EA, Astsaturov I. Chemotherapy and signaling: How can targeted therapies supercharge cytotoxic agents? Cancer Biol Ther. 2010; 10(9):839–53. https://doi.org/10.4161/cbt.10.9.13738. Riedel RF, Porrello A, Pontzer E, Chenette EJ, Hsu DS, Balakumaran B, Potti A, Nevins J, Febbo PG. A genomic approach to identify molecular pathways associated with chemotherapy resistance. Mol Cancer Ther. 2008; 7(10):3141–9. https://doi.org/10.1158/1535-7163.MCT-08-0642. Fojo T. Cancer, DNA repair mechanisms, and resistance to chemotherapy. J Natl Cancer Inst. 2001; 93(19):1434–6. https://doi.org/10.1093/jnci/93.19.1434. Sherman-Baust CA, Becker KG, Wood III WH, Zhang Y, Morin PJ. Gene expression and pathway analysis of ovarian cancer cells selected for resistance to cisplatin, paclitaxel, or doxorubicin. J Ovarian Res. 2011; 4:21. https://doi.org/10.1186/1757-2215-4-21.
Long J, Zhang Y, Yu X, Yang J, LeBrun D, Chen C, Yao Q, Li M. Overcoming Drug Resistance in Pancreatic Cancer. Expert Opin Ther Targets. 2011; 15(7):817–28. https://doi.org/10.1517/14728222.2011.566216. Pritchard JR, Lauffenburger DA, Hemann MT. Understanding resistance to combination chemotherapy. Drug Resist Updat. 2012; 15(5):249–57. https://doi.org/10.1016/j.drup.2012.10.003. Humphrey RW, Brockway-Lunardi LM, Bonk DT, Dohoney KM, Doroshow JH, Meech SJ, Ratain MJ, Topalian SL, Pardoll DM. Opportunities and challenges in the development of experimental drug combinations for cancer. J Natl Cancer Inst. 2011; 103(16):1222–6. https://doi.org/10.1093/jnci/djr246. Reuter S, Gupta SC, Chaturvedi MM, Aggarwal BB. Oxidative stress, inflammation, and cancer: How are they linked? Free Radic Biol Med. 2010; 49(11):1603–16. https://doi.org/10.1016/j.freeradbiomed.2010.09.006. West KA, Castillo SS, Dennis PA. Activation of the PI3k/Akt pathway and chemotherapeutic resistance. Drug Resist Updat Rev Commentaries Antimicrob Anticancer Chemother. 2002; 5(6):234–48. Housman G, Byler S, Heerboth S, Lapinska K, Longacre M, Snyder N, Sarkar S. Drug Resistance in Cancer: An Overview. Cancers. 2014; 6(3):1769–92. https://doi.org/10.3390/cancers6031769. Davis AP, Murphy CG, Johnson R, Lay JM, Lennon-Hopkins K, Saraceni-Richards C, Sciaky D, King BL, Rosenstein MC, Wiegers TC, Mattingly CJ. The Comparative Toxicogenomics Database: update 2013. Nucleic Acids Res. 2013; 41(Database issue):1104–14. https://doi.org/10.1093/nar/gks994. Guyon I, Weston J, Barnhill S, Vapnik V. Gene Selection for Cancer Classification using Support Vector Machines. Mach Learn. 2002; 46(1-3):389–422. https://doi.org/10.1023/A:1012487302797. Murat A, Migliavacca E, Gorlia T, Lambiv WL, Shay T, Hamou MF, de Tribolet N, Regli L, Wick W, Kouwenhoven MCM, Hainfellner JA, Heppner FL, Dietrich PY, Zimmer Y, Cairncross JG, Janzer RC, Domany E, Delorenzi M, Stupp R, Hegi ME.
Stem cell-related "self-renewal" signature and high epidermal growth factor receptor expression associated with resistance to concomitant chemoradiotherapy in glioblastoma. J Clin Oncol Off J Am Soc Clin Oncol. 2008; 26(18):3015–24. https://doi.org/10.1200/JCO.2007.15.7164. Szklarczyk D, Morris JH, Cook H, Kuhn M, Wyder S, Simonovic M, Santos A, Doncheva NT, Roth A, Bork P, Jensen LJ, von Mering C. The STRING database in 2017: quality-controlled protein–protein association networks, made broadly accessible. Nucleic Acids Res. 2017; 45:362–8. https://doi.org/10.1093/nar/gkw937. Zhao X, Rødland EA, Sørlie T, Naume B, Langerød A, Frigessi A, Kristensen VN, Børresen-Dale AL, Lingjærde OC. Combining gene signatures improves prediction of breast cancer survival. PLoS ONE. 2011; 6(3):17845. https://doi.org/10.1371/journal.pone.0017845. Lau SK, Boutros PC, Pintilie M, Blackhall FH, Zhu CQ, Strumpf D, Johnston MR, Darling G, Keshavjee S, Waddell TK, Liu N, Lau D, Penn LZ, Shepherd FA, Jurisica I, Der SD, Tsao MS. Three-gene prognostic classifier for early-stage non small-cell lung cancer. J Clin Oncol Off J Am Soc Clin Oncol. 2007; 25(35):5562–9. https://doi.org/10.1200/JCO.2007.12.0352. Stupp R, Mason WP, van den Bent MJ, Weller M, Fisher B, Taphoorn MJB, Belanger K, Brandes AA, Marosi C, Bogdahn U, Curschmann J, Janzer RC, Ludwin SK, Gorlia T, Allgeier A, Lacombe D, Cairncross JG, Eisenhauer E, Mirimanoff RO. European Organisation for Research and Treatment of Cancer Brain Tumor and Radiotherapy Groups, National Cancer Institute of Canada Clinical Trials Group. Radiotherapy plus concomitant and adjuvant temozolomide for glioblastoma. N Engl J Med. 2005; 352(10):987–96. https://doi.org/10.1056/NEJMoa043330. Jiang BH, Liu LZ. Role of mTOR in anticancer drug resistance. Drug Resist Updat Rev Commentaries Antimicrob Anticancer Chemother. 2008; 11(3):63–76. https://doi.org/10.1016/j.drup.2008.03.001. 
Niero EL, Rocha-Sales B, Lauand C, Cortez BA, de Souza MM, Rezende-Teixeira P, Urabayashi MS, Martens AA, Neves JH, Machado-Santelli GM. The multiple facets of drug resistance: one history, different approaches. J Exp Clin Cancer Res CR. 2014; 33(1):37. https://doi.org/10.1186/1756-9966-33-37. Martin HL, Smith L, Tomlinson DC. Multidrug-resistant breast cancer: current perspectives. Breast Cancer Targets Ther. 2014; 6:1–13. https://doi.org/10.2147/BCTT.S37638. Shi RY, Yang XR, Shen QJ, Yang LX, Xu Y, Qiu SJ, Sun YF, Zhang X, Wang Z, Zhu K, Qin WX, Tang ZY, Fan J, Zhou J. High expression of Dickkopf-related protein 1 is related to lymphatic metastasis and indicates poor prognosis in intrahepatic cholangiocarcinoma patients after surgery. Cancer. 2013; 119(5):993–1003. https://doi.org/10.1002/cncr.27788. Wong SCC, He CW, Chan CML, Chan AKC, Wong HT, Cheung MT, Luk LLY, Au TCC, Chiu MK, Ma BBY, Chan ATC. Clinical Significance of Frizzled Homolog 3 Protein in Colorectal Cancer Patients. PLoS ONE. 2013; 8(11):79481. https://doi.org/10.1371/journal.pone.0079481. Zheng L, Sun D, Fan W, Zhang Z, Li Q, Jiang T. Diagnostic Value of SFRP1 as a Favorable Predictive and Prognostic Biomarker in Patients with Prostate Cancer. PLoS ONE. 2015; 10(2):0118276. https://doi.org/10.1371/journal.pone.0118276. Saran U, Arfuso F, Zeps N, Dharmarajan A. Secreted frizzled-related protein 4 expression is positively associated with responsiveness to Cisplatin of ovarian cancer cell lines in vitro and with lower tumour grade in mucinous ovarian cancers. BMC Cell Biol. 2012; 13(1):25. https://doi.org/10.1186/1471-2121-13-25. Roy PG, Pratt N, Purdie CA, Baker L, Ashfield A, Quinlan P, Thompson AM. High CCND1 amplification identifies a group of poor prognosis women with estrogen receptor positive breast cancer. Int J Cancer. J Int Du Cancer. 2010; 127(2):355–60. https://doi.org/10.1002/ijc.25034. Seiler R, Thalmann GN, Rotzer D, Perren A, Fleischmann A. 
CCND1/CyclinD1 status in metastasizing bladder cancer: a prognosticator and predictor of chemotherapeutic response. Mod Pathol. 2014; 27(1):87–95. https://doi.org/10.1038/modpathol.2013.125. Allera-Moreau C, Rouquette I, Lepage B, Oumouhou N, Walschaerts M, Leconte E, Schilling V, Gordien K, Brouchet L, Delisle MB, Mazieres J, Hoffmann JS, Pasero P, Cazaux C. DNA replication stress response involving PLK1, CDC6, POLQ, RAD51 and CLASPIN upregulation prognoses the outcome of early/mid-stage non-small cell lung cancer patients. Oncogenesis. 2012; 1(10):30. https://doi.org/10.1038/oncsis.2012.29. Xing X, Cai W, Shi H, Wang Y, Li M, Jiao J, Chen M. The prognostic value of CDKN2a hypermethylation in colorectal cancer: a meta-analysis. Br J Cancer. 2013; 108(12):2542–8. https://doi.org/10.1038/bjc.2013.251. Maeda K, Kawakami K, Ishida Y, Ishiguro K, Omura K, Watanabe G. Hypermethylation of the CDKN2A gene in colorectal cancer is associated with shorter survival. Oncol Rep. 2003; 10(4):935–8. https://doi.org/10.3892/or.10.4.935. Mihara M, Shintani S, Nakahara Y, Kiyota A, Ueyama Y, Matsumura T, Wong DT. Overexpression of CDK2 is a prognostic indicator of oral cancer progression. Jpn J Cancer Res Gann. 2001; 92(3):352–60. Cretu A, Sha X, Tront J, Hoffman B, Liebermann DA. Stress sensor Gadd45 genes as therapeutic targets in cancer. Cancer Ther. 2009; 7(A):268–76. Zerbini LF, Libermann TA. GADD45 Deregulation in Cancer: Frequently Methylated Tumor Suppressors and Potential Therapeutic Targets. Clin Cancer Res. 2005; 11(18):6409–13. https://doi.org/10.1158/1078-0432.CCR-05-1475. Guo W, Dong Z, Guo Y, Chen Z, Kuang G, Yang Z. Methylation-mediated repression of GADD45a and GADD45g expression in gastric cardia adenocarcinoma. Int J Cancer J Int Du Cancer. 2013; 133(9):2043–53. https://doi.org/10.1002/ijc.28223. Guo W, Zhu T, Dong Z, Cui L, Zhang M, Kuang G.
Decreased expression and aberrant methylation of Gadd45g is associated with tumor progression and poor prognosis in esophageal squamous cell carcinoma. Clin Exp Metastasis. 2013; 30(8):977–92. https://doi.org/10.1007/s10585-013-9597-2. Zhang K, Wang X-q, Zhou B, Zhang L. The prognostic value of MGMT promoter methylation in Glioblastoma multiforme: a meta-analysis. Familial Cancer. 2013; 12(3):449–58. https://doi.org/10.1007/s10689-013-9607-1. Scott JG, Suh JH, Elson P, Barnett GH, Vogelbaum MA, Peereboom DM, Stevens GHJ, Elinzano H, Chao ST. Aggressive treatment is appropriate for glioblastoma multiforme patients 70 years old or older: a retrospective review of 206 cases. Neuro-Oncol. 2011; 13(4):428–36. http://dx.doi.org/10.1093/neuonc/nor005. Thumma SR, Fairbanks RK, Lamoreaux WT, Mackay AR, Demakas JJ, Cooke BS, Elaimy AL, Hanson PW, Lee CM. Effect of pretreatment clinical factors on overall survival in glioblastoma multiforme: a Surveillance Epidemiology and End Results (SEER) population analysis. World J Surg Oncol. 2012; 10:75. https://doi.org/10.1186/1477-7819-10-75. Bozdag S, Li A, Riddick G, Kotliarov Y, Baysan M, Iwamoto FM, Cam MC, Kotliarova S, Fine HA. Age-Specific Signatures of Glioblastoma at the Genomic, Genetic, and Epigenetic Levels. PLoS ONE. 2013; 8(4):62982. https://doi.org/10.1371/journal.pone.0062982. Feitelson MA, Arzumanyan A, Kulathinal RJ, Blain SW, Holcombe RF, Mahajna J, Marino M, Martinez-Chantar ML, Nawroth R, Sanchez-Garcia I, Sharma D, Saxena NK, Singh N, Vlachostergios PJ, Guo S, Honoki K, Fujii H, Georgakilas AG, Amedei A, Niccolai E, Amin A, Ashraf SS, Boosani CS, Guha G, Ciriolo MR, Aquilano K, Chen S, Mohammed SI, Azmi AS, Bhakta D, Halicka D, Nowsheen S. Sustained proliferation in cancer: mechanisms and novel therapeutic targets. Semin Cancer Biol. 2015; 35:25–54. https://doi.org/10.1016/j.semcancer.2015.02.006. Zhang J, Chen YH, Lu Q. Pro-oncogenic and anti-oncogenic pathways: opportunities and challenges of cancer therapy.
Futur Oncol. 2010; 6(4):587–603. https://doi.org/10.2217/fon.10.15. Al-Lazikani B, Banerji U, Workman P. Combinatorial drug therapy for cancer in the post-genomic era. Nat Biotechnol. 2012; 30(7):679–92. https://doi.org/10.1038/nbt.2284. Tang J, Karhinen L, Xu T, Szwajda A, Yadav B, Wennerberg K, Aittokallio T. Target inhibition networks: predicting selective combinations of druggable targets to block cancer survival pathways. PLoS Comput Biol. 2013; 9(9):1003226. https://doi.org/10.1371/journal.pcbi.1003226. Amberger JS, Bocchini CA, Schiettecatte F, Scott AF, Hamosh A. OMIM.org: Online Mendelian Inheritance in Man (OMIM®), an online catalog of human genes and genetic disorders. Nucleic Acids Res. 2015; 43:789–98. https://doi.org/10.1093/nar/gku1205.

The datasets supporting the conclusions of this article are available in the NCI Genomic Data Commons Data Portal (https://portal.gdc.cancer.gov/projects/TCGA-GBM) and in the NCBI Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE7696).

South African Bioinformatics Institute and SAMRC Unit for Bioinformatics Capacity Development, University of the Western Cape, Bellville, 7535, Western Cape, 7530, South Africa

Azeez A. Fatai & Junaid Gamieldien

AAF conceived and designed the experiments, and analyzed the data. AAF wrote the paper under JG's guidance. JG supervised all aspects of this work, and read, edited and approved the final manuscript. All authors read and approved the final manuscript. Correspondence to Junaid Gamieldien.

Additional file 1 Workflow of the modified SVM-RFE used for selecting a set of genes that discriminated between rapidly-progressing and slow-progressing GBM patients. (XLSX 66.7 kb)
Genes with high expression variance used in pathway enrichment analysis. (PDF 23.5 kb)
Unguided principal component analysis to identify batch effect in the TCGA data set as a function of genes from chemoradioresistance-associated pathways.
p=0.118, indicating absence of significant batch effect in the data. Samples in each batch are denoted by a different colour and symbol. (TXT 2.05 kb)
Unguided principal component analysis to assess batch effect. (PDF 161 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Fatai, A., Gamieldien, J. A 35-gene signature discriminates between rapidly- and slowly-progressing glioblastoma multiforme and predicts survival in known subtypes of the cancer. BMC Cancer 18, 377 (2018). https://doi.org/10.1186/s12885-018-4103-5

Keywords: Glioblastoma multiforme, Prognostic genes, Risk groups, Chemoradiation resistance pathways
A low-cost paper-based synthetic biology platform for analyzing gut microbiota and host biomarkers

Melissa K. Takahashi, Xiao Tan, Aaron J. Dy, Dana Braff, Reid T. Akana, Yoshikazu Furuta, Nina Donghia, Ashwin Ananthakrishnan & James J. Collins

Nature Communications volume 9, Article number: 3347 (2018)

There is a need for large-scale, longitudinal studies to determine the mechanisms by which the gut microbiome and its interactions with the host affect human health and disease. Current methods for profiling the microbiome typically utilize next-generation sequencing applications that are expensive, slow, and complex. Here, we present a synthetic biology platform for affordable, on-demand, and simple analysis of microbiome samples using RNA toehold switch sensors in paper-based, cell-free reactions. We demonstrate species-specific detection of mRNAs from 10 different bacteria that affect human health and four clinically relevant host biomarkers. We develop a method to quantify mRNA using our toehold sensors and validate our platform on clinical stool samples by comparison to RT-qPCR. We further highlight the potential clinical utility of the platform by showing that it can be used to rapidly and inexpensively detect toxin mRNA in the diagnosis of Clostridium difficile infections.
The gut microbiome is an essential contributor to numerous processes in human health and disease, including proper development of the immune system1, host responses to acute and chronic infections2,3, cardiovascular disease4, and drug metabolism5. It is also an important modulator of gastrointestinal function, including inflammatory bowel disease (IBD)6,7, childhood malnutrition8,9, and cancer immunotherapy treatment10,11. Increasing evidence suggests that host–microbiome interactions also play a key role in these health conditions12,13,14. Despite the progress made in our understanding of the overall gut microbiome and the roles of individual species, large-scale longitudinal studies are needed to more directly investigate the causal relationship between microbial and host changes during disease states and responses to treatment. Current methods for profiling the gut microbiome typically involve deep sequencing coupled with high-throughput bioinformatics. These techniques are expensive, slow, and require significant technical expertise to design, run, and interpret. To reduce costs, researchers often batch samples for sequencing, which can lead to significant increases in turn-around time. These limitations have severely restricted the large-scale prospective monitoring of patient cohorts that is necessary to provide more granular data on microbial changes and human health15. Here we present a synthetic biology platform that addresses the need for affordable, on-demand, and simple analysis of microbiome samples that can aid in monitoring large-scale patient cohorts. Our lab has developed a paper-based diagnostic platform for portable, low-cost detection of biologically relevant RNAs16,17. The platform is comprised of two synthetic biology technologies. The first technology is a molecular sensor called an RNA toehold switch that can be designed to bind and detect virtually any RNA sequence18. 
The second is an in vitro cell-free transcription–translation system that is freeze-dried onto paper disks for stable, long-term storage at room temperature16; upon rehydration, the cell-free system can execute any genetic circuit. We combined these two technologies to form an abiotic platform for rapid and inexpensive development and deployment of biological sensors. Recently, we reduced the limit of detection of this platform to three femtomolar (fM) by adding an isothermal RNA amplification step called NASBA (nucleic acid sequence based amplification)17. We demonstrated the utility of our platform in detecting the presence or absence of clinically relevant RNAs, including those of Ebola16 and Zika17 viruses, but we were not able to quantify their concentrations. Here, we address the need for affordable, on-demand, and simple analysis of microbiome samples by advancing our paper-based diagnostic platform for use as a research tool to quantify bacterial and host RNAs from stool samples (Fig. 1). To demonstrate the widespread applicability of our diagnostic platform, we select a panel of 10 bacteria relevant to diverse microbiome research studies. We first design toehold switch sensors that detect the V3 hypervariable region of the 16S ribosomal RNA (rRNA) for each species to mimic the standard method of identifying bacterial species through 16S ribosomal DNA sequencing. We then improve the specificity of detection by designing toehold switch sensors for species-specific mRNAs from each bacterial species, and demonstrate sensor orthogonality. Next, we develop a method that semi-quantitatively measures the concentrations of target RNAs using NASBA and toehold switch sensors, and validate this method against quantitative reverse transcription PCR (RT-qPCR) of clinical stool samples. 
We then develop toehold switch sensors to detect four host biomarkers, one of which, calprotectin, is well-established in clinical use19, and another of which, oncostatin M (OSM), may have an immediate impact on clinical decision-making in the treatment of IBD20. We validate our method against RT-qPCR using clinical samples from patients with IBD. Finally, we demonstrate an additional potential clinical application of our RNA detection platform using the example of Clostridium difficile infection (CDI), where differentiating active infection from passive colonization has been fraught with difficulty21. Our method shows markedly different toxin mRNA expression levels in two toxigenic C. difficile strains that would otherwise be indistinguishable by standard DNA-based qPCR diagnosis.

Fig. 1 Workflow for analysis of microbiome samples using our paper-based detection platform. Once key bacteria or mRNA targets have been identified, RNA toehold switch sensors and primers for isothermal RNA amplification are designed in silico. Sensors and primers are then rapidly assembled and validated in paper-based reactions. For subsequent use, total RNA is extracted from human fecal samples using a commercially available kit. Specific RNAs are amplified via NASBA (nucleic acid sequence based amplification) and quantified using arrays of toehold switch sensors in paper-based reactions. Microbial and host biomarker RNA concentrations of the samples are determined using a simple calibration curve

Development of toehold switch sensors to detect 16S rRNA

Toehold switch sensors are synthetic riboregulators that control the translation of a gene via RNA–RNA interactions. They utilize a designed hairpin structure to block gene translation in cis by sequestration of the ribosome binding site (RBS) and start codon.
Translation is activated upon the binding of a trans-acting trigger RNA to the toehold region of the switch, which relieves the sequestration of the RBS and allows translation of the downstream gene (Fig. 2a)18. Toehold switch sensors can be designed to bind nearly any RNA sequence.

Fig. 2 16S rRNA sensors. a Schematic of toehold switch sensor function. b Best performing toehold switch sensors targeting the V3 hypervariable region of 16S rRNA for each species. Data represent mean GFP production rates from paper-based reactions with sensor alone and sensor plus 36-nucleotide trigger RNA (2 μM). Error bars represent high and low values from three technical replicates. c Schematic of NASBA-mediated RNA amplification. d Evaluation of NASBA primers. NASBA reactions were performed on 1 ng of total RNA for 90 min. Outputs from NASBA reactions were used to activate toehold switch sensors in paper-based reactions. Data represent mean values of three technical replicates. Error bars represent high and low values of the three replicates. e Orthogonality of 16S sensors. Each sensor was challenged with 2 μM of NASBA trigger RNAs from each species representing what would be amplified in a NASBA reaction. GFP production rates for an individual sensor were normalized to the production rate of the sensor plus its cognate trigger (100%). Data represent mean values of six replicates (two biological replicates × three technical replicates). Full data and s.d. are shown in Supplementary Figure 3

We designed toehold switch sensors for a panel of 10 bacteria chosen for their relevance to IBD22,23, childhood malnutrition8,9, and cancer immunotherapy10,11. To start, we targeted the 16S rRNA, because 16S rDNA profiling is a standard method for identifying bacterial species and rRNA is present at high copy numbers in bacteria.
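The trigger-sensor recognition described above is pure Watson-Crick base pairing, so the core of any in silico sensor design is a reverse-complement calculation against the chosen target region. The sketch below illustrates only this pairing logic; it is not the authors' NUPACK-based design workflow, and the 36-nucleotide trigger sequence is a hypothetical placeholder.

```python
# Illustrative sketch of toehold/trigger complementarity.
# The trigger sequence below is a made-up placeholder, not a real target.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

# A hypothetical 36-nt trigger region from a target transcript.
trigger = "AUGGCUACGUUAGCCGAUAUCGGAUCCAAGCUUGCA"
assert len(trigger) == 36

# The sensor's recognition sequence pairs with the trigger.
sensor_binding_site = reverse_complement(trigger)

def is_fully_paired(trigger: str, sensor_site: str) -> bool:
    """True if every trigger position pairs with the sensor site."""
    return all(COMPLEMENT[t] == s for t, s in zip(trigger, reversed(sensor_site)))

print(is_fully_paired(trigger, sensor_binding_site))  # True
```

In practice, candidate sensors designed this way still need the secondary-structure screening and experimental validation described in the text, since a perfectly complementary toehold can fail if the target region is inaccessible.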
We used the series B toehold switch design from Pardee et al.17 and the Nucleic Acids Package (NUPACK)24 to design toehold switch sensors that target the V3 hypervariable region of the 16S rRNA for each target species. The candidate sensors were constructed to regulate the expression of the GFPmut3b gene25, and tested using in vitro transcribed trigger RNAs (36 nucleotides) in paper-based, cell-free reactions (Supplementary Fig. 1). The best performing sensor for each species (Fig. 2b, Supplementary Data 1) was chosen based on the lowest background GFP expression in reactions with sensor alone and highest fold activation in reactions with sensors activated by cognate trigger RNA. An individual bacterial species can comprise 1% or less of the total bacterial population within a human gut microbiome, so even highly abundant rRNA from an individual species can constitute 1–10 nanomolar (nM) RNA. Thus, unprocessed rRNA from stool samples is beyond the limit of detection of toehold switch sensors alone, which is approximately 10–30 nM17. We therefore incorporated NASBA, an isothermal RNA amplification technique26, into our sample processing steps prior to detection by toehold switch sensors to improve assay sensitivity. Briefly, NASBA begins with primer-directed reverse transcription of the template RNA, which creates an RNA/DNA duplex. The template RNA strand is then degraded by RNaseH, which allows a second primer containing the T7 promoter to bind and initiate double-stranded DNA synthesis. The double-stranded DNA serves as a template for T7-mediated transcription of the target RNA. Each newly synthesized RNA strand can serve as starting material for further amplification cycles (Fig. 2c)26 and can also be detected by the toehold sensors. We have previously shown that NASBA allows for the detection of single femtomolar concentrations of RNA17 using toehold switch sensors in paper-based reactions. 
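The sensitivity gap that NASBA closes can be put in rough numbers: the sensors alone detect approximately 10–30 nM RNA, while the amplified platform reaches about 3 fM. The sketch below estimates the required amplification; the per-cycle gain is an assumed illustrative value, not a measured NASBA parameter.

```python
# Back-of-the-envelope estimate of the amplification NASBA must provide.
import math

sensor_limit_molar = 10e-9    # ~10 nM: sensor-only limit of detection
platform_limit_molar = 3e-15  # ~3 fM: limit reported with NASBA amplification

# Fold amplification needed to lift a 3 fM target above the sensor threshold.
fold_needed = sensor_limit_molar / platform_limit_molar
print(f"required amplification: {fold_needed:.1e}-fold")

# With a hypothetical 10x gain per amplification cycle:
gain_per_cycle = 10
cycles_needed = math.ceil(math.log(fold_needed, gain_per_cycle))
print(f"cycles at {gain_per_cycle}x per cycle: {cycles_needed}")
```

The point of the arithmetic is simply that a millionfold-plus gain is needed, which is well within reach of an exponential isothermal scheme but far beyond what the sensors could compensate for on their own.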
NASBA primers were designed to amplify the V3 hypervariable region of the 16S rRNA for E. coli. We first tested the standard universal primer set routinely used to amplify the V3 region from 16S rDNA27 for sequencing applications. We used total RNA extracted from an E. coli monoculture to screen the primers. NASBA reactions were performed for 90 min on 1 ng of total RNA and then applied to paper-based reactions containing the E. coli 16S toehold switch sensor. Unexpectedly, these primers were not able to amplify the 16S V3 region from total RNA (Fig. 2d–E.c. 1). In order to investigate why the universal primer set performed poorly, we mapped the primer locations to chemical structure probing data for E. coli 30S ribosomal subunits28 and found that the forward primer targeted nucleotides that were not structurally accessible (Supplementary Fig. 2). Using the 16S rRNA structure data, we designed new NASBA primer sets and screened for the highest activation of toehold switch sensors (primer set 4). We then designed and screened NASBA primers for the other nine species using the same methodology (Fig. 2d).

We next investigated the specificity of our 16S toehold switch sensors. We synthesized trigger RNAs for each species representing the sequence that would be amplified by the NASBA primers (72–171 nucleotides) and measured the activation of each sensor when challenged with each of the 10 trigger RNAs (Fig. 2e, Supplementary Fig. 3, Supplementary Data 2). We observed good specificity for most of the 16S sensors; however, there was significant crosstalk among closely related bacteria. In the case of three closely related Bifidobacteria, the toehold switch sensors preferentially activate in the presence of their cognate trigger RNAs, but show significant crosstalk since the trigger sequences only differ by a few nucleotides. We also observed significant crosstalk between the C. difficile sensor and the trigger RNAs for E. rectale and F. prausnitzii. Although the C.
difficile sensor is not activated by the exact 36 nucleotide triggers for E. rectale and F. prausnitzii sensors (Supplementary Fig. 4a), alignment of the NASBA-amplified RNA sequences for the three species showed that the extended sequence that is amplified by the E. rectale and F. prausnitzii NASBA primers aligned with the toehold region of the C. difficile sensor (Supplementary Fig. 4b, c). The 16S sensors can be used to identify and differentiate closely related families of bacteria, but due to crosstalk, they are not suitable for discriminating among highly related bacterial species.

Bioinformatic analysis for species-specific identification

To address the specificity limitations of the 16S sensors, we devised a bioinformatic pipeline to identify mRNAs that are unique to any given bacterial species (Fig. 3a). Our pipeline uses the phylogenetic assignment tools Metaphlan and Metaphlan229 to identify a set of unique sequences for a given bacterial species. These sequences are then evaluated using a series of BLAST30 alignments to determine the most specific markers with the highest expression in human stool (see Methods).

Fig. 3 Species-specific mRNA sensors. a Bioinformatic pipeline for identifying species-specific mRNAs. b Best performing NASBA primers and species-specific mRNA sensors for each species. NASBA reactions were performed on 10 ng of total RNA for 90 min. Outputs from NASBA reactions were used to activate toehold switch sensors in paper-based reactions. Data represent mean values of three technical replicates. Error bars represent high and low values of the three replicates. c Orthogonality of species-specific sensors. Each sensor was challenged with 2 μM of trigger RNAs from each species representing what would be amplified in a NASBA reaction. GFP production rates for an individual sensor were normalized to the production rate of the sensor plus its cognate trigger (100%). Data represent mean values of six replicates (two biological replicates × three technical replicates). Full data and s.d. are shown in Supplementary Figure 6. d Orthogonality of NASBA primer sets. NASBA reactions were performed on 10 ng of total RNA for 90 min. Data represent mean ± s.d. of six replicates (two biological replicates (NASBA reactions) × three technical replicates (paper-based reactions))

We followed the same steps described for 16S rRNA sensor development to develop sensors for the species-specific mRNAs. We tested candidate toehold switch sensors in paper-based reactions and selected the best performing sensor for each species (Supplementary Fig. 5). We then designed NASBA primers and screened them on total RNA extracted from monocultures for each species. The best performing NASBA primer sets were chosen based on the ability of the amplified RNA to activate the corresponding toehold switch sensor (Fig. 3b). We note the apparent variation in the efficiency of amplification between species and attribute this to the variation in abundance of the mRNAs in each total RNA sample and possible differences in the structural accessibility of these transcripts. Finally, we tested the specificity of our toehold switch sensors by synthesizing trigger RNAs for each species representing the sequence that would be amplified by the NASBA primers and tested each sensor against each of the 10 trigger RNAs. We observed greatly improved sensor specificity compared to our 16S sensors with no significant crosstalk detected between any of the sensors (Fig. 3c, Supplementary Fig. 6).
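The final BLAST-based filtering step of the marker pipeline can be sketched in a few lines. The example below assumes alignments are already available in BLAST tabular (-outfmt 6) form; the identity cutoff, minimum alignment length, and identifier scheme are illustrative assumptions, not the published parameters.

```python
# Illustrative final filter of the marker pipeline: reject any candidate whose
# best BLAST hit to a NON-target genome is too similar. The cutoff, minimum
# alignment length, and ID prefixes are assumptions, not the authors' settings.

OFFTARGET_IDENTITY_CUTOFF = 80.0  # reject markers at or above this % identity off-target
MIN_ALIGNMENT_LENGTH = 30         # ignore very short, likely spurious alignments

def specific_markers(candidates, blast_tab, target_prefix):
    """Return candidate marker IDs that remain species-specific.

    candidates: iterable of query sequence IDs proposed by the phylogenetic step.
    blast_tab: BLAST -outfmt 6 text (qseqid, sseqid, pident, length, ...).
    target_prefix: subject-ID prefix marking the target species' own genome,
    so self-hits are not counted as off-target.
    """
    worst = {q: 0.0 for q in candidates}  # highest off-target identity per marker
    for line in blast_tab.strip().splitlines():
        if not line:
            continue
        fields = line.split("\t")
        qseqid, sseqid = fields[0], fields[1]
        pident, length = float(fields[2]), int(fields[3])
        if qseqid not in worst or sseqid.startswith(target_prefix):
            continue
        if length < MIN_ALIGNMENT_LENGTH:
            continue
        worst[qseqid] = max(worst[qseqid], pident)
    return sorted(q for q, p in worst.items() if p < OFFTARGET_IDENTITY_CUTOFF)
```

A candidate with no off-target hits at all passes automatically, which is why the function takes the candidate list explicitly rather than deriving it from the hit table.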
Next, we investigated the specificity of our NASBA primers by testing the output of NASBA reactions performed on three different total RNA samples: (1) total RNA isolated from an individual species; (2) a mixed sample comprised of total RNA from each of the 10 species; and (3) a mixed sample containing total RNA from all species except for the one corresponding to the NASBA primer set being tested. To keep the total concentration of a given sample constant, we supplemented samples (1) and (3) with yeast tRNA (Ambion), which is commonly used to increase the complexity of mRNA standards in RT-qPCR, because reverse transcription efficiencies change with the total amount of RNA in a reaction31. For example, each NASBA reaction was run on a total of 10 ng of RNA, where sample (1) contained 1 ng of total RNA plus 9 ng of yeast tRNA and sample (2) included 1 ng of RNA from each of the 10 individual species. For each NASBA primer set, we observed equivalent activation of the toehold switch sensors by RNA amplified from samples (1) and (2). Additionally, the outputs from sample (3) were equivalent to the toehold switch sensor alone for each species indicating that there was no amplification of the test target in sample (3) (Fig. 3d). These results showed that the NASBA primers were highly specific within the tested set of 10 bacteria, which included closely related species.

Toehold switch sensors quantify NASBA products

Quantitation is essential for determining changes in bacterial and host gene expression and abundances of microbes. Therefore, we sought to determine if the toehold switch sensors could be used to quantify bacterial RNA in fecal samples. Previous work has shown that NASBA can be quantified using internal standards and fluorescent hybridization probes to detect amplified RNA32,33.
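The three-sample scheme above (constant 10 ng total input per reaction, topped up with yeast tRNA) amounts to simple bookkeeping. A hypothetical helper, with invented species abbreviations and function names, might look like:

```python
# Sketch of the three NASBA specificity samples described in the text:
# constant 10 ng total RNA per reaction, supplemented with yeast tRNA.
# Species labels and the function itself are illustrative, not from the paper.

TOTAL_NG = 10.0
SPECIES = ["B.a", "B.b", "B.l", "B.f", "B.t", "C.d", "E.c", "E.r", "F.p", "R.h"]

def specificity_samples(target):
    """Return ng of each RNA component for samples (1)-(3) for one primer set."""
    per_species = TOTAL_NG / len(SPECIES)  # 1 ng of each species in the full mix
    single = {target: per_species, "yeast tRNA": TOTAL_NG - per_species}
    full_mix = {s: per_species for s in SPECIES}
    leave_one_out = {s: per_species for s in SPECIES if s != target}
    leave_one_out["yeast tRNA"] = per_species  # top up to the constant total
    return {"single": single, "mix": full_mix, "leave-one-out": leave_one_out}
```

Every sample sums to the same 10 ng total, so any difference in sensor output reflects the primers rather than the overall RNA load.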
In a previous application of the paper-based diagnostic platform, we demonstrated that the toehold switch sensors exhibit a linear response to trigger RNA inputs in the low nanomolar to micromolar range17. A mathematical model of NASBA reactions suggested that femtomolar to picomolar concentrations of RNA could be amplified to within the toehold detectable linear range, and 10-fold concentration differences would be distinguishable if NASBA reactions were stopped prior to completion (Supplementary Fig. 7). Therefore, we sought to identify NASBA reaction conditions that would allow us to quantify a broad range of RNA concentrations using the toehold switch sensors. We in vitro transcribed species-specific mRNAs and used them as standards for the NASBA reactions. We aimed to quantify standards from 3 fM to 30 picomolar (pM). To mimic the complexity of a total RNA sample, we diluted our standards into yeast tRNA (50 ng/μl). NASBA reactions with varied amplification times (30 min–3 h) were carried out on mRNA standards to determine the duration that allowed us to distinguish concentrations that differed by 10-fold (Supplementary Fig. 8). Excessive amplification times or running amplification reactions to completion did not allow for differentiation between standards, and insufficient amplification times did not allow for detection of the lowest (3 fM) standard. Using the optimal amplification time for each mRNA, we assessed the run-to-run variability of NASBA and paper-based toehold reactions. We found that there is run-to-run variation in overall signal measured from the paper-based reactions, but the relative signal between standards remains the same between runs (Fig. 4a). Normalization to a single standard allowed us to define a calibration curve that eliminated the effect of run-to-run variability on RNA quantification (Fig. 4b). Calibration curves were determined for each of the 10 species (Supplementary Fig. 9). 
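The early-stopping argument can be reproduced with a toy saturating-amplification model (this is not the authors' actual NASBA model from Supplementary Fig. 7, and all parameter values are arbitrary): inputs growing toward a shared plateau retain their fold difference early in the reaction and lose it if the reaction runs to completion.

```python
import math

# Toy logistic amplification model; rate and cap are illustrative only.
def amplified(x0, t, rate=0.3, cap=1000.0):
    """Closed-form logistic: x(t) = cap / (1 + (cap/x0 - 1) * exp(-rate*t)).
    x0 and cap are in arbitrary concentration units, t in minutes."""
    return cap / (1.0 + (cap / x0 - 1.0) * math.exp(-rate * t))

# A 10-fold input difference (3 vs 30 units) survives an early stop...
early = amplified(30.0, 5.0) / amplified(3.0, 5.0)
# ...but both inputs converge to the same plateau at long reaction times.
late = amplified(30.0, 100.0) / amplified(3.0, 100.0)
```

With these numbers, `early` stays near 10-fold while `late` collapses to roughly 1, mirroring the observation that excessive amplification times did not allow differentiation between standards.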
These allow for calculation of species-specific mRNA concentrations in an unknown sample by simply running a single concurrent standard.

Fig. 4 Quantification of NASBA-mediated amplification using toehold switch sensors. a Run-to-run variation in mRNA standards amplified by NASBA and measured by toehold sensors. mRNA standards for the B. thetaiotaomicron species-specific sensor were run in NASBA reactions for 30 min. Outputs from NASBA reactions were used to activate toehold switch sensors in paper-based reactions. b Calibration curve for the B.t. species-specific mRNA. Values from each standard in the individual runs in a were normalized to the 300 fM standard for that specific run and averaged across runs. c Quantifying species-specific mRNAs in stool. E. coli or B. fragilis cells were spiked into 150 mg of a commercial stool sample and processed for total RNA. Species-specific mRNAs were quantified using our paper-based platform and RT-qPCR. d Analysis of clinical stool samples. Six clinical stool samples were processed for total RNA and analyzed by our paper-based platform and RT-qPCR. Data and s.d. are shown in Supplementary Figure 11. e Correlation of clinical sample results. Non-zero paper-based concentrations from d were compared to RT-qPCR determined values. Data represent mean values. Paper-based error bars in a, c, and e represent s.d. from nine replicates (three biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)). RT-qPCR error bars in c and e represent s.d. from six replicates (two biological replicates (RT reactions) × three technical replicates (qPCR reactions))

To validate our calibration curves, we sought to compare RNA quantification from human stool samples using our paper-based platform and RT-qPCR.
We first assessed our ability to detect target mRNA in a pool of total RNA extracted (RNeasy PowerMicrobiome kit, Qiagen) from commercial human stool (Lee BioSolutions) and compared quantification of mRNA standards in this background to standards in a yeast tRNA background. We detected our species-specific mRNAs in stool RNA background, but the signal output for any given standard concentration was higher in total stool RNA than in the yeast tRNA background (Supplementary Fig. 10). Therefore, we experimentally corrected each of our calibration curves to account for this difference (Supplementary Fig. 9). We then compared our quantification method to RT-qPCR. We spiked between 50 μl and 1.5 ml of bacterial cells grown to mid-log phase into 150 mg of commercial human stool. These samples were processed for total RNA and quantified using our paper-based platform and RT-qPCR. We found good correlation between these methods with R2 values of 0.855 and 0.994 for E. coli and B. fragilis, respectively (Fig. 4c). Next, we tested the performance of our quantification method with clinically acquired stool samples (Fig. 4d). In the six clinical samples tested, we detected six of the bacteria in our panel. The concentrations of species-specific mRNAs determined using our platform showed good correlation with RT-qPCR, with an R2 value of 0.766 (Fig. 4e). We had no false-positive results and seven false-negative results using RT-qPCR as the standard (Supplementary Fig. 11). Of the seven false-negative results, six contained less than three copies per 50 ng of total RNA (6 attomolar) quantified by RT-qPCR, a value below our limit of detection.

Toehold switch sensors can detect human mRNA from stool

Next, we sought to demonstrate that our platform could be used to detect mRNAs from human cells.
We designed toehold switch sensors and NASBA primers to detect the mRNA of three biomarkers associated with inflammation (calprotectin, CXCL5, and IL-8) and oncostatin M (OSM), a cytokine that has recently been found to predict the efficacy of anti-tumor necrosis factor (TNF)-alpha therapies in IBD patients20. To validate our sensors, we performed NASBA and toehold reactions on 50 ng of total RNA from human peripheral leukocytes (Takara Bio 636592) and demonstrated that we could detect each of the four transcripts (Supplementary Fig. 12). We then developed calibration curves for each sensor (Supplementary Fig. 13) and tested the performance of our quantification method with clinically acquired stool samples from patients with IBD (Fig. 5a). We detected each of the four host transcripts in at least two of the clinical samples. Furthermore, the concentrations of human mRNA determined using our platform showed good correlation with RT-qPCR, with an R2 value of 0.912 (Fig. 5b).

Fig. 5 Detection of host biomarkers of inflammation. a Analysis of clinical stool samples. Four clinical stool samples were processed for total RNA and analyzed by our paper-based platform and RT-qPCR. Data represent mean values. Paper-based error bars represent s.d. from nine replicates (three biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)). RT-qPCR error bars represent s.d. from six replicates (two biological replicates (RT reactions) × three technical replicates (qPCR reactions)). b Correlation of clinical sample results. Non-zero paper-based concentrations from a were compared to RT-qPCR determined values

RNA-based detection of C. difficile infection

In a final validation of our platform, we sought to demonstrate the advantage of measuring RNA as opposed to DNA in certain clinical applications. CDI is one example where RNA-based detection may be especially useful.
CDI causes significant patient morbidity and mortality34, and is responsible for nearly 2.4 million days of inpatient hospital stays at a yearly cost of over $6.4 billion in the United States35. CDI-associated diarrhea and intestinal inflammation are attributed to the direct effects of C. difficile toxins36. As such, current CDI diagnostic tests are focused on detecting the presence of toxigenic C. difficile bacteria or the toxin proteins in patient stool. The traditional gold standard tests for detecting toxigenic C. difficile organisms (toxigenic culture assay) and C. difficile toxin (cell-culture cytotoxicity neutralization assay) are slow, labor-intensive, and technically challenging37. The diagnostics currently in wide-spread use, such as enzyme-linked immunoassays (EIA) for C. difficile toxins and DNA-based qPCR assays for C. difficile toxin genes, offer greatly improved performance characteristics but have their own limitations21. The EIA tests have high clinical specificity, but reports of false-negatives and low sensitivity relative to toxigenic culture21,37 have led to the development of DNA-based qPCR assays for C. difficile toxin genes. This method is extremely sensitive for the presence of toxigenic C. difficile bacteria; however, it cannot distinguish between patients that are carriers with symptoms due to another cause and those with active CDI21. These cases are especially challenging for clinicians, and there is a debate on which testing methodology yields the highest combination of sensitivity and specificity for clinically meaningful CDI21. New ultrasensitive assays to detect C. difficile toxins are in development, but they require highly specialized and expensive laboratory equipment and in some cases have a 60-h turnaround time38. Our paper-based platform has the potential to address these limitations by providing a rapid, easy-to-use method for the diagnosis of active CDI based on the detection of C. difficile toxin mRNA (Fig. 6a). 
Fig. 6 Paper-based detection of C. difficile infection. a Schematic of RNA-based CDI detection using a toehold switch sensor to detect toxin B mRNA. b Toxin B mRNA detection in stool samples. Two C. difficile strains (630 and VPI 10463) were grown in two different media (M1—TYG plus cysteine, M2—TY). Cells from each culture were spiked into 150 mg of a commercial stool sample and processed for total RNA. Toxin B mRNA was measured by our paper-based platform and RT-qPCR. Data represent mean values. Paper-based error bars represent s.d. from nine replicates (three biological replicates (NASBA reactions) × three technical replicates (paper-based reactions)). RT-qPCR error bars represent s.d. from six replicates (two biological replicates (RT reactions) × three technical replicates (qPCR reactions)). Toxin B DNA was confirmed in each sample using qPCR (Cq values shown in Supplementary Table 11)

We designed a toehold switch sensor and NASBA primers to detect a conserved region of the C. difficile toxin B gene, which is essential for toxigenic effect and is the target of most commercial DNA-based qPCR assays for toxigenic C. difficile39. To validate our sensor, we collected total RNA from monocultures of two different toxigenic C. difficile strains: 630, a low toxin producing strain, and VPI 10463 (VPI), a high toxin producing strain40. We performed NASBA and toehold reactions on 25 ng of total RNA from each strain and demonstrated that we could detect toxin mRNA from both C. difficile strains (Supplementary Fig. 14). Next, we grew the two strains under conditions that suppress (mid-log phase in media 1: TYG plus cysteine) or induce (stationary phase in media 2: TY) toxin production to mimic situations where patients are carriers of toxigenic C. difficile that produce very low levels of toxin and those with active CDI resulting from high toxin production, respectively.
We then spiked the two strains grown in both conditions into commercial human stool and processed the samples for total RNA as described previously. Using our paper-based platform, we detected toxin mRNA only in the VPI strain grown in media 2 sample (Fig. 6b). Analysis of the samples using RT-qPCR indicated that there was toxin mRNA in the 630 media 2 and VPI media 1 samples, but at very low levels (1 ± 4 and 1 ± 6 copies or 2 attomolar, respectively). Furthermore, all four samples were positive for toxin DNA (Supplementary Table 12). Our results therefore demonstrate a potential advantage of using toxin mRNA to diagnose CDI. All four samples would give a positive result in a DNA-based qPCR test. However, by detecting toxin mRNA using our paper-based platform, it may be possible to rapidly and readily distinguish between carriers of toxigenic C. difficile expressing low levels of toxins and those patients with active CDI expressing significantly higher levels of toxins.

Here we presented a synthetic biology platform for affordable, on-demand analysis of microbiome samples that can be employed in research, clinical, and low-resource settings. We demonstrated detection of species-specific mRNAs from 10 different bacteria that have been associated with a wide variety of disease processes. To track abundance of target RNAs, we devised a method to quantify mRNA using our toehold sensors and validated our method using RT-qPCR on clinical stool samples. To highlight the ability to probe both host and bacterial transcripts using a single platform, we validated sensors for clinically relevant human mRNAs using stool samples from IBD patients. We also demonstrated the potential advantage and clinical utility of detecting toxin mRNA in the case of CDI. As part of this study, we developed a simple method that allows for the semi-quantitative determination of mRNA concentration from human stool samples using paper-based toehold switch sensors.
By running a single standard alongside test samples and referencing a standard curve, we can determine the mRNA concentration within a sample and account for variation in reagent lots with clear separation of samples that differ in concentration by 10-fold (Fig. 4a, b). Our method is analogous to those used for NASBA-based quantification with an internal control spiked into each sample and a fluorescent hybridization probe for detection32,33. Furthermore, quantification of mRNAs in stool samples using our method correlates well with RT-qPCR (Fig. 4c–e, Fig. 5b). Notably, mRNA concentrations correlate with bacterial abundance (Supplementary Fig. 15), though this correlation may fluctuate with growth conditions and will likely vary depending on the specific target. Our approach is easily adaptable to study any cellular process that results in differences in gene expression, including changes in specific biochemical pathways or cell metabolism. To illustrate the potential utility of assessing specific bacterial pathways, we selected the model of toxin production in CDI. To approximate the clinical scenarios of active CDI versus inactive colonization, we demonstrated that we could distinguish between toxigenic C. difficile that expressed high amounts of toxin and no toxin (Fig. 6), which would otherwise be indistinguishable via standard DNA-based qPCR. Recent studies have shown that fecal mRNA levels of the inflammatory markers CXCL5 and IL-8 are highly correlated with clinical outcomes and perform with significantly better clinical sensitivity than other available tests for identifying CDI41,42. Because our method is equally capable of quantifying microbial and host RNAs and is readily multiplexed, a combined diagnostic testing for C. difficile toxin, CXCL5, and IL-8 mRNA may provide improved sensitivity and specificity for detecting CDI, though further investigation using clinical samples is warranted to help address this important problem. 
In addition to the potential utility of our platform in the clinical diagnosis of CDI, our ability to assess both host and microbial transcripts in parallel may also be useful in management and treatment selection for IBD. The interaction between the host and resident microbiome has been shown to affect many important biological processes in health and disease, including IBD12. Recent work has demonstrated that a microbial signature can be predictive of clinical remission after treatment with vedolizumab, an anti-integrin IBD medication43. For host transcripts, calprotectin is a well-characterized biomarker routinely used in clinical practice to assess gut mucosal inflammation19; CXCL5 and IL-8 are both elevated in intestinal biopsies from patients with IBD44,45; additionally, OSM levels in intestinal tissues have recently been strongly correlated with a lack of response to anti-TNF agents20, a widely used class of medications to treat IBD. Although highly efficacious, roughly 30–40% of patients will not respond to the anti-TNF medication class, and there was previously no reliable way of predicting the likelihood of response. While the above study was based on intestinal biopsies, we demonstrated we could detect OSM mRNA from IBD patient stool samples. Although the low number of samples precludes any conclusions on clinical utility, our results are consistent with a connection between higher OSM levels and lack of responsiveness to anti-TNF treatment. For example, stool sample S6 with no detectable OSM mRNA was collected from a patient who had successfully responded to anti-TNF treatment. Furthermore, sample S7, which showed intermediate levels of OSM mRNA, was collected from a patient who had failed treatment with two different anti-TNF agents. Our platform provides an easy to use, low-cost method for quantifying microbial and host RNAs from complex biological samples. 
Its flexibility allows for reactions to be freeze-dried for use outside of a laboratory setting. All reactions can also be run fresh, as they were done here, for researchers that do not have access to a lyophilizer. Specialized lab equipment is not required to develop our sensors or run the reactions. Since our toehold switch sensors can be used to regulate the production of any protein output, reactions may be monitored on a standard microtiter plate reader, if available, or an affordable, easy-to-build, portable electronic reader that quantifies change in absorbance from LacZ production17. To accommodate incubation temperatures required for NASBA (95 °C, 41 °C) and paper-based (37 °C) reactions, existing laboratory incubators or thermocyclers may be used, or affordable incubators can be built for use in low-resource settings. Altogether, the low-cost and portable nature of our platform makes it uniquely suited for use in resource-limited environments. The major advantages of our platform over RT-qPCR are cost and the ability to analyze multiple RNA transcripts at once. Using our platform, we can quantify mRNAs in 3–5 h at a cost of approximately $16 per transcript using commercially available kits as reagents (accounting for triplicate reactions and mRNA standard). This can be reduced to under $2 per transcript by using cell-free extracts prepared in-house, which are suitable for our platform (Supplementary Fig. 16), and individually sourcing NASBA reagents (Supplementary Fig. 17). The same analysis using RT-qPCR also takes 3–5 h, but costs approximately $140 per transcript. Our platform only requires a single mRNA standard for quantification while RT-qPCR generally requires a minimum of five standards46. Our limit of detection in total stool RNA ranges between 30 aM and 3 fM, depending on the specific toehold switch sensor. 
While this does not match the sensitivity of RT-qPCR (3 aM)47, we believe there are applications where our current limits of detection are sufficient. Future optimization of toehold switch sensor design and NASBA reaction conditions may continue to improve this sensitivity. In a comparison of our platform to next-generation sequencing we offer fast turn-around time, simple data analysis, and on-demand assessment of samples with no change in cost per sample. Average next-generation sequencing runs at core facilities range from $700–2000 per lane (Illumina), depending on machine and run type, and can take anywhere from 4 to 72 h48 to complete. The sequencing cost per sample is typically reduced by running up to 96 samples per lane; however, this sample batching prevents on-demand analysis. Additionally, next-generation sequencing data sets require extensive computational power and training to process, analyze, and interpret. Our platform's data analysis can be performed quickly using a simple spreadsheet or automated program. Our paper-based platform is one of several new synthetic biology platforms that can be used for nucleic acid detection. Recent advances using the CRISPR associated enzymes Cas12a and Cas13 along with recombinase polymerase amplification (RPA)49 yielded sensitive detection of nucleic acids with the ability to discriminate between single nucleotide differences50,51,52. While detection of single nucleotide polymorphisms (SNPs) is important, for example in tracking the epidemiology of viruses, the ability of the toehold switch sensors to tolerate SNPs enables the use of a single sensor to detect multiple strains. Although RT-RPA can be used to amplify RNA, as with RT-qPCR it cannot specifically amplify RNA without thorough DNase treatment to remove genomic DNA. Since NASBA uses reverse transcription to create DNA with a T7 promoter to then transcribe that template into RNA, it is highly resistant to DNA contamination53. 
Our method and the CRISPR enzyme-based diagnostics, SHERLOCK50,51 and DETECTR52, could be complementary tools, the selection of which will depend on the sample type (DNA or RNA), and whether the detection of single nucleotide differences is desired. Our method for detecting and quantifying RNA sequences could be applied to a broad range of studies including samples from other human anatomical sites, and our approach is easily adaptable to a wide range of biological targets, including viruses, fungi, and eukaryotic nucleic acids from either stool or tissue samples. Furthermore, with continued optimization of sample processing, our method could be adapted for point-of-care use. Such a diagnostic platform could have many applications, including pre-screening enrollees in the field for prospective trials of therapeutic manipulations of the microbiome, at-home monitoring of research participants, and eventually for tracking changes in patient disease activity. Our easy-to-use synthetic biology platform has the potential to meet both research and clinical point-of-care needs.

Toehold sensor design and cloning

Toehold switch sensors were designed with NUPACK24 using the series B toehold switch design from Pardee et al.17 The script can be found in Supplementary Note 1. Toehold switch sensor designs were checked for premature stop codons and cloned into plasmids with the GFPmut3b gene using PCR amplification and blunt-end ligation. Linear toehold switch templates were generated by amplifying from these plasmids by PCR and then purified using a MinElute PCR Purification kit (Qiagen, 28004), according to manufacturer's protocol. Sequences for all toehold switch sensors can be found in Supplementary Tables 2–3.

Trigger RNA and mRNA standard synthesis

DNA encoding trigger RNAs or mRNA standard sequences were ordered from Integrated DNA Technologies and amplified by PCR to create a linear template with a T7 promoter.
RNA was transcribed from the DNA templates using a HiScribe T7 High Yield RNA Synthesis Kit, according to the manufacturer's protocol (New England Biolabs, E2040). RNA was then purified using a Zymo RNA Clean and Concentrator kit (R1018), according to the manufacturer's protocol. Following purification, DNA template was degraded by DNase digestion using the TURBO DNA-free DNase kit (ThermoFisher, AM1907) for 1 h according to the manufacturer's protocol.

Paper-based, cell-free reactions

Cell-free reactions were performed using the PURExpress In Vitro Protein Synthesis Kit (New England Biolabs, E6800L). The cell-free reactions consisted of NEB Solution A (40%), NEB Solution B (30%), RNase inhibitor (0.5%; Roche, 3335402001), linear DNA constructs encoding toehold switch sensors (1.875 nM), and trigger RNA for a total of 5.5 µl. Paper disks (Whatman, 1442-042; blocked overnight in 5% BSA) were punched out using a 2 mm biopsy punch (Integra, 33-31-P/25) and placed in a 384-well plate (Corning 3544). 1.4 µl of the cell-free reaction mixture was applied to paper disks in triplicate. GFP expression (485 nm excitation, 520 nm emission) was monitored on a plate reader (Molecular Devices SpectraMax M5) every 5 min for 2 h at 37 °C.

Initial sensor screen

Sensor candidate designs from NUPACK were tested in paper-based reactions containing 1.875 nM of linear sensor DNA and 2 µM trigger RNA (36 nucleotides). GFP production rates were calculated (see Data analysis and RNA quantification) for reactions with sensor alone and sensor plus trigger. To select the best sensor, an activation ratio was calculated for each sensor candidate by dividing the sensor plus trigger production rate by the sensor alone production rate. Sensors were chosen based on the highest activation ratio and lowest sensor alone production rate. A minimum activation ratio of 5-fold is necessary to achieve desired sensitivity.
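The selection rule in this screen reduces to a small computation; the sensor names and rates below are invented for illustration.

```python
# Sketch of the sensor-screening rule: fold activation must reach the 5-fold
# minimum stated in the text; among passing candidates, prefer the highest
# ratio, breaking ties toward lower background. Example data are invented.

MIN_ACTIVATION_RATIO = 5.0

def activation_ratio(with_trigger, alone):
    """Fold activation: GFP production rate (RFU/min) with cognate trigger
    divided by the leak rate of the sensor alone."""
    return with_trigger / alone

def pick_sensor(candidates, min_ratio=MIN_ACTIVATION_RATIO):
    """candidates maps sensor name -> (sensor+trigger rate, sensor-alone rate).
    Return the best passing sensor name, or None if no candidate passes."""
    passing = {name: activation_ratio(on, off)
               for name, (on, off) in candidates.items()
               if activation_ratio(on, off) >= min_ratio}
    if not passing:
        return None
    # Highest ratio wins; ties resolved toward the lower background rate.
    return max(passing, key=lambda n: (passing[n], -candidates[n][1]))
```

For example, a sensor with a high absolute on-rate but leaky background (50/12, only 4.2-fold) loses to a dimmer but tighter sensor (40/2, 20-fold).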
NASBA

Initial denaturation of total RNA consisted of a 2-min incubation at 95 °C followed by a 10-min incubation at 41 °C of 1.0 µl sample input, 1.675 µl reaction buffer (Life Sciences Advanced Technologies, NECB-24), 0.825 µl nucleotide mix (Life Sciences Advanced Technologies, NECN-24), 0.2 µl of 6.25 µM primers, 0.03 µl water, and 0.025 µl of RNase inhibitor (Roche) per 3.75 µl reaction. Afterwards, 1.25 µl of enzyme mix (Life Sciences NEC-1-24) was added to each reaction and the resulting 5.0 µl NASBA reactions were incubated for 30–180 min at 41 °C. Then 1.0 µl of NASBA product was added to the cell-free reaction mixture for a total of 5.5 µl. Final concentrations of buffer components in each NASBA reaction: 13.2 mM MgCl2 (VWR 97062-848), 75 mM KCl (VWR BDH7296-0), 10 mM DTT (Sigma GE17-1318-01), 40 mM Tris-HCl pH 8.5 (VWR RLMB-005), 15% DMSO, 2 mM each ATP, UTP, and CTP, 1.5 mM GTP, 0.5 mM ITP, 1 mM each dNTP (New England Biolabs, N0447L), 0.25 µM each primer. Enzyme mix: 5 U/ml RNaseH (New England Biolabs M0297L), 1000 U/ml reverse transcriptase (New England Biolabs, M0368L), 2500 U/ml T7 RNA polymerase (New England Biolabs, M0251L), 43.75 mg/ml BSA. Initial denaturation of sample was performed as above, after which 1.25 µl enzyme mix was added to each reaction.

Data analysis and RNA quantification

Paper-based reactions were analyzed by calculating GFP production rates for each reaction condition. GFP production rates were calculated by first subtracting the average background fluorescence measured from triplicate paper-based reactions that did not contain sensor DNA or trigger RNA. Then, the minimum value of each individual reaction was adjusted to zero by subtracting the average of its first three time points (0, 5, and 10 min) from each time point. The zero-adjusted data were then fit to the equation: \({\rm{RFU}}\left( {{\rm{zero}}\,{\rm{adjusted}}} \right) = \frac{a}{{\mathrm{e}^{ - bt} + c}}\).
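Once a, b, and c have been fitted, the downstream analysis has closed forms: the slope of RFU(t) = a/(e^(−bt) + c) is a·b·e^(−bt)/(e^(−bt) + c)², and the log-linear calibration curve inverts to conc = exp((rate − B)/A). A minimal sketch of both steps (the curve fit itself is omitted, and all parameter values are illustrative):

```python
import math

def gfp_production_rate(a, b, c, t=50.0):
    """Analytical slope (RFU/min) of the fitted curve RFU(t) = a / (exp(-b*t) + c),
    evaluated by default at t = 50 min as described in the text."""
    e = math.exp(-b * t)
    return a * b * e / (e + c) ** 2

def concentration(norm_rate, A, B):
    """Invert the calibration curve: normalized GFP production = A*ln(conc) + B.
    A and B are the per-sensor constants; units follow the standard series."""
    return math.exp((norm_rate - B) / A)
```

As a sanity check, a flat curve (b = 0) gives a zero production rate, and the inversion recovers the concentration used to generate a synthetic calibration point.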
To compare data from different samples, the slope of the fitted equation was taken at t = 50 min, resulting in values of RFU/min. The GFP production rates were then averaged over the replicates for each reaction condition. In quantification experiments, the GFP production rate for each sample was normalized to the GFP production rate for a single mRNA standard (for standard concentrations see Supplementary Fig. 9). The normalized GFP production rate for reactions with sensor alone was then subtracted from each sample. RNA concentration was determined using the equation: Normalized GFP production = A·ln(concentration) + B. Values for A and B for each sensor can be found in Supplementary Fig. 9.

Bacterial culturing and RNA processing

All anaerobic bacteria were grown in an anaerobic chamber at 37 °C. Bifidobacterium adolescentis (ATCC 15703), Bifidobacterium breve (ATCC 15700), Bifidobacterium longum subsp. longum (ATCC 15707), Bacteroides fragilis (ATCC 25285), Bacteroides thetaiotaomicron (ATCC 29148), Clostridium difficile (ATCC BAA-1382), and Eubacterium rectale (ATCC 33656) were obtained from ATCC. Faecalibacterium prausnitzii A2–165 (DSM 17677) and Roseburia hominis (DSM 16839) were obtained from DSMZ. Freeze-dried samples were rehydrated with their respective growth media and grown for 24–48 h in liquid culture on a shaker at 200 rpm. For experiments testing RNA isolated from pure cultures, 12 ml of bacterial culture was diluted 1:2 into RNAProtect before removal from the anaerobic chamber for RNA extraction. The cultures were lysed at room temperature using 200 µl of 15 mg/ml lysozyme in TE buffer and 20 µl of proteinase K (Qiagen). RNA was then extracted using the RNeasy Mini kit (Qiagen, 74104), according to the manufacturer's instructions. RNA samples were then DNase digested using the TURBO DNA-free DNase kit (ThermoFisher, AM1907) for one hour. E. coli (MG1655) was grown in Luria-Bertani (LB) medium (Difco). B.
adolescentis was grown in Bifidobacterium medium (prepared according to DSMZ 58: Bifidobacterium medium). B. breve, B. fragilis, and B. longum subsp. longum were grown in brain heart infusion-supplemented (BHIS) medium (prepared according to ATCC medium: 1293). B. thetaiotaomicron, E. rectale, and R. hominis were grown in cooked meat medium (CMM) purchased from Hardy Diagnostics. F. prausnitzii was grown in CMM with an additional 1% glucose. C. difficile was grown in BHIS for the species-specific RNA testing, and grown in either TY medium (3% tryptone, 2% yeast extract, and 0.1% sodium thioglycolate), or TY medium with 2% glucose and 10 mM cysteine for toxin RNA testing.

RNA purification from stool samples

Commercial stool specimens were purchased from Lee Biosolutions and provided as frozen specimens. Clinical stool samples were provided by Dr. Ashwin Ananthakrishnan as anonymized specimens from the Prospective Registry in IBD Study at Massachusetts General Hospital. Approval was provided by the Partners Healthcare Human Subjects Research Committee. Informed consent was obtained from all subjects. Both commercial and clinical stool samples were stored at −80 °C and processed using the RNeasy PowerMicrobiome Kit (MoBio, now Qiagen, 26000), which was selected for its ability to isolate high quality RNA from stool [54]. Each frozen stool was homogenized using a mortar and pestle cooled with liquid nitrogen [55], and 150 mg of each sample was loaded into each glass bead tube. Mechanical lysis was performed using a MoBio vortex adapter and a Vortex Genie 2 (Scientific Industries Inc) at maximum speed for 10 min. The manufacturer's protocol was followed for RNA extraction with the optional on-column DNase digestion included. Resulting RNA samples were then further DNase digested using the TURBO DNA-free DNase kit (ThermoFisher, AM1907) for one hour.

Bacterial spike-in experiments

E. coli and B.
fragilis were grown to mid-log phase and spiked into a commercial stool sample (Lee Biosolutions) before RNA extraction. Bacterial cultures ranging from 10 µl to 1.5 ml were spun down before being re-suspended in PM1 buffer and added to 150 mg of stool. C. difficile was grown to stationary phase in TY medium and TY medium supplemented with 2% glucose and 10 mM cysteine. Two ml of stationary C. difficile culture was spun down, re-suspended in PM1 buffer, and added to 150 mg of stool. All samples were processed with the RNeasy PowerMicrobiome kit, according to the manufacturer's instructions, with an extended 30-min lysis step for C. difficile spike-ins. RNA samples were then split into two samples, one that was DNase digested with the TURBO DNA-free DNase kit (ThermoFisher, AM1907) for one hour and one that did not receive DNase treatment so that it could be used for DNA-based qPCR.

Computational pipeline for species-specific RNA sequences

Our computational pipeline employs components from previously developed phylogenetic assignment tools, including Metaphlan and Metaphlan2 [29]. These programs use multiple bioinformatics approaches to reduce each bacterial species to a "bag of genes" and identify the set of genes or gene parts that is specifically associated with a target species or clade and not associated with any others. We extracted the Metaphlan2 markers for a given target species and used BLAST [30] alignments against available genomes for our target species to ensure that the markers were present. We then assessed these preliminary markers for expression in the human fecal microbiome by using BLAST alignments against a human stool transcriptome database that we created from repositories of publicly available adult human stool meta-transcriptome sequencing reads.
Keeping only the markers that are expressed in human stool, we again tested for specificity by performing BLAST alignments against a pan-bacterial database that we created from all publicly available reference and draft bacterial genomes. We selected the most specific markers with the highest expression in human stool and created toehold switch sensors to target these RNA sequences. In the case of C. difficile, expression was extremely low for all Metaphlan2 markers in the standard human stool transcriptome database. This was not unexpected, since this species is reported to be present at very low abundance in normal healthy populations. To develop sensors for this species, we instead screened for expression using transcriptomic data from C. difficile cultures in various conditions available in public repositories.

RT-qPCR validation

RNA from stool samples and from in vitro transcription was extracted, purified, and DNase digested as described above. In vitro transcribed RNA diluted in 150 ng/µl yeast tRNA (ThermoFisher, AM7119) was used to generate standards for absolute quantitation based on calculations designed to incorporate 1 to 10^7 RNA copies per reverse transcription reaction. cDNA synthesis from stool samples was performed with 300 ng input RNA per reaction using Superscript III (ThermoFisher, 18080-400), according to the manufacturer's protocol, using gene-specific primers (reverse qPCR primers as indicated in Supplementary Table 8) at a final concentration of 2 µM and a total volume of 20 µl. Quantitative PCR reactions were prepared in triplicate using 2 µl of the RT reactions, LightCycler 480 Probes Master Mix (Roche, 04707494001), primers (final concentration 5 µM), and hydrolysis probe (final concentration 1 µM) in a reaction volume of 20 µl.
The qPCR reactions were performed on a LightCycler 480 96-well machine using the following program: (i) 95 °C for 10 min, (ii) 95 °C for 10 s, (iii) 48–60 °C for 50 s depending on primer Tm, (iv) 72 °C for 1 s for fluorescence measurement, (v) go to step ii and repeat 44 cycles, and (vi) 40 °C for 10 s. Absolute quantitation was performed using LightCycler 96 software version 1.1.0.1320 (Roche). When there were discordant results between triplicate amplification repeats, non-amplified reaction Cqs were set to 45 (equal to the total number of amplification cycles) prior to incorporation in copy number calculations. Dilution series reactions were performed on RNA extracted from several stool samples to demonstrate the absence of inhibition for the RT and qPCR reactions. Primers and probes used for RT-qPCR analyses are listed in Supplementary Table 8. Hydrolysis probes had a 5' 6-FAM dye, an internal ZEN quencher after the 9th base, and a 3' Iowa Black quencher (Integrated DNA Technologies). For Oncostatin M, we used the TaqMan RNA-to-Ct 1-Step Kit (ThermoFisher, 4392653) with the commercially available probe set Hs00968300_g1 (ThermoFisher, 4331182).

In-house cell-free extract preparation

Cell extract was prepared as described by Kwon and Jewett [56]. E. coli BL21(DE3)ΔlacZ (gift of Takane Katayama) were grown in 400 ml of LB at 37 °C at 250 rpm. Cells were harvested in mid-exponential growth phase (OD600 ~ 0.6), and cell pellets were washed three times with ice-cold Buffer A containing 10 mM Tris-acetate pH 8.2, 14 mM magnesium acetate, 60 mM potassium glutamate, and 2 mM DTT, and flash frozen and stored at −80 °C. Cell pellets were thawed and resuspended in 1 ml of Buffer A per 1 g of wet cells and sonicated in an ice-water bath. The total sonication energy required to lyse the cells was determined using the sonication energy equation for BL21 Star (DE3): [Energy] = [[Volume (µL)] − 33.6] × 1.8^−1.
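For reference, the energy target implied by the sonication equation above can be computed directly. A minimal sketch (the joules-per-pulse figure is a hypothetical illustration; in practice the delivered energy is read from the sonicator):

```python
import math

def sonication_energy_joules(volume_ul):
    # [Energy] = [[Volume (uL)] - 33.6] x 1.8^-1, per the equation above
    return (volume_ul - 33.6) / 1.8

def pulses_needed(volume_ul, joules_per_pulse):
    # Number of 10 s, 50%-amplitude bursts to reach the target energy,
    # given a (hypothetical) measured energy delivered per burst.
    return math.ceil(sonication_energy_joules(volume_ul) / joules_per_pulse)
```

For a 1 ml (1000 µl) resuspension this gives roughly 537 J; at, say, 50 J delivered per burst, that corresponds to 11 bursts.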
A Q125 Sonicator (Qsonica) with a 3.174 mm diameter probe at a frequency of 20 kHz was used for sonication. A 50% amplitude in 10 s on/off intervals was applied until the required input energy was met. Lysate was then centrifuged at 12,000 rcf for 10 min at 4 °C, and the supernatant was incubated at 37 °C at 300 rpm for 1 h. The supernatant was centrifuged again at 12,000 rcf for 10 min at 4 °C, and flash frozen and stored at −80 °C until use. Using a previously published cell-free reaction protocol [57], reaction mixtures were composed of 26.6% (v/v) in-house lysate, 1.5 mM each amino acid except leucine (1.25 mM), 1 mM DTT, 50 mM HEPES (pH 8.0), 1.5 mM ATP and GTP, 0.9 mM CTP and UTP, 0.2 mg/mL tRNA, 0.26 mM CoA, 0.33 mM NAD, 0.75 mM cAMP, 0.068 mM folinic acid, 1 mM spermidine, 30 mM 3-PGA, 2% PEG-8000, 0.5% (v/v) Protector RNase Inhibitor (Roche), 2 nM LacZ sensor plasmid DNA, 2 µM RNA trigger, and 0.6 mg/mL chlorophenol red-β-D-galactopyranoside (CPRG; Sigma Aldrich, 59767) for the lacZ sensor. Optimal potassium glutamate (40–140 mM) and magnesium glutamate (2–8 mM) concentrations were determined for the lacZ reporter product. Reactions were first assembled on ice without CPRG, incubated at 37 °C for 30 min, chilled on ice for 5 min, and then CPRG was added to the reaction. 1.4 µl of reaction mixture was then applied to pre-blocked 5% BSA 2 mm paper disks, placed in a black, clear-bottom 384-well plate (Corning, 3544) and incubated at 37 °C for 1.5 h for the detection of lacZ expression. All code used in this work is available in Supplementary Notes 1 and 2. All toehold switch sensors from this work have been deposited at AddGene (#110696-110717, 111907-111909). All other data supporting the findings of this study are available within the article and its Supplementary Information files, or are available from the authors upon request. This Article was originally published without the accompanying Peer Review File.
This file is now available in the HTML version of the Article; the PDF was correct from the time of publication. Rooks, M. G. & Garrett, W. S. Gut microbiota, metabolites and host immunity. Nat. Rev. Immunol. 16, 341–352 (2016). Pfeiffer, J. K. & Virgin, H. W. Transkingdom control of viral infection and immunity in the mammalian intestine. Science 351, aad5872 (2016). van Nood, E. et al. Duodenal infusion of donor feces for recurrent Clostridium difficile. N. Engl. J. Med. 368, 407–415 (2013). Koeth, R. A. et al. Intestinal microbiota metabolism of l-carnitine, a nutrient in red meat, promotes atherosclerosis. Nat. Med. 19, 576–585 (2013). Haiser, H. J. et al. Predicting and manipulating cardiac drug inactivation by the human gut bacterium Eggerthella lenta. Science 341, 295–298 (2013). Huttenhower, C., Kostic, A. D. & Xavier, R. J. Inflammatory bowel disease as a model for translating the microbiome. Immunity 40, 843–854 (2014). Gevers, D. et al. The treatment-naive microbiome in new-onset Crohn's disease. Cell Host Microbe 15, 382–392 (2014). Subramanian, S. et al. Persistent gut microbiota immaturity in malnourished Bangladeshi children. Nature 510, 417–421 (2014). Blanton, L. V. et al. Gut bacteria that prevent growth impairments transmitted by microbiota from malnourished children. Science 351, aad3311 (2016). Sivan, A. et al. Commensal Bifidobacterium promotes antitumor immunity and facilitates anti–PD-L1 efficacy. Science 350, 1084–1089 (2015). Vétizou, M. et al. Anticancer immunotherapy by CTLA-4 blockade relies on the gut microbiota. Science 350, 1079–1084 (2015). Wlodarska, M., Kostic, A. D. & Xavier, R. J. An integrative view of microbiome-host interactions in inflammatory bowel diseases. Cell Host Microbe 17, 577–591 (2015). Ilott, N. E. et al.
Defining the microbial transcriptional response to colitis through integrated host and microbiome profiling. ISME J. 10, 2389–2404 (2016). Luca, F., Kupfer, S. S., Knights, D., Khoruts, A. & Blekhman, R. Functional genomics of host–microbiome interactions in humans. Trends Genet. 34, 30–40 (2018). Gilbert, J. A. et al. Microbiome-wide association studies link dynamic microbial consortia to disease. Nature 535, 94–104 (2016). Pardee, K. et al. Paper-based synthetic gene networks. Cell 159, 940–954 (2014). Pardee, K. et al. Rapid, low-cost detection of Zika virus using programmable biomolecular components. Cell 165, 1255–1266 (2016). Green, A. A., Silver, P. A., Collins, J. J. & Yin, P. Toehold switches: de-novo-designed regulators of gene expression. Cell 159, 925–939 (2014). Sands, B. E. Biomarkers of inflammation in inflammatory bowel disease. Gastroenterology 149, 1275–1285.e2 (2015). West, N. R. et al. Oncostatin M drives intestinal inflammation and predicts response to tumor necrosis factor–neutralizing therapy in patients with inflammatory bowel disease. Nat. Med. 23, 579 (2017). Fang, F. C., Polage, C. R. & Wilcox, M. H. Point-counterpoint: what is the optimal approach for detection of Clostridium difficile infection? J. Clin. Microbiol. 55, 670–680 (2017). Walters, W. A., Xu, Z. & Knight, R. Meta-analyses of human gut microbes associated with obesity and IBD. FEBS Lett. 588, 4223–4233 (2014). Korem, T. et al. Growth dynamics of gut microbiota in health and disease inferred from single metagenomic samples. Science 349, 1101–1106 (2015). Zadeh, J. N. et al. NUPACK: analysis and design of nucleic acid systems. J. Comput. Chem. 32, 170–173 (2011). Cormack, B. P., Valdivia, R. H. & Falkow, S. FACS-optimized mutants of the green fluorescent protein (GFP). Gene 173, 33–38 (1996). Guatelli, J. C. et al. Isothermal, in vitro amplification of nucleic acids by a multienzyme reaction modeled after retroviral replication. Proc.
Natl Acad. Sci. USA 87, 1874–1878 (1990). Chakravorty, S., Helb, D., Burday, M., Connell, N. & Alland, D. A detailed analysis of 16S ribosomal RNA gene segments for the diagnosis of pathogenic bacteria. J. Microbiol. Methods 69, 330–339 (2007). McGinnis, J. L. et al. In-cell SHAPE reveals that free 30S ribosome subunits are in the inactive state. Proc. Natl Acad. Sci. USA 112, 2425–2430 (2015). Segata, N. et al. Metagenomic microbial community profiling using unique clade-specific marker genes. Nat. Methods 9, 811–814 (2012). Altschul, S. F., Gish, W., Miller, W., Myers, E. W. & Lipman, D. J. Basic local alignment search tool. J. Mol. Biol. 215, 403–410 (1990). Bustin, S. A. et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin. Chem. 55, 611–622 (2009). Patterson, S. S., Casper, E. T., Garcia-Rubio, L., Smith, M. C. & Paul, J. H. I. Increased precision of microbial RNA quantification using NASBA with an internal control. J. Microbiol. Methods 60, 343–352 (2004). Sidoti, F. et al. Development of a quantitative real-time nucleic acid sequence-based amplification assay with an internal control using molecular beacon probes for selective and sensitive detection of human Rhinovirus serotypes. Mol. Biotechnol. 50, 221–228 (2012). DePestel, D. D. & Aronoff, D. M. Epidemiology of Clostridium difficile infection. J. Pharm. Pract. 26, 464–475 (2013). Zhang, S. et al. Cost of hospital management of Clostridium difficile infection in United States—a meta-analysis and modelling study. BMC Infect. Dis. 16, 447 (2016). Ryder, A. B. et al. Assessment of Clostridium difficile infections by quantitative detection of tcdB toxin by use of a real-time cell analysis system. J. Clin. Microbiol. 48, 4129–4134 (2010). Kociolek, L. K. Strategies for optimizing the diagnostic predictive value of Clostridium difficile molecular diagnostics. J. Clin. Microbiol. 55, 1244–1248 (2017). Pollock, N. R. 
Ultrasensitive detection and quantification of toxins for optimized diagnosis of Clostridium difficile infection. J. Clin. Microbiol. 54, 259–264 (2016). Cohen, S. H. et al. Clinical practice guidelines for Clostridium difficile infection in adults: 2010 Update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect. Control. Hosp. Epidemiol. 31, 431–455 (2010). Theriot, C. M. et al. Cefoperazone-treated mice as an experimental platform to assess differential virulence of Clostridium difficile strains. Gut Microbes 2, 326–334 (2011). El Feghaly, R. E., Stauber, J. L., Tarr, P. I. & Haslam, D. B. Intestinal inflammatory biomarkers and outcome in pediatric Clostridium difficile infections. J. Pediatr. 163, 1697–1704.e2 (2013). El Feghaly, R. E. et al. Markers of Intestinal Inflammation, not bacterial burden, correlate with clinical outcomes in Clostridium difficile Infection. Clin. Infect. Dis. 56, 1713–1721 (2013). Ananthakrishnan, A. N. et al. Gut microbiome function predicts response to anti-integrin biologic therapy in inflammatory bowel diseases. Cell Host. Microbe 21, 603–610.e3 (2017). Holgersen, K. et al. High-resolution gene expression profiling using RNA sequencing in patients with inflammatory bowel disease and in mouse models of colitis. J. Crohn's Colitis 9, 492–506 (2015). Uguccioni, M. et al. Increased expression of IP-10, IL-8, MCP-1, and MCP-3 in ulcerative colitis. Am. J. Pathol. 155, 331–336 (1999). Svec, D., Tichopad, A., Novosadova, V., Pfaffl, M. W. & Kubista, M. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments. Biomol. Detect. Quantif. 3, 9–16 (2015). Matsuda, K. et al. Sensitive quantification of Clostridium difficile cells by reverse transcription-quantitative PCR targeting rRNA molecules. Appl. Environ. Microbiol. 78, 5111–5118 (2012). Deurenberg, R. H. et al. 
Application of next generation sequencing in clinical microbiology and infection prevention. J. Biotechnol. 243, 16–24 (2017). Piepenburg, O., Williams, C. H., Stemple, D. L. & Armes, N. A. DNA Detection Using Recombination Proteins. PLoS Biol. 4, e204 (2006). Gootenberg, J. S. et al. Nucleic acid detection with CRISPR-Cas13a/C2c2. Science 356, 438–442 (2017). Gootenberg, J. S. et al. Multiplexed and portable nucleic acid detection platform with Cas13, Cas12a, and Csm6. Science 360, 439–444 (2018). Chen, J. S. et al. CRISPR-Cas12a target binding unleashes indiscriminate single-stranded DNase activity. Science 360, 436–439 (2018). Deiman, B., Aarle, P., Van & Sillekens, P. Characteristics and applications of nucleic acid sequence-based amplification (NASBA). Mol. Biotechnol. 20, 163–179 (2002). Reck, M. et al. Stool metatranscriptomics: a technical guideline for mRNA stabilisation and isolation. BMC Genom. 16, 494 (2015). Gorzelak, M. A. et al. Methods for improving human gut microbiome data by reducing variability through sample processing and storage of stool. PLoS ONE 10, e0134802 (2015). Kwon, Y.-C. & Jewett, M. C. High-throughput preparation methods of crude extract for robust cell-free protein synthesis. Sci. Rep. 5, 8663 (2015). Sun, Z. Z. et al. Protocols for implementing an Escherichia coli based TX-TL cell-free expression system for synthetic biology. J. Vis. Exp. 79, e50762 (2013). This work was supported by MIT's Center for Microbiome Informatics and Therapeutics, the Paul G. Allen Frontiers Group, and the Wyss Institute. X.T. is supported in part by grants from the National Institutes of Health T32 DK007191 and a Wyss Institute Clinical Fellowship. A.J.D. is supported by the National Science Foundation Graduate Research Fellowship Program. A.A. is supported in part by grants from the National Institutes of Health (K23 DK097142, R03 DK112909) and the Crohn's and Colitis Foundation. 
The authors would like to thank Liz Andrews, Will Tan, Heather Wilson, and Eric Rosenberg for their help with clinical stool samples. These authors contributed equally: Melissa K. Takahashi, Xiao Tan, Aaron J. Dy. Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA Melissa K. Takahashi, Xiao Tan, Aaron J. Dy, Dana Braff, Yoshikazu Furuta & James J. Collins Division of Gastroenterology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, 02114, USA Xiao Tan & Ashwin Ananthakrishnan Harvard Medical School, 25 Shattuck St, Boston, MA, 02115, USA Wyss Institute for Biologically Inspired Engineering, Harvard University, 3 Blackfan Circle, Boston, MA, 02115, USA Xiao Tan, Nina Donghia & James J. Collins Broad Institute of MIT and Harvard, 415 Main St, Cambridge, MA, 02142, USA Xiao Tan, Aaron J. Dy & James J. Collins Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA Aaron J. Dy, Reid T. Akana & James J. Collins Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA, 02215, USA Dana Braff Division of Infection and Immunity, Research Center for Zoonosis Control, Hokkaido University, North 20, West 10 Kita-ku, Sapporo, 001-0020, Japan Yoshikazu Furuta Synthetic Biology Center, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA James J. Collins Harvard-MIT Program in Health Sciences and Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA Melissa K. Takahashi Xiao Tan Aaron J. Dy Reid T. Akana Nina Donghia Ashwin Ananthakrishnan M.K.T., X.T., and A.J.D. designed experiments, performed experiments, analyzed data, and wrote the manuscript. D.B. performed experiments and edited the manuscript. Y.F. wrote code for identifying species-specific mRNA sequences. R.T.A. and N.D. performed experiments. A.A. provided clinical samples. J.J.C.
directed overall research and edited the manuscript. Correspondence to James J. Collins. J.J.C. is an author on a patent application for the paper-based synthetic gene networks (US20160312312A1) and a patent for the RNA toehold switch sensors (US9550987B2). The remaining authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Description of Additional Supplementary Files Supplementary Data 1 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Takahashi, M.K., Tan, X., Dy, A.J. et al. A low-cost paper-based synthetic biology platform for analyzing gut microbiota and host biomarkers. Nat Commun 9, 3347 (2018). https://doi.org/10.1038/s41467-018-05864-4
Canonical singularity

In mathematics, canonical singularities appear as singularities of the canonical model of a projective variety, and terminal singularities are special cases that appear as singularities of minimal models. They were introduced by Reid (1980). Terminal singularities are important in the minimal model program because smooth minimal models do not always exist, and thus one must allow certain singularities, namely the terminal singularities.

Definition

Suppose that Y is a normal variety such that its canonical class K_Y is Q-Cartier, and let f: X → Y be a resolution of the singularities of Y. Then

$\displaystyle K_{X}=f^{*}(K_{Y})+\sum _{i}a_{i}E_{i}$

where the sum is over the irreducible exceptional divisors, and the a_i are rational numbers, called the discrepancies. Then the singularities of Y are called:

• terminal if a_i > 0 for all i
• canonical if a_i ≥ 0 for all i
• log terminal if a_i > −1 for all i
• log canonical if a_i ≥ −1 for all i.

Properties

The singularities of a projective variety V are canonical if the variety is normal, some power of the canonical line bundle of the non-singular part of V extends to a line bundle on V, and V has the same plurigenera as any resolution of its singularities. V has canonical singularities if and only if it is a relative canonical model. The singularities of a projective variety V are terminal if the variety is normal, some power m of the canonical line bundle of the non-singular part of V extends to a line bundle on V, and the pullback of any section of this mth power vanishes along any codimension 1 component of the exceptional locus of a resolution of its singularities.

Classification in small dimensions

Two dimensional terminal singularities are smooth. If a variety has terminal singularities, then its singular points have codimension at least 3, and in particular in dimensions 1 and 2 all terminal singularities are smooth. In 3 dimensions they are isolated and were classified by Mori (1985).
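Two standard examples illustrate the discrepancy computation in the definition above; the calculations are well known and only sketched here:

```latex
% Surface node (A_1 du Val singularity): Y = \{xy = z^2\} \subset \mathbf{C}^3.
% One blow-up of the origin resolves Y, with a single exceptional
% (-2)-curve E, and
\[
  K_X = f^*(K_Y) + 0\cdot E \qquad (a_1 = 0:\ \text{canonical, not terminal}).
\]
% Threefold ordinary double point: Y = \{xy = zw\} \subset \mathbf{C}^4.
% Blowing up the origin gives a resolution with exceptional divisor
% E \cong \mathbf{P}^1 \times \mathbf{P}^1, and
\[
  K_X = f^*(K_Y) + 1\cdot E \qquad (a_1 = 1 > 0:\ \text{terminal}).
\]
```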
Two dimensional canonical singularities are the same as du Val singularities, and are analytically isomorphic to quotients of C^2 by finite subgroups of SL2(C). Two dimensional log terminal singularities are analytically isomorphic to quotients of C^2 by finite subgroups of GL2(C). Two dimensional log canonical singularities have been classified by Kawamata (1988).

Pairs

More generally one can define these concepts for a pair $(X,\Delta )$ where $\Delta $ is a formal linear combination of prime divisors with rational coefficients such that $K_{X}+\Delta $ is $\mathbb {Q} $-Cartier. The pair is called

• terminal if Discrep$(X,\Delta )>0$
• canonical if Discrep$(X,\Delta )\geq 0$
• klt (Kawamata log terminal) if Discrep$(X,\Delta )>-1$ and $\lfloor \Delta \rfloor \leq 0$
• plt (purely log terminal) if Discrep$(X,\Delta )>-1$
• lc (log canonical) if Discrep$(X,\Delta )\geq -1$.

References

• Kollár, János (1989), "Minimal models of algebraic threefolds: Mori's program", Astérisque (177): 303–326, ISSN 0303-1179, MR 1040578
• Kawamata, Yujiro (1988), "Crepant blowing-up of 3-dimensional canonical singularities and its application to degenerations of surfaces", Ann. of Math., 2, 127 (1): 93–163, doi:10.2307/1971417, ISSN 0003-486X, JSTOR 1971417, MR 0924674
• Mori, Shigefumi (1985), "On 3-dimensional terminal singularities", Nagoya Mathematical Journal, 98: 43–66, doi:10.1017/s0027763000021358, ISSN 0027-7630, MR 0792770
• Reid, Miles (1980), "Canonical 3-folds", Journées de Géometrie Algébrique d'Angers, Juillet 1979/Algebraic Geometry, Angers, 1979, Alphen aan den Rijn: Sijthoff & Noordhoff, pp. 273–310, MR 0605348
• Reid, Miles (1987), "Young person's guide to canonical singularities", Algebraic geometry, Bowdoin, 1985 (Brunswick, Maine, 1985), Proc. Sympos. Pure Math., vol. 46, Providence, R.I.: American Mathematical Society, pp. 345–414, MR 0927963
# Mathematical Logic <br> (Math 570) <br> Lecture Notes

Lou van den Dries

Fall Semester 2019

## Chapter 1

## Preliminaries

We start with a brief overview of mathematical logic as covered in this course. Next we review some basic notions from elementary set theory, which provides a medium for communicating mathematics in a precise and clear way. In this course we develop mathematical logic using elementary set theory as given, just as one would do with other branches of mathematics, like group theory or probability theory. For more on the course material, see Shoenfield, J. R., Mathematical Logic, Reading, Addison-Wesley, 1967. For additional material in Model Theory we refer the reader to Chang, C. C. and Keisler, H. J., Model Theory, New York, North-Holland, 1990, Poizat, B., A Course in Model Theory, Springer, 2000, and for additional material on Computability, to Rogers, H., Theory of Recursive Functions and Effective Computability, McGraw-Hill, 1967.

### Mathematical Logic: a brief overview

Aristotle identified some simple patterns in human reasoning, and Leibniz dreamt of reducing reasoning to calculation. As a viable mathematical subject, however, logic is relatively recent: the 19th century pioneers were Bolzano, Boole, Cantor, Dedekind, Frege, Peano, C.S. Peirce, and E. Schröder. From our perspective we see their work as leading to boolean algebra, set theory, propositional logic, predicate logic, as clarifying the foundations of the natural and real number systems, and as introducing suggestive symbolic notation for logical operations. Also, their activity led to the view that logic + set theory can serve as a basis for all of mathematics. This era did not produce theorems in mathematical logic of any real depth, ${ }^{1}$ but it did bring crucial progress of a conceptual nature, and the recognition that logic as used in mathematics obeys mathematical rules that can be made fully explicit.
In the period 1900-1950 important new ideas came from Russell, Zermelo, Hausdorff, Hilbert, Löwenheim, Ramsey, Skolem, Lusin, Post, Herbrand, Gödel, Tarski, Church, Kleene, Turing, and Gentzen. They discovered the first real theorems in mathematical logic, with those of Gödel having a dramatic impact. Hilbert (in Göttingen), Lusin (in Moscow), Tarski (in Warsaw and Berkeley), and Church (in Princeton) had many students and collaborators, who made up a large part of that generation and the next in mathematical logic. Most of these names will be encountered again during the course. The early part of the 20th century was also marked by the so-called $$ \text { foundational crisis in mathematics. } $$ A strong impulse for developing mathematical logic came from the attempts during these times to provide solid foundations for mathematics. Mathematical logic has now taken on a life of its own, and also thrives on many interactions with other areas of mathematics and computer science. In the second half of the last century, logic as pursued by mathematicians gradually branched into four main areas: model theory, computability theory (or recursion theory), set theory, and proof theory. The topics in this course are part of the common background of mathematicians active in any of these areas. What distinguishes mathematical logic within mathematics is that $$ \text { statements about mathematical objects and structures } $$ are taken seriously as mathematical objects in their own right. More generally, in mathematical logic we formalize (formulate in a precise mathematical way) notions used informally by mathematicians such as:

- property
- statement (in a given language)
- structure
- truth (what it means for a given statement to be true in a given structure)
- proof (from a given set of axioms)
- algorithm

${ }^{1}$ In the case of set theory one could dispute this.
Cantor's discoveries were profound, but even so, the main influence of set theory on the rest of mathematics was to enable simple constructions of great generality, like cartesian products, quotient sets and power sets, and this involves only very elementary set theory. Once we have mathematical definitions of these notions, we can try to prove theorems about these formalized notions. If done with imagination, this process can lead to unexpected rewards. Of course, formalization tends to caricature the informal concepts it aims to capture, but no harm is done if this is kept firmly in mind. Example. The notorious Goldbach Conjecture asserts that every even integer greater than 2 is a sum of two prime numbers. With the understanding that the variables range over $\mathbf{N}=\{0,1,2, \ldots\}$, and that $0,1,+, \cdot,<$ denote the usual arithmetic operations and relations on $\mathbf{N}$, this assertion can be expressed formally as the sentence $$ GC: \qquad \forall x[(1+1<x \wedge \operatorname{even}(x)) \rightarrow \exists p \exists q(\operatorname{prime}(p) \wedge \operatorname{prime}(q) \wedge x=p+q)] $$ where $\operatorname{even}(x)$ abbreviates $\exists y(x=y+y)$ and $\operatorname{prime}(p)$ abbreviates $$ 1<p \wedge \forall r \forall s(p=r \cdot s \rightarrow(r=1 \vee s=1)) . $$ The expression $GC$ is an example of a formal statement (also called a sentence) in the language of arithmetic, which has symbols $0,1,+, \cdot,<$ to denote arithmetic operations and relations, in addition to logical symbols like $=, \wedge, \vee, \neg, \rightarrow, \forall, \exists$, and variables $x, y, z, p, q, r, s$. The Goldbach Conjecture asserts that this particular sentence $GC$ is true in the structure $(\mathbf{N} ; 0,1,+, \cdot,<)$. (No proof of the Goldbach Conjecture is known.) It also makes sense to ask whether the sentence $GC$ is true in the structure $$ (\mathbf{R} ; 0,1,+, \cdot,<) $$ where now the variables range over $\mathbf{R}$ and $0,1,+, \cdot,<$ have their natural 'real' meanings.
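As an aside (not from the notes), the sentence $GC$ can be checked mechanically over $\mathbf{N}$ up to any fixed bound. A brute-force Python sketch, with helper names chosen here to mirror the abbreviations prime and even:

```python
# Brute-force check of the Goldbach sentence GC up to a bound
# (illustrative sketch; the function names are chosen here, not from the notes).

def is_prime(p):
    # prime(p): 1 < p and every factorization p = r*s has r = 1 or s = 1
    return p > 1 and all(p % d != 0 for d in range(2, p))

def goldbach_holds_up_to(bound):
    # every even x with 2 < x <= bound should be a sum of two primes
    for x in range(4, bound + 1, 2):
        if not any(is_prime(p) and is_prime(x - p) for p in range(2, x)):
            return False
    return True

print(goldbach_holds_up_to(1000))  # True
```

Such a finite check can refute $GC$ by exhibiting a counterexample, but it can never prove it; this is exactly the gap between truth in $(\mathbf{N} ; 0,1,+, \cdot,<)$ and provability that the example is meant to illustrate.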
(It's not, as is easily verified. That the question makes sense - has a yes or no answer - does not mean that it is of any interest.) A century of experience gives us confidence that all classical number-theoretic results - old or new, proved by elementary methods or by sophisticated algebra and analysis - can be proved from the Peano axioms for arithmetic. ${ }^{2}$ However, in our present state of knowledge, $G C$ might be true in $(\mathbf{N} ; 0,1,+, \cdot,<)$, but not provable from those axioms. (On the other hand, once you know what exactly we mean by $$ \text { provable from the Peano axioms, } $$ you will see that if $G C$ is provable from those axioms, then $G C$ is true in $(\mathbf{N} ; 0,1,+, \cdot,<)$, and that if $G C$ is false in $(\mathbf{N} ; 0,1,+, \cdot,<)$, then its negation $\neg G C$ is provable from those axioms.) The point of this example is simply to make the reader aware of the notions "true in a given structure" and "provable from a given set of axioms," and their difference. One objective of this course is to figure out the connections (and disconnections) between these notions.

${ }^{2}$ Here we do not count as part of classical number theory some results like Ramsey's Theorem that can be stated in the language of arithmetic, but are arguably more in the spirit of logic and combinatorics.

## Some highlights (1900-1950)

The results below are among the most frequently used facts of mathematical logic. The terminology used in stating these results might be unfamiliar, but that should change during the course. What matters is to get some preliminary idea of what we are aiming for. As will become clear during the course, each of these results has stronger versions, on which applications often depend, but in this overview we prefer simple statements over strength and applicability. We begin with two results that are fundamental in model theory.
They concern the notion of model of $\Sigma$ where $\Sigma$ is a set of sentences in a language $L$. At this stage we only say by way of explanation that a model of $\Sigma$ is a mathematical structure in which all sentences of $\Sigma$ are true. For example, if $\Sigma$ is the (infinite) set of axioms for fields of characteristic zero in the language of rings, then a model of $\Sigma$ is just a field of characteristic zero. Theorem of Löwenheim and Skolem. If $\Sigma$ is a countable set of sentences in some language and $\Sigma$ has a model, then $\Sigma$ has a countable model. Compactness Theorem (Gödel, Mal'cev). Let $\Sigma$ be a set of sentences in some language. Then $\Sigma$ has a model if and only if each finite subset of $\Sigma$ has a model. The next result goes a little beyond model theory by relating the notion of "model of $\Sigma$ " to that of "provability from $\Sigma$ ": Completeness Theorem (Gödel, 1930). Let $\Sigma$ be a set of sentences in some language $L$, and let $\sigma$ be a sentence in $L$. Then $\sigma$ is provable from $\Sigma$ if and only if $\sigma$ is true in all models of $\Sigma$. In our treatment we shall obtain the first two theorems as byproducts of the Completeness Theorem and its proof. In the case of the Compactness Theorem this reflects history, but the theorem of Löwenheim and Skolem predates the Completeness Theorem. The Löwenheim-Skolem and Compactness theorems do not mention the notion of provability, and thus model theorists often prefer to bypass Completeness in establishing these results; see for example Poizat's book. Here is an important early result on a specific arithmetic structure: Theorem of Presburger and Skolem. 
Each sentence in the language of the structure $(\mathbf{Z} ; 0,1,+,-,<)$ that is true in this structure is provable from the axioms for ordered abelian groups with least positive element 1 , augmented, for each $n=2,3,4, \ldots$, by an axiom that says that for every a there is a $b$ such that $a=n b$ or $a=n b+1$ or $\ldots$ or $a=n b+1+\cdots+1$ (with $n$ disjuncts in total). Moreover, there is an algorithm that, given any sentence in this language as input, decides whether this sentence is true in $(\mathbf{Z} ; 0,1,+,-,<)$. Note that in $(\mathbf{Z} ; 0,1,+,-,<)$ we have not included multiplication among the primitives; accordingly, $n b$ stands for $b+\cdots+b$ (with $n$ summands). When we do include multiplication, the situation changes dramatically: Incompleteness and undecidability of arithmetic. (Gödel-Church, 1930's). One can construct a sentence in the language of arithmetic that is true in the structure $(\mathbf{N} ; 0,1,+, \cdot,<)$, but not provable from the Peano axioms. There is no algorithm that, given any sentence in this language as input, decides whether this sentence is true in $(\mathbf{N} ; 0,1,+, \cdot,<)$. Here "there is no algorithm" is used in the mathematical sense of there cannot exist an algorithm, not in the weaker colloquial sense of "no algorithm is known." This theorem is intimately connected with the clarification of notions like computability and algorithm in which Turing played a key role. In contrast to these incompleteness and undecidability results on (sufficiently rich) arithmetic, we have Tarski's theorem on the field of real numbers (1930-1950). Every sentence in the language of arithmetic that is true in the structure $$ (\mathbf{R} ; 0,1,+, \cdot,<) $$ is provable from the axioms for ordered fields augmented by the axioms - every positive element is a square, - every odd degree polynomial has a zero. 
There is also an algorithm that decides, for any given sentence in this language as input, whether this sentence is true in $(\mathbf{R} ; 0,1,+, \cdot,<)$.

### Sets and Maps

We shall use this section as an opportunity to fix notations and terminologies that are used throughout these notes, and throughout mathematics. In a few places we shall need more set theory than we introduce here, for example, ordinals and cardinals. The following little book is a good place to read about these matters. (It also contains an axiomatic treatment of set theory starting from scratch.)

Halmos, P. R., Naive set theory, New York, Springer, 1974.

In an axiomatic treatment of set theory as in the book by Halmos all assertions about sets below are proved from a few simple axioms. In such a treatment the notion of set itself is left undefined, but the axioms about sets are suggested by thinking of a set as a collection of mathematical objects, called its elements or members. To indicate that an object $x$ is an element of the set $A$ we write $x \in A$, in words: $x$ is in $A$ (or: $x$ belongs to $A$). To indicate that $x$ is not in $A$ we write $x \notin A$. We consider the sets $A$ and $B$ as the same set (notation: $A=B$) if and only if they have exactly the same elements. We often introduce a set via the bracket notation, listing or indicating inside the brackets its elements. For example, $\{1,2,7\}$ is the set with 1, 2, and 7 as its only elements. Note that $\{1,2,7\}=\{2,7,1\}$, and $\{3,3\}=\{3\}$: the same set can be described in many different ways. Don't confuse an object $x$ with the set $\{x\}$ that has $x$ as its only element: for example, the object $x=\{0,1\}$ is a set that has exactly two elements, namely 0 and 1, but the set $\{x\}=\{\{0,1\}\}$ has only one element, namely $x$. Here are some important sets that the reader has probably encountered previously.

## Examples.

(1) The empty set: $\emptyset$ (it has no elements).
(2) The set of natural numbers: $\mathbf{N}=\{0,1,2,3, \ldots\}$.
(3) The set of integers: $\mathbf{Z}=\{\ldots,-2,-1,0,1,2, \ldots\}$.
(4) The set of rational numbers: $\mathbf{Q}$.
(5) The set of real numbers: $\mathbf{R}$.
(6) The set of complex numbers: $\mathbf{C}$.

Remark. Throughout these notes $m$ and $n$ always denote natural numbers. For example, "for all $m$..." will mean "for all $m \in \mathbf{N}$...".

If all elements of the set $A$ are in the set $B$, then we say that $A$ is a subset of $B$ (and write $A \subseteq B$). Thus the empty set $\emptyset$ is a subset of every set, and each set is a subset of itself. We often introduce a set $A$ in our discussions by defining $A$ to be the set of all elements of a given set $B$ that satisfy some property $P$. Notation: $$ A:=\{x \in B: x \text { satisfies } P\} \quad \text { (hence } A \subseteq B \text { ). } $$

Let $A$ and $B$ be sets. Then we can form the following sets:
(a) $A \cup B:=\{x: x \in A$ or $x \in B\}$ (union of $A$ and $B$);
(b) $A \cap B:=\{x: x \in A$ and $x \in B\}$ (intersection of $A$ and $B$);
(c) $A \backslash B:=\{x: x \in A$ and $x \notin B\}$ (difference of $A$ and $B$);
(d) $A \times B:=\{(a, b): a \in A$ and $b \in B\}$ (cartesian product of $A$ and $B$).

Thus the elements of $A \times B$ are the so-called ordered pairs $(a, b)$ with $a \in A$ and $b \in B$. The key property of ordered pairs is that we have $(a, b)=(c, d)$ if and only if $a=c$ and $b=d$. For example, you may think of $\mathbf{R} \times \mathbf{R}$ as the set of points $(a, b)$ in the $xy$-plane of coordinate geometry. We say that $A$ and $B$ are disjoint if $A \cap B=\emptyset$, that is, they have no element in common.

Remark. In a definition such as we just gave: "We say that $\cdots$ if —," the meaning of "if" is actually "if and only if." We committed a similar abuse of language earlier in defining set inclusion by the phrase "If - , then we say that ...."
We shall continue such abuse, in accordance with tradition, but only in similarly worded definitions. Also, we shall often write "iff" or " $\Leftrightarrow$ " to abbreviate "if and only if."

## Maps

Definition. A map is a triple $f=(A, B, \Gamma)$ of sets $A, B, \Gamma$ such that $\Gamma \subseteq A \times B$ and for each $a \in A$ there is exactly one $b \in B$ with $(a, b) \in \Gamma$; we write $f(a)$ for this unique $b$, and call it the value of $f$ at $a$ (or the image of $a$ under $f$).${ }^{3}$ We call $A$ the domain of $f$, $B$ the codomain of $f$, and $\Gamma$ the graph of $f$.${ }^{4}$ We write $f: A \rightarrow B$ to indicate that $f$ is a map with domain $A$ and codomain $B$, and in this situation we also say that $f$ is a map from $A$ to $B$. Among the many synonyms of map are mapping, assignment, function, operator, transformation. Typically, "function" is used when the codomain is a set of numbers of some kind, "operator" when the elements of domain and codomain are themselves functions, and "transformation" is used in geometric situations where domain and codomain are equal. (We use equal as synonym for the same or identical; also coincide is a synonym for being the same.)

## Examples.

(1) Given any set $A$ we have the identity map $1_{A}: A \rightarrow A$ defined by $1_{A}(a)=a$ for all $a \in A$.
(2) Any polynomial $f(X)=a_{0}+a_{1} X+\cdots+a_{n} X^{n}$ with real coefficients $a_{0}, \ldots, a_{n}$ gives rise to a function $x \mapsto f(x): \mathbf{R} \rightarrow \mathbf{R}$. We often use the "maps to" symbol $\mapsto$ in this way to indicate the rule by which to each $x$ in the domain we associate its value $f(x)$.

Definition. Given $f: A \rightarrow B$ and $g: B \rightarrow C$ we have a map $g \circ f: A \rightarrow C$ defined by $(g \circ f)(a)=g(f(a))$ for all $a \in A$. It is called the composition of $g$ and $f$.

Definition. Let $f: A \rightarrow B$ be a map.
It is said to be injective if for all $a_{1} \neq a_{2}$ in $A$ we have $f\left(a_{1}\right) \neq f\left(a_{2}\right)$. It is said to be surjective if for each $b \in B$ there exists $a \in A$ such that $f(a)=b$. It is said to be bijective (or a bijection) if it is both injective and surjective. For $X \subseteq A$ we put $$ f(X):=\{f(x): x \in X\} \subseteq B \quad \text { (direct image of } X \text { under } f \text { ). } $$ (There is a notational conflict here when $X$ is both a subset of $A$ and an element of $A$, but it will always be clear from the context when $f(X)$ is meant to be the direct image of $X$ under $f$; some authors resolve the conflict by denoting this direct image by $f[X]$ or in some other way.) We also call $f(A)=\{f(a): a \in A\}$ the image of $f$. For $Y \subseteq B$ we put $$ f^{-1}(Y):=\{x \in A: f(x) \in Y\} \subseteq A \quad \text { (inverse image of } Y \text { under } f \text { ). } $$ Thus surjectivity of our map $f$ is equivalent to $f(A)=B$.

${ }^{3}$ Sometimes we shall write $f a$ instead of $f(a)$ in order to cut down on parentheses.

${ }^{4}$ Other words for "domain" and "codomain" are "source" and "target", respectively.

If $f: A \rightarrow B$ is a bijection then we have an inverse map $f^{-1}: B \rightarrow A$ given by $$ f^{-1}(b):=\text { the unique } a \in A \text { such that } f(a)=b . $$ Note that then $f^{-1} \circ f=1_{A}$ and $f \circ f^{-1}=1_{B}$. Conversely, if $f: A \rightarrow B$ and $g: B \rightarrow A$ satisfy $g \circ f=1_{A}$ and $f \circ g=1_{B}$, then $f$ is a bijection with $f^{-1}=g$. (The attentive reader will notice that we just introduced a potential conflict of notation: for bijective $f: A \rightarrow B$ and $Y \subseteq B$, both the inverse image of $Y$ under $f$ and the direct image of $Y$ under $f^{-1}$ are denoted by $f^{-1}(Y)$; no harm is done, since these two subsets of $A$ coincide.)
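On finite sets, the notions above can be checked by direct computation. A small Python sketch (the data and helper names are illustrative, not from the notes), modelling a map by the dict of its graph:

```python
# A map f: A -> B on finite sets, modelled as a Python dict
# (illustrative sketch; the sets and names are chosen here, not from the notes).
A = {1, 2, 3}
B = {"x", "y", "z", "w"}
f = {1: "x", 2: "y", 3: "y"}  # f(1)=x, f(2)=y, f(3)=y

def is_injective(f):
    # injective: distinct arguments have distinct values
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    # surjective: every element of the codomain is a value
    return set(f.values()) == codomain

def direct_image(f, X):
    # f(X) = {f(x) : x in X}
    return {f[x] for x in X}

def inverse_image(f, Y):
    # f^{-1}(Y) = {x in A : f(x) in Y}
    return {x for x in f if f[x] in Y}

print(is_injective(f))               # False: f(2) = f(3)
print(is_surjective(f, B))           # False: "z" and "w" are not values
print(direct_image(f, {2, 3}))       # {'y'}
print(inverse_image(f, {"y", "w"}))  # {2, 3}
```

Here $f$ is neither injective nor surjective; shrinking the codomain to the image $f(A)=\{x, y\}$ would make it surjective, matching the remark that surjectivity is equivalent to $f(A)=B$.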
It follows from the definition of "map" that $f: A \rightarrow B$ and $g: C \rightarrow D$ are equal $(f=g)$ if and only if $A=C, B=D$, and $f(x)=g(x)$ for all $x \in A$. We say that $g: C \rightarrow D$ extends $f: A \rightarrow B$ if $A \subseteq C, B \subseteq D$, and $f(x)=g(x)$ for all $x \in A$. $^{5}$ Definition. A set $A$ is said to be finite if there exists $n$ and a bijection $$ f:\{1, \ldots, n\} \rightarrow A . $$ Here we use $\{1, \ldots, n\}$ as a suggestive notation for the set $\{m: 1 \leq m \leq n\}$. For $n=0$ this is just $\emptyset$. If $A$ is finite there is exactly one such $n$ (although if $n>1$ there will be more than one bijection $f:\{1, \ldots, n\} \rightarrow A$ ); we call this unique $n$ the number of elements of $A$ or the cardinality of $A$, and denote it by $|A|$. A set which is not finite is said to be infinite. Definition. A set $A$ is said to be countably infinite if there is a bijection $\mathbf{N} \rightarrow A$. It is said to be countable if it is either finite or countably infinite. Example. The sets $\mathbf{N}, \mathbf{Z}$ and $\mathbf{Q}$ are countably infinite, but the infinite set $\mathbf{R}$ is not countably infinite. Every infinite set has a countably infinite subset. One of the standard axioms of set theory, the Power Set Axiom says: For any set $A$, there is a set whose elements are exactly the subsets of $A$. Such a set of subsets of $A$ is clearly uniquely determined by $A$, is denoted by $\mathcal{P}(A)$, and is called the power set of $A$. If $A$ is finite, so is $\mathcal{P}(A)$ and $|\mathcal{P}(A)|=2^{|A|}$. Note that $a \mapsto\{a\}: A \rightarrow \mathcal{P}(A)$ is an injective map. However, there is no surjective map $A \rightarrow \mathcal{P}(A)$ : Cantor's Theorem. Let $S: A \rightarrow \mathcal{P}(A)$ be a map. Then the set $$ \{a \in A: a \notin S(a)\} \quad \text { (a subset of } A \text { ) } $$ is not an element of $S(A)$. Proof. Suppose otherwise. 
Then $\{a \in A: a \notin S(a)\}=S(b)$ where $b \in A$. Assuming $b \in S(b)$ yields $b \notin S(b)$, a contradiction. Thus $b \notin S(b)$; but then $b \in S(b)$, again a contradiction. This concludes the proof. ${ }^{5}$ We also say " $g: C \rightarrow D$ is an extension of $f: A \rightarrow B$ " or " $f: A \rightarrow B$ is a restriction of $g: C \rightarrow D . "$ Let $I$ and $A$ be sets. Then there is a set whose elements are exactly the maps $f: I \rightarrow A$, and this set is denoted by $A^{I}$. For $I=\{1, \ldots, n\}$ we also write $A^{n}$ instead of $A^{I}$. Thus an element of $A^{n}$ is a map $a:\{1, \ldots, n\} \rightarrow A$; we usually think of such an $a$ as the $n$-tuple $(a(1), \ldots, a(n))$, and we often write $a_{i}$ instead of $a(i)$. So $A^{n}$ can be thought of as the set of $n$-tuples $\left(a_{1}, \ldots, a_{n}\right)$ with each $a_{i} \in A$. For $n=0$ the set $A^{n}$ has just one element - the empty tuple. An $n$-ary relation on $A$ is just a subset of $A^{n}$, and an $n$-ary operation on $A$ is a map from $A^{n}$ into $A$. Instead of "1-ary" we usually say "unary", and instead of "2-ary" we can say "binary". For example, $\left\{(a, b) \in \mathbf{Z}^{2}: a<b\right\}$ is a binary relation on $\mathbf{Z}$, and integer addition is the binary operation $(a, b) \mapsto a+b$ on $\mathbf{Z}$. Definition. $\left\{a_{i}\right\}_{i \in I}$ or $\left(a_{i}\right)_{i \in I}$ denotes a family of objects $a_{i}$ indexed by the set $I$, and is just a suggestive notation for a set $\left\{\left(i, a_{i}\right): i \in I\right\}$, not to be confused with the set $\left\{a_{i}: i \in I\right\}$. (There may be repetitions in the family, that is, it may happen that $a_{i}=a_{j}$ for distinct indices $i, j \in I$, but such repetition is not reflected in the set $\left\{a_{i}: i \in I\right\}$. 
For example, if $I=\mathbf{N}$ and $a_{n}=a$ for all $n$, then $\left\{\left(i, a_{i}\right): i \in I\right\}=\{(i, a): i \in \mathbf{N}\}$ is countably infinite, but $\left\{a_{i}: i \in I\right\}=\{a\}$ has just one element.) For $I=\mathbf{N}$ we usually say "sequence" instead of "family". Given any family $\left(A_{i}\right)_{i \in I}$ of sets (that is, each $A_{i}$ is a set) we have a set $$ \bigcup_{i \in I} A_{i}:=\left\{x: x \in A_{i} \text { for some } i \in I\right\}, $$ the union of the family, or, more informally, the union of the sets $A_{i}$. If $I$ is finite and each $A_{i}$ is finite, then so is the union above and $$ \left|\bigcup_{i \in I} A_{i}\right| \leq \sum_{i \in I}\left|A_{i}\right| . $$ If $I$ is countable and each $A_{i}$ is countable, then $\bigcup_{i \in I} A_{i}$ is countable. Given any family $\left(A_{i}\right)_{i \in I}$ of sets we have a set $$ \prod_{i \in I} A_{i}:=\left\{\left(a_{i}\right)_{i \in I}: a_{i} \in A_{i} \text { for all } i \in I\right\}, $$ the product of the family. One axiom of set theory, the Axiom of Choice, is a bit special, but we shall use it a few times. It says that for any family $\left(A_{i}\right)_{i \in I}$ of nonempty sets there is a family $\left(a_{i}\right)_{i \in I}$ such that $a_{i} \in A_{i}$ for all $i \in I$, that is, $\prod_{i \in I} A_{i} \neq \emptyset$.

## Words

Definition. Let $A$ be a set. Think of $A$ as an alphabet of letters. A word of length $n$ on $A$ is an $n$-tuple $\left(a_{1}, \ldots, a_{n}\right)$ of letters $a_{i} \in A$; because we think of it as a word (string of letters) we shall write this tuple instead as $a_{1} \ldots a_{n}$ (without parentheses or commas). There is a unique word of length 0 on $A$, the empty word, written $\epsilon$. Given a word $a=a_{1} \ldots a_{n}$ of length $n \geq 1$ on $A$, the first letter (or first symbol) of $a$ is by definition $a_{1}$, and the last letter (or last symbol) of $a$ is $a_{n}$.
The set of all words on $A$ is denoted $A^{*}$: $$ A^{*}=\bigcup_{n} A^{n} \quad \text { (disjoint union). } $$ Logical expressions like formulas and terms will be introduced later as words of a special form on suitable alphabets. When $A \subseteq B$ we can identify $A^{*}$ with a subset of $B^{*}$, and this will be done whenever convenient.

Definition. Given words $a=a_{1} \ldots a_{m}$ and $b=b_{1} \ldots b_{n}$ on $A$ of length $m$ and $n$ respectively, we define their concatenation $a b \in A^{*}$: $$ a b=a_{1} \ldots a_{m} b_{1} \ldots b_{n} . $$ Thus $a b$ is a word on $A$ of length $m+n$. Concatenation is a binary operation on $A^{*}$ that is associative: $(a b) c=a(b c)$ for all $a, b, c \in A^{*}$, with $\epsilon$ as two-sided identity: $\epsilon a=a=a \epsilon$ for all $a \in A^{*}$, and with two-sided cancellation: for all $a, b, c \in A^{*}$, if $a b=a c$, then $b=c$, and if $a c=b c$, then $a=b$.

## Equivalence Relations and Quotient Sets

Given a binary relation $R$ on a set $A$ it is often more suggestive to write $a R b$ instead of $(a, b) \in R$.

Definition. An equivalence relation on a set $A$ is a binary relation $\sim$ on $A$ such that for all $a, b, c \in A$:
(i) $a \sim a$ (reflexivity);
(ii) $a \sim b$ implies $b \sim a$ (symmetry);
(iii) ($a \sim b$ and $b \sim c$) implies $a \sim c$ (transitivity).

Example. Given any $n$ we have the equivalence relation "congruence modulo $n$" on $\mathbf{Z}$ defined as follows: for any $a, b \in \mathbf{Z}$ we have $$ a \equiv b \quad \bmod n \Longleftrightarrow a-b=n c \text { for some } c \in \mathbf{Z} . $$ For $n=0$ this is just equality on $\mathbf{Z}$.

Let $\sim$ be an equivalence relation on the set $A$. The equivalence class $a^{\sim}$ of an element $a \in A$ is defined by $a^{\sim}=\{b \in A: a \sim b\}$ (a subset of $A$). For $a, b \in A$ we have $a^{\sim}=b^{\sim}$ if and only if $a \sim b$, and $a^{\sim} \cap b^{\sim}=\emptyset$ if and only if $a \nsim b$.
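For the congruence example, the equivalence classes can be computed directly on a finite piece of $\mathbf{Z}$. A Python sketch (illustrative, not from the notes):

```python
# Equivalence classes of congruence modulo n, computed on a finite
# piece of Z (illustrative sketch; the names are chosen here, not from the notes).

def congruence_class(a, n, universe):
    # a~ = {b in universe : a ≡ b (mod n)}, i.e. a - b is divisible by n
    return {b for b in universe if (a - b) % n == 0}

universe = range(-6, 7)  # the integers -6, ..., 6
print(congruence_class(1, 3, universe))  # {-5, -2, 1, 4}

# a~ = b~ if and only if a ~ b: collecting the classes of all elements
# of the universe yields only 3 distinct sets when n = 3
classes = {frozenset(congruence_class(a, 3, universe)) for a in universe}
print(len(classes))  # 3
```

With $n=3$ there are exactly three distinct classes, and they are pairwise disjoint with union the whole universe.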
The quotient set of $A$ by $\sim$ is by definition the set of equivalence classes: $$ A / \sim=\left\{a^{\sim}: a \in A\right\} . $$ This quotient set is a partition of $A$, that is, it is a collection of pairwise disjoint nonempty subsets of $A$ whose union is $A$. (Collection is a synonym for set; we use it here because we don't like to say "set of ... subsets ...".) Every partition of $A$ is the quotient set $A / \sim$ for a unique equivalence relation $\sim$ on $A$. Thus equivalence relations on $A$ and partitions of $A$ are just different ways to describe the same situation. In the previous example (congruence modulo $n$) the equivalence classes are called congruence classes modulo $n$ (or residue classes modulo $n$) and the corresponding quotient set is often denoted $\mathbf{Z} / n \mathbf{Z}$.

Remark. Readers familiar with some abstract algebra will note that the construction in the example above is a special case of a more general construction: that of a quotient of a group with respect to a normal subgroup.

## Posets

A partially ordered set (short: poset) is a pair $(P, \leq)$ consisting of a set $P$ and a partial ordering $\leq$ on $P$, that is, $\leq$ is a binary relation on $P$ such that for all $p, q, r \in P$:
(i) $p \leq p$ (reflexivity);
(ii) if $p \leq q$ and $q \leq p$, then $p=q$ (antisymmetry);
(iii) if $p \leq q$ and $q \leq r$, then $p \leq r$ (transitivity).

If in addition we have for all $p, q \in P$,
(iv) $p \leq q$ or $q \leq p$,
then we say that $\leq$ is a linear order on $P$, or that $(P, \leq)$ is a linearly ordered set. ${ }^{6}$ Each of the sets $\mathbf{N}, \mathbf{Z}, \mathbf{Q}, \mathbf{R}$ comes with its familiar linear order on it. As an example, take any set $A$ and its collection $\mathcal{P}(A)$ of subsets. Then $$ X \leq Y: \Longleftrightarrow X \subseteq Y \quad \text { (for subsets } X, Y \text { of } A \text { ) } $$ defines a poset $(\mathcal{P}(A), \leq)$, also referred to as the power set of $A$ ordered by inclusion.
This is not a linearly ordered set if $A$ has more than one element. Finite linearly ordered sets are determined "up to unique isomorphism" by their size: if $(P, \leq)$ is a linearly ordered set and $|P|=n$, then there is a unique map $\iota: P \rightarrow\{1, \ldots, n\}$ such that for all $p, q \in P$ we have: $p \leq q \Longleftrightarrow \iota(p) \leq \iota(q)$. This map $\iota$ is a bijection. Let $(P, \leq)$ be a poset. Here is some useful notation. For $x, y \in P$ we set $$ \begin{aligned} & x \geq y: \Longleftrightarrow y \leq x, \\ & x<y: \Longleftrightarrow y>x: \Longleftrightarrow x \leq y \text { and } x \neq y . \end{aligned} $$ Note that $(P, \geq)$ is also a poset. A least element of $P$ is a $p \in P$ such that $p \leq x$ for all $x \in P$; a largest element of $P$ is defined likewise, with $\geq$ instead of $\leq$. ${ }^{6}$ One also uses the term total order instead of linear order. Of course, $P$ can have at most one least element; therefore we can refer to the least element of $P$, if $P$ has a least element; likewise, we can refer to the largest element of $P$, if $P$ has a largest element. A minimal element of $P$ is a $p \in P$ such that there is no $x \in P$ with $x<p$; a maximal element of $P$ is defined likewise, with $>$ instead of $<$. If $P$ has a least element, then this element is also the unique minimal element of $P$; some posets, however, have more than one minimal element. The reader might want to prove the following result to get a feeling for these notions: If $P$ is finite and nonempty, then $P$ has a maximal element, and there is a linear order $\leq '$ on $P$ that extends $\leq$ in the sense that $$ p \leq q \Longrightarrow p \leq^{\prime} q, \quad \text { for all } p, q \in P . $$ (Hint: use induction on $|P|$.) Let $X \subseteq P$. 
A lower bound (respectively, upper bound) of $X$ in $P$ is an element $l \in P$ (respectively, an element $u \in P$), such that $l \leq x$ for all $x \in X$ (respectively, $x \leq u$ for all $x \in X$). We often tacitly consider $X$ as a poset in its own right, by restricting the given partial ordering of $P$ to $X$. More precisely this means that we consider the poset $\left(X, \leq_{X}\right)$ where the partial ordering $\leq_{X}$ on $X$ is defined by $$ x \leq_{X} y \Longleftrightarrow x \leq y \quad(x, y \in X) . $$ Thus we can speak of least, largest, minimal, and maximal elements of a set $X \subseteq P$, when the ambient poset $(P, \leq)$ is clear from the context. For example, when $X$ is the collection of nonempty subsets of a set $A$ and $X$ is ordered by inclusion, then the minimal elements of $X$ are the singletons $\{a\}$ with $a \in A$. We call $X$ a chain in $P$ if $\left(X, \leq_{X}\right)$ is linearly ordered. Occasionally we shall use the following fact about posets $(P, \leq)$.

Zorn's Lemma. Suppose $P$ is nonempty and every nonempty chain in $P$ has an upper bound in $P$. Then $P$ has a maximal element.

For a further discussion of Zorn's Lemma and its proof using the Axiom of Choice we refer the reader to Halmos's book on set theory.

## Chapter 2

## Basic Concepts of Logic

### Propositional Logic

Propositional logic is the fragment of logic where new statements are built from given statements using so-called connectives like "not", "or" and "and". The truth value of such a new statement is then completely determined by the truth values of the given statements.
Thus, given any statements $p$ and $q$, we can form the three statements $$ \begin{aligned} \neg p & \text { (the negation of } p, \text { pronounced as "not } p \text { "), } \\ p \vee q & \text { (the disjunction of } p \text { and } q, \text { pronounced as " } p \text { or } q \text { "), } \\ p \wedge q & \text { (the conjunction of } p \text { and } q, \text { pronounced as " } p \text { and } q \text { "). } \end{aligned} $$ This leads to more complicated combinations like $\neg(p \wedge(\neg q))$. We shall regard $\neg p$ as true if and only if $p$ is not true; also, $p \vee q$ is defined to be true if and only if $p$ is true or $q$ is true (including the possibility that both are true), and $p \wedge q$ is deemed to be true if and only if $p$ is true and $q$ is true. Instead of "not true" we also say "false". We now introduce a formalism that makes this into mathematics. We start with the five distinct symbols $$ \top \perp \neg \vee \wedge $$ to be thought of as true, false, not, or, and and, respectively. These symbols are fixed throughout the course, and are called propositional connectives. In this section we also fix a set $A$ whose elements will be called propositional atoms (or just atoms), such that no propositional connective is an atom. It may help the reader to think of an atom $a$ as a variable for which we can substitute arbitrary statements, assumed to be either true or false. A proposition on $A$ is a word on the alphabet $A \cup\{\top, \perp, \neg, \vee, \wedge\}$ that can be obtained by applying the following rules: (i) each atom $a \in A$ (viewed as a word of length 1 ) is a proposition on $A$; (ii) $\top$ and $\perp$ (viewed as words of length 1) are propositions on $A$; (iii) if $p$ and $q$ are propositions on $A$, then the concatenations $\neg p, \vee p q$ and $\wedge p q$ are propositions on $A$. For the rest of this section "proposition" means "proposition on $A$ ", and $p, q, r$ (sometimes with subscripts) will denote propositions. 
Example. Suppose $a, b, c$ are atoms. Then $\wedge \vee \neg a b \neg c$ is a proposition. This follows from the rules above: $a$ is a proposition, so $\neg a$ is a proposition, hence $\vee \neg a b$ as well; also $\neg c$ is a proposition, and thus $\wedge \vee \neg a b \neg c$ is a proposition. We defined "proposition" using the suggestive but vague phrase "can be obtained by applying the following rules". The reader should take such an informal description as shorthand for a completely explicit definition, which in the case at hand is as follows: A proposition is a word $w$ on the alphabet $A \cup\{\top, \perp, \neg, \vee, \wedge\}$ for which there is a sequence $w_{1}, \ldots, w_{n}$ of words on that same alphabet, with $n \geq 1$, such that $w=w_{n}$ and for each $k \in\{1, \ldots, n\}$, either $w_{k} \in A \cup\{\top, \perp\}$ (where each element in the last set is viewed as a word of length 1), or there are $i, j \in\{1, \ldots, k-1\}$ such that $w_{k}$ is one of the concatenations $\neg w_{i}, \vee w_{i} w_{j}, \wedge w_{i} w_{j}$. We let $\operatorname{Prop}(A)$ denote the set of propositions. Remark. Having the connectives $\vee$ and $\wedge$ in front of the propositions they "connect" rather than in between them is called prefix notation or Polish notation. This is theoretically elegant, but for the sake of readability we usually write $p \vee q$ and $p \wedge q$ to denote $\vee p q$ and $\wedge p q$ respectively, and we also use parentheses and brackets if this helps to clarify the structure of a proposition. So the proposition in the example above could be denoted by $[(\neg a) \vee b] \wedge(\neg c)$, or even by $(\neg a \vee b) \wedge \neg c$, since we shall agree that $\neg$ binds stronger than $\vee$ and $\wedge$ in this informal way of indicating propositions. Because of the informal nature of these conventions, we don't have to give precise rules for their use; it's enough that each actual use is clear to the reader.
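The explicit definition above is easy to test mechanically. The following sketch (a hypothetical illustration, not part of the text) takes atoms to be single lowercase letters and uses the actual characters ⊤, ⊥, ¬, ∨, ∧ as symbols; it checks that a given sequence $w_{1}, \ldots, w_{n}$ witnesses that each of its words, in particular the last one, is a proposition:

```python
# Check the witness-sequence definition of "proposition": each word must be
# an atom, ⊤, ⊥, or a concatenation ¬u, ∨uv, ∧uv of earlier words.

def is_witnessing_sequence(words):
    """Return True iff every word is justified by the three formation rules."""
    for k, w in enumerate(words):
        earlier = words[:k]
        if len(w) == 1 and (w.islower() or w in '⊤⊥'):   # rules (i) and (ii)
            continue
        if any(w == '¬' + u for u in earlier):            # rule (iii): ¬u
            continue
        if any(w == c + u + v                              # rule (iii): ∨uv, ∧uv
               for c in '∨∧' for u in earlier for v in earlier):
            continue
        return False
    return True

# The example proposition ∧∨¬ab¬c, built step by step as in the text:
seq = ['a', '¬a', 'b', '∨¬ab', 'c', '¬c', '∧∨¬ab¬c']
print(is_witnessing_sequence(seq))  # → True
```

Each step of the example in the text corresponds to one entry of `seq`; deleting any intermediate word breaks the certificate.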
The intended structure of a proposition (how we think of it as built up from atoms via connectives) is best exhibited in the form of a tree, a two-dimensional array, rather than as a one-dimensional string. Such trees, however, occupy valuable space on the printed page, and are typographically demanding. Fortunately, our "official" prefix notation does uniquely determine the intended structure of a proposition: that is what the next lemma amounts to. Lemma 2.1.1 (Unique Readability). If $p$ has length 1, then either $p=\top$, or $p=\perp$, or $p$ is an atom. If $p$ has length $>1$, then its first symbol is either $\neg$, or $\vee$, or $\wedge$. If the first symbol of $p$ is $\neg$, then $p=\neg q$ for a unique $q$. If the first symbol of $p$ is $\vee$, then $p=\vee q r$ for a unique pair $(q, r)$. If the first symbol of $p$ is $\wedge$, then $p=\wedge q r$ for a unique pair $(q, r)$. (Note that we used here our convention that $p, q, r$ denote propositions.) Only the last two claims are worth proving in print; the others should require only a moment's thought. For now we shall assume this lemma without proof. At the end of this section we establish more general results of this kind which are needed also later in the course. Remark. Rather than thinking of a proposition as a statement, it is better viewed as a function whose arguments and values are statements: replacing the atoms in a proposition by specific mathematical statements like "$2 \times 2=4$", "$\pi^{2}<7$", and "every even integer $>2$ is the sum of two prime numbers", we obtain again a mathematical statement. We shall use the following notational conventions: $p \rightarrow q$ denotes $\neg p \vee q$, and $p \leftrightarrow q$ denotes $(p \rightarrow q) \wedge(q \rightarrow p)$.
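Unique readability is exactly what makes a deterministic parser for prefix notation possible: at each step the first symbol dictates how many further propositions to read. A minimal sketch (single-letter atoms are an illustrative assumption) recovers the tree structure of a proposition from its one-dimensional prefix word:

```python
def parse(word):
    """Split off the unique proposition at the start of `word`.
    Returns (tree, rest); raises ValueError if no proposition starts here."""
    if not word:
        raise ValueError('empty word')
    head, rest = word[0], word[1:]
    if head.islower() or head in '⊤⊥':   # an atom, ⊤, or ⊥: length-1 proposition
        return head, rest
    if head == '¬':                       # ¬q: read exactly one proposition q
        q, rest = parse(rest)
        return ('¬', q), rest
    if head in '∨∧':                      # ∨qr or ∧qr: read two propositions
        q, rest = parse(rest)
        r, rest = parse(rest)
        return (head, q, r), rest
    raise ValueError('bad symbol: ' + head)

tree, rest = parse('∧∨¬ab¬c')
print(tree)   # → ('∧', ('∨', ('¬', 'a'), 'b'), ('¬', 'c'))
print(rest)   # → '' (the whole word is consumed)
```

The parse of $\wedge \vee \neg a b \neg c$ matches the informal notation $(\neg a \vee b) \wedge \neg c$ from the previous remark; by Lemma 2.1.1 no other parse exists.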
By recursion on $n$ we define $$ p_{1} \vee \ldots \vee p_{n}= \begin{cases}\perp & \text { if } n=0 \\ p_{1} & \text { if } n=1 \\ p_{1} \vee p_{2} & \text { if } n=2 \\ \left(p_{1} \vee \ldots \vee p_{n-1}\right) \vee p_{n} & \text { if } n>2\end{cases} $$ Thus $p \vee q \vee r$ stands for $(p \vee q) \vee r$. We call $p_{1} \vee \ldots \vee p_{n}$ the disjunction of $p_{1}, \ldots, p_{n}$. The reason that for $n=0$ we take this disjunction to be $\perp$ is that we want a disjunction to be true iff (at least) one of the disjuncts is true. Similarly, the conjunction $p_{1} \wedge \ldots \wedge p_{n}$ of $p_{1}, \ldots, p_{n}$ is defined by replacing everywhere $\vee$ by $\wedge$ and $\perp$ by $\top$ in the definition of $p_{1} \vee \ldots \vee p_{n}$. Definition. A truth assignment is a map $t: A \rightarrow\{0,1\}$. We extend such a $t$ to $\hat{t}: \operatorname{Prop}(A) \rightarrow\{0,1\}$ by requiring (i) $\hat{t}(\top)=1, \quad \hat{t}(\perp)=0$, (ii) $\hat{t}(\neg p)=1-\hat{t}(p)$, (iii) $\hat{t}(p \vee q)=\max (\hat{t}(p), \hat{t}(q)), \quad \hat{t}(p \wedge q)=\min (\hat{t}(p), \hat{t}(q))$. Note that there is exactly one such extension $\hat{t}$ by unique readability. To simplify notation we often write $t$ instead of $\hat{t}$. The array below is called a truth table. It shows on each row below the top row how the two leftmost entries $t(p)$ and $t(q)$ determine $t(\neg p), t(p \vee q), t(p \wedge q), t(p \rightarrow q)$ and $t(p \leftrightarrow q)$. | $p$ | $q$ | $\neg p$ | $p \vee q$ | $p \wedge q$ | $p \rightarrow q$ | $p \leftrightarrow q$ | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | 0 | 0 | 1 | 0 | 0 | 1 | 1 | | 0 | 1 | 1 | 1 | 0 | 1 | 0 | | 1 | 0 | 0 | 1 | 0 | 0 | 0 | | 1 | 1 | 0 | 1 | 1 | 1 | 1 | Let $t: A \rightarrow\{0,1\}$. Note that $t(p \rightarrow q)=1$ if and only if $t(p) \leq t(q)$, and that $t(p \leftrightarrow q)=1$ if and only if $t(p)=t(q)$.
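Clauses (i)-(iii) of the definition of $\hat{t}$ translate directly into a recursive evaluator. In this sketch a proposition is a nested tuple such as `('∧', ('¬', 'a'), 'b')`; this encoding, and the helper `implies`, are conveniences introduced here, not notation from the text:

```python
def evaluate(p, t):
    """Extend the truth assignment t (a dict on atoms) to a proposition,
    following clauses (i)-(iii) of the definition of t-hat."""
    if isinstance(p, str):
        return {'⊤': 1, '⊥': 0}.get(p, t.get(p))       # clause (i), or an atom
    if p[0] == '¬':
        return 1 - evaluate(p[1], t)                    # clause (ii)
    if p[0] == '∨':
        return max(evaluate(p[1], t), evaluate(p[2], t))  # clause (iii)
    if p[0] == '∧':
        return min(evaluate(p[1], t), evaluate(p[2], t))  # clause (iii)

def implies(p, q):          # p → q abbreviates ¬p ∨ q
    return ('∨', ('¬', p), q)

# Reproduce the row t(p)=1, t(q)=0 of the truth table:
t = {'p': 1, 'q': 0}
print(evaluate(('¬', 'p'), t))         # → 0
print(evaluate(('∨', 'p', 'q'), t))    # → 1
print(evaluate(('∧', 'p', 'q'), t))    # → 0
print(evaluate(implies('p', 'q'), t))  # → 0
```

Uniqueness of the extension shows up here as the fact that the recursion has exactly one way to proceed at each subterm.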
Suppose $a_{1}, \ldots, a_{n}$ are the distinct atoms that occur in $p$, and we know how $p$ is built up from those atoms. Then we can compute in a finite number of steps $t(p)$ from $t\left(a_{1}\right), \ldots, t\left(a_{n}\right)$. In particular, $t(p)=t^{\prime}(p)$ for any $t^{\prime}: A \rightarrow\{0,1\}$ such that $t\left(a_{i}\right)=t^{\prime}\left(a_{i}\right)$ for $i=1, \ldots, n$. Definition. We say that $p$ is a tautology (notation: $\models p$) if $t(p)=1$ for all $t: A \rightarrow\{0,1\}$. We say that $p$ is satisfiable if $t(p)=1$ for some $t: A \rightarrow\{0,1\}$. Thus $\top$ is a tautology, and $p \vee \neg p$, $p \rightarrow(p \vee q)$ are tautologies for all $p$ and $q$. By the remark preceding the definition one can verify whether any given $p$ with exactly $n$ distinct atoms in it is a tautology by computing $2^{n}$ numbers and checking that these numbers all come out 1. (To do this accurately by hand is already cumbersome for $n=5$, but computers can handle somewhat larger $n$. Fortunately, other methods are often efficient for special cases.) Remark. Note that $\models p \leftrightarrow q$ iff $t(p)=t(q)$ for all $t: A \rightarrow\{0,1\}$. We call $p$ equivalent to $q$ if $\models p \leftrightarrow q$. Note that "equivalent to" defines an equivalence relation on $\operatorname{Prop}(A)$. The lemma below gives a useful list of equivalences. We leave it to the reader to verify them. Lemma 2.1.2.
For all $p, q, r$ we have the following equivalences:

(1) $\models(p \vee p) \leftrightarrow p$, $\qquad \models(p \wedge p) \leftrightarrow p$,
(2) $\models(p \vee q) \leftrightarrow(q \vee p)$, $\qquad \models(p \wedge q) \leftrightarrow(q \wedge p)$,
(3) $\models(p \vee(q \vee r)) \leftrightarrow((p \vee q) \vee r)$, $\qquad \models(p \wedge(q \wedge r)) \leftrightarrow((p \wedge q) \wedge r)$,
(4) $\models(p \vee(q \wedge r)) \leftrightarrow(p \vee q) \wedge(p \vee r)$, $\qquad \models(p \wedge(q \vee r)) \leftrightarrow(p \wedge q) \vee(p \wedge r)$,
(5) $\models(p \vee(p \wedge q)) \leftrightarrow p$, $\qquad \models(p \wedge(p \vee q)) \leftrightarrow p$,
(6) $\models(\neg(p \vee q)) \leftrightarrow(\neg p \wedge \neg q)$, $\qquad \models(\neg(p \wedge q)) \leftrightarrow(\neg p \vee \neg q)$,
(7) $\models(p \vee \neg p) \leftrightarrow \top$, $\qquad \models(p \wedge \neg p) \leftrightarrow \perp$.

Items (1), (2), (3), (4), (5), and (6) are often referred to as the idempotent law, commutativity, associativity, distributivity, the absorption law, and the De Morgan law, respectively. Note the left-right symmetry in (1)-(7): the so-called duality of propositional logic. We shall return to this issue in the more algebraic setting of boolean algebras. Some notation: let $\left(p_{i}\right)_{i \in I}$ be a family of propositions with finite index set $I$, choose a bijection $k \mapsto i(k):\{1, \ldots, n\} \rightarrow I$ and set $$ \bigvee_{i \in I} p_{i}:=p_{i(1)} \vee \cdots \vee p_{i(n)}, \quad \bigwedge_{i \in I} p_{i}:=p_{i(1)} \wedge \cdots \wedge p_{i(n)} $$ If $I$ is clear from context we just write $\bigvee_{i} p_{i}$ and $\bigwedge_{i} p_{i}$ instead.
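The equivalences of Lemma 2.1.2 can be verified mechanically by the $2^{n}$-assignment method just described: $p$ and $q$ are equivalent iff they get the same truth value under every assignment to the atoms occurring in them. A brute-force sketch (the tuple encoding of propositions is an assumption made here for illustration):

```python
from itertools import product

def evaluate(p, t):
    if isinstance(p, str):
        return {'⊤': 1, '⊥': 0}.get(p, t.get(p))
    if p[0] == '¬':
        return 1 - evaluate(p[1], t)
    f = max if p[0] == '∨' else min          # ∨ is max, ∧ is min
    return f(evaluate(p[1], t), evaluate(p[2], t))

def atoms(p):
    """The set of atoms occurring in p."""
    if isinstance(p, str):
        return set() if p in '⊤⊥' else {p}
    return set().union(*(atoms(q) for q in p[1:]))

def equivalent(p, q):
    """p and q are equivalent iff t(p) = t(q) for all 2^n assignments
    to the finitely many atoms occurring in p or q."""
    av = sorted(atoms(p) | atoms(q))
    return all(evaluate(p, dict(zip(av, bits))) == evaluate(q, dict(zip(av, bits)))
               for bits in product((0, 1), repeat=len(av)))

p, q, r = 'p', 'q', 'r'
# De Morgan, item (6): ¬(p ∨ q) is equivalent to ¬p ∧ ¬q
print(equivalent(('¬', ('∨', p, q)), ('∧', ('¬', p), ('¬', q))))          # → True
# Distributivity, item (4): p ∨ (q ∧ r) is equivalent to (p ∨ q) ∧ (p ∨ r)
print(equivalent(('∨', p, ('∧', q, r)), ('∧', ('∨', p, q), ('∨', p, r))))  # → True
```

The same routine settles each of items (1)-(7) with at most $2^{3}=8$ evaluations per item.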
Of course, the notations $\bigvee_{i \in I} p_{i}$ and $\bigwedge_{i \in I} p_{i}$ can only be used when the particular choice of bijection of $\{1, \ldots, n\}$ with $I$ does not matter; this is usually the case, because the equivalence class of $p_{i(1)} \vee \cdots \vee p_{i(n)}$ does not depend on this choice, and the same is true for the equivalence class of $p_{i(1)} \wedge \cdots \wedge p_{i(n)}$. Next we define "model of $\Sigma$" and "tautological consequence of $\Sigma$". Definition. Let $\Sigma \subseteq \operatorname{Prop}(A)$. By a model of $\Sigma$ we mean a truth assignment $t: A \rightarrow\{0,1\}$ such that $t(p)=1$ for all $p \in \Sigma$. We say that a proposition $p$ is a tautological consequence of $\Sigma$ (written $\Sigma \models p$) if $t(p)=1$ for every model $t$ of $\Sigma$. Note that $\models p$ is the same as $\emptyset \models p$. Lemma 2.1.3. Let $\Sigma \subseteq \operatorname{Prop}(A)$ and $p, q \in \operatorname{Prop}(A)$. Then (1) $\Sigma \models p \wedge q \Longleftrightarrow \Sigma \models p$ and $\Sigma \models q$, (2) $\Sigma \models p \Longrightarrow \Sigma \models p \vee q$, (3) $\Sigma \cup\{p\} \models q \Longleftrightarrow \Sigma \models p \rightarrow q$, (4) if $\Sigma \models p$ and $\Sigma \models p \rightarrow q$, then $\Sigma \models q$ (Modus Ponens). Proof. We will prove (3) here and leave the rest as an exercise. $(\Rightarrow)$ Assume $\Sigma \cup\{p\} \models q$. To derive $\Sigma \models p \rightarrow q$ we consider any model $t: A \longrightarrow\{0,1\}$ of $\Sigma$, and need only show that then $t(p \rightarrow q)=1$. If $t(p)=1$ then $t(\Sigma \cup\{p\}) \subseteq\{1\}$, hence $t(q)=1$ and thus $t(p \rightarrow q)=1$. If $t(p)=0$ then $t(p \rightarrow q)=1$ by definition. $(\Leftarrow)$ Assume $\Sigma \models p \rightarrow q$. To derive $\Sigma \cup\{p\} \models q$ we consider any model $t: A \longrightarrow\{0,1\}$ of $\Sigma \cup\{p\}$, and need only derive that $t(q)=1$.
By assumption $t(p \rightarrow q)=1$ and in view of $t(p)=1$, this gives $t(q)=1$ as required. We finish this section with the promised general result on unique readability. We also establish facts of similar nature that are needed later. Definition. Let $F$ be a set of symbols with a function $a: F \rightarrow \mathbf{N}$ (called the arity function). A symbol $f \in F$ is said to have arity $n$ if $a(f)=n$. A word on $F$ is said to be admissible if it can be obtained by applying the following rules: (i) If $f \in F$ has arity 0 , then $f$ viewed as a word of length 1 is admissible. (ii) If $f \in F$ has arity $m>0$ and $t_{1}, \ldots, t_{m}$ are admissible words on $F$, then the concatenation $f t_{1} \ldots t_{m}$ is admissible. Below we just write "admissible word" instead of "admissible word on $F$ ". Note that the empty word is not admissible, and that the last symbol of an admissible word cannot be of arity $>0$. Example. Take $F=A \cup\{\top, \perp, \neg, \vee, \wedge\}$ and define arity $: F \rightarrow \mathbf{N}$ by $$ \operatorname{arity}(x)=0 \text { for } x \in A \cup\{\top, \perp\}, \quad \operatorname{arity}(\neg)=1, \quad \operatorname{arity}(\vee)=\operatorname{arity}(\wedge)=2 . $$ Then the set of admissible words is just $\operatorname{Prop}(A)$. Lemma 2.1.4. Let $t_{1}, \ldots, t_{m}$ and $u_{1}, \ldots, u_{n}$ be admissible words and $w$ any word on $F$ such that $t_{1} \ldots t_{m} w=u_{1} \ldots u_{n}$. Then $m \leq n, t_{i}=u_{i}$ for $i=$ $1, \ldots, m$, and $w=u_{m+1} \cdots u_{n}$. Proof. By induction on the length of $u_{1} \ldots u_{n}$. If this length is 0 , then $m=n=$ 0 and $w$ is the empty word. Suppose the length is $>0$, and assume the lemma holds for smaller lengths. Note that $n>0$. If $m=0$, then the conclusion of the lemma holds, so suppose $m>0$. The first symbol of $t_{1}$ equals the first symbol of $u_{1}$. Say this first symbol is $h \in F$ with arity $k$. 
Then $t_{1}=h a_{1} \ldots a_{k}$ and $u_{1}=h b_{1} \ldots b_{k}$ where $a_{1}, \ldots, a_{k}$ and $b_{1}, \ldots, b_{k}$ are admissible words. Cancelling the first symbol $h$ gives $$ a_{1} \ldots a_{k} t_{2} \ldots t_{m} w=b_{1} \ldots b_{k} u_{2} \ldots u_{n} $$ (Caution: any of $k, m-1, n-1$ could be 0 .) We have length $\left(b_{1} \ldots b_{k} u_{2} \ldots u_{n}\right)=$ length $\left(u_{1} \ldots u_{n}\right)-1$, so the induction hypothesis applies. It yields $k+m-1 \leq$ $k+n-1($ so $m \leq n), a_{1}=b_{1}, \ldots, a_{k}=b_{k}\left(\right.$ so $\left.t_{1}=u_{1}\right), t_{2}=u_{2}, \ldots, t_{m}=u_{m}$, and $w=u_{m+1} \cdots u_{n}$. Here are two immediate consequences that we shall use: 1. Let $t_{1}, \ldots, t_{m}$ and $u_{1}, \ldots, u_{n}$ be admissible words such that $t_{1} \ldots t_{m}=$ $u_{1} \ldots u_{n}$. Then $m=n$ and $t_{i}=u_{i}$ for $i=1, \ldots, m$. 2. Let $t$ and $u$ be admissible words and $w$ a word on $F$ such that $t w=u$. Then $t=u$ and $w$ is the empty word. Lemma 2.1.5 (Unique Readability). Each admissible word equals $f t_{1} \ldots t_{m}$ for a unique tuple $\left(f, t_{1}, \ldots, t_{m}\right)$ where $f \in F$ has arity $m$ and $t_{1}, \ldots, t_{m}$ are admissible words. Proof. Suppose $f t_{1} \ldots t_{m}=g u_{1} \ldots u_{n}$ where $f, g \in F$ have arity $m$ and $n$ respectively, and $t_{1}, \ldots, t_{m}, u_{1}, \ldots, u_{n}$ are admissible words on $F$. We have to show that then $f=g, m=n$ and $t_{i}=u_{i}$ for $i=1, \ldots, m$. Observe first that $f=g$ since $f$ and $g$ are the first symbols of two equal words. After cancelling the first symbol of both words, the first consequence of the previous lemma leads to the desired conclusion. Given words $v, w \in F^{*}$ and $i \in\{1, \ldots$, length $(w)\}$, we say that $v$ occurs in $w$ at starting position $i$ if $w=w_{1} v w_{2}$ where $w_{1}, w_{2} \in F^{*}$ and $w_{1}$ has length $i-1$. 
(For example, if $f, g \in F$ are distinct, then the word $f g f$ has exactly two occurrences in the word $f g f g f$, one at starting position 1 , and the other at starting position 3 ; these two occurrences overlap, but such overlapping is impossible with admissible words, see exercise 5 at the end of this section.) Given $w=w_{1} v w_{2}$ as above, and given $v^{\prime} \in F^{*}$, the result of replacing $v$ in $w$ at starting position $i$ by $v^{\prime}$ is by definition the word $w_{1} v^{\prime} w_{2}$. Lemma 2.1.6. Let $w$ be an admissible word and $1 \leq i \leq \operatorname{length}(w)$. Then there is a unique admissible word that occurs in $w$ at starting position $i$. Proof. We prove existence by induction on length $(w)$. Uniqueness then follows from the fact stated just before Lemma 2.1.5. Clearly $w$ is an admissible word occurring in $w$ at starting position 1. Suppose $i>1$. Then we write $w=$ $f t_{1} \ldots t_{n}$ where $f \in F$ has arity $n>0$, and $t_{1}, \ldots, t_{n}$ are admissible words, and we take $j \in\{1, \ldots, n\}$ such that $$ 1+\operatorname{length}\left(t_{1}\right)+\cdots+\operatorname{length}\left(t_{j-1}\right)<i \leq 1+\operatorname{length}\left(t_{1}\right)+\cdots+\operatorname{length}\left(t_{j}\right) . $$ Now apply the inductive assumption to $t_{j}$. Remark. Let $w=f t_{1} \ldots t_{n}$ where $f \in F$ has arity $n>0$, and $t_{1}, \ldots, t_{n}$ are admissible words. Put $l_{j}:=1+\operatorname{length}\left(t_{1}\right)+\cdots+\operatorname{length}\left(t_{j}\right)$ for $j=0, \ldots, n$ (so $\left.l_{0}=1\right)$. Suppose $l_{j-1}<i \leq l_{j}, 1 \leq j \leq n$, and let $v$ be the admissible word that occurs in $w$ at starting position $i$. Then the proof of the last lemma shows that this occurrence is entirely inside $t_{j}$, that is, $i-1+\operatorname{length}(v) \leq l_{j}$. Corollary 2.1.7. Let $w$ be an admissible word and $1 \leq i \leq \operatorname{length}(w)$. 
Then the result of replacing the admissible word $v$ in $w$ at starting position $i$ by an admissible word $v^{\prime}$ is again an admissible word. This follows by a routine induction on length $(w)$, using the last remark. Exercises. In the exercises below, $A=\left\{a_{1}, \ldots, a_{n}\right\},|A|=n$. (1) (Disjunctive Normal Form) Each $p$ is equivalent to a disjunction $$ p_{1} \vee \cdots \vee p_{k} $$ where each disjunct $p_{i}$ is a conjunction $a_{1}^{\epsilon_{1}} \wedge \ldots \wedge a_{n}^{\epsilon_{n}}$ with all $\epsilon_{j} \in\{-1,1\}$ and where for an atom $a$ we put $a^{1}:=a$ and $a^{-1}:=\neg a$. (2) (Conjunctive Normal Form) Same as last problem, except that the signs $\vee$ and $\wedge$ are interchanged, as well as the words "disjunction" and "conjunction," and also the words "disjunct" and "conjunct." (3) To each $p$ associate the function $f_{p}:\{0,1\}^{A} \rightarrow\{0,1\}$ defined by $f_{p}(t)=t(p)$. (Think of a truth table for $p$ where the $2^{n}$ rows correspond to the $2^{n}$ truth assignments $t: A \rightarrow\{0,1\}$, and the column under $p$ records the values $t(p)$.) Then for every function $f:\{0,1\}^{A} \rightarrow\{0,1\}$ there is a $p$ such that $f=f_{p}$. (4) Let $\sim$ be the equivalence relation on $\operatorname{Prop}(A)$ given by $$ p \sim q: \Longleftrightarrow \models p \leftrightarrow q . $$ Then the quotient set $\operatorname{Prop}(A) / \sim$ is finite; determine its cardinality as a function of $n=|A|$. (5) Let $w$ be an admissible word and $1 \leq i<i^{\prime} \leq \operatorname{length}(w)$. Let $v$ and $v^{\prime}$ be the admissible words that occur at starting positions $i$ and $i^{\prime}$ respectively in $w$. Then these occurrences are either nonoverlapping, that is, $i-1+\operatorname{length}(v)<i^{\prime}$, or the occurrence of $v^{\prime}$ is entirely inside that of $v$, that is, $$ i^{\prime}-1+\operatorname{length}\left(v^{\prime}\right) \leq i-1+\operatorname{length}(v) . 
$$

### Completeness for Propositional Logic

In this section we introduce a proof system for propositional logic, state the completeness of this proof system, and then prove this completeness. As in the previous section we fix a set $A$ of atoms, and the conventions of that section remain in force. A propositional axiom is by definition a proposition that occurs in the list below, for some choice of $p, q, r$:

1. $\top$
2. $p \rightarrow(p \vee q) ; \quad p \rightarrow(q \vee p)$
3. $\neg p \rightarrow(\neg q \rightarrow \neg(p \vee q))$
4. $(p \wedge q) \rightarrow p ; \quad(p \wedge q) \rightarrow q$
5. $p \rightarrow(q \rightarrow(p \wedge q))$
6. $(p \rightarrow(q \rightarrow r)) \rightarrow((p \rightarrow q) \rightarrow(p \rightarrow r))$
7. $p \rightarrow(\neg p \rightarrow \perp)$
8. $(\neg p \rightarrow \perp) \rightarrow p$

Each of items 2-8 describes infinitely many propositional axioms. That is why we do not call these items axioms, but axiom schemes. For example, if $a, b \in A$, then $a \rightarrow(a \vee \perp)$ and $b \rightarrow(b \vee(\neg a \wedge \neg b))$ are distinct propositional axioms, and both instances of axiom scheme 2. It is easy to check that all propositional axioms are tautologies. Here is our single rule of inference for propositional logic: Modus Ponens (MP): from $p$ and $p \rightarrow q$, infer $q$. In the rest of this section $\Sigma$ denotes a set of propositions, that is, $\Sigma \subseteq \operatorname{Prop}(A)$. Definition. A formal proof, or just proof, of $p$ from $\Sigma$ is a sequence $p_{1}, \ldots, p_{n}$ with $n \geq 1$ and $p_{n}=p$, such that for $k=1, \ldots, n$: (i) either $p_{k} \in \Sigma$, (ii) or $p_{k}$ is a propositional axiom, (iii) or there are $i, j \in\{1, \ldots, k-1\}$ such that $p_{k}$ can be inferred from $p_{i}$ and $p_{j}$ by MP. If there exists a proof of $p$ from $\Sigma$, then we write $\Sigma \vdash p$, and say $\Sigma$ proves $p$.
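The claim that all propositional axioms are tautologies can be spot-checked by brute force: take one instance of each scheme with atoms $a, b, c$ and run through all $2^{3}$ truth assignments. In this sketch the tuple encoding of propositions and the abbreviation `imp` for $\rightarrow$ are illustrative assumptions:

```python
from itertools import product

def evaluate(p, t):
    if isinstance(p, str):
        return {'⊤': 1, '⊥': 0}.get(p, t.get(p))
    if p[0] == '¬':
        return 1 - evaluate(p[1], t)
    f = max if p[0] == '∨' else min
    return f(evaluate(p[1], t), evaluate(p[2], t))

def imp(p, q):                  # p → q abbreviates ¬p ∨ q
    return ('∨', ('¬', p), q)

def tautology(p, atoms=('a', 'b', 'c')):
    """Check p under all 2^3 assignments to the atoms a, b, c."""
    return all(evaluate(p, dict(zip(atoms, bits))) == 1
               for bits in product((0, 1), repeat=len(atoms)))

a, b, c = 'a', 'b', 'c'
schemes = [
    '⊤',                                                # 1
    imp(a, ('∨', a, b)),                                # 2
    imp(('¬', a), imp(('¬', b), ('¬', ('∨', a, b)))),   # 3
    imp(('∧', a, b), a),                                # 4
    imp(a, imp(b, ('∧', a, b))),                        # 5
    imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c))),  # 6
    imp(a, imp(('¬', a), '⊥')),                         # 7
    imp(imp(('¬', a), '⊥'), a),                         # 8
]
print(all(tautology(p) for p in schemes))  # → True
```

Combined with part (4) of Lemma 2.1.3, this is in essence the soundness argument of Proposition 2.2.2 below.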
For $\Sigma=\emptyset$ we also write $\vdash p$ instead of $\Sigma \vdash p$. Lemma 2.2.1. $\vdash p \rightarrow p$. Proof. The proposition $p \rightarrow((p \rightarrow p) \rightarrow p)$ is a propositional axiom by axiom scheme 2. By axiom scheme 6 , $$ \{p \rightarrow((p \rightarrow p) \rightarrow p)\} \rightarrow\{(p \rightarrow(p \rightarrow p)) \rightarrow(p \rightarrow p)\} $$ is a propositional axiom. Applying MP to these two axioms yields $$ \vdash(p \rightarrow(p \rightarrow p)) \rightarrow(p \rightarrow p) . $$ Since $p \rightarrow(p \rightarrow p)$ is also a propositional axiom by scheme 2 , we can apply MP again to obtain $\vdash p \rightarrow p$. The next result shows that our proof system is sound, to use a term that is often used in this connection. For the straightforward proof, use that propositional axioms are tautologies, and use part (4) of Lemma 2.1.3. Proposition 2.2.2. If $\Sigma \vdash p$, then $\Sigma \models p$. The converse is true but less obvious. In other words: Theorem 2.2.3 (Completeness - first form). $$ \Sigma \vdash p \Longleftrightarrow \Sigma \models p $$ There is some arbitrariness in our choice of axioms and rule, and thus in our notion of formal proof. This is in contrast to the definition of $\models$, which merely formalizes the underlying idea of propositional logic as stated in the introduction to the previous section. However, the equivalence of $\vdash$ and $\models$ (Completeness Theorem) means that our choice of axioms and rule gives a complete proof system. Moreover, this equivalence has consequences which can be stated in terms of $\models$ alone. An example is the Compactness Theorem: Theorem 2.2.4 (Compactness of Propositional Logic). If $\Sigma \models p$, then there is a finite subset $\Sigma_{0}$ of $\Sigma$ such that $\Sigma_{0} \models p$. It is convenient to prove first a variant of the Completeness Theorem. Definition. 
We say that $\Sigma$ is inconsistent if $\Sigma \vdash \perp$, and otherwise (that is, if $\Sigma \nvdash \perp$) we call $\Sigma$ consistent. Theorem 2.2.5 (Completeness - second form). $\Sigma$ is consistent if and only if $\Sigma$ has a model. From this second form of the Completeness Theorem we easily obtain an alternative form of the Compactness of Propositional Logic: Corollary 2.2.6. $\Sigma$ has a model $\Longleftrightarrow$ every finite subset of $\Sigma$ has a model. We first show that the second form of the Completeness Theorem implies the first form. For this we need a lemma that will also be useful later in the course. It says that "$\rightarrow$" indeed behaves as one might hope. Lemma 2.2.7 (Deduction Lemma). Suppose $\Sigma \cup\{p\} \vdash q$. Then $\Sigma \vdash p \rightarrow q$. Proof. By induction on (formal) proofs from $\Sigma \cup\{p\}$. If $q$ is a propositional axiom, then $\Sigma \vdash q$, and since $q \rightarrow(p \rightarrow q)$ is a propositional axiom, MP yields $\Sigma \vdash p \rightarrow q$. If $q \in \Sigma \cup\{p\}$, then either $q \in \Sigma$ in which case the same argument as before gives $\Sigma \vdash p \rightarrow q$, or $q=p$ and then $\Sigma \vdash p \rightarrow q$ since $\vdash p \rightarrow p$ by the lemma above. Now assume that $q$ is obtained by MP from $r$ and $r \rightarrow q$, where $\Sigma \cup\{p\} \vdash r$ and $\Sigma \cup\{p\} \vdash r \rightarrow q$ and where we assume inductively that $\Sigma \vdash p \rightarrow r$ and $\Sigma \vdash p \rightarrow(r \rightarrow q)$. Then we obtain $\Sigma \vdash p \rightarrow q$ from the propositional axiom $$ (p \rightarrow(r \rightarrow q)) \rightarrow((p \rightarrow r) \rightarrow(p \rightarrow q)) $$ by applying MP twice. Corollary 2.2.8. $\Sigma \vdash p$ if and only if $\Sigma \cup\{\neg p\}$ is inconsistent. Proof. $(\Rightarrow)$ Assume $\Sigma \vdash p$.
Since $p \rightarrow(\neg p \rightarrow \perp)$ is a propositional axiom, we can apply MP twice to get $\Sigma \cup\{\neg p\} \vdash \perp$. Hence $\Sigma \cup\{\neg p\}$ is inconsistent. $(\Leftarrow)$ Assume $\Sigma \cup\{\neg p\}$ is inconsistent. Then $\Sigma \cup\{\neg p\} \vdash \perp$, and so by the Deduction Lemma we have $\Sigma \vdash \neg p \rightarrow \perp$. Since $(\neg p \rightarrow \perp) \rightarrow p$ is a propositional axiom, MP yields $\Sigma \vdash p$. We leave the proof of the next result as an exercise. Corollary 2.2.9. If $\Sigma$ is consistent and $\Sigma \vdash p$, then $\Sigma \cup\{p\}$ is consistent. Corollary 2.2.10. The second form of Completeness (Theorem 2.2.5) implies the first form (Theorem 2.2.3). Proof. Assume the second form of Completeness holds, and that $\Sigma \models p$. We want to show that then $\Sigma \vdash p$. From $\Sigma \models p$ it follows that $\Sigma \cup\{\neg p\}$ has no model. Hence by the second form of Completeness, the set $\Sigma \cup\{\neg p\}$ is inconsistent. Then by Corollary 2.2.8 we have $\Sigma \vdash p$. Definition. We say that $\Sigma$ is complete if $\Sigma$ is consistent, and for each $p$ either $\Sigma \vdash p$ or $\Sigma \vdash \neg p$. Completeness as a property of a set of propositions should not be confused with the completeness of our proof system as expressed by the Completeness Theorem. (It is just a historical accident that we use the same word.) Below we use Zorn's Lemma to show that any consistent set of propositions can be extended to a complete set of propositions. Lemma 2.2.11 (Lindenbaum). Suppose $\Sigma$ is consistent. Then $\Sigma \subseteq \Sigma^{\prime}$ for some complete $\Sigma^{\prime} \subseteq \operatorname{Prop}(A)$. Proof. Let $P$ be the collection of all consistent subsets of $\operatorname{Prop}(A)$ that contain $\Sigma$. In particular $\Sigma \in P$. We consider $P$ as partially ordered by inclusion.
Any totally ordered subcollection $\left\{\Sigma_{i}: i \in I\right\}$ of $P$ with $I \neq \emptyset$ has an upper bound in $P$, namely $\bigcup\left\{\Sigma_{i}: i \in I\right\}$. (To see this it suffices to check that $\bigcup\left\{\Sigma_{i}: i \in I\right\}$ is consistent. Suppose otherwise, that is, suppose $\bigcup\left\{\Sigma_{i}: i \in I\right\} \vdash \perp$. Since a proof can use only finitely many of the axioms in $\bigcup\left\{\Sigma_{i}: i \in I\right\}$, there exists $i \in I$ such that $\Sigma_{i} \vdash \perp$, contradicting the consistency of $\Sigma_{i}$.) Thus by Zorn's lemma $P$ has a maximal element $\Sigma^{\prime}$. We claim that then $\Sigma^{\prime}$ is complete. For any $p$, if $\Sigma^{\prime} \nvdash p$, then by Corollary 2.2.8 the set $\Sigma^{\prime} \cup\{\neg p\}$ is consistent, hence $\neg p \in \Sigma^{\prime}$ by maximality of $\Sigma^{\prime}$, and thus $\Sigma^{\prime} \vdash \neg p$. Suppose $A$ is countable. For this case we can give a proof of Lindenbaum's Lemma without using Zorn's Lemma as follows. Proof. Because $A$ is countable, $\operatorname{Prop}(A)$ is countable. Take an enumeration $\left(p_{n}\right)_{n \in \mathbf{N}}$ of $\operatorname{Prop}(A)$. We construct an increasing sequence $\Sigma=\Sigma_{0} \subseteq \Sigma_{1} \subseteq \ldots$ of consistent subsets of $\operatorname{Prop}(A)$ as follows. Given a consistent $\Sigma_{n} \subseteq \operatorname{Prop}(A)$ we define $$ \Sigma_{n+1}= \begin{cases}\Sigma_{n} \cup\left\{p_{n}\right\} & \text { if } \Sigma_{n} \vdash p_{n}, \\ \Sigma_{n} \cup\left\{\neg p_{n}\right\} & \text { if } \Sigma_{n} \nvdash p_{n},\end{cases} $$ so $\Sigma_{n+1}$ remains consistent by Corollaries 2.2.8 and 2.2.9. Thus $$ \Sigma_{\infty}:=\bigcup\left\{\Sigma_{n}: n \in \mathbf{N}\right\} $$ is consistent and also complete: for any $n$ either $p_{n} \in \Sigma_{n+1} \subseteq \Sigma_{\infty}$ or $\neg p_{n} \in \Sigma_{n+1} \subseteq \Sigma_{\infty}$.
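A finite analogue of this countable construction can be run on a computer, with two simplifications that are assumptions of this sketch, not part of the proof: the enumeration is restricted to the atoms (over a finite set of atoms, deciding each atom already pins down a unique model), and the provability relation $\vdash$ is replaced by the computable semantic relation $\models$, which is legitimate once the Completeness Theorem equating the two is in hand:

```python
from itertools import product

def evaluate(p, t):
    if isinstance(p, str):
        return {'⊤': 1, '⊥': 0}.get(p, t.get(p))
    if p[0] == '¬':
        return 1 - evaluate(p[1], t)
    f = max if p[0] == '∨' else min
    return f(evaluate(p[1], t), evaluate(p[2], t))

def models(sigma, atom_list):
    """All truth assignments on atom_list that make every p in sigma true."""
    return [dict(zip(atom_list, bits))
            for bits in product((0, 1), repeat=len(atom_list))
            if all(evaluate(p, dict(zip(atom_list, bits))) == 1 for p in sigma)]

def entails(sigma, p, atom_list):      # Σ ⊨ p, standing in for Σ ⊢ p
    return all(evaluate(p, t) == 1 for t in models(sigma, atom_list))

def complete(sigma, atom_list):
    """Decide each atom in turn, as in the construction of Σ_0 ⊆ Σ_1 ⊆ ..."""
    s = list(sigma)
    for a in atom_list:
        s.append(a if entails(s, a, atom_list) else ('¬', a))
    return s

sigma = [('∨', 'a', 'b'), ('¬', 'a')]      # consistent: satisfied by a=0, b=1
print(complete(sigma, ['a', 'b'])[-2:])    # → [('¬', 'a'), 'b']
```

After the loop, each atom or its negation belongs to the extended set, so the extension has exactly one model, mirroring the role of $\Sigma_{\infty}$ in the proof.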
Define the truth assignment $t_{\Sigma}: A \rightarrow\{0,1\}$ by $$ t_{\Sigma}(a)=1 \text { if } \Sigma \vdash a, \text { and } t_{\Sigma}(a)=0 \text { otherwise. } $$ Lemma 2.2.12. Suppose $\Sigma$ is complete. Then for each $p$ we have $$ \Sigma \vdash p \Longleftrightarrow t_{\Sigma}(p)=1 $$ In particular, $t_{\Sigma}$ is a model of $\Sigma$. Proof. We proceed by induction on the length of $p$. If $p$ is an atom or $p=\top$ or $p=\perp$, then the equivalence follows immediately from the definitions. It remains to consider the three cases below. Case 1: $p=\neg q$, and (inductive assumption) $\Sigma \vdash q \Longleftrightarrow t_{\Sigma}(q)=1$. $(\Rightarrow)$ Suppose $\Sigma \vdash p$. Then $t_{\Sigma}(p)=1$ : Otherwise, $t_{\Sigma}(q)=1$, so $\Sigma \vdash q$ by the inductive assumption; since $q \rightarrow(p \rightarrow \perp)$ is a propositional axiom, we can apply MP twice to get $\Sigma \vdash \perp$, which contradicts the consistency of $\Sigma$. $(\Leftarrow)$ Suppose $t_{\Sigma}(p)=1$. Then $t_{\Sigma}(q)=0$, so $\Sigma \nvdash q$, and thus $\Sigma \vdash p$ by completeness of $\Sigma$. Case 2: $p=q \vee r, \Sigma \vdash q \Longleftrightarrow t_{\Sigma}(q)=1$, and $\Sigma \vdash r \Longleftrightarrow t_{\Sigma}(r)=1$. $(\Rightarrow)$ Suppose that $\Sigma \vdash p$. Then $t_{\Sigma}(p)=1$ : Otherwise, $t_{\Sigma}(p)=0$, so $t_{\Sigma}(q)=0$ and $t_{\Sigma}(r)=0$, hence $\Sigma \nvdash q$ and $\Sigma \nvdash r$, and thus $\Sigma \vdash \neg q$ and $\Sigma \vdash \neg r$ by completeness of $\Sigma$; since $\neg q \rightarrow(\neg r \rightarrow \neg p)$ is a propositional axiom, we can apply MP twice to get $\Sigma \vdash \neg p$, which in view of the propositional axiom $p \rightarrow(\neg p \rightarrow \perp)$ and MP yields $\Sigma \vdash \perp$, which contradicts the consistency of $\Sigma$. $(\Leftarrow)$ Suppose $t_{\Sigma}(p)=1$. Then $t_{\Sigma}(q)=1$ or $t_{\Sigma}(r)=1$. Hence $\Sigma \vdash q$ or $\Sigma \vdash r$. 
Using MP and the propositional axioms $q \rightarrow p$ and $r \rightarrow p$ we obtain $\Sigma \vdash p$. Case 3: $p=q \wedge r, \Sigma \vdash q \Longleftrightarrow t_{\Sigma}(q)=1$, and $\Sigma \vdash r \Longleftrightarrow t_{\Sigma}(r)=1$. We leave this case as an exercise. We can now finish the proof of Completeness (second form): Suppose $\Sigma$ is consistent. Then by Lindenbaum's Lemma $\Sigma$ is a subset of a complete set $\Sigma^{\prime}$ of propositions. By the previous lemma, such a $\Sigma^{\prime}$ has a model, and such a model is also a model of $\Sigma$. The converse - if $\Sigma$ has a model, then $\Sigma$ is consistent - is left to the reader. Application to coloring infinite graphs. What follows is a standard use of compactness of propositional logic, one of many. Let $(V, E)$ be a graph, by which we mean here that $V$ is a set (of vertices) and $E$ (the set of edges) is a binary relation on $V$ that is irreflexive and symmetric, that is, for all $v, w \in V$ we have $(v, v) \notin E$, and if $(v, w) \in E$, then $(w, v) \in E$. Let some $n \geq 1$ be given. Then an $n$-coloring of $(V, E)$ is a function $c: V \rightarrow\{1, \ldots, n\}$ such that $c(v) \neq c(w)$ for all $(v, w) \in E$ : neighboring vertices should have different colors. Suppose for every finite $V_{0} \subseteq V$ there is an $n$-coloring of $\left(V_{0}, E_{0}\right)$, where $E_{0}:=E \cap\left(V_{0} \times V_{0}\right)$. We claim that there exists an $n$-coloring of $(V, E)$. Proof. Take $A:=V \times\{1, \ldots, n\}$ as the set of atoms, and think of an atom $(v, i)$ as representing the statement that $v$ has color $i$. Thus for $(V, E)$ to have an $n$-coloring means that the following set $\Sigma \subseteq \operatorname{Prop}(A)$ has a model: $$ \begin{aligned} & \Sigma:=\{(v, 1) \vee \cdots \vee(v, n): v \in V\} \cup\{\neg((v, i) \wedge(v, j)): v \in V, 1 \leq i<j \leq n\} \\ & \cup\{\neg((v, i) \wedge(w, i)):(v, w) \in E, 1 \leq i \leq n\} . 
\end{aligned} $$ The assumption that all finite subgraphs of $(V, E)$ are $n$-colorable yields that every finite subset of $\Sigma$ has a model. Hence by compactness $\Sigma$ has a model.

## Exercises.

(1) Let $(P, \leq)$ be a poset. Then there is a linear order $\leq^{\prime}$ on $P$ that extends $\leq$. (Hint: use the compactness theorem and the fact that this is true when $P$ is finite.)
(2) Suppose $\Sigma \subseteq \operatorname{Prop}(A)$ is such that for each truth assignment $t: A \rightarrow\{0,1\}$ there is $p \in \Sigma$ with $t(p)=1$. Then there are $p_{1}, \ldots, p_{n} \in \Sigma$ such that $p_{1} \vee \cdots \vee p_{n}$ is a tautology. (The interesting case is when $A$ and $\Sigma$ are infinite.)

### Languages and Structures

Propositional Logic captures only one aspect of mathematical reasoning. We also need the capability to deal with predicates, variables, and the quantifiers "for all" and "there exists." We now begin setting up a framework for Predicate Logic (or First-Order Logic, FOL), which has these additional features and has a claim on being a complete logic for mathematical reasoning. This claim will be formulated later in this chapter as the Completeness Theorem and proved in the next chapter.

Definition. A language ${ }^{1} L$ is a disjoint union of:
(i) a set $L^{\mathrm{r}}$ of relation symbols; each $R \in L^{\mathrm{r}}$ has associated arity $a(R) \in \mathbf{N}$;
(ii) a set $L^{\mathrm{f}}$ of function symbols; each $F \in L^{\mathrm{f}}$ has associated arity $a(F) \in \mathbf{N}$.

An $m$-ary relation or function symbol is one that has arity $m$. Instead of "0-ary", "1-ary", "2-ary" we say "nullary", "unary", "binary". A constant symbol is a function symbol of arity 0. In most cases the symbols of a language will be nullary, unary, or binary, but for good theoretical reasons we do not wish to exclude higher arities.

## Examples.
(1) The language $L_{\mathrm{Gr}}=\left\{1,{ }^{-1}, \cdot\right\}$ of groups has constant symbol 1, unary function symbol ${ }^{-1}$, and binary function symbol $\cdot$.
(2) The language $L_{\mathrm{Ab}}=\{0,-,+\}$ of (additive) abelian groups has constant symbol 0, unary function symbol $-$, and binary function symbol $+$.
(3) The language $L_{\mathrm{O}}=\{<\}$ has just one binary relation symbol $<$.
(4) The language $L_{\mathrm{OAb}}=\{<, 0,-,+\}$ of ordered abelian groups.
(5) The language $L_{\mathrm{Rig}}=\{0,1,+, \cdot\}$ of rigs (or semirings) has constant symbols 0 and 1, and binary function symbols $+$ and $\cdot$.
(6) The language $L_{\mathrm{Ring}}=\{0,1,-,+, \cdot\}$ of rings. The symbols are those of the previous example, plus the unary function symbol $-$.

${ }^{1}$ What we call here a language is also known as a signature, or a vocabulary.

From now on, let $L$ denote a language.

Definition. A structure $\mathcal{A}$ for $L$ (or $L$-structure) is a triple $$ \left(A ;\left(R^{\mathcal{A}}\right)_{R \in L^{\mathrm{r}}},\left(F^{\mathcal{A}}\right)_{F \in L^{\mathrm{f}}}\right) $$ consisting of:
(i) a nonempty set $A$, the underlying set of $\mathcal{A} ;{ }^{2}$
(ii) for each $m$-ary $R \in L^{\mathrm{r}}$ a set $R^{\mathcal{A}} \subseteq A^{m}$ (an $m$-ary relation on $A$ ), the interpretation of $R$ in $\mathcal{A}$;
(iii) for each $n$-ary $F \in L^{\mathrm{f}}$ an operation $F^{\mathcal{A}}: A^{n} \longrightarrow A$ (an $n$-ary operation on $A$ ), the interpretation of $F$ in $\mathcal{A}$.

Remark. The interpretation of a constant symbol $c$ of $L$ is a function $$ c^{\mathcal{A}}: A^{0} \longrightarrow A $$ Since $A^{0}$ has just one element, $c^{\mathcal{A}}$ is uniquely determined by its value at this element; we shall identify $c^{\mathcal{A}}$ with this value, so $c^{\mathcal{A}} \in A$.
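As a purely illustrative sketch of this definition, an $L$-structure can be modeled as a plain container of interpretations. The names below (`Structure`, `rels`, `funcs`) are our own, not from the text, and the underlying set of $\mathbf{Z}$ is truncated to a finite window only so the relation $<$ can be stored as a finite set of pairs.

```python
# Illustrative only: an L-structure as (underlying set, relation
# interpretations, function interpretations).
class Structure:
    def __init__(self, universe, rels, funcs):
        self.universe = universe   # nonempty underlying set A
        self.rels = rels           # R -> set of m-tuples, i.e. R^A ⊆ A^m
        self.funcs = funcs         # F -> n-ary Python function, i.e. F^A

# A finite window of (Z; <, 0, -, +) viewed as an L_OAb-structure.
window = range(-10, 11)
Z = Structure(
    universe=set(window),
    rels={"<": {(a, b) for a in window for b in window if a < b}},
    funcs={
        "0": lambda: 0,            # constant symbol = nullary function
        "-": lambda a: -a,
        "+": lambda a, b: a + b,
    },
)

# The Remark above: the interpretation of a constant symbol is identified
# with its single value, here Z.funcs["0"]() == 0.
assert Z.funcs["0"]() == 0
assert Z.funcs["+"](2, 3) == 5
assert (2, 5) in Z.rels["<"] and (5, 2) not in Z.rels["<"]
```

Note how the constant symbol $0$ is literally a nullary function whose unique value is the element $0^{\mathcal{A}} \in A$, matching the Remark.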
Given an $L$-structure $\mathcal{A}$, the relations $R^{\mathcal{A}}$ on $A$ (for $R \in L^{\mathrm{r}}$ ), and operations $F^{\mathcal{A}}$ on $A$ (for $F \in L^{\mathrm{f}}$ ) are called the primitives of $\mathcal{A}$. When $\mathcal{A}$ is clear from context we often omit the superscript $\mathcal{A}$ in denoting the interpretation of a symbol of $L$ in $\mathcal{A}$. The reader is supposed to keep in mind the distinction between symbols of $L$ and their interpretation in an $L$-structure, even if we use the same notation for both.

## Examples.

(1) Each group is considered as an $L_{\mathrm{Gr}}$-structure by interpreting the symbols $1,{ }^{-1}$, and $\cdot$ as the identity element of the group, its group inverse, and its group multiplication, respectively.
(2) Let $\mathcal{A}=(A ; 0,-,+)$ be an abelian group; here $0 \in A$ is the zero element of the group, and $-: A \rightarrow A$ and $+: A^{2} \rightarrow A$ denote the group operations of $\mathcal{A}$. We consider $\mathcal{A}$ as an $L_{\mathrm{Ab}}$-structure by taking as interpretations of the symbols $0,-$ and $+$ of $L_{\mathrm{Ab}}$ the group operations $0,-$ and $+$ on $A$. (We took here the liberty of using the same notation for possibly entirely different things: $+$ is an element of the set $L_{\mathrm{Ab}}$, but also denotes in this context its interpretation as a binary operation on the set $A$. Similarly with 0 and $-$.) In fact, any set $A$ in which we single out an element, a unary operation on $A$, and a binary operation on $A$, can be construed as an $L_{\mathrm{Ab}}$-structure if we choose to do so.
(3) $(\mathbf{N} ;<)$ is an $L_{\mathrm{O}}$-structure where we interpret $<$ as the usual ordering relation on $\mathbf{N}$. Similarly for $(\mathbf{Z} ;<),(\mathbf{Q} ;<)$ and $(\mathbf{R} ;<)$.

${ }^{2}$ It is also called the universe of $\mathcal{A}$; we prefer less grandiose terminology.

(Here we take
even more notational liberties, by letting $<$ denote five different things: a symbol of $L_{\mathrm{O}}$, and the usual orderings of $\mathbf{N}, \mathbf{Z}, \mathbf{Q}$, and $\mathbf{R}$ respectively.) Again, any nonempty set $A$ equipped with a binary relation on it can be viewed as an $L_{\mathrm{O}}$-structure.
(4) $(\mathbf{Z} ;<, 0,-,+)$ and $(\mathbf{Q} ;<, 0,-,+)$ are both $L_{\mathrm{OAb}}$-structures.
(5) $(\mathbf{N} ; 0,1,+, \cdot)$ is an $L_{\mathrm{Rig}}$-structure.
(6) $(\mathbf{Z} ; 0,1,-,+, \cdot)$ is an $L_{\mathrm{Ring}}$-structure.

Let $\mathcal{B}$ be an $L$-structure with underlying set $B$, and let $A$ be a nonempty subset of $B$ such that $F^{\mathcal{B}}\left(A^{n}\right) \subseteq A$ for every $n$ and $n$-ary $F \in L^{\mathrm{f}}$. Then $A$ is the underlying set of an $L$-structure $\mathcal{A}$ defined by letting $$ \begin{aligned} & F^{\mathcal{A}}:=\left.F^{\mathcal{B}}\right|_{A^{n}}: A^{n} \rightarrow A, \quad \text { for } n \text {-ary } F \in L^{\mathrm{f}}, \\ & R^{\mathcal{A}}:=R^{\mathcal{B}} \cap A^{m} \quad \text { for } m \text {-ary } R \in L^{\mathrm{r}} \text {. } \end{aligned} $$

Definition. Such an $L$-structure $\mathcal{A}$ is said to be a substructure of $\mathcal{B}$, notation: $\mathcal{A} \subseteq \mathcal{B}$. We also say in this case that $\mathcal{B}$ is an extension of $\mathcal{A}$, or extends $\mathcal{A}$.

## Examples.

(1) $(\mathbf{Z} ; 0,1,-,+, \cdot) \subseteq(\mathbf{Q} ; 0,1,-,+, \cdot) \subseteq(\mathbf{R} ; 0,1,-,+, \cdot)$
(2) $(\mathbf{N} ;<, 0,1,+, \cdot) \subseteq(\mathbf{Z} ;<, 0,1,+, \cdot)$

Definition. Let $\mathcal{A}=(A ; \ldots)$ and $\mathcal{B}=(B ; \ldots)$ be $L$-structures.
A homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ is a map $h: A \rightarrow B$ such that
(i) for each $m$-ary $R \in L^{\mathrm{r}}$ and each $\left(a_{1}, \ldots, a_{m}\right) \in A^{m}$ we have $$ \left(a_{1}, \ldots, a_{m}\right) \in R^{\mathcal{A}} \Longrightarrow\left(h a_{1}, \ldots, h a_{m}\right) \in R^{\mathcal{B}} $$
(ii) for each $n$-ary $F \in L^{\mathrm{f}}$ and each $\left(a_{1}, \ldots, a_{n}\right) \in A^{n}$ we have $$ h\left(F^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right)\right)=F^{\mathcal{B}}\left(h a_{1}, \ldots, h a_{n}\right) . $$

Replacing $\Longrightarrow$ in (i) by $\Longleftrightarrow$ yields the notion of a strong homomorphism. An embedding is an injective strong homomorphism; an isomorphism is a bijective strong homomorphism. An automorphism of $\mathcal{A}$ is an isomorphism $\mathcal{A} \rightarrow \mathcal{A}$. If $\mathcal{A} \subseteq \mathcal{B}$, then the inclusion $a \mapsto a: A \rightarrow B$ is an embedding $\mathcal{A} \rightarrow \mathcal{B}$. Conversely, a homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ yields a substructure $h(\mathcal{A})$ of $\mathcal{B}$ with underlying set $h(A)$, and if $h$ is an embedding we have an isomorphism $$ a \mapsto h(a): \mathcal{A} \rightarrow h(\mathcal{A}) $$ If $i: \mathcal{A} \rightarrow \mathcal{B}$ and $j: \mathcal{B} \rightarrow \mathcal{C}$ are homomorphisms (strong homomorphisms, embeddings, isomorphisms, respectively), then so is $j \circ i: \mathcal{A} \rightarrow \mathcal{C}$. The identity map $1_{A}$ on $A$ is an automorphism of $\mathcal{A}$. If $i: \mathcal{A} \rightarrow \mathcal{B}$ is an isomorphism, then so is the map $i^{-1}: \mathcal{B} \rightarrow \mathcal{A}$. Thus the automorphisms of $\mathcal{A}$ form a group $\operatorname{Aut}(\mathcal{A})$ under composition with identity $1_{A}$.

## Examples.

1. Let $\mathcal{A}=(\mathbf{Z} ; 0,-,+)$. Then $k \mapsto-k$ is an automorphism of $\mathcal{A}$.
2. Let $\mathcal{A}=(\mathbf{Z} ;<)$.
The map $k \mapsto k+1$ is an automorphism of $\mathcal{A}$ with inverse given by $k \longmapsto k-1$. If $\mathcal{A}$ and $\mathcal{B}$ are groups (viewed as structures for the language $L_{\mathrm{Gr}}$ ), then a homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ is exactly what in algebra is called a homomorphism from the group $\mathcal{A}$ to the group $\mathcal{B}$. Likewise with rings, and other kinds of algebraic structures. A congruence on the $L$-structure $\mathcal{A}$ is an equivalence relation $\sim$ on its underlying set $A$ such that (i) if $R \in L^{\mathrm{r}}$ is $m$-ary and $a_{1} \sim b_{1}, \ldots, a_{m} \sim b_{m}$, then $$ \left(a_{1}, \ldots, a_{m}\right) \in R^{\mathcal{A}} \Longleftrightarrow\left(b_{1}, \ldots, b_{m}\right) \in R^{\mathcal{A}} $$ (ii) if $F \in L^{\mathrm{f}}$ is $n$-ary and $a_{1} \sim b_{1}, \ldots, a_{n} \sim b_{n}$, then $$ F^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right) \sim F^{\mathcal{A}}\left(b_{1}, \ldots, b_{n}\right) . $$ Note that a strong homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ yields a congruence $\sim_{h}$ on $\mathcal{A}$ as follows: for $a_{1}, a_{2} \in A$ we put $$ a_{1} \sim_{h} a_{2} \Longleftrightarrow h\left(a_{1}\right)=h\left(a_{2}\right) . $$ Given a congruence $\sim$ on the $L$-structure $\mathcal{A}$ we obtain an $L$-structure $\mathcal{A} / \sim$ (the quotient of $\mathcal{A}$ by $\sim$ ) as follows: (i) the underlying set of $\mathcal{A} / \sim$ is the quotient set $A / \sim$; (ii) the interpretation of an $m$-ary $R \in L^{\mathrm{r}}$ in $\mathcal{A} / \sim$ is the $m$-ary relation $$ \left\{\left(a_{1}^{\sim}, \ldots, a_{m}^{\sim}\right):\left(a_{1}, \ldots, a_{m}\right) \in R^{\mathcal{A}}\right\} $$ on $A / \sim$; (iii) the interpretation of an $n$-ary $F \in L^{\mathrm{f}}$ in $\mathcal{A} / \sim$ is the $n$-ary operation $$ \left(a_{1}^{\sim}, \ldots, a_{n}^{\sim}\right) \mapsto F^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right)^{\sim} $$ on $A / \sim$. 
Note that then we have a strong homomorphism $a \mapsto a^{\sim}: \mathcal{A} \rightarrow \mathcal{A} / \sim$.

Products. To combine many structures into a single one we form products. Let $\left(\mathcal{B}_{i}\right)_{i \in I}$ be a family of $L$-structures, $\mathcal{B}_{i}=\left(B_{i} ; \ldots\right)$ for $i \in I$. The product $$ \prod_{i \in I} \mathcal{B}_{i} $$ is defined to be the $L$-structure $\mathcal{B}$ whose underlying set is the product set $\prod_{i \in I} B_{i}$, and where the basic relations and functions are defined coordinatewise: for $m$-ary $R \in L^{\mathrm{r}}$ and elements $b_{1}=\left(b_{1 i}\right), \ldots, b_{m}=\left(b_{m i}\right) \in \prod_{i \in I} B_{i}$, $$ \left(b_{1}, \ldots, b_{m}\right) \in R^{\mathcal{B}} \Longleftrightarrow\left(b_{1 i}, \ldots, b_{m i}\right) \in R^{\mathcal{B}_{i}} \text { for all } i \in I, $$ and for $n$-ary $F \in L^{\mathrm{f}}$ and $b_{1}=\left(b_{1 i}\right), \ldots, b_{n}=\left(b_{n i}\right) \in \prod_{i \in I} B_{i}$, $$ F^{\mathcal{B}}\left(b_{1}, \ldots, b_{n}\right):=\left(F^{\mathcal{B}_{i}}\left(b_{1 i}, \ldots, b_{n i}\right)\right)_{i \in I} . $$ For $j \in I$ the projection map to the $j$ th factor is the homomorphism $$ \prod_{i \in I} \mathcal{B}_{i} \rightarrow \mathcal{B}_{j}, \quad\left(b_{i}\right) \mapsto b_{j} . $$ Using products we can combine several homomorphisms with a common domain into a single one: if for each $i \in I$ we have a homomorphism $h_{i}: \mathcal{A} \rightarrow \mathcal{B}_{i}$ we obtain a homomorphism $$ h=\left(h_{i}\right): \mathcal{A} \rightarrow \prod_{i \in I} \mathcal{B}_{i}, \quad h(a):=\left(h_{i}(a)\right)_{i \in I} . $$

## Exercises.

For (1) below, recall that a normal subgroup of a group $G$ is a subgroup $N$ of $G$ such that $a x a^{-1} \in N$ for all $a \in G$ and $x \in N$.
(1) Let $G$ be a group viewed as a structure for the language of groups.
Each normal subgroup $N$ of $G$ yields a congruence $\equiv_{N}$ on $G$ by $$ a \equiv_{N} b \Longleftrightarrow a N=b N $$ and each congruence on $G$ equals $\equiv_{N}$ for a unique normal subgroup $N$ of $G$. (2) Consider a strong homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ of $L$-structures. Then we have an isomorphism from $\mathcal{A} / \sim_{h}$ onto $h(\mathcal{A})$ given by $a^{\sim_{h}} \mapsto h(a)$. ### Variables and Terms Throughout this course $$ \operatorname{Var}=\left\{\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots\right\} $$ is a countably infinite set of symbols whose elements will be called variables; we assume that $\mathrm{v}_{m} \neq \mathrm{v}_{n}$ for $m \neq n$, and that no variable is a function or relation symbol in any language. We let $x, y, z$ (sometimes with subscripts or superscripts) denote variables, unless indicated otherwise. Remark. Chapters 2-4 go through if we take as our set Var of variables any infinite (possibly uncountable) set; in model theory this can even be convenient. For this more general Var we still insist that no variable is a function or relation symbol in any language. In the few cases in chapters $2-4$ that this more general set-up requires changes in proofs, this will be pointed out. The results in Chapter 5 on undecidability presuppose a numbering of the variables; our Var $=\left\{\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots\right\}$ comes equipped with such a numbering. Definition. An $L$-term is a word on the alphabet $L^{\mathrm{f}} \cup$ Var obtained as follows: (i) each variable (viewed as a word of length 1 ) is an $L$-term; (ii) whenever $F \in L^{\mathrm{f}}$ is $n$-ary and $t_{1}, \ldots, t_{n}$ are $L$-terms, then the concatenation $F t_{1} \ldots t_{n}$ is an $L$-term. Note: constant symbols of $L$ are $L$-terms of length 1 , by clause (ii) for $n=0$. The $L$-terms are the admissible words on the alphabet $L^{\mathrm{f}} \cup$ Var where each variable has arity 0 . 
Thus "unique readability" is available. We often write $t\left(x_{1}, \ldots, x_{n}\right)$ to indicate an $L$-term $t$ in which no variables other than $x_{1}, \ldots, x_{n}$ occur. Whenever we use this notation we assume tacitly that $x_{1}, \ldots, x_{n}$ are distinct. Note that we do not require that each of $x_{1}, \ldots, x_{n}$ actually occurs in $t\left(x_{1}, \ldots, x_{n}\right)$. (This is like indicating a polynomial in the indeterminates $x_{1}, \ldots, x_{n}$ by $p\left(x_{1}, \ldots, x_{n}\right)$, where one allows that some of these indeterminates do not actually occur in the polynomial $p$.)

If a term is written as an admissible word, then it may be hard to see how it is built up from subterms. In practice we shall therefore use parentheses and brackets in denoting terms, and avoid prefix notation if tradition dictates otherwise.

Example. The word $\cdot+x-y z$ is an $L_{\mathrm{Ring}}$-term. For easier reading we indicate this term instead by $(x+(-y)) \cdot z$ or even $(x-y) z$.

Definition. Let $\mathcal{A}$ be an $L$-structure and $t=t(\vec{x})$ be an $L$-term where $\vec{x}=\left(x_{1}, \ldots, x_{m}\right)$. Then we associate to the ordered pair $(t, \vec{x})$ a function $t^{\mathcal{A}}: A^{m} \rightarrow A$ as follows:
(i) If $t$ is the variable $x_{i}$, then $t^{\mathcal{A}}(a)=a_{i}$ for $a=\left(a_{1}, \ldots, a_{m}\right) \in A^{m}$.
(ii) If $t=F t_{1} \ldots t_{n}$ where $F \in L^{\mathrm{f}}$ is $n$-ary and $t_{1}, \ldots, t_{n}$ are $L$-terms, then $t^{\mathcal{A}}(a)=F^{\mathcal{A}}\left(t_{1}^{\mathcal{A}}(a), \ldots, t_{n}^{\mathcal{A}}(a)\right)$ for $a \in A^{m}$.

This inductive definition is justified by unique readability. Note that if $\mathcal{B}$ is a second $L$-structure and $\mathcal{A} \subseteq \mathcal{B}$, then $t^{\mathcal{A}}(a)=t^{\mathcal{B}}(a)$ for $t$ as above and $a \in A^{m}$.

Example. Consider $\mathbf{R}$ as a ring in the usual way, and let $t(x, y, z)$ be the $L_{\mathrm{Ring}}$-term $(x-y) z$.
Then the function $t^{\mathbf{R}}: \mathbf{R}^{3} \rightarrow \mathbf{R}$ is given by $t^{\mathbf{R}}(a, b, c)=(a-b) c$.

A term is said to be variable-free if no variables occur in it. Let $t$ be a variable-free $L$-term and $\mathcal{A}$ an $L$-structure. Then the above gives a nullary function $t^{\mathcal{A}}: A^{0} \rightarrow A$, identified as usual with its value at the unique element of $A^{0}$, so $t^{\mathcal{A}} \in A$. In other words, if $t$ is a constant symbol $c$, then $t^{\mathcal{A}}=c^{\mathcal{A}} \in A$, where $c^{\mathcal{A}}$ is as in the previous section, and if $t=F t_{1} \ldots t_{n}$ with $n$-ary $F \in L^{\mathrm{f}}$ and variable-free $L$-terms $t_{1}, \ldots, t_{n}$, then $t^{\mathcal{A}}=F^{\mathcal{A}}\left(t_{1}^{\mathcal{A}}, \ldots, t_{n}^{\mathcal{A}}\right)$.

Definition. Let $t$ be an $L$-term, let $x_{1}, \ldots, x_{n}$ be distinct variables, and let $\tau_{1}, \ldots, \tau_{n}$ be $L$-terms. Then $t\left(\tau_{1} / x_{1}, \ldots, \tau_{n} / x_{n}\right)$ is the word obtained by replacing all occurrences of $x_{i}$ in $t$ by $\tau_{i}$, simultaneously for $i=1, \ldots, n$. If $t$ is given in the form $t\left(x_{1}, \ldots, x_{n}\right)$, then we write $t\left(\tau_{1}, \ldots, \tau_{n}\right)$ as a shorthand for $t\left(\tau_{1} / x_{1}, \ldots, \tau_{n} / x_{n}\right)$.

The easy proof of the next lemma is left to the reader.

Lemma 2.4.1. Suppose $t$ is an L-term, $x_{1}, \ldots, x_{n}$ are distinct variables, and $\tau_{1}, \ldots, \tau_{n}$ are L-terms. Then $t\left(\tau_{1} / x_{1}, \ldots, \tau_{n} / x_{n}\right)$ is an L-term. If $\tau_{1}, \ldots, \tau_{n}$ are variable-free and $t=t\left(x_{1}, \ldots, x_{n}\right)$, then $t\left(\tau_{1}, \ldots, \tau_{n}\right)$ is variable-free.

We urge the reader to do exercise (1) below and thus acquire the confidence that these formal term substitutions do correspond to actual function substitutions.
In the definition of $t\left(\tau_{1} / x_{1}, \ldots, \tau_{n} / x_{n}\right)$ the "replacing" should be simultaneous, because it can happen that for $t^{\prime}:=t\left(\tau_{1} / x_{1}\right)$ we have $t^{\prime}\left(\tau_{2} / x_{2}\right) \neq t\left(\tau_{1} / x_{1}, \tau_{2} / x_{2}\right)$. (Here $t, \tau_{1}, \tau_{2}$ are $L$-terms and $x_{1}, x_{2}$ are distinct variables.) Generators. Let $\mathcal{B}$ be an $L$-structure, let $G \subseteq B$, and assume also that $L$ has a constant symbol or that $G \neq \emptyset$. Then the set $$ \left\{t^{\mathcal{B}}\left(g_{1}, \ldots, g_{m}\right): t\left(x_{1}, \ldots, x_{m}\right) \text { is an } L \text {-term and } g_{1}, \ldots, g_{m} \in G\right\} \subseteq B $$ is the underlying set of some $\mathcal{A} \subseteq \mathcal{B}$, and this $\mathcal{A}$ is clearly a substructure of any $\mathcal{A}^{\prime} \subseteq \mathcal{B}$ with $G \subseteq A^{\prime}$. We call this $\mathcal{A}$ the substructure of $\mathcal{B}$ generated by $G$; if $\mathcal{A}=\mathcal{B}$, then we say that $\mathcal{B}$ is generated by $G$. If $\left(a_{i}\right)_{i \in I}$ is a family of elements of $B$, then "generated by $\left(a_{i}\right)$ " means "generated by $G$ " where $G=\left\{a_{i}: i \in I\right\}$. ## Exercises. (1) Let $t\left(x_{1}, \ldots, x_{m}\right)$ and $\tau_{1}\left(y_{1}, \ldots, y_{n}\right), \ldots, \tau_{m}\left(y_{1}, \ldots, y_{n}\right)$ be $L$-terms. 
Then the $L$ term $t^{*}\left(y_{1}, \ldots, y_{n}\right):=t\left(\tau_{1}\left(y_{1}, \ldots, y_{n}\right), \ldots, \tau_{m}\left(y_{1}, \ldots, y_{n}\right)\right)$ has the property that if $\mathcal{A}$ is an $L$-structure and $a=\left(a_{1}, \ldots, a_{n}\right) \in A^{n}$, then $$ \left(t^{*}\right)^{\mathcal{A}}(a)=t^{\mathcal{A}}\left(\tau_{1}^{\mathcal{A}}(a), \ldots, \tau_{m}^{\mathcal{A}}(a)\right) $$ (2) For every $L_{\mathrm{Ab}}$-term $t\left(x_{1}, \ldots, x_{n}\right)$ there are integers $k_{1}, \ldots, k_{n}$ such that for every abelian $\operatorname{group} \mathcal{A}=(A ; 0,-,+)$, $$ t^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right)=k_{1} a_{1}+\cdots+k_{n} a_{n}, \quad \text { for all }\left(a_{1}, \ldots, a_{n}\right) \in A^{n} . $$ Conversely, for any integers $k_{1}, \ldots, k_{n}$ there is an $L_{\mathrm{Ab}}$-term $t\left(x_{1}, \ldots, x_{n}\right)$ such that in every abelian group $\mathcal{A}=(A ; 0,-,+)$ the above displayed identity holds. (3) For every $L_{\mathrm{Ring}}$-term $t\left(x_{1}, \ldots, x_{n}\right)$ there is a polynomial $$ P\left(x_{1}, \ldots, x_{n}\right) \in \mathbf{Z}\left[x_{1}, \ldots, x_{n}\right] $$ such that for every commutative $\operatorname{ring} \mathcal{R}=(R ; 0,1,-,+, \cdot)$, $$ t^{\mathcal{R}}\left(r_{1}, \ldots, r_{n}\right)=P\left(r_{1}, \ldots, r_{n}\right), \quad \text { for all }\left(r_{1}, \ldots, r_{n}\right) \in R^{n} . $$ Conversely, for any polynomial $P\left(x_{1}, \ldots, x_{n}\right) \in \mathbf{Z}\left[x_{1}, \ldots, x_{n}\right]$ there is an $L_{\mathrm{Ring}}-$ term $t\left(x_{1}, \ldots, x_{n}\right)$ such that in every commutative ring $\mathcal{R}=(R ; 0,1,-,+, \cdot)$ the above displayed identity holds. (4) Let $\mathcal{A}$ and $\mathcal{B}$ be $L$-structures, $h: \mathcal{A} \rightarrow \mathcal{B}$ a homomorphism, and $t=t\left(x_{1}, \ldots, x_{n}\right)$ an $L$-term. 
Then $$ h\left(t^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right)\right)=t^{\mathcal{B}}\left(h a_{1}, \ldots, h a_{n}\right), \quad \text { for all }\left(a_{1}, \ldots, a_{n}\right) \in A^{n} . $$ (If $\mathcal{A} \subseteq \mathcal{B}$ and $h: \mathcal{A} \rightarrow \mathcal{B}$ is the inclusion, this gives $t^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right)=t^{\mathcal{B}}\left(a_{1}, \ldots, a_{n}\right)$ for all $\left(a_{1}, \ldots, a_{n}\right) \in A^{n}$.)
(5) Consider the $L$-structure $\mathcal{N}=(\mathbf{N} ; 0,1,+, \cdot)$ where $L=L_{\mathrm{Rig}}$.
(a) Is there an $L$-term $t(x)$ such that $t^{\mathcal{N}}(0)=1$ and $t^{\mathcal{N}}(1)=0$ ?
(b) Is there an $L$-term $t(x)$ such that $t^{\mathcal{N}}(n)=2^{n}$ for all $n \in \mathbf{N}$ ?
(c) Find all the substructures of $\mathcal{N}$.

### Formulas and Sentences

Besides variables we also introduce the eight distinct logical symbols $$ \top \quad \perp \quad \neg \quad \vee \quad \wedge \quad=\quad \exists \quad \forall $$ The first five of these we already met when discussing propositional logic. None of these eight symbols is a variable, or a function or relation symbol of any language. Below $L$ denotes a language. To distinguish the logical symbols from those in $L$, the latter are often referred to as the non-logical symbols.

Definition. The atomic L-formulas are the following words on the alphabet $L \cup \operatorname{Var} \cup\{\top, \perp,=\}$ :
(i) $\top$ and $\perp$,
(ii) $R t_{1} \ldots t_{m}$, where $R \in L^{\mathrm{r}}$ is $m$-ary and $t_{1}, \ldots, t_{m}$ are $L$-terms,
(iii) $=t_{1} t_{2}$, where $t_{1}$ and $t_{2}$ are $L$-terms.
The $L$-formulas are the words on the larger alphabet $$ L \cup \operatorname{Var} \cup\{\top, \perp, \neg, \vee, \wedge,=, \exists, \forall\} $$ obtained as follows:
(i) every atomic $L$-formula is an $L$-formula;
(ii) if $\varphi, \psi$ are $L$-formulas, then so are $\neg \varphi, \vee \varphi \psi$, and $\wedge \varphi \psi$;
(iii) if $\varphi$ is an $L$-formula and $x$ is a variable, then $\exists x \varphi$ and $\forall x \varphi$ are $L$-formulas.

Note that all $L$-formulas are admissible words on the alphabet $$ L \cup \operatorname{Var} \cup\{\top, \perp, \neg, \vee, \wedge,=, \exists, \forall\} $$ where $=, \exists$ and $\forall$ are given arity 2 and the other symbols have the arities assigned to them earlier. This fact makes the results on unique readability applicable to $L$-formulas. (However, not all admissible words on this alphabet are $L$-formulas: the word $\exists x x$ is admissible but not an $L$-formula.)

The notational conventions introduced in the section on propositional logic go through, with the role of propositions there taken over by formulas here. (For example, given $L$-formulas $\varphi$ and $\psi$ we shall write $\varphi \vee \psi$ to indicate $\vee \varphi \psi$, and $\varphi \rightarrow \psi$ to indicate $\neg \varphi \vee \psi$.) Here is a notational convention specific to predicate logic: given distinct variables $x_{1}, \ldots, x_{n}$ and an $L$-formula $\varphi$ we let $\exists x_{1} \ldots x_{n} \varphi$ and $\forall x_{1} \ldots x_{n} \varphi$ abbreviate $\exists x_{1} \ldots \exists x_{n} \varphi$ and $\forall x_{1} \ldots \forall x_{n} \varphi$, respectively. Thus if $x, y, z$ are distinct variables, then $\exists x y z \varphi$ stands for $\exists x \exists y \exists z \varphi$.

The reader should distinguish between different ways of using the symbol $=$. Sometimes it denotes one of the eight formal logical symbols, but we also use it to indicate equality of mathematical objects in the way we have done already many times.
The context should always make it clear what our intention is in this respect without having to spell it out. To increase readability we usually write an atomic formula $=t_{1} t_{2}$ as $t_{1}=t_{2}$ and its negation $\neg=t_{1} t_{2}$ as $t_{1} \neq t_{2}$, where $t_{1}, t_{2}$ are $L$-terms. The logical symbol $=$ is treated just as a binary relation symbol, but its interpretation in a structure will always be the equality relation on its underlying set. This will become clear later. Definition. Let $\varphi$ be a formula of $L$. Written as a word on the alphabet above we have $\varphi=s_{1} \ldots s_{m}$. A subformula of $\varphi$ is a subword of the form $s_{i} \ldots s_{k}$ where $1 \leq i \leq k \leq m$ which also happens to be a formula of $L$. An occurrence of a variable $x$ in $\varphi$ at the $j$-th place (that is, $s_{j}=x$ ) is said to be a bound occurrence if $\varphi$ has a subformula $s_{i} s_{i+1} \ldots s_{k}$ with $i \leq j \leq k$ that is of the form $\exists x \psi$ or $\forall x \psi$. If an occurrence is not bound, then it is said to be a free occurrence. At this point the reader is invited to do the first exercise at the end of this section, which gives another useful characterization of subformulas. Example. In the formula $(\exists x(x=y)) \wedge x=0$, where $x$ and $y$ are distinct, the first two occurrences of $x$ are bound, the third is free, and the only occurrence of $y$ is free. (Note: the formula is actually the string $\wedge \exists x=x y=x 0$, and the occurrences of $x$ and $y$ are really the occurrences in this string.) Definition. A sentence is a formula in which all occurrences of variables are bound occurrences. We let $\varphi\left(x_{1}, \ldots, x_{n}\right)$ indicate a formula $\varphi$ such that all variables that occur free in $\varphi$ are among $x_{1}, \ldots, x_{n}$. 
In using this notation it is understood that $x_{1}, \ldots, x_{n}$ are distinct variables, but it is not required that each of $x_{1}, \ldots, x_{n}$ occurs free in $\varphi$. (This is analogous to indicating a polynomial equation in the indeterminates $x_{1}, \ldots, x_{n}$ by $p\left(x_{1}, \ldots, x_{n}\right)=0$, where one allows that some of these indeterminates do not actually occur in $p$.) Definition. Let $\varphi$ be an $L$-formula, let $x_{1}, \ldots, x_{n}$ be distinct variables, and let $t_{1}, \ldots, t_{n}$ be $L$-terms. Then $\varphi\left(t_{1} / x_{1}, \ldots, t_{n} / x_{n}\right)$ is the word obtained by replacing all the free occurrences of $x_{i}$ in $\varphi$ by $t_{i}$, simultaneously for $i=1, \ldots, n$. If $\varphi$ is given in the form $\varphi\left(x_{1}, \ldots, x_{n}\right)$, then we write $\varphi\left(t_{1}, \ldots, t_{n}\right)$ as a shorthand for $\varphi\left(t_{1} / x_{1}, \ldots, t_{n} / x_{n}\right)$. We have the following lemma whose routine proof is left to the reader. Lemma 2.5.1. Suppose $\varphi$ is an L-formula, $x_{1}, \ldots, x_{n}$ are distinct variables, and $t_{1}, \ldots, t_{n}$ are L-terms. Then $\varphi\left(t_{1} / x_{1}, \ldots, t_{n} / x_{n}\right)$ is an L-formula. If $t_{1}, \ldots, t_{n}$ are variable-free and $\varphi=\varphi\left(x_{1}, \ldots, x_{n}\right)$, then $\varphi\left(t_{1}, \ldots, t_{n}\right)$ is an $L$ sentence. In the definition of $\varphi\left(t_{1} / x_{1}, \ldots, t_{n} / x_{n}\right)$ the "replacing" should be simultaneous, because it can happen that $\varphi\left(t_{1} / x_{1}\right)\left(t_{2} / x_{2}\right) \neq \varphi\left(t_{1} / x_{1}, t_{2} / x_{2}\right)$. Let $\mathcal{A}$ be an $L$-structure with underlying set $A$, and let $C \subseteq A$. We extend $L$ to a language $L_{C}$ by adding a constant symbol $\underline{c}$ for each $c \in C$, called the name of $c$. These names are symbols not in $L$. 
We make $\mathcal{A}$ into an $L_{C}$-structure by keeping the same underlying set and interpretations of symbols of $L$, and by interpreting each name $\underline{c}$ as the element $c \in C$. The $L_{C}$-structure thus obtained is indicated by $\mathcal{A}_{C}$. Hence for each variable-free $L_{C}$-term $t$ we have a corresponding element $t^{\mathcal{A}_{C}}$ of $A$, which for simplicity of notation we denote instead by $t^{\mathcal{A}}$. All this applies in particular to the case $C=A$, where in $L_{A}$ we have a name $\underline{a}$ for each $a \in A$. Definition. We can now define what it means for an $L_{A}$-sentence $\sigma$ to be true in the L-structure $\mathcal{A}$ (notation: $\mathcal{A} \models \sigma$, also read as $\mathcal{A}$ satisfies $\sigma$ or $\sigma$ holds in $\mathcal{A}$, or $\sigma$ is valid in $\mathcal{A}$ ). First we consider atomic $L_{A}$-sentences: (i) $\mathcal{A} \models \top$, and $\mathcal{A} \not \models \perp$; (ii) $\mathcal{A} \models R t_{1} \ldots t_{m}$ if and only if $\left(t_{1}^{\mathcal{A}}, \ldots, t_{m}^{\mathcal{A}}\right) \in R^{\mathcal{A}}$, for $m$-ary $R \in L^{\mathrm{r}}$, and variable-free $L_{A}$-terms $t_{1}, \ldots, t_{m}$; (iii) $\mathcal{A} \models t_{1}=t_{2}$ if and only if $t_{1}^{\mathcal{A}}=t_{2}^{\mathcal{A}}$, for variable-free $L_{A}$-terms $t_{1}, t_{2}$. We extend the definition inductively to arbitrary $L_{A}$-sentences as follows: (i) $\sigma=\neg \sigma_{1}$ : then $\mathcal{A} \models \sigma$ if and only if $\mathcal{A} \not \models \sigma_{1}$. (ii) $\sigma=\sigma_{1} \vee \sigma_{2}$ : then $\mathcal{A} \models \sigma$ if and only if $\mathcal{A} \models \sigma_{1}$ or $\mathcal{A} \models \sigma_{2}$. (iii) $\sigma=\sigma_{1} \wedge \sigma_{2}$ : then $\mathcal{A} \models \sigma$ if and only if $\mathcal{A} \models \sigma_{1}$ and $\mathcal{A} \models \sigma_{2}$. 
(iv) $\sigma=\exists x \varphi(x)$ : then $\mathcal{A} \models \sigma$ if and only if $\mathcal{A} \models \varphi(\underline{a})$ for some $a \in A$.
(v) $\sigma=\forall x \varphi(x)$ : then $\mathcal{A} \models \sigma$ if and only if $\mathcal{A} \models \varphi(\underline{a})$ for all $a \in A$.

Even if we just want to define $\mathcal{A} \models \sigma$ for $L$-sentences $\sigma$, one can see that if $\sigma$ has the form $\exists x \varphi(x)$ or $\forall x \varphi(x)$, the inductive definition above forces us to consider $L_{A}$-sentences $\varphi(\underline{a})$. This is why we introduced names. We didn't say so explicitly, but "inductive" refers here to induction with respect to the number of logical symbols in $\sigma$. For example, the fact that $\varphi(\underline{a})$ has fewer logical symbols than $\exists x \varphi(x)$ is crucial for the above to count as a definition. Also unique readability is involved: without it we would not allow clauses (ii) and (iii) as part of our inductive definition.

It is easy to check that for an $L_{A}$-sentence $\sigma=\exists x_{1} \ldots x_{n} \varphi\left(x_{1}, \ldots, x_{n}\right)$, $$ \mathcal{A} \models \sigma \Longleftrightarrow \mathcal{A} \models \varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{n}\right) \text { for some }\left(a_{1}, \ldots, a_{n}\right) \in A^{n}, $$ and that for an $L_{A}$-sentence $\sigma=\forall x_{1} \ldots x_{n} \varphi\left(x_{1}, \ldots, x_{n}\right)$, $$ \mathcal{A} \models \sigma \Longleftrightarrow \mathcal{A} \models \varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{n}\right) \text { for all }\left(a_{1}, \ldots, a_{n}\right) \in A^{n} . $$ Definition.
Given an $L_{A}$-formula $\varphi\left(x_{1}, \ldots, x_{n}\right)$ we let $\varphi^{\mathcal{A}}$ be the following subset of $A^{n}$ : $$ \varphi^{\mathcal{A}}=\left\{\left(a_{1}, \ldots, a_{n}\right): \mathcal{A}=\varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{n}\right)\right\} $$ The formula $\varphi\left(x_{1}, \ldots, x_{n}\right)$ is said to define the set $\varphi^{\mathcal{A}}$ in $\mathcal{A}$. A set $S \subseteq A^{n}$ is said to be definable in $\mathcal{A}$ if $S=\varphi^{\mathcal{A}}$ for some $L_{A}$-formula $\varphi\left(x_{1}, \ldots, x_{n}\right)$. If moreover $\varphi$ can be chosen to be an $L$-formula, then $S$ is said to be 0 -definable in $\mathcal{A}$. ## Examples. (1) The set $\{r \in \mathbf{R}: r<\sqrt{2}\}$ is 0 -definable in $(\mathbf{R} ;<, 0,1,+,-, \cdot)$ : it is defined by the formula $\left(x^{2}<1+1\right) \vee(x<0)$. (Here $x^{2}$ abbreviates the term $x \cdot x$.) (2) The set $\{r \in \mathbf{R}: r<\pi\}$ is definable in $(\mathbf{R} ;<, 0,1,+,-, \cdot)$ : it is defined by the formula $x<\underline{\pi}$. To show that a set $X \subseteq A$ is not 0-definable in $\mathcal{A}$, one can sometimes use automorphisms of $\mathcal{A}$; see the exercises below. We call a map $f: X \rightarrow A^{n}$ with $X \subseteq A^{m}$ definable in $\mathcal{A}$ if its graph as a subset of $A^{m+n}$ is definable in $\mathcal{A}$; note that then its domain $X$ is definable in $\mathcal{A}$. We now single out formulas by certain syntactical conditions. These conditions have semantic counterparts in terms of the behaviour of these formulas under various kinds of homomorphisms, as shown in some exercises below. (These exercises also show that isomorphic $L$-structures satisfy exactly the same $L$ sentences.) An $L$-formula is said to be quantifier-free if it has no occurrences of $\exists$ and no occurrences of $\forall$. 
An $L$-formula is said to be existential if it has the form $\exists x_{1} \ldots x_{m} \varphi$ with distinct $x_{1}, \ldots, x_{m}$ and a quantifier-free $L$-formula $\varphi$. An $L$-formula is said to be universal if it has the form $\forall x_{1} \ldots x_{m} \varphi$ with distinct $x_{1}, \ldots, x_{m}$ and a quantifier-free $L$-formula $\varphi$. An $L$-formula is said to be positive if it has no occurrences of $\neg$ (but it can have occurrences of $\perp$ ). ## Exercises. (1) Let $\varphi$ and $\psi$ be $L$-formulas; put $\operatorname{sf}(\varphi):=$ set of subformulas of $\varphi$. (a) If $\varphi$ is atomic, then $\operatorname{sf}(\varphi)=\{\varphi\}$. (b) $\operatorname{sf}(\neg \varphi)=\{\neg \varphi\} \cup \operatorname{sf}(\varphi)$. (c) $\operatorname{sf}(\varphi \vee \psi)=\{\varphi \vee \psi\} \cup \operatorname{sf}(\varphi) \cup \operatorname{sf}(\psi)$, and $\operatorname{sf}(\varphi \wedge \psi)=\{\varphi \wedge \psi\} \cup \operatorname{sf}(\varphi) \cup \operatorname{sf}(\psi)$. (d) $\operatorname{sf}(\exists x \varphi)=\{\exists x \varphi\} \cup \operatorname{sf}(\varphi)$, and $\operatorname{sf}(\forall x \varphi)=\{\forall x \varphi\} \cup \operatorname{sf}(\varphi)$. (2) Let $\varphi$ and $\psi$ be $L$-formulas, $x, y$ variables, and $t$ an $L$-term. (a) $(\neg \varphi)(t / x)=\neg(\varphi(t / x))$. (b) $(\varphi \vee \psi)(t / x)=\varphi(t / x) \vee \psi(t / x)$, and $(\varphi \wedge \psi)(t / x)=\varphi(t / x) \wedge \psi(t / x)$. (c) $(\exists y \varphi)(t / x)=\exists y(\varphi(t / x))$ if $x$ and $y$ are different, and $(\exists y \varphi)(t / x)=\exists y \varphi$ if $x$ and $y$ are the same; likewise with $\forall y \varphi$. (3) If $t\left(x_{1}, \ldots, x_{n}\right)$ is an $L_{A}$-term and $a_{1}, \ldots, a_{n} \in A$, then $$ t\left(\underline{a}_{1}, \ldots, \underline{a}_{n}\right)^{\mathcal{A}}=t^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right) . 
$$ (4) Suppose that $S_{1} \subseteq A^{n}$ and $S_{2} \subseteq A^{n}$ are defined in $\mathcal{A}$ by the $L_{A}$-formulas $\varphi_{1}\left(x_{1}, \ldots, x_{n}\right)$ and $\varphi_{2}\left(x_{1}, \ldots, x_{n}\right)$ respectively. Then: (a) $S_{1} \cup S_{2}$ is defined in $\mathcal{A}$ by $\left(\varphi_{1} \vee \varphi_{2}\right)\left(x_{1}, \ldots, x_{n}\right)$. (b) $S_{1} \cap S_{2}$ is defined in $\mathcal{A}$ by $\left(\varphi_{1} \wedge \varphi_{2}\right)\left(x_{1}, \ldots, x_{n}\right)$. (c) $A^{n} \backslash S_{1}$ is defined in $\mathcal{A}$ by $\neg \varphi_{1}\left(x_{1}, \ldots, x_{n}\right)$. (d) $S_{1} \subseteq S_{2} \Longleftrightarrow \mathcal{A} \models \forall x_{1} \ldots x_{n}\left(\varphi_{1} \rightarrow \varphi_{2}\right)$. (5) Let $\pi: A^{m+n} \rightarrow A^{m}$ be the projection map given by $$ \pi\left(a_{1}, \ldots, a_{m+n}\right)=\left(a_{1}, \ldots, a_{m}\right), $$ and for $S \subseteq A^{m+n}$ and $a \in A^{m}$, put $$ S(a):=\left\{b \in A^{n}:(a, b) \in S\right\} \quad \text{(a section of } S\text{)}. $$ Suppose that $S \subseteq A^{m+n}$ is defined in $\mathcal{A}$ by the $L_{A}$-formula $\varphi(x, y)$ where $x=\left(x_{1}, \ldots, x_{m}\right)$ and $y=\left(y_{1}, \ldots, y_{n}\right)$. Then $\exists y_{1} \ldots y_{n} \varphi(x, y)$ defines in $\mathcal{A}$ the subset $\pi(S)$ of $A^{m}$, and $\forall y_{1} \ldots y_{n} \varphi(x, y)$ defines in $\mathcal{A}$ the set $$ \left\{a \in A^{m}: S(a)=A^{n}\right\} . $$ (6) The following sets are 0-definable in the corresponding structures: (a) The ordering relation $\left\{(m, n) \in \mathbf{N}^{2}: m<n\right\}$ in $(\mathbf{N} ; 0,+)$. (b) The set $\{2,3,5,7, \ldots\}$ of prime numbers in the semiring $\mathcal{N}=(\mathbf{N} ; 0,1,+, \cdot)$. (c) The set $\left\{2^{n}: n \in \mathbf{N}\right\}$ in the semiring $\mathcal{N}$.
(d) The set $\{a \in \mathbf{R}: f$ is continuous at $a\}$ in $(\mathbf{R} ;<, f)$ where $f: \mathbf{R} \rightarrow \mathbf{R}$ is any function. (7) Let the symbols of $L$ be a binary relation symbol $<$ and a unary relation symbol $U$. Then there is an $L$-sentence $\sigma$ such that for all $X \subseteq \mathbf{R}$ we have $$ (\mathbf{R} ;<, X) \models \sigma \Longleftrightarrow X \text { is finite. } $$ (8) Let $\mathcal{A} \subseteq \mathcal{B}$. Then we consider $L_{A}$ to be a sublanguage of $L_{B}$ in such a way that each $a \in A$ has the same name in $L_{A}$ as in $L_{B}$. This convention is in force throughout these notes. (a) For each variable-free $L_{A}$-term $t$ we have $t^{\mathcal{A}}=t^{\mathcal{B}}$. (b) If the $L_{A}$-sentence $\sigma$ is quantifier-free, then $\mathcal{A} \models \sigma \Leftrightarrow \mathcal{B} \models \sigma$. (c) If $\sigma$ is an existential $L_{A}$-sentence, then $\mathcal{A} \models \sigma \Rightarrow \mathcal{B} \models \sigma$. (d) If $\sigma$ is a universal $L_{A}$-sentence, then $\mathcal{B} \models \sigma \Rightarrow \mathcal{A} \models \sigma$. (9) Suppose $h: \mathcal{A} \longrightarrow \mathcal{B}$ is a homomorphism of $L$-structures. For each $L_{A}$-term $t$, let $t_{h}$ be the $L_{B}$-term obtained from $t$ by replacing each occurrence of a name $\underline{a}$ of an element $a \in A$ by the name $\underline{h a}$ of the corresponding element $h a \in B$. Similarly, for each $L_{A}$-formula $\varphi$, let $\varphi_{h}$ be the $L_{B}$-formula obtained from $\varphi$ by replacing each occurrence of a name $\underline{a}$ of an element $a \in A$ by the name $\underline{h a}$ of the corresponding element $h a \in B$. Note that if $\varphi$ is a sentence, so is $\varphi_{h}$.
Then: (a) if $t$ is a variable-free $L_{A}$-term, then $h\left(t^{\mathcal{A}}\right)=t_{h}^{\mathcal{B}}$; (b) if $\sigma$ is a positive $L_{A}$-sentence without $\forall$-symbol, then $\mathcal{A} \models \sigma \Rightarrow \mathcal{B} \models \sigma_{h}$; (c) if $\sigma$ is a positive $L_{A}$-sentence and $h$ is surjective, then $\mathcal{A} \models \sigma \Rightarrow \mathcal{B} \models \sigma_{h}$; (d) if $\sigma$ is an $L_{A}$-sentence and $h$ is an isomorphism, then $\mathcal{A} \models \sigma \Leftrightarrow \mathcal{B} \models \sigma_{h}$. (10) If $f$ is an automorphism of $\mathcal{A}$ and $X \subseteq A$ is 0-definable in $\mathcal{A}$, then $f(X)=X$.

### Models

In the rest of this chapter $L$ is a language, $\mathcal{A}$ is an $L$-structure (with underlying set $A$ ), and, unless indicated otherwise, $t$ is an $L$-term, $\varphi, \psi$, and $\theta$ are $L$-formulas, $\sigma$ is an $L$-sentence, and $\Sigma$ is a set of $L$-sentences. We drop the prefix $L$ in " $L$-term" and " $L$-formula" and so on, unless this would cause confusion. Definition. We say that $\mathcal{A}$ is a model of $\Sigma$, or that $\Sigma$ holds in $\mathcal{A}$ (denoted $\mathcal{A} \models \Sigma$), if $\mathcal{A} \models \sigma$ for each $\sigma \in \Sigma$. To discuss examples it is convenient to introduce some notation. Suppose $L$ contains (at least) the constant symbol 0 and the binary function symbol + . Given any terms $t_{1}, \ldots, t_{n}$ we define the term $t_{1}+\cdots+t_{n}$ inductively as follows: it is the term 0 if $n=0$, the term $t_{1}$ if $n=1$, and the term $\left(t_{1}+\cdots+t_{n-1}\right)+t_{n}$ for $n>1$. We write $n t$ for the term $t+\cdots+t$ with $n$ summands, in particular, $0 t$ and $1 t$ denote the terms 0 and $t$ respectively. Suppose $L$ contains the constant symbol 1 and the binary function symbol · (the multiplication sign).
Then we have similar notational conventions for $t_{1} \cdot \ldots \cdot t_{n}$ and $t^{n}$; in particular, for $n=0$ both stand for the term 1, and $t^{1}$ is just $t$. Examples. Fix three distinct variables $x, y, z$. (1) Totally ordered sets are the $L_{\mathrm{O}}$-structures that are models of $$ \{\forall x(x \nless x), \forall x y z((x<y \wedge y<z) \rightarrow x<z), \forall x y(x<y \vee x=y \vee y<x)\} . $$ (2) Groups are the $L_{\mathrm{Gr}}$-structures that are models of $$ \begin{aligned} \mathrm{Gr}:=\{ & \forall x(x \cdot 1=x \wedge 1 \cdot x=x), \forall x\left(x \cdot x^{-1}=1 \wedge x^{-1} \cdot x=1\right), \\ & \forall x y z((x \cdot y) \cdot z=x \cdot(y \cdot z))\} \end{aligned} $$ (3) Abelian groups are the $L_{\mathrm{Ab}}$-structures that are models of $$ \begin{aligned} \mathrm{Ab}:=\{ & \forall x(x+0=x), \forall x(x+(-x)=0), \forall x y(x+y=y+x), \\ & \forall x y z((x+y)+z=x+(y+z))\} \end{aligned} $$ (4) Torsion-free abelian groups are the $L_{\mathrm{Ab}}$-structures that are models of $$ \mathrm{Ab} \cup\{\forall x(n x=0 \rightarrow x=0): n=1,2,3, \ldots\} $$ (5) Rings are the $L_{\mathrm{Ring}}$-structures that are models of $$ \begin{aligned} \mathrm{Ring}:=\mathrm{Ab} \cup\{ & \forall x y z((x \cdot y) \cdot z=x \cdot(y \cdot z)), \forall x(x \cdot 1=x \wedge 1 \cdot x=x), \\ & \forall x y z(x \cdot(y+z)=x \cdot y+x \cdot z \wedge(x+y) \cdot z=x \cdot z+y \cdot z)\} \end{aligned} $$ (6) Fields are the $L_{\mathrm{Ring}}$-structures that are models of $$ \mathrm{Fl}:=\mathrm{Ring} \cup\{\forall x \forall y(x \cdot y=y \cdot x), 1 \neq 0, \forall x(x \neq 0 \rightarrow \exists y(x \cdot y=1))\} $$ (7) Fields of characteristic 0 are the $L_{\mathrm{Ring}}$-structures that are models of $$ \mathrm{Fl}(0):=\mathrm{Fl} \cup\{n 1 \neq 0: n=2,3,5,7,11, \ldots\} $$ (8) Given a prime number $p$, fields of characteristic $p$ are the $L_{\mathrm{Ring}}$-structures that are models of $\mathrm{Fl}(p):=\mathrm{Fl} \cup\{p 1=0\}$.
(9) Algebraically closed fields are the $L_{\mathrm{Ring}}$-structures that are models of $$ \mathrm{ACF}:=\mathrm{Fl} \cup\left\{\forall u_{1} \ldots u_{n} \exists x\left(x^{n}+u_{1} x^{n-1}+\cdots+u_{n}=0\right): n=2,3,4,5, \ldots\right\} $$ Here $u_{1}, u_{2}, u_{3}, \ldots$ is some fixed infinite sequence of distinct variables, distinct also from $x$, and $u_{i} x^{n-i}$ abbreviates $u_{i} \cdot x^{n-i}$, for $i=1, \ldots, n$. (10) Algebraically closed fields of characteristic 0 are the $L_{\mathrm{Ring}}$-structures that are models of $\operatorname{ACF}(0):=\mathrm{ACF} \cup\{n 1 \neq 0: n=2,3,5,7,11, \ldots\}$. (11) Given a prime number $p$, algebraically closed fields of characteristic $p$ are the $L_{\mathrm{Ring}}$-structures that are models of $\operatorname{ACF}(p):=\mathrm{ACF} \cup\{p 1=0\}$. In Example (1) our use of the symbol $<$ rather than $\leq$ indicates that we take the strict version of a total order, as the sentences mentioned in (1) specify. This is a minor difference with how we defined totally ordered sets in Section 2.3, using the nonstrict version of an ordering, with $\leq$ as the primitive notion. Another minor difference is that in Section 2.3 we allowed the underlying set of a poset to be empty, but in (1) the underlying set of a totally ordered set is nonempty, since that is a general requirement for the structures considered in these notes. Definition. We say that $\sigma$ is a logical consequence of $\Sigma$ (written $\Sigma \models \sigma$ ) if $\sigma$ is true in every model of $\Sigma$. Example. It is well-known that in any ring $R$ we have $a \cdot 0=0$ for all $a \in R$. This can now be expressed as $\mathrm{Ring} \models \forall x(x \cdot 0=0)$. We defined what it means for a sentence $\sigma$ to hold in a given structure $\mathcal{A}$. We now extend this to arbitrary formulas.
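For a small finite structure, being a model of a finite axiom set such as Ab from Example (3) can be checked mechanically. A Python sketch, where the structure $\mathbf{Z}/6\mathbf{Z}$ and the helper names are illustrative assumptions:

```python
# Brute-force check that Z/6 is a model of the abelian-group axioms Ab of
# Example (3).  The structure and helper names are assumptions for
# illustration, not part of the text.

from itertools import product

n = 6
A = range(n)
add = lambda x, y: (x + y) % n   # interpretation of +
neg = lambda x: (-x) % n         # interpretation of -
mul = lambda x, y: (x * y) % n   # interpretation of . (for the Ring example)

ab_holds = (
    all(add(x, 0) == x for x in A)                           # x + 0 = x
    and all(add(x, neg(x)) == 0 for x in A)                  # x + (-x) = 0
    and all(add(x, y) == add(y, x) for x, y in product(A, A))
    and all(add(add(x, y), z) == add(x, add(y, z))
            for x, y, z in product(A, A, A))
)
print(ab_holds)  # -> True

# One model's worth of the logical consequence Ring |= forall x (x . 0 = 0):
print(all(mul(x, 0) == 0 for x in A))  # -> True
```

Note the asymmetry this makes visible: a computation like this verifies $\mathcal{A} \models \Sigma$ for a single finite structure, whereas $\mathrm{Ring} \models \forall x(x \cdot 0=0)$ is a claim about every ring, finite or infinite, which no such finite check can establish on its own.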
First define an $\mathcal{A}$-instance of a formula $\varphi=\varphi\left(x_{1}, \ldots, x_{m}\right)$ to be an $L_{A}$-sentence of the form $\varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)$ with $a_{1}, \ldots, a_{m} \in A$. Of course $\varphi$ can also be written as $\varphi\left(y_{1}, \ldots, y_{n}\right)$ for another sequence of variables $y_{1}, \ldots, y_{n}$; for example, $y_{1}, \ldots, y_{n}$ could be obtained by permuting $x_{1}, \ldots, x_{m}$, or it could be $x_{1}, \ldots, x_{m}, x_{m+1}$, obtained by adding a variable $x_{m+1}$. Thus for the above to count as a definition of " $\mathcal{A}$-instance," the reader should check that these different ways of specifying variables (including at least the variables occurring free in $\varphi$) give the same $\mathcal{A}$-instances. Definition. A formula $\varphi$ is said to be valid in $\mathcal{A}$ (notation: $\mathcal{A} \models \varphi$ ) if all its $\mathcal{A}$-instances are true in $\mathcal{A}$. The reader should check that if $\varphi=\varphi\left(x_{1}, \ldots, x_{m}\right)$, then $$ \mathcal{A} \models \varphi \Longleftrightarrow \mathcal{A} \models \forall x_{1} \ldots \forall x_{m} \varphi $$ We also extend the notion of "logical consequence of $\Sigma$ " to formulas (but $\Sigma$ continues to be a set of sentences). Definition. We say that $\varphi$ is a logical consequence of $\Sigma$ (notation: $\Sigma \models \varphi$ ) if $\mathcal{A} \models \varphi$ for all models $\mathcal{A}$ of $\Sigma$. One should not confuse the notion of "logical consequence of $\Sigma$ " with that of "provable from $\Sigma$." We shall give a definition of provable from $\Sigma$ in the next section. The two notions will turn out to be equivalent, but that is hardly obvious from their definitions: we shall need much of the next chapter to prove this equivalence, which is called the Completeness Theorem for Predicate Logic. We finish this section with two basic facts: Lemma 2.6.1.
Let $\alpha\left(x_{1}, \ldots, x_{m}\right)$ be an $L_{A}$-term, and recall that $\alpha$ defines a map $\alpha^{\mathcal{A}}: A^{m} \rightarrow A$. Let $t_{1}, \ldots, t_{m}$ be variable-free $L_{A}$-terms, with $t_{i}^{\mathcal{A}}=a_{i} \in A$ for $i=1, \ldots, m$. Then $\alpha\left(t_{1}, \ldots, t_{m}\right)$ is a variable-free $L_{A}$-term, and $$ \alpha\left(t_{1}, \ldots, t_{m}\right)^{\mathcal{A}}=\alpha\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}=\alpha^{\mathcal{A}}\left(t_{1}^{\mathcal{A}}, \ldots, t_{m}^{\mathcal{A}}\right) . $$ This follows by a straightforward induction on $\alpha$. Lemma 2.6.2. Let $t_{1}, \ldots, t_{m}$ be variable-free $L_{A}$-terms with $t_{i}^{\mathcal{A}}=a_{i} \in A$ for $i=1, \ldots, m$. Let $\varphi\left(x_{1}, \ldots, x_{m}\right)$ be an $L_{A}$-formula. Then the $L_{A}$-formula $\varphi\left(t_{1}, \ldots, t_{m}\right)$ is a sentence and $$ \mathcal{A} \models \varphi\left(t_{1}, \ldots, t_{m}\right) \Longleftrightarrow \mathcal{A} \models \varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right) . $$ Proof. To keep notations simple we give the proof only for $m=1$ with $t=t_{1}$ and $x=x_{1}$. We proceed by induction on the number of logical symbols in $\varphi(x)$. Suppose that $\varphi$ is atomic. The case where $\varphi$ is $\top$ or $\perp$ is obvious. Assume $\varphi$ is $R \alpha_{1} \ldots \alpha_{m}$ where $R \in L^{\mathrm{r}}$ is $m$-ary and $\alpha_{1}(x), \ldots, \alpha_{m}(x)$ are $L_{A}$-terms. Then $\varphi(t)=R \alpha_{1}(t) \ldots \alpha_{m}(t)$ and $\varphi(\underline{a})=R \alpha_{1}(\underline{a}) \ldots \alpha_{m}(\underline{a})$. We have $\mathcal{A} \models \varphi(t)$ iff $\left(\alpha_{1}(t)^{\mathcal{A}}, \ldots, \alpha_{m}(t)^{\mathcal{A}}\right) \in R^{\mathcal{A}}$, and also $\mathcal{A} \models \varphi(\underline{a})$ iff $\left(\alpha_{1}(\underline{a})^{\mathcal{A}}, \ldots, \alpha_{m}(\underline{a})^{\mathcal{A}}\right) \in R^{\mathcal{A}}$.
As $\alpha_{i}(t)^{\mathcal{A}}=\alpha_{i}(\underline{a})^{\mathcal{A}}$ for all $i$ by the previous lemma, we have $\mathcal{A} \models \varphi(t)$ iff $\mathcal{A} \models \varphi(\underline{a})$. The case that $\varphi(x)$ is $\alpha(x)=\beta(x)$ is handled the same way. It is also clear that the desired property is inherited by disjunctions, conjunctions and negations of formulas $\varphi(x)$ that have the property. Suppose now that $\varphi(x)=\exists y \psi$. Case $y \neq x$ : Then $\psi=\psi(x, y), \varphi(t)=\exists y \psi(t, y)$ and $\varphi(\underline{a})=\exists y \psi(\underline{a}, y)$. As $\varphi(t)=\exists y \psi(t, y)$, we have $\mathcal{A} \models \varphi(t)$ iff $\mathcal{A} \models \psi(t, \underline{b})$ for some $b \in A$. By the inductive hypothesis the latter is equivalent to $\mathcal{A} \models \psi(\underline{a}, \underline{b})$ for some $b \in A$, hence equivalent to $\mathcal{A} \models \exists y \psi(\underline{a}, y)$. As $\varphi(\underline{a})=\exists y \psi(\underline{a}, y)$, we conclude that $\mathcal{A} \models \varphi(t)$ iff $\mathcal{A} \models \varphi(\underline{a})$. Case $y=x$ : Then $x$ does not occur free in $\varphi(x)=\exists x \psi$. So $\varphi(t)=\varphi(\underline{a})=\varphi$ is an $L_{A}$-sentence, and $\mathcal{A} \models \varphi(t) \Leftrightarrow \mathcal{A} \models \varphi(\underline{a})$ is obvious. When $\varphi(x)=\forall y \psi$, one can proceed exactly as above by distinguishing two cases.

### Logical Axioms and Rules; Formal Proofs

In this section we introduce a proof system for predicate logic and state its completeness. We then derive as a consequence the compactness theorem and some of its corollaries. The completeness is proved in the next chapter. We remind the reader of the notational conventions at the beginning of Section 2.6. A propositional axiom of $L$ is by definition a formula that for some $\varphi, \psi, \theta$ occurs in the list below:

1. $\top$
2.
$\varphi \rightarrow(\varphi \vee \psi) ; \quad \varphi \rightarrow(\psi \vee \varphi)$
3. $\neg \varphi \rightarrow(\neg \psi \rightarrow \neg(\varphi \vee \psi))$
4. $(\varphi \wedge \psi) \rightarrow \varphi ; \quad(\varphi \wedge \psi) \rightarrow \psi$
5. $\varphi \rightarrow(\psi \rightarrow(\varphi \wedge \psi))$
6. $(\varphi \rightarrow(\psi \rightarrow \theta)) \rightarrow((\varphi \rightarrow \psi) \rightarrow(\varphi \rightarrow \theta))$
7. $\varphi \rightarrow(\neg \varphi \rightarrow \perp)$
8. $(\neg \varphi \rightarrow \perp) \rightarrow \varphi$

Each of items $2-8$ is a scheme describing infinitely many axioms. Note that this list is the same as the list in Section 2.2 except that instead of propositions $p, q, r$ we have formulas $\varphi, \psi, \theta$. The logical axioms of $L$ are the propositional axioms of $L$ and the equality and quantifier axioms of $L$ as defined below. Definition. The equality axioms of $L$ are the following formulas: (i) $x=x$, (ii) $x=y \rightarrow y=x$, (iii) $(x=y \wedge y=z) \rightarrow x=z$, (iv) $\left(x_{1}=y_{1} \wedge \ldots \wedge x_{m}=y_{m} \wedge R x_{1} \ldots x_{m}\right) \rightarrow R y_{1} \ldots y_{m}$, (v) $\left(x_{1}=y_{1} \wedge \ldots \wedge x_{n}=y_{n}\right) \rightarrow F x_{1} \ldots x_{n}=F y_{1} \ldots y_{n}$, with the following restrictions on the variables and symbols of $L$: $x, y, z$ are distinct in (ii) and (iii); in (iv), $x_{1}, \ldots, x_{m}, y_{1}, \ldots, y_{m}$ are distinct and $R \in L^{\mathrm{r}}$ is $m$-ary; in (v), $x_{1}, \ldots, x_{n}, y_{1}, \ldots, y_{n}$ are distinct, and $F \in L^{\mathrm{f}}$ is $n$-ary. Note that (i) represents an axiom scheme rather than a single axiom, since different variables $x$ give different formulas $x=x$. Likewise with (ii)-(v). Let $x$ and $y$ be distinct variables, and let $\varphi(y)$ be the formula $\exists x(x \neq y)$. Then $\varphi(y)$ is valid in all $\mathcal{A}$ with $|A|>1$, but $\varphi(x / y)$ is invalid in all $\mathcal{A}$.
Thus substituting $x$ for the free occurrences of $y$ does not always preserve validity. To get rid of this anomaly, we introduce the following restriction on substitutions of a term $t$ for free occurrences of $y$. Definition. We say that $t$ is free for $y$ in $\varphi$, if no variable in $t$ can become bound upon replacing the free occurrences of $y$ in $\varphi$ by $t$, more precisely: whenever $x$ is a variable in $t$, then there are no occurrences of subformulas in $\varphi$ of the form $\exists x \psi$ or $\forall x \psi$ that contain an occurrence of $y$ that is free in $\varphi$. Note that if $t$ is variable-free, then $t$ is free for $y$ in $\varphi$. We remark that "free for" abbreviates "free to be substituted for." In exercise 3 the reader is asked to show that, with this restriction, substitution of a term for the free occurrences of a variable does preserve validity. Definition. The quantifier axioms of $L$ are the formulas $\varphi(t / y) \rightarrow \exists y \varphi$ and $\forall y \varphi \rightarrow \varphi(t / y)$ where $t$ is free for $y$ in $\varphi$. These axioms have been chosen to have the following property. Proposition 2.7.1. The logical axioms of $L$ are valid in every $L$-structure. We first prove this for the propositional axioms of $L$. Let $\alpha_{1}, \ldots, \alpha_{n}$ be distinct propositional atoms not in $L$. Let $p=p\left(\alpha_{1}, \ldots, \alpha_{n}\right) \in \operatorname{Prop}\left\{\alpha_{1}, \ldots, \alpha_{n}\right\}$. Let $\varphi_{1}, \ldots, \varphi_{n}$ be formulas and let $p\left(\varphi_{1}, \ldots, \varphi_{n}\right)$ be the word obtained by replacing each occurrence of $\alpha_{i}$ in $p$ by $\varphi_{i}$ for $i=1, \ldots, n$. One checks easily that $p\left(\varphi_{1}, \ldots, \varphi_{n}\right)$ is a formula. Lemma 2.7.2. Suppose $\varphi_{i}=\varphi_{i}\left(x_{1}, \ldots, x_{m}\right)$ for $1 \leq i \leq n$ and let $a_{1}, \ldots, a_{m} \in$ A. 
Define a truth assignment $t:\left\{\alpha_{1}, \ldots, \alpha_{n}\right\} \longrightarrow\{0,1\}$ by $t\left(\alpha_{i}\right)=1$ iff $\mathcal{A} \models \varphi_{i}\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)$. Then $p\left(\varphi_{1}, \ldots, \varphi_{n}\right)$ is an $L$-formula and $$ \begin{aligned} p\left(\varphi_{1}, \ldots, \varphi_{n}\right)\left(\underline{a}_{1} / x_{1}, \ldots, \underline{a}_{m} / x_{m}\right) &= p\left(\varphi_{1}\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right), \ldots, \varphi_{n}\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)\right), \\ t\left(p\left(\alpha_{1}, \ldots, \alpha_{n}\right)\right)=1 &\Longleftrightarrow \mathcal{A} \models p\left(\varphi_{1}\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right), \ldots, \varphi_{n}\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)\right) . \end{aligned} $$ In particular, if $p$ is a tautology, then $\mathcal{A} \models p\left(\varphi_{1}, \ldots, \varphi_{n}\right)$. Proof. Easy induction on $p$. We leave the details to the reader. Definition. An $L$-tautology is a formula of the form $p\left(\varphi_{1}, \ldots, \varphi_{n}\right)$ for some tautology $p\left(\alpha_{1}, \ldots, \alpha_{n}\right) \in \operatorname{Prop}\left\{\alpha_{1}, \ldots, \alpha_{n}\right\}$ and some formulas $\varphi_{1}, \ldots, \varphi_{n}$. By Lemma 2.7.2 all $L$-tautologies are valid in all $L$-structures. The propositional axioms of $L$ are $L$-tautologies, so all propositional axioms of $L$ are valid in all $L$-structures. It is easy to check that all equality axioms of $L$ are valid in all $L$-structures. In exercise 4 below the reader is asked to show that all quantifier axioms of $L$ are valid in all $L$-structures. This finishes the proof of Proposition 2.7.1. Next we introduce rules for deriving new formulas from given formulas. Definition. The logical rules of $L$ are the following: (i) Modus Ponens (MP): From $\varphi$ and $\varphi \rightarrow \psi$, infer $\psi$.
(ii) Generalization Rule (G): If the variable $x$ does not occur free in $\varphi$, then (a) from $\varphi \rightarrow \psi$, infer $\varphi \rightarrow \forall x \psi$; (b) from $\psi \rightarrow \varphi$, infer $\exists x \psi \rightarrow \varphi$. A key property of the logical rules is that their application preserves validity. Here is a more precise statement of this fact, to be verified by the reader. (i) If $\mathcal{A} \models \varphi$ and $\mathcal{A} \models \varphi \rightarrow \psi$, then $\mathcal{A} \models \psi$. (ii) Suppose $x$ does not occur free in $\varphi$. Then (a) if $\mathcal{A} \models \varphi \rightarrow \psi$, then $\mathcal{A} \models \varphi \rightarrow \forall x \psi$; (b) if $\mathcal{A} \models \psi \rightarrow \varphi$, then $\mathcal{A} \models \exists x \psi \rightarrow \varphi$. Definition. A formal proof, or just proof, of $\varphi$ from $\Sigma$ is a sequence $\varphi_{1}, \ldots, \varphi_{n}$ of formulas with $n \geq 1$ and $\varphi_{n}=\varphi$, such that for $k=1, \ldots, n$ : (i) either $\varphi_{k} \in \Sigma$, (ii) or $\varphi_{k}$ is a logical axiom, (iii) or there are $i, j \in\{1, \ldots, k-1\}$ such that $\varphi_{k}$ can be inferred from $\varphi_{i}$ and $\varphi_{j}$ by MP, or from $\varphi_{i}$ by G. We say that $\Sigma$ proves $\varphi$ (notation: $\Sigma \vdash \varphi$ ) if there exists a proof of $\varphi$ from $\Sigma$. Proposition 2.7.3. If $\Sigma \vdash \varphi$, then $\Sigma \models \varphi$. This follows easily from earlier facts that we stated and which the reader was asked to verify. The converse is more interesting, and due to Gödel (1930): Theorem 2.7.4 (Completeness Theorem of Predicate Logic). $$ \Sigma \vdash \varphi \Longleftrightarrow \Sigma \models \varphi $$ Remark. Our choice of proof system, and thus our notion of formal proof, is somewhat arbitrary.
However the equivalence of $\vdash$ and $\models$ (Completeness Theorem) justifies our choice of logical axioms and rules and shows in particular that no further logical axioms and rules are needed. Moreover, this equivalence has consequences that can be stated in terms of $\models$ alone. An example is the important Compactness Theorem. Theorem 2.7.5 (Compactness Theorem). If $\Sigma \models \sigma$, then there is a finite subset $\Sigma_{0}$ of $\Sigma$ such that $\Sigma_{0} \models \sigma$. The Compactness Theorem has many consequences. Here is one. Corollary 2.7.6. Suppose $\sigma$ is an $L_{\mathrm{Ring}}$-sentence that holds in all fields of characteristic 0. Then there exists a natural number $N$ such that $\sigma$ is true in all fields of characteristic $p>N$. Proof. By assumption, $$ \mathrm{Fl}(0)=\mathrm{Fl} \cup\{n 1 \neq 0: n=2,3,5, \ldots\} \models \sigma . $$ Then by Compactness, there is $N \in \mathbf{N}$ such that $$ \mathrm{Fl} \cup\{n 1 \neq 0: n \text { prime, } n \leq N\} \models \sigma . $$ It follows that $\sigma$ is true in all fields of characteristic $p>N$. The converse of this corollary fails, see exercise 9 below. Note that $\mathrm{Fl}(0)$ is infinite. Could there be an alternative finite set of axioms whose models are exactly the fields of characteristic 0? Corollary 2.7.7. There is no finite set of $L_{\mathrm{Ring}}$-sentences whose models are exactly the fields of characteristic 0. Proof. Suppose there is such a finite set of sentences $\left\{\sigma_{1}, \ldots, \sigma_{N}\right\}$. Let $\sigma:=\sigma_{1} \wedge \cdots \wedge \sigma_{N}$. Then the models of $\sigma$ are just the fields of characteristic 0. By the previous result $\sigma$ holds in some field of characteristic $p>0$. Contradiction! Exercises. The conventions made at the beginning of Section 2.6 about $L, \mathcal{A}, t$, $\varphi, \psi, \sigma, \Sigma$ remain in force! All but the last two exercises are to be done without using Theorem 2.7.4 or 2.7.5.
Actually, (4) and (7) will be used in proving Theorem 2.7.4. (1) Let $L=\{R\}$ where $R$ is a binary relation symbol, and let $\mathcal{A}=(A ; R)$ be a finite $L$-structure (i.e. the set $A$ is finite). Then there exists an $L$-sentence $\sigma$ such that the models of $\sigma$ are exactly the $L$-structures isomorphic to $\mathcal{A}$. (In fact, for an arbitrary language $L$, two finite $L$-structures are isomorphic iff they satisfy the same $L$-sentences.) (2) Let $\varphi$ and $\psi$ be $L$-formulas, $x, y$ variables, and $t$ an $L$-term. (a) If $\varphi$ is atomic, then $t$ is free for $x$ in $\varphi$. (b) $t$ is free for $x$ in $\neg \varphi$ iff $t$ is free for $x$ in $\varphi$. (c) $t$ is free for $x$ in $\varphi \vee \psi$ iff $t$ is free for $x$ in $\varphi$ and in $\psi$; and $t$ is free for $x$ in $\varphi \wedge \psi$ iff $t$ is free for $x$ in $\varphi$ and in $\psi$. (d) $t$ is free for $x$ in $\exists y \varphi$ iff either $x$ does not occur free in $\exists y \varphi$, or $y$ does not occur in $t$ and $t$ is free for $x$ in $\varphi$; likewise with $\forall y \varphi$. (3) If $t$ is free for $y$ in $\varphi$ and $\varphi$ is valid in $\mathcal{A}$, then $\varphi(t / y)$ is valid in $\mathcal{A}$. (4) Suppose $t$ is free for $y$ in $\varphi=\varphi\left(x_{1}, \ldots, x_{n}, y\right)$. Then: (i) Each $\mathcal{A}$-instance of the quantifier axiom $\varphi(t / y) \rightarrow \exists y \varphi$ has the form $$ \varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{n}, \tau\right) \rightarrow \exists y \varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{n}, y\right) $$ with $a_{1}, \ldots, a_{n} \in A$ and $\tau$ a variable-free $L_{A}$-term. (ii) The quantifier axiom $\varphi(t / y) \rightarrow \exists y \varphi$ is valid in $\mathcal{A}$. (Hint: use Lemma 2.6.2.) (iii) The quantifier axiom $\forall y \varphi \rightarrow \varphi(t / y)$ is valid in $\mathcal{A}$. (5) If $\varphi$ is an $L$-tautology, then $\vdash \varphi$.
(6) $\Sigma \vdash \varphi_{i}$ for $i=1, \ldots, n \Longleftrightarrow \Sigma \vdash \varphi_{1} \wedge \cdots \wedge \varphi_{n}$. (7) If $\Sigma \vdash \varphi \rightarrow \psi$ and $\Sigma \vdash \psi \rightarrow \varphi$, then $\Sigma \vdash \varphi \leftrightarrow \psi$. (8) $\vdash \neg \exists x \varphi \leftrightarrow \forall x \neg \varphi$ and $\vdash \neg \forall x \varphi \leftrightarrow \exists x \neg \varphi$. (9) Indicate an $L_{\mathrm{Ring}}$-sentence that is true in the field of real numbers, but false in all fields of positive characteristic. (10) Let $\sigma$ be an $L_{\mathrm{Ab}}$-sentence which holds in all non-trivial torsion-free abelian groups. Then there exists $N \in \mathbf{N}$ such that $\sigma$ is true in all groups $\mathbf{Z} / p \mathbf{Z}$ where $p$ is a prime number and $p>N$. (11) Suppose $\Sigma$ has arbitrarily large finite models. Then $\Sigma$ has an infinite model. (Here "finite" and "infinite" refer to the underlying set of the model.)

## Chapter 3

## The Completeness Theorem

In this chapter we prove the Completeness Theorem. As a byproduct we also derive some more elementary facts about predicate logic. The last section contains some of the basics of universal algebra, which we can treat here rather efficiently using our construction of a so-called term-model in the proof of the Completeness Theorem. Conventions on the use of $L, \mathcal{A}, t, \varphi, \psi, \theta, \sigma$ and $\Sigma$ are as in the beginning of Section 2.6.

### Another Form of Completeness

It is convenient to prove first a variant of the Completeness Theorem. Definition. We say that $\Sigma$ is consistent if $\Sigma \nvdash \perp$, and otherwise (that is, if $\Sigma \vdash \perp$), we call $\Sigma$ inconsistent. Theorem 3.1.1 (Completeness Theorem - second form). $\Sigma$ is consistent if and only if $\Sigma$ has a model. We first show that this second form of the Completeness Theorem implies the first form.
This will be done through a series of technical lemmas, which are also useful later in this Chapter. Lemma 3.1.2. Suppose $\Sigma \vdash \varphi$. Then $\Sigma \vdash \forall x \varphi$. Proof. From $\Sigma \vdash \varphi$ and the $L$-tautology $\varphi \rightarrow(\neg \forall x \varphi \rightarrow \varphi)$ we obtain $\Sigma \vdash$ $\neg \forall x \varphi \rightarrow \varphi$ by MP. Then by G we have $\Sigma \vdash \neg \forall x \varphi \rightarrow \forall x \varphi$. Using the $L$ tautology $(\neg \forall x \varphi \rightarrow \forall x \varphi) \rightarrow \forall x \varphi$ and MP we get $\Sigma \vdash \forall x \varphi$. Lemma 3.1.3 (Deduction Lemma). Suppose $\Sigma \cup\{\sigma\} \vdash \varphi$. Then $\Sigma \vdash \sigma \rightarrow \varphi$. Proof. By induction on the length of a proof of $\varphi$ from $\Sigma \cup\{\sigma\}$. The cases where $\varphi$ is a logical axiom, or $\varphi \in \Sigma \cup\{\sigma\}$, or $\varphi$ is obtained by MP are treated just as in the proof of the Deduction Lemma of Propositional Logic. Suppose that $\varphi$ is obtained by part (a) of $\mathrm{G}$, so $\varphi$ is $\varphi_{1} \rightarrow \forall x \psi$ where $x$ does not occur free in $\varphi_{1}$ and $\Sigma \cup\{\sigma\} \vdash \varphi_{1} \rightarrow \psi$, and where we assume inductively that $\Sigma \vdash \sigma \rightarrow\left(\varphi_{1} \rightarrow \psi\right)$. We have to argue that then $\Sigma \vdash \sigma \rightarrow\left(\varphi_{1} \rightarrow \forall x \psi\right)$. From the $L$-tautology $\left(\sigma \rightarrow\left(\varphi_{1} \rightarrow \psi\right)\right) \rightarrow\left(\left(\sigma \wedge \varphi_{1}\right) \rightarrow \psi\right)$ and MP we get $\Sigma \vdash$ $\left(\sigma \wedge \varphi_{1}\right) \rightarrow \psi$. Since $x$ does not occur free in $\sigma \wedge \varphi_{1}$ this gives $\Sigma \vdash\left(\sigma \wedge \varphi_{1}\right) \rightarrow \forall x \psi$, by G. 
Using the $L$-tautology $$ \left(\left(\sigma \wedge \varphi_{1}\right) \rightarrow \forall x \psi\right) \rightarrow\left(\sigma \rightarrow\left(\varphi_{1} \rightarrow \forall x \psi\right)\right) $$ and MP this gives $\Sigma \vdash \sigma \rightarrow\left(\varphi_{1} \rightarrow \forall x \psi\right)$. The case that $\varphi$ is obtained by part (b) of G is left to the reader. Corollary 3.1.4. Suppose $\Sigma \cup\left\{\sigma_{1}, \ldots, \sigma_{n}\right\} \vdash \varphi$. Then $\Sigma \vdash \sigma_{1} \wedge \ldots \wedge \sigma_{n} \rightarrow \varphi$. We leave the proof as an exercise. Corollary 3.1.5. $\Sigma \vdash \sigma$ if and only if $\Sigma \cup\{\neg \sigma\}$ is inconsistent. The proof is just like that of the corresponding fact of Propositional Logic. Lemma 3.1.6. $\Sigma \vdash \forall y \varphi$ if and only if $\Sigma \vdash \varphi$. Proof. $(\Leftarrow)$ This is Lemma 3.1.2. For $(\Rightarrow)$, assume $\Sigma \vdash \forall y \varphi$. We have the quantifier axiom $\forall y \varphi \rightarrow \varphi$, so by MP we get $\Sigma \vdash \varphi$. Corollary 3.1.7. $\Sigma \vdash \forall y_{1} \ldots \forall y_{n} \varphi$ if and only if $\Sigma \vdash \varphi$. Corollary 3.1.8. The second form of the Completeness Theorem implies the first form, Theorem 2.7.4. Proof. Assume the second form of the Completeness Theorem holds, and that $\Sigma \models \varphi$. It suffices to show that then $\Sigma \vdash \varphi$. From $\Sigma \models \varphi$ we obtain $\Sigma \models \forall y_{1} \ldots \forall y_{n} \varphi$ where $\varphi=\varphi\left(y_{1}, \ldots, y_{n}\right)$, and so $\Sigma \cup\{\neg \sigma\}$ has no model where $\sigma$ is the sentence $\forall y_{1} \ldots \forall y_{n} \varphi$. But then by the second form of the Completeness Theorem $\Sigma \cup\{\neg \sigma\}$ is inconsistent. Then by Corollary 3.1.5 we have $\Sigma \vdash \sigma$ and thus by Corollary 3.1.7 we get $\Sigma \vdash \varphi$.
We finish this section with another form of the Compactness Theorem: Theorem 3.1.9 (Compactness Theorem - second form). If each finite subset of $\Sigma$ has a model, then $\Sigma$ has a model. This follows from the second form of the Completeness Theorem. ### Proof of the Completeness Theorem We are now going to prove Theorem 3.1.1. Since $(\Leftarrow)$ is clear, we focus our attention on $(\Rightarrow)$, that is, given a consistent set of sentences $\Sigma$ we must show that $\Sigma$ has a model. This job will be done in a series of lemmas. Unless we say so, we do not assume in those lemmas that $\Sigma$ is consistent. Lemma 3.2.1. Suppose $\Sigma \vdash \varphi$ and $t$ is free for $x$ in $\varphi$. Then $\Sigma \vdash \varphi(t / x)$. Proof. From $\Sigma \vdash \varphi$ we get $\Sigma \vdash \forall x \varphi$ by Lemma 3.1.2. Then MP together with the quantifier axiom $\forall x \varphi \rightarrow \varphi(t / x)$ gives $\Sigma \vdash \varphi(t / x)$ as required. Lemma 3.2.2. Suppose $\Sigma \vdash \varphi$, let $x_{1}, \ldots, x_{n}$ be distinct variables, and let $t_{1}, \ldots, t_{n}$ be terms whose variables do not occur bound in $\varphi$. Then $$ \Sigma \vdash \varphi\left(t_{1} / x_{1}, \ldots, t_{n} / x_{n}\right) . $$ Proof. Take distinct variables $y_{1}, \ldots, y_{n}$ that do not occur in $\varphi$ or $t_{1}, \ldots, t_{n}$ and that are distinct from $x_{1}, \ldots, x_{n}$. Use Lemma 3.2.1 $n$ times in succession to obtain $\Sigma \vdash \psi$ where $\psi=\varphi\left(y_{1} / x_{1}, \ldots, y_{n} / x_{n}\right)$. Apply Lemma 3.2.1 again $n$ times to get $\Sigma \vdash \psi\left(t_{1} / y_{1}, \ldots, t_{n} / y_{n}\right)$. To finish, observe that $\psi\left(t_{1} / y_{1}, \ldots, t_{n} / y_{n}\right)=\varphi\left(t_{1} / x_{1}, \ldots, t_{n} / x_{n}\right)$. Lemma 3.2.3. Let $t, t^{\prime}, t_{1}, t_{1}^{\prime}, t_{2}, t_{2}^{\prime}, \ldots$ be L-terms. (1) $\vdash t=t$. (2) If $\Sigma \vdash t=t^{\prime}$, then $\Sigma \vdash t^{\prime}=t$.
(3) If $\Sigma \vdash t_{1}=t_{2}$ and $\Sigma \vdash t_{2}=t_{3}$, then $\Sigma \vdash t_{1}=t_{3}$. (4) Let $R \in L^{\mathrm{r}}$ be $m$-ary and suppose $\Sigma \vdash t_{i}=t_{i}^{\prime}$ for $i=1, \ldots, m$ and $\Sigma \vdash R t_{1} \ldots t_{m}$. Then $\Sigma \vdash R t_{1}^{\prime} \ldots t_{m}^{\prime}$. (5) Let $F \in L^{\mathrm{f}}$ be $n$-ary, and suppose $\Sigma \vdash t_{i}=t_{i}^{\prime}$ for $i=1, \ldots, n$. Then $\Sigma \vdash F t_{1} \ldots t_{n}=F t_{1}^{\prime} \ldots t_{n}^{\prime}$. Proof. For (1), take an equality axiom $x=x$ and apply Lemma 3.2.1. For (2), we take an equality axiom $x=y \rightarrow y=x$, apply Lemma 3.2.2 to obtain $\vdash t=t^{\prime} \rightarrow t^{\prime}=t$, and use MP. For (3), take an equality axiom $$ (x=y \wedge y=z) \rightarrow(x=z), $$ apply Lemma 3.2.2 to get $\vdash\left(t_{1}=t_{2} \wedge t_{2}=t_{3}\right) \rightarrow t_{1}=t_{3}$, use Exercise 6 in Section 2.7 and MP. To prove (4), take an equality axiom $$ x_{1}=y_{1} \wedge \ldots \wedge x_{m}=y_{m} \wedge R x_{1} \ldots x_{m} \rightarrow R y_{1} \ldots y_{m}, $$ apply Lemma 3.2.2 to obtain $$ \Sigma \vdash t_{1}=t_{1}^{\prime} \wedge \ldots \wedge t_{m}=t_{m}^{\prime} \wedge R t_{1} \ldots t_{m} \rightarrow R t_{1}^{\prime} \ldots t_{m}^{\prime}, $$ and use Exercise 6 as before, and MP. Part (5) is obtained similarly by taking an equality axiom $x_{1}=y_{1} \wedge \ldots \wedge x_{n}=y_{n} \rightarrow F x_{1} \ldots x_{n}=F y_{1} \ldots y_{n}$. Definition. Let $\operatorname{Term}_{L}$ be the set of variable-free $L$-terms. We define a binary relation $\sim_{\Sigma}$ on $\operatorname{Term}_{L}$ by $$ t_{1} \sim_{\Sigma} t_{2} \Longleftrightarrow \Sigma \vdash t_{1}=t_{2} . $$ Parts (1), (2) and (3) of the last lemma yield the following. Lemma 3.2.4. The relation $\sim_{\Sigma}$ is an equivalence relation on $\operatorname{Term}_{L}$. Definition. Suppose $L$ has at least one constant symbol. Then $\mathrm{Term}_{L}$ is nonempty. 
We define the $L$-structure $\mathcal{A}_{\Sigma}$ as follows: (i) Its underlying set is $A_{\Sigma}:=\operatorname{Term}_{L} / \sim_{\Sigma}$. Let $[t]$ denote the equivalence class of $t \in \operatorname{Term}_{L}$ with respect to $\sim_{\Sigma}$. (ii) If $R \in L^{\mathrm{r}}$ is $m$-ary, then $R^{\mathcal{A}_{\Sigma}} \subseteq A_{\Sigma}^{m}$ is given by $$ \left(\left[t_{1}\right], \ldots,\left[t_{m}\right]\right) \in R^{\mathcal{A}_{\Sigma}}: \Longleftrightarrow \Sigma \vdash R t_{1} \ldots t_{m} \quad\left(t_{1}, \ldots, t_{m} \in \mathrm{Term}_{L}\right) . $$ (iii) If $F \in L^{\mathrm{f}}$ is $n$-ary, then $F^{\mathcal{A}_{\Sigma}}: A_{\Sigma}^{n} \rightarrow A_{\Sigma}$ is given by $$ F^{\mathcal{A}_{\Sigma}}\left(\left[t_{1}\right], \ldots,\left[t_{n}\right]\right)=\left[F t_{1} \ldots t_{n}\right] \quad\left(t_{1}, \ldots, t_{n} \in \operatorname{Term}_{L}\right) . $$ Remark. The reader should verify that this counts as a definition, that is: in (ii), whether or not $\Sigma \vdash R t_{1} \ldots t_{m}$ depends only on $\left(\left[t_{1}\right], \ldots,\left[t_{m}\right]\right)$, not on $\left(t_{1}, \ldots, t_{m}\right)$; in (iii), $\left[F t_{1} \ldots t_{n}\right]$ depends likewise only on $\left(\left[t_{1}\right], \ldots,\left[t_{n}\right]\right)$. (Use parts (4) and (5) of Lemma 3.2.3.) Corollary 3.2.5. Suppose $L$ has a constant symbol, and $\Sigma$ is consistent. Then (1) for each $t \in \operatorname{Term}_{L}$ we have $t^{\mathcal{A}_{\Sigma}}=[t]$; (2) for each atomic $\sigma$ we have: $\Sigma \vdash \sigma \Longleftrightarrow \mathcal{A}_{\Sigma} \models \sigma$. Proof. Part (1) follows by an easy induction. Let $\sigma$ be $R t_{1} \ldots t_{m}$ where $R \in L^{\mathrm{r}}$ is $m$-ary and $t_{1}, \ldots, t_{m} \in \operatorname{Term}_{L}$. 
Then $$ \Sigma \vdash R t_{1} \ldots t_{m} \Leftrightarrow\left(\left[t_{1}\right], \ldots,\left[t_{m}\right]\right) \in R^{\mathcal{A}_{\Sigma}} \Leftrightarrow \mathcal{A}_{\Sigma} \models R t_{1} \ldots t_{m}, $$ where the last "$\Leftrightarrow$" follows from the definition of $\models$ together with part (1). Now suppose that $\sigma$ is $t_{1}=t_{2}$ where $t_{1}, t_{2} \in \operatorname{Term}_{L}$. Then $$ \Sigma \vdash t_{1}=t_{2} \Leftrightarrow\left[t_{1}\right]=\left[t_{2}\right] \Leftrightarrow t_{1}^{\mathcal{A}_{\Sigma}}=t_{2}^{\mathcal{A}_{\Sigma}} \Leftrightarrow \mathcal{A}_{\Sigma} \models t_{1}=t_{2} . $$ We also have $\Sigma \vdash \top \Leftrightarrow \mathcal{A}_{\Sigma} \models \top$. So far we haven't used the assumption that $\Sigma$ is consistent, but now we do. The consistency of $\Sigma$ means that $\Sigma \nvdash \perp$. We also have $\mathcal{A}_{\Sigma} \not\models \perp$ by definition of $\models$. Thus $\Sigma \vdash \perp \Leftrightarrow \mathcal{A}_{\Sigma} \models \perp$. If the equivalence in part (2) of this corollary holds for all $\sigma$ (not only for atomic $\sigma$), then $\mathcal{A}_{\Sigma} \models \Sigma$, so we would have found a model of $\Sigma$, and be done. But clearly this equivalence can only hold for all $\sigma$ if $\Sigma$ has the property that for each $\sigma$, either $\Sigma \vdash \sigma$ or $\Sigma \vdash \neg \sigma$. This property is of interest for other reasons as well, and deserves a name: Definition. We say that $\Sigma$ is complete if $\Sigma$ is consistent, and for each $\sigma$ either $\Sigma \vdash \sigma$ or $\Sigma \vdash \neg \sigma$. Example. Let $L=L_{\mathrm{Ab}}$, $\Sigma:=\mathrm{Ab}$ (the set of axioms for abelian groups), and $\sigma$ the sentence $\exists x(x \neq 0)$. Then $\Sigma \nvdash \sigma$ since the trivial group doesn't satisfy $\sigma$. Also $\Sigma \nvdash \neg \sigma$, since there are non-trivial abelian groups and $\sigma$ holds in such groups. Thus $\Sigma$ is not complete.
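To make the quotient construction of $\mathcal{A}_{\Sigma}$ concrete, here is a small Python sketch (the term encoding and function names are our own, purely for illustration, and not part of the text). It replaces provability of ground equations by a given finite list of equations and closes it under reflexivity, symmetry, transitivity and congruence — the closure properties supplied by parts (1)–(5) of Lemma 3.2.3 — over a finite fragment of $\mathrm{Term}_{L}$, for a language with one constant symbol c and one unary function symbol f:

```python
# A finite sketch of the term-model universe: Term_L modulo ~_Sigma.
# Terms are nested tuples: ('c',) is the constant, ('f', t) applies f to t.

def make_terms(depth):
    """Variable-free terms c, f(c), f(f(c)), ... up to the given depth."""
    terms = [('c',)]
    for _ in range(depth):
        terms.append(('f', terms[-1]))
    return terms

def classes(terms, equations):
    """Partition `terms` by the smallest congruence containing `equations`."""
    parent = {t: t for t in terms}          # union-find forest

    def find(t):
        while parent[t] != t:
            t = parent[t]
        return t

    def union(s, t):
        parent[find(s)] = find(t)

    for s, t in equations:                  # the given ground equations
        union(s, t)
    # Congruence (part (5) of Lemma 3.2.3): if s ~ t then f(s) ~ f(t);
    # iterate to a fixpoint over the finite term universe.
    changed = True
    while changed:
        changed = False
        for s in terms:
            for t in terms:
                if s[0] == 'f' and t[0] == 'f' and find(s[1]) == find(t[1]) \
                        and find(s) != find(t):
                    union(s, t)
                    changed = True
    groups = {}
    for t in terms:
        groups.setdefault(find(t), []).append(t)
    return list(groups.values())
```

With the single equation f(c) = c, congruence forces all of c, f(c), f(f(c)), ... into one class, so the sketched universe has a single point; with no equations every class is a singleton.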
Completeness is a strong property and it can be hard to show that a given set of axioms is complete. The set of axioms for algebraically closed fields of characteristic 0 is complete (see the end of Section 4.3). A key fact about completeness needed in this chapter is that any consistent set of sentences extends to a complete set of sentences: Lemma 3.2.6 (Lindenbaum). If $\Sigma$ is consistent, then $\Sigma \subseteq \Sigma^{\prime}$ for some complete set $\Sigma^{\prime}$ of $L$-sentences. The proof uses Zorn's Lemma, and is just like that of the corresponding fact of Propositional Logic in Section 1.2. Completeness of $\Sigma$ does not guarantee that the equivalence of part (2) of Corollary 3.2.5 holds for all $\sigma$. Completeness is only a necessary condition for this equivalence to hold for all $\sigma$; another necessary condition is "to have witnesses": Definition. A $\Sigma$-witness for the sentence $\exists x \varphi(x)$ is a term $t \in \operatorname{Term}_{L}$ such that $\Sigma \vdash \varphi(t)$. We say that $\Sigma$ has witnesses if there is a $\Sigma$-witness for every sentence $\exists x \varphi(x)$ proved by $\Sigma$. Theorem 3.2.7. Let $L$ have a constant symbol, and suppose $\Sigma$ is consistent. Then the following two conditions are equivalent: (i) For each $\sigma$ we have: $\Sigma \vdash \sigma \Leftrightarrow \mathcal{A}_{\Sigma} \models \sigma$. (ii) $\Sigma$ is complete and has witnesses. In particular, if $\Sigma$ is complete and has witnesses, then $\mathcal{A}_{\Sigma}$ is a model of $\Sigma$. Proof. It should be clear that (i) implies (ii). For the converse, assume (ii). We use induction on the number of logical symbols in $\sigma$ to obtain (i). We already know that (i) holds for atomic sentences. The cases that $\sigma=\neg \sigma_{1}, \sigma=\sigma_{1} \vee \sigma_{2}$, and $\sigma=\sigma_{1} \wedge \sigma_{2}$ are treated just as in the proof of the corresponding Lemma 2.2.12 for Propositional Logic. 
It remains to consider two cases: Case $\sigma=\exists x \varphi(x)$: $(\Rightarrow)$ Suppose that $\Sigma \vdash \sigma$. Because we are assuming that $\Sigma$ has witnesses we have a $t \in \operatorname{Term}_{L}$ such that $\Sigma \vdash \varphi(t)$. Then by the inductive hypothesis $\mathcal{A}_{\Sigma} \models \varphi(t)$. So by Lemma 2.6.2 we have an $a \in A_{\Sigma}$ such that $\mathcal{A}_{\Sigma} \models \varphi(\underline{a})$. Therefore $\mathcal{A}_{\Sigma} \models \exists x \varphi(x)$, hence $\mathcal{A}_{\Sigma} \models \sigma$. $(\Leftarrow)$ Assume $\mathcal{A}_{\Sigma} \models \sigma$. Then there is an $a \in A_{\Sigma}$ such that $\mathcal{A}_{\Sigma} \models \varphi(\underline{a})$. Choose $t \in \operatorname{Term}_{L}$ such that $[t]=a$. Then $t^{\mathcal{A}_{\Sigma}}=a$, hence $\mathcal{A}_{\Sigma} \models \varphi(t)$ by Lemma 2.6.2. Applying the inductive hypothesis we get $\Sigma \vdash \varphi(t)$. This yields $\Sigma \vdash \exists x \varphi(x)$ by MP and the quantifier axiom $\varphi(t) \rightarrow \exists x \varphi(x)$. Case $\sigma=\forall x \varphi(x)$: This is similar to the previous case, but we also need the result from Exercise 8 in Section 2.7 that $\vdash \neg \forall x \varphi \leftrightarrow \exists x \neg \varphi$. We call attention to some new notation in the next lemmas: the symbol $\vdash_{L}$ is used to emphasize that we are dealing with formal provability within $L$. Lemma 3.2.8. Let $\Sigma$ be a set of L-sentences, $c$ a constant symbol not in $L$, and $L_{c}:=L \cup\{c\}$. Let $\varphi(y)$ be an L-formula and suppose $\Sigma \vdash_{L_{c}} \varphi(c)$. Then $\Sigma \vdash_{L} \varphi(y)$. Proof. (Sketch) Take a proof of $\varphi(c)$ from $\Sigma$ in the language $L_{c}$, and take a variable $z$ different from all variables occurring in that proof, and also such that $z \neq y$. Replace in every formula in this proof each occurrence of $c$ by $z$.
Check that one obtains in this way a proof of $\varphi(z / y)$ in the language $L$ from $\Sigma$. So $\Sigma \vdash_{L} \varphi(z / y)$ and hence by Lemma 3.2.1 we have $\Sigma \vdash_{L} \varphi(z / y)(y / z)$, that is, $\Sigma \vdash_{L} \varphi(y)$. Lemma 3.2.9. Assume $\Sigma$ is consistent and $\Sigma \vdash \exists y \varphi(y)$. Let $c$ be a constant symbol not in L. Put $L_{c}:=L \cup\{c\}$. Then $\Sigma \cup\{\varphi(c)\}$ is a consistent set of $L_{c}$-sentences. Proof. Suppose not. Then $\Sigma \cup\{\varphi(c)\} \vdash_{L_{c}} \perp$. By the Deduction Lemma (3.1.3) $\Sigma \vdash_{L_{c}} \varphi(c) \rightarrow \perp$. Then by Lemma 3.2.8 we have $\Sigma \vdash_{L} \varphi(y) \rightarrow \perp$. By G we have $\Sigma \vdash_{L} \exists y \varphi(y) \rightarrow \perp$. Applying MP yields $\Sigma \vdash \perp$, contradicting the consistency of $\Sigma$. Lemma 3.2.10. Suppose $\Sigma$ is consistent. Let $\sigma_{1}=\exists x_{1} \varphi_{1}\left(x_{1}\right), \ldots, \sigma_{n}=$ $\exists x_{n} \varphi_{n}\left(x_{n}\right)$ be such that $\Sigma \vdash \sigma_{i}$ for every $i=1, \ldots, n$. Let $c_{1}, \ldots, c_{n}$ be distinct constant symbols not in $L$. Put $L^{\prime}:=L \cup\left\{c_{1}, \ldots, c_{n}\right\}$ and $\Sigma^{\prime}=$ $\Sigma \cup\left\{\varphi_{1}\left(c_{1}\right), \ldots, \varphi_{n}\left(c_{n}\right)\right\}$. Then $\Sigma^{\prime}$ is a consistent set of $L^{\prime}$-sentences. Proof. The previous lemma covers the case $n=1$. The general case follows by induction on $n$. In the next lemma we use a superscript "w" for "witness." Lemma 3.2.11. Suppose $\Sigma$ is consistent. For each L-sentence $\sigma=\exists x \varphi(x)$ such that $\Sigma \vdash \sigma$, let $c_{\sigma}$ be a constant symbol not in $L$ such that if $\sigma^{\prime}$ is a different $L$-sentence of the form $\exists x^{\prime} \varphi^{\prime}\left(x^{\prime}\right)$ provable from $\Sigma$, then $c_{\sigma} \neq c_{\sigma^{\prime}}$. 
Put $$ \begin{aligned} & L^{w}:=L \cup\left\{c_{\sigma}: \sigma=\exists x \varphi(x) \text { is an } L \text {-sentence such that } \Sigma \vdash \sigma\right\} \\ & \Sigma^{w}:=\Sigma \cup\left\{\varphi\left(c_{\sigma}\right): \sigma=\exists x \varphi(x) \text { is an } L \text {-sentence such that } \Sigma \vdash \sigma\right\} \end{aligned} $$ Then $\Sigma^{w}$ is a consistent set of $L^{w}$-sentences. Proof. Suppose not. Then $\Sigma^{w} \vdash \perp$. Take a proof of $\perp$ from $\Sigma^{w}$ and let $c_{\sigma_{1}}, \ldots, c_{\sigma_{n}}$ be constant symbols in $L^{w} \backslash L$ such that this proof is a proof of $\perp$ in the language $L \cup\left\{c_{\sigma_{1}}, \ldots, c_{\sigma_{n}}\right\}$ from $\Sigma \cup\left\{\varphi_{1}\left(c_{\sigma_{1}}\right), \ldots, \varphi_{n}\left(c_{\sigma_{n}}\right)\right\}$, where $\sigma_{i}=\exists x_{i} \varphi_{i}\left(x_{i}\right)$ for $1 \leq i \leq n$. So $\Sigma \cup\left\{\varphi_{1}\left(c_{\sigma_{1}}\right), \ldots, \varphi_{n}\left(c_{\sigma_{n}}\right)\right\}$ is an inconsistent set of $L \cup\left\{c_{\sigma_{1}}, \ldots, c_{\sigma_{n}}\right\}$-sentences. This contradicts Lemma 3.2.10. Lemma 3.2.12. Let $L_{0} \subseteq L_{1} \subseteq L_{2} \subseteq \ldots$ be an increasing sequence $\left(L_{n}\right)$ of languages, and set $L_{\infty}:=\bigcup_{n} L_{n}$. Let $\Sigma_{n}$ be a consistent set of $L_{n}$-sentences, for each $n$, such that $\Sigma_{0} \subseteq \Sigma_{1} \subseteq \Sigma_{2} \subseteq \ldots$. Then the union $\Sigma_{\infty}:=\bigcup_{n} \Sigma_{n}$ is a consistent set of $L_{\infty}$-sentences. Proof. Suppose that $\Sigma_{\infty} \vdash \perp$. Take a proof of $\perp$ from $\Sigma_{\infty}$. Then we can choose $n$ so large that this is actually a proof of $\perp$ from $\Sigma_{n}$ in $L_{n}$. This contradicts the consistency of $\Sigma_{n}$. Suppose the language $L^{*}$ extends $L$, let $\mathcal{A}$ be an $L$-structure, and let $\mathcal{A}^{*}$ be an $L^{*}$-structure.
Then $\mathcal{A}$ is said to be a reduct of $\mathcal{A}^{*}$ (and $\mathcal{A}^{*}$ an expansion of $\mathcal{A}$) if $\mathcal{A}$ and $\mathcal{A}^{*}$ have the same underlying set and the same interpretations of the symbols of $L$. For example, $(\mathbf{N} ; 0,+)$ is a reduct of $(\mathbf{N} ;<, 0,1,+, \cdot)$. Note that any $L^{*}$-structure $\mathcal{A}^{*}$ has a unique reduct to an $L$-structure, which we indicate by $\left.\mathcal{A}^{*}\right|_{L}$. A key fact (to be verified by the reader) is that if $\mathcal{A}$ is a reduct of $\mathcal{A}^{*}$, then $t^{\mathcal{A}}=t^{\mathcal{A}^{*}}$ for all variable-free $L_{A}$-terms $t$, and $$ \mathcal{A} \models \sigma \Longleftrightarrow \mathcal{A}^{*} \models \sigma $$ for all $L_{A}$-sentences $\sigma$. We can now prove Theorem 3.1.1. Proof. Let $\Sigma$ be a consistent set of $L$-sentences. We construct a sequence $\left(L_{n}\right)$ of languages and a sequence $\left(\Sigma_{n}\right)$ where each $\Sigma_{n}$ is a consistent set of $L_{n}$-sentences. We begin by setting $L_{0}=L$ and $\Sigma_{0}=\Sigma$. Given the language $L_{n}$ and the consistent set of $L_{n}$-sentences $\Sigma_{n}$, put $$ L_{n+1}:= \begin{cases}L_{n} & \text { if } n \text { is even }, \\ L_{n}^{w} & \text { if } n \text { is odd },\end{cases} $$ choose a complete set of $L_{n}$-sentences $\Sigma_{n}^{\prime} \supseteq \Sigma_{n}$, and put $$ \Sigma_{n+1}:= \begin{cases}\Sigma_{n}^{\prime} & \text { if } n \text { is even } \\ \Sigma_{n}^{w} & \text { if } n \text { is odd. }\end{cases} $$ Here $L_{n}^{w}$ and $\Sigma_{n}^{w}$ are obtained from $L_{n}$ and $\Sigma_{n}$ in the same way that $L^{w}$ and $\Sigma^{w}$ are obtained from $L$ and $\Sigma$ in Lemma 3.2.11. Note that $L_{n} \subseteq L_{n+1}$, and $\Sigma_{n} \subseteq \Sigma_{n+1}$. By the previous lemma the set $\Sigma_{\infty}$ of $L_{\infty}$-sentences is consistent. It is also complete. To see this, let $\sigma$ be an $L_{\infty}$-sentence.
Take $n$ even and so large that $\sigma$ is an $L_{n}$-sentence. Then $\Sigma_{n+1} \vdash \sigma$ or $\Sigma_{n+1} \vdash \neg \sigma$, and thus $\Sigma_{\infty} \vdash \sigma$ or $\Sigma_{\infty} \vdash \neg \sigma$. We claim that $\Sigma_{\infty}$ has witnesses. To see this, let $\sigma=\exists x \varphi(x)$ be an $L_{\infty}$-sentence such that $\Sigma_{\infty} \vdash \sigma$. Now take $n$ to be odd and so large that $\sigma$ is an $L_{n}$-sentence and $\Sigma_{n} \vdash \sigma$. Then by construction of $\Sigma_{n+1}=\Sigma_{n}^{w}$ we have $\Sigma_{n+1} \vdash \varphi\left(c_{\sigma}\right)$, so $\Sigma_{\infty} \vdash \varphi\left(c_{\sigma}\right)$. It follows from Theorem 3.2.7 that $\Sigma_{\infty}$ has a model, namely $\mathcal{A}_{\Sigma_{\infty}}$. Put $\mathcal{A}:=\left.\mathcal{A}_{\Sigma_{\infty}}\right|_{L}$. Then $\mathcal{A} \models \Sigma$. This concludes the proof of the Completeness Theorem (second form). ## Exercises. (1) $\Sigma$ is complete if and only if $\Sigma$ has a model and every two models of $\Sigma$ satisfy the same sentences. (2) Let $L$ have just a constant symbol $c$, a unary relation symbol $U$ and a unary function symbol $f$, and suppose that $\Sigma \vdash U f c$, and that $f$ does not occur in the sentences of $\Sigma$. Then $\Sigma \vdash \forall x U x$. ### Some Elementary Results of Predicate Logic Here we obtain some generalities of predicate logic: Equivalence and Equality Theorems, Variants, and Prenex Form. In some proofs we shall take advantage of the fact that the Completeness Theorem is now available. Lemma 3.3.1 (Distribution Rule). We have the following: (i) Suppose $\Sigma \vdash \varphi \rightarrow \psi$. Then $\Sigma \vdash \exists x \varphi \rightarrow \exists x \psi$ and $\Sigma \vdash \forall x \varphi \rightarrow \forall x \psi$. (ii) Suppose $\Sigma \vdash \varphi \leftrightarrow \psi$.
Then $\Sigma \vdash \exists x \varphi \leftrightarrow \exists x \psi$ and $\Sigma \vdash \forall x \varphi \leftrightarrow \forall x \psi$. Proof. We only do (i), since (ii) then follows easily. Let $\mathcal{A}$ be a model of $\Sigma$. By the Completeness Theorem it suffices to show that then $\mathcal{A} \models \exists x \varphi \rightarrow \exists x \psi$ and $\mathcal{A} \models \forall x \varphi \rightarrow \forall x \psi$. We shall prove $\mathcal{A} \models \exists x \varphi \rightarrow \exists x \psi$ and leave the other part to the reader. We have $\mathcal{A} \models \varphi \rightarrow \psi$. Choose variables $y_{1}, \ldots, y_{n}$ such that $\varphi=\varphi\left(x, y_{1}, \ldots, y_{n}\right)$ and $\psi=\psi\left(x, y_{1}, \ldots, y_{n}\right)$. We need only show that then for all $a_{1}, \ldots, a_{n} \in A$ $$ \mathcal{A} \models \exists x \varphi\left(x, \underline{a}_{1}, \ldots, \underline{a}_{n}\right) \rightarrow \exists x \psi\left(x, \underline{a}_{1}, \ldots, \underline{a}_{n}\right) $$ Suppose $\mathcal{A} \models \exists x \varphi\left(x, \underline{a}_{1}, \ldots, \underline{a}_{n}\right)$. This yields $a_{0} \in A$ with $\mathcal{A} \models \varphi\left(\underline{a}_{0}, \underline{a}_{1}, \ldots, \underline{a}_{n}\right)$. From $\mathcal{A} \models \varphi \rightarrow \psi$ we obtain $\mathcal{A} \models \varphi\left(\underline{a}_{0}, \ldots, \underline{a}_{n}\right) \rightarrow \psi\left(\underline{a}_{0}, \ldots, \underline{a}_{n}\right)$, which gives $\mathcal{A} \models \psi\left(\underline{a}_{0}, \ldots, \underline{a}_{n}\right)$, and thus $\mathcal{A} \models \exists x \psi\left(x, \underline{a}_{1}, \ldots, \underline{a}_{n}\right)$. Theorem 3.3.2 (Equivalence Theorem). Let $\psi^{\prime}$ be the result of replacing in the formula $\psi$ some occurrence of a subformula $\varphi$ by the formula $\varphi^{\prime}$, and suppose that $\Sigma \vdash \varphi \leftrightarrow \varphi^{\prime}$.
Then $\psi^{\prime}$ is again a formula and $\Sigma \vdash \psi \leftrightarrow \psi^{\prime}$. Proof. By induction on the number of logical symbols in $\psi$. If $\psi$ is atomic, then necessarily $\psi=\varphi$ and $\psi^{\prime}=\varphi^{\prime}$ and the desired result holds trivially. Suppose that $\psi=\neg \theta$. Then either $\psi=\varphi$ and $\psi^{\prime}=\varphi^{\prime}$, and the desired result holds trivially, or the occurrence of $\varphi$ we are replacing is an occurrence in $\theta$. Then the inductive hypothesis gives $\Sigma \vdash \theta \leftrightarrow \theta^{\prime}$, where $\theta^{\prime}$ is obtained by replacing that occurrence (of $\varphi$ ) by $\varphi^{\prime}$. Then $\psi^{\prime}=\neg \theta^{\prime}$ and the desired result follows easily. The cases $\psi=\psi_{1} \vee \psi_{2}$ and $\psi=\psi_{1} \wedge \psi_{2}$ are left as exercises. Suppose that $\psi=\exists x \theta$. The case $\psi=\varphi$ (and thus $\psi^{\prime}=\varphi^{\prime}$ ) is trivial. Suppose $\psi \neq \varphi$. Then the occurrence of $\varphi$ we are replacing is an occurrence inside $\theta$. So by inductive hypothesis we have $\Sigma \vdash \theta \leftrightarrow \theta^{\prime}$. Then by the distribution rule $\Sigma \vdash \exists x \theta \leftrightarrow \exists x \theta^{\prime}$. The proof is similar if $\psi=\forall x \theta$. Definition. We say $\varphi_{1}$ and $\varphi_{2}$ are $\Sigma$-equivalent if $\Sigma \vdash \varphi_{1} \leftrightarrow \varphi_{2}$. (In case $\Sigma=\emptyset$, we just say equivalent.) One verifies easily that $\Sigma$-equivalence is an equivalence relation on the set of $L$-formulas. 
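Equivalence (the case $\Sigma=\emptyset$) can also be probed semantically: by the Completeness Theorem together with soundness, $\vdash \varphi_{1} \leftrightarrow \varphi_{2}$ holds exactly when $\varphi_{1} \leftrightarrow \varphi_{2}$ is valid in every structure, so a single finite structure suffices to refute a claimed equivalence. Below is a brute-force Python evaluator over a finite structure with one binary relation symbol; the tuple encoding of formulas is a hypothetical mini-syntax of ours, not the official one:

```python
# Brute-force satisfaction over a finite structure (universe, R).
# Formulas are nested tuples: ('R', x, y), ('not', p), ('or', p, q),
# ('and', p, q), ('exists', x, p), ('forall', x, p); env maps free
# variables to elements of the universe.

def holds(phi, universe, R, env):
    op = phi[0]
    if op == 'R':
        return (env[phi[1]], env[phi[2]]) in R
    if op == 'not':
        return not holds(phi[1], universe, R, env)
    if op == 'or':
        return holds(phi[1], universe, R, env) or holds(phi[2], universe, R, env)
    if op == 'and':
        return holds(phi[1], universe, R, env) and holds(phi[2], universe, R, env)
    if op == 'exists':
        # try every element of the finite universe as a value for the bound variable
        return any(holds(phi[2], universe, R, {**env, phi[1]: a}) for a in universe)
    if op == 'forall':
        return all(holds(phi[2], universe, R, {**env, phi[1]: a}) for a in universe)
    raise ValueError(op)
```

For instance, in the structure with universe {0, 1} and R = {(0, 1)}, the two sides of the equivalence $\vdash \neg \exists x \varphi \leftrightarrow \forall x \neg \varphi$ from Exercise (8) of Section 2.7, taken with $\varphi = Rxx$, both evaluate to true, as they must.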
Given a family $\left(\varphi_{i}\right)_{i \in I}$ of formulas with finite index set $I$ we choose a bijection $k \mapsto i(k):\{1, \ldots, n\} \rightarrow I$ and set $$ \bigvee_{i \in I} \varphi_{i}:=\varphi_{i(1)} \vee \cdots \vee \varphi_{i(n)}, \quad \bigwedge_{i \in I} \varphi_{i}:=\varphi_{i(1)} \wedge \cdots \wedge \varphi_{i(n)} $$ If $I$ is clear from context we just write $\bigvee_{i} \varphi_{i}$ and $\bigwedge_{i} \varphi_{i}$ instead. Of course, these notations $\bigvee_{i \in I} \varphi_{i}$ and $\bigwedge_{i \in I} \varphi_{i}$ can only be used when the particular choice of bijection of $\{1, \ldots, n\}$ with $I$ does not matter; this is usually the case because the equivalence class of $\varphi_{i(1)} \vee \cdots \vee \varphi_{i(n)}$ does not depend on this choice, and the same is true with "$\wedge$" instead of "$\vee$". Definition. A variant of a formula is obtained by successive replacements of the following type: (i) replace an occurrence of a subformula $\exists x \varphi$ by $\exists y \varphi(y / x)$; (ii) replace an occurrence of a subformula $\forall x \varphi$ by $\forall y \varphi(y / x)$; where in both cases $y$ is free for $x$ in $\varphi$ and $y$ does not occur free in $\varphi$. Lemma 3.3.3. A formula is equivalent to any of its variants. Proof. By the Equivalence Theorem (3.3.2) it suffices to show $\vdash \exists x \varphi \leftrightarrow \exists y \varphi(y / x)$ and $\vdash \forall x \varphi \leftrightarrow \forall y \varphi(y / x)$ where $y$ is free for $x$ in $\varphi$ and does not occur free in $\varphi$. We prove the first equivalence, leaving the second as an exercise. Applying G to the quantifier axiom $\varphi(y / x) \rightarrow \exists x \varphi$ gives $\vdash \exists y \varphi(y / x) \rightarrow \exists x \varphi$. Similarly we get $\vdash \exists x \varphi \rightarrow \exists y \varphi(y / x)$ (use that $\varphi=\varphi(y / x)(x / y)$ by the assumption on $y$). An application of Exercise 7 of Section 2.7 finishes the proof.
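Forming a variant by replacement (i) or (ii) is just renaming of a bound variable, and both side conditions hold automatically when the new variable is chosen fresh for the formula behind the quantifier. A small Python sketch (the tuple encoding and helper names are ours, for illustration only):

```python
# Renaming a bound variable: Qx psi becomes Qy psi(y/x).  Formulas are
# nested tuples ('R', x, y), ('not', p), ('or', p, q), ('and', p, q),
# ('exists', x, p), ('forall', x, p); variables are strings.

def free_vars(phi):
    """The variables occurring free in phi."""
    op = phi[0]
    if op == 'R':
        return {phi[1], phi[2]}
    if op == 'not':
        return free_vars(phi[1])
    if op in ('or', 'and'):
        return free_vars(phi[1]) | free_vars(phi[2])
    if op in ('exists', 'forall'):
        return free_vars(phi[2]) - {phi[1]}
    raise ValueError(op)

def all_vars(phi):
    """All variables occurring in phi, free or bound."""
    op = phi[0]
    if op == 'R':
        return {phi[1], phi[2]}
    if op == 'not':
        return all_vars(phi[1])
    if op in ('or', 'and'):
        return all_vars(phi[1]) | all_vars(phi[2])
    if op in ('exists', 'forall'):
        return {phi[1]} | all_vars(phi[2])
    raise ValueError(op)

def subst_var(phi, x, y):
    """phi(y/x): replace the free occurrences of variable x by variable y."""
    op = phi[0]
    if op == 'R':
        return ('R', y if phi[1] == x else phi[1], y if phi[2] == x else phi[2])
    if op == 'not':
        return ('not', subst_var(phi[1], x, y))
    if op in ('or', 'and'):
        return (op, subst_var(phi[1], x, y), subst_var(phi[2], x, y))
    if op in ('exists', 'forall'):
        if phi[1] == x:              # x is bound here; nothing free to replace
            return phi
        return (op, phi[1], subst_var(phi[2], x, y))
    raise ValueError(op)

def variant(phi, y):
    """For phi = Qx psi with y not occurring in psi, return Qy psi(y/x)."""
    q, x, body = phi
    # y fresh for psi implies both side conditions: y is free for x in psi
    # and y does not occur free in psi.
    assert q in ('exists', 'forall') and y not in all_vars(body)
    return (q, y, subst_var(body, x, y))
```

For example, the variant of $\exists x\, Rxz$ obtained with the fresh variable $y$ is $\exists y\, Ryz$, and $z$ stays free, as the remark after Lemma 3.3.4 requires.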
Definition. A formula in prenex form is a formula $Q_{1} x_{1} \ldots Q_{n} x_{n} \varphi$ where $x_{1}, \ldots, x_{n}$ are distinct variables, each $Q_{i} \in\{\exists, \forall\}$ and $\varphi$ is quantifier-free. We call $Q_{1} x_{1} \ldots Q_{n} x_{n}$ the prefix, and $\varphi$ the matrix of the formula. Note that a quantifier-free formula is in prenex form; this is the case $n=0$. We leave the proof of the next lemma as an exercise. Instead of "occurrence of ... as a subformula" we say "part ...". In this lemma $Q$ denotes a quantifier, that is, $Q \in\{\exists, \forall\}$, and $Q^{\prime}$ denotes the other quantifier: $\exists^{\prime}=\forall$ and $\forall^{\prime}=\exists$. Lemma 3.3.4. The following prenex transformations always change a formula into an equivalent formula: (1) replace the formula by one of its variants; (2) replace a part $\neg Q x \psi$ by $Q^{\prime} x \neg \psi$; (3) replace a part $(Q x \psi) \vee \theta$ by $Q x(\psi \vee \theta)$ where $x$ is not free in $\theta$; (4) replace a part $\psi \vee Q x \theta$ by $Q x(\psi \vee \theta)$ where $x$ is not free in $\psi$; (5) replace a part $(Q x \psi) \wedge \theta$ by $Q x(\psi \wedge \theta)$ where $x$ is not free in $\theta$; (6) replace a part $\psi \wedge Q x \theta$ by $Q x(\psi \wedge \theta)$ where $x$ is not free in $\psi$. Remark. Note that the free variables of a formula (those that occur free in the formula) do not change under prenex transformations. Theorem 3.3.5 (Prenex Form). Every formula can be changed into one in prenex form by a finite sequence of prenex transformations. In particular, each formula is equivalent to one in prenex form. Proof. By induction on the number of logical symbols. Atomic formulas are already in prenex form. To simplify notation, write $\varphi \Longrightarrow_{\mathrm{pr}} \psi$ to indicate that $\psi$ can be obtained from $\varphi$ by a finite sequence of prenex transformations. 
Assume inductively that $$ \begin{aligned} & \varphi_{1} \Longrightarrow_{\mathrm{pr}} Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1} \\ & \varphi_{2} \Longrightarrow_{\mathrm{pr}} Q_{m+1} y_{1} \ldots Q_{m+n} y_{n} \psi_{2}, \end{aligned} $$ where $Q_{1}, \ldots, Q_{m}, \ldots, Q_{m+n} \in\{\exists, \forall\}$, $x_{1}, \ldots, x_{m}$ are distinct, $y_{1}, \ldots, y_{n}$ are distinct, and $\psi_{1}$ and $\psi_{2}$ are quantifier-free. Then for $\varphi:=\neg \varphi_{1}$, we have $$ \varphi \Longrightarrow_{\mathrm{pr}} \neg Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1} . $$ Applying $m$ prenex transformations of type (2) we get $$ \neg Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1} \Longrightarrow_{\mathrm{pr}} Q_{1}^{\prime} x_{1} \ldots Q_{m}^{\prime} x_{m} \neg \psi_{1}, $$ hence $\varphi \Longrightarrow_{\mathrm{pr}} Q_{1}^{\prime} x_{1} \ldots Q_{m}^{\prime} x_{m} \neg \psi_{1}$. Next, let $\varphi:=\varphi_{1} \vee \varphi_{2}$. The assumptions above yield $$ \varphi \Longrightarrow_{\mathrm{pr}}\left(Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1}\right) \vee\left(Q_{m+1} y_{1} \ldots Q_{m+n} y_{n} \psi_{2}\right) $$ Replacing first $Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1}$ by a variant we may assume that $$ \left\{x_{1}, \ldots, x_{m}\right\} \cap\left\{y_{1}, \ldots, y_{n}\right\}=\emptyset, $$ and that no $x_{i}$ occurs free in $\psi_{2}$. Next replace $Q_{m+1} y_{1} \ldots Q_{m+n} y_{n} \psi_{2}$ by a variant to arrange that in addition no $y_{j}$ occurs free in $\psi_{1}$. Applying $m+n$ times prenex transformations of types (3) and (4) we obtain $$ \begin{array}{r} \left(Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1}\right) \vee\left(Q_{m+1} y_{1} \ldots Q_{m+n} y_{n} \psi_{2}\right) \Longrightarrow_{\mathrm{pr}} \\ Q_{1} x_{1} \ldots Q_{m} x_{m} Q_{m+1} y_{1} \ldots Q_{m+n} y_{n}\left(\psi_{1} \vee \psi_{2}\right) . \end{array} $$ Hence $\varphi \Longrightarrow_{\mathrm{pr}} Q_{1} x_{1} \ldots Q_{m} x_{m} Q_{m+1} y_{1} \ldots Q_{m+n} y_{n}\left(\psi_{1} \vee \psi_{2}\right)$.
Likewise, to deal with $\varphi_{1} \wedge \varphi_{2}$, we apply prenex transformations of types (5) and (6). Next, let $\varphi:=\exists x \varphi_{1}$. Applying prenex transformations of type (1) we can assume $x_{1}, \ldots, x_{m}$ differ from $x$. Then $\varphi \Longrightarrow_{\mathrm{pr}} \exists x Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1}$, and $\exists x Q_{1} x_{1} \ldots Q_{m} x_{m} \psi_{1}$ is in prenex form. The case $\varphi:=\forall x \varphi_{1}$ is similar. We finish this section with results on equalities. Note that by Corollary 2.1.7, the result of replacing an occurrence of an $L$-term $\tau$ in $t$ by an $L$-term $\tau^{\prime}$ is an $L$-term $t^{\prime}$. Proposition 3.3.6. Let $\tau$ and $\tau^{\prime}$ be $L$-terms such that $\Sigma \vdash \tau=\tau^{\prime}$, and let $t^{\prime}$ be the result of replacing an occurrence of $\tau$ in $t$ by $\tau^{\prime}$. Then $\Sigma \vdash t=t^{\prime}$. Proof. We proceed by induction on terms. First note that if $t=\tau$, then $t^{\prime}=\tau^{\prime}$. This fact takes care of the case that $t$ is a variable. Suppose $t=F t_{1} \ldots t_{n}$ where $F \in L^{f}$ is $n$-ary and $t_{1}, \ldots, t_{n}$ are $L$-terms, and assume $t \neq \tau$. Using the facts on admissible words at the end of Section 2.1, including Exercise 5, we see that $t^{\prime}=F t_{1}^{\prime} \ldots t_{n}^{\prime}$ where for some $i \in\{1, \ldots, n\}$ we have $t_{j}=t_{j}^{\prime}$ for all $j \neq i$, $j \in\{1, \ldots, n\}$, and $t_{i}^{\prime}$ is obtained from $t_{i}$ by replacing an occurrence of $\tau$ in $t_{i}$ by $\tau^{\prime}$. Inductively we can assume that $\Sigma \vdash t_{1}=t_{1}^{\prime}, \ldots, \Sigma \vdash t_{n}=t_{n}^{\prime}$, so by part (5) of Lemma 3.2.3 we have $\Sigma \vdash t=t^{\prime}$. An occurrence of an $L$-term $\tau$ in $\varphi$ is said to be proper if it is not an occurrence immediately following a quantifier symbol.
So if $\tau$ is not a variable, then any occurrence of $\tau$ in any formula is proper. If $\tau$ is the variable $x$, then the second symbol in $\exists x \varphi$ is not a proper occurrence of $\tau$ in $\exists x \varphi$. Proposition 3.3.7 (Equality Theorem). Let $\tau$ and $\tau^{\prime}$ be L-terms such that $\Sigma \vdash \tau=\tau^{\prime}$. Let $\varphi^{\prime}$ be the result of replacing a proper occurrence of $\tau$ in $\varphi$ by $\tau^{\prime}$. Then $\varphi^{\prime}$ is an L-formula and $\Sigma \vdash \varphi \leftrightarrow \varphi^{\prime}$. Proof. For atomic $\varphi$, argue as in the proof of Proposition 3.3.6. Next, proceed by induction on formulas, using the Equivalence Theorem. Exercises. For exercise (3) below, recall from Section 2.5 the notions of existential formula and universal formula. The result of exercise (4) is used in later chapters. (1) Let $P$ be a unary relation symbol, $Q$ be a binary relation symbol, and $x, y$ distinct variables. Use prenex transformations to put $$ \forall x \exists y(P(x) \wedge Q(x, y)) \rightarrow \exists x \forall y(Q(x, y) \rightarrow P(y)) $$ into prenex form. (2) Let $\left(\varphi_{i}\right)_{i \in I}$ be a family of formulas with finite index set $I$. Then $$ \vdash\left(\exists x \bigvee_{i} \varphi_{i}\right) \longleftrightarrow\left(\bigvee_{i} \exists x \varphi_{i}\right), \quad \vdash\left(\forall x \bigwedge_{i} \varphi_{i}\right) \longleftrightarrow\left(\bigwedge_{i} \forall x \varphi_{i}\right) $$ (3) If $\varphi_{1}\left(x_{1}, \ldots, x_{m}\right)$ and $\varphi_{2}\left(x_{1}, \ldots, x_{m}\right)$ are existential formulas, then $$ \left(\varphi_{1} \vee \varphi_{2}\right)\left(x_{1}, \ldots, x_{m}\right), \quad\left(\varphi_{1} \wedge \varphi_{2}\right)\left(x_{1}, \ldots, x_{m}\right) $$ are equivalent to existential formulas $\varphi_{12}^{\vee}\left(x_{1}, \ldots, x_{m}\right)$ and $\varphi_{12}^{\wedge}\left(x_{1}, \ldots, x_{m}\right)$. 
The same holds with "existential" replaced by "universal". (4) A formula is said to be unnested if each atomic subformula has the form $R x_{1} \ldots x_{m}$ with $m$-ary $R \in L^{\mathrm{r}} \cup\{\top, \perp,=\}$ and distinct variables $x_{1}, \ldots, x_{m}$, or the form $F x_{1} \ldots x_{n}=x_{n+1}$ with $n$-ary $F \in L^{\mathrm{f}}$ and distinct variables $x_{1}, \ldots, x_{n+1}$. (This allows $\top$ and $\perp$ as atomic subformulas of unnested formulas.) Then: (i) for each term $t\left(x_{1}, \ldots, x_{m}\right)$ and variable $y \notin\left\{x_{1}, \ldots, x_{m}\right\}$ the formula $$ t\left(x_{1}, \ldots, x_{m}\right)=y $$ is equivalent to an unnested existential formula $\theta_{1}\left(x_{1}, \ldots, x_{m}, y\right)$, and also to an unnested universal formula $\theta_{2}\left(x_{1}, \ldots, x_{m}, y\right)$. (ii) each atomic formula $\varphi\left(y_{1}, \ldots, y_{n}\right)$ is equivalent to an unnested existential formula $\varphi_{1}\left(y_{1}, \ldots, y_{n}\right)$, and also to an unnested universal formula $\varphi_{2}\left(y_{1}, \ldots, y_{n}\right)$. (iii) each formula $\varphi\left(y_{1}, \ldots, y_{n}\right)$ is equivalent to an unnested formula $\varphi^{u}\left(y_{1}, \ldots, y_{n}\right)$. ### Equational Classes and Universal Algebra The term-structure $\mathcal{A}_{\Sigma}$ introduced in the proof of the Completeness Theorem also plays a role in what is called universal algebra. This is a general setting for constructing mathematical objects by generators and relations. Free groups, tensor products of various kinds, polynomial rings, and so on, are all special cases of a single construction in universal algebra. In this section we fix a language $L$ that has only function symbols, including at least one constant symbol. So $L$ has no relation symbols. Instead of "$L$-structure" we say "$L$-algebra", and $\mathcal{A}, \mathcal{B}$ denote $L$-algebras.
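As a concrete illustration of $L$-algebras and of how a variable-free $L$-term is evaluated in one, here is a minimal Python sketch (our own, not notation from the text; the names `eval_term`, `strings`, `naturals` and the tuple encoding of terms are hypothetical choices). It uses the language of monoids $\{1, \cdot\}$ augmented by two constant symbols $a, b$, and evaluates one term in two different algebras for that language.

```python
# A minimal sketch of variable-free L-terms and their evaluation t |-> t^B
# in an L-algebra B, for L = {1, ., a, b}: the language of monoids
# augmented by constant symbols a and b. (Encoding is our own choice.)

def eval_term(t, algebra):
    """Evaluate a variable-free term in an L-algebra.

    A term is a constant symbol (a string) or a tuple ('mul', t1, t2);
    an algebra is a dict interpreting each symbol of L."""
    if isinstance(t, str):
        return algebra[t]          # interpretation of a constant symbol
    op, *args = t
    return algebra[op](*(eval_term(s, algebra) for s in args))

# Two L-algebras: the monoid of words on {a, b} under concatenation ...
strings = {'1': '', 'mul': lambda u, v: u + v, 'a': 'a', 'b': 'b'}
# ... and the additive monoid (N; 0, +), with a, b interpreted as 2, 3.
naturals = {'1': 0, 'mul': lambda u, v: u + v, 'a': 2, 'b': 3}

t = ('mul', 'a', ('mul', 'b', 'a'))     # the term a.(b.a)
print(eval_term(t, strings))            # 'aba'
print(eval_term(t, naturals))           # 7
```

The same evaluation map, applied to equivalence classes of variable-free terms rather than to terms themselves, is what makes the term algebra introduced below initial: evaluating in $\mathcal{B}$ is constant on classes $[t]$ whenever $\mathcal{B}$ satisfies the identities of $\Sigma$.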
A substructure of $\mathcal{A}$ is also called a subalgebra of $\mathcal{A}$, and a quotient algebra of $\mathcal{A}$ is an $L$-algebra $\mathcal{A} / \sim$ where $\sim$ is a congruence on $\mathcal{A}$. We call $\mathcal{A}$ trivial if $|A|=1$. There is up to isomorphism exactly one trivial $L$-algebra. An $L$-identity is an $L$-sentence $$ \forall \vec{x}\left(s_{1}(\vec{x})=t_{1}(\vec{x}) \wedge \cdots \wedge s_{n}(\vec{x})=t_{n}(\vec{x})\right), \quad \vec{x}=\left(x_{1}, \ldots, x_{m}\right) $$ where $x_{1}, \ldots, x_{m}$ are distinct variables and $\forall \vec{x}$ abbreviates $\forall x_{1} \ldots \forall x_{m}$, and where $s_{1}, t_{1}, \ldots, s_{n}, t_{n}$ are $L$-terms. Given a set $\Sigma$ of $L$-identities we define a $\Sigma$-algebra to be an $L$-algebra that satisfies all identities in $\Sigma$; in other words, a $\Sigma$-algebra is the same as a model of $\Sigma$. To such a $\Sigma$ we associate the class $\operatorname{Mod}(\Sigma)$ of all $\Sigma$-algebras. A class $\mathcal{C}$ of $L$-algebras is said to be equational if there is a set $\Sigma$ of $L$-identities such that $\mathcal{C}=\operatorname{Mod}(\Sigma)$. Examples. With $L=L_{\mathrm{Gr}}$, $\mathrm{Gr}$ is a set of $L$-identities, and $\operatorname{Mod}(\mathrm{Gr})$, the class of groups, is the corresponding equational class of $L$-algebras. With $L=L_{\text {Ring }}$, Ring is a set of $L$-identities, and $\operatorname{Mod}(\mathrm{Ring})$, the class of rings, is the corresponding equational class of $L$-algebras. If one adds to Ring the identity $\forall x \forall y(x y=y x)$ expressing the commutative law, then the corresponding class is the class of commutative rings. Theorem 3.4.1 (G. Birkhoff). Let $\mathcal{C}$ be a class of $L$-algebras. Then the class $\mathcal{C}$ is equational if and only if the following conditions are satisfied: (1) closure under isomorphism: if $\mathcal{A} \in \mathcal{C}$ and $\mathcal{A} \cong \mathcal{B}$, then $\mathcal{B} \in \mathcal{C}$.
(2) the trivial $L$-algebra belongs to $\mathcal{C}$; (3) every subalgebra of any algebra in $\mathcal{C}$ belongs to $\mathcal{C}$; (4) every quotient algebra of any algebra in $\mathcal{C}$ belongs to $\mathcal{C}$; (5) the product of any family $\left(\mathcal{A}_{i}\right)$ of algebras in $\mathcal{C}$ belongs to $\mathcal{C}$. It is easy to see that if $\mathcal{C}$ is equational, then conditions (1)-(5) are satisfied. For (3) and (4) one can also appeal to Exercises 8 and 10 of Section 2.5. Towards a proof of the converse, we need some universal-algebraic considerations that are of interest beyond the connection to Birkhoff's theorem. For the rest of this section we fix a set $\Sigma$ of $L$-identities. Associated to $\Sigma$ is the term algebra $\mathcal{A}_{\Sigma}$ whose elements are the equivalence classes $[t]$ of variable-free $L$-terms $t$, where two such terms $s$ and $t$ are equivalent iff $\Sigma \vdash s=t$. Lemma 3.4.2. $\mathcal{A}_{\Sigma}$ is a $\Sigma$-algebra. Proof. Consider an identity $$ \forall \vec{x}\left(s_{1}(\vec{x})=t_{1}(\vec{x}) \wedge \cdots \wedge s_{n}(\vec{x})=t_{n}(\vec{x})\right), \quad \vec{x}=\left(x_{1}, \ldots, x_{m}\right) $$ in $\Sigma$, let $j \in\{1, \ldots, n\}$ and put $s=s_{j}$ and $t=t_{j}$. Let $a_{1}, \ldots, a_{m} \in A_{\Sigma}$ and put $\mathcal{A}=\mathcal{A}_{\Sigma}$. It suffices to show that then $s\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}=t\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}$. Take variable-free $L$-terms $\alpha_{1}, \ldots, \alpha_{m}$ such that $a_{1}=\left[\alpha_{1}\right], \ldots, a_{m}=\left[\alpha_{m}\right]$.
Then by part (1) of Corollary 3.2.5 we have $a_{1}=\alpha_{1}^{\mathcal{A}}, \ldots, a_{m}=\alpha_{m}^{\mathcal{A}}$, so $$ s\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}=s\left(\alpha_{1}, \ldots, \alpha_{m}\right)^{\mathcal{A}}, \quad t\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}=t\left(\alpha_{1}, \ldots, \alpha_{m}\right)^{\mathcal{A}} $$ by Lemma 2.6.1. Also, by part (1) of Corollary 3.2.5, $$ s\left(\alpha_{1}, \ldots, \alpha_{m}\right)^{\mathcal{A}}=\left[s\left(\alpha_{1}, \ldots, \alpha_{m}\right)\right], \quad t\left(\alpha_{1}, \ldots, \alpha_{m}\right)^{\mathcal{A}}=\left[t\left(\alpha_{1}, \ldots, \alpha_{m}\right)\right] . $$ Now $\Sigma \vdash s\left(\alpha_{1}, \ldots, \alpha_{m}\right)=t\left(\alpha_{1}, \ldots, \alpha_{m}\right)$, so $\left[s\left(\alpha_{1}, \ldots, \alpha_{m}\right)\right]=\left[t\left(\alpha_{1}, \ldots, \alpha_{m}\right)\right]$, and thus $s\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}=t\left(\underline{a}_{1}, \ldots, \underline{a}_{m}\right)^{\mathcal{A}}$, as desired. Actually, we are going to show that $\mathcal{A}_{\Sigma}$ is a so-called initial $\Sigma$-algebra. An initial $\Sigma$-algebra is a $\Sigma$-algebra $\mathcal{A}$ such that for any $\Sigma$-algebra $\mathcal{B}$ there is a unique homomorphism $\mathcal{A} \rightarrow \mathcal{B}$. For example, the trivial group is an initial Gr-algebra, and the ring of integers is an initial Ring-algebra. Suppose $\mathcal{A}$ and $\mathcal{B}$ are both initial $\Sigma$-algebras. Then there is a unique isomorphism $\mathcal{A} \rightarrow \mathcal{B}$. To see this, let $i$ and $j$ be the unique homomorphisms $\mathcal{A} \rightarrow \mathcal{B}$ and $\mathcal{B} \rightarrow \mathcal{A}$, respectively. 
Then we have homomorphisms $j \circ i: \mathcal{A} \rightarrow \mathcal{A}$ and $i \circ j: \mathcal{B} \rightarrow \mathcal{B}$, respectively, so necessarily $j \circ i=\operatorname{id}_{A}$ and $i \circ j=\operatorname{id}_{B}$, so $i$ and $j$ are isomorphisms. So if there is an initial $\Sigma$-algebra, it is unique up-to-unique-isomorphism. Lemma 3.4.3. $\mathcal{A}_{\Sigma}$ is an initial $\Sigma$-algebra. Proof. Let $\mathcal{B}$ be any $\Sigma$-algebra. Note that if $s, t \in \operatorname{Term}_{L}$ and $[s]=[t]$, then $\Sigma \vdash s=t$, so $s^{\mathcal{B}}=t^{\mathcal{B}}$. Thus we have a map $$ A_{\Sigma} \rightarrow B, \quad[t] \mapsto t^{\mathcal{B}} . $$ It is easy to check that this map is a homomorphism $\mathcal{A}_{\Sigma} \rightarrow \mathcal{B}$. By Exercise 4 in Section 2.4 it is the only such homomorphism. Free algebras. Let $I$ be an index set in what follows. Let $\mathcal{A}$ be a $\Sigma$-algebra and $\left(a_{i}\right)_{i \in I}$ an $I$-indexed family of elements of $A$. Then $\mathcal{A}$ is said to be a free $\Sigma$-algebra on $\left(a_{i}\right)$ if for every $\Sigma$-algebra $\mathcal{B}$ and $I$-indexed family $\left(b_{i}\right)$ of elements of $B$ there is exactly one homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ such that $h\left(a_{i}\right)=b_{i}$ for all $i \in I$. We also express this by "$\left(\mathcal{A},\left(a_{i}\right)\right)$ is a free $\Sigma$-algebra". Finally, $\mathcal{A}$ itself is sometimes referred to as a free $\Sigma$-algebra if there is a family $\left(a_{j}\right)_{j \in J}$ in $A$ such that $\left(\mathcal{A},\left(a_{j}\right)\right)$ is a free $\Sigma$-algebra. As an example, take $L=L_{\text {Ring }}$ and $\mathrm{cRi}:=\operatorname{Ring} \cup\{\forall x \forall y\, x y=y x\}$, where $x, y$ are distinct variables. So the cRi-algebras are just the commutative rings.
Let $\mathbb{Z}\left[X_{1}, \ldots, X_{n}\right]$ be the ring of polynomials in distinct indeterminates $X_{1}, \ldots, X_{n}$ over $\mathbb{Z}$. For any commutative ring $R$ and elements $b_{1}, \ldots, b_{n} \in R$ we have a unique ring homomorphism $\mathbb{Z}\left[X_{1}, \ldots, X_{n}\right] \rightarrow R$ that sends $X_{i}$ to $b_{i}$ for $i=1, \ldots, n$, namely the evaluation map (or substitution map) $$ \mathbb{Z}\left[X_{1}, \ldots, X_{n}\right] \rightarrow R, \quad f\left(X_{1}, \ldots, X_{n}\right) \mapsto f\left(b_{1}, \ldots, b_{n}\right) . $$ Thus $\mathbb{Z}\left[X_{1}, \ldots, X_{n}\right]$ is a free commutative ring on $\left(X_{i}\right)_{1 \leq i \leq n}$. For a simpler example, let $L=L_{\mathrm{Mo}}:=\{1, \cdot\} \subseteq L_{\mathrm{Gr}}$ be the language of monoids, and consider $$ \text { Mo }:=\{\forall x(1 \cdot x=x \wedge x \cdot 1=x), \quad \forall x \forall y \forall z((x y) z=x(y z))\}, $$ where $x, y, z$ are distinct variables. A monoid, or semigroup with identity, is a model $\mathcal{A}=(A ; 1, \cdot)$ of Mo, and we call $1 \in A$ the identity of the monoid $\mathcal{A}$, and $\cdot$ its product operation. Let $E^{*}$ be the set of words on an alphabet $E$, and consider $E^{*}$ as a monoid by taking the empty word as its identity and the concatenation operation $(v, w) \mapsto v w$ as its product operation. Then $E^{*}$ is a free monoid on the family $(e)_{e \in E}$ of words of length 1, because for any monoid $\mathcal{B}$ and elements $b_{e} \in B$ (for $e \in E$) we have a unique monoid homomorphism $E^{*} \rightarrow \mathcal{B}$ that sends each $e \in E$ to $b_{e}$, namely, $$ e_{1} \ldots e_{n} \mapsto b_{e_{1}} \cdots b_{e_{n}} $$ Remark.
If $\mathcal{A}$ and $\mathcal{B}$ are both free $\Sigma$-algebras on $\left(a_{i}\right)$ and $\left(b_{i}\right)$ respectively, with the same index set $I$, and $g: \mathcal{A} \rightarrow \mathcal{B}$ and $h: \mathcal{B} \rightarrow \mathcal{A}$ are the unique homomorphisms such that $g\left(a_{i}\right)=b_{i}$ and $h\left(b_{i}\right)=a_{i}$ for all $i$, then $(h \circ g)\left(a_{i}\right)=a_{i}$ for all $i$, so $h \circ g=\operatorname{id}_{A}$, and likewise $g \circ h=\operatorname{id}_{B}$, so $g$ is an isomorphism with inverse $h$. Thus, given $I$, there is, up to unique isomorphism preserving $I$-indexed families, at most one free $\Sigma$-algebra on an $I$-indexed family of its elements. We shall now construct free $\Sigma$-algebras as initial algebras by working in an extended language. Let $L_{I}:=L \cup\left\{c_{i}: i \in I\right\}$ be the language $L$ augmented by new constant symbols $c_{i}$ for $i \in I$, where new means that $c_{i} \notin L$ for $i \in I$ and $c_{i} \neq c_{j}$ for distinct $i, j \in I$. So an $L_{I}$-algebra $\left(\mathcal{B},\left(b_{i}\right)\right)$ is just an $L$-algebra $\mathcal{B}$ equipped with an $I$-indexed family $\left(b_{i}\right)$ of elements of $B$. Let $\Sigma_{I}$ be $\Sigma$ viewed as a set of $L_{I}$-identities. Then a free $\Sigma$-algebra on an $I$-indexed family of its elements is just an initial $\Sigma_{I}$-algebra. In particular, the $\Sigma_{I}$-algebra $\mathcal{A}_{\Sigma_{I}}$ is a free $\Sigma$-algebra on $\left(\left[c_{i}\right]\right)$. Thus, up to unique isomorphism of $\Sigma_{I}$-algebras, there is a unique free $\Sigma$-algebra on an $I$-indexed family of its elements. Let $\left(\mathcal{A},\left(a_{i}\right)_{i \in I}\right)$ be a free $\Sigma$-algebra. Then $\mathcal{A}$ is generated by $\left(a_{i}\right)$. To see why, let $\mathcal{B}$ be the subalgebra of $\mathcal{A}$ generated by $\left(a_{i}\right)$.
Then we have a unique homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ such that $h\left(a_{i}\right)=a_{i}$ for all $i \in I$, and then the composition $$ \mathcal{A} \rightarrow \mathcal{B} \rightarrow \mathcal{A} $$ is necessarily $\mathrm{id}_{\mathcal{A}}$, so $\mathcal{B}=\mathcal{A}$. Let $\mathcal{B}$ be any $\Sigma$-algebra, and take any family $\left(b_{j}\right)_{j \in J}$ in $B$ that generates $\mathcal{B}$. Take a free $\Sigma$-algebra $\left(\mathcal{A},\left(a_{j}\right)_{j \in J}\right)$, and take the unique homomorphism $h:\left(\mathcal{A},\left(a_{j}\right)\right) \rightarrow\left(\mathcal{B},\left(b_{j}\right)\right)$. Then $h\left(t^{\mathcal{A}}\left(a_{j_{1}}, \ldots, a_{j_{n}}\right)\right)=t^{\mathcal{B}}\left(b_{j_{1}}, \ldots, b_{j_{n}}\right)$ for all $L$-terms $t\left(x_{1}, \ldots, x_{n}\right)$ and $j_{1}, \ldots, j_{n} \in J$, so $h(A)=B$, and thus $h$ induces an isomorphism $\mathcal{A} / \sim_{h} \cong \mathcal{B}$. We have shown: Every $\Sigma$-algebra is isomorphic to a quotient of a free $\Sigma$-algebra. This fact can sometimes be used to reduce problems on $\Sigma$-algebras to the case of free $\Sigma$-algebras; see the next subsection for an example. Proof of Birkhoff's theorem. Let us say that a class $\mathcal{C}$ of $L$-algebras is closed if it has properties (1)-(5) listed in Theorem 3.4.1. Assume $\mathcal{C}$ is closed; we have to show that then $\mathcal{C}$ is equational. Indeed, let $\Sigma(\mathcal{C})$ be the set of $L$-identities $$ \forall \vec{x}(s(\vec{x})=t(\vec{x})) $$ that are true in all algebras of $\mathcal{C}$. It is clear that each algebra in $\mathcal{C}$ is a $\Sigma(\mathcal{C})$-algebra, and it remains to show that every $\Sigma(\mathcal{C})$-algebra belongs to $\mathcal{C}$. Here is the key fact from which this will follow: Claim. If $\mathcal{A}$ is an initial $\Sigma(\mathcal{C})$-algebra, then $\mathcal{A} \in \mathcal{C}$.
To prove this claim we take $\mathcal{A}:=\mathcal{A}_{\Sigma(\mathcal{C})}$. For $s, t \in \operatorname{Term}_{L}$ such that $s=t$ does not belong to $\Sigma(\mathcal{C})$ we pick $\mathcal{B}_{s, t} \in \mathcal{C}$ such that $\mathcal{B}_{s, t} \models s \neq t$, and we let $h_{s, t}: \mathcal{A} \rightarrow \mathcal{B}_{s, t}$ be the unique homomorphism, so $h_{s, t}([s]) \neq h_{s, t}([t])$. Let $\mathcal{B}:=\prod \mathcal{B}_{s, t}$ where the product is over all pairs $(s, t)$ as above, and let $h: \mathcal{A} \rightarrow \mathcal{B}$ be the homomorphism given by $h(a)=\left(h_{s, t}(a)\right)$. Note that $\mathcal{B} \in \mathcal{C}$. Then $h$ is injective. To see why, let $s, t \in \operatorname{Term}_{L}$ be such that $[s] \neq[t]$ in $A=A_{\Sigma(\mathcal{C})}$. Then $s=t$ does not belong to $\Sigma(\mathcal{C})$, so $h_{s, t}([s]) \neq h_{s, t}([t])$, and thus $h([s]) \neq h([t])$. This injectivity gives $\mathcal{A} \cong h(\mathcal{A}) \subseteq \mathcal{B}$, so $\mathcal{A} \in \mathcal{C}$. This finishes the proof of the claim. Now, every $\Sigma(\mathcal{C})$-algebra is isomorphic to a quotient of a free $\Sigma(\mathcal{C})$-algebra, so it remains to show that free $\Sigma(\mathcal{C})$-algebras belong to $\mathcal{C}$. Let $\mathcal{A}$ be a free $\Sigma(\mathcal{C})$-algebra on $\left(a_{i}\right)_{i \in I}$. Let $\mathcal{C}_{I}$ be the class of all $L_{I}$-algebras $\left(\mathcal{B},\left(b_{i}\right)\right)$ with $\mathcal{B} \in \mathcal{C}$. It is clear that $\mathcal{C}_{I}$ is closed as a class of $L_{I}$-algebras. Now, $\left(\mathcal{A},\left(a_{i}\right)\right)$ is easily seen to be an initial $\Sigma\left(\mathcal{C}_{I}\right)$-algebra. By the claim above, applied to $\mathcal{C}_{I}$ in place of $\mathcal{C}$, we obtain $\left(\mathcal{A},\left(a_{i}\right)\right) \in \mathcal{C}_{I}$, and thus $\mathcal{A} \in \mathcal{C}$. Generators and relations. Let $G$ be any set.
Then we have a $\Sigma$-algebra $\mathcal{A}$ with a map $\iota: G \rightarrow A$ such that for any $\Sigma$-algebra $\mathcal{B}$ and any map $j: G \rightarrow B$ there is a unique homomorphism $h: \mathcal{A} \rightarrow \mathcal{B}$ such that $h \circ \iota=j$; in other words, $\mathcal{A}$ is free as a $\Sigma$-algebra on $(\iota g)_{g \in G}$. Note that if $\left(\mathcal{A}^{\prime}, \iota^{\prime}\right)$ (with $\iota^{\prime}: G \rightarrow A^{\prime}$) has the same universal property as $(\mathcal{A}, \iota)$, then the unique homomorphism $h: \mathcal{A} \rightarrow \mathcal{A}^{\prime}$ such that $h \circ \iota=\iota^{\prime}$ is an isomorphism, so this universal property determines the pair $(\mathcal{A}, \iota)$ up-to-unique-isomorphism. So there is no harm in calling $(\mathcal{A}, \iota)$ the free $\Sigma$-algebra on $G$. Note that $\mathcal{A}$ is generated as an $L$-algebra by $\iota G$. Here is a particular way of constructing the free $\Sigma$-algebra on $G$. Take the language $L_{G}:=L \cup G$ (disjoint union) with the elements of $G$ as constant symbols. Let $\Sigma(G)$ be $\Sigma$ considered as a set of $L_{G}$-identities. Then $\mathcal{A}:=\mathcal{A}_{\Sigma(G)}$ as a $\Sigma$-algebra with the map $g \mapsto[g]: G \rightarrow A_{\Sigma(G)}$ is the free $\Sigma$-algebra on $G$. Next, let $R$ be a set of sentences $s(\vec{g})=t(\vec{g})$ where $s\left(x_{1}, \ldots, x_{n}\right)$ and $t\left(x_{1}, \ldots, x_{n}\right)$ are $L$-terms and $\vec{g}=\left(g_{1}, \ldots, g_{n}\right) \in G^{n}$ (with $n$ depending on the sentence). We wish to construct the $\Sigma$-algebra generated by $G$ with $R$ as set of relations. ${ }^{1}$ This object is described up-to-isomorphism in the next lemma. Lemma 3.4.4.
There is a $\Sigma$-algebra $\mathcal{A}(G, R)$ with a map $i: G \rightarrow A(G, R)$ such that: (1) $\mathcal{A}(G, R) \models s(i \vec{g})=t(i \vec{g})$ for all $s(\vec{g})=t(\vec{g})$ in $R$; (2) for any $\Sigma$-algebra $\mathcal{B}$ and map $j: G \rightarrow B$ with $\mathcal{B} \models s(j \vec{g})=t(j \vec{g})$ for all $s(\vec{g})=t(\vec{g})$ in $R$, there is a unique homomorphism $h: \mathcal{A}(G, R) \rightarrow \mathcal{B}$ such that $h \circ i=j$. Proof. Let $\Sigma(R):=\Sigma \cup R$, viewed as a set of $L_{G}$-sentences, let $\mathcal{A}(G, R):=\mathcal{A}_{\Sigma(R)}$, and define $i: G \rightarrow A(G, R)$ by $i(g)=[g]$. As before one sees that the universal property of the lemma is satisfied. ${ }^{1}$ The use of the term "relations" here has nothing to do with $n$-ary relations on sets. ## Chapter 4 ## Some Model Theory In this chapter we first derive the Löwenheim-Skolem Theorem. Next we develop some basic methods related to proving completeness of a given set of axioms: Vaught's Test, back-and-forth, quantifier elimination. Each of these methods, when successful, achieves a lot more than just establishing completeness. ### Löwenheim-Skolem; Vaught's Test Below, the cardinality of a structure is defined to be the cardinality of its underlying set. In this section we have the same conventions concerning $L, \mathcal{A}, t$, $\varphi, \psi, \theta, \sigma$ and $\Sigma$ as in the beginning of Section 2.6, unless specified otherwise. Theorem 4.1.1 (Countable Löwenheim-Skolem Theorem). Suppose $L$ is countable and $\Sigma$ has a model. Then $\Sigma$ has a countable model. Proof. Since Var is countable, the hypothesis that $L$ is countable yields that the set of $L$-sentences is countable. Hence the language $$ L \cup\left\{c_{\sigma}: \Sigma \vdash \sigma \text { where } \sigma \text { is an } L \text {-sentence of the form } \exists x \varphi(x)\right\} $$ is countable, that is, adding witnesses keeps the language countable.
The union of countably many countable sets is countable, hence the set $L_{\infty}$ constructed in the proof of the Completeness Theorem is countable. It follows that there are only countably many variable-free $L_{\infty}$-terms, hence $\mathcal{A}_{\Sigma_{\infty}}$ is countable, and thus its reduct $\left.\mathcal{A}_{\Sigma_{\infty}}\right|_{L}$ is a countable model of $\Sigma$. Remark. The above proof is the first time that we used the countability of the set Var $=\left\{\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots\right\}$ of variables. As promised in Section 2.4, we shall now indicate why the Countable Löwenheim-Skolem Theorem goes through without assuming that Var is countable. Suppose that Var is uncountable. Take a countably infinite subset $\operatorname{Var}^{\prime} \subseteq$ Var. Then each sentence is equivalent to one whose variables are all from $\operatorname{Var}^{\prime}$. By replacing each sentence in $\Sigma$ by an equivalent one all of whose variables are from $\operatorname{Var}^{\prime}$, we obtain a countable set $\Sigma^{\prime}$ of sentences such that $\Sigma$ and $\Sigma^{\prime}$ have the same models. As in the proof above, we obtain a countable model of $\Sigma^{\prime}$, working throughout in the setting where only variables from $\operatorname{Var}^{\prime}$ are used in terms and formulas. This model is a countable model of $\Sigma$. The following test can be useful in showing that a set of axioms $\Sigma$ is complete. Proposition 4.1.2 (Vaught's Test). Let $L$ be countable, and suppose $\Sigma$ has a model, and that all countable models of $\Sigma$ are isomorphic. Then $\Sigma$ is complete. Proof. Suppose $\Sigma$ is not complete. Then there is $\sigma$ such that $\Sigma \nvdash \sigma$ and $\Sigma \nvdash \neg \sigma$.
Hence by the Löwenheim-Skolem Theorem there is a countable model $\mathcal{A}$ of $\Sigma$ such that $\mathcal{A} \not \models \sigma$, and there is a countable model $\mathcal{B}$ of $\Sigma$ such that $\mathcal{B} \not \models \neg \sigma$. We have $\mathcal{A} \cong \mathcal{B}, \mathcal{A} \models \neg \sigma$ and $\mathcal{B} \models \sigma$, contradiction. Example. Let $L=\emptyset$, so the $L$-structures are just the non-empty sets. Let $\Sigma=\left\{\sigma_{1}, \sigma_{2}, \ldots\right\}$ where $$ \sigma_{n}=\exists x_{1} \ldots x_{n} \bigwedge_{1 \leq i<j \leq n} x_{i} \neq x_{j} $$ The models of $\Sigma$ are exactly the infinite sets. All countable models of $\Sigma$ are countably infinite and hence isomorphic to $\mathbf{N}$. Thus by Vaught's Test $\Sigma$ is complete. In this example the hypothesis of Vaught's Test is trivially satisfied. In other cases it may require work to check this hypothesis. One general method in model theory, Back-and-Forth, is often used to verify the hypothesis of Vaught's Test. The next theorem is due to Cantor, but the proof we give stems from Hausdorff and shows Back-and-Forth in action. To formulate that theorem we recall from Section 2.6 that a totally ordered set is a structure $(A ;<)$ for the language $L_{\mathrm{O}}$ that satisfies the following axioms (where $x, y, z$ are distinct variables): $$ \forall x(x \nless x), \quad \forall x y z((x<y \wedge y<z) \rightarrow x<z), \quad \forall x y(x<y \vee x=y \vee y<x) . $$ A totally ordered set is said to be dense if it satisfies in addition the axiom $$ \forall x y(x<y \rightarrow \exists z(x<z<y)) $$ and it is said to have no endpoints if it satisfies the axiom $$ \forall x \exists y z(y<x<z) . $$ So $(\mathbf{Q} ;<)$ and $(\mathbf{R} ;<)$ are dense totally ordered sets without endpoints. Theorem 4.1.3 (Cantor). Any two countable dense totally ordered sets without endpoints are isomorphic. Proof.
Let $(A ;<)$ and $(B ;<)$ be countable dense totally ordered sets without endpoints. So $A=\left\{a_{n}: n \in \mathbf{N}\right\}$ and $B=\left\{b_{n}: n \in \mathbf{N}\right\}$. We define by recursion a sequence $\left(\alpha_{n}\right)$ in $A$ and a sequence $\left(\beta_{n}\right)$ in $B$ : Put $\alpha_{0}:=a_{0}$ and $\beta_{0}:=b_{0}$. Let $n>0$, and suppose we have distinct $\alpha_{0}, \ldots, \alpha_{n-1}$ in $A$ and distinct $\beta_{0}, \ldots, \beta_{n-1}$ in $B$ such that for all $i, j<n$ we have $\alpha_{i}<\alpha_{j} \Longleftrightarrow \beta_{i}<\beta_{j}$. Then we define $\alpha_{n} \in A$ and $\beta_{n} \in B$ as follows: Case 1: $n$ is even. (Here we go forth.) First take $k \in \mathbf{N}$ minimal such that $a_{k} \notin\left\{\alpha_{0}, \ldots, \alpha_{n-1}\right\}$; then take $l \in \mathbf{N}$ minimal such that $b_{l}$ is situated with respect to $\beta_{0}, \ldots, \beta_{n-1}$ as $a_{k}$ is situated with respect to $\alpha_{0}, \ldots, \alpha_{n-1}$, that is, $l$ is minimal such that for $i=0, \ldots, n-1$ we have: $\alpha_{i}<a_{k} \Longleftrightarrow \beta_{i}<b_{l}$, and $\alpha_{i}>a_{k} \Longleftrightarrow \beta_{i}>b_{l}$. (The reader should check that such an $l$ exists: that is where density and "no endpoints" come in); put $\alpha_{n}:=a_{k}$ and $\beta_{n}:=b_{l}$. Case 2: $n$ is odd. (Here we go back.) First take $l \in \mathbf{N}$ minimal such that $b_{l} \notin\left\{\beta_{0}, \ldots, \beta_{n-1}\right\}$; next take $k \in \mathbf{N}$ minimal such that $a_{k}$ is situated with respect to $\alpha_{0}, \ldots, \alpha_{n-1}$ as $b_{l}$ is situated with respect to $\beta_{0}, \ldots, \beta_{n-1}$, that is, $k$ is minimal such that for $i=0, \ldots, n-1$ we have: $\alpha_{i}<a_{k} \Longleftrightarrow \beta_{i}<b_{l}$, and $\alpha_{i}>a_{k} \Longleftrightarrow \beta_{i}>b_{l}$. Put $\beta_{n}:=b_{l}$ and $\alpha_{n}:=a_{k}$. 
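The recursion in Cases 1 and 2 can be simulated concretely. The Python sketch below is our own illustration (the names `rationals`, `take`, `same_position`, and `back_and_forth` are hypothetical): it runs finitely many back-and-forth steps between two enumerations of $(\mathbf{Q} ;<)$ and checks that the resulting finite partial map is order-preserving. Density and the absence of endpoints guarantee that each search for a suitably situated element terminates.

```python
from fractions import Fraction
from math import gcd

def rationals():
    # Enumerate Q: 0, then +-p/q in lowest terms by increasing p + q.
    yield Fraction(0)
    n = 2
    while True:
        for p in range(1, n):
            q = n - p
            if gcd(p, q) == 1:
                yield Fraction(p, q)
                yield Fraction(-p, q)
        n += 1

def take(gen, k, cache):
    # Lazily extend a cached enumeration so that cache[k] is defined.
    while len(cache) <= k:
        cache.append(next(gen))
    return cache[k]

def same_position(x, placed_x, y, placed_y):
    # Is y situated w.r.t. placed_y as x is situated w.r.t. placed_x?
    return all((u < x) == (v < y) and (u > x) == (v > y)
               for u, v in zip(placed_x, placed_y))

def back_and_forth(steps):
    # Two enumerations of Q: a_k and b_l (the map q -> 2q + 1 is a
    # bijection of Q, so genB also enumerates all of Q).
    gen_a, gen_b = rationals(), (2 * q + 1 for q in rationals())
    A, B = [], []                     # cached initial segments of a, b
    alphas, betas = [], []
    for n in range(steps):
        if n % 2 == 0:                # Case 1: go forth, place next a_k
            k = 0
            while take(gen_a, k, A) in alphas:
                k += 1
            l = 0
            while not same_position(A[k], alphas, take(gen_b, l, B), betas):
                l += 1
        else:                         # Case 2: go back, place next b_l
            l = 0
            while take(gen_b, l, B) in betas:
                l += 1
            k = 0
            while not same_position(B[l], betas, take(gen_a, k, A), alphas):
                k += 1
        alphas.append(A[k]); betas.append(B[l])
    return alphas, betas

alphas, betas = back_and_forth(20)
assert all((alphas[i] < alphas[j]) == (betas[i] < betas[j])
           for i in range(20) for j in range(20))
```

After `steps` rounds the finite map $\alpha_{n} \mapsto \beta_{n}$ is order-preserving, mirroring the invariant maintained in the proof; running ever more rounds exhausts both enumerations, which is exactly the point of alternating "forth" and "back".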
One proves easily by induction on $n$ that then $a_{n} \in\left\{\alpha_{0}, \ldots, \alpha_{2 n}\right\}$ and $b_{n} \in$ $\left\{\beta_{0}, \ldots, \beta_{2 n}\right\}$. Thus we have a bijection $\alpha_{n} \mapsto \beta_{n}: A \rightarrow B$, and this bijection is an isomorphism $(A ;<) \rightarrow(B ;<)$. Let $\Sigma$ be the set of axioms for dense totally ordered sets without endpoints as indicated before the statement of Cantor's theorem. Thus $\Sigma$ is a set of sentences in the language $L_{\mathrm{O}}$. By Vaught's Test we obtain from Cantor's theorem: Corollary 4.1.4. $\Sigma$ is complete. In the results below $\kappa$ is an infinite cardinal, construed as the set of all ordinals $\lambda<\kappa$ (as is usual in set theory). We have the following generalization of the Löwenheim-Skolem theorem. Theorem 4.1.5 (Generalized Löwenheim-Skolem Theorem). Suppose $|L| \leq \kappa$ and $\Sigma$ has an infinite model. Then $\Sigma$ has a model of cardinality $\kappa$. Proof. Let $\left\{c_{\lambda}\right\}_{\lambda<\kappa}$ be a family of $\kappa$ new constant symbols that are not in $L$ and are pairwise distinct (that is, $c_{\lambda} \neq c_{\mu}$ for $\lambda<\mu<\kappa$ ). Let $L^{\prime}=L \cup\left\{c_{\lambda}: \lambda<\kappa\right\}$ and let $\Sigma^{\prime}=\Sigma \cup\left\{c_{\lambda} \neq c_{\mu}: \lambda<\mu<\kappa\right\}$. We claim that $\Sigma^{\prime}$ has a model. To see this it suffices to show that, given any finite set $\Lambda \subseteq \kappa$, the set of $L^{\prime}$-sentences $$ \Sigma_{\Lambda}:=\Sigma \cup\left\{c_{\lambda} \neq c_{\mu}: \lambda, \mu \in \Lambda, \lambda \neq \mu\right\} $$ has a model. Take an infinite model $\mathcal{A}$ of $\Sigma$. We make an $L^{\prime}$-expansion $\mathcal{A}_{\Lambda}$ of $\mathcal{A}$ by interpreting distinct $c_{\lambda}$ 's with $\lambda \in \Lambda$ by distinct elements of $A$, and interpreting the $c_{\lambda}$ 's with $\lambda \notin \Lambda$ arbitrarily. 
Then $\mathcal{A}_{\Lambda}$ is a model of $\Sigma_{\Lambda}$. Note that $L^{\prime}$ also has size at most $\kappa$. The same arguments we used in proving the countable version of the Löwenheim-Skolem Theorem show that then $\Sigma^{\prime}$ has a model $\mathcal{B}^{\prime}=\left(\mathcal{B},\left(b_{\lambda}\right)_{\lambda<\kappa}\right)$ of cardinality at most $\kappa$. We have $b_{\lambda} \neq b_{\mu}$ for $\lambda<\mu<\kappa$, hence $\mathcal{B}$ is a model of $\Sigma$ of cardinality $\kappa$. The next proposition is Vaught's Test for arbitrary languages and cardinalities. Proposition 4.1.6. Suppose $L$ has size at most $\kappa, \Sigma$ has a model and all models of $\Sigma$ are infinite. Suppose also that any two models of $\Sigma$ of cardinality $\kappa$ are isomorphic. Then $\Sigma$ is complete. Proof. Let $\sigma$ be an $L$-sentence and suppose that $\Sigma \nvdash \sigma$ and $\Sigma \nvdash \neg \sigma$. We will derive a contradiction. First $\Sigma \nvdash \sigma$ means that $\Sigma \cup\{\neg \sigma\}$ has a model. Similarly $\Sigma \nvdash \neg \sigma$ means that $\Sigma \cup\{\sigma\}$ has a model. These models must be infinite since they are models of $\Sigma$, so by the Generalized Löwenheim-Skolem Theorem $\Sigma \cup\{\neg \sigma\}$ has a model $\mathcal{A}$ of cardinality $\kappa$, and $\Sigma \cup\{\sigma\}$ has a model $\mathcal{B}$ of cardinality $\kappa$. By assumption $\mathcal{A} \cong \mathcal{B}$, contradicting that $\mathcal{A} \models \neg \sigma$ and $\mathcal{B} \models \sigma$. We now discuss in detail an application of this generalized Vaught Test. Fix a field $F$. 
A vector space over $F$ is an abelian (additively written) group $V$ equipped with a scalar multiplication operation $$ (\lambda, v) \mapsto \lambda v: F \times V \longrightarrow V $$ such that for all $\lambda, \mu \in F$ and all $v, w \in V$,

(i) $(\lambda+\mu) v=\lambda v+\mu v$,
(ii) $\lambda(v+w)=\lambda v+\lambda w$,
(iii) $1 v=v$,
(iv) $(\lambda \mu) v=\lambda(\mu v)$.

Let $L_{F}$ be the language of vector spaces over $F$: it extends the language $L_{\mathrm{Ab}}=\{0,-,+\}$ of abelian groups with unary function symbols $f_{\lambda}$, one for each $\lambda \in F$; a vector space $V$ over $F$ is viewed as an $L_{F}$-structure by interpreting each $f_{\lambda}$ as the function $v \longmapsto \lambda v: V \rightarrow V$. One easily specifies a set $\Sigma_{F}$ of sentences whose models are exactly the vector spaces over $F$. Note that $\Sigma_{F}$ is not complete, since the trivial vector space satisfies $\forall x(x=0)$ but $F$ viewed as a vector space over $F$ does not. Moreover, if $F$ is finite, then we also have nontrivial finite vector spaces. From a model-theoretic perspective finite structures are somewhat exceptional, so we are going to restrict attention to infinite vector spaces over $F$. Let $x_{1}, x_{2}, \ldots$ be a sequence of distinct variables and put $$ \Sigma_{F}^{\infty}:=\Sigma_{F} \cup\left\{\exists x_{1} \ldots \exists x_{n} \bigwedge_{1 \leq i<j \leq n} x_{i} \neq x_{j}: n=2,3, \ldots\right\} . $$ So the models of $\Sigma_{F}^{\infty}$ are exactly the infinite vector spaces over $F$. Note that if $F$ itself is infinite then each non-trivial vector space over $F$ is infinite. We will need the following facts about vector spaces $V$ and $W$ over $F$. (Proofs can be found in many places.) Fact.
(a) $V$ has a basis $B$, that is, $B \subseteq V$, and for each vector $v \in V$ there is a unique family $\left(\lambda_{b}\right)_{b \in B}$ of scalars (elements of $F$) such that $\left\{b \in B: \lambda_{b} \neq 0\right\}$ is finite and $v=\sum_{b \in B} \lambda_{b} b$.
(b) Any two bases $B$ and $C$ of $V$ have the same cardinality.
(c) If $V$ has basis $B$ and $W$ has basis $C$, then any bijection $B \rightarrow C$ extends uniquely to an isomorphism $V \rightarrow W$.
(d) Let $B$ be a basis of $V$. Then $|V|=|B| \cdot|F|$ if $F$ or $B$ is infinite. If $F$ and $B$ are finite, then $|V|=|F|^{|B|}$.

Theorem 4.1.7. $\Sigma_{F}^{\infty}$ is complete.

Proof. Take an infinite cardinal $\kappa>|F|$. In particular, $L_{F}$ has size at most $\kappa$. Let $V$ be a vector space over $F$ of cardinality $\kappa$. Then a basis of $V$ must also have size $\kappa$ by property (d) above. Hence any two vector spaces over $F$ of cardinality $\kappa$ have bases of cardinality $\kappa$ and thus are isomorphic by property (c). It follows by the Generalized Vaught Test that $\Sigma_{F}^{\infty}$ is complete.

Remark. Theorem 4.1.7 and Exercise 3 imply for instance that if $F=\mathbf{R}$ then all non-trivial vector spaces over $F$ satisfy exactly the same sentences in $L_{F}$. With the generalized Vaught Test we can also prove that ACF(0) (whose models are the algebraically closed fields of characteristic 0) is complete. The proof is similar, with "transcendence bases" taking over the role of bases. The relevant definitions and facts are as follows. Let $K$ be a field with subfield $\mathbf{k}$. A subset $B$ of $K$ is said to be algebraically independent over $\mathbf{k}$ if for all distinct $b_{1}, \ldots, b_{n} \in B$ we have $p\left(b_{1}, \ldots, b_{n}\right) \neq 0$ for all nonzero polynomials $p\left(x_{1}, \ldots, x_{n}\right) \in \mathbf{k}\left[x_{1}, \ldots, x_{n}\right]$, where $x_{1}, \ldots, x_{n}$ are distinct variables.
A transcendence basis of $K$ over $\mathbf{k}$ is a set $B \subseteq K$ such that $B$ is algebraically independent over $\mathbf{k}$ and $K$ is algebraic over its subfield $\mathbf{k}(B)$. Fact. (a) $K$ has a transcendence basis over $\mathbf{k}$; (b) any two transcendence bases of $K$ over $\mathbf{k}$ have the same size; (c) If $K$ is algebraically closed with transcendence basis $B$ over $\mathbf{k}$ and $K^{\prime}$ is also an algebraically closed field extension of $\mathbf{k}$ with transcendence basis $B^{\prime}$ over $\mathbf{k}$, then any bijection $B \rightarrow B^{\prime}$ extends to an isomorphism $K \rightarrow K^{\prime}$; (d) if $K$ is uncountable and $|K|>|\mathbf{k}|$, then $|K|=|B|$ for each transcendence basis $B$ of $K$ over $\mathbf{k}$. Applying this with $\mathbf{k}=\mathbb{Q}$ and $\mathbf{k}=\mathbb{F}_{p}$ for prime numbers $p$, we obtain that any two algebraically closed fields of the same characteristic and the same uncountable size are isomorphic. Using Vaught's Test for models of size $\aleph_{1}$ this yields: Theorem 4.1.8. The set $\mathrm{ACF}(0)$ of axioms for algebraically closed fields of characteristic zero is complete. Likewise, $\mathrm{ACF}(p)$ is complete for each prime number $p$. If the hypothesis of Vaught's Test or its generalization is satisfied, then many things follow of which completeness is only one; it goes beyond the scope of these notes to develop this large chapter of pure model theory, which goes under the name of "categoricity in power", but we cannot resist mentioning two remarkable theorems in this area. First an assumption and a definition. Assume $L$ is countable, $\Sigma$ has a model, and all models of $\Sigma$ are infinite. Given any infinite cardinal $\kappa$, we say that $\Sigma$ is $\kappa$-categorical if all models of $\Sigma$ of cardinality $\kappa$ are isomorphic. Mentioning cardinals here may give the wrong impression about the role of set theory in model theory. 
The key results concerning these categoricity notions actually show their intrinsic and robust logical nature rather than any sensitive dependence on infinite cardinals:

Theorem 4.1.9. For $L$ and $\Sigma$ as above, the following are equivalent:
(i) $\Sigma$ is $\aleph_{0}$-categorical;
(ii) $\Sigma$ is complete, and for any $n \geq 1$ and distinct variables $x_{1}, \ldots, x_{n}$ there are up to $\Sigma$-equivalence only finitely many $L$-formulas $\varphi\left(x_{1}, \ldots, x_{n}\right)$.

This result dates from the 1950s. The next theorem is due to Morley (1965), and is considered the first theorem of real depth in pure model theory.

Theorem 4.1.10. With $L$ and $\Sigma$ as above, if $\Sigma$ is $\kappa$-categorical for some uncountable $\kappa$, then $\Sigma$ is $\kappa$-categorical for every uncountable $\kappa$.

## Exercises.

(1) Let $L=\{U\}$ where $U$ is a unary relation symbol. Consider the $L$-structure $(\mathbf{Z} ; \mathbf{N})$. Give an informative description of a complete set of $L$-sentences true in $(\mathbf{Z} ; \mathbf{N})$. (A description like $\{\sigma:(\mathbf{Z} ; \mathbf{N}) \models \sigma\}$ is correct but not informative. An explicit, possibly infinite, list of axioms is required. Hint: Make an educated guess and try to verify it using Vaught's Test or one of its variants.)
(2) Let $\Sigma_{1}$ and $\Sigma_{2}$ be sets of $L$-sentences such that no symbol of $L$ occurs in both $\Sigma_{1}$ and $\Sigma_{2}$. Suppose $\Sigma_{1}$ and $\Sigma_{2}$ have infinite models. Then $\Sigma_{1} \cup \Sigma_{2}$ has a model.
(3) Let $L=\{S\}$ where $S$ is a unary function symbol. Consider the $L$-structure $(\mathbf{Z} ; S)$ where $S(a)=a+1$ for $a \in \mathbf{Z}$. Give an informative description of a complete set of $L$-sentences true in $(\mathbf{Z} ; S)$.
### Elementary Equivalence and Back-and-Forth In the rest of this chapter we relax notation, and just write $\varphi\left(a_{1}, \ldots, a_{n}\right)$ for an $L_{A}$-sentence $\varphi\left(\underline{a}_{1}, \ldots, \underline{a}_{n}\right)$, where $\mathcal{A}=(A ; \ldots)$ is an $L$-structure, $\varphi\left(x_{1}, \ldots, x_{n}\right)$ an $L_{A}$-formula, and $\left(a_{1}, \ldots, a_{n}\right) \in A^{n}$. In this section $\mathcal{A}$ and $\mathcal{B}$ denote $L$-structures. We say that $\mathcal{A}$ and $\mathcal{B}$ are elementarily equivalent (notation: $\mathcal{A} \equiv \mathcal{B}$ ) if they satisfy the same $L$-sentences. Thus by the previous section $(\mathbf{Q} ;<) \equiv(\mathbf{R} ;<)$, and any two infinite vector spaces over a given field $F$ are elementarily equivalent. A partial isomorphism from $\mathcal{A}$ to $\mathcal{B}$ is a bijection $\gamma: X \rightarrow Y$ with $X \subseteq A$ and $Y \subseteq B$ (so $X=\operatorname{domain}(\gamma)$ and $Y=\operatorname{codomain}(\gamma))$ such that (i) for all $m$-ary $R \in L^{\mathrm{r}}$ and $a_{1}, \ldots, a_{m} \in X$, $$ R^{\mathcal{A}}\left(a_{1}, \ldots, a_{m}\right) \Longleftrightarrow R^{\mathcal{B}}\left(\gamma a_{1}, \ldots, \gamma a_{m}\right) . $$ (ii) for all $n$-ary $F \in L^{\mathrm{f}}$ and $a_{1}, \ldots, a_{n}, a_{n+1} \in X$, $$ F^{\mathcal{A}}\left(a_{1}, \ldots, a_{n}\right)=a_{n+1} \Longleftrightarrow F^{\mathcal{B}}\left(\gamma a_{1}, \ldots, \gamma a_{n}\right)=\gamma\left(a_{n+1}\right) . $$ Examples. An isomorphism $\mathcal{A} \rightarrow \mathcal{B}$ is the same as a partial isomorphism from $\mathcal{A}$ to $\mathcal{B}$ with domain $A$ and codomain $B$. 
If $\gamma: X \rightarrow Y$ is a partial isomorphism from $\mathcal{A}$ to $\mathcal{B}$, then $\gamma^{-1}: Y \rightarrow X$ is a partial isomorphism from $\mathcal{B}$ to $\mathcal{A}$, and for any $E \subseteq X$ the restriction $\left.\gamma\right|_{E}: E \rightarrow \gamma(E)$ is a partial isomorphism from $\mathcal{A}$ to $\mathcal{B}$. Suppose $\mathcal{A}=(A ;<)$ and $\mathcal{B}=(B ;<)$ are totally ordered sets, and $N \in \mathbf{N}$ and $a_{1}, \ldots, a_{N} \in A, b_{1}, \ldots, b_{N} \in B$ are such that $a_{1}<a_{2}<\cdots<a_{N}$ and $b_{1}<b_{2}<\cdots<b_{N}$; then the map $a_{i} \mapsto b_{i}:\left\{a_{1}, \ldots, a_{N}\right\} \rightarrow\left\{b_{1}, \ldots, b_{N}\right\}$ is a partial isomorphism from $\mathcal{A}$ to $\mathcal{B}$. A back-and-forth system from $\mathcal{A}$ to $\mathcal{B}$ is a collection $\Gamma$ of partial isomorphisms from $\mathcal{A}$ to $\mathcal{B}$ such that (i) ("Forth") for all $\gamma \in \Gamma$ and $a \in A$ there is a $\gamma^{\prime} \in \Gamma$ such that $\gamma^{\prime}$ extends $\gamma$ and $a \in \operatorname{domain}\left(\gamma^{\prime}\right)$; (ii) ("Back") for all $\gamma \in \Gamma$ and $b \in B$ there is a $\gamma^{\prime} \in \Gamma$ such that $\gamma^{\prime}$ extends $\gamma$ and $b \in \operatorname{codomain}\left(\gamma^{\prime}\right)$. If $\Gamma$ is a back-and-forth system from $\mathcal{A}$ to $\mathcal{B}$, then $\Gamma^{-1}:=\left\{\gamma^{-1}: \gamma \in \Gamma\right\}$ is a back-and-forth system from $\mathcal{B}$ to $\mathcal{A}$. We call $\mathcal{A}$ and $\mathcal{B}$ back-and-forth equivalent (notation: $\mathcal{A} \equiv_{\mathrm{bf}} \mathcal{B}$ ) if there exists a nonempty back-and-forth system from $\mathcal{A}$ to $\mathcal{B}$. Hence $\mathcal{A} \equiv_{\mathrm{bf}} \mathcal{A}$, and if $\mathcal{A} \equiv_{\mathrm{bf}} \mathcal{B}$, then $\mathcal{B} \equiv_{\mathrm{bf}} \mathcal{A}$. 
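The Forth and Back conditions can be carried out quite mechanically when both structures are countable dense orders without endpoints. Here is a small sketch in Python; the function names, and the choice of $(\mathbf{Q} ;<)$ for both orders (represented via `Fraction`), are ours. A finite order-preserving partial map is extended to absorb a prescribed element on either side, exactly as in conditions (i) and (ii) above.

```python
from fractions import Fraction

def extended(gamma, a):
    """One Forth step: return gamma, a finite order-preserving partial map
    between subsets of Q (a dict Fraction -> Fraction), extended so that a
    lies in its domain.  Density and the lack of endpoints in the codomain
    guarantee that a suitable image always exists."""
    if a in gamma:
        return gamma
    lower = [gamma[x] for x in gamma if x < a]
    upper = [gamma[x] for x in gamma if x > a]
    if lower and upper:
        b = (max(lower) + min(upper)) / 2   # density: a point strictly in between
    elif lower:
        b = max(lower) + 1                  # no right endpoint
    elif upper:
        b = min(upper) - 1                  # no left endpoint
    else:
        b = Fraction(0)
    return {**gamma, a: b}

def back_and_forth(dom_elements, cod_elements):
    """Alternate Forth and Back steps until every listed element appears
    in the domain (resp. codomain) of the partial isomorphism."""
    gamma = {}
    for a, b in zip(dom_elements, cod_elements):
        gamma = extended(gamma, a)                              # Forth: absorb a
        inverse = {v: k for k, v in gamma.items()}
        gamma = {v: k for k, v in extended(inverse, b).items()} # Back: absorb b
    return gamma
```

Iterating these extensions along enumerations of $A$ and $B$ is precisely the construction in the proof of Cantor's theorem.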
Hausdorff's proof of Cantor's theorem in Section 4.1 generalizes as follows: Proposition 4.2.1. Suppose $\mathcal{A}$ and $\mathcal{B}$ are countable and $\mathcal{A} \equiv_{\mathrm{bf}} \mathcal{B}$. Then $\mathcal{A} \cong \mathcal{B}$. Proof. Let $\Gamma$ be a nonempty back-and-forth system from $\mathcal{A}$ to $\mathcal{B}$. We proceed as in the proof of Cantor's theorem, and construct a sequence $\left(\gamma_{n}\right)$ in $\Gamma$ such that each $\gamma_{n+1}$ extends $\gamma_{n}, A=\bigcup_{n}$ domain $\left(\gamma_{n}\right)$ and $B=\bigcup_{n} \operatorname{codomain}\left(\gamma_{n}\right)$. Then the map $A \rightarrow B$ that extends each $\gamma_{n}$ is an isomorphism $\mathcal{A} \rightarrow \mathcal{B}$. In applying this proposition and the next one in a concrete situation, the key is to guess a back-and-forth system. That is where insight and imagination (and experience) come in. The next result has no countability assumption. Proposition 4.2.2. If $\mathcal{A} \equiv_{\mathrm{bf}} \mathcal{B}$, then $\mathcal{A} \equiv \mathcal{B}$. Proof. Suppose $\Gamma$ is a nonempty back-and-forth system from $\mathcal{A}$ to $\mathcal{B}$. We claim that for any $L$-formula $\varphi\left(y_{1}, \ldots, y_{n}\right)$ and all $\gamma \in \Gamma$ and $a_{1}, \ldots, a_{n} \in \operatorname{domain}(\gamma)$, $$ \mathcal{A} \models \varphi\left(a_{1}, \ldots, a_{n}\right) \Longleftrightarrow \mathcal{B} \models \varphi\left(\gamma a_{1}, \ldots, \gamma a_{n}\right) . $$ (For $n=0$ this gives $\mathcal{A} \equiv \mathcal{B}$, but the claim is much stronger.) Exercise 4 of Section 3.3 shows that it is enough to prove this claim for unnested $\varphi$. We proceed by induction on the number of logical symbols in unnested formulas $\varphi\left(y_{1}, \ldots, y_{n}\right)$. The case of unnested atomic formulas follows directly from the definition of partial isomorphism. The connectives $\neg, \vee, \wedge$ present no problem. 
For $\exists x \psi\left(x, y_{1}, \ldots, y_{n}\right)$, use the back-and-forth property. As to $\forall x \psi\left(x, y_{1}, \ldots, y_{n}\right)$, use that it is equivalent to $\neg \exists x \neg \psi\left(x, y_{1}, \ldots, y_{n}\right)$.

## Exercises.

(1) Define a finite restriction of a bijection $\gamma: X \rightarrow Y$ to be a map $\gamma \mid E: E \rightarrow \gamma(E)$ with finite $E \subseteq X$. If $\Gamma$ is a back-and-forth system from $\mathcal{A}$ to $\mathcal{B}$, so is the set of finite restrictions of members of $\Gamma$.
(2) If $\mathcal{A} \equiv_{\text {bf }} \mathcal{B}$ and $\mathcal{B} \equiv_{\text {bf }} \mathcal{C}$, then $\mathcal{A} \equiv_{\text {bf }} \mathcal{C}$.

### Quantifier Elimination

First an example from high school algebra. The ordered field of real numbers is the structure $(\mathbf{R} ;<, 0,1,+,-, \cdot)$. In this structure the formula $$ \varphi(a, b, c):=\exists y\left(a y^{2}+b y+c=0\right) $$ is equivalent to the quantifier-free formula $$ \left(a \neq 0 \wedge b^{2} \geq 4 a c\right) \vee(a=0 \wedge b \neq 0) \vee(a=0 \wedge b=0 \wedge c=0) . $$ (Here the coefficients $a, b, c$ are free variables and the "unknown" $y$ is existentially quantified.) This equivalence gives an effective test for the existence of a $y$ with a certain property, which avoids in particular having to check infinitely many values of $y$ (even uncountably many in the case above). This illustrates the kind of simplification that quantifier elimination provides. Another example: in every field, the formula $$ \forall x \forall y((a x+b y=0 \wedge c x+d y=0) \rightarrow(x=0 \wedge y=0)) $$ is equivalent to the q-free formula $a d \neq b c$. Roughly speaking, the role of determinants, discriminants, resultants, and the like is to eliminate a (quantified) variable. The role of the general coefficients $a, b, c, d$ in these examples is taken over in this section by a tuple $x=\left(x_{1}, \ldots, x_{n}\right)$ of distinct variables. Definition.
$\Sigma$ has quantifier elimination (QE) if every $L$-formula $\varphi(x)$ is $\Sigma$-equivalent to a quantifier-free (short: q-free) $L$-formula $\varphi^{\mathrm{qf}}(x)$. By taking $n=0$ in this definition we see that if $\Sigma$ has QE, then every $L$-sentence is $\Sigma$-equivalent to a q-free $L$-sentence.

Lemma 4.3.1. Suppose $\Sigma$ has QE and $\mathcal{B}$ and $\mathcal{C}$ are models of $\Sigma$ with a common substructure $\mathcal{A}$ (we do not assume $\mathcal{A} \models \Sigma$). Then $\mathcal{B}$ and $\mathcal{C}$ satisfy the same $L_{A}$-sentences.

Proof. Let $\sigma$ be an $L_{A}$-sentence. We have to show $\mathcal{B} \models \sigma \Leftrightarrow \mathcal{C} \models \sigma$. Write $\sigma$ as $\varphi(a)$ with $\varphi(x)$ an $L$-formula and $a \in A^{n}$. Take a q-free $L$-formula $\varphi^{\mathrm{qf}}(x)$ that is $\Sigma$-equivalent to $\varphi(x)$. Then $\mathcal{B} \models \sigma$ iff $\mathcal{B} \models \varphi^{\mathrm{qf}}(a)$ iff $\mathcal{A} \models \varphi^{\mathrm{qf}}(a)$ (by Exercise 8 of Section 2.5) iff $\mathcal{C} \models \varphi^{\mathrm{qf}}(a)$ (by the same exercise) iff $\mathcal{C} \models \sigma$.

Corollary 4.3.2. Suppose $\Sigma$ has a model, has QE, and there exists an $L$-structure that can be embedded into every model of $\Sigma$. Then $\Sigma$ is complete.

Proof. Take an $L$-structure $\mathcal{A}$ that can be embedded into every model of $\Sigma$. Let $\mathcal{B}$ and $\mathcal{C}$ be any two models of $\Sigma$. So $\mathcal{A}$ is isomorphic to a substructure of $\mathcal{B}$ and of $\mathcal{C}$. Then by a slight rewording of the proof of Lemma 4.3.1 (considering only $L$-sentences), we see that $\mathcal{B}$ and $\mathcal{C}$ satisfy the same $L$-sentences. It follows that $\Sigma$ is complete.

Remark. We have seen that Vaught's test can be used to prove completeness. The above corollary gives another way of establishing completeness, and is often applicable when the hypothesis of Vaught's Test is not satisfied.
Completeness is only one of the nice consequences of QE, and the easiest one to explain at this stage. The main impact of QE is rather that it gives access to the structural properties of definable sets. This will be reflected in exercises at the end of this section. Applications of model theory to other areas of mathematics often involve QE as a key step. A basic conjunction in $L$ is by definition a conjunction of atomic and negated atomic $L$-formulas. Each q-free $L$-formula $\varphi(x)$ is equivalent to a disjunction $\varphi_{1}(x) \vee \cdots \vee \varphi_{k}(x)$ of basic conjunctions $\varphi_{i}(x)$ in $L$ ("disjunctive normal form"). In what follows $y$ is a single variable distinct from the variables $x_{1}, \ldots, x_{n}$ in a tuple $x=\left(x_{1}, \ldots, x_{n}\right)$.

Lemma 4.3.3. Suppose that for every basic conjunction $\theta(x, y)$ in $L$ there is a q-free $L$-formula $\theta^{\mathrm{qf}}(x)$ such that $$ \Sigma \vdash \exists y \theta(x, y) \leftrightarrow \theta^{\mathrm{qf}}(x) . $$ Then $\Sigma$ has QE.

Proof. Let us say that an $L$-formula $\varphi(x)$ has $\Sigma$-QE if it is $\Sigma$-equivalent to a q-free $L$-formula $\varphi^{\mathrm{qf}}(x)$. Note that if the $L$-formulas $\varphi_{1}(x)$ and $\varphi_{2}(x)$ have $\Sigma$-QE, then $\neg \varphi_{1}(x),\left(\varphi_{1} \vee \varphi_{2}\right)(x)$, and $\left(\varphi_{1} \wedge \varphi_{2}\right)(x)$ have $\Sigma$-QE. Next, let $\varphi(x)=\exists y \psi(x, y)$, and suppose inductively that the $L$-formula $\psi(x, y)$ has $\Sigma$-QE. Hence $\psi(x, y)$ is $\Sigma$-equivalent to a disjunction $\bigvee_{i} \psi_{i}(x, y)$ of basic conjunctions $\psi_{i}(x, y)$ in $L$, with $i$ ranging over some finite index set.
In view of the equivalence of $\exists y \bigvee_{i} \psi_{i}(x, y)$ with $\bigvee_{i} \exists y \psi_{i}(x, y)$ we obtain $$ \Sigma \vdash \varphi(x) \longleftrightarrow \bigvee_{i} \exists y \psi_{i}(x, y) $$ Each $\exists y \psi_{i}(x, y)$ has $\Sigma$-QE, by hypothesis, so $\varphi(x)$ has $\Sigma$-QE. Finally, let $\varphi(x)=\forall y \psi(x, y)$, and suppose inductively that the $L$-formula $\psi(x, y)$ has $\Sigma$-QE. This case reduces to the previous case since $\varphi(x)$ is equivalent to $\neg \exists y \neg \psi(x, y)$.

In the following theorem, let $\Sigma$ be the set of axioms for dense totally ordered sets without endpoints (in the language $L_{\mathrm{O}}$).

Theorem 4.3.4. $\Sigma$ has QE.

Proof. Let $(x, y)=\left(x_{1}, \ldots, x_{n}, y\right)$ be a tuple of $n+1$ distinct variables, and consider a basic conjunction $\varphi(x, y)$ in $L_{\mathrm{O}}$. By Lemma 4.3.3 it suffices to show that $\exists y \varphi(x, y)$ is $\Sigma$-equivalent to a q-free formula $\psi(x)$. We may assume that each conjunct of $\varphi$ is of one of the following types: $$ y=x_{i}, \quad x_{i}<y, \quad y<x_{i} \quad(1 \leq i \leq n) . $$ To justify this, observe that if we had instead a conjunct $y \neq x_{i}$ then we could replace it by $\left(y<x_{i}\right) \vee\left(x_{i}<y\right)$ and use the fact that $\exists y\left(\varphi_{1}(x, y) \vee \varphi_{2}(x, y)\right)$ is equivalent to $\exists y \varphi_{1}(x, y) \vee \exists y \varphi_{2}(x, y)$. Similarly, a negation $\neg\left(y<x_{i}\right)$ can be replaced by the disjunction $y=x_{i} \vee x_{i}<y$, and likewise with negations $\neg\left(x_{i}<y\right)$. Also conjuncts in which $y$ does not appear can be eliminated because $$ \vdash \exists y(\psi(x) \wedge \theta(x, y)) \longleftrightarrow \psi(x) \wedge \exists y \theta(x, y) .
$$ Suppose that we have a conjunct $y=x_{i}$ in $\varphi(x, y)$, so, $\varphi(x, y)$ is equivalent to $y=x_{i} \wedge \varphi^{\prime}(x, y)$, where $\varphi^{\prime}(x, y)$ is a basic conjunction in $L_{O}$. Then $\exists y \varphi(x, y)$ is equivalent to $\varphi^{\prime}\left(x, x_{i}\right)$, and we are done. So we can assume also that $\varphi(x, y)$ has no conjuncts of the form $y=x_{i}$. After all these reductions, and after rearranging conjuncts we can assume that $\varphi(x, y)$ is a conjunction $$ \bigwedge_{i \in I} x_{i}<y \wedge \bigwedge_{j \in J} y<x_{j} $$ where $I, J \subseteq\{1, \ldots, n\}$ and where we allow $I$ or $J$ to be empty. Up till this point we did not need the density and "no endpoints" axioms, but these come in now: $\exists y \varphi(x, y)$ is $\Sigma$-equivalent to the formula $$ \bigwedge_{i \in I, j \in J} x_{i}<x_{j} $$ We mention without proof two important examples of QE, and give a complete proof for a third example in the next section. The following theorem is due to Tarski and (independently) to Chevalley. It dates from around 1950. Theorem 4.3.5. ACF has QE. Clearly, ACF is not complete, since it says nothing about the characteristic: it doesn't prove $1+1=0$, nor does it prove $1+1 \neq 0$. However, $\operatorname{ACF}(0)$, which contains additional axioms forcing the characteristic to be 0 , is complete by 4.3.2 and the fact that the ring of integers embeds in every algebraically closed field of characteristic 0. Tarski also established the following more difficult theorem, which is one of the key results in real algebraic geometry. (His original proof is rather long; there is a shorter one due to A. Seidenberg, and an elegant short proof by A. Robinson using a combination of basic algebra and elementary model theory.) Definition. RCF is a set of axioms true in the ordered field $(\mathbf{R} ;<, 0,1,-,+, \cdot)$ of real numbers. 
In addition to the ordered field axioms, it has the axiom $\forall x\left(x>0 \rightarrow \exists y\left(x=y^{2}\right)\right)$ ($x, y$ distinct variables) and for each odd $n>1$ the axiom $$ \forall x_{1} \ldots x_{n} \exists y\left(y^{n}+x_{1} y^{n-1}+\cdots+x_{n}=0\right) $$ where $x_{1}, \ldots, x_{n}, y$ are distinct variables. The models of RCF are known as real closed ordered fields.

Theorem 4.3.6. RCF has QE and is complete.

## Exercises.

In (5) and (6), an $L$-theory is a set $T$ of $L$-sentences such that for all $L$-sentences $\sigma$, if $T \vdash \sigma$, then $\sigma \in T$. An axiomatization of an $L$-theory $T$ is a set $\Sigma$ of $L$-sentences such that $T=\{\sigma: \sigma$ is an $L$-sentence and $\Sigma \vdash \sigma\}$.
(1) The subsets of $\mathbf{C}$ definable in $(\mathbf{C} ; 0,1,-,+, \cdot)$ are exactly the finite subsets of $\mathbf{C}$ and their complements in $\mathbf{C}$. (Hint: use the fact that ACF has QE.)
(2) The subsets of $\mathbf{R}$ definable in the ordered field $(\mathbf{R} ;<, 0,1,-,+, \cdot)$ of real numbers are exactly the finite unions of intervals of all kinds (including degenerate intervals with just one point). (Hint: use the fact that RCF has QE.)
(3) Find a set $\mathrm{Eq}_{\infty}$ of sentences in the language $L=\{\sim\}$, where $\sim$ is a binary relation symbol, whose models are the $L$-structures $\mathcal{A}=(A ; \sim)$ such that: (i) $\sim$ is an equivalence relation on $A$; (ii) every equivalence class is infinite; (iii) there are infinitely many equivalence classes. Show that $\mathrm{Eq}_{\infty}$ admits QE and is complete. (It is also possible to use Vaught's test to prove completeness.)
(4) Suppose that a set $\Sigma$ of $L$-sentences has QE. Let the language $L^{\prime}$ extend $L$ by new symbols of arity 0, and let $\Sigma^{\prime} \supseteq \Sigma$ be a set of $L^{\prime}$-sentences. Then $\Sigma^{\prime}$ (as a set of $L^{\prime}$-sentences) also has QE.
(5) Suppose the $L$-theory $T$ has QE. Then $T$ has an axiomatization consisting of sentences $\forall x \exists y \varphi(x, y)$ and $\forall x \psi(x)$ where $\varphi(x, y)$ and $\psi(x)$ are q-free. (Hint: let $\Sigma$ be the set of $L$-sentences provable from $T$ that have the indicated form; show that $\Sigma$ has QE, and is an axiomatization of $T$.)
(6) Assume the $L$-theory $T$ has built-in Skolem functions, that is, for each basic conjunction $\varphi(x, y)$ there are $L$-terms $t_{1}(x), \ldots, t_{k}(x)$ such that $$ T \vdash \exists y \varphi(x, y) \rightarrow \varphi\left(x, t_{1}(x)\right) \vee \cdots \vee \varphi\left(x, t_{k}(x)\right) . $$ Then $T$ has QE; moreover, for every $L$-formula $\varphi(x, y)$ there are $L$-terms $t_{1}(x), \ldots, t_{k}(x)$ such that $T \vdash \exists y \varphi(x, y) \rightarrow \varphi\left(x, t_{1}(x)\right) \vee \cdots \vee \varphi\left(x, t_{k}(x)\right)$, and $T$ has an axiomatization consisting of sentences $\forall x \psi(x)$ where $\psi(x)$ is q-free.

### Presburger Arithmetic

In this section we consider in some detail one example of a set of axioms that has QE, namely "Presburger Arithmetic." Essentially, this is a complete set of axioms for ordinary arithmetic of integers without multiplication, that is, the axioms are true in $(\mathbf{Z} ; 0,1,+,-,<)$, and prove every sentence true in this structure. There is a mild complication in trying to obtain this completeness via QE: one can show (exercise) that for any q-free formula $\varphi(x)$ in the language $\{0,1,+,-,<\}$ there is an $N \in \mathbf{N}$ such that either $(\mathbf{Z} ; 0,1,+,-,<) \models \varphi(n)$ for all $n>N$ or $(\mathbf{Z} ; 0,1,+,-,<) \models \neg \varphi(n)$ for all $n>N$. In particular, formulas such as $\exists y(x=y+y)$ and $\exists y(x=y+y+y)$ are not $\Sigma$-equivalent to any q-free formula in this language, for any set $\Sigma$ of axioms true in $(\mathbf{Z} ; 0,1,+,-,<)$.
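The exercise just mentioned can be made plausible by experiment: in one free variable, an atomic formula of $\{0,1,+,-,<\}$ compares two $\mathbf{Z}$-linear terms, and such a comparison has a constant truth value for all sufficiently large $x$, whereas evenness keeps alternating. Below is a rough numerical sketch in Python; the function names and the finite sampling window are choices made here for illustration, and checking a window is of course evidence, not a proof.

```python
def linear_term(k, c):
    """In one variable x, every term of {0, 1, +, -} denotes k*x + c
    for some integers k, c; we represent such a term by its value map."""
    return lambda x: k * x + c

def constant_on_window(pred, start=1000, width=200):
    """Check that pred has a constant truth value on [start, start+width)."""
    values = [pred(n) for n in range(start, start + width)]
    return all(v == values[0] for v in values)

# An atomic formula 3x + 1 < 2x + 5: false for all x >= 4.
atom = lambda x: linear_term(3, 1)(x) < linear_term(2, 5)(x)
# The set defined by "exists y (x = y + y)": the even numbers
# (for x >= 0 a bounded search for y suffices).
even = lambda x: any(x == y + y for y in range(0, x + 1))

print(constant_on_window(atom))   # True: the atomic condition stabilizes
print(constant_on_window(even))   # False: evenness keeps alternating
```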
To overcome this obstacle to QE we augment the language $\{0,1,+,-,<\}$ by new unary relation symbols $P_{1}, P_{2}, P_{3}, P_{4}, \ldots$ to obtain the language $L_{\mathrm{PrA}}$ of Presburger Arithmetic (named after the Polish logician Presburger, who was a student of Tarski). We expand $(\mathbf{Z} ; 0,1,+,-,<)$ to the $L_{\mathrm{PrA}}$-structure $$ \tilde{\mathbf{Z}}=(\mathbf{Z} ; 0,1,+,-,<, \mathbf{Z}, 2 \mathbf{Z}, 3 \mathbf{Z}, 4 \mathbf{Z}, \ldots) $$ that is, $P_{n}$ is interpreted as the set $n \mathbf{Z}$. This structure satisfies the set PrA of Presburger Axioms, which consists of the following sentences:

(i) the axioms of $\mathrm{Ab}$ for abelian groups;
(ii) the axioms in Section 4.1 expressing that $<$ is a total order;
(iii) $\forall x \forall y \forall z(x<y \rightarrow x+z<y+z)$ (translation invariance of $<$);
(iv) $0<1 \wedge \neg \exists y(0<y<1)$ (discreteness axiom);
(v) $\forall x \exists y \bigvee_{0 \leq r<n} x=n y+r 1, \quad n=1,2,3, \ldots$ (division with remainder);
(vi) $\forall x\left(P_{n} x \leftrightarrow \exists y(x=n y)\right), \quad n=1,2,3, \ldots$ (defining axioms for $P_{1}, P_{2}, \ldots$).

Here we have fixed distinct variables $x, y, z$ for definiteness. In (v) and in the rest of this section $r$ ranges over integers. Note that (v) and (vi) are infinite lists of axioms. Here are some elementary facts about models of PrA:

Proposition 4.4.1. Let $\mathcal{A}=\left(A ; 0,1,+,-,<, P_{1}^{\mathcal{A}}, P_{2}^{\mathcal{A}}, P_{3}^{\mathcal{A}}, \ldots\right) \models \operatorname{PrA}$. Then
(1) There is a unique embedding $\tilde{\mathbf{Z}} \longrightarrow \mathcal{A}$; it sends $k \in \mathbf{Z}$ to $k 1 \in A$.
(2) Given any $n>0$ we have $P_{n}^{\mathcal{A}}=n \mathcal{A}$, where we regard $\mathcal{A}$ as an abelian group, and $\mathcal{A} / n \mathcal{A}$ has exactly $n$ elements, namely $0+n \mathcal{A}, \ldots,(n-1) 1+n \mathcal{A}$.
(3) For any $n>0$ and $a \in A$, exactly one of $a, a+1, \ldots, a+(n-1) 1$ lies in $n \mathcal{A}$;
(4) $\mathcal{A}$ is torsion-free as an abelian group.

Theorem 4.4.2. PrA has QE.

Proof. Let $(x, y)=\left(x_{1}, \ldots, x_{n}, y\right)$ be a tuple of $n+1$ distinct variables, and consider a basic conjunction $\varphi(x, y)$ in $L_{\mathrm{PrA}}$. By Lemma 4.3.3 it suffices to show that $\exists y \varphi(x, y)$ is PrA-equivalent to a q-free formula $\psi(x)$. We may assume that each conjunct of $\varphi$ is of one of the following types, where $m, N$ are natural numbers $\geq 1$ and $t(x)$ is an $L_{\mathrm{PrA}}$-term: $$ m y=t(x), \quad m y<t(x), \quad t(x)<m y, \quad P_{N}(m y+t(x)) . $$ To justify this assumption, observe that if we had instead a conjunct $m y \neq t(x)$ then we could replace it by $(m y<t(x)) \vee(t(x)<m y)$ and use the fact that $\exists y\left(\varphi_{1}(x, y) \vee \varphi_{2}(x, y)\right)$ is equivalent to $\exists y \varphi_{1}(x, y) \vee \exists y \varphi_{2}(x, y)$. Similarly, a negation $\neg P_{N}(m y+t(x))$ can be replaced by the disjunction $$ P_{N}(m y+t(x)+1) \vee \cdots \vee P_{N}(m y+t(x)+(N-1) 1) $$ by (3) above. Also conjuncts in which $y$ does not appear can be eliminated because $$ \vdash \exists y(\psi(x) \wedge \theta(x, y)) \longleftrightarrow \psi(x) \wedge \exists y \theta(x, y) . $$ Since $\operatorname{PrA} \vdash P_{N}(z) \leftrightarrow P_{r N}(r z)$ for $r>0$ we can replace $P_{N}(m y+t(x))$ by $P_{r N}(r m y+r t(x))$. Also, for $r \geq 1$ we can replace $m y=t(x)$ by $r m y=r t(x)$, and likewise with $m y<t(x)$ and $t(x)<m y$. We can therefore assume that all conjuncts have the same "coefficient" $m$ in front of the variable $y$.
After all these reductions, and after rearranging conjuncts, $\varphi(x, y)$ has the form $$ \bigwedge_{h \in H} m y=t_{h}(x) \wedge \bigwedge_{i \in I} t_{i}(x)<m y \wedge \bigwedge_{j \in J} m y<t_{j}(x) \wedge \bigwedge_{k \in K} P_{N(k)}\left(m y+t_{k}(x)\right) $$ where $m \geq 1$, $H, I, J, K$ are disjoint finite index sets, and each $N(k)$ is a natural number $\geq 1$. We allow some of these index sets to be empty, in which case the corresponding conjunction can be left out. Suppose that $H \neq \emptyset$, say $h^{\prime} \in H$. Then the formula $\exists y \varphi(x, y)$ is PrA-equivalent to $$ \begin{aligned} P_{m}\left(t_{h^{\prime}}(x)\right) \wedge \bigwedge_{h \in H} t_{h}(x)=t_{h^{\prime}}(x) \wedge \bigwedge_{i \in I} t_{i}(x)<t_{h^{\prime}}(x) & \wedge \bigwedge_{j \in J} t_{h^{\prime}}(x)<t_{j}(x) \\ & \wedge \bigwedge_{k \in K} P_{N(k)}\left(t_{h^{\prime}}(x)+t_{k}(x)\right) \end{aligned} $$ For the rest of the proof we assume that $H=\emptyset$. To understand what follows, it may help to focus on the model $\tilde{\mathbf{Z}}$, although the arguments go through for arbitrary models of PrA. Fix any value $a \in \mathbf{Z}^{n}$ of $x$. Consider the system of linear congruences (with "unknown" $y$) $$ P_{N(k)}\left(m y+t_{k}(a)\right), \quad(k \in K), $$ which in more familiar notation would be written as $$ m y+t_{k}(a) \equiv 0 \quad \bmod N(k), \quad(k \in K) . $$ The solutions in $\mathbf{Z}$ of this system form a union of congruence classes modulo $N:=\prod_{k \in K} N(k)$, where as usual we put $N=1$ for $K=\emptyset$. This suggests replacing $y$ successively by $N z, 1+N z, \ldots,(N-1) 1+N z$. Our precise claim is that $\exists y \varphi(x, y)$ is PrA-equivalent to the formula $\theta(x)$ given by $$ \begin{aligned} & \bigvee_{r=0}^{N-1}\left(\bigwedge_{k \in K} P_{N(k)}\left((m r) 1+t_{k}(x)\right) \wedge \exists z\left(\bigwedge_{i \in I} t_{i}(x)<m(r 1+N z)\right.\right.
\\ &\left.\left.\wedge \bigwedge_{j \in J} m(r 1+N z)<t_{j}(x)\right)\right) . \end{aligned} $$ We prove this equivalence with $\theta(x)$ as follows. Suppose $$ \mathcal{A}=(A ; \ldots) \models \operatorname{PrA}, \quad a=\left(a_{1}, \ldots, a_{n}\right) \in A^{n} . $$ We have to show that $\mathcal{A} \models \exists y \varphi(a, y)$ if and only if $\mathcal{A} \models \theta(a)$. So let $b \in A$ be such that $\mathcal{A} \models \varphi(a, b)$. Division with remainder yields a $c \in A$ and an $r$ such that $b=r 1+N c$ and $0 \leq r \leq N-1$. Note that then for $k \in K$, $$ m b+t_{k}(a)=m(r 1+N c)+t_{k}(a)=(m r) 1+(m N) c+t_{k}(a) \in N(k) \mathcal{A} $$ and so $\mathcal{A} \models P_{N(k)}\left((m r) 1+t_{k}(a)\right)$. Also, $$ \begin{array}{ll} t_{i}(a)<m(r 1+N c) & \text { for every } i \in I, \\ m(r 1+N c)<t_{j}(a) & \text { for every } j \in J . \end{array} $$ Therefore $\mathcal{A} \models \theta(a)$ with $\exists z$ witnessed by $c$. For the converse, suppose that the disjunct of $\theta(a)$ indexed by a certain $r \in\{0, \ldots, N-1\}$ is true in $\mathcal{A}$, with $\exists z$ witnessed by $c \in A$. Then put $b=r 1+N c$ and we get $\mathcal{A} \models \varphi(a, b)$. This proves the claimed equivalence. Now that we have proved the claim we have reduced to the situation (after changing notation) where $H=K=\emptyset$ (no equations and no congruences). So $\varphi(x, y)$ now has the form $$ \bigwedge_{i \in I} t_{i}(x)<m y \wedge \bigwedge_{j \in J} m y<t_{j}(x) . $$ If $J=\emptyset$ or $I=\emptyset$ then $\operatorname{PrA} \vdash \exists y \varphi(x, y) \leftrightarrow \top$. This leaves the case where both $I$ and $J$ are nonempty. So suppose $\mathcal{A} \models \operatorname{PrA}$ and that $A$ is the underlying set of $\mathcal{A}$. For each value $a \in A^{n}$ of $x$ there is an $i_{0} \in I$ such that $t_{i_{0}}(a)$ is maximal among the $t_{i}(a)$ with $i \in I$, and a $j_{0} \in J$ such that $t_{j_{0}}(a)$ is minimal among the $t_{j}(a)$ with $j \in J$.
Moreover each interval of $m$ successive elements of $A$ contains an element of $m \mathcal{A}$. Therefore $\exists y \varphi(x, y)$ is equivalent in $\mathcal{A}$ to the disjunction over all pairs $\left(i_{0}, j_{0}\right) \in I \times J$ of the $\mathrm{q}$-free formula $$ \begin{aligned} \bigwedge_{i \in I} t_{i}(x) \leq t_{i_{0}}(x) \wedge \bigwedge_{j \in J} t_{j_{0}}(x) & \leq t_{j}(x) \\ & \wedge \bigvee_{r=1}^{m}\left(P_{m}\left(t_{i_{0}}(x)+r 1\right) \wedge\left(t_{i_{0}}(x)+r 1<t_{j_{0}}(x)\right)\right) . \end{aligned} $$ This completes the proof. Note that $L_{\operatorname{PrA}}$ does not contain the relation symbol $\leq$; we just write $t \leq t^{\prime}$ to abbreviate $\left(t<t^{\prime}\right) \vee\left(t=t^{\prime}\right)$. Remark. It now follows from Corollary 4.3.2 that PrA is complete: it has QE and $\tilde{\mathbf{Z}}$ can be embedded in every model. Discussion. The careful reader will have noticed that the elimination procedure in the proof above is constructive: it describes an algorithm that, given any basic conjunction $\varphi(x, y)$ in $L_{\mathrm{PrA}}$ as input, constructs a q-free formula $\psi(x)$ of $L_{\mathrm{PrA}}$ such that PrA $\vdash \exists y \varphi(x, y) \leftrightarrow \psi(x)$. In view of the equally constructive proof of Lemma 4.3.3 this yields an algorithm that, given any $L_{\mathrm{PrA}}$-formula $\varphi(x)$ as input, constructs a q-free $L_{\mathrm{PrA}}$-formula $\varphi^{\mathrm{qf}}(x)$ such that $\operatorname{PrA} \vdash \varphi(x) \leftrightarrow \varphi^{\mathrm{qf}}(x)$. (Thus PrA has effective QE.) In particular, this last algorithm constructs for any $L_{\mathrm{PrA}}$-sentence $\sigma$ a q-free $L_{\operatorname{PrA}}$-sentence $\sigma^{\mathrm{qf}}$ such that $\operatorname{PrA} \vdash \sigma \leftrightarrow \sigma^{\mathrm{qf}}$. 
Since we also have an obvious algorithm that, given any q-free $L_{\operatorname{PrA}}$-sentence $\sigma^{\mathrm{qf}}$, checks whether $\sigma^{\mathrm{qf}}$ is true in $\tilde{\mathbf{Z}}$, this yields an algorithm that, given any $L_{\operatorname{PrA}}$-sentence $\sigma$, checks whether $\sigma$ is true in $\tilde{\mathbf{Z}}$. Thus the structure $\tilde{\mathbf{Z}}$ is decidable. (A precise definition of decidability will be given in the next Chapter.) The algorithms above can easily be implemented by computer programs. Let some $L$-structure $\mathcal{A}$ be given, and suppose we have an algorithm for deciding whether any given $L$-sentence is true in $\mathcal{A}$. Even if we manage to write a computer program that implements this algorithm, there is no guarantee that the program is of practical use, or feasible: on some moderately small inputs it might have to run for $10^{100}$ years before producing an output. This bad behaviour is not at all unusual: no (classical, sequential) algorithm for deciding the truth of $L_{\mathrm{PrA}}$-sentences in $\tilde{\mathbf{Z}}$ is feasible in a precise technical sense. Results of this kind belong to complexity theory; this is an area where mathematics (logic, number theory,...) and computer science interact. There do exist feasible integer linear programming algorithms that decide the truth in $\tilde{\mathbf{Z}}$ of sentences of a special form, and this shows another (very practical) side of complexity theory. A positive impact of $\mathrm{QE}$ is that it yields structural properties of definable sets, as in Exercises (1) and (2) of Section 4.3, and as we discuss next for $\tilde{\mathbf{Z}}$. Definition. Let $d$ be a positive integer. An arithmetic progression of modulus $d$ is a set of the form $$ \{k \in \mathbf{Z}: k \equiv r \quad \bmod d, \alpha<k<\beta\}, $$ where $r \in\{0, \ldots, d-1\}, \alpha, \beta \in \mathbf{Z} \cup\{-\infty,+\infty\}, \alpha<\beta$. We leave the proof of the next lemma to the reader. 
Lemma 4.4.3. Arithmetic progressions have the following properties. (1) If $P, Q \subseteq \mathbf{Z}$ are arithmetic progressions of moduli d and e respectively, then $P \cap Q$ is an arithmetic progression of modulus $\operatorname{lcm}(d, e)$. (2) If $P \subseteq \mathbf{Z}$ is an arithmetic progression, then $\mathbf{Z} \backslash P$ is a finite union of arithmetic progressions. (3) Let $\mathcal{P}$ be the collection of all finite unions of arithmetic progressions. Then $\mathcal{P}$ contains with any two sets $X, Y$ also $X \cup Y, X \cap Y, X \backslash Y$. Corollary 4.4.4. Let $S \subseteq \mathbf{Z}$. Then $S$ is definable in $\tilde{\mathbf{Z}} \Longleftrightarrow S$ is a finite union of arithmetic progressions. Proof. $(\Leftarrow)$ It suffices to show that each arithmetic progression is definable in $\tilde{\mathbf{Z}}$; this is straightforward and left to the reader. $(\Rightarrow)$ By QE and Lemma 4.4.3 it suffices to show that each atomic $L_{\mathrm{PrA}}$-formula $\varphi(x)$ defines in $\tilde{\mathbf{Z}}$ a finite union of arithmetic progressions. Every atomic formula $\varphi(x)$ different from $\top$ and $\perp$ has the form $t_{1}(x)<t_{2}(x)$ or the form $t_{1}(x)=t_{2}(x)$ or the form $P_{d}(t(x))$, where $t_{1}(x), t_{2}(x)$ and $t(x)$ are $L_{\mathrm{PrA}}$-terms. The first two kinds reduce to $t(x)>0$ and $t(x)=0$ respectively (by subtraction). It follows that we may assume that $\varphi(x)$ has the form $k x+l 1>0$, or the form $k x+l 1=0$, or the form $P_{d}(k x+l 1)$, where $k, l \in \mathbf{Z}$. Considering cases $(k=0, k \neq 0$ and $k \equiv 0 \bmod d$, and so on), we see that such a $\varphi(x)$ defines an arithmetic progression. ## Exercises. (1) The set $2 \mathbf{Z}$ cannot be defined in the structure $(\mathbf{Z} ; 0,1,+,-,<)$ by a $q$-free formula of the language $\{0,1,+,-,<\}$. 
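As a purely numerical illustration of Lemma 4.4.3(1) (not part of the formal development; the function names, moduli, and bounds below are our own choices), one can represent a progression by its parameters and check on a finite window of integers that the intersection of two progressions of moduli $4$ and $6$ lies in a single residue class modulo $\operatorname{lcm}(4,6)=12$:

```python
from math import gcd

def lcm(d, e):
    return d * e // gcd(d, e)

def progression(r, d, lo, hi, window):
    """The arithmetic progression {k : k ≡ r mod d, lo < k < hi},
    truncated to the finite window of integers."""
    return {k for k in window if k % d == r and lo < k < hi}

window = range(-100, 101)
P = progression(1, 4, -50, 80, window)   # k ≡ 1 mod 4
Q = progression(3, 6, -60, 70, window)   # k ≡ 3 mod 6

# Lemma 4.4.3(1): P ∩ Q should be a progression of modulus lcm(4, 6) = 12.
m = lcm(4, 6)
inter = P & Q
residues = {k % m for k in inter}        # the residue class(es) mod 12 hit
assert len(residues) <= 1                # at most one class: a progression
```

Here the intersection turns out to be the class of $9$ modulo $12$, with the bounds $-50 < k < 70$ inherited from the tighter bound on each side, in accordance with the lemma.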
### Skolemization and Extension by Definition In this section $L$ is a sublanguage of $L^{\prime}, \Sigma$ is a set of $L$-sentences, and $\Sigma^{\prime}$ is a set of $L^{\prime}$-sentences with $\Sigma \subseteq \Sigma^{\prime}$. Definition. $\Sigma^{\prime}$ is said to be conservative over $\Sigma$ (or a conservative extension of $\Sigma$ ) if for every $L$-sentence $\sigma$, $$ \Sigma^{\prime} \vdash_{L^{\prime}} \sigma \Longleftrightarrow \Sigma \vdash_{L} \sigma . $$ Here $(\Longrightarrow)$ is the significant direction, since $(\Longleftarrow)$ is automatic. Note: (1) If $\Sigma^{\prime}$ is conservative over $\Sigma$, then: $\Sigma$ is consistent $\Leftrightarrow \Sigma^{\prime}$ is consistent. (2) If each model of $\Sigma$ has an $L^{\prime}$-expansion to a model of $\Sigma^{\prime}$, then $\Sigma^{\prime}$ is conservative over $\Sigma$. (This follows easily from the Completeness Theorem.) Proposition 4.5.1. Let $\varphi(x, y)$ be an L-formula, $x=\left(x_{1}, \ldots, x_{m}\right)$. Let $f_{\varphi}$ be an $m$-ary function symbol not in $L$, and put $L^{\prime}:=L \cup\left\{f_{\varphi}\right\}$ and $$ \Sigma^{\prime}:=\Sigma \cup\left\{\forall x\left(\exists y \varphi(x, y) \rightarrow \varphi\left(x, f_{\varphi}(x)\right)\right)\right\} $$ where $\forall x:=\forall x_{1} \ldots \forall x_{m}$. Then $\Sigma^{\prime}$ is conservative over $\Sigma$. Proof. Let $\mathcal{A}$ be any model of $\Sigma$. By (2) above it suffices to obtain an $L^{\prime}$-expansion $\mathcal{A}^{\prime}$ of $\mathcal{A}$ that makes the new axiom about $f_{\varphi}$ true. Choose a function $f: A^{m} \longrightarrow A$ as follows. For any $a \in A^{m}$, if there is a $b \in A$ such that $\mathcal{A} \models \varphi(a, b)$ then we let $f(a)$ be such an element $b$, and if no such $b$ exists, we let $f(a)$ be an arbitrary element of $A$.
Interpreting $f_{\varphi}$ as the function $f$ gives an $L^{\prime}$-expansion $\mathcal{A}^{\prime}$ of $\mathcal{A}$ with $$ \mathcal{A}^{\prime} \models \exists y \varphi(x, y) \rightarrow \varphi\left(x, f_{\varphi}(x)\right) $$ as desired. Remark. A function $f$ as in the proof is called a Skolem function in $\mathcal{A}$ for the formula $\varphi(x, y)$. It yields a "witness" for each relevant $m$-tuple. Definition. Given an $L$-formula $\varphi(x)$ with $x=\left(x_{1}, \ldots, x_{m}\right)$, let $R_{\varphi}$ be an $m$-ary relation symbol not in $L$, and put $L_{\varphi}:=L \cup\left\{R_{\varphi}\right\}$ and $$ \Sigma_{\varphi}:=\Sigma \cup\left\{\forall x\left(\varphi(x) \leftrightarrow R_{\varphi}(x)\right)\right\} . $$ The sentence $\forall x\left(\varphi(x) \leftrightarrow R_{\varphi}(x)\right)$ is called the defining axiom for $R_{\varphi}$. We call $\Sigma_{\varphi}$ an extension of $\Sigma$ by a definition for the relation symbol $R_{\varphi}$. Remark. Each model $\mathcal{A}$ of $\Sigma$ has a unique $L_{\varphi}$-expansion $\mathcal{A}_{\varphi} \models \Sigma_{\varphi}$. Every model of $\Sigma_{\varphi}$ is of the form $\mathcal{A}_{\varphi}$ for a unique model $\mathcal{A}$ of $\Sigma$. Proposition 4.5.2. Let $\varphi=\varphi(x)$ be as above. Then: (1) $\Sigma_{\varphi}$ is conservative over $\Sigma$. (2) For each $L_{\varphi}$-formula $\psi(y)$ where $y=\left(y_{1}, \ldots, y_{n}\right)$ there is an L-formula $\psi^{*}(y)$, called a translation of $\psi(y)$, such that $\Sigma_{\varphi} \vdash \psi(y) \leftrightarrow \psi^{*}(y)$. (3) Suppose $\mathcal{A} \models \Sigma$ and $S \subseteq A^{m}$. Then $S$ is 0-definable in $\mathcal{A}$ if and only if $S$ is 0-definable in $\mathcal{A}_{\varphi}$, and the same with definable instead of 0-definable. Proof. (1) is clear from the remark preceding the proposition, and (3) is immediate from (2).
To prove (2) we observe that by the Equivalence Theorem (3.3.2) it suffices to prove it for formulas $\psi(y)=R_{\varphi} t_{1}(y) \ldots t_{m}(y)$ where the $t_{i}$ are $L$-terms. In this case we can take $$ \exists u_{1} \ldots \exists u_{m}\left(u_{1}=t_{1}(y) \wedge \ldots \wedge u_{m}=t_{m}(y) \wedge \varphi\left(u_{1} / x_{1}, \ldots, u_{m} / x_{m}\right)\right) $$ as $\psi^{*}(y)$, where the variables $u_{1}, \ldots, u_{m}$ do not appear in $\varphi$ and are not among $y_{1}, \ldots, y_{n}$. Definition. Suppose $\varphi(x, y)$ is an $L$-formula where $(x, y)=\left(x_{1}, \ldots, x_{m}, y\right)$ is a tuple of $m+1$ distinct variables, such that $\Sigma \vdash \forall x \exists^{!} y \varphi(x, y)$, where $\exists^{!} y \varphi(x, y)$ abbreviates $\exists y(\varphi(x, y) \wedge \forall z(\varphi(x, z) \rightarrow y=z))$, with $z$ a variable not occurring in $\varphi$ and not among $x_{1}, \ldots, x_{m}, y$. Let $f_{\varphi}$ be an $m$-ary function symbol not in $L$ and put $L^{\prime}:=L \cup\left\{f_{\varphi}\right\}$ and $$ \Sigma^{\prime}:=\Sigma \cup\left\{\forall x \varphi\left(x, f_{\varphi}(x)\right)\right\} $$ The sentence $\forall x \varphi\left(x, f_{\varphi}(x)\right)$ is called the defining axiom for $f_{\varphi}$. We call $\Sigma^{\prime}$ an extension of $\Sigma$ by a definition for the function symbol $f_{\varphi}$. Remark. Each model $\mathcal{A}$ of $\Sigma$ has a unique $L^{\prime}$-expansion $\mathcal{A}^{\prime} \models \Sigma^{\prime}$. Every model of $\Sigma^{\prime}$ is of the form $\mathcal{A}^{\prime}$ for a unique model $\mathcal{A}$ of $\Sigma$. Proposition 4.5.2 goes through when $L_{\varphi}, \Sigma_{\varphi}$, and $\mathcal{A}_{\varphi}$ are replaced by $L^{\prime}, \Sigma^{\prime}$, and $\mathcal{A}^{\prime}$, respectively. We leave the proof of this as an exercise. (Hint: reduce the proof of the analogue of (2) to the case of an unnested formula.) Definitional expansions.
It is also useful to consider expansions of a structure $\mathcal{A}$ by several primitives, each 0-definable in $\mathcal{A}$. To discuss this situation, let $\mathcal{A}^{\prime}$ be an $L^{\prime}$-expansion of the $L$-structure $\mathcal{A}$. Then we call $\mathcal{A}^{\prime}$ a definitional expansion of $\mathcal{A}$ if for each symbol $s \in L^{\prime} \backslash L$ the interpretation $s^{\mathcal{A}^{\prime}}$ of $s$ in $\mathcal{A}^{\prime}$ is 0-definable in $\mathcal{A}$. Assume $\mathcal{A}^{\prime}$ is a definitional expansion of $\mathcal{A}$. Take for each $m$-ary relation symbol $R$ of $L^{\prime} \backslash L$ an $L$-formula $\varphi_{R}\left(x_{1}, \ldots, x_{m}\right)$ that defines the set $R^{\mathcal{A}^{\prime}} \subseteq A^{m}$ in $\mathcal{A}$, and for each $n$-ary function symbol $F$ of $L^{\prime} \backslash L$ an $L$-formula $\varphi_{F}\left(x_{1}, \ldots, x_{n}, y\right)$ that defines the graph of the map $F^{\mathcal{A}^{\prime}}: A^{n} \rightarrow A$ in $\mathcal{A}$. For $R$ as above, call the sentence $$ \forall x_{1} \ldots \forall x_{m}\left(R x_{1} \ldots x_{m} \longleftrightarrow \varphi_{R}\left(x_{1}, \ldots, x_{m}\right)\right) $$ the defining axiom for $R$, and for $F$ as above, call the sentence $$ \forall x_{1} \ldots \forall x_{n} \forall y\left(F x_{1} \ldots x_{n}=y \longleftrightarrow \varphi_{F}\left(x_{1}, \ldots, x_{n}, y\right)\right) $$ the defining axiom for $F$. Let $D$ be the set of defining axioms for the symbols in $L^{\prime} \backslash L$ obtained in this way. So $D$ is a set of $L^{\prime}$-sentences. Lemma 4.5.3. For each $L^{\prime}$-formula $\varphi^{\prime}(y)$, where $y=\left(y_{1}, \ldots, y_{n}\right)$, there is an $L$-formula $\varphi(y)$ such that $D \vdash \varphi^{\prime}(y) \longleftrightarrow \varphi(y)$. The proof goes by induction on formulas using the Equivalence Theorem and is left to the reader. (It might help to restrict first to unnested formulas; see Section 3.3, Exercise (4).)
Assume now that $L$ and $L^{\prime}$ are finite, so $D$ is finite. Then the proof gives an effective procedure that, on any input $\varphi^{\prime}(y)$ as in the lemma, yields an output $\varphi(y)$ with the property stated in the lemma. Defining $\mathcal{A}$ in $\mathcal{B}$. Before introducing the next concept we consider a simple case. Let $(A ;<)$ be a totally ordered set, $A \neq \emptyset$. By a definition of $(A ;<)$ in a structure $\mathcal{B}$ we mean an injective map $\delta: A \rightarrow B^{k}$, with $k \in \mathbf{N}$, such that (i) $\delta(A) \subseteq B^{k}$ is definable in $\mathcal{B}$. (ii) The set $\{(\delta(a), \delta(b)): a<b$ in $A\} \subseteq\left(B^{k}\right)^{2}=B^{2 k}$ is definable in $\mathcal{B}$. For example, we have a definition $\delta: \mathbf{Z} \rightarrow \mathbf{N}^{2}$ of the ordered set $(\mathbf{Z} ;<)$ of integers in the additive monoid $(\mathbf{N} ; 0,+)$ of natural numbers, given by $\delta(n)=(n, 0)$ and $\delta(-n)=(0, n)$ for $n \in \mathbf{N}$. (We leave it to the reader to check this.) It can be shown with tools a little beyond the scope of these notes that no infinite totally ordered set can be defined in the field of complex numbers. In order to extend this notion to arbitrary structures $\mathcal{A}=(A ; \ldots)$ we use the following notation and terminology. Let $X, Y$ be sets, $f: X \rightarrow Y$ a map, and $S \subseteq X^{n}$. Then the $f$-image of $S$ is the subset $$ f(S):=\left\{\left(f\left(x_{1}\right), \ldots, f\left(x_{n}\right)\right):\left(x_{1}, \ldots, x_{n}\right) \in S\right\} $$ of $Y^{n}$. Also, given $k \in \mathbf{N}$, we use the bijection $$ \left(\left(y_{11}, \ldots, y_{1 k}\right), \ldots,\left(y_{n 1}, \ldots, y_{n k}\right)\right) \mapsto\left(y_{11}, \ldots, y_{1 k}, \ldots, y_{n 1}, \ldots, y_{n k}\right) $$ from $\left(Y^{k}\right)^{n}$ to $Y^{n k}$ to identify these two sets. Definition.
A definition of an $L$-structure $\mathcal{A}$ in a structure $\mathcal{B}$ is an injective map $\delta: A \rightarrow B^{k}$, with $k \in \mathbf{N}$, such that (i) $\delta(A) \subseteq B^{k}$ is definable in $\mathcal{B}$; (ii) for each $m$-ary $R \in L^{\mathrm{r}}$ the set $\delta\left(R^{\mathcal{A}}\right) \subseteq\left(B^{k}\right)^{m}=B^{m k}$ is definable in $\mathcal{B}$; (iii) for each $n$-ary $F \in L^{\mathrm{f}}$ the set $\delta\left(\right.$ graph of $\left.F^{\mathcal{A}}\right) \subseteq\left(B^{k}\right)^{n+1}=B^{(n+1) k}$ is definable in $\mathcal{B}$. Remark. Here $\mathcal{B}$ is a structure for a language $L^{*}$ that may have nothing to do with the language $L$. Replacing everywhere "definable" by "0-definable", we get the notion of a 0-definition of $\mathcal{A}$ in $\mathcal{B}$. A more general way of viewing a structure $\mathcal{A}$ as in some sense living inside a structure $\mathcal{B}$ is to allow $\delta$ to be an injective map from $A$ into $B^{k} / E$ for some equivalence relation $E$ on $B^{k}$ that is definable in $\mathcal{B}$, and imposing suitable conditions. Our special case corresponds to $E=$ equality on $B^{k}$. (We do not develop this idea further here: the right setting for it would be many-sorted structures, rather than our one-sorted structures.) Recall that by Lagrange's "four squares" theorem we have $$ \mathbf{N}=\left\{a^{2}+b^{2}+c^{2}+d^{2}: a, b, c, d \in \mathbf{Z}\right\} . $$ It follows that the inclusion map $\mathbf{N} \rightarrow \mathbf{Z}$ is a 0-definition of $(\mathbf{N} ; 0,+, \cdot,<)$ in $(\mathbf{Z} ; 0,1,+,-, \cdot)$. The bijection $$ a+b i \mapsto(a, b): \mathbf{C} \rightarrow \mathbf{R}^{2} \quad(a, b \in \mathbf{R}) $$ is a 0-definition of the field $(\mathbf{C} ; 0,1,+,-, \cdot)$ of complex numbers in the field $(\mathbf{R} ; 0,1,+,-, \cdot)$ of real numbers.
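The four-squares identity underlying this 0-definition of $\mathbf{N}$ in $\mathbf{Z}$ is easy to confirm by brute force on small numbers. The check below is merely illustrative (the helper name is our own); note that squares of negative integers contribute nothing new, so it suffices to range over $\mathbf{N}$:

```python
from itertools import product

def is_sum_of_four_squares(n):
    """Brute-force check that n = a² + b² + c² + d² for some a, b, c, d ≥ 0.
    By Lagrange's theorem this holds for every natural number n."""
    bound = int(n ** 0.5) + 1          # each summand is at most n
    return any(a*a + b*b + c*c + d*d == n
               for a, b, c, d in product(range(bound), repeat=4))

# N = {a² + b² + c² + d² : a, b, c, d ∈ Z}, at least on this sample.
assert all(is_sum_of_four_squares(n) for n in range(200))
```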
On the other hand, there is no definition of the field of real numbers in the field of complex numbers: this follows from the fact, stated earlier without proof, that no infinite totally ordered set admits a definition in the field of complex numbers. (A special case says that $\mathbf{R}$, considered as a subset of $\mathbf{C}$, is not definable in the field of complex numbers; this follows easily from the fact that ACF admits QE, see Section 4.3, Exercise (1).) Indeed, it is known that the only fields definable in the field of complex numbers are finite fields and fields isomorphic to the field of complex numbers itself. Proposition 4.5.4. Let $\delta: A \rightarrow B^{k}$ be a 0-definition of the $L$-structure $\mathcal{A}$ in the $L^{*}$-structure $\mathcal{B}$. Let $x_{1}, \ldots, x_{n}$ be distinct variables (viewed as ranging over $A$), and let $x_{11}, \ldots, x_{1 k}, \ldots, x_{n 1}, \ldots, x_{n k}$ be $n k$ distinct variables (viewed as ranging over $B$). Then we have a map that assigns to each $L$-formula $\varphi\left(x_{1}, \ldots, x_{n}\right)$ an $L^{*}$-formula $\delta \varphi\left(x_{11}, \ldots, x_{1 k}, \ldots, x_{n 1}, \ldots, x_{n k}\right)$ such that $$ \delta\left(\varphi^{\mathcal{A}}\right)=(\delta \varphi)^{\mathcal{B}} \subseteq B^{n k} $$ In particular, for $n=0$ the map above assigns to each $L$-sentence $\sigma$ an $L^{*}$-sentence $\delta \sigma$ such that $\mathcal{A} \models \sigma \Longleftrightarrow \mathcal{B} \models \delta \sigma$. This proposition is a byproduct of what follows, and is good enough for model-theoretic purposes, but for use in the next Chapter we need to be a bit more precise. In particular, we assume below that the variables are just $\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots$. (Of course, Proposition 4.5.4 is not affected by this assumption.) Let languages $L$ and $L^{*}$ be given.
Given any 0-definition $\delta: A \rightarrow B^{k}$ of an $L$-structure $\mathcal{A}=(A ; \ldots)$ into an $L^{*}$-structure $\mathcal{B}=(B ; \ldots)$, we shall translate any $L$-formula about $\mathcal{A}$ into an equivalent $L^{*}$-formula about $\mathcal{B}$. But what is meant here by "translate" and "equivalent"? This is what we need to make explicit. For use in decidability issues in the next chapter it is important to do this translation in a way that depends only on $L, k, L^{*}$ and the formulas of $L^{*}$ that define $\delta(A) \subseteq B^{k}$ and the sets $\delta\left(R^{\mathcal{A}}\right)$ and $\delta\left(\operatorname{graph}\left(F^{\mathcal{A}}\right)\right)$ in $\mathcal{B}$, for $R \in L^{\mathrm{r}}$ and $F \in L^{\mathrm{f}}$, but not on the structures $\mathcal{A}$ and $\mathcal{B}$ or on the map $\delta$ that defines $\mathcal{A}$ in $\mathcal{B}$. It is not hard to do this, but the details are somewhat lengthy. (Fortunately, they are trivial to verify when fully written out.) We now proceed with these details. We first define a kind of copy $L_{k}$ of the language $L$; it depends only on $L$ and the natural number $k$. The symbols of the language $L_{k}$ are the following: (a) a relation symbol $U$ of arity $k$, (b) for each $m$-ary $R \in L^{\mathrm{r}}$ a relation symbol $R_{k}$ of arity $m k$, (c) for each $n$-ary $F \in L^{\mathrm{f}}$ a relation symbol $F_{k}$ of arity $(n+1) k$. We insist that $U$ is different from $s_{k}$ for each symbol $s \in L$, and that different $s \in L$ give different $s_{k}$. For each variable $x=\mathrm{v}_{j}$, let $x_{1}, \ldots, x_{k}$ be the variables $\mathrm{v}_{j k+1}, \ldots, \mathrm{v}_{j k+k}$, in this order. 
Next, we define a map $\varphi \mapsto \varphi_{k}$ from the set of unnested $L$-formulas into the set of $L_{k}$-formulas such that if $\varphi$ has the form $\varphi\left(x_{1}, \ldots, x_{n}\right)$, then $\varphi_{k}$ will have the form $\varphi_{k}\left(x_{11}, \ldots, x_{1 k}, \ldots, x_{n 1}, \ldots, x_{n k}\right)$. (Thus if $x_{i}$ happens to be the variable $\mathrm{v}_{j}$, then $x_{i 1}, \ldots, x_{i k}$ are the variables $\mathrm{v}_{j k+1}, \ldots, \mathrm{v}_{j k+k}$, in this order, according to our convention.) The definition of this map $\varphi \mapsto \varphi_{k}$ is by recursion on (unnested) formulas: (i) if $\varphi$ is $\top$, then $\varphi_{k}$ is $\top$, and if $\varphi$ is $\perp$, then $\varphi_{k}$ is $\perp$; (ii) if $\varphi$ is $x=y$, then $\varphi_{k}$ is $x_{1}=y_{1} \wedge \cdots \wedge x_{k}=y_{k}$; (iii) if $\varphi$ is $R x_{1} \ldots x_{m}$ with $m$-ary $R \in L^{\mathrm{r}}$, then $\varphi_{k}$ is $$ R_{k} x_{11} \ldots x_{1 k} \ldots x_{m 1} \ldots x_{m k} $$ (iv) if $\varphi$ is $F x_{1} \ldots x_{n}=y$ with $n$-ary $F \in L^{\mathrm{f}}$, then $\varphi_{k}$ is $$ F_{k} x_{11} \ldots x_{1 k} \ldots x_{n 1} \ldots x_{n k} y_{1} \ldots y_{k} $$ (v) if $\varphi$ is $\neg \psi$, then $\varphi_{k}$ is $\neg \psi_{k}$; (vi) if $\varphi$ is $\psi \vee \theta$, then $\varphi_{k}$ is $\psi_{k} \vee \theta_{k}$; (vii) if $\varphi$ is $\psi \wedge \theta$, then $\varphi_{k}$ is $\psi_{k} \wedge \theta_{k}$; (viii) if $\varphi$ is $\exists x \psi$, then $\varphi_{k}$ is $\exists x_{1} \ldots \exists x_{k}\left(U x_{1} \ldots x_{k} \wedge \psi_{k}\right)$; (ix) if $\varphi$ is $\forall x \psi$, then $\varphi_{k}$ is $\forall x_{1} \ldots \forall x_{k}\left(U x_{1} \ldots x_{k} \rightarrow \psi_{k}\right)$. In the rest of this section we assume that $\delta: A \rightarrow B^{k}$ is a 0-definition of the $L$-structure $\mathcal{A}=(A ; \ldots)$ in the $L^{*}$-structure $\mathcal{B}=(B ; \ldots)$.
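Since the recursion (i)–(ix) is purely syntactic, it is easy to prototype. The sketch below is our own illustration, using an ad-hoc tuple encoding of unnested formulas (a variable $\mathrm{v}_j$ is encoded as the integer $j$, and the tags and helper names are ours); in particular it implements the relativization of quantifiers to $U$ from clauses (viii) and (ix):

```python
def translate(phi, k):
    """Map an unnested formula phi to phi_k, following clauses (i)-(ix).
    block(j) lists the k B-variables v_{jk+1}, ..., v_{jk+k} that stand
    for the A-variable v_j, per the convention in the text."""
    def block(j):
        return [j * k + i for i in range(1, k + 1)]

    op = phi[0]
    if op in ("top", "bot"):                       # (i)
        return phi
    if op == "eq":                                 # (ii): x = y
        _, x, y = phi
        return ("and_list", [("eq", a, b) for a, b in zip(block(x), block(y))])
    if op == "rel":                                # (iii): R x1 ... xm
        _, R, xs = phi
        return ("rel", R + "_k", [v for x in xs for v in block(x)])
    if op == "fun":                                # (iv): F x1 ... xn = y
        _, F, xs, y = phi
        return ("rel", F + "_k", [v for x in xs for v in block(x)] + block(y))
    if op == "not":                                # (v)
        return ("not", translate(phi[1], k))
    if op in ("or", "and"):                        # (vi), (vii)
        return (op, translate(phi[1], k), translate(phi[2], k))
    if op == "exists":                             # (viii): relativize to U
        _, x, psi = phi
        return ("exists_list", block(x),
                ("and", ("rel", "U", block(x)), translate(psi, k)))
    if op == "forall":                             # (ix): relativize to U
        _, x, psi = phi
        return ("forall_list", block(x),
                ("implies", ("rel", "U", block(x)), translate(psi, k)))
    raise ValueError(op)
```

For instance, with $k=2$ the formula $\exists \mathrm{v}_{0}\, R \mathrm{v}_{0} \mathrm{v}_{1}$ translates to $\exists \mathrm{v}_{1} \exists \mathrm{v}_{2}\left(U \mathrm{v}_{1} \mathrm{v}_{2} \wedge R_{k} \mathrm{v}_{1} \mathrm{v}_{2} \mathrm{v}_{3} \mathrm{v}_{4}\right)$, as in clause (viii).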
We arrange that $L^{*}$ and $L_{k}$ are disjoint, and form the language $L_{k}^{*}:=L^{*} \cup L_{k}$. We now expand $\mathcal{B}$ to an $L_{k}^{*}$-structure $\mathcal{B}_{k}$ as follows: interpret $U$ as $\delta(A)$, and $R_{k}$ for $m$-ary $R \in L^{\mathrm{r}}$ as $\delta\left(R^{\mathcal{A}}\right)$, and $F_{k}$ for $n$-ary $F \in L^{\mathrm{f}}$ as $\delta\left(\operatorname{graph}\left(F^{\mathcal{A}}\right)\right)$. A straightforward induction gives: Lemma 4.5.5. For any unnested $L$-formula $\varphi\left(x_{1}, \ldots, x_{n}\right)$ and $a_{1}, \ldots, a_{n} \in A$, $$ \mathcal{A} \models \varphi\left(a_{1}, \ldots, a_{n}\right) \Longleftrightarrow \mathcal{B}_{k} \models \varphi_{k}\left(\delta\left(a_{1}\right), \ldots, \delta\left(a_{n}\right)\right) . $$ It is clear that $\mathcal{B}_{k}$ is a definitional expansion of $\mathcal{B}$. Explicitly, let $U^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{k}\right)$ be an $L^{*}$-formula that defines $\delta(A)$ in $\mathcal{B}$; for each $m$-ary $R \in L^{\mathrm{r}}$, let $R^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{m k}\right)$ be an $L^{*}$-formula that defines $\delta\left(R^{\mathcal{A}}\right)$ in $\mathcal{B}$; for each $n$-ary $F \in L^{\mathrm{f}}$, let $$ F^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{(n+1) k}\right) $$ be an $L^{*}$-formula that defines $\delta\left(\operatorname{graph}\left(F^{\mathcal{A}}\right)\right)$ in $\mathcal{B}$. Then in $\mathcal{B}_{k}$ the $L_{k}$-formula $U \mathrm{v}_{1} \ldots \mathrm{v}_{k}$ is equivalent to the $L^{*}$-formula $U^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{k}\right)$, for $m$-ary $R \in L^{\mathrm{r}}$ the $L_{k}$-formula $R_{k} \mathrm{v}_{1} \ldots \mathrm{v}_{m k}$ is equivalent to the $L^{*}$-formula $R^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{m k}\right)$, and for $n$-ary $F \in L^{\mathrm{f}}$, the $L_{k}$-formula $F_{k} \mathrm{v}_{1} \ldots \mathrm{v}_{(n+1) k}$ is equivalent to the $L^{*}$-formula $F^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{(n+1) k}\right)$.
So the defining axiom for $U$ is $$ \forall \mathrm{v}_{1} \ldots \forall \mathrm{v}_{k}\left(U \mathrm{v}_{1} \ldots \mathrm{v}_{k} \longleftrightarrow U^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{k}\right)\right) $$ for $m$-ary $R \in L^{\mathrm{r}}$ the defining axiom for $R_{k}$ is $$ \forall \mathrm{v}_{1} \ldots \forall \mathrm{v}_{m k}\left(R_{k} \mathrm{v}_{1} \ldots \mathrm{v}_{m k} \longleftrightarrow R^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{m k}\right)\right) $$ and for $n$-ary $F \in L^{\mathrm{f}}$ the defining axiom for $F_{k}$ is $$ \forall \mathrm{v}_{1} \ldots \forall \mathrm{v}_{(n+1) k}\left(F_{k} \mathrm{v}_{1} \ldots \mathrm{v}_{(n+1) k} \longleftrightarrow F^{*}\left(\mathrm{v}_{1}, \ldots, \mathrm{v}_{(n+1) k}\right)\right) $$ Let $\operatorname{Def}(\delta)$ be the set of $L_{k}^{*}$-sentences whose members are the defining axioms for $U$ and the $R_{k}$ and $F_{k}$ described above. These defining axioms are true in $\mathcal{B}_{k}$, and define $\mathcal{B}_{k}$ as an expansion of $\mathcal{B}$. Let $\varphi=\varphi\left(x_{1}, \ldots, x_{n}\right)$ be any $L$-formula. Now $\varphi$ is equivalent to an unnested $L$-formula, so Lemma 4.5.5 gives an $L_{k}$-formula $$ \varphi_{k}=\varphi_{k}\left(x_{11}, \ldots, x_{1 k}, \ldots, x_{n 1}, \ldots, x_{n k}\right) $$ such that for all $a_{1}, \ldots, a_{n} \in A$, $$ \mathcal{A} \models \varphi\left(a_{1}, \ldots, a_{n}\right) \Longleftrightarrow \mathcal{B}_{k} \models \varphi_{k}\left(\delta\left(a_{1}\right), \ldots, \delta\left(a_{n}\right)\right) .
$$ Treating $\varphi_{k}$ as an $L_{k}^{*}$-formula we then obtain from Lemma 4.5.3 an $L^{*}$-formula $$ \delta \varphi=(\delta \varphi)\left(x_{11}, \ldots, x_{1 k}, \ldots, x_{n 1}, \ldots, x_{n k}\right) $$ such that $\operatorname{Def}(\delta) \vdash \varphi_{k} \longleftrightarrow \delta \varphi$, and thus for all $a_{1}, \ldots, a_{n} \in A$, $$ \mathcal{A} \models \varphi\left(a_{1}, \ldots, a_{n}\right) \Longleftrightarrow \mathcal{B} \models(\delta \varphi)\left(\delta\left(a_{1}\right), \ldots, \delta\left(a_{n}\right)\right) . $$ Moreover, the map $\varphi \mapsto \varphi_{k}$ depends only on $L, k$, and the map $\varphi \mapsto \delta \varphi$ depends only on $L, k, L^{*}, \operatorname{Def}(\delta)$ (not on $\mathcal{A}, \mathcal{B}$, or $\delta: A \rightarrow B^{k}$). Let us single out the case of sentences, and summarize what we have as follows: Lemma 4.5.6. To each $L$-sentence $\sigma$ is assigned an $L_{k}$-sentence $\sigma_{k}$ such that $\mathcal{A} \models \sigma \Longleftrightarrow \mathcal{B}_{k} \models \sigma_{k}$, and an $L^{*}$-sentence $\delta \sigma$ such that $\operatorname{Def}(\delta) \vdash \sigma_{k} \longleftrightarrow \delta \sigma$. Since $\mathcal{B}_{k} \models \operatorname{Def}(\delta)$, this gives $\mathcal{A} \models \sigma \Longleftrightarrow \mathcal{B} \models \delta \sigma$, for each L-sentence $\sigma$. We shall also need that $\mathcal{B}_{k}$ satisfies certain $L_{k}$-sentences that express: (i) the fact that $U^{\mathcal{B}_{k}} \subseteq B^{k}$ is nonempty; (ii) for each $m$-ary $R \in L^{\mathrm{r}}$ the fact that $R_{k}^{\mathcal{B}_{k}} \subseteq\left(U^{\mathcal{B}_{k}}\right)^{m}$; (iii) for each $n$-ary $F \in L^{\mathrm{f}}$ the fact that the relation $F_{k}^{\mathcal{B}_{k}} \subseteq B^{(n+1) k}$ is the graph of a function $\left(U^{\mathcal{B}_{k}}\right)^{n} \rightarrow U^{\mathcal{B}_{k}}$.
For (i), take the sentence $\exists \mathrm{v}_{1} \ldots \exists \mathrm{v}_{k} U \mathrm{v}_{1} \ldots \mathrm{v}_{k}$. To express (ii), take $$ \forall \mathrm{v}_{1} \ldots \forall \mathrm{v}_{m k}\left(R_{k} \mathrm{v}_{1} \ldots \mathrm{v}_{m k} \rightarrow U \mathrm{v}_{1} \ldots \mathrm{v}_{k} \wedge \cdots \wedge U \mathrm{v}_{(m-1) k+1} \ldots \mathrm{v}_{m k}\right) $$ We leave it to the reader to construct the sentences expressing (iii). Note that (i), (ii), and (iii) yield a set $\Delta(L, k)$ of $L_{k}$-sentences that depends only on $L, k$ and not on $\mathcal{A}, \mathcal{B}$ or $\delta: A \rightarrow B^{k}$. This will play a role in the next Chapter via the following Lemma. Lemma 4.5.7. Let $\Sigma^{*}$ be a set of $L^{*}$-sentences. Define $\Sigma:=$ set of $L$-sentences $\sigma$ such that $\Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k) \vdash \sigma_{k}$. Then for all $L$-sentences $\sigma$, $$ \Sigma \vdash \sigma \Longleftrightarrow \Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k) \vdash \sigma_{k} . $$ Proof. The direction $\Longleftarrow$ holds by the definition of $\Sigma$. For the converse, let $\sigma$ be an $L$-sentence such that $\Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k) \nvdash \sigma_{k}$; it is enough to show that $\Sigma \nvdash \sigma$. The Completeness Theorem provides an $L_{k}^{*}$-structure $\mathcal{D}=(D ; \ldots)$ with $$ \mathcal{D} \models \Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k) \cup\left\{\neg \sigma_{k}\right\} .
$$ Then we define an $L$-structure $\mathcal{C}$ as follows: the underlying set $C$ of $\mathcal{C}$ is given by $C=U^{\mathcal{D}} \subseteq D^{k}$, the interpretation in $\mathcal{C}$ of an $m$-ary $R \in L^{\mathrm{r}}$ is the set $R_{k}^{\mathcal{D}}$ viewed as an $m$-ary relation on $C$, and the interpretation in $\mathcal{C}$ of an $n$-ary $F \in L^{\mathrm{f}}$ is the function $C^{n} \rightarrow C$ whose graph is $F_{k}^{\mathcal{D}}$ when the latter is viewed as an $(n+1)$-ary relation on $C$. By construction the inclusion map $C \hookrightarrow D^{k}$ is a 0-definition of $\mathcal{C}$ in the $L^{*}$-reduct of $\mathcal{D}$, and so for any $L$-sentence $\rho$ we have $\mathcal{C} \models \rho \Longleftrightarrow \mathcal{D} \models \rho_{k}$. It follows that $\mathcal{C} \models \Sigma \cup\{\neg \sigma\}$, and so $\Sigma \nvdash \sigma$, as promised. ## Chapter 5 ## Computability, Decidability, and Incompleteness In this chapter we prove Gödel's famous Incompleteness Theorem. Consider the structure $\mathfrak{N}:=(\mathbf{N} ; 0, S,+, \cdot,<)$, where $S: \mathbf{N} \rightarrow \mathbf{N}$ is the successor function. A simple form of the incompleteness theorem is as follows. Let $\Sigma$ be a computable set of sentences in the language of $\mathfrak{N}$ and true in $\mathfrak{N}$. Then there exists a sentence $\sigma$ in that language such that $\mathfrak{N} \models \sigma$, but $\Sigma \nvdash \sigma$. In other words, no computable set of axioms in the language of $\mathfrak{N}$ and true in $\mathfrak{N}$ can be complete, hence the name Incompleteness Theorem. ${ }^{1}$ The only unexplained terminology here is "computable." Intuitively, "$\Sigma$ is computable" means that there is an algorithm to recognize whether any given sentence in the language of $\mathfrak{N}$ belongs to $\Sigma$. (It seems reasonable to require this of an axiom system for $\mathfrak{N}$.) Thus we begin this chapter with developing the notion of computability.
The interest of this notion is tied to the Church-Turing Thesis as explained in Section 5.2, and goes far beyond incompleteness. For example, computability plays a role in combinatorial group theory (Higman's Theorem) and in certain diophantine questions (Hilbert's 10th problem), not to mention its role in the ideological underpinnings of computer science. ### Computable Functions First some notation. We let $\mu x(..x..)$ denote the least $x \in \mathbf{N}$ for which $..x..$ holds. Here $..x..$ is some condition on natural numbers $x$. For example, $\mu x\left(x^{2}>7\right)=3$. We will only use this notation when the meaning of $..x..$ is clear and the set $\{x \in \mathbf{N}: ..x..\}$ is non-empty. For $a \in \mathbf{N}$ we also let $\mu x_{<a}(..x..)$ be the least $x<a$ in $\mathbf{N}$ such that $..x..$ holds if there is such an $x$, and if there is no such $x$ we put $\mu x_{<a}(..x..):=a$. For example, $\mu x_{<4}\left(x^{2}>3\right)=2$ and $\mu x_{<2}(x>5)=2$. ${ }^{1}$ A better name would have been Incompletability Theorem. Definition. For $R \subseteq \mathbf{N}^{n}$, we define $\chi_{R}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ by $\chi_{R}(a)= \begin{cases}1 & \text { if } a \in R, \\ 0 & \text { if } a \notin R\end{cases}$ Think of such $R$ as an $n$-ary relation on $\mathbf{N}$. We call $\chi_{R}$ the characteristic function of $R$, and often write $R\left(a_{1}, \ldots, a_{n}\right)$ instead of $\left(a_{1}, \ldots, a_{n}\right) \in R$. Example. $\chi_{<}(m, n)=1$ iff $m<n$, and $\chi_{<}(m, n)=0$ iff $m \geq n$. Definition. For $i=1, \ldots, n$ we define $I_{i}^{n}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ by $I_{i}^{n}\left(a_{1}, \ldots, a_{n}\right)=a_{i}$. These functions are called coordinate functions. Definition.
The computable functions (or recursive functions) are the functions from $\mathbf{N}^{n}$ to $\mathbf{N}$ (for $n=0,1,2, \ldots$ ) obtained by inductively applying the following rules: (R1) $+: \mathbf{N}^{2} \rightarrow \mathbf{N}$, $\cdot: \mathbf{N}^{2} \rightarrow \mathbf{N}$, $\chi_{\leq}: \mathbf{N}^{2} \rightarrow \mathbf{N}$, and the coordinate functions $I_{i}^{n}$ (for each $n$ and $i=1, \ldots, n$ ) are computable. (R2) If $G: \mathbf{N}^{m} \rightarrow \mathbf{N}$ is computable and $H_{1}, \ldots, H_{m}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ are computable, then so is the function $F=G\left(H_{1}, \ldots, H_{m}\right): \mathbf{N}^{n} \rightarrow \mathbf{N}$ defined by $$ F(a)=G\left(H_{1}(a), \ldots, H_{m}(a)\right) . $$ (R3) If $G: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ is computable, and for all $a \in \mathbf{N}^{n}$ there exists $x \in \mathbf{N}$ such that $G(a, x)=0$, then the function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ given by $$ F(a)=\mu x(G(a, x)=0) $$ is computable. A relation $R \subseteq \mathbf{N}^{n}$ is said to be computable (or recursive) if its characteristic function $\chi_{R}: \mathbf{N}^{n} \longrightarrow \mathbf{N}$ is computable. Example. If $F: \mathbf{N}^{3} \rightarrow \mathbf{N}$ and $G: \mathbf{N}^{2} \rightarrow \mathbf{N}$ are computable, then so is the function $H: \mathbf{N}^{4} \rightarrow \mathbf{N}$ defined by $H\left(x_{1}, x_{2}, x_{3}, x_{4}\right)=F\left(G\left(x_{1}, x_{4}\right), x_{2}, x_{4}\right)$. This follows from (R2) by noting that $H(x)=F\left(G\left(I_{1}^{4}(x), I_{4}^{4}(x)\right), I_{2}^{4}(x), I_{4}^{4}(x)\right)$ where $x=\left(x_{1}, x_{2}, x_{3}, x_{4}\right)$. We shall use this device from now on in many proofs, but only tacitly. (The reader should of course notice when we do so.) From (R1), (R2) and (R3) we derive further rules for obtaining computable functions. This is mostly an exercise in programming. Lemma 5.1.1.
Let $H_{1}, \ldots, H_{m}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ and $R \subseteq \mathbf{N}^{m}$ be computable. Then $R\left(H_{1}, \ldots, H_{m}\right) \subseteq \mathbf{N}^{n}$ is computable, where for $a \in \mathbf{N}^{n}$ we put $$ R\left(H_{1}, \ldots, H_{m}\right)(a) \Longleftrightarrow R\left(H_{1}(a), \ldots, H_{m}(a)\right) . $$ Proof. Observe that $\chi_{R\left(H_{1}, \ldots, H_{m}\right)}=\chi_{R}\left(H_{1}, \ldots, H_{m}\right)$. Now apply (R2). Lemma 5.1.2. The functions $\chi_{\geq}$ and $\chi_{=}$ on $\mathbf{N}^{2}$ are computable. Proof. The function $\chi_{\geq}$ is computable because $$ \chi_{\geq}(m, n)=\chi_{\leq}(n, m)=\chi_{\leq}\left(I_{2}^{2}(m, n), I_{1}^{2}(m, n)\right) $$ which enables us to apply (R1) and (R2). Similarly, $\chi_{=}$ is computable: $$ \chi_{=}(m, n)=\chi_{\leq}(m, n) \cdot \chi_{\geq}(m, n) . $$ For $k \in \mathbf{N}$ we define the constant function $c_{k}^{n}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ by $c_{k}^{n}(a)=k$. Lemma 5.1.3. Every constant function $c_{k}^{n}$ is computable. Proof. By induction on $k$. For $k=0$ we use $$ c_{0}^{n}(a)=\mu x\left(I_{n+1}^{n+1}(a, x)=0\right) . $$ For the step from $k$ to $k+1$, observe that $$ c_{k+1}^{n}(a)=\mu x\left(c_{k}^{n}(a)<x\right)=\mu x\left(\chi_{\geq}\left(c_{k}^{n+1}(a, x), I_{n+1}^{n+1}(a, x)\right)=0\right) $$ for $a \in \mathbf{N}^{n}$. Let $P, Q$ be $n$-ary relations on $\mathbf{N}$. Then we can form the $n$-ary relations $$ \begin{aligned} \neg P & :=\mathbf{N}^{n} \backslash P, \quad P \vee Q:=P \cup Q, \quad P \wedge Q:=P \cap Q, \\ P \rightarrow Q & :=(\neg P) \vee Q, \quad P \leftrightarrow Q:=(P \rightarrow Q) \wedge(Q \rightarrow P) \end{aligned} $$ on $\mathbf{N}$. Lemma 5.1.4. Suppose $P, Q$ are computable. Then $\neg P, P \vee Q, P \wedge Q, P \rightarrow Q$ and $P \leftrightarrow Q$ are also computable. Proof. Let $a \in \mathbf{N}^{n}$. Then $\neg P(a)$ iff $\chi_{P}(a)=0$ iff $\chi_{P}(a)=c_{0}^{n}(a)$, so $\chi_{\neg P}(a)=\chi_{=}\left(\chi_{P}(a), c_{0}^{n}(a)\right)$.
Hence $\neg P$ is computable by (R2) and Lemma 5.1.2. Next, the relation $P \wedge Q$ is computable since $\chi_{P \wedge Q}=\chi_{P} \cdot \chi_{Q}$. By De Morgan's Law, $P \vee Q=\neg(\neg P \wedge \neg Q)$. Thus $P \vee Q$ is computable. The rest is clear. Lemma 5.1.5. The binary relations $<, \leq,=,>, \geq, \neq$ on $\mathbf{N}$ are computable. Proof. The relations $\geq$, $\leq$ and $=$ have already been taken care of by Lemma 5.1.2 and (R1). The remaining relations are complements of these three, so by Lemma 5.1.4 they are also computable. Lemma 5.1.6. (Definition by Cases) Let $R_{1}, \ldots, R_{k} \subseteq \mathbf{N}^{n}$ be computable such that for each $a \in \mathbf{N}^{n}$ exactly one of $R_{1}(a), \ldots, R_{k}(a)$ holds, and suppose that $G_{1}, \ldots, G_{k}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ are computable. Then $G: \mathbf{N}^{n} \rightarrow \mathbf{N}$ given by $$ G(a)=\left\{\begin{array}{cc} G_{1}(a) & \text { if } R_{1}(a) \\ \vdots & \vdots \\ G_{k}(a) & \text { if } R_{k}(a) \end{array}\right. $$ is computable. Proof. This follows from $G=G_{1} \cdot \chi_{R_{1}}+\cdots+G_{k} \cdot \chi_{R_{k}}$. Lemma 5.1.7. (Definition by Cases) Let $R_{1}, \ldots, R_{k} \subseteq \mathbf{N}^{n}$ be computable such that for each $a \in \mathbf{N}^{n}$ exactly one of $R_{1}(a), \ldots, R_{k}(a)$ holds. Let $P_{1}, \ldots, P_{k} \subseteq \mathbf{N}^{n}$ be computable. Then the relation $P \subseteq \mathbf{N}^{n}$ defined by $$ P(a) \Longleftrightarrow\left\{\begin{array}{cc} P_{1}(a) & \text { if } R_{1}(a) \\ \vdots & \vdots \\ P_{k}(a) & \text { if } R_{k}(a) \end{array}\right. $$ is computable. Proof. Use that $P=\left(P_{1} \wedge R_{1}\right) \vee \cdots \vee\left(P_{k} \wedge R_{k}\right)$. Lemma 5.1.8. Let $R \subseteq \mathbf{N}^{n+1}$ be computable such that for all $a \in \mathbf{N}^{n}$ there exists $x \in \mathbf{N}$ with $(a, x) \in R$. 
Then the function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ given by $$ F(a)=\mu x R(a, x) $$ is computable. Proof. Note that $F(a)=\mu x\left(\chi_{\neg R}(a, x)=0\right)$ and apply (R3). Here is a nice consequence of Lemmas 5.1.5 and 5.1.8. Lemma 5.1.9. Let $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$. Then $F$ is computable if and only if its graph (a subset of $\mathbf{N}^{n+1}$ ) is computable. Proof. Let $R \subseteq \mathbf{N}^{n+1}$ be the graph of $F$. Then for all $a \in \mathbf{N}^{n}$ and $b \in \mathbf{N}$, $$ R(a, b) \Longleftrightarrow F(a)=b, \quad F(a)=\mu x R(a, x), $$ from which the lemma follows immediately. Lemma 5.1.10. If $R \subseteq \mathbf{N}^{n+1}$ is computable, then the function $F_{R}: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ defined by $F_{R}(a, y)=\mu x_{<y} R(a, x)$ is computable. Proof. Use that $F_{R}(a, y)=\mu x(R(a, x)$ or $x=y)$. Some notation: below we use the bold symbol $\exists$ as shorthand for "there exists a natural number"; likewise, we use the symbol $\forall$ to abbreviate "for all natural numbers." These abbreviation symbols should not be confused with the logical symbols $\exists$ and $\forall$. Lemma 5.1.11. Suppose $R \subseteq \mathbf{N}^{n+1}$ is computable. Let $P, Q \subseteq \mathbf{N}^{n+1}$ be the relations defined by $$ \begin{aligned} & P(a, y) \Longleftrightarrow \exists x_{<y} R(a, x) \\ & Q(a, y) \Longleftrightarrow \forall x_{<y} R(a, x), \end{aligned} $$ for $(a, y)=\left(a_{1}, \ldots, a_{n}, y\right) \in \mathbf{N}^{n+1}$. Then $P$ and $Q$ are computable. Proof. Using the notation and results from Lemma 5.1.10 we note that $P(a, y)$ iff $F_{R}(a, y)<y$. Hence $\chi_{P}(a, y)=\chi_{<}\left(F_{R}(a, y), y\right)$. For $Q$, note that $\neg Q(a, y)$ iff $\exists x_{<y} \neg R(a, x)$. The reader should derive from Lemma 5.1.11 a variant that is often used: Corollary 5.1.12. Suppose $R \subseteq \mathbf{N}^{n+2}$ is computable.
Let $P, Q \subseteq \mathbf{N}^{n+1}$ be the relations defined by $$ \begin{aligned} & P(a, y) \Longleftrightarrow \exists x_{<y} R(a, x, y) \\ & Q(a, y) \Longleftrightarrow \forall x_{<y} R(a, x, y), \end{aligned} $$ for $(a, y)=\left(a_{1}, \ldots, a_{n}, y\right) \in \mathbf{N}^{n+1}$. Then $P$ and $Q$ are computable. Lemma 5.1.13. The function $\dot{-}: \mathbf{N}^{2} \rightarrow \mathbf{N}$ defined by $a \dot{-} b= \begin{cases}a-b & \text { if } a \geq b, \\ 0 & \text { if } a<b\end{cases}$ is computable. Proof. Use that $a \dot{-} b=\mu x(b+x=a$ or $a<b)$. The results above imply easily that many familiar functions are computable. But is the exponential function $n \mapsto 2^{n}$ computable? It certainly is in the intuitive sense: we know how to compute (in principle) its value at any given argument. It is not that obvious from what we have proved so far that it is computable in our precise sense. We now develop some coding tricks due to Gödel that enable us to prove routinely that functions like $2^{x}$ are computable according to our definition of "computable function". Definition. Define the function Pair : $\mathbf{N}^{2} \rightarrow \mathbf{N}$ by $$ \operatorname{Pair}(x, y):=\frac{(x+y)(x+y+1)}{2}+x $$ We call Pair the pairing function. Lemma 5.1.14. The function Pair is bijective and computable. Proof. Exercise. Definition. Since Pair is a bijection we can define functions $$ \text { Left, Right }: \mathbf{N} \rightarrow \mathbf{N} $$ by $$ \operatorname{Pair}(x, y)=a \Longleftrightarrow \operatorname{Left}(a)=x \text { and } \operatorname{Right}(a)=y . $$ The reader should check that $\operatorname{Left}(a), \operatorname{Right}(a) \leq a$ for $a \in \mathbf{N}$, and $\operatorname{Left}(a)<a$ if $0<a \in \mathbf{N}$. Lemma 5.1.15. The functions Left and Right are computable. Proof. 
Use 5.1.9 in combination with $$ \begin{aligned} \operatorname{Left}(a) & =\mu x\left(\exists y_{<a+1} \operatorname{Pair}(x, y)=a\right), \\ \operatorname{Right}(a) & =\mu y\left(\exists x_{<a+1} \operatorname{Pair}(x, y)=a\right) . \end{aligned} $$ For $a, b, c \in \mathbf{Z}$ we have (by definition): $a \equiv b \bmod c \Longleftrightarrow a-b \in c \mathbf{Z}$. Lemma 5.1.16. The ternary relation $a \equiv b \bmod c$ on $\mathbf{N}$ is computable. Proof. Use that for $a, b, c \in \mathbf{N}$ we have $a \equiv b \bmod c \Longleftrightarrow\left(\exists x_{<a+1} a=x \cdot c+b\right.$ or $\left.\exists x_{<b+1} b=x \cdot c+a\right)$. We can now introduce Gödel's function $\beta: \mathbf{N}^{2} \rightarrow \mathbf{N}$. Definition. For $a, i \in \mathbf{N}$ we let $\beta(a, i)$ be the remainder of $\operatorname{Left}(a)$ upon division by $1+(i+1) \operatorname{Right}(a)$, that is, $$ \beta(a, i):=\mu x(x \equiv \operatorname{Left}(a) \quad \bmod 1+(i+1) \operatorname{Right}(a)) . $$ Proposition 5.1.17. The function $\beta$ is computable, and $\beta(a, i) \leq a \dot{-} 1$ for all $a, i \in \mathbf{N}$. For any $a_{0}, \ldots, a_{n} \in \mathbf{N}$ there exists $a \in \mathbf{N}$ such that $$ \beta(a, 0)=a_{0}, \ldots, \beta(a, n)=a_{n} . $$ Proof. The computability of $\beta$ is clear from earlier results. We have $$ \beta(a, i) \leq \operatorname{Left}(a) \leq a \dot{-} 1 . $$ Let $a_{0}, \ldots, a_{n} \in \mathbf{N}$. Take $N \in \mathbf{N}$ such that $a_{i} \leq N$ for all $i \leq n$ and $N$ is a multiple of every prime number $\leq n$. We claim that then $$ 1+N, 1+2 N, \ldots, 1+n N, 1+(n+1) N $$ are pairwise relatively prime. To see this, suppose $p$ is a prime number such that $p \mid 1+i N$ and $p \mid 1+j N(1 \leq i<j \leq n+1)$; then $p$ divides their difference $(j-i) N$. Since $p \mid 1+i N$, $p$ does not divide $N$, hence $p \mid j-i$, with $j-i \leq n$. But all prime numbers $\leq n$ divide $N$, and we have a contradiction.
By the Chinese Remainder Theorem there exists an $M \in \mathbf{N}$ such that $$ \begin{array}{rll} M & \equiv a_{0} & \bmod 1+N \\ M & \equiv a_{1} & \bmod 1+2 N \\ & \vdots & \\ M & \equiv a_{n} & \bmod 1+(n+1) N \end{array} $$ Put $a:=\operatorname{Pair}(M, N)$; then $\operatorname{Left}(a)=M$ and $\operatorname{Right}(a)=N$, and thus $\beta(a, i)=a_{i}$ as required. Remark. Proposition 5.1.17 shows that we can use $\beta$ to encode a sequence of numbers $a_{0}, \ldots, a_{n}$ in terms of a single number $a$. We use this as follows to show that the function $n \mapsto 2^{n}$ is computable. If $a_{0}, \ldots, a_{n}$ are natural numbers such that $a_{0}=1$, and $a_{i+1}=2 a_{i}$ for all $i<n$, then necessarily $a_{n}=2^{n}$. Hence by Proposition 5.1.17 we have $\beta(a, n)=2^{n}$ where $$ a:=\mu x\left(\beta(x, 0)=1 \text { and } \forall i_{<n} \beta(x, i+1)=2 \beta(x, i)\right), $$ that is, $$ 2^{n}=\beta(a, n)=\beta\left(\mu x\left(\beta(x, 0)=1 \text { and } \forall i_{<n} \beta(x, i+1)=2 \beta(x, i)\right), n\right) . $$ It follows that $n \mapsto 2^{n}$ is computable. The above suggests a general method, which we develop next. To each sequence $\left(a_{1}, \ldots, a_{n}\right)$ of natural numbers we assign a sequence number, denoted $\left\langle a_{1}, \ldots, a_{n}\right\rangle$, and defined to be the least natural number $a$ such that $\beta(a, 0)=n$ (the length of the sequence) and $\beta(a, i)=a_{i}$ for $i=1, \ldots, n$. For $n=0$ this gives $\langle\rangle=0$, where $\langle\rangle$ is the sequence number of the empty sequence. We define the length function $\mathrm{lh}: \mathbf{N} \longrightarrow \mathbf{N}$ by $\operatorname{lh}(a)=\beta(a, 0)$, so lh is computable. Observe that $\operatorname{lh}\left(\left\langle a_{1}, \ldots, a_{n}\right\rangle\right)=n$. Put $(a)_{i}:=\beta(a, i+1)$.
The function $(a, i) \mapsto(a)_{i}: \mathbf{N}^{2} \longrightarrow \mathbf{N}$ is computable, and $\left(\left\langle a_{1}, \ldots, a_{n}\right\rangle\right)_{i}=a_{i+1}$ for $i<n$. Finally, let Seq $\subseteq \mathbf{N}$ denote the set of sequence numbers. The set Seq is computable since $$ a \in \text { Seq } \Longleftrightarrow \forall x_{<a}\left(\operatorname{lh}(x) \neq \operatorname{lh}(a) \text { or } \exists i_{<\operatorname{lh}(a)}(x)_{i} \neq(a)_{i}\right) $$ Lemma 5.1.18. For any $n$, the function $\left(a_{1}, \ldots, a_{n}\right) \mapsto\left\langle a_{1}, \ldots, a_{n}\right\rangle: \mathbf{N}^{n} \rightarrow \mathbf{N}$ is computable, and $a_{i}<\left\langle a_{1}, \ldots, a_{n}\right\rangle$ for $\left(a_{1}, \ldots, a_{n}\right) \in \mathbf{N}^{n}$ and $i=1, \ldots, n$. Proof. Use $\left\langle a_{1}, \ldots, a_{n}\right\rangle=\mu a\left(\beta(a, 0)=n, \beta(a, 1)=a_{1}, \ldots, \beta(a, n)=a_{n}\right)$, and apply Lemmas 5.1.8 and 5.1.4 and Proposition 5.1.17. Lemma 5.1.19. We have computable binary operations $\operatorname{In}: \mathbf{N}^{2} \rightarrow \mathbf{N}$ and $*: \mathbf{N}^{2} \rightarrow \mathbf{N}$ such that for all $a_{1}, \ldots, a_{m}, b_{1}, \ldots, b_{n} \in \mathbf{N}$, $$ \begin{aligned} \operatorname{In}\left(\left\langle a_{1}, \ldots, a_{m}\right\rangle, i\right) & =\left\langle a_{1}, \ldots, a_{i}\right\rangle \text { for } i \leq m, \\ \left\langle a_{1}, \ldots, a_{m}\right\rangle *\left\langle b_{1}, \ldots, b_{n}\right\rangle & =\left\langle a_{1}, \ldots, a_{m}, b_{1}, \ldots, b_{n}\right\rangle . \end{aligned} $$ Proof. Such functions are obtained by defining $$ \begin{aligned} \operatorname{In}(a, i) & =\mu x\left(\operatorname{lh}(x)=i \text { and } \forall j_{<i}(x)_{j}=(a)_{j}\right), \\ a * b & =\mu x\left(\operatorname{lh}(x)=\operatorname{lh}(a)+\operatorname{lh}(b) \text { and } \forall i_{<\operatorname{lh}(a)}(x)_{i}=(a)_{i}\right. \\ & \text { and } \left.\forall j_{<\operatorname{lh}(b)}(x)_{\operatorname{lh}(a)+j}=(b)_{j}\right) .
\end{aligned} $$ Definition. For $F: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$, let $\bar{F}: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ be given by $$ \bar{F}(a, b)=\langle F(a, 0), \ldots, F(a, b-1)\rangle \quad\left(a \in \mathbf{N}^{n}, b \in \mathbf{N}\right) . $$ Note that $\bar{F}(a, 0)=\langle\rangle=0$. Lemma 5.1.20. Let $F: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$. Then $F$ is computable if and only if $\bar{F}$ is computable. Proof. Suppose $F$ is computable. Then $\bar{F}$ is computable since $$ \bar{F}(a, b)=\mu x\left(\operatorname{lh}(x)=b \text { and } \forall i_{<b}(x)_{i}=F(a, i)\right) . $$ In the other direction, suppose $\bar{F}$ is computable. Then $F$ is computable since $F(a, b)=(\bar{F}(a, b+1))_{b}$. Given $G: \mathbf{N}^{n+2} \rightarrow \mathbf{N}$ there is a unique function $F: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ such that $$ F(a, b)=G(a, b, \bar{F}(a, b)) \quad\left(a \in \mathbf{N}^{n}, b \in \mathbf{N}\right) . $$ This will be clear if we express the requirement on $F$ as follows: $$ F(a, 0)=G(a, 0,0), \quad F(a, b+1)=G(a, b+1,\langle F(a, 0), \ldots, F(a, b)\rangle) . $$ The next result is important because it allows us to introduce computable functions by recursion on their values at smaller arguments. Proposition 5.1.21. Let $G$ and $F$ be as above and suppose $G$ is computable. Then $F$ is computable. Proof. Note that $$ \bar{F}(a, b)=\mu x\left(\operatorname{Seq}(x) \text { and } \operatorname{lh}(x)=b \text { and } \forall i_{<b}(x)_{i}=G(a, i, \operatorname{In}(x, i))\right) $$ for all $a \in \mathbf{N}^{n}$ and $b \in \mathbf{N}$. It follows that $\bar{F}$ is computable, and thus by the previous lemma $F$ is computable. Definition. Let $A: \mathbf{N}^{n} \rightarrow \mathbf{N}$ and $B: \mathbf{N}^{n+2} \rightarrow \mathbf{N}$ be given. Let $a$ range over $\mathbf{N}^{n}$, and define the function $F: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ by $$ \begin{aligned} F(a, 0) & =A(a), \\ F(a, b+1) & =B(a, b, F(a, b)) .
\end{aligned} $$ We say that $F$ is obtained from $A$ and $B$ by primitive recursion. Proposition 5.1.22. Suppose $A, B$, and $F$ are as above, and $A$ and $B$ are computable. Then $F$ is computable. Proof. Define $G: \mathbf{N}^{n+2} \rightarrow \mathbf{N}$ by $$ G(a, b, c)= \begin{cases}A(a) & \text { if } b=0 \\ B\left(a, b-1,(c)_{b-1}\right) & \text { if } b>0\end{cases} $$ Clearly, $G$ is computable. We claim that $$ F(a, b)=G(a, b, \bar{F}(a, b)) . $$ This claim yields the computability of $F$, by Proposition 5.1.21. We have $$ \begin{aligned} F(a, 0) & =A(a)=G(a, 0, \bar{F}(a, 0)), \text { and } \\ F(a, b+1) & =B(a, b, F(a, b))=B\left(a, b,(\bar{F}(a, b+1))_{b}\right) \\ & =G(a, b+1, \bar{F}(a, b+1)) . \end{aligned} $$ The claim follows. Proposition 5.1.21 will be applied over and over again in the later section on Gödel numbering, but in combination with definitions by cases. As a simple example of such an application, let $G: \mathbf{N} \rightarrow \mathbf{N}$ and $H: \mathbf{N}^{2} \rightarrow \mathbf{N}$ be computable. There is clearly a unique function $F: \mathbf{N}^{2} \rightarrow \mathbf{N}$ such that for all $a, b \in \mathbf{N}$ $$ F(a, b)= \begin{cases}F(a, G(b)) & \text { if } G(b)<b, \\ H(a, b) & \text { otherwise. }\end{cases} $$ In particular $F(a, 0)=H(a, 0)$. We claim that $F$ is computable. According to Proposition 5.1.21 this claim will follow if we can specify a computable function $K: \mathbf{N}^{3} \rightarrow \mathbf{N}$ such that $F(a, b)=K(a, b, \bar{F}(a, b))$ for all $a, b \in \mathbf{N}$. Such a function $K$ is given by $$ K(a, b, c)= \begin{cases}(c)_{G(b)} & \text { if } G(b)<b \\ H(a, b) & \text { otherwise }\end{cases} $$ ## Exercises. (1) The set of prime numbers is computable. (2) The Fibonacci numbers are the natural numbers $F_{n}$ defined recursively by $F_{0}=0$, $F_{1}=1$, and $F_{n+2}=F_{n+1}+F_{n}$. The function $n \mapsto F_{n}: \mathbf{N} \rightarrow \mathbf{N}$ is computable. 
(3) If $f_{1}, \ldots, f_{n}: \mathbf{N}^{m} \rightarrow \mathbf{N}$ are computable and $X \subseteq \mathbf{N}^{n}$ is computable, then $f^{-1}(X) \subseteq \mathbf{N}^{m}$ is computable, where $f:=\left(f_{1}, \ldots, f_{n}\right): \mathbf{N}^{m} \rightarrow \mathbf{N}^{n}$. (4) If $f: \mathbf{N} \rightarrow \mathbf{N}$ is computable and surjective, then there is a computable function $g: \mathbf{N} \rightarrow \mathbf{N}$ such that $f \circ g=\operatorname{id}_{\mathbf{N}}$. (5) If $f: \mathbf{N} \rightarrow \mathbf{N}$ is computable and strictly increasing, then $f(\mathbf{N}) \subseteq \mathbf{N}$ is computable. (6) All computable functions and relations are definable in $\mathfrak{N}$. (7) Let $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$, and define $$ \langle F\rangle: \mathbf{N} \rightarrow \mathbf{N}, \quad\langle F\rangle(a):=F\left((a)_{0}, \ldots,(a)_{n-1}\right), $$ so $F\left(a_{1}, \ldots, a_{n}\right)=\langle F\rangle\left(\left\langle a_{1}, \ldots, a_{n}\right\rangle\right)$ for all $a_{1}, \ldots, a_{n} \in \mathbf{N}$. Then $F$ is computable iff $\langle F\rangle$ is computable. (Hence $n$-variable computability reduces to 1 variable computability.) Let $\mathcal{F}$ be a collection of functions $F: \mathbf{N}^{m} \rightarrow \mathbf{N}$ for various $m$. We say that $\mathcal{F}$ is closed under composition if for all $G: \mathbf{N}^{m} \rightarrow \mathbf{N}$ in $\mathcal{F}$ and all $H_{1}, \ldots, H_{m}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ in $\mathcal{F}$, the function $F=G\left(H_{1}, \ldots, H_{m}\right): \mathbf{N}^{n} \rightarrow \mathbf{N}$ is in $\mathcal{F}$. We say that $\mathcal{F}$ is closed under minimalization if for every $G: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ in $\mathcal{F}$ such that for all $a \in \mathbf{N}^{n}$ there exists $x \in \mathbf{N}$ with $G(a, x)=0$, the function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ given by $F(a)=\mu x(G(a, x)=0)$ is in $\mathcal{F}$. 
We say that a relation $R \subseteq \mathbf{N}^{n}$ is in $\mathcal{F}$ if its characteristic function $\chi_{R}$ is in $\mathcal{F}$. (8) Suppose $\mathcal{F}$ contains the functions mentioned in (R1), and is closed under composition and minimalization. All lemmas and propositions of this Section go through with computable replaced by in $\mathcal{F}$. ### The Church-Turing Thesis The computable functions as defined in the last section are also computable in the informal sense that for each such function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ there is an algorithm that on any input $a \in \mathbf{N}^{n}$ stops after a finite number of steps and produces the output $F(a)$. An algorithm is given by a finite list of instructions, a computer program, say. These instructions should be deterministic (leave nothing to chance or choice). We deliberately neglect physical constraints of space and time: imagine that the program that implements the algorithm has unlimited access to time and space to do its work on any given input. Let us write "calculable" for this intuitive, informal, idealized notion of computable. The Church-Turing Thesis asserts that each calculable function $F: \mathbf{N} \rightarrow \mathbf{N}$ is computable. The corresponding assertion for functions $\mathbf{N}^{n} \rightarrow \mathbf{N}$ follows, because the result of Exercise 7 in Section 5.1 is clearly also valid for "calculable" instead of "computable." Call a set $P \subseteq \mathbf{N}$ calculable if its characteristic function is calculable. While the Church-Turing Thesis is not a precise mathematical statement, it is an important guiding principle, and has never failed in practice: any function that any competent person has ever recognized as being calculable has turned out to be computable, and the informal grounds for calculability have always translated routinely into an actual proof of computability. Here is a heuristic (informal) argument that might make the Thesis plausible.
Let an algorithm be given for computing $F: \mathbf{N} \rightarrow \mathbf{N}$. We can assume that on any input $a \in \mathbf{N}$ this algorithm consists of a finite sequence of steps, numbered from 0 to $n$, say, where at each step $i$ it produces a natural number $a_{i}$, with $a_{0}=a$ as starting number. It stops after step $n$ with $a_{n}=F(a)$. We assume that for each $i<n$ the number $a_{i+1}$ is calculated by some fixed procedure from the earlier numbers $a_{0}, \ldots, a_{i}$, that is, we have a calculable function $G: \mathbf{N} \rightarrow \mathbf{N}$ such that $a_{i+1}=G\left(\left\langle a_{0}, \ldots, a_{i}\right\rangle\right)$ for all $i<n$. The algorithm should also tell us when to stop, that is, we should have a calculable $P \subseteq \mathbf{N}$ such that $\neg P\left(\left\langle a_{0}, \ldots, a_{i}\right\rangle\right)$ for $i<n$ and $P\left(\left\langle a_{0}, \ldots, a_{n}\right\rangle\right)$. Since $G$ and $P$ describe only single steps in the algorithm for $F$ it is reasonable to assume that they at least are computable. Once this is agreed to, one can show easily that $F$ is computable as well, see the exercise below. A skeptical reader may find this argument dubious, but Turing gave in 1936 a compelling informal analysis of what functions $F: \mathbf{N} \rightarrow \mathbf{N}$ are calculable in principle, and this has led to general acceptance of the Thesis. In addition, various alternative formalizations of the informal notion of calculable function have been proposed, using various kinds of machines, formal systems, and so on. They all have turned out to be equivalent in the sense of defining the same class of functions on $\mathbf{N}$, namely the computable functions. The above is only a rather narrow version of the Church-Turing Thesis, but it suffices for our purpose. There are various refinements and more ambitious versions. 
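The stepwise picture just described is easy to make concrete. Below is a minimal Python sketch; the names `G`, `P`, `F` and the choice of a Fibonacci-computing algorithm are illustrative, not from the text. `G` calculates the next number from the history of earlier ones, `P` decides when to stop, and a driver loop recovers `F`:

```python
# A toy instance of the step/stop model from the heuristic argument:
# on input a, set a_0 = a and a_{i+1} = G(<a_0, ..., a_i>),
# stopping once P(<a_0, ..., a_n>) holds; then F(a) = a_n.
# Here G and P are chosen so that F(a) is the a-th Fibonacci number.

def G(history):
    # the next number is calculated from the earlier ones only
    if len(history) == 1:
        return 0                          # a_1 = Fib(0)
    if len(history) == 2:
        return 1                          # a_2 = Fib(1)
    return history[-1] + history[-2]      # a_{i+1} = Fib(i) for i >= 2

def P(history):
    # stop once a_1, ..., a_{a_0 + 1} = Fib(0), ..., Fib(a_0) are produced
    return len(history) == history[0] + 2

def F(a):
    history = [a]                         # a_0 = a
    while not P(history):
        history.append(G(history))
    return history[-1]                    # F(a) = a_n

print([F(a) for a in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Note that the driver is the same for every choice of `G` and `P`; only the single-step function and the stopping predicate encode the particular algorithm, which is exactly the situation considered in the exercise below.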
Also, our Church-Turing Thesis does not characterize mathematically the intuitive notion of algorithm, only the intuitive notion of function computable by an algorithm that produces for each input from $\mathbf{N}$ an output in $\mathbf{N}$. ## Exercises. (1) Let $G: \mathbf{N} \rightarrow \mathbf{N}$ and $P \subseteq \mathbf{N}$ be given. Then there is for each $a \in \mathbf{N}$ at most one finite sequence $a_{0}, \ldots, a_{n}$ of natural numbers such that $a_{0}=a$, for all $i<n$ we have $a_{i+1}=G\left(\left\langle a_{0}, \ldots, a_{i}\right\rangle\right)$ and $\neg P\left(\left\langle a_{0}, \ldots, a_{i}\right\rangle\right)$, and $P\left(\left\langle a_{0}, \ldots, a_{n}\right\rangle\right)$. Suppose that for each $a \in \mathbf{N}$ there is such a finite sequence $a_{0}, \ldots, a_{n}$, and put $F(a):=a_{n}$, thus defining a function $F: \mathbf{N} \rightarrow \mathbf{N}$. If $G$ and $P$ are computable, so is $F$. ### Primitive Recursive Functions This section is not really needed in the rest of this chapter, but it may throw light on some issues relating to computability. One such issue is the condition in Rule (R3) for generating computable functions that for all $a \in \mathbf{N}^{n}$ there exists $y \in \mathbf{N}$ such that $G(a, y)=0$. This condition is not constructive: it could be satisfied for a certain $G$ without us ever knowing it. We shall now argue informally that it is impossible to generate in a fully constructive way exactly the computable functions. Such a constructive generation process would presumably enable us to enumerate effectively a sequence of algorithms $\alpha_{0}, \alpha_{1}, \alpha_{2}, \ldots$ such that each $\alpha_{n}$ computes a (computable) function $f_{n}: \mathbf{N} \rightarrow \mathbf{N}$, and such that every computable function $f: \mathbf{N} \rightarrow \mathbf{N}$ occurs in the sequence $f_{0}, f_{1}, f_{2}, \ldots$, possibly more than once. 
Now consider the function $f_{\text {diag }}: \mathbf{N} \rightarrow \mathbf{N}$ defined by $$ f_{\text {diag }}(n)=f_{n}(n)+1 . $$ Then $f_{\text {diag }}$ is clearly computable in the intuitive sense, but $f_{\text {diag }} \neq f_{n}$ for all $n$, in violation of the Church-Turing Thesis. This way of producing a new function $f_{\text {diag }}$ from a sequence $\left(f_{n}\right)$ is called diagonalization. ${ }^{2}$ The same basic idea applies in other cases, and is used in a more sophisticated form in the proof of Gödel's incompleteness theorem. Here is a class of computable functions that can be generated constructively: The primitive recursive functions are the functions $f: \mathbf{N}^{n} \rightarrow \mathbf{N}$ obtained inductively as follows: (PR1) The nullary function $\mathbf{N}^{0} \rightarrow \mathbf{N}$ with value 0 , the unary successor function $S$, and all coordinate functions $I_{i}^{n}$ are primitive recursive. (PR2) If $G: \mathbf{N}^{m} \rightarrow \mathbf{N}$ is primitive recursive and $H_{1}, \ldots, H_{m}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ are primitive recursive, then $G\left(H_{1}, \ldots, H_{m}\right)$ is primitive recursive. (PR3) If $F: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ is obtained by primitive recursion from primitive recursive functions $G: \mathbf{N}^{n} \rightarrow \mathbf{N}$ and $H: \mathbf{N}^{n+2} \rightarrow \mathbf{N}$, then $F$ is primitive recursive. A relation $R \subseteq \mathbf{N}^{n}$ is said to be primitive recursive if its characteristic function $\chi_{R}$ is primitive recursive. As the next two lemmas show, the computable functions that one ordinarily meets with are primitive recursive. In the rest of this section $x$ ranges over $\mathbf{N}^{m}$ with $m$ depending on the context, and $y$ over $\mathbf{N}$. Lemma 5.3.1. 
The following functions and relations are primitive recursive: (i) each constant function $c_{m}^{n}$; (ii) the binary operations $+$, $\cdot$, and $(x, y) \mapsto x^{y}$ on $\mathbf{N}$; (iii) the predecessor function $\operatorname{Pd}: \mathbf{N} \rightarrow \mathbf{N}$ given by $\operatorname{Pd}(x)=x \dot{-} 1$, the unary relation $\{x \in \mathbf{N}: x>0\}$, the function $\dot{-}: \mathbf{N}^{2} \rightarrow \mathbf{N}$; (iv) the binary relations $\geq$, $\leq$ and $=$ on $\mathbf{N}$. Proof. The function $c_{m}^{0}$ is obtained from $c_{0}^{0}$ by applying (PR2) $m$ times with $G=S$. Next, $c_{m}^{n}$ is obtained by applying (PR2) with $G=c_{m}^{0}$ and an empty list of functions $H_{i}$. The functions in (ii) are obtained by the usual primitive recursions. It is also easy to write down primitive recursions for the functions in (iii), in the order they are listed. For (iv), note that $\chi_{\geq}(x, y+1)=\chi_{>0}(x) \cdot \chi_{\geq}(\operatorname{Pd}(x), y)$. Lemma 5.3.2. With the possible exceptions of Lemmas 5.1.8 and 5.1.9, all Lemmas and Propositions in Section 5.1 go through with computable replaced by primitive recursive. Proof. To obtain the primitive recursive version of Lemma 5.1.10, note that $F_{R}(a, 0)=0, \quad F_{R}(a, y+1)=F_{R}(a, y) \cdot \chi_{R}\left(a, F_{R}(a, y)\right)+(y+1) \cdot \chi_{\neg R}\left(a, F_{R}(a, y)\right)$. A consequence of the primitive recursive version of Lemma 5.1.10 is the following restricted minimalization scheme for primitive recursive functions: if $R \subseteq \mathbf{N}^{n+1}$ and $H: \mathbf{N}^{n} \rightarrow \mathbf{N}$ are primitive recursive, and for all $a \in \mathbf{N}^{n}$ there exists $x<H(a)$ such that $R(a, x)$, then the function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ given by $F(a)=\mu x R(a, x)$ is primitive recursive. ${ }^{2}$ Perhaps antidiagonalization would be a better name. The primitive recursive versions of Lemmas 5.1.11-5.1.16 and Proposition 5.1.17 now follow easily.
In particular, the function $\beta$ is primitive recursive. Also, the proof of Proposition 5.1.17 yields: There is a primitive recursive function $B: \mathbf{N} \rightarrow \mathbf{N}$ such that, whenever $$ n<N, a_{0}<N, \ldots, a_{n}<N, \quad\left(n, a_{0}, \ldots, a_{n}, N \in \mathbf{N}\right) $$ then for some $a<B(N)$ we have $\beta(a, i)=a_{i}$ for $i=0, \ldots, n$. Using this fact and restricted minimalization, it follows that the unary relation Seq, the unary function lh, and the binary functions $(a, i) \mapsto(a)_{i}$, In and $*$ are primitive recursive. Let a function $F: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ be given. Then $\bar{F}: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ satisfies the primitive recursion $\bar{F}(a, 0)=0$ and $\bar{F}(a, b+1)=\bar{F}(a, b) *\langle F(a, b)\rangle$. It follows that if $F$ is primitive recursive, so is $\bar{F}$. The converse is obvious. Suppose also that $G: \mathbf{N}^{n+2} \rightarrow \mathbf{N}$ is primitive recursive, and $F(a, b)=G(a, b, \bar{F}(a, b))$ for all $(a, b) \in \mathbf{N}^{n+1}$; then $\bar{F}$ satisfies the primitive recursion $$ \bar{F}(a, 0)=0, \quad \bar{F}(a, b+1)=\bar{F}(a, b) *\langle G(a, b, \bar{F}(a, b))\rangle, $$ so $\bar{F}$ (and hence $F$ ) is primitive recursive. The Ackermann Function. By diagonalization we can produce a computable function that is not primitive recursive, but the so-called Ackermann function does more, and plays a role in several contexts. First we define inductively a sequence $A_{0}, A_{1}, A_{2}, \ldots$ of primitive recursive functions $A_{n}: \mathbf{N} \rightarrow \mathbf{N}$ : $$ \begin{aligned} A_{0}(y)=y+1, \quad A_{n+1}(0) & =A_{n}(1), \\ A_{n+1}(y+1) & =A_{n}\left(A_{n+1}(y)\right) . \end{aligned} $$ Thus $A_{0}=S$ and $A_{n+1} \circ A_{0}=A_{n} \circ A_{n+1}$. One verifies easily that $A_{1}(y)=y+2$ and $A_{2}(y)=2 y+3$ for all $y$. We define the Ackermann function $A: \mathbf{N}^{2} \rightarrow \mathbf{N}$ by $A(n, y):=A_{n}(y)$. Lemma 5.3.3.
The function $A$ is computable, and strictly increasing in each variable. Also, for all $n$ and $x, y$: (i) $A_{n}(x+y) \geq A_{n}(x)+y$; (ii) $n \geq 1 \Longrightarrow A_{n+1}(y)>A_{n}(y)+y$; (iii) $A_{n+1}(y) \geq A_{n}(y+1)$; (iv) $2 A_{n}(y)<A_{n+2}(y)$; (v) $x<y \Longrightarrow A_{n}(x+y) \leq A_{n+2}(y)$. Proof. We leave it as an exercise at the end of this section to show that $A$ is computable. Assume inductively that $A_{0}, \ldots, A_{n}$ are strictly increasing and $A_{0}(y)<A_{1}(y)<\cdots<A_{n}(y)$ for all $y$. Then
$$
A_{n+1}(y+1)=A_{n}\left(A_{n+1}(y)\right) \geq A_{0}\left(A_{n+1}(y)\right)>A_{n+1}(y),
$$
so $A_{n+1}$ is strictly increasing. Next we show that $A_{n+1}(y)>A_{n}(y)$ for all $y$: we have $A_{n+1}(0)=A_{n}(1)>A_{n}(0)$, and also $A_{n+1}(0)=A_{n}(1) \geq A_{0}(1)>1$; since $A_{n+1}$ is strictly increasing, it follows that $A_{n+1}(y)>y+1$ for all $y$. Hence $A_{n+1}(y+1)=A_{n}\left(A_{n+1}(y)\right)>A_{n}(y+1)$. Inequality (i) follows easily by induction on $n$, and a second induction on $y$. For inequality (ii), we proceed again by induction on $(n, y)$: Using $A_{1}(y)=y+2$ and $A_{2}(y)=2 y+3$, we obtain $A_{2}(y)>A_{1}(y)+y$. Let $n>1$, and assume inductively that $A_{n}(y)>A_{n-1}(y)+y$. Then $A_{n+1}(0)=A_{n}(1)>A_{n}(0)+0$, and
$$
\begin{aligned} A_{n+1}(y+1) & =A_{n}\left(A_{n+1}(y)\right) \geq A_{n}\left(y+1+A_{n}(y)\right) \\ & \geq A_{n}(y+1)+A_{n}(y)>A_{n}(y+1)+y+1 . \end{aligned}
$$
In (iii) we proceed by induction on $y$. We have equality for $y=0$. Assuming inductively that (iii) holds for a certain $y$ we obtain
$$
A_{n+1}(y+1)=A_{n}\left(A_{n+1}(y)\right) \geq A_{n}\left(A_{n}(y+1)\right) \geq A_{n}(y+2) .
$$
Note that (iv) holds for $n=0$. For $n>0$ we have by (i), (ii) and (iii):
$$
A_{n}(y)+A_{n}(y) \leq A_{n}\left(y+A_{n}(y)\right)<A_{n}\left(A_{n+1}(y)\right)=A_{n+1}(y+1) \leq A_{n+2}(y) .
$$
Note that (v) holds for $n=0$. Assume (v) holds for a certain $n$. Let $x<y+1$.
We can assume inductively that if $x<y$, then $A_{n+1}(x+y) \leq A_{n+3}(y)$, and we want to show that
$$
A_{n+1}(x+y+1) \leq A_{n+3}(y+1) .
$$
Case 1. $x=y$. Then
$$
\begin{aligned} A_{n+1}(x+y+1) & =A_{n+1}(2 x+1)=A_{n}\left(A_{n+1}(2 x)\right) \\ & \leq A_{n+2}(2 x)<A_{n+2}\left(A_{n+3}(x)\right)=A_{n+3}(y+1) . \end{aligned}
$$
Case 2. $x<y$. Then
$$
A_{n+1}(x+y+1)=A_{n}\left(A_{n+1}(x+y)\right) \leq A_{n+2}\left(A_{n+3}(y)\right)=A_{n+3}(y+1) .
$$
Below we put $|x|:=x_{1}+\cdots+x_{m}$ for $x=\left(x_{1}, \ldots, x_{m}\right) \in \mathbf{N}^{m}$. Proposition 5.3.4. Given any primitive recursive function $F: \mathbf{N}^{m} \rightarrow \mathbf{N}$ there is an $n=n(F)$ such that $F(x) \leq A_{n}(|x|)$ for all $x \in \mathbf{N}^{m}$. Proof. Call an $n=n(F)$ with the property above a bound for $F$. The nullary constant function with value 0, the successor function $S$, and each coordinate function $I_{i}^{m}$ $(1 \leq i \leq m)$ have bound 0. Next, assume $F=G\left(H_{1}, \ldots, H_{k}\right)$ where $G: \mathbf{N}^{k} \rightarrow \mathbf{N}$ and $H_{1}, \ldots, H_{k}: \mathbf{N}^{m} \rightarrow \mathbf{N}$ are primitive recursive, and assume inductively that $n(G)$ and $n\left(H_{1}\right), \ldots, n\left(H_{k}\right)$ are bounds for $G$ and $H_{1}, \ldots, H_{k}$. By part (iv) of the previous lemma we can take $N \in \mathbf{N}$ such that $n(G) \leq N$, and $\sum_{i} H_{i}(x) \leq A_{N+1}(|x|)$ for all $x$. Then
$$
F(x)=G\left(H_{1}(x), \ldots, H_{k}(x)\right) \leq A_{N}\left(\sum_{i} H_{i}(x)\right) \leq A_{N}\left(A_{N+1}(|x|)\right) \leq A_{N+2}(|x|) .
$$
Finally, assume that $F: \mathbf{N}^{m+1} \rightarrow \mathbf{N}$ is obtained by primitive recursion from the primitive recursive functions $G: \mathbf{N}^{m} \rightarrow \mathbf{N}$ and $H: \mathbf{N}^{m+2} \rightarrow \mathbf{N}$, and assume inductively that $n(G)$ and $n(H)$ are bounds for $G$ and $H$. Take $N \in \mathbf{N}$ such that $n(G) \leq N+3$ and $n(H) \leq N$.
We claim that $N+3$ is a bound for $F$: $F(x, 0)=G(x) \leq A_{N+3}(|x|)$, and by part (v) of the lemma above,
$$
\begin{aligned} F(x, y+1)=H(x, y, F(x, y)) & \leq A_{N}\left(|x|+y+A_{N+3}(|x|+y)\right) \\ & \leq A_{N+2}\left(A_{N+3}(|x|+y)\right)=A_{N+3}(|x|+y+1) . \end{aligned}
$$
Consider the function $A^{*}: \mathbf{N} \rightarrow \mathbf{N}$ defined by $A^{*}(n)=A(n, n)$. Then $A^{*}$ is computable, and for any primitive recursive function $F: \mathbf{N} \rightarrow \mathbf{N}$ we have $F(y)<A^{*}(y)$ for all $y>n(F)$, where $n(F)$ is a bound for $F$. In particular, $A^{*}$ is not primitive recursive. Hence $A$ is computable but not primitive recursive. The recursion in "primitive recursion" involves only one variable; the other variables just act as parameters. The Ackermann function is defined by a recursion involving both variables:
$$
A(0, y)=y+1, \quad A(x+1,0)=A(x, 1), \quad A(x+1, y+1)=A(x, A(x+1, y)) .
$$
This kind of double recursion is therefore in some ways more powerful than primitive recursion and composition.

## Exercises.

(1) The graph of the Ackermann function is primitive recursive. (It follows that the Ackermann function is recursive, and it gives an example showing that Lemma 5.1.9 fails with "primitive recursive" in place of "computable".)

### Representability

Let $L$ be a numerical language, that is, $L$ contains the constant symbol 0 and the unary function symbol $S$. We let $S^{n} 0$ denote the term $S \ldots S 0$ in which $S$ appears exactly $n$ times. So $S^{0} 0$ is the term $0$, $S^{1} 0$ is the term $S 0$, and so on. Our key example of a numerical language is
$$
L(\underline{\mathrm{N}}):=\{0, S,+, \cdot,<\} \quad(\text { the language of } \mathfrak{N}) .
$$
Here $\underline{\mathrm{N}}$ is the following set of nine axioms, where we fix two distinct variables $x$ and $y$ for the sake of definiteness:

$\underline{\mathrm{N}} 1 . \quad \forall x\, S x \neq 0$
$\underline{\mathrm{N}} 2 . \quad \forall x \forall y(S x=S y \rightarrow x=y)$
$\underline{\mathrm{N}} 3 . \quad \forall x\, x+0=x$
$\underline{\mathrm{N}} 4 . \quad \forall x \forall y\, x+S y=S(x+y)$
$\underline{\mathrm{N}} 5 . \quad \forall x\, x \cdot 0=0$
$\underline{\mathrm{N}} 6 . \quad \forall x \forall y\, x \cdot S y=(x \cdot y)+x$
$\underline{\mathrm{N}} 7 . \quad \forall x\, \neg(x<0)$
$\underline{\mathrm{N}} 8 . \quad \forall x \forall y(x<S y \leftrightarrow(x<y \vee x=y))$
$\underline{\mathrm{N}} 9 . \quad \forall x \forall y(x<y \vee x=y \vee y<x)$

These axioms are clearly true in $\mathfrak{N}$.
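Their truth in the standard model can also be spot-checked mechanically. The sketch below assumes the usual nine-axiom formulation (non-surjectivity and injectivity of $S$, the defining recursions for $+$ and $\cdot$, and the clauses for $<$) and tests each axiom in $\mathfrak{N}$ with the quantifiers restricted to a finite range of witnesses:

```python
# Spot check of the nine axioms of N (in their usual formulation, assumed
# here) in the standard model, with quantifiers bounded by a finite range.
R = range(30)

def S(x):
    return x + 1

axioms = [
    all(S(x) != 0 for x in R),                                   # N1
    all(x == y for x in R for y in R if S(x) == S(y)),           # N2
    all(x + 0 == x for x in R),                                  # N3
    all(x + S(y) == S(x + y) for x in R for y in R),             # N4
    all(x * 0 == 0 for x in R),                                  # N5
    all(x * S(y) == x * y + x for x in R for y in R),            # N6
    all(not (x < 0) for x in R),                                 # N7
    all((x < S(y)) == (x < y or x == y) for x in R for y in R),  # N8
    all(x < y or x == y or y < x for x in R for y in R),         # N9
]
assert all(axioms)
```

Of course such finite tests only illustrate the axioms; they are no substitute for the (immediate) proof that each axiom holds in $\mathfrak{N}$.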
The fact that $\underline{\mathrm{N}}$ is finite will play a role later. It is a very weak set of axioms, but strong enough to prove numerical facts like
$$
S S 0+S S S 0=S S S S S 0, \quad \forall x(x<S S 0 \rightarrow(x=0 \vee x=S 0)) .
$$
Lemma 5.4.1. For each $n$,
$$
\underline{\mathrm{N}} \vdash x<S^{n+1} 0 \leftrightarrow\left(x=0 \vee \cdots \vee x=S^{n} 0\right) .
$$
Proof. By induction on $n$. For $n=0$, $\underline{\mathrm{N}} \vdash x<S 0 \leftrightarrow x=0$ by axioms $\underline{\mathrm{N}} 8$ and $\underline{\mathrm{N}} 7$. Assume $n>0$ and $\underline{\mathrm{N}} \vdash x<S^{n} 0 \leftrightarrow\left(x=0 \vee \cdots \vee x=S^{n-1} 0\right)$. Use axiom $\underline{\mathrm{N}} 8$ to conclude that $\underline{\mathrm{N}} \vdash x<S^{n+1} 0 \leftrightarrow\left(x=0 \vee \cdots \vee x=S^{n} 0\right)$. To give an impression of how weak $\underline{\mathrm{N}}$ is, we consider some of its models:

## Some models of $\underline{\mathrm{N}}$.

(1) We usually refer to $\mathfrak{N}$ as the standard model of $\underline{\mathrm{N}}$.

(2) Another model of $\underline{\mathrm{N}}$ is $\mathfrak{N}[x]:=(\mathbf{N}[x] ; \ldots)$, where $0$, $S$, $+$, $\cdot$ are interpreted as the zero polynomial, as the unary operation of adding 1 to a polynomial, and as addition and multiplication of polynomials in $\mathbf{N}[x]$, and where $<$ is interpreted as follows: $f(x)<g(x)$ iff $f(n)<g(n)$ for all large enough $n$.

(3) A more bizarre model of $\underline{\mathrm{N}}$: $\left(\mathbf{R}^{\geq 0} ; \ldots\right)$ with the usual interpretations of $0, S,+, \cdot$, in particular $S(r):=r+1$, and with $<$ interpreted as the binary relation $<_{\mathbf{N}}$ on $\mathbf{R}^{\geq 0}$: $r<_{\mathbf{N}} s \Leftrightarrow(r, s \in \mathbf{N}$ and $r<s)$ or $s \notin \mathbf{N}$.
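The order in model (3) can be probed concretely. Below is a small Python sketch (using `fractions.Fraction` to stand in for elements of $\mathbf{R}^{\geq 0}$; the helper names are ours):

```python
from fractions import Fraction

def is_standard(r):
    # r plays the role of an element of R^{>=0}; the "standard" elements
    # are exactly the natural numbers (denominator 1)
    return r.denominator == 1

def lt_N(r, s):
    # the interpretation <_N described in model (3):
    # r <_N s  iff  (r, s in N and r < s)  or  s not in N
    return (is_standard(r) and is_standard(s) and r < s) or not is_standard(s)

half = Fraction(1, 2)
assert lt_N(half, half)            # 1/2 <_N 1/2: irreflexivity fails
assert lt_N(Fraction(10), half)    # every natural number lies <_N-below 1/2
assert not lt_N(half, Fraction(10))
```

The choice of rationals is only for convenience; any representation of nonnegative reals with a decidable "is a natural number" test would serve the same illustrative purpose.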
Example (2) shows that $\underline{\mathrm{N}} \nvdash \forall x \exists y(x=2 y \vee x=2 y+S 0)$, since in $\mathfrak{N}[x]$ the element $x$ is not in $2 \mathbf{N}[x] \cup(2 \mathbf{N}[x]+1)$; in other words, $\underline{\mathrm{N}}$ cannot prove "every element is even or odd." In example (3) we have $1 / 2<_{\mathbf{N}} 1 / 2$, so the binary relation $<_{\mathbf{N}}$ on $\mathbf{R}^{\geq 0}$ is not even a total order. One useful fact about models of $\underline{\mathrm{N}}$ is that they all contain the so-called standard model $\mathfrak{N}$ in a unique way: Lemma 5.4.2. Suppose $\mathcal{A} \models \underline{\mathrm{N}}$. Then there is a unique homomorphism
$$
\iota: \mathfrak{N} \rightarrow \mathcal{A} .
$$
This homomorphism $\iota$ is an embedding, and for all $a \in A$ and all $n$, (i) if $a<^{\mathcal{A}} \iota(n)$, then $a=\iota(m)$ for some $m<n$; (ii) if $a \notin \iota(\mathbf{N})$, then $\iota(n)<^{\mathcal{A}} a$. As to the proof, note that for any homomorphism $\iota: \mathfrak{N} \rightarrow \mathcal{A}$ and all $n$ we must have $\iota(n)=\left(S^{n} 0\right)^{\mathcal{A}}$. Hence there is at most one such homomorphism. It remains to show that the map $n \mapsto\left(S^{n} 0\right)^{\mathcal{A}}: \mathbf{N} \rightarrow A$ is an embedding $\iota: \mathfrak{N} \rightarrow \mathcal{A}$ with properties (i) and (ii). We leave this as an exercise to the reader. Definition. Let $L$ be a numerical language, and $\Sigma$ a set of $L$-sentences.
A relation $R \subseteq \mathbf{N}^{m}$ is said to be $\Sigma$-representable, if there is an $L$-formula $\varphi\left(x_{1}, \ldots, x_{m}\right)$ such that for all $\left(a_{1}, \ldots, a_{m}\right) \in \mathbf{N}^{m}$ we have (i) $\quad R\left(a_{1}, \ldots, a_{m}\right) \Longrightarrow \Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right)$ (ii) $\neg R\left(a_{1}, \ldots, a_{m}\right) \Longrightarrow \Sigma \vdash \neg \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right)$ Such a $\varphi\left(x_{1}, \ldots, x_{m}\right)$ is said to represent $R$ in $\Sigma$ or to $\Sigma$-represent $R$. Note that if $\varphi\left(x_{1}, \ldots, x_{m}\right) \Sigma$-represents $R$ and $\Sigma$ is consistent, then for all $\left(a_{1}, \ldots, a_{m}\right) \in \mathbf{N}^{m}$ $$ \begin{aligned} R\left(a_{1}, \ldots, a_{m}\right) & \Longleftrightarrow \Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right), \\ \neg R\left(a_{1}, \ldots, a_{m}\right) & \Longleftrightarrow \Sigma \vdash \neg \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right) . \end{aligned} $$ A function $F: \mathbf{N}^{m} \rightarrow \mathbf{N}$ is $\Sigma$-representable if there is a formula $\varphi\left(x_{1}, \ldots, x_{m}, y\right)$ of $L$ such that for all $\left(a_{1}, \ldots, a_{m}\right) \in \mathbf{N}^{m}$ we have $$ \Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0, y\right) \leftrightarrow y=S^{F\left(a_{1}, \ldots, a_{m}\right)} 0 $$ Such a $\varphi\left(x_{1}, \ldots, x_{m}, y\right)$ is said to represent $F$ in $\Sigma$ or to $\Sigma$-represent $F$. An $L$-term $t\left(x_{1}, \ldots, x_{m}\right)$ is said to represent the function $F: \mathbf{N}^{m} \rightarrow \mathbf{N}$ in $\Sigma$ if $\Sigma \vdash t\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right)=S^{F(a)} 0$ for all $a=\left(a_{1}, \ldots, a_{m}\right) \in \mathbf{N}^{m}$. Note that then the function $F$ is $\Sigma$-represented by the formula $t\left(x_{1}, \ldots, x_{m}\right)=y$. Proposition 5.4.3. 
Let $L$ be a numerical language, $\Sigma$ a set of $L$-sentences such that $\Sigma \vdash S 0 \neq 0$, and $R \subseteq \mathbf{N}^{m}$ a relation. Then
$$
R \text { is } \Sigma \text {-representable } \Longleftrightarrow \chi_{R} \text { is } \Sigma \text {-representable. }
$$
Proof. $(\Leftarrow)$ Assume $\chi_{R}$ is $\Sigma$-representable and let $\varphi\left(x_{1}, \ldots, x_{m}, y\right)$ be an $L$-formula $\Sigma$-representing it. We show that $\psi\left(x_{1}, \ldots, x_{m}\right):=\varphi\left(x_{1}, \ldots, x_{m}, S 0\right)$ $\Sigma$-represents $R$. Let $\left(a_{1}, \ldots, a_{m}\right) \in R$; then $\chi_{R}\left(a_{1}, \ldots, a_{m}\right)=1$. Hence
$$
\Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0, y\right) \leftrightarrow y=S 0,
$$
so $\Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0, S 0\right)$, that is, $\Sigma \vdash \psi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right)$. Likewise, but now using also $\Sigma \vdash S 0 \neq 0$, we show that if $\left(a_{1}, \ldots, a_{m}\right) \notin R$, then $\Sigma \vdash \neg \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0, S 0\right)$. $(\Rightarrow)$ Conversely, assume $R$ is $\Sigma$-representable and let $\psi\left(x_{1}, \ldots, x_{m}\right)$ be an $L$-formula $\Sigma$-representing it. We show that $\varphi\left(x_{1}, \ldots, x_{m}, y\right)$ given by
$$
\varphi\left(x_{1}, \ldots, x_{m}, y\right):=\left(\psi\left(x_{1}, \ldots, x_{m}\right) \wedge y=S 0\right) \vee\left(\neg \psi\left(x_{1}, \ldots, x_{m}\right) \wedge y=0\right)
$$
$\Sigma$-represents $\chi_{R}$. Let $\left(a_{1}, \ldots, a_{m}\right) \in R$.
Then $\Sigma \vdash \psi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right)$, hence $\Sigma \vdash\left[\left(\psi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right) \wedge y=S 0\right) \vee\left(\neg \psi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0\right) \wedge y=0\right)\right] \leftrightarrow y=S 0$, that is, $\Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0, y\right) \leftrightarrow y=S 0$. Likewise, for $\left(a_{1}, \ldots, a_{m}\right) \notin R$, we obtain $\Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{m}} 0, y\right) \leftrightarrow y=0$. Theorem 5.4.4 (Representability). Each computable function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ is $\underline{\mathrm{N}}$-representable. Each computable relation $R \subseteq \mathbf{N}^{m}$ is $\underline{\mathrm{N}}$-representable. Proof. By Proposition 5.4.3 we need only consider the case of functions. We make the following three claims:

$(\mathrm{R} 1)^{\prime}$ $+: \mathbf{N}^{2} \rightarrow \mathbf{N}$, $\cdot: \mathbf{N}^{2} \rightarrow \mathbf{N}$, $\chi_{\leq}: \mathbf{N}^{2} \rightarrow \mathbf{N}$, and the coordinate function $I_{i}^{n}$ (for each $n$ and $i=1, \ldots, n$) are $\underline{\mathrm{N}}$-representable.

$(\mathrm{R} 2)^{\prime}$ If $G: \mathbf{N}^{m} \rightarrow \mathbf{N}$ and $H_{1}, \ldots, H_{m}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ are $\underline{\mathrm{N}}$-representable, then so is $F=G\left(H_{1}, \ldots, H_{m}\right): \mathbf{N}^{n} \rightarrow \mathbf{N}$ defined by
$$
F(a)=G\left(H_{1}(a), \ldots, H_{m}(a)\right) .
$$

$(\mathrm{R} 3)^{\prime}$ If $G: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ is $\underline{\mathrm{N}}$-representable, and for all $a \in \mathbf{N}^{n}$ there exists $x \in \mathbf{N}$ such that $G(a, x)=0$, then the function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ given by
$$
F(a)=\mu x(G(a, x)=0)
$$
is $\underline{\mathrm{N}}$-representable.

$(\mathrm{R} 1)^{\prime}$: The proof of this claim has six parts.
(i) The formula $x_{1}=x_{2}$ represents $\left\{(a, b) \in \mathbf{N}^{2}: a=b\right\}$ in $\underline{\mathrm{N}}$: Let $a, b \in \mathbf{N}$. If $a=b$, then obviously $\underline{\mathrm{N}} \vdash S^{a} 0=S^{b} 0$. Suppose that $a \neq b$. Then for every model $\mathcal{A}$ of $\underline{\mathrm{N}}$ we have $\mathcal{A} \models S^{a} 0 \neq S^{b} 0$, by Lemma 5.4.2 and its proof. Hence $\underline{\mathrm{N}} \vdash S^{a} 0 \neq S^{b} 0$.

(ii) The term $x_{1}+x_{2}$ represents $+: \mathbf{N}^{2} \rightarrow \mathbf{N}$ in $\underline{\mathrm{N}}$: Let $a+b=c$ where $a, b, c \in \mathbf{N}$. By Lemma 5.4.2 and its proof we have $\mathcal{A} \models S^{a} 0+S^{b} 0=S^{c} 0$ for each model $\mathcal{A}$ of $\underline{\mathrm{N}}$. It follows that $\underline{\mathrm{N}} \vdash S^{a} 0+S^{b} 0=S^{c} 0$.

(iii) The term $x_{1} \cdot x_{2}$ represents $\cdot: \mathbf{N}^{2} \rightarrow \mathbf{N}$ in $\underline{\mathrm{N}}$: The proof is similar to that of (ii).

(iv) The formula $x_{1}<x_{2}$ represents $\left\{(a, b) \in \mathbf{N}^{2}: a<b\right\}$ in $\underline{\mathrm{N}}$: The proof is similar to that of (i).

(v) $\chi_{\leq}: \mathbf{N}^{2} \rightarrow \mathbf{N}$ is $\underline{\mathrm{N}}$-representable: By (i) and (iv), the formula $x_{1}<x_{2} \vee x_{1}=x_{2}$ represents the set $\left\{(a, b) \in \mathbf{N}^{2}: a \leq b\right\}$ in $\underline{\mathrm{N}}$. So by Proposition 5.4.3, $\chi_{\leq}: \mathbf{N}^{2} \rightarrow \mathbf{N}$ is $\underline{\mathrm{N}}$-representable.

(vi) For $n \geq 1$ and $1 \leq i \leq n$, the term $t_{i}^{n}\left(x_{1}, \ldots, x_{n}\right):=x_{i}$ represents the function $I_{i}^{n}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ in $\underline{\mathrm{N}}$. This is obvious.
$(\mathrm{R} 2)^{\prime}$: Let $x_{1}, \ldots, x_{n}, y_{1}, \ldots, y_{m}, z$ be distinct variables, let $G: \mathbf{N}^{m} \rightarrow \mathbf{N}$ be $\underline{\mathrm{N}}$-represented by $\psi\left(y_{1}, \ldots, y_{m}, z\right)$, and let $H_{i}: \mathbf{N}^{n} \rightarrow \mathbf{N}$ be $\underline{\mathrm{N}}$-represented by $\varphi_{i}\left(x_{1}, \ldots, x_{n}, y_{i}\right)$ for $i=1, \ldots, m$. Claim: $F=G\left(H_{1}, \ldots, H_{m}\right)$ is $\underline{\mathrm{N}}$-represented by
$$
\theta\left(x_{1}, \ldots, x_{n}, z\right):=\exists y_{1} \ldots \exists y_{m}\left(\left(\bigwedge_{i=1}^{m} \varphi_{i}\left(x_{1}, \ldots, x_{n}, y_{i}\right)\right) \wedge \psi\left(y_{1}, \ldots, y_{m}, z\right)\right) .
$$
Put $a=\left(a_{1}, \ldots, a_{n}\right)$ and let $c=F(a)$. We have to show that
$$
\underline{\mathrm{N}} \vdash \theta\left(S^{a} 0, z\right) \leftrightarrow z=S^{c} 0, \quad \text { where } S^{a} 0 \text { abbreviates } S^{a_{1}} 0, \ldots, S^{a_{n}} 0 .
$$
Let $b_{i}=H_{i}(a)$ and put $b=\left(b_{1}, \ldots, b_{m}\right)$. Then $F(a)=G(b)=c$. Therefore, $\underline{\mathrm{N}} \vdash \psi\left(S^{b} 0, z\right) \leftrightarrow z=S^{c} 0$ and
$$
\underline{\mathrm{N}} \vdash \varphi_{i}\left(S^{a} 0, y_{i}\right) \leftrightarrow y_{i}=S^{b_{i}} 0, \quad(i=1, \ldots, m)
$$
Argue in models to conclude: $\underline{\mathrm{N}} \vdash \theta\left(S^{a} 0, z\right) \leftrightarrow z=S^{c} 0$.

$(\mathrm{R} 3)^{\prime}$: Let $G: \mathbf{N}^{n+1} \rightarrow \mathbf{N}$ be such that for all $a \in \mathbf{N}^{n}$ there exists $b \in \mathbf{N}$ with $G(a, b)=0$. Define $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ by $F(a)=\mu b(G(a, b)=0)$. Suppose that $G$ is $\underline{\mathrm{N}}$-represented by $\varphi\left(x_{1}, \ldots, x_{n}, y, z\right)$. We claim that the formula
$$
\psi\left(x_{1}, \ldots, x_{n}, y\right):=\varphi\left(x_{1}, \ldots, x_{n}, y, 0\right) \wedge \forall w\left(w<y \rightarrow \neg \varphi\left(x_{1}, \ldots, x_{n}, w, 0\right)\right)
$$
$\underline{\mathrm{N}}$-represents $F$.
Let $a \in \mathbf{N}^{n}$ and let $b=F(a)$. Then $G(a, i) \neq 0$ for $i<b$ and $G(a, b)=0$. Therefore, $\underline{\mathrm{N}} \vdash \varphi\left(S^{a} 0, S^{b} 0, z\right) \leftrightarrow z=0$, and for $i<b$, $\underline{\mathrm{N}} \vdash \varphi\left(S^{a} 0, S^{i} 0, z\right) \leftrightarrow z=S^{G(a, i)} 0$. By arguing in models using Lemma 5.4.2 we obtain $\underline{\mathrm{N}} \vdash \psi\left(S^{a} 0, y\right) \leftrightarrow y=S^{b} 0$, as claimed. Remark. The converse of this theorem is also true, and is plausible in view of the Church-Turing Thesis. We shall prove the converse in the next section.

## Exercises.

In the exercises below, $L$ is a numerical language and $\Sigma$ is a set of $L$-sentences.

(1) Suppose $\Sigma \vdash S^{m} 0 \neq S^{n} 0$ whenever $m \neq n$. If a function $F: \mathbf{N}^{m} \rightarrow \mathbf{N}$ is $\Sigma$-represented by the $L$-formula $\varphi\left(x_{1}, \ldots, x_{m}, y\right)$, then the graph of $F$, as a relation of arity $m+1$ on $\mathbf{N}$, is $\Sigma$-represented by $\varphi\left(x_{1}, \ldots, x_{m}, y\right)$. (This result applies to $\Sigma=\underline{\mathrm{N}}$, since $\underline{\mathrm{N}} \vdash S^{m} 0 \neq S^{n} 0$ whenever $m \neq n$.)

(2) Suppose $\Sigma \supseteq \underline{\mathrm{N}}$. Then the set of all $\Sigma$-representable functions $F: \mathbf{N}^{m} \rightarrow \mathbf{N}$, $(m=0,1,2, \ldots)$ is closed under composition and minimalization.

### Decidability and Gödel Numbering

Definition. An $L$-theory $T$ is a set of $L$-sentences closed under provability, that is, whenever $T \vdash \sigma$, then $\sigma \in T$.

## Examples.

(1) Given a set $\Sigma$ of $L$-sentences, the set $\operatorname{Th}(\Sigma):=\{\sigma: \Sigma \vdash \sigma\}$ of theorems of $\Sigma$, is an $L$-theory. If we need to indicate the dependence on $L$ we write $\operatorname{Th}_{L}(\Sigma)$ for $\operatorname{Th}(\Sigma)$.
We say that $\Sigma$ axiomatizes an $L$-theory $T$ (or is an axiomatization of $T$) if $T=\operatorname{Th}(\Sigma)$. For $\Sigma=\emptyset$ we also refer to $\operatorname{Th}_{L}(\Sigma)$ as predicate logic in $L$.

(2) Given an $L$-structure $\mathcal{A}$, the set $\operatorname{Th}(\mathcal{A}):=\{\sigma: \mathcal{A} \models \sigma\}$ is also an $L$-theory, called the theory of $\mathcal{A}$. Note that the theory of $\mathcal{A}$ is automatically complete.

(3) Given any class $\mathcal{K}$ of $L$-structures, the set
$$
\operatorname{Th}(\mathcal{K}):=\{\sigma: \mathcal{A} \models \sigma \text { for all } \mathcal{A} \in \mathcal{K}\}
$$
is an $L$-theory, called the theory of $\mathcal{K}$. For example, for $L=L_{\text {Ring }}$, and $\mathcal{K}$ the class of finite fields, $\operatorname{Th}(\mathcal{K})$ is the set of $L$-sentences that are true in all finite fields.

The decision problem for a given $L$-theory $T$ is to find an algorithm to decide for any $L$-sentence $\sigma$ whether or not $\sigma$ belongs to $T$. Since we have not (yet) defined the concept of algorithm, this is just an informal description at this stage. One of our goals in this section is to define a formal counterpart, called decidability. In the next section we show that the $L(\underline{\mathrm{N}})$-theory $\operatorname{Th}(\mathfrak{N})$ is undecidable; by the Church-Turing Thesis, this means that the decision problem for $\operatorname{Th}(\mathfrak{N})$ has no solution. (This result is a version of Church's Theorem, and is closely related to the Incompleteness Theorem.)

In the rest of this chapter the language $L$ is assumed to be finite unless we say otherwise. This is done for simplicity, and at the end of this section we indicate how to avoid this assumption.
We shall number the terms and formulas of $L$ in such a way that various statements about these formulas and about formal proofs in this language can be translated effectively into equivalent statements about natural numbers expressible by sentences in $L(\underline{\mathrm{N}})$. Recall that $\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots$ are our variables. We assign to each symbol
$$
s \in\left\{\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots\right\} \sqcup\{\text { logical symbols }\} \sqcup L
$$
a symbol number $\mathrm{SN}(s) \in \mathbf{N}$ as follows: $\mathrm{SN}\left(\mathrm{v}_{i}\right):=2 i$, and to each remaining symbol, in the finite set $\{$ logical symbols $\} \sqcup L$, we assign an odd natural number as symbol number, subject to the condition that different symbols have different symbol numbers. Definition. The Gödel number $\ulcorner t\urcorner$ of an $L$-term $t$ is defined recursively:
$$
\ulcorner t\urcorner= \begin{cases}\left\langle\mathrm{SN}\left(\mathrm{v}_{i}\right)\right\rangle & \text { if } t=\mathrm{v}_{i}, \\ \left\langle\mathrm{SN}(F),\left\ulcorner t_{1}\right\urcorner, \ldots,\left\ulcorner t_{n}\right\urcorner\right\rangle & \text { if } t=F t_{1} \ldots t_{n} .\end{cases}
$$
The Gödel number $\ulcorner\varphi\urcorner$ of an $L$-formula $\varphi$ is given recursively by
$$
\ulcorner\varphi\urcorner= \begin{cases}\langle\operatorname{SN}(\top)\rangle & \text { if } \varphi=\top, \\ \langle\operatorname{SN}(\perp)\rangle & \text { if } \varphi=\perp, \\ \left\langle\operatorname{SN}(=),\left\ulcorner t_{1}\right\urcorner,\left\ulcorner t_{2}\right\urcorner\right\rangle & \text { if } \varphi=\left(t_{1}=t_{2}\right), \\ \left\langle\operatorname{SN}(R),\left\ulcorner t_{1}\right\urcorner, \ldots,\left\ulcorner t_{m}\right\urcorner\right\rangle & \text { if } \varphi=R t_{1} \ldots t_{m}, \\ \langle\operatorname{SN}(\neg),\ulcorner\psi\urcorner\rangle & \text { if } \varphi=\neg \psi, \\
\left\langle\operatorname{SN}(\vee),\left\ulcorner\varphi_{1}\right\urcorner,\left\ulcorner\varphi_{2}\right\urcorner\right\rangle & \text { if } \varphi=\varphi_{1} \vee \varphi_{2}, \\ \left\langle\operatorname{SN}(\wedge),\left\ulcorner\varphi_{1}\right\urcorner,\left\ulcorner\varphi_{2}\right\urcorner\right\rangle & \text { if } \varphi=\varphi_{1} \wedge \varphi_{2}, \\ \langle\operatorname{SN}(\exists),\ulcorner x\urcorner,\ulcorner\psi\urcorner\rangle & \text { if } \varphi=\exists x \psi, \\ \langle\operatorname{SN}(\forall),\ulcorner x\urcorner,\ulcorner\psi\urcorner\rangle & \text { if } \varphi=\forall x \psi .\end{cases} $$ Lemma 5.5.1. The following subsets of $\mathbf{N}$ are computable: (1) Vble $:=\{\ulcorner x\urcorner: x$ is a variable $\}$ (2) Term $:=\{\ulcorner t\urcorner: t$ is an $L$-term $\}$ (3) AFor $:=\{\ulcorner\varphi\urcorner: \varphi$ is an atomic L-formula $\}$ (4) For $:=\{\ulcorner\varphi\urcorner: \varphi$ is an L-formula $\}$ Proof. (1) $a \in$ Vble iff $a=\langle 2 b\rangle$ for some $b \leq a$. (2) $a \in$ Term iff $a \in$ Vble or $a=\left\langle\mathrm{SN}(F),\left\ulcorner t_{1}\right\urcorner, \ldots,\left\ulcorner t_{n}\right\urcorner\right\rangle$ for some function symbol $F$ of $L$ of arity $n$ and $L$-terms $t_{1}, \ldots, t_{n}$ with Gödel numbers $<a$. We leave (3) to the reader. 
(4) We have For $(a) \Leftrightarrow \begin{cases}\operatorname{For}\left((a)_{1}\right) & \text { if } a=\left\langle\operatorname{SN}(\neg),(a)_{1}\right\rangle, \\ \operatorname{For}\left((a)_{1}\right) \text { and } \operatorname{For}\left((a)_{2}\right) & \text { if } a=\left\langle\operatorname{SN}(\vee),(a)_{1},(a)_{2}\right\rangle \\ & \text { or } a=\left\langle\operatorname{SN}(\wedge),(a)_{1},(a)_{2}\right\rangle, \\ \operatorname{Vble}\left((a)_{1}\right) \text { and } \operatorname{For}\left((a)_{2}\right) & \text { if } a=\left\langle\operatorname{SN}(\exists),(a)_{1},(a)_{2}\right\rangle \\ & \text { or } a=\left\langle\operatorname{SN}(\forall),(a)_{1},(a)_{2}\right\rangle, \\ \operatorname{AFor}(a) & \text { otherwise. }\end{cases}$ So For is computable. In the next two lemmas, $x$ ranges over variables, $\varphi$ and $\psi$ over $L$-formulas, and $t$ and $\tau$ over $L$-terms. Lemma 5.5.2. The function $\operatorname{Sub}: \mathbf{N}^{3} \rightarrow \mathbf{N}$ defined by $\operatorname{Sub}(a, b, c)=$ $$ \begin{cases}c & \text { if } \operatorname{Vble}(a) \text { and } a=b, \\ \left\langle(a)_{0}, \operatorname{Sub}\left((a)_{1}, b, c\right), \ldots, \operatorname{Sub}\left((a)_{n}, b, c\right)\right\rangle & \text { if } a=\left\langle(a)_{0}, \ldots,(a)_{n}\right\rangle \text { with } n>0 \text { and } \\ & (a)_{0} \neq \mathrm{SN}(\exists),(a)_{0} \neq \mathrm{SN}(\forall), \\ \left\langle\operatorname{SN}(\exists),(a)_{1}, \operatorname{Sub}\left((a)_{2}, b, c\right)\right\rangle & \text { if } a=\left\langle\mathrm{SN}(\exists),(a)_{1},(a)_{2}\right\rangle \text { and }(a)_{1} \neq b, \\ \left\langle\operatorname{SN}(\forall),(a)_{1}, \operatorname{Sub}\left((a)_{2}, b, c\right)\right\rangle & \text { if } a=\left\langle\mathrm{SN}(\forall),(a)_{1},(a)_{2}\right\rangle \text { and }(a)_{1} \neq b, \\ a & \text { otherwise }\end{cases} $$ is computable, and satisfies $$ \operatorname{Sub}(\ulcorner t\urcorner,\ulcorner 
x\urcorner,\ulcorner\tau\urcorner)=\ulcorner t(\tau / x)\urcorner \text { and } \operatorname{Sub}(\ulcorner\varphi\urcorner,\ulcorner x\urcorner,\ulcorner\tau\urcorner)=\ulcorner\varphi(\tau / x)\urcorner \text {. } $$ Proof. Exercise; see also exercise (2) of Section 2.5. Lemma 5.5.3. The following relations on $\mathbf{N}$ are computable: (1) $\operatorname{PrAx}:=\{\ulcorner\varphi\urcorner: \varphi$ is a propositional axiom $\} \subseteq \mathbf{N}$ (2) $\mathrm{Eq}:=\{\ulcorner\varphi\urcorner: \varphi$ is an equality axiom $\} \subseteq \mathbf{N}$ (3) Fr $:=\{(\ulcorner\varphi\urcorner,\ulcorner x\urcorner): x$ occurs free in $\varphi\} \subseteq \mathbf{N}^{2}$ (4) FrSub $:=\{(\ulcorner\varphi\urcorner,\ulcorner x\urcorner,\ulcorner\tau\urcorner): \tau$ is free for $x$ in $\varphi\} \subseteq \mathbf{N}^{3}$ (5) Quant $:=\{\ulcorner\psi\urcorner: \psi$ is a quantifier axiom $\} \subseteq \mathbf{N}$ (6) $\mathrm{MP}:=\left\{\left(\left\ulcorner\varphi_{1}\right\urcorner,\left\ulcorner\varphi_{1} \rightarrow \varphi_{2}\right\urcorner,\left\ulcorner\varphi_{2}\right\urcorner\right): \varphi_{1}, \varphi_{2}\right.$ are $L$-formulas $\} \subseteq \mathbf{N}^{3}$ (7) Gen $:=\{(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner): \psi$ follows from $\varphi$ by the generalization rule $\} \subseteq \mathbf{N}^{2}$ (8) Sent $:=\{\ulcorner\varphi\urcorner: \varphi$ is a sentence $\} \subseteq \mathbf{N}$ Proof. This is a lengthy, tedious, but routine exercise. The idea is to translate the usual inductive or explicit description of the relevant syntactic notions into a description of its "Gödel image" that establishes computability of this image. For example, when $\varphi=\varphi_{1} \vee \varphi_{2}$, one can use facts like: $x$ occurs free in $\varphi$ iff $x$ occurs free in $\varphi_{1}$ or in $\varphi_{2}$; and $\tau$ is free for $x$ in $\varphi$ iff $\tau$ is free for $x$ in $\varphi_{1}$ and $\tau$ is free for $x$ in $\varphi_{2}$. 
As usual, the main inductive steps concern terms, atomic formulas, and formulas that start with a quantifier symbol. See also exercise (1) of Section 2.7. As to (8), we have
$$
\operatorname{Sent}(a) \Longleftrightarrow \operatorname{For}(a) \text { and } \forall i_{<a} \neg \operatorname{Fr}(a, i),
$$
so (8) follows from (3) and earlier results. In the rest of this section $\Sigma$ is a set of $L$-sentences. Put
$$
\ulcorner\Sigma\urcorner:=\{\ulcorner\sigma\urcorner: \sigma \in \Sigma\},
$$
and call $\Sigma$ computable if $\ulcorner\Sigma\urcorner$ is computable. Definition. $\operatorname{Prf}_{\Sigma}$ is the set of Gödel numbers of proofs from $\Sigma$, that is,
$$
\operatorname{Prf}_{\Sigma}:=\left\{\left\langle\left\ulcorner\varphi_{1}\right\urcorner, \ldots,\left\ulcorner\varphi_{n}\right\urcorner\right\rangle: \varphi_{1}, \ldots, \varphi_{n} \text { is a proof from } \Sigma\right\} .
$$
So every element of $\operatorname{Prf}_{\Sigma}$ is of the form $\left\langle\left\ulcorner\varphi_{1}\right\urcorner, \ldots,\left\ulcorner\varphi_{n}\right\urcorner\right\rangle$ where $n \geq 1$ and every $\varphi_{k}$ is either in $\Sigma$, or a logical axiom, or obtained from some $\varphi_{i}, \varphi_{j}$ with $1 \leq i, j<k$ by Modus Ponens, or obtained from some $\varphi_{i}$ with $1 \leq i<k$ by Generalization. Lemma 5.5.4. If $\Sigma$ is computable, then $\operatorname{Prf}_{\Sigma}$ is computable. Proof. This is because $a$ is in $\operatorname{Prf}_{\Sigma}$ iff $\operatorname{Seq}(a)$ and $\operatorname{lh}(a) \neq 0$ and for every $k<\operatorname{lh}(a)$ either $(a)_{k} \in\ulcorner\Sigma\urcorner \cup \operatorname{PrAx} \cup \operatorname{Eq} \cup$ Quant or $\exists i, j<k: \operatorname{MP}\left((a)_{i},(a)_{j},(a)_{k}\right)$ or $\exists i<k: \operatorname{Gen}\left((a)_{i},(a)_{k}\right)$. Definition. An $L$-theory $T$ is said to be computably axiomatizable if $T$ has a computable axiomatization.
${ }^{3}$ Instead of "computably axiomatizable," also "recursively axiomatizable" and "effectively axiomatizable" are used.

We say that $T$ is decidable if $\ulcorner T\urcorner$ is computable, and undecidable otherwise. (Thus "$T$ is decidable" means the same thing as "$T$ is computable," but for $L$-theories "decidable" is more widely used than "computable".) Definition. A relation $R \subseteq \mathbf{N}^{n}$ is said to be computably generated if there is a computable relation $Q \subseteq \mathbf{N}^{n+1}$ such that for all $a \in \mathbf{N}^{n}$ we have
$$
R(a) \Leftrightarrow \exists x Q(a, x) .
$$
"Recursively enumerable" is also used for "computably generated." Remark. Every computable relation is obviously computably generated. We leave it as an exercise to check that the union and intersection of two computably generated $n$-ary relations on $\mathbf{N}$ are computably generated. The complement of a computably generated subset of $\mathbf{N}$ is not always computably generated, as we shall see later. Lemma 5.5.5. If $\Sigma$ is computable, then $\ulcorner\mathrm{Th}(\Sigma)\urcorner$ is computably generated. Proof. Apply Lemma 5.5.4 and the fact that for all $a \in \mathbf{N}$
$$
a \in\ulcorner\operatorname{Th}(\Sigma)\urcorner \Longleftrightarrow \exists b\left(\operatorname{Prf}_{\Sigma}(b) \text { and } a=(b)_{\operatorname{lh}(b) \dot{-} 1} \text { and } \operatorname{Sent}(a)\right) .
$$
Proposition 5.5.6 (Negation Theorem). Let $A \subseteq \mathbf{N}^{n}$ and suppose $A$ and $\neg A$ are computably generated. Then $A$ is computable. Proof. Let $P, Q \subseteq \mathbf{N}^{n+1}$ be computable such that for all $a \in \mathbf{N}^{n}$ we have
$$
A(a) \Longleftrightarrow \exists x P(a, x), \quad \neg A(a) \Longleftrightarrow \exists x Q(a, x) .
$$
Then there is for each $a \in \mathbf{N}^{n}$ an $x \in \mathbf{N}$ such that $(P \vee Q)(a, x)$.
The computability of $A$ follows by noting that for all $a \in \mathbf{N}^{n}$ we have $$ A(a) \Longleftrightarrow P(a, \mu x(P \vee Q)(a, x)) . $$ Proposition 5.5.7. Every complete and computably axiomatizable L-theory is decidable. Proof. Let $T$ be a complete $L$-theory with computable axiomatization $\Sigma$. Then $\ulcorner T\urcorner=\ulcorner\operatorname{Th}(\Sigma)\urcorner$ is computably generated. Now observe: $$ \begin{aligned} a \notin\ulcorner T\urcorner & \Longleftrightarrow a \notin \text { Sent or }\langle\mathrm{SN}(\neg), a\rangle \in\ulcorner T\urcorner \\ & \Longleftrightarrow a \notin \text { Sent or } \exists b\left(\operatorname{Prf}_{\Sigma}(b) \text { and }(b)_{\operatorname{lh}(b)-1}=\langle\mathrm{SN}(\neg), a\rangle\right) . \end{aligned} $$ Hence the complement of $\ulcorner T\urcorner$ is computably generated. Thus $T$ is decidable by the Negation Theorem. Representability implies Computability. We prove here the converse of the Representability Theorem, as promised at the end of Section 5.4. In this subsection we assume that $L$ is numerical. Lemma 5.5.8. The function $\mathrm{Num}: \mathbf{N} \rightarrow \mathbf{N}$ defined by $\operatorname{Num}(a)=\left\ulcorner S^{a} 0\right\urcorner$ is computable. Proof. $\operatorname{Num}(0)=\ulcorner 0\urcorner$ and $\operatorname{Num}(a+1)=\langle\mathrm{SN}(S), \operatorname{Num}(a)\rangle$. Thus, given an $L$-formula $\varphi(x)$, the function $$ a \mapsto\left\ulcorner\varphi\left(S^{a} 0\right)\right\urcorner=\operatorname{Sub}(\ulcorner\varphi\urcorner,\ulcorner x\urcorner, \operatorname{Num}(a)) $$ is computable; this should also be intuitively clear. Such computable functions will play an important role in what follows. Proposition 5.5.9. Suppose $\Sigma$ is a computable consistent set of $L$-sentences. Then every $\Sigma$-representable $U \subseteq \mathbf{N}^{n}$ is computable. Proof. The case $n=0$ is trivial, so let $n \geq 1$. 
Suppose $\varphi\left(x_{1}, \ldots, x_{n}\right)$ is an $L$-formula that $\Sigma$-represents $U \subseteq \mathbf{N}^{n}$. As $\Sigma$ is consistent, we have $$ U\left(a_{1}, \ldots, a_{n}\right) \Longleftrightarrow \Sigma \vdash \varphi\left(S^{a_{1}} 0, \ldots, S^{a_{n}} 0\right) \quad\left(a_{1}, \ldots, a_{n} \in \mathbf{N}\right) . $$ Define $s: \mathbf{N}^{n} \rightarrow \mathbf{N}$ by $s\left(a_{1}, \ldots, a_{n}\right):=\left\ulcorner\varphi\left(S^{a_{1}} 0, \ldots, S^{a_{n}} 0\right)\right\urcorner$. It is intuitively clear that $s$ is computable, but here is a proof. For $i=1, \ldots, n$, define $$ s_{i}: \mathbf{N}^{i} \rightarrow \mathbf{N}, \quad s_{i}\left(a_{1}, \ldots, a_{i}\right):=\left\ulcorner\varphi\left(S^{a_{1}} 0, \ldots, S^{a_{i}} 0, x_{i+1}, \ldots, x_{n}\right)\right\urcorner . $$ Then $s_{1}\left(a_{1}\right)=\operatorname{Sub}\left(\ulcorner\varphi\urcorner,\left\ulcorner x_{1}\right\urcorner, \operatorname{Num}\left(a_{1}\right)\right)$ for $a_{1} \in \mathbf{N}$, so $s_{1}$ is computable. Next, $$ s_{i+1}\left(a_{1}, \ldots, a_{i}, a_{i+1}\right)=\operatorname{Sub}\left(s_{i}\left(a_{1}, \ldots, a_{i}\right),\left\ulcorner x_{i+1}\right\urcorner, \operatorname{Num}\left(a_{i+1}\right)\right) \quad(1 \leq i<n) $$ for all $a_{1}, \ldots, a_{i+1} \in \mathbf{N}$, so the computability of $s_{i}$ gives that of $s_{i+1}$. Thus $s=s_{n}$ is computable. By the first display we obtain that for all $a \in \mathbf{N}^{n}$, $$ U(a) \Longleftrightarrow \Sigma \vdash \varphi\left(S^{a} 0\right) \Longleftrightarrow s(a) \in\ulcorner\mathrm{Th}(\Sigma)\urcorner . $$ Now $\Sigma$ is computable, so $\ulcorner\operatorname{Th}(\Sigma)\urcorner \subseteq \mathbf{N}$ is computably generated by Lemma 5.5.5. Take a computable $R \subseteq \mathbf{N}^{n+1}$ such that for all $x \in \mathbf{N}$, $$ x \in\ulcorner\operatorname{Th}(\Sigma)\urcorner \Longleftrightarrow \exists y R(x, y) . 
$$ Then by the above, for all $a \in \mathbf{N}^{n}$, $$ U(a) \Longleftrightarrow \exists y R(s(a), y), $$ exhibiting $U$ as computably generated. By the definition of "$\Sigma$-representable" the complement $\neg U$ is also $\Sigma$-representable, so $\neg U$ is computably generated as well. Then by the Negation Theorem $U$ is computable. Corollary 5.5.10. Suppose $L \supseteq L(\underline{\mathrm{N}})$ and $\Sigma$ is a computable consistent set of $L$-sentences. Then every $\Sigma$-representable function $f: \mathbf{N}^{n} \rightarrow \mathbf{N}$ is computable. Proof. Suppose $f: \mathbf{N}^{n} \rightarrow \mathbf{N}$ is $\Sigma$-representable. Then its graph is $\Sigma$-representable by Exercise (1) of Section 5.4, so this graph is computable by Proposition 5.5.9. Thus $f$ is computable by Lemma 5.1.9. In view of the Representability Theorem, these two results also give a nice characterization of computability. Let a function $f: \mathbf{N}^{n} \rightarrow \mathbf{N}$ be given. Corollary 5.5.11. $f$ is computable if and only if $f$ is $\underline{\mathrm{N}}$-representable. Relaxing the assumption of a finite language. Much of the above does not really need the assumption that $L$ is finite. In the discussion below we only assume that $L$ is countable, so the case of finite $L$ is included. First, we assign to each symbol $$ s \in\left\{\mathrm{v}_{0}, \mathrm{v}_{1}, \mathrm{v}_{2}, \ldots\right\} \sqcup\{\text { logical symbols }\} $$ its symbol number $\operatorname{SN}(s) \in \mathbf{N}$ as follows: $\operatorname{SN}\left(\mathrm{v}_{i}\right):=2 i$, and for $$ \begin{aligned} s & =\top, \perp, \neg, \vee, \wedge,=, \exists, \forall, \text { respectively, put } \\ \mathrm{SN}(s) & =1,3,5,7,9,11,13,15, \text { respectively. } \end{aligned} $$ This part of our numbering of symbols is independent of $L$.
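The $L$-independent part of this numbering is trivially computable; here is a sketch (the string spellings of the logical symbols are our own, chosen for illustration):

```python
# Symbol numbers for the L-independent part of the numbering:
# variables v_i get the even numbers 2i, and the eight logical
# symbols T, F (falsum), not, or, and, =, exists, forall get the
# odd numbers 1..15, leaving the odd numbers > 15 free for L.
LOGICAL = {'T': 1, 'F': 3, 'not': 5, 'or': 7, 'and': 9,
           '=': 11, 'exists': 13, 'forall': 15}

def SN_var(i):
    """SN(v_i) = 2i."""
    return 2 * i

def SN_logical(name):
    return LOGICAL[name]

# Variables are even, logical symbols odd, so no clashes occur.
assert SN_var(0) == 0 and SN_var(7) == 14
assert sorted(LOGICAL.values()) == [1, 3, 5, 7, 9, 11, 13, 15]
```

Symbols of $L$ itself would then receive odd numbers $>15$, with congruence conditions modulo 4 distinguishing relation symbols from function symbols.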
By a numbering of $L$ we mean an injective function $L \rightarrow \mathbf{N}$ that assigns to each $s \in L$ an odd natural number $\mathrm{SN}(s)>15$ such that if $s$ is a relation symbol, then $\operatorname{SN}(s) \equiv 1 \bmod 4$, and if $s$ is a function symbol, then $\operatorname{SN}(s) \equiv 3 \bmod 4$. Such a numbering of $L$ is said to be computable if the sets $$ \mathrm{SN}(L)=\{\mathrm{SN}(s): s \in L\} \subseteq \mathbf{N}, \quad\{(\mathrm{SN}(s), \operatorname{arity}(s)): s \in L\} \subseteq \mathbf{N}^{2} $$ are computable. (So if $L$ is finite, then every numbering of $L$ is computable.) Given a numbering of $L$ we use it to assign to each $L$-term $t$ and each $L$-formula $\varphi$ its Gödel number $\ulcorner t\urcorner$ and $\ulcorner\varphi\urcorner$, just as we did earlier in the section. Suppose a computable numbering of $L$ is given, with the corresponding Gödel numbering of $L$-terms and $L$-formulas. Then Lemmas 5.5.1, 5.5.2, and 5.5.3 go through, as a diligent reader can easily verify. Let also a set $\Sigma$ of $L$-sentences be given. Define $\ulcorner\Sigma\urcorner:=\{\ulcorner\sigma\urcorner: \sigma \in \Sigma\}$, and call $\Sigma$ computable if $\ulcorner\Sigma\urcorner$ is a computable subset of $\mathbf{N}$. We define $\operatorname{Prf}_{\Sigma}$ to be the set of all Gödel numbers of proofs from $\Sigma$, and for an $L$-theory $T$ we define the notions of $T$ being computably axiomatizable, $T$ being decidable, and $T$ being undecidable, all just as we did earlier in this section. (Note, however, that the definitions of these notions are all relative to our given computable numbering of $L$; for finite $L$ different choices of numbering of $L$ yield equivalent notions of $\Sigma$ being computable, $T$ being computably axiomatizable, and $T$ being decidable.) It is now routine to check that Lemmas 5.5.4, 5.5.5, Propositions 5.5.7, 5.5.9, and Corollary 5.5.10 go through. ## Exercises. 
(1) If $f: \mathbf{N} \rightarrow \mathbf{N}$ is computable and $f(x)>x$ for all $x \in \mathbf{N}$, then $f(\mathbf{N})$ is computable. (2) Let the set $X \subseteq \mathbf{N}$ be nonempty. Then $X$ is computably generated iff there is a computable function $f: \mathbf{N} \rightarrow \mathbf{N}$ such that $X=f(\mathbf{N})$. Moreover, if $X$ is infinite and computably generated, then such $f$ can be chosen to be injective. (3) Every infinite computably generated subset of $\mathbf{N}$ has an infinite computable subset. (4) A function $F: \mathbf{N}^{n} \rightarrow \mathbf{N}$ is computable iff its graph is computably generated. (5) Let $a$ and $b$ denote positive real numbers. Call $a$ computable if there are computable functions $f, g: \mathbf{N} \rightarrow \mathbf{N}$ such that for all $n>0$, $$ g(n) \neq 0 \text { and }|a-f(n) / g(n)|<1 / n . $$ Then: (i) every positive rational number is computable, and $e$ is computable; (ii) if $a$ and $b$ are computable, so are $a+b, a b$, and $1 / a$, and if in addition $a>b$, then $a-b$ is also computable; (iii) $a$ is computable if and only if the binary relation $R_{a}$ on $\mathbf{N}$ defined by $$ R_{a}(m, n) \Longleftrightarrow n>0 \text { and } m / n<a $$ is computable. (Hint: use the Negation Theorem.) ### Theorems of Gödel and Church In this section we assume that the finite language $L$ extends $L(\underline{\mathrm{N}})$. Theorem 5.6.1 (Church). No consistent L-theory extending $\underline{\mathrm{N}}$ is decidable. Before giving the proof we record the following consequence: Corollary 5.6.2 (Weak form of Gödel's Incompleteness Theorem). Each computably axiomatizable L-theory extending $\underline{\mathrm{N}}$ is incomplete. Proof. Immediate from 5.5.7 and Church's Theorem. We will indicate in the next section how to construct for any consistent computable set of $L$-sentences $\Sigma \supseteq \underline{\mathrm{N}}$ an $L$-sentence $\sigma$ such that $\Sigma \nvdash \sigma$ and $\Sigma \nvdash \neg \sigma$. 
(The corollary above only says that such a sentence exists.) For the proof of Church's Theorem we need a few lemmas. Let $P \subseteq A^{2}$ be any binary relation on a set $A$. For $a \in A$, we let $P(a) \subseteq A$ be given by the equivalence $P(a)(b) \Leftrightarrow P(a, b)$. Lemma 5.6.3 (Cantor). Given any $P \subseteq A^{2}$, its antidiagonal $Q \subseteq A$ defined by $$ Q(b) \Longleftrightarrow \neg P(b, b) $$ is not of the form $P(a)$ for any $a \in A$. Proof. Suppose $Q=P(a)$, where $a \in A$. Then $Q(a)$ iff $P(a, a)$. But by definition, $Q(a)$ iff $\neg P(a, a)$, a contradiction. This is essentially Cantor's proof that no $f: A \rightarrow \mathfrak{P}(A)$ can be surjective. (Use $P(a, b): \Leftrightarrow b \in f(a)$; then $P(a)=f(a)$.) Definition. Let $\Sigma$ be a set of $L$-sentences. We fix a variable $x$ (e.g. $x=\mathrm{v}_{0}$) and define the binary relation $P^{\Sigma} \subseteq \mathbf{N}^{2}$ by $$ P^{\Sigma}(a, b) \Longleftrightarrow \operatorname{Sub}(a,\ulcorner x\urcorner, \operatorname{Num}(b)) \in\ulcorner\operatorname{Th}(\Sigma)\urcorner $$ For an $L$-formula $\varphi(x)$ and $a=\ulcorner\varphi(x)\urcorner$, we have $$ \operatorname{Sub}\left(\ulcorner\varphi(x)\urcorner,\ulcorner x\urcorner,\left\ulcorner S^{b} 0\right\urcorner\right)=\left\ulcorner\varphi\left(S^{b} 0\right)\right\urcorner, $$ so $$ P^{\Sigma}(a, b) \Longleftrightarrow \Sigma \vdash \varphi\left(S^{b} 0\right) $$ Lemma 5.6.4. Suppose $\Sigma \supseteq \underline{\mathrm{N}}$ is consistent. Then each computable set $X \subseteq \mathbf{N}$ is of the form $X=P^{\Sigma}(a)$ for some $a \in \mathbf{N}$. Proof. Let $X \subseteq \mathbf{N}$ be computable. Then $X$ is $\Sigma$-representable by Theorem 5.4.4, say by the formula $\varphi(x)$, i.e. $X(b) \Rightarrow \Sigma \vdash \varphi\left(S^{b} 0\right)$, and $\neg X(b) \Rightarrow \Sigma \vdash \neg \varphi\left(S^{b} 0\right)$. 
So $X(b) \Leftrightarrow \Sigma \vdash \varphi\left(S^{b} 0\right)$ (using consistency to get "$\Leftarrow$"). Take $a=\ulcorner\varphi(x)\urcorner$; then $X(b)$ iff $\Sigma \vdash \varphi\left(S^{b} 0\right)$ iff $P^{\Sigma}(a, b)$, that is, $X=P^{\Sigma}(a)$. Proof of Church's Theorem. Let $\Sigma \supseteq \underline{\mathrm{N}}$ be consistent. We have to show that then $\operatorname{Th}(\Sigma)$ is undecidable, that is, $\ulcorner\operatorname{Th}(\Sigma)\urcorner$ is not computable. Suppose that $\ulcorner\operatorname{Th}(\Sigma)\urcorner$ is computable. Then the antidiagonal $Q^{\Sigma} \subseteq \mathbf{N}$ of $P^{\Sigma}$ is computable: $$ b \in Q^{\Sigma} \Leftrightarrow(b, b) \notin P^{\Sigma} \Leftrightarrow \operatorname{Sub}(b,\ulcorner x\urcorner, \operatorname{Num}(b)) \notin\ulcorner\operatorname{Th}(\Sigma)\urcorner . $$ By Lemma 5.6.3, $Q^{\Sigma}$ is not among the $P^{\Sigma}(a)$. Therefore by Lemma 5.6.4, $Q^{\Sigma}$ is not computable, a contradiction. This concludes the proof. By Lemma 5.5.5 the subset $\left\ulcorner\mathrm{Th}_{L}(\underline{\mathrm{N}})\right\urcorner$ of $\mathbf{N}$ is computably generated. But this set is not computable: Corollary 5.6.5. $\operatorname{Th}(\underline{\mathrm{N}})$ and $\operatorname{Th}(\emptyset)$ (in the language $L$) are undecidable. Proof. The undecidability of $\operatorname{Th}(\underline{\mathrm{N}})$ is a special case of Church's Theorem. Let $\wedge \underline{\mathrm{N}}$ be the conjunction of the finitely many axioms of $\underline{\mathrm{N}}$. 
Then, for any $L$-sentence $\sigma$, $$ \underline{\mathrm{N}} \vdash_{L} \sigma \Longleftrightarrow \emptyset \vdash_{L} \wedge \underline{\mathrm{N}} \rightarrow \sigma $$ that is, for all $a \in \mathbf{N}$, $$ a \in\ulcorner\mathrm{Th}(\underline{\mathrm{N}})\urcorner \Longleftrightarrow a \in \text { Sent and }\langle\mathrm{SN}(\vee),\langle\mathrm{SN}(\neg),\ulcorner\wedge \underline{\mathrm{N}}\urcorner\rangle, a\rangle \in\ulcorner\mathrm{Th}(\emptyset)\urcorner . $$ Therefore, if $\operatorname{Th}(\emptyset)$ were decidable, then $\operatorname{Th}(\underline{\mathrm{N}})$ would be decidable; but $\operatorname{Th}(\underline{\mathrm{N}})$ is undecidable. So $\operatorname{Th}(\emptyset)$ is undecidable. We assumed in the beginning of this section that $L \supseteq L(\underline{\mathrm{N}})$, but the statement that $\operatorname{Th}_{L}(\emptyset)$ is undecidable also makes sense without this restriction. For that statement to be true, however, we cannot just drop this restriction. For example, if $L=\{F\}$ with $F$ a unary function symbol, then $\operatorname{Th}_{L}(\emptyset)$ is decidable. Readers unhappy with the restriction that $L$ is finite can replace it by the weaker one that $L$ is countable and equipped with a computable numbering as defined at the end of Section 5.5. Such a numbering comes with corresponding notions of "computable" (for a set of $L$-sentences) and "decidable" (for an $L$-theory), and the above material in this section goes through with the same proofs. Discussion. We have seen that $\underline{\mathrm{N}}$ is quite weak. A very strong set of axioms in the language $L(\underline{\mathrm{N}})$ is PA (1st order Peano Arithmetic). 
Its axioms are those of $\underline{\mathrm{N}}$ together with all induction axioms, that is, all sentences of the form $$ \forall x[(\varphi(x, 0) \wedge \forall y(\varphi(x, y) \rightarrow \varphi(x, S y))) \rightarrow \forall y \varphi(x, y)] $$ where $\varphi(x, y)$ is an $L(\underline{\mathrm{N}})$-formula, $x=\left(x_{1}, \ldots, x_{n}\right)$, and $\forall x$ stands for $\forall x_{1} \ldots \forall x_{n}$. Note that PA is consistent, since it has $\mathfrak{N}=(\mathbf{N} ;<, 0, S,+, \cdot)$ as a model. Also $\ulcorner\mathrm{PA}\urcorner$ is computable (exercise). Thus by the theorems above, $\mathrm{Th}(\mathrm{PA})$ is undecidable and incomplete. To appreciate the significance of this result, one needs a little background knowledge, including some history. Over a century of experience has shown that number theoretic assertions can be expressed by sentences of $L(\underline{\mathrm{N}})$, admittedly in an often contorted way. (That is, we know how to construct for any number theoretic statement a sentence $\sigma$ of $L(\underline{\mathrm{N}})$ such that the statement is true if and only if $\mathfrak{N} \models \sigma$. In most cases we just indicate how to construct such a sentence, since an actual sentence would be too unwieldy without abbreviations.) What is more important, we know from experience that any established fact of classical number theory - including results obtained by sophisticated analytic and algebraic methods - can be proved from PA, in the sense that PA $\vdash \sigma$ for the sentence $\sigma$ expressing that fact. Thus before Gödel's Incompleteness Theorem it seemed natural to conjecture that PA is complete. (Did people realize at the time that completeness of PA, or similar statements, imply the decidability of number theory? This is not clear to me, but decidability of number theory would surely have been considered as astonishing. 
Part of the issue here is that notions of completeness and decidability were at the time, before Gödel, just in the process of being defined.) Of course, the situation cannot be remedied by adding new axioms to PA, at least if we insist that the axioms are true in $\mathfrak{N}$ and that we have effective means to tell which sentences are axioms. In this sense, the Incompleteness Theorem is pervasive. ### A more explicit incompleteness theorem In Section 5.6 we obtained Gödel's Incompleteness Theorem as an immediate corollary of Church's theorem. In this section, we prove the incompleteness theorem in the more explicit form stated in the introduction to this chapter. In this section $L \supseteq L(\underline{\mathrm{N}})$ is a finite language, and $\Sigma$ is a set of $L$-sentences. We also fix two distinct variables $x$ and $y$. We shall indicate how to construct, for any computable consistent $\Sigma \supseteq \underline{\mathrm{N}}$, a formula $\varphi(x)$ of $L(\underline{\mathrm{N}})$ with the following properties: (i) $\underline{\mathrm{N}} \vdash \varphi\left(S^{m} 0\right)$ for each $m$; (ii) $\Sigma \nvdash \forall x \varphi(x)$. Note that then the sentence $\forall x \varphi(x)$ is true in $\mathfrak{N}$ but not provable from $\Sigma$. Here is a sketch of how to make such a sentence. Assume for simplicity that $L=L(\underline{\mathrm{N}})$ and $\mathfrak{N} \models \Sigma$. The idea is to construct sentences $\sigma$ and $\sigma^{\prime}$ such that (1) $\mathfrak{N} \models \sigma \leftrightarrow \sigma^{\prime}$; and (2) $\mathfrak{N} \models \sigma^{\prime} \Longleftrightarrow \Sigma \nvdash \sigma$. From (1) and (2) we get $\mathfrak{N} \models \sigma \Longleftrightarrow \Sigma \nvdash \sigma$. Assuming that $\mathfrak{N} \models \neg \sigma$ produces a contradiction. Hence $\sigma$ is true in $\mathfrak{N}$, and thus(!) not provable from $\Sigma$. How to implement this strange idea? 
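Before the rigorous implementation, the self-application trick can be seen in miniature with ordinary strings playing the role of Gödel numbers and a format placeholder playing the role of the free variable; all names below are illustrative, not the formal construction:

```python
def Sub(a, b):
    """Substitute the quoted string b for the placeholder in template a,
    mirroring Sub(a, 'x', Num(b)) on Godel numbers."""
    return a.format(repr(b))

def rho(y):
    """An arbitrary 'property of sentences'; here it just tags its argument."""
    return 'unprovable: ' + y

# theta(x) says: rho holds of the result of substituting x into itself ...
theta = 'rho(Sub({0}, {0}))'
# ... and sigma is theta applied to (a quotation of) theta itself:
sigma = Sub(theta, theta)

# sigma is a fixed point: evaluating it applies rho to sigma itself,
# just as the Fixpoint Lemma produces sigma with sigma <-> rho(⌜sigma⌝).
assert eval(sigma) == rho(sigma)
```

The string `sigma` literally contains a quotation of `theta` substituted into `theta`, so evaluating it reproduces `sigma` inside the call to `rho`; this is the same mechanism by which self-reproducing programs (quines) work.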
To take care of (2), one might guess that $\sigma^{\prime}=\forall x \neg \operatorname{pr}(x, S^{\ulcorner\sigma\urcorner} 0)$ where $\operatorname{pr}(x, y)$ is a formula representing in $\underline{\mathrm{N}}$ the binary relation $\operatorname{Pr} \subseteq \mathbf{N}^{2}$ defined by $$ \begin{aligned} \operatorname{Pr}(m, n) \Longleftrightarrow m & \text { is the Gödel number of a proof from } \Sigma \\ & \text { of a sentence with Gödel number } n . \end{aligned} $$ But how do we arrange (1)? Since $\sigma^{\prime}:=\forall x \neg \operatorname{pr}(x, S^{\ulcorner\sigma\urcorner} 0)$ depends on $\sigma$, the solution is to apply the fixed-point lemma below to $\rho(y):=\forall x \neg \operatorname{pr}(x, y)$. This finishes our sketch. What follows is a rigorous implementation. Lemma 5.7.1 (Fixpoint Lemma). Suppose $\Sigma \supseteq \underline{\mathrm{N}}$. Then for every L-formula $\rho(y)$ there is an L-sentence $\sigma$ such that $\Sigma \vdash \sigma \leftrightarrow \rho\left(S^{n} 0\right)$ where $n=\ulcorner\sigma\urcorner$. Proof. The function $(a, b) \mapsto \operatorname{Sub}(a,\ulcorner x\urcorner, \operatorname{Num}(b)): \mathbf{N}^{2} \rightarrow \mathbf{N}$ is computable by Lemma 5.5.2. Hence by the representability theorem it is $\underline{\mathrm{N}}$-representable. Let $\operatorname{sub}\left(x_{1}, x_{2}, y\right)$ be an $L(\underline{\mathrm{N}})$-formula representing it in $\underline{\mathrm{N}}$. We can assume that the variable $x$ does not occur in $\operatorname{sub}\left(x_{1}, x_{2}, y\right)$. Then for all $a, b$ in $\mathbf{N}$, $$ \underline{\mathrm{N}} \vdash \operatorname{sub}\left(S^{a} 0, S^{b} 0, y\right) \leftrightarrow y=S^{c} 0, \quad \text { where } c=\operatorname{Sub}(a,\ulcorner x\urcorner, \operatorname{Num}(b)) \tag{1} $$ Now let $\rho(y)$ be an $L$-formula. Define $\theta(x):=\exists y(\operatorname{sub}(x, x, y) \wedge \rho(y))$ and let $m=\ulcorner\theta(x)\urcorner$. 
Let $\sigma:=\theta\left(S^{m} 0\right)$, and put $n=\ulcorner\sigma\urcorner$. We claim that $$ \Sigma \vdash \sigma \leftrightarrow \rho\left(S^{n} 0\right) $$ Indeed, $$ n=\ulcorner\sigma\urcorner=\left\ulcorner\theta\left(S^{m} 0\right)\right\urcorner=\operatorname{Sub}\left(\ulcorner\theta(x)\urcorner,\ulcorner x\urcorner,\left\ulcorner S^{m} 0\right\urcorner\right)=\operatorname{Sub}(m,\ulcorner x\urcorner, \operatorname{Num}(m)) . $$ So by (1), $$ \underline{\mathrm{N}} \vdash \operatorname{sub}\left(S^{m} 0, S^{m} 0, y\right) \leftrightarrow y=S^{n} 0 \tag{2} $$ We have $$ \sigma=\theta\left(S^{m} 0\right)=\exists y\left(\operatorname{sub}\left(S^{m} 0, S^{m} 0, y\right) \wedge \rho(y)\right), $$ so by (2) we get $\Sigma \vdash \sigma \leftrightarrow \exists y\left(y=S^{n} 0 \wedge \rho(y)\right)$. Hence, $\Sigma \vdash \sigma \leftrightarrow \rho\left(S^{n} 0\right)$. Theorem 5.7.2. Suppose $\Sigma \supseteq \underline{\mathrm{N}}$ is consistent and computable. Then there exists an $L(\underline{\mathrm{N}})$-formula $\varphi(x)$ such that $\underline{\mathrm{N}} \vdash \varphi\left(S^{m} 0\right)$ for each $m$, but $\Sigma \nvdash \forall x \varphi(x)$. Proof. Consider the relation $\operatorname{Pr}_{\Sigma} \subseteq \mathbf{N}^{2}$ defined by $$ \begin{aligned} \operatorname{Pr}_{\Sigma}(m, n) \Longleftrightarrow m & \text { is the Gödel number of a proof from } \Sigma \\ & \text { of an } L \text {-sentence with Gödel number } n . \end{aligned} $$ Since $\Sigma$ is computable, $\operatorname{Pr}_{\Sigma}$ is computable. Hence $\operatorname{Pr}_{\Sigma}$ is representable in $\underline{\mathrm{N}}$. Let $\operatorname{pr}_{\Sigma}(x, y)$ be an $L(\underline{\mathrm{N}})$-formula representing $\operatorname{Pr}_{\Sigma}$ in $\underline{\mathrm{N}}$, and hence in $\Sigma$. 
Because $\Sigma$ is consistent we have for all $m, n$: $$ \Sigma \vdash \operatorname{pr}_{\Sigma}\left(S^{m} 0, S^{n} 0\right) \Longleftrightarrow \operatorname{Pr}_{\Sigma}(m, n) \tag{1} $$ $$ \Sigma \vdash \neg \operatorname{pr}_{\Sigma}\left(S^{m} 0, S^{n} 0\right) \Longleftrightarrow \neg \operatorname{Pr}_{\Sigma}(m, n) \tag{2} $$ Let $\rho(y)$ be the $L(\underline{\mathrm{N}})$-formula $\forall x \neg \operatorname{pr}_{\Sigma}(x, y)$. Lemma 5.7.1 (with $L=L(\underline{\mathrm{N}})$ and $\Sigma=\underline{\mathrm{N}}$) provides an $L(\underline{\mathrm{N}})$-sentence $\sigma$ such that $\underline{\mathrm{N}} \vdash \sigma \leftrightarrow \rho\left(S^{\ulcorner\sigma\urcorner} 0\right)$. It follows that $\Sigma \vdash \sigma \leftrightarrow \rho\left(S^{\ulcorner\sigma\urcorner} 0\right)$, that is $$ \Sigma \vdash \sigma \leftrightarrow \forall x \neg \operatorname{pr}_{\Sigma}\left(x, S^{\ulcorner\sigma\urcorner} 0\right) \tag{3} $$ Claim: $\Sigma \nvdash \sigma$. Assume towards a contradiction that $\Sigma \vdash \sigma$; let $m$ be the Gödel number of a proof of $\sigma$ from $\Sigma$, so $\operatorname{Pr}_{\Sigma}(m,\ulcorner\sigma\urcorner)$. Because of (3) we also have $\Sigma \vdash \forall x \neg \operatorname{pr}_{\Sigma}\left(x, S^{\ulcorner\sigma\urcorner} 0\right)$, so $\Sigma \vdash \neg \operatorname{pr}_{\Sigma}\left(S^{m} 0, S^{\ulcorner\sigma\urcorner} 0\right)$, which by (2) yields $\neg \operatorname{Pr}_{\Sigma}(m,\ulcorner\sigma\urcorner)$, a contradiction. This establishes the claim. Now put $\varphi(x):=\neg \operatorname{pr}_{\Sigma}\left(x, S^{\ulcorner\sigma\urcorner} 0\right)$. We now show: (i) $\underline{\mathrm{N}} \vdash \varphi\left(S^{m} 0\right)$ for each $m$. Because $\Sigma \nvdash \sigma$, no $m$ is the Gödel number of a proof of $\sigma$ from $\Sigma$. 
Hence $\neg \operatorname{Pr}_{\Sigma}(m,\ulcorner\sigma\urcorner)$ for each $m$, which by the defining property of $\operatorname{pr}_{\Sigma}$ yields $\underline{\mathrm{N}} \vdash \neg \operatorname{pr}_{\Sigma}\left(S^{m} 0, S^{\ulcorner\sigma\urcorner} 0\right)$ for each $m$, that is, $\underline{\mathrm{N}} \vdash \varphi\left(S^{m} 0\right)$ for each $m$. (ii) $\Sigma \nvdash \forall x \varphi(x)$. This is because of the Claim and $\Sigma \vdash \sigma \leftrightarrow \forall x \varphi(x)$, by (3). Corollary 5.7.3. Suppose that $\Sigma$ is computable and true in an $L$-expansion $\mathfrak{N}^{*}$ of $\mathfrak{N}$. Then there exists an $L(\underline{\mathrm{N}})$-formula $\varphi(x)$ such that $\underline{\mathrm{N}} \vdash \varphi\left(S^{n} 0\right)$ for each $n$, but $\Sigma \cup \underline{\mathrm{N}} \nvdash \forall x \varphi(x)$. (Note that then $\forall x \varphi(x)$ is true in $\mathfrak{N}^{*}$ but not provable from $\Sigma$.) To obtain this corollary, apply the theorem above to $\Sigma \cup \underline{\mathrm{N}}$ in place of $\Sigma$. This entire section, including the exercises below, goes through if we replace the standing assumption that $L$ is finite by the weaker one that $L$ is countable and equipped with a computable numbering as defined at the end of Section 5.5. Exercises. The results below are due to Tarski and known as the undefinability of truth. The first exercise strengthens the special case of Church's theorem which says that the set of Gödel numbers of $L$-sentences true in a given $L$-expansion $\mathfrak{N}^{*}$ of $\mathfrak{N}$ is not computable. Both (1) and (2) are easy applications of the fixpoint lemma. (1) Let $\mathfrak{N}^{*}$ be an $L$-expansion of $\mathfrak{N}$. Then the set of Gödel numbers of $L$-sentences true in $\mathfrak{N}^{*}$ is not definable in $\mathfrak{N}^{*}$. (2) Suppose $\Sigma \supseteq \underline{\mathrm{N}}$ is consistent. 
Then the set $\ulcorner\mathrm{Th}(\Sigma)\urcorner$ is not $\Sigma$-representable, and there is no truth definition for $\Sigma$. Here a truth definition for $\Sigma$ is an $L$-formula $\operatorname{true}(y)$ such that for all $L$-sentences $\sigma$, $$ \Sigma \vdash \sigma \longleftrightarrow \operatorname{true}\left(S^{n} 0\right) \text {, where } n=\ulcorner\sigma\urcorner . $$ ### Undecidable Theories Church's theorem says that any consistent theory containing a certain basic amount of integer arithmetic is undecidable. How about theories like $\mathrm{Th}(\mathrm{Fl})$ (the theory of fields), and $\mathrm{Th}(\mathrm{Gr})$ (the theory of groups)? An easy way to prove the undecidability of such theories is due to Tarski: he noticed that if $\mathfrak{N}$ is definable in some model of a theory $T$, then $T$ is undecidable. The aim of this section is to establish this result and indicate some applications. In order not to distract from this theme by tedious details, we shall occasionally replace a proof by an appeal to the Church-Turing Thesis. (A conscientious reader will replace these appeals by proofs until reaching a level of skill that makes constructing such proofs utterly routine.) In this section, $L$ and $L^{\prime}$ are finite languages, $\Sigma$ is a set of $L$-sentences, and $\Sigma^{\prime}$ is a set of $L^{\prime}$-sentences. Lemma 5.8.1. Let $L \subseteq L^{\prime}$ and $\Sigma \subseteq \Sigma^{\prime}$. (1) Suppose $\Sigma^{\prime}$ is conservative over $\Sigma$. Then $$ \mathrm{Th}_{L}(\Sigma) \text { is undecidable } \Longrightarrow \mathrm{Th}_{L^{\prime}}\left(\Sigma^{\prime}\right) \text { is undecidable. } $$ (2) Suppose $L=L^{\prime}$ and $\Sigma^{\prime} \backslash \Sigma$ is finite. Then $$ \operatorname{Th}\left(\Sigma^{\prime}\right) \text { is undecidable } \Longrightarrow \operatorname{Th}(\Sigma) \text { is undecidable. } $$ (3) Suppose all symbols of $L^{\prime} \backslash L$ are constant symbols. 
Then $$ \mathrm{Th}_{L}(\Sigma) \text { is undecidable } \Longleftrightarrow \mathrm{Th}_{L^{\prime}}(\Sigma) \text { is undecidable. } $$ (4) Suppose $\Sigma^{\prime}$ extends $\Sigma$ by a definition. Then $$ \mathrm{Th}_{L}(\Sigma) \text { is undecidable } \Longleftrightarrow \mathrm{Th}_{L^{\prime}}\left(\Sigma^{\prime}\right) \text { is undecidable. } $$ Proof. (1) In this case we have for all $a \in \mathbf{N}$, $$ a \in\left\ulcorner\mathrm{Th}_{L}(\Sigma)\right\urcorner \Longleftrightarrow a \in \operatorname{Sent}_{L} \text { and } a \in\left\ulcorner\mathrm{Th}_{L^{\prime}}\left(\Sigma^{\prime}\right)\right\urcorner \text {. } $$ It follows that if $\operatorname{Th}_{L^{\prime}}\left(\Sigma^{\prime}\right)$ is decidable, so is $\operatorname{Th}_{L}(\Sigma)$. (2) Write $\Sigma^{\prime}=\left\{\sigma_{1}, \ldots, \sigma_{N}\right\} \cup \Sigma$, and put $\sigma^{\prime}:=\sigma_{1} \wedge \cdots \wedge \sigma_{N}$. Then for each $L$-sentence $\sigma$ we have $\Sigma^{\prime} \vdash \sigma \Longleftrightarrow \Sigma \vdash \sigma^{\prime} \rightarrow \sigma$, so for all $a \in \mathbf{N}$, $$ a \in\left\ulcorner\operatorname{Th}\left(\Sigma^{\prime}\right)\right\urcorner \Longleftrightarrow a \in \text { Sent and }\left\langle\operatorname{SN}(\vee),\left\langle\operatorname{SN}(\neg),\left\ulcorner\sigma^{\prime}\right\urcorner\right\rangle, a\right\rangle \in\ulcorner\operatorname{Th}(\Sigma)\urcorner . $$ It follows that if $\operatorname{Th}(\Sigma)$ is decidable then so is $\operatorname{Th}\left(\Sigma^{\prime}\right)$. (3) Let $c_{0}, \ldots, c_{n}$ be the distinct constant symbols of $L^{\prime} \backslash L$. 
Given any $L^{\prime}$-sentence $\sigma$ we define the $L$-sentence $\sigma^{\prime}$ as follows: take $k \in \mathbf{N}$ minimal such that $\sigma$ contains no variable $\mathrm{v}_{m}$ with $m \geq k$, replace each occurrence of $c_{i}$ in $\sigma$ by $\mathrm{v}_{k+i}$ for $i=0, \ldots, n$, and let $\varphi\left(\mathrm{v}_{k}, \ldots, \mathrm{v}_{k+n}\right)$ be the resulting $L$-formula (so $\sigma=\varphi\left(c_{0}, \ldots, c_{n}\right)$); then $\sigma^{\prime}:=\forall \mathrm{v}_{k} \ldots \forall \mathrm{v}_{k+n} \varphi\left(\mathrm{v}_{k}, \ldots, \mathrm{v}_{k+n}\right)$. An easy argument using the completeness theorem shows that $$ \Sigma \vdash_{L^{\prime}} \sigma \quad \Longleftrightarrow \quad \Sigma \vdash_{L} \sigma^{\prime} . $$ By the Church-Turing Thesis there is a computable function $a \mapsto a^{\prime}: \mathbf{N} \rightarrow \mathbf{N}$ such that $\ulcorner\sigma\urcorner^{\prime}=\left\ulcorner\sigma^{\prime}\right\urcorner$ for all $L^{\prime}$-sentences $\sigma$; we leave it to the reader to replace this appeal to the Church-Turing Thesis by a proof. Then, for all $a \in \mathbf{N}$: $$ a \in\left\ulcorner\operatorname{Th}_{L^{\prime}}(\Sigma)\right\urcorner \Longleftrightarrow a \in \operatorname{Sent}_{L^{\prime}} \text { and } a^{\prime} \in\left\ulcorner\operatorname{Th}_{L}(\Sigma)\right\urcorner . $$ This yields the $\Leftarrow$ direction of (3); the converse holds by (1). (4) The $\Rightarrow$ direction holds by (1). For the $\Leftarrow$ we use an algorithm (see Section 4.5) that computes for each $L^{\prime}$-sentence $\sigma$ an $L$-sentence $\sigma^{*}$ such that $\Sigma^{\prime} \vdash \sigma \leftrightarrow \sigma^{*}$. By the Church-Turing Thesis there is a computable function $a \mapsto a^{*}: \mathbf{N} \rightarrow \mathbf{N}$ such that $\ulcorner\sigma\urcorner^{*}=\left\ulcorner\sigma^{*}\right\urcorner$ for all $L^{\prime}$-sentences $\sigma$. 
Hence, for all $a \in \mathbf{N}$ $$ a \in\left\ulcorner\mathrm{Th}_{L^{\prime}}\left(\Sigma^{\prime}\right)\right\urcorner \Longleftrightarrow a \in \operatorname{Sent}_{L^{\prime}} \text { and } a^{*} \in\left\ulcorner\operatorname{Th}_{L}(\Sigma)\right\urcorner . $$ This yields the $\Leftarrow$ direction of (4). Remark. We cannot drop the assumption $L=L^{\prime}$ in (2): take $L=\emptyset, \Sigma=\emptyset$, $L^{\prime}=L(\underline{\mathrm{N}})$ and $\Sigma^{\prime}=\emptyset$. Then $\mathrm{Th}_{L^{\prime}}\left(\Sigma^{\prime}\right)$ is undecidable by Corollary 5.6.5, but $\mathrm{Th}_{L}(\Sigma)$ is decidable (exercise). Definition. An $L$-structure $\mathcal{A}$ is said to be strongly undecidable if for every set $\Sigma$ of $L$-sentences such that $\mathcal{A} \models \Sigma$, $\operatorname{Th}(\Sigma)$ is undecidable. So $\mathcal{A}$ is strongly undecidable iff every $L$-theory of which $\mathcal{A}$ is a model is undecidable. Example. $\mathfrak{N}=(\mathbf{N} ;<, 0, S,+, \cdot)$ is strongly undecidable. To see this, let $\Sigma$ be a set of $L(\underline{\mathrm{N}})$-sentences such that $\mathfrak{N} \models \Sigma$. We have to show that $\operatorname{Th}(\Sigma)$ is undecidable. Now $\mathfrak{N} \models \Sigma \cup \underline{\mathrm{N}}$. By Church's Theorem $\operatorname{Th}(\Sigma \cup \underline{\mathrm{N}})$ is undecidable, hence $\operatorname{Th}(\Sigma)$ is undecidable by part (2) of Lemma 5.8.1. The following result is an easy application of part (3) of the previous lemma. Lemma 5.8.2. Let $c_{0}, \ldots, c_{n}$ be distinct constant symbols not in $L$, and let $\left(\mathcal{A}, a_{0}, \ldots, a_{n}\right)$ be an $L\left(c_{0}, \ldots, c_{n}\right)$-expansion of the $L$-structure $\mathcal{A}$. Then $$ \left(\mathcal{A}, a_{0}, \ldots, a_{n}\right) \text { is strongly undecidable } \Longrightarrow \mathcal{A} \text { is strongly undecidable. } $$ Theorem 5.8.3 (Tarski). 
Suppose the $L$-structure $\mathcal{A}$ is definable in the $L^{*}$-structure $\mathcal{B}$ and $\mathcal{A}$ is strongly undecidable. Then $\mathcal{B}$ is strongly undecidable.

Proof. (Sketch) By the previous lemma (with $L^{*}$ and $\mathcal{B}$ instead of $L$ and $\mathcal{A}$), we can reduce to the case that we have a 0-definition $\delta: A \rightarrow B^{k}$ of $\mathcal{A}$ in $\mathcal{B}$. As described at the end of Section 4.5 this allows us to introduce the finite languages $L_{k}$ and $L_{k}^{*}=L_{k} \cup L^{*}$, the $L_{k}^{*}$-expansion $\mathcal{B}_{k}$ of $\mathcal{B}$, a finite set $\operatorname{Def}(\delta)$ of $L_{k}^{*}$-sentences, and a finite set $\Delta(L, k)$ of $L_{k}$-sentences. Moreover, $$ \mathcal{B}_{k} \models \operatorname{Def}(\delta) \cup \Delta(L, k). $$ Section 4.5 contains implicitly an algorithm that computes for any $L$-sentence $\sigma$ an $L_{k}$-sentence $\sigma_{k}$ and an $L^{*}$-sentence $\delta \sigma$ such that $$ \mathcal{A} \models \sigma \Longleftrightarrow \mathcal{B}_{k} \models \sigma_{k}, \quad \operatorname{Def}(\delta) \vdash \sigma_{k} \longleftrightarrow \delta \sigma. $$ Let $\Sigma^{*}$ be a set of $L^{*}$-sentences such that $\mathcal{B} \models \Sigma^{*}$; we need to show that $\mathrm{Th}_{L^{*}}\left(\Sigma^{*}\right)$ is undecidable. Define $\Sigma$ as the set of $L$-sentences $\sigma$ such that $$ \Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k) \vdash \sigma_{k}. $$ By Lemma 4.5.7 we have for all $L$-sentences $\sigma$, $$ \Sigma \vdash \sigma \Longleftrightarrow \Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k) \vdash \sigma_{k}. $$ Suppose towards a contradiction that $\operatorname{Th}_{L^{*}}\left(\Sigma^{*}\right)$ is decidable.
Then $$ \operatorname{Th}\left(\Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k)\right) $$ is decidable, by part (2) of Lemma 5.8.1, so we have an algorithm for deciding whether any given $L_{k}^{*}$-sentence is provable from $\Sigma^{*} \cup \operatorname{Def}(\delta) \cup \Delta(L, k)$, and by the above equivalence this provides an algorithm for deciding whether any given $L$-sentence is provable from $\Sigma$. But $\mathcal{A} \models \Sigma$, so $\operatorname{Th}_{L}(\Sigma)$ is undecidable, and we have a contradiction.

Corollary 5.8.4. $\operatorname{Th}(\mathrm{Ring})$ is undecidable; in other words, the theory of rings is undecidable.

Proof. It suffices to show that the ring $(\mathbf{Z} ; 0,1,+,-, \cdot)$ of integers is strongly undecidable. Using Lagrange's theorem that $$ \mathbf{N}=\left\{a^{2}+b^{2}+c^{2}+d^{2}: a, b, c, d \in \mathbf{Z}\right\}, $$ we see that the inclusion map $\mathbf{N} \rightarrow \mathbf{Z}$ defines $\mathfrak{N}$ in the ring of integers, so by Tarski's Theorem the ring of integers is strongly undecidable.

For the same reason, the theory of commutative rings, the theory of integral domains, and more generally the theory of any class of rings that has the ring of integers among its members, is undecidable.

Fact. The set $\mathbf{Z} \subseteq \mathbf{Q}$ is 0-definable in the field $(\mathbf{Q} ; 0,1,+,-, \cdot)$ of rational numbers.

Thus the ring of integers is definable in the field of rational numbers. We shall take this here on faith. The only known proofs use non-trivial results about quadratic forms. The first of these proofs is due to Julia Robinson (late 1940s). The second one is due to Jochen Koenigsmann, Annals of Mathematics 183 (2016), 73-93, and yields definability of $\mathbf{Z}$ by a universal formula.

Corollary 5.8.5. The theory $\mathrm{Th}(\mathrm{Fl})$ of fields is undecidable. The theory of any class of fields that has the field of rationals among its members is undecidable.

Exercises.
The point of exercises (2) and (3) is to prove that the theory of groups is undecidable. In fact, the theory of any class of groups that has the group $G$ of (3) among its members is undecidable. On the other hand, $\mathrm{Th}(\mathrm{Ab})$, the theory of abelian groups, is known to be decidable (Szmielew). In (2) and (3) we let $a, b, c$ denote integers; also, $a$ divides $b$ (notation: $a \mid b$) if $a x=b$ for some integer $x$, and $c$ is a least common multiple of $a$ and $b$ if $a \mid c$, $b \mid c$, and $c \mid x$ for every integer $x$ such that $a \mid x$ and $b \mid x$. Recall that if $a$ and $b$ are not both zero, then they have a unique positive least common multiple, and that if $a$ and $b$ are coprime (that is, there is no integer $x>1$ with $x \mid a$ and $x \mid b$), then they have $ab$ as a least common multiple.

(1) Argue informally, using the Church-Turing Thesis, that $\mathrm{Th}(\mathrm{ACF})$ is decidable. You can use the fact that ACF has QE.

(2) The structure $(\mathbf{Z} ; 0,1,+, \mid)$ is strongly undecidable, where $\mid$ is the binary relation of divisibility on $\mathbf{Z}$. Hint: Show that if $b+a$ is a least common multiple of $a$ and $a+1$, and $b-a$ is a least common multiple of $a$ and $a-1$, then $b=a^{2}$. Use this to define the squaring function in $(\mathbf{Z} ; 0,1,+, \mid)$, and then show that multiplication is 0-definable in $(\mathbf{Z} ; 0,1,+, \mid)$.

(3) Consider the group $G$ of bijective maps $\mathbf{Z} \rightarrow \mathbf{Z}$, with composition as the group multiplication. Then $G$ (as a model of $\mathrm{Gr}$) is strongly undecidable. Hint: let $s$ be the element of $G$ given by $s(x)=x+1$. Check that if $g \in G$ commutes with $s$, then $g=s^{a}$ for some $a$. Next show that $$ a \mid b \Longleftrightarrow s^{b} \text { commutes with each } g \in G \text { that commutes with } s^{a}. $$ Use these facts to specify a definition of $(\mathbf{Z} ; 0,1,+, \mid)$ in the group $G$.
(4) Let $L=\{F\}$ have just a binary function symbol. Then predicate logic in $L$, that is, $\operatorname{Th}_{L}(\emptyset)$, is undecidable.

As to exercise (4), predicate logic in the language whose only symbol is a binary relation symbol is also known to be undecidable. On the other hand, predicate logic in the language whose only symbol is a unary function symbol is decidable.

## To do?

- improve titlepage
- improve or delete index
- more exercises (from homework, exams)
- footnotes pointing to alternative terminology, etc.
- brief discussion of classes at end of "Sets and Maps"?
- $\square$ at the end of results without proof?
- brief discussion on $\mathrm{P}=\mathrm{NP}$ in connection with propositional logic
- section(s) on boolean algebra, including Stone representation, Lindenbaum–Tarski algebras, etc.
- section on equational logic? (boolean algebras, groups, as examples)
- solution to a problem by Erdős via the compactness theorem, and other simple applications of compactness
- include "equality theorem"
- translation of one language into another (needed in connection with the Tarski theorem in the last section)
- more details on back-and-forth in connection with unnested formulas
- extra elementary model theory (universal classes, model-theoretic criteria for QE, etc., application to ACF, maybe an extra section on RCF, Ax's theorem)
- on computability: a few extra results on c.e. sets, and the exponential diophantine result
- basic framework for many-sorted logic
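Two arithmetic facts used in this section can be checked mechanically: Lagrange's four-square theorem, invoked in the proof of Corollary 5.8.4, and the squaring identity behind the hint to exercise (2) (for $a \geq 2$, $a$ is coprime to both $a+1$ and $a-1$, so the positive least common multiples are $a(a+1)$ and $a(a-1)$, forcing $b=a^{2}$). The following sketch (plain Python, using the standard-library `math.lcm` of Python 3.9+; not part of the text) verifies both for small values:

```python
from itertools import product
from math import isqrt, lcm

# Lagrange's theorem: every natural number is a sum of four squares.
# Brute-force search for a witness (a, b, c, d) with 0 <= a,b,c,d <= sqrt(n).
def four_squares(n):
    bound = isqrt(n)
    for a, b, c, d in product(range(bound + 1), repeat=4):
        if a*a + b*b + c*c + d*d == n:
            return (a, b, c, d)
    return None

assert all(four_squares(n) is not None for n in range(100))

# Exercise (2) hint: if b + a is a least common multiple of a and a + 1,
# and b - a is a least common multiple of a and a - 1, then b = a^2.
# Check that b = a^2 indeed satisfies both conditions for a >= 2.
for a in range(2, 1000):
    b = a * a
    assert b + a == lcm(a, a + 1)   # lcm(a, a+1) = a(a+1), as a, a+1 coprime
    assert b - a == lcm(a, a - 1)   # lcm(a, a-1) = a(a-1), as a, a-1 coprime
print("checks passed")
```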
10-demicube

In geometry, a 10-demicube or demidekeract is a uniform 10-polytope, constructed from the 10-cube with alternated vertices removed. It is part of a dimensionally infinite family of uniform polytopes called demihypercubes.

Demidekeract (10-demicube), Petrie polygon projection
Type: uniform 10-polytope
Family: demihypercube
Coxeter symbol: 1_{71}
Schläfli symbol: {3^{1,7,1}}, h{4,3^8}, s{2^{1,1,1,1,1,1,1,1,1}}
9-faces: 532 (20 {3^{1,6,1}} + 512 {3^8})
8-faces: 5300 (180 {3^{1,5,1}} + 5120 {3^7})
7-faces: 24000 (960 {3^{1,4,1}} + 23040 {3^6})
6-faces: 64800 (3360 {3^{1,3,1}} + 61440 {3^5})
5-faces: 115584 (8064 {3^{1,2,1}} + 107520 {3^4})
4-faces: 142464 (13440 {3^{1,1,1}} + 129024 {3^3})
Cells: 122880 (15360 {3^{1,0,1}} + 107520 {3,3})
Faces: 61440 {3}
Edges: 11520
Vertices: 512
Vertex figure: rectified 9-simplex
Symmetry groups: D10, [3^{7,1,1}] = [1^+,4,3^8]; [2^9]^+
Dual: ?
Properties: convex

E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as HM10 for a ten-dimensional half measure polytope. Coxeter named this polytope 1_{71} from its Coxeter diagram, with a ring on one of the length-1 branches, and Schläfli symbol $\left\{3{\begin{array}{l}3,3,3,3,3,3,3\\3\end{array}}\right\}$ or {3,3^{7,1}}.

Cartesian coordinates

Cartesian coordinates for the vertices of a demidekeract centered at the origin are alternate halves of the dekeract: (±1,±1,±1,±1,±1,±1,±1,±1,±1,±1) with an odd number of plus signs.

Images

Orthographic projections in the B10 and D10 Coxeter planes (vertices are colored by multiplicity: red, orange, yellow, green = 1, 2, 4, 8).

References

• H.S.M. Coxeter, Regular Polytopes, 3rd edition, Dover, New York, 1973, ISBN 0-486-61480-8, p. 296, Table I (iii): Regular Polytopes, three regular polytopes in n dimensions (n ≥ 5)
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C.
Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, Math. Zeit. 46 (1940), 380-407
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, Math. Zeit. 188 (1985), 559-591
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, Math. Zeit. 200 (1988), 3-45
• John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008, ISBN 978-1-56881-220-5 (Chapter 26, p. 409: Hemicubes: 1_{n1})
• Norman Johnson, Uniform Polytopes, manuscript (1991)
• N.W. Johnson, The Theory of Uniform Polytopes and Honeycombs, Ph.D. thesis (1966)
• Klitzing, Richard. "10D uniform polytopes (polyxenna) x3o3o *b3o3o3o3o3o3o3o - hede".

External links

• Olshevsky, George. "Demienneract". Glossary for Hyperspace. Archived from the original on 4 February 2007.
• Multi-dimensional Glossary
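The coordinate description above is easy to check computationally. The following sketch (plain Python, not from the article) enumerates the vertices with an odd number of plus signs and confirms the vertex count of 512; it also counts pairs of vertices differing in exactly two coordinates — the adjacency in a demihypercube's 1-skeleton (the halved cube graph) — and recovers the edge count of 11520:

```python
from itertools import product

# Vertices of the 10-demicube: all (±1, ..., ±1) in R^10 with an odd
# number of plus signs -- one of the two alternate halves of the 10-cube.
verts = [v for v in product((-1, 1), repeat=10)
         if sum(1 for x in v if x == 1) % 2 == 1]
assert len(verts) == 512   # matches the vertex count in the element table

# Two vertices are joined by an edge exactly when they differ in two
# coordinates (the shortest sign change preserving the parity of plus
# signs).  Count unordered pairs at Hamming distance 2.
edges = sum(1 for i in range(len(verts)) for j in range(i + 1, len(verts))
            if sum(a != b for a, b in zip(verts[i], verts[j])) == 2)
assert edges == 11520      # matches the edge count in the element table
print(len(verts), edges)   # prints: 512 11520
```

Each vertex has C(10,2) = 45 such neighbours, and 512 · 45 / 2 = 11520, agreeing with the brute-force count.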
Detection of aberrant DNA methylation patterns in sperm of male recurrent spontaneous abortion patients Rong-Hua Ma, Zhen-Gang Zhang, Yong-Tian Zhang, Sheng-Yan Jian, Bin-Ye Li Journal: Zygote, FirstView Published online by Cambridge University Press: 09 January 2023, pp.
1-10 Aberrant DNA methylation patterns in sperm are a cause of embryonic failure and infertility, and could be a critical factor contributing to male recurrent spontaneous abortion (RSA). The purpose of this study was to reveal the potential effects of sperm DNA methylation levels in patients with male RSA. We compared sperm samples collected from fertile men and oligoasthenospermia patients. Differentially methylated sequences were identified by reduced representation bisulfite sequencing (RRBS) methods. The DNA methylation levels of the two groups were compared and qRT-PCR was used to validate the expression of genes showing differential methylation. The results indicated that no difference in base distribution was observed between the normal group and the patient group. However, the chromosome methylation in these two groups was markedly different. One site was located on chromosome 8 and measured 150 bp, while the other sites were on chromosomes 9, 10, and X and measured 135 bp, 68 bp, and 136 bp, respectively. In particular, two genes were found to be hypermethylated in these patients, one gene was DYDC2 (placed in the differential methylation region of chromosome 10), and the other gene was NXF3 (located on chromosome X). Expression levels of DYDC2 and NXF3 in the RSA group were significantly lower than those in the normal group (P < 0.05). Collectively, these results demonstrated that changes in DNA methylation might be related to male RSA. Our findings provide important information regarding the potential role of sperm DNA methylation in human development. 
Continuous theta burst stimulation over the bilateral supplementary motor area in obsessive-compulsive disorder treatment: A clinical randomized single-blind sham-controlled trial Qihui Guo, Kaifeng Wang, Huiqin Han, Puyu Li, Jiayue Cheng, Junjuan Zhu, Zhen Wang, Qing Fan Journal: European Psychiatry / Volume 65 / Issue 1 / 2022 Published online by Cambridge University Press: 07 October 2022, e64 Obsessive-compulsive disorder (OCD) can cause substantial damage to quality of life. Continuous theta burst stimulation (cTBS) is a promising treatment for OCD patients with the advantages of safety and noninvasiveness. The present study aimed to evaluate the treatment efficacy of cTBS over the bilateral supplementary motor area (SMA) for OCD patients with a single-blind, sham-controlled design. Fifty-four OCD patients were randomized to receive active or sham cTBS treatment over the bilateral SMA for 4 weeks (five sessions per week, 20 sessions in total). Patients were assessed at baseline (week 0), the end of treatment (week 4), and follow-up (week 8). Clinical scales included the YBOCS, HAMD24, HAMA14, and OBQ44. Three behavioral tests were also conducted to explore the effect of cTBS on response inhibition and decision-making in OCD patients. The treatment response rates were not significantly different between the two groups at week 4 (active: 23.1% vs. sham: 16.7%, p = 0.571) and week 8 (active: 26.9% vs. sham: 16.7%, p = 0.382). Depression and anxiety improvements were significantly different between the two groups at week 4 (HAMD24: F = 4.644, p = 0.037; HAMA14: F = 5.219, p = 0.028). There was no significant difference between the two groups in the performance of three behavioral tests. The treatment satisfaction and dropout rates were not significantly different between the two groups. 
The treatment of cTBS over the bilateral SMA was safe and tolerable, and it could significantly improve the depression and anxiety of OCD patients but was not enough to improve OCD symptoms in this study. Telling Our Own Story: A Bibliometrics Analysis of Mainland China's Influence on Chinese Politics Research, 2001–2020 Hui-Zhen Fu, Li Shao Journal: PS: Political Science & Politics / Volume 56 / Issue 1 / January 2023 Print publication: January 2023 This study conducted a bibliometric analysis of Chinese politics research from 2001 to 2020 (N = 11,285) using Social Sciences Citation Index data. The number of publications in the field by scholars from Mainland China surged in the past 20 years; however, their influence on academia remained limited. Chinese institutions serve as the major hubs of collaborative networks. Using structural topic models, we identified 25 research topics that can be categorized in three clusters. In the past 20 years, scholars from Mainland China steered the focus of Chinese politics by causing a reduction in the proportion of international relation topics and an increase in the proportion of political economy topics. Domestic politics topics had the most citations. Scholars from Mainland China have made contributions to better research methods in the field. This article is a comprehensive view of Chinese politics research using a tool that is rarely used by political scientists. It depicts how studies of Chinese politics influence academia from a bibliometrics perspective. A flexible wearable e-skin sensing system for robotic teleoperation Chuanyu Zhong, Shumi Zhao, Yang Liu, Zhijun Li, Zhen Kan, Ying Feng Journal: Robotica , First View Published online by Cambridge University Press: 16 September 2022, pp. 
1-14 Electronic skin (e-skin) is playing an increasingly important role in health detection, robotic teleoperation, and human-machine interaction, but most e-skins currently lack the integration of on-site signal acquisition and transmission modules. In this paper, we develop a novel flexible wearable e-skin sensing system with 11 sensing channels for robotic teleoperation. The designed sensing system is mainly composed of three components: e-skin sensor, customized flexible printed circuit (FPC), and human-machine interface. The e-skin sensor has 10 stretchable resistors distributed at the proximal and metacarpal joints of each finger respectively and 1 stretchable resistor distributed at the purlicue. The e-skin sensor can be attached to the opisthenar, and thanks to its stretchability, the sensor can detect the bent angle of the finger. The customized FPC, with WiFi module, wirelessly transmits the signal to the terminal device with human-machine interface, and we design a graphical user interface based on the Qt framework for real-time signal acquisition, storage, and display. Based on this developed e-skin system and self-developed robotic multi-fingered hand, we conduct gesture recognition and robotic multi-fingered teleoperation experiments using deep learning techniques and obtain a recognition accuracy of 91.22%. The results demonstrate that the developed e-skin sensing system has great potential in human-machine interaction. Polar Nano-Domains in Barium Hexaferrite Revealed with Multislice Electron Ptychography Harikrishnan K. P., Yilin Evan Li, Yu-Tsun Shao, Zhen Chen, Jiaqiang Yan, Christo Guguschev, Darrell G. Schlom, David A. 
Muller Journal: Microscopy and Microanalysis / Volume 28 / Issue S1 / August 2022 Print publication: August 2022 Detrital zircon geochronology of the Permian Lower Shihezi Formation, northern Ordos Basin, China: time constraints for closing of the Palaeo-Asian Ocean Rong Chen, Feng Wang, Zhen Li, Noreen J Evans, Hongde Chen Journal: Geological Magazine / Volume 159 / Issue 9 / September 2022 Published online by Cambridge University Press: 11 July 2022, pp. 1601-1620 Temporal constraints on the closure of the eastern segment of the Palaeo-Asian Ocean along the northern margin of the North China Craton (NCC) remain unclear. As a part of the NCC, the sedimentation and tectonic evolution of the Late Palaeozoic Ordos Basin were closely related to the opening and closing of the Palaeo-Asian Ocean. We use petrology, quantitative mineralogical analysis, U–Pb geochronology and trace element signatures of detrital zircons of the Lower Shihezi Formation from two sections in the eastern north Ordos Basin and two sections in the western north Ordos Basin to reconstruct the sedimentary provenance and tectonic background of the northern Ordos Basin. The results show that the sediments of the western sections were mainly derived from the Yinshan orogenic belt and Alxa block, and that those in the eastern sections only came from the Yinshan orogenic belt. The trace element ratios in detrital zircons from the Late Palaeozoic sandstones indicate that the source areas were mainly subduction-related continental arcs, closely related to the continued subduction of the Palaeo-Asian Ocean in the Late Palaeozoic. Since the main Late Palaeozoic magmatic periods vary on the east and west sides of the northern margin of the Ordos Basin, two main collisions related to Palaeo-Asian Ocean closure are recorded. The collision on the west side occurred significantly earlier than that in the east. 
This study implies that the Palaeo-Asian Ocean began to subduct beneath the NCC in the Carboniferous and gradually closed from west to east thereafter. Dual-channel LIDAR searching, positioning, tracking and landing system for rotorcraft from ships at sea Tao Zeng, Hua Wang, Xiucong Sun, Hui Li, Zhen Lu, Feifei Tong, Hao Cheng, Canlun Zheng, Mengying Zhang Journal: The Journal of Navigation / Volume 75 / Issue 4 / July 2022 Print publication: July 2022 To address the shortcomings of existing methods for rotorcraft searching, positioning, tracking and landing on a ship at sea, a dual-channel LIDAR searching, positioning, tracking and landing system (DCLSPTLS) is proposed in this paper, which utilises the multi-pulse laser echoes accumulation method and the physical phenomenon that the laser reflectivity of the ship deck in the near-infrared band is four orders of magnitude higher than that of the sea surface. The DCLSPTLS searching and positioning model, tracking model and landing model are established, respectively. The searching and positioning model can provide estimates of the azimuth angle, the distance of the ship relative to the rotorcraft and the ship's course. With the above parameters as inputs, the total tracking time and the direction of the rotorcraft tracking speed can be obtained by using the tracking model. The landing model can calculate the pitch and the roll angles of the ship's deck relative to the rotorcraft by using the least squares method and the laser irradiation coordinates. The simulation shows that the DCLSPTLS can realise the functions of rotorcraft searching, positioning, tracking and landing by using the above parameters. To verify the effectiveness of the DCLSPTLS, a functional test is performed using a rotorcraft and a model ship on a lake. The test results are consistent with the results of the simulation. 
An Audit on the Monitoring and Management of Hyperprolactinemia in Inpatient Adults on Regular Antipsychotics in 8 Acute Wards in an NHS Mental Health Trust Peter Zhang, Zhen Dong Li Journal: BJPsych Open / Volume 8 / Issue S1 / June 2022 Published online by Cambridge University Press: 20 June 2022, p. S13 The aim of the audit is to ascertain how well hyperprolactinemia is being monitored and managed across all acute adult inpatient wards in a mental health NHS Trust, for patients on regular antipsychotic mediation. The objectives of the audit are to: 1) Assess whether prolactin is being monitored according to local guidelines for patients on regular antipsychotic medication, 2) Determine whether hyperprolactinemia is being identified and managed according to appropriate guidelines, 3) Assess the standard of documentation around the decisions made. Data were collected retrospectively from the electronic notes and records for 78 patients, who were discharged from 8 acute wards in February 2021. For checking prolactin test results, the relevant reporting systems were accessed. Two data collection forms were used, which separated patients between those already taking antipsychotics prior to their admission and those who were newly initiated on an antipsychotic. Patients who were prescribed at least one regular antipsychotic were included. Patients with pre-existing medical conditions that cause hyperprolactinaemia, ongoing pregnancy or breastfeeding were excluded. The monitoring for and management of hyperprolactinaemia was assessed against NICE and local guidance. From the reviewed data, 41 patients were prescribed at least one regular antipsychotic drug during their admission. 32 patients were already established on an antipsychotic prior to their admission and 9 individuals were started on their first antipsychotic. Hyperprolactinaemia was identified in 9 patients. 19 patients had no prolactin assay performed during their whole admission. 
44.4% of antipsychotic naïve patients had a baseline prolactin level taken prior to starting an antipsychotic. 9.1% of patients with hyperprolactinaemia had their symptoms assessed by a clinician. 27.3% of patients with hyperprolactinaemia had actions discussed and undertaken to address this. This audit identified that patients are at risk of suffering from hyperprolactinaemia and are being insufficiently monitored. The symptoms of hyperprolactinaemia are not adequately screened or assessed for. This may increase the side effect burden and decrease medication adherence among patients. There is a need to increase the awareness among clinicians about the importance of regular prolactin monitoring to improve patient outcomes. Theory and simulation of electrokinetic fluctuations in electrolyte solutions at the mesoscale Mingge Deng, Faisal Tushar, Luis Bravo, Anindya Ghoshal, George Karniadakis, Zhen Li Journal: Journal of Fluid Mechanics / Volume 942 / 10 July 2022 Published online by Cambridge University Press: 24 May 2022, A29 Electrolyte solutions play an important role in energy storage devices, whose performance relies heavily on the electrokinetic processes at sub-micron scales. Although fluctuations and stochastic features become more critical at small scales, the long-range Coulomb interactions pose a particular challenge for both theoretical analysis and simulation of fluid systems with fluctuating hydrodynamic and electrostatic interactions. Here, we present a theoretical framework based on the Landau–Lifshitz theory to derive closed-form expressions for fluctuation correlations in electrolyte solutions, indicating significantly different decorrelation processes of ionic concentration fluctuations from hydrodynamic fluctuations, which provides insights for understanding transport phenomena of coupled fluctuating hydrodynamics and electrokinetics. 
Furthermore, we simulate fluctuating electrokinetic systems using both molecular dynamics (MD) with explicit ions and mesoscopic charged dissipative particle dynamics (cDPD) with semi-implicit ions, from which we identify that the spatial probability density functions of local charge density follow a gamma distribution at sub-nanometre scale (i.e. $0.3\,{\rm nm}$) and converge to a Gaussian distribution above nanometre scales (i.e. $1.55\,{\rm nm}$), indicating the existence of a lower limit of length scale for mesoscale models using Gaussian fluctuations. The temporal correlation functions of both hydrodynamic and electrokinetic fluctuations are computed from all-atom MD and mesoscale cDPD simulations, showing good agreement with the theoretical predictions based on the linearized fluctuating hydrodynamics theory. Global, regional and national burden of autism spectrum disorder from 1990 to 2019: results from the Global Burden of Disease Study 2019 Zhen Li, Lejin Yang, Hui Chen, Yuan Fang, Tongchao Zhang, Xiaolin Yin, Jinyu Man, Xiaorong Yang, Ming Lu Journal: Epidemiology and Psychiatric Sciences / Volume 31 / 2022 Published online by Cambridge University Press: 10 May 2022, e33 Autism spectrum disorder (ASD) is a neurodevelopmental condition, with symptoms appearing in the early developmental period. Little is known about its current burden at the global, regional and national levels. This systematic analysis aims to summarise the latest magnitudes and temporal trends of ASD burden, which is essential to facilitate more detailed development of prevention and intervention strategies. The data on ASD incidence, prevalence, disability-adjusted life years (DALYs) in 204 countries and territories between 1990 and 2019 came from the Global Burden of Disease Study 2019. The average annual percentage change was calculated to quantify the secular trends in age-standardised rates (ASRs) of ASD burden by region, sex and age. 
In 2019, there were an estimated 60.38 × 10⁴ [95% uncertainty interval (UI) 50.17–72.01] incident cases of ASD, 283.25 × 10⁵ (95% UI 235.01–338.11) prevalent cases and 43.07 × 10⁵ (95% UI 28.22–62.32) DALYs globally. The ASR of incidence slightly increased by around 0.06% annually over the past three decades, while the ASRs of prevalence and DALYs both remained stable over the past three decades. In 2019, the highest burden of ASD was observed in high-income regions, especially in high-income North America, high-income Asia Pacific and Western Europe, where a significant growth in ASRs was also observed. The ASR of ASD burden in males was around three times that of females, but the gender difference shrank with the pronounced increase among females. Of note, among the population aged over 65 years, the burden of ASD presented increasing trends globally. The global burden of ASD continues to increase and remains a major mental health concern. These substantial heterogeneities in ASD burden worldwide highlight the need for making suitable mental-related policies and providing special social and health services. Dissecting the association between psychiatric disorders and neurological proteins: a genetic correlation and two-sample bidirectional Mendelian randomization study Huimei Huang, Shiqiang Cheng, Chun'e Li, Bolun Cheng, Li Liu, Xuena Yang, Peilin Meng, Yao Yao, Chuyu Pan, Jingxi Zhang, Huijie Zhang, Yujing Chen, Zhen Zhang, Yan Wen, Yumeng Jia, Feng Zhang Journal: Acta Neuropsychiatrica / Volume 34 / Issue 6 / December 2022 The role of neurological proteins in the development of bipolar disorder (BD) and schizophrenia (SCZ) remains elusive. The current study aims to explore the potential genetic correlations of plasma neurological proteins with BD and SCZ.
By using the latest genome-wide association study (GWAS) summary data of BD and SCZ (including 41,917 BD cases, 11,260 SCZ cases, and 396,091 controls) derived from the Psychiatric GWAS Consortium website (PGC) and a recently released GWAS of neurological proteins (including 750 individuals), we performed a linkage disequilibrium score regression (LDSC) analysis to detect the potential genetic correlations between the two common psychiatric disorders and each of the 92 neurological proteins. Two-sample Mendelian randomisation (MR) analysis was then applied to assess the bidirectional causal relationship between the neurological proteins identified by LDSC, BD and SCZ. LDSC analysis identified one neurological protein, NEP, which shows suggestive genetic correlation signals for both BD (coefficient = −0.165, p value = 0.035) and SCZ (coefficient = −0.235, p value = 0.020). However, those associations did not remain significant after strict Bonferroni correction. Two-sample MR analysis found that there was an association between genetically predicted level of NEP protein, BD (odds ratio [OR] = 0.87, p value = 1.61 × 10⁻⁶) and SCZ (OR = 0.90, p value = 4.04 × 10⁻⁶). However, in the opposite direction, there is no genetically predicted association between BD, SCZ, and NEP protein level. This study provided novel clues for understanding the genetic effects of neurological proteins on BD and SCZ. Dose–response efficacy of mulberry fruit extract for reducing post-prandial blood glucose and insulin responses: randomised trial evidence in healthy adults David J Mela, Xiu-Zhen Cao, Santhosh Govindaiah, Harry Hiemstra, Ramitha Kalathil, Li Lin, Joshi Manoj, Tingyan Mi, Carole Verhoeven Journal: British Journal of Nutrition, First View Published online by Cambridge University Press: 11 March 2022, pp.
1-8

Extracts of mulberry have been shown to reduce post-prandial glucose (PPG) and insulin (PPI) responses, but the reliability of these effects and the required doses and specifications are unclear. We previously found that 1·5 g of a specified mulberry fruit extract (MFE) significantly reduced PPG and PPI responses to 50 g carbohydrate as rice porridge, with no indications of intolerance. The trials reported here aimed to replicate that work and assess the efficacy of lower MFE doses, using boiled rice as the carbohydrate source. Two separate randomised controlled intervention studies were carried out with healthy Indian males and females aged 20–50 years (n 84 per trial), with PPG area under the curve over 2 h as the primary outcome. Trial 1 used doses of 0, 0·37, 0·75, 1·12 and 1·5 g MFE in boiled rice and 0 or 1·5 g MFE in rice porridge. Trial 2 used doses of 0, 0·04, 0·12 and 0·37 g MFE in boiled rice. In trial 1, relative to control, all MFE doses significantly decreased PPG (–27·2 to −22·9 %; all P ≤ 0·02) and PPI (–34·6 to −14·0 %, all P < 0·01). Breath hydrogen was significantly increased only at 1·5 g MFE (in rice porridge), and self-reported gastrointestinal symptoms were uniformly low. In trial 2, only 0·37 g MFE significantly affected PPG (–20·4 %, P = 0·002) and PPI (–17·0 %, P < 0·001). Together, these trials show that MFE in doses as low as 0·37 g can reliably reduce PPG and PPI responses to a carbohydrate-rich meal, with no apparent adverse effects.

Research on navigation risk of the Arctic Northeast Passage based on POLARIS
Lei An, Long Ma, Hui Wang, Heng-Yu Zhang, Zhen-Hua Li
Journal: The Journal of Navigation / Volume 75 / Issue 2 / March 2022
Published online by Cambridge University Press: 10 February 2022, pp. 455-475
Print publication: March 2022

The complex sea ice conditions in Arctic waters have different impacts on the legs of the Arctic passage, and ships of specific ice classes face different navigation risks.
Therefore, quantitative analysis of the navigation risks faced in different legs has important practical significance. Based on POLARIS, introduced by the IMO, sea ice condition data from 2011 to 2020 were used to quantify the navigation risk of the Arctic Northeast Passage. The risk index outcome (RIO) of the Arctic Northeast Passage was calculated. The navigable windows of the route for ice class 1A ships sailing independently under different sea ice conditions in the last decade were determined, with a navigable period of 91 days under normal sea ice conditions, approximately 175 days under light sea ice conditions and, under severe sea ice conditions, only around week 40 being close to navigable. The three critical waters affecting the safety of ships were identified. Combined with the navigable windows and critical waters, recommendations on ships' navigation and handling, and recommendations for stakeholders, were given. The method and results provide reference and support for assessing the navigation risk of ships in the Northeast Passage and for the safe navigation and operation of ships, and satisfy the needs of relevant countries and enterprises to rationally arrange shipment dates and sailing plans based on the different ice classes of ships.

Counter-flow orbiting of the vortex centre in turbulent thermal convection
Yi-Zhen Li, Xin Chen, Ao Xu, Heng-Dong Xi
Journal: Journal of Fluid Mechanics / Volume 935 / 25 March 2022
Published online by Cambridge University Press: 26 January 2022, A19

We present an experimental study of the large-scale vortex (or large-scale circulation, LSC) in turbulent Rayleigh–Bénard convection in a $\varGamma =\text {diameter}/\text {height}=2$ cylindrical cell. The working fluid is deionized water with Prandtl number ( $Pr$) around 5.7, and the Rayleigh number ( $Ra$) ranges from $7.64\times 10^7$ to $6.06\times 10^8$.
We measured the velocity field in various vertical cross-sectional planes by using the planar particle image velocimetry technique. The velocity measurement in the LSC central plane shows that the flow is in the single-roll form, and the centre of the single roll (vortex) does not always stay at the centre of the cell; instead, it orbits periodically in the direction opposite to the flow direction of the LSC, with its trajectory in the shape of an ellipse. The velocity measurements in the three vertical planes parallel to the LSC central plane indicate that the flow takes the form of a vortex tube horizontally filling almost the whole cell, and the centre line of the vortex tube is consistent with the so-called 'jump rope' form proposed by a previous study that combined numerical simulation and local velocity measurements in the low $Pr$ case (Vogt et al., Proc. Natl Acad. Sci. USA, vol. 115, 2018, pp. 12674–12679). In addition, we found that the oscillation of the local velocity in $\varGamma =2$ cells originates from the periodic orbiting of the vortex centre. Our velocity measurements further indicate that the vortex centre orbiting is absent in $\varGamma =1$ cells, at least in the $Ra$ range of our experiments.

The potential impact of rising sea levels on China's coastal cultural heritage: a GIS risk assessment
Yuqi Li, Xin Jia, Zhen Liu, Luo Zhao, Pengfei Sheng, Michael J. Storozum
Journal: Antiquity / Volume 96 / Issue 386 / April 2022
Print publication: April 2022

Without rapid international action to curb greenhouse gas emissions, climate scientists have predicted catastrophic sea-level rise by 2100. Globally, archaeologists are documenting the effects of sea-level rise on coastal cultural heritage. Here, the authors model the impact of 1m, 2m and 5m sea-level rise on China's coastal archaeological sites using data from the Atlas of Chinese Cultural Relics and Shanghai City's Third National Survey of Cultural Relics.
Although the resulting number of endangered sites is large, the authors argue that these represent only a fraction of those actually at risk, and they issue a call to mitigate the direct and indirect effects of rising sea levels.

GSDMD-mediated pyroptosis: a critical mechanism of diabetic nephropathy
Yi Zuo, Li Chen, Huiping Gu, Xiaoyun He, Zhen Ye, Zhao Wang, Qixiang Shao, Caiping Xue
Journal: Expert Reviews in Molecular Medicine / Volume 23 / 2021
Published online by Cambridge University Press: 27 December 2021, e23

Pyroptosis is a recently identified mechanism of programmed cell death related to Caspase-1 that triggers a series of inflammatory reactions by releasing several proinflammatory factors such as IL-1β and IL-18. The process is characterised by the rupture of cell membranes and the release of cell contents through the mediation of gasdermin (GSDM) proteins. GSDMD is an important member of the GSDM family and plays a critical role in the two pathways of pyroptosis. Diabetic nephropathy (DN) is a microvascular complication of diabetes and a major cause of end-stage renal disease. Recently, it was revealed that GSDMD-mediated pyroptosis plays an important role in the occurrence and development of DN. In this review, we focus on two types of kidney cells, tubular epithelial cells and renal podocytes, to illustrate the mechanism of pyroptosis in DN and provide new ideas for the prevention, early diagnosis and molecular therapy of DN.

Students' perceptions of school sugar-free, food and exercise environments enhance healthy eating and physical activity
Chieh-Hsing Liu, Fong-Ching Chang, Yu-Zhen Niu, Li-Ling Liao, Yen-Jung Chang, Yung Liao, Shu-Fang Shih
Journal: Public Health Nutrition / Volume 25 / Issue 7 / July 2022
Published online by Cambridge University Press: 22 December 2021, pp. 1762-1770

The objective of this study was to examine the relationships between students' perceptions of their school policies and environments (i.e.
sugar-sweetened beverages (SSB) free policy, plain water drinking, vegetables and fruit eating campaign, outdoor physical activity initiative, and the SH150 programme (exercise 150 min/week at school)) and their dietary behaviours and physical activity. Cross-sectional study. Primary, middle and high schools in Taiwan. A nationally representative sample of 2433 primary school (5th–6th grade) students, 3212 middle school students and 2829 high school students completed the online survey in 2018. Multivariate analysis results showed that, after controlling for school level, gender and age, students' perceptions of school sugar-free policies were negatively associated with the consumption of SSB and positively associated with the consumption of plain water. Schools' campaigns promoting the eating of vegetables and fruit were positively associated with students' consumption of vegetables. In addition, schools' initiatives promoting outdoor physical activity and the SH150 programme were positively associated with students' engagement in outdoor physical activities and daily moderate-to-vigorous physical activity. Students' perceptions of healthy school policies and environments promote healthy eating and an increase in physical activity for students.

Damped shape oscillations of a viscous compound droplet suspended in a viscous host fluid
Fang Li, Xie-Yuan Yin, Xie-Zhen Yin
Journal: Journal of Fluid Mechanics / Volume 931 / 25 January 2022
Published online by Cambridge University Press: 01 December 2021, A33

A study of small-amplitude shape oscillations of a viscous compound droplet suspended in a viscous host fluid is performed. A generalized eigenvalue problem is formulated and is solved by using the spectral method. The effects of the relevant non-dimensional parameters are examined for three cases, i.e. a liquid shell in a vacuum and a compound droplet in a vacuum or in a host fluid. The fundamental mode $l=2$ is found to be dominant.
There exist two oscillatory modes: the in-phase and the out-of-phase. In most situations, the interfaces oscillate in phase rather than out of phase. For the in-phase mode, in the absence of the host, as the viscosity of the core or the shell increases, the damping rate increases whereas the oscillation frequency decreases; when the viscosity exceeds a critical value, the mode becomes aperiodic, with the damping rate bifurcating into two branches. In addition, when the tension of the inner interface becomes smaller than some value, the in-phase mode turns aperiodic. In the presence of the unbounded host fluid, there exists a continuous spectrum. The viscosity of the host may decrease or increase the damping rate of the in-phase mode; the mechanism behind this is discussed. The density contrasts between the fluids affect the oscillations of the droplet in a complicated way. In particular, sufficiently large densities of the core or the host lead to the disappearance of the out-of-phase mode. The thin-shell approximation predicts well the oscillation of the compound droplet when the shell is thin.

Geochronology, geochemistry and tectonic implications of early Carboniferous plutons in the southwestern Alxa Block
Zeng-Zhen Wang, Xuan-Hua Chen, Zhao-Gang Shao, Bing Li, Hong-Xu Chen, Wei-Cui Ding, Yao-Yao Zhang, Yong-Chao Wang
Journal: Geological Magazine / Volume 159 / Issue 3 / March 2022
Published online by Cambridge University Press: 12 November 2021, pp. 372-388

The southeastern Central Asian Orogenic Belt (CAOB) records the assembly process between several micro-continental blocks and the North China Craton (NCC), with the consumption of the Paleo-Asian Ocean (PAO), but whether the S-wards subduction of the PAO beneath the northern NCC was ongoing during Carboniferous–Permian time is still being debated. A key issue in resolving this controversy is whether the Carboniferous magmatism in the northern NCC was continental arc magmatism.
The Alxa Block is the western segment of the northern NCC and contiguous to the southeastern CAOB, and their Carboniferous–Permian magmatism could have occurred in similar tectonic settings. In this contribution, new zircon U–Pb ages, elemental geochemistry and Sr–Nd isotopic analyses are presented for three early Carboniferous granitic plutons in the southwestern Alxa Block. Two newly identified aluminous A-type granites, an alkali-feldspar granite (331.6 ± 1.6 Ma) and a monzogranite (331.8 ± 1.7 Ma), exhibit juvenile and radiogenic Sr–Nd isotopic features, respectively. Although a granodiorite (326.2 ± 6.6 Ma) is characterized by high Sr/Y ratios (97.4–139.9), generally treated as an adakitic feature, this sample has highly radiogenic Sr–Nd isotopes and displays significantly higher K₂O/Na₂O ratios than typical adakites. These three granites were probably derived from the partial melting of Precambrian continental crustal sources heated by upwelling asthenosphere in a lithospheric extensional setting. Regionally, both the Alxa Block and the southeastern CAOB are characterized by the formation of early Carboniferous extension-related magmatic rocks but lack coeval sedimentary deposits, suggesting a uniform lithospheric extensional setting rather than a simple continental arc.

A seamless multiscale operator neural network for inferring bubble dynamics
Chensen Lin, Martin Maxey, Zhen Li, George Em Karniadakis
Journal: Journal of Fluid Mechanics / Volume 929 / 25 December 2021
Published online by Cambridge University Press: 21 October 2021, A18

Modelling multiscale systems from nanoscale to macroscale requires the use of atomistic and continuum methods and, correspondingly, different computer codes. Here, we develop a seamless method based on DeepONet, which is a composite deep neural network (a branch and a trunk network) for regressing operators.
In particular, we consider bubble growth dynamics, and we model tiny bubbles of initial size from 100 nm to 10 $\mathrm {\mu }\textrm {m}$, modelled by the Rayleigh–Plesset equation in the continuum regime above 1 $\mathrm {\mu }\textrm {m}$ and the dissipative particle dynamics method for bubbles below 1 $\mathrm {\mu }\textrm {m}$ in the atomistic regime. After an offline training based on data from both regimes, DeepONet can make accurate predictions of bubble growth on-the-fly (within a fraction of a second) across four orders of magnitude difference in spatial scales and two orders of magnitude in temporal scales. The framework of DeepONet is general and can be used for unifying physical models of different scales in diverse multiscale applications.
Riemannian circle

In metric space theory and Riemannian geometry, the Riemannian circle is a great circle with a characteristic length. It is the circle equipped with the intrinsic Riemannian metric of a compact one-dimensional manifold of total length 2π, or the extrinsic metric obtained by restriction of the intrinsic metric of the two-dimensional surface of the sphere, rather than the extrinsic metric obtained by restriction of the Euclidean metric to the unit circle of the two-dimensional Cartesian plane. The distance between a pair of points on the circle is defined to be the length of the shorter of the two arcs into which the circle is partitioned by the two points. It is named after the German mathematician Bernhard Riemann.

Properties

The diameter of the Riemannian circle is π, in contrast with the usual value of 2 for the Euclidean diameter of the unit circle. The inclusion of the Riemannian circle as the equator (or any great circle) of the 2-sphere of constant Gaussian curvature +1 is an isometric imbedding in the sense of metric spaces (there is no isometric imbedding of the Riemannian circle in Hilbert space in this sense).

Gromov's filling conjecture

A long-standing open problem, posed by Mikhail Gromov, concerns the calculation of the filling area of the Riemannian circle. The filling area is conjectured to be 2π, a value attained by the hemisphere of constant Gaussian curvature +1.

References

• Gromov, M.: "Filling Riemannian manifolds", Journal of Differential Geometry 18 (1983), 1–147.
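The shorter-arc distance above is straightforward to compute. A minimal sketch (not from the article): parameterize points by arc length in [0, 2π) and take the shorter of the two arcs between them.

```python
import math

def riemannian_circle_distance(s: float, t: float) -> float:
    """Distance on the Riemannian circle of total length 2*pi:
    the length of the shorter of the two arcs between the
    arc-length positions s and t."""
    d = abs(s - t) % (2 * math.pi)
    return min(d, 2 * math.pi - d)
```

For antipodal points the distance is π (the diameter of the Riemannian circle), whereas the Euclidean chord length between the same two points of the unit circle is 2.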
\begin{document} \begin{frontmatter} \title{QUADRO: A supervised dimension reduction method via Rayleigh quotient optimization} \runtitle{Dimension reduction via Rayleigh optimization} \begin{aug} \author[A]{\fnms{Jianqing}~\snm{Fan}\corref{}\thanksref{M1,T1}\ead[label=e1]{[email protected]}}, \author[B]{\fnms{Zheng Tracy}~\snm{Ke}\thanksref{M2,T1}\ead[label=e2]{[email protected]}}, \author[A]{\fnms{Han}~\snm{Liu}\thanksref{M1,T2}\ead[label=e3]{[email protected]}} \and \author[A]{\fnms{Lucy}~\snm{Xia}\thanksref{M1,T1}\ead[label=e4]{[email protected]}} \runauthor{Fan, Ke, Liu and Xia} \affiliation{Princeton University\thanksmark{M1} and University of Chicago\thanksmark{M2}} \address[A]{J. Fan\\ H. Liu\\ L. Xia\\ Department of Operations Research\\ \quad and Financial Engineering\\ Princeton University\\ Princeton, New Jersey 08544\\ USA\\ \printead{e1}\\ \phantom{E-mail: }\printead*{e3}\\ \phantom{E-mail: }\printead*{e4}} \address[B]{Z. Ke\\ Department of Statistics\\ University of Chicago\\ Chicago, Illinois 60637\\ USA\\ \printead{e2}} \end{aug} \thankstext{T1}{Supported in part by NSF Grants DMS-12-06464 and DMS-14-06266 and NIH Grants \mbox{R01-GM100474} and R01-GM072611.} \thankstext{T2}{Supported in part by NSF Grants III-1116730, NSF III-1332109, an NIH sub-award and a FDA sub-award from Johns Hopkins University and an NIH-subaward from Harvard University.} \received{\smonth{11} \syear{2013}} \revised{\smonth{12} \syear{2014}} \begin{abstract} We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method---named QUADRO (\underline{Qua}dratic \underline {D}imension \underline{R}eduction via Rayleigh \underline {O}ptimization)---for analyzing high-\break dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. 
In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating nonpolynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{62H30} \kwd[; secondary ]{62G20} \end{keyword} \begin{keyword} \kwd{Classification} \kwd{dimension reduction} \kwd{quadratic discriminant analysis} \kwd{Rayleigh quotient} \kwd{oracle inequality} \end{keyword} \end{frontmatter} \section{Introduction} \label{secintro} Rapid developments of imaging technology, microarray data studies and many other applications call for the analysis of high-dimensional binary-labeled data. We consider the problem of finding a ``nice'' projection $f\dvtx \mathbb{R}^d\to\mathbb{R}$ that embeds all data into the real line. 
A projection such as $f$ has applications in many statistical problems for analyzing high-dimensional binary-labeled data, including: \begin{itemize} \item \textit{Dimension reduction}: $f$ provides a data reduction tool for people to visualize the high-dimensional data in a one-dimensional space. \item \textit{Classification}: $f$ can be used to construct classification rules. With a carefully chosen set $A\subset\mathbb{R}$, we can classify a new data point $\mathbf{x}\in\mathbb{R}^d$ by checking whether or not $f(\mathbf{x})\in A$. \item \textit{Feature selection}: when $f(\mathbf{x})$ only depends on a small number of coordinates of~$\mathbf{x}$, this projection selects just a few features from numerous observed ones. \end{itemize} A natural question is what kind of $f$ is a ``nice'' projection? It depends on the goal of statistical analysis. For classification, a good $f$ should yield a small classification error. In feature selection, different criteria select distinct features, and they may suit different real problems. In this paper, we propose using the following criterion for finding $f$: \begin{quote} Under the mapping $f$, the data are as ``separable'' as possible between two classes, and as ``coherent'' as possible within each class. \end{quote} \noindent It can be formulated as maximizing the \emph{Rayleigh quotient} of $f$. Suppose all data are drawn independently from a joint distribution of $(\mathbf{X}, Y)$, where $\mathbf{X}\in\mathbb{R}^d$, and $Y\in\{0,1\}$ is the label. The \emph{Rayleigh quotient} of $f$ is defined as \begin{equation} \label{Rq} \operatorname{Rq}(f) \equiv\frac{\operatorname{var}\{ \mathbb{E}[f(\mathbf{X})| Y]\}}{\operatorname{var} \{ f(\mathbf{X})-\mathbb{E}[f(\mathbf{X})| Y]\} }. \end{equation} Here, the numerator is the variance of $\mathbf{X}$ explained by the class label, and the denominator is the remaining variance of $\mathbf{X}$.
Simple calculation shows that $\operatorname{Rq}(f)=\pi(1-\pi)R(f)$, where $\pi \equiv\mathbb{P}(Y=0)$ and \begin{equation} \label{Rf} R(f) \equiv\frac{\{\mathbb{E}[f(\mathbf{X})| Y=0] - \mathbb {E}[f(\mathbf{X} )| Y=1] \}^2}{\pi\operatorname{var}[f(\mathbf{X})| Y=0] + (1-\pi)\operatorname{var}[f(\mathbf{X})| Y=1]}. \end{equation} Our\vspace*{1pt} goal is to develop a data-driven procedure to find $\hat{f}$ such that $\operatorname{Rq}(\hat{f})$ is large, and $\hat{f}$ is sparse in the sense that it depends on few coordinates of $\mathbf{X}$. The Rayleigh quotient, as a criterion for finding a projection $f$, serves different purposes. First, for dimension reduction, it takes care of both variance explanation and label explanation. In contrast, methods such as principal component analysis (PCA) only consider variance explanation. Second, when the data are normally distributed, a monotone transform of the Rayleigh quotient approximates the classification error; see Section~\ref{secclassification}. Therefore, an $f$ with a large Rayleigh quotient enables us to construct nice classification rules. In addition, it is a convex optimization to maximize the Rayleigh quotient among linear and quadratic $f$ (see Section~\ref{secmethod}), while minimizing the classification error is not. Third, with appropriate regularization, this criterion provides a new feature selection tool for data analysis. The criterion (\ref{Rq}), initially introduced by \citet{fisher1936use} for classification, is known as Fisher's linear discriminant analysis (LDA). In the literature of sufficient dimension reduction, the sliced inverse regression (SIR) proposed by \citet{li1991sliced} can also be formulated as maximizing (\ref{Rq}), where $Y$ can be any variable not necessarily binary. In both LDA and SIR, $f$ is restricted to be a linear function, and the dimension $d$ cannot be larger than $n$. 
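The identity $\operatorname{Rq}(f)=\pi(1-\pi)R(f)$ can be seen as follows. Write $m_y=\mathbb{E}[f(\mathbf{X})| Y=y]$ and $v_y=\operatorname{var}[f(\mathbf{X})| Y=y]$ for $y\in\{0,1\}$. Since $Y$ is binary,
\begin{eqnarray*}
\operatorname{var}\bigl\{\mathbb{E}\bigl[f(\mathbf{X})| Y\bigr]\bigr\} &=& \pi(1-\pi) (m_0-m_1)^2,
\\
\operatorname{var}\bigl\{f(\mathbf{X})-\mathbb{E}\bigl[f(\mathbf{X})| Y\bigr]\bigr\} &=& \mathbb{E}\bigl\{\operatorname{var}\bigl[f(\mathbf{X})| Y\bigr]\bigr\} = \pi v_0 + (1-\pi)v_1,
\end{eqnarray*}
and taking the ratio of the two displays gives $\pi(1-\pi)R(f)$ with $R(f)$ as in (\ref{Rf}).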
In this sense, our work compares directly to various versions of LDA and SIR generalized to nonlinear, high-dimensional settings. We provide a more detailed comparison to the literature in Section~\ref {secconclude}, but preview here the uniqueness of our work. First, we consider a setting where $\mathbf{X}| Y$ has an elliptical distribution and $f$ is a quadratic function, which allows us to derive a simplified version of (\ref{Rq}) and gain extra statistical efficiency; see Section~\ref{secformul} for details. This simplified version of (\ref{Rq}) was never considered before. Furthermore, the assumption of conditional elliptical distribution does not satisfy the requirement of SIR and many other dimension reduction methods [\citet {li1991sliced,cook1991comment}]. In Section~\ref{subsecobjective}, we explain the motivation of the current setting. Second, we utilize robust estimators of mean and covariance matrix, while many generalizations of LDA and SIR are based on sample mean and sample covariance matrix. As shown in Section~\ref{secestimation}, the robust estimators adapt better to heavy tails on the data. It is worth noting that QUADRO only considers the projection to a one-dimensional subspace. In contrast, more sophisticated dimension reduction methods (e.g., the kernel SIR) are able to find multiple projections $f_1,\ldots, f_m$ for $m>1$. This reflects a tradeoff between modeling tractability and flexibility. More specifically, QUADRO achieves better computational and theoretical properties at the cost of sacrificing some flexibility. \subsection{Rayleigh quotient and classification error} \label{subsecRqvsErr} Many popular statistical methods for analyzing high-dimensional binary-labeled data are based on classification error minimization, which is closely related to the Rayleigh quotient maximization. 
We summarize their connections and differences as follows: \begin{longlist}[(a)] \item[(a)] In an ``ideal'' setting where two classes follow multivariate normal distributions with a common covariance matrix and the class of linear functions $f$ is considered, the two criteria are exactly the same, with one being a monotone transform of the other. \item[(b)] In a ``relaxed'' setting where two classes follow multivariate normal distributions but with nonequal covariance matrices and the class of quadratic functions~$f$ (including linear functions as special cases) is considered, the two criteria are closely related in the sense that a monotone transform of the Rayleigh quotient is an approximation of the classification error. \item[(c)] In other settings, the two criteria can be very different. \end{longlist} We now show (a) and (c), and will discuss (b) in Section~\ref {secclassification}. For each $f$, we define a family of classifiers $h_c(\mathbf{x})=I\{f(\mathbf{x})< c\}$ indexed by $c$, where $I(\cdot)$ is the indicator function.~ For each given $c$, we define the classification error of $h_{c}$ to be $\operatorname{err}(h_c)\equiv\mathbb{P}(h_c(\mathbf{X})\neq Y)$. The classification error of $f$ is then defined by \[ \operatorname{Err}(f) \equiv\min_{c\in\mathbb{R}} \bigl\{ \operatorname{err}(h_c) \bigr\}. \] Most existing classification procedures aim at finding a data-driven projection $\hat{f}$ such that $\operatorname{Err}(\hat{f})$ is small (the threshold $c$ is usually easy to choose). Examples include linear discriminant analysis (LDA) and its variations in high dimensions [e.g., \citet{SCRDA,FAIR,LPD,Shao11,witten2011penalized,ROAD,Coda}], quadratic discriminant analysis (QDA), support vector machine (SVM), logistic regression, boosting, etc. We now compare $\operatorname{Rq}(f)$ and $\operatorname{Err}(f)$. 
Let $\pi= \mathbb{P}(Y=0)$, $\bolds{\mu}_1=\mathbb{E}(\mathbf{X}| Y=0)$, $\bolds{\Sigma}_1=\operatorname{cov}(\mathbf{X} | Y=0)$, $\bolds{\mu}_2=\mathbb{E}(\mathbf{X}| Y=1)$ and $\bolds{\Sigma} _2=\operatorname {cov}(\mathbf{X}| Y=1)$. We consider linear functions $\{f(\mathbf{x})=\mathbf{a}^{\top}\mathbf{x}+b\dvtx \mathbf {a}\in\mathbb{R}^d, b\in\mathbb{R}\}$, and write $\operatorname{Rq}(\mathbf{a})=\operatorname{Rq}(\mathbf{a}^{\top}\mathbf{x})$, $\operatorname{Err}(\mathbf {a})=\operatorname{Err}(\mathbf{a}^{\top}\mathbf{x})$ for short. By direct calculation, when the two classes have a common covariance matrix $\bolds{\Sigma}$, \[ \operatorname{Rq}(\mathbf{a}) = \pi(1-\pi) \frac{[\mathbf{a}^{\top}(\bolds{\mu}_1-\bolds{\mu} _2)]^2}{\mathbf{a}^{\top}\bolds{\Sigma}\mathbf{a}}. \] Hence, the optimal $\mathbf{a}_R=\bolds{\Sigma}^{-1}(\bolds{\mu}_1-\bolds{\mu}_2)$. On the other hand, when data follow multivariate normal distributions, the optimal classifier is $h^*(\mathbf{x})=I\{\mathbf{a}_E^{\top}\mathbf{x}< c\}$, where $\mathbf{a}_E=\bolds{\Sigma} ^{-1}(\bolds{\mu}_1-\bolds{\mu}_2)$ and $c=\tfrac{1}{2}\bolds{\mu}_1^{\top}\bolds{\Sigma} ^{-1}\bolds{\mu}_1 - \tfrac{1}{2}\bolds{\mu}_2^{\top}\bolds{\Sigma}^{-1}\bolds{\mu}_2 + \log (\tfrac{1-\pi}{\pi})$. It is observed that $\mathbf{a}_R=\mathbf{a}_E$ and the two criteria are the same. In fact, for all vectors $\mathbf{a}$ such that $\mathbf{a}^{\top }(\bolds{\mu}_1-\bolds{\mu}_2)>0$, \[ \operatorname{Err}(\mathbf{a}) = 1 - \Phi\biggl(\frac{1}{2} \biggl[ \frac{\operatorname{Rq}(\mathbf {a})}{\pi(1-\pi)} \biggr]^{1/2} \biggr), \] where $\Phi$ is the distribution function of a standard normal random variable, and we fix $c=\mathbf{a}^{\top}(\bolds{\mu}_1+\bolds{\mu}_2)/2$. Therefore, the classification error is a monotone transform of the Rayleigh quotient. When we move away from these ideal assumptions, the above two criteria can be very different. 
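The last display is easy to sanity-check numerically. The sketch below (illustrative only; all parameter values are arbitrary choices satisfying $\mathbf{a}^{\top}(\bolds{\mu}_1-\bolds{\mu}_2)>0$, not values from the paper) compares the closed form with a direct computation of the error at the midpoint threshold $c=\mathbf{a}^{\top}(\bolds{\mu}_1+\bolds{\mu}_2)/2$.

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Arbitrary illustrative equal-covariance Gaussian model in d = 2.
pi = 0.4                              # P(Y = 0)
mu1, mu2 = (1.0, 0.5), (0.0, 0.0)     # class means
Sigma = ((2.0, 0.3), (0.3, 1.0))      # common covariance matrix
a = (0.7, -0.2)                       # satisfies a^T (mu1 - mu2) > 0

delta = sum(ai * (m1 - m2) for ai, m1, m2 in zip(a, mu1, mu2))
sigma = math.sqrt(sum(a[i] * Sigma[i][j] * a[j]
                      for i in range(2) for j in range(2)))

# Rayleigh quotient of the linear projection a^T x.
Rq = pi * (1 - pi) * delta**2 / sigma**2

# Closed form: Err(a) = 1 - Phi( (1/2) sqrt( Rq / (pi (1 - pi)) ) ).
err_formula = 1.0 - Phi(0.5 * math.sqrt(Rq / (pi * (1 - pi))))

# Direct error at the midpoint threshold: each class is misclassified
# with probability Phi(-delta / (2 sigma)).
err_direct = (pi * Phi(-delta / (2 * sigma))
              + (1 - pi) * Phi(-delta / (2 * sigma)))
```

The two error values agree to machine precision, as the derivation above predicts.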
We illustrate this point using a bivariate distribution, that is, $d=2$, with different covariance matrices. Specifically, $\pi=0.55$, $\bolds{\mu}_1=(0,0)^{\top}$, $\bolds{\mu}_2=(1.28, 0.8)^{\top}$, $\bolds{\Sigma}_1=\operatorname{diag}(1,1)$ and $\bolds{\Sigma} _2=\operatorname{diag}(3, 1/3)$. We still consider linear functions $f(\mathbf{x})=\mathbf{a}^{\top}\mathbf{x}$ but select only one out of the two features, $X_1$ or $X_2$. Then the maximum Rayleigh quotients, by using each of the two features alone, are 0.853 and 0.923, respectively, whereas the minimum classification errors are 0.284 and 0.295, respectively. As a result, under the criterion of maximizing Rayleigh quotient, Feature~2 is selected, whereas under the criterion of minimizing classification error, Feature~1 is selected. Figure~\ref {figlinRQbayes} displays the distributions of data after being projected to each of the two features. It shows that since data from the second class has a much larger variability at Feature~1 than at Feature~2, the Rayleigh quotient maximization favors Feature~2, although Feature~1 yields a smaller classification error. \begin{figure} \caption{An example in $\mathbb{R}^2$. The green and purple represent class 1 and class 2, respectively. The ellipses are contours of distributions. Probability densities after being projected to $X_1$ and $X_2$ are also displayed. The dotted lines correspond to optimal thresholds for classification using each feature.} \label{figlinRQbayes} \end{figure} \subsection{Objective of the paper} \label{subsecobjective} In this paper, we consider the Rayleigh quotient maximization problem in the following setting: \begin{itemize} \item We consider sparse quadratic functions, that is, $f(\mathbf{x})=\mathbf{x} ^{\top}\bolds{\Omega}\mathbf{x}- 2\bolds{\delta}^{\top}\mathbf{x}$, where $\bolds{\Omega}$ is a sparse $d\times d$ symmetric matrix, and $\bolds{\delta}$ is a sparse $d$-dimensional vector. 
\item The two classes can have different covariance matrices. \item Data from these two classes follow \textit{elliptical distributions}. \item The dimension is large (it is possible that $d \gg n$). \end{itemize} Compared to Fisher's LDA, our setting has several new ingredients. First, we go beyond linear classifiers to enhance flexibility. It is well known that linear classifiers can be inefficient. For example, when two classes have the same mean, linear classifiers perform no better than random guesses. Instead of exploring arbitrary nonlinear functions, we consider the class of quadratic functions so that the Rayleigh quotient still has a nice parametric formulation and, at the same time, helps identify interaction effects between features. Second, we drop the requirement that the two classes share a common covariance matrix, which is a critical condition for Fisher's rule and many other high-dimensional classification methods [e.g., \citet {FAIR,ROAD,LPD}]. In fact, by using quadratic discriminant functions, we take advantage of the difference of covariance matrices between the two classes to enhance classification power. Third, we generalize multivariate normal distributions to the elliptical family, which includes many heavy-tailed distributions, such as multivariate $t$-distributions, Laplace distributions, and Cauchy\vspace*{1pt} distributions. This family of distributions allows us to avoid estimating all $O(d^4)$ fourth cross-moments of $d$ predictors in computing the variance of quadratic statistics and hence overcomes the computational and noise accumulation issues. In our setting, Fisher's rule, that is, $\mathbf{a}_R=\bolds{\Sigma}^{-1}(\bolds{\mu} _1-\bolds{\mu}_2)$, no longer maximizes the Rayleigh quotient. We propose a new method, called quadratic dimension reduction via Rayleigh optimization (QUADRO). It is a \emph{Rayleigh-quotient-oriented procedure} and is a statistical tool for simultaneous dimension reduction and feature selection.
QUADRO has several properties. First, it is a statistically efficient generalization of Fisher's linear discriminant analysis to the quadratic setting. A~naive generalization involves estimation of all fourth cross-moments of the two underlying distributions. In contrast, QUADRO only requires estimating a one-dimensional kurtosis parameter. Second, QUADRO adopts rank-based estimators and robust $M$-estimators of the covariance matrices and the means. Therefore, it is robust to possibly heavy-tailed distributions. Third, QUADRO can be formulated as a convex program and is computationally efficient. Theoretically, we prove that under elliptical models, the Rayleigh quotient of the estimated quadratic function $\hat{f}$ converges to the population maximum Rayleigh quotient at the rate $O_p (s\sqrt{\log (d)/n} )$, where $s$ is the number of important features (counting both single terms and interaction terms). In addition, we establish a connection between our method and quadratic discriminant analysis (QDA) under elliptical models. The rest of this paper is organized as follows. Section~\ref {secformul} formulates Rayleigh quotient maximization as a convex optimization problem. Section~\ref{secmethod} describes QUADRO. Section~\ref{secestimation} discusses rank-based estimators and robust $M$-estimators used in QUADRO. Section~\ref{secanalysis} presents the theoretical analysis. Section~\ref{secclassification} discusses the application of QUADRO in elliptically distributed classification problems. Section~\ref{secnumerical} contains numerical studies. Section~\ref{secconclude} concludes the paper. All proofs are collected in Section~\ref{secproof}. \subsubsection*{Notation} For $0\leq q\leq\infty$, $\vert \mathbf{v}\vert _q$ denotes the $L_q$-norm of a vector $\mathbf{v}$, $\vert \mathbf{A}\vert _q$ denotes the elementwise $L_q$-norm of a matrix $\mathbf{A}$ and $\Vert\mathbf{A}\Vert_q$ denotes the matrix \mbox{$L_q$-}norm of $\mathbf{A}$. When $q=2$, we omit the subscript $q$.
$\lambda_{\min}(\mathbf{A})$ and $\lambda_{\max}(\mathbf{A})$ denote the minimum and maximum eigenvalues of $\mathbf{A}$. $\det(\mathbf{A})$ denotes the determinant of~$\mathbf{A}$. Let $I(\cdot)$ be the indicator function: for any event $B$, $I(B)=1$ if $B$ happens and $I(B)=0$ otherwise. Let $\operatorname{sign}(\cdot)$ be the sign function, where $\operatorname{sign}(u)=1$ when $u\geq0$ and $\operatorname{sign}(u)=-1$ when $u<0$. \section{Rayleigh quotient for quadratic functions} \label{secformul} We first study the population form of Rayleigh quotient for an arbitrary quadratic function. We show that it has a simplified form under the elliptical family. For a quadratic function \[ Q(\mathbf{X}) = \mathbf{X}^{\top}\bolds{\Omega}\mathbf{X}- 2\bolds{\delta}^{\top}\mathbf{X}, \] using (\ref{Rf}), its Rayleigh quotient is \begin{equation} \label{ROmega,delta} R(\bolds{\Omega},\bolds{\delta}) = \frac{ \{ \mathbb{E}[Q(\mathbf{X} )| Y=0]-\mathbb {E}[Q(\mathbf{X})| Y=1] \}^2}{\pi\operatorname{var}[Q(\mathbf{X})| Y=0] + (1-\pi)\operatorname{var}[Q(\mathbf{X})| Y=1]} \end{equation} up to a constant multiplier. The Rayleigh quotient maximization can be expressed as \[ \max_{(\bolds{\Omega}, \bolds{\delta})\dvtx \bolds{\Omega}=\bolds{\Omega}^{\top}} R(\bolds{\Omega}, \bolds{\delta}). \] \subsection{General setting} Suppose $\mathbb{E}(\mathbf{Z})=\bolds{\mu}$ and $\operatorname{cov}(\mathbf{Z} )=\bolds{\Sigma}$. 
By direct calculation, \begin{eqnarray*} \mathbb{E} \bigl[Q(\mathbf{Z}) \bigr]&=& \operatorname{tr}(\bolds{\Omega}\bolds{\Sigma}) + \bolds{\mu}^{\top} \bolds{\Omega}\bolds{\mu}- 2\bolds{\delta}^{\top}\bolds{\mu}, \\ \operatorname{var}\bigl[Q(\mathbf{Z}) \bigr] &=& \mathbb{E} \bigl[\operatorname{tr}\bigl(\bolds{\Omega}\mathbf{Z} \mathbf{Z}^{\top} \bolds{\Omega}\mathbf{Z}\mathbf{Z}^{\top} \bigr) \bigr] - 4\mathbb{E} \bigl[\bolds{\delta}^{\top} \mathbf{Z}\mathbf{Z}^{\top}\bolds{\Omega}\mathbf{Z}\bigr] \\ &&{} + 4 \bolds{\delta}^{\top}\bolds{\Sigma}\bolds{\delta}+4 \bigl(\bolds{\delta}^{\top}\bolds{\mu} \bigr)^2 - \bigl\{\mathbb{E} \bigl[Q(\mathbf{Z}) \bigr] \bigr \}^2. \end{eqnarray*} So $\mathbb{E}[Q(\mathbf{Z})]$ is a linear combination of the elements in $\{ \Omega(i,j), 1\leq i\leq j\leq d$; $\delta(i), 1\leq i\leq d\}$, and $\operatorname{var}[Q(\mathbf{Z})]$ is a quadratic form of these elements. The coefficients in $\mathbb{E}[Q(\mathbf{Z})]$ are functions of $\bolds{\mu}$ and $\bolds{\Sigma}$ only. However, the coefficients in $\operatorname{var}[Q(\mathbf{Z})]$ also depend on all the fourth cross-moments of $\mathbf{Z}$, and there are $O(d^4)$ of them. Let us define $M_1(\bolds{\Omega},\bolds{\delta})=\mathbb{E}[Q(\mathbf{X})| Y=0]$, $L_1(\bolds{\Omega},\bolds{\delta}) = \operatorname{var}[Q(\mathbf{X})| Y=0]$ and $M_2(\bolds{\Omega},\bolds{\delta})$, $L_2(\bolds{\Omega},\bolds{\delta})$ similarly. Also, let $\kappa= (1-\pi)/\pi$. We have \[ R(\bolds{\Omega},\bolds{\delta}) = \frac{ [M_1(\bolds{\Omega},\bolds{\delta})-M_2(\bolds{\Omega},\bolds{\delta} )]^2}{L_1(\bolds{\Omega},\bolds{\delta})+\kappa L_2(\bolds{\Omega},\bolds{\delta})}. \] Therefore, both the numerator and denominator are quadratic combinations of the elements in $\bolds{\Omega}$ and $\bolds{\delta}$. We can stack the $d(d+1)/2$ elements in $\bolds{\Omega}$ (assuming it is symmetric) and the $d$ elements in $\bolds{\delta}$ into a long vector $\mathbf{v}$. 
Then $R(\bolds{\Omega},\bolds{\delta})$ can be written as \[ R(\mathbf{v}) = \frac{(\mathbf{a}^{\top}\mathbf{v})^2}{\mathbf{v}^{\top}\mathbf{A}\mathbf{v}}, \] where $\mathbf{a}$ is a $d'\times1$ vector, $\mathbf{A}$ is a $d'\times d'$ positive semi-definite matrix and $d'=d(d+1)/2+d$. $\mathbf{A}$ and $\mathbf{a}$ are determined by the coefficients in the denominator and numerator of $R(\bolds{\Omega}, \bolds{\delta})$, respectively. Now, $\max_{(\bolds{\Omega},\bolds{\delta} )}R(\bolds{\Omega},\bolds{\delta})$ is equivalent to $\max_{\mathbf{v}}R(\mathbf{v})$. It has explicit solutions. For example, when $\mathbf{A}$ is positive definite, the function $R(\mathbf{v})$ is maximized at $\mathbf{v}^* = \mathbf{A}^{-1}\mathbf{a}$. We can then reshape $\mathbf{v}^*$ to get the desired $(\bolds{\Omega}^*, \bolds{\delta}^*)$. Practical implementation of the above idea is infeasible in high dimensions as it involves $O(d^4)$ cross moments of $\mathbf{Z}$. This not only poses computational challenges, but also accumulates noise in the estimation. Furthermore, good estimates of fourth moments usually require the existence of eighth moments, which is not realistic for many heavy tailed distributions. These problems can be avoided under the elliptical family, as we now illustrate in the next subsection. \subsection{Elliptical distributions} The elliptical family contains multivariate distributions whose densities have elliptical contours. 
It generalizes multivariate normal distributions and inherits many of their nice properties.\vadjust{\goodbreak} Given a $d\times1$ vector $\bolds{\mu}$ and a $d\times d$ positive definite matrix $\bolds{\Sigma}$, a random vector $\mathbf{Z}$ that follows an elliptical distribution admits \begin{equation} \label{Zdecomposition} \mathbf{Z}= \bolds{\mu}+\xi\bolds{\Sigma}^{1/2} \mathbf{U}, \end{equation} where $\mathbf{U}$ is a random vector which follows the uniform distribution on unit sphere $\mathcal{S}^{d-1}$, and $\xi$ is a nonnegative random variable independent of $\mathbf{U}$. Denote the elliptical distribution by $\mathcal{E}(\bolds{\mu},\bolds{\Sigma},g)$, where $g$ is the density of $\xi$. In this paper, we always assume that $\mathbb{E}\xi^{4}<\infty$ and require that $\mathbb{E}(\xi ^2)=d$ for the model identifiability. Then $\bolds{\Sigma}$ is the covariance matrix of $\mathbf{Z}$. \begin{prop} \label{propellipmeanvar} Suppose $\mathbf{Z}$ follows an elliptical distribution as in (\ref {Zdecomposition}). Then \begin{eqnarray*} \mathbb{E} \bigl[Q(\mathbf{Z}) \bigr] &=& \operatorname{tr}(\bolds{\Omega}\bolds{\Sigma})+\bolds{\mu}^{\top}\bolds{\Omega} \bolds{\mu}-2\bolds{\mu}^{\top}\bolds{\delta}, \\ \operatorname{var}\bigl[Q(\mathbf{Z}) \bigr] &=& 2(1+\gamma)\operatorname{tr}( \bolds{\Omega}\bolds{\Sigma}\bolds{\Omega}\bolds{\Sigma})+ \gamma\bigl[{\operatorname{tr}}(\bolds{\Omega}\bolds{\Sigma}) \bigr]^2 +4(\bolds{\Omega}\bolds{\mu}-\bolds{\delta})^{\top }\bolds{\Sigma}(\bolds{\Omega}\bolds{\mu}- \bolds{\delta}), \end{eqnarray*} where $\gamma=\frac{E(\xi^4)}{d(d+2)}-1$ is the kurtosis parameter. \end{prop} The proof is given in the online supplementary material [\citet{QUADROsupp}]. The variance of $Q(\mathbf{Z})$ does not involve any fourth cross-moments,\vspace*{1pt} but only the kurtosis parameter $\gamma$. For multivariate normal distributions, $\xi^2$ follows a $\chi ^2$-distribution with $d$ degrees of freedom, and $\gamma=0$. 
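A quick Monte Carlo sanity check of Proposition \ref{propellipmeanvar} in the Gaussian case ($\gamma=0$) can be carried out as follows; all parameter values below are arbitrary illustrations:

```python
import numpy as np

# illustrative parameters: any symmetric Omega, vector delta,
# mean mu and positive definite Sigma will do
rng = np.random.default_rng(1)
d = 3
Omega = rng.standard_normal((d, d)); Omega = (Omega + Omega.T) / 2.0
delta = rng.standard_normal(d)
mu = rng.standard_normal(d)
B = rng.standard_normal((d, d)); Sigma = B @ B.T + np.eye(d)

# Monte Carlo moments of Q(Z) = Z' Omega Z - 2 delta' Z for Gaussian Z
Z = rng.multivariate_normal(mu, Sigma, size=400_000)
Q = np.einsum('ij,jk,ik->i', Z, Omega, Z) - 2.0 * Z @ delta

# the proposition's formulas with gamma = 0 (Gaussian case)
mean_thm = np.trace(Omega @ Sigma) + mu @ Omega @ mu - 2.0 * mu @ delta
w = Omega @ mu - delta
var_thm = 2.0 * np.trace(Omega @ Sigma @ Omega @ Sigma) + 4.0 * w @ Sigma @ w

assert abs(Q.mean() - mean_thm) < 6.0 * np.sqrt(var_thm / len(Q))
assert abs(Q.var() - var_thm) < 0.05 * var_thm
```

For non-Gaussian elliptical draws, the same check goes through after adding the $\gamma$-dependent terms of the proposition to `var_thm`.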
For multivariate $t$-distribution with degrees of freedom $\nu>4$, we have $\gamma=2/(\nu-4)$. \subsection{Rayleigh optimization} We assume that the two classes both follow elliptical distributions: $\mathbf{X}|(Y=0)\sim\mathcal{E}(\bolds{\mu}_1,\bolds{\Sigma}_1, g_1)$ and $\mathbf{X} |(Y=1)\sim\mathcal{E}(\bolds{\mu}_2,\break \bolds{\Sigma}_2, g_2)$. To facilitate the presentation, we assume the quantity $\gamma$ is the same for both classes of conditional distributions. Let \begin{eqnarray} \label{ML1L2} M(\bolds{\Omega}, \bolds{\delta}) &=& -\bolds{\mu}^{\top}_1\bolds{\Omega} \bolds{\mu}_1+\bolds{\mu}^{\top }_2\bolds{\Omega}\bolds{\mu}_2 +2( \bolds{\mu}_1-\bolds{\mu}_2)^{\top}\bolds{\delta}-\operatorname{tr}\bigl( \bolds{\Omega}( \bolds{\Sigma}_1-\bolds{\Sigma}_2) \bigr),\nonumber \\ L_k(\bolds{\Omega}, \bolds{\delta}) &=& 2(1+\gamma)\operatorname{tr}(\bolds{\Omega} \bolds{\Sigma}_k\bolds{\Omega}\bolds{\Sigma}_k)+\gamma\bigl[{\operatorname{tr}}(\bolds{\Omega} \bolds{\Sigma}_k) \bigr]^2 \\ &&{} +4(\bolds{\Omega}\bolds{\mu}_k- \bolds{\delta})^{\top}\bolds{\Sigma}_k(\bolds{\Omega}\bolds{\mu}_k- \bolds{\delta}),\nonumber \end{eqnarray} for $k=1$ and $2$. Combining (\ref{ROmega,delta}) with Proposition \ref{propellipmeanvar}, we have \begin{equation} \label{defR} R(\bolds{\Omega},\bolds{\delta}) = \frac{[M(\bolds{\Omega},\bolds{\delta} )]^2}{L_1(\bolds{\Omega},\bolds{\delta})+ \kappa L_2(\bolds{\Omega},\bolds{\delta})}, \end{equation} where $\kappa= (1-\pi)/\pi$. Note that if we multiply both $\bolds{\Omega}$ and $\bolds{\delta}$ by a common constant, $R(\bolds{\Omega},\bolds{\delta})$ remains unchanged. 
Therefore, maximizing $R(\bolds{\Omega}, \bolds{\delta})$ is equivalent to solving the following constrained minimization problem: \begin{equation} \label{progunconstrain} \min_{(\bolds{\Omega},\bolds{\delta})\dvtx M(\bolds{\Omega},\bolds{\delta} )=1,\bolds{\Omega}=\bolds{\Omega} ^{\top}} \bigl\{ L_1(\bolds{\Omega}, \bolds{\delta})+\kappa L_2(\bolds{\Omega},\bolds{\delta}) \bigr\}. \end{equation} We call problem (\ref{progunconstrain}) the \textit{Rayleigh optimization}. It is a convex problem whenever $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$ are both positive semi-definite. The formulation of the Rayleigh optimization only involves the means and covariance matrices, and the kurtosis parameter $\gamma$. Therefore, if we know $\gamma$ (e.g., when we know which subfamily the distributions belong to) and have good estimates $(\widehat{\bmu}_1,\widehat{\bmu} _2,\widehat{\bSigma}_1,\widehat{\bSigma}_2)$, we can solve the empirical version of (\ref{progunconstrain}) to obtain $(\widehat{\bolds{\Omega}}, \widehat {\bolds{\delta}})$, which is the main idea of QUADRO. In addition, (\ref {progunconstrain}) is a convex problem, with a quadratic objective and equality constraints. Hence it can be solved efficiently by many optimization algorithms. \section{Quadratic dimension reduction via Rayleigh optimization} \label{secmethod} Now, we formally introduce the QUADRO procedure. We fix a model parameter $\gamma\geq0$. Let $\widehat{M}$, $\widehat{L}_1$ and $\widehat{L}_2$ be the sample versions of $M, L_{1}, L_{2}$ in (\ref {ML1L2}), obtained by replacing $(\bolds{\mu}_1,\bolds{\mu}_2, \bolds{\Sigma}_1,\bolds{\Sigma}_2)$ with their estimates. Details of these estimates will be given in Section~\ref{secestimation}. Let $\widehat{\pi}=n_1/(n_1+n_2)$ and $\kappa= (1-\widehat{\pi})/\widehat{\pi}$.
Given tuning parameters $\lambda_1>0$ and $\lambda_2>0$, we solve \begin{equation} \label{progquadro} \min_{(\bolds{\Omega},\bolds{\delta})\dvtx \widehat{M}(\bolds{\Omega},\bolds{\delta} )=1, \bolds{\Omega} =\bolds{\Omega}^{\top}} \bigl\{ \widehat{L}_1( \bolds{\Omega},\bolds{\delta})+\kappa\widehat{L}_2(\bolds{\Omega},\bolds{\delta}) + \lambda_1 \vert\bolds{\Omega}\vert_1 + \lambda _2 \vert\bolds{\delta}\vert_1 \bigr\}. \end{equation} We propose a linearized augmented Lagrangian method to solve (\ref{progquadro}). To simplify the notation, we write $\widehat{L}=\widehat{L}_1+\kappa \widehat{L}_2$, and omit the hat symbol on $M$ and $L$ when there is no confusion. The optimization problem is then \[ \min_{(\bolds{\Omega}, \bolds{\delta})\dvtx M(\bolds{\Omega}, \bolds{\delta})=1, \bolds{\Omega}=\bolds{\Omega} ^{\top}} \bigl\{ L(\bolds{\Omega}, \bolds{\delta}) + \lambda_1 \vert\bolds{\Omega}\vert_1 + \lambda_2 \vert\bolds{\delta} \vert_1 \bigr\}. \] For an algorithm parameter $\rho>0$, and a dual variable $\nu$, we define the \textit{augmented Lagrangian} as \[ F_{\rho}(\bolds{\Omega},\bolds{\delta}, \nu) = L(\bolds{\Omega},\bolds{\delta}) + \nu\bigl[M( \bolds{\Omega},\bolds{\delta})-1 \bigr] + (\rho/2) \bigl[M(\bolds{\Omega},\bolds{\delta})-1 \bigr]^2. \] Using zero as the initial value, we iteratively update: \begin{itemize} \item$\bolds{\delta}^{(k)} = \mathop{\operatorname{argmin}}_{\bolds{\delta}} \{ F_{\rho}(\bolds{\Omega} ^{(k-1)}, \bolds{\delta}, \nu^{(k-1)}) + \lambda_2 \vert \bolds{\delta}\vert _1 \}$,\vspace*{1pt} \item$\bolds{\Omega}^{(k)} = \mathop{\operatorname{argmin}}_{\bolds{\Omega}\dvtx \bolds{\Omega}=\bolds{\Omega}^{\top }}\{ F_{\rho}(\bolds{\Omega}, \bolds{\delta}^{(k)}, \nu^{(k-1)}) + \lambda _1 \vert \bolds{\Omega}\vert _1 \}$,\vspace*{1pt} \item$\nu^{(k)} = \nu^{(k-1)} + \rho[M(\bolds{\Omega}^{(k)},\bolds{\delta}^{(k)})-1]$. \end{itemize} Here, the first two steps are \textit{primal updates}, and the third step is a \textit{dual update}. 
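This primal--dual scheme can be illustrated on a simplified vector analogue of (\ref{progquadro}): minimize $\mathbf{v}^{\top}\mathbf{Q}\mathbf{v}+\lambda\vert\mathbf{v}\vert_1$ subject to $\mathbf{m}^{\top}\mathbf{v}=1$. The sketch below is not the QUADRO implementation itself, but it uses the same three ingredients: an augmented Lagrangian, soft-thresholded (linearized) primal updates and a dual ascent step:

```python
import numpy as np

def soft_threshold(x, a):
    # S(x, a) = (|x| - a)_+ sign(x), applied elementwise
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

def augmented_lagrangian_l1(Q, m, lam, rho=10.0, outer=300, inner=200):
    """Minimize v'Qv + lam*|v|_1 subject to m'v = 1 by an augmented
    Lagrangian with proximal-gradient primal updates and dual ascent."""
    v, nu = np.zeros(len(m)), 0.0
    # step-size bound: Lipschitz constant of the smooth part's gradient
    tau = 2.0 * np.linalg.eigvalsh(Q)[-1] + rho * (m @ m)
    for _ in range(outer):
        for _ in range(inner):
            # gradient of F_rho(v,nu) = v'Qv + nu(m'v-1) + (rho/2)(m'v-1)^2
            grad = 2.0 * Q @ v + (nu + rho * (m @ v - 1.0)) * m
            v = soft_threshold(v - grad / tau, lam / tau)
        nu += rho * (m @ v - 1.0)  # dual update
    return v

# a small self-check with illustrative data
rng = np.random.default_rng(2)
d = 5
B = rng.standard_normal((d, d)); Q = B @ B.T + np.eye(d)
m = rng.standard_normal(d)

# lam = 0: the solution has the closed form Q^{-1}m / (m'Q^{-1}m)
v_hat = augmented_lagrangian_l1(Q, m, lam=0.0)
v_star = np.linalg.solve(Q, m); v_star = v_star / (m @ v_star)
assert np.allclose(v_hat, v_star, atol=1e-3)

# lam > 0: the equality constraint is still enforced at convergence
v_hat = augmented_lagrangian_l1(Q, m, lam=0.5)
assert abs(m @ v_hat - 1.0) < 1e-3
```

In QUADRO the primal variable splits into $(\bolds{\Omega},\bolds{\delta})$ and the two blocks are updated alternately, but the structure of the iteration is the same.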
First, we consider the update of $\bolds{\delta}$. When $\bolds{\Omega}$ and $\nu$ are fixed, we can write \[ F_\rho(\bolds{\Omega},\bolds{\delta},\nu) = \bolds{\delta}^{\top}\mathbf{A}\bolds{\delta}- 2 \bolds{\delta}^{\top}\mathbf{b}+ c_{\rho}(\bolds{\Omega},\nu), \] where \begin{eqnarray}\label{alg-A,b} \mathbf{A}&=& 4(\bolds{\Sigma}_1+\kappa\bolds{\Sigma}_2) + 2\rho( \bolds{\mu}_1-\bolds{\mu}_2) (\bolds{\mu}_1- \bolds{\mu}_2)^{\top}, \nonumber \\ \mathbf{b}&=& 4(\bolds{\Sigma}_1\bolds{\Omega}\bolds{\mu}_1+\kappa \bolds{\Sigma}_2\bolds{\Omega}\bolds{\mu}_2) \\ &&{} + \bigl[ \rho\operatorname{tr}\bigl( \bolds{\Omega}(\bolds{\Sigma}_1-\bolds{\Sigma}_2) \bigr) + \rho\bolds{\mu} _1^{\top}\bolds{\Omega}\bolds{\mu}_1 - \rho \bolds{\mu}_2^{\top}\bolds{\Omega}\bolds{\mu}_2 + (\rho-\nu) \bigr]( \bolds{\mu}_1-\bolds{\mu}_2),\hspace*{-30pt} \nonumber \end{eqnarray} and $c_\rho(\bolds{\Omega}, \nu)$ does not depend on $\bolds{\delta}$. Note that $\mathbf{A}$ is a positive semi-definite matrix. The update of $\bolds{\delta}$ is indeed a Lasso problem. Next, we consider the update of $\bolds{\Omega}$. When $\bolds{\delta}$ and $\nu$ are fixed, $F_{\rho}(\bolds{\Omega},\bolds{\delta},\nu)$ is a convex function of $\bolds{\Omega}$. We propose an approximate update step: we first ``linearize'' $F_\rho$ at $\bolds{\Omega}=\bolds{\Omega}^{(k-1)}$ to construct an upper envelope $\bar{F}_\rho$, and then minimize this upper envelope. 
In detail, at any $\bolds{\Omega}=\bolds{\Omega}_0$, we consider the following upper bound of $F_\rho(\bolds{\Omega}, \bolds{\delta}, \nu)$: \begin{eqnarray*} \bar{F}_\rho(\bolds{\Omega},\bolds{\delta},\nu) &\equiv& F_\rho( \bolds{\Omega}_0, \bolds{\delta},\nu) + \sum_{1\leq i\leq j\leq d} \bigl[\Omega(i,j)-\Omega_0(i,j) \bigr]\frac{\partial F_\rho(\bolds{\Omega} _0,\bolds{\delta},\nu)}{\partial \Omega(i,j)} \\ &&{} +\frac{\tau}{2}\sum_{1\leq i\leq j\leq d} \bigl[\Omega(i,j)- \Omega_0(i,j) \bigr]^2, \end{eqnarray*} where $\tau$ is a large enough constant [e.g., we can take $\tau=\sum _{1\leq i\leq j\leq d}\tfrac{\partial^2 F_{\rho}(\bolds{\Omega}_0,\bolds{\delta} ,\nu)}{\partial\Omega(i,j)^2}$]. We then minimize $\bar{F}_\rho (\bolds{\Omega},\bolds{\delta},\nu)+\lambda_1\vert \bolds{\Omega}\vert _1$ to update $\bolds{\Omega}$. This modified update step has an explicit solution, \[ \Omega^*(i,j)= \mathcal{S} \biggl( \Omega_0(i,j) - \frac{1}{\tau }\frac{\partial F_\rho(\bolds{\Omega}_0,\bolds{\delta},\nu)}{\partial\Omega (i,j)}, \frac{\lambda_1}{\tau} \biggr), \] where $\mathcal{S}(x,a)\equiv(\vert x\vert -a)_+\operatorname{sign}(x)$ is the soft-thresholding function. We can write $\bolds{\Omega}^*$ in a matrix form. Let \begin{eqnarray} \label{alg-D} \mathbf{D}&=& 4(1+\gamma) (\bolds{\Sigma}_1\bolds{\Omega} \bolds{\Sigma}_1+\kappa\bolds{\Sigma}_2\bolds{\Omega}\bolds{\Sigma}_2) + 2\gamma\bigl[ \operatorname{tr}(\bolds{\Omega}\bolds{\Sigma}_1)\bolds{\Sigma}_1 +\kappa \operatorname{tr}(\bolds{\Omega}\bolds{\Sigma}_2)\bolds{\Sigma}_2 \bigr]\hspace*{-20pt} \nonumber\\[-8pt]\\[-8pt]\nonumber &&{} + 4\operatorname{sym}\bigl( \bolds{\Sigma}_1(\bolds{\Omega}\bolds{\mu}_1-\bolds{\delta}) \bolds{\mu}_1^{\top} + \kappa\bolds{\Sigma}_2(\bolds{\Omega} \bolds{\mu}_2-\bolds{\delta})\bolds{\mu}_2^{\top} \bigr), \nonumber \end{eqnarray} where $\operatorname{sym}(\mathbf{B})=(\mathbf{B}+\mathbf{B}^{\top})/2$ for any square matrix $\mathbf{B}$. 
By direct calculation, \[ \bolds{\Omega}^* = \mathcal{S} \biggl( \bolds{\Omega}_0 - \frac{1}{\tau}\mathbf{D}, \frac{\lambda_1}{\tau} \biggr). \] We now describe our algorithm. Let us initialize $\bolds{\Omega}^{(0)}={\mathbf0} _{d\times d}$, $\bolds{\delta}^{(0)}={\mathbf0}$ and $\nu^{(0)}=0$. At iteration $k$, the algorithm updates as follows: \begin{itemize} \item Compute $\mathbf{A}=\mathbf{A}(\bolds{\Omega}^{(k-1)}, \bolds{\delta}^{(k-1)}, \nu ^{(k-1)})$ and $\mathbf{b}= \mathbf{b}(\bolds{\Omega}^{(k-1)}, \bolds{\delta}^{(k-1)}, \nu^{(k-1)})$ using (\ref{alg-A,b}). Update $\bolds{\delta}^{(k)}=\mathop{\operatorname{argmin}} _{\bolds{\delta}}\{\bolds{\delta}^{\top}\mathbf{A}\bolds{\delta}-2\bolds{\delta}^{\top}\mathbf{b}+ \lambda_2\vert \bolds{\delta}\vert _1\}$. \item Compute\vspace*{2pt} $\mathbf{D}=\mathbf{D}(\bolds{\Omega}^{(k-1)}, \bolds{\delta}^{(k)}, \nu ^{(k-1)})$ using (\ref{alg-D}). Update $\bolds{\Omega}^{(k)} = \mathcal {S}(\bolds{\Omega}^{(k-1)} - \frac{1}{\tau}\mathbf{D}, \frac{\lambda_1}{\tau } )$. \item Update $\nu^{(k)} = \nu^{(k-1)} + \rho[M(\bolds{\Omega} ^{(k)},\bolds{\delta}^{(k)})-1]$. \end{itemize} We stop the iteration once $\max\{\rho\vert \bolds{\Omega}^{(k)}-\bolds{\Omega}^{(k-1)}\vert, \rho \vert \bolds{\delta}^{(k)}-\bolds{\delta}^{(k-1)}\vert, \vert \nu^{(k)}-\nu ^{(k-1)}\vert /\rho\} \leq\varepsilon$ for some pre-specified precision $\varepsilon$. This is a modified version of the augmented Lagrangian method, where in the step of updating $\bolds{\Omega}$, we minimize an upper envelope, which is obtained by locally linearizing the augmented Lagrangian. \begin{rem*} QUADRO can be extended to folded concave penalties, for example, to SCAD [\citet{fan2001variable}] or to the adaptive Lasso [\citet{zou2006adaptive}].
Using the Local Linear Approximation algorithm [\citet{zou2008one,fan2014strong}], we can solve the SCAD-penalized QUADRO and the adaptive-Lasso-penalized QUADRO by solving $L_1$-penalized QUADRO with multiple-step and one-step iterations, respectively. \end{rem*} \section{Estimation of mean and covariance matrix} \label{secestimation} QUADRO requires estimates of the mean vector and covariance matrix for each class as inputs. We will show in Section~\ref{secanalysis} that the performance of QUADRO is closely related to the max-norm estimation error on mean vectors and covariance matrices. The sample mean and sample covariance matrix work well for Gaussian data. However, when data are from elliptical distributions, they may have inferior performance, as we estimate nonpolynomially many means and variances. In Sections~\ref{subsecmean}--\ref{subseccov}, we suggest a robust $M$-estimator to estimate the mean and a rank-based estimator to estimate the covariance matrix, which are more appropriate for non-Gaussian data. Moreover, in Section~\ref{subsecgamma} we discuss how to estimate the model parameter $\gamma$ when it is unknown. \subsection{Estimation of the mean} \label{subsecmean} Suppose $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are i.i.d. samples of a random vector $\mathbf{X}=(X_1,\ldots, X_d)^{\top}$ from an elliptical distribution $\mathcal{E}(\bolds{\mu}, \bolds{\Sigma}, g)$. Let us denote $\bolds{\mu}=(\mu_1,\ldots ,\mu_d)^{\top}$ and $\mathbf{x}_i=(x_{i1},\ldots,x_{id})^{\top}$ for $i=1,\ldots,n$. We estimate each $\mu_j$ marginally using the data $\{ x_{1j}, \ldots, x_{nj}\}$. One possible estimator is the sample median \[ \widehat{\mu}_{Mj} = \operatorname{median} \bigl(\{x_{1j}, \ldots, x_{nj}\} \bigr).
\] It can be shown that even under heavy-tailed distributions, $P ( \vert \widehat{\mu}_{Mj}-\mu_j\vert > A\sqrt{\log(\delta^{-1})/n} )\leq \delta$ for small $\delta\in(0,1)$, where $A$ is a constant determined by the probability density at $\mu_j$, for each fixed $j$. This combined with the union bound gives that $\vert \widehat{\bmu}_M-\bolds{\mu} \vert _\infty =O_p(\sqrt{\log(d)/n})$. \citet{Catoni11} proposed another $M$-estimator for the mean of heavy-tailed distributions. It works for distributions whose mean is not necessarily equal to the median, which is essential for estimating covariances of random variables. We denote the diagonal elements of the covariance matrix $\bolds{\Sigma}$ as $\sigma^2_{1},\sigma^2_{2},\ldots ,\sigma^2_{d}$, and the off-diagonal elements as $\sigma_{kj}$ for $k\neq j$. The estimator $\widehat{\bolds{\mu}}_C=(\widehat{\mu}_{C,1},\ldots, \widehat{\mu}_{C,d})^{\top}$ is obtained as follows. For a strictly increasing function $h\dvtx \mathbb{R}\rightarrow\mathbb{R}$ such that $ -\log(1-y+y^2/2)\leq h(y)\leq\log(1+y+y^2/2)$, and a value $\delta \in(0,1)$ such that $n > 2 \log(1/\delta)$, we let \[ \alpha_{\delta} = \biggl\{\frac{2\log(\delta^{-1})}{n [v + \frac{2v\log(\delta^{-1})}{n-2\log(\delta^{-1})} ]} \biggr\}^{1/2}, \] where $v$ is an upper bound of $\max\{\sigma_1^2,\ldots,\sigma_d^2\} $. For each $j$, we define $\widehat{\mu}_{Cj}$ as the unique value that satisfies $\sum_{i=1}^n h(\alpha_\delta(x_{ij}-\widehat{\mu}_{Cj})) = 0$. It was shown in \citet{Catoni11} that $P ( \vert \widehat{\mu }_{Cj}-\mu_j\vert > \sqrt{\tfrac{2v\log(\delta^{-1})}{n(1-2\log (\delta ^{-1})/n)}} )\leq\delta$ when the variance of $X_j$ exists. Therefore, by taking $\delta=1/(n\vee d)^2$, $\vert \widehat{\bmu}_C-\bolds{\mu} \vert _\infty \leq C\sqrt{\log(d)/n}$ with probability at least $1-(n\vee d)^{-1}$, which gives the desired convergence rate.
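A minimal sketch of this $M$-estimator is given below, with the admissible choice $h(y)=\operatorname{sgn}(y)\log(1+\vert y\vert+y^2/2)$ and a crude plug-in bound $v$ (both choices here are illustrative). The bisection works because $\sum_i h(\alpha_\delta(x_i-m))$ is strictly decreasing in $m$:

```python
import numpy as np

def catoni_mean(x, v=None, delta=None):
    """Catoni-type M-estimator of a mean: the root m of
    sum_i h(alpha_delta * (x_i - m)) = 0, found by bisection."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if delta is None:
        delta = 1.0 / n**2        # the text takes delta = 1/(n v d)^2
    if v is None:
        v = 3.0 * x.var()         # crude plug-in bound on the variance
    ln = np.log(1.0 / delta)      # requires n > 2*log(1/delta)
    alpha = np.sqrt(2.0 * ln / (n * (v + 2.0 * v * ln / (n - 2.0 * ln))))

    def h(y):  # h(y) = sign(y) log(1 + |y| + y^2/2)
        return np.sign(y) * np.log1p(np.abs(y) + 0.5 * y**2)

    lo, hi = x.min(), x.max()     # the root lies in [min(x), max(x)]
    for _ in range(80):           # bisection on m
        mid = 0.5 * (lo + hi)
        if h(alpha * (x - mid)).sum() > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# heavy-tailed illustration: Student-t with 5 degrees of freedom
rng = np.random.default_rng(3)
x = rng.standard_t(5, size=2000) + 1.0   # true mean 1.0
assert abs(catoni_mean(x) - 1.0) < 0.15
```

The same routine applied to the squared data $\{x_{ij}^2\}$ yields the second-moment estimate $\widehat{\eta}_{Cj}$ used for the variance estimator below.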
To\vspace*{1pt} implement this estimator, we take $h(y)= \operatorname{sgn}(y) \log (1+\vert y\vert +y^2/2)$. For the choice of $v$, any value larger than $\max\{\sigma_1^2, \ldots, \sigma_d^2\}$ would work in theory. \citet{Catoni11} introduced a Lepski-type adaptation method to choose $v$. For simplicity, we take $v=3\max\{\widetilde{\sigma}_1^2,\ldots, \widetilde{\sigma }^2_d\}$, where $\widetilde{\sigma}^2_j$ is the sample variance of $X_j$. The two estimators, the median and the $M$-estimator, both have a convergence rate of $O_p(\sqrt{\log(d)/n})$ in terms of the max-norm error. In our numerical experiments, the $M$-estimator has better numerical performance, and we stick to this estimator. \subsection{Estimation of the covariance matrix} \label{subseccov} To estimate the covariance matrix~$\bolds{\Sigma}$, we estimate the marginal variances $\{\sigma^2_j, 1\leq j\leq d\}$ and the correlation matrix $\mathbf{C}$ separately. Again, we need robust estimates even though the data have fourth moments, as we simultaneously estimate nonpolynomially many covariance parameters. First, we consider estimating $\sigma^2_{j}$. Note that $\sigma _j^2=\mathbb{E}(X_j^2)-\mathbb{E}^2(X_j)$. We estimate $\mathbb {E}(X_j^2)$ and $\mathbb{E}(X_j)$ separately. To estimate $\mathbb {E}(X_j^2)$, we use the $M$-estimator described above on the squared data $\{x^2_{1j}, \ldots, x^2_{nj}\}$ and denote the estimator by~$\widehat{\eta}_{Cj}$. This works as $\mathbb{E}(X_j^4)$ is finite for each $j$ in our setting; in addition, the $M$-estimator applies to asymmetric distributions. We then define \[ \widehat{\sigma}_{Cj}^2 = \max\bigl\{\widehat{ \eta}_{Cj} - \widehat{\mu}_{Cj}^2, \delta_0 \bigr\}, \] where $\widehat{\mu}_{Cj}$ is the $M$-estimator of $\mathbb{E}(X_j)$ and $\delta_0>0$ is a small constant ($\delta_0<\min\{\sigma _1^2,\ldots,\sigma_d^2\}$).
It is easy to see that when the fourth moments of $X_j$ are uniformly upper bounded by a constant and $n\geq4\log(d^2)$, $\max\{\vert \widehat {\sigma}_{Cj}-\sigma_j\vert, 1\leq j\leq d\}= O_p(\sqrt{\log(d)/n})$. Next, we consider estimating the correlation matrix $\mathbf{C}$. For this, we use Kendall's tau correlation matrix proposed by \citet{HanLiu12}. Kendall's tau correlation coefficients [\citet{Kendall}] are defined as \[ \tau_{jk} = \mathbb{P} \bigl( (X_j-\widetilde{X}_j) (X_k-\widetilde{X}_k)> 0 \bigr) - \mathbb{P} \bigl( (X_j-\widetilde{X}_j) (X_k- \widetilde{X}_k)< 0 \bigr), \] where $\widetilde{\mathbf{X}}$ is an independent copy of $\mathbf{X}$. They have the following relationship to the true coefficients: $C_{jk}= \sin (\frac{\pi}{2}\tau_{jk})$ for the elliptical family. Based on this equality, we first estimate Kendall's tau correlation coefficients using rank-based estimators \[ \widehat{\tau}_{jk} = \cases{ \displaystyle \frac{2}{n(n-1)}\sum_{1\leq i< i'\leq n}\operatorname{sign}\bigl((x_{ij}-x_{i'j}) (x_{ik}-x_{i'k}) \bigr), &\quad $j\neq k$, \vspace*{3pt}\cr 1, &\quad $j=k$,} \] and then estimate the correlation matrix by $\widehat{\mathbf{C}}=(\widehat {C}_{jk})$ with \[ \widehat{C}_{jk} = \sin\biggl(\frac{\pi}{2}\widehat{\tau }_{jk} \biggr). \] It\vspace*{1pt} is shown in \citet{HanLiu12} that $\vert \widehat{\mathbf{C}}-\mathbf{C}\vert _\infty= O_p(\sqrt{\log(d)/n})$. Finally, we combine $\{\widehat{\sigma}^2_j, 1\leq j\leq d\}$ and $\widehat{\mathbf{C}}$ to get $\widehat{\bSigma}$. Let \[ \widetilde{\Sigma}_{jk} = \widehat{\sigma}_j \widehat{ \sigma}_k \widehat{C}_{jk}, \qquad1\leq j,k\leq d. \] It follows immediately that $\vert \widetilde{\bolds{\Sigma}}-\bolds{\Sigma} \vert _\infty= O_p(\sqrt{\log(d)/n})$. However, this estimator is not necessarily positive semi-definite. To implement QUADRO, we need $\widehat{\bolds{\Sigma} }$ to be positive semi-definite so that the optimization in (\ref {progquadro}) is a convex problem. 
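The rank-based construction of the correlation part of $\widetilde{\bolds{\Sigma}}$ can be sketched as follows (a straightforward $O(n^2d^2)$ implementation, adequate only as an illustration):

```python
import numpy as np

def kendall_tau_matrix(X):
    """Pairwise Kendall's tau via the pairwise-sign formula."""
    n, d = X.shape
    iu = np.triu_indices(n, k=1)                 # all pairs i < i'
    signs = [np.sign(X[:, j][:, None] - X[:, j][None, :])[iu]
             for j in range(d)]
    T = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            # average of sign((x_ij - x_i'j)(x_ik - x_i'k)) over pairs
            T[j, k] = T[k, j] = (signs[j] * signs[k]).mean()
    return T

def rank_correlation(X):
    # C_hat(j,k) = sin((pi/2) tau_hat(j,k)), valid for elliptical data
    return np.sin(0.5 * np.pi * kendall_tau_matrix(X))

# sanity check on Gaussian data with known correlation 0.6
rng = np.random.default_rng(4)
C = np.array([[1.0, 0.6], [0.6, 1.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=800)
C_hat = rank_correlation(X)
assert abs(C_hat[0, 1] - 0.6) < 0.1
assert np.allclose(np.diag(C_hat), 1.0)
```

Multiplying $\widehat{C}_{jk}$ by the robust scale estimates $\widehat{\sigma}_j\widehat{\sigma}_k$ then gives $\widetilde{\Sigma}_{jk}$, which may fail to be positive semi-definite and hence requires the projection step described next.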
We obtain $\widehat{\bSigma}$ by projecting $\widetilde{\bolds{\Sigma}}$ onto the cone of positive semi-definite matrices through the convex optimization \begin{equation} \label{project} \widehat{\bSigma}= \mathop{\operatorname{argmin}}_{\mathbf{A}\dvtx \mathbf{A}~\mathrm {is~positive~semidefinite}} \bigl\{ \vert\mathbf{A}- \widetilde{ \bolds{\Sigma}} \vert_\infty\bigr\}. \end{equation} Note\vspace*{1pt} that $\vert \widehat{\bSigma}-\widetilde{\bolds{\Sigma}}\vert _\infty\leq \vert \bolds{\Sigma} -\widetilde{\bolds{\Sigma}}\vert _\infty$ by definition. Therefore, $\vert \widehat{\bSigma} -\bolds{\Sigma}\vert _\infty\leq\vert \widehat{\bSigma}- \widetilde{\bolds{\Sigma} }\vert _\infty+ \vert \widetilde{\bolds{\Sigma}}-\bolds{\Sigma}\vert _\infty\leq2\vert \widetilde{\bolds{\Sigma} }-\bolds{\Sigma}\vert _\infty= O_p(\sqrt{\log(d)/n})$. To compute $\widehat {\bolds{\Sigma}}$, we note that the optimization problem in (\ref{project}) can be formulated as the dual of a graphical lasso problem corresponding to the smallest possible tuning parameter that still guarantees a feasible solution [\citet{liu2012high}]. \citet{zhao2013psd} provide more algorithmic details. \subsection{Estimation of kurtosis parameter} \label{subsecgamma} When the kurtosis parameter $\gamma$ is unknown, we can estimate it from data. Recall that $\gamma= \frac{1}{d(d+2)}\mathbb{E}(\xi^4) -1$. Using decomposition (\ref{Zdecomposition}) and the properties of $\mathbf{U}$, we have \[ \mathbb{E} \bigl(\xi^4 \bigr) = \mathbb{E} \bigl\{ \bigl[(\mathbf{X}- \bolds{\mu})^{\top}\bolds{\Sigma}^{-1}(\mathbf{X}-\bolds{\mu}) \bigr]^2 \bigr \}. 
\] Motivated by this equality, we propose the estimator \[ \widehat{\gamma} = \max\Biggl\{ \frac{1}{d(d+2)} \frac{1}{n}\sum _{i=1}^n \bigl[(\mathbf{x}_i- \widetilde{\bolds{\mu}})^{\top}\widetilde{\bolds{\Omega}}(\mathbf{x}_i- \widetilde{\bolds{\mu}}) \bigr]^2 - 1, 0 \Biggr\}, \] where $\widetilde{\bolds{\mu}}$ and $\widetilde{\bolds{\Omega}}$ are estimators of $\bolds{\mu}$ and $\bolds{\Sigma}^{-1}$, respectively. \citet{kurtosis} considered a similar estimator in low-dimensional settings, where they used the sample mean and sample covariance matrix. In high dimensions, we need robust estimates to guarantee uniform convergence. In particular, we take $\widetilde{\bolds{\mu}}=\widehat{\bolds{\mu} }_C$ and $\widetilde{\bolds{\Omega}} = \widehat{\bolds{\Omega}}_{\mathrm {clime}}$, where $\widehat{\bolds{\Omega}}_{\mathrm{clime}}$ is the CLIME estimator proposed in \citet{Clime}. We can also take the covariance estimator in Section~\ref{subseccov}, but we will then need to establish its sampling property as a precision matrix estimator. We decide to use the CLIME estimator since such a property has already been established by \citet{Clime}. Write $\bolds{\Sigma}^{-1}=(\Omega _{jk})_{d\times d}$. By simple algebra, \begin{eqnarray*} \vert\widehat{\gamma}-\gamma\vert&\leq&\max_{1\leq j,k\leq d} \vert \widetilde{\mu}_j\widetilde{\Omega}_{jk}\widetilde{ \mu}_k - \mu_j\Omega_{jk}\mu_k \vert \\ &\leq& C\max\bigl\{ \vert\widetilde{\bolds{\mu}}-\bolds{\mu}\vert _{\infty}, \bigl\vert\widetilde{\bolds{\Omega}}-\bolds{\Sigma}^{-1} \bigr\vert _{\infty} \bigr\}. \end{eqnarray*} In\vspace*{1pt} Section~\ref{subsecmean}, we have seen that $\vert \widehat{\bolds{\mu} }_C - \bolds{\mu}\vert _\infty=O_p(\sqrt{\log(d)/n})$. Moreover, \citet {Clime} showed that $\vert \widetilde{\bolds{\Omega}}- \bolds{\Sigma}^{-1}\vert _\infty= \Vert\bolds{\Sigma}^{-1}\Vert_1 \cdot O_p(\sqrt{\log(d)/n})$ under mild conditions, where $\Vert\cdot\Vert_1$ is the matrix $L_1$-norm.
Therefore, provided that $\Vert\bolds{\Sigma}^{-1}\Vert_1\leq C$, we immediately have $\vert \widehat{\gamma}-\gamma\vert =O_p(\sqrt {\log(d)/n})$. \section{Theoretical properties} \label{secanalysis} In this section, we establish an oracle inequality for the Rayleigh quotient of the QUADRO estimates $(\widehat{\bOmega},\widehat{\bdelta})$. We assume that $\pi$ and $\gamma$ are known. For notational simplicity, we set $\lambda_1=\lambda_2=\lambda$. The results can be easily generalized to the case $\lambda_1\neq\lambda_2$. Moreover,\vspace*{1pt} we drop the symmetry constraint $\bolds{\Omega}=\bolds{\Omega}^{\top}$ in all optimization problems involved. This simplifies the expression of the regularity conditions. The analysis with the symmetry constraint is a trivial extension of the current analysis. Recall the definitions of $M$, $L_1$ and $L_2$ in (\ref{ML1L2}), $\kappa=(1-\pi)/\pi$ and $L=L_1+\kappa L_2$. The Rayleigh quotient of $(\bolds{\Omega}, \bolds{\delta})$ is then equal to (up to a multiplicative constant) \[ R(\bolds{\Omega}, \bolds{\delta}) = \frac{[M(\bolds{\Omega},\bolds{\delta})]^2}{L(\bolds{\Omega}, \bolds{\delta})}. \] The QUADRO estimates are \[ (\widehat{\bOmega},\widehat{\bdelta}) = \mathop{\operatorname{argmin}}_{(\bolds{\Omega},\bolds{\delta})\dvtx \widehat {M}(\bolds{\Omega},\bolds{\delta})=1} \bigl\{ \widehat{L}(\bolds{\Omega},\bolds{\delta} )+\lambda\vert\bolds{\Omega}\vert_1 + \lambda\vert\bolds{\delta}\vert _1 \bigr\}. \] We shall compare the Rayleigh quotient of $(\widehat{\bOmega},\widehat{\bdelta})$ with the Rayleigh quotients of a class of ``oracle solutions.'' This class includes the one that maximizes the true Rayleigh quotient, which we denote by $(\bolds{\Omega}^*_0, \bolds{\delta}^*_0)$.
Here we adopt a class of solutions as the ``oracle'' instead of only $(\bolds{\Omega}^*_0,\bolds{\delta} ^*_0)$, because we do not want the results to be tied to the sparsity assumption on $(\bolds{\Omega}^*_0, \bolds{\delta}^*_0)$, but rather to a weaker assumption: at least one solution in this class is sparse. Our theoretical development is technically nontrivial. Conventional oracle inequalities are derived in a setting of minimizing a data-dependent loss without constraint, where the risk function is the expectation of the loss. Here we minimize a data-dependent loss with a data-dependent equality constraint, and the risk function---the Rayleigh quotient---is not equal to the expectation of the loss. A~similar setting was considered in \citet{ROAD}, where a data-dependent intermediate solution was introduced to deal with such an equality constraint. However, the rate they obtained depends on this intermediate solution, which is very hard to quantify. In contrast, the rate in our results depends only on the oracle solution. To remove the intermediate solution from the rate, we need to carefully quantify its difference from both the QUADRO solution and the oracle solution. The technique is new, and potentially useful for other problems. \subsection{Oracle solutions and the restricted eigenvalue condition} For any $\lambda_0\geq0$, we define the \emph{oracle solution associated with $\lambda_0$} to be \begin{equation} \label{oracle} \bigl(\bolds{\Omega}^*_{\lambda_0}, \bolds{\delta}^*_{\lambda_0} \bigr ) = \mathop{\operatorname{argmin}}_{(\bolds{\Omega} ,\bolds{\delta})\dvtx M(\bolds{\Omega},\bolds{\delta})=1} \bigl\{ L(\bolds{\Omega},\bolds{\delta})+\lambda _0\vert \bolds{\Omega}\vert_1 + \lambda_0\vert\bolds{\delta}\vert _1 \bigr\}. \end{equation} We shall compare the Rayleigh quotient of $(\widehat{\bOmega}, \widehat{\bdelta})$ to that of $(\bolds{\Omega}^*_{\lambda_0}, \bolds{\delta}^*_{\lambda_0})$, for an arbitrary $\lambda_0$.
In particular, when $\lambda_0=0$, the associated oracle solution (which may not be unique) becomes \[ \bigl(\bolds{\Omega}^*_0,\bolds{\delta}^*_0 \bigr) = \mathop{\operatorname{argmin}}_{(\bolds{\Omega},\bolds{\delta})\dvtx M(\bolds{\Omega} ,\bolds{\delta})=1} \bigl\{ L(\bolds{\Omega},\bolds{\delta}) \bigr\}. \] It maximizes the true Rayleigh quotient. Next, we introduce a restricted eigenvalue (RE) condition jointly on $\bolds{\Sigma}_1$, $\bolds{\Sigma}_2$, $\bolds{\mu}_1$ and $\bolds{\mu}_2$. For any matrices $\mathbf{A}$ and $\mathbf{B}$, let $\operatorname{vec}(\mathbf{A})$ be the vectorization of $\mathbf{A}$ by stacking all the elements of $\mathbf{A}$ column by column, and $\mathbf{A}\otimes\mathbf{B}$ be the Kronecker product of $\mathbf{A}$ and $\mathbf{B}$. We define the matrices \[ \mathbf{Q}_k = \left[ \matrix{ \bigl(2(1+\gamma) \bolds{\Sigma}_k+ 4\bolds{\mu}_k\bolds{\mu}_k^{\top} \bigr)\otimes\bolds{\Sigma}_k + \gamma\operatorname{vec}(\bolds{\Sigma}_k)\operatorname{vec}( \bolds{\Sigma}_k)^{\top} & - 4\bolds{\mu}_k\otimes \bolds{\Sigma}_k \vspace*{3pt}\cr - 4\bolds{\mu}_k ^{\top}\otimes \bolds{\Sigma}_k & 4\bolds{\Sigma}_k} \right], \] for $k=1,2$. We note that there are $(d^2+d)$ coefficients to determine when maximizing $R(\bolds{\Omega}, \bolds{\delta})$: $d^2$ elements of $\bolds{\Omega}$ and $d$ elements of $\bolds{\delta}$. We can stack all these coefficients into a long vector $\mathbf{x}=\mathbf{x}(\bolds{\Omega},\bolds{\delta})$ in $\mathbb{R}^{d^2+d}$ defined as \begin{equation} \label{xOmega,delta} \mathbf{x}(\bolds{\Omega}, \bolds{\delta}) \equiv \bigl[ \matrix{\operatorname{vec}(\bolds{\Omega})^{\top}, \bolds{\delta}^{\top}} \bigr]^{\top}. \end{equation} It can be shown that $L_k(\bolds{\Omega},\bolds{\delta})=\mathbf{x}^{\top}\mathbf{Q}_k\mathbf{x}$, for $k=1,2$; see Lemma \ref{lemvectorize}. Therefore, $L(\bolds{\Omega} ,\bolds{\delta})=\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}$, where $\mathbf{Q}=\mathbf{Q}_1+\kappa\mathbf{Q}_2$.
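The block formula for $\mathbf{Q}_k$ can be checked numerically. The sketch below (illustrative only; the helper \texttt{build\_Q} is not from the paper) assembles $\mathbf{Q}_k$ with Kronecker products and verifies that $\mathbf{x}^{\top}\mathbf{Q}_k\mathbf{x}$ matches a direct expansion of the same quadratic form, obtained from the identity $(\mathbf{B}\otimes\mathbf{A})\operatorname{vec}(\mathbf{X})=\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}^{\top})$ for a symmetric $\bolds{\Omega}$.

```python
import numpy as np

def build_Q(Sigma, mu, gamma):
    """Assemble the (d^2+d) x (d^2+d) matrix Q_k from Sigma_k, mu_k and gamma."""
    mu = mu.reshape(-1, 1)
    vecS = Sigma.ravel(order="F")  # column-stacking vectorization vec(Sigma)
    top_left = (np.kron(2 * (1 + gamma) * Sigma + 4 * mu @ mu.T, Sigma)
                + gamma * np.outer(vecS, vecS))
    top_right = -4 * np.kron(mu, Sigma)           # shape (d^2, d)
    return np.block([[top_left, top_right],
                     [top_right.T, 4 * Sigma]])

rng = np.random.default_rng(1)
d, gamma = 4, 0.5
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)                   # a well-conditioned covariance
mu = rng.standard_normal(d)
Omega = rng.standard_normal((d, d))
Omega = (Omega + Omega.T) / 2                     # symmetric Omega
delta = rng.standard_normal(d)

Q = build_Q(Sigma, mu, gamma)
x = np.concatenate([Omega.ravel(order="F"), delta])  # x(Omega, delta)
quad = x @ Q @ x

# Direct expansion of the same quadratic form via vec identities:
direct = (2 * (1 + gamma) * np.trace(Omega @ Sigma @ Omega @ Sigma)
          + 4 * mu @ Omega @ Sigma @ Omega @ mu
          + gamma * np.trace(Omega @ Sigma) ** 2
          - 8 * delta @ Sigma @ Omega @ mu
          + 4 * delta @ Sigma @ delta)
```

The two scalars agree up to floating-point error, which is a pure linear-algebra check that the Kronecker blocks are assembled consistently with column-stacking vectorization.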
Our RE condition is then imposed on the $(d^2+d)\times(d^2+d)$ matrix $\mathbf{Q}$, and hence implicitly on $(\bolds{\Sigma}_1, \bolds{\Sigma}_2, \bolds{\mu}_1, \bolds{\mu}_2)$. We now formally introduce the RE condition. For a set $S\subset\{1,2, \ldots, d^2+d\}$ and a nonnegative value $\bar{c}$, we define the \textit{restricted eigenvalue} in the following way: \[ \Theta(S; \bar{c}) = \min_{\mathbf{v}\dvtx \vert \mathbf{v}_{S^c}\vert _1\leq\bar{c} \vert \mathbf{v} _S\vert _1}\frac{\mathbf{v}^{\top}\mathbf{Q}\mathbf{v}}{\vert \mathbf{v}_S\vert ^2}. \] Generally speaking, $\Theta(S; \bar{c})$ depends on $(\bolds{\Sigma} _1,\bolds{\Sigma}_2, \bolds{\mu}_1, \bolds{\mu}_2)$ in a complicated way. For $\bar {c}=0$, the following proposition builds a connection between $\Theta (S;0)$ and $(\bolds{\Sigma}_1, \bolds{\Sigma}_2, \bolds{\mu}_1,\bolds{\mu}_2)$. For each $S\subset\{1,2, \ldots, d^2+d\}$, there exist sets $U\subset\{ 1,\ldots,d\}\times\{1,\ldots,d\}$ and $V\subset\{1,\ldots,d\}$ such that the support of $\mathbf{x}(\bolds{\Omega},\bolds{\delta})$ is $S$ if and only if the support of $\bolds{\Omega}$ is $U$ and the support of $\bolds{\delta}$ is $V$. Let \[ U'= \bigcup_{(i,j)\in U}\{i,j\}. \] Then $U\subset U'\times U'$. The following result is proved in \citet {QUADROsupp}. \begin{prop} \label{propQeigen} For any set $S\subset\{1,\ldots,d^2+d\}$, suppose $U'$ and $V$ are defined as above. Let $\widetilde{\bolds{\Sigma}}_k$ be the submatrix of $\bolds{\Sigma}_k$ by restricting rows and columns to $U'\cup V$, $\widetilde{\bolds{\mu}}_k$ be the subvector of $\bolds{\mu}_k$ by constraining elements to $U'\cup V$, for $k=1,2$. 
If there exist constants $v_1, v_2>0$ such that $\lambda_{\min} ( \widetilde{\bolds{\Sigma}}_{k} - v_1 \widetilde{\bolds{\mu}}_k\widetilde{\bolds{\mu} }_k^{\top} )\geq\frac{1}{2}\lambda_{\min}(\widetilde{\bolds{\Sigma} }_k)\geq\frac{v_2}{2}$ for $k=1,2$, then \[ \Theta(S; 0) \geq(1+\gamma) (1+\kappa)v_2 \min\biggl\{ v_2, \frac {4v_1}{2+v_1(1+\gamma)} \biggr\} >0. \] \end{prop} \subsection{Oracle inequality on Rayleigh's quotient} \label{subsecoracle} Suppose $\max\{\vert \bolds{\Sigma}_k\vert _\infty, \break \vert \bolds{\mu} _k\vert _\infty, k=1,2\} \leq 1$ and $\vert \widehat{\bSigma}_k-\bolds{\Sigma}_k\vert _\infty\leq\vert \bolds{\Sigma}_k\vert _\infty$, $\vert \widehat{\bmu}_k-\bolds{\mu}_k\vert _\infty\leq\vert \bolds{\mu}_k\vert _\infty$ for $k=1,2$, without loss of generality. For any $\lambda_0\geq0$, let $(\bolds{\Omega} ^*_{\lambda_0}, \bolds{\delta}^*_{\lambda_0})$ be the\vspace*{1pt} associated oracle solution and $S$ be the\vspace*{1pt} support of $\mathbf{x}^*_{\lambda_0}=[\operatorname{vec}(\bolds{\Omega} ^*_{\lambda_0})^{\top}, (\bolds{\delta}^*_{\lambda_0})^{\top}]^{\top}$. Let $\Delta_n=\max\{\vert \widehat{\bSigma}_k-\bolds{\Sigma}_k\vert _\infty, \vert \widehat{\bmu}_k-\bolds{\mu} _k\vert _\infty, k = 1,2\}$. We have the following result for any given estimators, the proof of which we postpone to Section~\ref{secproof}. \begin{teo} \label{teoRbound} Given $\lambda_0\geq0$, let $S$ be the support of $\mathbf{x}^*_{\lambda _0}$, $s_0=\vert S\vert $ and $k_0=\max\{s_0, R(\bolds{\Omega}^*_{\lambda _0}, \bolds{\delta} ^*_{\lambda_0})\}$. Suppose that $\Theta(S; 0)\geq c_0$, $\Theta(S; 3)\geq a_0$ and $R(\bolds{\Omega}^*_{\lambda_0}, \bolds{\delta}^*_{\lambda_0})\geq u_0$, for some positive constants $a_0$, $c_0$ and $u_0$. We assume $4s_0\Delta _n^2\leq a_0c_0$ and $\max\{s_0\Delta_n, s_0^{1/2}k_0^{1/2}\lambda _0\}<1$ without loss of generality.
Then there exist positive constants $C=C(a_0,c_0, u_0)$ and $A=A(a_0, c_0, u_0)$ such that for any $\eta>1$, \[ \frac{R(\widehat{\bOmega},\widehat{\bdelta})}{R(\bolds{\Omega}^*_{\lambda_0},\bolds{\delta} ^*_{\lambda_0})} \geq1 - A \eta^2 \max\bigl\{s_0 \Delta_n, s_0^{1/2}k_0^{1/2} \lambda_0 \bigr\}, \] by taking $\lambda= C\eta\max\{ s_0^{1/2}\Delta_n, k_0^{1/2}\lambda _0\}[R(\bolds{\Omega}^*_{\lambda_0},\bolds{\delta}^*_{\lambda_0})]^{-1/2}$. \end{teo} In Theorem \ref{teoRbound}, the rate of convergence has two parts. The term $s_0\Delta_n$ reflects how the stochastic errors of estimating $(\bolds{\Sigma}_1, \bolds{\Sigma}_2, \bolds{\mu}_1, \bolds{\mu}_2)$ affect the Rayleigh quotient. The term $s_0^{1/2}k_0^{1/2}\lambda_0$ is an extra term that depends on the oracle solution we aim to use for comparison. In particular, if we compare $R(\widehat{\bolds{\Omega}},\widehat{\bolds{\delta} })$ with $R_{\max} \equiv R(\bolds{\Omega}^*_0, \bolds{\delta}^*_0)$, the population maximum Rayleigh quotient with $\lambda_0 = 0$, this extra term disappears. If we further use the estimators in Section~\ref {secestimation}, $\Delta_n=O_p(\sqrt{\log(d)/n})$. We summarize the result as follows. \begin{coro} \label{coroquadrorate1} Suppose that the condition of Theorem~\ref{teoRbound} holds with \mbox{$\lambda_0 = 0$}. Then for some positive constants $A$ and $C$, when $\lambda> Cs_0^{1/2}R_{\max}^{-1/2}\Delta_n$, we have \[ R(\widehat{\bOmega},\widehat{\bdelta})\geq(1 - A s_0\Delta_n )R_{\max}. \] Furthermore, if the mean vectors and covariance matrices are estimated by using the robust methods in Section~\ref{secestimation}, then when $\lambda> C s_0^{1/2}R_{\max}^{-1/2}\sqrt{\log(d)/n}$, \[ R(\widehat{\bOmega},\widehat{\bdelta})\geq\bigl( 1 - A s_0 \sqrt{\log(d)/n} \bigr)R_{\max}, \] with probability at least $1-(n\vee d)^{-1}$. 
\end{coro} From Corollary \ref{coroquadrorate1}, when $(\bolds{\Omega}^*_0, \bolds{\delta} ^*_0)$ is truly sparse, $R(\widehat{\bOmega}, \widehat{\bdelta})$ is close to the population maximum Rayleigh quotient $R_{\max}$. However, we note that Theorem \ref{teoRbound} considers more general situations, including cases where $(\bolds{\Omega}^*_0, \bolds{\delta}^*_0)$ is not sparse. As long as there exists an ``approximately optimal'' and sparse solution, that is, for a small $\lambda_0$ the associated oracle solution $(\bolds{\Omega}^*_{\lambda_0}, \bolds{\delta}^*_{\lambda_0})$ is sparse, Theorem \ref{teoRbound} guarantees that $R(\widehat{\bOmega}, \widehat{\bdelta})$ is close to $R(\bolds{\Omega}^*_{\lambda_0}, \bolds{\delta}^*_{\lambda_0})$ and hence close to $R_{\max}$. \begin{rem*} Our results are analogous to oracle inequalities for prediction error in linear regressions; accordingly, the condition on $\Theta(S;\bar{c})$ is similar to the RE condition in linear regressions [\citet{bickel2009simultaneous}]. To recover the support of $(\bolds{\Omega}^*_0,\bolds{\delta}^*_0)$, conditions similar to the ``irrepresentable condition'' for Lasso [\citet{zhao2006model}] are needed. \end{rem*} \section{Application to classification} \label{secclassification} One important application of QUADRO is high-dimensional classification for elliptically-distributed data. Suppose $(\widehat{\bOmega}, \widehat{\bdelta})$ are the QUADRO estimates. This yields the classification rule \[ \widehat{h}(\mathbf{x}) = I \bigl\{ \mathbf{x}^{\top}\widehat{\bOmega}\mathbf{x}-2\widehat{\bdelta} ^{\top}\mathbf{x}< c \bigr\}. \] In this section, we first show that for normally distributed data, the Rayleigh quotient is a proxy of the classification error, and then derive an analytic choice of $c$. Compared with many other high-dimensional classification methods, QUADRO produces quadratic boundaries and can handle both non-Gaussian distributions and nonequal covariance matrices.
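Concretely, applying a fitted rule amounts to computing the quadratic score $\mathbf{x}^{\top}\widehat{\bOmega}\mathbf{x}-2\widehat{\bdelta}^{\top}\mathbf{x}$ and thresholding it at $c$. A minimal sketch (with toy values of $\bolds{\Omega}$, $\bolds{\delta}$ and $c$, not the QUADRO estimates):

```python
import numpy as np

def quad_rule(X, Omega, delta, c):
    """Apply h(x) = I{x' Omega x - 2 delta' x < c} row-wise to X (n x d)."""
    score = np.einsum("ij,jk,ik->i", X, Omega, X) - 2 * X @ delta
    return (score < c).astype(int)

# Toy illustration: Omega = I, delta = 0, c = 1 labels points inside the unit ball
X = np.array([[0.5, 0.0],
              [2.0, 0.0]])
labels = quad_rule(X, np.eye(2), np.zeros(2), 1.0)  # first point inside, second outside
```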
\subsection{Approximation of classification errors} \label{subsecerrapprox} Given $(\bolds{\Omega}, \bolds{\delta})$ and a threshold $c$, a~general quadratic rule $h(\mathbf{x})=h(\mathbf{x}; \bolds{\Omega},\bolds{\delta},c)$ is defined as \begin{equation} \label{quadrule} h(\mathbf{x}; \bolds{\Omega},\bolds{\delta},c)= I \bigl\{ \mathbf{x}^{\top}\bolds{\Omega} \mathbf{x}- 2\mathbf{x}^{\top}\bolds{\delta}< c \bigr\}. \end{equation} We reparametrize $c$ as \begin{equation} \label{ct} c= t M_1(\bolds{\Omega},\bolds{\delta})+(1-t)M_2( \bolds{\Omega},\bolds{\delta}). \end{equation} Here $M_k(\bolds{\Omega}, \bolds{\delta}) = \bolds{\mu}^{\top}_k\bolds{\Omega}\bolds{\mu}_k - 2\bolds{\mu} _k^{\top} \bolds{\delta}+ \operatorname{tr}(\bolds{\Omega}\bolds{\Sigma}_k)$ is the mean of $Q(\mathbf{X})$ in class $k$, for $k=1,2$. After the reparametrization, $t$ is \emph{scale-free}. As we will see below, in most cases, given $\bolds{\Omega}$ and $\bolds{\delta}$, the optimal $t$ that minimizes the classification error takes values in $(0,1)$. From now on, we write $h(\mathbf{x}; \bolds{\Omega},\bolds{\delta}, c)=h(\mathbf{x}; \bolds{\Omega}, \bolds{\delta}, t)$. Let $\operatorname{Err}(\bolds{\Omega}, \bolds{\delta}, t)$ be the classification error of $h(\cdot; \bolds{\Omega},\bolds{\delta},t)$. Due to technical difficulties, we only give results for Gaussian distributions. Suppose $\mathbf{X}|(Y=0)\sim\mathcal{N}(\bolds{\mu}_1,\bolds{\Sigma}_1)$ and $\mathbf{X}|(Y=1)\sim \mathcal{N}(\bolds{\mu}_2,\bolds{\Sigma}_2)$. For $k=1,2$, we write \[ \bolds{\Sigma}_k^{1/2}\bolds{\Omega}\bolds{\Sigma}_k^{1/2}= \mathbf{K}_k\mathbf{S}_k\mathbf{K}_k^{\top}, \] where $\mathbf{S}_k$ is a diagonal matrix containing the nonzero eigenvalues, and the columns of $\mathbf{K}_k$ are the corresponding eigenvectors. Let $\bolds{\beta} _k=\mathbf{K}_k^{\top}\bolds{\Sigma}_k(\bolds{\Omega}\bolds{\mu}_k-\bolds{\delta})$.
When $\max\{\vert \mathbf{S} _k\vert _\infty, \vert \bolds{\beta}_k\vert _\infty, k=1,2\}$ is bounded, the following proposition shows that an approximation of $\operatorname{Err}(\bolds{\Omega} , \bolds{\delta}, t)$ is \[ \overline{\operatorname{Err}}(\bolds{\Omega}, \bolds{\delta}, t) \equiv\pi\bar{\Phi} \biggl( \frac{(1-t)M(\bolds{\Omega},\bolds{\delta})}{\sqrt{L_1(\bolds{\Omega},\bolds{\delta})}} \biggr) + (1-\pi)\bar{\Phi} \biggl( \frac{tM(\bolds{\Omega},\bolds{\delta})}{\sqrt {L_2(\bolds{\Omega},\bolds{\delta})}} \biggr), \] where $M$, $L_1$ and $L_2$ are defined in (\ref{ML1L2}), $\Phi$ is the distribution function of a standard normal variable and $\bar{\Phi }=1-\Phi$. Its proof is contained in Section~\ref{secproof}. \begin{prop} \label{properrgap} Suppose that $\max\{\vert \mathbf{S}_k\vert _\infty, \vert \bolds{\beta} _k\vert _\infty, k=1,2\} \leq C_0$ for some constant $C_0>0$, and let $q$ be the rank of $\bolds{\Omega}$. Then as $d$ goes to infinity, \[ \bigl\vert\operatorname{Err}(\bolds{\Omega}, \bolds{\delta}, t) - \overline{\operatorname{Err}}(\bolds{\Omega}, \bolds{\delta}, t) \bigr\vert= \frac{O(q) + o(d) }{[\min\{L_1(\bolds{\Omega},\bolds{\delta} ), L_2(\bolds{\Omega},\bolds{\delta})\}]^{3/2}}. \] \end{prop} In particular, if we consider all $(\bolds{\Omega},\bolds{\delta})$ such that the variances of $Q(\mathbf{X};\bolds{\Omega},\bolds{\delta})$ under both classes are lower bounded by $c_0d^{\theta}$ for some constants $\theta> 2/3$ and $c_0>0$, then we have $\vert \operatorname{Err}-\overline{\operatorname{Err}}\vert =o(1)$. \begin{figure} \caption{Function $H(x)=\bar{\Phi}(1/\sqrt{x})$.} \label{figH} \end{figure} We now take a closer look at $\overline{\operatorname{Err}}$. Let $H(x)=\bar{\Phi }(1/\sqrt{x})$, which is monotone increasing on $(0, \infty)$.
Writing for short $M=M_1-M_2$, $M_k=M_k(\bolds{\Omega},\bolds{\delta})$ and $L_k=L_k(\bolds{\Omega},\bolds{\delta})$ for $k=1,2$, we have \[ \overline{\operatorname{Err}}(\bolds{\Omega}, \bolds{\delta}, t) = \pi H \biggl( \frac {L_1}{(1-t)^2 M^2} \biggr) + (1-\pi) H \biggl( \frac{L_2}{t^2 M^2} \biggr). \] Figure~\ref{figH} shows that $H(\cdot)$ is nearly linear on an important range. This suggests the following approximation: \begin{equation} \label{Happrox} \qquad\overline{\operatorname{Err}}(\bolds{\Omega}, \bolds{\delta}, t) \approx H \biggl( \pi \frac {L_1}{(1-t)^2 M^2} + (1-\pi) \frac{L_2}{t^2 M^2} \biggr) = H \biggl( \frac{\pi}{(1-t)^2}\frac{1}{R^{(t)}} \biggr), \end{equation} where $R^{(t)}=R^{(t)}(\bolds{\Omega},\bolds{\delta})$ is the $R(\bolds{\Omega},\bolds{\delta} )$ in (\ref{defR}) corresponding to the $\kappa$ value \[ \kappa(t)\equiv\frac{1-\pi}{\pi}\frac{(1-t)^2}{t^2}. \] The approximation in (\ref{Happrox}) is quantified in the following proposition, which is proved in \citet{QUADROsupp}. \begin{prop} \label{propH-Taylor} Given $(\bolds{\Omega}, \bolds{\delta}, t)$, we write for short $R_k= R_k(\bolds{\Omega} ,\bolds{\delta})=[M(\bolds{\Omega}, \bolds{\delta})]^2/L_k(\bolds{\Omega}, \bolds{\delta})$, for $k=1,2$, and define \begin{eqnarray*} V_1 & =& V_1(\bolds{\Omega}, \bolds{\delta}, t)= \min\biggl \{(1-t)^2 R_1, \frac {1}{(1-t)^2 R_1} \biggr\}, \\ V_2 & =& V_2(\bolds{\Omega}, \bolds{\delta}, t)= \min\biggl \{t^2 R_2, \frac {1}{t^2 R_2} \biggr\}, \\ V &=& V( \bolds{\Omega}, \bolds{\delta}, t) = \max\{ V_1/V_2, V_2/V_1 \}. \end{eqnarray*} Then there exists a constant $C>0$ such that \[ \biggl\vert\overline{\operatorname{Err}}(\bolds{\Omega}, \bolds{\delta}, t)- H \biggl(\frac{\pi }{(1-t)^2R^{(t)}(\bolds{\Omega},\bolds{\delta})} \biggr) \biggr\vert\leq C \bigl[\max\{ V_1, V_2\} \bigr]^{1/2} \cdot\vert V-1\vert^2. 
\] In particular, when $t=1/2$, \[ \biggl\vert\overline{\operatorname{Err}}(\bolds{\Omega}, \bolds{\delta}, t)- H \biggl(\frac{\pi }{(1-t)^2R^{(t)}(\bolds{\Omega},\bolds{\delta})} \biggr) \biggr\vert\leq C {R_0^{1/2}}\cdot\biggl( \frac{\Delta R}{R_0} \biggr)^{2}, \] where $R_0=\max\{\min\{R_1,1/{R_1}\},\min\{R_2,1/{R_2}\}\}$ and $\Delta R=\vert R_1-R_2\vert $. \end{prop} Note that $L_1$ and $L_2$ are the variances of $Q(\mathbf{X})=\mathbf{X}^{\top }\bolds{\Omega}\mathbf{X}- 2\mathbf{X}^{\top}\bolds{\delta}$ for the two classes, respectively. In cases where $\vert L_1-L_2\vert \ll\min\{L_1, L_2\}$, $\Delta R\ll R_0$. Also, $R_0$ is always bounded by 1, and it tends to $0$ in many situations, for example, when $R_1, R_2\to\infty$, or $R_1, R_2\to0$, or $R_1\to 0, R_2\to\infty$. Proposition \ref{propH-Taylor} then implies that the approximation in (\ref{Happrox}) is accurate when $t=1/2$. Combining Propositions \ref{properrgap} and \ref{propH-Taylor}, we see that the classification error of a general quadratic rule $h(\cdot; \bolds{\Omega}, \bolds{\delta}, t)$ is approximately a monotone decreasing transform of the Rayleigh quotient $R^{(t)}(\bolds{\Omega}, \bolds{\delta})$, corresponding to $\kappa=\kappa(t)$. In particular, when $t=1/2$ [i.e., $c=(M_1+M_2)/2$], $R^{(1/2)}(\bolds{\Omega},\bolds{\delta})$ is exactly the one used in QUADRO. Consequently, if we fix the threshold to be $c=(M_1+M_2)/2$, then the Rayleigh quotient (up to a monotone transform) is a good proxy for the classification error. This explains why Rayleigh-quotient-based procedures can be used for classification. \begin{rem*} Even in the region where $H(\cdot)$ is far from being linear, so that the upper bound in Proposition \ref{propH-Taylor} is not $o(1)$, we can still find a monotone transform of the Rayleigh quotient as an \textit{upper bound} of the classification error. To see this, note that for $x\in[1/3,\infty)$, $H(x)$ is a concave function.
Therefore, the approximation in (\ref{Happrox}) becomes an inequality, that is, $\overline{\operatorname{Err}}(\bolds{\Omega},\bolds{\delta},t)\leq H ( \tfrac {\pi}{(1-t)^2 R^{(t)}} )$. For $x\in(0, 1/3)$, $H(x)\leq 0.1248x$. It follows that $\overline{\operatorname{Err}}(\bolds{\Omega},\bolds{\delta},t)\leq 0.1248\cdot\tfrac{\pi}{(1-t)^2 R^{(t)}}$. \end{rem*} \begin{rem*} In the current setting, the Bayes classifier is a quadratic rule $h(\mathbf{x}; \bolds{\Omega}_B,\bolds{\delta}_B, c_B)$ with $\bolds{\Omega}_B = \bolds{\Sigma}_1^{-1}-\bolds{\Sigma}_2^{-1}$, $\bolds{\delta}_B = \bolds{\Sigma}_1^{-1}\bolds{\mu}_1 - \bolds{\Sigma}_2^{-1}\bolds{\mu}_2$ and $c_B = \bolds{\mu}_2^{\top}\bolds{\Sigma}_2^{-1}\bolds{\mu} _2-\bolds{\mu}_1^{\top}\bolds{\Sigma}_1^{-1}\bolds{\mu}_1$. Let $(\bolds{\Omega}^*_0, \bolds{\delta} ^*_0)$ be the population solution of QUADRO when $\lambda=0$. We note that $(\bolds{\Omega}_B, \bolds{\delta}_B)$ and $(\bolds{\Omega}^*_0, \bolds{\delta}^*_0)$ are different: the former minimizes $\inf_t\operatorname{Err}(\bolds{\Omega},\bolds{\delta},t)$, while the latter minimizes $\overline{\operatorname{Err}}(\bolds{\Omega},\bolds{\delta},1/2)$. \end{rem*} \subsection{QUADRO as a classification method} Results in Section~\ref{subsecerrapprox} suggest an analytic method to choose the threshold $c$, or equivalently $t$, for given $(\bolds{\Omega} ,\bolds{\delta})$. Let \begin{equation} \label{hatt} \widehat{t} \in\mathop{\operatorname{argmin}}_t \biggl\{ \pi\bar{ \Phi} \biggl(\frac {(1-t)\widehat{M}(\bolds{\Omega},\bolds{\delta})}{\sqrt{\widehat{L}_1(\bolds{\Omega} ,\bolds{\delta})}} \biggr) + (1-\pi)\bar{\Phi} \biggl( \frac{t\widehat{M}(\bolds{\Omega},\bolds{\delta} )}{\sqrt{\widehat{L}_2(\bolds{\Omega},\bolds{\delta})}} \biggr) \biggr\}, \end{equation} and set, according to (\ref{ct}), \begin{equation} \label{hatc} \widehat{c}= \widehat{t}\widehat{M}_1(\bolds{\Omega}, \bolds{\delta}) + (1-\widehat{t})\widehat{M}_2(\bolds{\Omega},\bolds{\delta}).
\end{equation} Here (\ref{hatt}) is a one-dimensional optimization problem and can be solved easily. The resulting QUADRO classification rule is \[ \widehat{h}^{\mathrm{Quad}}(\mathbf{x}) = I \bigl\{\mathbf{x}^{\top}\widehat{\bOmega}\mathbf{x}- 2 \mathbf{x}^{\top }\widehat{\bdelta}- \widehat{c}<0 \bigr\}. \] As a by-product, the method for choosing $c$, described in (\ref{hatt}) and (\ref{hatc}), can be used in other classification procedures on Gaussian data, such as logistic regression, quadratic\vspace*{1pt} discriminant analysis (QDA) and kernel support vector machines, once $(\widehat{\bOmega} ,\widehat{\bdelta})$ are given. It provides a fast and purely data-driven way to decide the threshold value in quadratic classification rules. In our numerical experiments, it performs well. \section{Numerical studies} \label{secnumerical} In this section, we investigate the performance of QUADRO in several simulation examples and a real data example. The simulation studies contain both Gaussian models and general elliptical models. We compare QUADRO with several \textit{classification-oriented procedures}. Performances are evaluated in terms of classification errors. \subsection{Simulations under Gaussian models} Let $n_1=n_2=50$ and $d=40$. For each given $\bolds{\mu}_1$, $\bolds{\mu}_2$, $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$, we generate $100$ training datasets independently, each with $n_1$ data from $\mathcal{N}(\bolds{\mu}_1,\bolds{\Sigma}_1)$ and $n_2$ data from $\mathcal {N}(\bolds{\mu}_2,\bolds{\Sigma}_2)$. In QUADRO, we input the sample means and sample covariance matrices. We set $\lambda_2=r\lambda_1$ and work with $\lambda_1$ and $r$ from now on. The two tuning parameters $\lambda_1\geq0$ and $r>0$ are selected in the following way. For various pairs of $(\lambda_1, r)$, we apply QUADRO and evaluate the classification error via 4000 newly generated testing data; we then choose the $(\lambda_1, r)$ that minimizes the classification error.
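The one-dimensional problem (\ref{hatt}) can be solved, for instance, by a simple grid search over $t$. The sketch below is illustrative only: the grid and the numerical inputs are hypothetical, and the plug-in quantities $\widehat{M}$, $\widehat{L}_1$, $\widehat{L}_2$ are passed in as precomputed scalars.

```python
import math
import numpy as np

def Phi_bar(z):
    """Upper tail of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def choose_t(pi, M, L1, L2, n_grid=99):
    """Grid-search sketch of (hatt): minimize the plug-in error proxy over t."""
    grid = np.linspace(0.01, 0.99, n_grid)
    errs = [pi * Phi_bar((1 - t) * M / math.sqrt(L1))
            + (1 - pi) * Phi_bar(t * M / math.sqrt(L2)) for t in grid]
    return grid[int(np.argmin(errs))]

# Symmetric case: equal priors and equal variances give t-hat = 1/2,
# i.e. the threshold c = (M1 + M2)/2 used in the Rayleigh-quotient analysis.
t_hat = choose_t(pi=0.5, M=2.0, L1=1.0, L2=1.0)
```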
We compare QUADRO with five \textit{classification-oriented procedures}: \begin{itemize} \item Sparse logistic regression (SLR): We apply the sparse logistic regression to the augmented feature space $\{X_i, 1\leq i\leq d; X_iX_j, 1\leq i\leq j\leq d\}$. The resulting estimator then gives a quadratic projection with $(\bolds{\Omega}, \bolds{\delta}, c)$ determined from the fitted regression coefficients. We implement the sparse logistic regression using the R package \texttt{glmnet}. \item Linear sparse logistic regression (L-SLR): We apply the sparse logistic regression directly to the original feature space $\{X_i, 1\leq i\leq d\}$. \item ROAD [\citet{ROAD}]: This is a linear classification method, which can be formulated equivalently as a modified version of QUADRO by constraining $\widehat{\Omega}$ to be the zero matrix and plugging in the pooled sample covariance matrix. \item Penalized-LDA (P-LDA) [\citet{witten2011penalized}]: This is a variant of LDA, which solves an optimization problem with a nonconvex objective and $L_1$ penalties. Also, P-LDA only uses diagonals of the sample covariance matrices. \item FAIR [\citet{FAIR}]: This is a variant of LDA for high-dimensional settings, where screening is adopted to pre-select features and only the diagonals of the sample covariance matrices are used. \end{itemize} To make a fair comparison, the tuning parameters in SLR and L-SLR are selected in the same way as in QUADRO based on 4000 testing data. ROAD and P-LDA are self-tuned by their respective packages. The number of features chosen in FAIR is calculated in the way suggested in \citet{FAIR}. \begin{figure} \caption{Distributions of minimum classification error based on 100 replications for four different normal models. The tuning parameters for QUADRO, SLR and L-SLR are chosen to minimize the classification errors of 4000 testing samples.
See \citet{QUADROsupp} for detailed numerical tables.} \label{figClassiErrorGaussian} \end{figure} We consider four models: \begin{itemize}[--] \item[--] \textit{Model} 1: $\bolds{\Sigma}_1$ is the identity matrix. $\bolds{\Sigma} _2$ is a diagonal matrix in which the first 10 elements are equal to 1.3 and the rest are equal to 1. $\bolds{\mu}_1={\mathbf0}$, and $\bolds{\mu} _2=(0.7,\ldots,0.7,0,\ldots,0)^{\top}$ with the first 10 elements of $\bolds{\mu}_2$ being nonzero. \item[--] \textit{Model} 1L: $\bolds{\mu}_1$, $\bolds{\mu}_2$ are the same as in {model}~1, and both $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$ are the identity matrix. \item[--] \textit{Model} 2: $\bolds{\Sigma}_1$ is a block-diagonal matrix. Its upper left $20\times20$ block is an equal correlation matrix with $\rho=0.4$, and its\vspace*{1pt} lower right $20\times20$ block is an identity matrix. $\bolds{\Sigma}_2=(\bolds{\Sigma}^{-1}_1+\mathbf{I})^{-1}$. We also set $\bolds{\mu} _1=\bolds{\mu}_2={\mathbf0}$. In this model, neither $\bolds{\Sigma}_1^{-1}$ nor $\bolds{\Sigma}_2^{-1}$ is sparse, but $\bolds{\Sigma}_1^{-1}-\bolds{\Sigma}_2^{-1}$ is. \item[--] \textit{Model} 3: $\bolds{\Sigma}_1$, $\bolds{\Sigma}_2$ and $\bolds{\mu}_1$ are the same as in {model~2}, and $\bolds{\mu}_2$ is taken from {model~1}. \end{itemize} Figure~\ref{figClassiErrorGaussian} contains the boxplots for the classification errors of all methods. In all four models, QUADRO outperforms other methods in terms of classification error. In model~1L, $\bolds{\Sigma}_1=\bolds{\Sigma}_2$, so the Bayes classifier is linear. In this case, which favors linear methods, QUADRO is still competitive with the best of all linear classifiers. In model~2, $\bolds{\mu}_1=\bolds{\mu}_2$, so linear methods can do no better than random guessing. Therefore, ROAD, L-SLR, P-LDA and FAIR all have very poor performances. For the two quadratic methods, QUADRO is significantly better than SLR.
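The covariance structure of model~2 can be checked directly: with $\bolds{\Sigma}_2=(\bolds{\Sigma}_1^{-1}+\mathbf{I})^{-1}$ we have $\bolds{\Sigma}_2^{-1}=\bolds{\Sigma}_1^{-1}+\mathbf{I}$, so $\bolds{\Sigma}_1^{-1}-\bolds{\Sigma}_2^{-1}=-\mathbf{I}$ is sparse even though neither precision matrix is. A small numerical sketch, using the model parameters as stated above:

```python
import numpy as np

# Model 2: block-diagonal Sigma_1 with a 20x20 equal-correlation block
# (rho = 0.4) and a 20x20 identity block; Sigma_2 = (Sigma_1^{-1} + I)^{-1}.
rho, b = 0.4, 20
corr_block = (1 - rho) * np.eye(b) + rho * np.ones((b, b))
Sigma1 = np.block([[corr_block, np.zeros((b, b))],
                   [np.zeros((b, b)), np.eye(b)]])
Sigma2 = np.linalg.inv(np.linalg.inv(Sigma1) + np.eye(2 * b))

# The difference of the precision matrices is exactly -I,
# which is what makes the quadratic part of the Bayes rule sparse here.
diff = np.linalg.inv(Sigma1) - np.linalg.inv(Sigma2)
```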
In models 1~and~3, $\bolds{\mu}_1\neq\bolds{\mu}_2$ and $\bolds{\Sigma}_1\neq\bolds{\Sigma} _2$, so in the Bayes classifier, both ``linear'' parts and ``quadratic'' parts play important roles. In model~1, both $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$ are diagonal, and the setting favors methods using only diagonals of sample covariance matrices. As a result, P-LDA and FAIR perform quite well. In model~3, $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$ are both nondiagonal and nonsparse (but $\bolds{\Sigma}_1-\bolds{\Sigma}_2$ is sparse). We see that the performances of P-LDA and FAIR are unsatisfactory. QUADRO outperforms other methods in both models 1~and~3. Comparing SLR and L-SLR, we see that the former considers a broader class, while the latter is more robust, but neither performs uniformly better. However, QUADRO performs well in all cases. In terms of Rayleigh quotients, QUADRO also outperforms other methods in most cases. \subsection{Simulations under elliptical models} Let $n_1=n_2=50$ and $d=40$. For each given $\bolds{\mu}_1$, $\bolds{\mu}_2$, $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$, data are generated from a multivariate $t$-distribution with $5$ degrees of freedom. In QUADRO, we input the robust $M$-estimators for means and the rank-based estimators for covariance matrices as described in Section~\ref{secestimation}. We compare the performance of QUADRO with the five methods considered in the Gaussian settings. We also implement QUADRO with the sample means and sample covariance matrices as inputs. We name this method QUADRO-0 to differentiate it from QUADRO. \begin{figure} \caption{Distributions of minimum classification error based on 100 replications across different elliptical distribution models. The tuning parameters for QUADRO, SLR and L-SLR are chosen to minimize the classification errors.
See \citet{QUADROsupp} for detailed numerical tables.} \label{figClassiErrorEllip} \end{figure} We consider three models: \begin{itemize}[--] \item[--] \textit{Model} 4: Here we use the same parameters as in {model~1}. \item[--] \textit{Model} 5: $\bolds{\Sigma}_1$, $\bolds{\mu}_1$ and $\bolds{\mu}_2$ are the same as in {model~1}. $\bolds{\Sigma}_2$ is the covariance matrix of a fractional white noise process with difference parameter $l=0.2$. In other words, $\bolds{\Sigma}_2$ has the polynomial off-diagonal decay $\vert \Sigma_2(i,j)\vert = O(\vert i-j\vert ^{1-2l})$. \item[--] \textit{Model} 6: $\bolds{\Sigma}_1$, $\bolds{\mu}_1$ and $\bolds{\mu}_2$ are the same as in {model} 1. $\bolds{\Sigma}_2$ is a matrix such that $\Sigma _2(i,j)=0.6^{\vert i-j\vert }$; that is, $\bolds{\Sigma}_2$ has an exponential off-diagonal decay. \end{itemize} Figure~\ref{figClassiErrorEllip} contains the boxplots of classification error over $100$ replications. QUADRO outperforms the other methods in all settings. Also, QUADRO is better than QUADRO-0 (e.g., an average classification error of $0.161$ versus $0.173$ in model~5), which illustrates the advantage of using the robust estimators for means and covariance matrices. \subsection{Real data analysis} We apply QUADRO to a large-scale genomic dataset, GPL96, and compare the performance of QUADRO with SLR, L-SLR, ROAD, P-LDA and FAIR. The GPL96 data set contains 20,263 probes and 8124 samples from 309 tissues. Among the tissues, breast tumor has 1142 samples, which is the largest set. We merge the probes from the same gene by averaging them, and finally get 12,679 genes and 8124 samples. We divide all samples into two groups: breast tumor or nonbreast tumor. First, we look at the classification errors. We replicate our experiment 100 times. Each time, we proceed with the following steps: \begin{itemize} \item Randomly choose a training set of 400 samples, 200 from breast tumor and 200 from nonbreast tumor.
\item For each training set, we use half of the samples to compute $(\widehat{\bolds{\Omega}},\widehat{\bolds{\delta}})$ and the other half to select the tuning parameters by minimizing the classification error. \item Use the remaining 942 samples from breast tumor and another randomly chosen 942 samples from nonbreast tumor as the testing set, and calculate the testing error. \end{itemize} FAIR does not have any tuning parameters, so we use the whole training set to fit the classification frontier, and the testing set to calculate the testing error. The results are summarized in Table~\ref{tbreal-classerr}. We see that QUADRO outperforms all the other methods. \begin{table} \tabcolsep=0pt \caption{Classification errors on the GPL96 dataset across six methods. Means and standard deviations (in parentheses) of $100$ replications are reported} \label{tbreal-classerr} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}@{}lccccc@{}} \hline \textbf{QUADRO} & \textbf{SLR} & \textbf{L-SLR} & \textbf{ROAD} & \textbf{Penalized-LDA} & \textbf{FAIR}\\ \hline 0.014& 0.025 & 0.025& 0.016&0.060&0.046\\ (0.007)& (0.007)&(0.009)&(0.007)&(0.011)&(0.009)\\ \hline \end{tabular*} \end{table} Next, we look at gene selection and focus on the two quadratic methods, QUADRO and SLR. We apply two-fold cross-validation to both QUADRO and SLR. In the results, QUADRO selects 139 genes and SLR selects 128 genes. According to the KEGG database, genes selected by QUADRO belong to 5 of the pathways that contain more than two genes; correspondingly, genes selected by SLR belong to 7 pathways. Using the ClueGo tool [\citet{bindea2009cluego}], we display the overall KEGG enrichment chart in Figure~\ref{figEnrichment}. We see from Figure~\ref{figEnrichment} that both QUADRO and SLR have \textit{focal adhesion} as their most important functional group. Nevertheless, QUADRO finds \textit{ECM-receptor interaction} as another important functional group. 
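The split--fit--tune--test protocol described above can be sketched in a few lines of code. The sketch below is ours, not part of the paper: it uses a simple soft-thresholded nearest-centroid rule as a stand-in classifier (QUADRO itself is not reproduced here), synthetic Gaussian data, and sample sizes scaled down from the 200/942 per class used in the text; all function names are hypothetical.

```python
import numpy as np

def nearest_centroid(mu0, mu1, t):
    """Nearest-centroid rule whose mean-difference direction is
    soft-thresholded at level t (a stand-in for the sparsity tuning
    parameter selected on the validation half)."""
    d = mu1 - mu0
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft-threshold
    mid = (mu0 + mu1) / 2.0
    return lambda X: ((X - mid) @ d > 0).astype(int)

def one_replication(X0, X1, n_tr, n_te, grid, rng):
    """One replication of the protocol: n_tr training samples per class
    (half to fit, half to tune), then n_te test samples per class drawn
    from the remainder; returns the balanced test error of the tuned rule."""
    err = lambda c, A, B: 0.5 * (np.mean(c(A) != 0) + np.mean(c(B) != 1))
    i0, i1 = rng.permutation(len(X0)), rng.permutation(len(X1))
    fit0, val0, te0 = X0[i0[:n_tr // 2]], X0[i0[n_tr // 2:n_tr]], X0[i0[n_tr:n_tr + n_te]]
    fit1, val1, te1 = X1[i1[:n_tr // 2]], X1[i1[n_tr // 2:n_tr]], X1[i1[n_tr:n_tr + n_te]]
    rules = [nearest_centroid(fit0.mean(0), fit1.mean(0), t) for t in grid]
    best = min(rules, key=lambda c: err(c, val0, val1))  # tune on held-out half
    return err(best, te0, te1)                           # report test error

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(400, 10))      # class 0
X1 = rng.normal(2.0, 1.0, size=(400, 10))      # class 1, well separated
errors = [one_replication(X0, X1, 100, 200, [0.0, 0.5, 1.0], rng)
          for _ in range(5)]
print(round(float(np.mean(errors)), 3))
```

With well-separated classes, the averaged test error across replications is close to zero, as expected for any reasonable rule under this protocol.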
\textit{ECM-receptor interaction} is a class consisting of a mixture of structural and functional macromolecules, and it plays an important role in maintaining cell and tissue structures and functions. Numerous studies [\citet{luparello2013aspects,wei2007markov}] have found evidence that this class is closely related to breast cancer. \begin{figure} \caption{Overall KEGG enrichment chart, using \textup{(a)} QUADRO; \textup{(b)} SLR.} \label{figEnrichment} \end{figure} Besides the pathway analysis, we also perform a Gene Ontology (GO) enrichment analysis on the genes selected by QUADRO. This analysis was carried out using DAVID Bioinformatics Resources, and the results are shown in Table~\ref{GO-Quadro}. We present the biological processes with $p$-values smaller than $10^{-3}$. According to the table, we see that many biological processes are significantly enriched, and they are related to the previously selected pathways. For instance, the biological process \textit{cell adhesion} is known to be highly related to \textit{cell communication pathways}, including \textit{focal adhesion} and \textit{ECM-receptor interaction}. \begin{table} \tabcolsep=0pt \caption{Enrichment analysis results according to Gene Ontology for genes selected by QUADRO. The four columns represent GO ID, GO attribute, number of selected genes having the attribute and their corresponding $p$-values. We rank them according to $p$-values in increasing order} \label{GO-Quadro} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}@{}lcd{2.0}d{1.6}@{\hspace*{-3.5pt}}} \hline \textbf{GO ID} & \textbf{GO attribute} & \multicolumn{1}{c}{\textbf{No. 
of genes}} & \multicolumn{1}{c}{\textbf{$\bolds{p}$-value}} \\ \hline 0048856& Anatomical structure development & 58 & 3.7\mbox{E--}12 \\ 0032502&Developmental process&62&2.9\mbox{E--}10\\ 0048731&System development&52&3.1\mbox{E--}10\\ 0007275&Multicellular organismal development&55&1.8\mbox{E--}8\\ 0001501&Skeletal system development&15&1.3\mbox{E--}6\\ 0032501&Multicellular organismal process&66&1.4\mbox{E--}6\\ 0048513&Organ development&37&1.4\mbox{E--}6\\ 0009653&Anatomical structure morphogenesis&28&8.7\mbox{E--}6\\ 0048869&Cellular developmental process&34&1.9\mbox{E--}5\\ 0030154&Cell differentiation&33&2.1\mbox{E--}5\\ 0007155&Cell adhesion&18 &2.4\mbox{E--}4\\ 0022610&Biological adhesion&18 &2.2\mbox{E--}4\\ 0042127&Regulation of cell proliferation &19 &2.9\mbox{E--}4\\ 0009888&Tissue development&17&3.7\mbox{E--}4\\ 0007398&Ectoderm development&9&4.8\mbox{E--}4\\ 0048518&Positive regulation of biological process &34&5.6\mbox{E--}4\\ 0009605&Response to external stimulus &20&6.3\mbox{E--}4\\ 0043062&Extracellular structure organization &8&7.4\mbox{E--}4\\ 0007399&Nervous system development&22&8.4\mbox{E--}4\\ \hline \end{tabular*} \end{table} \section{Conclusions and extensions} \label{secconclude} QUADRO is a robust sparse high-\break dimensional classifier which allows us to use differences in covariance matrices to enhance discriminability. It is based on Rayleigh quotient optimization. The variance of a quadratic statistic involves all fourth cross moments, and this can create both computational and statistical problems. These problems are avoided by restricting our attention to the elliptical class of distributions. Robust $M$-estimation of the means and rank-based estimation of the correlations allow us to obtain uniform convergence over nonpolynomially many parameters, even when the underlying distributions have only finite fourth moments. This allows us to establish oracle inequalities under relatively weak conditions. 
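To illustrate the rank-based correlation estimation mentioned above, the following sketch (our code, not the authors') pairs Kendall's tau with the transform $\sin(\pi\tau/2)$, which consistently estimates the correlation under elliptical distributions, and checks it on heavy-tailed multivariate $t_5$ data; the function name is ours.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_correlation_matrix(X):
    """Rank-based correlation estimate for elliptical data:
    Kendall's tau for each pair of coordinates, mapped through
    sin(pi * tau / 2)."""
    n, d = X.shape
    R = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            tau = kendalltau(X[:, i], X[:, j])[0]
            R[i, j] = R[j, i] = np.sin(np.pi * tau / 2.0)
    return R

# Multivariate t_5 data with true correlation 0.5 between the coordinates
rng = np.random.default_rng(1)
n, rho = 2000, 0.5
C = np.array([[1.0, rho], [rho, 1.0]])
Z = rng.multivariate_normal(np.zeros(2), C, size=n)
w = rng.chisquare(5, size=n) / 5.0
X = Z / np.sqrt(w)[:, None]          # scale mixture of Gaussians: t_5
R = rank_correlation_matrix(X)
print(round(float(R[0, 1]), 2))
```

Because Kendall's tau depends only on the copula, the estimate remains close to the true correlation even though the fourth moments of $t_5$ data are barely finite, which is exactly the robustness exploited here.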
Existing methods in the literature about constructing high-dimensional quadra\-tic classifiers can be divided into two types. One is the regularized QDA, where regularized estimates of $\bolds{\Sigma}^{-1}_1$ and $\bolds{\Sigma}^{-1}_2$ are plugged into the Bayes classifier; see, for example, \citet{friedman1989regularized}. QUADRO avoids directly estimating inverse covariance matrices, which requires strong assumptions in high dimensions. The other is to combine linear classifiers with the inner-product kernel. The main difference between QUADRO and this approach is the simplification in Proposition~\ref {propellipmeanvar}. Due to this simplification, QUADRO avoids incorporating all fourth cross moments from the data and gains extra statistical efficiency. QUADRO also has deep connections with the literature of sufficient dimension reduction. Dimension reduction methods, such as SIR [\citet{li1991sliced}], SAVE [\citet{cook1991comment}] and Directional Regression [\citet{li2007directional}], can be equivalently formulated as maximizing some ``quotients.'' The population objective of SIR is to maximize $\operatorname{var}\{\mathbb {E}[f(\mathbf{X}| Y)]\}$ subject to $\operatorname{var}[f(\mathbf{X})]=1$. Using the same constraint, SAVE and directional regression combine $\operatorname{var}\{\mathbb {E}[f(\mathbf{X}| Y)]\}$ and $\mathbb{E}[\operatorname{var}(f(\mathbf{X}| Y))]$ in the objective. An interesting observation is that the Rayleigh quotient maximization is equivalent to the population objective of SIR, by noting that the denominator of (\ref{Rq}) is equal to $\mathbb{E}[\operatorname{var}(f(\mathbf{X}| Y))]$ and $\operatorname{var}[f(\mathbf{X})]=\mathbb{E}[\operatorname{var}(f(\mathbf{X}| Y))]+ \operatorname{var}\{\mathbb {E}[f(\mathbf{X}| Y)]\}$. This is not a coincidence, but due instead to the known equivalence between SIR and LDA in classification [\citet {kent1991discussion,Li00highdimensional}]. 
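The variance decomposition behind this equivalence, $\operatorname{var}[f(\mathbf{X})]=\mathbb{E}[\operatorname{var}(f(\mathbf{X})\mid Y)]+\operatorname{var}\{\mathbb{E}[f(\mathbf{X})\mid Y]\}$, is easy to verify numerically. The short check below is our construction (with an arbitrary class-conditional law for the projected values $f(\mathbf{X})$); it also confirms that the Rayleigh quotient and the SIR-style objective are increasing transforms of each other, hence share the same maximizer.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
y = (rng.random(n) < 0.3).astype(int)     # two classes, pi_1 = 0.3
f = rng.normal(2.0 * y, 1.0 + y)          # projected values f(X), class-dependent law

total = f.var()
within = sum(np.mean(y == k) * f[y == k].var() for k in (0, 1))
between = sum(np.mean(y == k) * (f[y == k].mean() - f.mean()) ** 2 for k in (0, 1))
# Law of total variance: var[f] = E[var(f|Y)] + var{E[f|Y]}
assert np.isclose(total, within + between, rtol=1e-6)

# Rayleigh quotient (between/within) vs the SIR-style objective
# (between/total): between/total = 1 / (1 + within/between) is an
# increasing function of between/within, so both have the same maximizer.
rq = between / within
sir = between / total
assert np.isclose(sir, rq / (1.0 + rq))
print("ok")
```

Here `between` equals the empirical version of $\pi(1-\pi)\{M_1(f)-M_2(f)\}^2$, the numerator of the two-class Rayleigh quotient, which makes the equivalence with the SIR objective concrete.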
Despite similar population objectives, QUADRO and the aforementioned dimension reduction methods are different in important ways. First, we clarify that even when $\lambda_1,\lambda_2$ are $0$, QUADRO is not the same procedure as SIR combined with the inner-product kernel [\citet{wu2008kernel}], although they share the same population objective. The difference is that QUADRO utilizes a simplification of the Rayleigh quotient for quadratic $f$, relying on the assumption that $\mathbf{X}| Y$ is always elliptically distributed; moreover, it adopts robust estimators of the mean vectors and covariance matrices. Second, QUADRO is designed for high-dimensional settings, in which none of SIR, SAVE and Directional Regression can be directly implemented. These methods need to either standardize the original data $\mathbf{X}\mapsto\widehat{\bSigma}^{-1}(\mathbf{X}-\bar{\mathbf{X}})$ or solve a generalized eigen-decomposition problem $\mathbf{A}\mathbf{v} =\lambda\widehat{\bSigma}\mathbf{v}$ for some matrix~$\mathbf{A}$. Both approaches require the sample covariance matrix to be well conditioned, which is often not the case in high dimensions. Possible solutions include Regularized SIR [\citet{zhong2005rsir,li2008sliced}], solving the generalized eigen-decomposition for an undetermined system [\citet{coudret2014comparison}] and variable selection approaches [\citet{chen2010coordinate,jiang2013sliced}]. However, these methods are not designed for Rayleigh quotient maximization. Third, our assumption on the model is different from that in dimension reduction. We require $\mathbf{X}| Y$ to be elliptically distributed, while many dimension reduction methods ``implicitly'' require $\mathbf{X}$ to be marginally elliptically distributed. Neither assumption is stronger than the other. Assuming a conditional elliptical distribution is more natural in classification. 
In addition, our assumption is used only to simplify the variances of quadratic statistics, whereas the elliptical assumption is critical to SIR. The Rayleigh optimization framework developed in this paper can be extended to the multi-class case. Suppose the data are drawn independently from a joint distribution of $(\mathbf{X}, Y)$, where $\mathbf{X}\in \mathbb{R}^d$ and $Y$ takes values in $\{0,1,\ldots, K-1\}$. Definition (\ref{Rq}) for the Rayleigh quotient of a projection $f\dvtx \mathbb{R}^d\to\mathbb{R}$ is still well defined. Let $\pi_k=\mathbb{P}(Y=k)$, for $k=0, 1,\ldots, K-1$. In this $K$-class situation, \begin{equation} \label{Rq-multiclass} \operatorname{Rq}(f) = \frac{\sum_{0\leq k<l\leq K-1} \pi_k\pi_l \{\mathbb{E}[f(\mathbf{X})| Y=k] - \mathbb{E}[f(\mathbf{X})| Y=l] \}^2}{ \sum_{0\leq k\leq K-1} \pi_k \operatorname{var}[f(\mathbf{X})| Y=k]}. \end{equation} Let $M_k(f)=\mathbb{E}[f(\mathbf{X})| Y=k]$ and $L_k(f)=\operatorname{var}[f(\mathbf{X})| Y=k]$. Similar to the two-class case, maximizing $\operatorname{Rq}(f)$ is equivalent to solving the following optimization problem: \[ \min_f \sum_{k=0}^{K-1} \pi_k L_k(f)\quad\mbox{s.t.}\quad \sum_{0\leq k<l\leq K-1}\pi_k\pi_l \bigl\vert M_k(f)-M_l(f) \bigr\vert^2= 1. \] However, this is not a convex problem. We consider an approximate Rayleigh-quotient-maximization problem as follows: \[ \min_f \sum_{k=0}^{K-1} \pi_k L_k(f) \quad\mbox{s.t.}\quad\sqrt{ \pi_k\pi_l} \bigl\vert M_k(f)-M_l(f) \bigr\vert\geq1, \qquad0\leq k<l\leq K-1. \] To solve this problem, we first pick an order of $M_0(f),\ldots, M_{K-1}(f)$ to remove the absolute values in the constraints. With the order fixed, the problem becomes convex. Therefore, the whole optimization can be carried out by solving $K!$ convex problems, one for each ordering. When $K$ is small, the computational cost is reasonable. In practice, we can apply more efficient algorithms to speed up the computation. 
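For concreteness, the $K$-class quotient (\ref{Rq-multiclass}) can be estimated by plugging in sample proportions, class means and class variances of the projected data. The snippet below (our notation, not the paper's code) does this for a three-class example in which only one coordinate carries signal.

```python
import numpy as np

def rayleigh_quotient(fX, y, K):
    """Empirical K-class Rayleigh quotient: pairwise squared mean gaps
    weighted by pi_k * pi_l, over the pooled within-class variance of
    the projected values f(X)."""
    pi = np.array([np.mean(y == k) for k in range(K)])
    M = np.array([fX[y == k].mean() for k in range(K)])
    L = np.array([fX[y == k].var() for k in range(K)])
    num = sum(pi[k] * pi[l] * (M[k] - M[l]) ** 2
              for k in range(K) for l in range(k + 1, K))
    return num / np.dot(pi, L)

rng = np.random.default_rng(2)
K, n = 3, 3000
y = rng.integers(0, K, size=n)
X = rng.normal(0.0, 1.0, size=(n, 4))
X[:, 0] += 2.0 * y                        # only coordinate 0 separates classes
good = rayleigh_quotient(X[:, 0], y, K)   # informative projection
bad = rayleigh_quotient(X[:, 1], y, K)    # pure-noise projection
print(round(float(good), 2), round(float(bad), 4))
```

As expected, the quotient is large for the informative coordinate and nearly zero for the noise coordinate, which is precisely the separation criterion the multi-class extension maximizes.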
\section{Proofs} \label{secproof} \subsection{Proof of Theorem \texorpdfstring{\protect\ref{teoRbound}}{5.1}} We prove\vspace*{2pt} the claim by first rewriting optimization problem (\ref{progquadro}) in vector form. For any $(\bolds{\Omega},\bolds{\delta})$, write $\mathbf{x}=[\operatorname{vec}(\bolds{\Omega})^{\top},\break \bolds{\delta}^{\top}]^{\top}$. Let $\mathbf{Q}$ be as defined in Section~\ref{secanalysis}, and \[ \mathbf{q}= \left[ \matrix{ \operatorname{vec}\bigl(\bolds{\Sigma}_2+ \bolds{\mu}_2\bolds{\mu}_2^{\top} - \bolds{\Sigma}_1 - \bolds{\mu}_1\bolds{\mu}_1^{\top} \bigr)^{\top}, 2( \bolds{\mu}_1-\bolds{\mu}_2)^{\top} } \right] ^{\top}. \] We introduce the following lemma, which is proved in the supplementary material [\citet{QUADROsupp}]. \begin{lem} \label{lemvectorize} $M(\bolds{\Omega}, \bolds{\delta})=\mathbf{q}^{\top}\mathbf{x}$ and $L(\bolds{\Omega},\bolds{\delta} )=\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}$. \end{lem} Let $\mathbf{x}^*_{\lambda_0}=[\operatorname{vec}(\bolds{\Omega}^*_{\lambda_0})^{\top}, (\bolds{\delta}^*_{\lambda_0})^{\top}]^{\top}$ and $\widehat{\bx} =[\operatorname{vec}(\widehat{\bOmega})^{\top}, \widehat{\bdelta}^{\top}]^{\top}$. Using Lemma \ref{lemvectorize}, \begin{eqnarray*} \mathbf{x}_{\lambda_0}^* &=& \mathop{\operatorname{argmin}}_{\mathbf{x}\in\mathbb{R}^d\dvtx \mathbf{q}^{\top}\mathbf{x} =1} \bigl\{\mathbf{x}^{\top} \mathbf{Q}\mathbf{x}+ \lambda_0 \vert\mathbf{x}\vert_1 \bigr\}, \\ \widehat{\bx}& = &\mathop{\operatorname{argmin}}_{\mathbf{x}\in\mathbb{R}^d\dvtx \widehat{\mathbf{q}}^{\top}\mathbf{x} =1} \bigl\{ \mathbf{x}^{\top}\widehat{\bQ}\mathbf{x}+ \lambda\vert \mathbf{x}\vert_1 \bigr\}, \end{eqnarray*} where $\widehat{\bQ}$ and $\widehat{\mathbf{q}}$ are counterparts of $\mathbf{Q}$ and $\mathbf{q}$, respectively, obtained by replacing $\bolds{\mu}_1$, $\bolds{\mu}_2$, $\bolds{\Sigma}_1$ and $\bolds{\Sigma}_2$ with their estimates. 
Moreover, we have the Rayleigh quotient \[ R(\bolds{\Omega}, \bolds{\delta})=R(\mathbf{x})\equiv\frac{(\mathbf{q}^{\top}\mathbf{x} )^{2}}{\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}}. \] In addition, we have the following lemma, which is proved in the supplementary material [\citet{QUADROsupp}]. \begin{lem} \label{leminfnorm} $\max\{\vert \widehat{\bQ}-\mathbf{Q}\vert _\infty, \vert \widehat{\mathbf {q}}-\mathbf{q}\vert _\infty\} \leq C_0 \max\{\vert \widehat{\bSigma}_k-\bolds{\Sigma}_k\vert _\infty, \vert \widehat{\bmu}_k-\bolds{\mu} _k\vert _\infty, k=1,2\}$ for some constant $C_0>0$.\vadjust{\goodbreak} \end{lem} Combining the above results, the claim follows immediately from the following theorem: \begin{teo} \label{teotechnique} For any $\lambda_0\geq0$, let $S$ be the support of $\mathbf{x}^*_{\lambda _0}$. Suppose $\Theta(S, 0)\geq c_0$, $\Theta(S, 3)\geq a_0$ and $R(\mathbf{x}^*_{\lambda_0})\geq u_0$, for positive constants $a_0$, $c_0$ and~$u_0$. Let $\Delta_n=\max\{\vert \widehat{\bQ}-\mathbf{Q}\vert _\infty, \vert \widehat{\mathbf{q}}- \mathbf{q}\vert _\infty\}$, $s_0=\vert S\vert $ and $k_0=\max\{s_0,\break R(\mathbf{x} ^*_{\lambda_0})\}$. Suppose $4s_0\Delta_n^2<c_0u_0$ and $\max\{ s_0\Delta_n, s_0^{1/2}k_0^{1/2}\lambda_0 \}<1$. Then there exist positive constants $C=C(a_0, c_0, u_0)$ and $A= A(a_0, c_0, u_0)$, such\vspace*{1pt} that for any $\eta>1$, by taking $\lambda=C\eta\max\{ s_0^{1/2}\Delta_n, k_0^{1/2}\lambda_0 \}{[R(\mathbf{x}^*_{\lambda_0})]^{-1/2}}$, \[ \frac{R(\widehat{\bx})}{R(\mathbf{x}^*_{\lambda_0})} \geq1 - A\eta^2 \max\bigl\{ s_0 \Delta_n, s_0^{1/2}k_0^{1/2} \lambda_0 \bigr\}. \] \end{teo} The main part of the proof is to show Theorem \ref{teotechnique}. Write for short $\mathbf{x}^*=\mathbf{x}^*_{\lambda_0}$, $R^*=R(\mathbf{x}^*)$, $V^*=(R^*)^{-1}= (\mathbf{x}^*)^{\top}\mathbf{Q}\mathbf{x}^*$, $\bar{V}^*=(V^*)^{1/2}$. 
Let $\alpha_n = \Delta_n \vert \mathbf{x}^*\vert _0^{1/2}$, $\beta_n = \Delta_n \vert \mathbf{x}^*\vert _0$ and $T_n(\mathbf{x}^*) = \max\{s_0 \Delta_n, s_0^{1/2}k_0^{1/2}\lambda_0\}$. {We define} the quantity \[ \Gamma(\mathbf{x}) = \frac{\vert \mathbf{Q}\mathbf{x}- (\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}) \mathbf{q}\vert _\infty}{(\mathbf{x}^{\top}\mathbf{Q}\mathbf{x})^{1/2}}\qquad\mbox{for any }\mathbf{x}. \] \begin{longlist}[\textit{Step} 2.] \item[\textit{Step} 1.] We introduce $\mathbf{x}^*_1$, a multiple of $\mathbf{x}^*$, and use it to bound $\vert \widehat{\bx}\vert _1$. Let $\mathbf{Q}_{SS}$ be the submatrix of $\mathbf{Q}$ formed by rows and columns corresponding to~$S$. Since $\lambda_{\min}(\mathbf{Q}_{SS}){=\Theta(S,0)}\geq c_0$, we have $(\mathbf{x}^*)^{\top}\mathbf{Q}\mathbf{x}^*\geq c_0\vert \mathbf{x}^*\vert ^2$. Using this fact and by the Cauchy--Schwarz inequality, \begin{equation} \label{qQtemp0} \bigl\vert\mathbf{x}^* \bigr\vert_1 \leq\sqrt{ \bigl \vert\mathbf{x}^* \bigr\vert_0} \bigl\vert\mathbf{x}^* \bigr\vert\leq c_0^{-1/2} \sqrt{ \bigl\vert\mathbf{x}^* \bigr\vert _0} \bar{V}^*. \end{equation} It follows that \begin{equation} \label{qQtemp1} \bigl\vert\widehat{\mathbf{q}}^{\top}\mathbf{x}^*- \mathbf{q}^{\top}\mathbf{x}^* \bigr\vert\leq\vert\widehat{\mathbf{q}}- \mathbf{q}\vert_{\infty} \bigl\vert\mathbf{x}^* \bigr\vert_1 \leq c_0^{-1/2} \Delta_n \sqrt{ \bigl\vert \mathbf{x}^* \bigr\vert_0} \bar{V}^* = c_0^{-1/2} \alpha_n\bar{V}^*. \end{equation} {Let}\vspace*{1pt} $t_n=\widehat{\mathbf{q}}^{\top}\mathbf{x}^*$. Then (\ref{qQtemp1}) says that $\vert t_n-1\vert \leq c_0^{-1/2}\alpha_n \bar{V}^*$. {Noting that $\bar{V}^*=(R^*)^{-1/2}\leq u_0^{-1/2}$, we have $\vert t_n-1\vert \leq (c_0u_0)^{-1/2}s_0^{1/2}\Delta_n<1/2$ by assumption.} In particular, $t_n>0$. Let \[ \mathbf{x}^*_1=t_n^{-1}\mathbf{x}^*. \] Then $\widehat{\mathbf{q}}^{\top}\mathbf{x}^*_1=1$. 
From the definition of $\widehat{\bx}$, \begin{equation} \label{qQtemp2} \widehat{\bx}^{\top}\widehat{\bQ}\widehat{\bx}+ \lambda\vert\widehat{\bx}\vert _1 \leq\bigl(\mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ} \mathbf{x}^*_1 + \lambda\bigl\vert\mathbf{x}^*_1 \bigr\vert _1. \end{equation} By direct calculation, \begin{eqnarray} \label{qQtemp31} \widehat{\bx}^{\top}\widehat{\bQ}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ}\mathbf{x}^*_1 &=& \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ}\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr) + 2 \bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ}\mathbf{x}^*_1\nonumber \\ &=& \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ}\bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr) + 2 \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top} \bigl(\widehat{\bQ}\mathbf{x}^*_1- V^* \widehat{\mathbf{q}} \bigr) \\ & \geq& 2 \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top} \bigl( \widehat{\bQ}\mathbf{x}^*_1- V^* \widehat{\mathbf{q}} \bigr), \nonumber \end{eqnarray} where the second equality is due to $\widehat{\mathbf{q}}^{\top}\widehat{\bx} =\widehat{\mathbf{q}}^{\top}\mathbf{x}^*_1=1$. {We aim to bound $\vert \widehat{\bQ} \mathbf{x} ^*_1-{V^*}\widehat{\mathbf{q}}\vert _\infty$. The following lemma is proved in the supplementary material [\citet{QUADROsupp}].} \begin{lem} \label{lemKKTsmall} When $\Theta(S, 0)\geq c_0$, there exists a positive constant $C_1=C_1(c_0)$ such that $\Gamma(\mathbf{x}^*_{\lambda_0})\leq C_1 \lambda _0 [\max\{ s_0, R(\mathbf{x}^*_{\lambda_0}) \}]^{1/2}$ for any $\lambda _0\geq0$. 
\end{lem} Since $\mathbf{x}^*_1=t_n^{-1}\mathbf{x}^*$ and $t_n^{-1}< 2$, \begin{eqnarray*} && \bigl\vert\widehat{\bQ}\mathbf{x}^*_1-{V^*}\widehat{\mathbf{q}} \bigr\vert _\infty \\ &&\qquad \leq t_n^{-1} \bigl\vert\widehat{\bQ}\mathbf{x}^*- V^* \widehat{\mathbf{q}} \bigr\vert_\infty+ V^* \bigl\vert t_n^{-1}-1 \bigr\vert\vert\widehat{\mathbf{q}}\vert _\infty \\ &&\qquad \leq 2 \bigl( \bigl\vert\mathbf{Q}\mathbf{x}^*- V^*\mathbf{q} \bigr \vert_\infty+ \vert\widehat{\bQ}-\mathbf{Q}\vert_\infty\bigl\vert \mathbf{x}^* \bigr\vert_1 + V^* \vert\widehat{\mathbf{q}}-\mathbf{q} \vert_\infty+ V^* \vert t_n-1\vert\vert\widehat{ \mathbf{q}}\vert_\infty\bigr) \\ &&\qquad \leq 2 \bigl[ \Gamma\bigl(\mathbf{x}^* \bigr)\bar{V}^* + c_0^{-1/2}\alpha_n\bar{V}^* + {u_0^{-1/2} \Delta_n\bar{V}^*} + {\vert \widehat{\mathbf{q}}\vert_\infty}c_0^{-1/2}u_0^{-1} \alpha_n\bar{V}^* \bigr] \\ &&\qquad \leq C_2 \bigl( \lambda_0k_0^{1/2} + s_0^{1/2} \Delta_n \bigr){\bar{V}^*}. \end{eqnarray*} Here the third inequality follows from (\ref{qQtemp0})--(\ref{qQtemp1}) and {$V^*=\bar{V}^*(R^*)^{-1/2}\leq u_0^{-1/2}\bar{V}^*$}. The last inequality is obtained as follows: from Lemma \ref{leminfnorm}, we know that $\vert \widehat{\mathbf{q}}\vert _\infty\leq\vert \mathbf{q}\vert _\infty+\vert \widehat{\mathbf{q}}-\mathbf{q}\vert _\infty\leq2C_0$ (see also the assumptions in the beginning of Section~\ref{subsecoracle}); we also use Lemma \ref{lemKKTsmall} and $\alpha_n\bar{V}^*\leq u_0^{-1/2} s_0^{1/2}\Delta_n$. By letting $C=8C_2$, the choice of $\lambda=C\eta\max\{ s_0^{1/2}\Delta_n, k_0^{1/2}\lambda_0\}{\bar{V}^*}$ for $\eta>1$ ensures that \[ \bigl\vert\widehat{\bQ}\mathbf{x}^*_1-V^*\widehat{\mathbf{q}} \bigr\vert _\infty\leq\lambda/4. \] Plugging this result into (\ref{qQtemp31}) gives \begin{equation} \label{qQtemp3} \widehat{\bx}^{\top}\widehat{\bQ}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ}\mathbf{x}^*_1\geq- \frac {\lambda}{2} \bigl\vert \widehat{\bx}-\mathbf{x}^*_1 \bigr\vert_1. 
\end{equation} Combining (\ref{qQtemp2}) and (\ref{qQtemp3}) gives \begin{equation} \label{qQtemp4origin} \lambda\vert\widehat{\bx}\vert_1 - \frac{\lambda}{2} \bigl\vert\widehat{\bx}- \mathbf{x}_1^* \bigr\vert_1 \leq\lambda \bigl\vert\mathbf{x}_1^* \bigr\vert_1. \end{equation} First, since $\vert \widehat{\bx}\vert _1=\vert \widehat{\bx}_S\vert _1 + \vert \widehat{\bx}_{S^c}\vert _1\geq\vert \mathbf{x} _{1S}^*\vert _1 - \vert \widehat{\bx}_S-\mathbf{x}^*_{1S}\vert _1 + \vert \widehat{\bx}_{S^c}\vert _1$ and $\vert \widehat{\bx}-\mathbf{x} _1^*\vert _1=\vert \widehat{\bx}_{S}-\mathbf{x}^*_{1S}\vert _1 + \vert \widehat{\bx} _{S^c}\vert _1$, we immediately see from (\ref{qQtemp4origin}) that \begin{equation} \label{qQtempRE} \bigl\vert\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)_{S^c} \bigr\vert_1 \leq3 \bigl\vert\bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr)_S \bigr\vert_1. \end{equation} Second, note that $\vert \widehat{\bx}-\mathbf{x}_1^*\vert _1\leq\vert \widehat{\bx} \vert _1 + \vert \mathbf{x}_1^*\vert _1$. Plugging this into (\ref{qQtemp4origin}) gives \begin{equation} \label{qQtemp4} \vert\widehat{\bx}\vert_1 \leq3 \bigl\vert \mathbf{x}^*_1 \bigr\vert_1 = 3t_n^{-1} \bigl\vert\mathbf{x}^* \bigr\vert_1 \leq6c_0^{-1/2} \sqrt{ \bigl\vert\mathbf{x}^* \bigr\vert_0} \bar{V}^*. \end{equation} \end{longlist} \begin{longlist}[\textit{Step} 2.] \item[\textit{Step} 2.] We use (\ref{qQtempRE})--(\ref{qQtemp4}) to derive an upper bound for $(\widehat{\bx})^{\top}\mathbf{Q}\widehat{\bx}- (\mathbf{x}_1^*)^{\top }\mathbf{Q}\mathbf{x}_1^*$. 
Note that \begin{eqnarray} \label{qQtemp5} \quad && \widehat{\bx}^{\top}\widehat{\bQ}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\widehat{\bQ}\mathbf{x}^*_1\nonumber \\ &&\qquad \geq \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 - \bigl( \bigl\vert \widehat{\bx}^{\top}\widehat{\bQ}\widehat{\bx}- \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}\bigr\vert+ \bigl \vert\bigl(\mathbf{x}^*_1 \bigr)^{\top }\widehat{\bQ}\mathbf{x}^*_1 - \bigl( \mathbf{x}_1^* \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 \bigr \vert\bigr) \nonumber \\ &&\qquad \geq \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 - \bigl(\vert\widehat{\bQ}-\mathbf{Q}\vert _\infty\vert\widehat{\bx}\vert_1^2 + \vert\widehat{\bQ}- \mathbf{Q}\vert_\infty\bigl\vert\mathbf{x}^*_1 \bigr\vert _1^2 \bigr) \\ &&\qquad \geq \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 - 10t_n^{-{2}} \vert\widehat{\bQ}-\mathbf{Q}\vert_\infty\bigl\vert\mathbf{x}^* \bigr\vert _1^2 \nonumber \\ &&\qquad \geq \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 - C_3 \beta_n V^*, \nonumber \end{eqnarray} where the last two inequalities are direct results of (\ref{qQtemp4}). Combining (\ref{qQtemp2}) and (\ref{qQtemp5}), \begin{equation} \label{qQtemp6} \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}+ \lambda\vert\widehat{\bx}\vert _1 \leq\bigl(\mathbf{x}_1^* \bigr)^{\top}\mathbf{Q} \mathbf{x}_1^* + \lambda\bigl\vert\mathbf{x}_1^* \bigr\vert _1 + C_3 \beta_n V^*. 
\end{equation} Similar to (\ref{qQtemp31}), we have \begin{equation} \label{qQtemp11} \quad \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 = \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr) + 2 \bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr)^{\top} \bigl(\mathbf{Q}\mathbf{x}^*_1- V^* \widehat{\mathbf{q}} \bigr), \end{equation} where \begin{eqnarray*} \bigl\vert\mathbf{Q}\mathbf{x}_1^*-V^*\widehat{\mathbf{q}} \bigr\vert _\infty&\leq& t_n^{-1} \bigl( \bigl\vert\mathbf{Q} \mathbf{x}^*- V^*\mathbf{q} \bigr\vert_\infty+ V^*\vert\widehat{ \mathbf{q}}-\mathbf{q}\vert_\infty\bigr) + V^* \bigl\vert t_n^{-1}-1 \bigr\vert\vert\widehat{\mathbf{q}}\vert _\infty \\ &\leq&2 \bigl[ \Gamma\bigl(\mathbf{x}^* \bigr){\bar{V}^*} + {u_0^{-1/2}\Delta_n\bar{V}^*} + {\vert \widehat{\mathbf{q}}\vert_\infty c_0^{-1/2}u_0^{-1} \alpha_n\bar{V}^*} \bigr] \\ &\leq&\lambda/4. \end{eqnarray*} It follows that \[ \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q} \mathbf{x}^*_1 \geq\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q} \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr) - \frac{\lambda }{2} \bigl\vert\widehat{\bx}- \mathbf{x}^*_1 \bigr\vert_1. \] Plugging this into (\ref{qQtemp6}), we obtain \begin{equation} \label{qQtemp7} \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl( \widehat{\bx}- \mathbf{x}^*_1 \bigr) + \lambda\vert\widehat{\bx}\vert_1 - \frac{\lambda}{2} \bigl\vert\widehat{\bx}-\mathbf{x}^*_1 \bigr\vert _1 \leq\lambda\bigl\vert\mathbf{x}^*_1 \bigr\vert _1 + C_3 \beta_n V^*. \end{equation} We can rewrite the second and third terms on the left-hand side of (\ref{qQtemp7}) as \[ \lambda\vert\widehat{\bx}_S\vert_1 - \frac{\lambda}{2} \bigl\vert\widehat{\bx}_S - \mathbf{x}^*_{1S} \bigr\vert _1 + \frac{\lambda}{2}\vert\widehat{\bx}_{S^c}\vert _1. 
\] Plugging this into (\ref{qQtemp7}) and by the triangle inequality $\vert \mathbf{x}_{1S}^*\vert _1 - \vert \widehat{\bx}_S\vert _1\leq \vert \widehat{\bx}_S-\mathbf{x}^*_{1S}\vert _1$, we find that \[ \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr) + \frac{\lambda}{2}\vert\widehat{\bx}_{S^c} \vert_1 \leq\frac{3\lambda}{2} \bigl\vert\widehat{\bx}_{S}- \mathbf{x}^*_{1S} \bigr\vert_1 + C_3 \beta_n V^*. \] We drop the term $\frac{\lambda}{2}\vert \widehat{\bx}_{S^c}\vert _1$ on the left-hand side and apply the Cauchy--Schwarz inequality to the term $\vert \widehat{\bx}_{S}-\mathbf{x}^*_{1S}\vert _1$. This gives \begin{eqnarray} \label{qQtemp8} \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl( \widehat{\bx}-\mathbf{x}^*_1 \bigr) &\leq&\frac{3\lambda }{2} \sqrt{ \bigl\vert \mathbf{x}^*_1 \bigr\vert_0} \bigl\vert \widehat{\bx}_{S}-\mathbf{x}^*_{1S} \bigr\vert+ C_3 \beta_n V^*. \end{eqnarray} Since (\ref{qQtempRE}) holds, { by the definition of $\Theta(S,3)$, } \[ \bigl(\widehat{\bx}-\mathbf{x}_1^* \bigr)^{\top}\mathbf{Q}\bigl(\widehat{\bx}- \mathbf{x}_1^* \bigr)\geq a_0 \bigl\vert\widehat{\bx}_S- \mathbf{x}^*_{1S} \bigr\vert^2. \] We write temporarily $Y=(\widehat{\bx}-\mathbf{x}_1^*)^{\top}\mathbf{Q}(\widehat{\bx}-\mathbf{x}_1^*)$ and $b=C_3 \beta_n V^*$. Combining these with (\ref{qQtemp8}), \[ Y \leq\frac{3\lambda}{2\sqrt{a_0}}\sqrt{ \bigl\vert\mathbf{x}^*_1 \bigr \vert _0 Y} + b. \] Note that when $u^2\leq au+b$, we have $(u-\frac{a}{2})^2\leq b + \frac{a^2}{4}$, and hence $u^2\leq2[\frac{a^2}{4} + (u-\frac {a}{2})^2]\leq a^2 + 2b$. 
As a result, the above inequality implies \begin{equation} \label{qQtemp121} \bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl( \widehat{\bx}- \mathbf{x}^*_1 \bigr) \leq\frac{9\lambda ^2}{4a_0} \bigl\vert\mathbf{x}^* \bigr \vert_0 + 2C_3 \beta_n V^*, \end{equation} {where we have used $\vert \mathbf{x}^*_1\vert _0=\vert \mathbf{x}^*\vert _0$.} Furthermore, (\ref{qQtemp11}) yields that \begin{eqnarray} \label{qQtemp122} \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1 &\leq&\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr) + \frac{\lambda }{2} \bigl\vert\widehat{\bx}-\mathbf{x}^*_1 \bigr\vert _1\nonumber \\ &\leq&\bigl( \widehat{\bx}-\mathbf{x}^*_1 \bigr)^{\top} \mathbf{Q}\bigl(\widehat{\bx}-\mathbf{x}^*_1 \bigr) + 2\lambda\bigl\vert\mathbf{x} ^*_1 \bigr\vert_1 \\ &\leq&\bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\bigl(\widehat{\bx}- \mathbf{x}^*_1 \bigr) + 4 c_0^{-1/2}\bar{V}^* \lambda\sqrt{ \bigl\vert \mathbf{x}^* \bigr\vert_0},\nonumber \end{eqnarray} where\vspace*{1pt} the second inequality is due to $\vert \widehat{\bx}-\mathbf{x}_1^*\vert _1\leq\vert \widehat{\bx} \vert _1+\vert \mathbf{x}^*_1\vert _1 \leq4\vert \mathbf{x}_1^*\vert _1$, and the last inequality is from (\ref{qQtemp4}). Recall that $\lambda=C\eta\max\{k_0^{1/2}\lambda_0, s_0^{1/2}\Delta _n\} \bar{V}^*$. As a result, \begin{equation} \label{qQtemp123} \lambda\sqrt{ \bigl\vert\mathbf{x}^* \bigr\vert_0} = C \eta\max\bigl\{ k_0^{1/2}s_0^{1/2} \lambda_0, s_0 \Delta_n \bigr\} \bar{V}^*= C \eta T_n \bigl(\mathbf{x}^* \bigr) \bar{V}^*. 
\end{equation} Combining (\ref{qQtemp121}), (\ref{qQtemp122}) and (\ref {qQtemp123}) gives \begin{eqnarray} \label{qQtemp13} && \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}- \bigl(\mathbf{x}^*_1 \bigr)^{\top}\mathbf{Q}\mathbf{x}^*_1\nonumber \\ &&\qquad \leq \frac{9C^2}{4a_0} \eta^2 \bigl[T_n \bigl(\mathbf{x}^* \bigr) \bigr]^2 V^* + 4C c_0^{-1/2}\eta T_n \bigl(\mathbf{x}^* \bigr) V^* + 2C_3 \beta_n V^* \\ &&\qquad\leq C_4 \eta^2 T_n \bigl(\mathbf{x}^* \bigr) V^*.\nonumber \end{eqnarray} \end{longlist} \begin{longlist}[\textit{Step} 2.] \item[\textit{Step} 3.] We use (\ref{qQtemp13}) to give a lower bound of $R(\widehat{\bx})$. Note that $R(\widehat{\bx})=(\mathbf{q}^{\top}\widehat{\bx})^2/(\widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx})$. First, we look at the denominator $\widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}$. From (\ref {qQtemp1}) and that $t_n>1/2$, \[ \bigl\vert t_n^{-2}-1 \bigr\vert= t_n^{-1} \bigl(1+t_n^{-1} \bigr) \vert t_n-1\vert\leq6 c_0^{-1/2}\alpha _n\bar{V}^*. \] Combining with (\ref{qQtemp13}) and noting that $(\mathbf{x}_1^*)^{\top}\mathbf{Q} \mathbf{x}_1^*=t_n^{-2}(\mathbf{x}^*)^{\top}\mathbf{Q}\mathbf{x}^*=t_n^{-2}V^*$, we have \begin{eqnarray} \label{qQtemp16} \widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}&\leq&\bigl[t_n^{-2} + C_4 \eta^2 T_n \bigl(\mathbf{x}^* \bigr) \bigr] \bigl(\mathbf{x}^* \bigr)^{\top}\mathbf{Q}\mathbf{x}^*\nonumber \\ &\leq&\bigl[1 + 6 c_0^{-1/2}\alpha_n\bar{V}^* + C_{4} \eta^2 T_n \bigl(\mathbf{x}^* \bigr) \bigr] \bigl(\mathbf{x}^* \bigr)^{\top}\mathbf{Q}\mathbf{x}^* \\ &\leq&\bigl[1 + C_5 \eta^2 T_n \bigl(\mathbf{x}^* \bigr) \bigr] \bigl(\mathbf{x}^* \bigr)^{\top}\mathbf{Q}\mathbf{x}^*.\nonumber \end{eqnarray} Second, we look at the numerator $\mathbf{q}^{\top}\widehat{\bx}$. 
Since $\widehat{\mathbf{q}}^{\top}\widehat{\bx}=1$, {by (\ref{qQtemp4}),} \begin{equation} \label{qQtemp17} \bigl\vert\mathbf{q}^{\top}\widehat{\bx}- 1 \bigr\vert\leq \vert\widehat{\mathbf{q}}-\mathbf{q}\vert_\infty\vert\widehat{\bx} \vert_1 \leq6c_0^{-1/2} \alpha_n \bar{V}^* \leq C_6 T_n \bigl(\mathbf{x}^* \bigr). \end{equation} Combining (\ref{qQtemp16}) and (\ref{qQtemp17}) gives \begin{eqnarray} R(\widehat{\bx}) &=& \frac{(\mathbf{q}^{\top}\widehat{\bx})^2}{\widehat{\bx}^{\top}\mathbf{Q}\widehat{\bx}} \geq \frac{ [1- C_6T_n(\mathbf{x}^*) ]^2}{1 + C_{5} \eta^2 T_n(\mathbf{x} ^*)} \frac{1}{(\mathbf{x}^*)^{\top} \mathbf{Q}\mathbf{x}^*}\nonumber \\ &\geq&\bigl[1 - A\eta^2 T_n \bigl(\mathbf{x}^* \bigr) \bigr] \frac{(\mathbf{q}^{\top}\mathbf{x} ^*)^2}{(\mathbf{x}^*)^{\top}\mathbf{Q}\mathbf{x}^*} \\ & =& \bigl[1 - A\eta^2 T_n \bigl( \mathbf{x}^* \bigr) \bigr] R \bigl(\mathbf{x}^* \bigr),\nonumber \end{eqnarray} where $A=A(a_0, c_0, u_0)$ is a positive constant. \end{longlist} \subsection{Proof of Proposition \texorpdfstring{\protect\ref{properrgap}}{6.1}} Denote by $\mathbb{P}(i| j)$ the probability that a new sample from class $j$ is misclassified to class $i$, for $i,j\in\{1,2\}$ and $i\neq j$. The classification error of $h$ is \[ \operatorname{err}(h) = \pi\mathbb{P}(2\mid1)+ (1- \pi)\mathbb {P}(1\mid2). \] Write $M_k=M_k(\bolds{\Omega},\bolds{\delta})$ and $L_k=L_k(\bolds{\Omega},\bolds{\delta})$ for short. It suffices to show that \begin{eqnarray*} \mathbb{P}(2\mid1) &=&\bar{\Phi} \biggl( \frac{(1-t)M}{\sqrt {L_1}} \biggr) + \frac{O(q) + o(d)}{L_1^{3/2}}, \\ \mathbb{P}(1\mid2) &=&\bar{\Phi} \biggl( \frac{tM}{\sqrt{L_2}} \biggr) + \frac{O(q) + o(d)}{L_2^{3/2}}. \end{eqnarray*} We only consider $\mathbb{P}(2\mid1)$. The analysis of $\mathbb {P}(1\mid2)$ is similar. Suppose $\mathbf{X}|\break \mbox{class 1} \stackrel{(d)}{=}\mathbf{Z}\sim \mathcal{N}(\bolds{\mu}_1,\bolds{\Sigma}_1)$. 
Define \[ \mathbf{Y}= \bolds{\Sigma}^{-1/2}_1(\mathbf{Z}-\bolds{\mu}_1), \] so that $\mathbf{Y}\sim\mathcal{N}({\mathbf0},\mathbf{I}_d)$ and $\mathbf{Z}=\bolds{\Sigma} ^{1/2}_1\mathbf{Y}+\bolds{\mu}_1$. Note that \begin{eqnarray} \label{temp1-errgap} Q(\mathbf{Z}) &=& \bigl(\bolds{\Sigma}^{1/2}_1\mathbf{Y}+ \bolds{\mu}_1 \bigr)^{\top}\bolds{\Omega}\bigl(\bolds{\Sigma} ^{1/2}_1 \mathbf{Y}+\bolds{\mu}_1 \bigr)-2 \bigl( \bolds{\Sigma}^{1/2}_1\mathbf{Y}+ \bolds{\mu}_1 \bigr)^{\top}\bolds{\delta} \nonumber\\[-8pt]\\[-8pt]\nonumber &=&\mathbf{Y}^{\top} \bolds{\Sigma}^{1/2}_1 \bolds{\Omega}\bolds{\Sigma}^{1/2}_1 \mathbf{Y}+2\mathbf{Y}^{\top } \bolds{\Sigma}^{1/2}_1(\bolds{\Omega}\bolds{\mu}_1-\bolds{\delta})+ \bolds{\mu}^{\top}_1\bolds{\Omega}\bolds{\mu}_1-2 \bolds{\mu}^{\top}_1\bolds{\delta}. \end{eqnarray} Recall that $\bolds{\Sigma}^{1/2}_1\bolds{\Omega}\bolds{\Sigma}^{1/2}_1=\mathbf{K}_1\mathbf{S}_1\mathbf{K} _1^{\top}$ is {the eigen-decomposition obtained by excluding the zero eigenvalues. Since $\bolds{\Sigma}_1$ has full rank and the rank of $\bolds{\Omega} $ is $q$, the rank of $\bolds{\Sigma}^{1/2}_1\bolds{\Omega}\bolds{\Sigma}^{1/2}_1$ is $q$. Therefore,\vspace*{2pt} $\mathbf{S}_1$ is a $q\times q$ diagonal matrix, and $\mathbf{K}_1$ is a $d\times q$ matrix satisfying $\mathbf{K}_1^{\top}\mathbf{K}_1=\mathbf{I}_q$.} Let $\widetilde{\mathbf{K}}_1$ be any {$d\times(d-q)$} matrix such that $\mathbf{K} =[\mathbf{K}_1, \widetilde{\mathbf{K}}_1]$ is a {$d\times d$} orthogonal matrix. Since $\mathbf{I}_d=\mathbf{K}\mathbf{K}^{\top}=\mathbf{K}_1\mathbf{K}_1^{\top}+\widetilde{\mathbf{K} }_1\widetilde{\mathbf{K}}_1^{\top}$, we have \[ \mathbf{Y}^{\top}\bolds{\Sigma}^{1/2}_1(\bolds{\Omega} \bolds{\mu}_1-\bolds{\delta}) =\mathbf{Y}^{\top}\mathbf{K}_1 \mathbf{K}_1^{\top}\bolds{\Sigma}^{1/2}_1(\bolds{\Omega}\bolds{\mu} _1-\bolds{\delta}) + \mathbf{Y}^{\top}\widetilde{\mathbf{K}}_1 \widetilde{\mathbf{K}}_1^{\top }\bolds{\Sigma}^{1/2}_1( \bolds{\Omega}\bolds{\mu}_1-\bolds{\delta}). 
\] We\vspace*{1pt} recall that $\bolds{\beta}_1=\mathbf{K}_1^{\top}\bolds{\Sigma}^{1/2}_1(\bolds{\Omega}\bolds{\mu}_1-\bolds{\delta})$. Let $\widetilde{\bolds{\beta}}_1=\widetilde{\mathbf{K}}_1^{\top}\bolds{\Sigma} ^{1/2}_1(\bolds{\Omega}\bolds{\mu}_1-\bolds{\delta})$, $\mathbf{W}=\mathbf{K}_1^{\top}\mathbf{Y}$, $\widetilde{\mathbf{W}}=\widetilde{\mathbf{K}}_1^{\top }\mathbf{Y}$ and $c_1=\bolds{\mu}^{\top}_1\bolds{\Omega}\bolds{\mu}_1-2\bolds{\mu}^{\top}_1\bolds{\delta} $. It follows from (\ref{temp1-errgap}) that \begin{eqnarray*} Q(\mathbf{Z}) &=&\mathbf{Y}^{\top}{\mathbf{K}_1\mathbf{S}_1 \mathbf{K}_1^{\top}}\mathbf{Y}+ 2\mathbf{Y}^{\top }\mathbf{K}_1 \bolds{\beta}_1 + 2 \mathbf{Y}^{\top}\widetilde{\mathbf{K}}_1 \widetilde{\bolds{\beta}}_1 + c_1 \\ &=& \mathbf{W}^{\top}{ \mathbf{S}_1}\mathbf{W}+2\mathbf{W}^{\top}\bolds{\beta}_1 + 2\widetilde{\mathbf{W} }^{\top}\widetilde{\bolds{\beta}}_1 + c_1 \\ &\equiv& \bar{Q}_1(\mathbf{W})+ \bar{F}_1(\widetilde{\mathbf{W}}) + c_1, \end{eqnarray*} where {$\bar{Q}_1(\mathbf{w})=\mathbf{w}^{\top}\mathbf{S}_1\mathbf{w}+ 2\mathbf{w}^{\top}\bolds{\beta} _1$ and $\bar{F}_1(\mathbf{w})=2\mathbf{w}^{\top}\widetilde{\bolds{\beta}}_1$}. Therefore, \[ \mathbb{P}(2\mid1)=\mathbb{P} \bigl(Q(\mathbf{Z})>c \bigr)=\mathbb{P} \bigl (\bar {Q}_1(\mathbf{W}) + \bar{F}_1(\widetilde{\mathbf{W}})>c-c_1 \bigr). \] We write for convenience $\mathbf{W}=(W_1,\ldots, W_q)^{\top}$, $\widetilde {\mathbf{W}}=(W_{q+1},\ldots, W_d)^{\top}$, $\bolds{\beta}_1=(\beta_{11},\ldots ,\beta_{1q})^{\top}$ and $\widetilde{\bolds{\beta}}_1=(\beta_{1(q+1)}, \ldots, \beta_{1d})^{\top}$, and notice that $W_i\stackrel {\mathrm{i.i.d.}}{\sim }N(0,1)$ for $1\leq i\leq d$. 
Moreover, \begin{equation} \label{errgap-temp} \bar{Q}_1(\mathbf{W}) + \bar{F}_1(\widetilde{ \mathbf{W}}) = \sum_{i=1}^{q} \bigl(s_i W^2_i+2W_i\beta_{1i} \bigr)+ \sum_{i=q+1}^{d} 2W_i \beta_{1i} \equiv\sum_{i=1}^{d} \xi_i, \end{equation} where $\xi_i=s_iW_i^2 I\{1\leq i\leq q\}+2W_i\beta_{1i}$, for $1\leq i\leq d$. The right-hand side of (\ref{errgap-temp}) is a sum of independent variables, so we can apply the Edgeworth expansion to its distribution function, as described in detail below. Note that $\mathbb{E}(W_i^2)=1$, $\mathbb{E}(W_i^4)=3$, $\mathbb {E}(W_i^6)=15$ and $\mathbb{E}(W_i^{2j+1})=0$ for nonnegative integers $j$. By direct calculation, \begin{eqnarray*} \eta_1& \equiv& \sum_{i=1}^d \mathbb{E}(\xi_i)=\sum_{i=1}^q s_i=\operatorname{tr}(\mathbf{S}_1)=\operatorname{tr}(\bolds{\Omega}\bolds{\Sigma}_1), \\ \eta_2& \equiv&\sum_{i=1}^d \operatorname{var} (\xi_i) = \sum_{i=1}^q \bigl( 2s_i^2 +4\beta^2_{1i} \bigr) + \sum_{i=q+1}^d 4 \beta^2_{1i} = 2\operatorname{tr}\bigl(\mathbf{S}_1^2 \bigr)+4 \vert\bolds{\beta}_1 \vert^2 + 4 \vert \widetilde{\bolds{\beta}}_1\vert^2 \\ & =&2\operatorname{tr}(\bolds{\Omega} \bolds{\Sigma}_1\bolds{\Omega}\bolds{\Sigma}_1)+4 (\bolds{\Omega}\bolds{\mu} _1-\bolds{\delta})^{\top}\bolds{\Sigma}_1(\bolds{\Omega} \bolds{\mu}_1-\bolds{\delta}), \\ \eta_3& \equiv&\sum _{i=1}^d \mathbb{E} \bigl[\xi_i- \mathbb{E}(\xi_i) \bigr]^3 = \sum _{i=1}^d \bigl(8 s^3_i+24 \beta^2_{1i} s_i \bigr) \\ &=&8\operatorname{tr}\bigl( \mathbf{S}_1^3 \bigr)+24 \bolds{\beta}_1^{\top} \mathbf{S}_1\bolds{\beta}_1 =8\operatorname{tr}\bigl[(\bolds{\Omega}\bolds{\Sigma}_1)^3 \bigr]+24(\bolds{\Omega}\bolds{\mu}_1-\bolds{\delta})^{\top }\bolds{\Sigma}_1 \bolds{\Omega}\bolds{\Sigma}_1(\bolds{\Omega}\bolds{\mu}_1-\bolds{\delta}). \end{eqnarray*} Notice that $\mathbb{E}(\vert \xi_i-\mathbb{E}(\xi_i)\vert ^3)<\infty$, as $\max\{\vert s_i\vert,\vert \beta_{1i}\vert, 1\leq i\leq d\} \leq C_0$ by assumption. 
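As a mechanical sanity check (ours, not part of the paper), the three moment formulas for $\xi=sW^2+2\beta W$ with $W\sim N(0,1)$ — namely $\mathbb{E}(\xi)=s$, $\operatorname{var}(\xi)=2s^2+4\beta^2$ and $\mathbb{E}[\xi-\mathbb{E}(\xi)]^3=8s^3+24\beta^2 s$ — can be recovered by expanding $(\xi-\mathbb{E}\xi)^k$ as a polynomial in $W$ and taking expectations term by term via the Gaussian moments $\mathbb{E}(W^k)=(k-1)!!$ for even $k$:

```python
# Verify the moment formulas for xi = s*W^2 + 2*beta*W, W ~ N(0,1),
# by polynomial expansion and term-by-term Gaussian expectations.
import numpy as np

def gaussian_moment(k):
    # E[W^k] for W ~ N(0,1): zero for odd k, double factorial (k-1)!! for even k.
    if k % 2 == 1:
        return 0
    m = 1
    for j in range(k - 1, 0, -2):
        m *= j
    return m

def central_moments(s, beta):
    # xi - E[xi] = s*(W^2 - 1) + 2*beta*W, stored as ascending coefficients in W.
    p = np.array([-s, 2 * beta, s], dtype=float)
    square = np.convolve(p, p)      # (xi - E xi)^2 as a polynomial in W
    cube = np.convolve(square, p)   # (xi - E xi)^3 as a polynomial in W
    expect = lambda poly: sum(c * gaussian_moment(k) for k, c in enumerate(poly))
    mean = s * gaussian_moment(2) + 2 * beta * gaussian_moment(1)  # = s
    return mean, expect(square), expect(cube)

s, beta = 1.7, -0.4
mean, var, third = central_moments(s, beta)
print(mean, var, third)
```

The printed values agree with $s$, $2s^2+4\beta^2$ and $8s^3+24\beta^2 s$ up to floating-point rounding.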
Using results from Chapter XVI of \citet{Feller1966}, we know \begin{eqnarray*} \mathbb{P}(2\mid1)&=&\mathbb{P} \Biggl(\sum _{i=1}^{d} \xi_i >c-c_1 \Biggr) \\ &=&\mathbb{P} \biggl(\frac{\sum_{i=1}^d \xi_i-\mathbb{E}(\sum_{i=1}^d \xi _i)}{\sqrt{\sum_{i=1}^d \operatorname{var}(\xi _i)}}>\frac{c-c_1-\mathbb{E}(\sum_{i=1}^d \xi_i)}{\sqrt{\sum_{i=1}^d \operatorname{var}(\xi_i)}} \biggr) \\ &=&\bar{ \Phi} \biggl(\frac{c-c_1-\eta_1}{\sqrt{\eta_2}} \biggr) +\frac{\eta _3(1-(\sfrac{(c_1-c+\eta_1)^2}{\eta_2}))}{6 \eta ^{3/2}_2}\phi\biggl( \frac{c_1-c+\eta_1}{\sqrt{\eta_2}} \biggr) \\ &&{} +o \biggl(\frac{d}{\eta ^{3/2}_2} \biggr), \nonumber \end{eqnarray*} where $\phi$ is the probability density function of the standard normal distribution. Observe that $\eta_2=L_1(\bolds{\Omega},\bolds{\delta} )$ and $c_1+\eta_1=M_1(\bolds{\Omega},\bolds{\delta})$. Also, $c=tM_1(\bolds{\Omega} ,\bolds{\delta})+(1-t)M_2(\bolds{\Omega},\bolds{\delta})$. As a result, \begin{eqnarray*} \frac{c-c_1-\eta_1}{\sqrt{\eta_2}} &=& \frac{[tM_1 + (1-t) M_2]-M_1}{\sqrt{L_1}} = \frac{(1-t)(M_2 - M_1)}{\sqrt{L_1}} \\ &=& (1-t) \frac{M}{\sqrt{L_1}}. \end{eqnarray*} Plugging this into the expression for $\mathbb{P}(2\mid1)$, the first term is $\bar{\Phi}((1-t)\tfrac{M}{\sqrt{L_1}})$. Moreover, since the function $(1-u^2)\phi(u)$ is uniformly bounded, the second term is $O(\frac{\eta_3}{\eta_2^{3/2}})$. Here $\eta_2=L_1$, and $\eta_3=O(q)$ as the $s_i$'s and $\beta_{1i}$'s are bounded in magnitude. Combining the above gives \[ \mathbb{P}(2\mid1) = \bar{\Phi} \biggl(\frac{(1-t)M}{\sqrt{L_1}} \biggr ) + \frac{O(q) + o(d)}{L_1^{3/2}}. \] The proof is now complete. 
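To illustrate the expansion numerically — a sketch with assumed values of the $s_i$'s and $\beta_{1i}$'s, not taken from the paper — one can compare a Monte Carlo estimate of $\mathbb{P}(\sum_i\xi_i>c)$ against the leading normal term $\bar\Phi((c-\eta_1)/\sqrt{\eta_2})$ (taking $c_1=0$). Choosing $c$ one standard deviation above the mean makes the first Edgeworth correction vanish, since $1-u^2=0$ at $u=1$:

```python
# Monte Carlo check that P(sum_i xi_i > c) is close to the normal tail
# bar{Phi}((c - eta_1)/sqrt(eta_2)) at u = 1; the s_i and beta_1i are assumed values.
import math
import numpy as np

rng = np.random.default_rng(0)
d, q = 100, 20
s = rng.uniform(0.5, 1.5, size=q)        # eigenvalues s_i (assumed)
beta = rng.uniform(-0.5, 0.5, size=d)    # coefficients beta_1i (assumed)

eta1 = s.sum()                                   # sum of E[xi_i]
eta2 = (2 * s**2).sum() + (4 * beta**2).sum()    # sum of var(xi_i)
c = eta1 + math.sqrt(eta2)                       # threshold one sd above the mean

# xi_i = s_i W_i^2 for i <= q, plus 2 W_i beta_1i for all i, with W_i ~ N(0,1).
N = 100_000
W = rng.standard_normal((N, d))
xi_sum = (W[:, :q]**2 * s).sum(axis=1) + (2 * W * beta).sum(axis=1)
mc = (xi_sum > c).mean()

normal_tail = 0.5 * math.erfc(1 / math.sqrt(2))  # bar{Phi}(1), about 0.1587
print(mc, normal_tail)
```

The two printed probabilities agree to within Monte Carlo noise plus higher-order Edgeworth terms.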
\begin{supplement}[id=suppA] \stitle{Supplement to ``QUADRO: A supervised dimension reduction method via Rayleigh quotient optimization''} \slink[doi]{10.1214/14-AOS1307SUPP} \sdatatype{.pdf} \sfilename{AOS1307\_supp.pdf} \sdescription{Owing to space constraints, numerical tables for simulation and some of the technical proofs are relegated to a supplementary document. It contains proofs of Propositions \ref {propellipmeanvar}, \ref{propQeigen} and \ref{propH-Taylor}.} \end{supplement} \printaddresses \end{document}
\begin{document} \title{Property testing of the Boolean and binary rank} \author{ Michal Parnas \\ The Academic College\\ of Tel-Aviv-Yaffo \\ {\tt [email protected]} \and Dana Ron \\ Tel-Aviv University,\\ {\tt [email protected]} \and Adi Shraibman \\ The Academic College\\ of Tel-Aviv-Yaffo \\ {\tt [email protected]} } \maketitle \abstract{ We present algorithms for testing if a $(0,1)$-matrix $M$ has Boolean/binary rank at most $d$, or is $\epsilon$-far from Boolean/binary rank $d$ (i.e., at least an $\epsilon$-fraction of the entries in $M$ must be modified so that it has rank at most $d$). The query complexity of our testing algorithm for the Boolean rank is $\tilde{O}\left(d^4/ \epsilon^6\right)$. For the binary rank we present a testing algorithm whose query complexity is $O(2^{2d}/\epsilon)$. Both algorithms are $1$-sided error algorithms that always accept $M$ if it has Boolean/binary rank at most $d$, and reject with probability at least $2/3$ if $M$ is $\epsilon$-far from Boolean/binary rank $d$. }\\ \section{Introduction} The Boolean rank of a $(0,1)$-matrix $M$ of size $n\times m$ is equal to the minimal $r$, such that $M$ can be factorized as a product $M = X\cdot Y$, where $X$ is a $(0,1)$-matrix of size $n \times r$ and $Y$ is a $(0,1)$-matrix of size $r \times m$, and all additions and multiplications are Boolean (that is, $1+1 = 1, 1+0 = 0+1 = 1, 1\cdot 1 = 1$). A similar definition holds for the binary rank, where here the operations are the regular operations over the reals (that is, $1+ 1 = 2$). These two rank functions have other equivalent definitions: The Boolean (binary) rank is equal to the minimal number of monochromatic rectangles required to cover (partition) all the $1$-entries of the matrix. The Boolean (binary) rank is also equal to the minimal number of bipartite cliques needed to cover (partition) all the edges of a bipartite graph whose adjacency matrix is $M$ (see~\cite{Gregory}). 
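For intuition, the two ranks can already differ on tiny matrices. A brute-force search over all factorizations $M=X\cdot Y$ (our illustration, feasible only for very small sizes) exhibits a $3\times 3$ matrix whose $1$-entries are covered by two overlapping all-ones rectangles, but which requires three rectangles in any partition:

```python
# Brute-force contrast of Boolean rank (cover, 1 + 1 = 1) and binary rank
# (partition, arithmetic over the reals) via factorizations M = X * Y.
import itertools
import numpy as np

def has_rank_at_most(M, r, boolean):
    n, m = M.shape
    if r == 0:
        return not M.any()
    for xbits in itertools.product([0, 1], repeat=n * r):
        X = np.array(xbits).reshape(n, r)
        for ybits in itertools.product([0, 1], repeat=r * m):
            Y = np.array(ybits).reshape(r, m)
            P = X @ Y
            if boolean:
                P = np.minimum(P, 1)  # Boolean arithmetic: 1 + 1 = 1
            if np.array_equal(P, M):
                return True
    return False

def rank_of(M, boolean):
    r = 0
    while not has_rank_at_most(M, r, boolean):
        r += 1
    return r

# The two 2x2 all-ones blocks sharing the middle row and column cover all
# 1-entries (Boolean rank 2); the binary rank is at least the real rank,
# which is 3 here, so no partition into fewer than 3 rectangles exists.
M = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
boolean_r, binary_r = rank_of(M, boolean=True), rank_of(M, boolean=False)
print(boolean_r, binary_r)  # prints: 2 3
```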
Furthermore, the Boolean rank of $M$ determines exactly the non-deterministic communication complexity of $M$, and the binary rank of $M$ gives an approximation up to a polynomial of the deterministic communication complexity of $M$ (see, for example,~\cite{KN97} for more details). Given the importance of these two rank functions it is desirable to be able to compute or approximate them efficiently. However, in several works it was shown that computing and even approximating the Boolean or binary rank is NP-hard~\cite{orlin1977contentment,simon1990approximate,gruber2007inapproximability,CHHK14}. The strongest inapproximability result~\cite{CHHK14} shows that it is NP-hard to approximate both ranks to within a factor of $n^{1-\delta}$ for any given $\delta >0$, and using a stronger complexity assumption they prove a lower bound that is even closer to linear in $n$. \subsection{Property testing of the matrix rank} In this work we consider a different relaxation of exactly computing the Boolean or binary rank of a matrix, namely, that of \emph{property testing}~\cite{RS96,GGR98}. For a parameter $\epsilon \in [0,1]$ and an integer $d$, a matrix $M$ is said to be \emph{$\epsilon$-far} from Boolean (binary) rank at most $d$, if it is necessary to modify more than an $\epsilon$-fraction of the entries of $M$ to obtain a matrix with Boolean (binary) rank at most $d$. Otherwise, $M$ is \emph{$\epsilon$-close} to Boolean (binary) rank at most $d$. A property-testing algorithm for the Boolean (binary) rank is given as parameters $\epsilon$ and $d$, as well as query access to a matrix $M$. If $M$ has Boolean (binary) rank at most $d$, then the algorithm should accept with probability at least $2/3$, and if $M$ is $\epsilon$-far from having Boolean (binary) rank at most $d$, then the algorithm should reject with probability at least $2/3$. If the algorithm accepts matrices having rank at most $d$ with probability 1, then it is a \emph{one-sided error} testing algorithm. 
If it selects all its queries in advance, then it is a \emph{non-adaptive} algorithm. The main complexity measure that we focus on is the query complexity of the testing algorithm. \paragraph{The real rank.} If one considers the real rank of matrices, then there are known efficient property testing algorithms. Krauthgamer and Sasson~\cite{krauthgamer2003property} gave a non-adaptive property testing algorithm for the real rank whose query complexity is $O(d^2 / \epsilon^2)$, and Li, Wang and Woodruff~\cite{Li} showed that by allowing the algorithm to be adaptive, it is possible to reduce the query complexity to $O(d^2 / \epsilon)$. Recently, Balcan et al.~\cite{BLWZ} gave a non-adaptive testing algorithm for the real rank whose query complexity is $\tilde{O}(d^2 / \epsilon)$. It should be noted that the aforementioned property testing algorithms for the real rank cannot be simply adapted to the Boolean and binary rank, since they rely heavily on the augmentation property that holds trivially for the real rank (i.e., if the real rank of $(M|x)$ and of $(M|y)$ is $d$, then the real rank of $(M|x,y)$ is also $d$, where $(M|x)$ is the matrix $M$ augmented with a vector $x$ as the last column). However, the augmentation property does not hold for the Boolean and binary rank (see for example~\cite{parnas2018augmentation}), and thus a different approach is needed. \paragraph{Induced-subgraph freeness in bipartite graphs.} Recalling the formulation of the Boolean and binary rank as properties of bipartite graphs, we observe that these properties can be characterized as being free of a finite collection of induced subgraphs. Alon, Fischer and Newman \cite{alon2007efficient} showed that every such property of bipartite graphs can be tested with a number of queries that is polynomial in $1/ \epsilon$, and with no dependence on the size of the graph. However, by applying their framework to our problems, we obtain algorithms whose complexity is quite high as a function of $d$. 
It is not hard to verify, as we show for completeness in Section~\ref{sec:Alon}, that in the worst case, the query complexity achieved is upper bounded by $(\frac{2^d}{\epsilon})^{O(2^{4d})}$. Specifically, we define the set $F_{d+1}$ of all $(0,1)$-matrices of Boolean (binary) rank $d+1$, without repetitions of rows or columns in the matrices in $F_{d+1}$, and then use the result of \cite{alon2007efficient} to test for matrices that do not contain as a submatrix any member of $F_{d+1}$. The query complexity of the resulting algorithm depends on the maximal size of the matrices in $F_{d+1}$, where this size is upper bounded by $2^{d+1}$. \paragraph{Relation to graph coloring.} The formulation of the Boolean (binary) rank as a covering (partition) problem of edges by complete bicliques is reminiscent of the well-known problem of graph coloring, where the goal is to partition the graph vertices into a small number of independent sets. Graph coloring has been studied in the context of property testing, where the query complexity is polynomial in $1/\epsilon$ and the number, $k$, of colors (see~\cite{GGR98,AK02,Sohler12}). However, an important difference between the problems is that while $k$-colorability is monotone in terms of the removal of edges, this is not true of the Boolean and binary rank. In particular, this implies that when testing $k$-colorability, the distance of a graph $G$ to the property is the minimal number of $1$-entries in the adjacency matrix of $G$ that should be modified to $0$, so as to obtain a $k$-colorable graph. On the other hand, for the rank functions we consider, if we want to make the minimal number of modifications in a $(0,1)$-matrix $M$, so as to obtain a $(0,1)$-matrix with rank at most $d$, then we might need to modify both $1$-entries and $0$-entries. 
\subsection{Our results} Our first and main result is a property testing algorithm for the Boolean rank that has query complexity polynomial in $d$ and $1/\epsilon$. \BT \label{thm:main} There exists a one-sided error non-adaptive property testing algorithm for the Boolean rank whose query complexity is $\tilde{O}\left(d^4/ \epsilon^6\right)$. \ET The proof of Theorem~\ref{thm:main} builds on the framework used in~\cite{parnas2006tolerant}, which in turn builds on~\cite{czumaj2005abstract}. 
Specifically, we introduce the notion of {\em skeletons\/} and {\em beneficial entries\/} for a matrix $M$ in the context of the Boolean rank. These notions allow us to separate the analysis of Algorithm~\ref{alg:Boolean test} into a purely combinatorial part and a probabilistic part. Part of the challenge in defining these notions in the context of the Boolean rank, and using them to prove Theorem~\ref{thm:main}, is the non-monotonicity of the problem described above. More details on the proof structure, as well as the complete proof of Theorem~\ref{thm:main}, are given in Section~\ref{sec:Boolean}. For the binary rank we present testing algorithms whose query complexity is exponential in the rank $d$. \BT \label{thm:binary} There exists a one-sided error non-adaptive property testing algorithm for the binary rank whose query complexity is $O(2^{2d}/\epsilon^2)$, and a $1$-sided error adaptive property testing algorithm for the binary rank whose query complexity is $O(2^{2d}/\epsilon)$. \ET The proof of Theorem~\ref{thm:binary} is given in Section~\ref{sec:binary}. Observe that even if it turns out that the property of having binary rank at most $d$ can be characterized by being free of all submatrices that belong to a family $F$ of matrices having size $O(d)$, still our algorithms are an improvement over the result that can be derived from~\cite{alon2007efficient}, which in this case would be $(d/\epsilon)^{O(d^4)} = 2^{O(d^4\log d)}/\epsilon^{O(d^4)}$ (see Section~\ref{sec:Alon} for more details). 
It remains an open problem whether there exists a testing algorithm for the binary rank with query complexity polynomial in $d$ and $1/\epsilon$. \section{Testing the Boolean rank} \label{sec:Boolean} Let $M$ be a $(0,1)$-matrix of size $n\times n$, and let $[n] = \{1,\dots,n\}$. We say that an entry $(x,y)\in [n]\times[n]$ is a \emph{$1$-entry} of $M$ if $M[x,y]=1$. For a subset of entries $U = \{(x_i,y_i)\}_{i=1}^{m}$, the submatrix of $M$ \emph{induced} by $U$ is the submatrix whose rows are $\{x_i\}_{i=1}^{m}$ and whose columns are $\{y_i\}_{i=1}^{m}$. The testing algorithm for the Boolean rank is simple: \fbox{ \begin{minipage}{5.4in} \BA{{\sf(Test $M$ for Boolean rank $d$, given $d$ and $\epsilon$)}} \label{alg:Boolean test} \begin{enumerate} \item Select uniformly, independently and at random $m = \Theta\left(\frac{d^2}{\epsilon^3}\cdot \log \frac{d}{\epsilon}\right)$ entries from $M$. \item Let $U$ be the subset of entries selected and consider the submatrix $W$ of $M$ induced by $U$. \item If $W$ has Boolean rank at most $d$, then accept. Otherwise, reject. \end{enumerate} \EA \end{minipage} } The query complexity of Algorithm~\ref{alg:Boolean test} is clearly $\tilde{O}\left(d^4/ \epsilon^6\right)$. As for the running time of the algorithm, we cannot expect it to be efficient since computing the Boolean rank of a $(0,1)$-matrix is NP-hard. We now proceed to prove Theorem~\ref{thm:main}, and show that Algorithm~\ref{alg:Boolean test} is a $1$-sided error tester for the Boolean rank. \paragraph{Proof Structure.} First note that if $M$ has Boolean rank at most $d$, then so does each of its submatrices, causing Algorithm~\ref{alg:Boolean test} to accept (with probability 1). Hence, our focus is on proving that if $M$ is $\epsilon$-far from Boolean rank $d$, then the algorithm rejects with probability at least $2/3$. 
Following~\cite{parnas2006tolerant}, which in turn follows~\cite{czumaj2005abstract}, we introduce the notion of {\em skeletons\/} and {\em beneficial entries\/} for a matrix $M$ in the context of the Boolean rank. These notions allow us to separate the analysis of Algorithm~\ref{alg:Boolean test} into a purely combinatorial part (which is the main part of the analysis), and a probabilistic part (which is fairly simple). A skeleton for $M$ is a multiset $S = \{S_1,\dots,S_d\}$ that contains $d$ subsets of $1$-entries of $M$, where all $1$-entries in each subset $S_i$ can be in the same monochromatic rectangle. Roughly speaking, an entry $(x,y)$ is beneficial with respect to a skeleton $S = \{S_1,\dots,S_d\}$, if for each one of the subsets $S_i$, either: (1) $(x,y)$ cannot be added to $S_i$, since it cannot be in the same monochromatic rectangle as the entries already in $S_i$, or (2) adding $(x,y)$ to $S_i$ significantly reduces the number of other entries that can be in the same monochromatic rectangle with the entries of $S_i$ and $(x,y)$. Observe that if an entry $(x,y)$ cannot be added to \emph{any} $S_i$ in $S$, then this is evidence that the skeleton $S$ cannot be extended to a cover of all $1$-entries of $M$ by monochromatic rectangles. More generally, allowing also for the second option defined above, beneficial entries make a skeleton more \emph{constrained}, as we formalize precisely in the next subsection. We show how, given a matrix $M$, it is possible to define a set ${\cal S}(M)$ of relatively small skeletons that have certain useful properties. In particular, if $M$ is $\epsilon$-far from Boolean rank at most $d$, then every skeleton in ${\cal S}(M)$ has many beneficial entries. 
We establish this claim by showing how, given a skeleton $S$ in ${\cal S}(M)$, we can modify $M$ so as to obtain a matrix with Boolean rank at most $d$, where the number of modifications is upper bounded as a function of the number of entries that are beneficial with respect to $S$. On the other hand, we show that if the matrix $M$ has Boolean rank at most $d$, then for every submatrix $W$ of $M$, every subset of entries $U \subseteq W$ contains a skeleton in ${\cal S}(M)$ with no beneficial entries in $U$. Finally, we prove that if every skeleton in ${\cal S}(M)$ has many beneficial entries, then with high constant probability over the choice of $U$ in Algorithm~\ref{alg:Boolean test}, for every skeleton $S \in {\cal S}(M)$ that is contained in $U$, there exists a beneficial entry in $U$ for $S$. We note that the bound on the size of the skeletons in ${\cal S}(M)$ plays a role in this last proof. Theorem~\ref{thm:main} can then be shown to follow by combining the above. 
\subsection{Central definitions} \label{subsec:cent-def} Throughout this subsection, the matrix $M$ and the parameters $d$ and $\epsilon$ are fixed. \BD[Compatible entries] \label{def:compatible} An entry $(x_1,y_1)$ is {\sf compatible} with a $1$-entry $(x_2,y_2)$ if $M[x_1,y_2] = M[x_2,y_1]=1$. Otherwise, $(x_1,y_1)$ is {\sf incompatible} with $(x_2,y_2)$. 
An entry $(x,y)$ is {\sf compatible with a set} $S$ of $1$-entries, if $(x,y)$ is compatible with every entry in $S$. Otherwise, $(x,y)$ is {\sf incompatible} with $S$. \ED Note that if both entries $(x_1,y_1)$ and $(x_2,y_2)$ are 1-entries, then the compatibility relation is symmetric, but it applies also to pairs of entries $(x_1,y_1)$ and $(x_2,y_2)$ such that $M[x_1,y_1]=0$ and $M[x_2,y_2]=1$. When both entries $(x_1,y_1)$ and $(x_2,y_2)$ are 1-entries, compatibility means that these entries can belong to the same monochromatic rectangle. \BD[Friendly row/column] \label{def:friendly} A row $x$ (column $y$) is {\sf friendly\/} with a set $S$ of $1$-entries, if for every entry $(x',y') \in S$, it holds that $M[x,y'] = 1$ ($M[x',y] = 1$). Otherwise, it is {\sf not friendly} with $S$. \ED Observe that by Definitions~\ref{def:compatible} and~\ref{def:friendly}, an entry $(x,y)$ is compatible with a set $S$ of $1$-entries if and only if row $x$ and column $y$ are both friendly with $S$. See Figure~\ref{fig:friendly} for an illustration. \begin{figure} \caption{\small Row $x$ and column $y$ are friendly with the subset $S$ of grey entries, and entry $(x,y)$ is compatible with $S$. The empty entries can be either $1$ or $0$.} \label{fig:friendly} \end{figure} For a set of entries $S = \{(x_i,y_i)\}_{i=1}^{|S|}$, denote the set of rows of $S$ by $R(S) = \{x_i\}_{i=1}^{|S|}$ and the set of columns of $S$ by $C(S) = \{y_i\}_{i=1}^{|S|}$. \BD[Zeros of a row/column] For a row $x$, let ${Z}(x)$ denote the set of columns $y$ such that $M[x,y]=0$, and for a column $y$ let ${Z}(y)$ denote the set of rows $x$ such that $M[x,y]=0$. We extend the notation ${Z}(\cdot)$ to sets of rows/columns. 
Namely, for a set of rows $X$, ${Z}(X) = \displaystyle{\bigcup_{x\in X}}{Z}(x)$, and similarly for a set of columns $Y$. \ED \BCM \label{clm:fewzeros} Let $(x,y)$ be an entry such that $M[x,y] = 0$ and such that row $x$ and column $y$ are both friendly with a set of entries $S$. Then $y \in {Z}(x) \setminus {Z}(R(S))$ and $x \in {Z}(y) \setminus {Z}(C(S))$. \ECM \BPF Assume, contrary to the claim, that $y \notin {Z}(x) \setminus {Z}(R(S))$. Since $M[x,y]=0$, we have $y \in {Z}(x)$, and therefore, $y \in {Z}(R(S))$. Hence, there exists an entry $(x',y')\in S$ such that $M[x',y] = 0$. But this means that column $y$ is not friendly with $S$. An analogous argument shows that $x \in {Z}(y) \setminus {Z}(C(S))$. \EPF\\ \BD[Zero-heavy row/column]\label{def:zero-heavy} Row $x$ is {\sf zero-heavy} with respect to a set of entries $S$ if $|{Z}(x) \setminus {Z}(R(S))| \geq g(\epsilon,d)\cdot n$, where $g(\epsilon,d) = \epsilon/(4d)$. Otherwise, it is {\sf zero-light} with respect to $S$. Similarly, column $y$ is zero-heavy with respect to $S$ if $|{Z}(y) \setminus {Z}(C(S))| \geq g(\epsilon,d)\cdot n$, and otherwise, it is zero-light. \ED \BD[Influential entries] \label{def:influential} Entry $(x,y)$ is {\sf influential} with respect to a set $S$ of $1$-entries if: (1) $M[x,y] = 1$, (2) $(x,y)$ is compatible with $S$, and (3) either row $x$ or column $y$ is zero-heavy with respect to $S$ (possibly both). Otherwise, $(x,y)$ is {\sf non-influential} for $S$. \ED As we will see shortly, only influential entries will be added to a given skeleton. 
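The definitions above translate directly into executable predicates. The following transcription is ours (not part of the paper), for a $(0,1)$-matrix $M$ given as a NumPy array, a collection $S$ of $1$-entries $(x,y)$, and $g(\epsilon,d)=\epsilon/(4d)$ as in the definition of zero-heaviness:

```python
# Compatibility, friendliness (via transposition), zero-heaviness and influence
# predicates for a (0,1)-matrix M; S is a collection of 1-entries (x, y).
import numpy as np

def compatible(M, e1, e2):
    # (x1,y1) is compatible with the 1-entry (x2,y2) iff M[x1,y2] = M[x2,y1] = 1.
    (x1, y1), (x2, y2) = e1, e2
    return M[x1, y2] == 1 and M[x2, y1] == 1

def compatible_with_set(M, e, S):
    return all(compatible(M, e, f) for f in S)

def zeros_of_row(M, x):
    # Z(x): the columns y with M[x, y] = 0.
    return {y for y in range(M.shape[1]) if M[x, y] == 0}

def row_zero_heavy(M, x, S, eps, d):
    n = M.shape[1]
    covered = set().union(*(zeros_of_row(M, xp) for (xp, _) in S)) if S else set()
    return len(zeros_of_row(M, x) - covered) >= (eps / (4 * d)) * n  # g(eps,d)*n

def col_zero_heavy(M, y, S, eps, d):
    # Columns are handled by transposing M and swapping the coordinates of S.
    return row_zero_heavy(M.T, y, [(yp, xp) for (xp, yp) in S], eps, d)

def influential(M, x, y, S, eps, d):
    return (M[x, y] == 1 and compatible_with_set(M, (x, y), S)
            and (row_zero_heavy(M, x, S, eps, d) or col_zero_heavy(M, y, S, eps, d)))

M = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
S = [(1, 1)]  # a single 1-entry, as in a one-subset skeleton
comp = compatible_with_set(M, (0, 0), S)
infl = influential(M, 0, 0, S, eps=0.5, d=1)
print(comp, infl)  # prints: True True
```

Here $(0,0)$ is compatible with $S$ since $M[0,1]=M[1,0]=1$, and row $0$ is zero-heavy because ${Z}(0)\setminus{Z}(R(S))=\{2\}$ has size $1\geq(\epsilon/4d)\cdot n=0.375$.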
This will allow us to maintain small skeletons and at the same time, when $M$ is $\epsilon$-far from Boolean rank $d$, each skeleton will have many beneficial entries, as defined next. An illustration for Definitions~\ref{def:zero-heavy} and~\ref{def:influential} is given in Figure~\ref{fig:zero-heavy}. \begin{figure} \caption{\small An illustration for the definition of a zero-heavy row and influential entries. The entries of $S$ are filled with grey. The 0-entries in row $x$ that belong to ${Z}(x) \setminus {Z}(R(S))$ are colored red. Consider the $1$-entry $(x,y)$ that is filled with vertical lines. Assuming that row $x$ is zero-heavy, entry $(x,y)$ is influential with respect to $S$. Furthermore, although the $1$-entry $(z,w)$, which is filled with horizontal lines, is compatible with $S$, it is not compatible with $S\cup \{(x,y)\}$.} \label{fig:zero-heavy} \end{figure} We are now ready to introduce our main definitions of skeletons and beneficial entries. \BD [Skeletons and beneficial entries for the Boolean rank] \label{def:skeletons-and-benefical} \sloppy A {\sf skeleton} for a matrix $M$ is a multiset $S = \{S_1,\ldots,S_d\}$ that includes $d$ subsets of $1$-entries of $M$, and is defined inductively as follows: \begin{enumerate} \item The multiset $S = \{\emptyset,\dots,\emptyset\}$, which contains the empty set $d$ times, is a skeleton. \item If $S=\{S_1,\ldots,S_d\}$ is a skeleton and $(x,y)$ is an influential entry with respect to $S_i$ for some $i \in [d]$, then $S' = \{S_1,\ldots,S_{i-1},S_i \cup \{(x,y)\},S_{i+1},\ldots,S_d\}$ is a skeleton. (Note that there may be more than one way to add $(x,y)$ to the skeleton $S$, and $(x,y)$ can be added to more than one of the subsets $S_i$.) \end{enumerate} Let ${\cal S}(M)$ denote the set of all skeletons for $M$. 
A $1$-entry $(x,y) \in M$ is {\sf beneficial} for a skeleton $S = \{S_1,\ldots,S_d\}$, if for every $1 \leq i \leq d$, the entry $(x,y)$ is either incompatible or influential with respect to $S_i$. Otherwise, $(x,y)$ is {\sf non-beneficial} for $S$. \ED Note that by the definition of influential entries, any skeleton $S\in {\cal S}(M)$ contains only $1$-entries of $M$, and beneficial entries are always $1$-entries. In the next two subsections we prove that the set of skeletons ${\cal S}(M)$, as defined in Definition~\ref{def:skeletons-and-benefical}, has certain properties, which are then exploited to prove Theorem~\ref{thm:main}. \subsection{Matrices of rank $d$ have skeletons with no beneficial entries} \label{subsec:rank-k-no-beneficial} \BL\label{lem:rank-k-no-beneficial} Let $W$ be a submatrix of $M$ with Boolean rank at most $d$. Then for every $U \subseteq W$, there exists a skeleton $S =\{S_1,\ldots,S_d\}\in {\cal S}(M)$, such that $\bigcup_{i=1}^d S_i \subset U$, and there is no beneficial entry in $U$ for $S$. \EL \BPF First observe that since $W$ has Boolean rank at most $d$, there exist $d$ monochromatic submatrices $B_1,\dots,B_d$ of $W$ that cover all $1$-entries of $W$, and hence all $1$-entries of $U$. We build the skeleton $S$ in the following iterative manner: \begin{enumerate} \item We start with the skeleton $S^1 = \{\emptyset,\ldots,\emptyset\}$. \item Let $S^j = \{S^j_i\}_{i=1}^d$ be the skeleton at the beginning of the $j$'th iteration. \begin{enumerate} \item If there exists an index $i$ and an entry $(x,y) \in B_i \cap U$ that is an influential entry with respect to $S^j_i$, then we let $S^{j+1} = \{S^j_1,\ldots,S^j_{i-1},S^j_i \cup \{(x,y)\},S^j_{i+1},\ldots,S^j_d\}$. \item If for every $i$, the subset $B_i \cap U$ does not contain any influential entry with respect to $S^j_i$, then we stop. \end{enumerate} \end{enumerate} Let $S=\{S_1,\dots,S_d\}$ be the final resulting skeleton.
It remains to show that there are no beneficial entries in $U$ for $S$. Assume, contrary to this claim, that there is some beneficial $1$-entry $(x,y)\in U$ for $S$. Since the submatrices $B_1,\dots,B_d$ cover all $1$-entries of $U$, there must exist an $i \in [d]$ such that $(x,y) \in B_i$. Therefore, $(x,y)$ is compatible with $S_i$. Furthermore, $(x,y)$ is not influential with respect to the subset $S_i$ (otherwise, we would have added it to $S_i$). Thus, entry $(x,y)$ cannot be beneficial for $S$. \EPF \subsection{Skeletons of matrices far from rank $d$ have many beneficial entries} \label{subsec:matrix-far-many-beneficial} In this subsection we show that if the matrix $M$ is $\epsilon$-far from Boolean rank at most $d$, then every skeleton has many beneficial entries. To be precise, we prove the contrapositive statement: \BL\label{lem:few-benefical-matrix-close} Let $S = \{S_1,\dots,S_d\}$ be a skeleton for $M$ with at most $\frac{\epsilon^2}{64} n^2$ beneficial entries. Then $M$ is $\epsilon$-close to Boolean rank $d$. \EL In order to prove Lemma~\ref{lem:few-benefical-matrix-close}, we first show how to modify $M$ in at most $\epsilon n^2$ entries, and then prove that after this modification the resulting matrix $M'$ has Boolean rank at most $d$. We note that in all that follows, the reference to beneficial entries is with respect to the given skeleton $S$ stated in Lemma~\ref{lem:few-benefical-matrix-close}. We start by showing how to modify $M$ using the following {\em Modification rules:} \begin{enumerate} \item \label{mod-rule:all-zero} Modify each row/column with at least $\epsilon n/8$ beneficial entries to an all-zero row/column. The number of such rows/columns is at most $\epsilon n/8$. Otherwise, we get more than $\frac{\epsilon^2}{64} n^2$ beneficial entries. Therefore, this step accounts for at most $2n\cdot\epsilon n/8 = \epsilon n^2 /4$ modifications. 
\item \label{mod-rule:few-beneficial} Modify to $0$'s all beneficial entries in rows/columns with fewer than $\epsilon n/8$ beneficial entries. This accounts for at most $2n\cdot \epsilon n/8 = \epsilon n^2 /4$ modifications. \item \label{mod-rule:zero-to-one} Modify a 0-entry $(x,y)$ to a 1 (where row $x$ and column $y$ each have fewer than $\epsilon n/8$ beneficial entries) if and only if there exists an $i \in [d]$, such that row $x$ and column $y$ are both friendly and zero-light with respect to $S_{i}$. By Claim~\ref{clm:fewzeros}, in this case it holds that $y \in {Z}(x) \setminus {Z}(R(S_i))$ and $x \in {Z}(y) \setminus {Z}(C(S_i))$. Thus, the total number of modifications of this type is at most $2n\cdot d \cdot g(\epsilon,d)n = \epsilon n^2 /2$, since $g(\epsilon,d) = \epsilon/(4d)$. \end{enumerate} Therefore, the total number of modified entries is upper bounded by: $$\epsilon n^2 /4 + \epsilon n^2 /4 + \epsilon n^2 /2 = \epsilon n^2.$$ The main task is hence to prove that after this modification, the modified matrix $M'$ has Boolean rank at most $d$. We first define $d$ subsets $B_1,\dots,B_d$ of $1$-entries, such that each $1$-entry of the modified matrix $M'$ is included in one of these subsets: \begin{enumerate} \item For each $(x,y)\in \bigcup_{j=1}^d S_j$ such that $M'[x,y]=1$: place $(x,y)$ in $B_i$ for $i \in [d]$ such that $(x,y)\in S_i$. \item For each $(x,y)\notin \bigcup_{j=1}^d S_j$ such that $M'[x,y]=1$: place $(x,y)$ in $B_{i}$ if both row $x$ and column $y$ are friendly and zero-light (in $M$) with respect to $S_i$. To verify that such an index $i$ exists for such an entry $(x,y)$, we consider two cases: \begin{enumerate} \item\label{it:M-x-y-1} $M[x,y]=1$: Since $M'[x,y]=1$ as well, we know that $(x,y)$ is non-beneficial (since beneficial entries were modified to 0). By the definition of non-beneficial entries, there exists an index $i$, such that $(x,y)$ is compatible with $S_i$ and non-influential with respect to $S_i$.
That is, both row $x$ and column $y$ are friendly (in $M$) with $S_i$, and are zero-light (in $M$) with respect to $S_i$. \item \label{it:M-x-y-0} $M[x,y]=0$: Since $M'[x,y]=1$, by Modification rule number~\ref{mod-rule:zero-to-one}, such an index $i$ must exist as well. \end{enumerate} \end{enumerate} It remains to prove that the subsets $B_1,\dots,B_d$ induce a cover of $M'$ by $d$ monochromatic rectangles. That is, for each subset $B_i$, every two $1$-entries in $B_i$ are compatible (in $M'$). We first prove the next claim, which follows from the modification rules of $M$ and the definition of these subsets. \BCM \label{clm:friendly and light} If $M'[x,y]=1$ and $(x,y) \in B_{i}$, then row $x$ and column $y$ are friendly and zero-light in $M$ with respect to $S_{i}$. \ECM \BPF Consider the following cases: \begin{itemize} \item $(x,y)\in \bigcup_{j=1}^d S_j$: Thus, $(x,y) \in S_i$ and by the definition of the skeletons this means that $(x,y)$ is compatible with all entries in $S_i$. Hence, row $x$ and column $y$ are friendly with $S_i$. Furthermore, given that $(x,y) \in S_i$, we have that ${Z}(x) \setminus {Z}(R(S_i)) = \emptyset$ and ${Z}(y) \setminus {Z}(C(S_i)) = \emptyset$, so that row $x$ and column $y$ are zero-light with respect to $S_i$. \item $(x,y)\notin \bigcup_{j=1}^d S_j$ and $M[x,y]=1$: This case corresponds to Case~\ref{it:M-x-y-1} in the definition of the subsets $B_i$, and so row $x$ and column $y$ are friendly and zero-light in $M$ with respect to $S_{i}$ by the definition. \item $(x,y)\notin \bigcup_{j=1}^d S_j$ and $M[x,y]=0$: This case corresponds to Case~\ref{it:M-x-y-0} in the definition of the subsets $B_i$, and so row $x$ and column $y$ are friendly and zero-light in $M$ with respect to $S_{i}$ by the definition. \end{itemize} Since every pair $(x,y)$ fits one of the above cases, the claim follows. 
\EPF\\ The next claim concludes the proof that $M'$ has Boolean rank at most $d$, thus establishing the proof of Lemma~\ref{lem:few-benefical-matrix-close}. \BCM\label{clm:Bi-compatible} For every $i \in [d]$, every two entries in $B_i$ are compatible in $M'$. \ECM \BPF Consider any pair of entries $(x_1,y_1),(x_2,y_2) \in B_i$. By Claim~\ref{clm:friendly and light}, rows $x_1$ and $x_2$ and columns $y_1$ and $y_2$ are friendly and zero-light in $M$ with respect to $S_{i}$. Furthermore, these rows/columns were not modified by Modification rule number~\ref{mod-rule:all-zero}. We now show that $M'[x_1,y_2]=1$; a similar proof holds for $(x_2,y_1)$. We consider the following cases: \begin{itemize} \item $M[x_1,y_2]= 0$: Since rows $x_1$ and $x_2$ and columns $y_1$ and $y_2$ are friendly and zero-light with respect to $S_{i}$, then by Modification rule number~\ref{mod-rule:zero-to-one} we have $M'[x_1,y_2]= 1$. \item $M[x_1,y_2]= 1$: Since rows $x_1$ and $x_2$ and columns $y_1$ and $y_2$ are friendly and zero-light with respect to $S_{i}$, then $(x_1,y_2)$ cannot be influential with respect to $S_i$. It remains to show that $(x_1,y_2)$ is compatible with $S_i$, and therefore cannot be beneficial, and thus, was not modified to a $0$ by Modification rule number~\ref{mod-rule:few-beneficial}. Let $(x',y')\in S_i$. Since row $x_1$ and column $y_2$ are friendly with respect to $S_{i}$, we have $M[x',y_2] = 1$ and $M[x_1,y'] = 1$. Therefore, $(x_1,y_2)$ is compatible with $S_i$. \end{itemize} Claim~\ref{clm:Bi-compatible} follows. \EPF \subsection{A sampling lemma} \label{subsec:sampling} Before we state and prove the main lemma of this subsection, we first establish a bound on the size of each of the subsets in a skeleton. \BCM\label{clm:skel-size} Let $S =\{S_1,\ldots,S_d\}$ be a skeleton for $M$. Then $|S_i| \leq 8d/\epsilon$ for every $i\in [d]$.
\ECM \BPF Every entry $(x,y)$ that is added inductively to the subset $S_i$ of the skeleton $S$ adds at least $\frac{\epsilon}{4d}\cdot n$ columns to $Z(R(S_i))$ or at least $\frac{\epsilon}{4d}\cdot n$ rows to $Z(C(S_i))$. Since each of these two sets has size at most $n$, at most $8d/\epsilon$ entries can be added to $S_i$ until there are no more influential entries with respect to $S_i$. \EPF\\ \BL\label{lem:submatrix} Let $0 < \alpha <1$, and suppose that every skeleton in ${\cal S}(M)$ has at least $\alpha \cdot n^2$ beneficial entries in $M$. Consider selecting, uniformly, independently and at random, $m = c\cdot \left(\frac{d^2}{\alpha\cdot \epsilon}\cdot \log \frac{d}{\alpha\cdot \epsilon}\right)$ entries from $M$ for a sufficiently large constant $c$, and denoting the subset of selected entries by $U$. Then with probability at least $2/3$, for every skeleton $S =\{S_1,\ldots,S_d\}\in {\cal S}(M)$ such that $\bigcup_{i=1}^d S_i \subset U$, there exists a beneficial entry in $U$ for $S$. \EL \BPF Consider selecting $m$ entries from $M$, uniformly, independently and at random, and let $(x_i,y_i)$ be the $i$'th entry selected, so that each entry $(x_i,y_i)$ is a random variable. Let $s = 8d/\epsilon$ and $m = 200\cdot\frac{d^2}{\alpha\cdot \epsilon}\cdot \ln \frac{d}{\alpha\cdot \epsilon}$. By Claim~\ref{clm:skel-size}, for every skeleton $S =\{S_1,\ldots,S_d\}$ in ${\cal S}(M)$, we have that $|S_i| \leq s$ for every subset $S_i\in S$. Therefore, $\left|\bigcup_{i=1}^d S_i\right| \leq d \cdot s$. Observe that for each subset of entries $T$ of size at most $d \cdot s$, the number of skeletons $\{S_1,\dots,S_d\}$ such that $\bigcup_{i=1}^d S_i = T$ is upper bounded by \[ \left( \sum_{i=0}^s {d\cdot s \choose i}\right)^d \leq \left( (s+1)\cdot {d\cdot s \choose s}\right)^d \;\leq\; (s+1)^d \cdot \left(\frac{e \cdot d\cdot s}{s}\right)^{d\cdot s} \;=\; (s+1)^d \cdot (e\cdot d)^{d\cdot s}\;.
\] For each subset of indices $I \subset [m]$, where $|I| \leq d\cdot s$, suppose that we first select the entries $\{(x_i,y_i)\}_{i\in I}$, and let $T_I$ be the resulting set of entries. By the premise of the lemma, for each skeleton $S = \{S_1,\ldots,S_d\}\in {\cal S}(M)$ such that $\bigcup_{i=1}^d S_i=T_I$, there are at least $\alpha \cdot n^2$ beneficial entries in $M$. For our choice of $m$, we have that $m-d\cdot s > m/2$. Therefore, if we now select the remaining entries $\{(x_i,y_i)\}_{i\in [m]\setminus I}$, the probability that we do not obtain any entry that is beneficial for $S$ is at most $$(1-\alpha)^{m/2} < e^{-\alpha m/2}.$$ By taking a union bound over all subsets $I$ of size at most $d\cdot s$, and all skeletons $S$ such that $\bigcup_{i=1}^d S_i=T_I$, we get that the probability that there exists a skeleton $S =\{S_1,\ldots,S_d\} \in {\cal S}(M)$ such that $\bigcup_{i=1}^d S_i \subset U$, and there is no beneficial entry in $U$ for $S$, is upper bounded by \[ m^{d\cdot s} \cdot (s+1)^d \cdot (e\cdot d)^{d\cdot s} \cdot e^{-\alpha m/2} = e^{\left(d\cdot s\ln m + d\ln (s+1) + d\cdot s\cdot \ln(e\cdot d)- \alpha m/2\right)} \leq e^{-2} \leq \frac{1}{3}\;, \] where the first inequality holds for our setting of $s$ and $m$. \EPF\\ \subsection{Proof of Theorem~\ref{thm:main}}\label{subsec:thm-main-proof} We can now complete the proof of Theorem~\ref{thm:main}, which builds on Lemmas~\ref{lem:rank-k-no-beneficial},~\ref{lem:few-benefical-matrix-close} and~\ref{lem:submatrix}. \\ \BPFOF{Theorem~\ref{thm:main}} If $M$ has Boolean rank at most $d$, then Algorithm~\ref{alg:Boolean test} always accepts since every submatrix of $M$ has Boolean rank at most $d$. Assume, therefore, that $M$ is $\epsilon$-far from Boolean rank at most $d$. By Lemma~\ref{lem:few-benefical-matrix-close}, for every skeleton in ${\cal S}(M)$ there are at least $\frac{\epsilon^2}{64} n^2$ beneficial entries in $M$.
Therefore, by Lemma~\ref{lem:submatrix} (applied with $\alpha = \frac{\epsilon^2}{64}$), for $m$ as set in Algorithm~\ref{alg:Boolean test}, with probability at least $2/3$, for every skeleton $S =\{S_1,\ldots,S_d\} \in {\cal S}(M)$ such that $ \bigcup_{i=1}^d S_i \subset U$, there exists a beneficial entry $(x,y)\in U$ for $S$. But by Lemma~\ref{lem:rank-k-no-beneficial}, if the Boolean rank of $W$ were at most $d$, then for every $U \subseteq W$, there would exist a skeleton $S =\{S_1,\ldots,S_d\} \in {\cal S}(M)$, where $\bigcup_{i=1}^d S_i \subset U$, with no beneficial entries in $U$. Hence, with probability at least $2/3$, the Boolean rank of $W$ is larger than $d$, and Algorithm~\ref{alg:Boolean test} rejects, as required. \EPFOF \section{Testing the binary rank} \label{sec:binary} We present simple testing algorithms for the binary rank. Although their query complexity is exponential in $d$, it is strictly smaller than that of the algorithm derived from the result of~\cite{alon2007efficient} described in Section~\ref{sec:Alon}. We first give a non-adaptive algorithm whose query complexity is $O(2^{2d}/\epsilon^2)$, and then use its analysis to design an adaptive algorithm whose query complexity is $O(2^{2d}/\epsilon)$. We note that variants of these algorithms are also applicable to the Boolean rank. \subsection{A non-adaptive property testing algorithm for the binary rank} \fbox{ \begin{minipage}{5.4in} \BA{{\sf(Test $M$ for binary rank $d$, given $d$ and $\epsilon$ -- non-adaptive version)}} \label{alg:binary-non-adaptive} \begin{enumerate} \item Select uniformly, independently and at random $m = 24(2^d+1)/\epsilon$ entries from $M$. \item Let $U$ be the subset of entries selected and consider the submatrix $W$ of $M$ induced by $U$. \item If $W$ has binary rank at most $d$, then accept. Otherwise, reject.
\end{enumerate} \EA \end{minipage} } The query complexity of the algorithm is $O(2^{2d}/\epsilon^2)$, and it always accepts a matrix $M$ that has binary rank at most $d$, as every submatrix of $M$ has binary rank at most $d$. Hence, it remains to prove the following lemma: \BL\label{lem:binary-non-adaptive} Let $M$ be a matrix that is $\epsilon$-far from binary rank at most $d$. Then Algorithm~\ref{alg:binary-non-adaptive} rejects with probability at least $2/3$. \EL In order to prove Lemma~\ref{lem:binary-non-adaptive}, we first establish a couple of claims. The first is a simple claim regarding the number of distinct rows and columns in matrices with rank at most $d$. \BCM \label{clm:size of matrix} Let $W$ be a $(0,1)$-matrix of binary (or Boolean) rank at most $d$. Then every submatrix of $W$ has at most $2^d$ distinct rows and at most $2^d$ distinct columns. \ECM \BPF If $W$ has binary rank at most $d$, it clearly has Boolean rank at most $d$. Thus, it suffices to prove the claim for the latter case. If $W$ has Boolean rank at most $d$, then its $1$-entries can be covered by $d$ monochromatic rectangles. Each row of $W$ is then determined by the subset of rectangles that cover entries of this row: its $1$-entries are exactly the columns covered by these rectangles. Since there are at most $2^d$ such subsets, $W$ has at most $2^d$ distinct rows, and the same bound holds for every submatrix of $W$, as a cover of $W$ induces a cover of each of its submatrices. A similar argument holds for the columns. \EPF\\ In order to state our next claim, we introduce a few definitions.
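Before turning to these definitions, here is a quick numerical sanity check of Claim~\ref{clm:size of matrix} (a sketch, not part of the formal argument): a random matrix whose $1$-entries are the union of $d$ all-ones rectangles, and hence has Boolean rank at most $d$, never exhibits more than $2^d$ distinct rows or columns.

```python
import itertools
import random

def rectangle_union_matrix(n, d, seed=0):
    """Random n x n 0/1 matrix whose 1-entries are covered by d
    monochromatic (all-ones) rectangles, hence of Boolean rank <= d."""
    rng = random.Random(seed)
    M = [[0] * n for _ in range(n)]
    for _ in range(d):
        rows = rng.sample(range(n), rng.randint(1, n))
        cols = rng.sample(range(n), rng.randint(1, n))
        for x, y in itertools.product(rows, cols):
            M[x][y] = 1
    return M

def distinct_rows(M):
    return len({tuple(row) for row in M})

def distinct_cols(M):
    return len({tuple(col) for col in zip(*M)})
```

Each row is determined by which of the $d$ rectangles contain it, so at most $2^d$ row patterns can occur; the same argument applies to columns, and the bound is inherited by every submatrix.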
\BD[Number of distinct rows/columns] \label{distinct} Denote by $N(R(W))$ the number of {\sf distinct} rows in a submatrix $W$, and by $N(C(W))$ the number of distinct columns in $W$. \ED \BD[New row/column] A row index $x\in [n]$ is said to be {\sf new} with respect to a submatrix $W$ of $M$ if by extending $W$ with $x$ we obtain a row different from all current rows of $W$. That is, if $W$ is the submatrix induced by $(x_1,y_1),\dots,(x_t,y_t)$, then $(M[x,y_1],\dots,M[x,y_{t}]) \neq ( M[x_i,y_1],\dots,M[x_i,y_{t}])$ for all $i \in [t]$. A new column index $y$ is defined similarly. \ED \BD[New corner entry] \label{def:new corner} Let $W$ be a submatrix of $M$ that is induced by entries $(x_1,y_1),\dots,(x_t,y_t)$, and let $(x,y) \in [n]\times [n]$ be an entry such that neither $x$ nor $y$ is new for $W$. Then $(x,y)$ is said to be a {\sf new corner entry} with respect to $W$ if there exist $i,j \in [t]$, such that: \begin{enumerate} \item $(M[x,y_1],\dots,M[x,y_{t}]) = (M[x_i,y_1],\dots,M[x_i,y_{t}])$, \item $( M[x_1,y],\dots,M[x_{t},y]) = ( M[x_1,y_j],\dots,M[x_{t},y_j])$, \item $M[x,y] \neq M[x_i,y_j]$. \end{enumerate} \ED For an illustration of a new corner entry, see Figure~\ref{fig:corner}. \begin{figure} \caption{\small An illustration for Definition~\ref{def:new corner} (new corner entry).} \label{fig:corner} \end{figure} \BCM \label{claim:new} Let $W$ be a submatrix of $M$ induced by $(x_1,y_1),\dots,(x_t,y_t)$. If the binary rank of $W$ is at most $d$, and $M$ is $\epsilon$-far from binary rank at most $d$, then one of the following must hold: \begin{enumerate} \item The number of row indices $x\in [n]$ that are new with respect to $W$ is greater than $(\epsilon/3)n$; \item The number of column indices $y\in [n]$ that are new with respect to $W$ is greater than $(\epsilon/3)n$; \item The number of corner entries $(x,y)\in [n]\times [n]$ that are new with respect to $W$ is greater than $(\epsilon/3)n^2$.
\end{enumerate} \ECM \BPF Assume, contrary to the claim, that none of the three statements stated in the claim holds. In such a case, we can modify $M$ as follows, and obtain a matrix $M'$, which we shall show has binary rank at most $d$: \begin{itemize} \item For each row index $x\in [n]$ that is new with respect to $W$, row $x$ in $M'$ is set to be the all-zero row. \item For each column index $y\in [n]$ that is new with respect to $W$, column $y$ in $M'$ is set to be the all-zero column. \item For each entry $(x,y)\in [n]\times [n]$ that is a new corner entry with respect to $W$: Let $i,j \in [t]$ be such that $(M[x,y_1],\dots,M[x,y_{t}]) = (M[x_i,y_1],\dots,M[x_i,y_{t}])$ and $( M[x_1,y],\dots,M[x_{t},y]) = ( M[x_1,y_j],\dots,M[x_{t},y_j])$. Set $M'[x,y] = M[x_i,y_j]$. \item All other entries of $M'$ are as in $M$. \end{itemize} Observe that by the above modification rules, for every entry $(x,y)$ such that neither $x$ nor $y$ is new with respect to $W$, there exist indices $i,j \in [t]$ as specified in the third item above, and it holds that $M'[x,y] = M[x_i,y_j]$. Since none of the three statements holds, the number of entries that $M'$ and $M$ differ on is at most $2\cdot (\epsilon/3)n\cdot n + (\epsilon/3)n^2 = \epsilon n^2$. As we show next, since $W$ has binary rank at most $d$, so does the resulting matrix $M'$, in contradiction to our assumption that $M$ is $\epsilon$-far from binary rank at most $d$. To verify that $M'$ has binary rank at most $d$, consider a partition of the $1$-entries of $W$ into $d' \leq d$ monochromatic rectangles $B_1,\dots,B_{d'}$. We shall show how, based on this partition, we can define a partition of all $1$-entries of $M'$ into $d'$ monochromatic rectangles $B'_1,\dots,B'_{d'}$.
Note that for any $1$-entry $(w,z)$ in $M'$, neither the row index $w$ is new with respect to $W$ nor the column index $z$ is new with respect to $W$ (since otherwise, we would have modified row $w$ and/or column $z$ to the all-zero row/column, and thus, $M'[w,z]=0$). Therefore, there exist a row index $i(w)$ and a column index $j(z)$ such that: \begin{equation} \begin{split}\label{wz} (M[w,y_1],\dots,M[w,y_t]) = (M[x_{i(w)},y_1],\dots,M[x_{i(w)},y_t]), \\ ( M[x_1,z],\dots,M[x_{t},z]) = ( M[x_1,y_{j(z)}],\dots,M[x_{t},y_{j(z)}]). \end{split} \end{equation} where $i()$ is a function that maps row $w$ of $M$ to a row $i(w)$ in $W$ as specified in Equation~\eqref{wz}, and if there are several such rows in $W$, then the function $i()$ chooses one arbitrarily. The function $j()$ is defined similarly for the columns. Also observe that if $w = x_s$ for some $s \in [t]$, then $i(w) = s$ and similarly, if $z = y_s$ for some $s\in [t]$, then $j(z) = s$. Furthermore, as stated above, for such a $1$-entry $(w,z)$ it holds that $M'[w,z] = M[x_{i(w)},y_{j(z)}]$, and therefore, $M[x_{i(w)},y_{j(z)}]=1$. Now, place $(w,z)$ in $B'_\ell$, where $\ell$ is such that $(x_{i(w)},y_{j(z)}) \in B_\ell$. In particular, if $(w,z)$ belongs to $W$ and $(w,z) \in B_\ell$, then $(w,z) \in B'_\ell$. To verify that $B'_1,\dots,B'_{d'}$ is a partition of the $1$-entries of $M'$ into monochromatic rectangles, consider any pair of $1$-entries in $M'$, $(w,z)$ and $(w',z')$, such that $(w,z),(w',z') \in B'_\ell$. We need to show that $(w,z')$ and $(w',z)$ are also $1$-entries of $M'$, and that $(w,z'), (w',z) \in B'_\ell$ as well. Again, since $(w,z')$ and $(w',z)$ do not belong to a row or column that are new with respect to $W$, we have that $M'[w,z'] = M[x_{i(w)},y_{j(z')}]$ and $M'[w',z] = M[x_{i(w')},y_{j(z)}]$.
But $(x_{i(w)},y_{j(z)}) \in B_\ell$ and $(x_{i(w')},y_{j(z')}) \in B_\ell$, and thus, we get that $M[x_{i(w)},y_{j(z')}]=1$ and $M[x_{i(w')},y_{j(z)}]=1$, so that $(w,z')$ and $(w',z)$ are also $1$-entries of $M'$. Furthermore, $(x_{i(w)},y_{j(z')}) \in B_\ell$ and $(x_{i(w')},y_{j(z)})\in B_\ell$ (since $B_\ell$ is a monochromatic rectangle), so that $(w,z')$ and $(w',z)$ both belong to $B'_\ell$, as required. See Figure~\ref{fig:proof} for an illustration. \EPF\\ \begin{figure} \caption{\small An illustration of the proof of Claim~\ref{claim:new}. The figure shows the submatrix $W$, as well as the parts of rows $w,w'$ that are identical to rows $x_{i(w)} = x_i, x_{i(w')} = x_{i'}$ in the submatrix $W$, and similarly for columns $z,z'$ that are identical to columns $y_{j(z)}= y_j, y_{j(z')} = y_{j'}$ in $W$. Also shown are the entries $(w,z), (w',z'), (w,z'),(w',z)$, where each is filled with the same pattern as the entry it equals in $W$. If $(w,z)$ and $(w',z')$ belong to $B'_\ell$, then $(x_i,y_j)$ and $(x_{i'},y_{j'})$ belong to $B_\ell$. This implies that $(x_i,y_{j'})$ and $(x_{i'},y_j)$ also belong to $B_\ell$, so that $(w,z')$ and $(w',z)$ belong to $B'_\ell$ as well. For simplicity, not all entries are specified. } \label{fig:proof} \end{figure} We can now prove Lemma~\ref{lem:binary-non-adaptive}, thus completing the proof of correctness of Algorithm~\ref{alg:binary-non-adaptive}.\\ \BPFOF{Lemma~\ref{lem:binary-non-adaptive}} For the sake of the analysis, we consider Algorithm~\ref{alg:binary-non-adaptive} as if it proceeds in $m=O(2^d/\epsilon)$ iterations, where it starts with the empty $0\times 0$ submatrix, $W_0$, and in each iteration it extends the submatrix it has with a row and a column whose indices are selected uniformly, independently and at random from $[n]$. For each $t \in [m]$, let $W_{t-1}$ be the submatrix of $M$ of size $(t-1) \times (t-1)$ that is considered in the beginning of iteration $t$, and let $x_{t}$ and $y_{t}$ denote, respectively, the indices of the row and column selected in the $t$'th iteration. Therefore, $W_t$ is the submatrix induced by $x_1,\dots,x_t$ and $y_1,\dots,y_t$. We shall show that with probability at least $2/3$, the binary rank of the final submatrix, $W_m$, is greater than $d$, and therefore, the algorithm will reject as required.
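The predicates that drive this iterative analysis, new rows/columns and new corner entries (Definition~\ref{def:new corner}), can be made concrete. In the following sketch (illustration only, not part of the proof), the current submatrix is represented by index lists rows and cols:

```python
def row_pattern(M, cols, x):
    """Row x of M restricted to the columns of the submatrix."""
    return tuple(M[x][y] for y in cols)

def col_pattern(M, rows, y):
    """Column y of M restricted to the rows of the submatrix."""
    return tuple(M[x][y] for x in rows)

def is_new_row(M, rows, cols, x):
    """x is 'new': extending the submatrix with x yields a row pattern
    different from all current rows."""
    return all(row_pattern(M, cols, x) != row_pattern(M, cols, xi)
               for xi in rows)

def is_new_col(M, rows, cols, y):
    return all(col_pattern(M, rows, y) != col_pattern(M, rows, yj)
               for yj in cols)

def is_new_corner(M, rows, cols, x, y):
    """Neither x nor y is new, yet M[x][y] disagrees with the entry of
    the matching row/column pair inside the submatrix."""
    if is_new_row(M, rows, cols, x) or is_new_col(M, rows, cols, y):
        return False
    xi = next(r for r in rows
              if row_pattern(M, cols, r) == row_pattern(M, cols, x))
    yj = next(c for c in cols
              if col_pattern(M, rows, c) == col_pattern(M, rows, y))
    return M[x][y] != M[xi][yj]
```

(Any matching row/column pair gives the same verdict, since all rows sharing a pattern agree on every column of the submatrix, so picking the first match suffices.)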
We know by Claim~\ref{clm:size of matrix} that if $M$ has binary rank at most $d$, then for any submatrix $W$ of $M$ it holds that $N(R(W)) \leq 2^d$ and $N(C(W)) \leq 2^d$. Hence, if either $N(R(W_{t})) > 2^d$ or $N(C(W_{t})) > 2^d$, then the binary rank of $W_t$ is greater than $d$, so that the algorithm will certainly reject. It is of course possible that $W_t$ has binary rank greater than $d$, although both $N(R(W_t)) \leq 2^d$ and $N(C(W_t)) \leq 2^d$, and in this case, the algorithm rejects as well. Therefore, for any $t < m$, if the binary rank of $W_{t-1}$ is at most $d$, and given that in iteration $t$, entry $(x_t,y_t)$ is selected uniformly, independently at random, then by Claim~\ref{claim:new}, with probability at least $\epsilon/3$, either: \begin{itemize} \item $x_t$ is new for $W_{t-1}$, so that $N(R(W_t)) > N(R(W_{t-1}))$, \item or $y_t$ is new for $W_{t-1}$, so that $N(C(W_{t})) > N(C(W_{t-1}))$, \item or $(x_{t},y_{t})$ is a new corner entry for $W_{t-1}$. In this case, let $x_i$ and $y_j$ be as defined in Definition~\ref{def:new corner}. Thus, $( M[x_1,y_{t}],\dots,M[x_{t-1},y_{t}] ) = ( M[x_1,y_j],\dots,M[x_{t-1},y_j])$, and in particular, $M[x_i,y_{t}] = M[x_i,y_j]$. However, $M[x_{t},y_{t}] \neq M[x_i,y_j]$, implying that $N(R(W_{t})) > N(R(W_{t-1}))$ in this case as well. A similar argument shows that $N(C(W_{t})) > N(C(W_{t-1}))$. \end{itemize} To summarize, if the binary rank of $W_{t-1}$ is at most $d$, then with probability at least $\epsilon/3$, the number of distinct rows or the number of distinct columns, or both, of $W_{t}$ increases compared to that of $W_{t-1}$. We thus have to bound the probability that after all $m = \Theta(2^d/\epsilon)$ iterations, the number of distinct rows and the number of distinct columns of $W_m$ are both at most $2^d$. To do so, define for each $t\in [m]$ a Bernoulli random variable $\chi_t$, where $\chi_t =1$ if and only if $N(R(W_t)) > N(R(W_{t-1}))$ or $N(C(W_{t})) > N(C(W_{t-1}))$, or both. While the random variables $\chi_1,\dots,\chi_m$ are not independent, we have that for any $t \in [m]$: $${\rm Pr}[\chi_{t}=1 \,|\, \mbox{the binary rank of $W_{t-1}$ is at most $d$}] \geq \epsilon/3.$$ Furthermore, if $\sum_{t=1}^m \chi_t \geq 2^d + (2^d+1)$, then necessarily $\max\{N(R(W_m)) ,N(C(W_m))\} > 2^d$, so that the binary rank of $W_m$ is greater than $d$, and the algorithm rejects.
Note that it is possible that the binary rank of $W_m$ is greater than $d$ although $\max\{N(R(W_m)) ,N(C(W_m))\} \leq 2^d$, and it is possible that $\max\{N(R(W_m)) ,N(C(W_m))\} > 2^d$ although $\sum_{t=1}^m \chi_t < 2^d + (2^d+1)$, but in either case the algorithm rejects. Finally, we show that for $m = 24\cdot 2^d/\epsilon$, with probability at least $2/3$, the binary rank of $W_m$ is greater than $d$. To this end we define $m$ independent random variables, $\tilde{\chi}_1,\dots,\tilde{\chi}_m$, where ${\rm Pr}[\tilde{\chi}_t =1] = \epsilon/3$, so that: $${\rm Pr}\left[\mbox{the binary rank of $W_m$ is at most $d$} \right] \leq {\rm Pr}\left[\sum_{t=1}^m \tilde{\chi}_t \leq 2^{d+1}\right].$$ By applying a multiplicative Chernoff bound, given the setting of $m = 24\cdot 2^d/\epsilon$, so that the expected value of $\sum_{t=1}^m \tilde{\chi}_t$ is $\epsilon m/3 = 8\cdot 2^d \geq 2 \cdot 2^{d+1}$, we get that: $${\rm Pr}\left[\sum_{t=1}^m \tilde{\chi}_t \leq 2^{d+1}\right] \leq \exp\left(-\frac{1}{2}\cdot\left(\frac{1}{2}\right)^2\cdot\frac{\epsilon m}{3}\right) = \exp(-2^d) < 1/3,$$ and the lemma is established. \EPFOF \subsection{An adaptive property testing algorithm for the binary rank} We next describe an adaptive algorithm for the binary rank whose query complexity is $O(2^{2d}/\epsilon)$. The idea is simple: we modify Algorithm~\ref{alg:binary-non-adaptive} so that it is ``closer'' to the analysis described in its proof of correctness. Namely, the modified algorithm works in $m$ iterations, where $m=\Theta(2^d/\epsilon)$ is as set in Algorithm~\ref{alg:binary-non-adaptive}. In each iteration it maintains a submatrix $W_{t-1}$ of $M$, and selects a random row index $x_{t}$ and a random column index $y_{t}$. It extends $W_{t-1}$ by $x_{t}$ and $y_{t}$ only if this increases the number of distinct rows/columns in the submatrix. Therefore, the number of rows/columns of each submatrix $W_t$ never exceeds $2^d+1$, and thus the total number of queries is bounded by $O(m\cdot 2^d) = O(2^{2d}/\epsilon)$.
Specifically, the modified algorithm is as follows: \noindent \begin{center} \fbox{ \begin{minipage}{5.4in} \BA{{\sf(Test $M$ for binary rank $d$, given $d$ and $\epsilon$ -- adaptive version)}} \label{alg:binary-adaptive} \begin{enumerate} \item Set $W_0$ to be the empty $0 \times 0$ matrix. \item for $t = 1$ to $m$: \begin{enumerate} \item Select, uniformly, independently and at random, an index $x_{t} \in [n]$ and an index $y_{t} \in [n]$. \item Consider the matrix $W'_{t-1}$ obtained by extending $W_{t-1}$ with $x_{t}$ and $y_{t}$. If $$\max\{N(R(W'_{t-1})) ,N(C(W'_{t-1}))\} > \max\{N(R(W_{t-1})) ,N(C(W_{t-1}))\}$$ then set $W_{t} = W'_{t-1}$. Otherwise, $W_{t} = W_{t-1}$. \item If the binary rank of $W_{t}$ is greater than $d$, then stop and reject. \iffalse \item If the row $x_t$ restricted to the columns of $W_t$ is different from all rows in $W_{t}$, then let $W_{t+1}$ be the extension of $W_t$ by $x_t$, set $t = t+1$, and go to the next iteration. \item If the column $y_t$ restricted to the rows of $W_t$ is different from all columns in $W_{t}$, then let $W_{t+1}$ be the extension of $W_t$ by $y_t$, set $t = t+1$, and go to the next iteration. \item If by extending $W_t$ with both $x_t$ and $y_t$ we obtain a new row/column, then let $W_{t+1}$ be this extension, set $t = t+1$, and go to the next iteration. \fi \end{enumerate} \item If no iteration resulted in a rejection, then accept. \end{enumerate} \EA \end{minipage} } \end{center} Similarly to the non-adaptive algorithm (Algorithm~\ref{alg:binary-non-adaptive}), if $M$ has binary rank at most $d$, then Algorithm~\ref{alg:binary-adaptive} always accepts. On the other hand, the argument given in the proof of Lemma~\ref{lem:binary-non-adaptive} directly implies that if $M$ is $\epsilon$-far from binary rank at most $d$, then Algorithm~\ref{alg:binary-adaptive} rejects with probability at least $2/3$.
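To make the loop structure concrete, the following is an illustrative Python sketch of the adaptive tester. It is our own code, not from the paper: for concreteness we instantiate "binary rank" as rank over $\mathrm{GF}(2)$ (an assumption, made only so that the rank check is easy to compute), and all function and variable names are ours.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a list of rows given as integer bitmasks."""
    pivots = {}  # highest set bit -> basis row with that pivot
    rank = 0
    for row in rows:
        cur = row
        while cur:
            b = cur.bit_length() - 1
            if b in pivots:
                cur ^= pivots[b]      # eliminate the pivot bit
            else:
                pivots[b] = cur       # new independent direction
                rank += 1
                break
    return rank

def n_distinct(M, rows, cols):
    """Numbers of distinct rows and of distinct columns of M[rows, cols]."""
    r = len({tuple(M[i][j] for j in cols) for i in rows})
    c = len({tuple(M[i][j] for i in rows) for j in cols})
    return r, c

def adaptive_rank_test(M, d, eps, rng):
    """Grow a submatrix W of M; extend it only when a sampled index pair
    increases the number of distinct rows or columns, and reject as soon
    as the rank of W exceeds d (one-sided error, as in the algorithm)."""
    n = len(M)
    m = int(24 * 2**d / eps) + 1
    R, C = [], []  # row and column indices of the current submatrix W
    for _ in range(m):
        x, y = rng.randrange(n), rng.randrange(n)
        if max(n_distinct(M, R + [x], C + [y])) > max(n_distinct(M, R, C)):
            R.append(x)
            C.append(y)
            # pack the rows of W into bitmasks for the rank check
            masks = [sum(M[i][j] << k for k, j in enumerate(C)) for i in R]
            if gf2_rank(masks) > d:
                return "reject"
    return "accept"
```

Since every submatrix of a matrix of rank at most $d$ again has rank at most $d$, the sketch always accepts such inputs, mirroring the one-sided error guarantee of the algorithm.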
\section{Testing the rank using Alon, Fischer and Newman~\cite{alon2007efficient}} \label{sec:Alon} \iffalse Alon et al.~\cite{alon2000efficient} showed that every property of a graph that is characterized by a finite collection of forbidden induced subgraphs is $\epsilon$-testable (that is, there exists a testing algorithm for such properties whose query complexity is independent of the input size and depends only on $\epsilon$). The complexity of their test, though, is a double-tower in $1/ \epsilon$. Alon, Fischer and Newman \cite{alon2007efficient} showed that when we restrict ourselves to bipartite graphs, then such properties are $\epsilon$-testable with a number of queries that is polynomial in $1/ \epsilon$. The general framework they developed also gives an algorithm for our problem, as we describe next. \fi For a finite collection $F$ of $(0,1)$-matrices, denote by $\mathcal{P}_F$ the set of all $(0,1)$-matrices that do not contain as a submatrix any row and/or column permutation of a member of $F$. The following is proved in~\cite{alon2007efficient}. \BT[{\cite[Thm. 1.4, Cor. 6.4]{alon2007efficient}}] \label{Alon complexity} Let $F$ be a finite collection of $k\times k$ or smaller $(0,1)$-matrices. There is a non-adaptive one-sided error testing algorithm for $\mathcal{P}_F$, whose query complexity is $(\frac{k}{\epsilon})^{O(k^4)}$, and whose running time is polynomial in its query complexity. \ET \iffalse The number of queries in the above algorithm is: \BC{\cite[cor. 6.4]{alon2007efficient}} \label{Alon complexity} Let $F$ be a finite collection of $k\times k$ or smaller binary matrices. The property $\mathcal{P}_F$ is $\epsilon$-testable with $(\frac{k}{\epsilon})^{O(k^4)}$ many queries. \EC \fi In what follows we describe how to use this result to design a testing algorithm for the set $\mathcal{S}_d$ of all $(0,1)$-matrices of Boolean rank at most $d$. An analogous result applies to the binary rank. 
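The arguments that follow rest on elementary manipulations of the Boolean rank. Recall that the Boolean rank of a $(0,1)$-matrix equals the minimum number of all-ones submatrices (combinatorial rectangles) covering its $1$-entries; for tiny matrices this quantity can be computed by brute force, which gives a way to sanity-check statements such as Claim~\ref{clm:submatrix} below. The following sketch is our own illustrative code (feasible only for very small matrices):

```python
from itertools import combinations

def boolean_rank(M):
    """Brute-force Boolean rank of a small 0/1 matrix, computed as the
    minimum number of all-ones submatrices covering the 1-entries."""
    m, n = len(M), len(M[0])
    ones = {(i, j) for i in range(m) for j in range(n) if M[i][j]}
    if not ones:
        return 0
    # Closed rectangles suffice: every rectangle of a minimum cover can be
    # enlarged to the closure determined by its column set.
    rects = set()
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            rows = tuple(i for i in range(m) if all(M[i][j] for j in cols))
            if rows:
                ccols = tuple(j for j in range(n) if all(M[i][j] for i in rows))
                rects.add((rows, ccols))
    cells = [frozenset((i, j) for i in r for j in c) for r, c in rects]
    # search for the smallest cover of the 1-entries
    for k in range(1, len(ones) + 1):
        for combo in combinations(cells, k):
            if ones <= frozenset().union(*combo):
                return k
```

For instance, removing the last row of a matrix changes its Boolean rank by at most one (a single row is itself covered by one rectangle), which is precisely the step behind Claim~\ref{clm:submatrix} below.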
We must first define a family $F$ such that $\mathcal{P}_F = \mathcal{S}_d$. Let $F_{d}$ be the set of all $(0,1)$-matrices of Boolean rank $d$ that contain no repeated rows or columns. We next show that $\mathcal{P}_{F_{d+1}} = \mathcal{S}_{d}$. \BCM \label{clm:submatrix} Let $A$ be a matrix of Boolean rank $d$. Then it has a submatrix of Boolean rank $i$ for every $ 0 < i < d$. \ECM \BPF This follows from the fact that if $B$ is some submatrix of $A$, then extending $B$ by a row or column increases its Boolean rank by at most $1$: starting from a single nonzero entry and adding the remaining rows and columns of $A$ one at a time, the Boolean rank grows from $1$ to $d$ in steps of at most $1$, and hence attains every intermediate value $i$. \EPF \BCM\label{clm:PFd-S-d-1} $\mathcal{P}_{F_{d+1}} = \mathcal{S}_{d}$. \ECM \BPF If $A \in \mathcal{S}_{d}$, then it cannot contain a submatrix of Boolean rank $d+1$, and, therefore, $A\in \mathcal{P}_{F_{d+1}}$. Assume now that $A \not \in \mathcal{S}_{d}$. Therefore, $A$ must contain a submatrix $B$ of Boolean rank at least $d+1$. But then by Claim~\ref{clm:submatrix}, $B$ contains a submatrix $C$ of Boolean rank exactly $d+1$ that is also a submatrix of $A$, and, therefore, $A \not \in \mathcal{P}_{F_{d+1}}$. \EPF\\ Using Claim~\ref{clm:PFd-S-d-1}, as well as Claim~\ref{clm:size of matrix} and Theorem~\ref{Alon complexity}, we get: \begin{coro} \label{coro:alon} There exists a property testing algorithm for $\mathcal{P}_{F_{d+1}}= \mathcal{S}_{d}$ whose query complexity is $(\frac{2^d}{\epsilon})^{O(2^{4d})}$. \end{coro} The question is what is the smallest $k = k(d)$ such that all the matrices in $F_{d}$ are of size at most $k \times k$. Claim~\ref{clm:size of matrix} implies that $k \leq 2^{d}$, and it is clear that $k \geq d$, since matrices of smaller size have Boolean rank less than $d$. If $k = o(2^d)$, then the upper bound in Corollary~\ref{coro:alon} can be improved; alternatively, it may be the case that $k$ is indeed exponential in $d$, in which case the complexity we get using this approach is tight. \iffalse It may be more convenient to consider the following definition, rather than that of $k'$.
\BD Let $k(d)$ be the maximal number such that $F'$ is the subset of all binary matrices of rank $d$ and size at most $k(d) \times k(d)$, such that for each $A\in F'$ all sub-matrices of $A$ have rank strictly smaller than $d$. \ED Obviously a matrix in $F'$ does not have repeated rows or columns, and thus by Lemma~\ref{size of matrix}, $k(d) \le 2^{d}$, and again $k(d) \geq d$. \BCM $\mathcal{P}_{F_d} = \mathcal{P}_{F'}$ \ECM \BPF If is clear that $F' \subseteq F_d$, and thus $\mathcal{P}_{F_d} \subseteq \mathcal{P}_{F'}$. As for the other direction, let $A$ be a submatrix of rank at least $d$. Then by Claim~\ref{clm:submatrix}, $A$ contains a submatrix $B$ of rank $d$. Therefore $A$ contains a matrix $C$ that is a submatrix of $B$ (it may be that $C= B$), such that $C \in F'$. But then $A \not \in \mathcal{P}_{F'}$. \EPF\\ It is interesting to find whether the bound $k(d) \le 2^{d}$ is in fact tight. In particular, if this bound is tight then another very strong evidence is given to the difference between the rank over the reals and the Boolean rank. The algorithm for the Binary rank is derived similarly, and a similar question is raised. \fi \subsection*{Acknowledgments} The second author would like to acknowledge the support of the Israel Science Foundation (grant No.~1146/18) and the Kadar family award. 
\end{document}
\begin{document} \begin{abstract} We construct whole-space extensions of functions in a fractional Sobolev space of order $s\in (0,1)$ and integrability $p\in (0,\infty)$ on an open set $O$ which vanish in a suitable sense on a portion $D$ of the boundary $\bd O$ of $O$. The set $O$ is supposed to satisfy the so-called \emph{interior thickness condition in $\bd O \setminus D$}, which is much weaker than the global interior thickness condition. The proof works by means of a reduction to the case $D=\emptyset$ using a geometric construction. \end{abstract} \maketitle \section{Introduction and main results} \label{Sec: Introduction} Let $O\subseteq \mathbb{R}^d$ be open. For $s\in (0,1)$ and $p\in (0,\infty)$ the fractional Sobolev space $\mathrm{W}^{s,p}(O)$ consists of those $f\in \mathrm{L}^p(O)$ for which the seminorm \begin{align} [f]_{\mathrm{W}^{s,p}(O)} \coloneqq \left( \iint_{\substack{x,y \in O \\ |x-y| < 1}} \frac{|f(x)-f(y)|^p}{|x-y|^{sp+d}} \,\mathrm{d} y \,\mathrm{d} x \right)^\frac{1}{p} \end{align} is finite. Under the interior thickness condition \begin{align} \label{ITC} \forall x\in O, r\in (0,1] \, : \quad |\mathrm{B}(x,r) \cap O| \gtrsim |\mathrm{B}(x,r)|, \tag{ITC} \end{align} whole-space extensions for $\mathrm{W}^{s,p}(O)$ were constructed by Zhou~\cite{Zhou}. Though the mapping is in general not linear, extensions depend boundedly on the data. The case $p\geq 1$ was already treated earlier by Jonsson and Wallin~\cite{JW}, and their extension operator is moreover linear. In fact, Zhou has shown that the interior thickness condition is equivalent to $\mathrm{W}^{s,p}(O)$ admitting whole-space extensions. If we impose a vanishing trace condition on $\bd O$ in a suitable sense, zero extension is possible, so in this case no geometric quality of $O$ is needed. It is now natural to ask what happens if a vanishing trace condition is only imposed on a portion $D\subseteq \bd O$.
To be more precise, we consider the space $\mathrm{W}^{s,p}_D(O)$ given by $\mathrm{W}^{s,p}(O) \cap \mathrm{L}^p(O, \dist_D^{-sp})$, where $\dist_D$ is the distance function to $D$. The fractional Hardy term therein models the vanishing trace condition on $D$; compare~\cite{Dyda-Vahakangas,Hardy-Poincare,ET,Hajlasz-PointwiseHardy,Lehrback-PointwiseHardy}. Spaces of this kind were also recently investigated in~\cite{KondratievProperties} and have a history of successful application in the theory of elliptic regularity, see for example~\cite{Hansen}. The present paper seeks minimal geometric requirements under which functions in $\mathrm{W}^{s,p}_D(O)$ can be boundedly extended to whole-space functions. We will see in Lemma~\ref{Lem: ITC in boundary suffices} that in~\eqref{ITC} we could equivalently consider balls centered in $\bd O$ instead of $O$. Put $N \coloneqq \bd O \setminus D$. In Definition~\ref{Def: ITC in F} we introduce the \emph{interior thickness condition in $N$}, which requires that~\eqref{ITC} holds for balls centered in $N$. For $D=\emptyset$, this is just the usual interior thickness condition by virtue of the aforementioned Lemma~\ref{Lem: ITC in boundary suffices}. It is the main result of this article to show that the interior thickness condition in $N$ is sufficient for the $\mathrm{W}^{s,p}_D(O)$-extension problem. A major obstacle is that the interior thickness condition in $N$ does not provide thickness in \emph{any} neighborhood around $N$, which makes localization techniques inapplicable. An example of this is a self-touching cusp, see Example~\ref{Ex: geometry for extension operator}. Our construction is as follows. The extension procedure decomposes into a zero extension from $O$ to some suitable superset $\boldsymbol{O}$ of $O$, which is an enlargement of $O$ near $D$, followed by an application of Zhou's construction on $\boldsymbol{O}$.
Hence, suitability of $\boldsymbol{O}$ is measured by two properties: First, the zero extension can be bounded in $\mathrm{W}^{s,p}(\boldsymbol{O})$ with the aid of the fractional Hardy term. Second, $\boldsymbol{O}$ satisfies~\eqref{ITC}, so that Zhou's result is applicable. A similar construction of $\boldsymbol{O}$ was performed by the author together with M.~Egert and R.~Haller-Dintelmann in~\cite{Kato}. The main result then reads as follows. \begin{theorem} \label{Thm: Extension operator} Let $O\subseteq \mathbb{R}^d$ and let $D\subseteq \bd O$, $p\in (0,\infty)$ and $s\in (0,1)$. If $O$ satisfies the interior thickness condition in $\bd O\setminus D$, then there exists a bounded mapping \begin{align} \mathrm{E}: \mathrm{W}^{s,p}(O) \cap \mathrm{L}^p(O, \dist_D^{-sp}) \to \mathrm{W}^{s,p}_D(\mathbb{R}^d). \end{align} If $p\geq 1$, then $\mathrm{E}$ is moreover linear. \end{theorem} We will also comment on the sharpness of our result in Section~\ref{Sec: necessity}. Finally, a remark on the case $p=\infty$ is in order. In this situation, the fractional Sobolev space is substituted by the H\"older space of order $s\in (0,1)$. Then the Whitney extension theorem~\cite[Thm.~3, p.174]{Stein} provides a linear extension operator without any geometric requirements. In particular, the fractional Hardy term is not needed, though it is easily seen that $\| f \dist_D^{-s} \|_\infty$ can only be finite if $f$ vanishes identically on $D$, and the same is of course true for the extension. \subsection*{Acknowledgments} The author thanks his Ph.D.\ advisor Robert Haller-Dintelmann for his support, the \enquote{Studienstiftung des deutschen Volkes} for financial and academic support, Joachim Rehberg for suggesting the topic and Juha Lehrb\"ack for valuable discussions. \subsection*{(Non-)Standard notation} We write $\mathrm{B}(x,r)$ for the open ball around $x$ with radius $r$. The closure of a set $A$ is denoted by $\cl{A}$ and the Lebesgue measure of $A$ is denoted by $|A|$. 
If we integrate with respect to the Lebesgue measure, we write $\,\mathrm{d} x$, $\,\mathrm{d} y$ and so on. For diameter and distance induced by the Euclidean metric we write $\diam(\cdot)$ and $\dist(\cdot,\cdot)$. Also, the shorthand notation $\dist_E(x)\coloneqq \dist(\{x\}, E)$ is used. We employ the notation $\lesssim$ and $\gtrsim$ for estimates up to an implicit constant that does not depend on the quantified objects. If two quantities satisfy both $\lesssim$ and $\gtrsim$ we write $\approx$. \section{Geometry} \label{Sec: Geometry} \begin{definition} \label{Def: ITC in F} Let $E\subseteq \mathbb{R}^d$ and $F\subseteq \bd E$. Then $E$ satisfies the \emph{interior thickness condition in $F$} if \begin{align} \forall x\in F, r\in (0,1] \, : \quad |\mathrm{B}(x,r) \cap E| \gtrsim |\mathrm{B}(x,r)|. \end{align} \end{definition} The following lemma shows the equivalence between the~\eqref{ITC} condition with balls centered in $O$ and~\eqref{ITC} with balls centered in $\bd O$ already mentioned in the introduction. Though its proof is simple, we include it for good measure. \begin{lemma} \label{Lem: ITC in boundary suffices} Let $E\subseteq \mathbb{R}^d$. Then $E$ satisfies~\eqref{ITC} if and only if $E$ satisfies the interior thickness condition in $\bd E$. \end{lemma} \begin{proof} Assume~\eqref{ITC} and let $x\in \bd E$, $r\in (0,1]$. Then pick some $y\in \mathrm{B}(x,r/2)\cap E$ and calculate \begin{align} |\mathrm{B}(x,r)\cap E| \geq |\mathrm{B}(y,r/2)\cap E| \gtrsim |\mathrm{B}(y,r/2)| \approx |\mathrm{B}(x,r)|. \end{align} Conversely, let $x\in E$, $r\in (0,1]$ and assume that $E$ is interior thick in $\bd E$. If $\mathrm{B}(x,r/2) \subseteq E$ then the claim follows immediately. Otherwise, pick again some $y \in \mathrm{B}(x,r/2)\cap \bd E$ and argue as above. \end{proof} The following simple example shows that a set can satisfy the thickness condition in some closed subset of the boundary but fails to have it in any neighborhood of it. 
\begin{example} \label{Ex: geometry for extension operator} Consider $O=\{ (x,y)\in \mathbb{R}^2\colon |y| < x^2, x<0 \} \cup \{ (x,y)\in \mathbb{R}^2\colon x>0 \}$. This means that $O$ consists of the right half-plane touched by a cusp from the left. Let $D$ be the boundary of the cusp and let $N$ be the $y$-axis without the origin. Then the~\eqref{ITC} estimate holds in $N$ since each ball centered in $N$ hits the half-plane with half of its area, but any proper neighborhood around $N$ would contain a region around the tip of the cusp, in which thickness does not hold (consider a sequence that approximates the tip of the cusp and test with balls that do not reach $N$). \end{example} \section{The extension operator} \label{Sec: Extension Operator Wsp} In this section we prove Theorem~\ref{Thm: Extension operator}. First, we construct $\boldsymbol{O}$ and show that it is interior thick. Second, we show that the zero extension to $\boldsymbol{O}$ is bounded using a simple geometric argument. Finally, we patch everything together to conclude. Throughout, $O$ and $D$ are as in Theorem~\ref{Thm: Extension operator} and we put $N\coloneqq \bd O \setminus D$ for convenience. \subsection{Embedding into an interior thick set} \label{Sec: fattening} We construct an open set $\boldsymbol{O} \subseteq \mathbb{R}^d$ with $O\subseteq \boldsymbol{O}$, $\bd O \subseteq \bd \boldsymbol{O}$ and that satisfies~\eqref{ITC}. According to the assumption on $N$ and Lemma~\ref{Lem: ITC in boundary suffices} it suffices to check that $\boldsymbol{O}$ is interior thick in $D$ and the \enquote{added} boundary. Of course we could take $\boldsymbol{O}$ as $\mathbb{R}^d \setminus \bd O$ in this step but this would make zero extension in Section~\ref{Sec: zero extension} impossible.
Therefore, our construction will be such that, moreover, $|x-y| \gtrsim \dist_D(x)$ whenever $x\in O$ and $y\in \boldsymbol{O} \setminus O$, see Lemma~\ref{Lem: point distance for zero extension}, which will do the trick in step two. Let $\{Q_j\}_j$ be a Whitney decomposition for the complement of $\overline{N}$, which means that the $Q_j$ are disjoint dyadic open cubes such that \begin{align} \mathrm{(i)}\quad \bigcup_j \overline{Q_j} = \mathbb{R}^d \setminus \overline{N} \qquad \mathrm{(ii)}\quad \diam(Q_j) \leq \dist(Q_j, N) \leq 4 \diam(Q_j). \end{align} Using the Whitney decomposition we define \begin{align} \Sigma\coloneqq \{ Q_j\colon \cl{Q_j} \cap \cl{O} \neq \emptyset \} \qquad\text{and}\qquad \boldsymbol{O} \coloneqq O \cup \Bigl( \bigcup_{Q\in \Sigma} Q\setminus D \Bigr). \end{align} Note that for $Q\in \Sigma$ one has $Q\setminus D = Q\setminus \bd O$. Then all claimed properties of $\boldsymbol{O}$ except~\eqref{ITC} follow immediately by definition. So, let $x\in \bd \boldsymbol{O}$ and $r\in (0,1]$. If $x \in \cl{N}$ then we are done by assumption (keep Lemma~\ref{Lem: ITC in boundary suffices} in mind). Otherwise, either $x\in D$ or $x\in \bd Q$ for some $Q\in \Sigma$ (to see this, use that the Whitney decomposition is locally finite). But if $x\in D$ then $x\in \overline{Q}$ for some $Q\in \Sigma$ by property (i) of the Whitney decomposition and the definition of $\Sigma$. Hence, in either case $x\in \overline{Q}$ for some $Q\in \Sigma$. Now we make a case distinction on the radius size compared to the size of $Q$. If $r \geq 4 \dist(Q, N)$, pick $y\in \overline{Q}$ and $z\in \overline{N}$ with $\dist(Q,N) = |y-z|$. Then with (ii) we get \begin{align} |x-z| \leq |x-y| + |y-z| \leq \diam(Q) + \dist(Q,N) \leq 2 \dist(Q,N) \leq r/2, \end{align} hence $\mathrm{B}(x,r)$ contains a ball of radius $r/2$ centered in $\overline{N}$ and we are done.
Otherwise, if $r<4\dist(Q,N)$, then by (ii) we get $r<16\diam(Q)$ and the claim follows from~\eqref{ITC} for $Q$. \subsection{Zero extension} \label{Sec: zero extension} Let $\boldsymbol{O}$ denote the set constructed in the previous step. We define the zero extension operator $\mathrm{E}_0$ from $O$ to $\boldsymbol{O}\cup D$ and claim that it is $\mathrm{W}^{s,p}(O) \cap \mathrm{L}^p(O, \dist_D^{-sp}) \to \mathrm{W}^{s,p}(\boldsymbol{O})$ bounded. We start with a preparatory lemma. \begin{lemma} \label{Lem: point distance for zero extension} One has $|x-y| \gtrsim \dist_D(x)$ whenever $x\in O$ and $y\in \boldsymbol{O}\setminus O$. \end{lemma} \begin{proof} We consider $y\in \boldsymbol{O}\setminus O$ and pick some $Q\in \Sigma$ that contains $y$. We distinguish whether or not $x$ and $y$ are far away from each other in relation to $\diam(Q)$. \emph{Case 1}: $|x-y|<\diam(Q)$. Fix a point $z \in \bd O$ on the line segment connecting $x$ with $y$. Assume for the sake of contradiction that $z\in N$. Then using (ii) we calculate \begin{align} \dist(Q,N) \leq |y-z| \leq |x-y| < \diam(Q) \leq \dist(Q,N), \end{align} hence we must have $z\in D$. Thus, $|x-y| \geq |x-z| \geq \dist_D(x)$. \emph{Case 2}: $|x-y| \geq \diam(Q)$. By definition of $\Sigma$ and $y\not\in O$ we can pick $z\in \cl{Q}\cap D$. Then \begin{align} |x-z| \leq |x-y| + |y-z| \leq |x-y| + \diam(Q) \leq 2 |x-y|, \end{align} hence $|x-y| \gtrsim \dist_D(x)$. \end{proof} This enables us to estimate $\mathrm{E}_0$. Clearly, we only have to estimate the $\mathrm{W}^{s,p}(O)$--seminorm since extension by zero is always isometric on $\mathrm{L}^p$.
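For later use we record the elementary polar-coordinates computation behind the boundedness of $\mathrm{E}_0$; here $c$ denotes the implicit constant from Lemma~\ref{Lem: point distance for zero extension} and $\sigma_{d-1}$ the surface measure of the unit sphere (both names are introduced only for bookkeeping). For every $x\in O$,
\begin{align}
\int_{\substack{y\in \mathbb{R}^d \\ |x-y| \geq c\dist_D(x)}} |x-y|^{-sp-d} \,\mathrm{d} y = \sigma_{d-1} \int_{c\dist_D(x)}^{\infty} r^{-sp-1} \,\mathrm{d} r = \frac{\sigma_{d-1}}{sp} \bigl(c\dist_D(x)\bigr)^{-sp} \lesssim \dist_D(x)^{-sp}.
\end{align}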
Let $f \in \mathrm{W}^{s,p}(O) \cap \mathrm{L}^p(O,\dist_D^{-sp})$, then \begin{align} \begin{split} \label{Eq: E0 estimate} \iint_{\substack{x,y \in \boldsymbol{O} \\ |x-y| < 1}} \frac{|\mathrm{E}_0 f(x)-\mathrm{E}_0 f(y)|^p}{|x-y|^{sp+d}} \,\mathrm{d} y \,\mathrm{d} x &\leq \iint_{\substack{x,y \in O \\ |x-y| < 1}} \frac{|f(x)-f(y)|^p}{|x-y|^{sp+d}} \,\mathrm{d} x \,\mathrm{d} y \\ &+ 2 \iint_{\substack{x\in O, y\in (\boldsymbol{O}\setminus O) \\ |x-y| < 1}} \frac{|f(x)|^p}{|x-y|^{sp+d}} \,\mathrm{d} x \,\mathrm{d} y. \end{split} \end{align} The first term is bounded by $\|f\|_{W^{s,p}(O)}^p$, so it only remains to bound the second term. Using Lemma~\ref{Lem: point distance for zero extension} and calculating in polar coordinates we find \begin{align} \int_{\substack{y \in (\boldsymbol{O}\setminus O) \\ |x-y| < 1}} |x-y|^{-sp-d} \,\mathrm{d} y \lesssim \dist_D(x)^{-sp}. \end{align} Plugging this back into~\eqref{Eq: E0 estimate} yields that we can bound the second term therein by the Hardy term $\|f\|_{\mathrm{L}^p(O, \dist_D^{-sp})}^p$. \subsection{Proof of Theorem~\ref{Thm: Extension operator}} \label{Sec: Proof of Wsp Ext Theorem} We combine the results from the previous sections with the extension procedure of Zhou to conclude. \begin{proof}[Proof of {Theorem~\ref{Thm: Extension operator}}] Put $\mathrm{E} = \boldsymbol{E} \circ \mathrm{E}_0$, where $\boldsymbol{E}$ is the (non-linear) extension operator of Zhou and $\mathrm{E}_0$ is the zero extension operator from the previous step. Clearly, $\mathrm{E}_0$ is linear, and we have seen in Section~\ref{Sec: zero extension} that it is $\mathrm{W}^{s,p}_D(O) \to \mathrm{W}^{s,p}(\boldsymbol{O})$ bounded. Since $\boldsymbol{O}$ satisfies~\eqref{ITC} by Section~\ref{Sec: fattening}, $\boldsymbol{E}$ is well-defined on $\mathrm{W}^{s,p}(\boldsymbol{O})$ and bounded into $\mathrm{W}^{s,p}(\mathbb{R}^d)$ by Zhou's result. The claim for $p<1$ then follows already by composition. 
In the case $p\geq 1$, note that $\boldsymbol{E}$ can be constructed to be linear, see also~\cite{JW}. \end{proof} \section{On the sharpness of our result} \label{Sec: necessity} In this final section we take a look at how close to a characterization our condition is. We will see in Example~\ref{Ex: necessity} that the interior thickness condition in $N$ is not necessary for the extension problem, but that our construction might fail without it. Afterwards, we will introduce a \emph{degenerate interior thickness condition in $N$}, which is necessary for the extension problem, but is not sufficient for our construction. \begin{example} \label{Ex: necessity} Consider the upper half-plane in $\mathbb{R}^2$. A Whitney decomposition can be constructed from layers of dyadic cubes. Let $O$ be a \enquote{cusp} that is built from those Whitney cubes which intersect the area below the graph of the exponential function, and let $N$ be its lower boundary given by the real line in $\mathbb{R}^2$. It is evident that $O$ is not interior thick in $N$. Moreover, our construction of $\boldsymbol{O}$ just adds another layer of cubes, so $\boldsymbol{O}$ is of the same geometric quality. Hence, our construction does not work in this situation. But zero extension to the upper half-plane is still possible, so with $\boldsymbol{O}$ chosen as the upper half-plane, we can construct an extension procedure for $\mathrm{W}^{s,p}_D(O)$ in this configuration. This shows that the interior thickness condition in $N$ is not necessary for $\mathrm{W}^{s,p}_D(O)$ to admit whole-space extensions, but is \enquote{necessary} for our construction to work. \end{example} We introduce the aforementioned modified version of the interior thickness condition in $N\subseteq \bd O$ that degenerates near $\bd O \setminus N$.
\begin{definition} \label{Def: degenerate ITC} Say that $O$ satisfies the \emph{degenerate interior thickness condition in $N$} if $O\subseteq \mathbb{R}^d$ is open, $N \subseteq \bd O$ and they fulfill \begin{align} \forall x\in N, r\leq \min(1, \dist_{\bd O \setminus N}(x))\colon |\mathrm{B}(x,r)\cap O| \gtrsim |\mathrm{B}(x,r)|. \end{align} \end{definition} In fact, this condition is necessary for the $\mathrm{W}^{s,p}(O)\cap \mathrm{L}^p(O, \dist_D^{-sp})$-extension problem. The technique to show this is due to Zhou~\cite{Zhou}. By the restriction in radii, the test functions used in Zhou's proof belong to $\mathrm{W}^{s,p}_D(O)$, and then his proof applies \emph{verbatim}, hence we omit the details. \begin{proposition} \label{Prop: degenerate condition necessary} Let $O \subseteq \mathbb{R}^d$ be open, $D\subseteq \bd O$, $p\in (0,\infty)$, $s\in (0,1)$ and put $N\coloneqq \bd O \setminus D$. If $\mathrm{W}^{s,p}_D(O)$ admits whole-space extensions, then $O$ satisfies the degenerate interior thickness condition in $N$. \end{proposition} \begin{remark} \label{Rem: degenerate ITC not sufficient} In Example~\ref{Ex: necessity} we have seen a configuration which admits whole-space extension for $\mathrm{W}^{s,p}_D(O)$-functions, so in this situation, $O$ satisfies the degenerate interior thickness condition in $N$ by Proposition~\ref{Prop: degenerate condition necessary} (of course, this can also be seen directly). On the other hand, we have seen in that example that in this configuration our construction does not work. Hence, the degenerate interior thickness condition in $N$ is too weak for our proof of Theorem~\ref{Thm: Extension operator}. \end{remark} \end{document}
\begin{document} \title{Phase transition for continuum Widom-Rowlinson model with random radii} \author[1]{David Dereudre} \author[2]{Pierre Houdebert} \affil[1]{Laboratoire de Math\'ematiques Paul Painlev\'e\\University of Lille 1, France \texttt{[email protected]}} \affil[2]{Aix Marseille Univ, CNRS, Centrale Marseille, I2M, Marseille, France \texttt{[email protected]}} \maketitle \begin{abstract} {In this paper we study the phase transition of continuum Widom-Rowlinson measures in $\ensemblenombre{R}^d$ with $q$ types of particles and random radii. Each particle $x_i$ of type $i$ is marked by a random radius $r_i$ distributed by a probability measure $Q_i$ on $\ensemblenombre{R}^+$. The distributions $Q_i$ may be different for different $i$; this setting is called the non-symmetric case. The particles of the same type do not interact with each other, whereas particles $x_i$ and $x_j$ of different types $i\neq j$ interact via an exclusion hardcore interaction forcing $r_i+r_j$ to be smaller than $|x_i-x_j|$. In the symmetric integrable case (i.e. $\int r^d Q_1(dr)<+\infty$ and $Q_i=Q_1$ for every $1\le i\le q$), we show that the Widom-Rowlinson measures exhibit a standard phase transition providing uniqueness, when the activity is small, and co-existence of $q$ ordered phases, when the activity is large. In the non-integrable case (i.e. $\int r^d Q_i(dr)=+\infty$, $1\le i \le q$), we show another type of phase transition. We prove, when the activity is small, the existence of at least $q+1$ extremal phases and we conjecture that, when the activity is large, only the $q$ ordered phases subsist. We prove a weak version of this conjecture in the symmetric case by showing that the Widom-Rowlinson measure with free boundary condition is a mixture of the $q$ ordered phases if and only if the activity is large.
} \noindent {\it Key words: Gibbs point process, DLR equation, Boolean model, continuum percolation, random cluster model, Fortuin-Kasteleyn representation.} \end{abstract} \section{Introduction} \label{section_introduction} In this paper we deal with the non-symmetric continuum Widom-Rowlinson model in $\ensemblenombre{R}^d$ with $q$ types of particles and with random radii. Each type of particle $1\le i\le q$ has its own activity parameter $z_i>0$ and its own probability measure $Q_i$ on $\ensemblenombre{R}^+$ for the distribution of radii (the distributions $Q_i$ may be different; this setting is called the non-symmetric case). Each particle $x_i$ of type $i$ is marked by a random radius $r_i$ distributed by $Q_i$. The particles of the same type do not interact with each other whereas particles $x_i$ and $x_j$ of different types $i\neq j$ interact via an exclusion hardcore interaction forcing $r_i+r_j$ to be smaller than $|x_i-x_j|$. This model can be viewed as a collection of $q$ Boolean models, each of intensity $z_i$ and radii distribution $Q_i$, $i=1\dots q$, conditioned to not overlap each other. This model is a generalisation of the simple and beautiful model introduced in the late 1960s by Widom and Rowlinson \cite{widom_rowlinson} where $q=2$ and the radii are deterministic. Its interest comes not only from its applicability in the description of a binary gas, but also from the fact that it was the very first continuum model for which a phase transition was rigorously proved, first by Ruelle \cite{ruelle_1971} using the so-called Peierls argument. A modern proof of this phase transition, relying on percolation arguments and a Fortuin-Kasteleyn representation, was given in \cite{chayes_kotecky,georgii_haggstrom}. Regarding the non-symmetric case with $q\ge 3$, phase transition results were proved in several articles such as \cite{bricmont_kuroda_lebowitz_1984,mazel_suhov_stuhl} using the Pirogov-Sinai theory.
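The description of the model as $q$ Boolean models conditioned on the absence of cross-type overlap suggests a naive rejection sampler on a finite box, which we sketch below for illustration. This is our own code and naming, not from the paper; it is practical only at low activity, and the model itself is of course defined in infinite volume through the DLR equations.

```python
import math
import random

def poisson_count(lam, rng):
    """Sample a Poisson(lam) count via cumulative exponential gaps."""
    n, acc = 0, rng.expovariate(1.0)
    while acc < lam:
        n += 1
        acc += rng.expovariate(1.0)
    return n

def sample_wr(q, z, radius_law, box=1.0, rng=None, max_tries=10000):
    """Naive rejection sampler for the Widom-Rowlinson model on [0, box]^2:
    draw q independent Poisson configurations of intensity z, with radii
    drawn from radius_law, and accept iff no two balls of *different*
    types overlap (same-type particles do not interact)."""
    rng = rng or random.Random()
    for _ in range(max_tries):
        conf = []  # particles as (type, x, y, radius)
        for t in range(q):
            for _ in range(poisson_count(z * box * box, rng)):
                conf.append((t, rng.uniform(0, box), rng.uniform(0, box),
                             radius_law(rng)))
        ok = all(math.hypot(x1 - x2, y1 - y2) >= r1 + r2
                 for i, (t1, x1, y1, r1) in enumerate(conf)
                 for (t2, x2, y2, r2) in conf[i + 1:] if t1 != t2)
        if ok:
            return conf
    raise RuntimeError("no acceptance; activity too large for rejection sampling")
```

At large activity the acceptance probability collapses, which is exactly the regime where the nontrivial phase behavior discussed below sets in and where more sophisticated sampling (e.g. via the Fortuin-Kasteleyn representation) becomes necessary.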
All these results concern the case of deterministic radii. In the present paper, we investigate the random radii case which can be interpreted as a random media or as a size distribution of particles. We prove several phase transition results described below which are, depending on the distribution of radii, similar or different from the deterministic case. The formal definition of Widom-Rowlinson measures is based on the standard DLR equations, which prescribe the local conditional distributions of the model, see Definition~\ref{def_WR}. The existence and uniqueness of such solutions are not obvious. The set of solutions is denoted by $WR(\MultiIntensity,\MultiRadius)$ where $\boldsymbol{\Intensity}=(z_1, \dots, z_q)$ is the vector of activities and $\boldsymbol{\Radius} := (Q_1,\dots ,Q_q)$ the vector of radii distributions. When the radii are uniformly bounded, a general existence result by Ruelle \cite{ruelle_livre_1969} ensures the existence of Widom-Rowlinson measures. When the radii are not bounded, a long-range interaction occurs and the existence is more delicate. In a first theorem we prove the existence of Widom-Rowlinson measures without any assumption on activities or radii. The set $WR(\MultiIntensity,\MultiRadius)$ is never empty. We say that a phase transition occurs when the geometry of $WR(\MultiIntensity,\MultiRadius)$ changes drastically with the choice of parameters $\boldsymbol{\Intensity}$, considering that $\boldsymbol{\Radius}$ is fixed. In the case of deterministic radii, it is proved in the papers mentioned above that $WR(\MultiIntensity,\MultiRadius)$ is a singleton for $\boldsymbol{\Intensity}$ small enough and that there exists $\boldsymbol{\Intensity}$ large enough such that $WR(\MultiIntensity,\MultiRadius)$ contains $q$ distinct extremal ordered phases (Widom-Rowlinson measures with a boundary condition full of particles of a prescribed type).
Precisely, in \cite{bricmont_kuroda_lebowitz_1984} it is proved that for any $(z_1,z_2,\ldots, z_{q-1})$ large enough there exists $z_q$ such that $WR(\MultiIntensity,\MultiRadius)$ contains $q$ distinct extremal ordered phases. This result is based on an extension of the Pirogov-Sinai theory of phase transitions in general lattice spin systems to continuum systems. In our random radii setting we do not obtain such a general result. Actually the unbounded radii seem to be a serious and difficult obstacle to using the Pirogov-Sinai machinery. Nevertheless, in the integrable setting (i.e. $\int r^d Q_i(dr)<+\infty$, $1\le i \le q$), we show first that $WR(\MultiIntensity,\MultiRadius)$ is a singleton for small activities $\boldsymbol{\Intensity}$. And, in the symmetric integrable case $Q_i=Q$, $1\le i\le q$, we show that for activities $\boldsymbol{\Intensity}=(z,z,\ldots,z)$ large enough, the set $WR(\MultiIntensity,\MultiRadius)$ contains $q$ distinct extremal ordered phases. These results derive from a coupling result in \cite{hofer-temmel_houdebert_2017}, a standard Fortuin-Kasteleyn representation and a percolation result developed in \cite{houdebert_2017}. In conclusion, in the integrable symmetric setting, $WR(\MultiIntensity,\MultiRadius)$ exhibits a standard phase transition similar to the deterministic radii case. Let us now turn to the most interesting and surprising result of the present paper. In the non-integrable case (i.e. $\int r^d Q_i(dr)=+\infty$, $1\le i \le q$), we show another type of phase transition on the geometry of $WR(\MultiIntensity,\MultiRadius)$. First, it is easy to see that $WR(\MultiIntensity,\MultiRadius)$ always contains $q$ ordered phases, each corresponding to a Poisson point process with only one type of particle whose balls cover the full space $\ensemblenombre{R}^d$. But we prove, when the activity is small enough, the existence of a $(q+1)$-th extremal phase.
As far as we know, the existence of such a $(q+1)$-th extremal phase has never been observed for a continuum Widom-Rowlinson model with deterministic radii. Let us note that our result is valid in the non-symmetric setting and that the proof is not based on the Pirogov-Sinai theory. The main ingredient is a discrimination by specific entropy: we show that the Widom-Rowlinson measure with free boundary condition has a specific entropy smaller than that of any ordered phase. Moreover we conjecture that in the non-integrable setting, when the activity is large enough, the set $WR(\MultiIntensity,\MultiRadius)$ is exactly the convex hull of the $q$ ordered phases (i.e. for large activities, only the $q$ ordered phases subsist and the disordered phase disappears). It would imply a phase transition result since the set $ WR( \MultiIntensity,\MultiRadius)$ would have exactly $q$ extremal Gibbs measures for $\boldsymbol{\Intensity}$ large and at least $q+1$ extremal Gibbs measures for $\boldsymbol{\Intensity}$ small. Our belief in this conjecture is based on a similar conjecture for the continuum random cluster model presented in \cite{dereudre_houdebert}, for which a heuristic proof is given. Moreover simulations supporting the conjecture have been implemented in \cite{houdebert_these}. In the present paper we prove a weak version of this conjecture by showing that the symmetric Widom-Rowlinson measure with free boundary condition is a mixture of the $q$ ordered phases if and only if the activity is large. The proof is based on a renewal argument in a prescribed direction, which implies that the scales in the thermodynamic limit are different for each direction. Indeed in one direction the size of the box is of order $n$ whereas in the $d-1$ other directions the size is of order $\log(n)$. We believe that the result remains true with other choices of scales, in particular the standard scale where the size of the box is of the same order in each direction.
The paper is organized as follows. Section \ref{section_preliminaries} introduces the notations, main definitions and tools. In Section \ref{section_results} the results of the article are stated. The proof of Theorem \ref{theo_existence_wr} concerning the existence of Widom-Rowlinson measures is done in Section \ref{section_preuve_theo_existence}. In Section \ref{section_preuve_non_monochromaticite_petite activite} we prove the existence of a $(q+1)$-th extremal phase in the non-integrable setting. Finally Section \ref{section_preuve_conjecture_faible} is devoted to the proof of the weak version of the conjecture. \section{Preliminaries} \label{section_preliminaries} \subsection{Space} Let us consider the state space $S := \ensemblenombre{R} ^d \times \ensemblenombre{R}^+$ with $d \geq 1$ being the dimension. Let $\Omega$ be the set of locally finite configurations $\omega$ on $S$. This means that $|\omega \cap (\Lambda \times \ensemblenombre{R}^+)|< \infty$ for every bounded Borel set $\Lambda$ of $\ensemblenombre{R}^d$, where $|\omega|$ denotes the cardinality of the configuration $\omega$. We write $\omega_{\Lambda}$ as a shorthand for $\omega \cap (\Lambda \times \ensemblenombre{R}^+)$. The configuration space is endowed with the usual $\sigma$-algebra $\mathcal{F}$ generated by the counting variables. To a configuration $\omega \in \Omega$ we associate the germ-grain structure \begin{align*} L(\omega) := \underset{X \in \omega}{\bigcup} B(X), \end{align*} where $B(X)$ is the closed ball associated to the marked point $X=(x,r)$, centred at $x$ and of radius $r$. Let $q$ be an integer larger than 1, fixed throughout the paper, and consider the space $\boldsymbol{\Omega} := \Omega^q$ of multi-index configurations $\boldsymbol{\omega} := (\omega^1, \dots , \omega^q)$ endowed with the $\sigma$-algebra $\boldsymbol{\mathcal{F}} := \mathcal{F} ^{\otimes q}$. An element $i\in \{1, \dots , q\}$ is called a \emph{type} or a \emph{colour}.
We write $L(\boldsymbol{\omega})$ as a shorthand for $\underset{1 \leq i \leq q}{\cup} L(\omega^i)$ and $\boldsymbol{\omega}_{\Lambda}$ for $(\omega^1_{\Lambda}, \dots , \omega^q_{\Lambda})$. We also write $(x,r) \in \boldsymbol{\omega}$ when there exists a colour $i$ such that $(x,r) \in \omega^i$. \subsection{Poisson point processes} For $z>0$ and $Q$ a probability measure on $\ensemblenombre{R}^+$, let $\Poisson{z, Q}$ be the distribution on $\Omega$ of a Poisson point process with intensity measure $z \mathcal{L}^\Dim \otimes Q$, where $\mathcal{L}^\Dim$ stands for the Lebesgue measure on $\ensemblenombre{R}^d$. Recall that this means \begin{itemize} \item for every bounded Borel set $\Lambda$, the distribution of the number of points in $\Lambda \times \ensemblenombre{R}^+$ under $\Poisson{z, Q}$ is a Poisson distribution with parameter $z\mathcal{L}^\Dim(\Lambda)$; \item given the number of points in a bounded $\Lambda$, the points are independent and uniformly distributed in $\Lambda$; each point is marked by a mark distributed by $Q$ and all the marks are independent. \end{itemize} We refer to \cite{daley_vere_jones} for details on Poisson point processes. For multi-indices $\boldsymbol{\Intensity}=(z_1, \dots, z_q)$ and $\boldsymbol{\Radius} := (Q_1,\dots ,Q_q)$, $\MultiPoisson{\boldsymbol{\Intensity},\boldsymbol{\Radius}} := \Poisson{z_1,Q_1}\otimes \dots \otimes \Poisson{z_q,Q_q}$ is the distribution of a multi-index Poisson point process. For $\Lambda \subseteq \ensemblenombre{R}^d$ bounded, we denote by $\Poisson{z,Q}_\Lambda$ (respectively $\MultiPoisson{\boldsymbol{\Intensity},\boldsymbol{\Radius}}_\Lambda$) the restriction of $\Poisson{z,Q}$ (respectively $\MultiPoisson{\boldsymbol{\Intensity},\boldsymbol{\Radius}}$) to $\Lambda \times \ensemblenombre{R}^+$ (respectively $(\Lambda \times \ensemblenombre{R}^+)^q$).
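The two bullet points above amount to a direct sampling recipe. The following Python sketch (our own illustration; the function name \texttt{sample\_marked\_poisson} and its parameters are not from the paper) samples a realisation of $\Poisson{z,Q}$ restricted to a rectangular window:

```python
import numpy as np

def sample_marked_poisson(z, box, radius_sampler, rng):
    """Sample a marked Poisson point process of intensity z on the window
    [0, box[0]] x ... x [0, box[d-1]], each point carrying an i.i.d. radius.

    `radius_sampler(n, rng)` must return n i.i.d. draws from the radius law Q.
    """
    box = np.asarray(box, dtype=float)
    volume = float(np.prod(box))
    # Number of points is Poisson with parameter z * Lebesgue measure of the box.
    n = rng.poisson(z * volume)
    # Given n, the centres are i.i.d. uniform on the box.
    centers = rng.uniform(0.0, box, size=(n, len(box)))
    # Independent marks (radii) distributed by Q.
    radii = radius_sampler(n, rng)
    return centers, radii

# Example: unit square, Pareto-type radii whose heavy tail mimics the
# non-integrable regime discussed later in the paper.
rng = np.random.default_rng(0)
centers, radii = sample_marked_poisson(
    z=40.0,
    box=[1.0, 1.0],
    radius_sampler=lambda n, r: r.pareto(1.5, size=n),
    rng=rng,
)
```

Replacing \texttt{radius\_sampler} by a law with $\int r^d Q(dr)<\infty$ puts the simulation in the integrable regime of Definition \ref{definition_int}.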
Note that the measures $\Poisson{z, Q}$ and $\MultiPoisson{\MultiIntensity,\MultiRadius}$ are \emph{stationary}, which means they are invariant under all translations of vector $x \in \ensemblenombre{R}^d$. The connectivity properties of the Poisson point process play a crucial role in this study. They change drastically depending on an integrability condition, formalised in the following definition. \begin{definition}\label{definition_int} A family $\boldsymbol{\Radius}$ is said to satisfy the \emph{integrability assumption} if for every colour $i$, \begin{align} \label{eq_integrability_assumption} \int_{\ensemblenombre{R}^+ } r^d Q_i(dr) <\infty. \end{align} If \eqref{eq_integrability_assumption} is not satisfied we say that $\boldsymbol{\Radius}$ is \emph{not integrable} (or in the \emph{extreme case}). We say that $\boldsymbol{\Radius}$ is \emph{completely non-integrable} if for every $1\le i \le q$, $\int_{\ensemblenombre{R}^+ } r^d Q_i(dr) =\infty$. \end{definition} \subsection{Widom-Rowlinson measures} The Widom-Rowlinson measures are defined via the standard DLR equations, requiring the probability measures to have prescribed conditional probabilities. Let us first define the event $\mathcal{A}$ of \emph{authorized} (or \emph{allowed}) configurations \begin{align*} \mathcal{A} = \{ \boldsymbol{\omega} \in \boldsymbol{\Omega}, \ \forall 1 \leq i<j \leq q, \ L(\omega^i) \cap L(\omega^j)= \emptyset \} \,.
\end{align*} The Widom-Rowlinson specification on a bounded $\Lambda \subseteq \ensemblenombre{R}^d$ with boundary condition $\boldsymbol{\omega}_{\Lambda^c}$ is \begin{align*} \Specification{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}(d \boldsymbol{\omega}'_\Lambda) := \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c})}{\PartitionFunction{\Lambda,\boldsymbol{\omega}_{\Lambda^c}}} \MultiPoisson{\boldsymbol{\Intensity}, \boldsymbol{\Radius}}_{\Lambda} (d \boldsymbol{\omega}'_{\Lambda}), \end{align*} with \begin{align*} \PartitionFunction{\Lambda,\boldsymbol{\omega}_{\Lambda^c}} := \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda} (d \boldsymbol{\omega}'_{\Lambda}). \end{align*} \begin{definition} \label{def_WR} A probability measure $\boldsymbol{P}$ on $\boldsymbol{\Omega}$ is a Widom-Rowlinson measure of parameters $\boldsymbol{\Intensity}$ and $\boldsymbol{\Radius}$, written $\boldsymbol{P} \in WR(\MultiIntensity,\MultiRadius)$, if $\boldsymbol{P}$ is stationary and if for every bounded Borel set $\Lambda \subseteq \ensemblenombre{R}^d$ and every bounded measurable function $f$, \begin{subequations} \label{eq_def_wr} \begin{align} \label{eq_partition_function_non_deg_def_wr} \PartitionFunction{\Lambda,\boldsymbol{\omega}_{\Lambda^c}} >0 \quad \boldsymbol{P}(d \boldsymbol{\omega})-a.s.; \end{align} \begin{align} \label{eq_dlr_definition_wr} \int_{\boldsymbol{\Omega}} f \ d \boldsymbol{P} = \int_{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c} ) \Specification{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}(d \boldsymbol{\omega}'_\Lambda) \boldsymbol{P} (d \boldsymbol{\omega} ).
\end{align} \end{subequations} \end{definition} For every $\Lambda$ the equations \eqref{eq_dlr_definition_wr} are called DLR equations, named after Dobrushin, Lanford and Ruelle. Thanks to \cite{georgii_livre}, a Widom-Rowlinson measure is a mixture of ergodic Widom-Rowlinson measures. \subsection{Stochastic domination} Let us discuss stochastic domination, which is going to be a key element of several proofs in the paper. Recall that an event $E \in \mathcal{F}$ is said to be \emph{increasing} if for $\omega' \in E$ and $\omega \supseteq \omega'$, we have $\omega \in E$. Finally, if $P$ and $P'$ are two probability measures on $\Omega$, the measure $P$ is said to \emph{stochastically dominate} the measure $P'$, written $P' \preceq P$, if $P'(E) \leq P(E)$ for every increasing event $E \in \mathcal{F}$. These definitions naturally extend to the case of the multi-index configuration space $\boldsymbol{\Omega}$. The following proposition is a direct application of \cite[Theorem 1.1]{georgii_kuneth} and gives a comparison between a Widom-Rowlinson measure and a Poisson point process. \begin{proposition}[Stochastic domination] \label{propo_stochastic_dom_poisson_wr} For every bounded $\Lambda \subseteq \ensemblenombre{R}^d$ and every boundary condition $\boldsymbol{\omega}_{\Lambda^c}$, we have $$\Specification{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}(d \boldsymbol{\omega}'_\Lambda) \preceq \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}.$$ Furthermore for every $\boldsymbol{P} \in WR( \MultiIntensity,\MultiRadius)$ $$ \boldsymbol{P} \preceq \MultiPoisson{\MultiIntensity,\MultiRadius}. $$ \end{proposition} \section{Results} \label{section_results} This section states the main results of the present article.
\subsection{Existence} The first question of interest in statistical physics, where objects are defined through prescribed conditional equations, namely the DLR equations \eqref{eq_dlr_definition_wr}, is the existence of a probability measure solving those equations. The following theorem gives a positive answer to this question. \begin{theorem} \label{theo_existence_wr} For any $d\ge 1$ and any parameters $\boldsymbol{\Intensity}$ and $\boldsymbol{\Radius}$, the set $WR(\MultiIntensity,\MultiRadius)$ is not empty. \end{theorem} The proof of this theorem follows a standard scheme used for several models \cite{dereudre_2009,dereudre_drouilhet_georgii,dereudre_houdebert}. First, using the specific entropy, we build an accumulation point of a sequence of finite volume Gibbs measures. The difficulty is then to prove that the accumulation point satisfies the DLR equations \eqref{eq_dlr_definition_wr}. This is done using the stochastic domination result of Proposition \ref{propo_stochastic_dom_poisson_wr}. A detailed proof is given in Section \ref{section_preuve_theo_existence}. \begin{remark} The existence of a Widom-Rowlinson measure was already known in several cases. First, in the case of bounded radii, the Widom-Rowlinson interaction is finite range and therefore the existence is a consequence of a general result of Ruelle \cite{ruelle_livre_1969}. Second, in the symmetric case where $z_1=\dots = z_q$ and $Q_1=\dots = Q_q$, a Widom-Rowlinson measure can be built from a \emph{Continuum Random Cluster Model} where each connected component is assigned a colour uniformly over the $q$ choices. This relation is known as the \emph{Fortuin-Kasteleyn} representation. The existence of the Continuum Random Cluster Model with random radii has been recently proved in \cite{dereudre_houdebert}, yielding the existence of a Widom-Rowlinson measure in the symmetric case.
\end{remark} \subsection{Phase transition in the integrable case} Now that Theorem \ref{theo_existence_wr} ensures the existence of at least one Widom-Rowlinson measure, the second question concerns the uniqueness or non-uniqueness and consequently the phase transition between both regimes. It is usual, for Gibbs point processes with different types of particles, to show uniqueness for small activities and non-uniqueness for large activities. In the integrable case (see Definition \ref{definition_int}), we recover both regimes. First, the following proposition proves the uniqueness for small activities. \begin{proposition} \label{prop_unicité_disagreement_percolation} Write $z_i= z \alpha_i$ where $\boldsymbol{\alpha} := (\alpha_1, \dots , \alpha_q)$ is a discrete probability measure. If $\boldsymbol{\Radius}$ satisfies the integrability assumption (Definition \ref{definition_int}), then there exists a unique Widom-Rowlinson measure $P$ in $WR(z \boldsymbol{\alpha},\boldsymbol{\Radius})$ as soon as $z \leq z_c \left(d,\sum_i \alpha_i Q_i \right)$, where $z_c (d, Q)>0$ is the percolation threshold of the Poisson Boolean model in $\ensemblenombre{R}^d$ of radius measure $Q$. \end{proposition} \begin{proof} From Proposition \ref{propo_stochastic_dom_poisson_wr} we have the stochastic domination $\Specification{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}(d \boldsymbol{\omega}'_\Lambda) \preceq \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}$. Therefore, as a direct consequence of Theorem 3.2 in \cite{hofer-temmel_houdebert_2017}, we have uniqueness of the Widom-Rowlinson measure as soon as the "single-type" Poisson Boolean model $\Poisson{z, \sum_i \alpha_i Q_i}$ does not percolate, whence the result. Let us note that $z_c (d, Q)$ is positive for every $Q$ satisfying the integrability assumption $\int_{\ensemblenombre{R}^+ } r^d Q(dr) <\infty$ \cite{gouere_2008}.
\end{proof} In the symmetric case a non-uniqueness result was proved, initially in \cite{chayes_kotecky,georgii_haggstrom} in the case of deterministic radii. A non-trivial generalisation to the case of unbounded radii was proved in \cite{houdebert_2017} and is stated in the following proposition. Let us just mention that the unbounded radii case requires a delicate study of the percolation properties of the continuum random cluster model, which does not dominate a Poisson process anymore. \begin{proposition} \label{proposition_transition_phase_cas_inte_perco} Let us consider the symmetric case $z := z_1 = \dots = z_q$ and $Q := Q_1= \dots = Q_q$ in dimension $d\ge 2$. If $Q$ satisfies the integrability assumption and $Q(\{ 0 \})=0$, then for activities $z$ large enough, there exist $q$ distinct ergodic Widom-Rowlinson measures. \end{proposition} This result is a consequence of the Fortuin-Kasteleyn representation and the percolation of the Continuum Random Cluster Model for large activities $z$. As usual, the $q$ distinct ergodic Widom-Rowlinson measures correspond to the distinct Gibbs measures with monochromatic boundary condition of colour $i$, $1\le i\le q$. \begin{remark} The assumption $Q (\{ 0 \})=0$ is an artefact of the proof of the percolation of the Continuum Random Cluster Model developed in \cite{houdebert_2017}. In this paper the author emphasizes that the proof would carry over in the same way for radius measures $Q$ having a small atom at $0$, and he conjectures that the results would hold under the sole assumption $Q(\{ 0 \})<1$. We do not investigate this generalization here. \end{remark} In the case of deterministic radii $r_1, r_2, \dots, r_q$, it is proved in \cite{bricmont_kuroda_lebowitz_1984} that for large activities $z_1, z_2, \ldots, z_{q -1}$ there exists $z_q>0$ such that the set of Widom-Rowlinson measures $WR((z_1,z_2,\ldots,z_q), (\delta_{r_1},\delta_{r_2},\ldots,\delta_{r_q}))$ exhibits at least $q$ extremal phases.
This result is based on an extension of the Pirogov-Sinai theory of phase transitions in general lattice spin systems to continuum systems. In the case of non-symmetric random radii we do not know if such a result holds. However it is reasonable to believe that in the case of bounded random radii, the Pirogov-Sinai machinery is applicable and similar results could be obtained. \subsection{Existence of $\mathbf{q+1}$ extremal phases in the non-integrable setting} The main results of the paper are presented in this section, where we investigate the phase diagram in the non-integrable setting. A central notion here is the monochromaticity or polychromaticity of Widom-Rowlinson measures, defined as follows. \begin{definition} Let $Mono$ be the event of \emph{monochromatic} configurations $\boldsymbol{\omega} \in \boldsymbol{\Omega}$ such that $\omega^i= \emptyset$ for all $1\le i\le q$ except one index. Let $Poly$ be the set of \emph{polychromatic} configurations, meaning that $Poly=Mono^c$. A probability measure $\boldsymbol{P}$ on $\boldsymbol{\Omega}$ is said to be \emph{monochromatic} (respectively \emph{polychromatic}) if $\boldsymbol{P} (Mono)=1$ (respectively $\boldsymbol{P}(Poly)=1$). \end{definition} Let us note that, in the case of a monochromatic $\boldsymbol{P}$, the index $i$ such that $\omega^i\neq \emptyset$ can be random. In the case of radii $\boldsymbol{\Radius}$ satisfying the integrability assumption \eqref{eq_integrability_assumption}, it is clear that every Widom-Rowlinson measure $\boldsymbol{P} \in WR( \MultiIntensity,\MultiRadius)$ is polychromatic. Therefore, the question of monochromaticity is relevant only in the non-integrable setting. Moreover, such monochromatic Widom-Rowlinson measures do exist in the non-integrable setting, as mentioned in the next proposition. The proof is obvious and is not detailed here.
\begin{proposition}\label{proposition_extreme_evident} For every $1\le i \le q$ such that $\int_{\ensemblenombre{R}^+} r^d Q_i(dr)=+\infty$, the Poisson point process $\MultiPoisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}$ with $\bar{\boldsymbol{\Intensity}}^i=(0, \dots, 0, z_i,0,\dots,0)$ is an extremal phase of $WR(\MultiIntensity,\MultiRadius)$. \end{proposition} In particular, if $\boldsymbol{\Radius}$ is completely non-integrable (i.e. $\int_{\ensemblenombre{R}^+} r^d Q_i(dr)=+\infty$ for every $1\le i \le q$) then $WR(\MultiIntensity,\MultiRadius)$ has $q$ extremal monochromatic Widom-Rowlinson measures which correspond to the usual $q$ ordered phases. In the next theorem we prove that, if the activity is small enough, there always exists a polychromatic Widom-Rowlinson measure (without any integrability assumption). Therefore, the existence of a $(q+1)$-th extremal phase follows in the completely non-integrable setting. \begin{theorem} \label{theo_polychromaticite_cas_extreme_faible activite} Write $z_i = z \alpha_i$ with $\boldsymbol{\alpha} := (\alpha_i)_i$ being a discrete probability distribution. Then, in any dimension $d\ge 1$, \begin{enumerate} \item for every $\boldsymbol{\alpha}$ such that $\max_i \alpha_i <1$, there exists $z_c^{\boldsymbol{\alpha}}$ such that for all $z<z_c^{\boldsymbol{\alpha}}$, the set $WR(z \boldsymbol{\alpha} , \boldsymbol{\Radius} )$ contains at least one polychromatic Widom-Rowlinson measure; \item the constant $z_c^{\boldsymbol{\alpha}}$ can be chosen uniform in $\boldsymbol{\alpha}$ satisfying, for some $0< \alpha_{\max}<1$, \begin{align} \label{eq_condition_proportion} \forall i \in \{1, \dots , q \}, \quad \alpha_i \leq \alpha_{\max}. \end{align} \end{enumerate} \end{theorem} The sketch of the proof is as follows. We first build an accumulation point of the sequence of finite volume Widom-Rowlinson measures with free boundary condition.
Then we show, for small activity, that the specific entropy of this measure is smaller than the specific entropy of every monochromatic stationary probability measure. Therefore this measure is not monochromatic and the theorem follows. The details of the proof are given in Section \ref{section_preuve_non_monochromaticite_petite activite}. The assumption $\max_i \alpha_i <1$ (respectively $\alpha_{\max}<1$) ensures that at least two of the $\alpha_i$ are positive. This is a natural assumption in order to have polychromaticity. \begin{corollary} We assume that $\boldsymbol{\alpha} := (\alpha_i)_i$ is a discrete probability distribution with all coordinates positive and that $\boldsymbol{\Radius}$ is completely non-integrable (i.e. for every $1\le i \le q$, $\int_{\ensemblenombre{R}^+ } r^d Q_i(dr) =\infty$). Then $WR(z \boldsymbol{\alpha} , \boldsymbol{\Radius} )$ has at least $q+1$ extremal phases for $z$ small enough. \end{corollary} The assumption "$z$ small" appears in the proof of Theorem \ref{theo_polychromaticite_cas_extreme_faible activite} merely as a technical assumption needed to ensure that the accumulation point is not monochromatic. However it is our belief that this assumption is essential and that for large activities all Widom-Rowlinson measures would be monochromatic. This is formalized in the following conjecture. \begin{conjecture} \label{conjecture_monochromaticite_cas_extreme} In the non-integrable case (i.e. there exists $1\le i \le q$ such that $\int_{\ensemblenombre{R}^+ } r^d Q_i(dr) =\infty$), for activities $z$ large enough, every Widom-Rowlinson measure is monochromatic. \end{conjecture} Note that if the conjecture is true, it would imply a phase transition result since the set $ WR( \MultiIntensity,\MultiRadius)$ would have exactly $q$ extremal Gibbs measures for $\boldsymbol{\Intensity}$ large and at least $q+1$ extremal Gibbs measures for $\boldsymbol{\Intensity}$ small.
Our belief in this conjecture is based on a similar conjecture for the continuum random cluster model presented in \cite{dereudre_houdebert}, for which a heuristic proof is given. Moreover simulations supporting the conjecture have been implemented in \cite[Section III.2]{houdebert_these}. We have not succeeded in proving the conjecture, but our last result is a first step towards it, proving a weaker version of the conjecture. Indeed we show that the symmetric Widom-Rowlinson measure on $ \ensemblenombre{R}^{d}$ with free boundary condition and non-integrable radii is monochromatic if and only if the activity $z$ is large enough. Note that the scale we use in the thermodynamic limit is not symmetric, since one direction is favoured. Unfortunately we are not able to extend the result to every thermodynamic limit, in particular when the volume $\Lambda_n$ is simply a hypercube. For every $k>0$ and $n\ge 1$, let $\Lambda_n^{(k)}:= ]-n,n] \times [0,k]^{d -1}$ and $\Lambda_n^{(k+)}:= ]0,n] \times [0,k]^{d -1}$. Let us fix a sequence $(k_n)_{n\ge 1}$ of positive integers such that $k_n\to +\infty$ and $(k_n)$ is negligible with respect to $(\log(n))_{n\ge 1}$. Now, for any $n\ge 1$, we consider the Widom-Rowlinson measure on $\Lambda_n^{(k_n)}$ with free boundary condition $$\boldsymbol{P}_n^{\text{free}}(d \boldsymbol{\omega}):= \frac{\mathds{1}_\mathcal{A}(\boldsymbol{\omega})}{\boldsymbol{Z}_{n}}\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n^{(k_n)}}(d \boldsymbol{\omega}).$$ As in the proof of Theorem \ref{theo_existence_wr}, we introduce its stationary version $\bar{\boldsymbol{P}}_n^{\text{free}}$.
First we set $\hat{\boldsymbol{P}}_n^{\text{free}}:= \underset{x \in I_n}{\otimes} \boldsymbol{P}_n^{\text{free}} \circ \tau_x^{-1}$ and finally $$\bar{\boldsymbol{P}}_n^{\text{free}} := \frac{1}{\mathcal{L}^\Dim (\Lambda_n^{(k_n)})} \int_{\Lambda_n^{(k_n)}} \hat{\boldsymbol{P}}_n^{\text{free}} \circ \tau_x^{-1} dx,$$ where $I_n:= 2n \Z \times ( k_n \Z)^{d -1}$ and where $\tau_x$ is the translation operator of vector $x\in\ensemblenombre{R}^d$. As in the proof of Theorem \ref{theo_existence_wr}, it is easy to show that the sequence $(\bar{\boldsymbol{P}}_n^{\text{free}})$ admits at least one accumulation point $\boldsymbol{P}^{\text{free}}$ with respect to the local convergence topology. The following theorem is our phase transition result involving these accumulation points. \begin{theorem} \label{theo_conjecture_preuve_faible} We assume that we are in the symmetric case $z := z_1 = \dots = z_q$, $Q := Q_1= \dots = Q_{q}$ in dimension $d\ge 1$, with $Q$ satisfying the two following conditions \begin{subequations} \label{eq_conjecture_conditions} \begin{align} \label{eq_conjecture_cond1} \int_1^\infty \exp \left( - \int_1^u Q (]r,\infty[) dr \right) du < \infty; \end{align} \begin{align} \label{eq_conjecture_cond2} Q (\{0\})<\frac{1}{q}. \end{align} \end{subequations} Then, for $z$ large enough, $(\bar{\boldsymbol{P}}_n^{\text{free}})$ converges (without passing to a subsequence) to $\boldsymbol{P}^{\text{free}}$ which is the mixture $\sum_{i=1}^q \frac{1}{q} \MultiPoisson{\bar{\boldsymbol{\Intensity}}^i ,\boldsymbol{\Radius}}$, with $\bar{\boldsymbol{\Intensity}}^i=(0, \dots, 0, z_i,0,\dots,0)$ (i.e. $\boldsymbol{P}^{\text{free}}$ is monochromatic with equal probability of having any colour). By contrast, when $z$ is small enough, every accumulation point of $(\bar{\boldsymbol{P}}_n^{\text{free}})$ is not monochromatic and therefore is not a mixture of the monochromatic phases $\MultiPoisson{\bar{\boldsymbol{\Intensity}}^i ,\boldsymbol{\Radius}}$, $1\le i\le q$.
\end{theorem} Let us note that the assumptions \eqref{eq_conjecture_cond1} and \eqref{eq_conjecture_cond2} are purely technical and probably not really necessary. Indeed, they are related to the renewal strategy we use to prove the first part of the theorem. We believe that the assumption \eqref{eq_conjecture_cond1} could be replaced by $\int_{\ensemblenombre{R}^+ } r^d Q(dr) =\infty$ and the assumption \eqref{eq_conjecture_cond2} by $Q (\{0\})<1$. The proof of the theorem is based on the Fortuin-Kasteleyn representation in order to transfer the problem from the Widom-Rowlinson setting to the Continuum Random Cluster setting. We will prove that the sequence of finite volume Continuum Random Cluster measures converges towards the Poisson Boolean model which covers the whole space $\ensemblenombre{R}^d$, proving that $\boldsymbol{P}^{\text{free}}$ is monochromatic with equal probability for each colour by symmetry of the model. This is done by bounding the mean number of connected components of a Continuum Random Cluster measure, using a fine renewal argument. The detailed proof is given in Section \ref{section_preuve_conjecture_faible}. \begin{remark} In the case of dimension $d=1$, the sequence $(k_n)_n$ plays no role and the probability measures $\boldsymbol{P}_n^{\text{free}}$, $\hat{\boldsymbol{P}}_n^{\text{free}}$ and $\bar{\boldsymbol{P}}_n^{\text{free}}$ are just the measures $\boldsymbol{P}_n$, $\hat{\boldsymbol{P}}_n$ and $\bar{\boldsymbol{P}}_n$ which will be introduced in the proofs of Theorem \ref{theo_existence_wr} and Theorem \ref{theo_polychromaticite_cas_extreme_faible activite}, see Section \ref{section_preuve_theo_existence} and Section \ref{section_preuve_non_monochromaticite_petite activite}. Let us note also that the Widom-Rowlinson model with non-integrable radii exhibits a phase transition in dimension one. This is quite unusual and is due to the very long range of the interaction.
\end{remark} \section{Proof of Theorem \ref{theo_existence_wr}} \label{section_preuve_theo_existence} First let us consider the extreme case where at least one radius measure, say $Q_i$, satisfies $\int_{\ensemblenombre{R}^+} r^d Q_i(dr)=\infty$. It is known, see \cite{stoyan_kendall_mecke} for instance, that the corresponding Poisson Boolean model almost surely covers the whole space $\ensemblenombre{R}^d$. In particular for every bounded $\Lambda \subseteq \ensemblenombre{R}^d$ and $\Poisson{z_i,Q_i}$-almost every configuration $\omega$, we have $L(\omega_{\Lambda^c})=\ensemblenombre{R}^d$. Let us consider the probability measure on $\boldsymbol{\Omega}$ with one marginal being $\Poisson{z_i,Q_i}$ and the others producing almost surely empty configurations. This is a Poisson point process $\MultiPoisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}$ with only one non-zero intensity $z_i$, i.e. $\bar{\boldsymbol{\Intensity}}^i:= (0, \dots ,0, z_i, 0, \dots ,0)$. This probability measure trivially satisfies the conditions \eqref{eq_def_wr} of Definition \ref{def_WR} and is therefore a Widom-Rowlinson measure of parameters $\boldsymbol{\Intensity}$ and $\boldsymbol{\Radius}$. From now on we therefore consider the case of $\boldsymbol{\Radius}$ satisfying the integrability assumption \eqref{eq_integrability_assumption}.
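The finite-volume measures used in the proofs below are Poisson processes conditioned on the event $\mathcal{A}$ of authorized configurations. For small windows and moderate activities this conditioning can be realised by naive rejection sampling, as in the following Python sketch (our own illustration, not the construction used in the proofs; all function names are assumptions):

```python
import numpy as np
from itertools import combinations

def overlap(c1, r1, c2, r2):
    """True iff some ball of the first family intersects some ball of the
    second, i.e. the hard-core constraint |x_i - x_j| > r_i + r_j fails."""
    if len(c1) == 0 or len(c2) == 0:
        return False
    dists = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1)
    return bool((dists <= r1[:, None] + r2[None, :]).any())

def sample_wr_free(q, z, box, radius_sampler, rng, max_tries=100_000):
    """Rejection sampler for the finite-volume symmetric Widom-Rowlinson
    measure with free boundary condition: draw q independent marked Poisson
    processes and accept iff no two balls of distinct colours overlap."""
    box = np.asarray(box, dtype=float)
    volume = float(np.prod(box))
    for _ in range(max_tries):
        colours = []
        for _i in range(q):
            n = rng.poisson(z * volume)
            centers = rng.uniform(0.0, box, size=(n, len(box)))
            colours.append((centers, radius_sampler(n, rng)))
        # Accept iff the configuration lies in the authorized event A.
        if not any(overlap(*colours[i], *colours[j])
                   for i, j in combinations(range(q), 2)):
            return colours
    raise RuntimeError("rejection sampler did not accept a configuration")

# Example: q = 2 colours, small activity, deterministic radius 0.05.
rng = np.random.default_rng(1)
config = sample_wr_free(q=2, z=2.0, box=[1.0, 1.0],
                        radius_sampler=lambda n, r: np.full(n, 0.05),
                        rng=rng)
```

The acceptance probability of each attempt is exactly the partition function, so this sampler becomes useless for large boxes or activities; it is only meant to make the conditioning concrete.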
\subsection{Construction of a good cluster point} To build a Widom-Rowlinson measure, consider a sequence of Widom-Rowlinson measures on the bounded boxes $\Lambda_n$ with free boundary condition defined as \begin{align} \boldsymbol{P}_n(d\boldsymbol{\omega}) = \Specification{\Lambda_n, \emptyset}(d\boldsymbol{\omega}) = \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}) }{\boldsymbol{Z}_{n}} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}(d\boldsymbol{\omega}_{\Lambda_n}), \nonumber \end{align} with $\Lambda_n :=]-n,n]^d$ and $\boldsymbol{Z}_{n}= \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}) \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}(d\boldsymbol{\omega}_{\Lambda_n})$. Then consider $\hat{\boldsymbol{P}}_n = \underset{i \in 2n \Z^d}{\otimes} \boldsymbol{P}_n \circ \tau_{i}^{-1}$ and $\bar{\boldsymbol{P}}_n=\frac{1}{\mathcal{L}^\Dim(\Lambda_n)}\int_{\Lambda_n} ( \hat{\boldsymbol{P}}_n \circ \tau_x^{-1} ) dx$, where $\tau_x$ is the translation operator of vector $x\in \ensemblenombre{R}^d$. The measures $\bar{\boldsymbol{P}}_n$ are by construction stationary. The aim is to find an accumulation point for the sequence $(\bar{\boldsymbol{P}}_n)$ with respect to the local convergence topology defined in the next definition. \begin{definition} A measurable function $f: \boldsymbol{\Omega} \to \ensemblenombre{R}$ is said to be \emph{local} if there exists a bounded $\Lambda \subseteq \ensemblenombre{R}^d$ such that $f(\boldsymbol{\omega})=f(\boldsymbol{\omega}_{\Lambda})$ for all $\boldsymbol{\omega} \in \boldsymbol{\Omega}$.
A sequence $(\tilde{\boldsymbol{P}}_n)$ converges, with respect to the local convergence topology, towards $\tilde{\boldsymbol{P}}$, if for every bounded local function $f$ we have \begin{align*} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}) \tilde{\boldsymbol{P}}_n(d\boldsymbol{\omega}) \underset{n \to \infty}{\longrightarrow} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}) \tilde{\boldsymbol{P}}(d\boldsymbol{\omega}). \end{align*} \end{definition} A very convenient tool for proving the existence of an accumulation point is the specific entropy, introduced in \cite{georgii_livre} and defined as follows. \begin{definition} For a stationary probability measure $\boldsymbol{P}$ we define the specific entropy of $\boldsymbol{P}$, written $\mathcal{I} (\boldsymbol{P})$, as the following limit: \begin{align} \label{eq_specific_entropy_def} \mathcal{I} (\boldsymbol{P}) = \lim_{n \to \infty} \ \frac{1}{\mathcal{L}^\Dim (\Lambda_n)} \mathcal{I}_{\Lambda_n}(\boldsymbol{P} | \MultiPoisson{\MultiIntensity,\MultiRadius}), \end{align} with $\mathcal{I}_{\Lambda_n}(\boldsymbol{P} | \MultiPoisson{\MultiIntensity,\MultiRadius})$ being the relative entropy of $\boldsymbol{P}$ with respect to $\MultiPoisson{\MultiIntensity,\MultiRadius}$, defined as \begin{align} \mathcal{I}_{\Lambda_n}(\boldsymbol{P} | \MultiPoisson{\MultiIntensity,\MultiRadius})= \left\lbrace \begin{array}{ccc} \int_{\boldsymbol{\Omega}} g \ \log (g) \ d\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n} & \mbox{if}& \boldsymbol{P}_{\Lambda_n} \ll \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}, \ g=\frac{d\boldsymbol{P}_{\Lambda_n}}{d\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}} \\ + \infty & \mbox{else} & \end{array}\right. . \nonumber \end{align} \end{definition} Stationarity ensures the existence of the limit in \eqref{eq_specific_entropy_def}.
This is the reason why we introduce a stationary version $\bar{\boldsymbol{P}}_n$ of the finite volume Widom-Rowlinson measure $\boldsymbol{P}_n$. The following proposition ensures the compactness of the level sets of the specific entropy. \begin{proposition}[Proposition 2.6 in \cite{georgii_zessin}] \label{propo_compactness_entropy_specifique} With respect to the local convergence topology induced on the stationary probability measures on $\boldsymbol{\Omega}$, we have \begin{enumerate} \item $\mathcal{I}$ is affine; \item $\mathcal{I}$ is lower semi-continuous; \item the set $ \{ \boldsymbol{P}$ stationary, $ \mathcal{I} (\boldsymbol{P}) \leq C \}$ is compact for every positive real number $C$. \end{enumerate} \end{proposition} \begin{proposition} \label{propo_borne_entropi_spécifique_wr} For all $n$ we have \begin{align*} \mathcal{I} (\bar{\boldsymbol{P}}_n) \leq z_1 + \dots + z_q. \end{align*} \end{proposition} \begin{proof} First, using the fact that the specific entropy is affine (see Proposition \ref{propo_compactness_entropy_specifique}), we have \begin{align} \mathcal{I}(\bar{\boldsymbol{P}}_n) = \frac{1}{\mathcal{L}^\Dim (\Lambda_n)} \mathcal{I}_{\Lambda_n}(\boldsymbol{P}_n|\MultiPoisson{\MultiIntensity,\MultiRadius}). \nonumber \end{align} Now using the definition of the relative entropy and a standard bound on the partition function, we get \begin{align} \mathcal{I}_{\Lambda_n}(\boldsymbol{P}_n|\MultiPoisson{\MultiIntensity,\MultiRadius}) = -\log (\boldsymbol{Z}_{n}{}) \leq (z_1 + \dots + z_q) \mathcal{L}^\Dim (\Lambda_n), \nonumber \end{align} which leads to the expected result. \end{proof} Using Proposition \ref{propo_compactness_entropy_specifique} and Proposition \ref{propo_borne_entropi_spécifique_wr}, we obtain the existence of an accumulation point $\boldsymbol{P}$ of the sequence $(\bar{\boldsymbol{P}}_n)$. For the rest of the proof, for convenience of notation, we omit the extraction of a subsequence when letting $n$ go to infinity.
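Let us make the standard bound on the partition function used in the proof above explicit; it relies only on the fact that the empty configuration is authorized:
\begin{align*}
\boldsymbol{Z}_{n}{} = \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}) \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}(d\boldsymbol{\omega}_{\Lambda_n}) \geq \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda_n} \left( \{ \boldsymbol{\omega}, \ \boldsymbol{\omega}_{\Lambda_n} = \emptyset \} \right) = \exp \left( -(z_1 + \dots + z_q) \mathcal{L}^\Dim (\Lambda_n) \right),
\end{align*}
which gives $-\log (\boldsymbol{Z}_{n}{}) \leq (z_1 + \dots + z_q) \mathcal{L}^\Dim (\Lambda_n)$.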
We now have a good candidate for being a Widom-Rowlinson measure. It remains to prove that it satisfies equations \eqref{eq_def_wr}. This was done for the symmetric case in the PhD manuscript \cite{houdebert_these}, and we adapt here the proof to the non-symmetric case. In the next proposition we prove that the measure $\boldsymbol{P}$ produces almost surely authorized configurations. This trivially implies that condition \eqref{eq_partition_function_non_deg_def_wr} is fulfilled. \begin{proposition} \label{propo_candidat_non_degenere} We have $\boldsymbol{P}(\mathcal{A})=1$, and therefore \begin{align} \PartitionFunction{\Lambda,\boldsymbol{\omega}_{\Lambda^c}} \geq \exp(-z \mathcal{L}^\Dim(\Lambda) ) \ \boldsymbol{P}(d \boldsymbol{\omega}) - \text{almost surely}. \nonumber \end{align} \end{proposition} \begin{proof} The event $\mathcal{A}$ is not local and we cannot directly use the local convergence. But this event could be called "almost local" since for every configuration $\boldsymbol{\omega}$, \begin{align} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}_{\Lambda_k}) \underset{k \to \infty}{\longrightarrow} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}).
\nonumber \end{align} Therefore we have \begin{align} \boldsymbol{P}(\mathcal{A}) &= \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}) \boldsymbol{P}(d \boldsymbol{\omega}) = \nonumber \underset{k \to \infty}{\lim}\int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}_{\Lambda_k}) \boldsymbol{P}(d \boldsymbol{\omega}) \\ & = \underset{k \to \infty}{\lim} \underset{n \to \infty}{\lim} \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}_{\Lambda_k}) \bar{\boldsymbol{P}}_n(d \boldsymbol{\omega}) \nonumber \end{align} with \begin{align} \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}_{\Lambda_k}) \bar{\boldsymbol{P}}_n(d \boldsymbol{\omega}) & = \frac{1}{\mathcal{L}^\Dim (\Lambda_n)} \int_{\Lambda_n} \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}_{\Lambda_k}) \hat{\boldsymbol{P}}_n \circ \tau_x ^{-1}(d \boldsymbol{\omega}) dx \nonumber \\ & = \frac{1}{\mathcal{L}^\Dim (\Lambda_n)} \int_{\Lambda_n} \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}} (\boldsymbol{\omega}_{\tau_x (\Lambda_k)}) \hat{\boldsymbol{P}}_n (d \boldsymbol{\omega}) dx. \nonumber \end{align} For $n >k$, we have $\tau_x(\Lambda_k) \subseteq \Lambda_n$ as soon as $x \in [k-n,n-k]^d$ and so \begin{align} \frac{1}{\mathcal{L}^\Dim (\Lambda_n)} \int_{\Lambda_n} \int_{\boldsymbol{\Omega}} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}_{\tau_x (\Lambda_k)}) \hat{\boldsymbol{P}}_n(d \boldsymbol{\omega})dx \geq \frac{\mathcal{L}^\Dim ([k-n,n-k]^d)}{\mathcal{L}^\Dim (\Lambda_n)}, \nonumber \end{align} which tends to $1$ as $n$ goes to infinity. The result is proved. \end{proof} \subsection{The cluster point satisfies \eqref{eq_dlr_definition_wr}} From now on we fix a bounded $\Lambda \subseteq \ensemblenombre{R}^d$. We are going to prove that $\boldsymbol{P}$ satisfies the DLR($\Lambda$) equation.
To do so we need to modify the sequence $(\bar{\boldsymbol{P}}_n)$, defining a new sequence $(\tilde{\boldsymbol{P}}_n^{\Lambda})$. This new sequence will be asymptotically equivalent to the former one and each $\tilde{\boldsymbol{P}}_n^{\Lambda}$ will satisfy the DLR($\Lambda$) equation \eqref{eq_dlr_definition_wr}. Finally by considering good "localizing" events, we will be able to pass the DLR property through the limit. Consider \begin{align} \tilde{\boldsymbol{P}}_n^{\Lambda}= \frac{1}{\mathcal{L}^\Dim(\Lambda_n)}\int_{\Lambda_n} \mathds{1}_{\Lambda \subseteq \tau_x(\Lambda_n)} \times ( \boldsymbol{P}_n \circ \tau_x^{-1} ) dx. \nonumber \end{align} The measures $\tilde{\boldsymbol{P}}_n^{\Lambda}$ are not probability measures but satisfy good properties, see the following proposition. \begin{proposition} \label{propo_conv_local_et_dlr} For each bounded local function $f : \boldsymbol{\Omega} \to \ensemblenombre{R}$ we have \begin{align} \left| \int_{\boldsymbol{\Omega}} f \ d\tilde{\boldsymbol{P}}_n^{\Lambda} - \int_{\boldsymbol{\Omega}} f \ d\bar{\boldsymbol{P}}_n \right| \to 0 \nonumber \end{align} as $n \to +\infty$, which implies that $\boldsymbol{P}$ is an accumulation point of the sequence $(\tilde{\boldsymbol{P}}_n^{\Lambda})$. Furthermore the measure $\tilde{\boldsymbol{P}}_n^{\Lambda}$ satisfies the DLR($\Lambda$) equation \eqref{eq_dlr_definition_wr}. \end{proposition} We omit the proof of this standard result: the first point is proved in \cite{dereudre_2009} for the \emph{Quermass-interaction model}, the second point is a consequence of the compatibility of the Gibbs specification $\Specification{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}$, and both points are detailed in the PhD manuscript \cite{houdebert_these}. Now let us fix a measurable function $f$ bounded by $1$. By the structure of $\boldsymbol{\mathcal{F}}$ we may assume that $f$ is local.
We are going to prove that for each $\epsilon>0$, the quantity \begin{align} \delta = \left| \int_{\boldsymbol{\Omega}} f \ d \boldsymbol{P} - \int_{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c})} {\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda}) \boldsymbol{P} (d\boldsymbol{\omega}) \right| \nonumber \end{align} is bounded by $7\epsilon$; since $\epsilon$ is arbitrary, this yields $\delta=0$, which is exactly the DLR($\Lambda$) equation. The function $f_{\Lambda}(\boldsymbol{\omega}) := \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c})} {\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda})$ is in general not local, which is the main obstacle in proving the result. We cannot directly use the local convergence, and the pointwise convergence used in the proof of Proposition \ref{propo_candidat_non_degenere} is not good enough as we need a more uniform convergence. Let us consider the event \begin{align} U_{r_1}=\{\boldsymbol{\omega}, \forall (x,r) \in \boldsymbol{\omega}_{\Lambda}, r \leq r_1 \}. \nonumber \end{align} \begin{lemma} \label{lemme_preuve_dlr_WR_introduction_petite_boule} For $r_1$ large enough and for all $\boldsymbol{\omega}_{\Lambda^c} \in \mathcal{A}$ we have \begin{align} \left| \int_{\boldsymbol{\Omega}} f(. \cup \boldsymbol{\omega}_{\Lambda^c}) \frac{\mathds{1}_{\mathcal{A}}(. \cup \boldsymbol{\omega}_{\Lambda^c})} {\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}} } d\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda} - \int_{\boldsymbol{\Omega}} f(. \cup \boldsymbol{\omega}_{\Lambda^c}) \mathds{1}_{U_{r_1}}(.)
\frac{\mathds{1}_{\mathcal{A}}(. \cup \boldsymbol{\omega}_{\Lambda^c})} {\boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c})} d \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda} \right| \leq \epsilon, \nonumber \end{align} where $\boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c}) : = \int_{\boldsymbol{\Omega}} \mathds{1}_{U_{r_1}} (\boldsymbol{\omega}'_{\Lambda}) \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda})$ is the \emph{modified partition function}. The constant $r_1$ can be chosen to also have $\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(U_{r_1}^c) \leq \epsilon$. \end{lemma} The proof of Lemma \ref{lemme_preuve_dlr_WR_introduction_petite_boule} is done in Section \ref{section_preuves_lemmes_techniques}. Using Lemma \ref{lemme_preuve_dlr_WR_introduction_petite_boule} we get \begin{align} \delta \leq \epsilon + \left| \int_{\boldsymbol{\Omega}} f \ d \boldsymbol{P} - \int _{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \mathds{1}_{U_{r_1}}(\boldsymbol{\omega}'_{\Lambda}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c})}{\boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c})} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda}) \boldsymbol{P} (d\boldsymbol{\omega}) \right|. \nonumber \end{align} Now in order to "localize" (with respect to $\boldsymbol{\omega}$) the functions $\mathds{1}_{\mathcal{A}}$ and $\boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c})$ we introduce the events \begin{align} \Upsilon_k = \{ \boldsymbol{\omega} \in \boldsymbol{\Omega}, \ \forall (x,r) \in \boldsymbol{\omega}, \ r \leq \frac{|x|}{2} + k \}. 
\nonumber \end{align} Let us note that a ball in a configuration in $\Upsilon_k$ has a radius smaller than half the distance of its centre from the origin (up to an additive fixed constant $k$). Hence, when the centre is far from the origin, the whole ball is far from the origin as well. This is the reason why $\Upsilon_k$ localizes the interaction. The next lemma claims that $\Upsilon_k$ has high probability when $k$ is large enough. \begin{lemma} \label{lemme_preuve_dlr_ups_total} For $k$ large enough we have: \begin{enumerate} \item $\boldsymbol{P}(\Upsilon_k^c) \leq \epsilon$; \item $\tilde{\boldsymbol{P}}_n^{\Lambda}(\Upsilon_k^c) \leq \epsilon$ for each $n$; \item $\MultiPoisson{\MultiIntensity,\MultiRadius}(\Upsilon_k^c) \leq \epsilon$. \end{enumerate} \end{lemma} The proof of Lemma \ref{lemme_preuve_dlr_ups_total} is done in Section \ref{section_preuves_lemmes_techniques}. Using this lemma we have \begin{align*} \delta \leq 2 \epsilon +& \bigg| \int_{\boldsymbol{\Omega}} f \ d \boldsymbol{P} \nonumber \\ & - \int_{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \mathds{1}_{U_{r_1} }(\boldsymbol{\omega}'_{\Lambda}) \mathds{1}_{\Upsilon_k}(\boldsymbol{\omega}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c})}{\boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c})} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda}) \boldsymbol{P} (d\boldsymbol{\omega}) \bigg|. \end{align*} Now with the introduction of the events $U_{r_1}$ and $\Upsilon_k$, the next lemma enables us to "localize" the integrated functions.
\begin{lemma} \label{lemme_preuve_dlr_localisation} There exists $\Delta$ bounded, depending on $r_1$ and $k$, such that for every $\boldsymbol{\omega}'_{\Lambda} \in U_{r_1}$ and $\boldsymbol{\omega} \in \Upsilon_k \cap \mathcal{A}$, \begin{align} \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) = \mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Delta \setminus \Lambda}) \text{ and } \boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c}) = \boldsymbol{Z}_{r_1} (\Lambda, \boldsymbol{\omega}_{\Delta \setminus \Lambda}). \nonumber \end{align} \end{lemma} The proof of Lemma \ref{lemme_preuve_dlr_localisation} is done in Section \ref{section_preuves_lemmes_techniques}. With this lemma we have \begin{align} \small \delta &\leq 2 \epsilon + \bigg| \int_{\boldsymbol{\Omega}} f d\boldsymbol{P} \nonumber \\ &\quad \ - \int_{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \mathds{1}_{U_{r_1} }(\boldsymbol{\omega}'_{\Lambda}) \mathds{1}_{\Upsilon_k}(\boldsymbol{\omega}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Delta \setminus \Lambda})} {\boldsymbol{Z}_{r_1}(\Lambda, \boldsymbol{\omega}_{\Delta \setminus \Lambda})} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda}) \boldsymbol{P} (d\boldsymbol{\omega}) \bigg| \nonumber \\ & \leq 4 \epsilon + \bigg| \int_{\boldsymbol{\Omega}} f d \tilde{\boldsymbol{P}}_n^{\Lambda} \nonumber \\ &\quad \quad \quad - \int_{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \mathds{1}_{U_{r_1}}(\boldsymbol{\omega}'_{\Lambda}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Delta \setminus \Lambda})} {\boldsymbol{Z}_{r_1}(\Lambda, \boldsymbol{\omega}_{\Delta \setminus \Lambda})}
\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda}) \tilde{\boldsymbol{P}}_n^{\Lambda} (d\boldsymbol{\omega}) \bigg|, \nonumber \end{align} where the last inequality comes from Lemma \ref{lemme_preuve_dlr_ups_total} and the local convergence of Proposition \ref{propo_conv_local_et_dlr}, for $n$ large enough fixed from now on. Now using again Lemma \ref{lemme_preuve_dlr_WR_introduction_petite_boule} and Lemma \ref{lemme_preuve_dlr_ups_total}, we obtain \begin{align} \delta &\leq 7 \epsilon + \underbrace{\left| \int_{\boldsymbol{\Omega}} f d \tilde{\boldsymbol{P}}_n^{\Lambda} - \int_{\boldsymbol{\Omega}} \int_{\boldsymbol{\Omega}} f(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c}) \frac{\mathds{1}_{\mathcal{A}}(\boldsymbol{\omega}'_{\Lambda} \cup \boldsymbol{\omega}_{\Lambda^c})} {\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}} \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda}(d\boldsymbol{\omega}'_{\Lambda}) \tilde{\boldsymbol{P}}_n^{\Lambda} (d\boldsymbol{\omega}) \right|}_{=0 \text{ thanks to Proposition \ref{propo_conv_local_et_dlr}}}. \nonumber \end{align} \subsection{Proof of the lemmas} \label{section_preuves_lemmes_techniques} \subsubsection{Proof of Lemma \ref{lemme_preuve_dlr_WR_introduction_petite_boule}} First, from a standard computation we have \begin{align} \label{eq_preuve_dlr_poisson_petite_boule} \Poisson{\MultiIntensity,\MultiRadius} (U_{r_1}^c) =1- \exp \left(-\mathcal{L}^\Dim (\Lambda ) \sum_i z_i Q_i([r_1,\infty[) \right) \end{align} which indeed can be as small as needed when $r_1$ is large enough. Then we have \begin{align} \label{eq_preuve_dlr_petite_boule_dans_dlr} & \left| \int_{\boldsymbol{\Omega}} f(. \cup \boldsymbol{\omega}_{\Lambda^c}) \frac{\mathds{1}_{\mathcal{A}}(. \cup \boldsymbol{\omega}_{\Lambda^c})} {\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}} } d\MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda} - \int_{\boldsymbol{\Omega}} f(.
\cup \boldsymbol{\omega}_{\Lambda^c}) \mathds{1}_{U_{r_1}}(.) \frac{\mathds{1}_{\mathcal{A}}(. \cup \boldsymbol{\omega}_{\Lambda^c})} {\boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c})} d \MultiPoisson{\MultiIntensity,\MultiRadius}_{\Lambda} \right| \nonumber \\ & \hspace{0.2cm} \leq \Specification{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}(U_{r_1}^c) + \frac{\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}} - \boldsymbol{Z}_{r_1}(\Lambda, \MultiConf_{\Lambda^c}) } {\PartitionFunction{\Lambda, \boldsymbol{\omega}_{\Lambda^c}}} \nonumber \\ & \hspace{0.2cm} \leq \Poisson{\MultiIntensity,\MultiRadius} (U_{r_1}^c) + \exp(\mathcal{L}^\Dim (\Lambda) \sum_i z_i) \Poisson{\MultiIntensity,\MultiRadius} (U_{r_1}^c), \end{align} where the last inequality comes from Proposition \ref{propo_stochastic_dom_poisson_wr} and Proposition \ref{propo_candidat_non_degenere}. So by choosing $r_1$ large enough, both quantities \eqref{eq_preuve_dlr_poisson_petite_boule} and \eqref{eq_preuve_dlr_petite_boule_dans_dlr} are smaller than $\epsilon$. \subsubsection{Proof of Lemma \ref{lemme_preuve_dlr_ups_total}} The events $\Upsilon_k^c$ are increasing. Therefore from Proposition \ref{propo_stochastic_dom_poisson_wr} we have $\boldsymbol{P} (\Upsilon_k^c) \leq \MultiPoisson{\MultiIntensity,\MultiRadius} (\Upsilon_k^c)$. Furthermore we have \begin{align*} \tilde{\boldsymbol{P}}_n^{\Lambda}(\Upsilon_k^c) \ \leq \ \boldsymbol{P}_n(\Upsilon_k^c) \ \leq \ \Poisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}(\Upsilon_k^c) \ \leq \ \Poisson{\MultiIntensity,\MultiRadius}(\Upsilon_k^c), \end{align*} where the second inequality is a consequence of Proposition \ref{propo_stochastic_dom_poisson_wr} applied to $\boldsymbol{P}_n = \Specification{\Lambda_n, \emptyset}$, and the third one comes from the fact that the event $\Upsilon_k^c$ is increasing. So points 1 and 2 of the lemma are direct consequences of point 3. Point 3 is proved in \cite[Lemma I.3.22]{houdebert_these} or can be adapted from the proof of Lemma 3.5 in \cite{dereudre_2009}.
\subsubsection{Proof of Lemma \ref{lemme_preuve_dlr_localisation}} Since $\boldsymbol{\omega} \in \mathcal{A}$, it is enough to check that balls $(x,r) \in \boldsymbol{\omega}$ centred far enough cannot overlap $L(\boldsymbol{\omega}'_{\Lambda})$. But since $\boldsymbol{\omega}'_{\Lambda} \in U_{r_1}$, we have $L(\boldsymbol{\omega}'_{\Lambda}) \subseteq \Lambda \oplus B(0,r_1)$. Finally $\boldsymbol{\omega} \in \Upsilon_k$ and therefore, for a set $\Delta$ large enough (which can be made explicit), we have for every $(x,r) \in \boldsymbol{\omega}_{\Delta^c}$ that $B(x,r) \cap ( \Lambda \oplus B(0,r_1) ) = \emptyset$. This concludes the proof of the lemma. \section{Proof of Theorem \ref{theo_polychromaticite_cas_extreme_faible activite} } \label{section_preuve_non_monochromaticite_petite activite} The first assertion is a consequence of the second one, which we now prove. Consider the sequence $(\boldsymbol{P}_n)$ of finite-volume Widom-Rowlinson measures with free boundary condition, and $(\bar{\boldsymbol{P}}_n)$ the stationary modification, defined in the proof of Theorem \ref{theo_existence_wr}, see Section \ref{section_preuve_theo_existence}. As in the proof of Theorem \ref{theo_existence_wr}, the sequence $(\bar{\boldsymbol{P}}_n)$ admits an accumulation point denoted by $\boldsymbol{P}$, which satisfies $\boldsymbol{P} (\mathcal{A})=1$. But the proof of the DLR equations done for Theorem \ref{theo_existence_wr} in Section \ref{section_preuve_theo_existence} is not valid in this case, since Lemma \ref{lemme_preuve_dlr_ups_total} is false in the extreme case. In fact, to prove the DLR equations we will use polychromaticity, which we now establish using the specific entropy. \subsection{Lower bound of the specific entropy of monochromatic measures} Consider a monochromatic stationary probability measure $\boldsymbol{P}^{mono}$. Without loss of generality we may assume $\boldsymbol{P}^{mono}$ to be ergodic.
This implies in particular that the colour of $\boldsymbol{P}^{mono}$ is deterministic; let us call it $i$. Let us compute the specific entropy of $\boldsymbol{P}^{mono}$. Let $n \geq 0$. If $\boldsymbol{P}^{mono}_{\Lambda_n}$ is not absolutely continuous with respect to $\Poisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}$, then $\mathcal{I}_{\Lambda_n} (\boldsymbol{P}^{mono} | \Poisson{\MultiIntensity,\MultiRadius}) = \infty$. Otherwise $\boldsymbol{P}^{mono}_{\Lambda_n} \ll \Poisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}$ and, since $\boldsymbol{P}^{mono}$ is monochromatic of colour $i$, we also have $\boldsymbol{P}^{mono}_{\Lambda_n} \ll \Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}_{\Lambda_n}$ where $\bar{\boldsymbol{\Intensity}}^i:= (0, \dots ,0, z_i, 0, \dots ,0)$ is the vector whose only non-zero coordinate is $z_i$, at the $i$-th position. Therefore \begin{align} \mathcal{I}_{\Lambda_n} (\boldsymbol{P}^{mono} | \Poisson{\MultiIntensity,\MultiRadius}) &= \nonumber \mathcal{I}_{\Lambda_n} (\boldsymbol{P}^{mono} | \Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}) + \mathcal{I}_{\Lambda_n} (\Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}| \Poisson{\MultiIntensity,\MultiRadius}) \\ & \geq \mathcal{I}_{\Lambda_n} (\Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}| \Poisson{\MultiIntensity,\MultiRadius}), \label{eq_minoration_entropy_mono_retour_poisson} \end{align} where the inequality \eqref{eq_minoration_entropy_mono_retour_poisson} comes from the non-negativity of the relative entropy.
But a direct computation leads to $$ \frac{d\Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}_{\Lambda_n}} {d\Poisson{\MultiIntensity,\MultiRadius}_{\Lambda_n}} (\boldsymbol{\omega}) = \prod_{j \not = i} \exp (z_j \mathcal{L}^\Dim(\Lambda_n) ) \mathds{1}_{\omega^j= \emptyset} $$ which implies \begin{align*} \mathcal{I}_{\Lambda_n} (\Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}| \Poisson{\MultiIntensity,\MultiRadius}) &= \mathcal{L}^\Dim (\Lambda_n) \sum_{j \not = i} z_j \geq \mathcal{L}^\Dim (\Lambda_n) \min_i \left( \sum_{j \not = i} z_j \right). \end{align*} The last inequality together with \eqref{eq_minoration_entropy_mono_retour_poisson} implies \begin{align} \label{eq_minoration_entropy_monochromatique} \mathcal{I}(\boldsymbol{P}^{mono}) \geq \min_{i} \left( \sum_{j \not = i} z_j \right) = z \left( 1 - \max_i \alpha_i \right). \end{align} \begin{remark} Inequality \eqref{eq_minoration_entropy_monochromatique} cannot be improved since inequality \eqref{eq_minoration_entropy_mono_retour_poisson} becomes an equality in the case where $\boldsymbol{P}^{mono} = \Poisson{\bar{\boldsymbol{\Intensity}}^i , \boldsymbol{\Radius}}$. \end{remark} \subsection{Upper bound of the specific entropy of $\boldsymbol{P}$} Let us now look at the specific entropy of $\boldsymbol{P}$. If we prove that $\mathcal{I}(\boldsymbol{P})< z \left( 1 - \max_i \alpha_i \right)$, then, by the lower bound \eqref{eq_minoration_entropy_monochromatique}, $\boldsymbol{P}$ is not monochromatic. The bound from Proposition \ref{propo_borne_entropi_spécifique_wr} is not good enough for the purpose of the current proof and we now improve it. Divide the cube $\Lambda_n$ into copies of the smaller cube $\Lambda_m$ with $m<n$. We denote by $k_n$ the number of such copies of $\Lambda_m$ in $\Lambda_n$ and we fix $\epsilon<1- \alpha_{\max}<1$. 
One example of authorized configuration is when each copy of $\Lambda_m$ contains particles of only one type and when the balls do not overlap the complement of the cube. This leads to the following inequality: \begin{align} \Poisson{z \boldsymbol{\alpha} , \boldsymbol{\Radius}}_{\Lambda_n}(\mathcal{A}) \geq \exp\left(-z \mathcal{L}^\Dim (\Lambda_n)\right) \left( 1 + \sum_{i} \sum_{l \geq 1} \frac{z^l \alpha_i^l \mathcal{L}^\Dim (\Lambda_m)^l \phi_{m,i}^l}{l!} \right)^{k_n}, \end{align} with $\phi_{m,i} := \frac{1}{\mathcal{L}^\Dim (\Lambda_m)} \int_{\ensemblenombre{R}^d} \int_{\ensemblenombre{R}^+} \mathds{1}_{B(x,r) \subseteq \Lambda_m} Q_i(dr) dx$. Fix $\gamma <1 - \epsilon - \alpha_{\max}$. By choosing $m$ large enough, we have $\phi_{m,i} \geq 1 - \gamma$ for all $i$. Furthermore we have \begin{align} \mathcal{I} (\bar{\boldsymbol{P}}_n) &= \nonumber - \frac{\log(\Poisson{z \boldsymbol{\alpha}, \boldsymbol{\Radius}}_{\Lambda_n}(\mathcal{A}))}{\mathcal{L}^\Dim (\Lambda_n)} \\ & \leq z - \frac{k_n}{\mathcal{L}^\Dim (\Lambda_n)} \log \left(1- q + \sum_i \exp(z \alpha_i \mathcal{L}^\Dim (\Lambda_m) \phi_{m,i}) \right). \end{align} Fix $\beta<1$ satisfying $\epsilon + \alpha_{\max} \leq \beta (1-\gamma)$. Using the fact that $\mathcal{L}^\Dim (\Lambda_n)/k_n \underset{n \to \infty}{\longrightarrow} \mathcal{L}^\Dim (\Lambda_m)$, we have for $n$ large enough that \begin{align} \label{eq_preuve_non_mono_plus_de_n} \mathcal{I} (\bar{\boldsymbol{P}}_n) \leq z - \frac{\beta}{\mathcal{L}^\Dim (\Lambda_m)} \log \left(1- q + \sum_i \exp(z \alpha_i \mathcal{L}^\Dim (\Lambda_m) \phi_{m,i}) \right). \end{align} The bound in \eqref{eq_preuve_non_mono_plus_de_n} does not depend on $n$.
It remains to prove that the following function of $z$ is negative close to the origin, uniformly in $\boldsymbol{\alpha}$ satisfying \eqref{eq_condition_proportion}: \begin{align*} \Psi_{\boldsymbol{\alpha}}(z) := z \max_i \alpha_i - \frac{\beta}{\mathcal{L}^\Dim (\Lambda_m)} \log \left(1- q + \sum_i \exp(z \alpha_i \mathcal{L}^\Dim (\Lambda_m) \phi_{m,i}) \right). \end{align*} It satisfies $\Psi_{\boldsymbol{\alpha}}(0)=0$ and \begin{align*} \Psi'_{\boldsymbol{\alpha}}(z) & = \max_i \alpha_i - \beta \frac{\sum_i \alpha_i \phi_{m,i}\exp(z \alpha_i \mathcal{L}^\Dim (\Lambda_m) \phi_{m,i})} {1- q + \sum_i \exp(z \alpha_i \mathcal{L}^\Dim (\Lambda_m) \phi_{m,i})} \\ & \leq \alpha_{\max} - \frac{\beta (1 - \gamma)} {1- q + \sum_i \exp(z \alpha_{\max} \mathcal{L}^\Dim (\Lambda_m) \phi_{m,i})}, \end{align*} where the last bound does not depend on $\boldsymbol{\alpha}$. Therefore, by the choice of the parameters, we have $\Psi'_{\boldsymbol{\alpha}}(0)\leq - \epsilon$ and thus, for $z$ small enough, uniformly in $\boldsymbol{\alpha}$ satisfying \eqref{eq_condition_proportion}, we have $\Psi'_{\boldsymbol{\alpha}}(z)\leq -\frac{\epsilon}{2}$ and hence $\Psi_{\boldsymbol{\alpha}}(z)<0$. Therefore, using the lower semi-continuity of the specific entropy, we get that $\mathcal{I} (\boldsymbol{P}) < z \left( 1 - \max_i \alpha_i \right)$, which together with \eqref{eq_minoration_entropy_monochromatique} implies that $\boldsymbol{P}(Poly)>0$. Now by conditioning on the event $Poly$, one can prove that the conditioned probability measure satisfies the DLR equations \eqref{eq_dlr_definition_wr}. This was done in detail in \cite{dereudre_houdebert} for the symmetric case.
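For the reader's convenience, the claim $\Psi'_{\boldsymbol{\alpha}}(0) \leq -\epsilon$ used above can be checked directly: at $z=0$ every exponential in the denominator equals $1$, so the denominator equals $1-q+q=1$, $\phi_{m,i} \geq 1-\gamma$ gives
\begin{align*}
\Psi'_{\boldsymbol{\alpha}}(0) \leq \alpha_{\max} - \beta (1-\gamma) \leq \alpha_{\max} - (\epsilon + \alpha_{\max}) = -\epsilon,
\end{align*}
where the second inequality is exactly the choice $\epsilon + \alpha_{\max} \leq \beta (1-\gamma)$ of the parameter $\beta$.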
\section{Proof of Theorem \ref{theo_conjecture_preuve_faible}} \label{section_preuve_conjecture_faible} \subsection{Fortuin-Kasteleyn representation} We consider the sequence $(\bar{\boldsymbol{P}}_n^{\text{free}})$ defined in Section \ref{section_results}, which admits, thanks to Proposition \ref{propo_compactness_entropy_specifique}, at least one cluster point denoted by $\boldsymbol{P}^{\text{free}}$. The proof of the second part of the theorem, which claims that for small activity $\boldsymbol{P}^{\text{free}}$ is polychromatic, is similar to the proof of Theorem \ref{theo_polychromaticite_cas_extreme_faible activite}. We do not give the details here. So it remains to prove the first part of the theorem on the monochromaticity. In the same fashion as in Section \ref{section_preuve_theo_existence}, one can easily prove that $\boldsymbol{P}^{\text{free}}(\mathcal{A})=1$. So one way of proving that $\boldsymbol{P}^{\text{free}}$ is monochromatic is to prove that $\boldsymbol{P}^{\text{free}}$ covers the whole space $\ensemblenombre{R}^d$ with one giant connected component. The first step of the proof is to transfer the problem from the multi-type Widom-Rowlinson measures on $\boldsymbol{\Omega}$ to "single-type" Continuum Random Cluster measures on $\Omega$. To a measure $\boldsymbol{P}$ on $\boldsymbol{\Omega}$ we associate its \emph{colour-blind} measure, denoted by $P$, which is a probability measure on $\Omega$ defined by \begin{align*} \forall E \in \mathcal{F}, \ P(E) = \boldsymbol{P} (\{\boldsymbol{\omega} \, | \cup_i \omega^i \in E \}) \,. \end{align*} The specific entropy and the local convergence topology can be defined in the same way as for $\boldsymbol{\Omega}$. In particular $P^{\text{free}}$ is still an accumulation point of the sequence $(\bar{P}_n^{\text{free}})$.
\begin{proposition}[Fortuin-Kasteleyn representation] The measure $P_n^{\text{free}}$ is a \emph{Continuum Random Cluster measure} on the bounded box $\Lambda_n^{(k_n)}$ with free boundary condition and with parameters $q, z$ and $Q$. This means \begin{align*} P_n^{\text{free}}(d\omega)= \frac{q^{N_{cc} (\omega)}}{Z_n^{(k_n)}} \Poisson{z,Q}_{\Lambda_n^{(k_n)}}(d\omega), \end{align*} where $N_{cc} (\omega)$ denotes the number of connected components of the structure $L(\omega)$, and where $Z_n^{(k_n)}$ is the normalizing constant. \end{proposition} This proposition is a standard result, known as the \emph{Fortuin-Kasteleyn representation} or \emph{grey representation}, proved and used in many articles such as \cite{chayes_kotecky,georgii_haggstrom,dereudre_houdebert,houdebert_2017}. The next proposition is the key element of the proof of Theorem \ref{theo_conjecture_preuve_faible}. \begin{proposition} \label{proposition_majoration_ncc} Under the assumptions of Theorem \ref{theo_conjecture_preuve_faible}, there exists $\tilde{z}<\infty$ such that for all $z>\tilde{z}$ and for all $n$ \begin{align*} \int_{\Omega} N_{cc} (\omega) P_n^{\text{free}}(d\omega) \leq D \times C^{k_n}, \end{align*} where $C,D>1$ are finite constants not depending on $n$. \end{proposition} \begin{remark} In the case $d=1$ one can prove in the same fashion the bound \begin{align*} \int_{\Omega} N_{cc} (\omega) P_n^{\text{free}}(d\omega) \leq C \end{align*} for $z$ large enough and for a constant $C>1$ independent of $n$. \end{remark} Before proving this proposition, let us show how it leads to the expected result.
By the computation done in the proof of Theorem \ref{theo_existence_wr} (see Section \ref{section_preuve_theo_existence}), we have \begin{align*} \mathcal{I} (\bar{P}_n^{\text{free}}) &= \frac{1}{\mathcal{L}^\Dim \left(\Lambda_n^{(k_n)} \right)} \mathcal{I} \left(P_n^{\text{free}} | \Poisson{z, Q}_{\Lambda_n^{(k_n)}} \right) \\ &= \frac{1}{\mathcal{L}^\Dim \left(\Lambda_n^{(k_n)} \right)} \left( -\log (Z_n^{(k_n)}) + \log(q) \int_\Omega N_{cc} \ d P_n^{\text{free}} \right) \\ &\leq \frac{D \times C^{k_n} \log (q)}{2nk_n^{d-1}}\\ & \leq D \log(q) C^{k_n-\log(n)/\log(C)}. \end{align*} Since $(k_n)$ is negligible with respect to $\log(n)$, this upper bound tends to zero as $n$ goes to infinity. So by the lower semi-continuity of the specific entropy we have $\mathcal{I} (P^{\text{free}})=0$, which implies that $P^{\text{free}}=\Poisson{z, Q}$. The condition \eqref{eq_conjecture_cond1} implies in particular that $\int_{\ensemblenombre{R}^+}r^d Q (dr) = \infty$, so the Poisson Boolean model covers almost surely the whole space $\ensemblenombre{R}^d$ with one connected component. Therefore, since $\boldsymbol{P}^{\text{free}}(\mathcal{A})=1$, the measure $\boldsymbol{P}^{\text{free}}$ has no choice but to be monochromatic. By symmetry of the model, each color appears with probability $1/q$. \subsection{Proof of Proposition \ref{proposition_majoration_ncc}} First let $k>0$, taken for simplicity to be the inverse of a positive integer. A condition on $k$ will appear later. Furthermore, let us consider $\bar{\Types}>q$ such that $Q (\{ 0 \})<1/\bar{\Types}$.
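The vanishing of this upper bound is easy to check numerically. The following sketch uses purely illustrative values of $C$, $D$, $q$ and $d$ (they are assumptions, not constants from the paper) and the choice $k_n=\lceil\sqrt{\log n}\rceil$, which is indeed negligible with respect to $\log n$:

```python
import math

# Illustrative constants; these are assumptions, not values from the paper.
C, D, q, d = 2.0, 5.0, 3, 2

def entropy_bound(n):
    # k_n = ceil(sqrt(log n)) is negligible with respect to log(n).
    k_n = math.ceil(math.sqrt(math.log(n)))
    # The bound D * C**k_n * log(q) / (2 n k_n**(d-1)) from the displayed computation.
    return D * C ** k_n * math.log(q) / (2 * n * k_n ** (d - 1))

for n in (10 ** 2, 10 ** 4, 10 ** 8, 10 ** 16):
    print(n, entropy_bound(n))
```

The exponential factor $C^{k_n}$ is overwhelmed by the volume $2nk_n^{d-1}$, so the printed bound decreases toward zero along the sequence.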
Since $Z_n^{(k_n)}\geq 1$, we have for a constant $D>1$ large enough, \begin{align} \label{eq_preuve_conj_retour_poisson} \int_{\Omega} N_{cc} \ d P_n^{\text{free}} & \leq \int_{\Omega} N_{cc} \ q^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n^{(k_n)}} \nonumber \\ & \leq D \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n^{(k_n)}} \nonumber \\ & \leq D \left( \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n^{(k)}} \right) ^{k_n/k}, \end{align} where the last inequality is a consequence of the stationarity of the Poisson Boolean model and the subadditivity of the number of connected components, i.e. $N_{cc}(\omega_{\Delta_1 \cup \Delta_2}) \leq N_{cc}(\omega_{\Delta_1 }) +N_{cc}(\omega_{ \Delta_2})$ for disjoint $\Delta_1$ and $\Delta_2$. \begin{remark} In the case $d=1$ there is no need to introduce $k_n$ and $k$, and we directly have \begin{align*} \int_{\Omega} N_{cc} \ d P_n^{\text{free}} \leq D \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n}, \end{align*} where $\Lambda_n=]-n,n]$. \end{remark} Therefore, it is enough to show that for a fixed $k$ small enough the sequence \begin{align*} \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n^{(k)}} \end{align*} is uniformly bounded in $n\ge 1$. We are going to introduce the notion of \emph{connected component to the right}. Thanks to a renewal phenomenon, the number of connected components to the right will have a geometric law. Then we will prove, using the geometric law, that this number of connected components to the right admits exponential moments for $z$ large enough. More precisely we have \begin{align} \label{eq_preuve_conj_droite_1} \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n^{(k)}} & \leq \left( \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z , Q }_{\Lambda_n^{(k+)}} \right)^2 \end{align} where this inequality comes from the stationarity and the independence property of $\Poisson{z, Q}$.
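The subadditivity of $N_{cc}$ over disjoint windows can be illustrated on a toy one-dimensional segment configuration. The sweep-line count below is a hedged sketch; the windows, number of points and radii are arbitrary choices, not taken from the model:

```python
import random

def n_cc(intervals):
    """Number of connected components of a union of closed 1-d intervals."""
    if not intervals:
        return 0
    intervals = sorted(intervals)
    count, right = 1, intervals[0][1]
    for a, b in intervals[1:]:
        if a > right:        # a gap: a new component starts
            count += 1
        right = max(right, b)
    return count

def segments(omega):
    """Each marked point (x, r) contributes the segment [x, x + r]."""
    return [(x, x + r) for x, r in omega]

random.seed(0)
# Marked points (x, r) restricted to two disjoint windows Delta_1 and Delta_2.
omega_1 = [(random.uniform(0.0, 5.0), random.uniform(0.0, 1.0)) for _ in range(20)]
omega_2 = [(random.uniform(5.0, 10.0), random.uniform(0.0, 1.0)) for _ in range(20)]

# Subadditivity: components of the union can only merge, never split.
lhs = n_cc(segments(omega_1 + omega_2))
rhs = n_cc(segments(omega_1)) + n_cc(segments(omega_2))
print(lhs, rhs)
```

The left-hand count is at most the right-hand sum, because a component of the combined configuration contains at least one component of each restricted configuration it meets.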
The problem is that when $n$ grows, a new ball can appear and cover all existing connected components: seeing $n$ as time, the present depends on the yet unknown future. To overcome this issue, to a radius $r$ we associate the following transformed radius \begin{align*} \tilde{r} = \sqrt{r^2 - (d -1)k^2} \ \mathds{1}_{r^2 \geq (d -1)k^2}. \end{align*} This way, if $(x,r)$ is a marked point, the set \begin{align*} T(x,\tilde{r})=[x,x+\tilde{r}]\times [0,k]^{d - 1} \end{align*} does not cover anything to the left of $x$ and is entirely covered by the ball $B(x,r)$. \begin{remark} In dimension $d=1$ we simply consider $\tilde{r}=r$. \end{remark} We now define the \emph{number of connected components to the right}, denoted by $ N_{cc}^{r} (\omega)$, as the number of connected components of $\underset{(x,r) \in \omega}{\bigcup} T(x,\tilde{r})$. We have $N_{cc} \leq N_{cc}^{r} $, and $N_{cc}^{r}$ is increasing with respect to $n$, which implies \begin{align} \label{eq_preuve_conj_plus_de_n_1} \int_{\Omega} \bar{\Types}^{ N_{cc} } \ d \Poisson{z, Q}_{\Lambda_n^{(k+)}} &\leq \int_{\Omega} \ \bar{\Types}^{ N_{cc}^{r} } \ d \Poisson{z, Q}_{\Lambda_n^{(k+)}} \leq \int_{\Omega} \bar{\Types}^{N_{cc}^{r} } \ d \Poisson{z, Q}_{\Lambda_\infty^{(k+)}} \end{align} where the right-hand side in \eqref{eq_preuve_conj_plus_de_n_1} does not depend on $n$. Therefore, in order to prove Proposition \ref{proposition_majoration_ncc}, we have to prove that the right-hand side of \eqref{eq_preuve_conj_plus_de_n_1} is finite. The first thing to do is to prove that the quantity $N_{cc}^{r} \left(\omega \right)$ is $\Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}}$ almost surely finite. Then we will be able to show that this quantity admits exponential moments for $z$ large enough.
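The covering property of $T(x,\tilde{r})$ is a Pythagoras argument: any point of $T(x,\tilde{r})$ differs from $x$ by at most $\tilde{r}$ in the first coordinate and by at most $k$ in each of the other $d-1$ coordinates, so its distance to $x$ is at most $\sqrt{\tilde{r}^2+(d-1)k^2}=r$. A quick numerical check (the values of $d$, $k$ and the sampled radii are illustrative, and we only test radii with $r^2\geq (d-1)k^2$, where $\tilde{r}$ is given by the square root):

```python
import math
import random
from itertools import product

random.seed(1)
d, k = 3, 0.25                 # illustrative dimension and cross-section width

def transformed_radius(r):
    # tilde{r} = sqrt(r^2 - (d-1) k^2) on {r^2 >= (d-1) k^2}, and 0 otherwise.
    s = r * r - (d - 1) * k * k
    return math.sqrt(s) if s > 0 else 0.0

for _ in range(1000):
    r = random.uniform(math.sqrt(d - 1) * k, 2.0)   # ensure r^2 >= (d-1) k^2
    rt = transformed_radius(r)
    # Center x = (0, y) with cross-section y in [0, k]^{d-1}.
    y = [random.uniform(0.0, k) for _ in range(d - 1)]
    # The farthest point of T(x, rt) = [0, rt] x [0, k]^{d-1} from x is a corner.
    worst = max(
        math.dist([0.0] + y, [rt] + list(c))
        for c in product((0.0, k), repeat=d - 1)
    )
    assert worst <= r + 1e-9   # T(x, rt) is contained in the ball B(x, r)
print("T(x, r_tilde) always fits inside B(x, r)")
```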
To this end it is sufficient to study the number of connected components of a segment model where the left extremities of the segments are the points of a Poisson point process (on the real line) of intensity $z k^d$, and where the lengths of the segments are independent random variables distributed according to the measure $\tilde{Q}$ satisfying \begin{align*} \tilde{Q} \left([0,r]\right) = Q (\, [ 0 , \sqrt{r^2 + (d-1)k^2} ] \ ) . \end{align*} Condition \eqref{eq_conjecture_cond1} transfers trivially to $\tilde{Q}$. By taking $k$ small enough, we also have $\tilde{Q}(\{ 0 \}) <1/\bar{\Types}$. The next lemma, proved in \cite[Corollary 4]{fitzsimmons_fristedt_shepp}, ensures that $N_{cc}^{r} \left(\omega\right)$ is $\Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}}$ almost surely finite. \begin{lemma} \label{lemme_percolation_segment_a_droite} For every $z \geq k^{-d}$, the quantity $N_{cc}^{r} \left(\omega\right)$ is $\Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}}$ almost surely finite. \end{lemma} This lemma, which relies on condition \eqref{eq_conjecture_cond1}, ensures in particular that beyond a random point $\mathcal{Y}$, the ``cylinder'' $[\mathcal{Y},\infty[ \times [0,k]^{d -1}$ is covered by $\underset{(x,r) \in \omega_{\Lambda_{\infty}^{(k+)}} }{\bigcup} T(x,\tilde{r})$. So when we go through $\Lambda_{\infty}^{(k+)}$ from the origin, each connected component encountered has a positive probability $p(z)$ of being the infinite one, independently of all finite connected components already encountered. Therefore the random variable $N_{cc}^{r} \left(\omega \right)$ has a geometric law of mean $p(z)^{-1}$, satisfying \begin{align*} p(z)^{-1}= \int_{\Omega} N_{cc}^{r} \left(\omega \right) \Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}} (d \omega) \end{align*} and \begin{align*} p(z)= \Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}} \left( N_{cc}^{r} =1 \right).
\end{align*} So an explicit computation of the right-hand side of \eqref{eq_preuve_conj_plus_de_n_1} leads to \begin{align} \label{eq_somme_geometrique} \int_{\Omega} \bar{\Types}^{N_{cc}^{r}} \ d \Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}} = \sum_{\alpha \geq 1} \bar{\Types}^\alpha p(z) (1-p(z))^{\alpha -1} \end{align} which is finite as soon as $p(z)> 1 - 1/ \bar{\Types}$. In the following lemma we prove that for large activities $z$, $p(z)$ is as close to $1$ as needed. \begin{lemma} \label{lemme_preuve_conv_geometrique} For $z$ large enough \begin{align*} p(z) > 1 - 1/ \bar{\Types} . \end{align*} \end{lemma} With this lemma we have, for $z$ large enough, \begin{align} C := \left( \int_{\Omega} \bar{\Types}^{N_{cc}^{r} } \ d \Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}} \right) ^{2/k} < \infty \nonumber \end{align} which, combined with \eqref{eq_preuve_conj_retour_poisson}, \eqref{eq_preuve_conj_droite_1}, \eqref{eq_preuve_conj_plus_de_n_1} and \eqref{eq_somme_geometrique}, concludes the proof of Proposition \ref{proposition_majoration_ncc}. It remains to prove Lemma \ref{lemme_preuve_conv_geometrique}. \subsection{Proof of Lemma \ref{lemme_preuve_conv_geometrique}} We know that $N_{cc}^{r} \left(\omega\right)$ is a geometric random variable. We have \begin{align} p(z) & = \Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}} \left( N_{cc}^{r} =1 \right) \nonumber \\ & \geq \Poisson{z, Q}_{\Lambda_{\infty}^{(k+)}} \Big( N_{cc}^{r} \left(\omega\right) =1, [x + y,\infty[ \text{ is right-covered in } \omega_{]x, \infty[ \times[0,k]^{d -1}} \Big) , \nonumber \end{align} where $x$ and $y$ are positive real numbers, and we say that $[x,\infty[$ is right-covered in $\omega$ if \begin{align} [x,\infty[ \times[0,k]^{d -1} \ \subseteq \underset{ (a,r) \in \omega}{\bigcup} T(a,\tilde{r}).
\nonumber \end{align} Therefore we have \begin{align} p(z) \geq & \left(1-e^{-z x} \right) \ \tilde{Q}([x+y, \infty [) \nonumber \\ & \hspace{3cm} \Poisson{z, Q} ( [x + y,\infty[ \text{ is right-covered in } \omega_{]x, \infty[ \times[0,k]^{d -1}}) \nonumber \\ =& \left(1-e^{-z x }\right) \ \tilde{Q}([x+y, \infty [) \nonumber \\ & \hspace{3cm} \Poisson{z, Q} ( [y,\infty[ \text{ is right-covered in } \omega_{[0, \infty[ \times[0,k]^{d -1}}), \nonumber \end{align} where the last equality comes from the stationarity of $\Poisson{z, Q}$. \begin{lemma} \label{lemme_convergence_point_le_plus_a_gauche} For $y>0$ fixed, the probability \begin{align} \Poisson{z, Q} ( [y,\infty[ \text{ is right-covered in } \omega_{[0, \infty[ \times[0,k]^{d -1}}) \nonumber \end{align} converges to $1$ when $z$ grows to infinity. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemme_convergence_point_le_plus_a_gauche}] Consider the realisation of a Poisson point process on the half line of intensity $z k^{d}$ and with segment lengths of law $\tilde{Q}$. Thanks to Lemma \ref{lemme_percolation_segment_a_droite}, we know that this segment model percolates for $z\geq k ^{- d}$. Therefore there exists an infinite connected component starting from a random point $\mathcal{Y}$. If $y$ is inside this infinite connected component, then there is nothing to prove. Otherwise, by increasing $z$ we add Poisson points, and $\mathcal{Y}$ moves to the left, converging almost surely to $0$ as $z$ grows to infinity and covering $y$ for $z$ large enough. From this almost sure convergence we deduce the convergence of the probability of Lemma \ref{lemme_convergence_point_le_plus_a_gauche}. \end{proof} Let us now conclude the proof of Lemma \ref{lemme_preuve_conv_geometrique}. For $x,y$ small enough, thanks to the construction of $\tilde{Q}$ satisfying $\tilde{Q}(\{0 \})<1/ \bar{\Types}$, we have $\tilde{Q}([x+y, \infty [) > 1-1 / \bar{\Types}$.
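To see concretely why the threshold $1-1/\bar{\Types}$ matters in \eqref{eq_somme_geometrique}: for a geometric variable $N$ with success probability $p$ supported on $\{1,2,\dots\}$, one has $\sum_{\alpha \geq 1} \bar{\Types}^\alpha p(1-p)^{\alpha-1} = \bar{\Types}p/(1-\bar{\Types}(1-p))$, which is finite precisely when $\bar{\Types}(1-p)<1$, i.e. $p>1-1/\bar{\Types}$. A hedged Monte Carlo check, with illustrative values of $\bar{\Types}$ and $p$:

```python
import random

random.seed(2)
q_bar, p = 2.0, 0.9           # illustrative; note p > 1 - 1/q_bar = 0.5

def sample_geometric(p):
    """Number of trials up to and including the first success: support {1, 2, ...}."""
    n = 1
    while random.random() > p:
        n += 1
    return n

# Closed form of sum_{alpha >= 1} q_bar^alpha * p * (1-p)^(alpha-1).
closed_form = q_bar * p / (1.0 - q_bar * (1.0 - p))

reps = 200_000
monte_carlo = sum(q_bar ** sample_geometric(p) for _ in range(reps)) / reps
print(closed_form, monte_carlo)
```

The Monte Carlo average of $\bar{\Types}^N$ matches the geometric-series closed form; taking $p$ below the threshold instead makes the series, and the empirical averages, blow up.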
With $x$ and $y$ now fixed, Lemma \ref{lemme_convergence_point_le_plus_a_gauche} ensures that, for $z$ large enough, the same is true for \begin{align} \left(1-e^{-z x }\right) \ \tilde{Q}([x+y, \infty [) \Poisson{z, Q} ( [y,\infty[ \text{ is right-covered in } \omega_{[0, \infty[ \times[0,k]^{d -1}}) . \nonumber \end{align} Thus for $z$ large enough $p(z)>1-1/ \bar{\Types}$, and Lemma \ref{lemme_preuve_conv_geometrique} is proved. {\it Acknowledgement:} This work was supported in part by the Labex CEMPI (ANR-11-LABX-0007-01), the GDR 3477 Geosto and the ANR project PPPP (ANR-16-CE40-0016). \end{document}
\begin{document} \begin{frontmatter} \title{Intelligent Sampling and Inference for Multiple Change Points in Extremely Long Data Sequences} \begin{aug} \author{Zhiyuan Lu}, \author{Moulinath Banerjee} \and \author{George Michailidis} \end{aug} \begin{abstract} Change point estimation in its offline version is traditionally performed by optimizing a data fit criterion over the data set of interest that considers each data point as the true location parameter. The data point that minimizes the criterion is declared as the change point estimate. For estimating multiple change points, the available procedures in the literature are analogous in spirit, but significantly more involved in execution. Since change-points are local discontinuities, only data points close to the actual change point provide useful information for estimation, while data points far away are superfluous, to the point where using only the points close to the true parameter is just as precise as using the full data set. Leveraging this ``locality principle'', we introduce a two-stage procedure for the problem at hand: the first stage uses a {\em sparse subsample} to obtain pilot estimates of the underlying change points, and the second stage updates these estimates by sampling densely in appropriately defined neighborhoods around them, and computing a local change-point estimator. We establish that this method achieves the same rate of convergence and even virtually the same asymptotic distribution as an analysis of the full data, while reducing computational complexity to $O(\sqrt{N})$ time ($N$ being the length of the data set) under favorable circumstances, as opposed to at least $O(N)$ time for all current procedures. This makes our new method promising for the analysis of exceedingly long data sets with adequately spaced out change points.
The main results, which, in particular, lead to a prescription for constructing explicitly computable joint (asymptotic) confidence intervals for a {\em growing number} of change-points via the proposed procedure, are established under a signal plus noise model with independent and identically distributed error terms, but extensions to dependent data settings, as well as multiple stage ($>2$) procedures, are also provided. The performance of our procedure -- which is coined ``intelligent sampling'' -- is illustrated on both emulated data and a real internet data stream. \end{abstract} \end{frontmatter} \section{Introduction} \label{sec:introductionsection} Change point analysis has been extensively studied in both the statistics and econometrics literature \cite{Basseville1993detection}, due to its wide applicability in diverse fields, including economics and finance \cite{frisen2008financial}, quality control \cite{qiu2013introduction}, neuroscience \cite{koepcke2016single}, etc. A change point represents a discontinuity in the parameters of the data generating process. The literature has investigated both the {\em offline} and {\em online} versions of the problem \cite{Basseville1993detection,csorgo1997limit}. In the former case, one is given a sequence of observations and questions of interest include: (i) whether there exists a change point, and (ii) if one (or multiple) change point(s) exist(s), identifying its (their) location(s), as well as estimating the parameters of the data generating process to the left and right of it (them). In the latter case, one obtains new observations in a sequential manner and the main interest is in quickest detection of the change point. This paper deals with the \emph{offline} problem for very large data sequences.
The offline analysis of \emph{extremely long} data sequences poses a number of challenges, not least because conventional modes of analysis based on the entire data set are computationally expensive at this scale. The identification of change-points in a sequence, which constitute local discontinuities, requires some sort of a search procedure. A list of such methods, along with summaries of their results, can be found in \cite{niu2016multiple}. Time complexities for identification of multiple change points range between $O(N)$ and $O(N^2)$ depending on spacing conditions between change points, where $N$ is the length of the time series.\footnote{Precise identification of change-points under very relaxed spacing conditions can be accomplished, for example, by a dynamic programming based algorithm, which is of order $N^2$.} Such time complexities are not attractive for very long sequences. Such offline scenarios arise when network engineers examine \emph{previously stored traces of traffic} in high-speed computer and communication networks in greater detail, aiming to identify shifting persistent patterns, followed by attribution analysis, possibly correlating the detected change points with other data that would point to either infrastructure related issues or shifts in user patterns. This should be contrasted with \emph{online monitoring}, where the goal is to raise alarms \emph{in real time} whenever the underlying process is ``out-of-control'' and notify network engineers, who determine the tolerance level for alarms, i.e. how large a change should be before it is considered to negatively impact network operations \cite{van2014opennetmon}, \cite{ahmed2016survey}. In the \emph{offline setting}, the number of observations in network traces, corresponding to the number of packets or their payload in bytes at a granular temporal scale, is in the thousands per minute.
Analogous problems come up in other systems, including manufacturing processes \cite{shen2016change} and other cyber-physical systems equipped with sensors \cite{khan2016optimal}, where data may possibly be stored in a distributed manner. The point of view adopted in the paper is as follows: we are given a data sequence of massive length and are interested in identifying the major changes in this series. While \emph{short-lived} shifts in mean intensity may occur frequently, such transient perturbations do not affect the overall performance of most engineered or physical systems. With such applications in mind, it is very reasonable to assume that the number of truly significant changes that \emph{persist} over time is \emph{not particularly large}. To communicate the core ideas effectively, we employ a canonical model: a long piece-wise constant mean model with multiple jumps that are not too small relative to the fluctuations induced by noise, where the number of jumps increases at a rate much slower than the length of the series. Also, the flat stretches between consecutive jumps are taken to be persistent. Indeed, the `piece-wise constant with jumps' strategy has been considered by a variety of authors (see \cite{niu2016multiple} and references therein) for the problem at hand and is convenient for developing the methodology. We demonstrate subsequently that our proposed strategy is robust to mis-specifications of the persistent piecewise constant model, in the sense that its detection of persistent stretches is essentially unaffected in the presence of short-lived `idiosyncratic' signal bursts. Next, we summarize the key contributions of this paper. {\bf 1.} We propose an effective solution to the computational problem discussed above via a strategy called ``intelligent sampling'', which proceeds by making two (or more) passes over the time-series at different levels of resolution.
The first pass is carried out at a coarse time resolution and enables the analyst to obtain pilot estimates of the change-points, while the second investigates stretches of the time-series in (relatively) small neighborhoods of the initial estimates to produce updated estimates enjoying a high level of precision; in fact, essentially the same level of precision as would be achieved by an analysis of the entire massive time-series. The core advantage of our proposed method is that it reduces computational time from \emph{linear} to \emph{sub-linear} under appropriate conditions, and, in fact, to close to square-root order if the number of change-points is small relative to the length of the time-series. It is established that the computational gains (and analogously other processing gains from input-output operations) can be achieved \emph{without compromising} any statistical efficiency. From a practical standpoint, the time resolution to be used for the first pass is difficult to determine beforehand, and to alleviate this methodological problem we propose an adaptive \emph{doubling algorithm} [Section 5.4] that samples at increasing levels of resolution until a sufficient number of initial change point estimates are found. As the emulation experiment in Section \ref{sec:realdata} demonstrates, our proposed method consistently picks up persistent structural changes in the presence of multiple spiky signals while staying agnostic to the latter. Nevertheless, if short duration shifts are also of interest, the interested researcher can further analyze the persistent segments (which are of smaller order than $N$) with any of the available procedures in the literature in a \emph{parallel} fashion, thus retaining the computational gains achieved by intelligent sampling. {\bf 2.} Most of our results are rigorously developed in the signal plus noise model with independent and identically distributed (iid) sub-Gaussian errors, and in certain cases, normal errors.
Normal errors have been widely used in the change point literature to showcase methods and establish theoretically their performance guarantees, e.g. \cite{fryzlewicz2014wild,niu2012screening,zhang2007modified}, since they provide an attractive canonical model and are more amenable to analysis. However, we also provide results and indicate extensions when the errors exhibit short or long-range dependence, and also in the presence of non-stationarity, since both these features are likely to be present in very long data sequences. Some empirical evidence is provided to this effect. \newline \indent Further, the focus of the presentation is on 2-stage procedures, which provide all the key insights into the workings of the intelligent sampling procedure. However, for settings where the size of the data exceeds $10^{10}$, multiple stages are required to bring down the analyzed subsample to a manageable size. We therefore cover extensions to multi-stage intelligent sampling procedures, as well as address how samples should be allocated at these different stages. Furthermore, such massive data sequences often cannot be effectively stored in a single location. This does not pose a problem for intelligent sampling, as it adapts well to distributed computing: it can be applied to the reduced-size subsamples at the various locations where the original data are stored, followed by a single round of back and forth communication between the various locations and the central server, with subsequent calculations essentially carried out on the local servers. This is elaborated on in Section \ref{sec:compmethod}.
{\bf 3.} On the inferential front, we establish asymptotic approximations to the joint distribution of the change-point estimates obtained by intelligent sampling in terms of the distribution functions of (typically asymmetrically) drifting random walks on the set of integers [Theorems \ref{thm:increasingJasymprotics} and \ref{thm:multidepend}], which can then be used to provide explicitly computable asymptotic joint confidence intervals for both finitely many and a \emph{growing number} of change points. While concentration properties of change-point estimates around the true parameters in multiple change point problems are known, to the best of our knowledge such results involve hard-to-pin-down constants (this is discussed in some more detail in Section \ref{sec:refitting} in the context of binary segmentation) and are therefore difficult to use in practical settings. Our prescribed methods involve estimating signal to noise ratios at the different change-points, which is easy to accomplish, and values of quantiles of drifting Gaussian random walks for different values of the drift parameter (which can be pre-generated on a computer). The arguments involved in establishing Theorem \ref{thm:increasingJasymprotics} require, among other things, careful analyses of the distribution functions of both symmetric and asymmetric Gaussian random walks (see part B of Supplement, Section \ref{sec:supplementB}), which may very well prove useful in many other contexts. To the best of our knowledge, our prescription of joint asymptotic confidence intervals for a growing number of change-points with readily computable estimates of the distributional parameters involved [see Theorem \ref{thm:increasingJasymprotics}] is the \emph{first of its kind} in the change point literature.
{\bf 4.} We illustrate our procedure by applying it to two data sets. The first, in Section \ref{partly-emulated-data}, is a partly emulated data set where the error process arises from real network traffic data and we create ground-truths by `injecting' change-points into the error process through the addition of piecewise constant mean shifts. Section \ref{real-data-section} examines a real data set where our intelligent sampling proposal is useful. It corresponds to network (internet) traffic destined for an autonomous system -- a collection of Internet addresses under the control of a single network operator. The corresponding time series reflects aggregate incoming traffic to the autonomous system at 2 sec resolution. Short lived spikes are of less interest to network operators, since they usually correspond to transient traffic patterns. On the other hand, persisting shifts are of significant interest, indicating possible malicious activity, technical issues with the infrastructure, or, at larger time scales, traffic growth that needs to be mitigated by corresponding capacity growth of the network's infrastructure. While this data is better described by a piecewise linear regime (as opposed to the piecewise constant model used for our theoretical development), our intelligent sampling technique continues to work well after modifying it to fit piecewise linear models at each stage: the procedures to obtain the first stage pilot estimates, as well as the second stage updated estimates, are simply tweaked so that they can tackle piecewise linear functions. This demonstrates the applicability of intelligent sampling to data sets with more general mean structures than the one tackled in depth in this paper, and makes it attractive as a general principle. The remainder of the paper is organized as follows.
Section \ref{sec:single-CP} addresses intelligent sampling for the simpler {\em single} change point problem, which provides some fundamental insights into the nature of the procedure and its theoretical and computational properties. Section \ref{sec:multiplechangepointsintro} deals with the main topic of this study: intelligent sampling for the multiple change point problem with a growing number of change-points, and presents the main theoretical results of the paper. Section \ref{sec:prac-imple} develops the practical methodology for intelligent sampling using \emph{binary segmentation} as the working procedure at Stage 1, and studies the computational complexity of the resulting approach. Section \ref{sec:compmethod} provides an elaborate study of the minimum subsample size required for precise inferences as a function of the length of the full data sequence and the signal-to-noise ratio, using multiple stage procedures. Extensions to non-iid settings, which are more pertinent for the special case of time-series data, are discussed in Section \ref{sec:dependenterrors}. The numerical performance of the procedure is evaluated via thorough simulations in Section \ref{sec:simulations}, while applications to both emulated and real Internet data are presented in Section \ref{sec:realdata}. Section \ref{sec:discussion} concludes with a discussion of possible extensions of the intelligent sampling procedure, both in terms of alternative Stage 1 procedures (like wild binary segmentation and SMUCE), and also to other kinds of data (non-Gaussian data, discrete data, decaying signals). In the interests of space, all proofs and elaborate discussions of various facets of intelligent sampling are collected in the Supplement.
\section{Intelligent Sampling for the Single Change Point Problem} \label{sec:single-CP} \subsection{Single Change Point Model}\label{sec:singlemodel} The simplest possible setting for the change point problem is the {\em stump} model, where data $(1/N,Y_1),\cdots,$ $(N/N,Y_N)$ are available with $Y_i=f(i/N)+\varepsilon_i$ for $i=1,\cdots, N$, where the error terms $\varepsilon_i$ are independent and identically distributed (iid) $N(0,\sigma^2)$ random variables, while the function $f$ takes the form \begin{eqnarray}\label{eq:stumpsignal} f(x)=\alpha\cdot 1(x\leq \tau)+\beta\cdot 1(x>\tau),\qquad x\in (0,1), \end{eqnarray} for some constants $\alpha,\beta\in\mathbb{R}$, $\alpha\neq\beta$, and $\tau\in (0,1)$. For estimating the {\em change point} $\tau$ we employ a least squares criterion, given by \begin{eqnarray}\label{eq:fullestimators} (\hat{\alpha},\hat{\beta},\hat{\tau}):=\underset{(a,b,t)\in \mathbb{R}^2\times (0,1)}{\arg\min}\sum_{i=1}^N\left( Y_i-a\cdot 1(i/N\leq t)-b\cdot 1(i/N>t) \right)^2. \end{eqnarray} Using techniques similar to those in Section 14.5 of \cite{kosorok2007introduction}, we can establish that the estimator $\hat{\tau}$ is consistent for $\tau_N:=\lfloor N\tau\rfloor/N$, which acts as the change point among the covariates lying on the evenly spaced grid. \begin{proposition}\label{prop:fullestimators} For the stump model with normal errors the following hold: \\ (i) Both $(\hat{\alpha}-\alpha)$ and $(\hat{\beta}-\beta)$ converge to $0$ at rate $O_p(N^{-1/2})$.
\\ (ii) The change point estimate $\hat{\tau}$ satisfies \begin{eqnarray} \mathbb{P}\left[N(\hat{\tau}-\tau_N)=k\right]\to \mathbb{P}[L=k]\text{ for all }k\in\mathbb{Z}, \end{eqnarray} where $L=\underset{i\in\mathbb{Z}}{\arg\min}\,X(i)$, and the random walk $\{X(i)\}_{i\in\mathbb{Z}}$ is defined as \begin{equation}\label{Zprocess} X(i)=\begin{cases} \Delta(\varepsilon_1^*+...+\varepsilon_{i-1}^*+\varepsilon_i^*)+i\Delta^2/2, \quad & i>0\\0, & i=0\\ -\Delta(\varepsilon_{i+1}^*+...+\varepsilon_{-1}^*+\varepsilon_{0}^*)+|i|\Delta^2/2,\quad & i<0,\end{cases} \end{equation} with $\varepsilon_0^*,\varepsilon_1^*,\varepsilon_2^*,\dots$ and $\varepsilon_{-1}^*,\varepsilon_{-2}^*,\dots$ being iid $N(0,\sigma^2)$ random variables and $\Delta := \beta - \alpha$. \end{proposition} Next, we make several notes on the random variable $L$ introduced in the above proposition, as it appears multiple times throughout the remainder of this paper. Although, at a glance, the distribution of $L$ depends on two parameters, $\Delta$ and $\sigma$, in actuality $L$ is completely determined by the signal-to-noise ratio $\Delta/\sigma$ due to the Gaussian setting. To see this, note that we can re-write $L=\underset{i\in\mathbb{Z}}{\arg\min}\,(X(i)/|\Delta\sigma|)$ where \begin{eqnarray}\label{eq:Zprocess2} \frac{X(i)}{|\Delta\sigma|}=\begin{cases} \text{sgn}(\Delta)(\varepsilon_1^*/\sigma+\dots+\varepsilon_i^*/\sigma)+i|\Delta/\sigma|/2,\quad& i>0\\0,& i=0\\-\text{sgn}(\Delta)(\varepsilon_{i+1}^*/\sigma+\dots+\varepsilon_{0}^*/\sigma)+|i|\cdot|\Delta/\sigma|/2,\quad& i<0 \end{cases} \end{eqnarray} Since $\{\text{sgn}(\Delta)\varepsilon_i^*/\sigma\}_{i\in\mathbb{Z}}$ are iid $N(0,1)$ random variables, whose law does not depend on $\Delta$ or $\sigma$, it follows that $L$ only depends on the single parameter $\Delta/\sigma$.
Hence, from here on, denote the associated random process as \begin{eqnarray} \label{eq:Zprocess3} X_\Delta(i)=\begin{cases} \text{sgn}(\Delta)(\varepsilon_1^\diamond+\dots+\varepsilon_i^\diamond)+i|\Delta|/2,\quad& i>0\\0,& i=0\\-\text{sgn}(\Delta)(\varepsilon_{i+1}^\diamond+\dots+\varepsilon_{0}^\diamond)+|i|\cdot|\Delta|/2,\quad& i<0 \end{cases} \end{eqnarray} where $\varepsilon_j^\diamond$ for $j\in \mathbb{Z}$ are all iid $N(0,1)$ random variables. Denote the argmin of the random walk $X_\Delta(i)$ as $L_\Delta=\underset{i\in\mathbb{Z}}{\arg\min}\,X_\Delta(i)$. An immediate property of $L_\Delta$ is its stochastic ordering with respect to $|\Delta|$: \begin{proposition}\label{prop:Ldist} Suppose we have constants $\Delta_1,\Delta_2\in\mathbb{R}$ such that $0<|\Delta_1|<|\Delta_2|$. Then, for any positive integer $k$, \begin{eqnarray} \mathbb{P}[|L_{\Delta_1}|\leq k]\leq \mathbb{P}[|L_{\Delta_2}|\leq k]. \end{eqnarray} \end{proposition} Practically, this stochastic ordering implies that if the $1-\alpha$ quantile $Q_{\Delta_1}(1-\alpha)$ of $|L_{\Delta_1}|$ is known, then $Q_{\Delta_1}(1-\alpha)$ can also serve as a conservative $1-\alpha$ quantile of $|L_{\Delta_2}|$ for any $|\Delta_2|\geq|\Delta_1|$. This can be useful in settings where, given $J>0$ random variables $L_{\Delta_i}$ for $i=1,\dots,J$, we desire positive integers $\ell_i$ for $i=1,\dots,J$ such that $\mathbb{P}[ |L_{\Delta_i}|\leq \ell_i ]\geq 1-\alpha$ for $i=1,\dots,J$. This scenario will appear in later sections where we consider models containing several change points with possibly different jump sizes. In such situations, a simple solution is to take $\ell_i=Q_{\Delta_m}(1-\alpha)$ for all $i$, where $m=\underset{i=1,\dots,J}{\arg\min}|\Delta_i|$, or in other words letting each $\ell_i$ be the $1-\alpha$ quantile of the $|L_{\Delta_i}|$ with the smallest parameter.
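Since $L_\Delta$ has no closed-form distribution, such quantiles are conveniently estimated by Monte Carlo. The sketch below is a hedged illustration (the truncation window, replication count and the two jump sizes are arbitrary choices): because of the positive drift $|\Delta|/2$, an argmin beyond the window is extremely unlikely, so truncating the walk is a harmless approximation. The output exhibits the ordering of Proposition \ref{prop:Ldist}: the larger jump size yields the smaller conservative quantile.

```python
import random

random.seed(3)
M, REPS = 200, 4000           # truncation window and number of replications

def sample_L(delta):
    """Truncated argmin over |i| <= M of the drifting walk X_Delta(i)."""
    best_i, best_x = 0, 0.0
    for sign in (1, -1):      # right branch (i > 0) and left branch (i < 0)
        x = 0.0
        for i in range(1, M + 1):
            # sgn(Delta) * eps_i is still standard normal; drift is |delta|/2.
            x += random.gauss(0.0, 1.0) + abs(delta) / 2.0
            if x < best_x:
                best_i, best_x = sign * i, x
    return best_i

def quantile_abs_L(delta, level=0.95):
    draws = sorted(abs(sample_L(delta)) for _ in range(REPS))
    return draws[int(level * REPS)]

q_small_jump = quantile_abs_L(1.0)
q_large_jump = quantile_abs_L(3.0)
print(q_small_jump, q_large_jump)
```

Running this over a mesh of jump sizes produces exactly the kind of pre-generated quantile table discussed in the text.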
Alternatively we can generate a table of quantiles for distributions $L_{\delta_1},L_{\delta_2},L_{\delta_3},\dots$ for a mesh of positive constants $\delta_1<\delta_2<\dots $ (e.g.\ we can let $\delta_j=0.1j$ for $j=5,\dots,1000$), and let $\ell_i=Q_{\delta_j }(1-\alpha)$ where $\delta_j=\max\{ \delta_k:\delta_k\leq |\Delta_i| \}$, for $i=1,\dots,J$. \subsection{The Intelligent Sampling Procedure and its Properties}\label{sec:intelligentsampling} \label{sec:singleprocedure} \begin{enumerate}[label=(ISS\arabic*):] \setlength{\itemindent}{.5in} \item From the full data set of $\left(\frac{1}{N},Y_1\right),\left(\frac{2}{N},Y_2\right),...,\left(1,Y_N\right)$, take an evenly spaced subsample of size approximately $N_1=K_1N^{\gamma}$ for some $\gamma\in(0,1)$, $K_1>0$: thus, the data points are $\left(\frac{\lfloor N/N_1 \rfloor}{N},Y_{\lfloor N/N_1 \rfloor}\right)$, $\left(\frac{2\lfloor N/N_1 \rfloor}{N},Y_{2\lfloor N/N_1 \rfloor}\right)$, $\left(\frac{3\lfloor N/N_1 \rfloor}{N},Y_{3\lfloor N/N_1 \rfloor}\right)$ \dots \item On this subsample apply least squares to obtain estimates $\left(\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}^{(1)}_N\right)$ for the parameters $( \alpha,\beta,\tau_N)$. \end{enumerate} By the results for the single change-point problem presented above, $\hat{\tau}^{(1)}_N-\tau_N$ is $O_p(N^{-\gamma})$. Therefore, if we take $w(N)=K_2N^{-\gamma+\delta}$ for some small $\delta>0$ (much smaller than $\gamma$) and any constant $K_2>0$, then with probability tending to 1, $\tau_N \in [\hat{\tau}^{(1)}_N-w(N),\hat{\tau}^{(1)}_N+w(N)]$. In other words, this provides a neighborhood around the true change point as desired; hence, in the next stage only points within this interval will be used. \begin{enumerate}[label=(ISS\arabic*):] \setlength{\itemindent}{.5in} \setcounter{enumi}{2} \item Fix a small constant $\delta>0$.
Consider all $i/N$ such that $i/N\in[\hat{\tau}^{(1)}_N-K_2N^{-\gamma+\delta},\hat{\tau}^{(1)}_N+K_2N^{-\gamma+\delta}]$ and $(i/N,Y_i)$ was not used in the first subsample. Denote the set of all such points as $S^{(2)}$. \item Fit a step function on this second subsample by minimizing \begin{equation*} \sum_{i/N\in S^{(2)}}\Big( Y_i-\hat{\alpha}^{(1)}1(i/N\leq d)- \hat{\beta}^{(1)}1(i/N> d)\Big)^2 \end{equation*} with respect to $d$, and take the minimizing $d$ to be the second stage change point estimate $\hat{\tau}^{(2)}_N$. \end{enumerate} The next theorem establishes that the intelligent sampling estimator $\hat{\tau}^{(2)}_N$ is consistent and \emph{converges at the same rate} as the \emph{estimator based on the full data}. \begin{theorem}\label{thm:singlerate} For the stump single change point model, the estimator obtained based on intelligent sampling satisfies $$|\hat{\tau}_N^{(2)}-\tau_N|=O_p(1/N).$$ \end{theorem} \begin{proof} See Section \ref{sec:proofgeneralratesingle} in Supplement Part A, where a result for a more general model is proven. \end{proof} To derive a clean statement of the asymptotic distribution, we introduce a modified notion of the deviation between the estimate and the true change point, via a ``distance'' function $\lambda_2:[0,1]^2\to\mathbb{Z}$, defined as follows. First, for convenience, denote the set of $i/N$'s of the first stage subsample as \begin{eqnarray} S^{(1)}:=\left\{ \frac{i}{N}:i\in\mathbb{N},\,i<N,\, i\text{ is divisible by }\lfloor N/N_1\rfloor \right\}, \end{eqnarray} \noindent then for any $a,b\in (0,1)$ \begin{eqnarray} \lambda_2(a,b):=\begin{cases} \sum\limits_{i=1}^N 1\left(a<\frac{i}{N}\leq b,\, \frac{i}{N}\notin S^{(1)} \right)\qquad &\text{ if }a\leq b\,,\\ - \sum\limits_{i=1}^N 1\left(b<\frac{i}{N}\leq a,\, \frac{i}{N}\notin S^{(1)} \right) &\text{ otherwise}.
\end{cases} \end{eqnarray} The modified ``distance'' $\lambda_2(\tau_N,\hat{\tau}^{(2)}_N)$, unlike $N(\hat{\tau}^{(2)}_N-\tau_N)$, does converge weakly to a distribution, as the next result establishes. \begin{theorem}\label{thmsingledist} For any integer $\ell$, \begin{eqnarray} \mathbb{P}\left[ \lambda_2\left(\tau_N,\hat{\tau}_N^{(2)}\right)=\ell \right] &\to & \mathbb{P}[L_{\Delta/\sigma}=\ell] \,. \end{eqnarray} \end{theorem} \begin{proof} See Section \ref{sec:proofsingledistgeneral} in Supplement Part A where this is established for a more general model. \end{proof} {\bf Computational gains:} The results above establish that the two stage procedure can, using a subset of the full data, be asymptotically almost as precise as employing the full data set. In practice this allows for quicker estimation on big data sets without losing precision. The first stage uses about $N_1 \sim N^\gamma$ points to perform least squares fitting of a stump model, and this step takes $O(N^\gamma)$ computational time. The second stage applies a least-squares fit of a step function on the set $S^{(2)}$, which contains $O(N^{1-\gamma+\delta})$ points and therefore uses $O(N^{1-\gamma+\delta})$ time. Hence, the two stage procedure requires order $N^\gamma \vee N^{1-\gamma+\delta}$ computation time, which is minimized by setting $\gamma=1-\gamma+\delta$, or $\gamma=\frac{1+\delta}{2}$. As $\delta$ tends to 0 (any small positive value of $\delta$ yields the above asymptotic results), the optimal $\gamma$ tends to $1/2$. Therefore, one should employ $N_1=\sqrt{N}$ at the first stage, and the second stage sample should be all points in the interval $[\hat{\tau}^{(1)}_N-K_2N^{-1/2},\hat{\tau}^{(1)}_N+K_2N^{-1/2}]$ (which contains roughly $2K_2\sqrt{N}$ design points), excluding those used at the first stage, where $K_2$ ensures that this interval contains $\tau_N$ with an acceptably high probability $1-\alpha$.
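To make the accounting above concrete, the following is a small simulation sketch of the two-stage procedure (ISS1)-(ISS4), written on the index scale $1,\dots,N$; the sample size, change point location, jump, noise level, and window constant $K_2$ are all illustrative choices rather than recommendations.

```python
import numpy as np

def stump_fit(idx, y):
    """Least-squares stump fit: returns (alpha_hat, beta_hat, split),
    where 'split' is the last index assigned to the left segment."""
    csum, n, total = np.cumsum(y), len(y), y.sum()
    best, fit = np.inf, None
    for j in range(1, n):                     # left segment = first j points
        a, b = csum[j - 1] / j, (total - csum[j - 1]) / (n - j)
        rss = ((y[:j] - a) ** 2).sum() + ((y[j:] - b) ** 2).sum()
        if rss < best:
            best, fit = rss, (a, b, idx[j - 1])
    return fit

rng = np.random.default_rng(1)
N, tau, alpha, beta, sigma = 20000, 7321, 0.0, 2.0, 1.0
y = np.where(np.arange(1, N + 1) <= tau, alpha, beta) + rng.normal(0, sigma, N)

# Stage 1: evenly spaced subsample of about sqrt(N) points.
step = int(np.sqrt(N))
idx1 = np.arange(step, N + 1, step)
a1, b1, t1 = stump_fit(idx1, y[idx1 - 1])

# Stage 2: all remaining points within K2 * sqrt(N) indices of t1.
K2 = 10
lo, hi = max(1, t1 - K2 * step), min(N, t1 + K2 * step)
idx2 = np.array([i for i in range(lo, hi + 1) if i % step != 0])
rss2 = [((y[idx2 - 1] - np.where(idx2 <= d, a1, b1)) ** 2).sum() for d in idx2]
t2 = idx2[int(np.argmin(rss2))]
# Stage-1 error is on the subsample spacing scale O(sqrt(N));
# stage-2 error is O(1) in index units.
print(abs(t1 - tau), abs(t2 - tau))
```

Both stages here cost $O(\sqrt{N})$ up to constants, in line with the optimal choice $\gamma\approx 1/2$ discussed above.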
If one knows the jump size $\Delta$, $K_2$ can be determined as the $1-\frac{\alpha}{2}$ quantile of the random variable $L_{\Delta/\sigma}$; in the realistic unknown $\Delta$ case, a lower estimate of $\Delta$ can yield a corresponding conservative value of $K_2$. \begin{remark}\label{rem:nosingleasym} The $\lambda_2$ distance was introduced above because it is generally not possible to derive an asymptotic distribution for $N(\hat{\tau}_N^{(2)}-\tau_N)$. Indeed, one can manufacture parameter settings quite easily that produce different limit distributions along different subsequences. For a specific example, see Remark \ref{rem:nosingleasymgen} in Supplement Part A. \end{remark} \begin{remark}\label{rem:singlemult} The 2-stage procedure can be extended to multiple stages. \indent In the 2-stage version, we first use a subsample of size $N^\gamma$ to find some interval $[\hat{\tau}^{(1)}-K_1N^{-\gamma+\delta_1},\hat{\tau}^{(1)}+K_1N^{-\gamma+\delta_1}]$ which contains the true value of $\tau$ with probability going to 1. However, it is possible to refrain from using all the data points in the interval at the second stage. Instead, a 3-stage procedure can be employed as follows: take a subset of $N^\zeta$ (for some $0<\zeta< 1-\gamma+\delta_1$) points from the second stage interval and obtain an estimate $\hat{\tau}^{(2)}$ and an interval $[\hat{\tau}^{(2)}-K_2N^{-\gamma-\zeta+\delta_1 + \delta_2},\hat{\tau}^{(2)}+K_2N^{-\gamma -\zeta +\delta_1 + \delta_2}]$ (note that $\delta_1$ and $\delta_2$ can be as small as one pleases) which contains $\tau$ with probability going to 1. In the third stage, all points in the aforementioned interval (leaving aside those used in previous stages) are used to obtain the final estimate $\hat{\tau}^{(3)}$.
\newline \newline \indent Such a procedure will have the same rate of convergence as the one using the full data: $(\hat{\tau}^{(3)}-\tau)=O_p(1/N)$, and the same asymptotic distribution (in terms of a ``third stage distance'' similar to how $\lambda_2$ was defined) as the one and two stage procedures. In terms of computational time, the first stage takes $O(N^\gamma)$ time, the second stage $O(N^\zeta)$ time, and the final stage $O(N^{1-\gamma-\zeta+\delta_1+\delta_2})$ time, for a total of $O(N^\gamma\vee N^\zeta\vee N^{1-\gamma-\zeta+\delta_1+\delta_2})$ time, which can be brought to almost $O(N^{1/3})$. In general, a $K$-stage procedure, which works along the same lines, can operate in almost as low as $O(N^{1/K})$ time. \end{remark} \section{The Case of Multiple Change Points}\label{sec:multiplechangepointsintro} Suppose one has access to a data set $Y_1,Y_2,\dots, Y_N$ generated according to the following model: \begin{equation}\label{model} Y_i=\theta_i+\varepsilon_i,\qquad i=1,2,3,...,N, \end{equation} where the $\theta_i$'s form a piecewise constant sequence for any fixed $N$ and the $\varepsilon_i$'s are zero-mean error terms\footnote{Specifically, we consider a triangular array of sequences $\theta_{i,N}$, which are piecewise constant in $i$. The error terms $\varepsilon_i=\varepsilon_{i,N}$ also form a triangular array, but we suppress this notation for brevity.}. The signal is flat apart from jumps at some unknown change points $1=\tau_0<\tau_1<...<\tau_J<\tau_{J+1}=N$: i.e. $\theta_{i_1}=\theta_{i_2}$ whenever $i_1,i_2\in (\tau_j,\tau_{j+1}]$ for some $j\in\{0,...,J\}$. The number of change points $J=J(N)$ is also unknown and needs to be estimated from the data.
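For concreteness, a realization from this piecewise-constant model with well-separated change points can be generated as in the following sketch; the length, change point locations, levels, and the Gaussian noise law are illustrative choices, not requirements of the model.

```python
import numpy as np

# Simulate one realization of the multiple change point model:
# theta is constant on each stretch (tau_j, tau_{j+1}].
rng = np.random.default_rng(2)
N = 10000
taus = [0, 2000, 4500, 7000, N]        # boundaries tau_0, ..., tau_{J+1}; J = 3
levels = [0.0, 1.5, -0.5, 1.0]         # one signal level per stretch

theta = np.empty(N)
for j, lev in enumerate(levels):
    theta[taus[j]:taus[j + 1]] = lev   # fill the j-th stretch
y = theta + rng.normal(0.0, 1.0, N)    # iid Gaussian noise (subgaussian)

gaps = np.diff(taus)                   # spacings between change points
jumps = np.abs(np.diff(levels))        # jump sizes |nu_j - nu_{j-1}|
print(gaps.min(), jumps.min())
```

Here the minimum spacing (2000) and the minimum jump size (1.5) play the roles of the separation and jump-size quantities that the restrictions below bound.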
We impose the following basic restrictions on this model: \begin{enumerate}[label=(M\arabic*):] \setlength{\itemindent}{.5in} \item there exists a constant $\bar{\theta}\in (0,\infty)$ not dependent on $N$, such that $\underset{i=1,...,N}{\max}|\theta_i|\leq \bar{\theta}$; \item there exists a constant $\underline{\Delta}$ not dependent on $N$, such that $\underset{i=1,...,J}{\min} \left| \theta_{\tau_{i+1}}-\theta_{\tau_i}\right|\geq \underline{\Delta}$; \item there exists a $\Xi\in [0,1)$ and some $C>0$, such that $\delta_N:=\underset{i=0,...,J}{\min}(\tau_{i+1}-\tau_i)\geq CN^{1-\Xi}$ for all large $N$; \item $\varepsilon_i$ for $i=1,...,N$ are independent centered subgaussian random variables, with subgaussian parameters\footnote{The subgaussian parameter of a variable $X$ is any value $\sigma>0$ such that $\mathbb{E}[\exp(sX)]\leq \exp(s^2\sigma^2/2)$ for all real values $s$} $\sigma_1,\dots,\sigma_N$ that are all bounded above by a constant $\sigma_{\max}>0$ which is not dependent on $N$. \end{enumerate} \begin{remark} \label{comments-on-assumptions} \indent The third assumption above stipulates that the minimum gap between two consecutive change points grows polynomially with $N$. This is a reasonable assumption for identifying long and significantly well-separated persistent stretches in a big data setting. Condition (M4) places a restriction on the tail probability of the error terms, but still accommodates some heteroscedastic behavior (a concern for long data sequences) by allowing a noise sequence comprising independent random variables with different distributions.
\end{remark} \subsection{Intelligent Sampling on Multiple Change Points}\label{sec:procedure} The intelligent sampling procedure in the multiple change-points case works in two (or more) stages as follows: in the two-stage version, as in Section \ref{sec:single-CP}, the first stage aims to find rough estimates of the change points using a uniform subsample (Steps ISM1-ISM4) and the second stage produces the final estimates (Steps ISM5 and ISM6). \begin{enumerate}[label=(ISM\arabic*):] \setlength{\itemindent}{.5in} \item Start with a data set $Y_1,\cdots,Y_N$ described in (\ref{model}). \item Take $N_1=K_1N^{\gamma}$ for some $K_1>0$ and $\gamma\in (\Xi,1)$ such that $N/N_1=o(\delta_N)$; for $j=1,\cdots, N^*$ where $N^*:=\left\lfloor \frac{N}{\lfloor N/N_1\rfloor}\right\rfloor $, consider the subsample $\{Z_j\}=\{Y_{j\lfloor N/N_1 \rfloor}\}$. \end{enumerate} The subsample $Z_1,Z_2,...$ can also be considered a data sequence structured as in $(\ref{model})$, and since $\delta_N\gg N/N_1$, there are jumps in the signal at $\tau_j^*:=\left\lfloor \frac{\tau_j}{\lfloor N/N_1\rfloor}\right\rfloor$ for $j=1,...,J$, with corresponding minimum spacing \begin{equation}\label{sparsesep} \delta_{N^*}^*:=\min_{i=1,...,J+1}|\tau_i^*-\tau_{i-1}^*|= \frac{1}{\lfloor N/N_1\rfloor} \left(\min_{i=1,\cdots,J+1}|\tau_i-\tau_{i-1}|+O(1) \right)= \left(\frac{N_1}{N}\delta_N\right)(1+o(1)) \,. \end{equation} \begin{enumerate}[label=(ISM\arabic*):] \setlength{\itemindent}{.5in} \setcounter{enumi}{2} \item Apply some multiple change point estimation procedure (such as binary segmentation) to the set of $Z_i$'s to obtain estimates $\hat{\tau}_1^*,...,\hat{\tau}_{\hat{J}}^*$ for the $\tau_i^*$'s and $\hat{\nu}^{(1)}_0$,\dots, $\hat{\nu}^{(1)}_{\hat{J}}$ for the levels $(\nu_0,\nu_1,\dots,\nu_J)=(\theta_1,\theta_{\tau_1+1},\theta_{\tau_2+1},\dots,\theta_{\tau_J+1})$.
\begin{itemize} \item the choice of the procedure does not matter so long as the estimates satisfy \begin{equation}\label{eq:firstconsistent} \mathbb{P}\left[ \hat{J}=J,\, \max_{i=1,...,J}|\hat{\tau}^*_i-\tau_i^*|\leq w^*(N^*),\,\max_{i=0,...,J}|\hat{\nu}^{(1)}_i-\nu_i|\leq \rho_N \right]\to 1 \end{equation} for some sequence $w^*(N^*)$ such that $w^*(N^*)\to\infty$, $w^*(N^*)=o(\delta^*_{N^*})$ and $\rho_N\to 0$. \end{itemize} \end{enumerate} \begin{enumerate}[label=(ISM\arabic*):] \setlength{\itemindent}{.5in} \setcounter{enumi}{3} \item Convert these into estimates for the $\tau_i$'s by letting $\hat{\tau}_j^{(1)}:=\hat{\tau}^*_j\lfloor N/N_1 \rfloor$ for $j=1,...,\hat{J}$. \begin{itemize} \item taking $w(N):=(w^*(N^*)+1)\lfloor N/N_1\rfloor$, expression (\ref{eq:firstconsistent}) gives \begin{eqnarray}\label{eq:firstconsistent2} \mathbb{P}\left[\hat{J}=J,\,\underset{i=1,...,J}{\max}|\hat{\tau}^{(1)}_i-\tau_i|\leq w(N),\, \max_{i=0,...,J}|\hat{\nu}^{(1)}_i-\nu_i|\leq \rho_N\right]\to 1. \end{eqnarray} \item as a consequence of conditions in (ISM3), $w(N)\to \infty$, $w(N)=o(\delta_N)$, and $w(N)>CN^{1-\gamma}$ for some constant $C$. \end{itemize} \item Fix any integer $K>1$, and consider the intervals $\left[ \hat{\tau}_i^{(1)}-Kw(N), \hat{\tau}_i^{(1)}+Kw(N) \right]$ for $i=1,...,\hat{J}$. Denote by $S^{(2)}\left( \hat{\tau}_i^{(1)} \right)$ all integers in this interval not divisible by $\lfloor N/N_1\rfloor$. \item For each $i=1,...,\hat{J}$, let \begin{equation} \hat{\tau}^{(2)}_i=\underset{d\in S^{(2)}\left(\hat{\tau}_i^{(1)}\right)}{\arg\min} \left(\sum_{j\in S^{(2)}\left(\hat{\tau}_i^{(1)}\right)}\Big[ Y_j-(\hat{\nu}^{(1)}_{i-1}1(j<d)+\hat{\nu}^{(1)}_i1(j\geq d)) \Big]^2\right) \,. \end{equation} \end{enumerate} \begin{remark} As with the single change point problem, a $p>2$ stage procedure can be constructed. 
This would involve steps (ISM1) to (ISM5), but afterwards a $p-1$ stage procedure as described in Remark \ref{rem:singlemult} for estimating single change points would be applied to every interval $\left[ \hat{\tau}_i\pm Kw(N) \right]$. \end{remark} The intervals $[\hat{\tau}^{(1)}_j\pm Kw(N)]$ introduced in step (ISM5) respectively contain $[\tau_j\pm (K-1)w(N)]$ with probability going to 1. These latter intervals have width going to $\infty$, and each of these intervals contains exactly one change point (as their widths are $O(w(N))=o(\delta_N)$). Hence, with probability $\to 1$ the multiple change point problem reduces to $\hat{J}$ single change point problems, justifying (ISM6), where a stump model is fitted inside each of the $S^{(2)}(\hat{\tau}^{(1)}_j)$'s. \newline \newline \noindent {\bf Asymptotic behavior of the intelligent sampling based estimators:} Next, we present results on the large sample properties of the intelligent sampling estimates. There are several types of results we will showcase to characterize the asymptotic behavior, with the first result focusing on the rate of convergence: \begin{theorem} \label{thm:multiorder} Suppose conditions (M1) to (M4) are satisfied and the first stage estimates satisfy the consistency result (\ref{eq:firstconsistent2}). Then, for any $\varepsilon>0$, there exist constants $C_1$ and $C_2$ (depending on $\varepsilon$) such that \begin{eqnarray} \mathbb{P}\left[\hat{J}=J,\,\max_{k=1,...,J}|\hat{\tau}^{(2)}_k-\tau_k|\leq C_1\log(J)+C_2\right]\geq 1-\varepsilon \end{eqnarray} for all sufficiently large $N$. \end{theorem} \begin{proof} See Section \ref{sec:multiorderotherproof} of Appendix B.
\end{proof} The rate of convergence matches that of certain estimators when applied to the full data (see the convergence properties of wild binary segmentation in Theorem 3.2 of \cite{fryzlewicz2014wild}), consistent with the proposal that the two-stage estimator is on the same level of accuracy as methods using the full data set. \newline\newline \indent To further characterize the asymptotic behavior of the intelligent sampling estimators for the purpose of inference, additional assumptions are required to derive their asymptotic distributions. First, we consider a methodological issue: if the error terms around a change point are arbitrarily heteroscedastic (e.g., every error term has a distinct marginal distribution), then it is essentially impossible to estimate the distributions of the noise terms, which play a critical role in determining the asymptotic distributions of the change point estimators. Hence, we require a condition under which the error distributions around the change points are stable. In the sequel we assume that the error terms on either side of a change-point are i.i.d.\ in slowly growing neighborhoods, though the distribution of errors need not be identical over the entirety of the observed data sequence. We split our results into two parts: $J$ \emph{fixed} and $J$ \emph{growing} with $N$, as the results we establish in these two cases are somewhat differently formulated. \\\\ \noindent {\bf $J$ fixed with $N$:} In this case, we assume the following further conditions on the model. \begin{enumerate}[label=(M\arabic*):] \setlength{\itemindent}{.5in} \setcounter{enumi}{4} \item The jump sizes $\Delta_j:=\nu_j-\nu_{j-1}$ for $j=1,...,J$ are also constants not dependent on $N$.
\newline \item For every $0\leq j\leq J$, the random variables $\{\varepsilon_{\tau_j+1},\dots, \varepsilon_{\tau_{j+1}}\}$ all have the same distribution as the random variable $\mathcal{E}_{j} $, where $\mathcal{E}_0,\mathcal{E}_1,\dots, \mathcal{E}_{J}$ are fixed random variables with distributions not changing with $N$. \end{enumerate} Under these two additional conditions, it is possible to characterize the asymptotic distribution of the $\hat{\tau}^{(2)}_j$'s. Prior to stating the result, we introduce the following distance function $\lambda_2(\cdot,\cdot)=\lambda_{2,N}(\cdot,\cdot)$, which accounts for the fact that at step (ISM5), the first stage subsample points are left out in the second subsample (and thus no first subsample point can be the final estimator): \begin{eqnarray}\label{eq:lambda2multdef} \lambda_2(a,b):=\begin{cases} \sum\limits_{i=1}^N 1\left(a<i\leq b\right)\cdot 1\left(i\neq k\lfloor N/N_1\rfloor\text{ for any integer } k \right)\qquad &\text{ if }a\leq b\\ - \sum\limits_{i=1}^N 1\left(b<i\leq a\right)\cdot 1\left(i\neq k\lfloor N/N_1\rfloor\text{ for any integer } k \right) &\text{ otherwise} \end{cases}. \end{eqnarray} In terms of this distance function, the $\hat{\tau}^{(2)}_j$'s converge as follows: \begin{theorem}\label{thm:multidepend} Suppose conditions (M1) to (M6), and the consistency condition (\ref{eq:firstconsistent2}) are satisfied. Define the independent random variables $L^*_j$ for $1\leq j\leq J$ as \begin{eqnarray} L^*_j&:=&\underset{k\in\mathbb{Z}}{\arg\min}\; Z_j(k)\nonumber\\ Z_j(k)&:=&\begin{cases} |\Delta_j|(\varepsilon^*_{j,1}+\dots+\varepsilon^*_{j,k})+k\Delta_j^2/2,\qquad & k>0\\ 0, & k=0\\ -|\Delta_j|(\varepsilon^*_{j-1,k+1}+\dots+ \varepsilon^*_{j-1,0})+|k|\Delta_j^2/2, & k<0 \,, \end{cases} \end{eqnarray} where each $\{\varepsilon^*_{j,k}:k\in\mathbb{Z}\}$ is a set of iid random variables with the same distribution as the random variable $\mathcal{E}_j$, for $j=0,\dots,J$.
\newline \newline The deviations $\left\{\lambda_2\left(\tau_j,\hat{\tau}^{(2)}_j\right)\right\}_{j=1}^{\hat J}$ jointly converge to the distribution of $\left(L^*_{1},...,L^*_J\right)$. That is, for any integers $k_1,\dots,k_J$, \begin{eqnarray} \mathbb{P}\left[\hat{J}=J,\, \lambda_2\left(\tau_j,\hat{\tau}^{(2)}_j\right)=k_j \text{ for }1\leq j\leq J\right]\to\prod_{j=1}^J \mathbb{P}\left[L^*_j=k_j\right] \,. \end{eqnarray} \end{theorem} \begin{proof} See Section \ref{sec:multidependthmproof} of Supplement Part B. \end{proof} In practical terms, the result enables statistical inference: the construction of confidence regions for the change points. \newline \newline \indent Note that we require a finite number of change points, which is a strong restriction from both methodological and applied points of view. Nevertheless, the result in Theorem \ref{thm:multidepend} is still useful, as shown through numerical experiments in Section \ref{sec:simulations}. Next, we establish a similar result for a growing number of change points $J$. \newline \newline \textbf{$J$ grows with $N$ and Gaussian Errors:} If we restrict ourselves to the case where the error terms are independent Gaussian random variables, the distribution of the intelligent sampling estimators can be characterized \emph{even if} $J\to \infty$. Specifically, assume: \begin{enumerate}[label=(M\arabic*-Gaussian):] \setlength{\itemindent}{.5in} \setcounter{enumi}{3} \item $\varepsilon_1,\dots,\varepsilon_N$ are independent zero mean Gaussian random variables, with variances bounded below by $\sigma_{\min}^2$ and above by $\sigma^2_{\max}$, where $\sigma_{\min}$ and $\sigma_{\max}$ are positive constants not dependent on $N$. \end{enumerate} Further, also consider conditions (M1) to (M3), (M5), and (M6).
Note that (M6) along with this new condition implies that we can write $\mathcal{E}_j\sim N(0,\sigma_j^2)$ for some constants $\sigma_j$'s, lying between the values $\sigma_{\min}$ and $\sigma_{\max}$, a convention we will use for the remainder of the section. \newline \newline \indent When the number of parameters and (the corresponding) estimates go to $\infty$ as $N$ increases, there is no fixed distribution to converge to. Hence, to characterize the large sample distribution, our results use probability bounds and the quantiles of a growing number of distributions. For all $\alpha\in(0,1)$ and positive values $\Delta,\sigma_1,\sigma_2$, let $Q_{\Delta,\sigma_1,\sigma_2}(1-\alpha)$ be the $1-\alpha$ quantile of $|L_{\Delta,\sigma_1,\sigma_2}|$, where \begin{eqnarray}\label{eq:gauss_walk_def} &&L_{\Delta,\sigma_1,\sigma_2}=\underset{t\in \mathbb{Z}}{\arg\min}X_{\Delta,\sigma_1,\sigma_2}(t)\nonumber\\ &&X_{\Delta,\sigma_1,\sigma_2}(t)=\begin{cases} t\frac{|\Delta|}{2}+\sum_{i=1}^t\varepsilon^*_{ i }\qquad & t>0\\ 0 & t=0\\ |t|\frac{|\Delta|}{2}+\sum_{i=0}^{|t|-1}\varepsilon^*_{-i} \qquad & t<0 \end{cases}\nonumber\\ &&\text{where }\varepsilon^*_i\overset{\text{iid}}{\sim} N(0,\sigma_1^2)\text{ for }i\leq 0\text{ and } \varepsilon^*_i\overset{\text{iid}}{\sim} N(0,\sigma^2_2) \text{ for }i>0 \,. \end{eqnarray} One might envisage a result of the form: \begin{eqnarray} \mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left(\tau_j,\hat{\tau}_j^{(2)}\right)\right|\leq Q_{\Delta_j,\sigma_{j-1},\sigma_j}(\sqrt[J]{1-\alpha})\text{ for all }j=1,\dots,J \right]\to 1-\alpha, \end{eqnarray} where $\lambda_2:=\lambda_{2,N}$ was defined in (\ref{eq:lambda2multdef}). However, as the distribution of $\underset{t\in\mathbb{Z}}{\arg\min}\;X_{\Delta,\sigma_1,\sigma_2}(t)$ is discrete, it is generally not possible to get the probabilities exactly equal to $\sqrt[J]{1-\alpha}$.
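The quantiles $Q_{\Delta,\sigma_1,\sigma_2}(1-\alpha)$ and the probabilities actually attained at those quantiles can be approximated by Monte Carlo; the sketch below (the truncation width, replication count, and parameter values are illustrative choices) also makes the discreteness point visible, since the attained probability typically overshoots the nominal level.

```python
import numpy as np

def sample_abs_argmin(delta, s1, s2, R=4000, M=300, seed=3):
    """Draws of |argmin X_{Delta,sigma1,sigma2}|, truncated to |t| <= M."""
    rng = np.random.default_rng(seed)
    drift = np.arange(1, M + 1) * abs(delta) / 2
    out = np.empty(R, dtype=int)
    for r in range(R):
        right = np.cumsum(rng.normal(0, s2, M)) + drift   # t > 0 side
        left = np.cumsum(rng.normal(0, s1, M)) + drift    # t < 0 side
        vals = [0.0, right.min(), left.min()]
        locs = [0, int(right.argmin()) + 1, int(left.argmin()) + 1]
        out[r] = locs[int(np.argmin(vals))]
    return out

def Q_and_P(delta, s1, s2, alpha=0.1):
    """Smallest integer q with empirical P(|argmin| <= q) >= 1 - alpha,
    together with the (conservative) probability attained at q."""
    s = sample_abs_argmin(delta, s1, s2)
    q = 0
    while (s <= q).mean() < 1 - alpha:
        q += 1
    return q, (s <= q).mean()

q, p = Q_and_P(2.0, 1.0, 1.5, alpha=0.1)
print(q, p)   # p >= 0.9 by construction, usually strictly so
```

Because the attained probability can only jump between the values of a discrete distribution function, exact equality with a prescribed level such as $\sqrt[J]{1-\alpha}$ is generally unavailable, which motivates the two-sided formulation of the next theorem.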
Instead, we derive the following result: \begin{theorem}\label{thm:increasingJasymprotics} For any $\alpha \in (0,1)$ and positive values $\Delta,\sigma_1,\sigma_2$, define: \begin{eqnarray} P_{\Delta,\sigma_1,\sigma_2}(1-\alpha)=\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}\;X_{\Delta,\sigma_1,\sigma_2}(t)\right|\leq Q_{\Delta,\sigma_1,\sigma_2}(1-\alpha) \right] \,. \end{eqnarray} Suppose that conditions (M1) to (M3) are satisfied, conditions (M4-Gaussian) and (M6) are satisfied, and the first stage estimates satisfy (\ref{eq:firstconsistent2}) with a $\rho_N$ such that $J\rho_N\to 0$. Then \begin{eqnarray}\label{eq:increasingJasymprotics1} &&\mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j,\hat{\tau}^{(2)}_j \right)\right|\leq Q_{|\Delta_j|-2\rho_N,\sigma_{j-1},\sigma_j}(\sqrt[J]{1-\alpha})\text{ for all }j=1,\dots, J \right]\nonumber\\ &= &\left(\prod_{j=1}^J P_{|\Delta_j|-2\rho_N,\sigma_{j-1},\sigma_j}(\sqrt[J]{1-\alpha})\right)+o(1)\,, \end{eqnarray} and \begin{eqnarray}\label{eq:increasingJasymprotics2} &&\mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j,\hat{\tau}^{(2)}_j \right)\right|\leq Q_{|\Delta_j|+2\rho_N,\sigma_{j-1},\sigma_j}(\sqrt[J]{1-\alpha})\text{ for all }j=1,\dots, J \right]\nonumber\\ &= &\left(\prod_{j=1}^JP_{|\Delta_j|+2\rho_N,\sigma_{j-1},\sigma_j}(\sqrt[J]{1-\alpha})\right)+o(1) \,. \end{eqnarray} \end{theorem} \begin{proof} See Section \ref{sec:increasingJasymproticsproof} of Supplement Part B for a long and detailed proof. A shorter proof sketch which highlights the key steps is also provided in Section \ref{proof-sketch-Theorem-5}. \end{proof} \begin{remark} The proof of this result relies on probability bounds for the $L_{\Delta,\sigma_1,\sigma_2}$ distributions, specifically bounds on their tail probabilities, as well as bounds on the probability that $L_{\Delta,\sigma_1,\sigma_2}\neq L_{\Delta',\sigma_1',\sigma_2'}$ when $(\Delta,\sigma_1,\sigma_2)\neq (\Delta',\sigma_1',\sigma_2')$.
Using the Gaussianity assumption, such probability bounds can be methodically derived. The additional condition requiring $J\rho_N\to 0$ stems from the details of the proof, but it is satisfied by several existing change point methods, including binary segmentation, which will be showcased later in the paper. \end{remark} \section{Practical Implementation of Intelligent Sampling} \label{sec:prac-imple} In the previous section, we laid out a generic scheme for intelligent sampling which requires the use of a multiple change point estimation procedure on a sparse subsample of the \emph{data-sequence}. Recall that any procedure that satisfies (\ref{eq:firstconsistent}) can be used here. A variety of such procedures have been explored in the literature (see, e.g., \cite{venkatraman1992consistency}, \cite{fryzlewicz2014wild}, \cite{frick2014multiscale}, and \cite{bai1998estimating}), and therefore a number of options are available. For the sake of concreteness, we pursue intelligent sampling with binary segmentation (henceforth abbreviated to ``BinSeg'') employed at Step (ISM3). One main advantage of BinSeg is its computational scaling at an optimal rate of $O(N^*\log(N^*))$ when applied to a data sequence of length $N^*$, and in addition it has the upside of being easy to program, which accounts for its popularity in the change point literature. We later discuss other potential options. \newline \newline \indent However, there are some issues involved in applying the results of BinSeg to our setting. First, BinSeg does not directly provide the signal estimators that are required in (\ref{eq:firstconsistent}). We address this issue in Section \ref{sec:binsegdescription}, where we establish that given certain consistency conditions on the change points, which are satisfied by BinSeg, consistent signal estimators can be obtained by averaging the data between change point estimates.
Second, there is no established method for constructing explicit confidence intervals for the actual change points using BinSeg, as existing results give orders of convergence, but no asymptotic distributions or probability bounds with \emph{explicit} constants. However, to implement intelligent sampling, one wants to have high-probability intervals around the initial change-point estimates on which to do the second round of sampling, which requires calibration in terms of the coverage probability. To this end, in Section \ref{sec:refitting} we describe a procedure to be performed after applying BinSeg on the first stage subsample: the extra steps provide us with explicit confidence intervals, without increasing the order of computational time over BinSeg. \newline \newline \indent Before we begin, we remind the reader that this section deals with the first stage subsample and not the whole data set. We will henceforth use the $*$ notation in connection with the quantities involved at the first stage. Therefore, we let $\nu^*_j:=\nu_j$ for $j=0,\dots,J$ and $\rho^*_{N^*}:=\rho_N$, and referring back to the notation used in steps (ISM2) and (ISM3), we consider the sub-dataset $Z_1,\dots,Z_{N^*}$ as a multiple change point model, with change points $\tau_j^*$'s and levels $\nu^*_j$'s, following conditions (M1) to (M4) for all large $N$ (as a consequence of $Y_1,\dots, Y_N$ satisfying conditions (M1) to (M4)).
Using this notation, (\ref{eq:firstconsistent}) translates to the requirement that a change point estimation scheme applied to $Z_1,\dots,Z_{N^*}$ produces estimates $\hat{\tau}_j^*$'s and $ \hat{\nu}^*_j$'s (equal to the $\hat{\nu}^{(1)}_j$'s in (ISM3)) such that \begin{eqnarray}\label{eq:firstconsistentsparse} \mathbb{P}\left[ \hat{J}=J,\,\max_{j=1,\dots J}\left| \hat{\tau}_j^*-\tau_j^* \right|\leq w^*(N^*),\, \max_{j=0,\dots,J}\left|\hat{\nu}^*_j-\nu_j^* \right|\leq \rho^*_{N^*}\right]\to 1, \end{eqnarray} for some sequences $w^*(N^*)$ and $\rho^*_{N^*}$ such that $w^*(N^*)\to \infty$, $w^*(N^*)=o(\delta^*_{N^*})$, and $\rho^*_{N^*}\to 0$ as $N^*\to \infty$. We refer to this latter condition in the sequel. \subsection{A Brief Description of Binary Segmentation}\label{sec:binsegdescription} Consider the model given in (\ref{model}). For any positive integers $1\leq s\leq b<e\leq N^*$, let $n=e-s+1$ and define the Cumulative Sum (CUSUM) statistic at $b$ with endpoints $(s,e)$ as \begin{equation*} \bar{Z}_{s,e}^b=\sqrt{\frac{e-b}{n(b-s+1)}}\sum_{t=s}^bZ_t-\sqrt{\frac{b-s+1}{n(e-b)}}\sum_{t=b+1}^e Z_t. \end{equation*} Binary segmentation is performed by iteratively maximizing the CUSUM statistic over the segments between change point estimates, accepting a new change point if the maximum passes a threshold parameter $\zeta_{N^*}$. Specifically, \begin{enumerate} \item Fix a threshold value $\zeta_{N^*}$ and initialize the segment set $SS=\{ (1,N^*) \}$ and the change point estimate set $\underline{\hat{\tau}}^*=\emptyset$. \item Pick any ordered pair $(s,e)\in SS$ and remove it from $SS$ (update $SS$ by $SS\leftarrow SS\setminus\{ (s,e) \}$). If $s\geq e$ then skip to step 5, otherwise continue to step 3.
\item Find the argmax and max of the CUSUM statistic over the chosen $(s,e)$ from the previous step: $b_0=\underset{b\in \{s,...,e-1\}}{\arg\max}|\bar{Z}^b_{s,e}|$ and $|\bar{Z}^{b_0}_{s,e}|$. \item If $|\bar{Z}^{b_0}_{s,e}|\geq \zeta_{N^*}$, then add $b_0$ to the list of change point estimates (add $b_0$ to $\underline{\hat{\tau}}^*$), and add the ordered pairs $(s,b_0)$ and $(b_0+1,e)$ to $SS$, otherwise skip to step 5. \item Repeat steps 2-4 until $SS$ contains no elements. \end{enumerate} \begin{remark} For our model, this algorithm provides consistent estimates of {\em both} the location of the change points and the corresponding levels to the left and right of them, given certain assumptions. Specifically, consistency results for BinSeg require settings where the minimal separation distance between change points grows faster than the length of the data sequence raised to an appropriate power. Theorem 3.1 of \cite{fryzlewicz2014wild} presents a concrete result in this direction, where this particular power -- $\Theta$ in their notation (which is $1- \Xi$ in our notation) -- is restricted to be strictly larger than $3/4$. The same spacing condition appears in later work by the same author on BinSeg in high dimensional settings \cite{cho2015multiple}. However, there appears to be a caveat. A corrigendum to \cite{cho2015multiple} released by the authors \cite{chofrycorrection} shows that this spacing condition does not ensure consistency of BinSeg; rather, a stronger spacing condition, $\Theta > 6/7$, is needed. From our correspondence with the authors, there is strong reason to believe that the $3/4$ in \cite{fryzlewicz2014wild} \emph{should also change to} $6/7$; accordingly, in the sequel, where we focus on a BinSeg-based approach, we restrict ourselves to this more stringent regime, to be conservative.
\end{remark} We require the following additional assumptions: \begin{itemize} \setlength{\itemindent}{.5in} \item[(M4 (BinSeg))] The error terms of the data sequence are i.i.d. $N(0,\sigma^2)$, where $\sigma$ is a positive constant not dependent on $N$. \item[(M7 (BinSeg))] $\Xi$ (from condition (M3)) is further restricted by $\Xi\in [0,1/7)\,,$ \item[(M8 (BinSeg))] $N_1$, from step (ISM2), is chosen so that $N_1=K_1N^\gamma$ for some $K_1>0$ and $\gamma>7\Xi$. \end{itemize} Condition (M7 (BinSeg)) ensures that BinSeg will be consistent on some subsample of the data sequence, for if this condition were not satisfied and the minimum spacing $\delta_N$ grew slower than $N^{6/7}$, then established results on BinSeg (see Theorem \ref{frythm}) could not guarantee consistency on any subsample of the (or even the entire) data sequence. Condition (M8 (BinSeg)) entails selecting a large enough subsample so that Theorem \ref{frythm} becomes applicable. When (M8 (BinSeg)) is satisfied, the first stage subsample has size $N^*=(K_1+o(1))N^\gamma$ with minimal change point separation of $\delta^*_{N^*}=(N_1/N+o(1))\delta_N=(C+o(1))(N^{*})^{1-\Xi/\gamma}$ for some positive constant $C$. Finally, (M4 (BinSeg)) imposes a more restrictive structure on the error terms in order to satisfy the error term assumptions of established results. These conditions, when taken together, lead to the following result: \begin{theorem}\label{frythm} Suppose that conditions (M1) to (M3), (M4 (BinSeg)), (M7 (BinSeg)), and (M8 (BinSeg)) are satisfied, with the tuning parameter $\zeta_{N^{*}}$ chosen appropriately so that \begin{itemize} \item if $\Xi/\gamma>0$ then $\zeta_{N^*}=c_1(N^{*})^{\xi}$ where $\xi\in (\Xi/\gamma,1/2-\Xi/\gamma)$ and $c_1>0$; \item if $\Xi/\gamma=0$ then $c_2\left(\log(N^*)\right)^p\leq \zeta_{N^*}\leq c_3(N^*)^\xi$ where $p>1/2$, $\xi<1/2$, and $c_2,c_3>0\,.$ \end{itemize} Define $E_{N^*}=\left(\frac{N^*}{\delta^*_{N^*}}\right)^2\log(N^*)$.
Then, there exist positive constants $C, C_1$ such that \begin{equation}\label{eq:binsegprob} \mathbb{P}\Big[ \hat{J}=J;\quad \max_{i=1,...,J} |\hat{\tau}^*_i-\tau^*_i|\leq CE_{N^*} \Big]\geq 1-C_1/N^* \,. \end{equation} \end{theorem} \begin{remark} \label{rem:frythm} The above theorem is adapted from Theorem 3.1 of \cite{fryzlewicz2014wild}, which applies to the more general setting where $\underline{\Delta}$, the minimum signal jump, can decrease to 0 as $N\to\infty$. There are some methodological issues with the above result. First, there is no easy recipe for determining $C$ explicitly, an issue addressed in the subsequent section. Second, this result does not give an explicit value for the tuning parameter $\zeta_N$, but there do exist methods such as model selection using a Schwarz information criterion (see \cite{fryzlewicz2014wild}), as well as software to perform this within the R \texttt{changepoint} package. \end{remark} Next, we introduce estimators for the signals $\nu_j^{\star} :=\mathbb{E}[Z_{\tau_j^{\star}+1}]$, for $j=0,\dots,J$. Intuitively, they can be estimated by the average of data points between change point estimates: \begin{equation}\label{eq:signalestdef} \hat{\nu}^*_j=\frac{1}{\hat{\tau}^*_{j+1}-\hat{\tau}^*_j}\left(\sum_{\hat{\tau}^*_j<i\leq \hat{\tau}^*_{j+1}}Z_i\right)\qquad \text{ for }j=0,...,\hat{J} \end{equation} with the convention of $\hat{\tau}^*_0:=0$ and $\hat{\tau}^*_{\hat{J}+1}:=N^*$. These estimators are consistent: \begin{lemma}\label{lem:binsegsignalconsistent} Suppose conditions (M1) to (M3), (M4 (BinSeg)), (M6 (BinSeg)), and (M7 (BinSeg)) are satisfied, the $\hat{\tau}^*_i$'s are the BinSeg estimators, and the $\hat{\nu}^*_i$'s are the signal estimators defined in (\ref{eq:signalestdef}).
Then there exists a sequence $\rho^*_{N^*}\to 0$ such that $J\rho^*_{N^*}\to 0$ and \begin{eqnarray}\label{eq:binseg1stconsistent} \mathbb{P}\left[\hat{J}=J;\quad \max_{i=1,...,J}|\hat{\tau}^*_i-\tau^*_i|\leq C E_{N^*};\quad \max_{i=0,...,J}|\hat{\nu}^*_i-\nu^*_i|\leq \rho^*_{N^*}\right]\to 1 \end{eqnarray} as $N^*\to\infty$. \end{lemma} \begin{proof} See Section \ref{sec:binsegsignalconsistentproof} in Supplement Part B. \end{proof} By setting $\rho^*_{N^*}=\rho_N$ and $CE_{N^*}=w^*(N^*)$, condition (\ref{eq:firstconsistent}) will be satisfied for a $\rho_N$ satisfying $J\rho_N\to 0$, which meets all requirements of Theorem \ref{thm:increasingJasymprotics}. \subsection{Calibration of intervals used in Stage 2 of Intelligent Sampling}\label{sec:refitting} \indent Constructing confidence intervals based on Theorem \ref{frythm} would require an explicit value for $CE_{N^*}=C(N^*/\delta^*_{N^*})^2\log(N^*)$ from (\ref{eq:binsegprob}). An estimate of $\delta^*_{N^*}$ can be obtained from the minimum difference of consecutive $\hat{\tau}^*_j$'s, but an explicit expression for $C$ is unavailable, and the existing literature on binary segmentation does not appear to provide one. To address this issue, we introduce a calibration method which allows the construction of confidence intervals with explicitly calculable width around the first stage estimates $\hat{\tau}^{(1)}_j$'s: the idea is to fit stump models on data with indices $[\hat{\tau}^{(1)}_{j-1}+1,\hat{\tau}^{(1)}_{j+1}]$, as each of these stretches forms a stump model with probability tending to 1. \newline \newline Consider starting right after step (ISM4) (e.g., Figure \ref{fig:calibration1}), where we have rough estimates $\hat{\tau}_i^*$'s of the change points (with respect to the $\{Z_i\}$ sequence) and $\hat{\nu}^{(1)}_i$'s of the signals, obtained from the $N^*$ sized subsample $\{Z_i\}$.
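As a point of reference, the signal estimators in (\ref{eq:signalestdef}) are plain segment averages; the following is a minimal Python sketch (illustrative only, with 0-indexed data and hypothetical names):

```python
def segment_means(z, taus):
    """Signal estimates: average the data between consecutive change point
    estimates, with the conventions tau_0 = 0 and tau_{J+1} = len(z)."""
    bounds = [0] + list(taus) + [len(z)]
    return [sum(z[a:b]) / (b - a) for a, b in zip(bounds, bounds[1:])]
```

Here `taus` would hold the sorted change point estimates computed on the subsample.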
\begin{minipage}[t]{.48\textwidth} \begin{center} \begin{overpic}[scale=0.27]{cali1} \end{overpic} \end{center} \begin{figure} \caption{Green points are $Z_i$'s, solid green line is the BinSeg estimate.} \label{fig:calibration1} \end{figure} \end{minipage} \begin{minipage}[t]{0.04\textwidth} ~~ \end{minipage} \begin{minipage}[t]{0.48\textwidth} \begin{center} \begin{overpic}[scale=0.27]{cali2} \end{overpic} \end{center} \begin{figure} \caption{$Z_i$'s are light green points, BinSeg estimates as dashed green line, $V_i$'s as red points.} \label{fig:calibration2} \end{figure} \end{minipage} \begin{minipage}[t]{.48\textwidth} \centering \begin{overpic}[scale=0.27]{cali3-1} \end{overpic} \end{minipage} \begin{minipage}[t]{0.04\textwidth} ~~ \end{minipage} \begin{minipage}[t]{0.48\textwidth} \centering \begin{overpic}[scale=0.27]{cali3-2} \end{overpic} \end{minipage} \begin{figure}\label{fig:calibration3} \end{figure} We then pick a different subsample $\{V_i\}$ of equal size to the $\{Z_i\}$ subsample and consider the $\hat{\tau}^*_i$'s and $\hat{\nu}^{(1)}_i$'s as estimates for the parameters of this data sequence (e.g. Figure \ref{fig:calibration2}). For each $j$, fit a one-parameter stump model $f_j^{(d)}(k)=\hat{\nu}^{(1)}_{j-1}1(k\leq d)+\hat{\nu}^{(1)}_{j}1(k>d)$ (here $d$ is the discontinuity parameter) to the subset of $\{V_k:\hat{\tau}^*_{j-1}+1\leq k\leq \hat{\tau}^*_{j+1}\}$ given by $\{V_k:\hat{\tau}^*_{j}-\hat{d}_j\leq k\leq \hat{\tau}^*_{j}+\hat{d}_j\}$ where $\hat{d}_j=\min\{ (\hat{\tau}^*_j-\hat{\tau}^*_{j-1}),(\hat{\tau}^*_{j+1}-\hat{\tau}^*_{j}) \}$, to get an updated least squares estimate of $\tau_j$ (e.g. Figure \ref{fig:calibration3})\footnote{This subset is used instead of the full interval to avoid situations where $\frac{\tau_j-\tau_{j-1}}{\tau_{j+1}-\tau_j}\to$ 0 or $\infty$, which makes matters easier for theoretical derivations.}. 
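The refit just described is a least squares search over the single discontinuity parameter $d$, with the two signal levels held fixed at their first stage estimates; a minimal sketch (Python; names are hypothetical, indices are 0-based, and the search window is assumed to lie inside the data):

```python
def refit_change_point(v, tau_hat, d_hat, nu_left, nu_right):
    """One-parameter stump fit around a first stage estimate tau_hat:
    minimize the squared error over candidate discontinuities d."""
    lo, hi = tau_hat - d_hat, tau_hat + d_hat      # window of half-width d_hat

    def sse(d):
        # squared error of the stump with fixed levels and jump right after d
        return sum((v[k] - (nu_left if k <= d else nu_right)) ** 2
                   for k in range(lo, hi + 1))

    return min(range(lo, hi), key=sse)
```

With levels held fixed, only the jump location is re-estimated, which is what makes the resulting deviation explicitly quantifiable.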
\newline \newline Formally, the calibration steps are: \begin{enumerate}[label=(ISM4-\arabic*):] \setlength{\itemindent}{.5in} \item pick a positive integer $k_N$ less than $\lfloor N/N_1\rfloor$ \footnote{A good choice is $k_N=\left\lfloor\frac{\lfloor N/N_1\rfloor}{2}\right\rfloor$}, and take a subsample $\{V_i\}$ from the dataset $\{Y_i\}$, of the same size as the $\{Z_i\}$ subsample, by letting $V_i=Y_{i\lfloor N/N_1\rfloor -k_N}$ for all $i$. \end{enumerate} The $\{V_i\}$ subsample also conforms to the model given in (\ref{model}), with change points $\tau^{**}_i=\max\{j\in\mathbb{N}: j\lfloor N/N_1\rfloor -k_N\leq \tau_i\}$ and minimum spacing $\delta_{N^*}^{**}:= \min_k(\tau_{k+1}^{**}-\tau_k^{**})$ which satisfies $|\delta_{N^*}^{**}-\delta_{N^*}^*|\leq 1$. \begin{enumerate}[label=(ISM4-\arabic*):] \setlength{\itemindent}{.5in} \setcounter{enumi}{1} \item For each $i=1,...,J$, consider the estimates $\hat{\tau}_i^*$ (obtained from the $\{Z_j\}$ subsample at step (ISM3)) as estimators for $\tau^{**}_i$.
From (\ref{eq:firstconsistent}) it is possible to derive that \begin{equation}\label{eq:reconsistent} \mathbb{P}\left[ \hat{J}=J,\, \max_{i=1,...,J}|\hat{\tau}^*_i-\tau_i^{**}|\leq w^*(N^*)+1,\,\max_{i=0,...,J}|\hat{\nu}^{(1)}_i-\nu_i|\leq \rho_N \right]\to 1 \end{equation} where $w^*(N^*)+1\to\infty$ and $(w^*(N^*)+1)/\delta^{**}_{N^*}\to 0 \,.$ \item For each $i=1,...,\hat{J}$, define $\hat{d}_i=\min\{ (\hat{\tau}_{i+1}^*-\hat{\tau}_{i}^*),(\hat{\tau}_{i}^*-\hat{\tau}_{i-1}^*) \}$ (with $\hat{\tau}_{0}^*=0$ and $\hat{\tau}_{\hat{J}+1}^*=\lfloor N/\lfloor N/N_1\rfloor\rfloor$), and re-estimate the change points by letting \begin{eqnarray} \hat{\tau}^{re}_i:=\underset{t:|t-\hat{\tau}_{i}^*|< \hat{d}_i}{\arg\min}\left[\sum_{j: |j-\hat{\tau}_{i}^*|<\hat{d}_i}\big( V_j-\hat{\nu}^{(1)}_{i-1}1(j\leq t)-\hat{\nu}^{(1)}_i1(j>t) \big)^2\right] \end{eqnarray} for $i=1,...,\hat{J}\,.$ \item To translate the $\hat{\tau}^{re}_i$'s (change point estimates for the $\{V_j\}$ subsample) into estimates for $\tau_1,\dots,\tau_J$ (change points for the full data set), set the first stage change point estimators as $\hat{\tau}^{(1)}_i:=\hat{\tau}^{re}_i\lfloor N/N_1\rfloor-k_N\,.$ \end{enumerate} \begin{remark} It is important to note that although the above steps are presented in the context of using BinSeg in the first stage, any change point detection procedure that provides consistent estimates can be used instead. More broadly, these steps can be used outside of the intelligent sampling framework. Given a consistent change point estimation scheme \emph{for which a method to construct explicit confidence intervals is not known}, one could split the data into two subsamples, the odd points (first, third, fifth, etc.\ data points) and the even points.
The aforementioned estimation scheme could be applied to the odd points, and the subsequent steps (ISM4-1) to (ISM4-4) could be applied to the even points. A result similar to that of Theorem \ref{thm:reconsistentineq}, presented below, could then be used to construct confidence intervals. \end{remark} Using this procedure, we can express the deviations of the $\hat{\tau}^{re}$ estimators using the quantiles $Q_{\Delta,\sigma_1,\sigma_2}$ that were defined in the paragraph before (\ref{eq:gauss_walk_def}). \begin{theorem}\label{thm:reconsistentineq} Suppose conditions (M1) to (M3) and (M4 (BinSeg)) are satisfied, that the estimation method used in step (ISM3) satisfies (\ref{eq:firstconsistent}), and that the pertinent $\rho_N$ appearing in (\ref{eq:firstconsistent}) also satisfies $J\rho_N\to 0$. For any sequence\footnote{The sequence need not converge to 0; it only needs to stay between 0 and 1 and satisfy the bound $\alpha_N\geq CN^{-\eta}$.} $\alpha_N$ between 0 and 1 such that $\alpha_N\geq CN^{-\eta}$ for some positive $C$ and $\eta$, we have \begin{eqnarray}\label{eq:reconsistentineq} \mathbb{P}\left[ \hat{J}=J,|\hat{\tau}_j^{re}-\tau_j^{**}|\leq Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha_N}{J}\right) \text{ for }j=1,\dots,J \right]\geq 1-\alpha_N+o(\alpha_N)\nonumber\\ \end{eqnarray} \end{theorem} \begin{proof} See Section \ref{sec:reconsistentproof} of Supplement Part B. \end{proof} \begin{remark} As in Theorem \ref{thm:increasingJasymprotics}, the condition $J\rho_N\to 0 $ is required here, because the proofs of both results are similar in structure. For intelligent sampling with BinSeg used in stage 1, we reiterate that $J\rho_N\to 0$ is automatically satisfied. \end{remark} The practical implication of this result for implementing intelligent sampling is that we can obtain explicitly calculable simultaneous confidence intervals for the change-points.
The intervals $[\hat{\tau}^{re}_j\pm Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha_N}{J}\right)]$ for $j=1,\cdots,J$ capture the sparse scale change points $\tau^{**}_j$ for $j=1,\cdots,J$ with probability approaching 1 if we choose some $\alpha_N\to 0$. Converting back to the original scale, the intervals $[\hat{\tau}^{(1)}_j\pm (Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha_N}{J}\right)+1)\lfloor N/N_1\rfloor]$ for $j=1,\cdots,\hat{J}$ have the property that they simultaneously capture $\tau_1,\cdots,\tau_J$ with probability approaching 1.\footnote{In practice, the $\Delta_j/\sigma$'s are estimated from data.} The second stage samples are then picked as points within each of these intervals that are not of the form $i\lfloor N/N_1 \rfloor$ or $i\lfloor N/N_1 \rfloor-k_N$ for some integer $i$ (since the first stage points cannot be used at steps (ISM5) and (ISM6)). \subsection{Computational Considerations}\label{sec:BSWBS1} The order of computational time is higher than in the single change point case, owing to the fact that the procedure can involve a growing number of data intervals at the second stage. For the sake of our analysis, we make the simplifying assumption that $\delta_N/N^{1-\Xi}\to C_1$ and $J(N)/N^\Lambda\to C_2$ for some $\Lambda\in[0,\Xi]$ and some positive constants $C_1,C_2$. As a reminder, for intelligent sampling with BinSeg used in stage 1, conditions (M6 (BinSeg)) and (M7 (BinSeg)) automatically impose the condition that $\Lambda\leq\Xi<1/7$. \newline \newline \indent Under the previous assumptions, we consider performing our two stage procedure, with the first stage being: \begin{enumerate} \item apply BinSeg on an evenly spaced subsample with $N_1=CN^\gamma$ for some $\gamma \in (0,1]$, $C>0$, \item use the BinSeg results to apply steps (ISM4-1) to (ISM4-4), obtaining confidence intervals, \end{enumerate} and then the second stage of the procedure following through steps (ISM5) and (ISM6).
We state here (full details in Section \ref{sec:time_order_longer} of the supplement) that the total computational time of these two stages will be $O( N^{\gamma_{\min}\vee (1-\gamma_{\min}+\Lambda) }\log(N) )$ where \begin{eqnarray} \gamma_{\min} = \max\left\{ \frac{1+\Lambda}{2},7\Xi+\eta \right\} \end{eqnarray} and $\eta$ is any small positive value. We note that in the special case where $\Lambda=0$ and $\Xi=0$, which translates to finitely many ($J\nrightarrow \infty$) change points that are roughly evenly spaced apart, we would have $\gamma_{\min} =1/2$ and the total computational time would be $O(\sqrt{N}\log(N))$. This is the shortest achievable time order for the two stage procedure. \newline \newline \indent To achieve lower computational time orders, we can also consider multiple ($>2$) stage procedures. A three stage procedure would have the same first stage (BinSeg and re-estimation on separate subsamples); the second stage would estimate a stump model within each confidence interval from stage 1, but using a subsample from each interval instead of the whole interval of data, to create another set of confidence intervals; at stage 3, stump models would be fitted with all the points within each confidence interval created in stage 2. Four and five stage procedures can be designed similarly. In terms of running time, a $k$-stage procedure can, in principle, run in $O(\sqrt[k]{N}\log(N))$ time in certain cases, but for methodological concerns and for illustrative purposes we focus on the two stage procedure. Further details and discussions regarding more stages can be found in Section \ref{sec:compmethod} of the Supplement. \section{Performance Evaluation of Intelligent Sampling: Simulation Results}\label{sec:simulations} We next present a series of simulation results.
Part of our computational work illustrates the validity of our theoretical results, such as the rate of convergence, the lower than $O(N\log(N))$ computational running time, and the asymptotic distributions. We also present some other simulations to analyze the practical benefits of our procedure. All simulations in this section were performed on a server with 24 Intel Xeon ES-2620 2.00 GHz CPUs, with a total RAM of 64 GB. \subsection{Rate of Convergence and Time}\label{sec:ratesimu} Our first set of simulations aims to confirm the predicted scaling of the maximal deviation (no faster than $\log(J)$) and of the computational time ($O(\sqrt{N}\log(N))$ in the best cases). We simulate data sequences with increasing $N$, and to show that the $\sqrt{N}\log(N)$ time scaling is attainable, we set the number of change points to grow slowly and the separations between consecutive change points to be roughly equal. \newline \newline We generated sequences of length $N$ varying from $10^5$ to $10^{7.5}$, evenly on the log scale, with the number of change points being $J\approx\log_{10}(N)^2$. The change point locations and the signal levels were randomly generated: \begin{itemize} \item The spacings $(\tau_1,\tau_2-\tau_1,\tau_3-\tau_2,....,N-\tau_J)$ were generated as the sum of the constant $\frac{N}{1.5J}$ and the rounded values of $\left(N-\frac{(J+1)N}{1.5J}\right)\cdot (U_{(1)},U_{(2)}-U_{(1)},\dots ,U_{(J)}-U_{(J-1)},1-U_{(J)}) $ where $U_{(1)},\dots, U_{(J)}$ are the order statistics of $J$ iid uniform $[0,1]$ variables. \item The signals were generated as a Markov chain with $\nu_0$ initialized as 0, and iteratively, given $\nu_i$, $\nu_{i+1}$ is generated from a density proportional to $f(x)=\exp(-0.3(x-\nu_i-\underline{\Delta}))1(\nu_i+\underline{\Delta}\leq x\leq M)+\exp(0.3(x-\nu_i+\underline{\Delta}))1(\nu_i-\underline{\Delta}\geq x\geq -M)$, where $M$ was taken to be 10 and $\underline{\Delta}$ was taken to be 1.
\end{itemize} For each of 10 values of $N$, 50 configurations of change points and signals were generated, and on each of those configurations 40 datasets with iid $N(0,1)$ error terms were simulated. At stage 1 we performed BinSeg and the steps outlined in Section \ref{sec:refitting}, along with some ad-hoc additional steps to increase accuracy. In detail, these steps are \begin{enumerate} \item take an evenly spaced subsample with $N_1=50\sqrt{N}$, and perform BinSeg on this dataset with tuning parameter $\zeta_{N}=N_1^{0.2}$ (which is within the range described in Theorem \ref{frythm}), \item remove estimates that are too close together by discarding every $\hat{\tau}_i^*$ where $\hat{\tau}_i^*-\hat{\tau}_{i-1}^*\leq 15$, \item estimate the signals and remove every $\hat{\tau}_i^{*}$ for which the estimated signals to the left and the right differ by less than 0.5, \item perform steps (ISM4-1) to (ISM4-4), \item repeat steps 2 and 3, \item construct confidence intervals using the $L_{\hat{\Delta}_j,\hat{\sigma},\hat{\sigma}}$ distributions and then proceed with steps (ISM5) and (ISM6). \end{enumerate} To gauge the rate of convergence, the maximum deviations $\underset{j=1,...,J}{\max}|\lambda_2(\tau_j,\hat{\tau}^{(2)}_j)|$ were recorded for instances where $\hat{J}=J$ (which occurred over 99.5\% of the time for all $N$). We also compared the mean computational time of intelligent sampling with the mean time of using BinSeg on the whole data sequence, where the latter was averaged over 100 iterations with similarly generated data (50 configurations of randomly generated change points and signals, 2 runs per configuration).
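The change point placement used here (a constant floor of $N/(1.5J)$ plus rescaled uniform gaps) can be sketched as follows; the function name and default generator are illustrative only:

```python
import random

def draw_spacings(n, j, rng=None):
    """Spacings between consecutive change points (j + 1 of them): the
    constant n/(1.5 j) plus rounded, rescaled gaps of j ordered uniforms."""
    rng = rng or random.Random(0)
    u = sorted(rng.random() for _ in range(j))
    gaps = [u[0]] + [b - a for a, b in zip(u, u[1:])] + [1 - u[-1]]
    floor = n / (1.5 * j)
    scale = n - (j + 1) * n / (1.5 * j)   # positive once j >= 3
    return [round(floor + scale * g) for g in gaps]
```

The change points themselves are the cumulative sums of the first $j$ spacings; by construction the minimum spacing is at least $N/(1.5J)$ and the spacings sum to $N$, both up to rounding.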
The results are depicted in Figure \ref{fig:convergence-rate-mcp} and are in accordance with the theoretical investigations: the quantiles of $\max\,\lambda_2\left(\tau_j,\hat{\tau}_j^{(2)}\right)$ scale sub-linearly with $\log(J)$ (which is $\sim \log(\log(N))$ in this setup where $J\sim \log^2(N)$) as predicted in Theorem \ref{thm:multiorder}, and the computational time of intelligent sampling scales in the order of $\sqrt{N}$ compared to the order $N$ computational time of using BinSeg on the entire dataset. \begin{figure} \caption{\textbf{Left:} Quantiles of the max deviations versus $\log(\log(N))$, which is of the same order as $\log(J)$. Over the observed regime of parameters, the maximal deviation scales sub-linearly with $\log(J)$, as predicted by Theorem \ref{thm:multiorder}. \textbf{Right:} Log-log plot of mean computational time when using intelligent sampling to obtain the final change-point estimates at stage two, and when using BinSeg on the full data to construct change-point estimates, with reference lines of slope 1 (black) and 0.5 (red) respectively. To give some sense of the actual values, for $N=10^{7.5}$ the average times for intelligent sampling vs full data were, respectively, 0.644 and 31.805 seconds. } \label{fig:convergence-rate-mcp} \end{figure} \subsection{Asymptotic Distribution}\label{sec:simu_asym_dist} A second set of simulation experiments was used to illustrate the asymptotic distribution of the change point deviations. We considered a setting with $N=10^7$ and 55 change points, which was the maximum number of change points used in the previous simulation setting. However, we now consider different error term distributions and placements of change points in 4 different settings: \begin{enumerate}[label=(Setup \arabic*):] \setlength{\itemindent}{.5in} \item one set of signal and change point locations generated as in the previous set of simulations, with i.i.d.
$N(0,1)$ error terms; \item change points evenly spaced apart with signals $0,1,0,1,\dots$ repeating, and i.i.d. $N(0,1)$ error terms; \item change points evenly spaced apart with $0,1,0,1,\dots$ repeating signals, and error terms generated as $\varepsilon_i=\frac{ \varepsilon^*_i+\varepsilon^*_{i+1}+\varepsilon^*_{i+2}+\varepsilon^*_{i+3}+\varepsilon^*_{i+4}}{\sqrt{5}}$ for all $i=1,...,N$, where the $\varepsilon^*_i$'s are generated as i.i.d. $N(0,1)$; \item change points evenly spaced apart with signals $0,1,0,1,\dots$, and error terms generated from an AR(1) series with parameter 0.8, and each marginally $N(0,1)$. \end{enumerate} The first and second setups demonstrate the result of Theorem \ref{thm:multidepend}. As for the other two setups with dependent error terms, we claim that the asymptotic distribution will follow a form similar to that of the asymptotic distribution in the iid case, with more specific details in Section \ref{sec:dependenterrors} of the supplement. For all 4 cases the first stage of intelligent sampling was performed identically to the previous set of simulations, and with the same tuning parameters. At the second stage, first stage subsample points were omitted for data with setups 1 and 2, but not for setups 3 and 4. From 2000 iterations on each of the 4 simulation setups, the distributions of the maximum deviations (maximum of $|\lambda_2(\tau_j,\hat{\tau}_j^{(2)})|$ for the first two setups and of $|\hat{\tau}^{(2)}_j-\tau_j|$ for the other two setups) are seen to match well with their predicted asymptotic distributions. To illustrate the convergence of the individual change point estimates, we also show that the distribution of the deviation for the 27th change point estimate matches the $L$-type distributions appearing in Proposition \ref{thm:notiidconsistent} of the supplement.
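The dependent error sequences of setups 3 and 4 can be generated as in the following sketch (illustrative names; the AR(1) innovation variance is scaled so that the marginal law is $N(0,1)$, matching the setup descriptions):

```python
import math
import random

def ma5_errors(n, rng):
    """Setup 3: each error averages five consecutive iid N(0,1) innovations,
    scaled by 1/sqrt(5) so the marginal variance is 1."""
    z = [rng.gauss(0, 1) for _ in range(n + 4)]
    return [sum(z[i:i + 5]) / math.sqrt(5) for i in range(n)]

def ar1_errors(n, phi, rng):
    """Setup 4: stationary AR(1) with coefficient phi and marginal N(0,1)."""
    eps = [rng.gauss(0, 1)]            # draw the first point from the stationary law
    sd = math.sqrt(1 - phi * phi)      # innovation sd giving unit marginal variance
    for _ in range(n - 1):
        eps.append(phi * eps[-1] + sd * rng.gauss(0, 1))
    return eps
```

Both constructions have standard normal marginals, so the setups differ from setups 1 and 2 only through the serial dependence.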
\begin{figure} \caption{Distributions of $\max_{1\leq j\leq 55}\lambda_2\left(\tau_j,\hat{\tau}_j^{(2)}\right)$ (\textbf{Left}) and $\lambda_2\left(\tau_{27},\hat{\tau}_{27}^{(2)}\right)$ (\textbf{Right}) from simulations of setup 1. } \label{fig:setup1} \end{figure} \begin{figure} \caption{Distributions of $\max_{1\leq j\leq 55}\lambda_2\left(\tau_j,\hat{\tau}_j^{(2)}\right)$ (\textbf{Left}) and $\lambda_2\left(\tau_{27},\hat{\tau}_{27}^{(2)}\right)$ (\textbf{Right}) from setup 2. } \label{fig:setup2} \end{figure} \begin{figure} \caption{Distributions of $\max_{1\leq j\leq 55}\left|\tau_j-\hat{\tau}_j^{(2)}\right|$ (\textbf{Left}) and $\left|\tau_{27}-\hat{\tau}_{27}^{(2)}\right|$ (\textbf{Right}) from setup 3. } \label{fig:setup3} \end{figure} \begin{figure} \caption{Distributions of $\max_{1\leq j\leq 55}\left|\tau_j-\hat{\tau}_j^{(2)}\right|$ (\textbf{Left}) and $\left|\tau_{27}-\hat{\tau}_{27}^{(2)}\right|$ (\textbf{Right}) from setup 4. } \label{fig:setup4} \end{figure} The distribution of the deviations for setup 1 is the least spread out, primarily because the jumps between signals are randomly generated but bounded below by 1, while the other 3 setups have all signal jumps fixed at 1. Setups 3 and 4 have the most spread out distributions, as the dependence among the error terms causes the estimation to be less accurate, though only by a constant factor rather than an order of magnitude. Nevertheless, in all 4 setups, the change point estimates behave very much in line with what Theorems \ref{thm:increasingJasymprotics} and \ref{thm:multidepend} (for the first two scenarios) and Proposition \ref{thm:notiidconsistent} (for the third and fourth scenarios) predict. \newline \newline \subsection{Asymptotic Distribution for the Heteroscedastic Case} We further explored the validity of Proposition \ref{thm:notiidconsistent} by examining a setting involving heteroscedastic errors.
A data sequence of length $N=10^7$ with 55 evenly spaced change points and signals of magnitude $0,1,0,1,0,1,\dots,1$ was generated. Instead of generating error terms from a stationary series, we generated them as independent segments of Gaussian processes as follows. For $j=1,2,3,\dots$, \begin{enumerate} \item from $\tau_{4j}$ to $\frac{\tau_{4j+2}+\tau_{4j+1}}{2}$ the errors are iid $N(0,1)$; \item from $\frac{\tau_{4j+2}+\tau_{4j+1}}{2}$ to $\frac{\tau_{4j+3}+\tau_{4j+2}}{2}$ the errors are $\varepsilon_i=\frac{\varepsilon_i^*+0.5\varepsilon_{i+1}^*+0.25\varepsilon_{i+2}^*}{\sqrt{1^2+0.5^2+0.25^2}}$ where the $\varepsilon_i^*$'s are iid $N(0,1)$ (and will be treated as a generic iid $N(0,1)$ sequence from here on); \item from $\frac{\tau_{4j+3}+\tau_{4j+2}}{2}$ to $\tau_{4j+3}$, error terms are $\varepsilon_i=0.5\cdot\frac{\varepsilon_i^*+\varepsilon_{i+1}^*+\varepsilon_{i+2}^*+\varepsilon_{i+3}^*}{\sqrt{4}}$; \item from $\tau_{4j+3}$ to $\tau_{4j+4}$ the error terms are $\varepsilon_i=0.7\cdot\frac{\varepsilon_i^*+\varepsilon_{i+3}^*}{\sqrt{2}}$; \end{enumerate} and the error terms generated in each stretch are independent of those in any other stretch. This creates a situation where around $\tau_{4j+1}$ the error terms are iid $N(0,1)$, around $\tau_{4j+2}$ the error terms are stationary, and around $\tau_{4j+3}$ and $\tau_{4j+4}$ the error terms are stationary to the left and to the right, but their autocorrelation and marginal variances change at the change points. With the same intelligent sampling procedure as in setups 3 and 4, and the same tuning parameters, we ran 2000 replicates of this setup and recorded the $\hat{\tau}^{(2)}_j-\tau_j$ values for $j=1,\dots,55$. \begin{figure} \caption{\textbf{Top:} Predicted and actual distribution of the maximal deviation $\max_{1\leq j\leq 55}|\hat{\tau}^{(2)}_j-\tau_j|$.
\textbf{Bottom, Left to Right:} Predicted and actual distributions of the individual deviations $|\hat{\tau}^{(2)}_j-\tau_j|$ for $j=29, 30, 31,$ and $32$. } \end{figure} \noindent These numerical results are consistent with Proposition \ref{thm:notiidconsistent}: even for change points whose error distributions differ to the left and to the right of the corresponding change point, the deviations match up very closely with the stated asymptotic distributions. \subsection{Choice of Subsample Size}\label{sec:simu_binseg_double} In order to showcase the validity of our asymptotic results, for the previous simulation settings we tuned the first stage subsample size to ensure that the exact number of change points was detected with high accuracy. In practice, it is difficult to know how to choose the best size of the first stage subsample, since one does not know the number of change points or how far apart they are. In this section we propose a systematic \emph{data-driven} way of determining an adequate subsample size at the first stage: draw a (small) subsample of size $N_1$, perform BinSeg, and record the number of change-points detected; then perform BinSeg on a fresh subsample whose size is increased by a constant factor and re-record the number of change-points. Keep increasing the subsample size and re-estimating until some stopping criterion related to the growth in the number of detected change-points is reached.
Formally, we propose the following procedure, coined {\bf ``BinSeg-Double''}: \begin{enumerate} \item perform BinSeg on an evenly spaced subsample with $N_1=2\sqrt{N}$, obtaining an estimate $\hat{J}_1$ for $J$ along with change point estimates; \item generate a new evenly spaced subsample of double the previous size and obtain an estimate $\hat{J}_2$ along with change point estimates; \item continue the previous step until we reach a $\hat{J}_i$ such that $\hat{J}_i>\hat{J}_{i-2}+5$ and $|\hat{J}_i-\hat{J}_{i-1}|<5$, or until $\left|\min\{ \hat{J}_{i-3},\dots,\hat{J}_i \}-\max\{ \hat{J}_{i-3},\dots,\hat{J}_i \}\right|<5$. \end{enumerate} With regard to the stopping criterion at the last step, we reason that generally, the number of change points detected will go up at every iteration until nearly all change points are detected, or until the subsample size has increased to the size of the full data set. Hence, we stop doubling the subsample once the number of detected change points essentially stabilizes. This stopping criterion yields good results in several settings, as we shall see in subsequent simulations, but other scenarios might require alternative stopping rules for better accuracy. In practical settings the practitioner may use a different last step to adapt to the difficulty of the problem. \newline \newline \indent We gauged the effectiveness of this method as follows. We generated data sequences of length $N=10^6$ with iid $N(0,1)$ error terms, with evenly spaced change points, and the signals alternating between $0,\Delta, 0,\Delta,\dots$, where $\Delta$ is the SNR parameter, allowed to vary across simulation settings in addition to the number of change-points. For each configuration, we invoked intelligent sampling at stage 1 using the BinSeg-Double procedure, followed by pruning consecutive change point estimates that are within 5 points of each other.
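A toy sketch of BinSeg-Double, with a bare-bones single-threshold BinSeg (built directly on the CUSUM statistic of Section \ref{sec:binsegdescription}) standing in for the R \texttt{changepoint} implementation; the names, the fixed threshold, and the subsampling details are illustrative only:

```python
import math

def cusum(z, s, e, b):
    """CUSUM statistic at b with endpoints (s, e), all 0-indexed inclusive."""
    n = e - s + 1
    return (math.sqrt((e - b) / (n * (b - s + 1))) * sum(z[s:b + 1])
            - math.sqrt((b - s + 1) / (n * (e - b))) * sum(z[b + 1:e + 1]))

def binseg(z, threshold):
    """Single-threshold binary segmentation (steps 1-5 above, run iteratively)."""
    estimates, stack = [], [(0, len(z) - 1)]
    while stack:
        s, e = stack.pop()
        if s >= e:
            continue
        b0 = max(range(s, e), key=lambda b: abs(cusum(z, s, e, b)))
        if abs(cusum(z, s, e, b0)) >= threshold:
            estimates.append(b0)
            stack += [(s, b0), (b0 + 1, e)]
    return sorted(estimates)

def binseg_double(y, threshold, cap=5):
    """Double the evenly spaced subsample until the detected count stabilizes."""
    n = len(y)
    n1, counts = max(4, int(2 * math.sqrt(n))), []
    while True:
        n1 = min(n1, n)
        sub = [y[i * (n // n1)] for i in range(n1)]      # evenly spaced subsample
        counts.append(len(binseg(sub, threshold)))
        i = len(counts)
        if i >= 3 and counts[-1] > counts[-3] + cap and abs(counts[-1] - counts[-2]) < cap:
            break                                        # first stopping rule
        if i >= 4 and max(counts[-4:]) - min(counts[-4:]) < cap:
            break                                        # last four counts stabilized
        if n1 == n:
            break                                        # full data reached
        n1 *= 2
    return counts
```

On easy problems the detected count stabilizes after a few doublings, while on hard ones the loop runs up to the full data set, mirroring the automatic calibration discussed at the end of this section.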
As an implementation detail, in this and all subsequent sections we used the R \texttt{changepoint} package functions instead of the manually coded BinSeg algorithm with pre-specified tuning parameter of the previous sections. The second stage calculations remain unaltered. We recorded the computational time and confidence level coverage for each simulation configuration, and compared them with the computational time of running BinSeg on the entire dataset. \begin{table}[H] \caption{Two Stage Running Time} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline \backslashbox{SNR~~}{$J$~~} & 50 & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450\\ \hline\hline 1 & 1.67 & 4.6 & 11.84 & 15.1 & 39.58 & 49.8 & 53.95 & 80.41 & 132.22\\ \hline 1.5 & 1.08 & 2.85 & 9.04 & 7.42 & 20.12 & 20.84 & 27.34 & 39.3 & 74.16\\ \hline 2 & 0.71 & 1.68 & 3.34 & 7.63 & 11.7 & 20.37 & 24.48 & 45.79 & 57.06\\ \hline \end{tabular}\\ Average running time in seconds across 200 iterations of every configuration. \caption{Full Data Running Time} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline \backslashbox{SNR~~}{$J$~~} & 50 & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450\\ \hline\hline 1 & 17.63 & 26.66 & 35.77 & 45.14 & 54.56 & 64.2 & 73.77 & 83.72 & 93.47\\ \hline 1.5 & 17.39 & 26.55 & 35.92 & 44.59 & 53.75 & 63.34 & 72.75 & 82.71 & 92.4\\ \hline 2 & 17.13 & 26.01 & 35.17 & 44.44 & 53.63 & 63.39 & 72.61 & 82.37 & 91.86\\ \hline \end{tabular}\\ Average running time in seconds across 50 iterations of every configuration. \end{table} As is evident from these tables, the running time of intelligent sampling relative to the full data procedure is lowest when the number of change points is low and the SNR at these changes is high. Conversely, when the number of change points is large and/or the SNR values are low, it becomes necessary to take a bigger subsample at the first stage, increasing the relative computation time.
We also show that our proposed first stage procedure leads to accurate results. In each iteration, after the second stage change point estimates were obtained and the timer stopped, we constructed $(1-\alpha)$ level (not simultaneous) confidence intervals around each estimate, and recorded the proportion of true change points captured in the union of these intervals. Specifically, at each iteration $k$ we constructed a confidence region $\hat{C}_k$, and recorded \begin{eqnarray} \frac{1}{J\cdot\text{(\# of iterations)}}\sum_{ k=1 }^{\text{(\# of iterations)}}\sum_{j=1}^J 1( \tau_j\in \hat{C}_k ). \end{eqnarray} \begin{table}[H] \caption{Change Point Coverage for Two Stages} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline \backslashbox{SNR~~}{$J$~~} & 50 & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450\\ \hline\hline 1 & 0.966 & 0.971 & 0.945 & 0.944 & 0.968 & 0.966 & 0.935 & 0.967 & 0.969\\ \hline 1.5 & 0.978 & 0.985 & 0.958 & 0.958 & 0.983 & 0.981 & 0.954 & 0.982 & 0.982\\ \hline 2 & 0.993 & 0.997 & 0.972 & 0.971 & 0.996 & 0.995 & 0.971 & 0.995 & 0.996\\ \hline \end{tabular}\\ Proportion of change points covered over all 200 iterations with $\alpha=0.01$. As all proportions are close to 0.99, this indicates accurate estimates. \end{table} We next perform a similar set of simulations, with $N$ and the error structure staying the same, but with an added element of randomness in the placement of the change points and the values of the signals. The change points $(\tau_1,\dots,\tau_J)$ are generated as $N/(4(J+1))$ plus the random variables $\frac{3N}{4(J+1)}(U_{(1)}, U_{(2)}-U_{(1)},U_{(3)}-U_{(2)},\dots,1-U_{(J)} )$, where $U_{(1)},\dots, U_{(J)}$ are the order statistics of $J$ iid uniform (0,1) random variables. The changes in signal at the different change points are randomly generated as independent $(2S-1)\cdot V$ variables, where $S$ is Bernoulli with $p=0.5$ and $V$ is uniform on [1,4].
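The coverage metric in the display above is straightforward to compute. A minimal Python sketch (a hypothetical helper for illustration, not the code used in the experiments), where each confidence region $\hat{C}_k$ is represented as a list of intervals:

```python
def coverage_proportion(true_cps, interval_sets):
    """Empirical coverage as in the displayed formula: for each
    iteration k, `interval_sets[k]` is the list of (lo, hi) confidence
    intervals making up C-hat_k; record the fraction of the J true
    change points falling in their union, averaged over iterations."""
    J = len(true_cps)
    K = len(interval_sets)
    hits = 0
    for intervals in interval_sets:
        for tau in true_cps:
            # tau is covered if it lies in at least one interval
            if any(lo <= tau <= hi for lo, hi in intervals):
                hits += 1
    return hits / (J * K)
```

For example, with two true change points, one iteration covering both and another covering neither, the reported proportion is 0.5.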
\begin{table}[H] \caption{Running Time for Random Design} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline \backslashbox{Method~~}{$J$~~} & 50 & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450\\ \hline\hline Two Stage & 1.02 & 3.3 & 9.84 & 21.67 & 32.06 & 44.7 & 64.09 & 86.89 & 109.84\\ \hline Full Data & 18.07 & 27.36 & 36.81 & 46.38 & 56.04 & 65.77 & 75.64 & 85.32 & 95.16\\ \hline \end{tabular} \\ Average running time in seconds across 200 iterations of the random design setup. \end{table} The coverage proportions of the actual change points are \begin{table}[H] \caption{Coverage Proportion for Random Design} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \hline $J$~~ & 50 & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450\\ \hline\hline Coverage & 0.985 & 0.986 & 0.993 & 0.994 & 0.995 & 0.996 & 0.996 & 0.995 & 0.996\\ \hline \end{tabular} \\ Proportion of change points covered by $\alpha=0.01$ level (non-simultaneous) confidence sets across 200 iterations. \end{table} ~\indent We note that in both simulation setups, intelligent sampling gains the most computational time savings over the full data analysis when the separation between change points is long (corresponding to smaller $J$) and when the SNR is high. Such setups require only a small subsample at stage 1 to capture all the change points. In certain cases where $J$ is high enough and/or the SNR low enough, \emph{it would be impossible to detect all the change points at stage 1 with any strictly smaller subsample, meaning BinSeg-Double would continue until the full dataset is utilized}. In this respect, the doubling procedure is \emph{automatically calibrated} to the intrinsic difficulty of the problem.
Our findings are perfectly consistent with our previous claims that intelligent sampling is most useful for detecting sparsely placed, high SNR change points in long data sequences; in settings with densely placed, lower SNR change points (or a large number of change points), one cannot obtain accurate results without utilizing large subsamples (or even the entire data set), thus mitigating, or even nullifying, the computational time savings of intelligent sampling. \section{Data Applications}\label{sec:realdata} We apply the intelligent sampling framework to two network traffic data sets. The first one exhibits stationary behavior, and we artificially inject change points by shifting the local mean of the data across different stretches. Hence, there is a known ground truth to calibrate the performance of intelligent sampling in a setting where other features of the data (error distributions, presence of temporal dependence) may not fully adhere to the assumptions used in establishing the theoretical properties of the procedure. The second is a fully observed data set for which the ground truth is unknown. \subsection{Partly Emulated Data} \label{partly-emulated-data} The effectiveness of the proposed intelligent sampling procedure is illustrated on an Internet traffic data set, obtained from the public CAIDA repository \texttt{http://data.caida.org/datasets/passive/passive-oc48/20020814-160000.UTC/pcap/} that contains traffic traces from an OC48 (2.5 Gbits/sec) capacity link. The trace under consideration contains all packets that went through the link over an approximately two hour period at a large west coast Internet service provider in 2002; after aggregation into bins of length 300 microseconds, the resulting data sequence comprises $N=1.5\times 10^7$ observations.
After applying a square-root transformation, a small segment of this sequence is depicted in Figure \ref{fig:raw-data}, and some of its statistical characteristics in Figure \ref{fig:realnormal}. \begin{figure} \caption{First 5000 time points of the data after a square root transformation.} \label{fig:raw-data} \end{figure} \begin{figure} \caption{QQ plot and estimated ACF of first 5000 points of data set after square root transformation. Note the normality of the transformed data.} \label{fig:realnormal} \end{figure} It can be seen that the data are close to marginally normally distributed, while their autocorrelation decays rapidly and essentially disappears after a lag of 10. Similar exploratory analyses performed for multiple stretches of the data lead to similar conclusions. Hence, for the remainder of the analysis, we work with the square-root transformed data and model them as a short range dependent sequence. To illustrate the methodology, we used an {\em emulation} setting, where we injected mean shifts of random durations into the mean level of the data, as described next. This allows us to test the proposed intelligent sampling procedure, while at the same time retaining all features of the original data. In our emulation experiments, we posit that there are two types of disruptions: short term spikes that may be the result of specific events (release of a software upgrade, a new product or a highly anticipated broadcast), and longer duration disruptions that may be the result of malicious activity \cite{carl2006denial,kallitsis2016amon}.
We first take the square-root transformed data $Y_1,\dots,Y_N$ and add long, persistent change points by taking $Y_i\leftarrow Y_i+\theta_i$, where $\theta_i$ is piecewise constant with change points $(\tau_1,\dots,\tau_J)$ ($J=200$) generated as $\frac{N}{2.5(J+1)}+\frac{1.5N}{2.5(J+1)}\cdot (U_{(1)},U_{(2)},\dots, U_{(J)})$, where $(U_{(1)},\dots, U_{(J)})$ are the order statistics of $J$ iid uniform (0,1) variables, and signals $(0,S_1,0,S_2,\dots)$ where $S_1,S_2,\dots$ are iid uniform (0.7,2) variables. Next we emulate a larger number of \emph{spikes} in the data, or short term large increases in the signal, by randomly selecting 400 locations for these spikes as $(V_1,\dots,V_{400})=(N-50)\cdot(U_1,\dots,U_{400})$ (where $U_1,U_2,\dots$ are iid uniform (0,1)), and setting $(Y_{V_j},\dots,Y_{V_j+50})\leftarrow (Y_{V_j},\dots,Y_{V_j+50})+W_j$ for all $j$, where the $W_j$ are iid uniform (10,15) variables. A depiction of a segment of the data with the emulated signal is given in Figure \ref{fig:emulated-traffic}. \begin{figure} \caption{Example of the first half million points of an emulated data example. } \label{fig:emulated-traffic} \end{figure} As mentioned in the introduction, the main objective of the proposed methodology is to identify {\em long duration, persistent shifts} in the data sequence using a limited number of data points; in the emulation scenario used, this corresponds to the change points $(\tau_1,\dots,\tau_J)$, while we remain indifferent to the points $V_j, V_j+50$ for $j=1,\dots,400$, which as previously described are change points corresponding to the spikes\footnote{We note that the theoretical development does not include spiky signals. Nonetheless, we included spiky signals in our emulation to mimic the pattern of internet traffic data.
As will be seen later, our method is quite robust to the presence of this added feature.} The two-stage intelligent sampling procedure was implemented as follows: (i) the first stage used BinSeg-Double, followed by removal of estimates less than 4 points apart, followed by steps (ISM4-1) to (ISM4-4), and then followed by removal of estimates less than 16 apart. (ii) For each $j$, the second stage interval surrounding $\hat{\tau}_j^{(1)}$ was chosen to have half width $Q_{\hat{\Delta}_j,\hat{\sigma},\hat{\sigma}}\left(1-\frac{0.01}{\hat{J}}\right)$, where the $\hat{\Delta}_j$'s and $\hat{\sigma}$ are estimates of the jumps and the common standard deviation. A stump model was then fitted to the data in each second stage interval to obtain the final estimates of the change points, and the final (2nd stage) CIs were constructed. \newline \newline \indent To assess the accuracy, we calculated the coverage proportion of the 90\%, 95\%, and 98\% level confidence intervals over different emulation settings. To construct these confidence intervals we had to randomly generate data sequences with the same distributional structure as the data (which would give us a random sample of $L$-type distributions and their quantiles). We generated these sequences as marginally normal random variables, with marginal standard deviation the same as the sample sd of the first 50,000 points of the $1.5\times 10^7$ length data. Finally, the ACF of the generated series was matched with the sample ACF of the first 50,000 points up to a lag of 20: we first generated vectors of iid normal variables, then multiplied them with the Cholesky square root of the band matrix created with the sample ACF (the bandwidth of this matrix is 20, and non-zero entries are taken from the first 20 values of the sample ACF).
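The noise generator just described (iid normals multiplied by the Cholesky square root of a banded autocovariance matrix built from the sample ACF) can be sketched in Python as follows. This is an illustrative translation, and it assumes the banded truncation of the autocovariance yields a positive definite matrix, which need not hold for an arbitrary sample ACF:

```python
import numpy as np

def correlated_gaussian(n, acf, sd=1.0, seed=0):
    """Generate a length-n Gaussian sequence whose autocovariance
    matches sd^2 * acf[k] up to lag len(acf)-1 and is zero beyond.

    Builds the banded Toeplitz autocovariance matrix, takes its
    Cholesky factor, and multiplies a vector of iid standard normals.
    Assumes the banded matrix is positive definite.
    """
    acf = np.asarray(acf, dtype=float)       # acf[0] should be 1.0
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = np.zeros((n, n))
    mask = lags < len(acf)                   # band of width len(acf)
    cov[mask] = (sd ** 2) * acf[lags[mask]]
    L = np.linalg.cholesky(cov)              # fails if not PD
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)
```

For the paper's setting one would pass the first 20 values of the sample ACF and the sample standard deviation of an initial data stretch; the $O(n^3)$ dense Cholesky here could be replaced by a banded factorization for long sequences.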
Intelligent sampling exhibits highly satisfactory performance: among all 200 change points corresponding to persistent changes, the average coverage probabilities for the 90\%, 95\%, and 98\% nominal confidence intervals were 0.874, 0.914, and 0.939, respectively. On the other hand, for change points induced by the spikes, the average coverage probability was lower than 0.035 even for the 98\% confidence intervals. However, since the focus of intelligent sampling is on long duration persistent signals, missing the spiky signals is of no great consequence. In terms of computational burden, the average emulation setting utilized 17.1\% of the full dataset, requiring an average time of 29.7 seconds to perform the estimation. \begin{figure} \caption{Coverage proportions, the proportion of time when the change point was covered by some confidence interval, for the 90\% level (green bars), 95\% level (blue bars), and 98\% level (red bars) within the 500 iterations, for a selection of 20 change points (change point \# 2 is always the second one in order, \# 3 is the third in order, etc.). Horizontal reference lines are at 0.9 (green), 0.95 (blue), and 0.98 (red).} \end{figure} \subsection{Real Data Example} \label{real-data-section} We further gauged the practical performance of intelligent sampling by applying it to a different data set with \emph{naturally occurring change points} of varying durations. The growing capacity of Internet links (routinely at 10 Gbps), together with the proliferation of such links, has rendered network monitoring a more challenging but at the same time critical task, given increased malicious activity. However, most monitoring tools rely on sampling packets at rates of 1:1000 or 1:10000 in order to minimize interference with network infrastructure.
In an experimental project at Merit Network, an alternative was developed that aggregates traffic at 2 sec intervals and then records it (see details on the technology in \cite{kallitsis2016amon}). The data analyzed next correspond to a trace of all incoming traffic to the autonomous system administered by Merit Network, obtained during the spring of 2019 and comprising 2,080,768 observations. The objective is to identify persistent change points in the traffic pattern. However, since the data have not been previously analyzed, no ground truth is known. Unlike the 2002 trace previously analyzed, the current trace (segments in logarithmic scale shown in Figure \ref{fig:dataportion1}) exhibits a richer set of patterns, but broadly resembles a piecewise linear signal plus noise, with multiple discontinuities and abrupt changes in slopes. \begin{figure} \caption{Plot of the last quarter of the data (i.e., points $0.75N$ to $N$ of the data).} \label{fig:dataportion1} \end{figure} \begin{figure} \caption{Plot of the first 5000 points of the data.} \label{fig:dataportion2} \end{figure} ~\indent The data set contains several particularly prominent change points exhibiting very large jumps, but zooming in to short intervals reveals that the signal oscillates up and down with a wavelength of a few hundred points, as shown in Figure \ref{fig:dataportion2}. Therefore, the data also contain numerous less prominent change points in signal slope at the crests and troughs of the signal oscillations. For our analysis, we are primarily interested in the locations of the large jumps. Detection of \emph{all minor signal changes} requires a thorough analysis of the entire data set. \newline \newline \indent Analysis of this data set via intelligent sampling is conceptually similar to data sets with piecewise constant stretches that have been the primary focus of this paper.
The multiple change point algorithm in Stage 1 must now be able to deal with piecewise linear signals. Once the change points from Stage 1 are localized, a piecewise linear function with a single jump can be fitted at Stage 2 in each local neighborhood to get updated estimates of the change points and corresponding confidence intervals by simulating from an estimated error distribution. Specifically, instead of the BinSeg-Double procedure described in Section \ref{sec:simu_binseg_double} in the first stage, we use a doubling procedure that performs Narrowest-Over-Threshold (denoted as NOT henceforth) detection (introduced in \cite{baranowski2019narrowest}) after each doubling. The NOT procedure functions similarly to BinSeg. At each step, a CUSUM-like expression is maximized over sub-intervals of the data\footnote{To be very specific, the maximization is performed over random intervals contained in the segment.}. A change point is declared if the maximum exceeds a specified threshold value, after which the segment is bisected by the new change point estimate and the process repeats over the two components. Because we are trying to detect only the biggest changes in signal instead of all of them, we can use the value of the threshold tuning parameter to limit the number of change points detected by the algorithm. In the associated \texttt{R} package \texttt{not}, we performed this tuning by varying the \texttt{q.max} parameter of the \texttt{features} function, but we set all other tuning parameters to their default values. The stopping criterion for the first stage was slightly altered from before.\footnote{Another stopping criterion was added: if the number of change points ever decreases, stop and keep the previous results.} \begin{figure} \caption{A few selected estimates (red) and confidence intervals (blue denotes 90\% level, purple for 95\%, and black for 99\%).
To construct the confidence intervals, for each change point estimate we fit two AR time series models, using 75 points on the left and 75 points on the right. } \label{fig:CFI_examples} \end{figure} We are interested in comparing the performance of intelligent sampling with NOT versus applying NOT over the whole data. Thus, over several values of the \texttt{q.max} tuning parameter, we performed estimation 40 times each for intelligent sampling and full data estimation. All computing was performed on a desktop with an Intel Core i7-8700K CPU, with a script that utilized the \texttt{parallel} package in \texttt{R}. We remind the reader that because of the randomized nature of the NOT algorithm, results can vary between the different runs of estimation. \newline \newline \begin{table}[H] \begin{center} \caption{Average running time over 40 iterations.\label{runtime}} \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline \backslashbox{Method~~}{\texttt{q.max}~~} & 50 & 75 & 100 & 125 & 150 & 175 & 200\\ \hline\hline Two Stage & 12.03 & 4.81 & 8.57 & 9.94 & 21.88 & 8.52 & 11.73\\ \hline Full Data & 463.3 & 475.94 & 494.75 & 517.46 & 503.08 & 500.86 & 507.12\\ \hline \end{tabular} \end{center} \begin{center} \caption{Average number of change points detected over 40 iterations.\label{numCP}} \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline \backslashbox{Method~~}{\texttt{q.max}~~} & 50 & 75 & 100 & 125 & 150 & 175 & 200\\ \hline\hline Two Stage & 49.8 & 72.8 & 77.3 & 80.83 & 81.2 & 77.55 & 80.42\\ \hline Full Data & 49.85 & 70.12 & 76.33 & 78.75 & 83.1 & 80.62 & 79.92\\ \hline \end{tabular} \end{center} \end{table} \noindent In terms of the average number of change points detected, intelligent sampling and the full data analysis detected around the same number of change points for every scenario.
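For intuition, a toy version of a narrowest-over-threshold style search can be sketched as follows. This is a Python illustration written for the simpler piecewise constant case; the actual NOT procedure of \cite{baranowski2019narrowest} uses contrasts suited to piecewise linear signals and is available in the \texttt{not} package, so everything below is a stand-in:

```python
import numpy as np

def cusum_stat(z):
    """Max absolute CUSUM statistic over split points of z, and its argmax."""
    n = len(z)
    csum = np.cumsum(z)
    total = csum[-1]
    best, arg = 0.0, None
    for b in range(1, n):
        stat = abs(np.sqrt((n - b) / (n * b)) * csum[b - 1]
                   - np.sqrt(b / (n * (n - b))) * (total - csum[b - 1]))
        if stat > best:
            best, arg = stat, b
    return best, arg

def not_like(y, threshold, n_intervals=100, seed=0):
    """Toy narrowest-over-threshold search for a mean-shift signal."""
    rng = np.random.default_rng(seed)
    found = []
    def recurse(lo, hi):
        if hi - lo < 4:
            return
        # random sub-intervals of [lo, hi)
        ends = np.sort(rng.integers(lo, hi, size=(n_intervals, 2)), axis=1)
        cands = []
        for s, e in ends:
            if e - s < 4:
                continue
            stat, b = cusum_stat(y[s:e])
            if stat > threshold:
                cands.append((e - s, s + b))   # (interval width, change point)
        if not cands:
            return
        _, cp = min(cands)                     # narrowest interval above threshold
        found.append(cp)
        recurse(lo, cp)
        recurse(cp, hi)
    recurse(0, len(y))
    return sorted(found)
```

Raising \texttt{threshold} limits detection to the largest jumps, mirroring the role of the threshold tuning parameter discussed above.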
When \texttt{q.max} is low, both methods detected close to the maximum number of change points allowed by the package function, but as is clear from Table \ref{numCP}, the number of detected change points stabilized at around 80 for both procedures, irrespective of the value of \texttt{q.max}. Table \ref{runtime} gives a snapshot of the time comparison of the full data analysis with intelligent sampling, with the latter generally seen to be between 40 and 60 times faster. \\\\ To see whether these detected change points correspond to the larger change points, we also show the distribution of the estimated jump sizes. \begin{figure} \caption{Distribution of the log (base 10) of the absolute values of jump sizes.} \label{fig:jumpdist} \end{figure} \begin{minipage}{0.48\hsize}\centering \begin{figure} \caption{Distribution of the log (base 10) of the distances between consecutive change point estimates.} \label{fig:stretchdist} \end{figure} \end{minipage} \begin{minipage}{0.04\hsize} ~~ \end{minipage} \begin{minipage}{0.48\hsize}\centering \begin{table}[H] \centering \caption{Proportion of Large Jumps} \begin{tabular}{|c||c|c|} \hline q.max & Two Stage & Full Data\\ \hline\hline 50 & 0.55 & 0.78 \\ \hline 75 & 0.59 & 0.67 \\ \hline 100 & 0.57 & 0.62 \\ \hline 125 & 0.56 & 0.61 \\ \hline 150 & 0.57 & 0.58 \\ \hline 175 & 0.6 & 0.61 \\ \hline 200 & 0.57 & 0.61 \\ \hline \hline \end{tabular}\\ ~\newline \newline Proportion of jumps that are greater than 50. The threshold of 50 was chosen because the data in Figure \ref{fig:dataportion1} seem to oscillate with an amplitude of no more than 50 when far from the change points. \end{table} \end{minipage} \begin{figure} \caption{ \textbf{Left:} Proportion of estimates from intelligent sampling that are close to the estimates from full data estimation, calculated using the expression in (\ref{eq:proportion_expression}).
\textbf{Center: }A comparison between one half of the full data estimates and the other half, using an expression similar to (\ref{eq:proportion_expression}). \textbf{Right: }The ratio of the values in the left graph divided by the values in the center graph.} \label{fig:closeproportions} \end{figure} ~\newline \noindent For each \texttt{q.max} value, the distributions of the logarithm of the jump sizes are similar, with the jump distribution from the intelligent sampling procedure being skewed more towards the lower values. The proportion of detected jumps which exceed 50 (a value that appears to be a reasonable upper bound on the noise amplitude from inspection of Figure \ref{fig:dataportion1}) is also similar between the two estimation methods. It is clear that most of the detected jumps are large. Figure \ref{fig:stretchdist} demonstrates the distributions of distances between consecutive change points, and it is clear from the plots that the vast majority of inter change point distances obtained by intelligent sampling are between 10{,}000 and 100{,}000 points, which shows that our approach is picking up long and persistent changes, exactly what it is designed for. \newline \newline \indent Another way of comparing the change point estimates procured from intelligent sampling is to quantify the proportion of intelligent sampling estimates that are close to some full data estimate. For each fixed \texttt{q.max} value, we have the full data estimates $\{ \hat{\tau}^{(j)}_{i,f}:i=1,\dots,\hat{J}_{j,f} \}$ and the intelligent sampling estimates $\{ \hat{\tau}^{(j)}_{i,s}:i=1,\dots,\hat{J}_{j,s} \}$ for iterations $j=1,\dots, 40$.
We fix a bandwidth parameter $\delta\in \{5,10,20,\dots,320\}$, and calculate the following proportions to gauge how similar the two sets of estimates are: \begin{eqnarray}\label{eq:proportion_expression} \frac{1}{40} \sum_{k=1}^{40} \left[ \frac{1}{\sum_{j=1}^{40}\hat{J}_{j,s}}\sum_{\substack{j=1,\dots,40\\i=1,\dots,\hat{J}_{j,s}}} 1\left(\min_{\ell=1,\dots,\hat{J}_{k,f}}\left| \hat{\tau}_{\ell,f}^{(k)}-\hat{\tau}_{i,s}^{(j)} \right|<\delta\right) \right]. \end{eqnarray} These values can be seen in the left panel of Figure \ref{fig:closeproportions} for a variety of parameter values, and they come fairly close to 60\%. Although this proportion is well under 100\%, this is primarily due to the randomness of performing NOT estimation on this specific data set. Indeed, if we split the full data estimates with the first 20 iterations in one group and the last 20 iterations in the other group, then comparison of the two groups of the full data estimates with an expression similar to (\ref{eq:proportion_expression}) will still result in values not exceeding 60\%. Taking the ratios of the proportions for each \texttt{q.max} and $\delta$ value, we see that for large \texttt{q.max} values the ratios are very close to 1. In other words, the comparison of intelligent sampling estimates to the full data estimates shows no more dissimilarity than comparing the full data estimates across disjoint runs. For drawing downstream conclusions from an analysis of this type, one should next flag the change point locations that are persistent across the different estimation runs, and subject them to a more refined local analysis. We however do not proceed in that direction, primarily because the point of this analysis is somewhat different -- namely, the effectiveness of intelligent sampling compared to full data analysis on real data.
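The proportion in (\ref{eq:proportion_expression}) can be computed directly from the two collections of estimates; an illustrative Python sketch (the helper name and data layout are ours):

```python
import numpy as np

def match_proportion(full_ests, samp_ests, delta):
    """Proportion as in eq. (proportion_expression): for each full-data
    run k, the fraction of all intelligent-sampling estimates lying
    within delta of some estimate from run k, averaged over the runs k.

    full_ests, samp_ests: lists (one entry per run) of lists of
    estimated change point locations."""
    total = sum(len(s) for s in samp_ests)   # sum over j of J-hat_{j,s}
    outer = []
    for full_k in full_ests:
        fk = np.asarray(full_k, dtype=float)
        hits = 0
        for s in samp_ests:
            for t in s:
                # estimate t matches run k if some full-data estimate is close
                if np.min(np.abs(fk - t)) < delta:
                    hits += 1
        outer.append(hits / total)
    return float(np.mean(outer))
```

The same helper, applied to two disjoint groups of full data runs, gives the benchmark comparison used in the center panel of Figure \ref{fig:closeproportions}.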
\newline \newline \indent In conclusion, the above analysis amply demonstrates that intelligent sampling was able to extract a similar number of change points as a full data analysis, including a large proportion of estimates that also show up in the full data analysis. It was able to do this while using far less time, as the running time data demonstrate. \section{Concluding Remarks and Discussion}\label{sec:discussion} This paper introduced sampling methodology that significantly reduces the computational requirements in multi-change point problems, while not compromising on the statistical accuracy of the resulting estimates. It leverages the {\em locality principle}, which is obviously at work in the context of the classical signal-plus-noise model employed throughout this study. Intelligent sampling is devised specifically for detecting major but relatively infrequent changes in a data stream, what one can think of as significant ``regime changes'', and should \emph{not} be relied upon to identify short-lived disturbances or small perturbations to the mean level of a data stream. While the paper has dealt with a one-dimensional data sequence, extensions to problems involving multiple (potentially high-dimensional) data [e.g., sequences produced by cyber-physical systems equipped with a multitude of sensors monitoring physical or man-made phenomena] are of obvious interest. Also, while our theoretical development studies the canonical model with piecewise flat stretches between change points, extensions to allow variations in the mean level between change points, e.g. piecewise polynomial or nonparametric specifications of the mean function, while requiring more careful estimation of mean levels, pose no added conceptual difficulties: indeed, our second data set uses a piecewise linear model.
Finally, the focus in this paper has primarily been on a two-stage procedure, which is easiest to implement in practice and suitable for many applications. Nevertheless, as illustrated in Section \ref{sec:compmethod}, in specific settings involving data sets of length exceeding $10^{10}$ points, a multi-stage procedure may be advantageous. A key technical requirement for intelligent sampling is that the procedure used to obtain the 1st stage estimates needs to exhibit consistency properties, e.g. $(\ref{eq:firstconsistent2})$. The choice of binary segmentation in our exposition, or its wild binary segmentation variant (which modifies BinSeg by computing the cusum statistics on an additional number of random intervals) presented in detail in Section \ref{WBINSEG-SUPP} of the Supplement, is due to their computational attractiveness and the fact that they readily provide consistent estimates of the number of change points and their locations. Yet another variant of binary segmentation that we recently became aware of, named `seeded binary segmentation' \cite{kovacs2020seeded}, shows promise as a first stage procedure for intelligent sampling, and deserves future exploration. We now briefly turn our attention to two other popular multiple change point methods and their potential relevance to the intelligent sampling problem. Two popular methods for models defined as in $(\ref{model})$ are the estimation of multiple structural breakpoints introduced in \cite{bai1998estimating} and PELT as described in \cite{killick2012optimal}. The method described in \cite{bai1998estimating} does give consistent estimates, but only under the much stricter condition that $J$ is a constant and there exist values $\beta_1,\dots,\beta_J\in (0,1)$ such that $\tau_j=[\beta_jN]$ for all $j=1,\dots,J$ and $N$. Further, to run the actual procedure would require the use of dynamic programming, which is computationally expensive ($O(N^2)$ time).
With the PELT procedure, the implementation itself runs in a more manageable $O(N)$ time; however, this works under the very different Bayesian setting where the spacings $\tau_{j+1}-\tau_j$ are iid generated from some distribution. Further, PELT was built upon a procedure described in \cite{yao1984estimation}, which examines another Bayesian model where every point in $\{1,\dots,N\}$ has a probability $p$ of being a change point, and the development did not go into details regarding rates of convergence of the change point estimates. Due to the theoretical and computational restrictions of the multiple structural breakpoints method and the differing framework under which PELT works, we focused our analysis on binary segmentation. We also mention the SMUCE procedure, introduced in \cite{frick2014multiscale}, where lower probability bounds for the events $\mathbb{P}[\hat{J}\neq J]$ and $\mathbb{P}[\underset{j=1,\dots,J}{\max}\underset{i=1,\dots,\hat{J}}{\min}|\hat{\tau}_i-\tau_j|\leq c_N]$, for any sequence $c_N$, were derived. These results can be combined to yield $\mathbb{P}[\hat{J}=J;\quad \underset{j=1,\dots,J}{\max}|\hat{\tau}_j-\tau_j|\leq c_N]\to 1 $ under certain restrictions and for some sequences $c_N $ that are $o(\delta_N)$, and therefore could be used in the first stage of intelligent sampling. SMUCE has the flexibility of working for a broader class of error terms\footnote{Some results apply when the errors are iid from a general exponential family.} but as was stated in \cite{frick2014multiscale}, the procedure involves dynamic programming, which runs in $O(N^2)$ time. This last point is less of an issue for H-SMUCE \cite{pein2016heterogeneous}, a modified version of SMUCE designed for iid Gaussian errors with heterogeneous variances, which can run in as little as $O(N)$ time in some cases.
Overall, SMUCE could be used as the first part of intelligent sampling, and the regimes of $\delta_N$ and restrictions on the subsample size $N_1$ needed for intelligent sampling to be consistent could be fleshed out in a similar manner as in this paper. However, as BinSeg and WBinSeg are somewhat easier to implement computationally, we chose to perform our analysis with them instead. Our simulation results indicate that for non-Gaussian errors, as well as dependent error terms (with or without Gaussian distributions), the deviations of our estimators behave like the $L$-type distributions, even in settings where $J$ grows with $N$. This suggests that our results could be extended to broader classes of error terms, and future work could consider models incorporating errors with dependence structures and/or with non-Gaussian distributions. Extending Theorem \ref{thm:increasingJasymprotics} to these settings would require an in-depth investigation into probability bounds on the argmin of a drifted random walk with non-i.i.d.\ and/or non-Gaussian random components, along the lines of the derivations in Section \ref{sec:supplementpartc} of the Supplement for random walks with Gaussian innovations. We speculate that this work would be more amenable to rigorous analysis when the tail probabilities of the error terms decay exponentially (similar to Gaussian distributions) and when the dependence is local, e.g., $m$-dependence, where each error term is only correlated with the $m$ neighbors to its left and right. In conclusion, any procedure used at stage 1 of intelligent sampling puts restrictions on the model specifications, as consistent second stage estimators cannot be obtained if the first stage procedure is not consistent. Established results for BinSeg, as in \cite{venkatraman1992consistency} and \cite{fryzlewicz2014wild}, consider only the i.i.d.\ Gaussian framework.
Extending the BinSeg based approach to a more flexible class of error terms therefore requires theoretical exploration of BinSeg's properties beyond Gaussian errors, or using alternative methods at stage 1 which do not need the Gaussian error framework, e.g. \cite{bai1998estimating} and \cite{frick2014multiscale}. \section{Supplement Part A (Single Change Point Problem)} \subsection{Problem Setup}\label{sec:singlenonparametric} Instead of proving Theorems \ref{thm:singlerate} and \ref{thmsingledist} directly, we shall consider a more general nonparametric result from which the two theorems will follow as special cases. As before, suppose the time series data is $(x_1,Y_1),...,(x_N,Y_N)$, where $x_i=i/N$ and $Y_i=f(x_i)+\varepsilon_i$ for $i=1,...,N$. We will make the weaker assumptions that \begin{itemize} \item $f$ is a right continuous function on $[0,1]$ with a single left discontinuity at some point $\tau_0\in (0,1)$, with jump size $f(\tau_0+)-f(\tau_0-)=\Delta$; \item there exists a $\beta_f>0$ where $|f(x)-f(y)|\leq \beta_f|x-y|$ whenever $(x-\tau_0)(y-\tau_0)>0$; \item the errors $\varepsilon_i$ are iid $N(0,\sigma^2)$. \end{itemize} The main difference between this model and the model presented in Section \ref{sec:singlemodel} is the looser restriction on the signal $f$: here $f$ could be any Lipschitz continuous function with a single discontinuity, rather than being constrained to the family of piecewise constant functions. \newline \newline \indent We will first remark on some background regarding this model before moving on to proving some results. Estimation procedures for such a dataset can be found in \cite{loader1996change}, where one-sided polynomial fitting was used to obtain an estimate $\hat{\tau}_N$ for $\tau_N:=\lfloor N\tau_0\rfloor/N$. In summary, fix a sequence of bandwidths $h=h_N$, a non-negative integer $p$, and a kernel function $K$ with support on $[-1,1]$.
Next, for all $x_m\in (h,1-h)$, consider the signal estimates \begin{eqnarray} &&\hat{f}_-(x_m):=\pi_1\left(\underset{(a_0,\dots,a_p)\in\mathbb{R}^{p+1}}{\arg\min}\left(\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\big (Y_{m-j-1}-a_0-a_1j-...-a_pj^p\big)^2 \right)\right)\nonumber\\ &&\hat{f}_+(x_m):=\pi_1\left(\underset{(a_0,\dots,a_p)\in\mathbb{R}^{p+1}}{\arg\min}\left(\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\big (Y_{m+j}-a_0-a_1j-...-a_pj^p\big)^2\right)\right), \end{eqnarray} where $\pi_1$ is the projection function such that $\pi_1(a_0,\dots,a_p)=a_0$. The change point estimate is \begin{eqnarray} \hat{\tau}_N:=\underset{x_i\in (h,1-h)}{\arg\max}|\hat{f}_+(x_i)-\hat{f}_-(x_i)|. \end{eqnarray} This estimator is consistent under a few regularity conditions on the kernel $K$ and conditions on how fast $h$ converges to 0. For the sake of brevity we will not state all those conditions here, but we will note that under said conditions, $\mathbb{P}[N(\hat{\tau}_N-\tau_N)=k]\to\mathbb{P}[L(\Delta/\sigma)=k]$ for all $k\in\mathbb{Z}$, which is exactly the asymptotic result for $\hat{\tau}_N$ obtained by least squares in a stump model setting. Finally, for our purposes we propose estimators $\hat{\alpha}$ and $\hat{\beta}$ for $f(\tau_0-)$ and $f(\tau_0+)$, respectively, defining them as \begin{eqnarray} \hat{\alpha}:=\frac{\sum_{j=0}^{Nh} K\left(\frac{j}{Nh}\right)Y_{N\hat{\tau}_N-j-1} }{\sum_{j=0}^{Nh}K\left( \frac{j}{Nh} \right)}\nonumber\\ \hat{\beta}:=\frac{\sum_{j=0}^{Nh} K\left(\frac{j}{Nh}\right)Y_{N\hat{\tau}_N+j} }{\sum_{j=0}^{Nh}K\left( \frac{j}{Nh} \right)}. \end{eqnarray} These two estimators are consistent: \begin{lemma} \label{cor:frateconv} $|\hat{\alpha}-f(\tau_0-)|$ and $|\hat{\beta}-f(\tau_0+)|$ are $O_p(h\vee (Nh)^{-1/2})$. \end{lemma} ~\newline \indent It is possible to perform intelligent sampling in this nonparametric setting as in steps (ISS1)-(ISS4), though with a slight adjustment.
Instead of fitting a stump function at step (ISS2), use one-sided local polynomial fitting with bandwidth $h$ on the first stage subsample to obtain estimates $(\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}_N^{(1)})$ for the parameters $(f(\tau_0-),f(\tau_0+),\tau_N)$. These first stage estimators satisfy the following consistency result: \begin{eqnarray}\label{eq:singleconsistentcond} \mathbb{P}\left[ |\hat{\tau}_N^{(1)}-\tau_N|\leq w(N);\quad |\hat{\alpha}^{(1)}-f(\tau_0-)|\vee|\hat{\beta}^{(1)}-f(\tau_0+)|\leq \rho_N \right]\to 1 \end{eqnarray} for the sequence $w(N)=CN^{-\gamma+\delta}$, where $\delta$ and $C$ can be any positive constants, and some sequence $\rho_N\to 0$ (an explicit sequence can be derived from Lemma \ref{cor:frateconv}). The consistency condition in (\ref{eq:singleconsistentcond}) is sufficient for generalized versions of Theorems \ref{thm:singlerate} and \ref{thmsingledist}: \begin{theorem}\label{thm:singlerategeneral} \begin{eqnarray} \hat{\tau}^{(2)}_N-\tau_N=O_p(N^{-1}) \end{eqnarray} \end{theorem} \begin{theorem}\label{thm:singledistgeneral} Suppose the conditions of Theorem \ref{thm:singlerategeneral} are satisfied, then for all integers $k\in\mathbb{Z}$ we have \begin{eqnarray} \mathbb{P}\left[ \lambda_2\left( \tau_N,\hat{\tau}_N^{(2)} \right)=k \right]\to\mathbb{P}\left[ L_{\Delta/\sigma}=k \right] \end{eqnarray} \end{theorem} These results hold under the general nonparametric setting, and in particular for the stump model from Section \ref{sec:single-CP}. Since consistency condition $(\ref{eq:singleconsistentcond})$ is satisfied if $f$ is a stump function and least squares fitting is used at step (ISS2), as written in Section \ref{sec:singleprocedure}, Theorems \ref{thm:singlerategeneral} and \ref{thm:singledistgeneral} do imply Theorems \ref{thm:singlerate} and \ref{thmsingledist}.
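To make the estimators above concrete, the following is a minimal numerical sketch (in Python, not part of the paper) of the $p=0$ (local constant) case with a parabolic kernel, applied to a toy stump signal. All function names and the toy parameters are our own illustration.

```python
import numpy as np

def one_sided_fits(y, nh, kernel=lambda u: 0.75 * (1.0 - u**2)):
    """One-sided local-constant (p = 0) kernel fits.

    At index m, f_minus averages the nh + 1 points strictly to the left
    (Y_{m-j-1}, j = 0..nh) and f_plus the points to the right (Y_{m+j},
    j = 0..nh), weighted by K(j / (N h)); here nh plays the role of N*h.
    """
    N = len(y)
    w = kernel(np.arange(nh + 1) / nh)        # weights K(j / (N h))
    f_minus = np.full(N, np.nan)
    f_plus = np.full(N, np.nan)
    for m in range(nh + 1, N - nh):
        f_minus[m] = np.average(y[m - 1 - np.arange(nh + 1)], weights=w)
        f_plus[m] = np.average(y[m + np.arange(nh + 1)], weights=w)
    return f_minus, f_plus

def estimate_change_point(y, nh):
    """Change point index = argmax of |f_plus - f_minus|, together with
    the level estimates alpha_hat (left limit) and beta_hat (right limit)."""
    f_minus, f_plus = one_sided_fits(y, nh)
    m = int(np.nanargmax(np.abs(f_plus - f_minus)))
    return m, f_minus[m], f_plus[m]

# Toy data: stump signal with jump Delta = 2 at tau_0 = 0.5.
rng = np.random.default_rng(0)
N = 2000
x = np.arange(1, N + 1) / N
y = 2.0 * (x > 0.5) + rng.normal(0.0, 0.3, N)
m_hat, a_hat, b_hat = estimate_change_point(y, nh=100)
```

For $p\geq 1$ the weighted average would be replaced by a kernel-weighted least squares polynomial fit, and in the intelligent sampling procedure this fit would be run on the first stage subsample only.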
The proof of Theorem \ref{thm:singlerategeneral} will be covered in Section \ref{sec:proofgeneralratesingle}, while the proof of Theorem \ref{thm:singledistgeneral} will be covered in Appendix B, Section \ref{sec:proofsingledistgeneral}. \begin{remark} We note that not only do the consistency results of intelligent sampling for stump models generalize to this nonparametric setting, the computational time analysis also carries over unchanged. Local polynomial fitting on $N$ data points takes $O(N)$ computational time, see e.g. \cite{seifert1994fast}. Therefore the computational time analysis in Section \ref{sec:singlemodel} still holds for this nonparametric case. \end{remark} \begin{remark}\label{rem:nosingleasymgen} It is not possible to derive asymptotic distribution results for $N(\hat{\tau}_N^{(2)}-\tau_N)$. Consider the case where $\tau=0.5$, $N_1=\sqrt{N}$ (or $\gamma=0.5$), and the two subsequences $N=2^{2j}$ or $N=3^{2j}$ for some large integer $j$. In such cases the first stage subsample would choose points that have integer multiples of $1/2^j$ or $1/3^j$ as their x-coordinate. \begin{itemize} \item If $N=2^{2j}$, $\tau_N=\frac{\lfloor 2^{2j}\cdot 0.5\rfloor}{2^{2j}}=0.5$ is an integer multiple of $1/2^j$, and hence $\tau_N$ is an x-coordinate used in the first stage. \item If $N=3^{2j}$, then $\tau_N=\frac{\lfloor 3^{2j}\cdot 0.5\rfloor}{3^{2j}}$, and it can be checked that $\lfloor 3^{2j}\cdot 0.5\rfloor$ is an even integer not divisible by 3. Since the x-coordinate of every first stage data point takes the form $\frac{k}{3^j}=\frac{3^jk}{3^{2j}}$ for some integer $k$, this means $\tau_N$ is not used in the first stage. \end{itemize} Hence, in the former case we cannot ever have $\hat{\tau}^{(2)}_N=\tau_N$, while in the latter case, we have $\tau^{(2)}_N=\tau_N$ and Theorem \ref{thmsingledist} tells us that $\mathbb{P}[\hat{\tau}^{(2)}_N=\tau_N]$ converges to the nonzero value $\mathbb{P}[L(\Delta/\sigma)=0]$ as $j$ increases.
Clearly, we have two subsequences for which $\mathbb{P}[\hat{\tau}^{(2)}_N=\tau_N]$ converges to different values. \end{remark} \indent To further validate the extension to this nonparametric setting, we also ran a set of simulations for when $Y_i=2\sin(4\pi x_i)+2\cdot 1(x_i>0.5)+\varepsilon_i$, where the $\varepsilon_i$ are iid $N(0,1)$. We took 15 values of $N$ between 2500 and $10^6$, chosen evenly on the log scale, and applied intelligent sampling on 1000 replicates for each of these values of $N$. The first stage used roughly $N_1=\sqrt{N}$ points, which were subjected to one-sided local polynomial fitting with a parabolic kernel and bandwidth $h=N_1^{-0.3}$, while the second stage interval had half-width $8/\sqrt{N}$. Figures \ref{fig:singleratenonparametric} and \ref{fig:singledistnonparametric} show results consistent with Theorems \ref{thm:singlerategeneral} and \ref{thm:singledistgeneral}. \begin{figure} \caption{The left graph shows a log-log plot of the quantiles of $|\hat{\tau}^{(2)}_N-\tau_N|$ versus $N$, with the solid black line having a slope of exactly $-1$. Some data points for the 50th quantile do not appear since, for some $N$, the median of $|\hat{\tau}^{(2)}_N-\tau_N|$ was 0. The right graph is a log-log plot of the mean computational time of using all data points (black) and intelligent sampling (red), with the solid black line having a slope of exactly 1 and the solid red line a slope of exactly 0.5. } \label{fig:singleratenonparametric} \end{figure} \begin{figure} \caption{ Distribution of $\lambda_2\left( \tau_N,\hat{\tau}^{(2)}\right)$ values (blue) compared with the distribution of $L$ from Theorem \ref{thmsingledist}. } \label{fig:singledistnonparametric} \end{figure} \subsection{Proof of Lemma \ref{cor:frateconv}} \begin{proof} We will show that for any $\epsilon>0$, there exists a constant $C_0>0$ such that for all sufficiently large $N$, \begin{eqnarray} \mathbb{P}\left[ |\hat{\beta}-f(\tau+)|>C_0(h\vee (Nh)^{-1/2})\right]\leq \epsilon.
\end{eqnarray} We start off by utilizing the rate of convergence of the change point estimator: there is a constant $C_1>0$ such that \begin{equation*} \mathbb{P}\left[|\hat{\tau}_N-\tau|> \frac{C_1}{N}\right]< \frac{\epsilon}{2} \end{equation*} for all sufficiently large $N$. Hence, for any $C>0$ we have \begin{eqnarray} &&\mathbb{P}\Big[ |\hat{\beta}_N-f(\tau+)|>C \Big]\leq\nonumber\\ &&\mathbb{P}\left[ |\hat{\beta}_N-f(\tau+)|>C \text{ and }|\hat{\tau}_N-\tau|\leq \frac{C_1}{N} \right]+\mathbb{P}\left[ |\hat{\tau}_N-\tau|> \frac{C_1}{N} \right]\leq \nonumber\\ &&\mathbb{P}\left[ |\hat{\beta}_N(t)-f(\tau+)|> C\text{ for some }|t-\tau|\leq \frac{C_1}{N}\right]+\frac{\epsilon}{2}\label{p1} \end{eqnarray} where \begin{eqnarray} \hat{\beta}_N(t):=\frac{\sum_{j=0}^{Nh}K\left( \frac{j}{Nh} \right)Y_{Nt+j}}{\sum_{j=0}^{Nh}K\left( \frac{j}{Nh}\right)} \end{eqnarray} Next, we bound the first term above, so consider only the case where $|t-\tau|\leq \frac{C_1}{N}$. By expanding we have \begin{eqnarray} |\hat{\beta}_N(t)-f(\tau+)|&=& \left| \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)[f(t+j/N)+\varepsilon_{Nt+j}]}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}-f(\tau+)\right|\nonumber\\ &\leq& \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)|f(t+j/N)-f(\tau+)|}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+\left| \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\varepsilon_{Nt+j}}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\right|\nonumber\\ &=:& A(t)+|B(t)| \end{eqnarray} First, we derive a bound for $A(t)$.
If $t\geq \tau$ then we have \begin{eqnarray} A(t) &=& \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)|f(t+j/N)-f(\tau+)|}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\nonumber\\ &\leq & \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\beta_f|t+j/N-\tau|}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\nonumber\\ &\leq &\frac{\beta_f\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\left(\frac{C_1+j}{N}\right)}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\nonumber\\ &=& \beta_f h\frac{ \frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\left(\frac{j}{Nh}\right) }{\frac{1}{Nh} \sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+\frac{\beta_f C_1}{N} \end{eqnarray} Note that since $Nh\to\infty$ as $N\to \infty$, we have $\frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\frac{j}{Nh}\to \int_0^1xK(x)\,dx$ (which exists) and $\frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\to \int_0^1K(x)\,dx>0$, as $N\to\infty$, hence we can find a constant $M>0$ such that \begin{equation} A(t)\leq \beta_f M h+\frac{\beta_f C_1}{N} \end{equation} for all sufficiently large $N$. On the other hand, suppose $t<\tau$.
For sufficiently large $N$ we would have $N(\tau-t)\leq C_1<Nh$ and so \begin{eqnarray} A(t) &=& \frac{\sum_{j=0}^{N(\tau-t)-1}K\left(\frac{j}{Nh}\right)|f(t+j/N)-f(\tau+)|}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+ \frac{\sum_{j=N(\tau-t)}^{Nh}K\left(\frac{j}{Nh}\right)|f(t+j/N)-f(\tau+)|}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\nonumber\\ &\leq & \frac{\sum_{j=0}^{N(\tau-t)-1}K\left(\frac{j}{Nh}\right)(\Delta+\beta_f(\tau-t-j/N))}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+ \frac{\sum_{j=N(\tau-t)}^{Nh}K\left(\frac{j}{Nh}\right)\beta_f(t+j/N-\tau)}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\nonumber\\ &\leq & \frac{K^\uparrow N(\tau-t)\Delta }{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+\frac{\beta_f\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\left(\frac{C_1+j}{N}\right)}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}\nonumber\\ &&(K^\uparrow \text{ is any constant that uniformly bounds the function }K\text{ from above on }[0,1])\nonumber\\ &\leq & (K^\uparrow \Delta)\frac{\frac{1}{Nh}C_1}{\frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+\beta_f h\frac{ \frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\left(\frac{j}{Nh}\right) }{\frac{1}{Nh} \sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)}+\frac{\beta_f C_1}{N} \end{eqnarray} which, for sufficiently large $N$, can be bounded by $\frac{M_1}{Nh}+\beta_f Mh+\frac{\beta_f C_1}{N}$ for some constants $M,M_1>0$. Hence, this shows that $A(t)$ itself is $O(h\vee(Nh)^{-1})$ for all $t$ where $|t-\tau|\leq \frac{C_1}{N}$.
\newline \newline Next, we consider the random term \begin{equation} B(t)= \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\varepsilon_{Nt+j}}{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)} \end{equation} which satisfies \begin{eqnarray} \mathbb{E}[B(t)]&=& 0\nonumber\\ \text{var}(B(t))&=& \frac{\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)^2}{\left(\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\right)^2}\nonumber\\ &=& (Nh)^{-1}\frac{\frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)^2}{\left(\frac{1}{Nh}\sum_{j=0}^{Nh}K\left(\frac{j}{Nh}\right)\right)^2}\nonumber\\ &\leq &(Nh)^{-1}2\int_0^1K(x)^2\,dx,\qquad\text{ for all sufficiently large }N. \end{eqnarray} Thus $B(t)=O_p((Nh)^{-1/2})$ by Chebyshev's inequality. \newline \newline Combining these results on $A(t)$ and $B(t)$ derived above, one can find constants $C_2,C_3>0$ and an integer $N_2$ such that for all $N>N_2$ we have \begin{eqnarray} A(t)&\leq & C_2[h\vee(Nh)^{-1}]\nonumber\\ \mathbb{P}[|B(t)|>C_3(h\vee(Nh)^{-1/2})]&\leq& \frac{\epsilon}{2(2C_1+3)} \end{eqnarray} for all $|t-\tau|\leq \frac{C_1}{N}$, to get from (\ref{p1}): \begin{eqnarray} &&\mathbb{P}\Big[ |\hat{\beta}_N-f(\tau+)|>(C_2+C_3)(h\vee(Nh)^{-1/2}) \Big]\leq\nonumber\\ &&\mathbb{P}\left[ |\hat{\beta}_N(t)-f(\tau+)|> (C_2+C_3)(h\vee(Nh)^{-1/2})\text{ for some }|t-\tau|\leq \frac{C_1}{N}\right]+\frac{\epsilon}{2}\leq\nonumber\\ && \mathbb{P}\left[ A(t)+|B(t)|> (C_2+C_3)(h\vee(Nh)^{-1/2})\text{ for some }|t-\tau|\leq \frac{C_1}{N}\right]+\frac{\epsilon}{2}\leq\nonumber\\ && \sum_{t:|t-\tau|\leq C_1/N}\mathbb{P}\left[ A(t)+|B(t)|> (C_2+C_3)(h\vee(Nh)^{-1/2})\right] +\frac{\epsilon}{2}\leq\nonumber\\ && \sum_{t:|t-\tau|\leq C_1/N}\mathbb{P}\left[ |B(t)|> C_3(h\vee(Nh)^{-1/2})\right] +\frac{\epsilon}{2}\leq\nonumber\\ &&\sum_{t:|t-\tau|\leq C_1/N}\frac{\epsilon}{2(2C_1+3)}+\frac{\epsilon}{2}\leq \epsilon \end{eqnarray} for all sufficiently large $N$.
This establishes that $|\hat{\beta}_N-f(\tau+)|$ is $O_p(h\vee(Nh)^{-1/2})$, and the proof for $|\hat{\alpha}_N-f(\tau-)|$ proceeds similarly. \end{proof} \subsubsection{Proof of Theorem \ref{thm:singlerategeneral}}\label{sec:proofgeneralratesingle} The structure of this proof will be similar to the rate of convergence proof found in Lan et al.\ (2007). We first set some notation: let $\tau_N:=\lfloor N\tau\rfloor /N$, and define \begin{equation} \tau^{(2)}_N:=\begin{cases} \tau_N\qquad &\text{if }\tau_N\text{ is not in the first stage subsample }\\ \tau_N-1/N &\text{if }\tau_N\text{ is in the first stage subsample } \end{cases} \end{equation} We will show that $\left(\hat{\tau}^{(2)}_N-\tau^{(2)}_N\right)$ is $O_p(1/N)$, which will also demonstrate the same rate of convergence for $\left(\hat{\tau}^{(2)}_N-\tau_N\right)$. An additional property of $\tau^{(2)}_N$, used later on, is the fact that $\lambda_2\left( \tau^{(2)}_N,\hat{\tau}^{(2)}_N \right)=\lambda_2\left( \tau_N,\hat{\tau}^{(2)}_N \right)$. This will be utilized in the proof of Theorems \ref{thmsingledist} and \ref{thm:multidepend}. \begin{proof} Denote $G_N$ as the joint distribution of $(\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}^{(1)}_N)$. Given any constant $\epsilon>0$, there is a positive constant $C_\epsilon$ such that for all sufficiently large $N$ we have \begin{eqnarray} (\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}^{(1)}_N)&\in& \left[ f(\tau-)- \rho_N,f(\tau-)+ \rho_N \right]\nonumber\\ &\times& \left[ f(\tau+)- \rho_N,f(\tau+)+ \rho_N \right]\times \left[ \tau-C_\epsilon/N^\gamma,\tau+ C_\epsilon/N^\gamma \right] \end{eqnarray} with probability at least $1-\epsilon$. Denote this event as $R_N$.
It follows that for any sequence $\{a_N\}$, \begin{eqnarray} &&\mathbb{P}\Big[ N|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>a_N \Big]\leq \nonumber\\ &&\int_{R_N}\mathbb{P}\Big[N|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>a_N\Big| (\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}^{(1)}_N)=(\alpha,\beta,t) \Big]\,dG_N(\alpha,\beta,t)+\epsilon\leq\nonumber\\ &&\sup_{(\alpha,\beta,t)\in R_N}\mathbb{P}\Big[N|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>a_N\Big| (\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}^{(1)}_N)=(\alpha,\beta,t) \Big]+\epsilon \end{eqnarray} Next, we show that this first term can be made smaller than any given $\epsilon>0$ for a bounded sequence $a_N=O(1)$ and all sufficiently large $N$, by bounding the probability $$\mathbb{P}_{\alpha,\beta,t}\left[N|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>a_N\right]:=\mathbb{P}\left[N|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>a_N\Big| (\hat{\alpha}^{(1)},\hat{\beta}^{(1)},\hat{\tau}^{(1)}_N)=(\alpha,\beta,t)\right]$$ for any given $(\alpha,\beta,t)\in R_N$. \newline \newline Conditional on the first stage estimates equaling $(\alpha,\beta,t)$, we can write $\hat{\tau}^{(2)}_N$ and $\tau^{(2)}_N$ as minimizers: \begin{eqnarray} \hat{\tau}^{(2)}_N&=& \underset{d\in S^{(2)}}{\arg\min}\left(\frac{1}{\lambda_2(S^{(2)}(t))}\sum_{i:i/N\in S^{(2)}(t)} \left(Y_i-\frac{\alpha+\beta}{2}\right)(1(i/N\leq d)-1(i/N\leq \tau))\right) \nonumber\\ &:=& \underset{d\in S^{(2)}}{\arg\min}\;\mathbb{M}_n(d)\nonumber\\ \tau^{(2)}_N&=&\underset{d\in S^{(2)}}{\arg\min}\left(\frac{1}{\lambda_2(S^{(2)}(t))} \sum_{i:i/N\in S^{(2)}(t)}\left(f(i/N)-\frac{\alpha+\beta}{2}\right)(1(i/N\leq d)-1(i/N\leq \tau))\right)\nonumber\\ &:=&\underset{d\in S^{(2)}}{\arg\min}\; M_n(d) \end{eqnarray} Since $\tau^{(2)}_N-C_\epsilon/N^\gamma\leq t \leq \tau^{(2)}_N+C_\epsilon/N^\gamma+2/N$, for $N$ large enough so that $KN^{\delta}/2>C_\epsilon+2$, we have $t-KN^{-\gamma+\delta}<\tau^{(2)}_N-KN^{-\gamma+\delta}/2<\tau^{(2)}_N+KN^{-\gamma+\delta}/2<t+KN^{-\gamma+\delta}$.
This enables us to define the function $a(r)$ in the domain where $8N^{-1+\gamma-\delta}<r<K/2$, such that \begin{eqnarray}\label{aexp} a(r)&:=&\min\{ M_n(d):|d-\tau^{(2)}_N |\geq rN^{-\gamma+\delta}\}\nonumber\\ &=& \min_{|d-\tau^{(2)}|\geq rN^{-\gamma+\delta}}\frac{\sum_{i:i/N\in S^{(2)}}\left(f(i/N)-\frac{\alpha+\beta}{2}\right)(1(i/N\leq d)-1(i/N\leq \tau))}{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]} \end{eqnarray} To make $a(r)$ simpler to work with, we show that for sufficiently large $N$, there exists a constant $A>0$ such that $a(r)\geq Ar$. First, because \begin{eqnarray} S^{(2)}(t)&:=&\left\{ i/N : i\in\mathbb{N},\quad i/N\in [t\pm KN^{-\gamma+\delta}],\quad \lfloor N/N_1\rfloor\text{ does not evenly divide }i \right\}\nonumber\\ &\subset& [t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]\nonumber\\ &\subset & [\tau^{(2)}-2KN^{-\gamma+\delta},\tau^{(2)}+2KN^{-\gamma+\delta}] \end{eqnarray} this implies \begin{eqnarray} |f(i/N)-f(\tau+)|\leq 2\beta_fKN^{-\gamma+\delta}&\qquad &\text{ for all }i/N\in S^{(2)}, i/N>\tau\nonumber\\ |f(i/N)-f(\tau-)|\leq 2\beta_fKN^{-\gamma+\delta}&\qquad &\text{ for all }i/N\in S^{(2)}, i/N\leq\tau\nonumber \end{eqnarray} \noindent Combining this with the fact that $|\alpha- f(\tau-)|$ and $|\beta- f(\tau+)|$ are $o(1)$ implies that, for sufficiently large $N$ and any $i/N\in S^{(2)}(t)$, \begin{eqnarray} f(i/N)-\frac{\alpha+\beta}{2}>\frac{\Delta}{4}\qquad &\text{ if }i/N>\tau\nonumber\\ f(i/N)-\frac{\alpha+\beta}{2}<-\frac{\Delta}{4}\qquad &\text{ if }i/N\leq\tau\nonumber \end{eqnarray} The preceding fact implies that every term in the summand of (\ref{aexp}) is positive, and therefore the minimizing $d$ for (\ref{aexp}) would be either $\tau^{(2)}+rN^{-\gamma+\delta}$ or $\tau^{(2)}-rN^{-\gamma+\delta}$: \begin{eqnarray} a(r)&=&\left(\frac{\sum_{i:i/N\in S^{(2)}(t)}\left(f(i/N)-\frac{\alpha+\beta}{2}\right)(1(i/N\leq \tau^{(2)}+rN^{-\gamma+\delta})-1(i/N\leq \tau))}{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]}\right)\wedge
\nonumber\\ &&\left(\frac{\sum_{i:i/N\in S^{(2)}(t)}\left(f(i/N)-\frac{\alpha+\beta}{2}\right)(1(i/N\leq \tau^{(2)}-rN^{-\gamma+\delta})-1(i/N\leq \tau))}{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]}\right)\nonumber\\ &\geq& \frac{\Delta}{4}\cdot \frac{\lambda_2(\tau^{(2)},\tau^{(2)}+rN^{-\gamma+\delta}]\wedge\lambda_2(\tau^{(2)}-rN^{-\gamma+\delta},\tau^{(2)}]}{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]} \end{eqnarray} It can also be shown that for $N$ large enough (specifically $\lfloor N^{1-\gamma}\rfloor \geq 2$) and any $d_1,d_2\in [t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]$ such that $d_2-d_1 \geq 8/N$, we have \begin{eqnarray} \lambda_2(d_1,d_2]\geq [N(d_2-d_1)-2]-\left[\frac{N(d_2-d_1)}{\lfloor N^{1-\gamma} \rfloor}+1\right]\geq \frac{N(d_2-d_1)}{8}\nonumber \end{eqnarray} In a similar fashion, it can be argued that for all large $N$, $\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]\leq 3KN^{1-\gamma+\delta}$. Since we restricted $r$ to be greater than $8N^{-1+\gamma-\delta}$, this means \begin{eqnarray}\label{eq:armorelinear} a(r)&\geq& \frac{\Delta}{4}\cdot\frac{NrN^{-\gamma+\delta}}{8\cdot 3KN^{1-\gamma+\delta}}\nonumber\\ &\geq & \frac{\Delta}{96K}r \end{eqnarray} Hence, $a(r)$ is bounded below by a linear function of $r$ with zero intercept. \newline \newline \indent Now define $b(r)=(a(r)-M_n(\tau^{(2)}_N))/3=a(r)/3$; then we have the following relation: \begin{eqnarray} \sup_{d\in S^{(2)}}|\mathbb{M}_n(d)-M_n(d)|\leq b(r)\quad\Rightarrow\quad |\hat{\tau}^{(2)}_N-\tau^{(2)}_N|\leq rN^{-\gamma+\delta} \end{eqnarray} To show the above is true, suppose $d\in [t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]$ and $|d-\tau^{(2)}_N|>rN^{-\gamma+\delta}$.
If, in addition, the left expression above holds, then \begin{eqnarray} \mathbb{M}_n(d)\geq M_n(d)-b(r)\geq a(r)-b(r)\quad\Rightarrow\nonumber\\ \mathbb{M}_n(d)-\mathbb{M}_n(\tau^{(2)})\geq a(r)-b(r)-M_n(\tau^{(2)})-b(r)=b(r)>0 \end{eqnarray} Since $\mathbb{M}_n(d)>\mathbb{M}_n(\tau^{(2)}_N)$ and $\hat{\tau}^{(2)}_N$ minimizes $\mathbb{M}_n$ among all points in $S^{(2)}(t)$, this implies $\hat{\tau}^{(2)}_N$ could not equal $d$, showing that $|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|\leq rN^{-\gamma+\delta}$. \newline \newline \indent Next, we bound $\mathbb{P}_{\alpha,\beta,t}\Big[ |\hat{\tau}^{(2)}_N-\tau^{(2)}_N|> rN^{-\gamma+\delta} \Big]$. First, we split it into two parts: \begin{eqnarray} &&\mathbb{P}_{\alpha,\beta,t}\Big[|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>rN^{-\gamma+\delta}\Big]\leq \nonumber\\ &&\mathbb{P}_{\alpha,\beta,t}\left[rN^{-\gamma+\delta}< |\hat{\tau}^{(2)}_N-\tau^{(2)}_N|\leq \eta N^{-\gamma+\delta} \right]+\mathbb{P}_{\alpha,\beta,t}\left[ |\hat{\tau}^{(2)}_N-\tau^{(2)}_N|> \eta N^{-\gamma+\delta} \right]:=\nonumber\\ &&P_N(\alpha,\beta,t)+Q_N(\alpha,\beta,t) \end{eqnarray} where $\eta=K/3$. We first consider the term $P_N(\alpha,\beta,t)$.
Because \begin{eqnarray} rN^{-\gamma+\delta}<|\hat{\tau}^{(2)}_N-\tau^{(2)}_N|\leq \eta N^{-\gamma+\delta}\quad&\Rightarrow&\quad \inf_{\tau^{(2)}_N+rN^{-\gamma+\delta}<d\leq \tau^{(2)}_N+\eta N^{-\gamma+\delta}}\mathbb{M}_n(d)\leq \mathbb{M}_n(\tau^{(2)}_N) \nonumber\\ &\text{or }&\inf_{\tau^{(2)}_N-\eta N^{-\gamma+\delta}<d\leq \tau^{(2)}_N-rN^{-\gamma+\delta}}\mathbb{M}_n(d)\leq \mathbb{M}_n(\tau^{(2)}_N), \end{eqnarray} we can first split $P_N(\alpha,\beta,t)$ into the two terms \begin{eqnarray} P_N(\alpha,\beta,t)&\leq& P_{N,1}(\alpha,\beta,t)+P_{N,2}(\alpha,\beta,t)\nonumber\\ &=:& \mathbb{P}_{\alpha,\beta,t}\left[ \sup_{\tau^{(2)}_N+rN^{-\gamma+\delta}<d\leq \tau^{(2)}_N+\eta N^{-\gamma+\delta}}(\mathbb{M}_n(\tau^{(2)}_N)-\mathbb{M}_n(d))\geq 0 \right]+\nonumber\\ &&\mathbb{P}_{\alpha,\beta,t}\left[ \sup_{\tau^{(2)}_N-\eta N^{-\gamma+\delta}<d\leq \tau^{(2)}_N-rN^{-\gamma+\delta}}(\mathbb{M}_n(\tau^{(2)}_N)-\mathbb{M}_n(d))\geq 0 \right] \end{eqnarray} We first form an upper bound for $P_{N,1}(\alpha,\beta,t)$ for all $(\alpha,\beta,t)\in R_N$.
Note that \begin{eqnarray} &&\mathbb{M}_n(\tau^{(2)}_N)-\mathbb{M}_n(d)\nonumber\\&=& -(\mathbb{M}_n(d)-M_n(d))-M_n(d)\nonumber\\ &=& -\frac{\sum\limits_{i:\;i/N\in S^{(2)}(t)}\left[\left(Y_i-\frac{\alpha+\beta}{2}\right)-\left(f(i/N)-\frac{\alpha+\beta}{2}\right)\right](1(i/N\leq d)-1(i/N\leq \tau))}{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]}-M_n(d)\nonumber\\ &=& -\frac{ \sum\limits_{i:\;i/N\in S^{(2)}(t)\cap (\tau^{(2)},d]}\varepsilon_i }{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]}-\frac{\sum\limits_{i:\;i/N\in S^{(2)}(t)\cap(\tau^{(2)},d]}\left(f(i/N)-\frac{\alpha+\beta}{2}\right)}{\lambda_2[t-KN^{-\gamma+\delta},t+KN^{-\gamma+\delta}]} \end{eqnarray} As previously explained, the $\left(f(i/N)-\frac{\alpha+\beta}{2}\right)$ term in the second summand can be bounded below by $\Delta/4$ for all sufficiently large $N$, and hence this leads to: \begin{eqnarray} \mathbb{M}_n(\tau^{(2)}_N)-\mathbb{M}_n(d)\geq 0\quad\Rightarrow\nonumber\\ -\sum_{i:\;i/N\in S^{(2)}\cap (\tau^{(2)},d]}\varepsilon_i\geq \frac{\Delta}{4}\lambda_2(\tau^{(2)},d] \end{eqnarray} It thus follows that \begin{eqnarray} P_{N,1}(\alpha,\beta,t)\leq \mathbb{P}_{\alpha,\beta,t}\left[\sup_{\substack{\tau^{(2)}_N+rN^{-\gamma+\delta}<d\\\leq \tau^{(2)}_N+\eta N^{-\gamma+\delta}}} \left(\frac{1}{\lambda_2(\tau^{(2)},d]}\right)\left|\sum_{{i:\;i/N\in S^{(2)}(t)\cap (\tau^{(2)},d]}}\varepsilon_i\right|\geq \frac{\Delta}{4}\right] \end{eqnarray} and by the Hajek-Renyi inequality, we get \begin{eqnarray} &&\mathbb{P}_{\alpha,\beta,t}\left[\sup_{\substack{\tau^{(2)}_N+rN^{-\gamma+\delta}<d\\\leq \tau^{(2)}_N+\eta N^{-\gamma+\delta}}} \left(\frac{1}{\lambda_2(\tau^{(2)},d]}\right)\left|\sum_{i:\;i/N\in S^{(2)}(t)\cap (\tau^{(2)},d]}\varepsilon_i\right|\geq \frac{\Delta}{4}\right]\nonumber\\ &\leq& \frac{16}{\Delta^2}\left(\frac{1}{\lambda_2(\tau^{(2)},\tau^{(2)}+rN^{-\gamma+\delta}]}+\sum_{j=\lambda_2(\tau^{(2)},\tau^{(2)}+rN^{-\gamma+\delta}]}^{\lambda_2(\tau^{(2)},\tau^{(2)}+\eta
N^{-\gamma+\delta}]}\frac{1}{j^2}\right)\nonumber\\ &\leq &\frac{32}{\Delta^2}\cdot\frac{1}{\lambda_2(\tau^{(2)},\tau^{(2)}+rN^{-\gamma+\delta}]} \end{eqnarray} We argued earlier that $\lambda_2 (\tau^{(2)},\tau^{(2)}+rN^{-\gamma+\delta}]\geq rN^{1-\gamma+\delta}/8$ for all sufficiently large $N$, thus \begin{equation} P_{N,1}(\alpha,\beta,t)\leq \frac{8B}{rN^{1-\gamma+\delta}} \end{equation} where $B=32/\Delta^2$. From this expression we arrive at $P_{N,1}(\alpha,\beta,t)\leq \epsilon$ (for any $\epsilon>0$) eventually, by setting $r=CN^{-1+\gamma-\delta}$ where $C$ is any constant satisfying $C>8$ and $8B/C\leq \epsilon$. \newline \newline To bound $Q_N(\alpha,\beta,t)$: from (\ref{eq:armorelinear}) we argued that $a(r)$ is eventually greater than a multiple of $r$ when $r> 8N^{-1+\gamma-\delta}$. Since we defined $b(r)=a(r)/3$, we can find some positive constant $B'$ where $b(r)\geq B'r$ when $r> 8N^{-1+\gamma-\delta}$ (and for all large $N$). Since $\eta=K/3> 8N^{-1+\gamma-\delta}$ eventually, this leads to \begin{eqnarray} &&\mathbb{P}_{\alpha,\beta,t}\Big[ |\hat{\tau}^{(2)}_N-\tau^{(2)}_N|>\eta N^{-\gamma+\delta} \Big]\nonumber\\ &\leq& \mathbb{P}_{\alpha,\beta,t}\left[ \sup_{d\in S^{(2)}(t)}|\mathbb{M}_n(d)-M_n(d)|>b(\eta)\right]\nonumber\\ &\leq& \mathbb{P}_{\alpha,\beta,t}\left[ \sup_{d\in S^{(2)}(t)}|\mathbb{M}_n(d)-M_n(d)|>B'\eta\right]\nonumber\\ &=&\mathbb{P}_{\alpha,\beta,t}\left[ \sup_{d\in S^{(2)}(t)}\frac{\left|\sum_{i:\; i/N\in S^{(2)}(t)}{\varepsilon_i}(\mathbbm{1}(i/N\leq d)-\mathbbm{1}(i/N\leq \tau))\right|}{\lambda_2(S^{(2)}(t))}>B'\eta \right] \end{eqnarray} Using Corollary 8.8 from \cite{geer2000empirical}, the latter expression is bounded by $c_1\exp(-c_2\eta^2\lambda_2(S^{(2)}(t)))$ for some positive constants $c_1, c_2$, which converges to 0.
\end{proof} \subsubsection{Proof of Theorem \ref{thm:singledistgeneral}}\label{sec:proofsingledistgeneral} \begin{proof} Let $\{x_1^{(2)},x_2^{(2)},...\}$ be the x-coordinates of the data not used in the first stage, with corresponding response variables $(Y_1^{(2)},Y_2^{(2)},...)$ and error terms $(\varepsilon^{(2)}_1,\varepsilon_2^{(2)},...)$. As a set, $\{x_1^{(2)},x_2^{(2)},...\}$ equals $\{x_1 ,...,x_N\}\setminus\left\{ \frac{\lfloor N/N_1\rfloor}{N},\frac{2\lfloor N/N_1\rfloor}{N},... \right\}$. Note that we need not have $x_{j}^{(2)}=j/N$ for every integer $j$; additionally, we can write $\tau^{(2)}=x_m^{(2)}$ for some integer $m$. Since our estimate will also be one of the $x^{(2)}_i$'s, we can then denote by $\hat{m}$ the integer such that $\hat{\tau}^{(2)}=x^{(2)}_{\hat{m}}$. Note that we have the following relation between $\hat{m}-m$ and the $\lambda_2$ function on intervals: \begin{eqnarray} \hat{m}-m=\begin{cases} \lambda_2(\tau^{(2)},\hat{\tau}^{(2)}]\qquad& \text{ when }\hat{\tau}^{(2)}>\tau\\ -\lambda_2(\hat{\tau}^{(2)},\tau^{(2)}]&\text{ when }\hat{\tau}^{(2)}\leq\tau \end{cases} \end{eqnarray} Hence we can write results on $\lambda_2(\tau^{(2)},\hat{\tau}^{(2)})$ in terms of $\hat{m}-m$. \newline \newline \indent After taking a subset $S^{(2)}$ of $\{x_1^{(2)},x_2^{(2)},...,x_{N-N_1}^{(2)}\}$ (specifically, $S^{(2)}$ consists of those points within $KN^{-\gamma+\delta}$ of the pilot estimate $\hat{\tau}^{(1)}$), we minimize \begin{eqnarray} \hat{\Delta}^{(2)}(t)&:=&\sum_{i:x_i\in S^{(2)}} \left(Y_i-\frac{\hat{\alpha}_N^{(1)}+\hat{\beta}_N^{(1)}}{2}\right)(1(x_i\leq t)-1(x_i\leq \tau))\nonumber\\ &=& \sum_{i:x_i^{(2)}\in S^{(2)}} \left(Y_{i}^{(2)}-\frac{\hat{\alpha}_N^{(1)}+\hat{\beta}_N^{(1)}}{2}\right)(1(x_i^{(2)}\leq t)-1(x_i^{(2)}\leq \tau^{(2)})) \end{eqnarray} over all points $t\in S^{(2)}$ to obtain the estimate for the change point.
Equivalently the domain of $\hat{\Delta}^{(2)}(t)$ can be extended to all $t\in \{x^{(2)}_1,x^{(2)}_2,...\}$, letting $$\hat{\Delta}^{(2)}(t)=\max \left\{ \hat{\Delta}^{(2)}(r):r\in S^{(2)} \right\}+1 \qquad\text{ for }t\notin S^{{(2)}}$$ The argmin of this extension is the argmin of the function restricted to $S^{(2)}$. This extended definition will be used for the next result: \begin{lemma}\label{lem:randomwalkconv} For any fixed positive integer $j_0>0$, \begin{eqnarray} \hat{\Delta}^{(2)}\left(x^{(2)}_{m+j} \right)&=&\frac{j\Delta}{2}+\varepsilon^{(2)}_{m+1}+...+\varepsilon^{(2)}_{m+j}+o_p(1)\qquad\text{ for }1\leq j\leq j_0\nonumber\\ \hat{\Delta}^{(2)}\left(x^{(2)}_{m} \right)&=& 0+o_p(1)\nonumber\\ \hat{\Delta}^{(2)}\left(x^{(2)}_{m-j} \right) &=& \frac{j\Delta}{2}-\varepsilon_m^{(2)}-...-\varepsilon_{m-j+1}^{(2)}+o_p(1)\qquad \text{ for } 1\leq j\leq j_0 \end{eqnarray} \end{lemma} From this lemma it is straightforward to show that the asymptotic distribution of $\lambda_2\left( \tau_N,\hat{\tau}^{(2)}_N \right)$ is the distribution of $L_{\Delta/\sigma}$, the argmin of the random process \begin{eqnarray} X_{\Delta/\sigma}(j)=\begin{cases} \frac{|j|\Delta}{2}-\varepsilon_{-1}^*-...-\varepsilon_{j}^*\qquad &\text{, for }j<0\\ 0&\text{, for }j=0\\ \frac{j\Delta}{2}+\varepsilon_1^*+...+\varepsilon_j^*&\text{, for }j>0 \end{cases} \end{eqnarray} where the $\{ \varepsilon_j^* \}_{j\in\mathbb{Z}}$ are iid $N(0,\sigma^2)$ random variables. For any fixed $\epsilon>0$ and integer $j$, we will show that $\left|\mathbb{P}\left[\hat{m}-m=j\right]- \mathbb{P}\left[L_{\Delta/\sigma}=j\right]\right|\leq \epsilon$ for all sufficiently large $N$. To do this we will first establish three probability bounds. \newline \newline \textbf{First Bound}: First we will show that with high probability we can approximate the random variable $L_{\Delta/\sigma}$, which has support $\mathbb{Z}$, with a random variable $L_{\Delta/\sigma}^{(k)}$, which has finite support $\mathbb{Z}\cap [-k,k]$.
\newline \newline \indent We note that there exists an integer $j_1>|j|$, such that $|L_{\Delta/\sigma}|>j_1$ with probability less than $\epsilon/3$. For any integer $k$ with $k\geq j_1$, define $L_{\Delta/\sigma}^{(k)}:=\underset{|i|\leq k}{\arg\min}\{X_{\Delta/\sigma}(i)\}$. In the case that $|L_{\Delta/\sigma}|\leq k$, we have $L_{\Delta/\sigma}^{(k)}=L_{\Delta/\sigma}$, and using this we can show that $\mathbb{P}[L_{\Delta/\sigma}=j]$ is within $\epsilon/3$ of $\mathbb{P}[L_{\Delta/\sigma}^{(k)}=j]$: \begin{eqnarray}\label{eqrep2} &&\left| \mathbb{P}[L_{\Delta/\sigma}=j]-\mathbb{P}[L_{\Delta/\sigma}^{(k)}=j] \right| \nonumber\\ &=& \left| \mathbb{P}\bigg[L_{\Delta/\sigma}=j,|L_{\Delta/\sigma}|\leq k\bigg]-\mathbb{P}\bigg[L_{\Delta/\sigma}^{(k)}=j,|L_{\Delta/\sigma}|\leq k\bigg]-\mathbb{P}\bigg[L_{\Delta/\sigma}^{(k)}=j,|L_{\Delta/\sigma}|> k\bigg] \right|\nonumber\\ &\leq& \left| \mathbb{P}\bigg[L_{\Delta/\sigma}=j,|L_{\Delta/\sigma}|\leq k\bigg]-\mathbb{P}\bigg[L_{\Delta/\sigma}^{(k)}=j,|L_{\Delta/\sigma}|\leq k\bigg]\right|+\mathbb{P}[|L_{\Delta/\sigma}|>k]\nonumber\\ &=&\left| \mathbb{P}\bigg[L_{\Delta/\sigma}^{(k)}=j,|L_{\Delta/\sigma}|\leq k\bigg]-\mathbb{P}\bigg[L_{\Delta/\sigma}^{(k)}=j,|L_{\Delta/\sigma}|\leq k\bigg]\right|+\mathbb{P}[|L_{\Delta/\sigma}|>k]\nonumber\\ &\leq & 0+\frac{\epsilon}{3} \end{eqnarray} \newline \newline \textbf{Second Bound}: We will show that there exists an integer $j_0>j_1$ such that $|\hat{m}-m|\leq j_0$ with probability greater than $1-\frac{\epsilon}{3}$. From our theorem on the rate of convergence, we can find some integer $j_0>j_1$ such that for all sufficiently large $N$, \begin{eqnarray} \mathbb{P}\left[|\hat{\tau}^{(2)}-\tau|\leq \frac{j_0-2}{N}\right]>1-\frac{\epsilon}{3}. 
\end{eqnarray} \noindent When $|\hat{\tau}^{(2)}-\tau|\leq \frac{j_0-2}{N}$, we have $|\hat{m}-m|\leq j_0$; first we can show \begin{eqnarray}\label{ineqminor} \left|x_{\hat{m}}^{(2)}-x_{m}^{(2)}\right|&\leq & \left|\hat{\tau}^{(2)}-\tau\right|+\left|\tau-\tau^{(2)}\right|\nonumber\\ &\leq & \frac{j_0-2}{N}+\frac{2}{N}\nonumber\\&=&\frac{j_0}{N}, \end{eqnarray} and second, because the $\left\{ x^{(2)}_1,x^{(2)}_2,... \right\}$ grid is just the equally spaced $\left\{ 1/N,2/N,...,N/N \right\}$ with some points taken out, the result of (\ref{ineqminor}) implies $|\hat{m}-m|\leq j_0$. Hence \begin{eqnarray} \mathbb{P}\left[ |\hat{m}-m|\leq j_0\right]&\geq& \mathbb{P}\left[|\hat{\tau}^{(2)}-\tau|\leq \frac{j_0-2}{N}\right]\nonumber\\ &>& 1-\frac{\epsilon}{3} \end{eqnarray} \textbf{Third Bound}: Define $\hat{\tau}^{(2)}_{j_0}$ to be the minimizer of $\hat{\Delta}^{(2)}(\cdot)$ on the set $\left\{x_{m-j_0}^{(2)},x_{m-j_0+1}^{(2)},...,x_{m+j_0}^{(2)}\right\}$, and let $\hat{m}_{j_0}$ be its corresponding index such that $\hat{\tau}^{(2)}_{j_0}=x^{(2)}_{\hat{m}_{j_0}}$. When $|\hat{m}-m|\leq j_0$, we have $\hat{\tau}^{(2)}_{j_0}=\hat{\tau}^{(2)}$ and $\hat{m}=\hat{m}_{j_0}$.
Using this notation we can obtain the following bound: \begin{align}\label{eqrep1} &\left|\mathbb{P}\left[\hat{m}-m=j\right]-\mathbb{P}\left[\hat{m}_{j_0}-m=j\right]\right|\nonumber\\ &= \left|\mathbb{P}\bigg[ \hat{m}-m=j,|\hat{m}-m|\leq j_0 \bigg]-\mathbb{P}\bigg[\hat{m}_{j_0}-m=j,|\hat{m}-m|\leq j_0\bigg]-\mathbb{P}\bigg[\hat{m}_{j_0}-m=j,|\hat{m}-m|> j_0\bigg]\right|\nonumber\\ &= \left|\mathbb{P}\bigg[ \hat{m}_{j_0}-m=j,|\hat{m}-m|\leq j_0 \bigg]-\mathbb{P}\bigg[\hat{m}_{j_0}-m=j,|\hat{m}-m|\leq j_0\bigg]-\mathbb{P}\bigg[\hat{m}_{j_0}-m=j,|\hat{m}-m|> j_0\bigg]\right|\nonumber\\ &\leq\mathbb{P}\Bigg[ |\hat{m}-m|>j_0 \Bigg]\nonumber\\ &\leq \epsilon/3 \end{align} \indent Consider the stochastic process $\hat{\Delta}^{(2)}(x^{(2)}_{m+i})$ for $i\in \{-j_0,...,0,...,j_0\}$. Lemma \ref{lem:randomwalkconv} showed that, as a random variable in $\mathbb{R}^{2j_0+1}$, $\left(\hat{\Delta}^{(2)}(x^{(2)}_{m-j_0}),...,\hat{\Delta}^{(2)}(x^{(2)}_{m+j_0})\right)$ converges in distribution to $$(X_{\Delta/\sigma}(-j_0),...,X_{\Delta/\sigma}(j_0)).$$ Also consider the function $\text{Ind}_{min}:\mathbb{R}^{2j_0+1}\to \mathbb{Z}$, defined as \begin{eqnarray} \text{Ind}_{min}(a_1,...,a_{2j_0+1})=\left(\underset{i=1,...,2j_0+1}{\arg\min}(a_i)\right)-(j_0+1). \end{eqnarray} The function $\text{Ind}_{min}$ is continuous at every point whose minimizing coordinate is unique, an event which has probability one under the limit distribution, and by definition we also have \begin{eqnarray} L^{(j_0)}_{\Delta/\sigma}&=& \text{Ind}_{min}(X_{\Delta/\sigma}(-j_0),...,X_{\Delta/\sigma}(j_0))\nonumber\\ \hat{m}_{j_0}-m&=&\text{Ind}_{min} \left(\hat{\Delta}^{(2)}(x^{(2)}_{m-j_0}),...,\hat{\Delta}^{(2)}(x^{(2)}_{m+j_0})\right). \end{eqnarray} Hence, by the continuous mapping theorem we have $\hat{m}_{j_0}-m$ converging to $L^{(j_0)}_{\Delta/\sigma}$ in distribution. For sufficiently large $N$, the absolute difference between $\mathbb{P}[L^{(j_0)}_{\Delta/\sigma}=j]$ and $\mathbb{P}[\hat{m}_{j_0}-m=j]$ will be less than $\epsilon/3$.
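The finite-window quantity $\text{Ind}_{min}(X_{\Delta/\sigma}(-j_0),\dots,X_{\Delta/\sigma}(j_0))$ is straightforward to simulate. The following sketch (purely illustrative, not part of the proof; it takes $\sigma=1$ and the hypothetical value $\Delta/\sigma=3$) draws Monte Carlo replicates of the truncated argmin:

```python
import numpy as np

def simulate_truncated_argmin(delta_over_sigma, j0, n_sims=2000, seed=0):
    """Monte Carlo draws of Ind_min(X(-j0), ..., X(j0)), where
    X(j) = |j|*Delta/2 plus a two-sided Gaussian random walk (sigma = 1)."""
    rng = np.random.default_rng(seed)
    half = delta_over_sigma / 2.0
    draws = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        # X(1), ..., X(j0): upward drift plus cumulative noise
        pos = half * np.arange(1, j0 + 1) + np.cumsum(rng.standard_normal(j0))
        # X(-1), ..., X(-j0): upward drift minus cumulative noise
        neg = half * np.arange(1, j0 + 1) - np.cumsum(rng.standard_normal(j0))
        x = np.concatenate((neg[::-1], [0.0], pos))   # indices -j0, ..., j0
        draws[s] = int(np.argmin(x)) - j0             # Ind_min, recentred at 0
    return draws

draws = simulate_truncated_argmin(delta_over_sigma=3.0, j0=20)
```

Because $X$ drifts upward at rate $\Delta/2$ on both sides of the origin, the draws concentrate near $0$; this is also the reason the truncation in the First Bound costs only $\epsilon/3$ in probability.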
\newline \newline \indent Combining what we have just shown, for sufficiently large $N$ we have \begin{eqnarray} &&\left|\mathbb{P}\left[\hat{m}-m=j\right]- \mathbb{P}\left[L_{\Delta/\sigma}=j\right]\right|\nonumber\\ &\leq & \big|\mathbb{P}\left[\hat{m}-m=j\right]-\mathbb{P}\left[\hat{m}_{j_0}-m=j\right]\big|+\big|\mathbb{P}\left[\hat{m}_{j_0}-m=j\right]-\mathbb{P}\left[L^{(j_0)}_{\Delta/\sigma}=j\right]\big|\nonumber\\ &&+\big|\mathbb{P}\left[L^{(j_0)}_{\Delta/\sigma}=j\right]-\mathbb{P}[L_{\Delta/\sigma}=j]\big|\nonumber\\ &\leq& \frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}. \end{eqnarray} \end{proof} \noindent \textbf{Proof of Lemma \ref{lem:randomwalkconv}} \begin{proof} First note that with probability increasing to 1, $x_{m-j_0}^{(2)},x^{(2)}_{m-j_0+1},...,x^{(2)}_{m+j_0}$ are all contained inside $S^{(2)}$; we show this fact first. Since $\hat{\tau}^{(1)}-\tau=O_p(N^{-\gamma})$, for any $\epsilon>0$ it is possible to find a constant $C>0$ such that \begin{eqnarray} \mathbb{P}\left[ \hat{\tau}^{(1)}-CN^{-\gamma}\leq \tau\leq \hat{\tau}^{(1)}+CN^{-\gamma}\right]>1-\epsilon \end{eqnarray} for all sufficiently large $N$. Additionally, for all sufficiently large $N$ we have $\frac{4+2j_0}{N}\leq (KN^\delta-C)N^{-\gamma}$, which means that if $|\hat{\tau}^{(1)}-\tau|\leq CN^{-\gamma}$ then \begin{eqnarray} \hat{\tau}^{(1)}-KN^{-\gamma+\delta}&=&\hat{\tau}^{(1)}-(KN^{\delta}-C)N^{-\gamma}-CN^{-\gamma}\nonumber\\ &\leq& \tau-\frac{4+2j_0}{N}\nonumber\\ \hat{\tau}^{(1)}+KN^{-\gamma+\delta}&=&\hat{\tau}^{(1)}+(KN^{\delta}-C)N^{-\gamma}+CN^{-\gamma}\nonumber\\ &\geq&\tau+\frac{4+2j_0}{N} \end{eqnarray} Finally, for all sufficiently large $N$, we have $\lfloor N^{1-\gamma}\rfloor > 2$, i.e. the first stage subsample chooses points which are spaced more than $2/N$ apart.
Hence, \begin{eqnarray} x^{(2)}_{m-j_0}&\geq & x^{(2)}_m-2\left(\frac{j_0+2}{N}\right)=\tau^{(2)}-2\left(\frac{j_0+2}{N}\right)\nonumber\\&\geq &\tau-\frac{2j_0+4}{N}\nonumber\\ x^{(2)}_{m+j_0}&\leq & x^{(2)}_m+2\left(\frac{j_0+2}{N}\right)\nonumber\\ &\leq & \tau+\frac{2j_0+4}{N}, \end{eqnarray} which leads to the conclusion that for all $N$ large enough, we have \begin{eqnarray} 1-\epsilon&< &\mathbb{P}\left[ \hat{\tau}^{(1)}-CN^{-\gamma}\leq \tau\leq \hat{\tau}^{(1)}+CN^{-\gamma}\right]\nonumber\\ &\leq & \mathbb{P}\left[ \hat{\tau}^{(1)}-KN^{-\gamma+\delta}\leq \tau-\frac{2j_0+4}{N}< \tau+\frac{2j_0+4}{N}\leq \hat{\tau}^{(1)}+KN^{-\gamma+\delta}\right]\nonumber\\ &\leq &\mathbb{P}\left[ x^{(2)}_{m-j_0}\geq \hat{\tau}^{(1)}-KN^{-\gamma+\delta}\text{ and } x^{(2)}_{m+j_0}\leq \hat{\tau}^{(1)}+KN^{-\gamma+\delta}\right]\nonumber\\ &=&\mathbb{P}\left[ x^{(2)}_{m-j_0}\text{ and } x^{(2)}_{m+j_0}\text{ are in }S^{(2)}\right] \end{eqnarray} Therefore, consider the case for which $x^{(2)}_{m-j_0}$ through $x^{(2)}_{m+j_0}$ are contained in $S^{(2)}$. 
Under this condition we have $\hat{\Delta}^{(2)}\left(x^{(2)}_m\right)=0$ by a simple calculation, and for any $0<j\leq j_0$, \begin{eqnarray} &&\left|\hat{\Delta}^{(2)}(x^{(2)}_{m+j})-\left(\frac{j\Delta}{2}+\sum_{i=1}^j\epsilon^{(2)}_{m+i}\right)\right|\nonumber\\ &=&\left| \sum_{i:x^{(2)}_i\in S^{(2)}\cap \big(x^{(2)}_m,x^{(2)}_{m+j}\big]}\left( Y_i^{(2)}-\frac{\hat{\alpha}_N^{(1)}+\hat{\beta}_N^{(1)}}{2}\right)-\left(\frac{j\Delta}{2}+\sum_{i=1}^j\epsilon^{(2)}_{m+i}\right)\right|\nonumber\\ &=&\left| \sum_{i=1}^j\left( f(x_{m+i}^{(2)})-\frac{\hat{\alpha}_N^{(1)}+\hat{\beta}_N^{(1)}}{2}\right)-\frac{j\Delta}{2}\right|\nonumber\\ &=& \Bigg| \sum_{i=1}^j\Bigg[\left( f(x_{m+i}^{(2)})-f(\tau+)\right)+\left(\frac{f(\tau+)+f(\tau-)}{2}-\frac{\hat{\alpha}_N^{(1)}+\hat{\beta}_N^{(1)}}{2}\right)+\nonumber\\ &&\left( \frac{f(\tau+)-f(\tau-)}{2}-\frac{\Delta}{2} \right)\Bigg]\Bigg|\nonumber\\ &\leq& \left| \sum_{i=1}^j\left( f(x_{m+i}^{(2)})-f(\tau+)\right)\right| +\frac{j_0}{2}\left(\left| \hat{\alpha}_N^{(1)}-f(\tau-)\right|+\left|\hat{\beta}_N^{(1)}-f(\tau+)\right|\right) \end{eqnarray} For the first term above, an earlier argument shows that for sufficiently large $N$ we have $\lfloor N/N_1\rfloor>2$ and $x^{(2)}_{m+i}\leq \tau+\frac{2i+4}{N}$, hence \begin{eqnarray} \left| \sum_{i=1}^j\left( f(x_{m+i}^{(2)})-f(\tau+)\right)\right|&\leq & \sum_{i=1}^j\beta_f \left| x_{m+i}^{(2)}-\tau\right|\nonumber\\ &\leq& \beta_f\sum_{i=1}^j \frac{2i+4}{N}\nonumber\\ &\leq & \frac{\beta_f}{N}(2j_0^2+4j_0) \end{eqnarray} For the second term, it was shown earlier that both $\left| \hat{\alpha}_N^{(1)}-f(\tau-)\right|$ and $\left|\hat{\beta}_N^{(1)}-f(\tau+)\right|$ are $O_p\left(h\vee \sqrt{\frac{1}{N_1 h}} \right)$, and hence, so is their sum.
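The elementary bound used in the last display, $\sum_{i=1}^{j}(2i+4)=j^2+5j\leq 2j_0^2+4j_0$ for $1\leq j\leq j_0$, can be confirmed by a quick numerical check (illustrative only):

```python
# Verify sum_{i=1}^{j} (2i + 4) = j^2 + 5j <= 2*j0^2 + 4*j0 for all 1 <= j <= j0.
for j0 in range(1, 101):
    for j in range(1, j0 + 1):
        s = sum(2 * i + 4 for i in range(1, j + 1))
        assert s == j * j + 5 * j
        assert s <= 2 * j0 * j0 + 4 * j0
```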
Overall, this shows that for sufficiently large $N$, $\left|\hat{\Delta}^{(2)}(x^{(2)}_{m+j})-\left(\frac{j\Delta}{2}+\sum_{i=1}^j\epsilon^{(2)}_{m+i}\right)\right|$ is (uniformly for all $0\leq j\leq j_0$) bounded above by the random variable \begin{equation}\label{diffbound} \frac{\beta_f}{N}(2j_0^2+4j_0)+\frac{j_0}{2}\left(\left| \hat{\alpha}_N^{(1)}-f(\tau-)\right|+\left|\hat{\beta}_N^{(1)}-f(\tau+)\right|\right), \end{equation} which is $o_p(1)$. Similarly, again for any $0<j\leq j_0$, \begin{eqnarray} &&\left|\hat{\Delta}^{(2)}(x^{(2)}_{m-j})-\left(\frac{j\Delta}{2}-\sum_{i=1}^j\epsilon^{(2)}_{m-i+1}\right)\right|\nonumber\\ &\leq& \left| \sum_{i=0}^{j-1}\left( f(x_{m-i}^{(2)})-f(\tau-)\right)\right| +\frac{j_0}{2}\left(\left| \hat{\alpha}_N^{(1)}-f(\tau-)\right|+\left|\hat{\beta}_N^{(1)}-f(\tau+)\right|\right) \end{eqnarray} which is again uniformly bounded by the expression in (\ref{diffbound}). \newline \newline Therefore, given any $\epsilon>0$, we have \begin{eqnarray} &&\mathbb{P}\left[ \left|\hat{\Delta}^{(2)}(x^{(2)}_{m+j})-\left(\frac{j\Delta}{2}+\sum_{i=1}^j\epsilon^{(2)}_{m+i}\right)\right|\geq \epsilon\right]\nonumber\\ &\leq & \mathbb{P}[x^{(2)}_{m-j_0}\notin S^{(2)}\text{ and/or }x^{(2)}_{m+j_0}\notin S^{(2)}]+\nonumber\\ &&\mathbb{P}\left[ x^{(2)}_{m-j_0},x^{(2)}_{m+j_0}\in S^{(2)}\text{, and }\left|\hat{\Delta}^{(2)}(x^{(2)}_{m+j})-\left(\frac{j\Delta}{2}+\sum_{i=1}^j\epsilon^{(2)}_{m+i}\right)\right|\geq \epsilon\right]\nonumber\\ &\leq &\mathbb{P}[x^{(2)}_{m-j_0}\notin S^{(2)}\text{ and/or }x^{(2)}_{m+j_0}\notin S^{(2)}]+\nonumber\\ &&\mathbb{P}\left[\frac{\beta_f}{N}(2j_0^2+4j_0)+\frac{j_0}{2}\left(\left| \hat{\alpha}_N^{(1)}-f(\tau-)\right|+\left|\hat{\beta}_N^{(1)}-f(\tau+)\right|\right)\geq \epsilon\right]\nonumber\\ &\to & 0+0\qquad\text{ for all }0<j\leq j_0 \end{eqnarray} and similarly, \begin{eqnarray} &&\mathbb{P}\left[ \left|\hat{\Delta}^{(2)}(x^{(2)}_{m})\right|\geq \epsilon\right]\to 0\nonumber\\&& \mathbb{P}\left[
\left|\hat{\Delta}^{(2)}(x^{(2)}_{m-j})-\left(\frac{j\Delta}{2}-\sum_{i=1}^j\epsilon^{(2)}_{m-i+1}\right)\right|\geq \epsilon\right]\to 0 \end{eqnarray} \end{proof} \section{Supplement Part B (Multiple Change Point Problem)}\label{sec:supplementB} Here we will provide proofs for the results presented in Section \ref{sec:multiplechangepointsintro} of the main paper. The model setup will be the same as in that section. \subsection{Detailed Computational Time Analysis}\label{sec:time_order_longer} In this section we will give a more detailed analysis of the computational time that was briefly touched upon in Section \ref{sec:BSWBS1} of the main paper. We will make the same assumption that $\delta_N/N^{1-\Xi}\to C_1$ and $J(N)/N^\Lambda\to C_2$ for some $\Lambda\in[0,\Xi]$ and some positive constants $C_1,C_2$. As a reminder, for intelligent sampling with BinSeg at stage 1, conditions (M6 (BinSeg)) and (M7 (BinSeg)) automatically impose the restriction that $\Lambda\leq\Xi<1/7$. \newline \newline \indent The BinSeg procedure, when applied to a data sequence of $n$ points, takes $O(n\log (n))$ time to compute (see \cite{fryzlewicz2014wild}). Since the first stage of intelligent sampling involves applying BinSeg to $O(N^\gamma)$ points, it therefore takes $O(N^\gamma\log(N))$ time to obtain the first stage estimators. After the BinSeg estimates are obtained, we use the method described in Section \ref{sec:refitting} to upgrade them to ones whose asymptotic distributions are known; this subsequent step involves only least squares fitting on $O(N^\gamma)$ points and therefore requires only $O(N^\gamma)$ computational time, leaving the total at $O(N^\gamma\log(N))$ up to this point. \newline \newline \indent From here on, we use Theorem \ref{thm:reconsistentineq} and construct confidence intervals $\left[ \hat{\tau}^{(1)}_j\pm\left(Q_{\Delta_j,\sigma,\sigma}(\sqrt[\hat{J}]{1-\alpha})+1\right)\lfloor \frac{N}{N_1}\rfloor \right]$ for $j=1,\dots,\hat{J}$.
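The quantile $Q_{\Delta,\sigma,\sigma}(\cdot)$ entering these intervals can be approximated numerically. The sketch below (purely illustrative; all numbers are hypothetical) treats $Q_{\Delta,\sigma,\sigma}(p)$ as the $p$-quantile of the absolute argmin of a two-sided drifting Gaussian random walk -- an interpretive assumption made for illustration, not the paper's formal definition:

```python
import numpy as np

def q_quantile(delta_over_sigma, p, j0=50, n_sims=4000, seed=1):
    """Monte Carlo estimate of Q_{Delta,sigma,sigma}(p), read here as the
    p-quantile of |argmin| of a two-sided Gaussian walk with drift Delta/2
    (an illustrative assumption only)."""
    rng = np.random.default_rng(seed)
    half = delta_over_sigma / 2.0
    abs_argmin = np.empty(n_sims)
    for s in range(n_sims):
        pos = half * np.arange(1, j0 + 1) + np.cumsum(rng.standard_normal(j0))
        neg = half * np.arange(1, j0 + 1) - np.cumsum(rng.standard_normal(j0))
        x = np.concatenate((neg[::-1], [0.0], pos))
        abs_argmin[s] = abs(int(np.argmin(x)) - j0)
    return float(np.quantile(abs_argmin, p))

# Hypothetical sizes, chosen only to illustrate the interval construction:
N, N1, J_hat, alpha = 10**7, 10**4, 100, 0.05
q = q_quantile(3.0, (1 - alpha) ** (1.0 / J_hat))
half_width = (q + 1) * (N // N1)   # half-width of each second-stage interval
```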
Lemma \ref{lem:quantbound} (see Section \ref{sec:proofquantbound}, Supplement Part C) tells us that $ Q_{\Delta_j,\sigma,\sigma}(\sqrt[\hat{J}]{1-\alpha})$ can be bounded by a multiple of $\log(\hat{J})$, and therefore, conditional on the value of $\hat{J}$, the second stage of intelligent sampling will involve least squares fitting on $O\left( \hat{J}N^{1-\gamma}\log(\hat{J}) \right)$ points, taking $O\left( \hat{J}N^{1-\gamma}\log(\hat{J}) \right)$ time to compute. Although the distribution of the $\hat{J}$ obtained from BinSeg is not fully known, a consequence of Theorem \ref{frythm} is that $\mathbb{P}\left[ \hat{J}=J \right]\geq 1-CN^{-1}$ for some constant $C$, and therefore \begin{eqnarray} \mathbb{E}\left[ \hat{J}\log(\hat{J}) \right]\leq J\log(J)+C\frac{N\log(N)}{N}=O(J\log(N)). \end{eqnarray} This leads to the conclusion that the second stage has a computational time that is on average $O\left( JN^{1-\gamma}\log(N) \right)=O\left( N^{1-\gamma+\Lambda}\log(N) \right)$, and the entire procedure takes $O\left( N^{\gamma\vee(1-\gamma+\Lambda)}\log(N) \right)$ time. \newline \newline \indent Using this result we could choose an optimal $\gamma$ and obtain the optimal computational time for each value of $\Xi\in [0,1/7)$ and $\Lambda\in [0,\Xi]$. This can be done by setting the order of time for the first stage ($O(N^\gamma\log(N))$) to equal the order of time for the second stage ($O(N^{1-\gamma+\Lambda}\log(N))$), which would give $\gamma=\frac{1+\Lambda}{2}$. However, Condition (M7 (BinSeg)) prevents this from being done everywhere by placing the restriction that $\gamma>7\Xi$. Thus $\gamma_{min}$ would be the maximum of $\frac{1+\Lambda}{2}$ and $7\Xi+\eta$ ($\eta$ being any tiny positive value), resulting in order $ N^{\gamma_{min}\vee(1-\gamma_{min}+\Lambda)}\log(N)$ computational time. \begin{itemize} \item For $\Xi\in [0,1/14)$, we have $\frac{1+\Lambda}{2} >7\Xi$ and hence $\gamma_{min}=\frac{1+\Lambda}{2}$ and the computational time is order $N^{(1+\Lambda)/2}\log(N)$.
\item For $\Xi\in [1/13,1/7)$, we have $\frac{1+\Lambda}{2} \leq 7\Xi$, hence $\gamma_{min}=7\Xi+\eta$ and the computational time is order $N^{7\Xi+\eta}\log(N)$. \item For $\Xi \in [1/14,1/13)$, $\gamma_{min}$ can be either $\frac{1+\Lambda}{2}$ or $7\Xi+\eta$, whichever is greater, and the computational time would be either $N^{(1+\Lambda)/2}\log(N)$ or $N^{7\Xi+\eta}\log(N)$, respectively. \end{itemize} \begin{table}[H] \caption{\label{table-time-binseg}Table of $\gamma_{min}$ and computational times for various values of $\Xi$. Also shown are their values for extreme values of $\Lambda$ ($\Lambda=0$ and $\Lambda=\Xi$). For $\Xi\geq 1/7$ no value of $\gamma$ will allow us to obtain consistency from Theorem \ref{frythm}.} \end{table} \begin{table}[H] \begin{minipage}{0.99\textwidth} \centering \begin{tabular}{|c|c|c|c|c|} \hline $\Xi$ & $[0,1/14)$ & $[1/14,1/13)$ & [1/13,1/7) & [1/7,1] \\ \hline $\gamma_{min}$ & $\frac{1+\Lambda}{2}$ & $\max\left\{\frac{1+\Lambda}{2},7\Xi+\eta \right\}$ & $7\Xi+\eta$ & N/A\\ \hline Order of Time & $N^{(1+\Lambda)/2}\log(N)$ & $\max\{ N^{(1+\Lambda)/2},N^{7\Xi+\eta} \}\cdot\log(N)$ & $N^{7\Xi+\eta}\log(N)$ & N/A \\ \hline \hline $\gamma_{min}$ ($\Lambda=0$) & $\frac{1}{2}$ & $7\Xi+\eta$ & $7\Xi+\eta$ & N/A\\ \hline Time ($\Lambda=0$) & $N^{1/2}\log(N)$ & $N^{7\Xi+\eta}\log(N)$ & $N^{7\Xi+\eta}\log(N)$ & N/A \\ \hline \hline $\gamma_{min}$ ($\Lambda=\Xi$) & $\frac{1+\Xi}{2}$ & $\frac{1+\Xi}{2}$ & $7\Xi+\eta$ & N/A\\ \hline Time ($\Lambda=\Xi$) & $N^{(1+\Xi)/2}\log(N)$ & $N^{(1+\Xi)/2} \log(N)$ & $N^{7\Xi+\eta}\log(N)$ & N/A \\ \hline \end{tabular} \end{minipage} \end{table} \begin{figure} \caption{Blue triangle encompasses all valid values of $\gamma$ vs $\Xi$ as set by (M7 (BinSeg)).
Pink region, solid red lines, and dotted red lines denote $\gamma_{min}$ for each $\Xi$ ($\gamma_{min}$ can vary for different values of $\Lambda$ even when $\Xi$ is fixed, hence the red region).} \label{fig-binseg-time} \end{figure} It can be seen that the biggest decrease in the order of average computational time occurs for small values of $\Xi$ and $\Lambda$; in fact for $\Xi<1/14$ and $\Lambda=0$ it is $O(\sqrt{N}\log(N))$, which is marginally slower than intelligent sampling on a single change point. For larger values of $\Xi$, the drop from $N\log(N)$ (the order of running BinSeg on the whole data) to $N^{\gamma_{min}}\log(N)$ (intelligent sampling) is less than a square root factor, to the point where, as $\Xi\to 1/7$, both procedures take nearly the same order of time. \begin{remark} Note that when implementing the intelligent sampling strategy, knowledge of $\Xi$ is desirable, but in practice, its value is unknown. If one is willing to impose an upper bound on $\Xi$, intelligent sampling can be implemented with this (conservative) upper bound. \end{remark} \begin{remark} {\bf Multistage Intelligent Sampling in the multiple change-point problem:} We can also consider intelligent sampling with multiple $(>2)$ stages of estimation for model (\ref{model}). An $m$-stage intelligent sampling procedure would entail: \newline {\bf a.} Take a uniform subsample $Y_{\lfloor N/N_1\rfloor},Y_{2\lfloor N/N_1\rfloor},Y_{3\lfloor N/N_1\rfloor},\dots$, where $N_1=KN^{\gamma}$ for some $K>1,\gamma\in (0,1)$, to obtain estimates $\hat{J}$, $\hat{\tau}^{(1)}_1,\dots,\hat{\tau}_{\hat{J}}^{(1)}$, and confidence intervals $\left[ \hat{\tau}^{(1)}_j-w(N),\hat{\tau}_j^{(1)}+w(N) \right]$, $1\leq j\leq\hat{J}$, for the change points. \newline {\bf b.} On each interval $\left[ \hat{\tau}^{(1)}_j-w(N),\hat{\tau}_j^{(1)}+w(N) \right]$ for $1\leq j\leq \hat{J}$ perform the $(m-1)$ stage intelligent sampling procedure for the single change point (as described in Remark \ref{rem:singlemult}).
\end{remark} \subsection{Sample Size Considerations from a Methodological Angle}\label{sec:compmethod} In the asymptotic setting of Section \ref{sec:time_order_longer}, we were concerned with minimizing the \emph{order} of computational time required for locating the change points through intelligent sampling, assuming that certain important quantities were known. The focus in this section is on obtaining explicit expressions for the minimum sample size that the procedure requires to correctly identify the underlying change points. The minimum sample size is the key driver in the computational time formulas provided, albeit not the only one, and it also bears on computer memory usage. In order to develop explicit expressions for the total computational time, one would need to know exactly how fast BinSeg runs versus data size, in terms of its model parameters\footnote{For example, BinSeg will generally terminate in fewer steps on a dataset with fewer change points than on another dataset of the same length but more change points.}, and this is unavailable as an exact expression. Therefore, we look at minimizing the subsample utilized as a proxy, with the added benefit of deriving the least amount of data that must be held in memory at a single time. We have already investigated the optimal order of the first stage subsample, denoted $N_1$, and demonstrated in Section \ref{sec:time_order_longer} that in the best cases the size of both the first and second stage subsamples scales as $\sqrt{N}\log(N)$. Although valid, these previous analyses only apply to an abstract asymptotic setting. In practice, given a data set with fixed (large) $N$, a different approach is needed to determine the optimal number of points to use at each stage. Given the number of change points and their associated SNRs, we show below how to optimally allocate samples in order to minimize the total number used by intelligent sampling.
For simplicity we assume the error terms satisfy condition (M4 (BinSeg)). We start with the two-stage intelligent sampling procedure and assume that in stage 1, roughly $N_1$ points are used for BinSeg and another $N_1$ points for the calibration steps described in Section \ref{sec:refitting}. At stage 2, we work with $\hat{J}$ (which is $\approx J$) intervals. Using Theorem \ref{thm:reconsistentineq}, setting the width of the second stage intervals to be $\left(Q_{\Delta,\sigma,\sigma}\left( 1-\frac{\alpha}{J} \right)+1\right)\left\lfloor\frac{N}{N_1}\right\rfloor$ for a small $\alpha$ will ensure that they cover the true change points with high probability (close to $1-\alpha$ if not greater). Assuming $N_1$ is large enough so that the first stage is accurate (i.e. $\hat{J}=J$ and $\max_j|\hat{\tau}^{(1)}_j-\tau_j|$ is small with high probability), the number of points used in the two stages, combined, is approximately \begin{eqnarray}\label{eq:totalpoints} 2N_1+\frac{2\left(\sum_{j=1}^J \left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\right)N}{N_1}. \end{eqnarray} This presents a trade-off: e.g. if we decrease $N_1$ by a factor of 2, the second term in (\ref{eq:totalpoints}) increases by a factor of 2. To use a minimal number of points in both stages, we need to set $N_1=\sqrt{N\sum_{j=1}^J \left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)}$. In turn this yields a minimum of $4\sqrt{N\sum_{j=1}^J \left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)}$ when plugged into (\ref{eq:totalpoints}). For any given values of $N$, $J$, and SNR, this provides a lower bound on the number of points that intelligent sampling must utilize, and Tables \ref{tab:percentsmallN} and \ref{tab:percentmiddleN} depict some of these lower bounds for selected values of these parameters.
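The optimization of the two-stage budget above is elementary; a small numerical sketch (with a hypothetical value $S$ standing in for $\sum_j (Q_{\Delta_j,\sigma,\sigma}(1-\alpha/J)+1)$) confirms that $N_1=\sqrt{NS}$ minimizes $2N_1+2SN/N_1$:

```python
import math

def two_stage_total(N1, N, S):
    """Total points used by the two stages: 2*N1 + 2*S*N/N1,
    with S standing for sum_j (Q_j + 1)."""
    return 2 * N1 + 2 * S * N / N1

N = 1.5e7
S = 500 * 8.0          # hypothetical: J = 500 change points, Q_j + 1 = 8 each
N1_opt = math.sqrt(N * S)
T_min = two_stage_total(N1_opt, N, S)     # equals 4*sqrt(N*S)
fraction_used = T_min / N                 # share of the full data touched
```

With these (hypothetical) numbers, $T_{min}=4\sqrt{NS}\approx 9.8\times 10^5$, i.e. about $6.5\%$ of the data.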
\begin{minipage}[t]{.45\linewidth} \centering \begin{figure} \caption{For $N=1.5\times 10^7$, the minimal percentage of data that must be used for various values of $J$ and SNR, assuming all jumps have equal SNR and $\alpha=0.01$. } \label{tab:percentsmallN} \end{figure} \end{minipage} \begin{minipage}[t]{0.04\linewidth} ~~ \end{minipage} \begin{minipage}[t]{0.45\linewidth} \begin{figure} \caption{For $N=1.5\times 10^{10}$, the minimal percentage of data that must be used for various values of $J$ and SNR, assuming all jumps have equal SNR and $\alpha=0.01$. } \label{tab:percentmiddleN} \end{figure} \end{minipage} Note that while the fraction of points used for the larger $N$ above is smaller, in absolute terms this still translates to very large subsamples: even just 0.57\% of $1.5\times 10^{10}$ (for SNR 1 and $J=500$ on Table \ref{tab:percentmiddleN}) is a very large dataset of $8.6\times 10^7$, which almost requires server-type computing capabilities. The situation becomes more tenuous for larger values of $N$. This suggests that \emph{a larger number of stages} is in order for sample sizes of $N$ exceeding $10^{10}$. \newline \newline For a three-stage implementation, suppose $\approx N_1$ points are utilized at stage 1, letting us form simultaneous confidence intervals that are (approximately) of the form \begin{eqnarray} \left[ \hat{\tau}^{(1)}_j-\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\frac{N}{N_1},\,\hat{\tau}^{(1)}_j+\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\frac{N}{N_1}\right]\qquad\text{for }j=1,\dots,J\nonumber \end{eqnarray} (assuming $\hat{J}=J$ for simplicity). At stage 2, suppose at the $j$'th confidence interval we subsample roughly $N_2^{(j)}$ points, giving us a subsample which skips approximately every $2\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\frac{N}{N_1N_2^{(j)}}$ points.
Hence, at stage 3 we work with confidence intervals that are (approximately) of the form \begin{eqnarray} \left[ \hat{\tau}^{(2)}_j\pm\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\left( 2\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\frac{N}{N_1N_2^{(j)}} \right)\right] \end{eqnarray} for $j=1,\dots,J$. All three stages combined use approximately \begin{eqnarray}\label{eq:threestageN} 2N_1+\sum_{j=1}^J N_2^{(j)}+\frac{4N}{N_1}\left( \sum_{j=1}^J \frac{\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)^2}{N_2^{(j)}} \right) \end{eqnarray} points. This expression is minimized by setting \begin{eqnarray}N_1=N^{1/3}\left(\sum_{k=1}^J \left(Q_{\Delta_k,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\right)^{2/3}\qquad\text{ and }\nonumber\\ N_2^{(j)}=2N^{1/3}\frac{Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1}{\left(\sum_{k=1}^J \left(Q_{\Delta_k,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\right)^{1/3}} \end{eqnarray} for $j=1,\dots,J$, which in turn gives a minimum of $6N^{1/3}\left(\sum_{j=1}^J \left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)\right)^{2/3}$ for (\ref{eq:threestageN}). A similar analysis of a four-stage procedure yields the optimal subsample allocation \begin{eqnarray}N_1=N^{1/4}\left(\sum_{k=1}^J \left(Q_{\Delta_k,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right) \right)^{3/4}\qquad\text{ and }\nonumber\\ N_2^{(j)}=N_3^{(j)}=2\left(Q_{\Delta_j,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right)N^{1/4}\left(\sum_{k=1}^J \left(Q_{\Delta_k,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right) \right)^{-1/4}\nonumber\\\text{ for }j=1,\dots,J\end{eqnarray} which yields a total of $8N^{1/4}\left( \sum_{k=1}^J \left(Q_{\Delta_k,\sigma,\sigma}\left(1-\frac{\alpha}{J}\right)+1\right) \right)^{3/4}$ points utilized.
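As a numerical check of the three-stage allocation (again with purely hypothetical $Q_j+1$ values), one can verify that the stated $N_1$ and $N_2^{(j)}$ attain the claimed minimum $6N^{1/3}S^{2/3}$, where $S$ denotes $\sum_j (Q_{\Delta_j,\sigma,\sigma}(1-\alpha/J)+1)$:

```python
import numpy as np

def three_stage_total(N1, N2, q, N):
    """Three-stage point budget: 2*N1 + sum_j N2_j + (4N/N1) sum_j q_j^2/N2_j,
    where q_j stands for Q_j + 1."""
    return 2 * N1 + N2.sum() + (4 * N / N1) * np.sum(q ** 2 / N2)

q = np.array([8.0] * 200 + [5.0] * 300)   # hypothetical Q_j + 1 values, J = 500
N = 1.5e10
S = q.sum()
N1_star = N ** (1 / 3) * S ** (2 / 3)               # claimed optimal stage-1 size
N2_star = 2 * N ** (1 / 3) * q / S ** (1 / 3)       # claimed optimal stage-2 sizes
T_star = three_stage_total(N1_star, N2_star, q, N)  # should be 6 N^{1/3} S^{2/3}
```

Each of the three terms in the budget equals $2N^{1/3}S^{2/3}$ at this allocation, which is why the minimum comes out to $6N^{1/3}S^{2/3}$.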
~\newline \begin{minipage}[t]{0.45\linewidth} \centering \begin{figure} \caption{ For $N=1.5\times 10^{10}$, minimal percentage of the data that must be used for a three stage procedure, assuming all jumps have equal SNR and $\alpha=0.01$. } \label{tab:threestagemiddleeN} \end{figure} \end{minipage} \begin{minipage}[t]{0.04\linewidth} ~~ \end{minipage} \begin{minipage}[t]{0.45\linewidth} \begin{figure} \caption{ For $N=1.5\times 10^{12}$, minimal percentage of the data that must be used for a four stage procedure, assuming all jumps have equal SNR and $\alpha=0.01$. } \label{tab:fourstagelargeN} \end{figure} \end{minipage} ~\newline Comparing Figures \ref{tab:percentmiddleN} and \ref{tab:threestagemiddleeN}, we focus on the case of 1000 change points with SNR 1.5: using three stages allows us to decrease the minimal required points by a factor of around five. The easing of the computational burden is greater when looking at the largest amount of data the computer must handle at a time:\setcounter{footnote}{0}\footnote{For intelligent sampling the largest data subset the computer has to work with and hold in memory at any moment, under these optimal allocations and when all change points have equal SNR, is the roughly $N_1$ sized data set used at the initial step for BinSeg. All subsequent steps can work with sub-intervals of data less than $N_1$ in size.} this is $N_1\approx 2.1\times 10^7$ for two stages and $N_1\approx2.5\times 10^6$ for three stages, a decrease by a factor of 9. Meanwhile for a dataset of size 1.5 trillion, using four stages allows us to work with subsamples of size at most $N_1\approx 4.7\times 10^6$ for the more demanding scenario of SNR 1.5 and 2000 change points, a very manageable dataset for most computers. We note here that these optimal allocations are valid assuming that BinSeg is able to pin down $\hat{J}$ and the change points with the initial subsample.
In general, this will be the case provided the SNR is reasonable and the initial subsample is large enough so that the change points are adequately spaced apart. For example, in the context of the above tables, one can ask whether BinSeg will accurately estimate the parameters on a dataset of length 2.4 million with 1000 evenly spaced change points, or on one of length 4.7 million with 2000 evenly spaced change points, under a constant SNR of 1.5 (which is of modest intensity). To this end, we ran a set of simulations and concluded that if there are over 1000 data points between consecutive change points of SNR 1.5, based on these two settings and for appropriate tuning parameters, BinSeg's estimators satisfy $\hat{J}=J$ and $\max|\hat{\tau}_j-\tau_j|\leq 150$ with probability over 99\%. Observe also that the formulas provided depend on the values of the SNRs at the change points and the actual number of change points ($J$). In practice, neither will be known, and the practitioner will not be able to determine the derived allocations exactly. In such situations, conservative lower bounds on the SNRs and a conservative upper bound on $J$ can yield valid (but conservative) sampling allocations when plugged into the expressions derived through this section. Such bounds can be obtained if background information about the problem and the data is available, or via rough pilot estimates on an appropriately sparse subsample. It is also worth pointing out that the intelligent sampling procedure is readily adaptable to a distributed computing environment, which can come into play especially with data sets of length exceeding $10^{12}$ that are stored sequentially across several storage disks.
In such cases, the two sparse subsamples at the first stage, which are of much smaller order, can be transferred over to a central server (a much easier exercise than transferring all the data on to one server), where the first is analyzed via binary segmentation to determine the initial change-points, and the other used for the re-estimation procedure and associated confidence intervals as described in Section \ref{sec:refitting}. As the number of disks on which the data are stored is of a much smaller order than the length of the data, each re-estimated change-point and its associated confidence interval will typically belong to a stretch of data completely contained within one storage disk, and the subsequent resampling and estimation steps can be performed on the local processor, after the information on the confidence interval has been transferred back from the central server. An occasional communication between two machines may be necessary. \subsection{Dependent Errors}\label{sec:dependenterrors} The proposed intelligent sampling procedure for multiple change point problems has so far been presented in the setting of i.i.d. data for a signal-plus-noise model. However, many data sequences (such as time series) usually exhibit temporal correlation. Hence, it is of interest to examine the properties of the procedure under a non-i.i.d. data generating mechanism. While we believe that results akin to Theorem 3 (growing number of change-points in the i.i.d. error regime) should go through under various forms of dependence among errors, a theoretical treatment of this would require a full investigation of the tail properties of random walks under dependent increments and is outside the scope of this paper. An asymptotic distributional result, analogous to Theorem 5, under finitely many change points in the dependent regime is also expected to hold. We present below a proposition for the finite $J$ case under a set of high-level assumptions. 
Suppose that the data sequence is in the form (\ref{model}) and satisfies conditions (M1) to (M3) and (M6). On the error terms, we impose the assumption that they have an autocorrelation structure which dies out at a polynomial rate or faster, and locally around the change points we assume that the joint distributions of the errors are fixed [i.e. invariant to $N$]: \begin{enumerate}[label=(M4-alt\arabic*):] \setlength{\itemindent}{.5in} \item the $\varepsilon_j$'s are each marginally $N(0,\sigma_j^2)$, and there exist positive constants $\sigma_{max}$, $B$ and $\alpha$, independent of $N$, such that $\sigma_j\leq \sigma_{max}$ and $\text{cor}(\varepsilon_j,\varepsilon_{j+k})\leq Bk^{-\alpha}$ for any $j$ and $j+k$ from 1 to $N$. \item there exists a sequence $w_e(N)\to\infty$ and Gaussian sequences $\{\epsilon_{i,j} \}_{i\in\mathbb{Z}}$ (not required to be stationary) for $j=1,\dots,J$, such that for all $j=1,\dots,J$ and all sufficiently large $N$, $\{ \varepsilon_{\tau_j-w_e(N)},\dots,\varepsilon_{\tau_j+w_e(N)} \}$ has the same joint distribution as $\{ \epsilon_{-w_e(N),j},\dots,$ $\epsilon_{w_e(N),j} \}$. \end{enumerate} On a set of data where (M4-alt1) and (M4-alt2) hold (along with assumptions (M1) to (M3), and (M6)), we want steps (ISM1) and (ISM4) to go through with some procedure that ensures \begin{eqnarray}\label{eq:firstconsistent3} \mathbb{P}\left[\hat{J}=J,\,\underset{i=1,...,J}{\max}|\hat{\tau}^{(1)}_i-\tau_i|\leq w(N),\, \max_{i=0,...,J}|\hat{\nu}^{(1)}_i-\nu_i|\leq \rho_N\right]\to 1 \end{eqnarray} for some sequence $w(N)$ with $w(N)\to\infty$ and $w(N)=o(\delta_N)$. Next, we require the final estimators $\hat{\tau}^{(2)}$ to be $O_p(1)$-consistent, in the sense that for each $\epsilon>0$ there exists a constant $C$ such that \begin{eqnarray}\label{eq:secondnotiidconsistent} \mathbb{P}\left[\hat{J}=J ;\quad\max_{i=1,\dots,J}\left|\hat{\tau}^{(2)}_i-\tau_i\right|\leq C \right]\geq 1-\epsilon \end{eqnarray} for all sufficiently large $N$.
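Before stating the limit result, it may help to see a concrete error model compatible with these conditions. The sketch below (purely illustrative; AR(1) is an assumed structure, not implied by the conditions) generates a stationary AR(1) Gaussian sequence, whose geometric correlation decay in particular satisfies the polynomial bound in (M4-alt1):

```python
import numpy as np

def ar1_gaussian(n, rho, seed=3):
    """Stationary AR(1) Gaussian errors with unit marginal variance:
    cor(eps_j, eps_{j+k}) = rho**k, which decays geometrically and hence
    satisfies the polynomial-decay bound of (M4-alt1). Illustrative only."""
    rng = np.random.default_rng(seed)
    e = np.empty(n)
    e[0] = rng.standard_normal()
    for t in range(1, n):
        e[t] = rho * e[t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()
    return e

eps = ar1_gaussian(200_000, rho=0.5)
lag1 = np.corrcoef(eps[:-1], eps[1:])[0, 1]   # empirically close to 0.5
```

A sequence of this kind could serve as the local error process $\{\epsilon_{i,j}\}$ of (M4-alt2) around a given change point.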
\begin{proposition}\label{thm:notiidconsistent} Suppose conditions (M1) to (M3), (M4-alt1), (M4-alt2), and (M6) are satisfied. Next, suppose the first stage estimators satisfy (\ref{eq:firstconsistent3}) and the second stage estimators, constructed as in the i.i.d. setting but with a minor modification\footnote{See the remark right after the proposition.} satisfy (\ref{eq:secondnotiidconsistent}). Define the random walks \begin{eqnarray} Z_{i,j}=\begin{cases} \Delta_j(\epsilon_{1,j}+\dots+\epsilon_{i,j})+i\Delta_j^2/2,\qquad & i>0\\ 0, & i=0\\ \Delta_j(\epsilon_{i+1,j}+\dots+ \epsilon_{0,j})-i\Delta_j^2/2, & i<0 \,, \end{cases} \end{eqnarray} with the $\epsilon_{i,j}$'s from condition (M4-alt2), for $j = 1, 2, \ldots, J$, and denote $\tilde{L}_j:=\underset{i\in \mathbb{Z}}{\arg\min} \,Z_{i,j}$. Then the $|\hat{\tau}^{(2)}_j-\tau_j|$'s, $j=1,\dots,J$, jointly converge in distribution to $(\tilde{L}_1,...,\tilde{L}_J)$: for any integers $k_1,\dots,k_J$, \begin{eqnarray} \mathbb{P}\left[\hat{J}=J,\, |\hat{\tau}^{(2)}_j-\tau_j|=k_j \text{ for }1\leq j\leq J\right]\to\prod_{j=1}^J \mathbb{P}[\tilde{L}_j=k_j] \end{eqnarray} \end{proposition} \begin{remark} As in the i.i.d. case, the intervals $[\hat{\tau}^{(1)}_j-Kw(N),\hat{\tau}^{(1)}_j+Kw(N)]$ for $j=1,\dots,J$ [obtained at step (ISM5)] would each contain only one change point with probability approaching one. We are therefore still justified in fitting stump models on each interval, although with a slight modification. Unlike the i.i.d. error terms scenario, the joint distribution of the error terms at the second stage does change when we condition on the estimators $\hat{\tau}^{(1)}_j$'s, regardless of whether we leave out the $\{Z_j\}$ subsample at the second stage.
We thus make the following modification to (ISM5) in Section \ref{sec:procedure} (and assume this altered procedure is used from here on in this section): \begin{itemize} [label=(ISM5-alt):] \setlength{\itemindent}{.5in} \item take $S^{(2)}\left( \hat{\tau}_i^{(1)} \right)$ as all integers in $[\hat{\tau}_i^{(1)}-Kw(N),\hat{\tau}_i^{(1)}+Kw(N)]$ without any points omitted. \end{itemize} Step (ISM6) then proceeds as before. \end{remark} \begin{remark} We note that the asymptotic distributions given above and in Theorem \ref{thm:multidepend} have the same form, since as before, conditional on (\ref{eq:firstconsistent3}) being true, intelligent sampling simplifies the problem into multiple single change point problems. Using Proposition \ref{thm:notiidconsistent} to construct confidence intervals in a practical setting requires an idea of the joint distribution of the $\{\epsilon_{i,j}\}$'s. In practice, one would have to impose some structural conditions, e.g. assuming an ARMA or ARIMA structure on long stretches of the errors to the left and right of the change points. \end{remark} \begin{remark} The hard work lies in the verification of the high-level conditions (\ref{eq:firstconsistent3}) and (\ref{eq:secondnotiidconsistent}) in different dependent error settings, and, as mentioned previously, is not dealt with in this paper but should constitute an interesting agenda for follow-up research on this problem. We will, however, use Proposition 3 in our simulation and data analysis sections to construct confidence intervals for various dependent error scenarios.
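For instance, if one is willing to model the local error law as a stationary Gaussian AR(1) process (an illustrative assumption; any fitted ARMA structure could be substituted), the law of $\tilde{L}_j$ can be approximated by direct Monte Carlo simulation of the two-sided random walk $Z_{i,j}$, truncated to a window $i\in[-M,M]$. The values of the jump size, AR(1) coefficient, innovation scale, and truncation $M$ below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (not from the paper): jump size Delta_j, AR(1)
# coefficient phi, innovation scale sigma, truncation window M.
delta, phi, sigma = 1.0, 0.5, 1.0
M, n_mc, alpha = 200, 2000, 0.05

def draw_L(rng):
    """One draw of L~ = argmin_i Z_i, with the walk truncated to |i| <= M.

    The walk has positive linear drift |i| * delta^2 / 2 on both sides of 0,
    so its argmin is tight and the truncation bias is negligible for large M.
    """
    # Stationary AR(1) draw of eps_{-M}, ..., eps_0, ..., eps_M
    innov = rng.normal(scale=sigma, size=2 * M)
    eps = np.empty(2 * M + 1)
    eps[0] = rng.normal(scale=sigma / np.sqrt(1 - phi ** 2))
    for t in range(1, 2 * M + 1):
        eps[t] = phi * eps[t - 1] + innov[t - 1]
    eps_neg, eps_pos = eps[:M + 1], eps[M + 1:]   # eps_{-M}..eps_0 and eps_1..eps_M
    i = np.arange(1, M + 1)
    z_pos = delta * np.cumsum(eps_pos) + i * delta ** 2 / 2            # i = 1..M
    z_neg = delta * np.cumsum(eps_neg[::-1])[:M] + i * delta ** 2 / 2  # i = -1..-M
    z = np.concatenate([z_neg[::-1], [0.0], z_pos])                    # index -M..M
    return int(np.argmin(z)) - M

L = np.array([draw_L(rng) for _ in range(n_mc)])
q = int(np.quantile(np.abs(L), 1 - alpha))   # confidence interval: tau_hat_j +/- q
print("estimated 95% half-width q =", q)
```

The half-width $q$ is the empirical $(1-\alpha)$ quantile of $|\tilde{L}_j|$; with a growing number of change points one would use the level $\sqrt[J]{1-\alpha}$ for each coordinate instead, in the spirit of Theorem \ref{thm:increasingJasymprotics}.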
\end{remark} \subsection{Proof of Theorem \ref{thm:multiorder}}\label{sec:multiorderotherproof} As a reminder, this theorem only required the error terms to be independent and zero-mean subgaussian, with subgaussian parameters $\sigma_i$ for $1\leq i\leq N$, which means \begin{eqnarray} \mathbb{E}[\exp(s\varepsilon_i)]\leq \exp\left( \frac{\sigma_i^2 s^2}{2} \right)\nonumber\\ \mathbb{P}[|\varepsilon_i|\geq s]\leq 2\exp\left( -\frac{s^2}{2\sigma_i^2} \right) \end{eqnarray} for all $s\in \mathbb{R}$. Since condition (M4) also states that the subgaussian parameters ($\sigma_i$'s) are bounded above by a constant $\sigma_{\max}$ not dependent on $N$, this means that \begin{eqnarray} \mathbb{E}[\exp(s\varepsilon_i)]\leq \exp\left( \frac{\sigma_{\max}^2 s^2}{2} \right)\nonumber\\ \mathbb{P}[|\varepsilon_i|\geq s]\leq 2\exp\left( -\frac{s^2}{2\sigma_{\max}^2} \right) \end{eqnarray} for all $i=1,\dots,N$ and $s\in \mathbb{R}$. \newline \newline \indent Much of the theory in this proof and the subsequent one deals extensively with the points used at the second stage, where points used at stage 1 are skipped. To make these arguments, we introduce some notation: define the points $\tau^{(2)}_j$'s as \begin{eqnarray} \tau^{(2)}_j:=\begin{cases} \tau_j-1\qquad &\text{if }\tau_j\text{ was a first stage subsample point}\\ \tau_j &\text{otherwise} \end{cases} \end{eqnarray} for $j=1,\dots,J$, and define \begin{eqnarray} S^{(2)}(t):=\left\{ i\in\mathbb{N}:\; |i-t|\leq Kw(N),\quad Y_i\text{ not used in 1st stage subsample} \right\}.
\end{eqnarray} \begin{proof} Define the event \begin{eqnarray}\label{eventAN} \mathcal{R}_N:=\left\{ \hat{J}=J;\quad \max_{i=1,...,J}\left|\hat{\tau}^{(1)}_i-\tau_i\right|\leq w(N);\quad \max_{i=0,...,J}|\hat{\nu}_i^{(1)}-\nu_i|\leq \rho_N \right\}. \end{eqnarray} Let $G_N$ denote the joint distribution of $(\hat{J},\hat{\tau}^{(1)}_1,...,\hat{\tau}^{(1)}_{\hat{J}},\hat{\nu}^{(1)}_0,...,\hat{\nu}^{(1)}_{\hat{J}})$; the domain of $G_N$ would be $\bigcup_{k=0}^{N-1}\mathbb{N}^{k+1}\times \mathbb{R}^{k+1}$. Then, for any sequence $\{a_N\}$, we can bound $\mathbb{P}\left[ \hat{J}=J;\quad \max_{i=1,...,J}\left| \hat{\tau}_i^{(2)}-\tau_i^{(2)}\right|\leq a_N\right]$ from below by: \begin{eqnarray}\label{eq:deriv1} &&\mathbb{P}\left[ \hat{J}=J;\quad \max_{i=1,...,J}\left| \hat{\tau}_i^{(2)}-\tau_i^{(2)}\right|\leq a_N\right]\nonumber\\ &\geq& \sum_{k=0}^{N-1}\underset{ \substack{ 0<t_1<t_2<....<t_k<N\\v_1,...,v_k\in\mathbb{R}} }{\int}\mathbb{P}\left[\hat{J}=J;\, \underset{i=1,...,J}{\max}\left| \hat{\tau}^{(2)}_i-\tau_i^{(2)} \right|\leq a_N \Bigr| \hat{J}=k,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for }j\leq k\right] \nonumber\\ && dG_N(k,t_1,...,t_k,v_0,...,v_k)\nonumber\\ &\geq& \underset{\substack{ |t_i-\tau_i|\leq Kw(N)\\ |v_i-\nu_i|\leq \rho_{N}\\\text{for all }i }}{\int} \mathbb{P}\left[\hat{J}=J;\, \underset{i=1,...,J}{\max}\left| \hat{\tau}^{(2)}_i-\tau_i^{(2)} \right|\leq a_N \Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for }j\leq J \right]\nonumber\\ && dG_N(J,t_1,...,t_J,v_0,...,v_J)\nonumber\\ &\geq & \left(\underset{\substack{ |t_i-\tau_i|\leq Kw(N)\\ |v_i-\nu_i|\leq \rho_{N}\\\text{for all }i }}{\inf} \mathbb{P}\left[\hat{J}=J;\, \underset{i=1,...,J}{\max}\left| \hat{\tau}^{(2)}_i-\tau_i^{(2)} \right|\leq a_N \Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for }j\leq J \right]\right)\cdot\nonumber\\ &&\mathbb{P}\left[ \hat{J}=J;\quad \max_{i=1,...,J}\left|\hat{\tau}^{(1)}_i-\tau_i\right|\leq Kw(N);\quad
\max_{i=0,...,J}|\hat{\nu}_i^{(1)}-\nu_i|\leq \rho_N\right]\nonumber\\ &\geq & \left(\underset{\substack{ |t_i-\tau_i|\leq Kw(N)\\ |v_i-\nu_i|\leq \rho_{N}\\\text{for all }i }}{\inf} \mathbb{P}\left[ \underset{i=1,...,J}{\max}\left| \hat{\tau}^{(2)}_i-\tau_i^{(2)} \right|\leq a_N \Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for }j\leq J \right]\right)-\nonumber\\ &&\qquad\quad\mathbb{P}[\mathcal{R}_N\text{ is false}] \end{eqnarray} We wish to show that for all $\varepsilon>0$, there exists a sequence $a_N=O(\log(J(N)))$ such that $$\mathbb{P}\left[ \hat{J}=J;\quad \max_{i=1,...,J}\left| \hat{\tau}_i^{(2)}-\tau_i^{(2)}\right|\leq a_N\right]>1-\varepsilon$$ for all large $N$. It is sufficient to show this is satisfied by the second-to-last line of (\ref{eq:deriv1}), as $\mathcal{R}_N$ is true with probability increasing to 1. Henceforth, we will work with the probability $$\mathbb{P}\left[\underset{i=1,...,J}{\max}\left| \hat{\tau}^{(2)}_i-\tau_i^{(2)} \right|\leq a_N \Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for }j\leq J \right]\nonumber $$ and, in the domain $|t_i-\tau_i|\leq Kw(N)$ and $|v_i-\nu_i|\leq \rho_{N}$ for all $i$, we show that it is greater than $1-\varepsilon$ for all sufficiently large $N$ and $a_N=C_1\log J+C_2$ for some $C_1,C_2>0$. In the remainder of the proof, assume that all $t_i$'s and $v_i$'s fall within this domain. \newline \newline \indent For sufficiently large $N$, we have $Kw(N) \leq \delta_N/4$, and therefore no two of the second stage intervals ($[ t_i-Kw(N),$ $t_i+Kw(N)]$ for $i=1,...,J$) intersect.
Because each $\hat{\tau}^{(2)}_j$ is a function of all $Y_i$'s in the disjoint index sets $S^{(2)}(t_j)\subset [ t_j-Kw(N),$ $t_j+Kw(N)]$ and the two level estimates $\hat{\nu}^{(1)}_{j-1}$ and $\hat{\nu}^{(1)}_j$, conditional independence holds: \begin{eqnarray} &&\mathbb{P}\left[\quad \max_{i=1,...,J}\left| \hat{\tau}_i^{(2)}-\tau_i^{(2)}\right| \leq a_N\Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for }j\leq J \right]\nonumber\\ &=& \prod_{i=1}^J \mathbb{P}\left[ \left| \hat{\tau}^{(2)}_i-\tau_i^{(2)} \right|\leq a_N \Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for all }j \right] \nonumber \end{eqnarray} To show the above product is eventually greater than some $1-\varepsilon$, it would suffice to show that, for all $1\leq k\leq J$ and sufficiently large $N$, \begin{eqnarray}\label{mainineq} &&\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left| \hat{\tau}^{(2)}_k-\tau_k^{(2)} \right|>a_N \right]\nonumber\\ &:=&\mathbb{P}\left[ \left| \hat{\tau}^{(2)}_k-\tau_k^{(2)} \right|> a_N \Bigr| \hat{J}=J,\hat{\tau}^{(1)}_j=t_j,\hat{\nu}^{(1)}_j=v_j\text{ for all }j \right]\nonumber\\ &\leq& C_\varepsilon/J \end{eqnarray} for some $C_\varepsilon<-\log(1-\varepsilon)$. \newline \newline \indent For any $k$ between 1 and $J$ inclusive, we can write explicit expressions for $\tau_k^{(2)}$ and $\hat{\tau}_k^{(2)}$.
\begin{eqnarray} \hat{\tau}^{(2)}_k&=&\underset{d\in S^{(2)}(t_k)}{\arg\min}\left(\text{sgn}(v_k-v_{k-1})\sum\limits_{i\in S^{(2)}(t_k)}\left( Y_i-\frac{v_{k-1}+v_k}{2} \right)\left[1(i\leq d)-1\left(i\leq \tau_k^{(2)}\right)\right]\right)\nonumber\\ &:=& \underset{d\in S^{(2)}(t_k)}{\arg\min}\mathbb{M}_k(d) \end{eqnarray} Next, since $t_k\in [\tau_k-w(N),\tau_k+w(N)]$, for $N$ large enough so that $\frac{K-1}{2}w(N)>1$ we have \begin{eqnarray} t_k-Kw(N)\leq \tau_k-(K-1)w(N)< \tau_k^{(2)}-\frac{K-1}{2}w(N)<\nonumber\\ \tau_k^{(2)}+\frac{K-1}{2}w(N)< \tau_k+(K-1)w(N)\leq t_k+Kw(N) \end{eqnarray} That is to say, the set $S^{(2)}(t_k)$ includes the interval $\left[ \tau^{(2)}_k-\frac{K-1}{2}w(N), \tau^{(2)}_k+\frac{K-1}{2}w(N) \right]$, minus the first stage subsample points, regardless of which $t_k$ was used among those permissible in $\mathcal{R}_N$. Therefore, for any even integer $a_N$ such that $1<a_N<\frac{K+1}{2}w(N)$ (which would be satisfied for all large $N$ if $a_N=O(\log(N))=o(w(N))$), \begin{eqnarray} &&\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left| \hat{\tau}^{(2)}_k-\tau^{(2)}_k \right|>a_N \right]\nonumber\\ &\leq &\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left|\lambda_2\left( \tau^{(2)}_k,\hat{\tau}^{(2)}_k \right)\right|>\frac{a_N}{2} \right] \end{eqnarray} for all sufficiently large $N$ such that the first stage subsample samples more sparsely than taking every other point. In order for $\left|\lambda_2\left( \hat{\tau}^{(2)}_k,\tau^{(2)}_k \right)\right|>a_N/2$, there must exist a $d$ such that \begin{eqnarray} \left|\lambda_2(\tau^{(2)}_k,d)\right|>a_N/2\qquad\text{ and }\qquad \mathbb{M}_k(d)\leq \min_{\substack{\ell\in S^{(2)}(t_k)\\ |\lambda_2(\tau_k^{(2)},\ell)|\leq \frac{a_N}{2} }}\mathbb{M}_k(\ell)< \mathbb{M}_k(\tau^{(2)}_k)=0.
\end{eqnarray} Therefore, \begin{eqnarray}\label{eq:condprobsumbound} &&\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left| \hat{\tau}^{(2)}_k-\tau^{(2)}_k \right|>a_N \right]\nonumber\\ &\leq &\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left|\lambda_2\left( \tau^{(2)}_k,\hat{\tau}^{(2)}_k \right)\right|>\frac{a_N}{2} \right]\nonumber\\ &\leq &\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \exists d\in S^{(2)}(t_k)\text{ where }\left|\lambda_2(\tau^{(2)}_k,d)\right|>a_N/2\text{ and } \mathbb{M}_k(d)<0\right]\nonumber\\ &\leq &\sum_{\substack{d\in S^{(2)}(t_k)\\ \left|\lambda_2(\tau^{(2)}_k,d)\right|>a_N/2}} \mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \mathbb{M}_k(d)\leq 0 \right] \end{eqnarray} Each $\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \mathbb{M}_k(d)\leq 0 \right]$ can be bounded as follows by recognizing that each $\mathbb{M}_k(d)$ has a subgaussian distribution: for every $d\in S^{(2)}(t_k)$ and sufficiently large $N$ so that $\rho_N\leq \underline{\Delta}/2$ (and hence $\text{sgn}(v_k-v_{k-1})=\text{sgn}(\nu_k-\nu_{k-1})$ for all $k$) \begin{eqnarray} &&\mathbb{M}_k(d)\nonumber\\ &=& \text{sgn}(v_k-v_{k-1})\sum\limits_{i\in S^{(2)}(t_k)}\left( Y_i-\frac{v_{k-1}+v_k}{2} \right)\left[1(i\leq d)-1\left(i\leq \tau_k^{(2)}\right)\right]\nonumber\\ &=&\begin{cases} \text{sgn}(\nu_k-\nu_{k-1})\left(\lambda_2( \tau^{(2)}_k,d )\left( \nu_{k}-\frac{v_{k-1}+v_k}{2} \right)+\underset{\substack{\tau^{(2)}_k<\ell\leq d\\ \ell\in S^{(2)}(t_k)}}{\sum}\varepsilon_\ell\right) \quad &\text{for }d>\tau_k^{(2)}\\ 0 &\text{for }d=\tau_k^{(2)}\\ \text{sgn}(\nu_k-\nu_{k-1})\left(\lambda_2( \tau^{(2)}_k,d )\left( \nu_{k-1}-\frac{v_{k-1}+v_k}{2} \right)-\underset{\substack{d<\ell\leq\tau^{(2)}_k\\ \ell\in S^{(2)}(t_k)}}{\sum}\varepsilon_\ell\right) \quad &\text{for }d<\tau_k^{(2)} \end{cases}\nonumber\\ &=&\begin{cases} \left|\lambda_2( \tau^{(2)}_k,d )\right|\left( \left|\frac{\nu_{k}-\nu_{k-1}}{2}\right|+\text{sgn}(\nu_k-\nu_{k-1})\hat{D}_k
\right)+\text{sgn}(\nu_k-\nu_{k-1})\underset{\substack{\tau^{(2)}_k<\ell\leq d\\ \ell\in S^{(2)}(t_k)}}{\sum}\varepsilon_\ell \quad &\text{for }d>\tau_k^{(2)}\\ 0 &\text{for }d=\tau_k^{(2)}\\ \left|\lambda_2( \tau^{(2)}_k,d )\right|\left( \left|\frac{\nu_{k}-\nu_{k-1}}{2}\right|-\text{sgn}(\nu_k-\nu_{k-1})\hat{D}_k \right)-\text{sgn}(\nu_k-\nu_{k-1})\underset{\substack{d<\ell\leq\tau^{(2)}_k\\ \ell\in S^{(2)}(t_k)}}{\sum}\varepsilon_\ell \quad &\text{for }d<\tau_k^{(2)} \end{cases}\nonumber\\ \end{eqnarray} where \begin{eqnarray} \hat{D}_k:=\frac{(\nu_k-v_k)+(\nu_{k-1}-v_{k-1})}{2} \end{eqnarray} The signal estimates deviate by at most $\rho_N\to 0$ from the true signal values, and hence for all sufficiently large $N$, $\left|\hat{D}_k\right|\leq \rho_N\leq\frac{\underline{\Delta}}{4}$ (where $\underline{\Delta}$ bounds the minimum absolute jump from below). In this case \begin{eqnarray} \left|\frac{\nu_{k}-\nu_{k-1}}{2}+\hat{D}_k \right|\geq \frac{\underline{\Delta}}{4}\quad\text{ and }\quad\left| \frac{\nu_{k}-\nu_{k-1}}{2}-\hat{D}_k \right|\geq\frac{\underline{\Delta}}{4}. \end{eqnarray} Therefore the means of the $\mathbb{M}_k(d)$'s satisfy $$\mathbb{E}[\mathbb{M}_k(d)] = \left|\lambda_2( \tau^{(2)}_k,d )\right|\left( \left|\frac{\nu_{k}-\nu_{k-1}}{2}\right|\pm\text{sgn}(\nu_k-\nu_{k-1})\hat{D}_k \right)\geq \left|\lambda_2( \tau^{(2)}_k,d )\right|\frac{\underline{\Delta}}{4}.$$ At the same time, $\mathbb{M}_k(d)-\mathbb{E}[\mathbb{M}_k(d)]$ is a zero-mean subgaussian random variable with variance parameter $\sigma_{\max}^2\left|\lambda_2( \tau^{(2)}_k,d )\right|$, hence \begin{eqnarray} &&\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \mathbb{M}_k(d)\leq 0 \right]\nonumber\\ &=&\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \mathbb{M}_k(d)-\mathbb{E}[\mathbb{M}_k(d)]\leq -\mathbb{E}[\mathbb{M}_k(d)] \right]\nonumber\\ &\leq &\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \mathbb{M}_k(d)-\mathbb{E}[\mathbb{M}_k(d)]\leq -\left|\lambda_2( \tau^{(2)}_k,d )\right|\frac{\underline{\Delta}}{4} \right]\nonumber\\ &\leq& \frac{1}{2}\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2}\left|\lambda_2(\tau^{(2)}_k,d)\right| \right) \end{eqnarray} Going back to (\ref{eq:condprobsumbound}), this gives: \begin{eqnarray} &&\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left| \hat{\tau}^{(2)}_k-\tau^{(2)}_k \right|>a_N \right]\nonumber\\ &\leq &\sum_{\substack{d\in S^{(2)}(t_k)\\ \left|\lambda_2(\tau^{(2)}_k,d)\right|>a_N/2}} \mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \mathbb{M}_k(d)\leq 0 \right]\nonumber\\ &\leq &\frac{1}{2}\sum_{\substack{d\in S^{(2)}(t_k)\\ \left|\lambda_2(\tau^{(2)}_k,d)\right|>a_N/2}}\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2}\left|\lambda_2(\tau^{(2)}_k,d)\right| \right)\nonumber\\ &\leq &\frac{1}{2}\left[ \sum_{m=\frac{a_N}{2}+1}^\infty\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2}m \right) +\sum_{m=-\infty}^{-\frac{a_N}{2}-1}\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2}|m| \right) \right]\nonumber\\ &=&\left(1-\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2} \right)\right)^{-1}\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2}\left(\frac{a_N}{2}+1\right) \right) \end{eqnarray} Therefore in order to bound $\mathbb{P}_{J,\boldsymbol{v},\boldsymbol{t}}\left[ \left| \hat{\tau}^{(2)}_k-\tau^{(2)}_k \right|>a_N \right]$ by $C_\varepsilon/J$ as we stated back in (\ref{mainineq}), a legitimate choice of $a_N$ which will satisfy this bound can be found by solving \begin{eqnarray} \frac{\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2} \right)}{1-\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2} \right)}\exp\left( -\frac{\underline{\Delta}^2}{32\sigma_{\max}^2}\cdot\frac{a_N}{2} \right)=\frac{C_\varepsilon}{J}. \end{eqnarray} The solution to this will be of the form $a_N=C_1\log(J)+C_2$, where $C_1$ and $C_2$ are constants not dependent on $N$, and because $\log(J)=O(\log(N))$, this is a valid solution. \end{proof} \subsection{Proof sketch of Theorem \ref{thm:increasingJasymprotics}}\label{sec:increasingJasymproticsproofshort} \label{proof-sketch-Theorem-5} \begin{proof} Let $\mathcal{R}_N$ be the event \begin{eqnarray} \mathcal{R}_N:= \left\{ \hat{J}=J;\,\max_{j=1,\dots,J}\left| \hat{\tau}^{(1)}_j-\tau_j \right|\leq w(N);\, \max_{j=0,\dots,J}\left| \hat{\nu}^{(1)}_j-\nu_j \right|\leq \rho_N \right\} \end{eqnarray} Define $G_N(\cdot)$ as the joint distribution of the first stage estimates $(\hat{J},\boldsymbol{\hat{\tau}^{(1)}},\boldsymbol{\hat{\nu}^{(1)}})=(\hat{J},\hat{\tau}^{(1)}_1,\dots,\hat{\tau}^{(1)}_{\hat{J}},\hat{\nu}^{(1)}_0,\dots,\hat{\nu}^{(1)}_{\hat{J}})$.
Because $\mathbb{P}[\mathcal{R}_N]\to 1$, it is possible to show that the difference between \begin{eqnarray}\label{eq:condinte2short} \mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right| \leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\text{ for }j=1,\dots,J \right] \end{eqnarray} and $\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})$ equals the expression \begin{eqnarray} &&\underset{ (k,\boldsymbol{t},\boldsymbol{v})\in\mathcal{R}_N }{\int}\Bigg(\prod_{j=1}^J\mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Big| \hat{J}=k;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ && -\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg) dG_N(k,\boldsymbol{t},\boldsymbol{v})+o(1) \end{eqnarray} It is therefore sufficient to show that the difference inside the integral is, for all $(k,\boldsymbol{t},\boldsymbol{v})\in\mathcal{R}_N$, uniformly bounded in absolute value by an $o(1)$ term. Henceforth, consider only such admissible $k$'s, $\boldsymbol{t}$'s, $\boldsymbol{v}$'s (which, of course, restricts $k$ to $J$), and additionally assume that $N$ is large enough so that $\rho_N\leq \underline{\Delta}/8$ and the distance between consecutive points in $S^{(1)}$ is more than 2 (i.e. $\min\limits_{i,j\in S^{(1)},i\neq j}|i-j|>2$). We will proceed to show (\ref{eq:increasingJasymprotics1}) by obtaining an upper bound for the following absolute difference, for every $j=1,\dots, J$: \begin{eqnarray} \Bigg| \mathbb{P}\left[ \left|\hat{\tau}^{(2)}_j-\tau_j\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Big| \hat{J}=J;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\-P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg| \,.
\end{eqnarray} This upper bound will be derived in several components. \newline \newline \noindent \textbf{First Component:} A more explicit expression for the change point estimates is, for every $j=1,\dots,J$, \begin{eqnarray} \hat{\tau}^{(2)}_j&=&\underset{t\in S^{(2)}(t_j)}{\arg\min}(\text{sgn}(v_j-v_{j-1}))\sum_{i\in S^{(2)}(t_j)}\left( Y_i-\frac{v_j+v_{j-1}}{2} \right)( 1(i\leq t)-1(i\leq \tau_j^{(2)} ))\nonumber\\ &=:& \underset{t\in S^{(2)}(t_j)}{\arg\min} \hat{X}_j^{(2)}(t) \end{eqnarray} $\hat{X}^{(2)}_j(t)$ is a random walk over the set $t\in S^{(2)}(t_j)$, which is a set of integers skipping over the points of $S^{(1)}$. To express this as a random process over a set of integers without skipped points (a requirement to apply some probability bounds), define the one-to-one function $\pi_{2,j}(s):=\lambda_2(\tau_j^{(2)},s)$ for $s \in \{1,\dots,N\}-S^{(1)}$. This allows us to consider the random walk $\hat{X}^{(2)}_j( \pi_{2,j}^{-1}(t) )$ over the set $t\in \pi_{2,j}(S^{(2)}(t_j))$. This is a set of integers containing no skipped points, and furthermore it can be shown that \begin{eqnarray}\label{eq:setcontain} \left[ \pm Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\subset \pi_{2,j}(S^{(2)}(t_j)) \end{eqnarray} (this can be done by showing $[\pm ((K-1)/2)w(N)]\subset \pi_{2,j}(S^{(2)}(t_j))$ and using Lemma \ref{lem:quantbound} to argue that $Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})=O(\log(N))=o(w(N))$). \newline \newline From here, $\hat{X}^{(2)}_j(\pi_{2,j}^{-1}(t))$ is a random walk with non-symmetrical linear drifts on either side of $ t=0 $.
We wish to approximate $\hat{X}^{(2)}_j(\pi_{2,j}^{-1}(t))$ with a random walk with symmetrical drift, so consider $X'_{|\Delta_j|-2|\hat{D}_j|}(t)$, defined as \begin{eqnarray} X'_{|\Delta_j|-2|\hat{D}_j|}(t):=\begin{cases} t\frac{|\Delta_j|-2|\hat{D}_j|}{2}+\text{sgn}(\Delta_j)\sum\limits_{i=1}^{t}\varepsilon_{ \pi_2^{-1}\left(\pi_2(\tau_j^{(2)})+i \right) }\qquad & \text{for }t>0\\ 0 &t=0\\ |t|\frac{|\Delta_j|-2|\hat{D}_j|}{2}-\text{sgn}(\Delta_j)\sum\limits_{i=t}^{-1}\varepsilon_{\pi_{2}^{-1}\left(\pi_2(\tau_j^{(2)})+i+1\right)}\qquad & \text{for }t<0 \end{cases}. \end{eqnarray} where \begin{eqnarray} \hat{D}_j=\frac{\nu_j-v_j}{2}+\frac{\nu_{j-1}-v_{j-1}}{2}. \end{eqnarray} This random walk is very close to $\hat{X}^{(2)}_j(\pi_{2,j}^{-1}(t))$, since the latter equals either $X'_{|\Delta_j|-2|\hat{D}_j|}(t)+2|\hat{D}_j t|1(t>0)$ or $X'_{|\Delta_j|-2|\hat{D}_j|}(t)+2|\hat{D}_j t|1(t<0)$ for $t\in \pi_{2,j}(S^{(2)}(t_j))$, and $|\hat{D}_j|<\rho_N\to 0$. In either case, this calls for the use of Lemma \ref{lem:randomwalkcompoppo} and Lemma \ref{lem:randomwalkcomp}, two results which allow us to compare the argmins of such similar random walks. The only important condition they require is satisfied by (\ref{eq:setcontain}).
Their application gives \begin{eqnarray}\label{eq:firstcompabsshort} &&\Bigg|\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\Bigg| \nonumber\\ &=&\Bigg|\mathbb{P}\left[ \left|\underset{t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}\hat{X}^{(2)}_j(\pi_{2,j}^{-1}(t))\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\Bigg| \nonumber\\ &\leq &C^*_1\rho_N \end{eqnarray} for some constant $C^*_1>0$, not dependent on $j$ or $N$. ~\newline \newline \textbf{Second Component: }Now $X'_{|\Delta_j|-2|\hat{D}_j|}(t)/\sigma$ has exactly the same distribution as $X_{(|\Delta_j|-2|\hat{D}_j|)/\sigma}(t)$, for all integers $t\in \pi_{2,j}(S^{(2)}(t_j))$. It was shown above that the set $\pi_{2,j}(S^{(2)}(t_j))$ contains the interval of integers $\left[ \pm Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]$ for all large $N$.
Therefore we can apply Lemma \ref{lem:twoXrandwalkcomp} to obtain \begin{eqnarray}\label{eq:secondcompabsshort} &&\Biggr|\mathbb{P}\left[ \left|\underset{ t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Biggr|\leq C^*_2\rho_N \end{eqnarray} for some $C^*_2>0$ not dependent on $j$ or $N$. ~\newline \newline \textbf{Third Component: }The set $\pi_{2,j}(S^{(2)}(t_j))$ contains the set $\left[\pm \left(\frac{(K-1)w(N)}{2}-1\right) \right]$. This allows an application of Lemma \ref{lem:eqprob} to obtain \begin{eqnarray}\label{eq:underdiff2short} &&\Biggr|\mathbb{P}\left[ \left|\underset{t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{|t|\leq \frac{(K-1)w(N)}{2}-1}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Biggr|\nonumber\\ &\leq & A_1\exp\left(-B_1\left(\frac{(K-1)w(N)}{2}-1\right)\right) \end{eqnarray} for some constants $A_1>0$ and $B_1>0$ not dependent on $j$ or $N$.
The same application of the lemma can also yield \begin{eqnarray}\label{eq:underdiff3short} &&\Bigg|\mathbb{P}\left[ \left|\underset{|t|\leq \frac{(K-1)w(N)}{2}-1}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right] \nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Bigg|\nonumber\\ &\leq &A_2\exp\left(-B_2\left(\frac{(K-1)w(N)}{2}-1\right)\right) \end{eqnarray} for some $A_2>0$ and $B_2>0$ not dependent on $j$ or $N$. Adding up these two upper bounds yields \begin{eqnarray}\label{eq:thirdcompabsshort} &&\Biggr|\mathbb{P}\left[ \left|\underset{t\in \pi_{2,j}(S^{(2)}(t_j))}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Bigg|\nonumber\\ &\leq &(A_1\vee A_2)\exp\left(-(B_1\wedge B_2)\left(\frac{(K-1)w(N)}{2}-1\right)\right) \end{eqnarray} ~\newline \noindent\textbf{Sum of the Components:} Adding up the differences in (\ref{eq:firstcompabsshort}), (\ref{eq:secondcompabsshort}), and (\ref{eq:thirdcompabsshort}): \begin{eqnarray}\label{eq:individualboundshort} &&\Bigg|\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Bigg|\nonumber\\ &\leq& C_4\rho_N+C_5\exp\left[-C_6(K-1)w(N)\right] \end{eqnarray} for some constants $C_4$, $C_5$, and $C_6$.
This allows us to bound \begin{eqnarray} &&\prod_{j=1}^J\mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Big| \hat{J}=J;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ && -\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \end{eqnarray} by \begin{eqnarray} &&J\Bigg|\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\left(P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)^{-1}\nonumber\\ &\leq&J\left(C_4\rho_N+C_5\exp\left[-C_6(K-1)w(N)\right]\right)(1-\alpha)^{-1/J}\nonumber\\ &\leq&C_4(1-\alpha)^{-1}J\rho_N+C_5(1-\alpha)^{-1}N\exp\left[-C_6(K-1)w(N)\right] ,\end{eqnarray} which goes to 0 since $J\rho_N\to 0$ and $w(N)\geq C N^{1-\gamma}$ for some constant $C$. \end{proof} \subsection{Proof of Theorem \ref{thm:increasingJasymprotics}}\label{sec:increasingJasymproticsproof} In this section we carry out the proof in the special situation where all the error terms of the data sequence are i.i.d. $N(0,\sigma^2)$ for some positive $\sigma$. In the more general case where the error terms are independent Gaussian with variances bounded above and below by $\sigma^2_{\max}$ and $\sigma_{\min}^2$, with identical distributions between change points, similar results from Section \ref{sec:supplementpartc} and methods from the following proof will yield the statement of Theorem \ref{thm:multidepend}. Because we are dealing with i.i.d. error terms, we can simplify the notation presented with the theorem in the main text.
First, we could simplify the random walks \begin{eqnarray} X_{\Delta}(t):=X_{\Delta,1,1}(t)=\begin{cases} t\frac{|\Delta|}{2}+\sum_{i=1}^t\varepsilon^*_{ i }\qquad & t>0\\ 0 & t=0\\ |t|\frac{|\Delta|}{2}+\sum_{i=t+1}^{0}\varepsilon^*_{i}\qquad & t<0 \end{cases}\\ \text{where }\varepsilon^*_i\overset{\text{i.i.d.}}{\sim} N(0,1)\text{ for all }i, \end{eqnarray} denote the argmin of this random walk as $L_{\Delta}:=\underset{t\in\mathbb{Z}}{\arg\min}\; X_\Delta(t)$, simplify the quantiles \begin{eqnarray} Q_{\Delta}(1-\alpha):=Q_{\Delta,1,1}(1-\alpha)=\min\;\left\{t\geq 0:\mathbb{P}\left[\left|L_\Delta\right|\leq t \right]\geq 1-\alpha\right\} \end{eqnarray} for any $\alpha\in [0,1]$, and their associated probabilities \begin{equation}P_{\Delta}(1-\alpha):=P_{\Delta,1,1}(1-\alpha)=\mathbb{P}\left[ |L_\Delta|\leq Q_{\Delta}(1-\alpha) \right] . \end{equation} We will use these notations for the remainder of this section and the entirety of Section \ref{sec:supplementpartc}. \newline \newline Using this notation and working within this simplified framework of i.i.d. errors, the results of the theorem translate to: \begin{eqnarray}\label{eq:increasingJasymproticsineqs} &&\mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right| \leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\text{ for }j=1,\dots,J \right]\nonumber\\ &=& \left(\prod_{j=1}^JP_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)+o(1), \end{eqnarray} and the second statement \begin{eqnarray}\label{eq:increasingJasymproticsineqs2} &&\mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\text{ for }j=1,\dots,J \right]\nonumber\\ &=& \left(\prod_{j=1}^JP_{(|\Delta_j|+2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)+o(1). \end{eqnarray} Both these statements can be shown utilizing various probability bounds regarding the argmins of random walks of the form $X_\Delta(\cdot)$, and
other similar random walks. To utilize such results, the estimators $\hat{\tau}^{(2)}_j$ must be expressed as the argmins of random walks with positive linear drifts. We thus introduce some new notation and observations to facilitate this task. \newline \newline \indent First we define the indexing function $\pi_2=\pi_{2,N}$, which maps the set of integers $\{1,\dots,N\}-S^{(1)}$ to the set of integers $\{1,\dots, N-|S^{(1)}| \}$: \begin{eqnarray}\label{def:pi2func} \pi_2(k)=\sum_{j=1}^k1(j\notin S^{(1)}). \end{eqnarray} For every $N$, $\pi_2$ is a strictly increasing bijection, and it has the property that for any two values $i$ and $j$ in the domain of $\pi_2$, $\lambda_2(i,j)=\pi_2(j)-\pi_2(i)$. Additionally, consider the following subset of the full data: \begin{eqnarray}\label{set:secstagedata} Y_{\pi_2^{-1}(1)},Y_{\pi_2^{-1}(2)},\dots, Y_{\pi_2^{-1}(N-|S^{(1)}|)}. \end{eqnarray} These are all the data points which were not used in the first stage subsample. This subset of the full dataset is also a change point model following conditions (M1) to (M4) with the same signal upper bound $\bar{\theta}$, the same signal jump lower bound $\underline{\Delta}$, and the same error distribution. The change points of this subset are $\pi_2(\tau_j^{(2)})$ for $j=1,\dots,J$. In fact, (\ref{set:secstagedata}) and the first stage subsample $\{ Y_i \}_{i\in S^{(1)}}$ can be considered statistically independent change point models, resulting in the first stage estimates obtained using $\{ Y_i \}_{i\in S^{(1)}}$ being independent of (\ref{set:secstagedata}). This means that conditional on the first stage estimates, the distribution of (\ref{set:secstagedata}) does not change from its marginal distribution. \newline \newline \indent A property of the $\pi_2$ function that will be used is a relationship between the $\pi_2^{-1}$ function and ``normal'' subtraction; namely, we want to be able to compare $\pi_2^{-1}(b)-\pi_2^{-1}(a)$ to $b-a$ for $a,b\in \{1,\dots, N-|S^{(1)}|\}$.
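As an aside, the bookkeeping behind $\pi_2$ is easy to check numerically. The following is a minimal Python sketch, with an arbitrary illustrative choice of $N$ and $S^{(1)}$ (not the subsample of the paper), verifying that $\pi_2$ is a strictly increasing bijection onto $\{1,\dots,N-|S^{(1)}|\}$ and that $\lambda_2(i,j)=\pi_2(j)-\pi_2(i)$:

```python
# Minimal numerical check of the pi_2 indexing map used in the proof.
# S1 below is an arbitrary illustrative first-stage index set (not the
# paper's actual subsample); N is a small illustrative sample size.

N = 30
S1 = {3, 7, 12, 18, 25}                      # consecutive points more than 2 apart
complement = [i for i in range(1, N + 1) if i not in S1]

def pi2(k):
    """pi_2(k) = number of indices j <= k with j not in S1."""
    return sum(1 for j in range(1, k + 1) if j not in S1)

def lambda2(i, j):
    """Signed count of complement points between i and j (i, j not in S1)."""
    lo, hi = (i, j) if i <= j else (j, i)
    count = sum(1 for t in complement if lo < t <= hi)
    return count if i <= j else -count

# pi_2 restricted to the complement is a strictly increasing bijection
# onto {1, ..., N - |S1|}
images = [pi2(k) for k in complement]
assert images == list(range(1, N - len(S1) + 1))

# lambda_2(i, j) = pi_2(j) - pi_2(i) for all i, j in the complement
assert all(lambda2(i, j) == pi2(j) - pi2(i)
           for i in complement for j in complement)
```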
For all $N$ large enough such that the distance between consecutive points in $S^{(1)}$ is more than 2 (i.e. $\min\limits_{i,j\in S^{(1)},i\neq j}|i-j|>2$), we have the following property: for any integers $a$ and $b$ such that $a$ and $a+b$ are in $\{1,\dots, N-|S^{(1)}|\}$, \begin{eqnarray}\label{eq:compdists} \left|\pi_2^{-1}(a+b)-\pi_2^{-1}(a)\right|\leq 2|b|. \end{eqnarray} One way to see this is the following: for any $c,d\in \{ 1,\dots,N \}-S^{(1)}$, $|\lambda_2(c,d)|$ counts the number of points of $\{1,\dots,N\}-S^{(1)}$ in either $(c,d]$ (if $c\leq d$) or $(d,c]$ (otherwise), and therefore $|\lambda_2(c,d)|\geq |d-c|/2$, since no two points of $S^{(1)}$ are within distance 2 of each other. Knowing this, the fact that \begin{eqnarray} \left|\lambda_2\left( \pi_2^{-1}(a),\pi_2^{-1}(a+b) \right)\right|=\left|\pi_2\left(\pi_2^{-1}(a+b)\right)-\pi_2\left( \pi_2^{-1}(a) \right)\right|= |b| \end{eqnarray} means that \begin{eqnarray} 2|b|= 2\left|\lambda_2\left( \pi_2^{-1}(a),\pi_2^{-1}(a+b) \right)\right|\geq \left|\pi_2^{-1}(a+b)-\pi_2^{-1}(a)\right|. \end{eqnarray} \indent We will offer a proof of the statement given in (\ref{eq:increasingJasymproticsineqs}); verifying (\ref{eq:increasingJasymproticsineqs2}) would only require a few modifications. \begin{proof} Let $\mathcal{R}_N$ be the event \begin{eqnarray} \mathcal{R}_N:= \left\{ \hat{J}=J;\,\max_{j=1,\dots,J}\left| \hat{\tau}^{(1)}_j-\tau_j \right|\leq w(N);\, \max_{j=0,\dots,J}\left| \hat{\nu}^{(1)}_j-\nu_j \right|\leq \rho_N \right\}. \end{eqnarray} Again let $G_N(\cdot)$ denote the joint distribution of the first stage estimates $(\hat{J},\boldsymbol{\hat{\tau}^{(1)}},\boldsymbol{\hat{\nu}^{(1)}})=(\hat{J},\hat{\tau}^{(1)}_1,\dots,\hat{\tau}^{(1)}_{\hat{J}},\hat{\nu}^{(1)}_0,\dots,\hat{\nu}^{(1)}_{\hat{J}})$.
\begin{eqnarray}\label{eq:condinte2} &&\mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right| \leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\text{ for }j=1,\dots,J \right]\nonumber\\ &= & \mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right| \leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\,\forall j;\, \mathcal{R}_N \right]\nonumber\\ &&+\mathbb{P}\left[ \hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right| \leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\,\forall j;\,\text{not }\mathcal{R}_N \right]\nonumber\\ &=&\underset{ (k,\boldsymbol{t},\boldsymbol{v})\in\mathcal{R}_N }{\int}\mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\,\forall j\Big| \hat{J}=k;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]dG_N(k,\boldsymbol{t},\boldsymbol{v})\nonumber\\ &&+\mathbb{P}\left[\hat{J}=J;\, \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right| \leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\,\forall j;\, \text{not }\mathcal{R}_N \right]\nonumber \end{eqnarray} Because the probability of $\mathcal{R}_N$ goes to 1, the difference between this probability and $\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})$ is \begin{eqnarray} &&\underset{ (k,\boldsymbol{t},\boldsymbol{v})\in\mathcal{R}_N }{\int}\Bigg(\mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\,\forall j\Big| \hat{J}=k;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ && -\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg) dG_N(k,\boldsymbol{t},\boldsymbol{v})+o(1) \end{eqnarray} It is therefore
sufficient to show that the difference inside the integral is, for all $(k,\boldsymbol{t},\boldsymbol{v})\in\mathcal{R}_N$, uniformly bounded in absolute value by an $o(1)$ term. In other words, we show there is a sequence $C_{N,\alpha}=o(1)$ such that \begin{eqnarray}\label{eq:condprodprobdev} &&\Bigg| \mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\,\forall j\Big| \hat{J}=k;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &&-\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\leq C_{N,\alpha} \end{eqnarray} for all admissible $(k,\boldsymbol{t},\boldsymbol{v})\in\mathcal{R}_N$. \newline \newline \indent Henceforth, consider only such admissible $k$ (which restricts to $k=J$), $\boldsymbol{t}$'s and $\boldsymbol{v}$'s, and additionally suppose that $N$ is at least large enough so that $\rho_N\leq |\underline{\Delta}|/16$ and the distance between consecutive points in $S^{(1)}$ is more than 2 (i.e. $\min\limits_{i,j\in S^{(1)},i\neq j}|i-j|>2$). We will proceed to show (\ref{eq:condprodprobdev}) by obtaining an upper bound for the following absolute difference, for every $j=1,\dots, J$: \begin{eqnarray} &&\Bigg| \mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Big| \hat{J}=J;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &&-P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|. \end{eqnarray} This upper bound will be derived in several components.
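As concrete context for the quantities being compared, the following Monte Carlo sketch simulates the argmin $L_\Delta$ of the two-sided drifted walk $X_\Delta(\cdot)$ defined at the start of this section and estimates the quantile $Q_\Delta(1-\alpha)$. It is purely illustrative and not part of the proof; the truncation level $T$ and all numerical choices are ours.

```python
# Monte Carlo sketch of L_Delta = argmin over t in Z of the two-sided walk
# X_Delta(t) = |t| * Delta/2 + (sum of N(0,1) increments), and of the
# quantile Q_Delta(1 - alpha) = min{q >= 0 : P(|L_Delta| <= q) >= 1-alpha}.
# The truncation T and all constants are illustrative choices.
import random

def argmin_two_sided_walk(delta, T=200, rng=random):
    """Sample the argmin over t in {-T, ..., T} of the drifted walk."""
    best_t, best_val = 0, 0.0
    # right branch: X(t) = t*delta/2 + sum_{i=1}^t eps_i
    val = 0.0
    for t in range(1, T + 1):
        val += delta / 2 + rng.gauss(0.0, 1.0)
        if val < best_val:
            best_t, best_val = t, val
    # left branch: X(-t) = t*delta/2 + an independent Gaussian sum
    val = 0.0
    for t in range(1, T + 1):
        val += delta / 2 + rng.gauss(0.0, 1.0)
        if val < best_val:
            best_t, best_val = -t, val
    return best_t

def estimate_quantile(delta, alpha, n_sims=2000, seed=0):
    """Empirical (1 - alpha)-quantile of |L_Delta| from n_sims draws."""
    rng = random.Random(seed)
    absL = sorted(abs(argmin_two_sided_walk(delta, rng=rng))
                  for _ in range(n_sims))
    return absL[min(n_sims - 1, int((1 - alpha) * n_sims))]

q_big = estimate_quantile(delta=2.0, alpha=0.1)
q_small = estimate_quantile(delta=0.5, alpha=0.1)
# smaller drift => flatter walk => the argmin wanders farther from 0
assert q_big <= q_small
```

This matches the intuition used repeatedly below: the quantile $Q_\Delta(1-\alpha)$ shrinks as the (scaled) jump size grows, which is why all quantile bounds in the proof can be stated in terms of the worst-case jump $\underline{\Delta}/2\sigma$.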
\newline \newline \noindent \textbf{First Component:} A more explicit expression for the change point estimates is \begin{eqnarray} \hat{\tau}^{(2)}_j&=&\underset{t\in S^{(2)}(t_j)}{\arg\min}(\text{sgn}(v_j-v_{j-1}))\sum_{i\in S^{(2)}(t_j)}\left( Y_i-\frac{v_j+v_{j-1}}{2} \right)( 1(i\leq t)-1(i\leq \tau_j^{(2)} ))\nonumber\\ &=:& \underset{t\in S^{(2)}(t_j)}{\arg\min} \hat{X}_j^{(2)}(t) \end{eqnarray} Since $N$ was assumed to be large enough so that $\rho_N< |\underline{\Delta}|/8\leq |\nu_j-\nu_{j-1}|/8$, the sign of $v_j-v_{j-1}$ is the same as the sign of $\Delta_j:=\nu_j-\nu_{j-1}$, making the optimized expression above equal \begin{align} \hat{X}^{(2)}_j(t) &=\text{sgn}(\Delta_j)\sum_{i\in S^{(2)}(t_j)}\left( Y_i-\frac{v_j+v_{j-1}}{2} \right)( 1(i\leq t)-1(i\leq \tau_j^{(2)} ))\nonumber\\ &=\begin{cases} \left|(\tau_j^{(2)},t]\cap S^{(2)}(t_j)\right|\left[\frac{|\Delta_j|}{2}+\text{sgn}(\Delta_j)\hat{D}_j\right]+\text{sgn}(\Delta_j)\underset{i\in (\tau_j^{(2)},t]\cap S^{(2)}(t_j)}{\sum}\varepsilon_i \, & \text{for }t>\tau_j^{(2)}\\ 0 &t=\tau_j^{(2)}\\ \left| (t,\tau_j^{(2)}]\cap S^{(2)}(t_j) \right|\left[\frac{|\Delta_j|}{2}-\text{sgn}(\Delta_j)\hat{D}_j\right]-\text{sgn}(\Delta_j)\underset{i\in (t,\tau_j^{(2)}]\cap S^{(2)}(t_j) }{\sum}\varepsilon_{i}\qquad & \text{for }t<\tau_j^{(2)} \end{cases}\nonumber\\ &=\begin{cases} \left(\pi_2(t)-\pi_2(\tau_j^{(2)})\right)\cdot\left[\frac{|\Delta_j|}{2}+\text{sgn}(\Delta_j)\hat{D}_j\right]+\text{sgn}(\Delta_j)\sum\limits_{i=\pi_2(\tau_j^{(2)})+1 }^{\pi_2(t)}\varepsilon_{\pi_2^{-1}(i)} \qquad & \text{for }t>\tau_j^{(2)}\\ 0 &t=\tau_j^{(2)}\\ \left(\pi_2(\tau_j^{(2)})-\pi_2(t)\right)\cdot\left[\frac{|\Delta_j|}{2}-\text{sgn}(\Delta_j)\hat{D}_j\right]-\text{sgn}(\Delta_j)\sum\limits_{i=\pi_2(t)+1}^{\pi_2(\tau_j^{(2)})}\varepsilon_{\pi_2^{-1}(i)}\qquad & \text{for }t<\tau_j^{(2)} \end{cases}\nonumber\\ &\text{since }(a,b]\cap S^{(2)}(t_j)=\pi_2^{-1}\left((\pi_2(a),\pi_2(b)]\right)\text{ for any }(a,b]\subset S^{(2)}(t_j)\nonumber\\ \end{align} where
\begin{eqnarray} \hat{D}_j=\frac{\nu_j-v_j}{2}+\frac{\nu_{j-1}-v_{j-1}}{2}, \end{eqnarray} which is less than $\rho_N$ in absolute value. From the equalities written above, we can deduce that for any integer $t$ such that $\pi_2^{-1}\left( \pi_2(\tau^{(2)}_j)+t \right)\in S^{(2)}(t_j)$, \begin{eqnarray} &&\hat{X}_j^{(2)}\left( \pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)+t \right) \right)\nonumber\\ &=&\begin{cases} t\cdot\left[\frac{|\Delta_j|}{2}+\text{sgn}(\Delta_j)\hat{D}_j\right]+\text{sgn}(\Delta_j)\sum\limits_{i=\pi_2(\tau_j^{(2)})+1 }^{\pi_2(\tau_j^{(2)})+t}\varepsilon_{\pi_2^{-1}(i)} \qquad & \text{for }t>0\\ 0 &t=0\\ \left|t\right|\cdot\left[\frac{|\Delta_j|}{2}-\text{sgn}(\Delta_j)\hat{D}_j\right]-\text{sgn}(\Delta_j)\sum\limits_{i=\pi_2(\tau_j^{(2)})+t+1}^{\pi_2(\tau_j^{(2)})}\varepsilon_{\pi_2^{-1}(i)}\qquad & \text{for }t<0 \end{cases}. \end{eqnarray} This allows us to compare $\hat{X}_j^{(2)}$ with the random walk $X'_{|\Delta_j|-2|\hat{D}_j|}$: \begin{eqnarray} X'_{|\Delta_j|-2|\hat{D}_j|}(t):=\begin{cases} t\frac{|\Delta_j|-2|\hat{D}_j|}{2}+\text{sgn}(\Delta_j)\sum\limits_{i=1}^{t}\varepsilon_{ \pi_2^{-1}\left(\pi_2(\tau_j^{(2)})+i \right) }\qquad & \text{for }t>0\\ 0 &t=0\\ |t|\frac{|\Delta_j|-2|\hat{D}_j|}{2}-\text{sgn}(\Delta_j)\sum\limits_{i=t}^{-1}\varepsilon_{\pi_{2}^{-1}\left(\pi_2(\tau_j^{(2)})+i+1\right)}\qquad & \text{for }t<0 \end{cases}. \end{eqnarray} Specifically, the random process $\hat{X}^{(2)}_j\left( \pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)+t \right) \right)$ either equals the sum $X'_{|\Delta_j|-2|\hat{D}_j|}(t)+2|\hat{D}_jt|1(t>0)$ or $X'_{|\Delta_j|-2|\hat{D}_j|}(t)+2|\hat{D}_jt|1(t<0)$.
In either case, this calls for the use of Lemma \ref{lem:randomwalkcompoppo} and Lemma \ref{lem:randomwalkcomp}, but first we must verify some of the conditions of those results: namely, that $|\Delta_j|>2|\hat{D}_j|$ (automatically true since $N$ was assumed to be large enough so that $|\Delta_j|\geq \underline{\Delta} \geq 8\rho_N$), and secondly, that $\pi_2^{-1}\left( \pi_2(\tau^{(2)}_j)+t \right)\in S^{(2)}(t_j)$ for $|t|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})$. To see that this is true, first use (\ref{eq:compdists}) to arrive at \begin{eqnarray} &&\left|\pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)\pm Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right)-\tau_j^{(2)}\right|\nonumber\\ &=&\left|\pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)\pm Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right)-\pi_2^{-1}\left(\pi_2\left(\tau_j^{(2)}\right)\right) \right|\nonumber\\ &\leq& 2Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}). \end{eqnarray} Therefore, given any $|t|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})$, $\pi_2^{-1}\left( \pi_2\left( \tau_j^{(2)} \right)+t \right)\in\left[ \tau_j^{(2)}\pm 2Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]-S^{(1)}$.
Next, due to Lemma \ref{lem:quantbound}, there are positive expressions $C_1(\cdot)$ and $C_2(\cdot)$, both decreasing, such that \begin{eqnarray} Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})&\leq& C_1(|\Delta_j|-2\rho_N)\log\left(\frac{C_2(|\Delta_j|-2\rho_N)J}{\alpha} \right)\nonumber\\ &\leq & C_1(\underline{\Delta}/2)\log\left(\frac{C_2(\underline{\Delta}/2)J}{\alpha} \right)\nonumber\\ &\leq &C_1(\underline{\Delta}/2)\log\left(\frac{C_2(\underline{\Delta}/2)N}{\alpha} \right). \end{eqnarray} Since $w(N)$ is at least of order $N^{1-\gamma}$, which in turn grows faster than $\log(N)$, we see that for all large $N$: \begin{eqnarray} &&t_j-Kw(N)\leq \tau_j^{(2)}-(K-1)w(N)+1 < \tau_j^{(2)}-2Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\nonumber\\ &\leq & \pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)- Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right)<\pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)+ Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right)\nonumber\\ &\leq &\tau_j^{(2)}+2Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\leq \tau_j^{(2)}+(K-1)w(N)-1\leq t_j+Kw(N). \end{eqnarray} Therefore given any $|t|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})$, \begin{eqnarray} \pi_2^{-1}\left( \pi_2\left( \tau_j^{(2)} \right)+t \right)&\in&\left[ \tau_j^{(2)}\pm 2Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]-S^{(1)}\nonumber\\ &\subset& \left[ t_j\pm Kw(N) \right]-S^{(1)}\nonumber\\ &=& S^{(2)}(t_j), \end{eqnarray} showing that the conditions of Lemma \ref{lem:randomwalkcompoppo} are satisfied. Before continuing, we make a small observation: for any integer $t^*$ such that $\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t^* \right)\in S^{(2)}(t_j)$, the event $\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)=t^*$ is equivalent to the event $\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right)\in S^{(2)}(t_j)}{\arg\min}\hat{X}^{(2)}_j\left( \pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right) \right)=t^*$.
This is because \begin{eqnarray}\label{eq:redefinelambda2} &&\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right)\in S^{(2)}}{\arg\min}\hat{X}^{(2)}_j\left( \pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right) \right)=t^*\nonumber\\ &\longleftrightarrow & \underset{t\in S^{(2)}(t_j)}{\arg\min} \hat{X}^{(2)}_j(t)=\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t^*\right)\nonumber\\ &\longleftrightarrow & \pi_2\left( \underset{t\in S^{(2)}(t_j)}{\arg\min} \hat{X}^{(2)}_j(t) \right)-\pi_2(\tau_j^{(2)})=t^*\nonumber\\ &\longleftrightarrow & \lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)=t^* \end{eqnarray} We are now ready to apply Lemma \ref{lem:randomwalkcompoppo}: \begin{eqnarray}\label{eq:underdiff1} &&\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right] \nonumber\\ &=&\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right)\in S^{(2)}}{\arg\min}\sigma^{-1}\hat{X}^{(2)}_j\left( \pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right) \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|(\hat{J},\boldsymbol{\hat{\tau}^{(1)}},\boldsymbol{\hat{\nu}^{(1)} })=(J,\boldsymbol{t}, \boldsymbol{v})\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}}{\arg\min}\sigma^{-1}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} 
}=\boldsymbol{v} \right]\nonumber\\ &&\text{indexing change by }(\ref{eq:redefinelambda2});\text{ division by the constant } \sigma\text{ does not change the argmin}\nonumber\\ &\leq& 2|\hat{D}_j|\sigma^{-1}\Big[A_1^+\left((|\Delta_j|-2\hat{D}_j)/\sigma\right)\left(Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)^{3/2}\nonumber\\ &&+B_1^+\left((|\Delta_j|-2\hat{D}_j)/\sigma\right)\sqrt{Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})}\Big]\nonumber\\ &&\times \exp\left[ -C_1^+\left( (|\Delta_j|-2\hat{D}_j)/\sigma \right)Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber \end{eqnarray} Because, as stated previously, $\hat{X}^{(2)}_j\left( \pi_2^{-1}\left(\pi_2(\tau^{(2)}_j)+t \right) \right)$ either equals $X'_{|\Delta_j|-2|\hat{D}_j|}(t)+2|\hat{D}_jt|1(t>0)$ or $X'_{|\Delta_j|-2|\hat{D}_j|}(t)+2|\hat{D}_jt|1(t<0)$, Lemma \ref{lem:randomwalkcompoppo} leads to this inequality for some positive monotone expressions $A_1^+()$, $B_1^+()$, and $C_1^+()$, which are decreasing, decreasing, and increasing, respectively.
This expression can be further bounded: \begin{align}\label{eq:firstcomp1} &\leq 2\sigma^{-1}|\hat{D}_j|\left[A_1^+\left(\underline{\Delta}/2\sigma\right)\left(Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)^{3/2}+B_1^+\left(\underline{\Delta}/2\sigma\right)\sqrt{Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})}\right]\nonumber\\ &\times\exp\left[ -C_1^+\left( \underline{\Delta}/2\sigma \right)Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &\qquad \leq 2C^*\left(\underline{\Delta}/2\sigma\right)\rho_N\nonumber\\ &\text{where }C^*(\cdot)=\sup_{x\in\mathbb{R}^+}\left([A_1^+(\cdot)x^{3/2}+B_1^+(\cdot)\sqrt{x}]\exp(-C_1^+(\cdot)x)\right)\text{, guaranteed to be finite} \end{align} In a similar manner, apply Lemma \ref{lem:randomwalkcomp} to obtain \begin{align}\label{eq:firstcomp2} &\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &-\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right] \nonumber\\ &=\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right)\in S^{(2)}}{\arg\min}\frac{1}{\sigma}\hat{X}^{(2)}_j\left( \pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t \right) \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &-\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}}{\arg\min}\frac{1}{\sigma}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq
Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &\geq -2A_1^-\left(\left( |\Delta_j|-6|\hat{D}_j|\right)/\sigma \right)\rho_N\sqrt{Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})}\nonumber\\ &\times\exp\left[ -B_1^- \left(\left( |\Delta_j|-6|\hat{D}_j|\right)/\sigma \right) Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &\text{for some positive decreasing expression }A_1^-\text{ and some positive increasing expression }B_1^-\nonumber\\ &\geq -2A_1^-\left( \underline{\Delta}/2\sigma \right)\rho_N\sqrt{Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})}\exp\left[ -B_1^- \left( \underline{\Delta}/2\sigma \right) Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &\geq -2C_1^-(\underline{\Delta}/2\sigma)\rho_N\quad\text{ where }C_1^-(\cdot)=\sup_{x\in\mathbb{R}^+}A_1^-(\cdot)\sqrt{x}\exp\left[ -B_1^-(\cdot)x \right]\nonumber\\ \end{align} Together, (\ref{eq:firstcomp1}) and (\ref{eq:firstcomp2}) imply \begin{eqnarray}\label{eq:firstcompabs} &&\Bigg|\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\Bigg| \nonumber\\ &\qquad &\leq 2\left( C_1^-(\underline{\Delta}/2\sigma)\vee C^*(\underline{\Delta}/2\sigma) \right)\rho_N \end{eqnarray} ~\newline \newline \textbf{Second Component: }Now $X'_{|\Delta_j|-2|\hat{D}_j|}(t)/\sigma$ has exactly the same distribution as
$X_{(|\Delta_j|-2|\hat{D}_j|)/\sigma}(t)$, for all integers $t$ such that $\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)$. It was also shown in the first component that the set $\left\{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j) \right\}$ contains the interval of integers $\left[ \pm Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]$. Therefore apply Lemma \ref{lem:twoXrandwalkcomp} to first obtain \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &=&\mathbb{P}\left[ \left|\underset{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}\frac{1}{\sigma}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber \end{eqnarray} \begin{eqnarray} &=&\mathbb{P}\left[ \left|\underset{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2|\hat{D}_j|)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &\geq &\mathbb{P}\left[ \left|\underset{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right] \end{eqnarray} and apply Lemma \ref{lem:twoXrandwalkcomp} again to obtain another inequality in the other direction \begin{eqnarray}\label{eq:underdiff4} \mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq
Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber \end{eqnarray} \begin{eqnarray} &=&\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2|\hat{D}_j|)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &\leq & \mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]+ \end{eqnarray} \begin{eqnarray} &&(2\rho_N-2|\hat{D}_j|)\Bigg[A_2\left((|\Delta_j|-2\rho_N)/\sigma\right)\left(Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)^{3/2}\nonumber\\ &&+B_2\left((|\Delta_j|-2\rho_N)/\sigma\right)\sqrt{Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})}\Bigg]\nonumber\\ &&\times\exp\left[ -C_2\left( (|\Delta_j|-2\rho_N)/\sigma \right)Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &\leq &\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]+\nonumber\\ &&2\rho_N\left[A_2\left(\underline{\Delta}/2\sigma\right)\left(Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)^{3/2}+B_2\left(\underline{\Delta}/2\sigma\right)\sqrt{Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})}\right]\nonumber\\ &&\times\exp\left[ -C_2\left( \underline{\Delta}/2\sigma \right)Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha}) \right]\nonumber\\ &&\text{since }N\text{ is large enough such that } 2\rho_N<\underline{\Delta}/2\text{, and by the monotonicity of }A_2,B_2,\text{ and }C_2\nonumber\\ &\leq &\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq
Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]+2C^*_2\left(\frac{\underline{\Delta}}{2\sigma}\right)\rho_N\nonumber\\ &&\text{where }C^*_2(\cdot)=\sup_{x\in\mathbb{R}^+}\left([A_2(\cdot)x^{3/2}+B_2(\cdot)\sqrt{x}]\exp(-C_2(\cdot)x)\right) \end{eqnarray} These two inequalities together imply a bound on the absolute difference: \begin{align}\label{eq:secondcompabs} \Biggr|\mathbb{P}&\left[ \left|\underset{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X'_{|\Delta_j|-2|\hat{D}_j|}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &-\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Biggr|\nonumber\\ &\qquad\leq 2C^*_2\left(\frac{\underline{\Delta}}{2\sigma}\right)\rho_N \end{align} ~\newline \newline \textbf{Third Component: }We note that the set $\left\{ t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j) \right\}$ contains the set $\left[\pm \left(\frac{(K-1)w(N)}{2}-1\right) \right]$. This is firstly because, by (\ref{eq:compdists}), \begin{eqnarray} \left|\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})\pm \left(\frac{(K-1)w(N)}{2}-1\right)\right)-\tau_j^{(2)}\right|\leq 2\left(\frac{(K-1)w(N)}{2}-1\right), \end{eqnarray} and secondly because \begin{eqnarray} &&t_j-Kw(N)\leq \tau_j^{(2)}-(K-1)w(N)+1 < \tau_j^{(2)}-2\left( \frac{(K-1)w(N)}{2}-1 \right)\nonumber\\ &< & \tau_j^{(2)}+2\left( \frac{(K-1)w(N)}{2}-1 \right)\leq \tau_j^{(2)}+(K-1)w(N)\leq t_j+Kw(N). \end{eqnarray} Therefore, for any $t$ such that $|t|\leq \left(\frac{(K-1)w(N)}{2}-1\right)$, we have $\pi_2^{-1}(\pi_2(\tau_j^{(2)})+t)\in [t_j\pm Kw(N)]-S^{(1)}=S^{(2)}(t_j)$.
This allows an application of Lemma \ref{lem:eqprob} to obtain \begin{eqnarray}\label{eq:underdiff2} &&\Biggr|\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{|t|\leq \frac{(K-1)w(N)}{2}-1}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Biggr|\nonumber\\ &\leq& A_3((|\Delta_j|-2\rho_N)/\sigma)\exp\left(-B_3((|\Delta_j|-2\rho_N)/\sigma)\left(\frac{(K-1)w(N)}{2}-1\right)\right)\nonumber\\ &&\text{for some decreasing expression }A_3()\text{ and increasing expression }B_3()\nonumber\\ &\leq & A_3(\underline{\Delta}/2\sigma)\exp\left(-B_3(\underline{\Delta}/2\sigma)\left(\frac{(K-1)w(N)}{2}-1\right)\right) \end{eqnarray} The same application of the lemma can also yield \begin{eqnarray}\label{eq:underdiff3} &&\Bigg|\mathbb{P}\left[ \left|\underset{|t|\leq \frac{(K-1)w(N)}{2}-1}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right] \nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Bigg|\nonumber\\ &\leq &A_3(\underline{\Delta}/2\sigma)\exp\left(-B_3(\underline{\Delta}/2\sigma)\left(\frac{(K-1)w(N)}{2}-1\right)\right) \end{eqnarray} Adding up these two upper bounds yields \begin{eqnarray}\label{eq:thirdcompabs} &&\Biggr|\mathbb{P}\left[ \left|\underset{t:\,\pi_2^{-1}\left( \pi_2(\tau_j^{(2)})+t\right)\in S^{(2)}(t_j)}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq
Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Bigg|\nonumber\\ &\leq &2A_3(\underline{\Delta}/2\sigma)\exp\left(-B_3(\underline{\Delta}/2\sigma)\left(\frac{(K-1)w(N)}{2}-1\right)\right) \end{eqnarray} ~\newline \noindent\textbf{Sum of the Components:} Adding up the differences in (\ref{eq:firstcompabs}), (\ref{eq:secondcompabs}), and (\ref{eq:thirdcompabs}): \begin{eqnarray}\label{eq:individualbound} &&\Bigg|\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_{(|\Delta_j|-2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right]\Bigg|\nonumber\\ &\leq& C_4\rho_N+C_5\exp\left[-C_6(K-1)w(N)\right] \end{eqnarray} for some constants $C_4$, $C_5$, and $C_6$. \newline \newline \indent Finally, in order to bound (\ref{eq:condprodprobdev}), we note the following: given two real-valued triangular arrays $a_{N,1},\dots,a_{N,J}$ and $b_{N,1},\dots, b_{N,J}$, all contained in the interval $[0,1]$, such that $\left|\frac{a_{N,i}-b_{N,i}}{b_{N,i}}\right|\leq C_N$ for $1\leq i\leq J$, where $JC_N\to 0$ as $N\to \infty$, we have $\left|\prod a_{N,j}-\prod b_{N,j}\right|\to 0$.
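The product-comparison claim just stated is elementary, and a quick numerical illustration may be helpful. The sketch below uses arbitrary illustrative values, not quantities from the theorem: it takes $J$ equal factors $b_j$ with product $0.95$ and relative perturbations of size $C_N=1/J^2$, so that $JC_N=1/J\to 0$, and checks that the worst-case gap between the two $J$-fold products vanishes.

```python
# Numerical illustration of the product comparison: if each a_{N,j} is
# within relative error C_N of b_{N,j} in [0,1], and J * C_N -> 0, then
# the two J-fold products agree in the limit.  The choice of b_j (equal
# factors with product 0.95) and of C_N = 1/J^2 is illustrative.

def max_product_gap(J, C_N):
    """Worst-case |prod a_j - prod b_j| over perturbations |a_j/b_j - 1| <= C_N."""
    prod_b = 0.95                      # product of J equal factors 0.95**(1/J)
    # extremes are attained at a_j = b_j * (1 + C_N) or a_j = b_j * (1 - C_N)
    hi = prod_b * ((1 + C_N) ** J - 1)
    lo = prod_b * (1 - (1 - C_N) ** J)
    return max(hi, lo)

# along a sequence with J growing and J * C_N = 1/J -> 0, the gap vanishes
gaps = [max_product_gap(J, C_N=1.0 / J**2) for J in (10, 100, 1000)]
assert gaps[0] > gaps[1] > gaps[2]
assert gaps[-1] < 1e-3
```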
This is because \begin{eqnarray} \left|\prod_{j=1}^J a_{N,j}-\prod_{j=1}^J b_{N,j}\right|&=& \left|\prod_{j=1}^J b_{N,j}\right|\left| \prod_{j=1}^J \left(1+\frac{a_{N,j}-b_{N,j}}{b_{N,j}}\right)-1 \right|\nonumber\\ \end{eqnarray} Since $\left|\prod b_{N,j}\right|\in [0,1]$, the above converges to 0 if $\prod \left(1+\frac{a_{N,j}-b_{N,j}}{b_{N,j}}\right)\to 1$, which is true since \begin{eqnarray} &&\prod_{j=1}^J \left(1+\frac{a_{N,j}-b_{N,j}}{b_{N,j}}\right)\geq (1-C_N)^J\geq 1-C_NJ\to 1\nonumber\\ &&\prod_{j=1}^J \left(1+\frac{a_{N,j}-b_{N,j}}{b_{N,j}}\right)\leq (1+C_N)^J\leq (e^{C_N})^J\to e^0=1 \end{eqnarray} This result is useful because for all $N$ large enough so that $2Kw(N)\leq \delta_N$, the stage two sets $S^{(2)}(t_j)$ for $j=1,\dots,J$ are disjoint, and hence by conditional independence, (\ref{eq:condprodprobdev}) equals \begin{eqnarray}\label{eq:condprodprobdev2} &&\Bigg| \prod_{j=1}^J\mathbb{P}\left[ \left|\lambda_2\left( \tau_j^{(2)},\hat{\tau}_j^{(2)} \right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Big| \hat{J}=J;\,\boldsymbol{\hat{\tau}^{(1)} }=\boldsymbol{t};\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v} \right]\nonumber\\ &&-\prod_{j=1}^J P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg| \end{eqnarray} Moreover, using (\ref{eq:individualbound}), \begin{eqnarray} &&J\Bigg|\mathbb{P}\left[ \left|\lambda_2\left(\tau_j^{(2)},\hat{\tau}^{(2)}_j\right)\right|\leq Q_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\hat{J}=J,\boldsymbol{\hat{\tau}^{(1)}}=\boldsymbol{t},\, \boldsymbol{\hat{\nu}^{(1)} }=\boldsymbol{v}\right]\nonumber\\ &&-P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\Bigg|\left(P_{(|\Delta_j|-2\rho_N)/\sigma}(\sqrt[J]{1-\alpha})\right)^{-1}\nonumber\\ &\leq&J\left(C_4\rho_N+C_5\exp\left[-C_6(K-1)w(N)\right]\right)(1-\alpha)^{-1/J}\nonumber\\ &\leq&C_4(1-\alpha)^{-1}J\rho_N+C_5(1-\alpha)^{-1}N\exp\left[-C_6(K-1)w(N)\right] ,\end{eqnarray} which goes to 0 since $J\rho_N\to 0$
and $w(N)\geq C N^{1-\gamma}$ for some constant $C$. This lets us conclude that (\ref{eq:condprodprobdev2}) converges to 0. \end{proof} \subsection{Proof of Theorem \ref{thm:multidepend}}\label{sec:multidependthmproof} \indent As before, we use the $\pi_2$ function defined in (\ref{def:pi2func}), which is a bijection from the set $\{ 1,\dots,N \}-S^{(1)}$ to the set $\{ 1,\dots,N-|S^{(1)}| \}$. \begin{proof} Let $S^{(2)}\left( \hat{\tau}^{(1)}_k\right)$ for $k=1,...,\hat{J}$ be the second stage subsamples. As in previous sections, define \begin{eqnarray} \mathcal{R}_N:= \left\{ \hat{J}=J;\,\max_{j=1,\dots,J}\left| \hat{\tau}^{(1)}_j-\tau_j \right|\leq w(N);\, \max_{j=0,\dots,J}\left| \hat{\nu}^{(1)}_j-\nu_j \right|\leq \rho_N \right\} \end{eqnarray} We also define the following random functions on integers: for $k=1,...,J$, on the event $\mathcal{R}_N$ let \begin{align} \hat{X}^{(2)}_k(d)&:= \text{sgn}\left(\hat{\nu}^{(1)}_{k}-\hat{\nu}^{(1)}_{k-1}\right)\underset{j\in S^{(2)}(\hat{\tau}^{(1)}_k)}{\sum}\left(Y_j-\frac{\hat{\nu}^{(1)}_{k}+\hat{\nu}^{(1)}_{k-1}}{2}\right) \left(1\left(j\leq \pi_2^{-1}(\pi_2(\tau_k^{(2)})+d)\right)-1\left(j\leq \tau_k^{(2)}\right)\right) \nonumber\\ &\qquad \text{for }\pi_2^{-1}(\pi_2(\tau_k^{(2)})+d)\in S^{(2)}_k\nonumber\\ &\qquad\hat{X}^{(2)}_k(d):=\infty \qquad\text{otherwise} \end{align} and on the event $\mathcal{R}_N^C$ let \begin{equation} \hat{X}^{(2)}_k(d):=d, \end{equation} so that the $\arg\min$ of $\hat{X}^{(2)}_k(d)$ is $d=-\infty$ on the event $\mathcal{R}_N^C$.
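As a concrete (toy) illustration of this objective, the sketch below evaluates a single-change analogue of $\hat{X}^{(2)}_k$ on noiseless data; the function name, setup, and parameter values are ours, not the paper's. On noiseless data the objective reduces to $|d|\,|\Delta|/2$ and is minimized exactly at $d=0$:

```python
def xhat(Y, tau, nu0, nu1, d):
    """Toy single-change version of the second-stage objective:
    sgn(nu1-nu0) * sum_j (Y_j - (nu0+nu1)/2) * (1{j<=tau+d} - 1{j<=tau}),
    with 1-based observation index j (illustrative, not the paper's code)."""
    sgn = 1 if nu1 > nu0 else -1
    mid = (nu0 + nu1) / 2.0
    return sgn * sum(
        (Y[j - 1] - mid) * ((j <= tau + d) - (j <= tau))
        for j in range(1, len(Y) + 1)
    )

# noiseless data: level nu0 up to tau, level nu1 afterwards
nu0, nu1, tau = 0.0, 2.0, 10
Y = [nu0] * tau + [nu1] * 10
vals = {d: xhat(Y, tau, nu0, nu1, d) for d in range(-5, 6)}
```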
Using this definition, for all sufficiently large $N$, the event $\{ \hat{J}=J,\, \lambda_2(\tau_k^{(2)},\hat{\tau}_k^{(2)})=j_k\text{ for }k=1,...,J \}\cap\mathcal{R}_N$ is equivalent to the event $$\left\{ \underset{d\in \mathbb{Z}}{\arg\min}\,\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right\}.$$ \newline \newline Further, for any integer $M>0$, we can easily obtain convergence properties by restricting the function $\hat{X}^{(2)}_k$ to the set $\{-M,-(M-1),...,M \}$. For sufficiently large $N$, when event $\mathcal{R}_N$ occurs, we have for any $d\in \{ -M,...,M \}$, and for all $k=1,...,J$, \begin{align} \hat{X}^{(2)}_k(d)= \text{sgn}(\Delta_k)\underset{j\in S^{(2)}(\hat{\tau}^{(1)}_k)}{\sum}\left(Y_j-\frac{\hat{\nu}^{(1)}_{k}+\hat{\nu}^{(1)}_{k-1}}{2}\right) \left(1\left(j\leq \pi_2^{-1}(\pi_2(\tau_k^{(2)})+d)\right)-1\left(j\leq \tau_k^{(2)}\right) \right)\nonumber\\ =\begin{cases} \text{sgn}(\Delta_k)\sum\limits_{j=\pi_2(\tau_k^{(2)})+1}^{\pi_2(\tau_k^{(2)})+d}\left( Y_{\pi_2^{-1}(j)}-\nu_k+\frac{\nu_k-\nu_{k-1}}{2}+\frac{1}{2}\left( \nu_k-\hat{\nu}_k^{(1)}+\nu_{k-1}-\hat{\nu}_{k-1}^{(1)}\right)\right)\quad &\text{for }d>0\\ 0 &\text{for }d=0\\ -\text{sgn}(\Delta_k)\sum\limits_{j=\pi_2(\tau_k^{(2)})+d+1}^{\pi_2(\tau_k^{(2)})}\left( Y_{\pi_2^{-1}(j)}-\nu_{k-1}+\frac{\nu_{k-1}-\nu_{k}}{2}+\frac{1}{2}\left( \nu_k-\hat{\nu}_k^{(1)}+\nu_{k-1}-\hat{\nu}_{k-1}^{(1)}\right)\right) & \text{for }d<0 \end{cases}\nonumber\\ =\begin{cases} \frac{d|\Delta_k|}{2}+\text{sgn}(\Delta_k)\left(\sum\limits_{j=\pi_2(\tau_k^{(2)})+1}^{\pi_2(\tau_k^{(2)})+d}\varepsilon_{\pi_2^{-1}(j)}+\frac{d}{2}\left( \nu_k-\hat{\nu}_k^{(1)}+\nu_{k-1}-\hat{\nu}_{k-1}^{(1)}\right)\right)\quad &\text{for }d>0\\ 0 &\text{for }d=0\\ -d\frac{|\Delta_k|}{2}-\text{sgn}(\Delta_k)\left(\sum\limits_{j=\pi_2(\tau_k^{(2)})+d+1}^{\pi_2(\tau_k^{(2)})}\varepsilon_{\pi_2^{-1}(j)}+\frac{d}{2}\left( \nu_k-\hat{\nu}_k^{(1)}+\nu_{k-1}-\hat{\nu}_{k-1}^{(1)}\right)\right) &\text{for }d<0 \end{cases}\nonumber\\
\end{align} Because $\max\limits_{i=0,\dots,J}|\hat{\nu}^{(1)}_i-\nu_i|\leq \rho_N$ under $\mathcal{R}_N$, this gives the uniform bound \begin{eqnarray} \left| \frac{d}{2}\left( \nu_k-\hat{\nu}_k^{(1)}+\nu_{k-1}-\hat{\nu}_{k-1}^{(1)}\right)\right| \leq M\rho_N \end{eqnarray} for all $k$ and $d$. The right side of the above inequality converges to 0 since $\rho_N\to 0$, and because all of this occurs with probability $\mathbb{P}[\mathcal{R}_N]\to 1$, this shows that the $\hat{X}^{(2)}_k(d)$'s all jointly converge. Specifically, let $\varepsilon^*_{j,k}$ for $j=0,..,J$ and $k\in\mathbb{Z}$ be random variables such that $\{\varepsilon^*_{j,k}\}_{k\in\mathbb{Z}}$ are iid $\mathcal{E}_j$ variables for $j=0,\dots,J$. Next, we denote the random walks \begin{eqnarray}\label{eq:generalizedrandomwalk} X^*_{j}(d)=\begin{cases} d\frac{|\Delta_j|}{2}+\text{sgn}(\Delta_j)\sum\limits_{l=1}^d\varepsilon^*_{j,l}\quad &\text{for }d>0\\ 0 &\text{for }d=0\\ -d\frac{|\Delta_j|}{2}-\text{sgn}(\Delta_j)\sum\limits_{l=d+1}^{0}\varepsilon^*_{j,l} &\text{for }d<0 \end{cases} \end{eqnarray} (note that if the error terms are iid $N(0,\sigma^2)$, the random walk $X^*_{k}(d)$ has precisely the same distribution as the random walk $\sigma X_{\Delta_k/\sigma}(d)$).
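The limiting walk in (\ref{eq:generalizedrandomwalk}) is straightforward to simulate; the sketch below (our own illustration, with Gaussian errors and arbitrary $\Delta$, $\sigma$, $M$) builds $X^*(d)$ on a finite window. The outward drift $|\Delta|/2$ in both directions is what makes the argmin concentrate near $0$:

```python
import random

def xstar(delta, eps_pos, eps_neg):
    """Two-sided drifted walk of the displayed equation on d = -M..M:
    X(0) = 0, and each step away from 0 adds |delta|/2 plus a signed error.
    (Illustrative sketch, not the paper's code.)"""
    sgn = 1 if delta > 0 else -1
    M = len(eps_pos)
    X, s = {0: 0.0}, 0.0
    for d in range(1, M + 1):          # right branch, d > 0
        s += eps_pos[d - 1]
        X[d] = d * abs(delta) / 2 + sgn * s
    s = 0.0
    for d in range(1, M + 1):          # left branch, d < 0
        s += eps_neg[d - 1]
        X[-d] = d * abs(delta) / 2 - sgn * s
    return X

random.seed(0)
M, delta, sigma = 30, 2.0, 1.0
X = xstar(delta,
          [random.gauss(0, sigma) for _ in range(M)],
          [random.gauss(0, sigma) for _ in range(M)])
argmin = min(X, key=lambda d: X[d])
```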
We have the joint weak convergence \begin{eqnarray}\label{eq:finitestochproc} &&\begin{pmatrix} \hat{X}^{(2)}_1(-M), & \hat{X}^{(2)}_1(-M+1), &\dots, &\hat{X}^{(2)}_1(M),\\ \hat{X}^{(2)}_2(-M), & \hat{X}^{(2)}_2(-M+1), &\dots, & \hat{X}^{(2)}_2(M),\\ \hdotsfor{4}, \\ \hat{X}^{(2)}_J(-M), &\hat{X}^{(2)}_J(-M+1), &\dots,& \hat{X}^{(2)}_J(M) \end{pmatrix} \quad\Rightarrow\quad\nonumber\\ &&\begin{pmatrix} X_{1}^*(-M), & X_{1}^*(-M+1), &\dots, &X_{1}^*(M),\\ X_{2}^*(-M), & X_{2}^*(-M+1), &\dots, & X_{2}^*(M),\\ \hdotsfor{4}, \\ X_{J}^*(-M), &X_{J}^*(-M+1), &\dots,& X_{J}^*(M) \end{pmatrix} \end{eqnarray} \indent Define $L^*_{j}:=\underset{k\in\mathbb{Z}}{\arg\min}X_{j}^*(k)$, and $L^{*(M)}_{j}:=\underset{|k|\leq M}{\arg\min}X^*_{j}(k)$ for $j=1,\dots,J$ (note that if the error terms of the data sequence are $N(0,1)$, $L^*_{k}$ has the same distribution as $L_{\Delta_k}$). We have the joint weak convergence \begin{eqnarray} \left( \underset{|d|\leq M}{\arg\min}\hat{X}^{(2)}_1(d),\dots,\underset{|d|\leq M}{\arg\min}\hat{X}^{(2)}_J(d)\right) \quad\Rightarrow\quad \left( L_{1}^{*(M)},\dots, L^{*(M)}_{J}\right) \end{eqnarray} by the continuous mapping theorem, because $\arg\min$ is a continuous function on $\mathbb{R}^{2M+1}$ (except when at least two of the coordinates are equal, which has probability 0 if the error terms have densities). \newline \newline \indent Next, we establish that $\mathbb{P}\left[ \hat{J}=J,\, \lambda_2(\tau_k^{(2)},\hat{\tau}_k^{(2)})=j_k\text{ for }k=1,...,J \right]$ converges to the product of $\mathbb{P}[ L_{k}^*=j_k ]$ for $k=1,...,J$. We will do this by showing that for any fixed $\epsilon>0$, the absolute difference between the two is smaller than $\epsilon$ for all large $N$. As in the proof of the single change point problem, this is accomplished through three main inequalities.
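The three inequalities all instantiate one elementary fact: if $T^{(K)}$ is the same argmin restricted to $|d|\leq K$ (ties broken toward the smallest index), then $T=T^{(K)}$ on $\{|T|\leq K\}$, and hence $|\mathbb{P}[T=j]-\mathbb{P}[T^{(K)}=j]|\leq\mathbb{P}[|T|>K]$. A brute-force check on a tiny discrete walk (a toy of ours, exact enumeration over all error signs):

```python
from itertools import product

def argmin_window(vals, window):
    """argmin of d -> vals[d] over `window`, ties to the smallest d."""
    return min(sorted(window), key=lambda d: vals[d])

M, K, drift = 3, 1, 0.4
full, win = range(-M, M + 1), range(-K, K + 1)
p_full, p_win, p_out = {}, {}, 0.0
signs = list(product([-1.0, 1.0], repeat=2 * M))   # 64 equally likely outcomes
for s in signs:
    vals, acc = {0: 0.0}, 0.0
    for d in range(1, M + 1):                      # right branch
        acc += s[d - 1]
        vals[d] = d * drift + acc
    acc = 0.0
    for d in range(1, M + 1):                      # left branch
        acc += s[M + d - 1]
        vals[-d] = d * drift + acc
    T = argmin_window(vals, full)
    TK = argmin_window(vals, win)
    p_full[T] = p_full.get(T, 0.0) + 1.0 / len(signs)
    p_win[TK] = p_win.get(TK, 0.0) + 1.0 / len(signs)
    if abs(T) > K:
        p_out += 1.0 / len(signs)
```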
\newline \newline \textbf{First Inequality}: From the result of Theorem \ref{thm:multiorder}, and by the fact that $\mathbb{P}[\mathcal{R}_N]\to1$, we can find an integer $K_0$ greater than $\max_k |j_k|$, such that for any $K_1\geq K_0$, we have for sufficiently large $N$ \begin{eqnarray} \mathbb{P}\left[ \hat{J}=J,\, \max\limits_{k=1,...,J}\left| \hat{\tau}^{(2)}_k-\tau_k\right| \leq (2K_1+2),\;\mathcal{R}_N\right] \geq 1-\frac{\epsilon}{4} \end{eqnarray} For all sufficiently large $N$, $\left\{\hat{J}=J,\, \max\limits_{k=1,...,J}\left| \hat{\tau}^{(2)}_k-\tau_k\right| \leq (2K_1+2)\right\}\cap\mathcal{R}_N$ implies the event $$\left\{\hat{J}=J,\,\max\limits_{k=1,..,J}\left|\lambda_2\left(\tau_k^{(2)},\hat{\tau}^{(2)}_k \right)\right|\leq K_1\right\}\cap\mathcal{R}_N$$ and hence \begin{eqnarray} 1-\frac{\epsilon}{4}&\leq& \mathbb{P}\left[\hat{J}=J,\,\max\limits_{k=1,..,J}\left|\lambda_2\left(\tau_k^{(2)},\hat{\tau}^{(2)}_k \right)\right|\leq K_1,\;\mathcal{R}_N\right]\nonumber\\ &=& \mathbb{P}\left[\hat{J}=J;\; \max_{k=1,...,J}\left| \underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)\right|\leq K_1\right] \end{eqnarray} Now \begin{eqnarray} &&\underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\nonumber\\ &\longleftrightarrow& \underset{|d|\leq K_1}{\arg\min}\hat{X}^{(2)}_k(d)=j_k\text{ for } k=1,...,J,\text{ and }\max_{k=1,...,J}\left| \underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)\right|\leq K_1 \end{eqnarray} With steps very similar to those used in (\ref{eqrep1}), it can be shown that \begin{eqnarray}\label{firstineq} &&\left| \mathbb{P}\left[ \underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right] -\mathbb{P}\left[ \underset{|d|\leq K_1}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right]\right|\nonumber\\ &\leq & \mathbb{P}\left[ \max_{k=1,...,J}\left| \underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)\right|> K_1 \right]\nonumber\\ &\qquad&\leq \epsilon/4 \end{eqnarray} \textbf{Second
Inequality} We can find some integer $K_2>K_0$ such that \begin{eqnarray} \mathbb{P}\left[ \max_{k=1,..,J}|L_{k}^*|\leq K_2\right]\geq 1-\frac{\epsilon}{4} \end{eqnarray} Now $L_{k}^*=j_k$ for $k=1,...,J$ if and only if both $L_{k}^{*(K_2)}=j_k$ for $k=1,...,J$ and $\max_k|L_{k}^*|\leq K_2$. With steps very similar to those in (\ref{eqrep2}), we have \begin{eqnarray}\label{secondineq} &&\left| \mathbb{P}\left[ L_{k}^*=j_k\text{ for }k=1,...,J\right]-\mathbb{P}\left[ L^{*(K_2)}_{k}=j_k\text{ for }k=1,...,J\right]\right|\\ &\leq & \mathbb{P}\left[ \max_{k=1,..,J}|L_{k}^*|> K_2\right]\nonumber\\ &\leq &\epsilon/4 \end{eqnarray} \textbf{Third Inequality} By weak convergence, we have \begin{eqnarray}\label{thirdineq} \left| \mathbb{P}\left[ L_{k}^{*(K_2)}=j_k\text{ for }k=1,...,J\right]-\mathbb{P}\left[ \underset{|d|\leq K_2}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right]\right|\leq \frac{\epsilon}{4} \end{eqnarray} for all sufficiently large $N$. \newline \newline Combining the inequalities in (\ref{firstineq}), (\ref{secondineq}), and (\ref{thirdineq}) gives \begin{eqnarray} &&\left| \mathbb{P}\left[ L_{k}^*=j_k\text{ for }k=1,...,J\right]-\mathbb{P}\left[ \hat{J}=J,\, \lambda_2\left( \tau_k^{(2)},\hat{\tau}_k^{(2)}\right)=j_k\text{ for }k=1,..,J,\;\mathcal{R}_N\right]\right|\nonumber\\ &=& \left| \mathbb{P}\left[ L_{k}^*=j_k\text{ for }k=1,...,J\right]- \mathbb{P}\left[ \underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right]\right|\nonumber\\ &\leq & \left| \mathbb{P}\left[ \underset{d\in\mathbb{Z}}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right] -\mathbb{P}\left[ \underset{|d|\leq K_2}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right]\right|\nonumber\\ &+& \left| \mathbb{P}\left[ L_{k}^{*(K_2)}=j_k\text{ for }k=1,...,J\right]-\mathbb{P}\left[ \underset{|d|\leq K_2}{\arg\min}\hat{X}^{(2)}_k(d)=j_k \text{ for }k=1,...,J\right]\right|\nonumber\\ &+& \left| \mathbb{P}\left[ L_{k}^*=j_k\text{ for
}k=1,...,J\right]-\mathbb{P}\left[ L^{*(K_2)}_{k}=j_k\text{ for }k=1,...,J\right]\right|\nonumber\\ &\qquad &\leq 3\epsilon/4 \end{eqnarray} Additionally, for sufficiently large $N$ we have $\mathbb{P}[\mathcal{R}_N^C]<\epsilon/4$ and hence \begin{eqnarray} &&\Bigg| \mathbb{P}\left[ \hat{J}=J,\, \lambda_2\left( \tau_k^{(2)},\hat{\tau}_k^{(2)}\right)=j_k\text{ for }k=1,..,J,\;\mathcal{R}_N\right]\nonumber\\ &-&\mathbb{P}\left[ \hat{J}=J,\, \lambda_2\left( \tau_k^{(2)},\hat{\tau}_k^{(2)}\right)=j_k\text{ for }k=1,..,J\right] \Bigg|\nonumber\\&<&\epsilon/4 \end{eqnarray} so that \begin{eqnarray} &&\Bigg| \mathbb{P}\left[ L_{k}^*=j_k\text{ for }k=1,...,J\right]-\mathbb{P}\left[ \hat{J}=J,\, \lambda_2\left( \tau_k^{(2)},\hat{\tau}_k^{(2)}\right)=j_k\text{ for }k=1,..,J\right]\Bigg|<\epsilon \end{eqnarray} for all sufficiently large $N$. \end{proof} \subsection{Proof of Theorem \ref{thm:reconsistentineq}}\label{sec:reconsistentproof} We will show that \begin{eqnarray} \mathbb{P}\left[ \hat{J}=J;\,\left|\hat{\tau}^{re}_j -\tau^{**}_j\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\;\forall j\right]\geq 1-\alpha_N+o(1) \end{eqnarray} \begin{proof} Let $\mathcal{R}_N$ be the event \begin{eqnarray} \left\{ \hat{J}=J;\, \max_{j=1,\dots,J}|\hat{\tau}^*_j-\tau^{**}_j|\leq w^*(N^*) ;\, \max_{j=0,\dots,J}|\hat{\nu}^{(1)}_j-\nu_j|\leq \rho_N \right\}.
\end{eqnarray} As in the proof of Theorem \ref{thm:increasingJasymprotics}, to prove \begin{eqnarray} &&\mathbb{P}\left[ \hat{J}=J;\, |\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\text{ for all }j=1,\dots,J \right]\nonumber\\ &\geq&1-\alpha_N+o(1) \end{eqnarray} it is sufficient to demonstrate that, for each $j=1,\dots,J$, \begin{eqnarray} &&\mathbb{P}\left[ |\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\;\forall j \right]\nonumber\\ &\geq& P_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)+o(1/J) \end{eqnarray} uniformly in $j$, for all $t_j$'s and $v_j$'s permissible within $\mathcal{R}_N$, which we will assume when we write $t_j$'s and $v_j$'s from here on. \newline \newline \indent We now try to bound the difference between $$ \mathbb{P}\left[|\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\text{ for all }j \right] $$ and $ P_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)$ for all $j$. Each estimator equals the argmin of a random walk: \begin{eqnarray} \hat{\tau}^{re}_j&:=&\underset{t\in [t_j\pm \hat{d}_j ]}{\arg\min}(\text{sgn}(v_j-v_{j-1}))\sum_{i\in [t_j\pm \hat{d}_j ]}\left( Y_i-\frac{v_j+v_{j-1}}{2} \right)\left[ 1(i\leq t)-1(i\leq \tau^{**}_j ) \right]\nonumber\\ &=:&\underset{t\in [t_j\pm \hat{d}_j ]}{\arg\min}\hat{X}_j(t).
\end{eqnarray} We compare this with the random walk \begin{eqnarray} X_j'(t):=\begin{cases} t\left(\frac{|\Delta_j|}{2}+\text{sgn}(\Delta_j)\hat{D}_j\right)+\text{sgn}(\Delta_j)\sum_{i=1}^t\varepsilon_{\tau^{**}_j+i}\quad &t>0\\ 0 & t=0\\ |t|\left(\frac{|\Delta_j|}{2}-\text{sgn}(\Delta_j)\hat{D}_j\right)-\text{sgn}(\Delta_j)\sum_{i=1}^{|t|}\varepsilon_{\tau^{**}_j-i+1} & t<0 \end{cases} \end{eqnarray} where $\hat{D}_j=\frac{v_j-\nu_j+v_{j-1}-\nu_{j-1}}{2}$. We have, for all sufficiently large $N$, $\hat{X}_j(t+\tau^{**}_j)$ equaling either $X_j'(t)-2|t|\hat{D}_j1(t>0)$ or $X_j'(t)-2|t|\hat{D}_j1(t<0)$ for all integers $t\in [t_j\pm \hat{d}_j]-\tau^{**}_j$. Regarding the set $[t_j\pm \hat{d}_j]-\tau^{**}_j$: for all large $N$ (such that $3w^*(N^*)\leq \delta^*_{N^*}/2$) this set contains the interval $[-\delta^*_{N^*}/2,\delta^*_{N^*}/2]$. This can be seen by the series of inequalities \begin{eqnarray} &&t_j-\hat{d}_j\leq \tau^{**}_j+w^*(N^*)-(\delta^*_{N^*}-2w^*(N^*))\leq \tau^{**}_j-\delta^*_{N^*}/2\nonumber\\ &<&\tau^{**}_j+\delta^*_{N^*}/2\leq \tau^{**}_j-w^*(N^*)+(\delta^*_{N^*}-2w^*(N^*))\leq t_j+\hat{d}_j. \end{eqnarray} Furthermore, since $\log(J/\alpha_N)=o(\delta^*_{N^*})$, we can use Lemma \ref{lem:probbound} to obtain, for all large $N$, \begin{eqnarray} &&\tau^{**}_j-\delta^*_{N^*}/2\leq \tau^{**}_j-\frac{1}{B}\log\left( \frac{AJ}{\alpha_N} \right)\leq\tau^{**}_j-Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\nonumber\\ &<&\tau^{**}_j+Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\leq \tau^{**}_j+\frac{1}{B}\log\left( \frac{AJ}{\alpha_N} \right)\leq \tau^{**}_j+\delta^*_{N^*}/2 \end{eqnarray} where $A$ and $B$ are some constants.
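The quantile containment above is the usual consequence of an exponential tail: if $\mathbb{P}[|L|>q]\leq Ae^{-Bq}$, then any $q\geq\frac{1}{B}\log(AJ/\alpha)$ already has tail mass at most $\alpha/J$, so the $(1-\alpha/J)$-quantile sits below that threshold. A quick numeric check (the constants $A$, $B$, $\alpha$, $J$ below are illustrative values of ours):

```python
import math

# If P[|L| > q] <= A * exp(-B * q), then q = ceil(log(A*J/alpha)/B)
# gives per-changepoint tail mass at most alpha/J, i.e.
# Q(1 - alpha/J) <= (1/B) * log(A*J/alpha).
A, B = 2.0, 0.5            # illustrative tail constants
alpha, J = 0.05, 20
q = math.ceil(math.log(A * J / alpha) / B)
tail_at_q = A * math.exp(-B * q)
```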
Therefore, we can apply Lemma \ref{lem:randomwalkcompoppo} to obtain the inequality \begin{eqnarray} &&\mathbb{P}\left[|\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\text{ for all }j \right]\nonumber\\ &=&\mathbb{P}\left[\left|\underset{t\in [t_j\pm \hat{d}_j]-\tau^{**}_j}{\arg\min}\hat{X}_j(t+\tau^{**}_j)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Bigg| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\text{ for all }j \right]\nonumber \end{eqnarray} \begin{eqnarray} &\geq &\mathbb{P}\left[\left|\underset{t\in [t_j\pm \hat{d}_j]-\tau^{**}_j}{\arg\min}\frac{1}{\sigma}X_j'(t)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\right]- C_1(|\underline{\Delta}|/2)|\hat{D}_j| \nonumber\\ &\geq & \mathbb{P}\left[\left|\underset{t\in [t_j\pm \hat{d}_j]-\tau^{**}_j}{\arg\min}\frac{1}{\sigma}X_j'(t)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\right]- C_1(|\underline{\Delta}|/2)\rho_N \end{eqnarray} for all large $N$, where $C_1(|\underline{\Delta}|/2)$ is some expression independent of $N$ and $j$. 
\newline \newline Next, use Lemma \ref{lem:twoXrandwalkcomp} to obtain \begin{eqnarray} &&\mathbb{P}\left[\left|\underset{t\in [t_j\pm \hat{d}_j]-\tau^{**}_j}{\arg\min}\frac{1}{\sigma}X_j'(t)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\right]\nonumber\\ &\geq& \mathbb{P}\left[\left|\underset{t\in [t_j\pm \hat{d}_j]-\tau^{**}_j}{\arg\min}X_{(|\Delta_j|+2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\right] \end{eqnarray} We then use Lemma \ref{lem:eqprob} twice to obtain \begin{eqnarray} &&\mathbb{P}\left[\left|\underset{t\in [t_j\pm \hat{d}_j]-\tau^{**}_j}{\arg\min}X_{(|\Delta_j|+2\rho_N)/\sigma}(t)\right|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\right]\nonumber\\ &\geq&P_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)-C_2(\underline{\Delta})\exp(-C_3(\underline{\Delta})\delta^*_{N^*}) \end{eqnarray} for some positive expressions $C_2(\underline{\Delta})$ and $C_3(\underline{\Delta})$ independent of $N$ and $j$.
Combining these inequalities, we conclude that \begin{eqnarray} &&\mathbb{P}\left[|\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\text{ for all }j \right]\nonumber\\ &\geq &P_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)-C_1(|\underline{\Delta}|/2)\rho_N-C_2(\underline{\Delta})\exp(-C_3(\underline{\Delta})\delta^*_{N^*}) \end{eqnarray} Therefore, \begin{eqnarray} &&\mathbb{P}\left[ |\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\; \forall j\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\;\forall j \right]\nonumber\\ &=&\mathbb{P}\left[ \underset{j=1,\dots,J}{\bigcap}|\hat{\tau}^{re}_j-\tau^{**}_j|\leq Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\;\forall j \right]\nonumber\\ &\geq &1-\sum_{j=1}^J\mathbb{P}\left[|\hat{\tau}^{re}_j-\tau^{**}_j|> Q_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)\Big| J=\hat{J}, \hat{\tau}^*_j=t_j,\hat{\nu}_j^{(1)}=v_j\text{ for all }j \right]\nonumber\\ &\geq &1-J\left[1-P_{(|\Delta_j|+2\rho_N)/\sigma}\left(1-\frac{\alpha_N}{J}\right)+\left(C_1(|\underline{\Delta}|/2)\rho_N+C_2(\underline{\Delta})\exp(-C_3(\underline{\Delta})\delta^*_{N^*})\right)\right]\nonumber\\ &\qquad &\geq 1-\alpha_N-J\left(C_1(|\underline{\Delta}|/2)\rho_N+C_2(\underline{\Delta})\exp(-C_3(\underline{\Delta})\delta^*_{N^*})\right) \end{eqnarray} which converges to 1 since $\alpha_N\to 0$, $J\rho_N\to 0$, and $\delta^*_{N^*}>C(N^*)^\eta$ for some positive constants $C$ and $\eta$.
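The last display is a Bonferroni calibration: each change point is given level $1-\alpha_N/J$, and since the achieved coverage at a $q$-quantile is at least $q$, the $J$ individual error probabilities sum to at most $\alpha_N$. A toy check with a geometric-tailed argmin distribution (the tail constants are ours, purely illustrative):

```python
import math

A, B = 2.0, 0.5                      # toy tail: P[|L| > q] = min(1, A e^{-Bq})

def coverage(q):                     # P[|L| <= q] in the toy model
    return max(0.0, 1.0 - A * math.exp(-B * q))

def quantile(level):                 # smallest integer q with coverage >= level
    q = 0
    while coverage(q) < level:
        q += 1
    return q

J, alpha = 20, 0.05
achieved = coverage(quantile(1 - alpha / J))   # coverage at the chosen quantile
```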
\end{proof} \subsection{Proof of Lemma \ref{lem:binsegsignalconsistent}}\label{sec:binsegsignalconsistentproof} In order to prove Lemma \ref{lem:binsegsignalconsistent}, we will rely on the following result: \begin{lemma} \label{cor:signalconsistent} (i) Suppose that we have an estimation scheme which, when applied to $Z_1,\dots, Z_{N^*}$, gives estimates $\hat{J}$ for $J$ and $\left( \hat{\tau}^*_1,\cdots,\hat{\tau}^*_{\hat{J}}\right)$ for $(\tau^*_1,\cdots,\tau^*_J)$ such that \begin{eqnarray} \mathbb{P}\left[ \hat{J}=J;\quad \max_{i=1,\cdots,J}|\hat{\tau}^*_j-\tau^*_j|\leq w^*(N^*) \right]\geq 1-B_{N^*} \end{eqnarray} for some sequences $w^*(N^*)$ and $B_{N^*}$, with $w^*(N^*)=o(\delta^*_{N^*})$ and $B_{N^*}\to 0$. Then, for any positive sequence $\{\rho^*_{N^*}\}$ such that $\frac{w^*(N^*)}{\delta^*_{N^*}}=o(\rho^*_{N^*})$, there exist constants $C_1$ and $C_2$ such that \begin{eqnarray}\label{pbound} \mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_i-\nu^*_i|\geq \rho^*_{N^*} \right]\leq B_{N^*}+C_1w^*(N^*) \frac{\exp\left[ -C_2\delta^*_{N^*}\rho^{* \,2}_{N^*}\right]}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}} \end{eqnarray} for all $i=0,\cdots,J$, when $N^*$ is sufficiently large. \newline (ii) Moreover, as a consequence of part (i), \begin{eqnarray}\label{eq:sigconsistent} &&\mathbb{P}\left[ \hat{J}=J;\quad \max_{i=0,\cdots,J}|\hat{\nu}^*_i-\nu_i^*|<\rho^*_{N^*}\right] \geq 1 - \left(\frac{N^*}{\delta^*_{N^*}} + 2\right)B_{N^*} \nonumber\\ &&-C_1 \left(\frac{N^*}{\delta^*_{N^*}} + 1 \right)\,w^*(N^*)\frac{\exp[-C_2\delta^*_{N^*} \rho^{*\,2}_{N^*}]}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}} \,. \end{eqnarray} It follows that if, in addition to the conditions in (i), $N^* B_{N^*}/\delta^*_{N^*}\to 0$ and $(N^* w^*(N^*)/\delta^{*\,3/2}_{N^*}\rho^*_{N^*}) =$ $o(\exp[C_2\delta^*_{N^*} \rho^{*\,2}_{N^*}])$, then the probability in (\ref{eq:sigconsistent}) goes to 1. The $\hat{\nu}^*_i$'s are simultaneously consistent if $\rho^*_{N^*}$ also converges to 0.
\end{lemma} \begin{proof} See Section \ref{sec:signalconsistentproof} \end{proof} In order to prove Lemma \ref{lem:binsegsignalconsistent}, it is sufficient to find a sequence $\rho_{N^*}^*\to 0$ such that $\rho_{N^*}^*$ satisfies all the conditions of Lemma \ref{cor:signalconsistent} and $J\rho_{N^*}^*\to 0$. The proof will proceed in this fashion. \begin{proof} \indent We first fix some notation. Since $\delta_N \geq CN^{1-\Xi}$ by (M3), we must have $J\leq C'N^\Lambda$ for some $C'>0$ and $\Lambda\in [0,\Xi]$. Using this notation, we will show that by setting $\rho^*_{N^*}=(N^*)^\theta$ where $\theta$ is chosen to be any value in $\Big( (3\Xi/\gamma-1)\vee (-3/8),-\frac{\Lambda}{\gamma} \Big)$, we will have a $\rho^*_{N^*}\to 0$ which satisfies the conditions of Lemma \ref{cor:signalconsistent} and $J\rho^*_{N^*}\to 0$. \newline \newline \indent We must verify that $\Big( (3\Xi/\gamma-1)\vee (-3/8),-\frac{\Lambda}{\gamma} \Big)$ is a nonempty set by showing that both $(3\Xi/\gamma-1)$ and $-3/8$ are strictly smaller than $-\frac{\Lambda}{\gamma}$. First, due to condition (M7 (BinSeg)) we know that $\Xi/\gamma<1/4$, and therefore \begin{eqnarray} &&\frac{3\Xi}{\gamma}+\frac{\Lambda}{\gamma} \leq \frac{4\Xi}{\gamma}<1\nonumber\\ &\Rightarrow& \frac{3\Xi}{\gamma}-1<-\frac{\Lambda}{\gamma}, \end{eqnarray} and additionally, \begin{eqnarray} -\frac{3}{8}<-\frac{1}{7}\leq -\frac{\Xi}{\gamma}\leq -\frac{\Lambda}{\gamma}. \end{eqnarray} Therefore, it is possible to choose some value of $\theta$ within the set $\Big( (3\Xi/\gamma-1)\vee (-3/8),-\frac{\Lambda}{\gamma} \Big)$. \newline \newline \indent To verify that $J\rho_{N^*}^*\to 0$, we first note $$J\rho_{N^*}^*\lesssim N^\Lambda (N^*)^\theta\lesssim N^{\Lambda+\gamma\theta}.$$ The rightmost term goes to 0 because $\theta<-\Lambda/\gamma$. \newline \newline \indent To show that the BinSeg estimators satisfy the conditions of Lemma \ref{cor:signalconsistent}, we proceed as follows.
First note that $\delta^*_{N^*}\geq C_1(N^*)^{1-\Xi/\gamma}$ for some $C_1>0$, where $\Xi/\gamma < 1/4$. For some positive constant $C_2$, set $w^*(N^*)= C_2 E_{N^*} = C_2\left(\frac{N^*}{\delta^*_{N^*}}\right)^2\log(N^*)$. Then, there is a positive constant $C_4$ such that $w^*(N^*) \leq (C_4+o(1))(N^*)^{2\Xi/\gamma}\log(N^*)$. Set $B_{N^*}=C_5 /N^*$; this gives the following: \begin{itemize} \item since $(N^*)^{2\Xi/\gamma}\log(N^*)=o((N^*)^{1-\Xi/\gamma})$, the requirement $w^*(N^*)=o(\delta^*_{N^*})$ is satisfied; \item $\frac{N^*}{\delta^*_{N^*}}B_{N^*}= \frac{C_5}{\delta^*_{N^*}} \to 0$; \item since $3\Xi/\gamma-1<0$, and because $\rho^*_{N^*}=(N^*)^\theta$ for some $\theta$ satisfying $(3\Xi/\gamma-1)\vee (-3/8)<\theta<0$, this $\rho^*_{N^*}\to 0$ and satisfies: \begin{itemize} \item $\frac{w^*(N^*)}{\delta^*_{N^*}}\leq C_6(N^*)^{3\Xi/\gamma-1}\log(N^*)$ for some $C_6>0$; the latter expression is $o(\rho^*_{N^*})$; \item $N^* w^*(N^*)/(\delta^{*\,3/2}_{N^*}\rho^*_{N^*}) \leq C_7(N^*)^{((7\Xi/\gamma - 1)/2)\,- \theta}$ and $\exp[C_2\delta^*_{N^*} (\rho^*_{N^*})^2] \geq \exp[C_8 (N^*)^{1-\Xi/\gamma + 2\theta}]$, for some positive $C_7$, $C_8$; as $1-\Xi/\gamma + 2\theta>3/4+2\theta > 0$ it follows that $N^* w^*(N^*)/(\delta^{*\,3/2}_{N^*}\rho^*_{N^*}) =o(\exp[C_2\delta^*_{N^*} (\rho^*_{N^*})^2])$. \end{itemize} \end{itemize} Therefore, all conditions of Lemma \ref{cor:signalconsistent} for a sequence $\rho^*_{N^*}$ tending to 0 are satisfied. Next, combining the results of Theorem \ref{frythm} and Lemma \ref{cor:signalconsistent}, we establish the simultaneous consistency of $\hat{J}$, the $\hat{\tau}_i$'s, and the $\hat{\nu}_i$'s.
Specifically, under conditions (M1) to (M6 (BinSeg)), we can combine the two limit results \begin{eqnarray} &&\mathbb{P}\left[\hat{J}=J;\quad \max_{i=1,...,J}|\hat{\tau}^*_i-\tau^*_i|\leq C E_{N^*}\right]\to 1\nonumber\\ &&\mathbb{P}\left[\hat{J}=J;\quad \max_{i=0,...,J}|\hat{\nu}^*_i-\nu^*_i|\leq \rho^*_{N^*}\right]\to 1\nonumber\\ &&\text{for any }\rho^*_{N^*}=(N^*)^\theta\text{, where }\theta\in\left( (3\Xi/\gamma-1)\vee\left( -\frac{3}{8} \right),\, -\frac{\Lambda}{\gamma} \right) \end{eqnarray} to get the following through the Bonferroni inequality: \begin{eqnarray}\label{eq:binseg1stconsistentproof} \mathbb{P}\left[\hat{J}=J;\quad \max_{i=1,...,J}|\hat{\tau}^*_i-\tau^*_i|\leq C E_{N^*};\quad \max_{i=0,...,J}|\hat{\nu}^*_i-\nu^*_i|\leq \rho^*_{N^*}\right]\to 1 \end{eqnarray} as $N^*\to\infty$. \end{proof} \subsection{Proof of Lemma \ref{cor:signalconsistent}}\label{sec:signalconsistentproof} \begin{proof} First we focus on part (i). Consider separately the cases $i=0$ or $i=J$ (Case 1), and $0<i<J$ (Case 2). We know \begin{equation} \mathbb{P}\Big[ \hat{J}=J;\quad \max_{j=1,...,J} |\hat{\tau}_j^*-\tau_j^*|\leq w^*(N^*) \Big]\geq 1-B_{N^*} \end{equation} for some sequences $w^*(N^*)$ and $B_{N^*}$ which are $o(\delta_{N^*}^*)$, and $o(1)$, respectively. As in the statement of the lemma, assume there is some sequence $\rho^*_{N^*}$ such that $w^*({N^*})/\delta_{N^*}^*=o(\rho^*_{N^*})$. For the rest of the proof assume ${N^*}$ is large enough so that \begin{itemize} \item $1<w^*({N^*})< \frac{\delta^*_{N^*}}{6}$ \item$6\frac{w^*({N^*})}{\delta^*_{N^*}}\bar{\theta}<\frac{\rho^*_{N^*}}{2}$ \item $\delta_{N^*}^*>3$ \end{itemize} \noindent \textbf{Case 1}: For $i=0$, $\hat{\nu}^*_0$ is the average of all $Z_t$'s where $t$ lies between 1 and $\hat{\tau}^*_1$, inclusive.
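The repeated Gaussian tail estimates in the remainder of this proof all rest on the Mills-ratio bound $1-\Phi(x)\leq \phi(x)/x$ for $x>0$. A quick stdlib check of this standard inequality (our own verification snippet):

```python
import math

def phi(x):                 # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def upper_tail(x):          # 1 - Phi(x), via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Mills-ratio bound: 1 - Phi(x) <= phi(x)/x for every x > 0
checks = [(x, upper_tail(x), phi(x) / x) for x in (0.5, 1.0, 2.0, 5.0)]
```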
We have \begin{eqnarray} &&\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_0-\nu^*_0|\geq \rho^*_{N^*} \right]\nonumber\\ &\leq&\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\tau}^*_1-\tau^*_1|>w^*({N^*})\right]+\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_0-\nu^*_0|\geq \rho^*_{N^*};\quad |\hat{\tau}^*_1-\tau^*_1|\leq w^*({N^*}) \right] \nonumber\\ &\leq& B_{N^*}+\sum_{\tau:\,|\tau-\tau^*_1|\leq w^*({N^*})}\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_0-\nu^*_0|\geq \rho^*_{N^*};\quad \hat{\tau}^*_1=\tau \right]\nonumber\\ &\leq& B_{N^*}+\sum_{\tau:\,|\tau-\tau^*_1|\leq w^*({N^*})}\mathbb{P}\left[ \hat{J}=J;\quad\hat{\tau}^*_1=\tau;\quad \left|\frac{1}{\tau}\sum_{j=1}^\tau Z_j-\nu^*_0\right|\geq \rho^*_{N^*} \right]\nonumber\\ &\qquad&\leq B_{N^*}+\sum_{\tau:\,|\tau-\tau^*_1|\leq w^*({N^*})}\mathbb{P}\left[\left|\frac{1}{\tau}\sum_{j=1}^\tau (Z_j-\nu^*_0)\right|\geq \rho^*_{N^*} \right] \end{eqnarray} For all $\tau^*_1-w^*({N^*})\leq\tau\leq\tau^*_1$, we have $\frac{1}{\tau}\sum_{j=1}^\tau (Z_j-\nu^*_0)\sim N(0,\sigma^2/\tau)$, and hence \begin{eqnarray}\label{lowerpb1} \mathbb{P}\left[\left|\frac{1}{\tau}\sum_{j=1}^\tau (Z_j-\nu^*_0)\right|\geq \rho^*_{N^*} \right] &=& 2\left(1-\Phi(\sqrt{\tau}\rho^*_{N^*}/\sigma)\right)\nonumber\\ &\leq& 2\frac{\phi(\sqrt{\tau}\rho^*_{N^*}/\sigma)}{\sqrt{\tau}\rho^*_{N^*}/\sigma}\nonumber\\ &\leq& \sqrt{\frac{2}{\pi}}\cdot\frac{\exp\Big( -(\tau^*_1-w^*({N^*}))(\rho^*_{N^*})^2/(2\sigma^2)\Big)}{\sqrt{\tau^*_1-w^*({N^*})}\rho^*_{N^*}/\sigma}\nonumber\\ &\leq & \frac{2\sigma}{\sqrt{\pi}}\cdot\frac{\exp\Big( -\delta^*_{N^*}(\rho^*_{N^*})^2/(4\sigma^2)\Big)}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}}\nonumber\\ &&\left(\text{by }\tau^*_1-w^*({N^*})>\delta^*_{N^*}/2\right) \end{eqnarray} For all $\tau^*_1<\tau\leq \tau^*_1+w^*({N^*})$ we have $\frac{1}{\tau}\sum_{j=1}^\tau (Z_j-\nu^*_0)\sim N\left(\frac{\tau-\tau^*_1}{\tau}(\nu^*_1-\nu^*_0),\frac{\sigma^2}{\tau}\right)$.
Because \begin{eqnarray} \left|\frac{\tau-\tau^*_1}{\tau}(\nu^*_1-\nu^*_0)\right|\leq \frac{w^*({N^*})}{\delta^*_{N^*}}(2\bar{\theta})\leq \frac{\rho^*_{N^*}}{2}, \end{eqnarray} the magnitude of the z-score of both $\pm\rho^*_{N^*}$ for the $N\left(\frac{\tau-\tau^*_1}{\tau}(\nu^*_1-\nu^*_0),\frac{\sigma^2}{\tau}\right)$ distribution is at least $\frac{\rho^*_{N^*}\sqrt{\tau}}{2\sigma}$, and hence \begin{eqnarray} \mathbb{P}\left[\left|\frac{1}{\tau}\sum_{j=1}^\tau (Z_j-\nu^*_0)\right|\geq \rho^*_{N^*} \right]&\leq& 2\left(1-\Phi\left(\frac{\rho^*_{N^*}\sqrt{\tau}}{2\sigma}\right)\right)\nonumber\\ &\leq& 2\frac{\phi\left(\frac{\rho^*_{N^*}\sqrt{\tau}}{2\sigma}\right)}{\frac{\rho^*_{N^*}\sqrt{\tau}}{2\sigma}}\nonumber\\ &\leq& \frac{2\sigma\sqrt{2}}{\sqrt{\pi}}\cdot\frac{\exp\left(-\delta^*_{N^*}(\rho^*_{N^*})^2/(8\sigma^2)\right)}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*} } \end{eqnarray} Therefore, the expression (\ref{lowerpb1}) can be bounded from above by \begin{align}\label{bb1} & B_{N^*}+(w^*({N^*})+1)\frac{2\sigma}{\sqrt{\pi}}\cdot\frac{\exp\Big( -\delta^*_{N^*}(\rho^*_{N^*})^2/(4\sigma^2)\Big)}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}}+w^*({N^*})\frac{2\sigma\sqrt{2}}{\sqrt{\pi}}\cdot\frac{\exp\left(-\delta^*_{N^*}(\rho^*_{N^*})^2/(8\sigma^2)\right)}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}} \nonumber\\ &\qquad \qquad\leq B_{N^*}+\frac{6\sigma\sqrt{2}}{\sqrt{\pi}}\cdot\frac{w^*({N^*})\exp\left(-\delta^*_{N^*}(\rho^*_{N^*})^2/(8\sigma^2)\right)}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}}.\nonumber\\ \end{align} For $i=J$, a very similar argument will bound $\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_J-\nu^*_J|\geq \rho^*_{N^*} \right]$ by the same expression in (\ref{bb1}). \newline \newline \noindent \textbf{Case 2}: The procedure for this case will be similar to the steps for Case 1, but there are a few modifications. For $0<i<J$, $\hat{\nu}^*_i$ is the average of all $Z_t$'s for $\hat{\tau}^*_i<t\leq \hat{\tau}^*_{i+1}$.
For the following part we re-write this average by considering the midpoint $\tau_i^{*(m)}:=\left\lceil \frac{\tau^*_i+\tau^*_{i+1}}{2}\right\rceil$ where $0<i<J$.\newline \newline \indent In the case where $\hat{\tau}^*_i$ and $\hat{\tau}^*_{i+1}$ are within $\delta^*_{N^*}/3$ (which is less than $|\tau^*_{i+1}-\tau^*_i|/3$) of $\tau^*_i$ and $\tau^*_{i+1}$ respectively, we have $\hat{\tau}^*_i < \tau_i^{*(m)} <\hat{\tau}^*_{i+1}$, and hence we can bound $|\hat{\nu}^*_i-\nu^*_i|$ by \begin{align} &\left|\frac{1}{\hat{\tau}^*_{i+1}-\hat{\tau}^*_i}\sum_{j=\hat{\tau}^*_i+1}^{\hat{\tau}^*_{i+1}}(Z_j-\nu^*_i)\right|\nonumber\\ &= \left|\frac{\tau_i^{*(m)}-\hat{\tau}^*_{i}}{\hat{\tau}^*_{i+1}-\hat{\tau}^*_i}\left( \frac{1}{\tau_i^{*(m)}-\hat{\tau}^*_{i}}\sum_{j= \hat{\tau}^*_i+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right)+\frac{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}{\hat{\tau}^*_{i+1}-\hat{\tau}^*_i}\left( \frac{1}{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\hat{\tau}^*_{i+1}}(Z_j-\nu^*_i)\right)\right| \nonumber\\ &\leq\frac{\tau_i^{*(m)}-\hat{\tau}^*_{i}}{\hat{\tau}^*_{i+1}-\hat{\tau}^*_i}\left| \frac{1}{\tau_i^{*(m)}-\hat{\tau}^*_{i}}\sum_{j= \hat{\tau}^*_i+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|+\frac{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}{\hat{\tau}^*_{i+1}-\hat{\tau}^*_i}\left| \frac{1}{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\hat{\tau}^*_{i+1}}(Z_j-\nu^*_i)\right|\nonumber\\ \end{align} In order for $|\hat{\nu}^*_i-\nu^*_i|$ to exceed $\rho^*_{N^*}$, at least one of $\left| \frac{1}{\tau_i^{*(m)}-\hat{\tau}^*_{i}}\sum_{j= \hat{\tau}^*_i+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|$ or \newline $\left| \frac{1}{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\hat{\tau}^*_{i+1}}(Z_j-\nu^*_i)\right|$ must exceed $\rho^*_{N^*}$, or in other words, \begin{align}\label{bbp2} &\quad\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_i-\nu^*_i|\geq \rho^*_{N^*}\right]\nonumber\\ &\leq \mathbb{P}\left[\hat{J}=J;\quad |\hat{\tau}^*_i-\tau^*_i|>w^*({N^*})\quad\text{
or }\quad|\hat{\tau}^*_{i+1}-\tau^*_{i+1}|>w(N)\right]+\nonumber\\ &\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\tau}^*_i-\tau^*_i|\leq w^*({N^*});\quad |\hat{\tau}^*_{i+1}-\tau^*_{i+1}|\leq w^*({N^*});\quad |\hat{\nu}^*_i-\nu^*_i|\geq\rho^*_{N^*}\right]\nonumber\\ &\leq B_{N^*}+\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\tau}^*_i-\tau^*_i|\leq w^*({N^*});\quad |\hat{\tau}^*_{i+1}-\tau^*_{i+1}|\leq w^*({N^*});\quad \left| \frac{1}{\tau_i^{*(m)}-\hat{\tau}^*_{i}}\sum_{j= \hat{\tau}^*_i+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|\geq \rho^*_{N^*}\right]\nonumber\\ &+\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\tau}^*_i-\tau^*_i|\leq w^*({N^*});\quad |\hat{\tau}^*_{i+1}-\tau^*_{i+1}|\leq w^*({N^*});\quad \left| \frac{1}{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\hat{\tau}^*_{i+1}}(Z_j-\nu^*_i)\right|\geq\rho^*_{N^*}\right]\nonumber\\ &\leq B_{N^*}+\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\tau}^*_i-\tau^*_i|\leq w^*({N^*});\quad \left| \frac{1}{\tau_i^{*(m)}-\hat{\tau}^*_{i}}\sum_{j= \hat{\tau}^*_i+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|\geq \rho^*_{N^*}\right]\nonumber\\ &+\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\tau}^*_{i+1}-\tau^*_{i+1}|\leq w^*({N^*});\quad \left| \frac{1}{\hat{\tau}^*_{i+1}-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\hat{\tau}^*_{i+1}}(Z_j-\nu^*_i)\right|\geq\rho^*_{N^*}\right]\nonumber\\ &\leq B_{N^*}+\sum_{\tau:\, |\tau-\tau^*_i|\leq w^*({N^*})}\mathbb{P}\left[ \left| \frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|\geq \rho^*_{N^*}\right]\nonumber\\ &+\sum_{\tau:\,|\tau-\tau^*_{i+1}|\leq w^*({N^*})}\mathbb{P}\left[ \left| \frac{1}{\tau-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\tau}(Z_j-\nu^*_i)\right|\geq\rho^*_{N^*}\right]\nonumber\\ \end{align} Next, we will bound $\mathbb{P}\left[ \left| \frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|\geq \rho^*_{N^*}\right]$ for each $\tau$ such that $|\tau-\tau^*_i|\leq w^*({N^*})$. 
For $\tau^*_i\leq \tau\leq \tau^*_i+w^*({N^*})$ we have $\frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\sim N\left(0,\frac{\sigma^2}{\tau_i^{*(m)}-\tau}\right)$, and hence \begin{eqnarray}\label{bbbp1} \mathbb{P}\left[ \left| \frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|\geq \rho^*_{N^*}\right]&\leq& 2\left(1-\Phi\left( \rho^*_{N^*}\sigma^{-1}\sqrt{\tau_i^{*(m)}-\tau}\right)\right)\nonumber\\ &\leq& \frac{2\sigma\sqrt{3}}{\sqrt{\pi}}\cdot \frac{\exp(-\delta^*_{N^*}(\rho^*_{N^*})^2/(12\sigma^2))}{\rho^*_{N^*}\sqrt{\delta^*_{N^*}}}, \end{eqnarray} where we used the fact that $\tau_i^{*(m)}-\tau>\tau_i^{*(m)}-\tau^*_i-w^*({N^*})>\delta^*_{N^*}/3-\delta^*_{N^*}/6$. \newline \newline For $\tau^*_i-w^*({N^*})\leq\tau<\tau^*_i$, we have $\frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\sim N\left(\frac{\tau^*_i-\tau}{\tau_i^{*(m)}-\tau}(\nu^*_i-\nu^*_{i-1}),\frac{\sigma^2}{\tau_i^{*(m)}-\tau}\right)$. The z-scores of $\pm\rho^*_{N^*}$ would have magnitudes greater than \begin{align} &\quad\sigma^{-1}\sqrt{\tau_i^{*(m)}-\tau}\left(\rho^*_{N^*}-\left|\frac{\tau^*_i-\tau}{\tau_i^{*(m)}-\tau}(\nu^*_i-\nu^*_{i-1})\right|\right)\nonumber\\ &\geq \sigma^{-1}\sqrt{\frac{\delta^*_{N^*}}{3}}\left(\rho^*_{N^*}-\frac{w^*({N^*})}{\delta^*_{N^*}/3}(2\bar{\theta})\right)\nonumber\\ &\geq \sqrt{\frac{\delta^*_{N^*}}{3}}\cdot\frac{\rho^*_{N^*}}{2\sigma}. \end{align} Hence, this gives the probability bound \begin{eqnarray}\label{bbbp2} \mathbb{P}\left[ \left| \frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right|\geq \rho^*_{N^*}\right]&\leq& 2\left(1-\Phi\left( \sqrt{\frac{\delta^*_{N^*}}{3}}\cdot\frac{\rho^*_{N^*}}{2\sigma}\right)\right)\nonumber\\ &\leq &\frac{2\sigma\sqrt{6}}{\sqrt{\pi}}\cdot\frac{\exp\left(- \delta^*_{N^*}(\rho^*_{N^*})^2/(24\sigma^2)\right) }{\rho^*_{N^*}\sqrt{\delta^*_{N^*}}}.
\end{eqnarray} Putting together the bounds in (\ref{bbbp1}) and (\ref{bbbp2}) will give \begin{align} &\quad \sum_{\tau:\, |\tau-\tau^*_i|\leq w^*({N^*})}\mathbb{P}\left[ \left| \frac{1}{\tau_i^{*(m)}-\tau}\sum_{j= \tau+1}^{\tau_i^{*(m)}}(Z_j-\nu^*_i)\right| \geq \rho^*_{N^*}\right]\nonumber\\ &\leq 3w^*({N^*})\cdot \frac{2\sigma\sqrt{6}}{\sqrt{\pi}}\cdot\frac{\exp\left(- \delta^*_{N^*}(\rho^*_{N^*})^2/(24\sigma^2)\right) }{\rho^*_{N^*}\sqrt{\delta^*_{N^*}}}. \end{align} In an extremely similar manner, it can be argued that \begin{align} &\quad\sum_{\tau:\,|\tau-\tau^*_{i+1}|\leq w^*({N^*})}\mathbb{P}\left[ \left| \frac{1}{\tau-\tau_i^{*(m)}}\sum_{j= \tau_i^{*(m)}+1}^{\tau}(Z_j-\nu^*_i)\right|\geq\rho^*_{N^*}\right]\nonumber\\ &\leq 3w^*({N^*})\cdot \frac{2\sigma\sqrt{6}}{\sqrt{\pi}}\cdot\frac{\exp\left(- \delta^*_{N^*}(\rho^*_{N^*})^2/(24\sigma^2)\right) }{\rho^*_{N^*}\sqrt{\delta^*_{N^*}}}. \end{align} Therefore, (\ref{bbp2}) can be bounded by \begin{eqnarray}\label{bb2} B_{N^*}+\frac{12\sigma\sqrt{6}}{\sqrt{\pi}}\cdot \frac{w^*({N^*})\exp\left( -\delta^*_{N^*}(\rho^*_{N^*})^2/(24\sigma^2) \right)}{\rho^*_{N^*}\sqrt{\delta^*_{N^*}}}. \end{eqnarray} \newline \newline By taking constants $C_1$ and $C_2$ to be the ``worse'' of the coefficients in (\ref{bb1}) and (\ref{bb2}), which are $\frac{12\sigma\sqrt{6}}{\sqrt{\pi}}$ and $1/(24\sigma^2)$ respectively, we can combine the results of both cases and establish \begin{eqnarray} \mathbb{P}\left[ \hat{J}=J;\; |\hat{\nu}^*_i-\nu^*_i|\geq\rho^*_{N^*} \right]\leq B_{N^*}+C_1w^*({N^*})\frac{\exp\left( -C_2\delta^*_{N^*}(\rho^*_{N^*})^2 \right)}{\rho^*_{N^*}\sqrt{\delta^*_{N^*}}} \end{eqnarray} for all $i=0,\dots,J$. \end{proof} \noindent Using part (i), previously shown, it is straightforward to show part (ii): \begin{proof} The complement of the event $\{\hat{J}=J;\quad \max_{i=0,...,J}|\hat{\nu}_i-\nu_i|<\rho_N\}$ is the event where either $\hat{J}\neq J$, or $\hat{J}=J$ and $|\hat{\nu}_i-\nu_i|\geq\rho_N$ for some $i$.
For all sufficiently large $N$ and some positive constants $C_1$ and $C_2$ we have \begin{eqnarray} && 1-\mathbb{P}\left[ \hat{J}=J;\quad \max_{i=0,...,J}|\hat{\nu}^*_i-\nu^*_i|<\rho^*_{N^*}\right] \nonumber\\ &\leq& \mathbb{P}[\hat{J}\neq J]+\sum_{i=0}^J\mathbb{P}\left[ \hat{J}=J;\quad |\hat{\nu}^*_i-\nu^*_i|\geq \rho^*_{N^*}\right]\nonumber\\ &\leq & B_{N^*}+(J+1)\left( B_{N^*}+C_1w^*({N^*}) \frac{\exp\left[ -C_2\delta^*_{N^*}(\rho^*_{N^*})^2\right]}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}} \right)\nonumber\\ & \leq & B_{N^*}+\left(\frac{{N^*}}{\delta^*_{N^*}}+1\right)\left( B_{N^*}+C_1w^*({N^*}) \frac{\exp\left[ -C_2\delta^*_{N^*}(\rho^*_{N^*})^2\right]}{\sqrt{\delta^*_{N^*}}\rho^*_{N^*}} \right)\nonumber\\ &\to & 0 \end{eqnarray} \end{proof} \section{Supplement Part C (Probability Bounds on the Argmin of Random Walks with Absolute-Value Drifts)}\label{sec:supplementpartc} \subsection{Probability bound for Argmin of Random Walk} Here we will derive a probability bound for random walks of the form \begin{eqnarray} X_\Delta(t):=\begin{cases} t\left|\frac{\Delta}{2}\right|+\sum_{i=1}^t\varepsilon_i\qquad &t>0\\ 0 &t=0\\ |t|\cdot\left|\frac{\Delta}{2}\right|-\sum_{i=-1}^{t}\varepsilon_i &t<0 \end{cases} \end{eqnarray} Specifically, the following exponential bound applies: \begin{lemma}\label{lem:probbound} Suppose that $S$ is a set of integers and $m$ a positive integer such that $[-m,m]\subset S$, then \begin{eqnarray} \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|>m \right]\leq A(\Delta)\exp(-B(\Delta)m) \end{eqnarray} where $A$ and $B$ are expressions dependent only on $|\Delta|$, with $A$ decreasing and $B$ increasing in $|\Delta|$.
\end{lemma} \begin{proof} \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|>m \right]\nonumber\\ &= & \sum_{j>m ,\,j\in S}\mathbb{P}\left[ \underset{t\in S}{\arg\min}X_\Delta(t)=j \right]+\sum_{j<-m ,\,j\in S}\mathbb{P}\left[ \underset{t\in S}{\arg\min}X_\Delta(t)=j \right]\nonumber\\ &\leq & \sum_{t>m ,\,t\in S}\mathbb{P}\left[ X_\Delta(t)< 0 \right]+\sum_{t<-m ,\,t\in S}\mathbb{P}\left[ X_\Delta(t)<0 \right]\nonumber\\ &\leq &\sum_{t>m ,\,t\in S}\mathbb{P}\left[ N\left(t\frac{|\Delta|}{2},t\right)< 0 \right]+\sum_{t<-m ,\,t\in S}\mathbb{P}\left[ N\left(|t|\frac{|\Delta|}{2},|t|\right)<0 \right]\nonumber\\ &\leq &\sum_{t=m+1 }^\infty\mathbb{P}\left[ N\left(0,1\right)< -\frac{\sqrt{t}|\Delta|}{2} \right]+\sum_{t=-m-1}^{-\infty}\mathbb{P}\left[ N\left(0,1\right)<-\frac{\sqrt{|t|}|\Delta|}{2} \right]\nonumber\\ &\leq & \sum_{t=m+1 }^\infty \exp\left( -\frac{t\Delta^2}{8} \right)+\sum_{t=-m-1}^{-\infty}\exp\left( -\frac{|t|\Delta^2}{8}\right)\nonumber\\ &=& 2\left( \frac{\exp\left( -\frac{\Delta^2}{8} \right)}{1-\exp\left( -\frac{\Delta^2}{8} \right)} \right)\exp\left( -\frac{m\Delta^2}{8} \right) \end{eqnarray} \end{proof} This result has another implication. With probability approaching 1 at an exponential rate, the argmin of $X_\Delta(t)$ equals the argmin over a smaller set: \begin{lemma}\label{lem:eqprob} For any set of integers $S$ and positive integer $m$ such that $[-m,m]\subset S$, \begin{eqnarray} \mathbb{P}\left( \underset{t\in [-m,m]}{\arg\min}X_\Delta(t)=\underset{t\in S}{\arg\min}X_\Delta(t) \right)\geq 1-A(\Delta)\exp(-B(\Delta)m) \end{eqnarray} where $A$ and $B$ are the expressions from Lemma \ref{lem:probbound}, respectively decreasing and increasing in $|\Delta|$. \end{lemma} \begin{proof} The two argmins of $X_\Delta(t)$ (over $S$ and over $[-m,m]$) are different if and only if the argmin over $S$ is outside of the interval $[-m,m]$.
Therefore \begin{eqnarray} &&\mathbb{P}\left( \underset{t\in [-m,m]}{\arg\min}X_\Delta(t)\neq\underset{t\in S}{\arg\min}X_\Delta(t) \right)\nonumber\\ &=&\mathbb{P}\left( \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|>m \right)\nonumber\\ &\leq &A(\Delta)\exp(-B(\Delta)m) \end{eqnarray} \end{proof} This leads to \begin{lemma} Suppose that $S$ is an integer set, $\ell$ and $m$ are positive integers, and $[-\ell,\ell]\subset [-m,m]\subset S$, then \begin{eqnarray} \left| \mathbb{P}\left[ \left|\underset{t\in[-m,m]}{\arg\min}X_\Delta(t)\right|\leq \ell \right]-\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq \ell \right] \right|\leq A(\Delta)\exp(-B(\Delta)m) \end{eqnarray} for some expressions $A()$ and $B()$ that are, respectively, decreasing and increasing with respect to $|\Delta|$. \end{lemma} \begin{proof} \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in[-m,m]}{\arg\min}X_\Delta(t)\right|\leq \ell \right]-\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq \ell \right]\nonumber\\ &=&\mathbb{P}\left[ \left|\underset{t\in[-m,m]}{\arg\min}X_\Delta(t)\right|\leq\ell \text{ and } \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|> \ell \right]\nonumber\\ &\leq &\mathbb{P}\left[ \left|\underset{t\in[-m,m]}{\arg\min}X_\Delta(t)\right|\neq\left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\right]\nonumber\\ \end{eqnarray} The last expression is nonnegative, and since the two absolute values can differ only when the two argmins themselves differ, Lemma \ref{lem:eqprob} bounds it by $A(\Delta)\exp(-B(\Delta)m)$ for some appropriate expressions $A()$ and $B()$.
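As an illustrative aside, the exponential bound of Lemma \ref{lem:probbound} can be sanity-checked by simulation. The sketch below is ours and not part of the formal argument; it assumes standard normal $\varepsilon_i$, truncates the walk to $t\in[-T,T]$, and uses the constants from the proof, $B(\Delta)=\Delta^2/8$ and $A(\Delta)=2e^{-\Delta^2/8}/\big(1-e^{-\Delta^2/8}\big)$. The function name and parameter values are our own choices.

```python
import numpy as np

def argmin_tail_prob(delta=2.0, T=200, m=10, n_sims=2000, seed=0):
    """Monte Carlo estimate of P[ |argmin_{|t|<=T} X_delta(t)| > m ]."""
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_sims):
        # The two halves of the walk are independent and share the same law:
        # drift |delta|/2 per step plus N(0,1) noise.
        pos = np.cumsum(abs(delta) / 2 + rng.standard_normal(T))
        neg = np.cumsum(abs(delta) / 2 + rng.standard_normal(T))
        walk = np.concatenate([neg[::-1], [0.0], pos])  # indices t = -T,...,T
        t_hat = int(np.argmin(walk)) - T                # location of the argmin
        exceed += abs(t_hat) > m
    return exceed / n_sims

delta, m = 2.0, 10
x = np.exp(-delta**2 / 8)
bound = 2 * x / (1 - x) * x**m   # A(delta) * exp(-B(delta) * m)
p_hat = argmin_tail_prob(delta=delta, m=m)
print(p_hat, bound)
```

The empirical tail probability should sit well below the stated bound, and increasing $|\Delta|$ tightens both, reflecting the monotonicity of $A$ and $B$.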
\end{proof} \subsection{Quantiles}\label{sec:proofquantbound} Due to Lemma \ref{lem:probbound}, the following statement can be made regarding the quantile: \begin{lemma}\label{lem:quantbound} Using the $A$ and $B$ from Lemma \ref{lem:probbound}, \begin{eqnarray} Q_\Delta(\sqrt[J]{1-\alpha})\leq \frac{1}{B}\log\frac{AJ}{\alpha} \end{eqnarray} \end{lemma} \begin{proof} Using the inequality from Lemma \ref{lem:probbound}, \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in\mathbb{Z}}{\arg\min}X_\Delta(t)\right|\leq \frac{1}{B}\log\frac{AJ}{\alpha} \right]\nonumber\\ &\geq & 1-A\exp\left( -B\frac{1}{B}\log\frac{AJ}{\alpha} \right)\nonumber\\ &=&1-\frac{\alpha}{J}\nonumber\\ &\geq &\sqrt[J]{1-\alpha} \end{eqnarray} \end{proof} \subsection{Comparison Between Random Walks, Part 1} Here we will show some probability inequalities between random walks with different drifts. These results are useful in proving Theorem \ref{thm:increasingJasymprotics}. \begin{lemma}\label{lem:generalcompoppo} Suppose that $S$ is a set of integers, $m$ is a positive integer, and $[-m,m]\subset S$. Define, for any positive $\Delta_1$, $\Delta_2$, the random walks \begin{eqnarray} W_{\Delta_1,\Delta_2}(t)=\begin{cases} |t|\frac{|\Delta_1|}{2}+\sum_{i=-1}^t\varepsilon_i\qquad &t<0\\ 0 & t=0\\ t\frac{|\Delta_2|}{2}+\sum_{i=1}^{t}\varepsilon_i\qquad& t>0 \end{cases} \end{eqnarray} Then for any $\eta>0$, \begin{eqnarray}\label{eq:oppoineqfirst} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_1,\Delta_2}(t) \right|\leq m\right] \geq\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_1,\Delta_2+2\eta}(t) \right|\leq m \right]\nonumber\\ &&-\left(A(\Delta_2)m^{3/2}+B(\Delta_2)\sqrt{m}\right)\eta\exp(-C(\Delta_2)m) \end{eqnarray} for some expressions $A(\Delta_2)$, $B(\Delta_2)$ and $C(\Delta_2)$ that are, respectively, decreasing, decreasing, and increasing with respect to $|\Delta_2|$.
Similarly, the following inequality holds: \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_1,\Delta_2}(t) \right|\leq m\right] \geq\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_1+2\eta,\Delta_2}(t) \right|\leq m \right]\nonumber\\ &&-\left(A(\Delta_1)m^{3/2}+B(\Delta_1)\sqrt{m}\right)\eta\exp(-C(\Delta_1)m) \end{eqnarray} where the expressions $A()$, $B()$, and $C()$ have the same form as those used in (\ref{eq:oppoineqfirst}). \end{lemma} \begin{proof} It suffices to prove the inequality between $W_{\Delta_1, \Delta_2}$ and $W_{\Delta_1, \Delta_2+2\eta}$. This is because $W_{\Delta_1,\Delta_2}(t)$ has the same distribution as $W_{\Delta_2, \Delta_1}(-t)$ for all $t\in S$, and therefore \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_1,\Delta_2}(t) \right|\leq m \right]\nonumber\\ &=&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_2,\Delta_1}(-t) \right|\leq m \right]\nonumber\\ &=&\mathbb{P}\left[ \left|\underset{t\in -S}{\arg\min} W_{\Delta_2,\Delta_1}(t) \right|\leq m \right]\nonumber\\ &&\text{where }-S:=\{-t:t\in S\} \end{eqnarray} where the last equality is due to the fact that there exists an $|\ell|\leq m$ such that $W_{\Delta_2,\Delta_1}(-\ell)<W_{\Delta_2,\Delta_1}(-t)$ for all $t\in S$, $|t|>m$, if and only if there exists an $|\ell|\leq m$ such that $W_{\Delta_2,\Delta_1}(\ell)<W_{\Delta_2,\Delta_1}(t)$ for all $t\in -S$, $|t|>m$.
Similarly, we also have \begin{eqnarray} \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_{\Delta_1+2\eta,\Delta_2}(t) \right|\leq m \right]=\mathbb{P}\left[ \left|\underset{t\in -S}{\arg\min} W_{\Delta_2,\Delta_1+2\eta}(t) \right|\leq m \right], \end{eqnarray} and from here, an inequality can be derived by comparing $$ \mathbb{P}\left[ \left|\underset{t\in -S}{\arg\min} W_{\Delta_2,\Delta_1}(t) \right|\leq m \right]$$ and $$ \mathbb{P}\left[ \left|\underset{t\in -S}{\arg\min} W_{\Delta_2,\Delta_1+2\eta}(t) \right|\leq m \right]$$ using (\ref{eq:oppoineqfirst}). Therefore, the rest of the proof will only concern the random walks $W_{\Delta_1,\Delta_2}(\cdot)$ and $W_{\Delta_1,\Delta_2+2\eta}(\cdot)$. \newline \newline For the sake of brevity here, we will use the shorthand notations $W(t)$ for the random walk $W_{\Delta_1,\Delta_2}(t)$, and $W_+(t)$ for the random walk $W_{\Delta_1,\Delta_2+2\eta}(t)$. We are interested in the probability of the event when $\left|\underset{t\in S}{\arg\min}W(t)\right|> m$ and $\left|\underset{t\in S}{\arg\min}W_+(t)\right|\leq m$; for now, suppose this event holds, and let $k:=\underset{t\in S}{\arg\min}W(t)$, so that $|k|>m$ and $W(k)< W(t)$ for all $t\in S$, $t\neq k$ (ties occur with probability zero). \begin{itemize} \item If $k<-m$, then $W_+(k)=W(k)< W(t)\leq W(t)+t\eta 1(t>0)=W_+(t)$ for all $t\in S-\{k\}$; in other words $\left|\underset{t\in S}{\arg\min}W_+(t)\right|>m$, a contradiction. Therefore it is not possible for $k<-m$. \item This leaves the possibility that $k>m$.
Additionally: \begin{itemize} \item by the definition of $k$, $W(k)< \min_{|t|\leq m}W(t)$ \item because $W_+(k)=W(k)+k\eta$ is not the minimum among the $W_+(t)$'s for $t\in S$, we have \begin{eqnarray} W(k)+k\eta&=&W_+(k)\nonumber\\ &\geq& \min_{|t|\leq m} W_+(t) \nonumber\\ &\geq&\min_{|t|\leq m}W(t) \end{eqnarray} \end{itemize} \end{itemize} This breakdown of events shows that in order for the argmin of $W_+(t)$ to be within $[-m,m]$ but for the argmin of $W(t)$ to be outside this interval, there must be a $k>m$ where $ \left(\min_{|t|\leq m}W(t)\right)-\eta k< W(k) \leq \min_{|t|\leq m}W(t)$. Therefore \begin{eqnarray}\label{ref:probdcomposepos} &&\mathbb{P}\left[\left|\underset{t\in S}{\arg\min}W(t)\right|> m\text{ and }\left|\underset{t\in S}{\arg\min}W_+(t)\right|\leq m\right]\nonumber\\ &\leq & \mathbb{P}\left[ \exists k:\, k>m\text{ and } \left(\min_{|t|\leq m}W(t)\right)-\eta k< W(k) \leq \min_{|t|\leq m}W(t)\right]\nonumber\\ &\leq & \sum_{k\in S\cap (m,\infty)}\mathbb{P}\left[ \left(\min_{|t|\leq m}W(t)\right)-\eta k< W(k) \leq \min_{|t|\leq m}W(t)\right]. \end{eqnarray} The random variable $\min_{|t|\leq m}W(t)$ can either equal $\min_{t\in [0,m]}W(t)$ or $\min_{t\in [-m,0]}W(t)$. Therefore, for any specific $k$, the event $\left(\min_{|t|\leq m}W(t)\right)-\eta k< W(k) \leq \min_{|t|\leq m}W(t)$ implies that either $\left(\min_{t\in [0,m]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [0,m]}W(t)$ or $\left(\min_{t\in [-m,0]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [-m,0]}W(t)$, yielding the inequality \begin{eqnarray}\label{ref:probdcomposepos2} &&\mathbb{P}\left[ \left(\min_{|t|\leq m}W(t)\right)-\eta k< W(k) \leq \min_{|t|\leq m}W(t)\right]\nonumber\\ &\leq & \mathbb{P}\left[ \left(\min_{t\in [0,m]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [0,m]}W(t)\right]\nonumber\\ &&+\mathbb{P}\left[ \left(\min_{t\in [-m,0]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [-m,0]}W(t)\right] \end{eqnarray} Both probabilities in the last expression can be bounded.
First, because $k>m$, $W(k)$ is independent of $W(-1),\dots, W(-m)$; hence the distribution of $W(k)$ is still $N\left( k\frac{|\Delta_2|}{2},k \right)$ even after conditioning on the value of $\min_{t\in [-m,0]}W(t)$, and therefore \begin{eqnarray}\label{eq:probboundnegside} &&\mathbb{P}\left[ \left(\min_{t\in [-m,0]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [-m,0]}W(t)\right]\nonumber\\ &=&\mathbb{E}\left[ \mathbb{P}\left[ x-\eta k< W(k) \leq x\Big| \min_{t\in [-m,0]}W(t)=x \right] \right]\nonumber\\ &= & \mathbb{E}\left[\mathbb{P}\left[ \frac{x}{\sqrt{k}}-\sqrt{k}\left( \frac{|\Delta_2|}{2}+\eta \right)< N(0,1)\leq \frac{x}{\sqrt{k}}-\sqrt{k} \frac{|\Delta_2|}{2}\Bigg| \min_{t\in [-m,0]}W(t)=x \right]\right]\nonumber\\ &&\text{where }x<0\text{ since }\min_{t\in [-m,0]}W(t)\leq W(0)=0\nonumber\\ &=& \mathbb{E}\left[\int_{\frac{x}{\sqrt{k}}-\sqrt{k}\left( \frac{|\Delta_2|}{2}+\eta \right)}^{ \frac{x}{\sqrt{k}}-\sqrt{k} \frac{|\Delta_2|}{2}} \frac{\exp(-z^2/2)}{\sqrt{2\pi}} \,dz \Bigg| \min_{t\in [-m,0]}W(t)=x \right]\nonumber\\ &\leq &\int_{-\sqrt{k}\left( \frac{|\Delta_2|}{2}+\eta \right)}^{ -\sqrt{k} \frac{|\Delta_2|}{2}} \frac{\exp(-z^2/2)}{\sqrt{2\pi}} \,dz\nonumber\\ &\leq & \frac{\eta\sqrt{k}}{\sqrt{2\pi}}\exp\left[ -\frac{\Delta_2^2}{8}k \right] \end{eqnarray} As for the other probability, consider the event that $\left(\min_{t\in [0,m]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [0,m]}W(t)$. Because $\min_{t\in [0,m]}W(t)\leq W(0)=0$, this event implies that for some $\ell\in [0,m]$, we have $W(\ell)-\eta k<W(k)<W(\ell)\leq 0$ (namely, letting $\ell=\underset{t\in [0,m]}{\arg\min}W(t)$ would work).
Therefore \begin{eqnarray}\label{eq:decomposeprobdepend} && \mathbb{P}\left[ \left(\min_{t\in [0,m]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [0,m]}W(t)\right]\nonumber\\ &\leq & \sum_{\ell=0}^{m}\mathbb{P}\left[W(\ell)-\eta k< W(k)\leq W(\ell)\leq 0 \right]\nonumber\\ &= & \sum_{\ell=0}^{m}\mathbb{P}\left[-\eta k< W(k)-W(\ell)\leq 0\text{ and }W(\ell)\leq 0 \right] \end{eqnarray} Now $W(\ell)=\ell\frac{|\Delta_2|}{2}+\sum_{j=1}^\ell\varepsilon_j$ and $W(k)-W(\ell)=(k-\ell)\frac{|\Delta_2|}{2}+\sum_{j=\ell+1}^k\varepsilon_j$ are independent random variables, with distributions $N\left( \ell\frac{|\Delta_2|}{2},\ell \right)$ and $N\left( (k-\ell)\frac{|\Delta_2|}{2},(k-\ell) \right)$. Hence \begin{eqnarray} &&\mathbb{P}\left[-\eta k< W(k)-W(\ell)\leq 0\text{ and }W(\ell)\leq 0 \right]\nonumber\\ &=& \mathbb{P}\left[-\eta k< W(k)-W(\ell)\leq 0\right]\cdot \mathbb{P}\left[ W(\ell)\leq 0 \right]\nonumber\\ &=&\mathbb{P}\left[-\sqrt{k-\ell}\frac{|\Delta_2|}{2}-\frac{\eta k}{\sqrt{k-\ell}}< N(0,1)\leq-\sqrt{k-\ell}\frac{|\Delta_2|}{2}\right]\cdot \mathbb{P}\left[ N(0,1)\leq -\sqrt{\ell}\frac{|\Delta_2|}{2} \right]\nonumber\\ &=&\int_{-\sqrt{k-\ell}\frac{|\Delta_2|}{2}-\frac{\eta k}{\sqrt{k-\ell}}}^{-\sqrt{k-\ell}\frac{|\Delta_2|}{2}}\frac{\exp(-z^2/2)}{\sqrt{2\pi}}\,dz\cdot \mathbb{P}\left[ N(0,1)\leq -\sqrt{\ell}\frac{|\Delta_2|}{2} \right]\nonumber\\ &\leq & \frac{\eta k}{\sqrt{k-\ell}}\frac{\exp\left( -\frac{\Delta_2^2}{8}(k-\ell) \right)}{\sqrt{2\pi}}\cdot \frac{1}{2}\exp\left[ -\frac{\Delta_2^2}{8}\ell \right]\nonumber\\ &=&\frac{1}{2\sqrt{2\pi}}\cdot \frac{\eta k}{\sqrt{k-\ell}}\exp\left[ -\frac{\Delta_2^2}{8} k \right] \end{eqnarray} Therefore, using (\ref{eq:decomposeprobdepend}), \begin{eqnarray}\label{eq:probboundposside} && \mathbb{P}\left[ \left(\min_{t\in [0,m]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [0,m]}W(t)\right]\nonumber\\ &\leq & \sum_{\ell=0}^{m}\frac{1}{2\sqrt{2\pi}}\cdot \frac{\eta k}{\sqrt{k-\ell}}\exp\left[ -\frac{\Delta_2^2}{8} k 
\right]\nonumber\\ &\leq &\frac{\exp\left[ -\frac{\Delta_2^2}{8} k \right]}{2\sqrt{2\pi}}\cdot\eta k\cdot \int_0^{m+1}\frac{1}{\sqrt{k-x}}\,dx\nonumber\\ &=& \frac{\exp\left[ -\frac{\Delta_2^2}{8} k \right]}{\sqrt{2\pi}}\cdot\eta k\cdot(\sqrt{k}-\sqrt{k-m-1})\nonumber\\ &\leq &\frac{\exp\left[ -\frac{\Delta_2^2}{8} k \right]}{\sqrt{2\pi}}\cdot\eta k\cdot \sqrt{m+1} \end{eqnarray} Therefore, \begin{eqnarray} &&\mathbb{P}\left[\left|\underset{t\in S}{\arg\min}W(t)\right|> m\text{ and }\left|\underset{t\in S}{\arg\min}W_+(t)\right|\leq m\right]\nonumber\\ &\leq & \sum_{k\in S\cap (m,\infty)} \mathbb{P}\left[ \left(\min_{t\in [0,m]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [0,m]}W(t)\right]\nonumber\\ &&+\sum_{k\in S\cap (m,\infty)}\mathbb{P}\left[ \left(\min_{t\in [-m,0]}W(t)\right)-\eta k< W(k) \leq \min_{t\in [-m,0]}W(t)\right]\nonumber\\ &&\text{according to }(\ref{ref:probdcomposepos})\text{ and }(\ref{ref:probdcomposepos2})\nonumber\\ &\leq &\sum_{k=m+1}^\infty \left(\frac{\exp\left[ -\frac{\Delta_2^2}{8} k \right]}{\sqrt{2\pi}}\cdot\eta k\cdot \sqrt{m+1}\right)+\sum_{k=m+1}^\infty \left(\frac{\eta\sqrt{k}}{\sqrt{2\pi}}\exp\left[ -\frac{\Delta_2^2}{8}k \right]\right)\nonumber\\ &&\text{according to }(\ref{eq:probboundnegside})\text{ and }(\ref{eq:probboundposside})\nonumber\\ &\leq &\frac{\eta(\sqrt{m+1}+1)}{\sqrt{2\pi}}\sum_{k=m+1}^\infty k \exp\left[ -\frac{\Delta_2^2}{8}k \right]\nonumber\\ &\leq & \frac{3\sqrt{m}}{\sqrt{2\pi}} \left( \frac{m\exp\left[ -\frac{\Delta_2^2}{8} \right]}{1-\exp\left[ -\frac{\Delta_2^2}{8} \right]}+\frac{\exp\left[ -\frac{\Delta_2^2}{8} \right]}{\left( 1-\exp\left[ -\frac{\Delta_2^2}{8} \right] \right)^2} \right)\left( \eta \exp\left[ -\frac{\Delta_2^2}{8}m \right]\right)\nonumber\\ &&\text{since }\sum_{k=a}^\infty kx^k=\left( \frac{a-1}{1-x}+\frac{1}{(1-x)^2} \right)x^a\text{ for any }|x|<1, a\in\mathbb{N} \end{eqnarray} From here, \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W(t) \right|\leq m
\right]\nonumber\\ &\geq &\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_+(t) \right|\leq m \right]-\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_+(t) \right|\leq m\text{ and }\left|\underset{t\in S}{\arg\min} W(t) \right|> m \right]\nonumber\\ &\geq &\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min} W_+(t) \right|\leq m \right]-\left(A(\Delta_2)m^{3/2}+B(\Delta_2)\sqrt{m}\right)\eta\exp(-C(\Delta_2)m) \end{eqnarray} for some expressions $A(\Delta_2)$, $B(\Delta_2)$ and $C(\Delta_2)$ that are, respectively, decreasing, decreasing, and increasing with respect to $|\Delta_2|$. \end{proof} This result immediately leads to some results concerning the random walks $X_\Delta(\cdot)$, as they are a special case of the random walks $W_{\Delta_1,\Delta_2}(\cdot)$ where $\Delta_1=\Delta_2$. \begin{lemma}\label{lem:randomwalkcompoppo} Suppose that the random walk $Y_+(t)$, $t\in S$, equals \begin{eqnarray} Y_+(t)=\begin{cases} X_\Delta(t)\qquad &\text{for }t\leq 0\\ X_\Delta(t)+\eta t&\text{for }t>0 \end{cases} \end{eqnarray} for some constant $\eta$ such that $0<\eta<\frac{|\Delta|}{2}$. Then for any $[-m,m]\subset S\subset\mathbb{Z}$, \begin{align}\label{eq:lemrwalkcompoppo1} &\quad\mathbb{P}\left[\left|\underset{t\in S}{\arg\min}Y_+(t)\right|\leq m \right]\leq \mathbb{P}\left[\left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]\nonumber\\ &+\eta\left[A(\Delta)m^{3/2}+B(\Delta)\sqrt{m}\right]\exp\left[-C(\Delta)m\right] \end{align} for some expressions $A()$, $B()$, and $C()$ which are, respectively, decreasing, decreasing, and increasing with respect to $|\Delta|$. The same probability inequality will hold if $Y_+(t)=X_\Delta(t)+\eta |t|1(t<0)$.
\newline \newline A similar set of inequalities holds for the random walk $Y_-(t)$, $t\in S$, defined as \begin{eqnarray} Y_-(t)=\begin{cases} X_\Delta(t)\qquad &\text{for }t\leq 0\\ X_\Delta(t)-\eta t&\text{for }t>0 \end{cases} \end{eqnarray} For $\eta<\frac{|\Delta|}{2}$, and $[-m,m]\subset S$, we have \begin{eqnarray}\label{eq:lemrwalkcompoppo2} &&\mathbb{P}\left[\left|\underset{t\in S}{\arg\min}Y_-(t)\right|\leq m \right]\geq \mathbb{P}\left[\left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]-\nonumber\\ &&\eta\left[A\left( \frac{|\Delta|}{2}-\eta \right)m^{3/2}+B\left( \frac{|\Delta|}{2}-\eta \right)\sqrt{m}\right]\exp\left[-C\left( \frac{|\Delta|}{2}-\eta \right)m\right] \end{eqnarray} for some expressions $A()$, $B()$, and $C()$ which are, respectively, decreasing, decreasing, and increasing with respect to $|\Delta|/2-\eta$. The same probability inequality will hold if $Y_-(t)=X_\Delta(t)-\eta |t|1(t<0)$. \end{lemma} \begin{proof} Apply Lemma \ref{lem:generalcompoppo} with $\Delta_1=\Delta_2=\Delta$ to prove (\ref{eq:lemrwalkcompoppo1}), and with $\Delta_1=\Delta$, $\Delta_2=\Delta-2\eta$ to prove (\ref{eq:lemrwalkcompoppo2}).
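The effect described in Lemma \ref{lem:randomwalkcompoppo} can also be illustrated numerically. The sketch below is our own (the function name and parameter values are assumptions, with standard normal noise): it couples $X_\Delta(t)$ and the tilted walk $Y_+(t)=X_\Delta(t)+\eta t\,1(t>0)$ on the same noise realizations and compares the two localization probabilities $\mathbb{P}\big[|\arg\min|\leq m\big]$, which the lemma says differ by at most an exponentially small term when $0<\eta<|\Delta|/2$.

```python
import numpy as np

def localization_probs(delta=2.0, eta=0.5, T=200, m=10, n_sims=2000, seed=1):
    """Coupled Monte Carlo estimates of P[|argmin| <= m] for X_delta and Y_+."""
    rng = np.random.default_rng(seed)
    hit_x = hit_y = 0
    tilt = eta * np.arange(1, T + 1)       # extra drift eta*t on t > 0
    for _ in range(n_sims):
        pos = np.cumsum(abs(delta) / 2 + rng.standard_normal(T))
        neg = np.cumsum(abs(delta) / 2 + rng.standard_normal(T))
        x_walk = np.concatenate([neg[::-1], [0.0], pos])          # X_delta
        y_walk = np.concatenate([neg[::-1], [0.0], pos + tilt])   # Y_+
        hit_x += abs(int(np.argmin(x_walk)) - T) <= m
        hit_y += abs(int(np.argmin(y_walk)) - T) <= m
    return hit_x / n_sims, hit_y / n_sims

p_x, p_y = localization_probs()
print(p_x, p_y)
```

Both probabilities should be close to one, with a gap far smaller than either tail, consistent with the lemma's comparison bound.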
\end{proof} Additionally, we can make probabilistic statements regarding the argmin of $X_\Delta(t)$ for two different values of $\Delta$: \begin{lemma}\label{lem:twoXrandwalkcomp} For any $\Delta\neq 0$, $\eta>0$, and a set $S$ which contains the interval $[-m,m]$, \begin{eqnarray} \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]\leq \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_{|\Delta|+2\eta}(t)\right|\leq m \right] \end{eqnarray} and \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]\geq \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_{|\Delta|+2\eta}(t)\right|\leq m \right]\nonumber\\&&-2\eta\left[A(\Delta)m^{3/2}+B(\Delta)\sqrt{m}\right]\exp\left[-C(\Delta)m\right] \end{eqnarray} for some expressions $A()$, $B()$, and $C()$ which can take the same form and have the same monotonicity properties as the ones used in Lemma \ref{lem:generalcompoppo}. \end{lemma} \begin{proof} The first inequality can be shown by noticing that the event $\left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m$ implies that for some $|\ell|\leq m$, $X_\Delta(\ell)\leq X_\Delta(t)$ for all $t\in S$, $|t|>m$. This in turn implies that $X_{|\Delta|+2\eta}(\ell)=X_{\Delta}(\ell)+|\ell|\eta<X_{\Delta}(t)+|t|\eta=X_{|\Delta|+2\eta}(t)$ for all $t\in S$, $|t|>m$, which means $\left|\underset{t\in S}{\arg\min}X_{|\Delta|+2\eta}(t)\right|\leq m$.
\newline \newline The second inequality can be shown by applying Lemma \ref{lem:generalcompoppo} twice: \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]\nonumber\\ &\geq& \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W_{|\Delta|,|\Delta|+2\eta}(t)\right|\leq m \right]-\eta\left[A(\Delta)m^{3/2}+B(\Delta)\sqrt{m}\right]\exp\left[-C(\Delta)m\right]\nonumber\\ &\geq&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W_{|\Delta|+2\eta,|\Delta|+2\eta}(t)\right|\leq m \right]-2\eta\left[A(\Delta)m^{3/2}+B(\Delta)\sqrt{m}\right]\exp\left[-C(\Delta)m\right]\nonumber\\ &=&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}X_{|\Delta|+2\eta}(t)\right|\leq m \right]-2\eta\left[A(\Delta)m^{3/2}+B(\Delta)\sqrt{m}\right]\exp\left[-C(\Delta)m\right] \end{eqnarray} \end{proof} \subsection{Comparison Between Random Walks, Part 2} Here we will prove inequalities similar to those presented in Lemma \ref{lem:generalcompoppo}, but in the other direction. These results are also useful in proving Theorem \ref{thm:increasingJasymprotics}. \begin{lemma} Let the random walks $W_{\Delta_1,\Delta_2}$ be as they were defined in Lemma \ref{lem:generalcompoppo}. Then given any positive $\eta$, positive integer $m$, and set $S$ such that $\eta<|\Delta_1|/2$ and $[-m,m]\subset S$, \begin{eqnarray} &&\mathbb{P}\left[ \left| \underset{t\in S}{\arg\min}W_{\Delta_1,\Delta_2+2\eta}(t) \right|\leq m \right]\geq \mathbb{P}\left[ \left| \underset{t\in S}{\arg\min}W_{\Delta_1,\Delta_2}(t) \right|\leq m \right]\nonumber\\ &&-A\left( \frac{\Delta_1}{2}-\eta \right)\eta\sqrt{m}\exp\left( -B\left( \frac{\Delta_1}{2}-\eta \right)m \right) \end{eqnarray} for some positive expressions $A()$ and $B()$ which are, respectively, decreasing and increasing in $\frac{\Delta_1}{2}-\eta$.
Similarly, between the random walks $W_{\Delta_1,\Delta_2}$ and $W_{\Delta_1+2\eta,\Delta_2}$ for $0<\eta<\frac{\Delta_2}{2}$ there is the inequality \begin{eqnarray} &&\mathbb{P}\left[ \left| \underset{t\in S}{\arg\min}W_{\Delta_1+2\eta,\Delta_2}(t) \right|\leq m \right]\geq \mathbb{P}\left[ \left| \underset{t\in S}{\arg\min}W_{\Delta_1,\Delta_2}(t) \right|\leq m \right]\nonumber\\ &&-A\left( \frac{\Delta_2}{2}-\eta \right)\eta\sqrt{m}\exp\left( -B\left( \frac{\Delta_2}{2}-\eta \right)m \right) \end{eqnarray} for some positive expressions $A()$ and $B()$ which are, respectively, decreasing and increasing in $\frac{\Delta_2}{2}-\eta$. \end{lemma} \begin{proof} We will show the inequality between $W_{\Delta_1,\Delta_2+2\eta}$ and $W_{\Delta_1,\Delta_2}$; the result between $W_{\Delta_1+2\eta,\Delta_2}$ and $W_{\Delta_1,\Delta_2}$ can be shown in a similar fashion, or by an argument similar to the one in the proof of Lemma \ref{lem:generalcompoppo}. As in the proof of that lemma, we will use the shorthand notation $W$ for $W_{\Delta_1,\Delta_2}$ and $W_+$ for $W_{\Delta_1,\Delta_2+2\eta}$. \newline \newline We are interested in how $\left|\underset{t\in S}{\arg\min}W(t)\right|\leq m$ and $\left|\underset{t\in S}{\arg\min}W_+(t)\right|> m$ can simultaneously occur, which we will do by considering the possible values of the argmin of $W(t)$. If these two events are true, then first note that for some integer $k\in [-m,m]$, $W(k)\leq W(t)$ for all $t\in S$, $t\neq k$. \begin{itemize} \item if $k\leq 0$, then $W_+(k)=W(k)\leq W(t)+|t|\eta 1(t>0)=W_+(t)$ for all $t\in S$ and $|t|>m$; in other words $\left|\underset{t\in S}{\arg\min}W_+(t)\right|\leq m$, \item thus $k >0$ and $W(k)=\min_{t\in [0,m]}W(t)$.
The only possible way for $\left|\underset{t\in S}{\arg\min}W_+(t)\right|> m$ to occur is for the argmin to be less than $-m$; indeed, for any $t\in S$: \begin{itemize} \item if $t>m$ then $W_+(t)=W(t)+t\eta>W(k)+k\eta=W_+(k)$, which means that the argmin of $W_+(t)$ cannot be greater than $m$ in absolute value \item therefore the only possible argmin for $W_+(t)$ is for some $\ell <-m$, and it must satisfy \begin{eqnarray} W(\ell)=W_+(\ell)<W_+(k)=W(k)+\eta k \leq W(k)+\eta m \end{eqnarray} but at the same time since $W(\ell)$ was not the minimum among the $W(t)$'s, we have $W(\ell)\geq W(k)$ \end{itemize} \end{itemize} This breakdown of events shows that in order for the argmin of $W(t)$ to be within $[-m,m]$ but for the argmin of $W_+(t)$ to be outside this interval, there must be an $\ell<-m$ where $ \min_{t\in [0,m]}W(t)< W(\ell) \leq \min_{t\in [0,m]}W(t)+\eta m$. Hence: \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W(t)\right|\leq m\text{ and } \left|\underset{t\in S}{\arg\min}W_+(t)\right|>m\right]\nonumber\\ &\leq & \mathbb{P}\left[ \exists \ell<-m \text{ where } \min_{t\in [0,m]}W(t)< W(\ell) \leq \min_{t\in [0,m]}W(t)+\eta m \right]\nonumber\\ &\leq & \sum_{\ell=-m-1}^{-\infty} \mathbb{P}\left[ \min_{t\in [0,m]}W(t)< W(\ell) \leq \min_{t\in [0,m]}W(t)+\eta m \right]\nonumber\\ &= &\sum_{\ell=-m-1}^{-\infty} \mathbb{E}\left[\mathbb{P}\left[ x< W(\ell) \leq x+\eta m\Bigg| \min_{t\in [0,m]}W(t)=x \right]\right]\nonumber \end{eqnarray} \begin{align} &\leq \sum_{\ell=-m-1}^{-\infty} \mathbb{E}\left[\mathbb{P}\left[ \frac{x}{\sqrt{|\ell|}}-\frac{\Delta_1}{2}\sqrt{|\ell|}< N\left( 0,1 \right) \leq \frac{x}{\sqrt{|\ell|}}-\frac{\Delta_1}{2}\sqrt{|\ell|}+\frac{\eta m}{\sqrt{|\ell|}}\Bigg| \min_{t\in [0,m]}W(t)=x \right]\right]\nonumber\\ &= \sum_{\ell=-m-1}^{-\infty}\mathbb{E}\left[ \int_{\frac{x}{\sqrt{|\ell|}}-\frac{\Delta_1}{2}\sqrt{|\ell|}}^{\frac{x}{\sqrt{|\ell|}}-\frac{\Delta_1}{2}\sqrt{|\ell|}+\frac{\eta m}{\sqrt{|\ell|}}}
\frac{1}{\sqrt{2\pi}}\exp(-z^2/2)\,dz \Bigg| \min_{t\in [0,m]}W(t)=x\right]\nonumber\\ &\leq \sum_{\ell=-m-1}^{-\infty} \int_{-\frac{\Delta_1}{2}\sqrt{|\ell|}}^{-\frac{\Delta_1}{2}\sqrt{|\ell|}+\frac{\eta m}{\sqrt{|\ell|}}} \frac{1}{\sqrt{2\pi}}\exp(-z^2/2)\,dz\nonumber\\ &\text{since all possible values of }x\text{ are non-positive, and the integrated density is monotone over the relevant range}\nonumber\\ &\leq \sum_{\ell=-m-1}^{-\infty} \int_{-\frac{\Delta_1}{2}\sqrt{|\ell|}}^{-\frac{\Delta_1}{2}\sqrt{|\ell|}+\eta\sqrt{m}} \frac{1}{\sqrt{2\pi}}\exp(-z^2/2)\,dz\nonumber\\ &\leq \sum_{\ell=-m-1}^{-\infty} \frac{\eta}{\sqrt{2\pi}}\sqrt{m}\exp\left[ -\frac{1}{2}\left( \frac{\Delta_1}{2}\sqrt{|\ell|}-\eta \sqrt{m}\right)^2 \right]\nonumber\\ &\leq \frac{\eta\sqrt{m}}{\sqrt{2\pi}}\sum_{\ell=-m-1}^{-\infty} \exp\left[ -\frac{1}{2}\left(\frac{\Delta_1}{2}-\eta\right)^2|\ell| \right]\nonumber\\ &\leq \frac{1}{\sqrt{2\pi}}\left( \frac{\exp\left[- \frac{1}{2}\left( \frac{\Delta_1}{2}-\eta \right)^2 \right]}{1-\exp\left[- \frac{1}{2}\left( \frac{\Delta_1}{2}-\eta \right)^2\right]}\right)\eta\sqrt{m}\exp\left( -\frac{1}{2}\left( \frac{\Delta_1}{2}-\eta \right)^2m \right).\nonumber\\ \end{align} Therefore, \begin{eqnarray} &&\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W_+(t)\right|\leq m\right]\nonumber\\ &\geq & \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W(t)\right|\leq m\right]-\mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W(t)\right|\leq m\text{ and } \left|\underset{t\in S}{\arg\min}W_+(t)\right|>m\right]\nonumber\\ & \geq & \mathbb{P}\left[ \left|\underset{t\in S}{\arg\min}W(t)\right|\leq m\right]-A'\eta \sqrt{m}\exp(-B'm) \end{eqnarray} for some constants $A'$ and $B'$ depending only on $\frac{\Delta_1}{2}-\eta$.
\end{proof} We can immediately apply this result to random walks of the form $X_\Delta$: \begin{lemma}\label{lem:randomwalkcomp} Suppose that the random walk $Y_+(t)$, $t\in\mathbb{Z}$, is given by \begin{eqnarray} Y_+(t)=\begin{cases} X_\Delta(t)\qquad &\text{for }t\leq 0\\ X_\Delta(t)+\eta t&\text{for }t>0 \end{cases} \end{eqnarray} for some constant $\eta$ such that $0<\eta<\frac{|\Delta|}{2}$. Then for any $[-m,m]\subset S\subset\mathbb{Z}$, \begin{align} &\mathbb{P}\left[\left|\underset{t\in S}{\arg\min}Y_+(t)\right|\leq m \right]\geq \mathbb{P}\left[\left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]\nonumber\\ &-A'\left( \frac{|\Delta|}{2}-\eta \right)\eta\sqrt{m}\exp\left( -B'\left( \frac{|\Delta|}{2}-\eta \right)m \right) \end{align} for some expressions $A'()$ and $B'()$ which are, respectively, decreasing and increasing with respect to $\left( \frac{|\Delta|}{2}-\eta \right)$. The same probability inequality holds if $Y_+(t)=X_\Delta(t)+\eta |t|\,1(t<0)$. \newline \newline In addition, if $Y_-(t)$ is instead defined as \begin{eqnarray} Y_-(t)=\begin{cases} X_\Delta(t)\qquad &\text{for }t\leq 0\\ X_\Delta(t)-\eta t&\text{for }t>0 \end{cases} \end{eqnarray} then we have the inequality \begin{align} &\mathbb{P}\left[\left|\underset{t\in S}{\arg\min}Y_-(t)\right|\leq m \right]\leq \mathbb{P}\left[\left|\underset{t\in S}{\arg\min}X_\Delta(t)\right|\leq m \right]\nonumber\\ &+A'\left( \frac{|\Delta|}{2}-\eta \right)\eta\sqrt{m}\exp\left( -B'\left( \frac{|\Delta|}{2}-\eta \right)m \right) \end{align} for some expressions $A'()$ and $B'()$ which are, respectively, decreasing and increasing with respect to $\left( \frac{|\Delta|}{2}-\eta \right)$. \end{lemma} \section{Supplement Part D (Intelligent Sampling using Wild Binary Segmentation)} \label{WBINSEG-SUPP} \subsection{Wild Binary Segmentation}\label{sec:wildbinseg} We next discuss the Wild Binary Segmentation (WBinSeg) algorithm, introduced in \cite{fryzlewicz2014wild}.
Similar to our treatment of the BinSeg procedure, we will explain the WBinSeg procedure in the context of applying it to the dataset $Z_1,\dots, Z_{N^*}$, a subsample of size $\sim N^\gamma$ drawn from a larger dataset $Y_1,\dots,Y_N$ which satisfies the conditions (M1)-(M4). The steps of this algorithm are: \begin{enumerate} \item Fix a threshold value $\zeta_{N^*}$ and initialize the segment set $SS=\{ (1,N^*) \}$, the change point estimate set $\underline{\hat{\tau}}=\emptyset$, and $M_{N^*}$ intervals $[s_1,e_1],\dots,[s_{M_{N^*}},e_{M_{N^*}}]$, where each $s_j$ and $e_j$ is uniformly picked from $\{1,\dots,{N^*}\}$. \item Pick any ordered pair $(s,e)\in SS$ and remove it from $SS$ (update $SS$ by $SS\leftarrow SS-\{ (s,e) \}$). If $s\geq e$ then skip to step 6, otherwise continue to step 3. \item Define $\mathcal{M}_{s,e}:=\left\{ [s_i,e_i]:[s_i,e_i]\subseteq [s,e] \right\}$. \begin{itemize} \item As an optional step, also take $\mathcal{M}_{s,e}\leftarrow \mathcal{M}_{s,e}\cup\left\{(s,e)\right\}$. \end{itemize} \item Find a $[s^*,e^*]\in\mathcal{M}_{s,e}$ such that $$\max_{b\in\{s^*,\dots,e^*-1\} }|\bar{Z}^b_{s^*,e^*}|=\max_{[s',e']\in\mathcal{M}_{s,e}}\left( \max_{b\in\{s',\dots,e'-1\} }|\bar{Z}^b_{s',e'}| \right)$$ and let $b_0=\underset{b\in\{s^*,\dots,e^*-1\} }{\arg\max}|\bar{Z}^b_{s^*,e^*}|$. \item If $|\bar{Z}^{b_0}_{s^*,e^*}|\geq \zeta_{N^*}$, then add $b_0$ to the list of change point estimates (add $b_0$ to $\underline{\hat{\tau}}$), and add the ordered pairs $(s,b_0)$ and $(b_0+1,e)$ to $SS$; otherwise proceed directly to step 6. \item Repeat steps 2-5 until $SS$ contains no elements. \end{enumerate} Roughly speaking, WBinSeg performs very much like binary segmentation, but with each CUSUM maximization carried out over the $M_{N^*}$ randomly chosen intervals contained in the current segment.
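The recursion above can be sketched in a few lines of code. The following Python sketch is ours, not from \cite{fryzlewicz2014wild}: the variable names and the particular normalization of the CUSUM contrast $\bar{Z}^b_{s,e}$ (the standard two-sample form) are illustrative assumptions, and indices are 0-based rather than 1-based.

```python
import math
import random

def cusum(x, s, e, b):
    # Two-sample CUSUM contrast for a split at b within x[s..e] (inclusive);
    # a stand-in for |Z-bar^b_{s,e}| in the text.
    n = e - s + 1
    nl, nr = b - s + 1, e - b
    left = sum(x[s:b + 1])
    right = sum(x[b + 1:e + 1])
    return abs(math.sqrt(nr / (n * nl)) * left
               - math.sqrt(nl / (n * nr)) * right)

def wbinseg(x, zeta, M, rng=random):
    # Steps 1-6: draw M random intervals, then segment recursively via SS.
    N = len(x)
    intervals = [tuple(sorted(rng.sample(range(N), 2))) for _ in range(M)]
    estimates = []                    # the change point estimate set
    SS = [(0, N - 1)]                 # segment set, initialized to the full range
    while SS:
        s, e = SS.pop()
        if s >= e:
            continue
        # intervals falling inside the current segment, plus the optional (s, e)
        cand = [iv for iv in intervals if s <= iv[0] and iv[1] <= e]
        cand.append((s, e))
        best_val, best_b = -1.0, None
        for si, ei in cand:
            for b in range(si, ei):
                v = cusum(x, si, ei, b)
                if v > best_val:
                    best_val, best_b = v, b
        if best_val >= zeta:          # threshold test (step 5)
            estimates.append(best_b)
            SS.append((s, best_b))
            SS.append((best_b + 1, e))
    return sorted(estimates)
```

On a noise-free step signal such as `[0.0]*50 + [5.0]*50`, the sketch recovers the single change point at index 49 for any draw of intervals; with noisy data, `zeta` plays the role of $\zeta_{N^*}$ above.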
The consistency results in \cite{fryzlewicz2014wild} imply that in our setting, the following holds: \begin{theorem}\label{thm:wbinsegresults} Suppose conditions (M1) to (M4) are satisfied and the tuning parameter $\zeta_{N^*}$ is chosen appropriately such that there exist positive constants $C_1$ and $C_2$ with $C_1\sqrt{\log({N^*})}\leq \zeta_{N^*}\leq C_2\sqrt{\delta^*_{N^*}}$. Denote by $\hat{J}$, $\hat{\tau}_1,\dots,\hat{\tau}_{\hat{J}}$ the estimates obtained from wild binary segmentation. Then, there exist positive constants $C_3,C_4$ such that \begin{equation}\label{eq:wildbound} \mathbb{P}\Big[ \hat{J}=J;\quad \max_{i=1,...,J} |\hat{\tau}_i-\tau_i|\leq C_3\log({N^*}) \Big]\geq 1-C_4({N^*})^{-1}-\left( \frac{{N^*}}{\delta^*_{N^*}} \right)\left(1- \left(\frac{\delta^*_{N^*}}{3{N^*}}\right)^2\right)^{M_{N^*}} \end{equation} \end{theorem} We remark that the right side of (\ref{eq:wildbound}) does not necessarily converge to 1 unless $M_{N^*}\to\infty$ fast enough. Using some simple algebra, it was shown in the original paper that for sufficiently large $N$, this expression can be bounded from below by $1-C(N^*)^{-1}$ for some $C>0$ if $M_{N^*}\geq \left(\frac{3{N^*}}{\delta^*_{N^*}}\right)^2\log((N^*)^2/\delta^*_{N^*})$, a condition on $M_{N^*}$ which we assume from here on in order to simplify some later analysis. \newline \newline \indent Compared with the consistency result for binary segmentation given in Theorem \ref{frythm}, $\max_{j=1,\dots,J}|\hat{\tau}_j-\tau_j|$ can be bounded by some constant times $\log({N^*})$, which can grow much more slowly than $E_{N^*}=({N^*}/\delta^*_{N^*})^2\log({N^*})$ whenever ${N^*}/\delta^*_{N^*}\to \infty$. However, this comes at the cost of computational time.
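As a quick numerical sanity check (ours, not from \cite{fryzlewicz2014wild}), one can verify that with the prescribed number of intervals $M_{N^*}=9(N^*/\delta^*_{N^*})^2\log((N^*)^2/\delta^*_{N^*})$, the extra failure-probability term on the right side of (\ref{eq:wildbound}) is indeed of order $(N^*)^{-1}$; the values of $N^*$ and $\delta^*_{N^*}$ below are illustrative only.

```python
import math

def extra_term(n_star, delta_star):
    # Extra term (N*/delta*) * (1 - (delta*/(3 N*))^2)^M on the right side of
    # the probability bound, with M set to the prescribed value
    # 9 (N*/delta*)^2 log((N*)^2 / delta*).
    p = (delta_star / (3.0 * n_star)) ** 2
    M = math.ceil(9 * (n_star / delta_star) ** 2
                  * math.log(n_star ** 2 / delta_star))
    return (n_star / delta_star) * (1.0 - p) ** M

# Illustrative scaling check: the term stays below a constant multiple of 1/N*.
for n_star in (10_000, 100_000):
    assert extra_term(n_star, delta_star=n_star ** 0.9) < 10.0 / n_star
```

The calculation mirrors the algebra in the text: with $p=(\delta^*_{N^*}/3N^*)^2$ and the prescribed $M_{N^*}$, $(1-p)^{M_{N^*}}\leq \exp(-pM_{N^*})=\delta^*_{N^*}/(N^*)^2$, so the product with $N^*/\delta^*_{N^*}$ is $1/N^*$.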
Suppose we perform wild binary segmentation with $M_{N^*}$ random intervals $[s_1,e_1],\dots,[s_{M_{N^*}},e_{M_{N^*}}]$. For large ${N^*}$, the most time consuming part of the operation is maximizing the CUSUM statistic over each interval, with the other tasks in the WBinSeg procedure taking much less time. This takes an order of $\sum_{j=1}^{M_{N^*}}(e_j-s_j)$ time, and since the interval endpoints are drawn from $\{1,\dots,{N^*}\}$ with equal probability, we have $\mathbb{E}[e_j-s_j]=\frac{{N^*}}{3}(1+o(1))$ for all $j=1,\dots,M_{N^*}$. Hence the average computational time for maximizing the CUSUM statistic over all random intervals, and for WBinSeg as a whole, scales as $O({N^*}M_{N^*})=O\left( \frac{{N^*}^3}{(\delta^*_{N^*})^2}\log(({N^*})^2/\delta^*_{N^*}) \right)$, which exceeds the $O({N^*}\log({N^*}))$ time of binary segmentation whenever ${N^*}/\delta^*_{N^*}\to\infty$. The trade-off between the increased accuracy of the estimates and the greater computational time, as they pertain to intelligent sampling, will be analyzed later. \newline \newline \indent As with the BinSeg method, we need to verify that the WBinSeg method also satisfies the consistency condition of (\ref{eq:firstconsistent}) at step (ISM3) of intelligent sampling, so that the results of Theorem \ref{thm:increasingJasymprotics} continue to hold with the latter as the first stage procedure. To this end, we need to exhibit a set of signal estimators that satisfy the condition of Lemma \ref{cor:signalconsistent} with a $\rho^*_{N^*}$ such that $J\rho^*_{N^*}\to 0$. We do this by using the estimator proposed in (\ref{eq:signalestdef}), and also by imposing two further conditions: \begin{enumerate}[label=(M\arabic* (WBinSeg)):] \setlength{\itemindent}{.5in} \setcounter{enumi}{7} \item $\Xi$ (from condition (M3)) is further restricted by $\Xi\in [0,1/3)\,,$ \item $N_1$, from step (ISM2), is chosen so that $N_1=K_1N^\gamma$ for some $K_1>0$ and $\gamma>3\Xi$.
\end{enumerate} \begin{lemma}\label{lem:wbinsegconsistent} Under conditions (M1) through (M4), (M8 (WBinSeg)), and (M9 (WBinSeg)), we have \begin{eqnarray}\label{eq:wildsimulconsistent} \mathbb{P}\left[\hat{J}=J;\quad \max_{i=1,...,J}|\hat{\tau}_i-\tau_i|\leq w^*({N^*});\quad \max_{i=0,...,J}|\hat{\nu}_i-\nu_i|\leq \rho^*_{N^*}\right]\to 1 \end{eqnarray} for some $\rho^*_{N^*}$ where $J\rho^*_{N^*}\to 0$. \end{lemma} \begin{proof} We need to show that wild binary segmentation satisfies the requirements of Lemma \ref{cor:signalconsistent}. We have $\delta^*_{N^*}\gtrsim (N^{*})^{1-\frac{\Xi}{\gamma}}$ and $J\lesssim (N^{*})^{\frac{\Xi}{\gamma}}$. By the properties of the WBinSeg estimator shown in Theorem \ref{thm:wbinsegresults}, $w^*(N^*)\sim\log(N^*)$ and $B_{N^*} \sim (N^*)^{-1}$. We shall show that for $\rho^*_{N^*}=(N^*)^\theta$ where $\theta\in \left( \frac{\Xi}{2\gamma}-\frac{1}{2},-\frac{\Xi}{\gamma} \right)$, the conditions of Lemma \ref{cor:signalconsistent} are satisfied, and in addition, $J\rho^*_{N^*}\to 0$. \newline \newline First, the interval $\left( \frac{\Xi}{2\gamma}-\frac{1}{2},-\frac{\Xi}{\gamma} \right)$ is nonempty since \begin{eqnarray} \frac{3\Xi}{2\gamma}-\frac{1}{2}=\frac{1}{2}\left( 3\frac{\Xi}{\gamma}-1 \right)<0 \end{eqnarray} by condition (M9 (WBinSeg)); hence $\frac{\Xi}{2\gamma}-\frac{1}{2}<-\frac{\Xi}{\gamma}$. Second, $J\rho^*_{N^*}\to 0$ since \begin{eqnarray} J\rho^*_{N^*}\lesssim (N^*)^{ \frac{\Xi}{\gamma} }(N^*)^\theta \end{eqnarray} and the exponent satisfies $$\theta+\frac{\Xi}{\gamma}<0.
$$ Finally, as for the conditions of Lemma \ref{cor:signalconsistent}, we have \begin{itemize} \item $\frac{w^*(N^*)}{\delta^*_{N^*}}\lesssim(N^*)^{-(1-\frac{\Xi}{\gamma})}\log(N^*)\to 0$, hence $w^*(N^*)=o(\delta^*_{N^*})$ \item $\frac{N^*}{\delta^*_{N^*}}B_{N^*}\sim\frac{1}{\delta^*_{N^*}}\to 0$ \item because $\rho_{N^*}^*=(N^*)^\theta$ where $\theta\in \left( \frac{\Xi}{2\gamma}-\frac{1}{2},-\frac{\Xi}{\gamma} \right)$, we have $\rho_{N^*}^*=o(1)$ \item $\frac{N^*w^*(N^*)}{(\delta^*_{N^*})^{3/2}\rho^*_{N^*}}\lesssim (N^*)^{1-\theta}\log(N^*)$, and $\delta^*_{N^*}(\rho^*_{N^*})^2\gtrsim (N^*)^{2\theta+1-\frac{\Xi}{\gamma}} $; since $2\theta+1-\frac{\Xi}{\gamma}>0$, this means that $\frac{N^*w^*(N^*)}{(\delta^*_{N^*})^{3/2}\rho^*_{N^*}}=o( \exp(C_2\delta^*_{N^*}(\rho^*_{N^*})^2) )$ for any positive constant $C_2$ \end{itemize} Since all conditions of Lemma \ref{cor:signalconsistent} are satisfied, the signal estimators satisfy \begin{eqnarray} \mathbb{P}\left[ \hat{J}=J;\; \max_{j=0,\dots,J}|\hat{\nu}^*_j-\nu_j|\leq \rho^*_{N^*} \right]\to 1 \end{eqnarray} which, combined with the consistency of the change point estimators via a Bonferroni inequality, yields the consistency result (\ref{eq:firstconsistent}). \end{proof} \begin{remark} As with the BinSeg algorithm, the WBinSeg procedure is asymptotically consistent but faces the same issues as BinSeg in a practical setting where the goal is to obtain confidence bounds $[\hat{\tau}_i\pm C_3\log({N^*})]$ for the change point $\tau_i$. Namely, there are unspecified constants associated with the tuning parameter $\zeta_{N^*}$ and the confidence interval width $C_3\log({N^*})$ in (\ref{eq:wildbound}). The issue of choosing a confidence interval width can be resolved by applying the procedure of Section \ref{sec:refitting}.
\end{remark} \begin{table} \caption{Table of $\gamma_{min}$ and computational times for various values of $\Xi$, using WBinSeg at stage 1.}\label{tab:WBStime} \centering \begin{tabular}{|c|c|c|c|} \hline $\Xi$ & $[0,1/9)$ & $[1/9,1/7)$ & $[1/7,1/3)$ \\ \hline $\gamma_{min}$ & $\frac{1-2\Xi+\Lambda}{2}$ & $\max\left\{\frac{1-2\Xi+\Lambda}{2},3\Xi+\eta \right\}$ & $3\Xi+\eta$\\ \hline Order of Time & $N^{(1+2\Xi+\Lambda)/2}\log(N)$ & $ N^{(1+2\Xi+\Lambda)/2}\log(N)$ or $N^{5\Xi+\eta}\log(N)$ & $N^{5\Xi+\eta}\log(N)$ \\ \hline \hline $\gamma_{min}$ ($\Lambda=0$) & $\frac{1-2\Xi}{2}$ & $3\Xi+\eta$ & $3\Xi+\eta$ \\ \hline Time ($\Lambda=0$) & $N^{(1+2\Xi)/2}\log(N)$ & $N^{5\Xi+\eta} \log(N)$ & $N^{5\Xi+\eta}\log(N)$ \\ \hline \hline $\gamma_{min}$ ($\Lambda=\Xi$) & $\frac{1-\Xi}{2}$ & $\frac{1-\Xi}{2}$ & $3\Xi+\eta$\\ \hline Time ($\Lambda=\Xi$) & $N^{(1+3\Xi)/2}\log(N)$ & $N^{(1+3\Xi)/2} \log(N)$ & $N^{5\Xi+\eta}\log(N)$ \\ \hline \end{tabular} \end{table} \begin{figure} \caption{The blue triangle encompasses all valid values of $\gamma$ vs $\Xi$ as set by (M8 (WBinSeg)) and (M9 (WBinSeg)). The pink region, solid red lines, and dotted red lines denote $\gamma_{min}$ for each $\Xi$.} \label{fig:WBStime} \end{figure} \begin{remark} \label{comparing-comuting-time} Although the $\gamma_{min}$ values are, across the board, smaller than those given in Table \ref{table-time-binseg} of the main paper (which records the $\gamma_{min}$ values and computational times of BinSeg at stage 1), the order of the actual computational time is also greater across the board. There is some advantage in using WBinSeg, since the consistency condition (\ref{eq:firstconsistent}) with $J\rho_N\to 0$ is satisfied over a wider range of $\Xi$ ($\Xi<1/3$ as opposed to $\Xi<1/4$ for BinSeg), but in scenarios where the change points are placed far apart BinSeg would run in shorter time.
\end{remark} \subsection{Computational Time Order for Multiple Change Points}\label{sec:multipletime} \indent To analyze the computational time when using WBinSeg at stage 1, we again assume that $\delta_N/N^{1-\Xi}\to K_1$ and $J(N)/N^\Lambda\to K_2$ for some constant $\Lambda\leq \Xi$ and positive constants $K_1$, $K_2$. To summarize the details, for $N_1\sim N^\gamma$, the average time for the first stage is $O(N^{\gamma+2\Xi}\log(N))$, while the second stage takes, on average, $O(N^{1-\gamma+\Lambda}\log(N))$ time. Together with condition (M9 (WBinSeg)), which requires $\gamma>3\Xi$, the order of average time for both stages combined is minimized by setting $\gamma_{min}=\max\left\{ \frac{1-2\Xi+\Lambda}{2},3\Xi+\eta \right\}$ for any small constant $\eta>0$, with the average total computational time being $O(N^{\gamma_{min}+2\Xi}\log(N))$. \newline \newline \noindent{\bf Detailed Analysis:} We have $\delta_{N}\sim N^{1-\Xi}$ for some $\Xi\in [0,1/3)$ and $J\sim N^\Lambda$ for some $\Lambda\in [0,\Xi]$. Given $n$ data points from (\ref{model}) with minimum separation $\delta_n$ between change points, it takes an order of $\frac{n^3}{\delta_n^2}\log(n)$ time to perform the procedure, due to having to use $M_n\sim \left(\frac{n}{\delta_n}\right)^2\log(n^2/\delta_n)$ random intervals, each requiring $O(n)$ time on average. The first stage works with time series data of length of order $N^\gamma$ with minimal separation $\delta_{N^*}^*$, and hence has an average computational time of the same order as $(N^*/\delta_{N^*}^*)^2\cdot N^*\log(N^*)$, which is the same order as $N^{\gamma+2\Xi}\log(N)$. The second stage works with $\hat{J}$ intervals, each of width $CN^{1-\gamma}\log(N)$ for some constant $C$. Because we have \begin{eqnarray} \mathbb{P}[\hat{J}=J]\geq 1-C(N^*)^{-1}\qquad \text{ for some }C>0 \end{eqnarray} by our earlier condition on $M_{N^*}$, and since $\hat{J}\leq N^*$, we arrive at $\mathbb{E}[\hat{J}]=O(J)$ because $$ \mathbb{E}[\hat{J}]\leq J(1-C(N^*)^{-1})+N^*(C(N^*)^{-1})\leq (C+1)J.
$$ This in turn shows that the expected computational time of the second stage is $O(JN^{1-\gamma}\log(N))$, which simplifies to $O(N^{1-\gamma+\Lambda}\log(N))$. \newline \newline \indent Both stages combined are expected to take $O\left(N^{(\gamma+2\Xi)\vee (1-\gamma+\Lambda)}\log(N)\right)$ time. This fact, combined with the requirement $\gamma>3\Xi$, leads to an optimal way to choose $\gamma$ to minimize the amount of computational time: \begin{itemize} \item On the region $\Xi<1/9$ we can solve the equation $\gamma_{min}+2\Xi=1-\gamma_{min}+\Lambda$ to get the minimizing $\gamma$ as $\gamma_{min}=\frac{1-2\Xi+\Lambda}{2}$, which satisfies $\gamma_{min}>3\Xi$. This results in $O(N^{\frac{1+2\Xi+\Lambda}{2}}\log(N))$ computational time. \item On the region $\Xi\in [1/9,1/7)$: \begin{itemize} \item If $\frac{1-2\Xi+\Lambda}{2}> 3\Xi$, set $\gamma_{min}=\frac{1-2\Xi+\Lambda}{2}$, resulting in $O(N^{\frac{1+2\Xi+\Lambda}{2}}\log(N))$ computational time. \item Otherwise, if $\frac{1-2\Xi+\Lambda}{2}\leq 3\Xi$, set $\gamma_{min}=3\Xi+\eta$, where $\eta>0$ is small, for $O(N^{5\Xi+\eta}\log(N))$ computational time. \end{itemize} \item For $\Xi\in [1/7,1/3)$, set $\gamma_{min}=3\Xi+\eta$, where $\eta>0$ is small, for $O(N^{5\Xi+\eta}\log(N))$ computational time. \end{itemize} \subsection{Simulation Results for WBinSeg} We next examined how well intelligent sampling with WBinSeg works in practice by running a set of simulations with the same model parameters as in Setup 2 of Section \ref{sec:simulations}, using the exact same method of estimation except with WBinSeg in place of the BinSeg algorithm. For the tuning parameters of WBinSeg, the same $\zeta_N$ was retained and the number of random intervals was taken as $M_{N_1}=20{,}000$. Although we could have used the theoretically prescribed value of $M_{N_1}=9(N_1/\delta_{N_1}^*)^2\log(N_1^2/\delta_{N_1}^*)$, this turns out to be over 400{,}000 and is excessive, as setting $M_{N_1}=20{,}000$ already gave accurate estimates.
\begin{figure} \caption{Distributions of $\max_{1\leq j\leq 55}\lambda_2\left(\tau_j,\hat{\tau}_j^{(2)}\right)$ and $\lambda_2\left(\tau_{27},\hat{\tau}_{27}^{(2)}\right)$ from 1000 trials using the same parameters as setup 2 but employing WBinSeg instead of BinSeg. } \label{fig:setup5} \end{figure} Using WBinSeg along with steps (D1) and (D2), the event $\{\hat{J}=J\}$ also occurred over 99\% of the time during simulations. One can also see from Figure \ref{fig:setup5} that the distribution of the $\hat{\tau}^{(2)}_j$'s again matches Theorem \ref{thm:multidepend}. Because the performance of intelligent sampling is nearly identical for this setup regardless of whether BinSeg or WBinSeg is used, the reader may wonder why the latter was not used for the previous few simulations. The reason is the following: when the re-fitting method from Section \ref{sec:refitting} is implemented, it results in the second stage intervals being of width $Q_1(1-\alpha/J)N^{1-\gamma}$, irrespective of which of BinSeg or WBinSeg was used at stage one [where $Q_1(1-\alpha/J)$ is the $1-\alpha/J$ quantile of $|L_1|$]. Hence, WBinSeg loses any possible advantage from its tighter stage-one confidence bound of width $O(\log(N))$, compared to $O(E_N)$ for BinSeg. So, in a sparse change point setting and with stage 1 refitting, WBinSeg provides no accuracy advantages but adds to the computational time, e.g., the 1000 iterations used to create Figure \ref{fig:setup5} averaged $\approx 293$ seconds, while the iterations used to create Figure \ref{fig:setup2} averaged $\approx 7$ seconds. \end{document}
Borel's lemma

In mathematics, Borel's lemma, named after Émile Borel, is an important result used in the theory of asymptotic expansions and partial differential equations.

Statement

Suppose U is an open set in the Euclidean space Rn, and suppose that f0, f1, ... is a sequence of smooth functions on U. If I is any open interval in R containing 0 (possibly I = R), then there exists a smooth function F(t, x) defined on I×U, such that $\left.{\frac {\partial ^{k}F}{\partial t^{k}}}\right|_{(0,x)}=f_{k}(x),$ for k ≥ 0 and x in U.

Proof

Proofs of Borel's lemma can be found in many text books on analysis, including Golubitsky & Guillemin (1974) and Hörmander (1990), from which the proof below is taken. Note that it suffices to prove the result for a small interval I = (−ε,ε), since if ψ(t) is a smooth bump function with compact support in (−ε,ε) equal identically to 1 near 0, then ψ(t) ⋅ F(t, x) gives a solution on R × U. Similarly, using a smooth partition of unity on Rn subordinate to a covering by open balls with centres at δ⋅Zn, it can be assumed that all the fm have compact support in some fixed closed ball C. For each m, let

$F_{m}(t,x)={t^{m} \over m!}\cdot \psi \left({t \over \varepsilon _{m}}\right)\cdot f_{m}(x),$

where εm is chosen sufficiently small that

$\|\partial ^{\alpha }F_{m}\|_{\infty }\leq 2^{-m}$ for |α| < m.

These estimates imply that each sum $\sum _{m\geq 0}\partial ^{\alpha }F_{m}$ is uniformly convergent and hence that $F=\sum _{m\geq 0}F_{m}$ is a smooth function with $\partial ^{\alpha }F=\sum _{m\geq 0}\partial ^{\alpha }F_{m}.$ By construction, $\partial _{t}^{m}F(t,x)|_{t=0}=f_{m}(x).$

Note: Exactly the same construction can be applied, without the auxiliary space U, to produce a smooth function on the interval I for which the derivatives at 0 form an arbitrary sequence.

See also

• Non-analytic smooth function § Application to Taylor series

References

• Erdélyi, A. (1956), Asymptotic expansions, Dover Publications, pp. 22–25, ISBN 0486603180
• Golubitsky, M.; Guillemin, V. (1974), Stable mappings and their singularities, Graduate Texts in Mathematics, vol. 14, Springer-Verlag, ISBN 0-387-90072-1
• Hörmander, Lars (1990), The analysis of linear partial differential operators, I. Distribution theory and Fourier analysis (2nd ed.), Springer-Verlag, p. 16, ISBN 3-540-52343-X

This article incorporates material from Borel lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Tom Lehrer

Thomas Andrew Lehrer (/ˈlɛərər/; born April 9, 1928) is an American musician, singer-songwriter, satirist, and mathematician, who later taught mathematics and musical theater. He recorded pithy and humorous songs that became popular in the 1950s and 1960s. His songs often parodied popular musical forms, though they usually had original melodies. An exception is "The Elements", in which he set the names of the chemical elements to the tune of the "Major-General's Song" from Gilbert and Sullivan's Pirates of Penzance.

[Infobox] Lehrer c. 1957
Born: Thomas Andrew Lehrer, April 9, 1928, Manhattan, New York City, U.S.
Occupations: singer-songwriter • satirist • mathematician
Genres: novelty • satire • comedy • science
Instruments: vocals • piano
Years active: 1945–1973 • 1980 • 1998
Labels: TransRadio • Lehrer • Reprise/Warner Bros. • Rhino/Atlantic • Shout! Factory
Website: tomlehrersongs.com

Lehrer's early performances dealt with non-topical subjects and black humor in songs such as "Poisoning Pigeons in the Park". In the 1960s, he produced songs about timely social and political issues, particularly for the U.S. version of the television show That Was the Week That Was. The popularity of these songs has far outlasted their topical subjects and references. Lehrer quoted a friend's explanation: "Always predict the worst and you'll be hailed as a prophet."[1] In the early 1970s, Lehrer largely retired from public performance to devote his time to teaching mathematics and musical theater history at the University of California, Santa Cruz.

Early life

Thomas Andrew Lehrer was born on April 9, 1928, to a secular Jewish family and grew up on Manhattan's Upper East Side.[2][3] He is the son of Morris James Lehrer and Anna Lehrer (née Waller).[4][5] He began studying classical piano at the age of seven, but was more interested in the popular music of the age.
Eventually, his mother also sent him to a popular-music piano teacher.[6] At this early age, he began writing show tunes, which eventually helped him as a satirical composer and writer in his years of lecturing at Harvard University and later at other universities.[7] Lehrer attended the Horace Mann School in Riverdale, New York, part of the Bronx.[2][8] He also attended Camp Androscoggin, both as a camper and a counselor.[9] Lehrer was considered a child prodigy and entered Harvard College at the age of 15, after graduating from the Loomis School;[2] one of his professors there was Irving Kaplansky.[10][11] As a mathematics undergraduate student at Harvard College, he began to write comic songs to entertain his friends, including "Fight Fiercely, Harvard" (1945). Those songs were later named collectively The Physical Revue,[12] a joking reference to a leading scientific journal, the Physical Review.

Academic and military career

Lehrer graduated with a Bachelor of Arts in mathematics from Harvard University, magna cum laude,[13] in 1946.[14] At Harvard, he was the roommate of the Canadian theologian Robert Crouse.[15] He received his AM degree the next year and was inducted into Phi Beta Kappa.[16] He later taught mathematics and other classes at MIT, Harvard, Wellesley, and the University of California, Santa Cruz.[17] Lehrer remained in Harvard's doctoral program for several years, taking time out for his musical career and to work as a researcher at the Los Alamos Scientific Laboratory.[18] Lehrer was drafted into the U.S. Army, serving from 1955 to 1957 and working at the National Security Agency (NSA).
Lehrer has stated that he invented the Jello shot during this time, as a means of circumventing a naval base's ban on alcoholic beverages.[18] Despite holding a master's degree in an era when American conscripts often lacked a high school diploma, Lehrer served as an enlisted soldier, achieving the rank of Specialist Third Class, which he described as being a "corporal without portfolio".[19] These experiences became fodder for songs, such as "The Wild West is Where I Want to Be" and "It Makes a Fellow Proud to Be a Soldier".[20] In 2020 Lehrer publicly revealed that he had been assigned to the NSA; since the mere fact of the NSA's existence was classified at the time, Lehrer found himself in the position of implicitly using nuclear weapons work as a cover story for something more sensitive.[21] In 1960, Lehrer returned to full-time math studies at Harvard.[8] From 1962, Lehrer taught in the political science department at the Massachusetts Institute of Technology (MIT).[22] In 1965 he gave up on his mathematics dissertation on modes in statistics, after working on it intermittently for 15 years.[2] In 1972, Lehrer joined the faculty of the University of California, Santa Cruz, teaching an introductory course entitled The Nature of Mathematics to liberal arts majors—"math for tenors", according to Lehrer. He also taught a class in musical theater. He occasionally performed songs in his lectures.[11] In 2001, Lehrer taught his last mathematics class, on the topic of infinity, and retired from academia.[23] He has remained in the area, and in 2003 said he still "hangs out" around the University of California, Santa Cruz.[24]

Musical career

Style and influences

When You Are Old and Gray
Lehrer c. 1958

Lehrer was mainly influenced by musical theater. According to Gerald Nachman's book Seriously Funny,[25] the Broadway musical Let's Face It! made an early and lasting impression on him. Lehrer's style consists of parodying various forms of popular song.
For example, his appreciation of list songs led him to write "The Elements", which lists the chemical elements to the tune of Gilbert and Sullivan's "Major-General's Song". In author and Boston University professor Isaac Asimov's second autobiographical volume, In Joy Still Felt, Asimov recounted seeing Lehrer perform in a Boston nightclub on October 9, 1954. Lehrer sang a song about Jim getting it from Louise, and Sally from Jim, "...and after a while you gathered the 'it' was venereal disease. Suddenly, as the combinations grew more grotesque, you realized he was satirizing every known perversion without using a single naughty phrase. It was clearly unsingable outside a nightclub." [The song was likely "I Got It From Agnes".] Asimov also recalled a song that dealt with the Boston subway system, making use of the stations leading into town from Harvard, observing that the local subject-matter rendered the song useless for general distribution. Lehrer subsequently granted Asimov permission to print the lyrics to the subway song in his book. "I haven't gone to nightclubs often," said Asimov, "but of all the times I have gone, it was on this occasion that I had by far the best time."[26]

Recordings

We Will All Go Together When We Go
Lehrer c. 1958

Lehrer was encouraged by the success of his performances, so he paid $15 (equivalent to $164 in 2022) for some studio time in 1953 to record Songs by Tom Lehrer. The initial pressing was 400 copies. Radio stations would not air his songs because of his controversial subjects, so he sold the album on campus at Harvard for $3, equivalent to $33 in 2022, while "several stores near the Harvard campus sold it for $3.50, taking only a minimal markup as a kind of community service.
Newsstands on campus sold it for the same price."[27] After one summer, he started to receive mail orders from all parts of the country, as far away as San Francisco, after the San Francisco Chronicle wrote an article on the record.[28] Interest in his recordings spread by word of mouth. People played their records for friends, who then also wanted a copy.[29] Lehrer recalled, "Lacking exposure in the media, my songs spread slowly. Like herpes, rather than ebola."[30] The album included the macabre "I Hold Your Hand in Mine", the mildly risqué "Be Prepared", and "Lobachevsky" regarding plagiarizing mathematicians. It became a cult success by word of mouth, despite being self-published and without promotion. The limited distribution of the album led to a knock-off album by Jack Enjal being released in 1958 without Lehrer’s approval, in which some of the lyrics were mistranscribed.[31] Lehrer embarked on a series of concert tours and recorded a second album in 1959. He released the second album in two versions: the songs were the same, but More of Tom Lehrer was a studio recording and An Evening Wasted with Tom Lehrer was recorded live in concert. In 2013, Lehrer recalled the studio session for "Poisoning Pigeons in the Park", which referred to the practice of controlling pigeons in Boston with strychnine-treated corn:[32]

The copyist arrived at the last minute with the parts and passed them out to the band... And there was no title on it, and there was no lyrics. And so they ran through it, "What a pleasant little waltz".... And the engineer said, "'Poisoning Pigeons in the Park,' take one," and the piano player said, "What?" and literally fell off the stool.[33]

Touring

Lehrer had a breakthrough in the United Kingdom on December 4, 1957, when the University of London awarded a doctor of music degree honoris causa to Princess Margaret, and the public orator, Professor J. R.
Sutherland, said it was "in the full knowledge that the Princess is a connoisseur of music and a performer of skill and distinction, her taste being catholic, ranging from Mozart to the calypso and from opera to the songs of Miss Beatrice Lillie and Tom Lehrer."[34][35][36] This prompted significant interest in Lehrer's works and helped to secure distribution in Britain for his five-year-old debut album. It was there that his music achieved real sales popularity, as a result of the proliferation of university newspapers referring to the material, and inadvertently due to the BBC, which in 1958 banned from broadcast 10 of the 12 songs on the album.[37] By the end of the 1950s, Lehrer had sold 370,000 records.[2]

That Was The Week That Was

In 1960, Lehrer essentially retired from touring in the U.S.[2] The same year, he toured Australia and New Zealand, performing a total of 33 concerts to great acclaim[27] and controversy.[38] While in New Zealand, Lehrer wrote a song criticizing the All Blacks rugby team's 1960 tour of apartheid South Africa.[39] These tours occurred during a time in which he was, he said, "banned, censored, mentioned in several houses of parliament and threatened with arrest". In particular, "Be Prepared" drew advance resistance in Brisbane from the commissioner of police. He performed several songs in Australia that were still unreleased, including "The Masochism Tango".[24] In the early 1960s, he was employed as the resident songwriter for the U.S.
edition of That Was The Week That Was (TW3), a satirical television show.[27]
Lehrer performing in Copenhagen, 1967
A greater proportion of his output became overtly political, or at least topical, on subjects such as education ("New Math"), the Second Vatican Council ("The Vatican Rag", a tune based on the 1910 "Spaghetti Rag" by Lyons and Yosco),[40][41][42] race relations ("National Brotherhood Week"), air and water pollution ("Pollution"), American militarism ("Send the Marines"), and nuclear proliferation ("Who's Next?" and "MLF Lullaby"). He also wrote a song satirizing rocket scientist Wernher von Braun, who worked for Nazi Germany before working for the United States. Lehrer did not appear on the television show; vocalist Nancy Ames performed his songs (to Lehrer's chagrin),[31] and lines were often cut from his songs.[31] Lehrer later recorded nine of these songs on the album That Was The Year That Was (1965), recorded at the hungry i nightclub in San Francisco.[43] In 1966, BBC TV host David Frost returned to the UK with the BBC program The Frost Report; alongside Julie Felix, Lehrer provided musical satire on the weekly subject.[44] The show was transmitted live, and he pre-recorded all his segments at one performance.[43] Lehrer was not featured in every edition, but his songs were featured in an appropriate part of each show.[45] At least two of his songs were not included on any of his LPs: a reworking of Noël Coward's "That is the End of the News" (with some new lyrics)[46] and a comic explanation of how Britain might adapt to the coming of decimal currency.[47] The record deal with Reprise Records for That Was The Year That Was also gave Reprise distribution rights for his earlier recordings, as Lehrer wanted to wind up his own record imprint.[31] The Reprise issue of Songs by Tom Lehrer was a stereo re-recording. This version was not issued on CD, as Lehrer was unhappy with it.
[43] The live recording included bonus tracks "L-Y" and "Silent E", two of the ten songs that he wrote for the PBS children's educational series The Electric Company. Lehrer later commented that worldwide sales of the recordings under Reprise surpassed 1.8 million units in 1996. That same year, That Was The Year That Was went gold.[29] The album liner notes promote his songs with self-deprecating humor, such as quoting a New York Times review from 1959: Mr. Lehrer's muse is "not fettered by such inhibiting factors as taste."[48] Lehrer toured Sweden, Norway and Denmark in 1967;[49] his concert in Oslo was recorded for Danish television and subsequently released on DVD some 40 years later.[50] He performed as a prominent international guest at the Studenterforeningen (student association) in Copenhagen, which was televised, and he commented on stage that he might be America's "revenge for Victor Borge".[51] He performed original songs in a Dodge automobile industrial film distributed primarily to automobile dealers and shown at promotional events in 1967, set in a fictional American wild west town and titled The Dodge Rebellion Theatre presents Ballads For '67.[27][52] He attempted to adapt Sweeney Todd as a Broadway musical, working with Joe Raposo, to star Jerry Colonna. They started a few songs but, as Lehrer noted, "Nothing ever came of it, and of course twenty years later Stephen Sondheim beat me to the punch."[53]
Departure from the music scene
Lehrer performing at a fundraising event for George McGovern in Brattleboro, Vermont, 1972
In the 1970s, Lehrer concentrated on teaching mathematics and musical theater, although he also wrote ten songs for the educational children's television show The Electric Company.
His last public performance for many years took place in 1972, on a fundraising tour for Democratic US presidential candidate George McGovern.[2] When asked about his reasons for abandoning his musical career in an interview in the book accompanying his CD boxed set, released in 2000, Lehrer cited a lack of interest, a disdain of touring, and the monotony of performing the same songs repeatedly. He observed that when he was moved to write and perform songs, he did and, when he was not, he did not, and that after a while he simply lost interest. Even though Lehrer was "a hero of the anti-nuclear, civil rights left" and covered its political issues in many of his songs, and even though he shared the New Left's opposition to the Vietnam War and advocated for civil rights, he disliked the aesthetics of the counterculture of the 1960s and stopped performing as the movement gained momentum.[2] Lehrer's musical career was relatively brief. He once mentioned that he performed a mere 109 shows and wrote 37 songs over 20 years.[54] Nevertheless, he developed a significant following in the United States and abroad.
Revivals and reissues
Lehrer's music became a staple of the Doctor Demento Show when it began national syndication in 1977.[55] In 1980, Cameron Mackintosh produced Tomfoolery, a revue of Lehrer's songs that was a hit on the London stage. Lehrer was not initially involved with the show, but he was pleased with it; he eventually gave the stage production his full support and updated several of his lyrics for the show. Tomfoolery contained 27 songs and led to more than 200 productions,[29] including an Off-Broadway production at the Village Gate which ran for 120 performances in 1981.[56] Lehrer made a rare TV appearance on BBC's Parkinson show in conjunction with the Tomfoolery premiere in 1980 at the Criterion Theatre in London, where he sang "I Got It from Agnes".[57][58] There were Tomfoolery performances in San Francisco around 1982 and in 2018–19.
In 1993, he wrote "That's Mathematics" for the closing credits to a Mathematical Sciences Research Institute video[59] celebrating the proof of Fermat's Last Theorem. On June 7 and 8, 1998, Lehrer performed in public for the first time in 25 years at the Lyceum Theatre, London as part of the show Hey, Mr. Producer! celebrating the career of Cameron Mackintosh, who had produced Tomfoolery. The June 8 show was his only performance before Queen Elizabeth II. Lehrer sang "Poisoning Pigeons in the Park" and an updated version of the nuclear proliferation song "Who's Next?"[60] In 2000, Lehrer commented that he doubted his songs had any real effect on those not already critical of the establishment: "I don't think this kind of thing has an impact on the unconverted, frankly. It's not even preaching to the converted; it's titillating the converted ... I'm fond of quoting Peter Cook, who talked about the satirical Berlin Kabaretts of the 1930s, which did so much to stop the rise of Hitler and prevent the Second World War."[61] Lehrer has said, jokingly, of his musical career: "If, after hearing my songs, just one human being is inspired to say something nasty to a friend, or perhaps to strike a loved one, it will all have been worth the while."[6] In 2003, Lehrer commented that his particular brand of political satire is more difficult in the modern world: "The real issues I don't think most people touch. The Clinton jokes are all about Monica Lewinsky and all that stuff and not about the important things, like the fact that he wouldn't ban land mines ... I'm not tempted to write a song about George W. Bush. I couldn't figure out what sort of song I would write. That's the problem: I don't want to satirize George Bush and his puppeteers, I want to vaporize them."[24] In 2000, the boxed CD set The Remains of Tom Lehrer was released by Rhino Entertainment. 
It included live and studio versions of his first two albums, That Was The Year That Was, the songs that he wrote for The Electric Company, and some previously unreleased material, as well as a small hardbound lyrics book with an introduction by Dr. Demento. In 2010, Shout! Factory launched a reissue campaign, making Lehrer's out-of-print albums available digitally. The CD/DVD combo The Tom Lehrer Collection was also issued, including his best-known songs, with a DVD featuring an Oslo concert.[62] In a February 2008 phone call, Gene Weingarten of The Washington Post interviewed Lehrer off the record. When Weingarten asked if there was anything he could print for the record, Lehrer responded "Just tell the people that I am voting for Obama."[63] In 2012, rapper 2 Chainz sampled Lehrer's song "The Old Dope Peddler" on his debut album, Based on a T.R.U. Story. In 2013, Lehrer said he was "very proud" to have his song sampled "literally sixty years after I recorded it". Lehrer went on to describe his official response to the request to use his song: "As sole copyright owner of 'The Old Dope Peddler', I grant you motherfuckers permission to do this. Please give my regards to Mr. Chainz, or may I call him 2?"[33][64]
All songs in the public domain
In October 2020, Lehrer transferred the music and lyrics for all songs he had ever written into the public domain.[65][66] In November 2022, he formally relinquished the copyright and performing/recording rights on his songs, making all music and lyrics composed by him free for anyone to use, and established a website from which all of his recordings and printable copies of all of his songs could be downloaded.[67] His statement releasing all his works into the public domain concludes with this note: "This website will be shut down at some date in the not too distant future, so if you want to download anything, don't wait too long."
Musical legacy
In 1967, Swedish actor Lars Ekborg, known outside Sweden for his part in Ingmar Bergman's Summer with Monika, made an album called I Tom Lehrers vackra värld ("In the beautiful world of Tom Lehrer"), with 12 of Lehrer's songs interpreted in Swedish. Lehrer wrote in a letter to the producer Per-Anders Boquist that, "Not knowing any Swedish, I am obviously not equipped to judge, but it sounds to me as though Mr. Ekborg is perfect for the songs", along with further compliments to pianist Leif Asp for unexpected additional flourishes.[68] In 1971, Argentinian singer Nacha Guevara sang Spanish versions of several Lehrer songs for the show/live album Este es el año que es.[69][70] Composer Randy Newman said of Lehrer, "He's one of the great American songwriters without a doubt, right up there with everybody, the top guys. As a lyricist, as good as there's been in the last half of the 20th century."[33] Singer and comedian Dillie Keane has acknowledged Lehrer's influence on her work.[71] Dr. Demento praised Lehrer as "the best musical satirist of the twentieth century." Other artists who cite Lehrer as an influence include "Weird Al" Yankovic, whose work generally addresses more popular and less technical or political subjects,[72] and educator and scientist H. Paul Shuch, who tours under the stage name Dr. SETI, and calls himself "a cross between Carl Sagan and Tom Lehrer: He sings like Sagan and lectures like Lehrer."[73] Yankovic saw Daniel Radcliffe (who called Lehrer his "hero")[74] perform "The Elements" on The Graham Norton Show in his native United Kingdom, which led to Radcliffe starring in Weird: The Al Yankovic Story. In 2004, British medical satirists Amateur Transplants acknowledged the debt they owe to Lehrer on the back of their first album, Fitness to Practice.
Their song "The Menstrual Rag" uses the tune of Lehrer's "The Vatican Rag"; and "The Drugs Song" mirrors Lehrer's song "The Elements", both using the tune of the "Major-General's Song" from The Pirates of Penzance by Gilbert and Sullivan. The Amateur Transplants' second album, Unfit to Practise, opens with an update of Lehrer's "The Masochism Tango" called "Masochism Tango 2008". From January 16 to February 25, 2006, the play Letters from Lehrer, written and performed by Canadian Richard Greenblatt, ran at CanStage in Toronto. It followed Lehrer's musical career, the meaning of several songs, the politics of the time, and Greenblatt's own experiences with Lehrer's music, while playing some of Lehrer's songs.[75] In the March 16, 2006 issue of New York Magazine, Donald Fagen of Steely Dan named Tom Lehrer among the writers who had influenced him and his songwriting partner Walter Becker. "We also liked comic songwriting, like Tom Lehrer. He was a piano player and songwriter who wrote these grim, funny songs."[76] In 2010, the German musician-comedian Felix Janosa released an album with the title "Tauben vergiften: Die bösen Lieder von Tom Lehrer" ("Poisoning pigeons: The Evil Songs of Tom Lehrer"), with German versions of some of his best-known songs.[77] Performers influenced by Lehrer's style include American political satirist Mark Russell,[78] Canadian comedian and songwriter Randy Vancourt, and the British duo Kit and The Widow. 
Composer/cabarettist Leonard Lehrman extended three of Lehrer's songs, writing a new fourth verse for "Clementine",[79] a new second verse for "Hanukkah in Santa Monica",[80] and a new third verse for "The Elements".[81]
Discography
Studio albums
• Songs by Tom Lehrer (1953), re-recorded in 1966
• More of Tom Lehrer (1959)
Live albums
• An Evening Wasted with Tom Lehrer (1959)
• Revisited (1960)
• Tom Lehrer Discovers Australia (And Vice Versa) (1960; Australia-only)
• That Was the Year That Was (1965)
Compilation albums
• Tom Lehrer in Concert (1994; UK compilation)
• Songs & More Songs by Tom Lehrer (1997; US compilation of his first two studio albums with additional songs)
• The Remains of Tom Lehrer (2000)
• The Tom Lehrer Collection (2010)
• The Conducted Tom Lehrer (2023; adds instrumental versions of four songs, plus a previously unreleased song, "Trees")
Many of Lehrer's songs are performed by others on That Was The Week That Was (Radiola LP, 1981). The sheet music of many songs is published in The Tom Lehrer Song Book (Crown Publishers Inc., 1954; Library of Congress Card Catalog Number 54-12068) and Too Many Songs by Tom Lehrer: With Not Enough Drawings by Ronald Searle (Pantheon, 1981, ISBN 0-394-74930-8; Methuen, 1999, ISBN 978-0-413-74230-8). A second song book, Tom Lehrer's Second Song Book, is out of print (ISBN 978-0517502167).
Publications
The American Mathematical Society database lists him as co-author of two papers:
• R. E. Fagen; T. A. Lehrer (March 1958). "Random walks with restraining barrier as applied to the biased binary counter". Journal of the Society for Industrial and Applied Mathematics. 6 (1): 1–14. doi:10.1137/0106001. JSTOR 2098858. MR 0094856.
• T. Austin; R. Fagen; T. Lehrer; W. Penney (1957). "The distribution of the number of locally maximal elements in a random sample". Annals of Mathematical Statistics. 28 (3): 786–790. doi:10.1214/aoms/1177706893. MR 0091251.
Two of Lehrer's songs were reprinted, with his permission, in Mad magazine:[82]
• Tom Lehrer Sings "The Wild West is Where I Want To Be" (illustrated by George Woodbridge, MAD #32, April 1957)[83]
• Tom Lehrer's "The Hunting Song" (illustrated by George Woodbridge, MAD #35, October 1957)[84]
References
1. Ford, Andrew (July 8, 2006). "Tom Lehrer". The Music Show. Australian Broadcasting Corporation. Radio National. Interview transcript.
2. Smith, Ben (April 9, 2014). "Looking For Tom Lehrer, Comedy's Mysterious Genius". BuzzFeed. Archived from the original on November 29, 2021. Retrieved April 12, 2014.
3. Warren Allen Smith (2002). "Tom Lehrer". Celebrities in Hell. ChelCpress. p. 72. ISBN 9781569802144. He responded: No one is more dangerous than someone who thinks he has The Truth. To be an atheist is almost as arrogant as to be a fundamentalist. But then again, I can get pretty arrogant.
4. "Spotlight: 1960s satirist Tom Lehrer resurfaces". HeraldScotland. November 2020. Archived from the original on February 12, 2022. Retrieved February 12, 2022.
5. "Tom Lehrer: An equal opportunity offender". Jewish Telegraphic Agency. May 31, 2000. Archived from the original on February 12, 2022. Retrieved February 12, 2022.
6. Liner notes, Songs & More Songs By Tom Lehrer, Rhino Records, 1997.
7. Jeremy Mazner. "Tom Lehrer: The Political Musician That Wasn't". Casualhacker.net. Archived from the original on September 30, 2019. Retrieved October 25, 2019.
8. Toobin, Jeffrey R. (November 9, 1981). "Tom Lehrer". The Harvard Crimson. Archived from the original on August 7, 2013. Retrieved April 14, 2012.
9. "The Elements by Tom Lehrer". Archived from the original on February 1, 2012. Retrieved May 23, 2013.
10. "MTArchive: A Song about Pi". Mtarchive.blogspot.com. September 22, 2013. Archived from the original on October 25, 2019. Retrieved October 25, 2019.
11. Lehrer performance excerpt, "Irving "Kaps" Kaplansky's 80th Birthday Celebration". 1997. Retrieved October 27, 2015.
12. "The Physical Revue, by Tom Lehrer". Ww3.haverford.edu. October 22, 2003. Archived from the original on September 23, 2015. Retrieved October 27, 2015.
13. Toobin, Jeffrey R. (November 9, 1981). "Tom Lehrer". The Harvard Crimson. Archived from the original on October 22, 2020. Retrieved October 22, 2020.
14. "Tom Lehrer Biography". Dmdb.org. Archived from the original on October 31, 2015. Retrieved October 27, 2015.
15. Burton, Anthony (Easter 2011). "In Memoriam: The Revd. Dr. Robert Crouse" (PDF). The Prayer Book Society of Canada Newsletter. p. 1. Archived from the original (PDF) on August 24, 2016. Retrieved October 22, 2020.
16. Lynch, Peter (September 17, 2018). "Tom Lehrer: a comical, musical, mathematical genius". The Irish Times. Archived from the original on October 20, 2020. Retrieved October 22, 2020.
17. Cromelin, Richard (May 26, 2000). "Tom Lehrer's Playful Satire Still Delivers a Musical Punch". Los Angeles Times. Archived from the original on October 23, 2019. Retrieved October 23, 2019.
18. Boulware, Jack (April 19, 2000). "That Was the Wit That Was". SF Weekly. Archived from the original on November 28, 2018. Retrieved November 27, 2018.
19. Monologue self-introduction on Tom Lehrer Revisited.
20. Lehrer, Tom (May 23, 2000). "The Wild West is Where I Want to Be: spoken introduction". The Remains of Tom Lehrer. Archived from the original on November 18, 2021. Retrieved August 15, 2018.
21. Bernstein, Jeremy (2020). Quantum Profiles (2nd ed.). Oxford University Press. ISBN 978-0-190-05686-5.
22. Longley, Eric. "Tom Lehrer". St. James Encyclopedia of Popular Culture. CBS Interactive Resource Library. Archived from the original on July 9, 2012.
23. Reynolds, Alan (April 1, 2013). "Whatever Happened to Tom Lehrer?". The American Spectator. Archived from the original on May 18, 2016. Retrieved October 27, 2015.
24. "Stop clapping, this is serious". smh.com.au. March 1, 2003. Archived from the original on August 14, 2018. Retrieved October 27, 2015.
25. Nachman, Gerald (2004). Seriously Funny: The Rebel Comedians of the 1950s and 1960s. New York, NY: Pantheon Books (published 2003). p. 659. ISBN 9780375410307. OCLC 50339527.
26. Asimov, Isaac. In Joy Still Felt: The Autobiography of Isaac Asimov, 1954–1978. Garden City, New York: Doubleday, 1980, p. 15.
27. Dr. Demento (2000). "Too Many Facts About Tom Lehrer". The Remains of Tom Lehrer (CD liner notes). Warner Bros Records.
28. Bernstein, Jeremy (1984). "Out of My Mind: Tom Lehrer: Having Fun". The American Scholar. 53 (3): 295–302. ISSN 0003-0937. JSTOR 41211046. Archived from the original on October 27, 2020. Retrieved October 22, 2020.
29. Jim Bessman. "Rhino Reissues Lehrer's Seminal 'Songs' Albums". Billboard. June 21, 1997.
30. Maslon, Laurence. Make 'Em Laugh: The Funny Business of America. Hachette Book Group, 2008, p. 81.
31. "An Interview with Tom Lehrer". Crazy College. Retrieved May 8, 2023.
32. Faulkner, Clarence (May 1, 1999). "As It Was in Region 5, 1949–1964". The Probe. 200: 7. Archived from the original on March 31, 2020. Retrieved March 2, 2020 – via DigitalCommons@University of Nebraska - Lincoln, "City-wide pigeon control in Boston, MA using strychnine-treated whole corn".
33. "Tom Lehrer at 85". BBC Radio 4. April 6, 2013. Archived from the original on April 7, 2013. Retrieved September 24, 2019 – via The Public Radio Exchange. Alt URL archived September 25, 2019, at the Wayback Machine.
34. East Africa and Rhodesia. 1957. p. 493. Archived from the original on April 26, 2016. Retrieved September 17, 2017.
35. "The Kansas City Times from Kansas City, Missouri · Page 4". The Kansas City Times. Kansas City, Missouri. Associated Press. December 5, 1957. p. 4. Archived from the original on March 4, 2016. Retrieved February 29, 2016.
36. "British Pathé, quoted only "from Mozart to calypso"". YouTube. Archived from the original on November 18, 2021.
37. "Unfit for Auntie's airwaves: The artists censored by the BBC". The Independent. December 14, 2007. Archived from the original on February 2, 2009. Retrieved October 22, 2020.
38. "Lehrer Dissected … | NZETC". nzetc.victoria.ac.nz. Archived from the original on February 4, 2019. Retrieved April 6, 2021.
39. Zavos, Spiro (1997). Winters of Revenge: The Bitter Rivalry Between the All Blacks and the Springboks. Auckland, New Zealand: Viking. p. 147. Archived from the original on February 26, 2022. Retrieved April 11, 2021.
40. "BAIOCCHI vs CLASSICAL FM" (PDF). bccsa.co.za. Broadcasting Complaints Commission of South Africa. November 27, 2013. Archived (PDF) from the original on September 24, 2016. Retrieved September 23, 2016. The Registrar received a complaint concerning the broadcasting of a song called 'Vatican Rag' during a time slot identified for 'classical comedy' on 13 September 2013. The song was written and performed by satirist Tom Lehrer in the early 1960s. The music dates from 1910 and was then known as the 'Spaghetti Rag'.
41. "Big Tiny Little on The Lawrence Welk Show (1-11-1958) UPDATED AUDIO". YouTube. October 2, 2009. Archived from the original on November 18, 2021. Retrieved April 11, 2021.
42. "University of Colorado Digital Sheet Music Collection: Spaghetti Rag". Archived from the original on September 24, 2016. Retrieved June 7, 2013.
43. Morris, Jeff. "Tom Lehrer Discography". Demented Music Database (dmdb.org). Retrieved May 8, 2023.
44. "The Frost Report". BBC Comedy. Retrieved May 9, 2023.
45. "Marty meets Tom". The Official Marty Feldman. January 18, 2016. Retrieved May 8, 2023.
46. The end of the news .. heh .wmv, retrieved December 23, 2022.
47. Lehrer Decimal The Frost Report - YouTube.
48. Wilson, J. S. (February 9, 1959). "TOM LEHRER IN PROGRAM; Offers His Brand of 'Sick' Humor at Town Hall". The New York Times. ISSN 0362-4331. Archived from the original on October 26, 2020. Retrieved October 22, 2020.
49. "Songs by Tom Lehrer" (PDF). LOC.gov. Retrieved May 8, 2023.
50. "Premiere Opera". Archived from the original on November 14, 2020. Retrieved November 13, 2020.
51. "The Tom Lehrer Wisdom Channel". YouTube. September 11, 1967. Archived from the original on June 28, 2013. Retrieved October 27, 2015.
52. "The Dodge industrial Film". The Tom Lehrer Wisdom Channel. Archived from the original on April 20, 2017. Retrieved March 21, 2017.
53. Nachman, Gerald (2003). Seriously Funny: The Rebel Comedians of the 1950s and 1960s. New York: Pantheon Books. p. 149. ISBN 978-0307490728. Archived from the original on August 17, 2021. Retrieved September 11, 2016.
54. Andrews, Dale (April 9, 2013). "Tom Lehrer". SleuthSayers. Archived from the original on September 24, 2015. Retrieved October 27, 2015.
55. Billboard Magazine, p. 19. July 10, 1982. Vol. 94, No. 27. ISSN 0006-2510.
56. Tomfoolery at the Internet Off-Broadway Database.
57. "The Tom Lehrer Collection (CD + DVD): DVD Talk Review of the DVD Video". Dvdtalk.com. Archived from the original on September 24, 2015. Retrieved October 27, 2015.
58. Tom Lehrer: I Got It From Agnes on YouTube, from "Parkinson", 1980.
59. "Fermat's Last Theorem - The Theorem and Its Proof: An Exploration of Issues and Ideas". MSRI. Palace of Fine Arts, San Francisco: Mathematical Sciences Research Institute. July 28, 1993. Archived from the original on August 9, 2016. Retrieved April 7, 2017.
60. Poisoning Pigeons in the Park on YouTube (original version). Retrieved March 25, 2008.
61. Thompson, Stephen (May 24, 2000). "Tom Lehrer Interview · The A.V. Club". Avclub.com. Archived from the original on August 29, 2017. Retrieved October 27, 2015.
62. "Tom Lehrer: '60s Satirist Still Strikes A Chord". NPR. April 30, 2010. Archived from the original on September 5, 2015. Retrieved October 27, 2015.
63. "Chatological Humor: A Tribute to Tom Lehrer". The Washington Post. February 12, 2008. Archived from the original on January 23, 2016. Retrieved October 27, 2015.
64. Prospero (October 17, 2017). "Who can fill the role of Tom Lehrer today?". The Economist. Archived from the original on December 1, 2017. Retrieved January 1, 2018.
65. Sanderson, David (October 22, 2020). "Copyright-busting website is invitation to have a laugh with Tom Lehrer". The Times. ISSN 0140-0460. Archived from the original on October 22, 2020. Retrieved October 22, 2020.
66. Ho, Justin (October 21, 2020). "Satirist Tom Lehrer has put his songs into the public domain". Marketplace. Archived from the original on October 24, 2020. Retrieved October 25, 2020.
67. "Tom Lehrer Songs". Tom Lehrer. November 1, 2022. Retrieved December 16, 2022.
68. Liner notes (translation): Holmgren, Pontus (2002). Lars Ekborg i Tom Lehrer's Vackra Värld. Sweden: Amigo. Archived from the original on June 21, 2015.
69. "Archived copy". Archived from the original on February 19, 2015. Retrieved April 25, 2016.
70. Guevara, Nacha (1971). Este es el año que es. Argentina: Music Hall. Archived from the original on February 26, 2019. Retrieved February 26, 2019. Track list and lyrics (Spanish).
71. "The British Theatre Guide: Interview with Dillie Keane". Archived from the original on October 8, 2009. Retrieved December 24, 2013.
72. "Weird Al" Yankovic. "Weird Al's FAQs". Weirdal.com. Archived from the original on September 2, 2006. Retrieved June 12, 2009.
73. "SETI League executive director emeritus Dr. H. Paul Shuch". Setileague.org. Archived from the original on April 8, 2017. Retrieved April 7, 2017.
74. Daniel Radcliffe sings "The Elements" - The Graham Norton Show - Series 8 Episode 4 - BBC One, retrieved November 4, 2022.
75. Ouzounian, Richard (January 29, 2006). "Letters From Lehrer". Variety. Archived from the original on November 11, 2021. Retrieved November 11, 2021.
76. Williams, Ben (March 16, 2006). "Influences: Donald Fagen". New York Magazine. Retrieved June 30, 2023.
77. Tauben vergiften - Die bösen Lieder des Tom Lehrer - Felix Janosa | Songs, Reviews, Credits | AllMusic. Archived from the original on November 11, 2021; retrieved November 11, 2021.
78. "NewsHole". Holecity.com. March 5, 2000. Archived from the original on October 13, 2007. Retrieved June 12, 2009.
79. 20230311 20 Clementine, retrieved April 30, 2023.
80. 20170409 07 Hanukka in SantaMonica + GootYuntif, retrieved April 30, 2023.
81. 04+202100601 The Elements Updated (2021), retrieved April 30, 2023.
82. "Doug Gilford's Mad Cover Site - Mad Magazine Contributors - Master List". madcoversite.com. Archived from the original on December 21, 2021. Retrieved December 21, 2021.
83. "Doug Gilford's Mad Cover Site - Mad #32". madcoversite.com. Archived from the original on November 16, 2021. Retrieved December 21, 2021.
84. "Doug Gilford's Mad Cover Site - Mad #35". madcoversite.com. Archived from the original on November 16, 2021. Retrieved December 21, 2021.
External links
• "Official website". Archived from the original on May 21, 2008.
• Official website (Songs) ... Public Domain... This website will be shut down at some date in the not too distant future, so if you want to download anything, don't wait too long... — Tom Lehrer, Disclaimer (Nov 26, 2022)
• Tom Lehrer Topic's channel on YouTube
• Tom Lehrer discography at Discogs
• Tom Lehrer at IMDb
• Free scores by Tom Lehrer at the International Music Score Library Project (IMSLP)
• "Tom Lehrer" (podcast). Interview. Desert Island Discs. BBC Radio 4. July 18, 1980.
• Tom Lehrer and The Topp Twins - ABC Radio National (podcast). Interview. Conversation recorded in 2000, re-broadcast in 2021.
• "Tom Lehrer" (PDF). Interview. Library of Congress. July 22, 2015. ... Joan Baez—whom I've never met—was asked in an interview if she sang lullabies to her baby. She said that doesn't work, but she sings "The Old Dope Peddler" to him and he goes right to sleep.
Laser-plasma accelerators' potential to radically transform space radiation testing
B. Hidding, O. Karger, T. Königstein, G. Pretzler, J.B. Rosenzweig
Laser-plasma accelerators are a relatively new class of accelerator, notable for being very compact as a result of the giant accelerating electric fields present in strongly focused, high-power ultrashort laser pulses. Peak intensities of modern laser systems can reach $10^{22}\,\mathrm{W/cm^2}$ or more, many orders of magnitude larger than the intensity that would result if all the sunlight incident on Earth were collected and focused onto an area the size of a pencil tip. Such intensities make these laser systems attractive for many applications, from inertial confinement fusion to the production of ultrashort electron beams with GeV-scale energies and advanced light sources such as free-electron lasers or sources based on inverse Compton scattering and betatron radiation. The booming worldwide community in this field works towards these applications, which place highly stringent demands on beam quality, positioning laser-plasma accelerators as an alternative to well-established radiofrequency-cavity-based accelerators such as linacs (for electrons) and cyclotrons (for protons and ions). A breakthrough was achieved in 2004, when pencil-like electron beams with quasi-monoenergetic bunch distributions were generated for the first time, in place of the spectrally very broadband and rather divergent particle beams produced previously. Beam quality, in terms of narrower energy spread and larger energies (beyond the GeV barrier), improves continuously and rapidly, fueled by progress in understanding and by ever-increasing laser power and technology readiness. In contrast to such highest-quality beams, which are needed for example for free-electron lasers, the space radiation that harms electronics and living systems outside Earth's protective magnetic fields is always very broadband.
In fact, conventional accelerators always automatically produce very narrowband particle beams, which in this sense are unnatural. It was first proposed (and patented) in 2009 to use compact laser-plasma accelerators to produce broadband radiation such as that present in space, and to use it for radiation hardness tests. Such broadband radiation is the inherent regime of laser-plasma accelerators: the difficulty these devices have in producing monoenergetic beams becomes a notable advantage here. Since producing broadband radiation has been possible with laser-plasma accelerators for many years, this is an application that has been ''left behind'', as the community sought instead to produce ever more monoenergetic beams like those of conventional accelerators. Recent proof-of-concept experiments, in a project which merged state-of-the-art space radiation testing with state-of-the-art laser-plasma acceleration, have shown that laser-plasma accelerators can reproduce the spectral characteristics of, for example, the radiation belt ''killer electrons'' which populate GEO orbits. This especially prominent type of space radiation was thereby produced in a well-controlled manner in a laboratory on Earth for the first time, and it appears to be a natural benchmark candidate for other radiation sources, which produce monoenergetic beams from which, even with the use of degraders, one cannot reproduce space radiation characterized by a spectral flux that decreases (often exponentially) towards higher particle energies. Spectral flux shaping by tuning the laser-plasma interaction parameters has been demonstrated, for example to reproduce the electron flux incident on satellites in GPS orbits according to the AE8 model.
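To make the contrast with monoenergetic beams concrete, the broadband spectra discussed above are often modeled as exponentially falling, $dN/dE \propto \exp(-E/E_0)$. The short Python sketch below samples such a spectrum and checks the sampled high-energy tail against the analytic value; the folding energy $E_0$ and the 2 MeV threshold are illustrative assumptions, not values from the experiments described here.

```python
import math
import random

# Illustrative sketch: trapped-electron spectra are often approximated by an
# exponentially falling flux dN/dE ~ exp(-E/E0), unlike the quasi-monoenergetic
# beams of conventional accelerators. E0 below is an assumed folding energy.

def sample_broadband_energies(n, e0_mev, seed=42):
    """Draw n electron energies (MeV) from an exponential spectrum."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / e0_mev) for _ in range(n)]

def fraction_above(energies, threshold_mev):
    """Fraction of sampled electrons above a threshold energy."""
    return sum(e > threshold_mev for e in energies) / len(energies)

energies = sample_broadband_energies(100_000, e0_mev=0.6)
# Analytically, P(E > 2 MeV) = exp(-2/0.6); the sample should agree closely.
frac = fraction_above(energies, 2.0)
print(f"fraction above 2 MeV: {frac:.4f} (analytic {math.exp(-2.0 / 0.6):.4f})")
```

The point of the exercise is that a degraded monoenergetic beam cannot reproduce this smoothly falling tail, whereas a laser-plasma source produces it natively.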
Sophisticated diagnostics, readily available from the laser-plasma community as well as from the traditional accelerator community (the two are increasingly merging again), have been used to characterize and monitor the flux. State-of-the-art radiation hardness testing techniques have been adapted to the laser-plasma radiation source environment, test devices have been exposed to laser-plasma-generated space radiation, and the performance of these electronic devices was shown to degrade. With the exception of radiation tests conducted directly in space, these irradiation campaigns may have been the most realistic space radiation tests carried out in a laboratory on Earth to date. Reproducing the space radiation flux directly in the lab has hitherto not been possible, which is why approximative techniques employing monoenergetic beams had to be used. These results clearly demonstrate the applicability of laser-plasma-accelerators for space radiation reproduction and are currently attracting large interest in the laser-plasma community. Other advantages of laser-plasma-accelerators are that they can produce electrons, protons and ions alike -- even at the same time -- as well as enormous peak flux, which may allow exploration of the nonlinear response of electronics and biological systems. Laser-plasma acceleration on the one hand and space radiation testing on the other are both highly vibrant fields, but they have so far been disjoint. Connecting the two fields and introducing laser-plasma-accelerators as complementary radiation sources for improved space radiation testing is highly advisable. It should be emphasized that traditional radiation sources and laser-plasma-accelerators each have inherent advantages, and that the combination of both types of radiation sources is expected to be highly fruitful for the further development of the space radiation field.
Obvious strengths of laser-plasma-accelerators are the production of broadband particle flux and their enormous flexibility, compactness and tunability. For example, the development of a test standard for radiation belt electron radiation effects with laser-plasma-accelerators seems advisable; this could then serve as a benchmark for other radiation sources. On the other hand, it is much harder to produce higher-energy protons and ions with laser-plasma-accelerators than electrons. That said, progress in laser-plasma-accelerator technology is rapid, and protons and ions with several hundreds of MeV have already been produced. The highest proton and ion energies are always reached at large, cutting-edge laser facilities, but recent years have shown that steady, ongoing advances in laser technology quickly convert prototype, cutting-edge laser technology into commercially available off-the-shelf products. The highest-power (hundreds of TW or even PW) laser systems are also characterized by relatively low repetition rates (typically 10 Hz or less), but there is much movement on this front too, and kHz systems are already available. Generally speaking, the higher the obtainable particle energies, the lower the repetition rate. This further supports the advised strategy of establishing laser-plasma-accelerators in the space radiation field first through the reproduction of broadband, lower-energy electrons and protons. In this regime, the laser shot repetition rate can be very high, currently up to hundreds of kHz, which increases the average flux. It is estimated that with such systems, satellite-relevant fluence, for example, can be produced within irradiation times that are orders of magnitude shorter than at large facilities. The development of high-power thin-disk and fiber lasers and of optical parametric amplification (OPA) technology deserves special attention.
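The claim that kHz-class systems can reach satellite-relevant fluence quickly can be illustrated with a rough estimate. Every number below (target fluence, bunch charge, beam area, repetition rate) is an assumed, illustrative value, not a figure from the report:

```python
# Rough irradiation-time estimate for a kHz-class laser-plasma-accelerator.
# All inputs are assumed, illustrative values, not figures from the source.
E_CHARGE = 1.602e-19      # C, elementary charge

target_fluence = 1e12     # electrons/cm^2, assumed mission-level fluence
charge_per_shot = 10e-12  # C (10 pC), assumed electron bunch charge
beam_area_cm2 = 1.0       # cm^2, assumed irradiated device area
rep_rate = 1e3            # Hz, a kHz-class system

electrons_per_shot = charge_per_shot / E_CHARGE           # ~6e7
fluence_rate = electrons_per_shot * rep_rate / beam_area_cm2
time_s = target_fluence / fluence_rate

print(f"~{time_s:.0f} s of irradiation to reach {target_fluence:.0e} e/cm^2")
```

With these assumptions the target fluence is reached in seconds; at a 10 Hz facility with the same bunch charge the same fluence would take two orders of magnitude longer, consistent with the scaling argued in the text.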
Such lasers not only allow for the highest repetition rates, but also for an especially compact setup, the best cost-effectiveness and very high wall-plug power efficiency. Such compact devices, with ever-increasing powers, repetition rates and therefore obtainable radiation flux levels, may in the future become compact radiation sources without proliferation issues, available on site at chip and electronics manufacturers and in the air- and spacecraft community. Further increased communication between the laser-plasma-accelerator community and the space radiation community is highly desirable. This should include further collaborative R\&D activities as well as networking, ideally at a European level. Such a network could bundle the needs and requirements for the most efficient use of laser-plasma-accelerators, for example to ameliorate the shortage of available beamtime for radiation tests that the space radiation community faces today. Based on such a network, a coordinated strategy should be developed that would ideally integrate the European space entities as well as the traditional accelerator and laser-plasma-accelerator communities. For example, the establishment of laser-plasma-accelerator systems at space radiation testing clusters such as ESTEC, and in turn the formation of dedicated space radiation testing beamlines at application-oriented laser-plasma facilities such as the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA) or the facilities of the European Extreme Light Infrastructure (ELI), seems promising. Even mobile laser-plasma-accelerator devices, mounted on trucks, may be feasible. At the same time, the use of plasma afterburner stages, which may convert monoenergetic into broadband flux, should be considered.
Commissioning body laser plasma acceleration space radiation testing Hidding-etal-ESA2014-laser-plasma-accelerators-transform-space-radiation-testingFinal published version, 2.78 MB http://www.esa.int/ESA Dive into the research topics of 'Laser-plasma-accelerator's potential to radically transform space radiation testing'. Together they form a unique fingerprint. plasma accelerators Physics & Astronomy 100% extraterrestrial radiation Physics & Astronomy 96% accelerators Physics & Astronomy 17% broadband Physics & Astronomy 15% Hidding, B., Karger, O., Königstein, T., Pretzler, G., & Rosenzweig, J. B. (2014). Laser-plasma-accelerator's potential to radically transform space radiation testing. Hidding, B. ; Karger, O. ; Königstein, T. et al. / Laser-plasma-accelerator's potential to radically transform space radiation testing. Hamburg, 2014. 31 p. @book{77657e4dbb454a4599232a39a9405a35, title = "Laser-plasma-accelerator's potential to radically transform space radiation testing", abstract = "Laser-plasma-accelerators are relatively new accelerator devices which are characterized by being very compact, which is the result of the giant electric accelerating fields present in strongly focused, high-power ultrashort laser pulses. Peak intensities of modern laser systems can reach $10^{22}\,\mathrm{W/cm^2}$ or more, which is many orders of magnitude larger than the complete sunlight incident on Earth, if it were collected and focused at the same time onto an area of a tip of a pencil. Such intensities make such laser systems attractive for many applications, as exotic as inertial confinement fusion and producing ultrashort electron beams with GeV-scale energies or advanced light sources such as free-electron lasers, or those based on inverse Compton scattering and betatron radiation. 
The woldwide booming community in this fields works towards these applications which have highly stringent demands on beam quality, as an alternative to well-established accelerators based on radiofrequency cavity based accelerators such as linacs (for electrons) and cyclotrons (for protons and ions). Breakthroughs were achieved in 2004, when for the first time instead of spectrally very broadband and rather divergent particle beams, pencil-like electron beams with quasi-monoenergetic electron bunch distribution were generated. Beam quality in terms of narrow energy spread and larger energies (beyond the GeV barrier) improves continuously and rapidly, fueled by progress in terms of understanding and by ever increasing laser power and technology readiness. In contrast to such highest-quality beams which are needed for example for free-electron-lasers, space radiation which harms electronics and living systems outside Earth's protective magnetic fields, is always very broadband. In fact, conventional accelerators always automatically produce very narrowband particle beams, which are unnatural. It has been proposed (and patented) for the first time in 2009 to use compact laser-plasma-accelerators to produce broadband radiation such as present in space and to use this for radiation hardness tests. Such broadband radiation is the inherent regime of laser-plasma-accelerators. The difficulty of laser-plasma-accelerators to produce monoenergetic beams is turned into a noted advantage here. Since producing broadband radiation is possible since many years with laser-plasma-accelerators, this application is one which has been ''left behind'' for many years now due to the community seeking to produce more monoenergetic beams such as with conventional accelerators. 
Recent proof-of-concept experiments in a project which merged state-of-the-art space radiation testing with state-of-the-art laser-plasma acceleration has shown that by using laser-plasma-accelerators it is possible to reproduce the spectral characteristics of radiation belt ''killer electrons'' for example, which populate the radiation belts on GEO orbits, for instance. This especially prominent type of space radiation was for the first time produced in the laboratory here on Earth in a well-controlled manner and seems to be a a natural candidate as a benchmark for other radiation sources, which produce monoenergetic beams based on which also the use of degraders cannot reproduce space radiation which is characterized by a decreasing (often exponentially decreasing) spectral flux towards higher particle energies. Spectral flux shaping by tuning the laser-plasma-interaction parameters has been demonstrated, for example to reproduce the electron flux incident on satellites on GPS orbits according to the AE8 model. Sophisticated diagnostics, readily available from the laser-plasma-community as well as the traditional accelerator community, which are increasingly merging (again), have been used to characterize and monitor the flux. State-of-the-art radiation hardness testing techniques have been adapted to the laser-plasma radiation source environment, test devices have been exposed to laser-plasma-generated space radiation and it was shown that the performance of these electronic devices was degraded. With the exception of doing radiation tests directly in space, these irradiation campaigns may have been the most realistic space radiation tests to be carried out in the laboratory here on Earth to date. The approach of reproducing space radiation flux directly in the lab has hitherto not been accessible, which is why approximative techniques employing monoenergetic beams had to be used. 
This clearly demonstrated the applicability of laser-plasma-accelerators for space radiation reproduction, and is currently triggering large interest in the laser-plasma-community. Other advantages of laser-plasma-accelerators are that they can produce electrons, protons and ions alike -- even at the same time -- as well as enormous peak flux, which may allow for exploration of nonlinear response of electronics and biological systems. Both fields, laser-plasma-acceleration on the one hand and space radiation testing on the other are highly vibrant fields, which have been disjunct so far. Connecting both fields, and to introduce laser-plasma-accelerators as complementary radiation sources for improved space radiation testing is highly advisable. It shall be emphasized that both the traditional radiation sources as well as laser-plasma-accelerators have inherent advantages, and that it is expected that the combination of both types of radiation sources will be highly fruitful for the further development of the space radiation field. Obvious strenghts of laser-plasma-accelerators are the production of broadband particle flux, and the enormous flexibility, compactness and tunability. For example, the devlopment of a test standard for radiation belt electron radiation effects with laser-plasma-accelerators seems advisable, which could then serve as a benchmark for other radiation sources. On the other hand, it is much harder to produce higher energy protons and ions with laser-plasma-accelerators than electrons. This said, the progress in the laser-plasma-accelerator tecnology is rapid, and protons and ions with several hundreds of MeV have already been produced. The highest proton and ion energies are always reached with large, cutting edge laser facilities, but it has been learned from the last years that steady and ongoing advances in laser technology quickly converts prototype, cutting edge laser technology to commercially available off-the-shelf products. 
Highest power (hundreds of TW or even PW) laser systems are also characterized by relatively low repetition rate (typically, 10 Hz or less), but there is much movement on this front, too, and kHz systems are already available. Generally speaking, the higher the obtainable particle energies, the lower the repetition rate. This further supports the advised strategy to start the establishment of laser-plasma-accelerators in the space radiation field with reproduction of broadband, lower energy electrons and protons. In this regime, the laser shot repetition rate can be very high, currently up to hundreds of kHz, which increases the average flux. It is estimated that with such systems, for example satellite-relevant fluence can be produced within irradiation times which are orders of magnitude shorter than at large facilities. The development of high power thin-disk and fiber lasers and optical parametric amplification (OPA) technology deserves special attention. Such lasers do not only allow for highest repetition rates, but also for an especially compact setup, best cost-effectiveness and a very high wall-plug power efficiency. Such compact devices with ever increasing powers, repetition rates and therefore obtainable radiation flux levels may end up in the future as compact radiation sources without proliferation issues available on site at chip and electronic manufacturers, and in the air- and spacecraft community. Further increased communication between the laser-plasma-accelerator community and the space radiation community is highly desirable. This should contain further collaborative R\&D acitivities, as well as networking, ideally on a European level. Such a network could bundle the needs and requirements for the most efficient use of laser-plasma-accelerators, for example to ameliorate the shortness of available beamtime for radiation tests which the space radiation community faces today. 
Based on such a network, a coordinated strategy should be developed which ideally would integrate the European space entities, as well as the traditional accelerator and the laser-plasma-accelerator community. For example, the establishment of laser-plasma-accelerator systems at space radiation testing clusters, for example at ESTEC, and in turn the formation of a dedicated space radiation testing beamline at application-oriented laser-plasma-facilities such as the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA) or at facilities of the European Extreme Light Infrastructure (ELI) seems promising. Even mobile laser-plasma-accelerator devices, mounted on mobile trucks, may be feasible. At the same time, the use of plasma afterburner stages which may convert monoenergetic in broadband flux should be considered.", keywords = "laser plasma acceleration, space radiation testing", author = "B. Hidding and O. Karger and T. K{\"o}nigstein and G. Pretzler and J.B. Rosenzweig", Hidding, B, Karger, O, Königstein, T, Pretzler, G & Rosenzweig, JB 2014, Laser-plasma-accelerator's potential to radically transform space radiation testing. Hamburg. Laser-plasma-accelerator's potential to radically transform space radiation testing. / Hidding, B.; Karger, O.; Königstein, T. et al. Hamburg, 2014. 31 p. T1 - Laser-plasma-accelerator's potential to radically transform space radiation testing AU - Hidding, B. AU - Karger, O. AU - Königstein, T. AU - Pretzler, G. AU - Rosenzweig, J.B. N2 - Laser-plasma-accelerators are relatively new accelerator devices which are characterized by being very compact, which is the result of the giant electric accelerating fields present in strongly focused, high-power ultrashort laser pulses. 
Peak intensities of modern laser systems can reach $10^{22}\,\mathrm{W/cm^2}$ or more, which is many orders of magnitude larger than the complete sunlight incident on Earth, if it were collected and focused at the same time onto an area of a tip of a pencil. Such intensities make such laser systems attractive for many applications, as exotic as inertial confinement fusion and producing ultrashort electron beams with GeV-scale energies or advanced light sources such as free-electron lasers, or those based on inverse Compton scattering and betatron radiation. The woldwide booming community in this fields works towards these applications which have highly stringent demands on beam quality, as an alternative to well-established accelerators based on radiofrequency cavity based accelerators such as linacs (for electrons) and cyclotrons (for protons and ions). Breakthroughs were achieved in 2004, when for the first time instead of spectrally very broadband and rather divergent particle beams, pencil-like electron beams with quasi-monoenergetic electron bunch distribution were generated. Beam quality in terms of narrow energy spread and larger energies (beyond the GeV barrier) improves continuously and rapidly, fueled by progress in terms of understanding and by ever increasing laser power and technology readiness. In contrast to such highest-quality beams which are needed for example for free-electron-lasers, space radiation which harms electronics and living systems outside Earth's protective magnetic fields, is always very broadband. In fact, conventional accelerators always automatically produce very narrowband particle beams, which are unnatural. It has been proposed (and patented) for the first time in 2009 to use compact laser-plasma-accelerators to produce broadband radiation such as present in space and to use this for radiation hardness tests. Such broadband radiation is the inherent regime of laser-plasma-accelerators. 
The difficulty of laser-plasma-accelerators to produce monoenergetic beams is turned into a noted advantage here. Since producing broadband radiation is possible since many years with laser-plasma-accelerators, this application is one which has been ''left behind'' for many years now due to the community seeking to produce more monoenergetic beams such as with conventional accelerators. Recent proof-of-concept experiments in a project which merged state-of-the-art space radiation testing with state-of-the-art laser-plasma acceleration has shown that by using laser-plasma-accelerators it is possible to reproduce the spectral characteristics of radiation belt ''killer electrons'' for example, which populate the radiation belts on GEO orbits, for instance. This especially prominent type of space radiation was for the first time produced in the laboratory here on Earth in a well-controlled manner and seems to be a a natural candidate as a benchmark for other radiation sources, which produce monoenergetic beams based on which also the use of degraders cannot reproduce space radiation which is characterized by a decreasing (often exponentially decreasing) spectral flux towards higher particle energies. Spectral flux shaping by tuning the laser-plasma-interaction parameters has been demonstrated, for example to reproduce the electron flux incident on satellites on GPS orbits according to the AE8 model. Sophisticated diagnostics, readily available from the laser-plasma-community as well as the traditional accelerator community, which are increasingly merging (again), have been used to characterize and monitor the flux. State-of-the-art radiation hardness testing techniques have been adapted to the laser-plasma radiation source environment, test devices have been exposed to laser-plasma-generated space radiation and it was shown that the performance of these electronic devices was degraded. 
With the exception of doing radiation tests directly in space, these irradiation campaigns may have been the most realistic space radiation tests to be carried out in the laboratory here on Earth to date. The approach of reproducing space radiation flux directly in the lab has hitherto not been accessible, which is why approximative techniques employing monoenergetic beams had to be used. This clearly demonstrated the applicability of laser-plasma-accelerators for space radiation reproduction, and is currently triggering large interest in the laser-plasma-community. Other advantages of laser-plasma-accelerators are that they can produce electrons, protons and ions alike -- even at the same time -- as well as enormous peak flux, which may allow for exploration of nonlinear response of electronics and biological systems. Both fields, laser-plasma-acceleration on the one hand and space radiation testing on the other are highly vibrant fields, which have been disjunct so far. Connecting both fields, and to introduce laser-plasma-accelerators as complementary radiation sources for improved space radiation testing is highly advisable. It shall be emphasized that both the traditional radiation sources as well as laser-plasma-accelerators have inherent advantages, and that it is expected that the combination of both types of radiation sources will be highly fruitful for the further development of the space radiation field. Obvious strenghts of laser-plasma-accelerators are the production of broadband particle flux, and the enormous flexibility, compactness and tunability. For example, the devlopment of a test standard for radiation belt electron radiation effects with laser-plasma-accelerators seems advisable, which could then serve as a benchmark for other radiation sources. On the other hand, it is much harder to produce higher energy protons and ions with laser-plasma-accelerators than electrons. 
This said, the progress in the laser-plasma-accelerator tecnology is rapid, and protons and ions with several hundreds of MeV have already been produced. The highest proton and ion energies are always reached with large, cutting edge laser facilities, but it has been learned from the last years that steady and ongoing advances in laser technology quickly converts prototype, cutting edge laser technology to commercially available off-the-shelf products. Highest power (hundreds of TW or even PW) laser systems are also characterized by relatively low repetition rate (typically, 10 Hz or less), but there is much movement on this front, too, and kHz systems are already available. Generally speaking, the higher the obtainable particle energies, the lower the repetition rate. This further supports the advised strategy to start the establishment of laser-plasma-accelerators in the space radiation field with reproduction of broadband, lower energy electrons and protons. In this regime, the laser shot repetition rate can be very high, currently up to hundreds of kHz, which increases the average flux. It is estimated that with such systems, for example satellite-relevant fluence can be produced within irradiation times which are orders of magnitude shorter than at large facilities. The development of high power thin-disk and fiber lasers and optical parametric amplification (OPA) technology deserves special attention. Such lasers do not only allow for highest repetition rates, but also for an especially compact setup, best cost-effectiveness and a very high wall-plug power efficiency. Such compact devices with ever increasing powers, repetition rates and therefore obtainable radiation flux levels may end up in the future as compact radiation sources without proliferation issues available on site at chip and electronic manufacturers, and in the air- and spacecraft community. 
Further increased communication between the laser-plasma-accelerator community and the space radiation community is highly desirable. This should contain further collaborative R\&D acitivities, as well as networking, ideally on a European level. Such a network could bundle the needs and requirements for the most efficient use of laser-plasma-accelerators, for example to ameliorate the shortness of available beamtime for radiation tests which the space radiation community faces today. Based on such a network, a coordinated strategy should be developed which ideally would integrate the European space entities, as well as the traditional accelerator and the laser-plasma-accelerator community. For example, the establishment of laser-plasma-accelerator systems at space radiation testing clusters, for example at ESTEC, and in turn the formation of a dedicated space radiation testing beamline at application-oriented laser-plasma-facilities such as the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA) or at facilities of the European Extreme Light Infrastructure (ELI) seems promising. Even mobile laser-plasma-accelerator devices, mounted on mobile trucks, may be feasible. At the same time, the use of plasma afterburner stages which may convert monoenergetic in broadband flux should be considered. AB - Laser-plasma-accelerators are relatively new accelerator devices which are characterized by being very compact, which is the result of the giant electric accelerating fields present in strongly focused, high-power ultrashort laser pulses. Peak intensities of modern laser systems can reach $10^{22}\,\mathrm{W/cm^2}$ or more, which is many orders of magnitude larger than the complete sunlight incident on Earth, if it were collected and focused at the same time onto an area of a tip of a pencil. 
Such intensities make such laser systems attractive for many applications, as exotic as inertial confinement fusion and producing ultrashort electron beams with GeV-scale energies or advanced light sources such as free-electron lasers, or those based on inverse Compton scattering and betatron radiation. The woldwide booming community in this fields works towards these applications which have highly stringent demands on beam quality, as an alternative to well-established accelerators based on radiofrequency cavity based accelerators such as linacs (for electrons) and cyclotrons (for protons and ions). Breakthroughs were achieved in 2004, when for the first time instead of spectrally very broadband and rather divergent particle beams, pencil-like electron beams with quasi-monoenergetic electron bunch distribution were generated. Beam quality in terms of narrow energy spread and larger energies (beyond the GeV barrier) improves continuously and rapidly, fueled by progress in terms of understanding and by ever increasing laser power and technology readiness. In contrast to such highest-quality beams which are needed for example for free-electron-lasers, space radiation which harms electronics and living systems outside Earth's protective magnetic fields, is always very broadband. In fact, conventional accelerators always automatically produce very narrowband particle beams, which are unnatural. It has been proposed (and patented) for the first time in 2009 to use compact laser-plasma-accelerators to produce broadband radiation such as present in space and to use this for radiation hardness tests. Such broadband radiation is the inherent regime of laser-plasma-accelerators. The difficulty of laser-plasma-accelerators to produce monoenergetic beams is turned into a noted advantage here. 
Since producing broadband radiation is possible since many years with laser-plasma-accelerators, this application is one which has been ''left behind'' for many years now due to the community seeking to produce more monoenergetic beams such as with conventional accelerators. Recent proof-of-concept experiments in a project which merged state-of-the-art space radiation testing with state-of-the-art laser-plasma acceleration has shown that by using laser-plasma-accelerators it is possible to reproduce the spectral characteristics of radiation belt ''killer electrons'' for example, which populate the radiation belts on GEO orbits, for instance. This especially prominent type of space radiation was for the first time produced in the laboratory here on Earth in a well-controlled manner and seems to be a a natural candidate as a benchmark for other radiation sources, which produce monoenergetic beams based on which also the use of degraders cannot reproduce space radiation which is characterized by a decreasing (often exponentially decreasing) spectral flux towards higher particle energies. Spectral flux shaping by tuning the laser-plasma-interaction parameters has been demonstrated, for example to reproduce the electron flux incident on satellites on GPS orbits according to the AE8 model. Sophisticated diagnostics, readily available from the laser-plasma-community as well as the traditional accelerator community, which are increasingly merging (again), have been used to characterize and monitor the flux. State-of-the-art radiation hardness testing techniques have been adapted to the laser-plasma radiation source environment, test devices have been exposed to laser-plasma-generated space radiation and it was shown that the performance of these electronic devices was degraded. 
With the exception of doing radiation tests directly in space, these irradiation campaigns may have been the most realistic space radiation tests to be carried out in the laboratory here on Earth to date. The approach of reproducing space radiation flux directly in the lab has hitherto not been accessible, which is why approximative techniques employing monoenergetic beams had to be used. This clearly demonstrated the applicability of laser-plasma-accelerators for space radiation reproduction, and is currently triggering large interest in the laser-plasma-community. Other advantages of laser-plasma-accelerators are that they can produce electrons, protons and ions alike -- even at the same time -- as well as enormous peak flux, which may allow for exploration of nonlinear response of electronics and biological systems. Both fields, laser-plasma-acceleration on the one hand and space radiation testing on the other are highly vibrant fields, which have been disjunct so far. Connecting both fields, and to introduce laser-plasma-accelerators as complementary radiation sources for improved space radiation testing is highly advisable. It shall be emphasized that both the traditional radiation sources as well as laser-plasma-accelerators have inherent advantages, and that it is expected that the combination of both types of radiation sources will be highly fruitful for the further development of the space radiation field. Obvious strenghts of laser-plasma-accelerators are the production of broadband particle flux, and the enormous flexibility, compactness and tunability. For example, the devlopment of a test standard for radiation belt electron radiation effects with laser-plasma-accelerators seems advisable, which could then serve as a benchmark for other radiation sources. On the other hand, it is much harder to produce higher energy protons and ions with laser-plasma-accelerators than electrons. 
This said, the progress in the laser-plasma-accelerator tecnology is rapid, and protons and ions with several hundreds of MeV have already been produced. The highest proton and ion energies are always reached with large, cutting edge laser facilities, but it has been learned from the last years that steady and ongoing advances in laser technology quickly converts prototype, cutting edge laser technology to commercially available off-the-shelf products. Highest power (hundreds of TW or even PW) laser systems are also characterized by relatively low repetition rate (typically, 10 Hz or less), but there is much movement on this front, too, and kHz systems are already available. Generally speaking, the higher the obtainable particle energies, the lower the repetition rate. This further supports the advised strategy to start the establishment of laser-plasma-accelerators in the space radiation field with reproduction of broadband, lower energy electrons and protons. In this regime, the laser shot repetition rate can be very high, currently up to hundreds of kHz, which increases the average flux. It is estimated that with such systems, for example satellite-relevant fluence can be produced within irradiation times which are orders of magnitude shorter than at large facilities. The development of high power thin-disk and fiber lasers and optical parametric amplification (OPA) technology deserves special attention. Such lasers do not only allow for highest repetition rates, but also for an especially compact setup, best cost-effectiveness and a very high wall-plug power efficiency. Such compact devices with ever increasing powers, repetition rates and therefore obtainable radiation flux levels may end up in the future as compact radiation sources without proliferation issues available on site at chip and electronic manufacturers, and in the air- and spacecraft community. 
Further increased communication between the laser-plasma-accelerator community and the space radiation community is highly desirable. This should include further collaborative R\&D activities, as well as networking, ideally on a European level. Such a network could bundle the needs and requirements for the most efficient use of laser-plasma-accelerators, for example to ameliorate the shortage of available beamtime for radiation tests which the space radiation community faces today. Based on such a network, a coordinated strategy should be developed which ideally would integrate the European space entities, as well as the traditional accelerator and the laser-plasma-accelerator communities. For example, the establishment of laser-plasma-accelerator systems at space radiation testing clusters, for example at ESTEC, and in turn the formation of a dedicated space radiation testing beamline at application-oriented laser-plasma facilities such as the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA) or at facilities of the European Extreme Light Infrastructure (ELI), seem promising. Even mobile laser-plasma-accelerator devices, mounted on trucks, may be feasible. At the same time, the use of plasma afterburner stages, which may convert monoenergetic into broadband flux, should be considered. Hidding B, Karger O, Königstein T, Pretzler G, Rosenzweig JB. Laser-plasma-accelerator's potential to radically transform space radiation testing. Hamburg, 2014. 31 p.
Accelerometry to study fine-scale activity of invasive Burmese pythons (Python bivittatus) in the wild Nicholas M. Whitney ORCID: orcid.org/0000-0001-8797-69271, Connor F. White ORCID: orcid.org/0000-0001-8260-290X1, Brian J. Smith ORCID: orcid.org/0000-0002-0531-04922, Michael S. Cherkiss ORCID: orcid.org/0000-0002-7802-67913, Frank J. Mazzotti4 & Kristen M. Hart ORCID: orcid.org/0000-0002-5257-79743 Animal Biotelemetry volume 9, Article number: 2 (2021) The establishment of Burmese pythons (Python bivittatus) in Everglades National Park, Florida, USA, has been connected to a > 90% decline in the mesomammal population in the park and is a major threat to native reptile and bird populations. Efforts to control this population are underway, but are hampered by a lack of information about fine-scale activity cycles and ecology of these cryptic animals in the wild. We aimed to establish a technique for monitoring the activity of Burmese pythons in the wild using acceleration data loggers (ADLs), while attempting to identify any behavioral patterns that could be used to help manage this invasive species in the Greater Everglades and South Florida. We obtained continuous acceleration and temperature data from four wild snakes over periods of 19 to 95 days (mean 54 ± 33 days). Snakes spent 86% of their time at rest and 14% of their time active, including transiting between locations. All snakes showed at least one period of continuous transiting lasting 10 h or more, with one animal transiting continuously for a period of 58.5 h. Acceleration data logger-derived transiting bout duration was significantly correlated with the distance snakes traveled per hour for two snakes that also carried GPS loggers. Snakes were most active in midday or early-night depending on individual and time of year, but all snakes were least likely to be active in the early mornings (0400–0700 h local time).
Very little movement took place at temperatures below 14 °C or above 24 °C, with most movement taking place between 15° and 20 °C. One animal showed a highly unusual rolling event that may be indicative of a predation attempt, but this could not be confirmed. Fine-scale activity and some behaviors were apparent from ADL data, making ADLs a potentially valuable, unbiased tool for monitoring large-bodied snakes in the wild. Snakes spent the majority of their time resting, but also moved continuously for several hours at a time during bouts of transiting. Results suggest that individuals may shift their diel activity pattern based on season. Understanding seasonal differences in activity levels can improve the accuracy of population estimates, help detect range expansion, and improve managers' ability to find and capture individuals. Invasive reptiles are a growing problem in tropical to sub-tropical climates around the world, as they often exhibit strong direct interactions with prey species and encounter few predators in their introduced habitats [1]. Invasive snakes have been particularly problematic due to their ability to avoid detection, thereby inhibiting capture and eradication [2,3,4]. In South Florida, the Burmese python (Python bivittatus) is estimated to have been established for at least 30 years, and the population has now spread out of Everglades National Park and achieved high densities in the Greater Everglades Ecosystem in the past decade [5, 6]. Within their invasive range, pythons are associated with a large decrease in mesomammal populations and an overall decline in mammalian diversity [5, 7,8,9], leading to cascading effects throughout the ecosystem [10, 11]. Although individual pythons can grow to large sizes (> 5 m), they are difficult to find in the wild as they are cryptic and able to conceal themselves in small patches of vegetation [4]. 
Managers have employed numerous techniques to identify and control pythons in Everglades National Park, including detection dogs, traps, visual surveys and Judas python tracking. While Judas (or "scout") python tracking, using a tagged snake to reveal the location of other snakes, can be a useful tool for finding other pythons during the breeding season (Nov–Apr; see Smith et al. [12]), the primary removal tool remains human visual searching. However, these efforts are hampered by a lack of knowledge of how these animals behave in the wild, including when they are most likely to be active and the duration of activity. This information is directly relevant to optimizing human search efforts as snakes on the move are easier to detect. Other information, such as predation frequency, is also important for quantifying the direct impact of these snakes on prey species, which otherwise must be inferred from intensive surveys and population estimates [5, 7,8,9]. The same cryptic characteristics that make pythons difficult to eradicate also make them challenging to study in the wild. They are usually well-camouflaged, and they select habitat consisting of thick vegetation or subterranean refuges that prohibit direct observation and often attenuate the signal of VHF transmitters used for tracking. There is thus a strong need for fine-scale, remote monitoring of animal activity and behavior in large-bodied snakes. Radio- and GPS-telemetry have been used to obtain location information [13, 14], but many snake body movements and behaviors do not produce a change in location or occur too quickly to be detected when sampling is over a scale of hours. Additionally, vegetation density, underground refuges, and standing water attenuate radio signals and reduce temporal resolution of collected data, leading to potentially sparse datasets biased by an animal's habitat selection [14]. 
Even in the best of circumstances, due to battery constraints, GPS loggers usually only provide positions on the scale of hours, resulting in relatively coarse temporal resolution. Acceleration data loggers (ADLs) have been increasingly incorporated into wildlife tags to measure activity, behavior, and energy expenditure in various species [e.g., 15, 16, 17, 18, 19]. With logged acceleration data, researchers can measure fine-scale body movements (e.g., flipper beats, tailbeats; [23]) as well as body posture (i.e., pitch, roll; [24]). These devices thus represent a powerful tool for quantifying activity and behavior in animals that are difficult to observe directly for long periods. Although they have been used to study the swimming dynamics of captive sea snakes [20,21,22], ADLs have not previously been used to study the movements of large-bodied snakes or those in terrestrial or semi-aquatic environments. The goal of this study was to establish a technique for monitoring the activity of large-bodied snakes in the wild using Burmese pythons as a model, while also attempting to identify any behavioral patterns that could be used to help manage this invasive species in the Greater Everglades and South Florida. Surgical implantation of ADLs near the posterior end of the snake (Fig. 1) was found to produce a signal in all three acceleration axes that could be amplified by calculating rotational magnitude (Fig. 2). Field trials (N = 5, total body length = 457.4 ± 63.1 cm) lasted from 13 to 95 days (mean = 46; Table 1, Fig. 3). Snake P42 was tagged twice (non-consecutively), but was found dead from unknown causes 13 days after its second deployment. Excluding this dataset, we collected acceleration and temperature data from four snakes totaling 216 days, between September and April (Fig. 3). Three of these snakes had GPS dataloggers; however, one individual's (P51) logger failed shortly after release, collecting only 16 positions.
P41 collected 423 positions, averaging one position every 5.3 h, while P52 collected 240 positions, averaging one position every 3.3 h. Pressure data from the ADLs were also available for all snakes except P42, but these data were variable and showed no signs of the snakes inhabiting water deep enough to provide a clear increase in pressure. Graphic representation of a Burmese python (a) showing ADL placement (orange circle) relative to the length of the snake's body. Red dotted line indicates a transverse section that is graphically represented in b showing the ADL's location relative to the body wall and vertebral column. A photograph of the ADL (G6a+, Cefas Technology Ltd., Lowestoft, UK) showing extra epoxy (brown sections) added to the top of the tag is shown in c. An aluminum crimp sleeve is embedded in the central section of epoxy, allowing suture material (blue) to be threaded through the sleeve and the snake's skin, holding the ADL in place inside the snake throughout the experiment. Raw three-axis acceleration (top panel) and calculated vector angle rotation (bottom panel) from a tagged Burmese python (Python bivittatus) in Everglades National Park, South Florida, USA. Top panel: black line is the anterior–posterior axis (X), the red line is the medial–lateral axis (Y) and the blue line is the dorsal–ventral axis (Z). Blue shaded periods were classified as stationary, whereas un-shaded periods were classified as movement. Bottom panel: horizontal black line on rotation magnitude axis shows the threshold value separating resting from moving. Table 1 Summary information for free-living Burmese pythons, deployment times, and tag types. Abacus plot showing the activity of Burmese pythons (Python bivittatus) in Everglades National Park, South Florida, USA. Each day is color coded by the number of hours spent moving. November through April (gray shaded background) is the presumed python breeding season.
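The rest-versus-movement classification shown in Fig. 2 (rotation magnitude against a threshold) can be sketched from raw three-axis data as below. This is a minimal reconstruction, not the study's code: the function names are ours, and the 2-degree threshold is a placeholder, since the actual cut-off value is not stated here.

```python
import numpy as np

def rotation_magnitude(acc):
    """Angle (degrees) between consecutive acceleration vectors.

    acc: (N, 3) array of raw X/Y/Z acceleration in g, sampled at 1 Hz.
    Returns an (N-1,) array of per-second rotation angles.
    """
    unit = acc / np.linalg.norm(acc, axis=1, keepdims=True)
    dots = np.clip(np.sum(unit[:-1] * unit[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(dots))

def classify_movement(acc, threshold_deg=2.0):
    """Label each sample as moving (True) or resting (False).

    threshold_deg is a placeholder, not the study's actual threshold.
    """
    moving = rotation_magnitude(acc) > threshold_deg
    return np.concatenate([moving, moving[-1:]])  # pad to input length
```

Because the metric depends only on the change in the acceleration vector's direction, not its magnitude, it is naturally insensitive to the differing sensor resolutions (8-bit vs. 12-bit) of the two tag types.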
Some snakes were tagged in different years, but placed onto one calendar year to show seasonality. The dark grey bar for P42 shows its second deployment, during which it died. Internal snake body temperatures averaged 2.6 °C above soil temperatures, varying from 6.0 °C cooler to 12.7 °C warmer than soil temperature at any given point in time. Snake temperatures varied significantly within a day compared to soil temperatures (F23,5202 = 50, p < 0.001), with individuals displaying the coldest average body temperatures at 10:00 h and the greatest body temperature difference above soil temperature at 16:00 h. Additionally, as the environmental temperature increased, the difference between body and soil temperatures decreased (F1,5202 = 940, p < 0.001), with a one degree increase in environmental temperature resulting in only a 0.43 ± 0.01 °C increase in snake body temperature. Fine-scale movements were typically brief, with periods of continuous movement lasting 30.5 ± 21.2 s on average, and the longest period of continuous movement averaging 3.2 ± 2.3 h across all snakes. Periods of continuous rest lasted 281 ± 326 s on average, and the longest period of continuous rest averaged 1.6 ± 2.0 h across all snakes. Overall, snakes spent 86.1% ± 7.2% of their time resting with comparatively little time spent active (Table 2). Snakes generally spent more time active during the day except for P42, which was significantly more active at night. All snakes spent < 6% of their time transiting except for P41, which spent 14.3% of its time transiting and was recaptured 15 km from its release site, while other snakes were recaptured within 2.1 km of their release (Table 2).
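The continuous-movement and continuous-rest durations quoted above fall out of a run-length encoding of the per-second movement/rest labels. A minimal sketch (the helper names are ours, not from the paper; the label series would come from the rest/activity classification of the 1-Hz data):

```python
import numpy as np

def bouts(moving):
    """Run-length encode a boolean per-second movement series.

    Returns (state, duration_s) tuples, from which mean continuous-movement
    and continuous-rest durations can be computed.
    """
    moving = np.asarray(moving, dtype=bool)
    change = np.flatnonzero(np.diff(moving.astype(int))) + 1
    bounds = np.r_[0, change, moving.size]
    return [(bool(moving[s]), int(e - s)) for s, e in zip(bounds[:-1], bounds[1:])]

def mean_durations(moving):
    """Mean duration (s) of continuous movement and of continuous rest."""
    runs = bouts(moving)
    move = [d for state, d in runs if state]
    rest = [d for state, d in runs if not state]
    return (sum(move) / len(move) if move else 0.0,
            sum(rest) / len(rest) if rest else 0.0)
```

The same run list can then be aggregated at a coarser scale, e.g. merging movement runs separated by short rests into the longer "transiting bouts" reported below.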
Table 2 Recapture distance and activity levels for free-living Burmese pythons. Transiting bouts were significantly shorter than non-transiting periods, lasting an average of 2.7 h (Table 3), and the dominant locomotory period (or gait frequency; the time between body movements) during transiting was between 4 and 10 s (Fig. 4). However, all individuals exhibited at least one transiting bout that lasted over 10 h, with one individual (P41) transiting for up to 58.5 h consecutively (Table 3). While transiting, snakes spent 70.6 ± 5.7% of their time moving with the average movement lasting 2.3 ± 0.6 min, followed by 0.9 ± 0.18 min of rest. During non-transiting periods, individuals spent 7.9% (± 4.2%) of their time exhibiting small-scale movements (Table 3), which were short in duration (15.1 ± 7.1 s) and followed by 6.5 ± 8.9 min of rest. No movement ever exceeded 12.5 min in duration unless it was part of a transiting bout. Intervals between transiting bouts lasted an average of 31 ± 12 h (Table 3), but could continue for up to 232 h (snake P51). Table 3 Duration of transiting bouts and intervals between bouts for free-living Burmese pythons. Transiting bouts for two Burmese pythons (Python bivittatus) in Everglades National Park, South Florida, USA, showing their displacement distance from GPS loggers (red dots), surge acceleration, and dominant locomotory period. Transiting bouts often took place during nighttime hours (gray shading) for snake P42 (A) with a dominant locomotory period of 3–8 s, but usually occurred during the day for P41 (B), tagged during the winter and showing a longer locomotory period of 6–10 s. Warmer colors in the spectrograph indicate greater signal power. Diel activity patterns Snakes displayed transiting behavior and other movements during all hours of the day, although early morning (0400–0700 h) was the period with least movement for all individuals.
Movements were not uniformly distributed throughout the day and patterns varied between individuals (Table 2, Fig. 3). Snake P42, released in September, spent the majority of its time moving during the early hours of the night (1900–0100 h), and moved little during the day. Conversely, snake P41, tagged in December, was most likely to move during daylight hours (0900–1700 h). The remaining snakes (P51 and P52) were tagged in the spring and each showed low levels of activity during both day and nighttime hours. The difference in diel activity patterns between P42 and P41 was almost entirely driven by the timing of concentrated transiting bouts (Fig. 4). Snake P42 spent much more time transiting at night (9.5% of night time spent transiting) than during the day (1.4% transiting; Table 2), and was especially likely to show transiting behavior in the evenings (2000–2200 h) when it spent > 20% of its time transiting. Snake P41 was far more likely to exhibit transiting behavior during the day, when it spent > 21% of its time transiting, but only spent 6.7% of its time transiting at night. Environmental temperature (F1,5182 = 60.5, p < 0.001), hour of the day (F23,5182 = 13.3, p < 0.001), and the interaction between these variables (F23,5182 = 8.9, p < 0.001) were important predictors of percent time transiting (Fig. 5). At cooler temperatures (< 14 °C), snakes displayed little transiting behavior, but transiting became more likely during the mid-morning when temperatures reached 14° to 20 °C. Once daytime temperatures exceeded 24 °C, transiting movements became less common. However, as temperatures increased further there was an increase in activity during nighttime hours. Percent of time spent transiting according to time of day and environmental temperature for all tagged Burmese pythons (Python bivittatus) in Everglades National Park, South Florida, USA.
Percent time transiting is represented by color across hour of the day on the Y axis and environmental temperature on the X-axis. All colors in this figure correspond to the percent time transiting legend in the upper right corner. The top barplot represents the percent time transiting across the environmental temperature range observed and right bar plot represents the percent time transiting across hour of the day GPS accelerometer calibration The two individuals with complete GPS tracks showed different patterns of movement and habitat use. Throughout its 95-day deployment, P41 traveled over 15 km from its release location, including the longest transiting bout (58.5 h) measured in this study, as well as a 48-h period of rapid (2.4 km/day) displacement during which the snake was believed to be transiting through water based on its inferred movement path and the lack of GPS fixes during this period (Fig. 6). In contrast, P52 stayed within 300 m of its release location for over 35 days before moving 1.8 km over the next 3 days until it was recaptured. Snakes did not have an equal probability of logging a GPS position across hour of the day (GLMM: F1,23 = 4.71, p = 0.039, Fig. 6). Both P41 and P52 were more likely to log a GPS position during daylight hours than at night, and both individuals had a significantly higher probability of logging a position between 1000 and 1400 h (Z > 2.58, p < 0.009) than at other times of day. Displacement distance and surge acceleration during January for snake P41, a Burmese python (Python bivittatus) in Everglades National Park, South Florida, USA. During this period, the individual had high activity levels and displayed most of its movement during the day. This period included a 58-h transiting bout (Jan. 20th–22nd; red box) as well as two periods of rapid displacement at a rate of 2.4 km/day (Jan. 19th–20th and Feb. 
1st–3rd; blue boxes) during which the snake was likely in an aquatic environment based on habitat from its inferred movement path and the lack of GPS fixes (indicated by red dots) during transiting. Gray shaded bars indicate nighttime hours GPS distance (hourly) traveled for P41 and P52 was positively correlated with both the percent of each hour spent moving (T = 51.9, p < 0.001) and with the percent of each hour transiting (T = 54.2, p < 0.001). However, the percent of each hour transiting was better than percent time moving (ΔAIC = 58) at predicting the GPS distance traveled (R2 = 0.58). Unique behavioral events Even after correcting for attachment angle, tag orientation was highly variable throughout all deployments, likely due to variations in habitat type used (i.e., brush, arboreal, aquatic, or subterranean) and snake body position. However, across all individuals, pitch and roll values rarely (< 0.05%) exceeded ± 80 degrees from horizontal. Incidents (N = 105) in which roll angle exceeded 90 degrees were typically short in duration (34 ± 111 s), but a subset of these lasted for extended periods and may indicate unique behavioral events. One event began with snake P42 in a resting (non-transiting) state when it began a 2-h period of highly atypical rolling behavior, including rolling over completely two times (Fig. 7). The snake started by rolling laterally 220 degrees and ultimately rolled completely, 360 degrees from its original position, over a period of 15 s. Approximately 1.5 h later, during which time it displayed a moderate amount of activity, the snake again commenced atypical rolling behavior and completed another full lateral roll (360 degrees) in the opposite direction from the first roll over a period of ~ 1 min. After this, the snake continued atypical rolling behavior (> 90 degrees) for an additional 15 min. 
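Body pitch and roll, and the extraction of the roll incidents described above, can be sketched from the static (gravity) component of the acceleration signal. The tilt formulas below are the standard gravity-tilt expressions; the axis convention (X surge, Y sway, Z heave) is an assumption that would need to match the tag's actual mounting orientation:

```python
import numpy as np

def pitch_roll(acc):
    """Pitch and roll (degrees) from smoothed (static) acceleration.

    acc: (N, 3) array with columns surge (X), sway (Y), heave (Z), in g.
    Axis conventions here are assumptions, not taken from the study.
    """
    x, y, z = acc[:, 0], acc[:, 1], acc[:, 2]
    pitch = np.degrees(np.arctan2(x, np.sqrt(y**2 + z**2)))
    roll = np.degrees(np.arctan2(y, z))
    return pitch, roll

def roll_events(roll, threshold=90.0):
    """Contiguous runs where |roll| exceeds threshold, as (start_idx, n_samples)."""
    exceeded = np.abs(np.asarray(roll)) > threshold
    edges = np.diff(exceeded.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if exceeded[0]:
        starts = np.r_[0, starts]
    if exceeded[-1]:
        ends = np.r_[ends, exceeded.size]
    return [(int(s), int(e - s)) for s, e in zip(starts, ends)]
```

Filtering the resulting event list by duration (most runs lasted only seconds) is what isolates the rare, extended rolling episodes like the one described for P42.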
The subsequent night (15 h later), the snake displayed transiting behavior for 50 min, after which it did not engage in transiting behavior for a period of 5.7 days. These were the only instances of complete lateral rolling (180–360 degrees) detected throughout our 216 d of monitoring. The only similar event to this occurred when snake P41 rolled ~ 200 degrees for a period of ~ 20 s before rolling back into its original position. Roll angle of snake P42, a Burmese python (Python bivittatus) in Everglades National Park, South Florida, USA, showing a unique, 2-h long behavioral event during which the snake displayed abnormal body postures, and rolled over completely at two separate times. Snake head schematics represent the approximate roll angles shown in the data trace below. Note that head schematics are used for roll visualization, but the actual roll angle was measured in the posterior third of the snake's body where the ADL was implanted. Data at the far left and right sides of the top panel represent typical body position with low variation in roll Acceleration data loggers effectively differentiated between patterns of rest vs. activity over several weeks in free-living Burmese pythons. The use of ADLs and other inertial sensors is rapidly expanding in wildlife research [18, 19, 25], where they are most often applied using very high (> 10 Hz) sampling frequency in order to identify behaviors or energy expenditure based on repetitive, high-acceleration movements (e.g., running, flying, swimming, etc.) over relatively short periods of time [26,27,28]. In contrast, the slow, usually rectilinear, movements of large-bodied snakes [29] are intermittent and occur at low frequencies. This allowed us to accurately describe their locomotion using a much lower sampling frequency (1 Hz) which extended our monitoring time to several months before filling up ADL memory. 
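The memory budget behind this trade-off is simple arithmetic. The sketch below assumes raw, uncompressed storage and ignores the temperature/pressure channels and any firmware overhead, so it gives only rough upper bounds rather than reproducing the approximately 3-month endurance quoted in the Methods:

```python
def record_days(memory_mb, bits_per_sample, axes=3, rate_hz=1.0):
    """Approximate days of recording before an ADL's memory fills.

    Back-of-envelope only: assumes raw, uncompressed tri-axial samples.
    """
    bytes_per_s = axes * rate_hz * bits_per_sample / 8
    return memory_mb * 1024**2 / bytes_per_s / 86400

cefas_days = record_days(56, 8)    # Cefas G6a: 56 MB memory, 8-bit samples
vemco_days = record_days(5, 12)    # Vemco XYZ: 5 MB memory, 12-bit samples
```

Raising the sampling rate to 5 Hz would cut these endurance figures five-fold, which is the trade-off motivating the 1 Hz choice for these slow-moving animals.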
However, the low acceleration values of snake movements complicated our effort to distinguish body movements from sensor noise in the low-resolution (8-bit) ADLs applied to three of our four wild snakes. Our use of an angular rotation metric overcame this problem, providing clarity and consistency in comparing movements among individuals and removing artifacts that can arise from dynamic acceleration-based metrics (such as ODBA; Overall Dynamic Body Acceleration; [17]) when using tag types of differing sensor resolution on different individuals. Additionally, while the 1 Hz sampling frequency was sufficient to detect active vs. resting behavior, it may have missed faster locomotor behaviors with a duration shorter than a few seconds. We found that pythons spent an average of 86.1% of their time resting. This overall pattern is typical of many ambush-foraging snakes [29, 30] and has been noted in Burmese pythons [31,32,33], but never quantified in the wild. Although resting periods were interrupted by changes in body position and short movements lasting a few seconds to minutes, snakes typically went for over a day at a time and sometimes several days between transiting events. During transiting, pythons often moved for several hours at a time during periods that also corresponded with longer displacement distances as revealed by GPS loggers. Previous studies have shown that these animals are capable of long-distance dispersal of several kilometers [9, 13], with one snake moving a total of 80 km over an 18-month period. Our results show that these movements are often conducted during bouts of continuous transiting lasting several hours, up to nearly 60 h in snake P41. During this extended transiting period, P41 traveled farther in a single day (2.43 km/day) than any other snake tracked by Hart et al. [13] and it moved faster than translocated snakes exhibiting homing behavior described by Pittman et al. [34].
Given the limits of python metabolism and energetics [35] this long period of continuous transiting by snake P41 appears to indicate the activity of a highly motivated individual. While the proximate motivation of this transiting is unknown, it may represent a search for suitable habitat, prey, or mates, and this was the only deployment that took place entirely during the presumed python breeding season in the Greater Everglades Ecosystem. The differences in diel movement patterns observed here were likely driven primarily by temperature. Most species exhibit diurnal activity patterns due to evolution of particular physiological traits that are optimized at certain temperatures [36]. While we observed strong diel behaviors in two individuals, they showed opposing patterns, possibly due to the fact that P41 was tagged in winter (and moved primarily during midday) and P42 was tagged during much warmer temperatures in early fall (and moved primarily at night). Bhatt and Choudhury [37] found that pythons in India were crepuscular during the warmer months and diurnal during the cooler months. Because pythons are ectothermic, thermoregulation strategies play a large role in behavior determination [38]. By altering their movement patterns (temporally or spatially) at different temperatures, snakes may maximize physiological rates in order to minimize energetic costs or to maximize digestion or locomotion speed. Size may also play a role, as larger animals lose body heat more slowly [39], and thermal inertia may allow large pythons to move even on cool evenings. The increased likelihood of obtaining GPS logger fixes during daytime hours was likely related to animal behavior and habitat. Both snakes that carried GPS loggers were more active during the day than at night, increasing the probability of their antennas being exposed. Smith et al. 
[14] found that lower GPS fix rates in these snakes were caused not only by vegetation density but also by python microhabitat selection, since fixes cannot be obtained when the animals are underwater or underground. Additionally, individuals may be more likely to bask in the sun during the day [37], increasing the likelihood of logging a GPS position. While GPS fix rates appear to be highest during periods of transiting, we noticed several notable exceptions likely related to microhabitat. For instance, snake P41 showed two lengthy periods of transiting activity with no accompanying GPS fixes during the events (Fig. 6; Jan 19th–20th; Feb 1st–3rd). This suggests that the GPS antenna was not exposed during these transiting periods and that the animal may have been traveling through water. This inference is supported by spatial habitat data from the snake's inferred GPS track and substantiates concerns that aquatic dispersal is possible for this species [40]. Of particular interest was our detection of a unique behavioral event involving a 2-h period of atypical lateral rolling. Although it is possible that this was an attempted or successful foraging event, it may also have been an interaction with a conspecific (although it did not take place during the breeding season) or predator, or some other unknown event. Without additional corroborating evidence, we cannot assign function to this event, but its unique characteristics within our broader dataset make it noteworthy. Future work using long-duration captive experiments may help to create behavioral profiles to better identify specific behaviors in the wild. Given our lengthy monitoring periods, the fact that we did not detect more unique acceleration events suggestive of feeding or breeding is surprising and may be due to a number of biological and technical factors. First, feeding in pythons is likely rare [30, 41,42,43], as these animals may go weeks or months without feeding [35]. 
Captive female pythons also exhibit reproductive aphagia, possibly starting at the time of copulation [44], and most of our data are from female snakes during the breeding season. It is therefore possible that our animals did not feed during the time they were monitored, and reduced python feeding during winter months is supported by patterns of marsh rabbit predation presented by McCleery et al. [8]. Second, placement of our ADL package on the posterior third of the animal may have prevented the device from detecting movements associated with striking or feeding. Preliminary captive trials showed the posterior part of the animal to remain largely stationary during striking behavior. Although python breeding aggregations have been frequently observed in the Everglades [12, 45], copulating snakes are often found in a resting state. Any unique movements associated with recognition and courtship may be slow [29] and therefore difficult to differentiate from other movements. We were able to identify fine-scale activity patterns from ADL data traces, illustrating the utility of this technology to provide detailed information on the movements of large-bodied snakes in the wild. This information can inform a variety of management plans, as effective control and eradication techniques often rely upon specific and predictable behaviors. For example, Smith et al. [12] found road cruising and the Judas (or "scout") technique to be complementary management approaches for catching Burmese pythons, depending on the season. Road cruising is optimally effective at times of day and seasons when pythons are most likely to be moving and crossing roads, whereas the scout technique is most effective at targeting an entirely different behavior (breeding). Both of these removal techniques are labor intensive and incur substantial cost [12], so using ADL data to optimize the timing of each could greatly improve management efficiency. 
Coupling movement frequencies with tracking data could also be a promising new method for estimating densities of secretive snakes [46]. Fine-scale activity and some behaviors were apparent from ADL data, making ADLs a potentially valuable tool for monitoring large-bodied snakes in the wild. Individuals spent 86.1% of their time resting, but often moved continuously for several hours at a time during bouts of transiting. Individuals may shift diel activity pattern based on season. Understanding seasonal differences in activity levels can improve the accuracy of population estimates, help detect range expansion, and improve managers' ability to find and capture individuals. All animals utilized in this study were visually spotted and captured crossing a road at night or on roadsides during daytime searches within Everglades National Park. Capture and tagging were permitted under University of Florida animal care protocols F162 and 009-08-FTL, Florida Fish and Wildlife Conservation Commission permit ESC 08–02 and National Park Service (Everglades) permits EVER-2007-SCI-001, EVER-2009-SCI-001, and EVER-2011-SCI-0002. After experiments were finished, snakes were euthanized in accordance with established management protocols. Tag attachment and field trials Three-dimensional acceleration data were recorded using either a Vemco XYZ (16 × 108 mm, 35 g weight, 12-bit resolution, range ±2 g, 5 MB memory, Vemco, Nova Scotia, Canada) or Cefas G6a (2.8 × 1.6 cm; 18 g weight; 8-bit resolution, range ±2 g, 56 MB memory, Cefas Technology Ltd., Lowestoft, UK) acceleration data loggers (ADL). Cefas G6a tags additionally recorded pressure and temperature, while an iButton temperature logger (8-bit resolution, range −40 to 85 °C, Maxim Integrated, San Jose, CA) was used for animals that received a Vemco ADL.
All tags were set to record tri-axial acceleration at 1 Hz and temperature every 30 min during the 2011 field season or every 30 s in the 2012 season. This sampling regime minimized memory usage and maximized record duration, allowing the tags to record for approximately 3 months before their memory would be full. Preliminary trials showed that this 1-Hz acceleration sampling rate produced highly similar results (R2 > 0.97) to a 5-Hz sampling rate when comparing several metrics of acceleration in these slow-moving snakes (Additional file 1: Fig. S1). To ensure retention, ADLs were surgically implanted within the lateral body wall (see [13, 47]) at two-thirds of the distance from the head to the tail to avoid puncturing the lung in the anterior half of the body (Fig. 1). To ensure that the movements and postures recorded by the ADL accurately reflected those of the snake, loggers were held in place by suture material passed through the skin of the snake and through a single barrel aluminum crimp sleeve (8 × 1.8 mm) that was epoxied to the logger. Sutures were externally visible after surgery and were examined at the end of experiments to ensure they were still in place and had prevented tag movement inside the snake (Fig. 1). Upon release, snake movements were observed and noted for several minutes to later validate acceleration signatures, and similar observations were taken at the end of trials during recapture. Two VHF transmitters (Model: AI-2, 25 g, Holohil Systems, Ltd., Carp, ON, Canada) were implanted into each snake, as in Hart et al. [13], to ensure snake relocation and recovery at the end of the experiment. Some individuals were also implanted with GPS data loggers, recording a position every 1–2 h (Quantem 4000E Medium Backpack, Telemetry Solutions; see Hart et al. [13] for information on GPS loggers and attachment methods). After 20–95 days, snakes were located via their VHF transmitters, captured, and brought back to the laboratory for surgical removal of all tags.
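The 1-Hz versus 5-Hz comparison described above can be illustrated with a short sketch. This is not the authors' exact procedure; the synthetic signal, the 60-s window, and the RMS metric are assumptions chosen only to show how a windowed acceleration metric computed at two sampling rates can be compared via R²:

```python
import math
import random

def rms(values):
    """Root mean square of a list of samples."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def windowed_metric(samples, rate_hz, window_s=60):
    """RMS acceleration in non-overlapping windows of window_s seconds."""
    n = rate_hz * window_s
    return [rms(samples[i:i + n]) for i in range(0, len(samples) - n + 1, n)]

def r_squared(a, b):
    """Squared Pearson correlation between two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov * cov / (va * vb)

random.seed(0)
# Synthetic 5-Hz record: a slow (100-s period) movement signal plus
# small sensor noise, mimicking a slow-moving snake.
accel_5hz = [math.sin(2 * math.pi * t / 500) + random.gauss(0, 0.05)
             for t in range(5 * 3600)]
accel_1hz = accel_5hz[::5]          # keep every 5th sample -> 1 Hz

m5 = windowed_metric(accel_5hz, 5)
m1 = windowed_metric(accel_1hz, 1)
print(round(r_squared(m5, m1), 3))  # close to 1 for slow movements
```

For slowly varying signals like these, downsampling changes the windowed metric very little, which is the rationale for the 1-Hz deployment setting.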
Acceleration data analysis

Recovered ADLs were downloaded and then analyzed in R (version 3.1.0) and Igor Pro 6.0 wave analysis software (Wavemetrics, Oregon, USA) using the Ethographer extension [26]. To ensure that the tag was oriented in the same frame of reference as the snake, the tag attachment angle was corrected for by rotating the acceleration data so that the average position of the tag across the entire deployment was level and horizontal (Z: dorso-ventral axis was parallel to gravity) [48]. Burmese pythons move primarily through rectilinear locomotion, and thus the primary source of acceleration during movement should be along the anterior–posterior axis. To ensure the tag was oriented along the snakes' anterior–posterior axis, the acceleration data were rotated around the dorso-ventral axis (Z-axis) to maximize the dynamic acceleration along the anterior–posterior axis (X-axis). Due to the low-acceleration movements exhibited by the snakes, we calculated roll (1) and pitch (2) from the raw acceleration data:

$$\text{Roll} = \operatorname{atan2}\left(Y_{\text{accel}},\ Z_{\text{accel}}\right), \qquad (1)$$

$$\text{Pitch} = \operatorname{atan2}\left(-X_{\text{accel}},\ Y_{\text{accel}}\sin(\text{Roll}) + Z_{\text{accel}}\cos(\text{Roll})\right). \qquad (2)$$

We observed little dynamic acceleration during deployments, with tag measurement error accounting for the majority of variation in the root mean squared acceleration. Therefore, python movements were instead characterized by a change in tag orientation.
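With the axis conventions above (X anterior–posterior, Z dorso-ventral), Eqs. (1) and (2) amount to two-argument arctangents of the static acceleration components. A minimal sketch (the function name and test values are illustrative):

```python
import math

def roll_pitch(x, y, z):
    """Roll and pitch (degrees) from static tri-axial acceleration,
    following Eqs. (1)-(2): roll from the Y/Z components, pitch from
    the gravity component remaining along the X axis."""
    roll = math.atan2(y, z)
    pitch = math.atan2(-x, y * math.sin(roll) + z * math.cos(roll))
    return math.degrees(roll), math.degrees(pitch)

# Tag level, dorso-ventral axis parallel to gravity (Z carries 1 g):
print(roll_pitch(0.0, 0.0, 1.0))   # (0.0, 0.0)
# Tag rolled 90 degrees onto its side (Y now carries gravity):
print(roll_pitch(0.0, 1.0, 0.0))   # (90.0, 0.0)
```

Using `atan2` rather than a plain arctangent keeps the angles unambiguous over the full circle and avoids division by zero when one component vanishes.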
To do this, we reduced the dimensionality of the acceleration data by calculating the three-dimensional vector angle (angular rotation) between acceleration data points at a 5-s lag (3):

$$\text{VectorAngle} = \arccos\left(\frac{\sum \mathrm{XYZ}_{t}\cdot \mathrm{XYZ}_{t-5}}{\sqrt{\sum \mathrm{XYZ}_{t}\cdot \mathrm{XYZ}_{t}}\ \sqrt{\sum \mathrm{XYZ}_{t-5}\cdot \mathrm{XYZ}_{t-5}}}\right). \qquad (3)$$

This metric provides the minimum angle that the tag moved, regardless of the axis over which the movement occurred. A 5-s lag was chosen because it increases the magnitude of movement, facilitating differentiation between movement and sensor measurement error, while remaining a sufficiently short period of time to isolate discrete periods of movement. Additionally, to examine the possibility of cyclical movements that would be indicative of rectilinear locomotion, we performed a continuous wavelet transform of the X-axis data using the biwavelet package in R. Periodicity values between 3 and 20 s were investigated as possible periodicities of rectilinear locomotion exhibited by the snakes. At each time point, the periodicity with the greatest spectral power was deemed the "dominant" locomotion frequency.

Physical movement vs. behavioral movement

Physical movement was determined by examining the rotation of the tag from the vector angle metric (angular rotation). Because locomotion in large-bodied snakes is slow, intermittent, and includes repeated intervals of rest for a given body section even when the snake is moving continuously, we applied a filter to translate physical movement of the tag into behavioral (snake) movement or locomotion. Visual examination of the vector angle data showed that acceleration sensor noise contributed to movements of less than two degrees. Thus, the snake was considered to be resting any time that the tag moved ≤ 2.5 degrees for at least 7 consecutive seconds.
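Equation (3) collapses each pair of lagged acceleration triplets to a single rotation angle. A minimal sketch, using a 5-sample lag to stand in for the 5-s lag at 1 Hz (the names and test vectors are illustrative):

```python
import math

def vector_angle(a, b):
    """Minimum 3-D rotation angle (degrees) between two acceleration
    vectors, Eq. (3): arccos of the normalized dot product."""
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(q * q for q in b))
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def angular_rotation(xyz, lag=5):
    """Vector-angle series between samples `lag` steps apart."""
    return [vector_angle(xyz[i], xyz[i - lag]) for i in range(lag, len(xyz))]

print(vector_angle((1, 0, 0), (0, 1, 0)))   # 90.0
print(vector_angle((0, 0, 1), (0, 0, 1)))   # 0.0
```

Because only the angle between the lagged vectors matters, the metric is insensitive to which axis the rotation occurred about, matching the text's description.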
All other periods that were at least 3 s in duration were classified as movement (Fig. 2). Brief movements lasting 1 or 2 s were re-classified as resting. To separate temporally brief, discrete snake movements (e.g., body repositioning and postural changes) from transiting movements (repetitive movements associated with a change in snake location), a running sum (window size = 3600 s; 1 h) of the movement state (moving = 1, stationary = 0) was used to determine the total amount of time moving within a 1-h period. Snakes were considered to be in a transiting bout when an individual maintained a high activity (> 33% moving) for at least 20 consecutive minutes. During these transiting bouts, all snake movements were considered transiting movements and typically showed a clear dominant locomotory period or gait. All movements that were not during transiting bouts were considered small-scale movements and changes of body position or posture.

Temperature analysis

Since ADLs were surgically implanted, they reported internal body temperature of the snake. To examine trends in activity and possible thermoregulatory behavior, we compared snake movement and body temperature to environmental temperature from the Florida Automated Weather Network (FAWN) Homestead station (25.5126 N, 80.5031 W), which was less than 30 km away from the snakes' release location. These data were collected every 15 min and were linearly interpolated to be at the same frequency as the summarized ADL data (1 min). We used soil temperature at 10 cm depth as a proxy for environmental temperature because it is more stable than ambient air temperature and represents the temperature that the snakes are most exposed to through conduction; it also most closely correlated with snake body temperatures.

GPS data analysis

GPS data were filtered to only retain times when the GPS was able to log a 2- or 3-D position (excluding points without a reliable location).
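The rest/movement filter described above can be sketched as follows. The thresholds (2.5°, 7-s minimum rest, 3-s minimum movement) come from the text; the run-length implementation and the example series are assumptions, and the 1-h running-sum transiting step is omitted for brevity:

```python
def _drop_short_runs(state, target, min_len):
    """Flip runs of `target` shorter than min_len to the other label."""
    out = state[:]
    i = 0
    while i < len(out):
        j = i
        while j < len(out) and out[j] == out[i]:
            j += 1
        if out[i] == target and j - i < min_len:
            for k in range(i, j):
                out[k] = 1 - target
        i = j
    return out

def classify_movement(angles, rest_thresh=2.5, min_rest=7, min_move=3):
    """Label each 1-Hz vector-angle sample as moving (1) or resting (0).

    A run of angles <= rest_thresh lasting at least min_rest s is rest;
    movement runs shorter than min_move s are re-classified as rest."""
    state = [1 if a > rest_thresh else 0 for a in angles]
    state = _drop_short_runs(state, target=0, min_len=min_rest)
    state = _drop_short_runs(state, target=1, min_len=min_move)
    return state

# 10 s of quiet tag noise, a 2-s twitch, 10 s quiet, then a 30-s bout:
angles = [0.5] * 10 + [5.0] * 2 + [0.5] * 10 + [6.0] * 30
state = classify_movement(angles)
print(sum(state))   # 30: the 2-s twitch is re-classified as rest
```

Applying the short-rest pass before the short-movement pass mirrors the order in the text: sustained quiet periods are fixed as rest first, and only then are 1–2 s twitches absorbed into the surrounding rest.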
Depending on snake body orientation or habitat type, the GPS was not able to continuously record locations. GPS data were standardized by linearly interpolating hourly positions throughout each snake's entire deployment. The straight-line distance between each hourly position was calculated as the distance the snake traveled during that hour.

All values are reported as mean ± standard deviation unless otherwise noted. Means were calculated using each individual's mean, and thus individuals are equally weighted regardless of deployment duration. For each individual, all activity data throughout the entire deployment were aggregated by hour and compared to a uniform distribution using a Chi-squared test to examine whether individuals showed diel differences in activity. Environmental temperature was also aggregated by hour, and the combined effects of temperature and time of day on activity were quantified with generalized additive models (mgcv package in R). GPS–accelerometer linear regressions were performed using the lme4 package in R, with p-values calculated using the lmerTest package and snake ID as a random effect. Model comparison was performed using the Akaike Information Criterion (AIC), with significance determined by a ΔAIC of 2 [49].

The datasets supporting the conclusions of this article are publicly available in Hart et al. [50]: https://doi.org/10.5066/P91MLPXJ.

ADL: Acceleration data logger
ODBA: Overall dynamic body acceleration
Hz: Hertz
GAMM: Generalized Additive Mixed Model
FAWN: Florida Automated Weather Network

Kraus F. Alien reptiles and amphibians: a scientific compendium and analysis. Springer Science & Business Media; 2008. Fritts TH. The brown tree snake, Boiga irregularis, a threat to Pacific islands. US Fish and Wildlife Service; 1988. Snow R, Krysko K, Enge K, Oberhofer L, Warren-Bradley A, Wilkins L. Introduced populations of Boa constrictor (Boidae) and Python molurus bivittatus (Pythonidae) in southern Florida.
Biology of Boas and Pythons. Edited by RW Henderson and R Powell. Eagle Mountain Publishing, Eagle Mountain, Utah, USA; 2007. p. 365–86. Dorcas M, Willson J. Hidden giants: problems associated with studying secretive invasive pythons. Reptiles in Research: Investigations of Ecology, Physiology, and Behavior from Desert to Sea. Nova Science Publishers, New York, NY; 2013. p. 367–85. Reichert BE, Sovie AR, Udell BJ, Hart KM, Borkhataria RR, Bonneau M, et al. Urbanization may limit impacts of an invasive predator on native mammal diversity. Divers Distrib. 2017;23(4):355–67. Willson JD, Dorcas ME, Snow RW. Identifying plausible scenarios for the establishment of invasive Burmese pythons (Python molurus) in Southern Florida. Biol Invasions. 2011;13(7):1493–504. Dorcas ME, Willson JD, Reed RN, Snow RW, Rochford MR, Miller MA, et al. Severe mammal declines coincide with proliferation of invasive Burmese pythons in Everglades National Park. Proc Natl Acad Sci. 2012;109(7):2418–22. McCleery RA, Sovie A, Reed RN, Cunningham MW, Hunter ME, Hart KM. Marsh rabbit mortalities tie pythons to the precipitous decline of mammals in the Everglades. Proc R Soc B. 2015;282(1805):20150120. Sovie AR, McCleery RA, Fletcher RJ, Hart KM. Invasive pythons, not anthropogenic stressors, explain the distribution of a keystone species. Biol Invasions. 2016;18(11):3309–18. Willson JD. Indirect effects of invasive Burmese pythons on ecosystems in southern Florida. J Appl Ecol. 2017;54(4):1251–8. Hoyer IJ, Blosser EM, Acevedo C, Thompson AC, Reeves LE, Burkett-Cadena ND. Mammal decline, linked to invasive Burmese python, shifts host use of vector mosquito towards reservoir hosts of a zoonotic disease. Biol Lett. 2017;13(10):20170353. Smith BJ, Cherkiss MS, Hart KM, Rochford MR, Selby TH, Snow RW, et al. Betrayal: radio-tagged Burmese pythons reveal locations of conspecifics in Everglades National Park. Biol Invasions. 2016;18(11):3239–50.
Hart KM, Cherkiss MS, Smith BJ, Mazzotti FJ, Fujisaki I, Snow RW, et al. Home range, habitat use, and movement patterns of non-native Burmese pythons in Everglades National Park, Florida, USA. Animal Biotelemetry. 2015;3(1):8. Smith BJ, Hart KM, Mazzotti FJ, Basille M, Romagosa CM. Evaluating GPS biologging technology for studying spatial ecology of large constricting snakes. Animal Biotelemetry. 2018;6(1):1. Tanaka H, Takagi Y, Naito Y. Swimming speeds and buoyancy compensation of migrating adult chum salmon Oncorhynchus keta revealed by speed/depth/acceleration data logger. J Exp Biol. 2001;204(22):3895–904. Yoda K, Naito Y, Sato K, Takahashi A, Nishikawa J, Ropert-Coudert Y, et al. A new technique for monitoring the behaviour of free-ranging Adelie penguins. J Exp Biol. 2001;204(4):685–90. Wilson RP, White CR, Quintana F, Halsey LG, Liebsch N, Martin GR, et al. Moving towards acceleration for estimates of activity-specific metabolic rate in free-living animals: the case of the cormorant. J Anim Ecol. 2006;75(5):1081–90. Whitney NM, Lear KO, Gleiss AC, Payne N, White CF. Advances in the application of high-resolution biologgers to Elasmobranch fishes. In: Carrier JC, Heithaus MR, Simpfendorfer CA, editors. Shark Research: Emerging Technologies and Applications for the Field and Laboratory: CRC Press; 2019. p. 44–69. Wilson RP, Shepard E, Liebsch N. Prying into the intimate details of animal lives: use of a daily diary on animals. Endangered Species Res. 2008;4(1–2):123–37. Halsey LG, Jones TT, Jones DR, Liebsch N, Booth DT. Measuring energy expenditure in sub-adult and hatchling sea turtles via accelerometry. PLoS ONE. 2011;6(8):e22311. Watanabe S, Izawa M, Kato A, Ropert-Coudert Y, Naito Y. A new technique for monitoring the detailed behaviour of terrestrial animals: a case study with the domestic cat. Appl Anim Behav Sci. 2005;94(1–2):117–31. Gleiss AC, Jorgensen SJ, Liebsch N, Sala JE, Norman B, Hays GC, et al.
Convergent evolution in locomotory patterns of flying and swimming animals. Nat Commun. 2011;2:352. Payne NL, Iosilevskii G, Barnett A, Fischer C, Graham RT, Gleiss AC, et al. Great hammerhead sharks swim on their side to reduce transport costs. Nat Commun. 2016;7(1):1–5. Brischoux F, Kato A, Ropert-Coudert Y, Shine R. Swimming speed variation in amphibious seasnakes (Laticaudinae): a search for underlying mechanisms. J Exp Mar Biol Ecol. 2010;394(1–2):116–22. Brown DD, Kays R, Wikelski M, Wilson R, Klimley AP. Observing the unwatchable through acceleration logging of animal behavior. Animal Biotelemetry. 2013;1(1):20. Sakamoto KQ, Sato K, Ishizuka M, Watanuki Y, Takahashi A, Daunt F, et al. Can ethograms be automatically generated using body acceleration data from free-ranging birds? PLoS ONE. 2009;4(4):e5379. Halsey LG, Green J, Wilson R, Frappell P. Accelerometry to estimate energy expenditure during activity: best practice with data loggers. Physiol Biochem Zool. 2008;82(4):396–404. Nathan R, Spiegel O, Fortmann-Roe S, Harel R, Wikelski M, Getz WM. Using tri-axial acceleration data to identify behavioral modes of free-ranging animals: general concepts and tools illustrated for griffon vultures. J Exp Biol. 2012;215(6):986–96. Pope CH. The giant snakes: the natural history of the boa constrictor, the anaconda, and the largest pythons, including comparative facts about other snakes and basic information on reptiles in general: Random House Inc; 1961. Daniel JC, Society BNH. The book of Indian reptiles and amphibians: Bombay Natural History Society India; 2002. Wall F. A popular treatise on the common Indian snakes. J Bombay Nat Hist Soc. 1912;21:447–75. Wall F. Ophidia Taprobanica: Or. The Snakes of Ceylon: HR Cottle, Government printer; 1921. Whitaker R. Common Indian Snakes. Macmillan; 1978. Pittman SE, Hart KM, Cherkiss MS, Snow RW, Fujisaki I, Smith BJ, et al.
Homing of invasive Burmese pythons in South Florida: evidence for map and compass senses in snakes. Biol Lett. 2014;10(3):20140040. Secor SM. Digestive physiology of the Burmese python: broad regulation of integrated performance. J Exp Biol. 2008;211(24):3767–74. Schmidt-Nielsen K. Animal physiology: adaptation and environment. Cambridge: Cambridge University Press; 1997. Bhatt K, Choudhury B. The diel activity pattern of Indian python (Python molurus molurus Linn.) at Keoladeo National Park, Bharatpur, Rajasthan. J Bombay Nat Hist Soc. 1993;90(3):394–403. Angilletta MJ. Thermal adaptation: a theoretical and empirical synthesis. Oxford: Oxford University Press; 2009. Ayers D, Shine R. Thermal influences on foraging ability: body size, posture and cooling rate of an ambush predator, the python Morelia spilota. Funct Ecol. 1997;11(3):342–7. Bartoszek IA, Hendricks MB, Easterling IC, Andreadis PT. Python bivittatus (Burmese Python) Dispersal/Marine incursion. Herpetol Rev. 2018;49(3):554–5. Slip DJ, Shine R. Feeding habits of the diamond python, Morelia s. spilota: ambush predation by a boid snake. J Herpetol. 1988:323–30. McCue MD. Snakes survive starvation by employing supply- and demand-side economic strategies. Zoology. 2007;110(4):318–27. Wilson D. Foraging ecology and diet of an ambush predator: the Green Python (Morelia viridis). Biology of the Boas and Pythons. 2007:141–50. Barker DG, Murphy JB, Smith KW. Social behavior in a captive group of Indian pythons, Python molurus (Serpentes, Boidae) with formation of a linear social hierarchy. Copeia. 1979;1:466–71. Smith BJ, Rochford MR, Brien M, Cherkiss MS, Mazzotti F, Snow S, et al. Largest breeding aggregation of Burmese Pythons and implication for potential development of a control tool. IRCF Reptiles Amphibians. 2015;22(1):16–9. Willson JD, Pittman SE, Beane JC, Tuberville TD. A novel approach for estimating densities of secretive species from road-survey and spatial-movement data. Wildlife Res.
2018;45(5):446–56. Reinert HK, Cundall D. An improved surgical implantation method for radio-tracking snakes. Copeia. 1982;1982(3):702–5. Jorgensen SJ, Gleiss AC, Kanive PE, Chapple TK, Anderson SD, Ezcurra JM, et al. In the belly of the beast: resolving stomach tag data to link temperature, acceleration and feeding in white sharks (Carcharodon carcharias). Animal Biotelemetry. 2015;3(1):52. Burnham KP, Anderson DR. Practical use of the information-theoretic approach. Model selection and inference. Springer: New York, NY; 1998. p. 75–117. Hart KM, White CF, Smith BJ, Cherkiss MS, Mazzotti FJ, Whitney NM. 2020. Burmese python acceleration and location data, Everglades National Park, 2010–2012: U.S. Geological Survey data release, https://doi.org/10.5066/P91MLPXJ.

We thank E. Jones, K. Lear, J. Tyminski, G. Schwieterman, R. Snow, T. Kieckhefer, J. Cinci, and L. Medwedeff for help with various aspects of this study. Artwork in Fig. 1 was done by V. Winter. This work was supported by the USGS Priority Ecosystems Science program, the U.S. National Park Service Critical Ecosystems Studies Initiative, and the South Florida Water Management District. Permits and approvals were obtained from the U.S. National Park Service and the Animal Research Committee at the University of Florida. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. This work was supported by the U.S. Geological Survey (USGS) Priority Ecosystems Science program, the U.S. National Park Service Critical Ecosystems Studies Initiative, and the South Florida Water Management District.

Anderson Cabot Center for Ocean Life, Central Wharf, New England Aquarium, Boston, MA, USA
Nicholas M. Whitney & Connor F. White
Department of Wildland Resources, Utah State University, 5230 Old Main Hill, Logan, UT, 84322, USA
Brian J. Smith
U.S.
Geological Survey, Wetland and Aquatic Research Center, 3321 College Avenue, Davie, FL, 33314, USA
Michael S. Cherkiss & Kristen M. Hart
Ft. Lauderdale Research and Education Center, University of Florida, 3205 College Avenue, Davie, FL, 33314, USA
Frank J. Mazzotti

NW and KH conceived and designed the study. NW was responsible for ADL design, maintenance, and programming. CW conducted data analyses; MC performed surgeries and, with FM and KH, supervised field trials. All authors participated in manuscript preparation. All authors read and approved the final manuscript. Correspondence to Nicholas M. Whitney.

All capture and tagging were permitted under University of Florida animal care protocols F162 and 009-08-FTL, Florida Fish and Wildlife Conservation Commission permit ESC 08–02 and National Park Service (Everglades) permits EVER-2007-SCI-001, EVER-2009-SCI-001, and EVER-2011-SCI-0002.

Additional file 1: Figure S1. Comparison of four separate metrics of acceleration calculated from values sampled at 5 Hz versus the same metrics calculated from data that had been downsampled to 1 Hz. All comparisons produced highly similar (R2 > 0.97) results between sampling rates, justifying our use of a 1 Hz sampling interval for deployments on pythons.

Whitney, N.M., White, C.F., Smith, B.J. et al. Accelerometry to study fine-scale activity of invasive Burmese pythons (Python bivittatus) in the wild. Anim Biotelemetry 9, 2 (2021). https://doi.org/10.1186/s40317-020-00227-7
Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

Shigekazu Kusumoto

The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as it affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile and investigate its properties through numerical simulations. From the numerical simulations, it was found that the maximum eigenvector of the tensor points toward the high-density causative body and that its dip closely follows the dip of a normal fault. It was also found that the minimum eigenvector of the tensor points toward the low-density causative body and that its dip closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor should be used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimations from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.

In recent years, gravity gradiometry surveys have been widely conducted to obtain detailed subsurface structure data (e.g., Jekeli 1988; Dransfield 2010; Chowdhury and Cevallos 2013; Braga et al. 2014). The data collected by these surveys are the gravity gradient tensor, defined by the second derivatives of the gravity potential, and its response to subsurface structures is more sensitive than that of the gravity anomaly.
At present, gravity gradiometry surveys are mainly performed using a helicopter. Consequently, the observation interval is about 3 m along the flight profile, and the observation density is very high. Gravity gradiometry surveys thus provide high observation density, high resolution, and high sensitivity to subsurface structures; they have therefore contributed greatly to the earth science and resource engineering fields as useful and powerful tools for estimating subsurface structures. Various analysis techniques using gravity gradient tensors have been suggested and discussed (e.g., Zhang et al. 2000; Beiki 2010; Martinez et al. 2013; Cevallos 2014; Li 2015). These are essentially inversion techniques. Semi-automatic interpretation methods that can extract characteristics of subsurface structures without input of geological and geophysical data have also been developed and applied to field data (e.g., Cooper 2012; Ma 2013; Ferreira et al. 2013). A typical semi-automatic interpretation method is edge emphasis, which extracts locations (edges) where the potential field changes abruptly due to density variations. The horizontal gravity gradient method and the vertical gravity gradient method (e.g., Evjen 1936; Elkins 1951; Tsuboi and Kato 1952; Blakely and Simpson 1986) are classic edge emphasis techniques. In recent years, sharper extraction techniques have been suggested (e.g., Miller and Singh 1994; Cooper and Cowan 2006; Sertcelik and Kafadar 2012; Zhang et al. 2014). In addition, attention has been paid to techniques that evaluate the shape of the potential field (e.g., Koenderink and van Doorn 1992; Robert 2001; Zhou et al. 2013; Cevallos 2014). Among these methodologies, a technique for estimating the dip of a geological boundary using the gradient tensor of the potential field has been developed (e.g., Beiki 2013).
Beiki and Pedersen (2010) showed that the maximum eigenvector of the gravity gradient tensor points to the causative body (Fig. 1a). Since this property is common to potential fields, Beiki (2013) applied it to a magnetic anomaly in the Åsele area (Sweden) and obtained useful information on the dip of the dike swarms. Kusumoto (2015), considering that the basement consists of an aggregate of high-density prisms (Fig. 1b), applied Beiki's technique (Beiki and Pedersen 2010; Beiki 2013) to the estimation of fault dips. This method provided results wherein the fault dip estimated by the gravity gradient tensor harmonized with the dip observed from seismic surveys (Kusumoto 2015, 2016a). In addition, the dip of the earthquake source fault of the Kumamoto Earthquake that occurred in April 2016, estimated from the gravity gradient tensor, also corresponded with the dip of the fault model (a normal fault of 60°) that explains the crustal movement observed by GNSS (Global Navigation Satellite System) (Kusumoto 2016b). The method is applicable over a wide range of dips, from low to high (e.g., Beiki 2013; Kusumoto 2015, 2016a, 2016b), although it shows some numerical instability for vertical faults (e.g., Kusumoto 2015).

Fig. 1 Schematic illustration of the maximum eigenvectors for two-dimensional (2D) structures such as dykes and faults. a Basic model. In this figure, v1 is the maximum eigenvector of the gravity gradient tensor and points to the causative body. The angle α between the surface and the maximum eigenvector is the dip of the causative body. b Fault model. A basement consists of an aggregate of high-density prisms, and the angle α indicates the fault dip.

Although analyses using the gravity gradient tensor have yielded excellent results in subsurface structure estimations and edge detections, gravity gradiometry surveys have been conducted in only a few areas, limiting the tensor data available.
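On a profile the gravity gradient tensor is a symmetric, traceless 2 × 2 matrix, so the eigenvector dip used in the studies above can be computed in closed form. The following is a minimal sketch in the spirit of Beiki and Pedersen (2010); the function name and input values are illustrative, not field data, and the minimum eigenvector (relevant for reverse faults) follows analogously:

```python
import math

def eigen_dip(gxx, gxz):
    """Dip (degrees from horizontal) of the maximum eigenvector of the
    profile gravity gradient tensor [[gxx, gxz], [gxz, gzz]].

    The tensor is symmetric and traceless (gzz = -gxx by the Laplace
    equation), so the 2x2 eigenproblem has a closed-form solution."""
    gzz = -gxx
    lam_max = math.sqrt((0.5 * (gxx - gzz)) ** 2 + gxz ** 2)  # mean eigenvalue is 0
    if gxz == 0.0:
        vx, vz = (1.0, 0.0) if gxx >= gzz else (0.0, 1.0)
    else:
        vx, vz = gxz, lam_max - gxx   # eigenvector for lam_max
    return math.degrees(math.atan2(abs(vz), abs(vx)))

print(round(eigen_dip(0.0, 1.0), 6))   # 45.0: eigenvector at 45 degrees
print(round(eigen_dip(1.0, 0.0), 6))   # 0.0: maximum eigenvector horizontal
```

In practice the dip estimate would be read at points along the profile near the gravity-anomaly edge marking the fault trace.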
If we were to carry out these analyses in areas where gravity gradiometry surveys have not yet been conducted, we would have to use a tensor estimated from existing gravity anomaly data. A procedure for estimating the gravity gradient tensor from gravity anomaly data has already been suggested by Mickus and Hinojosa (2001). This technique estimates the gravity gradient tensor from the spatial distribution of gravity anomalies by the Fourier transform. Since databases of gravity anomalies have been prepared, studies using the gravity gradient tensor estimated by Mickus and Hinojosa's method are expected to progress in the future. On the other hand, it is difficult to apply this method directly to gravity anomalies obtained by gravity surveys conducted on a profile, as employed frequently in active fault research. In dense gravity surveys that investigate fault structures in detail, profiles are set perpendicular to the fault and short-spaced gravity observations are taken along the profiles (e.g., Iwano et al. 2001; Inoue et al. 2004). It is important to find the fault shape, especially its dip, in these studies because the fault dip affects the area of disaster occurrence (e.g., Abrahamson and Somerville 1996; Takemura et al. 1998) and is an important parameter in numerical simulations for hazard map creation (e.g., Irikura and Miyake 2011). Consequently, in two-dimensional gravity surveys for faults, a fault dip estimated from the eigenvectors of the gravity gradient tensor calculated from the gravity anomaly would be of additional value. In addition, since this analysis technique does not require vast calculation times, I expect it will be an effective new technique for analyzing high-resolution data obtained densely, i.e., through dense gravity surveys for fault research and also airborne gravity gradiometry surveys. In this study, I first introduce the technique for the estimation of the gravity gradient tensor from a gravity anomaly on the profile.
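The Fourier-domain route from a gravity profile to the tensor components can be sketched as follows: differentiation of the anomaly becomes multiplication by ik_x in the wavenumber domain, and the vertical derivative corresponds to multiplication by |k|. This is an illustration in the spirit of Mickus and Hinojosa (2001), not their exact algorithm; the naive DFT, the grid spacing, and the single-harmonic test profile are assumptions:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for a short profile)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Inverse of dft()."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def gradient_components(gz, dx):
    """Estimate g_zz and g_xz on a profile from the gravity anomaly g_z.

    In the wavenumber domain the vertical derivative is |k|*G_z and the
    horizontal derivative is i*k_x*G_z; g_xx = -g_zz and g_zx = g_xz
    then complete the 2x2 tensor."""
    n = len(gz)
    G = dft(gz)
    k = [2 * math.pi * (i if i <= n // 2 else i - n) / (n * dx) for i in range(n)]
    gzz = [v.real for v in idft([abs(ki) * Gi for ki, Gi in zip(k, G)])]
    gxz = [v.real for v in idft([1j * ki * Gi for ki, Gi in zip(k, G)])]
    return gzz, gxz

# Single-harmonic check: g_z = cos(k0*x) gives g_zz = k0*cos(k0*x)
# and g_xz = -k0*sin(k0*x).
n, dx = 64, 1.0
k0 = 2 * math.pi * 4 / (n * dx)          # 4 cycles along the profile
gz = [math.cos(k0 * i * dx) for i in range(n)]
gzz, gxz = gradient_components(gz, dx)
print(round(gzz[0] / k0, 3))             # 1.0
```

A production version would use an FFT and taper the profile ends, but the wavenumber-domain operators are the essential point.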
After that, I discuss the relationship between fault dips and eigenvectors of the gravity gradient tensor and apply the result to gravity anomaly data obtained on the profile crossing the Kurehayama Fault in Toyama, Japan.

Methods/Experimental

Gravity gradient tensor on the profile

The gravity gradient tensor Γ on the profile is defined as follows (e.g., Beiki and Pedersen 2011):

$$\varGamma = \left[\begin{array}{cc} g_{xx} & g_{xz} \\ g_{zx} & g_{zz} \end{array}\right] \qquad (1)$$

Here, $g_{xx}$, $g_{xz}$, $g_{zx}$, and $g_{zz}$ are the components of the tensor, defined as the first derivatives of the gravity vector components $g_x$ and $g_z$ in each direction. In addition, the gravity vector components $g_x$ and $g_z$ are given by the first derivatives of the gravity potential W, namely $g_x = \partial W/\partial x$ and $g_z = \partial W/\partial z$. As the gravity potential satisfies the Laplace equation, $\partial^2 W/\partial x^2 + \partial^2 W/\partial z^2 = g_{xx} + g_{zz} = 0$, we find the relationship $g_{zz} = -g_{xx}$. Also, $g_{xz} = g_{zx}$ because the gravity gradient tensor is a symmetric tensor (e.g., Torge 1989).

Relationship between subsurface structure and gravity anomaly

In two-dimensional analyses, the structure in one direction is assumed to be infinite. Although this assumption is not realistic, it is a good approximation in fault structure analyses and gives us some practical analysis techniques. In calculations of the gravity gradient tensor from the gravity anomaly, we need gravity anomaly values at different heights. Consequently, I will show the relationship between two-dimensional subsurface structures and gravity anomalies in this subsection before estimating the gravity gradient tensor from the gravity anomaly. As the simplest subsurface model, I set a two-dimensional double-layer model consisting of a sedimentary layer and a basement (Fig. 2). Horizontal positions are given by x, and vertical positions are given by z.
Depth is zero (z = 0) on the surface, and z increases with depth. As shown in Fig. 2, the average boundary depth between the sedimentary layer and basement is defined as z = D (> 0). When the boundary surface at point Q(x′) deviates by h(x′) from the average boundary depth (Fig. 2), the gravity anomaly $g_z(x)$ at the point P(x) on the surface caused by this deviation is given by the following equation (e.g., Blakely 1996).

Fig. 2 Model of subsurface structure. A double-layer model consisting of a sedimentary layer and a basement is assumed here. D is the average depth of the stratum boundary, and h(x′) is the deviation of the boundary from the average. Here, the deviation is assumed to be very small, i.e., h(x′) << D.

$$g_z(x) = 2\gamma\,\Delta\rho \int_{-\infty}^{\infty}\int_{D}^{D+h(x')} \frac{z'}{(x-x')^2 + z'^2}\, dz'\, dx' \qquad (2)$$

where γ is the gravitational constant and Δρ is the density contrast between the sedimentary layer and basement. The integration over z in Eq. (2) is solved as:

$$\int_{D}^{D+h(x')} \frac{z'}{(x-x')^2+z'^2}\, dz' = \frac{1}{2}\log\left[\frac{(x-x')^2 + \left(D+h(x')\right)^2}{(x-x')^2 + D^2}\right] \qquad (3)$$

Here, if h(x′) is much smaller than D, namely h(x′) << D, then $(D+h)^2 = \{D[1+(h/D)]\}^2 \approx D^2(1+2h/D) = D^2 + 2Dh$, and Eq.
(3) can be rewritten as follows: $$ \int_{D}^{D+h(x')} \frac{z'}{(x-x')^2 + z'^2}\, dz' \approx \frac{1}{2}\log\!\left[\frac{(x-x')^2 + D^2 + 2Dh(x')}{(x-x')^2 + D^2}\right] = \frac{1}{2}\log\!\left[1 + \frac{2Dh(x')}{(x-x')^2 + D^2}\right] \tag{4} $$ In general, for −1 < ξ ≤ 1, log(1 + ξ) has the expansion (e.g., Gradshteyn and Ryzhik 2007) $$ \log(1+\xi) = \xi - \frac{1}{2}\xi^2 + \frac{1}{3}\xi^3 - \frac{1}{4}\xi^4 + \cdots = \sum_{p=1}^{\infty} (-1)^{p+1}\frac{\xi^p}{p} \tag{5} $$ The quantity ξ = 2Dh(x')/[(x − x')² + D²] in Eq. (4) is small because D >> h, so Eq. (5) can be used to derive a linear approximation of Eq. (4). Neglecting the higher-order terms in ξ, Eq. (3) or (4) becomes $$ \int_{D}^{D+h(x')} \frac{z'}{(x-x')^2 + z'^2}\, dz' \approx \frac{D\,h(x')}{(x-x')^2 + D^2} \tag{6} $$ Consequently, we obtain $$ g_z(x) \approx 2\gamma D\,\Delta\rho \int_{-\infty}^{\infty} \frac{h(x')}{(x-x')^2 + D^2}\, dx' \tag{7} $$ Here, I introduce a new function φ defined by $$ \varphi(x) = \frac{1}{x^2 + D^2} \tag{8} $$ so that Eq. (7) can be rewritten as $$ g_z(x) = 2\gamma D\,\Delta\rho \int_{-\infty}^{\infty} \varphi(x - x')\, h(x')\, dx' \tag{9} $$ This is a convolution, and applying the Fourier transform to Eq. (9) gives $$ G_z = 2\gamma D\,\Delta\rho\, \varPhi H \tag{10} $$ where G_z, Φ, and H are the Fourier transforms of g_z(x), φ(x), and h(x), respectively. As is well known, the Fourier transform of Eq.
(8) is (e.g., Blakely 1996; Gradshteyn and Ryzhik 2007) $$ \varPhi = \frac{\pi}{D} e^{-D|k|} \tag{11} $$ Here, |k| = |k_x| in the two-dimensional case (e.g., Blakely 1996), where k_x is the wavenumber in the x direction. I employ the Fourier transform F of a function f(x) defined as follows (e.g., Blakely 1996): $$ F = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \tag{12} $$ By Eq. (11), Eq. (10) is rewritten as $$ G_z = 2\pi\gamma\,\Delta\rho\, H e^{-D|k|} \tag{13} $$ This is the relationship between the gravity anomaly on the profile and the two-dimensional subsurface structure. Relationship between gravity anomaly and gravity gradient tensor As shown in the previous section, the gravity gradient tensor is given by the second derivatives of the gravity potential. The relationship between the gravity anomaly g_z and the gravity potential W is $$ W = -\int g_z\, dz \tag{14} $$ From Eq. (13), the gravity anomaly at a point P'(x) at an arbitrary height z above the surface (Fig. 2) is, in the Fourier domain, $$ G_z = 2\pi\gamma\,\Delta\rho\, H e^{-(D+z)|k|} \tag{15} $$ Integrating this equation with respect to z and substituting z = 0, we obtain the gravity potential at the surface. If the Fourier transform of the gravity potential is denoted by U, these calculations give U in terms of G_z as $$ U = \frac{1}{|k|} G_z \tag{16} $$ As the x component of the gravity anomaly is given by the first derivative of the gravity potential W in the x direction, g_x in the Fourier domain, G_x, follows from the differentiation formula in the Fourier domain (e.g., Blakely 1996): $$ G_x = i k_x U \tag{17} $$ From Eq.
(16), we obtain $$ G_x = \frac{i k_x}{|k|} G_z \tag{18} $$ and g_xx in the Fourier domain is given by $$ G_{xx} = \frac{-k_x^2}{|k|} G_z \tag{19} $$ We can obtain g_xx by applying the inverse Fourier transform to G_xx, and g_zz follows from the relationship g_zz = −g_xx. The remaining component g_zx (= g_xz) is given by $$ G_{zx} = G_{xz} = i k_x G_z \tag{20} $$ where G_zx and G_xz are the Fourier transforms of g_zx and g_xz. Although I have shown a technique for calculating the gravity gradient tensor in the Fourier domain, the tensor can also be calculated by a simple finite-difference method (e.g., Blakely 1996) applied to the gravity vector components g_x and g_z in the space domain. Relationship between subsurface structures and eigenvectors As indicated by Beiki and Pedersen (2010), the maximum eigenvector of the gravity gradient tensor points to the causative body of the gravity anomaly (Fig. 1a). They also pointed out that, in three-dimensional analyses, the minimum eigenvector of the tensor indicates the strike direction of structures such as dikes. Since the gravity gradient tensor has two perpendicular eigenvectors in two-dimensional analyses, it is expected that the minimum eigenvector will point to a low-density causative body or medium, given that the maximum eigenvector points to high-density causative bodies such as a dike in a low-density layer such as a sedimentary layer. To test this inference, I calculated the gravity gradient tensor on the profile caused by the model shown in Fig. 3 and investigated the dips of the maximum and minimum eigenvectors of the tensor. The model shown in Fig. 3 has a width of 0.25 km and a height of 2.0 km. Model of subsurface structures. Here, a rectangular causative body with a width of 0.25 km and a height of 2.0 km is assumed.
The densities of the medium and the causative body are ρ_2 and ρ_1, respectively. Each component of the gravity gradient tensor caused by two-dimensional structures such as the dike shown in Fig. 3 is given by Telford et al. (1990). Because the relationship between eigenvectors and structural boundaries is discussed extensively in this study, I employed the well-known formulas of Talwani et al. (1959), which give g_x and g_z for arbitrary two-dimensional structures bounded by a polygon. In this study, I obtained the g_zx (= g_xz) and g_xx components by numerical differentiation of g_z and g_x, and the g_zz component was given by g_zz = −g_xx. A simple finite-difference method (e.g., Blakely 1996) was employed for these numerical differentiations. In addition, the dip α of each eigenvector was calculated by $$ \alpha = \arctan\!\left(\frac{v_z}{v_x}\right) \tag{21} $$ where v_x and v_z are the x and z components of the eigenvector. Density structures and eigenvectors Figure 4a shows the distributions of the maximum (red) and minimum (blue) eigenvectors of the gravity gradient tensor caused by the model structure (Fig. 3) with a density contrast Δρ = ρ_1 − ρ_2 of 200 kg/m3. Figure 4b shows the corresponding distributions for a density contrast Δρ of −200 kg/m3. In each figure, the lengths of all the eigenvectors are the same. Eigenvectors of the gravity gradient tensor caused by the subsurface model shown in Fig. 3. The maximum eigenvector and minimum eigenvector are shown in red and blue, respectively. a Eigenvectors over the high-density causative body (light green). When the high-density causative body is embedded in the low-density medium, the maximum eigenvector of the gravity gradient tensor points to the causative body and the minimum eigenvector points to the low-density medium.
b Eigenvectors over the low-density causative body (light yellow). When the low-density causative body is embedded in the high-density medium, the minimum eigenvector of the tensor points to the causative body and the maximum eigenvector points to the high-density medium From Fig. 4a, it is found that the maximum eigenvector of the gravity gradient tensor points to a high-density causative body if the body is embedded in a low-density medium. In this case, the minimum eigenvector of the tensor points to the low-density medium around the high-density body. Conversely, the minimum eigenvector of the gravity gradient tensor points to a low-density causative body if the body is embedded in a high-density medium. In this case, the maximum eigenvector of the tensor points to the high-density medium around the low-density body. From these results, it was shown that, in two-dimensional analyses, the maximum eigenvector points to a high-density causative body and the minimum eigenvector points to a low-density causative body. In Fig. 4, some vectors point into the region z < 0, which means that α is negative. Since the structures exist underground, a negative α is not physically meaningful; consequently, I add π to α whenever α is negative. Fault types and eigenvectors In calderas and/or sedimentary basins, high-density and low-density materials are in contact with each other across normal faults and/or reverse faults. In gravity anomalies and gravity gradient tensors, differences in fault type appear as differences in density structure. Since the previous subsection showed that the behavior of each eigenvector depends on the density structure, I investigated the relationship between eigenvectors and fault type using simplified sedimentary basin models. Figure 5a is a simplified sedimentary basin model in which the sedimentary layer is in contact with the basement across normal faults, and Fig.
5b is a simplified sedimentary basin model in which the sedimentary layer is in contact with the basement across reverse faults. The density contrast between the sedimentary layer and the basement is assumed to be −200 kg/m3. Simplified sedimentary basin model. Light yellow and white areas indicate the sedimentary layer and the basement, respectively. a Sedimentary basin model in which the sedimentary layer is in contact with the basement across normal faults with 45° dip. b Sedimentary basin model in which the sedimentary layer is in contact with the basement across reverse faults with 45° dip In Fig. 6, I show the distributions of the maximum (red) and minimum (blue) eigenvectors of the gravity gradient tensor caused by these models. In each figure, the lengths of all the eigenvectors are the same, because we are interested in the fault dip and only the angle information is needed for this study. Eigenvectors of the gravity gradient tensor caused by the simplified sedimentary basin models shown in Fig. 5. The maximum eigenvector and minimum eigenvector are indicated in red and blue, respectively. a Eigenvectors over the sedimentary basin formed by normal faults. When the sedimentary layer is in contact with the basement across a normal fault, the dip of the maximum eigenvector follows the dip of the normal fault. b Eigenvectors over the sedimentary basin formed by reverse faults. When the sedimentary layer is in contact with the basement across a reverse fault, the dip of the minimum eigenvector follows the dip of the reverse fault From Fig. 6a, it is found that the dip of the maximum eigenvector of the gravity gradient tensor closely follows the dip of the normal fault. Where the basement lies near the surface, the maximum eigenvector points vertically, toward the high-density basement.
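Simulations of this kind can be mimicked without the Talwani et al. (1959) polygon formulas: the sketch below integrates the standard 2-D line-mass kernels over a buried dense rectangle by brute-force quadrature, differentiates g_x and g_z along the profile to obtain the tensor, and checks that the maximum eigenvector is vertical directly above the body, as reported for Fig. 4a. All parameter values (body geometry, grid, quadrature density) are my own illustrative choices, not the paper's:

```python
import numpy as np

G0 = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gx_gz_rect(xs, x1, x2, z1, z2, drho, nq=200):
    """g_x and g_z at the surface (z = 0) due to a 2-D rectangular body
    of density contrast drho, by brute-force quadrature of the standard
    line-mass kernels (z positive downward)."""
    xq = np.linspace(x1, x2, nq)
    zq = np.linspace(z1, z2, nq)
    dxq, dzq = xq[1] - xq[0], zq[1] - zq[0]
    XQ, ZQ = np.meshgrid(xq, zq)
    gx = np.empty_like(xs)
    gz = np.empty_like(xs)
    for i, xo in enumerate(xs):
        r2 = (xo - XQ)**2 + ZQ**2
        gz[i] = 2.0 * G0 * drho * np.sum(ZQ / r2) * dxq * dzq
        gx[i] = 2.0 * G0 * drho * np.sum((XQ - xo) / r2) * dxq * dzq
    return gx, gz

# 250 m wide, 2 km tall dense block (drho = +200 kg/m^3) under a profile
xs = np.linspace(-3000.0, 3000.0, 241)
dx = xs[1] - xs[0]
gx, gz = gx_gz_rect(xs, -125.0, 125.0, 500.0, 2500.0, 200.0)

# Tensor components by finite differences along the profile; g_zz = -g_xx
g_xx = np.gradient(gx, dx)
g_zx = np.gradient(gz, dx)

# Directly above the body centre the maximum eigenvector should be vertical
i0 = int(np.argmin(np.abs(xs)))
T = np.array([[g_xx[i0], g_zx[i0]], [g_zx[i0], -g_xx[i0]]])
w, v = np.linalg.eigh(T)                        # ascending eigenvalues
alpha = np.degrees(np.arctan2(v[1, -1], v[0, -1])) % 180.0
assert abs(alpha - 90.0) < 1.0
```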
In the sedimentary layer area, the effect of the high-density basement is weak while the effect of the low-density sedimentary layer is strong; therefore, the minimum eigenvector points vertically, toward the low-density sediment, and the maximum eigenvector points horizontally. When the boundary is a reverse fault, it is found from Fig. 6b that the dip of the minimum eigenvector of the gravity gradient tensor indicates the dip of the fault well. The maximum eigenvector over the basement points vertically, toward the high-density basement, and the minimum eigenvector points horizontally. Since the low-density sediment lies near the surface in the sedimentary layer area, the minimum eigenvector there points vertically. From these results, it is concluded that if the structural boundary is a normal fault, its dip can be estimated from the dip of the maximum eigenvector of the gravity gradient tensor, and if the boundary is a reverse fault, its dip can be estimated from the dip of the minimum eigenvector of the tensor. In addition, in areas away from the boundary, the maximum eigenvector over the basement and the minimum eigenvector over the sediment point vertically, and the maximum eigenvector over the sediment and the minimum eigenvector over the basement point horizontally, regardless of whether the boundary is a normal fault or a reverse fault. Subsurface structures and eigenvectors Simple numerical simulations showed that the maximum eigenvector of the gravity gradient tensor points to a high-density causative body and that the minimum eigenvector points to a low-density causative body. In addition, the dip of the maximum eigenvector of the tensor closely follows the dip of a normal fault, and the dip of the minimum eigenvector closely follows the dip of a reverse fault.
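The dip convention used throughout (α = arctan(v_z/v_x), shifted by π when negative) can be wrapped in a small helper for both eigenvectors of the profile tensor. This is a sketch with my own naming, not the paper's code:

```python
import numpy as np

def eigen_dips(g_xx, g_xz):
    """Dips (degrees, clockwise from the x-axis, in [0, 180)) of the
    maximum and minimum eigenvectors of the 2-D gravity gradient tensor
    [[g_xx, g_xz], [g_xz, -g_xx]]; negative dips are shifted by pi."""
    G = np.array([[g_xx, g_xz], [g_xz, -g_xx]])
    vals, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    dips = []
    for i in (1, 0):                        # max eigenvector, then min
        a = np.arctan2(vecs[1, i], vecs[0, i])
        if a < 0:
            a += np.pi                      # keep the dip in [0, pi)
        dips.append(np.degrees(a))
    return dips[0], dips[1]

# For [[0, 1], [1, 0]] the eigenvectors lie at 45 and 135 degrees.
a_max, a_min = eigen_dips(0.0, 1.0)
assert np.isclose(a_max, 45.0) and np.isclose(a_min, 135.0)
```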
As mentioned above, Beiki and Pedersen (2010) have already pointed out that the maximum eigenvector of the gravity gradient tensor points to a high-density causative body. The result in Fig. 4a confirms that their conclusion holds for two-dimensional analyses as well. Where the basement lies near the surface, the maximum eigenvector points vertically. This property is also consistent with Beiki and Pedersen (2010), and it supports the idea of treating the basement as an aggregate of high-density prisms (Fig. 1b) suggested by Kusumoto (2015, 2016b). As to why the dip of the normal fault is given by the dip of the maximum eigenvector of the gravity gradient tensor, I consider that the lower part of the boundary structure (the fault) lies within the low-density layer to a greater extent than its upper part. Because the gravity gradient tensor is most sensitive to subsurface structures near the surface, the structure shown in Fig. 5a is effectively a high-density body intruding into the low-density layer, and the dip of the normal fault is therefore given by the dip of the maximum eigenvector. I believe Kusumoto (2015, 2016a, 2016b) obtained results consistent with seismic surveys because he estimated fault dips in normal fault regions from the maximum eigenvector of the tensor. On the other hand, while the maximum eigenvector points to high-density causative bodies embedded in a low-density medium, the minimum eigenvector points to the low-density medium or to low-density causative bodies embedded in a high-density medium. Beiki and Pedersen (2010) did not explicitly consider analyses of low-density causative bodies using eigenvectors.
Since anomalies caused by low-density bodies must be analyzed in the field, the result that the minimum eigenvector points to low-density bodies should play an important role in subsurface structure estimation, although it was obtained from two-dimensional analysis. In addition, it was found that the dip of the minimum eigenvector of the gravity gradient tensor gives the dip of the reverse fault. As to why, I consider that the lower part of the boundary structure (the fault) lies within the high-density layer to a greater extent than its upper part. That is, because this structure is effectively a low-density body intruding into the high-density layer, the dip of the reverse fault is given by the dip of the minimum eigenvector of the gravity gradient tensor. As is clear from the results and discussion obtained in this study, selecting a suitable eigenvector for estimating the fault dip is important. If the study area is not too wide and prior geological information is available, the eigenvector to be employed for estimating the fault dip correctly can be selected on the basis of that information. If the study area is dominated by normal faults, the maximum eigenvector of the gravity gradient tensor should be employed for estimating the fault dip; if it is dominated by reverse faults, the minimum eigenvector should be employed. In three-dimensional studies of high-density causative bodies, it has been pointed out that the minimum eigenvector is parallel to the strike direction of the structure (Beiki and Pedersen 2010; Beiki 2013). In the two-dimensional analyses, however, the strike direction of the structure is perpendicular to the x- and z-axes and does not appear in the analyses.
As it is difficult to directly compare the properties of the minimum eigenvector obtained in different dimensions, it will be necessary in the future to discuss the detailed properties of the minimum eigenvector. Application to field data As an application of the techniques, I estimated the dip of the Kurehayama Fault located in Toyama, Japan. The Kurehayama Fault is a reverse fault located at the center of the Toyama basin, and it strikes in the NNE-SSW direction (Fig. 7). The length of the fault is about 22 km, and the fault dip is about 45° (e.g., The Headquarters for Earthquake Research Promotion 2008; Toyama City 2013). Toyama City has carried out seismic surveys and dense gravity surveys crossing this fault (Toyama City 2013). Toyama City (2013) set three profiles crossing the Kurehayama Fault, and dense gravity surveys with 50 m station spacing were conducted on these profiles, although a spacing of several hundred meters is usually employed for such surveys. Here, I used the gravity anomaly data on the profile located along the shoreline. Figure 8 shows the Bouguer anomaly, for which a Bouguer density of 2260 kg/m3 was assumed (Toyama City 2013). The label "Kurehayama Fault" in this figure indicates the approximate fault location. Location map of the study area. The Kurehayama Fault is a reverse fault located in the center of the Toyama Basin, Toyama Prefecture, Japan. Its location has been estimated from topographic, geological, and geophysical data. The red and brown lines denote the locations of the Kurehayama Fault estimated by Toyama City (Toyama City 2013) and by The Headquarters for Earthquake Research Promotion (The Headquarters for Earthquake Research Promotion 2008), respectively. The blue line a - b indicates the dense gravity survey profile, which has gravity observation points at about 50 m intervals Bouguer anomalies on the profile (after Toyama City 2013). A Bouguer density of 2260 kg/m3 is assumed.
The "Kurehayama Fault" label in this figure indicates the approximate fault location given by Toyama City (Toyama City 2013). The gravity anomaly is given in milligals, where 1 mGal = 10−5 m/s2 I applied the techniques to the Bouguer anomaly and obtained the gravity gradient tensor shown in Fig. 9. Figure 10 shows the distributions of the maximum eigenvector (red) and the minimum eigenvector (blue) of the gravity gradient tensor. Since the Kurehayama Fault is a reverse fault, I focus on the dip of the minimum eigenvector. From Fig. 10, the dip α of the Kurehayama Fault is found to be about 138°. Since the angle α is measured clockwise from the surface (x-axis), this corresponds to a reverse fault dipping at 42°. This fault dip is consistent with conventional data. Gravity gradient tensor (g_xx, g_xz (= g_zx), g_zz) on the profile. These are estimated from the Bouguer anomalies on the profile shown in Fig. 8. The g_xx and g_xz components are calculated by a finite-difference method applied to the gravity vector components g_x and g_z in the space domain. The "Kurehayama Fault" label in this figure indicates the approximate fault location given by Toyama City (Toyama City 2013). The gravity gradient tensor is given in E (Eötvös), where 1 E = 0.1 mGal/km Eigenvectors of the gravity gradient tensor on the profile shown in Fig. 7. The maximum eigenvector and minimum eigenvector are indicated in red and blue, respectively. The dips of the eigenvectors are measured clockwise from the x-axis toward the z-axis. Since the Kurehayama Fault is known to be a reverse fault, we focus on the minimum eigenvector of the tensor. The "Kurehayama Fault" label in this figure indicates the approximate fault location given by Toyama City (Toyama City 2013). The average dip of the minimum eigenvector in the Kurehayama Fault zone, shown by the rectangle with dashed lines, is about 138°, and this angle indicates that the Kurehayama Fault would be a reverse fault dipping at 42°.
In addition, the maximum eigenvectors on the right side of the "Kurehayama Fault" in this figure point vertically, and the minimum eigenvectors on the left side of the "Kurehayama Fault" point vertically The estimated fault dip is likely the dip near the surface, because the method employs the gravity gradient tensor, which is most sensitive to subsurface structures near the surface. Since it is important to know quantitatively to which depth the estimated fault dip corresponds, it will be necessary in the future to develop a technique for estimating the depth of the estimated dip, or the dip at an arbitrary depth. In this study, I presented techniques for estimating the gravity gradient tensor from gravity anomalies on a profile and for estimating the fault dip from the eigenvectors of the observed or calculated gravity gradient tensor on the profile, and I investigated their properties by numerical simulations. The simulations showed that the maximum eigenvector of the tensor points to a high-density causative body and that the dip of the maximum eigenvector closely follows the dip of a normal fault. In addition, where the basement lies near the surface, the maximum eigenvector points vertically. These properties have already been pointed out in previous studies, and the results shown here confirm that they hold for two-dimensional analyses as well. On the other hand, it was found that the minimum eigenvector of the tensor points to a low-density causative body and that the dip of the minimum eigenvector closely follows the dip of a reverse fault. Since eigenvector analyses of anomalies caused by low-density causative bodies have not been discussed explicitly in previous studies, these results should play an important role in future estimations of subsurface structures.
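The profile workflow summarized above, building the tensor from g_z alone via the Fourier-domain relations and then reading a fault dip from an eigenvector, can be sketched end-to-end. Note that −k_x²/|k| reduces to −|k| because |k| = |k_x| on the profile; the grid values and synthetic test profile are my own illustrative choices:

```python
import numpy as np

def tensor_from_gz(g_z, dx):
    """Profile tensor from g_z alone via the Fourier-domain relations
    G_xx = -(k_x^2/|k|) G_z = -|k| G_z and G_zx = i k_x G_z (|k| = |k_x|),
    with g_zz = -g_xx from Laplace's equation."""
    k = 2.0 * np.pi * np.fft.fftfreq(g_z.size, d=dx)
    Gz = np.fft.fft(g_z)
    g_xx = np.fft.ifft(-np.abs(k) * Gz).real
    g_zx = np.fft.ifft(1j * k * Gz).real
    return g_xx, g_zx, -g_xx

def dip_of_min_eigenvector(g_xx, g_zx):
    """Dip (degrees, clockwise from the x-axis, in [0, 180)) of the
    minimum eigenvector of [[g_xx, g_zx], [g_zx, -g_xx]]."""
    _, vecs = np.linalg.eigh(np.array([[g_xx, g_zx], [g_zx, -g_xx]]))
    a = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0]))  # min eigenvector first
    return a % 180.0

# Synthetic check: g_zx from the FFT recipe should match the analytic
# derivative of the smooth test profile g_z = D/(x^2 + D^2).
N, dx, D = 4096, 0.05, 1.0
x = (np.arange(N) - N // 2) * dx
g_z = D / (x**2 + D**2)
g_xx, g_zx, g_zz = tensor_from_gz(g_z, dx)
assert np.allclose(g_zx, -2.0 * D * x / (x**2 + D**2)**2, atol=1e-3)

# A minimum-eigenvector dip of 138 deg corresponds to a reverse fault
# dipping 180 - 138 = 42 deg from horizontal, as in the field example.
assert 180.0 - 138.0 == 42.0
```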
From these results, it was found that the eigenvector of the gravity gradient tensor to be used for estimating fault dips is determined by the fault type, and the fault dip can be estimated correctly if a suitable eigenvector is chosen on the basis of prior information. As an application of these suggestions, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a fault dip of about 42° from the dip of the minimum eigenvector of the gravity gradient tensor, because the fault is a reverse fault. This dip is consistent with existing geological information. Since the analysis technique shown in this study requires neither complex calculations nor long computation times, it will be an effective technique for analyzing high-resolution data obtained densely not only by dense gravity surveys for fault research but also by airborne gravity or gravity gradiometry surveys. Abrahamson NA, Somerville P (1996) Effects of the hanging wall and footwall on ground motions recorded during the Northridge earthquake. Bull Seism Soc Am 86:S93–S99 Beiki M (2010) Analytic signals of gravity gradient tensor and their application to estimate source location. Geophysics 75:I59–I74 Beiki M (2013) TSVD analysis of Euler deconvolution to improve estimating magnetic source parameters: an example from the Asele area, Sweden. J Appl Geophys 90:82–91 Beiki M, Pedersen LB (2010) Eigenvector analysis of gravity gradient tensor to locate geologic bodies. Geophysics 75:I37–I49 Beiki M, Pedersen LB (2011) Window constrained inversion of gravity gradient tensor data using dike and contact models. Geophysics 76:I59–I70 Blakely RJ (1996) Potential theory in gravity and magnetic applications. Cambridge University Press, Cambridge Blakely R, Simpson RW (1986) Approximating edges of source bodies from magnetic or gravity anomalies.
Geophysics 51:1494–1498 Braga MA, Endo I, Galbiatti HF, Carlos DU (2014) 3D full tensor gradiometry and Falcon systems data analysis for iron ore exploration: Bau Mine, Quadrilatero Ferrifero, Minas Gerais, Brazil. Geophysics 79:B213–B220 Cevallos C (2014) Automatic generation of 3D geophysical models using curvatures derived from airborne gravity gradient data. Geophysics 79:G49–G58 Chowdhury PR, Cevallos C (2013) Geometric shapes derived from airborne gravity gradiometry data: new tools for the explorationist. Lead Edge 32:1468–1474 Cooper GRJ (2012) The removal of unwanted edge contours from gravity datasets. Expl Geophys 44:42–47 Cooper GRJ, Cowan DR (2006) Enhancing potential field data using filters based on the local phase. Comp Geosci 32:1585–1591 Dransfield M (2010) Conforming Falcon gravity and the global gravity anomaly. Geophys Prospect 58:469–483 Elkins TA (1951) The second derivative method of gravity interpretation. Geophysics 16:29–50 Evjen HM (1936) The place of the vertical gradient in gravitational interpretations. Geophysics 1:127–137 Ferreira FJF, de Souza J, de B e S Bongiolo A, de Castro LG (2013) Enhancement of the total horizontal gradient of magnetic anomalies using the tilt angle. Geophysics 78:J33–J41 Gradshteyn IS, Ryzhik IM (2007) Table of integrals, series, and products, 7th edn. Academic Press Elsevier, Oxford Inoue N, Tanaka Y, Itoh H, Iwano S, Kitada N, Fukuda Y, Takemura K (2004) Density of sediment in Kyoto basin inferred from 2D gravity analysis along Horikawa-Oguraike and Kuzebashi seismic survey lines. Zisin 2(57):45–54 (in Japanese with English abstract) Irikura K, Miyake H (2011) Recipe for predicting strong ground motion from crustal earthquake scenarios. Pure Appl Geophys 168:85–104. doi:10.1007/s00024-010-0150-9 Iwano S, Fukuda Y, Ishiyama T (2001) An estimation of fault related structures by means of one-dimensional gravity surveys - case studies at the Katagihara Fault and the Fumotomura Fault.
J Geog 110:44–57 (in Japanese with English abstract) Jekeli C (1988) The gravity gradiometer survey system (GGSS). EOS Trans AGU 69:105 Koenderink JJ, van Doorn AJ (1992) Surface shape and curvature scales. Im Vis Comp 10:557–564 Kusumoto S (2015) Estimation of dip angle of fault or structural boundary by eigenvectors of gravity gradient tensors. Butsuri-Tansa 68:277–287 (in Japanese with English abstract) Kusumoto S (2016a) Structural analysis of caldera and buried caldera by semi-automatic interpretation techniques using gravity gradient tensor: a case study in central Kyushu Japan. In: Nemeth K (ed) Updates in Volcanology - From volcano modelling to volcano geology. InTech, Rijeka Kusumoto S (2016b) Dip distribution of Oita-Kumamoto Tectonic Line located in central Kyushu, Japan, estimated by eigenvectors of gravity gradient tensor. Earth Plan Space 68:153. doi:10.1186/s40623-016-0529-7 Li X (2015) Curvature of a geometric surface and curvature of gravity and magnetic anomalies. Geophysics 80:G15–G26 Ma G (2013) Edge detection of potential field data using improved local phase filter. Expl Geophys 44:36–41 Martinez C, Li Y, Krahenbuhl R, Braga MA (2013) 3D inversion of airborne gravity gradiometry data in mineral exploration: a case study in the Quadrilatero Ferrifero, Brazil. Geophysics 78:B1–B11 Mickus KL, Hinojosa JH (2001) The complete gravity gradient tensor derived from the vertical component of gravity: a Fourier transform technique. J Appl Geophys 46:159–174 Miller HG, Singh V (1994) Potential field tilt—a new concept for location of potential field sources. J Appl Geophys 32:213–217 Robert A (2001) Curvature attributes and their application to 3D interpreted horizons. First Break 19:85–99 Sertcelik I, Kafadar O (2012) Application of edge detection to potential field data using eigenvalue analysis of structure tensor. 
J Appl Geophys 84:86–94 Takemura M, Moroi T, Yashiro K (1998) Characteristics of strong ground motions as deduced from spatial distributions of damages due to the destructive inland earthquakes from 1891 to 1995 in Japan. Zisin 2(50):485–505 (in Japanese with English abstract) Talwani M, Worzel JL, Landisman M (1959) Rapid gravity computations for two-dimensional bodies with application to the Mendocino submarine fracture zone. J Geophys Res 64:49–59 Telford WM, Geldart LP, Sheriff RE (1990) Applied geophysics. Cambridge University Press, Cambridge The Headquarters for Earthquake Research Promotion (2008) Evaluations of Tonami Fault zone and Kurehayama Fault zone. In: The Headquarters for Earthquake Research Promotion web site. http://www.jishin.go.jp/main/chousa/katsudansou_pdf/56_tonami_kureha_2.pdf (in Japanese). Accessed 21 Nov 2016 Torge W (1989) Gravimetry. Walter de Gruyter, Berlin Toyama City (2013) Research report on Kurehayama Fault (2). Toyama-shi, Toyama (in Japanese) Tsuboi C, Kato M (1952) The first and second vertical derivatives of gravity. J Phys Earth 1:95–96 Zhang C, Mushayandebvu MF, Reid AB, Fairhead JD, Odegard ME (2000) Euler deconvolution of gravity tensor gradient data. Geophysics 65:512–520 Zhang X, Yu P, Tang R, Xiang Y, Zhao C-J (2014) Edge enhancement of potential field data using an enhanced tilt angle. Expl Geophys 46:276–283. doi:10.1071/EG13104 Zhou W, Du X, Li J (2013) The limitation of curvature gravity gradient tensor for edge detection and a method for overcoming it. J Appl Geophys 98:237–242 The author is most grateful to the two anonymous reviewers for their constructive reviews and comments on the manuscript. In addition, the author is most grateful to Yuichi Hayakawa for his editorial advice and cooperation. The manuscript was improved by these reviewers' comments and suggestions. This work was supported partially by JSPS (Japan Society for the Promotion of Science) KAKENHI Grant Numbers 15K14274, 16H05651, 17K01325.
The author is grateful to JSPS. This work was supported partially by JSPS KAKENHI Grant Numbers 15K14274, 16H05651, and 17K01325. SK planned this study and conducted all the calculations and discussion. He also drafted this manuscript. SK is an associate professor at the University of Toyama. The author declares no competing interests. Graduate School of Science and Engineering for Research (Science), University of Toyama, 3910 Gofuku, Toyama, 930-8555, Japan Shigekazu Kusumoto Correspondence to Shigekazu Kusumoto. Kusumoto, S. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type. Prog. in Earth and Planet. Sci. 4, 15 (2017) doi:10.1186/s40645-017-0130-0 Fault dip Gravity gradient tensor Normal fault Reverse fault Kurehayama Fault 3. Human geosciences High-definition topographic and geophysical data in geosciences
Honam Mathematical Journal (호남수학학술지), pISSN 1225-293X. The Honam Mathematical Society (호남수학회).
EVALUATION OF CERTAIN ALTERNATING SERIES
Choi, Junesang (Department of Mathematics, Dongguk University)
https://doi.org/10.5831/HMJ.2014.36.2.263
Abstract: Ever since Euler solved the so-called Basler problem of ${\zeta}(2)=\sum_{n=1}^{\infty}1/n^2$, numerous evaluations of ${\zeta}(2n)$ ($n\in\mathbb{N}$) as well as ${\zeta}(2)$ have been presented. Very recently, Ritelli [61] used a double integral to evaluate ${\zeta}(2)$. Modifying mainly Ritelli's double integral, here we aim at evaluating certain interesting alternating series.
Keywords: Riemann Zeta function; Basler problem; Bernoulli numbers; double integrals; residue theorem
References:
H. Tsumura, An elementary proof of Euler's formula for $\zeta(2m)$, Amer. Math. Monthly 111 (2004), 430-431. https://doi.org/10.2307/4145270
E. C. Titchmarsh, A series inversion formula, Proc. London Math. Soc. (2) 26 (1926), 1-11.
F. G. Tricomi, Sulla somma delle inverse delle terze e quinte potenze dei numeri naturali, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 47 (1969), 16-18.
G. P. Tolstov, Fourier Series (translated from the Russian by Richard A. Silverman), Dover Publications, Inc., New York, 1976.
R. S. Underwood, An expression for the summation $\sum_{m=1}^{n} m^p$, Amer. Math. Monthly 35 (1928), 424-428. https://doi.org/10.2307/2299769
G. T. Williams, A new method of evaluating $\zeta(2n)$, Amer. Math. Monthly 60 (1953), 19-25. https://doi.org/10.2307/2306473
K. S. Williams, On $\sum_{n=1}^{\infty} 1/n^{2k}$, Math. Mag. 44 (1971), 273-276. https://doi.org/10.2307/2688638
A. M. Yaglom, I. M. Yaglom, An elementary derivation of the formulas of Wallis, Leibniz and Euler for the number $\pi$ (Russian), Uspehi Mat. Nauk 8, 5 (57) (1953), 181-187.
A. M. Yaglom, I. M. Yaglom, Challenging Mathematical Problems with Elementary Solutions, Vol. II, translated by James McCawley, Jr., Holden Day, San Francisco, 1967; Dover Publications, Inc., New York, 1987.
D. Zagier, Values of Zeta functions and their applications, First European Congress of Mathematics, Vol. II (Paris, 1992) (A. Joseph, F. Mignot, F. Murat, B. Prum, and R. Rentschler, Editors), pp. 497-512, Progress in Mathematics 120, Birkhauser, Basel, 1994.
G. B. M. Zerr, Summation of series, Amer. Math. Monthly 5 (1898), 128-135. https://doi.org/10.2307/2968590
N. Y. Zhang, K. S. Williams, Application of the Hurwitz zeta function to the evaluation of certain integrals, Canad. Math. Bull. 36 (1993), 373-384. https://doi.org/10.4153/CMB-1993-051-6
W. Zudilin, One of the numbers $\zeta(5)$, $\zeta(7)$, $\zeta(9)$, $\zeta(11)$ is irrational, Russian Math. Surveys 56 (2001), 774-776. https://doi.org/10.1070/RM2001v056n04ABEH000427
A. Zygmund, Series, in: Encyclopaedia Britannica 20 (1963), 363-367.
E. L. Stark, Another proof of the formula $\sum_{k=1}^{\infty} 1/k^2 = \pi^2/6$, Amer. Math. Monthly 76 (1969), 552-553. https://doi.org/10.2307/2316976
H. M. Srivastava, Some families of rapidly convergent series representations for the zeta functions, Taiwanese J. Math. 4 (2000), 569-599. https://doi.org/10.11650/twjm/1500407293
H. M. Srivastava and J. Choi, Zeta and q-Zeta Functions and Associated Series and Integrals, Elsevier Science Publishers, Amsterdam, London, and New York, 2012.
H. M. Srivastava, M. L. Glasser and V. S. Adamchik, Some definite integrals associated with the Riemann Zeta function, Zeitschr. Anal. Anwendungen 19 (2000), 831-846. https://doi.org/10.4171/ZAA/982
E. L. Stark, $1 - 1/4 + 1/9 - + \cdots = \pi^2/12$, Praxis Math. 12 (1970), 1-3.
E. L. Stark, A new method of evaluating the sums of $\sum_{k=1}^{\infty} (-1)^{k+1} k^{-2p}$, $p=1,2,3,\ldots$ and related series, Elem. Math. 27 (1972), 32-34.
E. L. Stark, The series $\sum_{k=1}^{\infty} k^{-s}$, $s=2,3,4,\ldots$, once more, Math. Mag. 47 (1974), 197-202. https://doi.org/10.2307/2689209
G. Stoica, A recurrence formula in the study of the Riemann Zeta function, Stud. Cerc. Mat. (3) 39 (1987), 261-264.
T. J. Osler, Finding $\zeta(2p)$ from a product of sines, Amer. Math. Monthly 111 (2004), 52-54. https://doi.org/10.2307/4145017
E. R. Hansen, A Table of Series and Products, Prentice-Hall, Englewood Cliffs, New Jersey, 1975.
E. H. Neville, A trigonometrical inequality, Proc. Cambridge Philos. Soc. 47 (1951), 629-632. https://doi.org/10.1017/S0305004100027043
P. K. Ojha, I. N. Singh, A discussion on two methods for finding the value of "The Riemann Zeta Function" $\zeta(s)$ where $s = 2$, Math. Education 33(1) (1999), 24-28.
I. Papadimitriou, A simple proof of the formula $\sum_{k=1}^{\infty} k^{-2} = \pi^2/6$, Amer. Math. Monthly 80 (1973), 424-425. https://doi.org/10.2307/2319092
A. van der Poorten, A proof that Euler missed... Apery's proof of the irrationality of $\zeta(3)$, Math. Intelligencer 1 (1979), 195-203. https://doi.org/10.1007/BF03028234
E. Popovici, G. Costovici, C. Popovici, The calculation of sums of harmonic series of even power, Bul. Inst. Politehn. Iasi, Sect. 1, 33 (1987), 9-11.
D. Ritelli, Another proof of $\zeta(2) = \frac{\pi^2}{6}$ using double integrals, Amer. Math. Monthly 120(7) (2013), 642-645. https://doi.org/10.4169/amer.math.monthly.120.07.642
T. Rivoal, La fonction zeta de Riemann prend une infinite de valeurs irrationnelles aux entiers impairs, C. R. Acad. Sci. Paris 331, Serie I (2000), 267-270.
N. Robbins, Revisiting an old favorite, $\zeta(2m)$, Math. Mag. 72 (1999), 317-319.
E. E. Scheufens, From Fourier series to rapidly convergent series for Zeta(3), Math. Mag. 84 (2011), 26-32. https://doi.org/10.4169/math.mag.84.1.026
I. Skau and E. S. Selmer, Noen anvendelser Finn Holmes methode for beregning av $\sum_{k=1}^{\infty} \frac{1}{k^2}$, Nordisk Mat. Tidskr. 19 (1971), 120-124.
I. Song, A recursive formula for even order harmonic series, J. Comput. Appl. Math. 21 (1988), 251-256. https://doi.org/10.1016/0377-0427(88)90274-9
O. Spiess, Die Summe der reziproken Quadratzahlen, in Festschrift zum 60. Geburtstag von Prof. Dr. Andreas Speiser (L. V. Ahlfors et al., Editors), pp. 66-86, Fussli, Zurich, 1955.
H. M. Srivastava, Some rapidly converging series for $\zeta(2n+1)$, Proc. Amer. Math. Soc. 127(2) (1999), 385-396. https://doi.org/10.1090/S0002-9939-99-04945-X
J. Choi, A. K. Rathie, H. M. Srivastava, Some hypergeometric and other evaluations of $\zeta(2)$ and allied series, Appl. Math. Comput. 104 (1999), 101-108. https://doi.org/10.1016/S0096-3003(98)10082-6
J. Choi, A proof of Euler's formula $\sum_{n=1}^{\infty} 1/n^2 = \pi^2/6$, East Asian Math. J. 20(2) (2004), 127-129.
J. Choi, Rapidly converging series for $\zeta(2n+1)$ from Fourier series, Abs. Appl. Anal. Vol. 2014, Article ID 457620, 9 pages. https://doi.org/10.1155/2014/457620
J. Choi, A. K. Rathie, An evaluation of $\zeta(2)$, Far East J. Math. Sci. 5 (1997), 393-398.
W. Dunham, Euler, The Master of Us All, Mathematical Association of America, Washington, DC, 1999.
J. Duoandikoetxea, A sequence of polynomials related to the evaluation of the Riemann Zeta function, Math. Mag. 80 (2007), 38-45.
T. Estermann, Elementary evaluation of $\zeta(2n)$, J. London Math. Soc. 22 (1947), 10-13.
Euler at 300: An Appreciation, edited by R. E. Bradley, L. A. D'Antonio, and C. E. Sandifer, Mathematical Association of America, Washington, DC, 2007.
L. Euler, De summis serierum reciprocarum, Comment. acad. sci. Petropolit. 7 (1734/35), (1740), 123-134 = Opera Omnia, Ser. 1, Bd. 14, 73-86, Leipzig-Berlin, 1924.
L. Euler, De seriebus quibusdam considerationes, Comment. acad. sci. Petropolit. 12 (1740), (1750), 53-96 = Opera Omnia, Ser. 1, Bd. 14, 407-462.
L. Euler, Introduction to Analysis of the Infinite, Book I (translated by John D. Blanton), Springer-Verlag, New York, Berlin, Heidelberg, London, Paris, and Tokyo, 1988.
D. P. Giesy, Still another elementary proof that $\sum 1/k^2 = \pi^2/6$, Math. Mag. 45 (1972), 148-149. https://doi.org/10.2307/2687871
J. V. Grabiner, Who gave you the epsilon? Cauchy and the origins of rigorous calculus, Amer. Math. Monthly 90 (1983), 185-194. https://doi.org/10.2307/2975545
I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series, and Products (Corrected and Enlarged edition prepared by A. Jeffrey), Academic Press, New York, 1980; Sixth edition, 2000.
F. Beukers, E. Calabi, J. Kolk, Sums of generalized harmonic series and volumes, Nieuw Arch. Wiskd. 3 (1993), 217-224.
A. Benyi, Finding the sums of harmonic series of even order, College Math. J. 36 (2005), 44-48. https://doi.org/10.2307/30044817
B. C. Berndt, Elementary evaluation of $\zeta(2n)$, Math. Mag. 48 (1975), 148-154. https://doi.org/10.2307/2689696
B. C. Berndt, Ramanujan's Notebooks, Part I, Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, 1985.
J. M. Borwein, P. B. Borwein, Pi and the AGM, John Wiley & Sons, Inc., 1987.
C. B. Boyer, A History of Mathematics, Princeton University Press, Princeton, John Wiley & Sons, Inc., 1968.
J. W. Brown and R. V. Churchill, Complex Variables and Applications, eighth edition, McGraw-Hill International Edition, 2009.
P. L. Butzer and M. Hauss, Integral and rapidly converging series representations of the Dirichlet L-functions $L_1(s)$ and $L_{-4}(s)$, Atti Sem. Mat. Fis. Univ. Modena XL (1992), 329-359.
R. Calinger, Leonhard Euler: The first St. Petersburg years (1727-1741), Historia Mathematica 23 (1996), 121-166. https://doi.org/10.1006/hmat.1996.0015
L. Carlitz, A recurrence formula for $\zeta(2n)$, Proc. Amer. Math. Soc. 12 (1961), 991-992.
R. Chapman, Evaluating $\zeta(2)$. www.math.titech.ac.jp/-inoue/ACII-05-holder/zeta2.pdf
C.-P. Chen and J. Choi, Further new evaluation of $\zeta(2n)$, submitted for publication, 2013.
X. Chen, Recursive formulas for $\zeta(2k)$ and $L(2k-1)$, College Math. J. 26 (1995), 372-376. https://doi.org/10.2307/2687382
M. P. Chen, An elementary evaluation of $\zeta(2k)$, Chinese J. Math. 3 (1975), 11-15.
Y. J. Cho, J. Choi, M. Jung, Note on an evaluation of $\zeta(p)$, Indian J. Pure Appl. Math. 37(5) (2006), 259-263.
B. R. Choe, An elementary proof of $\sum_{n=1}^{\infty} 1/n^2 = \pi^2/6$, Amer. Math. Monthly 94 (1987), 662-663. https://doi.org/10.2307/2322220
M. Dalai, How would Riemann evaluate $\zeta(2n)$?, Amer. Math. Monthly 120(2) (2013), 169-171. https://doi.org/10.4169/amer.math.monthly.120.02.169
K. Knopp, Theory and Application of Infinite Series (second English edition, translated from the second German edition revised in accordance with the fourth German edition by R. C. H. Young), Hafner Publ. Company, New York, 1951.
H. T. Kuo, A recurrence formula for $\zeta(2n)$, Bull. Amer. Math. Soc. 55 (1949), 573-574. https://doi.org/10.1090/S0002-9904-1949-09247-9
L. Lewin, Polylogarithms and Associated Functions, Elsevier (North-Holland), New York, London, and Amsterdam, 1981.
T. Marshall, A short proof of $\zeta(2) = \pi^2/6$, Amer. Math. Monthly 117 (2010), 352-353. https://doi.org/10.4169/000298910X480810
Y. Matsuoka, An elementary proof of the formula $\sum_{k=1}^{\infty} 1/k^2 = \pi^2/6$, Amer. Math. Monthly 68 (1961), 485-487. https://doi.org/10.2307/2311110
I. P. Natanson, Constructive Function Theory, Vol. I, Uniform Approximation, New York, 1964.
M. Abramowitz, I. A. Stegun (Editors), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Applied Mathematics Series 55, ninth printing, National Bureau of Standards, Washington, D.C., 1972.
E. De Amo, M. Diaz Carrillo, J. Fernandez-Sanchez, Another proof of Euler's formula for $\zeta(2k)$, Proc. Amer. Math. Soc. 139 (2011), 1441-1444. https://doi.org/10.1090/S0002-9939-2010-10565-8
R. Apery, Irrationalite de $\zeta(2)$ et $\zeta(3)$, in "Journees Arithmetiques de Luminy" (Colloq. Internat. CNRS, Centre Univ. Luminy, Luminy, 1978), pp. 11-13, Asterisque 61 (1979), Soc. Math. France, Paris.
T. M. Apostol, Another elementary proof of Euler's formula for $\zeta(2n)$, Amer. Math. Monthly 80 (1973), 425-431. https://doi.org/10.2307/2319093
T. M. Apostol, A proof that Euler missed: Evaluating $\zeta(2)$ the easy way, Math. Intelligencer 5 (1983), 59-60.
R. Ayoub, Euler and the Zeta Function, Amer. Math. Monthly 81 (1974), 1067-1086. https://doi.org/10.2307/2319041
E. Balanzario, Metodo elemental para la evaluacion de la funcion zeta de Riemann en los enteros pares, Miscelanea Mat. 33 (2001), 31-41.
L. Holst, A proof of Euler's infinite product for the sine, Amer. Math. Monthly 119 (2012), 518-521. https://doi.org/10.4169/amer.math.monthly.119.06.518
J. D. Harper, Another simple proof of $1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}$, Amer. Math. Monthly 110 (2003), 540-541. https://doi.org/10.2307/3647912
J. Havil, Gamma: Exploring Euler's Constant, Princeton University Press, Princeton, 2003.
F. Holme, En enkel beregning av $\sum_{k=1}^{\infty} \frac{1}{k^2}$, Nordisk Mat. Tidskr. 18 (1970), 91-92.
R. M. Hovstad, The series $\sum_{k=1}^{\infty} 1/k^{2p}$, the area of the unit circle and Leibniz' formula, Nordisk Mat. Tidskr. 20 (1972), 92-98.
C. Ji and Y. Chen, Euler's formula for $\zeta(2k)$, proved by induction on $k$, Math. Mag. 73 (2000), 154-155. https://doi.org/10.2307/2691089
D. Kalman, Six ways to sum a series, College Math. J. (1993), 402-421.
H. L. Keng, Introduction to Number Theory, Springer-Verlag, Berlin, Heidelberg, New York, 1982.
M. Kline, Euler and infinite series, Math. Mag. 56 (1983), 307-314. https://doi.org/10.2307/2690371
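As a quick numerical sanity check of the two closed forms that run through the abstract and titles above, the Basler value $\zeta(2)=\pi^2/6$ and its alternating companion $\sum_{n\ge 1}(-1)^{n+1}/n^2=\pi^2/12$, one can compare partial sums against the limits. The cutoff $N$ below is an arbitrary choice of ours; this is illustration, not proof:

```python
import math

# Partial sums of the Basler series and its alternating companion,
# compared against the closed forms:
#   sum_{n>=1} 1/n^2          = pi^2/6
#   sum_{n>=1} (-1)^(n+1)/n^2 = pi^2/12
N = 100000
basler = sum(1.0 / n**2 for n in range(1, N + 1))
alternating = sum((-1)**(n + 1) / n**2 for n in range(1, N + 1))

# The Basler tail is ~1/N; the alternating error is bounded by the
# first omitted term ~1/(N+1)^2, so it converges much faster.
print(basler, math.pi**2 / 6)
print(alternating, math.pi**2 / 12)
```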
\begin{document} \title[What is a quantum computer, and how do we build one?]{What is a quantum computer, and how do we build one?} \author{Carlos A. P\'erez-Delgado\footnote{[email protected]} and Pieter Kok\footnote{[email protected]}} \address{Department of Physics and Astronomy, University of Sheffield, Hicks building, Hounsfield road, Sheffield, S3 7RH, United Kingdom} \begin{abstract} \noindent The DiVincenzo criteria for implementing a quantum computer have been seminal in focussing both experimental and theoretical research in quantum information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. This raises the question: what are the general criteria for implementing a quantum computer? To this end, a formal operational definition of a quantum computer is introduced. It is then shown that according to this definition a device is a quantum computer if it obeys the following criteria: Any quantum computer must consist of a quantum memory, with additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault-tolerantly. We discuss various existing quantum computing paradigms, and how they fit within this framework. Finally, we present a decision tree for selecting an avenue towards building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation. 
\end{abstract} \pacs{03.67.Lx 03.67.Ac 03.67.Pp 03.67.Hk} \submitto{New Journal of Physics} \date{\today} \maketitle \section{Introduction} \noindent One of the main focal points of modern physics is the construction of a full-scale quantum computer \cite{deutsch85,deutsch89}, which holds the promise of vastly increased computational power in simulating quantum systems. In turn, this may lead to fundamentally new quantum technologies \cite{feynman82,lloyd96}. There is also mounting evidence that these devices can solve standard mathematical-computational problems more efficiently than classical computers. Improvements range from quadratic speed-ups in general-purpose algorithms such as search \cite{BBBGLS00,Grover96,BBBGL98,BBBGL99,CGW00,Grover98}, to exponential speed-ups (over the best known classical counterparts) in specialized algorithms, such as the hidden subgroup problem \cite{shor94,CEHM98,CEMM98}. The common goal of creating a quantum computer has acted as a focus for research in experimental and theoretical physics, as well as in computer science and mathematics. In particular, this goal has pushed forward the search for better ways to control different types of quantum systems, like cold and hot ions \cite{cirac95,molmer99}, cavity QED \cite{pellizari95}, neutral atoms \cite{briegel00}, liquid and solid state NMR \cite{cory97,gerschenfeld97,braunstein99,yamaguchi99,vandersypen04}, silicon-based nuclear spins \cite{kane98}, electrons floating on helium \cite{Platzman99}, quantum dots \cite{loss98}, Cooper pairs \cite{averin98}, Josephson junctions \cite{mooij99,schnirman97,makhlin99,makhlin01}, and linear optical systems \cite{knill01,kok07}, among others. Important new results in the field, both theoretical and experimental, continue to drive progress in quantum information processing. 
As a consequence, our ability to control quantum systems has improved dramatically over the past fifteen years, and we understand much better many aspects of classical and quantum computing, as well as broader aspects of fundamental physics. An important part in focussing the research in quantum information processing has been the set of criteria for creating a quantum computer, pioneered by Deutsch \cite{deutsch85}, and expanded and formalized by DiVincenzo \cite{divincenzo96,divincenzo00,divincenzo01}. They have inspired new experimental and theoretical research in quantum control and quantum information processing. The criteria, now known as {\em DiVincenzo's Criteria}, apply explicitly to the circuit model of quantum computation. According to the criteria, any quantum computer must facilitate the following: \begin{enumerate}\itemsep=0pt \item A scalable physical system with well-characterized qubits; \item the ability to initialize the state of the qubits to a simple fiducial state, such as $\ket{000\ldots}$; \item long relevant decoherence times, much longer than the gate operation time; \item a universal set of quantum gates; \item a qubit-specific measurement capability. \end{enumerate} These five criteria were originally formulated by DiVincenzo in 1996 \cite{divincenzo96}. Subsequently, DiVincenzo formulated two more criteria for quantum communication \cite{divincenzo00}: \begin{enumerate} \item[(vi)] the ability to interconvert stationary and flying qubits; \item[(vii)] the ability to faithfully transmit flying qubits between specified locations. \end{enumerate} Since the formulation of the criteria, new ways of making quantum computers have been invented that do not always seem to fit the criteria very well. Today, experimentalists can choose between various paradigms for quantum computation, such as adiabatic quantum computing, globally controlled quantum computing, and the one-way model, all of them with their various strengths and weaknesses. 
Each paradigm requires a completely different approach, yet all attempt to reach the same end goal, namely to construct a quantum computer. While all models are computationally equivalent, their differences allow for different intuitions, and practical advantages. This allows for greater freedom in the laboratory. For example, where one was once required to control individual qubits, today we know that global control suffices in certain instances. The wealth of paradigms also enriches our theoretical knowledge, and increases our chances for finding new algorithms. For example, universal blind quantum computation \cite{broadbent08} was developed using intuition gained from the one-way model for quantum computing. DiVincenzo himself wrote about the implications of these new paradigms for the criteria in 2001, and relished how heresies to the ``dogmatic'' criteria were arising in the field of quantum computation \cite{divincenzo01}. We expect that there must be a form of the criteria that does not make any assumptions about the particular implementation of the quantum computer. In this paper, we formulate such general criteria for constructing a quantum computer, and we identify the metric that determines whether the criteria are met in terms of fault tolerance and scalability. To achieve this, we must construct a new {\em operational} definition of a quantum computer, and the algorithms that run on them. Finally, we construct a decision tree to help select the most promising paradigm given a particular physical implementation. This will provide the experimentalist with a map of the key theoretical results they need in order to make their laboratory setup into a full-scale quantum computer. While the criteria are designed to be independent of the paradigms, the decision tree must be updated whenever new paradigms are developed. 
\section{What is a Quantum Computer?} \noindent Before we can discuss the new criteria for quantum computation, we have to define exactly what we mean by the term {\em quantum computer}. Although most readers will have an intuitive concept, a formal definition of the term is quite elusive. In the literature, there are broadly four types of definitions, and we will argue that they generally fall short of what we seek in a useful definition of a quantum computer. First, a quantum computer can be defined as a representation of a quantum Turing machine, as proposed by Deutsch in 1985 \cite{deutsch85}. While satisfactory from a formal computer science perspective, this is not the most useful formulation when one is concerned primarily with the implementation of a quantum computer. Second, many texts use an implicit definition of a quantum computer. For example, Mermin \cite{mermin07} writes that ``a quantum computer is one whose operation exploits certain very special transformations of its internal state'', and the subject of the book is these special transformations. In Nielsen and Chuang \cite{nielsen00}, a definition of a quantum computer is never given, and the reader instead develops an intuition for the meaning of the word ``quantum computer'' over the course of the material. The implicit definition is a perfectly good pedagogical approach, but it is not the clear, brief statement we need to derive the criteria. Third, quantum computers are sometimes defined as devices that can outperform classical computers. The trouble with these types of definitions is that they depend on the classification of computational problems, and there are a number of important open questions about this classification (for example whether $\mathsf{P} = \mathsf{NP}$). Although unlikely, it may well be that classical computers are just as powerful as quantum computers, and the definition ceases to have meaning. 
We therefore require a definition that does not depend on the classification of problems in complexity theory. Finally, there are constructive definitions, which state that quantum computers are made of quantum bits, use entanglement, etc. The trouble with such definitions is that they tend to be quite specific about the implementation. For example, a definition in terms of qubits seems to exclude the possibility of creating a quantum computer using continuous variable quantum systems. Instead, we want a definition that does not presuppose anything about the building blocks of the quantum computer, does not depend on the classification of problems in complexity theory, and is independent of our interpretation of quantum mechanics (two philosophers with radically different interpretations of quantum mechanics should still be able to agree whether a device can be classified as a quantum computer). Finally, the definition must not make any reference to the paradigm that is used to perform the computation in any fundamental way. For example, the end user of a quantum computer will generally not be interested whether the device uses the adiabatic, measurement-based, or some other as yet unknown form of quantum computing, unless that makes a difference in the performance of the device. In other words, we need an {\em operational} definition of a quantum computer. We will give a formal definition of a quantum computer below, after a brief discussion of the intuitive background of this definition. Broadly speaking, we define a quantum computer as a device that can run a quantum algorithm efficiently, where a quantum algorithm is a classical bit string that encodes a series of quantum operations (typically quantum gates). The quantum computer should be able to take this string as input and produce as output another bit string. The probability distribution of the output should be consistent with the predictions of quantum theory. 
Finally, the time it takes the computer to produce the output should be in agreement with the difficulty of the algorithm, \emph{e.g.} an exponential-time algorithm can take the quantum computer an exponentially long time to compute, but a polynomial-time algorithm should be computed in polynomial time. To see how a classical computer is believed to fail this criterion, consider Shor's factoring algorithm: The number of steps in the algorithm scales polynomially with the size of the input, but the actual classical implementation will scale exponentially. There are three important advantages to this definition. First, it makes no reference to how the quantum computer works, which means that the definition does not have to be updated when new methods of implementation are invented. Second, the definition does not refer to any specific computational problems, like factoring, as a differentiator from a classical computer. Instead, the definition calls for the ability to compute \emph{any} quantum algorithm, in an efficient enough manner. And third, the definition does not make any assumptions about either the theory of computation or the nature of physical reality. This means that the definition will still be valid when our knowledge of the relationship between classical and quantum computing becomes more complete, and when physical theories that supersede quantum theory are developed. To formulate our definition in a precise mathematical way, let $\smash{s^{(n)}_{\rm in}}$ be a string of classical symbols, and let the program $P$ of size $r$ be a symbolic representation of an algorithm (for more details see Sec.\ \ref{sec:algorithms}). \begin{definition}\label{def:idealQC} An {\em ideal quantum computer} is a hypothetical device that accepts as input a classical bit string $\smash{s^{(n)}_{\rm in}}$, and a quantum program $P$ with size $r$, acting on a Hilbert space $\mathcal{H}_n$ of dimension $2^n$. 
For any given program $P$ the quantum computer produces the classical output bit string $\smash{s^{(m)}_{\rm out}}$ with probability \begin{equation*} p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in}) = \langle{s^{(n)}_{\rm in}} | U_P^{\dagger} \left( \mathbb{I}_{n-m} \otimes |{s^{(m)}_{\rm out}}\rangle\langle{s^{(m)}_{\rm out}}| \right) U_P |{s^{(n)}_{\rm in}}\rangle . \end{equation*} The total amount of resources used by the device scales polynomially in $r$. \end{definition} The operator $\mathbb{I}_{n-m}$ is shorthand for the identity on the subspace of $\mathcal{H}_n$ that is the orthocomplement of the subspace spanned by $\{\ket{\smash{s^{(m)}_{\rm out}}}\}$. It is worthwhile emphasizing that the quantum computer uses resources that scale polynomially in the size of the program $r$, as opposed to the number of bits $n$ in the input to the program. In other words, a quantum computer can very well implement an exponential-time quantum algorithm. The requirement states that if, for example, the algorithm itself is polynomial-time, then the quantum computer must also run in polynomial time. We do not impose the stronger restriction that the quantum computer uses resources that are \emph{linear} in $r$. There are a number of reasons why this is reasonable. First, there are various different quantum computer paradigms, as we discuss elsewhere in this paper, and while all of them are equivalent with regard to which computation they can perform efficiently, some computations can be slightly more efficient on one platform than on another. For example, calculating the ground state of a BCS Hamiltonian on a traditional NMR quantum computer may be quadratically slower than doing the same calculation on a qubus quantum computer \cite{brown09}. Second, as we have discussed earlier, there are various different languages in which quantum programs can be written. 
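Definition \ref{def:idealQC} is purely operational: for a fixed program $P$ it prescribes nothing but an output distribution. The following sketch (our own illustration, not part of the formal development; the two-qubit Bell-state circuit and the use of NumPy are assumptions made for concreteness) computes $p_P(s^{(m)}_{\rm out}|s^{(n)}_{\rm in})$ for a toy program by direct linear algebra, with all $n=m=2$ bits read out so that $\mathbb{I}_{n-m}$ is trivial:

```python
import itertools
import numpy as np

# Toy "program" P: a Hadamard on qubit 0 followed by a CNOT (control 0,
# target 1), i.e. the circuit mapping |00> to (|00> + |11>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
U_P = CNOT @ np.kron(H, I)

# Input bit string s_in = "00" as a computational-basis vector.
s_in = np.zeros(4)
s_in[0] = 1.0

# p_P(s_out | s_in) = |<s_out| U_P |s_in>|^2 when all qubits are read out.
amps = U_P @ s_in
probs = {"".join(bits): float(abs(amps[i]) ** 2)
         for i, bits in enumerate(itertools.product("01", repeat=2))}
print(probs)  # probability ~1/2 each on '00' and '11', ~0 elsewhere
```

Sampling from `probs` then plays the role of the readout of the ideal device.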
It may be necessary to translate or \emph{compile} the program from the language it is written in to the \emph{`native'} language of the quantum computer, and this translation may take up to polynomial time in the size of the program. Third, there are the issues of scalability and fault tolerance, to which we will return in Sec.\ \ref{sec:ftnsc}. Because of overheads due to error-correction and fault-tolerance, it is possible that the amount of resources needed by a quantum computer to solve larger problems scales super-linearly. Why is the quantum computer in Definition \ref{def:idealQC} a hypothetical device? Suppose that instead we defined a real quantum computer according to Definition \ref{def:idealQC}. The quantum computer should then be able to accept an input of any size, and compute arbitrary quantum programs on this input. On the other hand, any real quantum computer has a well-defined finite size, in terms of the number of logical input bits it can operate on. Furthermore, it needs to potentially create entanglement across as many subsystems as it has input bits. We can therefore always construct a problem that is too large for our (possibly very large) quantum computer, and such a device would fail according to Definition \ref{def:idealQC}. Nevertheless, any reasonable definition of a real quantum computer should admit this finite device as a true quantum computer. This is an important difference between classical and quantum computers. Any classical computer that is large enough to be universal will be universal in the strictest sense, and no restrictions are placed on the efficiency of the classical computation. It accepts an input string of any size and can compute an algorithm of any size, given that it is provided with sufficient `work space'. We could introduce the notion of a `quantum work space', but that would be cheating. 
It is a profound and important fact that a quantum computer cannot store partial results while it works on other parts of a computation. Every part of a computation can potentially be entangled to every other part, which is central to the speedup that quantum computers can achieve over classical computers \cite{vidal03,jozsa03}. Moreover, due to the monogamy of entanglement, stored partial results cannot interact with anything but the quantum computer itself. The quantum work space must therefore be considered an integral part of the quantum computer. The quantum work space has traditionally been described as a set of qubits. However, in recent years it has been shown that it is not necessary to restrict ourselves to the qubit model. In particular, an important sub-discipline of quantum information theory involves the use of continuous variable quantum information carriers, or {\em qunats}. This therefore leads us to consider a formal definition of the quantum work space, which from now on we call a quantum memory: \begin{definition}\label{def:qumemory} A {\em quantum memory} of size $k=\log_2 d$ is a physical system that can represent any (computable) quantum state in a Hilbert space of dimension $d$. \end{definition} Even though the size of a quantum memory is measured in qubits, the definition does not specify what type of information carriers should be used. However, $d$ must be finite, which means that even if the quantum memory is based on continuous variables, the effective Hilbert space must be finite. Consequently, when continuous variables are the information carriers, the logical encoding must either be in terms of a qudit, or must take into account the finite precision that is inherent in continuous variables. Finally, a subtle but important point: According to Definition \ref{def:qumemory}, a quantum memory must be able to represent (that is, store) any computable quantum state, but we do \emph{not} require that it actually is in such a state. 
That would be too strong a requirement. Just about any current experimental implementation of a qubit is a good example of this. For instance, in NMR quantum computation the nuclear spin of an atom is used to represent a qubit. This is natural when using a spin-$1/2$ nucleus; but one can also use, say, spin-$3/2$ nuclei, and simply use a subspace of the physical Hilbert space as the `computational' space. A more extreme example would be optical lattices, or quantum dots, where experimentalists use two energy eigenstates of their respective systems to represent qubits. It does not matter that the state space of an atom trapped in an optical lattice is infinite-dimensional; it can still represent a simple two-dimensional system. Finally, a more subtle example is a device that is not actually quantum mechanical itself, but can simulate any quantum mechanical interaction. This too is a perfectly viable quantum memory. Let us now return to the definition of a quantum computer. Another sense in which Definition \ref{def:idealQC} must describe a hypothetical device is that it must produce the probability distribution $\smash{p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in})}$ exactly. However, any real device will unavoidably have errors, and can never produce $\smash{p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in})}$ exactly. Of course, from a practical point of view, this level of precision is quite unnecessary. It is perfectly acceptable that the quantum computer creates $\smash{s^{(m)}_{\rm out}}$ with probability $\smash{p'_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in})}$, where $p_P$ and $p_P'$ are close to each other with respect to some metric (e.g., the fidelity or the statistical distance). The difference between the two distributions will manifest itself as the occurrence of wrong answers in the quantum computer. 
If the problem to be solved is such that we can verify the correctness of the answer efficiently (such as factoring), then it is sufficient to run the quantum computer repeatedly. As long as the probability of getting a wrong answer is small enough, we can repeat the computation until we obtain the right answer. However, one might be interested in tackling problems that are not efficiently verifiable. In addition, it is desirable to be able to establish, efficiently, a lower bound on the success probability of the quantum computer. In such cases, we require more sophisticated methods of quality control. One such method is circuit self-testing \cite{magniez06}. Taking these practical considerations into account, we arrive at the following definition: \begin{definition}\label{def:kqubitQC} A {\em $k$-bit quantum computer} is a physical device that accepts as input a classical bit string $\smash{s^{(n)}_{\rm in}}$, with $n \leq k$, and a quantum program $P$ of size $r$, acting on a Hilbert space of dimension $2^n$. For any given program $P$ the quantum computer produces the classical output bit string $\smash{s^{(m)}_{\rm out}}$ with probability \begin{equation}\nonumber p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in}) = \langle{s^{(n)}_{\rm in}} | U_P^{\dagger} \left( \mathbb{I}_{n-m} \otimes |{s^{(m)}_{\rm out}}\rangle\langle{s^{(m)}_{\rm out}}| \right) U_P |{s^{(n)}_{\rm in}}\rangle , \end{equation} with sufficiently high fidelity. The amount of resources used by the device scales polynomially in $r$. \end{definition} This definition captures the notion of a quantum computer as a real, finite physical device. In particular, we now allow our device to fail some of the time. We impose the weaker condition that it reproduces the probability distribution with \emph{sufficiently} high fidelity. Though we do not impose a particular number as the lower bound on the success probability, a standard choice would be $2/3$.
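For small $n$, the probability distribution in Definition \ref{def:kqubitQC} can be sketched with a direct state-vector simulation: prepare $|s^{(n)}_{\rm in}\rangle$, apply $U_P$, and marginalize the Born-rule probabilities over all but the last $m$ bits. The function name and the bit-ordering convention below are our own illustrative choices:

```python
import numpy as np

def output_distribution(U, s_in, m):
    """p_P(s_out | s_in): apply U to |s_in>, then measure the last m qubits."""
    n = len(s_in)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[int("".join(map(str, s_in)), 2)] = 1.0      # prepare |s_in>
    psi = U @ psi                                   # apply U_P
    dist = {}
    for idx, amp in enumerate(psi):
        p = abs(amp) ** 2                           # Born rule
        if p > 1e-12:
            s_out = format(idx, f"0{n}b")[-m:]      # keep the last m bits
            dist[s_out] = dist.get(s_out, 0.0) + p
    return dist

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))      # Hadamard on the first of two qubits
dist = output_distribution(U, (0, 0), 2)
print(dist)                    # two outcomes, each with probability 1/2
```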
It is not \emph{a priori} clear that proving a lower bound on the reliability of a quantum device is easy, or even feasible. Practically, however, there are procedures that should be able to prove the reliability of some types of quantum computer architectures \cite{magniez06}. The drawback of Definition \ref{def:kqubitQC} over Definition \ref{def:idealQC} is that the mathematical notion of scalability is lost, since the device is strictly finite. To retrieve scalability, we must refer to Definition \ref{def:idealQC}. That is, we can claim scalability of our quantum computer architecture when, given an input string of any size $n$ and a quantum program, we can in principle construct a physical $k$-bit quantum computer with $k \geq n$ to run the computation. Thus far, we have avoided any discussion on how the measurement statistics $\smash{p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in})}$ are obtained. Definitions \ref{def:idealQC} and \ref{def:kqubitQC} are entirely operational, which means that if a device \emph{acts} like a quantum computer in producing the probability distribution, then by our definition it \emph{is} a quantum computer. Importantly, we have completely avoided any assumptions about the representation of the data in the quantum computer. Even though the input and output are always considered classical bit strings, the physical representation in the quantum computer can be in terms of qubits, qudits, or qunats. Even if the physical representation is in terms of qubits, the logical qubits will in general not map directly to the physical qubits, for example due to levels of error correction. We have also sidestepped any consequences of possible efficient classical simulability of quantum processes. Finally, the definition is independent of interpretations of quantum mechanics. \section{Defining Quantum Algorithms}\label{sec:algorithms} \noindent Our definition of a quantum computer is based on the notion of efficient computability. 
In other words, an efficient quantum program should be run efficiently by a quantum computer, while an inefficient program may run equally inefficiently. The purpose of this section is to formalise this notion. However, for many readers the idea of a program $P$ of size $r$ will be sufficiently intuitive, and these readers can skip this section if they desire. We formalise the notion of a quantum program by means of an inductive definition: we begin by defining basic building blocks and then describe how these building blocks can be combined to build more complex objects. A consequence of this is that we have to choose a particular way to \emph{construct} quantum algorithms. We have chosen the language and building blocks of quantum circuits, since they form a well-known and intuitive approach. While in many ways it is desirable to give a general definition of quantum algorithms that does not refer to any particular implementation, the definition presented here allows us to achieve the goals set in this paper properly and easily. An inductive definition allows us to define a formal size function in a straightforward manner. That is, not only is the size of a quantum program always well-defined, it is also easily characterised. In our case, it is simply the sum of the cost of each individual gate. For simplicity, and without loss of generality, we will restrict ourselves predominantly to the binary alphabet $\{ 0, 1 \}$. Any higher-dimensional information carrier can be described in terms of multiple bits, and even continuous variables can be treated this way, as long as we keep in mind that they have an intrinsic finite precision. Consequently, we consider mostly finite binary classical bit strings $s^{(n)} \in \{0, 1 \}^n$. Quantum bit strings will be denoted similarly by $\ket{\smash{s^{(n)}}} \in \{ \ket{0}, \ket{1} \}^{\otimes n}$.
We have defined quantum computers as accepting three pieces of input: a bit string representing the initial state; a bit string representing a computable quantum function on the initial state; and a bit string that determines what measurement should be made on the resulting final state. We call the second input a \emph{quantum program}. Note that a quantum program, by our definition, is not quite a quantum algorithm. A quantum algorithm is a recipe for transforming an input of any size; e.g., Shor's quantum factoring algorithm can be used to factor any number. A quantum program, by contrast, acts only on inputs of a certain size. In short, a quantum program classically encodes a unitary operator, and a quantum algorithm is a family of programs for different input sizes. A commonly chosen universal set of quantum operations consists of arbitrary finite-precision single-qubit gates, along with the controlled-Z operation ($CZ$). \begin{definition}\label{def:QAprimitives} Quantum Gates are strings that denote quantum operators that fall in either of the following two categories: \begin{enumerate} \item The category of qubit rotations \[ R_j(\bm{\theta}) = \exp\left( -\frac{i}{\hbar} \bm{\theta}\cdot\bm{\sigma}_j \right) , \] where $j$ indicates the qubit, $\bm{\theta} = (\theta_x,\theta_y,\theta_z)$, and $\bm{\sigma}_j = (X,Y,Z)$ is the vector of Pauli operators on qubit $j$. The size of this primitive in ``big Omega'' notation is $\Omega(m)$, where $m$ is the maximum precision of $\theta_x$, $\theta_y$, and $\theta_z$ in bits; \item the category of controlled-$Z$ operators $CZ_{jk}$ between the $j^{\rm th}$ and the $k^{\rm th}$ logical data qubits. The size of this primitive is $\Omega(1)$. \end{enumerate} \end{definition} Note that any gate amounts to a bit-string that records the type of operation, the qubit(s) it operates on, and if necessary the value of the angles in $\bm{\theta}$, up to a finite precision.
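Both gate categories of Definition \ref{def:QAprimitives} have simple matrix representations. The sketch below is our own (we take $\bm{\theta}$ dimensionless and drop the $\hbar$ prefactor), and uses the identity $e^{-ia\,\hat{\bm{n}}\cdot\bm{\sigma}} = \cos a\,\mathbb{I} - i \sin a\,(\hat{\bm{n}}\cdot\bm{\sigma})$ with $a = |\bm{\theta}|$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta):
    """exp(-i theta . sigma), via cos(a) I - i sin(a) (n . sigma), a = |theta|."""
    theta = np.asarray(theta, dtype=float)
    a = np.linalg.norm(theta)
    if a == 0:
        return I2.astype(complex)
    n = theta / a
    return np.cos(a) * I2 - 1j * np.sin(a) * (n[0] * X + n[1] * Y + n[2] * Z)

# Controlled-Z on two qubits: diag(1, 1, 1, -1)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

R = rotation([np.pi / 2, 0, 0])          # rotation by pi/2 about the x axis
assert np.allclose(R.conj().T @ R, I2)   # unitarity
assert np.allclose(R, -1j * X)           # exp(-i (pi/2) X) = -i X
```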
The size function assumes that it is in general harder to implement a quantum unitary operation with higher precision than one with lower precision. Hence, the higher the precision of the gate, the larger its size. The two-qubit gate, $CZ$, has a fixed constant size because its precision is fixed. The second part of an inductive definition is the closure function: a method for constructing general set items from atomic ones. \begin{definition}\label{def:Qcircuit} A Quantum Circuit can be constructed inductively as follows: \begin{enumerate} \item If $q$ is a gate then it is a quantum circuit. \item If $q_1$ and $q_2$ are quantum circuits, then their composition, denoted by $q_2 \circ q_1$, is also a quantum circuit. \end{enumerate} \end{definition} A quantum circuit is therefore a bit-string that encodes (i.e., gives a method for implementing) a unitary operation. Given a quantum circuit $q$ we denote the unitary operator it encodes by $U_q$. It acts on a Hilbert space of dimension $2^{n}$, where $n$ is the size of the largest index in any gate in $q$ (recall that the index or indices of a gate establish which qubit(s) it acts on). An important mathematical remark is that the set of quantum circuits as defined above is \emph{freely generated}. This means that given a properly formed quantum circuit, there is only one way to construct it using the rules in Definition \ref{def:Qcircuit}. This in turn implies that we can define a size measure as a recursive function on a quantum circuit, and that this value is always well-defined. \begin{definition}\label{def:Qcomplexity} The size of a quantum circuit $q$, denoted by $C(q)$ or $|q|$, is a function from the set of all quantum circuits to the non-negative integers, and is defined as follows.
\begin{enumerate} \item If $q$ consists of a single gate, its size is the same as that of the gate (see Definition \ref{def:QAprimitives}); \item if $q$ is a quantum circuit such that $q= q_2 \circ q_1$ then the size of this circuit is $|q_1| + |q_2|$. \end{enumerate} \end{definition} In Definition \ref{def:computableUnitary} below we describe computability and complexity of unitary operators in terms of the quantum programs that encode them. \begin{definition}\label{def:computableUnitary} A unitary operator $U$ is \emph{computable}, if there exists a quantum program $q$ such that $U_q = U$. Furthermore, the cost of $U$ is the size of the smallest quantum program $q$ such that $U_q = U$. Finally, $U$ is $\epsilon$-approximable if there exists a program $q$ such that the fidelity $F(U, U_q) \geq 1 - \epsilon$. Similarly, the cost of approximating $U$ is the size of the smallest program that approximates $U$. \end{definition} Using the concepts developed in this section we can easily and formally speak of quantum program size. This is not a fully developed complexity theory, since it only speaks of the size of individual objects or programs. It is merely a useful tool that allowed us to properly define quantum computers in the previous section. Also, we stress again that although we have used the language of circuits to discuss quantum programs, this is but one of many languages in which programs can be described. Regardless of which language we choose to represent a program, a quantum computer, as defined above, must be able to implement and run the program, possibly via a \emph{compiler}, i.e., a classical program that translates the quantum program from our language to some internal representation appropriate for the hardware implementation. This translation might incur an additional cost, but it should always be a polynomially scaling penalty.
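Because quantum circuits are freely generated, the size function of Definition \ref{def:Qcomplexity} can be computed by a simple recursion over the gate list. The tuple encoding of gates and the cost constants below are illustrative assumptions of ours, not part of the definitions:

```python
def gate_size(gate):
    """Size of a single gate, following Definition of the gate primitives:
    a rotation specified to m bits costs m; a CZ costs a constant."""
    kind = gate[0]
    if kind == "R":               # ("R", qubit, (tx, ty, tz), precision_bits)
        return gate[3]
    if kind == "CZ":              # ("CZ", j, k)
        return 1
    raise ValueError(f"unknown gate {kind!r}")

def circuit_size(circuit):
    """|q2 o q1| = |q1| + |q2|, applied recursively over the gate list."""
    return sum(gate_size(g) for g in circuit)

q = [("R", 0, (0.25, 0.0, 0.0), 16),   # rotation specified to 16 bits
     ("CZ", 0, 1),
     ("R", 1, (0.0, 0.5, 0.0), 16)]
print(circuit_size(q))  # 33
```

Here a rotation gate specified to $m$ bits contributes a cost proportional to $m$, while each $CZ$ contributes a constant, matching Definition \ref{def:QAprimitives}.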
This observation and restriction comes into play in our definition, where we state that the quantum computer must implement the quantum program in a time that is polynomial in the size of the program. \section{Criteria for building a quantum computer}\label{sec:ftnsc} \noindent One of the main aims of this paper is to establish the criteria that any implementation of a quantum computer must meet. The first set of these criteria was formulated by DiVincenzo, and mainly concerns the circuit model of quantum computation, applied to qubits. However, other models of quantum computation have been proposed since, and most likely new models will be formulated in the future. In this section we will discuss the set of criteria that any model of quantum computation must meet, now and in the future. These are: \setcounter{criterion}{-1} \begin{criterion} Any quantum computer must have a quantum memory. \end{criterion} \begin{criterion} Any quantum computer must facilitate a controlled evolution of the quantum memory, that allows for universal quantum computation. \end{criterion} \begin{criterion} Any quantum computer must include a method for cooling the quantum memory. \end{criterion} \begin{criterion} Any quantum computer must provide a readout mechanism for (non-empty) subsets of the quantum memory. \end{criterion}\setcounter{criterion}{-1} Criterion 0 establishes the \emph{conditio sine qua non} of any quantum computing device: its ability to dynamically represent a quantum state, in other words, to have a quantum memory as defined in Definition \ref{def:qumemory}. Criteria 1-3 establish further requirements imposed on this quantum memory. These criteria are more general than DiVincenzo's criteria, and therefore more abstract. This means that we must give a metric that allows us to determine whether the criteria are satisfied. This metric requires two concepts, namely fault tolerance and scalability.
\subsection{Fault Tolerance} \noindent In practice, no device will ever be perfect. Random fluctuations induce errors in all aspects of the computation, including the quantum memory, the quantum evolution, and the readout. If the device still operates in accordance with Definition \ref{def:kqubitQC} in the presence of errors, the device is called {\em fault tolerant}. The size of the errors must typically be below a certain value in order to achieve fault tolerance, and this is called the fault tolerance {\em threshold}. Fault-tolerance was first considered in classical computation, where fundamental results show that in the presence of faulty gates \cite{neumannX}, and even faulty gates and wires \cite{gacs}, classical computation can be done robustly. The key to fault tolerance is to employ error correction effectively. All properly functioning classical computers operate in a fault tolerant manner. In quantum computing, due to the fragile nature of quantum information and entanglement, fault tolerance is much harder to achieve. Indeed, many people initially believed that fault tolerant quantum computers were physically impossible \cite{unruh95}. Nevertheless, it turned out that, as in classical computing, we can employ quantum error correction, and Shor and Steane were the first to show how to perform quantum error correction in principle \cite{Shor1, steane1, steane2}. Each logical qubit is typically encoded in a number of physical qubits. A measurement of certain observables on the qubits then allows us to extract the {\em syndrome}, which tells us which, if any, error has occurred. For example, an error correction code can protect a logical qubit from a single error. However, the codes themselves consist of a number of physical qubits, all of which are prone to errors.
In order for quantum error correction to be beneficial, the probability of an error in the logical qubit (roughly the square of the error probability on a single physical qubit, multiplied by the number of distinct places an error can occur) must be smaller than the error probability on a single physical qubit. When this condition is satisfied, we can in principle concatenate the codes, which means that each physical qubit in the code is itself encoded, and so on. However, in order to obtain fault tolerance, we also have to make sure that the quantum error correction code and the concatenation do not allow errors to multiply. For example, when a qubit experiences a bit flip error, using that qubit as the control of a $CNOT$ will induce a bit flip error on the target qubit as well. This means that we now have {\em two} errors, even though the $CNOT$ worked perfectly. Quantum error correction codes must be designed such that error propagation and multiplication is kept under control. Furthermore, all aspects of the quantum computer, including the memory, the evolution, and the readout, must be performed in a fault tolerant way. All this must be achieved without compromising the polynomial scaling in the amount of resources required to perform the computation. Some of the first results on fault-tolerance are due to Shor \cite{Shor2}, Kitaev \cite{kitaev97}, Steane \cite{steane3}, Gottesman \cite{gottesman}, and Preskill \cite{preskill98}. How to achieve fault tolerance, and the numerical value of the threshold, depends on the paradigm and the type of error correction. In Sec.\ \ref{sec:paradigms} we will discuss specific fault-tolerance thresholds for the particular paradigms. \subsection{Scalability} \noindent The second essential characteristic of any quantum computer is {\em scalability}. It is what allows us to move beyond the proof-of-principle experiment to a large-scale quantum computer that can solve interesting problems not within reach of classical computing.
While it is universally agreed that scalability is a desired, or even required, characteristic of potential quantum computing platforms, what exactly constitutes scalability is often subject to debate. A textbook definition of scalability in computer science is that a system is scalable if \emph{`its performance does not degrade significantly as [\ldots] the load on the system increases.'} \cite{menasce04}. More generally, the cost of scaling a system to size $n$ can be described by a cost function. This function, and in particular its asymptotic growth rate, determines the degree of scalability of the system. When discussing a quantum computing device using the above definition of scalability, one would say that the quantum computer is scalable if the resources needed to run the same polynomial-time algorithm on a larger input scale polynomially in the size of the input. If the device, for reasons of having to cope with increased error-rates, decoherence, fundamental limitations on its size, etc., cannot compute the algorithm using at most a polynomial overhead, we say that the device is not scalable. The quantum information community usually takes the size of the quantum memory to be the quantity that must be scaled. Even without any further considerations, this also imposes a necessary scalability in the quantum control efficiency. In particular, the more logical qubits are maintained in the device, the more parallel operations (or faster serial operations) are needed in order to keep the device fault-tolerant. By this definition, there is currently no scalable quantum computing device. While the above definition of scalability is widely accepted in the computer science and systems engineering communities, it is somewhat problematic when applied to quantum computing technology. In particular, people often have a different definition of scalability in mind.
Considering that no scalable quantum computers exist yet, scalability usually refers to the future scalability of the proposed implementation of a quantum computer. It is therefore a \emph{prediction}. This issue was addressed recently by the Quantum Pontiff \cite{bacon09}, who presents an overview of several methods to analyze, evaluate and discuss the future scalability of a quantum computing device. These include the economic cost of scaling these devices; the current knowledge of the technology used in the device; how this technology fits in the larger field of physics, chemistry, etc.; and whether the technology has been used in scalable devices other than quantum computers. Independently of such evaluation methods, we can distinguish three forms of scalability, which can be understood as follows: First, there is the {\em actual} scalability of a system. This is the scalability cost function of actually existing devices, according to the above definition. Second, there is the {\em projected} scalability of a quantum computer architecture, which is given by the expected cost function. As with the actual cost, this will be a function mapping the number of logical qubits to a resource cost. It is similar in all other respects too, except that it refers to a hypothetical cost derived by projecting our advances in technology into the future. In the absence of true quantum computers, the projected scalability is the quantity that is of most use to current designers of quantum computer architectures. However, it is also the most difficult to quantify, due to inherent uncertainties in technological and economic predictions. Finally, the {\em fundamental} scalability of an architecture is given by upper and lower bounds on the resource cost function that are direct consequences of the (known) laws of physics, in particular quantum mechanics.
These are absolute bounds that can be derived formally from first principles, and the usefulness of these bounds for a particular device will generally rest on the size of the gap between the lower and upper bounds. We can now state unambiguously when the criteria for quantum computing are met. First, the device must be a fault tolerant quantum computer, which means that all the error probabilities for all the possible errors in a realistic error model are below the fault tolerance threshold. Consequently, the errors associated with each of the four criteria (memory, evolution, cooling, and readout) must be below this threshold. Second, the device must be scalable. The specific type of scalability (actual, projected, or fundamental) depends on the objectives of the user. For a prototype quantum computer one would require {\em actual} scalability up to a certain small number, while an industry interested in commercial production of quantum computers must meet {\em projected} scalability. \subsection{The Criteria} \noindent We now discuss the four criteria that must be met when we want to build a scalable, fault tolerant quantum computer. \begin{criterion}\label{crit:memory} Any quantum computer must have a quantum memory. \end{criterion} It is clear from Definition \ref{def:kqubitQC} that the quantum computer must be able to store the classical input bit string $\smash{s^{(n)}_{\rm in}}$ in such a way that it can be used in the computation. The storage device must be able to hold any computable quantum state that can be unitarily evolved from the input qubit string $\smash{|s^{(n)}_{\rm in}\rangle}$. Some states are known to have an efficient classical representation, such as the so-called stabilizer states \cite{gottesman97}. These can have large amounts of entanglement, and may be considered true quantum states.
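The contrast between stabilizer states and generic states can be made concrete: an $n$-qubit stabilizer state is fixed by $n$ Pauli generators, i.e., $O(n^2)$ classical symbols, whereas a generic state vector requires $2^n$ complex amplitudes. A short sketch using the GHZ state as an example (the bit and byte counts are rough, illustrative estimates):

```python
n = 30
# Stabilizer description of the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2):
# generators X X ... X and Z_i Z_{i+1} for i = 1, ..., n-1.
generators = ["X" * n] + [
    "I" * i + "ZZ" + "I" * (n - i - 2) for i in range(n - 1)
]
stabilizer_bits = n * n * 2          # ~2 bits per Pauli letter: O(n^2)
amplitude_bytes = (2 ** n) * 16      # 2^n complex double-precision amplitudes

print(len(generators))               # 30: n generators suffice
print(stabilizer_bits)               # 1800 bits
print(amplitude_bytes)               # 17179869184 (~17 GB) for n = 30
```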
Since we do not specify how the quantum memory operates, we may try to cheat the system by tracking only the efficient classical description of the state, and proceed with the computation using only these states. However, if the memory holds only states of this type, the quantum computer operating with this memory cannot produce the output $\smash{p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in})}$ efficiently. This is a consequence of the Gottesman-Knill theorem \cite{gottesman97}. A quantum computer requires the ability to store any computable quantum state, and such states are in general not known to have efficient classical representations. The quantum memory must be able to store any computable pure quantum state. However, this does {\em not} mean that the quantum memory always {\em is} in a pure state. The storage of a pure quantum state in an overall mixed system can be achieved in several ways. For example, a pure quantum state can be encoded in pseudo-pure states, or in decoherence-free subspaces. Regardless of the encoding, the device generally needs to store a quantum state for prolonged periods of time. \begin{criterion}\label{crit:evolution} Any quantum computer must facilitate a controlled evolution of the quantum memory, that allows for universal quantum computation. \end{criterion} According to Definition \ref{def:kqubitQC}, a quantum computer produces a probability distribution that depends on the unitary encoding $U_P$ of the program of interest $P$, and it must do so efficiently. It is generally accepted that this implies that for many algorithms the same probability distribution cannot be simulated efficiently on a classical computer. If this is indeed the case, then the evolution $U_P$ must be inherently quantum mechanical. Currently, we know of several ways to implement $U_P$, for example via a series of qubit gates, via adiabatic transformations, controlled measurements, etc.
This process must act in accordance with the laws of quantum mechanics, and is ultimately controlled by the user or programmer of the device. We therefore call this \emph{controlled quantum evolution}. This leads to Criterion \ref{crit:evolution}. The core distinguishing feature between various quantum computing paradigms is often the mechanism for performing the quantum evolution. Other criteria (memory, cooling, and readout) are then chosen to support the specific implementation of the controlled evolution. It does not matter whether the quantum evolution is described in the Schr\"odinger picture, the Heisenberg picture, or in an interaction picture. These different pictures do not yield observable differences, and all lead to the same probability distribution $\smash{p_P (s^{(m)}_{\rm out}|s^{(n)}_{\rm in})}$. At this point, we should note that any type of controlled evolution implies that the quantum computer must incorporate a clock of some sort. Moreover, as quantum computing is currently understood, each part of the computer must be synchronized to the same clock. It is, however, possible to implement distributed quantum computing between parties that move relativistically with respect to each other by encoding the quantum information in Lorentz invariant subspaces \cite{kok05}. A general discussion on the role of reference frames in quantum information can be found in \cite{bartlett07}. \begin{criterion}\label{crit:cooling} Any quantum computer must include a method for cooling the quantum memory. \end{criterion} Once we have built a quantum computer, we most likely want to use it more than once. Since any real quantum computer has a finite size $k$, the preparation of a computation therefore includes the erasure of the previous computation. The entropy generated in this procedure must be extracted from the quantum computer, and we call this (information theoretic) cooling. 
In addition to the entropy generated by the erasure of previous computations, entropy may leak into the quantum memory via unwanted and uncontrolled interactions with the environment. Such entropy leaks cause errors in the computation, which may be removed using quantum error correction procedures. This is also a form of cooling. Any real quantum computer must therefore satisfy Criterion \ref{crit:cooling}. The concept of cooling encompasses both the initialization of the quantum memory and error correction during the computation. Furthermore, the boundary between initialization and error correction is fuzzy. For example, in the one-way model of quantum computation, it can be argued that the initialization process is the construction of a high-fidelity graph state. In this case the initialization may include the preparation of the qubits in a ground state, as well as the entangling interactions and the various entanglement distillation procedures that are used to make the graph state. Alternatively, one may argue that initialization means only the preparation of the qubits in the pure ground state $\ket{0}$. The creation of the graph state then falls in the category of controlled quantum evolution, augmented by quantum error correction. Whether it is more natural to view the entire graph state creation as the initialization procedure, or only the qubit preparation, often depends on the physical implementation. In optical lattices it may be more appropriate to interpret initialization as the creation of the cluster state, whereas systems in which each two-qubit interaction must be invoked separately favor qubit initialization as the natural interpretation. A consequence of this fuzziness between initialization and error correction is that it is not important what pure state the system is cooled to. In practice, this will depend on the most accessible states of the physical implementation.
Once the system is cooled down sufficiently, the different types of control (both quantum evolution and further error correction) can bring the quantum memory into any desired state. \begin{criterion}\label{crit:readout} Any quantum computer must provide a readout mechanism for subsets of the quantum memory. \end{criterion} According to Definition \ref{def:kqubitQC}, a quantum computer produces a classical bit string $\smash{s_{\rm out}^{(m)}}$ as the output. This implies that the quantum computer includes a mechanism that translates the output state of the quantum memory to a classical bit string. This is done via measurement, and in the context of computing we call this readout. Therefore, Criterion \ref{crit:readout} must be satisfied for any quantum computer. Similar to cooling, what is considered readout is a rather fluid concept. Much of error correction involves the measurement of large parts of the quantum memory, but the end user is generally not interested in these measurement outcomes. We therefore may regard this type of readout as part of the cooling mechanism. Again, the most natural interpretation depends on the physical implementation of the quantum computer. Since the different paradigms may differ in the type of control they require, they may also differ in the required readout abilities. In general, the more restrictive the controlled quantum evolution requirement is, the less restrictive the readout requirement needs to be. For example, the ability to do single-qubit measurements in an arbitrary basis can be exchanged for the ability to do single-qubit measurements along a predefined axis, if one also has the ability to do arbitrary single-qubit rotations. \section{Computational Paradigms}\label{sec:paradigms} \noindent There are potentially many ways to implement a quantum computer. For example, we can use trapped ions with optical control fields, photons in linear optical networks, electrically controlled quantum dots, etc.
The specific way the quantum computer is constructed is called the {\em architecture}. It encompasses all the details of the implementation. Apart from the architecture, there is another useful distinction in different types of quantum computers. Not all types of quantum computers treat the quantum information in the same way, and we call the different ways of implementing the computation at the abstract computational level {\em paradigms}. The paradigm can be interpreted as the way the computation $\smash{s_{\rm in}^{(n)} \to s_{\rm out}^{(m)}}$ is decomposed into primitive elements. As an example, in Definitions \ref{def:QAprimitives} and \ref{def:Qcircuit}, the primitives are chosen as the single-qubit rotations, supplemented by the two-qubit $CZ$ gate. These can be translated into other primitives, associated with the different paradigms. Finally, there is what we refer to as \emph{data abstraction and encoding}. Error correction schemes fall into this category. For example, CSS, stabilizer, and topological codes are all different ways to encode data. These encodings can be used with various (though not necessarily all) paradigms. A particular case worth mentioning is topological quantum computing, which was first introduced as the coupling of the topological data encoding with the anyonic architecture. Though the two were first described together, it has since been shown that topological codes can be used in other architectures and paradigms, such as the one-way model of quantum computing, using photons and matter qubits instead of anyons. The examples of different paradigms we will discuss shortly are the circuit model, globally controlled quantum computing, measurement-based quantum computing, and adiabatic quantum computing. Each paradigm must meet the criteria, but they are typically met in slightly different ways. Furthermore, how a particular device satisfies the criteria determines the most natural quantum computing paradigm.
In this section we will discuss the various paradigms of quantum computing, and how the criteria for quantum computing must be met. \subsection{The circuit model}\label{sec:circuit-model} \noindent In the circuit model, the computation is decomposed into logical gates that are applied successively to the qubits. This is commonly represented graphically as a circuit, where each horizontal line denotes the time evolution of a qubit, and logical gates are symbols on the lines. The circuit model is arguably the most natural way to visualize a quantum computation, and universality proofs for other paradigms often proceed by reduction to the circuit model. The key to a circuit-model description of quantum computing is a series of results showing that any unitary operator on $n$ qubits can be decomposed into a series of arbitrary single-qubit rotations and two-qubit entangling gates \cite{barenco95,barenco95b,divincenzo95}. Such a restricted set of gates is called {\em universal}. In addition, it was shown that almost any two-qubit entangling gate can be used to construct a universal set of gates \cite{Deutsch95,lloyd95}. Typical choices of the entangling gate are the $CNOT$ and the $CZ$. This universal set of operations is infinite, since it contains all possible single-qubit rotations on any one qubit. The Solovay-Kitaev theorem states that for any unitary operation $U$ there exists a finite set of gates that can efficiently implement $U$ to arbitrary precision \cite{kitaev97}. The most general form of this theorem was proved in Appendix 3 of \cite{nielsen00}, and for a history of the theorem see \cite{dawson06}. The implication of the theorem is that we can construct a quantum computer based on a finite set of gates. In order to achieve a complete computational device it is necessary to supplement the universal set of gates mentioned above with two more primitives, namely measurements and cooling.
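To make the gate-decomposition statement concrete, the following NumPy sketch (an illustrative check we add here, not part of the original references) verifies that conjugating the target qubit of a $CZ$ by Hadamards yields the $CNOT$, and that the $CZ$ is indeed an entangling gate:

```python
import numpy as np

# Single-qubit Hadamard, identity, and the two-qubit CZ gate.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# Conjugating the target qubit of CZ by Hadamards turns controlled-Z
# into controlled-X, i.e. CNOT = (I (x) H) . CZ . (I (x) H),
# because H Z H = X.
CNOT = np.kron(I2, H) @ CZ @ np.kron(I2, H)
expected = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
assert np.allclose(CNOT, expected)

# CZ is entangling: acting on |+>|+> it produces a state whose two
# Schmidt coefficients are equal, i.e. a maximally entangled state.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = CZ @ np.kron(plus, plus)
svals = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
assert np.allclose(svals, [1 / np.sqrt(2), 1 / np.sqrt(2)])
```

Either of the two standard entangling gates ($CNOT$ or $CZ$) can thus be obtained from the other using only single-qubit rotations, which is why the choice between them is a matter of convenience.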
It is often assumed that the measurements are {\em von Neumann} measurements, which leave the system in the eigenstate corresponding to the measurement outcome. This type of measurement can act as a cooling mechanism as well, since it transforms mixed states into pure states. In general, however, von Neumann measurements may be difficult to implement (for example when the quantum information is carried by a photon, which is usually destroyed by the detector), and we will assume here that measurement and cooling are independent requirements. Fault tolerance thresholds in the circuit model (as well as in other paradigms) depend very much on the implementation of the quantum error correction codes. Consequently, the thresholds vary substantially. Early calculations yielded thresholds for the error probability per gate around $10^{-6}$ \cite{kitaev97,preskill98,knill98,aharonov99}, and $10^{-4}$ \cite{gottesman97}. Steane proved a threshold of $3\cdot 10^{-3}$ \cite{steane03}, and Knill derived a threshold of about one percent \cite{knill04b}. Models that allow only nearest-neighbour interactions have thresholds on the order of $10^{-4}$ to $10^{-5}$ \cite{svore:022317}. \begin{table}[t] \caption{\label{tab:circuit} Quantum computing criteria for the circuit model. } \begin{indented} \item[]\begin{tabular}{@{}ll}\br {\bf C0.} & Identifiable, and addressable, individual `qubits'; \\ {\bf C1.} & the ability to implement a universal set of gates; \\ {\bf C2.} & the ability to reduce the entropy of the state of any qubit; \\ {\bf C3.} & the ability to measure any single qubit in the computational basis. \\ \br \end{tabular} \end{indented} \end{table} Summarizing, the requirements for universal fault-tolerant quantum computation using the circuit model are given in Table~\ref{tab:circuit}. We have put `qubits' in quotes, since there are circuit-model proposals that act on qudits or continuous variables \cite{lloyd99}.
However, these proposals can always be recast in terms of qubits, including continuous variables (with finite precision). For qubits, this list reduces to the DiVincenzo criteria, where our Criterion 0 encompasses DiVincenzo's Criteria (i) and (iii); our Criterion 1 is equal to Criterion (iv); our Criterion 2 is equivalent to Criterion (ii), and our Criterion 3 is DiVincenzo's Criterion (v). \subsection{Global control}\label{sec:global-control} \noindent The second paradigm we consider here is usually called `globally-controlled quantum computing', or `global control', for short. In 1993 Seth Lloyd described the fundamental idea behind this paradigm, which differs significantly from the circuit model in that it does not employ a universal set of single- and two-qubit gates \cite{lloyd93}. Instead, the quantum memory consists of units called {\em quantum cells}, which have some controllable (often nearest-neighbour) interactions that can be switched on and off. The quantum evolution associated with the algorithm then proceeds via operations that act on all quantum cells indiscriminately. As an example, consider a one-dimensional spin-$\frac{1}{2}$ lattice (this can be a crystal, optical lattice, etc.), often called a {\em spin chain}. Suppose further that the method of addressing the spins in the lattice is through some magnetic field. A homogeneous field affecting all spins identically is typically easier to achieve than a gradient field that couples to only a few, or even a single lattice spin. Similarly, the distance between interacting spins is much shorter than optical wavelengths, making individual optical control of the spins extremely challenging. A global-control paradigm offers clear practical benefits in these situations. In Lloyd's original scheme, the spins in the one-dimensional lattice are of three different types, or species, $A$, $B$, and $C$, arranged in a cyclically repeating order with switchable nearest-neighbour interactions.
Each species can be addressed independently of the other species. A setup that accommodates this type of control is a crystal or polymer with three different species of nuclei, and where each nucleus is coupled to its neighbours. The controlling mechanism then consists of magnetic pulses tuned to the resonance frequencies of the target nuclei. Lloyd showed that even such limited control over a homogeneous system like this allows for universal quantum computation. The scheme involves initializing the system to a particular state where the entire lattice, except for a small region, is set to a fiducial initial state. The small region is set to a state known as the \emph{control pointer}. Logical operations then consist of applying homogeneous pulses that act non-trivially only in the vicinity of the control pointer state. These operations can change the state of logical qubits in the vicinity of the pointer, or move the pointer up and down the chain. It was further shown by Benjamin \cite{Benjamin:2004eu,Benjamin:2004wd,Benjamin:2004nx,Benjamin:2003oq} that two different spin species, $A$ and $B$, still suffice for universal quantum computation. Other possible implementations are discussed in \cite{Vollbrecht:2006lq,cp07,fitzsimmons07,Fitzsimons:2006ul}. Readout is done through a global, or \emph{bulk}, measurement on a cell species. This means that the state of any one particular cell is not readily available, but rather it is possible to make a measurement of a global character that does not distinguish between cells. An example measurement of this type is the bulk magnetization of a spin ensemble. Mathematically, it is similar to projecting onto eigenspaces spanned by computational basis states of a certain Hamming weight.
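The correspondence between bulk measurements and Hamming-weight subspaces can be checked directly. The following NumPy sketch (our illustration; taking the total-$Z$ magnetization as the bulk observable is an assumption made for concreteness) verifies that every computational basis state of weight $w$ on $n$ spins is an eigenstate of the total magnetization with eigenvalue $n-2w$, so that a bulk measurement resolves only the weight, not individual cells:

```python
import numpy as np
from itertools import product

n = 3
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def embed(site, op, n):
    """Single-qubit operator `op` acting on qubit `site` of an n-spin chain."""
    out = np.eye(1)
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

# Bulk magnetization: the sum of Z over all spins, with no site resolution.
M = sum(embed(k, Z, n) for k in range(n))

# Each computational basis state |x> is an eigenstate of M with
# eigenvalue n - 2*weight(x): the eigenspaces of M are exactly the
# fixed-Hamming-weight subspaces.
for idx, bits in enumerate(product([0, 1], repeat=n)):
    e = np.zeros(2 ** n)
    e[idx] = 1.0
    assert np.allclose(M @ e, (n - 2 * sum(bits)) * e)
```

Since all basis states of the same weight share one eigenvalue, measuring $M$ projects onto the corresponding weight subspace, exactly as described in the text.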
This might not seem powerful enough to give the output of a computation, but it is indeed possible to use the finite nature of the cell chain, and the particular boundary conditions, to extract the result of the computation \cite{lloyd93,Benjamin:2004eu}. Another possible technique includes the use of spin-amplification \cite{perez06}. Quantum Cellular Automata (QCA) are similar to spin chains, with the added constraint that the evolution is homogeneous not only in space, but also in time. In other words, the global operation is a repeating cycle of a set of operations. QCA are an important theoretical construct, in that they are the natural generalization of the classical model of computation based on cellular automata \cite{cp07}. Quantum error correction in globally controlled systems was discussed by Bririd {\em et al}.\ \cite{bririd}. Fitzsimons \cite{fitzsimmons07,Fitzsimons:2006ul} proved the existence of a threshold, and Kay \cite{kay-2007,kay-2005} proved a fault tolerance theorem with a threshold of $10^{-11}$. Both fault tolerant protocols require the ability to cool spins using global control. One way to achieve cooling is to endow all spins (or at least a large subset of them) with a third, unstable state $\ket{2}$. Spontaneous emission of a photon from this state then produces the required ground state $\ket{0}$. It is not normally assumed that these emissions can be detected. However, this does not affect the initialization procedure. QCA have been shown to be universal for quantum computing \cite{Vollbrecht:2006lq,watrous,cp07}, but it remains an open question whether they can be implemented in a fault tolerant manner. The requirements for universal fault-tolerant quantum computation using the globally controlled array model are summarized in Table~\ref{tab:global}. \begin{table}[t] \caption{\label{tab:global} Quantum computing criteria for global control models.
} \begin{indented} \item[]\begin{tabular}{@{}ll}\br {\bf C0.} & Identifiable and {\em globally} addressable individual quantum cells; \\ {\bf C1.} & the ability to implement a universal set of global operators; \\ {\bf C2.} & the ability to reduce the entropy of the global state of a species of cells;\\ {\bf C3.} & the ability to make a global measurement on a species of cells. \\ \br \end{tabular} \end{indented} \end{table} \subsection{Measurement-based quantum computing}\label{sec:measurement-based} \noindent Measurement-based quantum computing has its origins in at least two converging lines of research. First, it was realized in the quantum computing community that two-qubit gates are generally far more difficult to implement with high fidelity than single-qubit gates. This led to the concept of gate teleportation \cite{gottesman99}, in which the quantum channel of a teleportation event is modified to induce a specific gate on the teleported qubits. The most famous application of gate teleportation is the demonstration that it is possible to build a quantum computer with only single photons, linear optical elements, and photon counting \cite{knill01}. The second line of research that led to measurement-based quantum computing was the study of practical applications of large-scale entanglement with a lattice structure \cite{briegel01}. These so-called {\em cluster states} appear naturally in optical lattices, and many other regular structures that are characterized by nearest-neighbour interactions (such as the Ising interaction). Measurement-based quantum computation relies on the preparation of a large entangled state, the cluster state, which is typically a regular lattice where each vertex represents a qubit initially in the state $(\ket{0}+\ket{1})/\sqrt{2}$, and each edge a $CZ$ operation \cite{raussendorf01}. The computation proceeds by making single-qubit measurements on a subset of the qubits.
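The vertex-and-edge definition above is easy to verify numerically for the smallest nontrivial case. The following NumPy sketch (an illustration we add here) builds a three-qubit linear cluster state and checks its defining stabilizer relations $K_a\ket{\psi}=\ket{\psi}$, where $K_a = X_a \prod_{b \in N(a)} Z_b$ and $N(a)$ denotes the neighbours of vertex $a$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def kron(*ops):
    return reduce(np.kron, ops)

def cz(i, j, n):
    """CZ between qubits i and j of n qubits: phase -1 iff both are |1>."""
    U = np.eye(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[i] and bits[j]:
            U[idx, idx] = -1.0
    return U

# Three-qubit linear cluster: every qubit in |+>, one CZ per edge.
n = 3
psi = kron(plus, plus, plus)
psi = cz(0, 1, n) @ cz(1, 2, n) @ psi

# Stabilizers K_a = X_a (Z on each neighbour of a) for the line graph.
stabilizers = [kron(X, Z, I2), kron(Z, X, Z), kron(I2, Z, X)]
for K in stabilizers:
    assert np.allclose(K @ psi, psi)
```

The same construction extends to any graph: each edge contributes one $CZ$, and the stabilizers follow from conjugating the initial $X_a$ operators through the $CZ$ gates.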
The measurement outcomes determine the single-qubit observables for the next measurement round, and so on. Many classes of large entangled states have been identified as universal resources for quantum computing. The quantum program is encoded entirely in the single-qubit measurement bases \cite{raussendorf03,hein04}. Because of this measurement-driven approach, and the fundamental irreversibility of the measurement procedure, this is also called the {\em one-way model} of quantum computing. One of the major advantages of the one-way model over the circuit model is that the universal resource state, i.e., the cluster state, does not carry any information about the computation. It is therefore possible to create these states by any (efficient) means, not necessarily using high-fidelity deterministic two-qubit gates \cite{nielsen04,browne05,barrett05}. This can significantly reduce the requirements for building a quantum computer. The fulfillment of the four criteria for building a quantum computer in the one-way model is somewhat fluid, in the sense that we can interpret certain aspects of the model as falling under different criteria. First, the quantum memory of a quantum computer based on the one-way model consists of qubits, although the model can also be defined on qudits \cite{zhou03,tame06} and qunats \cite{menicucci06}. The qubits must be addressable to the extent that any single-qubit measurement in the equatorial plane of the Bloch sphere, as well as a computational basis measurement, can be reliably performed. Similar requirements exist for measurement-based protocols that are based on other types of information carriers, such as continuous variables \cite{menicucci06}. Second, the controlled quantum evolution is somewhat hidden in measurement-based quantum computation. We start with a large entangled resource state that is typically a stabilizer state, which permits an efficient classical description.
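How a measurement drives the computation is visible already in the elementary two-qubit `one-bit teleportation' step. The sketch below (our numerical illustration; sign and phase conventions for the measurement basis differ across the literature) entangles an arbitrary input with a $\ket{+}$ qubit via $CZ$, measures the first qubit at angle $\theta$ in the equatorial plane, and checks that the surviving qubit carries $X^{s} H R_z(-\theta)\ket{\psi}$ for outcome $s$, up to a global phase:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

rng = np.random.default_rng(1)
# Random input state on qubit 1, |+> on qubit 2, entangled by CZ.
a = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_in = a / np.linalg.norm(a)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = CZ @ np.kron(psi_in, plus)

theta = 0.7
for s in (0, 1):  # check both measurement outcomes
    # Measure qubit 1 in the basis (|0> + (-1)^s e^{i theta}|1>)/sqrt(2).
    bra = np.array([1.0, (-1) ** s * np.exp(1j * theta)]).conj() / np.sqrt(2)
    out = np.kron(bra, np.eye(2)) @ state      # unnormalized post-state
    out = out / np.linalg.norm(out)
    # The surviving qubit carries X^s H Rz(-theta)|psi_in>, up to phase.
    target = np.linalg.matrix_power(X, s) @ H @ Rz(-theta) @ psi_in
    assert abs(abs(np.vdot(target, out)) - 1) < 1e-12
```

The byproduct operator $X^{s}$ is why later measurement bases must be adapted to earlier outcomes: the correction is tracked classically and absorbed into subsequent measurement angles.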
The measurements will remove qubits from the state, and in doing so drive the entangled resource state to different states that are typically no longer efficiently describable as stabilizer states. Another sense in which measurement-based quantum computing needs controlled quantum evolution is in the creation of the entangled universal resource. If we use probabilistic gates, we have to put up with an inherent lack of control. However, we can choose efficient strategies that allow us to induce the control necessary to create the universal resource \cite{gross06}. Alternatively, we can use near-deterministic gates with additional purification and distillation \cite{lim06}. Third, the creation of the entangled resource can also be regarded as cooling to the ground state of a suitable many-body Hamiltonian. Still, even after the cluster state has been produced we have to allow for quantum error correction protocols, since the cluster state must be protected from errors while its qubits are waiting to be measured. Finally, the readout mechanism is of course central to the measurement-based model. Not only does the one-way quantum computer need readout for the final step of retrieving the outcome of the computation, but the computation itself is also driven by the measurements. The criteria are summarized in Table~\ref{tab:mbqc}. Here we encounter a rare occasion of a paradigm for which one criterion implies another. In this case, Criterion 1 implies Criterion 3. This indicates that measurement-based quantum computation may in some cases be easier to implement than other paradigms. \begin{table}[t] \caption{\label{tab:mbqc} Criteria for measurement-based quantum computing.
} \begin{indented} \item[]\begin{tabular}{@{}ll}\br {\bf C0.} & Identifiable, and addressable, individual qubits; \\ {\bf C1.} & the ability to implement single-qubit measurements in a large subset of bases; \\ {\bf C2.} & the ability to cool the quantum memory to a universal entangled resource state; \\ {\bf C3.} & the ability to measure any single qubit in the computational basis. \\ \br \end{tabular} \end{indented} \end{table} The one-way model of quantum computing can be made fault tolerant as well. Two threshold theorems by Nielsen and Dawson (2005) give the maximum allowable errors when the cluster states are created with noisy deterministic linear optical entangling gates, and when only probabilistic (noisy) linear optical entangling gates are available \cite{nielsen05}. Raussendorf and Harrington derived a threshold theorem for general two-dimensional cluster states, and found a maximum error of 0.75\% \cite{raussendorf06,raussendorf07,raussendorf07b}. \subsection{Adiabatic control}\label{sec:adiabatic} \noindent The final paradigm we consider here is adiabatic quantum computing. The main difference from the previous paradigms is that in the adiabatic model the quantum information is not processed in discrete time steps (i.e., gates), but in a continuous fashion. Of course, all gates must also operate continuously in the temporal domain in any practical implementation, but the important distinction here is that in non-adiabatic paradigms the quantum program is defined \emph{procedurally}. In other words, an algorithm is composed of discrete time steps, at which specific operations are carried out. Adiabatic quantum computing is a complete departure from this line of thinking. The core of this paradigm is the {\em adiabatic theorem}, first developed and proven by Born and Fock \cite{born}. 
The theorem states that a physical system will remain in its instantaneous eigenstate if the perturbation acting on it changes slowly enough compared with the gap between the corresponding eigenvalue and the rest of the Hamiltonian's spectrum. The basic idea of the adiabatic paradigm is that instead of specifying a sequence of gates that transforms the input into the desired output, one specifies a Hamiltonian $H_f$ whose ground state represents the solution to the computational problem. The quantum computer starts in the ground state of a Hamiltonian $H_0$, and the computation proceeds by adiabatically changing the initial Hamiltonian $H_0$ into $H_f$. The paradigm has a very distinct elegance, since we state not the procedure, but rather the desired output, in terms of a Hamiltonian. In classical computing this approach leads to various benefits, for example, easily provable correctness of algorithms. The standard example of this paradigm is the adiabatic implementation of a Grover search. The problem is to find a marked bit string $s$ of size $n$, out of a possible $N=2^n$ bit strings. Grover showed by explicit construction that this can be solved in time $O(\sqrt{N})$, whereas the most efficient classical algorithm scales as $O(N)$. In the adiabatic paradigm we choose $H_f = \mathbb{I} - \ket{s}\bra{s}$. The ground state of this Hamiltonian is $\ket{s}$, i.e., the solution to the problem. Moreover, the Hamiltonian can be constructed without explicit knowledge of $\ket{s}$ (which would defeat the purpose), but rather by properly encoding the condition that marks $s$ as the solution. Since there is no \emph{a priori} knowledge of which string is the marked one, we start the system in a superposition $\ket{\psi_0}$ of all possible strings, and we let $H_0 = \mathbb{I} - \ket{\psi_0}\bra{\psi_0}$. Evolving adiabatically from $H_0$ to $H_f$, and measuring the final state, will provide the desired outcome $s$.
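For small instances this adiabatic Grover search can be simulated directly. The NumPy sketch below (illustrative only: it uses a simple linear interpolation schedule on a tiny instance, and therefore need not exhibit the optimal scaling) sweeps from $H_0$ to $H_f$ and checks that the final state is concentrated on the marked item:

```python
import numpy as np

n = 3                      # qubits; search space of N = 2**n items
N = 2 ** n
s_idx = 5                  # index of the marked item (hidden in H_f)

# H0 = I - |psi0><psi0| with |psi0> the uniform superposition;
# Hf = I - |s><s|, whose ground state encodes the answer.
psi0 = np.ones(N) / np.sqrt(N)
H0 = np.eye(N) - np.outer(psi0, psi0)
Hf = np.eye(N)
Hf[s_idx, s_idx] = 0.0

T, steps = 200.0, 2000     # total evolution time and discretization
dt = T / steps
psi = psi0.astype(complex)
for k in range(steps):
    t = (k + 0.5) * dt
    Hs = (1 - t / T) * H0 + (t / T) * Hf
    # Exact step for the piecewise-constant Hermitian Hamiltonian,
    # via its eigendecomposition: psi -> exp(-i Hs dt) psi.
    w, V = np.linalg.eigh(Hs)
    psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

p_success = abs(psi[s_idx]) ** 2
assert p_success > 0.9     # the slow sweep finds the marked item
```

Shortening $T$ drives the system through the minimum gap (of order $1/\sqrt{N}$ at the midpoint of the sweep) too quickly, and the success probability drops accordingly; this is the trade-off that the Roland-Cerf schedule optimizes.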
Roland and Cerf calculated the time constraints needed in order to maintain adiabaticity, and showed that the total running time of the algorithm is $O(\sqrt{N})$ \cite{cerf02}. It is straightforward to adapt this algorithm to solve any $NP$ problem, again in $O(\sqrt{N})$ time (where $N$ is the size of the search space, or set of possible solutions). Recently, Aharonov {\em et al}.\ demonstrated a procedure for adapting any quantum circuit algorithm into an adiabatic procedure \cite{aharonov07}. However, the straightforward translation of algorithms can produce unwieldy results. Even though all paradigms have the same computational power, some algorithms are better described in one model rather than another. In the case of adiabatic quantum computation, the natural algorithm is Grover's search. From an implementation standpoint, perhaps the biggest drawback of adiabatic quantum computation is the very specific, and non-trivial, requirements on $H_f$. While in the quantum circuit model all quantum computation can be reduced to single- and two-qubit gates, in the adiabatic model we often need many-body interactions. For example, three-body interactions are required for the naive implementation of satisfiability, and up to five-body interactions are necessary for general quantum circuit simulation. If we implement a fault-tolerant adiabatic computation, interactions of even higher order may be required. The error correction schemes that have been developed for adiabatic quantum computing are so far based on using quantum error detecting codes in the Hamiltonian, ensuring that any logical error would drive the energy of the system considerably upward \cite{jordan06}. This should ensure that the system, assuming it stays sufficiently cool, does not leave the ground state. It is in principle possible to use longer and longer codes, which would protect against larger and larger entropy in the system. However, this also requires higher-order many-body terms in the Hamiltonian.
Progress is being made towards complete fault tolerance for adiabatic quantum computing \cite{lidar08}, but there currently exists no threshold theorem. We summarize the criteria for universal fault-tolerant adiabatic quantum computation in Table~\ref{tab:adiabatic}. However, since the precise requirements for fault tolerance are not known at this time, these criteria may change. In particular, the required order of the many-body interactions is not yet known. \begin{table}[t] \caption{\label{tab:adiabatic} Criteria for adiabatic quantum computing. } \begin{indented} \item[]\begin{tabular}{@{}ll}\br {\bf C0.} & Identifiable, and addressable, individual qubits; \\ {\bf C1.} & the ability to implement arbitrary Hamiltonians consisting of many-body interaction \\ & terms; \\ {\bf C2.} & the ability to reduce the entropy of the quantum memory to the ground state of a \\ & prescribed Hamiltonian; \\ {\bf C3.} & the ability to measure any single qubit in the computational basis. \\ \br \end{tabular} \end{indented} \end{table} \subsection{Hybrid architectures}\label{sec:hybrid} \noindent While many architectures for quantum computers use only one particular paradigm, this is not necessary in general. It is possible that specific implementations of a quantum computer use a combination of paradigms for different parts of the computation. Several hybrid schemes have been proposed already, and we mention a few of them here. First, different paradigms may be used to implement a quantum computer, and to connect the several computers in a network. For example, a circuit model-QCA hybrid architecture was proposed by Laflamme and Cory for a universal quantum computer that may be developed in the near future. Such a device would have several universal quantum registers, each with the same number of logical qubits. These registers are linked to each other via spin chains, which are in turn controlled as a QCA, and act as conveyor belts for information between the universal registers.
Second, different paradigms may be used for creating the universal quantum evolution on the one hand, and the error correction protocols on the other. For example, a hybrid architecture using the one-way model and the circuit model was proposed by Campbell and Benjamin \cite{campbell08}. This is based on a distributed scheme where distant qubits are entangled via optical path erasure, but instead of a single qubit at each distant site there may be several qubits. These few-qubit processors process quantum information using the circuit model. An earlier version of such an architecture is the broker-client scheme for efficiently creating large cluster states \cite{benjamin06a}. The number of qubits per site may in principle become quite large, allowing error correction and other low-level information management at the site level, and having only logical or higher level operations occur via optically entangling remote operations. Third, we can even use different paradigms in several aspects of the quantum evolution. For example, we can decompose the evolution in terms of a quantum circuit, but use adiabatic control to implement the universal set of quantum gates \cite{gauger08}. In short, even if none of the existing paradigms suit a particular physical setup perfectly, it might still be possible to tailor a hybrid scheme. Finally, we should note that there are quite possibly many more universal quantum computational paradigms. \subsection{Quantum communication} \noindent The four criteria for quantum computing also apply to quantum communication, albeit in slightly modified form. First, we can regard quantum communication as a form of distributed quantum computing, even though communication protocols are typically much simpler than general quantum algorithms. We therefore consider the criteria explicitly for the case of distributed quantum computing. Clearly, the criteria are then {\em sufficient} for quantum communication. However, they are not quite {\em necessary}.
Quantum communication requires quantum memories, cooling, and readout, but only a restricted set of quantum algorithms need to be performed. We therefore modify Criterion \ref{crit:evolution} to \noindent {\bf Criterion 1a.} Any quantum communication device must facilitate a restricted set of controlled quantum evolutions of the quantum memory. \noindent The restricted set consists of the identity, and simple operations, such as the swap operation. The measures that determine whether the criteria are satisfied are still scalability and fault tolerance. Scalability is almost immediate: when Alice and Bob are able to communicate a single qubit at a cost $C$, they will be able to communicate $N$ qubits at cost $NC$, simply by repeating the procedure for communicating a single qubit. Fault tolerance for quantum communication can also be defined. In particular, this is illustrated by the security proof of BB84 by Shor and Preskill \cite{shor00}. Security is defined in terms of the maximum allowable mutual information between Alice and Bob on the one hand, and the environment on the other. Error correction can then be used to minimize the mutual information. Shor and Preskill found a maximum allowed error of about 11\%. This can be regarded as a threshold theorem for BB84. \section{Selecting a Paradigm} \noindent With the availability of various paradigms for quantum computing, the natural question for any experimentalist is `which paradigm is best suited for my physical system?'. In order to help answer this question, we present a decision tree (shown in Fig.~\ref{fig:flowchart}) that may act as a rough guide towards implementing fault tolerant quantum computation. To this end, we discuss some aspects of scalability, addressability of the qubits, and the type of qubit control that is most natural for the physical system. \subsection{Monolithic vs. Modular Scalability} \noindent Scalability is required for any implementation of a quantum computer.
Here, we consider two types of scalability, namely {\em monolithic} scalability, and {\em modular} scalability. Scalability is monolithic when the quantum computer is a single device, and we can increase the size of the device while still satisfying the criteria. One example of a monolithically scalable setup is solid-state NMR. Here, each nucleus is a physical qubit, and scaling the device simply implies using a larger solid with more nuclei. Other examples include optical lattices, quantum dots, and superconducting qubits. An example of a system that is not monolithically scalable is an atom in a cavity. We cannot double the number of atoms in the cavity without altering the physics of the device. However, atoms in cavities can exhibit modular scalability. We can increase the number of cavities (with atoms inside), which in turn can be entangled remotely, for example using mediating photons. A scalable quantum computer can then be built by using various cavities, or more abstractly and in general, various {\em modules}. There is a certain amount of arbitrariness in determining whether a system is modular or monolithic. For example, a full-scale quantum internet can be interpreted as a modular system, even if each module is a monolithic $k$-bit quantum computer. Whether a system should be regarded as modular or monolithic therefore depends in large part on the context. \begin{figure} \caption{Example of a decision tree to determine which quantum computing paradigm is most suitable for a given physical setup.} \label{fig:flowchart} \end{figure} While it is possible to force both monolithic and modular quantum devices into any of the paradigms above, some paradigms can be seen as more natural. A physical setup that exhibits modular scalability is best suited for the measurement-based quantum computing paradigm of Sec.\ \ref{sec:measurement-based}.
A setup that is monolithically scalable may be better suited for the circuit model (Sec.\ \ref{sec:circuit-model}), or global-control (Sec.\ \ref{sec:global-control}), depending on the other device properties. Setups that are somewhere between monolithically and modularly scalable can benefit from a hybrid approach (see Sec.\ \ref{sec:hybrid}). \subsection{Addressability} \noindent The quantum memory of the device, whether it be monolithic or modular, must be addressable in some form. We have seen that different paradigms place different requirements on the precise character of this addressability. In the circuit model and the measurement-based model, each qubit must be addressed individually, while in the global control model the quantum memory is addressed without discriminating between qubits. This typically places restrictions on the type of interactions between the qubits (see Sec.\ \ref{sec:global-control}). It is a natural assumption that each module in a modular device can be distinguished and addressed independently of the other modules. Hence, addressability is really only a concern for the physical qubits \emph{within} each module, and is of greatest concern for monolithic setups. Therefore, the paradigm best suited for monolithic quantum systems with limited addressability is globally controlled quantum computing. Another important issue is the level of addressability. Like other features discussed before, addressability is not a binary condition, but rather there is a full gradient of possibilities. At one extreme, there is complete individual addressability of each subsystem (qubit, quantum cell, \emph{etc}.); at the other, each subsystem is completely indistinguishable from any other. In between, we have systems, and architectures, where there are two, three, or more \emph{distinguishable} species. Furthermore, these may be spatially ordered in a homogeneous or inhomogeneous fashion.
Physical examples of these differences include a carbon nanotube, where the individual carbon nuclei are the fundamental quantum subsystems, all indistinguishable from one another; and a polymer consisting of two or three different nuclei arranged in a repeating pattern. Various proposals, as discussed in Sec.\ \ref{sec:global-control}, are available, each more or less relevant depending on the exact nature of the physical device. \subsection{Quantum Evolution Implementation} \noindent Finally, we consider the type of control that we may have over a device. This depends on various aspects of the architecture. For instance, modular devices, and those with limited addressability, impose restrictions on the type of control available to the user or programmer of the quantum device. Even without these constraints, however, it is possible to choose between two fairly distinct methods of quantum control, namely adiabatic or non-adiabatic quantum evolution. In our decision tree in Fig.~\ref{fig:flowchart}, this choice is the final bifurcation that indicates which quantum paradigm is most suitable for a particular device. Again, the distinction between adiabatic and non-adiabatic control can be blurry in some cases. As an example of a hybrid approach to quantum control we refer to Gauger {\em et al}., who propose an implementation in which the computational paradigm is the circuit model, but (some of) the individual gates are operated in an adiabatic manner \cite{gauger08}. In general, the decision tree should be taken only as a rough guide for experimentalists towards the most suitable implementation, given their experimental setup. As new paradigms are developed, the decision tree will grow more branches, and other questions about the capabilities of the physical devices must be answered.
Nevertheless, the four criteria themselves are independent of the computational paradigm, and should therefore once more be regarded as the central dogma for implementing a quantum computer. \section{Conclusions} \noindent The DiVincenzo criteria have been extremely influential in focussing the theoretical and experimental research in quantum computing. However, since the initial formulation of the criteria for the circuit model, several new paradigms for quantum computation have been invented or developed further. As a consequence, the original criteria are sometimes violated in certain paradigms. In this paper, we have generalized the DiVincenzo criteria to take into account new paradigms, such as the one-way model, globally controlled quantum computing, and adiabatic quantum computing. We distilled the criteria down to four general requirements, namely the availability of a quantum memory, the ability to induce a (near) unitary evolution, the ability to implement (information-theoretic) cooling, and readout of the quantum memory. These criteria are derived directly from a new definition of a quantum computer. We distinguish between an ideal quantum computer, which has arbitrarily large size, and a $k$-bit quantum computer that can have a physical implementation. The desiderata that determine whether the criteria are met are fault tolerance and scalability. In addition to the four criteria for quantum computing, we constructed a decision tree that may help experimentalists decide which paradigm is the most natural for a particular physical implementation. This decision tree will have to be updated whenever new paradigms for quantum computing are invented. However, the criteria are independent of new paradigms.
\section*{Acknowledgments} \noindent The authors wish to acknowledge valuable discussions with and comments from Sean Barrett, Niel de Beaudrap, Earl Campbell, Donnie Chuang, Irene D'Amico, David Deutsch, Richard Jozsa, Andrew Steane, Terry Rudolph, Stefan Weigert, and David Whittaker. \section*{References} \end{document}
\begin{document} \title{The homotopy type of spaces of resultants of bounded multiplicity} \begin{abstract} For positive integers $m,n, d\geq 1$ with $(m,n)\not= (1,1)$ and a field $\Bbb F$ with its algebraic closure $\overline{\Bbb F}$, let $\mbox{{\rm Poly}}^{d,m}_n(\Bbb F)$ denote the space of all $m$-tuples $(f_1(z),\cdots ,f_m(z))\in \Bbb F [z]^m$ of monic polynomials of the same degree $d$ such that the polynomials $f_1(z),\cdots ,f_m(z)$ have no common root in $\overline{\Bbb F}$ of multiplicity $\geq n$. These spaces were defined by Farb and Wolfson in \cite{FW} as generalizations of spaces first studied by Arnold, Vassiliev, Segal and others in different contexts. In \cite{FW} they obtained algebro-geometric and arithmetic results about the topology of these spaces. In this paper we investigate the homotopy type of these spaces for the case $\Bbb F =\mathbb{C}$. Our results generalize those of \cite{FW} for $\Bbb F =\Bbb C$ and also results of G. Segal \cite{Se}, V. Vassiliev \cite{Va} and F.~Cohen, R.~Cohen, B.~Mann and R.~Milgram \cite{CCMM} for $m\geq 2$ and $n\geq 2$. \end{abstract} \section{Introduction}\label{section 1} \paragraph{1.1 Historical survey.} There are two related intriguing phenomena that have been observed in various situations and can be roughly described as follows. Let $X$ and $Y$ be two manifolds with some structure (e.g. holomorphic, symplectic, real algebraic) and let $\{M_d\}$ be a family of subspaces of structure-preserving continuous mappings $X\to Y$ indexed by \lq\lq degree\rq\rq (for a suitable notion of index) with \lq\lq stabilization mappings\rq\rq\ $q_d: M_d \to M_{d+1}.$ The first phenomenon is that the homology (homotopy) groups of the subspaces $M_d$ stabilize, that is: the mappings $q_d$ are homology (homotopy) equivalences up to some dimension $n(d)$ which is a monotonically increasing function of $d$.
The other phenomenon is that a certain limit of the subspaces $M_d$ is homology (homotopy) equivalent to the space of all continuous mappings $X\to Y$. It seems that the first appearance of such phenomena was in the work of V. I. Arnold \cite{Ar}. Arnold considered the space $\mbox{{\rm SP}}_n^d(\Bbb C)$ of complex monic polynomials of degree $d$ without roots of multiplicity $\geq n$. For the case $n=2$ this is the same as the space of monic polynomials without repeated roots (with non-zero discriminant), whose fundamental group is the braid group $\mbox{Br}(d)$ on $d$ strands and whose cohomology is that of the braid group $\mbox{Br}(d)$. Arnold computed the homology of these braid groups and established their homological stability. The corresponding relationship between these spaces of polynomials and spaces of continuous maps is given by the May-Segal theorem \cite{Se1}. Similar results also hold in the real polynomial case (\cite{GKY2}, \cite{KY1}, \cite{Va}). \par Other analogous phenomena were discovered by G. Segal \cite{Se} in a different context, inspired by control theory (later a close relationship with mathematical physics was also discovered \cite{AJ}). Segal considered the space $\mbox{{\rm Hol}}^*_d(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})$ of based holomorphic maps from $\Bbb C {\rm P}^1$ to $\Bbb C {\rm P}^{n-1}$ of degree $d$ and its inclusion into the space $\mbox{{\rm Map}}_d^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})=\Omega^2_d\Bbb C {\rm P}^{n-1}$ of corresponding continuous maps. Intuitive considerations based on Morse theory suggest that the homotopy of the first space should approximate that of the second space more and more closely as the degree $d$ increases. Segal proved this result by observing that this space of based holomorphic mappings can be identified with the space of $n$-tuples $(f_1(z),\dots,f_n(z))\in \Bbb C [z]^n$ of monic polynomials of the same degree $d$ without common roots.
He defined a stabilization map $\mbox{{\rm Hol}}_d^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})\to \mbox{{\rm Hol}}_{d+1}^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})$ and proved that the induced maps on homotopy groups are isomorphisms up to some dimension increasing with $d$. Using a different technique (based on ideas of Gromov and Dold-Thom) he also proved that there is a homotopy equivalence $q: \varinjlim \mbox{{\rm Hol}}_d^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1}) \stackrel{\simeq}{\longrightarrow} \Omega^2_0 \Bbb C {\rm P}^{n-1}$ defined by a \lq\lq scanning of particles\rq\rq, and that this equivalence is homotopic to the inclusion of the space of all holomorphic maps into the space of all continuous maps. \par With the help of the spectral sequence for complements of discriminants (analogous to the one he used in defining invariants of knots), Vassiliev \cite{Va} showed that there is a stable homotopy equivalence \begin{equation}\label{eq: Va0} \text{SP}_n^{d}(\Bbb C) \simeq_s \mbox{{\rm Hol}}_{\lfloor \frac{d}{n}\rfloor}^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1}), \end{equation} where $\lfloor x\rfloor$ denotes the integer part of a real number $x$. \par The relationship between Arnold's and Segal's arguments can also be explained in terms of Gromov's h-principle in \cite{GKY2} (cf. \cite{Gr}), and in \cite{GKY4} it was shown that there is a homotopy equivalence\footnote{ Since $\pi_1(\mbox{{\rm SP}}^d_2(\Bbb C))=\mbox{Br}(d)$ is not commutative, while $\pi_1(\mbox{{\rm Hol}}^*_{\lfloor\frac{d}{2}\rfloor}(\Bbb C {\rm P}^1,\Bbb C {\rm P}^1))=\Bbb Z$, the two spaces $\mbox{{\rm SP}}^d_n(\Bbb C)$ and $\mbox{{\rm Hol}}^*_{\lfloor\frac{d}{n}\rfloor}(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})$ are not homotopy equivalent if $n=2$.} \begin{equation}\label{eq: KY4-result} \text{SP}_n^{d}(\Bbb C) \simeq \mbox{{\rm Hol}}_{\lfloor \frac{d}{n}\rfloor}^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1}), \quad \mbox{ if }n\geq 3 \end{equation} (see Theorem \ref{thm: GKY4}).
The argument makes use of the existence of $C_2$-operad actions on the spaces $\coprod_{d\geq 0}\mbox{{\rm SP}}^d_n(\Bbb C)$ and $\coprod_{d\geq 0}\mbox{{\rm Hol}}_d^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})$ (\cite{BM}, \cite{CS}, \cite{GKY4}). \par Recently Benson Farb and Jesse Wolfson \cite{FW} made a remarkable discovery. In order to state their results in full generality\footnote{ They have been generalized even further in \cite{FWW}. }, Farb and Wolfson defined a new algebraic variety, given in terms of $m$-tuples of monic polynomials with conditions on common roots. Namely, for a field $\Bbb F$ with its algebraic closure $\overline{\Bbb F}$, let $ \mbox{{\rm Poly}}^{d,m}_n(\Bbb F)$ denote the space of all $m$-tuples $(f_1(z),\cdots ,f_m(z))\in \Bbb F [z]^m$ of monic polynomials of the same degree $d$ with no common root in $\overline{\Bbb F}$ of multiplicity $n$ or greater. \par For example, if $\Bbb F =\Bbb C$, $\mbox{{\rm Poly}}^{d,1}_n (\Bbb C)=\mbox{{\rm SP}}^d_n(\Bbb C)$ and $\mbox{{\rm Poly}}^{d,n}_1 (\Bbb C)$ can be identified with the space $\mbox{{\rm Hol}}_d^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{n-1})$. Note that in terms of this definition the homotopy equivalence (\ref{eq: KY4-result}) can be expressed as the homotopy equivalence \begin{equation}\label{equ: conj} \mbox{{\rm Poly}}^{d,1}_n (\Bbb C) \simeq \mbox{{\rm Poly}}^{\lfloor\frac{d}{n}\rfloor,n}_1(\Bbb C) \quad \mbox{ if }n\geq 3. \end{equation} By the classical theory of resultants, the space $\text{Poly}_n^{d,m}(\Bbb C)$ is an affine variety defined by systems of polynomial equations with integer coefficients. Thus both varieties given in (\ref{equ: conj}) can be defined over $\Bbb Z$ and (by extension of scalars or reduction modulo a prime number) over any field $\Bbb F$. Farb and Wolfson computed various algebraic and arithmetic invariants (such as the number of points over a finite field $\Bbb F_q$, \'etale cohomology, etc.) of these varieties and found that these invariants are always equal.
They asked the natural question: are these varieties algebraically isomorphic for $n\ge 3$? \par In the simplest case $(d,n)=(3,3)$, Curt McMullen constructed an isomorphism between $\text{Poly}_3^{3,1}(\Bbb Z[\frac{1}{3}])$ and $\mbox{{\rm Poly}}^{1,3}_1(\Bbb Z[\frac{1}{3}])$ and a different isomorphism between $\text{Poly}_3^{3,1}(\Bbb Z/3)$ and $\mbox{{\rm Poly}}^{1,3}_1(\Bbb Z/3)$ (\cite{FW}). The formula defining the first of these isomorphisms also gives an isomorphism over $\Bbb C$ and over $\Bbb R$, and hence implies that these spaces are homeomorphic and thus homotopy equivalent. This example suggests that it is unlikely that the varieties $\mbox{{\rm Poly}}_n^{dn,1}(\Bbb Z)$ and $\mbox{{\rm Poly}}_1^{d,n}(\Bbb Z)$ are isomorphic, but that they are more likely to be so if we invert the primes dividing $d$. As before, such an isomorphism of varieties (over a local ring) induces an isomorphism over both $\Bbb C$ and $\Bbb R$. The question posed by Farb and Wolfson seems difficult to answer, and doing so would certainly require completely different methods from the ones used here. However, our results will lead to its generalization. Namely, in this paper we will show that there is a homotopy equivalence \begin{equation}\label{equ: conj1} \mbox{{\rm Poly}}^{d,m}_n (\Bbb C) \simeq \mbox{{\rm Poly}}^{\lfloor\frac{d}{n}\rfloor,mn}_1(\Bbb C) \quad \mbox{ if }mn\geq 3 \end{equation} (see Theorem \ref{thm: IV}). This naturally leads to the question of whether this homotopy equivalence derives from an isomorphism of varieties. Such an isomorphism should be defined for varieties over $\Bbb Q$ and perhaps over the ring $\Bbb Z[S^{-1}]$ where $S$ is some set of primes. Of course if this is the case, the same must hold with coefficients in $\Bbb R$. We should therefore expect to be able to prove homotopy equivalence in the real case too. We intend to pursue this topic in future work \cite{KY11}.
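\par As a concrete illustration of (\ref{equ: conj1}), consider the instance $(d,m,n)=(6,2,2)$. Then $mn=4\geq 3$ and $\lfloor \frac{d}{n}\rfloor =3$, so the space of pairs of monic polynomials of degree $6$ with no common root of multiplicity $\geq 2$ satisfies
$$
\mbox{{\rm Poly}}^{6,2}_2\simeq \mbox{{\rm Poly}}^{3,4}_1(\Bbb C) =\mbox{{\rm Hol}}_3^*(\Bbb C {\rm P}^1,\Bbb C {\rm P}^{3}).
$$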
\par We would like to note here the key role played in our argument by the \lq\lq jet map\rq\rq\ (see (\ref{equ: jet}) below). The jet map is an algebraic map and can be defined over any field $\Bbb F$. Over $\Bbb C$ it induces a homotopy equivalence of suitable stabilizations (Theorem \ref{thm: natural map}) but it is not a candidate for the conjectured isomorphism since it does not have the right target space. \par It is interesting to observe that there is another \lq\lq real analogue\rq\rq\ of (\ref{equ: conj}) given in \cite{KY1}. It is obtained by having on the left hand side the space of real polynomials without real roots of multiplicity $\ge n$ (considered by Arnold and Vassiliev \cite{Va}) and on the right hand side the space of real rational maps from $\Bbb R\mbox{{\rm P}}^1$ to $\Bbb R\mbox{{\rm P}}^{n-1}$ in the sense of Mostovoy \cite{Mo1}, which can be identified with the space of $n$-tuples of monic polynomials with real coefficients of the same degree without common real roots (but possibly with common complex roots). Since these spaces are semi-algebraic varieties rather than algebraic varieties, the above argument does not seem to apply, but intriguingly the analogue of (\ref{equ: conj1}) remains true. This will be proved in another work \cite{KY10}. \par The purpose of this article is to determine the homotopy type of the space $ \mbox{{\rm Poly}}^{d,m}_n(\Bbb F)$ for $\Bbb F =\Bbb C$, which, from now on, will be denoted simply as $ \mbox{{\rm Poly}}^{d,m}_n=\mbox{{\rm Poly}}^{d,m}_n(\Bbb C)$ (see Definition \ref{dfn: Polydmn}). The homotopy equivalence (\ref{equ: conj1}) is an immediate consequence of this (Theorem \ref{thm: IV}). Our arguments are generally analogous to those in \cite{GKY2}, \cite{GKY4} and \cite{KY6}, but the technical details are more complicated. \paragraph{1.2 Basic definitions and notations.} For connected spaces $X$ and $Y$, let $\mbox{{\rm Map}}(X,Y)$ (resp.
$\mbox{{\rm Map}}^*(X,Y)$) denote the space consisting of all continuous maps (resp. base-point preserving continuous maps) from $X$ to $Y$ with the compact-open topology. When $X$ and $Y$ are complex manifolds, we denote by $\mbox{{\rm Hol}} (X,Y)$ (resp. $\mbox{{\rm Hol}}^*(X,Y))$ the subspace of $\mbox{{\rm Map}} (X,Y)$ (resp. $\mbox{{\rm Map}}^*(X,Y))$ consisting of all holomorphic maps (resp. base-point preserving holomorphic maps). \par For each integer $d\geq 0$, let $\mbox{{\rm Map}}_d^*(S^2,\Bbb C {\rm P}^{N-1})=\Omega^2_d\Bbb C {\rm P}^{N-1}$ denote the space of all based continuous maps $f:(S^2,\infty)\to (\Bbb C {\rm P}^{N-1},*)$ such that $[f]=d\in \Bbb Z=\pi_2(\Bbb C {\rm P}^{N-1})$, where we identify $\Bbb C {\rm P}^1=S^2=\Bbb C \cup \{\infty\}$ and the points $\infty\in S^2$ and $*\in \Bbb C {\rm P}^{N-1}$ are the base points of $S^2$ and $\Bbb C {\rm P}^{N-1}$, respectively. Let $\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{N-1})$ denote the subspace of $\mbox{{\rm Map}}_d^*(S^2,\Bbb C {\rm P}^{N-1})$ consisting of all based holomorphic maps of degree $d$. \begin{remark} Let ${\rm P}^d(\Bbb C)$ denote the space consisting of all complex monic polynomials $f(z)=z^d+a_1z^{d-1}+\cdots +a_d\in \Bbb C [z]$ of degree $d$. \par If we choose the point $[1:1:\cdots :1]\in\Bbb C {\rm P}^{N-1}$ as its base point, the space $\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{N-1})$ can be identified with the space consisting of all $N$-tuples $(f_1(z),\cdots ,f_{N}(z))\in {\rm P}^d(\Bbb C)^{N}$ of monic polynomials of the same degree $d$ such that the polynomials $f_{1}(z),\cdots ,f_{N}(z)$ have no common root, i.e. the space $\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{N-1})$ can be identified with {\small \begin{equation*} \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{N-1})= \big\{(f_1,\cdots ,f_{N})\in {\rm P}^d(\Bbb C)^{N}: \{f_k(z)\}_{k=1}^N \mbox{ have no common root}\big\}.
\end{equation*} } \end{remark} \begin{definition}\label{dfn: Polydmn} (i) Let $\mbox{{\rm SP}}^d_n(\Bbb C)$ denote the space of all monic polynomials $f(z)\in {\rm P}^d(\Bbb C)$ of degree $d$ without roots of multiplicity $\geq n$. More generally, for positive integers $m,n,d\geq 1$ with $(m,n)\not= (1,1)$, let $\mbox{{\rm Poly}}^{d,m}_n$ denote the space of all $m$-tuples $(f_1(z),\cdots ,f_m(z))\in ({\rm P}^d(\Bbb C))^m$ of monic polynomials of the same degree $d$ such that the polynomials $f_1(z),\cdots ,f_m(z)$ have no common root of multiplicity $\geq n$. \par (ii) Let $(f_1(z),\cdots ,f_m(z))\in {\rm P}^d(\Bbb C)^m$. Note that $(f_1(z),\cdots ,f_m(z))\in \mbox{{\rm Poly}}^{d,m}_n$ iff the polynomials $\{f_j^{(k)}(z):1\leq j\leq m,\ 0\leq k<n\}$ have no common root. In this situation, define {\it the jet map} \begin{equation}\label{equ: jet} j^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}) \qquad \mbox{by} \end{equation} \begin{equation}\label{equ: jet embedding} j^{d,m}_n(f_1(z),\cdots ,f_m(z))= (\textit{\textbf{f}}_1(z),\cdots ,\textit{\textbf{f}}_m(z)) \end{equation} for $(f_1(z),\cdots ,f_m(z))\in \mbox{{\rm Poly}}^{d,m}_n$, where $\textit{\textbf{f}}_k(z)$ $(k=1,2,\cdots ,m)$ denotes the $n$-tuple of monic polynomials of the same degree $d$ defined by \begin{equation}\label{equ: bff} \textit{\textbf{f}}_k(z)=(f_k(z),f_k(z)+f^{\prime}_k(z),f_k(z)+f^{\prime\prime}_k(z), \cdots ,f_k(z)+f^{(n-1)}_k(z)).
\end{equation} \par Let $i^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \Omega^2_d\Bbb C {\rm P}^{mn-1}\simeq \Omega^2S^{2mn-1}$ denote the natural map defined by \begin{equation}\label{equ: jet inclusion} i^{d,m}_n(f_1(z),\cdots ,f_m(z))(\alpha) = \begin{cases} [\textit{\textbf{f}}_1(\alpha):\cdots :\textit{\textbf{f}}_m(\alpha)] & \mbox{ if }\alpha \in\Bbb C \\ [1:1:\cdots :1] & \mbox{ if }\alpha =\infty \end{cases} \end{equation} for $(f_1(z),\cdots ,f_m(z))\in \mbox{{\rm Poly}}^{d,m}_n$ and $\alpha \in S^2=\Bbb C \cup \{\infty\}$. \qed \end{definition} \begin{remark}\label{rmk: 1.3} (i) Note that $\mbox{{\rm SP}}^d_n(\Bbb C)=\mbox{{\rm Poly}}^{d,1}_n$ for $m=1$, and that we can identify $\mbox{{\rm Poly}}^{d,m}_1=\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{m-1})$ for $n=1$. It is easy to see that there is a homeomorphism \begin{equation}\label{eq: Poly} \mbox{{\rm Poly}}^{d,m}_n \cong \begin{cases} \Bbb C^{dm} & \mbox{if }d<n, \\ \Bbb C^{mn}\setminus \{(x,\cdots ,x):x\in\Bbb C\} & \mbox{if }d=n. \end{cases} \end{equation} Thus in this paper we shall mainly consider the case $m\geq 2$ with $d\geq n\geq 2$. \par (ii) A map $f:X\to Y$ is called {\it a homotopy equivalence} (resp. {\it a homology equivalence}) {\it through dimension} $D$ if the induced homomorphism $f_*:\pi_k(X)\to \pi_k(Y)$ (resp. $f_*:H_k(X,\Bbb Z)\to H_k(Y,\Bbb Z))$ is an isomorphism for any $k\leq D$. \qed \end{remark} \paragraph{1.3 Related known results.} Now, recall the following known results. First, consider the case $m=1$. Note that $\mbox{{\rm Poly}}^{d,1}_n=\mbox{{\rm SP}}^d_n(\Bbb C)$. \begin{theorem} [\cite{GKY2}, \cite{KY7}] \label{thm: KY7} The jet map $$ j^{d,1}_n:\mbox{{\rm SP}}^d_n(\Bbb C)\to \Omega^2_d\Bbb C {\rm P}^{n-1} $$ is a homotopy equivalence through dimension $(2n-3)(\lfloor \frac{d}{n}\rfloor +1)-1$ if $n\geq 3$ and it is a homology equivalence through dimension $\lfloor \frac{d}{2}\rfloor$ if $n=2$. \qed \end{theorem} Next, consider the case $m\geq 2$ and $n=1$.
In this case, we can identify $\mbox{{\rm Poly}}^{d,m}_1=\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{m-1})$ and the following result is known. \begin{theorem}[\cite{KY6}, \cite{Se}]\label{thm: KY6} If $m\geq 2$, the inclusion map $$ i_d=j^{d,m}_1:\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{m-1})\to \Omega^2_d\Bbb C {\rm P}^{m-1} \simeq \Omega^2S^{2m-1} $$ is a homotopy equivalence through dimension $(2m -3)(d+1)-1$. \qed \end{theorem} We also recall the stable result obtained by F.~Cohen, R.~Cohen, B.~Mann and R.~Milgram and its improvement due to R.~Cohen and D.~Shimamoto (\cite{CCMM}, \cite{CCMM2}, \cite{CS}). \begin{theorem}[\cite{CCMM}, \cite{CCMM2}, \cite{CS}] \label{thm: CCMM} $\mbox{{\rm (i)}}$ If $m\geq 2$, there is a stable homotopy equivalence $$ \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{m-1})\simeq_s \bigvee_{k=1}^d\Sigma^{2(m-2)k}D_k, $$ where $\Sigma^kX$ denotes the $k$-fold reduced suspension of a based space $X$ and $D_k$ is the equivariant half smash product $D_k=F(\Bbb C,k)_+\wedge_{S_k}(\wedge^kS^1)$ defined by (\ref{equ: DkS}). \par $\mbox{{\rm (ii)}}$ In particular, if $m\geq 3$, there is a homotopy equivalence $$ \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{m-1})\simeq J_2(S^{2m-3})_d, $$ where $J_2(S^{2m-3})_d$ denotes the $d$-th stage filtration of the May-Milgram model for $\Omega^2S^{2m-1}$ defined by (\ref{eq: d-th MM-model}). \qed \end{theorem} Note that the homotopy types of the spaces $\mbox{{\rm SP}}^d_n(\Bbb C)$ and $\mbox{{\rm Hol}}^*_{\lfloor \frac{d}{n}\rfloor}(S^2,\Bbb C {\rm P}^{n-1})$ are closely connected in the following sense. \begin{theorem}[\cite{GKY4}, \cite{Va}]\label{thm: GKY4} \begin{enumerate} \item[$\mbox{{\rm (i)}}$] If $n\geq 3$, there is a homotopy equivalence\footnote{ This result was also proved independently by S. Kallel. } $$ \mbox{{\rm SP}}^d_n(\Bbb C)\simeq \mbox{{\rm Hol}}^*_{\lfloor \frac{d}{n}\rfloor}(S^2,\Bbb C {\rm P}^{n-1}).
$$ \item[$\mbox{{\rm (ii)}}$] If $n=2$, there is a stable homotopy equivalence $$ \mbox{{\rm SP}}^d_2(\Bbb C)\simeq_s \mbox{{\rm Hol}}^*_{\lfloor \frac{d}{2}\rfloor}(S^2,\Bbb C {\rm P}^{1}). \qed $$ \end{enumerate} \end{theorem} \paragraph{1.4 The main results.} The main purpose of this paper is to investigate the homotopy type of the space $\mbox{{\rm Poly}}^{d,m}_n$ and generalize the above results. Since the cases $m=1$ and $n=1$ were already well studied in the above theorems, we mainly consider the case $m\geq 2$ and $n\geq 2$. Let $D(d;m,n)$ denote the positive integer defined by \begin{equation}\label{equ: number D} D(d;m,n)=(2mn-3)\Big(\Big\lfloor \frac{d}{n}\Big\rfloor +1\Big)-1. \end{equation} The main results of this paper are stated as follows. \begin{theorem}\label{thm: I} Let $m$ and $n$ be positive integers. If $mn\geq 3$, the natural map $$ i^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n \to \Omega^2_d\Bbb C {\rm P}^{mn-1}\simeq \Omega^2S^{2mn-1} $$ is a homotopy equivalence through dimension $D(d;m,n)$. \end{theorem} \begin{corollary}\label{thm: II} Let $m$ and $n$ be positive integers. If $mn\geq 3$, the jet map $$ j^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n \to \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}) $$ is a homotopy equivalence through dimension $D(d;m,n)$. \end{corollary} By using the above result and the result obtained by R. Cohen and D. Shimamoto \cite{CS}, we also obtain the following two results. \begin{theorem}\label{thm: III} If $m$ and $n$ are positive integers with $(m,n)\not= (1,1)$, there is a stable homotopy equivalence $$\displaystyle \mbox{{\rm Poly}}^{d,m}_n\simeq_s \bigvee_{k=1}^{\lfloor \frac{d}{n}\rfloor} \Sigma^{2(mn-2)k}D_k. $$ \end{theorem} \begin{theorem}\label{thm: IV} Let $m$ and $n$ be positive integers. If $mn\geq 3$, then there is a homotopy equivalence $$\displaystyle \mbox{{\rm Poly}}^{d,m}_n\simeq \mbox{{\rm Hol}}^*_{\lfloor\frac{d}{n}\rfloor}(S^2,\Bbb C {\rm P}^{mn-1}).
$$ \end{theorem} \begin{remark}\label{rmk: Theorem IV} It is easy to see that the above homotopy equivalence given in Theorem \ref{thm: IV} can also be expressed in the form (\ref{equ: conj1}). \qed \end{remark} This paper is organized as follows. In \S \ref{section: simplicial resolution} we recall the definition of simplicial resolutions, and in \S \ref{section: spectral sequence} we construct the Vassiliev type spectral sequences converging to the homologies of the spaces $\mbox{{\rm Poly}}^{d,m}_n$ and $\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1})$. In \S \ref{section: sd}, we consider the stabilization map $s^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \mbox{{\rm Poly}}^{d+1,m}_n$ and prove the homological stability of the map $s^{d,m}_n$ (Theorem \ref{thm: stab1}). In \S \ref{section: scanning maps} we prove Theorem \ref{thm: natural map} by using scanning maps, and we give the proofs of the main results of this paper (Theorem \ref{thm: I} and Corollary \ref{thm: II}). In \S \ref{section: III} we prove Theorem \ref{thm: III} and Theorem \ref{thm: IV} by using Theorem \ref{thm: V}. In \S \ref{section: Proof V} we give the proof of Theorem \ref{thm: V} by using Lemma \ref{lmm: X}, and in \S \ref{section: transfer} we prove Lemma \ref{lmm: X}. \section{Simplicial resolutions}\label{section: simplicial resolution} In this section, we define non-degenerate simplicial resolutions and summarize their basic properties (\cite{Va}, \cite{Va2}; cf. \cite{Mo2}). \begin{definition}\label{def: def} {\rm (i) For a finite set $\textbf{\textit{v}} =\{v_1,\cdots ,v_l\}\subset \Bbb R^N$, let $\sigma (\textbf{\textit{v}})$ denote the convex hull spanned by $\textbf{\textit{v}}.$ Let $h:X\to Y$ be a surjective map such that $h^{-1}(y)$ is a finite set for any $y\in Y$, and let $i:X\to \Bbb R^N$ be an embedding.
Let $\mathcal{X}^{\Delta}$ and $h^{\Delta}:{\mathcal{X}}^{\Delta}\to Y$ denote the space and the map defined by \begin{equation} \mathcal{X}^{\Delta}= \big\{(y,u)\in Y\times \Bbb R^N: u\in \sigma (i(h^{-1}(y))) \big\}\subset Y\times \Bbb R^N, \ h^{\Delta}(y,u)=y. \end{equation} The pair $(\mathcal{X}^{\Delta},h^{\Delta})$ is called {\it the simplicial resolution of }$(h,i)$. In particular, $(\mathcal{X}^{\Delta},h^{\Delta})$ is called {\it a non-degenerate simplicial resolution} if for each $y\in Y$ any $k$ points of $i(h^{-1}(y))$ span a $(k-1)$-dimensional simplex in $\Bbb R^N$. \par (ii) For each $k\geq 0$, let $\mathcal{X}^{\Delta}_k\subset \mathcal{X}^{\Delta}$ be the subspace given by \begin{equation} \mathcal{X}_k^{\Delta}=\big\{(y,u)\in \mathcal{X}^{\Delta}: u \in\sigma (\textbf{\textit{v}}), \textbf{\textit{v}}=\{v_1,\cdots ,v_l\}\subset i(h^{-1}(y)),\ l\leq k\big\}. \end{equation} We make the identification $X=\mathcal{X}^{\Delta}_1$ by identifying $x\in X$ with $(h(x),i(x))\in \mathcal{X}^{\Delta}_1$, and we note that there is an increasing filtration \begin{equation*}\label{equ: filtration} \emptyset = \mathcal{X}^{\Delta}_0\subset X=\mathcal{X}^{\Delta}_1\subset \mathcal{X}^{\Delta}_2\subset \cdots \subset \mathcal{X}^{\Delta}_k\subset \mathcal{X}^{\Delta}_{k+1}\subset \cdots \subset \bigcup_{k= 0}^{\infty}\mathcal{X}^{\Delta}_k=\mathcal{X}^{\Delta}. \end{equation*} } \end{definition} Since the map $h^{\Delta}$ is a proper map, it extends to a map $h^{\Delta}_+:\mathcal{X}^{\Delta}_+\to Y_+$ between the one-point compactifications, where $X_+$ denotes the one-point compactification of a locally compact space $X$. \begin{theorem}[\cite{Va}, \cite{Va2} (cf. \cite{KY7}, \cite{Mo2})]\label{thm: simp} Let $h:X\to Y$ be a surjective map such that $h^{-1}(y)$ is a finite set for any $y\in Y,$ $i:X\to \Bbb R^N$ an embedding, and let $(\mathcal{X}^{\Delta},h^{\Delta})$ denote the simplicial resolution of $(h,i)$.
\par \begin{enumerate} \item[$\mbox{{\rm (i)}}$] If $X$ and $Y$ are semi-algebraic spaces and the two maps $h$, $i$ are semi-algebraic maps, then $h^{\Delta}_+:\mathcal{X}^{\Delta}_+\stackrel{\simeq}{\rightarrow}Y_+$ is a homology equivalence.\footnote{ It is known that the map $h^{\Delta}_+$ is a homotopy equivalence \cite[page 156]{Va2} (cf. \cite[Theorem on page 43]{GM}), but in this paper we do not need this stronger assertion. } Moreover, there is an embedding $j:X\to \Bbb R^M$ such that the associated simplicial resolution $(\tilde{\mathcal{X}}^{\Delta},\tilde{h}^{\Delta})$ of $(h,j)$ is non-degenerate. \par \item[$\mbox{{\rm (ii)}}$] If there is an embedding $j:X\to \Bbb R^M$ such that its associated simplicial resolution $(\tilde{\mathcal{X}}^{\Delta},\tilde{h}^{\Delta})$ is non-degenerate, the space $\tilde{\mathcal{X}}^{\Delta}$ is uniquely determined up to homeomorphism and there is a filtration-preserving homotopy equivalence $q^{\Delta}:\tilde{\mathcal{X}}^{\Delta}\stackrel{\simeq}{\rightarrow}{\mathcal{X}}^{\Delta}$ such that $q^{\Delta}\vert X=\mbox{id}_X$. \qed \end{enumerate} \end{theorem} \section{The Vassiliev spectral sequences} \label{section: spectral sequence} Our goal in this section is to construct, by means of the {\it non-degenerate} simplicial resolutions of the discriminants, two Vassiliev type spectral sequences converging to the homology of $\mbox{{\rm Poly}}^{d,m}_n$ and that of $\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1})$, respectively. \begin{definition}\label{Def: 3.1} {\rm \par (i) Let $\Sigma^{d,m}_n$ denote \emph{the discriminant} of $\mbox{{\rm Poly}}^{d,m}_n$ in ${\rm P}^d(\Bbb C)^m$ given by the complement \begin{eqnarray*} \Sigma^{d,m}_n &=& {\rm P}^d(\Bbb C)^m\setminus \mbox{{\rm Poly}}^{d,m}_n \\ &=& \{(f_1,\cdots ,f_{m})\in {\rm P}^d(\Bbb C)^m : \textbf{\textit{f}}_1(x)=\cdots =\textbf{\textit{f}}_m(x)=\mathbf{0} \mbox{ for some }x\in \Bbb C\}.
\end{eqnarray*} Let us write ${\rm P}^d (m,n)=({\rm P}^d(\Bbb C)^n)^m$, and let $\tilde{\Sigma}^d$ be \emph{the discriminant} of $\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1})$ in ${\rm P}^d (m,n)$ given by \begin{eqnarray*} \tilde{\Sigma}^d &=& {\rm P}^d (m,n)\setminus \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}) \\ &=& \{(f_1,\cdots ,f_{mn})\in {\rm P}^d (m,n) : f_1(x)=\cdots =f_{mn}(x)=0 \mbox{ for some }x\in \Bbb C\}. \end{eqnarray*} \par (ii) Let $Z^{d,m}_n\subset \Sigma^{d,m}_n\times\Bbb C$ denote {\it the tautological normalization} of $\Sigma^{d,m}_n$ given by $$ Z^{d,m}_n=\{((f_1(z),\cdots ,f_m(z)),x)\in \Sigma^{d,m}_n\times\Bbb C: \textit{\textbf{f}}_1(x)=\cdots =\textit{\textbf{f}}_m(x)=\mathbf{0}\}. $$ Similarly, let $\tilde{Z}^d\subset \tilde{\Sigma}^d\times \Bbb C$ be {\it the tautological normalization} of $\tilde{\Sigma}^d$ given by $$ \tilde{Z}^d=\{((f_1,\ldots ,f_{mn}),x)\in \tilde{\Sigma}^d\times\Bbb C :f_1(x)=\cdots =f_{mn}(x)=0\}. $$ Projection onto the first factor gives surjective maps $\pi^{d,m}_n:Z^{d,m}_n\to \Sigma^{d,m}_n$ and $\tilde{\pi}^{d}:\tilde{Z}^{d}\to \tilde{\Sigma}^{d}$, respectively. } \end{definition} \begin{definition}\label{non-degenerate simp.} (i) Let $\varphi :{\rm P}^d(m,n)\stackrel{\cong}{\rightarrow} \Bbb C^{dmn}$ be any fixed homeomorphism and define the embedding $j_d:\tilde{Z}^d\to \Bbb C^{dmn+2d+1}$ by \begin{equation}\label{3.1} j_d((f_1,\cdots ,f_{mn}),x)=(\varphi (f_1,\cdots ,f_{mn}), 1,x,x^2,\cdots ,x^{2d}) \end{equation} for $((f_1,\cdots ,f_{mn}),x)\in \tilde{Z}^d$. Similarly, define the embedding $\tilde{i}:Z^{d,m}_n\to \Bbb C^{dmn+2d+1}$ by \begin{equation}\label{3.2} \tilde{i}((f_1,\cdots ,f_m),x)=j_d((\textit{\textbf{f}}_1(z),\cdots ,\textit{\textbf{f}}_m(z)),x) \end{equation} for $((f_1,\cdots ,f_m),x)\in Z^{d,m}_n$, where $\textit{\textbf{f}}_k(z)$ denotes the $n$-tuple of polynomials defined in (\ref{equ: bff}).
It is easy to see that the following holds: \begin{equation}\label{equ: emb-discriminant} \tilde{i}=j_d\circ (\hat{j}^{d,m}_n\times \mbox{id}_{\Bbb C}), \end{equation} where $\hat{j}^{d,m}_n:\Sigma^{d,m}_n\to \tilde{\Sigma}^d$ denotes the embedding defined by \begin{equation}\label{equ: 3.4} \hat{j}^{d,m}_n(f_1(z),\cdots ,f_m(z))= \big(\textit{\textbf{f}}_1(z),\cdots ,\textit{\textbf{f}}_m(z)\big) \end{equation} for $(f_1(z),\cdots ,f_m(z))\in\Sigma^{d,m}_n$. \par (ii) Let $({\mathcal{X}}^{d},{\pi}^{\Delta}:{\mathcal{X}}^{d}\to\Sigma^{d,m}_n)$ and $(\tilde{\mathcal{X}}^{d},\tilde{\pi}^{\Delta}:\tilde{\mathcal{X}}^{d}\to \tilde{\Sigma}^d)$ be the simplicial resolutions of $(\pi^{d,m}_n,\tilde{i})$ and $(\tilde{\pi}^{d},j_d)$, respectively. Then it is easy to see that $\mathcal{X}^d$ and $\tilde{\mathcal{X}}^d$ are non-degenerate simplicial resolutions, and that there are two natural increasing filtrations \begin{eqnarray*} \emptyset &=& {\mathcal{X}}^{d}_0 \subset {\mathcal{X}}^{d}_1\subset {\mathcal{X}}^{d}_2\subset \cdots \cdots\subset \bigcup_{k= 0}^{\infty}{\mathcal{X}}^{d}_k={\mathcal{X}}^{d}, \\ \emptyset &=& \tilde{\mathcal{X}}^{d}_0 \subset \tilde{\mathcal{X}}^{d}_1\subset \tilde{\mathcal{X}}^{d}_2\subset \cdots\cdots \subset \bigcup_{k= 0}^{\infty}\tilde{\mathcal{X}}^{d}_k =\tilde{\mathcal{X}}^{d}, \end{eqnarray*} such that \begin{equation}\label{filt} \mathcal{X}^d_k=\mathcal{X}^d \quad\mbox{ if }k> \Big\lfloor \frac{d}{n}\Big\rfloor \quad \mbox{and}\quad \tilde{\mathcal{X}}^d_k=\tilde{\mathcal{X}}^d\quad \mbox{ if }k>d. \end{equation} \end{definition} By Theorem \ref{thm: simp}, the map $\pi_{+}^{\Delta}:{\mathcal{X}}^{d}_+\stackrel{\simeq}{\rightarrow}{\Sigma^{d,m}_{n+}}$ is a homology equivalence.
Since ${\mathcal{X}_k^{d}}_+/{{\mathcal{X}}^{d}_{k-1}}_+ \cong ({\mathcal{X}}^{d}_k\setminus {\mathcal{X}}^{d}_{k-1})_+$, we have a spectral sequence $$ \big\{E_{t;d}^{k,s}, d_t:E_{t;d}^{k,s}\to E_{t;d}^{k+t,s+1-t} \big\} \Rightarrow H^{k+s}_c(\Sigma_n^{d,m},\Bbb Z), $$ where $E_{1;d}^{k,s}=\tilde{H}^{k+s}_c({\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1},\Bbb Z)$ and $H_c^k(X,\Bbb Z)$ denotes the cohomology group with compact supports given by $ H_c^k(X,\Bbb Z)= H^k(X_+,\Bbb Z). $ \par Since there is a homeomorphism ${\rm P}^d(\Bbb C)^m\cong \Bbb C^{dm}$, by Alexander duality there is a natural isomorphism \begin{equation}\label{Al} \tilde{H}_k(\mbox{{\rm Poly}}^{d,m}_n,\Bbb Z)\cong \tilde{H}_c^{2md-k-1}(\Sigma_n^{d,m},\Bbb Z) \quad \mbox{for any }k. \end{equation} By reindexing we obtain a spectral sequence \begin{eqnarray}\label{SS} &&\big\{E^{t;d}_{k,s}, d^{t}:E^{t;d}_{k,s}\to E^{t;d}_{k+t,s+t-1} \big\} \Rightarrow H_{s-k}(\mbox{{\rm Poly}}^{d,m}_n,\Bbb Z), \end{eqnarray} where $E^{1;d}_{k,s}= \tilde{H}^{2md+k-s-1}_c({\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1},\Bbb Z).$ \par By a completely similar method we also obtain a spectral sequence \begin{eqnarray}\label{SSSS} &&\big\{\tilde{E}^{t}_{k,s}, \tilde{d}^{t}: \tilde{E}^{t}_{k,s}\to \tilde{E}^{t}_{k+t,s+t-1} \big\} \Rightarrow H_{s-k}(\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}),\Bbb Z), \end{eqnarray} where $\tilde{E}^{1}_{k,s}= \tilde{H}^{2dmn+k-s-1}_c(\tilde{\mathcal{X}}^{d}_k\setminus \tilde{\mathcal{X}}_{k-1}^{d},\Bbb Z).$ \par For a space $X$, let $F(X,k)\subset X^k$ denote the ordered configuration space given by $$ F(X,k)=\{(x_1,\cdots ,x_k)\in X^k: x_i\not= x_j\mbox{ if }i\not= j\}. $$ Let $S_k$ be the symmetric group on $k$ letters. Then $S_k$ acts on $F(X,k)$ by permuting coordinates, and we let $C_k(X)$ denote the orbit space $C_k(X)=F(X,k)/S_k$. Let $X^{\wedge k}$ denote the $k$-fold smash product of a space $X$, i.e.
$X^{\wedge k}=X\wedge \cdots \wedge X$ $(k$-times). Then $S_k$ acts on $X^{\wedge k}$ by coordinate permutation, and we define $D_k(X)$ as the equivariant half-smash product \begin{equation}\label{equ: DkX} D_k(X)=F(\Bbb C,k)_+\wedge_{S_k}X^{\wedge k}. \end{equation} In particular, for $X=S^1$ we set \begin{equation}\label{equ: DkS} D_k=D_k(S^1). \end{equation} \begin{lemma}\label{lemma: vector bundle} If $1\leq k\leq \lfloor \frac{d}{n}\rfloor$, ${\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1}$ is homeomorphic to the total space of a real affine bundle $\xi_{d,k}$ over $C_k(\Bbb C)$ of rank $l_{d,k}=2m(d-nk)+k-1$. \end{lemma} \begin{proof} The argument is exactly analogous to the one in the proof of \cite[Lemma 4.4]{AKY1}. Namely, an element of ${\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1}$ is represented by the $(m+1)$-tuple $(f_1(z),\cdots ,f_m(z),u)$, where $(f_1(z),\cdots ,f_m(z))$ is an $m$-tuple of monic polynomials of the same degree $d$ in $\Sigma^{d,m}_n$ and $u$ is an element of the interior of the span of the images (under a suitable embedding) of $k$ distinct points $\{x_1,\cdots, x_k\}\in C_k(\Bbb C)$ such that $\{x_j\}_{j=1}^k$ are common roots of $\{f_i(z)\}_{i=1}^m$ of multiplicity $n$. Since the $k$ distinct points $\{x_j\}_{j=1}^k$ are uniquely determined by $u$, by the definition of the non-degenerate simplicial resolution, there is a projection map $\pi_{k,d} :{\cal X}^{d}_k\setminus {\cal X}^{d}_{k-1}\to C_{k}(\Bbb C)$ defined by $((f_1,\cdots ,f_m),u) \mapsto \{x_1,\cdots ,x_k\}$. \par Now suppose that $1\leq k\leq \lfloor \frac{d}{n}\rfloor$ and $1\leq i\leq m$. Let $c=\{x_j\}_{j=1}^k\in C_{k}(\Bbb C)$ be any fixed element and consider the fibre $\pi_{k,d}^{-1}(c)$.
If we consider the condition that a polynomial $f_i(z)\in{\rm P}^d(\Bbb C)$ is divisible by the polynomial $\prod_{j=1}^k(z-x_j)^n$, then it is easy to see that this is equivalent to the following condition: \begin{equation}\label{equ: equation} f^{(t)}_i(x_j)=0 \quad \mbox{for }0\leq t<n,\ 1\leq j\leq k. \end{equation} In general, for each $0\leq t< n$ and $1\leq j\leq k$, the condition $f^{(t)}_i(x_j)=0$ gives one linear condition on the coefficients of $f_i(z)$, and it determines an affine hyperplane in ${\rm P}^d(\Bbb C)$. For example, if we set $f_i(z)=z^d+\sum_{s=1}^da_{s}z^{d-s}$, then $f_i(x_j)=0$ for all $1\leq j\leq k$ if and only if \begin{equation*}\label{equ: matrix equation} \begin{bmatrix} 1 & x_1 & x_1^2 & x_1^3 & \cdots & x_1^{d-1} \\ 1 & x_2 & x_2^2 & x_2^3 & \cdots & x_2^{d-1} \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 1 & x_k & x_k^2 & x_k^3 & \cdots & x_k^{d-1} \end{bmatrix} \cdot \begin{bmatrix} a_{d}\\ a_{d-1} \\ \vdots \\ a_{1} \end{bmatrix} = - \begin{bmatrix} x_1^d\\ x_2^d \\ \vdots \\ x_k^d \end{bmatrix} \end{equation*} Similarly, $f^{\prime}_i(x_j)=0$ for all $1\leq j\leq k$ if and only if \begin{equation*}\label{equ: matrix equation2} \begin{bmatrix} 0 &1 & 2x_1 & 3x_1^2 & \cdots & (d-1)x_1^{d-2} \\ 0 & 1 & 2x_2 & 3x_2^2 & \cdots & (d-1)x_2^{d-2} \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 &1 & 2x_k & 3x_k^2 & \cdots & (d-1)x_k^{d-2} \end{bmatrix} \cdot \begin{bmatrix} a_d \\ a_{d-1} \\ \vdots \\ a_{1} \end{bmatrix} = - \begin{bmatrix} dx_1^{d-1}\\ dx_2^{d-1} \\ \vdots \\ dx_k^{d-1} \end{bmatrix} \end{equation*} and $f^{\prime\prime}_i(x_j)=0$ for all $1\leq j\leq k$ if and only if \begin{equation*}\label{equ: matrix equation3} \begin{bmatrix} 0 & 0 & 2 & 6x_1 & \cdots & (d-1)(d-2)x_1^{d-3} \\ 0 & 0 & 2 & 6x_2 & \cdots & (d-1)(d-2)x_2^{d-3} \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 2 & 6x_k & \cdots & (d-1)(d-2)x_k^{d-3} \end{bmatrix} \cdot \begin{bmatrix} a_d \\ a_{d-1} \\ \vdots \\ a_{1} \end{bmatrix} = - \begin{bmatrix} d(d-1)x_1^{d-2}\\ d(d-1)x_2^{d-2} \\ \vdots \\ d(d-1)x_k^{d-2} \end{bmatrix} \end{equation*} and so on. Since $1\leq k\leq \lfloor \frac{d}{n}\rfloor$ and $\{x_j\}_{j=1}^k\in C_k(\Bbb C)$, it follows from the properties of Vandermonde matrices that the condition (\ref{equ: equation}) gives exactly $nk$ affinely independent conditions on the coefficients of $f_i(z)$. Hence, we see that the space of $m$-tuples $(f_1(z),\cdots ,f_m(z))\in{\rm P}^d(\Bbb C)^m$ of monic polynomials which satisfy the condition (\ref{equ: equation}) for each $1\leq i\leq m$ is the intersection of $mnk$ affine hyperplanes in general position, and it has codimension $mnk$ in ${\rm P}^d(\Bbb C)^m$. Therefore, the fibre $\pi_{k,d}^{-1}(c)$ is homeomorphic to the product of an open $(k-1)$-simplex with the real affine space of dimension $2m(d-nk)$. Since one can also check that local triviality holds, we see that ${\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1}$ is a real affine bundle over $C_{k}(\Bbb C)$ of rank $l_{d,k} =2m(d-nk)+k-1$. \end{proof} \par By a method completely similar to that of Lemma \ref{lemma: vector bundle}, one can also prove the following result. \begin{lemma}\label{lemma: vector bundle*} If $1\leq k\leq d$, $\tilde{\mathcal{X}}^{d}_k\setminus \tilde{\mathcal{X}}^{d}_{k-1}$ is homeomorphic to the total space of a real affine bundle $\tilde{\xi}_{d,k}$ over $C_k(\Bbb C)$ of rank $\tilde{l}_{d,k}=2mn(d-k)+k-1$. \qed \end{lemma} \begin{lemma}\label{lemma: E1} If $1\leq k\leq \lfloor \frac{d}{n}\rfloor$, there is a natural isomorphism $$ E^{1;d}_{k,s}\cong H_{s-2(mn-1)k}(C_{k}(\Bbb C),\pm \Bbb Z), $$ where the twisted coefficients system $\pm \Bbb Z$ comes from the Thom isomorphism.\footnote{ The twisted coefficients system $\pm \Bbb Z$ on $C_k(\Bbb C)$ is induced by the sign representation of the symmetric group. (cf.
\cite[page 114 and 254]{Va}).} \end{lemma} \begin{proof} Suppose that $1\leq k\leq \lfloor \frac{d}{n}\rfloor$. By Lemma \ref{lemma: vector bundle}, there is a homeomorphism $ ({\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1})_+\cong T(\xi_{d,k}), $ where $T(\xi_{d,k})$ denotes the Thom space of $\xi_{d,k}$. Since $(2md+k-s-1)-l_{d,k} = 2mnk-s,$ by using the Thom isomorphism and Poincar\'e duality, there are natural isomorphisms $ E^{1;d}_{k,s} \cong \tilde{H}^{2md+k-s-1}(T(\xi_{d,k}),\Bbb Z) \cong H^{2mnk-s}_c(C_{k}(\Bbb C),\pm \Bbb Z) \cong H_{s-2(mn-1)k}(C_k(\Bbb C),\pm \Bbb Z), $ and this completes the proof. \end{proof} A similar method also proves the following: \begin{lemma}\label{lemma: E1*} If $1\leq k\leq d$, there is a natural isomorphism $$ \tilde{E}^{1}_{k,s}\cong H_{s-2(mn-1)k}(C_{k}(\Bbb C),\pm \Bbb Z). \qed $$ \end{lemma} \begin{corollary}\label{crl: Er} \begin{enumerate} \item[$\mbox{{\rm (i)}}$] There is a natural isomorphism $$ E^{1;d}_{k,s}\cong \begin{cases} \Bbb Z & \mbox{if }(k,s)=(0,0) \\ H_{s-2(mn-1)k}(C_k(\Bbb C),\pm\Bbb Z) & \mbox{if }1\leq k\leq \lfloor \frac{d}{n}\rfloor , \ s\geq 2(mn-1)k \\ 0 & \mbox{otherwise} \end{cases} $$ \item[$\mbox{{\rm (ii)}}$] There is a natural isomorphism $$ \tilde{E}^{1}_{k,s}\cong \begin{cases} \Bbb Z & \mbox{if }(k,s)=(0,0) \\ H_{s-2(mn-1)k}(C_k(\Bbb C),\pm\Bbb Z) & \mbox{if }1\leq k\leq d , \ s\geq 2(mn-1)k \\ 0 & \mbox{otherwise} \end{cases} $$ \end{enumerate} \end{corollary} \begin{proof} It is easy to see that the assertion (i) follows from Lemma \ref{lemma: E1} and (\ref{filt}). The assertion (ii) follows similarly.
\end{proof} \begin{remark}\label{rmk: spectral sequence for general N} In a completely similar way, for an integer $N\geq 2$ one can obtain a spectral sequence $$ \{\tilde{E}^{t;N}_{k,s},d^t:\tilde{E}^{t;N}_{k,s} \to \tilde{E}^{t;N}_{k+t,s+t-1}\} \Rightarrow H_{s-k}(\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{N-1}),\Bbb Z) $$ such that \begin{equation}\label{equ: Er in general} \tilde{E}^{1;N}_{k,s}\cong \begin{cases} \Bbb Z & \mbox{if }(k,s)=(0,0) \\ H_{s-2(N-1)k}(C_k(\Bbb C),\pm\Bbb Z) & \mbox{if }1\leq k\leq d , \ s\geq 2(N-1)k \\ 0 & \mbox{otherwise} \quad \qed \end{cases} \end{equation} \end{remark} \begin{remark} One can show that for $1\leq k\leq \lfloor \frac{d}{n}\rfloor$ there is a homeomorphism \begin{equation} (\mathcal{X}^d_k\setminus \mathcal{X}^d_{k-1})\times \Bbb C^{dm(n-1)} \cong \tilde{\mathcal{X}}^d_k\setminus \tilde{\mathcal{X}}^d_{k-1}. \end{equation} Hence, there is an isomorphism $E^{1;d}_{k,s}\cong \tilde{E}^1_{k,s}$ for any $s$ if $1\leq k\leq \lfloor \frac{d}{n}\rfloor$ (this also follows directly from Lemma \ref{lemma: E1} and Lemma \ref{lemma: E1*}). \qed \end{remark} \section{Stabilization maps}\label{section: sd} \begin{definition}\label{def: stabilization} Let \begin{equation}\label{equ: stabilization map} s^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \mbox{{\rm Poly}}^{d+1,m}_n \end{equation} denote the stabilization map given by adding points near infinity as in \cite[\S 5, page 57]{Se}. Similarly, let \begin{equation} s_d:\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1})\to \mbox{{\rm Hol}}_{d+1}^*(S^2,\Bbb C {\rm P}^{mn-1}) \end{equation} be the stabilization map given in \cite{Se}.
\qed \end{definition} It is easy to see that the following diagram is commutative (up to homotopy): \begin{equation}\label{CD: stab0} \begin{CD} \mbox{{\rm Poly}}^{d,m}_n @>s^{d,m}_n>> \mbox{{\rm Poly}}^{d+1,m}_n \\ @V{j^{d,m}_n}VV @V{j^{d+1,m}_n}VV \\ \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}) @>s_d>>\mbox{{\rm Hol}}_{d+1}^*(S^2,\Bbb C {\rm P}^{mn-1}) \end{CD} \end{equation} Note that the map $s^{d,m}_n$ clearly extends to a map ${\rm P}^d(\Bbb C)^m\to{\rm P}^{d+1}(\Bbb C)^m$ and its restriction gives the stabilization map $\tilde{s}^{d,m}_n:\Sigma^{d,m}_n\to \Sigma^{d+1,m}_n$ between discriminants. It is easy to see that it also extends to the open embedding \begin{equation}\label{equ: open-stab} \tilde{s}^{d,m}_n:\Sigma^{d,m}_n\times\Bbb C^m\to \Sigma^{d+1,m}_n. \end{equation} Since the one-point compactification is contravariant for open embeddings, it induces the map \begin{equation}\label{equ: embedding3} \tilde{s}^{d,m}_{n+}: (\Sigma^{d+1,m}_{n})_+ \to (\Sigma^{d,m}_n\times \Bbb C^{m})_+=\Sigma^{d,m}_{n+}\wedge S^{2m} \end{equation} between one-point compactifications. Then we see that the following diagram is commutative: \begin{equation}\label{diagram: discriminant} \begin{CD} \tilde{H}_k(\mbox{{\rm Poly}}^{d,m}_n,\Bbb Z) @>{s^{d,m}_{n*}}>>\tilde{H}_k(\mbox{{\rm Poly}}^{d+1,m}_n,\Bbb Z) \\ @V{Al}V{\cong}V @V{Al}V{\cong}V \\ \tilde{H}^{2dm-k-1}_c(\Sigma^{d,m}_n,\Bbb Z) @>\tilde{s}^{d,m*}_{n+}>> \tilde{H}^{2(d+1)m-k-1}_c(\Sigma^{d+1,m}_n,\Bbb Z) \end{CD} \end{equation} where $Al$ denotes the Alexander duality isomorphism and $\tilde{s}^{d,m*}_{n+}$ denotes the composite of the suspension isomorphism with the homomorphism $(\tilde{s}^{d,m}_{n+})^*$, $$ \tilde{H}^{2dm-k-1}_c(\Sigma^{d,m}_n) \stackrel{\cong}{\rightarrow} \tilde{H}^{2(d+1)m-k-1}_c(\Sigma^{d,m}_n\times \Bbb C^{m}) \stackrel{(\tilde{s}^{d,m}_{n+})^*}{\longrightarrow} \tilde{H}^{2(d+1)m-k-1}_c(\Sigma^{d+1,m}_{n}).
$$ Note that the map $\tilde{s}^{d,m}_n$ induces a filtration-preserving map \begin{equation} \hat{s}^{d,m}_n:{\mathcal{X}}^{d} \times \Bbb C^m\to \mathcal{X}^{d+1} \end{equation} which in turn induces a homomorphism of spectral sequences \begin{equation}\label{equ: theta1} \{ \theta_{k,s}^t:E^{t;d}_{k,s}\to E^{t;d+1}_{k,s}\}. \end{equation} \begin{lemma}\label{lmm: E1} If $1\leq k\leq \lfloor \frac{d}{n}\rfloor$, $\theta^1_{k,s}: E^{1;d}_{k,s}\stackrel{\cong}{\rightarrow} {E}^{1;d+1}_{k,s}$ is an isomorphism for any $s$. \end{lemma} \begin{proof} Suppose that $1\leq k\leq \lfloor \frac{d}{n}\rfloor$. If we set $\hat{s}^{d,m}_{n;k}=\hat{s}^{d,m}_n\vert ({\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1})\times \Bbb C^{m},$ the diagram $$ \begin{CD} ({\mathcal{X}}^{d}_k\setminus{\mathcal{X}}^{d}_{k-1})\times \Bbb C^{m} @>\pi_{k,d}>> C_{k}(\Bbb C) \\ @V{\hat{s}^{d,m}_{n;k}}VV \Vert @. \\ \mathcal{X}^{d+1}_k\setminus \mathcal{X}^{d+1}_{k-1} @>\pi_{k,d+1}>> C_{k}(\Bbb C) \end{CD} $$ is commutative. Hence, $\theta^1_{k,s}$ is an isomorphism. \end{proof} Now we can prove the key result. \begin{theorem}\label{thm: stab1} If $n\geq 2$, the stabilization map $$ s^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \mbox{{\rm Poly}}^{d+1,m}_n $$ is a homology equivalence if $\lfloor \frac{d}{n}\rfloor =\lfloor\frac{d+1}{n}\rfloor$, and it is a homology equivalence through dimension $D(d;m,n)$ if $\lfloor \frac{d}{n}\rfloor <\lfloor\frac{d+1}{n}\rfloor$. \end{theorem} \begin{proof} First, consider the case $\lfloor \frac{d}{n}\rfloor =\lfloor\frac{d+1}{n}\rfloor$. In this case, by using Corollary \ref{crl: Er} and Lemma \ref{lmm: E1}, it is easy to see that $\theta^{1}_{k,s}:E^{1;d}_{k,s}\stackrel{\cong}{\longrightarrow} E^{1;d+1}_{k,s}$ is an isomorphism for any $(k,s)$. Hence, $\theta^{\infty}_{k,s}$ is an isomorphism for any $(k,s).$ Since $\theta^t_{k,s}$ is induced by $\hat{s}^{d,m}_n$, it follows from (\ref{diagram: discriminant}) that the map $s^{d,m}_n$ is a homology equivalence.
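Note that $\Big\lfloor \frac{d}{n}\Big\rfloor <\Big\lfloor \frac{d+1}{n}\Big\rfloor$ holds precisely when $n$ divides $d+1$; for example, for $n=3$ the first case treats the degrees $d\equiv 0,1 \pmod 3$ and the second case the degrees $d\equiv 2 \pmod 3$.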
\par Next assume that $\lfloor \frac{d}{n}\rfloor <\lfloor\frac{d+1}{n}\rfloor$, i.e. $\lfloor \frac{d+1}{n}\rfloor =\lfloor\frac{d}{n}\rfloor +1.$ In this case, by considering the differentials $d^t:E^{t;d+\epsilon}_{k,s}\to E^{t;d+\epsilon}_{k+t,s+t-1}$ ($\epsilon \in \{0,1\}$) and using Lemma \ref{lmm: E1} and Corollary \ref{crl: Er}, we easily see that $\theta^t_{k,s}:E^{t;d}_{k,s}\to E^{t;d+1}_{k,s}$ is an isomorphism for any $(k,s)$ and $t$ as long as the condition $s-t\leq D(d;m,n)$ is satisfied. Hence, if $s-t\leq D(d;m,n)$, $\theta^{\infty}_{k,s}$ is an isomorphism, and so the map $s^{d,m}_n$ is a homology equivalence through dimension $D(d;m,n)$. \end{proof} \begin{theorem}[\cite{KY6}, Theorem 2.8]\label{thm: stab2} If $n\geq 2$, the stabilization map $$ s_d:\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1})\to \mbox{{\rm Hol}}_{d+1}^*(S^2,\Bbb C {\rm P}^{mn-1}) $$ is a homology equivalence through dimension $(2mn-3)(d+1)-1$. \qed \end{theorem} \begin{definition}\label{def: stability} Let $\mbox{{\rm Poly}}^{\infty, m}_n$ denote the colimit $\displaystyle \lim_{d\to\infty}\mbox{{\rm Poly}}^{d,m}_n$ taken over the stabilization maps $s^{d,m}_n$. Then the natural map $i^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \Omega^2_d\Bbb C {\rm P}^{mn-1}$ (given in (\ref{equ: jet inclusion})) induces the map \begin{equation}\label{equ: natural map} i^{\infty,m}_n= \varinjlim i^{d,m}_n:\mbox{{\rm Poly}}^{\infty, m}_n\to \lim_{d\to\infty}\Omega^2_d\Bbb C {\rm P}^{mn-1}\simeq \Omega^2S^{2mn-1}. \end{equation} \end{definition} We then have the following result, whose proof is given in the next section. \begin{theorem}\label{thm: natural map} If $n\geq 2$, the map $ i^{\infty,m}_n:\mbox{{\rm Poly}}^{\infty, m}_n \stackrel{\simeq}{\longrightarrow} \Omega^2S^{2mn-1} $ is a homology equivalence. \end{theorem} \section{Scanning maps and the unstable results} \label{section: scanning maps} In this section, we prove Theorem \ref{thm: natural map} by using the scanning maps.
Next we give the proofs of the stability results (Theorem \ref{thm: I} and Corollary \ref{thm: II}). \begin{definition} For a space $X$ let $\mbox{{\rm SP}}^d(X)$ denote the $d$-th symmetric product defined by the quotient space $\mbox{{\rm SP}}^d(X)=X^d/S_d$, where the symmetric group $S_d$ on $d$ letters acts on $X^d$ by permutation of coordinates. Note that there is a natural inclusion $C_d(X)\subset \mbox{{\rm SP}}^d(X)$. \end{definition} \begin{remark}\label{rmk: notation} (i) An element of $\mbox{{\rm SP}}^d(X)$ may be identified with the formal linear combination $\alpha =\sum_{i=1}^kd_ix_i$ ($\{x_i\}\in C_k(X)$, $\sum_{i=1}^kd_i=d$). We shall refer to $\alpha$ as a configuration of points, the point $x_i$ having multiplicity $d_i$. \par (ii) If $X =\Bbb C$, then ${\rm P}^d(\Bbb C)$ can be easily identified with the space $\mbox{{\rm SP}}^d(\Bbb C)$ by the correspondence $\prod_{i=1}^k(z-\alpha_i)^{d_i}\mapsto \sum_{i=1}^{k}d_i\alpha_i$. \qed \end{remark} \begin{definition} For a space $X$, define the space $\mbox{{\rm Pol}}^{d,m}_n(X)$ by \begin{equation}\label{equ: Poly(X)} \mbox{{\rm Pol}}^{d,m}_n(X)= \{(\xi_1,\cdots ,\xi_m)\in\mbox{{\rm SP}}^d(X)^m: (*) \}, \end{equation} where the condition $(*)$ is given by \begin{enumerate} \item[$(*)$] $\bigcap_{i=1}^m\xi_i$ does not contain any element of multiplicity $\geq n$. \end{enumerate} \end{definition} \begin{remark}\label{rmk: Poly identification} By identifying ${\rm P}^d(\Bbb C)= \mbox{{\rm SP}}^d(\Bbb C)$ as in Remark \ref{rmk: notation}, we easily see that there is a homeomorphism \begin{equation} \label{eq: Po} \mbox{{\rm Poly}}^{d,m}_n\cong\mbox{{\rm Pol}}^{d,m}_n(\Bbb C).
\end{equation} \end{remark} \begin{definition} If $A\subset X$ is a closed subspace, we define \begin{equation} \mbox{{\rm Pol}}^{d,m}_n(X,A)=\mbox{{\rm Pol}}^{d,m}_n(X)/\sim , \end{equation} where the equivalence relation \lq\lq$\sim$\rq\rq\ is defined by $$ (\xi_1,\cdots ,\xi_m)\sim (\eta_1,\cdots ,\eta_m) \quad \mbox{ if }\quad \xi_i\cap (X\setminus A)=\eta_i \cap (X\setminus A) $$ for each $1\leq i\leq m$. Therefore, points in $A$ are \lq\lq ignored\rq\rq . When $A\not=\emptyset$, there is a natural inclusion $$ \mbox{{\rm Pol}}^{d,m}_n(X,A)\subset \mbox{{\rm Pol}}^{d+1,m}_n(X,A) $$ given by adding points in $A$. Define the space $\mbox{{\rm Pol}}^{m}_n(X,A)$ by the union \begin{equation} \mbox{{\rm Pol}}^{m}_n(X,A)=\bigcup_{d\geq 1}\mbox{{\rm Pol}}^{d,m}_n(X,A). \end{equation} \end{definition} \begin{remark} As a set, $\mbox{{\rm Pol}}^{m}_n(X,A)$ is bijectively equivalent to the disjoint union $\displaystyle \bigcup_{d\geq 1}\mbox{{\rm Pol}}^{d,m}_n(X\setminus A)$. But these two spaces are not homeomorphic; for example, if $X$ is connected, then $\mbox{{\rm Pol}}^{m}_n(X,A)$ is connected, while the disjoint union is not. \qed \end{remark} We need two kinds of scanning maps. First, we define the scanning map for configuration spaces of particles. \begin{definition} Let us identify $D^2=\{x\in\Bbb C :\vert x\vert \leq 1\}$, and let $\epsilon >0$ be a fixed, sufficiently small positive number. Then for each $w\in\Bbb C$, let $U_{w}$ be the open set $U_{w}=\{x\in\Bbb C: \vert x-w\vert <\epsilon\}.$ Now define the scanning map \begin{equation} sc^{d,m}_n:\mbox{{\rm Pol}}^{d,m}_n(\Bbb C)\to \Omega^2\mbox{{\rm Pol}}^{m}_n(D^2,S^1) \end{equation} as follows.
Let $\alpha =(\xi_1,\cdots ,\xi_m)\in \mbox{{\rm Pol}}^{d,m}_n(\Bbb C).$ Then let $$ sc^{d,m}_n(\alpha):S^2=\Bbb C\cup\infty \to \mbox{{\rm Pol}}^{m}_n(D^2,S^1) $$ denote the map given by $$ w\mapsto (\xi_1\cap\overline{U}_{w},\cdots ,\xi_m\cap\overline{U}_{w}) \in\mbox{{\rm Pol}}^{m}_n(\overline{U}_{w},\partial \overline{U}_{w}) \cong \mbox{{\rm Pol}}^{m}_n(D^2,S^1) $$ for $w\in \Bbb C$, where we use the canonical identification $(\overline{U}_w,\partial \overline{U}_w)\cong (D^2,S^1)$. Since $\displaystyle\lim_{w\to\infty}sc^{d,m}_n(\alpha)(w)=(\emptyset ,\cdots ,\emptyset)$, we set $sc^{d,m}_n(\alpha)(\infty)=(\emptyset ,\cdots ,\emptyset)$ and obtain the based map $sc^{d,m}_n(\alpha)\in \Omega^2\mbox{{\rm Pol}}^{m}_n(D^2,S^1).$ \par Since the space $\mbox{{\rm Pol}}^{d,m}_n(\Bbb C)$ is connected, the image of $sc^{d,m}_n$ is contained in some path-component of $\Omega^2\mbox{{\rm Pol}}^{m}_n(D^2,S^1),$ which is denoted by $\Omega^2_d\mbox{{\rm Pol}}^{m}_n(D^2,S^1).$ Hence, we obtain the map \begin{equation} sc^{d,m}_n: \mbox{{\rm Pol}}^{d,m}_n(\Bbb C)\to \Omega^2_d\mbox{{\rm Pol}}^{m}_n(D^2,S^1). \end{equation} Now we identify $\mbox{{\rm Poly}}^{d,m}_n\cong \mbox{{\rm Pol}}^{d,m}_n(\Bbb C)$ as in (\ref{eq: Po}); setting $\displaystyle S=\lim_{d\to\infty}sc^{d,m}_n$, we obtain {\it the scanning map} \begin{equation} S: \mbox{{\rm Poly}}^{\infty,m}_n \to \lim_{d\to\infty}\Omega^2_d \mbox{{\rm Pol}}^{m}_n(D^2,S^1)\simeq \Omega^2_0 \mbox{{\rm Pol}}^{m}_n(D^2,S^1). \end{equation} \end{definition} \begin{theorem}[\cite{GKY2}, \cite{Se}] \label{thm: scanning map} If $n\geq 2$, the scanning map $$ S=\lim_{d\to\infty}sc^{d,m}_n:\mbox{{\rm Poly}}^{\infty,m}_n \stackrel{\simeq}{\longrightarrow} \Omega^2_0\mbox{{\rm Pol}}^{m}_n(D^2,S^1) $$ is a homology equivalence. \end{theorem} \begin{proof} The proof is similar to the argument of \cite[\S 3]{Se}.
Alternatively, it can be proved in a way completely similar to that of \cite[pages 99--100]{GKY2}. \end{proof} \begin{definition}\label{def: 5.9} (i) Let $\mathcal{P}^d(\Bbb C)$ denote the space of all (not necessarily monic) polynomials $f(z)=\sum_{i=0}^da_iz^i\in\Bbb C[z]$ of degree exactly $d$ and let $\mathcal{P}oly^{d,m}_n$ denote the space of all $m$-tuples $(f_1(z),\cdots ,f_m(z))\in \mathcal{P}^d(\Bbb C)^m$ such that the polynomials $\{f_1(z),\cdots ,f_m(z)\}$ have no common root of multiplicity $\geq n$. \par (ii) For each nonempty open set $X\subset \Bbb C$, let $\mathcal{P}oly^{m}_n(X)$ denote the space of all $m$-tuples $(f_1(z),\cdots ,f_m(z))$ satisfying the following two conditions: \begin{enumerate} \item[(\ref{def: 5.9}.1)] The polynomials $f_1(z),\cdots ,f_m(z)\in\Bbb C[z]$ all have the same degree, and none of them is identically zero. \item[(\ref{def: 5.9}.2)] The polynomials $\{f_1(z),\cdots ,f_m(z)\}$ have no common root in $X$ of multiplicity $\geq n$. \end{enumerate} When $X=\Bbb C$, we write $\mathcal{P}oly^{d,m}_n=\mathcal{P}oly^{d,m}_n(\Bbb C)$. \qed \end{definition} \begin{remark} (i) Note that $\mathcal{P}oly^{m}_n(\Bbb C)$ is bijectively equivalent to the union $\bigcup_{d\geq 0}\mathcal{P}oly^{d,m}_n(\Bbb C)$, but these spaces are not homeomorphic because $\mathcal{P}oly^{m}_n(\Bbb C)$ is connected. \par (ii) It is easy to see that there are homeomorphisms $$ \mathcal{P}^d(\Bbb C)\cong \Bbb C^*\times{\rm P}^d(\Bbb C) \ \mbox{ and }\ \mathcal{P}oly^{d,m}_n(\Bbb C)\cong \Bbb T^m \times \mbox{{\rm Poly}}^{d,m}_n, $$ where we set $\Bbb T^m=(\Bbb C^*)^m$. \qed \end{remark} \par Next consider the scanning map for algebraic maps.
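The defining condition of the spaces above, namely the absence of a common root of multiplicity $\geq n$, can be tested concretely by computer algebra. The following sketch (an illustrative aside using the {\tt sympy} library; the function name is ours and is not part of the paper) checks the condition for a given tuple of polynomials:

```python
# Illustration only: test the "no common root of multiplicity >= n"
# condition defining Poly^{d,m}_n, using sympy.
import sympy as sp

z = sp.symbols('z')

def has_common_root_of_multiplicity(polys, n):
    """True iff the given polynomials share a root of multiplicity >= n.

    A point x is a common root of multiplicity >= n of every f in `polys`
    iff f^{(t)}(x) = 0 for all f and all 0 <= t < n, i.e. iff the gcd of
    all these derivatives has positive degree.
    """
    g = None
    for f in polys:
        for t in range(n):
            p = sp.Poly(sp.diff(f, z, t), z)
            g = p if g is None else sp.gcd(g, p)
    return sp.degree(g, z) >= 1

# (z-1)^2 (z-2) and (z-1)^2 (z-3) share the double root z = 1:
f1 = sp.expand((z - 1)**2 * (z - 2))
f2 = sp.expand((z - 1)**2 * (z - 3))
print(has_common_root_of_multiplicity([f1, f2], 2))  # True: the pair is NOT in Poly^{3,2}_2
print(has_common_root_of_multiplicity([f1, f2], 3))  # False: the pair IS in Poly^{3,2}_3
```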
\begin{definition} (i) Let $U=D^2\setminus S^1=\{x\in\Bbb C :\vert x\vert <1\}$ and define {\it the scanning map} \begin{equation}\label{equ: scan2} \mbox{sc}^{d,m}_n:\mathcal{P}oly^{d,m}_n\to \mbox{{\rm Map}}(\Bbb C, \mathcal{P}oly^{d,m}_n(U)) \end{equation} for $\mathcal{P}oly^{d,m}_n$ by $$ \mbox{sc}^{d,m}_n(f_1(z),\cdots ,f_m(z))(w)= (f_1\vert U_w,\cdots ,f_m\vert U_w) $$ for $((f_1(z),\cdots ,f_m(z)),w)\in \mathcal{P}oly^{d,m}_n\times \Bbb C$, where we also use the canonical identification $U\cong U_w$ as in the definition of the earlier scanning map. \par (ii) Let $q:\mathcal{P}oly^m_n(\Bbb C)\to \mbox{{\rm Pol}}^{m}_n(D^2,S^1)$ denote the map given by assigning to an $m$-tuple of polynomials its roots lying in $U$. \qed \end{definition} \begin{lemma}\label{lmm: quasi-fibration} The map $q:\mathcal{P}oly^m_n(\Bbb C)\to \mbox{{\rm Pol}}^{m}_n(D^2,S^1)$ is a quasifibration with fibre $\Bbb T^m$. \end{lemma} \begin{proof} This may be proved by using the well-known criterion of Dold--Thom as in the proof of \cite[Lemma 3.3]{Se}. In fact, we can prove this by induction on $m$. The case $m=1$ is proved in \cite[page 101]{GKY2}. Now assume that the case $m-1$ is true and filter the base space $\mbox{{\rm Pol}}^{m}_n(D^2,S^1)$ by the number of points of the first coordinate lying in $U$. Note that $q$ is a trivial fibration over each successive difference of the filtration. Then the Dold--Thom \lq\lq attaching map\rq\rq\ has the effect of multiplying polynomials with no root in $U$ by a fixed polynomial $z-\alpha$, where $\alpha\in \Bbb C\setminus U$. Since $\alpha$ may be moved continuously to $1$, we can show the assertion in the same way as in the case $m=1$.
\end{proof} \begin{definition} (i) Let $ev_0:\mathcal{P}oly^m_n(U)\to \Bbb C^{mn}\setminus \{{\bf 0}\}$ denote the evaluation map at $z=0$ given by \begin{eqnarray*} ev_0(f_1(z),\cdots ,f_m(z)) &=& (\textit{\textbf{f}}_1(0),\textit{\textbf{f}}_2(0),\cdots , \textit{\textbf{f}}_m(0)) \end{eqnarray*} for $(f_1(z),\cdots ,f_m(z))\in \mathcal{P}oly^{m}_n(U)$. \par (ii) Let $G$ be a group and $X$ a $G$-space. Then we denote by $X//G$ the homotopy quotient of $X$ by $G$, $X//G=EG\times_{G}X$. \qed \end{definition} \begin{remark} Let $\Bbb T^m=(\Bbb C^*)^m$ and consider the natural $\Bbb T^m$-actions on the spaces $\mathcal{P}oly^m_n(U)$ and $\Bbb C^{mn}\setminus \{{\bf 0}\}$ given by the usual coordinate-wise multiplications. Then it is easy to see that $ev_0$ is a $\Bbb T^m$-equivariant map. \end{remark} \begin{lemma}\label{lmm: evaluation0} The map $ev_0:\mathcal{P}oly^m_n(U)\to \Bbb C^{mn}\setminus \{{\bf 0}\}$ is a homotopy equivalence. \end{lemma} \begin{proof} If $m=1$, this is proved in \cite[Theorem 2.4 (pages 102--103)]{GKY2}. By using the same method together with \cite[Prop.~1]{ha}, one can show the assertion for $m\geq 2$. \end{proof} Now we can prove Theorem \ref{thm: natural map}. \begin{proof}[Proof of Theorem \ref{thm: natural map}] Note that $\Bbb T^m$ does not act freely on $\Bbb C^{mn}\setminus \{{\bf 0}\}$. Hence, by using its homotopy quotient, we have the commutative diagram {\small $$ \begin{CD} \mathcal{P}oly^{d,m}_n @>\mbox{sc}^{d,m}_n>> \mbox{{\rm Map}} (\Bbb C,\mathcal{P}oly^m_n(U)) @>ev_0>\simeq> \mbox{{\rm Map}} (\Bbb C, \Bbb C^{mn}\setminus\{{\bf 0}\}) \\ @V{q_1}VV @V{q_2}VV @V{q_3}VV \\ \mathcal{P}oly^{d,m}_n/\Bbb T^m @>>> \mbox{{\rm Map}} (\Bbb C,\mathcal{P}oly^m_n(U)/\Bbb T^m) @>\widetilde{ev}_0>\simeq> \mbox{{\rm Map}} (\Bbb C, (\Bbb C^{mn}\setminus \{{\bf 0}\})//\Bbb T^{m}) \\ @V{\cong}VV @V{q^{\prime}}V{\simeq}V @. \\ \mbox{{\rm Poly}}^{d,m}_n @>sc^{d,m}_n>> \mbox{{\rm Map}} (\Bbb C, \mbox{{\rm Pol}}^m_n(D^2,S^1)) @.
\end{CD} $$ } \newline where the vertical maps $q_i$ $(i=1,2,3)$ are the maps induced by the corresponding group actions, and $q^{\prime}$ is the map induced by $q$. \par Note that the map $q^{\prime}$ is a homotopy equivalence by Lemma \ref{lmm: quasi-fibration}. Since the map $ev_0$ is $\Bbb T^m$-equivariant and a homotopy equivalence by Lemma \ref{lmm: evaluation0}, the map $\widetilde{ev}_0$ is also a homotopy equivalence. \par Now consider the map $\gamma$ given by the second row of the above diagram after imposing the base point condition at $\infty$. Letting $d\to\infty$, $sc^{d,m}_n$ becomes a homology equivalence by Theorem \ref{thm: scanning map}, so the map $\gamma$ is a homology equivalence as $d\to\infty$. Moreover, since the map $q_3$ induces a homotopy equivalence $\Omega^2S^{2mn-1}\simeq \Omega^2_0((\Bbb C^{mn}\setminus \{{\bf 0}\})//\Bbb T^{m})$ (after imposing the base point condition at $\infty$), this map coincides with the map $i^{\infty,m}_n$ (as $d\to\infty$) up to homotopy equivalence. Hence, $i^{\infty,m}_n$ is a homology equivalence. \end{proof} Next, we shall prove the unstable results (Theorem \ref{thm: I} and Corollary \ref{thm: II}). For this purpose, we note the following two results. \begin{lemma}\label{lmm: abelian} If $m\geq 2$ and $n\geq 2$, the space $\mbox{{\rm Poly}}^{d,m}_n$ is simply connected. \end{lemma} \begin{proof} Assume that $m\geq 2$ and $n\geq 2$. Then, by using the braid representation as in \cite[\S 5 Appendix]{GKY1}, strings of different kinds can pass through one another, and one can show that $\pi_1(\mbox{{\rm Poly}}^{d,m}_n)$ is an abelian group. Hence, there is an isomorphism $\pi_1(\mbox{{\rm Poly}}^{d,m}_n)\cong H_1(\mbox{{\rm Poly}}^{d,m}_n,\Bbb Z)$. Now consider the spectral sequence (\ref{SS}). Then it follows from Corollary \ref{crl: Er} that $H_k(\mbox{{\rm Poly}}^{d,m}_n,\Bbb Z)=0$ for any $1\leq k\leq 2mn-5$. Thus, the space $\mbox{{\rm Poly}}^{d,m}_n$ is simply connected.
\end{proof} \begin{theorem}\label{crl: stabilization} If $m\geq 2$ and $n\geq 2$, the stabilization map $$ s^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \mbox{{\rm Poly}}^{d+1,m}_n $$ is a homotopy equivalence if $\lfloor \frac{d}{n}\rfloor =\lfloor \frac{d+1}{n}\rfloor$ and a homotopy equivalence through dimension $D(d;m,n)$ otherwise. \end{theorem} \begin{proof} This follows from Theorem \ref{thm: stab1} and Lemma \ref{lmm: abelian}. \end{proof} Now we can complete the proofs of Theorem \ref{thm: I} and Corollary \ref{thm: II}. \begin{proof}[Proof of Theorem \ref{thm: I}] Since the case $(m,n)=(3,1)$ and the case $(m,n)=(1,3)$ were already proved in Theorem \ref{thm: KY7} and Theorem \ref{thm: KY6}, without loss of generality we may assume that $m\geq 2$ and $n\geq 2$. It follows from Theorem \ref{thm: stab1} and Theorem \ref{thm: natural map} that the map $i^{d,m}_n:\mbox{{\rm Poly}}^{d,m}_n\to \Omega^2_0\Bbb C {\rm P}^{mn-1}$ is a homology equivalence through dimension $D(d;m,n)$. Since $\mbox{{\rm Poly}}^{d,m}_n$ and $\Omega^2_0\Bbb C {\rm P}^{mn-1}\simeq \Omega^2S^{2mn-1}$ are simply connected, the map $i^{d,m}_n$ is in fact a homotopy equivalence through dimension $D(d;m,n)$. \end{proof} \begin{proof}[Proof of Corollary \ref{thm: II}] For the same reason as in the proof of Theorem \ref{thm: I}, it suffices to prove the assertion when $m\geq 2$ and $n\geq 2$. By the diagram (\ref{CD: stab0}), it is easy to see that the following diagram is commutative (up to homotopy): $$ \begin{CD} \mbox{{\rm Poly}}^{d,m}_n @>i^{d,m}_n>> \Omega^2_d\Bbb C {\rm P}^{mn-1} \\ @V{j^{d,m}_n}VV \Vert @. \\ \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}) @>i_d^{\prime}>> \Omega^2_d\Bbb C {\rm P}^{mn-1} \end{CD} $$ It follows from Theorem \ref{thm: KY6} that $i_d^{\prime}$ is a homotopy equivalence through dimension $(2mn-3)(d+1)-1$. Moreover, by Theorem \ref{thm: I} we know that $i^{d,m}_n$ is a homotopy equivalence through dimension $D(d;m,n)$.
Since $D(d;m,n)<(2mn-3)(d+1)-1$, it follows from the above diagram that the map $j^{d,m}_n$ is a homotopy equivalence through dimension $D(d;m,n)$. \end{proof} \section{$C_2$-structures}\label{section: III} In this section we shall prove Theorem \ref{thm: III} and Theorem \ref{thm: IV}. \begin{definition} Let $J=(0,1)$ be the open unit interval in $\Bbb R$. Recall that {\it a little $2$-cube} $c$ means an affine embedding $c:J^2\to J^2$ with parallel axes. \par (i) For each integer $j\geq 1$, let $C_2(j)$ denote the space consisting of all $j$-tuples $(c_1,\cdots ,c_j)$ of little $2$-cubes such that $c_k(J^2)\cap c_i(J^2)=\emptyset$ if $k\not= i$. \par (ii) Let $\alpha_0:\Bbb C\stackrel{\cong}{\longrightarrow}J^2$ be a fixed homeomorphism. If $S_j$ denotes the symmetric group on $j$ letters, it is easy to see that it acts on the spaces $C_2(j)$ and $(\mbox{{\rm Poly}}^{d,m}_n)^j$ by the permutation of coordinates in a natural way. Then we identify $\Bbb C =\Bbb R^2$ and define {\it the structure map} $ \mathcal{I}_j:C_2(j)\times_{S_j}(\mbox{{\rm Poly}}^{d,m}_n)^j\to \mbox{{\rm Poly}}^{dj,m}_n $ by \begin{equation} \mathcal{I}_j((c_1,\cdots ,c_j),(f_1,\cdots ,f_j))= (\prod_{k=1}^jc_k(f_{1;k}(z)),\cdots ,\prod_{k=1}^jc_k(f_{m;k}(z))) \end{equation} for $(c_1,\cdots ,c_j)\in C_2(j)$ and $f_k=(f_{1;k}(z),\cdots ,f_{m;k}(z))\in \mbox{{\rm Poly}}^{d,m}_n$ ($1\leq k\leq j$), where we set \begin{equation} c_k(f(z))=\prod_{i=1}^d\big(z-c_k\circ\alpha_0(a_i)\big) \quad \mbox{if }f(z)=\prod_{i=1}^d(z-a_i)\in {\rm P}^d(\Bbb C). \end{equation} \par (iii) Similarly, let $c_*=(c_{1},c_2)\in C_2(2)$ be any fixed element and define {\it the loop product} $ \mu_{d_1,d_2} :\mbox{{\rm Poly}}^{d_1,m}_n\times \mbox{{\rm Poly}}^{d_2,m}_n\to \mbox{{\rm Poly}}^{d_1+d_2,m}_n $ by \begin{equation} \mu_{d_1,d_2} (f,g)= (c_1(f_1(z))c_2(g_1(z)),\cdots ,c_1(f_m(z))c_2(g_m(z))) \end{equation} for $(f,g)=\big((f_1(z),\cdots ,f_m(z)),(g_1(z),\cdots ,g_m(z))\big) \in \mbox{{\rm Poly}}^{d_1,m}_n\times \mbox{{\rm Poly}}^{d_2,m}_n$.
\end{definition} \begin{remark} (i) It is easy to see that $\mu_{d_1,d_2} (f,g)=\mathcal{I}_2(c_*;f,g)$ if $d_1=d_2=d$. \par (ii) Let $\mbox{{\rm Poly}}^{0,m}_n=\{*_0\}$ and let $\mbox{{\rm Poly}}^{m}_n$ denote the disjoint union \begin{equation} \mbox{Poly}^{m}_n=\coprod_{d\geq 0}\mbox{{\rm Poly}}^{d,m}_n. \end{equation} If we set $\mu_{d,0}(f,*_0)=\mu_{0,d}(*_0,f)=f$ for any $f\in \mbox{{\rm Poly}}^{d,m}_n$, it is easy to see that $\mbox{Poly}^{m}_n$ is a homotopy associative H-space with unit $*_0$, and we can easily see that $\{\mbox{Poly}^{m}_n,\mathcal{I}_j\}_{j\geq 1}$ is a $C_2$-operad space. Thus, by using the group completion Theorem and Theorem \ref{thm: I}, we see that there is a homotopy equivalence \begin{equation}\label{eq: group completion} \Omega B(\mbox{Poly}^{m}_n)\simeq \Omega^2\Bbb C {\rm P}^{mn-1}\simeq \Bbb Z \times \Omega^2S^{2mn-1}. \quad \qed \end{equation} \end{remark} \begin{definition} Let \begin{eqnarray*} *:&&\mbox{{\rm Hol}}_{d_1}^*(S^2,\Bbb C {\rm P}^{mn-1})\times \mbox{{\rm Hol}}_{d_2}^*(S^2,\Bbb C {\rm P}^{mn-1})\to \mbox{{\rm Hol}}_{d_1+d_2}^*(S^2,\Bbb C {\rm P}^{mn-1}) \\ &&\mbox{and} \\ \mathcal{I}:&&C_2(j)\times_{S_j}\mbox{{\rm Hol}}_{d}^*(S^2,\Bbb C {\rm P}^{mn-1})^j \to \mbox{{\rm Hol}}_{dj}^*(S^2,\Bbb C {\rm P}^{mn-1}) \end{eqnarray*} denote the loop product and the $C_2$-structure map defined in \cite[(4.8)]{BM} and \cite[(4.10)]{BM}, respectively. \end{definition} It is easy to see that the above definitions of the loop products and the structure maps are completely analogous to those of \cite[(4.8), (4.10)]{BM}, and we have the following: \begin{lemma}\label{lmm: C2} \begin{enumerate} \item[$\mbox{{\rm (i)}}$] The following two diagrams are homotopy commutative. 
$$ \begin{CD} \mbox{{\rm Poly}}^{d_1,m}_n\times \mbox{{\rm Poly}}^{d_2,m}_n @>{\mu_{d_1,d_2}}>> \mbox{{\rm Poly}}^{d_1+d_2,m}_n \\ @V{j^{d_1,m}_n\times j^{d_2,m}_n}VV @V{j^{d_1+d_2,m}_n}VV \\ \mbox{{\rm Hol}}_{d_1}^*(S^2,\Bbb C {\rm P}^{mn-1}) \times \mbox{{\rm Hol}}_{d_2}^*(S^2,\Bbb C {\rm P}^{mn-1}) @>*>> \mbox{{\rm Hol}}_{d_1+d_2}^*(S^2,\Bbb C {\rm P}^{mn-1}) \end{CD} $$ $$ \begin{CD} C_2(j)\times_{S_j}(\mbox{{\rm Poly}}^{d,m}_n)^j @>{\mathcal{I}_j}>> \mbox{{\rm Poly}}^{dj,m}_n \\ @V{1\times (j^{d,m}_n)^j}VV @V{j^{dj,m}_n}VV \\ C_2(j)\times_{S_j} \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1})^j @>\mathcal{I}>> \mbox{{\rm Hol}}_{dj}^*(S^2,\Bbb C {\rm P}^{mn-1}) \end{CD} $$ \item[$\mbox{{\rm (ii)}}$] The map $$ \displaystyle\coprod_{d\geq 0}i^{d,m}_n:\mbox{\rm Poly}^m_n=\coprod_{d\geq 0}\mbox{{\rm Poly}}^{d,m}_n \to \coprod_{d\in\Bbb Z}\Omega^2_d\Bbb C {\rm P}^{mn-1}=\Omega^2\Bbb C {\rm P}^{mn-1} $$ is a $C_2$-map up to homotopy equivalence. \end{enumerate} \end{lemma} \begin{proof} This can be proved in a way analogous to the proof of \cite[Theorem 4.10]{BM}. \end{proof} \begin{lemma}\label{lmm: Pon} There is a homotopy equivalence $\mbox{{\rm Poly}}^{n,m}_n\simeq S^{2mn-3}$. \end{lemma} \begin{proof} This easily follows from the homeomorphism (\ref{eq: Poly}). \end{proof} \begin{lemma}[\cite{CMM}, \cite{CMT}, \cite{Sn}] \label{lmm: Snaith} \begin{enumerate} \item[$\mbox{{\rm (i)}}$] If $X$ is a connected based CW complex, there is a stable homotopy equivalence $\displaystyle \Omega^2\Sigma^2X\simeq_s \bigvee_{k=1}^{\infty}D_k(X), $ where $D_k(X)$ denotes the space $D_k(X)=F(\Bbb C,k)_+\wedge_{S_k}(\bigwedge^k X)$ as in (\ref{equ: DkX}). \item[$\mbox{{\rm (ii)}}$] For integers $k\geq 1$ and $N\geq 2$, there is a homotopy equivalence $D_k(S^{2N-1})\simeq \Sigma^{2(N-1)k}D_k$, where $D_k=D_k(S^1)$ as in (\ref{equ: DkS}).
\item[$\mbox{{\rm (iii)}}$] The canonical projection $p_{k,N}:F(\Bbb C,k)\times_{S_k}(S^{2N-1})^k\to D_k(S^{2N-1})$ has a stable section $e_{k,N}: D_k(S^{2N-1})\to F(\Bbb C,k)\times_{S_k}(S^{2N-1})^k$. \qed \end{enumerate} \end{lemma} \begin{definition}\label{dfn: Phi, Psi} (i) For each $1\leq k<d$, let $s_{k,d}:\mbox{{\rm Poly}}^{kn,m}_n\to \mbox{{\rm Poly}}^{dn,m}_n$ denote the composite of stabilization maps \begin{equation} s_{k,d}: \mbox{{\rm Poly}}^{kn,m}_n\stackrel{}{\longrightarrow} \mbox{{\rm Poly}}^{kn+1,m}_n\stackrel{}{\longrightarrow} \cdots \stackrel{}{\longrightarrow} \mbox{{\rm Poly}}^{dn-1,m}_n\stackrel{}{\longrightarrow} \mbox{{\rm Poly}}^{dn,m}_n. \end{equation} We denote by $\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n$ the mapping cone of the map $s_{d-1,d}$. \par (ii) Let \begin{equation}\label{eq: ed def} e_d:\Sigma^{2(mn-2)d}D_d\to F(\Bbb C,d)\times_{S_d}(S^{2mn-3})^d \end{equation} denote the stable map defined by the composite of maps $$ \begin{CD} e_d:\Sigma^{2(mn-2)d}D_d\simeq D_d(S^{2mn-3})@>e_{d,mn-1}>> F(\Bbb C,d)\times_{S_d}(S^{2mn-3})^d. \end{CD} $$ It is easy to see that $e_d$ is a stable section of the projection $F(\Bbb C,d)\times_{S_d}(S^{2mn-3})^d\to D_d(S^{2mn-3})\simeq \Sigma^{2(mn-2)d}D_d.$ \par (iii) Since there is an $S_d$-equivariant homotopy equivalence $C_2(d)\simeq F(\Bbb C,d)$, one can define the stable map $ \Psi_d:\Sigma^{2(mn-2)d}D_d\to \mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n $ by \begin{equation} \Psi_d=\tilde{p}_d\circ \mathcal{I}_d^{\prime}\circ e_d, \end{equation} where $\tilde{p}_d:\mbox{{\rm Poly}}^{dn,m}_n\to \mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n$ is the natural projection and the map $\mathcal{I}_d^{\prime}$ denotes the map defined by the composite of maps \begin{equation} \mathcal{I}_d^{\prime}: F(\Bbb C,d)\times_{S_d}(S^{2mn-3})^d\simeq C_2(d)\times_{S_d}(\mbox{{\rm Poly}}^{n,m}_n)^d \stackrel{\mathcal{I}_d}{\longrightarrow} \mbox{{\rm Poly}}^{dn,m}_n.
\end{equation} Note that the following diagram is commutative. \begin{equation*}\label{CD: Psi} \begin{CD} \Sigma^{2(mn-2)d}D_d \simeq_s D_d(S^{2mn-3})@>e_d>> F(\Bbb C ,d)\times_{S_d}(S^{2mn-3})^d \\ @V{\Psi_d}VV @V{\mathcal{I}_d^{\prime}}VV \\ \mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n @<\tilde{p}_d<< \mbox{{\rm Poly}}^{dn,m}_n \end{CD} \end{equation*} Similarly, define the stable map $ \Phi_d:\bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k \to \mbox{{\rm Poly}}^{dn,m}_n $ by \begin{equation} \Phi_d=(\vee s_{k,d})\circ (\vee \mathcal{I}_k^{\prime})\circ (\vee e_k). \end{equation} Thus the following diagram is commutative: \begin{equation*} \begin{CD} \displaystyle \bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k @>>\simeq_s> \displaystyle \bigvee_{k=1}^dD_k(S^{2mn-3}) @>{\vee e_k}>> \displaystyle \bigvee_{k=1}^d F(\Bbb C ,k)\times_{S_k}(S^{2mn-3})^k \\ @V{\Phi_d}VV @. @V{\vee \mathcal{I}_k^{\prime}}VV \\ \mbox{{\rm Poly}}^{dn,m}_n @>=>> \mbox{{\rm Poly}}^{dn,m}_n @<{\vee s_{k,d}}<< \displaystyle \bigvee_{k=1}^d \mbox{{\rm Poly}}^{kn,m}_n \end{CD} \end{equation*} \par (iv) For a connected space $X$, let $J_2(X)$ denote {\it the May-Milgram model} for $\Omega^2\Sigma^2X$ \cite{May} \begin{equation} J_2(X)=\big(\coprod_{k= 1}^{\infty}F(\Bbb C,k)\times_{S_k}X^k\big)/\sim , \end{equation} where $\sim$ denotes the well-known equivalence relation. For each integer $d\geq 1$, let $J_2(X)_d\subset J_2(X)$ denote {\it the $d$-th stage filtration of the May-Milgram model} for $\Omega^2\Sigma^2X$ defined by \begin{equation}\label{eq: d-th MM-model} J_2(X)_d=\big(\coprod_{k=1}^dF(\Bbb C,k)\times_{S_k}X^k\big)/\sim. \quad \qed \end{equation} \end{definition} \begin{theorem}\label{thm: V} The map $\Psi_d:\Sigma^{2(mn-2)d}D_d \stackrel{\simeq_s}{\longrightarrow} \mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n$ is a stable homotopy equivalence. \end{theorem} The proof of Theorem \ref{thm: V} is postponed to \S \ref{section: Proof V}; first we prove the following result.
\begin{theorem}\label{thm: VI} The map $\displaystyle \Phi_d:\bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k \stackrel{\simeq_s}{\longrightarrow} \mbox{{\rm Poly}}^{dn,m}_n$ is a stable homotopy equivalence. \end{theorem} \begin{proof} We proceed by induction on $d$. If $d=1$, since there is a homotopy equivalence $D_1\simeq S^1$, the assertion follows from Lemma \ref{lmm: Pon}. Assume that the result holds for the case $d-1$, i.e. the map $\Phi_{d-1}$ is a stable homotopy equivalence. Note that the following diagram is commutative. $$ \begin{CD} \bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k @>\Phi_d>> \mbox{{\rm Poly}}^{dn,m}_n \\ \Vert @. @A{s_{d-1,d} \vee 1}AA \\ \big(\bigvee_{k=1}^{d-1}\Sigma^{2(mn-2)k}D_k\big) \vee \Sigma^{2(mn-2)d}D_d @>\Phi_{d-1}\vee \mathcal{I}_d^{\prime}\circ e_d>> \mbox{{\rm Poly}}^{(d-1)n,m}_n\vee \mbox{{\rm Poly}}^{dn,m}_n \end{CD} $$ Thus, we can easily obtain the following homotopy commutative diagram $$ \begin{CD} \bigvee_{k=1}^{d-1}\Sigma^{2(mn-2)k}D_k @>>\subset> \bigvee_{k=1}^{d}\Sigma^{2(mn-2)k}D_k @>>> \Sigma^{2(mn-2)d}D_d \\ @V{\Phi_{d-1}}V{\simeq_s}V @V{\Phi_d}VV @V{\Psi_d}V{\simeq_s}V \\ \mbox{{\rm Poly}}^{(d-1)n,m}_n @>{s_{d-1,d}}>> \mbox{{\rm Poly}}^{dn,m}_n @>\tilde{p}_d>> \mbox{{\rm Poly}}^{dn,m}_n/ \mbox{{\rm Poly}}^{(d-1)n,m}_n \end{CD} $$ where the horizontal sequences are cofibration sequences. Since $\Phi_{d-1}$ and $\Psi_d$ are stable homotopy equivalences, so is the map $\Phi_d$. \end{proof} Now we are ready to prove Theorem \ref{thm: III}. \begin{proof}[Proof of Theorem \ref{thm: III}] Since the case $m=1$ and the case $n=1$ were already obtained by Theorem \ref{thm: CCMM} and Theorem \ref{thm: GKY4}, we may assume that $m\geq 2$ and $n\geq 2$. Then the assertion easily follows from Theorem \ref{crl: stabilization} and Theorem \ref{thm: VI}. \end{proof} Next we shall prove Theorem \ref{thm: IV}.
\begin{definition} It follows from Lemma \ref{lmm: C2} and \cite[Theorem 4.14, Theorem 4.16]{BM} that the $C_2$-structure of $\mbox{Poly}^m_n=\coprod_{d\geq 0}\mbox{{\rm Poly}}^{d,m}_n$ and that of $J_2(S^{2mn-3})$ induced from the double loop product are compatible. So the structure maps $\mathcal{I}_d$'s induce a map \begin{equation} \epsilon_d:J_2(S^{2mn-3})_d\to \mbox{{\rm Poly}}^{dn,m}_n \end{equation} such that the following diagram is homotopy commutative: \begin{equation} \begin{CD} \bigvee_{k=1}^dF(\Bbb C,k)\times_{S_k}(S^{2mn-3})^k @>\vee\mathcal{I}_k^{\prime}>> \bigvee_{k=1}^d\mbox{{\rm Poly}}^{kn,m}_n \\ @V{\vee q_k}VV @V{\vee s_{k,d}}VV \\ J_2(S^{2mn-3})_d @>\epsilon_d>> \mbox{{\rm Poly}}^{dn,m}_n \end{CD} \end{equation} where \begin{equation} q_k:F(\Bbb C,k)\times_{S_k}(S^{2mn-3})^k\to J_2(S^{2mn-3})_d \qquad (1\leq k\leq d) \end{equation} denotes the natural projection. \qed \end{definition} \begin{theorem}\label{thm: VII} If $m\geq 2$ and $n\geq 2$, the map $\epsilon_d:J_2(S^{2mn-3})_d \stackrel{\simeq}{\longrightarrow}\mbox{{\rm Poly}}^{dn,m}_n$ is a homotopy equivalence. \end{theorem} \begin{proof} Since the stable maps $\{e_k\}$ are stable sections of the Snaith splittings (by Lemma \ref{lmm: Snaith}), the map $\displaystyle (\vee q_k)\circ (\vee e_k):\bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k \stackrel{\simeq_s}{\longrightarrow}J_2(S^{2mn-3})_d$ is a stable homotopy equivalence. Now, it is easy to see that the following diagram is stable homotopy commutative: {\small $$ \begin{CD} \displaystyle\bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k @>{\vee e_k}>> \displaystyle\bigvee_{k=1}^dF(\Bbb C,k)\times_{S_k}(S^{2mn-3})^k @>{\vee \mathcal{I}_k^{\prime}}>> \displaystyle\bigvee_{k=1}^d\mbox{{\rm Poly}}^{kn,m}_n \\ \Vert @.
@V{\vee q_k}VV @V{\vee s_{k,d}}VV \\ \displaystyle \bigvee_{k=1}^d\Sigma^{2(mn-2)k}D_k @>(\vee q_k)\circ (\vee e_k)>{\simeq_s}> J_2(S^{2mn-3})_d @>{\epsilon_d}>> \mbox{{\rm Poly}}^{dn,m}_n \end{CD} $$} Since $\Phi_d=(\vee s_{k,d})\circ (\vee \mathcal{I}_k^{\prime}) \circ (\vee e_k)$ is a stable homotopy equivalence (by Theorem \ref{thm: VI}), so is the map $\epsilon_d$. Thus, the map $\epsilon_d$ is a homology equivalence. Since $J_2(S^{2mn-3})_d$ and $\mbox{{\rm Poly}}^{dn,m}_n$ are simply connected (by Lemma \ref{lmm: abelian}), the map $\epsilon_d$ is a homotopy equivalence. \end{proof} Now we can give the proof of Theorem \ref{thm: IV}. \begin{proof}[Proof of Theorem \ref{thm: IV}] If $(m,n)=(3,1)$, then $\mbox{{\rm Poly}}^{d,m}_n=\mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^2)=\mbox{{\rm Hol}}_{\lfloor\frac{d}{n}\rfloor}^*(S^2,\Bbb C {\rm P}^{mn-1})$ and the assertion holds. If $(m,n)=(1,3)$, the assertion was already proved in Theorem \ref{thm: GKY4}, so we may assume that $m\geq 2$ and $n\geq 2$. It follows from Theorem \ref{crl: stabilization} that it suffices to prove that there is a homotopy equivalence \begin{equation}\label{eq: equiv} \mbox{{\rm Poly}}^{dn,m}_n\simeq \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}). \end{equation} By Theorem \ref{thm: VII}, $ \epsilon_d:J_2(S^{2mn-3})_d\stackrel{\simeq}{\longrightarrow}\mbox{{\rm Poly}}^{dn,m}_n $ is a homotopy equivalence. On the other hand, it follows from (ii) of Theorem \ref{thm: CCMM} that there is a homotopy equivalence $ J_2(S^{2mn-3})_d\simeq \mbox{{\rm Hol}}_d^*(S^2,\Bbb C {\rm P}^{mn-1}). $ Hence, there is a homotopy equivalence (\ref{eq: equiv}) and the assertion is obtained. \end{proof} \section{The space $\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n$}\label{section: Proof V} In this section we give the proof of Theorem \ref{thm: V}.
Since the case $m=1$ and the case $n=1$ were already proved in \cite[Theorem 1, Theorem 15]{CCMM2} and \cite[Theorem 2.9]{GKY4}, in this section we always assume that $m\geq 2$ and $n\geq 2$. Let \begin{equation} \iota_d: \mbox{{\rm Poly}}^{dn,m}_n \to \mbox{{\rm Poly}}^{\infty ,m}_n \end{equation} denote the natural inclusion map induced from the stabilization maps. \begin{theorem}\label{thm: VIII} The stable map $$ \displaystyle (\vee \iota_d)\circ (\vee \mathcal{I}_d^{\prime}) \circ (\vee e_d): \bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d \stackrel{\simeq_s}{\longrightarrow} \mbox{{\rm Poly}}^{\infty ,m}_n $$ is a stable homotopy equivalence. \end{theorem} \begin{proof} By using (ii) of Lemma \ref{lmm: C2} we see that $i^{\infty ,m}_n:\mbox{{\rm Poly}}^{\infty ,m}_n\to \Omega^2S^{2mn-1}$ is a $C_2$-map such that the following diagram is homotopy commutative \begin{equation*}\label{CD: a} \begin{CD} J_2(\mbox{{\rm Poly}}^{\infty ,m}_n) @>J_2(i^{\infty ,m}_n)>\simeq> J_2(\Omega^2S^{2mn-1}) \\ @V{r_1}VV @V{r_2}VV \\ \mbox{{\rm Poly}}^{\infty ,m}_n @>i^{\infty ,m}_n>\simeq>\Omega^2S^{2mn-1} \end{CD} \end{equation*} where $r_1$ and $r_2$ are the natural retraction maps. Since $m\geq 2$ and $n\geq 2$, the two maps $i^{\infty ,m}_n$ and $J_2(i^{\infty ,m}_n)$ are indeed homotopy equivalences (by Theorem \ref{thm: natural map} and Lemma \ref{lmm: abelian}). \par Similarly, by using (ii) of Lemma \ref{lmm: C2} we have the following homotopy commutative diagram \begin{equation*}\label{CD: b} \begin{CD} \bigvee_{d=1}^{\infty}F(\Bbb C ,d)\times_{S_d}(S^{2mn-3})^d @>{\vee p_d}>> J_2(S^{2mn-3}) @>J_2(\iota )>> J_2(\mbox{{\rm Poly}}^{\infty ,m}_n) \\ @V{\vee \mathcal{I}_d^{\prime}}VV @.
@V{r_1}VV \\ \bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n @>{\vee \iota_d}>>\mbox{{\rm Poly}}^{\infty,m}_n @>=>>\mbox{{\rm Poly}}^{\infty,m}_n \end{CD} \end{equation*} where $\iota$ denotes the natural inclusion map $\iota :S^{2mn-3}\simeq \mbox{{\rm Poly}}^{n,m}_n\stackrel{\iota_1}{\longrightarrow} \mbox{{\rm Poly}}^{\infty ,m}_n.$ Now consider the composite of maps $$ \begin{CD} J_2(S^{2mn-3}) @>J_2(\iota)>> J_2(\mbox{{\rm Poly}}^{\infty,m}_n) @>J_2(i^{\infty ,m}_n)>\simeq> J_2(\Omega^2S^{2mn-1}) @>r_2>> \Omega^2S^{2mn-1} \end{CD} $$ Since the map $i^{\infty,m}_n\circ \iota$ is the natural inclusion of the bottom cell of the double suspension $E^2:S^{2mn-3}\to \Omega^2\Sigma^2S^{2mn-3}=\Omega^2S^{2mn-1}$ (up to homotopy equivalence), the map $r_2\circ J_2(i^{\infty ,m}_n)\circ J_2(\iota)$ is homotopic to the natural homotopy equivalence $J_2(S^{2mn-3})\simeq \Omega^2S^{2mn-1}$. Thus the above two diagrams reduce to the following stable homotopy commutative diagram \begin{equation*} \begin{CD} \bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d @. \\ @V{\vee e_d}VV @. \\ \bigvee_{d=1}^{\infty}F(\Bbb C,d)\times_{S_d}(S^{2mn-3})^d @>{\vee p_d}>> J_2(S^{2mn-3}) \simeq \Omega^2S^{2mn-1} \\ @V{\vee \mathcal{I}_d^{\prime}}VV @A{i^{\infty ,m}_n}A{\simeq}A \\ \bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n @>{\vee \iota_d}>> \mbox{{\rm Poly}}^{\infty,m}_n \end{CD} \end{equation*} Since the stable maps $\{e_d\}$ are stable sections of the stable homotopy equivalence $\Omega^2S^{2mn-1}\simeq_s\vee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d$, the map $(\vee p_d)\circ (\vee e_d)$ is a stable homotopy equivalence, and hence so is the map $(\vee \iota_d) \circ (\vee \mathcal{I}_d^{\prime})\circ (\vee e_d)$. \end{proof} \begin{remark}\label{remark: rem} Since $\displaystyle\lim_{d\to\infty}\Phi_d =(\vee \iota_d) \circ (\vee \mathcal{I}_d^{\prime})\circ (\vee e_d)$, the above result may be regarded as the stable version of Theorem \ref{thm: VI}.
\qed \end{remark} \begin{lemma}\label{lmm: X} $\mbox{{\rm (i)}}$ The induced homomorphism $$ (s_{d-1,d})_*: H_*(\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z)\to H_*(\mbox{{\rm Poly}}^{dn,m}_n,\Bbb Z) $$ is a monomorphism. \par $\mbox{{\rm (ii)}}$ The induced homomorphism $$ (\Psi_{d})_*: H_*(\Sigma^{2(mn-2)d}D_d,\Bbb F)\to H_*(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb F) $$ is a monomorphism for $\Bbb F =\Bbb Q$ or $\Bbb Z/p$ $(p:$ any prime$)$. \end{lemma} We postpone the proof of Lemma \ref{lmm: X} to \S \ref{section: transfer} and we give the proof of Theorem \ref{thm: V}. \begin{proof}[Proof of Theorem \ref{thm: V}] Let $\Bbb F =\Bbb Q$ or $\Bbb Z/p$ $(p:$ any prime$)$. By (i) of Lemma \ref{lmm: X}, there is an isomorphism of $\Bbb F$-vector spaces $$ H_*(\mbox{{\rm Poly}}^{\infty,m}_n,\Bbb F)\cong H_*(\bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb F). $$ Hence, it follows from Lemma \ref{lmm: Snaith}, Theorem \ref{thm: natural map} and Theorem \ref{thm: VIII} that there is an isomorphism of $\Bbb F$-vector spaces $$ H_*(\bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d,\Bbb F)\cong H_*(\bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb F). $$ Thus the following equality holds for each $k\geq 1$: $$ \dim_{\Bbb F}H_k(\bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d,\Bbb F)= \dim_{\Bbb F}H_k(\bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb F) <\infty $$ However, by (ii) of Lemma \ref{lmm: X}, we see that the homomorphism $$ (\vee_d\Psi_{d})_*: H_*(\bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d,\Bbb F)\to H_*(\bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb F) $$ is injective. 
Therefore, the homomorphism $$ (\vee_d\Psi_{d})_*: H_*(\bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d,\Bbb F) \stackrel{\cong}{\longrightarrow} H_*(\bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb F) $$ is indeed an isomorphism for $\Bbb F =\Bbb Q$ or $\Bbb Z/p$ $(p:$ any prime$)$, and from the universal coefficient Theorem, the homomorphism $$ (\vee_d\Psi_{d})_*: H_*(\bigvee_{d=1}^{\infty}\Sigma^{2(mn-2)d}D_d,\Bbb Z) \stackrel{\cong}{\longrightarrow} H_*(\bigvee_{d=1}^{\infty}\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z) $$ is an isomorphism. Hence, for each $d\geq 1,$ the homomorphism $$ (\Psi_{d})_*: H_*(\Sigma^{2(mn-2)d}D_d,\Bbb Z)\stackrel{\cong}{\longrightarrow} H_*(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z) $$ is an isomorphism. Therefore $\Psi_d$ is a stable homotopy equivalence. \end{proof} \section{Transfer maps}\label{section: transfer} In this section we prove Lemma \ref{lmm: X}. For this purpose, we use the transfer maps defined as follows. \begin{definition} (i) For a connected based space $(X,x_0)$, let $\mbox{{\rm SP}}^{\infty}(X)$ denote {\it the infinite symmetric product} of $X$ defined by $\displaystyle \mbox{{\rm SP}}^{\infty}(X)=\lim_{d\to\infty}\mbox{{\rm SP}}^d(X)=\bigcup_{d\geq 0}\mbox{{\rm SP}}^d(X), $ where the space $\mbox{{\rm SP}}^d(X)$ is regarded as the subspace of $\mbox{{\rm SP}}^{d+1}(X)$ by identifying $\sum_{k=1}^dx_k$ with $\sum_{k=1}^dx_k+x_0$. So the space $\mbox{{\rm SP}}^{\infty}(X)$ may be regarded as the abelian monoid generated by $X$ with the unit $x_0$. 
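\par For example, by the classical theorem of Dold and Thom there is a homotopy equivalence $\mbox{{\rm SP}}^{\infty}(S^k)\simeq K(\Bbb Z,k)$ for each $k\geq 1$; in particular, $\mbox{{\rm SP}}^{\infty}(S^2)\simeq \Bbb C {\rm P}^{\infty}$.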
\par (ii) Since there is a homotopy equivalence $\mbox{{\rm Poly}}^{(d-1)n,m}_n\simeq \mbox{{\rm Poly}}^{dn-1,m}_n$ (by Theorem \ref{crl: stabilization}), by using this identification we define the map $$ \tau :\mbox{{\rm Poly}}^{dn,m}_n \to \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn-1,m}_n) \simeq \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{(d-1)n,m}_n) \qquad\mbox{by} $$ $$ (f_1(z),\cdots ,f_m(z)) \mapsto \sum_{1\leq i_1,\cdots ,i_m\leq dn}(\prod^{dn}_{k=1, k\not=i_1}(z-a_{k,1}),\cdots ,\prod^{dn}_{k=1,k\not= i_m}(z-a_{k,m})), $$ where $(f_1(z),\cdots ,f_m(z))\in \mbox{{\rm Poly}}^{dn,m}_n$ and $f_j(z)=\prod^{dn}_{k=1}(z-a_{k,j})$ for $1\leq j\leq m$. The map $\tau$ naturally extends to a homomorphism of abelian monoids \begin{equation} \tau_{d-1}:\mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n)\to \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{(d-1)n,m}_n). \end{equation} For each $1\leq k< d$, define the transfer map \begin{equation} \tau_{k,d}:\mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n)\to \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n) \end{equation} as the composite $\tau_{k,d}=\tau_k\circ \tau_{k+1}\circ \cdots \circ \tau_{d-1}$, i.e. $$ \tau_{k,d}: \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n) \stackrel{\tau_{d-1}}{\longrightarrow} \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{(d-1)n,m}_n) \to \cdots \stackrel{\tau_{k}}{\longrightarrow} \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n). $$ In particular, we set $\tau_{d,d}=\mbox{id}$ for $k=d$. \qed \end{definition} \begin{lemma}\label{lmm: XI} $\mbox{{\rm (i)}}$ The induced homomorphism $(s_{d-1,d})_*:H_*(\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z) \to H_*(\mbox{{\rm Poly}}^{dn,m}_n,\Bbb Z)$ is a monomorphism.
\par $\mbox{{\rm (ii)}}$ The map $$ \begin{CD} \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n) @>(\tilde{p}_d,\tau_{d-1,d})>\simeq> \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n) \times \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{(d-1)n,m}_n) \end{CD} $$ is a homotopy equivalence, where $$ \tilde{p}_k: \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n)\to \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n/\mbox{{\rm Poly}}^{(k-1)n,m}_n) $$ denotes the map induced from the natural projection $$\mbox{{\rm Poly}}^{kn,m}_n\to \mbox{{\rm Poly}}^{kn,m}_n/\mbox{{\rm Poly}}^{(k-1)n,m}_n. $$ \end{lemma} \begin{proof} It is well-known that there is a natural isomorphism $\pi_k(\mbox{{\rm SP}}^{\infty}(X))\cong \tilde{H}_k(X,\Bbb Z)$ for any connected space $X$ and any $k\geq 0$. Furthermore, note that the equality $\tilde{p}_k\circ \tau_{k,d-1}= \tilde{p}_k\circ \tau_{k,d}\circ s_{d-1,d} $ (up to homotopy equivalence) holds for each $1\leq k<d$. Thus we can show that $$ (\tau_{k,d})_*\circ (s_{d-1,d})_*\equiv (\tau_{k,d-1})_* \quad (\mbox{mod Im }(s_{k-1,k})_*) $$ on $H_*(\mbox{{\rm Poly}}^{kn,m}_n,\Bbb Z)$ for each $1\leq k<d$. Then by using \cite[Lemma 2]{Do}, we can prove that $(s_{d-1,d})_*$ is a monomorphism and that the map $(\tilde{p}_d,\tau_{d-1,d})$ induces an isomorphism on the homotopy group $\pi_k(\ )$ for any $k$. Hence, the map $(\tilde{p}_d,\tau_{d-1,d})$ is a homotopy equivalence. \end{proof} Using the above result, it is easy to prove the following result.
\begin{corollary} The map $$ \tilde{\tau}_d=\prod_{k=1}^d\tilde{\tau}_{k,d}:\mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n) \stackrel{\simeq}{\longrightarrow} \prod_{k=1}^d\mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n/\mbox{{\rm Poly}}^{(k-1)n,m}_n) $$ is a homotopy equivalence, where the map $\tilde{\tau}_{k,d}$ denotes the composite of maps $$ \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n) \stackrel{\tau_{k,d}}{\longrightarrow} \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n) \stackrel{\tilde{p}_k}{\longrightarrow} \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{kn,m}_n/\mbox{{\rm Poly}}^{(k-1)n,m}_n). \qed $$ \end{corollary} \begin{definition} Let $N$ be a positive integer and assume that $1\leq j<d$. \par (i) Let $ q_{d,j}^{(N)}:F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2N-1})^d \to F(\Bbb C,d)\times_{S_d}(S^{2N-1})^d $ denote the natural covering projection corresponding to the subgroup $S_j\times S_{d-j}\subset S_d$. Define the transfer \begin{equation} \sigma^{(N)} :F(\Bbb C,d)\times_{S_d}(S^{2N-1})^d \to \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2N-1})^d) \end{equation} for the covering projection $q_{d,j}^{(N)}$ by $$ \sigma^{(N)} (x)=\sum_{\tilde{x}\in (q_{d,j}^{(N)})^{-1}(x)}\tilde{x} \quad \mbox{ for }x\in F(\Bbb C,d)\times_{S_d}(S^{2N-1})^d. $$ \par (ii) Let $\rho_j^{(N)}:F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2N-1})^d \to F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2N-1})^j$ denote the projection onto the first $j$ coordinates of $(S^{2N-1})^d$, and define the map $$ \sigma_j^{(N)}: F(\Bbb C,d)\times_{S_d}(S^{2N-1})^d \to \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2N-1})^j) $$ by $\sigma_j^{(N)}=\mbox{{\rm SP}}^{\infty}(\rho_j^{(N)})\circ \sigma^{(N)}$.
The map $\sigma_j^{(N)}$ naturally extends to a map \begin{equation} \tilde{\sigma}_j^{(N)}: \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_d}(S^{2N-1})^d) \to \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2N-1})^j) \end{equation} by the usual addition $\tilde{\sigma}_j^{(N)}(\sum_kx_k)=\sum_k\sigma_j^{(N)}(x_k)$. \par (iii) Let $$ \mathcal{I}_{j,d}^{\prime}:F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2mn-3})^j \to \mbox{{\rm Poly}}^{jn,m}_n $$ denote the $C_2$-structure map defined in a way similar to that of $\mathcal{I}_d^{\prime}$. \qed \end{definition} \begin{lemma}[\cite{CCMM2}]\label{lmm: sigma e} Let $1\leq j<d$. Then the stable map $$ \sigma_j^{(N)}\circ e_{d,N}:D_d(S^{2N-1}) \to \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_j\times S_{d-j}} (S^{2N-1})^j) $$ is null-homotopic. \end{lemma} \begin{proof} The case $N=1$ was proved in \cite[pages 44--45]{CCMM2} and the case $N\geq 2$ can be proved in exactly the same way. \end{proof} The following is easy to verify: \begin{lemma}\label{lmm: CD-transfer} Let $m$ and $n$ be positive integers $\geq 2$. Then if $1\leq j<d$ and $N=mn-1$, the following diagram is commutative: \begin{equation*} \begin{CD} \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_d}(S^{2mn-3})^d) @>\tilde{\sigma}_j^{(mn-1)}>> \mbox{{\rm SP}}^{\infty}(F(\Bbb C,d)\times_{S_j\times S_{d-j}}(S^{2mn-3})^j) \\ @V{\mbox{{\rm SP}}^{\infty}(\mathcal{I}_d^{\prime})}VV @V{\mbox{{\rm SP}}^{\infty}(\mathcal{I}_{j,d}^{\prime})}VV \\ \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{dn,m}_n) @>{\tau_{j,d}}>> \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{jn,m}_n) \quad \qed \end{CD} \end{equation*} \end{lemma} \begin{lemma}\label{lmm: null homotopic} If $m\geq 2$ and $n\geq 2$, the stable map $$ \tau_{d-1,d}\circ \mbox{{\rm SP}}^{\infty}(\mathcal{I}_d^{\prime})\circ \mbox{{\rm SP}}^{\infty}(e_d): \mbox{{\rm SP}}^{\infty}(\Sigma^{2(mn-2)d}D_d) \to \mbox{{\rm SP}}^{\infty}(\mbox{{\rm Poly}}^{(d-1)n,m}_n) $$ is null-homotopic.
\end{lemma} \begin{proof} Note that $\sigma_{d-1}^{(mn-1)}\circ e_d$ is null-homotopic by Lemma \ref{lmm: sigma e} (cf. (\ref{eq: ed def})). By Lemma \ref{lmm: CD-transfer} we see that \begin{eqnarray*} \tau_{d-1,d}\circ \mbox{{\rm SP}}^{\infty}(\mathcal{I}_d^{\prime})\circ \mbox{{\rm SP}}^{\infty}(e_d) &= & \mbox{{\rm SP}}^{\infty}(\mathcal{I}_{d-1,d}^{\prime})\circ \tilde{\sigma}_{d-1}^{(mn-1)}\circ \mbox{{\rm SP}}^{\infty}(e_d) \\ &=& \mbox{{\rm SP}}^{\infty}(\mathcal{I}_{d-1,d}^{\prime})\circ \mbox{{\rm SP}}^{\infty}(\sigma_{d-1}^{(mn-1)}\circ e_d) \simeq *. \end{eqnarray*} Thus the map $\tau_{d-1,d}\circ \mbox{{\rm SP}}^{\infty}(\mathcal{I}_d^{\prime})\circ \mbox{{\rm SP}}^{\infty}(e_d)$ is null-homotopic. \end{proof} Now we are ready to prove Lemma \ref{lmm: X}. \begin{proof}[Proof of Lemma \ref{lmm: X}] Since the first assertion follows from (i) of Lemma \ref{lmm: XI}, it suffices to prove assertion (ii). First, it follows from Theorem \ref{thm: VIII} that the homomorphism $ (\mathcal{I}_d^{\prime}\circ e_d)_*: H_*(\Sigma^{2(mn-2)d}D_d,\Bbb Z) \to H_*(\mbox{{\rm Poly}}^{dn,m}_n,\Bbb Z) $ is a monomorphism. Next, by (ii) of Lemma \ref{lmm: XI} the homomorphism $$ \begin{CD} H_*(\mbox{{\rm Poly}}^{dn,m}_n) @>(\tilde{p}_{d*},\tau_{d-1,d*})>\cong> H_*(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n)\oplus H_*(\mbox{{\rm Poly}}^{(d-1)n,m}_n) \end{CD} $$ is an isomorphism. Hence, the composite $((\tilde{p}_{d})_*\circ (\mathcal{I}_d^{\prime}\circ e_d)_*, (\tau_{d-1,d})_*\circ (\mathcal{I}_d^{\prime}\circ e_d)_*)$: $$ H_*(\Sigma^{2(mn-2)d}D_d,\Bbb Z) \to H_*(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z)\oplus H_*(\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z) $$ is a monomorphism. However, by Lemma \ref{lmm: null homotopic}, $(\tau_{d-1,d})_*\circ (\mathcal{I}_d^{\prime}\circ e_d)_*$ is trivial.
Thus, the homomorphism $$ (\tilde{p}_d)_*\circ (\mathcal{I}_d^{\prime}\circ e_d)_*: H_*(\Sigma^{2(mn-2)d}D_d,\Bbb Z) \to H_*(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z) $$ is a monomorphism. Since $\Psi_d=\tilde{p}_d\circ \mathcal{I}_d^{\prime}\circ e_d$, the homomorphism $$ (\Psi_d)_*: H_*(\Sigma^{2(mn-2)d}D_d,\Bbb Z) \to H_*(\mbox{{\rm Poly}}^{dn,m}_n/\mbox{{\rm Poly}}^{(d-1)n,m}_n,\Bbb Z) $$ is a monomorphism. \end{proof} \noindent{\bf Funding. } The second author was supported by JSPS KAKENHI Grant Number 26400083, Japan. \end{document}
arXiv
Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.) The important factors seem to be: #1/MR6 (Creativity.self.rating, Time.Bitcoin, Time.Backups, Time.Blackmarkets, Gwern.net.linecount.log), #2/MR1 (Time.PDF, Time.Stats), #7/MR7 (Time.Writing, Time.Sysadmin, Time.Programming, Gwern.net.patches.log), and #8/MR8 (Time.States, Time.SRS, Time.Sysadmin, Time.Backups, Time.Blackmarkets). The rest seem to be time-wasting or reflect dual n-back/DNB usage (which is not relevant in the LLLT time period). These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds." 
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so make the effect size larger than in the pilot experiment. This formula comes at a relatively high price; at the recommended dosage of two tablets per day with a meal, one bottle of 60 tablets provides a month's supply. Secure online purchase is available on the manufacturer's site as well as at several online retailers. Although no free trials or money back guarantees are available at this time, the manufacturer provides free shipping if the desired order exceeds a certain amount. Over time different online retailers could offer some advantages depending on the amount purchased, so online research is advised before purchase, so as to assess the market and find the best solution. Regarding other methods of cognitive enhancement, little systematic research has been done on their prevalence among healthy people for the purpose of cognitive enhancement. One exploratory survey found evidence of modafinil use by people seeking cognitive enhancement (Maher, 2008), and anecdotal reports of this can be found online (e.g., Arrington, 2008; Madrigal, 2008). Whereas TMS requires expensive equipment, tDCS can be implemented with inexpensive and widely available materials, and online chatter indicates that some are experimenting with this method. That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP.
Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩ Dopaminergics are smart drug substances that affect levels of dopamine within the brain. Dopamine is a major neurotransmitter, responsible for the good feelings and biochemical positive feedback from behaviors for which our biology naturally rewards us: tasty food, sex, positive social relationships, etc. Use of dopaminergic smart drugs promotes attention and alertness by either increasing the efficacy of dopamine within the brain, or inhibiting the enzymes that break dopamine down. Examples of popular dopaminergic smart drugs include Yohimbe, selegiline and L-Tyrosine. Natural and herbal nootropics are by far the safest and best smart drugs to ingest. For this reason, they're worth covering first. Our recommendation is always to stick with natural brain fog cures. Herbal remedies for enhancing mental cognition are often side-effect free. These substances are superior for both long-term safety and effectiveness. They are also well-studied and have deep roots in traditional medicine. I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here.
As demonstrated in the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and High-quality Interface Systems such as their Next-Generation Nonsurgical Neurotechnology (N3). Or in other words, since the standard deviation of my previous self-ratings is 0.75 (see the Weather and my productivity data), a mean rating increase of >0.39 on the self-rating. This is, unfortunately, implying an extreme shift in my self-assessments (for example, 3s are ~50% of the self-ratings and 4s ~25%; to cause an increase of 0.25 while leaving 2s alone in a sample of 23 days, one would have to push 3s down to ~25% and 4s up to ~47%). So in advance, we can see that the weak plausible effects for Noopept are not going to be detected here at our usual statistical levels with just the sample I have (a more plausible experiment might use 178 pairs over a year, detecting down to d>=0.18). But if the sign is right, it might make Noopept worthwhile to investigate further. And the hardest part of this was just making the pills, so it's not a waste of effort. Too much caffeine may be bad for bone health because it can deplete calcium. Overdoing the caffeine also may affect the vitamin D in your body, which plays a critical role in your body's bone metabolism. However, the roles of vitamin D as well as caffeine in the development of osteoporosis continue to be a source of debate. Significance: Caffeine may interfere with your body's metabolism of vitamin D, according to a 2007 Journal of Steroid Biochemistry & Molecular Biology study. You have vitamin D receptors, or VDRs, in your osteoblast cells. These large cells are responsible for the mineralization and synthesis of bone in your body. They create a sheet on the surface of your bones. The D receptors are nuclear hormone receptors that control the action of vitamin D-3 by controlling hormone-sensitive gene expression. These receptors are critical to good bone health.
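The power reasoning above (a weak effect being undetectable in a sample of ~23 days, while 178 pairs might detect down to d>=0.18) can be sanity-checked with a normal-approximation power formula for a paired design. This is an illustrative sketch of the standard approximation, not the original analysis:

```python
import math
from statistics import NormalDist

def paired_power(d: float, n_pairs: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided paired test for standardized
    effect size d with n_pairs difference scores, using the normal
    approximation: power ~ Phi(d * sqrt(n) - z_crit)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(d * math.sqrt(n_pairs) - z_crit)

# d = 0.18 with 178 pairs gives moderate power; with ~23 days, very little.
power_large = paired_power(0.18, 178)
power_small = paired_power(0.18, 23)
```

Under this approximation the 178-pair design reaches roughly two-thirds power for d = 0.18, while a 23-day sample stays well under 20%, consistent with the pessimism expressed above.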
For example, a vitamin D metabolism disorder in which these receptors don't work properly causes rickets. However, normally when you hear the term nootropic kicked around, people really mean a "cognitive enhancer" — something that does benefit thinking in some way (improved memory, faster speed-of-processing, increased concentration, or a combination of these, etc.), but might not meet the more rigorous definition above. "Smart drugs" is another largely-interchangeable term. A new all-in-one nootropic mix/company run by some people active on /r/nootropics; they offered me a month's supply for free to try & review for them. At ~$100 a month (it depends on how many months one buys), it is not cheap (John Backus estimates one could buy the raw ingredients for $25/month) but it provides convenience & is aimed at people uninterested in spending a great deal of time reviewing research papers & anecdotes or capping their own pills (ie. people with lives) and it's unlikely I could spare the money to subscribe if TruBrain worked well for me - but certainly there was no harm in trying it out. Most diehard nootropic users have considered using racetams for enhancing brain function. Racetams are synthetic nootropic substances first developed in Russia. These smart drugs vary in potency, but they are not stimulants. They are unlike traditional ADHD medications (Adderall, Ritalin, Vyvanse, etc.). Instead, racetams boost cognition by enhancing the cholinergic system. This doesn't fit the U-curve so well: while 60mg is substantially negative as one would extrapolate from 30mg being ~0, 48mg is actually better than 15mg. But we bought the estimates of 48mg/60mg at a steep price - we ignore the influence of magnesium which we know influences the data a great deal. And the higher doses were added towards the end, so may be influenced by the magnesium starting/stopping. Another fix for the missingness is to impute the missing data. 
In this case, we might argue that the placebo days of the magnesium experiment were identical to taking no magnesium at all and so we can classify each NA as a placebo day, and rerun the desired analysis: Nootropics include natural and manmade chemicals that produce cognitive benefits. These substances are used to make smart pills that deliver results for enhancing memory and learning ability, improving brain function, enhancing the firing control mechanisms in neurons, and providing protection for the brain. College students, adult professionals, and elderly people are turning to supplements to get the advantages of nootropic substances for memory, focus, and concentration. This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 \times 7.25 = 275.5. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so \frac{365.25}{120} \times 9 \times 5 = 137. Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence on whether it would help, this suggests the value of information is high. 
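The imputation fix mentioned above, recoding each missing magnesium value as a placebo day before rerunning the analysis, can be sketched like this; the record layout and field names are hypothetical:

```python
def impute_as_placebo(days):
    """days: list of dicts with a 'magnesium' key that may be None.
    Recode each missing (None) magnesium value as 0 (placebo), on the
    argument that a day with no magnesium recorded was identical to a
    placebo day. Returns new records; the originals are left untouched."""
    return [
        {**day, "magnesium": 0 if day["magnesium"] is None else day["magnesium"]}
        for day in days
    ]

# Hypothetical example: an NA day and a recorded active day.
data = [{"mp": 3, "magnesium": None}, {"mp": 4, "magnesium": 1}]
imputed = impute_as_placebo(data)
```

After imputation the full dataset can be passed to whatever regression was run on the complete-case data.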
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea. To make things more interesting, I think I would like to try randomizing different dosages as well: 12mg, 24mg, and 36mg (1-3 pills); on 5 May 2014, because I wanted to finish up the experiment earlier, I decided to add 2 larger doses of 48 & 60mg (4-5 pills) as options. Then I can include the previous pilot study as 10mg doses, and regress over dose amount. There are some other promising prescription drugs that may have performance-related effects on the brain. But at this point, all of them seem to involve a roll of the dice. You may experience a short-term brain boost, but you could also end up harming your brain (or some other aspect of your health) in the long run. "To date, there is no safe drug that may increase cognition in healthy adults," Fond says of ADHD drugs, modafinil and other prescription nootropics. P.S. Even though Thrive Natural's Super Brain Renew is the best brain and memory supplement we have found, we would still love to hear about other Brain and Memory Supplements that you have tried!
If you have had a great experience with a memory supplement that we did not cover in this article, let us know! E-mail me at : [email protected] We'll check it out for you and if it looks good, we'll post it on our site! "Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!" In our list of synthetic smart drugs, Noopept may be the genius pill to rule them all. Up to 1000 times stronger than Piracetam, Noopept may not be suitable for everyone. This nootropic substance requires much smaller doses for enhanced cognitive function. There are plenty of synthetic alternatives to Adderall and prescription ADHD medications. Noopept may be worth a look if you want something powerful over the counter. There are seven primary classes used to categorize smart drugs: Racetams, Stimulants, Adaptogens, Cholinergics, Serotonergics, Dopaminergics, and Metabolic Function Smart Drugs. Despite considerable overlap and no clear border in the brain and body's responses to these substances, each class manifests its effects through a different chemical pathway within the body. It isn't unlikely to hear someone from Silicon Valley say the following: "I've just cycled off a stack of Piracetam and CDP-Choline because I didn't get the mental acuity I was expecting. I will try a blend of Noopept and Huperzine A for the next two weeks and see if I can increase my output by 10%. We don't have immortality yet and I would really like to join the three comma club before it's all over." 
The magnesium was neither randomized nor blinded and included mostly as a covariate to avoid confounding (the Noopept coefficient & t-value increase somewhat without the Magtein variable), so an OR of 1.9 is likely too high; in any case, this experiment was too small to reliably detect any effect (~26% power, see bootstrap power simulation in the magnesium section) so we can't say too much. If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings. For illustration, consider amphetamines, Ritalin, and modafinil, all of which have been proposed as cognitive enhancers of attention. These drugs exhibit some positive effects on cognition, especially among individuals with lower baseline abilities. However, individuals of normal or above-average cognitive ability often show negligible improvements or even decrements in performance following drug treatment (for details, see de Jongh, Bolt, Schermer, & Olivier, 2008). For instance, Randall, Shneerson, and File (2005) found that modafinil improved performance only among individuals with lower IQ, not among those with higher IQ. [See also Finke et al 2010 on visual attention.] Farah, Haimm, Sankoorikal, & Chatterjee 2009 found a similar nonlinear relationship of dose to response for amphetamines in a remote-associates task, with low-performing individuals showing enhanced performance but high-performing individuals showing reduced performance. Such ∩-shaped dose-response curves are quite common (see Cools & Robbins, 2004) With just 16 predictions, I can't simply bin the predictions and say yep, that looks good. 
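Power figures like the ~26% quoted above come from simulation. A generic Monte-Carlo power estimate for a binary outcome under an assumed odds ratio might look like the following; the parameters in the example call are made up for illustration, not taken from the original bootstrap:

```python
import math
import random

def simulate_power(n_per_arm, p_placebo, odds_ratio,
                   z_crit=1.96, trials=2000, seed=1):
    """Monte-Carlo power of a two-proportion z-test when the active arm's
    odds equal odds_ratio times the placebo arm's odds. Illustrative only."""
    rng = random.Random(seed)
    odds_active = p_placebo / (1 - p_placebo) * odds_ratio
    p_active = odds_active / (1 + odds_active)
    rejections = 0
    for _ in range(trials):
        # Simulate successes in each arm, then apply a pooled z-test.
        a = sum(rng.random() < p_active for _ in range(n_per_arm))
        b = sum(rng.random() < p_placebo for _ in range(n_per_arm))
        p_pool = (a + b) / (2 * n_per_arm)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0 and abs((a - b) / n_per_arm) / se > z_crit:
            rejections += 1
    return rejections / trials
```

With a small sample (say 13 days per condition) and an odds ratio of 1.9, power comes out very low, which is the same qualitative lesson as the ~26% figure: a sample this small cannot say much either way.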
Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule which is pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number: By which I mean that simple potassium is probably the most positively mind altering supplement I've ever tried…About 15 minutes after consumption, it manifests as a kind of pressure in the head or temples or eyes, a clearing up of brain fog, increased focus, and the kind of energy that is not jittery but the kind that makes you feel like exercising would be the reasonable and prudent thing to do. I have done no tests, but feel smarter from this in a way that seems much stronger than piracetam or any of the conventional weak nootropics. It is not just me – I have been introducing this around my inner social circle and I'm at 7/10 people felt immediately noticeable effects. The 3 that didn't notice much were vegetarians and less likely to have been deficient. Now that I'm not deficient, it is of course not noticeable as mind altering, but still serves to be energizing, particularly for sustained mental energy as the night goes on…Potassium chloride initially, but since bought some potassium gluconate pills… research indicates you don't want to consume large amounts of chloride (just moderate amounts). 10:30 AM; no major effect that I notice throughout the day - it's neither good nor bad. This smells like placebo (and part of my mind is going how unlikely is it to get placebo 3 times in a row!, which is just the Gambler's fallacy talking inasmuch as this is sampling with replacement). I give it 60% placebo; I check the next day right before taking, and it is. Man! As for newer nootropic drugs, there are unknown risks.
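The logarithmic scoring rule described above (the logarithm of the probability if you were right, the logarithm of the negation if you were wrong) is only a few lines of code:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Logarithmic score: ln(p) if the event happened, ln(1 - p) otherwise.
    Always <= 0; the forecaster with the least negative total wins."""
    return math.log(p if outcome else 1 - p)

def total_score(predictions) -> float:
    """predictions: iterable of (probability, outcome) pairs."""
    return sum(log_score(p, o) for p, o in predictions)
```

Feeding in the list of 16 predictions as (probability, outcome) pairs returns a single number, exactly as described.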
"Piracetam has been studied for decades," says cognitive neuroscientist Andrew Hill, the founder of a neurofeedback company in Los Angeles called Peak Brain Institute. But "some of [the newer] compounds are things that some random editor found in a scientific article, copied the formula down and sent it to China and had a bulk powder developed three months later that they're selling. Please don't take it, people!" There are certain risks associated with smart pills that might restrain their use. A smart pill usually leaves the body within two weeks. Sometimes, the pill might get lodged in the digestive tract rather than exiting the body via normal bowel movements. The risk might be higher in people with a tumor, Crohn's disease, or some surgery within that area that led to narrowing of the digestive tract. CT scan is usually performed in high-risk people to assess the narrowing of the tract. However, the pill might still be lodged even if the results are negative for the CT scan, which might lead to bowel obstruction and can be removed either by surgery or traditional endoscopy. Smart pills might lead to skin irritation, which results in mild redness and needs to be treated topically. It may also lead to capsule aspiration, which involves the capsule going down the wrong pipe and entering the airway instead of the esophagus. This might result in choking and death if immediate bronchoscopic extraction is not performed. Patients with comorbidities related to brain injury or chronic obstructive pulmonary disease may be at a higher risk. So, the health risks associated with the use of smart pills are hindering the smart pills technology market. The other factors, such as increasing cost with technological advancement and ethical constraints, are also hindering the market. That left me with 329 days of data.
The results are that (correcting for the magnesium citrate self-experiment I was running during the time period which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days. Below is a graph showing the entire MP dataseries with LOESS-smoothed lines showing LLLT vs non-LLLT days: When Giurgea coined the word nootropic (combining the Greek words for mind and bending) in the 1970s, he was focused on a drug he had synthesized called piracetam. Although it is approved in many countries, it isn't categorized as a prescription drug in the United States. That means it can be purchased online, along with a number of newer formulations in the same drug family (including aniracetam, phenylpiracetam, and oxiracetam). Some studies have shown beneficial effects, including one in the 1990s that indicated possible improvement in the hippocampal membranes in Alzheimer's patients. But long-term studies haven't yet borne out the hype. As professionals and aging baby boomers alike become more interested in enhancing their own brain power (either to achieve more in a workday or to stave off cognitive decline), a huge market has sprung up for nonprescription nootropic supplements. These products don't convince Sahakian: "As a clinician scientist, I am interested in evidence-based cognitive enhancement," she says. "Many companies produce supplements, but few, if any, have double-blind, placebo-controlled studies to show that these supplements are cognitive enhancers." Plus, supplements aren't regulated by the U.S. Food and Drug Administration (FDA), so consumers don't have that assurance as to exactly what they are getting. Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite.
Serotonin is produced within the body by exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners. Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union. This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function, were thought to allow the body to better adapt to stress. But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures. Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement. Smart drugs could lead to enhanced cognitive abilities in the military. Also known as nootropics, smart drugs can be viewed similarly to medical enhancements. What's important to remember, though, is that smart drugs do not increase your intelligence; however, they may improve cognitive and executive functions, leading to improved performance.
The methodology would be essentially the same as the vitamin D in the morning experiment: put a multiple of 7 placebos in one container, the same number of actives in another identical container, hide & randomly pick one of them, use container for 7 days then the other for 7 days, look inside them for the label to determine which period was active and which was placebo, refill them, and start again. "It is important to note that Abilify MyCite's prescribing information (labeling) notes that the ability of the product to improve patient compliance with their treatment regimen has not been shown. Abilify MyCite should not be used to track drug ingestion in "real-time" or during an emergency because detection may be delayed or may not occur," the FDA said in a statement. Nootropics are a responsible way of using smart drugs to enhance productivity. As defined by Giurgea in the 1960s, nootropics should have little to no side-effects. With nootropics, there should be no dependency. And while the effects of nootropics may be smaller than, for instance, Adderall's, you still improve your productivity without risking your life. This is what separates nootropics from other drugs. Prescription smart pills are common psychostimulants that can be purchased and used after receiving a prescription. They are most commonly given to patients diagnosed with ADD or ADHD, as well as narcolepsy. However, many healthy people use them as cognitive enhancers due to their proven ability to improve focus, attention, and support the overall process of learning.
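The two-container, 7-days-each randomization procedure described above can be sketched as follows; the helper name and return format are hypothetical:

```python
import random

def make_blocks(n_blocks, block_len=7, seed=None):
    """Assign 7-day blocks to 'active' or 'placebo' in pairs: a fair coin
    picks which container is used first, then the other container is used
    for the next block, so each pair of blocks is balanced by construction."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks // 2):
        first = rng.choice(["active", "placebo"])
        second = "placebo" if first == "active" else "active"
        schedule += [first] * block_len + [second] * block_len
    return schedule

# Four 7-day blocks (two randomized pairs) = a 28-day schedule.
schedule = make_blocks(4, seed=0)
```

Because blocks come in pairs, every 14-day stretch contains exactly one active and one placebo week, mirroring the refill-and-repeat procedure in the text.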
If you're concerned with using either supplement, speak to your doctor. Others will replace these supplements with something like Phenylpiracetam or Pramiracetam. Both of these racetams provide increased energy levels, yielding fewer side-effects. If you do plan on taking Modafinil or Adrafinil, it's best to use them on occasion or cycle your doses. This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side-effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seem to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability. Kennedy et al. (1990) administered what they termed a grammatical reasoning task to subjects, in which a sentence describing the order of two letters, A and B, is presented along with the letter pair, and subjects must determine whether or not the sentence correctly describes the letter pair. They found no effect of d-AMP on performance of this task. Swanson J, Arnold LE, Kraemer H, Hechtman L, Molina B, Hinshaw S, Wigal T. Evidence, interpretation and qualification from multiple reports of long-term outcomes in the Multimodal Treatment Study of Children With ADHD (MTA): Part II. Supporting details. Journal of Attention Disorders. 2008;12:15–43. doi: 10.1177/1087054708319525. Another classic approach to the assessment of working memory is the span task, in which a series of items is presented to the subject for repetition, transcription, or recognition. The longest series that can be reproduced accurately is called the forward span and is a measure of working memory capacity.
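The grammatical reasoning task from Kennedy et al. (1990) is easy to mock up in code; this sketch illustrates only the task format described above (a sentence about the order of A and B, judged against a letter pair), not their exact stimuli:

```python
import random

def sentence_true(subject: str, relation: str, pair: str) -> bool:
    """Is the sentence '<subject> <relation> <other letter>' true of the
    letter pair? relation is 'precedes' or 'follows'; pair is 'AB' or 'BA'."""
    other = "B" if subject == "A" else "A"
    if relation == "precedes":
        return pair.index(subject) < pair.index(other)
    return pair.index(subject) > pair.index(other)

def make_trial(rng: random.Random):
    """Generate one trial: a sentence, a letter pair, and the correct answer."""
    pair = rng.choice(["AB", "BA"])
    subject = rng.choice("AB")
    relation = rng.choice(["precedes", "follows"])
    other = "B" if subject == "A" else "A"
    return f"{subject} {relation} {other}", pair, sentence_true(subject, relation, pair)
```

A subject's accuracy over a batch of such trials is then the performance measure that the drug conditions were compared on.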
The ability to reproduce the series in reverse order is tested in backward span tasks and is a more stringent test of working memory capacity and perhaps other working memory functions as well. The digit span task from the Wechsler (1981) IQ test was used in four studies of stimulant effects on working memory. One study showed that d-AMP increased digit span (de Wit et al., 2002), and three found no effects of d-AMP or MPH (Oken, Kishiyama, & Salinsky, 1995; Schmedtje, Oman, Letz, & Baker, 1988; Silber, Croft, Papafotiou, & Stough, 2006). A spatial span task, in which subjects must retain and reproduce the order in which boxes in a scattered spatial arrangement change color, was used by Elliott et al. (1997) to assess the effects of MPH on working memory. For subjects in the group receiving placebo first, MPH increased spatial span. However, for the subjects who received MPH first, there was a nonsignificant opposite trend. The group difference in drug effect is not easily explained. The authors noted that the subjects in the first group performed at an overall lower level, and so, this may be another manifestation of the trend for a larger enhancement effect for less able subjects. The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and unclear how successful, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. 
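Scoring forward and backward span as described (the longest series reproduced accurately, with backward span requiring the reversed order) can be sketched like this; the function name and trial format are hypothetical:

```python
def span_score(trials, backward=False):
    """trials: list of (presented, response) sequences.
    Returns the span: the longest presented sequence reproduced correctly,
    in reverse order when backward=True (the backward span task)."""
    best = 0
    for presented, response in trials:
        target = list(reversed(presented)) if backward else list(presented)
        if list(response) == target:
            best = max(best, len(presented))
    return best

# Forward span: correct up to length 4, error at length 5 -> span of 4.
forward = span_score([("123", "123"), ("1234", "1234"), ("12345", "12354")])
```

This is exactly the scoring convention of the digit span and spatial span tasks in the studies cited above.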
I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks and a temporal exogenous factor in the last quarter which was responsible for the improvement. Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we're trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can't be ignored. "For good brain health across the life span, you should keep your brain active," Sahakian says. "There is good evidence for 'use it or lose it.'" She suggests brain-training apps to improve memory, as well as physical exercise. "You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being." Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 \times 48 \times 7.25 = 70), it's still a clear profit to run a convincing experiment. Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality.
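The <$820 net present value of a ~$40/year expense is consistent with valuing a perpetuity under continuous discounting, annual / ln(1 + r), at a 5% annual rate; the discount rate is my assumption, not stated in the text:

```python
import math

def npv_perpetuity(annual: float, rate: float = 0.05) -> float:
    """Present value of a perpetual annual expense under continuous
    discounting: annual / ln(1 + rate). The 5% rate is an assumption."""
    return annual / math.log(1 + rate)

# $40/year at 5% comes out just under $820.
value = npv_perpetuity(40)
```

At 5%, 40 / ln(1.05) is roughly $820, matching the bound quoted above.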
An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs. At this point, I began thinking about what I was doing. Black-market Adderall is fairly expensive; $4-10 a pill vs prescription prices which run more like $60 for 120 20mg pills. It would be a bad idea to become a fan without being quite sure that it is delivering bang for the buck. Now, why the piracetam mix as the placebo as opposed to my other available powder, creatine powder, which has much smaller mental effects? Because the question for me is not whether the Adderall works (I am quite sure that the amphetamines have effects!) but whether it works better for me than my cheap legal standbys (piracetam & caffeine)? (Does Adderall have marginal advantage for me?) Hence, I want to know whether Adderall is better than my piracetam mix. People frequently underestimate the power of placebo effects, so it's worth testing. (Unfortunately, it seems that there is experimental evidence that people on Adderall know they are on Adderall and also believe they have improved performance, when they do not5. So the blind testing does not buy me as much as it could.) One fairly powerful nootropic substance that, appropriately, has fallen out of favor is nicotine. 
It's the chemical that gives tobacco products their stimulating kick. It isn't what makes them so deadly, but it does make smoking very addictive. When Europeans learned about tobacco's use from indigenous tribes they encountered in the Americas in the 15th and 16th centuries, they got hooked on its mood-altering effects right away and even believed it could cure joint pain, epilepsy, and the plague. Recently, researchers have been testing the effects of nicotine that's been removed from tobacco, and they believe that it might help treat neurological disorders including Parkinson's disease and schizophrenia; it may also improve attention and focus. But, please, don't start smoking or vaping. Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others. The rise of IP scofflaw countries which enable the manufacture of known drugs: India does not respect the modafinil patents, enabling the cheap generics we all use, and Chinese piracetam manufacturers don't give a damn about the FDA's chilling-effect moves in the US. If there were no Indian or Chinese manufacturers, where would we get our modafinil? Buy them from pharmacies at $10 a pill or worse? It might be worthwhile, but think of the chilling effect on new users. What if you could simply take a pill that would instantly make you more intelligent? One that would enhance your cognitive capabilities including attention, memory, focus, motivation and other higher executive functions?
If you have ever seen the movie Limitless, you have an idea of what this would look like—albeit the exaggerated Hollywood version. The movie may be fictional but the reality may not be too far behind. That first night, I had severe trouble sleeping, falling asleep in 30 minutes rather than my usual 19.6±11.9, waking up 12 times (5.9±3.4), and spending ~90 minutes awake (18.1±16.2), and naturally I felt unrested the next day; I initially assumed it was because I had left a fan on (moving air keeps me awake) but the new potassium is also a possible culprit. When I asked, Kevin said: Similarly, we could try applying Nick Bostrom's reversal test and ask ourselves, how would we react to a virus which had no effect but to eliminate sleep from alternating nights and double sleep in the intervening nights? We would probably grouch about it for a while and then adapt to our new hedonistic lifestyle of partying or working hard. On the other hand, imagine the virus had the effect of eliminating normal sleep but instead, every 2 minutes, a person would fall asleep for a minute. This would be disastrous! Besides the most immediate problems like safely driving vehicles, how would anything get done? You would hold a meeting and at any point, a third of the participants would be asleep. If the virus made it instead 2 hours on, one hour off, that would be better but still problematic: there would be constant interruptions. And so on, until we reach our present state of 16 hours on, 8 hours off. Given that we rejected all the earlier buffer sizes, one wonders if 16:8 can be defended as uniquely suited to circumstances. Is that optimal? It may be, given the synchronization with the night-day cycle, but I wonder; rush hour alone stands as an argument against synchronized sleep - wouldn't our infrastructure be much cheaper if it only had to handle the average daily load rather than cope with the projected peak loads? Might not a longer cycle be better?
The longer the day, the less we are interrupted by sleep; it's a hoary cliche about programmers that they prefer to work in long sustained marathons during long nights rather than sprint occasionally during a distraction-filled day, to the point where some famously adopt a 28 hour day (which evenly divides a week into 6 days). Are there other occupations which would benefit from a 20 hour waking period? Or 24 hour waking period? We might not know because without chemical assistance, circadian rhythms would overpower anyone attempting such schedules. It certainly would be nice if one had long time chunks in which one could read a challenging book in one sitting, without heroic arrangements.↩ Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second, better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit. Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me. Brain-imaging studies are consistent with the existence of small effects that are not reliably captured by the behavioral paradigms of the literature reviewed here. Typically with executive function tasks, reduced activation of task-relevant areas is associated with better performance and is interpreted as an indication of higher neural efficiency (e.g., Haier, Siegel, Tang, Abel, & Buchsbaum, 1992). Several imaging studies showed effects of stimulants on task-related activation while failing to find effects on cognitive performance. Although changes in brain activation do not necessarily imply functional cognitive changes, they are certainly suggestive and may well be more sensitive than behavioral measures. Evidence of this comes from a study of COMT variation and executive function.
Egan and colleagues (2001) found a genetic effect on executive function in an fMRI study with sample sizes as small as 11 but did not find behavioral effects in these samples. The genetic effect on behavior was demonstrated in a separate study with over a hundred participants. In sum, d-AMP and MPH measurably affect the activation of task-relevant brain regions when participants' task performance does not differ. This is consistent with the hypothesis (although by no means positive proof) that stimulants exert a true cognitive-enhancing effect that is simply too small to be detected in many studies. In sum, the evidence concerning stimulant effects on working memory is mixed, with some findings of enhancement and some null results, although no findings of overall performance impairment. A few studies showed greater enhancement for less able participants, including two studies reporting overall null results. When significant effects have been found, their sizes vary from small to large, as shown in Table 4. Taken together, these results suggest that stimulants probably do enhance working memory, at least for some individuals in some task contexts, although the effects are not so large or reliable as to be observable in all or even most working memory studies. I took 1.5mg of melatonin, and went to bed at ~1:30AM; I woke up around 6:30, took a modafinil pill/200mg, and felt pretty reasonable. By noon my mind started to feel a bit fuzzy, and lunch didn't make much of it go away. I've been looking at studies, and users seem to degrade after 30 hours; I started mid-Thursday, so call that 10 hours, then 24 (Friday), 24 (Saturday), and 14 (Sunday), totaling 72hrs with <20hrs sleep; this might be equivalent to 52hrs with no sleep, and Wikipedia writes: Most people would describe school as a place where they go to learn, so learning is an especially relevant cognitive process for students to enhance.
Even outside of school, however, learning plays a role in most activities, and the ability to enhance the retention of information would be of value in many different occupational and recreational contexts. Though their product includes several vitamins including Bacopa, it seems to be missing the remaining four of the essential ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine and N-Acetyl L-Tyrosine. It missed too many of our key criteria and so we could not endorse this product of theirs. Simply, if you don't mind an insufficient amount of essential ingredients for improved brain and memory function and an inclusion of unwanted ingredients – then this could be a good fit for you. You'll find several supplements that can enhance focus, energy, creativity, and mood. These brain enhancers can work very well, and their benefits often increase over time. Again, nootropics won't dress you in a suit and carry you to Wall Street. That is a decision you'll have to make on your own. But, smart drugs can provide the motivation boost you need to make positive life changes. Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'. "I enjoyed this book. It was full of practical information. It was easy to understand. I implemented some of the ideas in the book and they have made a positive impact for me. Not only is this book a wealth of knowledge it helps you think outside the box and piece together other ideas to research and helps you understand more about TBI and the way food might help you mitigate symptoms." Four of the studies focused on middle and high school students, with varied results. 
Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007). Capsule Connection sells 1000 00 pills (the largest pills) for $9. I already have a pill machine, so that doesn't count (a sunk cost). If we sum the grams per day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1000 days is 13,000 pills, and 1,000 pills is $9 so we need 13 units and 13 times 9 is $117. Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below. First was a combination of L-theanine and aniracetam, a synthetic compound prescribed in Europe to treat degenerative neurological diseases. I tested it by downing the recommended dosages and then tinkering with a story I had finished a few days earlier, back when caffeine was my only performance-enhancing drug. I zoomed through the document with renewed vigor, striking some sentences wholesale and rearranging others to make them tighter and punchier. However, when I didn't stack it with Choline, I would get what users call "racetam headaches." 
Choline, as Patel explains, is not a true nootropic, but it's still a pro-cognitive compound that many take with other nootropics in a stack. It's an essential nutrient that humans need for functions like memory and muscle control, but we can't produce it, and many Americans don't get enough of it. The headaches I got weren't terribly painful, but they were uncomfortable enough that I stopped taking Piracetam on its own. Even without the headache, though, I didn't really like the level of focus Piracetam gave me. I didn't feel present when I used it, even when I tried to mix in caffeine and L-theanine. And while it seemed like I could focus and do my work faster, I was making more small mistakes in my writing, like skipping words. Essentially, it felt like my brain was moving faster than I could. Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything: That said, there are plenty of studies out there that point to its benefits. One study, published in the British Journal of Pharmacology, suggests brain function in elderly patients can be greatly improved after regular dosing with Piracetam. Another study, published in the journal Psychopharmacology, found that Piracetam improved memory in most adult volunteers. And another, published in the Journal of Clinical Psychopharmacology, suggests it can help students, especially dyslexic students, improve their nonverbal learning skills, like reading ability and reading comprehension. Basically, researchers know it has an effect, but they don't know what or how, and pinning it down requires additional research. Do you start your day with a cup (or two, or three) of coffee? 
It tastes delicious, but it's also jump-starting your brain because of its caffeine content. Caffeine is definitely a nootropic substance—it's a mild stimulant that can alleviate fatigue and improve concentration, according to the Mayo Clinic. Current research shows that coffee drinkers don't suffer any ill effects from drinking up to about four cups of coffee per day. Caffeine is also found in tea, soda, and energy drinks. Not too surprisingly, it's also in many of the nootropic supplements that are being marketed to people looking for a mental boost. "We stumbled upon fasting as a way to optimize cognition and make yourself into a more efficient human being," says Manuel Lam, an internal medicine physician who advises Nootrobox on clinical issues. He and members of the company's executive team have implanted glucose monitors in their arms — not because they fear diabetes but because they wish to track the real-time effect of the foods they eat. I bought 500g of piracetam (Examine.com; FDA adverse events) from Smart Powders (piracetam is one of the cheapest nootropics and SP was one of the cheapest suppliers; the others were much more expensive as of October 2010), and I've tried it out for several days (started on 7 September 2009, and used it steadily up to mid-December). I've varied my dose from 3 grams to 12 grams (at least, I think the little scoop measures in grams), taking them in my tea or bitter fruit juice. Cranberry worked the best, although orange juice masks the taste pretty well; I also accidentally learned that piracetam stings horribly when I got some on a cat scratch. 3 grams (alone) didn't seem to do much of anything while 12 grams gave me a nasty headache. I also ate 2 or 3 eggs a day.
Three used the spatial working memory task from the CANTAB battery of neuropsychological tests (Sahakian & Owen, 1992). In this task, subjects search for a target at different locations on a screen. Subjects are told that locations containing a target in previous trials will not contain a target in future trials. Efficient performance therefore requires remembering and avoiding these locations in addition to remembering and avoiding locations already searched within a trial. Mehta et al. (2000) found evidence of greater accuracy with MPH, and Elliott et al. (1997) found a trend for the same. In Mehta et al.'s study, this effect depended on subjects' working memory ability: the lower a subject's score on placebo, the greater the improvement on MPH. In Elliott et al.'s study, MPH enhanced performance for the group of subjects who received the placebo first and made little difference for the other group. The reason for this difference is unclear, but as mentioned above, this may reflect ability differences between the groups. More recently, Clatworthy et al. (2009) undertook a positron emission tomography (PET) study of MPH effects on two tasks, one of which was the CANTAB spatial working memory task. They failed to find consistent effects of MPH on working memory performance but did find a systematic relation between the performance effect of the drug in each individual and its effect on individuals' dopamine activity in the ventral striatum. Potassium citrate powder is neither expensive nor cheap: I purchased 453g for $21. The powder is crystalline white, dissolves instantly in water, and largely tasteless (sort of saline & slightly unpleasant). The powder is 37% potassium by weight (the formula is C6H5K3O7) so 453g is actually 167g of potassium, so 80-160 days' worth depending on dose. I tried taking whole pills at 1 and 3 AM. 
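The 37% figure in the potassium citrate paragraph above can be rechecked from the formula C6H5K3O7. A quick sketch with standard atomic weights (the anhydrous salt works out to roughly 38% potassium by mass, close to the quoted 37%, which may reflect rounding or the monohydrate):

```python
# Mass fraction of potassium in potassium citrate (C6H5K3O7) and the
# implied days of supply from a 453g tub at a 1-2 g/day potassium dose.
atomic_mass = {"C": 12.011, "H": 1.008, "K": 39.098, "O": 15.999}
formula = {"C": 6, "H": 5, "K": 3, "O": 7}

molar_mass = sum(atomic_mass[el] * n for el, n in formula.items())
k_fraction = formula["K"] * atomic_mass["K"] / molar_mass

grams_bought = 453.0
grams_potassium = grams_bought * k_fraction
print(f"{k_fraction:.1%} potassium; {grams_potassium:.0f} g "
      f"({grams_potassium / 2:.0f}-{grams_potassium / 1:.0f} days at 2-1 g/day)")
```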
I felt kind of bushed at 9 AM after all the reading, and the 50 minute nap didn't help much - I was asleep for only around 10 minutes and spent most of it thinking or meditating. Just as well the 3D driver is still broken; I doubt the scores would be reasonable. Began to perk up again past 10 AM, then felt more bushed at 1 PM, and so on throughout the day; kind of gave up and began watching & finishing anime (Amagami and Voices of a Distant Star) for the rest of the day with occasional reading breaks (e.g. to start James C. Scott's Seeing Like A State, which is as described so far). As expected from the low quality of the day, the recovery sleep was bigger than before: a full 10 hours rather than 9:40; the next day, I slept a normal 8:50, and the following day ~8:20 (woken up early); 10:20 (slept in); 8:44; 8:18 (▁▇▁▁). It will be interesting to see whether my excess sleep remains in the hour range for 'good' modafinil nights and two hours for 'bad' modafinil nights. Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts and finance. "Certain people might benefit from certain combinations of certain things," he told me. "But across populations, there is still no conclusive proof that substances of this class improve cognitive functions." And with no way to reliably measure the impact of a given substance on one's mental acuity, one's sincere beliefs about "what works" probably have a lot to do with, say, how demanding their day was, or whether they ate breakfast, or how susceptible they are to the placebo effect. Noopept was developed in Russia in the 90s, and is alleged to improve learning.
This drug modifies acetylcholine and AMPA receptors, increasing the levels of these neurotransmitters in the brain. This is believed to account for reports of its efficacy as a 'study drug'. Noopept in the UK is illegal, as the 2016 Psychoactive Substances Act made it an offence to sell this drug in the UK - selling it could even lead to 7 years in prison. To enhance its nootropic effects, some users have been known to snort Noopept. First half at 6 AM; second half at noon. Wrote a short essay I'd been putting off and napped for 1:40 from 9 AM to 10:40. This approach seems to work a little better as far as the aboulia goes. (I also bother to smell my urine this time around - there's a definite off smell to it.) Nights: 10:02; 8:50; 10:40; 7:38 (2 bad nights of nasal infections); 8:28; 8:20; 8:43 (▆▃█▁▂▂▃). Nature magazine conducted a poll asking its readers about their cognitive-enhancement practices and their attitudes toward cognitive enhancement. Hundreds of college faculty and other professionals responded, and approximately one fifth reported using drugs for cognitive enhancement, with Ritalin being the most frequently named (Maher, 2008). However, the nature of the sample—readers choosing to answer a poll on cognitive enhancement—is not representative of the academic or general population, making the results of the poll difficult to interpret. By analogy, a poll on Vermont vacations, asking whether people vacation in Vermont, what they think about Vermont, and what they do if and when they visit, would undoubtedly not yield an accurate estimate of the fraction of the population that takes its vacations in Vermont. A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. 
These anecdotes should be considered only as anecdotes, and one's efforts with nootropics a hobby to put only limited amounts of time into due to the inherent limits of drugs as a force-multiplier compared to other things like programming1; for an ironic counterpoint, I suggest the reader listen to a video of Jonathan Coulton's I Feel Fantastic while reading.  One last note on tolerance; after the first few days of using smart drugs, just like with other drugs, you may not get the same effects as before. You've just experienced the honeymoon period. This is where you feel a large effect the first few times, but after that, you can't replicate it. Be careful not to exceed recommended doses, and try cycling to get the desired effects again.
CommonCrawl
How big is the role of black hole spin in forming its mass?

Excuse me if I'm saying something weird, but as far as I know, moving faster means increasing mass according to relativity theory, right? If you make a star spin really, really fast by throwing something into it at high speed and at an appropriate angle, will that turn it into a black hole? If yes, how much material would it take to make the Sun a black hole?

general-relativity black-holes angular-momentum astrophysics mass-energy
Vitaly Denisov

moving faster with respect to what? – JEB Nov 21 '19 at 23:44

"moving faster means increasing mass according to relativity theory, right?" That's not a good way to interpret relativity, and today it is considered old-fashioned thinking. Mass is independent of speed. The mass and spin of a black hole are independent of each other. – G. Smith Nov 22 '19 at 0:01

FWIW, most SMBHs appear to have a lot of spin. See the chart at the end of this answer: astronomy.stackexchange.com/a/20292/16685 – PM 2Ring Nov 22 '19 at 13:54

Although increasing an object's spin will increase its total energy, it will not increase its tendency to collapse to a black hole. The physical reason for this is that conservation of angular momentum will work against any possible collapse. From a slightly different perspective, it can also be seen from the fact that the event horizon radius of a spinning black hole is smaller than that of a non-spinning black hole of the same mass. A practical and astrophysically relevant consequence of this is that rotating neutron stars can be more massive than non-rotating neutron stars. This allows the reverse of the OP's scenario to happen: Suppose a neutron star is created with a very high spin (maybe from the collision of two other neutron stars), and a mass that is higher than the critical mass for a non-rotating neutron star to collapse to a black hole.
Initially, the neutron star is kept stable by its angular momentum, but over time it will lose angular momentum (e.g. due to emission of EM radiation) and spin down. At some point the angular momentum will be insufficient to prevent collapse, and the neutron star collapses to a black hole. This leaves the question of what portion of a rotating black hole's mass can be thought of as consisting of "rotational energy". This is not straightforward to answer since in general relativity there is no clear cut separation of different kinds of energy. However, some indication can be gleaned from looking at the rotational energy of neutron stars at the critical point of collapse. Table II of arXiv:1905.03656 gives values for the mass ($M$), angular momentum ($J$), and rotational energy ($T$) of such neutron stars depending on the model for the equation of state for the neutron star. For one such model these values are \begin{align} M & = 2.57 M_{\odot} \\ J &= 4.183\times10^{49} \text{erg s}\\ T &= 2.415 \times 10^{53} \text{erg} \end{align} This translates to a spin parameter $$\chi = \frac{c J}{GM^2} = 0.719,$$ i.e. it would collapse to a black hole spinning at 72% of its maximum rate. However, the fraction of its total energy in rotational energy ($T/(Mc^2)$) is only about 5 percent. mmeent

This question is not quite phrased in the way that a specialist would phrase it, but it still sort of makes sense, and basically the answer is that for a typical astrophysical black hole, the spin's contribution to the mass is pretty big. Although a black hole has spin, the standard models of black holes that we study are vacuum solutions. There is no matter anywhere. So the spin of the black hole is actually an angular momentum that exists because of properties of empty space, which is kind of strange. However, if you want to get more concrete, you can equate this to the angular momentum of the infalling material that formed the astrophysical black hole that we're modeling.
This infalling material was ultrarelativistic, so its kinetic energy was large compared to its mass. Some fraction of this energy was locked up in transverse rather than radial motion. The fraction is fairly big, and the way we see this in the final object is that astrophysical black holes often have spins that are pretty close to the maximum spin that a black hole can have for that mass. Ben Crowell
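The spin parameter and rotational-energy fraction quoted in mmeent's answer follow directly from the tabulated M, J, and T. A quick check in CGS units (the physical constants here are standard values, not taken from the cited paper):

```python
# chi = c*J/(G*M^2) and T/(M*c^2) for the critical neutron-star model above.
c = 2.99792458e10        # speed of light, cm/s
G = 6.674e-8             # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33         # solar mass, g

M = 2.57 * M_sun         # mass, g
J = 4.183e49             # angular momentum, erg s
T = 2.415e53             # rotational energy, erg

chi = c * J / (G * M**2)
rot_fraction = T / (M * c**2)
print(f"chi = {chi:.3f}, T/(Mc^2) = {rot_fraction:.1%}")
```

This reproduces χ ≈ 0.72 and a rotational-energy fraction of about 5%, matching the answer.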
CommonCrawl
Hilbert's lemma Hilbert's lemma was proposed at the end of the 19th century by mathematician David Hilbert. The lemma describes a property of the principal curvatures of surfaces. It may be used to prove Liebmann's theorem that a compact surface with constant Gaussian curvature must be a sphere.[1] Statement of the lemma Given a manifold in three dimensions that is smooth and differentiable over a patch containing the point p, where k and m are defined as the principal curvatures and K(x) is the Gaussian curvature at a point x, if k has a max at p, m has a min at p, and k is strictly greater than m at p, then K(p) is a non-positive real number.[2] See also • Hilbert's theorem (differential geometry) References 1. Gray, Mary (1997), "28.4 Hilbert's Lemma and Liebmann's Theorem", Modern Differential Geometry of Curves and Surfaces with Mathematica (2nd ed.), CRC Press, pp. 652–654, ISBN 9780849371646. 2. O'Neill, Barrett (2006), Elementary Differential Geometry (2nd ed.), Academic Press, p. 278, ISBN 9780080505428.
Wikipedia
Joachim Nitsche Joachim A. Nitsche (September 2, 1926, Nossen – January 12, 1996) was a German mathematician and professor of mathematics in Freiburg, known for his important contributions to the mathematical and numerical analysis of partial differential equations. The duality argument for estimating the error of the finite element method and a scheme for the weak enforcement of Dirichlet boundary conditions for Poisson's equation bear his name. Biography Education Nitsche graduated from school at Bischofswerda in 1946. Starting in summer 1947, he studied mathematics the University of Göttingen, where he received his Diplom (under supervision of Franz Rellich) after only six semesters. In 1951, he received his degree (Dr. rer. nat.) at the Technical University of Berlin-Charlottenburg (nowadays TU Berlin). After only two years, he received his Habilitation at the Free University of Berlin. Marriage and children In 1952, Nitsche married Gisela Lange, with whom he had three children. Professional career From 1955 to 1957, Nitsche held a teaching position at the Free University of Berlin, which he left for a position at IBM in Böblingen. He became professor at the Albert Ludwigs University of Freiburg in 1958 and received the chair for applied mathematics there in 1962. He remained in this position until he became emeritus in 1991. Works Publications Praktische Mathematik, BI Hochschulskripten 812*, Bibliographisches Institut, Mannheim, Zurich, 1968. References • Amann, H., Helfrich, H.-P., Scholz, R. "Joachim A. Nitsche (1926-1996)", Jahresbericht der Deutschen Mathematikervereinigung 99 (1997) 90-100. Authority control International • ISNI • VIAF National • Germany • Italy • United States Academics • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie
Wikipedia
Automatic purpose-driven basis set truncation for time-dependent Hartree–Fock and density-functional theory
Ruocheng Han, Johann Mattiat & Sandra Luber
Nature Communications volume 14, Article number: 106 (2023)
Real-time time-dependent density-functional theory (RT-TDDFT) and linear response time-dependent density-functional theory (LR-TDDFT) are two important approaches to simulate electronic spectra.
However, the basis sets used in such calculations are usually the ones designed mainly for electronic ground state calculations. In this work, we propose a systematic and robust scheme to truncate the atomic orbital (AO) basis set employed in TDDFT and TD Hartree–Fock (TDHF) calculations. The truncated bases are tested for both LR- and RT-TDDFT as well as RT-TDHF approaches, and provide an acceleration of up to an order of magnitude while the shifts of excitation energies of interest are generally within 0.2 eV. The procedure only requires one extra RT calculation with 1% of the total propagation time and a simple modification of the basis set file, which allows instant application in any quantum chemistry package supporting RT-/LR-TDDFT calculations. Aside from the reduced computational effort, this approach also offers valuable insight into the effect of different basis functions on computed electronic excitations and further ideas on the design of basis sets for special purposes. Electronically excited states and their properties are among the central topics of quantum chemistry research. The utilized theoretical methods for excited state calculations typically require equivalent or higher computational resources compared to analogous ground state calculations. Highly accurate multiconfigurational methods are computationally demanding and thus can only be applied to small systems. Time-dependent density-functional theory (TDDFT), due to its good compromise between accuracy and efficiency, has been employed in a wide range of applications, especially for spectroscopy1,2,3. Real-time propagation (RTP) has become an appealing technique for the solution of the time-dependent Kohn–Sham equations, namely, real-time time-dependent density-functional theory (RT-TDDFT), or general approximations to the time-dependent Schrödinger equation4,5,6,7,8,9.
It is based on the evolution of molecular orbitals under the influence of an external field, often with only a δ-pulse (field) applied at the beginning. In the weak field limit within the adiabatic approximation, the spectroscopy simulations using RT-TDDFT and linear response time-dependent density-functional theory (LR-TDDFT) should provide comparable results4. During each time step, one needs to construct the Hamiltonian given by the new molecular orbital (MO) coefficients, which is the most time-consuming part of the RTP. Depending on the quantum chemistry method employed in RTP, the construction of the Hamiltonian may scale to \({{{{{{{\mathcal{O}}}}}}}}({N}^{4})\), e.g., for HF Coulomb and exchange matrices calculated with 2-electron integrals, where N refers to the number of AO basis functions. Therefore, reducing the number of basis functions or finding a proper smaller basis set can potentially save a large amount of computational time and memory. Previous studies on the topic of basis set truncation/reduction have mainly followed three strategies: (1) decreasing the size of the virtual space for frozen natural orbital approximations used in perturbation based methods10,11 (e.g., Møller–Plesset perturbation theory, coupled cluster single-double and perturbative triple, complete active space perturbation theory), (2) reducing the number of functions in correlation consistent basis sets12,13, (3) reducing the number of basis functions of subsystems (which apply expensive wavefunction-based methods) for embedding calculations14,15. However, these works focus on the electronic ground state. Multiple embedding techniques have been applied to accelerate RT-TDDFT calculations by treating subsystems with different level of theories16,17,18,19,20. The idea of a decomposition of the electric dipole moment into molecular orbital pairs was also proposed in recent works for the acceleration or analyses of spectra21,22,23. 
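The O(N^4) scaling of the two-electron-integral step mentioned above makes the potential payoff of basis truncation easy to estimate (an illustrative sketch, not code from the paper): keeping a fraction f of the N basis functions reduces that step's cost by roughly a factor of f^4.

```python
# Illustrative speedup of an O(N^4) step (e.g. HF Coulomb/exchange builds)
# when an AO basis of N functions is truncated to keep_fraction * N.
def on4_speedup(keep_fraction: float) -> float:
    return 1.0 / keep_fraction ** 4

for f in (0.9, 0.75, 0.5):
    print(f"keep {f:.0%} of the basis -> ~{on4_speedup(f):.1f}x on the O(N^4) step")
```

Real wall-clock gains are smaller, since other steps scale more gently, but dropping half of the basis already gives a 16x reduction on the quartic step, consistent with the order-of-magnitude acceleration reported in the abstract.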
This work explores the contribution of a fundamental ingredient, the basis functions, to the electronic spectra. The truncation proposed in this work is designed to check every single component in the basis set (basis function). One can also apply a shell-level truncation for general applications. Basis set files can be easily modified for an accelerated simulation of the spectrum and to obtain better chemical insight into the electric dipole moment contributions. Moreover, a routine to construct a complete basis set (CBS) for TDDFT calculations is proposed. The calculations of electronic absorption and ECD spectra in this work take place in a linear response framework within the electric dipole approximation, assuming that the excited states of the system can be well described within the occupied-virtual space spanned by the ground state solution of the system. As standardly done, we assume the adiabatic approximation, discarding the dependence of the exchange-correlation functional on the history of the propagation. Decomposing the electric dipole moment into the contributions of individual AO basis functions, together with checking the variation of the molecular orbitals (in terms of basis function components) during the RTP, provides a quantitative evaluation of each AO basis function based on its importance for the electronic spectra under study. This further paves the way for a basis set truncation process for computational speed-up and for a scheme to generate a complete basis set for TDDFT calculations. In this work, we propose a basis set truncation scheme for TDDFT calculations. The method is tested for systems ranging from small molecules up to a highly conjugated system and a metal cluster, and achieves an acceleration of up to an order of magnitude in RT-TDDFT or LR-TDDFT calculations with negligible changes in the regions of interest (e.g., valence-shell transitions) of the computed spectra.
Electric dipole moment In the context of this work, the electronic part of the electric dipole moment \(\overrightarrow{d}\) is defined as the trace of the product of the density matrix and integrals of the electric dipole moment operator \(-e\overrightarrow{r}\) (\(\overrightarrow{r}=(x,\, y,\, z)\), e is elementary charge) in the AO basis with basis functions \(\{{\chi }_{\mu }\}\). For calculating the time-dependent electric dipole moment \(\overrightarrow{d}(t)\), we use the AO basis representation for both density matrix PAO(t) and the electric dipole moment integrals \(\overrightarrow{{{{{{{{\boldsymbol{D}}}}}}}}}\) as shown in Eq. (1). In this way, only the density matrix PAO(t) is time-dependent and \(\overrightarrow{{{{{{{{\boldsymbol{D}}}}}}}}}\) remains the same during the RTP for fixed nuclei. PAO(t) can be further expressed in molecular orbital (MO) basis as PMO (MO density matrix after SCF, see Eq. (3) where fi is the occupation number of the ith MO) and the time-dependent part is only carried by the MO coefficients C(t) and its complex conjugate C†(t) (see Eq. (2)). In this work, the AO basis functions are all Gaussian-type orbitals. $$\overrightarrow{d}(t)=-\!e\cdot \mathop{\sum}\limits_{\mu \nu }{P}_{\mu \nu }^{{{{{{{{\rm{AO}}}}}}}}}\langle {\chi }_{\mu }|\overrightarrow{r}|{\chi }_{\nu }\rangle=-\!e\cdot {{{{{{{\rm{Tr}}}}}}}}({{{{{{{{\boldsymbol{P}}}}}}}}}^{{{{{{{{\rm{AO}}}}}}}}}(t)\overrightarrow{{{{{{{{\boldsymbol{D}}}}}}}}})$$ $${{{{{{{{\boldsymbol{P}}}}}}}}}^{{{{{{{{\rm{AO}}}}}}}}}(t)={{{{{{{\boldsymbol{C}}}}}}}}(t){{{{{{{{\boldsymbol{P}}}}}}}}}^{{{{{{{{\rm{MO}}}}}}}}}{{{{{{{{\boldsymbol{C}}}}}}}}}^{{{{\dagger}}} }(t)$$ $${{{{{{{{\boldsymbol{P}}}}}}}}}^{{{{{{{{\rm{MO}}}}}}}}}=({p}_{ij})=\left\{\begin{array}{ll}{f}_{i},\quad &i=j\\ 0,\quad &i\,\ne\, j\end{array}\right.$$ Real-time propagation In our implementation, the MO coefficients C(t) are propagated for a small timestep Δt (see Eq. 
(4)) using the "enforced time-reversal symmetry" (ETRS)24 scheme. U(t + Δt) represents the propagator at time t + Δt and is calculated with Eq. (5), where S is the overlap matrix in AO basis and F(t) is the Fock matrix or Kohn–Sham (KS) matrix in AO basis at time t. C(t + Δt), U(t + Δt), and F(t + Δt) are computed self-consistently24. $${{{{{{{\boldsymbol{C}}}}}}}}(t+{{\Delta }}t)={{{{{{{\boldsymbol{U}}}}}}}}(t+{{\Delta }}t){{{{{{{\boldsymbol{C}}}}}}}}(t)$$ $${{{{{{{\boldsymbol{U}}}}}}}}(t+{{\Delta }}t)=\exp \left[-\frac{i}{2}{{{{{{{{\boldsymbol{S}}}}}}}}}^{-1}({{{{{{{\boldsymbol{F}}}}}}}}(t)+{{{{{{{\boldsymbol{F}}}}}}}}(t+{{\Delta }}t)){{\Delta }}t\right]$$ F(t) needs to be constructed for each time step and usually contributes most to the computational time in RTP. For example, the elements of the HF exchange matrix Kμν(t) are given in Eq. (6) (\(\left\langle \mu \lambda|\sigma \nu \right\rangle\) are the two-electron repulsion integral (ERIs) in AO basis expressed in Eq. (7)), and the computation of the exchange matrix K(t), which is required for the construction of F(t), scales as \({{{{{{{\mathcal{O}}}}}}}}({{N}_{{{{{{{{\rm{AO}}}}}}}}}}^{4})\) (NAO is the number of AO basis functions). $${K}_{\mu \nu }(t)=\mathop{\sum}\limits_{\lambda \sigma }{P}_{\lambda \sigma }^{{{{{{{{\rm{AO}}}}}}}}}(t)\left\langle \mu \sigma|\lambda \nu \right\rangle$$ $$\left\langle \mu \sigma|\lambda \nu \right\rangle=\int{\chi }_{\mu }^{*}({\overrightarrow{r}}_{1}){\chi }_{\sigma }^{*}({\overrightarrow{r}}_{2})\frac{1}{{r}_{12}}{\chi }_{\lambda }({\overrightarrow{r}}_{1}){\chi }_{\nu }({\overrightarrow{r}}_{2})d{\overrightarrow{r}}_{1}d{\overrightarrow{r}}_{2}$$ AO basis truncation In order to decrease NAO, we first analyse Eq. (1) for the electric dipole contribution from each AO basis function. 
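Before moving on, the ETRS update of Eqs. (4) and (5) can be illustrated with a minimal numerical sketch. This is a toy, field-free example, not the production implementation: F(t) and F(t + Δt) are taken equal so the self-consistent loop collapses to a single matrix exponential, and the random F and S matrices are placeholders. The sketch verifies a key property of the S⁻¹-weighted exponential: it preserves the S-orthonormality C†SC = I of the MO coefficients.

```python
import numpy as np

def expm_hermitian(H, c):
    """exp(c*H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(c * w)) @ V.conj().T

def etrs_step(C, F0, F1, S, dt):
    """One ETRS-like step: C(t+dt) = exp(-i/2 S^-1 (F0+F1) dt) C(t).
    Uses S^-1 = X^2 with X = S^(-1/2), so that
    exp(-i S^-1 H dt) = X exp(-i (X H X) dt) X^-1 with X H X Hermitian."""
    w, V = np.linalg.eigh(S)
    X = (V * w**-0.5) @ V.T          # S^(-1/2)
    Xinv = (V * w**0.5) @ V.T        # S^(+1/2)
    H = 0.5 * (F0 + F1)
    U = X @ expm_hermitian(X @ H @ X, -1j * dt) @ Xinv
    return U @ C

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)                           # toy positive-definite overlap
F = rng.standard_normal((n, n)); F = 0.5 * (F + F.T)  # toy Hermitian Fock matrix
w, V = np.linalg.eigh(S)
C0 = (V * w**-0.5) @ V.T             # S-orthonormal start: C0^T S C0 = I
C1 = etrs_step(C0, F, F, S, dt=0.2)
print(np.allclose(C1.conj().T @ S @ C1, np.eye(n)))   # S-orthonormality preserved
```

Since X H X is Hermitian, the exponential is evaluated exactly from its eigendecomposition, and U†SU = S holds to machine precision, which is why C†SC stays the identity along the propagation.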
For the sake of simplicity, \(\overrightarrow{O}_{\mu }(t)\) is used to represent the μth diagonal element of \({\boldsymbol{P}}^{\rm AO}(t)\overrightarrow{{\boldsymbol{D}}}\), and thus \(\overrightarrow{d}(t)\) can be rewritten as in Eq. (8). Taking a detailed look at the construction of \(\overrightarrow{O}_{\mu }(t)\) in Eq. (9), one can see that it provides a decomposed form of the electric dipole moment for each basis function. Therefore, we use \(\overrightarrow{O}_{\mu }(t)\) to represent the electric dipole contribution from the μth basis function. $$\overrightarrow{d}(t)=-\!e\cdot \mathop{\sum }\limits_{\mu }^{N_{\rm AO}}\overrightarrow{O}_{\mu }(t),\,{\rm where}\,\overrightarrow{O}_{\mu }(t)=({\boldsymbol{P}}^{\rm AO}(t)\overrightarrow{{\boldsymbol{D}}})_{\mu \mu }$$ $$\overrightarrow{O}_{\mu }(t)=\mathop{\sum}\limits_{\nu }P_{\mu \nu }^{\rm AO}(t)\langle \chi _{\mu }|\overrightarrow{r}-\overrightarrow{R}|\chi _{\nu }\rangle$$ However, \(\overrightarrow{O}_{\mu }(t)\) is not translationally invariant because the value of \(\overrightarrow{D}_{\mu \nu }\) (an element of \(\overrightarrow{{\boldsymbol{D}}}\)) depends on the choice of the reference point \(\overrightarrow{R}\) (see Eq. (10)). Note that \(\overrightarrow{r}\) and \(\overrightarrow{R}\) are referenced to the origin of the coordinate system. Though \(\overrightarrow{R}\) does not affect the full spectrum after Fourier transform (because \(\overrightarrow{d}\) is translationally invariant for neutral systems, as \(\overrightarrow{R}S_{\mu \nu }\) cancels with the nuclear electric dipole contribution), it can change the relative electric dipole contribution of each AO basis function (\(\overrightarrow{O}_{\mu }(t)\)).
We can further split \({\overrightarrow{D}}_{\mu \nu }\) into a reference point (\(\overrightarrow{R}\))-independent term \(\langle {\chi }_{\mu }|\overrightarrow{r}|{\chi }_{\nu }\rangle\) and a reference point-dependent term \(\overrightarrow{R}{S}_{\mu \nu }\), where Sμν is the element of the overlap matrix in AO basis. $${\overrightarrow{D}}_{\mu \nu }=\langle {\chi }_{\mu }|\overrightarrow{r}-\overrightarrow{R}|{\chi }_{\nu }\rangle=\langle {\chi }_{\mu }|\overrightarrow{r}|{\chi }_{\nu }\rangle -\overrightarrow{R}{S}_{\mu \nu },\,{{{{{{{\rm{where}}}}}}}}\,{S}_{\mu \nu }=\left\langle {\chi }_{\mu }|{\chi }_{\nu }\right\rangle$$ In \(\langle {\chi }_{\mu }|\overrightarrow{r}|{\chi }_{\nu }\rangle\), the relative position of atoms can cause different values of elements in the matrix, which we would like to avoid. To explain the reason, we can think about a toy system consisting of only two hydrogen atoms with Cartesian coordinates H1 \({\overrightarrow{r}}_{1}\) = (0, 0, -a) and H2 \({\overrightarrow{r}}_{2}\) = (0, 0, a) where a ≠ 0. It is obvious that e.g., diagonal matrix elements \(\langle {\chi }_{{{{{{{{{\rm{H1}}}}}}}}}_{s}}|\overrightarrow{r}|{\chi }_{{{{{{{{{\rm{H1}}}}}}}}}_{s}}\rangle \,\ne \,\langle {\chi }_{{{{{{{{{\rm{H2}}}}}}}}}_{s}}|\overrightarrow{r}|{\chi }_{{{{{{{{{\rm{H2}}}}}}}}}_{s}}\rangle\) (see Eq. (11)) even though, by symmetry, we expect the same "contribution" of electric dipole from the two atoms. In Eq. (11), each H atom has a Slater-type 1s orbital Ae−ζr where A is normalization constant and r is the distance from the center of the atom, and we change the integration variable from \(\overrightarrow{r}\) to \(\overrightarrow{s}\) using \(\overrightarrow{s}=\overrightarrow{r}-{\overrightarrow{r}}_{1}\) and \(\overrightarrow{s}=\overrightarrow{r}-{\overrightarrow{r}}_{2}\). 
$$\langle \chi _{{\rm H2}_{s}}|\overrightarrow{r}|\chi _{{\rm H2}_{s}}\rangle -\langle \chi _{{\rm H1}_{s}}|\overrightarrow{r}|\chi _{{\rm H1}_{s}}\rangle =\int A^{2}\overrightarrow{r}{e}^{-2\zeta|\overrightarrow{r}-{\overrightarrow{r}}_{2}|}d\overrightarrow{r}-\int A^{2}\overrightarrow{r}{e}^{-2\zeta|\overrightarrow{r}-{\overrightarrow{r}}_{1}|}d\overrightarrow{r}\\ =\int A^{2}(\overrightarrow{s}+{\overrightarrow{r}}_{2}){e}^{-2\zeta|\overrightarrow{s}|}d\overrightarrow{s}-\int A^{2}(\overrightarrow{s}+{\overrightarrow{r}}_{1}){e}^{-2\zeta|\overrightarrow{s}|}d\overrightarrow{s}\\ =({\overrightarrow{r}}_{2}-{\overrightarrow{r}}_{1})\int A^{2}{e}^{-2\zeta|\overrightarrow{s}|}d\overrightarrow{s}\\ ={\overrightarrow{r}}_{2}-{\overrightarrow{r}}_{1}$$ One way of minimizing the effect of \(\langle \chi _{\mu }|\overrightarrow{r}|\chi _{\nu }\rangle\) is to shift the molecular system far from (0, 0, 0), which is equivalent to setting a large \(\overrightarrow{R}\). It is worth noting that we do not need to formally "move" the molecule; this is just an assumption made in the derivation from Eq. (9) to Eq. (12). In this method, we only care about the relative value of \(\overrightarrow{O}_{\mu }(t)\) when determining the basis function(s) to be truncated, and \(\overrightarrow{R}\) provides the same factor to \(\overrightarrow{D}_{\mu \nu }\) and subsequently to \(\overrightarrow{O}_{\mu }(t)\). Therefore, it is safe to substitute \(\overrightarrow{D}_{\mu \nu }\) with Sμν in the expression of \(\overrightarrow{O}_{\mu }(t)\), and thus we have a scalar Oμ(t) as shown in Eq. (12). It is worth noting that Oμ(t) does not explicitly contain any electric dipole information, which makes sense because the electric dipole cannot be formally defined for a single atom-centered orbital.
However, AOs do contribute to the electric dipole by forming MOs across different atomic centers, and the information about this contribution, which we call the density matrix contribution, is contained in PAO(t). The expression in Eq. (12) is also known from Mulliken population analysis as the number of electrons associated with \(\chi _{\mu }\). The following computations are all based on the scalar form Oμ(t). $$O_{\mu }(t)=({\boldsymbol{P}}^{\rm AO}(t){\boldsymbol{S}})_{\mu \mu }$$ To quantify the contribution, we use the formula in Eq. (13). \(x_{\mu }^{\rm DC}\) is an indicator measuring the variation of the Density matrix Contribution (DC) of the μth AO basis function. St[Oμ(t)] computes the standard deviation of Oμ(t) over the total simulation time for each μ. The numerator of Eq. (13) indicates the variation (along the RTP) of the electric dipole moment contribution from each AO basis function. The dimensionless quantity \(x_{\mu }^{\rm DC}\) is then constructed by dividing the numerator by its mean value over all AO basis functions. A small \(x_{\mu }^{\rm DC}\) value means the change of the electric dipole moment contributed by the μth basis function is comparatively small among all AO basis functions, and removing this basis function should not change the spectrum significantly (a constant value vanishes after Fourier transform for RT-TDHF/TDDFT). One might point out that \(x_{\mu }^{\rm DC}\) cannot distinguish pulses from different directions because Oμ(t) in Eq. (12) is no longer direction dependent as in Eq. (8). However, it is found that PAO(t) still varies according to the direction of the pulse because its action is encoded in the MO coefficients.
$$x_{\mu }^{\rm DC}=\frac{S_{t}[O_{\mu }(t)]}{\frac{1}{N_{\rm AO}}\mathop{\sum }\nolimits_{\mu }^{N_{\rm AO}}S_{t}[O_{\mu }(t)]}$$ It is worth noting that we considered applying a basis transformation to \({\boldsymbol{P}}^{\rm AO}(t)\overrightarrow{{\boldsymbol{D}}}\), namely using the eigenvectors of PAO(t) (transforming to the natural orbital basis) or of \(\overrightarrow{{\boldsymbol{D}}}\). However, the former is time-dependent, so it is hard to choose one transformation matrix for all time steps, and the latter is reference point-dependent (see Eq. (10)), so one cannot obtain a consistent truncation choice under translation (note that nuclei do not move in this study, as opposed to, e.g., Ehrenfest dynamics). Also, an AO basis is a common choice in most molecular simulations and some solid state simulations (e.g., the Gaussian and Plane Waves25 method in the CP2K package (CP2K version 7.0 (Development Version), the CP2K developers group. CP2K is freely available from https://www.cp2k.org/.)). Therefore, truncation of the AO basis has broad application prospects and can easily be applied by a simple modification of the basis set file. During practical tests of basis truncation, we observed that using only \(x_{\mu }^{\rm DC}\) as an indicator is not enough to obtain an accurate spectrum. Another indicator \(x_{\mu }^{\rm IP}\) is introduced, which measures the Importance for Propagation stability (IP) of the μth AO basis function (see Eq. (14)). Cμj(t) denotes an element of the transformation matrix (from AO to MO basis), St computes the standard deviation along the time for each μ and j, and \(\mathop{\sum }\nolimits_{j}^{N_{\rm MO}}\) sums over all standard deviations of MOs originating from the μth AO basis function. The numerator of Eq.
(14) indicates the variation (along the RTP) of the contribution from each AO basis function to all MOs in the transformation matrix C(t). As for \(x_{\mu }^{\rm DC}\), the dimensionless quantity \(x_{\mu }^{\rm IP}\) is constructed by dividing the numerator by its mean value over all AO basis functions. A small \(x_{\mu }^{\rm IP}\) value means that the contributions to the MOs from the μth basis function do not change much compared to the contributions of all AO basis functions, and removing this basis function should not affect the propagation of the remaining part of the density matrix significantly. Note that both Oμ(t) and Cμj(t) are usually complex numbers in RTP, and the standard deviation of a set of complex numbers is calculated as in Supplementary Eq. (1). $$x_{\mu }^{\rm IP}=\frac{\mathop{\sum }\nolimits_{j}^{N_{\rm MO}}S_{t}[C_{\mu j}(t)]}{\frac{1}{N_{\rm AO}}\mathop{\sum }\nolimits_{\mu }^{N_{\rm AO}}\mathop{\sum }\nolimits_{j}^{N_{\rm MO}}S_{t}[C_{\mu j}(t)]}$$ In practice, an empirical parameter xthr is chosen as a threshold for both \(x_{\mu }^{\rm DC}\) and \(x_{\mu }^{\rm IP}\), and the AO basis functions with both indicators smaller than xthr are removed. The remaining basis set \(\{\chi _{\mu }\}_{\rm trunc}\) (truncated AO basis set) is then defined as in Eq. (15), given the original AO basis set {χμ} (whose cardinality \(|\{\chi _{\mu }\}|\) is NAO). Sometimes \(\{\chi _{\mu }\}_{\rm trunc}\) includes only part of a given shell, e.g., for a p-shell, only \(\chi ^{p_{x}}\) and \(\chi ^{p_{y}}\) are in \(\{\chi _{\mu }\}_{\rm trunc}\) and \(\chi ^{p_{z}}\) is truncated.
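The two indicators of Eqs. (13) and (14) are cheap to evaluate from stored RTP snapshots. The sketch below uses synthetic arrays as stand-ins for the collected PAO(t), C(t), and S; it assumes the complex standard deviation convention is the root-mean-square deviation from the mean, which is what numpy's `std` computes for complex input.

```python
import numpy as np

def truncation_indicators(P_t, C_t, S):
    """x_DC and x_IP (Eqs. 13-14) from RTP snapshots.
    P_t: (T, N, N) AO density matrices, C_t: (T, N, N) MO coefficients
    (rows = AO index mu, columns = MO index j), S: (N, N) AO overlap."""
    # O_mu(t) = (P^AO(t) S)_mu,mu  (Eq. 12)
    O_t = np.einsum('tmn,nm->tm', P_t, S)
    s_dc = np.std(O_t, axis=0)              # S_t[O_mu(t)] per basis function
    x_dc = s_dc / s_dc.mean()               # Eq. (13)
    s_ip = np.std(C_t, axis=0).sum(axis=1)  # sum_j S_t[C_mu_j(t)]
    x_ip = s_ip / s_ip.mean()               # Eq. (14)
    return x_dc, x_ip

rng = np.random.default_rng(1)
T, N = 50, 8
S = np.eye(N)
P_t = rng.standard_normal((T, N, N)) * 0.1
P_t[:, 0, 0] = 1.0          # basis function 0: time-constant contribution
C_t = rng.standard_normal((T, N, N)) + 1j * rng.standard_normal((T, N, N))
x_dc, x_ip = truncation_indicators(P_t, C_t, S)
# Both indicators average to 1 by construction; the time-constant
# contribution of basis function 0 yields a comparatively small x_DC.
print(np.isclose(x_dc.mean(), 1.0), np.isclose(x_ip.mean(), 1.0))
```

A function whose density matrix contribution barely varies along the propagation (here, index 0) lands near zero on the x_DC axis, exactly the behavior the truncation criterion exploits.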
Such symmetry breaking is mainly due to the use of a polarized field (δ-pulse) in the RT-TDDFT calculations, and rotational invariance requires a shell-level truncation. To keep the truncated basis set usable in any computational chemistry package, we also recommend shell-level truncation for general application of the basis set file. In most cases, the majority rule can be applied for truncation at the shell level, namely, the shells retaining more than half of their original basis functions remain in \(\{\chi _{\mu }\}_{\rm trunc}\) while the others are fully discarded. This has to be checked against the \(x_{\mu }^{\rm DC}\) indicators of the basis functions in the same shell to ensure that no strong contributions to electric dipole transitions arise from some of these basis functions. In this study, the basis sets of the (S)-methyloxirane, (-)-α-pinene, ZnPc, and Ag20 systems are truncated at the shell level. A schematic view of the truncation process is shown in Fig. 1. $$\{\chi _{\mu }\}_{\rm trunc}=\left\{\chi _{\mu }\,|\,x_{\mu }^{\rm DC} \, > \, x^{\rm thr}\vee x_{\mu }^{\rm IP} \, > \,x^{\rm thr},\,\forall \mu \in \{1,...,N_{\rm AO}\}\right\}$$ Fig. 1: Schematic diagram of the basis set truncation process. First, a real-time propagation run of 1% (e.g., 100 steps) of the total simulation time is performed. Then the information on the AO density matrix PAO(t) and the MO coefficients C(t) at every step, as well as the overlap matrix S, is collected. Basis functions to be truncated are selected based on the low standard deviation (std. in the figure) of Oμ(t) and Cμj(t). Eventually, one can directly modify the basis set file for a complete RT-TDHF/TDDFT calculation or a LR-TDHF/TDDFT calculation.
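The shell-level majority rule can be sketched in a few lines; the shell bookkeeping below (a mapping from shell name to basis-function indices) is a simplified stand-in for the real basis set data structures.

```python
def shell_level_truncation(shells, kept_functions):
    """Majority rule: keep a shell iff more than half of its basis
    functions survived the function-level truncation.
    shells: dict shell_name -> list of basis-function indices
    kept_functions: set of indices with x_DC > x_thr or x_IP > x_thr."""
    kept_shells = []
    for name, funcs in shells.items():
        n_kept = sum(1 for f in funcs if f in kept_functions)
        if n_kept > len(funcs) / 2:
            kept_shells.append(name)
    return kept_shells

# Toy example: two s-shells and one p-shell (functions 2, 3, 4).
shells = {'s1': [0], 's2': [1], 'p1': [2, 3, 4]}
kept = shell_level_truncation(shells, kept_functions={0, 1, 4})
print(kept)  # p1 keeps only 1 of 3 functions -> discarded
```

In a practical workflow one would still inspect the x_DC values of the discarded p-functions, as the text recommends, before committing the shell-level decision to the basis set file.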
Using \(\{\chi _{\mu }\}_{\rm trunc}\), namely, reducing the number of basis functions from NAO to \(N_{\rm trunc}=|\{\chi _{\mu }\}_{\rm trunc}|\), can ideally decrease the total computational time to a fraction \((N_{\rm trunc}/N_{\rm AO})^{4}\) of the original for a RT-TDHF calculation or a RT-TDDFT calculation with a hybrid exchange-correlation functional. This truncated basis set can also be transferred to LR-TDHF/TDDFT calculations. The procedure for carrying out RT-TDHF/TDDFT calculations with a truncated AO basis set for the examples studied in this work is as follows:
1. Run 100 (400) steps (1% of the total simulation time) of a RT-TDHF/RT-TDDFT simulation with the timestep 0.2 (0.05) atomic units using a preliminarily chosen basis set \(\{\chi _{\mu }\}\), and collect the information regarding S, PAO(t), and C(t) at every step.
2. Calculate \(x_{\mu }^{\rm DC}\) and \(x_{\mu }^{\rm IP}\) via Eqs. (13) and (14), respectively, and select the truncated AO basis set \(\{\chi _{\mu }\}_{\rm trunc}\) based on the criterion in Eq. (15).
3. Run 10'000 (40'000) steps of the full RT-TDHF/TDDFT simulation with the same timestep using \(\{\chi _{\mu }\}_{\rm trunc}\).
Note that a ground state SCF calculation should be carried out with the truncated basis set before the RT-TDHF/RT-TDDFT simulation in order to apply the perturbation to a converged ground state. It is worth noting that the truncation procedure by construction eliminates the transitions that are not (or only very weakly) electric dipole allowed, and thus this approach focuses more on the overall spectrum than on the types of transitions. Complete basis set limit In addition to the analyses of truncated AO basis functions, we introduce an algorithm to construct basis sets approaching the CBS limit for RT(LR)-TDHF/TDDFT calculations.
The idea of the CBS limit employed here is to add diffuse functions (see the examples (S)-methyloxirane and (-)-α-pinene for the reason) to all types of AO basis functions (s, p, d, f, g, ...) representing different orbital angular momenta l. These functions are added in an even-tempered manner26,27 by a geometric progression of the orbital exponents in the original basis set: \(\alpha _{l,k}=\alpha _{l}\beta _{l}^{k},\,\forall k\in {\mathbb{N}}\). αl,k is the kth exponent of the l-shell, and αl and βl are the two parameters to be optimized for the basis set. Since most available basis sets (Pople28, Dunning29, Jensen30, Ahlrichs31,32, etc.) provide more than one exponent for each type of shell, we can directly extrapolate from these values to get the additional exponent \(\alpha _{l,k+1}=\alpha _{l,k}^{2}/\alpha _{l,k-1}\). One can increase the k value until significant linear dependencies are found in the basis set (sometimes also referred to as basis set overcompleteness33). This CBS scheme usually requires a quite large basis set for the calculation, and it is usually unclear which basis function(s) should be removed once overcompleteness is reached. Therefore, we combine it with AO truncation and propose an "Add-While-Truncate" algorithm (see Algo. 1) to construct the CBS specifically designed for RT(LR)-TDHF/TDDFT calculations. Firstly, a preliminarily chosen basis set \(\{\chi _{\mu }\}\) is used for a short period of a RT-TDHF/TDDFT calculation and \(\{\chi _{\mu }\}_{\rm trunc}\) is selected. An additional basis set containing diffuse functions (\(\{\chi _{\mu }\}_{\rm diffuse}\)) is constructed in an even-tempered manner. \(\{\chi _{\mu }\}_{\rm diffuse}\) may contain some basis functions truncated in previous steps (collected as \(\{\chi _{\mu }\}_{\rm deleted}\)), which should be removed.
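The even-tempered extrapolation and the overcompleteness check can be sketched together. The exponents and threshold below are illustrative, not taken from any actual basis set, and the overlap model is deliberately minimal: for normalized s-type Gaussians sharing one center, ⟨χi∣χj⟩ = (2√(αiαj)/(αi + αj))^(3/2), so the smallest eigenvalue of S can be monitored analytically as diffuse functions are appended.

```python
import numpy as np

def add_even_tempered(exponents, n_new=1):
    """Extend a shell's exponent list toward diffuse functions using the
    geometric-progression extrapolation alpha_{k+1} = alpha_k^2 / alpha_{k-1}."""
    exps = sorted(exponents, reverse=True)   # tightest exponent first
    for _ in range(n_new):
        exps.append(exps[-1] ** 2 / exps[-2])
    return exps

def min_overlap_eigenvalue(exponents):
    """Smallest eigenvalue of the overlap matrix of normalized s-type
    Gaussians on one center: S_ij = (2 sqrt(a_i a_j) / (a_i + a_j))^(3/2)."""
    a = np.asarray(exponents, dtype=float)
    S = (2.0 * np.sqrt(np.outer(a, a)) / np.add.outer(a, a)) ** 1.5
    return np.linalg.eigvalsh(S).min()

eps = 1e-6                        # user-defined overcompleteness threshold
exps = [10.0, 2.0]                # illustrative s-shell exponents
print(add_even_tempered(exps))    # extrapolated exponent: 2^2/10 = 0.4
lam = []
for _ in range(6):
    exps = add_even_tempered(exps)
    lam.append(min_overlap_eigenvalue(exps))
# Each added diffuse function can only lower the smallest eigenvalue
# (Cauchy interlacing); one would stop adding once it drops below eps.
print(all(l2 <= l1 for l1, l2 in zip(lam, lam[1:])), lam[-1] > eps)
```

The monotone decrease of the smallest eigenvalue is exactly why the ∣λ∣min < ϵ criterion gives a well-defined stopping point for the even-tempered extension.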
Then \(\{\chi _{\mu }\}_{\rm diffuse}\) is combined with \(\{\chi _{\mu }\}_{\rm trunc}\) to form a new basis set \(\{\chi _{\mu }\}\). In order to check the overcompleteness of the newly created AO basis set, we calculate the overlap matrix \({\boldsymbol{S}}_{\{\chi _{\mu }\}}\) and solve for its eigenvalues λ. If the minimal absolute eigenvalue ∣λ∣min is smaller than a user-defined small value ϵ, or \(\{\chi _{\mu }\}\) remains the same as in the last cycle (namely, basis functions are neither truncated nor added), \(\{\chi _{\mu }\}\) is regarded as the CBS under such an ϵ-condition (\(\{\chi _{\mu }\}_{\rm CBS-\epsilon }\)); otherwise the new \(\{\chi _{\mu }\}\) is used to repeat the previous steps until the final condition is fulfilled. In practice, one can also manually remove some newly added diffuse functions from \(\{\chi _{\mu }\}\) in the iteration to satisfy the given ϵ-condition. In this case, in order to minimize the total number of basis functions, we first remove the diffuse functions corresponding to higher orbital angular momenta, which is the same idea as the one applied in calendar basis sets34. For the sake of simplicity, we use the term "basis functions" for "AO basis functions" in the remaining part of this manuscript.
Algorithm 1: Add-While-Truncate CBS
1: repeat
2: \(\{\chi _{\mu }\}_{\rm old}\leftarrow \{\chi _{\mu }\}\)
3: Run a RT-TDHF/TDDFT simulation with \(\{\chi _{\mu }\}\) for 100 (400) steps
4: Construct \(\{\chi _{\mu }\}_{\rm trunc}\) by Eqs. (13)–(15)
5: Construct an additional even-tempered basis set \(\{\chi _{\mu }\}_{\rm diffuse}\) for \(\{\chi _{\mu }\}_{\rm trunc}\)
6: \(\{\chi _{\mu }\}_{\rm deleted}\leftarrow \{\chi _{\mu }\}_{\rm deleted}\cup (\{\chi _{\mu }\}\setminus \{\chi _{\mu }\}_{\rm trunc})\)
7: \(\{\chi _{\mu }\}_{\rm diffuse}\leftarrow \{\chi _{\mu }\}_{\rm diffuse}\setminus \{\chi _{\mu }\}_{\rm deleted}\)
8: \(\{\chi _{\mu }\}\leftarrow \{\chi _{\mu }\}_{\rm trunc}\cup \{\chi _{\mu }\}_{\rm diffuse}\)
9: Solve for the eigenvalues λ of the overlap matrix \({\boldsymbol{S}}_{\{\chi _{\mu }\}}\)
10: until ∣λ∣min < ϵ or \(\{\chi _{\mu }\}=\{\chi _{\mu }\}_{\rm old}\)
11: \(\{\chi _{\mu }\}_{\rm CBS-\epsilon }\leftarrow \{\chi _{\mu }\}_{\rm trunc}\)
Example: H2 dimer The H2 dimer is used as the first test system, with the δ-pulse applied along the z direction (see the geometry in Fig. 2d; the z axis is parallel to the H–H bond, and the y axis is perpendicular to the plane formed by the four atoms). Four different basis sets, 6-31G, 6-31G**, 6-31++G, and 6-31++G**28,35,36, are utilized for RT-TDHF calculations. Fig. 2: RT-TDHF calculations of the H2 dimer. xDC-xIP map obtained with a 6-31G, b 6-31G**, c 6-31++G, and d 6-31++G** basis set. Each square in the map represents a basis function with its numbering on the x and y axes. The color of the squares is based on the value xDC ⋅ xIP, so the deeper the color, the more important the basis function is in the TDHF calculation. d includes the visualization of related basis functions. Axis labels (basis functions) are sorted in ascending order according to their xDC or xIP values. The red dashed line represents xthr = 0.1 and the gray dashed line represents xthr = 0.2.
e Jaccard indices for the 6-31G, 6-31G**, 6-31++G, and 6-31++G** basis sets. The red dashed line represents xthr = 0.1 and the gray dashed line represents xthr = 0.2. f Electronic absorption spectra using the 6-31G, 6-31G**, 6-31++G, 6-31++G**, and truncated 6-31++G** basis sets with xthr = 0.1 (6-31++G** trunc 16) and xthr = 0.2 (6-31++G** trunc 12). The number after the basis set label indicates the number of basis functions used for the RTP. a.u., arbitrary units. For each H atom, 6-31G contains two s-type basis functions (denoted 2s for convenience), 6-31G** contains 2s1p (1p as an extra polarization function; we use italics to denote specific basis function(s), e.g., 1p means the first p-type basis function), 6-31++G contains 3s (3s being an extra diffuse function), and 6-31++G** contains 3s1p. Note that the abbreviations we use here refer to basis functions, not to specific electron shells. The same convention of basis/orbital notation is used for all examples in this study. E.g., for a truncation from 5s4p to 5s3p (−3p), −3p means the 3rd p-type basis function is removed while the 1st, 2nd, and 4th p-type basis functions and all s-type basis functions remain. After 100 steps of RT-TDHF calculations, xDC and xIP of each basis function are computed and can be visualized as the xDC-xIP maps in Fig. 2a–d. Basis functions are represented by colored squares with their value xDC ⋅ xIP. The electric dipole contribution and the importance for propagation stability of the basis functions are sorted along the two axes. The red dashed line and the gray dashed line represent xthr = 0.1 and xthr = 0.2, respectively. xthr splits the xDC-xIP map into four quadrants: important for both electric dipole contribution and propagation stability (top right), important only for electric dipole contribution (top left), important only for propagation stability (bottom right), and important for neither (bottom left).
The basis functions located inside the lower-left region (red borders for xthr = 0.1) are the ones recommended to be deleted from the basis set. One finds that no basis function is to be deleted in the case of the 6-31G (Fig. 2a) and 6-31++G (Fig. 2c) basis sets, while there are 8 basis functions to be deleted in the case of the 6-31G** (Fig. 2b) and 6-31++G** (Fig. 2d) basis sets with xthr = 0.1. These 8 basis functions are the same for 6-31G** and 6-31++G**: 2px and 2py of each H atom, which belong to the polarization functions. If xthr is set to 0.2 (gray dashed lines in Fig. 2b, d), four extra basis functions are to be deleted in both cases: 2pz of each H atom. Therefore, a setting of xthr = 0.2 essentially truncates 6-31G** to 6-31G and 6-31++G** to 6-31++G for the H2 dimer. Note that the pulse causes differences between atoms with different nuclear Cartesian coordinates, leading to different xDC-xIP values of the same basis functions on different H atoms. In Supplementary Fig. 1, we further provide a more intuitive view of Oμ for each basis function in the 6-31++G and 6-31++G** basis sets. One can easily distinguish the small-contribution components from the large-contribution ones. Let us take a closer look at the xDC-xIP map in the case of the 6-31++G** basis set (see the basis functions shown in Fig. 2d; we focus only on one H atom here). We can in fact explain qualitatively why 2px and 2py are the least important basis functions for the simulation of the electronic absorption spectrum. The electric dipole transitions from σ1s−1s to \(\pi _{2{p}_{x}-2{p}_{x}}\), \(\pi _{2{p}_{x}-2{p}_{x}}^{*}\), \(\pi _{2{p}_{y}-2{p}_{y}}\), and \(\pi _{2{p}_{y}-2{p}_{y}}^{*}\) are almost (considering the effect from the other H2 molecule close by) forbidden for symmetry reasons. The xDC-xIP map shows that 2px is slightly more important than 2py, which may be explained by a stronger interaction along the x direction between the H2 molecules.
The electric dipole transition \(\pi _{2{p}_{z}-2{p}_{z}}^{*}\leftarrow \sigma _{1s-1s}\) is allowed, and thus the 2pz basis function is considered to be more important than 2px and 2py for the electronic absorption spectrum. The electric dipole transition \(\sigma _{2s-2s}^{*}\leftarrow \sigma _{1s-1s}\) is also allowed and \(\sigma _{2s-2s}^{*}\) has a lower energy than \(\pi _{2{p}_{z}-2{p}_{z}}^{*}\), which leads to a higher occupation probability. Therefore, 2s in the 6-31++G** basis set is one of the "dominant" basis functions in the RTP for H2 with the computational settings used. In more complex systems, such an energetic analysis in terms of "static" wavefunctions (e.g., wavefunctions after SCF) is not enough to give a reasonable truncated basis set since the MO coefficients C(t) are time-dependent, which explains our choice of using the first 100 (400) steps for the analysis. Besides, a Jaccard index37 (J(xthr)) is applied to analyze the similarity of the deleted basis functions suggested by our two criteria xDC and xIP (see Eq. (16)). A high Jaccard index indicates more basis functions in common between the two sets and vice versa. This information provides an intuitive view of the truncation along xthr for a given basis set. $$J(x^{\rm thr})=\frac{|\{\chi _{\mu }\,|\,x_{\mu }^{\rm DC} \, < \, x^{\rm thr}\}\cap \{\chi _{\mu }\,|\,x_{\mu }^{\rm IP} \, < \, x^{\rm thr}\}|}{|\{\chi _{\mu }\,|\,x_{\mu }^{\rm DC} \, < \, x^{\rm thr}\}\cup \{\chi _{\mu }\,|\,x_{\mu }^{\rm IP} \, < \, x^{\rm thr}\}|}$$ For the H2 dimer system, it is found that J(xthr) for 6-31G** and 6-31++G** remains at a value of 1.0 from xthr = 0.01 to xthr = 1.0 (see Fig. 2e). This shows that the RTP has a clear "preference" for some basis functions within the given basis sets.
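The Jaccard index of Eq. (16) is straightforward to evaluate from the two indicator arrays. The sketch below uses illustrative indicator values, and adopts the convention J = 0 when both candidate-deletion sets are empty, an assumption consistent with the reported value of 0.0 when no basis function falls below the threshold.

```python
import numpy as np

def jaccard_index(x_dc, x_ip, x_thr):
    """Eq. (16): similarity of the two candidate-deletion sets
    {mu : x_DC < x_thr} and {mu : x_IP < x_thr}."""
    below_dc = set(np.flatnonzero(np.asarray(x_dc) < x_thr))
    below_ip = set(np.flatnonzero(np.asarray(x_ip) < x_thr))
    union = below_dc | below_ip
    if not union:                 # neither criterion flags anything
        return 0.0
    return len(below_dc & below_ip) / len(union)

x_dc = [0.02, 0.05, 0.8, 1.9]     # illustrative indicator values
x_ip = [0.03, 0.40, 0.9, 1.7]
print(jaccard_index(x_dc, x_ip, 0.1))   # mu=0 in both sets, mu=1 only in x_DC
print(jaccard_index(x_dc, x_ip, 0.01))  # both sets empty -> 0.0
```

Scanning x_thr over a grid and plotting J(x_thr) reproduces the kind of curve shown in Fig. 2e.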
In the case of the 6-31G and 6-31++G basis sets, on the other hand, J(xthr) remains at a value of 0.0 up to xthr = 0.9, which means that no redundant basis functions are found for these basis sets. The spectra using the different basis sets and the truncated 6-31++G** basis sets are shown in Fig. 2f. The spectra using the 6-31G and 6-31G** basis sets look very similar, which matches the truncation suggestion given in Fig. 2b. The same holds for the spectra using the 6-31++G and 6-31++G** basis sets. The truncated 6-31++G** basis set with xthr = 0.2 (noted as 6-31++G** trunc 12, 12 basis functions left) leads to the same basis set as 6-31++G, with an error of ~0.1 eV in the corresponding excitation energies. With a tighter threshold xthr = 0.1, the truncated 6-31++G** basis set (noted as 6-31++G** trunc 16, 16 basis functions left) achieves more accurate spectra than the 6-31++G basis set, with an error at the level of ~0.01 eV. These results are in accordance with the xDC-xIP map introduced before. For the sake of completeness, the cases of a δ-pulse from the x or y direction are included in Supplementary Fig. 2.
Example: H2O dimer
Four different basis sets, def2-TZVP, def2-TZVPP, def2-TZVPD, and def2-TZVPPD32,38 are utilized for RT-TDHF calculations of the H2O dimer system (see Fig. 3g for the nuclear structure). Again, these four basis sets are chosen based on the addition of polarization functions and/or diffuse functions. The xDC-xIP map of this system is shown in Fig. 3a–d after 100 steps of RT-TDHF calculations. Compared to the H2 dimer, the H2O dimer is a more complicated system and thus the xDC-xIP map is more involved. Nevertheless, it is clear that the distribution of basis functions in all plots shows a "dumbbell" shape (from lower left to upper right), namely, more dispersed in the low and high xDC/xIP regions compared to the middle range. This provides a rough idea of the range of truncation, and the cross point of xthr = 0.1 generally lies at the neck of the "dumbbell".
The Jaccard indices for the four basis sets are shown in Fig. 3e. In Supplementary Fig. 3, some visualizations of orbitals are shown in the xDC-xIP map of the def2-TZVP basis set.
Fig. 3: RT-TDHF calculations of the H2O dimer. xDC-xIP map obtained with a def2-TZVP, b def2-TZVPP, c def2-TZVPD, and d def2-TZVPPD basis set. Each square in the map represents a basis function with its numbering on the x and y axes. The color of the squares is based on the value calculated as xDC ⋅ xIP, so the deeper the color the more important the basis function is in the TDHF calculation. The red dashed line represents xthr = 0.1. e Jaccard indices for def2-TZVP, def2-TZVPP, def2-TZVPD, and def2-TZVPPD basis sets, and f def2-TZVPD, def2-TZVPPD, and corresponding truncated/recursively truncated basis sets. The vertical red dashed line represents xthr = 0.1. g Electronic absorption spectra using the original, one-time truncated, and recursively truncated def2-TZVPD and def2-TZVPPD basis sets. The truncation threshold is xthr = 0.1. h Electronic absorption spectra using the original, CBS-10−6, and CBS-all (combined basis set of the original one and CBS-10−6) versions of the def2-TZVPPD and def2-QZVPPD basis sets. The truncation threshold used in the CBS scheme is xthr = 0.1. The number after the basis set label indicates the number of basis functions used for the RTP. a.u. arbitrary units.
A natural question of basis set truncation is whether one needs to do it recursively until the basis set does not change anymore (which we refer to as "recursive truncation"). Therefore, we test the recursive truncation of the def2-TZVPD and def2-TZVPPD basis sets, and provide the number of basis functions together with Jaccard indices (see Fig. 3f). One-time truncation decreases the number of basis functions from 116 to 92 and from 148 to 103 for the two original basis sets, while recursive truncation only decreases the number further from 92 to 91 and from 103 to 98, respectively.
It is worth mentioning that the two basis sets are very similar, as one can see from the numbers of basis functions after the recursive truncation. The Jaccard indices give visual evidence that "def2-TZVPD trunc 91" and "def2-TZVPPD trunc 98" do not change after another truncation process since J(x) = 0 for x ∈ [0, 0.1]. It also shows that one-time truncation is good enough to significantly decrease the J(xthr) value (see the two dashed lines). Considering the time and computational resources spent on the recursive truncation process (usually several rounds of RT-TDHF/TDDFT calculations are needed), we only focus on one-time truncation in the following. The spectra of the H2O dimer using the def2-TZVP, def2-TZVPP, def2-TZVPD, and def2-TZVPPD basis sets lead to a similar conclusion as in the H2 dimer case, namely, that extra diffuse functions have a large impact on the absorption spectra while extra polarization functions have a limited impact (see Supplementary Fig. 4). In addition, we provide the spectra after the one-time truncation and recursive truncation processes (see Fig. 3g). All spectra in this figure are very close to each other up to an excitation energy of 20 eV, indicating that def2-TZVPPD includes many redundant basis functions for RT-TDHF calculations in this case. Regarding the usage of computational resources, in the case of \({{{{{{{\mathcal{O}}}}}}}}({N}^{4})\) scaling (HF Coulomb and exchange matrices calculated with 2-electron integrals, without real-space gridding or density fitting), an RT-TDHF run with the "def2-TZVPD trunc 91" basis set only consumes (91/148)^4 ≈ 14% of the time compared to the original def2-TZVPPD basis set. Of special interest for this system is the effect of hydrogen bonds on the truncation process. However, we do not find any dependence of the deleted basis functions on the distance (up to 10 Å) between the two water molecules, and the suggested truncated basis sets are very similar.
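The recursive truncation tested above is a fixed-point iteration: propagate, analyze, delete, and stop once nothing is removed. A sketch with a toy `score` function standing in for a short RT-TDHF run (the importance values are made up for illustration):

```python
import numpy as np

def score(basis):
    """Toy stand-in for a short RT-TDHF run: returns per-function xDC and
    xIP for the remaining basis functions (hypothetical fixed values)."""
    importance = {0: 0.9, 1: 0.8, 2: 0.05, 3: 0.4, 4: 0.02, 5: 0.7}
    x = np.array([importance[b] for b in basis])
    return x, x  # pretend the two criteria agree, as for the H2 dimer

def truncate_recursively(basis, x_thr=0.1):
    """Repeat propagate-analyze-truncate until no function is removed,
    i.e. the basis set is "truncation consistent"."""
    while True:
        x_dc, x_ip = score(basis)
        keep = [b for b, d, p in zip(basis, x_dc, x_ip)
                if not (d < x_thr and p < x_thr)]
        if keep == basis:
            return basis
        basis = keep

print(truncate_recursively([0, 1, 2, 3, 4, 5]))  # -> [0, 1, 3, 5]
```

With an \({{{{{{{\mathcal{O}}}}}}}}({N}^{4})\) step cost, the payoff of a converged truncation can be estimated as (n_kept/n_orig)**4, e.g. (91/148)**4 ≈ 0.14 of the original propagation time.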
This may indicate that the basis functions needed for the description of hydrogen bonds are also important for the electronic absorption spectrum of the water monomers themselves. Moreover, the CBS scheme is tested for the H2O dimer system. Two basis sets, def2-TZVPPD and def2-QZVPPD, are used as starting points for the CBS scheme, with ϵ = 10−6 (CBS-10−6). We directly modify the basis set file every time we truncate or add basis functions. The detailed steps of the CBS scheme for the def2-TZVPPD basis set are shown in Table 1. The original def2-TZVPPD basis set of the H atom and the O atom is 3s3p1d and 6s4p3d1f, respectively, with 148 basis functions for the H2O dimer system. After the first RT-TDHF run, the first d-subshell (1d) of H and the first d-subshell and f-subshell of O (1d1f) are truncated (shown in the bracket). The diffuse functions are then added to the remaining subshells (shown in the bracket), resulting in 4s4p for H and 7s5p3d for O. This is followed by a second RT-TDHF run with truncation and addition of basis functions. With a basis set of 5s4p for H and 8s6p4d for O, we find that ∣λ∣min is smaller than the threshold we set (10−6), thus the newly added subshell with the highest angular momentum is removed, namely, 4p in H and 4d in O. Finally, we obtain a basis set of 5s3p for H and 8s6p3d for O, with 138 basis functions and ∣λ∣min = 3.7 × 10−6. In Supplementary Table 1, we proceed analogously for the def2-QZVPPD basis set.
Table 1 The steps of the CBS scheme for the def2-TZVPPD basis set in the H2O dimer system
RT-TDHF calculations are carried out with these two CBSs, and the resulting spectra are shown in Fig. 3g. Choosing either the def2-TZVPPD or the def2-QZVPPD basis set (blue and red solid lines) can result in differences of absorption peaks with excitation energies larger than 12 eV, while their corresponding CBSs-10−6 (blue and red dashed lines) match up to 15 eV.
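The ∣λ∣min acceptance test in the CBS steps above can be illustrated with a toy set of same-center s-type Gaussians, for which the overlap matrix is analytic. Exponents and the even-tempered ratio below are made up for illustration; only the accept/reject logic mirrors Table 1:

```python
import numpy as np

def smallest_overlap_eigenvalue(exponents):
    """|lambda|min of the overlap matrix of normalized, same-center s-type
    Gaussians exp(-a r^2); analytic 3D overlap (2*sqrt(ai*aj)/(ai+aj))**1.5."""
    a = np.asarray(exponents, dtype=float)
    S = (2.0 * np.sqrt(np.outer(a, a)) / np.add.outer(a, a)) ** 1.5
    return np.min(np.abs(np.linalg.eigvalsh(S)))

eps = 1e-6
base = [10.0, 3.0, 1.0]            # toy even-tempered s-subshell (ratio ~3)
trial = base + [base[-1] / 3.0]    # candidate even-tempered diffuse function

# Accept the new diffuse function only if the basis stays numerically
# independent, i.e. |lambda|min >= eps, as in the CBS scheme.
if smallest_overlap_eigenvalue(trial) >= eps:
    base = trial

print(len(base), smallest_overlap_eigenvalue(base))
```

An exponent nearly duplicating an existing one (e.g. appending 1.0 + 1e-4) drives ∣λ∣min below ϵ and would be rejected, which corresponds to the final step of Table 1 where the newly added subshell is removed again.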
Also, the CBSs-10−6 lead to a red shift of 0.2 eV compared to the two original basis sets, which usually indicates the behavior of a larger basis set according to the observations in this work, while in our case this is achieved with fewer basis functions. In addition, we utilize combined basis sets of the original basis set and its CBS-10−6 (noted as CBS-all) for the same calculation, and we can see that each spectrum (dotted lines) also agrees with the corresponding CBS-10−6 one.
Examples: (S)-methyloxirane and (-)-α-pinene
As further examples, the (S)-methyloxirane and (-)-α-pinene molecules are tested with truncated basis sets. The def2-TZVPP32,38 basis set is adopted as the reference basis set and the B3LYP functional is selected as the exchange-correlation functional for these two systems. xthr = 0.1 and xthr = 0.2 are used as truncation thresholds. Supplementary Tables 2 and 3 give information about the original and truncated basis sets for (S)-methyloxirane and (-)-α-pinene, respectively. The corresponding xDC-xIP maps are shown in Supplementary Figs. 5 and 6. The resulting absorption spectra are shown in Fig. 4a and Supplementary Fig. 8a. The truncated basis sets, both with xthr = 0.1 and xthr = 0.2, provide a good approximation to the absorption spectra compared to the original def2-TZVPP basis set, while using as few as half of the basis functions (in the case of xthr = 0.2). Apart from its usage in RT-TDDFT, the truncation process is found to be robust in LR-TDDFT as well. In the LR-TDDFT calculations, 500 and 2000 roots are solved for the (S)-methyloxirane and (-)-α-pinene systems, respectively. From these results and the results for the H2 dimer and H2O dimer systems, one may find that most truncated basis functions are polarization functions, e.g., p/d-subshells for H and d/f-subshells for C/O, while diffuse functions are usually not removed. This explains why the CBS scheme we propose only considers additional diffuse functions.
Fig. 4: RT-TDDFT calculations of (S)-methyloxirane, ZnPc, and the Ag20 structures. (S)-methyloxirane a electronic absorption spectra and b electronic circular dichroism spectra using original and truncated basis sets of def2-TZVPP (stick spectra below correspond to the LR-TDDFT ones). c Number of deleted basis functions (\(|{\{{\chi }_{\mu }\}}_{{{{{{{{\rm{deleted}}}}}}}}}|\)) and corresponding Jaccard indices of the 6-31G(d,p) basis set for the ZnPc system. Purple, red, green, and blue dashed lines represent different xthr values, and the corresponding colored notations give the deleted basis functions in italic form. Note that the red dashed line with xthr = 0.09 and the blue dashed line with xthr = 0.18 essentially reduce 6-31G(d,p) to the 6-31G(d) and 6-31G basis set, respectively (except for the additional truncation on the Zn atom). d RT-TDDFT/LR-TDDFT electronic absorption spectra using 6-31G(d,p), 6-31G(d), 6-31G, and 6-31G(d,p) CBS-10−6 basis sets for the ZnPc system (stick spectra below correspond to the LR-TDDFT ones). e Ag20 geometry with colored vertex atoms (top), colored edge atoms (middle), and colored face atoms (bottom). f Ag20 LR-TDDFT electronic absorption spectra using GTH-TZV2P, GTH-TZVP, GTH-DZVP, and truncated GTH-TZV2P basis sets. Below are the stick spectra and above are the Gaussian-broadened spectra with FWHM = 0.12 eV. The number after the basis set label indicates the number of basis functions. a.u. arbitrary units.
Furthermore, we use the same basis sets for ECD spectra calculations, considering that the two quantities xDC and xIP do not explicitly depend on the electric dipole operator. The ECD spectra of (S)-methyloxirane and (-)-α-pinene are shown in Fig. 4b and Supplementary Fig. 8b, respectively. Table 2 gives the benchmark of basis sets used for (S)-methyloxirane and (-)-α-pinene. RT-TDDFT calculations of these two systems are carried out using CP2K.
Because the Coulomb and exchange-correlation (XC) terms are evaluated on grids, we do not observe a significant time saving using the proposed truncated basis sets. Nevertheless, computational resources can be reduced by as much as one order of magnitude in LR-TDDFT calculations (Gaussian0939) using truncated basis sets. The corresponding memory usage for the (-)-α-pinene system is also shown in the table. The memory cost of the Coulomb and exchange matrices scales as \({{{{{{{\mathcal{O}}}}}}}}({N}^{4})\) (2-electron integrals, without real-space gridding or density fitting), and one may easily encounter a memory bottleneck with large basis sets (e.g., using def2-TZVPP for (-)-α-pinene, maximal memory set to 200 GB), which, however, can be alleviated with truncated basis sets. In Supplementary Tables 4 and 5, we show the scaling information for (-)-α-pinene using HF/def2-TZVPP and its truncated basis set, and the computational time to calculate the Coulomb and exchange matrices, respectively. To assess the contribution from the HF exchange term, we further show the difference between B3LYP/def2-TZVPP and BLYP/def2-TZVPP in the calculation of the RT-TDDFT spectrum in Supplementary Fig. 7.
Table 2 Benchmark of the original and truncated basis sets of def2-TZVPP used for the (S)-methyloxirane and (-)-α-pinene systems
Example: ZnPc
ZnPc is a popular example for excited-state calculations40,41,42,43,44,45. This example is mainly utilized to demonstrate a step-by-step truncation from 6-31G(d,p) to 6-31G. Here we use the nuclear geometry of ZnPc from a previous study45 with the B3LYP functional and 6-31G(d,p)28,35,36 as the reference basis set. Figure 4c provides information on the deleted basis functions and the Jaccard indices. The numbers close to the dashed lines are xthr values, and the notations on the right side give the details of the deleted basis functions in italic form.
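The memory wall mentioned earlier for the two-electron integral tensor follows from a one-line estimate (naive full N^4 storage in double precision, ignoring Schwarz screening and 8-fold index symmetry; the basis-set sizes below are illustrative, not those of the actual systems):

```python
def eri_memory_gb(n_basis, bytes_per_float=8):
    """Naive memory for storing the full (ij|kl) tensor: N^4 doubles."""
    return n_basis ** 4 * bytes_per_float / 1e9

# Illustrative basis-set sizes:
for n in (300, 450, 600):
    print(f"N = {n:4d}: {eri_memory_gb(n):8.1f} GB")
```

Halving N reduces this cost by a factor of 16, which is why even a moderate truncation relieves a memory limit of the kind quoted above.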
Note that the number of deleted basis functions (\(|{\{{\chi }_{\mu }\}}_{{{{{{{{\rm{deleted}}}}}}}}}|\)) can be higher than the number calculated from the subshell notations on the right. This is because extra basis functions might also be deleted but not the corresponding full subshells, e.g., Zn 1f at xthr = 0.04 corresponds to a \(|{\{{\chi }_{\mu }\}}_{{{{{{{{\rm{deleted}}}}}}}}}|\) value larger than 7 because some other basis functions like H 1pz (but not the full 1p) are deleted. In addition, this planar system, which we place in the x-y plane in the simulation, shows a preference for 1dxy and \(1{d}_{{x}^{2}-{y}^{2}}\) of the C/N elements, and thus the 1dyz, 1dxz, and \(1{d}_{{z}^{2}}\) basis functions are the first to be deleted in the range of xthr = 0.09 ~ 0.18, indicating that the truncation scheme can reveal preferences in the orientation of basis functions (or AOs with different magnetic quantum numbers). It is worth mentioning that truncation with xthr = 0.09 and xthr = 0.18 leads essentially to the 6-31G(d) and 6-31G basis sets, respectively, except for the additional truncation on the Zn atom. Also, it is found that \(|{\{{\chi }_{\mu }\}}_{{{{{{{{\rm{deleted}}}}}}}}}|\) and J(xthr) show a very similar trend. After xthr = 0.18, both lines reach a plateau where few further basis functions can be removed, indicating that 6-31G is a good truncated basis set. Indeed, we can see this from the xDC-xIP map of the same system (see Supplementary Fig. 9), in which the truncated and remaining basis functions almost form two blocks with xthr = 0.18 (dashed blue line). The corresponding RT-TDDFT and LR-TDDFT (1000 roots) spectra are given in Fig. 4d. It is clear that the 6-31G(d,p), 6-31G(d), and 6-31G basis sets all provide similar results, which matches our truncation suggestions. This shows a practical usage of our truncation scheme for the selection of basis sets. In addition, the CBS-10−6 with 6-31G(d,p) reference is constructed (see Supplementary Table 6 for the CBS process).
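The gap between \(|{\{{\chi }_{\mu }\}}_{{{{{{{{\rm{deleted}}}}}}}}}|\) and the subshell notation noted above comes from counting whole subshells versus individual functions. A small bookkeeping helper makes this explicit (spherical-harmonic sizes 2l + 1 assumed; hypothetical helper, not part of the scheme itself):

```python
# Number of spherical basis functions per subshell: 2l + 1.
SUBSHELL_SIZE = {"s": 1, "p": 3, "d": 5, "f": 7}

def count_functions(subshells):
    """Count individual basis functions from a subshell list,
    e.g. one f-subshell contributes 7 functions."""
    return sum(int(s[:-1]) * SUBSHELL_SIZE[s[-1]] for s in subshells)

print(count_functions(["1f"]))              # -> 7
print(count_functions(["3s", "3p", "1d"]))  # -> 17
```

Deleting single components such as H 1pz (one of the three 1p functions) raises the per-function count above what the full-subshell notation alone would suggest.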
Nevertheless, it changes little in the RT-TDDFT/LR-TDDFT spectra compared to the original 6-31G(d,p) basis set.
Example: Ag20
Ag20 is a metal cluster with a tetrahedral structure (Td symmetry), which has been investigated with TDDFT calculations46,47,48. Here we use the nuclear geometry of Ag20 from a previous study48 together with the PBE049 functional and the GTH50,51 Gaussian-type pseudopotential basis sets52 GTH-DZVP, GTH-TZVP, and GTH-TZV2P. GTH-TZV2P is used as the reference basis set for the truncation process. The Ag atoms in the Ag20 cluster are categorized into 3 groups: vertex (v), edge (e), and face (f) (see Fig. 4e). The atoms in the same group are equivalent in space and should have the same contribution to the electronic absorption spectrum. Table 3 shows the truncated basis functions versus increasing xthr values. As one can see from the table, atoms in different groups generally have different suggested basis set truncations. More basis functions are truncated for atoms at vertex positions, and fewer for atoms at face positions. This is reasonable because vertex Ag atoms have only a limited solid angle "bonded" with other atoms, while face Ag atoms have half of their surrounding space occupied by 9 nearest neighbors, and complex surroundings often require more basis functions to describe the interactions. For comparison, the basis set information of GTH-TZVP and GTH-DZVP is also listed. The truncation scheme provides quite different basis sets from the GTH-TZVP and GTH-DZVP basis sets, e.g., GTH-TZVP can be regarded as a −2f truncated version of GTH-TZV2P, but the 2f basis functions are the last choice of truncation from our scheme (up to xthr = 0.6). This means that the corresponding standardly available smaller basis sets, e.g., GTH-DZVP or GTH-TZVP, do not always contain the most important basis functions (for TDDFT calculations) of the larger ones, e.g., GTH-TZV2P, which is different from what we have found for the ZnPc example system.
We select the xthr = 0.3 and xthr = 0.5 truncated basis sets for LR-TDDFT calculations, considering that their numbers of basis functions are close to those of the GTH-TZVP and GTH-DZVP basis sets, respectively.
Table 3 The truncation process of the TZV2P basis set in the Ag20 system
The LR-TDDFT spectra (2000 roots) of the Ag20 cluster using 5 different basis sets are shown in Fig. 4f. In general, "GTH-TZV2P trunc 684" (xthr = 0.3) gives better agreement with the reference GTH-TZV2P basis set than the GTH-TZVP basis set does. This can be seen as follows: 1. for the first several peaks (at ~2.6 eV, 2.9 eV, and 3.4 eV), the red dashed peaks are all located closer to the black peaks than the yellow dashed peaks; 2. for the peaks up to 7 eV, the red dashed line follows the black line more closely than the yellow dashed line. However, the difference between the spectra calculated using the "GTH-TZV2P trunc 512" (xthr = 0.5) and GTH-DZVP basis sets is limited, which can be explained by their similar composition in terms of basis functions shown in Table 3. The Ag20 example demonstrates that the proposed truncation scheme is able to assign different basis sets to the atoms according to their "interaction" with the full system. Additionally, we provide some test calculations to demonstrate that: 1. 1% of the total propagation time is sufficient to show the contribution from each basis function, and the same truncation suggestion has been obtained using 1%, 10%, and 100% of the RTP steps (see Supplementary Fig. 10); 2. the indicator xIP is necessary in the truncation scheme, and truncation using only the indicator xDC can lead to a different spectrum (see Supplementary Fig. 11); 3. ERI Schwarz screening does not affect the truncation scheme, and they can be used together for acceleration of RT-TDHF/TDDFT calculations (see Supplementary Figs. 12-13 and Supplementary Tables 7-9).
We have introduced an AO basis set truncation scheme for TDDFT calculations, based on the analysis of a short period of real-time propagation of the MO coefficients. Two quantities, the density matrix contribution and the importance for propagation stability, are constructed as indicators for the truncation process. The truncated basis sets are found to reproduce well the electronic absorption spectra obtained with the original basis sets. In some cases, truncated basis sets can serve as intermediate basis sets between two levels of available basis sets, or are found to be very close to available lower-level basis sets, in which case the truncation process works as an aid for basis set selection. Two intuitive graphs, the xDC-xIP map and the Jaccard index, are introduced for the analysis of basis functions. These graphs also provide a guide for the choice of the truncation threshold xthr (e.g., see the diagrams in Fig. 5).
Fig. 5: Diagrams of four different types of xDC-xIP maps. Red/gray denote basis functions that can/cannot be removed. From left to right: a high correlation between xDC and xIP, basis set can be truncated at any accuracy (e.g., H2 dimer 6-31G**, pulse z direction); b high correlation between xDC and xIP up to a certain s (where s is the boundary between the high- and low-correlation regions), basis set can be truncated at an accuracy with xthr ≤ s (e.g., H2 dimer 6-31++G**, pulse z direction); c two blocks with low-correlation regions, basis set is recommended to be truncated at the connection between the two blocks (e.g., all other test examples), which is also the most common case; d fully low-correlation region, basis set can hardly be truncated (e.g., H2 dimer 6-31G/6-31++G, pulse z direction).
As opposed to basis sets constructed mainly for the purpose of energy minimization and geometry optimization, the proposed truncation scheme provides a task-, system-, and chemical-environment-specific basis set.
It has a reduced number of basis functions and accelerates calculations involving the iterative construction of Coulomb and/or exchange matrices in every propagation step, potentially with \({{{{{{{\mathcal{O}}}}}}}}({N}^{4})\) scaling. ERIs usually benefit from evaluating all components of the same shell; however, they are only computed once before the propagation. Because the truncation process is carried out on the original AO basis set without any rotation or reconstruction of basis functions, the truncated basis sets can be easily employed in any quantum chemistry package using Gaussian-type (or Slater-type) basis sets, with a simple modification of the basis set file. Additionally, we have tested recursive truncation to show that the process is robust and results in a "truncation consistent" basis set for a given xthr. Though the truncation is based on the analysis of real-time propagation, the basis sets produced can also be used for LR-TDDFT calculations and provide equally good spectra. Nevertheless, the acceleration of LR-TDDFT calculations depends on the systems and purposes of the research, e.g., a limited number of excitations in LR-TDDFT for a small system may not be worth an additional RT-TDDFT calculation to determine a truncated basis set, while a highly conjugated or a large system with excitations of higher energies should benefit from the truncation scheme. How the truncation scheme and the truncated basis sets might be transferable to more accurate yet expensive methods like the GW/Bethe-Salpeter equation53,54,55, time-dependent coupled cluster/configuration interaction56,57,58, or other types of excited-state calculations might be explored in the future. Furthermore, an "Add-While-Truncate" algorithm has been proposed to construct basis sets towards the complete basis set limit. The additional basis functions are added as diffuse functions in an even-tempered manner, and no extra polarization functions are added.
The neglect of polarization functions is primarily based on the truncation experience gained in this study (e.g., for the H2O dimer, (S)-methyloxirane, and (-)-α-pinene systems). There have been discussions about the use of polarization and diffuse functions for electric dipole moment, polarizability, and TDDFT calculations in previous works59,60,61,62,63. Nevertheless, as shown in the test examples, the truncation process provides the possibility to select polarization and diffuse functions quantitatively. The proposed CBS scheme can construct basis sets to arbitrary accuracy, depending on a predefined parameter limited by the linear dependency between basis functions. The truncation scheme might reveal some intrinsic knowledge for a better description of electronic excitations between the ground state and excited states, and offer guidance for the design of basis sets for TDDFT calculations. Future work can address both basis set construction and migration to other excited-state calculations or properties. In this work, all original basis sets employed are ground-state energy-optimized; however, there is another group of completeness-optimized basis sets64,65,66, with which one may also test the efficiency and validate the accuracy towards the CBS limit65. Auxiliary density matrix methods67 provide an alternative way to accelerate the HF exchange calculation via an auxiliary basis set, and have been found to yield highly accurate results for energies and response properties63. Considering the computationally demanding HF exchange calculation employed in hybrid functionals, it is possible to further assess the truncation scheme for auxiliary basis sets. In addition, the idea of decomposing the electric dipole contributions into contributions from individual basis functions can be transferred to other properties to produce different task-specific basis sets.
One may be interested in a truncated basis for dynamics calculations, which, however, might require further investigation of the consistency of the truncation for each nuclear configuration. Apart from basis set truncation, a direct basis set optimization algorithm (e.g., acting on the exponents of Gaussian-type basis functions) is also possible given a proper loss function based on the xDC and xIP parameters. While we have only tested the truncation process on neutral molecules in this study, charged systems could also be investigated. This would be an interesting topic since it may demonstrate the dependence of the necessary basis functions on different charges for excited-state calculations (e.g., the effect of diffuse functions on anions, which is well known for ground-state cases). In summary, our basis set truncation scheme provides a robust process for decreasing the number of basis functions and speeding up TDDFT calculations, while preserving the high accuracy of the spectra. The quantitative basis set analysis allows a profound understanding of the basis functions employed and opens up a broad area for potential research in excited-state calculations. The systems H2 dimer, H2O dimer, (S)-methyloxirane, (-)-α-pinene, zinc phthalocyanine (ZnPc), and Ag20 have been investigated. Information about the applied computational methods, basis sets, and codes is listed in Table 4. For the H2 dimer and the H2O dimer, we utilize an in-house version of the PySCF68,69 RT-TDHF module70 to test the truncation and CBS schemes. Calculations are carried out with the 6-31G series28,35,36 with/without additional polarization/diffuse functions, and the def2-TZVP series32,38 with/without additional polarization/diffuse functions. No Schwarz screening is used for the RT-TDHF calculations of the H2 dimer and H2O dimer systems. For (S)-methyloxirane, (-)-α-pinene, and ZnPc, the CP2K (CP2K version 7.0 (Development Version), the CP2K developers group. CP2K is freely available from https://www.cp2k.org/.)
package and the Gaussian0939 package are used for RT-TDDFT and LR-TDDFT (B3LYP71) calculations, respectively. For Ag20, the Goedecker-Teter-Hutter (GTH) pseudopotential50,51 with the corresponding Gaussian-type pseudopotential basis sets52 GTH-DZVP, GTH-TZVP, and GTH-TZV2P, and the PBE049 hybrid functional are employed for time-dependent density functional perturbation theory (TDDFPT; up to first order in the perturbation, we use the term LR-TDDFT in this work) calculations using the CP2K (CP2K version 7.0 (Development Version), the CP2K developers group. CP2K is freely available from https://www.cp2k.org/.) package. A Schwarz screening threshold of 10−10 (default in CP2K) is used in the RT-TDDFT calculations of the (S)-methyloxirane, (-)-α-pinene, ZnPc, and Ag20 systems. All basis set files used in this work are from the Basis Set Exchange72, visualization of molecular structures and orbitals uses the Avogadro73 software, and graphs are generated with Matplotlib74.
Table 4 Systems and corresponding computational details
A δ-pulse is chosen as the electric field perturbation to excite the molecules in the RT-TDHF/TDDFT calculations. The δ-pulse can be thought of as being applied instantly to the converged ground-state MOs \(|{\psi }^{0}\rangle\) between times t = 0− and t = 0+. It corresponds to an impulse75 $$\left|{\psi }^{\delta }(\overrightarrow{r},\, t={0}^{+})\right\rangle={e}^{-\frac{i}{\hslash }\overrightarrow{\kappa }\overrightarrow{r}}\left|{\psi }^{0}(\overrightarrow{r},\, t={0}^{-})\right\rangle,$$ where ℏ is the reduced Planck constant. The vector \(\overrightarrow{\kappa }\) indicates the direction and amplitude of the perturbation. The propagation is then started from the perturbed MOs \(|{\psi }_{i}^{\delta }\rangle\). The data generated in this study have been deposited at https://gitlab.uzh.ch/lubergroup/ao-truncation. Source data are provided in this paper.
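The impulse above amounts to multiplying the ground-state orbitals by a position-dependent phase. A one-dimensional grid sketch (toy Gaussian orbital, ℏ = 1, all numbers illustrative) shows that the kick leaves the norm and density untouched while shifting the momentum expectation value by −κ:

```python
import numpy as np

# 1D grid and a normalized Gaussian "ground-state" orbital, a toy
# stand-in for psi^0 (hbar = 1).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi0 = np.exp(-0.5 * x ** 2)
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

kappa = 0.05                                  # strength of the delta-pulse
psi_delta = np.exp(-1j * kappa * x) * psi0    # instantaneous phase kick at t = 0

# Norm (and density) are preserved; the momentum is shifted by -kappa.
norm = np.sum(np.abs(psi_delta) ** 2) * dx
p_mean = np.real(np.sum(np.conj(psi_delta) * (-1j) * np.gradient(psi_delta, dx)) * dx)
print(round(norm, 6), round(p_mean, 6))
```

In the actual AO-basis calculations the same phase factor acts on the MO coefficients through its matrix elements in the basis, but the physical content of the kick is the same.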
How to fit the parameters of differential equations with known data? I have the following data from chemical kinetics research, from which I want to fit the parameters of a system of ordinary differential equations: $$ \left[ \begin{array}{ccccccc} \text{No.}& t & y_1(t)&y_2(t) & y_3(t) & y_4(t) & y_5(t)\\ 1&30.0000 & 9.1300 & 0.0931 & 0.0899 & 0.1000 & 0.0000 \\ 2&60.0000 & 8.9300 & 0.1270 & 0.1230 & 0.2270 & 0.0049 \\ 3&90.0000 & 8.6000 & 0.1510 & 0.1390 & 0.4920 & 0.0153 \\ 4&120.0000 & 8.2800 & 0.1540 & 0.1490 & 0.7780 & 0.0249 \\ 5&150.0000 & 7.9700 & 0.1540 & 0.1570 & 1.0700 & 0.0329 \\ 6&180.0000 & 7.8600 & 0.1540 & 0.1600 & 1.1700 & 0.0348 \\ 7&210.0000 & 7.8100 & 0.1530 & 0.1530 & 1.2100 & 0.0404 \\ 8&240.0000 & 7.7700 & 0.1400 & 0.1420 & 1.2800 & 0.0432 \\ \end{array} \right] $$ The ordinary differential equations to fit have parameters $k_1,k_2,k_3,k_4,k_5,k_6$ to be determined. $$ \left\{ \begin{array}{l} {y_1}'(t)=-{k_1} {y_1}(t)-{k_2} {y_1}(t),\\ {y_2}'(t)={k_2} {y_1}(t)-{k_3} {y_2}(t),\\ {y_3}'(t)={k_1} {y_1}(t)+{k_3} {y_2}(t)-{k_4} {y_3}(t),\\ {y_4}'(t)={k_4} {y_3}(t)-{k_5} {y_2}(t) {y_4}(t)+{k_6} {y_5}(t),\\ {y_5}'(t)={k_5} {y_2}(t) {y_4}(t)-{k_6} {y_5}(t)\\ \end{array} \right.$$ In order to solve it with conventional numerical optimization methods, my original thought was to first convert it into a least-squares problem and then apply numerical optimization to it, but this requires first solving the nonlinear system of ordinary differential equations symbolically to obtain explicit solutions, which seems difficult. (1) Is it possible to determine the global (least-squares or similarly converted) solution for all the parameters $k_i$ $(i=1,\cdots, 6)$? (2) Is the solution unique? (3) Are there general approaches to solve such problems (globally if possible)?
Update: Further data from replicates: $$ \begin{array}{ccccccc} \text{No}&t& y_1(t) &y_2(t) &y_3(t) & y_4(t) &y_5(t)\\ 9&30.0000 & 9.0400 & 0.1190 & 0.1040 & 0.1390 & 0.0044 \\ 10&60.0000 & 8.8000 & 0.1640 & 0.1120 & 0.3210 & 0.0097 \\ 11&90.0000 & 8.5300 & 0.1640 & 0.1140 & 0.5630 & 0.0219 \\ 12&120.0000 & 8.1800 & 0.1600 & 0.1250 & 0.8730 & 0.0369 \\ 13&150.0000 & 7.9700 & 0.1550 & 0.1380 & 1.0600 & 0.0459 \\ 14&180.0000 & 7.7900 & 0.1580 & 0.1510 & 1.2000 & 0.0545 \\ 15&210.0000 & 7.4900 & 0.1480 & 0.1430 & 1.5000 & 0.0636 \\ 16&240.0000 & 7.0800 & 0.1390 & 0.1380 & 1.9100 & 0.0756 \\ \end{array} $$ I am still seeking a working approach to obtaining solutions to the problem. differential-equations global-optimization nonlinear-optimization asked Aug 9, 2014 at 23:06 by LCFactorization
Isn't there a typo in the equations for ${y_4}'$ and ${y_5}'$? As it is stated, they aren't linear. – Tadashi
Is there a typo in your equation for $y_1'$? $y_1$ appears twice on the right-hand side. – Paul Tupper
This time it is not a typo: $k_1, k_2$ are different dynamic parameters. Thank you. – LCFactorization
This kind of problem is known in the literature as nonlinear state-space system identification. Several algorithms have been proposed in the literature to solve these problems. I think a good starting point would be (1) and the references therein, in particular the works of L. Ljung. As far as I know, in general if you don't have a good initial estimate of the parameters $k_i$, there is no guarantee that the algorithm will converge to a global solution. (1) Schön, Thomas B., Adrian Wills, and Brett Ninness. "System identification of nonlinear state-space models." Automatica 47.1 (2011): 39-49. P.S. Many of these identification algorithms have been implemented in MATLAB (maybe there is an equivalent implementation in Octave).
See for instance this freely available third party toolbox and this example from the system identification toolbox of MATLAB. Update (08/23/2014): Following this example and using the first data set I found the following solution: $$\begin{array}{ccccc} k_1 &=& 1.100350 &\pm& 8.3610 \\ k_2 &=& 2.588940 &\pm& 9.6769 \\ k_3 &=& 0.400037 &\pm& 14.8607 \\ k_4 &=& 0.338143 &\pm& 7.8111 \\ k_5 &=& 3.757100 &\pm& 168.1370 \\ k_6 &=& 0.915842 &\pm& 11.7712 \end{array}$$ The algorithm used was the trust-region reflective Newton method for nonlinear least squares, where the cost is the sum of squares of errors between the measured and simulated outputs. Notice that the high standard deviations must be due to the small number of samples. – Tadashi
It seems I will need more time to learn it. Can you please help obtain one solution to the above problem and indicate the method used? Thank you.
@LCFactorization, sorry for the delay in answering. I updated the answer with one solution (which isn't so good, but I think it'd need a higher sample rate to get a better model).
Thank you very much! There is more data here: ilovematlab.cn/thread-151440-1-1.html (simplified Chinese version; the original question was raised by a Chinese student), together with a solution by 1stOpt. I am only curious whether there is already a systematic approach to solve such problems. Thank you.
@Shamisen When you mention references 30 and 31 in your answer, which references are these? In reference [1] the bibliography is not enumerated. Please advise. – JuanMuñoz
Hi @JuanMuñoz, thanks for noticing it. When I put these numbers I was reading the pre-print version of the paper: user.it.uu.se/~thosc112/pubpdf/schonwn2011-2.pdf
If you have access to Maple and the Global Optimization Toolbox, you could try something like this.
data := [[0.0000, 9.1300, 0.0931, 0.0899, 0.1000, 0.0000],
         [30.0000, 8.9300, 0.1270, 0.1230, 0.2270, 0.0049],
         [60.0000, 8.6000, 0.1510, 0.1390, 0.4920, 0.0153],
         [90.0000, 8.2800, 0.1540, 0.1490, 0.7780, 0.0249],
         [120.0000, 7.9700, 0.1540, 0.1570, 1.0700, 0.0329],
         [150.0000, 7.8600, 0.1540, 0.1600, 1.1700, 0.0348],
         [180.0000, 7.8100, 0.1530, 0.1530, 1.2100, 0.0404],
         [210.0000, 7.7700, 0.1400, 0.1420, 1.2800, 0.0432]]:
des := [diff(y1(t),t) = -k1*y1(t) - k2*y1(t),
        diff(y2(t),t) = k2*y1(t) - k3*y2(t),
        diff(y3(t),t) = k1*y1(t) + k3*y2(t) - k4*y3(t),
        diff(y4(t),t) = k4*y3(t) - k5*y2(t)*y4(t) + k6*y5(t),
        diff(y5(t),t) = k5*y2(t)*y4(t) - k6*y5(t)]:
ics := seq((y || i)(0) = data[1, i+1], i = 1 .. 5):
res := dsolve({ics, des[]}, numeric, parameters = [k1, k2, k3, k4, k5, k6]):
timeList := [0, 30, 60, 90, 120, 150, 180, 210];
sse := proc (k1, k2, k3, k4, k5, k6)
  res(parameters = [k1, k2, k3, k4, k5, k6]);
  add( (rhs(select(has, res(timeList[i]), y1)[]) - data[i, 2])^2
     + (rhs(select(has, res(timeList[i]), y5)[]) - data[i, 6])^2, i = 2 .. 8);
end proc;
c := GlobalOptimization:-GlobalSolve('sse'(k1, k2, k3, k4, k5, k6),
       k1 = 0 .. 1, k2 = 0 .. 1, k3 = 0 .. 1, k4 = 0 .. 1, k5 = 0 .. 1, k6 = 0 .. 1,
       timelimit = 10);
[0.219132447080011505, [k1 = 8.52482740113834e-4, k2 = 5.2683998680924474e-5, k3 = 0.0, k4 = 0.05113239298267808, k5 = 0.004363021255887466, k6 = 0.0]]
res(parameters = c[2]):
p := Array(1 .. 5):
for n from 1 to 5 do
  p[n] := plots:-display(plots:-odeplot(res, [t, (y || n)(t)], t = 0 .. 210),
                         plots:-pointplot([seq([data[i, 1], data[i, n+1]], i = 1 .. 8)]));
end do;
plots:-display(p)
The code is a parameterized numeric solution of the differential equations (I shifted all the data down by 30s, so I could get the ICs at t=0) followed by a global optimization of the least-squares problem. I haven't tested the code for correctness, and if you let the optimizer run for longer or change the optimization bounds, you may get better results. – Samir
Thank you! I have no access to the global solver you mentioned. It was said that another optimization software, 1stOpt (7d-soft.com), supports such fitting from its version 4.0 (latest 6.0). But I am interested in the mathematical principles, not just the answer.
Are the differential equations correct? Can you give me the actual stoichiometric reaction equations? – Samir
I saw the question with 1stOpt answers at this link: ilovematlab.cn/thread-151440-1-1.html; it seems like chemical reactions and side reactions combined together.
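For completeness, the simulate-and-compare cost that both answers minimize can be sketched in plain Python. This is my own sketch, not code from either answer: it uses a fixed-step classical Runge-Kutta integrator and a sum-of-squares cost over all five components; the function names and step count are arbitrary choices. Any off-the-shelf optimizer (for instance scipy.optimize.least_squares) could then be run on `sse`.

```python
def rhs(y, k):
    """Right-hand side of the kinetic ODE system from the question."""
    k1, k2, k3, k4, k5, k6 = k
    y1, y2, y3, y4, y5 = y
    return (
        -(k1 + k2) * y1,
        k2 * y1 - k3 * y2,
        k1 * y1 + k3 * y2 - k4 * y3,
        k4 * y3 - k5 * y2 * y4 + k6 * y5,
        k5 * y2 * y4 - k6 * y5,
    )

def rk4(y, k, t0, t1, steps=300):
    """Integrate y' = rhs(y, k) from t0 to t1 with classical Runge-Kutta."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        a = rhs(y, k)
        b = rhs([yi + 0.5 * h * ai for yi, ai in zip(y, a)], k)
        c = rhs([yi + 0.5 * h * bi for yi, bi in zip(y, b)], k)
        d = rhs([yi + h * ci for yi, ci in zip(y, c)], k)
        y = [yi + h / 6.0 * (ai + 2 * bi + 2 * ci + di)
             for yi, ai, bi, ci, di in zip(y, a, b, c, d)]
    return y

def sse(k, data):
    """Sum of squared residuals; data rows are (t, y1, ..., y5).

    The first row supplies the initial condition, as in the Maple answer,
    which shifts time so that the first measurement is the starting point.
    """
    y = list(data[0][1:])
    cost, t = 0.0, data[0][0]
    for row in data[1:]:
        y = rk4(y, k, t, row[0])
        t = row[0]
        cost += sum((yi - mi) ** 2 for yi, mi in zip(y, row[1:]))
    return cost
```

A useful sanity check: the five right-hand sides sum to zero, so the total `sum(y)` is conserved along any trajectory, which lets you validate the integrator before attempting any fitting.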
\begin{document} \title{Deterministic Combinatorial Replacement Paths and Distance Sensitivity Oracles} \begin{abstract} In this work we derandomize two central results in graph algorithms, replacement paths and distance sensitivity oracles (DSOs) matching in both cases the running time of the randomized algorithms. For the replacement paths problem, let $G = (V,E)$ be a directed unweighted graph with $n$ vertices and $m$ edges and let $P$ be a shortest path from $s$ to $t$ in $G$. The {\sl replacement paths} problem is to find for every edge $e \in P$ the shortest path from $s$ to $t$ avoiding $e$. Roditty and Zwick [ICALP 2005] obtained a randomized algorithm with running time of $\widetilde{O}(m \sqrt{n})$. Here we provide the first deterministic algorithm for this problem, with the same $\widetilde{O}(m \sqrt{n})$ time. Due to matching conditional lower bounds of Williams {\sl et. al.} [FOCS 2010], our deterministic combinatorial algorithm for the replacement paths problem is optimal up to polylogarithmic factors (unless the long standing bound of $\widetilde{O}(mn)$ for the combinatorial boolean matrix multiplication can be improved). This also implies a deterministic algorithm for the second simple shortest path problem in $\widetilde{O}(m \sqrt{n})$ time, and a deterministic algorithm for the $k$-simple shortest paths problem in $\widetilde{O}(k m \sqrt{n})$ time (for any integer constant $k > 0$). For the problem of distance sensitivity oracles, let $G = (V,E)$ be a directed graph with real-edge weights. 
An $f$-Sensitivity Distance Oracle ($f$-DSO) gets as input the graph $G=(V,E)$ and a parameter $f$, preprocesses it into a data-structure, such that given a query $(s,t,F)$ with $s,t \in V$ and $F \subseteq E \cup V, |F| \le f$ being a set of at most $f$ edges or vertices (failures), the query algorithm efficiently computes the distance from $s$ to $t$ in the graph $G \setminus F$ ({\sl i.e.}, the distance from $s$ to $t$ in the graph $G$ after removing from it the failing edges and vertices $F$). For weighted graphs with real edge weights, Weimann and Yuster [FOCS 2010] presented several randomized $f$-DSOs. In particular, they presented a combinatorial $f$-DSO with $\widetilde{O}(mn^{4-\alpha})$ preprocessing time and subquadratic $\widetilde{O}(n^{2-2(1-\alpha)/f})$ query time, giving a tradeoff between preprocessing and query time for every value of $0 < \alpha < 1$. We derandomize this result and present a combinatorial deterministic $f$-DSO with the same asymptotic preprocessing and query time. \end{abstract} \section{Introduction} In many algorithms used in computing environments such as massive storage devices, large scale parallel computation, and communication networks, recovering from failures must be an integral part. Therefore, designing algorithms and data structures whose running time is efficient even in the presence of failures is an important task. In this paper we study variants of shortest path queries in setting with failures. The computation of shortest paths and distances in the presence of failures was extensively studied. Two central problems researched in this field are the Replacement Paths problem and Distance Sensitivity Oracles, we define these problems hereinafter. {\bf The Replacement Paths problem} (See, {\sl e.g.}, \cite{Roditty2005, WilliamsRP11, GoLe09,EmPeRo10, Klein10, WY13, Bernstein10, Yen71, Law72, LeeL14, MaMiGu89, NaPrWi01, WilliamsW10, Epp98}). 
Let $G=(V,E)$ be a graph (directed or undirected, weighted or unweighted) with $n$ vertices and $m$ edges and let $P_G(s,t)$ be a shortest path from $s$ to $t$. For every edge $e \in P_G(s,t)$ a replacement path $P_G(s,t,e)$ is a shortest path from $s$ to $t$ in the graph $G \setminus \{e\}$ (which is the graph $G$ after removing the edge $e$). Let $d_G(s,t,e)$ be the length of the path $P_G(s,t,e)$. The replacement paths problem is as follows: given a shortest path $P_G(s,t)$ from $s$ to $t$ in $G$, compute $d_G(s,t,e)$ (or an approximation of it) for every $e \in P_G(s,t)$. {\bf Distance Sensitivity Oracles} (See, {\sl e.g.}, \cite{ChCoFiKa17, GW12, BeKa08, BK09, CLPR10, DT02, DeThChRa08, DP09, KB10}). An $f$-Sensitivity Distance Oracle ($f$-DSO) gets as input a graph $G=(V,E)$ and a parameter $f$, preprocesses it into a data-structure, such that given a query $(s,t,F)$ with $s,t \in V$ and $F \subseteq E \cup V, |F| \le f$ being a set of at most $f$ edges or vertices (failures), the query algorithm efficiently computes (exactly or approximately) $d_G(s,t,F)$ which is the distance from $s$ to $t$ in the graph $G \setminus F$ ({\sl i.e.}, in the graph $G$ after removing from it the failing edges and vertices $F$). Here we would like to optimize several parameters of the data-structure: minimize the size of the oracle, support many failures $f$, have efficient preprocessing and query algorithms, and if the output is an approximation of the distance then optimize the approximation-ratio. An important line of research in the theory of computer science is derandomization. In many algorithms and data-structures there exists a gap between the best known randomized algorithms and the best known deterministic algorithms. There has been extensive research on closing the gaps between the best known randomized and deterministic algorithms in many problems, or proving that no deterministic algorithm can perform as well as its randomized counterpart.
There also has been a long line of work on developing derandomization techniques, in order to obtain deterministic versions of randomized algorithms ({\sl e.g.}, Chapter 16 in \cite{alon2011probabilistic}). In this paper we derandomize algorithms and data-structures for computing distances and shortest paths in the presence of failures. Many randomized algorithms for computing shortest paths and distances use variants of the following sampling lemma (see Lemma 1 in Roditty and Zwick \cite{Roditty2005}). \begin{lemma} [Lemma 1 in \cite{Roditty2005}] \label{lem:sampling-roditty} Let $D_1, D_2, \ldots, D_q \subseteq V$ satisfy $|D_i| > L$ for $1 \le i \le q$ and $|V|=n$. If $R \subseteq V$ is a random subset obtained by selecting each vertex, independently, with probability $(c \ln n)/L$, for some $c>0$, then with probability at least $1 - q \cdot n^{-c}$ we have $D_i \cap R \ne \emptyset$ for every $1 \le i \le q$. \end{lemma} Our derandomization step of Lemma \ref{lem:sampling-roditty} is very simple: as described in Section \ref{sec:framework}, we use the folklore greedy approach to prove the following lemma, which is a deterministic version of Lemma \ref{lem:sampling-roditty}. \begin{lemma} \label{lemma:greedy} [See also Section \ref{sec:framework}] Let $D_1, D_2, \ldots, D_q \subseteq V$ satisfy $|D_i| > L$ for $1 \le i \le q$ and $|V|=n$. One can deterministically find in $\widetilde{O}(qL)$ time a set $R \subset V$ such that $|R| = \widetilde{O}(n/L)$ and $D_i \cap R \ne \emptyset$ for every $1 \le i \le q$.
\end{lemma} We emphasize that the use of Lemma \ref{lemma:greedy} is very standard and is not our main contribution. The main technical challenge is how to efficiently and deterministically compute a small number of sets $D_1, D_2, \ldots, D_q \subseteq V$ so that the invocation of Lemma \ref{lemma:greedy} is fast. \subsection{Derandomizing the Replacement Paths Algorithm of Roditty and Zwick \cite{Roditty2005}} We derandomize the algorithm of Roditty and Zwick \cite{Roditty2005} and obtain a near optimal deterministic algorithm for the replacement paths problem in directed unweighted graphs (a problem which was open for more than a decade since the randomized algorithm was published), as stated in the following theorem. \begin{theorem} \label{thm:replacement} There exists a deterministic algorithm for the replacement paths problem in unweighted directed graphs whose runtime is $\widetilde{O}(m\sqrt{n})$. This algorithm is near optimal assuming the conditional lower bound of combinatorial boolean matrix multiplication of \cite{WilliamsW10}. \end{theorem} The term ``combinatorial algorithms'' is not well-defined, and it is often interpreted as non-Strassen-like algorithms \cite{BaDeHoSc12}, or more intuitively, algorithms that do not use any matrix multiplication tricks. Arguably, in practice, combinatorial algorithms are to some extent considered more efficient since the constants hidden in the matrix multiplication bounds are high. On the other hand, there has been research done to make fast matrix multiplication practical, {\sl e.g.}, \cite{HuRiMaGe17, AuGr15}. Vassilevska Williams and Williams \cite{WilliamsW10} proved a subcubic equivalence between $\sqrt{n}$ occurrences of the combinatorial replacement paths problem in unweighted directed graphs and the combinatorial boolean matrix multiplication (BMM) problem.
More precisely, they proved that there exists some fixed $\epsilon >0$ such that the combinatorial replacement paths problem can be solved in $O(mn^{1/2-\epsilon})$ time if and only if there exists some fixed $\delta > 0$ such that the combinatorial boolean matrix multiplication (BMM) can be solved in subcubic $O(n^{3-\delta})$ time. Giving a subcubic combinatorial algorithm for the BMM problem, or proving that no such algorithm exists, is a long-standing open problem. This implies that either both problems can be polynomially improved, or neither of them can. Hence, assuming the conditional lower bound of combinatorial BMM, our combinatorial $\widetilde{O}(m \sqrt{n})$ algorithm for the replacement paths problem in unweighted directed graphs is essentially optimal (up to $n^{o(1)}$ factors). The replacement paths problem is related to the $k$ simple shortest paths problem, where the goal is to find the $k$ simple shortest paths between two vertices. Using known reductions from the replacement paths problem to the $k$ simple shortest paths problem, we close this gap as the following corollary states. \begin{corollary} There exists a deterministic algorithm for computing $k$ simple shortest paths in unweighted directed graphs whose runtime is $\widetilde{O}(k m\sqrt{n})$. \end{corollary} More related work can be found in Section \ref{appendix:more-related-work}. As written in Section \ref{appendix:more-related-work}, the trivial $\widetilde{O}(mn)$ time algorithm for solving the replacement paths problem in directed weighted graphs (simply, for every edge $e \in P_G(s,t)$ run Dijkstra in the graph $G\setminus \{e\}$) is deterministic and near optimal (according to a conditional lower bound by \cite{WilliamsW10}).
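To make the trivial algorithm concrete, the following is a short sketch for the unweighted directed case (our own illustration, not code from any of the cited works; BFS plays the role of Dijkstra, and the graph representation and function names are arbitrary): for every edge $e$ on the given shortest path, delete $e$ and recompute the $s$-$t$ distance from scratch.

```python
from collections import deque

def bfs_dist(adj, s, t, banned=None):
    """Distance from s to t in a directed graph given as adjacency lists,
    ignoring the single edge `banned` (a (u, v) pair) if provided."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj.get(u, ()):
            if (u, v) == banned or v in dist:
                continue
            dist[v] = dist[u] + 1
            q.append(v)
    return float('inf')  # t unreachable after the edge removal

def trivial_replacement_paths(adj, path):
    """For each edge e on the s-t shortest path `path`, return d(s, t, e).
    One BFS per path edge, i.e. O(mn) overall -- the trivial bound."""
    s, t = path[0], path[-1]
    return {(u, v): bfs_dist(adj, s, t, banned=(u, v))
            for u, v in zip(path, path[1:])}
```

The point of the faster algorithms discussed in this paper is precisely to avoid these $O(n)$ independent shortest-path computations.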
To the best of our knowledge, the only deterministic combinatorial algorithms known for directed unweighted graphs are the algorithms for general directed weighted graphs whose runtime is $\widetilde{O}(mn)$, leaving a significant gap between the randomized and deterministic algorithms. As mentioned above, in this paper we derandomize the $\widetilde{O}(m \sqrt{n})$ algorithm of Roditty and Zwick \cite{Roditty2005} and close this gap. \subsection{Derandomizing the Combinatorial Distance Sensitivity Oracle of Weimann and Yuster \cite{WY13}} Our second result is derandomizing the combinatorial distance sensitivity oracle of Weimann and Yuster \cite{WY13} and obtaining the following theorem. \begin{theorem} \label{thm:dso} Let $G=(V,E)$ be a directed graph with real edge weights, let $|V|=n$ and $|E|=m$. There exists a deterministic algorithm that given $G$ and parameters $f = O(\frac{\log n}{\log \log n})$ and $0 < \alpha < 1$ constructs an $f$-sensitivity distance oracle in $\widetilde{O}(mn^{4-\alpha})$ time. Given a query $(s,t,F)$ with $s,t \in V$ and $F \subseteq E \cup V, |F| \le f$ being a set of at most $f$ edges or vertices (failures), the deterministic query algorithm computes in $\widetilde{O}(n^{2-2(1-\alpha)/f})$ time the distance from $s$ to $t$ in the graph $G \setminus F$. \end{theorem} We remark that while our focus in this paper is on computing distances, one may obtain the actual shortest path in time proportional to the number of edges of the shortest paths, using the same algorithm for obtaining the shortest paths in the replacement paths problem \cite{Roditty2005}, and in the distance sensitivity oracles case \cite{WY13}. \subsection{Technical Contribution and Our Derandomization Framework} \label{sec:framework} Let ${\cal A}$ be a randomized algorithm that uses Lemma \ref{lem:sampling-roditty} for sampling a subset of vertices $R \subseteq V$.
We say that ${\cal P} = \{D_1, \ldots, D_q\}$ is a set of {\sl critical} paths for the randomized algorithm ${\cal A}$ if ${\cal A}$ uses the sampling Lemma \ref{lem:sampling-roditty} and it is sufficient for the correctness of algorithm ${\cal A}$ that $R$ is a hitting set for ${\cal P}$ ({\sl i.e.}, every path in ${\cal P}$ contains at least one vertex of $R$). According to Lemma \ref{lemma:greedy} one can derandomize the random selection of the hitting set $R$ in time that depends on the number of paths in ${\cal P}$. Therefore, in order to obtain an efficient derandomization procedure, we want to find a small set ${ \cal P}$ of critical paths for the randomized algorithms. Our main technical contribution is to show how to compute a small set of critical paths that is sufficient to be used as input for the greedy algorithm stated in Lemma \ref{lemma:greedy}. Our framework for derandomizing algorithms and data-structures that use the sampling Lemma \ref{lem:sampling-roditty} is given in Figure \ref{fig:framework}. \begin{figure} \caption{Our derandomization framework to derandomize algorithms that use the sampling Lemma \ref{lem:sampling-roditty}.} \label{fig:framework} \end{figure} Our first main technical contribution, denoted as Step 1 in Figure \ref{fig:framework}, is proving the existence of small sets of critical paths for the randomized replacement path algorithm of Roditty and Zwick \cite{Roditty2005} and for the distance sensitivity oracles of Weimann and Yuster \cite{WY13}. Our second main technical contribution, denoted as Step 2 in Figure \ref{fig:framework}, is developing algorithms to efficiently compute these small sets of critical paths. For the replacement paths problem, Roditty and Zwick \cite{Roditty2005} proved the existence of a critical set of $O(n^2)$ paths, each path containing at least $\lceil \sqrt{n} \rceil$ edges. 
Simply applying Lemma \ref{lemma:greedy} to this set of paths requires $\widetilde{O}(n^{2.5})$ time, which is too much, and it is also not clear from their algorithm how to efficiently compute this set of critical paths. As for Step 1, we prove the existence of a small set of $O(n)$ critical paths, each containing $\lceil \sqrt{n} \rceil$ edges, and for Step 2, we develop an efficient algorithm that computes this set of critical paths in $\widetilde{O}(m\sqrt{n})$ time. For the problem of distance sensitivity oracles, Weimann and Yuster \cite{WY13} proved the existence of a critical set of $O(n^{2f+3})$ paths, each path containing $n^{(1-\alpha)/f}$ edges (where $0 < \alpha < 1$). Simply applying Lemma \ref{lemma:greedy} to this set of paths requires $\widetilde{O}(n^{2f+3 + (1-\alpha)/f})$ time, which is too much, and here too it is not clear from their algorithm how to efficiently and deterministically compute this set of critical paths. As for Step 1, we prove the existence of a small set of $O(n^{2+\epsilon})$ critical paths, each containing $n^{(1-\alpha)/f}$ edges, and for Step 2, we develop an efficient deterministic algorithm that computes this set of critical paths in $\widetilde{O}(mn^{1+\epsilon})$ time. For Step 3, we use the folklore greedy deterministic algorithm, denoted here by GreedyPivotsSelection$(\{ D_1, \ldots, D_q \})$. Given as input the paths $D_1, \ldots, D_q$, each of which contains at least $L$ vertices, the algorithm chooses a set of pivots $R \subseteq V$ such that for every $1 \le i \le q$ it holds that $D_i \cap R \ne \emptyset$. In addition, it holds that $|R| = \widetilde{O}(\frac{n}{L})$ and the runtime of the algorithm is $\widetilde{O}(qL)$. The GreedyPivotsSelection algorithm works as follows. Let $\mathcal{D} = \{ D_1, \ldots, D_q \}$. Starting with $R \gets \emptyset$, find a vertex $v \in V$ which is contained in the maximum number of sets of $\mathcal{D}$, add it to $R$, and remove all the sets that contain $v$ from $\mathcal{D}$. Repeat this process until $\mathcal{D} = \emptyset$. \begin{lemma} \label{lemma:greedy-correctness} Let $1 \le L \le n$ and $1 \le q < poly(n)$ be two integers. Let $D_1, \ldots, D_q \subseteq V$ be paths satisfying $|D_i| \ge L$ for every $1 \le i \le q$. The algorithm GreedyPivotsSelection$( \{D_1, \ldots, D_q \})$ finds in $\widetilde{O}(qL)$ time a set $R \subset V$ such that for every $1 \le i \le q$ it holds that $R \cap D_i \ne \emptyset$ and $|R| = O(\frac{n \log q}{L}) = \widetilde{O}(n/L)$. \end{lemma} \begin{proof} We first prove that for every $1 \le i \le q$ it holds that $R \cap D_i \ne \emptyset$ and $|R| = O(\frac{n \log q}{L}) = \widetilde{O}(n/L)$. When the algorithm terminates, every input set $D_i$ contains at least one of the vertices of $R$: otherwise, $\mathcal{D}$ would still contain the sets that are disjoint from $R$, and the algorithm would not have terminated since $\mathcal{D} \ne \emptyset$. For every vertex $v \in V$, let $c(v)$ be a variable which denotes, at every moment of the algorithm, the number of sets in $\mathcal{D}$ which contain $v$. Denote by $\mathcal{D}_i$ the set $\mathcal{D}$ after $i$ iterations. Let $\mathcal{D}_0 = \{ D_1, \ldots, D_q\}$ be the initial set $\mathcal{D}$ given as input to the algorithm, so $|\mathcal{D}_0| = q$. We claim that the process terminates after at most $\widetilde{O}(n/L)$ iterations, and since at every iteration we add one vertex $v$ to $R$, it follows that $|R| = \widetilde{O}(n/L)$. Recall that $\mathcal{D}$ contains sets of size at least $L$. Hence, $\Sigma_{v \in V} c(v) \ge |\mathcal{D}| L$. It follows that the average number of sets that a vertex $v \in V$ belongs to is $avg = \frac{\Sigma_{v \in V} c(v)}{n} \ge \frac{|\mathcal{D}| L }{n}$.
By the pigeonhole principle, the vertex $v_i = \arg \max_{v \in V} \{ c(v) \}$ belongs to at least $\frac{|\mathcal{D}|L}{n}$ sets of $\mathcal{D}$. At iteration $i$ we remove from $\mathcal{D}$ all the sets that contain $v_i$, so each iteration decreases the size of $\mathcal{D}$ by a factor of at least $(1-L/n)$. Hence, after the $i^{\text{th}}$ iteration, the size of $\mathcal{D}$ satisfies $|\mathcal{D}_i| \le (1-L/n)^i |\mathcal{D}_0|$. Therefore, after $i = (n/L)\ln q + 1$ iterations, the size of $\mathcal{D}$ is at most $(1-L/n)^i |\mathcal{D}_0| < \frac{1}{q} |\mathcal{D}_0| \le 1$, where the last inequality holds since $|\mathcal{D}_0|=q$. It follows that after $(n/L) \ln q + 1$ iterations we have $\mathcal{D} = \emptyset$. At each iteration we add one vertex $v_i$ to the set $R$, thus the size of the set $R$ is $\widetilde{O}(n/L)$. Next we describe an implementation of the GreedyPivotsSelection algorithm (see Figure \ref{fig:greedy-pivots-selection} for pseudo-code). The first thing we do is keep only an arbitrary subset of $L$ vertices from every $D \in \mathcal{D}$, so that every set $D \in \mathcal{D}$ contains exactly $L$ vertices. We implement the algorithm GreedyPivotsSelection as follows. During the runtime of the algorithm we maintain a counter $c(v)$ for every vertex $v \in V$ which equals the number of sets in $\mathcal{D}$ that contain $v$. During the initialization of the algorithm, we construct a subset of vertices $V' \subseteq V$ which contains all the vertices of all the paths in $\mathcal{D}$, and we compute $c(v)$ directly: first we set $c(v) \gets 0$ for every $v \in V'$, and then we scan all the sets $D \in \mathcal{D}$ and every vertex $v \in D$ and increase the counter $c(v) \gets c(v) + 1$. After this initialization we have $c(v) = |\{ D \in \mathcal{D} | v \in D \}|$, which is the number of sets of $\mathcal{D}$ that contain $v$.
We further initialize a binary search tree $BST$ and insert every vertex $v \in V'$ into $BST$ with the key $c(v)$, and initialize $R \gets \emptyset$. We also create a list $L(v)$ for every vertex $v \in V'$ which contains pointers to the sets $D \in \mathcal{D}$ that contain $v$. Hence, $L(v) = \{ D \in \mathcal{D} | v \in D \}$ and $c(v) = |L(v)|$. To obtain the set $R$ we run the following loop. While $\mathcal{D} \ne \emptyset$, we find the vertex $v \in V'$ which is contained in the maximum number of paths of $\mathcal{D}$ and add $v$ to $R$. The vertex $v$ is computed in $O(\log n)$ time by extracting the element in $BST$ whose key is maximal. Then we remove from $\mathcal{D}$ all the sets which contain $v$ (these are exactly the sets in $L(v)$), and we update the counters by scanning every set $D \in L(v)$ and every vertex $u \in D$ and decreasing the counter $c(u)$ by one (we also update the key of $u$ in $BST$ to the counter $c(u)$). We analyse the runtime of this greedy algorithm. Computing the subset of vertices $V' \subseteq V$ and setting all the values $c(v) \gets 0$ at the beginning for every $v \in V'$ takes $\widetilde{O}(qL)$ time. Computing the values $c(v) = |\{ D \in \mathcal{D} | v \in D \}|$ takes $O(qL)$ time, as we loop over all the $q$ sets $D \in \mathcal{D}$ and for every $D$ we loop over the exactly $L$ vertices $v \in D$ and increase the counter $c(v)$ by one. Initializing the binary search tree $BST$ and inserting into it every vertex $v \in V'$ with key $c(v)$ takes $\widetilde{O}(|V'|) = \widetilde{O}(qL)$ time, and all the extract-max operations on $BST$ take an additional $\widetilde{O}(|V'|) = \widetilde{O}(qL)$ time. The total time of operations of the form $c(v) \gets c(v)-1$ is $\widetilde{O}(qL)$, as the number of such operations is bounded by the sum of all values $c(v)$ at the beginning, and each such operation is handled in $O(\log n)$ time by updating the key of the vertex $v$ in $BST$ to $c(v)-1$.
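As a concrete illustration of the implementation just described (an illustrative sketch of ours, not the paper's pseudo-code, which appears in Figure \ref{fig:greedy-pivots-selection}), the following Python code realizes the counters $c(v)$ and the lists $L(v)$; a lazy-deletion max-heap stands in for the balanced search tree, discarding an entry whenever its stored count is stale:

```python
import heapq

def greedy_pivots_selection(paths):
    """Greedy hitting set: repeatedly pick the vertex contained in the
    largest number of remaining paths (illustrative sketch only)."""
    alive = {i: set(p) for i, p in enumerate(paths)}
    c = {}   # c[v] = number of alive paths containing v
    L = {}   # L[v] = indices of the input paths containing v
    for i, p in alive.items():
        for v in p:
            c[v] = c.get(v, 0) + 1
            L.setdefault(v, []).append(i)
    heap = [(-cnt, v) for v, cnt in c.items()]   # max-heap via negated counts
    heapq.heapify(heap)
    R = []
    while alive:
        cnt, v = heapq.heappop(heap)
        if c[v] != -cnt:          # stale entry: count changed since it was pushed
            continue
        R.append(v)
        for i in L[v]:            # every still-alive path hit by v is removed
            if i in alive:
                for u in alive.pop(i):
                    c[u] -= 1
                    if c[u] > 0:
                        heapq.heappush(heap, (-c[u], u))
    return R
```

Since counts only decrease and every decrement pushes a fresh heap entry, any popped entry that matches the current count is valid, which preserves the $\widetilde{O}(qL)$ bound of the lemma.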
The total time for checking the lists $L(v)$ of all vertices chosen for $R$ is at most $O(qL)$, as this is the sum of the sizes of all sets $L(v)$. Therefore, the total running time is $\widetilde{O}(qL)$. \end{proof} \begin{figure} \caption{Algorithm GreedyPivotsSelection} \label{fig:greedy-pivots-selection} \end{figure} \subsection{Related Work - the Blocker Set Algorithm of King} We remark that the GreedyPivotsSelection algorithm is similar to the blocker set algorithm described in \cite{King99} for finding a hitting set for a set of paths. The blocker set algorithm was used in \cite{King99} to develop sequential dynamic algorithms for the APSP problem. Additional related work is that of Agarwal {\sl et al.} \cite{AgarwalRKP18}. They presented a deterministic distributed algorithm to compute APSP in an edge-weighted directed or undirected graph in $\widetilde{O}(n^{3/2})$ rounds in the Congest model by incorporating a deterministic distributed version of the blocker set algorithm. While our derandomization framework uses the greedy algorithm (or the blocker set algorithm) to find a hitting set of vertices for a critical set of paths $D_1, \ldots, D_q$, we stress that our main contributions are the techniques to reduce the number of sets $q$ the greedy algorithm must hit (Step 1), and the algorithms to efficiently compute the sets $D_1, \ldots, D_q$ (Step 2). These techniques enable us to use the greedy algorithm (or the blocker set algorithm) for a wider range of problems. Specifically, they allow us to derandomize the best known randomized algorithms for the replacement paths problem and distance sensitivity oracles. We believe that our techniques can also be leveraged for additional related problems which use a sampling lemma similar to Lemma \ref{lem:sampling-roditty}. \subsection{More Related Work} \label{appendix:more-related-work} We survey related work for the replacement paths problem and distance sensitivity oracles.
{\bf The replacement paths problem.} The replacement paths problem is motivated by several different applications and has been extensively studied in the last few decades (see e.g. \cite{MaMiGu89,HeSu01,HeSu02,NaPrWi01,WilliamsW10,GoLe09,Roditty2005,EmPeRo10,Klein10,Bernstein10}). It is well motivated in its own right from the fault-tolerance perspective. In many applications it is desired to find algorithms and data-structures that are resilient to failures. Since links in a network can fail, it is important to find backup shortest paths between important vertices of the graph. Furthermore, the replacement paths problem arises in several concrete applications. First, the fastest algorithms for computing the $k$ simple shortest paths between $s$ and $t$ in directed graphs execute $k$ iterations of a replacement paths algorithm between $s$ and $t$, in total $\widetilde{O}(mnk)$ time (see \cite{Yen71,Law72}). Second, consider path auctions: suppose we would like to find the shortest path from $s$ to $t$ in a directed graph $G$ whose links are owned by selfish agents. Nisan and Ronen \cite{NiRo01} showed that Vickrey pricing is an incentive compatible mechanism, and in order to compute the Vickrey pricing of the edges one has to solve the replacement paths problem. It was raised as an open problem by Nisan and Ronen \cite{NiRo01} whether there exists an efficient algorithm for solving the replacement paths problem. Third, in biological sequence alignment \cite{ByWa84} replacement paths can be used to compute which pieces of an alignment are most important. By now, near optimal algorithms are known for many cases of the problem. For instance, the case of undirected graphs admits deterministic near linear time solutions (see \cite{MaMiGu89,HeSu01, HeSu02,NaPrWi01}).
In fact, Lee and Lu present linear $O(n+m)$-time algorithms for the replacement paths problem on the following classes of $n$-node $m$-edge graphs: (1) undirected graphs in the word-RAM model of computation, (2) undirected planar graphs, (3) undirected minor-closed graphs, and (4) directed acyclic graphs. A natural question is whether a near linear time algorithm is also possible for the directed case. Vassilevska Williams and Williams \cite{WilliamsW10} showed that such an algorithm is essentially not possible, by presenting conditional lower bounds. More precisely, Vassilevska Williams and Williams \cite{WilliamsW10} showed a subcubic equivalence between the combinatorial all pairs shortest paths (APSP) problem and the combinatorial replacement paths problem. They proved that there exists a fixed $\epsilon >0$ and an $O(n^{3-\epsilon})$ time combinatorial algorithm for the replacement paths problem if and only if there exists a fixed $\delta>0$ and an $O(n^{3-\delta})$ time combinatorial algorithm for the APSP problem. This implies that either both problems admit truly subcubic algorithms, or neither of them does. Assuming the conditional lower bound that no subcubic APSP algorithm exists, the trivial algorithm of computing Dijkstra from $s$ in the graph $G \setminus \{e\}$ for every edge $e \in P_G(s,t)$, which takes $O(mn + n^2\log n)$ time, is essentially near optimal. The near optimal algorithms for the undirected case and the conditional lower bounds for the directed case seem to close the problem. However, it turns out that if we consider the directed case with bounded edge weights then the picture is not yet complete.
For instance, if we assume that the graph is directed with integer weights in the range $[-M, M]$ and allow algebraic solutions (rather than combinatorial ones), then Vassilevska Williams \cite{WilliamsRP11} presented an $\widetilde{O}(Mn^\omega)$ time algebraic randomized algorithm for the replacement paths problem, where $2 \le \omega < 2.373$ is the matrix multiplication exponent, whose current best known upper bound is $2.3728639$ (\cite{LeGall14,Williams12,CoppersmithW90}). Bernstein \cite{Bernstein10} presented a $(1+\epsilon)$-approximate deterministic replacement paths algorithm which is near optimal (its runtime is $\widetilde{O}(m \log(nC/c) / \epsilon)$, where $C$ is the largest edge weight in the graph and $c$ is the smallest edge weight). For unweighted directed graphs, the gap between the randomized and deterministic solutions is even larger for sparse graphs. Roditty and Zwick \cite{Roditty2005} presented a randomized $\widetilde{O}(m\sqrt{n})$ time algorithm for the replacement paths problem in unweighted directed graphs. Vassilevska Williams and Williams \cite{WilliamsW10} proved a subcubic equivalence between the combinatorial replacement paths problem in unweighted directed graphs and the combinatorial boolean matrix multiplication (BMM) problem. They proved that there exists some fixed $\epsilon >0$ such that the combinatorial replacement paths problem can be solved in $O(mn^{1/2-\epsilon})$ time if and only if there exists some fixed $\delta > 0$ such that combinatorial BMM can be solved in subcubic $O(n^{3-\delta})$ time. Giving a subcubic combinatorial algorithm for the BMM problem, or proving that no such algorithm exists, is a long standing open problem. This implies that either both problems can be polynomially improved, or neither of them can. Hence, assuming the conditional lower bound for combinatorial BMM, the randomized algorithm of Roditty and Zwick \cite{Roditty2005} is near optimal.
In the deterministic regime, no algorithm for the directed case is known that is asymptotically better (up to polylog factors) than invoking an APSP algorithm. Interestingly, in the fault-tolerant and the dynamic settings many of the existing algorithms are randomized, and for many of the problems there is a polynomial gap between the best randomized and deterministic algorithms (see e.g. distance sensitivity oracles \cite{GW12}, dynamic shortest paths \cite{HenzingerKNFOCS14,BernsteinC16}, dynamic strongly connected components \cite{HenzingerKN14,HenzingerKN15,CDILP16}, dynamic matching \cite{solomon2016fully,ArChCoStWa17}, and many more). Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge, and it is often used to simplify an algorithm and to speed up its running time. However, physical computers are deterministic machines, and obtaining true randomness can be a hard task. A central line of research is focused on the derandomization of algorithms that rely on randomness. Our main contribution is a derandomization of the replacement paths algorithm of \cite{Roditty2005} for the case of unweighted directed graphs. After more than a decade, we give the first deterministic algorithm for the replacement paths problem whose runtime is $\widetilde{O}(m\sqrt{n})$. Our deterministic algorithm matches the runtime of the randomized algorithm, which is near optimal assuming the conditional lower bound of combinatorial boolean matrix multiplication \cite{WilliamsW10}. In addition, to the best of our knowledge this is the first deterministic solution for the directed case that is asymptotically better than the APSP bound. The replacement paths problem is related to the $k$ shortest paths problem, where the goal is to find the $k$ shortest paths between two vertices. Eppstein \cite{Epp98} solved the $k$ shortest paths problem for directed graphs with nonnegative edge weights in $O(m + n\log n + k)$ time.
However, the $k$ shortest paths may not be simple, {\sl i.e.}, they may contain cycles. The problem of computing $k$ simple (loopless) shortest paths is more difficult. The deterministic algorithm by Yen \cite{Yen71} (which was generalized by Lawler \cite{Law72}) for finding $k$ simple shortest paths in weighted directed graphs can be implemented in $O(kn(m + n\log n))$ time. This algorithm essentially uses a replacement paths algorithm in each iteration. Roditty and Zwick \cite{Roditty2005} described how to reduce the problem of $k$ simple shortest paths to $k$ executions of the second shortest path problem. For directed unweighted graphs, the randomized replacement paths algorithm of Roditty and Zwick \cite{Roditty2005} thus yields a randomized $\widetilde{O}(k m \sqrt{n})$ time algorithm for $k$ simple shortest paths. To the best of our knowledge, no deterministic algorithm was known that is better than the $\widetilde{O}(kmn)$ time algorithms for general directed weighted graphs ({\sl e.g.}, \cite{Yen71}, \cite{Law72}), leaving a significant gap between the randomized and deterministic complexities of $k$ simple shortest paths in directed unweighted graphs. Our deterministic replacement paths algorithm closes this gap and gives the first deterministic $k$ simple shortest paths algorithm for directed unweighted graphs whose runtime is $\widetilde{O}(k m \sqrt{n})$. \subsection{Outline} The structure of the paper is as follows. In Section \ref{sec:preliminaries} we describe some preliminaries and notations.
In Section \ref{sec:replacement-short} we apply our framework to the replacement paths algorithm of Roditty and Zwick \cite{Roditty2005}. In Section \ref{sec:dso-short} we apply our framework to the DSO of Weimann and Yuster for graphs with real edge weights \cite{WY13}. In order for this paper to be self-contained, a full description of the combinatorial deterministic replacement paths algorithm is given in Section \ref{appendix:sec:replacement} and a full description of the deterministic distance sensitivity oracles is given in Section \ref{sec:dso1}. \section{Preliminaries} \label{sec:preliminaries} Let $G=(V,E)$ be a directed weighted graph with $n$ vertices and $m$ edges with real edge weights $\omega(\cdot)$. Given a path $P$ in $G$ we define its weight $\omega(P) = \Sigma_{e \in E(P)} \omega(e)$. Given $s,t \in V$, let $P_G(s,t)$ be a shortest path from $s$ to $t$ in $G$ and let $d_G(s,t) = \omega(P_G(s,t))$ be its length, which is the sum of its edge weights. Let $|P_G(s,t)|$ denote the number of edges along $P_G(s,t)$. Note that for unweighted graphs we have $|P_G(s,t)| = d_G(s,t)$. When $G$ is known from the context we sometimes abbreviate $P_G(s,t), d_G(s,t)$ with $P(s,t), d(s,t)$ respectively. We define the path concatenation operator $\circ$ as follows. Let $P_1= (x_1,x_2, \ldots, x_r)$ and $P_2= (y_1,y_2, \ldots, y_t)$ be two paths. Then $P = P_1 \circ P_2$ is defined as the path $P = (x_1, x_2, \ldots, x_r, y_1, y_2, \ldots, y_t)$ (with the duplicated junction vertex written once when $x_r = y_1$), and it is well defined if either $x_r=y_1$ or $(x_r, y_1) \in E$. For a graph $H$ we denote by $V(H)$ the set of its vertices, and by $E(H)$ the set of its edges. When it is clear from the context, we abbreviate $e \in E(H)$ by $e \in H$ and $v \in V(H)$ by $v \in H$. Let $P$ be a path which contains the vertices $u,v \in V(P)$ such that $u$ appears before $v$ along $P$. We denote by $P[u..v]$ the subpath of $P$ from $u$ to $v$.
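To make the notation concrete, the following small sketch (our own illustrative encoding, not part of the paper) represents a path as a tuple of vertices and implements $\omega(P)$, the concatenation operator $\circ$, and the subpath $P[u..v]$:

```python
def path_weight(P, w):
    """omega(P): the sum of the edge weights along the path P."""
    return sum(w[(P[i], P[i + 1])] for i in range(len(P) - 1))

def concat(P1, P2, E):
    """P1 o P2: well defined if P1 ends where P2 starts, or the joining edge exists."""
    if P1[-1] == P2[0]:
        return P1 + P2[1:]   # write the duplicated junction vertex only once
    if (P1[-1], P2[0]) in E:
        return P1 + P2
    raise ValueError("concatenation is not well defined")

def subpath(P, u, v):
    """P[u..v]: the portion of P from u to v (u appears before v on P)."""
    return P[P.index(u): P.index(v) + 1]
```

For an unweighted graph one takes all edge weights equal to $1$, so that $|P_G(s,t)| = d_G(s,t)$ as noted above.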
For every edge $e \in P_G(s,t)$, a replacement path $P_G(s,t,e)$ for the triple $(s,t,e)$ is a shortest path from $s$ to $t$ avoiding $e$. Let $d_G(s,t,e) = \omega(P_G(s,t,e))$ be the length of the replacement path $P_G(s,t,e)$. We will assume, without loss of generality, that every replacement path $P_G(s,t,e)$ can be decomposed into a common prefix $\mathrm{CommonPref}_{s,t,e}$ with the shortest path $P_G(s,t)$, a detour $\mathrm{Detour}_{s,t,e}$ which is disjoint from the shortest path $P_G(s,t)$ (except for its first vertex and last vertex), and finally a common suffix $\mathrm{CommonSuff}_{s,t,e}$ which is common with the shortest path $P_G(s,t)$. Therefore, for every edge $e \in P_G(s,t)$ it holds that $P_G(s,t,e) = \mathrm{CommonPref}_{s,t,e} \circ \mathrm{Detour}_{s,t,e} \circ \mathrm{CommonSuff}_{s,t,e}$ (the prefix and/or suffix may be empty). Let $F \subseteq V \cup E$ be a set of vertices and edges. We define the graph $G \setminus F = (V \setminus F, E \setminus F)$ as the graph obtained from $G$ by removing the vertices and edges of $F$. We define a replacement path $P_G(s,t,F)$ as a shortest path from $s$ to $t$ in the graph $G \setminus F$, and let $d_G(s,t,F) = \omega(P_G(s,t,F))$ be its length. \section{Deterministic Replacement Paths in $\widetilde{O}(m \sqrt{n})$ Time - an Overview} \label{sec:replacement-short} In this section we apply our framework from Section \ref{sec:framework} to the replacement paths algorithm of Roditty and Zwick \cite{Roditty2005}. A full description of the deterministic replacement paths algorithm is given in Section \ref{appendix:sec:replacement}. The randomized algorithm of Roditty and Zwick as described in \cite{Roditty2005} takes $\widetilde{O}(m \sqrt{n})$ expected time. They handle separately the case that a replacement path has a short detour containing at most $\lceil \sqrt{n} \rceil$ edges, and the case that a replacement path has a long detour containing more than $\lceil \sqrt{n} \rceil$ edges. The first case is solved deterministically. The second case is solved by first sampling a subset of vertices $R$ according to Lemma \ref{lem:sampling-roditty}, where each vertex is sampled independently at random with probability $c \ln n/ \sqrt{n}$ for a large enough constant $c > 0$. Using this uniform sampling, it holds with high probability (at least $1-n^{-c+2}$) that for every long triple $(s,t,e)$ (as defined hereinafter), the detour $\mathrm{Detour}_{s,t,e}$ of the replacement path $P_G(s,t,e)$ contains at least one vertex of $R$. \begin{definition} \label{def:long-triple} Let $s,t \in V, e \in P_G(s,t)$. The triple $(s,t,e)$ is a {\sl long} triple if every replacement path from $s$ to $t$ avoiding $e$ has a detour part containing more than $\lceil \sqrt{n} \rceil$ edges. \end{definition} Note that in Definition \ref{def:long-triple} we defined $(s,t,e)$ to be a long triple if {\bf every} replacement path from $s$ to $t$ avoiding $e$ has a long detour (containing more than $\lceil \sqrt{n} \rceil$ edges). We could have defined $(s,t,e)$ to be a long triple if at least one replacement path from $s$ to $t$ avoiding $e$ has a long detour (perhaps more similar to the definitions in \cite{Roditty2005}); however, we find Definition \ref{def:long-triple} more convenient for the following reason.
If $(s,t,e)$ has a replacement path whose detour part contains at most $\lceil \sqrt{n} \rceil$ edges, then the algorithm of \cite{Roditty2005} for handling short detours deterministically finds a replacement path for $(s,t,e)$. Hence, we only need to find replacement paths for triples $(s,t,e)$ for which every replacement path from $s$ to $t$ avoiding $e$ has a long detour, and this is exactly the case in which we define $(s,t,e)$ to be a long triple. It is sufficient for the correctness of the replacement paths algorithm that the following condition holds: for every long triple $(s,t,e)$, the detour $\mathrm{Detour}_{s,t,e}$ of the replacement path $P_G(s,t,e)$ contains at least one vertex of $R$. As the authors of \cite{Roditty2005} write, the choice of the random set $R$ is the only randomization used in their algorithm. To obtain a deterministic algorithm for the replacement paths problem and to prove Theorem \ref{thm:replacement}, we prove the following deterministic alternative of Lemma \ref{lemma:greedy}. \begin{lemma} [Our derandomized version of Lemma \ref{lemma:greedy} for the replacement paths algorithm] \label{lemma:the-set-R-deterministic} There exists an $\widetilde{O}(m \sqrt{n})$ time deterministic algorithm that computes a set $R \subseteq V$ of $\widetilde{O}(\sqrt{n})$ vertices, such that for every long triple $(s,t,e)$ there exists a replacement path $P_G(s,t,e)$ whose detour part contains at least one of the vertices of $R$. \end{lemma} Following the above description, in order to prove Theorem \ref{thm:replacement}, {\sl i.e.}, that there exists an $\widetilde{O}(m\sqrt{n})$ time deterministic replacement paths algorithm, it is sufficient to prove the derandomization Lemma \ref{lemma:the-set-R-deterministic}; we do so in the following sections. \subsection{Step 1: the Method of Reusing Common Subpaths - Defining the Set ${\cal D}_n$} \label{sec:reusing-common-subpaths} In this section we prove the following lemma.
\begin{lemma} \label{thm:p-sqrt} There exists a set ${\cal D}_{n}$ of at most $n$ paths, each of length exactly $\lceil \sqrt{n} \rceil$, with the following property: for every long triple $(s,t,e)$ there exists a path $D \in {\cal D}_{n}$ and a replacement path $P_G(s,t,e)$ such that $D$ is contained in the detour part of $P_G(s,t,e)$. \end{lemma} In order to define the set of paths ${\cal D}_n$ and prove Lemma \ref{thm:p-sqrt} we need the following definitions. Let $G' = G \setminus E(P_G(s,t))$ be the graph obtained by removing the edges of the path $P_G(s,t)$ from $G$. For two vertices $u$ and $v$, let $d_{G'}(u,v)$ be the distance from $u$ to $v$ in $G'$. We use the following definitions of the index $\rho(x)$, the set of vertices $V_{\sqrt{n}}$ and the set of paths $\mathcal{D}_{n}$. \begin{definition}[The index $\rho(x)$]\label{def:x-tag} Let $P_G(s,t) = <v_0, \ldots, v_k>$ and let $X = \{ x \in V \ | \ \exists_{0 \le i \le k} \ d_{G'}(v_i, x) = \lceil \sqrt{n} \rceil \}$ be the subset of all the vertices $x \in V$ such that there exists at least one index $0 \le i \le k$ with $d_{G'}(v_i, x) = \lceil \sqrt{n} \rceil$. For every vertex $x \in X$ we define the index $0 \le \rho(x) \le k$ to be the minimum index such that $d_{G'}(v_{\rho(x)}, x) = \lceil \sqrt{n} \rceil$. \end{definition} \begin{definition} [The set of vertices $V_{\sqrt{n}}$] \label{def:v-sqrtn} We define the set of vertices $V_{\sqrt{n}} = \{ x \in X | \forall_{i < \rho(x)} d_{G'}(v_i, x) > \lceil \sqrt{n} \rceil \}$. In other words, $V_{\sqrt{n}}$ is the set of all vertices $x \in X$ such that for all the vertices $v_i$ before $v_{\rho(x)}$ along $P_G(s,t)$ it holds that $d_{G'}(v_i, x) > \lceil \sqrt{n} \rceil$.
\end{definition} \begin{definition} [A set of paths $\mathcal{D}_{n}$] \label{def:p-sqrtn} For every vertex $x \in V_{\sqrt{n}}$, let $D(x)$ be an arbitrary shortest path from $v_{\rho(x)}$ to $x$ in $G'$ (whose length is $\lceil \sqrt{n} \rceil$ as $d_{G'}(v_{\rho(x)}, x) = \lceil \sqrt{n} \rceil$). We define ${\cal D}_{n} = \{ D(x) | x \in V_{\sqrt{n}} \}$. \end{definition} Note that while $V_{\sqrt{n}}$ is uniquely defined (as it is defined according to distances between vertices), the set of paths $\mathcal{D}_{n}$ is not unique, as there may be many shortest paths from $v_{\rho(x)}$ to $x$ in $G'$, and we take $D(x) = P_{G'}(v_{\rho(x)}, x)$ to be an arbitrary such shortest path. The basic intuition for the method of reusing common subpaths is as follows. Let $P_G(s,t,e_1), \ldots, P_G(s,t,e_r)$ be arbitrary replacement paths such that $x$ is the $(\lceil \sqrt{n} \rceil + 1)^{\text{th}}$ vertex along the detours of all the replacement paths $P_G(s,t,e_1), \ldots, P_G(s,t,e_r)$. Then one can construct replacement paths $P'_G(s,t,e_1), \ldots, P'_G(s,t,e_r)$ such that the subpath $D(x) \in {\cal D}_n$ is contained in all these replacement paths. Therefore, the subpath $D(x)$ is reused as a common subpath in many replacement paths. We utilize this observation in the following proof of Lemma \ref{thm:p-sqrt}. \begin{proof} [Proof of Lemma \ref{thm:p-sqrt}] Obviously, the set ${\cal D}_{n}$ described in Definition \ref{def:p-sqrtn} contains at most $n$ paths, each of length exactly $\lceil \sqrt{n} \rceil$. We prove that for every long triple $(s,t,e)$ there exists a path $D \in \mathcal{D}_{n}$ and a replacement path $P'(s,t,e)$ {\sl s.t.} $D$ is contained in the detour part of $P'(s,t,e)$. Let $P_G(s,t,e)$ be a replacement path for $(s,t,e)$. Since $(s,t,e)$ is a long triple, the detour part $\mathrm{Detour}_{s,t,e}$ of $P_G(s,t,e)$ contains more than $\lceil \sqrt{n} \rceil$ edges.
Let $x \in \mathrm{Detour}_{s,t,e}$ be the $(\lceil \sqrt{n} \rceil + 1)^{\text{th}}$ vertex along $\mathrm{Detour}_{s,t,e}$, and let $v_j$ be the first vertex of $\mathrm{Detour}_{s,t,e}$. Let $P_1$ be the subpath of $\mathrm{Detour}_{s,t,e}$ from $v_j$ to $x$ and let $P_2$ be the subpath of $P_G(s,t,e)$ from $x$ to $t$. In other words, $P_G(s,t,e) = <v_0, \ldots, v_j> \circ P_1 \circ P_2$. Since $\mathrm{Detour}_{s,t,e}$ contains more than $\lceil \sqrt{n} \rceil$ edges, $\mathrm{Detour}_{s,t,e}$ is disjoint from $P_G(s,t)$ (except for its first and last vertices), and $P_1 \subset \mathrm{Detour}_{s,t,e}$, it follows that $P_1$ is disjoint from $P_G(s,t)$ (except for the vertex $v_j$). In particular, since $P_1$ is a shortest path in $G\setminus \{e\}$ that is edge-disjoint from $P_G(s,t)$, $P_1$ is also a shortest path in $G' = G \setminus E(P_G(s,t))$. We get that $d_{G'}(v_j,x) = |P_1| = \lceil \sqrt{n} \rceil$. We prove that $j=\rho(x)$ and $x \in V_{\sqrt{n}}$. As we have already proved that $d_{G'}(v_j, x) = \lceil \sqrt{n} \rceil$, we need to prove that for every $0 \le i < j$ it holds that $d_{G'}(v_i, x) > \lceil \sqrt{n} \rceil$. Assume by contradiction that there exists an index $0 \le i < j$ such that $d_{G'}(v_i, x) \le \lceil \sqrt{n} \rceil$. Then the path $\hat{P} = <v_0, \ldots, v_i> \circ P_{G'}(v_i, x) \circ P_2$ is a path from $s$ to $t$ that avoids $e$, and its length is: \begin{eqnarray*} |\hat{P}| &=& |<v_0, \ldots, v_i> \circ P_{G'}(v_i, x) \circ P_2| \\ & \le & i + \lceil \sqrt{n} \rceil + |P_2| \\ & < & j + \lceil \sqrt{n} \rceil + |P_2| \\ & = & |<v_0, \ldots, v_j> \circ P_1 \circ P_2| \\ & = & |P_G(s,t,e)| \end{eqnarray*} This means that $\hat{P}$ is a path from $s$ to $t$ in $G\setminus \{e\}$ that is shorter than the shortest path $P_G(s,t,e)$ from $s$ to $t$ in $G\setminus\{e\}$, a contradiction.
We get that $d_{G'}(v_j,x) = \lceil \sqrt{n} \rceil$ and that for every $0 \le i < j$ it holds that $d_{G'}(v_i, x) > \lceil \sqrt{n} \rceil$. Therefore, according to Definitions \ref{def:x-tag} and \ref{def:v-sqrtn} it holds that $j=\rho(x)$ and $x \in V_{\sqrt{n}}$. Let $D(x) \in \mathcal{D}_{n}$ be the path of Definition \ref{def:p-sqrtn}, i.e., a shortest path from $v_{\rho(x)}$ to $x$ in $G'$. We define the path $P'(s,t,e) = <v_0, \ldots, v_{\rho(x)}> \circ D(x) \circ P_2$. It follows that $P'(s,t,e)$ is a path from $s$ to $t$ that avoids $e$ and $|P'(s,t,e)| = |<v_0, \ldots, v_{\rho(x)}> \circ D(x) \circ P_2| = \rho(x) + \lceil \sqrt{n} \rceil + |P_2| = |P_G(s,t,e)| = d_G(s,t,e)$. Hence, $P'(s,t,e)$ is a replacement path for $(s,t,e)$ such that $D(x) \subset P'(s,t,e)$, so the lemma follows. \end{proof} \subsection{Step 2: the Method of Decremental Distances from a Path - Computing the Set ${\cal D}_n$} \label{sec:rp-efficient} In this section we describe a decremental algorithm that enables us to compute the set of paths ${\cal D}_{n}$ in $\widetilde{O}(m\sqrt{n})$ time, proving the following lemma. \begin{lemma} \label{lemma:compute-dn} There exists a deterministic algorithm for computing the set of paths ${\cal D}_{n}$ in $\widetilde{O}(m\sqrt{n})$ time. \end{lemma} Our algorithm for computing the set of paths ${\cal D}_{n}$ is a variant of the decremental SSSP (single source shortest paths) algorithm of King \cite{King99}. Our variant of the algorithm is used to find distances of vertices from a path rather than from a single source vertex, as we define below. {\bf Overview of the Deterministic Algorithm for Computing ${\cal D}_{n}$ in $\widetilde{O}(m\sqrt{n})$ Time.} In the following description let $P = P_G(s,t)$. Consider the following assignment of weights $\omega$ to the edges of $G$: we assign weight $\epsilon$ to every edge $e$ on the path $P$, and weight $1$ to all the other edges, where $\epsilon$ is a small number such that $0 < \epsilon < 1/n$.
We define the graph $G^w = (G,\omega)$ as the weighted graph $G$ with edge weights $\omega$. We define for every $0 \le i \le k$ the graph $G_i = G \setminus \{v_{i+1}, \ldots, v_k\}$ and the path $P_i = P \setminus \{v_{i+1}, \ldots, v_k\}$. We define the graph $G^w_i = (G_i, \omega)$ as the weighted graph $G_i$ with edge weights $\omega$. The algorithm computes the graph $G^w$ by simply taking $G$ and setting the weight of every edge of $P_G(s,t)$ to $\epsilon$ and the weight of every other edge to $1$. The algorithm then removes the vertices of $P_G(s,t)$ from $G^w$ one after the other (starting from the vertex that is closest to $t$). Loosely speaking, after each vertex is removed, the algorithm computes the distances from $s$ in the current graph. In each such iteration, the algorithm adds to $V^w_{\sqrt{n}}$ all vertices whose distance from $s$ in the current graph is between $\lceil \sqrt{n} \rceil$ and $\lceil \sqrt{n} \rceil+1$. We will later show that at the end of the algorithm we have $V^w_{\sqrt{n}} = V_{\sqrt{n}}$. Unfortunately, we cannot afford to run Dijkstra's algorithm after the removal of every vertex of $P_G(s,t)$, as there might be $n$ vertices on $P_G(s,t)$. To overcome this issue, the algorithm only maintains nodes at distance at most $\lceil \sqrt{n} \rceil +1$ from $s$. In addition, we observe that to compute the SSSP from $s$ in the graph after the removal of a vertex $v_i$, we only need to spend time on nodes whose shortest path from $s$ uses the removed vertex. Roughly speaking, for these nodes we show that their distance from $s$, rounded down to the closest integer, must increase by at least 1 as a result of the removal of the vertex. Hence, for every node we spend time on it in at most $\lceil \sqrt{n} \rceil+1$ iterations, until its distance from $s$ is larger than $\lceil \sqrt{n} \rceil+1$. As we show later, this yields our desired running time.
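To make the overview concrete, the following Python sketch (our own illustrative code; all names are ours, not the paper's) removes the path vertices one at a time from the $t$-side and, after each removal, collects the vertices whose $\omega$-distance from $s$ first enters the band $[\lceil \sqrt{n} \rceil, \lceil \sqrt{n} \rceil + 1)$. For simplicity it reruns a truncated Dijkstra after every removal; the actual algorithm amortizes this work as explained above.

```python
import heapq

def collect_band(adj, path, B, eps=1e-6):
    """Remove path vertices from the t-side one at a time; after each
    removal, run Dijkstra from s truncated at distance B+1 (path edges
    weigh eps, all other edges weigh 1) and record vertices whose
    distance first enters the band [B, B+1)."""
    on_path = {v: i for i, v in enumerate(path)}
    s = path[0]
    band = {}
    for cut in range(len(path) - 1, -1, -1):   # keep path[0..cut] alive
        alive = set(adj) - set(path[cut + 1:])
        dist = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for w in adj[u]:
                if w not in alive:
                    continue
                # consecutive path edges are "free" (weight eps)
                is_path_edge = (u in on_path and w in on_path
                                and abs(on_path[u] - on_path[w]) == 1)
                nd = d + (eps if is_path_edge else 1.0)
                if nd < dist.get(w, float("inf")) and nd <= B + 1:
                    dist[w] = nd
                    heapq.heappush(pq, (nd, w))
        for v, d in dist.items():
            if B <= d < B + 1 and v not in band:
                band[v] = cut        # prefix length at first entry
    return band
```

For example, on a $4$-vertex path $0$-$1$-$2$-$3$ with a detour $0$-$4$-$5$-$2$, vertex $5$ enters the band $[2,3)$ only once vertex $2$ has been removed.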
In Section \ref{sec:p-sqrt-computation} we give a formal description and analysis of the algorithm and prove Lemma \ref{lemma:compute-dn}. {\bf Proof of Theorem \ref{thm:replacement}.} We summarize the $\widetilde{O}(m \sqrt{n})$ deterministic replacement paths algorithm and outline the proof of Theorem \ref{thm:replacement}. First, compute in $\widetilde{O}(m\sqrt{n})$ time the set of paths ${\cal D}_{n}$ as in Lemma \ref{lemma:compute-dn}. Given ${\cal D}_{n}$, the deterministic greedy selection algorithm GreedyPivotsSelection$(\mathcal{D}_{n})$ (as described in Lemma \ref{lemma:greedy}) computes in $\widetilde{O}( n\sqrt{n})$ time a set $R \subset V$ of $\widetilde{O}(\sqrt{n})$ vertices with the following property: every path $D \in \mathcal{D}_{n}$ contains at least one of the vertices of $R$. Theorem \ref{thm:replacement} follows from Lemmas \ref{lemma:the-set-R-deterministic}, \ref{thm:p-sqrt} and \ref{lemma:compute-dn}. \section{Deterministic Distance Sensitivity Oracles - an Overview} \label{sec:dso-short} In this section we apply our framework from Section \ref{sec:framework} to the combinatorial distance sensitivity oracles of Weimann and Yuster \cite{WY13}. A full description of the deterministic combinatorial distance sensitivity oracles is given in Section \ref{sec:dso1}. Let $0 < \epsilon < 1$ and $1 \le f = O(\frac{\log n}{\log \log n})$ be two parameters. In \cite{WY13}, Weimann and Yuster considered the following notion of intervals (note that in \cite{WY13} they use a parameter $0 < \alpha < 1$, whereas we use a parameter $0 < \epsilon < 1$ such that $\epsilon = 1 - \alpha$). They define an interval of a long simple path $P$ as a subpath of $P$ consisting of $n^{\epsilon/f}$ consecutive vertices, so every simple path induces less than $n$ (overlapping) intervals. For every subset $F \subset E$ of at most $f$ edges, and for every pair of vertices $u, v \in V$, let $P_G(u,v,F)$ be a shortest path from $u$ to $v$ in $G \setminus F$.
The path $P_G(u,v,F)$ induces less than $n$ (overlapping) intervals. The total number of possible intervals is $O(n^{2f+3})$, as each one of the (at most) $O(n^{2f+2})$ possible queries $(u,v,F)$ corresponds to a shortest path $P_G(u,v,F)$ that induces less than $n$ intervals. \begin{definition} Let ${\cal D}_{f}$ be the set of all the intervals (subpaths containing $n^{\epsilon/f}$ edges) of all the replacement paths $P_G(s,t,F)$ for every $s,t \in V, F \subseteq E \cup V$ with $|F| \le f$. \end{definition} Weimann and Yuster apply Lemma \ref{lem:sampling-roditty} to find a set $R \subseteq V$ of $\widetilde{O}(n^{1- \epsilon/f})$ vertices that w.h.p. hits all the intervals of ${\cal D}_{f}$. According to these bounds (${\cal D}_{f}$ contains $O(n^{2f+3})$ paths, each containing exactly $n^{\epsilon/f}$ edges), applying the greedy algorithm of Lemma \ref{lemma:greedy} to obtain the set $R$ deterministically takes $\widetilde{O}(n^{2f+3 + \epsilon/f})$ time, which is very inefficient. In this section we assume that all weights are non-negative (so we can run Dijkstra's algorithm) and that shortest paths are unique; we justify these assumptions in Section \ref{sec:assumption-unique}. \subsection{Step 1: the Method of Using Fault-Tolerant Trees to Significantly Reduce the Number of Intervals} \label{sec:ft-trees} In Lemma \ref{lemma:small-num-of-intervals} we prove that the set of intervals ${\cal D}_{f}$ actually contains at most $O(n^{2+\epsilon})$ unique intervals, rather than the $O(n^{2f+3})$ naive upper bound mentioned above. From Lemmas \ref{lemma:small-num-of-intervals} and \ref{lemma:greedy} it follows that GreedyPivotsSelection$(\mathcal{D}_f)$ finds in $\widetilde{O}(n^{2+\epsilon+\epsilon/f})$ time the subset $R \subseteq V$ of $\widetilde{O}(n^{1-\epsilon/f})$ vertices that hits all the intervals of ${\cal D}_f$.
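The selection rule underlying GreedyPivotsSelection is the classic greedy hitting-set heuristic. A minimal Python sketch (our own illustrative code, not the paper's implementation, which is engineered to meet the stated $\widetilde{O}$ bounds):

```python
from collections import defaultdict

def greedy_pivots(paths):
    """Repeatedly pick the vertex lying on the most not-yet-hit paths,
    until every path contains a chosen pivot. The standard set-cover
    analysis bounds the output size by a log factor times optimal."""
    remaining = [set(p) for p in paths]
    pivots = set()
    while remaining:
        count = defaultdict(int)
        for p in remaining:
            for v in p:
                count[v] += 1
        best = max(count, key=count.get)   # vertex hitting most paths
        pivots.add(best)
        remaining = [p for p in remaining if best not in p]
    return pivots
```

When all paths have the same length $\ell$, this returns $\widetilde{O}(n/\ell)$ pivots, which is the size bound used throughout the paper.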
In Section \ref{sec:improved-greedy} we further reduce the time it takes the greedy algorithm to compute the set of pivots $R$ to $\widetilde{O}(n^{2+\epsilon})$. \begin{lemma} \label{lemma:small-num-of-intervals} $|{\cal D}_{f}| = O(n^{2+\epsilon})$. \end{lemma} In order to prove Lemma \ref{lemma:small-num-of-intervals} we describe the fault-tolerant trees data structure, which is a variant of the trees that appear in Appendix A of \cite{ChCoFiKa17}. \begin{definition} Let $P^L_G(s,t, F)$ be the shortest among the $s$-to-$t$ paths in $G \setminus F$ that contain at most $L$ edges, and let $d^L_G(s,t,F) = \omega(P^L_G(s,t,F))$. In other words, $d^L_G(s,t,F) = \min \{ \omega(P) \ | \ P \text{ is an } s\text{-to-}t \text{ path in } G \setminus F \text{ on at most } L \text{ edges} \}$. If there is no path from $s$ to $t$ in $G\setminus F$ containing at most $L$ edges then we define $P^L_G(s,t, F) = \emptyset$ and $d^L_G(s,t,F) = \infty$. For $F = \emptyset$ we abbreviate $P^L_G(s,t, \emptyset) = P^L_G(s,t)$ as the shortest path from $s$ to $t$ that contains at most $L$ edges, and $d^L_G(s,t) = d^L_G(s,t, \emptyset)$ as its length. \end{definition} Let $s,t \in V$ be vertices and let $L, f\ge 1$ be fixed integer parameters. We define the tree $FT^{L,f}(s,t)$ as follows. \begin{itemize} \item In the root of $FT^{L,f}(s,t)$ we store the path $P^L_G(s,t)$ (and its length $d^L_G(s,t)$), and also store the vertices and edges of $P^L_G(s,t)$ in a binary search tree $BST^L(s,t)$; if $P^L_G(s,t) = \emptyset$ then we terminate the construction of $FT^{L,f}(s,t)$. \item For every edge or vertex $a_1$ of $P^L_G(s,t)$ we recursively build a subtree $FT^{L,f}(s,t, a_1)$ as follows. Let $P^L_G(s,t, \{a_1\})$ be the shortest path from $s$ to $t$ that contains at most $L$ edges in the graph $G\setminus \{a_1\}$.
Then in the subtree $FT^{L,f}(s,t,a_1)$ we store the path $P^L_G(s,t, \{a_1\})$ (and its length $d^L_G(s,t, \{a_1\})$) and we also store the vertices and edges of $P^L_G(s,t, \{a_1\})$ in a binary search tree $BST^L(s,t,a_1)$; if $P^L_G(s,t, \{a_1\}) = \emptyset$ we terminate the construction of $FT^{L,f}(s,t,a_1)$. If $f > 1$ then for every vertex or edge $a_{2}$ in $P^L_G(s,t, \{a_1\})$ we recursively build the subtree $FT^{L,f}(s,t, a_1, a_2)$ as follows. \item For the recursive step, assume we want to construct the subtree $FT^{L,f}(s,t, a_1, \ldots, a_i)$. In the root of $FT^{L,f}(s,t, a_1, \ldots, a_i)$ we store the path $P^L_G(s,t, \{a_1, \ldots, a_i\})$ (and its length $d^L_G(s,t, \{a_1, \ldots, a_i\})$) and we also store the vertices and edges of $P^L_G(s,t, \{a_1, \ldots, a_i\})$ in a binary search tree $BST^L(s,t, a_1, \ldots, a_i)$. If $P^L_G(s,t, \{a_1, \ldots, a_i\}) = \emptyset$ then we terminate the construction of $FT^{L,f}(s,t, a_1, \ldots, a_i)$. If $i< f$ then for every vertex or edge $a_{i+1}$ in $P^L_G(s,t, \{a_1, \ldots, a_i\})$ we recursively build the subtree $FT^{L,f}(s,t, a_1, \ldots, a_i, a_{i+1})$. \end{itemize} Observe that there are two conditions in which we terminate the recursive construction of $FT^{L,f}(s,t, a_1, \ldots, a_i)$: \begin{itemize} \item Either $i = f$, in which case $FT^{L,f}(s,t, a_1, \ldots, a_f)$ is a leaf node of $FT^{L,f}(s,t)$ and we store in the leaf node $FT^{L,f}(s,t, a_1, \ldots, a_f)$ the path $P^L_G(s,t, \{a_1, \ldots, a_f\})$. \item Or there is no path from $s$ to $t$ in $G \setminus \{a_1, \ldots, a_i \}$ that contains at most $L$ edges, and then $FT^{L,f}(s,t, a_1, \ldots, a_i)$ is a leaf node of $FT^{L,f}(s,t)$ and we store in it $P^L_G(s,t, \{a_1, \ldots, a_i\}) = \emptyset$. \end{itemize} {\bf Querying the tree $FT^{L,f}(s,t)$.} Given a query $(s,t,F)$ such that $F \subset V \cup E$ with $|F| = f$, we would like to compute $d^L_G(s,t,F)$ using the tree $FT^{L,f}(s,t)$.
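The construction and the query walk can be sketched in Python as follows. This is our own illustrative code under simplifying assumptions: unit edge weights, only vertex failures on the interior of the stored path (the data structure above also handles edge failures), and a plain list membership test standing in for the $BST$.

```python
def short_sp(adj, s, t, L, banned):
    """Shortest s-to-t path on at most L edges in G minus `banned`,
    via L rounds of Bellman-Ford (unit edge weights for simplicity)."""
    INF = float("inf")
    dist, par = {s: 0}, {}
    for _ in range(L):
        updates = []
        for u, d in list(dist.items()):
            for w in adj[u]:
                if w not in banned and d + 1 < dist.get(w, INF):
                    updates.append((w, d + 1, u))
        for w, d, u in updates:
            if d < dist.get(w, INF):
                dist[w], par[w] = d, u
    if t not in dist:
        return None                      # plays the role of the empty path
    path, v = [t], t
    while v != s:
        v = par[v]
        path.append(v)
    return path[::-1]

def build_ft(adj, s, t, L, f, banned=frozenset()):
    """A node of FT^{L,f}(s,t): it stores its path and one child per
    interior vertex of that path (that vertex fails in the subtree)."""
    p = short_sp(adj, s, t, L, banned)
    node = {"path": p, "children": {}}
    if p is None or len(banned) >= f:
        return node                      # leaf: no path, or depth f reached
    for a in p[1:-1]:
        node["children"][a] = build_ft(adj, s, t, L, f, banned | {a})
    return node

def ft_query(node, F):
    """If the node's path avoids all failures in F, report it; otherwise
    descend into the child indexed by a failed element on the path."""
    while True:
        p = node["path"]
        if p is None:
            return None, float("inf")
        hit = next((a for a in F if a in p), None)  # BST makes this O(f log L)
        if hit is None:
            return p, len(p) - 1
        node = node["children"][hit]
```

On the $5$-vertex graph consisting of the cycle $0$-$1$-$2$-$3$ plus the shortcut $0$-$4$-$3$, the root of $FT^{3,1}(0,3)$ stores $0$-$4$-$3$, and querying with the failure $\{4\}$ descends once and reports $0$-$1$-$2$-$3$.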
The query procedure is as follows. Let $P^L_G(s,t)$ be the path stored in the root of $FT^{L,f}(s,t)$ (if the root of $FT^{L,f}(s,t)$ contains $\emptyset$ then we output that $d^L_G(s,t,F) = \infty$). First we check if $P^L_G(s,t) \cap F = \emptyset$ by checking if any of the elements $a_1 \in F$ appear in $BST^L(s,t)$ (which takes $O(\log L)$ time for each element $a_1 \in F$). If $P^L_G(s,t) \cap F = \emptyset$ we output $d^L_G(s,t,F) = d^L_G(s,t)$ (as $P^L_G(s,t)$ does not contain any of the vertices or edges in $F$). Otherwise, let $a_1 \in P^L_G(s,t) \cap F$. We continue the search similarly in the subtree $FT^{L,f}(s,t, a_1)$ as follows. Let $P^L_G(s,t, \{a_1\})$ be the path stored in the root of $FT^{L,f}(s,t, a_1)$ (if the root of $FT^{L,f}(s,t, a_1)$ contains $\emptyset$ then we output that $d^L_G(s,t,F) = \infty$). First we check if $P^L_G(s,t, \{a_1\}) \cap F = \emptyset$ by checking if any of the elements $a_2 \in F$ appear in $BST^L(s,t, a_1)$ (which takes $O(\log L)$ time for each element $a_2 \in F$). If $P^L_G(s,t, \{a_1\}) \cap F = \emptyset$ we output $d^L_G(s,t,F) = d^L_G(s,t, \{a_1\})$ (as $P^L_G(s,t, \{a_1\})$ does not contain any of the vertices or edges in $F$). Otherwise, let $a_2 \in P^L_G(s,t, \{a_1\}) \cap F$. We continue the search similarly in the subtrees $FT^{L,f}(s,t, a_1, a_2)$, $FT^{L,f}(s,t, a_1, a_2, \ldots, a_i)$ until we either reach a leaf node which contains $\emptyset$ (and in this case we output that $d^L_G(s,t,F) = \infty$) or we find a path $P^L_G(s,t, \{a_1, \ldots, a_i\})$ such that $P^L_G(s,t, \{a_1, \ldots, a_i\}) \cap F = \emptyset$ and then we output $d^L_G(s,t,F) = d^L_G(s,t, \{a_1, \ldots, a_i\})$. In Section \ref{sec:ft-trees-appendix} we prove the following lemma. \begin{lemma} \label{lemma:query} Given the tree $FT^{L,f}(s,t)$ and a set of failures $F \subset V \cup E$ with $|F| \le f$, the query procedure computes the distance $d^L_G(s,t,F)$ in $O(f^2 \log L)$ time. 
\end{lemma} We are now ready to prove Lemma \ref{lemma:small-num-of-intervals}, asserting that $|{\cal D}_{f}| = O(n^{2+\epsilon})$. \begin{proof} [Proof of Lemma \ref{lemma:small-num-of-intervals}] Let $L=n^{\epsilon/f}$ and let ${\cal D}$ be the set of all the unique shortest paths $P^L_G(s,t, \{a_1, \ldots, a_i\})$ stored in all the nodes of all the trees $\{ FT^{L,f}(s,t) \}_{s,t \in V}$ (see Section \ref{sec:assumption-unique} for more details on the assumption of unique shortest paths in our algorithms). Since the number of nodes in every tree $FT^{L,f}(s,t)$ is at most $L^f = (n^{\epsilon/f})^f = n^{\epsilon}$, and there are $O(n^2)$ trees (one tree for every pair of vertices $s,t \in V$), we get that the number of nodes in all the trees $\{ FT^{L,f}(s,t) \}_{s,t \in V}$ is $O(n^{2+\epsilon})$ and hence $|{\cal D}| = O(n^{2+\epsilon})$. We prove that ${\cal D}_{f} \subseteq {\cal D}$. By definition, ${\cal D}_{f}$ contains all the intervals (subpaths containing $n^{\epsilon/f}$ edges) of all the replacement paths $P_G(s,t,F)$ for every $s,t \in V, F \subseteq E \cup V$ with $|F| \le f$. Let $P \in {\cal D}_{f}$, so $P$ is a subpath containing $n^{\epsilon/f}$ edges of some replacement path $P_G(s,t,F)$. Let $u$ be the first vertex of $P$, and let $v$ be the last vertex of $P$. Then $P$ is a shortest path from $u$ to $v$ in $G\setminus F$, and since we assume that the shortest paths our algorithms compute are unique (see Section \ref{sec:assumption-unique}), $P = P_G(u,v,F)$ is the unique shortest path from $u$ to $v$ in $G\setminus F$. Since $P$ is a path on exactly $L = n^{\epsilon/f}$ edges, it holds that $P = P_G(u,v,F) = P^L_G(u,v,F)$.
According to the query procedure in the tree $FT^{L,f}(u,v)$ and Lemma \ref{lemma:query}, if we query the tree $FT^{L,f}(u,v)$ with $(u,v,F)$ then we reach a node $FT^{L,f}(u,v, a_1, \ldots, a_i)$ which contains the path $P^L_G(u,v, \{a_1, \ldots, a_i\})$ with $\{a_1, \ldots, a_i \} \subseteq F$ such that $P^L_G(u,v, \{a_1, \ldots, a_i\}) = P^L_G(u,v,F) = P$ is the shortest $u$-to-$v$ path in $G\setminus F$. Hence, $P \in {\cal D}$, and thus ${\cal D}_{f} \subseteq {\cal D}$ and $|{\cal D}_{f}| \le |{\cal D}| = O(n^{2+\epsilon})$. \end{proof} \subsection{Step 2: Efficient Construction of the Fault-Tolerant Trees - Computing the Paths ${\cal D}_f$} \label{sec:dynamic-programming} Recall that we defined the trees $FT^{L,f}(u,v)$ with respect to the parameters $f$ (the maximum number of failures) and $L$ (where we search for shortest paths among paths of at most $L$ edges). The idea is to build the trees $FT^{L,f}(u,v)$ using dynamic programming, using the trees $FT^{L-1, f}(u,v)$ with parameters $f, L-1$ as subproblems. Assume we have already built the trees $FT^{i,f}(u,v)$ for all $u,v \in V$ and some $1 \le i < L$; we describe how to build the trees $FT^{i+1,f}(u,v)$. Let $(u,v,F)$ be a query for which we want to compute the distance $d^{i+1}(u,v,F)$ (as part of the construction of the tree $FT^{i+1,f}(u,v)$). We scan all the edges $(u,z) \in E$ and query the tree $FT^{i, f}(z,v)$ with the set $F$ to find the distance $d^i(z,v,F)$. Querying the tree $FT^{i,f}(z,v)$ takes $O(f^2 \log i) = O(f^2 \log L)$ time as described in Lemma \ref{lemma:query} (note that $f^2 \log L = \widetilde{O}(1)$ for $f \le \log n$ as $L \le n$), so we run $O(\text{out-degree}(u))$ such queries and take the minimum according to Equation \ref{eq:dynamic-programming}.
\begin{eqnarray} \label{eq:dynamic-programming} d^{i+1}(u,v, F) = \min_{z} \{ \omega(u,z) + d^i(z,v,F) \ | \ (u,z) \in E \ \text{ and } \ u,z,(u,z) \not \in F \} \end{eqnarray} \begin{eqnarray} \label{eq:parent-pointer} \textrm{parent}^{i+1}(u,v, F) = \arg \min_{z} \{ \omega(u,z) + d^i(z,v,F) \ | \ (u,z) \in E \ \text{ and } \ u,z,(u,z) \not \in F \} \end{eqnarray} Note that in Equation \ref{eq:dynamic-programming} we assume that for every vertex $u \in V$ the graph $G$ contains the self-loop $(u,u) \in E$ with $\omega(u,u) = 0$, so that $d^{i+1}(u,v,F) \le d^{i}(u,v,F)$. Thus, the time to compute $d^{i+1}(u,v,F)$ is $\widetilde{O}(\text{out-degree}(u))$. Next, we describe how to reconstruct the path $P^{i+1}(u,v,F)$ in $O(L)$ additional time. We reconstruct the shortest path $P^{i+1}(u,v,F)$ by simply following the (at most $L$) parent pointers. In more detail, let $z = \textrm{parent}^{i+1}(u,v, F)$ be the vertex defined according to Equation \ref{eq:parent-pointer}. We reconstruct the shortest path $P^{i+1}(u,v,F)$ by concatenating $(u,z)$ with the shortest path $P^{i}(z,v,F)$ (which we reconstruct in the same way). Thus we can reconstruct $P^{i+1}(u,v,F)$ edge by edge in constant time per edge, and hence it takes $O(L)$ time to reconstruct the path $P^{i+1}(u,v,F)$, which contains at most $L$ edges. The tree $FT^{i,f}(u,v)$ contains $i^f \le L^f$ nodes, and thus all the trees $\{FT^{i,f}(u,v)\}$ for all $i \le L, u,v \in V$ contain $O(n^2 L^{f+1})$ nodes together. In each such node we compute the distance $d^i(u,v, \{a_1, \ldots, a_j\})$ in $\widetilde{O}(\text{out-degree}(u))$ time and reconstruct the path $P^i(u,v, \{a_1, \ldots, a_j\})$ in additional $O(L)$ time. Therefore, computing all the distances $d^i(u,v, \{a_1, \ldots, a_j\})$ and all the paths $P^i(u,v, \{a_1, \ldots, a_j\})$ in all the nodes of all the trees $\{FT^{i,f}(u,v) \}_{u,v \in V, 1 \le i \le L}$ takes $\widetilde{O}(\sum_{i \le L, u,v \in V} {L^f(\text{out-degree}(u)+L)}) = \widetilde{O}(mnL^{f+1} + n^2L^{f+2})$ time.
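A direct Python rendering of the recurrence and the parent-pointer reconstruction may clarify the mechanics (our own illustrative code; here `adj_w` is a weighted adjacency list, and carrying $d^i$ over to $d^{i+1}$ plays the role of the weight-$0$ self-loops):

```python
def bounded_dists(adj_w, v, L, F=frozenset()):
    """Compute d^L(u,v,F) for all u by iterating the recurrence
    d^{i+1}(u,v,F) = min_z { w(u,z) + d^i(z,v,F) } over edges (u,z)
    with u, z and (u,z) all surviving F. The copy nd = dict(d)
    simulates the weight-0 self-loop (d^{i+1} <= d^i). Parent
    pointers are recorded for path reconstruction."""
    INF = float("inf")
    d = {u: INF for u in adj_w}
    d[v] = 0
    parent = {}
    for _ in range(L):
        nd = dict(d)
        for u in adj_w:
            if u in F:
                continue
            for z, w in adj_w[u]:
                if z in F or (u, z) in F:
                    continue
                if w + d[z] < nd[u]:
                    nd[u] = w + d[z]
                    parent[u] = z
        d = nd
    return d, parent

def rebuild(parent, u, v):
    """Follow parent pointers to reconstruct the u-to-v path."""
    path = [u]
    while path[-1] != v:
        path.append(parent[path[-1]])
    return path
```

For instance, with edges $(0,1)$ and $(1,2)$ of weight $1$ and $(0,2)$ of weight $5$, the bound $L=1$ forces the direct edge, while $L=2$ recovers the cheaper two-edge path, and failing vertex $1$ forces the direct edge again.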
Substituting $L = n^{\epsilon/f}$, we get an algorithm that computes the trees $\{FT^{L,f}(u,v) \}_{u,v \in V}$ in $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ time. This proves the following lemma. \begin{lemma} \label{lemma:ft-tree-eps-plus} One can deterministically construct the trees $FT^{L,f}(s,t)$ for every $s,t \in V$ in $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ time. \end{lemma} In Section \ref{sec:positive} we further reduce the runtime to $\widetilde{O}(mn^{1+\epsilon})$ by using dynamic programming only for computing the first $f-1$ levels of the trees $FT^{L,f}(s,t)$ and then applying Dijkstra's algorithm in a sophisticated manner to compute the last layer of the trees $FT^{L,f}(s,t)$. In addition, we also reduce the runtime of the greedy pivots selection algorithm from $\widetilde{O}(n^{2+\epsilon +\epsilon/f})$ to $\widetilde{O}(n^{2+\epsilon})$. \section{Deterministic Replacement Paths Algorithm} \label{appendix:sec:replacement} In this section we add the missing parts of the $\widetilde{O}(m\sqrt{n})$ time deterministic replacement paths algorithm, derandomizing the replacement paths algorithm of Roditty and Zwick \cite{Roditty2005}. Recall the notion of a long triple $(s,t,e)$ as in Definition \ref{def:long-triple}: for $s,t \in V, e \in P_G(s,t)$, the triple $(s,t,e)$ is a {\sl long} triple if every replacement path from $s$ to $t$ avoiding $e$ has a detour part containing more than $\lceil \sqrt{n} \rceil$ edges. In order for this paper to be self-contained, let us start by describing the randomized $\widetilde{O}(m\sqrt{n})$ replacement paths algorithm of Roditty and Zwick \cite{Roditty2005}. \subsection{The Randomized $\widetilde{O}(m\sqrt{n})$ Replacement Paths Algorithm of Roditty and Zwick - a Summary} \label{appendix:sec:rodittys-algorithm} The algorithm of Roditty and Zwick described in \cite{Roditty2005} takes $\widetilde{O}(m \sqrt{n})$ time.
Their algorithm handles separately the case that a replacement path has a short detour containing at most $\lceil \sqrt{n} \rceil$ edges, and the case that a replacement path has a long detour containing more than $\lceil \sqrt{n} \rceil$ edges. The first case is solved deterministically, while the second case is solved by a randomized algorithm, as described below. \subsubsection{Handling Short Detours} Roditty and Zwick's algorithm finds replacement paths with short detours (containing at most $\lceil \sqrt{n} \rceil$ edges) deterministically. Let $P = P_G(s,t) = <v_0, \ldots, v_k>$ be the shortest path from $s$ to $t$, let $G' = G \setminus E(P_G(s,t))$ be the graph $G$ after removing the edges of $P$, and let $\ell = \lfloor \frac{k}{2\lceil \sqrt{n} \rceil} \rfloor$. As explained in Section \ref{sec:preliminaries}, for every triple $(s,t,e) \in V \times V \times E$, every replacement path $P_G(s,t,e)$ can be partitioned into a common prefix $\mathrm{CommonPref}_{s,t,e}$, a disjoint detour $\mathrm{Detour}_{s,t,e}$ and a common suffix $\mathrm{CommonSuff}_{s,t,e}$. In this part, which handles short detours, we would like to find all distances $d_G(s,t,e)$ such that there exists at least one replacement path $P_G(s,t,e)$ whose detour part $\mathrm{Detour}_{s,t,e}$ contains at most $\lceil \sqrt{n} \rceil$ edges. The algorithm for handling short detours has two parts. The first part computes a table $RD[i,j]$ which is defined as follows. For every $0 \le i \le k$ and $0 \le j \le \lceil \sqrt{n} \rceil -1$, the entry $RD[i,j]$ gives the length of the shortest path in $G'$ ({\sl i.e.}, the detour) starting at $v_{i}$ and ending at $v_{i+j}$, if its length is at most $\lceil \sqrt{n} \rceil$, or indicates that $d_{G'}(v_{i}, v_{i+j}) > \lceil \sqrt{n} \rceil$. The second part uses the table of detours $RD$ to find replacement paths whose detour part contains at most $\lceil \sqrt{n} \rceil$ edges.
{\bf First part: computing the table $RD$.} The algorithm builds an auxiliary graph $G^A$ obtained by adding a new source vertex $r$ to $G'$ and an edge $(r, v_{2q \lceil \sqrt{n} \rceil})$ of weight $\omega(r, v_{2q \lceil \sqrt{n} \rceil})=q \lceil \sqrt{n} \rceil$ for every $0 \le q \le \ell$. The weight of all the edges in $E \setminus E(P)$ is set to $1$. Then the algorithm runs Dijkstra's algorithm from $r$ in $G^A$ to find all the best short detours ({\sl i.e.}, the shortest paths on at most $\lceil \sqrt{n} \rceil$ edges from $v_i$ to $v_j$ in $G'$ where $1 \le i < j \le k$) that start in one of the vertices $v_0, v_{2 \lceil \sqrt{n} \rceil}, \ldots, v_{2\ell \lceil \sqrt{n} \rceil}$. See Figure \ref{fig:the-graph-G-A} for an illustration of the graph $G^A$. \begin{figure}\caption{An illustration of the auxiliary graph $G^A$: the new source $r$ is connected to the vertices $v_0, v_{2\lceil \sqrt{n} \rceil}, \ldots, v_{2\ell\lceil \sqrt{n} \rceil}$ of $P$ by edges of increasing weights.}\label{fig:the-graph-G-A} \end{figure} In a sense, the algorithm has already found all the relevant detours from about $\frac{k}{\sqrt{n}}$ of the vertices. More precisely, the algorithm has found the entries $RD[i,j]$ for $0 \le i \le k, 0 \le j \le \lceil \sqrt{n} \rceil -1$ such that $(i \mod 2 \lceil \sqrt{n} \rceil) = 0$. By running this algorithm in $O(\sqrt{n})$ phases (in phase $p$ we compute the entries $RD[i,j]$ such that $0 \le i \le k, 0 \le j \le \lceil \sqrt{n} \rceil-1$ and $(i \mod 2\lceil \sqrt{n} \rceil) = p$), we can find all the relevant detours starting from all the nodes of the path $P$. That is, we run this algorithm $2\lceil \sqrt{n} \rceil-1$ more times to find short detours emanating from the other vertices of $P_G(s,t)$.
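One phase of this first part can be sketched in Python as a multi-source Dijkstra with distance offsets, which is equivalent to adding the auxiliary source $r$ (our own illustrative code; `adj` is assumed to hold only the edges of $G'$, i.e., with the path edges already removed, and all edges have unit weight):

```python
import heapq

def phase_detours(adj, path, B, p=0):
    """One phase: connect a virtual source r to v_{p+2qB} with an edge of
    weight q*B for each q, run Dijkstra in G', and keep
    d_{G'}(v_{p+2qB}, v_{p+2qB+j}) = d(r, .) - q*B only when
    d(r, .) <= (q+1)*B, as the certification theorem requires."""
    k = len(path) - 1
    qs = range((k - p) // (2 * B) + 1)
    dist = {}
    pq = [(q * B, path[p + 2 * q * B]) for q in qs]  # r's edges, as offsets
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if u in dist:
            continue
        dist[u] = d
        for w in adj[u]:
            if w not in dist:
                heapq.heappush(pq, (d + 1, w))
    RD = {}
    for q in qs:
        i = p + 2 * q * B
        for j in range(B):
            if i + j <= k:
                u = path[i + j]
                if u in dist and dist[u] <= (q + 1) * B:
                    RD[(i, j)] = dist[u] - q * B
    return RD
```

For instance, on the path $0$-$1$-$2$-$3$-$4$ with detour edges $0$-$5$ and $5$-$1$ in $G'$ and $B = 2$, the phase certifies the detour $d_{G'}(v_0, v_1) = 2$.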
In the $p^\text{th}$ phase (for $0 \le p \le 2\lceil \sqrt{n} \rceil-1$) we find the short detours emanating from the vertices $v_p, v_{p+2\lceil \sqrt{n} \rceil}, \ldots, v_{p+2\ell\lceil \sqrt{n} \rceil}$ by running the algorithm on the graph $G^A$ obtained by adding to $G'$ the edges $(r, v_{p+2q\lceil \sqrt{n} \rceil})$ of weight $\omega(r, v_{p+2q\lceil \sqrt{n} \rceil})=q\lceil \sqrt{n} \rceil$ for every $0 \le q \le \lfloor \frac{k-p}{2\lceil \sqrt{n} \rceil} \rfloor$, and we store the computed detours in the table $RD$. This part takes $\widetilde{O}(m\sqrt{n})$ time, as we run $O(\sqrt{n})$ instances of Dijkstra's algorithm, each of which takes $\widetilde{O}(m)$ time. The correctness of this algorithm for computing short detours is based on the following theorem from \cite{Roditty2005}. \begin{theorem} [Theorem 1 in \cite{Roditty2005}] \label{appendix:thm:roditty-short-detours} If $d_{G^A}(r, v_{2q\lceil \sqrt{n} \rceil+j}) \le (q + 1)\lceil \sqrt{n} \rceil$, where $0 \le q \le \ell$ and $0 \le j \le \lceil \sqrt{n} \rceil-1$, then $d_{G'}(v_{2q\lceil \sqrt{n} \rceil}, v_{2q\lceil \sqrt{n} \rceil+j}) = d_{G^A}(r, v_{2q\lceil \sqrt{n} \rceil+j}) - q\lceil \sqrt{n} \rceil$. Otherwise, $d_{G'}(v_{2q\lceil \sqrt{n} \rceil}, v_{2q\lceil \sqrt{n} \rceil+j}) > \lceil \sqrt{n} \rceil$. \end{theorem} The basic idea of the proof of Theorem \ref{appendix:thm:roditty-short-detours} is the following. Let $0 \le q \le \ell$ and $0 \le j \le \lceil \sqrt{n} \rceil - 1$. If the distance from $r$ to $v_{2q\lceil \sqrt{n} \rceil+j}$ in $G^A$ is less than $(q+1)\lceil \sqrt{n} \rceil$ then it must be that the shortest path from $r$ to $v_{2q\lceil \sqrt{n} \rceil+j}$ starts with the edge $(r,v_{2q\lceil \sqrt{n} \rceil})$.
Otherwise, if it starts with the edge $(r,v_{2z\lceil \sqrt{n} \rceil})$ for $z > q$ then the weight of this edge alone is at least $(q+1)\lceil \sqrt{n} \rceil$, contradicting the assumption that the distance from $r$ to $v_{2q\lceil \sqrt{n} \rceil+j}$ in $G^A$ is less than $(q+1)\lceil \sqrt{n} \rceil$. On the other hand, if it starts with the edge $(r,v_{2z\lceil \sqrt{n} \rceil})$ for $z < q$ then the length of any such path is at least $z \lceil \sqrt{n} \rceil + d_{G'}(v_{2z\lceil \sqrt{n} \rceil}, v_{2q\lceil \sqrt{n} \rceil+j}) \ge z \lceil \sqrt{n} \rceil + d_{G}(v_{2z\lceil \sqrt{n} \rceil}, v_{2q\lceil \sqrt{n} \rceil+j}) \ge z\lceil \sqrt{n} \rceil + 2(q-z)\lceil \sqrt{n} \rceil \ge (q+1)\lceil \sqrt{n} \rceil$, where the first inequality holds since distances in $G'$ (which is obtained by removing the edges of $P_G(s,t)$ from $G$) are at least the corresponding distances in $G$, and the last inequality holds since $z \le q-1$. {\bf Second part: using the table $RD$ to find replacement paths whose detour part contains at most $\lceil \sqrt{n} \rceil$ edges.} To find the replacement path from $s$ to $t$ that avoids the edge $(v_i, v_{i+1})$ and uses a short detour, the algorithm finds indices $i - \lceil \sqrt{n} \rceil \le a \le i$ and $i < b \le i + \lceil \sqrt{n} \rceil$ for which the expression $d_G(s,v_{a}) + RD[a,b-a] + d_G(v_{b}, t) = a + RD[a, b - a] + (k - b)$ is minimized. The algorithm computes this for every edge $(v_i, v_{i+1}) \in P_G(s,t)$ using a priority queue $Q$ and a sliding window approach, as follows. When looking for the shortest replacement path for the edge $(v_i, v_{i+1})$, the priority queue $Q$ contains all pairs $(a, b)$ such that $i-\lceil \sqrt{n} \rceil \le a \le i$ and $i < b \le i+\lceil \sqrt{n} \rceil$. The key associated with a pair $(a, b)$ is, as mentioned above, $a+RD[a, b-a]+(k-b)$.
At the start of the iteration corresponding to the edge $(v_i, v_{i+1})$, the algorithm inserts the pairs $(i, j)$, for $i + 1 \le j \le i + \lceil \sqrt{n} \rceil$, into $Q$, and removes from it the pairs $(j, i)$, for $i-\lceil \sqrt{n} \rceil \le j \le i$. A find-min operation on $Q$ then returns the minimal pair $(a, b)$. The complexity of this process is only $\widetilde{O}(n \lceil \sqrt{n} \rceil)$: for every vertex $v_i$ (for every $0 \le i \le k$) we perform $O(\sqrt{n})$ insert operations (for all values of $j$ such that $i + 1 \le j \le i + \lceil \sqrt{n} \rceil$), $O(\sqrt{n})$ delete operations (for all values of $j$ such that $i-\lceil \sqrt{n} \rceil \le j \le i$), and a single find-min operation. In total, we have $O(n \sqrt{n})$ insert/delete/find-min operations, which take $\widetilde{O}(n \sqrt{n})$ time. Thus, the total running time of the algorithm for handling short detours is $\widetilde{O}(m \sqrt{n})$. \subsubsection{Handling Long Detours} To find long detours, the algorithm samples a random set $R$ as in Lemma \ref{lem:sampling-roditty}, where each vertex is sampled independently uniformly at random with probability $(4 \ln n)/\sqrt{n}$; the set $R$ has expected size $\widetilde{O}(\sqrt{n})$. For every sampled vertex $r \in R$ and for every edge $e_i = (v_i, v_{i+1})$ (where $0 \le i \le k-1$) we find the shortest replacement path $P_G(s,t,e_i)$ which passes through $r$. This algorithm has two steps as well. In the first step, for every sampled vertex $r \in R$, we construct two BFS trees from $r$: one in $G' = G \setminus E(P)$ and one in the graph obtained from $G'$ by reversing all the edge directions. This computes the distances $d_{G'}(r, v)$ and $d_{G'}(v, r)$, for every $r \in R$ and $v \in V$. In the second step, we run the following procedure for every sampled vertex $r \in R$.
Given $r \in R$, we find for every edge $e_i = (v_i, v_{i+1}) \in P_G(s,t)$ the shortest path from $s$ to $t$ avoiding $e_i$ which passes through $r$. To do so, we construct two priority queues $Q_{in}[r]$ and $Q_{out}[r]$ containing indices of vertices on $P_G(s,t)$. During the computation of a replacement path for the edge $e_i = (v_i, v_{i+1})$, we would like to run a find-min operation on the priority queue $Q_{in}[r]$ to find the shortest path from $s$ to $r$ which avoids $e_i$, and a find-min operation on the priority queue $Q_{out}[r]$ to find the shortest path from $r$ to $t$ which avoids $e_i$. To do so, during the computation of a replacement path for the edge $e_i = (v_i, v_{i+1})$ we would like to have $Q_{in}[r] = \{0, 1, \ldots , i\}$ and $Q_{out}[r] = \{i + 1, \ldots , k\}$, such that an element $j \in Q_{in}[r]$ has its key in $Q_{in}[r]$ equal to $j+d_{G'}(v_j, r)$ and an element $j \in Q_{out}[r]$ has its key in $Q_{out}[r]$ equal to $d_{G'}(r, v_j)+(k-j)$. Note that we have already computed $d_{G'}(v_j, r)$ using the BFS tree rooted at $r$ in the graph $G'$ with reversed edge directions, and we have already computed $d_{G'}(r, v_j)$ using the BFS tree rooted at $r$ in the graph $G'$. In order to achieve that at iteration $i$ (for $0 \le i \le k-1$) we have $Q_{in}[r] = \{0, 1, \ldots , i\}$ and $Q_{out}[r] = \{i + 1, \ldots , k\}$, we apply the following sliding window approach. We initialize $Q_{in}[r]$ to contain only the element $0$ with key equal to $d_{G'}(v_0, r)$, and $Q_{out}[r] = \{1, \ldots , k\}$ such that an element $j \in Q_{out}[r]$ has its key in $Q_{out}[r]$ equal to $d_{G'}(r, v_j)+(k-j)$. Then we compute the length of the shortest path from $s$ to $t$ avoiding $e_0$ and passing through $r$ as find-min($Q_{in}[r]) + $ find-min($Q_{out}[r])$.
Next, we remove from $Q_{out}[r]$ the element $1$ and insert it into $Q_{in}[r]$ with its key equal to $1 + d_{G'}(v_1, r)$, and compute the length of the shortest path from $s$ to $t$ avoiding $e_1$ and passing through $r$ as find-min($Q_{in}[r]) + $ find-min($Q_{out}[r])$. In general, after finishing the $i^\text{th}$ iteration we run the $(i+1)^\text{th}$ iteration as follows. We remove from $Q_{out}[r]$ the element $i+1$ and insert it into $Q_{in}[r]$ with its key equal to $i+1 + d_{G'}(v_{i+1}, r)$, and compute the length of the shortest path from $s$ to $t$ avoiding $e_{i+1}$ and passing through $r$ as find-min($Q_{in}[r]) + $ find-min($Q_{out}[r])$. Finally, for every edge $e_i = (v_i, v_{i+1})$ we iterate over all vertices $r \in R$ and find the shortest path from $s$ to $t$ avoiding $e_i$ that passes through one of the vertices of $R$. When $(s,t,e)$ is a long triple, there exists at least one replacement path $P_G(s,t,e)$ whose detour part contains more than $\lceil \sqrt{n} \rceil$ edges, and thus with high probability at least one of the vertices of the detour is sampled into the set $R$ (since we sample every vertex independently uniformly at random with probability $(4 \ln n)/\sqrt{n}$). The total expected time of computing these distances is $\widetilde{O}(m\sqrt{n})$: first, there are $\widetilde{O}(\sqrt{n})$ randomly chosen vertices in $R$ and every BFS computation takes $O(m+n)$ time. Second, for every $r \in R$ we perform $O(n)$ insert, delete and find-min operations on the queues $Q_{in}[r]$ and $Q_{out}[r]$, which takes $\widetilde{O}(n)$ time per vertex $r \in R$, and hence $\widetilde{O}(n \sqrt{n})$ expected time in total. Finally, for every edge $e_i$ we iterate over all the vertices $r \in R$ to find the minimum length of a shortest path from $s$ to $t$ avoiding $e_i$ which passes through one of the vertices $r \in R$.
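Given the two precomputed distance arrays for a pivot $r$ (below, `din[j]` stands for $d_{G'}(v_j, r)$ and `dout[j]` for $d_{G'}(r, v_j)$; both names are ours), the per-pivot sliding window can be sketched in Python with two heaps. Instead of eagerly deleting the moved index from $Q_{out}[r]$, the sketch discards stale entries lazily when they surface at the top of the heap, which has the same amortized cost:

```python
import heapq

def through_pivot(din, dout):
    """For each edge e_i (i = 0..k-1) on P, return the length of the
    shortest s-to-t path through the pivot avoiding e_i, i.e.,
    min_{j <= i} (j + din[j])  +  min_{j > i} (dout[j] + k - j)."""
    k = len(din) - 1
    INF = float("inf")
    q_in = [(0 + din[0], 0)]                     # holds indices <= i
    q_out = [(dout[j] + k - j, j) for j in range(1, k + 1)]
    heapq.heapify(q_out)
    res = []
    for i in range(k):                           # edge e_i = (v_i, v_{i+1})
        while q_out and q_out[0][1] <= i:        # lazily drop moved indices
            heapq.heappop(q_out)
        best_out = q_out[0][0] if q_out else INF
        res.append(q_in[0][0] + best_out)        # find-min + find-min
        heapq.heappush(q_in, (i + 1 + din[i + 1], i + 1))
    return res
```

Note that $Q_{in}[r]$ never needs deletions, since its window only grows; this matches the description above, where indices move in one direction only.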
There are $O(n)$ edges $e_i \in P_G(s,t)$, and for every edge $e_i$ we iterate over $O(|R|)$ vertices which is $\widetilde{O}(\sqrt{n})$ in expectation, and thus the total runtime of this part is $\widetilde{O}(n\sqrt{n})$. In total we get that the algorithm takes $\widetilde{O}(m\sqrt{n})$ time. \subsection{The Only Randomization Used in The Replacement Paths Algorithm of Roditty and Zwick} As mentioned above, the algorithm by Roditty and Zwick handles separately the case that the replacement path has a short detour containing at most $\lceil \sqrt{n} \rceil$ edges, and the case that the replacement path has a long detour containing more than $\lceil \sqrt{n} \rceil$ edges. The first case is solved deterministically. The second case is solved by first sampling a subset of vertices $R$ according to Lemma \ref{lem:sampling-roditty}, where each vertex is sampled independently at random with probability $c \ln n/ \sqrt{n}$ for a large enough constant $c > 0$. Using this uniform sampling, it holds with high probability (of at least $1-n^{-c+2}$) that for every long triple $(s,t,e)$, the detour $\mathrm{Detour}_{s,t,e}$ of the replacement path $P_G(s,t,e)$ contains at least one vertex of $R$. As the authors of \cite{Roditty2005} write, the choice of the random set $R$ is the only randomization used by their algorithm. More precisely, the only randomization used in the algorithm of \cite{Roditty2005} is described in the following lemma (to be self-contained, we re-state the lemma here). \begin{lemma} [proved in \cite{Roditty2005}] \label{appendix:lemma:the-set-R-random} Let $R \subseteq V$ be a random subset obtained by selecting each vertex, independently, with probability $(c \ln n)/\sqrt{n}$, for some constant $c>0$.
Then with high probability of at least $1 - n^{-c+2}$, the set $R$ contains $\widetilde{O}(\sqrt{n})$ vertices and for every long triple $(s,t,e)$ there exists a replacement path $P_G(s,t,e)$ whose detour part contains at least one of the vertices of $R$. \end{lemma} \subsection{Derandomizing the Replacement Paths Algorithm of Roditty and Zwick - Outline} To obtain a deterministic algorithm for the replacement paths problem and to prove Theorem \ref{thm:replacement}, we prove the following deterministic alternative to Lemma \ref{appendix:lemma:the-set-R-random} by a clever choice of the set $R$ of $\widetilde{O}(\sqrt{n})$ vertices. \begin{lemma} [Our derandomized version of Lemma \ref{appendix:lemma:the-set-R-random}] \label{appendix:lemma:the-set-R-deterministic} There exists an $\widetilde{O}(m \sqrt{n})$ time deterministic algorithm which computes a set $R \subseteq V$ of $\widetilde{O}(\sqrt{n})$ vertices, such that for every long triple $(s,t,e)$ there exists a replacement path $P_G(s,t,e)$ whose detour part contains at least one of the vertices of $R$. \end{lemma} Following the above description, in order to prove Theorem \ref{thm:replacement}, that there exists an $\widetilde{O}(m\sqrt{n})$ deterministic replacement paths algorithm, it is sufficient to prove the derandomization lemma (Lemma \ref{appendix:lemma:the-set-R-deterministic}); we do so in the following sections. Following is an overview of our approach. We compute in $\widetilde{O}(m\sqrt{n})$ time a set ${\cal D}_{n}$ of at most $n$ paths, each path of length exactly $\lceil \sqrt{n} \rceil$. The crucial part of our algorithm is in efficiently computing the set of paths ${\cal D}_{n}$ with the following property; for every long triple $(s,t,e)$ there exists a path $D \in {\cal D}_{n}$ and a replacement path $P_G(s,t,e)$ such that $D$ is contained in the detour part of $P_G(s,t,e)$. More precisely, we prove the following lemma.
\begin{lemma} \label{appendix:thm:p-sqrt} There exists a deterministic $\widetilde{O}(m \sqrt{n})$ algorithm for computing a set ${\cal D}_{n}$ of at most $n$ paths, each path of length exactly $\lceil \sqrt{n} \rceil$, with the following property; for every long triple $(s,t,e)$ there exists a path $D \in {\cal D}_{n}$ and a replacement path $P_G(s,t,e)$ such that $D$ is contained in the detour part of $P_G(s,t,e)$. \end{lemma} After computing ${\cal D}_{n}$ we obtain the set of vertices $R$ by running the GreedyPivotsSelection(${\cal D}_{n}$) algorithm as described in Section \ref{sec:framework} and as stated in Lemma \ref{lemma:greedy}. Given ${\cal D}_{n}$, the deterministic greedy selection algorithm GreedyPivotsSelection$(\mathcal{D}_{n})$ computes a set $R \subset V$ of $\widetilde{O}(\sqrt{n})$ vertices in $\widetilde{O}( n\sqrt{n})$ time with the following property; every path $D \in \mathcal{D}_{n}$ contains at least one of the vertices of $R$. Using Lemma \ref{appendix:thm:p-sqrt} and Lemma \ref{lemma:greedy} we can prove the derandomization Lemma \ref{appendix:lemma:the-set-R-deterministic} and thus prove Theorem \ref{thm:replacement}. \begin{proof} [Proof of Theorem \ref{thm:replacement}] According to Lemma \ref{appendix:thm:p-sqrt} we deterministically compute in $\widetilde{O}(m \sqrt{n})$ time the set $\mathcal{D}_{n}$ of at most $n$ paths, each path of length exactly $\lceil \sqrt{n} \rceil$, with the following property; for every long triple $(s,t,e)$ there exists a path $D \in {\cal D}_{n}$ and a replacement path $P_G(s,t,e)$ such that $D$ is contained in the detour part of $P_G(s,t,e)$. Then, we run the greedy selection algorithm on the set of paths $\mathcal{D}_{n}$.
According to Lemma \ref{lemma:greedy}, the greedy selection algorithm takes $\widetilde{O}(n \sqrt{n})$ time, and computes a set $R \subset V$ of $\widetilde{O}(\sqrt{n})$ vertices such that every path $D \in \mathcal{D}_{n}$ contains at least one of the vertices of $R$. We get that for every long triple $(s,t,e)$ there exists a path $D \in {\cal D}_{n}$ and a replacement path $P_G(s,t,e)$ such that $D$ is contained in the detour part of $P_G(s,t,e)$, and the path $D \in \mathcal{D}_{n}$ contains at least one of the vertices of $R$. Hence, for every long triple $(s,t,e)$ there exists a replacement path $P_G(s,t,e)$ whose detour part contains at least one of the vertices of $R$. This proves Lemma \ref{appendix:lemma:the-set-R-deterministic}. Thus, we derandomize the randomized selection of the set of vertices $R$ in Roditty and Zwick's algorithm \cite{Roditty2005}, and as this is the only randomization used by their algorithm we obtain an $\widetilde{O}(m\sqrt{n})$ deterministic algorithm for the replacement paths problem in unweighted directed graphs. \end{proof} In the following section we prove Lemma \ref{appendix:thm:p-sqrt}. In Section \ref{sec:reusing-common-subpaths} we already mathematically defined the set ${\cal D}_{n}$, and in Section \ref{sec:p-sqrt-computation} we describe a deterministic algorithm for computing $\mathcal{D}_{n}$ in $\widetilde{O}(m\sqrt{n})$ time. \subsection{An $\widetilde{O}(m\sqrt{n})$ Deterministic Algorithm for Computing ${\cal D}_{n}$} \label{sec:p-sqrt-computation} In this section we describe how to compute ${\cal D}_{n}$ in $\widetilde{O}(m\sqrt{n})$ time and thus prove Lemma \ref{appendix:thm:p-sqrt}. In Section \ref{sec:rp-efficient} we presented an overview of the algorithm. We refer the reader to first read the overview in Section \ref{sec:replacement-short} before reading the following formal description of the algorithm and its analysis.
In this section we describe the algorithm more formally, and analyse its correctness and runtime. Given the shortest path $P_G(s,t) = <s = v_0, \ldots, v_k = t>$ the algorithm computes the weighted graph $G^w = (G,w)$ by taking the graph $G$ and setting the weight of the edges of $P_G(s,t)$ to be $\epsilon$ for some small $\epsilon$ such that $0 < \epsilon < 1/n$ and the weight of all other edges to be $1$. The algorithm then runs Dijkstra's algorithm from $s$ in $G^w$, builds the shortest paths tree $T_s$ rooted at $s$ and sets the distances array $d[v] = d_{G^w}(s,v)$ for every $v \in V$. In addition, the algorithm initializes the set $V^w_{\sqrt{n}}$ to be $V^w_{\sqrt{n}} \gets \{ v \in V \ | \ \lceil \sqrt{n} \rceil \le d[v] < \lceil \sqrt{n} \rceil + 1 \}$. For every vertex $v\in V^w_{\sqrt{n}}$, let $\Tilde{D}^w(v)$ be the suffix of the last $\lceil \sqrt{n} \rceil$ edges of the shortest path from $s$ to $v$ in $T_s$. Add to the set ${\cal D}^w_{\sqrt{n}}$ (initialized to the empty set) the subpath $\Tilde{D}^w(v)$ for every vertex $v\in V^w_{\sqrt{n}}$. The algorithm then removes from $T_s$ all the vertices $v$ such that $d[v] > \lceil \sqrt{n} \rceil + 1$ and sets $d[v] = \infty$. Next, the algorithm operates in $|P_G(s,t)|$ iterations. In iteration $i$, going from $i = k-1$ down to $i = 0$, the algorithm does the following. Let $T_{s,v_{i+1}}$ be the subtree of $T_s$ rooted at $v_{i+1}$. Construct the graph $G_{s,v_{i+1}}$ as follows. The set of vertices $V_{s,v_{i+1}}$ of $G_{s,v_{i+1}}$ is $V_{s,v_{i+1}} = \{s\} \cup \{v\in V \mid v \in T_{s,v_{i+1}} \setminus\{v_{i+1} \} \text{ or } ( \exists (v,v') \in E \text{ such that } v\in T_s\setminus\{v_{i+1} \} \text{ and } v' \in T_{s,v_{i+1}})\} \setminus \{v_{i+1}\}$. Essentially, $V_{s,v_{i+1}}$ contains all the vertices in the subtree of $T_s$ rooted at $v_{i+1}$ and all of their neighbours (except the vertex $v_{i+1}$ itself). The set of edges of $G_{s,v_{i+1}}$ contains two types of edges.
The first type consists of auxiliary shortcut edges: for every vertex $v\in V_{s,v_{i+1}}$ such that $v \notin T_{s,v_{i+1}}$ add an edge $(s,v)$ with weight $d[v]$. The second type consists of original edges: for every vertex $v\in T_{s,v_{i+1}} \setminus \{v_{i+1}\}$ add all its incident edges $(v,v')$ such that $v' \in V_{s,v_{i+1}}$ with weight 1. The algorithm removes from the tree $T_s$ the vertex $v_{i+1}$. The algorithm then runs Dijkstra's algorithm from $s$ in $G_{s,v_{i+1}}$ and for every vertex $v \in T_{s,v_{i+1}}\setminus \{v_{i+1}\}$ it sets $d[v] = d_{G_{s,v_{i+1}}}(s,v)$ and sets the parent of $v$ in $T_s$ to be the parent of $v$ in the computed shortest paths tree. For every vertex $v \in T_{s,v_{i+1}}$ such that $d[v] > \lceil \sqrt{n} \rceil+1$ remove $v$ from $T_s$ and set $d[v]=\infty$. For every vertex $v \in T_{s,v_{i+1}}$ such that $\lceil \sqrt{n} \rceil \leq d[v] < \lceil \sqrt{n} \rceil+1$ add $v$ to $V^w_{\sqrt{n}}$ and the subpath $\Tilde{D}^w(v)$ to ${\cal D}^w_{\sqrt{n}}$. We prove the correctness and efficiency of our algorithm in the following lemmas. \begin{lemma} \label{lem:weighted} Let $x$ be a vertex such that $x \in V_{\sqrt{n}}$ and let $D'(x)$ be the suffix of the last $\lceil \sqrt{n} \rceil$ edges of the shortest path from $s$ to $x$ in $G^w_{\rho(x)}$. Then $D'(x)$ is a shortest path from $v_{\rho(x)}$ to $x$ in $G'$. \end{lemma} \begin{proof} Since $x \in V_{\sqrt{n}}$, by definition of $V_{\sqrt{n}}$, for every index $0 \le i < \rho(x)$ it holds that $d_{G'}(v_i, x) > \lceil \sqrt{n} \rceil$. Thus, every path in $G'$ from $v_i$ to $x$ for every $i < \rho(x)$ contains more than $\lceil \sqrt{n} \rceil$ edges, and thus any path from $s$ to $x$ in $G^w_{\rho(x)}$ that does not pass through $v_{\rho(x)}$ has length at least $\lceil \sqrt{n} \rceil+1$.
Therefore, the shortest path from $s$ to $x$ in $G^w_{\rho(x)}$ passes through $v_{\rho(x)}$ and its length is $d_{G^w_{\rho(x)}}(s, v_{\rho(x)}) + d_{G^w_{\rho(x)}}(v_{\rho(x)}, x) = \epsilon \rho(x) + d_{G'}(v_{\rho(x)},x) = \epsilon \rho(x) + \lceil \sqrt{n} \rceil < \lceil \sqrt{n} \rceil+1$ (where the last inequality holds as $\epsilon < 1/n$ and $\rho(x)\leq n$). Hence, the suffix of the last $\lceil \sqrt{n} \rceil$ edges of the shortest path from $s$ to $x$ in $G^w_{\rho(x)}$ is a shortest path from $v_{\rho(x)}$ to $x$ in $G'$. \end{proof} The following lemma shows that after the $i^\text{th}$ iteration $T_s$ is a shortest path tree in $G^w_{i}$ trimmed at distance $\lceil \sqrt{n} \rceil+1$ and $d[v]$ is the distance from $s$ to $v$ in $T_s$ for every $v\in V$. \begin{lemma} \label{lem:TrimDijkstara} After the $i^\text{th}$ iteration, if $d_{G^w_{i}}(s,v) < \lceil \sqrt{n} \rceil+1$ then $v\in T_s$ and $d[v] = d_{T_s}(s,v) = d_{G^w_{i}}(s,v)$, and otherwise $d[v] = \infty$. \end{lemma} \begin{proof} We prove the claim by induction on $i$. For $i=k$, that is, for $G^w_{k} = G^w$, the claim trivially holds by the correctness of Dijkstra's algorithm on $G^w$. Assume the claim is correct for every iteration $j$ such that $j>i$ and consider iteration $i$ for some $i < k$. {\bf Consider a vertex $v$ such that $d_{G^w_{i}}(s,v) < \lceil \sqrt{n} \rceil+1$.} We also have $d_{G^w_{i+1}}(s,v) < \lceil \sqrt{n} \rceil+1$ as $G^w_{i} \subseteq G^w_{i+1}$. By the induction hypothesis we have that before iteration $i$, $d[v] = d_{T_s}(s,v)= d_{G^w_{i+1}}(s,v)$. If $v \notin T_{s,v_{i+1}}$ then $d[v]$ and $d_{T_s}(s,v)$ do not change. Moreover, the shortest path from $s$ to $v$ in $T_s$ before the iteration is a shortest path in $G^w_{i+1}$; since this path does not contain $v_{i+1}$, it is also a shortest path in $G^w_{i}$. Hence, after iteration $i$ it holds that $v \in T_s$ and $d[v] = d_{T_s}(s,v)= d_{G^w_{i}}(s,v)$. Consider the case that $v \in T_{s,v_{i+1}} \setminus \{v_{i+1}\}$.
We prove that $d_{G_{s,v_{i+1}}}(s,v) = d_{G^w_{i}}(s,v)$. We first prove that $d_{G_{s,v_{i+1}}}(s,v) \ge d_{G^w_{i}}(s,v)$. By construction of $G_{s,v_{i+1}}$, every edge $(x,y)$ of $G_{s,v_{i+1}}$ is either an original edge of $G$ with weight $1$ that exists also in $G^w_{i}$, or it is an auxiliary shortcut edge $(s,y)$ such that $y \notin T_{s,v_{i+1}}$ and its weight in $G_{s,v_{i+1}}$ is defined as $\omega(s,y) = d[y]$. In the latter case, we have already proved in the previous paragraph that $d[y] = d_{G^w_{i}}(s,y)$ is the length of the shortest path from $s$ to $y$ in $G^w_i$. Hence, every $s$-to-$v$ path in $G_{s,v_{i+1}}$ is associated with an $s$-to-$v$ path in $G^w_{i}$ of the same length, and hence $d_{G_{s,v_{i+1}}}(s,v) \ge d_{G^w_{i}}(s,v)$. We now prove that $d_{G_{s,v_{i+1}}}(s,v) \le d_{G^w_{i}}(s,v)$. Let $P'$ be a shortest path from $s$ to $v$ in $G^w_{i}$. Since we assumed $d_{G^w_{i}}(s,v) < \lceil \sqrt{n} \rceil+1$ and $G^w_{i} \subset G^w_{i+1}$ then $d_{G^w_{i+1}}(s,v) < \lceil \sqrt{n} \rceil+1$ and hence by the induction hypothesis all the vertices along $P'$ are in $T_s$ at the beginning of iteration $i$. Let $v' \in P'$ be the last vertex of $P'$ such that $v' \notin T_{s,v_{i+1}}$. Since we assume $v \in T_{s,v_{i+1}}$, then $v' \ne v$ and all the vertices following $v'$ in $P'$ are in $T_{s,v_{i+1}}$. Let $P_1$ be the subpath of $P'$ from $s$ to $v'$, and let $P_2$ be the subpath of $P'$ from $v'$ to $v$. We claim that the edge $(s,v') \in G_{s,v_{i+1}}$ and its weight is $\omega(s,v') = d_{G^w_{i}}(s,v')$. Since $v'$ is a neighbour of a vertex in $T_{s,v_{i+1}}$ ({\sl e.g.}, the vertex which follows $v'$ in $P'$ is in $T_{s,v_{i+1}}$ by definition of $v'$) then it holds that $v' \in V_{s,v_{i+1}}$.
Hence $(s,v')$ is a shortcut edge from $s$ to $v'$ whose weight equals the shortest path distance $d[v']$, and as $v' \not \in T_{s, v_{i+1}}$ we have already proved above that $v' \in T_s$ and $d[v'] = d_{T_s}(s,v')= d_{G^w_{i}}(s,v')$. Furthermore, since all the vertices of $P'$ after $v'$ are contained in $T_{s,v_{i+1}}$ and thus in $V_{s,v_{i+1}}$, it follows that all the edges of $P_2$ are contained in $G_{s,v_{i+1}}$ with weight $1$, which is their original weight in $G^w_{i}$. Hence, the path $P'' = (s, v') \cdot P_2$ is a path in $G_{s,v_{i+1}}$ whose length is $|P''| = \omega(s,v') + |P_2| = |P_1| + |P_2| = |P'|$. Therefore, $G_{s,v_{i+1}}$ contains an $s$-to-$v$ path ({\sl e.g.}, the path $P''$) whose length is $|P'|$. It follows that $d_{G_{s,v_{i+1}}}(s,v) \le d_{G^w_{i}}(s,v)$. Therefore, it holds that $d_{G_{s,v_{i+1}}}(s,v) = d_{G^w_{i}}(s,v)$ and the claim follows. {\bf Consider the case where $d_{G^w_{i}}(s,v) > \lceil \sqrt{n} \rceil+1$.} If $d_{G^w_{i+1}}(s,v) > \lceil \sqrt{n} \rceil+1$ then the claim follows by the induction hypothesis. Otherwise it follows that $v \in T_{s,v_{i+1}}$. It is not hard to verify that for every vertex $u$ that belongs to $T_s$ after iteration $i$, we indeed have $d_{G_{s,v_{i+1}}}(s,u) = d_{G^w_{i}}(s,u) \leq \lceil \sqrt{n} \rceil+1$. Hence, since $d_{G^w_{i}}(s,v) > \lceil \sqrt{n} \rceil+1$ we have $v \notin T_s$ and $d[v] = \infty$ after iteration $i$. \end{proof} The following lemmas show that every vertex $x$ belongs to at most $\lceil \sqrt{n} \rceil+1$ trees $T_{s,v_{i+1}}$. As we will later see, this will imply our desired running time. \begin{lemma} \label{lem:distance-increase} Consider an iteration $i$ for some $0 \leq i \leq k-1$ and a vertex $x$ such that $x \in T_{s,v_{i+1}}\setminus \{v_{i+1}\}$. Then $$\lfloor d_{G^w_{i}}(s,x) \rfloor \ge \lfloor d_{G^w_{i+1}}(s,x) \rfloor +1.$$ \end{lemma} \begin{proof} Assume $x$ belongs to $T_{s,v_{i+1}}\setminus \{v_{i+1}\}$ for some $0 \leq i \leq k-1$.
By Lemma \ref{lem:TrimDijkstara} after the $i^\text{th}$ iteration, the shortest path between $s$ and $x$ in $T_s$ is a shortest path between $s$ and $x$ in $G^w_i$. Moreover, since $x$ belongs to $T_{s,v_{i+1}}\setminus \{v_{i+1}\}$ then $v_{i+1}$ is on a shortest path from $s$ to $x$ in $G^w_{i+1}$. Therefore, $d_{G^w_{i+1}}(s,x) = \epsilon (i+1) + d_{G'}(v_{i+1}, x)$. Observe that $\lfloor d_{G^w_{i+1}}(s,x) \rfloor$ is the length of the shortest path from $v_{i+1}$ to $x$ in $G'$ (since $\epsilon < 1/n$). Assume by contradiction that $\lfloor d_{G^w_{i}}(s,x) \rfloor \le \lfloor d_{G^w_{i+1}}(s,x) \rfloor$. Then the length of the shortest path from $v_{i'}$ to $x$ in $G'$ for some index $i' < i+1$ is at most the length of the path from $v_{i+1}$ to $x$ in $G'$. Then the path from $s$ to $v_{i'}$ along $P_G(s,t)$ concatenated with the shortest path from $v_{i'}$ to $x$ in $G'$ has length in $G^w_{i+1}$ at most $\epsilon i' + d_{G'}(v_{i'}, x) \le \epsilon i' + d_{G'}(v_{i+1}, x)$, and hence it is shorter than $d_{G^w_{i+1}}(s,x) = \epsilon (i+1) + d_{G'}(v_{i+1}, x)$, which is a contradiction since $d_{G^w_{i+1}}(s,x)$ is the length of the shortest path from $s$ to $x$ in $G^w_{i+1}$. \end{proof} By Lemmas \ref{lem:TrimDijkstara} and \ref{lem:distance-increase} and the fact that the algorithm trims the tree $T_s$ at distance $\lceil \sqrt{n} \rceil+1$ we get the following. \begin{lemma} \label{lem:num-of-Dijkstra} Every vertex $x$ belongs to at most $\lceil \sqrt{n} \rceil+1$ subtrees $T_{s,v_{i+1}}$ for $0 \leq i \leq k-1$. \end{lemma} \begin{proof} Assume $x$ belongs to the trees $T_{s,v_{i_1}}, \ldots, T_{s,v_{i_r}}$ for $i_1 < i_2 < \ldots < i_r$. We will show that $r \leq \lceil \sqrt{n} \rceil+1$, which implies the lemma. By Lemma \ref{lem:TrimDijkstara}, as long as $x \in T_s$ at some iteration $j$, we have that after iteration $j$ it holds that $d[x] = d_{G^w_{j}}(s,x)$.
By Lemma \ref{lem:distance-increase} we have $\lfloor d_{G^w_{i_{j}}}(s,x) \rfloor +1 \leq \lfloor d_{G^w_{i_j -1} }(s,x) \rfloor \leq \lfloor d_{G^w_{i_{j-1}}}(s,x) \rfloor$. Since the algorithm only maintains vertices $v$ whose distance $d[v]$ is at most $\lceil \sqrt{n} \rceil+1$, after $x$ participates in $\lceil \sqrt{n} \rceil+1$ subtrees $T_{s,v_{i+1}}$ the algorithm removes it from $T_s$, and therefore $r \leq \lceil \sqrt{n} \rceil+1$ as required. \end{proof} \begin{lemma} The total running time of the algorithm is $\tilde{O}(m\sqrt{n})$. \end{lemma} \begin{proof} The dominant part of the running time of the algorithm is the Dijkstra computations. The first Dijkstra computation is on the graph $G^w$ and it takes $O(m+n\log{n})$ time. We claim that the computation of iteration $i$ takes $O(\sum_{v\in T_{s,v_{i+1}}}{\deg(v)} \log{n})$, where $\deg(v)$ is the degree of $v$ in $G$. To see this, note that both the number of nodes and the number of edges in $G_{s,v_{i+1}}$ are bounded by $O(\sum_{v\in T_{s,v_{i+1}}}{\deg(v)})$. By Lemma \ref{lem:num-of-Dijkstra} every node $v$ belongs to at most $\lceil \sqrt{n} \rceil+1$ trees $T_{s,v_{i+1}}$. It is not hard to see now that the lemma follows. \end{proof} The following lemma proves Lemma \ref{thm:p-sqrt}. \begin{lemma} $V^w_{\sqrt{n}} = V_{\sqrt{n}}$ and ${\cal D}^w_{\sqrt{n}}$ can be chosen as the set ${\cal D}_{n}$ according to Definition \ref{def:p-sqrtn}. \end{lemma} \begin{proof} {\bf We first prove that $V^w_{\sqrt{n}} \subseteq V_{\sqrt{n}}$.} Let $x \in V^w_{\sqrt{n}}$. Then there exists an iteration $i'$ such that after iteration $i'$, $\lceil \sqrt{n} \rceil \leq d[x] < \lceil \sqrt{n} \rceil+1$. Consider the tree $T_s$ after iteration $i'$. By Lemma \ref{lem:TrimDijkstara} after iteration $i'$, the path from $s$ to $x$ in $T_s$ is of length $d[x]$ and is a shortest path in $G^w_{i'}$. Let $P$ be the shortest path from $s$ to $x$ in $T_s$.
Let $i$ be the maximal index such that $v_{i}$ is on the shortest path $P$. As $P$ does not contain any vertex $v_{j}$ such that $i+1 \leq j$, the path $P$ is also a shortest path in $G^w_{i}$. As $P$ is of length between $\lceil \sqrt{n} \rceil$ and $\lceil \sqrt{n} \rceil+1$, we now prove that $i = \rho(x)$. We need to prove that $d_{G'}(v_i, x) = \lceil \sqrt{n} \rceil$ and that the distance from $v_j$ for every $j < i$ to $x$ in $G'$ is more than $\lceil \sqrt{n} \rceil$. We first prove that $d_{G'}(v_i, x) = \lceil \sqrt{n} \rceil$. Since $P$ is a shortest $s$-to-$x$ path in $T_s$ and $i$ is the maximal index such that $v_{i}$ is on the shortest path $P$, the path $P$ is composed of the path $<v_0, \ldots, v_i>$ followed by a shortest path from $v_i$ to $x$ in $G'$. That is, $P = <v_0, \ldots, v_i> \circ P_{G'}(v_i, x)$ (where $P_{G'}(v_i, x)$ is a shortest path from $v_i$ to $x$ in $G'$). We have that $\lceil \sqrt{n} \rceil \le |P| = |<v_0, \ldots, v_i>| + |P_{G'}(v_i, x)| = \epsilon i + d_{G'}(v_i, x) < \lceil \sqrt{n} \rceil+1$ and $|<v_0, \ldots, v_i>| = i \epsilon < 1$ (as $\epsilon < 1/n$). Since all the edges of $P_{G'}(v_i, x)$ have weight $1$, we get that $|P_{G'}(v_i, x)| = d_{G'}(v_i, x) = \lfloor |P| \rfloor = \lceil \sqrt{n} \rceil$. Next, we prove that the distance from $v_j$ for every $j < i$ to $x$ in $G'$ is more than $\lceil \sqrt{n} \rceil$. Indeed, assume by contradiction there exists an index $j < i$ such that $d_{G'}(v_j, x) \le \lceil \sqrt{n} \rceil$. Then the path $P' = <v_0, \ldots, v_j> \circ P_{G'}(v_j, x)$ (where $P_{G'}(v_j, x)$ is a shortest path from $v_j$ to $x$ in $G'$) has length $|P'| = \epsilon j + d_{G'}(v_j, x) \le \epsilon j + \lceil \sqrt{n} \rceil < \epsilon i + \lceil \sqrt{n} \rceil = |P|$. We get that $P'$, which is an $s$-to-$x$ path in $G^w_{i}$, is shorter than $P$, which is a shortest $s$-to-$x$ path in $G^w_{i}$ — a contradiction.
We proved that $d_{G'}(v_i, x) = \lceil \sqrt{n} \rceil$ and $d_{G'}(v_j, x) > \lceil \sqrt{n} \rceil$ for every $j < i$, and hence $i = \rho(x)$. By Lemma \ref{lem:weighted} it follows that $\Tilde{D}^w(x)$ (which is the subpath containing the last $\lceil \sqrt{n} \rceil$ edges of $P$) is a shortest path from $v_{\rho(x)}$ to $x$ in $G'$. Note also that the algorithm adds to ${\cal D}^w_{\sqrt{n}}$ the subpath $\Tilde{D}^w(x)$. {\bf We now prove that $V_{\sqrt{n}} \subseteq V^w_{\sqrt{n}}$.} Let $x \in V_{\sqrt{n}}$; then there exists an index $0 \le \rho(x) \le k$ such that $d_{G'}(v_{\rho(x)}, x) = \lceil \sqrt{n} \rceil$ and for all $0 \le i < \rho(x)$ it holds that $d_{G'}(v_i, x) \ge \lceil \sqrt{n} \rceil+1$. Consider the tree $T_s$ at the end of iteration $\rho(x)$. We first prove that $d_{G^w_{\rho(x)}}(s, x) < \lceil \sqrt{n} \rceil+1$, and hence by Lemma \ref{lem:TrimDijkstara} it follows that $x \in T_s$. Let $P$ be the following path from $s$ to $x$. $P = <v_0, \ldots, v_{\rho(x)}> \circ P_{G'}(v_{\rho(x)}, x)$, that is, the path composed of the first $\rho(x)$ edges from $s$ to $v_{\rho(x)}$ along $P_G(s,t)$ followed by a shortest path from $v_{\rho(x)}$ to $x$ in $G'$. Since $|P| = \epsilon \rho(x) + d_{G'}(v_{\rho(x)},x) = \epsilon \rho(x) + \lceil \sqrt{n} \rceil < \lceil \sqrt{n} \rceil+1$ and $P$ is a path in $G^w_{\rho(x)}$ it follows that $d_{G^w_{\rho(x)}}(s,x) \le |P| < \lceil \sqrt{n} \rceil+1$. Next, we prove that $d_{G^w_{\rho(x)}}(s, x) \ge \lceil \sqrt{n} \rceil$. Let $P' = P_{G^w_{\rho(x)}}(s, x)$ be a shortest path from $s$ to $x$ in $G^w_{\rho(x)}$. Let $i$ be the maximal index such that $v_i \in P'$; then the subpath of $P'$ from $v_i$ to $x$ is a shortest path in $G'$ and its length is $d_{G'}(v_i, x)$. Since $i \le \rho(x)$ (as $P'$ is a path in $G^w_{\rho(x)}$), by Definition \ref{def:x-tag} it follows that $d_{G'}(v_i, x) \ge \lceil \sqrt{n} \rceil$.
Therefore, $d_{G^w_{\rho(x)}}(s, x) = |P'| \ge d_{G'}(v_i, x) \ge \lceil \sqrt{n} \rceil$. By Lemma \ref{lem:TrimDijkstara} after the $\rho(x)^\text{th}$ iteration, $d[x] = d_{G^w_{\rho(x)}}(s,x)$. Hence, by the end of the $\rho(x)^\text{th}$ iteration $\lceil \sqrt{n} \rceil \leq d[x] < \lceil \sqrt{n} \rceil+1$. Therefore, by construction $x \in V^w_{\sqrt{n}}$. Let $P$ be the shortest $s$-to-$x$ path in $T_s$ after the $\rho(x)^\text{th}$ iteration. By Lemma \ref{lem:TrimDijkstara} it holds that $P$ is a shortest $s$-to-$x$ path in $G^w_{\rho(x)}$, and by Lemma \ref{lem:weighted} it follows that $\Tilde{D}^w(x)$ (which is the subpath containing the last $\lceil \sqrt{n} \rceil$ edges of $P$) is a shortest path from $v_{\rho(x)}$ to $x$ in $G'$. Note that the algorithm adds to ${\cal D}^w_{\sqrt{n}}$ the subpath $\Tilde{D}^w(x)$. In the proof above, we have also proved that for every vertex $x \in V_{\sqrt{n}}$ the algorithm adds a single path $D(x)$ to ${\cal D}^w_{\sqrt{n}}$. The path $D(x)$ is obtained by the algorithm by taking the last $\lceil \sqrt{n} \rceil$ edges of a shortest path from $s$ to $x$ in the graph $G^w_{\rho(x)}$, and we have already proved that it is a shortest path from $v_{\rho(x)}$ to $x$ in $G'$ whose length is $\lceil \sqrt{n} \rceil$. This proves that ${\cal D}^w_{\sqrt{n}}$ can be used as the set of paths ${\cal D}_{n}$ according to Definition \ref{def:p-sqrtn}. \end{proof} \subsection{An Alternative $\widetilde{O}(m\sqrt{n})$ Deterministic Algorithm for Computing ${\cal D}_{n}$} \label{sec:p-sqrt-computation2} We briefly describe an alternative $\widetilde{O}(m\sqrt{n})$ deterministic algorithm for computing ${\cal D}_{n}$. The algorithm of Roditty and Zwick \cite{Roditty2005} for handling short detours constructs $2\lceil \sqrt{n} \rceil$ auxiliary graphs $G^A_0, \ldots, G^A_{2\lceil \sqrt{n} \rceil-1}$ (see Figure \ref{fig:the-graph-G-A} as an illustration of the graph $G^A = G^A_0$).
For every $0 \le z \le 2\lceil \sqrt{n} \rceil-1$, the auxiliary graph $G^A_z$ is obtained by adding a new source vertex $r_z$ to $G'$ and an edge $(r_z, v_{z+2q \lceil \sqrt{n} \rceil})$ of weight $\omega(r_z, v_{z+2q \lceil \sqrt{n} \rceil}) = q \lceil \sqrt{n} \rceil$ for every integer $0 \le q \le \frac{\sqrt{n}}{2}$. The weight of all the edges $E \setminus E(P)$ is set to $1$. Then Dijkstra's algorithm is run from $r_z$ in $G^A_z$, computing a shortest paths tree $T_z$. We claim that given the shortest paths trees $T_0, \ldots, T_{2\lceil \sqrt{n} \rceil-1}$, the following algorithm computes the set of paths ${\cal D}_n$. For every $v \in V$ the algorithm computes the minimum index $0 \le i \le k$ such that at least one of the shortest paths trees $T_0, \ldots, T_{2\lceil \sqrt{n} \rceil-1}$ contains a path from $v_i$ to $v$ of length $\lceil \sqrt{n} \rceil$, and let $P'(v)$ be this path from $v_i$ to $v$. If no such index $0 \le i \le k$ exists, then set $P'(v) = \emptyset$. For every vertex $v \in V$ finding this minimum index $i$ takes $O(\sqrt{n})$ time as there are $O(\sqrt{n})$ shortest paths trees to check, and in every shortest paths tree $T_z$ we only need to check the first edge $(r_z, v_j)$ of the path from $r_z$ to $v$. So, given the shortest paths trees $T_0, \ldots, T_{2\lceil \sqrt{n} \rceil-1}$, computing the paths ${\cal D}_n = \{ P'(v) \ | \ v \in V\}$ takes $O(n\sqrt{n})$ time. \begin{lemma} \label{lem:alg-alternative} Let $x \in V_{\sqrt{n}}$ and let $z := (\rho(x) \mod 2\lceil \sqrt{n} \rceil)$. Then the shortest paths tree $T_z$ contains a shortest path in $G'$ from $v_{\rho(x)}$ to $x$ of length $\lceil \sqrt{n} \rceil$. \end{lemma} \begin{proof} Since $x \in V_{\sqrt{n}}$, it holds that $d_{G'}(v_{\rho(x)}, x) = \lceil \sqrt{n} \rceil$ and thus the shortest path in $G'$ from $v_{\rho(x)}$ to $x$ contains exactly $\lceil \sqrt{n} \rceil$ edges. We prove that every shortest path in $G^A_z$ from $r_z$ to $x$ must start with the edge $(r_z, v_{\rho(x)})$.
Let $P' = (r_z, v_{\rho(x)}) \circ P_{G'}(v_{\rho(x)}, x)$. Assume there exists a path $P_1$ in $G^A_z$ from $r_z$ to $x$ that starts with the edge $(r_z, v')$ such that $v' = v_{\rho(x)-i\cdot 2\lceil \sqrt{n} \rceil}$ for some integer $i>0$. Then $\omega(P_1) = \omega(r_z, v') + d_{G'}(v', x) = \omega(r_z, v_{\rho(x)}) - i \cdot \lceil \sqrt{n} \rceil + d_{G'}(v', x) \ge \omega(r_z, v_{\rho(x)}) - i \cdot \lceil \sqrt{n} \rceil + (2i+1)\cdot \lceil \sqrt{n} \rceil = \omega(r_z, v_{\rho(x)}) + (i+1)\cdot \lceil \sqrt{n} \rceil > \omega(r_z, v_{\rho(x)}) + \lceil \sqrt{n} \rceil = \omega(P')$. Then $P'$ is shorter than $P_1$ and thus $P_1$ is not a shortest path in $G^A_z$. Assume there exists a path $P_2$ in $G^A_z$ from $r_z$ to $x$ that starts with the edge $(r_z, v')$ such that $v' = v_{\rho(x) + i\cdot 2\lceil \sqrt{n} \rceil}$ for some integer $i>0$. Then $\omega(P_2) = \omega(r_z, v') + d_{G'}(v', x) = \omega(r_z, v_{\rho(x)}) + i \cdot \lceil \sqrt{n} \rceil + d_{G'}(v', x) > \omega(r_z, v_{\rho(x)}) + \lceil \sqrt{n} \rceil = \omega(P')$. Then $P'$ is shorter than $P_2$ and thus $P_2$ is not a shortest path in $G^A_z$. It follows that every shortest path from $r_z$ to $x$ in $G^A_z$ must start with the edge $(r_z, v_{\rho(x)})$. In particular, the shortest paths tree $T_z$ contains a shortest path in $G'$ from $v_{\rho(x)}$ to $x$ of length $\lceil \sqrt{n} \rceil$. \end{proof} Finally, let ${\cal D}_{n} := \{P'(v) \ | \ v \in V\}$ be the set of paths computed by the above algorithm. Then ${\cal D}_{n}$ is a set of $O(n)$ paths, each path contains exactly $\lceil \sqrt{n} \rceil$ edges, and it follows from Lemma \ref{lem:alg-alternative} and Definition \ref{def:p-sqrtn} that a subset of the paths ${\cal D}_{n}$ satisfies the conditions of Definition \ref{def:p-sqrtn}. Hence, it is sufficient to use the greedy algorithm GreedyPivotsSelection to hit the set of paths ${\cal D}_{n}$. 
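For concreteness, this last hitting step can be sketched as the standard greedy set-cover heuristic: repeatedly pick a vertex contained in the largest number of not-yet-hit paths. The following Python sketch is our own simplified illustration and not the exact GreedyPivotsSelection procedure of Section \ref{sec:framework}; when there are at most $n$ paths, each with $\lceil \sqrt{n} \rceil$ vertices, the standard greedy set-cover bound yields a hitting set of $O(\sqrt{n} \log n)$ vertices.

```python
def greedy_pivots_selection(paths):
    """Greedy hitting set: repeatedly pick the vertex contained in the
    largest number of not-yet-hit paths, until every path is hit."""
    uncovered = [set(p) for p in paths]
    pivots = []
    while uncovered:
        count = {}  # vertex -> number of uncovered paths containing it
        for p in uncovered:
            for v in p:
                count[v] = count.get(v, 0) + 1
        best = max(count, key=count.get)  # vertex hitting the most paths
        pivots.append(best)
        uncovered = [p for p in uncovered if best not in p]
    return pivots
```

Running it on a small family of paths returns a set of pivots intersecting every path, which is exactly the property required of $R$ in Lemma \ref{lemma:greedy}.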
\section{Deterministic $f$-Sensitivity Distance Oracles} \label{sec:dso1} As explained in the introduction, an $f$-Sensitivity Distance Oracle gets as input a graph $G$ and a parameter $f$, preprocesses it into a data-structure, such that given a query $(s,t,F)$ with $s,t \in V, F \subseteq E \cup V, |F| \le f$ one may efficiently compute the distance $d_{G\setminus F}(s,t)$. In this section we derandomize the result of Weimann and Yuster \cite{WY13} for real edge weights. They presented an $f$-sensitivity distance oracle whose preprocessing time is $\widetilde{O}(mn^{1+\epsilon})$ (which is a factor of $n^\epsilon$ larger than the $O(mn)$ time it takes to compute APSP) and whose query time is subquadratic $\widetilde{O}(n^{2-2\epsilon/f})$. More precisely, we prove the following theorem, which obtains deterministically the same preprocessing and query time bounds as the randomized $f$-sensitivity distance oracle in \cite{WY13} for real edge weights. \begin{theorem} Let $G$ be a weighted directed graph, and let $f \ge 1$ be a parameter. One can deterministically construct an $f$-sensitivity distance oracle in $\widetilde{O}(mn^{1+\epsilon})$ time, such that given a query $(s,t,F)$ with $F \subset V \cup E$ and $|F| \le f$ the deterministic query algorithm for computing $d_{G\setminus F}(s,t)$ takes subquadratic $\widetilde{O}(n^{2-2\epsilon/f})$ time. \end{theorem} The basic idea in derandomizing the real weighted $f$-sensitivity distance oracles of Weimann and Yuster \cite{WY13} is to use a variant of the fault tolerant trees $FT^{L,f}(s,t)$ described in Appendix A in \cite{ChCoFiKa17} to find short replacement paths, then use the greedy algorithm from Section \ref{sec:framework} and Lemma \ref{lemma:greedy} for derandomizing the random selection of the pivots, and finally continue with the algorithm of \cite{WY13} for stitching short segments to obtain the long replacement paths. This overview is made clear in the description below.
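As a rough illustration of how a fault tolerant tree is queried, the following Python sketch walks down a tree in which the node for an exclusion set $\{a_1, \ldots, a_i\}$ stores the distance $d^L_G(s,t,\{a_1, \ldots, a_i\})$, the set of vertices and edges of a corresponding shortest path, and one child per element of that path. The class and function names are our own simplification of the trees $FT^{L,f}(s,t)$ of \cite{ChCoFiKa17}; in particular, this sketch ignores the binary search trees $BST^L$ used to speed up the membership tests.

```python
class FTNode:
    """Node for the exclusion set {a_1, ..., a_i}: stores the length and
    the element set of a shortest s-to-t path with at most L edges in
    G minus {a_1, ..., a_i}, plus a child for every element of that path."""
    def __init__(self, dist, path_elems):
        self.dist = dist             # d^L_G(s, t, {a_1, ..., a_i})
        self.path = set(path_elems)  # vertices/edges of the stored path
        self.children = {}           # element a_{i+1} -> FTNode

def ft_query(root, F):
    """Return d^L_G(s, t, F): descend while the stored path intersects F."""
    node = root
    while node is not None:
        hit = next((a for a in F if a in node.path), None)
        if hit is None:
            return node.dist         # stored path avoids all of F
        node = node.children.get(hit)
    return float('inf')              # no s-to-t path with at most L edges
```

Since $|F| \le f$ and the tree has depth at most $f$, the descent visits at most $f$ nodes and tests at most $f$ elements per node, mirroring the query analysis of Lemma \ref{lemma:query}.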
According to Section \ref{sec:assumption-unique}, we assume WLOG that the following hold. \begin{itemize} \item {\bf Unique shortest paths assumption: } we assume that all shortest paths are unique. \item {\bf Non-negative weights assumption: } we assume that edge weights are non-negative, so that we can run Dijkstra. \end{itemize} {\bf Outline.} Let $s,t \in V$ be vertices and let $f, L\ge 1$ be integer parameters. In Section \ref{sec:ft-trees} we described the trees $FT^{L,f}(s,t)$, which are a variant of the trees that appear in Appendix A of \cite{ChCoFiKa17}. In Section \ref{sec:dynamic-programming} we described how to construct the trees $FT^{L,f}(s,t)$ in $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ time. In Section \ref{sec:ft-trees-appendix} we prove Lemma \ref{lemma:query}, which states that the algorithm described in Section \ref{sec:ft-trees} computes the distance $d^L_G(s,t,F)$ in $O(f^2 \log L)$ time. In Section \ref{sec:tree-to-dso} we describe how to use the trees $FT^{L,f}(s,t)$ in order to construct an $f$-sensitivity distance oracle. In Section \ref{sec:positive} we reduce the construction time of the trees $FT^{L,f}(s,t)$ to $\widetilde{O}(mn^{1+\epsilon})$. In Section \ref{sec:improved-greedy} we reduce the runtime of the GreedyPivotsSelection algorithm from $\widetilde{O}(n^{2+\epsilon+\epsilon/f})$ to $\widetilde{O}(n^{2+\epsilon})$. In Section \ref{sec:assumption-unique} we justify our assumptions of non-negative edge weights and unique shortest paths. \subsection{Proof of Lemma \ref{lemma:query}}\label{sec:ft-trees-appendix} \begin{proof}[Proof of Lemma \ref{lemma:query}] We first prove correctness, namely that the query procedure outputs $d^L_G(s,t,F)$.
The query procedure outputs a distance $d^L_G(s,t, \{a_1, \ldots, a_i\})$ such that $\{a_1, a_2, \ldots, a_i \} \subseteq F$ and $P^L_G(s,t, \{a_1, \ldots, a_i\}) \cap F = \emptyset$ (note that this includes the case that $d^L_G(s,t, \{a_1, \ldots, a_i\}) = \infty$ when there is no path from $s$ to $t$ in $G \setminus \{a_1, a_2, \ldots, a_i \}$ that contains at most $L$ edges). The distance $d^L_G(s,t, \{a_1, \ldots, a_i\})$ is the minimum length of an $s$-to-$t$ path that contains at most $L$ edges in the graph $G \setminus \{a_1, a_2, \ldots, a_i \}$. On the one hand, since $\{a_1, a_2, \ldots, a_i \} \subseteq F$ and distances may only increase as we delete more and more vertices and edges, we obtain that $d^L_G(s,t, \{a_1, \ldots, a_i\}) \le d^L_G(s,t, F)$. On the other hand, since $P^L_G(s,t, \{a_1, \ldots, a_i\}) \cap F = \emptyset$, the path $P^L_G(s,t, \{a_1, \ldots, a_i\})$ is a path in the graph $G \setminus F$ that contains at most $L$ edges, and hence its length $d^L_G(s,t, \{a_1, \ldots, a_i\})$ is at least the length of the shortest $s$-to-$t$ path in $G \setminus F$ that contains at most $L$ edges, which is $d^L_G(s,t, F)$; thus $d^L_G(s,t, \{a_1, \ldots, a_i\}) \ge d^L_G(s,t,F)$. It follows that the output of the query is $d^L_G(s,t, \{a_1, \ldots, a_i\}) = d^L_G(s,t,F)$. We now analyse the runtime of the query. The runtime is $O(f^2\log L)$: we advance along a root-to-leaf path in $FT^{L,f}(s,t)$ (whose length is at most $f$), and in each node $FT^{L,f}(s,t, a_1, \ldots, a_i)$ of the tree $FT^{L,f}(s,t)$ we make $O(f)$ queries $a_{i+1} \in F$ to $BST^L(s,t, a_1, \ldots, a_i)$, each of which takes $O(\log L)$ time as we search in a binary search tree with $L$ elements.
So the query time is the product of the following terms: \begin{itemize} \item $f$ --- the length of a root-to-leaf path in $FT^{L,f}(s,t)$ \item $f$ --- the number of elements $a_{i+1} \in F$ to check in the node $FT^{L,f}(s,t, a_1, \ldots, a_i)$ for whether or not $a_{i+1} \in P^L_G(s,t, \{a_1, \ldots, a_i\})$ \item $O(\log L)$ --- the time to check for a single element $a_{i+1} \in F$ whether or not $a_{i+1} \in P^L_G(s,t, \{a_1, \ldots, a_i\})$ by searching for $a_{i+1}$ in $BST^L(s,t, a_1, \ldots, a_i)$, which is a binary search tree containing $L$ elements. \end{itemize} \end{proof} \subsection{Deterministic $f$-Sensitivity Distance Oracles with $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2 + \epsilon + 2\epsilon/f})$ Preprocessing Time} \label{sec:tree-to-dso} In this section we describe how to plug the trees $FT^{L,f}(s,t)$ from Section \ref{sec:ft-trees} into the $f$-sensitivity distance oracles of Weimann and Yuster \cite{WY13}. Let us first recall how the $f$-sensitivity distance oracle of Weimann and Yuster \cite{WY13} works. The following lemma is proven in \cite{WY13}. \begin{lemma} [Theorem 1.1 in \cite{WY13}] \label{lem:randomized-f-dso} Given a directed graph $G$ with real positive edge weights, an integer parameter $1 \le f \le \epsilon \log n/ \log \log n$ and a real parameter $0 < \epsilon < 1$, there exists a randomized $f$-sensitivity distance oracle whose construction takes $\widetilde{O}(mn^{1+\epsilon})$ time. Given a query $(s,t,F)$ where $F \subset V \cup E$ with $|F| \le f$, the data-structure answers the query by computing, w.h.p., $d_G(s,t,F)$ in $\widetilde{O}(n^{2-2\epsilon/f})$ time. \end{lemma} \begin{proof} Use $\alpha = 1-\epsilon$ in the construction of Weimann and Yuster \cite{WY13}. In order for this result to be self-contained, we briefly explain the preprocessing and query procedures of Weimann and Yuster \cite{WY13}.
{\bf Preprocessing: } \begin{enumerate} \item Randomly generate graphs $G_1, \ldots, G_r$ (with $r = \widetilde{O}(n^\epsilon)$), where every graph is independently obtained by removing each edge with probability $1/n^{\epsilon/f}$. Compute APSP on each of the graphs $\{G_1, \ldots, G_r\}$. It is proven in \cite{WY13} that with high probability, for every set $F \subseteq E \cup V$ with $|F| \le f$ and for every $s$-to-$t$ shortest path $P_G(s,t,F)$ that contains at most $n^{\epsilon/f}$ edges in the graph $G\setminus F$, there exists at least one graph $G_i \in \{G_1, \ldots, G_r\}$ that excludes $F$ and contains $P_G(s,t,F)$. \item Sample a random set $B$ of pivots, where every vertex is taken with probability $\frac{6f \ln n}{n^{\epsilon/f}}$. \end{enumerate} {\bf Query: } Given a query $(s,t,F)$, build the dense graph $H$ (denoted by $G^S$ in \cite{WY13}) whose vertices are $B \cup \{s, t\}$ as follows. First, find all the graphs that exclude $F$, $\mathcal{G}_F = \{ G_i \ | \ 1 \le i \le r, F \cap G_i = \emptyset \}$. Then, for every $u,v \in B \cup \{s,t\}$ add the edge $(u,v)$ to $H$ and set its weight to be the minimum length of a shortest path from $u$ to $v$ over all the graphs in $\mathcal{G}_F$. If there is no path from $u$ to $v$ in any of the graphs of $\mathcal{G}_F$ then set $\omega_H(u,v) = \infty$. Finally, run Dijkstra from $s$ in the graph $H$ and output $d_G(s,t,F) = d_{H}(s,t)$. \end{proof} We derandomize Lemma \ref{lem:randomized-f-dso} as follows. \begin{lemma} \label{lem:f-dso-mn2} Given a directed graph $G$ with real positive edge weights, an integer parameter $1 \le f \le \epsilon \log n/ \log \log n$ and $0 < \epsilon < 1$, there exists a deterministic $f$-sensitivity distance oracle whose construction takes $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ time.
Given a query $(s,t,F)$ where $F \subset V \cup E$ with $|F| \le f$, the data-structure deterministically answers the query by computing $d_G(s,t,F)$ in $\widetilde{O}(n^{2-2\epsilon/f})$ time. \end{lemma} To prove Lemma \ref{lem:f-dso-mn2} we describe how to use the fault-tolerant trees $\{ FT^{L,f}(s,t) \}_{s,t \in V}$ to construct the deterministic $f$-sensitivity distance oracle. {\bf Preprocessing: } \begin{enumerate} \item {\bf Compute the trees $FT^{L,f}(u,v)$}. Deterministically construct the fault-tolerant trees $FT^{L,f}(u,v)$ for every $u,v \in V$ as in Lemma \ref{lemma:ft-tree-eps-plus}. \item {\bf Compute the set of vertices $B \subseteq V$}. Let $\mathcal{P}^{L,f}$ be the set of all paths in all the nodes of all the trees $\{FT^{L,f}(u,v)\}_{u,v \in V}$ that contain at least $n^{\epsilon/f}/2$ edges. Use the greedy algorithm as in Lemma \ref{lemma:greedy} to deterministically find a set of pivots $B$ such that for every $P \in \mathcal{P}^{L,f}$ it holds that $B \cap V(P) \ne \emptyset$. \end{enumerate} {\bf Query: } Given a query $(s,t,F)$, build the complete graph $H$ (also referred to as the dense graph) whose vertices are $B \cup \{s, t\}$ as follows. For every $u,v \in B \cup \{s,t\}$, query the tree $FT^{L,f}(u,v)$ with $(u,v,F)$ according to the query procedure described in Section \ref{sec:ft-trees} and set the weight of the edge $(u,v)$ in $H$ to the computed distance $d^{L}_G(u,v,F)$ ({\sl i.e.}, set $\omega_H(u,v) = d^L_G(u,v,F)$). Finally, run Dijkstra from $s$ in the graph $H$ and output $d_{H}(s,t)$ as an estimate of the distance $d_G(s,t,F)$. \begin{proof}[Proof of Lemma \ref{lem:f-dso-mn2}] We prove the correctness of the DSO and then analyse its preprocessing and query time. {\bf Proof of correctness}. We prove that $d_G(s,t,F) = d_H(s,t)$. Since $d_H(s,t)$ is the length of some path from $s$ to $t$ in $G \setminus F$, it holds that $d_H(s,t) \ge d_G(s,t,F)$. Next we prove that $d_G(s,t,F) \ge d_H(s,t)$.
Let $P_G(s,t,F)$ be the shortest path from $s$ to $t$ in $G\setminus F$. We prove that the set of vertices $B$ hits every subpath of $P_G(s,t,F)$ that contains exactly $n^{\epsilon/f}/2$ edges. To see this, let $u,v$ be two vertices such that $u$ appears before $v$ along $P_G(s,t,F)$ and $P_G(s,t,F)[u..v]$ contains $n^{\epsilon/f}/2$ edges. It follows that the shortest path from $u$ to $v$ in $G \setminus F$ contains fewer than $L = n^{\epsilon/f}$ edges. According to the unique shortest paths assumption (as in Section \ref{sec:assumption-unique}), querying the tree $FT^{L,f}(u,v)$ with $(u,v,F)$ finds a path $P^L_G(u,v,F)$, and since the shortest path from $u$ to $v$ in $G \setminus F$ contains fewer than $L = n^{\epsilon/f}$ edges, $P^L_G(u,v,F) = P_G(u,v,F) = P_G(s,t,F)[u..v]$ is the shortest path from $u$ to $v$ in $G \setminus F$, and it contains exactly $n^{\epsilon/f}/2$ edges. Therefore, $P^L_G(u,v,F)$ is a path in a node of the tree $FT^{L,f}(u,v)$ and thus $P^L_G(u,v,F) \in \mathcal{P}^{L,f}$. Hence, when the algorithm computes the hitting set $B$ that hits all the paths of $\mathcal{P}^{L,f}$, it holds in particular that $B \cap P^L_G(u,v,F) \ne \emptyset$, and thus $B$ hits the path $P_G(s,t,F)[u..v] = P_G(u,v,F)$. This proves that $B$ hits every subpath of $P_G(s,t,F)$ that contains exactly $n^{\epsilon/f}/2$ edges. Let $s = v_1, v_2, \ldots, v_k = t$ be all the vertices of $H = B \cup \{s, t \}$ that appear along $P_G(s,t,F)$, sorted according to the order of their appearance along the path $P_G(s,t,F)$. As $B$ hits every subpath of $P_G(s,t,F)$ that contains exactly $n^{\epsilon/f}/2$ edges, it follows that for every $1 \le i < k$ the subpath of $P_G(s,t,F)$ from $v_i$ to $v_{i+1}$ contains at most $n^{\epsilon/f}$ edges. Therefore, the shortest path from $v_i$ to $v_{i+1}$ in $G \setminus F$ contains at most $L = n^{\epsilon/f}$ edges and hence $d^L_G(v_i, v_{i+1}, F) = d_G(v_i, v_{i+1}, F)$.
By Lemma \ref{lemma:query}, querying the tree $FT^{L,f}(v_i, v_{i+1})$ according to the query procedure described in Section \ref{sec:ft-trees} computes the distance $d^L_G(v_i, v_{i+1}, F) = d_G(v_i, v_{i+1}, F)$. Hence $\omega_H( <v_1, \ldots, v_k> ) = \omega_H(v_1, v_2) + \ldots + \omega_H(v_{k-1}, v_k) = d^L_G(v_1, v_2, F) + \ldots + d^L_G(v_{k-1}, v_k, F) = d_G(v_1, v_2, F) + \ldots + d_G(v_{k-1}, v_k, F) = d_G(s,t,F)$, where the last equality holds as $s = v_1, v_2, \ldots, v_k = t$ are the vertices of $H$ that appear along $P_G(s,t,F)$, sorted according to the order of their appearance along the path $P_G(s,t,F)$. It follows that the path $<v_1, \ldots, v_k>$ is an $s$-to-$t$ path in $H$ whose weight in $H$ is $d_G(s,t,F)$, and hence the shortest $s$-to-$t$ path in $H$ has length at most $\omega_H(<v_1, \ldots, v_k>) = d_G(s,t,F)$. Therefore $d_H(s,t) \le \omega_H(<v_1, \ldots, v_k>) = d_G(s,t,F)$, and since we already proved that $d_H(s,t) \ge d_G(s,t,F)$ it follows that $d_H(s,t) = d_G(s,t,F)$. {\bf Analysing the preprocessing time}. Next we analyse the preprocessing time of the DSO. Constructing the trees $FT^{L,f}(u,v)$ for every $u,v \in V$ as in Lemma \ref{lemma:ft-tree-eps-plus} takes $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ time. We analyse the time it takes to compute the set of vertices $B \subseteq V$. Let $\mathcal{P}^{L,f}$ be the set of all paths in all the nodes of all the trees $\{FT^{L,f}(u,v)\}_{u,v \in V}$ that contain at least $n^{\epsilon/f}/2$ edges. Observe that $\mathcal{P}^{L,f}$ contains at most $O(n^{2+\epsilon})$ paths, as the number of nodes in each of the $n^2$ trees $FT^{L,f}(u,v)$ is $O(n^\epsilon)$ and every such node contains a single path of $G$.
Thus, using the greedy algorithm as in Lemma \ref{lemma:greedy}, one deterministically finds in $\widetilde{O}(n^{2+\epsilon + \epsilon/f})$ time a set of pivots $B$ such that $|B| = \widetilde{O}(n^{1-\epsilon/f})$ and $B$ hits all the paths in $\mathcal{P}^{L,f}$. Thus, the total preprocessing time of the DSO is $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$. {\bf Analysing the query time}. Next we analyse the query time of the DSO. During a query, the algorithm constructs the graph $H$. During the construction of $H$ the algorithm performs $\widetilde{O}(n^{2-2\epsilon/f})$ queries to the trees $\{FT^{L,f}(u,v)\}_{u,v \in V}$; each query is answered in $O(f^2 \log n)=\widetilde{O}(1)$ time since we assumed $f \le \epsilon \log n / \log \log n$. Thus, constructing the graph $H$ takes $\widetilde{O}(n^{2-2\epsilon/f})$ time, and running Dijkstra from $s$ in the graph $H$ also takes $\widetilde{O}(n^{2-2\epsilon/f})$ time. Therefore, the total query time is $\widetilde{O}(n^{2-2\epsilon/f})$. \end{proof} \subsection{Deterministic $f$-Sensitivity Distance Oracles with $\widetilde{O}(mn^{1+\epsilon})$ Preprocessing Time} \label{sec:positive} In this section we reduce the preprocessing time of the $f$-sensitivity distance oracle described in Lemma \ref{lem:f-dso-mn2} from $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ to match the preprocessing time of \cite{WY13}, which is $\widetilde{O}(mn^{1+\epsilon})$, while keeping the same query time of $\widetilde{O}(n^{2-2\epsilon/f})$. We improve the preprocessing time in two ways: improving the construction time of the trees $FT^{L,f}(u,v)$, and reducing the runtime of the greedy selection algorithm by considering a smaller set of paths $\mathcal{P}^{L,f}$.
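For concreteness, the query procedure analysed above (assemble the dense graph $H$ on $B \cup \{s,t\}$ from fault-tolerant-tree answers and run Dijkstra from $s$) can be sketched as follows. The callback \texttt{ft\_distance(u, v)} stands in for querying the tree $FT^{L,f}(u,v)$ with the fixed failure set $F$; this closure-based interface is an assumption of the sketch, not the paper's implementation.

```python
import heapq

def query_dense_graph(s, t, B, ft_distance):
    """Build the dense graph H on B ∪ {s, t}: every ordered pair (u, v)
    gets weight d^L(u,v,F) supplied by ft_distance; then run Dijkstra
    from s and return the s-to-t distance in H."""
    nodes = list(B | {s, t})
    dist = {v: float("inf") for v in nodes}
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v in nodes:
            if v != u and d + ft_distance(u, v) < dist[v]:
                dist[v] = d + ft_distance(u, v)
                heapq.heappush(heap, (dist[v], v))
    return dist[t]
```

Since $H$ is complete on $|B|+2 = \widetilde{O}(n^{1-\epsilon/f})$ vertices, both building its edge weights and the Dijkstra run cost $\widetilde{O}(|B|^2) = \widetilde{O}(n^{2-2\epsilon/f})$, matching the query bound above.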
In Section \ref{sec:ft-trees-n-plus-eps} we reduce the time it takes to construct the trees $\{FT^{L,f}(u,v) \}_{u,v \in V}$ from $\widetilde{O}(mn^{1+\epsilon+\epsilon/f} + n^{2+\epsilon+2\epsilon/f})$ to $\widetilde{O}(mn^{1+\epsilon})$. In Section \ref{sec:improved-greedy} we show that the runtime of the greedy selection algorithm of the pivots $B$ can be reduced from $\widetilde{O}(n^{2+\epsilon+\epsilon/f})$ to $\widetilde{O}(n^{2+\epsilon})$ which is negligible compared to $\widetilde{O}(mn^{1+\epsilon})$. This will give us the desired $\widetilde{O}(mn^{1+\epsilon})$ preprocessing time. \subsubsection{Building the Trees $FT^{L,f}(u,v)$ in $\widetilde{O}(mn^{1+\epsilon})$ Time} \label{sec:ft-trees-n-plus-eps} In this section we describe how to construct the trees $FT^{L,f}(u,v)$ in $\widetilde{O}(mn^{1+\epsilon})$ time. We first define the node $FT^{L',f}(u,v, a_1, \ldots, a_i)$ of the tree $FT^{L',f}(u,v)$ as follows. \begin{definition} Let $1 \le L' \le L, 0 \le i \le f$ and $u,v \in V$. Assume $a_1 \in P^{L'}(u,v), a_2 \in P^{L'}(u,v, \{a_1\}), \ldots, a_i \in P^{L'}(u,v, \{a_1, \ldots, a_{i-1}\})$. We define the node $FT^{L',f}(u,v, a_1, \ldots, a_i)$ of depth $i$ in the tree $FT^{L',f}(u,v)$ as the node we reach if we query the tree $FT^{L',f}(u,v)$ with $F = \{a_1, \ldots, a_i \}$ according to the query procedure described in Section \ref{sec:ft-trees}. If $i=f$ then $FT^{L',f}(u,v, a_1, \ldots, a_f)$ is a leaf node of $FT^{L',f}(u,v)$ of depth $f$. We slightly abuse notation and use $FT^{L',f}(u,v, a_1, \ldots, a_i)$ to both denote the node $FT^{L',f}(u,v, a_1, \ldots, a_i)$ and the subtree of $FT^{L',f}(u,v)$ rooted in $FT^{L',f}(u,v, a_1, \ldots, a_i)$. \end{definition} Recall that in Section \ref{sec:ft-trees} we described how to build the trees $FT^{L,f}(u,v)$ in $\widetilde{O}(mn^{1+\epsilon+\epsilon/f}+n^{2+\epsilon+2\epsilon/f})$ time. 
More generally, we built the trees $\{ FT^{L',f}(u,v) \}_{1 \le L' \le L, u,v \in V}$ in $\widetilde{O}(mn L^{f+1} + n^2 L^{f+2})$ time, where the construction time consists of the following two terms: $\widetilde{O}(mn L^{f+1})$ and $\widetilde{O}(n^2 L^{f+2})$. The first term $\widetilde{O}(mn L^{f+1})$ is the time it takes to solve the dynamic programming Equation \ref{eq:dynamic-programming} in all the nodes of all the trees $\{ FT^{L',f}(u,v) \}_{u,v \in V, 1 \le L' \le L}$. In Section \ref{sec:dijsktra-last-layer} we reduce the runtime of this part to $\widetilde{O}(mn L^{f})$ by applying Dijkstra on auxiliary graphs $H_{F,t}$ that we define later, rather than computing the dynamic programming in the last layer of the trees. The second term $\widetilde{O}(n^2 L^{f+2})$ is the time it takes to reconstruct all the paths $P^{L'}(s,t, \{a_1, \ldots, a_i\})$ that are explicitly stored in all the nodes of all the trees $\{ FT^{L',f}(u,v) \}_{u,v \in V, 1 \le L' \le L}$. It takes $\widetilde{O}(n^2 L^{f+2})$ time since the number of nodes in all the trees $\{ FT^{L',f}(u,v) \}_{u,v \in V, 1 \le L' \le L}$ is $\widetilde{O}(n^2 L^{f+1})$, and it takes $O(L)$ time to reconstruct each path (which contains at most $L$ edges). In Section \ref{sec:improving-paths-reconstruction} we reduce this term to $\widetilde{O}(n^2 L^f) = \widetilde{O}(n^{2+\epsilon})$ by not reconstructing the paths in the depth-$f$ leaves of the trees. \subsubsection{The improved algorithm for constructing the trees $FT^{L,f}(u,v)$}\label{sec:improving-paths-reconstruction} We describe the algorithm with the improved construction time. First, the algorithm constructs the trees $\{ FT^{L',f}(u,v) \}_{1 \le L' \le L, u,v \in V}$ up to level $f-1$, {\sl i.e.}, it constructs the trees $\{ FT^{L',f-1}(u,v) \}_{1 \le L' \le L, u,v \in V}$ without constructing the nodes in level $f$.
Note that using the analysis above this takes $\widetilde{O}(mn L^{(f-1)+1}) = \widetilde{O}(mn L^f) = \widetilde{O}(mn^{1+\epsilon})$ time. We are left with explaining how to accelerate the construction of the last layer (the layer of depth $f$) of the trees $\{ FT^{L,f}(u,v) \}_{u,v \in V}$. The algorithm reconstructs the paths in level $f-1$ only for the trees $\{ FT^{L,f}(u,v) \}_{u,v \in V}$. That is, for every leaf node $FT^{L,f-1}(u,v, a_1, \ldots, a_{f-1})$ of $FT^{L,f-1}(u,v)$ we can reconstruct the path $P^L_G(u,v, \{a_1, \ldots, a_{f-1}\})$ in $O(L)$ time by following the parent pointers $\textrm{parent}^{L}(u,v, F)$ as computed in Equation \ref{eq:parent-pointer}. Reconstructing these $\widetilde{O}(n^2 L^{f-1})$ paths $P^L_G(u,v, \{a_1, \ldots, a_{f-1}\})$ takes $\widetilde{O}(n^2 L^f) = \widetilde{O}(n^{2+\epsilon})$ time. Then, for every vertex or edge $a_f \in P^L_G(u,v, \{a_1, \ldots, a_{f-1}\})$ we need to construct the leaf node $FT^{L,f}(u,v, a_1, \ldots, a_{f-1}, a_f)$. We describe the construction of the leaves $FT^{L,f}(u,v, a_1, \ldots, a_{f-1}, a_f)$ in the following paragraphs. Let ${\cal F}_{u,v} = \{ \{a_1, \ldots, a_f \} \ | \text{ the node } FT^{L,f}(u,v, a_1, \ldots, a_f) \text{ is a leaf node of } FT^{L,f}(u,v) \}$. Equivalently, ${\cal F}_{u,v} = \{ \{a_1, \ldots, a_f \} \ | \ a_1 \in P^{L}(u,v), a_2 \in P^{L}(u,v, \{a_1\}), \ldots, a_f \in P^{L}(u,v, \{a_1, \ldots, a_{f-1}\}) \}$. In the remainder of this section we describe how to compute the distances $d^L_G(u,v,F)$ and the parent pointers $\textrm{parent}^L(u,v,F)$ for every $u,v \in V, F \in {\cal F}_{u,v}$ in total $\widetilde{O}(mn L^f)$ time. We first explain why it is not possible to use Equation \ref{eq:dynamic-programming} to compute the distance $d^L_G(u,v, \{a_1, \ldots, a_f\})$. Let $F = \{a_1, \ldots, a_f \} \in {\cal F}_{u,v}$.
According to Equation \ref{eq:dynamic-programming}, $d^{L}_G(u,v, F) = \min_{z} \{ \omega(u,z) + d^{L-1}(z,v,F) \ | \ (u,z) \in E \text{ and } u,z,(u,z) \not \in F \}$, where the distance $d^{L-1}(z,v,F)$ needs to be obtained by querying the tree $FT^{L-1,f}(z,v)$ with the set $F$. But querying the tree $FT^{L-1,f}(z,v)$ with the set $F$ might reach a leaf node $FT^{L-1,f}(z,v, a_1, \ldots, a_f)$, and since we have not yet constructed the last layer (of level $f$) of $FT^{L-1,f}(z,v)$, the distance $d^{L-1}(z,v, F)$ has not been computed yet. This breaks the dynamic programming of Equation \ref{eq:dynamic-programming}. To overcome this difficulty, we run Dijkstra in auxiliary graphs $H_{F,t}$. \subsubsection{The auxiliary graphs $H_{F,t}$} \label{sec:dijsktra-last-layer} Let $t \in V$ be a fixed vertex, and define ${\cal F}_{t} = \cup_{s \in V} {\cal F}_{s,t}$. For every $F \in {\cal F}_t$ we build the graph $H_{F,t} = (V_{F,t}, E_{F,t})$ as follows. Let $V'_{F,t} \subseteq V$ be the set of all vertices $s \in V$ such that $F \in {\cal F}_{s,t}$. Note that we can easily compute the sets $V'_{F,t}$ for all the vertices $t\in V$ in $\widetilde{O}(n^2 L^f)$ time using the following procedure. Initialize an empty hash table $h$. For every $s \in V, F \in {\cal F}_{s,t}$: \begin{itemize} \item Check if $h$ contains $(F,t)$. If $(F,t) \not \in h$ then set $h[F,t]$ to be an empty list of vertices (which will eventually represent $V'_{F,t}$). \item Append $s$ to the end of the list $h[F,t]$. \end{itemize} After scanning all the sets $F \in {\cal F}_{s,t}$ for every $s,t \in V$, the vertices of $V'_{F,t}$ are listed in $h[F,t]$. The runtime of this procedure over all the vertices $t\in V$ is $\widetilde{O}(n^2 L^f)$, as the number of sets $F$ in $\bigcup_{s,t \in V} {\cal F}_{s,t}$ is at most the number of leaves in all the trees $\{ FT^{L,f}(s,t) \}_{s,t \in V}$, which is $\widetilde{O}(n^2 L^f)$.
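The hash-table procedure above can be sketched in a few lines of Python; representing each leaf of the trees by a tuple $(s, t, F)$ is an assumption of this sketch.

```python
def compute_source_sets(leaves):
    """Bucket the leaf tuples (s, t, F) by the key (F, t): after a single
    scan, h[(F, t)] lists exactly the sources s with F in F_{s,t}, i.e.
    the set V'_{F,t}. Runtime is linear in the number of leaves."""
    h = {}
    for s, t, F in leaves:
        key = (frozenset(F), t)   # F must be hashable to serve as a key
        if key not in h:
            h[key] = []           # will eventually hold V'_{F,t}
        h[key].append(s)
    return h
```

The single pass over the leaves is what gives the $\widetilde{O}(n^2 L^f)$ bound: there is one iteration per leaf, and each hash-table operation takes near-constant time.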
Let $V_{F,t}$ be the set consisting of the vertices of $V'_{F,t}$ and their neighbours, {\sl i.e.}, $V_{F,t} = V'_{F,t} \cup N(V'_{F,t})$. Let $s \in V_{F,t}$ and let $u$ be a neighbour of $s$ in $G$, {\sl i.e.}, $(s,u) \in E$, such that $s,u$ have not failed ({\sl i.e.}, $s,u, (s,u) \not \in F$). If $u \in V_{F,t}$ then we add the edge $(s,u)$ to $H_{F,t}$ with its weight $\omega(s,u)$. Otherwise $u \not \in V_{F,t}$, which means that if we query the tree $FT^{L,f}(u,t)$ with the set $F$ then the query ends in an internal node of $FT^{L,f}(u,t)$ and not in a leaf of depth $f$. Let $FT^{L,f}(u,t, a_1, \ldots, a_i)$ be the internal node we reach at the end of the query, where $\{a_1, \ldots, a_i\} \subsetneq F$. Then the path $P^L_G(u,t, \{a_1, \ldots, a_i\})$ does not contain any element of $F$ (as otherwise the query of $F$ in the tree $FT^{L,f}(u,t)$ would not have ended at this node), and hence we have already computed $d^L_G(u,t,F) = d^L_G(u,t, \{a_1, \ldots, a_i\})$ in the node $FT^{L,f}(u,t, a_1, \ldots, a_i)$. We add an edge $(u,t)$ to $H_{F,t}$ and assign to it the weight $d^L_G(u,t,F)$; we refer to these edges as {\sl shortcuts}. Finally, we run Dijkstra from $t$ in the graph $H_{F,t}$ with reversed edge directions. This gives us the distances $d_{H_{F,t}}(s,t)$ for every $s \in V_{F,t}$. We claim that $d_{H_{F,t}}(s,t) \le d^L_G(s,t,F)$ and that running Dijkstra in all the graphs $H_{F,t}$ takes $\widetilde{O}(mn^{1+\epsilon})$ time. \begin{lemma} \label{lem:H-dijkstra-correctness} Let $s \in V_{F,t}$, then $d_G(s,t,F) \le d_{H_{F,t}}(s,t) \le d^L_G(s,t,F)$. \end{lemma} \begin{proof} We first prove that $d_G(s,t,F) \le d_{H_{F,t}}(s,t)$. If $d_{H_{F,t}}(s,t) < \infty$ then the shortest path from $s$ to $t$ in $H_{F,t}$ is composed of two types of edges: \begin{itemize} \item An edge $(u,v)$ such that $u, v,(u,v) \not \in F$, whose weight equals $\omega(u,v)$ (its weight in the graph $G$).
In this case, the edge $(u,v)$ also exists in the graph $G \setminus F$ with the same weight. \item A ``shortcut'' edge $(u,t)$ whose weight is $d^L_G(u,t,F)$. In this case, there is a path from $u$ to $t$ in the graph $G \setminus F$ whose weight is $d^L_G(u,t,F)$. \end{itemize} In both cases we get that for every edge $(u,v)$ in the graph $H_{F,t}$ there exists a $u$-to-$v$ path in the graph $G \setminus F$ whose weight in $G \setminus F$ equals the weight of the edge $(u,v)$ in $H_{F,t}$. Hence, $d_G(s,t,F) \le d_{H_{F,t}}(s,t)$. We now prove that $d_{H_{F,t}}(s,t) \le d^L_G(s,t,F)$. Let $P^L_G(s,t,F) = <v_0, \ldots, v_k>$ be a shortest path from $s$ to $t$ in $G \setminus F$ on at most $L$ edges ({\sl i.e.}, $k \le L$). We prove that there exists an $s$-to-$t$ path $P$ in $H_{F,t}$ whose weight satisfies $\omega_{H_{F,t}}(P) \le \omega_G(P^L_G(s,t,F))$; since $d_{H_{F,t}}(s,t) \le \omega_{H_{F,t}}(P)$, it follows that $d_{H_{F,t}}(s,t) \le \omega_{H_{F,t}}(P) \le \omega_G(P^L_G(s,t,F)) = d^L_G(s,t,F)$. In order to construct the path $P$ in $H_{F,t}$, note that for every edge $(v_i, v_{i+1})$ of $P^L_G(s,t,F)$ (for every $0 \le i < k$) it holds that either $(v_i, v_{i+1}) \in G \setminus F$ exists in $H_{F,t}$ with the same weight as its weight in $G$, or there exists an edge $(v_i, t)$ in $H_{F,t}$ whose weight is $d^L_G(v_i, t, F)$. Let $0 \le \ell \le k$ be the maximum index such that for every $0 \le i < \ell$ the edge $(v_i, v_{i+1}) \in H_{F,t}$. We define the path $P := <v_0, \ldots, v_\ell> \circ (v_\ell, t)$; then $P$ is a path in $H_{F,t}$ and its weight is $\omega_{H_{F,t}}(P) = \omega_{H_{F,t}}(<v_0, \ldots, v_\ell>) + \omega_{H_{F,t}}(v_\ell, t) = \omega_G(<v_0, \ldots, v_\ell>) + d^L_G(v_\ell, t, F)$, where the last equality holds as for every $0 \le i < \ell$ the edge $(v_i, v_{i+1})$ exists in $H_{F,t}$ with the same weight as its weight in $G$, and the edge $(v_\ell, t)$ exists in $H_{F,t}$ with weight $d^L_G(v_\ell, t, F)$.
It follows that $\omega_{H_{F,t}}(P) = \omega_G(<v_0, \ldots, v_\ell>) + d^L_G(v_\ell, t, F) = d^{\ell}_G(s,v_\ell,F) + d^L_G(v_\ell, t, F) \le d^L_G(s,t,F)$, where the last inequality holds by the triangle inequality and the fact that $<v_0, \ldots, v_\ell> \subseteq P^L_G(s,t,F)$. Hence, the shortest path from $s$ to $t$ in $H_{F,t}$ has weight at most $d^L_G(s,t,F)$, and thus $d_{H_{F,t}}(s,t) \le d^L_G(s,t,F)$. \end{proof} \begin{lemma} \label{lem:H-dijkstra-time} Running Dijkstra's algorithm in all the graphs $H_{F,t}$ takes $\widetilde{O}(mn L^f) = \widetilde{O}(mn^{1+\epsilon})$ time. \end{lemma} \begin{proof} The runtime of Dijkstra in the graph $H_{F,t}$ is $\widetilde{O}(\Sigma_{s \in V'_{F,t}} \text{deg}(s))$, as $O(\Sigma_{s \in V'_{F,t}} \text{deg}(s))$ is a bound on the number of edges and vertices in $H_{F,t}$. It follows that the runtime of running all the Dijkstra computations is $\widetilde{O}(\Sigma_{t \in V} \Sigma_{F \in {\cal F}_t} \Sigma_{s \in V'_{F,t}} \text{deg}(s))$. Note that a vertex $s \in V'_{F,t}$ iff $F = \{a_1, \ldots, a_f \}$ and $FT^{L,f}(s,t, a_1, \ldots, a_f)$ is a leaf node of $FT^{L,f}(s,t)$ at depth $f$. Hence, for a fixed vertex $t \in V$, the sum $\Sigma_{F \in {\cal F}_t} \Sigma_{s \in V'_{F,t}} \text{deg}(s)$ is the sum of $\text{deg}(s)$ over every leaf node of every tree $FT^{L,f}(s,t)$. As $FT^{L,f}(s,t)$ contains $\widetilde{O}(L^f)$ leaves, we get $\Sigma_{F \in {\cal F}_t} \Sigma_{s \in V'_{F,t}} \text{deg}(s) = \widetilde{O}(L^f \cdot \Sigma_{s \in V} \text{deg}(s)) = \widetilde{O}(m L^f)$. Therefore, \newline \noindent $\widetilde{O}(\Sigma_{t \in V} \Sigma_{F \in {\cal F}_t} \Sigma_{s \in V'_{F,t}} \text{deg}(s)) = \widetilde{O}(nm L^f) = \widetilde{O}(mn^{1+\epsilon})$, where the last equality holds as $L = n^{\epsilon/f}$.
\end{proof} \subsubsection{Reducing the Runtime of the Greedy Selection Algorithm} \label{sec:improved-greedy} We have $O(n^2)$ trees $\{FT^{L,f}(s,t)\}_{s,t \in V}$, every tree contains $O(n^\epsilon)$ nodes, and every node $FT^{L,f}(s,t, a_1, \ldots, a_i)$ contains a path $P^L_G(s,t, \{a_1, \ldots, a_i\})$ with at most $L = n^{\epsilon/f}$ edges. In the greedy algorithm we want to hit all of these paths that contain at least $n^{\epsilon/f}/2$ edges and at most $n^{\epsilon/f}$ edges. In total there might be $O(n^{2+\epsilon})$ such paths $\{P_G(s,t, a_1, \ldots, a_i)\}$, each containing at least $n^{\epsilon/f}/2$ edges and at most $n^{\epsilon/f}$ edges, and thus according to Lemma \ref{lemma:greedy}, finding a set of vertices $R$ of size $\widetilde{O}(n^{1-\epsilon/f})$ which hits all these paths takes $\widetilde{O}(n^{2+\epsilon+\epsilon/f})$ time. Let $R_{<f}$ be the hitting set of vertices obtained by the greedy algorithm which hits all the paths ${\cal P}_{<f} = \{P_G(s,t, \{a_1, \ldots, a_i\}) | 1 \le i < f\}$ that contain at least $n^{\epsilon/f}/4$ edges and at most $n^{\epsilon/f}$ edges; these are paths that appear in the internal nodes of the trees $FT^{L,f}(s,t)$ (the nodes not in the last layer of the trees). Since there are only $O(n^{2+\epsilon-\epsilon/f})$ such paths in ${\cal P}_{<f}$, each containing at least $n^{\epsilon/f}/8$ edges and at most $n^{\epsilon/f}$ edges, according to Lemma \ref{lemma:greedy} finding a set of vertices $R_{<f}$ of size $\widetilde{O}(n^{1-\epsilon/f})$ which hits all of these paths takes $\widetilde{O}(n^{2+\epsilon})$ time. We define $P_{\text{remaining}}$ to be the subset of paths $\{P_G(s,t, \{a_1, \ldots, a_f\})\}$ for which the following conditions hold: \begin{itemize} \item $P_G(s,t, \{a_1, \ldots, a_f\})$ is a path stored in some leaf node $FT^{L,f}(s,t, a_1, \ldots, a_f)$ of depth $f$ in at least one of the trees $FT^{L,f}(s,t)$.
\item $P_G(s,t, \{a_1, \ldots, a_f\})$ contains between $n^{\epsilon/f}/2$ and $n^{\epsilon/f}$ edges. \item $P_G(s,t, \{a_1, \ldots, a_f\})$ does not contain any of the vertices of $R_{<f}$. \end{itemize} In the following we describe how to compute, in $\widetilde{O}(mn^{1+\epsilon})$ time, a set ${\cal P}_f$ of $\widetilde{O}(n^{2+\epsilon-\epsilon/f})$ paths, each containing at least $n^{\epsilon/f}/8$ edges, such that if we hit all the paths of ${\cal P}_f$ then we also hit every path of $P_{\text{remaining}}$. In Lemma \ref{lem:H-dijkstra-time} we ran Dijkstra in the graph $H_{F,t}$ and computed shortest paths to $t$; let $T_{F,t}$ be the shortest paths tree rooted at $t$ in the graph $H_{F,t}$. Let $X_{F,t}$ be the set of all vertices $x \in V_{F,t}$ in the tree $T_{F,t}$ at depth $n^{\epsilon/f}/8$ ({\sl i.e.}, the number of edges from the root of $T_{F,t}$ to $x$ is $n^{\epsilon/f}/8$) such that there exists at least one vertex $y \in V_{F,t}$ which is a descendant of $x$ in $T_{F,t}$ and $y$ is at depth $n^{\epsilon/f}/4$ in $T_{F,t}$. Let $P_{F,t}$ be the set of paths in the tree $T_{F,t}$ from every vertex $x \in X_{F,t}$ to the root $t$, where a shortcut edge $(u,t)$ is replaced with the subpath $P^L_G(u,t,F)$, so that every path in $P_{F,t}$ is a valid path in $G \setminus F$. Finally, let ${\cal P}_f = \bigcup_{F,t} P_{F,t}$. We claim that ${\cal P}_f$ is a set of $\widetilde{O}(n^{2+\epsilon-\epsilon/f})$ paths, each containing at least $n^{\epsilon/f}/8$ edges, such that if we hit all the paths of ${\cal P}_f$ then we also hit all the paths of $P_{\text{remaining}}$. We first need the following lemma. \begin{lemma} \label{lem:H-bound-vertices} The total number of vertices in all the graphs $H_{F,t}$ is $\Sigma_{F,t} |V_{F,t}| = \widetilde{O}(n^{2+\epsilon})$.
\end{lemma} \begin{proof} Since every vertex of $V_{F,t}$ is either a vertex of $V'_{F,t}$ or a neighbour of such a vertex, it holds that $\Sigma_{F,t} |V_{F,t}| \le \Sigma_{F,t} \Sigma_{x \in V'_{F,t}} \text{deg}(x)$. Note that a vertex $x \in V'_{F,t}$ iff querying the tree $FT^{L,f}(x,t)$ with $F$ results in reaching a leaf at depth $f$ of the tree $FT^{L,f}(x,t)$. Hence, for a fixed vertex $x \in V$, the sum $\Sigma_{F,t \ | \ x \in V'_{F,t}} \text{deg}(x)$ is bounded by the number of nodes in the last layer of all the trees $\{ FT^{L,f}(x,t) \}_{t \in V}$ multiplied by $\text{deg}(x)$. Since the last layer of every tree $FT^{L,f}(x,t)$ contains $n^{\epsilon}$ nodes, and for every vertex $x \in V$ there are $n$ trees $\{ FT^{L,f}(x,t) \}_{t \in V}$, we get a bound of $\widetilde{O}(\Sigma_{x \in V} n^{1+\epsilon} \text{deg}(x)) = \widetilde{O}(mn^{1+\epsilon})$ on the number of vertices in all the graphs $H_{F,t}$. \end{proof} \begin{lemma} ${\cal P}_f$ is a set of $\widetilde{O}(n^{2+\epsilon-\epsilon/f})$ paths, each containing at least $n^{\epsilon/f}/8$ edges, such that if we hit all the paths of ${\cal P}_f$ then we also hit all the paths of $P_{\text{remaining}}$. The runtime to compute ${\cal P}_f$ is $\widetilde{O}(mn^{1+\epsilon})$. \end{lemma} \begin{proof} Let $P_G(s,t, \{a_1, \ldots, a_f\}) = P_G(s,t,F) \in P_{\text{remaining}}$ be the path stored in the node $FT^{L,f}(s,t, a_1, \ldots, a_f)$ such that $\{a_1, \ldots, a_f \} = F$. Denote $P_G(s,t,F) = <v_1, \ldots, v_r>$. Since $P_G(s,t,F) \in P_{\text{remaining}}$, the path $P_G(s,t,F)$ contains between $n^{\epsilon/f}/2$ and $n^{\epsilon/f}$ edges and is not hit by $R_{<f}$. Let $1 \le i < r- n^{\epsilon/f}/4-1$; then $v_i$ is a vertex on the path $P_G(s,t,F)$ which is not among the last $n^{\epsilon/f}/4$ vertices of the path. Then $P_G(v_i,t,F)$ is a subpath of $P_G(s,t,F)$ (since shortest paths are unique).
Furthermore, since $R_{<f}$ (which hits all the paths which contain at least $n^{\epsilon/f}/4$ vertices in all the non-leaf nodes of all the trees) does not hit $P_G(v_i,t,F)$, it follows that $P_G(v_i,t,F)$ is stored in a leaf node $FT^{L,f}(v_i,t,F)$ of the tree $FT^{L,f}(v_i,t)$. A similar argument shows that $P_G(v_{i+1},t,F)$ is stored in a leaf node $FT^{L,f}(v_{i+1},t,F)$ of the tree $FT^{L,f}(v_{i+1},t)$. It follows that $v_i, v_{i+1} \in V_{F,t}$ and $(v_i, v_{i+1}) \in H_{F,t}$. Therefore, $P_H(s,t)$ contains at least all the edges $(v_i, v_{i+1})$ for every $1 \le i < r - n^{\epsilon/f}/4 - 1$, and since $r$ is the number of vertices of $P_G(s,t,F)$, which contains at least $n^{\epsilon/f}/2$ vertices, $P_H(s,t)$ contains at least $n^{\epsilon/f}/4$ vertices. Since $P_H(s,t)$ is a path in $H_{F,t}$ containing at least $n^{\epsilon/f}/4$ vertices, it holds for the $n^{\epsilon/f}/8$-th vertex $x$ from the end of $P_H(s,t)$ that $x \in X_{F,t}$. Therefore, the subpath $P_H(x,t)$ of $P_H(s,t)$ from $x$ to $t$, which contains $n^{\epsilon/f}/8$ edges, is contained in ${\cal P}_f$. Hence, if we hit ${\cal P}_f$ we also hit $P_H(x,t)$ and therefore we also hit $P_G(s,t,F)$. This proves that hitting all the paths of ${\cal P}_f$ also hits all the paths of $P_{\text{remaining}}$. Next, we prove that ${\cal P}_f$ contains $\widetilde{O}(n^{2+\epsilon-\epsilon/f})$ paths. We have already proved in Lemma \ref{lem:H-bound-vertices} that the number of vertices in all the graphs $H_{F,t}$ is $\widetilde{O}(n^{2+\epsilon})$. Recall that ${\cal P}_f = \bigcup_{F,t} \{P_H(x,t) \ | \ x \in X_{F,t} \}$. Furthermore, for every vertex $x \in X_{F,t}$ there exist at least $n^{\epsilon/f}/8$ distinct vertices in the subtree of $x$. To see this, recall that by definition, if $x \in X_{F,t}$ then there exists at least one vertex $y \in V_{F,t}$ which is a descendant of $x$ in $T_{F,t}$ and $y$ is at depth $n^{\epsilon/f}/4$ in $T_{F,t}$.
Thus, the set of vertices of $T_{F,t}$ from $x$ to $y$ contains at least $n^{\epsilon/f}/8$ vertices which belong to the subtree of $T_{F,t}$ rooted at $x$. Therefore, $|{\cal P}_f| = \sum_{F,t} |X_{F,t}| = \widetilde{O}(n^{2+\epsilon} / (n^{\epsilon/f}/8)) = \widetilde{O}(n^{2+\epsilon-\epsilon/f})$. Finally, the runtime to compute ${\cal P}_f$ is $\widetilde{O}(mn^{1+\epsilon})$, as it is dominated by the Dijkstra computations in the graphs $H_{F,t}$, whose total runtime is $\widetilde{O}(mn^{1+\epsilon})$ according to Lemma \ref{lem:H-dijkstra-time}, and every path in ${\cal P}_f$ contains at least $n^{\epsilon/f}/8$ edges by definition. \end{proof} After computing ${\cal P}_f$ in $\widetilde{O}(n^{2+\epsilon})$ time, we run the greedy selection algorithm from Lemma \ref{lemma:greedy} on the set of paths ${\cal P}_f$ in $\widetilde{O}(n^{2+\epsilon})$ time (note that the bound on the runtime follows as $|{\cal P}_f| = \widetilde{O}(n^{2+\epsilon - \epsilon/f})$) to obtain a set $R_f$ of $\widetilde{O}(n^{1-\epsilon/f})$ vertices that hit all the paths of ${\cal P}_f$ and thus also hit all the paths of $P_{\text{remaining}}$. Let $R = R_{<f} \cup R_f$. So in total this takes $\widetilde{O}(n^{2+\epsilon})$ time to find the set $R$ of $\widetilde{O}(n^{1-\epsilon/f})$ vertices that hit all the paths $\{P_G(s,t, \{a_1, \ldots, a_i\})\}$ in all the nodes of all the trees $FT^{L,f}(s,t)$ which contain at least $n^{\epsilon/f}/2$ edges and at most $n^{\epsilon/f}$ edges. \begin{corollary} One can find deterministically in $\widetilde{O}(n^{2+\epsilon})$ time a set $R$ of $\widetilde{O}(n^{1-\epsilon/f})$ vertices that hit all the paths $\{P_G(s,t, \{a_1, \ldots, a_i\})\}$ in all the nodes of all the trees $FT^{L,f}(s,t)$ which contain at least $n^{\epsilon/f}/2$ edges and at most $n^{\epsilon/f}$ edges.
\end{corollary} \subsection{Assumptions} \label{sec:assumption-unique} In the algorithms we described for the case of directed graphs with real edge weights for constructing and querying the DSO we made two assumptions: \begin{itemize} \item We assumed all edge weights are non-negative, so that we can run Dijkstra's algorithm. \item We assumed all the shortest paths $P_G(s,t), P_G(s,t,F), P^L_G(s,t), P^L_G(s,t,F)$ are unique. \end{itemize} In this section we justify these two assumptions. \subsubsection{Handling Negative Weights} \label{sec:negative} In the description above we assumed that edge weights are non-negative. In this section we describe how to reduce the problem of general edge weights to non-negative edge weights. We handle it similarly to Weimann and Yuster \cite{WY13}, using the well-known method of feasible price functions to transform the negative edge weights into non-negative ones in the graph $G$ as the first step of the preprocessing algorithm. For a directed graph $G = (V,E)$ with (possibly negative) edge weights $\omega(\cdot)$, a price function is a function $\phi$ from the vertices of $G$ to the reals. For an edge $(u,v)$, its reduced weight with respect to $\phi$ is $\omega_\phi(u,v) = \omega(u,v) + \phi(u) - \phi(v)$. A price function $\phi$ is feasible if $\omega_\phi(u,v) \ge 0$ for all edges $(u,v) \in E$. The reason feasible price functions are used in the computation of shortest paths is that for any two vertices $s,t \in V$ and any $s$-to-$t$ path $P$, $\omega_\phi(P) = \omega(P) + \phi(s) - \phi(t)$. This shows that an $s$-to-$t$ path is shortest with respect to $\omega_\phi(\cdot)$ iff it is shortest with respect to $\omega(\cdot)$. Moreover, the $s$-to-$t$ distance with respect to the original weights $\omega(\cdot)$ can be recovered by adding $\phi(t)-\phi(s)$ to the $s$-to-$t$ distance with respect to $\omega_\phi(\cdot)$. The most natural feasible price function comes from single-source distances.
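As a concrete illustration of this reduction, the following is a minimal Python sketch (an illustration under the definitions above, not the paper's actual preprocessing code). It computes a feasible price function $\phi$ as single-source distances from a virtual super-source via Bellman-Ford, reweights the edges with $\omega_\phi(u,v) = \omega(u,v) + \phi(u) - \phi(v)$, runs Dijkstra on the non-negative reduced weights, and shifts the distances back:

```python
import heapq

def reweight_and_sssp(n, edges, source):
    """Johnson-style reduction: feasible prices via Bellman-Ford from a
    virtual super-source, then Dijkstra on the reduced (non-negative)
    weights, then undo the shift.  edges = list of (u, v, w) triples."""
    INF = float('inf')
    # Bellman-Ford from a virtual vertex with 0-weight edges to everyone,
    # which is equivalent to starting all phi values at 0.
    phi = [0.0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if phi[u] + w < phi[v]:
                phi[v] = phi[u] + w
    # Build adjacency lists with the reduced weights w + phi[u] - phi[v] >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + phi[u] - phi[v]))
    # Standard Dijkstra with a binary heap and lazy deletion.
    dist = [INF] * n
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    # Recover distances with respect to the original weights.
    return [d + phi[t] - phi[source] if d < INF else INF
            for t, d in enumerate(dist)]
```

For instance, on the three-vertex graph with edges $(0,1)$ of weight $2$, $(1,2)$ of weight $-1$ and $(0,2)$ of weight $5$, the recovered distances from vertex $0$ are $0$, $2$ and $1$, matching the original weights.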
Let $s$ be a new vertex added to $G$ with an edge from $s$ to every other vertex of $G$ having weight $0$. Let $d(v)$ denote the distance from $s$ to vertex $v \in V$. Then for every edge $(u,v) \in E$, we have that $d(v) \le d(u) + \omega(u,v)$, so $\omega_d(u,v) \ge 0$ and thus $d( \cdot )$ is feasible. This means that knowing $d(\cdot)$, we can now use Dijkstra's SSSP algorithm on $G$ (with reduced weights) from any source we choose and obtain the shortest paths with respect to the original graph $G$. Therefore, we first compute $\phi = d(\cdot)$ in the original graph, store $\phi$, and change the weight of every edge $(u,v)$ to $\omega_d(u,v)$, which is non-negative. Then we continue with the preprocessing and query algorithms as described in Section \ref{sec:dso1}. Finally, at the end of the query, once we have computed $d_G(s,t,F)$ with respect to the weights $\omega_\phi(\cdot)$, we add $\phi(t) - \phi(s)$ to it to obtain the weight of the shortest path $P_G(s,t,F)$ with respect to the original weights $\omega(\cdot)$. \subsubsection{Unique Shortest Paths Assumption} \label{sec:unique} In this section we justify the unique shortest paths assumption. For randomized algorithms, unique shortest paths can be achieved easily by using a folklore method of adding small perturbations to the edge weights, such that all the shortest paths in the resulting graph are unique w.h.p. and a shortest path in the resulting graph is also a shortest path in the original graph. We describe a way to define unique shortest paths in a graph $H$ which fits the algorithms we presented. First, we assume that the weights are non-negative according to the reduction described in Section \ref{sec:negative}. Next, let $0 < \epsilon' < 1$ be a small enough number such that $n \cdot \epsilon' < \min \{ \omega(u,v) \ | \ (u,v) \in E \}$. Add $\epsilon'$ to the weight of all the edges of the graph.
Then we get that all the edges have positive weights, and every shortest path in the graph after adding $\epsilon'$ is also a shortest path in the original graph. Now, we define the unique shortest paths $P_H^L(s,t)$ in the graph $H$ by recursion on $L \ge 0$. For $L=0$ we define $P_H^0(s,s) = \langle s \rangle$ and $d_H^0(s,s) = 0$, and for every pair of vertices $s, t \in V, s \ne t$ we define $P_H^0(s,t) = \emptyset$ and $d_H^0(s,t) = \infty$. For the inductive step we need to define $P_H^L(s,t)$. Let $u_1, \ldots, u_\ell$ be all the neighbours of $s$ which minimize $\omega(s,u_i) + d_H^{L-1}(u_i, t)$. Let $u_i$ be the vertex whose index (label) is minimal among $u_1, \ldots, u_\ell$. We define $P_H^L(s,t) = (s,u_i) \circ P_H^{L-1}(u_i, t)$ and $d_H^L(s,t) = \omega(s,u_i) + d_H^{L-1}(u_i,t)$, where $P_H^{L-1}(u_i, t)$ is uniquely defined by the induction hypothesis. For every pair of vertices $s,t \in V$ for which the above inductive step did not define $P_H^L(s,t)$ we define $P_H^L(s,t) = \emptyset$ and $d_H^L(s,t) = \infty$. We define $P_H(s,t)$ as follows. Let $X = \arg \min_{0 \le L \le n} \{ d_H^L(s,t) \}$. Then we define $P_H(s,t) = P_H^X(s,t)$. Finally, applying this construction with $H = G$, for every $s,t \in V$ we obtain $P_G(s,t)$ and define $d_G(s,t) = \omega(P_G(s,t))$, and for every $0 \le L \le n$ we obtain $P^L_G(s,t)$ and define $d^L_G(s,t) = \omega(P^L_G(s,t))$. For a subset $F \subseteq E \cup V$ we define $P^L_G(s,t,F) = P_{G\setminus F}^L(s,t)$, $d^L_G(s,t,F) = d_{G\setminus F}^L(s,t)$, $P_G(s,t,F) = P_{G\setminus F}(s,t)$ and $d_G(s,t,F) = d_{G\setminus F}(s,t)$. It is not difficult to prove the following lemma. \begin{lemma} For every $s,t \in V, F \subseteq E \cup V, 0 \le L \le n$ the path $P^L_G(s,t,F)$ is a shortest path among all $s$-to-$t$ paths in $G\setminus F$ that contain $L$ edges, and the path $P_G(s,t,F)$ is a shortest path from $s$ to $t$ in $G\setminus F$.
Both $P_G(s,t,F)$ and $P^L_G(s,t,F)$ are uniquely defined, and their lengths are $d_G(s,t,F)$ and $d^L_G(s,t,F)$ respectively. \end{lemma} The following lemma is also not difficult to prove. \begin{lemma} When running Dijkstra or the dynamic programming algorithm as described in Section \ref{sec:dynamic-programming} in the graph $G \setminus F$, if during the execution of the algorithm we always consider vertices in ascending order of their labels (indices), instead of in arbitrary order, then we compute the unique shortest paths $P^L_G(s,t,F)$. \end{lemma} \section{Open Questions} \label{sec:open-questions} Here are some open questions that immediately follow from our work. \begin{itemize} \item Weimann and Yuster \cite{WY13} presented a randomized algebraic algorithm for constructing a DSO whose preprocessing runtime is subcubic and whose query has subquadratic runtime, supporting $f = O(\lg n / \lg \lg n)$ edge or vertex failures. Grandoni and Vassilevska Williams \cite{GW12} presented a randomized algebraic algorithm for constructing a DSO whose preprocessing runtime is subcubic and whose query has sublinear runtime, supporting a single ($f=1$) edge failure. The preprocessing algorithms of both these DSOs are randomized and algebraic, and it remains an open question whether there exists a DSO with a subcubic deterministic preprocessing algorithm and a subquadratic or sublinear deterministic query algorithm, matching their randomized counterparts. \item Both the DSOs of Weimann and Yuster \cite{WY13} and Grandoni and Vassilevska Williams \cite{GW12} use the following randomized procedure ({\sl e.g.}, Lemma 2 in \cite{GW12}): Let $0 < \epsilon < 1$, $1 \le f \le \epsilon \lg n/ \lg \lg n$, $L = n^{\epsilon/f}$, and let $C > 0$ be a large enough constant. Sample $s = L^f \cdot C \log n$ graphs $\{G_1, \ldots ,G_s\}$, where each $G_i$ is obtained from $G$ by independently removing each edge with probability $1/L$.
For $C$ large enough, it holds whp that for every $(s,t,F)$ for which there exists a replacement path $P_G(s,t,F)$ on at most $L$ nodes, there is at least one $G_i$ that does not contain $F$ but contains at least one replacement path for $(s,t,F)$ on at most $L$ edges. The time to compute the graphs $G_1, \ldots, G_s$ using randomization is $\widetilde{O}(m \cdot s) = \widetilde{O}(n^{2+\epsilon})$. We ask what is the minimum $s$ such that we can deterministically compute such graphs $G_1, \ldots, G_s$ in $\widetilde{O}(n^2 \cdot s)$ time such that the above property holds (that for every $(s,t,F)$ for which there exists a replacement path $P_G(s,t,F)$ on at most $L$ nodes, there is at least one $G_i$ that does not contain $F$ but contains at least one replacement path for $(s,t,F)$ on at most $L$ edges). The randomized algorithm has a simple solution (as mentioned above, sample $s = L^f \cdot C \log n$ graphs $\{G_1, \ldots ,G_s\}$, where each $G_i$ is obtained from $G$ by independently removing each edge with probability $1/L$). As there are $O(n^{2f+2})$ different possible queries $(s,t,F)$, and there are at most $O(n^{2f+3})$ intervals $P_G(s,t,F)$ (which we want to maintain) containing exactly $n^{\epsilon/f}$ vertices, it is not difficult to prove (as done in \cite{WY13}) that for every possible query $(s,t,F)$ there exists whp at least one graph $G_i$ which does not contain $F$ but contains $P_G(s,t,F)$. It is not clear how to efficiently derandomize even a degenerate version of the above construction. We can even allow some relaxations of the above requirements. Assume there is a list ${\cal L} = \{ (s_1, t_1, F_1), \ldots, (s_\ell, t_\ell, F_\ell) \}$ of at most $\ell = O(n^{2+\epsilon})$ queries which are the only queries that interest us, and assume there is a smaller set of intervals ${\cal P} = \{ P_G(s_1, t_1, F_1), \ldots, P_G(s_\ell, t_\ell, F_\ell) \}$, each containing exactly $n^{\epsilon/f}$ edges, that we want to maintain.
Then, what is the minimum $s$ (asymptotically) such that one can construct deterministically graphs $\{G_1, \ldots ,G_s\}$ in $\widetilde{O}(n^2 \cdot s)$ time, such that for every $1 \le i \le \ell$ there exists at least one graph $G_j$ which does not contain $F_i$ but contains $P_G(s_i,t_i,F_i)$? It is an open question how to achieve this goal, even for $f=1$, and even if we allow $s$ to be greater than $\tilde{\Omega}(n^{\epsilon})$ (to the extent that running APSP in the graphs $G_1, \ldots, G_s$ still takes subcubic time). \end{itemize} An indirect open question is to derandomize more randomized algorithms and data structures in closely related fields, perhaps utilizing some of our techniques and framework. \end{document}
\begin{document} \newcommand*{\spr}[2]{\langle #1 | #2 \rangle} \newcommand*{\bbN}{\mathbb{N}} \newcommand*{\bbR}{\mathbb{R}} \newcommand*{\cB}{\mathcal{B}} \newcommand*{\E}{\cal{E}} \newcommand*{\eps}{\varepsilon} \newcommand*{\id}{I} \newcommand*{\half}{{\frac{1}{2}}} \newcommand*{\ket}[1]{| #1 \rangle} \newcommand*{\bra}[1]{\langle #1 |} \newcommand*{\proj}[1]{\ket{#1}\bra{#1}} \newcommand*{\Hmin}{H_{\min}} \newtheorem{definition}{Definition} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{defn}{Definition} \newtheorem{corollary}{Corollary} \newcommand{\qed}{\hspace*{\fill}\rule{2.5mm}{2.5mm}} \newenvironment{proof}{\noindent{\it Proof}\hspace*{1ex}}{\qed } \def\reff#1{(\ref{#1})} \title{Max- Relative Entropy of Entanglement, {\em{alias}} Log Robustness} \author{Nilanjana
Datta} \email{[email protected]} \affiliation{Statistical Laboratory, DPMMS, University of Cambridge, Cambridge CB3 0WB, UK} \date{\today} \begin{abstract} Properties of the {\em{max- relative entropy of entanglement}}, defined in \cite{ND}, are investigated, and its significance as an upper bound to the one-shot rate for {\em{perfect}} entanglement dilution, under a particular class of quantum operations, is discussed. It is shown that it is in fact equal to another known entanglement monotone, namely the {\em{log robustness}}, defined in \cite{BP}. It is known that the latter is not asymptotically continuous and it is not known whether it is weakly additive. However, by suitably modifying the max- relative entropy of entanglement we obtain a quantity which is seen to satisfy both these properties. In fact, the modified quantity is shown to be equal to the regularised relative entropy of entanglement. \end{abstract} \pacs{03.65.Ud, 03.67.Hk, 89.70.+c} \maketitle \section{Introduction} In \cite{renatophd}, Renner introduced two important entropic quantities, called the min- and max- entropies. Recently, the operational meanings of these quantities, i.e., their relevance with regard to actual information-processing tasks, was elucidated in \cite{robert}. Further, two new relative entropy quantities, which act as parent quantities for these min- and max- entropies, were introduced in \cite{ND}. These were, namely, the max-relative entropy, $D_{\max}(\rho||\sigma)$, and the min-relative entropy, $D_{\min}(\rho||\sigma)$. Here $\rho$ denotes a state and $\sigma$ denotes a positive operator. Various properties of these quantities were proved in \cite{ND}. In particular, it was shown that $$D_{\min}(\rho||\sigma)\le S(\rho||\sigma)\le D_{\max}(\rho||\sigma),$$ where $S(\rho||\sigma)$ is the relative entropy of $\rho$ and $\sigma$. 
In addition, it was shown in \cite{ND} that the minimum over all separable states, $\sigma$, of $D_{\max} (\rho||\sigma)$, defines a (full) entanglement monotone \cite{full} for a bipartite state $\rho$. This quantity, referred to as the {\em{max-relative entropy of entanglement}} and denoted by $E_{\max}(\rho)$, was proven to be an upper bound to the relative entropy of entanglement, $E_R(\rho)$ \cite{VP}. In this paper we investigate further properties of $E_{\max}(\rho)$ and discuss its significance. We prove that it is quasiconvex, i.e., for a mixture of states ${\rho = \sum_{i=1}^n p_i \rho_i}$, $E_{\max}(\rho) \le \max_{1\le i\le n} E_{\max}(\rho_i).$ We also infer that it is not asymptotically continuous \cite{ascont}, and does not reduce to the {\em{entropy of entanglement}} for pure bipartite states (that is, to the entropy of the reduced state of either of the two parties). We do so by showing that $E_{\max}(\rho)$ is in fact equal to another known entanglement monotone, namely, the {\em{log robustness}} \cite{BP}: $LR_g(\rho) := \log (1 + R_g(\rho)).$ Here $R_g(\rho)$ denotes the global robustness \cite{harrow} of $\rho$, which is a measure of the amount of noise that can be added to an entangled state $\rho$ before it becomes unentangled (separable). By suitably modifying $E_{\max}(\rho)$, we arrive at a quantity, which we denote by $\E_{\max}(\rho)$, and which is asymptotically continuous and weakly additive \cite{weak}. Asymptotic continuity \reff{asc} is proved by showing that $\E_{\max}(\rho)$ is equal to the regularised relative entropy of entanglement $E_R^\infty(\rho)$ \cite{eisert}, for which this property has been proved \cite{matthias}. The necessary modifications involve $(i)$ ``smoothing'' $E_{\max}(\rho)$ to obtain the {\em{smooth max-relative entropy of entanglement}} $E_{\max}^\eps(\rho)$, for any fixed $\eps >0$.
(This is similar to the smoothing introduced by Renner \cite{renatophd} to obtain the smooth R\'enyi entropies from the min- and max- entropies mentioned above); $(ii)$ regularising, and $(iii)$ taking the limit $\eps \rightarrow 0$ (see the following sections). It would be natural to proceed analogously with the min-relative entropy and define a quantity, $E_{\min}(\rho)$, to be the minimum over all separable states, $\sigma$, of $D_{\min} (\rho||\sigma)$. However, it can be shown \cite{violate} that $E_{\min}(\rho)$ is not a full entanglement monotone. It {\em{can}} increase on average under local operations and classical communication (LOCC). Instead $E_{\min}(\rho)$ satisfies a weaker condition of monotonicity under LOCC maps, that is, $E_{\min}(\rho) \ge E_{\min}(\Lambda(\rho)),$ for any LOCC operation $\Lambda$. Nevertheless, as for the case of $E_{\max}(\rho)$, a ``smoothing'' of $E_{\min}(\rho)$, followed by regularisation, yields a quantity which is equal to $E_R^\infty(\rho)$, in the limit of the smoothing parameter $\eps \rightarrow 0$. This will be presented in a forthcoming paper \cite{FBND}. In \cite{BP} it was shown that $E_{R}^\infty(\rho)$ is equal to both the entanglement cost and the distillable entanglement under the set of quantum operations which do not generate any entanglement asymptotically [for details, see \cite{BP}]. This gives an operational significance to the regularised version of the smooth max- and min- relative entropies of entanglement, in the limit $\eps \rightarrow 0$. For a given $\eps >0$, the smoothed versions, $E^\eps_{\max}(\rho)$ and $E^\eps_{\min}(\rho)$, of the max- and min- relative entropies of entanglement, also have operational interpretations. They arise as optimal rates of entanglement manipulation protocols involving separability-preserving maps. A quantum operation $\Lambda$ is said to be a separability-preserving map if $\Lambda(\sigma)$ is separable for any separable state $\sigma$.
These maps constitute the largest class of operations which cannot create entanglement and contain the class of separable operations \cite{bennett, rains, VP}. [See \cite{BP} for details]. The quantities $E^\eps_{\max}(\rho)$ and $E^\eps_{\min}(\rho)$ can be interpreted as one-shot rates of entanglement dilution and distillation protocols involving separability-preserving maps, for a given bound on the corresponding probabilities of error, i.e., when the probability of error associated with the protocol is bounded above by the smoothing parameter $\eps$. This is analogous to the interpretation of the $\eps$-smooth R\'enyi entropies as one-shot rates of various protocols \cite{renatophd, wolf2, RenWol04b}, when the probability of error is at most $\eps$. Evaluation of these one-shot rates for the entanglement manipulation protocols will be presented in a forthcoming paper \cite{FBND}. The max-relative entropy of entanglement (or log robustness), $E_{\max}(\rho)$, provides an upper bound to the one-shot {\em{perfect}} entanglement cost, not under LOCC maps, but under quantum operations which generate an entanglement (as measured by the global robustness) of at most $1/R_g(\rho)$. We shall refer to such maps as $\alpha_\rho$-separability preserving (or $\alpha_\rho$-SEPP) maps, with $\alpha_\rho =1/R_g(\rho)$. This is elaborated below. We start the main body of our paper with some mathematical preliminaries. Next we recall the definitions of the relevant relative entropy quantities and entanglement monotones, and prove that $E_{\max}(\rho)$ is quasiconvex. We then show that it is equal to the global log robustness, and that it does {\em{not}} in general reduce to the relative entropy of entanglement for pure states. Next we define the smooth max-relative entropy of entanglement and $\E_{\max}(\rho)$, and prove that the latter is weakly additive. Our main result is given in Theorem \ref{main1}, which states that ${\E}_{\max}(\rho)= E_R^\infty(\rho)$ \cite{point}.
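The claim that $E_{\max}$ does not reduce to the entropy of entanglement for pure states can be checked numerically. The Python sketch below (an illustration using the pure-state formula $E_{\max}(\rho) = 2\log\bigl(\sum_i \lambda_i\bigr)$ quoted later in the paper, with $\lambda_i$ the Schmidt coefficients; not code from the paper) compares the two quantities for a non-maximally entangled pair of qubits:

```python
import numpy as np

# Schmidt coefficients of |psi> = sqrt(0.9)|00> + sqrt(0.1)|11>.
lam = np.sqrt(np.array([0.9, 0.1]))
# Entropy of entanglement: von Neumann entropy of the reduced state,
# whose eigenvalues are the squared Schmidt coefficients.
p = lam ** 2
entropy = -(p * np.log2(p)).sum()
# E_max for a pure state: 2 log2(sum of Schmidt coefficients),
# i.e. log2(1 + R_g) with R_g = (sum_i lam_i)^2 - 1.
e_max = 2 * np.log2(lam.sum())
# The two quantities differ for this non-maximally entangled state.
assert e_max > entropy
```

Here the entropy of entanglement is about $0.469$ while $E_{\max}$ is about $0.678$; the two coincide only when all Schmidt coefficients are equal, as for a maximally entangled state.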
\section{Mathematical Preliminaries} \label{prelim} Let ${\cal{B}}({\cal{H}})$ denote the algebra of linear operators acting on a finite-dimensional Hilbert space ${\cal{H}}$. The von Neumann entropy of a state $\rho$, i.e., a positive operator of unit trace in ${\cal{B}}({\cal{H}})$, is given by $S(\rho) = - \mathrm{Tr} \rho \log \rho$. Throughout this paper, we take the logarithm to base $2$ and all Hilbert spaces considered are finite-dimensional. In fact, since in this paper we consider bipartite states, the underlying Hilbert space is given by ${\cal{H}}= {\cal{H}}_A \otimes {\cal{H}}_B$. Let ${\cal{D}}$ denote the set of states in ${\cal{B}}({\cal{H}})$, and let ${\cal{S}} \subset {\cal{D}}$ denote the set of separable states. Further, let ${\cal{S}}_n$ denote the set of separable states in ${\cal{B}}({\cal{H}}^{\otimes n})$. The trace distance between two operators $A$ and $B$ is given by \begin{equation} ||A-B||_1 := \mathrm{Tr}\bigl[\{A \ge B\}(A-B)\bigr] - \mathrm{Tr}\bigl[\{A < B\}(A-B)\bigr] \end{equation} The fidelity of states $\rho$ and $\rho'$ is defined to be $$ F(\rho, \rho'):= \mathrm{Tr} \sqrt{\rho^{\half} \rho' \rho^{\half}}. $$ The trace distance between two states $\rho$ and $\rho'$ is related to the fidelity $ F(\rho, \rho')$ as follows (see (9.110) of \cite{nielsen}): \begin{equation} \frac{1}{2} \| \rho - \rho' \|_1 \leq \sqrt{1-F(\rho, \rho')^2} \leq \sqrt{2(1-F(\rho, \rho'))} \ . \label{fidelity} \end{equation} We also use the ``gentle measurement'' lemma \cite{winter99,ogawanagaoka02}: \begin{lemma}\label{gm} For a state $\rho$ and operator $0\le \Lambda\le I$, if $\mathrm{Tr}(\rho \Lambda) \ge 1 - \delta$, then $$||\rho - {\sqrt{\Lambda}}\rho{\sqrt{\Lambda}}||_1 \le {2\sqrt{\delta}}.$$ The same holds if $\rho$ is only a subnormalized density operator. 
\end{lemma} \section{Definitions of min- and max- relative entropies} \label{non-smooth} \begin{definition} The \emph{max- relative entropy} of a state $\rho$ and a positive operator $\sigma$ is given by \begin{equation} D_{\max}(\rho|| \sigma) := \log \min\{ \lambda: \, \rho\leq \lambda \sigma \} \label{dmax} \end{equation} \end{definition} Note that $D_{\max}(\rho|| \sigma)$ is well-defined if $\rm{supp}\, \rho \subseteq \rm{supp}\, \sigma$. \begin{definition} The \emph{min- relative entropy} of a state $\rho$ and a positive operator $\sigma$ is given by \begin{equation} D_{\min}(\rho|| \sigma) := - \log \mathrm{Tr}\bigl(\pi\sigma\bigr) \ , \label{dmin} \end{equation} where $\pi$ denotes the projector onto $\rm{supp}\, \rho$, the support of $\rho$. It is well-defined if $\rm{supp}\, \rho$ has non-zero intersection with $\rm{supp}\, \sigma$. \end{definition} Various properties of $D_{\min}(\rho|| \sigma)$ and $D_{\max}(\rho|| \sigma)$ were proved in \cite{ND}. In this paper we shall use the following properties of the max- relative entropy, $D_{\max}(\rho|| \sigma)$: \begin{itemize} \item{The max- relative entropy is monotonic under completely positive trace-preserving (CPTP) maps, i.e., for a state $\rho$, a positive operator $\sigma$, and a CPTP map $\Lambda$: \begin{equation} D_{\max}(\Lambda(\rho)||\Lambda(\sigma))\le D_{\max}(\rho||\sigma) \label{mono} \end{equation}} \item{The max- relative entropy is quasiconvex, i.e., for two mixtures of states, $\rho:=\sum_{i=1}^n p_i \rho_i$ and $\omega:= \sum_{i=1}^n p_i \omega_i$, \begin{equation} D_{\max}(\rho||\omega) \le \max_{1\le i\le n} D_{\max}(\rho_i || \omega_i). 
\label{quasid} \end{equation}} \item{$D_{\max}(\rho\otimes \rho||\omega\otimes \omega) =2 D_{\max}(\rho||\omega) .$ This property follows directly from the definition \reff{dmax}.} \end{itemize} The min- and max- (unconditional and conditional) entropies, introduced by Renner in \cite{renatophd} are obtained from $D_{\min}(\rho|| \sigma)$ and $D_{\max}(\rho|| \sigma)$ by making suitable substitutions for the positive operator $\sigma$ (see \cite{ND} for details). \section{Smooth min- and max- relative entropies} \label{smooth} \emph{Smooth} min- and max- relative entropies are generalizations of the above-mentioned relative entropy measures, involving an additional \emph{smoothness} parameter $\eps \geq 0$. For $\eps = 0$, they reduce to the \emph{non-smooth} quantities. \begin{definition} \label{def:smoothentropies} For any $\eps \geq 0$, the \emph{$\eps$-smooth min-} and \emph{max-relative entropies} of a bipartite state $\rho$ relative to a state $\sigma$ are defined by \[ D_{\min}^{\eps}(\rho || \sigma) := \max_{\bar{\rho} \in B^{\eps}(\rho)} D_{\min}(\bar{\rho} || \sigma) \] and \begin{equation} D_{\max}^{\eps}( \rho|| \sigma ) := \min_{\bar{\rho} \in B^{\eps}(\rho)} D_{\max}(\bar{\rho} || \sigma) \label{epsmax} \end{equation} where $B^{\eps}(\rho) := \{\bar{\rho} \geq 0: \, \| \bar{\rho} - \rho \|_1 \leq \eps, \mathrm{Tr}(\bar{\rho}) \leq \mathrm{Tr}(\rho)\}$. \end{definition} The following two lemmas are used to prove our main result, Theorem \ref{main1}. \begin{lemma} \label{lem5n} Let $\rho_{A B}$ and $\sigma_{AB}$ be density operators, let $\Delta_{A B}$ be a positive operator, and let $\lambda \in \bbR$ such that \[ \rho_{A B} \leq 2^{\lambda} \cdot \sigma_{AB} + \Delta_{A B} \ . \] Then $D_{\max}^{\eps}(\rho_{A B}||\sigma_{AB}) \le \lambda$ for any $\eps \geq \sqrt{8 \mathrm{Tr}(\Delta_{A B})}$. \end{lemma} \begin{lemma} \label{lem6n} Let $\rho_{A B}$ and $\sigma_{AB}$ be density operators. 
Then \[ D_{\max}^{\eps}(\rho_{A B}||\sigma_{AB}) \le \lambda \] for any $\lambda \in \bbR$ and \[ \eps = \sqrt{8 \mathrm{Tr}\bigl[\{\rho_{A B} > 2^{\lambda} \sigma_{AB} \} \rho_{A B} \bigr]} \ . \] \end{lemma} The proofs of these lemmas are analogous to the proofs of Lemmas 5 and 6 of \cite{smooth} and are given in the Appendix for completeness. \section{The max-relative entropy of entanglement} \label{entm} For a bipartite state $\rho$, the {max-relative entropy of entanglement} \cite{ND} is given by \begin{equation} E_{\max}(\rho):= \min_{\sigma \in {\cal{S}}} D_{\max} (\rho||\sigma), \label{entmeasure} \end{equation} where the minimum is taken over the set ${\cal{S}}$ of all separable states. It was proved in \cite{ND} that \begin{equation} E_{\max}(\rho) \ge E_R(\rho), \end{equation} where $E_R(\rho) := \min_{\sigma \in {\cal{S}}} S (\rho||\sigma), $ the {\em{relative entropy of entanglement}} of the state $\rho$. That $E_{\max}(\rho)$ is a full entanglement monotone follows from the fact that $D_{\max}(\rho||\sigma)$ satisfies a set of sufficient criteria \cite{VP} which ensure that $E_{\max}(\rho)$ has the following properties: $(a)$ it vanishes if and only if $\rho$ is separable, $(b)$ it is left invariant by local unitary operations and $(c)$ it does not increase on average under LOCC. This was proved in \cite{ND}. \begin{lemma} The {max-relative entropy of entanglement} $E_{\max}(\rho)$ is quasiconvex, i.e., for a mixture of states ${\rho = \sum_{i=1}^n p_i \rho_i}$, \begin{equation} E_{\max}(\rho) \le \max_{1\le i\le n} E_{\max}(\rho_i). \label{convex} \end{equation} \end{lemma} \begin{proof} For each state $\rho$, let $\sigma_\rho$ be a separable state for which $$ E_{\max}(\rho) = D_{\max}(\rho||\sigma_\rho ).
$$ Since the set of separable states is convex, and the max-relative entropy is quasiconvex \reff{quasid}, we have \begin{eqnarray} E_{\max}\Bigl(\sum_i p_i \rho_i\Bigr) &\le&D_{\max}\Bigl(\sum_i p_i\rho_i || \sum_i p_i \sigma_{\rho_i}\Bigr)\nonumber\\ &\le & \max_i D_{\max}\Bigl(\rho_i || \sigma_{\rho_i}\Bigr)\nonumber\\ &=& \max_i E_{\max}(\rho_i) \end{eqnarray} \end{proof} Since $E_{\max}(\rho)$ is given by a minimisation over separable states, it is subadditive. Let $\sigma$ be a separable state for which $E_{\max}(\rho) = D_{\max}(\rho ||\sigma).$ Then, \begin{eqnarray} E_{\max}(\rho \otimes \rho) &=& \min_{\omega \in {\cal{S}}_2} D_{\max}(\rho \otimes \rho||\omega)\nonumber\\ &\le & D_{\max}(\rho \otimes \rho||\sigma \otimes \sigma)\nonumber\\ &=& 2 D_{\max}(\rho||\sigma) = 2 E_{\max}(\rho). \end{eqnarray} \begin{lemma} \label{logrobust} The {max-relative entropy of entanglement} $E_{\max}(\rho)$ of a bipartite state $\rho$ is equal to its {\em{global log robustness}} of entanglement \cite{BP}, which is defined as follows: \begin{equation} LR_g(\rho) := \log \bigl(1 + R_g(\rho)\bigr), \end{equation} where $R_g(\rho)$ is the {\em{global robustness of entanglement}}\cite{harrow}, given by $$ R_g(\rho) = \min_{s \in \mathbb{R}} \Bigl\{s\ge 0 : \exists \,\omega \in {\cal{D}} \,\,{\rm{s.t.}}\,\, \frac{1}{1+s}\rho + \frac{s}{1+s}\omega \in {\cal{S}} \Bigr\} $$ \end{lemma} \begin{proof} We can equivalently write $R_g(\rho)$ as follows: \begin{eqnarray} R_g(\rho) &=& \min_{s \in \mathbb{R}} \Bigl\{s\ge 0 : \exists \,\omega \in {\cal{D}} \,\,{\rm{s.t.}}\,\, \rho + s\omega = (1+s) \sigma, \sigma \in {\cal{S}} \Bigr\}\nonumber\\ &=& \min_{s \in \mathbb{R}} \Bigl\{s\ge 0 : \exists \,\sigma \in {\cal{S}} \,\,{\rm{s.t.}}\,\, \rho \le (1+s) \sigma \Bigr\}, \end{eqnarray} since, defining ${\widetilde{\omega}} := (1+s)\sigma - \rho$, we see that $\mathrm{Tr} \, {\widetilde{\omega}} = 1 + s - 1 = s$, hence allowing us to write ${\widetilde{\omega}} = s \omega$ for some $\omega 
\in {\cal{D}}$. Hence, $$\log (1+ R_g(\rho)) = \min_{\sigma \in {\cal{S}}} D_{\max}( \rho|| \sigma).$$ \end{proof} \begin{definition} A state $\pi$ for which $$\rho + R_g(\rho) \pi = (1 + R_g(\rho))\sigma,$$ for some separable state $\sigma$, is referred to as an optimal state for $\rho$ in the global robustness of entanglement. \end{definition} \noindent It was shown in \cite{harrow} that for a pure bipartite state $\rho= |\psi\rangle\langle \psi| \in {\cal{B}}({\cal{H}}_A \otimes {\cal{H}}_B)$, $$R_g(\rho) = \Bigl(\sum_{i=1}^m \lambda_i\Bigr)^2 - 1,$$ where the $\lambda_i$, $i=1, \ldots, m$, denote the Schmidt coefficients of $|\psi\rangle$. This implies that for the pure state $\rho= |\psi\rangle\langle \psi|$, the max-relative entropy of entanglement is given by \begin{equation} E_{\max}(\rho) = \log (1 + R_g(\rho)) = 2 \log \Bigl( \sum_{i=1}^m \lambda_i\Bigr), \label{pure} \end{equation} i.e., twice the logarithm of the sum of the square roots of the eigenvalues of the reduced density matrix $\rho_\psi^A := \mathrm{Tr}_B |\psi\rangle\langle \psi|$. Hence for a pure state $\rho$, $E_{\max}(\rho)$ does {\em{not}} in general reduce to its {\em{entropy of entanglement}} (i.e., the von Neumann entropy of $\rho_\psi^A$), even though it does so for a maximally entangled state. Let $\Psi_M \in {\cal{B}}({\cal{H}_A}\otimes {\cal{H}_B})$ denote a maximally entangled state (MES) of rank $M$, i.e., $\Psi_M = |\Psi_M\rangle\langle\Psi_M|,$ with $$|\Psi_M\rangle = \frac{1}{\sqrt{M}} \sum_{i=1}^M |i\rangle|i\rangle.$$ Then, $$ E_{\max} (\Psi_M) = \log M = S(\mathrm{Tr}_A \Psi_M).$$ Moreover, $R_g(\Psi_M) = M - 1.$ Note that the right hand side of \reff{pure} is equal to the expression for another known entanglement monotone, namely the {\em{logarithmic negativity}} \cite{martin} $$LN(\rho) := \log ||\rho^\Gamma||_1,$$ for the pure state $\rho=|\psi\rangle\langle \psi|$. 
Here $\rho^\Gamma$ denotes the partial transpose with respect to the subsystem $A$, and $||\omega||_1 = \mathrm{Tr} \sqrt{\omega^\dagger \omega}.$ It is known that $LN(\rho)$ is additive \cite{martin} for pure states, and we therefore have \begin{equation} E_{\max}(|\psi\rangle \langle \psi| \otimes |\phi\rangle \langle \phi|) = E_{\max}(|\psi\rangle \langle \psi|) + E_{\max}(|\phi\rangle \langle \phi|). \label{ad1} \end{equation} This additivity relation does not extend to mixed states in general. However, the following relation can be proved to hold \cite{FBND}: \begin{eqnarray} E_{\max}(\rho \otimes \Psi_M) &=& E_{\max}(\rho)+E_{\max}(\Psi_M)\nonumber\\ &=&E_{\max}(\rho)+ \log M. \label{key} \end{eqnarray} As mentioned in the Introduction, $E_{\max}(\rho)$ provides an upper bound to the one-shot {\em{perfect}} entanglement cost, under quantum operations which generate an entanglement of at most $1/R_g(\rho)$. This is elaborated below. In entanglement dilution the aim is to obtain a state $\rho$ from a maximally entangled state. This cannot necessarily be done by using a single copy of the maximally entangled state and acting on it by a LOCC map. However, a single perfect copy of $\rho$ {\em{can}} be obtained from a single copy of a maximally entangled state if one does not restrict the quantum operation employed to be a LOCC map but instead allows quantum operations which generate an entanglement of at most $1/R_g(\rho)$. Before proving this, let us state the definition of one-shot perfect entanglement cost of a state under a quantum operation $\Lambda$. \begin{definition} A real number $R$ is said to be an {\em{achievable}} one-shot perfect dilution rate, for a state $\rho$, under a quantum operation $\Lambda$, if $\Lambda(\Psi_M) =\rho$ and $\log M \le R$. 
\end{definition} \begin{definition} The one-shot perfect entanglement cost of a state under a quantum operation $\Lambda$ is given by $E_{c,\Lambda}^{(1)} = \inf R,$ where the infimum is taken over all achievable rates. \end{definition} Consider the quantum operation $\Lambda_M$ which acts on any state $\omega$ as follows: \begin{equation} \Lambda_M(\omega) = \mathrm{Tr}( \Psi_M \omega) \rho + (1 - \mathrm{Tr}( \Psi_M \omega))\pi, \label{qop} \end{equation} where $\pi$ is an optimal state for $\rho$ in the global robustness of entanglement. It was shown in \cite{BP} that for $M= 1 + s$, where $s = R_g(\rho)$, the quantum operation $\Lambda_M$ is a $(1/s)$-separability preserving map (SEPP), i.e., for any separable state $\sigma$: $$R_g(\Lambda_M (\sigma)) \le 1/s.$$ In other words, the map $\Lambda_M$, as defined by \reff{qop}, is a quantum operation which generates an entanglement corresponding to a global robustness of at most $1/R_g(\rho)$. Now if $\omega = \Psi_M$, then $\Lambda_M(\omega) = \rho$, and hence a perfect copy of $\rho$ is obtained from a single copy of the maximally entangled state $ \Psi_M$. The associated rate, $R$, of the one-shot entanglement dilution protocol corresponding to the map $\Lambda_M$ satisfies the bound \cite{loose}: \begin{equation} R \le \log M = \log (1 +s) = E_{\max}(\rho). \label{upbdd} \end{equation} The log robustness, $LR_g(\rho)$, is not asymptotically continuous \cite{BP} and it is not known whether it is weakly additive. However, as mentioned in the Introduction, by suitably modifying $E_{\max}(\rho)$ one can arrive at a quantity which is both asymptotically continuous and weakly additive. The necessary modifications involve $(i)$ ``smoothing'' $E_{\max}(\rho)$ to obtain the {\em{smooth max-relative entropy of entanglement}} $E_{\max}^\eps(\rho)$, for any fixed $\eps >0$; $(ii)$ regularising, and $(iii)$ taking the limit $\eps \rightarrow 0$, as described below. 
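The action of the map \reff{qop} is simple enough to sanity-check numerically. The following sketch (in Python, outside the formalism of the paper) takes a rank-$2$ MES of two qubits; the state {\tt pi} below is merely a placeholder density matrix rather than a genuine optimal state for the global robustness, which does not affect the two properties being checked: that $\Lambda_M(\Psi_M) = \rho$, and that $\Lambda_M$ is trace-preserving.

```python
from math import sqrt, isclose

DIM = 4  # two qubits, basis {|00>, |01>, |10>, |11>}

# |Psi_2> = (|00> + |11>)/sqrt(2); its projector Psi_M = |Psi_2><Psi_2|.
psi = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]
Psi_M = [[psi[i] * psi[j] for j in range(DIM)] for i in range(DIM)]

# An arbitrary target state rho and a placeholder (maximally mixed) pi.
rho = [[{0: 0.4, 1: 0.1, 2: 0.1, 3: 0.4}[i] if i == j else 0.0
        for j in range(DIM)] for i in range(DIM)]
pi = [[0.25 if i == j else 0.0 for j in range(DIM)] for i in range(DIM)]

def trace_prod(a, b):
    """Tr(a b) for real matrices."""
    return sum(a[i][j] * b[j][i] for i in range(DIM) for j in range(DIM))

def Lambda_M(omega):
    """The map of Eq. (qop): Tr(Psi_M omega) rho + (1 - Tr(Psi_M omega)) pi."""
    p = trace_prod(Psi_M, omega)
    return [[p * rho[i][j] + (1 - p) * pi[i][j]
             for j in range(DIM)] for i in range(DIM)]

# Feeding in the MES itself returns a perfect copy of rho ...
out = Lambda_M(Psi_M)
assert all(isclose(out[i][j], rho[i][j], abs_tol=1e-12)
           for i in range(DIM) for j in range(DIM))

# ... and the map is trace-preserving on any input state.
assert isclose(sum(Lambda_M(pi)[i][i] for i in range(DIM)), 1.0, abs_tol=1e-12)
```

Since $\mathrm{Tr}(\Psi_M \Psi_M) = 1$ for the rank-one projector $\Psi_M$, the coefficient of {\tt pi} vanishes on this input, which is why the choice of placeholder is immaterial here.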
\section{Smooth Max-Relative entropy of entanglement and ${\E}_{\max}(\rho)$} For any $\eps >0$, we define the {\em{smooth max-relative entropy of entanglement}} of a bipartite state $\rho$, as follows: \begin{eqnarray} {E}_{\max}^\eps (\rho) &:=& \min_{\bar{\rho} \in B^{\eps}(\rho)} E_{\max}(\bar{\rho}) \nonumber\\ &=& \min_{\bar{\rho} \in B^{\eps}(\rho)} \min_{\sigma\in {\cal{S}}} D_{\max}(\bar{\rho}||\sigma), \nonumber\\ &=& \min_{\sigma\in {\cal{S}}} D_{\max}^\eps(\rho||\sigma), \label{eqmax} \end{eqnarray} where $D_{\max}^\eps(\rho||\sigma)$ is the smooth max-relative entropy defined by \reff{epsmax}. Further, we define its regularised version \begin{equation} {\E}_{\max}^\eps (\rho):= \limsup_{n\rightarrow \infty}\frac{1}{n} E_{\max}^\eps(\rho^{\otimes n}), \end{equation} and the quantity \begin{equation} {\E}_{\max} (\rho) := \lim_{\eps \rightarrow 0} {\E}_{\max}^\eps (\rho) \label{deff} \end{equation} \begin{lemma} The quantity ${\E}_{\max} (\rho)$ characterizing a bipartite state $\rho \in {\cal{B}}({\cal{H}})$ and defined by \reff{deff}, satisfies the following properties: \begin{enumerate} \item{It is weakly additive, i.e., for any positive integer $m$, \begin{equation} {\E}_{\max} (\rho^{\otimes m}) = m \, {\E}_{\max}(\rho). \label{weak} \end{equation}} \item{It is asymptotically continuous, i.e., for a given $\eps >0$, if $\rho_m \in {\cal{B}}({\cal{H}}^{\otimes m})$ is an operator for which $||\rho_m - \rho^{\otimes m} ||_1 \le \eps$, then \begin{equation} \bigl|\frac{{\E_{\max}}(\rho_m) - {\E_{\max}}(\rho^{\otimes m})}{m}\bigr| \le f(\eps), \label{asc}\end{equation} where $f(\eps)$ is a real function of $\eps$ such that $f(\eps)\rightarrow 0$ as $\eps \rightarrow 0$.} \end{enumerate} \end{lemma} \begin{proof} Here we give the proof of $1$, by showing that ${\E}_{\max} (\rho\otimes \rho) = 2{\E}_{\max}(\rho)$. 
The proof of $2$ follows from Theorem \ref{main1} below since the regularized relative entropy of entanglement $E_R^\infty(\rho)$ (defined by \reff{rel1}) is known to be asymptotically continuous \cite{matthias}. We first prove that \begin{equation} {\E_{\max}}(\rho \otimes \rho) \le 2 {\E}_{\max}(\rho) \label{part1} \end{equation} Fix $\eps >0$. Then, \begin{eqnarray} {E}_{\max}^\eps(\rho^{\otimes n})&=& \min_{\sigma_n \in {\cal{S}}_n} D^\eps_{\max}(\rho^{\otimes n}|| \sigma_n),\nonumber\\ &=& \min_{\sigma_n \in {\cal{S}}_n} \min_{ \overline{\rho}_n \in B^\eps(\rho^{\otimes n})} D_{\max}(\overline{\rho}_{n}|| \sigma_n) \label{opt}\\ &=& D_{\max}(\rho_n^\eps|| \sigma_n^\eps), \label{opt2} \end{eqnarray} where $\rho_n^\eps \in B^\eps(\rho^{\otimes n})$ and $\sigma_n^\eps \in {\cal{S}}_n$ are operators for which the minima in \reff{opt} are achieved. Since $\rho_n^\eps \in B^\eps(\rho^{\otimes n})$, we have that $|| \rho_n^\eps - \rho^{\otimes n}||_1 \le \eps,$ which in turn implies that $$|| \rho_n^\eps\otimes \rho_n^\eps - \rho^{\otimes 2n}||_1 \le 2\eps.$$ Therefore, $ \rho_n^\eps\otimes \rho_n^\eps \in B^{2\eps}(\rho^{\otimes 2n})$. Further, since $ \sigma_n^\eps\otimes \sigma_n^\eps \in {\cal{S}}_{2n}$, we have \begin{eqnarray} E_{\max}^{2 \eps} (\rho^{\otimes 2n}) &=& \min_{ \overline{\rho}_{2n} \in B^{2\eps}(\rho^{\otimes 2n})} \min_{\sigma_{2n} \in {\cal{S}}_{2n}} D_{\max}(\overline{\rho}_{2n}|| \sigma_{2n})\nonumber\\ &\le & D_{\max}(\rho_n^\eps\otimes \rho_n^\eps|| \sigma_n^\eps \otimes \sigma_n^\eps)\nonumber\\ &=& 2 D_{\max}(\rho_n^\eps|| \sigma_n^\eps)\nonumber\\ &=& 2 {E}_{\max}^\eps(\rho^{\otimes n}). \end{eqnarray} Hence, \begin{equation} \limsup_{n\rightarrow \infty}\frac{1}{n}E_{\max}^{2 \eps} \bigl((\rho\otimes\rho)^{\otimes n}\bigr) \le 2 \limsup_{n\rightarrow \infty}\frac{1}{n}E_{\max}^{\eps} (\rho^{\otimes n}), \end{equation} that is, ${\E}_{\max}^{2\eps}(\rho \otimes \rho) \le 2 {\E}_{\max}^{\eps}(\rho)$. 
Taking the limit $\eps \rightarrow 0$ on either side of this inequality yields the desired bound \reff{part1}. In fact, the identity holds in \reff{part1}. This is simply because \begin{eqnarray} {\E}_{\max}^{\eps}(\rho \otimes \rho)&=& \limsup_{n\rightarrow \infty}\frac{1}{n}E_{\max}^{\eps} ((\rho\otimes \rho)^{\otimes n}),\nonumber\\ &=& 2\,\limsup_{n\rightarrow \infty}\frac{1}{2n}E_{\max}^{\eps} ((\rho)^{\otimes 2n}),\nonumber\\ &=& 2\,{\E}_{\max}^{\eps}(\rho) \label{29} \end{eqnarray} The last line of \reff{29} is proved \cite{fernando} by employing the monotonicity \reff{mono} of the max- relative entropy under partial trace. We know that \begin{eqnarray} {\E}_{\max}^{\eps}(\rho)&:=& \limsup_{n\rightarrow \infty}\frac{1}{n}E_{\max}^{\eps} ((\rho)^{\otimes n}),\nonumber\\ &\ge & \,\limsup_{n\rightarrow \infty}\frac{1}{2n}E_{\max}^{\eps} ((\rho)^{\otimes 2n}),\label{id} \end{eqnarray} However, it can be proven that the identity always holds in \reff{id}. This is done by assuming that \begin{equation} {\E}_{\max}^{\eps}(\rho) > \,\limsup_{n\rightarrow \infty}\frac{1}{2n}E_{\max}^{\eps} ((\rho)^{\otimes 2n}), \label{assume} \end{equation} and showing that this leads to a contradiction. The assumption \reff{assume} implies that there exists a sequence of odd integers $n_i$ for which \begin{equation} \limsup_{n_i \rightarrow \infty} \frac{1}{n_i} E_{\max}^\eps(\rho^{\otimes n_i}) > \limsup_{n_i \rightarrow \infty} \frac{1}{n_i + 1} E_{\max}^\eps(\rho^{\otimes n_i + 1}). \end{equation} Let $\rho_{n_i + 1} \in {\cal{H}}^{\otimes n_i + 1}$ be an operator in $B^\eps(\rho^{\otimes n_i + 1})$ for which $E_{\max}^\eps(\rho^{\otimes n_i + 1}) = E_{\max}(\rho_{n_i + 1})$. 
Then, using the monotonicity \reff{mono} of the max-relative entropy under partial trace, we have \begin{eqnarray} E_{\max}^\eps(\rho^{\otimes n_i + 1}) &=&E_{\max}(\rho_{n_i + 1}) \nonumber\\ &\ge& E_{\max}( \mathrm{Tr}_{\cal{H}}(\rho_{n _i + 1}))\nonumber\\ &\ge& E_{\max}^\eps(\rho^{\otimes n_i}), \end{eqnarray} since $\mathrm{Tr}_{\cal{H}}(\rho_{n _i + 1}) \in B^\eps(\rho^{\otimes n_i})$. Therefore, \begin{eqnarray} \limsup_{n_i \rightarrow \infty} \frac{1}{n_i} E_{\max}^\eps(\rho^{\otimes n_i}) &>& \limsup_{n_i \rightarrow \infty} \frac{1}{n_i + 1} E_{\max}^\eps(\rho^{\otimes n_i + 1})\nonumber\\ &\ge & \limsup_{n_i \rightarrow \infty} \frac{1}{n_i + 1} E_{\max}^\eps(\rho^{\otimes n_i})\nonumber\\ &=& \limsup_{n_i \rightarrow \infty} \frac{1}{n_i} E_{\max}^\eps(\rho^{\otimes n_i}), \end{eqnarray} which is a contradiction. \end{proof} \section{Main Result} Our main result is given by the following theorem. \begin{theorem} \label{main1} \begin{equation} {\E}_{\max} (\rho)= E_R^\infty(\rho), \end{equation} where, $E_R^\infty(\rho)$ denotes the regularized relative entropy of entanglement: \begin{equation} E_R^\infty(\rho):= \lim_{n\rightarrow \infty} \frac{1}{n}E_R(\rho^{\otimes n}), \label{rel1} \end{equation} where, $E_R(\rho) = \min_{\sigma\in {\cal{S}}} S(\rho|| \sigma)$ is the relative entropy of entanglement of $\rho$. \end{theorem} \begin{proof} We first prove that $E_R^\infty(\rho) \le {\E}_{\max} (\rho)$. Fix $\eps >0$. \begin{equation} E_{\max}^\eps(\rho^{\otimes n}) = \min_{\sigma_n \in {{\cal{S}}}_n} D_{\max}^\eps (\rho^{\otimes n}||\sigma_n), \end{equation} where ${{\cal{S}}}_n$ denotes the set of separable states in ${\cal{B}}({\cal{H}}^{\otimes n})$. 
In the above, \begin{equation} D_{\max}^\eps (\rho^{\otimes n}||\sigma_n)= \min_{\overline{\rho}_n \in B^\eps(\rho^{\otimes n})} D_{\max}(\overline{\rho}_n||\sigma_n) \label{eq1} \end{equation} Let $\rho_n^\eps \in B^\eps(\rho^{\otimes n})$ be the operator for which the minimum is achieved in \reff{eq1}. Hence, \begin{equation} E_{\max}^\eps(\rho^{\otimes n}) = \min_{\sigma_n \in {{\cal{S}}}_n}D_{\max} (\rho_n^\eps||\sigma_n) \label{eq2} \end{equation} Further, let ${\widetilde{\sigma}}_n$ be the separable state for which the minimum is achieved in \reff{eq2}. Hence, \begin{equation} E_{\max}^\eps(\rho^{\otimes n}) = D_{\max} (\rho_n^\eps||{\widetilde{\sigma}}_n) \label{eq3} \end{equation} Since, $$D_{\max} (\rho_n^\eps||{\widetilde{\sigma}}_n) = \min\{\alpha : \rho_n^\eps \le 2^\alpha {\widetilde{\sigma}}_n\},$$ we have, \begin{equation} \rho_n^\eps \le 2^{E_{\max}^\eps(\rho^{\otimes n})}{\widetilde{\sigma}}_n. \label{eq4} \end{equation} Using \reff{eq4} and the operator monotonicity of the logarithm, we infer that \begin{equation} S(\rho_n^\eps || {\widetilde{\sigma}}_n) \le E_{\max}^\eps(\rho^{\otimes n}), \label{eq5} \end{equation} since $\mathrm{Tr} \rho_n^\eps \le \mathrm{Tr} \rho^{\otimes n} = 1$. From \reff{eq5} it follows that \begin{equation} \limsup_{n \rightarrow \infty} \frac{1}{n} S(\rho_n^\eps || {\widetilde{\sigma}}_n) \le {\E}_{\max}^\eps(\rho), \label{eq6} \end{equation} and hence, \begin{equation} \limsup_{n \rightarrow \infty} \frac{1}{n} E_R(\rho_n^\eps) \le {\E}_{\max}^\eps(\rho), \label{eq7} \end{equation} where $E_R(\rho_n^\eps):= \min_{\sigma_n \in {\cal{S}}_n} S(\rho_n^\eps || \sigma_n).$ It is known that $E_R(\rho)$ is asymptotically continuous. Hence, \begin{equation} \frac{ E_R(\rho_n^\eps)}{n} \ge \frac{E_R(\rho^{\otimes n})}{n} - f(\eps), \label{eq8} \end{equation} where $f(\eps)$ is a real function of $\eps$ satisfying $f(\eps) \rightarrow 0$ as $\eps \rightarrow 0$. 
From \reff{eq7} and \reff{eq8} we obtain $$ \limsup_{n \rightarrow \infty} \frac{1}{n} E_R(\rho^{\otimes n}) - f(\eps)\le {\E}_{\max}^\eps(\rho).$$ Taking the limit $\eps \rightarrow 0$ on both sides of the above inequality yields the desired bound: $$ E_R^\infty (\rho)\le {\E}_{\max}(\rho).$$ \noindent We next prove the inequality $ E_R^\infty (\rho) \ge {\E}_{\max}(\rho).$ Consider the sequences $\widehat{\rho}=\{\rho^{\otimes n}\}_{n=1}^\infty$ and ${\widehat{\sigma}}=\{\sigma_\rho^{\otimes n}\}_{n=1}^\infty$, where $\sigma_\rho$ is the separable state for which \begin{equation} E_R(\rho) = S(\rho||\sigma_\rho) \equiv \min_{\sigma' \in {\cal{S}}} S(\rho||\sigma'). \end{equation} For these two sequences, one can define the following quantity: \begin{equation} \overline{D}(\widehat{\rho} \| {\widehat{\sigma}}) := \inf \Bigl\{ \gamma : \limsup_{n\rightarrow \infty} \mathrm{Tr}\bigl[ \{ \rho^{\otimes n} \ge 2^{n\gamma}\sigma_\rho^{\otimes n}\}\rho^{\otimes n} \bigr]=0\Bigr\} \label{upd} \end{equation} It is referred to as the {\em{sup-spectral divergence rate}} and arises in the so-called Information Spectrum Approach \cite{nagaoka02, bd1}. The Quantum Stein's Lemma (\cite{ogawa00} or equivalently {\em{Theorem 2}} of \cite{nagaoka02}) tells us that \begin{equation} \overline{D}(\widehat{\rho} \| {\widehat{\sigma}}) = S(\rho||\sigma_\rho) \end{equation} Let us choose \begin{equation} \lambda = \overline{D}(\widehat{\rho} \| {\widehat{\sigma}}) + \delta = E_R(\rho) + \delta, \end{equation} for some arbitrary $\delta >0$. It then follows from the definition \reff{upd} that $$ \limsup_{n\rightarrow \infty} \mathrm{Tr}\bigl[ \{ \rho^{\otimes n} \ge 2^{n\lambda}\sigma_\rho^{\otimes n}\}\rho^{\otimes n} \bigr]= 0. $$ In particular, for any $\eps > 0$ there exists $n_0 \in \bbN$ such that for all $n \geq n_0$, \begin{equation} \mathrm{Tr}\bigl[\{\rho^{\otimes n} > 2^{n \lambda} \sigma_\rho^{\otimes n}\} \rho^{\otimes n} \bigr] < \frac{\eps^2}{8} \ . 
\end{equation} Using Lemma \ref{lem6n} we infer that for all $n \ge n_0$, $$ D_{\max}^{\eps}(\rho^{\otimes n}||\sigma_\rho^{\otimes n}) \le n\lambda = nE_R(\rho) + n\delta. $$ Hence, $E_{\max}^\eps(\rho^{\otimes n}) \le nE_R(\rho) + n\delta$, and $$ {\E}_{\max}^\eps (\rho)\le E_R(\rho) + \delta. $$ Moreover, since the above bound holds for arbitrary $\delta>0$, we deduce that ${\E}_{\max}^\eps (\rho)\le E_R(\rho)$. Finally, taking the limit $\eps \rightarrow 0$ on both sides of this inequality yields \begin{equation} {\E}_{\max} (\rho)\le E_R(\rho). \end{equation} Using the weak additivity \reff{weak} of ${\E}_{\max} (\rho)$, we obtain \begin{eqnarray} \frac{1}{n}E_R(\rho^{\otimes n})&\ge & \frac{1}{n}{\E}_{\max}(\rho^{\otimes n}) \nonumber\\ &=& {\E}_{\max}(\rho). \label{ll} \end{eqnarray} Taking the limit $n\rightarrow \infty$, on both sides of \reff{ll}, yields the desired bound $$ E_R^\infty(\rho) \ge {\E}_{\max}(\rho).$$ \end{proof} \section{Appendix} \label{lempfs} \noindent {\bf{Proof of Lemma \ref{lem5n}}} \begin{proof} Define \begin{align*} \alpha_{A B} & := 2^{\lambda} \cdot \sigma_{AB} \\ \beta_{A B} & := 2^{\lambda} \cdot \sigma_{AB} + \Delta_{A B} \ . \end{align*} and \[ T_{A B} := \alpha_{A B}^\half \beta_{A B}^{-\half} \ . \] Let $\ket{\Psi} = \ket{\Psi}_{A B R}$ be a purification of $\rho_{A B}$ and let $\ket{\Psi'} := T_{A B} \otimes \id_R \ket{\Psi}$ and $\rho'_{A B} := \mathrm{Tr}_R(\proj{\Psi'})$. Note that \begin{align*} \rho'_{A B} & = T_{A B} \rho_{A B} T_{A B}^{\dagger} \\ & \leq T_{A B} \beta_{A B} T_{A B}^{\dagger} \\ & = \alpha_{A B} = 2^{\lambda} \cdot \sigma_{AB} \ , \end{align*} which implies $D_{\max}(\rho'_{A B}||\sigma_{AB}) \le \lambda$. It thus remains to be shown that \begin{equation} \label{eq:distbound} \| \rho_{A B} - \rho'_{A B} \|_1 \leq \sqrt{8 \mathrm{Tr}(\Delta_{A B})} \ . \end{equation} We first show that the Hermitian operator \[ \bar{T}_{A B} := \frac{1}{2} (T_{A B} + T_{A B}^\dagger) \ . 
\] satisfies \begin{equation} \label{eq:Tleqid} \bar{T}_{A B} \leq \id_{A B} \ . \end{equation} For any vector $\ket{\phi} = \ket{\phi}_{A B}$, \begin{align*} \| T_{A B} \ket{\phi} \|^2 & = \bra{\phi} T_{A B}^\dagger T_{A B} \ket{\phi} = \bra{\phi} \beta_{A B}^{-\half} \alpha_{A B} \beta_{A B}^{-\half} \ket{\phi} \\ & \leq \bra{\phi} \beta_{A B}^{-\half} \beta_{A B} \beta_{A B}^{-\half} \ket{\phi} = \| \ket{\phi} \|^2 \end{align*} where the inequality follows from $\alpha_{A B} \leq \beta_{A B}$. Similarly, \begin{align*} \| T_{A B} ^\dagger \ket{\phi} \|^2 & = \bra{\phi} T_{A B} T_{A B}^\dagger \ket{\phi} = \bra{\phi} \alpha_{A B}^\half \beta_{A B}^{-1} \alpha_{A B}^{\half} \ket{\phi} \\ & \leq \bra{\phi} \alpha_{A B}^{\half} \alpha_{A B}^{-1} \alpha_{A B}^{\half} \ket{\phi} = \| \ket{\phi} \|^2 \end{align*} where the inequality follows from the fact that $\beta_{A B}^{-1} \leq \alpha_{A B}^{-1}$ which holds because the function $\tau \mapsto -\tau^{-1}$ is operator monotone on $(0, \infty)$ (see Proposition V.1.6 of \cite{bhatia}). We conclude that for any vector $\ket{\phi}$, \begin{align*} \| \bar{T}_{A B} \ket{\phi} \| & \leq \frac{1}{2} \| T_{A B} \ket{\phi} + T_{A B}^{\dagger} \ket{\phi} \| \\ & \leq \frac{1}{2} \| T_{A B} \ket{\phi} \| + \frac{1}{2} \| T_{A B}^{\dagger} \ket{\phi} \| \leq \| \ket{\phi} \| \ , \end{align*} which implies~\eqref{eq:Tleqid}. We now determine the overlap between $\ket{\Psi}$ and $\ket{\Psi'}$, \begin{align*} \spr{\Psi}{\Psi'} & = \bra{\Psi} T_{A B} \otimes \id_R \ket{\Psi} \\ & = \mathrm{Tr}(\proj{\Psi} T_{A B} \otimes \id_R) = \mathrm{Tr}(\rho_{A B} T_{A B}) \ . 
\end{align*} Because $\rho_{A B}$ has trace one, we have \begin{align*} 1 - |\spr{\Psi}{\Psi'}| & \leq 1- \Re \spr{\Psi}{\Psi'} = \mathrm{Tr}\bigl(\rho_{A B} (\id_{A B} - \bar{T}_{A B}) \bigr) \\ & \leq \mathrm{Tr}\bigl(\beta_{A B} (\id_{A B} - \bar{T}_{A B})\bigr) \\ & = \mathrm{Tr}(\beta_{A B}) - \mathrm{Tr}(\alpha_{A B}^{\half} \beta_{A B}^{\half}) \\ & \leq \mathrm{Tr}(\beta_{A B}) - \mathrm{Tr}(\alpha_{A B}) = \mathrm{Tr}(\Delta_{A B}) \ . \end{align*} Here, the second inequality follows from the fact that, because of~\eqref{eq:Tleqid}, the operator $\id_{AB} - \bar{T}_{A B}$ is positive, and $\rho_{A B} \leq \beta_{A B}$. The last inequality holds because $\alpha_{A B}^{\half} \leq \beta_{A B}^{\half}$, which is a consequence of the operator monotonicity of the square root (Proposition V.1.8 of \cite{bhatia}). Using \reff{fidelity} and the fact that the fidelity between two pure states is given by their overlap, we find \begin{align*} \| \proj{\Psi} - \proj{\Psi'} \|_1 & \leq 2 \sqrt{2(1-| \spr{\Psi}{\Psi'} |)} \\ & \leq 2 \sqrt{2 \mathrm{Tr}(\Delta_{A B})} \leq \eps \ . \end{align*} Inequality~\eqref{eq:distbound} then follows because the trace distance can only decrease when taking the partial trace. \end{proof} \noindent {\bf{Proof of Lemma \ref{lem6n}}} \begin{proof} Let $\Delta^+_{A B}$ and $\Delta^-_{A B}$ be mutually orthogonal positive operators such that \[ \Delta^+_{A B} - \Delta^-_{A B} = \rho_{A B} - 2^{\lambda} \sigma_{AB} \ . \] Furthermore, let $P_{A B}$ be the projector onto the support of $\Delta^+_{A B}$, i.e., \[ P_{A B} = \{\rho_{A B} > 2^{\lambda} \sigma_{AB} \} \ . \] We then have \begin{align*} P_{A B} \rho_{A B} P_{A B} & = P_{A B} (\Delta^+_{A B} + 2^{\lambda} \sigma_{AB} - \Delta^-_{A B}) P_{A B} \\ & \ge \Delta^{+}_{A B} \end{align*} and, hence, \[ \sqrt{8 \mathrm{Tr}(\Delta^{+}_{A B})} \le \sqrt{8 \mathrm{Tr}(P_{A B} \rho_{AB})} = \eps \ . 
\] The assertion now follows from Lemma~\ref{lem5n} because \[ \rho_{A B} \leq 2^{\lambda} \sigma_{AB} + \Delta^+_{A B} \ . \] \end{proof} \end{document}
\begin{document} \title{Constructing families of moderate-rank elliptic curves over number fields} \author[Mehrle]{David Mehrle} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \author[Miller]{Steven J. Miller} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}, \textcolor{blue}{\href{[email protected]}{[email protected]}}} \address{Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267} \author[Reiter]{Tomer Reiter} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \author[Stahl]{Joseph Stahl} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \author[Yott]{Dylan Yott} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \subjclass[2010]{11G05 (primary), 11G20, 11G40, 14G10} \keywords{Elliptic curves, rational elliptic surface, rank of the Mordell-Weil group, number fields, sums of Legendre symbols} \date{\today} \thanks{This work was supported by NSF grants DMS-1347804, DMS-1265673, Williams College, and the PROMYS program. The authors thank \'Alvaro Lozano-Robledo, Rob Pollack and Glenn Stevens for their insightful comments and support. Thanks also to the referee for their careful reading of an earlier version of this paper.} \begin{abstract} We generalize a construction of families of moderate rank elliptic curves over $\mathbb Q$ to number fields $K/\mathbb Q$. The construction, originally due to Scott Arms, \'Alvaro Lozano-Robledo and Steven J. Miller, invokes a theorem of Rosen and Silverman to show that computing the rank of these curves can be done by controlling the average of the traces of Frobenius; the construction for number fields proceeds in essentially the same way. One novelty of this method is that we can construct families of moderate rank without having to explicitly determine points and calculating determinants of height matrices. 
\end{abstract} \maketitle \section{Introduction} If $E$ is an elliptic curve over $\mathbb Q$, then the associated group of rational solutions, the Mordell-Weil group $E(\mathbb Q)$, is finitely generated. The rank of this group is a very interesting and well-studied quantity in modern number theory; the famous Birch and Swinnerton-Dyer conjecture states that its rank equals the order of vanishing of the elliptic curve's $L$-function at the central point. We assume the reader is familiar with the basics of the subject; good references are \cite{Kn,Si1,Si2,SiTa}. It is unknown if the rank of an elliptic curve over $\mathbb Q$ can be arbitrarily large. It is an interesting and difficult problem to find examples or families of curves with large rank. To date, the best known results are due to Elkies, who constructed an elliptic curve of rank at least 28 (or exactly 28 subject to the Generalized Riemann Hypothesis \cite{KSW}) and a family of elliptic curves of rank at least 18; see \cite{BMSW} for a survey of recent results on the distribution of ranks of curves in families, and conjectures for their behavior. Many of the constructions of high-rank families of elliptic curves begin by forcing points to lie on the curves, and then calculating the associated height matrices to verify that they are linearly independent (see for example \cite{Mes1,Mes2,Na1}). We pursue an alternative approach introduced by Arms, Lozano-Robledo and Miller \cite{AL-RM}. Briefly, their strategy is to use a result of Rosen and Silverman \cite{RoSi}, which converts the problem of constructing families of elliptic curves with large rank to finding associated Legendre sums that are large. While in general these sums are intractable, for some carefully constructed families these can be determined in closed form, which allows us to determine the rank of the families \emph{without} having to list points and compute height matrices. 
Our main result is to generalize the work in \cite{AL-RM} from elliptic curves over $\mathbb Q$ to elliptic curves over number fields. Specifically, we show the following. \begin{theorem}\label{thm:main} Let $K$ be a number field. Then there exists an elliptic curve $\mathcal E$ over $K(T)$ with $j(\mathcal E) \not \in \mathbb Q(T)$ such that the rank of $\mathcal E$ over $K(T)$ is exactly 6. \end{theorem} By specializing to $T = t$ for some $t \in K$, we obtain curves $\mathcal E_t$ over $K$ from the curve $\mathcal E$ over $K(T)$. Silverman's specialization theorem \cite[Theorem 11.4]{Si2} tells us that for all but finitely many $t \in K$, the rank can only possibly increase. \begin{cor} Let $K$ be a number field. There are infinitely many elliptic curves over $K$ with rank at least 6. \end{cor} \begin{remark} Arms et al.\ \cite{AL-RM} construct infinitely many elliptic curves over $\mathbb Q$ with rank at least 6. By base-extending these curves to $K$, we may trivially obtain infinitely many elliptic curves over $K$ with rank at least $6$. Our contribution is to construct curves that are defined over $K$ but \emph{not} defined over $\mathbb Q$; this is evident because the $j$-invariant of curves we construct in Theorem \ref{thm:main} lies in $K(T)$ but not in $\mathbb Q(T)$. \end{remark} \section{The Construction} Let $K$ be a number field and $\mathcal{O}_K$ its ring of integers. Let $\mathcal{E}$ be the elliptic curve over $K(T)$ defined by \[ \label{eq:ellipticcurve} \mathcal{E} \colon \hspace{1em} y^2 + a_1(T)xy + a_3(T)y \ =\ x^3 + a_2(T)x^2 + a_4(T)x + a_6(T), \] where $a_i(T) \in \mathcal{O}_K[T]$. By Silverman's specialization theorem \cite[Theorem 11.4]{Si2}, for all but finitely many $t \in \mathcal{O}_K$ the Mordell-Weil rank of the fiber $\mathcal{E}_t$ over $K$ is at least the rank of $\mathcal{E}$ over $K(T)$. 
Therefore, if we can compute the rank of $\mathcal{E}$, we have a family of infinitely many curves $\mathcal{E}_t$ over $K$ with at least the rank of $\mathcal{E}$. To that end, for $\mathcal{E}$ as above and $\mathfrak p$ a prime of good reduction in $\mathcal{O}_K$ (we do not consider the bad primes here), we define the average \begin{equation} \label{eq:avgnumpoints} A_{\mathcal{E}}(\mathfrak p)\ :=\ \frac{1}{N(\mathfrak p)} \sum_{t \in \mathcal{O}_K / \mathfrak p} a_{t}(\mathfrak p), \end{equation} where $N(\mathfrak p) = | \mathcal{O}_K / \mathfrak p |$ and $a_t(\mathfrak p) = N(\mathfrak p) + 1 - \#\mathcal{E}_t(\mathcal{O}_K / \mathfrak p)$. Nagao \cite{Na2} conjectured that these sums are related to the rank of the family of elliptic curves. Rosen and Silverman proved this conjecture when $\mathcal{E}$ is a rational elliptic surface \cite{RoSi}. Specifically, whenever Tate's conjecture holds (which is known for $K3$ surfaces over certain fields \cite{Sr}) we have \begin{equation} \label{eq:rosensilvermanthm} \lim_{X \to \infty} \frac{1}{X} \sum_{\mathfrak p \colon N(\mathfrak p) \leq X} -A_\mathcal{E}(\mathfrak p) \log(N(\mathfrak p))\ =\ \text{rank } \mathcal{E}(K(T)). \end{equation} Below we study certain carefully chosen families where we are able to prove that $A_\mathcal{E}(\mathfrak p) = -6$ for almost all primes $\mathfrak p$, thus proving these families have rank 6. To calculate the limit \eqref{eq:rosensilvermanthm}, we appeal to the Landau Prime Ideal Theorem, a generalization of the Prime Number Theorem. \begin{theorem}[Landau Prime Ideal Theorem \cite{Lan}] We have \[ \sum_{\mathfrak p \colon N(\mathfrak p) \leq X} \log(N(\mathfrak p)) \ \sim\ X. 
\] \end{theorem} Assuming we can produce $\mathcal{E}$ such that $A_{\mathcal{E}}(\mathfrak p) = -6$ for almost all $\mathfrak p$, combining the Landau Prime Ideal Theorem with equation \eqref{eq:rosensilvermanthm} shows that \[ \text{rank } \mathcal{E}(K(T))\ =\ 6, \] which completes the proof of Theorem \ref{thm:main}. So it remains to show that we can produce an $\mathcal{E}$ such that $A_{\mathcal{E}}(\mathfrak p) = -6$. As in Equation 2.2 of \cite{AL-RM}, define \begin{eqnarray} \label{eq:curvedef} y^2\ =\ f(x,T) &=& x^3 T^2 + 2g(x)T - h(x) \notag\\ g(x) &=& x^3 + ax^2 + bx + c \label{eq:curveformula}\\ h(x) &=& (A-1)x^3 + Bx^2 + Cx + D \notag\\ D_T(x) &=& g(x)^2 + x^3h(x).\notag \end{eqnarray} Notice that $D_T(x)$ is one-quarter of the discriminant of $f(x,T)$, considered as a quadratic polynomial in $T$. When we specialize to a particular $t \in \mathcal{O}_K$, we write $D_t(x)$ for one-quarter of the discriminant of $f(x,t)$. Since $D_T(x)$ has degree six in $x$, we may write $r_1, r_2, \ldots, r_6$ for its roots. We will see that the number of distinct, nonzero roots of $D_T(x)$ controls the rank of the curve. In order to show our claim for the elliptic curve $y^2 = f(x,T)$, we must pick six distinct, nonzero roots of $D_T(x)$ which are squares in $\mathcal{O}_K$. We also need the analogue of equation 2.1 from \cite{AL-RM} for number fields, which can be stated as follows. 
Suppose $a$ and $b$ are both nonzero mod $\mathfrak p$, and that $N(\mathfrak p) > 2$. Then \begin{equation} \label{eq:legendresum} \sum_{t \in \mathcal{O}_K / \mathfrak p} \left( \frac{at^2 + bt + c}{\mathfrak p} \right)\ =\ \begin{cases} (N(\mathfrak p) - 1) \left( \frac{a}{\mathfrak p} \right) & \text{if } (b^2 - 4ac) \in \mathfrak p \\ - \left( \frac{a}{\mathfrak p} \right) & \text{otherwise.} \end{cases} \end{equation} Note that \eqref{eq:legendresum} is already demonstrated when $\mathcal{O}_K/\mathfrak p = \mathbb F_p$ is a finite field of prime order $p$ in Lemma A.2 of \cite{AL-RM} (they give two proofs; the result also appears in \cite{BEW}). Therefore, it suffices to show \eqref{eq:legendresum} when $\mathcal{O}_K/\mathfrak p = \mathbb F_q$ is a finite field of order $q = p^r$ for $p$ prime and $r > 1$; we do so in Proposition \ref{importantlemma} in the next section. Write $\mathbb F_q = \mathcal{O}_K/\mathfrak p$ for the residue field of $\mathcal{O}_K$ at $\mathfrak p$. We have for the fiber of the elliptic surface $y^2 = f(x,T)$ at $T = t$ \[ a_t(\mathfrak p) = - \sum_{x \in \mathcal{O}_K/\mathfrak p} \left(\frac{f(x,t)}{\mathfrak p}\right) = - \sum_{x \in \mathcal{O}_K/\mathfrak p} \left(\frac{x^3 t^2 + 2g(x)t - h(x)}{\mathfrak p}\right), \] where $\left(\frac \cdot \mathfrak p\right)$ is the Legendre symbol of the residue field $\mathcal{O}_K/\mathfrak p$. Now we study $- N(\mathfrak p) A_\mathcal{E}(\mathfrak p) = -\sum_{t \in \mathcal{O}_K/\mathfrak p} a_{t}(\mathfrak p) = \sum_{x, t \in \mathcal{O}_K/\mathfrak p} \left(\frac{f(x,t)}{\mathfrak p}\right)$ to calculate $A_\mathcal{E}(\mathfrak p)$, as in equation \eqref{eq:avgnumpoints}. When $x \in \mathfrak p$, the $t$-sum vanishes unless $c \in \mathfrak p$ --- it is just $\sum_{t \in \mathcal{O}_K/\mathfrak p} \left(\frac{2ct - D}{\mathfrak p}\right)$. Assume now $x \not \in \mathfrak p$. 
Then by \eqref{eq:legendresum}, we have \[ \sum_{t \in \mathcal{O}_K/\mathfrak p} \left(\frac{x^3 t^2 + 2g(x) t - h(x)}{\mathfrak p}\right) = \begin{cases} (N(\mathfrak p) - 1) \left(\frac{x^3}{\mathfrak p}\right) & \text{if } D_T(x) \in \mathfrak p \\ - \left( \frac{x^3}{\mathfrak p} \right) & \text{otherwise.} \end{cases} \] If the roots $r_1, r_2, \ldots, r_6$ are squares in $\mathcal{O}_K$, then each contributes $(N(\mathfrak p) - 1)\left(\frac{r_i^3}{\mathfrak p}\right) = N(\mathfrak p) - 1$ to the sum. If the $r_i$ are not squares, then $\left(\frac{r_i}{\mathfrak p}\right)$ will be $1$ for half of the primes of $\mathcal{O}_K$ and $-1$ for the other half, and therefore they yield no net contribution on average. So assume that we may choose coefficients $a,b,c,A,B,C,D$ such that $D_T(x)$ has six distinct, non-zero roots $r_i \in \mathcal{O}_K$, each of which is a square. Write $r_i = \rho_i^2$ for $i = 1, \ldots, 6$. Then \begin{align*} - N(\mathfrak p) A_\mathcal{E}(\mathfrak p) & = \sum_{\substack{x \in \mathcal{O}_K/\mathfrak p \\ t \in \mathcal{O}_K/\mathfrak p}} \left(\frac{f(x,t)}{\mathfrak p}\right) = \sum_{\substack{x \in \mathcal{O}_K/\mathfrak p \\ t \in \mathcal{O}_K/\mathfrak p}} \left(\frac{x^3t^2 + 2g(x) t - h(x)}{\mathfrak p}\right) \notag \\ & = \sum_{\substack{x \colon D_T(x) \in \mathfrak p\\ t \in \mathcal{O}_K/\mathfrak p}} \left(\frac{f(x,t)}{\mathfrak p}\right) + \sum_{\substack{x \colon D_T(x) \not \in \mathfrak p \\ t \in \mathcal{O}_K/\mathfrak p}} \left(\frac{f(x,t)}{\mathfrak p}\right) \\ & = 6(N(\mathfrak p)-1) - \sum_{x \colon D_T(x) \not \in \mathfrak p} \left(\frac{x^3}{\mathfrak p}\right) \notag\\ & = 6(N(\mathfrak p)-1) + 6 = 6N(\mathfrak p).\notag \end{align*} Hence $- N(\mathfrak p)A_{\mathcal{E}}(\mathfrak p) = 6 N(\mathfrak p)$, and therefore $A_{\mathcal{E}}(\mathfrak p) = -6$, completing the proof of Theorem \ref{thm:main}.
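The character-sum evaluation \eqref{eq:legendresum} that drives this computation can be sanity-checked numerically in the prime-field case $\mathcal{O}_K/\mathfrak p = \mathbb F_p$. The following sketch is illustrative only (it handles odd primes, computing the Legendre symbol via Euler's criterion) and verifies both cases of the identity by brute force:

```python
# Numerical check of the quadratic Legendre-sum identity over F_p, p an odd
# prime: sum_t ((a t^2 + b t + c)/p) equals (p-1)(a/p) when p | b^2 - 4ac,
# and -(a/p) otherwise.  Prime-field case only; illustrative sketch.

def legendre(x, p):
    """Legendre symbol (x/p) computed via Euler's criterion."""
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def quadratic_sum(a, b, c, p):
    """The sum of (a t^2 + b t + c / p) over all t in F_p."""
    return sum(legendre(a * t * t + b * t + c, p) for t in range(p))

def expected(a, b, c, p):
    """Right-hand side of the identity, split on the discriminant mod p."""
    if (b * b - 4 * a * c) % p == 0:
        return (p - 1) * legendre(a, p)
    return -legendre(a, p)

# Exhaustive check over all quadratics with a nonzero leading coefficient.
for p in [3, 5, 7, 11, 13]:
    for a in range(1, p):
        for b in range(p):
            for c in range(p):
                assert quadratic_sum(a, b, c, p) == expected(a, b, c, p)
```

The exhaustive loop covers every quadratic with $a \not\equiv 0 \pmod p$ for the listed primes; the prime-power case $q = p^r$ handled by Proposition \ref{importantlemma} would require finite-field arithmetic beyond this sketch.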
Now we must find $a, b, c, A, B, C, D \in \mathcal{O}_K$ such that $D_T(x)$ has six distinct, nonzero roots $r_i = \rho_i^2$: \begin{align} D_T(x) &= g(x)^2 + x^3 h(x) \notag \\ & = Ax^6 + (B + 2a)x^5 + (C + a^2 + 2b) x^4 + (D + 2ab + 2c) x^3 \notag\\ & \hspace*{1cm} + (2ac + b^2) x^2 + (2 bc) x + c^2 \label{eq:discriminantcoefficients} \\ & = A(x^6 + R_5x^5 + R_4x^4 + R_3x^3 + R_2 x^2 + R_1x + R_0) \notag\\ & = A (x - \rho_1^2)(x - \rho_2^2)(x - \rho_3^2)(x - \rho_4^2)(x - \rho_5^2)(x - \rho_6^2) \notag \end{align} In practice, we will choose roots $\rho_i^2$ and then determine the polynomial $D_T(x)$, and from it the coefficients $a, b, c, A, B, C, D$. Note that in the above we are free to choose $B, C, D$, so matching coefficients for the $x^5$, $x^4$, and $x^3$ terms adds no additional constraints. So we must simultaneously solve the following three equations in $\mathcal{O}_K$: \begin{align*} 2ac + b^2 &= R_2 A, \\ 2bc &= R_1A, \\ c^2 &= R_0 A. \end{align*} So long as this system of Diophantine equations is solvable in $\mathcal{O}_K$, we may construct such an elliptic surface. In Section \ref{sec:examples} below, we provide some examples of elliptic surfaces over number fields. \section{Quadratic Legendre Sums} The following proposition on quadratic Legendre sums in finite fields is the generalization of Lemma A.1 from \cite{AL-RM} to number fields (see also \cite{BEW}). Let $q=p^r$ be an odd prime power, and assume $\mathcal{O}_K / \mathfrak p = \mathbb F_q$ is a finite field with $q$ elements. Let $\left( \frac{\cdot}{q} \right)$ denote the $\mathbb F_q$-Legendre symbol, which indicates whether or not an element of $\mathbb F_q$ is a square.
\begin{prop} \label{importantlemma} If $a \in \mathcal{O}_K$ is not zero modulo $\mathfrak p$, then \[ \sum_{t \in \mathbb F_q} \left(\frac{at^2 + bt + c}{q}\right) = \begin{cases} (q - 1) \left(\frac a q\right) & \text{if } b^2 - 4ac \equiv 0 \bmod \mathfrak p\\ - \left(\frac a q \right) & \text{otherwise.} \end{cases} \] \end{prop} \begin{proof} The first case is straightforward: if $b^2-4ac\equiv0 \bmod \mathfrak p$, then $at^2+bt+c = a(t-t')^2$ for some $t' \in \mathbb F_q$, and each term in the sum except the one at $t = t'$ contributes $\left(\frac{a}{q}\right)$, while the term at $t = t'$ contributes $0$. For the other case, when $b^2 - 4ac \not \equiv 0 \bmod \mathfrak p$, we first reinterpret the sum as counting points on the conic $C: s^2 = at^2+ bt + c$ in the following way: \[ \#C(\mathbb F_q)\ =\ \sum_{t \in \mathbb F_q} \left(1+ \left(\frac{at^2+bt+c}{q} \right)\right) \ =\ q + S. \] Here $S$ is the sum of interest. It is well-known that a nondegenerate conic of this particular form always has a rational point over $\mathbb F_q$ when $q$ is a power of an odd prime \cite{E}, \cite[Theorem 3.4]{Su}. From this, we may parameterize all rational points using some line that does not meet the original rational point. This gives at most $q+1$ points on the curve. However, this parametrization introduces a denominator that is possibly quadratic in $t$, which means at most $2$ rational points on the line might not correspond to rational points on the curve. Thus we have \[ q-1\ \leq\ \#C(\mathbb F_q) \ \leq\ q+1, \] which gives \[ -1 \ \leq\ S \ \leq\ 1. \] To determine the value of $S$, we compute it modulo $p$. By Euler's criterion in finite fields, $(at^2+bt+c)^{\frac{q-1}{2}} = \legsym{a}{q}\, t^{q-1} + r(t)$ for some polynomial $r(t) = \sum_i r_i t^i$ of degree $< q-1$, so \[ S \ \equiv\ \sum_{t \in \mathbb F_q} (at^2 + bt+ c)^{\frac{q-1}{2}}\ \equiv\ \sum_{t \in \mathbb F_q} \left( \legsym{a}{q}\, t^{q-1} + r(t) \right) \ \equiv\ -\legsym{a}{q} + \sum_i r_i \sum_{t \in \mathbb F_q} t^i \pmod p, \] where we used $\sum_{t \in \mathbb F_q} t^{q-1} = q - 1 \equiv -1 \pmod p$, and the $r_i$ are the coefficients of $r(t)$.
Each of the inner sums $\sum_{t \in \mathbb F_q} t^i$ vanishes: for $0 < i < q-1$, choose $u \in \mathbb F_q^{\times}$ with $u^i \neq 1$; the substitution $t \mapsto ut$ permutes the summands and multiplies the sum by $u^i$, which forces the sum to be $0$. For $i = 0$ the sum is $q \equiv 0 \pmod p$. Thus $S \equiv -\legsym{a}{q} \pmod{p}$. Since $-1 \leq S \leq 1$, we have $S = -\legsym{a}{q}$, as desired. \end{proof} \section{Examples} \label{sec:examples} Let $K$ be an arbitrary number field. Theorem \ref{thm:main} claims that we may produce elliptic curves of rank $6$ over $K$; we provide a recipe to produce these curves over number fields in this section. As in \cite[Section 2.2]{AL-RM}, we choose $A = 64 R_0^3$ for simplicity. This choice is convenient because it allows us to solve \[\begin{array}{llll} c^2 &= 64 R_0^4 & \Rightarrow c &= 8 R_0^2 \\ 2bc &= 64 R_0^3 R_1 & \Rightarrow b &= 4 R_0 R_1\\ 2ac + b^2 &= 64 R_0^3 R_2 & \Rightarrow a & = 4 R_0 R_2 - R_1^2. \end{array}\] Additionally, we may solve for $B, C, D$ in terms of $R_0, \ldots, R_5$. Altogether, we have \begin{align} a & = 4 R_0 R_2 - R_1^2\notag\\ b & = 4 R_0 R_1\notag\\ c & = 8 R_0^2\notag\\ A & = 64 R_0^3 \label{eq:solvedcoeffs}\\ B & = A R_5 - 2a\notag\\ C & = A R_4 - a^2 - 2b\notag\\ D & = A R_3 - 2ab - 2c\notag \end{align} The above determines the coefficients of the elliptic curve defined by the equations \eqref{eq:curvedef} in terms of the roots $r_i = \rho_i^2$ of the discriminant $D_T(x)$, as in \eqref{eq:discriminantcoefficients}. Expanding the first line of \eqref{eq:curvedef}, we arrive at the following equation for the elliptic curve.
\begin{equation} \label{eq:expandedcurveequation} y^2 = x^3 + (2aT - B)x^2 + (2bT - C)(T^2 + 2T - A + 1)x + (2cT - D)(T^2 + 2T - A + 1)^2 \end{equation} To produce curves over $K$, we may use the following recipe: \begin{itemize} \setlength{\itemsep}{1em} \item choose six squares $r_1 = \rho_1^2, \ldots, r_6 = \rho_6^2$ in $K$ to be the roots of $D_T(x)$; \item solve for $R_0, \ldots, R_5$ as the coefficients of the degree-six polynomial $D_T(x)$; \item use \eqref{eq:solvedcoeffs} to find $a, b, c, A, B, C, D$; \item plug these values into \eqref{eq:expandedcurveequation} to determine the equation for the elliptic curve; \item specialize at $T = t$ for some $t \in K$. For generic choices of $t$, this specializes to an elliptic curve of rank at least $6$, by Silverman's specialization theorem \cite[Theorem 11.4]{Si2}. \end{itemize} We carry out this procedure for a few choices of number fields below. \begin{example} Let $K = \mathbb Q$. In \cite[Theorem 2.1]{AL-RM}, a rank 6 elliptic surface $\mathcal{E}$ over $\mathbb Q(T)$ is constructed with equation as in \eqref{eq:expandedcurveequation} and $r_\ell = \ell^2$ for $\ell = 1, 2, \ldots, 6$; the resulting coefficients are \[\begin{array}{rlrl} a &= 16660111104, & A &= 8916100448256000000,\\ b &= -1603174809600, \hspace*{1cm}& B &= -811365140824616222208,\\ c &= 2149908480000, & C &= 26497490347321493520384,\\ & & D &= -343107594345448813363200. \end{array}\] \end{example} \begin{example} Choose $K = \mathbb Q(i)$, $i = \sqrt{-1}$.
Then, with the choices $\rho_1 = 1 + i$ and $\rho_s = s$ for $s = 2, 3, \ldots, 6$ (so $r_\ell = \rho_\ell^2$), we find \begin{align*} a & = - 353892105216+ 528220569600i \\ b & = 2112882278400-2149908480000i \\ c & = -8599633920000\\ A & = -71328803586048000000i\\ B & = 153634690402938169196544+ 380285771115321439027200i \\ C & = 166616532655598905196544 + 166085373946419295027200i \\ D & = - 1191348658308947587891200-789381960170093936640000i \end{align*} Via \eqref{eq:expandedcurveequation}, this determines a rank 6 elliptic curve $\mathcal E$ over $K(T) = \mathbb Q(i)(T)$. The $j$-invariant of this curve is $j(\mathcal E) = p(T)/q(T)$, where $p(T)$ and $q(T)$ are degree $9$ and $10$ polynomials in $T$, respectively. The leading coefficient of $p(T)$ is \begin{align*} p_9 &= 17575652563096624654081015917110624256000000 \\ & \hspace{1cm} + 16682776400034638205353357277029990400000000i. \end{align*} In particular, $j(\mathcal E) \in K(T)$, but $j(\mathcal{E}) \not \in \mathbb Q(T)$. \end{example} \begin{example} Choose $K = \mathbb Q(\zeta_5)$, where $\zeta_5$ is a fifth root of unity. Then, with the choices $r_\ell = \rho_\ell^2$, where \[ \rho_1 = \zeta_5, \ \ \rho_2 = \zeta_5^2, \ \ \rho_3 = \zeta_5^3, \ \ \rho_4 = 1 + \zeta_5, \ \ \rho_5 = 1 + \zeta_5^2, \ \ \rho_6 = 1 + \zeta_5^3, \] the coefficients are \begin{align*} a & = 55\zeta_5^3 - 8\zeta_5^2 + 41\zeta_5 + 25 \\ b & = -12\zeta_5^3 - 40\zeta_5^2 + 20\zeta_5 - 48\\ c & = -24\zeta_5^2 + 16\zeta_5 - 24\\ A & = -832\zeta_5^3 - 320\zeta_5^2 - 320\zeta_5 - 832\\ B & = 2312\zeta_5^3 + 3693\zeta_5^2 - 861\zeta_5 + 5117\\ C & = -2040\zeta_5^3 + 1837\zeta_5^2 - 2397\zeta_5 + 573\\ D & = 2440\zeta_5^3 + 152\zeta_5^2 + 1408\zeta_5 + 1664 \end{align*} This determines a rank 6 elliptic curve $\mathcal E$ over $K(T) = \mathbb Q(\zeta_5)(T)$ via \eqref{eq:expandedcurveequation}. The $j$-invariant for this elliptic curve is $j(\mathcal E) = p(T)/q(T)$, where $p(T)$ and $q(T)$ are degree 9 and 10 polynomials in $T$, respectively.
The leading coefficient of $p(T)$ is \begin{align*} p_9 &=-1203674209337006199645159424 \zeta_{5}^{3} + 470942041084292914570780672 \zeta_{5}^{2} \\ &\hspace{2cm} - 1034969760873268271839698944 \zeta_{5} - 272969531667632951696109568, \end{align*} witnessing the fact that $j(\mathcal{E}) \in K(T)$, but $j(\mathcal E) \not \in \mathbb Q(T)$. \end{example} \end{document}
\begin{definition}[Definition:Congruence (Number Theory)/Modulus] Let $x$ be congruent to $y$ modulo $m$. The number $m$ in this congruence is known as the '''modulus''' of the congruence. \end{definition}
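In computational terms, the modulus is the divisor used in a remainder test: $x \equiv y \pmod m$ exactly when $m$ divides $x - y$. A minimal illustration:

```python
# x is congruent to y modulo m exactly when m divides x - y;
# the number m is the modulus of the congruence.
def congruent(x, y, m):
    return (x - y) % m == 0

assert congruent(17, 5, 12)       # 17 == 5 (mod 12)
assert not congruent(17, 6, 12)   # 17 != 6 (mod 12)
assert congruent(-7, 5, 12)       # congruence is insensitive to sign shifts by m
```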
Not so fast: LB-1 is unlikely to contain a 70 M$_\odot$ black hole Kareem El-Badry, Eliot Quataert The recently discovered binary LB-1 has been reported to contain a ${\sim}70\,\mathrm{M}_{\odot}$ black hole (BH). The evidence for the unprecedentedly high mass of the unseen companion comes from reported radial velocity (RV) variability of the H α emission line, which has been proposed to originate from an accretion disc around a BH. We show that there is in fact no evidence for RV variability of the H α emission line, and that its apparent shifts instead originate from shifts in the luminous star's H α absorption line. If not accounted for, such shifts will cause a stationary emission line to appear to shift in antiphase with the luminous star. We show that once the template spectrum of a B star is subtracted from the observed Keck/HIRES spectra of LB-1, evidence for RV variability vanishes. Indeed, the data rule out periodic variability of the line with velocity semi-amplitude $K_{\rm H\alpha} > 1.3\,{\rm km\,s^{-1}}$. This strongly suggests that the observed H α emission does not originate primarily from an accretion disc around a BH, and thus that the mass ratio cannot be constrained from the relative velocity amplitudes of the emission and absorption lines. The nature of the unseen companion remains uncertain, but a 'normal' stellar-mass BH with mass $5 \lesssim M/M_\odot \lesssim 20$ seems most plausible. The H α emission likely originates primarily from circumbinary material, not from either component of the binary. Monthly Notices of the Royal Astronomical Society: Letters https://doi.org/10.1093/mnrasl/slaa004 Keywords: binaries: spectroscopic; stars: emission-line, Be
El-Badry K, Quataert E. Not so fast: LB-1 is unlikely to contain a 70 M$_\odot$ black hole. Monthly Notices of the Royal Astronomical Society: Letters. 2020;493(1):L22-L27. https://doi.org/10.1093/mnrasl/slaa004
Funding Information: We thank the anonymous referee for a constructive report, and JJ Eldridge, Andrew Howard, Howard Isaacson, Andreas Irrgang, Chris Kochanek, Jifeng Liu, Ben Margalit, Todd Thompson, Dan Weisz, Yanqin Wu, and Wei Zhu for helpful discussions. KE acknowledges support from an NSF graduate research fellowship. © 2020 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society.
Thermodynamics of photochemical reactions Suppose I have a photochemical reaction in gas phase, such as $$ \ce{CO2 + {$h\nu$} -> CO + O}. $$ I would like to work through the thermodynamics of such a reaction and understand the meaning of every term. For a non-photochemical reaction, such as $\ce{CO + O -> CO2}$, I have a good understanding of how the change in Gibbs energy breaks down into enthalpy, internal entropy and a log-concentration term corresponding to the entropy of mixing. If I ignore the photon in the first reaction above, I can regard it as simply the reverse of this reaction, which due to its photochemical driving force is able to reduce the Gibbs energy by the same amount, $\Delta G$. All of this is unproblematic. However, $h\nu$ itself is an energy change, which I can calculate, assuming I know the frequency $\nu$ of the absorbed light. I would like to know how to think about the thermodynamics of the whole system, including the coupling of the chemistry to the radiation field. In particular, my questions are: How does the energy $h\nu$ relate to the other terms with energy units, such as $\Delta H$ and $\Delta G^0$? I know that while a single photon has no internal entropy, a beam of light of a particular frequency can be thought of as having a temperature and an entropy. How should I think about the role of radiation entropy in the thermodynamics of photochemistry, and in particular, how can I do second-law calculations for photochemical reactions? thermodynamics photochemistry
Not sure what you mean by second-law calculations. Can you clarify?
– Jan Jensen May 27 '15 at 7:13
@JanJensen sure - I mean either of the following two closely related things: (i) how can I calculate the total rate of increase of entropy in the universe, including the entropy of the incoming and outgoing radiation fields, per mole reacted; or (ii) for a reversible photochemical reaction, what are the conditions for the chemical system to be in thermodynamic equilibrium with the radiation field? – Nathaniel May 27 '15 at 8:27
Photochemistry doesn't work by thermally exciting the reacting molecules. You can't simply convert $h\nu$ to energy or entropy and treat it as a thermodynamic variable – Jan Jensen May 27 '15 at 8:57
Your first statement is true but the second is a non sequitur. $h\nu$ is an energy. $E=h\nu$ is the equation for the energy of a photon; that's why we write it that way. It's completely uncontroversial that the heat given off by a photochemical reaction is $h\nu-\Delta H$. But that's only the first-law energy balance. I'm asking how to do the corresponding second-law calculations, that's all. – Nathaniel May 27 '15 at 12:00
I have only just seen this post so by now you have probably worked it out. If not, look at G. Porter, Journal of the Chemical Society, Faraday Transactions 2, 1983, vol. 79, p. 473-482, which discusses the thermodynamics of photochemical reactions. You can also find the article in the book 'Chemistry in Microtime' by the same author. – porphyrin Jul 29 '16 at 13:26
$h\nu$ is not a thermal energy, so thermodynamic considerations do not apply. For example, you cannot predict the probability of being in an electronic excited state from equilibrium thermodynamics. The main reason is that the excitation is not a result of (classical) thermal energy transfer, but rather the absorption of a photon, which changes the quantum state.
With a constant light source $$\ce{CO2} + h\nu \ce{-> CO + O}$$ will reach a steady state with constant concentrations that will be different from the thermal equilibrium concentrations. However, you cannot extract a standard free energy difference by $$ \Delta G^\circ=-RT\ln \left( \frac{p_{\ce{CO}}p_{\ce{O}}}{p_{\ce{CO2}}} \right) $$ because the products and reactant are not in thermal equilibrium. – Jan Jensen
Thermodynamic considerations do apply to electromagnetic radiation though, the most basic results being the Stefan-Boltzmann law and the Planck formula for radiation entropy. These are for black body radiation, but formulas also exist for the entropy and temperature of light of a specific frequency, intensity and polarisation. – Nathaniel May 27 '15 at 8:26
Yes, but they still refer to the heat equivalent of the energy of light. Electronic excitation is fundamentally a quantum phenomenon, which is why only certain wavelengths work for certain systems. – Jan Jensen May 27 '15 at 8:59
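To make the first-law bookkeeping from the comment thread concrete: the molar energy carried by monochromatic light is $N_A h\nu = N_A hc/\lambda$, and the heat released per mole of photons absorbed is $N_A h\nu - \Delta H$. A small sketch (the 200 nm wavelength and any $\Delta H$ fed into the balance are illustrative assumptions, not measured data for the CO$_2$ photolysis above):

```python
# Molar photon energy E = N_A * h * c / lambda, and the first-law heat
# balance q = E - delta_H per mole of photons absorbed.  The wavelength and
# reaction enthalpy used below are illustrative placeholders only.
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def molar_photon_energy_kj(wavelength_m):
    """Energy of one mole of photons at the given wavelength, in kJ/mol."""
    return N_A * H * C / wavelength_m / 1000.0

def heat_released_kj(wavelength_m, delta_h_kj):
    """First-law balance: photon energy in minus enthalpy stored in products."""
    return molar_photon_energy_kj(wavelength_m) - delta_h_kj

e_200nm = molar_photon_energy_kj(200e-9)
print(round(e_200nm))  # → 598 (kJ per mole of 200 nm photons)
```

This addresses only the energy balance; the second-law accounting asked about in the question additionally needs the entropy of the incoming and outgoing radiation fields.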
BioData Mining A prognostic model based on seven immune-related genes predicts the overall survival of patients with hepatocellular carcinoma Qian Yan, Wenjiang Zheng, Boqing Wang, Baoqian Ye, Huiyan Luo, Xinqian Yang, Ping Zhang & Xiongwen Wang BioData Mining volume 14, Article number: 29 (2021) Hepatocellular carcinoma (HCC) is a disease with a high incidence and a poor prognosis. Growing evidence has shown that the immune system plays a critical role in the biological processes of HCC such as progression, recurrence, and metastasis, and some studies have discussed using it as a weapon against a variety of cancers. However, the impact of immune-related genes (IRGs) on the prognosis of HCC remains unclear. Based on The Cancer Genome Atlas (TCGA) and Immunology Database and Analysis Portal (ImmPort) datasets, we integrated the ribonucleic acid (RNA) sequencing profiles of 424 HCC patients with IRGs to calculate immune-related differentially expressed genes (DEGs). Survival analysis was used to establish a prognostic model of survival- and immune-related DEGs. Based on genomic and clinicopathological data, we constructed a nomogram to predict the prognosis of HCC patients. Gene set enrichment analysis further clarified the signalling pathways of the high-risk and low-risk groups constructed based on the IRGs in HCC. Next, we evaluated the correlation between the risk score and the infiltration of immune cells, and finally, we validated the prognostic performance of this model in the GSE14520 dataset. A total of 100 immune-related DEGs were significantly associated with the clinical outcomes of patients with HCC.
We performed univariate and multivariate least absolute shrinkage and selection operator (Lasso) regression analyses on these genes to construct a prognostic model of seven IRGs (Fatty Acid Binding Protein 6 (FABP6), Microtubule-Associated Protein Tau (MAPT), Baculoviral IAP Repeat Containing 5 (BIRC5), Plexin-A1 (PLXNA1), Secreted Phosphoprotein 1 (SPP1), Stanniocalcin 2 (STC2) and Chondroitin Sulfate Proteoglycan 5 (CSPG5)), which showed better prognostic performance than the tumour/node/metastasis (TNM) staging system. Moreover, we constructed a regulatory network related to transcription factors (TFs) that further unravelled the regulatory mechanisms of these genes. According to the median value of the risk score, the entire TCGA cohort was divided into high-risk and low-risk groups, and the low-risk group had a better overall survival (OS) rate. To predict the OS rate of HCC, we established a gene- and clinical factor-related nomogram. The receiver operating characteristic (ROC) curve, concordance index (C-index) and calibration curve showed that this model had moderate accuracy. The correlation analysis between the risk score and the infiltration of six common types of immune cells showed that the model could reflect the state of the immune microenvironment in HCC tumours. Our IRG prognostic model was shown to have value in the monitoring, treatment, and prognostic assessment of HCC patients and could be used as a survival prediction tool in the near future. Ranking sixth in worldwide incidence, primary liver cancer (PLC) is the fourth-leading cause of cancer-related mortality [1]. Hepatocellular carcinoma (HCC), the most common pathological type of PLC, accounts for approximately 90% of reported cases [2,3,4,5]. Hepatitis B and C viruses are the biggest risk factors for HCC [6]. Application of the hepatitis B virus vaccine has caused the incidence of HCC to decline [7]. 
Leaving aside patients who are diagnosed at an early stage or are eligible for potentially curative therapies, treatment for advanced HCC is limited due to its heterogeneity, and the overall prognosis of HCC patients is still unsatisfactory [8, 9]. Cancer immunotherapy has contributed to personalized medicine, with substantial clinical benefit against advanced disease [10,11,12,13,14,15]. Current immune checkpoint inhibitors show surprising potential effectiveness against HCC [16, 17]. Indeed, the liver is a central immunological organ with a high density of myeloid and lymphoid immune cells [17, 18]. Immune cells are widespread in the tumour microenvironment (TME) [19, 20], wherein the interaction between tumour cells and immune cells is extremely important for maintaining the dynamic balance of normal tissues and tumour growth; this process is closely related to the occurrence, progression, and prognosis of cancer [21]. Meanwhile, inflammatory reactions play a decisive role at different stages of tumour development; they also affect immune monitoring and response to treatment and promote the occurrence and development of tumours to varying degrees [22]. HCC often arises in the setting of chronic liver inflammation [5, 23] and might therefore be responsive to novel immunotherapies; people infected with hepatitis B or C viruses are at particularly high risk of HCC [24]. While several studies have supported the importance of immunology in HCC, the exact molecular mechanisms still remain unknown, particularly for combinations of immune cells forming a TME [25] and for immunogenomic effects [26]. With the advent of multi-dimensional, large-scale high-throughput analyses, cancer researchers have been able to identify culpable biomarkers for tumour prognosis and prediction [27,28,29,30]. Long et al. explored the prognostic value of immune-related genes (IRGs) linked to TP53 status in order to improve the prognoses of HCC patients [31]. Moeinia et al.
analysed the expression profiles of 392 early-stage non-tumour liver tissues from HCC patients and liver tissues from HCC-free cirrhosis patients, identified possible regulatory changes in the expression of IRGs in HCC, and further verified the accuracy of this conclusion through experiments. This gene expression pattern is related to the risk of PLC in cirrhosis patients [32]. Liang S et al. proposed that after liver injury, the molecular pattern related to the release of hepatocytes would activate liver tumour-associated macrophages (TAMs), thus producing cytokines to promote tumour development [33]. However, the clinical relevance and prognostic significance of IRGs in HCC have yet to be comprehensively explored. Our study aimed to better appreciate the potential clinical utility of IRG-based prognostic stratification and to develop a new IRG-based immune prognostic model (IPM). We systematically investigated the expression status and prognostic landscape of IRGs using The Cancer Genome Atlas (TCGA, https://cancergenome.nih.gov/) database, constructed a genomic–clinicopathological model for these patients, and validated it in the Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/). Moreover, underlying regulatory mechanisms were explored by bioinformatics analysis. The results of this study could help provide a more complete understanding and more precise immunotherapy for HCC. HCC datasets and preprocessing As the TCGA and GEO databases are both landmark cancer genome resources that are publicly available to any researcher, our research did not require the approval of an ethics committee. After downloading the transcriptome messenger RNA (mRNA) expression profiles and the clinical information of HCC patients from the TCGA and GEO websites, we ultimately obtained a dataset of 374 HCC and 50 para-tumour samples [34] as a training dataset, and 225 HCC tissues and 220 adjacent non-tumour samples (GPL3921) from the GSE14520 dataset as a test dataset [35].
Also, we obtained a list of IRGs from the Immunology Database and Analysis Portal (ImmPort, https://www.immport.org/shared/home). This is one of the largest open-source repositories of subject-level human immunological data, providing data on clinical and mechanistic studies of human subjects and immunological studies of model organisms [36]. The integrated analysis of these databases, which reveals new insights from the combination of mass spectrometry staining and tumour molecular profiling, could become a useful resource on the regulation of tumour-related genes. Identification, normalization, and elucidation of differentially expressed genes (DEGs) and immune-related genes (IRGs) We used the limma package in R software (version 3.5.3; R Foundation for Statistical Computing) to identify differentially expressed genes between HCC and para-tumour tissue [37]. An absolute value of log fold change (FC) ≥ 2 with adjusted P < 0.05 was used as the cutoff. We screened DEGs between the two groups and depicted the results in a heatmap and volcano plot. Then, we used the ComBat function in the sva package in R to remove batch effects from the gene expression data between the training and test groups [38]. By intersecting the DEGs and IRGs, we obtained the IRGs involved in HCC pathogenesis, keeping only those IRGs also present in the GSE14520 dataset. To explore the potential functions and possible pathways of these IRGs, we further analysed the differentially expressed IRGs via gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis, enabled by the clusterProfiler package in R [39]. Screening of prognosis-specific IRGs We combined and analysed the patients' clinical information and the expression of the IRGs, using OS as the outcome index.
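The DEG filter described above (|log FC| ≥ 2 with adjusted P < 0.05) can be sketched in Python. The paper itself uses the limma package in R; this illustrative version assumes Benjamini-Hochberg adjustment (limma's default `adjust.method = "BH"`), and the gene values below are made up for demonstration:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (analogous to limma's adjusted P)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.clip(ranked, 0.0, 1.0)
    return adj

def select_degs(log_fc, pvals, fc_cut=2.0, p_cut=0.05):
    """Indices of genes passing |log FC| >= fc_cut and adjusted P < p_cut."""
    adj = bh_adjust(pvals)
    mask = (np.abs(np.asarray(log_fc)) >= fc_cut) & (adj < p_cut)
    return np.where(mask)[0]

# hypothetical per-gene statistics for five genes
log_fc = np.array([3.1, -2.5, 0.4, 2.2, -0.1])
pvals  = np.array([1e-6, 1e-4, 0.2, 1e-3, 0.9])
print(select_degs(log_fc, pvals))  # genes 0, 1 and 3 pass both cutoffs
```

Note that limma additionally moderates the per-gene variance estimates before computing p-values; this sketch only reproduces the final thresholding step.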
Samples with an OS time of less than 30 days or incomplete clinical information were omitted, and we finally retained 343 samples in the TCGA dataset and 221 samples in the GSE14520 dataset to construct the model. Detailed epidemiological information of the two cohorts is displayed in Table 1. The significance level of the univariate Cox regression analysis was set to P < 0.05, and the results are displayed as a forest plot. Table 1 Clinical information in training and validation groups Transcription factor (TF) regulatory network TF proteins are critical regulators of gene switches [40]. The Cistrome Cancer database (http://cistrome.org/CistromeCancer/CancerTarget/) combines the cancer genomics data in TCGA with the chromatin analysis data in the Cistrome Data Browser, enabling cancer researchers to explore how TFs regulate gene expression [41]. To explore the regulatory mechanisms of prognosis-related IRGs, we built a regulatory network covering differentially expressed TFs and IRGs using Cytoscape software version 3.7.1 (Cytoscape Consortium; https://cytoscape.org/) [42]. We also conducted protein–protein interaction (PPI) analysis using the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING; STRING Consortium; https://string-db.org/) to evaluate interactions among the TFs. Using the cytoHubba package in Cytoscape, we performed topological analysis of these key TFs and ranked the top 10 by the "degree" criterion [43]. Construction and validation of the IPM The glmnet package was utilized to build a multivariate least absolute shrinkage and selection operator (Lasso) Cox proportional hazards regression model, and the cv.glmnet function was used to run 1000 random iterations. We obtained the best modelling parameters through 10-fold cross-validation with the default "deviance" measure, hence constructing an IPM from the IRGs [44].
The calculation formula was as follows: $$ risk\ score=\sum_{n=1}^{N}\left(\beta_n\times \epsilon_n\right) $$ where N is the number of signature genes, β represents the weight of each gene, and ε is the standardized expression value of each gene. According to the median value of the risk score, the entire TCGA dataset was divided into two groups. We also divided the GSE14520 dataset into high- and low-risk groups according to the median in the training set. We applied Kaplan-Meier (K-M) survival curves to test for differences between these two groups. At the same time, we displayed the risk scores, survival status, and gene expression levels of patients in the high-risk and low-risk groups. Construction and validation of the prognosis-related nomogram We built 1-, 3-, and 5-year nomograms of the key genes in the IPM using the rms package in R. To evaluate the sensitivity and specificity of our IPM, we drew time-dependent receiver operating characteristic (ROC) curves and calibration curves, and calculated a concordance index (C-index) using the survivalROC package in R [45]. A C-index between 0.5 and 0.7 indicates that the prognostic performance of the model is statistically acceptable, while a C-index > 0.7 indicates that the predictive power of the model has a high degree of discrimination [46]. Correlations between risk score and clinical features Similarly, we analysed the significance of the risk score's correlation with clinical factors in univariate and multivariate analyses, and constructed a nomogram to evaluate its practical application value.
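The C-index used throughout the evaluation has a simple pairwise definition. The paper computes it with the survivalROC R package; below is a minimal pure-Python sketch of Harrell's concordance index on hypothetical survival data, showing why 0.5 corresponds to random ranking and 1.0 to perfect ranking:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among usable pairs (one patient has an observed
    event strictly before the other's time), the fraction in which the
    earlier-dying patient also has the higher predicted risk.
    Ties in risk score count as 0.5."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is usable when i has an observed event before j's time
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

times  = [5, 10, 12, 20]       # hypothetical survival times (months)
events = [1, 1, 0, 1]          # 1 = death observed, 0 = censored
risks  = [2.0, 1.5, 0.7, 0.3]  # model risk scores
print(concordance_index(times, events, risks))  # 1.0 - perfectly ranked
```

A censored patient (here the one with time 12) only contributes as the later member of a pair, since we cannot know when their event would have occurred.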
The clinical factors in the training set included age, gender, TNM staging and grade; the clinical information in the testing set included gender, age, alanine transaminase (ALT) (>/≤50 U/L), main tumour size (>/≤5 cm), multinodularity, cirrhosis, tumour node metastasis (TNM) staging, Barcelona Clinic Liver Cancer (BCLC) staging, Cancer of the Liver Italian Program (CLIP) staging and alpha fetoprotein (AFP) (>/≤300 ng/ml). In addition, the time-dependent ROC curve and C-index value were used to assess its prognostic performance. We further analysed the correlation of various clinical factors with gene expression levels and risk scores in the IPM. GSEA v4.0.1 software was used to identify biological processes that differ between the low-risk and high-risk groups defined by the seven IRGs in HCC. We carried out gene set enrichment analysis (GSEA, https://www.gsea-msigdb.org/gsea/index.jsp) to explore the enriched items of the two groups [47], with "c2.all.v7.4.symbols.gmt" chosen as the reference gene set. P < 0.05 and false discovery rate < 0.25 were used as the screening criteria. Relationship between risk score and immune cell infiltration The Tumour Immune Estimation Resource online database (TIMER, http://cistrome.org/TIMER/) can estimate the infiltration abundance of six common types of immune cells (B cells, Cluster of Differentiation 4-positive (CD4+) T cells, Cluster of Differentiation 8-positive (CD8+) T cells, neutrophils, TAMs, and dendritic cells (DCs)) and provides a comprehensive resource on immune infiltration across cancer types [17]. Hence, we performed Pearson correlation analysis between the risk score and the content of the six types of immune cells. Verification of immune-related signatures We analysed genetic alterations in the seven IRGs associated with prognosis.
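The Pearson correlation between risk score and immune cell abundance mentioned above is a straightforward computation; a small sketch with made-up per-patient values (the actual infiltration fractions come from TIMER, and the patient data here are hypothetical):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical per-patient values: IPM risk score vs. macrophage fraction
risk       = [0.4, 0.9, 1.3, 2.1, 2.8]
macrophage = [0.05, 0.08, 0.11, 0.16, 0.21]
r = pearson_r(risk, macrophage)
print(r)  # close to +1: higher risk score tracks higher infiltration
```

In the study, a coefficient above roughly 0.2 with P < 0.05 across hundreds of samples was taken as evidence of a positive association between risk score and infiltration.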
The data were obtained from the cBio Cancer Genomics Portal (cBioPortal, http://www.cbioportal.org/), which is of great utility in exploring multidimensional genomic information [48]. The Human Protein Atlas project (HPA, https://www.proteinatlas.org/) was used to evaluate protein-level differences for each IRG [49]. To determine the effect of high versus low expression of these genes on HCC survival, we input them into the Kaplan-Meier Plotter (K-M, https://kmplot.com/analysis/), a website providing gene chip and RNA sequencing data from GEO and TCGA for several cancers [48, 50]. P < 0.05 was considered statistically significant. We calculated OS, disease-free survival (DFS), progression-free survival (PFS), and relapse-free survival (RFS) rates for HCC. Most of the statistical analyses were performed using R software and online databases. PPI network analysis was completed, and the diagram of regulatory relationships between TFs and IRGs was created, using Cytoscape. Pearson correlation analysis was used to assess the correlation between the risk score and clinical factors and the degree of immune cell infiltration. In addition, we used cBioPortal and the K-M Plotter to analyse the genetic changes and survival differences of the genes, respectively. Differentially expressed OS-related DEGs in HCC The flowchart in Fig. 1 illustrates our analytic process. According to our screening criteria (|log FC| > 2, adjusted P < 0.05), the limma package identified 2068 DEGs between HCC and normal liver tissue. These DEGs included 1991 upregulated and 77 downregulated genes (Fig. 2a, d). From this group of genes, we extracted 116 differentially expressed IRGs, including 96 upregulated and 20 downregulated genes (Fig. 2b, e). Finally, we obtained 100 IRGs present in both the TCGA and GSE14520 datasets for model construction. Flowchart presenting the process of establishing the seven-gene signature and prognostic nomogram for HCC.
Abbreviations: HCC: hepatocellular carcinoma; TCGA-LIHC: The Cancer Genome Atlas, Liver Hepatocellular Carcinoma; GEO: Gene Expression Omnibus; IMMPORT: Immunology Database and Analysis Portal; DEG: differentially expressed gene; TF: transcription factor; ROC: receiver operating characteristic; IRG: immune-related gene; LASSO: least absolute shrinkage and selection operator; GSEA: gene set enrichment analysis The filter results of differentially expressed immune-related genes (IRGs) and transcription factors (TFs) between 374 hepatocellular carcinoma (HCC) and 50 para-tumour samples. (a) Heatmap and (c) volcano plot of differentially expressed IRGs; (b) heatmap and (d) volcano plot of differentially expressed TFs. Green and red dots respectively represent low and high expression of IRGs and TFs in HCC, and black dots represent genes that are not differentially expressed The IRG enrichment results were concentrated in inflammation-related terms, including "positive regulation of secretion by cell," "positive regulation of secretion," "antimicrobial humoral response" and "defense response to bacterium" among biological processes. These genes also participated in the "secretory granule lumen," "cytoplasmic vesicle lumen," and "vesicle lumen" cell components, and played a main role in the regulation of various receptor ligands, cytokines, cytokine receptors, hormones, and chemokines among molecular functions. These IRGs could also be involved in signalling pathways such as "Cytokine-cytokine receptor interaction," "Axon guidance," "TGF-beta signaling pathway," "Viral protein interaction with cytokine and cytokine receptor," and "Hippo signaling pathway" (Fig. 3). The enriched items above are all related to immunity or tumours, indicating that these 100 IRGs may play a role in HCC by regulating immunological processes. Functional-enrichment analysis of the 100 common differentially expressed genes.
(a) Biological process analysis; (b) cell component analysis; (c) molecular function analysis; (d) pathway analysis of the top 30 most important entries in the Kyoto Encyclopedia of Genes and Genomes. Significance gradually increases from blue to red; the thickness of a line represents the degree of correlation between two points, and the size of a circle represents the number of genes enriched in the entry Establishment and validation of a seven-gene prognostic signature based on the prognosis of HCC We included a total of 343 patient cases (OS > 30 days) from TCGA in the survival analysis; OS was selected as the primary endpoint for this study. Applying the univariate Cox regression model (P < 0.05), we used the 100 IRGs to identify the DEGs associated with OS in HCC. We identified 30 OS-related DEGs, which were considered to be significant genes associated with HCC (Fig. 4). Forest plot of significant genes in univariate Cox regression analysis (the red squares on the right side of the forest plot indicate risk factors, and the green squares indicate protective factors) Using Lasso Cox multivariate analysis, we then developed an IPM based on seven genes: FABP6, MAPT, BIRC5, PLXNA1, CSPG5, SPP1 and STC2. The hazard ratios of all these DEGs were > 1, meaning that all were considered oncogenes.
According to the following formula: $$ risk\ score = (0.103\times FABP6) + (0.0214\times MAPT) + (0.161\times BIRC5) + (0.0421\times PLXNA1) + (0.244\times CSPG5) + (0.0497\times SPP1) + (0.174\times STC2) $$ (where each gene symbol denotes its standardized expression value), we calculated the risk score of each sample and then divided all of the patients in TCGA into high- and low-risk groups according to the median risk value. The K-M survival curve showed a significantly worse prognosis in the high-risk group (P = 8.135 × 10−7; Fig. 5a). The heatmap shows that as the risk score increases, the expression of the IRGs gradually increases, indicating that high expression of these genes is a risk factor for HCC prognosis (Fig. 5b). In addition, the survival risk of these patients gradually increased as risk scores increased, and the number of survivors decreased significantly (Fig. 5c, d). Finally, we obtained the same results in the GSE14520 dataset, indicating that our model has a high degree of credibility (Fig. 5e-h). Construction of seven immune-related prognostic signatures for HCC.
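The risk-score formula and the median split can be sketched directly. The coefficients below are the paper's published Lasso Cox weights; the per-patient standardized expression values are hypothetical, and in practice the computation is done in R on the full cohort:

```python
import statistics

# Lasso Cox coefficients from the seven-gene formula above
coefs = {"FABP6": 0.103, "MAPT": 0.0214, "BIRC5": 0.161, "PLXNA1": 0.0421,
         "CSPG5": 0.244, "SPP1": 0.0497, "STC2": 0.174}

def risk_score(expr):
    """Weighted sum of standardized expression over the seven signature genes."""
    return sum(coefs[g] * expr[g] for g in coefs)

# two hypothetical patients with standardized expression values
patients = [
    {"FABP6": 1.2, "MAPT": 0.5, "BIRC5": 2.0, "PLXNA1": 0.8,
     "CSPG5": 1.5, "SPP1": 0.9, "STC2": 1.1},
    {"FABP6": -0.4, "MAPT": -0.2, "BIRC5": 0.1, "PLXNA1": -0.5,
     "CSPG5": -0.3, "SPP1": 0.2, "STC2": -0.6},
]
scores = [risk_score(p) for p in patients]
cutoff = statistics.median(scores)          # median split, as in the study
groups = ["high" if s > cutoff else "low" for s in scores]
print(scores, groups)
```

Because all seven coefficients are positive (hazard ratios > 1), any patient with uniformly higher standardized expression necessarily receives a higher risk score.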
(a) Kaplan-Meier curve for low- and high-risk populations in the training group; (b) the distribution of risk scores among patients in the training group; (c) survival status of patients with HCC in the training group; (d) heatmap of the expression levels of the seven immune-related genes (IRGs) in the training group; (e) Kaplan-Meier curve for low- and high-risk populations in the testing group; (f) the distribution of risk scores among patients in the testing group; (g) survival status of patients with HCC in the testing group; (h) heatmap of the expression levels of the seven IRGs in the testing group Meanwhile, we constructed a nomogram of the seven IRGs and evaluated the prognostic value of the seven-gene model based on the time-dependent ROC curve and the C-index value. The 1-, 3-, and 5-year risk prediction areas under the ROC curve (AUCs) for OS were 0.780, 0.699, and 0.685, respectively, and the C-indexes were 0.72 (95% confidence interval [CI]: 0.68, 0.77) and 0.62 (95% CI: 0.57, 0.68) in the training and testing groups, respectively. These results indicated that the seven-gene signature performed well in predicting the OS of HCC, and we obtained the same results in the testing set (Fig. 6). The establishment and verification of gene-related nomograms in the training (a, c) and verification (b, d) groups for predicting 1-year, 3-year, and 5-year survival rates of HCC patients TF regulatory network To explore the clinical significance of pivotal IRGs and the corresponding underlying molecular mechanisms, we examined the expression profiles of 318 TFs and found that 31 were differentially expressed between HCC and non-tumour samples (|log FC| > 2, P < 0.05) and related to OS in HCC patients. We then established a regulatory network based on these 31 TFs and the 9 IRGs that had proven significant in univariate analysis. Correlation coefficients > 0.4 and P-values ≤ 0.001 were set as the screening criteria. The TF-based regulatory diagram in Fig.
7 clearly illustrates the regulatory relationships among these IRGs (Fig. 7). Protein–protein interaction network based on prognosis-related transcription factors (TFs) (a) and the main regulatory network between TFs and prognostic immune-related genes (IRGs) (b). In (a), center, the top 10 genes are sorted by the degree criterion; the darker the color, the higher the ranking. In (b), the blue triangles represent TFs, the red circles represent differentially expressed prognostic IRGs, and the red connecting lines represent positive regulation Evaluation of prognostic factors associated with OS in HCC We included 219 patients with complete clinical information in the TCGA-LIHC dataset. As important clinical indicators, gender, age, grade, and TNM staging were included in our study to identify prognostic factors. We used univariate and multivariate Cox regression analyses to determine prognostic factors associated with OS in HCC. Univariate analysis showed that risk score, TNM staging, T stage, and M stage were significantly correlated with OS (P < 0.05). Based on the univariate-analysis results with P < 0.669, we further included these parameters in the multivariate Cox regression analysis. Multivariate analysis showed that risk score (P < 0.001) was an independent risk factor (Fig. 8a, b), further demonstrating that our IPM's prognostic value is not disturbed by other clinical factors and that it is an independent prognostic factor for OS in HCC patients. (a, c) Univariate and (b, d) multivariate Cox regression analysis of the correlation between risk score and clinical factors in the training (a, b) and testing (c, d) groups For the 242 HCC patients in the GSE14520 dataset who met the criteria, the clinical information (age, ALT (>/≤50 U/L), main tumour size (>/≤5 cm), multinodularity, cirrhosis, TNM staging, BCLC staging, CLIP staging and AFP (>/≤300 ng/ml)) was included in the analysis.
Univariate analysis showed that risk score, main tumour size, cirrhosis, TNM staging, BCLC staging, CLIP staging and AFP were related to OS, while multinodularity, cirrhosis, BCLC staging, CLIP staging and risk score were independent prognostic risk factors in multivariate analysis (Fig. 8c, d). Construction and validation of a prognostic nomogram We used a stepwise Cox regression model to establish a prognostic nomogram, based on the 219 eligible HCC patients with complete clinical information in the TCGA-LIHC dataset, for predicting survival at 1, 3 and 5 years. Risk score, age, sex, TNM stage, T stage, N stage, and M stage were all nomogram parameters. The AUCs of OS at 1, 3 and 5 years were 0.791, 0.760 and 0.793, respectively. The C-index values were 0.78 (95% CI: 0.72, 0.84) and 0.73 (95% CI: 0.68, 0.78) in the training and testing groups, respectively. The results for the clinical factors showed that the AUC values of T stage, TNM stage, and risk score were the highest at 0.757, 0.750, and 0.791, respectively, which suggested that the IPM had moderate prognostic performance (Fig. 9). The calibration curve further showed that the nomogram performed well in predicting the OS of HCC patients in the training group. However, the difference between the predicted and actual survival rates in the calibration curve of the testing group was large, suggesting that the performance of the prognostic model may need to be further verified (Fig. 10). The establishment and verification of clinical-related nomograms in the training (a-c) and verification (d-f) groups for predicting 1-year, 3-year, and 5-year survival rates of HCC patients Calibration curve of nomogram in the training set and testing set. The X-axis is the predicted survival rate, and the Y-axis is the actual survival rate.
(a, b, c) 1-year, 3-year, and 5-year calibration curves in the training set; (d, e, f) 1-year, 3-year and 5-year calibration curves in the testing set We performed GSEA in the training group to identify differences between the high-risk and low-risk groups. The high-risk group had 41 significantly enriched pathways, and the low-risk group had 12. The enriched pathways in the high-risk group are mostly related to tumours (bladder cancer, small cell lung cancer, non-small cell lung cancer, pancreatic cancer and colorectal cancer), tumour-related pathways ("p53 signaling pathway", "nucleotide-binding oligomerization domain (NOD)-like receptor signaling pathway", "Notch signaling pathway", "VEGF signaling pathway" and "Pathways in cancer"), and metabolic or metabolic disease-related pathways (pyrimidine metabolism, purine metabolism and N-glycan biosynthesis); the enriched pathways in the low-risk group are mostly related to metabolism (fatty acid metabolism; valine, leucine and isoleucine degradation; drug metabolism cytochrome P450; and tryptophan metabolism), complement and coagulation cascades, and the PPAR signalling pathway (Fig. 11). Representative pathways of significant enrichment in the model by gene set enrichment analysis (high-risk group above, low-risk group below) The correlation between clinical factors and the gene signature in the IPM We analysed the relationship between genetic characteristics and clinical parameters (Table 2) in the training group. Compared with patients with stage I/II, G1/G2, and T1/T2 HCC, patients with stage III/IV, G3/G4, and T3/T4 disease had higher levels of BIRC5 expression and higher risk scores. Female patients had higher expression of PLXNA1 than male patients, and PLXNA1 expression was also higher in G3/G4 and stage III/IV disease. SPP1 expression was significantly higher in stage III/IV patients than in stage I/II patients.
Patients in stage N0 had higher expression of FABP6 and MAPT than patients in stage N1, probably due to the large difference in the number of samples between the two groups. In terms of survival time, patients with T1/T2 and stage I/II HCC survived significantly longer than those with T3/T4 and stage III/IV disease. Table 2 The relationship between clinical factors and risk scores or the expression of seven prognostic related immune genes in hepatocellular carcinoma Association between the degree of immune infiltration and risk score To further study whether the risk score in this IPM could affect the abundance of immune cells in the TME, we performed a correlation analysis between six types of common immune cells and the risk score. As shown in Fig. 12, the correlation coefficients between the risk score and neutrophils, TAMs and dendritic cells are all above 0.2, while those for B cells, CD4+ T cells and CD8+ T cells are slightly below 0.2. All immune cell types were positively correlated with the risk score to a statistically significant degree (Fig. 12; P < 0.05), implying that the higher the degree of immune infiltration, the worse the prognosis of the patient. Pearson correlation analysis between risk score and infiltration abundances of six types of immune cells. (a) B cells, (b) Cluster of Differentiation 4-positive (CD4+) T cells, (c) CD8+ T cells, (d) neutrophils, (e) macrophages, and (f) dendritic cells Verification of hub genes using online websites Based on the above analysis results, the model has high clinical application value, so we studied the molecular characteristics of the IRGs in the IPM. We analysed the genetic-variation results for the genes FABP6, MAPT, BIRC5, PLXNA1, CSPG5, SPP1 and STC2. Of the 349 patients included in the cBioPortal, 120 (34.38%) showed genetic changes in these seven genes.
High mRNA expression (21.49%) was the most common genetic variation, with amplification (6.5%) and missense mutation (1.69%) the next most common (Fig. 13a). We further analysed the differences in the expression of these genes at the protein level. As shown in Fig. 13b, except for FABP6, the genes were all highly expressed in HCC tissues. Genetic alteration landscape (a) and expression at the translational level (b) of the seven prognostic immune-related genes in hepatocellular carcinoma To confirm that multi-gene signatures have better prognostic performance than single-gene signatures, we performed K-M survival analyses of the seven IRGs in this IPM (Fig. 14). Almost all of the P-values were < 0.05, and the C-indexes of the single-gene ROC curves were lower than that of the IPM, further proving the importance of these seven IRGs. Since these seven IRGs were found to be oncogenes, their high expression is often associated with poor prognosis, which is basically consistent with our earlier conclusions. However, patients with high expression of FABP6 had longer PFS and RFS. Our conclusions regarding FABP6 were therefore unclear, and further research might be needed to verify them. In short, the abnormal somatic mutations, expression and survival differences of these seven IRGs in HCC may help explain their important application value. Overall survival, progression-free survival, disease-free survival, and relapse-free survival Kaplan-Meier curves of the seven prognosis-related IRGs (from top to bottom: BIRC5, CSPG5, FABP6, MAPT, PLXNA1, SPP1, STC2). Black curves represent low expression; red curves represent high expression Components of the TME, the environment in which tumour cells grow, include inflammatory cells, fibroblasts, myofibroblasts, neuroendocrine cells, adipocytes, and extracellular matrix [51]. The TME is inseparable from the growth, invasion, metastasis, and prognosis of tumour cells [45].
Unlike cancer cell genes and epigenetic mechanisms, the matrix population in the TME is genetically relatively stable and therefore has potential therapeutic value. A growing number of studies have begun to focus on the regulatory mechanisms of the TME in HCC [45]. Although major results have been achieved, identifying suitable immunotherapeutic targets among the complex TME components requires the joint efforts of multiple research teams. The application of molecular prognostic models and the identification of target genes can be described as molecular therapeutics that may provide effective approaches in the future [3]. Fortunately, the open-source TCGA and GEO databases have accumulated an abundance of genomic information, providing a simpler and more reliable way to predict prognosis in cancer. Our study was intended to identify IRGs from the TCGA and ImmPort databases that are significant for HCC prognosis, and we further verified the results in the GSE14520 dataset. The identification of these IRGs and the construction of the IPM might provide new ideas for immunotherapy in HCC. Finding and developing novel targets with potential value for immunotherapy is very important; after experimental verification and clinical trials, such targets could strengthen current immunotherapeutic regimens. In the current study, we obtained 2068 HCC-related DEGs from TCGA and finally retained 100 IRGs that also existed in the GEO database for model construction. After identifying the common DEGs in the training and testing sets, we used only the IRGs to construct a Cox model for predicting OS. Incorporating clinical factors and genome information into Lasso regression analysis is indeed a better modelling method, and it is also the current trend in machine learning. However, doing so makes it difficult to externally verify the model, because the clinical information contained in each dataset differs, and selecting a limited set of clinical factors has certain limitations.
In addition, each type of tumour has different susceptibility factors and important indicators. Taking lung cancer as an example, attention needs to be paid to the influence of radon, asbestos, second-hand smoke and other factors, while in HCC, attention needs to be paid to hepatitis history, liver cirrhosis and AFP. Therefore, these models may not be generalizable to other types of cancer. Kourou et al. [52] summarized and analysed a number of studies that incorporated clinical factors and genomic information into models, but those studies lacked external verification or testing of the predictive performance of the models. For this reason, other clinical information was omitted here, since we sought to create a simple but powerful model that could be externally verified on various datasets. We then evaluated the model using the AUC curve, the C-index, and the calibration plot, and made comparisons with prognostic models constructed from the risk score and other clinical factors to demonstrate the advantages of the proposed modelling scheme. With the development of high-throughput sequencing technology and the generalization of clinical genetic testing, using genomic information to construct a predictive model for simple prognostic analysis will bring certain conveniences to the clinic. Perhaps with the continuous improvement of various public databases, the joint modelling and analysis of clinical information and multiomics data will also become a trend. Further gene function analysis showed that the IRGs were mainly enriched in the positive regulation of secretion by cells, the secretory granule lumen, receptor ligand activity, and cytokine-cytokine receptor interaction (Fig. 3).
Specifically, these IRGs are mainly involved in various immune regulation processes (such as positive regulation of secretion by cells, positive regulation of secretion, antimicrobial humoral response, defence response to bacteria and humoral immune response) and take part in the composition of the secretory granule lumen, cytoplasmic vesicle lumen, and vesicle lumen; they also regulate various receptors, ligands, growth factors, cytokines, and chemokines. In addition, they were mainly enriched in cytokine-cytokine receptor interactions, axon guidance, the TGF-beta signalling pathway, viral protein interactions with cytokines and cytokine receptors, and the Hippo signalling pathway. Most of the above items are related to immunity and inflammation, and the rest are classic signalling pathways in tumours. Jian Chen et al. reported that dysregulation of the TGF-beta signalling pathway plays a key role in immune regulation, inflammation and fibrogenesis in HCC [53]. Disorders of the Hippo signalling pathway are present in various tumours, including liver cancer [54], breast cancer [55] and lung cancer [56]. At present, immune checkpoint inhibitors can greatly improve the prognosis of HCC. However, the specific mechanisms by which the immune system affects HCC are unclear, and further experiments are needed to confirm our conclusions. In our research, we reached conclusions nearly consistent with other researchers' prognostic models: there were significant differences in OS between the high-risk and low-risk groups, the prognosis of the high-risk group was worse (Fig. 5a, P = 8.135 × 10−7), and the same conclusion was reached in the testing set (Fig. 5e, P = 1.2535 × 10−3). The patient's risk score for HCC progressively increases as the expression levels of the genes in the risk signature increase, and the prognosis of HCC worsens as the risk score increases.
More importantly, we constructed a nomogram based on these seven IRGs to quantitatively analyse the prognosis of HCC patients. The AUCs for 1-, 3-, and 5-year OS were 0.780, 0.699 and 0.685, respectively, and the C-index was 0.72 (95% CI: 0.68–0.77). HCC is a highly heterogeneous disease, and its prognosis is affected by many factors. We included and analysed only genes related to immunity and ignored the influence of other factors on HCC. Hence, our model does not show high prognostic performance in predicting the long-term survival rate of patients, which is one of its inherent limitations. We further analysed the risk score and clinically related factors in univariate and multivariate analyses and found that the risk score was associated with higher grade (P = 0.003) and stage III/IV disease (P = 0.004), which indicates that our prognostic model is more significant for advanced HCC patients. We believe that genetic detection should not be considered independently of individual characteristics. Therefore, we also constructed a nomogram combining the risk score and clinical factors, which can easily predict the 1-year, 3-year and 5-year OS of patients. It should be noted that the AUC values were all higher than 0.7. Compared with other clinical factors, the AUC value of the nomogram corresponding to the risk score was the highest (AUC = 0.791), and the C-index was 0.78 (95% CI: 0.72–0.84). In addition, when we analysed the risk score combined with clinical factors, the C-index of the test dataset was 0.73 (95% CI: 0.67–0.78), indicating that our IPM has modest prognostic performance in the test dataset. In the GSE14520 dataset, a series of test results were basically consistent with those in the TCGA dataset. Although the AUC values were all above 0.5 (Fig. 6), the same effect as in the training set was not achieved, which may be because the samples in the GSE14520 dataset were from China.
Generally, the model constructed in this study has certain advantages in the quantitative prediction of patient prognosis and the adjustment of treatment plans. Our results showed that the risk score was the only meaningful indicator across multiple analyses, which indicates that the risk score may have better predictive ability for the OS of HCC. However, the standard error of an estimate does not tell us about the estimate's contribution to a prediction model. Nonsignificant coefficients can still have very high predictive power, and vice versa. In addition, a significant covariate does not imply a reliable estimation of survival; thus, we still need to assess the model. Next, we performed the same analysis in the test dataset and obtained the same conclusion. In addition, we evaluated our model with the AUC curve, C-index and calibration curve, which suggested that our model has good prognostic performance. To further verify the prognostic performance of the risk score, we compared the AUC value of the model constructed with the risk score with those of models constructed with other clinical factors (Fig. 9b, e), and the results indicated that the risk score may be a good predictor of HCC survival. In addition, compared with a single gene, prognostic models based on multiple genes can better analyse the prognosis of patients. To develop a simple and effective method for evaluating the prognosis of HCC patients and to find potential immunotherapy targets, we established a prognostic model based on the seven IRGs. Of course, ours is not the first IPM for HCC. Wen-jie Wang et al. constructed a prognostic model of 16 IRGs and a ceRNA network to predict the prognosis of HCC [57]; Junyu Long et al. developed a TP53-associated HCC immune prognostic model [28]; and Dengchuan Wang et al. reported a four-gene signature prognostic model related to immune infiltration through coexpression analysis [57].
Recently, an increasing number of researchers have begun to recognize the significance of the TME in HCC, and IPMs have also received extensive attention. Compared with other prognostic models, our IPM has the following advantages. (1) We not only established a seven-gene prognostic model of IRGs but also showed that the model is independent of other clinical factors and is positively correlated with the degree of immune infiltration, which can provide valuable prognostic information for optimizing the individual treatment of HCC patients. Additionally, we constructed a gene nomogram and a clinically related nomogram to quantitatively evaluate the 1-, 3-, and 5-year OS of patients. (2) We constructed a TF regulatory network, performed GSEA and analysed the possible mechanisms by which the IRGs in the IPM relate to HCC tumour infiltration, which can contribute to exploring the immunotherapy mechanism of HCC. (3) We performed gene mutation analysis and protein expression analysis on the genes in this IPM and also analysed the survival differences between patients with high and low expression levels of the IRGs. The conclusions obtained further confirmed the potential of the IRGs in the model as prognostic markers of HCC. The genes in this IPM show good prognostic performance and could be potential prognostic markers and therapeutic targets for HCC. BIRC5, commonly known as Survivin, is one of the most potent members of the inhibitor-of-apoptosis protein family [58]. Experimental investigation showed that BIRC5 can promote the expression of VEGF, which in turn promotes angiogenesis in the tumour stroma [59]. PLXNA1 (Plexin-A1) is expressed in DCs, participates in the interaction between T cells and DCs, and may be involved in regulating cytoskeletal rearrangement during this interaction [60].
CSPG5 is expressed only in the human brain, and a study showed that it has a novel function of binding to the ERBB3 tyrosine kinase [61]; ERBB3 somatic mutations are potential tumour drivers [62]. However, few studies have focused on its relevance to HCC immunotherapy. Ying Zhu et al. found that SPP1 can activate the CSF1-CSF1R pathway in tumour-associated macrophages (TAMs) and promote the expression of PD-L1 in HCC, and SPP1 expression is positively correlated with PD-L1 expression and TAM infiltration. In addition, SPP1 can act synergistically with VEGF in tumour angiogenesis by inducing endothelial cells and upregulating VEGF-induced endothelial cell migration [63]. MAPT is mainly expressed in nerve cells and is most commonly studied in neurodegenerative diseases such as Alzheimer's disease [64]. Previous investigations identified that MAPT is overexpressed in certain cancers and participates in the resistance of various tumours to taxane drugs [65]; its specific mechanism of action still needs further study. FABP6 is involved in the bile acid metabolic process and is related to the enterohepatic circulation of bile acids [66]. STC plays a vital role in tumour growth, invasion, apoptosis and metastasis and promotes local angiogenesis through the VEGF/VEGFR2 signalling pathway [67]. Hongwei Cheng et al. showed that high expression of STC2 is related to poor prognosis in HCC [68]. To further confirm the application value of these IRGs in HCC, we analysed the survival rates of groups with high and low expression levels of these seven genes and evaluated whether there were significant differences in patients' OS, DSS, PFS, and RFS rates. The results showed some contradictions in the survival analysis of FABP6.
Also, immunohistochemistry results in the HPA database showed that, except for FABP6, the protein levels of the IRGs differed significantly between HCC tissues and normal liver tissues and were high in HCC, which was consistent with our conclusion. At present, FABP6 has not been reported in the immunotherapy of HCC, and it may be a potential therapeutic target. We then queried the seven IRGs in cBioPortal to obtain information about their genetic mutations. The mutation rates of these seven IRGs were all above 4%, which may be useful for clinical research in the future. Collectively, these findings indicated that the seven IRGs have the potential to predict the prognosis of HCC. To explore the potential molecular mechanisms associated with these IRGs, we constructed a TF-mediated regulatory network to screen out important TFs that might regulate the identified hub IRGs. BIRC5, PLXNA1 and CSPG5 were the core IRGs in this network; all three were positively regulated by 13 core TFs, of which EZH2 could positively regulate the expression of BIRC5, PLXNA1 and CSPG5. EZH2, the hub TF in our PPI analysis (Fig. 6a), is shown in the network diagram (based on the degree ranking criterion). An increasing number of studies show that EZH2 is closely associated with epigenetics [69], immunity [70], metastasis [71], angiogenesis [72] and apoptosis [73]. In addition, our GSEA results showed that these seven IRGs play an important role in tumour regulation in the high-risk and low-risk groups through immune metabolic pathways. Gaia Giannone et al. demonstrated that immune metabolic disorders in the TME play an important role in acquired resistance to immune checkpoint inhibitors [74]. At present, the VEGF signalling pathway is widely targeted in the immunotherapy of HCC [75] and may be involved in cell proliferation, growth and apoptosis as well as the regulation of the PPAR and TP53 signalling pathways [76].
NOD-like receptor X1 can induce HCC cell apoptosis by regulating the PI3K-AKT signalling pathway [77]. Whether the Notch signalling pathway inhibits or promotes a given tumour depends on the TME. The cross-talk between the Notch signalling pathway and the p53 gene plays an important role in HCC and may be a potential target for HCC treatment [78]. Of particular note, based on the above studies, we found that EZH2 and BIRC5 can inhibit HCC cell apoptosis and are closely related to VEGF-mediated angiogenesis. Interestingly, in the TF regulatory network, EZH2 positively regulated BIRC5, with a correlation coefficient of 0.72 (p = 3.76 × 10−57). STC2 and SPP1 are associated with the VEGF signalling pathway; PLXNA1 and SPP1 are associated with DCs or TAMs; and CSPG5 is associated with common somatic mutation sites. The application value of MAPT and FABP6 in HCC needs further experimental confirmation. We therefore speculate that EZH2 may mediate VEGF-pathway angiogenesis by regulating the expression of the seven IRGs, which may be a possible mechanism linking this predictive model to immune infiltration in high-risk patients. In low-risk patients, we found that the mechanism relating these seven IRGs to the immune infiltration of HCC involves metabolism. However, the specific mechanism remains to be further explored. The combination of antiangiogenic drugs and tumour immunotherapy will show great promise in the near future. However, further validation by immunohistochemistry analysis is needed to understand whether the VEGF signalling pathway is linked to the high-risk group. To further assess the immune microenvironment of HCC, we also analysed the correlation between the risk score and the following six types of immune cells: B cells, CD4+ T cells, CD8+ T cells, neutrophils, TAMs, and DCs.
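The correlation analysis between the risk score and immune-cell infiltration abundance can be sketched with a plain Pearson coefficient. The values below are toy numbers for illustration, not TIMER-style infiltration estimates from the study:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: infiltration abundance rising with the risk score
# yields a coefficient close to 1.
risk = [0.2, 0.5, 0.9, 1.4, 2.0]
macrophage_abundance = [0.05, 0.08, 0.11, 0.16, 0.22]
print(round(pearson(risk, macrophage_abundance), 3))
```

A positive coefficient of this kind, computed per cell type, is what underlies the statement that infiltration of all six cell types rises with the risk score; coefficients below about 0.4 (as for CD4+ and CD8+ T cells) indicate only a weak association even when statistically significant.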
The results showed that for these six cell types, the degree of immune infiltration was positively correlated with the risk score, and the correlations between all immune cell types and the risk score were statistically significant (P < 0.05). These results indicated that these cells show a high level of immune infiltration in high-risk patients. TAMs are phagocytes, the body's first line of defence against external threats; they can mount proinflammatory responses to pathogens and repair damaged tissues. However, cytokines and chemokines expressed by TAMs can inhibit antitumour immunity and promote tumour progression [79]. M1 macrophages in HCC can promote tumour formation by promoting the expression of PD-L1, and their degree of infiltration is positively correlated with PD-L1 expression. In addition, Ying Zhu et al. found a positive correlation between the expression of SPP1 and PD-L1 and the infiltration of TAMs in HCC tissues, which plays an important role in the immune microenvironment of HCC [80]. All these results suggest that our high-risk patients may benefit from anti-PD-L1 treatment. Li Li et al. [81] illustrated that the CXCR2-CXCL1 axis can regulate neutrophil infiltration in HCC; this axis is an independent prognostic factor for HCC and may be a potential target for anti-HCC therapy. Overexpression of CXCL5 is associated with neutrophil infiltration and poor prognosis in HCC [82]. Wei Y et al. showed that the depletion of B cells can prevent the production of TAMs and increase the antitumour T cell response to inhibit the growth of HCC [83]. Several studies indicated that high infiltration levels of immunosuppressive TAMs and regulatory T cells are associated with reduced OS and can increase the aggressiveness of HCC [84, 85]. Research by Zhou ZJ et al.
[86] illustrated that a high infiltration level of plasmacytoid DCs is related to poor prognosis in HCC, and plasmacytoid DC infiltration in HCC can promote tumour progression by promoting the immunosuppression of CD4+ type 1 T regulatory (Tr1) cells [87], which is basically consistent with our research conclusions. The occurrence and development of HCC involve interactions among various immune cells, which participate in the regulation of HCC immunotherapy through complex mechanisms. Our IPM may be a predictor of increased infiltration of immune cells in HCC. However, the correlation coefficients between the infiltration abundance of CD4+ T cells and CD8+ T cells and the risk score were less than 0.4. In addition, studies have shown that CD4+ T cells and CD8+ T cells can suppress the occurrence and proliferation of HCC through their antitumour immune response [88]; however, another study indicated that the proportion and absolute number of CD4+ T cells in the area surrounding HCC tumour tissue increase significantly and could promote the progression of HCC [89]. Shinji Itoh et al. showed that the presence of CD8+ T cells is associated with longer OS [90]. Although our conclusions are basically consistent with those of previous studies on the immune infiltration of HCC, the differences we found were weak and may not reach biologically relevant levels, so they should be interpreted cautiously beyond this analysis. Overall, this study can provide direction and guidance regarding the mechanisms of immune cells in HCC, but the specific mechanisms remain to be elucidated by further basic research. We systematically and comprehensively analysed the application value of our IPM for HCC, which could provide new insights into the treatment of this disease. However, the current study still has certain limitations. The IPM constructed from the TCGA database shows its best predictive performance in the training set.
In the testing set, the risk score was not the best indicator for predicting the OS of HCC. Limited by the clinical factors available in the testing group, our model still needs to be further validated in other datasets. Although we increased the reliability of our conclusions by combining multiple datasets, evaluating various aspects of the IPM, and verifying our results using the GSE14520 dataset, our study, like most in this field, lacks in vitro and in vivo validation experiments, and further evidence from a well-designed clinical study is needed. It is also essential to use relevant basic experiments to further explain the mechanism of HCC immunotherapy. On the other hand, we studied only six major immune cell types; the correlations between these immune cells and the risk score were weak, and the associations between other immune cell subtypes and HCC remain unclear. Therefore, we suggest that further individual studies and discussions be conducted in the future. We believe that the era of HCC immunotherapy will soon arrive, and we look forward to it. In summary, we constructed an IPM with seven prognostic IRGs by combining different data types from multiple databases. Individuals with HCC were classified into high-risk and low-risk groups based on their risk scores, with gene expression as the independent variable. In addition, we established gene-related and clinical factor-related nomograms to facilitate more comprehensive prognostic assessment of HCC patients. Finally, the association between the infiltration abundance of common immune cells in the TME and the risk score showed that our IPM can predict the TME to a certain extent. This model will be a reliable tool for predicting prognosis in HCC by combining genomic characteristics, immune infiltration abundance, and clinical factors. The datasets for this study can be found in the TCGA [https://portal.gdc.cancer.gov/] and GEO [https://www.ncbi.nlm.nih.gov/geo/] databases.
Villanueva A. Hepatocellular Carcinoma. N Engl J Med. 2019;380(15):1450–62. https://doi.org/10.1056/NEJMra1713263. El-Serag HB, Rudolph KL. Hepatocellular carcinoma: epidemiology and molecular carcinogenesis. Gastroenterology. 2007;132(7):2557–76. https://doi.org/10.1053/j.gastro.2007.04.061. Forner A, Reig M, Bruix J. Hepatocellular carcinoma. Lancet. 2018;391(10127):1301–14. https://doi.org/10.1016/S0140-6736(18)30010-2. Khemlina G, Ikeda S, Kurzrock R. The biology of hepatocellular carcinoma: implications for genomic and immune therapies. Mol Cancer. 2017;16(1):149. https://doi.org/10.1186/s12943-017-0712-x. Llovet JM, Zucman-Rossi J, Pikarsky E, Sangro B, Schwartz M, Sherman M, et al. Hepatocellular carcinoma. Nat Rev Dis Primers. 2016;2(1):16018. https://doi.org/10.1038/nrdp.2016.18. Shlomai A, de Jong YP, Rice CM. Virus associated malignancies: the role of viral hepatitis in hepatocellular carcinoma. Semin Cancer Biol. 2014;26:78–88. https://doi.org/10.1016/j.semcancer.2014.01.004. Park NH, Chung YH, Lee HS. Impacts of vaccination on hepatitis B viral infections in Korea over a 25-year period. Intervirology. 2010;53(1):20–8. https://doi.org/10.1159/000252780. Kanwal F, Singal AG. Surveillance for hepatocellular carcinoma: current best practice and future direction. Gastroenterology. 2019;157(1):54–64. https://doi.org/10.1053/j.gastro.2019.02.049. Yarchoan M, Agarwal P, Villanueva A, Rao S, Dawson LA, Karasic T, et al. Recent developments and therapeutic strategies against hepatocellular carcinoma. Cancer Res. 2019;79(17):4326–30. https://doi.org/10.1158/0008-5472.CAN-19-0803. Bellmunt J, de Wit R, Vaughn DJ, Fradet Y, Lee JL, Fong L, et al. Pembrolizumab as second-line therapy for advanced Urothelial carcinoma. N Engl J Med. 2017;376(11):1015–26. https://doi.org/10.1056/NEJMoa1613683. Ferris RL, Blumenschein G Jr, Fayette J, Guigay J, Colevas AD, Licitra L, et al. Nivolumab for recurrent squamous-cell carcinoma of the head and neck. N Engl J Med. 
2016;375(19):1856–67. https://doi.org/10.1056/NEJMoa1602252. Larkin J, Chiarion-Sileni V, Gonzalez R, Grob JJ, Cowey CL, Lao CD, et al. Combined Nivolumab and Ipilimumab or Monotherapy in untreated melanoma. N Engl J Med. 2015;373(1):23–34. https://doi.org/10.1056/NEJMoa1504030. Mandal R, Chan TA. Personalized oncology meets immunology: the path toward precision immunotherapy. Cancer Discov. 2016;6(7):703–13. https://doi.org/10.1158/2159-8290.CD-16-0146. Motzer RJ, Escudier B, McDermott DF, George S, Hammers HJ, Srinivas S, et al. Nivolumab versus Everolimus in advanced renal-cell carcinoma. N Engl J Med. 2015;373(19):1803–13. https://doi.org/10.1056/NEJMoa1510665. Reck M, Rodriguez-Abreu D, Robinson AG, et al. Pembrolizumab versus chemotherapy for PD-L1-positive non-small-cell lung Cancer. N Engl J Med. 2016;375(19):1823–33. https://doi.org/10.1056/NEJMoa1606774. Inarrairaegui M, Melero I, Sangro B. Immunotherapy of hepatocellular carcinoma: facts and hopes. Clin Cancer Res. 2018;24(7):1518–24. https://doi.org/10.1158/1078-0432.CCR-17-0289. Zongyi Y, Xiaowu L. Immunotherapy for hepatocellular carcinoma. Cancer Lett. 2020;470:8–17. https://doi.org/10.1016/j.canlet.2019.12.002. Heymann F, Tacke F. Immunology in the liver--from homeostasis to disease. Nat Rev Gastroenterol Hepatol. 2016;13(2):88–110. https://doi.org/10.1038/nrgastro.2015.200. Pitt JM, Marabelle A, Eggermont A, Soria JC, Kroemer G, Zitvogel L. Targeting the tumor microenvironment: removing obstruction to anticancer immune responses and immunotherapy. Ann Oncol. 2016;27(8):1482–92. https://doi.org/10.1093/annonc/mdw168. Chew V, Lai L, Pan L, Lim CJ, Li J, Ong R, et al. Delineation of an immunosuppressive gradient in hepatocellular carcinoma using high-dimensional proteomic and transcriptomic analyses. Proc Natl Acad Sci U S A. 2017;114(29):E5900–e5909. https://doi.org/10.1073/pnas.1706559114. Joyce JA, Pollard JW. Microenvironmental regulation of metastasis. Nat Rev Cancer. 2009;9(4):239–52. 
https://doi.org/10.1038/nrc2618. Grivennikov SI, Greten FR, Karin M. Immunity, inflammation, and cancer. Cell. 2010;140(6):883–99. https://doi.org/10.1016/j.cell.2010.01.025. Kulik L, El-Serag HB. Epidemiology and Management of Hepatocellular Carcinoma. Gastroenterology. 2019;156(2):477–491.e471. Yarchoan M, Xing D, Luan L, Xu H, Sharma RB, Popovic A, et al. Characterization of the immune microenvironment in hepatocellular carcinoma. Clin Cancer Res. 2017;23(23):7333–9. https://doi.org/10.1158/1078-0432.CCR-17-0950. Kurebayashi Y, Ojima H, Tsujikawa H, et al. Landscape of immune microenvironment in hepatocellular carcinoma and its additional impact on histological and molecular classification. Hepatology. 2018;68(3):1025–41. Gnjatic S, Bronte V, Brunet LR, Butler MO, Disis ML, Galon J, et al. Identifying baseline immune-related biomarkers to predict clinical outcome of immunotherapy. J Immunother Cancer. 2017;5(1):44. https://doi.org/10.1186/s40425-017-0243-4. Li X, Xu W, Kang W, Wong SH, Wang M, Zhou Y, et al. Genomic analysis of liver cancer unveils novel driver genes and distinct prognostic features. Theranostics. 2018;8(6):1740–51. https://doi.org/10.7150/thno.22010. Shen S, Wang G, Zhang R, Zhao Y, Yu H, Wei Y, et al. Development and validation of an immune gene-set based prognostic signature in ovarian cancer. EBioMedicine. 2019;40:318–26. https://doi.org/10.1016/j.ebiom.2018.12.054. Wan B, Liu B, Huang Y, Yu G, Lv C. Prognostic value of immune-related genes in clear cell renal cell carcinoma. Aging. 2019;11(23):11474–89. https://doi.org/10.18632/aging.102548. Yang S, Wu Y, Deng Y, Zhou L, Yang P, Zheng Y, et al. Identification of a prognostic immune signature for cervical cancer to predict survival and response to immune checkpoint inhibitors. Oncoimmunology. 2019;8(12):e1659094. https://doi.org/10.1080/2162402X.2019.1659094. Long J, Wang A, Bai Y, Lin J, Yang X, Wang D, et al. 
Development and validation of a TP53-associated immune prognostic model for hepatocellular carcinoma. EBioMedicine. 2019;42:363–74. https://doi.org/10.1016/j.ebiom.2019.03.022. Moeini A, Torrecilla S, Tovar V, Montironi C, Andreu-Oller C, Peix J, et al. An immune gene expression signature associated with development of human hepatocellular carcinoma identifies mice that respond to chemopreventive agents. Gastroenterology. 2019;157(5):1383–97 e1311. https://doi.org/10.1053/j.gastro.2019.07.028. Liang S, Ma HY, Zhong Z, Dhar D, Liu X, Xu J, et al. NADPH oxidase 1 in liver macrophages promotes inflammation and tumor development in mice. Gastroenterology. 2019;156(4):1156–72 e1156. https://doi.org/10.1053/j.gastro.2018.11.019. International Cancer Genome Consortium, Hudson TJ, Anderson W, et al. International network of cancer genome projects. Nature. 2010;464(7291):993–8. https://doi.org/10.1038/nature08987. Barrett T, Troup DB, Wilhite SE, Ledoux P, Rudnev D, Evangelista C, et al. NCBI GEO: archive for high-throughput functional genomic data. Nucleic Acids Res. 2009;37(Database issue):D885–90. https://doi.org/10.1093/nar/gkn764. Zalocusky KA, Kan MJ, Hu Z, Dunn P, Thomson E, Wiser J, et al. The 10,000 Immunomes project: building a resource for human immunology. Cell Rep. 2018;25(7):1995. https://doi.org/10.1016/j.celrep.2018.11.013. Ritchie ME, Phipson B, Wu D, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43(7):e47. Muller C, Schillert A, Rothemeier C, et al. Removing batch effects from longitudinal gene expression - quantile normalization plus ComBat as best approach for microarray transcriptome data. PLoS One. 2016;11(6):e0156594. https://doi.org/10.1371/journal.pone.0156594. Yu G, Wang LG, Han Y, He QY. clusterProfiler: an R package for comparing biological themes among gene clusters. OMICS. 2012;16(5):284–7. https://doi.org/10.1089/omi.2011.0118.
Imran QM, Hussain A, Lee S-U, Mun BG, Falak N, Loake GJ, et al. Transcriptome profile of NO-induced Arabidopsis transcription factor genes suggests their putative regulatory role in multiple biological processes. Sci Rep. 2018;8(1):771. https://doi.org/10.1038/s41598-017-18850-5. Mei S, Meyer CA, Zheng R, Qin Q, Wu Q, Jiang P, et al. Cistrome Cancer: a web resource for integrative gene regulation modeling in Cancer. Cancer Res. 2017;77(21):e19–22. https://doi.org/10.1158/0008-5472.CAN-17-0327. Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13(11):2498–504. https://doi.org/10.1101/gr.1239303. Chin CH, Chen SH, Wu HH, Ho CW, Ko MT, Lin CY. CytoHubba: identifying hub objects and sub-networks from complex interactome. BMC Syst Biol. 2014;8(Suppl 4):S11. Zhao Y, Simon R. Development and validation of predictive indices for a continuous outcome using gene expression profiles. Cancer Informat. 2010;9:105–14. https://doi.org/10.4137/cin.s3805. Farazi PA, DePinho RA. Hepatocellular carcinoma pathogenesis: from genes to environment. Nat Rev Cancer. 2006;6(9):674–87. https://doi.org/10.1038/nrc1934. Vickers AJ, Cronin AM, Elkin EB, Gonen M. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers. BMC Med Inform Decis Mak. 2008;8(1):53. https://doi.org/10.1186/1472-6947-8-53. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102(43):15545–50. https://doi.org/10.1073/pnas.0506580102. Cerami E, Gao J, Dogrusoz U, Gross BE, Sumer SO, Aksoy BA, et al. The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov. 2012;2(5):401–4. 
https://doi.org/10.1158/2159-8290.CD-12-0095. Ponten F, Schwenk JM, Asplund A, Edqvist PH. The human protein atlas as a proteomic resource for biomarker discovery. J Intern Med. 2011;270(5):428–46. https://doi.org/10.1111/j.1365-2796.2011.02427.x. Nagy A, Lanczky A, Menyhart O, Gyorffy B. Validation of miRNA prognostic power in hepatocellular carcinoma using expression data of independent datasets. Sci Rep. 2018;8(1):9227. https://doi.org/10.1038/s41598-018-27521-y. Chen F, Zhuang X, Lin L, Yu P, Wang Y, Shi Y, et al. New horizons in tumor microenvironment biology: challenges and opportunities. BMC Med. 2015;13(1):45. https://doi.org/10.1186/s12916-015-0278-7. Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J. 2015;13:8–17. https://doi.org/10.1016/j.csbj.2014.11.005. Chen J, Gingold JA, Su X. Immunomodulatory TGF-beta signaling in hepatocellular carcinoma. Trends Mol Med. 2019;25(11):1010–23. https://doi.org/10.1016/j.molmed.2019.06.007. Lee KP, Lee JH, Kim TS, Kim TH, Park HD, Byun JS, et al. The hippo-Salvador pathway restrains hepatic oval cell proliferation, liver size, and liver tumorigenesis. Proc Natl Acad Sci U S A. 2010;107(18):8248–53. https://doi.org/10.1073/pnas.0912203107. Cordenonsi M, Zanconato F, Azzolin L, Forcato M, Rosato A, Frasson C, et al. The hippo transducer TAZ confers cancer stem cell-related traits on breast cancer cells. Cell. 2011;147(4):759–72. https://doi.org/10.1016/j.cell.2011.09.048. Lau AN, Curtis SJ, Fillmore CM, Rowbotham SP, Mohseni M, Wagner DE, et al. Tumor-propagating cells and yap/Taz activity contribute to lung tumor progression and metastasis. EMBO J. 2014;33(5):468–81. https://doi.org/10.1002/embj.201386082. Wang D, Liu J, Liu S, Li W. Identification of crucial genes associated with immune cell infiltration in hepatocellular carcinoma by weighted gene co-expression network analysis. Front Genet. 2020;11:342. 
https://doi.org/10.3389/fgene.2020.00342. Su C. Survivin in survival of hepatocellular carcinoma. Cancer Lett. 2016;379(2):184–90. https://doi.org/10.1016/j.canlet.2015.06.016. Fernandez JG, Rodriguez DA, Valenzuela M, et al. Survivin expression promotes VEGF-induced tumor angiogenesis via PI3K/Akt enhanced beta-catenin/Tcf-Lef dependent transcription. Mol Cancer. 2014;13(1):209. https://doi.org/10.1186/1476-4598-13-209. Takegahara N, Kumanogoh A, Kikutani H. Semaphorins: a new class of immunoregulatory molecules. Philos Trans R Soc Lond Ser B Biol Sci. 2005;360(1461):1673–80. https://doi.org/10.1098/rstb.2005.1696. Kinugasa Y, Ishiguro H, Tokita Y, Oohira A, Ohmoto H, Higashiyama S. Neuroglycan C, a novel member of the neuregulin family. Biochem Biophys Res Commun. 2004;321(4):1045–9. https://doi.org/10.1016/j.bbrc.2004.07.066. Kiavue N, Cabel L, Melaabi S, Bataillon G, Callens C, Lerebours F, et al. ERBB3 mutations in cancer: biological aspects, prevalence and therapeutics. Oncogene. 2020;39(3):487–502. https://doi.org/10.1038/s41388-019-1001-5. Zhu Y, Yang J, Xu D, Gao XM, Zhang Z, Hsu JL, et al. Disruption of tumour-associated macrophage trafficking by the osteopontin-induced colony-stimulating factor-1 signalling sensitises hepatocellular carcinoma to anti-PD-L1 blockade. Gut. 2019;68(9):1653–66. https://doi.org/10.1136/gutjnl-2019-318419. Derry PJ, Hegde ML, Jackson GR, Kayed R, Tour JM, Tsai AL, et al. Revisiting the intersection of amyloid, pathologically modified tau and iron in Alzheimer's disease from a ferroptosis perspective. Prog Neurobiol. 2020;184:101716. https://doi.org/10.1016/j.pneurobio.2019.101716. Gargini R, Segura-Collar B, Sanchez-Gomez P. Novel functions of the neurodegenerative-related gene tau in Cancer. Front Aging Neurosci. 2019;11:231. https://doi.org/10.3389/fnagi.2019.00231. Zwicker BL, Agellon LB. Transport and biological activities of bile acids. Int J Biochem Cell Biol. 2013;45(7):1389–98. 
https://doi.org/10.1016/j.biocel.2013.04.012. Chu SJ, Zhang J, Zhang R, Lu WW, Zhu JS. Evolution and functions of stanniocalcins in cancer. Int J Immunopathol Pharmacol. 2015;28(1):14–20. https://doi.org/10.1177/0394632015572745. Cheng H, Wu Z, Wu C, Wang X, Liow SS, Li Z, et al. Overcoming STC2 mediated drug resistance through drug and gene co-delivery by PHB-PDMAEMA cationic polyester in liver cancer cells. Mater Sci Eng C Mater Biol Appl. 2018;83:210–7. https://doi.org/10.1016/j.msec.2017.08.075. Xiao G, Jin LL, Liu CQ, Wang YC, Meng YM, Zhou ZG, et al. EZH2 negatively regulates PD-L1 expression in hepatocellular carcinoma. J Immunother Cancer. 2019;7(1):300. https://doi.org/10.1186/s40425-019-0784-9. Bugide S, Green MR, Wajapeyee N. Inhibition of enhancer of zeste homolog 2 (EZH2) induces natural killer cell-mediated eradication of hepatocellular carcinoma cells. Proc Natl Acad Sci U S A. 2018;115(15):E3509–18. https://doi.org/10.1073/pnas.1802691115. Sun S, Wang W, Luo X, Li Y, Liu B, Li X, et al. Circular RNA circ-ADD3 inhibits hepatocellular carcinoma metastasis through facilitating EZH2 degradation via CDK1-mediated ubiquitination. Am J Cancer Res. 2019;9(8):1695–707. Lu C, Han HD, Mangala LS, Ali-Fehmi R, Newton CS, Ozbun L, et al. Regulation of tumor angiogenesis by EZH2. Cancer Cell. 2010;18(2):185–97. https://doi.org/10.1016/j.ccr.2010.06.016. Guo J, Hao C, Wang C, Li L. Long noncoding RNA PVT1 modulates hepatocellular carcinoma cell proliferation and apoptosis by recruiting EZH2. Cancer Cell Int. 2018;18(1):98. https://doi.org/10.1186/s12935-018-0582-3. Giannone G, Ghisoni E, Genta S, et al. Immuno-metabolism and microenvironment in cancer: key players for immunotherapy. Int J Mol Sci. 2020;21(12):4414. Bupathi M, Kaseb A, Meric-Bernstam F, Naing A. Hepatocellular carcinoma: where there is unmet need. Mol Oncol. 2015;9(8):1501–9. https://doi.org/10.1016/j.molonc.2015.06.005.
Zhu GZ, Liao XW, Wang XK, Gong YZ, Liu XG, Yu L, et al. Comprehensive investigation of p53, p21, nm23, and VEGF expression in hepatitis B virus-related hepatocellular carcinoma overall survival after hepatectomy. J Cancer. 2020;11(4):906–18. https://doi.org/10.7150/jca.33766. Hu B, Ding GY, Fu PY, Zhu XD, Ji Y, Shi GM, et al. NOD-like receptor X1 functions as a tumor suppressor by inhibiting epithelial-mesenchymal transition and inducing aging in hepatocellular carcinoma cells. J Hematol Oncol. 2018;11(1):28. https://doi.org/10.1186/s13045-018-0573-9. Giovannini C, Bolondi L, Gramantieri L. Targeting notch3 in hepatocellular carcinoma: molecular mechanisms and therapeutic perspectives. Int J Mol Sci. 2016;18(1):56. Pathria P, Louis TL, Varner JA. Targeting tumor-associated macrophages in Cancer. Trends Immunol. 2019;40(4):310–27. https://doi.org/10.1016/j.it.2019.02.003. Zong Z, Zou J, Mao R, Ma C, Li N, Wang J, et al. M1 macrophages induce PD-L1 expression in hepatocellular carcinoma cells through IL-1beta signaling. Front Immunol. 2019;10:1643. https://doi.org/10.3389/fimmu.2019.01643. Li L, Xu L, Yan J, et al. CXCR2-CXCL1 axis is correlated with neutrophil infiltration and predicts a poor prognosis in hepatocellular carcinoma. J Exp Clin Cancer Res. 2015;34:129. Zhou SL, Dai Z, Zhou ZJ, et al. Overexpression of CXCL5 mediates neutrophil infiltration and indicates poor prognosis for hepatocellular carcinoma. Hepatology. 2012;56(6):2242–54. Wei Y, Lao XM, Xiao X, Wang XY, Wu ZJ, Zeng QH, et al. Plasma cell polarization to the immunoglobulin G phenotype in hepatocellular carcinomas involves epigenetic alterations and promotes Hepatoma progression in mice. Gastroenterology. 2019;156(6):1890–904 e1816. https://doi.org/10.1053/j.gastro.2019.01.250. Zhou J, Ding T, Pan W, Zhu LY, Li L, Zheng L. Increased intratumoral regulatory T cells are related to intratumoral macrophages and poor prognosis in hepatocellular carcinoma patients. Int J Cancer. 2009;125(7):1640–8. 
https://doi.org/10.1002/ijc.24556. Kuang DM, Wu Y, Chen N, Cheng J, Zhuang SM, Zheng L. Tumor-derived hyaluronan induces formation of immunosuppressive macrophages through transient early activation of monocytes. Blood. 2007;110(2):587–95. https://doi.org/10.1182/blood-2007-01-068031. Zhou ZJ, Xin HY, Li J, Hu ZQ, Luo CB, Zhou SL. Intratumoral plasmacytoid dendritic cells as a poor prognostic factor for hepatocellular carcinoma following curative resection. Cancer Immunol Immunother. 2019;68(8):1223–33. https://doi.org/10.1007/s00262-019-02355-3. Pedroza-Gonzalez A, Zhou G, Vargas-Mendez E, et al. Tumor-infiltrating plasmacytoid dendritic cells promote immunosuppression by Tr1 cells in human liver tumors. Oncoimmunology. 2015;4(6):e1008355. Mossanen JC, Tacke F. Role of lymphocytes in liver cancer. Oncoimmunology. 2013;2(11):e26468. https://doi.org/10.4161/onci.26468. Yang XH, Yamagiwa S, Ichida T, Matsuda Y, Sugahara S, Watanabe H, et al. Increase of CD4+ CD25+ regulatory T-cells in the liver of patients with hepatocellular carcinoma. J Hepatol. 2006;45(2):254–62. https://doi.org/10.1016/j.jhep.2006.01.036. Itoh S, Yoshizumi T, Yugawa K, et al. Impact of immune response on outcomes in hepatocellular carcinoma: association with vascular formation. Hepatology. 2020;72(6):1987. We thank LetPub (www.letpub.com) and Nature Research Editing Service for its linguistic assistance during the preparation of this manuscript. 
This work was supported by R & D projects in key areas of Guangdong Province, Construction of high-level university in Guangzhou University of Chinese Medicine (Grant number: A1-AFD018181A29), the Guangzhou University of Chinese Medicine National University Student Innovation and Entrepreneurship Training Project (Project leader: Xinqian Yang; grant number: 201810572038), the First Affiliated Hospital of Guangzhou University of Chinese Medicine Innovation and Student Training Team Incubation Project (Project leader: Wenjiang Zheng; grant number: 2018XXTD003), and the 2020 National College Student Innovation and Entrepreneurship Training Program of Guangzhou University of Chinese Medicine (Project leader: Ping Zhang; grant number: S202010572123). Qian Yan, Wenjiang Zheng and Boqing Wang contributed equally to this work and should be considered co-first authors. The First Clinical Medical School, Guangzhou University of Chinese Medicine, Guangzhou, China: Qian Yan, Wenjiang Zheng, Boqing Wang, Baoqian Ye, Huiyan Luo, Xinqian Yang & Ping Zhang. Department of Oncology, The First Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China: Xiongwen Wang. QY and WJZ were responsible for the research design and writing, and BQY was responsible for the data and bioinformatics analysis. BQW contributed substantially to the revision of the manuscript. HYL checked the full text for grammatical errors; XWW guided the research ideas, design, methods, and manuscript revision. The author(s) read and approved the final manuscript. Correspondence to Xiongwen Wang. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential or actual conflict of interest. Yan, Q., Zheng, W., Wang, B. et al.
A prognostic model based on seven immune-related genes predicts the overall survival of patients with hepatocellular carcinoma. BioData Mining 14, 29 (2021). DOI: https://doi.org/10.1186/s13040-021-00261-y Keywords: Immune-related genes; Prognostic model; Nomogram; Immune infiltration
February 2022, 16(1): 185-230. doi: 10.3934/amc.2020108 ${\sf {FAST}}$: Disk encryption and beyond Debrup Chakraborty 1, Sebati Ghosh 1,*, Cuauhtemoc Mancillas López 2, and Palash Sarkar 1. Indian Statistical Institute, 203, B.T. Road, Kolkata, India 700108. Computer Science Department, CINVESTAV-IPN, Mexico, D.F., 07360, Mexico. * Corresponding author: Sebati Ghosh. Received February 2020. Published February 2022. Early access September 2020. Figure(1) / Table(10). This work introduces ${\sf {FAST}}$ which is a new family of tweakable enciphering schemes. Several instantiations of ${\sf {FAST}}$ are described. These are targeted towards two goals, the specific task of disk encryption and a more general scheme suitable for a wide variety of practical applications. A major contribution of this work is to present detailed and careful software implementations of all of these instantiations. For disk encryption, the results from the implementations show that ${\sf {FAST}}$ compares very favourably to the IEEE disk encryption standards XCB and EME2 as well as the more recent proposal AEZ. ${\sf {FAST}}$ is built using a fixed input length pseudo-random function and an appropriate hash function. It uses a single-block key, is parallelisable and can be instantiated using only the encryption function of a block cipher. The hash function can be instantiated using either the Horner's rule based usual polynomial hashing or hashing based on the more efficient Bernstein-Rabin-Winograd polynomials. Security of ${\sf {FAST}}$ has been rigorously analysed using the standard provable security approach and concrete security bounds have been derived. Based on our implementation results, we put forward ${\sf {FAST}}$ as a serious candidate for standardisation and deployment. Keywords: Disk encryption, tweakable enciphering schemes, pseudo-random function, Horner, BRW.
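The abstract mentions two hash instantiations, the simpler of which is Horner's rule polynomial hashing. The sketch below illustrates only the Horner recurrence itself and is not the FAST construction: FAST hashes over the binary field GF(2^n), which is replaced here by a prime field for readability, and the modulus and function name are our own choices.

```python
# Illustrative sketch of Horner's-rule polynomial hashing (not FAST's actual
# GF(2^n) arithmetic; a prime field stands in for the binary field here).
P = 2**61 - 1  # a Mersenne prime chosen for this example

def horner_hash(key: int, blocks: list[int]) -> int:
    """Evaluate blocks[0]*key^m + ... + blocks[m-1]*key (mod P) by Horner's rule.

    Uses one field multiplication per message block, which is the style of
    accounting used in the [M] column of Table 7 below.
    """
    acc = 0
    for b in blocks:
        acc = (acc + b) * key % P
    return acc
```

The BRW-polynomial alternative mentioned in the abstract reduces the multiplication count to roughly one per two blocks, at the cost of a more involved recursive evaluation.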
Mathematics Subject Classification: 11T71, 68P25, 94A60. Citation: Debrup Chakraborty, Sebati Ghosh, Cuauhtemoc Mancillas López, Palash Sarkar. ${\sf {FAST}}$: Disk encryption and beyond. Advances in Mathematics of Communications, 2022, 16 (1) : 185-230. doi: 10.3934/amc.2020108 Public comments on the XTS-AES mode, http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/comments/XTS/collected_XTS_comments.pdf. IEEE Std 1619-2007: Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices, IEEE Computer Society, 2008. Available at: http://standards.ieee.org/findstds/standard/1619-2007.html. IEEE Std 1619.2-2010: IEEE Standard for Wide-block Encryption for Shared Storage Media, March 2011. Available at: http://standards.ieee.org/findstds/standard/1619.2-2010.html. M. Bellare, D. Cash and S. Keelveedhi, Ciphers that securely encipher their own keys, in (eds. Y. Chen, G. Danezis and V. Shmatikov) Proceedings of the 18th ACM Conference on Computer and Communications Security, CCS 2011, Chicago, Illinois, USA, October 17-21, 2011, 2011, 423–432. doi: 10.1145/2046707.2046757. M. Bellare, A. Desai, E. Jokipii and P. Rogaway, A concrete security treatment of symmetric encryption, in 38th Annual Symposium on Foundations of Computer Science, FOCS '97, Miami Beach, Florida, USA, October 19-22, 1997, IEEE Computer Society, 1997, 394–403. doi: 10.1109/SFCS.1997.646128. D. J. Bernstein, Polynomial evaluation and message authentication, 2007. Available at: http://cr.yp.to/papers.html#pema. R. Bhaumik and M. Nandi, An inverse-free single-keyed tweakable enciphering scheme, in (eds. T. Iwata and J. H.
Cheon) Advances in Cryptology - ASIACRYPT 2015 - 21st International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, November 29 - December 3, 2015, Proceedings, Part II, Lecture Notes in Computer Science, 9453, Springer, 2015, 159–180. doi: 10.1007/978-3-662-48800-3_7. D. Chakraborty, S. Ghosh and P. Sarkar, A fast single-key two-level universal hash function, IACR Trans. Symmetric Cryptol., 2017 (2017), 106–128. D. Chakraborty, V. Hernandez-Jimenez and P. Sarkar, Another look at XCB, Cryptography and Communications, 7 (2015), 439-468. doi: 10.1007/s12095-015-0127-8. D. Chakraborty, C. Mancillas-López, F. Rodríguez-Henríquez and P. Sarkar, Efficient hardware implementations of BRW polynomials and tweakable enciphering schemes, IEEE Trans. Computers, 62 (2013), 279–294. doi: 10.1109/TC.2011.227. D. Chakraborty, C. Mancillas-López and P. Sarkar, STES: A stream cipher based low cost scheme for securing stored data, IEEE Trans. Computers, 64 (2015), 2691–2707. doi: 10.1109/TC.2014.2366739. D. Chakraborty and M. Nandi, An improved security bound for HCTR, in (eds. K. Nyberg) Fast Software Encryption, 15th International Workshop, FSE 2008, Lausanne, Switzerland, February 10-13, 2008, Revised Selected Papers, Lecture Notes in Computer Science, 5086, Springer, 2008, 289–302. doi: 10.1007/978-3-540-71039-4_18. D. Chakraborty and P. Sarkar, A new mode of encryption providing a tweakable strong pseudo-random permutation, in (eds. M. J. B. Robshaw) FSE, Lecture Notes in Computer Science, 4047, Springer, 2006, 293–309. doi: 10.1007/11799313_19. D. Chakraborty and P. Sarkar, HCH: A new tweakable enciphering scheme using the hash-counter-hash approach, IEEE Transactions on Information Theory, 54 (2008), 1683-1699. doi: 10.1109/TIT.2008.917623. D. Chakraborty and P.
Sarkar, On modes of operations of a block cipher for authentication and authenticated encryption, Cryptography and Communications, 8 (2016), 455-511. doi: 10.1007/s12095-015-0153-6. P. Crowley and E. Biggers, Adiantum: Length-preserving encryption for entry-level processors, IACR Trans. Symmetric Cryptol., 2018 (2018), 39–61. M. J. Dworkin, SP 800-38E. Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices, Technical report, Gaithersburg, MD, United States, 2010. S. Gueron and M. E. Kounavis, Efficient implementation of the Galois counter mode using a carry-less multiplier and a fast reduction algorithm, Inf. Process. Lett., 110 (2010), 549-553. doi: 10.1016/j.ipl.2010.04.011. S. Gueron, A. Langley and Y. Lindell, AES-GCM-SIV: Specification and analysis, IACR Cryptology ePrint Archive, 168 (2017). doi: 10.1007/978-3-319-52153-4. S. Halevi, EME$^{ * }$: Extending EME to handle arbitrary-length messages with associated data, in (eds. A. Canteaut and K. Viswanathan) INDOCRYPT, Lecture Notes in Computer Science, 3348, Springer, 2004, 315–327. doi: 10.1007/978-3-540-30556-9_25. S. Halevi, Invertible universal hashing and the TET encryption mode, in (eds. A. Menezes) CRYPTO, Lecture Notes in Computer Science, 4622, Springer, 2007, 412–429. doi: 10.1007/978-3-540-74143-5_23. S. Halevi and H. Krawczyk, Security under key-dependent inputs, in (eds. P. Ning, S. De Capitani di Vimercati, and P. F. Syverson) Proceedings of the 2007 ACM Conference on Computer and Communications Security, CCS 2007, Alexandria, Virginia, USA, October 28-31, 2007, 2007, 466–475. doi: 10.1007/11935308. S. Halevi and P. Rogaway, A tweakable enciphering mode, in (eds. D. Boneh) CRYPTO, Lecture Notes in Computer Science, 2729, Springer, 2003, 482–499. doi: 10.1007/978-3-540-45146-4_28. S. Halevi and P.
Rogaway, A parallelizable enciphering mode, in (eds. T. Okamoto) CT-RSA, Lecture Notes in Computer Science, 2964, Springer, 2004, 292–304. doi: 10.1007/978-3-540-24660-2_23. V. T. Hoang, T. Krovetz and P. Rogaway, Robust authenticated-encryption AEZ and the problem that it solves, in (eds. E. Oswald and M. Fischlin) Advances in Cryptology - EUROCRYPT 2015 - 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015, Proceedings, Part I, Lecture Notes in Computer Science, 9056, Springer, 2015, 15–44. doi: 10.1007/978-3-662-46800-5_2. M. Liskov, R. L. Rivest and D. Wagner, Tweakable block ciphers, in (eds. M. Yung) CRYPTO, Lecture Notes in Computer Science, 2442, Springer, 2002, 31–46. doi: 10.1007/3-540-45708-9_3. D. A. McGrew and S. R. Fluhrer, The extended codebook (XCB) mode of operation, Cryptology ePrint Archive, Report 2004/278, 2004. Available at: http://eprint.iacr.org/. D. A. McGrew and S. R. Fluhrer, The security of the extended codebook (XCB) mode of operation, in (eds. C. M. Adams, A. Miri, and M. J. Wiener) Selected Areas in Cryptography, Lecture Notes in Computer Science, 4876, Springer, 2007, 311–327. doi: 10.1007/978-3-540-77360-3_20. D. A. McGrew and J. Viega, Arbitrary block length mode, 2004. K. Minematsu, Parallelizable rate-1 authenticated encryption from pseudorandom functions, in (eds. P. Q. Nguyen and E. Oswald) Advances in Cryptology - EUROCRYPT 2014 - 33rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Copenhagen, Denmark, May 11-15, 2014, Proceedings, Lecture Notes in Computer Science, 8441, Springer, 2014, 275–292. doi: 10.1007/978-3-642-55220-5_16. M. Naor and O. Reingold, On the construction of pseudorandom permutations: Luby-Rackoff revisited, J. Cryptology, 12 (1999), 29-66. doi: 10.1007/PL00003817. M.
O. Rabin and S. Winograd, Fast evaluation of polynomials by rational preparation, Comm. Pure Appl. Math., 25 (1972), 433-458. doi: 10.1002/cpa.3160250405. P. Rogaway, Efficient instantiations of tweakable blockciphers and refinements to modes OCB and PMAC, in (eds. P. J. Lee) ASIACRYPT, Lecture Notes in Computer Science, 3329, Springer, 2004, 16–31. doi: 10.1007/978-3-540-30539-2_2. P. Rogaway and T. Shrimpton, A provable-security treatment of the key-wrap problem, in (eds. S. Vaudenay) EUROCRYPT, Lecture Notes in Computer Science, 4004, Springer, 2006, 373–390. doi: 10.1007/11761679_23. P. Sarkar, A general mixing strategy for the ECB-Mix-ECB mode of operation, Inf. Process. Lett., 109 (2008), 121-123. doi: 10.1016/j.ipl.2008.09.012. P. Sarkar, Efficient tweakable enciphering schemes from (block-wise) universal hash functions, IEEE Transactions on Information Theory, 55 (2009), 4749-4759. doi: 10.1109/TIT.2009.2027487. P. Sarkar, Tweakable enciphering schemes using only the encryption function of a block cipher, Inf. Process. Lett., 111 (2011), 945-955. doi: 10.1016/j.ipl.2011.06.014. P. Wang, D. Feng and W. Wu, HCTR: A variable-input-length enciphering mode, in (eds. D. Feng, D. Lin, and M. Yung) CISC, Lecture Notes in Computer Science, 3822, Springer, 2005, 175–188. doi: 10.1007/11599548_15.
Figure 1. The hash functions $ \mathbf{H} $ and $ \mathbf{G} $
Table 1. Encryption and decryption algorithms for ${\sf FAST}$
Table 2. A two-round Feistel construction required in Table 1
Table 3. Computations of ${\sf {vecHorner}}$ and ${\sf {vecHash2L}}$. The string $ 1^n $ denotes the element of $ { \mathbb{F} } $ whose binary representation consists of the all-one string. Here $ \eta $ is a positive integer $ \geq 3 $ and $ \mathfrak{d}(\eta) $ denotes the degree of $ {\sf {BRW}}_{\tau}(X_1, \ldots, X_\eta) $, where $ X_1, \ldots, X_\eta \in { \mathbb{F} } $
Table 4.
Game $ G_{{\sf {real}}} $
Table 5. Game $ G_{{\sf {int}}} $
Table 6. Game $ G_{{\sf {rnd}}} $
Table 7. Comparison of different tweakable enciphering schemes according to computational efficiency. [BC] denotes the number of block cipher calls; [M] denotes the number of field multiplications; [D] denotes the number of doubling ('multiplication by $ \alpha $') operations.

type           scheme                                      [BC]          [M]                          [D]
enc-mix-enc    CMC [23]                                    $2m+1$        –                            –
               EME2$^*$ [20]                               $2m+1+m/n$    –                            2
               AEZ [25]                                    $(5m+4)/2$    –                            $\frac{m-2}{4}$
               FMix [7]                                    $2m+1$        –                            –
hash-enc-hash  XCB [27]                                    $m+1$         $2(m+3)$                     –
               HCTR [38]                                   $m$           $2(m+1)$                     –
               HCHfp [14]                                  $m+2$         $2(m-1)$                     –
               TET [21]                                    $m+1$         $2m$                         $2(m-1)$
               HEH-BRW [36]                                $m+1$         $2+2\lfloor(m-1)/2\rfloor$   $2(m-1)$
               ${\sf {TESX}}$ with BRW [37]                $m+1$         $4+2\lfloor(m-1)/2\rfloor$   $2(m-1)$
               ${\sf {FAST}}[{\sf {Fx}}_m, {\sf Horner}]$  $m+1$         $2m+1$                       –
               ${\sf {FAST}}[{\sf {Fx}}_m, {\sf {BRW}}]$   $m+1$         $2+2\lfloor(m-1)/2\rfloor$   –

Table 8. Comparison of different tweakable enciphering schemes according to practical and implementation simplicity. [BCK] denotes the number of block cipher keys; [HK] denotes the number of blocks in the hash key.

type           scheme                                      [BCK]  [HK]  dec module  parallel
enc-mix-enc    CMC [23]                                    1      -     reqd        no
               EME2$^*$ [20]                               1      2     reqd        yes
               AEZ [25]                                    1      2     not reqd    yes
               FMix [7]                                    1      -     not reqd    no
hash-enc-hash  XCB [27]                                    3      2     reqd        yes
               HCTR [38]                                   1      1     reqd        yes
               HCHfp [14]                                  1      1     reqd        yes
               TET [21]                                    2      3     reqd        yes
               HEH-BRW [36]                                1      1     reqd        yes
               ${\sf {TESX}}$ with BRW [37]                1      2     not reqd    yes
               ${\sf {FAST}}[{\sf {Fx}}_m, {\sf Horner}]$  1      -     not reqd    yes
               ${\sf {FAST}}[{\sf {Fx}}_m, {\sf {BRW}}]$   1      -     not reqd    yes

Note: for both Tables 7 and 8, the block size is $n$ bits, the tweak is a single $n$-bit block and the number of blocks $m\geq 3$ in the message is fixed.
Table 9.
Comparison of the cycles per byte measure of ${\sf {FAST}}$ with those of XCB, EME2 and AEZ in the setting of $ {\sf {Fx}}_{256} $.

scheme                                              Skylake  Kabylake
XCB                                                 1.92     1.85
EME2                                                2.07     1.99
AEZ                                                 1.74     1.70
$ {\sf {FAST}}[{\sf {Fx}}_{256}, {\sf {Horner}}] $  1.63     1.56
$ {\sf {FAST}}[{\sf {Fx}}_{256}, {\sf {BRW}}] $     1.24     1.19

Table 10. Report of cycles per byte measure for the setting of ${\sf {Gn}}$ for $ {\sf {FAST}}[{\sf {Gn}}, \mathfrak{k}, {\sf {vecHorner}}] $ and $ {\sf {FAST}}[{\sf {Gn}}, \mathfrak{k}, 31, {\sf {vecHash2L}}] $. For each platform, the three columns give $ {\sf {vecHorner}} $, $ {\sf {vecHash2L}} $ (delayed) and $ {\sf {vecHash2L}} $ (normal).

msg len (bytes)  $ \mathfrak{k} $    Skylake             Kabylake
                 2                   1.51  1.38  1.59    1.42  1.32  1.56
512              3                   1.40  1.38  1.39    1.32  1.31  1.35
1024             3                   1.45  1.34  1.34    1.35  1.27  1.30
\begin{document} \title[Error bounds in the CLT for random sums]{New Berry-Esseen and Wasserstein bounds in the CLT for non-randomly centered random sums by probabilistic methods} \author{Christian D\"obler} \thanks{Universit\'{e} du Luxembourg, Unit\'{e} de Recherche en Math\'{e}matiques \\ [email protected]\\ {\it Keywords:} random sums, central limit theorem, Kolmogorov distance, Wasserstein distance, Stein's method, zero bias couplings, size bias couplings} \begin{abstract} We prove abstract bounds on the Wasserstein and Kolmogorov distances between non-randomly centered random sums of real i.i.d. random variables with a finite third moment and the standard normal distribution. Except for the case of mean zero summands, these bounds involve a coupling of the summation index with its size biased distribution as was previously considered in \cite{GolRin96} for the normal approximation of nonnegative random variables. When being specialized to concrete distributions of the summation index like the Binomial, Poisson and Hypergeometric distribution, our bounds turn out to be of the correct order of magnitude. \end{abstract} \maketitle \section{Introduction}\label{Intro} Let $N,X_1,X_2,\dotsc$ be random variables on a common probability space such that the $X_j$, $j\geq1$, are real-valued and $N$ assumes values in the set of nonnegative integers $\mathbb{Z}_+=\{0,1,\dotsc\}$. Then, the random variable \begin{equation}\label{defrs} S:=\sum_{j=1}^N X_j \end{equation} is called a \textit{random sum}. Such random variables appear frequently in modern probability theory, as many models for example from physics, finance, reliability and risk theory naturally lead to the consideration of such sums. Furthermore, sometimes a model, which looks quite different from \eqref{defrs} at the outset, may be transformed into a random sum and then the general theory of such sums may be invoked to study the original model \cite{GneKo96}.
For example, by the recent so-called \textit{master Steiner formula} from \cite{McCTr14} the distribution of the metric projection of a standard Gaussian vector onto a closed convex cone in Euclidean space can be represented as a random sum of i.i.d. centered chi-squared random variables with the distribution of $N$ given by the \textit{conic intrinsic volumes} of the cone. Hence, this distribution belongs to the class of the so-called chi-bar-square distributions, which is ubiquitous in the theory of hypotheses testing with inequality constraints (see e.g. \cite{Dyk91} and \cite{Sha88}). This representation was used in \cite{GNP14} to prove quantitative CLTs for both the distribution of the metric projection and the conic intrinsic volume distribution. These results are of interest e.g. in the field of compressed sensing.\\ There already exists a huge body of literature about the asymptotic distributions of random sums. Their investigation evidently began with the work \cite{Rob48} of Robbins, who assumes that the random variables $X_1,X_2,\dotsc$ are i.i.d. with a finite second moment and that $N$ also has a finite second moment. One of the results of \cite{Rob48} is that under these assumptions asymptotic normality of the index $N$ automatically implies asymptotic normality of the corresponding random sum. The book \cite{GneKo96} gives a comprehensive description of the limiting behaviour of such random sums under the assumption that the random variables $N,X_1,X_2,\dotsc$ are independent. In particular, one may ask under what conditions the sum $S$ in \eqref{defrs} is asymptotically normal, where asymptotically refers to the fact that the \textit{random index} $N$ in fact usually depends on a parameter, which is sent either to infinity or to zero. Once a CLT is known to hold, one might ask about the accuracy of the normal approximation to the distribution of the given random sum. 
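Robbins' observation, that asymptotic normality of the index carries over to the random sum, is easy to check numerically. The following Monte Carlo sketch is purely illustrative and not taken from the paper: the Poisson index, the Gaussian summands and all parameter values are our own choices. It uses the standard facts $E[S]=\lambda\mu$ and $\operatorname{Var}(S)=\lambda(\sigma^2+\mu^2)$ for $N\sim\mathrm{Poisson}(\lambda)$ independent of i.i.d. summands with mean $\mu$ and variance $\sigma^2$.

```python
# Monte Carlo sanity check of the random-sums CLT (illustrative only; the
# Poisson index, Gaussian summands and parameter values are our choices).
import math
import random

random.seed(1)
LAM, MU, SIGMA = 100.0, 2.0, 1.0  # illustrative parameters

def sample_poisson(lam: float) -> int:
    """Knuth's algorithm: count uniforms until their product drops below e^{-lam}."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

def standardized_random_sum() -> float:
    """Sample (S - lam*mu) / sqrt(lam*(sigma^2 + mu^2)) for S = X_1 + ... + X_N."""
    n = sample_poisson(LAM)
    s = sum(random.gauss(MU, SIGMA) for _ in range(n))
    return (s - LAM * MU) / math.sqrt(LAM * (SIGMA ** 2 + MU ** 2))
```

Drawing a few thousand such samples yields an empirical mean close to 0 and an empirical variance close to 1, in line with the CLT for Poisson random sums of non-centered summands.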
It turns out that it is generally much easier to derive rates of convergence for random sums of centered random variables, or, which amounts to the same thing, for random sums centered by random variables than for random sums of not necessarily centered random variables. In the centered case one might, for instance, first condition on the value of the index $N$, then use known error bounds for sums of a fixed number of independent random variables like the classical Berry-Esseen theorem and, finally, take expectation with respect to $N$. This technique is illustrated e.g. in the manuscript \cite{Doe12rs} and also works for non-normal limiting distributions like the Laplace distribution. For this reason we will mainly be interested in deriving sharp rates of convergence for the case of non-centered summands, but will also consider the mean-zero case and hint at the relevant differences. Also, we will not assume from the outset that the index $N$ has a certain fixed distribution like the Binomial or the Poisson, but will be interested in the general situation.\\ For non-centered summands and general index $N$, the relevant literature on rates of convergence in the random sums CLT seems quite easy to survey. Under the same assumptions as in \cite{Rob48} the paper \cite{Eng83} gives an upper bound on the Kolmogorov distance between the distribution of the random sum and a suitable normal distribution, which is proved to be sharp in some sense. However, this bound is not very explicit as it contains the Kolmogorov distance of $N$ to the normal distribution with the same mean and variance as $N$ as one of the terms appearing in the bound, for instance. This might make it difficult to apply this result to a concrete distribution of $N$. Furthermore, the method of proof cannot be easily adapted to probability metrics different from the Kolmogorov distance like e.g. the Wasserstein distance. 
In \cite{Kor87} a bound on the Kolmogorov distance is given which improves upon the result of \cite{Eng83} with respect to the constants appearing in the bound. However, the bound given in \cite{Kor87} is no longer strong enough to assure the well-known asymptotic normality of Binomial and Poisson random sums, unless the summands are centered. The paper \cite{Kor88} generalizes the results from \cite{Eng83} to the case of not necessarily identically distributed summands and to situations where the summands might not have finite absolute third moments. However, at least for non-centered summands, the bounds in \cite{Kor88} still lack some explicitness. \\ To the best of our knowledge, the article \cite{Sunk13} is the only one that gives bounds on the Wasserstein distance between random sums for general indices $N$ and the standard normal distribution. However, as mentioned by the same author in \cite{Sunk14}, the results of \cite{Sunk13} generally do not yield accurate bounds, unless the summands are centered. Indeed, the results from \cite{Sunk13} do not even yield convergence in distribution for Binomial or Poisson random sums of non-centered summands.\\ The main purpose of the present article is to combine Stein's method of normal approximation with several modern probabilistic concepts, like certain coupling constructions and conditional independence, to prove accurate abstract upper bounds on the distance between suitably standardized random sums of i.i.d. summands and the standard normal distribution, measured by two popular probability metrics, the Kolmogorov and Wasserstein distances. Using a simple inequality, this gives bounds for the whole class of $L^p$ distances of distributions, $1\leq p\leq\infty$.
These upper bounds, in their most abstract forms (see Theorem \ref{maintheo} below), involve moments of the difference of a coupling of $N$ with its \textit{size-biased distribution} but reduce to very explicit expressions if either $N$ has a concrete distribution like the Binomial, Poisson or Dirac delta distribution, the summands $X_j$ are centered, or, if the distribution of $N$ is infinitely divisible. These special cases are extensively presented in order to illustrate the wide applicability and strength of our results. As indicated above, this seems to be the first work which gives Wasserstein bounds in the random sums CLT for general indices $N$, which reduce to bounds of optimal order when being specialized to concrete distributions like the Binomial and the Poisson distributions. Using our abstract approach via size-bias couplings, we are also able to prove rates for Hypergeometric random sums. These do not seem to have been treated in the literature yet. This is not a surprise, because the Hypergeometric distribution is conceptually more complicated than the Binomial or Poisson distribution, as it is neither a natural convolution of i.i.d. random variables nor infinitely divisible. Indeed, every distribution of the summation index which allows for a close size-bias coupling should be amenable to our approach.\\ It should be mentioned that Stein's method and coupling techniques have previously been used to bound the error of exponential approximation \cite{PekRol11} and approximation by the Laplace distribution \cite{PiRen12} of certain random sums. In these papers, the authors make use of the fact that the exponential distribution and the Laplace distribution are the unique fixed points of certain distributional transformations and are able to successfully couple the given random sum with a random variable having the respective transformed distribution.
In the case of the standard normal distribution, which is a fixed point of the zero-bias transformation from \cite{GolRei97}, it appears natural to try to construct a close coupling with the zero biased distribution of the random sum under consideration. However, interestingly it turns out that we are only able to do so in the case of centered summands whereas for the general case an intermediate step involving a coupling of the index $N$ with its size biased distribution is required for the proof. Nevertheless, the zero-bias transformation or rather an extension of it to non-centered random variables, plays an important role for our argument. This combination of two coupling constructions which belong to the classical tools of Stein's method for normal approximation is a new feature lying at the heart of our approach. \\ The remainder of the article is structured as follows: In Section \ref{results} we review the relevant probability distances, the size biased distribution and state our quantitative results on the normal approximation of random sums. Furthermore, we prove new identities for the distance of a nonnegative random variable to its size-biased distribution in three prominent metrics and show that for some concrete distributions, natural couplings are $L^1$-optimal and, hence, yield the Wasserstein distance. In Section \ref{stein} we collect necessary facts from Stein's method of normal approximation and introduce a variant of the zero-bias transformation, which we need for the proofs of our results. Then, in Section \ref{proof}, the proof of our main theorems, Theorem \ref{maintheo} and Theorem \ref{meanzero} is given. Finally, Section \ref{Appendix} contains the proofs of some auxiliary results, needed for the proof of the Berry-Esseen bounds in Section \ref{proof}. 
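For the reader's convenience, we recall the defining property of the zero-bias transformation from \cite{GolRei97}, which is used repeatedly in the sequel (this is the standard definition; the variant employed later in the paper extends it to non-centered random variables).

```latex
If $W$ is a real random variable with $E[W]=0$ and
$\Var(W)=\sigma^2\in(0,\infty)$, then a random variable $W^z$ is said to have
the \textit{$W$-zero biased distribution}, if
\begin{equation*}
 E\bigl[Wf(W)\bigr]=\sigma^2 E\bigl[f'(W^z)\bigr]
\end{equation*}
for all Lipschitz-continuous functions $f$. The normal distribution
$N(0,\sigma^2)$ is the unique fixed point of this transformation, i.e.\ the
unique distribution with $W^z\stackrel{\mathcal{D}}{=}W$.
```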
\section{Main results}\label{results} Recall that for probability measures $\mu$ and $\nu$ on $(\mathbb{R} ,\mathcal{B}(\mathbb{R} ))$, their \textit{Kolmogorov distance} is defined by \begin{equation*} d_\mathcal{K}(\mu,\nu):=\sup_{z\in\mathbb{R} }\babs{\mu\bigl((-\infty,z]\bigr)- \nu\bigl((-\infty,z]\bigr) }=\fnorm{F-G}\,, \end{equation*} where $F$ and $G$ are the distribution functions corresponding to $\mu$ and $\nu$, respectively. Also, if both $\mu$ and $\nu$ have finite first absolute moment, then one defines the \textit{Wasserstein distance} between them via \begin{equation*} d_\mathcal{W}(\mu,\nu):=\sup_{h\in\Lip(1)}\Babs{\int h d\mu -\int h d\nu}\,, \end{equation*} where $\Lip(1)$ denotes the class of all Lipschitz-continuous functions $g$ on $\mathbb{R} $ with Lipschitz constant not greater than $1$. In view of Lemma \ref{distlemma} below, we also introduce the \textit{total variation distance} between $\mu$ and $\nu$ by \begin{equation*} d_{TV}(\mu,\nu):=\sup_{B\in\mathcal{B}(\mathbb{R} )}\babs{\mu(B)-\nu(B)}\,. \end{equation*} If the real-valued random variables $X$ and $Y$ have distributions $\mu$ and $\nu$, respectively, then we simply write $d_\mathcal{K}(X,Y)$ for $d_\mathcal{K}\bigl(\mathcal{L}(X),\mathcal{L}(Y)\bigr)$ and similarly for the Wasserstein and total variation distances and also speak of the respective distance between the random variables $X$ and $Y$. Before stating our results, we have to review the concept of the size-biased distribution corresponding to a distribution supported on $[0,\infty)$. Thus, if $X$ is a nonnegative random variable with $0<E[X]<\infty$, then a random variable $X^s$ is said to have the \textit{$X$-size biased distribution}, if for all bounded and measurable functions $h$ on $[0,\infty)$ \begin{equation}\label{sbdef} E[Xh(X)]=E[X]E[h(X^s)]\,, \end{equation} see, e.g. \cite{GolRin96}, \cite{ArrGol} or \cite{AGK}.
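For a distribution concentrated on finitely many points, the identity \eqref{sbdef} can be verified by exact arithmetic. The following sketch (an added illustration; the three-point distribution and the test function are arbitrary choices, not taken from the text) computes the size-biased probability mass function $x\mapsto xP(X=x)/E[X]$ and checks \eqref{sbdef}:

```python
from fractions import Fraction

# Exact check of the defining identity E[X h(X)] = E[X] E[h(X^s)] for an
# arbitrary three-point distribution (illustrative; not from the paper).
pmf = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}
mean = sum(x * p for x, p in pmf.items())             # E[X] = 5/3

# The size-biased pmf reweights each atom x by the factor x / E[X].
pmf_s = {x: x * p / mean for x, p in pmf.items()}
assert sum(pmf_s.values()) == 1                       # a probability measure

def h(x):                                             # a bounded test function
    return x * x + 1

lhs = sum(x * h(x) * p for x, p in pmf.items())       # E[X h(X)]
rhs = mean * sum(h(x) * q for x, q in pmf_s.items())  # E[X] E[h(X^s)]
```

Here `lhs` and `rhs` agree exactly, as \eqref{sbdef} requires; Lemma \ref{distlemma} below could be checked for this toy distribution in the same way.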
Equivalently, the distribution of $X^s$ has Radon-Nikodym derivative with respect to the distribution of $X$ given by \begin{equation*} \frac{P(X^s\in dx)}{P(X\in dx)}=\frac{x}{E[X]}\,, \end{equation*} which immediately implies both existence and uniqueness of the $X$-size biased distribution. Also note that \eqref{sbdef} holds true for all measurable functions $h$ for which $E\abs{Xh(X)}<\infty$. In consequence, if $X\in L^p(P)$ for some $1\leq p<\infty$, then $X^s\in L^{p-1}(P)$ and \begin{equation*} E\Bigl[\bigl(X^s\bigr)^{p-1}\Bigr]=\frac{E\big[X^p\bigr]}{E[X]}\,. \end{equation*} The following lemma, which seems to be new and might be of independent interest, gives identities for the distance of $X$ to $X^s$ in the three metrics mentioned above. The proof is deferred to the end of this section. \begin{lemma}\label{distlemma} Let $X$ be a nonnegative random variable such that $0<E[X]<\infty$. Then, the following identities hold true: \begin{enumerate}[{\normalfont (a)}] \item $\displaystyle d_\mathcal{K}(X,X^s)=d_{TV}(X,X^s)=\frac{E\abs{X-E[X]}}{2E[X]}$. \item If additionally $E[X^2]<\infty$, then $\displaystyle d_\mathcal{W}(X,X^s)=\frac{\Var(X)}{E[X]}$. \end{enumerate} \end{lemma} \begin{remark}\label{sbrem} \begin{enumerate}[(a)] \item It is well known (see e.g. \cite{Dud02}) that the Wasserstein distance $d_\mathcal{W}(X,Y)$ between the real random variables $X$ and $Y$ has the dual representation \begin{equation}\label{dualwas} d_\mathcal{W}(X,Y)=\inf_{(\hat{X},\hat{Y})\in\pi(X,Y)}E\abs{\hat{X}-\hat{Y}}\,, \end{equation} where $\pi(X,Y)$ is the collection of all couplings of $X$ and $Y$, i.e. of all pairs $(\hat{X},\hat{Y})$ of random variables on a joint probability space such that $\hat{X}\stackrel{\mathcal{D}}{=}X$ and $\hat{Y}\stackrel{\mathcal{D}}{=}Y$. Also, the infimum in \eqref{dualwas} is always attained, e.g. 
by the quantile transformation: If $U$ is uniformly distributed on $(0,1)$ and if, for a distribution function $F$ on $\mathbb{R} $, we let \[F^{-1}(p):=\inf\{x\in\mathbb{R} \,:\,F(x)\geq p\},\quad p\in(0,1)\,,\] denote the corresponding \textit{generalized inverse} of $F$, then $F^{-1}(U)$ is a random variable with distribution function $F$. Thus, letting $F_X$ and $F_Y$ denote the distribution functions of $X$ and $Y$, respectively, it was proved e.g. in \cite{Major78} that \[\inf_{(\hat{X},\hat{Y})\in\pi(X,Y)}E\abs{\hat{X}-\hat{Y}}=E\abs{F_X^{-1}(U)-F_Y^{-1}(U)}=\int_0^1\abs{F_X^{-1}(t)-F_Y^{-1}(t)}dt\,.\] Furthermore, it is not difficult to see that $X^s$ is always stochastically larger than $X$, implying that there is a coupling $(\hat{X},\hat{X^s})$ of $X$ and $X^s$ such that $\hat{X^s}\geq \hat{X}$ (see \cite{ArrGol} for details). In fact, this property is already achieved by the coupling via the quantile transformation. By the dual representation \eqref{dualwas} and the fact that the coupling via the quantile transformation yields the minimum $L^1$ distance in \eqref{dualwas} we can conclude that \textit{every} coupling $(\hat{X},\hat{X^s})$ such that $\hat{X^s}\geq \hat{X}$ is optimal in this sense, since \begin{align*} E\babs{\hat{X^s}-\hat{X}}&=E\bigl[\hat{X^s}\bigr]-E\bigl[\hat{X}\bigr]=E\bigl[F_{X^s}^{-1}(U)\bigr]-E\bigl[F_X^{-1}(U)\bigr]\\ &=E\babs{F_{X^s}^{-1}(U)-F_X^{-1}(U)}=d_\mathcal{W}(X,X^s)\,. \end{align*} Note also that, by the last computation and by part (b) of Lemma \ref{distlemma}, we have \begin{equation*} E\bigl[X^s\bigr]-E\bigl[X\bigr]=E\bigl[\hat{X^s}\bigr]-E\bigl[\hat{X}\bigr]=d_\mathcal{W}(X,X^s)=\frac{\Var(X)}{E[X]}\,. \end{equation*} \item Due to a result by Steutel \cite{Steu73}, the distribution of $X$ is infinitely divisible if and only if there exists a coupling $(X,X^s)$ of $X$ and $X^s$ such that $X^s-X$ is nonnegative and independent of $X$ (see e.g. \cite{ArrGol} for a nice exposition and a proof of this result).
According to (a) such a coupling always achieves the minimum $L^1$-distance. \item It might seem curious that according to part (a) of Lemma \ref{distlemma}, the Kolmogorov distance and the total variation distance between a nonnegative random variable and one with its size biased distribution always coincide. Indeed, this holds true since for each Borel-measurable set $B\subseteq \mathbb{R} $ we have the inequality \begin{align*} \babs{P(X^s\in B)-P(X\in B)}&\leq\babs{P(X^s> m)-P(X>m)}\\ &\leq d_\mathcal{K}(X,X^s)\,, \end{align*} where $m:=E[X]$. Thus, the supremum in the definition \begin{equation*} d_{TV}(X,X^s)=\sup_{B\in\mathcal{B}(\mathbb{R} )}\babs{P(X^s\in B)-P(X\in B)} \end{equation*} of the total variation distance is attained for the set $B=(m,\infty)$. This can be proved and explained briefly in the following way: For $t\in\mathbb{R} $, using the defining property \eqref{sbdef} of the size biased distribution, we can write \begin{equation*} H(t):=P(X^s\leq t)-P(X\leq t)=m^{-1}E\bigl[(X-m)1_{\{X\leq t\}}\bigr]\,. \end{equation*} Thus, for $s<t$ we have \begin{equation*} H(t)-H(s)=m^{-1} E\bigl[(X-m)1_{\{s<X\leq t\}}\bigr]\,, \end{equation*} and, hence, $H$ is decreasing on $(-\infty,m)$ and increasing on $(m,\infty)$. Thus, for every Borel set $B\subseteq\mathbb{R} $ we conclude that \begin{align*} &\; P(X^s\in B)-P(X\in B)=\int_\mathbb{R} 1_B(t)dH(t)\leq \int_\mathbb{R} 1_{B\cap (m,\infty)}(t)dH(t)\\ &\leq \int_\mathbb{R} 1_{(m,\infty)}(t)dH(t)=P(X^s>m)-P(X>m)\,. \end{align*} Note that for this argument we relied heavily on the defining property \eqref{sbdef} of the size biased distribution, which guaranteed the monotonicity property of the difference $H$ of the distribution functions of $X^s$ and $X$, respectively. Since $X^s$ is stochastically larger than $X$, one might suspect that the coincidence of the total variation and the Kolmogorov distance holds true in this more general situation.
However, observe that the fact that $X^s$ dominates $X$ stochastically only implies that $H\leq0$ but that it is the monotonicity of $H$ on $(-\infty,m)$ and on $(m,\infty)$ that was crucial for the derivation. \end{enumerate} \end{remark} \begin{example}\label{exdist} \begin{enumerate}[(a)] \item Let $X\sim\Poi(\lambda)$ have the Poisson distribution with parameter $\lambda>0$. From the Stein characterization of $\Poi(\lambda)$ (see \cite{Ch75}) it is known that \begin{equation*} E[Xf(X)]=\lambda E[f(X+1)]=E[X]E[f(X+1)] \end{equation*} for all bounded and measurable $f$. Hence, $X+1$ has the $X$-size biased distribution. As $X+1\geq X$, by Remark \ref{sbrem} this coupling yields the minimum $L^1$-distance between $X$ and $X^s$, which is equal to $1$ in this case. \item Let $n$ be a positive integer, $p\in(0,1]$ and let $X_1,\dotsc,X_n$ be i.i.d. random variables such that $X_1\sim\Bern(p)$. Then, \[X:=\sum_{j=1}^n X_j\sim\Bin(n,p)\] has the Binomial distribution with parameters $n$ and $p$. From the construction in \cite{GolRin96} one easily sees that \[X^s:=\sum_{j=2}^{n}X_j+1\] has the $X$-size biased distribution. As $X^s\geq X$, by Remark \ref{sbrem} this coupling yields the minimum $L^1$-distance between $X$ and $X^s$, which is equal to \begin{equation*} d_\mathcal{W}(X,X^s)=E[1-X_1]=1-p=\frac{\Var(X)}{E[X]} \end{equation*} in accordance with Lemma \ref{distlemma}. \item Let $n,r,s$ be positive integers such that $n\leq r+s$ and let $X\sim\Hyp(n;r,s)$ have the Hypergeometric distribution with parameters $n,r$ and $s$, i.e. \begin{equation*} P(X=k)=\frac{\binom{r}{k}\binom{s}{n-k}}{\binom{r+s}{n}}\,,\quad k=0,1,\dotsc,n \end{equation*} with $E[X]=\frac{nr}{r+s}$. Imagine an urn with $r$ red and $s$ silver balls. If we draw $n$ times without replacement from this urn and denote by $X$ the total number of red balls drawn, then $X\sim\Hyp(n;r,s)$. For $j=1,\dotsc,n$ denote by $X_j$ the indicator of the event that a red ball is drawn at the $j$-th draw.
Then, $X=\sum_{j=1}^n X_j$ and since the $X_j$ are exchangeable, the well-known construction of a random variable $X^s$ with the $X$-size biased distribution from \cite{GolRin96} gives that $X^s=1+\sum_{j=2}^n X_j'$, where \begin{equation*} \mathcal{L}\bigl((X_2',\dotsc,X_n')\bigr)=\mathcal{L}\bigl((X_2,\dotsc,X_n)\,\bigl|\,X_1=1\bigr)\,. \end{equation*} But given $X_1=1$, the sum $\sum_{j=2}^n X_j$ has the Hypergeometric distribution with parameters $n-1,r-1$ and $s$ and, hence, \begin{equation*} X^s\stackrel{\mathcal{D}}{=}Y+1\,,\quad\text{where } Y\sim\Hyp(n-1;r-1,s)\,. \end{equation*} In order to construct an $L^1$-optimal coupling of $X$ and $X^s$, fix one of the red balls in the urn and, for $j=2,\dotsc,n$, denote by $Y_j$ the indicator of the event that at the $j$-th draw this fixed red ball is drawn. Then, it is not difficult to see that \begin{align*} Y&:=1_{\{X_1=1\}}\sum_{j=2}^n X_j+1_{\{X_1=0\}}\sum_{j=2}^n(X_j-Y_j)\sim\Hyp(n-1;r-1,s) \end{align*} and, hence, \begin{align*} X^s&:=Y+1=1_{\{X_1=1\}}\sum_{j=2}^n X_j+1_{\{X_1=0\}}\sum_{j=2}^n(X_j-Y_j)+1\\ &= 1_{\{X_1=1\}}X+1_{\{X_1=0\}}\Bigl(X+1-\sum_{j=2}^n Y_j\Bigr) \end{align*} has the $X$-size biased distribution. Note that since $\sum_{j=2}^n Y_j\leq 1$ we have \begin{equation*} X^s-X=1_{\{X_1=0\}}\Bigl(1-\sum_{j=2}^n Y_j\Bigr)\geq0\,, \end{equation*} and consequently, by Remark \ref{sbrem} (a), the coupling $(X,X^s)$ is optimal in the $L^1$-sense and yields the Wasserstein distance between $X$ and $X^s$: \begin{equation*} d_\mathcal{W}(X,X^s)=E\babs{X^s-X}=\frac{\Var(X)}{E[X]}=\frac{n\frac{r}{r+s}\frac{s}{r+s}\frac{r+s-n}{r+s-1}}{\frac{nr}{r+s}}=\frac{s(r+s-n)}{(r+s)(r+s-1)}\,. \end{equation*} \end{enumerate} \end{example} We now turn back to the asymptotic behaviour of random sums. We will rely on the following general assumptions and notation, which we adopt and extend from \cite{Rob48}.
\begin{assumption}\label{genass} The random variables $N,X_1,X_2,\dotsc$ are independent, $X_1,X_2,\dotsc$ being i.i.d. and such that $E\abs{X_1}^3<\infty$ and $E[N^3]<\infty$. Furthermore, we let \begin{align*} \alpha&:=E[N],\quad \beta^2:=E[N^2],\quad\gamma^2:=\Var(N)=\beta^2-\alpha^2,\quad \delta^3:=E[N^3],\\ a&:=E[X_1],\quad b^2:=E[X_1^2],\quad c^2:=\Var(X_1)=b^2-a^2\text{ and } d^3:=E\babs{X_1-E[X_1]}^3. \end{align*} \end{assumption} By Wald's equation and the Blackwell-Girshick formula, from Assumption \ref{genass} we have \begin{equation}\label{meanvar} \mu:=E[S]=\alpha a\quad\text{and}\quad\sigma^2:=\Var(S)=\alpha c^2+a^2\gamma^2\,. \end{equation} The main purpose of this paper is to assess the accuracy of the standard normal approximation to the normalized version \begin{equation}\label{defw} W:=\frac{S-\mu}{\sigma}=\frac{S-\alpha a}{\sqrt{\alpha c^2+a^2\gamma^2}} \end{equation} of $S$ measured by the Kolmogorov and the Wasserstein distance, respectively. As can be seen from the paper \cite{Rob48}, under the general assumption that \[\sigma^2=\alpha c^2+a^2\gamma^2\to\infty\,, \] there are three typical situations in which $W$ is asymptotically normal, which we will now briefly review. \begin{enumerate}[1)] \item $c\not=0\not=a$ and $\gamma^2=o(\alpha)$ \item $a=0\not=c$ and $\gamma=o(\alpha)$ \item $N$ itself is asymptotically normal and at least one of $a$ and $c$ is different from zero. \end{enumerate} We remark that 1) roughly means that $N$ tends to infinity in a certain sense, but such that it only fluctuates slightly around its mean $\alpha$ and, thus, behaves more or less as the constant $\alpha$ (tending to infinity). If $c=0$ and $a\not=0$, then we have \begin{equation*} S=aN\quad\text{a.s.} \end{equation*} and asymptotic normality of $S$ is equivalent to that of $N$. For this reason, unless specifically stated otherwise, we will from now on assume that $c\not=0$.
However, we would like to remark that all bounds in which $c$ does not appear in the denominator also hold true in the case $c=0$. \begin{theorem}\label{maintheo} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Also, let $(N,N^s)$ be a coupling of $N$ with a random variable $N^s$ having the $N$-size biased distribution such that $N^s$ is also independent of $X_1,X_2,\dotsc$ and define $D:=N^s-N$. Then, we have the following bounds: \begin{enumerate}[{\normalfont (a)}] \item $\displaystyle d_\mathcal{W}(W,Z)\leq\frac{2c^2b\gamma^2}{\sigma^3}+\frac{3\alpha d^3}{\sigma^3}+\frac{\alpha a^2}{\sigma^2}\sqrt{\frac{2}{\pi}}\sqrt{\Var\bigl(E[D\,|\,N]\bigr)}\\ {}\hspace{2cm}+\frac{2\alpha a^2b}{\sigma^3}E\bigl[1_{\{D<0\}}D^2\bigr]+\frac{\alpha\abs{a}b^2}{\sigma^3}E[D^2]$ \item If, additionally, $D\geq0$, then we also have \begin{align*} d_\mathcal{K}(W,Z)&\leq\frac{(\sqrt{2\pi}+4)bc^2\alpha}{4\sigma^3}\sqrt{E[D^2]}+\frac{ d^3\alpha(3\sqrt{2\pi}+4)}{8\sigma^3}+ \frac{c^3\alpha}{\sigma^3}\notag\\ &\;+\Bigl(\frac{7}{2}\sqrt{2}+2\Bigr)\frac{\sqrt{\alpha} d^3}{c\sigma^2}+\frac{c^2\alpha}{\sigma^2}P(N=0)+\frac{ d^3\alpha}{c\sigma^2}E\bigl[N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\;+\frac{\alpha a^2}{\sigma^2}\sqrt{\Var\bigl(E[D\,|\,N]\bigr)} +\frac{\alpha \abs{a}b^2}{2\sigma^3}\sqrt{E\Bigl[\bigl(E\bigl[D^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]}\notag\\ &\;+\frac{\alpha \abs{a}b^2\sqrt{2\pi}}{8\sigma^3}E[D^2] +\frac{\alpha \abs{a}b}{\sigma^2}\sqrt{P(N=0)}\sqrt{E[D^2]}\notag\\ &\;+\frac{\alpha \abs{a}b^2}{c\sigma^2\sqrt{2\pi}}E\bigl[D^21_{\{N\geq1\}}N^{-1/2}\bigr]+\Bigl(\frac{ d^3\alpha \abs{a} b}{\sigma^2}+ \frac{\alpha bc}{\sigma^2\sqrt{2\pi}}\Bigr)E\bigl[D1_{\{N\geq1\}}N^{-1/2}\bigr]\,.
\end{align*} \end{enumerate} \end{theorem} \begin{remark}\label{mtrem} \begin{enumerate}[(a)] \item In many concrete situations, one has that a natural coupling of $N$ and $N^s$ yields $D\geq0$ and, hence, Theorem \ref{maintheo} gives bounds on both the Wasserstein and Kolmogorov distances (note that the fourth summand in the bound on $d_\mathcal{W}(W,Z)$ vanishes if $D\geq0$). For instance, by Remark \ref{sbrem} (b), this is the case if the distribution of $N$ is infinitely divisible. In this case, the random variables $D$ and $N$ can be chosen to be independent and, thus, our bounds can be simplified further (see Corollary \ref{infdiv} below). Indeed, since $N^s$ is always stochastically larger than $N$, by Remark \ref{sbrem} (a) it is always possible to construct a coupling $(N,N^s)$ such that $D=N^s-N\geq0$. \item However, although we know that a coupling of $N$ and $N^s$ such that $D=N^s-N\geq0$ is always possible in principle, sometimes one would prefer working with a feasible and natural coupling which does not have this property. For instance, this is the case in the situation of Corollary \ref{divisible} below. This is why we have not restricted ourselves to the case $D\geq0$ but allow for arbitrary couplings $(N,N^s)$. We mention that we also have a bound on the Kolmogorov distance between $W$ and a standard normally distributed $Z$ in this more general situation, which is given by \begin{align*} d_\mathcal{K}(W,Z)&\leq\sum_{j=1}^7 B_j\,, \end{align*} where $B_1, B_2, B_4, B_5, B_6$ and $B_7$ are defined in \eqref{e1z7}, \eqref{e12gen}, \eqref{e222}, \eqref{e221b}, \eqref{boundr1} and \eqref{boundr2}, respectively, and \begin{equation*} B_3:=\frac{\alpha a^2}{\sigma^2}\sqrt{\Var\bigl(E[D\,|\,N]\bigr)}\,. \end{equation*} It is this bound that is actually proved in Section \ref{proof}. Since it is given by a rather long expression in the most general case, we have decided, however, not to present it within Theorem \ref{maintheo}.
\item We mention that our proof of the Wasserstein bounds given in Theorem \ref{maintheo} is only roughly five pages long and is not at all technical but rather makes use of probabilistic ideas and concepts. The extended length of our derivation is simply due to our ambition to present Kolmogorov bounds as well, which, as usual within Stein's method, demand much more technical work. \end{enumerate} \end{remark} The next theorem treats the special case of centered summands. \begin{theorem}\label{meanzero} Suppose that Assumption \ref{genass} holds with $a=E[X_1]=0$, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Then, \begin{align*} d_\mathcal{W}(W,Z)&\leq\frac{2\gamma}{\alpha}+\frac{3 d^3}{c^3\sqrt{\alpha}} \quad\text{and}\\ d_\mathcal{K}(W,Z)&\leq\frac{(\sqrt{2\pi}+4)\gamma}{4\alpha}+\Bigl(\frac{ d^3(3\sqrt{2\pi}+4)}{8c^3}+ 1\Bigr)\frac{1}{\sqrt{\alpha}} +\Bigl(\frac{7}{2}\sqrt{2}+2\Bigr)\frac{ d^3}{c^3\alpha}\notag\\ &\;+P(N=0)+\biggl(\frac{ d^3}{c^3}+ \frac{\gamma}{\sqrt{\alpha}\sqrt{2\pi}}\biggr)\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}\,. \end{align*} \end{theorem} \begin{remark}\label{remmz} \begin{enumerate}[(a)] \item The proof will show that Theorem \ref{meanzero} holds as long as $E[N^2]<\infty$. Thus, Assumption \ref{genass} could be slightly relaxed in this case. \item Theorem \ref{meanzero} is not a direct consequence of Theorem \ref{maintheo} as it is stated above. Actually, instead of Theorem \ref{maintheo} we could state a result which would reduce to Theorem \ref{meanzero} if $a=0$, but the resulting bounds would look more cumbersome in the general case. Also, they would be of the same order as the bounds presented in Theorem \ref{maintheo} in the case that $a\not=0$. This is why we have refrained from presenting these bounds in the general case but have chosen to prove Theorem \ref{maintheo} and Theorem \ref{meanzero} in parallel.
Note that, if $a\not=0$, then a necessary condition for our bounds to imply the CLT is that \begin{equation}\label{neccond} \frac{\alpha}{\sigma^3}E[D^2]=o(1)\quad\text{and}\quad\frac{\alpha}{\sigma^2}\sqrt{\Var\bigl(E[D|N]\bigr)}=o(1)\,. \end{equation} This should be compared to the conditions which imply asymptotic normality for $N$ by size-bias couplings given in \cite{GolRin96}, namely \begin{equation}\label{condGolRin} \frac{\alpha}{\gamma^3}E[D^2]=o(1)\quad\text{and}\quad\frac{\alpha}{\gamma^2}\sqrt{\Var\bigl(E[D|N]\bigr)}=o(1)\,. \end{equation} If \eqref{condGolRin} holds, then from \cite{GolRin96} we know that $N$ is asymptotically normal and, as was shown within the proof of Lemma 1 in \cite{Rob48}, this implies that $\gamma=o(\alpha)$. Since, if $a\not=0$, \eqref{condGolRin} implies \eqref{neccond}, we can conclude from Theorems \ref{maintheo} and \ref{meanzero} that $W$ is asymptotically normal. In a nutshell, if the bounds from \cite{GolRin96} on the distance to normality of $N$ tend to zero, then so do our bounds and, hence, yield the CLT for $W$. However, the validity of \eqref{condGolRin} is neither necessary for \eqref{neccond} to hold nor for our bounds to imply asymptotic normality of $W$ (see Remark \ref{hyprem} (b) below). \item For distribution functions $F$ and $G$ on $\mathbb{R} $ and $1\leq p<\infty$, one defines their $L^p$-distance by \begin{equation*} \pnorm{F-G}:=\biggl(\int_\mathbb{R} \babs{F(x)-G(x)}^p dx\biggr)^{1/p}\,. \end{equation*} It is known (see \cite{Dud02}) that $\einsnorm{F-G}$ coincides with the Wasserstein distance of the corresponding distributions $\mu$ and $\nu$, say. By H\"older's inequality, for $1\leq p<\infty$, we have \begin{equation*} \pnorm{F-G}\leq d_\mathcal{K}(\mu,\nu)^{\frac{p-1}{p}}\cdot d_\mathcal{W}(\mu,\nu)^{\frac{1}{p}}\,. \end{equation*} Thus, our results immediately yield bounds on the $L^p$-distances of $\mathcal{L}(W)$ and $N(0,1)$. 
\item It would be possible to drop the assumption that the summands be identically distributed. For reasons of clarity of the presentation, we have, however, decided to stick to the i.i.d. setting. See also the discussion of possible generalizations before the proof of Lemma \ref{distlemma} at the end of this section. \end{enumerate} \end{remark} \begin{cor}\label{infdiv} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Furthermore, assume that the distribution of the index $N$ is infinitely divisible. Then, we have \begin{align*} &d_\mathcal{W}(W,Z)\leq \frac{2c^2b\gamma^2+3\alpha d^3}{\sigma^3} +\frac{(\alpha\delta^3-\alpha^2\gamma^2+\gamma^4-\beta^4)\abs{a}b^2}{\alpha\sigma^3}\quad\text{and}\\ &d_\mathcal{K}(W,Z)\leq\frac{ d^3\alpha(3\sqrt{2\pi}+4)}{8\sigma^3}+ \frac{c^3\alpha}{\sigma^3}+\biggl(\frac{7}{2}\sqrt{2}+2\biggr)\frac{\sqrt{\alpha} d^3}{c\sigma^2}+\frac{c^2\alpha}{\sigma^2}P(N=0)\\ &\;+\frac{\abs{a}b^2(\delta^3\alpha+\gamma^4-\beta^4-\gamma^2\alpha^2)}{\alpha\sigma^3}\biggl(\frac{\sqrt{2\pi}}{8}+\frac{1}{2}\biggr)\\ &\;+\sqrt{\delta^3\alpha+\gamma^4-\beta^4-\gamma^2\alpha^2}\biggl(\frac{(\sqrt{2\pi}+4)bc^2}{4\sigma^3}+\sqrt{P(N=0)}\frac{\abs{a}b}{\sigma^2}\biggr)\\ &\;+E\bigl[1_{\{N\geq1\}}N^{-1/2}\bigr]\biggl(\frac{\abs{a}b^2(\delta^3\alpha+\gamma^4-\beta^4-\gamma^2\alpha^2)}{c\alpha\sigma^2\sqrt{2\pi}}+\frac{\gamma^2 d^3\abs{a}b}{\sigma^2}+\frac{ d^3\alpha}{c\sigma^2} +\frac{\gamma^2bc}{\sigma^2\sqrt{2\pi}}\biggr) \end{align*} \end{cor} \begin{proof} By Remark \ref{sbrem} (b) we can choose $D\geq0$ independent of $N$ such that $N^s=N+D$ has the $N$-size biased distribution. Thus, by independence we obtain \begin{align*} \Var(D)&=\Var(N^s)-\Var(N)=E\bigl[(N^s)^2\bigr]-E[N^s]^2-\gamma^2\\ &=\frac{E[N^3]}{E[N]}-\left(\frac{E[N^2]}{E[N]}\right)^2-\gamma^2\\ &=\frac{\delta^3}{\alpha}-\frac{\beta^4}{\alpha^2}-\gamma^2\,. 
\end{align*} This gives \[E[D^2]=\Var(D)+E[D]^2=\frac{\delta^3}{\alpha}+\frac{\gamma^4-\beta^4}{\alpha^2}-\gamma^2\,.\] Also, \[\Var\bigl(E[D|N]\bigr)=\Var\bigl(E[D]\bigr)=0\quad\text{and}\quad \sqrt{E\Bigl[\bigl(E\bigl[D^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]}=E[D^2] \] in this case. Now, the claim follows from Theorem \ref{maintheo}.\\ \end{proof} In the case that $N$ is constant, the results from Theorem \ref{maintheo} reduce to the known optimal convergence rates for sums of i.i.d. random variables with finite third moment, albeit with non-optimal constants (see e.g. \cite{Shev11} and \cite{Gol10} for comparison). \begin{cor}\label{ncons} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Also, assume that the index $N$ is a positive constant. Then, \begin{align*} d_\mathcal{W}(W,Z)&\leq\frac{3 d^3}{c^3\sqrt{N}}\quad\text{and}\\ d_\mathcal{K}(W,Z)&\leq \frac{1}{\sqrt{N}}\biggl(1+\Bigl(\frac{7}{2}\bigl(1+\sqrt{2}\bigr)+\frac{3\sqrt{2\pi}}{8}\Bigr)\frac{ d^3}{c^3}\biggr)\,. \end{align*} \end{cor} \begin{proof} In this case, we can choose $N^s=N$ yielding $D=0$ and the result follows from Theorem \ref{maintheo}.\\ \end{proof} Another typical situation in which the distribution of $W$ may be well approximated by the normal is when the index $N$ is itself a sum of many i.i.d. variables. Our results yield very explicit convergence rates in this special case. This will be exemplified for the Wasserstein distance by the next corollary. Using the bound presented in Remark \ref{mtrem} (b) one would get a bound on the Kolmogorov distance, which is more complicated but of the same order of magnitude. A different way to prove bounds for the CLT by Stein's method in this special situation is presented in Theorem 10.6 of \cite{CGS}.
Their method relies on a general bound for the error of normal approximation to the distribution of a non-linear statistic of independent random variables which can be written as a linear statistic plus a small remainder term as well as on truncation and conditioning on $N$ in order to apply the classical Berry-Esseen theorem. Though our method also makes use of conditioning on $N$, it is more directly tied to random sums and also relies on (variations of) classical couplings in Stein's method (see the proof in Section \ref{proof} for details). \begin{cor}\label{divisible} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Additionally, assume that the distribution of the index $N$ is such that $N\stackrel{\mathcal{D}}{=}N_1+\ldots+N_n$, where $n\in\mathbb{N} $ and $N_1,\dotsc,N_n$ are i.i.d. nonnegative random variables such that $E[N_1^3]<\infty$. Then, using the notation \begin{align*} \alpha_1&:=E[N_1]\,,\quad \beta_1^2:=E[N_1^2]\,,\quad\gamma_1^2:=\Var(N_1)\,,\quad\delta_1^3:=E[N_1^3] \quad\text{and}\\ \sigma_1^2&:=c^2\alpha_1 +a^2\gamma_1^2 \end{align*} we have \begin{align*} d_\mathcal{W}(W,Z)&\leq\frac{1}{\sqrt{n}}\biggl(\frac{2c^2b\gamma_1^2}{\sigma_1^3}+\frac{3\alpha_1 d^3}{\sigma_1^3}+\sqrt{\frac{2}{\pi}}\frac{\alpha_1 a^2\gamma_1^2}{\sigma_1^2} +\frac{2\alpha_1 (a^2b+\abs{a}b^2)}{\sigma_1^3}\Bigl(\frac{\delta_1^3}{\alpha_1}-\beta_1^2\Bigr)\biggr)\,. 
\end{align*} \end{cor} \begin{proof} From \cite{GolRin96} (see also \cite{CGS}) it is known that letting $N_1^s$ be independent of $N_1,\dotsc,N_n$ and have the $N_1$-size biased distribution, a random variable with the $N$-size biased distribution is given by \[N^s:= N_1^s+\sum_{j=2}^nN_j\,,\quad\text{yielding}\quad D=N_1^s-N_1\,.\] Thus, by independence and since $N_1,\dotsc,N_n$ are i.i.d., we have \[E[D|N]=E[N_1^s]-\frac{1}{n}N\] and, hence, \[\Var\bigl(E[D|N]\bigr)=\frac{\Var(N)}{n^2}=\frac{\gamma_1^2}{n}\,.\] Clearly, we have \begin{equation*} \alpha=n\alpha_1\,,\quad \gamma^2=n\gamma_1^2\quad\text{and}\quad\sigma^2=n\sigma_1^2\,. \end{equation*} Also, using independence and \eqref{sbdef}, \begin{align*} E[D^2]&=E\bigl[N_1^2-2N_1N_1^s+(N_1^s)^2\bigr]=\beta_1^2-2\alpha_1E[N_1^s]+E\bigl[(N_1^s)^2\bigr]\notag\\ &=\beta_1^2-2\alpha_1\frac{\beta_1^2}{\alpha_1}+\frac{\delta_1^3}{\alpha_1}=\frac{\delta_1^3}{\alpha_1}-\beta_1^2\,. \end{align*} Thus, the bound follows from Theorem \ref{maintheo}.\\ \end{proof} Very prominent examples of random sums, which are known to be asymptotically normal, are Poisson and Binomial random sums. The respective bounds, which follow from our abstract findings, are presented in the next two corollaries. \begin{cor}\label{Poisson} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Assume further that $N\sim\Poi(\lambda)$ has the Poisson distribution with parameter $\lambda>0$. 
Then, \begin{align*} d_\mathcal{W}(W,Z)&\leq\frac{1}{\sqrt{\lambda}}\Bigl(\frac{2c^2}{b^2}+\frac{3 d^3}{b^3}+\frac{\abs{a}}{b}\Bigr)\quad\text{and}\\ d_\mathcal{K}(W,Z)&\leq\frac{1}{\sqrt{\lambda}}\biggl(\frac{\sqrt{2\pi}}{4}+1+\frac{(3\sqrt{2\pi}+4) d^3}{8b^3}+\frac{c^3}{b^3}+\Bigl(\frac{7}{2}\sqrt{2}+3\Bigr)\frac{ d^3}{cb^2}\\ &\qquad+\frac{\abs{a}(\sqrt{2\pi}+4+8 d^3)}{8b}+\frac{\abs{a}}{c\sqrt{2\pi}}+\frac{c}{b\sqrt{2\pi}}\biggr)+\frac{c^2}{b^2}e^{-\lambda}+\frac{\abs{a}}{b}e^{-\lambda/2}\,. \end{align*} \end{cor} \begin{proof} In this case, by Example \ref{exdist} (a), we can choose $D=1$, yielding that \[E[D^2]=1\quad\text{and}\quad\Var\bigl(E[D|N]\bigr)=0\,.\] Note that \begin{equation*} E\bigl[1_{\{N\geq1\}}N^{-1/2}\bigr]\leq\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]} \end{equation*} by Jensen's inequality. Also, using $k+1\leq2k$ for all $k\in\mathbb{N} $, we can bound \begin{align*} E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]&=e^{-\lambda}\sum_{k=1}^\infty\frac{\lambda^k}{k k!}\leq 2e^{-\lambda}\sum_{k=1}^\infty\frac{\lambda^k}{(k+1) k!} =\frac{2}{\lambda}e^{-\lambda}\sum_{k=1}^\infty\frac{\lambda^{k+1}}{(k+1)!}\\ &=\frac{2}{\lambda}e^{-\lambda}\sum_{l=2}^\infty\frac{\lambda^l}{l!}\leq\frac{2}{\lambda}\,. \end{align*} Hence, \begin{equation*} E\bigl[1_{\{N\geq1\}}N^{-1/2}\bigr]\leq\frac{\sqrt{2}}{\sqrt{\lambda}}\,. \end{equation*} Noting that \[\alpha=\gamma^2=\lambda\quad\text{and}\quad\sigma^2=\lambda(a^2+c^2)=\lambda b^2\,,\] the result follows from Theorem \ref{maintheo}.\\ \end{proof} \begin{remark}\label{poissrem} The Berry-Esseen bound presented in Corollary \ref{Poisson} is of the same order in $\lambda$ as the bound given in \cite{KorShev12}, which seems to be the best currently available, but has a worse constant. However, it should be mentioned that the bound in \cite{KorShev12} was obtained using special properties of the Poisson distribution and does not seem likely to be easily transferable to other distributions of $N$.
\end{remark} \begin{cor}\label{binomial} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Furthermore, assume that $N\sim\Bin(n,p)$ has the Binomial distribution with parameters $n\in\mathbb{N} $ and $p\in(0,1]$. Then, \begin{align*} d_\mathcal{W}(W,Z)&\leq\frac{1}{\sqrt{np}\bigl(b^2-pa^2\bigr)^{3/2}}\biggl(\bigl(2c^2b+\abs{a}b^2\bigr)(1-p)+3 d^3\\ &\;+\sqrt{\frac{2}{\pi}}a^2p\sqrt{b^2-pa^2}\sqrt{1-p}\biggr)\quad\text{and}\\ d_\mathcal{K}(W,Z)&\leq\frac{1}{\sqrt{np}\bigl(b^2-pa^2\bigr)^{3/2}}\biggl(c^3+\frac{(\sqrt{2\pi}+4)bc^2\sqrt{1-p}}{4}+\frac{(3\sqrt{2\pi}+4) d^3}{8}\\ &\qquad+\frac{\abs{a}b^2\sqrt{1-p}}{2}+\frac{\abs{a}b^2\sqrt{2\pi}(1-p)}{8}\biggr)\\ &\;+\frac{1}{\sqrt{np}\bigl(b^2-pa^2\bigr)}\biggl(\Bigl(\frac{9}{2}\sqrt{2}+2\Bigr)\frac{ d^3}{c}+\sqrt{1-p}\bigl(a^2p+\sqrt{2}\abs{a}b d^3\bigr)\\ &\qquad+\frac{\sqrt{2(1-p)}b\bigl(2b^2-a^2\bigr)}{c\sqrt{2\pi}}\biggr)\\ &\;+\frac{c^2}{b^2-pa^2}(1-p)^n+\frac{\abs{a}b}{b^2-pa^2}(1-p)^{\frac{n+1}{2}}\,. \end{align*} \end{cor} \begin{remark}\label{binrem} Bounds for binomial random sums have also been derived in \cite{Sunk14} using a technique developed in \cite{Tik80}. Our bounds are of the same order of magnitude, $(np)^{-1/2}$. \end{remark} \begin{proof}[Proof of Corollary \ref{binomial}] Here, we clearly have \[\alpha=np\,,\quad \gamma^2=np(1-p)\quad\text{and}\quad\sigma^2=np(a^2(1-p)+c^2)\,.\] Also, using the same coupling as in Example \ref{exdist} (b) we have $D\sim\Bern(1-p)$, \[E[D^2]=E[D]=1-p\quad\text{and}\quad E[D|N]=1-\frac{N}{n}\,.\] This yields \[\Var\bigl(E[D|N]\bigr)=\frac{1}{n^2}\Var(N)=\frac{p(1-p)}{n}\,.\] We have $D^2=D$ and, by Cauchy-Schwarz, \begin{equation*} E\bigl[D1_{\{N\geq1\}}N^{-1/2}\bigr]\leq\sqrt{E[D^2]}\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}=\sqrt{1-p}\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}\,.
\end{equation*} Using \begin{equation*} \frac{1}{k}\binom{n}{k}\leq\frac{2}{n+1}\binom{n+1}{k+1}\leq\frac{2}{n}\binom{n+1}{k+1}\,,\quad 1\leq k\leq n\,, \end{equation*} we have \begin{align*} E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]&=\sum_{k=1}^n\frac{1}{k}\binom{n}{k}p^k(1-p)^{n-k}\leq\frac{2}{n}\sum_{k=1}^n\binom{n+1}{k+1}p^k (1-p)^{n-k}\\ &=\frac{2}{np}\sum_{l=2}^{n+1}\binom{n+1}{l}p^l(1-p)^{n+1-l}\leq\frac{2}{np}\,. \end{align*} Thus, \begin{equation*} E\bigl[D1_{\{N\geq1\}}N^{-1/2}\bigr]\leq\frac{\sqrt{2(1-p)}}{\sqrt{pn}}\quad\text{and}\quad E\bigl[1_{\{N\geq1\}}N^{-1/2}\bigr]\leq\frac{\sqrt{2}}{\sqrt{np}}\,. \end{equation*} Also, we can bound \begin{equation*} E\Bigl[\bigl(E\bigl[D^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]\leq E\bigl[D^4\bigr]=E[D]=1-p\,. \end{equation*} Now, using $a^2+c^2=b^2$, the claim follows from Theorem \ref{maintheo}.\\ \end{proof} \begin{cor}\label{hyper} Suppose that Assumption \ref{genass} holds, let $W$ be given by \eqref{defw} and let $Z$ have the standard normal distribution. Assume further that $N\sim\Hyp(n;r,s)$ has the Hypergeometric distribution with parameters $n,r,s\in\mathbb{N} $ such that $n\leq\min\{r,s\}$. 
Then, \begin{align*} d_\mathcal{W}(W,Z)&\leq \Bigl(\frac{nr}{r+s}\Bigr)^{-1/2}\biggl(\frac{2b}{c}\;\frac{s(r+s-n)}{(r+s)_2}+\frac{3 d^3}{c^3}+\frac{\abs{a}b^2}{c^2}\;\frac{s(r+s-n)}{(r+s)_2}\biggr)\notag\\ &\;+K\frac{a^2}{c^2}\sqrt{\frac{2}{\pi}}\biggl(\frac{\min\{r,s\}}{n(r+s)}\biggr)^{1/2} \quad\text{and}\\ d_\mathcal{K}(W,Z)&\leq\Bigl(\frac{nr}{r+s}\Bigr)^{-1/2}\Biggl[1+\frac{(\sqrt{2\pi}+4)b}{4c}\Bigl(\frac{s(r+s-n)}{(r+s)_2}\Bigr)^{1/2}\\ &\; +\Bigl(\frac{3\sqrt{2\pi}}{8}+\frac{9}{2}\sqrt{2}+\frac{5}{2}\Bigr)\frac{ d^3}{c^3} +\Bigl(\frac{\sqrt{2\pi}}{8}+1\Bigr)\frac{\abs{a}b^2}{c^3}\;\frac{s(r+s-n)}{(r+s)_2}\\ &\; + \Bigl(\frac{\abs{a}b^2}{c^3\sqrt{2\pi}}+\frac{\abs{a}b d^3}{c^2}+\frac{b}{c\sqrt{2\pi}}\Bigr)\biggl(\frac{2s(r+s-n)}{(r+s)_2}\biggr)^{1/2}\Biggr]\\ &\;+\frac{(s)_n}{(r+s)_n}+K\frac{a^2}{c^2}\biggl(\frac{\min\{r,s\}}{n(r+s)}\biggr)^{1/2}+\frac{\abs{a}b}{c^2}\biggl(\frac{(s)_n}{(r+s)_n}\;\frac{s(r+s-n)}{(r+s)_2}\biggr)^{1/2}\,, \end{align*} where $K$ is a numerical constant and $(m)_n=m(m-1)\cdot\ldots\cdot(m-n+1)$ denotes the lower factorial. \end{cor} \begin{proof} In this case, we clearly have \begin{align*} \alpha&=\frac{nr}{r+s}\,,\quad\gamma^2=\frac{nr}{r+s}\;\frac{s}{r+s}\;\frac{r+s-n}{r+s-1}=\frac{nr}{r+s}\;\frac{s(r+s-n)}{(r+s)_2}\quad\text{and}\\ \sigma^2&=\frac{nr}{r+s}\Bigl(c^2+a^2\frac{s}{r+s}\;\frac{r+s-n}{r+s-1}\Bigr)=\frac{nr}{r+s}\Bigl(c^2+a^2\frac{s(r+s-n)}{(r+s)_2}\Bigr)\,. \end{align*} Hence, \begin{align*} c^2\frac{nr}{r+s}\leq\sigma^2\leq\frac{nr}{r+s}\Bigl(c^2+a^2\frac{s}{r+s}\Bigr)=\frac{nr}{r+s}\Bigl(b^2-a^2\frac{r}{r+s}\Bigr)\,. \end{align*} We use the coupling constructed in Example \ref{exdist} (c) but write $N$ for $X$ and $N^s$ for $X^s$ here. Recall that we have \begin{equation*} D=N^s-N=1_{\{X_1=0\}}\Bigl(1-\sum_{j=2}^n Y_j\Bigr)\geq0\quad\text{and}\quad D=D^2\,. \end{equation*} Furthermore, we know that \begin{equation*} E[D]=E[D^2]=d_\mathcal{W}(N,N^s)=\frac{\Var(N)}{E[N]}=\frac{s(r+s-n)}{(r+s)_2}\,.
\end{equation*} Elementary combinatorics yield \begin{equation*} E\bigl[Y_j\,\bigl|\,X_1,\dotsc,X_n\bigr]=r^{-1}1_{\{X_j=1\}}\,. \end{equation*} Thus, \begin{align*} E\bigl[D\,\bigl|\,X_1,\dotsc,X_n\bigr]&=1_{\{X_1=0\}}-\frac{1}{r}1_{\{X_1=0\}}\sum_{j=2}^n1_{\{X_j=1\}}=1_{\{X_1=0\}}\Bigl(1-\frac{N}{r}\Bigr)\quad\text{and}\\ E\bigl[D\,\bigl|\,N\bigr]&=\Bigl(1-\frac{N}{r}\Bigr)P\bigl(X_1=0\,\bigl|\,N\bigr)=\Bigl(1-\frac{N}{r}\Bigr)\frac{n-N}{n}\\ &=\frac{(r-N)(n-N)}{nr}=\Bigl(1-\frac{N}{r}\Bigr)\Bigl(1-\frac{N}{n}\Bigr)\,. \end{align*} Using a computer algebra system, one may check that \begin{align}\label{eps} \Var\bigl(E\bigl[D\,\bigl|\,N\bigr]\bigr)&=\Bigl(n r s - n^3 r s - r^2 s + 5 n^2 r^2 s + 2 n^3 r^2 s - 8 n r^3 s - 8 n^2 r^3 s+ 2 n r s^5 \notag \\ &\; - n^3 r^3 s + 4 r^4 s + 10 n r^4 s + 3 n^2 r^4 s- 4 r^5 s - 3 n r^5 s + r^6 s + n s^2\notag\\ &\; - n^3 s^2 - 2 r s^2 + 4 n^2 r s^2 - 2 n^3 r s^2 - 14 n r^2 s^2 - 4 n^2 r^2 s^2+ n^3 r^2 s^2\notag\\ &\; + 12 r^3 s^2 + 20 n r^3 s^2 + 2 n^2 r^3 s^2 - 14 r^4 s^2 - 7 n r^4 s^2 + 4 r^5 s^2 - s^3\notag\\ &\; - n^2 s^3 + 2 n^3 s^3- 5 n r s^3 + 4 n^2 r s^3 + n^3 r s^3 + 13 r^2 s^3 + 8 n r^2 s^3\notag\\ &\;- 4 n^2 r^2 s^3 - 18 r^3 s^3 - 3 n r^3 s^3 + 6 r^4 s^3 + n s^4- n^3 s^4 + 6 r s^4 - 4 n r s^4\notag\\ &\;- 2 n^2 r s^4 - 10 r^2 s^4 + 3 n r^2 s^4 + 4 r^3 s^4 + s^5 - 2 n s^5 + n^2 s^5 - 2 r s^5\notag\\ &\; + r^2 s^5\Bigr)\Bigl(nr(r + s)^2(r+s-1)^2(r+s-2)(r+s-3)\Bigr)^{-1}\notag\\ & =:\varepsilon(n,r,s)\,. \end{align} One can check that under the assumption $n\leq\min\{r,s\}$ always \begin{equation*} \varepsilon(n,r,s)=O\biggl(\frac{\min\{r,s\}}{n(r+s)}\biggr)\,. \end{equation*} Hence, there is a numerical constant $K$ such that \begin{equation*} \sqrt{\varepsilon(n,r,s)}\leq K\biggl(\frac{\min\{r,s\}}{n(r+s)}\biggr)^{1/2}\,. 
\end{equation*} Also, by the conditional version of Jensen's inequality \begin{equation*} E\Bigl[\bigl(E\bigl[D^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]\leq E\bigl[D^4\bigr]=E[D]=\frac{s(r+s-n)}{(r+s)_2}\,. \end{equation*} Using \begin{align*} E\bigl[N^{-1}1_{\{N\geq1\}}\bigr]&=\binom{r+s}{n}^{-1}\sum_{k=1}^n\frac{1}{k}\binom{r}{k}\binom{s}{n-k}\\ &\leq \frac{2}{r+1}\binom{r+s}{n}^{-1}\sum_{l=2}^{n+1}\binom{r+1}{l}\binom{s}{n+1-l}\\ &\leq \frac{2}{r+1}\binom{r+s}{n}^{-1}\binom{r+1+s}{n+1}=\frac{2(r+s+1)}{(n+1)(r+1)}\leq2\frac{r+s}{nr}\,, \end{align*} we get \begin{align*} E\bigl[D1_{\{N\geq1\}}N^{-1/2}\bigr]&\leq\sqrt{E[D^2]}\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}\leq\biggl(2\frac{s(r+s-n)}{(r+s)(r+s-1)}\;\frac{r+s}{nr}\biggr)^{1/2}\\ &=\sqrt{2}\Bigl(\frac{s}{nr}\;\frac{r+s-n}{r+s-1}\Bigr)^{1/2}\quad\text{and}\\ E\bigl[1_{\{N\geq1\}}N^{-1/2}\bigr]&\leq\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}\leq\sqrt{2}\Bigl(\frac{r+s}{nr}\Bigr)^{1/2}\,. \end{align*} Finally, we have \begin{equation*} P(N=0)=\frac{\binom{s}{n}}{\binom{r+s}{n}}=\frac{s(s-1)\cdot\ldots\cdot(s-n+1)}{(r+s)(r+s-1)\cdot\ldots\cdot(r+s-n+1)}=\frac{(s)_n}{(r+s)_n}\,. \end{equation*} Thus, the result follows from Theorem \ref{maintheo}.\\ \end{proof} \begin{remark}\label{hyprem} \begin{enumerate}[(a)] \item From the above proof we see that the numerical constant $K$ appearing in the bounds of Corollary \ref{hyper} could in principle be computed explicitly. Also, as always \begin{equation*} \frac{\min\{r,s\}}{n(r+s)}\leq\frac{r+s}{nr}=\frac{1}{E[N]}\,, \end{equation*} we conclude that the bounds are of order $E[N]^{-1/2}$. \item One typical situation, in which a CLT for Hypergeometric random sums holds, is when $N$, itself, is asymptotically normal. Using the same coupling $(N,N^s)$ as in the above proof and the results from \cite{GolRin96}, one obtains that under the condition \begin{equation}\label{asno} \frac{\max\{r,s\}}{n\min\{r,s\}}\longrightarrow0 \end{equation} the index $N$ is asymptotically normal. 
This condition is stricter than the condition \begin{equation}\label{asnors} E[N]^{-1}=\frac{r+s}{nr}\longrightarrow0\,, \end{equation} which implies the random sums CLT. For instance, choosing \begin{equation*} r\propto n^{1+\varepsilon}\quad\text{and}\quad s\propto n^{1+\kappa} \end{equation*} with $\varepsilon,\kappa\geq0$, condition \eqref{asno} holds if and only if $\abs{\varepsilon-\kappa}<1$, whereas \eqref{asnors} is equivalent to $\kappa-\varepsilon<1$ in this case. \end{enumerate} \end{remark} Before we end this section by giving the proof of Lemma \ref{distlemma}, we would like to mention in what respects the results in this article could be generalized. Firstly, it would be possible to dispense with the assumption that the summands $X_1,X_2,\dotsc$ are identically distributed. Of course, the terms appearing in the bounds would look more complicated, but the only essential change would be the emergence of the additional error term \begin{equation*} E_3:=\frac{C}{\alpha\sigma} E\babs{\alpha A_N-\mu N}\,, \end{equation*} where \begin{equation*} A_N:=\sum_{j=1}^N E[X_j]\quad\text{and}\quad \mu=E[A_N] \end{equation*} and where $C$ is an explicit constant depending on the probabilistic distance chosen. Note that $E_3=0$ if the summands are either i.i.d. or centered. Secondly, it would be possible in principle to allow for some dependence among the summands $X_1,X_2,\dotsc$. Indeed, an inspection of the proof in Section \ref{proof} reveals that this dependence should be such that bounds on the normal approximation of the non-random partial sums exist and such that suitable couplings with the non-zero biased distribution (see Section \ref{stein}) of those partial sums are available. The latter, however, have not yet been constructed in great generality, although \cite{GolRei97} gives a construction for summands forming a simple random sampling in the zero bias case.
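In fact, the term $E_3$ vanishes as soon as the summands share a common mean $a$, which covers both the i.i.d. and the centered case; the following one-line check (a remark added for completeness, not needed elsewhere) makes this explicit.

```latex
% If E[X_j] = a for all j, then A_N = \sum_{j=1}^N E[X_j] = aN and
% \mu = E[A_N] = aE[N] = a\alpha, so
\begin{equation*}
\alpha A_N-\mu N=\alpha aN-a\alpha N=0
\quad\text{and hence}\quad
E_3=\frac{C}{\alpha\sigma}E\babs{\alpha A_N-\mu N}=0\,.
\end{equation*}
```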
It would be much more difficult to abandon the assumption about the independence of the summation index and the summands. This can be seen from Equation \eqref{mp8} below, in which the second identity would no longer hold, in general, if this independence were no longer valid. Also, one would no longer be able to freely choose the coupling $(N,N^s)$ when specializing to concrete distributions of $N$. \begin{proof}[Proof of Lemma \ref{distlemma}] Let $h$ be a measurable function such that all the expected values in \eqref{sbdef} exist. By \eqref{sbdef} we have \begin{align}\label{sb1} \Bigl|E[h(X^s)]-E[h(X)]\Bigr|&=\frac{1}{E[X]}\Bigl|E\bigl[\bigl(X-E[X]\bigr)h(X)\bigr]\Bigr|\,. \end{align} It is well known that \begin{equation}\label{sb2} d_{TV}(X,Y)=\sup_{h\in\mathcal{H}}\Bigl|E[h(X)]-E[h(Y)]\Bigr|\,, \end{equation} where $\mathcal{H}$ is the class of all measurable functions on $\mathbb{R} $ such that $\fnorm{h}\leq1/2$. If $\fnorm{h}\leq1/2$, then \begin{equation*} \frac{1}{E[X]}\Bigl|E\bigl[\bigl(X-E[X]\bigr)h(X)\bigr]\Bigr|\leq\frac{E\abs{X-E[X]}}{2E[X]}\,. \end{equation*} Hence, from \eqref{sb2} and \eqref{sb1} we conclude that \begin{equation*}\label{sb3} d_{TV}(X,X^s)\leq\frac{E\abs{X-E[X]}}{2E[X]}\,. \end{equation*} On the other hand, letting \[h(x):=\frac{1}{2}\Bigl(1_{\{x>E[X]\}}-1_{\{x\leq E[X]\}}\Bigr)\] in \eqref{sb1} we have $h\in\mathcal{H}$ and obtain \begin{equation*}\label{sb4} \Bigl|E[h(X^s)]-E[h(X)]\Bigr|=\frac{E\abs{X-E[X]}}{2E[X]} \end{equation*} proving the second equality of (a). Note that, since $X^s$ is stochastically larger than $X$, we have \begin{align}\label{sb8} d_\mathcal{K}(X,X^s)&=\sup_{t\geq0}\bigl|P(X^s>t)-P(X>t)\bigr|=\sup_{t\geq0}\Bigl(P(X^s>t)-P(X>t)\Bigr)\nonumber\\ &=\sup_{t\geq0}\Bigl(E[g_t(X^s)]-E[g_t(X)]\Bigr)\,, \end{align} where $g_t:=1_{(t,\infty)}$.\\ By \eqref{sb1}, choosing $t=E[X]$ yields \begin{equation}\label{sb10} d_\mathcal{K}(X,X^s)\geq \frac{1}{E[X]}E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]\,.
\end{equation} If $0\leq t<E[X]$ we obtain \begin{align}\label{sb5} E\bigl[\bigl(X-E[X]\bigr)1_{\{X>t\}}\bigr]&=E\bigl[\bigl(X-E[X]\bigr)1_{\{t<X\leq E[X]\}}\bigr] +E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]\nonumber\\ &\leq E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]\,. \end{align} Also, if $t\geq E[X]$, then \begin{equation}\label{sb6} E\bigl[\bigl(X-E[X]\bigr)1_{\{X>t\}}\bigr]\leq E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]\,. \end{equation} Thus, by \eqref{sb1}, from \eqref{sb8}, \eqref{sb10}, \eqref{sb5} and \eqref{sb6} we conclude that \begin{equation}\label{sb9} d_\mathcal{K}(X,X^s)= \frac{1}{E[X]}E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]\,. \end{equation} Now, the remaining claim of (a) can be easily inferred from \eqref{sb9} and from the following two identities: \begin{align*} 0&=E\bigl[X-E[X]\bigr]=E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]+E\bigl[\bigl(X-E[X]\bigr)1_{\{X\leq E[X]\}}\bigr]\\ &=E\bigl[\babs{X-E[X]}1_{\{X>E[X]\}}\bigr]-E\bigl[\babs{X-E[X]}1_{\{X\leq E[X]\}}\bigr] \end{align*} and \begin{align*} E\babs{X-E[X]}&=E\bigl[\babs{X-E[X]}1_{\{X>E[X]\}}\bigr]+E\bigl[\babs{X-E[X]}1_{\{X\leq E[X]\}}\bigr]\\ &=2E\bigl[\bigl(X-E[X]\bigr)1_{\{X>E[X]\}}\bigr]\,. \end{align*} Finally, if $h$ is $1$-Lipschitz continuous, then \begin{align*} \Bigl|E\bigl[\bigl(X-E[X]\bigr)h(X)\bigr]\Bigr|&=\Bigl|E\bigl[\bigl(X-E[X]\bigr)\bigl(h(X)-h(E[X])\bigr)\bigr]\Bigr|\\ &\leq \fnorm{h'}E\bigl[\abs{X-E[X]}^2\bigr]\leq\Var(X)\,. \end{align*} On the other hand, the function $h(x):=x-E[X]$ is $1$-Lipschitz and \begin{equation*} E\bigl[\bigl(X-E[X]\bigr)h(X)\bigr]=\Var(X)\,. \end{equation*} Thus, also (b) is proved.\\ \end{proof} \section{Elements of Stein's method}\label{stein} In this section we review some well-known and also some recent results about Stein's method of normal approximation. Our general reference for this topic is the book \cite{CGS}. Throughout, $Z$ will denote a standard normal random variable.
Stein's method originated from Stein's seminal observation (see \cite{St72}) that a real-valued random variable $X$ has the standard normal distribution, if and only if the identity \begin{equation*}\label{steinchar} E\bigl[f'(X)\bigr]=E\bigl[Xf(X)\bigr] \end{equation*} holds for each, say, continuously differentiable function $f$ with bounded derivative. For a given random variable $W$, which is supposed to be asymptotically normal, and a Borel-measurable test function $h$ on $\mathbb{R} $ with $E\abs{h(Z)}<\infty$ it was then Stein's idea to solve the \textit{Stein equation} \begin{equation}\label{steineq} f'(x)-xf(x)=h(x)-E[h(Z)] \end{equation} and to use properties of the solution $f$ and of $W$ in order to bound the right hand side of \begin{equation*} \Babs{E\bigl[h(W)\bigr]-E\bigl[h(Z)\bigr]}=\Babs{E\bigl[f'(W)-Wf(W)\bigr]} \end{equation*} rather than bounding the left hand side directly. For $h$ as above, by $f_h$ we denote the standard solution to the Stein equation \eqref{steineq} which is given by \begin{align}\label{standsol} f_h(x)&=e^{x^2/2}\int_{-\infty}^x\bigl(h(t)-E[h(Z)]\bigr)e^{-t^2/2}dt\nonumber\\ &=-e^{x^2/2}\int_{x}^\infty\bigl(h(t)-E[h(Z)]\bigr)e^{-t^2/2}dt\,. \end{align} Note that, generally, $f_h$ is only differentiable and satisfies \eqref{steineq} at the continuity points of $h$. In order to be able to deal with distributions which might have point masses, if $x\in\mathbb{R} $ is a point at which $f_h$ is not differentiable, one defines \begin{equation}\label{deffprime} f_h'(x):=xf_h(x)+h(x)-E[h(Z)] \end{equation} such that, by definition, $f_h$ satisfies \eqref{steineq} at each point $x\in\mathbb{R} $. This gives a Borel-measurable version of the derivative of $f_h$ in the Lebesgue sense. Properties of the solutions $f_h$ for various classes of test functions $h$ have been studied. 
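For completeness, we recall that the elementary direction of Stein's characterization (normality implies the identity) is a direct integration by parts, using that the standard normal density $\varphi(x)=e^{-x^2/2}/\sqrt{2\pi}$ satisfies $\varphi'(x)=-x\varphi(x)$:

```latex
% Necessity in Stein's characterization: for f with bounded derivative,
% integrate by parts; the boundary terms vanish since \varphi decays
% faster than any polynomial.
\begin{equation*}
E[Zf(Z)]=\int_{-\infty}^{\infty}xf(x)\varphi(x)\,dx
=-\int_{-\infty}^{\infty}f(x)\varphi'(x)\,dx
=\int_{-\infty}^{\infty}f'(x)\varphi(x)\,dx
=E[f'(Z)]\,.
\end{equation*}
```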
Since we are only interested in the Kolmogorov and Wasserstein distances, we either suppose that $h$ is $1$-Lipschitz or that $h=h_z=1_{(-\infty,z]}$ for some $z\in\mathbb{R} $. In the latter case we write $f_z$ for $f_{h_z}$.\\ We need the following properties of the solutions $f_h$. If $h$ is $1$-Lipschitz, then it is well known (see e.g. \cite{CGS}) that $f_h$ is continuously differentiable and that both $f_h$ and $f_h'$ are Lipschitz-continuous with \begin{equation}\label{bounds} \fnorm{f_h}\leq1\,,\quad\fnorm{f_h'}\leq\sqrt{\frac{2}{\pi}}\quad\text{and }\fnorm{f_h''}\leq2\,. \end{equation} Here, for a function $g$ on $\mathbb{R} $, we denote by \[\fnorm{g'}:=\sup_{x\not=y}\frac{\abs{g(x)-g(y)}}{\abs{x-y}}\] its minimum Lipschitz constant. Note that if $g$ is absolutely continuous, then $\fnorm{g'}$ coincides with the essential supremum norm of the derivative of $g$ in the Lebesgue sense. Hence, the double use of the symbol $\fnorm{\cdot}$ does not cause any problems. For an absolutely continuous function $g$ on $\mathbb{R} $, a fixed choice of its derivative $g'$ and for $x,y\in\mathbb{R} $ we let \begin{equation}\label{resttaylor} R_g(x,y):=g(x+y)-g(x)-g'(x)y \end{equation} denote the remainder term of its first order Taylor expansion around $x$ at the point $x+y$. If $h$ is $1$-Lipschitz, then we obtain for all $x,y\in\mathbb{R} $ that \begin{equation}\label{fhtaylor} \babs{R_{f_h}(x,y)}=\babs{f_h(x+y)-f_h(x)-f_h'(x)y}\leq y^2\,. \end{equation} This follows from \eqref{bounds} via \begin{align*} &\babs{f_h(x+y)-f_h(x)-f_h'(x)y}=\Babs{\int_x^{x+y}\bigl(f_h'(t)-f_h'(x)\bigr)dt}\\ &\;\leq\fnorm{f_h''}\Babs{\int_x^{x+y}\abs{t-x}dt}=\frac{y^2\fnorm{f_h''}}{2}\leq y^2\,. \end{align*} For $h=h_z$ we list the following properties of $f_z$: The function $f_z$ has the representation \begin{equation}\label{steinsol} f_z(x)=\begin{cases} \frac{(1-\Phi(z))\Phi(x)}{\varphi(x)}\,,&x\leq z\\ \frac{\Phi(z)(1-\Phi(x))}{\varphi(x)}\,,&x>z\,.
\end{cases} \end{equation} Here, $\Phi$ denotes the standard normal distribution function and $\varphi:=\Phi'$ the corresponding continuous density. It is easy to see from \eqref{steinsol} that $f_z$ is infinitely often differentiable on $\mathbb{R} \setminus\{z\}$. Furthermore, it is well-known that $f_z$ is Lipschitz-continuous with Lipschitz constant $1$ and that it satisfies \begin{equation*}\label{boundfz} 0<f_z(x)\leq f_0(0)=\frac{\sqrt{2\pi}}{4}\,,\quad x,z\in\mathbb{R} \,. \end{equation*} These properties already easily yield that for all $x,u,v,z\in\mathbb{R} $ \begin{equation}\label{fzdiff} \babs{(x+u)f_z(x+u)-(x+v)f_z(x+v)}\leq\Biggl(\abs{x}+\frac{\sqrt{2\pi}}{4}\Biggr)\bigl(\abs{u}+\abs{v}\bigr)\,. \end{equation} Proofs of the above mentioned classic facts about the functions $f_z$ can again be found in \cite{CGS}, for instance. As $f_z$ is not differentiable at $z$ (the right and left derivatives do exist but are not equal) by the above Convention \eqref{deffprime} we define \begin{equation}\label{deffzprime} f_z'(z):=zf_z(z)+1-\Phi(z) \end{equation} such that $f=f_z$ satisfies \eqref{steineq} with $h=h_z$ for all $x\in\mathbb{R} $. Furthermore, with this definition, for all $x,z\in\mathbb{R} $ we have \begin{equation}\label{boundfzp} \abs{f_z'(x)}\leq1\,. \end{equation} The following quantitative version of the first order Taylor approximation of $f_z$ has recently been proved by Lachi\`{e}ze-Rey and Peccati \cite{LaPec15} and had already been used implicitly in \cite{EiThae14}. 
Using \eqref{deffzprime}, for all $x,u,z\in\mathbb{R} $ we have \begin{align}\label{fztaylor} \babs{R_{f_z}(x,u)}&=\babs{f_z(x+u)-f_z(x)-f_z'(x)u}\notag\\ &\leq\frac{u^2}{2}\Biggl(\abs{x}+\frac{\sqrt{2\pi}}{4}\Biggr) +\abs{u}\Bigl(1_{\{x<z\leq x+u\}}+1_{\{x+u\leq z<x\}}\Bigr)\notag\\ &=\frac{u^2}{2}\Biggl(\abs{x}+\frac{\sqrt{2\pi}}{4}\Biggr) +\abs{u}1_{\bigl\{z-(u\vee 0)<x\leq z-(u\wedge 0)\bigr\}}\,, \end{align} where, here and elsewhere, we write $x\vee y:=\max(x,y)$ and $x\wedge y:=\min(x,y)$.\\ For the proof of Theorems \ref{maintheo} and \ref{meanzero} we need to recall a certain coupling construction, which has been efficiently used in Stein's method of normal approximation: Let $X$ be a real-valued random variable such that $E[X]=0$ and $0<E[X^2]<\infty$. In \cite{GolRei97} it was proved that there exists a unique distribution for a random variable $X^*$ such that for all Lipschitz continuous functions $f$ the identity \begin{equation}\label{zbdef} E[Xf(X)]=\Var(X)E[f'(X^*)] \end{equation} holds true. The distribution of $X^*$ is called the \textit{$X$-zero biased distribution} and the distributional transformation which maps $\mathcal{L}(X)$ to $\mathcal{L}(X^*)$ is called the \textit{zero bias transformation}. It can be shown that \eqref{zbdef} holds for all absolutely continuous functions $f$ on $\mathbb{R} $ such that $E\abs{Xf(X)}<\infty$. From the Stein characterization of the family of normal distributions it is immediate that the fixed points of the zero bias transformation are exactly the centered normal distributions. Thus, if, for a given $X$, the distribution of $X^*$ is close to that of $X$, the distribution of $X$ is approximately a fixed point of this transformation and, hence, should be close to the normal distribution with the same variance as $X$. 
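For illustration, we recall the simplest explicit example of the transformation: for a centered Bernoulli variable the zero biased distribution is uniform. This standard computation is included only for orientation and is not used later.

```latex
% Zero biasing a centered Bernoulli variable: let X = \xi - p with
% \xi \sim \Bern(p), so that E[X] = 0 and \Var(X) = p(1-p).
% For every Lipschitz function f,
\begin{align*}
E[Xf(X)]&=p(1-p)f(1-p)-(1-p)p\,f(-p)
=p(1-p)\int_{-p}^{1-p}f'(t)\,dt\\
&=\Var(X)E\bigl[f'(U)\bigr]\,,\qquad U\sim\mathrm{U}(-p,1-p)\,,
\end{align*}
% so, by the defining relation \eqref{zbdef}, X^* has the uniform
% distribution on (-p,1-p).
```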
In \cite{Gol04} this heuristic was made precise by showing the inequality \begin{equation*}\label{wasineq} d_\mathcal{W}(X,\sigma Z)\leq2d_\mathcal{W}(X,X^*)\,, \end{equation*} where $X$ is a mean zero random variable with $0<\sigma^2=E[X^2]=\Var(X)<\infty$, $X^*$ having the $X$-zero biased distribution is defined on the same probability space as $X$ and $Z$ is standard normally distributed. For merely technical reasons we introduce a variant of the zero bias transformation for not necessarily centered random variables. Thus, if $X$ is a real random variable with $0<E[X^2]<\infty$, we say that a random variable $X^{nz}$ has the \textit{$X$-non-zero biased distribution}, if for all Lipschitz-continuous functions $f$ it holds that \begin{equation*}\label{nonzero} E\bigl[\bigl(X-E[X]\bigr)f(X)\bigr]=\Var(X)E\bigl[f'(X^{nz})\bigr]\,. \end{equation*} Existence and uniqueness of the $X$-non-zero biased distribution immediately follow from Theorem 2.1 of \cite{GolRei05b} (or Theorem 2.1 of \cite{Doe13b} by letting $B(x)=x-E[X]$ there). Alternatively, letting $Y:=X-E[X]$ and $Y^{*}$ have the $Y$-zero biased distribution, it is easy to see that $X^{nz}:=Y^{*}+E[X]$ fulfills the requirements for the $X$-non-zero biased distribution. Most of the properties of the zero bias transformation have natural analogs for the non-zero bias transformation, so we do not list them all here. Since an important part of the proof of our main result relies on the so-called \textit{single summand property}, we state the result here for the sake of reference. \begin{lemma}[single summand property]\label{single} Let $X_1,\dots,X_n$ be independent random variables such that $0<E[X_j^2]<\infty$, $j=1,\dotsc,n$. Define $\sigma_j^2:=\Var(X_j)$,\\ $j=1,\dotsc,n$, $S:=\sum_{j=1}^nX_j$ and $\sigma^2:=\Var(S)=\sum_{j=1}^n \sigma_j^2$.
For each $j=1,\dotsc,n$ let $X_j^{nz}$ have the $X_j$-non-zero biased distribution and be independent of\\ $X_1,\dotsc,X_{j-1},X_{j+1},\dotsc,X_n$ and let $I\in\{1,\dotsc,n\}$ be a random index, independent of all the rest and such that \[P(I=j)=\frac{\sigma_j^2}{\sigma^2}\,,\quad j=1,\dotsc,n\,.\] Then, the random variable \[S^{nz}:=S-X_I+X_I^{nz}=\sum_{i=1}^n 1_{\{I=i\}}\Bigl(\sum_{j\not=i}X_j +X_i^{nz}\Bigr)\] has the $S$-non-zero biased distribution. \end{lemma} \begin{proof} The proof is either analogous to the proof of Lemma 2.1 in \cite{GolRei97} or else the statement can be deduced from this result in the following way: Using the fact that $X^{nz}=Y^{*}+E[X]$ has the $X$-non-zero biased distribution if and only if $Y^*$ has the $(X-E[X])$-zero biased distribution, we let $Y_j:=X_j-E[X_j]$, $Y_j^{*}:=X_j^{nz}-E[X_j]$, $j=1,\dotsc,n$ and $W:=\sum_{j=1}^n Y_j=S-E[S]$. Then, from Lemma 2.1 in \cite{GolRei97} we know that \begin{align*} W^*&:=W-Y_I+Y_I^{*}=S-E[S]+\sum_{j=1}^n1_{\{I=j\}}\Bigl(E[X_j]-X_j+X_j^{nz}-E[X_j]\Bigr)\\ &=S-X_I+X_I^{nz}-E[S]=S^{nz}-E[S] \end{align*} has the $W$-zero biased distribution, implying that $S^{nz}$ has the $S$-non-zero biased distribution.\\ \end{proof} \section{Proof of Theorems \ref{maintheo} and \ref{meanzero}}\label{proof} From now on we let $h$ be either $1$-Lipschitz or $h=h_z$ for some $z\in\mathbb{R} $ and write $f=f_h$ given by \eqref{standsol}. Since $f$ is a solution to \eqref{steineq}, plugging in $W$ and taking expectations yields \begin{equation}\label{mp1} E[h(W)]-E[h(Z)]=E[f'(W)-Wf(W)]\,. \end{equation} As usual in Stein's method of normal approximation, the main task is to rewrite the term $E[Wf(W)]$ into a more tractable expression by exploiting the structure of $W$ and using properties of $f$. From \eqref{defw} we have \begin{equation}\label{mp2} E[Wf(W)]=\frac{1}{\sigma}E[(S-aN)f(W)]+\frac{a}{\sigma}E[(N-\alpha)f(W)]=:T_1+T_2\,.
\end{equation} For ease of notation, for $n\in\mathbb{Z}_+$ and $M$ any $\mathbb{Z}_+$-valued random variable we let \begin{equation*}\label{notation} S_n:=\sum_{j=1}^n X_j,\quad W_n:=\frac{S_n-\alpha a}{\sigma},\quad S_M:=\sum_{j=1}^M X_j\quad\text{and }W_M:=\frac{S_M-\alpha a}{\sigma}\,, \end{equation*} such that, in particular, $S=S_N$ and $W=W_N$. Using the decomposition \[E\bigl[f'(W)\bigr]=\frac{\alpha c^2}{\sigma^2}E\bigl[f'(W)\bigr]+\frac{a^2\gamma^2}{\sigma^2}E\bigl[f'(W)\bigr]\] which is true by virtue of \eqref{meanvar}, from \eqref{mp1} and \eqref{mp2} we have \begin{align}\label{esdec} E[h(W)]-E[h(Z)]&=E[f'(W)]-T_1-T_2\notag\\ &=E\Bigl[\frac{c^2\alpha}{\sigma^2}f'(W)-\frac{1}{\sigma}(S-aN)f(W)\Bigr]\notag\\ &\;+E\Bigl[\frac{a^2\gamma^2}{\sigma^2}f'(W)-\frac{a}{\sigma}(N-\alpha)f(W)\Bigr]=:E_1+E_2\,. \end{align} We will bound the terms $E_1$ and $E_2$ separately. Using the independence of $N$ and $X_1,X_2,\dotsc$, for $T_1$ we obtain: \begin{align}\label{mp3} T_1&=\frac{1}{\sigma}\sum_{n=0}^\infty P(N=n)E\bigl[(S_n-na)f(W_n)\bigr]\notag\\ &=\frac{1}{\sigma}\sum_{n=0}^\infty P(N=n)E\bigl[(S_n-na)g(S_n)\bigr]\,, \end{align} where \[g(x):=f\left(\frac{x-\alpha a}{\sigma}\right)\,.\] Thus, if, for each $n\geq0$, $S_n^{nz}$ has the $S_n$-non-zero biased distribution, from \eqref{mp3} and \eqref{nonzero} we obtain that \begin{align*}\label{mp4} T_1&=\frac{1}{\sigma}\sum_{n=0}^\infty P(N=n)\Var(S_n)E\bigl[g'\bigl(S_n^{nz}\bigr)\bigr]\notag\\ &=\frac{c^2}{\sigma^2}\sum_{n=0}^\infty nP(N=n)E\Bigl[f'\Bigl(\frac{S_n^{nz}-\alpha a}{\sigma}\Bigr)\Bigr]\,.
\end{align*} Note that if we let $M$ be independent of $S_1^{nz}, S_2^{nz},\dotsc$ and have the $N$-size biased distribution, then it follows that \begin{equation}\label{mp5} T_1=\frac{c^2\alpha}{\sigma^2}E\Bigl[f'\Bigl(\frac{S_{M}^{nz}-\alpha a}{\sigma}\Bigr)\Bigr]\,, \end{equation} where \[S_{M}^{nz}=\sum_{n=1}^{\infty}1_{\{M=n\}}S_n^{nz}\,.\] We use Lemma \ref{single} for the construction of the variables $S_n^{nz}$, $n\in\mathbb{N} $. Note, however, that by the i.i.d. property of the $X_j$ we actually do not need the mixing index $I$ here. Hence, we construct independent random variables \[(N,M), X_1,X_2,\dotsc\text{ and }Y\] such that $M$ has the $N$-size biased distribution and such that $Y$ has the $X_1$-non-zero biased distribution. Then, for all $n\in\mathbb{N} $ \[S_n^{nz}:=S_n-X_1+Y\] has the $S_n$-non-zero biased distribution and we have \begin{equation}\label{mp6} \frac{S_{M}^{nz}-\alpha a}{\sigma}=\frac{S_{M}-\alpha a}{\sigma}+\frac{Y-X_1}{\sigma}=W_{M}+\frac{Y-X_1}{\sigma}=:W^*\,. \end{equation} Thus, from \eqref{mp6} and \eqref{mp5} we conclude that \begin{equation}\label{mp7} T_1=\frac{c^2\alpha}{\sigma^2}E\bigl[f'(W^*)\bigr] \end{equation} and \begin{equation}\label{e1} E_1=\frac{c^2\alpha}{\sigma^2}E\bigl[f'(W)-f'(W^*)\bigr]\,. \end{equation} We would like to mention that if $a=0$, then, by \eqref{mp7}, $W^*$ has the $W$-zero biased distribution as $T_2=0$ and $\sigma^2=c^2\alpha$ in this case. Before addressing $T_2$, we remark that the random variables appearing in $E_1$ and $E_2$, respectively, could possibly be defined on different probability spaces, if convenient, since they do not appear under the same expectation sign. Indeed, for $E_2$ we use the coupling $(N,N^s)$, which is given in the statements of Theorems \ref{maintheo} and \ref{meanzero} and which appears in the bounds via the difference $D=N^s-N$.
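For orientation, we recall the simplest instance of such a size bias coupling (a standard fact, stated here only as an illustration): for an integer-valued index, \eqref{sbdef} amounts to $P(N^s=k)=kP(N=k)/E[N]$, and for a Poisson index, writing $\mathrm{Po}(\lambda)$ for the Poisson distribution, size biasing is just a unit shift, so that $D$ can be taken deterministic.

```latex
% Size biasing a Poisson index: if N ~ Po(\lambda), then for k \geq 1
\begin{equation*}
P(N^s=k)=\frac{k\,P(N=k)}{E[N]}
=\frac{k}{\lambda}\,\frac{e^{-\lambda}\lambda^{k}}{k!}
=\frac{e^{-\lambda}\lambda^{k-1}}{(k-1)!}=P(N+1=k)\,,
\end{equation*}
% so one may couple N^s := N+1, giving D = N^s - N = 1 almost surely.
```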
In order to manipulate $E_2$ we thus assume that the random variables \begin{equation*} (N,N^s),X_1,X_2,\dotsc \end{equation*} are independent and that $N^s$ has the $N$-size biased distribution. Note that we do not assume here that $D=N^s-N\geq0$, since sometimes a natural coupling yielding a small value of $\abs{D}$ does not satisfy this nonnegativity condition. In what follows we will use the notation \begin{equation*}\label{defv} V:=W_{N^s}-W_N=\frac{1}{\sigma}\bigl(S_{N^s}-S_N\bigr)\quad\text{and}\quad J:=1_{\{D\geq0\}}=1_{\{N^s\geq N\}}\,. \end{equation*} Now we turn to rewriting $T_2$. Using the independence of $N$ and $X_1,X_2,\dotsc$, and that of $N^s$ and $X_1,X_2,\dotsc$, respectively, $E[N]=\alpha$ and the defining equation \eqref{sbdef} of the $N$-size biased distribution, we obtain from \eqref{mp2} that \begin{align}\label{mp8} T_2&=\frac{a}{\sigma}E[(N-\alpha)f(W_N)]=\frac{\alpha a}{\sigma}E\bigl[f(W_{N^s})-f(W_N)\bigr]\notag\\ &=\frac{\alpha a}{\sigma}E\Bigl[1_{\{N^s\geq N\}}\bigl(f(W_{N^s})-f(W_N)\bigr)\Bigr] +\frac{\alpha a}{\sigma}E\Bigl[1_{\{N^s< N\}}\bigl(f(W_{N^s})-f(W_N)\bigr)\Bigr]\notag\\ &=\frac{\alpha a}{\sigma}E\bigl[J\bigl(f(W_N+V)-f(W_N)\bigr)\bigr] -\frac{\alpha a}{\sigma}E\bigl[(1-J)\bigl(f(W_{N^s}-V)-f(W_{N^s})\bigr)\bigr]\notag\\ &=\frac{\alpha a}{\sigma}E\bigl[JVf'(W_N)\bigr] +\frac{\alpha a}{\sigma}E\bigl[JR_f(W_N,V)\bigr]\notag\\ &\;+\frac{\alpha a}{\sigma}E\bigl[(1-J)Vf'(W_{N^s})\bigr] -\frac{\alpha a}{\sigma}E\bigl[(1-J)R_f(W_{N^s},-V)\bigr]\,, \end{align} where $R_f$ was defined in \eqref{resttaylor}. Note that we have \[JV=1_{\{N^s\geq N\}}\frac{1}{\sigma}\sum_{j=N+1}^{N^s}X_j\quad\text{and}\quad W_N=\frac{\sum_{j=1}^N X_j-\alpha a}{\sigma}\] and, hence, the random variables $JV$ and $W_N$ are conditionally independent given $N$. 
Noting also that \begin{align*} E\bigl[JV\,\bigl|\,N\bigr]&=\frac{1}{\sigma}E\biggl[J\sum_{j=N+1}^{N^s}X_j\,\biggl|\,N\biggr]\\ &=\frac{1}{\sigma}E\biggl[JE\Bigl[\sum_{j=N+1}^{N^s}X_j\,\Bigl|\,N,N^s\Bigr]\,\biggl|\,N\biggr]\\ &=\frac{a}{\sigma}E\bigl[JD\,\bigl|\,N\bigr] \end{align*} we obtain that \begin{align}\label{mpn1} \frac{\alpha a}{\sigma}E\bigl[JVf'(W_N)\bigr] &=\frac{\alpha a}{\sigma}E\Bigl[E\bigl[JV\,\bigl|\,N\bigr]E\bigl[f'(W_N)\,\bigl|\,N\bigr]\Bigr]\notag\\ &=\frac{\alpha a^2}{\sigma^2}E\Bigl[E\bigl[JD\,\bigl|\,N\bigr]E\bigl[f'(W_N)\,\bigl|\,N\bigr]\Bigr]\notag\\ &=\frac{\alpha a^2}{\sigma^2}E\Bigl[E\bigl[JDf'(W_N)\,\bigl|\,N\bigr]\Bigr]\notag\\ &=\frac{\alpha a^2}{\sigma^2}E\Bigl[JDf'(W_N)\Bigr]\,, \end{align} where we have used for the next-to-last equality that also $D$ and $W_N$ are conditionally independent given $N$. In a similar fashion, using that $W_{N^s}$ and $1_{\{D<0\}}V$ and also $W_{N^s}$ and $D$ are conditionally independent given $N^s$, one can show \begin{align}\label{mpn2} \frac{\alpha a}{\sigma}E\bigl[(1-J)Vf'(W_{N^s})\bigr] =\frac{\alpha a^2}{\sigma^2}E\bigl[(1-J)Df'(W_{N^{s}})\bigr]\,. \end{align} Hence, using that \[\frac{\alpha a^2}{\sigma^2}E[D]=\frac{\alpha a^2}{\sigma^2}E[N^s-N] =\frac{\alpha a^2}{\sigma^2}\frac{\gamma^2}{\alpha}=\frac{a^2\gamma^2}{\sigma^2}\] from \eqref{esdec}, \eqref{mp8}, \eqref{mpn1} and \eqref{mpn2} we obtain \begin{align}\label{e2} E_2&=\frac{\alpha a^2}{\sigma^2}E\Bigl[\bigl(E[D]-D\bigr)f'(W_N)\Bigr]+\frac{\alpha a^2}{\sigma^2}E\Bigl[(1-J)D\bigl(f'(W_N)-f'(W_{N^s})\bigr)\Bigr]\notag\\ &\;-\frac{\alpha a}{\sigma}E\bigl[JR_f(W_N,V)\bigr]+\frac{\alpha a}{\sigma}E\bigl[(1-J)R_f(W_{N^s},-V)\bigr]\notag\\ &=:E_{2,1}+E_{2,2}+E_{2,3}+E_{2,4}\,.
\end{align} Using the conditional independence of $D$ and $W_N$ given $N$ as well as the Cauchy-Schwarz inequality, we can estimate \begin{align}\label{e21} \abs{E_{2,1}}&=\frac{\alpha a^2}{\sigma^2}\Babs{E\Bigl[E\bigl[D-E[D]\,\bigl|\,N\bigr]E\bigl[f'(W_N)\,\bigl|\,N\bigr]\Bigr]}\notag\\ &\leq \frac{\alpha a^2}{\sigma^2}\sqrt{\Var\bigl(E[D\,|\,N]\bigr)}\sqrt{E\Bigl[\bigl(E\bigl[f'(W_N)\,\bigl|\,N\bigr]\bigr)^2\Bigr]}\notag\\ &\leq\frac{\alpha a^2}{\sigma^2}\fnorm{f'}\sqrt{\Var\bigl(E[D\,|\,N]\bigr)}\,. \end{align} Now we will proceed by first assuming that $h$ is a $1$-Lipschitz function. In this case, we choose the coupling $(M,N)$ used for $E_1$ in such a way that $M\geq N$. By Remark \ref{sbrem} (a), such a construction of $(M,N)$ is always possible, e.g. via the quantile transformation, and it achieves the Wasserstein distance, i.e. \begin{equation*} E\abs{M-N}=E[M-N]=\frac{E[N^2]}{E[N]}-E[N]=\frac{\Var(N)}{E[N]}=\frac{\gamma^2}{\alpha}=d_\mathcal{W}(N,N^s)\,. \end{equation*} In order to bound $E_1$, we first derive an estimate for $E\abs{W_M-W_N}$. We have \begin{align}\label{boundwscond} E\bigl[\abs{W_M-W_N}\,\bigl|\,N,M\bigr]&=\frac{1}{\sigma}E\bigl[\abs{S_{M}-S_N}\,\bigl|\,N,M\bigr]\leq\frac{\abs{M-N}}{\sigma}E\abs{X_1}\notag\\ &\leq\frac{b(M-N)}{\sigma} \end{align} and, hence, \begin{align}\label{bounddws} E\abs{W_M-W_N}&=E\Bigl[E\bigl[\abs{W_M-W_N}\,\bigl|\,N,M\bigr]\Bigr]\notag\\ &\leq\frac{b}{\sigma}E[M-N]= \frac{b\gamma^2}{\sigma\alpha}\,.
\end{align} Then, using \eqref{bounds}, \eqref{bounddws} as well as the fact that the $X_j$ are i.i.d., for $E_1$ we obtain that \begin{align} \abs{E_1}&=\frac{c^2\alpha}{\sigma^2}\Bigl|E\Bigl[f'(W_N)-f'\Bigl(W_M+\frac{Y-X_1}{\sigma}\Bigr)\Bigr]\Bigr|\notag\\ &\leq\frac{2c^2\alpha}{\sigma^2}\Bigl(E\abs{W_N-W_M}+\sigma^{-1}E\abs{Y-X_1}\Bigr)\label{e1gen}\\ &\leq\frac{2c^2\alpha}{\sigma^3}\Bigl(\frac{b\gamma^2}{\alpha}+\frac{3}{2c^2}E\babs{X_1-E[X_1]}^3\Bigr)\notag\\ &=\frac{2c^2b\gamma^2}{\sigma^3}+\frac{3\alpha d^3}{\sigma^3}\label{e1w}\,. \end{align} Here, we have used the inequality \begin{equation}\label{zbdiff} E\abs{Y-X_1}=E\babs{Y-E[X_1]-\bigl(X_1-E[X_1]\bigr)}\leq\frac{3}{2\Var(X_1)}E\babs{X_1-E[X_1]}^3\,, \end{equation} which follows from an analogous one in the zero-bias framework (see \cite{CGS}) via the fact that $Y-E[X_1]$ has the $\bigl(X_1-E[X_1]\bigr)$-zero biased distribution.\\ Similarly to \eqref{boundwscond} we obtain \begin{equation*} E\bigl[\abs{V}\,\bigl|\,N,N^s\bigr]=E\bigl[\abs{W_{N^s}-W_N}\,\bigl|\,N,N^s\bigr]\leq\frac{b\abs{N^s-N}}{\sigma}=\frac{b\abs{D}}{\sigma} \end{equation*} which, together with \eqref{bounds}, yields that \begin{align}\label{e22w} \abs{E_{2,2}}&=\frac{\alpha a^2}{\sigma^2}\Babs{E\Bigl[(1-J)D\bigl(f'(W_N)-f'(W_{N^s})\bigr)\Bigr]}\leq\frac{2\alpha a^2}{\sigma^2}E\babs{(1-J)D(W_N-W_{N^s})}\notag\\ &=\frac{2\alpha a^2}{\sigma^2}E\Bigl[(1-J)\abs{D}E\bigl[\abs{V}\,\bigl|\,N,N^s\bigr]\Bigr]\leq\frac{2\alpha a^2b}{\sigma^3}E\bigl[(1-J)D^2\bigr]\,. \end{align} We conclude the proof of the Wasserstein bounds by estimating $E_{2,3}$ and $E_{2,4}$. Note that by \eqref{fhtaylor} we have \[\babs{R_f(W_N,V)}\leq V^2\quad\text{and}\quad \babs{R_f(W_{N^s},-V)}\leq V^2\] yielding \begin{align}\label{e234w} \abs{E_{2,3}}+\abs{E_{2,4}}&\leq\frac{\alpha \abs{a}}{\sigma} E\Bigl[\bigl(1_{\{D\geq0\}}+1_{\{D<0\}}\bigr)V^2\Bigr]=\frac{\alpha \abs{a}}{\sigma}E\bigl[V^2\bigr]\,.
\end{align} Observe that \begin{align}\label{boundvsq1} E[V^2]&=\frac{1}{\sigma^2}E\Bigl[\bigl(S_{N^s}-S_{N}\bigr)^2\Bigr]\notag\\ &=\frac{1}{\sigma^2}\Bigl(\Var\bigl(S_{N^s}-S_{N}\bigr)+\bigl(E\bigl[S_{N^s}-S_{N}\bigr]\bigr)^2\Bigr) \end{align} and \begin{equation}\label{mpn8} E\bigl[S_{N^s}-S_{N}\bigr]=E\Bigl[E\bigl[S_{N^s}-S_{N}\,\bigl|\,N,N^s\bigr]\Bigr]=aE[D]=\frac{a\gamma^2}{\alpha}\,. \end{equation} Further, from the variance decomposition formula we obtain \begin{align*} \Var\bigl(S_{N^s}-S_N\bigr)&=E\bigl[\Var\bigl(S_{N^s}-S_N\,\bigl|\,N,N^s\bigr)\bigr]+\Var\bigl(E\bigl[S_{N^s}-S_N\,\bigl|\,N,N^s\bigr]\bigr)\notag\\ &=E\bigl[c^2\abs{D}\bigr]+\Var(aD)=c^2E\abs{D}+a^2\Var(D)\,. \end{align*} This, together with \eqref{boundvsq1} and \eqref{mpn8}, yields the bounds \begin{align} E[V^2]&=E\bigl[(W_{N^s}-W_N)^2\bigr]=\frac{1}{\sigma^2}\Bigl(c^2E\abs{D}+a^2E[D^2]\Bigr)\label{formvsq}\\ &\leq \frac{b^2}{\sigma^2}E[D^2]\label{boundvsq}\,, \end{align} where we have used the fact that $D^2\geq\abs{D}$ and $a^2+c^2=b^2$ to obtain \[c^2E\abs{D}+a^2E[D^2]\leq b^2 E[D^2]\,.\] The asserted bound on the Wasserstein distance between $W$ and $Z$ from Theorem \ref{maintheo} now follows from \eqref{bounds}, \eqref{esdec}, \eqref{e2}, \eqref{e1w}, \eqref{e22w}, \eqref{e234w} and \eqref{boundvsq}.\\ If $a=0$, then $E_1$ can be bounded more accurately than we did before. Indeed, using \eqref{formvsq} with $N^s=M$ and applying the Cauchy-Schwarz inequality gives \begin{align*} E\abs{W_M-W_N}&\leq \sqrt{E\bigl[(W_{M}-W_N)^2\bigr]}=\frac{c}{\sigma}\sqrt{E[M-N]}=\frac{c\gamma}{\sqrt{\alpha}\sigma}\,, \end{align*} as $c=b$ in this case.
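For the simplifications in the mean-zero case it may help to record the standard variance formula for random sums (the Blackwell-Girshick identity); we state this known fact here without proof as it is used implicitly below: \begin{equation*} \sigma^2=\Var(S_N)=E[N]\Var(X_1)+\Var(N)\bigl(E[X_1]\bigr)^2=\alpha c^2+\gamma^2a^2\,, \end{equation*} so that $\sigma^2=\alpha c^2$ and hence $\sigma^3=c^3\alpha^{3/2}$ whenever $a=0$.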
Plugging this into \eqref{e1gen}, we obtain \begin{align*} \abs{E_1}&\leq \frac{2c^2\alpha}{\sigma^2}\Bigl(E\abs{W_M-W_N}+\sigma^{-1}E\abs{Y-X_1}\Bigr)\\ &\leq\frac{2c^3\gamma\sqrt{\alpha}}{\sigma^3}+ \frac{2c^2\alpha}{\sigma^3}E\abs{Y-X_1}\\ &\leq\frac{2c^3\gamma\sqrt{\alpha}}{c^3\alpha^{3/2}}+\frac{3\alpha d^3}{c^3\alpha^{3/2}}\\ &=\frac{2\gamma}{\alpha}+\frac{3 d^3}{c^3\sqrt{\alpha}}\,, \end{align*} which is the Wasserstein bound claimed in Theorem \ref{meanzero}.\\\\ Next, we proceed to the proof of the Berry-Esseen bounds in Theorems \ref{maintheo} and \ref{meanzero}. Bounding the quantities $E_1$, $E_{2,2}$, $E_{2,3}$ and $E_{2,4}$ in the case that $h=h_z$ is much more technically involved. Also, in this case we do not in general profit from choosing $M$ appearing in $T_1$ in such a way that $M\geq N$. This is why we let $M=N^s$ for the proof of the Kolmogorov bound in Theorem \ref{maintheo}. Only for the proof of Theorem \ref{meanzero} will we later assume that $M\geq N$. We write $f=f_z$ and introduce the notation \begin{equation*} \tilde{V}:=W^*-W=W_{N^s}+\sigma^{-1}(Y-X_1)-W_N=V+\sigma^{-1}(Y-X_1)\,. \end{equation*} From \eqref{e1} and the fact that $f$ solves the Stein equation \eqref{steineq} for $h=h_z$ we have \begin{align}\label{e1dec} E_1&=\frac{c^2\alpha}{\sigma^2}E\bigl[f'(W)-f'(W^*)\bigr]\notag\\ &=\frac{c^2\alpha}{\sigma^2}E\bigl[Wf(W)-W^*f(W^*)\bigr]+\frac{c^2\alpha}{\sigma^2}\bigl(P(W\leq z)-P(W^*\leq z)\bigr)\notag\\ &=:E_{1,1}+E_{1,2}\,. \end{align} In order to bound $E_{1,1}$ we apply \eqref{fzdiff} to obtain \begin{equation}\label{e11} \abs{E_{1,1}}\leq\frac{c^2\alpha}{\sigma^2}E\Bigl[\abs{\tilde{V}}\Bigl(\frac{\sqrt{2\pi}}{4}+\abs{W}\Bigr)\Bigr]\,.
\end{equation} Using \eqref{formvsq}, \eqref{boundvsq} and \eqref{zbdiff} we have \begin{align} E\abs{\tilde{V}}&\leq E\abs{V}+\sigma^{-1}E\abs{Y-X_1}\leq\sqrt{E[V^2]}+\frac{3 d^3}{2\sigma c^2}\notag\\ &=\frac{1}{\sigma}\sqrt{c^2E\abs{D}+a^2E[D^2]}+\frac{3 d^3}{2\sigma c^2}\label{e1z1}\\ &\leq\frac{b}{\sigma}\sqrt{E[D^2]}+\frac{3 d^3}{2\sigma c^2}\,.\label{e1z2} \end{align} Furthermore, using independence of $W$ and $Y$, we have \begin{align}\label{e1z3} E\babs{(Y-X_1)W}&\leq E\babs{(Y-E[X_1])W}+E\abs{(X_1-E[X_1])W}\notag\\ &=E\babs{Y-E[X_1]}E\abs{W}+E\babs{(X_1-E[X_1])W}\notag\\ &\leq\frac{ d^3}{2c^2}\sqrt{E[W^2]}+\sqrt{\Var(X_1)E[W^2]}=\frac{ d^3}{2c^2} +c\,. \end{align} Finally, we have \begin{align} E\abs{VW}&\leq\sqrt{E[V^2]}\sqrt{E[W^2]}=\frac{1}{\sigma}\sqrt{c^2E\abs{D}+a^2E[D^2]}\label{e1z4}\\ &\leq\frac{b}{\sigma}\sqrt{E[D^2]}\label{e1z5}\,. \end{align} From \eqref{e11}, \eqref{e1z1}, \eqref{e1z2}, \eqref{e1z3}, \eqref{e1z4} and \eqref{e1z5} we conclude that \begin{align} \abs{E_{1,1}}&\leq\frac{c^2\alpha}{\sigma^2}\Bigl(\frac{\sqrt{2\pi}}{4\sigma}\sqrt{c^2E\abs{D}+a^2E[D^2]}+\frac{3 d^3\sqrt{2\pi}}{8c^2\sigma}+\frac{ d^3}{2c^2\sigma} +\frac{c}{\sigma}\notag\\ &\;+\frac{1}{\sigma}\sqrt{c^2E\abs{D}+a^2E[D^2]}\Bigr)\notag\\ &=\frac{c^2\alpha(\sqrt{2\pi}+4)}{4\sigma^3}\sqrt{c^2E\abs{D}+a^2E[D^2]}+\frac{ d^3\alpha(3\sqrt{2\pi}+4)}{8\sigma^3}+ \frac{c^3\alpha}{\sigma^3}\label{e1z6}\\ &\leq\frac{(\sqrt{2\pi}+4)bc^2\alpha}{4\sigma^3}\sqrt{E[D^2]}+\frac{ d^3\alpha(3\sqrt{2\pi}+4)}{8\sigma^3}+ \frac{c^3\alpha}{\sigma^3}=:B_1\,.\label{e1z7} \end{align} In order to bound $E_{1,2}$ we need the following lemma, which will be proved in Section \ref{Appendix}. In the following we denote by $C_\mathcal{K}$ the Berry-Esseen constant for sums of i.i.d. random variables with finite third moment. It is known from \cite{Shev11} that \begin{equation*} C_\mathcal{K}\leq 0.4748\,. 
\end{equation*} In particular, $2C_\mathcal{K}\leq1$, and we substitute the bound $1$ for $2C_\mathcal{K}$ in the statements of Theorems \ref{maintheo} and \ref{meanzero}. However, we prefer keeping the dependence of the bounds on $C_\mathcal{K}$ explicit within the proof. \begin{lemma}\label{ce2} With the above assumptions and notation we have for all $z\in\mathbb{R} $ \begin{align} \babs{P(W^*\leq z)-P(W_{N^s}\leq z)}&\leq\frac{1}{\sqrt{\alpha}}\Bigl(\frac{7}{2}\sqrt{2}+2\Bigr)\frac{ d^3}{c^3}\label{cers1}\quad\text{and}\\ \babs{P(W_{N^s}\leq z)-P(W\leq z)}&\leq P(N=0)+\frac{b}{c\sqrt{2\pi}}E\bigl[D1_{\{D\geq0\}}N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\; +\frac{2C_\mathcal{K} d^3}{c^3} E\bigl[1_{\{D\geq0\}}N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\;+\frac{1}{\sqrt{\alpha}}\Bigl(\frac{b}{c\sqrt{2\pi}}\sqrt{E\bigl[D^21_{\{D<0\}}\bigr]}+\frac{2C_\mathcal{K} d^3}{c^3}\sqrt{P(D<0)}\Bigr)\,.\label{cers2} \end{align} If $a=0$ and $D\geq0$, then for all $z\in\mathbb{R} $ \begin{align} \babs{P(W_{N^s}\leq z)-P(W\leq z)}&\leq P(N=0)+\frac{2C_\mathcal{K} d^3}{c^3} E\bigl[N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\;+\frac{1}{\sqrt{2\pi}}E\bigl[\sqrt{D}N^{-1/2}1_{\{N\geq1\}}\bigr]\label{cers3}\\ &\leq P(N=0)+\biggl(\frac{2C_\mathcal{K} d^3}{c^3}+\frac{\gamma}{\sqrt{\alpha}\sqrt{2\pi}}\biggr) \sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}\label{cers4}\,.
\end{align} \end{lemma} Applying the triangle inequality to Lemma \ref{ce2} yields the following bounds on $E_{1,2}$: In the most general situation (Theorem \ref{maintheo} and Remark \ref{mtrem} (b)) we have \begin{align}\label{e12gen} \abs{E_{1,2}}&\leq\Bigl(\frac{7}{2}\sqrt{2}+2\Bigr)\frac{\sqrt{\alpha} d^3}{c\sigma^2}+\frac{c^2\alpha}{\sigma^2}P(N=0)+\frac{\alpha bc}{\sigma^2\sqrt{2\pi}}E\bigl[D1_{\{D\geq0\}}N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\;+\frac{2C_\mathcal{K} d^3\alpha}{c\sigma^2}E\bigl[1_{\{D\geq0\}}N^{-1/2}1_{\{N\geq1\}}\bigr]+\frac{cb\sqrt{\alpha}}{\sigma^2\sqrt{2\pi}}\sqrt{E\bigl[D^21_{\{D<0\}}\bigr]}\notag\\ &\; +\frac{2C_\mathcal{K} d^3\sqrt{\alpha}}{c\sigma^2}\sqrt{P(D<0)}=:B_2\,. \end{align} If $a=0$ and $D\geq0$, then, keeping in mind that $\sigma^2=\alpha c^2$ in this case, \begin{align} \abs{E_{1,2}}&\leq\Bigl(\frac{7}{2}\sqrt{2}+2\Bigr)\frac{ d^3}{c^3\sqrt{\alpha}}+P(N=0)+ \frac{2C_\mathcal{K} d^3}{c^3}E\bigl[N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\;+\frac{1}{\sqrt{2\pi}}E\bigl[\sqrt{D}N^{-1/2}1_{\{N\geq1\}}\bigr]\notag\\ &\leq \Bigl(\frac{7}{2}\sqrt{2}+2\Bigr)\frac{ d^3}{c^3\sqrt{\alpha}}+P(N=0) +\biggl(\frac{2C_\mathcal{K} d^3}{c^3}+\frac{\gamma}{\sqrt{\alpha}\sqrt{2\pi}}\biggr)\sqrt{E\bigl[1_{\{N\geq1\}}N^{-1}\bigr]}\,.\label{e12mz2} \end{align} The following lemma, which is also proved in Section \ref{Appendix}, will be needed to bound the quantities $E_{2,2}$, $E_{2,3}$ and $E_{2,4}$ from \eqref{e2}.
\begin{lemma}\label{auxlemma2} With the above assumptions and notation we have \begin{align} &E\bigl[J\abs{V}1_{\{z-(V\vee0)<W\leq z-(V\wedge0)\}}\bigr]\leq\frac{b}{\sigma}\sqrt{P(N=0)}\sqrt{E[JD^2]}\notag\\ &\;+\frac{b^2}{c\sigma\sqrt{2\pi}}E\bigl[JD^21_{\{N\geq1\}}N^{-1/2}\bigr]+\frac{2C_\mathcal{K} d^3 b}{c^3\sigma} E\bigl[JD1_{\{N\geq1\}}N^{-1/2}\bigr]\,,\label{aux21}\\ &E\bigl[(1-J)\abs{V}1_{\{z+(V\wedge0)<W_{N^s}\leq z+(V\vee0)\}}\bigr]\leq\frac{b^2}{c\sigma\sqrt{2\pi}}E\bigl[(1-J)D^2(N^s)^{-1/2}\bigr]\notag\\ &\;+\frac{2C_\mathcal{K} b d^3}{c^3\sigma\sqrt{\alpha}}\sqrt{E\bigl[(1-J)D^2\bigr]}\quad\text{and}\label{aux22}\\ &E\bigl[(1-J)\abs{D}1_{\{z+(V\wedge0)<W_{N^s}\leq z+(V\vee0)\}}\bigr]\leq\frac{b}{c\sqrt{2\pi}}E\bigl[(1-J)D^2(N^s)^{-1/2}\bigr]\notag\\ &\;+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{\alpha}}\sqrt{E\bigl[(1-J)D^2\bigr]}\label{aux23} \,. \end{align} \end{lemma} Next, we derive a bound on $E_{2,2}$. Since $f$ solves the Stein equation \eqref{steineq} for $h=h_z$ we have \begin{align}\label{e22} E_{2,2}&=\frac{\alpha a^2}{\sigma^2}E\bigl[(1-J)D\bigl(W_Nf(W_N)-W_{N^s}f(W_{N^s})\bigr)\bigr]\notag\\ &\;+\frac{\alpha a^2}{\sigma^2}E\bigl[(1-J)D\bigl(1_{\{W_N\leq z\}}-1_{\{W_{N^s}\leq z\}}\bigr)\bigr]\notag\\ &=:E_{2,2,1}+E_{2,2,2}\,. \end{align} Using \[W_N=W_{N^s}-V\] and Lemma \ref{indlemma}, we obtain from \eqref{aux23} that \begin{align}\label{e222} \abs{E_{2,2,2}}&\leq\frac{\alpha a^2}{\sigma^2}E\bigl[1_{\{D<0\}}\abs{D}1_{\{z+(V\wedge0)<W_{N^s}\leq z+(V\vee0)\}}\bigr]\notag\\ &\leq\frac{\alpha a^2b}{\sigma^2c\sqrt{2\pi}}E\bigl[1_{\{D<0\}}D^2(N^s)^{-1/2}\bigr] +\frac{2C_\mathcal{K} d^3 a^2\sqrt{\alpha}}{c^3\sigma^2}\sqrt{E\bigl[1_{\{D<0\}}D^2\bigr]}\notag\\ &=:B_4\,. 
\end{align} As to $E_{2,2,1}$, from \eqref{fzdiff} we have \begin{align}\label{e221} \abs{E_{2,2,1}}&\leq\frac{\alpha a^2}{\sigma^2}E\Bigl[(1-J)\abs{DV}\Bigl(\abs{W_{N^s}}+\frac{\sqrt{2\pi}}{4}\Bigr)\Bigr]\,. \end{align} As \begin{equation}\label{e22z1} E\bigl[\abs{V}\,\bigl|\,N,N^s\bigr]\leq\sqrt{E\bigl[V^2\,\bigl|\,N,N^s\bigr]}=\frac{1}{\sigma}\sqrt{c^2\abs{D}+a^2D^2}\leq\frac{b}{\sigma}\abs{D}\,, \end{equation} by conditioning, we see \begin{align}\label{e22z2} E\bigl[(1-J)\abs{DV}\bigr]&=E\Bigl[(1-J)\abs{D}E\bigl[\abs{V}\,\bigl|\,N,N^s\bigr]\Bigr]\leq \frac{b}{\sigma}E\bigl[(1-J)D^2\bigr]\,. \end{align} Now, using the fact that conditionally on $N^s$, the random variables $W_{N^s}$ and $(1-J)\abs{DV}$ are independent, as well as the Cauchy-Schwarz inequality, we conclude that \begin{align}\label{e22z3} E\babs{(1-J)DVW_{N^s}}&=E\Bigl[E\bigl[\babs{(1-J)DVW_{N^s}}\,\bigl|\,N^s\bigr]\Bigr]\notag\\ &=E\Bigl[E\bigl[(1-J)\abs{DV}\,\bigl|\,N^s\bigr]E\bigl[\abs{W_{N^s}}\,\bigl|\,N^s\bigr]\Bigr]\notag\\ &\leq\sqrt{E\Bigl[\bigl(E\bigl[(1-J)\abs{DV}\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]}\sqrt{E\Bigl[\bigl(E\bigl[\abs{W_{N^s}}\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]}\notag\\ &\leq\frac{b}{\sigma}\sqrt{E\Bigl[\bigl(E\bigl[(1-J)D^2\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]}\sqrt{E\Bigl[W_{N^s}^2\Bigr]}\,, \end{align} where we have used the conditional Jensen inequality, \eqref{e22z1} and \begin{equation*} E\Bigl[(1-J)\abs{DV}\,\bigl|\,N^s\Bigr]=E\Bigl[(1-J)\abs{D}E\bigl[\abs{V}\,\bigl|\,N,N^s\bigr]\,\Bigl|\,N^s\Bigr] \end{equation*} to obtain the last inequality.
Using the defining relation \eqref{sbdef} of the size-biased distribution one can easily show that \begin{align}\label{secmomwns} E\Bigl[W_{N^s}^2\Bigr]&=\frac{1}{\sigma^2}E\Bigl[c^2N^s+a^2(N^s-\alpha)^2\Bigr] =\frac{c^2\beta^2+a^2\bigl(\delta^3-2\alpha\beta^2+\alpha^3\bigr)}{\alpha\sigma^2}\,, \end{align} which, together with \eqref{e221}, \eqref{e22z2} and \eqref{e22z3} yields that \begin{align}\label{e221b} \abs{E_{2,2,1}}&\leq\frac{\alpha a^2b\sqrt{2\pi}}{4\sigma^3}E\bigl[1_{\{D<0\}}D^2] \notag\\ &\;+a^2b\frac{c^2\beta^2+a^2\bigl(\delta^3-2\alpha\beta^2+\alpha^3\bigr)}{\sigma^5}\sqrt{E\Bigl[\bigl(E\bigl[1_{\{D<0\}}D^2\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]}\notag\\ &=:B_5\,. \end{align} It remains to bound the quantities $E_{2,3}$ and $E_{2,4}$ from \eqref{e2} for $f=f_z$. From \eqref{fztaylor} we have \begin{align}\label{r1} \abs{E_{2,3}}&=\frac{\alpha \abs{a}}{\sigma}\Babs{E\bigl[1_{\{D\geq0\}}R_f(W,V)\bigr]}\notag\\ &\leq\frac{\alpha \abs{a}}{2\sigma}E\Bigl[JV^2\Bigl(\abs{W}+\frac{\sqrt{2\pi}}{4}\Bigr)\Bigr]\notag\\ &\;+\frac{\alpha \abs{a}}{\sigma}E\Bigl[J\abs{V}1_{\{z-(V\vee0)<W\leq z-(V\wedge0)\}}\Bigr]=:R_{1,1}+R_{1,2}\,. \end{align} Similarly to \eqref{formvsq} we obtain \begin{align}\label{boundjvsq} E\bigl[JV^2\bigr]&=\frac{1}{\sigma^2}\Bigl(c^2E\bigl[JD\bigr]+a^2E\bigl[JD^2\bigr]\Bigr)\notag\\ &\leq\frac{b^2}{\sigma^2}E\bigl[JD^2\bigr] \end{align} from \begin{align}\label{r1e1} E\bigl[JV^2\,\bigl|\,N,N^s\bigr]&=JE\bigl[V^2\,\bigl|\,N,N^s\bigr]=\frac{J}{\sigma^2}\Bigl(c^2\abs{D}+a^2D^2\Bigr)\notag\\ &=\frac{1}{\sigma^2}\Bigl(c^2 JD+a^2JD^2\Bigr)\,. \end{align} Also, recall that the random variables \[JV^2=\sigma^{-1}1_{\{N^s\geq N\}}\Bigl(\sum_{j=N+1}^{N^s}X_j\Bigr)^2\quad\text{and}\quad W_N=\sigma^{-1}\Bigl(\sum_{j=1}^N X_j-\alpha a\Bigr)\] are conditionally independent given $N$. 
Hence, using the Cauchy-Schwarz inequality \begin{align}\label{r1e2} E\bigl[JV^2\abs{W_N}\bigr]&=E\Bigl[E\bigl[JV^2\abs{W_N}\,\bigl|\,N\bigr]\Bigr]=E\Bigl[E\bigl[JV^2\,\bigl|\,N\bigr]E\bigl[\abs{W_N}\,\bigl|\,N\bigr]\Bigr]\notag\\ &\leq\sqrt{E\biggl[\Bigl(E\bigl[JV^2\,\bigl|\,N\bigr]\Bigr)^2\biggr]}\sqrt{E\biggl[\Bigl(E\bigl[\abs{W_N}\,\bigl|\,N\bigr]\Bigr)^2\biggr]}\,. \end{align} From \eqref{r1e1} and $D^2\geq\abs{D}$ we conclude that \begin{align}\label{r1e3} E\bigl[JV^2\,\bigl|\,N\bigr]=\frac{1}{\sigma^2}\Bigl(c^2E\bigl[JD\,\bigl|\,N\bigr]+a^2E\bigl[JD^2\,\bigl|\,N\bigr]\Bigr)\leq\frac{b^2}{\sigma^2}E\bigl[JD^2\,\bigl|\,N\bigr]\,. \end{align} Furthermore, by the conditional version of Jensen' s inequality we have \begin{align}\label{r1e4} E\biggl[\Bigl(E\bigl[\abs{W_N}\,\bigl|\,N\bigr]\Bigr)^2\biggr]&\leq E\Bigl[E\bigl[W_N^2\,\bigl|\,N\bigr]\Bigr] =E\bigl[W_N^2\bigr]=1\,. \end{align} Thus, from \eqref{r1e2}, \eqref{r1e3} and \eqref{r1e4} we see that \begin{equation}\label{r1e5} E\bigl[JV^2\abs{W_N}\bigr]\leq\frac{b^2}{\sigma^2}\sqrt{E\Bigl[\bigl(E\bigl[JD^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]}\,. \end{equation} Hence, \eqref{r1}, \eqref{boundjvsq} and \eqref{r1e5} yield \begin{align}\label{r11} R_{1,1}&\leq\frac{\alpha \abs{a}b^2}{2\sigma^3}\sqrt{E\Bigl[\bigl(E\bigl[JD^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]} +\frac{\alpha \abs{a}b^2\sqrt{2\pi}}{8\sigma^3}E[JD^2]\,. \end{align} Finally, from \eqref{r1}, \eqref{r11} and \eqref{aux21} we get \begin{align}\label{boundr1} \abs{E_{2,3}}&\leq\frac{\alpha \abs{a}b^2}{2\sigma^3}\sqrt{E\Bigl[\bigl(E\bigl[1_{\{D\geq0\}}D^2\,\bigl|\,N\bigr]\bigr)^2\Bigr]} +\frac{\alpha \abs{a}b^2\sqrt{2\pi}}{8\sigma^3}E[1_{\{D\geq0\}}D^2]\notag\\ &\;+\frac{\alpha \abs{a}b}{\sigma^2}\sqrt{P(N=0)}\sqrt{E[1_{\{D\geq0\}}D^2]}\notag\\ &\;+\frac{\alpha \abs{a}b^2}{c\sigma^2\sqrt{2\pi}}E\bigl[1_{\{D\geq0\}}D^21_{\{N\geq1\}}N^{-1/2}\bigr]+\frac{2C_\mathcal{K} d^3\alpha \abs{a} b}{\sigma^2} E\bigl[1_{\{D\geq0\}}D1_{\{N\geq1\}}N^{-1/2}\bigr]\notag\\ &=:B_6\,. 
\end{align} Similarly, we have \begin{align}\label{r2} \abs{E_{2,4}}&=\frac{\alpha \abs{a}}{\sigma}\Babs{E\bigl[1_{\{D<0\}}R_f(W_{N^s},-V)\bigr]}\notag\\ &\leq\frac{\alpha \abs{a}}{2\sigma}E\Bigl[(1-J)V^2\Bigl(\abs{W_{N^s}}+\frac{\sqrt{2\pi}}{4}\Bigr)\Bigr]\notag\\ &\;+\frac{\alpha \abs{a}}{\sigma}E\Bigl[(1-J)\abs{V}1_{\{z+(V\wedge0)<W_{N^s}\leq z+(V\vee0)\}}\Bigr]=:R_{2,1}+R_{2,2}\,. \end{align} Analogously to the above we obtain \begin{align} E\bigl[(1-J)V^2\bigr]&=\frac{1}{\sigma^2}\Bigl(c^2E\bigl[(1-J)\abs{D}\bigr]+a^2E\bigl[(1-J)D^2\bigr]\Bigr)\notag\\ &\leq\frac{b^2}{\sigma^2}E\bigl[(1-J)D^2\bigr]\quad\text{and}\label{boundnjvsq}\\ E\bigl[(1-J)V^2\,\bigl|\,N^s\bigr]&=\frac{1}{\sigma^2}\Bigl(c^2E\bigl[(1-J)\abs{D}\,\bigl|\,N^s\bigr]+a^2E\bigl[(1-J)D^2\,\bigl|\,N^s\bigr]\Bigr)\notag\\ &\leq\frac{b^2}{\sigma^2}E\bigl[(1-J)D^2\,\bigl|\,N^s\bigr]\notag\,. \end{align} Using these as well as the conditional independence of $(1-J)V^2$ and $W_{N^s}$ given $N^s$, one has \begin{equation}\label{r21} E\bigl[(1-J)V^2\abs{W_{N^s}}\bigr]\leq\frac{b^2}{\sigma^2}\sqrt{E\Bigl[\bigl(E\bigl[(1-J)D^2\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]} \sqrt{E\Bigl[W_{N^s}^2\Bigr]}\,. \end{equation} Combining \eqref{secmomwns} and \eqref{r21} we obtain \begin{align}\label{r22} E\bigl[(1-J)V^2\abs{W_{N^s}}\bigr]&\leq\frac{b^2}{\sigma^2}\sqrt{E\Bigl[\bigl(E\bigl[(1-J)D^2\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]}\notag\\ &\quad\cdot\left(\frac{c^2\beta^2+a^2\bigl(\delta^3-2\alpha\beta^2+\alpha^3\bigr)}{\alpha\sigma^2}\right)^{1/2}\,.
\end{align} Thus, from \eqref{r2}, \eqref{boundnjvsq}, \eqref{r22} and \eqref{aux22} we conclude \begin{align}\label{boundr2} \abs{E_{2,4}}&\leq\frac{\alpha\abs{a}b^2\sqrt{2\pi}}{8\sigma^3} E\bigl[1_{\{D<0\}}D^2\bigr]+\frac{\alpha\abs{a}b^2}{2\sigma^3}\sqrt{E\Bigl[\bigl(E\bigl[1_{\{D<0\}}D^2\,\bigl|\,N^s\bigr]\bigr)^2\Bigr]}\notag\\ &\quad\cdot\left(\frac{c^2\beta^2+a^2\bigl(\delta^3-2\alpha\beta^2+\alpha^3\bigr)}{\alpha\sigma^2}\right)^{1/2}\notag\\ &\;+\frac{\alpha\abs{a}b^2}{\sigma^2c\sqrt{2\pi}}E\bigl[1_{\{D<0\}}D^2(N^s)^{-1/2}\bigr]+\frac{2C_\mathcal{K}\sqrt{\alpha}\abs{a} b d^3}{c^3\sigma^2}\sqrt{E\bigl[1_{\{D<0\}}D^2\bigr]}\notag\\ &=:B_7\,. \end{align} The Berry-Esseen bound stated in Remark \ref{mtrem} (b) follows from \eqref{esdec}, \eqref{e1dec}, \eqref{e1z7}, \eqref{e12gen}, \eqref{e2}, \eqref{e21}, \eqref{boundfzp}, \eqref{e22}, \eqref{e222}, \eqref{e221b}, \eqref{boundr1} and \eqref{boundr2}. This immediately yields the Berry-Esseen bound presented in Theorem \ref{maintheo} (b) because \[B_2=B_4=B_5=B_7=0\] in this case. In order to obtain the Kolmogorov bound in Theorem \ref{meanzero}, we again choose $M$ such that $M\geq N$ and use the bounds \eqref{e1z6} and \eqref{e12mz2} instead. The result then follows from \eqref{esdec} and \eqref{e1dec}. \section{Proofs of auxiliary results}\label{Appendix} Here, we give several rather technical proofs. We start with the following easy lemma, whose proof is omitted. \begin{lemma}\label{indlemma} For all $x,u,v,z\in\mathbb{R} $ we have \begin{align*} 1_{\{x+u\leq z\}}-1_{\{x+v\leq z\}}&=1_{\{z-v<x\leq z-u\}}-1_{\{z-u<x\leq z-v\}}\quad\text{and}\\ \babs{1_{\{x+u\leq z\}}-1_{\{x+v\leq z\}}}&=1_{\{z- u\vee v<x\leq z- u\wedge v\}}\,. \end{align*} \end{lemma} \begin{lemma}[Concentration inequality]\label{ce1} For all real $t<u$ and for all $n\geq1$ we have \begin{equation*} P(t<W_n\leq u)\leq \frac{\sigma(u-t)}{c\sqrt{2\pi}\sqrt{n}}+\frac{2 C_\mathcal{K} d^3}{c^3\sqrt{n}}\,.
\end{equation*} \end{lemma} \begin{proof} The proof uses the Berry-Esseen Theorem for sums of i.i.d. random variables with finite third moment as well as the following fact, whose proof is straightforward: For each real-valued random variable $X$ and for all real $r<s$ we have the bound \begin{equation}\label{ce1eq1} P(r<X\leq s)\leq\frac{s-r}{\sqrt{2\pi}}+2d_\mathcal{K}(X,Z)\,. \end{equation} A similar result was used in \cite{PekRol11} in the framework of exponential approximation. Now, for given $t<u$ and $n\geq 1$ by \eqref{ce1eq1} and the Berry-Esseen Theorem we have \begin{align*} P(t<W_n\leq u)&=P\biggl(\frac{\sigma t+a(\alpha-n)}{c\sqrt{n}}<\frac{S_n-na}{c\sqrt{n}}\leq \frac{\sigma u+a(\alpha-n)}{c\sqrt{n}}\biggr)\notag\\ &\leq\frac{\sigma(u-t)}{c\sqrt{2\pi}\sqrt{n}}+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{n}}\,. \end{align*} \end{proof} \begin{remark} It is actually not strictly necessary to apply the Berry-Esseen Theorem in order to prove Lemma \ref{ce1}: Using known concentration results for sums of independent random variables like Proposition 3.1 from \cite{CGS}, for instance, would yield a comparable result, albeit with worse constants. \end{remark} In order to prove Lemma \ref{ce2} we cite the following concentration inequality from \cite{CGS}: \begin{lemma}\label{cecgs} Let $Y_1,\dotsc,Y_n$ be independent mean zero random variables such that \[\sum_{j=1}^n E[Y_j^2]=1\quad\text{and}\quad\zeta:=\sum_{j=1}^n E\abs{Y_j}^3<\infty\,,\] then with $S^{(i)}:=\sum_{j\not= i}Y_j$ one has for all real $r<s$ and all $i=1,\dotsc,n$ that \begin{equation*} P(r\leq S^{(i)}\leq s)\leq\sqrt{2}(s-r)+2(\sqrt{2}+1)\zeta\,. \end{equation*} \end{lemma} \begin{proof}[Proof of Lemma \ref{ce2}] We first prove \eqref{cers1}. Define \begin{equation*} W_{N^s}^{(1)}:=W_{N^s}-\sigma^{-1}X_1=\frac{1}{\sigma}\Bigl(\sum_{j=2}^{N^s} X_j-\alpha a\Bigr) \end{equation*} such that \begin{equation*} W_{N^s}=W_{N^s}^{(1)}+\sigma^{-1}X_1\quad\text{and}\quad W^*=W_{N^s}^{(1)}+\sigma^{-1}Y\,. 
\end{equation*} Then, using Lemma \ref{indlemma} we have \begin{align*} &\:\babs{P(W^*\leq z)-P(W_{N^s}\leq z)}\notag\\ &=\babs{P(W_{N^s}^{(1)}+\sigma^{-1}Y\leq z)-P(W_{N^s}^{(1)}+\sigma^{-1}X_1\leq z)}\notag\\ &\leq P\bigl(z-\sigma^{-1} (X_1\vee Y)<W_{N^s}^{(1)}\leq z-\sigma^{-1} (X_1\wedge Y)\bigr)\notag\\ &=E\Biggl[P\biggl(\frac{\sigma z- (X_1\vee Y)+a(\alpha-N^s+1)}{c\sqrt{N^s}}<\sum_{j=2}^{N^s}\Bigl(\frac{X_j-a}{c\sqrt{N^s}}\Bigr)\notag\\ &\hspace{3cm}\leq \frac{\sigma z- (X_1\wedge Y)+a(\alpha-N^s+1)}{c\sqrt{N^s}}\,\biggl|\,N^s\biggr)\Biggr]\,. \end{align*} Now note that conditionally on $N^s$ the random variables $W_{N^s}^{(1)}$ and $(X_1,Y)$ are independent and that the statement of Lemma \ref{cecgs} may be applied to the random variable in the middle term of the above conditional probability, giving the bound \begin{align}\label{ce22} &\:\babs{P(W^*\leq z)-P(W_{N^s}\leq z)} \leq E\biggl[\frac{\sqrt{2}\abs{Y-X_1}}{c\sqrt{N^s}}+\frac{2(\sqrt{2}+1) d^3}{c^3\sqrt{N^s}}\biggr]\,. \end{align} Noting that $(X_1,Y)$ and $N^s$ are independent and using \eqref{zbdiff} again, we obtain \begin{equation}\label{ce23} E\biggl[\frac{\abs{Y-X_1}}{\sqrt{N^s}}\biggr]\leq\frac{3}{2c^2} d^3 E\bigl[(N^s)^{-1/2}\bigr]\leq \frac{3 d^3}{2c^2\sqrt{\alpha}}\,, \end{equation} as \begin{equation}\label{nsexp1} E\bigl[(N^s)^{-1/2}\bigr]=\frac{E[\sqrt{N}]}{E[N]}\leq\frac{\sqrt{E[N]}}{E[N]}=\frac{1}{\sqrt{\alpha}} \end{equation} by \eqref{sbdef} and Jensen's inequality. From \eqref{ce22}, \eqref{ce23} and \eqref{nsexp1} the bound \eqref{cers1} follows.\\ Next we prove \eqref{cers2}. Using Lemma \ref{indlemma} we obtain \begin{align}\label{ce24} \babs{P(W_{N^s}\leq z)-P(W\leq z)}&=\babs{E\bigl[J(1_{\{W+V\leq z\}}-1_{\{W\leq z\}})\bigr]\notag\\ &\;-E\bigl[(1-J)(1_{\{W_{N^s}-V\leq z\}}-1_{\{W_{N^s}\leq z\}})\bigr]}\notag\\ &\leq E\bigl[J1_{\{z-(V\vee0)<W\leq z-(V\wedge0)\}}\bigr]\notag\\ &\;+E\bigl[(1-J)1_{\{z+(V\wedge0)<W_{N^s}\leq z+(V\vee0)\}}\bigr]\notag\\ &=:A_1+A_2\,.
\end{align} To bound $A_1$ we write \begin{align}\label{ce25} A_1&=\sum_{n=0}^\infty E\bigl[J1_{\{N=n\}}1_{\{z-(V\vee0)<W\leq z-(V\wedge0)\}}\bigr]\notag\\ &=\sum_{n=0}^\infty P\bigl(z-(V\vee0)<W\leq z-(V\wedge0)\,\bigl|\,D\geq0,N=n\bigr)\cdot P\bigl(D\geq0,N=n\bigr)\,. \end{align} Now note that conditionally on the event that $D\geq0$ and $N=n$ the random variables $W$ and $V$ are independent and \begin{equation*} \mathcal{L}(W\,|\,D\geq0,N=n)=\mathcal{L}(W_n)\,. \end{equation*} Thus, using Lemma \ref{ce1} we have for all $n\geq1$: \begin{align}\label{ce26} &\:P\bigl(z-(V\vee0)<W\leq z-(V\wedge0)\,\bigl|\,D\geq0,N=n\bigr)\notag\\ &=P\bigl(z-(V\vee0)<W_n\leq z-(V\wedge0)\,\bigl|\,D\geq0,N=n\bigr)\notag\\ &\leq E\biggl[\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{n}}+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{n}}\,\biggl|\,D\geq0,N=n\biggr]\,. \end{align} From \eqref{ce25} and \eqref{ce26} we thus have \begin{align}\label{ce27} A_1&\leq P(N=0)+\sum_{n=1}^\infty E\biggl[\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{n}}+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{n}}\,\biggl|\,D\geq0,N=n\biggr]P\bigl(D\geq0,N=n\bigr)\notag\\ &= P(N=0)+\sum_{n=1}^\infty E\biggl[1_{\{D\geq0,N=n\}}\biggl(\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{n}}+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{n}}\biggr)\biggr]\notag\\ &=P(N=0)+E\biggl[J1_{\{N\geq1\}}\biggl(\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{N}}+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{N}}\biggr)\biggr]\,. \end{align} Now note that \begin{align} E\bigl[J1_{\{N\geq1\}}\abs{V}N^{-1/2}\bigr]&=E\Bigl[J1_{\{N\geq1\}}N^{-1/2}E\bigl[\abs{V}\,\bigl|\,N,N^s\bigr]\Bigr]\notag\\ &\leq E\Bigl[J1_{\{N\geq1\}}N^{-1/2}\sqrt{E\bigl[V^2\,\bigl|\,N,N^s\bigr]}\Bigr]\notag\\ &=\frac{1}{\sigma}E\Bigl[J1_{\{N\geq1\}}N^{-1/2}\sqrt{c^2D+a^2D^2}\Bigr]\label{ce28mz}\\ &\leq\frac{b}{\sigma}E\bigl[JD1_{\{N\geq1\}}N^{-1/2}\bigr]\label{ce28}\,. \end{align} It remains to bound $A_2$. We may assume that $P(D<0)>0$ since otherwise $A_2=0$.
Noting that $N^s\geq1$ almost surely, similarly to \eqref{ce25} we obtain \begin{align*} A_2&=\sum_{m=1}^\infty P\bigl(z+(V\wedge0)<W_{N^s}\leq z+(V\vee0)\,\bigl|\,D<0,N^s=m\bigr)\notag\\ &\qquad\cdot P\bigl(D<0,N^s=m\bigr)\,. \end{align*} Now, using the fact that conditionally on the event $\{N^s=m\}\cap\{D<0\}$ the random variables $W_{N^s}$ and $V$ are independent and \begin{equation*} \mathcal{L}(W_{N^s}|N^s=m,D<0)=\mathcal{L}(W_m) \end{equation*} in the same manner as \eqref{ce27} we find \begin{equation}\label{ce211} A_2\leq E\biggl[(1-J)\biggl(\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{N^s}}+\frac{2C_\mathcal{K} d^3}{c^3\sqrt{N^s}}\biggr)\biggr]\,. \end{equation} Using \eqref{sbdef} we have \begin{equation}\label{nsexp2} E\bigl[\bigl(N^s\bigr)^{-1}\bigr]=\frac{P(N\geq1)}{E[N]}\leq\frac{1}{\alpha}\,. \end{equation} Thus, from the Cauchy-Schwarz inequality and \eqref{nsexp2} we obtain \begin{align} E\Bigl[(1-J)\frac{\abs{V}}{\sqrt{N^s}}\Bigr]&\leq\sqrt{E\bigl[\bigl(N^s\bigr)^{-1}\bigr]}\sqrt{E\bigl[(1-J)V^2\bigr]}\notag\\ &\leq\frac{1}{\sigma\sqrt{\alpha}}\sqrt{c^2 E\bigl[\abs{D}(1-J)\bigr]+a^2E\bigl[D^2(1-J)\bigr]}\notag\\ &\leq\frac{b}{\sigma\sqrt{\alpha}}\sqrt{E\bigl[D^2(1-J)\bigr]}\label{ce213}\,. \end{align} Similarly, we have \begin{align}\label{ce14} E\Bigl[\frac{1-J}{\sqrt{N^s}}\Bigr]&\leq\sqrt{P(D<0)}\sqrt{E\bigl[\bigl(N^s\bigr)^{-1}\bigr]} \leq\frac{\sqrt{P(D<0)}}{\sqrt{\alpha}}\,. \end{align} Thus, from \eqref{ce27}, \eqref{ce28}, \eqref{ce211}, \eqref{ce213} and \eqref{ce14} we see that $A_1+A_2$ is bounded from above by the right hand side of \eqref{cers2}. Using \eqref{ce27} and \eqref{ce28mz} instead gives the bounds \eqref{cers3} and \eqref{cers4}. \\ \end{proof} \begin{proof}[Proof of Lemma \ref{auxlemma2}] We only prove \eqref{aux21}, the proofs of \eqref{aux22} and \eqref{aux23} being similar and easier.
By the definition of conditional expectation given an event, we have \begin{align}\label{al1} &\quad E\bigl[J\abs{V}1_{\{z-(V\vee0)<W_N\leq z-(V\wedge0)\}}\bigr]\notag\\ &=\sum_{n=0}^\infty E\bigl[1_{\{N=n,D\geq0\}}\abs{V}1_{\{z-(V\vee0)<W_n\leq z-(V\wedge0)\}}\bigr]=E\bigl[1_{\{N=0\}}J\abs{V}\bigr]\notag\\ &\quad+\sum_{n=1}^\infty E\bigl[\abs{V}1_{\{z-(V\vee0)<W_n\leq z-(V\wedge0)\}}\,\bigl|\,N=n,D\geq0\bigr]\cdot P(N=n,D\geq0)\,. \end{align} Now, for $n\geq1$, using the fact that the random variables $W_N$ and $V$ are conditionally independent given the event $\{D\geq0\}\cap\{N=n\}$, from Lemma \ref{ce1} we infer that \begin{align}\label{al2} &\quad E\bigl[\abs{V}1_{\{z-(V\vee0)<W_n\leq z-(V\wedge0)\}}\,\bigl|\,N=n,D\geq0\bigr]\notag\\ &\leq E\Bigl[\abs{V}\Bigl(\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{n}}+\frac{2 C_\mathcal{K} d^3}{c^3\sqrt{n}}\Bigr)\,\Bigl|\,N=n,D\geq0\Bigr]\,. \end{align} Combining \eqref{al1} and \eqref{al2} we get \begin{align}\label{al3} &\quad E\bigl[J\abs{V}1_{\{z-(V\vee0)<W_N\leq z-(V\wedge0)\}}\bigr] \leq E\bigl[1_{\{N=0\}}J\abs{V}\bigr]\notag\\ &\quad+\sum_{n=1}^\infty E\Bigl[\abs{V}\Bigl(\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{n}}+\frac{2 C_\mathcal{K} d^3}{c^3\sqrt{n}}\Bigr)\,\Bigl|\,N=n,D\geq0\Bigr]\cdot P(N=n,D\geq0)\notag\\ &=E\bigl[1_{\{N=0\}}J\abs{V}\bigr]+E\Bigl[1_{\{N\geq1\}}J\abs{V}\Bigl(\frac{\sigma\abs{V}}{c\sqrt{2\pi}\sqrt{N}}+\frac{2 C_\mathcal{K} d^3}{c^3\sqrt{N}}\Bigr)\Bigr]\,. \end{align} Using the Cauchy-Schwarz inequality as well as \begin{equation*} E\bigl[JV^2\bigr]=E\Bigl[JE\bigl[V^2\,\bigl|\,N,N^s\bigr]\Bigr]\leq\frac{b^2}{\sigma^2}E\bigl[JD^2\bigr] \end{equation*} we obtain \begin{equation}\label{al4} E\bigl[1_{\{N=0\}}J\abs{V}\bigr]\leq\frac{b}{\sigma}\sqrt{P(N=0)}\sqrt{E\bigl[JD^2\bigr]}\,. \end{equation} Analogously to \eqref{ce28} one can show that \begin{equation}\label{al5} E\bigl[1_{\{N\geq1\}}N^{-1/2}JV^2\bigr]\leq \frac{b^2}{\sigma^2}E\bigl[1_{\{N\geq1\}}N^{-1/2}JD^2\bigr]\,.
\end{equation} Hence, bound \eqref{aux21} follows from \eqref{al3}, \eqref{al4}, \eqref{ce28} and \eqref{al5}.\\ \end{proof} \end{document}
\begin{document} \title{Infinitely many lattice surfaces with special pseudo-Anosov maps} \author{Kariane Calta} \address{Vassar College} \email{[email protected]} \author{Thomas A. Schmidt} \address{Oregon State University\\Corvallis, OR 97331} \email{[email protected]} \keywords{pseudo-Anosov, SAF invariant, flux, translation surface, Veech group} \subjclass[2010]{37D99 (30F60, 37A25, 37A45, 37F99, 58F15)} \date{3 October 2012} \begin{abstract} We give explicit pseudo-Anosov homeomorphisms with vanishing Sah-Arnoux-Fathi invariant. Any translation surface whose Veech group is commensurable to any of a large class of triangle groups is shown to have an affine pseudo-Anosov homeomorphism of this type. We also apply a reduction to finite triangle groups and thereby show the existence of non-parabolic elements in the periodic field of certain translation surfaces. \end{abstract} \maketitle \section{Introduction} \subsection{Motivation and History} Thurston introduced the notion of pseudo-Anosov homeomorphism in his proof of the Nielsen-Thurston classification theorem of surface diffeomorphisms, eventually published in \cite{T}. Also in that work, Thurston exhibited {\em affine pseudo-Anosov} homeomorphisms; that is, he gave a flat structure on a surface with appropriate conical singularities, for which the map in question is in the affine diffeomorphism group: it is locally affine, with constant linear part. An affine pseudo-Anosov's linear part in the flat coordinates is given by a real hyperbolic matrix, and the dilatation is simply the eigenvalue of largest absolute value for this matrix; the stable directions of the affine pseudo-Anosov map correspond to the fixed points of its linear part. The surface with flat structure involved is what is now called a {\em translation surface}; see below for a definition.
Veech \cite{V} showed that the group of linear parts of the affine diffeomorphisms of a translation surface forms a Fuchsian group; this group is now often called the {\em Veech group} of the translation surface. A {\em lattice surface} is a translation surface for which the Veech group is a lattice in $\text{PSL}(2, \mathbb R)$; that is, it is of cofinite volume with respect to Haar measure. The celebrated Veech dichotomy states in particular that lattice surfaces have what McMullen has dubbed ``optimal dynamics''. Any full transversal to the linear flow in a fixed direction on a translation surface defines, by first return, an interval exchange transformation. The Sah-Arnoux-Fathi (SAF) invariant of the interval exchange transformation gives what McMullen reformulates as his ``flux''; he shows in \cite{Mc} that the vanishing of this flux for an affine pseudo-Anosov homeomorphism implies zero average drift for the leaves of the foliation in its expanding direction; see also \cite{ABB}, \cite{LPV}, \cite{LPV2}. The first example of an affine pseudo-Anosov map with vanishing SAF invariant was given by Arnoux and Yoccoz \cite{AY}; they used a construction involving suspension of interval exchange transformations. Arnoux and Schmidt \cite{AS} found special pseudo-Anosov maps of the lattice surfaces given by gluing two copies of the regular $n$-gon together along opposite edges for $n \in \lbrace 7, 9, 14, 18, 20, 24 \rbrace$. The discovery of these maps was especially surprising considering that the surfaces themselves are among the well-studied first examples of Veech of non-arithmetic lattice surfaces, \cite{V}. Recently, McMullen \cite{Mc} has communicated an example found by Lanneau of a special pseudo-Anosov homeomorphism with vanishing SAF invariant on a genus three surface. We give a large class of pseudo-Anosov homeomorphisms with vanishing SAF invariant. We do this by finding so-called ``special'' affine pseudo-Anosov homeomorphisms.
Long and Reid \cite{LR} call a hyperbolic element of a Fuchsian group {\em special} if its eigenvalues lie in the trace field of the group. Accordingly, an affine pseudo-Anosov homeomorphism is called special if the eigenvalues of its linear part lie in the trace field of the Veech group of its translation surface. Calta and Smillie \cite{CS} show that under fairly mild hypotheses, see Lemma~\ref{l:special}, one can normalize a translation surface so that the set of (cotangents of) directions for flows with vanishing Sah-Arnoux-Fathi invariant forms the trace field (and infinity). With this normalization, each special affine pseudo-Anosov homeomorphism has vanishing Sah-Arnoux-Fathi invariant. \subsection{Main Result} We prove the following result, where $\Delta(m,n,\infty)$ is the usual hyperbolic triangle group; that is, the orientation-preserving subgroup of the group generated by reflections in the sides of a hyperbolic triangle having one ideal vertex and the other two having angles $\pi/m$ and $\pi/n$. (See below for the definition of the invariant trace field of a Fuchsian group.) \begin{Thm}\label{t:nonobstrSpecPA} Suppose that both $m,n$ are even, and there is equality of the trace field and invariant trace field for the triangle group $\Delta(m,n, \infty)$. Then any translation surface whose Veech group is commensurable to $\Delta(m,n, \infty)$ has special pseudo-Anosov elements in its affine group. \end{Thm} Note that Bouw-M\"oller \cite{BM}, as confirmed by Hooper \cite{H}, have shown that for each signature, translation surfaces satisfying these hypotheses do exist.
Using Hooper's presentation of the Bouw-M\"oller surfaces, we give fully explicit special affine pseudo-Anosov homeomorphisms, see Proposition~\ref{p:explicit}, Example~\ref{eg:HooperSurfs} and Figure~\ref{eight4Fig}.\\ We also show that certain other signatures of triangle groups are such that any translation surface whose Veech group is commensurable to a group of the signature must have infinitely many non-periodic directions with vanishing SAF invariant. Informed by this, we obtain a further new special affine pseudo-Anosov homeomorphism, see Example~\ref{eg:sevenSeven} and Figure~\ref{f:fatSeven7andFluxMapped}. This particular example is on a surface whose the trace field is a cubic field, and thus the flow in the expanding direction of the pseudo-Anosov map has rank three in McMullen's sense; we thus can give a pictorial representation of this flow on a genus 15 surface of the type that McMullen \cite{Mc} gives for each of the cubic example of Arnoux and Yoccoz and the recent cubic example of Lanneau, again see Figure~\ref{f:fatSeven7andFluxMapped}. \\ We note here that we had previously \cite{CaltaSchmidt} built continued fraction algorithms for the Veech groups of the translation surfaces studied by \cite{W} --- these groups are Fuchsian triangle groups, of signature $(3,n, \infty)$---, in anticipation of using them to find non-parabolic directions on the surfaces or even special pseudo-Anosov homeomorphisms. The Ward groups have proven to be significantly more resistant to the search for such phenomena than were the original Veech examples, and of course than the translation surfaces with Veech group of signatures that we treat here. \subsection{Thanks} Parts of this work arose out of our conversations during the 2011 Oberwolfach workshop ``Billiards, Flat Surfaces, and Dynamics on Moduli Spaces'', we warmly thank the organizers for that opportunity. 
\section{Background} \subsection{Translation surfaces and Veech groups} A translation surface is a $2$-manifold with finitely many marked points and an atlas whose transition functions are translations. This is equivalent to the definition of a translation surface as a disjoint union of finitely many polygons $P_1, \cdots, P_n$ in $\mathbb R^2$ glued along parallel edges to form a closed surface. The marked points are cone points, which can arise at vertices of the $P_i$ when too many polygons are glued around a single vertex, resulting in a total angle at that vertex of $2k\pi$ where $k \ge 1$ is an integer (a genuine cone point corresponds to $k > 1$). Equivalently, a translation surface can be seen as a pair $(M,\omega)$ where $M$ is a Riemann surface and $\omega$ an abelian differential on $M$ --- away from the zeros of the abelian differential, integration of the abelian differential gives local coordinates with transition functions that are translations. The group $\text{SL}(2,\mathbb R)$ acts on the moduli space of translation surfaces, preserving genus and the number and order of cone points. In the polygonal model, if $S=P_1 \cup \dots \cup P_n$ is a surface and $g \in \text{SL}(2,\mathbb R)$ then $gS$ is defined as $gP_1 \cup \dots \cup gP_n$. The stabilizer of a surface $S$ under this action is called its {\it Veech group}, which is always a non-cocompact Fuchsian group. Generically this group is trivial, but occasionally it is a {\it lattice} subgroup of $\text{SL}(2,\mathbb R)$, which is to say that it has finite covolume. In this case, we refer to the surface as a {\it lattice surface}. Veech \cite{V} proved that a pair of regular $n$-gons glued together along parallel sides forms a lattice surface for $n \geq 4$. Whether or not a surface is a lattice surface has profound implications for the dynamics of the linear flow on the surface.
Veech \cite{V} proved that in any direction $v$ on a lattice surface, the orbits of the linear flow in that direction are either closed or connect two cone points, or all orbits in that direction are uniformly distributed on the surface. In the literature, this dynamic dichotomy has come to be called ``optimal dynamics". \subsection{The Sah-Arnoux-Fathi Invariant} Suppose that $f$ is an interval exchange transformation (iet) on a finite interval $I$, that is, a piecewise linear orientation-preserving isometry of $I$. Then by definition $f$ exchanges $n$ intervals $I_i$ of lengths $l_i$ for $i=1,\dots,n$ by translating each $I_i$ by the amount $t_i$. One can associate to $f$ a certain invariant known as the Sah-Arnoux-Fathi (SAF) invariant that takes values in $\mathbb R \wedge_{\mathbb Q} \mathbb R$ and is defined as $\sum_{i=1}^n l_i \wedge_{\mathbb Q} t_i$. The SAF invariant is a central tool in the study of the dynamics of the linear flow on translation surfaces. Given a direction $v$ on a surface, one can choose an interval transverse to the orbits of the flow in the direction $v$ that meets every orbit. The first return map to this interval is an iet. If the flow in the direction $v$ is periodic then the associated SAF invariant is zero. The converse, however, is false. The dynamics of the flow in SAF zero directions has been an object of recent interest and in this paper we show that there are surfaces for which the flow in a particular SAF zero direction is the expanding direction of a pseudo-Anosov homeomorphism. There is another way to define the SAF invariant of a direction on a translation surface using the $J$ invariant of Kenyon and Smillie \cite{KS}. The $J$ invariant of a polygon takes values in $\mathbb R^2 \wedge_{\mathbb Q} {\mathbb R^2}$. If the vertices of a polygon $P$ are $v_0,\dots,v_n$, then $J(P)=\sum_{i=0}^n v_i \wedge v_{i+1}$ where $v_{n+1}=v_0$.
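The wedge sum defining the SAF invariant can be computed exactly when the interval exchange data lie in a fixed real quadratic field. The following sketch is purely illustrative and is not part of the text: it represents elements of $\mathbb Q(\sqrt 2)$ as pairs of rationals, so that $\sum_i l_i \wedge_{\mathbb Q} t_i$ collapses to a single rational coefficient of $1 \wedge \sqrt 2$.

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) are modeled as pairs (a, b) meaning a + b*sqrt(2),
# with a, b rational.  By bilinearity over Q, for x = a1 + b1*sqrt(2) and
# y = a2 + b2*sqrt(2) one has
#     x ^ y = (a1*b2 - b1*a2) * (1 ^ sqrt(2)),
# since 1 ^ 1 = sqrt(2) ^ sqrt(2) = 0.  So the SAF invariant of an interval
# exchange with data in Q(sqrt(2)) is one rational number.

def wedge(x, y):
    """Coefficient of 1 ^ sqrt(2) in x ^_Q y."""
    (a1, b1), (a2, b2) = x, y
    return Fraction(a1) * Fraction(b2) - Fraction(b1) * Fraction(a2)

def saf(lengths, translations):
    """SAF invariant: sum of l_i ^ t_i for data given in Q(sqrt(2))."""
    return sum(wedge(l, t) for l, t in zip(lengths, translations))

def rotation_saf(t):
    """SAF invariant of the rotation of [0,1) by t: the 2-interval exchange
    with lengths 1-t, t and translations t, t-1."""
    one_minus_t = (1 - Fraction(t[0]), -Fraction(t[1]))
    t_minus_1 = (Fraction(t[0]) - 1, Fraction(t[1]))
    return saf([one_minus_t, t], [t, t_minus_1])
```

For a rotation by $t$ the result is $2\,(1 \wedge t)$, which vanishes exactly when $t$ is rational; this matches the remark above that a periodic iet has vanishing SAF invariant, while its converse fails only for more elaborate examples.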
Since a translation surface $S$ can be realized as a disjoint union of polygons $P_i$ for $i=1, \dots, n$ glued along parallel sides, we define $J(S)$ to be $\sum_{i=1}^n J(P_i)$. Then the projection $J_v(S)$ of $J(S)$ in the direction $v$ is the SAF invariant of the iet which is the first return map on a full transversal to the linear flow in the direction $v$. It is not hard to see that the SAF invariant of a periodic iet is zero. Thus the SAF invariant in a parabolic direction on a surface is zero. Also note that Lemma 2.4 of the Appendix of \cite{C} directly implies that the SAF invariant in the form of $J_v(S)$ is constant on $\text{SL}_2(\mathbb R)$-orbits, in the sense that $J_v(S) = J_{Av}(A\circ S)$ for any $A\in \text{SL}_2(\mathbb R)$. \subsection{Fields and translation surfaces} \indent If a translation surface has at least three directions of vanishing SAF invariant, then Calta and Smillie \cite{CS} show that the surface can be normalized by way of the $\text{SL}_2(\mathbb R)$-action so that the directions with slope $0$, $1$ and $\infty$ have vanishing SAF invariant and prove that on the normalized surface the set of slopes of directions with vanishing invariant is a field, together with infinity. A translation surface so normalized is said to be in {\it standard form}, and the field so described is called the {\it periodic direction field}. In this paper, we are primarily interested in directions on a surface that come from the periodic direction field. On the other hand, Kenyon and Smillie \cite{KS} defined the {\it holonomy field} of a translation surface as the smallest field over which the set of holonomy vectors is contained in a two-dimensional vector space. A holonomy vector is associated via the developing map to a closed, nonsingular curve on the surface or to a closed curve that is a union of saddle connections.
Gutkin and Judge define the {\it trace field} of a surface to be the extension of $\mathbb Q$ generated by the traces of the elements of its Veech group. Since the trace is a conjugacy invariant, the trace field of a given surface is the same as that of any other surface in its $\text{SL}(2,\mathbb R)$ orbit. Calta and Smillie \cite{CS} show that if $S$ is a lattice surface, then the holonomy, trace and periodic direction fields are all equal. The following is a direct implication of the Calta-Smillie \cite{CS} result that the periodic field of a translation surface in standard form equals its trace field. \begin{Lem}\label{l:special} On a translation surface in standard form, the stable directions of an affine pseudo-Anosov have vanishing Sah-Arnoux-Fathi invariant if and only if the pseudo-Anosov is special. \end{Lem} \subsection{ Triangle groups and realizability} Of central importance to us is the fact that triangle groups are (up to finite index) realized as Veech groups. \begin{Thm}[Bouw-M\"oller, Hooper]\label{t:bouwMH} Every hyperbolic triangle group with parabolic elements is commensurable to a group realized as the Veech group of a translation surface. \end{Thm} However, not every full triangle group can be realized as a Veech group. Hubert and Schmidt \cite{HS} remarked that one can use the fundamental observation of Kenyon and Smillie \cite{KS} that the trace field of a translation surface is generated by the trace of any of its affine pseudo-Anosov homeomorphisms to show that no triangle group of signature $(2, 2n, \infty)$ can be realized as a Veech group.
Hooper \cite{H} uses the observation in the form that a Fuchsian group can only be realized as a Veech group if its trace field equals the field generated over the rationals by the traces of the squares of elements of the group; this latter field is called the {\em invariant trace field} of the group, as Margulis proved that it is an invariant of the (wide) commensurability class of the group, see \cite{MR} for a discussion. \section{Special affine pseudo-Anosov homeomorphisms} In this section, we focus on the arithmetic of triangle groups in order to find the linear parts of special affine pseudo-Anosov homeomorphisms. To do this, Lemma~\ref{l:specHypMatrix} is key. It allows us to prove a more precisely worded version of our main theorem, Theorem~\ref{t:mainReworded}, thus giving fully explicit pseudo-Anosov maps, as shown by Example~\ref{eg:HooperSurfs} and Figure~\ref{eight4Fig}. In the final subsection, we give some results about the groups for which the full triangle group is never a Veech group. \subsection{A special hyperbolic matrix} Our approach here is centered on properties of groups realized as Veech groups of translation surfaces. Since any parabolic direction on a translation surface has flow of vanishing Sah-Arnoux-Fathi invariant, we use the following variation of a term used by Calta and Smillie \cite{CS}. \begin{Def} A Fuchsian group is in (group) {\em parabolic standard form} if its set of parabolic fixed points (for its action on the Poincar\'e upper half-plane) includes $0$, $1$ and $\infty$. \end{Def} Note that since the Sah-Arnoux-Fathi invariant of a periodic flow is zero, whenever the Veech group of a translation surface is in parabolic standard form, the surface itself is in the standard form defined by Calta and Smillie. To expedite discussion, we take a specific representation for each of the triangle groups we consider.
Let $G_{m,n}$ be the group generated by \begin{equation}\label{e:generators} A = \begin{pmatrix} 1& 2 \cos \pi/m + 2 \cos\pi/n\\ 0&1\end{pmatrix},\, B = \begin{pmatrix} 2 \cos \pi/m& 1\\ -1&0\end{pmatrix},\, C = \begin{pmatrix} -2 \cos \pi/n& 1\\ -1&0\end{pmatrix}\,, \end{equation} and note that $C= AB$. The group is easily verified to be a Fuchsian triangle group of signature $(m,n,\infty)$. The trace field of $G_{m,n}$ is $K_{m,n} = \mathbb Q(\cos \pi/m, \cos \pi/n)$, see p. 159 of \cite{MR}. \begin{Lem}\label{l:specHypMatrix} Suppose that $m,n$ are distinct and even. If $\alpha$ is a nonzero finite parabolic fixed point of $G_{m,n}$, then multiplication by $\alpha^{-1}$ defines a transformation that conjugates $G_{m,n}$ to a group in parabolic standard form, with special hyperbolic elements. \end{Lem} \begin{proof} Being generated by $A$ and $B$, the group $G_{m,n}$ clearly has infinity as a parabolic fixed point. Since $B$ sends $0$ to infinity, $0$ is also a parabolic fixed point. Recall that the product of any two distinct elliptic elements of order two in $\text{PSL}(2, \mathbb R)$ is a hyperbolic element. Here, we certainly have that $B^{m/2}, C^{n/2}$ are elliptic of order two; their product is thus hyperbolic. Now, for each integer $k$, \begin{equation}\label{e:powersOfBandC} B^k = \begin{pmatrix} b_{k+1} & b_k\\ -b_k&-b_{k-1}\end{pmatrix}\,, \;\; C^k = (-1)^k\begin{pmatrix} c_{k+1} & -c_k\\ c_k&-c_{k-1}\end{pmatrix}\, \end{equation} where $b_k = \sin \frac{k \pi}{m}/\sin \frac{\pi}{m}$, and $c_k = \sin \frac{k \pi}{n}/\sin \frac{\pi}{n}$, see, say, \cite{BKS}; thus, one finds that \[ B^{m/2}\cdot (\pm 1) = \mp 1, \;\; \; C^{n/2}\cdot (\pm 1) = \mp 1\,.\] Thus, their product $B^{m/2} C^{n/2}$ fixes both $-1, 1$. Since $G_{m,n} \subset \text{SL}_2(\mathcal O_K)$, all finite parabolic fixed points of $G_{m,n}$ lie in the field $K_{m,n}$. Let $\alpha$ be as in our hypotheses.
By the triple transitivity of $\text{SL}_2(\mathbb R)$ acting on the real projective line, there is an element $M$ that sends $\alpha$ to $1$ while fixing each of zero and infinity. But, elementary considerations of $2 \times 2$-matrices show that the action of $M$ is simply multiplication by $\alpha^{-1}$. The conjugation of $G_{m,n}$ by $M$ is clearly in parabolic standard form and the hyperbolic fixed point $1$ of $G_{m,n}$ corresponds to the point $\alpha^{-1}$ --- this is an element of $K$ fixed by some hyperbolic element of the conjugate group. That is, the conjugate group has special hyperbolic elements. \end{proof} \subsection{Special affine pseudo-Anosov homeomorphisms} Recall that W.~P.~Hooper determined conditions such that the triangle group $\Delta(m,n, \infty)$ has its trace field equal to its invariant trace field; the conditions are given in terms of the indices $m$, $n$ and their greatest common divisor. Inequality of the two fields is an obstruction to the group being a Veech group of any translation surface. \begin{Def} Given a pair of integers $m,n$, let $\gamma = \gcd(m,n)$. We say that the pair $m, n$ is {\em unobstructed} if neither of the following conditions hold: \begin{enumerate} \item $\gamma = 2$; \item $m/\gamma$ and $n/\gamma$ are both odd. \end{enumerate} \end{Def} Our main result can be more precisely stated as follows. \begin{Thm}\label{t:mainReworded} Let $m,n$ be an unobstructed pair of even integers, and suppose that $\mathcal S$ is a translation surface whose Veech group is commensurable to $G_{m,n}$. Then some power of $B^{m/2}C^{n/2}$ is the derivative of a special affine pseudo-Anosov automorphism of $\mathcal S$. \end{Thm} \begin{proof} Suppose that $\mathcal S$ is a translation surface whose Veech group is commensurable to $G_{m,n}$. Using the $\text{SL}_2(\mathbb R)$-action, we may assume that $\mathcal S$ has as its Veech group a finite index subgroup of $G_{m,n}$. 
Lemma~\ref{l:specHypMatrix} now provides an element of $\text{SL}_2(\mathbb R)$ conjugating $G_{m,n}$ into parabolic standard form while conjugating $B^{m/2}C^{n/2}$ to a special hyperbolic matrix. The set of parabolic fixed points is unaltered by passage to a finite index subgroup, thus the image of $\mathcal S$ by this conjugating element is in standard form. By the work of Calta and Smillie \cite{CS} the set of (non-vertical) directions for which the flow has vanishing Sah-Arnoux-Fathi invariant forms a field, the periodic field, and since the translation surface certainly has some affine pseudo-Anosov automorphism, this equals the trace field of the Veech group. Now, there is some nonzero power of our (special) hyperbolic element of the larger group that belongs to the finite index subgroup. Since the two fixed points are common to the cyclic subgroup generated by a hyperbolic element, this element of the Veech group fixes points in the periodic field. From this, the corresponding pseudo-Anosov map has vanishing Sah-Arnoux-Fathi invariant. Finally, by Lemma~2.4 of the Appendix in \cite{C}, any $M \in \text{SL}_2(\mathbb R)$ sends this direction to a direction on the $M$-image of this surface that also has vanishing Sah-Arnoux-Fathi invariant. In particular, we can return in this way to the original direction and surface we began with; the result thus holds. \end{proof} Our construction easily leads to explicit examples. Hooper \cite{H} explicitly realizes the Bouw-M\"oller translation surfaces in two different ways. First, by way of grid graphs presenting the combinatorics of the intersections of horizontal and vertical cylinders so as to apply the Thurston construction \cite{T}; the resulting surfaces Hooper denotes $(X_{m,n}, \omega_{m,n})$. Second, as translation surfaces he denotes as $(Y_{m,n}, \eta_{m,n})$, formed by appropriately identifying sides of semi-regular polygons. 
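The matrix identities invoked above, that $C = AB$ and that $B^{m/2}C^{n/2}$ is hyperbolic and fixes $\pm 1$ as a M\"obius transformation, are easy to confirm in floating point. The following sketch, for the pair $(m,n) = (8,4)$, is purely illustrative and plays no role in the proofs.

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(X, k):
    """Non-negative integer power of a 2x2 matrix."""
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        R = mat_mul(R, X)
    return R

def mobius(X, z):
    """Action of X on z by fractional linear transformation."""
    return (X[0][0] * z + X[0][1]) / (X[1][0] * z + X[1][1])

def generators(m, n):
    """The matrices A, B, C of equation (e:generators)."""
    cm, cn = 2 * math.cos(math.pi / m), 2 * math.cos(math.pi / n)
    A = [[1.0, cm + cn], [0.0, 1.0]]
    B = [[cm, 1.0], [-1.0, 0.0]]
    C = [[-cn, 1.0], [-1.0, 0.0]]
    return A, B, C

def special_hyperbolic(m, n):
    """The product B^(m/2) C^(n/2) from the proof of the lemma."""
    _, B, C = generators(m, n)
    return mat_mul(mat_pow(B, m // 2), mat_pow(C, n // 2))
```

For $(m,n) = (8,4)$ one checks numerically that $C$ agrees with $AB$, that $B^{4}C^{2}$ fixes both $1$ and $-1$, and that its trace exceeds $2$ in absolute value, so that it is indeed hyperbolic.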
In the case where both $m$ and $n$ are even, there is a natural involution on the surface. Hooper denotes the resulting respective quotients as $(X_{m,n}^{e}, \omega_{m,n}^{e})$, $(Y_{m,n}^{e}, \eta_{m,n}^{e})$. Hooper shows that the Veech group of $(X_{m,n}, \omega_{m,n})$ is an index two subgroup of $G_{m,n}$ and that the transformation $z \mapsto D_{\mu}(z) = (\csc \pi/n) z - \cot \pi/n$ conjugates this to the Veech group of $(Y_{m,n}, \eta_{m,n})$. \begin{Prop}\label{p:explicit} Suppose $m,n$ is an unobstructed pair of even integers. Then on each of Hooper's translation surfaces $(Y_{m,n}, \eta_{m,n})$ and $(Y_{m,n}^{e}, \eta_{m,n}^{e})$, the flow in the direction $(1- \cos \pi/n)/(\sin \pi/n)$ is a stable direction of a special affine pseudo-Anosov homeomorphism. Furthermore, letting \[ \lambda = \dfrac{\cos \frac{\pi}{m} \cos \frac{\pi}{n} + \cos \frac{\pi}{m} + \cos \frac{\pi}{n} + 1}{\sin \frac{\pi}{m} \sin \frac{\pi}{n} }\,, \] the dilatation of this pseudo-Anosov homeomorphism is $\lambda$ if four divides $\gcd(m,n)$ and $\lambda^2$ otherwise. \end{Prop} \begin{proof} Since any power of $B^{m/2}C^{n/2}$ fixes the direction $z=1$, we find that the direction $D_{\mu}(1) = (1- \cos \pi/n)/(\sin \pi/n)$ determines a flow on each of $(Y_{m,n}, \eta_{m,n})$ and $(Y_{m,n}^{e}, \eta_{m,n}^{e})$ with vanishing Sah-Arnoux-Fathi invariant. Hooper shows that $B^2, C^2$ (in our notation) are in the Veech group of $(X_{m,n}, \omega_{m,n})$, whereas neither $B$ nor $C$ is. It follows that $B^{m/2}C^{n/2}$ itself is in this group if and only if four divides $\gcd(m,n)$. Otherwise it is the square of this special hyperbolic that belongs to the group. The dilatation of an affine pseudo-Anosov automorphism is the larger of the two eigenvalues of its linear part, and here this is the same as that of $B^{m/2}C^{n/2}$ or of its square. Thus, the dilatation is as claimed.
\end{proof} \begin{Eg}\label{eg:HooperSurfs} In Figure~\ref{eight4Fig}, we show the result when $(m,n) = (8,4)$; the translation surface $(Y_{8,4}^{e}, \eta_{8,4}^{e})$ is a suspension surface over an interval exchange transformation on eleven intervals with permutation in the usual redundant notation \setcounter{MaxMatrixCols}{20} \[ \begin{pmatrix} 1&2&3& 4&5&6&7&8&9&10&11\\ 7&5&2&10&3&1&11&8 &4&6 &9\end{pmatrix}\,. \] The dilatation of this pseudo-Anosov map is the quartic number $3 + 2 \sqrt{2} + \sqrt{20 + 14 \sqrt{2}}$. \begin{figure} \caption{A pseudo-Anosov map with vanishing Sah-Arnoux-Fathi invariant, indicated as zippered rectangles on Hooper's translation surface $(Y_{8,4}^{e}, \eta_{8,4}^{e})$. Thick (red) intervals comprise the transversal to the flow; rectangles 1,2,3,9,11 are shaded.} \label{eight4Fig} \end{figure} \end{Eg} \subsection{Remarks on the ``obstructed'' setting} We now show that Lemma~\ref{l:specHypMatrix} cannot provide information about the existence of special pseudo-Anosov maps when the group $G_{m,n}$ has its trace field unequal to its invariant trace field. \begin{Prop}\label{p:parabFxPtsNotInInvarTraceField} If $m, n$ are such that the trace field of $G_{m,n}$ does not equal its invariant trace field, then no nonzero parabolic fixed point of $G_{m,n}$ lies in its invariant trace field. \end{Prop} \begin{proof} Recall that the trace field $K_{m,n}$ of $G_{m,n}$ is generated over $\mathbb Q$ by the pair $\{\cos \pi/m, \cos \pi/n\}$ and Hooper shows that the triple $\{\cos 2\pi/m, \cos 2\pi/n, \cos \pi/m \cos \pi/n\}$ generates the invariant trace field, $k_{m,n}$, see also p. 159 of \cite{MR}. Define $\delta = 2 (\cos 2\pi/n + 1)$; note that since $n >2$ our $\delta$ is positive. One easily shows the equality $ K_{m,n} = k_{m,n}(\sqrt{\delta})$ since first a standard double angle formula implies that $2 \cos \pi/n = \sqrt{\delta}$, and then of course $\cos \pi/m = (\cos \pi/m \cos \pi/n)/\cos \pi/n$. 
Any pair of the elements in~\eqref{e:generators} generates $G_{m,n}$, thus this group is generated by \[C = \begin{pmatrix} -\sqrt{\delta}& 1\\ -1&0\end{pmatrix},\; A = \begin{pmatrix} 1&b \sqrt{\delta}\\ 0&1\end{pmatrix},\, \] with \[b = \dfrac{\delta + 4 \cos \frac{\pi}{m} \cos \frac{\pi}{n}}{\delta}\,.\] Of course, $b \in k_{m,n}$ and thus $b \sqrt{\delta} \in K_{m,n} \setminus k_{m,n}$. We thus define two types of elements of $G_{m,n}$ --- an {\em even} element is of the form \[ \begin{pmatrix} a&b \sqrt{\delta}\\ c \sqrt{\delta}&d\end{pmatrix}\,,\] with $a,b,c,d \in k_{m,n}$; similarly, an {\em odd} element is of the form \[ \begin{pmatrix} a\sqrt{\delta}&b \\ c &d\sqrt{\delta}\end{pmatrix}\,; \] in particular, $A$ is of even type while $C$ is of odd type. Multiplication thus is similar to addition of integers in that the product of two elements of the same type is an even element, whereas the product of two elements of distinct types is an odd element. Now, any element of $G_{m,n}$ is a product in powers of $A$ and $C$, and therefore is of one of the two types. But, the image of $\infty$ under such a group element is then of the form $a/c\sqrt{\delta}$ or $a\sqrt{\delta}/c$; any nonzero $a/c \in k_{m,n}$ of course has a multiplicative inverse in this field, and hence were the image of infinity to be in $k_{m,n}$, the contradiction of $\sqrt{\delta}$ also belonging to this field would follow. \end{proof} The notion of even and odd subgroups of $G_{m,n}$ defined in the proof of Proposition~\ref{p:parabFxPtsNotInInvarTraceField} generalizes a notion for the Hecke triangle groups, of signature $(2,q, \infty)$ with $q$ even; see, say, Rankin's {\em review} in the AMS Mathematical Reviews: MR0529968 (80j:10037). \begin{Lem} \label{l:evenGpOnly} Suppose that $m, n$ are such that the trace field of $G_{m,n}$ does not equal its invariant trace field.
If $H$ is a subgroup of $G_{m,n}$ that contains a parabolic element and $H$ is realized as a Veech group of some translation surface, then $H$ is contained in the even subgroup of $G_{m,n}$. \end{Lem} \begin{proof} The Kenyon-Smillie result shows that if a hyperbolic element of $G_{m,n}$ lies in some Veech group, then this element is contained in the even subgroup. Suppose now that some subgroup $H$ is {\em not} contained in the even subgroup of $G_{m,n}$. Since parabolic elements of $G_{m,n}$ are conjugates of powers of $A$, they are even elements; thus $H$ either contains an odd hyperbolic element and we are done, or it contains an odd elliptic element. Denote this odd elliptic element by $E$. Choose a parabolic element $P\in H$, so $P = M A^r M^{-1}$ for some $M \in G_{m,n}$ and some integer $r$. Let $F = M^{-1} E M$, so that \[PE = M A^r M^{-1}\, M F M^{-1} = M A^r F M^{-1}\] has the trace of $A^rF$. Represent $F$ as a $2\times 2$ real matrix with $(2,1)$-entry $c$; this entry is non-zero as $F$ is elliptic. Thus, the trace of $PE$ equals $\text{tr}\, F + r (2 \cos \pi/m + 2 \cos\pi/n) c$. However, we can replace $P \in H$ by any power of $P$ and hence ensure that the trace of this {\em odd} element of $H$ is greater than 2 in absolute value. That is, we have found an odd hyperbolic element in $H$, obstructing this subgroup from being the Veech group of any translation surface. \end{proof} \section{Non-parabolic directions in the periodic field} We show that certain infinite families of $G_{m,n}$ are such that any translation surface with Veech group commensurable to $G_{m,n}$ has infinite classes of non-parabolic directions in its periodic field. A search through these classes can reveal special affine pseudo-Anosov automorphisms, as we found for the $(m,n) = (7,7)$ case; see Example~\ref{eg:sevenSeven}. (This type of informed search was used in \cite{AS}.) The technique we employ is number theoretic.
Since all entries of the generators given in~\eqref{e:generators} are algebraic integers it easily follows that $G_{m,n}$ is a subgroup of $\text{SL}_2(\mathcal O_K)$, where $\mathcal O_K$ is the ring of integers of the field $K= K_{m,n}$. The quotient of $\mathcal O_K$ by any of its prime ideals is a finite field $\mathbb F$, and there is an induced homomorphism from $G_{m,n}$ to a subgroup of $\text{SL}_2(\mathbb F)$, as well as from the projectivisation of $G_{m,n}$ to $\text{PSL}_2(\mathbb F)$; these {\em reduction homomorphisms} are defined by entry-wise reduction of the matrices in our group, that is, each entry is sent to its equivalence class modulo the ideal. Now, $\text{SL}_2(\mathbb F)$ acts on the finite projective line $\mathbb P^1(\mathbb F)$, and if the image of $G_{m,n}$ fails to act transitively there, then $G_{m,n}$ itself must fail to act transitively on $\mathbb P^1(K)$. In this case, the pre-images of elements not in the orbit of infinity are then elements of $K$ that are not parabolic fixed points. This method goes back to Borho \cite{B} and Borho-Rosenberger \cite{BR}, in the setting of the Hecke triangle groups, where it was further pursued by a school around Leutbecher, see \cite{HMTY} for a recent usage in that setting. Underlying the method is the classification of the subgroups of the various $\text{PSL}_2(\mathbb F)$ by Dickson \cite{D} and Macbeath's \cite{M} application of this to study finite triangle groups. We call the kernel of the reduction homomorphism a {\em congruence subgroup} of $G_{m,n}$. Congruence subgroups of the Hecke triangle groups have been studied for various reasons, see for example \cite{P}, \cite{LLT}. The action of the full Galois group $\text{Gal}(\overline{ \mathbb Q}/\mathbb Q)$ on the algebraic curves uniformized by those congruence subgroups of Hecke groups corresponding to surjective reduction maps is studied in \cite{SS}.
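As a concrete illustration of this reduction method, take $m = 4$, $n = 7$ and $\lambda = 2\cos \pi/7$, whose minimal polynomial $x^3 - x^2 - 2x + 1$ reduces modulo $2$ to the irreducible polynomial $x^3 + x^2 + 1$. Reducing modulo a prime above $2\cos \pi/4$, the generator $A$ acts on $\mathbb P^1(\mathbb F_8)$ as $z \mapsto z + \lambda$ and $B$ as $z \mapsto 1/z$ (as $-1 = 1$ in characteristic two). The sketch below is purely illustrative and is not part of the argument; it models $\mathbb F_8$ by $3$-bit integers and finds by breadth-first search that the orbit of $\infty$ contains only $7$ of the $9$ points of $\mathbb P^1(\mathbb F_8)$.

```python
# F_8 = F_2[x]/(x^3 + x^2 + 1), the reduction mod 2 of x^3 - x^2 - 2x + 1,
# the minimal polynomial of lam = 2cos(pi/7).  Elements are 3-bit integers;
# the class of lam is LAM = 0b010.  INF is a formal symbol for infinity.
MOD = 0b1101          # x^3 + x^2 + 1
LAM = 0b010
INF = 'inf'

def f8_mul(x, y):
    """Carry-less multiplication in F_8, reducing by MOD at each shift."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        if x & 0b1000:
            x ^= MOD
        y >>= 1
    return r

def f8_inv(x):
    """Multiplicative inverse in F_8 by brute force (x nonzero)."""
    return next(y for y in range(1, 8) if f8_mul(x, y) == 1)

def act_A(z):
    """Reduced A: z -> z + lam (2cos(pi/4) reduces to 0), fixing infinity."""
    return INF if z == INF else z ^ LAM

def act_B(z):
    """Reduced B: z -> 1/z, swapping 0 and infinity."""
    if z == INF:
        return 0
    return INF if z == 0 else f8_inv(z)

def orbit_of_infinity():
    """Breadth-first search of the orbit of infinity in P^1(F_8)."""
    orbit, frontier = {INF}, [INF]
    while frontier:
        z = frontier.pop()
        for w in (act_A(z), act_B(z)):
            if w not in orbit:
                orbit.add(w)
                frontier.append(w)
    return orbit
```

The orbit found is $\{\infty, 0, \lambda, \lambda^2, 1, \lambda+1, \lambda^2+\lambda\}$, and in particular the class of $\lambda^2+\lambda+1$ is missed, so the image group is not transitive on $\mathbb P^1(\mathbb F_8)$.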
Indeed, as Macbeath showed, the reduction of a triangle group (with parabolics) is almost always the full finite matrix group, see the recent work of Clark and Voight \cite{CV} for further discussion. \begin{Rmk} There is a more elementary manner to prove the existence of non-parabolic points in special cases. Indeed, the trace field $K=K_{m,n}$ of $G_{m,n}$ is totally real, thus in the setting where $G_{m,n}$ is a subgroup of $\text{SL}_2(\mathcal O_K)$, the fact that the cusps of the Hilbert modular group $\text{SL}_2(\mathcal O_K)$ are in 1-to-1 correspondence with the elements of the class group of $K$ shows that whenever the class number of $K$ is greater than one there must be elements of $K$ that are non-parabolic fixed points. See \cite{AS} for further discussion. \end{Rmk} When considering quotients by prime ideals of rings of integers of fields generated by cosine values, the following lemma of Leutbecher is of great utility. \begin{Lem}[Leutbecher \cite{L}]\label{l:Leutbecher} Given an integer $m\ge 3$, let $\lambda = \lambda_m = 2 \cos \pi/m$. If $m$ is not of the form twice a power of a prime, then $\lambda$ is a unit in the ring of integers $\mathcal O_K$ of $K = \mathbb Q(\lambda)$. Otherwise, if $m=2p^k$ for some prime $p$, $\lambda^{\phi(m)}$ is an associate in this ring of $p$; here as usual $\phi(\cdot)$ denotes Euler's totient function. \end{Lem} Similarly, we need the following. \begin{Lem}\label{l:oddStandForm} If at least one of $m, n$ is odd, then $G_{m,n}$ is in parabolic standard form. \end{Lem} \begin{proof} Recall that both $0$ and $\infty$ are parabolic fixed points of any $G_{m,n}$. From \eqref{e:powersOfBandC}, we find that if $m = 2 \ell +1$ or $n = 2 j + 1$ is odd, then \[ B^{\ell}\cdot (-1) = 0, \;\; \; C^{j}\cdot (1) = 0,\] respectively. This holds since $\sin \frac{(\ell +1) \pi}{2 \ell + 1} = \sin \frac{\ell \pi}{2 \ell + 1} $, and similarly in the other case.
Thus, if at least one of $m,n$ is odd, then all three of $0, 1, \infty$ are parabolic fixed points of our group. \end{proof} \subsection{Nonparabolic directions: $m=2^d$ case when odd $n \neq 2^f + 1$}\label{ss:intransM4} \begin{Prop}\label{p:nonParabolics} Suppose that $m = 2^d$ with $d>1$ and that $n$ is odd with $n \neq 2^f + 1$ for any $f$. Then $G_{m,n}$ is in parabolic standard form and is integrally normalized; furthermore, any finite index subgroup of $G_{m,n}$ that is realized as a Veech group is such that the corresponding translation surface has non-parabolic directions with vanishing Sah-Arnoux-Fathi invariant. \end{Prop} \begin{proof} By Lemma~\ref{l:oddStandForm}, $G_{m,n}$ is in parabolic standard form. By Lemma~\ref{l:Leutbecher} with $m= 2^d$, we find that the rational integral ideal $(2)$ factors as $(2 \cos \pi/m)^{\phi(m)}$. Now, with $K = K_{m,n}$ choose any prime ideal of $\mathcal O_K$ lying above $(2 \cos \pi/m)$, say $\mathfrak p$. We have $\mathcal O_K/\mathfrak p \cong \mathbb F_{2^f}$, where $f$ is the residue degree of $\mathfrak p$. This induces a group homomorphism $\text{PSL}_2 (\mathcal O_K) \to \text{PSL}_2 ( \mathbb F_{2^f})$ that sends $B$ to an element of order two, and hence the image of our group is a dihedral group of order $2n$. Arguing as in \cite{BR}, this dihedral group is transitive on $\mathbb P^1(\mathbb F_{2^f})$ only if $n = 2^f + 1$. (Since $(2)$ is totally ramified in $\mathbb Q(2 \cos \pi/m)$, the residue degree of $\mathfrak p$ is the residue degree of the ideal of $\mathbb Q(2 \cos \pi/n)$ that $\mathfrak p$ lies above.) Thus, when $n$ is not of the form $2^f +1$ the orbit of infinity under $G_{m,n}$ does not equal all of $\mathbb P^1(K)$. That is, there are elements of $\mathbb P^1(K)$ that are not parabolic fixed points. Since $K =K_{m,n}$ also equals the invariant trace field $k_{m,n}$ by p.
159 \cite{MR}, and as verified in detail by Hooper, one has that $K$ is the trace field of any finite index subgroup of $G_{m,n}$. But, the union of the parabolic fixed points of any such subgroup is simply the set of parabolic fixed points of $G_{m,n}$. This is hence a proper subset of the trace field of the subgroup. Since $G_{m,n}$ is in parabolic standard form, so is any finite index subgroup; thus, by Calta-Smillie, $\mathbb P^1(K)$ is the set of directions with vanishing Sah-Arnoux-Fathi invariant for the corresponding surface. \end{proof} \begin{Eg} Let $m=4$ and $n=7$. Recall that $\mathbb Z[2 \cos \pi/7]$ is the full ring of integers of $\mathbb Q(2 \cos \pi/7)$. The minimal polynomial of $2 \cos \pi/7$ over $\mathbb Q$ (and hence over $\mathbb Z$, as this is an algebraic integer) is $p(x) = x^3 - x^2 - 2 x + 1$. The reduction of $p(x)$ modulo two is irreducible; from this, the ideal $(2)$ is inert in $\mathbb Q(2 \cos \pi/7)$, and the quotient of $\mathbb Z[2 \cos \pi/7]$ by this ideal is thus a finite field of order $2^3$. Indeed, the orbit of $\infty$ modulo 2 is (by calculations, based on the fact that the orbit of $0$ is given by its orbit under just the reduction of $B$, and using the arithmetic of $\mathbb Q(2 \cos \pi/7)$ to simplify expressions) \[ 0, \infty, \lambda, \lambda^2, 1, \lambda + 1, \lambda^2+\lambda\,\] where $\lambda = 2 \cos \pi/7$. Thus, any element of $K$ in the $G_{4,7}$ orbit of any element of $\mathcal O_K$ equivalent to $\lambda^2+ \lambda + 1$ is {\em not} a parabolic fixed point. \end{Eg} \subsection{Nonparabolic directions: $m=n$ case when odd $m$ not divisible by any $2^f + 1$; another special pseudo-Anosov map}\label{ss:intransDiag} \begin{Prop}\label{p:nonParabolicsDiag} Suppose that $m = n$ is odd and not divisible by any integer $2^f + 1$ for positive $f$.
Any finite index subgroup of $G_{m,n}$ that is realized as a Veech group is such that the corresponding translation surface has non-parabolic directions with vanishing Sah-Arnoux-Fathi invariant. \end{Prop} \begin{proof} Again, $G_{m,n}$ is in parabolic standard form. The matrix $A$ is now clearly congruent to the identity modulo $2 \mathcal O_K$. We thus choose a prime ideal $\mathfrak p$ of $\mathcal O_K$ dividing this ideal. By Lemma~\ref{l:Leutbecher}, neither $B$ nor $C$ is trivial when entries are reduced modulo $\mathfrak p$. Thus, $G_{m,m}$ projects to a non-trivial cyclic subgroup of $\text{SL}_2(\mathbb F_{2^f})$, where $f$ is the residue degree of $\mathfrak p$. The order of this homomorphic image must divide the orders of $B$ and $C$; that is, it must divide $m$. Since $\mathbb P^1(\mathbb F_{2^f})$ has $2^f+1$ elements, we conclude that this homomorphic image is too small to act transitively. But then $G_{m,m}$ fails to act transitively on $\mathbb P^1(K)$. \end{proof} \begin{Eg}\label{eg:sevenSeven} One again finds that the class of $\lambda^2+ \lambda + 1$ is not in the orbit of infinity. In fact, this element itself is fixed by a hyperbolic element of $G_{7,7}$; for simplicity, we take a conjugate element to get a simpler appearing matrix. Let \[ M = A C^5 = \begin{pmatrix}-1 - 2 \lambda^2& -2 + 3 \lambda + 2 \lambda^2\\ -\lambda& -1 + \lambda^2\end{pmatrix}\] (we have of course reduced entries modulo $p(x)$, the minimal polynomial of $\lambda$), so $M$ fixes \[ \dfrac{3 \lambda + \sqrt{ 7 \lambda^2 + \lambda -1}}{2 \lambda}\,.\] One calculates that $\beta = (\alpha+13)/(\alpha-16)$ is a square root of $\alpha = 7 \lambda^2 + \lambda -1$. Thus $M$ does have fixed points in the trace field. There is correspondingly a special affine pseudo-Anosov homeomorphism of totally real cubic dilatation $-9 \lambda^2 + 10 \lambda + 16$ on Hooper's $(Y_{7,7}, \eta_{7,7})$.
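The explicit arithmetic claims in these examples are easy to spot-check. The following sketch (our own, not part of the paper) verifies that $p(x)=x^3-x^2-2x+1$ has no root modulo $2$ (so, being a cubic, it is irreducible over $\mathbb F_2$), and numerically confirms that $\beta=(\alpha+13)/(\alpha-16)$ squares to $\alpha=7\lambda^2+\lambda-1$ for $\lambda=2\cos\pi/7$.

```python
import math

# minimal polynomial of lambda = 2*cos(pi/7)
p = lambda x: x**3 - x**2 - 2*x + 1

# a cubic over F_2 is irreducible iff it has no root in F_2
assert all(p(x) % 2 != 0 for x in (0, 1))

# numerical check that beta = (alpha + 13)/(alpha - 16) is a square root of alpha
lam = 2 * math.cos(math.pi / 7)
assert abs(p(lam)) < 1e-12          # sanity: lam really is a root of p
alpha = 7 * lam**2 + lam - 1
beta = (alpha + 13) / (alpha - 16)
assert abs(beta**2 - alpha) < 1e-9
```

The last assertion only checks the identity to floating-point precision; the exact verification reduces to the polynomial identity $(\alpha+13)^2=\alpha(\alpha-16)^2$ modulo $p(\lambda)$.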
\begin{figure} \caption{A pseudo-Anosov map with vanishing Sah-Arnoux-Fathi invariant, indicated as zippered rectangles on the normalization of Hooper's translation surface $(Y_{7,7}, \eta_{7,7})$ such that all vertices have coordinates in the periodic field. A deterministic walk approximating \`a la McMullen \cite{Mc} a zero flux leaf of this rank three flow, as represented by using Galois automorphisms.} \label{f:fatSeven7andFluxMapped} \end{figure} The periodic field here is totally real and cubic (over the rationals), but as $(Y_{7,7}, \eta_{7,7})$ is of genus 15, this is certainly not the totally real cubic example of a pseudo-Anosov map found by Lanneau and discussed by McMullen in \cite{Mc}. We now pursue McMullen's idea of focussing on the {\em rank} of the flow. To normalize the surface so that all of the coordinates lie in the periodic field, we divide all $x$-coordinates by $\sin \pi/7$ (including adjusting the flow direction, of course); see the left hand side of Figure~\ref{f:fatSeven7andFluxMapped}. We choose a transversal (again in a direction perpendicular to that of our pseudo-Anosov with vanishing Sah-Arnoux-Fathi invariant) and explicitly find the interval exchange transformation given by first return to this transversal; both the set of widths and of translations for this transformation are contained in the periodic field. We can now consider an initial piece of the orbit under the interval exchange map of any point $x$ on the interval, $(x_n)_{n\le N}$, and map this to $(\,(x_n-x)', (x_n-x)''\,)_{n\le N}$, the ordered pairs of the conjugates of the difference of the $n^{\text{th}}$ image from $x$; see the right hand side of Figure~\ref{f:fatSeven7andFluxMapped}. (There, after scaling the transversal interval to have length one, we have taken $x = \lambda^{3}/8$.)
As McMullen discusses, this deterministic walk approximates the continuous leaf when identifying the first homology of the period torus of the real part of (the normalization of) $\eta_{7,7}$ with the periodic field itself. \end{Eg} \end{document}
\begin{document} \title{A maximality result for orthogonal quantum groups} \author{Teodor Banica} \address{T.B.: Department of Mathematics, Cergy-Pontoise University, 95000 Cergy-Pontoise, France. {\tt [email protected]}} \author{Julien Bichon} \address{J.B.: Department of Mathematics, Clermont-Ferrand University, Campus des Cezeaux, 63177 Aubiere Cedex, France. {\tt [email protected]}} \author{Beno\^{\i}t Collins} \address{B.C.: Department of Mathematics, Lyon 1 University, and University of Ottawa, 585 King Edward, Ottawa, ON K1N 6N5, Canada. {\tt [email protected]}} \author{Stephen Curran} \address{S.C.: Department of Mathematics, University of California, Los Angeles, CA 90095, USA. {\tt [email protected]}} \subjclass[2010]{16T05} \keywords{Orthogonal quantum group} \begin{abstract} We prove that the quantum group inclusion $O_n \subset O_n^*$ is ``maximal'', where $O_n$ is the usual orthogonal group and $O_n^*$ is the half-liberated orthogonal quantum group, in the sense that there is no intermediate compact quantum group $O_n\subset G\subset O_n^*$. In order to prove this result, we use: (1) the isomorphism of projective versions $PO_n^*\simeq PU_n$, (2) some maximality results for classical groups, obtained by using Lie algebras and some matrix tricks, and (3) a short five lemma for cosemisimple Hopf algebras. \end{abstract} \maketitle \section*{Introduction} Quantum groups were introduced by Drinfeld \cite{dri} and Jimbo \cite{jim} in order to study ``non-classical'' symmetries of complex systems. This was followed by the fundamental work of Woronowicz \cite{wo1}, \cite{wo2} on compact quantum groups. The key examples which were constructed by Drinfeld and Jimbo, and further analyzed by Woronowicz, were $q$-deformations $G_q$ of classical Lie groups $G$. The idea is as follows: consider the commutative algebra $A = C(G)$.
For a suitable choice of generating ``coordinates'' of this algebra, replace commutativity by the $q$-commutation relations $ab = qba$, where $q > 0$ is a parameter. In this way one obtains an algebra $A_q = C(G_q)$, where $G_q$ is a quantum group. When $q =1$ one then recovers the classical group $G$. For $G=O_n,U_n,S_n$ it was later discovered by Wang \cite{wa1}, \cite{wa2} that one can also obtain compact quantum groups by ``removing'' the commutation relations entirely. In this way one obtains ``free'' versions $O_n^+,U_n^+,S_n^+$ of these classical groups. This construction has been axiomatized in \cite{bsp} in terms of the ``easiness'' condition for compact quantum groups, and has led to several applications in probability. See \cite{ez2}, \cite{ez3}. It is clear from the construction that one has $G \subset G^+$ for $G = O_n,U_n,S_n$. Since $G^+$ can be viewed as a ``liberation'' of $G$, it is natural to wonder whether there are any intermediate quantum groups $G \subset G' \subset G^+$, which could be seen as ``partial liberations'' of $G$. For $O_n,S_n$ this problem has been solved in the case of ``easy'' intermediate quantum groups \cite{bve}, \cite{ez1}. For $S_n$ there are no intermediate easy quantum groups $S_n \subset G' \subset S_n^+$. However for $O_n$ there is exactly one intermediate easy quantum group $O_n \subset O_n^* \subset O_n^+$, called the ``half-liberated'' orthogonal group, which was constructed in \cite{bsp}. At the level of relations among coordinates, this is constructed by replacing the commutation relations $ab = ba$ with the half-commutation relations $abc = cba$. In the larger category of compact quantum groups it is an open problem whether there are intermediate quantum groups $S_n \subset G \subset S_n^+$, or $O_n \subset G \subset O_n^+$ with $G \neq O_n^*$. This is an important question for better understanding the ``liberation'' procedure of \cite{bsp}.
At $n = 4$ (the smallest value at which $S_n \neq S_n^+$), it follows from the results in \cite{bbi} that the inclusion $S_n \subset S_n^+$ is indeed maximal, and it was conjectured in \cite{bbc} that this is the case for any $n\in\mathbb N$. Likewise the inclusion $O_n \subset O_n^* \subset O_n^+$ is known to be maximal at $n = 2$, thanks to the results of Podle\'s in \cite{pod}. In general it is likely that these two problems are related to each other via combinatorial invariants \cite{bve} or cocycle twists \cite{tga}. In this paper we make some progress towards solving this problem in the orthogonal case, by showing that the inclusion $O_n\subset O_n^*$ is maximal. A key tool in our analysis will be the fact that the ``projective version'' of $O_n^*$ is the same as that of the classical unitary group $U_n$. By using a version of the five lemma for cosemisimple Hopf algebras (following ideas from \cite{abi}, \cite{aga}), we are thus able to reduce the problem to showing that the inclusion of groups $PO_n \subset PU_n$ is maximal. We then solve this problem by using some Lie algebra techniques inspired by \cite{afg}, \cite{dyn}. The paper is organized as follows: Section 1 contains background and preliminaries. In Section 2 we prove that $PO_n\subset PU_n$ is maximal. In Section 3 we prove a short five lemma for cosemisimple Hopf algebras, which may be of independent interest. We then use this in Section 4 to prove our main result, namely that $O_n\subset O_n^*$ is maximal. \noindent\textbf{Acknowledgements.} Part of this work was completed during the Spring 2011 program ``Bialgebras in free probability'' at the Erwin Schr\"odinger Institute in Vienna, and T.B., B.C., S.C. are grateful to the organizers for the invitation. The work of T.B., J.B., B.C. was supported by the ANR grants ``Galoisint'' and ``Granma'', the work of B.C. was supported by an NSERC Discovery grant and an ERA grant, and the work of S.C.
was supported by an NSF postdoctoral fellowship and by the NSF grant DMS-0900776. \section{Orthogonal quantum groups} In this section we briefly recall the free and half-liberated orthogonal quantum groups from \cite{wa1}, \cite{bsp}, and the notion of ``projective version'' for a unitary compact quantum group. We will work at the level of Hopf $*$-algebras of representative functions. First we have the following fundamental definition, arising from Woronowicz' work \cite{wo1}. \begin{definition} A \textit{unitary Hopf algebra} is a $*$-algebra $A$ which is generated by elements $\{u_{ij}|1 \leq i,j \leq n\}$ such that $u = (u_{ij})$ and $\overline{u}=(u_{ij}^*)$ are unitaries, and such that: \begin{enumerate} \item There is a $*$-algebra map $\Delta:A \to A \otimes A$ such that $\Delta(u_{ij}) = \sum_{k=1}^n u_{ik} \otimes u_{kj}$. \item There is a $*$-algebra map $\varepsilon:A \to \mathbb C$ such that $\varepsilon(u_{ij}) = \delta_{ij}$. \item There is a $*$-algebra map $S:A \to A^{op}$ such that $S(u_{ij}) = u_{ji}^*$. \end{enumerate} If $u_{ij} = u_{ij}^*$ for $1 \leq i,j \leq n$, we say that $A$ is an \textit{orthogonal Hopf algebra}. \end{definition} It follows that $\Delta,\varepsilon,S$ satisfy the usual Hopf algebra axioms. The motivating example of a unitary (resp. orthogonal) Hopf algebra is $A = \mathcal R(G)$, the algebra of representative functions of a compact subgroup $G \subset U_n$ (resp. $G \subset O_n$). Here the standard generators $u_{ij}$ are the coordinate functions which take a matrix to its $(i,j)$-entry. In fact every commutative unitary Hopf algebra is of the form $\mathcal R (G)$ for some compact group $G \subset U_n$. In general we use the suggestive notation ``$A = \mathcal R(G)$'' for any unitary (resp. orthogonal) Hopf algebra, where $G$ is a \textit{unitary (resp. orthogonal) compact quantum group}. Of course any group-theoretic statements about $G$ must be interpreted in terms of the Hopf algebra $A$.
It can be shown that a unitary Hopf algebra has an enveloping $C^*$-algebra, satisfying Woronowicz' axioms in \cite{wo1}. In general there are several ways to complete a unitary Hopf algebra into a $C^*$-algebra, but in this paper we will ignore this problem and work at the level of unitary Hopf algebras. The following examples of Wang \cite{wa1} are fundamental to our considerations. \begin{definition} The universal unitary Hopf algebra $A_u(n)$ is the universal $*$-algebra generated by elements $\{u_{ij}|1 \leq i,j \leq n\}$ such that the matrices $u = (u_{ij})$ and $\overline{u}=(u_{ij}^*)$ in $M_n(A_u(n))$ are unitaries. The universal orthogonal Hopf algebra $A_o(n)$ is the universal $*$-algebra generated by self-adjoint elements $\{u_{ij}|1 \leq i,j \leq n\}$ such that the matrix $u = (u_{ij})_{1 \leq i,j \leq n}$ in $M_n(A_o(n))$ is orthogonal. \end{definition} The existence of the Hopf algebra structural morphisms follows from the universal properties of $A_u(n)$ and $A_o(n)$. As discussed above, we use the notations $A_u(n)= \mathcal R(U_n^+)$ and $A_o(n) = \mathcal R(O_n^+)$, where $U_n^+$ is the \textit{free unitary quantum group} and $O_n^+$ is the \textit{free orthogonal quantum group}. Note that we have $\mathcal R(O_n^+) \twoheadrightarrow \mathcal R(O_n)$; in fact $\mathcal R(O_n)$ is the quotient of $\mathcal R(O_n^+)$ by the relations stating that the coordinates $u_{ij}$ commute. At the level of quantum groups, this means that we have an inclusion $O_n \subset O_n^+$. In other words, $\mathcal R(O_n^+)$ is obtained from $\mathcal R(O_n)$ by ``removing commutativity'' among the coordinates $u_{ij}$. It was discovered in \cite{bsp} that one can obtain a natural orthogonal quantum group by requiring instead that the coordinates ``half-commute''.
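As a quick numerical illustration (ours, not from the paper) that the half-commutation relations $abc=cba$ are strictly weaker than commutativity: antidiagonal $2\times 2$ matrices satisfy them without commuting, and their even-length products commute. This is only a toy model of the relations, not of the Hopf algebra structure itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def antidiag(x, y):
    # 2x2 antidiagonal matrix [[0, x], [y, 0]]
    return np.array([[0.0, x], [y, 0.0]])

a, b, c, d = (antidiag(*rng.normal(size=2)) for _ in range(4))

# half-commutation holds: abc = cba
assert np.allclose(a @ b @ c, c @ b @ a)

# but plain commutation fails in general: ab != ba
assert not np.allclose(a @ b, b @ a)

# products of even length commute (ab and cd are diagonal here),
# mirroring the commutativity used for projective versions
assert np.allclose((a @ b) @ (c @ d), (c @ d) @ (a @ b))
```

The same bookkeeping (products of two antidiagonal matrices are diagonal) is the elementary mechanism behind $abcd=cbad=cdab$ in the proof of Theorem 1.5 below.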
\begin{definition} The half-liberated orthogonal Hopf algebra $A_o^*(n)$ is the universal $*$-algebra generated by self-adjoint elements $\{u_{ij}|1 \leq i,j \leq n\}$ which half-commute in the sense that $abc = cba$ for any $a,b,c \in \{u_{ij}\}$, and such that the matrix $u = (u_{ij})_{1 \leq i,j \leq n}$ in $M_n(A_o^*(n))$ is orthogonal. \end{definition} The existence of the Hopf algebra structural morphisms again follows from the universal properties of $A_o^*(n)$. We use the notation $A_o^*(n) = \mathcal R(O_n^*)$, where $O_n^*$ is the \textit{half-liberated orthogonal quantum group}. Note that we have $\mathcal R(O_n^+) \twoheadrightarrow \mathcal R(O_n^*) \twoheadrightarrow \mathcal R(O_n)$, i.e. $O_n \subset O_n^* \subset O_n^+$. As discussed in the introduction, our aim in this paper is to show that the inclusion $O_n \subset O_n^*$ is maximal. A key tool in our analysis will be the projective version of a unitary quantum group, which we now recall. \begin{definition} The projective version of a unitary compact quantum group $G \subset U_n^+$ is the quantum group $PG\subset U_{n^2}^+$, having as basic coordinates the elements $v_{ij,kl}=u_{ik}u_{jl}^*$. \end{definition} In other words, $P\mathcal R(G)=\mathcal R (PG)\subset \mathcal R(G)$ is the subalgebra generated by the elements $v_{ij,kl}=u_{ik}u_{jl}^*$. It is clearly a Hopf $*$-subalgebra of $\mathcal R(G)$. In the case where $G \subset U_n$ is classical we recover of course the well-known formula $PG=G/(G\cap \mathbb T)$, where $\mathbb T\subset U_n$ is the group of norm one multiples of the identity. The following key result was proved in \cite{bve}. \begin{theorem} We have an isomorphism $PO_n^*\simeq PU_n$. \end{theorem} \begin{proof} First, thanks to the half-commutation relations between the standard coordinates on $O_n^*$, for any $a,b,c,d\in\{u_{ij}\}$ we have $abcd=cbad=cdab$.
Thus the standard coordinates on the quantum group $PO_n^*$ commute ($ab\cdot cd=cd\cdot ab$), so this quantum group is actually a classical group. A representation theoretic study, based on the diagrammatic results in \cite{bsp}, then allows one to show that this classical group is actually $PU_n$. See \cite{bve}. \end{proof} Note that in fact the techniques developed in the present paper enable us to give a very simple proof of this theorem, avoiding the diagrammatic techniques from \cite{bsp}, \cite{bve}. See the last remark in Section 4. \section{Classical group results} In this section we prove that the inclusion $PO_n\subset PU_n$ is maximal in the category of compact groups (we assume throughout the paper that $n\geq 2$; otherwise there is nothing to prove). We will see later on, in Sections 3 and 4 below, that this result can be ``twisted'', in order to reach the maximality of the inclusion $O_n\subset O_n^*$. Let $\tilde O_{n}$ be the group generated by $O_{n}$ and $\mathbb T\cdot I_{n}$ (the group of multiples of the identity of norm one). That is, $\tilde O_n$ is the preimage of $PO_n$ under the quotient map $U_n \twoheadrightarrow PU_n$. Let $\widetilde{SO}_n \subset \tilde O_n$ be the group generated by $SO_n$ and $\mathbb T\cdot I_n$. Note that $\tilde{O}_n = \widetilde{SO}_n$ if $n$ is odd, and if $n$ is even then $\tilde{O}_n$ has two connected components and $\widetilde{SO}_n$ is the component containing the identity. It is a classical fact that a compact matrix group is a Lie group, so $\widetilde{SO}_{n}$ is a Lie group. Let $\mathfrak{so}_{n}$ (resp. $\mathfrak{u}_{n}$) be the real Lie algebras of $SO_{n}$ (resp. $U_{n}$). It is known that $\mathfrak{u}_n$ consists of the matrices $M\in M_n(\mathbb C)$ satisfying $M^*=-M$, and $\mathfrak{so}_n=\mathfrak{u}_n\cap M_n(\mathbb R)$. It is easy to see that the Lie algebra of $\widetilde{SO}_{n}$ is $\mathfrak{so}_{n}\oplus i\mathbb{R}$.
First we need the following lemma: \begin{lemma}\label{soadj} If $n \geq 2$, the adjoint representation of $SO_n$ on the space of real symmetric matrices of trace zero is irreducible. \end{lemma} \begin{proof} Let $X \in M_n(\mathbb R)$ be a non-zero symmetric matrix with trace zero, and let $V$ be the span of $\{UXU^t:U \in SO_n\}$. We must show that $V$ is the space of all real symmetric matrices of trace zero. First we claim that $V$ contains all diagonal matrices of trace zero. Indeed, since we may diagonalize $X$ by conjugating with an element of $SO_n$, $V$ contains some non-zero diagonal matrix of trace zero. Now if $D = diag(d_1,d_2,\dotsc,d_n)$ is a diagonal matrix in $V$, then by conjugating $D$ by \begin{equation*} \begin{pmatrix} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & I_{n-2} \end{pmatrix} \in SO_n \end{equation*} we have that $V$ also contains $diag(d_2,d_1,d_3,\dotsc,d_n)$. By a similar argument we see that for any $1 \leq i,j \leq n$ the diagonal matrix obtained from $D$ by interchanging $d_i$ and $d_{j}$ lies in $V$. Since $S_n$ is generated by transpositions, it follows that $V$ contains any diagonal matrix obtained by permuting the entries of $D$. But it is well-known that this representation of $S_n$ on diagonal matrices of trace zero is irreducible, and hence $V$ contains all such diagonal matrices as claimed. Now if $Y$ is any real symmetric matrix of trace zero, we can find a $U$ in $SO_n$ such that $UYU^{t}$ is a diagonal matrix of trace zero. But we then have $UYU^t \in V$, and hence also $Y \in V$ as desired. \end{proof} \begin{proposition}\label{prop-max-connect} The inclusion $\widetilde{SO}_{n}\subset U_{n}$ is maximal in the category of connected compact groups. \end{proposition} \begin{proof} Let $G$ be a connected compact group satisfying $\widetilde{SO}_n \subset G \subset U_n$. Then $G$ is a Lie group; let $\mathfrak{g}$ denote its Lie algebra, which satisfies $\mathfrak{so}_n\oplus i\mathbb{R} \subset\mathfrak{g}\subset \mathfrak{u}_n$.
Let $ad_{G}$ be the action of $G$ on $\mathfrak{g}$ obtained by differentiating the adjoint action of $G$ on itself. This action turns $\mathfrak{g}$ into a $G$-module. Since $SO_n \subset G$, $\mathfrak{g}$ is also an $SO_n$-module. Now if $G \neq \widetilde{SO}_n$, then since $G$ is connected we must have $\mathfrak{so}_n\oplus i\mathbb{R}\neq\mathfrak{g}$. It follows from the real vector space structure of the Lie algebras $\mathfrak{u}_n$ and $\mathfrak{so}_{n}$ that there exists a non-zero symmetric real matrix $X$ of trace zero such that $iX\in \mathfrak{g}$. But by Lemma \ref{soadj} the space of symmetric real matrices of trace zero is an irreducible representation of $SO_{n}$ under the adjoint action. So $\mathfrak{g}$ must contain $iX$ for every such $X$, and hence $\mathfrak{g}=\mathfrak{u}_{n}$. But since $U_n$ is connected, it follows that $G = U_n$. \end{proof} Our aim is to extend this result to the category of compact groups. To do this we need to compute the \textit{normalizer} of $\widetilde{SO}_n$ in $U_n$, i.e. the subgroup of $U_n$ consisting of the unitaries $U$ for which $U^{-1}XU \in \widetilde{SO}_n$ for all $X \in \widetilde{SO}_n$. For this we need two lemmas. \begin{lemma}\label{commutant} The commutant of $SO_n$ in $ M_n(\mathbb C)$, denoted $SO_n'$, is as follows: \begin{enumerate} \item $SO_2' = \{\begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix} : \alpha , \beta \in \mathbb C\}.$ \item If $n \geq 3$, $SO_n' = \{\alpha I_n : \alpha \in \mathbb C\}.$ \end{enumerate} \end{lemma} \begin{proof} At $n=2$ this is a direct computation. For $n \geq 3$, an element $X \in SO_n'$ commutes with any diagonal matrix having exactly $n-2$ entries equal to $1$ and two entries equal to $-1$. Hence $X$ is a diagonal matrix.
Now since $X$ commutes with any even permutation matrix and $n \geq 3$, it commutes in particular with the permutation matrix associated with the cycle $(i,j,k)$ for any $1\leq i<j<k\leq n$, and hence all the entries of $X$ are the same: we conclude that $X$ is a scalar matrix. \end{proof} \begin{lemma}\label{dense} The set of matrices with non-zero trace is dense in $SO_n$. \end{lemma} \begin{proof} At $n=2$ this is clear since the set of elements in $SO_2$ having a given trace is finite. Assume that $n>2$ and let $T \in SO_n \simeq SO(\mathbb R^n)$ with $Tr(T)=0$. Let $E\subset \mathbb R^n$ be a 2-dimensional subspace preserved by $T$ and such that $T_{|E} \in SO(E)$. Let $\epsilon >0$ and let $S_\epsilon \in SO(E)$ with $||T_{|E}-S_\epsilon|| <\epsilon$ and $Tr(T_{|E}) \not= Tr(S_\epsilon)$ (this is possible by the $n=2$ case). Now define $T_\epsilon \in SO(\mathbb R^n)=SO_n$ by $T_{\epsilon | E} = S_\epsilon$ and $T_{\epsilon|E^\perp}=T_{|E^\perp}$. It is clear that $||T-T_\epsilon|| \leq ||T_{|E}-S_\epsilon||<\epsilon$ and that $Tr(T_\epsilon)=Tr(S_\epsilon)+Tr(T_{|E^\perp}) \not = 0$. \end{proof} \begin{proposition}\label{prop-normalize} $\tilde O_n$ is the normalizer of $\widetilde{SO}_n$ in $U_n$. \end{proposition} \begin{proof} It is clear that $\tilde O_n$ normalizes $\widetilde{SO}_n$, so we must show that if $U \in U_n$ normalizes $\widetilde{SO}_n$ then $U \in \tilde O_n$. First note that $U$ normalizes $SO_n$. Indeed if $X \in SO_n$ then $U^{-1}XU \in \widetilde {SO}_n$, so $U^{-1}XU= \lambda Y$ for some $\lambda \in \mathbb T$ and $Y \in SO_n$. If $Tr(X) \not=0$, we have $\lambda \in \mathbb R$ and hence $\lambda Y = U^{-1}XU \in SO_n$. The set of matrices having non-zero trace is dense in $SO_n$ by Lemma \ref{dense}, so since $SO_n$ is closed and the matrix operations are continuous, we conclude that $U^{-1}XU \in SO_n$ for all $X \in SO_n$. Thus for any $X \in SO_n$, we have $(UXU^{-1})^t(UXU^{-1})= I_n$ and hence $X^tU^tUX= U^tU$. This means that $U^tU \in SO_n'$.
Hence if $n\geq 3$, we have $U^tU= \alpha I_n$ by Lemma \ref{commutant}, with $\alpha \in \mathbb T$ since $U$ is unitary. Hence we have $U = \alpha^{1/2} (\alpha^{-1/2} U)$ with $\alpha^{-1/2} U \in O_n$, and $U \in \tilde O_n$. If $n=2$, Lemma \ref{commutant} combined with the fact that $(U^tU)^t=U^tU$ gives again that $U^tU= \alpha I_2$, and we conclude as in the previous case. \end{proof} We can now extend Proposition \ref{prop-max-connect} as follows. \begin{proposition}\label{prop-class-max} The inclusion $\tilde O_n \subset U_n$ is maximal in the category of compact groups. \end{proposition} \begin{proof} Suppose that $\tilde O_n \subset G \subset U_n$ is a compact group such that $G \neq U_n$. It is a well-known fact that the connected component of the identity in $G$ is a normal subgroup, denoted $G_0$. Since we have $\widetilde{SO}_n \subset G_0 \subset U_n$, by Proposition \ref{prop-max-connect} we must have $G_0 = \widetilde{SO}_n$. But since $G_0$ is normal in $G$, $G$ normalizes $\widetilde{SO}_n$ and hence $G \subset \tilde O_n$ by Proposition \ref{prop-normalize}. \end{proof} We are now ready to state and prove the main result in this section. \begin{theorem}\label{thm-proj-max} The inclusion $PO_n\subset PU_n$ is maximal in the category of compact groups. \end{theorem} \begin{proof} This follows directly from the observation that the maximality of $\tilde O_{n}$ in $U_{n}$ implies the maximality of $PO_n$ in $PU_n$. Indeed, if $PO_n \subset G \subset PU_n$ were an intermediate subgroup, then its preimage under the quotient map $U_n \twoheadrightarrow PU_n$ would be an intermediate subgroup of $\tilde O_{n}\subset U_{n}$, contradicting Proposition \ref{prop-class-max}. \end{proof} \section{A short five lemma} In this section we prove a short five lemma for cosemisimple Hopf algebras (Theorem 3.4 below), a result of independent interest, to be used in Section 4 below.
\begin{definition} A sequence of Hopf algebra maps $$\mathbb C \to B\overset{i}\to A\overset{p}\to L\to\mathbb C$$ is called pre-exact if $i$ is injective, $p$ is surjective and $i(B)=A^{cop}$, where: $$A^{cop}=\{a\in A|(id\otimes p)\Delta(a)=a\otimes 1\}$$ \end{definition} The example that we are interested in is as follows. \begin{proposition} Let $A$ be an orthogonal Hopf algebra with generators $u_{ij}$. Assume that we have a surjective Hopf algebra map $p:A\to\mathbb C\mathbb Z_2$, $u_{ij}\mapsto\delta_{ij}g$, where $\langle g\rangle=\mathbb Z_2$. Let $PA$ be the projective version of $A$, i.e. the subalgebra generated by the elements $u_{ij}u_{kl}$, with the inclusion $i:PA \subset A$. Then the sequence $$\mathbb C\to PA\overset{i}\to A\overset{p}\to\mathbb C\mathbb Z_2 \to \mathbb C$$ is pre-exact. \end{proposition} \begin{proof} We have: $$(id \otimes p)\Delta(u_{i_1j_1}\ldots u_{i_mj_m})= \begin{cases} u_{i_1j_1}\ldots u_{i_mj_m}\otimes 1&\text{if $m$ is even} \\ u_{i_1j_1}\ldots u_{i_mj_m}\otimes g&\text{if $m$ is odd} \end{cases}$$ Thus $A^{cop}$ is the span of monomials of even length, which is clearly $PA$. \end{proof} A pre-exact sequence as in Definition 3.1 is said to be exact \cite{ade} if in addition we have $i(B)^+A=\ker(p)=Ai(B)^+$, where $i(B)^+=i(B)\cap\ker(\varepsilon)$. The pre-exact sequence in Proposition 3.2 is actually exact, but we only need its pre-exactness in what follows. In order to prove the short five lemma, we use the following well-known result. We give a proof for the sake of completeness. \begin{lemma}\label{integral-injective} Let $\theta:A\to A'$ be a Hopf algebra morphism with $A,A'$ cosemisimple and let $h_A,h_{A'}$ be the respective Haar integrals of $A,A'$. Then $\theta$ is injective iff $h_{A'}\theta=h_A$.
\end{lemma} \begin{proof} For $a \in A$, we have: $$\theta (h_{A'}(\theta(a_1))a_2)=h_{A'}(\theta(a)_1)\theta(a)_2=\theta(h_{A'}\theta(a)1)$$ Thus if $\theta$ is injective then $h_{A'}\theta$ is a Haar integral on $A$, and the result follows from the uniqueness of the Haar integral. Conversely, assume that $h_A=h_{A'}\theta$. Then for all $a,b \in A$, we have $h_A(ab)=h_{A'}(\theta(a)\theta(b))$, so if $\theta(a)=0$, we have $h_A(ab)=0$ for all $b\in A$. It follows from the orthogonality relations that $a=0$, and hence $\theta$ is injective. \end{proof} \begin{theorem}\label{short} Consider a commutative diagram of cosemisimple Hopf algebras $$\begin{CD} \mathbb C@>>>B@>{i}>>A@>{\pi}>>L@>>>\mathbb C\\ @.@|@VV{\theta}V@|@.\\ \mathbb C@>>>B@>{i'}>>A'@>{\pi'}>>L@>>>\mathbb C \end{CD}$$ where the rows are pre-exact. Then $\theta$ is injective. \end{theorem} \begin{proof} We have to show that $h_A=h_{A'}\theta$, where $h_A,h_{A'}$ are the respective Haar integrals of $A,A'$. Let $\Lambda$ be the set of isomorphism classes of simple $L$-comodules and consider the Peter-Weyl decomposition of $L$: $$L=\bigoplus_{\lambda\in\Lambda}L({\lambda})$$ We view $A$ as a right $L$-comodule via $(id\otimes\pi)\Delta$. Then $A$ has a decomposition into isotypic components as follows, where $A_\lambda =\{a \in A \ | \ (id \otimes \pi)\circ \Delta(a) \in A \otimes L(\lambda)\}$: $$A=\bigoplus_{\lambda\in\Lambda}A_\lambda$$ It is clear that $A_1=A^{co\pi}$. Then if $\lambda\neq 1$, we have $h_A(A_\lambda)=0$. Indeed for $a\in A_\lambda$, we have: $$a_1\otimes\pi(a_2)\in A\otimes L(\lambda) \implies h_A(a)1=\pi(h_A(a_1)a_2)\in L(\lambda) \implies h_A(a)=0$$ Since $\pi' \theta=\pi$, it is easy to see that $\theta(A_\lambda)\subset A'_\lambda$; as the same argument gives $h_{A'}(A'_\lambda)=0$, for $\lambda\neq 1$ we have $h_{A'}\theta_{|A_\lambda}=0=h_{A| A_\lambda}$. For $\lambda=1$, we have $i(B)=A_1$ and $\theta$ is injective on $i(B)$ since $\theta i=i'$. Hence by Lemma \ref{integral-injective} we have $h_{A'}\theta_{|A_1}=h_{A_1}= h_{A|A_1}$.
Since $A=\oplus_{\lambda\in\Lambda}A_\lambda$ we conclude $h_A=h_{A'}\theta$ and by Lemma \ref{integral-injective} we get that $\theta$ is injective. \end{proof} \section{The main result} We have now all the ingredients for stating and proving our main result in this paper. \begin{theorem}\label{main} The inclusion $O_n \subset O_n^*$ is maximal in the category of compact quantum groups. \end{theorem} \begin{proof} Consider a sequence of surjective Hopf $*$-algebra maps as follows, whose composition is the canonical surjection: $$A_o^*(n)\overset{f}\to A\overset{g}\to\mathcal R(O_n)$$ By Proposition 3.2 we get a commutative diagram of Hopf algebra maps with pre-exact rows: $$\begin{CD} \mathbb C@>>>PA_o^*(n)@>{i_1}>>A_o^*(n)@>{p_1}>>\mathbb C\mathbb Z_2 @>>>\mathbb C\\ @.@VV{f_|}V@VV{f}V@|@.\\ \mathbb C@>>>PA@>{i_2}>>A@>{p_2}>>\mathbb C\mathbb Z_2@>>>\mathbb C\\ @.@VV{g_|}V@VV{g}V@|@.\\ \mathbb C@>>>P\mathcal R(O_n)@>{i_3}>>\mathcal R(O_n)@>{p_3}>>\mathbb C\mathbb Z_2 @>>>\mathbb C \end{CD}$$ Consider now the following composition, with the isomorphism on the left coming from Theorem 1.5: $$\mathcal R(PU_n)\simeq PA_o^*(n)\overset{f_|} \to PA\overset{g_|}\to P\mathcal R(O_n)\simeq\mathcal R(PO_n)$$ This induces, at the group level, the embedding $PO_n\subset PU_n$. By Theorem \ref{thm-proj-max} $f_|$ or $g_|$ is an isomorphism. If $f_|$ is an isomorphism we get a commutative diagram of Hopf algebra morphisms with pre-exact rows: $$\begin{CD} \mathbb C@>>>PA_o^*(n)@>{i_1}>>A_o^*(n)@>{p_1}>>\mathbb C\mathbb Z_2 @>>>\mathbb C\\ @.@|@VV{f}V@|@.\\ \mathbb C@>>>PA_o^*(n)@>{i_2 \circ f_|}>>A@>{p_2}>>\mathbb C\mathbb Z_2 @>>>\mathbb C\\ \end{CD}$$ Then $f$ is an isomorphism by Theorem 3.4. Similarly if $g_|$ is an isomorphism, then $g$ is an isomorphism. \end{proof} Observe that the technique in the proof of Theorem \ref{main} also enables us to prove that $PO_n^* \simeq PU_n$ independently from \cite{bve}. 
Indeed, since $PA_o^*(n)$ is commutative, there exists a compact group $G$ with $PA_o^*(n) \simeq \mathcal R(G)$ and $PO_n \subset G \subset PU_n$. Then Theorem \ref{thm-proj-max} gives $G=PO_n$ or $G=PU_n$. If $G=PO_n$, then as in the proof of Theorem \ref{main}, Theorem \ref{short} gives that $A_o^*(n) \twoheadrightarrow \mathcal R(O_n)$ is an isomorphism, which is false since $A_o^*(n)$ is not commutative for $n\geq 2$. Hence $G= PU_n$. \end{document}
The temperature in a desert rose $1.5$ degrees in $15$ minutes. If this rate of increase remains constant, how many degrees will the temperature rise in the next $2$ hours? Firstly, we find that the temperature rises $\frac{1.5}{15}=0.1$ degrees per minute. Thus, since $2$ hours contains $120$ minutes, we find that the temperature will rise $0.1 \times 120=\boxed{12}$ degrees.
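The rate computation can be double-checked with a short script (illustrative only; exact rational arithmetic avoids any rounding):

```python
from fractions import Fraction

rate = Fraction(3, 2) / 15   # 1.5 degrees per 15 minutes = 0.1 degrees per minute
rise = rate * 120            # 2 hours = 120 minutes
assert rise == 12
```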
\begin{document} \begin{center} \begin{singlespace} \vskip 1cm{\LARGE\bf An Extension of the Abundancy Index to Certain Quadratic Rings \vskip 1cm \large Colin Defant\footnote{This work was supported by National Science Foundation grant no. 1262930.}\footnote{Colin Defant\\ 18434 Hancock Bluff Rd. \\ Dade City, FL 33523}\footnote{2010 {\it Mathematics Subject Classification}: Primary 11R11; Secondary 11N80.\\ \emph{Keywords: } Abundancy index, quadratic ring, solitary number, friendly number.}\\ Department of Mathematics\\ University of Florida\\ United States\\ [email protected]} \end{singlespace} \end{center} \vskip .2 in \begin{abstract} We begin by introducing an extension of the traditional abundancy index to imaginary quadratic rings with unique factorization. After showing that many of the properties of the traditional abundancy index continue to hold in our extended form, we investigate what we call $n$-powerfully solitary numbers in these rings. This definition serves to extend the concept of solitary numbers, which have been defined and studied in the integers. We end with some open questions and a conjecture. \end{abstract} \section{Introduction} Throughout this paper, we will let $\mathbb{N}$ denote the set of positive integers, and we will let $\mathbb{N}_0$ denote the set of nonnegative integers. \par The arithmetic functions $\sigma_k$ are defined, for every integer $k$, by \\ $\displaystyle{\sigma_k(n)=\sum_{\substack{c\vert n\\c>0}}c^k}$, and it is conventional to write $\sigma_1=\sigma$. It is well-known that, for each integer $k\neq 0$, $\sigma_k$ is multiplicative and satisfies $\displaystyle{\sigma_k (p^\alpha)=\frac{p^{k(\alpha+1)}-1}{p^k-1}}$ for all (integer) primes $p$ and positive integers $\alpha$. The abundancy index of a positive integer $n$ is defined by $\displaystyle{I(n)=\frac{\sigma(n)}{n}}$. 
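The definitions above are easy to experiment with. The following sketch (our own code, not part of the paper; brute-force divisor sums over exact rationals) computes $\sigma_k$ and the abundancy index, and checks both the identity $I=\sigma_{-1}$ and the classical friends $6$, $28$, $496$.

```python
from fractions import Fraction

def sigma(n, k=1):
    # sum of d^k over the positive divisors d of n (brute force)
    return sum(Fraction(d) ** k for d in range(1, n + 1) if n % d == 0)

def I(n):
    # abundancy index I(n) = sigma(n)/n
    return sigma(n) / n

# I coincides with sigma_{-1}
assert all(I(n) == sigma(n, -1) for n in range(1, 200))

# 6, 28 and 496 are friends: they share abundancy index 2
assert I(6) == I(28) == I(496) == 2
```

A trial-division sieve would of course be faster via the multiplicative formula for $\sigma_k(p^\alpha)$, but the brute-force version matches the definition verbatim.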
Using the formulas $\displaystyle{\sigma(n)=\prod_{p^{\alpha}\parallel n}\frac{p^{\alpha+1}-1}{p-1}}$ and $\displaystyle{\sigma_{-1}(n)=\prod_{p^{\alpha}\parallel n}\frac{p^{-(\alpha+1)}-1}{p^{-1}-1}}$, it is easy to see that $I=\sigma_{-1}$. Some of the most common questions associated with the abundancy index are those related to friendly numbers. \par Two or more distinct positive integers are said to be friends (with each other) if they have the same abundancy index. For example, $I(6)=I(28)=I(496)=2$, so $6$, $28$, and $496$ are friends. A positive integer that has at least one friend is said to be friendly, and a positive integer that has no friends is said to be solitary. Clearly, $1$ is solitary as $I(n)>1=I(1)$ for any positive integer $n>1$. It is also not difficult to show, using the fact that $I=\sigma_{-1}$, that every prime power is solitary. In the next section, we extend the notions of the abundancy index and friendliness to imaginary quadratic integer rings that are also unique factorization domains. Observing the infinitude of possible such generalizations, we note four important properties of the traditional abundancy index that we wish to preserve (possibly with slight modifications). \begin{itemize} \item The range of the function $I$ is a subset of the interval $[1,\infty)$. \item If $n_1$ and $n_2$ are relatively prime positive integers, then \\ $I(n_1n_2)=I(n_1)I(n_2)$. \item If $n_1$ and $n_2$ are positive integers such that $n_1\vert n_2$, then $I(n_1)\leq I(n_2)$, with equality if and only if $n_1=n_2$. \item All prime powers are solitary. \end{itemize} \par For any square-free integer $d$, let $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ be the quadratic integer ring given by \[\mathcal O_{\mathbb{Q}(\sqrt{d})}=\begin{cases} \mathbb{Z}[\frac{1+\sqrt{d}}{2}], & \mbox{if } d\equiv 1\imod{4}; \\ \mathbb{Z}[\sqrt{d}], & \mbox{if } d\equiv 2, 3 \imod{4}. 
\end{cases}\] \par Throughout the remainder of this paper, we will work in the rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ for different specific or arbitrary values of $d$. We will use the symbol ``$\vert$" to mean ``divides" in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ in which we are working. Whenever we are working in a ring other than $\mathbb{Z}$, we will make sure to emphasize when we wish to state that one integer divides another in $\mathbb{Z}$. For example, if we are working in $\mathbb{Z}[i]$, the ring of Gaussian integers, we might say that $1+i\vert 1+3i$ and that $2\vert 6$ in $\mathbb{Z}$. We will also refer to primes in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ as ``primes," whereas we will refer to (positive) primes in $\mathbb{Z}$ as ``integer primes." Furthermore, we will henceforth focus exclusively on values of $d$ for which $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is a unique factorization domain and $d<0$. In other words, $d\in K$, where we will define $K$ to be the set $\{-163,-67,-43,-19,-11,-7,-3,-2,-1\}$. The set $K$ is known to be the complete set of negative values of $d$ for which $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is a unique factorization domain \cite{Stark67}. \par For now, let us work in a ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ such that $d\!\in\! K$. For an element $a+b\sqrt{d}\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $a,b\in \mathbb{Q}$, we define the conjugate by $\overline{a+b\sqrt{d}}=a-b\sqrt{d}$. We also define the norm of an element $z$ by $N(z)=z\overline{z}$ and the absolute value of $z$ by $\vert z\vert=\sqrt{N(z)}$. From now on, we will assume familiarity with these objects and their properties (for example, $\overline{z_1z_2}=\overline{z_1}\hspace{0.75 mm}\overline{z_2}$ and $N(z)\in \mathbb{N}_0$), which are treated in Keith Conrad's online notes \cite{Conrad}. 
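Concretely, for $d\equiv 2,3\imod{4}$ an element has the form $a+b\sqrt{d}$ with $a,b\in\mathbb{Z}$ and $N(a+b\sqrt{d})=a^2-db^2$. A short Python sketch (our own helper; it ignores the half-integer coordinates that can occur when $d\equiv 1\imod{4}$) illustrating a few standard norm computations:

```python
def norm(a, b, d):
    """Norm of z = a + b*sqrt(d) with integer coordinates:
    N(z) = z * conj(z) = a^2 - d*b^2, a nonnegative integer when d < 0."""
    return a * a - d * b * b

# In Z[i] (d = -1): N(2+i) = 5, and 5 = (2+i)(2-i) splits.
print(norm(2, 1, -1))  # 5
# In Z[i]: N(1+i) = 2, and 2 = -i(1+i)^2 ramifies.
print(norm(1, 1, -1))  # 2
# In Z[sqrt(-2)]: N(1+sqrt(-2)) = 3, and 3 = (1+sqrt(-2))(1-sqrt(-2)) splits.
print(norm(1, 1, -2))  # 3
```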
For $x,y\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$, we say that $x$ and $y$ are associated, denoted $x\sim y$, if and only if $x=uy$ for some unit $u$ in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Furthermore, we will make repeated use of the following well-known facts. \begin{fact} \label{Fact1.1} Let $d\!\in\! K$. If $p$ is an integer prime, then exactly one of the following is true. \begin{itemize} \item $p$ is also a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. In this case, we say that $p$ is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \item $p\sim \pi^2$ and $\pi\sim\overline{\pi}$ for some prime $\pi\in \mathcal O_{\mathbb{Q}(\sqrt{d})}$. In this case, we say $p$ ramifies (or $p$ is ramified) in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \item $p=\pi\overline{\pi}$ and $\pi\not\sim\overline{\pi}$ for some prime $\pi\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$. In this case, we say $p$ splits (or $p$ is split) in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \end{itemize} \end{fact} \begin{fact} \label{Fact1.2} Let $d\!\in\! K$. If $\pi\!\in\!\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is a prime, then exactly one of the following is true. \begin{itemize} \item $\pi\sim q$ and $N(\pi)=q^2$ for some inert integer prime $q$. \item $\pi\sim\overline{\pi}$ and $N(\pi)=p$ for some ramified integer prime $p$. \item $\pi\not\sim\overline{\pi}$ and $N(\pi)=N(\overline{\pi})=p$ for some split integer prime $p$. \end{itemize} \end{fact} \begin{fact} \label{Fact1.3} Let $\mathcal O_{\mathbb{Q}(\sqrt{d})}^*$ be the set of units in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Then $\mathcal O_{\mathbb{Q}(\sqrt{-1})}^*=\{\pm 1,\pm i\}$, $\displaystyle{\mathcal O_{\mathbb{Q}(\sqrt{-3})}^*=\left\{\pm 1,\pm \frac{1+\sqrt{-3}}{2},\pm \frac{1-\sqrt{-3}}{2}\right\}}$, and $\mathcal O_{\mathbb{Q}(\sqrt{d})}^*=\{\pm 1\}$ \\ whenever $d\in K\backslash\{-1,-3\}$. \end{fact} \section{The Extension of the Abundancy Index} For a nonzero complex number $z$, let $\arg (z)$ denote the argument, or angle, of $z$. 
We adopt the convention that $\arg (z)\in [0,2\pi)$ for all nonzero $z\in\mathbb{C}$. For each $d\in K$, we define the set $A(d)$ by \[A(d)=\begin{cases} \{z\in\mathcal O_{\mathbb{Q}(\sqrt{d})} \backslash\{0\}: 0\leq \arg (z)<\frac{\pi} {2}\}, & \mbox{if } d=-1; \\ \{z\in\mathcal O_{\mathbb{Q}(\sqrt{d})} \backslash\{0\}: 0\leq \arg (z)<\frac{\pi} {3}\}, & \mbox{if } d=-3; \\ \{z\in\mathcal O_{\mathbb{Q}(\sqrt{d})} \backslash\{0\}: 0\leq \arg (z)<\pi\}, & \mbox{otherwise}. \end{cases}\] Thus, every nonzero element of $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ can be written uniquely as a unit times a product of primes in $A(d)$. Also, every $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ is associated to a unique element, which we will call $B(z)$, of $A(d)$. We are now ready to define analogues of the arithmetic functions $\sigma_k$. \begin{definition} \label{Def2.1} Let $d\in K$, and let $n\in \mathbb{Z}$. Define the function \newline $\delta_n\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\rightarrow [1,\infty)$ by \[\delta_n (z)=\sum_{\substack{x\vert z\\x\in A(d)}}\vert x \vert^n.\] \end{definition} \begin{remark} \label{Rem2.1} We note that, for each $x$ in the summation in the above definition, we may cavalierly replace $x$ with one of its associates. This is because associated numbers have the same absolute value. In other words, the only reason for the criterion $x\!\in\! A (d)$ in the summation that appears in Definition \ref{Def2.1} is to forbid us from counting associated divisors as distinct terms in the summation, but we may choose to use any of the associated divisors as long as we only choose one. This should not be confused with how we count conjugate divisors (we treat $2+i$ and $2-i$ as distinct divisors of $5$ in $\mathbb{Z}[i]$ because $2+i\not\sim 2-i$). \end{remark} \begin{remark} \label{Rem2.2} We note that, by choosing different values of $d$, the functions $\delta_n$ change dramatically. 
For example, $\delta_2(3)=10$ when we work in the ring $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, but $\delta_2(3)=16$ when we work in the ring $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$. Perhaps it would be more precise to write $\delta_n(z,d)$, but we will omit the latter component for convenience. We note that we will also use this convention with functions such as $I_n$ (which we will define soon). \end{remark} \par We will say that a function $f\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\!\rightarrow\!\mathbb{R}$ is multiplicative if $f(xy)=f(x)f(y)$ whenever $x$ and $y$ are relatively prime (have no nonunit common divisors). \begin{theorem} \label{Thm2.1} Let $d\!\in\! K$, and let $f, g\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\!\rightarrow\!\mathbb{R}$ be multiplicative functions such that $f(u)=g(u)=1$ for all units $u\in \mathcal O_{\mathbb{Q}(\sqrt{d})}^*$. Define \\ $F\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\rightarrow\mathbb{R}$ by \[F(z)=\sum_{\substack{x,y\in A(d)\\xy\sim z}}f(x)g(y).\] Then $F$ is multiplicative. \end{theorem} \begin{proof} Suppose $z_1, z_2\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ and $\gcd (z_1, z_2)=1$. For any $x, y\in A(d)$ satisfying $xy\sim z_1z_2$, we may write $x=x_1x_2$, $y=y_1y_2$ so that $x_1y_1\sim z_1$ and $x_2y_2\sim z_2$. To make the choice of $x_1$, $x_2$, $y_1$, $y_2$ unique, we require $x_1, y_1\in A(d)$. Conversely, if we choose $x_1, x_2, y_1, y_2\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ such that $x_1, y_1\!\in\! A(d)$, $x_1y_1\sim z_1$, $x_2y_2\sim z_2$, and $x_1x_2, y_1y_2\!\in\! A(d)$, then we may write $x=x_1x_2$ and $y=y_1y_2$ so that $xy\sim z_1z_2$. To simplify notation, write $B(x_2)=x_3$, $B(y_2)=y_3$, and let $C$ be the set of all ordered quadruples $(x_1, x_2, y_1, y_2)$ such that $x_1, x_2, y_1, y_2\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$, $x_1, y_1\in A(d)$, $x_1y_1\sim z_1$, $x_2y_2\sim z_2$, and $x_1x_2, y_1y_2\in A(d)$. 
We have established a bijection between $C$ and the set of ordered pairs $(x,y)$ satisfying $x, y\in A(d)$ and $xy\sim z_1z_2$. Therefore, \[F(z_1z_2)=\sum_{\substack{x, y\in A(d)\\xy\sim z_1z_2}}f(x)g(y)=\sum_{(x_1, x_2, y_1, y_2)\in C}f(x_1x_2)g(y_1y_2)\] \[=\sum_{(x_1, x_2, y_1, y_2)\in C}f(x_1)f(x_2)g(y_1)g(y_2)\] \[=\sum_{(x_1, x_2, y_1, y_2)\in C}f(x_1)f(B(x_2))g(y_1)g(B(y_2))\] \[=\sum_{\substack{x_1, y_1\in A(d)\\x_1y_1\sim z_1}}f(x_1)g(y_1)\sum_{\substack{x_3, y_3\in A(d)\\x_3y_3\sim z_2}}f(x_3)g(y_3)=F(z_1)F(z_2).\] \end{proof} \begin{corollary} \label{Cor2.1} For any integer $n$, $\delta_n$ is multiplicative. \end{corollary} \begin{proof} Noting that $\delta_n(w_1)=\delta_n(w_2)$ whenever $w_1\sim w_2$, we may let \newline $f, g\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\rightarrow\mathbb{R}$ be the functions defined by $f(z)=\vert z\vert ^n$ and $g(z)=1$ for all $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$. Then the desired result follows immediately from Theorem \ref{Thm2.1}. \end{proof} \begin{definition} \label{Def2.2} For each positive integer $n$, define the function \\ $I_n\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\rightarrow[1,\infty)$ by $\displaystyle{I_n(z)=\frac{\delta_n(z)}{\vert z\vert ^n}}$. We say that two or more numbers $z_1, z_2, \ldots, z_r\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ are \textit{$n$-powerfully friendly} (or \textit{$n$-powerful friends}) \textit{in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$} if $I_n(z_j)=I_n(z_k)$ and $\vert z_j\vert\neq\vert z_k\vert$ for all distinct $j, k\in \{1, 2, \ldots, r\}$. Any $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ that has no $n$-powerful friends in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is said to be \textit{$n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$}. \end{definition} \begin{remark} \label{Rem2.3} Whenever $n=1$, we will omit the adjective ``$1$-powerfully" in the preceding definitions. 
\end{remark} As an example, we will let $d=-1$ so that $\mathcal O_{\mathbb{Q}(\sqrt{d})}=\mathbb{Z}[i]$. Let us compute $I_2(9+3i)$. We have $9+3i=3(1+i)(2-i)$, so $\delta_2(9+3i)=N(1)+N(3)+N(1+i)+N(2-i)+N(3(1+i))+N(3(2-i))+N((1+i)(2-i))+N(3(1+i)(2-i))=1+9+2+5+18+45+10+90=180$. Then $\displaystyle{I_2(9+3i)=\frac{180}{N(3(1+i)(2-i))}=2}$. Although $I_2(3+9i)$ is also equal to $2$, $3+9i$ and $9+3i$ are not $2$-powerful friends in $\mathbb{Z}[i]$ because $\vert 3+9i\vert=\vert 9+3i\vert$. We now establish some important properties of the functions $I_n$. \begin{theorem} \label{Thm2.2} Let $n\!\in\!\mathbb{N}$, $d\!\in\! K$, and $z_1, z_2, \pi\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ with $\pi$ a prime. Then, if we are working in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, the following statements are true. \begin{enumerate}[(a)] \item The range of $I_n$ is a subset of the interval $[1,\infty)$, and $I_n(z_1)=1$ if and only if $z_1$ is a unit in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. If $n$ is even, then $I_n(z_1)\in\mathbb{Q}$. \item $I_n$ is multiplicative. \item $I_n(z_1)=\delta_{-n}(z_1)$. \item If $z_1\vert z_2$, then $I_n(z_1)\leq I_n(z_2)$, with equality if and only if $z_1\sim z_2$. \item If $z_1\sim \pi ^k$ for a nonnegative integer $k$, then $z_1$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \end{enumerate} \end{theorem} \begin{proof} The first sentence in part $(a)$ is fairly clear, and the second sentence becomes equally clear if one uses the fact that $\vert z_1\vert^n\!\!\in\mathbb{N}$ whenever $n$ is even. To prove part $(b)$, suppose that $z_1$ and $z_2$ are relatively prime elements of $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Then, by Corollary \ref{Cor2.1}, $\displaystyle{I_n(z_1z_2)=\frac{\delta_n(z_1z_2)}{\vert z_1z_2\vert ^n}=\frac{\delta_n(z_1)\delta_n(z_2)}{\vert z_1\vert ^n\vert z_2\vert ^n}=I_n(z_1)I_n(z_2)}$. 
In order to prove part $(c)$, it suffices, due to the truth of part $(b)$, to prove that $I_n(\pi ^\alpha)=\delta_{-n}(\pi ^\alpha)$ for any prime $\pi$ and nonnegative integer $\alpha$. To do so is fairly routine, as \[I_n(\pi ^\alpha)=\frac{\delta_n(\pi ^\alpha)}{\vert\pi ^\alpha\vert^n} =\frac{\sum_{j=0}^\alpha\vert \pi ^j\vert ^n}{\vert \pi ^\alpha\vert ^n}=\sum_{j=0}^\alpha\vert \pi ^{j-\alpha}\vert ^n\] \[=\sum_{j=0}^\alpha\vert\pi ^{\alpha-j}\vert ^{-n}=\sum_{l=0}^\alpha\vert \pi ^l\vert ^{-n}=\delta_{-n}(\pi ^\alpha).\] The truth of statement $(d)$ follows from part $(c)$ because, if $z_1\vert z_2$, then \[I_n(z_2)=\delta_{-n}(z_2)=\sum_{\substack{x\vert z_2\\x\in A(d)}}\vert x\vert ^{-n}\] \[=\sum_{\substack{x\vert z_1\\x\in A(d)}}\vert x\vert ^{-n}+\sum_{\substack{x\vert z_2\\x\nmid z_1\\x\in A(d)}}\vert x\vert ^{-n}=I_n(z_1)+\sum_{\substack{x\vert z_2\\x\nmid z_1\\x\in A(d)}}\vert x\vert ^{-n}.\] \par Finally, for part $(e)$, we provide a proof for the case when $n$ is even. We postpone the proof for the case in which $n$ is odd until the next section. Let $\pi$ be a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, and suppose that $z_1\sim \pi ^k$ for a nonnegative integer $k$. If $k=0$, then $z_1$ is a unit and the result follows from part $(a)$. Therefore, assume $k>0$. Assume, for the sake of finding a contradiction, that $I_n(z_1)=I_n(z_2)$ and $\vert z_1\vert\neq\vert z_2\vert$ for some $z_2\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$. Under this assumption, we have $\vert z_2\vert ^n\delta_n(z_1)=\vert z_1\vert ^n\delta_n(z_2)$. Either $N(\pi)=p$ is an integer prime or $N(\pi)=q^2$, where $q$ is an integer prime. \par First, suppose $N(\pi)=p$ is an integer prime. Then the statement \newline $\vert z_2\vert ^n\delta_n(z_1)=\vert z_1\vert ^n\delta_n(z_2)$ is equivalent to $N(z_2)^{n/2}\delta_n(\pi ^k)=p^{kn/2}\delta_n(z_2)$. 
Noting that $N(z_2)^{n/2}$, $\delta_n(\pi ^k)$, and $\delta_n(z_2)$ are integers (because $n$ is even) and that $p\nmid\delta_n(\pi ^k)=1+p^{n/2}+\cdots+p^{kn/2}$ in $\mathbb{Z}$, we find $p^{kn/2}\vert N(z_2)^{n/2}$ in $\mathbb{Z}$. This implies that $p^k\vert N(z_2)$ in $\mathbb{Z}$, and we conclude that there exist nonnegative integers $t_1, t_2$ satisfying $\pi ^{t_1}\overline{\pi}^{t_2}\vert z_2$ and $t_1+t_2=k$. If $\pi\sim\overline{\pi}$, then we have $\pi ^k\vert z_2$, from which part $(d)$ yields the desired contradiction. Otherwise, $\pi$ and $\overline{\pi}$ are relatively prime, so we may use parts $(b)$ and $(d)$ to write \[I_n(z_2)\geq I_n(\pi ^{t_1})I_n(\overline{\pi}^{t_2})=\frac{1+p^{n/2}+\cdots+p^{t_1n/2}}{p^{t_1n/2}}\frac{1+p^{n/2}+\cdots+p^{t_2n/2}} {p^{t_2n/2}}\] \[=\frac{(1+p^{n/2}+\cdots+p^{t_1n/2})(1+p^{n/2}+\cdots+p^{t_2n/2} )}{p^{kn/2}}\] \[\geq \frac{1+p^{n/2}+\cdots+p^{kn/2}}{p^{kn/2}}=I_n(\pi ^k)=I_n(z_2).\] This implies that $I_n(z_2)=I_n(\pi ^{t_1}\overline{\pi}^{t_2})$, from which part $(d)$ tells us that $z_2\sim \pi ^{t_1}\overline{\pi}^{t_2}$. Therefore, $\vert z_2\vert=\vert\pi ^{t_1}\overline{\pi}^{t_2}\vert=\sqrt{p}^{t_1+t_2}=\sqrt{p}^k=\vert\pi^k\vert=\vert z_1\vert$, which we assumed was false. \par Now, suppose that $N(\pi)=q^2$, where $q$ is an integer prime ($q$ is inert). Then the statement $\vert z_2\vert ^n\delta_n(z_1)=\vert z_1 \vert ^n\delta_n(z_2)$ is equivalent to \newline $N(z_2)^{n/2}\delta_n(\pi ^k)=q^{kn} \delta_n(z_2)$. As before, $N(z_2)^{n/2}$, $\delta_n(\pi ^k)$, and $\delta_n(z_2)$ are integers, and $q\nmid \delta_n(\pi ^k)=1+q^n+\cdots+q^{kn}$ in $\mathbb{Z}$. Therefore, $q^{kn}\vert N(z_2)^{n/2}$ in $\mathbb{Z}$, so $q^{2k}\vert N(z_2)$ in $\mathbb{Z}$. As $q$ is inert, this implies that $q^k\vert z_2$, so $z_1\vert z_2$ (note that $z_1\sim \pi ^k\sim q^k$). Therefore, part $(d)$ provides the final contradiction, and the proof is complete. 
\end{proof} It is much easier to deal with the functions $I_n$ when $n$ is even than when $n$ is odd because, when $n$ is even, the values of $\delta_n(z)$ and $\vert z\vert ^n$ are positive integers. Therefore, we will devote the next section to developing an understanding of the functions $I_n$ for odd values of $n$. \section{When $n$ is Odd} We begin by establishing some definitions and lemmata that will later prove themselves useful. Let $W$ be the set of all square-free positive integers, and write $W=\{w_0, w_1, w_2, \ldots\}$ so that $w_0=1$ and $w_i<w_j$ for all nonnegative integers $i<j$. Let $F$ be the set of all finite linear combinations of elements of $W$ with rational coefficients. That is, $F=\{a_0+a_1\sqrt{w_1}+\cdots+a_m\sqrt{w_m}: a_0, a_1, \ldots, a_m\!\in\!\mathbb{Q}, m\!\in\!\mathbb{N}_0\}$. For any $r\!\in\! F$, the choice of the rational coefficients is unique. More formally, if $a_0+a_1\sqrt{w_1}+\cdots+a_m\sqrt{w_m}=b_0+b_1\sqrt{w_1}+\cdots+b_m\sqrt{w_m}$, where $a_0, a_1, \ldots, a_m, b_0, b_1, \ldots, b_m\in\mathbb{Q}$, then $a_i=b_i$ for all $i\in \{0, 1, \ldots, m\}$ \cite{Klazar09}. Note that $F$ is a subfield of the real numbers. \begin{definition} \label{Def3.1} For $r\!\in\! F$ and $j\in\mathbb{N}_0$, let $C_j(r)$ be the unique rational coefficient of $\sqrt{w_j}$ in the expansion of $r$. That is, the sequence $(C_j(r))_{j=0}^\infty$ is the unique infinite sequence of rational numbers that has finitely many nonzero terms and that satisfies $\displaystyle{r=\sum_{j=0}^\infty C_j(r)\sqrt{w_j}}$. \end{definition} As an example, $\displaystyle{C_5\left(\frac{3}{5}-\sqrt{6}+\frac{1}{3}\sqrt{7}\right)=\frac{1}{3}}$ because $w_5=7$. \begin{definition} \label{Def3.2} Let $p$ be an integer prime. For $r\in F$, we say that \textit{$r$ has a $\sqrt{p}$ part} if there exists some positive integer $j$ such that $C_j(r)\neq 0$ and $p\vert w_j$ (in $\mathbb{Z}$). 
We say that $r$ does not have a $\sqrt{p}$ part if no such positive integer $j$ exists. \end{definition} For example, if $\displaystyle{r=\frac{1}{2}+3\sqrt{10}}$, then $r$ has a $\sqrt{2}$ part, and $r$ has a $\sqrt{5}$ part. However, $\displaystyle{\frac{1}{2}+3\sqrt{10}}$ does not have a $\sqrt{7}$ part. \begin{lemma} \label{Lem3.1} If $r_1, r_2\in F$ each do not have a $\sqrt{p}$ part for some integer prime $p$, then $r_1r_2$ does not have a $\sqrt{p}$ part. \end{lemma} \begin{proof} Suppose $p\vert w_j$ for some positive integer $j$. Then, if we let $SF(n)$ denote the square-free part of an integer $n$ and consider the basic algebra used to multiply elements of $F$, we find that \[C_j(r_1r_2)=\sum_{\substack{i_1, i_2\in\mathbb{N}_0\\SF(w_{i_1}w_{i_2})=w_j}}C_{i_1}(r_1)C_{i_2}(r_2)\sqrt{\frac{w_{i_1}w_{i_2}}{w_j}}.\] For every pair of nonnegative integers $i_1, i_2$ satisfying $SF(w_{i_1}w_{i_2})=w_j$, either $p\vert w_{i_1}$ or $p\vert w_{i_2}$. This implies that either $C_{i_1}(r_1)=0$ or $C_{i_2}(r_2)=0$ by the hypothesis that each of $r_1$ and $r_2$ does not have a $\sqrt{p}$ part. Thus, $C_j(r_1r_2)=0$. As $w_j$ was an arbitrary square-free positive integer divisible by $p$, we conclude that $r_1r_2$ does not have a $\sqrt{p}$ part. \end{proof} \begin{lemma} \label{Lem3.2} If each of $r_1, r_2, \ldots, r_l\in F$ does not have a $\sqrt{p}$ part for some integer prime $p$, then $r_1r_2\cdots r_l$ does not have a $\sqrt{p}$ part. \end{lemma} \begin{proof} The desired result follows immediately from repeated use of Lemma \ref{Lem3.1}. \end{proof} \begin{lemma} \label{Lem3.3} If $r_1\in F$ has a $\sqrt{p}$ part and $r_2\in F\backslash\{0\}$ does not have a $\sqrt{p}$ part for some integer prime $p$, then $r_1r_2$ has a $\sqrt{p}$ part. 
\end{lemma} \begin{proof} Write $\displaystyle{r_1=r_3+\sum_{i=1}^ka_i\sqrt{x_i}}$, where $r_3\in F$ does not have a $\sqrt{p}$ part and, for all distinct $i, j\in \{1, 2, \ldots, k\}$, we have $a_i\in\mathbb{Q}\backslash\{0\}$, $x_i\in W$, $p\vert x_i$ in $\mathbb{Z}$, and $x_i\neq x_j$. If we write $\displaystyle{v_i=\frac{x_i}{p}}$ for all $i\in \{1, 2, \ldots, k\}$, then each $v_i$ is a square-free positive integer that is not divisible by $p$. Therefore, \\ $\displaystyle{r_1r_2=\left(r_3+\sqrt{p}\sum_{i=1}^ka_i\sqrt{v_i}\right)r_2=r_2r_4\sqrt{p}+r_2r_3}$, where $\displaystyle{r_4=\sum_{i=1}^ka_i\sqrt{v_i}}$. By the hypothesis that $r_1$ has a $\sqrt{p}$ part, $r_4\neq 0$. As each of $r_2, r_4$ is nonzero and does not have a $\sqrt{p}$ part, Lemma \ref{Lem3.1} guarantees that $r_2r_4$ is nonzero and does not have a $\sqrt{p}$ part. Now, it is easy to see that this implies that $\sqrt{p}r_2r_4$ has a $\sqrt{p}$ part. Furthermore, each of $r_2, r_3$ does not have a $\sqrt{p}$ part, so Lemma \ref{Lem3.1} tells us that $r_2r_3$ does not have a $\sqrt{p}$ part. Thus, it is clear that $r_2r_4\sqrt{p}+r_2r_3$ has a $\sqrt{p}$ part, so the proof is complete. \end{proof} \begin{lemma} \label{Lem3.4} Let us fix $d\in K$ and work in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Let $\pi$ be a prime such that $N(\pi)=p$ is an integer prime. If $n$ is an odd positive integer and $\pi\vert z$ for some $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$, then $I_n(z)\in F$ and $I_n(z)$ has a $\sqrt{p}$ part. \end{lemma} \begin{proof} It is clear that $I_n(z)\in F$ (this is also true for positive even integer values of $n$). Write $\displaystyle{z\sim \pi ^\alpha\overline{\pi}^\beta\prod_{j=1}^r\pi_j^{\alpha_j}}$, where, for all distinct $j, k\in\{1, 2, \ldots, r\}$, $\pi_j$ is prime, $N(\pi_j)\neq p$, $\alpha_j$ is a positive integer, and $\pi_j\not\sim\pi_k$. Fix some $j\in\{1, 2, \ldots, r\}$. 
If $\pi_j$ is associated to an inert integer prime, then $I_n(\pi_j^{\alpha_j})\in\mathbb{Q}$, so $I_n(\pi_j^{\alpha_j})$ does not have a $\sqrt{p}$ part. If $N(\pi_j)=p_0$ for some integer prime $p_0$, then $I_n(\pi_j^{\alpha_j})=a+b\sqrt{p_0}$ for some $a, b\in\mathbb{Q}$. Again, we conclude that $I_n(\pi_j^{\alpha_j})$ does not have a $\sqrt{p}$ part because $p_0\neq p$. Writing $\displaystyle{x=\prod_{j=1}^r\pi_j^{\alpha_j}}$, Lemma \ref{Lem3.2} and the multiplicativity of $I_n$ guarantee that $I_n(x)$ does not have a $\sqrt{p}$ part. We now consider two cases. \par First, consider the case in which $p$ ramifies in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ (meaning $\pi\sim \overline{\pi}$). Then $z\sim \pi^{\alpha+\beta}x$. Using part $(c)$ of Theorem \ref{Thm2.2}, we have $I_n(\pi^{\alpha+\beta})=\delta_{-n}(\pi^{\alpha+\beta})=$ \\ $\displaystyle{\sum_{m=0}^{\alpha+\beta}\frac{1}{\vert\pi^m\vert ^n}=\sum_{m=0}^{\alpha+\beta}\frac{1}{\sqrt{p}^{mn}}=t_1+t_2\sqrt{p}}$, where $t_1$ and $t_2$ are positive rational numbers. Thus, $I_n(\pi^{\alpha+\beta})$ has a $\sqrt{p}$ part, so Lemma \ref{Lem3.3} guarantees that $I_n(z)$ has a $\sqrt{p}$ part. \par Next, consider the case in which $p$ splits in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ (meaning $\pi\not\sim\overline{\pi}$). Then we have $\displaystyle{I_n(\pi^\alpha\overline{\pi}^\beta)=\delta_{-n}(\pi^\alpha)\delta_{-n}(\overline{\pi}^\beta)=\left(\sum_{m=0}^\alpha\frac{1}{\vert \pi^m\vert ^n}\right)\left(\sum_{m=0}^\beta\frac{1}{\vert \overline{\pi}^m\vert ^n}\right)=}$ \\ $\displaystyle{\left(\sum_{m=0}^\alpha\frac{1}{\sqrt{p}^{mn}}\right)\left(\sum_{m=0}^\beta\frac{1}{\sqrt{p}^{mn}}\right)=(u_1+u_2\sqrt{p})(u_3+u_4\sqrt{p})}$, where $u_1, u_2, u_3, u_4$ \\ are positive rational numbers. Then $(u_1+u_2\sqrt{p})(u_3+u_4\sqrt{p})=u_1u_3+pu_2u_4+(u_1u_4+u_2u_3)\sqrt{p}$. As $u_1u_4+u_2u_3>0$, $I_n(\pi^\alpha\overline{\pi}^\beta)$ has a $\sqrt{p}$ part. 
Once again, Lemma \ref{Lem3.3} guarantees that $I_n(z)$ has a $\sqrt{p}$ part. \end{proof} \begin{lemma} \label{Lem3.5} Let $p$ be an integer prime, and let $m_1, m_2, \beta_1, \beta_2$ be nonnegative integers satisfying $(p^{m_1}+p^{m_2})(p^{\beta_1+\beta_2+1}+1)=(p^{\beta_1}+p^{\beta_2})(p^{m_1+m_2+1}+1)$. Then either $m_1=\beta_1$ and $m_2=\beta_2$ or $m_1=\beta_2$ and $m_2=\beta_1$. \end{lemma} \begin{proof} Without loss of generality, we may write $m_1=\min(m_1, m_2, \beta_1, \beta_2)$. We may also assume that $\beta_1\leq\beta_2$ so that it suffices to show that $m_1=\beta_1$ and $m_2=\beta_2$. Dividing each side of the given equation by $p^{m_1}$, we have \begin{equation} \label{Eq3.1} (1+p^{m_2-m_1})(p^{\beta_1+\beta_2+1}+1)=(p^{\beta_1-m_1}+p^{\beta_2-m_1})(p^{m_1+m_2+1}+1). \end{equation} Suppose $m_1=\beta_1$. Then \eqref{Eq3.1} becomes $(1+p^{m_2-m_1})(p^{m_1+\beta_2+1}+1)=(1+p^{\beta_2-m_1})(p^{m_1+m_2+1}+1)$. Now, define a function $f\colon\mathbb{R}\rightarrow\mathbb{R}$ by \\ $\displaystyle{f(x)=\frac{p^{m_1+x+1}+1}{1+p^{x-m_1}}}$. We may differentiate to get \[f'(x)=\frac{(p^{m_1+x})(p^{2m_1+1}-1)}{(p^x+p^{m_1})^2}\log{p}>0,\] so $f$ is one-to-one. As $f(m_2)=f(\beta_2)$, we have $m_2=\beta_2$. Therefore, we only need to show that $m_1=\beta_1$. \par Suppose $p\neq 2$. Then, if $m_1<\beta_1$, we may read \eqref{Eq3.1} modulo $p$ to reach a contradiction. Thus, if $p\neq 2$, we are done. Now, suppose $p=2$ and $m_1<\beta_1$ so that \eqref{Eq3.1} becomes \begin{equation} \label{Eq3.2} (1+2^{m_2-m_1})(2^{\beta_1+\beta_2+1}+1)=(2^{\beta_1-m_1}+2^{\beta_2-m_1})(2^{m_1+m_2+1}+1). \end{equation} The right-hand side of \eqref{Eq3.2} is even, which implies that we must have $m_1=m_2$ so that $1+2^{m_2-m_1}=2$. Dividing each side of \eqref{Eq3.2} by $2$ yields $2^{\beta_1+\beta_2+1}+1=(2^{\beta_1-m_1-1}+2^{\beta_2-m_1-1})(2^{2m_1+1}+1)$. As the left-hand side of this last equation is odd, we must have $\beta_1=m_1+1$. 
Therefore, $2^{\beta_1+ \beta_2+1}+1=(1+2^{\beta_2-\beta_1})(2^{2\beta_1-1}+1)=2^{\beta_1+\beta_2-1}+2^{\beta_2-\beta_1}+2^{2\beta_1-1}+1$. If we subtract $2^{\beta_1+\beta_2-1}+1$ from each side of this last equation, we get $3\cdot2^{\beta_1+\beta_2-1}=2^{\beta_2-\beta_1}+2^{2\beta_1-1}$. However, $3\cdot2^{\beta_1+\beta_2-1}>2^{\beta_1+\beta_2-1}+2^{\beta_1+\beta_2-1}>2^{\beta_2-\beta_1}+2^{2\beta_1-1}$, so we have reached our final contradiction. This completes the proof. \end{proof} We now possess the tools necessary to complete the proof of part $(e)$ of Theorem \ref{Thm2.2}. We do so in the following two theorems. \begin{theorem} \label{Thm3.1} Let us work in a ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in K$, and let $n$ be an odd positive integer. Let $\pi$ be a prime such that $\pi\sim\overline{\pi}$, and let $k$ be a positive integer. Then $\pi^k$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \end{theorem} \begin{proof} We suppose, for the sake of finding a contradiction, that there exists $x\!\in\!\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ such that $\vert x\vert\neq\vert\pi^k\vert$ and $I_n(x)=I_n(\pi^k)$. Suppose that $\pi_0$ is a prime such that $\pi_0\vert x$ and $N(\pi_0)=p_0$ is an integer prime. Then, by Lemma \ref{Lem3.4}, $I_n(x)$ has a $\sqrt{p_0}$ part. This implies that $I_n(\pi^k)$ has a $\sqrt{p_0}$ part. However, if $N(\pi)\!=\!p$, where $p$ is an integer prime, then $I_n(\pi^k)=\displaystyle{\sum_{m=0}^k\frac{1}{\sqrt{p}^{mn}}=t_1+t_2\sqrt{p}}$ for some $t_1, t_2\in\mathbb{Q}$. Hence, we find that $p_0=p$, which means that $\pi_0\sim\pi$. On the other hand, if $\pi$ is associated to an inert integer prime $q$, then $I_n(\pi^k)\in\mathbb{Q}$. Therefore, if a prime that is not associated to $\pi$ divides $x$, that prime must be associated to an inert integer prime. We now consider two cases. \par Case 1: In this case, $\pi\sim q$, where $q$ is an inert integer prime. 
This implies that all primes dividing $x$ must be associated to inert integer primes, so $\delta_n(x)$ and $\vert x\vert$ are integers. From $I_n(x)=I_n(\pi^k)$ and $\vert\pi^k\vert^n=q^{kn}$, we have $\delta_n(x)q^{kn}=\delta_n(\pi^k)\vert x\vert^n$. We know that $\displaystyle{\delta_n(\pi^k)=\sum_{j=0}^k\vert\pi^j\vert^n=1+\sum_{j=1}^kq^{jn}}$, so $q\nmid\delta_n(\pi^k)$ in $\mathbb{Z}$. Therefore, $q^{kn}$ divides $\vert x\vert^n$ in $\mathbb{Z}$, so $q^k$ divides $\vert x\vert$ in $\mathbb{Z}$. We conclude that $q^k\vert x$, so $\pi^k\vert x$. However, part $(d)$ of Theorem \ref{Thm2.2} tells us that this is a contradiction. \par Case 2: In this case, $N(\pi)=p$ is an integer prime. Because all of the prime divisors of $x$ that are not associated to $\pi$ must be associated to inert integer primes, we may write $\displaystyle{x\sim\pi^\alpha\prod_{j=1}^tq_j^{\beta_j}}$, where $\alpha\in\mathbb{N}_0$ and, for each $j\in\{1, 2, \ldots, t\}$, $q_j$ is an inert integer prime and $\beta_j$ is a positive integer. Note that $\alpha\geq 1$ because $I_n(\pi^k)$ has a $\sqrt{p}$ part, which implies that $I_n(x)$ has a $\sqrt{p}$ part. Also, $\alpha<k$ because, otherwise, $\pi^k\vert x$, from which part $(d)$ of Theorem \ref{Thm2.2} yields a contradiction. We have \[I_n(\pi^k)=\frac{\sum_{l=0}^k\sqrt{p}^{ln}}{\sqrt{p}^{kn}}=\frac{\sqrt{p}^{(k+1)n}-1}{\sqrt{p}^{kn}(\sqrt{p}^n-1)},\] and \[I_n(\pi^\alpha)=\frac{\sum_{l=0}^\alpha\sqrt{p}^{ln}}{\sqrt{p}^{\alpha n}} =\frac{\sqrt{p}^{(\alpha+1)n}-1}{\sqrt{p}^{\alpha n}(\sqrt{p}^n-1)}.\] Now, $\displaystyle{\frac{I_n(\pi^k)}{I_n(\pi^\alpha)}=I_n\left(\prod_{j=1}^tq_j^{\beta_j}\right)\in\mathbb{Q}}$ because each integer prime $q_j$ is inert. This implies that $\displaystyle{(p^{(\alpha+1)n}-1)\frac{I_n(\pi^k)}{I_n(\pi^\alpha)}\in\mathbb{Q}}$. 
We have \[(p^{(\alpha+1)n}-1)\frac{I_n(\pi^k)}{I_n(\pi^\alpha)}=(p^{(\alpha+1)n}-1)\frac{\sqrt{p}^{(k+1)n}-1}{(\sqrt{p}^{(\alpha+1)n}-1)\sqrt{p}^{(k-\alpha)n}}\] \[=(\sqrt{p}^{(\alpha+1)n}-1)(\sqrt{p}^{(\alpha+1)n}+1)\frac{\sqrt{p}^{(k+1)n}-1}{(\sqrt{p}^{(\alpha+1)n}-1)\sqrt{p}^{(k-\alpha)n}}\] \[=\frac{(\sqrt{p}^{(k+1)n}-1)(\sqrt{p}^{(\alpha+1)n}+1)}{\sqrt{p}^{(k-\alpha)n}}\in\mathbb{Q}.\] If $k$ is odd, then $\sqrt{p}^{(k+1)n}-1$ is rational, which implies that $\alpha$ must also be odd. Similarly, if $\alpha$ is odd, then $k$ must be odd. Therefore, $k$ and $\alpha$ have the same parities, which implies that $\sqrt{p}^{(k-\alpha)n}$ is rational. This implies $(\sqrt{p}^{(k+1)n}-1)(\sqrt{p}^{(\alpha+1)n}+1)\in \mathbb{Q}$. We clearly have a contradiction if $k$ and $\alpha$ are both even, so they must both be odd. As $k$ is odd, we have \[I_n(\pi^k)=\delta_{-n}(\pi^k)=\sum_{l=0}^k\frac{1}{\sqrt{p}^{ln}}=\left(\sum_{m=0}^{\frac{k-1}{2}}\frac{1}{\sqrt{p}^{2mn}}\right)+\left(\sum_{m=0}^{\frac{k-1}{2}}\frac{1}{\sqrt{p}^{2mn}}\right)\left(\frac{1}{\sqrt{p}^n}\right)\] \[=\left(\sum_{m=0}^{\frac{k-1}{2}}\frac{1}{p^{mn}}\right)\left(1+\frac{1}{\sqrt{p}^n}\right)=\frac{h_1}{p^n-1}\left(1+\frac{1}{\sqrt{p}^n}\right),\] where \[h_1=\left(\sum_{m=0}^{\frac{k-1}{2}}\frac{1}{p^{mn}}\right)\left(p^n-1\right)=\frac{p^{\frac{k+1}{2}n}-1}{p^{\frac{k-1}{2}n}}.\] Similarly, if we write $\displaystyle{h_2=\frac{p^{\frac{\alpha+1}{2}n}-1}{p^{\frac{\alpha-1}{2}n}}}$, then we have \\ $I_n(\pi^\alpha)=\displaystyle{\frac{h_2}{p^n-1}\left(1+\frac{1}{\sqrt{p}^n}\right)}$. 
Now, \[I_n\left(\prod_{j=1}^tq_j^{\beta_j}\right)=\frac{I_n(\pi^k)}{I_n(\pi^\alpha)}=\frac{h_1}{h_2}=\frac{p^{\frac{k+1}{2}n}-1}{p^{\frac{k-\alpha}{2}n}\left(p^{\frac{\alpha+1}{2}n}-1\right)},\] so \[\left[\delta_n\left(\prod_{j=1}^tq_j^{\beta_j}\right)\right]\left[p^{\frac{k-\alpha}{2}n}\right]\left[p^{\frac{\alpha+1}{2}n}-1\right]=\left[\left\lvert\prod_{j=1}^tq_j^{\beta_j}\right\rvert^n\right]\left[p^{\frac{k+1}{2}n}-1\right].\] Notice that each bracketed expression in this last equation is an integer, and notice that $p$ divides the left-hand side in $\mathbb{Z}$. However, $p$ does not divide the right-hand side in $\mathbb{Z}$, so we have a contradiction. \end{proof} We now only have to prove part $(e)$ of Theorem \ref{Thm2.2} for the case in which $n$ is odd and $\pi\not\sim\overline{\pi}$. We do so as a corollary of the following more general theorem. \begin{theorem} \label{Thm3.2} Let us work in a ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in K$, and let $n$ be an odd positive integer. Let $\pi$ be a prime such that $\pi\not\sim\overline{\pi}$, and let $k_1, k_2$ be nonnegative integers. Then $\pi^{k_1}\overline{\pi}^{k_2}$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ unless, possibly, if $k_1$ and $k_2$ are both odd. In the case that $k_1$ and $k_2$ are both odd, any friend of $\pi^{k_1}\overline{\pi}^{k_2}$, say $x$, must satisfy $\displaystyle{x\sim\pi^{\alpha_1}\overline{\pi}^{\alpha_2}\prod_{j=1}^tq_j^{\gamma_j}}$ , where $\alpha_1, \alpha_2$ are odd positive integers and, for each $j\in\{1, 2, \ldots, t\}$, $q_j$ is an inert integer prime and $\gamma_j$ is a positive integer. \end{theorem} \begin{proof} First note that Fact \ref{Fact1.2} tells us that $N(\pi)=N(\overline{\pi})=p$, where $p$ is an integer prime. 
\par We suppose, for the sake of finding a contradiction, that there exists $x\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ such that $\vert x\vert\neq\vert\pi^{k_1}\overline{\pi}^{k_2}\vert$ and $I_n(x)=I_n(\pi^{k_1}\overline{\pi}^{k_2})$. Suppose that $\pi_0$ is a prime such that $\pi_0\vert x$ and $N(\pi_0)=p_0$ is an integer prime. Then, by Lemma \ref{Lem3.4}, $I_n(x)$ has a $\sqrt{p_0}$ part. This implies that $I_n(\pi^{k_1}\overline{\pi}^{k_2})$ has a $\sqrt{p_0}$ part. However, as $N(\pi)=N(\overline{\pi})=p$, we must have $I_n(\pi^{k_1}\overline{\pi}^{k_2})=I_n(\pi^{k_1})I_n(\overline{\pi}^{k_2})=\displaystyle{\left(\sum_{m=0}^{k_1}\frac{1}{\sqrt{p}^{mn}}\right)\left(\sum_{m=0}^{k_2}\frac{1}{\sqrt{p}^{mn}}\right)}=t_1+t_2\sqrt{p}$ for some $t_1, t_2\in\mathbb{Q}$, so we find that $p_0=p$. Therefore, if a prime that is not associated to $\pi$ or $\overline{\pi}$ divides $x$, that prime must be associated to an inert integer prime. Hence, we may write $\displaystyle{x\sim\pi^{\alpha_1}\overline{\pi}^{\alpha_2} \prod_{j=1}^tq_j^{\gamma_j}}$, where $\alpha_1, \alpha_2\in\mathbb{N}_0$ and, for each $j\in\{1, 2, \ldots, t\}$, $q_j$ is an inert integer prime and $\gamma_j$ is a positive integer. 
\par We have \[I_n(\pi^{k_1})=\frac{\sum_{l=0}^{k_1}\sqrt{p}^{ln}}{\sqrt{p}^{k_1n}}=\frac{\sqrt{p}^{(k_1+1)n}-1}{\sqrt{p}^{k_1n}(\sqrt{p}^n-1)},\] \[I_n(\overline{\pi}^{k_2})=\frac{\sum_{l=0}^{k_2}\sqrt{p}^{ln}}{\sqrt{p}^{k_2n}}=\frac{\sqrt{p}^{(k_2+1)n}-1}{\sqrt{p}^{k_2n}(\sqrt{p}^n-1)},\] \[I_n(\pi^{\alpha_1})=\frac{\sum_{l=0}^{\alpha_1}\sqrt{p}^{ln}}{\sqrt{p}^{\alpha_1n}}=\frac{\sqrt{p}^{(\alpha_1+1)n}-1}{\sqrt{p}^{\alpha_1n}(\sqrt{p}^n-1)},\] and \[I_n(\overline{\pi}^{\alpha_2})=\frac{\sum_{l=0}^{\alpha_2}\sqrt{p}^{ln}}{\sqrt{p}^{\alpha_2n}}=\frac{\sqrt{p}^{(\alpha_2+1)n}-1}{\sqrt{p}^{\alpha_2n}(\sqrt{p}^n-1)}.\] \par Now, $\displaystyle{\frac{I_n(\pi^{k_1})I_n(\overline{\pi}^{k_2})}{I_n(\pi^{\alpha_1})I_n(\overline{\pi}^{\alpha_2})}=\frac{I_n(\pi^{k_1}\overline{\pi}^{k_2})}{I_n(\pi^{\alpha_1}\overline{\pi}^{\alpha_2})}=I_n\left(\prod_{j=1}^tq_j^{\gamma_j}\right)\in\mathbb{Q}}$ because each integer prime $q_j$ is inert. This implies that \\ $\displaystyle{(p^{(\alpha_1+1)n}-1)(p^{(\alpha_2+1)n}-1)\frac{I_n(\pi^{k_1})I_n(\overline{\pi}^{k_2})}{I_n(\pi^{\alpha_1})I_n(\overline{\pi}^{\alpha_2})}\in\mathbb{Q}}$. We have \[(p^{(\alpha_1+1)n}-1)(p^{(\alpha_2+1)n}-1)\frac{I_n(\pi^{k_1})I_n(\overline{\pi}^{k_2})}{I_n(\pi^{\alpha_1})I_n(\overline{\pi}^{\alpha_2})}\] \[=(p^{(\alpha_1+1)n}-1)(p^{(\alpha_2+1)n}-1)\frac{(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)}{(\sqrt{p}^{(\alpha_1+1)n}-1)(\sqrt{p}^{(\alpha_2+1)n}-1)\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}}\] \[=\frac{(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)(\sqrt{p}^{(\alpha_1+1)n}+1)(\sqrt{p}^{(\alpha_2+1)n}+1)}{\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}}\in\mathbb{Q}.\] \par We now consider several cases. In what follows, we will write \\ $\displaystyle{m_1=\frac{(k_1+1)n-1}{2}}$, $\displaystyle{m_2=\frac{(k_2+1)n-1}{2}}$, $\displaystyle{\beta_1=\frac{(\alpha_1+1)n-1}{2}}$, and \\ $\displaystyle{\beta_2=\frac{(\alpha_2+1)n-1}{2}}$. 
This will simplify notation because, for example, if $k_1$ is even, then $\sqrt{p}^{(k_1+1)n}=p^{m_1}\sqrt{p}$ and $m_1$ is a nonnegative integer. \\ Case 1: $\alpha_1\not\equiv\alpha_2\equiv k_1\equiv k_2\equiv 1\imod{2}$. In this case, $(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)(\sqrt{p}^{(\alpha_2+1)n}+1)\in\mathbb{Q}$, so $\displaystyle{\frac{\sqrt{p}^{(\alpha_1+1)n}+1}{\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}}\in\mathbb{Q}}$. However, this is impossible because $(\alpha_1+1)n$ is odd. By the same argument, we may show that it is impossible to have exactly one of $k_1, k_2, \alpha_1, \alpha_2$ be even. \\ Case 2: $\alpha_1\not\equiv\alpha_2\equiv k_1\equiv k_2\equiv 0\imod{2}$. In this case, $\sqrt{p}^{(\alpha_1+1)n}-1\in\mathbb{Q}$, and $\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}=\mu\sqrt{p}$ for some $\mu\in\mathbb{Q}$. This implies that \[(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)(\sqrt{p}^{(\alpha_2+1)n}+1)\] \[=(p^{m_1}\sqrt{p}-1)(p^{m_2}\sqrt{p}-1)(p^{\beta_2}\sqrt{p}+1)=\lambda\sqrt{p}\] for some $\lambda\in\mathbb{Q}$. We may expand to get \[(p^{m_1}\sqrt{p}-1)(p^{m_2}\sqrt{p}-1)(p^{\beta_2}\sqrt{p}+1)\] \[=((p^{m_1+m_2+1}+1)-(p^{m_1}+p^{m_2})\sqrt{p})(p^{\beta_2}\sqrt{p}+1)\] \[=(p^{m_1+m_2+1}+1-p^{\beta_2+1}(p^{m_1}+p^{m_2}))+(p^{\beta_2}(p^{m_1+m_2+1}+1)-(p^{m_1}+p^{m_2}))\sqrt{p}.\] As $m_1, m_2, \beta_2\in\mathbb{N}_0$, we find that $p^{m_1+m_2+1}+1-p^{\beta_2+1}(p^{m_1}+p^{m_2})$ and $p^{\beta_2}(p^{m_1+m_2+1}+1)-(p^{m_1}+p^{m_2})$ are integers. Therefore, from the equation $(p^{m_1+m_2+1}+1-p^{\beta_2+1}(p^{m_1}+p^{m_2}))+(p^{\beta_2}(p^{m_1+m_2+1}+1)-(p^{m_1}+p^{m_2}))\sqrt{p}=\lambda\sqrt{p}$, we have $p^{m_1+m_2+1}+1-p^{\beta_2+1}(p^{m_1}+p^{m_2})=0$. Reading this last equation modulo $p$, we have a contradiction. The same argument eliminates the case $\alpha_2\not\equiv\alpha_1\equiv k_1\equiv k_2\equiv 0\imod{2}$. \\ Case 3: $k_1\not\equiv k_2\equiv\alpha_1\equiv\alpha_2\equiv 0\imod{2}$.
In this case, $\sqrt{p}^{(k_1+1)n}-1\in\mathbb{Q}$, and $\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}=\mu\sqrt{p}$ for some $\mu\in\mathbb{Q}$. This implies that \[(\sqrt{p}^{(k_2+1)n}-1)(\sqrt{p}^{(\alpha_1+1)n}+1)(\sqrt{p}^{(\alpha_2+1)n}+1)\] \[=(p^{m_2}\sqrt{p}-1)(p^{\beta_1}\sqrt{p}+1)(p^{\beta_2}\sqrt{p}+1)=\lambda\sqrt{p}\] for some $\lambda\in\mathbb{Q}$. We may expand just as we did in Case 2, and we will find $p^{m_2+1}(p^{\beta_1}+p^{\beta_2})-p^{\beta_1+\beta_2+1}-1=0$. Reading this last equation modulo $p$, we have a contradiction. This same argument eliminates the case $k_2\not\equiv k_1\equiv\alpha_1\equiv\alpha_2\equiv 0\imod{2}$. \\ Case 4: $k_1\equiv k_2\equiv 1\imod{2}$, and $\alpha_1\equiv\alpha_2\equiv 0\imod{2}$. In this case, $\displaystyle{\frac{(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)}{\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}}\in\mathbb{Q}}$, so we must have \\ $(\sqrt{p}^{(\alpha_1+1)n}+1)(\sqrt{p}^{(\alpha_2+1)n}+1)=(p^{\beta_1}\sqrt{p}+1)(p^{\beta_2}\sqrt{p}+1)\in\mathbb{Q}$. However, this is impossible because $\beta_1$ and $\beta_2$ are nonnegative integers. \\ Case 5: $k_1\equiv k_2\equiv 0\imod{2}$, and $\alpha_1\equiv\alpha_2\equiv 1\imod{2}$. In this case, $\displaystyle{\frac{(\sqrt{p}^{(\alpha_1+1)n}+1)(\sqrt{p}^{(\alpha_2+1)n}+1)}{\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}}\in\mathbb{Q}}$, so we must have \\ $(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)=(p^{m_1}\sqrt{p}-1)(p^{m_2}\sqrt{p}-1)\in\mathbb{Q}$. However, this is impossible because $m_1$ and $m_2$ are nonnegative integers. \\ Case 6: $k_1\equiv k_2\equiv\alpha_1\equiv\alpha_2\equiv 0\imod{2}$.
In this case, $\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}\in\mathbb{Q}$, so \[(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(k_2+1)n}-1)(\sqrt{p}^{(\alpha_1+1)n}+1)(\sqrt{p}^{(\alpha_2+1)n}+1)\] \[=(p^{m_1}\sqrt{p}-1)(p^{m_2}\sqrt{p}-1)(p^{\beta_1}\sqrt{p}+1)(p^{\beta_2}\sqrt{p}+1)\in\mathbb{Q}.\] One may verify that, after expanding this last expression and noting that $m_1$, $m_2$, $\beta_1$, and $\beta_2$ must be positive integers, we arrive at the requirement $(p^{m_1}+p^{m_2})(p^{\beta_1+\beta_2+1}+1)=(p^{\beta_1}+p^{\beta_2})(p^{m_1+m_2+1}+1)$. Lemma \ref{Lem3.5} then guarantees that either $m_1=\beta_1$ and $m_2=\beta_2$ or $m_1=\beta_2$ and $m_2=\beta_1$, which means that either $k_1=\alpha_1$ and $k_2=\alpha_2$ or $k_1=\alpha_2$ and $k_2=\alpha_1$. Then $\displaystyle{\frac{I_n(\pi^{k_1})I_n(\overline{\pi}^{k_2})}{I_n(\pi^{\alpha_1})I_n(\overline{\pi}^{\alpha_2})}=I_n\left(\prod_{j=1}^tq_j^{\gamma_j}\right)=1}$, which implies that $\displaystyle{\prod_{j=1}^tq_j^{\gamma_j}}$ is a unit. However, we then find that $\vert\pi^{k_1}\overline{\pi}^{k_2}\vert=\vert\pi^{\alpha_1}\overline{\pi}^{\alpha_2}\vert=\vert x\vert$, which we originally assumed was not true. Therefore, this case yields a contradiction. \\ Case 7: $k_1\equiv\alpha_1\equiv 1\imod{2}$ and $k_2\equiv\alpha_2\equiv 0\imod{2}$. In this case, $\displaystyle{\frac{(\sqrt{p}^{(k_1+1)n}-1)(\sqrt{p}^{(\alpha_1+1)n}+1)}{\sqrt{p}^{(k_1+k_2-\alpha_1-\alpha_2)n}}\in\mathbb{Q}}$, so we must have \\ $(\sqrt{p}^{(k_2+1)n}-1)(\sqrt{p}^{(\alpha_2+1)n}+1)=(p^{m_2}\sqrt{p}-1)(p^{\beta_2}\sqrt{p}+1)\in\mathbb{Q}$. Writing $(p^{m_2}\sqrt{p}-1)(p^{\beta_2}\sqrt{p}+1)=(p^{m_2+\beta_2+1}-1)+(p^{m_2}-p^{\beta_2})\sqrt{p}$ and noting that $m_2$ and $\beta_2$ are nonnegative integers, we find that $m_2=\beta_2$. 
Therefore, $k_2=\alpha_2$, so $\displaystyle{I_n\left(\prod_{j=1}^tq_j^{\gamma_j}\right)=\frac{I_n(\pi^{k_1})I_n(\overline{\pi}^{k_2})}{I_n(\pi^{\alpha_1})I_n(\overline{\pi}^{\alpha_2})}=\frac{I_n(\pi^{k_1})}{I_n(\pi^{\alpha_1})}}$. Because $\displaystyle{I_n\left(\prod_{j=1}^tq_j^{\gamma_j}\right)>1}$, we see that $\alpha_1<k_1$. As $k_1$ is odd, we have \[I_n(\pi^{k_1})=\delta_{-n}(\pi^{k_1})=\sum_{l=0}^{k_1}\frac{1}{\sqrt{p}^{ln}} =\left(\sum_{r=0}^{\frac{k_1-1}{2}}\frac{1}{\sqrt{p}^{2rn}}\right)+\left(\sum_{r=0}^{\frac{k_1-1}{2}}\frac{1}{\sqrt{p}^{2rn}}\right)\left(\frac{1}{\sqrt{p}^n}\right)\] \[=\left(\sum_{r=0}^{\frac{k_1-1}{2}}\frac{1}{p^{rn}}\right)\left(1+\frac{1}{\sqrt{p}^n}\right)=\frac{h_1}{p^n-1}\left(1+\frac{1}{\sqrt{p}^n}\right),\] where \[h_1=\left(\sum_{r=0}^{\frac{k_1-1}{2}}\frac{1}{p^{rn}}\right)\left(p^n-1\right)=\frac{p^{\frac{k_1+1}{2}n}-1}{p^{\frac{k_1-1}{2}n}}.\] Similarly, if we write $\displaystyle{h_2=\frac{p^{\frac{\alpha_1+1}{2}n}-1}{p^{\frac{\alpha_1-1}{2}n}}}$, then we have \\ $\displaystyle{I_n(\pi^{\alpha_1})=\frac{h_2}{p^n-1}(1+\frac{1}{\sqrt{p}^n})}$. Now, \[I_n\left(\prod_{j=1}^tq_j^{\gamma_j}\right)=\frac{I_n(\pi^{k_1})}{I_n(\pi^{\alpha_1})}=\frac{h_1}{h_2}=\frac{p^{\frac{k_1+1}{2}n}-1}{p^{\frac{k_1-\alpha_1}{2}n}(p^{\frac{\alpha_1+1}{2}n}-1)},\] so \[\left[\delta_n\left(\prod_{j=1}^tq_j^{\gamma_j}\right)\right]\left[p^{\frac{k_1-\alpha_1}{2}n}\right]\left[p^{\frac{\alpha_1+1}{2}n}-1\right]=\left[\left\lvert\prod_{j=1}^tq_j^{\gamma_j}\right\rvert^n\right]\left[p^{\frac{k_1+1}{2}n}-1\right].\] Now, each bracketed part of this last equation is an integer, and $p$ divides the left-hand side in $\mathbb{Z}$. However, $p$ does not divide the right-hand side in $\mathbb{Z}$, so we have a contradiction. We may use this same argument to find contradictions in the three other cases in which $k_1\not\equiv k_2\imod{2}$ and $\alpha_1\not\equiv \alpha_2\imod{2}$. 
\par One may check that we have found contradictions for all of the possible choices of parities of $k_1$, $k_2$, $\alpha_1$, and $\alpha_2$ except the case in which all four are odd. Therefore, the proof is complete. \end{proof} \begin{corollary} \label{Cor3.1} Let $d\in K$, and let $k, n\in\mathbb{N}$ with $n$ odd. If $\pi$ is a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ such that $\pi\not\sim\overline{\pi}$, then $\pi^k$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \end{corollary} \begin{proof} Setting $k_1=k$ and $k_2=0$ in Theorem \ref{Thm3.2}, we find that $\pi^k$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ because $k_2$ is even. \end{proof} \begin{corollary} \label{Cor3.2} Let $d\!\in\! K$, and let $p$ be an integer prime. Let $k$ be a positive integer that is either even or equal to $1$, and let $n$ be an odd positive integer. If $z\sim p^k$, then $z$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. \end{corollary} \begin{proof} If $p$ is inert or ramified in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, then $z\sim\pi^\alpha$ for some prime $\pi$ and some positive integer $\alpha$. Therefore, $z$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ by part $(e)$ of Theorem \ref{Thm2.2}. If $p$ splits in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ and $k=1$, then $z\sim\pi\overline{\pi}$. Therefore, by Theorem \ref{Thm3.2}, any friend of $z$, say $x$, must satisfy $\displaystyle{x\sim\pi^{\alpha_1}\overline{\pi}^{\alpha_2}\prod_{j=1}^tq_j^{\gamma_j}}$, where $\alpha_1, \alpha_2$ are odd positive integers and, for each $j\in\{1, 2, \ldots, t\}$, $q_j$ is an inert integer prime and $\gamma_j$ is a positive integer. However, this implies that $\pi\overline{\pi}\vert x$, so $z\vert x$, which contradicts part $(d)$ of Theorem \ref{Thm2.2}. Finally, if $p$ splits in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ and $k$ is even, then $z\sim\pi^k\overline{\pi}^k$. As $k$ is even, the result follows from Theorem \ref{Thm3.2}.
\end{proof} \par Note that Corollary \ref{Cor3.1} delivers the final blow in the proof of part $(e)$ of Theorem \ref{Thm2.2}. \section{Concluding Remarks and Open Questions} After the introduction of our generalization of the abundancy index, we quickly become inundated with new questions. We pose a few such problems, acknowledging that their difficulties could span a wide range. \par To begin, we note that we have focused exclusively on rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\!\in\! K$. One could generalize the definitions presented here to the other quadratic integer rings. While complications could surely arise in rings without unique factorization, generalizing the abundancy index to unique factorization domains $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d>0$ does not seem to be a highly formidable task. \par Even if we continue to restrict our attention to the rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\!\in\! K$, we may ask some interesting questions. For example, for a given $n$, what are some examples of $n$-powerful friends in these rings? We might also ask which numbers (or which types of numbers) are $n$-powerfully solitary for a given $n$. For example, the number $21$ is solitary in $\mathbb{Z}$, so it is not difficult to show that $21$ is also solitary in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$. Furthermore, for a given element of some ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, we might ask to find the values of $n$ for which this element is $n$-powerfully solitary. \begin{conjecture} \label{Conj4.1} Let $d\!\in\! K$. If $p$ is an integer prime and $k$ is a positive integer, then $p^k$ is $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ for all positive integers $n$.
As a stronger form of this conjecture, we wonder if $\pi^{\alpha_1}\overline{\pi}^{\alpha_2}$ is necessarily $n$-powerfully solitary in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ whenever $\pi$ is a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ and $\alpha_1,\alpha_2,n\in\mathbb{N}$. Note that part $(e)$ of Theorem \ref{Thm2.2} and Theorem \ref{Thm3.2} resolve this issue for many cases. \end{conjecture} \section{Acknowledgments} The author would like to thank Professor Pete Johnson for inviting him to the 2014 REU Program in Algebra and Discrete Mathematics at Auburn University and for making that program an extremely relaxed environment that proved exceptionally conducive to research. The author would also like to thank the anonymous referee for his or her careful reading. \end{document}
# Gradient descent and its variants Gradient descent is a popular optimization technique used to minimize a function iteratively. It is based on the concept of the gradient, which is the vector of partial derivatives of a function. The goal of gradient descent is to find the minimum of the function by moving in the direction of the negative gradient. There are several variants of gradient descent, each with its own advantages and disadvantages. Some common variants include: - Batch gradient descent: In this variant, the gradient is computed using the entire dataset. This can be computationally expensive for large datasets. - Stochastic gradient descent (SGD): In SGD, the gradient is computed using a single randomly chosen training example. This greatly reduces the cost of each update compared to batch gradient descent, at the price of noisier steps. - Mini-batch gradient descent: This is a hybrid approach between batch and stochastic gradient descent. The gradient is computed using a small random subset of the dataset, called a mini-batch. - Adaptive gradient descent (Adagrad): Adagrad is an adaptive optimization algorithm that adjusts the learning rate for each parameter based on the past gradients. This helps in faster convergence for sparse data. - RMSprop: RMSprop is another adaptive optimization algorithm that uses a moving average of the squared gradients to adjust the learning rate. This helps in faster convergence and makes the method robust to widely varying gradient magnitudes. - Adam (Adaptive Moment Estimation): Adam combines the ideas of RMSprop and Adagrad. It maintains an exponentially decaying average of past gradients and squared gradients, and uses these to adjust the learning rate. This makes it a popular choice for deep learning algorithms. ## Exercise Consider the following function: $$f(x) = x^2$$ Apply gradient descent to minimize this function. ### Solution Gradient of f(x) = 2x Initial point: x = 5 Learning rate: α = 0.1 Iterations: 1. x = 5 - 0.1 * 2 * 5 = 4 2. x = 4 - 0.1 * 2 * 4 = 3.2 3. x = 3.2 - 0.1 * 2 * 3.2 = 2.56 Each update multiplies x by 0.8, so the iterates approach the minimum of f(x) = x^2 at x = 0.
# Conjugate gradient method The conjugate gradient method is an iterative algorithm used to solve linear systems of equations. It is particularly useful for solving large-scale problems with sparse and symmetric positive-definite matrices. The conjugate gradient method works by approximating the solution of the linear system using a sequence of vectors. The method uses the gradient of the objective function to guide the search for the solution. The main steps of the conjugate gradient method are: 1. Initialize the solution vector x with an initial guess. 2. Compute the gradient of the objective function at the current solution vector. 3. Compute the next search direction using the gradient and previous search directions. 4. Update the solution vector by taking a step in the search direction. 5. Repeat steps 2-4 until the stopping criterion is met, such as reaching a maximum number of iterations or achieving a desired level of accuracy. ## Exercise Solve the following linear system using the conjugate gradient method: $$\begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$ ### Solution The conjugate gradient method requires the matrix to be symmetric and positive-definite. The given matrix is symmetric, and it is positive-definite because its leading principal minors are positive: 4 > 0 and 4 · 3 - 1 · 1 = 11 > 0. The method therefore applies directly, and for a 2 × 2 system it converges in at most two iterations to the exact solution x = 1/11 ≈ 0.091, y = 7/11 ≈ 0.636.
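The steps above can be sketched in a short pure-Python routine (plain lists rather than a linear-algebra library; a minimal illustration, not a production solver). The exercise's matrix is symmetric positive-definite (determinant 11 > 0 with positive diagonal), so the method applies directly and reproduces x = 1/11, y = 7/11:

```python
def cg_solve(A, b, tol=1e-12, max_iter=100):
    # Minimal conjugate gradient for a symmetric positive-definite matrix A,
    # given as a list of rows. Starts from the zero vector.
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)          # residual b - A x (x is zero initially)
    d = list(r)          # first search direction is the residual itself
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs_old < tol:
            break
        Ad = matvec(d)
        alpha = rs_old / sum(d[i] * Ad[i] for i in range(n))  # step length
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        d = [r[i] + (rs_new / rs_old) * d[i] for i in range(n)]  # A-conjugate direction
        rs_old = rs_new
    return x

x, y = cg_solve([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(x, y)   # approximately 0.0909 and 0.6364, i.e. 1/11 and 7/11
```

For a symmetric positive-definite n × n system, this loop converges in at most n iterations in exact arithmetic, which is why the 2 × 2 exercise finishes in two steps.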
# Introduction to linear programming Linear programming is a mathematical optimization technique used to find the best solution to a problem with linear constraints and a linear objective function. It is widely used in operations research, economics, and engineering. A linear programming problem consists of: - A set of continuous decision variables (restricting some variables to integer values yields integer programming instead). - A set of linear constraints, which define the feasible region of the solution space. - A linear objective function, which measures the quality of the solution. Linear programming problems can be solved using various methods, such as the simplex method, the dual simplex method, the interior-point method, and the big-M method. ## Exercise Consider the following linear programming problem: Minimize: $$f(x) = 3x_1 + 2x_2$$ Subject to: $$\begin{align} 2x_1 + x_2 &\le 10 \\ x_1 + 2x_2 &\le 8 \\ x_1, x_2 &\ge 0 \end{align}$$ Solve this problem using the simplex method. ### Solution The simplex method is a powerful algorithm for solving linear programming problems. It involves creating a tableau and iteratively pivoting to reduce the objective function while maintaining feasibility. For this problem, no pivoting is needed: the origin $x_1 = x_2 = 0$ is feasible because both right-hand sides are nonnegative, and both objective coefficients are positive, so no entering variable can decrease the objective. The optimal solution is therefore $x_1 = x_2 = 0$ with objective value $0$. # Linear programming formulation and solution methods Linear programming problems can be formulated in standard form as: Minimize: $$c^Tx$$ Subject to: $$Ax \le b$$ $$x \ge 0$$ where c is the objective function vector, A is the constraint matrix, x is the variable vector, and b is the right-hand side vector.
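For a problem as small as the exercise above, the geometry behind these formulations can be checked directly: the optimum of a bounded, feasible LP is attained at a vertex of the feasible polygon. The sketch below (pure Python, a brute-force check rather than a real simplex implementation) enumerates the pairwise constraint intersections of that exercise, keeps the feasible ones, and picks the best vertex:

```python
from itertools import combinations

# The exercise's problem in <= form; the last two rows encode x1 >= 0, x2 >= 0.
A = [[2.0, 1.0], [1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]]
b = [10.0, 8.0, 0.0, 0.0]
c = [3.0, 2.0]                       # objective: minimize 3*x1 + 2*x2

def intersect(r1, b1, r2, b2):
    # Solve r1 . p = b1, r2 . p = b2 by Cramer's rule; None for parallel lines.
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    return ((b1 * r2[1] - b2 * r1[1]) / det, (r1[0] * b2 - r2[0] * b1) / det)

vertices = []
for i, j in combinations(range(len(A)), 2):
    p = intersect(A[i], b[i], A[j], b[j])
    if p is not None and all(
        A[k][0] * p[0] + A[k][1] * p[1] <= b[k] + 1e-9 for k in range(len(A))
    ):
        vertices.append(p)

best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
# The minimum of 3*x1 + 2*x2 over this region is attained at the origin (value 0).
```

This enumeration is exponential in the number of constraints, which is exactly why the simplex method walks between adjacent vertices instead of listing them all.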
There are several methods for solving linear programming problems, including: - Simplex method: This is an iterative algorithm that uses a tableau to represent the problem and pivot to reduce the objective function while maintaining feasibility. - Interior-point method: This is another iterative algorithm that uses a central path to find the optimal solution. It involves solving a sequence of linear systems and updating the solution vector. - Big-M method: This is a variant of the simplex method for problems that lack an obvious initial feasible basis. It introduces artificial variables, penalized by a large constant M in the objective; the solution to the modified problem will be the optimal solution to the original problem. - Dual simplex method: This is an extension of the simplex method that solves the dual problem to find the optimal solution. It involves solving a sequence of linear systems and updating the solution vector. ## Exercise Solve the following linear programming problem using the big-M method: Minimize: $$f(x) = 3x_1 + 2x_2$$ Subject to: $$\begin{align} 2x_1 + x_2 &\le 10 \\ x_1 + 2x_2 &\le 8 \\ x_1, x_2 &\ge 0 \end{align}$$ ### Solution The big-M method involves introducing slack variables and adding artificial variables to the problem, penalized by a large constant M. Here, however, adding slack variables alone already yields a feasible starting basis, because both constraints are $\le$ with nonnegative right-hand sides; no artificial variables are required, and the optimal solution is $x_1 = x_2 = 0$ with objective value $0$. # Introduction to quadratic programming Quadratic programming is a mathematical optimization technique used to find the best solution to a problem with a quadratic objective function and linear constraints. It is a generalization of linear programming. A quadratic programming problem consists of: - A set of variables, which can be continuous or discrete. - A set of linear constraints, which define the feasible region of the solution space. - A quadratic objective function, which measures the quality of the solution.
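These ingredients can be made concrete with the toy problem used in the next exercise: minimize f(x, y) = x² + 2xy + 3y² subject to x + y ≤ 4 and x, y ≥ 0. The coarse grid scan below is only a sanity check, not a quadratic programming algorithm, but it already locates the constrained minimizer:

```python
def f(x, y):
    return x ** 2 + 2 * x * y + 3 * y ** 2   # quadratic objective

def feasible(x, y):
    return x + y <= 4 and x >= 0 and y >= 0  # linear constraints

# Scan feasible grid points with spacing 0.1 and keep the best one found.
best_point, best_value = None, float("inf")
for i in range(41):
    for j in range(41):
        x, y = i * 0.1, j * 0.1
        if feasible(x, y) and f(x, y) < best_value:
            best_point, best_value = (x, y), f(x, y)

print(best_point, best_value)   # (0.0, 0.0) 0.0
```

The result is no surprise: f(x, y) = (x + y)² + 2y² is nonnegative everywhere and vanishes at the origin, which is feasible, so the constrained minimum is 0 at (0, 0).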
Quadratic programming problems can be solved using various methods, such as Newton's method, the augmented Lagrangian method, and the trust-region method. ## Exercise Consider the following quadratic programming problem: Minimize: $$f(x) = x^2 + 2xy + 3y^2$$ Subject to: $$x + y \le 4$$ $$x, y \ge 0$$ Solve this problem using Newton's method. ### Solution Newton's method is an iterative algorithm that uses the gradient of the objective function to find the optimal solution. The method involves computing the Hessian matrix of the objective function and solving a sequence of linear systems to update the solution vector. For this problem a single step suffices: the Hessian is the constant positive-definite matrix with rows $(2, 2)$ and $(2, 6)$, the unconstrained Newton step from any starting point lands on the stationary point $(0, 0)$, and that point satisfies the constraints, so the optimal solution is $x = y = 0$ with objective value $0$. # Quadratic programming formulation and solution methods Quadratic programming problems can be formulated in standard form as: Minimize: $$\frac{1}{2}x^TQx + c^Tx$$ Subject to: $$Ax \le b$$ $$x \ge 0$$ where Q is the Hessian matrix of the objective function, c is the objective function vector, A is the constraint matrix, x is the variable vector, and b is the right-hand side vector. There are several methods for solving quadratic programming problems, including: - Newton's method: This is an iterative algorithm that uses the gradient of the objective function to find the optimal solution. The method involves computing the Hessian matrix of the objective function and solving a sequence of linear systems to update the solution vector. - Augmented Lagrangian method: This is an iterative algorithm that uses a penalty term to enforce the constraints. It involves solving a sequence of nonlinear systems and updating the solution vector. - Trust-region method: This is an iterative algorithm that uses a trust region to approximate the solution. It involves solving a sequence of quadratic programming problems and updating the solution vector.
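To connect the scalar objectives used in the exercises with the standard form above: f(x, y) = x² + 2xy + 3y² is ½xᵀQx with Q = [[2, 2], [2, 6]] (the Hessian of f) and c = 0. A quick numerical identity check, assuming nothing beyond these definitions:

```python
def f(x, y):
    return x ** 2 + 2 * x * y + 3 * y ** 2

# Standard form (1/2) x^T Q x + c^T x, with Q the symmetric Hessian of f.
Q = [[2.0, 2.0], [2.0, 6.0]]
c = [0.0, 0.0]

def standard_form(v):
    n = len(v)
    quad = sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))
    return 0.5 * quad + sum(c[i] * v[i] for i in range(n))

# The two expressions agree everywhere, e.g. at (1, 2) both give 17.0.
# det(Q) = 8 > 0 with a positive diagonal, so Q is positive-definite
# and the QP is convex.
print(f(1.0, 2.0), standard_form([1.0, 2.0]))
```

Writing the objective this way is what lets QP solvers work purely with the matrix data (Q, c, A, b) instead of a symbolic formula.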
## Exercise Solve the following quadratic programming problem using Newton's method: Minimize: $$f(x) = x^2 + 2xy + 3y^2$$ Subject to: $$x + y \le 4$$ $$x, y \ge 0$$ ### Solution Newton's method is an iterative algorithm that uses the gradient of the objective function to find the optimal solution. The method involves computing the Hessian matrix of the objective function and solving a sequence of linear systems to update the solution vector. As the Hessian of this objective is constant and positive-definite, a single Newton step reaches the unconstrained minimizer $(0, 0)$, which also satisfies the constraints; the optimal solution is $x = y = 0$. # Newton's method and its variants Newton's method is an iterative algorithm used to find the roots of a function; applied to the gradient of an objective function, it finds stationary points, which is how it is used in optimization. It is based on a local model of the function built from the gradient and Hessian. The main steps of Newton's method are: 1. Compute the gradient of the function at the current point. 2. Compute the Hessian of the function at the current point. 3. Solve the linear system formed by the gradient and Hessian to find the next point. 4. Update the current point to the next point. 5. Repeat steps 1-4 until the stopping criterion is met, such as reaching a maximum number of iterations or achieving a desired level of accuracy. There are several variants of Newton's method, each with its own advantages and disadvantages. Some common variants include: - Broyden's method: This is a modification of Newton's method for systems of nonlinear equations that updates an approximation to the Jacobian instead of recomputing it at every step. - Davidon-Fletcher-Powell (DFP) method: This is a quasi-Newton method for optimization that builds an approximation to the inverse Hessian from successive gradient differences, avoiding explicit second derivatives. - BFGS (Broyden-Fletcher-Goldfarb-Shanno) method: This is the most widely used quasi-Newton method; like DFP, it approximates the inverse Hessian from gradient information, with an update that is generally more robust in practice.
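Specialized to a single variable, where the gradient is f′ and the Hessian is f″, the loop above reduces to the classic root-finding update x ← x − f(x)/f′(x). A minimal sketch on the sample function f(x) = x² − 2 (an illustrative choice, not a function from the text; its positive root is √2):

```python
def newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    # Classic one-dimensional Newton iteration: x <- x - f(x) / f'(x).
    # Assumes f'(x) stays away from zero along the iterates.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:      # stop once the update is negligible
            return x
    return x

# Positive root of x^2 - 2, i.e. sqrt(2) = 1.41421356..., starting from 1.5.
root = newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
print(root)
```

Convergence is quadratic near a simple root, so only a handful of iterations are needed here; the multivariate loop in the steps above replaces the scalar division by the solution of a linear system with the Hessian.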
## Exercise Solve the following system of stationarity conditions, $\nabla f = 0$, using Newton's method: $$\begin{align} \frac{\partial f}{\partial x} &= 2x + 2y = 0 \\ \frac{\partial f}{\partial y} &= 2x + 6y = 0 \\ f(x, y) &= x^2 + 2xy + 3y^2 - 4 \end{align}$$ ### Solution Newton's method uses the gradient and the Hessian of the objective function to find a stationary point. Because $f$ is quadratic, the Hessian is the constant matrix with rows $(2, 2)$ and $(2, 6)$, and a single Newton step from any starting point lands on the unique solution $(x, y) = (0, 0)$; since the Hessian is positive-definite, this stationary point is the minimizer of $f$. # Applications and real-world examples Optimization techniques have numerous applications in various fields, including: - Machine learning: Gradient descent and its variants are widely used for training neural networks and other machine learning models. - Control theory: Linear programming and quadratic programming are used to solve optimization problems in control systems, such as optimal control and robust control. - Operations research: Linear programming and quadratic programming are used to solve problems in operations research, such as transportation, inventory management, and production planning. - Economics: Linear programming and quadratic programming are used to solve problems in economics, such as linear programming duality, portfolio optimization, and game theory. - Engineering: Quadratic programming is used to solve problems in engineering, such as structural optimization, heat transfer, and fluid dynamics. - Finance: Quadratic programming is used to solve problems in finance, such as portfolio optimization, risk management, and option pricing. ## Exercise Discuss the application of optimization techniques in the field of computer vision. ### Solution Optimization techniques have numerous applications in computer vision, including: - Image segmentation: Linear programming and quadratic programming are used to solve problems in image segmentation, such as graph-based segmentation and region-based segmentation.
- Object detection: Quadratic programming is used to solve problems in object detection, such as object tracking and object localization. - Scene understanding: Quadratic programming is used to solve problems in scene understanding, such as 3D object recognition and scene reconstruction. - Image registration: Quadratic programming is used to solve problems in image registration, such as image alignment and image fusion. - Image restoration: Quadratic programming is used to solve problems in image restoration, such as image denoising and image inpainting. # Challenges and limitations of optimization techniques Optimization techniques have several challenges and limitations: - Global optimality: Gradient-based optimization techniques, such as gradient descent and Newton's method, may converge to a local minimum rather than the global minimum. - Convergence: Gradient-based optimization techniques may converge slowly, especially for non-convex functions. - Computational cost: Gradient-based optimization techniques can be computationally expensive, especially for large-scale problems. - Numerical stability: Numerical issues, such as round-off errors and ill-conditioning, can affect the accuracy and convergence of optimization techniques. - Discontinuous objective functions: Some optimization techniques, such as gradient descent, may struggle with discontinuous objective functions. - Non-differentiable objective functions: Some optimization techniques, such as gradient descent, require the objective function to be differentiable. ## Exercise Discuss the challenges and limitations of optimization techniques in the context of combinatorial optimization problems. 
### Solution Combinatorial optimization problems, such as traveling salesman problem and set cover problem, have several challenges and limitations for optimization techniques: - Integer variables: Combinatorial optimization problems often involve integer variables, which are not supported by gradient-based optimization techniques. - Non-convexity: Combinatorial optimization problems are often non-convex, which can make it difficult for gradient-based optimization techniques to find the global minimum. - Large search space: Combinatorial optimization problems often have a large search space, which can be computationally expensive and challenging for optimization techniques. - Discrete variables: Gradient-based optimization techniques, such as gradient descent, require continuous variables. They may struggle with discrete variables in combinatorial optimization problems. - NP-hardness: Many combinatorial optimization problems are NP-hard, which means that finding the optimal solution is computationally expensive. # Comparison of different optimization methods Different optimization methods have their own advantages and disadvantages. Some common comparisons include: - Gradient-based methods, such as gradient descent and Newton's method, are suitable for continuous optimization problems with smooth objective functions. They are computationally efficient and can find the global minimum for convex problems. However, they may struggle with non-convex problems and discontinuous objective functions. - Linear programming and quadratic programming are suitable for linear and quadratic optimization problems with linear constraints. They are computationally efficient and can find the optimal solution for convex problems. However, they may struggle with non-convex problems and non-linear constraints. 
- Conjugate gradient methods are suitable for large-scale optimization problems with sparse, symmetric positive-definite matrices. They are computationally efficient and can find the optimal solution for convex problems. However, they may struggle with non-convex problems and non-linear constraints.
- Second-order methods, such as Newton's method and trust-region methods, are suitable for smooth optimization problems where curvature (Hessian) information is available. They converge rapidly near the optimum and can find the optimal solution for convex problems. However, they may struggle with non-convex problems and non-smooth constraints.
- Combinatorial optimization methods, such as branch-and-bound methods and genetic algorithms, are suitable for combinatorial optimization problems with discrete variables. They can be computationally expensive but can find the optimal solution for certain problems.

## Exercise

Compare the performance of gradient descent and Newton's method for solving the following problem:

Minimize:

$$f(x, y) = x^2 + 2xy + 3y^2$$

Subject to:

$$x + y \le 4$$
$$x, y \ge 0$$

### Solution

Gradient descent and Newton's method are both iterative algorithms, but Newton's method additionally uses the Hessian matrix of the objective function. Because $f$ is a strictly convex quadratic, Newton's method reaches the minimizer in a single step, whereas gradient descent converges only linearly, at a rate governed by the condition number of the Hessian. Here the unconstrained minimizer $(0, 0)$ satisfies both constraints, so neither constraint is active at the solution.

# Future directions and research in optimization

The field of optimization is constantly evolving, with new techniques and algorithms being developed to address the challenges and limitations of existing methods. Some future directions and research areas in optimization include:

- Machine learning-based optimization: Researchers are exploring the use of machine learning techniques, such as deep learning and reinforcement learning, to improve the performance and accuracy of optimization algorithms.
- Metaheuristic optimization: Researchers are developing new optimization algorithms based on metaheuristic concepts, such as swarm intelligence and evolutionary algorithms. These algorithms are inspired by natural phenomena and can be more robust and adaptive than traditional optimization techniques.
- Parallel and distributed optimization: Researchers are exploring the use of parallel and distributed computing techniques to solve large-scale optimization problems more efficiently. This includes the development of new algorithms and the use of GPUs and other specialized hardware.
- Convex optimization: Researchers are exploring new methods and algorithms for solving convex optimization problems, such as convex relaxations, proximal gradient methods, and first-order methods.
- Combinatorial optimization: Researchers are developing new algorithms and techniques for solving combinatorial optimization problems, such as branch-and-cut methods, column generation, and mixed-integer programming.
- Stochastic optimization: Researchers are exploring the use of stochastic optimization techniques, such as stochastic gradient descent and stochastic Newton's method, to solve large-scale optimization problems with noisy or uncertain data.
- Online optimization: Researchers are developing new algorithms and techniques for solving optimization problems in real-time or online settings, such as online convex optimization and online machine learning.
- Interdisciplinary optimization: Researchers are exploring the application of optimization techniques to interdisciplinary problems, such as energy management, transportation planning, and biomedical data analysis.

## Exercise

Discuss the potential future application of optimization techniques in the field of renewable energy.

### Solution

Optimization techniques have the potential to play a significant role in the field of renewable energy.
Some potential future applications include:

- Energy management: Optimization techniques can be used to optimize the operation of renewable energy systems, such as wind farms and solar power plants, to maximize energy production and minimize costs.
- Load balancing: Optimization techniques can be used to balance the load between renewable energy sources and energy consumption, taking into account factors such as grid constraints, weather forecasts, and demand patterns.
- Energy storage: Optimization techniques can be used to optimize the operation of energy storage systems, such as batteries and pumped hydro storage, to maximize energy storage capacity and minimize costs.
- Integration of renewable energy and traditional energy sources: Optimization techniques can be used to optimize the integration of renewable energy sources, such as wind and solar power, with traditional energy sources, such as coal and natural gas, to minimize environmental impacts and maximize energy efficiency.
- Energy market optimization: Optimization techniques can be used to optimize the operation of energy markets, such as electricity and natural gas markets, to maximize market efficiency and minimize market distortions.
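Returning to the earlier exercise comparing gradient descent and Newton's method: the contrast can also be checked numerically. Below is a toy Python sketch (the step size and iteration count are arbitrary choices of mine) for $f(x, y) = x^2 + 2xy + 3y^2$, whose unconstrained minimum $(0, 0)$ already satisfies the exercise's constraints.

```python
# Gradient descent vs. Newton's method on f(x, y) = x^2 + 2xy + 3y^2.
# Gradient: (2x + 2y, 2x + 6y); Hessian: [[2, 2], [2, 6]] (constant, positive definite).

def grad(x, y):
    return (2 * x + 2 * y, 2 * x + 6 * y)

def gradient_descent(x, y, lr=0.1, steps=200):
    # Fixed-step gradient descent: converges linearly for this convex quadratic.
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

def newton_step(x, y):
    # One Newton step: solve H d = -g for the constant 2x2 Hessian H = [[2, 2], [2, 6]].
    gx, gy = grad(x, y)
    det = 2 * 6 - 2 * 2  # determinant of H = 8
    dx = (-gx * 6 + gy * 2) / det
    dy = (gx * 2 - gy * 2) / det
    return x + dx, y + dy

x_gd, y_gd = gradient_descent(3.0, 1.0)
x_nt, y_nt = newton_step(3.0, 1.0)
print(x_gd, y_gd)  # close to (0, 0) after 200 steps
print(x_nt, y_nt)  # exactly (0, 0): one Newton step solves a quadratic
```

On a quadratic objective the Hessian is constant, so Newton's method lands on the minimizer in one iteration, while gradient descent shrinks the error by a constant factor per step.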
Multi–omic analysis of signalling factors in inflammatory comorbidities

Hui Xiao, Krzysztof Bartoszek & Pietro Lio'

BMC Bioinformatics volume 19, Article number: 439 (2018). Volume 19 Supplement 15: Proceedings from the 12th International BBCC conference.

Inflammation is a core element of many different, systemic and chronic diseases that usually involve an important autoimmune component. The clinical phase of inflammatory diseases is often the culmination of a long series of pathologic events that started years before. The systemic characteristics and related mechanisms could be investigated through the multi–omic comparative analysis of many inflammatory diseases. Therefore, it is important to use molecular data to study the genesis of the diseases. Here we propose a new methodology to study the relationships between inflammatory diseases and signalling molecules whose dysregulation at the molecular level could lead to the systemic pathological events observed in inflammatory diseases. We first perform an exploratory analysis of gene expression data for a number of diseases that involve a strong inflammatory component. The comparison of gene expression between disease and healthy samples reveals the importance of members of gene families coding for signalling factors. Next, we focus on the signalling gene families of interest and a subset of inflammation related diseases with multi–omic features including both gene expression and DNA methylation. We introduce a phylogenetic–based multi–omic method to study the relationships between multi–omic features of inflammation related diseases by integrating gene expression and DNA methylation through the sequence based phylogeny of the signalling gene families. The models of adaptation between gene expression and DNA methylation can be inferred from the pre–estimated evolutionary relationship of a gene family.
Members of the gene family whose expression or methylation levels significantly deviate from the model are considered potential disease associated genes. Applying the methodology to four gene families (the chemokine receptor family, the TNF receptor family, the TGF–β gene family, the IL–17 gene family) in nine inflammation related diseases, we identify disease associated genes which exhibit significant dysregulation in gene expression or DNA methylation in the inflammation related diseases, which provides clues for functional associations between the diseases.

Inflammation is the body's attempt at removing harmful or irritating effects, and is part of the body's immune response. The inflammatory response is essential for the recruitment and activation of lymphocytes in order to respond to an infection and the subsequent promotion of wound healing and repair. An unconstrained inflammatory response of strong intensity and long duration, however, can result in many acute and chronic autoimmune diseases and comorbidities [1–4]. The inflammatory system is complex because of comorbidities, which involve depression, immune–inflammatory processes, oxidative stress, gut–brain pathways and so on [5]. For example, inflammation and altered gut microbiota (dysbiosis) could lead to colorectal cancer carcinogenesis [6, 7]. The severity of inflammatory diseases is strongly correlated with high levels of proinflammatory cytokines. Scientific evidence has shown that the gut microbiota plays important roles in the genesis of several inflammatory diseases such as arthritis, systemic lupus erythematosus (SLE), pathogen induced colitis, Crohn's disease and inflammatory bowel disease (IBD) [8–16]. Moreover, inflammation has also been reported as one of the enabling characteristics of cancer development, as in colon cancer and breast cancer [17].
The genesis of cancers is considered to be related to the inflammatory responses to microbial or damaged-self stimuli. Both arms of immunity, innate and adaptive, play important roles during tumorigenesis. Growing attention has been devoted to identifying early biomarkers for inflammatory diseases by exploring the associated molecular mechanisms [18], because the genesis of inflammatory diseases usually takes a long preclinical period [19] and the identification of early disease markers could provide valuable clues for better clinical therapies. It is reported that a set of circulating proteins such as inflammatory cytokines and endocrine factors (e.g., TGF–β, TNF, and chemokines), forming a communicome, are involved in inter-cellular and inter-organ communication and are responsible for spreading inflammation in the body [20]. Recent advances in high–throughput genomics biotechnology such as microarrays and next generation sequencing have produced various omic data covering the genome, epigenome, transcriptome, proteome and so on. The rapid growth of the amount of multi–omic data provides great opportunities to understand the mechanisms of complex biological systems, such as human diseases, at multiple molecular levels [21, 22]. For example, Zhang et al. [23] predicted the driver genes associated with different clinical outcome subtypes of ovarian cancer by integrating genome–wide gene expression, DNA methylation, microRNA expression and copy number alteration profiles. Cabezas-Wallscheid et al. [24] performed a comprehensive analysis of proteome, transcriptome and DNA methylome data to identify coordinated changes at the protein, RNA, and DNA levels during early differentiation steps of hematopoietic stem cells (HSCs). Cantini et al.
[25] proposed a multilayer network community detection method to identify cancer related gene modules, revealing cancer driver genes, through the integration of gene expression, the protein interactome and the transcription factor regulation network. Chaudhary et al. [26] introduced a neural network model to predict survival in liver cancer by integrating multi–omic data including gene expression, DNA copy number and miRNA expression data. In order to explore the associations between signalling factors and inflammatory diseases as well as cancers, we propose a new methodology based on phylogenetic inference on multi–omic data to identify gene markers of diseases. Taking full advantage of the pre–estimated evolutionary relationship of a gene family together with multi–omic information including gene expression and DNA methylation, it is capable of identifying genes exhibiting significant alterations in expression or methylation levels in diseases. A multi–omic approach is necessary as it integrates information from all sources. Phylogenetic information is important as some genetic behaviours may be due to evolutionary inertia. The phylogenetic correlations between gene expression and methylation help in identifying disease relationships due to perturbations of the same or closely related gene family members. Applying the proposed method, we perform a comparative study of the signatures of signalling molecules in several inflammation related diseases, which consists of a two-step analysis: Firstly, we present a systematic study of genome-wide molecular signatures, based on gene expression, for several inflammatory diseases as well as cancers. Most of the significant molecular signatures are related to members of a few important gene families.
Then, we propose a phylogenetic–based multi–omic approach and apply it to four signalling related gene families selected from the first step to study the correlated or independent roles of the genes as disease markers, by integrating the sequences, gene expression and DNA methylation data of the gene families in specific inflammatory diseases.

The methodology of this work follows a two-step procedure. Firstly, we analyse case-control gene expression data of a number of inflammatory diseases, focusing on signalling factors and receptors. In the second step, we select the gene families including genes with statistically significant p-values in the first step, focusing on specific inflammation related diseases for which both gene expression and DNA methylation data are available. We use an Ornstein–Uhlenbeck phylogenetic approach to identify disease associated genes by integrating the gene expression and DNA methylation data. On the basis of the identified genes, we explore the correlations among the inflammation related diseases. The flowchart of the whole analysis is shown in Fig. 1.

Fig. 1: The flowchart of the methodology. To identify the members of a gene family which are significantly associated with a disease, the input data include the protein sequences of members of the gene family and the case–control gene expression and DNA methylation data of the gene family in the disease.

Step 1: Genome–wide comparison of gene expression in inflammatory diseases

Recent research has increasingly demonstrated that many seemingly dissimilar diseases have common molecular mechanisms. Diseases are more likely to be comorbid if they share pathways. Exploring relations between genes and diseases at the molecular level could greatly facilitate our understanding of pathogenesis, and eventually lead to better diagnosis and treatment.
Some diseases have a direct positive association between them, while other diseases may have indirect positive associations through biological pathways. The analysis of pathway–disease associations, in addition to gene–disease associations, could be used to clarify the molecular mechanisms of a disease. We take advantage of the large number of molecular measurements from experimental results that are now publicly available and identify commonly implicated genes across different pathologies and deliberately varied experimental conditions. We propose the application of a gene expression based genome–wide association study (eGWAS) to evaluate the statistical significance of the differential expression of each gene across a large number of case and control microarray experiments of human inflammatory diseases.

Gene expression data for Step 1

We collect a large ensemble of gene expression data related to diseases that have frequent inflammation comorbidities from different cell types/tissues in patients and healthy people, including Type 1 Diabetes, Type 2 Diabetes, Rheumatoid arthritis, Osteoporosis, Osteopetrosis, Osteoarthritis, HIV infection, Osteomyelitis, Measles, Paget's disease, Periodontitis, Renal disorder, Osteosarcoma, Breast cancer and Multiple myeloma. The raw microarray gene expression data are downloaded from the Gene Expression Omnibus (GEO) (see Additional file 1).

Evaluation of differentially expressed genes

Using the case-control gene expression data, we evaluate the significance of differential expression of each gene between healthy and disease samples using a linear model based statistical method. Differentially expressed genes are selected as genes with significant p-values (e.g. p < 0.05) in different diseases. In particular we focus on genes that code for extracellular signalling molecules (including receptors) as they are linked to metabolic physiological flexibility.
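As a rough illustration of such a case-control screen, the Python sketch below uses a plain Welch two-sample t statistic as a stand-in for the paper's Bioconductor linear-model pipeline; the gene names, expression values and the critical value of 2 are invented for illustration.

```python
from statistics import mean, variance

def t_statistic(case, control):
    """Welch two-sample t statistic between case and control expression values."""
    n1, n2 = len(case), len(control)
    se = (variance(case) / n1 + variance(control) / n2) ** 0.5  # standard error
    return (mean(case) - mean(control)) / se

# Toy log-expression values per gene: (case samples, control samples).
expression = {
    "TNFRSF10B": ([8.1, 8.4, 8.0, 8.3], [5.0, 5.2, 4.9, 5.1]),  # clearly shifted
    "GAPDH":     ([7.0, 7.2, 6.9, 7.1], [7.1, 6.9, 7.0, 7.2]),  # unchanged
}

# Flag genes whose |t| exceeds a rough critical value (~2, i.e. p < 0.05-ish).
flagged = [g for g, (case, ctrl) in expression.items()
           if abs(t_statistic(case, ctrl)) > 2.0]
print(flagged)  # ['TNFRSF10B']
```

A real analysis would of course compute proper p-values (e.g. via limma's moderated t statistics) and correct for multiple testing across the whole genome.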
The normalisation procedures and statistical analysis are implemented in R by using Bioconductor R packages [27]. For each raw microarray gene expression dataset, background correction and normalization are performed using the PLIER algorithm [28]. The PLIER algorithm uses a probe affinity parameter, which represents the strength of the signal produced at a specific concentration for a given probe. The error model employed by PLIER assumes that the error is proportional to the observed intensity, rather than to the background–subtracted intensity, using the following error function:

$$ \epsilon_{ij}=\frac{\hat{\mu}_{ij}/{pm}_{ij} + \sqrt{\left(\hat{\mu}_{ij}/{pm}_{ij}\right)^{2} + 4\left({mm}_{ij}/{pm}_{ij}\right)}}{2}, $$

where \(\hat{\mu}_{ij}\) is the binding level of probe i on array j, pmij is a perfect match and mmij is a mismatch probe. Then, we identify the genes that are consistently highly differentially expressed between case and control samples using the following linear model:

$$ y_{ik}=\alpha_{k}+\epsilon_{ik}, \;\; i = 1, 2, \ldots, n_{k}, \;\; k = 1, 2, $$

where k indicates the patient type and i the individual samples. For every gene g, we define the rank consistency score S(g;r) as the normalized maximal rank of this gene among all the patient samples, i.e.,

$$ S(g;r)=\max_{1 \le k \le r} R_{k}(g)/N. $$

Step 2: Phylogenetic–based multi–omic analysis of gene families through an Ornstein–Uhlenbeck model

Gene families have extraordinary importance in elucidating genome dynamics, combining the study of diseases at the pathway level with the evolutionary mutational divergence and selection trajectories. The evolutionary information is contained in a phylogenetic tree consisting of all the members of a gene family connected through the similarities of their sequences. The effects of phylogenetic relationships on observed phenotype data have been studied for a long time in evolutionary biology under the generic name of phylogenetic comparative methods.
These methods assume a continuous time, continuous (or discrete, for discrete phenotypes) space stochastic process for the phenotype and allow it to evolve on top of the phylogenetic tree. At speciation times the process splits into two independent copies which evolve along the branches. The process parameters can then be inferred from the law of the values at the tips (the contemporary species). Gene families may provide clues for identifying genes that are involved in particular diseases; e.g., the chemokine receptor, TGF–β and TNF gene families play important roles in inflammatory diseases and cancers [29–35]. However, the coordinative functional relationships between members of a gene family are still unknown. Studying the dynamic regulation mechanisms will help in understanding the genesis of the diseases and in improving effective drug discovery [36]. Because of the complexly structured interaction between the process of evolutionary functional divergence of the gene family members and the process of pathway proximity of some members, it is important to model the two processes together by using all the available multi–omic information, such as epigenetic modification and gene expression. The multi–omic information could help identify the trajectory between healthy and disease conditions. For instance, epigenetic variability may drive phenotypic selection on a much shorter timescale than mutation. We can use an ecological analogy to describe together the healthy and disease conditions with their omic information. The mutational pressure (generated by epigenetic modification and gene expression changes) and the natural selection pressure could be modeled by an Ornstein–Uhlenbeck (OU) model acting on a landscape. The OU model is frequently used in physics to model an overdamped Brownian harmonic oscillator, that is, stochastic variation from a normal state with no persistence of the rate of change, opposed by a stronger restoring force towards the equilibrium point.
In our system, stochastic changes in the multi–omic profile are opposed by a restoring force constraining the patient in its normal state. A possible visualisation of healthy and disease states is the landscape analogy described by Waddington, where the multi–omic information determines the walls of a valley that traps a rolling ball (the condition of the patient). The walls act as restoring forces (representing natural selection). The multi–omic information may provide a potential mechanism for controlling the level of phenotypic variation (which is represented by the slope of the valley walls). In diseases, the multi–omic deregulation would change the valley, altering the balance of regulation that maintained the stable multi–omic signature in the face of noise. This could be the result of repeated restructuring of the multi–omic landscape through the inflammatory condition. Clearly, different diseases have specific multi–omic structure requirements, i.e. valleys. The multi–omic OU model can be considered within a phylogenetic framework [37–41], which is biologically motivated by the ideas of adaptation, selection and stasis. The mean phenotype of a species is expected to exhibit small oscillations around an optimal state defined by the environment where it lives in stasis [42]. If the environment changes, the optimal state will be affected, which will lead to rapid evolution of the species towards the new optimum. These concepts from evolutionary biology have recently been used in the study of genes [43, 44]. Furthermore, Bartoszek and Lio' [45] used a branching Brownian motion (BM) process, implemented in the mvSLOUCH package [37], to distinguish between competing phylogenetic trees for bacterial species. The aim of this work is to find the associations between diseases and the genes in a gene family. Phylogenetic comparative methods can provide us with a probability law that takes the phylogeny into account.
We can then estimate the expected variability at the level of the tips and test whether a certain gene lies outside the null distribution of healthy cases. In terms of the phylogenetic trees connecting the genes, more recently diverged genes should show similar expression levels due to common descent. Moreover, the association of one member of the gene family with a disease suggests that closely related genes will have a higher chance of sharing some of this relationship than more distant ones. Recent studies found that DNA methylation plays a regulatory role in gene expression [46]. The methylation and expression levels of genes usually exhibit dependencies in human diseases such as cancers [47]. Recent studies of DNA methylation data have provided evidence of different patterns of changes existing at the promoter and gene body levels [48, 49]. Therefore, in this paper, we study the roles of gene promoter methylation and gene body methylation vs. gene expression respectively, constrained by the phylogenetic information. The proposed methodology is general and could easily accommodate multiple epigenetic features, constrained only by the computational capabilities of the available software. In the following sections, we provide the theoretical basis of the methodology. The flowchart of Step 2 is shown in Fig. 2.

Fig. 2: The scheme of the phylogenetic steps.
1) Construct the phylogenetic tree of the gene family based on the protein sequences; 2) estimate the best evolutionary model between expression and methylation data based on the phylogenetic tree in control samples; 3) generate the null distributions of the expression and methylation data for the gene family following the best evolutionary model; 4) calculate the empirical p-values of the expression and methylation levels for each member of the gene family in patient samples and select the significant ones.

DNA methylation and gene expression data for Step 2

In Step 2, we focus on nine inflammation related diseases, Allergy, Asthma, Ulcerative Colitis (Colitis), Crohn's Disease (Crohn), Rheumatoid Arthritis (RA), Chronic Fatigue Syndrome (CFS), Systemic Lupus Erythematosus (SLE), Type 2 Diabetes (T2D) and Colon Cancer, which have both gene expression and DNA methylation data (see Additional file 2). The methylation (promoter and gene body) data of all the diseases were measured by the Illumina Infinium HumanMethylation450 (450K) BeadChip. The downloaded data have already been preprocessed and normalized to probe level. To convert the data from probe level to gene level, we map the probes to Entrez GeneIDs following the annotation files of the microarray platforms. Probes mapped to multiple genes are deleted. For the gene expression data, the average of the values of the probes corresponding to the same gene is calculated as the expression value for the gene. For the methylation data, we keep the probes which are mapped to the promoter (TSS200) and gene body regions, and take the average of the methylation values of the probes corresponding to the two regions respectively as the two methylation features for each gene.
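The probe-to-gene conversion just described can be sketched as follows (a Python sketch; the probe IDs, mappings and values are invented, and in the actual pipeline the mapping is driven by the platform annotation files). Probes mapped to more than one gene are dropped, and the remaining probe values are averaged per gene.

```python
from collections import defaultdict

# Hypothetical probe -> gene annotation and probe-level values.
probe_to_genes = {
    "cg0001": ["TGFB1"],
    "cg0002": ["TGFB1"],
    "cg0003": ["TNF"],
    "cg0004": ["TGFB1", "TNF"],  # multi-mapped probe: discarded
}
probe_values = {"cg0001": 0.80, "cg0002": 0.60, "cg0003": 0.30, "cg0004": 0.99}

def probes_to_gene_level(probe_to_genes, probe_values):
    """Average uniquely mapped probe values into one value per gene."""
    per_gene = defaultdict(list)
    for probe, genes in probe_to_genes.items():
        if len(genes) != 1:  # drop probes mapped to multiple genes
            continue
        per_gene[genes[0]].append(probe_values[probe])
    return {gene: sum(vals) / len(vals) for gene, vals in per_gene.items()}

print(probes_to_gene_level(probe_to_genes, probe_values))
# TGFB1 averages to ~0.7, TNF to 0.3; cg0004 is excluded
```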
Construction of phylogenetic correlation models based on expression and methylation of a gene family

We assume that the mean expression and methylation levels (denoted by \(\vec {X}(t)\)) evolve on a phylogeny following a multivariate k–dimensional Ornstein–Uhlenbeck process

$$ d\vec{X}(t) = -\mathbf{A}\left(\vec{X}(t) - \vec{\theta}\right)dt + \mathbf{\Sigma}d\vec{W}(t), $$

where \(\vec {W}(t)\) is a multivariate standard Wiener process. The process parameters are the matrices A (which can in particular be 0 or have zero rows), Σ and the vector \(\vec {\theta }\). The process is multivariate normal with mean and variance equalling

$$ \begin{aligned} \mathbf{E}\left[\vec{X}\right](t) &= e^{-\mathbf{A}t} \vec{X}(0) + \left(\mathbf{I} - e^{-\mathbf{A}t} \right) \vec{\theta}\\ & \quad \mathbf{Var}\left[\vec{X}\right](t)= \int\limits_{0}^{t}e^{-\mathbf{A}s} \mathbf{\Sigma} \mathbf{\Sigma}^{T} e^{-\mathbf{A}^{T}s} ds. \end{aligned} $$

If all the eigenvalues of A have positive real parts, then the process converges weakly to its stationary normal distribution with mean equalling \(\vec {\theta }\) and covariance matrix equalling

$$\mathbf{P}\left(\left[ \frac{1}{\lambda_{i} + \lambda_{j}} \right]_{1 \le i,j \le k} \odot \mathbf{P}^{-1}\boldsymbol{\Sigma}\boldsymbol{\Sigma}^{T}\mathbf{P}^{-T} \right)\mathbf{P}^{T}, $$

where ⊙ is the Hadamard product, and the λi and P are respectively the eigenvalues and the eigenvector matrix of A. The mvSLOUCH R package [37] is used to estimate the parameters of the process for modelling expression, methylation and gene level evolution. We use the mvSLOUCH package for our analysis because it currently offers the widest choice of possible models for the multivariate trait. Importantly for us, it allows some of the traits to evolve as Brownian motion (i.e. neutrally) while the others are under selective pressure to track changes in the "Brownian ones".
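The OU dynamics above can be simulated directly to see the mean-reverting behaviour. The Euler–Maruyama sketch below (in Python, reduced to the scalar case with toy parameters; the paper's model is multivariate and is fitted by mvSLOUCH's likelihood machinery rather than simulated like this) shows the process being pulled back to the optimum θ and then oscillating around it.

```python
import random

def simulate_ou(x0, theta, a, sigma, dt=0.01, steps=5000, seed=42):
    """Euler-Maruyama simulation of dX = -a (X - theta) dt + sigma dW (scalar OU)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        # Deterministic pull towards theta plus Gaussian noise scaled by sqrt(dt).
        x += -a * (x - theta) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return x

# Starting far from the optimum, the process settles into small oscillations
# around theta; the stationary standard deviation is sigma / sqrt(2 a) ~ 0.07 here.
final = simulate_ou(x0=10.0, theta=2.0, a=1.0, sigma=0.1)
print(final)  # close to 2.0
```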
This wider spectrum of tested models (the wrapper function running the analysis tries out a whole collection of parametrizations of the OU equation; this wrapper function has been incorporated into the public interface of mvSLOUCH and its functionality may be exploited by calling mvSLOUCH::estimate.evolutionary.model()) allows for a better exploration of the parameter space and, moreover, facilitates the interpretation of how the different variables interact with each other. We compare (by AICc) Brownian motion (A = 0), stationary and non–stationary OU models, and specify that certain variables evolve marginally as Brownian motion (equivalent to setting the rows of A corresponding to them to 0). The parameters of the OU process have very sophisticated interpretations in the evolutionary biology field [41, 50]. In this work, it is assumed that in a constant environment (e.g. a healthy person), called a selective regime in evolutionary terminology, the expression and methylation levels should exhibit stationary oscillations around an optimal state. This stasis situation can be modelled by a (multivariate) OU process. If a gene property is associated with a disease, then its levels will be significantly outside the band predicted by the stationary oscillations. The values associated with each gene are the means and variances of the expression and methylation for the populations (case or control individuals) under study. There is always natural variation inside a population, and expression and methylation patterns are very variable. Ignoring this variation could lead to spurious conclusions, and therefore it should be taken into account. The variation is expected to be dominated by the specific conditions which the individual lived in and hence not to exhibit an evolutionary history. The estimate of this variation is the sample variance of the expression or methylation of the gene.
This is a standard procedure in phylogenetic comparative methods: the variation appears in any downstream analysis as uncorrelated measurement error [37, 51, 52] and is added to the diagonal of the between–gene family member–between–traits variance–covariance matrix.

Detection of disease-associated members of a gene family

Genes whose expression levels in disease samples significantly deviate from the optimal evolutionary model (gene expression vs. promoter/body methylation) estimated on the healthy samples are defined as dysregulated expressed genes (DEGs). Similarly, genes whose promoter/body methylation levels in disease samples significantly deviate from the corresponding optimal evolutionary model are defined as dysregulated methylated genes (DMGs). The identification of DEGs and DMGs requires the estimation of the parameters of the stochastic process generating the control levels. On the basis of the phylogeny, we simulate 200,000 independent evolutions of expression and methylation levels under the law of this process with the estimated parameters, which gives us the null distribution for the levels that includes the ancestral dependencies between the different genes. Then, empirical p-values are calculated to assess whether a case measurement for a gene is significantly different from its control counterpart. After simulating the process for each tip, we take the difference between the simulated control values and the true control values. The p-value is then calculated by comparing the observed difference between cases and controls to the null distribution of the difference between simulated and true control values. The genes with significant p-values (e.g., p < 0.05) are selected as DEGs and DMGs, and are considered significantly associated with a disease.
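The empirical p-value step can be sketched in a few lines of Python (toy numbers; in the actual analysis the null differences come from the 200,000 simulated evolutions on the phylogeny, and the function name is my own):

```python
def empirical_p_value(observed_diff, null_diffs):
    """Fraction of null differences at least as extreme as the observed case-control difference."""
    extreme = sum(1 for d in null_diffs if abs(d) >= abs(observed_diff))
    return extreme / len(null_diffs)

# Null distribution: differences between simulated and true control levels (toy values).
null_diffs = [-0.2, 0.1, -0.05, 0.3, 0.15, -0.1, 0.05, -0.25, 0.2, -0.15]

p = empirical_p_value(observed_diff=0.28, null_diffs=null_diffs)
print(p)  # 0.1: only one of the ten null differences is as extreme
```

With 200,000 simulated evolutions instead of ten toy values, the resolution of these p-values becomes fine enough to apply a p < 0.05 cut-off per gene.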
Although this approach alone does not really guard us against multiple testing issues, as we look at individual p-values [53], our aim is to build an overall network by building up on all the evidence from the families. Therefore, a gene family member with a marginal p-value (e.g., around 0.05) could be considered interesting or suggestive if it has pathway connections with other genes with significant p-values belonging to other families. The significance of a gene suggests a strong association with the genesis of the disease. Because of the high correlation between the members of the family induced by shared ancestry and phylogenetic inertia, an overly stringent approach at the single gene family level could eliminate the opportunity of evaluating the evidence synthesis across the overall gene family network through Gene Ontology.

Construction of functional associations between diseases

If a gene is associated with two different diseases, it is likely that the two diseases share similar functional mechanisms involving this common gene at the molecular level. Consequently, the functional consistency between two diseases can be evaluated based on the overlap of the corresponding associated genes. Genes involved in the same biological process usually exhibit high functional consistency. Here we use the Gene Ontology (GO) semantic similarity measure proposed by [54] to evaluate the functional similarities of the associated genes for two diseases. The semantic similarity measure is an information content measure which takes into account the hierarchical structure of GO. It is calculated from the frequencies of two GO terms and that of their closest common ancestor term in the directed acyclic graph (DAG) of GO. The information content of a GO term is calculated as the negative log probability of the genes occurring in the GO term and all of its children terms against the total genes annotated in GO.
The frequency of a GO term t is computed as: $$ p(t) = \frac{n_{t^{\prime}}}{N}, \qquad t^{\prime} \in \{t, \text{children of } t\}, $$ where \(n_{t^{\prime}}\) is the number of genes annotated in term t and all of its children terms, and N is the number of total genes annotated in GO. The information content (IC) of GO term t is defined as: $$ IC(t) = -\log(p(t)). $$ Because a GO term can have multiple parents in the DAG, two terms can share parents via multiple paths. The similarity between two GO terms is calculated based on the information content of their closest common ancestor term, also called the most informative common ancestor (MICA). As proposed by [54], the semantic similarity between GO terms t1 and t2 with most informative common ancestor MICA is computed as: $$ sim(t_{1},t_{2}) = \frac{2\,IC(MICA)(1-p(MICA))}{IC(t_{1})+IC(t_{2})}. $$ The functional similarity between two genes is calculated from their annotated GO terms. Given two GO term sets GO1={go11,go12,...,go1m} and GO2={go21,go22,...,go2n} annotated to genes g1 and g2 respectively, the similarity matrix between GO1 terms and GO2 terms is computed following Eq. (9). The similarity score between the two genes is the average of all maximum similarities over each row and column of the GO term similarity matrix: $$ sim(g_{1}, g_{2}) = \frac{\sum\limits_{i=1}^{m} \max\limits_{1 \leq j \leq n} sim({go}_{1i},{go}_{2j}) + \sum\limits_{j=1}^{n} \max\limits_{1 \leq i \leq m} sim({go}_{1i},{go}_{2j})}{m+n}. $$

Genome-wide analysis of gene expression in human inflammatory diseases reveals several interesting signalling gene families

Following the Step 1 analysis procedure introduced in the Methods section, a number of significantly differentially expressed genes are identified in the selected inflammatory diseases (see Additional file 3).
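The three formulas above (information content, the relevance similarity of [54], and the best-match average over rows and columns) can be sketched as follows. This is a hedged illustration, not a full GO pipeline: the term–term similarity matrix and all numbers below are hypothetical, and real use would compute p(t) from actual GO annotation counts.

```python
import numpy as np

def information_content(p):
    """IC(t) = -log p(t), with p(t) the annotation frequency of term t."""
    return -np.log(p)

def term_similarity(ic1, ic2, ic_mica, p_mica):
    """Relevance similarity of two GO terms given their MICA:
    sim(t1, t2) = 2 * IC(MICA) * (1 - p(MICA)) / (IC(t1) + IC(t2))."""
    return 2.0 * ic_mica * (1.0 - p_mica) / (ic1 + ic2)

def gene_similarity(sim_matrix):
    """Best-match average over an m x n term-term similarity matrix:
    mean of the row-wise maxima and the column-wise maxima."""
    row_best = sim_matrix.max(axis=1)
    col_best = sim_matrix.max(axis=0)
    m, n = sim_matrix.shape
    return (row_best.sum() + col_best.sum()) / (m + n)

# toy 3 x 2 term-term similarity matrix for two genes (values hypothetical)
S = np.array([[0.9, 0.2],
              [0.1, 0.6],
              [0.3, 0.3]])
score = gene_similarity(S)  # (0.9 + 0.6 + 0.3 + 0.9 + 0.6) / 5 = 0.66
```

Note that when t1 = t2 = MICA the relevance similarity reduces to 1 - p(t), so very frequent (uninformative) shared terms contribute little.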
The significant differential expression of these genes in patients compared with healthy samples suggests strong associations between the genes and the inflammatory diseases. The genes belong to several important inflammation-related signalling gene families, such as the chemokine receptor family, the tumor necrosis factor (TNF) receptor family, the transforming growth factor beta (TGF– β) family and the interleukin 17 (IL–17) family. The comparative analysis of the selected inflammatory diseases shows that the TNF receptor family and the TGF– β gene family are more differentially expressed between healthy and disease samples: more members of these two gene families exhibit significant p-values than of the chemokine receptor family. The signalling molecules of the IL–17 family are represented by Rel A, B, TRAF, NFkB1 and NFkB2 related gene members. Their enriched presence in the pool of disrupted genes in all diseases highlights the crosstalk with the NF–kappaB signalling pathway. The TGF– β family is involved in most of the considered diseases, e.g., TGFB1 in Osteoporosis; TGFB3 in Rheumatoid arthritis, Osteoarthritis and Multiple myeloma; TGFBR2 in HIV, Osteomyelitis and Measles; BMP in Breast cancer and Osteosarcoma; and BMP3 in Periodontitis. The TNF receptor family is involved in many of the selected inflammatory diseases, e.g., TNFRSF10B in Osteoporosis, Osteomyelitis and Periodontitis; TNFRSF11B and TNFSF13B in Osteopetrosis and Periodontitis; and TNFIP6/8 in Osteomyelitis.

Phylogenetic–based multi–omic analysis suggests potential disease associated genes of four signalling gene families: the chemokine receptor family, the TNF receptor family, the TGF– β family and the IL–17 family

Case study of the chemokine receptor family

Chemokine receptors belong to the large family of G protein–coupled receptors and are abundantly expressed in a variety of immune cells, playing a crucial role in the immune system by binding chemokines [36].
Accumulating evidence has provided insight into the importance of chemokines and chemokine receptors in various diseases, including cancers, HIV and inflammatory diseases [55]. For example, the chemokine receptors CXCR4 and CCR7 have been found to be involved in breast cancer metastasis [56], and both CXCR4 and CCR5 have been successfully used as drug targets for haematopoietic stem cell mobilization and HIV inhibition [57]. Despite the growing effort in developing drugs targeting chemokine receptors, there has been limited success in clinical trials concerning inflammatory diseases. The effects of biased signalling mechanisms at the receptor level on the fine–tuning of the immune system [36] are not yet clearly understood. Growing evidence of biased signalling in the chemokine gene family implies that different chemokines activate specific signalling pathways via binding to the corresponding receptors in different inflammatory diseases. Studying these dynamic regulation mechanisms will help in understanding the genesis of the diseases and improve drug discovery. Here, we apply the proposed phylogenetic–based multi–omic method to the chemokine receptor family to detect the members of this family that are significantly associated with different inflammatory diseases. Using the multivariate OU framework, the optimal correlation models between gene expression and promoter/body methylation data, taking the phylogenetic information into account, are estimated for chemokine receptors in controls; they are shown in Table 1. The optimal correlation models for gene expression vs. promoter methylation are almost the same as those for gene expression vs. body methylation; both mainly follow a bivariate OU model (OUOU) in eight diseases, except Colitis, for which the best correlation model is a BM model. The disease–associated chemokine receptors with significant dysregulation in expression or methylation are shown in Table 2.
The expression levels, or the methylation levels in the gene body regions, of the significant chemokine receptors in patients with Allergy, Asthma and Colitis do not follow the correlation models estimated from the corresponding control samples, which suggests that these chemokine receptors may be involved in the epigenetic regulation mechanisms underlying the genesis of the diseases. There is a preponderance of gene expression effects over gene body and promoter methylation, and a preponderance of gene body over promoter methylation. The phylogenetic correlation between the multi–omic information, diseases and genes is shown in Fig. 3a.

Phylogenetic correlation analysis for gene families: a the chemokine receptor family; b the TNF receptor family; c the TGF– β family; d the IL–17 family. In each figure, the phylogenetic tree for the protein sequences of the gene family, constructed using neighbour–joining, is shown on the left side. The sequences EBV (human EBV-induced G protein-coupled receptor), angio (human type-1 angiotensin II receptor isoform) and somas (human somatostatin receptor) are considered as outgroups. The scale bar refers to the branch lengths, measured in expected numbers of amino acid replacements per site. The significant associations between diseases and genes of the gene family are shown on the right side. Large black circles represent up–regulated expression; small black circles represent down–regulated expression; large white circles represent up–regulated (gene body AND promoter) methylation; small white circles represent down–regulated methylation

Table 1 Optimal evolutionary models of DNA methylation and gene expression for the four gene families in controls (promoter methylation vs. gene expression ∣ gene body methylation vs.
gene expression)

Table 2 Significant disease associated genes in the chemokine receptor family

Case study of the TNF receptor family

The TNF receptor family includes 29 receptors: trimeric cytokine receptors that bind tumor necrosis factors (TNFs). These receptors are important in determining the response outcome (e.g., apoptosis, inflammation), which suggests their potential roles in disease. The phylogenetic correlation between the multi–omic information, diseases and genes is shown in Fig. 3b. The correlation between promoter/body methylation and expression of the TNF receptor family follows the bivariate OU model in most diseases except Colon Cancer and Crohn. As shown in Table 1, the optimal model for gene expression vs. body methylation in Colon Cancer is the BM model, as is the optimal model for gene expression vs. promoter methylation in Crohn. The TNF receptors associated with the diseases are shown in Table 3. Many members of the TNF receptor family show significant dysregulation in expression or methylation in CFS, Colitis, Colon Cancer, RA and T2D. The disruption at the promoter level seems more important than the disruption at the gene body level.

Table 3 Significant disease associated genes in the TNF receptor family

Case study of the TGF– β family

The transforming growth factor beta (TGF– β) family plays key roles in cell proliferation and differentiation, and in other important biological processes [31]. Members of the TGF– β family are synthesized as prepropeptide precursors that are processed into mature, biologically active homodimers or heterodimers, which activate serine/threonine kinase receptors. Scientific evidence shows that the TGF– β proteins are involved in the genesis of several conditions, including immune disorders, cancer, bronchial asthma, lung fibrosis, heart disease, diabetes, Parkinson's disease, and AIDS [58].
The phylogenetic correlation between the multi–omic information, diseases and the 36 genes of the TGF– β family is shown in Fig. 3c. As shown in Table 1, the correlations between promoter/body methylation and expression of the TGF– β family follow a bivariate OU model in all situations. The significant genes of the TGF– β family associated with the diseases are shown in Table 4. Several members of the TGF– β family are significantly dysregulated in expression (in particular) or methylation in most diseases, but in CFS there is none, and in Colon Cancer only one gene is significantly up–regulated in gene body methylation. There is good agreement (bold in Table 4) between gene body and promoter methylation in their relationship with gene expression.

Table 4 Significant disease associated genes in the TGF– β family

Case study of the IL–17 family

The interleukin 17 (IL–17) family plays a crucial role in host defence against microbial organisms and in the genesis of proinflammatory diseases. IL–17 is commonly associated with allergic responses. IL–17 induces the production of many other cytokines (such as IL–6, G–CSF, GM–CSF, IL–1 β, TGF– β, TNF– α), chemokines (including IL–8, GRO– α, and MCP–1), and prostaglandins (e.g., PGE2) from many cell types (fibroblasts, endothelial cells, epithelial cells, keratinocytes, and macrophages). TGF– β and cytokines such as IL–6 drive the production of IL–17 cytokines in immunity and inflammation [59–61]. Here, we also applied the proposed method to the IL–17 gene family. The phylogenetic correlation between the multi–omic information, diseases and genes of the IL–17 family is shown in Fig. 3d. The correlation between promoter/body methylation and expression of the IL–17 family follows the bivariate OU model in most diseases except T2D and RA, for which the best correlation models follow the BM model (Table 1). The significant genes of the IL–17 family associated with the diseases are shown in Table 5.
Significantly dysregulated genes are found only in CFS, Crohn, SLE and T2D.

Table 5 Significant disease associated genes in the IL–17 family

Curated evidence for the identified disease associated genes

We find a body of evidence in the biomedical literature supporting the involvement of many of the identified genes in the genesis of the corresponding diseases (Table 6). Although no clear evidence has been reported for the other identified genes, there are some interesting clues. For example, ARTN has not yet been proved to be associated with T2D, but it is associated with Hirschsprung's disease 1 and late–onset Parkinson disease. It plays an important role in pathways related to developmental biology and interleukin receptor SHC signalling, and it is a strong attractant of gut hematopoietic cells, thus promoting the formation of Peyer's patch–like structures [62]. Although there is no report on the involvement of GDF1 in RA, it is associated with transposition of the great arteries, dextro-looped 3, and right atrial isomerism [63]. There is no report on the involvement of GDF9 in Crohn, but mutations in GDF9 can result in sterility and lower ovulation rate [64].

Table 6 Literature evidence for the identified disease associated genes in the four signalling gene families

Phylogenetic–based multi–omic analysis of signalling gene families reveals functional associations between inflammation related diseases

Disease gene association networks

The gene–disease association network (Fig. 4) is constructed from the significant genes of the four gene families identified by the proposed multi–omic analysis. As shown in the network, the four signalling related gene families tend to be associated with different diseases. For example, the chemokine receptor family may play important roles in Allergy, CFS, Colitis, RA, SLE and T2D, while the IL–17 gene family is probably related to T2D, CFS, Crohn and SLE.
The TGF– β and TNF receptor families, however, are more likely to be involved in all nine inflammation related diseases. The genes linked to multiple diseases in the network suggest common molecular mechanisms for the diseases, which also provide clues for exploring the functional associations underlying disease comorbidities.

Gene–disease associations between the diseases and significant genes of the four families based on gene expression and methylation. Circle nodes represent genes and square nodes represent diseases. The abbreviations of the diseases are shown in Additional file 2. Edges between two nodes represent the associations between genes and diseases: solid lines for associations in expression and dashed lines for associations in methylation

To explore the functional consistency among the nine diseases for each gene family, we calculate the Gene Ontology semantic similarity between the genes significantly associated with the diseases. The functional similarity among the diseases for the four gene families is shown in Fig. 5. A strong similarity between two diseases suggests that the diseases are probably induced by the same disrupted biological pathways. For instance, the high functional consistency between RA, SLE and T2D in the chemokine receptor family suggests that these diseases are more likely to involve similar functional mechanisms of epigenetic regulation on the pathways related to chemokine receptors.

Functional consistency of diseases based on the significant genes of each gene family

Inflammation map

Central to inflammation pathology studies are the molecular analysis of inflammatory comorbidities and the comorbidity map, which addresses the occurrence of different medical conditions or diseases, usually complex and often chronic ones, in the same patient.
A meaningful way to summarise the relationship between diseases and multi–omic information is to compute a principal component analysis of the matrix containing the diseases and the numbers of changes in methylation and gene expression in each of the four families. Figure 6 shows the first two principal components of the disease–methylation and disease–gene expression associations, in which up-dysregulation and down-dysregulation in gene expression and promoter/body methylation are combined.

Principal component analysis (the first two principal components) of the matrix of disease–methylation and disease–expression associations. The figure shows relationships between diseases and dysregulation of gene expression and DNA methylation in the four signalling gene families

The proposed method provides a tool to study the involvement of gene families in human disease by integrating gene expression, DNA methylation and gene sequences through a phylogenetic approach. The different models of adaptation (OU and BM) can be evaluated and compared, based on the evolutionary relationships within the gene family, using gene expression and methylation data. The members of a gene family whose expression or methylation levels do not follow the corresponding optimal evolutionary models are considered significantly associated with the diseases, which suggests the involvement of such genes in the epigenetic regulation mechanisms related to the genesis of the diseases. We believe that the proposed methodology provides a meaningful approach to compare the contributions of different omic data (DNA methylation and gene expression) and of different genes within a family/group to a disease condition. The proposed methodology could be extended by integrating other omic data in the future. Currently, the methodology is primarily limited by the amount of multi–omic data (traits) that phylogenetic comparative methods can handle.
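The disease-level PCA summary described above (Fig. 6) can be sketched with a centred singular value decomposition. All counts below are hypothetical stand-ins for the numbers of dysregulated genes per disease across the four families and the two data types; this is an illustration of the computation, not the paper's actual matrix.

```python
import numpy as np

def pca(X, k=2):
    """PCA via SVD of the column-centred matrix.

    Returns the coordinates of the rows (diseases) on the first k principal
    components and the fraction of variance each of those components explains.
    """
    Xc = X - X.mean(axis=0)                        # centre each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                      # row coordinates on first k PCs
    explained = s**2 / (s**2).sum()                # variance fractions, descending
    return scores, explained[:k]

# hypothetical diseases x (family, omic-type) counts of dysregulated genes
X = np.array([
    [5, 2, 1, 0, 3, 1, 4, 2],
    [1, 0, 6, 2, 2, 3, 1, 0],
    [4, 3, 2, 1, 5, 2, 3, 1],
    [0, 1, 1, 4, 1, 0, 2, 3],
    [2, 2, 3, 3, 2, 4, 0, 1],
], dtype=float)
scores, explained = pca(X, k=2)
```

Plotting `scores[:, 0]` against `scores[:, 1]` gives the kind of two-dimensional disease map shown in the figure.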
Previously, methods that calculated the whole between–species–between–traits variance–covariance matrix could handle only up to five or six traits, depending on the size of the phylogeny. However, there has recently been tremendous progress in speeding up likelihood calculations for OU–based evolutionary models [65–68]. These new methods are based on either the three-point structure [67] or Felsenstein's pruning algorithm [69], which allow the likelihood to be evaluated in time linear in the number of tips, and hence should make phylogenetic approaches a key multi-omic integration step instead of a computational bottleneck. The new methods thus hold promise that it should be possible to analyse scores of traits for thousands of species in the near future. In addition, the proposed method for phylogenetic–based multi–omic analysis is limited to a single gene family, because the model assumes that the correlation between gene expression and DNA methylation of genes from a family is constrained by the evolutionary relationships of the gene family. In practice, however, the roles of DNA methylation in gene expression regulation are complex and involve not only genes within the same family but possibly also genes from other, functionally related, gene families. In the future, the methodology could be extended to multiple gene families by taking into account the functional crosstalk between different families.

We have performed a comparative study to explore the influence of signalling gene families in several inflammation related diseases. First, we analysed gene expression in a collection of inflammatory diseases, which highlighted the importance of gene families involved in extracellular signalling. In particular, we identified four families significantly associated with the inflammatory diseases: the chemokine receptor family, the TNF receptor family, the TGF– β family and the IL–17 family.
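The pruning idea behind the linear-time likelihood is easiest to see in the simplest setting: a single trait evolving by Brownian motion with unit rate and a fixed root state, rather than the OU machinery of [65–68]. The sketch below is a hedged illustration with a made-up toy tree (the dictionary representation and all numbers are hypothetical); the pruning result is checked against the brute-force multivariate-normal density, which costs cubic time in the number of tips.

```python
import numpy as np

def norm_logpdf(x, var):
    """Log density of N(0, var) at x."""
    return -0.5 * (np.log(2.0 * np.pi * var) + x * x / var)

def prune(node):
    """Post-order pass: returns (partial log-likelihood, state estimate, extra variance)."""
    if "value" in node:                        # tip: observed trait, no extra variance
        return 0.0, node["value"], 0.0
    (c1, t1), (c2, t2) = node["children"]      # (subtree, branch length) pairs
    ll1, x1, v1 = prune(c1)
    ll2, x2, v2 = prune(c2)
    V1, V2 = v1 + t1, v2 + t2
    # the contrast x1 - x2 is N(0, V1 + V2) under unit-rate Brownian motion
    ll = ll1 + ll2 + norm_logpdf(x1 - x2, V1 + V2)
    x = (x1 / V1 + x2 / V2) / (1.0 / V1 + 1.0 / V2)  # precision-weighted estimate
    v = 1.0 / (1.0 / V1 + 1.0 / V2)
    return ll, x, v

def bm_loglik(tree, root_state):
    """Log-likelihood of the tip values given a fixed root state."""
    ll, x, v = prune(tree)
    return ll + norm_logpdf(x - root_state, v)

# toy tree ((A:1.0, B:0.5):0.4, C:2.0) with tip values 1.2, 0.3, -0.7
tree = {"children": [
    ({"children": [({"value": 1.2}, 1.0), ({"value": 0.3}, 0.5)]}, 0.4),
    ({"value": -0.7}, 2.0),
]}
ll_prune = bm_loglik(tree, 0.0)

# brute-force check: tips are jointly Gaussian, Cov(i, j) = shared path length from root
C = np.array([[1.4, 0.4, 0.0],
              [0.4, 0.9, 0.0],
              [0.0, 0.0, 2.0]])
y = np.array([1.2, 0.3, -0.7])
sign, logdet = np.linalg.slogdet(C)
ll_direct = -0.5 * (3 * np.log(2.0 * np.pi) + logdet + y @ np.linalg.solve(C, y))
```

One recursive pass touches each branch once, which is why this style of computation scales to trees far larger than explicit covariance-matrix approaches.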
Then, in order to understand the roles of these gene families in some specific inflammation related diseases, we proposed a phylogenetic–based multi–omic method to study the correlations between gene expression and DNA methylation of the members of each gene family, taking their evolutionary relationships into account. Applying the proposed method to four signalling gene families in nine inflammation related diseases, we identified a number of significant disease associated genes whose expression or methylation levels in the patients significantly deviate from the evolutionary models estimated from the control samples. Our results suggest that these families are involved in different specific diseases. The chemokine receptor family may play important roles in Allergy, Asthma and Colitis, while the TNF receptor family may play key roles in CFS, Colitis, Colon Cancer and T2D. The TGF– β family, however, appears to be involved in all nine diseases. Besides larger gene families such as the three aforementioned ones, the proposed method also works on small gene families such as the IL–17 gene family, which contains only six members. The relationship between gene expression and DNA methylation (promoter region or gene body region) mainly follows a bivariate OU model. The genes exhibiting significant dysregulation in promoter methylation and gene expression differ from those dysregulated in gene body methylation. In the TNF receptor family, more genes show significant alterations in the promoter region than in the gene body region; the opposite holds in the TGF– β family. For the chemokine receptor family, the diseases Allergy, Asthma, CFS and Colitis involve both the gene body methylation and the promoter methylation of the family, whereas RA, SLE and T2D show differences in promoter methylation. From the biomedical literature, we observe that the impact of methylation levels on diseases seems to be of the same magnitude as that of gene expression levels.
Based on the identified disease associated genes for each gene family, the functional associations among the diseases are constructed, revealing the functional consistency and differences between diseases in terms of a signalling gene family. The members of the gene families exhibit different involvement in the inflammatory diseases: in the disease–gene network constructed from the identified disease associated genes, the connectivities of the genes differ. For example, GDF15 is involved in seven diseases, while TGFB3, INHBC,A, TNFRSF1B and IL17E are associated with four diseases. Most of the other genes have two or three links to the diseases. We obtain multiple confirmatory results and a number of novel gene–disease associations that require experimental verification.

Abbreviations: CFS: Chronic fatigue syndrome; Crohn: Crohn's disease; DAG: Directed acyclic graph; DEGs: Dysregulated expressed genes; DMGs: Dysregulated methylated genes; eGWAS: Gene expression based genome–wide association study; GEO: Gene Expression Omnibus; HSCs: Hematopoietic stem cells; IBD: Inflammatory bowel disease; IC: Information content; IL–17: Interleukin 17; MICA: Most informative common ancestor; miRNA: MicroRNA; OU: Ornstein–Uhlenbeck; OUOU: Bivariate OU model; SLE: Systemic lupus erythematosus; T2D: Type 2 diabetes; TGF– β: Transforming growth factor beta; TNF: Tumor necrosis factor

Jensen AB, Moseley PL, Oprea TI, Ellesøe SG, Eriksson R, Schmock H, et al. Temporal disease trajectories condensed from population-wide registry data covering 6.2 million patients. Nat Commun. 2014; 5(5):4022. Jeon JP, Shim SM, Nam HY, Ryu GM, Hong EJ, Kim HL, et al. Copy number variation at leptin receptor gene locus associated with metabolic traits and the risk of type 2 diabetes mellitus. BMC Genomics. 2010; 11(1):426. Lee DS, Park J, Kay K, Christakis N, Oltvai Z, Barabási AL. The implications of human metabolic network topology for disease comorbidity. Proc Natl Acad Sci. 2008; 105(29):9880–5. Castellani G, Menichetti G, Garagnani P, Bacalini M, Pirazzini C, Franceschi C, et al. Systems medicine of inflammaging. Brief Bioinform. 2016; 17(3):527–540.
Martin-Subero M, Anderson G, Kanchanatawan B, Berk M, Maes M. Comorbidity between depression and inflammatory bowel disease explained by immune-inflammatory, oxidative, and nitrosative stress; tryptophan catabolite; and gut–brain pathways. CNS Spectrums. 2016; 21(2):184–98. Wei TT, Lin YT, Tseng RY, Shun CT, Lin YC, Wu MS, et al. Prevention of Colitis and Colitis-Associated Colorectal Cancer by a Novel Polypharmacological Histone Deacetylase Inhibitor. Clin Cancer Res Off J Am Assoc Cancer Res. 2016; 22:4158–69. Wang CZ, Yu C, Wen XD, Chen L, Zhang CF, Calway T, et al. American Ginseng Attenuates Colitis-Associated Colon Carcinogenesis in Mice: Impact on Gut Microbiota and Metabolomics. Cancer Prev Res (Philadelphia, Pa). 2016; 9:803–11. Chen J, Wright K, Davis JM, Jeraldo P, Marietta EV, Murray J, et al. An expansion of rare lineage intestinal microbes characterizes rheumatoid arthritis. Genome Med. 2016; 8:43. He Z, Shao T, Li H, Xie Z, Wen C. Alterations of the gut microbiome in Chinese patients with systemic lupus erythematosus. Gut Pathog. 2016; 8:64. Galley JD, Parry NM, Ahmer BMM, Fox JG, Bailey MT. The commensal microbiota exacerbate infectious colitis in stressor-exposed mice. Brain, Behav, Immun. 2017; 60:44–50. Manuc TEM, Manuc MM, Diculescu MM. Recent insights into the molecular pathogenesis of Crohn's disease: a review of emerging therapeutic targets. Clin Exp Gastroenterol. 2016; 9:59. Stecher B. The Roles of Inflammation, Nutrient Availability and the Commensal Microbiota in Enteric Pathogen Infection. Microbiol Spectr. 2015;3. Hold GL, Smith M, Grange C, Watt ER, El-Omar EM, Mukhopadhya I. Role of the gut microbiota in inflammatory bowel disease pathogenesis: what have we learnt in the past 10 years? World J Gastroenterol. 2014; 20:1192–210. Chen WX, Ren LH, Shi RH. Enteric microbiota leads to new therapeutic strategies for ulcerative colitis. World J Gastroenterol. 2014; 20:15657–63.
Zhang XY, Liu ZM, Zhang HF, Li YS, Wen SH, Shen JT, et al. TGF- β1 improves mucosal IgA dysfunction and dysbiosis following intestinal ischaemia–reperfusion in mice. J Cell Mol Med. 2016; 20(6):1014–23. Ruane D, Chorny A, Lee H, Faith J, Pandey G, Shan M, et al. Microbiota regulate the ability of lung dendritic cells to induce IgA class-switch recombination and generate protective gastrointestinal immune responses. J Exp Med. 2016; 213(1):53–73. Crusz SM, Balkwill FR. Inflammation and cancer: advances and new agents. Nat Rev Clin Oncol. 2015; 12:584–96. Surowiec I, Ärlestig L, Rantapää-Dahlqvist S, Trygg J. Metabolite and lipid profiling of biobank plasma samples collected prior to onset of rheumatoid arthritis. PLoS ONE. 2016; 11(10):e0164196. Mankia K, Emery P. Preclinical rheumatoid arthritis: progress toward prevention. Arthritis Rheum. 2016; 68(4):779–88. Ray S, Britschgi M, Herbert C, Takeda-Uchimura Y, Boxer A, Blennow K, et al. Classification and prediction of clinical Alzheimer's diagnosis based on plasma signaling proteins. Nat Med. 2007; 13:1359–62. Sun YV, Hu YJ. Integrative Analysis of Multi-omics Data for Discovery and Functional Studies of Complex Human Diseases. Adv Genet. 2016; 93:147–90. Hasin Y, Seldin M, Lusis A. Multi-omics approaches to disease. Genome Biol. 2017; 18:83. Zhang W, Liu Y, Sun N, Wang D, Boyd-Kirkup J, Dou X, et al. Integrating genomic, epigenomic, and transcriptomic features reveals modular signatures underlying poor prognosis in ovarian cancer. Cell Rep. 2013; 4(3):542–53. Cabezas-Wallscheid N, Klimmeck D, Hansson J, Lipka DB, Reyes A, Wang Q, et al. Identification of regulatory networks in HSCs and their immediate progeny via integrated proteome, transcriptome, and DNA Methylome analysis. Cell Stem Cell. 2014; 15(4):507–22. Cantini L, Medico E, Fortunato S, Caselle M. Detection of gene communities in multi-networks reveals cancer drivers. Sci Rep. 2015; 5:17386. Chaudhary K, Poirion OB, Lu L, Garmire LX.
Deep Learning based multi-omics integration robustly predicts survival in liver cancer. Clin Cancer Res. 2017; 24(6):clincanres.0853.2017. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10):R80. Therneau TM, Ballman KV. What does PLIER really do? Cancer Informat. 2008; 6:423. Lazennec G, Richmond A. Chemokines and chemokine receptors: new insights into cancer-related inflammation. Trends Mol Med. 2010; 16(3):133–44. Sulkowska M, Wincewicz A, Sulkowski S, Koda M, KanczugaKoda L. Relations of TGF-beta1 with HIF-1 alpha, GLUT-1 and longer survival of colorectal cancer patients. Pathology. 2009; 41:254–60. Pezzolesi M, Satake E, McDonnell K, Major M, Smiles A, Krolewski A. Circulating TGF-beta1-Regulated miRNAs and the Risk of Rapid Progression to ESRD in Type 1 Diabetes. Diabetes. 2015; 64:3285–93. Chen G, Deng C, Li YP. TGF-beta and BMP Signaling in Osteoblast Differentiation and Bone Formation. Int J Biol Sci. 2012; 8:272–88. Buijs JT, Stayrook KR, Guise TA. TGF-beta in the Bone Microenvironment: Role in Breast Cancer Metastases. Cancer Microenviron. 2011; 4:261–81. Dunn LK, Mohammad KS, Fournier PGJ, McKenna CR, Davis HW, et al. Hypoxia and TGF-beta drive breast cancer bone metastases through parallel signaling pathways in tumor cells and the bone microenvironment. PLoS ONE. 2009; 4(9):e6896. https://doi.org/10.1371/journal.pone.0006896. Itoh S, Itoh F. Implication of TGF-beta as a survival factor during tumor development. J Biochem. Advance Access published April 23, 2012. Zweemer AJM, Toraskar J, Heitman LH, IJzerman AP. Bias in chemokine receptor signalling. Trends Immunol. 2014; 35(6):243–52. Bartoszek K, Pienaar J, Mostad P, Andersson S, Hansen TF. A phylogenetic comparative method for studying multivariate adaptation. J Theor Biol. 2012; 314:204–15. Beaulieu JM, Jhwueng DC, Boettiger C, O'Meara BC.
Modeling stabilizing selection: expanding the Ornstein–Uhlenbeck model of adaptive evolution. Evolution. 2012; 66:2369–89. Clavel J, Escarguel G, Merceron G. mvMORPH: an R package for fitting multivariate evolutionary models to morphometric data. Methods Ecol Evol. 2015; 6(11):1311–9. Hansen TF. Stabilizing selection and the comparative analysis of adaptation. Evolution. 1997; 51(5):1341–51. Hansen TF, Pienaar J, Orzack SH. A comparative method for studying adaptation to a randomly evolving environment. Evolution. 2008; 62(8):1965–77. Gould SJ, Eldredge N. Punctuated equilibrium comes of age. Nature. 1993; 366:223–7. Bedford T, Hartl DL. Optimization of gene expression by natural selection. Proc Natl Acad Sci U S A. 2009; 106(4):1133–8. Rohlfs RV, Harrigan P, Nielsen R. Modeling gene expression evolution with an extended Ornstein-Uhlenbeck process accounting for within-species variation. Mol Biol Evol. 2014; 31(1):201–11. Bartoszek K, Lió P. A Novel Algorithm to Reconstruct Phylogenies Using Gene Sequences and Expression Data. In: International Proceedings of Chemical, Biological & Environmental Engineering; Environment, Energy and Biotechnology III: 2014. p. 8–12. Jones PA. Functions of DNA methylation: islands, start sites, gene bodies and beyond. Nat Rev Genet. 2012; 13:484–92. Mosquera Orgueira A. Hidden among the crowd: differential DNA methylation-expression correlations in cancer occur at important oncogenic pathways. Front Genet. 2015; 6:163. Haider S, Cordeddu L, Robinson E, Movassagh M, Siggens L, Vujic A, et al. The landscape of DNA repeat elements in human heart failure. Genome Biol. 2012; 13(10):R90. Movassagh M, Choy MK, Knowles DA, Cordeddu L, Haider S, Down T, et al. Distinct epigenomic features in end-stage failing human hearts. Circulation. 2011; 124(22):2411–22. Butler MA, King AA. Phylogenetic comparative analysis: a modeling approach for adaptive evolution. Am Nat. 2004; 164(6):683–95. Hansen TF, Bartoszek K.
Interpreting the evolutionary regression: the interplay between observational and biological errors in phylogenetic comparative studies. Syst Biol. 2012; 61(3):413–25. Rohlfs RV, Harrigan P, Nielsen R. Modeling gene expression evolution with an extended Ornstein-Uhlenbeck process accounting for within-species variation. Mol Biol Evol. 2013:mst190. Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann Stat. 2001:1165–88. Schlicker A, Domingues FS, Rahnenführer J, Lengauer T. A new measure for functional similarity of gene products based on Gene Ontology. BMC Bioinforma. 2006; 7:302. Balkwill F. Cancer and the chemokine network. Nat Rev Cancer. 2004; 4(7):540–50. Müller A, Homey B, Soto H, Ge N, Catron D, Buchanan ME, et al. Involvement of chemokine receptors in breast cancer metastasis. Nature. 2001; 410(6824):50–6. Schall TJ, Proudfoot AEI. Overcoming hurdles in developing successful drugs targeting chemokine receptors. Nat Rev Immunol. 2011; 11(5):355–63. Gordon KJ, Blobe GC. Role of transforming growth factor- β superfamily signaling pathways in human disease. Biochimica et Biophysica Acta (BBA)-Molecular Basis of Disease. 2008; 1782(4):197–228. McGeachy M, Bak-Jensen K, Chen Y, Tato C, Blumenschein W, McClanahan T, et al. TGF-beta and IL-6 drive the production of IL-17 and IL-10 by T cells and restrain T(H)-17 cell-mediated pathology. Nat Immunol. 2007; 8:1390–7. Veldhoen M, Hocking R, Atkins C, Locksley R, Stockinger B. TGFbeta in the context of an inflammatory cytokine milieu supports de novo differentiation of IL-17-producing T cells. Immunity. 2006; 24:179–89. Lee Y, Awasthi A, Yosef N, Quintana FJ, Xiao S, Peters A, et al. Induction and molecular signature of pathogenic TH17 cells. Nat Immunol. 2012; 13(10):991–9. Veiga-Fernandes H, Coles MC, Foster KE, Patel A, Williams A, Natarajan D, et al. Tyrosine kinase receptor RET is a key regulator of Peyer's patch organogenesis. Nature. 2007; 446(7135):547–51.
Kaasinen E, Aittomäki K, Eronen M, Vahteristo P, Karhu A, Mecklin JP, et al. Recessively inherited right atrial isomerism caused by mutations in growth/differentiation factor 1 (GDF1). Hum Mol Genet. 2010; 19(14):2747–53. Nicol L, Bishop SC, Pong-Wong R, Bendixen C, Holm LE, Rhind SM, et al. Homozygosity for a single base-pair mutation in the oocyte-specific GDF9 gene results in sterility in Thoka sheep. Reproduction. 2009; 138(6):921–33. Goolsby EW. Rapid maximum likelihood ancestral state reconstruction of continuous characters: A rerooting-free algorithm. Ecol Evol. 2017; 7:2791–7. Mitov V, Stadler T. Fast Bayesian Inference of Phylogenetic Models Using Parallel Likelihood Calculation and Adaptive Metropolis Sampling; 2017. bioRxiv (date accessed on 18/12/2017). Available from: https://www.biorxiv.org/content/early/2017/12/18/235739. Tung Ho Ls, Ané C. A linear-time algorithm for Gaussian and non-Gaussian trait evolution models. Syst Biol. 2014; 63(3):397–408. Hiscott G, Fox C, Parry M, Bryant D. Efficient recycled algorithms for quantitative trait models on phylogenies. Genome Biol Evol. 2016; 8(5):1338–50. Felsenstein J. Maximum-likelihood estimation of evolutionary trees from continuous characters. Am J Hum Genet. 1973; 25(5):471. Gilles S, Traidl-Hoffmann C. CD27 expression on allergen-specific T cells: A new surrogate for successful allergen-specific immunotherapy?J Allergy Clin Immunol. 2012; 129(2):552–4. Benson M, Carlsson L, Guillot G, Jernås M, Langston M, Rudemo M, et al.A network-based analysis of allergen-challenged CD4&plus; T cells from patients with allergic rhinitis. Genes Immun. 2006; 7(6):514–21. Lukacs NW, Prosser DM, Wiekowski M, Lira SA, Cook DN. Requirement for the chemokine receptor CCR6 in allergic pulmonary inflammation. J Exp Med. 2001; 194(4):551–6. Donner J, Haapakoski R, Ezer S, Melén E, Pirkola S, Gratacòs M, et al. Assessment of the neuropeptide S system in anxiety disorders. Biol Psychiatry. 2010; 68(5):474–83. 
Sarkar S, Song Y, Sarkar S, Kipen HM, Laumbach RJ, Zhang J, et al. Suppression of the NF- κB pathway by diesel exhaust particles impairs human antimycobacterial immunity. J Immunol. 2012; 188(6):2778–93. Hardcastle SL, Brenu EW, Johnston S, Nguyen T, Huth T, Ramos S, et al.Longitudinal analysis of immune abnormalities in varying severities of Chronic Fatigue Syndrome/Myalgic Encephalomyelitis patients. J Transl Med. 2015; 13(1):1–9. Del Zotto B, Mumolo G, Pronio A, Montesani C, Tersigni R, Boirivant M. TGF- β1 production in inflammatory bowel disease: differing production patterns in Crohn's disease and ulcerative colitis. Clin Exp Immunol. 2003; 134(1):120–6. McKaig B, Hughes K, Tighe P, Mahida Y. Differential expression of TGF- β isoforms by normal and inflammatory bowel disease intestinal myofibroblasts. Am J Physiol-Cell Physiol. 2002; 282(1):C172–C182. Ohta T, Sugiyama M, Hemmi H, Yamazaki C, Okura S, Sasaki I, et al.Crucial roles of XCR1-expressing dendritic cells and the XCR1-XCL1 chemokine axis in intestinal immune homeostasis. Sci Rep. 2016; 6:23505. Ishitsuka K, Murahashi M, Katsuya H, Mogi A, Masaki M, Kawai C, et al. Colitis mimicking graft-versus-host disease during treatment with the anti-CCR4 monoclonal antibody, mogamulizumab. Int J Hematol. 2015; 102(4):493–7. Dranse HJ, Rourke JL, Stadnyk AW, Sinal CJ. Local chemerin levels are positively associated with DSS-induced colitis but constitutive loss of CMKLR1 does not protect against development of colitis. Physiol Rep. 2015; 3(8):e12497. Ranganathan P, Jayakumar C, Manicassamy S, Ramesh G. CXCR2 knockout mice are protected against DSS-colitis-induced acute kidney injury and inflammation. Am J Physiol-Renal Physiol. 2013; 305(10):F1422–F1427. Frick VO, Rubie C, Ghadjar P, Faust SK, Wagner M, Gräber S, et al. Changes in CXCL12/CXCR4-chemokine expression during onset of colorectal malignancies. Tumor Biol. 2011; 32(1):189–96. Manocha M, Svend R, Laouar A, Liao G, Bhan A, Borst J, et al. 
Blocking CD27-CD70 costimulatory pathway suppresses experimental colitis. J Immunol. 2009; 183(1):270–6. Mizoguchi E, Mizoguchi A, Takedatsu H, Cario E, De Jong YP, Ooi CJ, et al. Role of tumor necrosis factor receptor 2 (TNFR2) in colonic epithelial hyperplasia and chronic intestinal inflammation in mice. Gastroenterol. 2002; 122(1):134–44. Kim H, Zhao Q, Zheng H, Li X, Zhang T, Ma X. A novel crosstalk between TLR4-and NOD2-mediated signaling in the regulation of intestinal inflammation. Sci Rep. 2015; 5(1):12018. Medrano L, Taxonera C, Márquez A, Barreiro-de Acosta M, Gómez-García M, González-Artacho C, et al. Role of TNFRSF1B polymorphisms in the response of Crohn's disease patients to infliximab. Hum Immunol. 2014; 75(1):71–5. Kim MN, Kim YI, Cho C, Mayo KE, Cho BN. Change in the Gastro-Intestinal Tract by Overexpressed Activin Beta A. Mol cells. 2015; 38(12):1079. Biancheri P, Pender S, Ammoscato F, Giuffrida P, Sampietro G, Ardizzone S, et al. The role of interleukin 17 in Crohn's disease-associated intestinal fibrosis. Fibrogenesis Tissue Repair. 2013; 6(1):13. Nickel N, Kempf T, Tapken H, Tongers Jr, Laenger F, Lehmann U, et al. Growth differentiation factor-15 in idiopathic pulmonary arterial hypertension. Am J Respir Crit Care Med. 2008; 178(5):534–41. Sun X, Meng Y, You T, Li P, Wu H, Yu M, et al. Association of growth/differentiation factor 1 gene polymorphisms with the risk of congenital heart disease in the Chinese Han population. Mol Biol Rep. 2013; 40(2):1291–9. Zhou Y, Jiang Z, Harris EC, Reeves J, Chen X, Pazdro R. Circulating Concentrations of Growth Differentiation Factor 11 Are Heritable and Correlate With Life Span. J Gerontol. 2016; 71(12):1560. Sisto M, Barca A, Lofrumento DD, Lisi S. Downstream activation of NF- κB in the EDA-A1/EDAR signalling in Sjögren's syndrome and its regulation by the ubiquitin-editing enzyme A20. Clin Exp Immunol. 2016. Min SH, Wang Y, Gonsiorek W, Anilkumar G, Kozlowski J, Lundell D, et al. 
Pharmacological targeting reveals distinct roles for CXCR2/CXCR1 and CCR2 in a mouse model of arthritis. Biochem Biophys Res Commun. 2010; 391(1):1080–6. Sica GL, Zhu G, Tamada K, Liu D, Ni J, Chen L. RELT, a new member of the tumor necrosis factor receptor superfamily, is selectively expressed in hematopoietic tissues and activates transcription factor NF- κB. Blood. 2001; 97(9):2702–7. Wang A, Liu F, Chen S, Wang M, Jia R, Zhu D, et al. Transcriptome Analysis and Identification of Differentially Expressed Transcripts of Immune-Related Genes in Spleen of Gosling and Adult Goose. Int J Mol Sci. 2015; 16(9):22904–26. Pohlers D, Beyer A, Koczan D, Wilhelm T, Thiesen HJ, Kinne RW. Constitutive upregulation of the transforming growth factor- β pathway in rheumatoid arthritis synovial fibroblasts. Arthritis Res Ther. 2007; 9(3):R59. Krüger JP, Endres M, Neumann K, Häupl T, Erggelet C, Kaps C. Chondrogenic differentiation of human subchondral progenitor cells is impaired by rheumatoid arthritis synovial fluid. J Orthop Res. 2010; 28(6):819–27. Jacobs JP, Ortiz-Lopez A, Campbell JJ, Gerard CJ, Mathis D, Benoist C. Deficiency of CXCR2, but not other chemokine receptors, attenuates autoantibody-mediated arthritis in a murine model. Arthritis Rheum. 2010; 62(7):1921–32. Bramlage CP, Häupl T, Kaps C, Ungethüm U, Krenn V, Pruss A, et al. Decrease in expression of bone morphogenetic proteins 4 and 5 in synovial tissue of patients with osteoarthritis and rheumatoid arthritis. Arthritis Res Ther. 2006; 8(3):R58. Lories RJ, Derese I, Ceuppens JL, Luyten FP. Bone morphogenetic proteins 2 and 6, expressed in arthritic synovium, are regulated by proinflammatory cytokines and differentially modulate fibroblast-like synoviocyte apoptosis. Arthritis Rheum. 2003; 48(10):2807–18. Tanaka M, Ozaki S, Osakada F, Mori K, Okubo M, Nakao K. Cloning of follistatin-related protein as a novel autoantigen in systemic rheumatic diseases. Int Immunol. 1998; 10(9):1305–14. 
Thatava T, Armstrong AS, De Lamo JG, Edukulla R, Khan YK, Sakuma T, et al. Successful disease-specific induced pluripotent stem cell generation from patients with kidney transplantation. Stem Cell Res Ther. 2011; 2(6):48–8. Li Ym, Chen Zq, Yao X, Yang Az, Li As, Liu Dm, et al. mRNA expression of chemokine receptors on peripheral blood mononuclear cells and correlation with clinical features in systemic lupus erythematosus patients. Chin Med Sci J. 2010; 25(3):162–8. Hur KY. Is GDF15 a novel biomarker to predict the development of prediabetes or diabetes?Diabetes Metab J. 2014; 38(6):437–8. Ip B, Cilfone NA, Belkina AC, DeFuria J, Jagannathan-Bogdan M, Zhu M, et al. Th17 cytokines differentiate obesity from obesity-associated type 2 diabetes and promote TNF α production. Obesity. 2016; 24(1):102–12. Belkina A, DeFuria J, Jagannathan-Bogdan M, Hasson B, Kuchibhatla R, McDonnell M, et al. B cells support a dominant Th17 cytokine signature in type 2 diabetes (HEM4P. 255). J Immunol. 2014; 192(1):117–6. Kumar P, Natarajan K, Shanmugam N. High glucose driven expression of pro-inflammatory cytokine and chemokine genes in lymphocytes: molecular mechanisms of IL-17 family gene expression. Cell Signal. 2014; 26(3):528–39. Li L, Shen JJ, Bournat JC, Huang L, Chattopadhyay A, Li Z, et al. Activin signaling: effects on body composition and mitochondrial energy metabolism. Endocrinol. 2009; 150(8):3521–9. Vidaković M, Grdović N, Dinić S, Mihailović M, Uskoković A, Jovanović JA. The importance of the CXCL12/CXCR4 axis in therapeutic approaches to diabetes mellitus attenuation. Front Immunol. 2015;6. Howangyin KY, Silvestre JS. Diabetes mellitus and ischemic diseases molecular mechanisms of vascular repair dysfunction. Arterioscler Thromb Vasc Biol. 2014; 34(6):1126–35. 
Evangelista AF, Collares CV, Xavier DJ, Macedo C, Manoel-Caetano FS, Rassi DM, et al.Integrative analysis of the transcriptome profiles observed in type 1, type 2 and gestational diabetes mellitus reveals the role of inflammation. BMC Med Genet. 2014; 7(1):1. Vendrell J, Chacón MR. TWEAK: a new player in obesity and diabetes. Front Immunol. 2013; 4:488. Bonala S, Lokireddy S, McFarlane C, Patnam S, Sharma M, Kambadur R. Myostatin induces insulin resistance via Casitas B-lineage lymphoma b (Cblb)-mediated degradation of insulin receptor substrate 1 (IRS1) protein in response to high calorie diet intake. J Biol Chem. 2014; 289(11):7654–70. Tang S, Zhang R, Yu W, Jiang F, Wang J, Chen M, et al. Association of Genetic Variants of BMP4 with Type 2 Diabetes Mellitus and Clinical Traits in a Chinese Han Population. BioMed Res Int. 2013:2013. Taneera J, Lang S, Sharma A, Fadista J, Zhou Y, Ahlqvist E, et al. A systems genetics approach identifies genes and pathways for type 2 diabetes in human islets. Cell Metab. 2012; 16(1):122–34. Chacón MR, Richart C, Gomez J, Megia A, Vilarrasa N, Fernandez-Real J, et al. Expression of TWEAK and its receptor Fn14 in human subcutaneous adipose tissue. Relationship with other inflammatory cytokines in obesity. Cytokine. 2006; 33(3):129–37. PubMed Article CAS Google Scholar The authors acknowledge support by the European Union's Horizon 2020 research and innovation programme PROPAG-AGEING and EPIHEALTHNET. KB would like to thank PL for hosting him at the Computer Laboratory, University of Cambridge, making it possible for KB to contribute to this study. The authors would like to thank the anonymous reviewers for their helpful and constructive comments and suggestions that greatly contributed to improving this paper. Publication costs for this manuscript were sponsored by the European Union's Horizon 2020 research and innovation programme PROPAG–AGEING. 
All the gene expression and DNA methylation data used in this paper are public datasets downloaded from GEO (see Additional files 1 and 2). Source code of the proposed phylogenetic–based multi–omic approach could be assessed from https://github.com/bioinfoxh/phylogenetic-based_multi-omic_approach. About this supplement This article has been published as part of BMC Bioinformatics Volume 19 Supplement 15, 2018: Proceedings of the 12th International BBCC conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-19-supplement-15. Krzysztof Bartoszek and Pietro Lio' contributed equally to this work. Computer Laboratory, University of Cambridge, Cambridge, UK Hui Xiao & Pietro Lio' Department of Computer and Information Science, Linköping University, Linköping, Sweden Krzysztof Bartoszek Hui Xiao Pietro Lio' All authors contributed equally. PL designed the project and performed the genome–wide comparison of gene expression in inflammatory diseases for Step 1 of the study. KB and HX developed, implemented and applied the proposed phylogenetic–based multi–omic approach for Step 2 of the study. HX, KB and PL wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Pietro Lio'. Additional file 1 Gene expression datasets of inflammatory diseases in Step 1. The table in the pdf file shows the public datasets of gene expression of the inflammatory diseases used in Step 1 of the analysis. (PDF 30 kb) DNA methylation and gene expression datasets of inflammation related diseases in Step 2. The table in the pdf file shows the public datasets of DNA methylation and gene expression of the inflammation related diseases used in Step 2 of the analysis. (PDF 42 kb) Significant genes for the inflammatory diseases analysed in Step 1. Tables in the Excel file show genes with significant p-values for different inflammatory diseases. 
Genes are sorted alphabetically in order to identify the gene families. (XLS 58 kb) Xiao, H., Bartoszek, K. & Lio', P. Multi–omic analysis of signalling factors in inflammatory comorbidities. BMC Bioinformatics 19, 439 (2018). https://doi.org/10.1186/s12859-018-2413-x Multi–omic approach
CommonCrawl
Global stability of an epidemic model with stage structure and nonlinear incidence rates in a heterogeneous host population

Baodan Tian1,2, Yunguo Jin2,3, Shouming Zhong2 & Ning Chen1

In this paper, we study an epidemic model with stage structure and latency spreading in a heterogeneous host population. We show that if the disease-free equilibrium exists, then the global dynamics are determined by the basic reproduction number \(R_{0}\): the disease-free equilibrium is globally asymptotically stable when \(R_{0}\leq1\), and there exists a unique endemic equilibrium which is globally asymptotically stable when \(R_{0}>1\). The global stability of the endemic equilibrium is proved by using a graph-theoretic approach to the method of Lyapunov functions. Finally, numerical simulations are given to illustrate the main theoretical results.

A heterogeneous host population can be divided into several homogeneous groups according to modes of transmission, contact patterns, or geographic distributions. Multi-group epidemic models have been proposed in the mathematical epidemiology literature to describe the transmission dynamics of infectious diseases in heterogeneous host populations, such as measles, mumps, gonorrhea, HIV/AIDS, West Nile virus and vector-borne diseases such as malaria. Various forms of multi-group epidemic models have subsequently been studied to understand the mechanism of disease transmission. One of the most important subjects in this field is to obtain a threshold that determines the persistence or extinction of a disease. Guo et al. [1] developed a graph-theoretic approach to prove the global asymptotic stability of a unique endemic equilibrium of a multi-group epidemic model. By applying this idea, the global stability of the endemic equilibrium for several classes of multi-group epidemic models was investigated in [1–10].
In the real world, some epidemics, such as malaria, dengue fever, gonorrhea and bacterial infections, may spread with different intensities at different ages. For example, measles and varicella mainly occur in juveniles, while it is reasonable to consider the transmission of diseases such as typhus and diphtheria in the adult population. In recent years, epidemic models with stage structure have been studied in many papers [11–17]. For some diseases (for example, tuberculosis, influenza, measles), on adequate contact with an infective, a susceptible individual becomes exposed, that is, infected but not infective. This individual remains in the exposed class for a certain period before becoming infective (see, for example, [18–22]). In this paper, we formulate an epidemic model with latency spreading in a heterogeneous host population. Let \(S^{(1)}_{k}\), \(S^{(2)}_{k}\), \(E_{k}\), \(I_{k}\) and \(R_{k}\) denote the immature susceptible, mature susceptible, infected but non-infectious, infectious and recovered populations in the kth group, respectively. The disease incidence in the kth group can be calculated as $$\sum_{i=1}^{2} S^{(i)}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}(I_{j}), $$ where the sum takes into account cross-infections from all groups and \(\beta^{(i)}_{kj}\) is the transmission coefficient between compartments \(S^{(i)}_{k}\) and \(I_{j}\). The function \(G_{j}(I_{j})\) covers several incidence functions from the literature; for instance, \(G_{j}(I_{j}) =\frac{I_{j}}{1+\alpha_{j}I_{j}}\) (saturation effect). Let \(d^{(1)}_{k}\) and \(d^{(2)}_{k}\) represent the death rates of the \(S^{(1)}_{k}\) and \(S^{(2)}_{k}\) populations, respectively.
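As a quick sanity check, the saturated incidence above is strictly increasing while \(G(x)/x=1/(1+\alpha x)\) is nonincreasing with \(\lim_{x\to0}G(x)/x=1\), which is exactly the behaviour required of the \(G_{k}\) in the assumptions below. A minimal numerical verification (the value \(\alpha=0.5\) is illustrative, not taken from the paper):

```python
# Verify, on a grid, that G(x) = x/(1 + alpha*x) is increasing while
# G(x)/x is nonincreasing, with G(x)/x -> 1 as x -> 0 (alpha illustrative).
alpha = 0.5

def G(x):
    return x / (1.0 + alpha * x)

xs = [0.001 * k for k in range(1, 10001)]   # grid on (0, 10]
gs = [G(x) for x in xs]
ratios = [G(x) / x for x in xs]             # equals 1/(1 + alpha*x)

G_increasing = all(b > a for a, b in zip(gs, gs[1:]))
ratio_nonincreasing = all(b <= a for a, b in zip(ratios, ratios[1:]))
```

Both flags come out true, and the ratio at the left end of the grid is close to the limit \(\delta=1\).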
Then we obtain the following model for a disease with latency: $$ \left \{ \textstyle\begin{array}{@{}l} \dot{S}^{(1)}_{k}=\varphi_{k}(S^{(1)}_{k})-\sum^{n}_{j=1}\beta^{(1)}_{kj}S^{(1)}_{k} G_{j}(I_{j}) -a_{k}S^{(1)}_{k},\\ \dot{S}^{(2)}_{k}=a_{k}S^{(1)}_{k}-\sum^{n}_{j=1}\beta^{(2)}_{kj}S^{(2)}_{k} G_{j}(I_{j}) -d^{(2)}_{k}S^{(2)}_{k},\\ \dot{E_{k}}=\sum^{2}_{i=1}\sum^{n}_{j=1}\beta^{(i)}_{kj}S^{(i)}_{k} G_{j}(I_{j})-(d_{k}+\eta_{k})E_{k},\\ \dot{I_{k}}=\eta_{k}E_{k}-(d_{k}+\mu_{k}+\gamma_{k})I_{k},\\ \dot{R_{k}}=\gamma_{k} I_{k}-d_{k}R_{k}, \quad k=1,2,\ldots,n, \end{array}\displaystyle \right . $$ where \(\varphi_{k}(S^{(1)}_{k})\) denotes the net growth rate of the immature susceptible class in the kth group (a typical form of \(\varphi_{k}(S^{(1)}_{k})\) is \(\varphi_{k}(S^{(1)}_{k})=b_{k}-d^{(1)}_{k}S^{(1)}_{k}\) with \(b_{k}\) the recruitment constant and \(d^{(1)}_{k}\) the natural death rate). \(a_{k}\) is the conversion rate from an immature individual to a mature individual in group k. \(\eta_{k}\) represents the rate of becoming infectious after a latent period in the kth group. \(d_{k}\), \(\mu_{k}\) and \(\gamma_{k}\) are the natural death rate, the disease-related death rate and the recovery rate in the kth group, respectively. All parameter values are assumed to be nonnegative and \(a_{k}, \eta_{k}, d^{(i)}_{k}, d_{k}>0\). Model (1) can also be regarded as an SVEIR model in which \(S_{k}^{(1)}\) is the unvaccinated class and \(S_{k}^{(2)}\) is the vaccinated class, with vaccination rate \(a_{k}\). Studies of SVEIR models can be found, for example, in [23, 24].
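To make the model concrete, here is a minimal forward-Euler simulation of (1) for a single group (\(n=1\)), dropping the decoupled \(R\) equation, with the typical form \(\varphi(S)=b-d^{(1)}S\) and the saturated incidence \(G(I)=I/(1+I)\). All parameter values are illustrative, chosen only so that the infection persists; they are not taken from the paper:

```python
# Forward-Euler integration of model (1) with n = 1 (R equation dropped).
# phi(S) = b - d1*S, G(I) = I/(1 + I); every parameter value is illustrative.
b, d1 = 1.0, 0.1          # recruitment and natural death of S^(1)
a = 0.2                   # maturation (conversion) rate
d2 = 0.1                  # death rate of S^(2)
beta1, beta2 = 0.3, 0.2   # transmission coefficients for S^(1), S^(2)
d, eta = 0.1, 0.5         # natural death rate; rate of leaving the latent class
mu, gamma = 0.1, 0.2
m = d + mu + gamma        # m = d + mu + gamma

def G(I):
    return I / (1.0 + I)

S1, S2, E, I = 2.0, 2.0, 0.1, 0.1   # positive initial data
dt, steps = 0.01, 50_000             # integrate up to t = 500
for _ in range(steps):
    g = G(I)
    dS1 = b - d1 * S1 - beta1 * S1 * g - a * S1
    dS2 = a * S1 - beta2 * S2 * g - d2 * S2
    dE = (beta1 * S1 + beta2 * S2) * g - (d + eta) * E
    dI = eta * E - m * I
    S1 += dt * dS1
    S2 += dt * dS2
    E += dt * dE
    I += dt * dI
```

For these illustrative values the disease-free levels are \(S^{(1)}_{0}=b/(d^{(1)}+a)=10/3\) and \(S^{(2)}_{0}=aS^{(1)}_{0}/d^{(2)}=20/3\), the corresponding reproduction number exceeds one, and the trajectory stays nonnegative and bounded while the infection persists.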
Since the variable \(R_{k}\) does not appear in the remaining four equations of (1), if we denote \(m_{k}:=d_{k}+\mu_{k}+\gamma_{k}\), then we can obtain the following reduced system: $$ \left \{ \textstyle\begin{array}{@{}l} \dot{S}^{(1)}_{k}=\varphi_{k}(S^{(1)}_{k})-\sum^{n}_{j=1}\beta^{(1)}_{kj}S^{(1)}_{k} G_{j}(I_{j}) -a_{k}S^{(1)}_{k},\\ \dot{S}^{(2)}_{k}=a_{k}S^{(1)}_{k}-\sum^{n}_{j=1}\beta^{(2)}_{kj}S^{(2)}_{k} G_{j}(I_{j}) -d^{(2)}_{k}S^{(2)}_{k},\\ \dot{E_{k}}=\sum^{2}_{i=1}\sum^{n}_{j=1}\beta^{(i)}_{kj}S^{(i)}_{k} G_{j}(I_{j})-(d_{k}+\eta_{k})E_{k},\\ \dot{I_{k}}=\eta_{k}E_{k}-m_{k}I_{k},\quad k=1,2,\ldots,n. \end{array}\displaystyle \right . $$ The initial conditions for system (2) are $$ S^{(1)}_{k}(0)>0,\qquad S^{(2)}_{k}(0)>0, \qquad E_{k}(0)>0, \qquad I_{k}(0)>0, \quad k=1,2,\ldots,n. $$ The organization of this paper is as follows. In the next section, we prove some preliminary results for system (2). In Section 3, the main theorem of this paper is stated and proved. In the last section, a brief discussion and numerical simulations which support our theoretical analysis are given. We assume:

(A1) \(\varphi_{k}\) and \(G_{k}\) are Lipschitz on \([0,+\infty)\);

(A2) \(\varphi_{k}\) is strictly decreasing on \([0,+\infty)\), and there exists \(S^{(1)}_{k0}>0\) such that $$\varphi_{k}\bigl(S^{(1)}_{k0}\bigr) -a_{k}S^{(1)}_{k0}=0; $$

(A3) \(\frac{G_{k}(x)}{x}\) is nonincreasing on \((0,+\infty)\) and $$\delta_{k}=\lim_{x\rightarrow0} \frac{G_{k}(x)}{x} >0\mbox{ exists},\quad k=1,2,\ldots,n. $$

From our assumptions, it is clear that system (2) has a unique solution for any given initial conditions (3) and the solution remains nonnegative. If (A2) holds, then we see that system (2) has a disease-free equilibrium $$P_{0}=\bigl(S_{10}^{(1)}, S_{10}^{(2)}, \ldots, S_{n0}^{(1)}, S_{n0}^{(2)}, 0 ,0, \ldots,0\bigr), $$ where $$ \varphi_{k}\bigl(S_{k0}^{(1)}\bigr)=d^{(2)}_{k}S_{k0}^{(2)}, \qquad a_{k}S_{k0}^{(1)}=d^{(2)}_{k}S_{k0}^{(2)},\quad k=1,2,\ldots,n.
$$ For two nonnegative n-square matrices \(\mathbf{A}=(a_{kj})\) and \(\mathbf{B}=(b_{kj})\), we write \(\mathbf{A}\leq\mathbf{B}\) if \(a_{kj}\leq b_{kj}\) for all k and j, and \(\mathbf{A}<\mathbf{B}\) if \(\mathbf{A}\leq\mathbf{B}\) and \(\mathbf{A}\neq\mathbf{B}\). Following [25], we set the matrices $$\begin{aligned}& \mathbf{F}:= \Biggl(\sum_{i=1}^{2} \beta^{(i)}_{kj} S_{k0}^{(i)} \delta_{j} \Biggr)_{n \times n}, \qquad \mathbf{V}:=\operatorname{diag} \biggl( \frac{ m_{1}(d_{1}+\eta_{1})}{\eta_{1}}, \frac{ m_{2} (d_{2}+\eta_{2})}{\eta_{2}},\ldots, \frac{m_{n} (d_{n}+\eta_{n})}{\eta_{n}} \biggr) . \end{aligned}$$ The next generation matrix for system (2) is $$\begin{aligned} \mathbf{Q}&:=\mathbf{FV^{-1}} = \biggl( \frac{ \eta_{k} \sum_{i=1}^{2} \beta^{(i)}_{kj} S_{k0}^{(i)} \delta_{j}}{m_{k}( d_{k}+\eta_{k} ) } \biggr)_{n \times n}\\ &= \begin{bmatrix} \frac{\eta_{1} \sum_{i=1}^{2} \beta^{(i)}_{11} S_{10}^{(i)} \delta_{1}}{m_{1}( d_{1}+\eta_{1} ) } & \cdots& \frac{ \eta_{1} \sum_{i=1}^{2} \beta^{(i)}_{1n} S_{10}^{(i)} \delta_{n}}{m_{1}( d_{1}+\eta_{1} ) } \\ \vdots &\ddots & \vdots\\ \frac{ \eta_{n} \sum_{i=1}^{2} \beta^{(i)}_{n1} S_{n0}^{(i)} \delta_{1}}{m_{n}( d_{n}+\eta_{n} ) } & \cdots& \frac{ \eta_{n} \sum_{i=1}^{2} \beta^{(i)}_{nn} S_{n0}^{(i)} \delta_{n}}{m_{n}( d_{n}+\eta_{n} ) } \end{bmatrix}. \end{aligned}$$ Thus, we obtain the basic reproduction number \(R_{0}\) for system (2) as $$R_{0}=\rho(\mathbf{Q}), $$ where ρ denotes the spectral radius. Let \(N_{k}=S^{(1)}_{k}+S^{(2)}_{k}+E_{k}+I_{k}\) and \(\underline{d}_{k}=\min\{ d^{(1)}_{k}, d^{(2)}_{k}, d_{k}, m_{k} \}\), \(k=1,2,\ldots,n\). Then from (2) we have $$ \dot{N_{k}}\leq\varphi_{k}\bigl(S^{(1)}_{k}\bigr)+d^{(1)}_{k}S^{(1)}_{k}- \underline{d}_{k}N_{k}.
$$ We derive from (5) that the region $$\begin{aligned} \Gamma =& \biggl\{ \bigl(S^{(1)}_{1}, S^{(2)}_{1}, \ldots, S^{(1)}_{n}, S^{(2)}_{n}, E_{1},\ldots, E_{n} , I_{1}, \ldots, I_{n} \bigr) \in \mathbb{R}_{+}^{4n}: S^{(1)}_{k}\leq S_{k0}^{(1)},\\ &{} S^{(2)}_{k}\leq S_{k0}^{(2)}, S^{(1)}_{k}+S^{(2)}_{k}+E_{k}+I_{k} \leq \frac{\varphi_{k}(0)+d^{(1)}_{k} S_{k0}^{(1)}}{\underline{d}_{k}}, k=1,2,\ldots,n \biggr\} \end{aligned}$$ is positively invariant with respect to (2). Let \(\Gamma^{\circ}\) denote the interior of Γ. In this section, we study the global stability of the equilibria of system (2).

Theorem 3.1 Assume that (A1)-(A3) hold and \(\mathbf{B}=(\sum_{i=1}^{2}\beta^{(i)}_{kj})\) is irreducible. (i) If \(R_{0}\leq1\), then \(P_{0}\) is globally asymptotically stable in Γ; (ii) if \(R_{0}>1\), then \(P_{0}\) is unstable and system (2) admits at least one endemic equilibrium in \(\Gamma^{\circ}\).

Proof Set $$\begin{aligned}& \mathbf{S}=\bigl(S^{(1)}_{1}, S^{(2)}_{1}, \ldots,S^{(1)}_{n},S^{(2)}_{n}\bigr), \qquad \mathbf{S^{0}}=\bigl(S^{(1)}_{10}, S^{(2)}_{10}, \ldots ,S^{(1)}_{n0},S^{(2)}_{n0} \bigr), \\& \mathbf{I}=(I_{1}, I_{2}, \ldots, I_{n}),\qquad \mathbf{Q(S,I)}= \biggl( \frac{ \eta_{k} \sum^{2}_{i=1} \beta^{(i)}_{kj}S^{(i)}_{k} G_{j}(I_{j}) }{m_{k} ( d_{k}+\eta_{k} )I_{j} } \biggr)_{n \times n} . \end{aligned}$$ Since \(\mathbf{B}\) is irreducible, \(\mathbf{Q(S,I)}\) and \(\mathbf{Q}\) are irreducible. By (A3), we have \(\mathbf{0}\leq\mathbf{Q(S,I)}\leq\mathbf{Q}\). Hence \(\mathbf{Q(S,I)}+ \mathbf{Q}\) is also irreducible. That is, \(0 \leq\mathbf{Q(S,I)}<\mathbf{Q}\) and \(\mathbf{Q(S,I)}+ \mathbf{Q}\) is irreducible provided that \(\mathbf{S} \neq\mathbf{S^{0}}\). Thus, by [26], Corollary 1.5, p.27, \(\rho(\mathbf{Q(S,I)})<\rho(\mathbf{Q})\) if \(\mathbf{S} \neq \mathbf{S^{0}}\). Since Q is irreducible, there exist \(\omega_{k}>0\), \(k=1,2,\ldots,n\), such that $$(\omega_{1}, \omega_{2}, \ldots, \omega_{n}) \rho(\mathbf{Q})= (\omega_{1}, \omega_{2}, \ldots, \omega_{n})\mathbf{Q} .
$$ Consider a Lyapunov functional $$L =\sum_{k=1}^{n}\frac{\omega_{k} \eta_{k}}{m_{k} (d_{k}+\eta_{k} )} \biggl[ E_{k} + \frac{d_{k}+\eta_{k}}{\eta_{k}}I_{k} \biggr]. $$ Differentiating L along the solution of system (2), we obtain $$\begin{aligned} \dot{L} =& \sum_{k=1}^{n} \frac{\omega_{k} \eta_{k}}{m_{k} (d_{k}+\eta_{k} ) } \Biggl[ \sum^{2}_{i=1}\sum ^{n}_{j=1}\beta^{(i)}_{kj}S^{(i)}_{k} G_{j}(I_{j}) - \frac{m_{k} ( d_{k}+\eta_{k} ) }{ \eta_{k} } I_{k} \Biggr] \\ =& \sum_{k=1}^{n}\omega_{k} \biggl[ \frac{ \sum^{2}_{i=1}\sum^{n}_{j=1} \eta_{k} \beta^{(i)}_{kj}S^{(i)}_{k} G_{j}(I_{j}) }{m_{k} ( d_{k}+\eta_{k} ) } - I_{k} \biggr] \\ =&(\omega_{1}, \omega_{2}, \ldots, \omega_{n}) \bigl(\mathbf{Q(S,I)}\mathbf {I}^{T}-\mathbf{I}^{T}\bigr) \\ \leq&(\omega_{1}, \omega_{2}, \ldots, \omega_{n}) \bigl(\mathbf{Q}\mathbf{I}^{T}- \mathbf{I}^{T}\bigr) \\ =&\bigl[\rho(\mathbf{Q})-1\bigr] (\omega_{1}, \omega_{2}, \ldots, \omega_{n})\mathbf{I}^{T}\leq0. \end{aligned}$$ If \(R_{0}<1\), then \(\dot{L}=0\) if and only if \(\mathbf {I}^{T}=\mathbf{0}\). If \(R_{0}=1\), then \(\dot{L}=0\) implies $$(\omega_{1}, \omega_{2}, \ldots, \omega_{n}) \bigl(\mathbf{Q(S,I)}\mathbf {I}^{T}-\mathbf{I}^{T}\bigr) =0. $$ Therefore, \(\dot{L}=0\) if and only if \(\mathbf{I}=\mathbf{0}\), or \(\mathbf{S}=\mathbf{S^{0}}\). On the other hand, from the last equation in system (2), we see that \(\mathbf{I}=\mathbf{0}\) implies that \(E_{k}=0\) for \(k=1,2,\ldots,n\). Hence, the largest invariant subset of the set, where \(\dot{L}=0\), is the singleton \(\{P_{0}\}\). By LaSalle's invariance principle, \(P_{0}\) is globally asymptotically stable for \(R_{0}\leq1\). If \(R_{0}>1\) and \(\mathbf{I}\neq\mathbf{0}\), then $$\bigl[\rho(\mathbf{Q})-1\bigr] (\omega_{1}, \omega_{2}, \ldots, \omega_{n})\mathbf {I}^{T} >0. $$ Thus, by continuity, we have \(\dot{L}= (\omega_{1}, \omega_{2}, \ldots, \omega_{n})(\mathbf{Q(S,I)}\mathbf{I}^{T}-\mathbf{I}^{T}) >0\) in a neighborhood of \(P_{0}\) in \(\Gamma^{\circ}\). 
This implies that \(P_{0}\) is unstable. From a uniform persistence result of [27] and an argument similar to that in the proof of Proposition 3.3 of [28], we can deduce that the instability of \(P_{0}\) implies the uniform persistence of system (2) in \(\Gamma^{\circ}\). This together with the uniform boundedness of solutions of system (2) in \(\Gamma^{\circ}\) implies that system (2) has an endemic equilibrium in \(\Gamma^{\circ}\) (see Theorem 2.8.6 of [29] or Theorem D.3 of [30]). The proof is completed. □ By Theorem 3.1, if \(\mathbf{B}=(\sum_{i=1}^{2}\beta^{(i)}_{kj})\) is irreducible, (A1)-(A3) hold and \(R_{0}>1\), then system (2) has an endemic equilibrium \(P^{*}\) in \(\Gamma^{\circ}\). Let $$P^{*}=\bigl(S_{1}^{(1)*}, S_{1}^{(2)*}, \ldots, S_{n}^{(1)*},S_{n}^{(2)*}, E_{1}^{*},\ldots, E_{n}^{*}, I_{1}^{*}, \ldots, I_{n}^{*}\bigr), $$ then the components of \(P^{*}\) satisfy $$\begin{aligned}& \varphi_{k}\bigl(S^{(1)*}_{k}\bigr)= \sum_{i=1}^{2}S^{(i)*}_{k}\sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr)+d^{(2)}_{k}S_{k}^{(2)*}, \end{aligned}$$ $$\begin{aligned}& a_{k}S^{(1)*}_{k}= S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr)+d^{(2)}_{k}S_{k}^{(2)*}, \end{aligned}$$ $$\begin{aligned}& \sum_{i=1}^{2}S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr)=(d_{k}+ \eta_{k})E^{*}_{k}=\frac{m_{k}(d_{k}+\eta_{k})}{\eta_{k}}I^{*}_{k},\quad k=1,2,\ldots,n. \end{aligned}$$ Since \(\varphi_{k}\) is strictly decreasing on \([0,+\infty)\), we have $$ \bigl[\varphi_{k}\bigl(S^{(1)}_{k}\bigr)- \varphi_{k}\bigl(S^{(1)*}_{k}\bigr) \bigr] \biggl( 1- \frac{ S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) \leq0, $$ where equality holds if and only if \(S^{(1)}_{k}=S_{k}^{(1)*}\), \(k=1,2,\ldots,n\).
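For a single group, the equilibrium conditions (6)-(8) reduce to one scalar equation in \(I^{*}\): with \(\varphi(S)=b-d^{(1)}S\) and \(G(I)=I/(1+I)\), equations (6) and (7) give \(S^{(1)*}=b/(d^{(1)}+a+\beta^{(1)}G(I^{*}))\) and \(S^{(2)*}=aS^{(1)*}/(\beta^{(2)}G(I^{*})+d^{(2)})\), and the remaining condition (8) can be solved by bisection. A sketch with illustrative parameter values (not taken from the paper) for which \(R_{0}>1\):

```python
# Solve the n = 1 endemic equilibrium from (6)-(8) by bisection on I*.
# phi(S) = b - d1*S, G(I) = I/(1+I); all parameter values illustrative.
b, d1, a, d2 = 1.0, 0.1, 0.2, 0.1
beta1, beta2 = 0.3, 0.2
d, eta, m = 0.1, 0.5, 0.4

def G(I):
    return I / (1.0 + I)

def S_star(I):
    # S1*, S2* expressed through I* via (6) and (7)
    g = G(I)
    S1 = b / (d1 + a + beta1 * g)
    S2 = a * S1 / (beta2 * g + d2)
    return S1, S2

def residual(I):
    # equation (8): (beta1*S1* + beta2*S2*) G(I*) = m(d+eta)/eta * I*
    S1, S2 = S_star(I)
    return (beta1 * S1 + beta2 * S2) * G(I) - m * (d + eta) / eta * I

lo, hi = 1e-9, 50.0   # residual > 0 near 0 (since R0 > 1) and < 0 for large I
for _ in range(200):   # bisection
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0.0:
        lo = mid
    else:
        hi = mid
I_star = 0.5 * (lo + hi)
S1_star, S2_star = S_star(I_star)
E_star = m / eta * I_star   # from (8): (d+eta)E* = m(d+eta)/eta * I*
```

The computed components are strictly positive and satisfy (6)-(8) up to floating-point error, consistent with the existence result above.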
We further make the following assumption: \(G_{k}\) is strictly increasing on \([0,+\infty)\), and $$ \frac{G_{k}(x_{k})I_{k}}{G_{k}(I_{k})x_{k}} + \frac{G_{k}(I_{k})}{G_{k}(x_{k})}-\frac{I_{k}}{x_{k}} -1\leq0,\quad k=1,2, \ldots,n, $$ where \(x_{k}>0\) is chosen in an arbitrary way and equality holds if \(I_{k}=x_{k}\).

Theorem 3.2 Assume that \(\mathbf{B}=[\sum_{i=1}^{2}\beta^{(i)}_{kj}]\) is irreducible. If \(R_{0}>1\), then \(P^{*}\) is globally asymptotically stable.

Proof Set \(\overline{\beta}_{kj}=\sum_{i=1}^{2} \beta^{(i)}_{kj}S_{k}^{(i)*}G_{j}(I_{j}^{*})\), \(1\leq k,j\leq n\), and $$\overline{B} = \begin{bmatrix} \sum_{l \neq1} \overline{\beta}_{1l} & -\overline{\beta}_{21}& \ldots& -\overline{\beta}_{n1}\\ -\overline{\beta}_{12} & \sum_{l \neq2} \overline{\beta}_{2l} & \ldots& -\overline{\beta}_{n2} \\ \vdots& \vdots& \ddots& \vdots\\ -\overline{\beta}_{1n} & -\overline{\beta}_{2n} & \ldots& \sum_{l \neq n} \overline{\beta}_{nl} \end{bmatrix}. $$ Then \(\overline{B}\) is also irreducible. It follows from Lemma 2.1 of [1] that the solution space of the linear system $$ \overline{B}\mathbf{v}=0 $$ has dimension 1, with a basis $$ \mathbf{v}:=(v_{1}, v_{2}, \ldots, v_{n})^{T}=( \xi_{1}, \xi_{2}, \ldots, \xi_{n})^{T}, $$ where \(\xi_{k}\) denotes the cofactor of the kth diagonal entry of \(\overline{B}\). Note that from (11) we have $$ \sum_{j=1}^{n}\overline{\beta}_{kj}v_{k}= \sum_{j=1}^{n}\overline{\beta}_{jk}v_{j},\quad k=1, 2, \ldots, n.
$$ From (13), we have $$\begin{aligned} & \sum_{k=1}^{n}v_{k}\sum _{j=1}^{n} \sum_{i=1}^{2} \beta^{(i)}_{kj}S^{(i)*}_{k}G_{j}(I_{j}) \\ &\quad=\sum_{k,j=1}^{n}\sum _{i=1}^{2}\beta ^{(i)}_{jk}S^{(i)*}_{j}v_{j}G_{k}(I_{k})= \sum_{k=1}^{n} \Biggl[ \sum _{j=1}^{n}\sum_{i=1}^{2} \beta^{(i)}_{jk} S^{(i)*}_{j}G_{k} \bigl(I_{k}^{*}\bigr)v_{j} \Biggr] \frac{G_{k}(I_{k})}{G_{k}(I_{k}^{*})} \\ &\quad=\sum_{k=1}^{n} \Biggl[ \sum _{j=1}^{n}(\overline{\beta}_{jk} v_{j}) \Biggr]\frac{G_{k}(I_{k})}{G_{k}(I_{k}^{*})}=\sum_{k=1}^{n} \Biggl[ \sum_{j=1}^{n}(\overline{ \beta}_{kj} v_{k}) \Biggr]\frac{G_{k}(I_{k})}{G_{k}(I_{k}^{*})} \\ &\quad=\sum_{k=1}^{n}v_{k}\sum _{j=1}^{n} \sum_{i=1}^{2} \beta^{(i)}_{kj}S^{(i)*}_{k}G_{j} \bigl(I^{*}_{j}\bigr)\frac {G_{k}(I_{k})}{G_{k}(I^{*}_{k})}. \end{aligned}$$ $$\begin{aligned} V =&\sum_{k=1}^{n}v_{k} \Biggl[ \sum_{i=1}^{2} \biggl(S^{(i)}_{k}-S^{(i)*}_{k} -S^{(i)*}_{k}\ln\frac{ S^{(i)}_{k}}{S^{(i)*}_{k}} \biggr) +E_{k}- E^{*}_{k} \\ &{} - E^{*}_{k}\ln\frac{E_{k}}{E^{*}_{k}}+ \frac{d_{k}+\eta_{k}}{\eta_{k}} \int_{I^{*}_{k}}^{I_{k}} \frac{G_{k}(x)-G_{k}(I^{*}_{k})}{G_{k}(x)}\,dx \Biggr]. 
\end{aligned}$$ Differentiating V along the solution of system (2), we obtain $$\begin{aligned} \dot{V} =&\sum_{k=1}^{n} v_{k} \Biggl\{ \varphi_{k}\bigl(S^{(1)}_{k} \bigr)-d^{(2)}_{k}S^{(2)}_{k}- \frac{m_{k}(d_{k}+\eta_{k})}{\eta_{k}} I_{k} \\ &{}-\frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \Biggl[ \varphi_{k}\bigl(S^{(1)}_{k} \bigr)-S^{(1)}_{k}\sum_{j=1}^{n} \beta^{(1)}_{kj} G_{j}(I_{j}) -a_{k}S^{(1)}_{k} \Biggr] \\ &{}-\frac{S^{(2)*}_{k}}{S^{(2)}_{k}} \Biggl[ a_{k}S^{(1)}_{k}-S^{(2)}_{k} \sum_{j=1}^{n}\beta^{(2)}_{kj} G_{j}(I_{j}) -d^{(2)}_{k}S^{(2)}_{k} \Biggr] \\ &{}-\frac{E^{*}_{k}}{E_{k}} \Biggl[ \sum^{2}_{i=1} \sum^{n}_{j=1}\beta^{(i)}_{kj}S^{(i)}_{k} G_{j} (I_{j})-(d_{k}+\eta_{k})E_{k} \Biggr] \\ &{}-\frac{G_{k}(I^{*}_{k})}{G_{k}(I_{k}) } \biggl[ (d_{k}+\eta_{k} )E_{k} - \frac{m_{k}(d_{k}+\eta_{k})}{\eta_{k}} I_{k} \biggr] \Biggr\} \\ =& \sum_{k=1}^{n}v_{k} \Biggl\{ \varphi_{k}\bigl(S^{(1)}_{k}\bigr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) +d^{(2)}_{k}S^{(2)*}_{k} \biggl( 1- \frac{S^{(2)}_{k}}{S^{(2)*}_{k}} \biggr) \\ &{}+a_{k}S^{(1)*}_{k} \biggl( 1-\frac{S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} \biggr)+\sum_{i=1}^{2} S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}(I_{j}) \\ &{}-\frac{E^{*}_{k}}{E_{k}}\sum_{i=1}^{2}S^{(i)}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}(I_{j}) +(d_{k}+\eta_{k} )E^{*}_{k} \biggl(1-\frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k}G_{k}(I_{k}) } \biggr) \\ &{}+\frac{m_{k}(d_{k}+\eta_{k})}{\eta_{k}} I^{*}_{k} \frac{ I_{k}G_{k}(I^{*}_{k})}{I^{*}_{k}G_{k}(I_{k})}- \frac{m_{k}(d_{k}+\eta_{k})}{\eta_{k}} I^{*}_{k} \frac{ I_{k} }{ I^{*}_{k}} \Biggr\} .
\end{aligned}$$ From (7) and (8), we have $$\begin{aligned} \dot{V} = &\sum_{k=1}^{n}v_{k} \Biggl\{ \varphi_{k}\bigl(S^{(1)}_{k}\bigr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) +d^{(2)}_{k}S^{(2)*}_{k} \biggl( 1- \frac{S^{(2)}_{k}}{S^{(2)*}_{k}} \biggr) \\ &{}-\frac{E^{*}_{k}}{E_{k}} \sum_{i=1}^{2} S^{(i)}_{k} \sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}(I_{j})\\ &{}+ \Biggl[S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) +d^{(2)}_{k}S^{(2)*}_{k} \Biggr] \biggl( 1-\frac{S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} \biggr)\\ &{}+ \sum_{i=1}^{2} S^{(i)*}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j} \bigl(I^{*}_{j} \bigr) \biggl[ 1 - \frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } +\frac{G_{j}(I_{j})}{G_{j}(I^{*}_{j})} +\frac{ I_{k} G_{k}(I^{*}_{k})}{ I^{*}_{k} G_{k}(I_{k}) } - \frac{I_{k}}{I^{*}_{k}} \biggr] \Biggr\} . \end{aligned}$$ By (10) and (14), we obtain $$\begin{aligned} \dot{V} \leq& \sum_{k=1}^{n}v_{k} \Biggl\{ \varphi_{k}\bigl(S^{(1)}_{k}\bigr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) +d^{(2)}_{k}S^{(2)*}_{k} \biggl( 1- \frac{S^{(2)}_{k}}{S^{(2)*}_{k}} \biggr) \\ &{}-\frac{E^{*}_{k}}{E_{k}}\sum_{i=1}^{2} S^{(i)}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}(I_{j})+ \Biggl[S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr)+d^{(2)}_{k}S^{(2)*}_{k} \Biggr] \\ &{}\times \biggl( 1-\frac{S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} \biggr) +\sum _{i=1}^{2} S^{(i)*}_{k}\sum _{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr) \biggl[ 2- \frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } \biggr] \Biggr\} =:B_{1}. \end{aligned}$$ From (6), we know that $$ \varphi_{k}\bigl(S^{(1)*}_{k}\bigr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) = \Biggl(\sum_{i=1}^{2}S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr)+d^{(2)}_{k}S_{k}^{(2)*} \Biggr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr). 
$$ By (16), we can rewrite \(B_{1}\) as $$\begin{aligned} B_{1} =&\sum_{k=1}^{n}v_{k} \Biggl\{ \bigl[ \varphi_{k}\bigl(S^{(1)}_{k}\bigr)- \varphi_{k}\bigl(S^{(1)*}_{k}\bigr) \bigr] \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) \\ &{}+d^{(2)}_{k}S^{(2)*}_{k} \biggl( 3- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}}-\frac {S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} -\frac{S^{(2)}_{k}}{S^{(2)*}_{k}} \biggr) \\ &{}-\frac{E^{*}_{k}}{E_{k}} \sum_{i=1}^{2} S^{(i)}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}(I_{j}) +S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \\ &{}\times \biggl(2-\frac{S^{(1)*}_{k}}{S^{(1)}_{k}}-\frac {S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} \biggr)+ S^{(1)*}_{k}\sum_{j=1}^{n} \beta^{(1)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) \\ &{}+\sum_{i=1}^{2} S^{(i)*}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 2 - \frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } \biggr] \Biggr\} . \end{aligned}$$ By (9) and the arithmetic-geometric mean, we easily see that $$\begin{aligned} B_{1} \leq& \sum_{k=1}^{n}v_{k} \Biggl\{ -\frac{E^{*}_{k}}{E_{k}} \sum_{i=1}^{2} S^{(i)}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}(I_{j}) \\ &{}+S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl( 2-\frac{S^{(1)*}_{k}}{S^{(1)}_{k}}-\frac {S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} \biggr) \\ &{}+S^{(1)*}_{k}\sum_{j=1}^{n} \beta^{(1)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl( 1- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \biggr) \\ &{}+\sum_{i=1}^{2} S^{(i)*}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 2 - \frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } \biggr] \Biggr\} =:B_{2}.
\end{aligned}$$ We can rewrite \(B_{2}\) as $$\begin{aligned} B_{2} =&\sum_{k=1}^{n}v_{k} \Biggl\{ S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 3- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}}-\frac {S^{(1)}_{k}S^{(2)*}_{k}}{S^{(1)*}_{k}S^{(2)}_{k}} \\ &{}-\frac{S^{(2)}_{k} E^{*}_{k} G_{j}(I_{j}) }{S^{(2)*}_{k}E_{k}G_{j}(I^{*}_{j})} \biggr] +S^{(1)*}_{k}\sum _{j=1}^{n}\beta^{(1)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr) \biggl[ 2- \frac{S^{(1)*}_{k}}{S^{(1)}_{k}} \\ &{}-\frac{S^{(1)}_{k} E^{*}_{k} G_{j}(I_{j}) }{S^{(1)*}_{k}E_{k}G_{j}(I^{*}_{j}) } \biggr]+\sum_{i=1}^{2} S^{(i)*}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 1- \frac{ E_{k}G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } \biggr] \Biggr\} . \end{aligned}$$ By the arithmetic-geometric mean, we have that $$\begin{aligned} B_{2} \leq& \sum_{k=1}^{n}v_{k} \Biggl\{ 3S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[1 - \biggl(\frac{E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j})} \biggr)^{\frac{1}{3}} \biggr] \\ &{}+2S^{(1)*}_{k}\sum_{j=1}^{n} \beta^{(1)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[1 - \biggl( \frac{ E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } \biggr)^{\frac{1}{2} } \biggr] \\ &{}+\sum_{i=1}^{2} S^{(i)*}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 1- \frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } \biggr] \Biggr\} =:B_{3}. 
\end{aligned}$$ $$\begin{aligned} B_{3} =& \sum_{k=1}^{n}v_{k} \Biggl\{ 3S^{(2)*}_{k}\sum_{j=1}^{n} \beta^{(2)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 1 - \biggl[ \frac{ E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } \biggr]^{\frac{1}{3} } +\ln \biggl[ \frac{ E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } \biggr]^{\frac{1}{3} } \biggr] \\ &{}+2S^{(1)*}_{k}\sum_{j=1}^{n} \beta^{(1)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \biggl[ 1 - \biggl[ \frac{ E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } \biggr]^{\frac{1}{2} } +\ln \biggl[ \frac{ E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } \biggr]^{\frac{1}{2} } \biggr] \\ &{}-\sum_{i=1}^{2} S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr) \ln\frac{E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } \\ &{}+\sum_{i=1}^{2} S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr) \biggl[ 1- \frac{ E_{k}G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } +\ln\frac{ E_{k} G_{k}(I^{*}_{k})}{ E^{*}_{k} G_{k}(I_{k}) } \biggr] \\ &{}-\sum_{i=1}^{2} S^{(i)*}_{k}\sum_{j=1}^{n} \beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j} \bigr) \ln\frac{ E_{k} G_{k}(I^{*}_{k}) }{E^{*}_{k}G_{k}(I_{k}) } \Biggr\} . \end{aligned}$$ Using the fact that \(1-x+\ln x \leq0\), where equality holds if and only if \(x=1\), we obtain $$\begin{aligned} B_{3} \leq& \sum_{k=1}^{n}v_{k} \sum_{i=1}^{2} S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr) \biggl[ -\ln \frac{ E^{*}_{k} G_{j}(I_{j}) }{E_{k}G_{j}(I^{*}_{j}) } -\ln \frac{ E_{k} G_{k}(I^{*}_{k}) }{E^{*}_{k}G_{k}(I_{k}) } \biggr] \\ =& \sum_{k=1}^{n}v_{k}\sum _{i=1}^{2} S^{(i)*}_{k} \sum_{j=1}^{n}\beta^{(i)}_{kj} G_{j}\bigl(I^{*}_{j}\bigr) \ln \frac{ G_{k}(I_{k}) G_{j}(I^{*}_{j}) }{G_{k}(I^{*}_{k})G_{j}(I_{j}) } \\ =& \sum_{k=1}^{n}v_{k}\sum ^{n}_{j=1} \overline{\beta}_{kj} \ln \frac{G_{k}(I_{k} )G_{j}(I^{*}_{j}) }{G_{k}(I^{*}_{k})G_{j}(I_{j})}. 
\end{aligned}$$ In the following, we will show that $$ H_{n} := \sum_{k=1}^{n}v_{k} \sum^{n}_{j=1} \overline{ \beta}_{kj} \ln\frac{G_{k}(I_{k} )G_{j}(I^{*}_{j}) }{G_{k}(I^{*}_{k})G_{j}(I_{j})} \equiv0. $$ We first give the proof of (20) for \(n=2\), which conveys the essential ideas clearly without the notational complexity introduced by larger values of n. When \(n=2\), we have $$\begin{aligned} {H}_{2} = \sum_{k=1}^{2}v_{k} \sum^{2}_{j=1} \overline{ \beta}_{kj} \ln\frac{G_{k}(I_{k} )G_{j}(I^{*}_{j}) }{G_{k}(I^{*}_{k})G_{j}(I_{j})}. \end{aligned}$$ Formula (12) gives \(v_{1}=\overline{\beta}_{21}\) and \(v_{2}=\overline{\beta}_{12}\) in this case. Expanding \(H_{2}\) yields $$\begin{aligned} H_{2} =&\overline{\beta}_{21} \overline{ \beta}_{11} \ln \frac{G_{1}(I_{1} )G_{1}(I^{*}_{1}) }{G_{1}(I^{*}_{1})G_{1}(I_{1})} + \overline{\beta}_{12} \overline{\beta}_{22} \ln\frac{G_{2}(I_{2} )G_{2}(I^{*}_{2}) }{G_{2}(I^{*}_{2})G_{2}(I_{2})} \\ &{}+\overline{\beta}_{21}\overline{\beta}_{12} \ln \frac{G_{1}(I_{1} )G_{2}(I^{*}_{2}) }{G_{1}(I^{*}_{1})G_{2}(I_{2})}+\overline{\beta}_{12}\overline{\beta}_{21} \ln\frac{G_{2}(I_{2} )G_{1}(I^{*}_{1}) }{G_{2}(I^{*}_{2})G_{1}(I_{1})} \\ =&\overline{\beta}_{12}\overline{\beta}_{21} \biggl[ \ln \frac{G_{1}(I_{1} )G_{2}(I^{*}_{2}) }{G_{1}(I^{*}_{1})G_{2}(I_{2})} + \ln\frac{G_{2}(I_{2} )G_{1}(I^{*}_{1}) }{G_{2}(I^{*}_{2})G_{1}(I_{1})} \biggr]=0. \end{aligned}$$ For more general n, by an argument similar to that in the proof of \(\sum_{k, j=1}^{n}v_{k} \overline{\beta}_{kj} \ln \frac{E^{*}_{k}E_{j} }{E_{k}E^{*}_{j}}\equiv0\) in [7], we obtain that $$\sum_{k=1}^{n}v_{k} \sum ^{n}_{j=1} \overline{\beta}_{kj} \ln\frac{G_{k}(I_{k} )G_{j}(I^{*}_{j}) }{G_{k}(I^{*}_{k})G_{j}(I_{j})} =- \sum_{k,j=1}^{n}v_{k} \overline{\beta}_{kj} \ln\frac{G_{k}(I^{*}_{k} )G_{j}(I_{j}) }{G_{k}(I_{k})G_{j}(I^{*}_{j})}\equiv0. $$ From (17)-(19), we see that if \(\dot{V}=0\), then $$\begin{aligned} S^{(i)}_{k}=S^{(i)*}_{k},\quad i=1,2, k=1,2, \ldots,n.
\end{aligned}$$ If (21) holds, it follows from (2) that $$\left \{ \textstyle\begin{array}{@{}l} 0=\varphi_{k}(S^{(1)*}_{k})- \sum^{n}_{j=1}\beta ^{(1)}_{kj}S^{(1)*}_{k} G_{j}(I_{j}) -a_{k}S^{(1)*}_{k},\\ 0=a_{k}S^{(1)*}_{k}- \sum^{n}_{j=1}\beta ^{(2)}_{kj}S^{(2)*}_{k}G_{j}(I_{j})-d^{(2)}_{k}S^{(2)*}_{k}. \end{array}\displaystyle \right . $$ Then we obtain that $$\dot{E_{k}}= \bigl( \varphi_{k}\bigl(S^{(1)*}_{k} \bigr)-a_{k}S^{(1)*}_{k} \bigr) + \bigl( a_{k}S^{(1)*}_{k}-d^{(2)}_{k}S^{(2)*}_{k} \bigr) -(d_{k}+\eta_{k})E_{k}. $$ This implies that $$ \lim_{t\rightarrow+\infty}E_{k}=\frac{ ( \varphi_{k}(S^{(1)*}_{k})-a_{k}S^{(1)*}_{k} ) + ( a_{k}S^{(1)*}_{k}-d^{(2)}_{k}S^{(2)*}_{k} )}{(d_{k}+\eta_{k})}=E^{*}_{k}. $$ By (22) and the fourth equation of system (2), we have $$ \lim_{t\rightarrow+\infty}I_{k}= \frac{ \eta_{k}E^{*}_{k}}{m_{k}} = I^{*}_{k}. $$ From (21)-(23) and the properties of V, we conclude that the largest invariant subset of the set where \(\dot{V}=0\) is the singleton \(\{P^{*}\}\). By LaSalle's invariance principle, \(P^{*}\) is globally asymptotically stable for \(R_{0}>1\). □

Numerical examples

For certain sexually transmitted diseases, HIV/AIDS for example, it is natural to consider two groups of people: a group of males and a group of females. Further, it is often assumed that there are two important age stages for the susceptible: a group of immature susceptibles \(S^{(1)}_{k}\), who are less than 18 years old, and a group of mature susceptibles \(S^{(2)}_{k}\), who are more than 18 years old.
Thus, we consider the following model: $$\begin{aligned} \left \{ \textstyle\begin{array}{@{}l} \dot{S}^{(1)}_{k}=\varphi_{k}(S^{(1)}_{k})-\sum^{2}_{j=1}\beta ^{(1)}_{kj}S^{(1)}_{k} G_{j}(I_{j}) -a_{k}S^{(1)}_{k},\\ \dot{S}^{(2)}_{k}=a_{k}S^{(1)}_{k}-\sum^{2}_{j=1}\beta ^{(2)}_{kj}S^{(2)}_{k} G_{j}(I_{j}) -d^{(2)}_{k}S^{(2)}_{k},\\ \dot{E_{k}}=\sum^{2}_{i=1}\sum^{2}_{j=1}\beta^{(i)}_{kj}S^{(i)}_{k} G_{j}(I_{j})-(d_{k}+\eta_{k})E_{k},\\ \dot{I_{k}}=\eta_{k}E_{k}-m_{k}I_{k}, \quad k=1,2, \end{array}\displaystyle \right . \end{aligned}$$ where \(\varphi_{k}(S^{(1)}_{k})=b_{k}-d^{(1)}_{k}S^{(1)}_{k}\), \(G_{j}(I_{j})=\frac{I_{j}}{1+\alpha_{j}I_{j}}\). Clearly, (A1)-(A4) hold. We fix the parameters as follows: $$\begin{aligned} &b_{1}=50,\qquad b_{2}=30,\qquad d^{(1)}_{1}=0.001,\qquad d^{(2)}_{1}=0.2,\qquad d^{(1)}_{2}=0.002, \\ &d^{(2)}_{2}=0.3,\qquad d_{1}=0.1,\qquad d_{2}=0.2,\qquad \eta_{1}=0.1, \qquad\eta_{2}=0.2, \\ &m_{1}=0.5,\qquad m_{2}=0.6, \qquad a_{1}=0.6,\qquad a_{2}=0.5,\qquad \alpha_{1}=\alpha_{2}=0.1. \end{aligned}$$ Then we have \(P_{0}\approx(83.1947, 249.5840, 59.7610, 99.6016, 0, 0, 0, 0)\). Case 1. If \(\beta^{(1)}_{1j}=\beta^{(2)}_{k1}=0.002\), \(\beta^{(1)}_{2j}=\beta^{(2)}_{k2}=0.002\), \(k=1,2\), \(j=1,2\), then we obtain $$\mathbf{Q}\approx \begin{pmatrix} 0.6656& 0.5546 \\ 0.3187& 0.2656 \end{pmatrix},\qquad R_{0}\approx0.9312. $$ By Theorem 3.1, the disease dies out in both groups. Numerical simulation illustrates this fact (see Figure 1).

[Figure 1. Dynamic behavior of system (24) with parameter values in (25), Case 1; \(R_{0}\approx0.9312\). Initial conditions: \(S^{(1)}_{1}(0)=70\), \(S^{(2)}_{1}(0)=200\), \(S^{(1)}_{2}(0)=80\), \(S^{(2)}_{2}(0)=240\), \(E_{1}(0)=1\), \(E_{2}(0)=9\), \(I_{1}(0)=3\), \(I_{2}(0)=6\).]

Case 2.
If \(\beta^{(1)}_{1j}=\beta^{(2)}_{k1}=0.0025\), \(\beta^{(1)}_{2j}=\beta^{(2)}_{k2}=0.002\), \(k=1,2\), \(j=1,2\), then we have \(P^{*}\approx(82.7845, 244.9127, 59.4787, 98.2113, 4.6734, 1.0441, 0.9347, 0.3480)\) and \(R_{0}\approx 1.0941\). By Theorem 3.2, the disease persists in both groups. Numerical simulation illustrates this fact (see Figure 2).

References

Guo, H, Li, MY, Shuai, Z: Global stability of the endemic equilibrium of multigroup SIR epidemic models. Can. Appl. Math. Q. 14, 259-284 (2006) Sun, R, Shi, J: Global stability of multigroup endemic model with group mixing and nonlinear incidence rates. Appl. Math. Comput. 218, 280-286 (2011) Yuan, Z, Wang, L: Global stability of epidemiological models with group mixing and nonlinear incidence rates. Nonlinear Anal., Real World Appl. 11, 995-1004 (2011) Kuniya, T: Global stability analysis with a discretization approach for an age-structured multigroup SIR epidemic model. Nonlinear Anal., Real World Appl. 12, 2640-2655 (2011) Yuan, Z, Zou, X: Global threshold property in an epidemic model for disease with latency spreading in a heterogeneous host population. Nonlinear Anal., Real World Appl. 11, 3479-3490 (2011) Sun, R: Global stability of the endemic equilibrium of multigroup SIR epidemic models with nonlinear incidence rates. Comput. Math. Appl. 60, 2286-2291 (2010) Li, MY, Shuai, Z, Wang, C: Global stability of multi-group epidemic models with distributed delays. J. Math. Anal. Appl. 361, 38-47 (2010) Shu, H, Fan, D, Wei, J: Global stability of multi-group SEIR endemic models with distributed delays and nonlinear transmission. Nonlinear Anal., Real World Appl. 13, 1581-1592 (2012) Ding, D, Ding, X: Global stability of multi-group vaccination endemic models with delays. Nonlinear Anal., Real World Appl. 12, 1991-1997 (2011) Chen, H, Sun, J: Global stability of delay multigroup endemic models with group mixing and nonlinear incidence rates. Appl. Math. Comput.
218, 4391-4400 (2011) Alexanderian, A, Gobbert, MK, Rister, KR, Gaff, H, Lenhart, S, Schaefer, E: An age-structured model for the spread of epidemic cholera: analysis and simulation. Nonlinear Anal., Real World Appl. 12, 3483-3498 (2011) Liu, Y, Guo, S, Luo, Y: Impulsive epidemic model with differential susceptibility and stage structure. Appl. Math. Model. 36, 370-378 (2012) Zhang, X, Huo, H, Sun, X, Fu, Q: Impulsive epidemic model with differential susceptibility and stage structure. Appl. Math. Model. 36, 370-378 (2012) Shi, X, Cui, J, Zhou, X: Stability and Hopf bifurcation analysis of an eco-epidemic model with a stage structure. Nonlinear Anal. TMA 74, 1088-1106 (2011) Wu, C, Weng, P: Stability analysis of a SIS model with stage structured and distributed maturation delay. Nonlinear Anal. TMA 71, e892-e901 (2009) Inaba, H: Stability analysis of a SIS model with stage structured and distributed maturation delay. Math. Biosci. 201, 15-47 (2006) Feng, Z, Huang, W, Castillo-Chavez, C: Global behavior of a multi-group SIS epidemic model with age structure. J. Differ. Equ. 218, 292-324 (2005) Wang, W: Global behavior of an SEIR epidemic model with two delays. Appl. Math. Lett. 15, 423-428 (2002) Li, MY, Smith, HL, Wang, L: Global dynamics of an SEIR epidemic model with vertical transmission. SIAM J. Appl. Math. 62, 58-69 (2001) Sahu, GP, Dhar, J: Analysis of an SVEIS epidemic model with partial temporary immunity and saturation incidence rate. Appl. Math. Model. 36, 908-923 (2012) Zhang, T, Teng, Z: Extinction and permanence for a pulse vaccination delayed SEIRS epidemic model. Chaos Solitons Fractals 39, 2411-2425 (2009) Meng, X, Jiao, J, Chen, L: Two profitless delays for an SEIRS epidemic disease model with vertical transmission and pulse vaccination. Chaos Solitons Fractals 40, 2114-2125 (2009) Jiang, Y, Wei, H, Song, X, Mei, L, Su, G, Qiu, S: Global attractivity and permanence of a delayed SVEIR epidemic model with pulse vaccination and saturation incidence. 
Appl. Math. Comput. 213, 312-321 (2009) Xu, R: Global stability of a delayed epidemic model with latent period and vaccination strategy. Appl. Math. Model. 36, 5293-5300 (2012) Van den Driessche, P, Watmough, J: Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math. Biosci. 180, 29-48 (2002) Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979) Freedman, HI, Tang, MX, Ruan, SG: Uniform persistence of flows near a closed positively invariant set. J. Dyn. Differ. Equ. 6, 583-600 (1994) Li, MY, Graef, JR, Wang, L, Karsai, J: Global dynamics of a SEIR model with varying total population size. Math. Biosci. 160, 191-213 (1999) Bhatia, NP, Szegö, GP: Dynamical Systems: Stability Theory and Applications. Springer, Berlin (1967) Smith, HL, Waltman, P: The Theory of the Chemostat: Dynamics of Microbial Competition. Cambridge University Press, Cambridge (1995)

This work is supported by the National Natural Science Foundation of China (51349011), the Scientific Research Fund of Sichuan Provincial Education Department (11ZB192, 14ZB0115) and the Doctoral Research Fund of Southwest University of Science and Technology.

School of Science, Southwest University of Science and Technology, Mianyang, 621010, China: Baodan Tian & Ning Chen
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, 611731, China: Yunguo Jin & Shouming Zhong
School of Statistics, Chengdu University of Information Technology, Chengdu, 610103, China: Yunguo Jin

Correspondence to Baodan Tian. All authors contributed equally and significantly in writing this paper. All authors read and approved the final paper.

Tian, B., Jin, Y., Zhong, S. et al.
Global stability of an epidemic model with stage structure and nonlinear incidence rates in a heterogeneous host population. Adv Differ Equ 2015, 260 (2015). doi:10.1186/s13662-015-0594-4

Keywords: heterogeneous host; epidemic model; stage structure; nonlinear incidence rate
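The figures reported in Case 1 above can be spot-checked in a few lines. The sketch below (written for this discussion, not part of the original paper) recomputes the disease-free coordinates of \(P_{0}\) directly from system (24) with \(E_{k}=I_{k}=0\), so that \(S^{(1)*}_{k}=b_{k}/(d^{(1)}_{k}+a_{k})\) and \(S^{(2)*}_{k}=a_{k}S^{(1)*}_{k}/d^{(2)}_{k}\), and then evaluates the spectral radius of the reported matrix \(\mathbf{Q}\), assuming, as is standard for next-generation-matrix constructions, that \(R_{0}=\rho(\mathbf{Q})\):

```python
import math

# Parameter values from (25) (only those entering the disease-free equilibrium).
b = [50, 30]         # b_k
d1 = [0.001, 0.002]  # d_k^(1)
a = [0.6, 0.5]       # a_k
d2 = [0.2, 0.3]      # d_k^(2)

# Disease-free coordinates of system (24): set E_k = I_k = 0, so
# S_k^(1)* = b_k / (d_k^(1) + a_k) and S_k^(2)* = a_k * S_k^(1)* / d_k^(2).
S1 = [b[k] / (d1[k] + a[k]) for k in range(2)]
S2 = [a[k] * S1[k] / d2[k] for k in range(2)]
print([round(x, 4) for x in S1])  # [83.1947, 59.761]
print([round(x, 4) for x in S2])  # [249.584, 99.6016]

# Case 1: spectral radius of the reported 2x2 matrix Q via the quadratic formula
# for its eigenvalues (Q is nonnegative, so the larger root is the spectral radius).
q11, q12, q21, q22 = 0.6656, 0.5546, 0.3187, 0.2656
tr = q11 + q22
det = q11 * q22 - q12 * q21
rho = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0
print(round(rho, 4))  # 0.9312
```

Both checks agree with the text: the recomputed coordinates match \(P_{0}\approx(83.1947, 249.5840, 59.7610, 99.6016, 0, 0, 0, 0)\), and the spectral radius rounds to the reported \(R_{0}\approx0.9312<1\), consistent with the die-out conclusion of Theorem 3.1.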
Results for 'Olga Gomilko' The Body in Thinking.Olga Gomilko - 2008 - Proceedings of the Xxii World Congress of Philosophy 20:69-75.details The paper presents the main ideas of systematic research of the phenomenon of the human body as an essential characteristic of human being and the fundamental philosophical concept. It allows one to scrutinize the concept of the human body as a necessary research tool in the humanities. The human body is analyzed in the process of its conceptualization in the history of philosophy, in relation to which its logic and main phases are defined. The paradigms of the understanding of the (...) human body are identified as resomatization strategies of contemporary thinking. It allows one to claim that evolution of philosophy is inalienable from the process of conceptualization of the phenomenon of the human body. Using an ontological grounding of the human body as a key philosophical concept ensures reconciliation of philosophical anthropology and ontology. (shrink) The Body in Metaphysics The Embodied Mind: From Mind Power to Life Vitality.Olga Gomilko - 2015 - Dialogue and Universalism 25 (2):116-122.details This article discusses the corporeal component of the human mind. Uncertainty is a fundamental attribute of the human body due to which a body transforms itself into the body that allows to connect the world with the human mind. The process of overcoming the transcendental register of the human mind results in the ontological and anthropological shifts from ego to soma. Tracing the trajectory of these shifts we discover the bodily dimension in the human mind as its constitutive transcendental ground. (...) This dimension makes the mind not only open to the world but makes the world a part of the human mind. It prevents the mind from exerting power over the world, and gives rise to life vitality of the embodied mind. 
(shrink) Revisiting "Intelligent Nursing": Olga Petrovskaya in Conversation with Mary Ellen Purkis and Kristin Bjornsdottir.Olga Petrovskaya, Mary Ellen Purkis & Kristin Bjornsdottir - 2019 - Nursing Philosophy 20 (3).details Medical Ethics in Applied Ethics The Role of Empathy and Compassion in Conflict Resolution.Olga M. Klimecki - 2019 - Emotion Review 11 (4):310-325.details Empathy and empathy-related processes, such as compassion and personal distress, are recognized to play a key role in social relations. This review examines the role of empathy in interpersonal and... Emotions in Philosophy of Mind Recognizing Cited Facts and Principles in Legal Judgements.Olga Shulayeva, Advaith Siddharthan & Adam Wyner - 2017 - Artificial Intelligence and Law 25 (1):107-126.details In common law jurisdictions, legal professionals cite facts and legal principles from precedent cases to support their arguments before the court for their intended outcome in a current case. This practice stems from the doctrine of stare decisis, where cases that have similar facts should receive similar decisions with respect to the principles. It is essential for legal professionals to identify such facts and principles in precedent cases, though this is a highly time intensive task. In this paper, we present (...) studies that demonstrate that human annotators can achieve reasonable agreement on which sentences in legal judgements contain cited facts and principles. 
We further demonstrate that it is feasible to automatically annotate sentences containing such legal facts and principles in a supervised machine learning framework based on linguistic features, reporting per category precision and recall figures of between 0.79 and 0.89 for classifying sentences in legal judgements as cited facts, principles or neither using a Bayesian classifier, with an overall κ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\kappa$$\end{document} of 0.72 with the human-annotated gold standard. (shrink) The Modality Effect and Echoic Persistence.Olga C. Watkins & Michael J. Watkins - 1980 - Journal of Experimental Psychology: General 109 (3):251-278.details Philosophy of Psychology in Philosophy of Cognitive Science Neural Systems Connecting Interoceptive Awareness and Feelings.Olga Pollatos, Klaus Gramann & Rainer Schandry - 2007 - Human Brain Mapping 28 (1):9-18.details Emotion and Consciousness in Psychology in Philosophy of Cognitive Science Posthuman Sustainability: An Ethos for Our Anthropocenic Future.Olga Cielemęcka & Christine Daigle - 2019 - Theory, Culture and Society 36 (7-8):67-87.details Confronted with an unprecedented scale of human-induced environmental crisis, there is a need for new modes of theorizing that would abandon human exceptionalism and anthropocentrism and instead focus on developing environmentally ethical projects suitable for our times. In this paper, we offer an anti-anthropocentric project of an ethos for living in the Anthropocene. We develop it through revisiting the notion of sustainability in order to problematize the linear vision of human-centric futurity and the uniform 'we' of humanity upon which it (...) relies. 
We ground our analyses in posthumanism and material feminism, using works by posthumanist and material feminist thinkers such as Stacy Alaimo, Rosi Braidotti, Donna Haraway and Jane Bennett, among others. In dialogue with them, we offer the concept of posthuman sustainability that decenters the human, re-positions it in its ecosystem and, while remaining attentive to difference, fosters the thriving of all instances of life. (shrink) When Interoception Helps to Overcome Negative Feelings Caused by Social Exclusion.Olga Pollatos, Ellen Matthias & Johannes Keller - 2015 - Frontiers in Psychology 6.details What a Dog Can Do: Children with Autism and Therapy Dogs in Social Interaction.Olga Solomon - 2010 - Ethos: Journal of the Society for Psychological Anthropology 38 (1):143-166.details Cognitive Disabilities and Disorders in Philosophy of Cognitive Science Mental Disorders in Philosophy of Cognitive Science Engaging Diverse Social and Cultural Worlds: Perspectives on Benefits in International Clinical Research From South African Communities.Olga Zvonareva, Nora Engel, Eleanor Ross, Ron Berghmans, Ames Dhai & Anja Krumeich - 2015 - Developing World Bioethics 15 (1):8-17.details The issue of benefits in international clinical research is highly controversial. Against the background of wide recognition of the need to share benefits of research, the nature of benefits remains strongly contested. Little is known about the perspectives of research populations on this issue and the extent to which research ethics discourses and guidelines are salient to the expectations and aspirations existing on the ground. This exploratory study contributes to filling this void by examining perspectives of people in low-income South (...) African communities on benefits in international clinical research. Twenty-four individuals with and without experience of being involved in clinical research participated in in-depth interviews. 
Respondents felt that ancillary care should be provided to clinical research participants, while a clinical study conducted in particular community should bring better health to its members through post-trial benefits. Respondents' perspectives were grounded in the perception that the ultimate goal of international clinical research is to improve local health. We argue that perspectives and understandings of the respondents are shaped by local moral traditions rather than clinical research specificities and require attention as valid moral claims. It is necessary to acknowledge such claims and cultural worlds from which they emerge, thus building the foundation for equal and embracing dialogue to bridge different perspectives and handle contradicting expectations. (shrink) If I Could Just Stop Loving You: Anti-Love Biotechnology and the Ethics of a Chemical Breakup.Brian D. Earp, Olga A. Wudarczyk, Anders Sandberg & Julian Savulescu - 2013 - American Journal of Bioethics 13 (11):3-17.details ?Love hurts??as the saying goes?and a certain amount of pain and difficulty in intimate relationships is unavoidable. Sometimes it may even be beneficial, since adversity can lead to personal growth, self-discovery, and a range of other components of a life well-lived. But other times, love can be downright dangerous. It may bind a spouse to her domestic abuser, draw an unscrupulous adult toward sexual involvement with a child, put someone under the insidious spell of a cult leader, and even inspire (...) jealousy-fueled homicide. How might these perilous devotions be diminished? The ancients thought that treatments such as phlebotomy, exercise, or bloodletting could ?cure? an individual of love. But modern neuroscience and emerging developments in psychopharmacology open up a range of possible interventions that might actually work. These developments raise profound moral questions about the potential uses?and misuses?of such anti-love biotechnology. 
In this article, we describe a number of prospective love-diminishing interventions, and offer a preliminary ethical framework for dealing with them responsibly should they arise. (shrink) The Influence of Anglo-American Theoretical Models on the Evolution of the Nursing Discipline in Spain.Olga Rodrigo, Jordi Caïs & Cristina Monforte-Royo - 2017 - Nursing Inquiry 24 (3):e12175.details Interoceptive Awareness Mediates the Relationship Between Anxiety and the Intensity of Unpleasant Feelings.Olga Pollatos, Eva Traut-Mattausch, Heike Schroeder & Rainer Schandry - 2007 - Journal of Anxiety Disorders 21 (7):931-943.details Consciousness and Psychology, Misc in Philosophy of Cognitive Science Is There Nursing Phenomenology After Paley? Essay on Rigorous Reading.Olga Petrovskaya - 2014 - Nursing Philosophy 15 (1):60-71.details At the bedside, nurses are expected to be precise when they read indications on screens and on the bodies of patients and decide on the meaning of words framed by the context of acute care. In academia, although there is no incident report to fill when we misread or misrepresent complex philosophical ideas, the consequences of inaccurate reading include misplaced epistemological claims and poor scholarship. A long and broad convention of nursing phenomenological research, in its various forms, claims a philosophical (...) grounding in the ideas of Husserl, Heidegger, and other thinkers. But for nearly two decades, nurse phenomenologists' knowledge claims have been challenged by well‐informed criticisms, most notably by John Paley. At the heart of criticism lies an observation that Continental phenomenological thought is misrepresented in many nursing sources and that nursing phenomenology, both descriptive and interpretive, cannot appeal to the authority of either Husserl or Heidegger. Taking these criticisms seriously, I am asking, Is phenomenology after Paley possible? If misreading seems to be an issue, how can – or should – we read rigorously? 
My thinking through these questions is influenced by the ideas of Jacques Derrida. Under a condition of a play of language, of Derridian différance, when meaning is never self‐identical and never fully arrives, I suggest that one has to negotiate meanings through reading for differences. I develop this idea in relation to the methodological conventions of phenomenological nursing research and argue for a careful rereading of the whole field of phenomenological nursing research. Such rereading presupposes and necessitates interdisciplinary engagement between nursing and the humanities and interpretive social sciences. Greater familiarity with research practices of those disciplines that stress theoretical and writing rigour might make visible the limits of nursing research approaches and their quality criteria. An understanding of philosophical and theoretical works – a condition of quality scholarship – depends on our reading of both originary texts and contemporary literature from the humanities and the social sciences. This understanding, far from obliging researchers to always trace their work to its philosophical roots, opens other, often more sound, methodological possibilities. (shrink) Multilingualism at the Court of Justice of the European Union: Theoretical and Practical Aspects.Olga Łachacz & Rafał Mańko - 2013 - Studies in Logic, Grammar and Rhetoric 34 (1):75-92.details The paper analyses and evaluates the linguistic policy of the Court of Justice of the European Union against the background of other multilingual courts and in the light of theories of legal interpretation. Multilingualism has a direct impact upon legal interpretation at the Court, displacing traditional approaches with a hermeneutic paradigm. It also creates challenges to the acceptance of the Court's case-law in the Member States, which seem to have been adequately tackled by the Court's idiosyncratic translation policy. Book Review: A Short Treatise on the Metaphysics of Tsunamis. 
[REVIEW]Olga Rachello - 2019 - Thesis Eleven 153 (1):149-153.details Experiencing Syndemic: Disentangling the Biosocial Complexity of Tuberculosis Through Qualitative Research.Olga Zvonareva, Willemien van Bergen, Nadezhda Kabanets, Aleksander Alliluyev & Olga Filinyuk - 2019 - Journal of Biosocial Science 51 (3):403-417.details Citizen Science and the Politics of Environmental Data.Olga Kuchinskaya - 2019 - Science, Technology, and Human Values 44 (5):871-880.details In this commentary, I reflect on the differences between two independent citizen approaches to monitoring radiological contamination, one in Belarus after the 1986 Chernobyl nuclear accident and the other in Japan following the 2011 Fukushima Daiichi accident. I examine these approaches from the perspective of their contribution to making radiological contamination more publicly visible. The analysis is grounded in my earlier work, where I examined how we have come to know what we know about post–Chernobyl contamination and its effects in (...) Belarus, a former Soviet republic most heavily affected by the fallout. As I described in this study, much of what we know about the consequences of Chernobyl is based on the work of the Belarusian nonprofit Institute of Radiation Safety, "Belrad." I compare Belrad's approach to radiological monitoring with the work of the volunteer network Safecast, arguably one of the best-known citizen science projects in the world, which is working to monitor the scope of the post–Fukushima contamination. Through this comparison of approaches, I raise broader questions about a form of sensing practices—data-related practices of citizen science that make environmental hazards publicly in/visible. (shrink) Neural Networks Underlying Contributions From Semantics in Reading Aloud.Olga Boukrina & William W. 
Graves - 2013 - Frontiers in Human Neuroscience 7.details Philosophy of Neuroscience in Philosophy of Cognitive Science DIY Genetic Tests: A Product of Fact or Fallacy?Olga C. Pandos - 2020 - Journal of Bioethical Inquiry 17 (3):319-324.details Emotional Processing and Emotional Memory Are Modulated by Interoceptive Awareness.Olga Pollatos & Rainer Schandry - 2008 - Cognition and Emotion 22 (2):272-287.details Some Prosodic and Paralinguistic Features of Speech to Young Children.Olga K. Garnica - 1977 - In Catherine E. Snow & Charles A. Ferguson (eds.), Talking to Children. Cambridge University Press. pp. 63--88.details Formal Representation of Proper Names in Accordance with a Descriptive Theory of Reference.Olga Poller - 2014 - Polish Journal of Philosophy 8 (1):37-52.details In this paper I present a way of formally representing proper names in accordance with a description theory of reference–fixing and show that such a representation makes it possible to retain the claim about the rigidity of proper names and is not vulnerable to Kripke's modal objection. Names in Philosophy of Language Beyond Witches, Angels and Unicorns. The Possibility of Expanding Russell´s Existential Analysis.Olga Ramirez - 2018 - E-Logos Electronic Journal for Philosophy 25 (1):4-15.details This paper attempts to be a contribution to the epistemological project of explaining complex conceptual structures departing from more basic ones. The central thesis of the paper is that there are what I call "functionally structured concepts", these are non-harmonic concepts in Dummett's sense that might be legitimized if there is a function that justifies the tie between the inferential connection the concept allows us to trace. Proving this requires enhancing the russellian existential analysis of definite descriptions to apply to (...)
The utility of the proposal is shown for the case of thick ethical terms and an attempt is made to use it in explaining the development of natural numbers. This last move could allow us to go one step lower in explaining the genesis of natural numbers while maintaining the notion of abstract numbers as higher order entities. (shrink) Aspects of Meaning, Misc in Philosophy of Language Existence in Metaphysics Numbers in Philosophy of Mathematics The Basis of Meaning, Misc in Philosophy of Language The Handbook of Science and Technology Studies.Edward Hackett, Olga Amsterdamska, Michael Lynch & Judy Wajcman (eds.) - 2007 - MIT Press.details Sociology of Science in General Philosophy of Science Achieving Disbelief: Thought Styles, Microbial Variation, and American and British Epidemiology, 1900–1940.Olga Amsterdamska - 2004 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 35 (3):483-507.details The role of bacterial variation in the waxing and waning of epidemics was a subject of lively debate in late nineteenth and early twentieth-century bacteriology and epidemiology. The notion that changes in bacterial virulence were responsible for the rise and fall of epidemic diseases was an often-voiced, but little investigated hypothesis made by late nineteenth-century epidemiologists. It was one of the first hypotheses to be tested by scientists who attempted to study epidemiological questions using laboratory methods. This paper examines how (...) two groups of experimental epidemiologists, the British group led by W. W. C. Topley and Major Greenwood, and an American group directed by Leslie T. Webster at the Rockefeller Institute, studied the role of variations in bacterial virulence in the course of laboratory epidemics of mouse typhoid.
Relying on Ludwik Fleck's concept of thought styles and thought collectives, the paper analyzes the fundamental conceptual differences between these two groups of researchers and analyzes the kinds of innovations they introduced as they attempted to integrate bacteriological and epidemiological approaches. The paper shows that the stylistic differences between the two groups can be understood better in the context of the institutional histories and disciplinary relations of epidemiology and bacteriology in the two countries. (shrink) American Philosophy, Misc in Philosophy of the Americas The Use of PROMs and Shared Decision‐Making in Medical Encounters with Patients: An Opportunity to Deliver Value‐Based Health Care to Patients.Olga C. Damman, Anant Jani, Brigit A. Jong, Annemarie Becker, Margot J. Metz, Martine C. Bruijne, Danielle R. Timmermans, Martina C. Cornel, Dirk T. Ubbink, Marije Steen, Muir Gray & Carla El - 2020 - Journal of Evaluation in Clinical Practice 26 (2):524-540.details Effects of Valence and Emotional Intensity on the Comprehension and Memorization of Texts.Olga Megalakaki, Ugo Ballenghein & Thierry Baccino - 2019 - Frontiers in Psychology 10.details Introduction: Autism: Rethinking the Possibilities.Olga Solomon & Nancy Bagatell - 2010 - Ethos: Journal of the Society for Psychological Anthropology 38 (1):1-7.details Autism in Philosophy of Cognitive Science Disability in Applied Ethics Argumentative Meanings and Their Stylistic Configurations in Clinical Research Publications.Olga L. Gladkova, Chrysanne DiMarco & Randy Allen Harris - 2015 - Argument and Computation 6 (3):310-346.details Volume 6, Issue 3, September 2015, Page 310-346. 
The Descriptive Content of Names as Predicate Modifiers.Olga Poller - 2017 - Philosophical Studies 174 (9):2329-2360.details In this paper I argue that descriptive content associated with a proper name can serve as a truth-conditionally relevant adjunct and be an additional contribution of the name to the truth-conditions. Definite descriptions the so-and-so associated by speakers with a proper name can be used as qualifying prepositional phrases as so-and-so, so sentences containing a proper name NN is doing something could be understood as NN is doing something as NN. Used as an adjunct, the descriptive content of a proper (...) name expresses the additional circumstances of an action and constitute a part of a predicate. I argue that qualifying prepositional phrases should be analyzed as predicate modifiers and propose a formal representation of modified predicates. The additional truth-conditional relevance of the descriptive content of a proper name helps to explain the phenomenon of the substitution failure of coreferential names in simple sentences. (shrink) Strategic Corporate Social Responsibility and Orphan Drug Development: Insights From the US and the EU Biopharmaceutical Industry. [REVIEW]Olga Bruyaka, Hanko K. Zeitzmann, Isabelle Chalamon, Richard E. Wokutch & Pooja Thakur - 2013 - Journal of Business Ethics 117 (1):45-65.details In recent years, the biopharmaceutical industry has seen an increase in the development of so-called orphan drugs for the treatment of rare and neglected diseases. This increase has been spurred on by legislation in the United States, Europe, and elsewhere designed to promote orphan drug development. In this article, we examine the drivers of corporate social responsibility (CSR) activities in orphan drug markets and the extent to which biopharmaceutical firms engage in these activities with a strategic orientation. The unique context (...) 
of orphan drugs constitutes a research opportunity to test the applicability of existing theoretical perspectives on CSR and strategic CSR. Using Schwartz and Carroll's (Bus Ethics Q, 13(4):503–530, 2003) three-domain approach to CSR and the literature on strategic CSR as a theoretical background, we employ a combination of semi-structured interviews and a quantitative website content analysis to study practices of biopharmaceutical firms in the United States and European Union. Our findings show that both US- and EU-based companies engaged in orphan drugs development perceive their involvement as a responsible business activity beyond the economic dimension of CSR. However, for the majority of these companies their CSR activities do not qualify as strategic according to the criteria established in the literature. We also find significant differences between larger and smaller firms in their use of CSR. Based on these findings, we make several suggestions regarding orphan drug legislation and other measures that might help firms exploit strategic CSR benefits. (shrink) Drugs in Applied Ethics Medical Research Ethics in Applied Ethics The Body in the Mind: On the Relationship Between Interoception and Embodiment.Beate M. Herbert & Olga Pollatos - 2012 - Topics in Cognitive Science 4 (4):692-704.details The processing, representation, and perception of bodily signals (interoception) plays an important role for human behavior. Theories of embodied cognition hold that higher cognitive processes operate on perceptual symbols and that concept use involves reactivations of the sensory-motor states that occur during experience with the world. Similarly, activation of interoceptive representations and meta-representations of bodily signals supporting interoceptive awareness are profoundly associated with emotional experience and cognitive functions. This article gives an overview over present findings and models on interoception and (...) 
mechanisms of embodiment and highlights its relevance for disorders that are suggested to represent a translation deficit of bodily states into subjective feelings and self-awareness. (shrink) Embodiment and Situated Cognition in Philosophy of Cognitive Science States of Consciousness in Philosophy of Cognitive Science Unconscious and Conscious Processes in Philosophy of Cognitive Science Tensions Between the Prescriptive and Descriptive Ethics of Psychologists.Olga Voskuijl & Arne Evers - 2007 - Journal of Business Ethics 72 (3):279-291.details Ethical guidelines for psychologists are meant to stimulate and help psychologists to act appropriately with respect to clients, colleagues, and other individuals involved in their professional relations. This paper focuses on the similarity of codes of ethics of psychologists in European countries in general, and on specific ethical dilemmas in the area of work and organizations in particular. First, an overview is given of the development of ethical guidelines in Europe and the USA. Second, the results are presented of a (...) survey by E-mail amongst members of the European Federation of Psychologists' Associations (EFPA) to identify the differences and similarities between ethical guidelines of the affiliate members. Third, the potential dilemmas of stakeholders in work and organizational assessment are addressed. Finally, the results of a survey among Dutch selection psychologists are presented. The purpose of this study was to examine a possible tension between normative behavior and attitudes about normal behavior. It was concluded that ethical guidelines of European countries cover comparable (sub-)principles and that there are indications that individual psychologists agree with the written principles. In addition, suggestions for future research are given. 
(shrink) Dialectic of the University: A Critique of Instrumental Reason in Graduate Nursing Education.Olga Petrovskaya, Carol McDonald & Marjorie McIntyre - 2011 - Nursing Philosophy 12 (4):239-247.details Avicenna.Olga Lizzini - 2012 - Carocci.details Avicenna in Medieval and Renaissance Philosophy Neurally Dissociable Cognitive Components of Reading Deficits in Subacute Stroke.Olga Boukrina, A. M. Barrett, Edward J. Alexander, Bing Yao & William W. Graves - 2015 - Frontiers in Human Neuroscience 9.details State Social Health Care Policy in Ukraine [Державна соціальна політика охорони здоров'я в Україні].Olga Shaposhnik & Viktoria Khoroshikh - 2014 - Схід 6 (132):82-85.details The article considers the problems of public health in Ukraine and the corresponding state measures for their prevention and elimination. In particular, the authors regard as most effective, on the part of the state, the implementation of comprehensive employee health management programs, including such items as reducing the share of behavioral risk factors. In view of this, preventive work, both in society and at a specific enterprise, should include the prevention of harmful habits and the formation of a health-preserving environment and of public opinion that supports health-preserving behavior practices. An important aspect of the concept of the health of the working-age population (...) the authors consider to be the priority of social determinants over the influence of medicine and the health care system. (shrink) Wuğūd-Mawğūd/Existence-Existent in Avicenna. A Key Ontological Notion of Arabic Philosophy.Olga Lizzini - 2003 - Quaestio 3 (1):111-138.details Medieval Arabic and Islamic Philosophy in Philosophical Traditions, Miscellaneous Stabilizing Instability: The Controversy Over Cyclogenic Theories of Bacterial Variation During the Interwar Period.Olga Amsterdamska - 1991 - Journal of the History of Biology 24 (2):191 - 222.details History of Biology in Philosophy of Biology Understanding Greek Sculpture. Ancient Meanings, Modern Readings. [REVIEW]Olga Palagia & N.
Spivey - 1998 - Journal of Hellenic Studies 118:245-247.details Hellenistic and Later Ancient Philosophy, Misc in Ancient Greek and Roman Philosophy Pre-Socratic Philosophy in Ancient Greek and Roman Philosophy The Force of Digital Aesthetics. On Memes, Hacking, and Individuation.Olga Goriunova - 2015 - Nordic Journal of Aesthetics 24 (47).details The paper explores memes, digital artefacts that acquire a viral character and become globally popular, as an aesthetic trend that not only entices but propels and molds subjective, collective and political becoming. Following both Simondon and Bakhtin, memes are first considered as aesthetic objects that mediate individuation. Here, resonance between psychic, collective and technical individuation is established and re-enacted through the aesthetic consummation of self, the collective and the technical in the various performances of meme cultures. Secondly, if memes are (...) followed in the making, from birth to their spill-over onto wider social networks, the very expressive form of meme turns out to be borne by specific technical architecture and mannerisms of a small number of platforms, and, most notably, the image board 4chan. The source of memes' various forms of power is concentrated here. Memes are intimately linked to 4chan's /b/ board, the birthplace of Lulzsec and the Anonymous hacking networks. Memes' architectonics, as an inheritance of a few specific human-technical structures, in turn informs the production of new platforms, forms of networked. 
(shrink) Space and Time in the Child's Mind: Evidence for a Cross-Dimensional Asymmetry.Daniel Casasanto, Olga Fotakopoulou & Lera Boroditsky - 2010 - Cognitive Science 34 (3):387-405.details Russian Nihilism: The Cultural Legacy of the Conflict Between Fathers and Sons.Olga Vishnyakova - 2011 - Comparative and Continental Philosophy 3 (1):99-111.details I argue that the Nineteenth Century phenomenon of Russian nihilism, rather than belonging to the spiritual crisis that threatened Europe, was an independent and historically specific attitude of the Russian intelligentsia in their wholesale and utopian rejection of the prevailing values of their parents' generation. Turgenev's novel, Fathers and Sons, exemplifies this revolt in the literary character Bazarov, who embodies an archetypical account of the conflict between generations, social values, and traditions in Russian—but not just Russian—culture. Russian Philosophy in European Philosophy Reasoning Processes as Epistemic Dynamics.Olga Pombo - 2015 - Axiomathes 25 (1):41-60.details This work proposes an understanding of deductive, default and abductive reasoning as different instances of the same phenomenon: epistemic dynamics. It discusses the main intuitions behind each one of these reasoning processes, and suggest how they can be understood as different epistemic actions that modify an agent's knowledge and/or beliefs in a different way, making formal the discussion with the use of the dynamic epistemic logic framework. The ideas in this paper put the studied processes under the same umbrella, thus (...) highlighting their relationship and allowing a better understanding of how they interact together. (shrink) Epistemic Logic in Logic and Philosophy of Logic Between Local Practices and Global Knowledge: Public Initiatives in the Development of Agricultural Science in Russia in the 19th Century and Early 20th Century. 
[REVIEW]Olga Elina - 2014 - Centaurus 56 (4):305-329.details Domesticating Paley: How We Misread Paley.Olga Petrovskaya - 2014 - Nursing Philosophy 15 (1):72-75.details Rethinking Psychiatric Terror Against Nationalists in Ukraine: Spatial Dimensions of Post-Stalinist State Violence.Olga Bertelsen - 2014 - Kyiv-Mohyla Humanities Journal 1:27.details
Find the maximum value of $8\cdot27^{\log_6 x}+27\cdot8^{\log_6 x}-x^3$
Find the maximum value of $$8\cdot27^{\log_6 x}+27\cdot8^{\log_6 x}-x^3.$$ If I apply AM${}\ge{}$GM, then I can find the minimum value of this expression, but not sure how to find the max value.
inequality optimization logarithms exponential-function maxima-minima
Michael Rozenberg learner_avid
There is no minimum... – TheSimpliFire Mar 3 '18 at 16:33
Do you mean $8.27$ as in the decimal or the product $8\times27$? – TheSimpliFire Mar 3 '18 at 16:41
If those dots were meant to denote multiplication, change them to \cdot – J.G. Mar 3 '18 at 16:42
In that case there is still no minimum but a maximum of $216$. – TheSimpliFire Mar 3 '18 at 16:45
Let $\log_6x=y$, so that $x=6^y$. Then $$8\cdot 27^y+27\cdot8^y-(6^y)^3=216(3^{3(y-1)}+2^{3(y-1)}-3^{3(y-1)}2^{3(y-1)}-1)+216$$ $$=216-216(3^{3(y-1)}-1)(2^{3(y-1)}-1)$$ – lab bhattacharjee
Prove that $$8\cdot 27^{\log_{6}{x}}+27\cdot 8^{\log_{6}{x}}-x^3\le 216,$$ where the equal sign holds for $x=6$. – Dr. Sonnhard Graubner
For $x=6$ we get a value $216$. We'll prove that it's a maximal value.
Indeed, we need to prove that $$x^3+216\geq8\cdot27^{\log_6x}+27\cdot8^{\log_6x}$$ or $$\left(6^{\log_6x}\right)^3+216\geq8\cdot27^{\log_6x}+27\cdot8^{\log_6x}$$ or, since $\left(6^{\log_6x}\right)^3=27^{\log_6x}\cdot8^{\log_6x}$, $$27^{\log_6x}\cdot8^{\log_6x}-8\cdot27^{\log_6x}-27\cdot8^{\log_6x}+216\geq0$$ or $$\left(27^{\log_6x}-27\right)\left(8^{\log_6x}-8\right)\geq0,$$ which is obvious for $x\geq6$ and for $0<x\leq6.$ – Michael Rozenberg
Alternatively, by calculus: $$y=8\cdot 27^{\log_6{x}}+27\cdot 8^{\log_6{x}}-x^3\Rightarrow \frac{dy}{dx}=\frac{27\cdot 8^{\frac{\ln(x)}{\ln(6)}}\ln(8)+8\cdot 27^{\frac{\ln(x)}{\ln(6)}}\ln(27)}{x\ln(6)}-3x^2=0\Rightarrow x=6\Rightarrow \max(y)=216$$ – DXT
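The claimed maximum can also be checked numerically. The snippet below is a quick sanity check in plain Python (not taken from any of the answers): it evaluates the expression at $x=6$ and at a few other points.

```python
import math

def f(x):
    # 8*27^(log_6 x) + 27*8^(log_6 x) - x^3
    y = math.log(x, 6)
    return 8 * 27**y + 27 * 8**y - x**3

print(f(6))  # 216.0 at the claimed maximizer
# Strictly below 216 elsewhere, consistent with the factored form
# 216 - 216*(27^(y-1) - 1)*(8^(y-1) - 1)
for x in (2.0, 5.0, 7.0, 30.0):
    print(x, f(x))
```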
NUMERICAL EVALUATION OF THE EFFECT OF TYPE AND SHAPE OF PERFORATIONS ON THE BUCKLING OF THIN STEEL PLATES BY MEANS OF THE CONSTRUCTAL DESIGN METHOD Giulio Lorenzini * | Daniel Helbig | Caio Cesar Cardoso da Silva | Mauro de Vasconcellos Real | Elizaldo Domingues dos Santos | Liércio André Isoldi | Luiz Alberto Oliveira Rocha Department of Industrial Engineering, University of Parma, Parco Area delle Scienze, 181/A, Parma, 43124, Italy Programa de Pós-Graduação em Engenharia Mecânica (PROMEC), Universidade Federal do Rio Grande do Sul (UFRGS), Sarmento Leite St. nº 425, Porto Alegre, 90050-170, Brazil Programa de Pós-Graduação em Engenharia Oceânica (PPGEO), Universidade Federal do Rio Grande (FURG), Itália Ave. km 8, Rio Grande, 96203-900, Brazil [email protected] Thin steel plates - with or without cutouts - are structural components largely used in several engineering applications, such as buildings, bridges, ships, airplanes and automobiles. However, if an axial compressive load is imposed on these panels, an undesired instability phenomenon can occur: buckling. At a certain load magnitude the limit stress is reached and the plate suffers lateral (out-of-plane) displacements, indicating the occurrence of buckling. In plates, either elastic buckling or elasto-plastic buckling can occur, depending on dimensional, constructive or operational aspects. Therefore, in the present work, the Constructal Design method was adopted to investigate the influence of the type and shape of the cutout on plate buckling. To do so, computational models were developed by means of the Finite Element Method (FEM) to simulate the elastic (linear) and elasto-plastic (nonlinear) plate buckling.
Square and rectangular thin steel plates, simply supported on their four edges, with a centered cutout, were analyzed, the objective function being to maximize the buckling limit stress, avoiding the occurrence of plate buckling. The rectangular and square plates have a ratio H/L (ratio between height and length of the plate) of 0.5 and 1.0, respectively. A value of 0.2 for the cutout volume fraction (ratio between the cutout volume and the total plate volume) was adopted for different types of cutout: diamond, longitudinal hexagonal, transversal hexagonal, elliptical, and rectangular. The cutout shape variations were produced by the H0/L0 degree of freedom (which relates the characteristic dimensions of the cutout). The results showed that the cutout shape variation has a fundamental influence on the plate buckling behavior, determining whether the buckling is elastic or elasto-plastic and allowing the definition of a buckling stress limit curve for each studied cutout type. In addition, it was observed that the Constructal Design method led to the definition of optimal geometries, reaching buckling stress limit improvements of around 100%. Keywords: Constructal design, Thin steel plate with cutout, Linear elastic buckling, Nonlinear elasto-plastic buckling, computational modeling. Thin steel plate elements constitute very important structural components in many structures, such as ship decks and hulls, dock gates, plate and box girders of bridges, offshore structures, and structures used in aerospace industries. In many cases, these plates are subjected to axial compressive forces, which make them prone to instability or buckling. If the plate is slender, the buckling is elastic. However, if the plate is sturdy, it buckles in the plastic range, causing the so-called inelastic (or elasto-plastic) buckling. It is very likely in many cases to have holes in the plate elements for inspection, maintenance, and service purposes, and the size of these holes could be significant.
In such cases, the presence of these holes redistributes the membrane stresses in the plate and may cause significant reduction in its strength in addition to changing its buckling characteristics, El-Sawy et al. [1]. The buckling of a plate involves two planes, and two boundary conditions on each edge of the plate. The basic difference between a column and a plate lies in the buckling characteristics. A column, once it buckles, cannot resist any additional axial load. Thus, the critical load of a column is also its failure load. On the other hand, a plate, since it is invariably supported at the edges (e.g., interconnection between two structural plates, and web connected to flanges), continues to resist the additional axial load even after the primary buckling load. Thus, for a plate, the post-buckling load is much higher than the elastic buckling load. When designing structural members, this fact is largely exploited to minimize the weight of the structure, Iyengar [2]. Moreover, in several practical applications it is necessary to provide cutouts in plate structures to allow access for services or inspection and even for aesthetic purposes, as well as to reduce the structure self-weight. The presence of a hole in a plate panel changes the stress distribution within the member, alters its elastic buckling and post-buckling characteristics and generally reduces its ultimate load carrying capacity. The performance of a plate containing an opening is influenced by the nature of the applied stress (e.g. compressive, tensile, shear, etc.), besides the shape, size and location of the hole, Narayanan [3]. Among the elastic buckling studies, Sharkeley & Brown [4] studied the effect of eccentricity on square plates with square holes. They concluded that the center of a small square hole should be located away from the center of the structural element, but the center of a large square hole should be located on the center of the structural element.
El-Sawy & Nazmy [5] provided a comprehensive discussion on the elastic buckling of thin rectangular perforated plates for various hole shapes, sizes, and locations. El-Sawy & Martini [6] used the finite element method to determine the elastic buckling stresses of biaxially loaded perforated rectangular plates with circular holes located on the longitudinal axis. Alternatively, Moen & Schafer [7] developed, validated and summarized analytical expressions for estimating the influence of single or multiple holes on the elastic buckling critical stress of plates in bending or compression. In Rocha et al. [8, 9] and Isoldi et al. [10], the Constructal Design method was employed to determine the best shape and size of a centered cutout in a plate, aiming to maximize the critical buckling load. In the group of studies dedicated to the problem of elasto-plastic buckling, Narayanan & Chow [11] developed design charts based on the ultimate capacity of uniaxially compressed perforated plates with square and circular openings. Azizan & Roberts [12] generated interaction curves for the ultimate strength of square plates with central square and circular holes subjected to uniaxial compression, biaxial compression and pure shear. Yettram & Brown [13] studied the stability behaviour of flat square plates with central square perforations. Jwalamalini et al. [14] developed design charts for the stability of a simply supported square plate with an opening under in-plane loading such as uniform compression and trapezoidal loading. Madasamy & Kalyanaraman [15] presented the analysis of plated structures with rectangular cutouts and internal supports using the spline finite strip method. Durban & Zuckerman [16] examined the elastoplastic buckling of a rectangular plate, with various boundary conditions, under uniform compression combined with uniform tension (or compression) in the perpendicular direction. Shanmugam et al.
[17] presented a design formula for perforated plates with circular openings under axial compression, for simply supported and clamped boundary conditions. Paik et al. [18] presented ultimate strength formulations for ship plating under combined biaxial compression/tension, edge shear, and lateral pressure loads. Toulios & Caridis [19] carried out a numerical study on the effect of aspect ratio on the buckling and collapse behaviour of flat bar stiffened plates loaded in uniaxial compression. El-Sawy et al. [1] employed the finite element method to determine the elasto-plastic buckling stress of uniaxially loaded simply supported square and rectangular plates with circular openings. Bakker et al. [20] discussed analytical and semi-analytical formulas for describing the post-buckling behavior of uniformly compressed square plates with initial imperfections. Kumar et al. [21] studied, using nonlinear finite element analysis, the effect on the ultimate strength of increasing the size of a rectangular opening along the loading direction. Helbig et al. [22] studied the influence of the shape of an elliptical cutout on the elastic and elasto-plastic buckling of square and rectangular steel plates for two different thicknesses. The Constructal Design method was used to promote the hole shape variation, by means of the degree of freedom H0/L0 (ratio between the characteristic dimensions of the elliptical hole), while the slenderness influence was considered by the DOF H/t (ratio between height and thickness of the plate). The numerical results, obtained through the ANSYS software, indicated that both the cutout shape and the slenderness of the plate have a direct influence on the buckling behavior. In addition, the Constructal Design method allowed the definition of the maximum limit buckling load in each case. More recently, Helbig et al. [23] numerically analyzed the influence of the cutout shape on the plate buckling behavior by means of the Finite Element Method.
The Constructal Design was applied, ensuring a consistent comparison among elliptical, rectangular and diamond perforations. A constant cutout volume fraction of 0.20 was considered, while the degree of freedom H0/L0 was varied. The objective function was to maximize the limit stress, avoiding the plate buckling. A thin steel plate, simply supported on its four edges, with a centered perforation was considered. The results showed the influence of H0/L0 on the buckling behavior as well as the influence of the cutout shape on the limit stress. Considering the above, the study and understanding of the mechanical behavior of perforated steel plates subjected to buckling is of fundamental importance in structural engineering, especially when the goal is to improve the performance of these structural elements. Therefore, the main objective of this work is to numerically investigate the influence of the type and shape of the hole on the buckling behavior of perforated steel plates, in order to improve their mechanical behavior. The Constructal Design method is used in order to guarantee an adequate and consistent comparison among the studied cases. The objective function is to maximize the compressive stress, avoiding the occurrence of plate buckling. To do so, a hole volume fraction (ratio between the hole volume and the total plate volume) of 0.20 was considered, for different cutout types: diamond, longitudinal hexagonal, transversal hexagonal, elliptical, and rectangular. The shape of these perforations can vary by means of the ratio H0/L0, which relates the characteristic dimensions of each hole. Besides, two ratios between the plate height (H) and the plate length (L) were studied, H/L = 0.5 and H/L = 1.0, emphasizing that the total volume of the plate was kept constant, with a plate thickness (t) of 10 mm. In all studied cases, the plate is simply supported on its four edges and has a centered perforation.
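As a sketch of how the cutout dimensions follow from the volume fraction, the snippet below sizes an elliptical hole for a fraction of 0.20 and a given H0/L0 ratio, and checks a 100 mm margin between hole and plate edges. The plate dimensions used here (H = 1000 mm, L = 2000 mm, t = 10 mm) are illustrative assumptions, not values taken from the paper.

```python
import math

def elliptical_hole_dims(H, L, phi, r):
    """Return (H0, L0) of an elliptical hole with volume fraction phi.

    phi = V0/V = (pi/4 * H0 * L0 * t) / (H * L * t), with r = H0/L0,
    so H0 * L0 = 4 * phi * H * L / pi (the thickness t cancels out).
    """
    L0 = math.sqrt(4.0 * phi * H * L / (math.pi * r))
    return r * L0, L0

H, L, phi = 1000.0, 2000.0, 0.20   # mm, illustrative only
for r in (0.5, 1.0, 2.0):
    H0, L0 = elliptical_hole_dims(H, L, phi, r)
    # 100 mm minimal distance between hole edges and plate edges
    ok = (H - H0) / 2.0 >= 100.0 and (L - L0) / 2.0 >= 100.0
    print(f"H0/L0 = {r}: H0 = {H0:.1f} mm, L0 = {L0:.1f} mm, margin ok: {ok}")
```

For these assumed plate dimensions, the H0/L0 = 2.0 case already violates the edge-distance constraint, illustrating how the constraint trims the search space of hole shapes.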
A constraint imposing a minimal distance of 100 mm between the plate edges and the hole edges is also employed. 2. Buckling and Postbuckling of Plates In the late 1800s, Bryan successfully formulated and solved the problem of buckling for a linearly elastic and simply supported uniaxially compressed plate. Approximately 33 years later, Bleich made efforts to extend linear elastic plate buckling theory to the solution of problems of plates buckling above the proportional limit. He demonstrated that critical stresses for plates buckling between the proportional limit and the yield stress could be approximated using linear elastic theory with Young's modulus, E, replaced by a reduced value equal to $\sqrt{E_{t} E}$, where Et is the tangent modulus. Timoshenko agreed with this observation and further concluded that plate buckling stresses cannot exceed the yield stress, Dawe et al. [24]. For a layperson, the word buckling means sudden catastrophic failure of a structure involving large deformations. But in engineering parlance, buckling is a phenomenon that generally occurs well before deformations are very large. When a slender structure is loaded in compression, for small loads it deforms with hardly any noticeable change in geometry and load-carrying capacity. On reaching a critical load value, the structure suddenly experiences a large deformation and it may lose its ability to carry the load further. At this stage, the structure is considered to have buckled, Raviprakash [25]. The transition of the plate from the stable state of equilibrium to the unstable one, when subjected to a compressive load, is referred to as buckling or structural instability. The smallest value of the load producing buckling is called the critical or buckling load. The importance of buckling is the initiation of a deflection pattern which, if the loads are further increased above their critical values, rapidly leads to very large lateral deflections.
Consequently, it leads to large bending stresses, and eventually to complete failure of the plate. The linear buckling analysis of plates based on these assumptions makes it possible to determine accurately the critical loads (Pcr), which are of practical importance in the stability analysis of thin plates. However, this analysis gives no way of describing the behavior of plates after buckling, which is also of considerable interest. The post-buckling analysis of plates is usually difficult because it is basically a nonlinear problem, Ventsel [26]. Therefore, a plate has a post-critical load-carrying capacity that allows additional loading after elastic buckling has occurred. A plate is in that sense internally statically indeterminate, so its collapse does not occur when elastic buckling takes place, but later, at a higher loading level reached in the elasto-plastic range. This is taken into consideration in the ultimate limit state design of plates: the elastic buckling does not restrict the load-carrying capacity to the critical buckling stress; instead, the maximum capacity consists of two parts, the buckling load plus the additional post-critical load, Åkesson [27]. In other words, the ultimate loading capacity (Pu) of plates is not restricted to the occurrence of elastic buckling, since these structural elements possess a post-critical reserve strength, which enables additional loading capacity after buckling has occurred. This post-critical reserve strength is shown in the load/displacement diagram in Fig. 1.

Figure 1. Load/displacement diagram in the post-critical range

This capacity to carry additional load after elastic buckling is due to the formation of a membrane that stabilizes the buckle through a transverse tension band.
When the central part of the plate buckles, it loses the major part of its stiffness, and the load is then forced to be "linked" around this weakened zone into the stiffer parts on either side. Additionally, due to this redistribution, a transverse membrane in tension is formed and anchored, as can be seen from the load paths in Fig. 2, Åkesson [27].

Figure 2. The redistribution of the transfer of load in the ultimate limit state

The relative magnitude of the post-buckling strength to the buckling load depends on various parameters such as dimensional properties, boundary conditions, types of loading, and the ratio of buckling stress to yield stress, Yoo & Lee [28]. There is an analytical solution for the problem of the elastic buckling of a simply supported solid plate of length L, width H, thickness t, subjected to a distributed uniaxial load P, as shown in Fig. 3.

Figure 3. The solid plate subject to uniaxial compressive load

The critical load per unit length for elastic buckling can be written, according to Vinson [29], as:

$P_{cr}=k \frac{\pi^{2} D}{H^{2}}$ (1)

where π is the mathematical constant and k is a function of the aspect ratio H/L and the wavelength parameter m, given by:

$k=\left(m \frac{H}{L}+\frac{1}{m} \frac{L}{H}\right)^{2}$ (2)

and D is the plate bending stiffness, defined as:

$D=\frac{E t^{3}}{12\left(1-v^{2}\right)}$ (3)

where E is the Young's modulus and v is the Poisson's ratio of the plate material. The optimum value of m, i.e., the one that gives the lowest σcr, depends on the aspect ratio. For example, the optimum m is 1.0 for a square plate (H/L = 1.0), while it is 2.0 for a plate with L/H = 2.0. For a plate with a large aspect ratio, k = 4.0 serves as a good approximation. Since the aspect ratio of a component of a steel structural member, such as a web plate, is generally large, k can often be assumed simply equal to 4.0, Yamaguchi [30].
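As a quick numerical check of Eqs. (1)-(3), the sketch below (plain Python) evaluates the buckling coefficient k over the first few half-wave numbers m and the resulting critical load. It uses the unperforated plate data given later in the text for model verification (E = 210 GPa, v = 0.3, H = 1000 mm, L = 2000 mm, t = 10 mm):

```python
import math

# Unperforated verification plate (data given later in the text)
E = 210e9   # Young's modulus [Pa]
v = 0.3     # Poisson's ratio
H = 1.0     # plate width [m]
L = 2.0     # plate length [m]
t = 0.01    # plate thickness [m]

# Eq. (3): plate bending stiffness
D = E * t**3 / (12.0 * (1.0 - v**2))

# Eq. (2): buckling coefficient, minimized over the half-wave number m
k = min((m * H / L + (1.0 / m) * (L / H))**2 for m in range(1, 11))

# Eq. (1): critical load per unit length
P_cr = k * math.pi**2 * D / H**2   # [N/m]

print(f"k = {k:.2f}")                  # 4.00 (minimum at m = 2 for L/H = 2)
print(f"Pcr = {P_cr / 1e3:.2f} kN/m")  # 759.20 kN/m
```

The result reproduces the 759.20 kN/m analytical value quoted later in the verification of the computational model.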
In turn, the stress at which elastic buckling occurs, σcr, is defined by the average stress, equal to the uniformly applied compressive load, Pcr, divided by the thickness of the plate, t. This stress is called the elastic buckling stress, and it is given by:

$\sigma_{cr}=k \frac{\pi^{2} E}{12\left(1-v^{2}\right)}\left(\frac{t}{H}\right)^{2}$ (4)

3. Computational Models

There are many practical engineering problems for which exact solutions cannot be obtained. This inability may be attributed either to the complex nature of the governing differential equations or to the difficulties that arise in dealing with the boundary and initial conditions. To deal with such problems, one may resort to numerical approximations. In contrast to analytical solutions, which show the behavior of a system at any point within the system, numerical solutions approximate the solution only at discrete points, called nodes. The first step of any numerical procedure is the discretization. This process divides the medium of interest into a number of small subregions and nodes. There are two common classes of numerical methods: finite difference methods (FDM) and finite element methods (FEM). With the FDM, the differential equation is written for each node, and the derivatives are replaced by difference equations. This approach results in a set of simultaneous linear equations. Although finite difference methods are easy to understand and employ in simple problems, they become difficult to apply to problems with complex geometries or complex boundary conditions. This is also true for problems with non-isotropic material properties. In contrast, the FEM uses integral formulations rather than difference equations to create a system of algebraic equations. Moreover, an approximate continuous function is assumed to represent the solution for each element.
The complete solution is then generated by connecting or assembling the individual solutions, allowing for continuity at the inter-element boundaries, Moaveni [31]. In this context, the ANSYS® software, which is based on the FEM, was used to solve the elastic and elasto-plastic plate buckling problems. A set of interpolation functions is used to define uniquely the state of displacement within each element in terms of its nodal displacements. The state of strain within the element is uniquely defined by the strain-displacement relationship. The state of stress throughout the element is determined by the material stress-strain law. By applying the Virtual Work Principle, the nodal forces corresponding to a displacement field in the element are determined. These nodal forces are related to the nodal displacements through the element stiffness matrix. Thus, the conditions of overall equilibrium have already been satisfied within the element. All that is then necessary is to establish equilibrium conditions at the nodes of the structure. The resulting linear equation system contains the displacements as unknowns. Once these equations have been solved, the structural problem is determined. The internal forces in the elements, i.e., the stresses, can easily be found by using the strain-displacement relationship and the material stress-strain law, Real & Isoldi [32]. Analytical solutions are only available for problems involving very simple geometry, loading and boundary conditions. Computer modeling can be used to search for approximate solutions to more complex problems, Helbig et al. [22]. In the present work the 8-node structural shell finite element called SHELL93 was used (Fig. 4). This element is particularly well suited to model curved shells. The element has six degrees of freedom at each node: translations in the nodal x, y and z directions and rotations about the nodal x, y and z axes. The deformation shapes are quadratic in both in-plane directions.
The element has plasticity, stress stiffening, large deflection, and large strain capabilities, Ansys® [33].

Figure 4. SHELL93 finite element [33]

3.1 Elastic buckling

Eigenvalue linear buckling analysis is generally used to estimate the critical buckling load of ideal structures, Vinson [29]. This numerical procedure is used to calculate the theoretical buckling load of a linear elastic structure. Since it assumes that the structure exhibits linearly elastic behavior, the predicted buckling loads are overestimated. So, if the component is expected to exhibit structural instability, the search for the load that causes structural bifurcation is referred to as a buckling load analysis. Because the buckling load is not known a priori, the finite element equilibrium equations for this type of analysis involve the solution of homogeneous algebraic equations whose lowest eigenvalue corresponds to the buckling load, while the associated eigenvector represents the primary buckling mode, Madenci & Guven [34]. The strain formulation used in the analysis includes both linear and nonlinear terms. Thus, the total stiffness matrix, [K], is obtained by summing the conventional stiffness matrix for small deformation, [KE], with another matrix, [KG], the so-called geometric stiffness matrix, Przemieniecki [35]. The matrix [KG] depends not only on the geometry but also on the initial internal forces (stresses) existing at the start of the loading step, {P0}.
Therefore, the total stiffness matrix of the plate at the load level {P0} can be written as:

$[K]=\left[K_{E}\right]+\left[K_{G}\right]$ (5)

When the load reaches the level {P} = λ{P0}, where λ is a scalar, the stiffness matrix can be defined as:

$[K]=\left[K_{E}\right]+\lambda\left[K_{G}\right]$ (6)

Now, the governing equilibrium equations for the plate behavior can be written as:

$\left[\left[K_{E}\right]+\lambda\left[K_{G}\right]\right]\{U\}=\lambda\left\{P_{0}\right\}$ (7)

where {U} is the total displacement vector, which may therefore be determined from:

$\{U\}=\left[\left[K_{E}\right]+\lambda\left[K_{G}\right]\right]^{-1} \lambda\left\{P_{0}\right\}$ (8)

At buckling, the plate exhibits a large increase in its displacements with no increase in the load. From the definition of the matrix inverse as the adjoint matrix divided by the determinant of the coefficients, it can be noted that the displacements {U} tend to infinity when:

$\operatorname{det}\left[\left[K_{E}\right]+\lambda\left[K_{G}\right]\right]=0$ (9)

Equation (9) represents an eigenvalue problem which, when solved, provides the lowest eigenvalue, λ1, corresponding to the critical load level {Pcr} = λ1{P0} at which buckling occurs. In addition, the associated scaled displacement vector {U} defines the mode shape at buckling. In the finite element program ANSYS®, the eigenvalue problem is solved by the Lanczos numerical method, Ansys® [33].

3.2 Elasto-plastic buckling

Nonlinear, or collapse, buckling analysis is a more accurate approach, since this finite element analysis is capable of analyzing actual structures with imperfections. This approach is highly recommended for the design or evaluation of actual structures. This technique employs a nonlinear structural analysis with gradually increasing loads to seek the load level at which the structure becomes unstable.
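Bridging back to Section 3.1, the eigenvalue condition of Eq. (9) can be illustrated on a toy problem. The 2×2 matrices below are arbitrary illustrative stand-ins for [KE] and [KG], not output of any plate model; the smallest positive eigenvalue plays the role of the load multiplier λ1:

```python
import numpy as np

# Illustrative stand-ins (NOT from a real plate model):
# KE: symmetric positive-definite small-deformation stiffness
# KG: geometric stiffness for a unit reference load {P0} (destabilizing, hence negative)
KE = np.array([[4.0, -1.0],
               [-1.0, 3.0]])
KG = np.array([[-1.0, 0.0],
               [0.0, -0.5]])

# det(KE + lam*KG) = 0  <=>  KE u = lam * (-KG) u, a generalized eigenproblem.
# Since -KG is invertible here, reduce it to a standard eigenproblem:
lams = np.linalg.eigvals(np.linalg.solve(-KG, KE))

# Lowest positive eigenvalue = critical load multiplier lam1
lam1 = min(val.real for val in lams if val.real > 0)
print(f"critical load multiplier lam1 = {lam1:.4f}")
```

In ANSYS® this eigenproblem is solved with the Lanczos method; a dense solver suffices for illustration at this scale.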
Using this technique, features such as initial imperfections, plastic behavior, etc., can be included in the model. In this analysis, both geometrical and material nonlinearities are considered. A shell is said to behave nonlinearly if the deflection at any point is not proportional to the magnitude of the applied load, Budiansky [36]. The geometric nonlinearity results from nonlinear strain-displacement relations, and the material nonlinearity results from nonlinear stress-strain relations. The material nonlinearities can also be defined with different work-hardening behaviors, Avner [37]. Here, the plate material was assumed to be linear elastic-perfectly plastic (i.e., with no strain hardening), which is the most critical case for steel. An initial imperfect geometry, following the first buckling mode of an elastic eigenvalue pre-analysis of the plate, is assumed. The maximum value of the imperfection is chosen as H/2000, El-Sawy et al. [1], H being the plate width (see Fig. 3). To find the plate ultimate load, a reference load given by Py = σy·t, where σy is the material yielding strength, was applied in small increments on the plate edge parallel to the y axis. For each load increment, the standard Newton-Raphson method was applied to determine the displacements that correspond to the equilibrium configuration of the plate, through the equations:

$\{P\}_{i+1}=\{P\}_{i}+\{\Delta P\}$ (10)

$\{\psi\}=\{P\}_{i+1}-\left\{F_{N L}\right\}$ (11)

$\left[K_{t}\right]\{\Delta U\}=\{\psi\}$ (12)

$\{U\}_{i+1}=\{U\}_{i}+\{\Delta U\}$ (13)

where [Kt] is the updated tangent stiffness matrix, {ΔU} is the displacement increment vector necessary to reach the equilibrium configuration, {FNL} is the nonlinear internal nodal forces vector and {ψ} is the out-of-balance load vector.
The vectors {U}i and {U}i+1 correspond to the displacements, while the vectors {P}i and {P}i+1 correspond to the applied external loads, at two successive equilibrium configurations of the structure. If at a certain load stage convergence cannot be achieved, that is, a finite displacement increment cannot be determined such that the out-of-balance load vector {ψ} is annulled, it means that the failure load of the structure has been reached. This occurs because, no matter how large the displacements and strains become, the stresses and internal forces cannot increase as would be required to balance the external loads. The material has reached the exhaustion of its strength capacity.

4. Constructal Design Method

The Constructal Theory was created by Adrian Bejan, in 1997, when a new geometric solution philosophy was applied to the conductive cooling of electronics, Bejan [38, 39]. These studies have a significant importance because they played a basic, starting-point role for the extension and application of Constructal Theory to problems in engineering and other branches of science, Bejan & Lorente [40] and Ghodoossi [41]. Moreover, Constructal Theory has been employed to explain deterministically the generation of shapes in nature, and the lesson taught by Bejan's Constructal Theory is: geometry matters. The principle is the same in engineering and in nature: the optimization of flow systems subjected to constraints generates shape and structure, Bejan [39]. The Constructal Law is the base of the Constructal Design method and states that for a finite-size system to persist in time (to live), its configuration must evolve in such a way as to provide easier access to the currents that flow through it. The fundamental idea is: everything that moves, whether animate or inanimate, is a flow system. All flow systems generate shape and structure in time in order to facilitate this movement across a landscape filled with resistance.
The designs we see in nature are not the result of chance. They arise naturally, spontaneously, because they enhance access to flow in time, Bejan & Zane [42]. It is well known that a major engineering goal is to improve the system configuration in order to improve performance; that is, the application of the Constructal Law is fundamental to the evolution of the system, and its application is made possible by the Constructal Design method. In the past, scientific and technical knowledge combined with practice and intuition guided engineers in the design of man-made systems for specific purposes. Later, with the advent of computational tools, it became possible to simulate and evaluate several engineering flow architectures formed by a large number of degrees of freedom. However, while the performance of the system was analyzed and evaluated in a scientific way, the system design remained in the status of an art, Bejan & Lorente [43]. The Constructal Design method therefore guides the designer toward flow architectures that have greater and greater global performance for the specified flow access conditions (fluid flow, heat flow, flow of stresses); in other words, the Constructal Design method is about the optimal distribution of imperfection. This natural tendency of flowing with better and better configurations is the essence of the Constructal Law, Bejan & Lorente [40]. So, in order to apply this philosophy, the Constructal Design method needs one or more degrees of freedom (DOF) and constraints to achieve an objective function. To do so, considering the plates with diamond (Fig. 5a), longitudinal hexagonal (Fig. 5b), transversal hexagonal (Fig. 5c), elliptical (Fig. 5d) and rectangular (Fig. 5e) cutouts, the DOF H0/L0 is free to vary, respecting the vertical limit of H - H0 = 200 mm and the horizontal limit of L - L0 = 200 mm, while the DOF H/L assumes the two values 0.5 (H = 1000 mm and L = 2000 mm) and 1.0 (H = L = 1414.2 mm); together with the constant thickness t, these dimensions define the plate slenderness.
To allow an adequate and consistent comparison among the different hole types, a constraint called the hole volume fraction, given by the ratio between the hole volume (V0) and the total plate volume (V) (without perforation), is also taken into account, with a value of 0.20, being defined for the diamond, longitudinal hexagonal, transversal hexagonal, elliptical, and rectangular cutouts, respectively, by:

$\varphi=\frac{V_{0}}{V}=\frac{\left(H_{0} L_{0} t\right) / 2}{H L t}=\frac{H_{0} L_{0}}{2 H L}$ (14)

$\varphi=\frac{V_{0}}{V}=\frac{H_{0}\left(L_{1}+L_{2}\right) t}{H L t}=\frac{H_{0}\left(L_{1}+L_{2}\right)}{H L}$ (15)

$\varphi=\frac{V_{0}}{V}=\frac{L_{0}\left(H_{1}+H_{2}\right) t}{H L t}=\frac{L_{0}\left(H_{1}+H_{2}\right)}{H L}$ (16)

$\varphi=\frac{V_{0}}{V}=\frac{\left(\pi H_{0} L_{0} t\right) / 4}{H L t}=\frac{\pi H_{0} L_{0}}{4 H L}$ (17)

$\varphi=\frac{V_{0}}{V}=\frac{H_{0} L_{0} t}{H L t}=\frac{H_{0} L_{0}}{H L}$ (18)

where π is the mathematical constant; H0 and L0 are the characteristic dimensions of the hole in the y and x directions (see Fig. 5), respectively; and H, L and t are the height, length and thickness of the plate, respectively. The objective function is to maximize the buckling limit stress and hence to define the optimal hole geometry (H0/L0) for each perforated plate. To normalize the obtained results for the critical stress (elastic buckling) and the ultimate stress (elasto-plastic buckling), the Normalized Limit Stress (NLS) was adopted, defined by:

$N L S=\frac{\sigma_{c r}}{\sigma_{y}}$ (19)

$N L S=\frac{\sigma_{u}}{\sigma_{y}}$ (20)

where σy is the yielding strength of the plate material, 250 MPa for the A-36 steel adopted in this work.

(a) Diamond; (b) longitudinal hexagonal; (c) transversal hexagonal; (d) elliptical; (e) rectangular

Figure 5.
Plate with a centered cutout

To verify the computational model used to solve the plate elastic buckling problem, its results were compared with the critical load obtained through the analytical solution, Eq. (1). A steel plate (E = 210 GPa and v = 0.3) without perforation and with H = 1000 mm, L = 2000 mm and t = 10 mm (see Fig. 3) was used, discretized by a regular mesh generated with SHELL93 elements having a maximum size of 20 mm. The results obtained were 753.74 kN/m with the computational model and 759.20 kN/m with the analytical solution, a difference of only -0.72%, verifying the numerical model. For the plate elasto-plastic buckling problem, the experimental results of El-Sawy et al. [1] were used as reference to validate the numerical model. Considering Fig. 3, a square steel plate simply supported at its edges (H = 1000 mm, L = 1000 mm, t = 20 mm, E = 210 GPa, v = 0.3 and σy = 350 MPa) with a circular central hole (H0 = L0 = 300 mm) was employed. Its discretization was performed with quadrangular SHELL93 elements with a maximum dimension of 20 mm. The experimental result obtained by El-Sawy et al. [1] for the rupture stress of the plate was σu = 213.50 MPa, while the present numerical simulation gave σu = 217.00 MPa, a difference of approximately 1.61%, which allows validation of the computational model. After verification and validation of the computational models, the earlier defined cases were numerically simulated. Figures 6, 7, 8, 9 and 10 show the results for the elastic buckling and elasto-plastic buckling of the plates with H/L = 0.5 having, respectively, a centered diamond (Fig. 6), longitudinal hexagonal (Fig. 7), transversal hexagonal (Fig. 8), elliptical (Fig. 9) and rectangular (Fig. 10) cutout. In these figures the shape of each hole changes as a function of the DOF H0/L0.

Figure 6. Rectangular plate with diamond hole

Figure 7. Rectangular plate with longitudinal hexagonal hole

Figure 8.
Rectangular plate with transversal hexagonal hole

Figure 9. Rectangular plate with elliptical hole

Figure 10. Rectangular plate with rectangular hole

From Figs. 6 to 10, in a general way, it can be said that for small values of the ratio H0/L0 there is a predominance of elastic buckling, while for larger values elasto-plastic buckling becomes dominant. The transition between elastic and elasto-plastic buckling occurs at a specific point, defined in terms of the DOF H0/L0 by the values 0.55, 0.55, 0.70, 0.64 and 0.70, respectively, for the diamond (Fig. 6), longitudinal hexagonal (Fig. 7), transversal hexagonal (Fig. 8), elliptical (Fig. 9) and rectangular (Fig. 10) holes. Therefore, Fig. 11 shows the limit curves for the maximum NLS which each perforated plate can support without suffering buckling. One can note in Fig. 11 that the maximum global NLS for the plate with H/L = 0.5 was obtained with the rectangular cutout, with (H0/L0)opt = 0.70 and a (NLS)max value of 0.36. If this best case is compared with the lowest maximum NLS obtained (diamond hole), an improvement of 37.25% was achieved due only to the cutout type. However, the rectangular perforation is not the best hole type globally, because in the H0/L0 range between 0.25 and 0.66 the superior plate performance was observed for the longitudinal hexagonal perforation. In addition, for specific values of H0/L0 the elliptical and diamond holes also presented superior performance compared with the rectangular cutout.

Figure 11. Elastic buckling and elasto-plastic buckling of rectangular perforated plates

In Table 1, considering only the elasto-plastic buckling of the plates with H/L = 0.5, a comparison is presented between the values of the optimal geometry and maximum NLS and those of the worst geometry and minimum NLS for each hole type.
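All geometries compared here share the same volume fraction φ = 0.20, so each value of H0/L0 fixes the hole dimensions. As an aside, Eqs. (17) and (18) can be inverted to recover H0 and L0 for the two simplest cases (elliptical and rectangular), including the edge-distance constraint stated earlier; the helper function below is illustrative, not from the original study:

```python
import math

H, L = 1.0, 2.0   # plate height and length [m] (H/L = 0.5 case)
phi = 0.20        # hole volume fraction

def hole_dims(phi, ratio, shape, H=H, L=L):
    """Return (H0, L0) in meters for a given volume fraction and H0/L0 ratio.

    Eq. (17): phi = pi*H0*L0 / (4*H*L)   (elliptical)
    Eq. (18): phi = H0*L0 / (H*L)        (rectangular)
    """
    area = {"elliptical": 4.0 * phi * H * L / math.pi,   # = H0 * L0
            "rectangular": phi * H * L}[shape]
    L0 = math.sqrt(area / ratio)
    H0 = ratio * L0
    return H0, L0

for shape in ("elliptical", "rectangular"):
    H0, L0 = hole_dims(phi, ratio=1.0, shape=shape)
    # Constraints from the text: H - H0 >= 200 mm and L - L0 >= 200 mm
    feasible = (H - H0 >= 0.200) and (L - L0 >= 0.200)
    print(f"{shape}: H0 = {H0*1e3:.1f} mm, L0 = {L0*1e3:.1f} mm, feasible = {feasible}")
```

Each H0/L0 value swept in Figs. 6 to 10 corresponds to one such pair of hole dimensions.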
It is observed that improvements of around 60.00%, 52.17%, 70.00%, 75.00% and 80.00% can be obtained, respectively, for the diamond, longitudinal hexagonal, transversal hexagonal, elliptical and rectangular cutouts, if the adequate hole geometry is employed.

Table 1. Comparison between best and worst shape for each hole type for the plate with H/L = 0.5 (columns: Hole Type; (H0/L0)opt; (NLS)max; H0/L0; NLS; Difference %; rows: Diamond; Longitudinal Hexagonal; Transversal Hexagonal; Elliptical; Rectangular)

Figures 12, 13, 14, 15 and 16 show the von Mises stress distribution in the studied rectangular plates (H/L = 0.5), considering the cases presented in Table 1, for the diamond, longitudinal hexagonal, transversal hexagonal, elliptical and rectangular holes, respectively.

Figure 12. Stress distribution in plates with diamond hole: (a) optimal shape and (b) worst shape

Figure 13. Stress distribution in plates with longitudinal hexagonal hole: (a) optimal shape and (b) worst shape

Figure 14. Stress distribution in plates with transversal hexagonal hole: (a) optimal shape and (b) worst shape

Figure 15. Stress distribution in plates with elliptical hole: (a) optimal shape and (b) worst shape

Figure 16. Stress distribution in plates with rectangular hole: (a) optimal shape and (b) worst shape

It is possible to note in Figs. 12 to 16 that the optimal cutout shapes promote a better distribution of the maximum stress, i.e., there are more regions subjected to the maximum stress in the optimal geometries, in agreement with the Constructal principle of the optimal distribution of imperfections. Now, the NLS variation is depicted as a function of the DOF H0/L0 for the square plates (H/L = 1.0) with diamond (Fig. 17), longitudinal hexagonal (Fig. 18), transversal hexagonal (Fig. 19), elliptical (Fig. 20) and rectangular (Fig. 21) perforations.

Figure 17. Square plate with diamond hole

Figure 18. Square plate with longitudinal hexagonal hole

Figure 19. Square plate with transversal hexagonal hole

Figure 20.
Square plate with elliptical hole

Figure 21. Square plate with rectangular hole

The H0/L0 values that define the transition from elastic to elasto-plastic buckling are 1.30, 1.68, 1.98, 1.94 and 2.23 for the plates with diamond (Fig. 17), longitudinal hexagonal (Fig. 18), transversal hexagonal (Fig. 19), elliptical (Fig. 20) and rectangular (Fig. 21) holes. From Figs. 17 to 21, the NLS limit curves to avoid the occurrence of buckling can be defined by adding the elastic behavior portion (to the left of the intersection point) and the elasto-plastic behavior portion (to the right of the intersection point) for each hole type, as can be seen in Fig. 22. As already observed for the plates with H/L = 0.5 (see Fig. 11), the rectangular hole leads to the best global performance, with (H0/L0)opt = 2.23 and (NLS)max = 0.25. This geometry has a mechanical behavior approximately 64% better than the best case among the studied plates with diamond holes.

Figure 22. Elastic buckling and elasto-plastic buckling of perforated square plates

Again, in some specific regions the rectangular hole does not lead to the superior performance, despite having reached the highest NLS level. Diamond (1.15 ≤ H0/L0 ≤ 1.40), hexagonal (1.40 ≤ H0/L0 ≤ 1.90), and elliptical (1.90 ≤ H0/L0 ≤ 2.15) perforations can be more efficient depending on the H0/L0 value. In Table 2, considering only the elasto-plastic buckling behavior and for each hole type, the optimized shape and the maximum NLS value are confronted with the worst shape and the minimum NLS value. It is observed that, for the square plates, the variation between maximum and minimum NLS values can generate performance improvements of 20.00%, 20.00%, 46.67%, 64.29% and 78.57%, respectively, for the diamond, longitudinal hexagonal, transversal hexagonal, elliptical and rectangular hole types.

Table 2. Comparison between best and worst shape for each hole type for the plate with H/L = 1.0.
For the cases shown in Table 2, the von Mises stress distributions are illustrated in Fig. 23 (diamond), Fig. 24 (longitudinal hexagonal), Fig. 25 (transversal hexagonal), Fig. 26 (elliptical), and Fig. 27 (rectangular).

Figure 23. Stress distribution in plates with diamond hole: (a) optimal shape and (b) worst shape

Figure 24. Stress distribution in plates with longitudinal hexagonal hole: (a) optimal shape and (b) worst shape

Figure 25. Stress distribution in plates with transversal hexagonal hole: (a) optimal shape and (b) worst shape

Figure 26. Stress distribution in plates with elliptical hole: (a) optimal shape and (b) worst shape

Figure 27. Stress distribution in plates with rectangular hole: (a) optimal shape and (b) worst shape

As noted for the plates with H/L = 0.5, the optimized geometries have more regions where the maximum NLS is reached, totaling a larger area subjected to the maximum stress. Therefore, the principle of the optimal distribution of imperfections is respected for these plates, leading to a superior performance. The present work shows that the hole type and shape have a fundamental importance in the definition of the buckling behavior of perforated plates. The adequate choice of hole type allows a superior performance to be obtained; i.e., removing the same amount of material from the plate, it is possible to improve its performance only by the correct choice of hole type. In addition, for all studied hole types, the shape variation promoted by means of the DOF H0/L0 is responsible for defining whether the buckling will be elastic (linear) or elasto-plastic (nonlinear). For lower values of H0/L0 elastic buckling occurs, while elasto-plastic buckling happens for higher values of the ratio H0/L0. Hence, there is an intersection point between the curves that define the elastic and elasto-plastic plate buckling.
This point determines the transition between these mechanical behaviors, and it is usually also where the maximum stress level is reached among all studied values of the DOF H0/L0 for each hole type. Another important observation is that there is no global optimal geometry. Depending on the H0/L0 value, a particular hole type presents the best performance. This is an important aspect if the cutout in the plate needs to be made with a specific geometry. For each hole type, considering only the elasto-plastic plate buckling behavior and comparing the best and worst geometries, one can note that the decrease in the area between the hole and the upper and lower edges of the plate leads to the collapse of the structure. This fact can be explained by the decrease of the plate resistant area. In addition, the best geometry always has a better stress distribution, in accordance with the Constructal principle of the optimal distribution of imperfections, justifying its superior performance. In general, it can be said that the studied rectangular perforated plates (H/L = 0.5) and square perforated plates (H/L = 1.0) have a similar buckling behavior. However, the rectangular perforated plates can support a more elevated stress level than the square perforated plates. As the total volume of plate material is the same in both cases, it is recommended, where possible, to use the rectangular perforated plate. Therefore, the Constructal Design method proved able to analyze the influence of the geometric configuration on the plate buckling problem, allowing the definition, for each studied hole type, of a limit stress curve as a function of the hole shape variation which avoids the occurrence of the buckling phenomenon. The authors acknowledge FURG and UFRGS for the support. D. Helbig thanks CNPq for the doctoral scholarship. L. A. O. Rocha, L. A. Isoldi, and E. D. dos Santos thank CNPq for the research grants. 1. El-Sawy, K.
1. El-Sawy, K.M., Nazmy, A.S., Martini, M.I., "Elasto-plastic buckling of perforated plates under uniaxial compression," Thin-Walled Structures, vol. 42, 1083–1101, 2004. DOI: 10.1016/j.tws.2004.03.002.
2. Iyengar, N.G.R., Structural Stability of Columns and Plates, Ellis Horwood Limited, 1988.
3. Narayanan, R., Chow, F.Y., "Ultimate capacity of uniaxially compressed perforated plates," Thin-Walled Structures, vol. 2, 241–264, 1984. DOI: 10.1016/0263-8231(84)90021-1.
4. Shakerley, T.M., Brown, C.J., "Elastic buckling of plates with eccentrically positioned rectangular perforations," International Journal of Mechanical Sciences, vol. 38, 825–838, 1996. DOI: 10.1016/0020-7403(95)00107-7.
5. El-Sawy, K.M., Nazmy, A.S., "Effect of aspect ratio on the elastic buckling of uniaxially loaded plates with eccentric holes," Thin-Walled Structures, vol. 39, 983–998, 2001. DOI: 10.1016/S0263-8231(01)00040-4.
6. El-Sawy, K.M., Martini, M.I., "Elastic stability of bi-axially loaded rectangular plates with a single circular hole," Thin-Walled Structures, vol. 45, 122–133, 2007. DOI: 10.1016/j.tws.2006.11.002.
7. Moen, C.D., Schafer, B.W., "Elastic buckling of thin plates with holes in compression or bending," Thin-Walled Structures, vol. 47, 1597–1607, 2009. DOI: 10.1016/j.tws.2009.05.001.
8. Rocha, L.A.O., Real, M.V., Correia, A.L.G., Vaz, J., dos Santos, E.D., Isoldi, L.A., "Geometric optimization based on the constructal design of perforated thin plates subject to buckling," Computational Thermal Sciences, vol. 4, n. 2, 119–129, 2012. DOI: 10.1615/ComputThermalScien.2012005125.
9. Rocha, L.A.O., Isoldi, L.A., Real, M.V., dos Santos, E.D., Correia, A.L.G., Lorenzini, G., Biserni, C., "Constructal design applied to the elastic buckling of thin plates with holes," Central European Journal of Engineering, vol. 3, 475–483, 2013. DOI: 10.2478/s13531-013-0105-x.
10. Isoldi, L.A., Real, M.V., Correia, A.L.G., Vaz, J., dos Santos, E.D., Rocha, L.A.O., "The flow of stresses: constructal design of perforated plates subjected to tension or buckling," in Rocha, L.A.O., Lorente, S., Bejan, A., eds., Constructal Law and the Unifying Principle of Design, 195–217, Springer-Verlag, 2013. DOI: 10.1007/978-1-4614-5049-8_12.
11. Narayanan, R., Chow, F.Y., "Ultimate capacity of uniaxially compressed perforated plates," Thin-Walled Structures, vol. 2, 241–264, 1984. DOI: 10.1016/0263-8231(84)90021-1.
12. Azizan, Z.G., Roberts, T.M., "Buckling and elasto-plastic collapse of perforated plates," Proceedings of the Michael R. Horne Conference on Instability and Plastic Collapse of Steel Structures, Manchester, 322–328, 1983.
13. Yettram, A.L., Brown, C.J., "The elastic stability of square perforated plates," Computers and Structures, vol. 21, n. 6, 1267–1272, 1985. DOI: 10.1016/0143-974X(87)90014-9.
14. Jwalamalini, R., Sundaravadivelu, R., Vendhan, C.P., Ganapathy, C., "Stability of initially stressed square plates with square openings," Marine Structures, vol. 5, n. 1, 71–84, 1992. DOI: 10.1016/0951-8339(92)90034-M.
15. Madasamy, C.M., Kalyanaraman, V., "Analysis of plated structures with rectangular cutouts and internal supports using the spline finite strip method," Computers and Structures, vol. 52, n. 2, 277–286, 1994. DOI: 10.1016/0045-7949(94)90280-1.
16. Durban, D., Zuckerman, Z., "Elastoplastic buckling of rectangular plates in biaxial compression/tension," International Journal of Mechanical Sciences, vol. 41, 751–765, 1999. DOI: 10.1016/S0020-7403(98)00055-1.
17. Shanmugam, N.E., Thevendran, V., Tan, Y.H., "Design formula for axially compressed perforated plates," Thin-Walled Structures, vol. 34, 1–20, 1999. DOI: 10.1016/S0263-8231(98)00052-4.
18. Paik, J.K., Thayamballi, A.K., Kim, B.J., "Advanced ultimate strength formulations for ship plating under combined biaxial compression/tension, edge shear, and lateral pressure loads," Marine Technology, vol. 38, n. 1, 9–25, 2001.
19. Toulios, M., Caridis, P.A., "The effect of aspect ratio on the elastoplastic response of stiffened plates loaded in uniaxial edge compression," Computers and Structures, vol. 80, n. 14–15, 1317–1328, 2002. DOI: 10.1016/S0045-7949(02)00080-9.
20. Bakker, M.C.M., Rosmanit, M., Hofmeyer, H., "Post-buckling strength of uniformly compressed plates," in Camotim, D., et al., eds., Stability and Ductility of Steel Structures, Lisbon, Portugal, 2006.
21. Kumar et al., "Ultimate strength of square plate with rectangular opening under axial compression," Journal of Naval Architecture and Marine Engineering, vol. 4, 15–26, 2007. DOI: 10.3329/jname.v4i1.913.
22. Helbig, D., Real, M.V., Correia, A.L.G., dos Santos, E.D., Isoldi, L.A., Rocha, L.A.O., "Constructal design of perforated steel plates subject to linear elastic and nonlinear elasto-plastic buckling," XXXIV Iberian Latin American Congress on Computational Methods in Engineering (CILAMCE), 1–17, 2013.
23. Helbig, D., Rocha, L.A.O., dos Santos, E.D., Real, M.V., Isoldi, L.A., Silva, C.C.C., "Numerical simulation and constructal design method applied to the study of the cutout shape influence in the mechanical behavior of perforated plates subjected to buckling," XXXV Iberian Latin American Congress on Computational Methods in Engineering (CILAMCE), 2014.
24. Dawe, J.L., et al., "Inelastic buckling of steel plates," Journal of Structural Engineering, vol. 111, n. 1, 1985. DOI: 10.1061/(ASCE)0733-9445(1985)111:1(95).
25. Raviprakash, A.V., "Investigations on the ultimate strength of axially compressed thin square plates with geometrical imperfections," Ph.D. thesis, Department of Mechanical Engineering, Pondicherry University, India, 2012.
26. Ventsel, E., Krauthammer, T., Thin Plates and Shells: Theory, Analysis, and Applications, Marcel Dekker, Inc., 2001. DOI: 10.1115/1.1483356.
27. Åkesson, B., Plate Buckling in Bridges and Other Structures, Taylor & Francis, 2007.
28. Yoo, C.H., Lee, S.C., Stability of Structures: Principles and Applications, Elsevier, 2011.
29. Vinson, J.R., Plate and Panel Structures of Isotropic, Composite and Piezoelectric Materials, Including Sandwich Construction, Springer, 2005. DOI: 10.1007/1-4020-3111-4.
30. Yamaguchi, E., "Basic theory of plates and elastic stability," in Chen, W.-F., ed., Structural Engineering Handbook, CRC Press, 1999. DOI: 10.1201/9781439834350.ch1.
31. Moaveni, S., Finite Element Analysis: Theory and Application with ANSYS®, Prentice Hall, 1999.
32. Real, M.V., Isoldi, L.A., "Finite element buckling analysis of uniaxially loaded plates with holes," IV Southern Conference on Computational Modeling (MCSUL), 69–73, 2010.
33. ANSYS®, User's Manual, Swanson Analysis System Inc., 2005.
34. Madenci, E., Guven, I., The Finite Element Method and Applications in Engineering Using ANSYS®, Springer, 2006. DOI: 10.1007/978-1-4899-7550-8.
35. Przemieniecki, J.S., Theory of Matrix Structural Analysis, Dover Publications, 1985.
36. Budiansky, B., "Notes on nonlinear shell theory," Journal of Applied Mechanics, vol. 35, n. 2, 393–401, 1968. DOI: 10.1115/1.3601208.
37. Avner, H.S., Introduction to Physical Metallurgy, Tata McGraw Hill Publishing Company, 2001.
38. Bejan, A., "Constructal-theory network of conducting paths for cooling a heat generating volume," International Journal of Heat and Mass Transfer, vol. 40, 799–816, 1997. DOI: 10.1016/0017-9310(96)00175-5.
39. Bejan, A., Shape and Structure, from Engineering to Nature, Cambridge University Press, 2000. DOI: 10.3390/e3050293.
40. Bejan, A., Lorente, S., Design with Constructal Theory, Wiley, 2008. DOI: 10.1002/9780470432709.
41. Ghodoossi, L., Egrican, N., "Conductive cooling of triangular shaped electronics using constructal theory," Energy Conversion and Management, vol. 93, n. 8, 4922–4929, 2003. DOI: 10.1016/S0196-8904(03)00190-0.
42. Bejan, A., Zane, J.P., Design in Nature, Doubleday, 2012.
43. Bejan, A., Lorente, S., "The constructal law (La Loi Constructale)," International Journal of Heat and Mass Transfer, vol. 49, 445–445, 2006.